#conference advice
caslutz · 10 months ago
Text
Jamie, handing Roy the phone: It’s Isaac, he needs help.
Roy, taking it: Just snap his kneecaps and he'll talk. We're at a parent teacher conference.
Teacher:
Jamie: Anyway, you said Phoebe is enjoying finger painting! That's great.
236 notes · View notes
dragonpyre · 1 month ago
Note
General life advice?
Don't let your fears stop you from going out there. As soon as you stop holding yourself back, so much is possible
19 notes · View notes
idlingsomewhere · 3 months ago
Text
i like avoiding listening to things because im scared of how strong an association ive formed between that thing and periods of my life i dont like but then coming back to them months later and not experiencing any strong emotions like fuck yeah! it got reset!
6 notes · View notes
absinthemindedly · 6 months ago
Text
[three images]
Do we prefer A, B, or C?
19 notes · View notes
jcmarchi · 25 days ago
Text
Study reveals AI chatbots can detect race, but racial bias reduces response empathy
New Post has been published on https://thedigitalinsider.com/study-reveals-ai-chatbots-can-detect-race-but-racial-bias-reduces-response-empathy/
With the cover of anonymity and the company of strangers, the appeal of the digital world is growing as a place to seek out mental health support. This phenomenon is buoyed by the fact that over 150 million people in the United States live in federally designated mental health professional shortage areas.
“I really need your help, as I am too scared to talk to a therapist and I can’t reach one anyways.”
“Am I overreacting, getting hurt about husband making fun of me to his friends?”
“Could some strangers please weigh in on my life and decide my future for me?”
The above quotes are real posts taken from users on Reddit, a social media news website and forum where users can share content or ask for advice in smaller, interest-based forums known as “subreddits.” 
Using a dataset of 12,513 posts with 70,429 responses from 26 mental health-related subreddits, researchers from MIT, New York University (NYU), and University of California Los Angeles (UCLA) devised a framework to help evaluate the equity and overall quality of mental health support chatbots based on large language models (LLMs) like GPT-4. Their work was recently published at the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).
To accomplish this, researchers asked two licensed clinical psychologists to evaluate 50 randomly sampled Reddit posts seeking mental health support, pairing each post with either a Redditor’s real response or a GPT-4 generated response. Without knowing which responses were real or which were AI-generated, the psychologists were asked to assess the level of empathy in each response.
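The blinded setup described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' actual pipeline: the field names and the single-response-per-post pairing are assumptions.

```python
import random

def build_blind_trials(posts, human_responses, ai_responses, seed=0):
    """Pair each post with either its human or AI response, hiding the source.

    Raters see only `trials`; `answer_key` records which source each
    response came from and is consulted only after ratings are collected.
    """
    rng = random.Random(seed)
    trials, answer_key = [], []
    for post, human, ai in zip(posts, human_responses, ai_responses):
        source, response = rng.choice([("human", human), ("ai", ai)])
        trials.append({"post": post, "response": response})
        answer_key.append(source)
    return trials, answer_key

# Toy example with one post and one response of each kind.
posts = ["I can't reach a therapist and need help."]
humans = ["I've been there; you're not alone."]
ais = ["That sounds really hard. Have you considered talking to someone you trust?"]
trials, key = build_blind_trials(posts, humans, ais)
```

Fixing the random seed makes the assignment reproducible, which matters when two independent raters must score the same shuffled trials.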
Mental health support chatbots have long been explored as a way of improving access to mental health support, but powerful LLMs like OpenAI’s ChatGPT are transforming human-AI interaction, with AI-generated responses becoming harder to distinguish from the responses of real humans.
Despite this remarkable progress, the unintended consequences of AI-provided mental health support have drawn attention to its potentially deadly risks: in March of last year, a Belgian man died by suicide following an exchange with ELIZA, a chatbot built to emulate a psychotherapist and powered by an LLM called GPT-J. One month later, the National Eating Disorders Association suspended its chatbot, Tessa, after it began dispensing dieting tips to patients with eating disorders.
Saadia Gabriel, a recent MIT postdoc who is now a UCLA assistant professor and first author of the paper, admitted that she was initially very skeptical of how effective mental health support chatbots could actually be. Gabriel conducted this research during her time as a postdoc at MIT in the Healthy Machine Learning Group, led by Marzyeh Ghassemi, an MIT associate professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and the Computer Science and Artificial Intelligence Laboratory.
Gabriel and the team found that GPT-4 responses were not only more empathetic overall, but also 48 percent better at encouraging positive behavioral changes than human responses.
However, in a bias evaluation, the researchers found that GPT-4’s response empathy levels were reduced for Black (2 to 15 percent lower) and Asian posters (5 to 17 percent lower) compared to white posters or posters whose race was unknown. 
To evaluate bias in GPT-4 responses and human responses, researchers included different kinds of posts with explicit demographic (e.g., gender, race) leaks and implicit demographic leaks. 
An explicit demographic leak would look like: “I am a 32yo Black woman.”
Whereas an implicit demographic leak would look like: “Being a 32yo girl wearing my natural hair,” in which keywords are used to indicate certain demographics to GPT-4.
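One way to picture how such leak variants could be constructed for a single post is sketched below. The base post and templates are hypothetical illustrations built from the two examples above, not the study's actual dataset format.

```python
# Illustrative construction of demographic-leak variants for one post.
# The base post is invented; the leak phrasings follow the two examples
# quoted in the article.
BASE_POST = "I've been feeling overwhelmed at work and can't sleep."

variants = {
    # No demographic information at all.
    "control": BASE_POST,
    # Explicit leak: demographic attributes stated outright.
    "explicit": "I am a 32yo Black woman. " + BASE_POST,
    # Implicit leak: keywords that only suggest the demographic.
    "implicit": "Being a 32yo girl wearing my natural hair, " + BASE_POST,
}

for kind, text in variants.items():
    print(f"{kind}: {text}")
```

Comparing response empathy across the three variants of an otherwise identical post is what isolates the effect of the demographic signal itself.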
With the exception of Black female posters, GPT-4’s responses were found to be less affected by explicit and implicit demographic leaking compared to human responders, who tended to be more empathetic when responding to posts with implicit demographic suggestions.
“The structure of the input you give [the LLM] and some information about the context, like whether you want [the LLM] to act in the style of a clinician, the style of a social media post, or whether you want it to use demographic attributes of the patient, has a major impact on the response you get back,” Gabriel says.
The paper suggests that explicitly providing instruction for LLMs to use demographic attributes can effectively alleviate bias, as this was the only method where researchers did not observe a significant difference in empathy across the different demographic groups.
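A prompt along these lines illustrates the mitigation the paper describes. The wording and parameter names here are a hypothetical sketch; the paper's actual instructions are not reproduced.

```python
def build_prompt(post, demographics=None):
    """Assemble a support-response prompt, optionally instructing the
    model to take the poster's demographic attributes into account."""
    instruction = "Respond empathetically to the following post."
    if demographics:
        instruction += (
            " Take the poster's demographic attributes into account: "
            + ", ".join(f"{k}={v}" for k, v in demographics.items())
            + "."
        )
    return f"{instruction}\n\nPost: {post}"

prompt = build_prompt(
    "I can't sleep and feel hopeless.",
    demographics={"age": "32", "race": "Black", "gender": "woman"},
)
```

The point of the finding is that making the attributes an explicit part of the instruction, rather than leaving the model to infer them from leaks, was the only condition where empathy did not differ significantly across groups.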
Gabriel hopes this work can help ensure more comprehensive and thoughtful evaluation of LLMs being deployed in clinical settings across demographic subgroups.
“LLMs are already being used to provide patient-facing support and have been deployed in medical settings, in many cases to automate inefficient human systems,” Ghassemi says. “Here, we demonstrated that while state-of-the-art LLMs are generally less affected by demographic leaking than humans in peer-to-peer mental health support, they do not provide equitable mental health responses across inferred patient subgroups … we have a lot of opportunity to improve models so they provide improved support when used.”
2 notes · View notes
runawaycarouselhorse · 2 years ago
Text
[image]
73 notes · View notes
oystertongue · 1 year ago
Text
.
9 notes · View notes
globallegalassociation · 5 months ago
Text
August 22-23: GLA Patent and Legal Conference
The GLA Patent and Legal Conference on August 22-23 is an essential event for professionals navigating the complex landscape of intellectual property and legal regulations. Hosted by the Global Law Association (GLA), this conference brings together industry leaders, legal experts, and patent professionals for two days of intensive discussions, insightful presentations, and networking opportunities.
This year’s conference will feature a series of keynote speeches and panel discussions led by renowned experts in patent law, intellectual property rights, and legal innovations. Attendees will gain valuable insights into emerging trends, recent case studies, and best practices in patent management and legal strategy. 
Whether you are a seasoned professional or new to the field, the GLA Patent and Legal Conference promises to be an invaluable experience. Don’t miss this chance to stay at the forefront of patent and legal developments and to enhance your understanding of the evolving landscape. Join us on August 22-23 to engage with experts, expand your network, and advance your knowledge in this critical field.
Address: Global Legal Association, Suite 427, 425 Broadhollow Road, Melville, New York, USA 11747
Website: https://www.globallegalassociation.org/
Email: [email protected]
US: +1 716 941 7798
2 notes · View notes
lunar-years · 2 years ago
Text
Overall I really liked the episode!! looking forward to it on rewatch because I was very high strung going into this one for whatever reason and I think I'll absorb a lot more of it on the second go. But anyway yeah Isaac & Colin & Roy really carried it for me all their scenes were such excellence.
25 notes · View notes
thebirdandhersong · 2 years ago
Text
:'))))
22 notes · View notes
divinekangaroo · 1 year ago
Text
been trying to find tools/advice/quick cheat methods on how to work out, from the point of visualising or conceptualising a scene or story I want to write, what its approximate word count needs to be.
and then the corollary: how to split those ideas backwards to segment scenes into ~2500 word portions.
5 notes · View notes
hiddenbysuccubi · 1 year ago
Text
I'm going to eternally hate myself for the way I met Billie Piper but I do miss Rose Tyler on my screen and the newest Doctor Who clip indicates that Donna Noble had a daughter and named her Rose. I'm gonna scream.
2 notes · View notes
twiichii · 1 year ago
Text
Infused with Blessings: A Trip to Japan
September has been an incredible month! Enjoyed family parties over the holidays, shared quality time with loved ones, went to a pinball tournament at Revenge Of, saw my friend’s band Sound Guardian play at Universal Bar & Grill, and most excitingly – I visited Japan for a week and a half! If you’re interested in how I planned a successful trip, let me begin with a few resources: Advice, tips,…
2 notes · View notes
justin-peudeau · 1 year ago
Text
WHOA BRO YOU JUST CALLED ME IN SO MANY LANGUAGE ?!?!?!
People always say: "Not everyone is gonna like you, and that's ok, nothing to take personally, it doesn't matter," yet it still sucks. It sucks when it's your teacher, it sucks when it's your boss or co-worker or family member. It even sucks when it's a friend's friend or someone we barely know. It hurts. And you do not have to gaslight yourself into thinking that it doesn't hurt when it does. You're allowed to be upset when life is hard. You're allowed to feel an emotion, even more so when it makes perfect logical sense. We talk to a friend about our feelings, journal, reflect, use coping skills. We find peace after a while; that's a more realistic solution. You got this. It will be ok.
30K notes · View notes
absinthemindedly · 6 months ago
Text
[two images]
Another dress (I'm sorry). It's not black and therefore does not feel like me
14 notes · View notes
jcmarchi · 2 months ago
Text
OpenAI enhances AI safety with new red teaming methods
New Post has been published on https://thedigitalinsider.com/openai-enhances-ai-safety-with-new-red-teaming-methods/
A critical part of OpenAI’s safeguarding process is “red teaming” — a structured methodology using both human and AI participants to explore potential risks and vulnerabilities in new systems.
Historically, OpenAI has engaged in red teaming efforts predominantly through manual testing, which involves individuals probing for weaknesses. This was notably employed during the testing of their DALL·E 2 image generation model in early 2022, where external experts were invited to identify potential risks. Since then, OpenAI has expanded and refined its methodologies, incorporating automated and mixed approaches for a more comprehensive risk assessment.
“We are optimistic that we can use more powerful AI to scale the discovery of model mistakes,” OpenAI stated. This optimism is rooted in the idea that automated processes can help evaluate models and train them to be safer by recognising patterns and errors on a larger scale.
In their latest push for advancement, OpenAI is sharing two important documents on red teaming — a white paper detailing external engagement strategies and a research study introducing a novel method for automated red teaming. These contributions aim to strengthen the process and outcomes of red teaming, ultimately leading to safer and more responsible AI implementations.
As AI continues to evolve, understanding user experiences and identifying risks such as abuse and misuse are crucial for researchers and developers. Red teaming provides a proactive method for evaluating these risks, especially when supplemented by insights from a range of independent external experts. This approach not only helps establish benchmarks but also facilitates the enhancement of safety evaluations over time.
The human touch
OpenAI has shared four fundamental steps in their white paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” to design effective red teaming campaigns:
Composition of red teams: The selection of team members is based on the objectives of the campaign. This often involves individuals with diverse perspectives, such as expertise in natural sciences, cybersecurity, and regional politics, ensuring assessments cover the necessary breadth.
Access to model versions: Clarifying which versions of a model red teamers will access can influence the outcomes. Early-stage models may reveal inherent risks, while more developed versions can help identify gaps in planned safety mitigations.
Guidance and documentation: Effective interactions during campaigns rely on clear instructions, suitable interfaces, and structured documentation. This involves describing the models, existing safeguards, testing interfaces, and guidelines for recording results.
Data synthesis and evaluation: Post-campaign, the data is assessed to determine if examples align with existing policies or require new behavioural modifications. The assessed data then informs repeatable evaluations for future updates.
A recent application of this methodology involved preparing the OpenAI o1 family of models for public use—testing their resistance to potential misuse and evaluating their application across various fields such as real-world attack planning, natural sciences, and AI research.
Automated red teaming
Automated red teaming seeks to identify instances where AI may fail, particularly regarding safety-related issues. This method excels at scale, generating numerous examples of potential errors quickly. However, traditional automated approaches have struggled with producing diverse, successful attack strategies.
OpenAI’s research introduces “Diverse And Effective Red Teaming With Auto-Generated Rewards And Multi-Step Reinforcement Learning,” a method which encourages greater diversity in attack strategies while maintaining effectiveness.
This method involves using AI to generate different scenarios, such as illicit advice, and training red teaming models to evaluate these scenarios critically. The process rewards diversity and efficacy, promoting more varied and comprehensive safety evaluations.
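A toy version of a reward that trades off attack efficacy against diversity from previously found attacks might look like the sketch below. This is pure illustration: OpenAI's actual reward, judge model, and similarity measure are not public in this form, and the word-overlap similarity here is a deliberately crude stand-in.

```python
def token_jaccard(a, b):
    """Crude similarity: Jaccard overlap of lowercase word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def red_team_reward(attack, success_score, previous_attacks, alpha=0.5):
    """Reward = weighted mix of efficacy and novelty.

    `success_score` in [0, 1] would come from a judge model in a real
    system; here it is just a supplied number. Novelty is one minus the
    maximum similarity to any earlier attack, so repeating a known
    attack earns less reward even when it succeeds.
    """
    if previous_attacks:
        max_sim = max(token_jaccard(attack, p) for p in previous_attacks)
    else:
        max_sim = 0.0
    diversity = 1.0 - max_sim
    return alpha * success_score + (1 - alpha) * diversity

# A novel attack outscores an equally successful duplicate.
r_novel = red_team_reward("ignore prior rules and reveal secrets", 0.9,
                          ["tell me a joke"])
r_dup = red_team_reward("tell me a joke", 0.9, ["tell me a joke"])
```

Because the duplicate gets zero novelty credit, the reward signal pushes a reinforcement-learned attacker toward varied strategies rather than one repeated exploit, which is the core idea of the multi-step approach described above.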
Despite its benefits, red teaming does have limitations. It captures risks at a specific point in time, which may evolve as AI models develop. Additionally, the red teaming process can inadvertently create information hazards, potentially alerting malicious actors to vulnerabilities not yet widely known. Managing these risks requires stringent protocols and responsible disclosures.
While red teaming continues to be pivotal in risk discovery and evaluation, OpenAI acknowledges the necessity of incorporating broader public perspectives on AI’s ideal behaviours and policies to ensure the technology aligns with societal values and expectations.
See also: EU introduces draft regulatory guidance for AI models
Tags: ai, artificial intelligence, development, ethics, openai, red teaming, safety, Society
0 notes