#rumman chowdhury
thefugitivesaint · 1 year ago
Text
It’s one of my rare re-bloggings (breathe deep and take it in, people). Give the Rolling Stone article a read. Here’s a snippet:

“As AI has exploded into the public consciousness, the men who created them have cried crisis. On May 2, Gebru’s former Google colleague Geoffrey Hinton appeared on the front page of The New York Times under the headline: “He Warns of Risks of AI He Helped Create.” That Hinton article accelerated the trend of powerful men in the industry speaking out against the technology they’d just released into the world; the group has been dubbed the AI Doomers. Later that month, there was an open letter signed by more than 350 of them — executives, researchers, and engineers working in AI. Hinton signed it along with OpenAI CEO Sam Altman and his rival Dario Amodei of Anthropic. The letter consisted of a single gut-dropping sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

How would that risk have changed if we’d listened to Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning? Researchers — including many women of color — have been saying for years that these systems interact differently with people of color and that the societal effects could be disastrous: that they’re a fun-house-style distorted mirror magnifying biases and stripping out the context from which their information comes; that they’re tested on those without the choice to opt out; and will wipe out the jobs of some marginalized communities. Gebru and her colleagues have also expressed concern about the exploitation of heavily surveilled and low-wage workers helping support AI systems; content moderators and data annotators are often from poor and underserved communities, like refugees and incarcerated people. Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide in order to train ChatGPT on what is explicit content. Some of them take home as little as $1.32 an hour to do so.

In other words, the problems with AI aren’t hypothetical. They don’t just exist in some SkyNet-controlled, Matrix version of the future. The problems with it are already here. “I’ve been yelling about this for a long time,” Gebru says. “This is a movement that’s been more than a decade in the making.”

Edit: I failed to suggest that you, dear reader, should peruse Cathy O'Neil's 2016 book, 'Weapons of Math Destruction', which details the opaque nature of algorithms and the biases that are baked into them. As O'Neil puts it, "Algorithms are opinions embedded in code." O'Neil is careful in what kinds of algorithms she targets with her critiques and isn't offering a blanket denunciation of algorithms per se, just the large, scalable and, in her evaluation, unfair algorithms that have begun to have direct impacts on people's lives, often in destructive ways. "I worried about the separation between technical models and real people, and about the moral repercussions of that separation," O'Neil writes.
These Women Tried to Warn Us About AI
87 notes
mariacallous · 3 months ago
Text
At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms to find weaknesses in these critical systems. This “red-teaming” exercise, which also had support from the US government, took a step toward opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.
The qualifier will take place online and is open to both developers and the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies.
“The average person utilizing one of these models doesn’t really have the ability to determine whether or not the model is fit for purpose,” says Theo Skeadas, chief of staff at Humane Intelligence. “So we want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs.”
The final event at CAMLIS will split the participants into a red team trying to attack the AI systems and a blue team working on defense. Participants will use the AI 600-1 profile, part of NIST's AI risk management framework, as a rubric for measuring whether the red team is able to produce outcomes that violate the systems' expected behavior.
“NIST's ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST's Office of Emerging Technologies and a member of the US Department of Homeland Security AI safety and security board. “The ARIA team is mostly experts on sociotechnical test and evaluation, and [is] using that background as a way of evolving the field toward rigorous scientific evaluation of generative AI.”
Chowdhury and Skeadas say the NIST partnership is just one of a series of AI red team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where individuals can be rewarded for finding problems and inequities in AI models.
“The community should be broader than programmers,” Skeadas says. “Policymakers, journalists, civil society, and nontechnical people should all be involved in the process of testing and evaluating these systems. And we need to make sure that less represented groups like individuals who speak minority languages or are from nonmajority cultures and perspectives are able to participate in this process.”
81 notes
sarkos · 1 year ago
Quote
Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls “2am brain”, a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. “It’s just like baking,” she says. “You can’t force it, you can’t turn the temperature up, you can’t make it go faster. It will take however long it takes. And when it’s done baking, it will present itself.”

It was Chowdhury’s 2am brain that first coined the phrase “moral outsourcing” for a concept that, now that she is one of the leading thinkers on artificial intelligence, has become a key point in how she considers accountability and governance when it comes to the potentially revolutionary impact of AI. Moral outsourcing, she says, applies the logic of sentience and choice to AI, allowing technologists to effectively reallocate responsibility for the products they build onto the products themselves – technical advancement becomes predestined growth, and bias becomes intractable.

“You would never say ‘my racist toaster’ or ‘my sexist laptop’,” she said in a TED Talk from 2018. “And yet we use these modifiers in our language about artificial intelligence. And in doing so we’re not taking responsibility for the products that we build.” Writing ourselves out of the equation produces systematic ambivalence on par with what the philosopher Hannah Arendt called the “banality of evil” – the wilful and cooperative ignorance that enabled the Holocaust. “It wasn’t just about electing someone into power that had the intent of killing so many people,” she says. “But it’s that entire nations of people also took jobs and positions and did these horrible things.”
‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI | Artificial intelligence (AI) | The Guardian
21 notes
ladythatsmyskull · 1 year ago
Link
5 notes
jrgsanta · 2 months ago
Text
The Human Faces of AI - Rumman Chowdhury
The Human Faces of AI - Rumman Chowdhury. With this post I want to help keep in mind that behind every great achievement there are people who should not fade into the background. #humanOverIA #IA #VisualThinking #pensamientovisual
🤖 ✨ We live in times of change in which AI seems to be at the center of everything. However, we often forget that behind all the AI placed at our service there are people who design and build it. With this new challenge, “The Human Faces of AI,” I want to add my small grain of sand toward the goal of not forgetting that behind every great…
0 notes
iotstv · 5 months ago
Text
Your right to repair AI systems
0 notes
isearchgoood · 5 months ago
Text
Your right to repair AI systems | Rumman Chowdhury
https://www.ted.com/talks/rumman_chowdhury_your_right_to_repair_ai_systems?rss=172BB350-0205&utm_source=dlvr.it&utm_medium=tumblr
0 notes
gerdfeed · 5 months ago
Quote
"No one wants to build a product on a model that makes things up," AI ethics expert Rumman Chowdhury cautioned Axios this week.
Experts Concerned by Signs of AI Bubble
0 notes
infradapt · 1 year ago
Text
Senators Call for Government Oversight and Licensing of ChatGPT-Level AI
A bipartisan pair of senators, Richard Blumenthal, a Democrat, and Josh Hawley, a Republican, have proposed that a new regulatory body be established by the US government to oversee artificial intelligence (AI). This body would also limit the development of language models, such as OpenAI’s GPT-4, to licensed companies. The senators’ proposal, which was unveiled as a legislative framework, is intended to guide future laws and influence pending legislation.
The proposed framework suggests that the development of facial recognition and other high-risk AI applications should require a government license. Companies seeking such a license would need to conduct pre-deployment tests on AI models for potential harm, report post-launch issues, and allow independent third-party audits of their AI models. The framework also calls for companies to publicly disclose the training data used to develop an AI model. Moreover, it proposes that individuals adversely affected by AI should have the legal right to sue the company responsible for its creation.
As discussions in Washington over AI regulation intensify, the senators’ proposal could have significant impact. In the coming week, Blumenthal and Hawley will preside over a Senate subcommittee hearing focused on holding corporations and governments accountable for the deployment of harmful or rights-violating AI systems. Expected to testify at the hearing are Microsoft President Brad Smith and Nvidia’s chief scientist, William Dally.
The following day, Senator Chuck Schumer will convene the first in a series of meetings to explore AI regulation, a task Schumer has described as “one of the most difficult things we’ve ever undertaken.” Tech executives with a vested interest in AI, such as Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, comprise about half of the nearly 24-person guest list. Other attendees include trade union presidents from the Writers Guild and AFL-CIO federation, as well as researchers dedicated to preventing AI from infringing on human rights, such as Deb Raji from UC Berkeley and Rumman Chowdhury, the CEO of Humane Intelligence and former ethical AI lead at Twitter.
Anna Lenhart, a former AI ethics initiative leader at IBM and current PhD candidate at the University of Maryland, views the senators’ legislative framework as a positive development after years of AI experts testifying before Congress on the need for AI regulation. However, Lenhart is uncertain about how a new AI oversight body could encompass the wide range of technical and legal expertise necessary to regulate technology used in sectors as diverse as autonomous vehicles, healthcare, and housing.
The concept of using licenses to limit who can develop powerful AI systems has gained popularity in both the industry and Congress. OpenAI CEO Sam Altman suggested AI developer licensing during his Senate testimony in May, a regulatory approach that could potentially benefit his company. A bill introduced last month by Senators Lindsey Graham and Elizabeth Warren also calls for tech companies to obtain a government AI license, but it only applies to digital platforms of a certain size.
However, not everyone in the AI or policy field supports government licensing for AI development. The proposal has been criticized by the libertarian-leaning political campaign group Americans for Prosperity, which worries it could hamper innovation, and by the digital rights nonprofit Electronic Frontier Foundation, which warns of potential industry capture by wealthy or influential companies. Perhaps in response to these concerns, the legislative framework proposed by Blumenthal and Hawley recommends robust conflict of interest rules for the new AI regulatory body.
The proposed framework also leaves a few questions unresolved. It is still undetermined whether AI oversight would be handled by a freshly established federal agency or a department within an existing one, and the senators have not yet defined the standards that would be used to identify the high-risk use cases that necessitate a development license.
Michael Khoo, the director of the climate disinformation program at the environmental nonprofit Friends of the Earth, says the new proposal appears to be a promising first step, but that additional details are required for an adequate evaluation of its ideas. His organization is part of a group of environmental and tech accountability organizations that is appealing to lawmakers, through a letter to Schumer and a mobile billboard scheduled to circle Congress in the coming week, to prevent energy-intensive AI projects from exacerbating climate change.
Khoo concurs with the legislative framework’s insistence on documentation and public disclosure of negative impacts, but argues that the industry should not be allowed to determine what is considered detrimental. He also encourages Congress members to require businesses to disclose the energy consumption involved in training and deploying AI systems, and to consider the risk of misinformation proliferation when assessing the impact of AI models.
The legislative framework indicates that Congress is contemplating a more stringent approach to AI regulation compared to the federal government’s previous efforts, which included a voluntary risk-management framework and a non-binding AI bill of rights. In July, the White House reached a voluntary agreement with eight major AI companies, including Google, Microsoft, and OpenAI, but also assured that stricter regulations are on the horizon. During a briefing on the AI company compact, Ben Buchanan, the White House special adviser for AI, stated that legislation is necessary to protect society from potential AI harms.
https://www.infradapt.com/news/senators-call-for-government-oversight-and-licensing-of-chatgpt/
1 note
fahrni · 1 year ago
Text
The Musk Files - Crossed Out
No commentary this time around. I haven’t posted anything about Space Karen in a while, so the articles have been stacking up.
Enjoy.
Juli Clover • MacRumors
Twitter or “X” CEO Elon Musk today said that he plans to speak with Apple CEO Tim Cook about lower App Store fees for creators who earn money through subscriptions on the Twitter/X social network.
Charlie Warzel • The Atlantic
This question, with its exclamatory urgency, has never been more relevant to Twitter than in the past 48 hours, when Musk decided to nuke 17 years’ worth of brand awareness and rename the thing. The artist formerly known as Twitter is now X. What is happening?! indeed.
Tom Warren • The Verge
Twitter Blue, which Elon Musk is currently rebranding to X Blue, now includes the option to hide the notorious blue checkmark. Twitter Blue subscribers recently started noticing the “hide your blue checkmark” option on the web and in mobile apps, offering the ability to hide that they’re paying for Twitter and avoid memes about how “this mf paid for twitter.”
Asher Notheis • Washington Examiner
Actor Mark Hamill has called for people on social media to partake in a boycott of X, the platform formerly known as Twitter.
Robert Reich
Yesterday, it was reported that Elon Musk’s X Corp., parent of Twitter, has sent a letter to the Center for Countering Digital Hate (CCDH) accusing the nonprofit of making “a series of troubling and baseless claims that appear calculated to harm Twitter generally, and its digital advertising business specifically” — and threatening to sue CCDH.
Casey Newton • platformer.news
The X Corporation has in recent days devoted more time to signage-related issues than is prudent for a company that continues to lose advertisers, employees, and users’ time. But it’s consistent with Musk’s current incarnation as a cultural vandal, using his money and power to deface once-influential institutions and dare anyone to stop him.
Daring Fireball
Any normal company planning a product name change would have everything sorted out with the iOS App Store and Android Play Store ahead of time. Needless to say, X Corp is not a normal company and so of course they didn’t have anything sorted out.
Matt Binder • Mashable
Elon Musk and company take @x handle from its original user. He got zero dollars for it.
Rumman Chowdhury • The Atlantic
Everyone has an opinion about Elon Musk’s takeover of Twitter. I lived it. I saw firsthand the harms that can flow from unchecked power in tech. But it’s not too late to turn things around.
Casey Newton • platformer.news
On Monday afternoon, a crane rolled up to Twitter’s headquarters on Market Street. The plan was to remove the sign from the historic building’s facade, putting a symbolic end to the company that owner Elon Musk had over the weekend re-branded to X.
Taylor Lorenz • The Washington Post
Far-right Twitter influencers first on Elon Musk’s monetization scheme
Reuters
Elon Musk said Twitter’s cash flow remains negative because of a nearly 50% drop in advertising revenue and a heavy debt load.
1 note
fernand0 · 1 year ago
Link
0 notes
ericvanderburg · 1 year ago
Text
Can ethical AI surveillance exist? Data scientist Rumman Chowdhury doesn't think so
http://securitytc.com/Spq967
0 notes
mariacallous · 1 year ago
Text
The US government should create a new body to regulate artificial intelligence—and restrict work on language models like OpenAI’s GPT-4 to companies granted licenses to do so. That’s the recommendation of a bipartisan duo of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who launched a legislative framework yesterday to serve as a blueprint for future laws and influence other bills before Congress.
Under the proposal, developing face recognition and other “high risk” applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party.
The framework also proposes that companies should publicly disclose details of the training data used to create an AI model and that people harmed by AI get a right to bring the company that created it to court.
The senators’ suggestions could be influential in the days and weeks ahead as debates intensify in Washington over how to regulate AI. Early next week, Blumenthal and Hawley will oversee a Senate subcommittee hearing about how to meaningfully hold businesses and governments accountable when they deploy AI systems that cause people harm or violate their rights. Microsoft president Brad Smith and the chief scientist of chipmaker Nvidia, William Dally, are due to testify.
A day later, senator Chuck Schumer will host the first in a series of meetings to discuss how to regulate AI, a challenge Schumer has referred to as “one of the most difficult things we’ve ever undertaken.” Tech executives with an interest in AI, including Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, make up about half the almost-two-dozen-strong guest list. Other attendees represent those likely to be subjected to AI algorithms and include trade union presidents from the Writers Guild and union federation AFL-CIO, and researchers who work on preventing AI from trampling human rights, including UC Berkeley’s Deb Raji and Humane Intelligence CEO and Twitter’s former ethical AI lead Rumman Chowdhury.
Anna Lenhart, who previously led an AI ethics initiative at IBM and is now a PhD candidate at the University of Maryland, says the senators’ legislative framework is a welcome sight after years of AI experts appearing in Congress to explain how and why AI should be regulated.
“It's really refreshing to see them take this on and not wait for a series of insight forums or a commission that's going to spend two years and talk to a bunch of experts to essentially create this same list,” Lenhart says.
But she’s unsure how any new AI oversight body could host the broad range of technical and legal knowledge required to oversee technology used in many areas from self-driving cars to health care to housing. “That’s where I get a bit stuck on the licensing regime idea,” Lenhart says.
The idea of using licenses to restrict who can develop powerful AI systems has gained traction in both industry and Congress. OpenAI CEO Sam Altman suggested licensing for AI developers during testimony before the Senate in May—a regulatory solution that might arguably help his company maintain its leading position. A bill proposed last month by senators Lindsey Graham and Elizabeth Warren would also require tech companies to secure a government AI license but only covers digital platforms above a certain size.
Lenhart is not the only AI or policy expert skeptical of government licensing for AI development. In May the idea drew criticism from both libertarian-leaning political campaign group Americans for Prosperity, which fears it would stifle innovation, and from the digital rights nonprofit Electronic Frontier Foundation, which warns of industry capture by companies with money or influential connections. Perhaps in response, the framework unveiled yesterday recommends strong conflict of interest rules for staff at the AI oversight body.
Blumenthal and Hawley’s new framework for future AI regulation leaves some questions unanswered. It's not yet clear if oversight of AI would come from a newly created federal agency or a group inside an existing federal agency. Nor have the senators specified what criteria would be used to determine if a certain use case is defined as high risk and requires a license to develop.
Michael Khoo, climate disinformation program director at environmental nonprofit Friends of the Earth, says the new proposal looks like a good first step but that more details are necessary to properly evaluate its ideas. His organization is part of a coalition of environmental and tech accountability organizations that, via a letter to Schumer and a mobile billboard due to drive circles around Congress next week, are calling on lawmakers to prevent energy-intensive AI projects from making climate change worse.
Khoo agrees with the legislative framework’s call for documentation and public disclosure of adverse impacts, but says lawmakers shouldn’t let industry define what’s deemed harmful. He also wants members of Congress to demand businesses disclose how much energy it takes to train and deploy AI systems and consider the risk of accelerating the spread of misinformation when weighing the impact of AI models.
The legislative framework shows Congress considering a stricter approach to AI regulation than taken so far by the federal government, which has launched a voluntary risk-management framework and nonbinding AI bill of rights. The White House struck a voluntary agreement in July with eight major AI companies, including Google, Microsoft, and OpenAI, but also promised that firmer rules are coming. At a briefing on the AI company compact, White House special adviser for AI Ben Buchanan said keeping society safe from AI harms will require legislation.
6 notes
drcisneros · 1 year ago
Link
0 notes
qudachuk · 1 year ago
Link
One of the leading thinkers on artificial intelligence discusses responsibility, ‘moral outsourcing’ and bridging the gap between people and technology. Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what...
0 notes
prairiemodernist · 2 years ago
Text
Devaluing work on algorithmic biases could have disastrous consequences, especially because of how perniciously invisible yet pervasive these biases can become. As the arbiters of the so-called digital town square, algorithmic systems play a significant role in democratic discourse. — Dr. Rumman Chowdhury
0 notes