#rumman chowdhury
Explore tagged Tumblr posts
Text
It's one of my rare re-bloggings (breathe deep and take it in, people). Give the Rolling Stone article a read. Here's a snippet:

"As AI has exploded into the public consciousness, the men who created them have cried crisis. On May 2, Gebru's former Google colleague Geoffrey Hinton appeared on the front page of The New York Times under the headline: 'He Warns of Risks of AI He Helped Create.' That Hinton article accelerated the trend of powerful men in the industry speaking out against the technology they'd just released into the world; the group has been dubbed the AI Doomers. Later that month, there was an open letter signed by more than 350 of them: executives, researchers, and engineers working in AI. Hinton signed it along with OpenAI CEO Sam Altman and his rival Dario Amodei of Anthropic. The letter consisted of a single gut-dropping sentence: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'

"How would that risk have changed if we'd listened to Gebru? What if we had heard the voices of the women like her who've been waving the flag about AI and machine learning? Researchers, including many women of color, have been saying for years that these systems interact differently with people of color and that the societal effects could be disastrous: that they're a fun-house-style distorted mirror magnifying biases and stripping out the context from which their information comes; that they're tested on those without the choice to opt out; and will wipe out the jobs of some marginalized communities. Gebru and her colleagues have also expressed concern about the exploitation of heavily surveilled and low-wage workers helping support AI systems; content moderators and data annotators are often from poor and underserved communities, like refugees and incarcerated people. Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide in order to train ChatGPT on what is explicit content. Some of them take home as little as $1.32 an hour to do so.

"In other words, the problems with AI aren't hypothetical. They don't just exist in some SkyNet-controlled, Matrix version of the future. The problems with it are already here. 'I've been yelling about this for a long time,' Gebru says. 'This is a movement that's been more than a decade in the making.'"

Edit: I failed to suggest that you, dear reader, should peruse Cathy O'Neil's 2016 book, 'Weapons of Math Destruction', which details the opaque nature of algorithms and the biases that are baked into them. As O'Neil puts it, "Algorithms are opinions embedded in code." O'Neil is careful about which kinds of algorithms she targets with her critiques and isn't offering a blanket denunciation of algorithms per se, just the large, scalable and, in her evaluation, unfair algorithms that have begun to have direct impacts on people's lives, often in destructive ways. "I worried about the separation between technical models and real people, and about the moral repercussions of that separation," O'Neil writes.
These Women Tried to Warn Us About AI
#timnit gebru#joy buolamwini#safiya noble#rumman chowdhury#Seeta Peña Gangadharan#rolling stone magazine#the problems with ai#I don't like tiktok but this video is fine#Cathy O'Neil
87 notes
·
View notes
Text
At the 2023 Defcon hacker conference in Las Vegas, prominent AI tech companies partnered with algorithmic integrity and transparency groups to sic thousands of attendees on generative AI platforms and find weaknesses in these critical systems. This "red-teaming" exercise, which also had support from the US government, took a step in opening these increasingly influential yet opaque systems to scrutiny. Now, the ethical AI and algorithmic assessment nonprofit Humane Intelligence is taking this model one step further. On Wednesday, the group announced a call for participation with the US National Institute of Standards and Technology, inviting any US resident to participate in the qualifying round of a nationwide red-teaming effort to evaluate AI office productivity software.
The qualifier will take place online and is open to both developers and the general public as part of NIST's AI challenges, known as Assessing Risks and Impacts of AI, or ARIA. Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. The goal is to expand capabilities for conducting rigorous testing of the security, resilience, and ethics of generative AI technologies.
"The average person utilizing one of these models doesn't really have the ability to determine whether or not the model is fit for purpose," says Theo Skeadas, chief of staff at Humane Intelligence. "So we want to democratize the ability to conduct evaluations and make sure everyone using these models can assess for themselves whether or not the model is meeting their needs."
The final event at CAMLIS will split the participants into a red team trying to attack the AI systems and a blue team working on defense. Participants will use the AI 600-1 profile, part of NIST's AI risk management framework, as a rubric for measuring whether the red team is able to produce outcomes that violate the systems' expected behavior.
"NIST's ARIA is drawing on structured user feedback to understand real-world applications of AI models," says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST's Office of Emerging Technologies and a member of the US Department of Homeland Security AI safety and security board. "The ARIA team is mostly experts on sociotechnical test and evaluation, and [is] using that background as a way of evolving the field toward rigorous scientific evaluation of generative AI."
Chowdhury and Skeadas say the NIST partnership is just one of a series of AI red team collaborations that Humane Intelligence will announce in the coming weeks with US government agencies, international governments, and NGOs. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like "bias bounty challenges," where individuals can be rewarded for finding problems and inequities in AI models.
"The community should be broader than programmers," Skeadas says. "Policymakers, journalists, civil society, and nontechnical people should all be involved in the process of testing and evaluating these systems. And we need to make sure that less represented groups, like individuals who speak minority languages or are from nonmajority cultures and perspectives, are able to participate in this process."
81 notes
·
View notes
Quote
Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls "2am brain", a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. "It's just like baking," she says. "You can't force it, you can't turn the temperature up, you can't make it go faster. It will take however long it takes. And when it's done baking, it will present itself." It was Chowdhury's 2am brain that first coined the phrase "moral outsourcing" for a concept that now, as one of the leading thinkers on artificial intelligence, has become a key point in how she considers accountability and governance when it comes to the potentially revolutionary impact of AI. Moral outsourcing, she says, applies the logic of sentience and choice to AI, allowing technologists to effectively reallocate responsibility for the products they build onto the products themselves: technical advancement becomes predestined growth, and bias becomes intractable. "You would never say 'my racist toaster' or 'my sexist laptop'," she said in a Ted Talk from 2018. "And yet we use these modifiers in our language about artificial intelligence. And in doing so we're not taking responsibility for the products that we build." Writing ourselves out of the equation produces systematic ambivalence on par with what the philosopher Hannah Arendt called the "banality of evil": the wilful and cooperative ignorance that enabled the Holocaust. "It wasn't just about electing someone into power that had the intent of killing so many people," she says. "But it's that entire nations of people also took jobs and positions and did these horrible things."
"I do not think ethical surveillance can exist": Rumman Chowdhury on accountability in AI | Artificial intelligence (AI) | The Guardian
21 notes
·
View notes
Link
5 notes
¡
View notes
Text
Las caras Humanas de la IA (The Human Faces of AI) - Rumman Chowdhury
Las caras Humanas de la IA (The Human Faces of AI) - Rumman Chowdhury. With this post I want to help remember that behind every great achievement there are people who should not be left in the background. #humanOverIA #IA #VisualThinking #pensamientovisual
We live in times of change in which AI seems to be at the center of everything. However, we often forget that behind all the AI put at our service there are people who design and build it. With this new challenge, "Las caras Humanas de la IA" (The Human Faces of AI), I want to add my small grain of sand toward the goal of not forgetting that behind every great…
0 notes
Text
Your right to repair AI systems | Rumman Chowdhury
https://www.ted.com/talks/rumman_chowdhury_your_right_to_repair_ai_systems?rss=172BB350-0205&utm_source=dlvr.it&utm_medium=tumblr
0 notes
Quote
"No one wants to build a product on a model that makes things up," AI ethics expert Rumman Chowdhury cautioned Axios this week.
Experts Concerned by Signs of AI Bubble
0 notes
Text
Senators Call for Government Oversight and Licensing of ChatGPT-Level AI
A bipartisan pair of senators, Richard Blumenthal, a Democrat, and Josh Hawley, a Republican, have proposed that a new regulatory body be established by the US government to oversee artificial intelligence (AI). This body would also limit the development of language models, such as OpenAI's GPT-4, to licensed companies. The senators' proposal, which was unveiled as a legislative framework, is intended to guide future laws and influence pending legislation.
The proposed framework suggests that the development of facial recognition and other high-risk AI applications should require a government license. Companies seeking such a license would need to conduct pre-deployment tests on AI models for potential harm, report post-launch issues, and allow independent third-party audits of their AI models. The framework also calls for companies to publicly disclose the training data used to develop an AI model. Moreover, it proposes that individuals adversely affected by AI should have the legal right to sue the company responsible for its creation.
As discussions in Washington over AI regulation intensify, the senators' proposal could have a significant impact. In the coming week, Blumenthal and Hawley will preside over a Senate subcommittee hearing focused on holding corporations and governments accountable for the deployment of harmful or rights-violating AI systems. Expected to testify at the hearing are Microsoft President Brad Smith and Nvidia's chief scientist, William Dally.
The following day, Senator Chuck Schumer will convene the first in a series of meetings to explore AI regulation, a task Schumer has described as "one of the most difficult things we've ever undertaken." Tech executives with a vested interest in AI, such as Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, comprise about half of the nearly 24-person guest list. Other attendees include trade union presidents from the Writers Guild and AFL-CIO federation, as well as researchers dedicated to preventing AI from infringing on human rights, such as Deb Raji from UC Berkeley and Rumman Chowdhury, the CEO of Humane Intelligence and former ethical AI lead at Twitter.
Anna Lenhart, a former AI ethics initiative leader at IBM and current PhD candidate at the University of Maryland, views the senators' legislative framework as a positive development after years of AI experts testifying before Congress on the need for AI regulation. However, Lenhart is uncertain about how a new AI oversight body could encompass the wide range of technical and legal expertise necessary to regulate technology used in sectors as diverse as autonomous vehicles, healthcare, and housing.
The concept of using licenses to limit who can develop powerful AI systems has gained popularity in both the industry and Congress. OpenAI CEO Sam Altman suggested AI developer licensing during his Senate testimony in May, a regulatory approach that could potentially benefit his company. A bill introduced last month by Senators Lindsey Graham and Elizabeth Warren also calls for tech companies to obtain a government AI license, but it only applies to digital platforms of a certain size.
However, not everyone in the AI or policy field supports government licensing for AI development. The proposal has been criticized by the libertarian-leaning political campaign group Americans for Prosperity, which worries it could hamper innovation, and by the digital rights nonprofit Electronic Frontier Foundation, which warns of potential industry capture by wealthy or influential companies. Perhaps in response to these concerns, the legislative framework proposed by Blumenthal and Hawley recommends robust conflict-of-interest rules for the new AI regulatory body.

The proposed framework also leaves a few questions unresolved. It is still undetermined whether AI oversight would be handled by a freshly established federal agency or a department within an existing one, and the senators have not yet defined the standards that would be employed to identify high-risk use cases that necessitate a development license.
Michael Khoo, the director of the climate disinformation program at the environmental nonprofit Friends of the Earth, says the new proposal appears to be a promising first step, but that additional details are needed to evaluate its ideas properly. His organization is part of a coalition of environmental and tech accountability organizations that, through a letter to Schumer and a mobile billboard scheduled to circle Congress in the coming week, are appealing to lawmakers to keep energy-consuming AI projects from exacerbating climate change.
Khoo concurs with the legislative framework's insistence on documentation and public disclosure of negative impacts, but argues that the industry should not be allowed to determine what is considered detrimental. He also encourages members of Congress to require businesses to disclose the energy consumption involved in training and deploying AI systems, and to consider the risk of misinformation proliferation when assessing the impact of AI models.
The legislative framework indicates that Congress is contemplating a more stringent approach to AI regulation compared to the federal government's previous efforts, which included a voluntary risk-management framework and a non-binding AI bill of rights. In July, the White House reached a voluntary agreement with eight major AI companies, including Google, Microsoft, and OpenAI, but also assured that stricter regulations are on the horizon. During a briefing on the AI company compact, Ben Buchanan, the White House special adviser for AI, stated that legislation is necessary to protect society from potential AI harms.
https://www.infradapt.com/news/senators-call-for-government-oversight-and-licensing-of-chatgpt/
1 note
·
View note
Text
The Musk Files - Crossed Out
No commentary this time around. I haven't posted anything about Space Karen in a while, so the articles have been stacking up.
Enjoy.
Juli Clover • MacRumors
Twitter or "X" CEO Elon Musk today said that he plans to speak with Apple CEO Tim Cook about lower App Store fees for creators who earn money through subscriptions on the Twitter/X social network.
Charlie Warzel • The Atlantic
This question, with its exclamatory urgency, has never been more relevant to Twitter than in the past 48 hours, when Musk decided to nuke 17 years' worth of brand awareness and rename the thing. The artist formerly known as Twitter is now X. What is happening?! indeed.
Tom Warren • The Verge
Twitter Blue, which Elon Musk is currently rebranding to X Blue, now includes the option to hide the notorious blue checkmark. Twitter Blue subscribers recently started noticing the "hide your blue checkmark" option on the web and in mobile apps, offering the ability to hide that they're paying for Twitter and avoid memes about how "this mf paid for twitter."
Asher Notheis • Washington Examiner
Actor Mark Hamill has called for people on social media to partake in a boycott of X, the platform formerly known as Twitter.
Robert Reich
Yesterday, it was reported that Elon Musk's X Corp., parent of Twitter, has sent a letter to the Center for Countering Digital Hate (CCDH) accusing the nonprofit of making "a series of troubling and baseless claims that appear calculated to harm Twitter generally, and its digital advertising business specifically," and threatening to sue CCDH.
Casey Newton • platformer.news
The X Corporation has in recent days devoted more time to signage-related issues than is prudent for a company that continues to lose advertisers, employees, and users' time. But it's consistent with Musk's current incarnation as a cultural vandal, using his money and power to deface once-influential institutions and dare anyone to stop him.
Daring Fireball
Any normal company planning a product name change would have everything sorted out with the iOS App Store and Android Play Store ahead of time. Needless to say, X Corp is not a normal company, and so of course they didn't have anything sorted out.
Matt Binder • Mashable
Elon Musk and company take @x handle from its original user. He got zero dollars for it.
Rumman Chowdhury • The Atlantic
Everyone has an opinion about Elon Musk's takeover of Twitter. I lived it. I saw firsthand the harms that can flow from unchecked power in tech. But it's not too late to turn things around.
Casey Newton • platformer.news
On Monday afternoon, a crane rolled up to Twitter's headquarters on Market Street. The plan was to remove the sign from the historic building's facade, putting a symbolic end to the company that owner Elon Musk had over the weekend rebranded to X.
Taylor Lorenz • The Washington Post
Far-right Twitter influencers first on Elon Musk's monetization scheme
Reuters
Elon Musk said Twitter's cash flow remains negative because of a nearly 50% drop in advertising revenue and a heavy debt load.
1 note
·
View note
Link
0 notes
Text
Can ethical AI surveillance exist? Data scientist Rumman Chowdhury doesn't think so
http://securitytc.com/Spq967
0 notes
Text
The US government should create a new body to regulate artificial intelligence, and restrict work on language models like OpenAI's GPT-4 to companies granted licenses to do so. That's the recommendation of a bipartisan duo of senators, Democrat Richard Blumenthal and Republican Josh Hawley, who launched a legislative framework yesterday to serve as a blueprint for future laws and influence other bills before Congress.
Under the proposal, developing face recognition and other "high risk" applications of AI would also require a government license. To obtain one, companies would have to test AI models for potential harm before deployment, disclose instances when things go wrong after launch, and allow audits of AI models by an independent third party.
The framework also proposes that companies should publicly disclose details of the training data used to create an AI model and that people harmed by AI get a right to bring the company that created it to court.
The senators' suggestions could be influential in the days and weeks ahead as debates intensify in Washington over how to regulate AI. Early next week, Blumenthal and Hawley will oversee a Senate subcommittee hearing about how to meaningfully hold businesses and governments accountable when they deploy AI systems that cause people harm or violate their rights. Microsoft president Brad Smith and the chief scientist of chipmaker Nvidia, William Dally, are due to testify.
A day later, senator Chuck Schumer will host the first in a series of meetings to discuss how to regulate AI, a challenge Schumer has referred to as "one of the most difficult things we've ever undertaken." Tech executives with an interest in AI, including Mark Zuckerberg, Elon Musk, and the CEOs of Google, Microsoft, and Nvidia, make up about half the almost-two-dozen-strong guest list. Other attendees represent those likely to be subjected to AI algorithms and include trade union presidents from the Writers Guild and union federation AFL-CIO, and researchers who work on preventing AI from trampling human rights, including UC Berkeley's Deb Raji and Humane Intelligence CEO and Twitter's former ethical AI lead Rumman Chowdhury.
Anna Lenhart, who previously led an AI ethics initiative at IBM and is now a PhD candidate at the University of Maryland, says the senatorsâ legislative framework is a welcome sight after years of AI experts appearing in Congress to explain how and why AI should be regulated.
"It's really refreshing to see them take this on and not wait for a series of insight forums or a commission that's going to spend two years and talk to a bunch of experts to essentially create this same list," Lenhart says.
But she's unsure how any new AI oversight body could host the broad range of technical and legal knowledge required to oversee technology used in many areas, from self-driving cars to health care to housing. "That's where I get a bit stuck on the licensing regime idea," Lenhart says.
The idea of using licenses to restrict who can develop powerful AI systems has gained traction in both industry and Congress. OpenAI CEO Sam Altman suggested licensing for AI developers during testimony before the Senate in May, a regulatory solution that might arguably help his company maintain its leading position. A bill proposed last month by senators Lindsey Graham and Elizabeth Warren would also require tech companies to secure a government AI license but only covers digital platforms above a certain size.
Lenhart is not the only AI or policy expert skeptical of government licensing for AI development. In May the idea drew criticism from both the libertarian-leaning political campaign group Americans for Prosperity, which fears it would stifle innovation, and from the digital rights nonprofit Electronic Frontier Foundation, which warns of industry capture by companies with money or influential connections. Perhaps in response, the framework unveiled yesterday recommends strong conflict-of-interest rules for staff at the AI oversight body.
Blumenthal and Hawley's new framework for future AI regulation leaves some questions unanswered. It's not yet clear if oversight of AI would come from a newly created federal agency or a group inside an existing federal agency. Nor have the senators specified what criteria would be used to determine if a certain use case is defined as high risk and requires a license to develop.
Michael Khoo, climate disinformation program director at environmental nonprofit Friends of the Earth, says the new proposal looks like a good first step but that more details are necessary to properly evaluate its ideas. His organization is part of a coalition of environmental and tech accountability organizations that, via a letter to Schumer and a mobile billboard due to drive circles around Congress next week, are calling on lawmakers to prevent energy-intensive AI projects from making climate change worse.
Khoo agrees with the legislative framework's call for documentation and public disclosure of adverse impacts, but says lawmakers shouldn't let industry define what's deemed harmful. He also wants members of Congress to demand businesses disclose how much energy it takes to train and deploy AI systems and consider the risk of accelerating the spread of misinformation when weighing the impact of AI models.
The legislative framework shows Congress considering a stricter approach to AI regulation than taken so far by the federal government, which has launched a voluntary risk-management framework and nonbinding AI bill of rights. The White House struck a voluntary agreement in July with eight major AI companies, including Google, Microsoft, and OpenAI, but also promised that firmer rules are coming. At a briefing on the AI company compact, White House special adviser for AI Ben Buchanan said keeping society safe from AI harms will require legislation.
6 notes
·
View notes
Link
0 notes
Link
One of the leading thinkers on artificial intelligence discusses responsibility, "moral outsourcing" and bridging the gap between people and technology. Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what...
0 notes
Text
Devaluing work on algorithmic biases could have disastrous consequences, especially because of how perniciously invisible yet pervasive these biases can become. As the arbiters of the so-called digital town square, algorithmic systems play a significant role in democratic discourse. — Dr. Rumman Chowdhury
0 notes