#julia angwin
Explore tagged Tumblr posts
mostlysignssomeportents · 2 years ago
Text
Everything advertised on social media is overpriced junk
Tumblr media
In “Behavioral Advertising and Consumer Welfare: An Empirical Investigation,” a trio of business researchers from Carnegie Mellon and Pamplin College investigates the difference between goods purchased through highly targeted online ads and goods found through plain web searches, and concludes that social media ads push overpriced junk:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4398428
If you’d like an essay-formatted version of this thread to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/04/08/late-stage-sea-monkeys/#jeremys-razors
Specifically, stuff that’s pushed to you via targeted ads costs an average of 10 percent more, and is significantly more likely to come from a vendor with a poor rating from the Better Business Bureau. This may seem trivial and obvious, but it’s got profound implications for media, commercial surveillance, and the future of the internet.
Writing in the New York Times, Julia Angwin — a legendary, muckraking data journalist — breaks down those implications. Angwin builds a case study around Jeremy’s Razors, a business that advertises itself as a “woke-free” shaving solution for manly men:
https://www.nytimes.com/2023/04/06/opinion/online-advertising-privacy-data-surveillance-consumer-quality.html
Jeremy’s Razors spends a fucking fortune on ads. According to Facebook’s Ad Library, the company spent $800,000 on FB ads in March, targeting fathers of school-age kids who like Hershey’s, ultimate fighting, hunting or Johnny Cash:
https://pluralistic.net/jeremys-targeting
Anti-woke razors are an objectively, hilariously stupid idea, but that’s not the point here. The point is that Jeremy’s has to spend $800K/month to reach its customers, which means that it either has to accept $800K less in profits, or make it up by charging more and/or skimping on quality.
Targeted advertising is incredibly expensive, and incredibly lucrative — for the ad-tech platforms that sit between creative workers and media companies on one side, and audiences on the other. In order to target ads, ad-tech companies have to collect deep, nonconsensual dossiers on every internet user, full of personal, sensitive and potentially compromising information.
The switch to targeted ads was part of the enshittification cycle, whereby companies like Facebook and Google lured in end-users by offering high-quality services — Facebook showed you the things the people you asked to hear from posted, and Google returned the best search results it could find.
Eventually, those users became locked in. Once all our friends were on Facebook, we held each other hostage, each unable to leave because the others were there. Google used its access to the capital markets to snuff out any rival search companies, spending tens of billions every year to be the default on Apple devices, for example.
Once we were locked in, the tech giants made life worse for us in order to make life better for media companies and advertisers. Facebook violated its promise to be the privacy-centric alternative to Myspace, where our data would never be harvested; it switched on mass surveillance and created cheap, accurate ad-targeting:
https://lawcat.berkeley.edu/record/1128876?ln=en
Google fulfilled the prophecy in its founding technical document, the Pagerank paper: “advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.” They, too, offered cheap, highly targeted ads:
http://infolab.stanford.edu/~backrub/google.html
Facebook and Google weren’t just kind to advertisers — they also gave media companies and creative workers a great deal, funneling vast quantities of traffic to both. Facebook did this by cramming media content into the feeds of people who hadn’t asked to see it, displacing the friends’ posts they had asked to see. Google did it by upranking media posts in search results.
Then we came to the final stage of the enshittification cycle: having hooked both end-users and business customers, Facebook and Google withdrew the surpluses from both groups and handed them to their own shareholders. Advertising costs went up. The share of ad income paid to media companies went down. Users got more ads in their feeds and search results.
Facebook and Google illegally colluded to rig the ad-market with a program called Jedi Blue that let the companies steal from both advertisers and media companies:
https://techcrunch.com/2022/03/11/google-meta-jedi-blue-eu-uk-antitrust-probes/
Apple blocked Facebook’s surveillance on its mobile devices, but increased its own surveillance of Iphone and Ipad users in order to target ads to them, even when those users explicitly opted out of spying:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
Today, we live in the enshittification end-times, red in tooth and claw, where media companies’ revenues are dwindling and advertisers’ costs are soaring, and the tech giants are raking in hundreds of billions, firing hundreds of thousands of workers, and pissing away tens of billions on stock buybacks:
https://doctorow.medium.com/mass-tech-worker-layoffs-and-the-soft-landing-1ddbb442e608
As Angwin points out, in the era before behavioral advertising, Jeremy’s might have bought an ad in Deer & Deer Hunting or another magazine that caters to he-man types who don’t want woke razors; the same is true for all products and publications. Before mass, non-consensual surveillance, ads were based on content and context, not on the reader’s prior behavior.
There’s no reason that ads today couldn’t return to that regime. Contextual ads operate without surveillance, using the same “real-time bidding” mechanism to place ads based on the content of the article and some basic parameters about the user (rough location based on IP address, time of day, device type):
https://pluralistic.net/2020/08/05/behavioral-v-contextual/#contextual-ads
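To make the contrast concrete, here is a minimal, hypothetical sketch of the only signals a contextual bid request needs to carry. The field names are purely illustrative — they are not drawn from any real RTB specification:

```python
from datetime import datetime, timezone

def contextual_bid_request(article_topic, article_keywords, ip_region,
                           device_type):
    """Assemble the signals a context ad uses: the page plus the
    current session -- no cross-site profile, no persistent user ID."""
    return {
        "context": {
            "topic": article_topic,        # what the article is about
            "keywords": article_keywords,  # content, not reader history
        },
        "session": {
            "region": ip_region,           # coarse location, from IP address
            "hour_utc": datetime.now(timezone.utc).hour,
            "device": device_type,         # phone / tablet / desktop
        },
        # Conspicuously absent: user_id, browsing_history, purchase_data,
        # location_trail -- the dossier a behavioral ad would demand.
    }

req = contextual_bid_request(
    article_topic="deer hunting",
    article_keywords=["rifles", "outdoors", "razors"],
    ip_region="US-MT",
    device_type="phone",
)
print(req["context"]["topic"])
```

Note where the power sits in this sketch: every targeting signal except the session basics comes from the publisher's own page.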
Context ads perform about as well as behavioral ads — but they have a radically different power-structure. No media company will ever know as much about a given user as an ad-tech giant practicing dragnet surveillance and buying purchase, location and finance data from data-brokers. But no ad-tech giant knows as much about the context and content of an article as the media company that published it.
Context ads are, by definition, centered on the media company or creative worker alongside whose work they appear. They are much harder for tech giants to enshittify, because enshittification requires lock-in, and it’s hard to lock in a publication that knows better than anyone what it’s publishing and what it means.
We should ban surveillance advertising. Period. Companies should not be allowed to collect our data without our meaningful opt-in consent, and if that were the standard, there would be no data-collection:
https://pluralistic.net/2022/03/22/myob/#adtech-considered-harmful
Remember when Apple created an opt-out button for tracking? More than 94 percent of users clicked it (the people who clicked “yes” to “Can Facebook spy on you?” were either Facebook employees, or confused):
https://www.cnbc.com/2022/02/02/facebook-says-apple-ios-privacy-change-will-cost-10-billion-this-year.html
Ad-targeting enables a host of evils, like paid political disinformation. It also leads to more expensive, lower-quality goods. “A Raw Deal For Consumers,” Sumit Sharma’s new Consumer Reports paper, catalogs the many other costs imposed on Americans due to the lack of tech regulation:
https://advocacy.consumerreports.org/wp-content/uploads/2023/04/A-Raw-Deal-for-US-Consumers_March-2023.pdf
Sharma describes the benefits that Europeans will shortly enjoy thanks to the EU’s Digital Markets Act and Digital Services Act, from lower prices to more privacy to more choice, from cloud gaming on mobile devices to competing app stores.
However, both the EU and the US — as well as Canada and Australia — have focused their news-industry legislation on misguided “link taxes,” where tech giants are required to pay license fees to link to and excerpt the news. This is an approach grounded in the mistaken idea that tech giants are stealing media companies’ content — when really, tech giants are stealing their money:
https://pluralistic.net/2022/04/18/news-isnt-secret/#bid-shading
Creating a new pseudocopyright to control who can discuss the news is a terrible idea, one that will make the media companies beholden to the tech giants at a time when we desperately need deep, critical reporting on the tech sector. In Canada, where Bill C-18 is the latest link tax proposal in the running to become law, we’re already seeing that conflict of interest come into play.
As Jesse Brown and Paula Simons — a veteran reporter turned senator — discuss on the latest Canadaland podcast, the Toronto Star’s sharp and well-reported critical series on the tech giants died a swift and unexplained death immediately after the Star began receiving license fees for tech users’ links and excerpts from its reporting:
https://www.canadaland.com/paula-simons-bill-c-18/
Meanwhile, in Australia, the proposed “news bargaining code” stampeded the tech giants into agreeing to enter into “voluntary” negotiations with the media companies, allowing Rupert Murdoch’s Newscorp to claim the lion’s share of the money, and then conduct layoffs across its newsrooms.
While in France, the link tax depends on publishers integrating with Google Showcase, a product that makes Google more money from news content and makes news publishers more dependent on Google:
https://www.politico.eu/article/french-competition-authority-greenlights-google-pledges-over-paying-news-publishers/
A link tax only pays for so long as the tech giants remain dominant and continue to extract the massive profits that make them capable of paying the tax. But legislative action to fix the ad-tech markets, like Senator Mike Lee’s ad-tech breakup bill (cosponsored by both Ted Cruz and Elizabeth Warren!) would shift power to publishers, and with it, money:
https://www.lee.senate.gov/2023/3/the-america-act
With ad-tech intermediaries scooping up 50% or more of every advertising dollar, there is plenty of potential to save news without the need for a link tax. If unrigging the ad-tech market drops the platforms’ share of advertising dollars to a more reasonable 10%, then the advertisers and publishers could split the remainder, with advertisers spending 20% less and publishers netting 20% more.
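The arithmetic above works out if the two “20%” figures are read as cents on the original advertising dollar. A quick sketch, with purely illustrative numbers:

```python
# Back-of-the-envelope version of the revenue split described above.
# All figures are per $1.00 of original ad spend.

original_spend = 1.00
platform_share_now = 0.50                             # intermediaries' current cut
publisher_now = original_spend - platform_share_now   # $0.50 reaches the publisher

# After unrigging: advertisers spend 20 cents less, publishers net
# 20 cents more, and the platform keeps the sliver in between.
new_spend = original_spend - 0.20          # $0.80
new_publisher = publisher_now + 0.20       # $0.70
new_platform = new_spend - new_publisher   # $0.10

print(f"advertiser pays ${new_spend:.2f} (20% less)")
print(f"publisher nets ${new_publisher:.2f} (up from ${publisher_now:.2f})")
print(f"platform keeps ${new_platform:.2f}")
```

The platform's cut of the new, smaller spend comes to about 12.5 cents on the dollar — close to the “more reasonable 10%” target.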
Passing a federal privacy law would end surveillance advertising at the stroke of a pen, shifting the market to context ads that let publishers, not platforms, call the shots. As an added bonus, the law would stop Tiktok from spying on Americans, and also end Google, Facebook, Apple and Microsoft’s spying to boot:
https://pluralistic.net/2023/03/30/tik-tok-tow/#good-politics-for-electoral-victories
Mandating competition in app stores — as the Europeans are poised to do — would kill Google and Apple’s 30% “app store tax” — the percentage they rake off of every transaction from every app on Android and Ios. Drop that down to the 2–5% that the credit cards charge, and every media outlet’s revenue-per-subscriber would jump by 25%.
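As a sanity check on that figure (illustrative numbers, assuming the full fee applies to each subscription dollar) — the computed jump actually comes out larger than the 25% cited, so the claim is, if anything, conservative:

```python
# Revenue per $10/month in-app subscription, before and after
# competition drives the fee down to card-network levels.
subscription = 10.00

app_store_fee = 0.30    # Google/Apple's current cut
card_style_fee = 0.03   # roughly what credit cards charge (2-5%)

outlet_now = subscription * (1 - app_store_fee)     # $7.00 per subscriber
outlet_after = subscription * (1 - card_style_fee)  # $9.70 per subscriber

jump = outlet_after / outlet_now - 1
print(f"per-subscriber revenue: ${outlet_now:.2f} -> ${outlet_after:.2f}")
print(f"increase: {jump:.0%}")
```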
Add to that an end-to-end rule for tech giants requiring them to deliver updates from willing senders to willing receivers, so every newsletter you subscribed to would stay out of your spam folder and every post by every media company or creator you followed would show up in your feed:
https://pluralistic.net/2022/12/10/e2e/#the-censors-pen
That would make it impossible for tech giants to use the sleazy enshittification gambit of forcing creative workers and media companies to pay to “boost” their content (or pay $8/month for a blue tick) just to get it in front of the people who asked to see it:
https://doctorow.medium.com/twiddler-1b5c9690cce6
The point of enshittification is that it’s bad for everyone except the shareholders of tech monopolists. Jeremy’s Razors are bad, earning a 2.7-star rating out of five:
https://www.facebook.com/JeremysRazors/reviews
The company charges more for these substandard razors, and you are more likely to find out about them, because of targeted, behavioral ads. These ads starve media companies and creative workers and make social media and search results terrible.
A link tax is predicated on the idea that we need Big Tech to stay big so it can dribble a few crumbs out to media companies, compromising their ability to report on their deep-pocketed benefactors, in a way that advantages the biggest media companies and leaves the small, local and independent press out in the cold.
By contrast, a privacy law, ad-tech breakups, app-store competition and end-to-end delivery would shatter the power of Big Tech and shift power to users, creative workers and media companies. These are solutions that don’t just keep working if Big Tech goes away — they actually hasten that demise! What’s more, they work just as well for big companies as they do for independents.
Whether you’re the New York Times or an ex-Times reporter who’s quit your job and now crowdfunds to cover your local school board and town council meetings, shifting control and the share of income will benefit you, whether or not Big Tech is still in the picture.
Have you ever wanted to say thank you for these posts? Here’s how you can: I’m kickstarting the audiobook for my next novel, a post-cyberpunk anti-finance finance thriller about Silicon Valley scams called Red Team Blues. Amazon’s Audible refuses to carry my audiobooks because they’re DRM free, but crowdfunding makes them possible.
Image: freeimageslive.co.uk (modified) http://www.freeimageslive.co.uk/free_stock_image/using-mobile-phone-jpg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/
[Image ID: A man's hand holds a mobile phone. Its screen displays an Instagram ad. The ad has been replaced with a slice of a vintage comic book 'small ads' page.]
472 notes · View notes
charlottereber · 8 months ago
Text
"It feels like another sign that A.I. is not even close to living up to its hype. In my eyes, it’s looking less like an all-powerful being and more like a bad intern whose work is so unreliable that it’s often easier to do the task yourself. That realization has real implications for the way we, our employers and our government should deal with Silicon Valley’s latest dazzling new, new thing. Acknowledging A.I.’s flaws could help us invest our resources more efficiently and also allow us to turn our attention toward more realistic solutions.
...
I don’t think we’re in cryptocurrency territory, where the hype turned out to be a cover story for a number of illegal schemes that landed a few big names in prison. But it’s also pretty clear that we’re a long way from Mr. Altman’s promise that A.I. will become “the most powerful technology humanity has yet invented.”"
--Julia Angwin for the New York Times, May 13, 2024
0 notes
ravenkings · 8 months ago
Text
The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing? We can’t abandon work on improving A.I. The technology, however middling, is here to stay, and people are going to use it. But we should reckon with the possibility that we are investing in an ideal future that may not materialize.
–Julia Angwin, "Will A.I. Ever Live Up to Its Hype?" The New York Times, May 15, 2024
15 notes · View notes
writingprompts · 2 years ago
Text
Tumblr media
Has Online Advertising Gotten Too Creepy?
“Tech firms track nearly every click from website to website, develop detailed profiles of your interests and desires and make that data available to advertisers. That’s why you get those creepy ads in your Instagram feed or on websites that seem to know what you were just talking about.” 
— Julia Angwin
Julia Angwin has written an interesting essay in the New York Times titled, “If It’s Advertised to You Online, You Probably Shouldn’t Buy It. Here’s Why.” She explains how terrible and pervasive online advertising is and makes the point that if a product shows up in an online ad, it’s probably terrible quality: “The targeted ads … were pitching more expensive products from lower-quality vendors than identical products that showed up in a simple web search.” Do you have experience with any of this? What do you think about online advertising? What do you think you can do about it? What do you think governments should be doing about it? Can you imagine a different version of the world that takes a different approach to online advertising? Make your case for what should be done about online advertising or explain your thinking on this topic.
Narrative alternative: tell an imaginary story where someone is getting online advertisements that are just a little too creepy. 
58 notes · View notes
ryantate · 2 years ago
Text
Bret Victor in a talk with profound implications for modern communications:
“The wrong way to understand a system is to talk about it, to describe it. The right way to understand it is to get in there and model it and explore it. And you can’t do that in words. And so what we have is people are using these very old tools — people are explaining and convincing through reasoning and rhetoric instead of the newer tools of evidence and explorable models. We want a medium that supports that… that is naturally show and tell.”
Related posts are below the talk.
youtube
4 notes · View notes
mssarahmorgan · 8 months ago
Text
“‘I find my feelings about A.I. are actually pretty similar to my feelings about blockchains: They do a poor job of much of what people try to do with them, they can’t do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial,’ wrote Molly White, a cryptocurrency researcher and critic, in her newsletter last month... Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?” --Julia Angwin, NYT
0 notes
jcmarchi · 9 months ago
Text
AI Snake Oil is now available to preorder
New Post has been published on https://thedigitalinsider.com/ai-snake-oil-is-now-available-to-preorder/
Tumblr media
We are happy to share that our book AI Snake Oil is now available to preorder across online booksellers. The publication date is September 24, 2024. If you have enjoyed reading our newsletter and would like to support our work, please preorder the book via Amazon, Bookshop, or your favorite bookseller. 
We get two recurring questions about the book: 
In an area as fast moving as AI, how long can a book stay relevant? 
How similar is the book to the newsletter?
The answer to both questions is the same. We know that book publishing moves at a slower timescale than AI. So the book is about the foundational knowledge needed to separate real advances from hype, rather than commentary on breaking developments. In writing every chapter, and every paragraph, we asked ourselves: will this be relevant in five years? This also means that there’s very little overlap between the newsletter and the book. 
Tumblr media
AI Snake Oil book cover
In the book, we explain the crucial differences between types of AI, why people, companies, and governments are falling for AI snake oil, why AI can’t fix social media, and why we should be far more worried about what people will do with AI than about anything AI will do on its own. While generative AI is what drives the press coverage, predictive AI used in criminal justice, finance, healthcare, and other domains remains far more consequential in people’s lives. We discuss in depth how predictive AI can go wrong. We also warn of the dangers of a world where AI continues to be controlled by largely unaccountable big tech companies. 
The book is not just an explainer. Every chapter includes original scholarship. We plan to release exercises and discussion questions for classroom use; courses on the relationship between tech and society might benefit from the book. 
This is the first time we’ve written a mass-market book. We learned that preordering a book, as opposed to ordering after release, can make a big difference to its success. Preorder sales are used by retailers to decide which books to stock and promote after release. They help books get recognized on best-seller lists. They allow our publishers to anticipate how many copies need to be printed and how the book should be distributed. They are also used by online booksellers to make algorithmic recommendations. In short, pre-ordering early is the best way to support our work. You can preorder the book from your local retailers via Bookshop.org or on Amazon.
We couldn’t have written AI Snake Oil without the support of Hallie Stebbins, our editor at Princeton University Press. The book was peer reviewed by three experts: Melanie Mitchell and two anonymous reviewers. We also received informal peer reviews from Matt Salganik, Molly Crockett, and Chris Bail. All of this feedback helped us improve the book immensely, and we are grateful for the reviewers’ time and thoughtful attention. We are also grateful to Alondra Nelson, Julia Angwin, Kate Crawford, and Melanie Mitchell, who took the time to read the book and write blurbs. 
Thank you to the over 25,000 of you who subscribe to this newsletter. We look forward to continuing to write the newsletter even after the book is published. Analyzing topical and pressing questions about AI using the foundational understanding we develop in the book has been one of the most rewarding parts of our work. We hope the book will be useful to you. Thank you for supporting us.
Preorder links
US: Amazon, Bookshop, Barnes and Noble. UK: Waterstones. Canada: Indigo. Germany: Kulturkaufhaus. The book is available to preorder internationally on Amazon.
0 notes
aicommandhub · 10 months ago
Text
AI Black Box Under Siege: Researchers Revolt for AI Transparency
Tumblr media
AI Transparency Battle Royale! Over 100 top AI researchers are throwing down the gauntlet, demanding AI transparency from tech giants like OpenAI and Meta. In a scorching open letter, they accuse these companies of hindering independent research with their restrictive rules. The argument? These supposedly "safe" protocols are actually stifling the very investigations needed to ensure these powerful AI systems are used responsibly!
This isn't just academic whining. Researchers are worried that strict protocols meant to catch bad actors are instead silencing vital research. Independent investigators are terrified of account bans or even lawsuits if they dare to stress-test AI models without a company's blessing. It's like trying to warn people about a dangerous product, only to get sued by the manufacturer for hurting their reputation.
“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter's authors write.
Who's Leading the Charge? A League of Extraordinary AI Watchdogs
This isn't just a bunch of disgruntled researchers. The open letter is backed by a who's-who of AI experts, journalists, policymakers, and even a former European Parliament member. Let's break down the heavy hitters:
- AI All-Stars: Think top minds from places like Stanford University, with names like Percy Liang gracing the letter. These are the folks who build the algorithms, not just theorize about them.
- Investigative Journalists: Pulitzer Prize winners like Julia Angwin, famous for exposing tech's hidden biases, are signing on. They're not afraid to dig deep and uncover the flaws these shiny new AI tools might be hiding.
- Policy Powerhouses: People like Renée DiResta from the Stanford Internet Observatory, who study the impact of AI on society, are demanding a seat at the table. They want to make sure these tools aren't used to manipulate elections or deepen inequalities.
- Global Perspective: Marietje Schaake, a former member of the European Parliament, adds international clout. This isn't just a US issue; the potential misuse of AI affects everyone and demands global solutions.
Break Free of Big Tech Control
This rag-tag team of code-slingers, truth-seekers, and policy wonks might not wear capes, but they're fighting to ensure that AI serves the public interest, not just corporate profits.
Tech Giants: Turning into the Evil Empires They Swore to Destroy?
The letter throws some serious shade, accusing AI companies of emulating the secrecy that plagued early social media platforms. The examples are downright chilling: from OpenAI crying "hacker!" over copyright checks to Midjourney threatening artists with legal action.
The Midjourney saga is a prime example of how paranoid AI companies are getting. Artist Reid Southen dared to test whether the image generator could be used to rip off copyrighted characters, and what did he get? Banned! Midjourney even went full-on drama queen in its terms of service, threatening lawsuits over copyright claims. Talk about an overreaction! This is just one example in a growing pattern – tech companies are clutching their precious algorithms, terrified of anyone peeking behind the curtain to find out what their AI creations are really up to.
“If You knowingly infringe someone else’s intellectual property, and that costs us money, we’re going to come find You and collect that money from You. We might also do other stuff, like try to get a court to make You pay our legal fees. Don’t do it.” — Midjourney's terms of service
The Battle for AI Transparency: Researchers vs. Corporate Control
This is more than just disgruntled academics versus big tech. It's a clash of ideologies with the future of AI at stake. On one side, researchers are demanding open access and safeguards to protect us from biased or harmful AI. On the other, tech giants are clinging to control, treating potential misuse like it's some far-fetched sci-fi dystopia instead of a very real risk that needs serious scrutiny. Read the full article
0 notes
aneddoticamagazinestuff · 6 years ago
Text
The End of Trust
New Post has been published on https://www.aneddoticamagazine.com/the-end-of-trust/
The End of Trust
EFF and McSweeney’s have teamed up to bring you The End of Trust (McSweeney’s 54). The first all-nonfiction McSweeney’s issue is a collection of essays and interviews focusing on issues related to technology, privacy, and surveillance.
The collection features writing by EFF’s team, including Executive Director Cindy Cohn, Education and Design Lead Soraya Okuda, Senior Investigative Researcher Dave Maass, Special Advisor Cory Doctorow, and board member Bruce Schneier.
Anthropologist Gabriella Coleman contemplates anonymity; Edward Snowden explains blockchain; journalist Julia Angwin and Pioneer Award-winning artist Trevor Paglen discuss the intersections of their work; Pioneer Award winner Malkia Cyril discusses the historical surveillance of black bodies; and Ken Montenegro and Hamid Khan of Stop LAPD Spying debate author and intelligence contractor Myke Cole on the question of whether there’s a way law enforcement can use surveillance responsibly.
The End of Trust is available to download and read right now under a Creative Commons BY-NC-ND license.
1 note · View note
antonio-velardo · 1 year ago
Text
Antonio Velardo shares: The OpenAI Coup Is Great for Microsoft. What Does It Mean for Us? by Julia Angwin
By Julia Angwin The OpenAI fracas most likely cements control of one of the most powerful and promising technologies on the planet under one of this country’s tech titans. Published: November 21, 2023 at 04:46PM from NYT Opinion https://ift.tt/K0jiUVM via IFTTT
Tumblr media
0 notes
fuojbe-beowgi · 1 year ago
Photo
Tumblr media
"The Gatekeepers of Knowledge Don’t Want Us to See What They Know" by Julia Angwin via NYT Opinion https://www.nytimes.com/2023/07/14/opinion/big-tech-european-union-journalism.html?partner=IFTTT
0 notes
mostlysignssomeportents · 10 months ago
Text
This day in history
Tumblr media
I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me on SUNDAY (Mar 24) with LAURA POITRAS in NYC, then Anaheim, and beyond!
Tumblr media
#20yrsago I just finished another novel! https://memex.craphound.com/2004/03/23/i-just-finished-another-novel/
#15yrsago New Zealand’s stupid copyright law dies https://arstechnica.com/tech-policy/2009/03/3-strikes-strikes-out-in-nz-as-government-yanks-law/
#10yrsago NSA hacked Huawei, totally penetrated its networks and systems, stole its sourcecode https://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-servers-seen-as-spy-peril.html
#10yrsago Business Software Alliance accused of pirating the photo they used in their snitch-on-pirates ad https://torrentfreak.com/bsa-pirates-busted-140322/
#5yrsago Video from the Radicalized launch with Julia Angwin at The Strand https://www.youtube.com/watch?v=FbdgdH8ksaM
#5yrsago More than 100,000 Europeans march against #Article13 https://netzpolitik.org/2019/weit-mehr-als-100-000-menschen-demonstrieren-in-vielen-deutschen-staedten-fuer-ein-offenes-netz/
#5yrsago Procedurally generated infinite CVS receipt https://codepen.io/garrettbear/pen/JzMmqg
#5yrsago British schoolchildren receive chemical burns from “toxic ash” on Ash Wednesday https://metro.co.uk/2019/03/08/children-end-hospital-burns-heads-toxic-ash-wednesday-ash-8868433/
#5yrsago DCCC introduces No-More-AOCs rule https://theintercept.com/2019/03/22/house-democratic-leadership-warns-it-will-cut-off-any-firms-who-challenge-incumbents/
#1yrago The "small nonprofit school" saved in the SVB bailout charges more than Harvard https://pluralistic.net/2023/03/23/small-nonprofit-school/#north-country-school
9 notes · View notes
studieesshow · 2 years ago
Text
If It’s Advertised to You Online, You Probably Shouldn’t Buy It. Here’s Why.
By Julia Angwin Julia Angwin on a study that shows that surveillance-based advertising is not only destroying democracy — it also pitches us lousy, overpriced goods. Published: April 6, 2023 at 05:00AM via NYT Opinion https://ift.tt/W3LRArK
0 notes
ravenkings · 8 months ago
Text
The reality is that A.I. models can often prepare a decent first draft. But I find that when I use A.I., I have to spend almost as much time correcting and revising its output as it would have taken me to do the work myself. And consider for a moment the possibility that perhaps A.I. isn’t going to get that much better anytime soon. After all, the A.I. companies are running out of new data on which to train their models, and they are running out of energy to fuel their power-hungry A.I. machines. Meanwhile, authors and news organizations (including The New York Times) are contesting the legality of having their data ingested into the A.I. models without their consent, which could end up forcing quality data to be withdrawn from the models. Given these constraints, it seems just as likely to me that generative A.I. could end up like the Roomba, the mediocre vacuum robot that does a passable job when you are home alone but not if you are expecting guests. Companies that can get by with Roomba-quality work will, of course, still try to replace workers. But in workplaces where quality matters — and where workforces such as screenwriters and nurses are unionized — A.I. may not make significant inroads.
–Julia Angwin, "Will A.I. Ever Live Up to Its Hype?" The New York Times, May 15, 2024
0 notes
cleoenfaserum · 2 years ago
Text
Watch "Banning TikTok Won't Keep Us Safe: Julia Angwin Critiques Bipartisan Attack on Chinese Firm" on YouTube
youtube
0 notes
antoniorichardsonblog · 2 years ago
Text
Watch "Banning TikTok Won't Keep Us Safe: Julia Angwin Critiques Bipartisan Attack on Chinese Firm" on YouTube
youtube
0 notes