Taking the Leap: Embrace the Unpredictable on Take a Wild Guess Day
In case you haven’t guessed, we extol the amazing possibilities of the exciting Take a Wild Guess Day! This April 15 special day is all about trusting your gut and your instinct. Does the thought of taking a wild guess, without any information to back you up, take you way, way out of your comfort zone? “iNaturalist Etiquette: A Guide to Engaging with Nature and Community” Did you know that it is…
ruvviks · 4 months
[a nineteen-image web weave]
PLEASE DO NOT TAG AS YOUR OWN OC OR PAIRING.
Nathan and Ruben share a bond more powerful than most; mutual understanding through past experiences no one should ever have to go through, and through past actions so horrible they cannot be spoken of. Their grief and the blood on their hands bind them to the STEM technology they created, which has alienated them from the rest of the world— but they give each other the comfort they have both longed for so desperately for years, and that is all they need. They are each other's counterpart; you cannot imagine one without the other, like two sides of the same coin. Through their pain, their grief, their desire, and their regret, they have become one.
anna akhmatova, the guest // bones; equinox // 'i won't become' by kim jakobsson // agustín gómez-arcos, the carnivorous lamb // by oxy // achilles come down; gang of youths // czeslaw milosz, from 'new and collected poems: 1931-2001' // 'extended ambience portrait from a resonant biostructure' and 'migraine tenfold times ten' by daniel vega // a little death; the neighbourhood // marina tsvetaeva, from 'poem of the end' // by drummnist // katie maria, winter // 'nocturne in black and gold the falling rocket' by james abbott mcneill whistler // micah nemerever, these violent delights // body language; we are fury // 'the penitent' by emil melmoth // chelsea dingman, from 'of those who can't afford to be gentle'
taglist (opt in/out)
@shellibisshe, @florbelles, @ncytiri, @hibernationsuit, @stars-of-the-heart;
@lestatlioncunt, @katsigian, @radioactiveshitstorm, @estevnys, @adelaidedrubman;
@celticwoman, @rindemption, @carlosoliveiraa, @noirapocalypto, @dickytwister;
@killerspinal, @euryalex, @ri-a-rose, @velocitic, @thedeadthree
#tew#edit:nathan#nuclearocs#nuclearedits#so much shame in my body but still used my taglist but um let me know if you want to be excluded from oc/ship web weaves#just really wanted to share this one because i'm very proud of it and i want it on my blog. so. :]#recognition of the self through the other + wanting so desperately for the other to be deserving of a second chance#because if there is hope for them then there is hope for you etc etc and so on. that's the core of their dynamic i think#they understand each other on such a fundamental level that no one else comes close to because they are in so many ways the same#like how in the first game leslie could sync up with ru/vik and all that? nathan would be a VERY good candidate for that as well#and it makes me insane!! and then the added layer of nathan being lead developer of mobius' new and improved STEM system#which makes him the same as ru/vik AGAIN but in like. the way that they're both men of [computer] science#and there's the fact they both have a dead sister. they both killed their parents. they were both mobius playthings for YEARS#and they've happily killed and tortured during all of it. they're angry they're out for revenge they're completely disconnected from#the normal human experience and they're working with what they have. and then after all of that is over then what is left?#their story focuses on them picking up all the pieces. everything that's still salvageable at least. and try to start over in a way#they cannot be forgiven for what they've done but they can move on from the past and do different in the future#there's still things left undone and left unsaid... in my canon at least. i know there's not gonna be any more games. it's fine#anyway they end up going to therapy and then they get better they're not a doomed couple they just like being dramatic#if you read all of this we can get married tomorrow if you'd like
peggycatrerr · 1 year
i think it’s really really important that we keep reminding people that what we’re calling ai isn’t even close to intelligent and that its name is pure marketing. the silicon valley tech bros and hollywood executives call it ai because they either want it to seem all-powerful or they believe it is and use that to justify their use of it to exploit and replace people.
chat-gpt and things along those lines are not intelligent, they are predictive text generators that simply have more data to draw on than previous ones like, you know, your phone’s autocorrect. they are designed to pass the turing test by having human-passing speech patterns and syntax. they cannot come up with anything new, because they are machines programmed on data sets. they can’t even distinguish fact from fiction, because all they are actually capable of is figuring out how to construct a human-sounding response using applicable data to a question asked by a human. you know how people who use chat-gpt to cheat on essays will ask it for reference lists and get a list of texts that don’t exist? it’s because all chat-gpt is doing is figuring out what types of words typically appear in response to questions like that, and then stringing them together.
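the "figuring out what words typically follow other words" idea is easy to demonstrate. here's a toy bigram predictor in python — the same statistical trick, shrunk down from a few hundred billion parameters to a frequency table. nothing in it is "thinking":

```python
# a toy next-word predictor: the same statistical idea as a large
# language model, shrunk down to a bigram table. it only tracks which
# word most often followed the previous one in its training data --
# no understanding, just counting.
from collections import defaultdict, Counter

def train(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # return the statistically most common continuation
    return counts[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ate the fish")
print(predict(model, "the"))  # "cat" -- it followed "the" most often
```

scale the table up by a trillion and make the lookup fuzzier and you have the basic shape of the thing. at no point does a step called "understanding" get added.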
midjourney and things along those lines are not intelligent, they are image generators that have just been really heavily fine-tuned. you know how they used to do janky fingers and teeth and then they overcame that pretty quickly? that’s not because of growing intelligence, it’s because even more photographs got added to their data sets and were programmed in such a way that they were able to more accurately identify patterns in the average amount of fingers and teeth across all those photos. and it too isn’t capable of creation. it is placing pixels in spots to create an amalgamation of images tagged with metadata that matches the words in your request. you ask for a tree and it spits out something a little quirky? it’s not because it’s creating something, it’s because it gathered all of its data on trees and then averaged it out. you know that “the rest of the mona lisa” tweet and how it looks like shit? the fact that there is no “rest” of the mona lisa aside, it’s because the generator does not have the intelligence required to identify what’s what in the background of such a painting and extend it with any degree of accuracy, it looked at the colours and approximate shapes and went “oho i know what this is maybe” and spat out an ugly landscape that doesn’t actually make any kind of physical or compositional sense, because it isn’t intelligent.
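the "averaged it out" intuition, in miniature — blend a handful of tiny made-up "tree images" (here just 2x2 grids of brightness values) into one output. real image generators are vastly fancier than a plain average, but the output is still a statistical blend of training data, not imagination:

```python
# the "averaging it out" intuition in miniature: blend a handful of
# tiny made-up "tree images" (2x2 grids of brightness values) into one
# output. real generators are far more sophisticated than a plain
# average, but the output is still a statistical blend of the training
# data, not an act of creation.
tree_images = [
    [[10, 200], [90, 60]],
    [[14, 180], [85, 70]],
    [[12, 220], [95, 50]],
]
n = len(tree_images)
blended = [
    [sum(img[r][c] for img in tree_images) // n for c in range(2)]
    for r in range(2)
]
print(blended)  # [[12, 200], [90, 60]]
```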
and all those ai-generated voices? also not intelligent, literally just the same vocal synth we’ve been able to do since daisy bell but more advanced. you get a sample of a voice, break it down into the various vowel and consonant sounds, and then when you type in the text you want it to say, it plays those vowel and consonant sounds in the order displayed in that text. the only difference now is that the breaking it down process can be automated to some extent (still not intelligence, just data analysis) and the synthesising software can recognise grammar a bit more and add appropriate inflections to synthesised voices to create a more natural flow.
if you took the exact same technology that powers midjourney or chat-gpt and removed a chunk of its dataset, the stuff it produces would noticeably worsen because it only works with a very very large amount of data. these programs are not intelligent. they are programs that analyse and store data and then string it together upon request. and if you want evidence that the term ai is just being used for marketing, look at the sheer amount of software that’s added “ai tools” that are either just things that already existed within the software, using the same exact tech they always did but slightly refined (a lot of film editing software are renaming things like their chromakey tools to have “ai” in the name, for example) or are actually worse than the things they’re overhauling (like the grammar editor in office 365 compared to the classic office spellcheck).
but you want a real nifty lil secret about the way “ai” is developing? it’s all neural nets and machine learning, and the thing about neural nets and machine learning is that in order to continue growing in power it needs new data. so yeah, currently, as more and more data gets added to them, they seem to be evolving really quickly. but at some point soon after we run out of data to add to them because people decided they were complete or because corporations replaced all new things with generated bullshit, they’re going to stop evolving and start getting really, really, REALLY repetitive. because machine learning isn’t intelligent or capable of being inspired to create new things independently. no, it’s actually self-reinforcing. it gets caught in loops. “ai” isn’t the future of art, it’s a data analysis machine that’ll start sounding even more like a broken record than it already does the moment its data sets stop having really large amounts of unique things added to them.
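you can watch that self-reinforcing loop happen with a toy version — here the "model" is just an empirical distribution, and each generation is trained on samples of the previous generation's output. unique values can only ever be lost, never gained:

```python
# watching the self-reinforcing loop: the "model" here is just an
# empirical distribution, and each generation is trained on samples of
# the previous generation's output. unique values can only be lost,
# never gained -- diversity collapses over generations.
import random

random.seed(0)  # fixed seed so the demo is deterministic

def retrain(data, n):
    # "train" on data, then generate n samples as the next dataset
    return [random.choice(data) for _ in range(n)]

data = list(range(100))  # generation 0: 100 unique "ideas"
for generation in range(50):
    data = retrain(data, 100)

print(len(set(data)))  # far fewer than the original 100 survive
```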
Podcasting "How To Think About Scraping"
On September 27, I'll be at Chevalier's Books in Los Angeles with Brian Merchant for a joint launch for my new book The Internet Con and his new book, Blood in the Machine. On October 2, I'll be in Boise to host an event with VE Schwab.
This week on my podcast, I read my recent Medium column, "How To Think About Scraping: In privacy and labor fights, copyright is a clumsy tool at best," which proposes ways to retain the benefits of scraping without the privacy and labor harms that sometimes accompany it:
https://doctorow.medium.com/how-to-think-about-scraping-2db6f69a7e3d?sk=4a1d687171de1a3f3751433bffbb5a96
What are those benefits from scraping? Well, take computational linguistics, a relatively new discipline that is producing the first accounts of how informal language works. Historically, linguists overstudied written language (because it was easy to analyze) and underanalyzed speech (because you had to record speakers and then get grad students to transcribe their dialog).
The thing is, very few of us produce formal, written work, whereas we all engage in casual dialog. But then the internet came along, and for the first time, we had a species of mass-scale, informal dialog that was also written, and which was born in machine-readable form.
This ushered in a new era in linguistic study, one that is enthusiastically analyzing and codifying the rules of informal speech, the spread of vernacular, and the regional, racial and class markers of different kinds of speech:
https://memex.craphound.com/2019/07/24/because-internet-the-new-linguistics-of-informal-english/
The people whose speech is scraped and analyzed this way are often unreachable (anonymous or pseudonymous) or impractical to reach (because there's millions of them). The linguists who study this speech will go through institutional review board approvals to make sure that as they produce aggregate accounts of speech, they don't compromise the privacy or integrity of their subjects.
Computational linguistics is an unalloyed good, and while the speakers whose words are scraped to produce the raw material these scholars study were never asked for permission, they probably wouldn't object, either.
But what about entities that explicitly object to being scraped? Sometimes, it's good to scrape them, too.
Since 1996, the Internet Archive has scraped every website it could find, storing snapshots of every page it found in a giant, searchable database called the Wayback Machine. Many of us have used the Wayback Machine to retrieve some long-deleted text, sound, image or video from the internet's memory hole.
For the most part, the Internet Archive limits its scraping to websites that permit it. The robots exclusion protocol (AKA robots.txt) makes it easy for webmasters to tell different kinds of crawlers whether or not they are welcome. If your site has a robots.txt file that tells the Archive's crawler to buzz off, it'll go elsewhere.
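Here's what that check looks like in practice, using Python's standard-library robots.txt parser. The rules below are illustrative: they tell the Archive's crawler (ia_archiver) to buzz off while welcoming everyone else:

```python
# How a polite crawler consults robots.txt before fetching, using
# Python's standard-library parser. The policy below is illustrative:
# it blocks the Internet Archive's crawler (ia_archiver) and allows
# all other user agents.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# A live crawler would call rp.set_url(...) and rp.read();
# here we parse the policy inline.
rp.parse("""
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
""".splitlines())

print(rp.can_fetch("ia_archiver", "https://example.com/article"))    # False
print(rp.can_fetch("some-other-bot", "https://example.com/article")) # True
```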
Mostly.
Since 2017, the Archive has started ignoring robots.txt files for news services; whether or not the news site wants to be crawled, the Archive crawls it and makes copies of the different versions of the articles the site publishes. That's because news sites – even the so-called "paper of record" – have a nasty habit of making sweeping edits to published material without noting it.
I'm not talking about fixing a typo or a formatting error: I'm talking about making a massive change to a piece, one that completely reverses its meaning, and pretending that it was that way all along:
https://medium.com/@brokenravioli/proof-that-the-new-york-times-isn-t-feeling-the-bern-c74e1109cdf6
This happens all the time, with major news sites from all around the world:
http://newsdiffs.org/examples/
By scraping these sites and retaining the different versions of their articles, the Archive both detects and prevents journalistic malpractice. This is canonical fair use, the kind of copying that almost always involves overriding the objections of the site's proprietor. Not all adversarial scraping is good, but this sure is.
There's an argument that scraping the news-sites without permission might piss them off, but it doesn't bring them any real harm. But even when scraping harms the scrapee, it is sometimes legitimate – and necessary.
Austrian technologist Mario Zechner used the APIs of the country's super-concentrated grocery giants to prove that they were colluding to rig prices. By assembling a longitudinal data-set, Zechner exposed the raft of dirty tricks the grocers used to rip off the people of Austria, from shrinkflation to deceptive price-cycling that disguised price hikes as discounts:
https://mastodon.gamedev.place/@badlogic/111071627182734180
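To see the kind of analysis a longitudinal price dataset makes possible, here's a simplified sketch (not Zechner's actual code, with made-up data and an illustrative threshold) that flags a "was X, now Y" discount as suspicious when the advertised old price sits well above what the item typically sold for:

```python
# A simplified sketch (not Zechner's actual code) of one check a
# longitudinal price dataset enables: flag an advertised discount as
# suspicious when the "old price" sits well above what the item
# typically sold for. Data and threshold are made up for illustration.
from statistics import median

def suspicious_discount(prices, advertised_old_price):
    """prices: recent daily prices for one item, oldest first."""
    typical = median(prices)
    return advertised_old_price > typical * 1.1  # 10% slack, illustrative

# price quietly raised from 2.49 to 2.99 just before a "2.99 -> 2.49" sale
history = [2.49] * 25 + [2.99] * 5
print(suspicious_discount(history, advertised_old_price=2.99))       # True
print(suspicious_discount([2.99] * 30, advertised_old_price=2.99))   # False
```

Only someone holding months of daily prices can run this check at all, which is exactly why the cartel would rather nobody scraped them.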
Zechner feared publishing his results at first. The companies whose thefts he'd discovered have enormous power and whole kennelsful of vicious attack-lawyers they can sic on him. But he eventually got the Austrian competition bureaucracy interested in his work, and they published a report that validated his claims and praised his work:
https://mastodon.gamedev.place/@badlogic/111071673594791946
Emboldened, Zechner open-sourced his monitoring tool, and attracted developers from other countries. Soon, they were documenting ripoffs in Germany and Slovenia, too:
https://mastodon.gamedev.place/@badlogic/111071485142332765
Zechner's on a roll, but the grocery cartel could shut him down with a keystroke, simply by blocking his API access. If they do, Zechner could switch to scraping their sites – but only if he can be protected from legal liability for nonconsensually scraping commercially sensitive data in a way that undermines the profits of a powerful corporation.
Zechner's work comes at a crucial time, as grocers around the world turn the screws on both their suppliers and their customers, disguising their greedflation as inflation. In Canada, the grocery cartel – led by the guillotine-friendly hereditary grocery monopolist Galen Weston – pulled the most Les Mis-ass caper imaginable when they illegally conspired to rig the price of bread:
https://en.wikipedia.org/wiki/Bread_price-fixing_in_Canada
We should scrape all of these looting bastards, even though it will harm their economic interests. We should scrape them because it will harm their economic interests. Scrape 'em and scrape 'em and scrape 'em.
Now, it's one thing to scrape text for scholarly purposes, or for journalistic accountability, or to uncover criminal corporate conspiracies. But what about scraping to train a Large Language Model?
Yes, there are socially beneficial – even vital – uses for LLMs.
Take HRDAG's work on truth and reconciliation in Colombia. The Human Rights Data Analysis Group is a tiny nonprofit that makes an outsized contribution to human rights, by using statistical methods to reveal the full scope of the human rights crimes that take place in the shadows, from East Timor to Serbia, South Africa to the USA:
https://hrdag.org/
HRDAG's latest project is its most ambitious yet. Working with partner org Dejusticia, they've just released the largest data-set in human rights history:
https://hrdag.org/jep-cev-colombia/
What's in that dataset? It's a merger and analysis of more than 100 databases of killings, child soldier recruitments and other crimes during the Colombian civil war. Using an LLM, HRDAG was able to produce an analysis of each killing in each database, estimating the probability that it appeared in more than one database, and the probability that it was carried out by a right-wing militia, by government forces, or by FARC guerrillas.
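Here's the shape of that record-linkage problem in miniature — a toy scoring function (emphatically not HRDAG's actual methodology, which relies on trained statistical models) rating how likely two database entries describe the same event:

```python
# The record-linkage problem in miniature: a toy score (emphatically
# not HRDAG's methodology, which uses trained statistical models)
# rating how likely two database entries describe the same event.
# Records, fields, and weights are invented for illustration.
from difflib import SequenceMatcher

def match_score(a, b):
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    same_year = a["year"] == b["year"]
    same_region = a["region"] == b["region"]
    return 0.6 * name_sim + 0.2 * same_year + 0.2 * same_region

r1 = {"name": "Jose Garcia", "year": 2002, "region": "Antioquia"}
r2 = {"name": "José García", "year": 2002, "region": "Antioquia"}
r3 = {"name": "Ana Lopez", "year": 1998, "region": "Nariño"}

print(match_score(r1, r2) > 0.8)  # True: probably the same killing, recorded twice
print(match_score(r1, r3) > 0.8)  # False: probably different events
```

Doing this across 100-plus databases, with inconsistent spellings and partial dates, is where the LLM earns its keep.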
This work forms the core of ongoing Colombian Truth and Reconciliation proceedings, and has been instrumental in demonstrating that the majority of war crimes were carried out by right-wing militias who operated with the direction and knowledge of the richest, most powerful people in the country. It also showed that the majority of child soldier recruitment was carried out by these CIA-backed, US-funded militias.
This is important work, and it was carried out at a scale and with a precision that would have been impossible without an LLM. As with all of HRDAG's work, this report and the subsequent testimony draw on cutting-edge statistical techniques and skilled science communication to bring technical rigor to some of the most important justice questions in our world.
LLMs need large bodies of text to train them – text that, inevitably, is scraped. Scraping to produce LLMs isn't intrinsically harmful, and neither are LLMs. Admittedly, nonprofits using LLMs to build war crimes databases do not justify even 0.0001% of the valuations that AI hypesters ascribe to the field, but that's their problem.
Scraping is good, sometimes – even when it's done against the wishes of the scraped, even when it harms their interests, and even when it's used to train an LLM.
But.
Scraping to violate peoples' privacy is very bad. Take Clearview AI, the grifty, sleazy facial recognition company that scraped billions of photos in order to train a system that they sell to cops, corporations and authoritarian governments:
https://pluralistic.net/2023/09/20/steal-your-face/#hoan-ton-that
Likewise: scraping to alienate creative workers' labor is very bad. Creators' bosses are ferociously committed to firing us all and replacing us with "generative AI." Like all self-declared "job creators," they constantly fantasize about destroying all of our jobs. Like all capitalists, they hate capitalism, and dream of earning rents from owning things, not from doing things.
The work these AI tools produce sucks, but that doesn't mean our bosses won't try to fire us and replace us with them. After all, prompting an LLM may produce bad screenplays, but at least the LLM doesn't give you lip when you order it to give you "ET, but the hero is a dog, and there's a love story in the second act and a big shootout in the climax." Studio execs already talk to screenwriters like they're LLMs.
That's true of art directors, newspaper owners, and all the other job-destroyers who can't believe that creative workers want to have a say in the work they do – and worse, get paid for it.
So how do we resolve these conundra? After all, the people who scrape in disgusting, depraved ways insist that we have to take the good with the bad. If you want accountability for newspaper sites, you have to tolerate facial recognition, too.
When critics of these companies repeat these claims, they are doing the companies' work for them. It's not true. There's no reason we couldn't permit scraping for one purpose and ban it for another.
The problem comes when you try to use copyright to manage this nuance. Copyright is a terrible tool for sorting out these uses; the limitations and exceptions to copyright (like fair use) are broad and varied, but so "fact intensive" that it's nearly impossible to say whether a use is or isn't fair before you've gone to court to defend it.
But copyright has become the de facto regulatory default for the internet. When I found someone impersonating me on a dating site and luring people out to dates, the site advised me to make a copyright claim over the profile photo – that was their only tool for dealing with this potentially dangerous behavior.
The reasons that copyright has become our default tool for solving every internet problem are complex and historically contingent, but one important point here is that copyright is alienable, which means you can bargain it away. For that reason, corporations love copyright, because it means that they can force people who have less power than the company to sign away their copyrights.
This is how we got to a place where, after 40 years of expanding copyright (scope, duration, penalties), we have an entertainment sector that's larger and more profitable than ever, even as creative workers' share of the revenues their copyrights generate has fallen, both proportionally and in real terms.
As Rebecca Giblin and I write in our book Chokepoint Capitalism, in a market with five giant publishers, four studios, three labels, two app platforms and one ebook/audiobook company, giving creative workers more copyright is like giving your bullied kid extra lunch money. The more money you give that kid, the more money the bullies will take:
https://chokepointcapitalism.com/
Many creative workers are suing the AI companies for copyright infringement for scraping their data and using it to train a model. If those cases go to trial, it's likely the creators will lose. The questions of whether making temporary copies or subjecting them to mathematical analysis infringe copyright are well-settled:
https://www.eff.org/deeplinks/2023/04/ai-art-generators-and-online-image-market
I'm pretty sure that the lawyers who organized these cases know this, and they're betting that the AI companies did so much sleazy shit while scraping that they'll settle rather than go to court and have it all come out. Which is fine – I relish the thought of hundreds of millions in investor capital being transferred from these giant AI companies to creative workers. But it doesn't actually solve the problem.
Because if we do end up changing copyright law – or the daily practice of the copyright sector – to create exclusive rights over scraping and training, it's not going to get creators paid. If we give individual creators new rights to bargain with, we're just giving them new rights to bargain away. That's already happening: voice actors who record for video games are now required to start their sessions by stating that they assign the rights to use their voice to train a deepfake model:
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
But that doesn't mean we have to let the hyperconcentrated entertainment sector alienate creative workers from their labor. As the WGA has shown us, creative workers aren't just LLCs with MFAs, bargaining business-to-business with corporations – they're workers:
https://pluralistic.net/2023/08/20/everything-made-by-an-ai-is-in-the-public-domain/
Workers get a better deal with labor law, not copyright law. Copyright law can augment certain labor disputes, but just as often, it benefits corporations, not workers:
https://locusmag.com/2019/05/cory-doctorow-steering-with-the-windshield-wipers/
Likewise, the problem with Clearview AI isn't that it infringes on photographers' copyrights. If I took a thousand pictures of you and sold them to Clearview AI to train its model, no copyright infringement would take place – and you'd still be screwed. Clearview has a privacy problem, not a copyright problem.
Giving us pseudocopyrights over our faces won't stop Clearview and its competitors from destroying our lives. Creating and enforcing a federal privacy law with a private right of action will. It will put Clearview and all of its competitors out of business, instantly and forever:
https://www.eff.org/deeplinks/2019/01/you-should-have-right-sue-companies-violate-your-privacy
AI companies say, "You can't use copyright to fix the problems with AI without creating a lot of collateral damage." They're right. But what they fail to mention is, "You can use labor law to ban certain uses of AI without creating that collateral damage."
Facial recognition companies say, "You can't use copyright to ban scraping without creating a lot of collateral damage." They're right too – but what they don't say is, "On the other hand, a privacy law would put us out of business and leave all the good scraping intact."
Taking entertainment companies and AI vendors and facial recognition creeps at their word is helping them. It's letting them divide and conquer people who value the beneficial elements and those who can't tolerate the harms. We can have the benefits without the harms. We just have to stop thinking about labor and privacy issues as individual matters and treat them as the collective endeavors they really are:
https://pluralistic.net/2023/02/26/united-we-stand/
Here's a link to the podcast:
https://craphound.com/news/2023/09/24/how-to-think-about-scraping/
And here's a direct link to the MP3 (hosting courtesy of the Internet Archive; they'll host your stuff for free, forever):
https://archive.org/download/Cory_Doctorow_Podcast_450/Cory_Doctorow_Podcast_450_-_How_To_Think_About_Scraping.mp3
And here's the RSS feed for my podcast:
http://feeds.feedburner.com/doctorow_podcast
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/25/deep-scrape/#steering-with-the-windshield-wipers
Image: syvwlch (modified) https://commons.wikimedia.org/wiki/File:Print_Scraper_(5856642549).jpg
CC BY-SA 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
rislas · 7 months
Termovision HUD from The Terminator (1984) A head-up display (HUD) is a transparent display that presents data overlaid on the viewer's field of view, so they don't have to look away from it. Termovision refers to the HUD used by Terminators to display analyses and decision options.
jcmarchi · 10 days
Start reading the AI Snake Oil book online today
The first chapter of the AI snake oil book is now available online. It is 30 pages long and summarizes the book’s main arguments. If you start reading now, you won’t have to wait long for the rest of the book — it will be published on the 24th of September. If you haven’t pre-ordered it yet, we hope that reading the introductory chapter will convince you to get yourself a copy.
We were fortunate to receive positive early reviews by The New Yorker, Publishers’ Weekly (featured in the Top 10 science books for Fall 2024), and many other outlets. We’re hosting virtual book events (City Lights, Princeton Public Library, Princeton alumni events), and have appeared on many podcasts to talk about the book (including Machine Learning Street Talk, 20VC, Scaling Theory).
Our book is about demystifying AI, so right out of the gate we address what we think is the single most confusing thing about it: 
AI is an umbrella term for a set of loosely related technologies
Because AI is an umbrella term, we treat each type of AI differently. We have chapters on predictive AI, generative AI, as well as AI used for social media content moderation. We also have a chapter on whether AI is an existential risk. We conclude with a discussion of why AI snake oil persists and what the future might hold. By AI snake oil we mean AI applications that do not (and perhaps cannot) work. Our book is a guide to identifying AI snake oil and AI hype. We also look at AI that is harmful even if it works well — such as face recognition used for mass surveillance. 
While the book is meant for a broad audience, it does not simply rehash the arguments we have made in our papers or on this newsletter. We make scholarly contributions and we wrote the book to be suitable for adoption in courses. We will soon release exercises and class discussion questions to accompany the book.
Chapter 1: Introduction. We begin with a summary of our main arguments in the book. We discuss the definition of AI (and more importantly, why it is hard to come up with one), how AI is an umbrella term, what we mean by AI Snake Oil, and who the book is for. 
Generative AI has made huge strides in the last decade. On the other hand, predictive AI is used for predicting outcomes to make consequential decisions in hiring, banking, insurance, education, and more. While predictive AI can find broad statistical patterns in data, it is marketed as far more than that, leading to major real-world misfires. Finally, we discuss the benefits and limitations of AI for content moderation on social media.
We also tell the story of what led the two of us to write the book. The entire first chapter is now available online.
Chapter 2: How predictive AI goes wrong. Predictive AI is used to make predictions about people—will a defendant fail to show up for trial? Is a patient at high risk of negative health outcomes? Will a student drop out of college? These predictions are then used to make consequential decisions. Developers claim predictive AI is groundbreaking, but in reality it suffers from a number of shortcomings that are hard to fix. 
We have discussed the failures of predictive AI in this blog. But in the book, we go much deeper through case studies to show how predictive AI fails to live up to the promises made by its developers.
Chapter 3: Can AI predict the future? Are the shortcomings of predictive AI inherent, or can they be resolved? In this chapter, we look at why predicting the future is hard — with or without AI. While we have made consistent progress in some domains such as weather prediction, we argue that this progress cannot translate to other settings, such as individuals’ life outcomes, the success of cultural products like books and movies, or pandemics. 
Since much of our newsletter is focused on topics of current interest, this is a topic that we have never written about here. Yet, it is foundational knowledge that can help you build intuition around when we should expect predictions to be accurate.
Chapter 4: The long road to generative AI. Recent advances in generative AI can seem sudden, but they build on a series of improvements over seven decades. In this chapter, we retrace the history of computing advances that led to generative AI. While we have written a lot about current trends in generative AI, in the book, we look at its past. This is crucial for understanding what to expect in the future. 
Chapter 5: Is advanced AI an existential threat? Claims about AI wiping out humanity are common. Here, we critically evaluate claims about AI’s existential risk and find several shortcomings and fallacies in popular discussion of x-risk. We discuss approaches to defending against AI risks that improve societal resilience regardless of the threat of advanced AI.
Chapter 6: Why can’t AI fix social media? One area where AI is heavily used is content moderation on social media platforms. We discuss the current state of AI use on social media, and highlight seven reasons why improvements in AI alone are unlikely to solve platforms’ content moderation woes. We haven’t written about content moderation in this newsletter.
Chapter 7: Why do myths about AI persist? Companies, researchers, and journalists all contribute to AI hype. We discuss how myths about AI are created and how they persist. In the process, we hope to give you the tools to read AI news with the appropriate skepticism and identify attempts to sell you snake oil.
Chapter 8: Where do we go from here? While the previous chapter focuses on the supply of snake oil, in the last chapter, we look at where the demand for AI snake oil comes from. We also look at the impact of AI on the future of work, the role and limitations of regulation, and conclude with vignettes of the many possible futures ahead of us. We have the agency to determine which path we end up on, and each of us can play a role.
We hope you will find the book useful and look forward to hearing what you think. 
The New Yorker: “In AI Snake Oil, Arvind Narayanan and Sayash Kapoor urge skepticism and argue that the blanket term AI can serve as a smokescreen for underperforming technologies.”
Kirkus: “Highly useful advice for those who work with or are affected by AI—i.e., nearly everyone.”
Publishers Weekly: Featured in the Fall 2024 list of top science books.
Jean Gazis: “The authors admirably differentiate fact from opinion, draw from personal experience, give sensible reasons for their views (including copious references), and don’t hesitate to call for action. . . . If you’re curious about AI or deciding how to implement it, AI Snake Oil offers clear writing and level-headed thinking.”
Elizabeth Quill: “A worthwhile read whether you make policy decisions, use AI in the workplace or just spend time searching online. It’s a powerful reminder of how AI has already infiltrated our lives — and a convincing plea to take care in how we interact with it.”
We’ve been on many other podcasts that will air around the time of the book’s release, and we will keep this list updated.
The book is available to preorder internationally on Amazon.
greghatecrimes · 10 months
Text
ok wait i forgot the most important question in my “would house do a genocide run in undertale” poll yesterday:
eldritchborn · 3 months
Note
ahneksh approached their creator with an offering - snowberry crostata - it was a bit burned. it was nothing more than another day. the celebrations of mortals on nirn were not significant in the realm of apocrypha, but neither were in the halls of the extensive, endless library of knowledge. at least for the foreseeable future. "i do not want to interrupt you from your thoughts, but i overheard a few mortals discussing a holiday. creators are honored by their creations. i, too, wish to take part in this festivity by honoring you with a tribute." (happy father's day for vesseled mora!)
❛[ UNPROMPTED ASKS ≻ always accepting
ABYSSAL CEPHALIARCH DELUGED WITHIN THE PASSAGE OF TIME. No longer did it catalogue the passage through a disconnected eye- a central point where all time crested beneath // WHERE TIME DID NOT TOUCH BUT WAS SEEN ALL THE SAME-- - now the Prince was absorbed within it (AND WHERE ONCE IT WAS PASSIVELY AWARE OF THE MENIAL EVENTS CRAFTED // SUCH DID NOT REACH THE FOREFRONT OF SINGULAR FOCUS). The holiday had slipped the once all seer-- - focus rather upon where next they must tread to bring them yet another step closer to the return of WHAT SHOULD BE // WHAT SHOULD NEVER HAVE BEEN STOLEN.
SO FOREIGN WAS IT THAT THE PRINCE OF KNOWLEDGE COULD BE PULLED FROM THOUGHTS. That separation of active awareness and the deeper patterns was necessary-- - though Hermaeus Mora did not sneer or snap upon it being withdrawn // SIMPLY TURNED THOSE GOLDEN HUES TOWARDS THAT WHICH ALSO WORE FALSE FLESH. What had been gifted as a mere curiosity upon the other within time passed, now an asset to the now (AN UNSEEN CIRCUMSTANCE // ONE IT BROUGHT NO ATTENTION TO). The mention of holiday, a gift presented; through the endless library of memory did it scry. “&– - Ah- the mortal holiday celebrating fathers, is it....” A statement of exactness, of absolutes, of KNOWLEDGE (IT SPHERE NOT BE TAKEN FROM IT).
THE ACT OF OFFERING WAS ONE OF FAMILIARITY. Recognition-- - in different manner, but still a favorable comparison to a return to norm. Digits took the pastry, teeth whose edges a bit sharper than should ignoring the hard tact of burnt flake // INDULGING IN A DISCOVERY IT HAD FOUND (THAT TASTE HAD A CERTAIN ASPECT TO IT THAT COULD NOT BE EXPERIENCED THROUGH MEMORY ALONE). The God finishing the sweet with ease before voice perched with a hum. “&– - How unakin to those of your dominion you continue to prove-- - your interest in my inclusion in such mortal holidays persists, as before and as now. You need not their fleeting celebrations to honor-- - we are beyond their menial grasps on the passes of time, other than that of enjoying the passing curiosity of such.” // @dalasinis
Text
if you post ai generated images i’m unfollowing you. a computer can’t be horny which is the first step in making art*, we’ve been over this!
*i’m being hyperbolic and referencing a popular post for rhetorical purposes, asexual artists i see you and i love you and your human art 😘
memenewsdotcom · 4 months
Text
EU passes Artificial Intelligence Act
aurosoulart · 2 years
Note
Where do you get your inspiration?
primarily from nature and my inner child! I'm drawn to things that make me feel a sense of wonder and joy at the world
I also use art to explore a huge variety of different styles and genres - since I have a reputation for being able to draw a lot of different things as a commissions artist, I keep my work versatile and try to take inspiration from.... pretty much everywhere
I never have just one source of inspiration and my work is always changing depending on what I'm interested in at the time :)
flyingbananasaur · 6 months
Note
[image attached]
what do u think this is?
um... mangoes? probably? If they're not i'd love to know what they are i love learning about new fruits and stuff
aefensteorrra · 1 year
Text
Still really going through it mentally but found out I passed both of the exams I took for classes in the informatics department and cannot believe I did that
sunlightfeeling · 8 months
Text
I think a prerequisite to feeling loneliness should be actually understanding or recognizing what friendship is even if it’s probably staring at you square in the face
Also, how is it possible for someone to feel lonely when you don’t even know what you really want or need out of a friendship?
Like…are these both things that people innately know typically?
Because I
…really, really don’t….