#joseph weizenbaum
Explore tagged Tumblr posts
computerwives · 12 days ago
Text
Tumblr media
light reading - I have only just started the introduction but I think they should teach this in schools. Like to high schoolers.
4 notes · View notes
eagle-writes · 2 years ago
Text
Tumblr media
"I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people" - Joseph Weizenbaum, inventor of ELIZA, talking half a century ago
Tumblr media
7 notes · View notes
Text
I have many thoughts on the above - and it is very close to my opinion: Fundamentally, if you don't understand how something works, you will waste vast amounts of time and resources chasing fantasies of what you THINK it can do - and waste your life in the process
AI is a marketing term
Tumblr media
It is remarkable how many consider AI today to be some artificial wunderkind. Progress in the field is highly impressive, no doubt about it, but it needs perspective. Modern AI discourse has the misfortune of existing at a time when information quality matters less than gaming social media algorithms or SEO. As someone who works in marketing, I can see why big tech companies say "AI" instead of "machine learning" - you're far more likely to think of sci-fi, futuristic technology and progress when you hear "artificial intelligence" than the drabber moniker of "machine learning". To many ordinary people it conjures images of a grand future of robots and flying cars. We've had these sci-fi images blasted into our brains through media; tropes from the genre pop up so regularly in other media and entertainment that you would have to be very isolated not to encounter them. The closer one is to the origins of the technology, the more aware one is of its limitations. Back in 2015, I had heard all about things such as Alexa and Siri and what they could supposedly do. I had a hard science background in geology and didn't fully believe the hype, but the technology did seem incredibly magical, with an inscrutable quality to it.
AI hype is not new - but the original creators were academics, and arguably more introspective than their Silicon Valley-funded counterparts today
Tumblr media
One of the main reasons I started looking at, and creating, software in the first place was that I picked up a copy of Joseph Weizenbaum's Computer Power and Human Reason purely by chance at a collectors' bookshop in Ireland.
Computer Power and Human Reason - Link
Weizenbaum, for reference, created THE original chatbot, ELIZA, in 1966, and is considered one of the most influential "AI" researchers of the 20th century. ELIZA's most famous script, DOCTOR, was designed to mimic a "Rogerian" psychotherapist - i.e. one who reflects a patient's statements back as questions, drawing out more information and making the patient examine their own thinking. What Weizenbaum wrote about using the technology, however, was what I found most enlightening: "Most other programs could not vividly demonstrate the information-processing power of a computer to visitors who did not already have some specialized knowledge, say, of some branch of mathematics." "What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people"
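The reflect-and-ask trick is simple enough to sketch in a few lines. This is not Weizenbaum's original code (ELIZA used ranked keywords and decomposition rules, written in MAD-SLIP), just a minimal Python illustration of the Rogerian mirroring described above:

```python
import re

# Pronoun swaps that turn a user's statement into a reflected question.
# A tiny subset for illustration; ELIZA's real scripts were far richer.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "i'm": "you're", "mine": "yours",
}

def reflect(statement: str) -> str:
    """Mimic a Rogerian therapist by mirroring the user's own words back."""
    words = re.findall(r"[\w']+", statement.lower())
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job"))
# -> Why do you say you are unhappy with your job?
```

Even a toy like this can feel eerily attentive in conversation, which is exactly the effect Weizenbaum found so alarming.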
What I miss about these 20th-century academics is that they lived at a time when running a computer program meant literally punching holes into cards and waiting an entire day for something incredibly mundane to happen. They were far more grounded than today's software engineers, working atop the highly abstracted operating systems we take for granted. Most importantly, Weizenbaum, Zuse and others understood the human condition as well as the technical.
Nothing has changed.
Tumblr media
ChatGPT is the closest thing we've seen to an ELIZA-style program this century. It's so easy to use and understand, and seems almost human, but it has fundamental limitations that many just can't see because they are unaware of how it works. You even see industry and academic professionals trying to use the AI in what I can only call misguided endeavours to leverage what they THINK it can do. I've had conversations with software engineers in industry who have been doing crazy things such as feeding NGO data into ChatGPT and are only now coming to the realization that OpenAI has terrible data security, and that the answers given by these systems still have double-digit inaccuracy rates - even after fine-tuning the models. "A number of practicing psychiatrists seriously believed the DOCTOR computer program could grow into a nearly completely automatic form of psychotherapy… What must a psychiatrist who makes such a suggestion think he is doing while treating a patient, that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?" - Joseph Weizenbaum
Learn the basics to learn everything
Tumblr media
In summary - if you want to get behind the hype, read up. Even a basic understanding of these tools is better than none at all. There are plenty of accessible books on AI that cut through the money-making grift and the hype. Most important of all - don't forget to get outside and live your life. It'll flash by before you know it. "the teacher of computer science is himself subject to the enormous temptation to be arrogant because his knowledge is somehow "harder" than that of his humanist colleagues. But the hardness of the knowledge available to him is of no advantage at all. His knowledge is merely less ambiguous and therefore, like his computer languages, less expressive of reality... If the teacher, if anyone, is to be an example of a whole person to others, he must first strive to be a whole person. Without the courage to confront one's inner as well as one's outer worlds, such wholeness is impossible to achieve." - Joseph Weizenbaum
I went to a library book sale this weekend and I found a very old book called “Electronic Life: How to Think About Computers,” which was published in I think 1975? I’ve been reading it kind of like how I would read a historical document, and it’s lowkey fascinating
32K notes · View notes
august-chun · 2 years ago
Text
I first learned about Weizenbaum in one of my undergrad media rhetoric classes. I’ve thought of him often in these times.
0 notes
rbolick · 2 years ago
Text
Books On Books Collection - Margo Klass
Takeover (2023), Margo Klass. Cut-out vintage poster letters and numbers mounted on a horn-book-shaped tray. ChatGPT symbol covered by a glass magnifying dome. H290 x W170 x D35 mm. Unique edition. Acquired from the artist, 26 June 2023. Photos: Books On Books Collection. Displayed with permission of the artist. In her response to the Northwoods Books Arts Guild challenge organized by…
Tumblr media
View On WordPress
1 note · View note
5poder · 2 years ago
Text
A man's suicide after talking with an AI chatbot
Consternation in Belgium over the suicide of a man after talking with an AI chatbot. He was 30 years old, had two children and was a researcher in the health field. The Belgian government stressed the need for content publishers not to evade their responsibility. The European Union is preparing an "AI Act" (Artificial Intelligence). Why chatbots can manipulate people emotionally. A Belgian man…
Tumblr media
View On WordPress
0 notes
krjpalmer · 2 months ago
Text
Tumblr media
ROM February 1978
The Valentine's sampler on the cover this month (including a Southwest Technical Products Corp. terminal) led into a software-focused issue (including Joseph Weizenbaum's skeptical take on artificial intelligence, illustrated with drawings of C-3PO).
31 notes · View notes
kata4a · 2 months ago
Note
hello, i just wanted to say thank you for recommending computer power and human reason by joseph weizenbaum. i have only read a little bit so far, but i am really enjoying it. it reminds me of another great book i read earlier this year, four arguments for the elimination of television, by jerry mander.
I'm glad you're enjoying it! though I should confess that I have not actually read more than the excerpts I posted, and I only read those because I wanted a primary source for what I remembered about how the eliza program was received
3 notes · View notes
scifigeneration · 1 year ago
Text
AI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024
by Anjana Susarla, Professor of Information Systems at Michigan State University; Casey Fiesler, Associate Professor of Information Science at the University of Colorado Boulder; and Kentaro Toyama, Professor of Community Information at the University of Michigan
Tumblr media
2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.
We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.
Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder
2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.
One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good.
However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.
So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.
youtube
Kentaro Toyama, Professor of Community Information, University of Michigan
In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, "In from three to eight years we will have a machine with the general intelligence of an average human being." With the singularity - the moment artificial intelligence matches and begins to exceed human intelligence - not quite here yet, it's safe to say that Minsky was off by at least a factor of 10. It's perilous to make predictions about AI.
Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.
The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning – what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.
Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.
Speaking of problems, the very people sounding the loudest alarms about AI – like Elon Musk and Sam Altman – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.
Anjana Susarla, Professor of Information Systems, Michigan State University
In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models are trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.
Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents – a world that society is not necessarily prepared for.
Tumblr media
These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.
The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.
The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.
A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.
16 notes · View notes
spaceintruderdetector · 1 year ago
Text
Tumblr media
With this book, published in 1991, Zerzan and Carnes assembled some of the most important critical appraisals of technology written up to that date. Lewis Mumford, Jacques Ellul, Langdon Winner, Joseph Weizenbaum, Carolyn Merchant, Morris Berman, George Bradford, Jerry Mander, Stanley Diamond, Russell Means and many others offer a searing indictment of technology and its catastrophic social effects. Almost all of these essays or excerpts have appeared elsewhere, but this single text collects many of the essential topics and critiques levied by some of the greatest critics of technoculture.
Questioning Technology: A Critical Anthology : John Zerzan : Free Download, Borrow, and Streaming : Internet Archive
15 notes · View notes
zagrebinaa · 1 year ago
Note
What is your opinion on artificial intelligence and its impact on society?
I do agree that AI will bring a huge redistribution of power in history. It's definitely a different wave of technology because of how it unleashes new powers and transforms existing ones.
One book I came across that offers a lot of food for thought about AI was written in 1976: "Computer Power and Human Reason" by Joseph Weizenbaum. The book makes the case that even where AI becomes possible, we should never allow computers to make important decisions, because any technology will always lack human qualities such as compassion and wisdom. He draws a great distinction between deciding and choosing: deciding is a computational activity, something that can be programmed, while choosing is always done by human beings using judgment, not calculation.
7 notes · View notes
azspot · 2 years ago
Quote
I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed.
Joseph Weizenbaum
10 notes · View notes
theamandakjames · 2 years ago
Text
Tumblr media
🕰️✨ Did you know that the very first virtual assistant, "Eliza," was born in the 1960s? 🤖 Created by MIT's Joseph Weizenbaum, Eliza chatted through text and even mimicked therapy sessions. While basic compared to today's AI pals, Eliza set the stage for the advanced virtual helpers we can't live without now! 🚀🌟 #AIHistory #ElizaTheTrailblazer #theamandakjames
2 notes · View notes
ammg-old2 · 2 years ago
Text
It was a simpler time. A friend introduced us, pulling up a static yellow webpage using a shaky dial-up modem. A man stood forth, dressed in a dapper black pinstriped suit with a red-accented tie. He held one hand out, as if carrying an imaginary waiter’s tray. He looked regal and confident and eminently at my service. “Have a Question?” he beckoned. “Just type it in and click Ask!” And ask, I did. Over and over.
With his steady hand, Jeeves helped me make sense of the tangled mess of the early, pre-Google internet. He wasn’t perfect—plenty of context got lost between my inquiries and his responses. Still, my 11-year-old brain always delighted in the idea of a well-coiffed man chauffeuring me down the information superhighway. But things changed. Google arrived, with its clean design and almost magic ability to deliver exactly the answers I wanted. Jeeves and I grew apart. Eventually, in 2006, Ask Jeeves disappeared from the internet altogether and was replaced with the more generic Ask.com.
Many years later, it seems I owe Jeeves an apology: He had the right idea all along. Thanks to advances in artificial intelligence and the stunning popularity of generative-text tools such as ChatGPT, today’s search-engine giants are making huge bets on AI search chatbots. In February, Microsoft revealed its Bing Chatbot, which has thrilled and frightened early users for its ability to scour the internet and answer questions (not always correctly) with convincingly human-sounding language. The same week, Google demoed Bard, the company’s forthcoming attempt at an AI-powered chat-search product. But for all the hype, when I stare at these new chatbots, I can’t help but see the faint reflection of my former besuited internet manservant. In a sense, Bing and Bard are finishing what Ask Jeeves started. What people want when they ask a question is for an all-knowing, machine-powered guide to confidently present them with the right answer in plain language, just as a reliable friend would.
With this in mind, I decided to go back to the source. More than a decade after parting ways, I found myself on the phone with one of the men behind the machine, getting as close to Asking Jeeves as is humanly possible. These days, Garrett Gruener, Ask Jeeves’s co-creator, is a venture capitalist in the Bay Area. He and his former business partner David Warthen eventually sold Ask Jeeves to Barry Diller and IAC for just under $2 billion. Still, I wondered if Gruener had been unsettled by Jeeves’s demise. Did he, like me, see the new chatbots as the final form of his original idea? Did he feel vindicated or haunted by the fact that his creation may have simply been born far too early?
The original conception for Jeeves, Gruener told me, was remarkably similar to what Microsoft and Google are trying to build today. As a student at UC San Diego in the mid-1970s, Gruener—a sci-fi aficionado—got an early glimpse of ARPANET, the pre-browser predecessor to the commercial internet, and fell in love. Just over a decade later, as the web grew and the beginnings of the internet came into view, Gruener realized that people would need a way to find things in the morass of semiconnected servers and networks. “It became clear that the web needed search but that mere mortals without computer-science degrees needed something easy, even conversational,” he said. Inspired by Eliza, the famous chatbot designed by MIT’s Joseph Weizenbaum, Gruener dreamed of a search engine that could converse with people using natural-language processing. Unfortunately, the technology wasn’t sophisticated enough for Gruener to create his ideal conversational search bot.
So Gruener and Warthen tried a work-around. Their code allowed a user to write a statement in English, which was then matched to a preprogrammed vector, which Gruener explained to me as “a canonical snapshot of answers to what the engine thought you were trying to say.” Essentially, they taught the machine to recognize certain words and provide really broad categorical answers. “If you were looking for population stats for a country, the query would see all your words and associated variables and go, Well, this Boolean search seems close, so it’s probably this.” Jeeves would provide the answer, and then you could clarify whether it worked or not.
“We tried to discern what people were trying to say in search, but without actually doing the natural-recognition part of it,” Gruener said. After some brainstorming, they realized that they were essentially building a butler. One of Gruener’s friends mocked up a drawing of the friendly servant, and Jeeves was born.
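Gruener's description suggests something like the following sketch - a hypothetical illustration, not the actual Ask Jeeves code: each canonical answer carries a bag of trigger words, and an incoming query is routed to whichever answer shares the most words with it, with the user then confirming whether the guess was right.

```python
# Hypothetical keyword-overlap routing in the spirit of Gruener's
# description; the category names and trigger words are invented.
CANONICAL_ANSWERS = {
    "country-population": {"population", "people", "country", "many", "live"},
    "weather-forecast": {"weather", "rain", "temperature", "forecast"},
}

def route(query: str) -> str:
    """Pick the canonical answer whose trigger words best overlap the query."""
    words = set(query.lower().split())
    return max(CANONICAL_ANSWERS,
               key=lambda name: len(CANONICAL_ANSWERS[name] & words))

print(route("How many people live in France?"))
# -> country-population
```

The crude part - and the reason the butler sometimes lost the plot - is that nothing here understands the question; it only counts word overlaps against broad, preprogrammed categories.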
Pre-Google, Ask Jeeves exploded in popularity, largely because it allowed people to talk with their search engine like a person. Within just two years, the site was handling more than 1 million queries a day. A massive Jeeves balloon floated down Central Park West during Macy’s 1999 Thanksgiving parade. But not long after the butler achieved buoyancy, the site started to lose ground in the search wars. Google’s web-crawling superiority led to hard times for Ask Jeeves. “None of us were very concerned about monetization in the beginning,” Gruener told me. “Everyone in search early on realized, if you got this right, you’d essentially be in the position of being the oracle. If you could be the company to go to in order to ask questions online, you’re going to be paid handsomely.”
Gruener isn’t bitter about losing out to Google. “If anything, I’m really proud of our Jeeves,” he told me. Listening to Gruener explain the history, it’s not hard to see why. In the mid-2000s, Google began to pivot search away from offering only 10 blue links to images, news, maps, and shopping. Eventually, the company began to fulfill parts of the Jeeves promise of answering questions with answer boxes. One way to look at the evolution of big search engines in the 21st century is that all companies are trying their best to create their own intuitive search butlers. Gruener told me that Ask Jeeves’s master plan had two phases, though the company was sold before it could tackle the second. Gruener had hoped that, eventually, Jeeves could act as a digital concierge for users. He’d hoped to employ the same vector technology to get people to ask questions and allow Jeeves to make educated guesses and help users complete all kinds of tasks. “If you look at Amazon’s Alexa, they’re essentially using the same approach we designed for Jeeves, just with voice,” Gruener said. Yesterday’s butler has been rebranded as today’s virtual assistant, and the technology is ubiquitous in many of our home devices and phones. “We were right for the consumer back then, and maybe we’d be right now. But at some point the consumer evolved,” he said.
I’ve been fixated on what might’ve been if Gruener’s vision had come about now. We might all be Jeevesing about the internet for answers to our mundane questions. Perhaps our Jeevesmail inboxes would be overflowing and we’d be getting turn-by-turn directions from an Oxford-educated man with a stiff English accent. Perhaps we’d all be much better off.
Gruener told me about an encounter he’d had during the search wars with one of Google’s founders at a TED conference (he wouldn’t specify which of the two). “I told him that we’re going to learn an enormous amount about the people who are using our platforms, especially as they become more conversational. And I said that it was a potentially dangerous position,” he said. “But he didn’t seem very receptive to my concerns.”
Near the end of our call, I offered an apology for deserting Jeeves like everyone else did. Gruener just laughed. “I find this future fascinating and, if I’m honest, a little validating,” he said. “It’s like, ultimately, as the tech has come around, the big guys have come around to what we were trying to do.”
2 notes · View notes