#Parrot Intelligence
Text
Years ago we had a budgie. He was the sweetest — really affectionate, lovely temperament. One day, however, he uncharacteristically nipped my mother’s finger, and she made a disapproving noise. He responded, “Naughty bird?” Now this was obviously a phrase he’d picked up, but it had never, ever been said to him with a rising inflection.
We don’t give parrots, even tiny ones like budgies, nearly enough credit.
Y'all want to know what thought is fucking with me today?
Parrots can learn the concept of questions. I can't vouch for the claim that chimpanzees taught sign language never learned to ask questions, or for the theory that it simply wouldn't occur to them that their human handlers might know things they personally do not, or that whatever information the humans have might be worth having. I don't even remember where I read that, and at best it's an anecdote of an anecdote. But anyway: parrots.
The exact complexity of natural parrot communication in the wild is beyond human understanding for the time being, but you can catch glimpses of how complex it is by looking at how much parrots pick up from human speech. Sure, they figure out that this sound means this object, animal, person, or other thing. Human says "peanut" and presents a peanut, so the sound "peanut" means peanut. Yes. But if you make the same sound with a rising intonation, you are inquiring about the possibility of a peanut.
A bird that's asking "peanut?" knows there is no peanut physically present in the current situation, but hypothetically, there could be a peanut. The human knows whether there will be a peanut. The bird knows that making this specific human sound with this specific intonation is a way of requesting this information, and a polite way of informing the human that a peanut is desired.
"I get a peanut?" is a polite spoken request. There is no peanut here, but there could be a peanut. The bird knows that the human knows this. But without the rising intonation of a question, the statement "I get a peanut." is a firm implied threat. There is no peanut here, but there better fucking be one soon. The bird knows that the human knows this.
37K notes
Text
youtube
The Remarkable African Grey Parrot
Discover the incredible world of the African Grey Parrot, renowned for its intelligence and social nature!
Check out my other videos here: Animal Kingdom Animal Facts Animal Education
#Helpful Tips #Wild Wow Facts #African Grey Parrot #Parrot Intelligence #Talking Parrot #Parrot Training #African Grey Care #Exotic Birds #Parrot Speech #Bird Behavior #Pet Birds #Bird Training #African Grey Facts #Parrot Nutrition #Bird Enrichment #Grey Parrot Lifespan #Parrot Tricks #Bird Communication #Parrot Breeding #Parrot Vocalization #Bird Lovers #African Grey Talking #Smart Birds #Parrot Behavior #Parrot Bonding #Bird Health #African Grey Life #animal behavior #animal kingdom #animal science
0 notes
Text
youtube
African Grey Parrots: Unlocking Their Genius and Emotional Intelligence!
Discover the incredible world of African Grey Parrots in this in-depth video! These highly intelligent and emotionally sensitive birds are known for their remarkable ability to mimic human speech, but there’s so much more to them.
In this video, we’ll dive into the science behind their intelligence, their emotional needs, and what makes them one of the most fascinating pets to own. Stay tuned as we unlock the secrets behind their genius and learn how to care for these magnificent creatures.
#tiktokparrot #african grey parrot care #african grey parrot lifespan in captivity #african grey lifespan #african grey behavior #buying an african grey parrot #cute birds #african grey #african grey parrot #africangrey #Mitthu #talking parrot #tiktok parrot #parrot talking #smart parrot #african grey Intelligence #african grey parrot Intelligence #grey parrot intelligence #parrot intelligence #african grey parrot brain #african grey parrot talking #african grey parrot sounds #african grey talking #african grey parrot swearing #african grey as a pet #african grey aviary #african grey bird #african grey body language #Youtube
0 notes
Text
cockatoo ipad kid experiment
5K notes
Text
How plausible sentence generators are changing the bullshit wars
This Friday (September 8) at 10hPT/17hUK, I'm livestreaming "How To Dismantle the Internet" with Intelligence Squared.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
In my latest Locus Magazine column, "Plausible Sentence Generators," I describe how I unwittingly came to use – and even be impressed by – an AI chatbot, and what this means for a specialized, highly salient form of writing, namely, "bullshit":
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here's what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline's reps told me they would absolutely not reimburse me, that this was their policy, and they didn't care that their representative had promised they'd make me whole. This was so frustrating that I decided to take the airline to small claims court: I'm no lawyer, but I know that a contract takes place when an offer is made and accepted, and so I had a contract, and AA was violating it, and stiffing me for over $400.
The problem was that I didn't know anything about filing a small claim. I've been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a "final demand" letter to the airline's business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I'm not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I've spent the same amount of time writing about the sins of the rich and powerful. I've seen a lot of threats, both those received by our clients and sent to me.
I've been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I've been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber arms-dealer. I even got a series of vicious, baseless threats from lawyers representing LAX's private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, "Make this into an aggressive, bullying legal threat." The chatbot obliged.
I don't think much of LLMs. After you get past the initial party trick of getting something like, "instructions for removing a grilled-cheese sandwich from a VCR in the style of the King James Bible," the novelty wears thin:
https://www.emergentmind.com/posts/write-a-biblical-verse-in-the-style-of-the-king-james
Yes, science fiction magazines are inundated with LLM-written short stories, but the problem there isn't merely the overwhelming quantity of machine-generated stories – it's also that they suck. They're bad stories:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
LLMs generate naturalistic prose. This is an impressive technical feat, and the details are genuinely fascinating. This series by Ben Levinstein is a must-read peek under the hood:
https://benlevinstein.substack.com/p/how-to-think-about-large-language
But "naturalistic prose" isn't necessarily good prose. A lot of naturalistic language is awful. In particular, legal documents are fucking terrible. Lawyers affect a stilted, stylized language that is both officious and obfuscated.
The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law firm. As such, it sends a signal: "The person who commissioned this letter is so angry at you that they are willing to spend $600 to get you to cough up the $400 you owe them. Moreover, they are so well-resourced that they can afford to pursue this claim beyond any rational economic basis."
Let's be clear here: these kinds of lawyer letters aren't good writing; they're a highly specific form of bad writing. The point of this letter isn't to be parsed closely; it's to send a signal. If the letter were well-written, it wouldn't send the right signal. For the letter to work, it has to read like it was written by someone whose prose-sense was irreparably damaged by a legal education.
Here's the thing: the fact that an LLM can manufacture this once-expensive signal for free means that the signal's meaning will shortly change, forever. Once companies realize that this kind of letter can be generated on demand, it will cease to mean, "You are dealing with a furious, vindictive rich person." It will come to mean, "You are dealing with someone who knows how to type 'generate legal threat' into a search box."
Legal threat letters are in a class of language formally called "bullshit":
https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
LLMs may not be good at generating science fiction short stories, but they're excellent at generating bullshit. For example, a university prof friend of mine admits that they and all their colleagues now write grad student recommendation letters by feeding a few bullet points to an LLM, which puffs them up into lengthy paragraphs of bullshit.
Naturally, the next stage is that profs on the receiving end of these recommendation letters will ask another LLM to summarize them back down to a few bullet points. This is next-level bullshit: a few easily grasped points are inflated into a florid sheet of nonsense, which is then reconverted into a few bullet points that may be only tangentially related to the originals.
What comes next? The reference letter becomes a useless signal. It goes from being a thing that a prof has to really believe in you to produce, whose mere existence is thus significant, to a thing that can be produced with the click of a button, and then it signifies nothing.
We've been through this before. It used to be that sending a letter to your legislative representative meant a lot. Then, automated internet forms produced by activists like me made it far easier to send those letters and lawmakers stopped taking them so seriously. So we created automatic dialers to let you phone your lawmakers, this being another once-powerful signal. Lowering the cost of making the phone call inevitably made the phone call mean less.
Today, we are in a war over signals. The actors and writers who've trudged through the heat-dome up and down the sidewalks in front of the studios in my neighborhood are sending a very powerful signal. The fact that they're fighting to prevent their industry from being enshittified by plausible sentence generators that can produce bullshit on demand makes their fight especially important.
Chatbots are the nuclear weapons of the bullshit wars. Want to generate 2,000 words of nonsense about "the first time I ate an egg," to run overtop of an omelet recipe you're hoping to make the number one Google result? ChatGPT has you covered. Want to generate fake complaints or fake positive reviews? The Stochastic Parrot will produce 'em all day long.
As I wrote for Locus: "None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it."
Meanwhile, AA still hasn't answered my letter, and to be honest, I'm so sick of bullshit I can't be bothered to sue them anymore. I suppose that's what they were counting on.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic #chatbots #plausible sentence generators #robot lawyers #robolawyers #ai #ml #machine learning #artificial intelligence #stochastic parrots #bullshit #bullshit generators #the bullshit wars #llms #large language models #writing #Ben Levinstein
2K notes
Note
in super spec bros, would yoshi ever be tempted to eat anyone in the mushroom kingdom? are they sapient?
he’s naturally curious, although socialized enough not to go around putting things (that move) in his mouth. In speculative bros, Yoshis are primarily frugivores, and any meat they eat is usually bycatch. A toad might catch a Yoshi’s attention because of its bright colors, but it doesn't activate any kind of prey drive.
Toads are sapient, but in this setting Yoshis are not. They are smart and inventive, though, which makes them engaging companions.
#yoshis intelligence here is very similar to a parrots #speculative biology #art #super speculative bros #super mario #yoshi #smb yoshi #smb mario #kips art #smb toad
906 notes
Text
Hot take
#i like my dragons sentient and we already have highly intelligent flying dinosaurs Right There 🦜🐦⬛ #can you imagine how terrifying a bored lovebird the size of an elephant would be ? #that's not to say i don't enjoy dragons with cat-like traits i just think there's a missed opportunity #dragons #birds are dinosaurs #parrots #corvids #birblr #fantasy #here be dragons
211 notes
Video
tumblr
Smooth criminal
Source
528 notes
Text
Rusty the Good Dalek -- indefinite companion aboard Clara's diner✨️
#i dont know #seemed like a good idea at the time #🤣🤣🤣 #doctor who #clara oswald #doctor who fandom #rusty the dalek #doctor who dalek #doctor who fanart #doctor x clara #idk if they can even be out their casing very long but shhh #i can ✨️dream✨️ #i think Rusty likes jazz #and painting #and digestive biscuits #yeah #thats the boi #the intelligent parrot of the crew
666 notes
Text
These Parrots Won’t Stop Swearing. Will They Learn to Behave—or Corrupt the Entire Flock?
A British zoo hopes the good manners of a larger group will rub off on the eight misbehaving birds
A few years ago, a zoo in Britain went viral for its five foul-mouthed parrots that wouldn’t stop swearing. Now, three more birds at Lincolnshire Wildlife Park have developed the same bad habit—and zoo staffers have devised a risky plan to curb their bad behavior. “We’ve put eight really, really offensive, swearing parrots with 92 non-swearing ones,” Steve Nichols, the park’s chief executive, tells CNN’s Issy Ronald...
#parrot #zoos #bird #ornithology #animal behavior #animal communication #nature #animal intelligence #science
55 notes
Text
Right, so originally I didn't even start re-watching this video bc of James Somerton but bc I wanted to watch the iilluminaughtii part of it again (she's still making videos btw. the comments are turned off). I actually remember seeing the thumbnail for them *a lot* on my recommended but I clicked on one once and didn't even make it to the end because well. It did feel like she was reading me the newspaper lol. I really hate that kind of monotone voice so many Youtubers use now, like they're reciting you the information at the back of a cereal box. It's the auditory equivalent of watching paint dry to me.
#hbomberguy #iilluminaughtii #ironically i don't mind it when actual documentaries do this #but if a youtuber is just parroting facts at me #i tune them out and eventually click out #bc if i wanted that i would watch the fucking documentary #i'm not coming to youtube for your expert knowledge! i wanna know what you THINK #but a lot of these people looking for a quick buck are not intelligent enough to form their own opinions which is why they literally just #recite the information they've gained through minimal research back at you and call it a day #or worse that trend of recapping an entire show without giving their opinion or commenting or doing any kind of media analysis just. #fucking reading you the wikipedia entry for each episode i guess #(looking at you poorly edited lazy-ass wizards of waverly place video)
14 notes
Photo
Goffin’s cockatoo named third species that carries toolkits around in preparation for future tasks
47 notes
Text
So you know how the whole “culture war” thing is just fascism? Guess what else is implicated in that? If you said AI, you’ve probably already read this really fantastic article.
If you haven’t, take 10. You need to know more about this, especially if you’ve been playing around with the machines. 
22 notes
Text
Do you think a Chocobo could learn to mimic words like Colibri/real life Parrots
No listen I know they’re not giant parrots and their canonical vocal range is Kweh I just think it would be funny if you were meeting with the WoL and their bird chirped and then just went and like. Swore, or said *no*, or. [Sloppy]
#does this have any real applications? no I was just thinking about parrots #and how Chocobo are very intelligent creatures and could probably make words if they wanted
3 notes
Text
Supervised AI isn't
It wasn't just Ottawa: Microsoft Travel published a whole bushel of absurd articles, including the notorious Ottawa guide recommending that tourists dine at the Ottawa Food Bank ("go on an empty stomach"):
https://twitter.com/parismarx/status/1692233111260582161
After Paris Marx pointed out the Ottawa article, Business Insider's Nathan McAlone found several more howlers:
https://www.businessinsider.com/microsoft-removes-embarrassing-offensive-ai-assisted-travel-articles-2023-8
There was the article recommending that visitors to Montreal try "a hamburger," which went on to explain that a hamburger was a "sandwich comprised of a ground beef patty, a sliced bun of some kind, and toppings such as lettuce, tomato, cheese, etc" and that some of the best hamburgers in Montreal could be had at McDonald's.
For Anchorage, Microsoft recommended trying the local delicacy known as "seafood," which it defined as "basically any form of sea life regarded as food by humans, prominently including fish and shellfish," going on to say, "seafood is a versatile ingredient, so it makes sense that we eat it worldwide."
In Tokyo, visitors seeking "photo-worthy spots" were advised to "eat Wagyu beef."
There were more.
Microsoft insisted that this wasn't an issue of "unsupervised AI," but rather "human error." On its face, this presents a head-scratcher: is Microsoft saying that a human being erroneously decided to recommend dining at Ottawa's food bank?
But a close parsing of the mealy-mouthed disclaimer reveals the truth. The unnamed Microsoft spokesdroid only appears to be claiming that this wasn't written by an AI, but they're actually just saying that the AI that wrote it wasn't "unsupervised." It was a supervised AI, overseen by a human. Who made an error. Thus: the problem was human error.
This deliberate misdirection actually reveals a deep truth about AI: that the story of AI being managed by a "human in the loop" is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences.
Our brains wire together neurons that we recruit when we practice a task. When we don't practice a task, the parts of our brain that we optimized for it get reused. Our brains are finite and so don't have the luxury of reserving precious cells for things we don't do.
That's why the TSA sucks so hard at its job – why they are the world's most skilled water-bottle-detecting X-ray readers, but consistently fail to spot the bombs and guns that red teams successfully smuggle past their checkpoints:
https://www.nbcnews.com/news/us-news/investigation-breaches-us-airports-allowed-weapons-through-n367851
TSA agents (not "officers," please – they're bureaucrats, not cops) spend all day spotting water bottles that we forget in our carry-ons, but almost no one tries to smuggle a weapon through a checkpoint – 99.999999% of the guns and knives they do seize are the result of flier forgetfulness, not a planned hijacking.
In other words, they train all day to spot water bottles, and the only training they get in spotting knives, guns and bombs is in exercises, or the odd time someone forgets about the hand-cannon they shlep around in their day-pack. Of course they're excellent at spotting water bottles and shit at spotting weapons.
This is an inescapable, biological aspect of human cognition: we can't maintain vigilance for rare outcomes. This has long been understood in automation circles, where it is called "automation blindness" or "automation inattention":
https://pubmed.ncbi.nlm.nih.gov/29939767/
Here's the thing: if nearly all of the time the machine does the right thing, the human "supervisor" who oversees it becomes incapable of spotting its error. The job of "review every machine decision and press the green button if it's correct" inevitably becomes "just press the green button," assuming that the machine is usually right.
This is a huge problem. It's why people just click "OK" when they get a bad certificate error in their browsers. 99.99% of the time, the error was caused by someone forgetting to replace an expired certificate, but the problem is, the other 0.01% of the time, it's because criminals are waiting for you to click "OK" so they can steal all your money:
https://finance.yahoo.com/news/ema-report-finds-nearly-80-130300983.html
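The base-rate arithmetic behind that rubber stamp is easy to see in a toy simulation (a sketch with made-up numbers, not a model of any real review pipeline): when the machine is right 99.99% of the time, a "supervisor" who approves everything still racks up a stellar-looking accuracy score while catching exactly zero of the errors that matter.

```python
import random

random.seed(42)

N = 100_000           # machine decisions put in front of the human reviewer
ERROR_RATE = 0.0001   # the machine is wrong 0.01% of the time (illustrative figure)

# Which decisions in the stream are actually wrong?
machine_wrong = [random.random() < ERROR_RATE for _ in range(N)]
errors = sum(machine_wrong)

# A vigilance-depleted "human in the loop" just presses the green button
# on every single decision.
caught = 0  # approving everything catches none of the machine's errors

# Yet their track record looks superb, because errors are so rare: the
# button-masher is "right" whenever the machine happens to be right.
apparent_accuracy = (N - errors) / N

print(f"machine errors in the stream: {errors}")
print(f"errors caught by the rubber stamp: {caught}")
print(f"apparent accuracy of the 'supervisor': {apparent_accuracy:.4%}")
```

The punchline is that the supervisor's measured "accuracy" is just the machine's base rate wearing a lanyard, which is exactly why button-mashing is indistinguishable from competent review until the rare, catastrophic miss.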
Automation blindness can't be automated away. From interpreting radiographic scans:
https://healthitanalytics.com/news/ai-could-safely-automate-some-x-ray-interpretation
to autonomous vehicles:
https://newsroom.unsw.edu.au/news/science-tech/automated-vehicles-may-encourage-new-breed-distracted-drivers
The "human in the loop" is a figleaf. The whole point of automation is to create a system that operates at superhuman scale – you don't buy an LLM to write one Microsoft Travel article, you get it to write a million of them, to flood the zone, top the search engines, and dominate the space.
As I wrote earlier: "There's no market for a machine-learning autopilot, or content moderation algorithm, or loan officer, if all it does is cough up a recommendation for a human to evaluate. Either that system will work so poorly that it gets thrown away, or it works so well that the inattentive human just button-mashes 'OK' every time a dialog box appears":
https://pluralistic.net/2022/10/21/let-me-summarize/#i-read-the-abstract
Microsoft – like every corporation – is insatiably horny for firing workers. It has spent the past three years cutting its writing staff to the bone, with the express intention of having AI fill its pages, with humans relegated to skimming the output of the plausible sentence-generators and clicking "OK":
https://www.businessinsider.com/microsoft-news-cuts-dozens-of-staffers-in-shift-to-ai-2020-5
We know about the howlers and the clunkers that Microsoft published, but what about all the other travel articles that don't contain any (obvious) mistakes? These were very likely written by a stochastic parrot, and they comprised training data for a human intelligence, the poor schmucks who are supposed to remain vigilant for the "hallucinations" (that is, the habitual, confidently told lies that are the hallmark of AI) in the torrent of "content" that scrolled past their screens:
https://dl.acm.org/doi/10.1145/3442188.3445922
Like the TSA agents who are fed a steady stream of training data to hone their water-bottle-detection skills, Microsoft's humans in the loop are being asked to pluck atoms of difference out of a raging river of otherwise characterless slurry. They are expected to remain vigilant for something that almost never happens – all while they are racing the clock, charged with preventing a slurry backlog at all costs.
Automation blindness is inescapable – and it's the inconvenient truth that AI boosters conspicuously fail to mention when they are discussing how they will justify the trillion-dollar valuations they ascribe to super-advanced autocomplete systems. Instead, they wave around "humans in the loop," using low-waged workers as props in a Big Store con, just a way to (temporarily) cool the marks.
And what of the people who lose their (vital) jobs to (terminally unsuitable) AI in the course of this long-running, high-stakes infomercial?
Well, there's always the food bank.
"Go on an empty stomach."
Going to Burning Man? Catch me on Tuesday at 2:40pm on the Center Camp Stage for a talk about enshittification and how to reverse it; on Wednesday at noon, I'm hosting Dr Patrick Ball at Liminal Labs (6:15/F) for a talk on using statistics to prove high-level culpability in the recruitment of child soldiers.
On September 6 at 7pm, I'll be hosting Naomi Klein at the LA Public Library for the launch of Doppelganger.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
West Midlands Police (modified) https://www.flickr.com/photos/westmidlandspolice/8705128684/
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
#pluralistic #automation blindness #humans in the loop #stochastic parrots #habitual confident liars #ai #artificial intelligence #llms #large language models #microsoft
1K notes
Text
PODCAST EPISODES OF LEGITIMATE A.I. CRITICISMS:
Citations Needed Podcast
youtube
Factually! With Adam Conover
youtube
Tech Won't Save Us
#podcasts #artificial intelligence #AI #stochastic parrots #AI Art #AI writing #WGA strike #technology #Youtube #Spotify
13 notes