#Machine Learning
coral-skeleton · 2 days ago
Yep, this one can be enjoyed guilt-free.
I'd just also like to point out that AI used for denoising is a very different thing from modern-day generative AI. Denoising AI is, more often than not, a closed-system, guided machine-learning process: it is trained on a subset of the very dataset it is being used on, it does not scrape the internet for training sets, and it does not steal from artists. More often than not it can be run completely locally on any mildly decent computer (it might just take a while if you don't have a fancy machine with the latest high-end CPU and GPU), and it does not need the massive servers and computing centers that severely worsen global warming.
The type of AI used for denoising is much more similar to the AI being used to cure cancer and to push astronomical and astrophysical research forward than it is to large language models and other generative AI, and tbh, it's still worlds away from the cancer-curing AI. It's so different from ChatGPT that the same word absolutely should not be used for both of them. I would put a metaphor comparing the two here, but I genuinely can't think of two things in the same category that are this dissimilar from each other in any other context.
Anyway, this type of "AI" is really just very normal mathematics and computer algorithms, not the evil type of AI.
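To make "very normal mathematics" concrete, here's a minimal sketch of a closed-system denoiser: it learns a noisy-to-clean mapping from the dataset at hand and nothing else, and it runs locally in a blink. Everything in it (the synthetic data, the linear model, the ridge constant) is an illustrative toy of my own, not any particular product's pipeline.

```python
import numpy as np

# Toy "closed-system" denoiser: the training pairs come from the same
# dataset we want to clean -- no internet scraping involved.
rng = np.random.default_rng(0)
clean = rng.standard_normal((1000, 16))                  # 1000 "clean" patches
noisy = clean + 0.5 * rng.standard_normal(clean.shape)   # same patches + noise

# Learn a linear map W sending noisy patches back toward clean ones
# (ridge-regularized least squares, solved in closed form).
lam = 1e-2
W = np.linalg.solve(noisy.T @ noisy + lam * np.eye(16), noisy.T @ clean)

# Apply it to a fresh noisy patch from the same distribution.
test_clean = rng.standard_normal(16)
test_noisy = test_clean + 0.5 * rng.standard_normal(16)
denoised = test_noisy @ W

print(np.linalg.norm(test_noisy - test_clean))  # error before denoising
print(np.linalg.norm(denoised - test_clean))    # error after (smaller on average)
```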
AI in Unification?
If you, like me, couldn't fully enjoy Unification because there was a horrible feeling in your gut the whole time of "is this AI? Did Shatner really let them use AI? That seems like a thing he'd do, because he's kind of awful" then you've come to the right place.
I did a deep dive into the technologies used for Unification, and while this isn't a 100% comprehensive guide, here's what I've learned:
According to Trekmovie.com's article about the film, the production team used a "team of artists and animators, who combined digital and physical prosthetics with live-action location photography, virtual production, and CG set extensions" and used "OTOY’s “Octane” rendering software and the “Render Network” decentralized GPU rendering platform. Characters and props were digitized using OTOY’s Academy-Award winning “LightStage” scanning system."
So what are all these proprietary names / jargon, and are any of them AI?
LightStage: A scanning technology that allows for digital capture of a human face (probably used to capture the stand-ins' faces and superimpose older footage of Spock / Kirk, like they would for video-game motion capture) = Not AI
OctaneRender: "Fastest unbiased, spectrally correct GPU render engine" (Probably used for sets, based on the examples I'm seeing on OTOY's website. It DOES use AI for "denoising and lighting," but that is one feature of the program, not the only thing it does, so it is unclear whether they even employed it for the short film. If they did, it would not be used for character work / deepfakes, and given how little is written about this tech, I'm almost curious whether it is a full AI system at all or just an automatic denoiser they've dubbed AI to look impressive. So I'd call the results inconclusive here at best.)
The Render Network: "The network connects node operators looking to monetize their idle GPU compute power with artists looking to scale intensive 3D-rendering work and with machine learning developers looking to train and tune AI models. Through a decentralized peer-to-peer network, the Render Network achieves unprecedented levels of scale, speed, and economic efficiency. " (This basically means people can use the platform FOR AI but means nothing in the context of whether AI was used for this project.)
TL;DR: AI is an umbrella term for a lot of technology, and it seems that, if anything, some AI may have been used in the background rendering process, but nothing generative and no deepfakes. In my cynical opinion, if they HAD used AI for this, I feel like they'd be shouting it from the rooftops right now, since people who love AI won't shut up about it. I'm tentatively saying this was 99% made with traditional CGI and artist work, as stated in the Trekmovie.com article, but I wouldn't be surprised if that opinion changes as the day goes on and more information is released.
410 notes
river-taxbird · 3 months ago
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just reiterating this excellent post from Ed Zitron, but it hasn't left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
ChatGPT, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and the AI doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact-check everything it says, I might as well do the work myself.
For "real" ai that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which open ai is not working on, and seemingly no other ai companies are either.
OpenAI has already seemingly slurped up all the data from the open web. ChatGPT 5 would take 5x more training data than ChatGPT 4. Where is that data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if ChatGPT 4 is as good as LLMs can ever be? What use is it then?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs, said (on page 10, and that's big finance, so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support AI, what trillion-dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital, and it's unclear whether OpenAI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current AI is a solution to. Consumer tech is basically solved; normal people don't need more tech than a laptop and a smartphone. Big tech has run out of innovations, and it is desperately looking for the next thing to sell. It happened with the metaverse, and it's happening again.
In summary:
AI hasn't materially improved since the launch of ChatGPT 4, which wasn't that big of an upgrade over 3.
There is currently no technological roadmap for AI to become better than it is. (As Jim Covello noted in the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology, and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now until they provide evidence. It's time for the AI shills to put up or shut up.
5K notes
f-identity · 2 years ago
[Image description: A series of posts from Jason Lefkowitz @[email protected] dated Dec 08, 2022, 04:33, reading:
It's good that our finest minds have focused on automating writing and making art, two things human beings do simply because it brings them joy. Meanwhile tens of thousands of people risk their lives every day breaking down ships, a task that nobody is in a particular hurry to automate because those lives are considered cheap
https://www.dw.com/en/shipbreaking-recycling-a-ship-is-always-dangerous/a-18155491 (Headline: 'Recycling a ship is always dangerous.' on Deutsche Welle)
A world where computers write and make art while human beings break their backs cleaning up toxic messes is the exact opposite of the world I thought I was signing up for when I got into programming
/end image description]
29K notes
kenyatta · 2 years ago
I once had ChatGPT insist that a particular composer wrote music for a game, even going so far as to list particular songs from the soundtrack that they were supposedly responsible for, and it helpfully provided hallucinatory citations when I asked for them (a broken link on the game publisher's website and a link to Wikipedia, which did not in fact support its assertion either now or at any point in the article's history). Nor could I find anywhere else on the internet where someone even mistakenly believed that that composer had worked on the game. ChatGPT lies not because it's regurgitating falsehoods that it found on the internet - it lies because it invents new falsehoods on its own. It's not just trained on stuff on the internet that's wrong; it's trained to be confidently wrong in general. It doesn't know what facts are, it just knows how to produce things that are shaped like facts and shove them in fact-shaped holes. I personally wasted 30 minutes of my life fact-checking/"not believing everything it says", when it confidently told me something surprising. My horizons were not broadened by exposure to "different worldviews". This was unequivocally a negative experience for me.
comment on a MetaFilter post about AI: "My goal is to be helpful, harmless, and honest."
15K notes
mohammedhilles · 6 months ago
[Three photos]
This is where my family is staying right now: a school, without any kind of privacy or comfortable accommodation. I am kindly asking for your help; even small things can help us in this situation.
2K notes
disease · 3 months ago
Frank Rosenblatt, often cited as the father of machine learning, photographed in 1960 alongside his most notable invention: the Mark I Perceptron machine — a hardware implementation of his perceptron algorithm (1957), one of the earliest artificial neural networks, in a lineage going back to McCulloch and Pitts' artificial neuron of 1943.
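The learning rule the Mark I implemented in hardware is short enough to show in full. Here's a minimal sketch on made-up, linearly separable data (the data and constants are my own illustrative choices, not a reconstruction of the original machine):

```python
import numpy as np

# Minimal perceptron: learn a linear decision boundary from labeled examples.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # synthetic, linearly separable labels

w, b = np.zeros(2), 0.0
for _ in range(10):                  # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)?
            w += yi * xi             # Rosenblatt's update rule
            b += yi

print((np.sign(X @ w + b) == y).mean())  # training accuracy; should reach 1.0
```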
765 notes
stemgirlchic · 9 months ago
why neuroscience is cool
space & the brain are like the two final frontiers
we know just enough to know we know nothing
there are radically new theories all. the. time. and even just in my research assistant work i've been able to meet with, talk to, and work with the people making them
it's such a philosophical science
potential to do a lot of good in fighting neurological diseases
things like BCI (brain computer interface) and OI (organoid intelligence) are soooooo new and anyone's game - motivation to study hard and be successful so i can take back my field from elon musk
machine learning is going to rapidly increase neuroscience progress i promise you. we get so caught up in AI stealing jobs but yes please steal my job of manually analyzing fMRI scans please i would much prefer to work on the science PLUS computational simulations will soon >>> animal testing to make all drug testing safer and more ethical !! we love ethical AI <3
collab with...everyone under the sun - psychologists, philosophers, ethicists, physicists, molecular biologists, chemists, drug development, machine learning, traditional computing, business, history, education, literally try to name a field we don't work with
it's the brain eeeeee
2K notes
gynoidgearhead · 7 months ago
we need to come up with a good word for ""AI"" that doesn't imply it's artificial or intelligent and that highlights the stolen human labor. like what if we call it "theftgen"
(workshop this with me)
1K notes
reasonsforhope · 15 days ago
"As a Deaf man, Adam Munder has long been advocating for communication rights in a world that chiefly caters to hearing people. 
The Intel software engineer and his wife — who is also Deaf — are often unable to use American Sign Language in daily interactions, instead defaulting to texting on a smartphone or passing a pen and paper back and forth with service workers, teachers, and lawyers. 
It can make simple tasks, like ordering coffee, more complicated than it should be. 
But there are life events that hold greater weight than a cup of coffee. 
Recently, Munder and his wife took their daughter in for a doctor’s appointment — and no interpreter was available. 
To their surprise, their doctor said: “It’s alright, we’ll just have your daughter interpret for you!” ...
That day at the doctor’s office came on the heels of a thousand frustrating interactions and miscommunications — and Munder is not isolated in his experience.
“Where I live in Arizona, there are more than 1.1 million individuals with a hearing loss,” Munder said, “and only about 400 licensed interpreters.”
In addition to being hard to find, interpreters are expensive. And texting and writing aren’t always practical options — they leave out the emotion, detail, and nuance of a spoken conversation. 
ASL is a rich, complex language with its own grammar and culture; a subtle change in speed, direction, facial expression, or gesture can completely change the meaning and tone of a sign. 
“Writing back and forth on paper and pen or using a smartphone to text is not equivalent to American Sign Language,” Munder emphasized. “The details and nuance that make us human are lost in both our personal and business conversations.”
His solution? An AI-powered platform called Omnibridge. 
“My team has established this bridge between the Deaf world and the hearing world, bringing these worlds together without forcing one to adapt to the other,” Munder said. 
Trained on thousands of signs, Omnibridge is engineered to transcribe spoken English and interpret sign language on screen in seconds..."
-via GoodGoodGood, October 25, 2024. More info below the cut!
To test an alpha version of his invention, Munder welcomed TED associate Hasiba Haq on stage. 
“I want to show you how this could have changed my interaction at the doctor appointment, had this been available,” Munder said. 
He went on to explain that the software would generate a bi-directional conversation, in which Munder’s signs would appear as blue text and spoken word would appear in gray. 
At first, there was a brief hiccup on the TED stage. Haq, who was standing in as the doctor’s office receptionist, spoke — but the screen remained blank. 
“I don’t believe this; this is the first time that AI has ever failed,” Munder joked, getting a big laugh from the crowd. “Thanks for your patience.”
After a quick reboot, they rolled with the punches and tried again.
Haq asked: “Hi, how’s it going?” 
Her words popped up in blue. 
Munder signed in reply: “I am good.” 
His response popped up in gray. 
Back and forth, they recreated the scene from the doctor’s office. But this time Munder retained his autonomy, and no one suggested a 7-year-old should play interpreter. 
Munder’s TED debut and tech demonstration didn’t happen overnight — the engineer has been working on Omnibridge for over a decade. 
“It takes a lot to build something like this,” Munder told Good Good Good in an exclusive interview, communicating with our team in ASL. “It couldn't just be one or two people. It takes a large team, a lot of resources, millions and millions of dollars to work on a project like this.” 
After five years of pitching and research, Intel handpicked Munder’s team for a specialty training program. It was through that backing that Omnibridge began to truly take shape...
“Our dream is that the technology will be available to everyone, everywhere,” Munder said. “I feel like three to four years from now, we're going to have an app on a phone. Our team has already started working on a cloud-based product, and we're hoping that will be an easy switch from cloud to mobile to an app.” 
In order to achieve that dream — of transposing their technology to a smartphone — Munder and his team have to play a bit of a waiting game. Today, their platform necessitates building the technology on a PC, with an AI engine. 
“A lot of things don't have those AI PC types of chips,” Munder explained. “But as the technology evolves, we expect that smartphones will start to include AI engines. They'll start to include the capability in processing within smartphones. It will take time for the technology to catch up to it, and it probably won't need the power that we're requiring right now on a PC.” 
At its heart, Omnibridge is a testament to the positive capabilities of artificial intelligence. 
But it is more than a transcription service — it allows people to have face-to-face conversations with each other. There’s a world of difference between passing around a phone or pen and paper and looking someone in the eyes when you speak to them. 
It also allows Deaf people to speak ASL directly, without doing the mental gymnastics of translating their words into English.
“For me, English is my second language,” Munder told Good Good Good. “So when I write in English, I have to think: How am I going to adjust the words? How am I going to write it just right so somebody can understand me? It takes me some time and effort, and it's hard for me to express myself actually in doing that. This technology allows someone to be able to express themselves in their native language.” 
Ultimately, Munder said that Omnibridge is about “bringing humanity back” to these conversations. 
“We’re changing the world through the power of AI, not just revolutionizing technology, but enhancing that human connection,” Munder said at the end of his TED Talk. 
“It’s two languages,” he concluded, “signed and spoken, in one seamless conversation.”"
-via GoodGoodGood, October 25, 2024
432 notes
mostlysignssomeportents · 10 months ago
I assure you, an AI didn’t write a terrible “George Carlin” routine
There are only TWO MORE DAYS left in the Kickstarter for the audiobook of The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There's also bundles with Red Team Blues in ebook, audio or paperback.
On Hallowe'en 1974, Ronald Clark O'Bryan murdered his son with poisoned candy. He needed the insurance money, and he knew that Halloween poisonings were rampant, so he figured he'd get away with it. He was wrong:
https://en.wikipedia.org/wiki/Ronald_Clark_O%27Bryan
The stories of Hallowe'en poisonings were just that – stories. No one was poisoning kids on Hallowe'en – except this monstrous murderer, who mistook rampant scare stories for truth and assumed (incorrectly) that his murder would blend in with the crowd.
Last week, the dudes behind the "comedy" podcast Dudesy released a "George Carlin" comedy special that they claimed had been created, holus bolus, by an AI trained on the comedian's routines. This was a lie. After the Carlin estate sued, the dudes admitted that they had written the (remarkably unfunny) "comedy" special:
https://arstechnica.com/ai/2024/01/george-carlins-heirs-sue-comedy-podcast-over-ai-generated-impression/
As I've written, we're nowhere near the point where an AI can do your job, but we're well past the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job:
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
AI systems can do some remarkable party tricks, but there's a huge difference between producing a plausible sentence and a good one. After the initial rush of astonishment, the stench of botshit becomes unmistakable:
https://www.theguardian.com/commentisfree/2024/jan/03/botshit-generative-ai-imminent-threat-democracy
Some of this botshit comes from people who are sold a bill of goods: they're convinced that they can make a George Carlin special without any human intervention and when the bot fails, they manufacture their own botshit, assuming they must be bad at prompting the AI.
This is an old technology story: I had a friend who was contracted to livestream a Canadian awards show in the earliest days of the web. They booked in multiple ISDN lines from Bell Canada and set up an impressive Mbone encoding station on the wings of the stage. Only one problem: the ISDNs flaked (this was a common problem with ISDNs!). There was no way to livecast the show.
Nevertheless, my friend's bosses ordered him to go on pretending to livestream the show. They made a big deal of it, with all kinds of cool visualizers showing the progress of this futuristic marvel, which the cameras frequently lingered on, accompanied by overheated narration from the show's hosts.
The weirdest part? The next day, my friend – and many others – heard from satisfied viewers who boasted about how amazing it had been to watch this show on their computers, rather than their TVs. Remember: there had been no stream. These people had just assumed that the problem was on their end – that they had failed to correctly install and configure the multiple browser plugins required. Not wanting to admit their technical incompetence, they instead boasted about how great the show had been. It was the Emperor's New Livestream.
Perhaps that's what happened to the Dudesy bros. But there's another possibility: maybe they were captured by their own imaginations. In "Genesis," an essay in the 2007 collection The Creationists, EL Doctorow (no relation) describes how the ancient Babylonians were so poleaxed by the strange wonder of the story they made up about the origin of the universe that they assumed that it must be true. They themselves weren't nearly imaginative enough to have come up with this super-cool tale, so God must have put it in their minds:
https://pluralistic.net/2023/04/29/gedankenexperimentwahn/#high-on-your-own-supply
That seems to have been what happened to the Air Force colonel who falsely claimed that a "rogue AI-powered drone" had spontaneously evolved the strategy of killing its operator as a way of clearing the obstacle to its main objective, which was killing the enemy:
https://pluralistic.net/2023/06/04/ayyyyyy-eyeeeee/
This never happened. It was – in the chagrined colonel's words – a "thought experiment." In other words, this guy – who is the USAF's Chief of AI Test and Operations – was so excited about his own made up story that he forgot it wasn't true and told a whole conference-room full of people that it had actually happened.
Maybe that's what happened with the George Carlinbot 3000: the Dudesy dudes fell in love with their own vision for a fully automated luxury Carlinbot and forgot that they had made it up, so they just cheated, assuming they would eventually be able to make a fully operational Battle Carlinbot.
That's basically the Theranos story: a teenaged "entrepreneur" was convinced that she was just about to produce a seemingly impossible, revolutionary diagnostic machine, so she faked its results, abetted by investors, customers and others who wanted to believe:
https://en.wikipedia.org/wiki/Theranos
The thing about stories of AI miracles is that they are peddled by both AI's boosters and its critics. For boosters, the value of these tall tales is obvious: if normies can be convinced that AI is capable of performing miracles, they'll invest in it. They'll even integrate it into their product offerings and then quietly hire legions of humans to pick up the botshit it leaves behind. These abettors can be relied upon to keep the defects in these products a secret, because they'll assume that they've committed an operator error. After all, everyone knows that AI can do anything, so if it's not performing for them, the problem must exist between the keyboard and the chair.
But this would only take AI so far. It's one thing to hear implausible stories of AI's triumph from the people invested in it – but what about when AI's critics repeat those stories? If your boss thinks an AI can do your job, and AI critics are all running around with their hair on fire, shouting about the coming AI jobpocalypse, then maybe the AI really can do your job?
https://locusmag.com/2020/07/cory-doctorow-full-employment/
There's a name for this kind of criticism: "criti-hype," coined by Lee Vinsel, who points to many reasons for its persistence, including the fact that it constitutes an "academic business-model":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
That's four reasons for AI hype:
to win investors and customers;
to cover customers' and users' embarrassment when the AI doesn't perform;
AI dreamers so high on their own supply that they can't tell truth from fantasy;
A business-model for doomsayers who form an unholy alliance with AI companies by parroting their silliest hype in warning form.
But there's a fifth motivation for criti-hype: to simplify otherwise tedious and complex situations. As Jamie Zawinski writes, this is the motivation behind the obvious lie that the "autonomous cars" on the streets of San Francisco have no driver:
https://www.jwz.org/blog/2024/01/driverless-cars-always-have-a-driver/
GM's Cruise division was forced to shutter its SF operations after one of its "self-driving" cars dragged an injured pedestrian for 20 feet:
https://www.wired.com/story/cruise-robotaxi-self-driving-permit-revoked-california/
One of the widely discussed revelations in the wake of the incident was that Cruise employed 1.5 skilled technical remote overseers for every one of its "self-driving" cars. In other words, they had replaced a single low-waged cab driver with 1.5 higher-paid remote operators.
As Zawinski writes, SFPD is well aware that there's a human being (or more than one human being) responsible for every one of these cars – someone who is formally at fault when the cars injure people or damage property. Nevertheless, SFPD and SFMTA maintain that these cars can't be cited for moving violations because "no one is driving them."
But figuring out which person is responsible for a moving violation is "complicated and annoying to deal with," so the fiction persists.
(Zawinski notes that even when these people are held responsible, they're a "moral crumple zone" for the company that decided to enroll whole cities in nonconsensual murderbot experiments.)
Automation hype has always involved hidden humans. The most famous of these was the "mechanical Turk" hoax: a supposed chess-playing robot that was just a puppet operated by a concealed human operator wedged awkwardly into its carapace.
This pattern repeats itself through the ages. Thomas Jefferson "replaced his slaves" with dumbwaiters – but of course, dumbwaiters don't replace slaves, they hide slaves:
https://www.stuartmcmillen.com/blog/behind-the-dumbwaiter/
The modern Mechanical Turk – a division of Amazon that employs low-waged "clickworkers," many of them overseas – modernizes the dumbwaiter by hiding low-waged workforces behind a veneer of automation. The MTurk is an abstract "cloud" of human intelligence (the tasks MTurks perform are called "HITs," which stands for "Human Intelligence Tasks").
This is such a truism that techies in India joke that "AI" stands for "absent Indians." Or, to use Jathan Sadowski's wonderful term: "Potemkin AI":
https://reallifemag.com/potemkin-ai/
This Potemkin AI is everywhere you look. When Tesla unveiled its humanoid robot Optimus, they made a big flashy show of it, promising a $20,000 automaton was just on the horizon. They failed to mention that Optimus was just a person in a robot suit:
https://www.siliconrepublic.com/machines/elon-musk-tesla-robot-optimus-ai
Likewise with the famous demo of a "full self-driving" Tesla, which turned out to be a canned fake:
https://www.reuters.com/technology/tesla-video-promoting-self-driving-was-staged-engineer-testifies-2023-01-17/
The most shocking and terrifying and enraging AI demos keep turning out to be "Just A Guy" (in Molly White's excellent parlance):
https://twitter.com/molly0xFFF/status/1751670561606971895
And yet, we keep falling for it. It's no wonder, really: criti-hype rewards so many different people in so many different ways that it truly offers something for everyone.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
Back the Kickstarter for the audiobook of The Bezzle here!
Image:
Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
Ross Breadmore (modified) https://www.flickr.com/photos/rossbreadmore/5169298162/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
2K notes
tim-official · 11 months ago
[Screenshot of a post:]
art is work. If you didn't put in hard work it's not art. If you didn't bleed then you're taking shortcuts. you have to put in "effort" or your art is worthless. if you don't have a work ethic then you're worthy of derision. if you are unwilling or unable to suffer then you are unworthy of making art
this is so, so obviously a conservative, reactionary sentiment. This is what my fucking dad says about Picasso. "They just want to push the button" is word-for-word what people used to say about electronic music - not "real" instruments, no talent involved, no skill, worthless. how does this not disturb more people? this should disturb you! is everyone just seeing posts criticizing AI and slamming reblog without reading too close, or do people actually agree with this?
usual disclaimer: this is not a "pro-ai" stance. this is a "think about what values you actually have" stance. there are many more coherent ways to criticize it
1K notes
river-taxbird · 1 year ago
There is no such thing as AI.
How to help the non-technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect, but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an AI image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out that those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence, like Data from Star Trek or the Terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): an umbrella term for solving problems for which developing algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithm. (This is the basis of most of the technology people call AI.)
Language model (LM, or LLM for a large language model): a probabilistic model of a natural language that can generate probabilities for a series of words, based on the text corpora in one or multiple languages it was trained on. (This would be your ChatGPT.)
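To demystify that a little, here's a toy sketch of the core object: estimated probabilities of which word follows which, sampled to produce plausible-looking text. The corpus is made up, and real LLMs use huge neural networks over long contexts, but this is the same kind of thing, and it shows why such a model can produce fact-shaped sentences without knowing any facts.

```python
import random
from collections import Counter, defaultdict

# Toy bigram language model: P(next word | previous word), counted from a corpus.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                     # count word transitions

def next_word(prev: str) -> str:
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate text by repeatedly sampling the next word. It looks like English;
# nothing in it is "known", checked, or true.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```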
Generative adversarial network (GAN): a class of machine-learning framework, and a prominent framework for approaching generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
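Here's what that zero-sum game looks like in a toy PyTorch sketch, with 1-D numbers standing in for images (the architecture and constants are my own illustrative choices):

```python
import torch
import torch.nn as nn

# Toy GAN: a generator learns to mimic samples from N(3, 1) because a
# discriminator keeps punishing it for being distinguishable from real data.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0   # "real" data
    fake = G(torch.randn(64, 8))      # generator's forgeries
    # Discriminator's turn: label real as 1, fake as 0 (its "gain").
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator's turn: try to get its fakes labeled 1 (the discriminator's "loss").
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward ~3.0
```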
Diffusion models: models that generate the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images that have had Gaussian noise added, by learning to remove the noise; after training, it can generate images by starting from pure random noise and denoising it step by step. (This is the more common technology behind AI images, including DALL-E and Stable Diffusion. I added this one to the post afterward, as it was brought to my attention that it is now more common than GANs.)
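And here's a toy sketch of that add-noise-then-learn-to-remove-it loop on 1-D data. Because the data here is just a Gaussian, a per-step least-squares fit can stand in for the neural network; everything about it is illustrative, not how Stable Diffusion is actually built:

```python
import numpy as np

# Toy diffusion: a forward process gradually destroys structure with Gaussian
# noise; a learned reverse process walks back from pure noise to data.
rng = np.random.default_rng(0)
T, beta, n = 30, 0.1, 20_000
data = rng.standard_normal(n) * 0.3 + 2.0   # "dataset": N(2, 0.3^2)

# Forward process: keep samples at every noise level.
levels = [data]
for t in range(T):
    levels.append(np.sqrt(1 - beta) * levels[-1]
                  + np.sqrt(beta) * rng.standard_normal(n))

# "Training": for each step, fit the denoising map x_t -> x_{t-1}.
# (A real model learns this with a neural network on images.)
steps = []
for t in range(T, 0, -1):
    a, b = np.polyfit(levels[t], levels[t - 1], 1)
    s = np.std(levels[t - 1] - (a * levels[t] + b))  # leftover randomness
    steps.append((a, b, s))

# Generation: start from pure noise, apply the learned denoising steps.
g = rng.standard_normal(n)
for a, b, s in steps:
    g = a * g + b + s * rng.standard_normal(n)
print(g.mean(), g.std())   # should land near the dataset's N(2, 0.3)
```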
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
prokopetz · 2 years ago
Okay, so you know how search engine results on most popular topics have become useless because the top results are cluttered with page after page of machine-generated gibberish designed to trick people into clicking in so it can harvest their ad views?
And you know how the data sets that are used to train these gibberish-generating AIs are themselves typically machine-generated, via web scrapers using keyword recognition to sort text lifted from wiki articles and blog posts into topical subsets?
Well, today I discovered – quite by accident – that the training-data-gathering robots apparently cannot tell the difference between wiki articles about pop-psych personality typologies (e.g., Myers-Briggs type indicators, etc.) and wiki articles about Homestuck classpects.
The upshot is that when a bot that's been trained on the resulting data sets is instructed to write fake mental health resource articles, sometimes it will start telling you about Homestuck.
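You can see how that kind of keyword sorting goes wrong with even a toy sketch (entirely hypothetical on my part; the real scraper pipelines are presumably fancier, but shared vocabulary is shared vocabulary):

```python
# Hypothetical toy of keyword-based topic sorting: a "psychology" bucket
# happily swallows Homestuck classpect text because the vocabulary overlaps.
psych_keywords = {"personality", "type", "traits", "introverted", "class"}

docs = [
    "The INTJ personality type shows introverted traits.",
    "A Mage of Heart is a classpect: a class paired with a personality aspect.",
]

for doc in docs:
    words = set(doc.lower().replace(".", "").replace(":", "").split())
    matched = words & psych_keywords
    label = "psychology" if matched else "other"
    print(f"{label}: {doc!r} (matched {sorted(matched)})")
```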
16K notes
mindblowingscience · 5 months ago
We urgently need to move away from fossil fuels, but electric vehicles and other green technology can put their own pressures on the environment. That pressure could be eased with a new magnet design, free from rare-earth metals, that was built with AI in just three months. Rare-earth metals are essential components in modern-day gadgets and electric tech – including cars, wind turbines, and solar panels – but getting them out of the ground costs a lot in terms of money, energy, and environmental impact. As a result, technology that doesn't use these metals can help us transition towards a greener future more quickly. Enter UK company Materials Nexus, which has used its bespoke AI platform to create MagNex, a permanent magnet requiring no rare-earth metals.
427 notes