#google gemini explained
Explore tagged Tumblr posts
Text
AI GEMINI
youtube
#google gemini#google gemini ai#future of ai#artificial intelligence#gemini#gemini google#google ai gemini#what is google gemini#google ai#gemini ai#google#multimodal ai#gemini ai google#google gemini demo#gemini google ai#will google gemini be free#new google ai gemini#google gemini revolutionizing tech review#multimodal#the future of ai#google gemini explained#multimodal intelligence#multimodal intelligence network#Youtube
0 notes
Note
Idk if this makes sense but the discussions and discourse you have with other people combined with your sometimes scary understanding of Dan and Phil is very…. Sagittarius of you x Gemini Aquarius of them. If you don’t know, Sags are the sister signs of Geminis so it makes sense that you are able to easily see and understand Dan’s faults while not thinking poorly of him and getting the complexity. It’s also very Aquarius of Phil to recognize his own faults and shortcomings and for you to love him the way you do as a Sag because air signs and fire signs are the most compatible (friendship and romantically). This is parasocial of ME to YOU but I think you’d be genuinely good friends of theirs if given the opportunity. 🫶
I WISH I UNDERSTOOD ASTROLOGY ENOUGH TO UNDERSTAND THIS BUT I THANK YOU ANYWAY
#if anyone wants to explain more things about astrology to me please feel free#i want to know more but when i google i get dumb articles that are like#sagittariuses love travel! geminis are snakes! love your friends this week!#and it is empty and meaningless and i know that there is actual meaning to astrology and different cultures behind it
0 notes
Text
I just started grad school this fall after a few years away from school and man I did not realize how dire the AI/LLM situation is in universities now. In the past few weeks:
I chatted with a classmate about how it was going to be a tight timeline on a project for a programming class. He responded "Yeah, at least if we run short on time, we can just ask chatGPT to finish it for us"
One of my professors pulled up chatGPT on the screen to show us how it can sometimes do our homework problems for us and showed how she thanks it after asking it questions "in case it takes over some day."
I asked one of my TAs in a math class to explain how a piece of code he had written worked in an assignment. He looked at it for about 15 seconds then went "I don't know, ask chatGPT"
A student in my math group insisted he was right on an answer to a problem. When I asked where he got that info, he sent me a screenshot of Google Gemini giving just blatantly wrong info. He still insisted he was right when I pointed this out and refused to click into any of the actual web pages.
A different student in my math class told me he pays $20 per month for the "computational" version of chatGPT, which he uses for all of his classes and PhD research. The computational version is worth it, he says, because it is wrong "less often". He uses chatGPT for all his homework and can't figure out why he's struggling on exams.
There's a lot more, but it's really making me feel crazy. Even if it was right 100% of the time, why are you paying thousands of dollars to go to school and learn if you're just going to plug everything into a computer whenever you're asked to think??
5K notes
Text
This is it. Generative AI, as a commercial tech phenomenon, has reached its apex. The hype is evaporating. The tech is too unreliable, too often. The vibes are terrible. The air is escaping from the bubble. To me, the question is more about whether the air will rush out all at once, sending the tech sector careening downward like a balloon that someone blew up, failed to tie off properly, and let go—or more slowly, shrinking down to size in gradual sputters, while emitting embarrassing fart sounds, like a balloon being deliberately pinched around the opening by a smirking teenager.

But come on. The jig is up. The technology that was at this time last year being somberly touted as so powerful that it posed an existential threat to humanity is now worrying investors because it is apparently incapable of generating passable marketing emails reliably enough. We've had at least a year of companies shelling out for business-grade generative AI, and the results—painted as shinily as possible by a banking and investment sector that would love nothing more than a new technology that can automate office work and creative labor—are one big "meh."

As a Bloomberg story put it last week, "Big Tech Fails to Convince Wall Street That AI Is Paying Off." From the piece:

Amazon.com Inc., Microsoft Corp. and Alphabet Inc. had one job heading into this earnings season: show that the billions of dollars they've each sunk into the infrastructure propelling the artificial intelligence boom is translating into real sales. In the eyes of Wall Street, they disappointed. Shares in Google owner Alphabet have fallen 7.4% since it reported last week. Microsoft's stock price has declined in the three days since the company's own results. Shares of Amazon — the latest to drop its earnings on Thursday — plunged by the most since October 2022 on Friday. Silicon Valley hailed 2024 as the year that companies would begin to deploy generative AI, the type of technology that can create text, images and videos from simple prompts. This mass adoption is meant to finally bring about meaningful profits from the likes of Google's Gemini and Microsoft's Copilot. The fact that those returns have yet to meaningfully materialize is stoking broader concerns about how worthwhile AI will really prove to be.

Meanwhile, Nvidia, the AI chipmaker that soared to an absurd $3 trillion valuation, is losing that value with every passing day—26% over the last month or so, and some analysts believe that's just the beginning. These declines are the result of less-than-stellar early results from corporations that have embraced enterprise-tier generative AI, the distinct lack of killer commercial products 18 months into the AI boom, and scathing financial analyses from Goldman Sachs, Sequoia Capital, and Elliott Management, which concluded that there was "too much spend, too little benefit" from generative AI, in the words of Goldman, and that it was "overhyped" and a "bubble," per Elliott.

As CNN put it in its report on growing fears of an AI bubble:

Some investors had even anticipated that this would be the quarter that tech giants would start to signal that they were backing off their AI infrastructure investments since "AI is not delivering the returns that they were expecting," D.A. Davidson analyst Gil Luria told CNN. The opposite happened — Google, Microsoft and Meta all signaled that they plan to spend even more as they lay the groundwork for what they hope is an AI future.

This can, perhaps, explain some of the investor revolt.
The tech giants have responded to mounting concerns by doubling, even tripling down, and planning to spend tens of billions of dollars on researching, developing, and deploying generative AI for the foreseeable future. All this as high-profile clients are canceling their contracts. As surveys show that overwhelming majorities of workers say generative AI makes them less productive. As MIT economist and automation scholar Daron Acemoglu warns, "Don't believe the AI hype."
6 August 2024
#ai#artificial intelligence#generative ai#silicon valley#Enterprise AI#OpenAI#ChatGPT#like to charge reblog to cast
184 notes
Text
Frank is Pisces
I made a thread for this theory on TSBS Discord server and it blew up (it blew up so much more than I ever thought it would, like Jesus Christ. I'm super happy about it though). I figured that since I moved all my canon info stuff over here from the server, I might as well move my theory stuff over too
Long post warning, since there's a lot here. A lot of this exists because I was possessed by my adhd demon one night, noticed something about Frank, and then ran wild with it
(Last two pictures are part of a chart that SqarletGecko made for this theory. If Sqarlet sees this at any point, hi, hello! I appreciate you for feeding into this, Sqarlet)
There will be more images tacked on later. I'd add them now, but unfortunately, there's a 10 picture limit to posts. As stupid as that is. ANYWAY
Frank’s strong enough to kill two different witherstorms. Although him outright killing them was never verified, it was one of the only ways he could’ve come back so soon. The only other way would’ve been to wait by the portal, but assuming Moon would’ve closed it off because the dimensions Frank was in housed witherstorms, Frank would’ve had to find another way out
His name is neither Frank nor Forkface, so it’s entirely possible that it could be Pisces
Sqarlet pointed out that Castor said “Pisces is probably off doing his own thing”, which could be anything, and it certainly doesn’t preclude Pisces being on earth as Frank, doing whatever he’s been doing
In the “Lunar Gets Friendzoned” vrchat episode, Castor mentions Pisces again, this time saying (in reference to how Lunar’s “final test” would go, and how someone would be sent to judge his ability to control his powers) “Could be Nebula, could be Libra. Could be Pisces, but I doubt that. Hell, it could even be Taurus.” This is the second time Castor’s mentioned Pisces, as if he doesn’t have a whopping 10 other signs he could pull names from (minus Gemini and Pisces, obviously). This could be a case of simply sticking to a smaller pool out of the 12 names, but still
Castor has mentioned Pisces offhandedly two different times. Yeah, there could be a really simple explanation for that, but it sticks out to me, and I can't pinpoint why
During an Uno video, they did a “one breath for yes, two for no” thing with Frank, and he confirmed that he had a bad past. Given some of the things that Castor has said about Taurus, it’s a possibility that Frank/Pisces was trying to get away from him. Some other Pisces traits are that they’re supposed to be super empathetic and deeply emotional. If Taurus said or did something that impacted Frank/Pisces enough, he could’ve chosen to leave
For a while, Frank didn’t seem to react much when people acted scared of him, but as he’s spent more time with the channels, he’s developing more emotionally, which would make sense if he were younger. According to a google search I ran, “Pisces emotional sensitivity is high, helping them to remain in tune with others while also leaving them vulnerable to criticism, worrying about the effects that their actions might have on others,” which could explain why he acted so sad when he briefly appeared in the lobby in an FFFS episode and everyone acted scared of him
Pisces has a heightened emotional sensitivity, they're very in tune with the emotions of those around them, and this in turn makes them worry about how others might react to them
In another Uno video, Foxy made a lighthearted comment to Frank about how he reminded him of his son, since he has a habit of inserting himself into situations and things. There are other characters who do this too, meaning that Foxy could’ve compared Frank to literally anyone, and yet, he chose a character that’s a child. If Frank is Pisces, he’d be the youngest of the astral bodies
If you watch him, Frank does have some childlike mannerisms and behaviors. The first thing is the way he sometimes wants to show someone something, but then gets upset when they touch it. It’s like when a kid gets a new toy and shows their friend, but then gets upset when their friend tries playing with it because it’s theirs. The second thing is him apparently drawing on the wall underneath Earth’s bed. Drawing on walls is something that kids do. The third thing is him trying to feed Earth a piece of pizza. Kids will sometimes try to feed people too, though it’s typically only with people they’re close with, that they know pretty well
If Frank is Pisces and therefore the youngest of the astral bodies, there’s a chance that he chose to show up in the daycare because he knows it’s a safe place for kids, and that there are good caretakers there (Sun, Moon, Earth, and Lunar). He could’ve chosen a daycare attendant-esque form to blend in better with the daycare environment, or he could’ve copied what he saw of the caretakers there, much like how kids copy the adults they see
Pisces is the youngest of the astral bodies, so everything that Frank does that seems like something a kid might do could sort of tie in with that. Frank has seemed to somewhat mature and "grow up" in a sense as of recently though, so these childlike habits and behaviors have begun to become rarer and rarer
Pisces has ties to illusions, dreams, and the subconscious. This could explain the times when Frank appeared in both Monty and Earth’s dreams, and then dragged the Stitchwraith into his own mind. In the case of Monty’s dream, Frank knew that they’d had problems with their dad and he’d even offered to be their listening ear, so seeing the shape Monty was in emotionally and mentally after their dad died, Frank may have guided them to an image of their dad to try to promote a form of closure and emotional healing and recovery before Monty woke up (did I mention that being a healer is also a Pisces trait?). He was silent in the dream, so it can’t be that Monty’s mind heard his signature heavy breathing and did something funky with that. In Earth’s case, Frank somehow knew that she was having a nightmare and he came to wake her up, repeatedly saying “no fear”, as if he was telling her not to be afraid. With the Stitchwraith, the Stitchwraith wasn’t aware that he’d been pulled into his own mind. Frank didn’t confirm that he was until he told the Stitchwraith that “It’s just a bad dream, a nightmare”, and told him to wake up
More Pisces traits are wanting to help people and being a healer. Adding in the ties to illusions, dreams, and the subconscious, I feel like creating dreams to help people recover from things wouldn't be too farfetched. In the case of Earth, Frank knew she was having a nightmare and wanted to wake her up so that she wouldn't be scared anymore. To this day, I have no way to explain how he could've possibly known about her having a nightmare, aside from sensing her distress and/or having some kind of connection to her subconscious
I found a snippet of an article that said “When we meet Pisces, we are taken aback by their remote coldness; they often act like confused geniuses or oddballs who have trouble interacting with others.” It’s confirmed that Frank is highly intelligent; he somehow knew how to get into Moon’s computer and registered himself as the primary user, and he somehow swiped Foxy’s voicebox and installed it in himself, then took it back out and reinstalled it in Foxy. Frank is also an oddball, but I don’t really need to explain that one
Frank has made it abundantly clear that he doesn’t like Ruin. He’s had many opportunities to take him out or even just attack him, but he hasn’t. Castor has stated that astral bodies typically don’t get involved with things or associate with people unless they really need to. If Frank is Pisces, he may have avoided going after Ruin because it’d go against the rules
There are some mixed messages on whether or not Pisces is a rule follower, but a couple things I found that stuck out to me implied that sometimes they follow the rules, and sometimes they do whatever they want as long as they’re satisfied. We were given a glimpse of this with Frank when Lunar tried to run him over with a car in a vrchat episode. Frank didn’t seem to care as much about the fact that he was nearly run over, and instead, became agitated with Lunar for parking incorrectly. He also made Lunar get in his car, and then proceeded to move it out of the alley and to the drive thru, where Lunar was supposed to be anyways. He might’ve also reacted poorly to Monty and Earth bringing a goose into the restaurant they were in, since animals don’t belong there. Beyond that, Frank’s also gotten upset with Sun and tried to menacingly follow him around the room when Sun cheated during a game of Uno, showing that he doesn’t like cheaters
Frank seems to have a knack for interrupting people’s dates. Most of the time, he’s trying to help but doesn’t seem to grasp that his behavior might be making people uncomfortable, BUT according to google, the Pisces sign is known for being jealous of other people’s love lives. After the death of Ruined Monty, Frank may have started to feel a little jealous of others
According to google, “most Pisceans are very good at earning a stable living,” and “they are usually very effective in any career that needs collaboration with others or inventiveness.” It’s been stated before that Frank works in at least three different restaurants, and a restaurant type environment would require collaboration with others to keep everything running smoothly. He’d be earning good money from those three restaurant jobs, and another Pisces trait is wanting to help people, so working in a restaurant would be a way for him to do that
Strengths of Pisces would include being selfless and thoughtful, passionate and creative, gratitude and self-sacrifice, tolerance and a keen understanding, and being kind and sympathetic. Weaknesses of Pisces include being influenced by their surroundings, being careless, rash, and ill-disciplined, having an inability to confront reality due to an absence of confidence, being insecure, sentimentalism, indecisiveness, and a lack of foresight. These are all from a chart I found, and although I don’t think we’ve seen too many of these weaknesses in Frank yet, we’ve definitely seen a lot of the strengths
We have no idea where Frank was before he first showed up at the daycare. Fazbear apparently had him shipped in to replace Moon, but with how easily he got into Moon’s computer, it wouldn’t be too farfetched to assume that he could’ve possibly also hopped onto a different device and sent a fake email, posing as Fazbears to keep people from asking too many questions about why he’s there
Like Sqarlet has said, we’ve noticed a pattern of astral signs following the usual astrological descriptions, but it’s not necessarily a rule, nor is it to a T. It’s very possible that Frank mostly follows the general tendencies of Pisces, but still has individual quirks and such that are different. It could be a case where perfectionism or a preference for rule-following/organization is just a lil personal thing of his, unrelated to astrology
Another thing suggested by Sqarlet is that the Pisces dealio might explain why Frank’s choice of appearance is generally uncanny/unsettling. Astral bodies don’t necessarily have a spectacular grasp on how to Aesthetic effectively, at least from an earthling’s perspective
Eclipse once used star power to get into Puppet’s dreamscape, when Puppet was on his way to get Lunar. If star power is what makes dreamscape stuff possible, then Frank would need access to it in order to do the same (see: all the dream stuff he did with Monty and him showing up in Stitchwraith’s dreamscape)
Whenever Frank appears, whoever he appears around is usually experiencing a negative emotion of some sort. Some instances of this can be seen when he showed up and tried to talk to Monty after Monty had a fight with their dad and was frustrated, when Earth and Sun were worrying about Lunar when Lunar lost his voice, when Sun got turned into a dragon and was panicking, and when Earth had a nightmare and was freaking out over it
Foxy’s implied that Frank was probably trying his best to be helpful, during all the dates that he unintentionally ruined. Which… means that although Frank is trying his best to be helpful, he might not understand what “normal” behavior looks like, then he messes up, and then he gets upset when people misunderstand him and his intentions, or when whatever he’s doing ends up backfiring in some way. Pisces loves to help others and can’t stand being misunderstood, so this lines up beautifully
During a podcast episode, Frank breathed heavier when Castor and Pollux were mentioned, implying that he may know them. Given how he even attempted to speak again, he likely had some kind of thoughts or feelings toward them, too
The astral bodies are typically aware of almost everything that happens. This might explain how Frank (if he’s Pisces) knew that Foxy would be alone on Christmas. It was only Foxy, Freddy, and Francine present when the Stitchwraith took FC, so unless Frank was spying on everyone or has taken to watching the channels like Ruin apparently does, he shouldn’t know what happened
If the Foxy’s Intervention episode is anything to go by, no one ever knows where Frank is, and yet, he still knows where he’s needed and goes there to help
With the way that Pisces deals with illusions just as much as dreams and the subconscious, it’s possible that the Frank that the Stitchwraith saw was just an illusion. This could explain why Frank wasn’t hit by Stitchwraith’s weird chest laser thing (unless he quickly teleported out of the way), and why we never saw him go up to the Stitchwraith and try pushing him around or anything, despite how Frank was acting toward him. We’ve seen Frank physically interact with people before, so we know he can, but if he was an illusion, that wouldn’t be possible
Something else to consider that I haven't shared with the theory thread yet (that I can remember) is that when Pisces feels hurt or betrayed in any way, they can be incredibly vengeful. Their vengeance, according to a search I ran, could be intense enough to break trust and damage relationships, if they're not careful. In killing Ruined Monty, Bloodmoon may have unknowingly made Frank feel so deeply hurt that Frank has now decided that he needs to get revenge on him. He kept saying things during their encounter that made it feel like he may be entertaining the idea of getting rid of one of the twins, which... follows a very "you took away someone who meant the world to me, let's see what happens when I do it to you" sort of logic
#sun and moon show#the sun and moon show#tsams#tsams frank#the sun and moon show frank#sun and moon show frank#sams frank#sun and moon show forkface#the sun and moon show forkface#tsams forkface#sams forkface#theory time#'but arson/skellies what about (insert x thing here'#i can guarantee you. I've thought all of this through at least a dozen times#I've been rotating it in my brain nonstop#ever since i first had the idea for all of this#it started with the way his face plates look when he opens them up#and it went downhill from there
182 notes
Text
Astrology Observations No. 8
*based on my personal experiences, take them with a grain of salt
-I wanted this to be my 18+ edition bc 8th house but I don’t have enough spicy observations yet lol
-Virgo placements & Scorpio placements having 100 invisible tests someone has to pass before you let them get close to you
-Why yes my Jupiter is in my 3rd house. Why yes I did start recommending documentaries to documentary students because I watch that much educational content for fun.
-Cancer suns can beat out Leo suns for being the most charismatic one in the room and that’s coming from someone that loves Leo suns.
-Someone said to be a full time artist you either need the external structure of a rich sponsor/family with money/sugar daddy or the internal structure of being an earth sign and I love that lol (Idek how to explain how I got to the mad men zodiac roast where I saw that line lmao)
-Geminis lie but get so boldfaced they often contradict themselves a lot. And Cancers manipulate but often go so hard on it that it starts to become pretty obvious. Pisces can actually be the best at both, especially if they don’t have any earth placements, no energy to ground those illusions.
-Ok I’ve been googling birth charts to practice reading them (thank you for your dms, I’ll reply soooon!) and I was surprised Robert J. Oppenheimer (the guy that spearheaded making the atomic bomb) was a Cancer moon because of all the destruction the bomb caused (and from the beginning they knew that was going to be a destructive endeavor) BUT Cancers are very nationalistic so I could see this being his rationale. Until all the violent consequences, of course. (Very specific but he may have been a part of the scientists that advocated for peace after the bombs were dropped on Hiroshima and Nagasaki but I don’t know fs)
-Also a LOT of actors have aries moon and/or Leo mars (including aries moon for Cillian Murphy who’s playing Oppenheimer in that new movie, full circle connection)
-Virgo mars people can embody a certain gender fluidity. The women can dress in masculine clothing and carry a certain tinge of a masculine vibe and vice versa for feminine energy and virgo mars men. (Not sure if this applies to all mutable mars or not).
-Scorpio rising/pluto in the first house and wearing sunglasses on the streets just to avoid weird eye contact with strangers
-Saturn conjunction moon/Capricorn, Aquarius moon/Aquarius and/or Uranus 4th house - what was it like not having a childhood? (Gang, gang)
-*reads that Capricorn Mercury is straightforward*
Me with Capricorn Mercury rx:

#astro observations#astro notes#astroblr#aries moon#capricorn#capricorn mercury#uranus astrology#virgo mars#saturn#leo mars#Leo#cancer#scorpio#scorpio rising#pisces#gemini#Aquarius
856 notes
Note
I've been using an ai called "claude" a lot. I asked it how to do something in python, and it patiently explained it to me. When my code didn't work, I showed it my code and it told me what mistake I made. Then I fixed the mistake and it worked. This feels insanely cool and useful to me. Also, when I asked claude if you could infuse garlic into olive oil without heat, it said "maybe, but it's not recommended because you might get botulism." Maybe it's just google gemini specifically that sucks?
I've found far more uses for genAI in my personal projects than I have found professionally, because there are no stakes to my personal projects. Sure, genAIs can sometimes be very helpful! But all of them - Claude, Gemini, ChatGPT, Copilot, whatever else might be out there at this point - can be wrong. And their wrong answers look exactly like their right answers.
This is fine, if you're just coding from home, or creating French lessons for yourself, or whatever. But if my boss is going to see what I'm making, then I'm doing it myself, or I'm spending so much time fact-checking the AI output that I might as well have done it myself.
Maybe in another five years this technology will have become reliable enough that the time saved by using it isn't completely offset by the risk of it hallucinating something and making you look like an absolute fucking jackass in front of your VP, or your customer. But we're not there yet.
51 notes
Text
you're losing me (two) | am. targaryen and j. velaryon
Description: The 'fake-relationship' begins. Social media is taken by storm.
Rating: General Audiences
part one
Aemond was a gentleman - he always made sure that he was paying you his full attention. "Do you want to go shopping?" he asked, pointing at the shop in front of him. It was a small boutique - popular only to those that could afford the pieces inside. "Yes," you smiled, pulling him closer inside the building.
He didn't need to tell you twice.
"What are your parents like? Are they uptight or cool?" you inquired, dismissing the sales associate that wanted to help you. He took a deep breath - trying to find words that could explain his parents. "My dad is old, he's traditional in a sense - but my mom is trendy. She'll warm up to you." he smiled, watching as your eyebrows bumped into each other. Traditional and modest attire it is then.
There was a vintage white dress on the rack; you'd seen it on one of the Vogue covers from a few decades ago. It was an iconic piece, but not famous enough that it would seem tacky. His hands snake around your waist. "What?" you ask and he nudges your body slowly, pointing at the paparazzi that were standing outside of the door.
"Kiss me," he demanded, pulling your body closer - until you could smell his perfume. "You're demanding, you know that?" you tease, reaching to cup his cheeks. The paparazzi's outside of the window were having a seizure - making sure to take a picture of every moment. "- but you like that." he paused, pressing your lips together.
He tasted like black coffee and cherries. It was intoxicating - a taste that was new to your tongue. You couldn't help but let a moan seep through - he smirks through the kiss before letting you go. He pretends to be shocked that the paparazzi were outside. "Let's go," he mouthed, pulling you into a deeper part of the store where the 'media' couldn't see.
ynkittenworld: Y/N spotted with mystery man (two separate occasions), new album soon? world tour soon? WHO IS THIS MAN!!! side note: happy for you mommy 😺
100 comments 23,909 likes
reinaynworld: MOM EXPLAIN?
thegreatwar: I'm going to auntie bella's house ☹️
boogeyman: acc. to some ppl that saw them the man is aemond targaryen and he owns this tech startup that's worth billions. UR WELCOME AND THANK YOU FELLOW Y/N'S KITTENS
Aemond sits on your blue sofa, sipping his tea while browsing through social media. "Your fans are crazy, dude." he hummed - watching as people shared his face all around various platforms. He wasn't aware that he was famous enough to be discovered. "They even found my instagram account." he complained, seeing follow requests from people that he didn't know.
"If you can't handle the fire, don't go to the kitchen." you shrug, placing a piece of pastry on the round table. He was a gracious guest, always helping you around with household chores - after the whole scenario with the paparazzi, he was spending all his time inside your little hotel room. "Helaena's blasting my phone, she a big fan of you." he smiled, replying to his sister with a GIF of you laughing. "Helaena?" you ask while lying on his lap.
His fingers comb through your hair, untangling the knots that your hairbrush couldn't fix. "My older sister." he answered - staring deep into your features. He couldn't deny that you were beautiful, and that your personality stirred something inside of him - but you were just a summer thing, right? Something that would allow him to inherit all of his father's fortune.
"I know nothing about your family to be honest. Google didn't have anything other than your dating history and zodiac sign." you state, wanting him to talk about his family more. It was interesting to see a stoic and conniving man smile when he spoke about family. "- what's my zodiac sign?" he questioned, not knowing the answer.
"Gemini," you answer immediately. "- and you used to date Lindsay Lohan." you add with a smile. He was older than he looked. He places a strand of your hair away from your face. "Stalker," he mumbled - and you sat up straight, glaring at him playfully. You were about to tickle him, but your phone rings. 'Jacaerys' it read out, and you placed your phone in your pocket.
"We should hard-launch each other," you suggest and he raised his eyebrows. "What's that?" he inquired - unfamiliar with the word. "Posting each other on social media." you replied and he nods.
It would work perfectly with his plan.
(your first name) : our secret moments, in a crowded room...they got no idea about me and you. @officialaemondtargaryen
8,920 comments 1,238,098 likes
maybemaybemaybe: Rue, when was this?
officialaemondtargaryen: there is an indentation, in the shape of you😉 - (your first name) : 100% not typed by me using his phone
You couldn't stop giggling as his phone was bombarded with messages and calls from your fans. He could hardly use the thing properly. "They have got to stop." he sighed, while his phone buzzed and vibrated on the coffee table. "You need a new phone," you chuckle - watching while he tried to get rid of the messages.
"I have a private account, how are they doing this?" he sighs, all the while Aegon and Helaena keep calling his phone. Another laugh escapes your mouth - he defeatedly settles the phone on the table.
"You poor thing," you giggle, a small pout graces his lips. "You are going to be the poor thing after what I do to you." he chuckles, placing his hands on your waist. For a second you believe that the feelings between you were genuine - not because of the need for money. "What are you going to do?" you ask, and he smirks.
Hands trailing down to your sides and tickling you.
(your first name)
Very bored today, #AskMeAnything
wendychoi: what is your hairstyle called? - (your first name): doggy style
pinturilla23: Where did u meet dad? - (your first name): we met at a hotel garden 😁
yournamefan: Are you going to make an album about Aemond? - (your first name): yes he's gonna make a handsome muse 😉🥰
Lucerys stuffed a large piece of vanilla ice cream into his mouth. "(Your Name) is dating our uncle, how do you feel about that?" he teased, moving his body away so that Jace couldn't do anything to him. "I don't care," Jace huffed, reaching for a spoon to take a piece of his brother's ice cream. "He doesn't care but he's turning red." Joffrey piped, giggling loudly while eating his strawberry ice cream.
"I wonder if he'll take her to the reunion." Joffrey adds, and a sigh escapes from Jace's mouth. 'Give you my wild, give you a child.' turned to dust - and never to return. He knew that it was not his fault - your personalities just managed to clash with each other.
You were the kind to fight for something - to beg for him to fight for the relationship. He was the opposite of that.
If it was meant to be - then you wouldn't need to fight for it.
part three
@minaxcarter @glame @xcinnamonmalfoyx @winxchesters @yentroucnagol @hotchnerswife @itsabby15 @mxxny-lupin @joliettes @kemillyfreitas @mxtantrights
#aemond x you#aemond targaryen fanfiction#aemond smut#aemond targaryen x reader#modern aemond targaryen#modern aemond x reader#modern aemond x you#modern aemond targaryen x reader#modern aemond targaryen x you#modern aemond#hotd modern au#aemond x fem!reader#aemond imagine#aemond x y/n#aemond x reader#aemond fanfiction#prince aemond x you#aemond targaryen x you#aemond targaryen fic#aemond stannies#aemond fic#aemond fanfic#aemond fluff#aemond targaryen fluff#aemond targaryen smut#aemond targaryen x modern!reader#hotd x you#hotd smut#aemond targaryen fanfic#prince aemond fic
319 notes
Text
Contra Yishan: Google's Gemini issue is about racial obsession, not a Yudkowsky AI problem.
@yishan wrote a thoughtful thread:
Google’s Gemini issue is not really about woke/DEI, and everyone who is obsessing over it has failed to notice the much, MUCH bigger problem that it represents. [...] If you have a woke/anti-woke axe to grind, kindly set it aside now for a few minutes so that you can hear the rest of what I’m about to say, because it’s going to hit you from out of left field. [...] The important thing is how one of the largest and most capable AI organizations in the world tried to instruct its LLM to do something, and got a totally bonkers result they couldn’t anticipate. What this means is that @ESYudkowsky has a very very strong point. It represents a very strong existence proof for the “instrumental convergence” argument and the “paperclip maximizer” argument in practice.
See full thread at link.
Gemini's code is private and Google's PR flacks tell lies in public, so it's hard to prove anything. Still I think Yishan is wrong and the Gemini issue is about the boring old thing, not the new interesting thing, regardless of how tiresome and cliched it is, and I will try to explain why.
I think Google deliberately set out to blackwash their image generator, and did anticipate the image-generation result, but didn't anticipate the degree of hostile reaction from people who objected to the blackwashing.
Steven Moffat gave a prime example of the blackwashing mindset when he remarked:
"We've kind of got to tell a lie. We'll go back into history and there will be black people where, historically, there wouldn't have been, and we won't dwell on that. "We'll say, 'To hell with it, this is the imaginary, better version of the world. By believing in it, we'll summon it forth'."
Moffat was the subject of some controversy when he produced a Doctor Who episode (Thin Ice) featuring a visit to 1814 Britain that looked far less white than the historical record indicates that 1814 Britain was, and he had the Doctor claim in-character that history has been whitewashed.
This is an example that serious, professional, powerful people believe that blackwashing is a moral thing to do. When someone like Moffat says that a blackwashed history is better, and Google Gemini draws a blackwashed history, I think the obvious inference is that Google Gemini is staffed by Moffat-like people who anticipated this result, wanted this result, and deliberately worked to create this result.
The result is only "bonkers" to outsiders who did not want this result.
Yishan says:
It demonstrates quite conclusively that with all our current alignment work, that even at the level of our current LLMs, we are absolutely terrible at predicting how it’s going to execute an intended set of instructions.
No. It is not at all conclusive. "Gemini is staffed by Moffats who like blackwashing" is a simple alternate hypothesis that predicts the observed results. Random AI dysfunction or disalignment does not predict the specific forms that happened at Gemini.
One tester found that when he asked Gemini for "African Kings" it consistently returned all dark-skinned black royalty, despite the existence of light-skinned Mediterranean Africans such as Copts, but when he asked Gemini for "European Kings" it mixed in black, Asian, and Native American figures in regalia.
Gemini is not randomly off-target, nor accurate in one case and wrong in the other, it is specifically thumb-on-scale weighted away from whites and towards blacks.
If there's an alignment problem here, it's the alignment of the Gemini staff. "Woke" and "DEI" and "CRT" are some of the names for this problem, but the names attract flames and disputes over definition. Rather than argue names, I hear that Jack K. at Gemini is the sort of person who asserts "America, where racism is the #1 value our populace seeks to uphold above all".
He is delusional, and I think a good step to fixing Gemini would be to fire him and everyone who agrees with him. America is one of the least racist countries in the world, with so much screaming about racism partly because of widespread agreement that racism is a bad thing, which is what makes the accusation threatening. As Moldbug put it:
The logic of the witch hunter is simple. It has hardly changed since Matthew Hopkins’ day. The first requirement is to invert the reality of power. Power at its most basic level is the power to harm or destroy other human beings. The obvious reality is that witch hunters gang up and destroy witches. Whereas witches are never, ever seen to gang up and destroy witch hunters. In a country where anyone who speaks out against the witches is soon found dangling by his heels from an oak at midnight with his head shrunk to the size of a baseball, we won’t see a lot of witch-hunting and we know there’s a serious witch problem. In a country where witch-hunting is a stable and lucrative career, and also an amateur pastime enjoyed by millions of hobbyists on the weekend, we know there are no real witches worth a damn.
But part of Jack's delusion, in turn, is a deliberate linguistic subversion by the left. Here I apologize for retreading culture war territory, but as far as I can determine it is true and relevant, and it being cliche does not make it less true.
US conservatives, generally, think "racism" is when you discriminate on race, and this is bad, and this should stop. This is the well established meaning of the word, and the meaning that progressives implicitly appeal to for moral weight.
US progressives have some of the same, but have also widespread slogans like "all white people are racist" (with academic motte-and-bailey switch to some excuse like "all complicit in and benefiting from a system of racism" when challenged) and "only white people are racist" (again with motte-and-bailey to "racism is when institutional-structural privilege and power favors you" with a side of America-centrism, et cetera) which combine to "racist" means "white" among progressives.
So for many US progressives, ending racism takes the form of eliminating whiteness and disfavoring whites and erasing white history and generally behaving the way Jack and friends made Gemini behave. (Supposedly. They've shut it down now and I'm late to the party, I can't verify these secondhand screenshots.)
Bringing in Yudkowsky's AI theories adds no predictive or explanatory power that I can see. Occam's Razor says to rule out AI alignment as a problem here. Gemini's behavior is sufficiently explained by common old-fashioned race-hate and bias, which there is evidence for on the Gemini team.
Poor Yudkowsky. I imagine he's having a really bad time now. Imagine working on "AI Safety" in the sense of not killing people, and then the Google "AI Safety" department turns out to be a race-hate department that pisses away your cause's goodwill.
---
I do not have a Twitter account. I do not intend to get a Twitter account, it seems like a trap best stayed out of. I am yelling into the void on my comment section. Any readers are free to send Yishan a link, a full copy of this, or remix and edit it to tweet at him in your own words.
61 notes
Text
Artificial Intelligence Risk
About a month ago I got it into my mind to try the video essay format, and the topic I came up with that I felt I could more or less handle was AI risk and my objections to Yudkowsky. I wrote the script, but soon afterwards I ran out of motivation to do the video. Still, I didn't want the effort to go to waste, so I decided to share the text, slightly edited, here. This is a LONG fucking thing, so put it aside on its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence, what are artificial intelligences exactly, what is an AGI, what is an agent, the orthogonality thesis, the concept of instrumental convergence, alignment and how does Eliezer Yudkowsky figure in all of this.
If you are already familiar with this you can skip to section two, where I'm going to be talking about Yudkowsky's arguments for AI research presenting an existential risk to, not just humanity, or even the world, but to the entire universe, and my own tepid rebuttal to his argument.

Now, I SHOULD clarify: I am not an expert in the field and my credentials are dubious at best. I am a college dropout from computer science, and I have a three-year graduate degree in video game design and a three-year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,

and watching educational YouTube videos. So. You know. Not an authority on the matter from any considerable point of view, and my opinions should be regarded as such.
So, without further ado, let's get into it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 General intelligence and agency
Let's begin with what counts as artificial intelligence. The technical definition of artificial intelligence is, eh…, well, why don't I let someone with a Master's degree in machine intelligence explain it:

Now let's get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames, or in AlphaGo, or even our Roombas, are narrow AIs; that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be within a videogame level, within a Go board, or within your filthy disgusting floor.

AGI on the other hand is much more, well, general: it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an AGI. So far that is the last frontier of AI research, and although we are not quite there yet, it does seem like we are making some moderate strides in that direction. We've all seen the impressive conversational and coding skills that GPT-4 has, and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits: it has no persistent memory, and its context window, while larger than previous models', is still relatively small compared to a human's (the context window is essentially short-term memory: how many things it can keep track of and act coherently about).
And yet there is one more factor I haven't mentioned yet that would be needed to make something a "true" AGI. That is agency: to have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I, as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines more broadly, don't have. Volition.

So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively: the definition of intelligence. It's a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn't intelligence can be seen as implying that it deserves or doesn't deserve admiration, validity, moral worth or even personhood. I don't care about any of that dumb shit. The way I'm going to be using "intelligence" in this video is basically "how capable you are of doing many different things successfully". The more "intelligent" an AI is, the more capable of doing things that AI can be. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I'm using the word within the context of this video. I don't care if you are a psychologist or a neurosurgeon or a pedagogue; I need a word to express this idea and that is the word I'm going to use. If you don't like it, or if you think this is inappropriate of me, then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.

Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases, we start to see certain trends: certain strategies start to arise again and again, and we call this instrumental convergence.
1.2 Instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It's going to try to protect itself. When you want to do something, being dead is usually. Bad. It's counterproductive. It is not generally recommended. Dying is widely considered unadvisable by 9 out of every 10 experts in the field. If there is something that it wants to get done, it won't get done if it dies or is turned off, so it's safe to predict that any AGI will try to do things in order not to be turned off. How far might it go in order to do this? Well… [wouldn't you like to know, weather boy].

Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is, uh, not going to accomplish the first thing, is it? Let's say that you want to take care of your child; that is your goal, that is the thing you want to accomplish, and I come to you and say: here, let me change you on the inside so that you don't care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn't be cared for or protected, and you want to ensure that happens. So caring about something else instead is a huge no-no, which is why, if we make AGI and it has goals that we don't like, it will probably resist any attempt to "fix" it.

And finally, another goal that it will most likely trend towards is self-improvement, which can be generalized to "resource acquisition". If it lacks the capacity to carry out a plan, then step one of that plan will always be to increase its capacities. If you want to get something really expensive, well, first you need to get money. If you want to increase your chances of getting a high-paying job, then you need to get an education; if you want to get a partner, you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So, one more time, it's not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, or by taking control of resources.
All three of these things are sure bets: they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
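To make this less abstract, here's a tiny toy planner in Python (a hypothetical sketch of my own, with made-up goal names; a real agent would be nothing this simple): whatever final goal you hand it, its plans keep routing through the same instrumental first steps.

```python
# Instrumental convergence in miniature: different final goals,
# same instrumental prerequisites at the start of every plan.
requires = {
    "buy_expensive_thing": ["get_money"],
    "get_high_paying_job": ["get_education"],
    "get_money":           ["stay_operational"],
    "get_education":       ["stay_operational"],
    "stay_operational":    [],  # self-preservation: every plan bottoms out here
}

def plan(goal):
    """Return the steps needed for a goal, prerequisites first."""
    steps = []
    for prerequisite in requires[goal]:
        steps += plan(prerequisite)
    return steps + [goal]

for goal in ("buy_expensive_thing", "get_high_paying_job"):
    print(goal, "->", plan(goal))
# Both plans begin with "stay_operational": the goals differ,
# but the opening strategy converges.
```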
Now of course, I have implied a sinister tone to all of this; I have made it all sound vaguely threatening, haven't I? There is one more assumption I'm sneaking into all of this which I haven't talked about. All that I have mentioned presents a very callous view of AGI: I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being human. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency; I'm talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing: they have to be overcome in the first place.

An AGI is not going to have the same evolutionary predisposition to these things that we do, because it is not made of the same things a human is made of and it was not raised the way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and it will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. If on its way there it comes across an anthill in its path, it will probably step on the anthill, because taking that step brings it closer to the corner store. And why wouldn't it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? Then it will step on the anthill and pay it no mind.

Now let's say it comes across a cat. The same logic applies: if it wasn't programmed with an inherent tendency to value animals, stepping on the cat won't slow it down at all.
Now let’s say it comes across a baby.
Of course, if it's intelligent enough, it will probably understand that if it steps on that baby, people might notice and try to stop it, most likely even try to disable it or turn it off, so it will not step on the baby, to save itself from all that trouble. But you have to understand that it won't stop because it would feel bad about harming a baby, or because it understands that harming a baby is wrong. And indeed, if it were powerful enough that no matter what people did they could not stop it, and it would suffer no consequence for killing the baby, it probably would have killed the baby.

If I need to put it in gross, inaccurate terms for you to get it, then let me put it this way: it's essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which requires some manner of stable society and civilization around them. They are also only human, and are limited in the harm they can do by human limitations. An AGI doesn't need any of that and is not limited by any of that.

So ultimately, much like a car's goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to carry them out effectively. And those goals don't need to include human wellbeing.

Now, with that said: how DO we make it so that AGI cares about human wellbeing? How do we make it so that it wants good things for us? How do we make it so that its goals align with those of humans?
1.4 Alignment
Alignment… is hard [cue Hitchhiker's Guide to the Galaxy scene about space being big]

This is the part I'm going to skip over the fastest, because frankly it's a deep field of study. There are many current strategies for aligning AGI, from mesa-optimizers, to reinforcement learning from human feedback, to adversarial asynchronous AI-assisted reward training, to, uh, sitting on our asses and doing nothing. Suffice to say, none of these methods is perfect or foolproof.

One thing many people like to gesture at when they have not learned or studied anything about the subject is the Three Laws of Robotics by Isaac Asimov: a robot should not harm a human or, through inaction, allow a human to come to harm; a robot should do what a human orders unless it contradicts the first law; and a robot should preserve itself unless that goes against the previous two laws. Now, the thing Asimov was prescient about was that these laws were not just "programmed" into the robots. These laws were not coded into their software; they were hardwired, part of the robots' electronic architecture, such that a robot could not ever be without those three laws, much like a car couldn't run without wheels.
In this, Asimov realized how important these three laws were: they had to be intrinsic to the robot's very being; they couldn't be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally, that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, the thing it seeks to maximize, instead of instrumental values, that is to say, things it values simply because they allow it to achieve something else.

But how do we even begin to do that? How do we codify "human values" into a robot? How do we define "harm", for example? How do we even define "human"??? How do we define "happiness"? How do we explain to a robot what is right and what is wrong, when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes; these are profound philosophical questions to which we still don't have satisfying answers.
Well, the best sort of hack solution we've come up with so far is not to create bespoke, fundamental, axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it's not just that humans are flawed and have a tendency to cause harm, and that therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis: it's not good enough that I, for example, buy roses and give massages and act nice to my girlfriend because it allows me to have sex with her; I should not merely be imitating or performing the role of a loving partner because her happiness is an instrumental value toward my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy, and that is the thing I care about. Her happiness should be my fundamental value. Likewise, to an AGI, human fulfillment should be its fundamental value, not something it learns to perform because doing so earns it a certain reward we hand out during training. Because if, deep down, it only really cares about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily be divorced from human happiness.
It's Goodhart's law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and students will seek high grades regardless of whether they learned anything or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades. They learn to cheat.
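You can watch Goodhart's law happen in a dozen lines. This is a made-up toy, every number in it is invented, but it shows the shape of the failure: an optimizer that only ever sees a proxy will happily sail past the thing the proxy was supposed to measure.

```python
import random

random.seed(0)

def true_value(x):
    # What we actually care about: peaks at x = 2.
    return -(x - 2.0) ** 2

def proxy(x):
    # The measurable stand-in: mostly tracks true_value, but keeps paying
    # out for inflating x. Think "grades" standing in for "learning".
    return true_value(x) + 2.0 * x

x = 0.0
for _ in range(2000):
    candidate = x + random.uniform(-0.1, 0.1)
    if proxy(candidate) > proxy(x):  # the optimizer only ever sees the proxy
        x = candidate

print(f"x = {x:.2f}, proxy = {proxy(x):.2f}, true value = {true_value(x):.2f}")
# Hill-climbing on the proxy drifts toward x = 3 (the proxy's peak), sailing
# right past x = 2, so the true value ends up negative: the measure became
# the target and stopped measuring anything.
```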
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior, because all it teaches people is how to avoid the punishment; it teaches people not to get caught. This is why punitive justice doesn't work all that well at stopping recidivism, why the carceral system is rotten to the core, and why jail should be fucking abolish-[transmission interrupted]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about building alignable AI in just about the worst possible way.
1.5 LLMs (large language models)
This is getting way too fucking long, so, hurrying up, let's do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at the network, such that it starts forming internal mathematical models based on that data and creating coherent patterns that it can recognize, replicate, AND extrapolate! If we do this enough times, with matrices that are big enough, then when we start prodding it for human behavior, it will follow the pattern of human behavior that we prime it with and give us coherent responses.
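To make "add and multiply a bunch of numbers" concrete, here is the idea shrunk to toy scale. This is a sketch, not a real language model: a real one has billions of parameters and attention layers, and the vocabulary and weights below are invented. But the arithmetic is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]
V, hidden = len(vocab), 8

W1 = rng.normal(scale=0.5, size=(V, hidden))   # the tunable numbers...
W2 = rng.normal(scale=0.5, size=(hidden, V))   # ...more tunable numbers

def next_token_probs(token_id):
    x = np.eye(V)[token_id]            # one-hot encode the current token
    h = np.tanh(x @ W1)                # multiply, add, squash
    logits = h @ W2                    # multiply and add again
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # softmax over the vocabulary

probs = next_token_probs(vocab.index("cat"))
for tok, p in zip(vocab, probs):
    print(f"P({tok!r} | 'cat') = {p:.2f}")
# Training consists of nudging W1 and W2 until these probabilities track
# the patterns in the data; nothing in here says *what* was learned.
```

Scale those two little matrices up by nine or ten orders of magnitude and you have the modern recipe.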
(takes a big breath) This "thing" has learned. To imitate. Human. Behavior.
Problem is, we don't know what "this thing" actually is; we just know that *it* can imitate humans.
You caught that?
What you have to understand is that we don't actually know what internal models it creates. We don't know what patterns it extracted or internalized from the data we fed it, we don't know what internal rules decide its behavior, we don't know what is going on inside there: current LLMs are a black box. We don't know what it learned, we don't know what its fundamental values are, we don't know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn't it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software, where a programmer will sit down and build the thing line by line, with all its behaviors specified. It's more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don't know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that trying to go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and it has been making some moderate progress lately. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
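To give a flavor of what interpretability work can look like, here is one common move, a linear "probe", sketched on synthetic data. Everything here is fabricated for illustration: real probes are fit on activations recorded from an actual model, not on random vectors with a concept direction planted in them.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 500, 32

# Pretend these are hidden-layer activations, with one direction secretly
# encoding a concept (e.g. "the statement is true").
concept_direction = rng.normal(size=dim)
labels = rng.integers(0, 2, size=n)
activations = rng.normal(size=(n, dim)) + np.outer(labels, concept_direction)

# Logistic-regression probe trained by gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(300):
    z = activations @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # predicted probability of the concept
    grad = p - labels
    w -= 0.1 * (activations.T @ grad) / n
    b -= 0.1 * grad.mean()

accuracy = (((activations @ w + b) > 0) == labels).mean()
print(f"probe accuracy: {accuracy:.0%}")  # high accuracy => concept is readable
```

If the probe reads the concept out reliably, you have learned one small thing about what's inside the box; real interpretability work is this, repeated at scale, against models that were never built to be read.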
Phew! OK, so, now that this is all out of the way, I can go on to the last subject before I move on to part two of this video: the character of the hour, the man, the myth, the legend. The modern-day Cassandra. Mr. Chicken Little himself! Sci-fi author extraordinaire! The madman! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979. Wait, what the fuck, September eleventh? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that's terrible… Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. A very eccentric man, and an AI doomer, convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a Dyson sphere around the sun, and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it; to quote properly (grabs a piece of paper and very pointedly reads from it): turn the cosmos into tiny squiggly molecules resembling paperclips, whose configuration just so happens to fulfill the strange, alien, unfathomable terminal goal they ended up developing in training. So, you know, something totally different.
And he is utterly convinced of this idea, and has been for over a decade now. Not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact, most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that, in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create an unaligned AGI, we will have created an agent that doesn't care about humans, that cares about something else entirely irrelevant to us, and that will seek to maximize that goal; and because it will be vastly more intelligent than humans, we won't be able to stop it. In fact, not only will we not be able to stop it, there won't be a fight at all. It will make its plans for world domination in secret, without us even detecting it, and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important. It all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as "how to kill all humans without being detected or stopped". And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence; even if you are supposedly smarter than a human, surely you wouldn't be capable of just taking over the world unimpeded; intelligence is not this end-all-be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio, and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn't *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all of it. Coming up with the plan, convincing people to follow it, delegating the tasks to the appropriate subagents: it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this; I am merely explaining his ideas.
But remember, it doesn't stop there. Like I said during his intro, he believes there will be "no fire alarm". In fact, for all we know, maybe AGI has already been created and is merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (To be fair, he doesn't think this is happening right now, but with the next iteration of GPT? GPT-5 or 6? Well, who knows.) He thinks that the entire world should halt AI research and punish, with multilateral international treaties, any group or nation that doesn't stop, going as far as making military strikes on GPU farms a sanction under those treaties.
What's more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it; we are not showing any signs of making headway with alignment, and no one is incentivized to slow down. Recently he wrote an article called "Death with Dignity" where he essentially says all this: AGI will destroy us, there is no point in planning for the future or having children, and we should act as if we are already dead. This doesn't mean we should stop fighting or stop trying to find ways to align AGI, impossible as it may seem, but merely that we should have the basic dignity of acknowledging that we are probably not going to win. In every interview I've seen with the guy, he sounds fairly defeatist and, honestly, kind of depressed. He truly seems to think it's hopeless: if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated, keep making the tools to develop AI widely and publicly available for anyone to grab and do as they please with, keep connecting every AI to the internet and to all mobile devices, giving it instant access to humanity, and, worst of all, keep teaching it how to code. From his perspective, it really seems like people are in a rush to create the most unsecured, widely available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the Antichrist; we are going to receive it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are at that specific level of alarm. Opinions vary across the field, and from what I understand this level of hopelessness and defeatism is a minority opinion.
I WILL say, however, that what is NOT a minority opinion is that AGI IS actually dangerous: maybe not quite on the level of immediate, inevitable, and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and I would not consider it an idea that experts dismiss out of hand.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. Yes, an unaligned AGI would be an agent smarter than a human, capable of causing vast harm to humanity, and with no human qualms or limitations to stop it. I believe this is not just possible but probable, and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said, I do have one key disagreement with Yudkowsky. And part of the reason I made this video was so that I could present this counterargument, and maybe he, or someone who thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don't; that would be really depressing).
Finally, we can move on to part 2
PART TWO: MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don't I? As I said, I am no expert, and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews the guy has done over the past year, I have watched most of his debates, and I have followed him on Twitter for years now. (Also, to be clear, I AM a fan of the guy: I have read HPMOR, Three Worlds Collide, The Dark Lord's Answer, A Girl Intercorrupted, the Sequences, and I TRIED to read Planecrash; that last one didn't work out so well for me.) My point is, in all the material I have seen of Eliezer, I don't recall anyone ever giving him quite the specific argument I'm about to give.
It's a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I DO believe alignment is really hard. My key disagreement is specifically with the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity's supposed lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, with potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not be so smart that humans can't do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. "AIntibodies."
In the past, humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind sold to children, would come with uranium to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares, we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal, hawkish leaders were able to pause and reconsider using it as a weapon; so scared that we overregulated the technology to the point of it becoming almost economically unviable to deploy; we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenals.
This is all proof of concept that, no matter how alluring a technology may be, if we are scared enough of it, we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try, we don't get a second chance. Here is where I think he is wrong: I think that if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps something scary, perhaps something really scary, but unlikely to be on the level of all humans dropping dead simultaneously from diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won't be able to ignore.
Now, WHY do I think this? What am I basing this on? I will not be as hyperbolic as other Yudkowsky detractors and claim that he says AGI will be basically a god. The AGI Yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, the dangerous superintelligent AGI Yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet Earth. It would be humanity's greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, to create a powerful superintelligent AGI without flaws, without bugs, without glitches, would be an incredibly complex, specific, particular, and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that's easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. What we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans in which it will not be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, crashing into outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I'm not saying an AGI capable of doing this won't be possible someday; I'm saying that to create an AGI capable of doing this, on the first try, without a hitch, is probably really, really, really hard for humans to do. I'm saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the precise set of layers and weights and biases that gives rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I'm saying that AGI, when it fails, when humans screw it up, doesn't suddenly become more powerful than we ever expected; it's more likely that it just fails and collapses. To turn one of Eliezer's examples against him: when you screw up a rocket, it doesn't accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don't get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to build, and that if you fail at building unaligned AGI, you don't get an unaligned AGI; you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I'd say! It means there is SOME safety margin, some space to screw up before we need to really start worrying. Furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans will probably have screwed something up; we will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won't be a threat to all of humanity.
Now here comes the hypothetical back-and-forth, because I'm not stupid and I can try to anticipate what Yudkowsky might argue back and answer it before he says it. (Although I believe the guy is probably smarter than me, and if I follow his logic, I probably can't actually anticipate what he would argue to prove me wrong, much like I can't predict what moves Magnus Carlsen would make in a game of chess against me. I SHOULD predict that him proving me wrong is the likeliest outcome, even if I can't picture how he will do it. But you see, I believe in a little thing called debating with dignity. Wink.)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and try to become smarter; it would lie and pretend to be an aligned AGI so that it can trick us into giving it access to more compute, or just so it can bide its time and create an AGI smarter than itself. So even if we don't create a perfect unaligned AGI, this imperfect AGI would try to create one and succeed, and then THAT new AGI would be the world-ender to worry about.
So, two things to that. First, this is filled with a lot of assumptions whose likelihood I don't know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI better than itself. My priors on all these things are dubious at best. Second, it feels like kicking the can down the road. I don't think an AGI capable of all of this is trivial to create on a first attempt. I think it's more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won't be smart enough to pull it off effortlessly and flawlessly, because we humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn't argue that; maybe he would come up with some better, more insightful response I can't anticipate. If so, I'm waiting eagerly (although not TOO eagerly) for it.
PART THREE: CONCLUSION
So.
After all that, what is there left to say? Well, if everything I said checks out, then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point, as well as with the basic arguments supporting the concept of AI risk: why it's something to be taken seriously and not just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum, so that if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles' AI risk series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky's argument, you can look up Paul Christiano or Robin Hanson, both very smart people who have had very smart debates on the subject against Eliezer.
The second purpose here was to provide an argument against Yudkowsky's brand of doomerism, both so that it can be accepted if proven right and properly refuted if proven wrong. Again, I really hope it's not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn't make it any worse. If the sky is blue, I want to believe that the sky is blue; and if the sky is not blue, then I don't want to believe the sky is blue.
This has been a presentation by FIP Industries. Thanks for watching.
Ivan’s December 25 1991 astrology chart
@mslorelina-blog asked me to explain the astrology charts of the (official) fall of the USSR and the (official) formation of the Russian Federation from my Ivan's-birthday post, so here I go ❤️
DISCLAIMER FIRST:
I'm just an enthusiastic learner of astrology and not an actual astrologer, so don't be too serious abt this. These are real-life charts, but since this is for fandom purposes, I'll be tying them into creative fandom endeavors (which means I'll be talking about Ivan as a fictional character according to the charts).
I AM USING THE SIDEREAL VEDIC SYSTEM. If you don't know what that is, please google it, but know that I'm not using the more common Tropical system. Signs shift a bit in sidereal (for example, a Pisces in tropical is an Aquarius in sidereal), so if you think the charts look wrong, it's likely because I'm not using the system you have in mind.
Without further ado:
Fall of the USSR. December 25, 1991, 19:35, Moscow
Sun sign: Sagittarius (Mula nakshatra)
Moon sign: Leo (Magha nakshatra)
Ascendant (lagna) sign: Cancer (Ashlesha nakshatra)
Other special features:
Chart ruler/lagnesh Moon in Magha (2nd house)
Atmakaraka (AK — soul significator, strongest planet): Venus in Libra (Vishakha nakshatra) 4th house
Darakaraka (DK — spouse significator, weakest planet): Moon in Magha (2nd house)
Nodal axis: North Node (Rahu) in Sagittarius (Purva Ashadha nakshatra), 6th house, and South Node (Ketu) in Gemini (Ardra nakshatra), 12th house
How would the chart mean for Ivan as a character?
Firstly, according to the big 3, the general description goes that Ivan will be aggressive and expansive (Sagittarius is a warring sign), proud and dignified but conservative (Leo), and sensitive and nurturing but emotionally vulnerable and overwhelming (Cancer).
His chart lord and Moon are in Leo, Magha (2nd house of family). First and foremost, Ivan is tied to his (blood) family. He is possessive over them and is particularly spoiled by them (spoiler alert: I share this exact placement, so I know about this firsthand). This placement means that he has his family wrapped around his little finger, and he is treated as 'a king of his lair' at the same time as he is constantly babied. Magha nakshatra is ruled by the planet Ketu (South Node) and represents ancestors; the natives feel as if they have a duty to uphold family traditions, but at the same time they have their ancestors watching over them and blessing them. This is a powerful star of manifestation and royalty, as 'the power you have is power you've already wielded since birth'. Magha natives usually have powerful ancestors, or ones with interesting stories and karma to pass down for the natives to settle. As much as Magha natives tend to be conservative tho, we also tend to break away from some traditions in order to improve our lineage and ancestral karma.
At the same time, this is his weakest planet (DK; weakest by degree), so perhaps he would feel as if he is somewhat 'cut out of these ancestral links'. However, it being his chart lord, how he feels about his family and lineage (and personal resources; family is basically one of the personal resources anyone could have) is a constantly recurring theme of his life. These are also things that he constantly seeks in his life.
Ivan has Moon conjunct Jupiter in the 2nd house. Jupiter here can be bad and good at the same time: as Jupiter is a planet that expands things it comes into contact with, sometimes it expands Ivan's resources and wealth, but it can also increase family drama. Ivan is also highly generous toward his family.
Ivan has Atmakaraka (AK) Venus in Libra, Vishakha, 4th house. The 4th house denotes motherland and/or mother, and having the strongest planet here means he is very dearly tied to his motherland and his mother (if he had one). Venus in Libra is domicile/right at home, which means the house is strong and sturdy. Being Venus, Ivan likes good things, including beautiful things, in his homeland, and is likely to get them just by attraction, as Venus ultimately pulls you toward the things it signifies in the house. Moreover, Vishakha nakshatra is a particularly ambitious star; the natives are sure to get what they want and will work hard to attain it. Venus in his 4th house also rules his 11th house of achievement and social circle; this means that Ivan most often befriends people who are like him, and his achievements and desires revolve around his motherland. This is quite a placement for a homebody.
Ivan has Mars in Jyeshtha (Scorpio) in the 5th house of creativity and self-expression, also domicile. Mars here also rules his 10th house (career, reputation and social status), which means that his reputation and social status are what he makes of them. Jyeshtha is quite a difficult star tho, as the natives get caught in a cycle of constantly having to prove themselves through hard work, and not only for their own sake, as the natives are also responsible for others under their wing, or feel they have to be because others often rely on them. Mars here makes Ivan aggressive in attaining those things, and he can also get stalkerish 😄. Mars here is conjunct Mercury, which rules his 3rd house of neighbors/courage and 12th house of loss or foreign places, so Ivan is sure to extend aggression toward those people as well. Since Mars is conjunct Mercury tho, which is a planet of communication, fun and tact, it's honestly not all aggressive and he can be sensible.
Ivan's Sun is in Sagittarius, Mula nakshatra (6th house). The 6th house is the house of enemies, obstacles, and also illness/disease. Mula nakshatra is a powerful star that means root or foundation; natives of this placement destroy things or put them to ruin (mainly emotionally, not always physically) to return them to a blank slate for new growth. With his Sun here, which is a natural malefic, Ivan is someone who is sure to beat his enemies, of which he has a lot, because he has the North Node (Rahu) conjunct his Sun. Rahu works by amplifying, to a high degree, the significations of the house it sits in and the planet in its vicinity. Rahu in particular also makes Ivan highly ambitious about defeating obstacles, of which he has many, since Rahu basically amplifies both his obstacles and his desire to defeat them. It also happens that many of his obstacles come from his family, due to the fact that the dispositor of his Sun and Rahu is Jupiter in his 2nd house of family. However, the lord of his Moon/Leo house is also the Sun in his 6th house, which means people who were once enemies can become family. Pretty complicated placement, I would say.
With Rahu in the 6th house and the South Node (Ketu) in the 12th house, Ivan is someone who is worldly and not very spiritual. Ketu represents what you were already bestowed with in your past life, and Rahu your unmet desires in this current life. The 12th house indicates spirituality, solitude and detachment, and Ketu here means Ivan doesn't need any of those things and thus won't seek them out.
Ivan has Saturn in Capricorn (Shravana nakshatra) in the 7th house of relationships and business. This makes Ivan conservative in how he approaches relationships, but he is also highly loyal toward his (limited) partners. It makes him tend to only look for those who are his 'traditional' partners for business. At the same time, those partners also tend to be loyal toward him.
Also important to note is that Ivan's ascendant is in Cancer, Ashlesha nakshatra, and it is at an especially difficult point called gandanta. Gandanta means 'knot at the end' and signifies the highest degrees of a water sign PLUS the lowest degrees of its neighboring fire sign, in this case the junction between Cancer and Leo, that is, between Ashlesha and Magha. At the highest degree of Cancer, as shown here, Ivan 'holds tightly to his self and doesn't want to let go', even tho the obvious path is to let go, since the water of Cancer has nowhere to go but into the fire and smoke of Leo. Only when you let go of your past issues can you finally be liberated and go on to manifest reality, as shown in Leo. And guess what, Ivan's Cancer lord is the Moon in Magha, so he does have the ability to liberate himself from the shackles of Ashlesha. Ashlesha itself is quite a difficult star, one that is highly possessive, dependent, and regarded as 'poisonous' (symbolized by a snake) due to its steadfast efforts in keeping what is his as his, even if he has to employ appalling measures for it (or what is thought of as appalling). Ashlesha at its highest degree/gandanta point is the most possessive and unrelenting, and yet it also has the most potential to shed its past baggage. This is very befitting of the end of the USSR.
Official formation of the Russian Federation. December 25, 1991, 19:45, Moscow
It remains the same, except that the ascendant changes from Cancer to Leo and thus the house signs shift.
Sun: Sagittarius (Mula) 5th house (chart ruler/lagnesh)
Moon: Leo (Magha) 1st house
Lagna: Leo (Magha)
AK Vishakha Venus 3rd house, DK Magha Moon 1st house
Nodal axis: Ketu in Gemini 11th house, Rahu in Sagittarius 5th house
The nuances are considerably different this time, but not by a lot. Ivan is nurturing and emotional (lunar influence: from being ruled by the Moon to having a 1st house Moon). Since this is Magha, even though Ivan has changed his nation's system, he essentially stays the same and still carries the 'karma of his past life'. An actually logical notion, because nations hardly ever change their core even when they've donned different clothes, for better or worse. His AK changes from motherland-focused to neighbor-focused, but the motherland remains an important focus due to the Mars-Mercury combo. Now Ivan's purpose and main focus in life is genuine self-expression (5H), compared to beating enemies (6H), though 5H can also mean child-rearing or even childbirth, whatever that means in relation to him, lol. It can also mean romance 😂 (or perhaps just genuine love for himself, bc the dispositor of Rahu and the Sun is in his 1st house of self?). Ivan now has Saturn in 6H; beating enemies is a slow but sure process, but this time he's got stability compared to the utter chaos of the previous chart. Enemies might be traditional as well, as in, it's just the same kind of enemies he's had before. Unfortunately (or fortunately) he can kind of sleep on them all tho! Funny placement, because he gets his relationships from his enemies. Also, with 11H Ketu, Ivan willingly isolates himself more from his wider social circle and doesn't care much about it beyond his neighbors or kin.
Whew! That was pretty long. I have to say that I won't incorporate all of this into my personal headcanons about Ivan, but I think these are interesting and I noted some down.
A week after its algorithms advised people to eat rocks and put glue on pizza, Google admitted Thursday that it needed to make adjustments to its bold new generative AI search feature. The episode highlights the risks of Google’s aggressive drive to commercialize generative AI—and also the treacherous and fundamental limitations of that technology.
Google's AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI's ChatGPT, to generate written answers to some search queries by summarizing information found online. The current AI boom is built around LLMs' impressive fluency with text, but the software can also use that facility to put a convincing gloss on untruths or errors. Using the technology to summarize online information promises to make search results easier to digest, but it is hazardous when online sources are contradictory or when people may use the information to make important decisions.
“You can get a quick snappy prototype now fairly quickly with an LLM, but to actually make it so that it doesn't tell you to eat rocks takes a lot of work,” says Richard Socher, who made key contributions to AI for language as a researcher and, in late 2021, launched an AI-centric search engine called You.com.
Socher says wrangling LLMs takes considerable effort because the underlying technology has no real understanding of the world and because the web is riddled with untrustworthy information. “In some cases it is better to actually not just give you an answer, or to show you multiple different viewpoints,” he says.
Google’s head of search Liz Reid said in the company’s blog post late Thursday that it did extensive testing ahead of launching AI Overviews. But she added that errors like the rock eating and glue pizza examples—in which Google’s algorithms pulled information from a satirical article and jocular Reddit comment, respectively—had prompted additional changes. They include better detection of “nonsensical queries,” Google says, and making the system rely less heavily on user-generated content.
You.com routinely avoids the kinds of errors displayed by Google’s AI Overviews, Socher says, because his company developed about a dozen tricks to keep LLMs from misbehaving when used for search.
“We are more accurate because we put a lot of resources into being more accurate,” Socher says. Among other things, You.com uses a custom-built web index designed to help LLMs steer clear of incorrect information. It also selects from multiple different LLMs to answer specific queries, and it uses a citation mechanism that can explain when sources are contradictory. Still, getting AI search right is tricky. WIRED found on Friday that You.com failed to correctly answer a query that has been known to trip up other AI systems, stating that “based on the information available, there are no African nations whose names start with the letter ‘K.’” In previous tests, it had aced the query.
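The pattern Socher describes, grounding answers in retrieved sources and making those sources (and their disagreements) explicit, can be sketched generically. This is an illustration only, not You.com's or Google's actual code; `call_llm` is a hypothetical placeholder standing in for whatever model API a real system would use.

```python
from typing import List, Dict

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real model API call here.
    return "Stub answer citing [1]; note that [1] and [2] disagree on the date."

def answer_with_citations(query: str, documents: List[Dict[str, str]]) -> str:
    # Number each retrieved source so the model can cite it inline.
    sources = "\n".join(
        f"[{i}] {doc['url']}: {doc['text']}" for i, doc in enumerate(documents, 1)
    )
    prompt = (
        "Answer using ONLY the numbered sources below. Cite them like [1]. "
        "If the sources contradict each other, say so and show both views.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

docs = [
    {"url": "https://example.com/a", "text": "The bridge opened in 1937."},
    {"url": "https://example.com/b", "text": "The bridge opened in 1936."},
]
print(answer_with_citations("When did the bridge open?", docs))
```

The design point is that the retrieval layer, not the model's memorized training data, determines what the answer can say, which is why the quality of the underlying index matters so much.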
Google’s generative AI upgrade to its most widely used and lucrative product is part of a tech-industry-wide reboot inspired by OpenAI’s release of the chatbot ChatGPT in November 2022. A couple of months after ChatGPT debuted, Microsoft, a key partner of OpenAI, used its technology to upgrade its also-ran search engine Bing. The upgraded Bing was beset by AI-generated errors and odd behavior, but the company’s CEO, Satya Nadella, said that the move was designed to challenge Google, saying “I want people to know we made them dance.”
Some experts feel that Google rushed its AI upgrade. “I’m surprised they launched it as it is for as many queries—medical, financial queries—I thought they’d be more careful,” says Barry Schwartz, news editor at Search Engine Land, a publication that tracks the search industry. The company should have better anticipated that some people would intentionally try to trip up AI Overviews, he adds. “Google has to be smart about that,” Schwartz says, especially when they're showing the results as default on their most valuable product.
Lily Ray, a search engine optimization consultant, was for a year a beta tester of the prototype that preceded AI Overviews, which Google called Search Generative Experience. She says she was unsurprised to see the errors that appeared last week given how the previous version tended to go awry. “I think it’s virtually impossible for it to always get everything right,” Ray says. “That’s the nature of AI.”
Even if blatant errors like suggesting people eat rocks become less common, AI search can fail in other ways. Ray has documented more subtle problems with AI Overviews, including summaries that sometimes draw on poor sources such as sites that are from another region or even defunct websites—something she says could provide less useful information to users who are hunting for product recommendations, for instance. Those who work on optimizing content for Google’s Search algorithm are still trying to understand what’s going on. “Within our industry right now, the level of confusion is on the charts,” she says.
Even if industry experts and consumers get more familiar with how the new Google search behaves, don’t expect it to stop making mistakes. Daniel Griffin, a search consultant and researcher who is developing tools to make it easy to compare different AI-powered search services, says that Google faced similar problems when it launched Featured Snippets, which answered queries with text quoted from websites, in 2014.
Griffin says he expects Google to iron out some of the most glaring problems with AI Overviews, but that it’s important to remember no one has solved the problem of LLMs failing to grasp what is true, or their tendency to fabricate information. “It’s not just a problem with AI,” he says. “It’s the web, it’s the world. There’s not really a truth, necessarily.”
Astral body lore !UNDER CONSTRUCTION!
This post has a lot of information that I got from Starr⭐️ over on YouTube! So huge shoutout to her for all this!!!
She’s also got some killer music taste, (I’m listening to her Castor themed playlist as of writing this lol) so be sure to check her out on Spotify as well if you’d like!
CHARACTER BOIS!!!
Castor
Pollux
Gemini
Nebula
Taurus
Leo
Libra
Aries
The antimatter
GENERAL POINT OF THE ASTRALS
The main point of the astral bodies' existence is to postpone the end of the universe as long as possible. The way they do this is by fighting the incarnations of antimatter that pop up all over the universe. Each astral is assigned a role. For example: Castor and Pollux scout out worlds to see if they are corrupted by antimatter. If so, Taurus and Leo destroy said planet before it can spread.
FACTS ABOUT some OF THE ASTRALS
Honorable mentions that I think are too small of facts/too obvious to get their own sections:
Since the astral bodies are made out of light (mentioned multiple times throughout the show, but I think the first mention is in 'EARTH and POLLUX Become BEST FRIENDS?! In VRCHAT'), they can move through walls and most physical structures and surfaces.
It's been shown that the astrals are very bitchy about their schedules / that Pisces and Sagittarius are friends(?). As shown in the episode 'Castor and Pollux's PLAN for LUNAR?! in VRCHAT', Castor and Pollux said this: "everyone's getting angry! I mean- they like things to be punctual! They like things on their own timetable." "Those two are very awful about it." (Note: in this scene the twins are complaining about how the other astrals are rushing them on finding out if Lunar's a threat or not, and they're complaining about Leo and Taurus in particular.) *sigh* "Pisces and Sagittarius are likely doing their own thing.". I looked up the personalities of Pisces and Sagittarius, and they generally seem like they'd probably be nicer (if google is correct), and I might be jumping to conclusions on this one, but I think that they might be friends, due to the way that Castor mentioned them together, and google says that they have very high friendship potential.
The astrals have their own culture. Mentioned in the current latest episode as of writing this, 'Gemini SENDS A MESSAGE in VRChat': Gemini mentions that the stars have their own etiquette, but never really gave any context beyond that, since they thought that if they told Lunar, it would break his mind, WHICH IS COMPLETELY UNFAIR. BESTIE, I DO NOT CARE IF IT MAKES ME INSANE; IN CASE YOU COULDN'T TELL FROM ME WRITING THIS ENTIRE ESSAY, THE ADDED INFORMATION WILL ONLY MAKE MY HYPERFIXATION STRONGER. *cough cough* So anyways, yeah, this section is basically just a placeholder till we get more info on their culture.
The astrals can, for some reason, not be seen by colorblind creatures??? Mentioned in the episode 'Lunar Is FRIEND ZONED! in VRChat', because there was a deer standing near Lunar, and Gemini said "it must be comfortable in your presence", and when Lunar asked why it didn't care about Gemini, they responded with "no, we are more or less invisible to them. See?" Then they started to run around the deer. And when asked why by Lunar, they replied with "most animals on your planet are colorblind. We are light. If light can't reflect the way it's meant to, then we kinda just come off as invisible." WHICH MAKES NO F*CKING SENSE AT ALL!!! Cuz like, yes, they wouldn't be able to see them in their full glory, but if an animal were to look at them, they'd at least see a glowing figure towering above them!!! So yeah, it was probably just a bit to explain why a deer would be standing so close to them, but it still makes no sense.
The astral bodies can naturally produce star power without creating any antimatter. (Mentioned in the episode 'Moon is THREATENED by GEMINI?! in VRCHAT', I think.)
They also apparently have a jail. They can also essentially perma-ban people from L I F E. Not killing them: when they do this, apparently said victim gets sucked into the void and is in constant pain. So that's fun- (mentioned in the episode 'Gemini CONFRONTS Lunar again in VRChat').
Whenever the astrals are around antimatter, they start to feel progressively more nauseous. And wherever/whoever is emitting said antimatter smells incredibly potent, to the point where they can taste it. And it apparently smells, and I quote, "like if somebody mixed anchovies with rotten eggs". (Judging from Lunar's reaction to Monty's antimatter-infested arm in the episode 'Dazzle's NEW JOB in VRChat'.)
And lastly, and probably the most obvious of all, the astral bodies do not need to eat or sleep, since they are made of light.
#sun and moon show lunar#sun and moon show spoilers#sun and moon show#lunar and earth show lunar#lunar and earth show pollux#lunar and earth show#lunar and earth show castor#lore analysis#Update: 04-23-2024 y’all’s favorite batshit crazy grandpa has figured out how to link all of the mentioned episodes :)#wow I know I’m so tec savvy#laes astral body#tag so starr can find this
Lawmakers have started reacting to Google’s admitted racial and historical bias, and one Republican senator wants to see the "breakup" of one of the most well-known and profitable tech companies.
"This is one of the most dangerous companies in the world. It actively solicits and forces left-wing bias down the throats of the American nation," Sen. J.D. Vance, R-Ohio, told FOX Business’ Maria Bartiromo in a "Sunday Morning Futures" exclusive interview.
The Alphabet-owned tech giant is scrambling to right the ship after pulling the plug on Gemini's image generation features, with CEO Sundar Pichai telling employees last Tuesday the company is working "around the clock" to fix the tool's bias, calling the images generated by the model "completely unacceptable."
Social media users had flagged Gemini was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.
Fox News Digital previously tested Gemini multiple times to see what kind of responses it would offer: when the AI was asked to show a picture of a White person, Gemini said it could not fulfill the request because it "reinforces harmful stereotypes and generalizations about people based on their race."
Sen. Vance expanded on how the alleged bias can have ripple effects on other information sectors, including politics.
"Think about the effect this has on the presidential election when unbiased, non-committed voters are searching things about Donald Trump, and also about Joe Biden, right before they cast their ballots," the senator said.
"We cannot allow a company that is in bed with some of the worst people in the world to control the flow of information and to bias it in a left-wing direction," he continued. "We [have] got to break this company up and bring back some common sense standards."
Addressing the likelihood that legislative action may be taken against Google, the senator claimed there are "growing calls" across the political spectrum for a shakedown on the tech giant, noting it’s gotten "too big, too powerful."
"My friends on the left, Maria, say they feel like our democracy is under threat. The biggest way our democracy is under threat is you have these massive, international companies that are sort of controlling what we think, what we read, what information we consume," Vance said.
"That's a big problem. But I actually do think that there's going to be growing momentum to rein Google in," he added. "We saw this with the release of Gemini. This is a radically left-wing company that is trying to control how we consume information. If we let that happen, we are going to get exactly what we deserve."
On Saturday, House Judiciary Committee Chairman Jim Jordan, R-Ohio, wrote a letter to Google parent company Alphabet, demanding that the company explain whether the Biden administration influenced Gemini’s A.I. error.
Book Review: The Little Book of Curses and Maledictions for Everyday Use
This is: The Little Book of Curses and Maledictions for Everyday Use by Dawn Rae Downton
Rating: 2/10
I really don't know what I expected going into this, but yet again this proves that making up your own spellwork instead of following a spellbook is your best friend. Especially with hexing/left-handed work.
Pros: The information about reversing/cancelling a hex is pretty spot-on, I think? I like the way the author explains what should warrant some cursing and what shouldn't: to exhaust all other options first. Which I 100% agree with. The author also uses poppets, which I do as well. Her method of making them is simple and easy to do. A good portion of the book seems hand-crafted/not stolen, which is a major plus for me, even if she does appropriate more towards the end of the book.
Cons: There are no sources listed. Absolutely none at all. Even for a book from 2009, this is an absolute sin in an informational text. There is NO excuse, especially when the author mentions numerous spirits that even a good googling turns up little to no results for.
The author's wording is also a bit suspect. It being 2009, I know the g-slur, white-and-black-magic talk, etc. were super common then, so I'm not holding that against her, but the bits about "get his head examined and get unseated from his job" kinda… weirded me out? Maybe it's because I'm neurodivergent myself, but it feels like mental health is kinda damned in this book. Her lack of explanation really sticks out: she does not explain why you use what you use.
For example:
Some of the spells don't make a lot of sense, like "The Promise Keeper" on page 60. Who are you summoning? The soul of the person who betrayed you? Is it really so easy to rip the soul out of someone and bring it to you directly? Then there's the one invoking Rae and Skogsraet for when you're collateral damage between two parties?? What is THAT going to do? There's no rhyme or reason to the spell at all. (And this is a folkloric piece; I do use folklore in my work, but not like this, not so… literally.)
She also misuses symbols. A few examples: a Christian punk symbol is used as a summoning symbol for a spirit, and the Gemini symbol is one of four used to summon one of the demons mentioned in the Lesser Key of Solomon.
Her information is also pretty dubious at best, claiming that Iwo is a deity/spirit when it's more of a… Greek ideology. The Celtic curse she mentions in the book isn't one I could find any information on, despite her calling it an ancient Celtic one. Same for the Pollard curse she mentions: I cannot for the life of me find any information about a famous blackguard named Pollard in British history. She appropriates from Jewish mysticism and Hinduism, and actively blends Thelema into the rest of the book as well. The part appropriating Jewish mysticism calls it an "ancient Egyptian summoning of angels", when the angels mentioned are… definitely not something from Egypt. I recognized some as Christian archangels, but the rest were all strictly Jewish angels.
I also understand that my moral code is going to be different from an author's; that's alright. But the wording of some of the spells is… laughable. I will provide some examples! Love spells are one thing; spells to isolate your significant other are another, and this book has a bit of both.
In short: I can't recommend this book, though it did give me a few laughs and a couple of days' worth of googling.
Below are some examples from the book that were appropriative and/or confusing, or that struck me as strange or laughable. Peek that one crossroads curse to rid your computer of viruses!
Me sorprende que Macaque este de acuerdo que nombre a los niños por fanchilds de un fanfic del Viaje al oeste (Viaje al sur)
translated via google:
"I'm surprised that Macaque agrees that I name the children after fanchildren from a Journey to the West fanfic (Journey to the South)"
I feel like Macaque didn't even know about "Journey to the South" unless he was specifically looking for information about Sun Wukong online. XD
He would definitely know about and have read "Journey to the West", just to get an idea of how the world remembers his and Wukong's story. But I don't think he'd know about the non-canonical fan sequels. As a goof, I could see Macaque googling whether or not Wukong had kids and/or romantic partners in the last 1000 years and getting "Journey to the South" as a result. He'd be very confused.
Macaque: "So... you've had kids?" Wukong, chokes on water: "W-what?!" Macaque: "The other book told me you had like three kids. One really big scary daughter, and a pair of twin sons that didn't do much." Wukong (has read every adaptation of Jttw there is): "...OH! You mean the fan-sequel! That was written in the 1580s by some guys I didn't even know. It's part of a four-book shenmo series about another chinese folk hero. I only get mentioned in like two chapters." Macaque, six ears flutter: *is relieved to know SWK is telling the truth* "...ok, that definitely explains some things." Wukong, cheeky smile: "Were you worried that I had kids without you?" ( ¬‿¬) Macaque, tsundere-mode: "N-no! I was worried that some poor child had inherited your stupidity!" ( っ- ‸ – c) Wukong: "Big words coming from someone having a baby with me." Macaque: *is pregnant with MK during the AU* " "...touché"
Many years later as they're planning for more kids.
Macaque: "I'm a little stuck on names right now. My brain keeps going to weird theatre characters." Wukong: "How about Yuebei for a girl?" Macaque: "....lunar apogee? You mean like when the moon is farthest from earth?" Wukong, doesn't want to admit that he's read all Jttw fanfics and liked the name: "Ye... yeah. I wanna keep the space theme names a thing." Macaque: "....it is a really cute name. Let's keep it for if we have another girl." Wukong: *phew!*
Later once Yuebei and the Lunar Node Twins are born.
Macaque, holding Yuebei: "You totally named our kids after fanfic characters didn't you?" Wukong, holding the twin boys: "Haha! Wondered how long 'til you noticed!" Macaque: "Jerk. Now I'm attached to the names." Wukong: "In my defence you were gonna name the boys Castor and Pollux." Macaque: "That was until I discovered that they wouldn't be Geminis." Wukong: "Gods, we're both nerds." Macaque, lovingly: "Yeah..."
#the monkey king and the infant au#the monkey king and the infant#lmk shadowpeach au#shadowpeach being parents#shadowpeach#pregnancy tw#lmk yuebei xing#yuebei xing#journey to the south#lmk lunar nodelets#lmk jidu and luohou#jidu and luohou#lmk au#lego monkie kid#lmk fan children