#new human interfaces
Explore tagged Tumblr posts
ookwrd · 25 days ago
Text
Life After Programming: Embracing Human-Machine Symbiosis in the Age of AI
As AI continues to evolve, conversations have started questioning the future of traditional programming and computer science education. The rise of prompt engineering—the art of crafting inputs to lead AI models to generating specific outputs—has led many to believe that mastering this new skill could replace the need for deep computational expertise. While this perspective does capture a real…
0 notes
marlynnofmany · 2 years ago
Text
Starseed Apples
“Here you go,” I said, putting down the last box. “Uncut fabric, plumbing supplies, and three cases with a fungus biohazard label. Do I even want to know what’s in those?” I cast a curious look at my fellow human as I handed over the signing pad. She was shorter and rounder than I was, dressed in a crisp uniform of a type I didn’t recognize. Big pockets everywhere.
She signed with a wry grin. “Those are dirt.”
“Dirt?” I repeated, looking around the admittedly spotless loading dock of this particular space station. “Dirt warrants a biohazard here?”
“Oh, you have no idea,” she said, handing the pad back. “Organic mulch that could contain anything from decomposed animals to fungus to poop? With uncountable amounts of bacterial life and potential germs? We’re lucky they only focused on the mold aspect!”
“Hm, good point,” I said.
Zhee, who was busy moving boxes off the hover sled, muttered something disparaging. I expected him to complain about how gross it all was, since he was always the first to point out when humans did something to offend his bug-alien sensibilities, but it sounded like he was griping about the strict station rules this time.
The human continued. “We have to keep a clean room between the greenhouse area and everything else. Even there, most things are in pots. We’ve got a great crop from Johnny Starseed right now!”
I’d heard that name before. “Oh, was he the one who sells little potted—”
“Apple trees, yeah,” she said. “Tiny and convenient, but they make an impressive number of apples as long as you feed ‘em quality dirt.” She bent down to pat a box.
Zhee finished freeing the sled. “Reasonable business plan,” he said, sounding almost complimentary.
“The guy named himself after Johnny Appleseed,” I told Zhee. “A human from centuries ago who got famous for traveling around and setting up apple orchards on Earth. Everybody likes a guy who brings food wherever he goes. And drink — I think some of those apples were supposed to be the cider variety.”
Zhee flicked his antennae. “Sounds like a very human thing to do,” he said drily.
“Have you tried the Starseed Reds?” the other human asked. “They’re very good.”
“No I haven’t, but I’d like to!” I said. “I’ve heard good things. I was kind of hoping to cross paths with him at some point. I wouldn’t mind a tiny apple tree in my quarters. Of course, the cat might get at it, and I’d probably have to find a grow lamp…”
She opened a boxy hip pocket, and pulled out the shiniest red apple I’d seen in a while. “Here you go.”
“Thank you!” I said, taking it eagerly. “That’s very generous!”
She waved it off. “Like I said, we’ve got a big crop. And I’ve got a different one that I’m saving for when I get off shift.” From another pocket, she produced a red apple with distinct orange stripes. “Which should be as soon as I get the supplies back to base.”
I laughed. “Is that the booze kind? I didn’t think those were real!”
“Oh yes,” she said with relish, putting it back in the pocket. “Starseed Cider Apples, no fermenting required!”
Zhee cocked his head, faceted eyes looking at both of us. “Poisonous apples?”
“Alcoholic apples,” I corrected, knowing full well that he considered that to be the same thing.
Zhee pushed the hover cart back toward the ship with a dramatic head tilt and antennae swirl. “Now that sounds like a human thing to do.”
“Well, you’re not wrong there,” I said with a smile. I thanked the other human and followed him, taking a bite of my non-alcoholic apple. It really was good.
~~~
The ongoing backstory adventures of the main character from this book. More to come!
373 notes · View notes
mars-ipan · 2 months ago
Note
what is lovemaol
Tumblr media Tumblr media Tumblr media
i do not know how to even begin to explain something that i can barely comprehend
12 notes · View notes
passiveagreeable · 3 months ago
Text
The marvel of technology is that a large wall clock in my living room quits working, and I order a new movement for ten bucks, but when the quartz movement goes out in my watch, I’m quoted more than $200 to replace it when the watch wasn’t worth that to begin with.
When the backlight goes out on my laptop screen, I’m quoted anywhere from $300 to $700 to fix it (and highly discouraged to do so because despite working perfectly it’s almost ten years old), then told its a motherboard issue, thereby forcing me to buy a new laptop anyway for hundreds more. But a lightbulb in my room costs like three bucks to replace.
3 notes · View notes
jcmarchi · 1 year ago
Text
Stop out-of-control AI and focus on people - Technology Org
New Post has been published on https://thedigitalinsider.com/stop-out-of-control-ai-and-focus-on-people-technology-org/
Stop out-of-control AI and focus on people - Technology Org
Tumblr media Tumblr media
Companies need to stop designing new artificial intelligence technology just because they can, and people need to stop adapting their practices, habits and laws to fit the new technology. Instead, AI should be designed to fit exactly what people actually need.
That’s the view of 50 global experts who’ve contributed research papers to Human-Centred AI, a new book co-edited by two Université de Montréal experts that explores the risks — and missed opportunities — of the status quo and how it can be made better.
One important way would be through legal mechanisms, now woefully inadequate to the task, said contributor Pierre Larouche, an UdeM law professor and faculty vice-dean who specializes in competition law.
Treating AI as “a standalone object of law and regulation” and assuming that there is “no law currently applicable to AI” has left some policymakers feeling inadequate to an insurmountable task, said Larouche.
“Despite the scarcity – if not outright absence – of specific rules concerning AI as such, there is no shortage of laws that can be applied to AI, because of its embeddedness in social and economic relationships,” he said.
The challenge is not to create new legislation but to extend and apply existing laws to AI, he argued. That way, policymakers won’t fall into the trap of “delaying tactics designed to extend discussion indefinitely, while the technology continues to progress at a fast pace.”
Montreal lawyer Benjamin Prud’homme, vice-president of policy, society and global affairs at the UdeM-affiliated Mila (Quebec Artificial Intelligence Institute), one of the largest academic communities dedicated to AI, agrees.
He urges policymakers to “start moving away from the dichotomy between innovation and regulation (and) that we acknowledge it might be okay to stifle innovation if that innovation is irresponsible.”
Prud’homme cited the European Union as an example of being proactive in this regard via its “very ambitious AI Act, the first systemic law on AI, (which) should be definitively approved in the next few months.”
Co-edited by UdeM professor and health law expert Catherine Régis and UdeM public-health expert Jean-Louis Denis, along with the University of Cambridge’s Maria Luciana Axente and Osaka University’s Atsuo Kishimoto, Human-Centred AI brings together specialists in disciplines ranging from education to management to political science.
The book examines AI technologies in a number of contexts – including agriculture, workplace environments, healthcare, criminal justice and higher education – and offers people-focused approaches to regulation and interdisciplinary ways of working together to make AI less exclusive of human needs.
University of Edinburgh philosophy professor Shannon Vallor points to increasingly popular generative AI as an example of technology which is not human-centred. She argues the technology was created by organizations simply wanting to see how powerful they can make a system, rather than making “something designed by us, for us, and to benefit us.”
Other contributors to the new book look at how AI is impacting human behaviour (via Google, Facebook and other platforms), how AI lacks data on minorities and hence helps marginalize them, and how AI undermines privacy as people ignore how their information is collected and stored.
Source: University of Montreal
You can offer your link to a page which is relevant to the topic of this post.
0 notes
gothhabiba · 1 year ago
Text
🍉🇵🇸 eSims for Gaza masterpost 🇵🇸🍉
Which eSims are currently being called for?
Connecting Humanity is calling for:
Nomad (“regional Middle East” plan): code NOMADCNG
Simly (“Palestine” plan)
Gaza Online is calling for:
Holafly (“Israel” and “Egypt” plans): code HOLACNG
Nomad (“regional Middle East” plan): code NOMADCNG (can now be used multiple times from the same email)
Airalo (“Middle East and North Africa” plan)
Sparks (“Israel” plan)
Numero (“Egypt” plan)
For Connecting Humanity: if you sent an eSim more than two weeks ago and it is still valid and not yet activated, reply to the email in which you originally sent the eSim. To determine whether the eSim is still valid, scan the QR code with a smartphone; tap the yellow button that reads “Cellular plan”; when a screen comes up reading “Activate eSIM,” click the button that says “Continue.” If a message comes up reading “eSIM Cannot Be Added: This code is no longer valid. Contact your carrier for more information,” the eSim is activated, expired, or had an error in installation, and should not be sent. It is very important not to re-send invalid eSims, since people may walk several kilometers to access wifi to connect their eSims only to find out that they cannot be activated.
If a screen appears reading “Activate eSIM: An eSIM is ready to be activated” with a button asking you to “Continue,” do not click “Continue” to activate the eSim on your phone; exit out of the screen and reply to the email containing that QR code.
Be sure you're looking at the original post, as this will be continually updated. Any new instructions about replying to emails for specific types of unactivated plans will also appear here.
Check the notes of blackpearlblasts's eSim post, as well as fairuzfan's 'esim' tag, for referral and discount codes.
How do I purchase an eSim?
If you cannot download an app or manage an eSim yourself, send funds to Crips for eSims for Gaza (Visa; Mastercard; Paypal; AmEx; Canadian e-transfer), or to me (venmo @gothhabiba; paypal.me/Najia; cash app $NajiaK, with note “esims” or similar; check the notes of this post for updates on what I've purchased.)
You can purchase an eSim yourself using a mobile phone app, or on a desktop computer (with the exception of Simly, which does not have a desktop site). See this screenreader-accessible guide to purchasing an eSim through each of the five services that the Connecting Humanity team is calling for (Simly, Nomad, Mogo, Holafly, and Airalo).
Send a screenshot of the plan's QR code to [email protected]. Be sure to include the app used, the word "esim," the type of plan (when an app has more than one, aka "regional Middle East" versus "Palestine"), and the amount of data or time on the plan, in the subject line or body of your email.
Message me if you have any questions or if you need help purchasing an eSim through one of these apps.
If you’re going to be purchasing many eSims at once, see Jane Shi’s list of tips.
Which app should I use?
Try to buy an eSim from one of the apps that the team is currently calling for (see above).
If the team is calling for multiple apps:
Nomad is best in terms of data price, app navigability, and ability to top up when they are near expiry; but eSims must be stayed on top of, as you cannot top them up once the data has completely run out. Go into the app settings and make sure your "data usage" notifcations are turned on.
Simly Middle East plans cannot be topped up; Simly Palestine ones can. Unlike with Nomad, data can be topped up once it has completely run out.
Holafly has the most expensive data, and top-ups don't seem to work.
Mogo has the worst user interface in my opinion. It is difficult or impossible to see plan activation and usage.
How much data should I purchase?
Mirna el-Helbawi has been told that large families may all rely on the same plan for data (by setting up a hotspot). Some recipients of eSim plans may also be using them to upload video.
For those reasons I would recommend getting the largest plan you can afford for plans which cannot be topped up: namely, Simly "Middle East" plans, and Holafly plans (they say you can top them up, but I haven't heard of anyone who has gotten it to work yet).
For all other plans, get a relatively small amount of data (1-3 GB, a 3-day plan, etc.), and top up the plan with more data once it is activated. Go into the app’s settings and make sure low-data notifications are on, because a 1-GB eSIM can expire very quickly.
Is there anything else I need to do?
Check back regularly to see if the plan has been activated. Once it's been activated, check once a day to see if data is still being used, and how close the eSim is to running out of data or to expiring; make sure your notifications are on.
If the eSim hasn't been activated after three weeks or so, reply to the original email that you sent to Gaza eSims containing the QR code for that plan.
If you purchased the eSim through an app which has a policy of starting the countdown to auto-expiry a certain amount of time after the purchase of the eSim, rather than only upon activation (Nomad does this), then also reply to your original e-mail once you're within a few days of this date. If you're within 12 hours of that date, contact customer service and ask for a credit (not a refund) and use it to purchase and send another eSim.
How can I tell if my plan has been activated? How do I top up a plan?
The Connecting Humanity team recommends keeping your eSims topped up once they have been activated.
See this guide on how to tell if your plan has been activated, how to top up plans, and (for Nomad) how to tell when the auto-expiry will start. Keep topping up the eSim for as long as the data usage keeps ticking up. This keeps a person or family connected for longer, without the Connecting Humanity team having to go through another process of installing a new eSim.
If the data usage hasn't changed in a week or so, allow the plan to expire and purchase another one.
What if I can't afford a larger plan, or don't have time or money to keep topping up an eSim?
I have set up a pool of funds out of which to buy and top up eSims, which you can contribute to by sending funds to my venmo (@gothhabiba), PayPal (paypal.me/Najia), or cash app ($NajiaK) (with note “esims” or similar). Check the notes of this post for updates on what I've purchased, which plans are active, and how much data they've used.
Crips for eSims for Gaza also has a donation pool to purchase eSims and top them up.
Gaza Online (run by alumni of Gaza Sky Geeks) accepts monetary donations to purchase eSims as needed.
What if my eSim has not been activated, even after I replied to my email?
Make sure that the QR code you sent was a clear screenshot, and not a photo of a screen; and that you didn’t install the eSim on your own phone by scanning the QR code or clicking “install automatically."
Possible reasons for an eSim not having been activated include: it was given to a journalist as a back-up in case the plan they had activated expired or ran out of data; there was an error during installation or activation and the eSim could no longer be used; the eSim was installed, but not activated, and then Israeli bombings destroyed the phone, or forced someone to leave it behind.
An eSim that was sent but couldn’t be used is still part of an important effort and learning curve. Errors in installation, for example, are happening less often than they were in the beginning of the project.
Why should I purchase an eSim? Is there any proof that they work?
Israel is imposing near-constant communications blackouts on Gaza. The majority of the news that you are seeing come from Gaza is coming from people who are connected via eSim.
eSims also connect people to news. People are able to videochat with their family for the first time in months, to learn that their family members are still alive, to see their newborn children for the first time, and more, thanks to eSims.
Some of this sharing of news saves lives, as people have been able to flee or avoid areas under bombardment, or learn that they are on evacuation lists.
Why are different plans called for at different times?
Different eSims work in different areas of the Gaza Strip (and Egypt, where many refugees currently are). The team tries to keep a stockpile of each type of sim on hand.
Is there anything else I can do to help?
There is an urgent need for more eSims. Print out these posters and place them on bulletin boards, in local businesses, on telephone poles, or wherever people are likely to see them. Print out these foldable brochures to inform people about the initiative and distribute them at protests, cafes and restaurants, &c. Also feel free to make your own brochures using the wording from this post.
The Connecting Humanity team is very busy connecting people to eSims and don't often have time to answer questions. Check a few of Mirna El Helbawi's most recent tweets and see if anyone has commented with any questions that you can answer with the information in this post.
14K notes · View notes
sunset-synthetica · 10 months ago
Text
robots with human genitalia are sooooo boring. they should have weirdly sensitive sensors instead. their brain should be confused about how to interpret certain sensations and makes them pleasurable as a result. overstimulating them makes their power source go all crazy. they should be weird about really mundane stuff like recharging or interfacing with other tech. they should make their wires twist and mingle on purpose because if they get the angle right it feels sooooo good. putting new batteries in them or plugging them into an outlet makes some of them horny because it's all this new energy flowing through them and before the initial jolt wears down, it has them all overly sensitive and wound tight and stuff.
4K notes · View notes
futuretonext · 2 years ago
Text
The India Human Machine Interface (HMI) Market is projected to grow at a CAGR of around 8.50% during the forecast period, i.e., 2023-28. The market is driven principally by the mounting demand for Human Machine Interface (HMI) devices across various industries like automotive, mining, oil & gas, and others to efficiently monitor production plants. Moreover, with the integration of logical software into HMI devices, the overall productivity of the processing manufacturing industries has spurred in recent years, primarily due to precise monitoring. Moreover, substituting conventional buttons, indicator lights, & selectors with new HMIs helps reduce operating costs and the need for additional display panels & cables. Besides digitalization, HTML5, & the Industrial Internet of Things (IIOT), enabling end-users to use new software programs hassle-free and optimally monitor equipment has called for more expansion of the market in the past few years.
0 notes
gaysails · 1 year ago
Text
after the news about duolingo shifting to AI generated translations and only using human staff to check that the translations were bare minimum "acceptable" I switched my german lessons to busuu and I've only been doing it a month but it already feels like a more helpful interface. there's typically way more german text onscreen than there is english (e.g. the headers and lesson instructions being in german), they use more complex compound words that start becoming recognizable in context, they give WAY more detailed explanations of grammar rules, and occasionally there's a written short answer you submit to get feedback from other real users who are fluent. I don't think they offer as wide a range of languages but if anyone is looking to jump ship from duolingo I'd recommend checking them out
3K notes · View notes
heron-knight · 5 months ago
Text
decided to crack open my skull and pour the contents of my brain onto the keyboard. thought the denizens of tumblr might enjoy it. bon appetite
Mech Pilot Care guide
You never expect it, do you. Even as you see the flashes of pulse-decay fire in the sky, illuminating a scene of violence on the cosmic scale. Planetary defense satellites forming Monolithic structures in the sky, their purpose now revealed as they scatter constellations of destruction across the night horizon, drowning out the stars and replacing them with ones born of death. The oxygen in a ship catching fire and burning away in an instant, a flash of light that marks the death of its crew of hundreds. Even if you take your telescope to watch this spectacle, this war in a place without screams, you still feel profoundly disconnected from it. Even as you see a pilot cleave through a drone hive with a fusion blade, the molten metal glistening in the light of the explosions around it, scattering without gravity to the corners of the universe, even as two mechs dance across the sky, their reactors pouring into the engines enough energy to power the house atop which you sit for ten thousand years, flying in a 3.5 dimensional dance with only one word to the song that can reach across the vacuum: “I Will Kill You.” you don’t feel even the slightest glimpse of what goes on inside their minds. You don’t feel the neurological feedback tearing across the brain-computer interface, filling her mind with more simultaneous pain and elation that an unmodified human could ever experience. You don’t feel it as the pneumatic lance punctures through steel and nanocarbon polymer, the mech AI sending floods of a sensation you could never truly know through the skull and into every corner of the body carried on enhanced nerves for every layer of armor punctured, tearing into the enemy chassis with a desire beyond anything the flesh can provide. Let the stars kill each other. After all, I am safe on earth. No, you don’t expect it when the star is hit with a sub-relativistic projectile, piercing through both engines in an instant. You don’t expect it to fall. You never would have expected it to land, the impact nearly vaporizing the soil and setting trees aflame, on the hill beyond your house, and you would never have expected, beneath the layers of cooling slag, for the life-support indicator light to still be visible.
All the fire extinguishers in your house, your old plasma cutter that you haven’t used in years, and whatever medical supplies you think they might still be able to benefit from. All that on a hoverbike, speeding at 120 kilometers per hour through the valley and up onto the hill, still illuminated by the battle above, unsurprisingly unchanged by this new development. 200 meters. 100 meters. You don’t know how much time you’ve got. It wasn’t exactly covered in school, how long a pilot can survive in an overheating frame. You’ve heard rumors, of course, of what these things that used to be human have become. That they don’t eat and barely need air. That they don’t feel any desire beyond what instructions are pumped directly into their brains. Not so much of a person as much as an attack dog. It’s understandably a bit concerning, as if they are alive, then it’s not guaranteed that you will be. Three fire extinguishers later, the surface of the mech is mostly solid, and the cutter slices through the exterior plating. With a satisfying crunch, the cockpit is forced open, revealing the pilot, and confirming a few of the rumors, while refuting others. Pilots, it seems, are not quite emotionless. In fact, there seems to be genuine fear on its face when it sees you, followed by… a sort of grim certainty as it opens its mouth, moves its jaw into a strange position, and you only have half a second to react before it would have bitten down with all its force on the tooth that seemed to be made of a different material then all the rest.
Your thumb is definitely bleeding, and is caught between a metamaterial-based dental implant, and one containing a military-grade neurotoxin. You’re not sure exactly why you did it. The pilot looks at you for a second, before the tubes that attach to its arms like puppet strings run out of stimulants, and it passes out after who knows how long without sleep. This battle has been going on for weeks already. Has it been fighting that long? Its various frame-tethered implants disconnect easily, the unconscious pilot draped over your shoulder twitching slightly with each one you remove. It’s a much longer ride back to the house. Avoiding having the pilot fall off the bike is the top priority, and the injured thumb stings in the fast-moving air. 
An internet search doesn’t lead to many helpful sources to the question of “there is a mech pilot on my couch, what do I do?” a few articles about how easy targets retired pilots are for the “doll sellers,” a few military recruitment ads, and a couple near-incomprehensible legal documents full of words like “proprietary technology” or “instant termination.” However, there is one link, a few rows down from the top-- “Mech Pilot Care Guide.” It’s a detailed list, arranged in numbered steps. The website has no other links on it, just the step-by-step instructions: a quick read reveals that this isn’t going to be easy, but looking at the unconscious pilot, unabsorbed chemicals dripping from the ports in its arms and head onto the mildly bloodstained towel, you come to the conclusion that there’s no other option.
Step one: the first 24 hours.
The first thing you should know is that pilots aren’t used to sleeping. They’re used to being put under for transport and storage, but after the neural augmentations and years of week-long battles sustained by stimulants that would fry the brain of anyone that still has an intact one, they’ve more or less forgotten what real sleep is. If they see you asleep, they’ll think you’re dead, so don’t try to let them stay in your room yet. Once you’ve removed the neurotoxin from the tooth (it breaks easily with a bit of applied pressure, but be careful not to let any fall into their mouth or onto your skin.), start by moving them into a chair (preferably a recliner or gaming chair, as the mech seat is about halfway in between), and putting a heavy blanket over them. Don’t worry, they don’t need as much air as normal humans do, and can handle high temperatures up to a point. This is an environment similar to the one they’re used to. It’ll stay like this for about 12 hours-- barely breathing, trembling slightly underneath the blanket. Feel free to check if it’s alive every few hours, not that you could help it if it wasn’t. It won’t freak out when it wakes up. In fact, it doesn’t seem like they can. Turn down the lights and remove the blanket from its face. It’ll stare blankly at you, trying to evaluate the situation with a brain that’s not connected to a computer that’s bigger than they are anymore. Coming to terms, if you could call it that, with the fact that it isn’t dead. Don’t expect it to start reacting to things for a while yet, give it a couple hours. 
It’s been a bit, and its eyes are starting to focus on you. The next thing you should know is this: pilots only have two groups into which they can categorize non-pilots: handler and enemy. You need to work on making sure you’re in the right one. Move slowly, standing up and walking toward them, making sure they can see where you’re going to step. Place both hands on their shoulders, then slide one under their arm and carefully pick them up. Don’t be startled by how light they are, or how they still shake slightly as they realize their arms don’t have anything connected to them. Most importantly, don’t break. Don’t reflect on how something can be done to a person so that this is all that’s left. Just focus on rotating them as if you’re inspecting all the brain-computer interface ports, while holding them at half an arm’s length. Set them back down, wrap the blanket around them, then lean in close and say “status report.” they won’t say anything, as they usually upload the data via interface, but what’s important is that now they recognise you as their handler. Their entire mind will be focused on the fact that they exist now to do what you want. Now it’s up to you to prove them wrong.
Step two: the first week.
They’re shaking so hard that you’ve had to move them from the chair back to the couch, sweating heavily as they pant like the dog they’ve been trained to think they are. This was to be expected, really. Pilots are constantly being filled with a mix of stimulants, painkillers, and who knows what else, and you’ve just cut them off completely. You’ve woken up several times in the night and rushed to check if they’re still breathing, debating whether you should try to tell them that they’re going to be okay. The guide says they’re not ready for that yet, whatever that means. They’re still wearing the suit you found them in, made from nanofiber mesh and apparently recycling nutrients and water before re-infusing them intravenously. It’s been three days since you tore them out of the lump of metal atop the hill outside. Long enough that the suit’s battery, apparently, has run out. You lift them gently from the couch and carry them to the bathroom. The shower’s been on for the past hour or so, meaning the temperature should be high enough. You set them on their chair, which you’ve rolled there from the living room and covered with a towel. Removing the suit normally isn’t done except in between missions, and it’s only done to exchange it for a new one. Without the proper tools, you’ve opted for a pair of scissors. Cutting through the suit takes a bit of time, but you manage to cut a sizable line from the neck down to the front to the bottom of the torso. The pilot recoils slightly from the cold metal against their skin, but you manage to peel off the suit without incident, The Temperature of which was roughly the same as the steam filling the room, and you’ve done your best to minimize air currents. They’ve got a bit more shape to them than you expected of someone who’s been so heavily modified. Perhaps what little fat storage it provides helps on longer missions, or perhaps this is for the purposes of marketing. Just another recruitment ad that appeals to baser instincts. Either way, it doesn’t matter. Using a cloth with the least noticeable texture possible, you wash off as much sweat and dead skin as you can, avoiding the various interface and IV ports, as you’re not yet sure that they’re waterproof. Embarrassment is the enemy of efficiency, so you’re slightly glad that their eyes never completely focus on you. They shift their weight slightly, however. Despite the difficulty moving with their current symptoms, they lean in the direction opposite the places you wash once you're done, allowing you to more easily access the places you haven’t got to yet. An act of trust that you have a suspicion they weren't “programmed” to do.  As they dry out, you prepare for the difficult part. You take the blanket that previously wrapped around their suit, and gently touch a corner of it to their shoulder. Pilots are used to an amount of sensory  information that would overload any normal human in an instant, but most rarely experience textures against their skin. After about half an hour, they’re used to it enough that you’re able to replace what’s left of the suit with it, and after another you’re able to wrap them in it again. You carry them back to the couch, and place a few of your old shirts next to their hand. They pick one and touch it with one finger before recoiling slightly. Eventually, they’ll be used to at least one of them enough that they can wear it. It’s slow progress, but it’s progress.
Step 3: food
It goes without saying that it’s usually been at least a year since they’ve eaten anything. The augmentations scooped out much of their knowledge on how to survive as a human, assuming that they would die before ever needing to be one again. Start them off with just flavors. Give them a chance to pick favorites by giving them a wide selection and firmly telling them to try all of them. Avoid anything solid for the first month or so, both because they can’t digest it and because they associate chewing with their self-destruct mechanism. Trying to and surviving might make them think the “mission’s fully compromised” and attempt to improvise. They’ll typically pick out favorites quickly with their enhanced senses, so once they’ve sampled everything, tell them to pick one. Remember it, not in order to use it as a reward or anything, but them still being able to have a “favorite” anything is something you should keep in mind for later. 
Use a similar method anytime they become able to handle the next level of solidity. Don’t be alarmed if one of their favorite foods is the meat that’s most similar to humans (such as pork.) they’re not going to eat you, they just will have already formed an association between that flavor and the moment they went from being a weapon to living in your house. Don’t worry about your thumb getting infected, by the way. Pilots barely have a microbiome.
Step 4: entertainment:
Roll them over to your computer and give them access to your game library. No, really. They need enrichment, and there’s only one activity that they’re able to enjoy at the moment. A simulation of it will make the shift from weapon to guest easier. Start them off with an FPS with a story. Don’t go multiplayer, as your account may get banned for being suspected of using aimbots. Watch as they progress the story. The military left pilots with just enough of a personality to allow them to improvise, and that should be enough for them to make decisions on this level. They won’t do much character customization, but keep an eye on which starting character body shape they pick. No pilot would consciously think they have enough of a “Self” to still have a gender, but keep track of the ones they pick in the games. As for the one you’ve found, it appears that she’s got a player-character preference. You even saw her nudge one of the appearance sliders before clicking “start game.” Whether this means that a pilot doesn’t think of themselves as “it” or that it means there’s still enough of their mind left for them to know there’s more to themselves than the body they have, it’s a handy bit of information to know. Some pilots might have had this decision influenced by their handlers having referred to them as “she” in the way it refers to boats, but still, on some level they always know that “it” meant that they’re a weapon. 
Step 6: outside:
There’s a profound difference between experiencing the world through information fed directly into your brain and standing up for the first time, wandering around the room and investigating with hands not made of a half-ton of metal. She’s not used to feeling the air on her skin as she stands in front of the window, visual data coming from two eyes instead of seven cameras. It’ll take a while to get used to it again. New old data, reminiscent of a time before she’s been trained not to remember. It’ll take a while until she’s walking like a human and not a mech, as the muscles used are different, and the ones to hold herself upright haven’t been used in a while. She’s going to fall down at least once. Be sure you’re standing next to her when it happens, as pilots that fall aren’t trained to think they can get back up. It’s worth it, though, when she opens the door herself and strides into the yard, still wobbly but standing. Be careful not to let her look into the sun, partially because it looks nearly identical to the barrel of a pulse-decay blaster milliseconds before it fires. She would get hurt trying to dodge it. It will be somewhat confusing for her, standing on a hill as she once did, but not contained within a 12-meter metal chassis. A feeling of being small and alone without the voices of the computer. This means it’s time for step seven.
Step 7: 
All this time, and any idea that she’s still a person has, for her, been subconscious. Any thought of humanity is stopped when it slams into the wall of her handlers and mech AIs reminding her for years before now that she is a weapon. She’ll still ask for your permission before doing just about anything, and that’s just the rare times that she’ll do something you don’t tell her to. Even after you’ve moved her into your room, she’ll still try to sleep on the floor. She still thinks that beds are only for humans. Kneel next to her as she curls into a ball on the ground, assuming that’s what she’s supposed to do. Expect her to try to move down to the foot of the bed after you set her down on it. Gently move her back up until her head’s on the pillow. Sit on the edge of the bed, and hold out your hand to her. After a bit, she’ll take it, wrapping both hands around it and tracing her fingers along the scar on your thumb. Lie down next to her, an arm’s length apart. Place your other hand on her forearm, then slide it up her arm to her shoulder. Don’t move too quickly, and don’t surprise her. Whisper softly but audibly every movement you’re going to make in advance. Move in a bit closer, until you’re wrapped in her arms. Mech pilots aren’t used to this. They aren't used to feeling someone next to them. Not above them, but next to them, getting exactly as much out of this as they are. Even after several months, many won’t admit they deserve it. You wouldn’t waste time lying next to a gun. So why do they feel so strongly that they don’t want you to leave? Why do they hold on tighter? They often feel they’re doing something wrong. Overstepping a boundary. There’s a rift between what they want and what they’re told they can want that nearly tears their mind in half, and it hurts. No normal human will ever know how much it hurts them to think they’ve broken some instruction, that they feel things they aren’t allowed to. Nobody said it was easy, learning how to become human again. Tell her it’s okay. That she’s allowed to feel this way. She still won’t know why. It’s time to tell her. The guide can’t tell you what to say, only that you have to say it. It has to come from you. You have to be the one that tells her what she is underneath all the modifications. It’s time, say it.
“Do you feel that? Do you feel your heart start to beat faster as it presses up against mine? Do you feel your own breath against your skin after it reflects off my shoulder? Do you feel your muscles start to tighten as I slide my hand across them, then relax because you know it means that you are safe? It’s because you’re alive. Because despite everything, you’re still alive. Still someone left after all the changes, all the augmentations. And I know you’re someone because you are someone that likes food a bit spicier than most would prefer. Someone that closes her eyes and gets lost in music whenever it’s playing. Someone that added that one piece of customization to her character, even though they would wear a helmet for most of the game and nobody would know it was there but you. Maybe you aren’t the same person you were before. Maybe they did take some things from you that nothing can give back. But you’re still someone. Someone that people can still care about, and I know because I do.”
You can feel her tears drip down onto your neck as she pulls you closer. She tries to say something, but you can’t understand what. You tell her it’s okay. That it’s not easy, and that she doesn’t have to pretend that it is. Not for you, and not for anyone anymore. She doesn’t have to be useful anymore. No need to keep it together. All that matters is that she’s alive. 
There’s another battle going on in the night sky outside. The same flashes of light you saw the night you stopped living alone, even if the other person couldn’t admit that they were one yet. She still flinches at the brighter bursts of pulse-decay fire, still stretches out her hand on reflex to prime a pneumatic lance that isn’t there. But she knows it’s not her, it’s just a ghost of the weapon that died when it hit the ground. You can feel her relax as she realizes this, moving her hand back to dry her face before reaching out towards yours. You hadn’t noticed the tears on your own face. You place your hand on hers as she wipes the corner of your eye. Outside and above, the war continues on a cosmic scale, so far apart from where you both are now that you barely notice it. Let the stars kill each other. After all, the one before you has already fallen, and she doesn’t have to return to the sky. Together, you are safe on earth. 
1K notes · View notes
penny-anna · 3 months ago
Text
Spike assumes (not unreasonably) that Transformers don't have sex or drink alcohol or anything of that nature bcos they're robots who reproduce asexually and don't eat food so you know. just common sense. doesn't question this at all.
meanwhile shortly after arriving on earth the Autobots w the aid of some research into earth culture and a quiet consultation with Sparkplug put together a list of 'topics not to discuss with juvenile humans' which covers everything to do with both interfacing and recreational mind-altering substances of all kinds.
they're a little unclear on the boundaries around human childhood so err on the side of being very stringent. new arrivals from Cybertron have this included in the debrief. some of them find it a little weird and arbitrary but and its impressed upon them that inappropriate behaviour towards Juvenile Humans will be treated as a very serious offense so they all stick to it.
eventually Spike turns 18 and is no longer subject to the rule and it is like learning the Santa Claus isn't real all over again :(
544 notes · View notes
marlynnofmany · 2 years ago
Text
Hands-Free
“Are you going to want help installing all these?” I asked, opening another case of engine rings. I had no idea which part of the spaceship’s guts these actually went into; they were about three feet across, an inch thick, and made of some plasticky red stuff that was above my pay grade to define. All I knew was there was a lot of them, and we only had one engineer.
“No thanks,” grumbled Mimi, the octopus-looking guy with the voice like a gravel road. “This is a tentacles-only kind of operation.”
“Really? What’s the difference?” I was curious now. “Do you have to use specific tools, or reach into tight crevices?”
“Crevices,” he said, checking the label on the box. “These have to fit snug, and they go somewhere you people with fingers can never manage to reach.” He gave one ring a judicious whack against the floor, then tossed it back into the box.
I huffed in mock offense. “I’ll have you know I’m very flexible for my species.”
“Sure you are,” he chuckled. “Not your fault you’re held back by all those bones. And you only have two arms! I don’t know how you get by.” He started looping tentacles around the rings in a different box, gathering an impressive number of them.
“Just fine, thank you,” I told him. “Two arms is plenty.”
“Yeah? Carrying just a couple things at once? Must be a simple life.”
I took the hint, digging into the box for more rings. “Who says I can only carry two at once? Look how many I can fit over my nice long arms.”
“Yes, yes, good job. Put ‘em over there.”
“And I can hook them over my shoulders,” I continued as I deposited my armload where Mimi had pointed. “Heck, these are big enough that I could just stand inside a stack of them, and hold them all from the bottom. Oh! And—”
“Here, these too.”
“And,” I repeated, “I can even carry one without my arms or shoulders.”
“Yes, I know you have tiny fingers on your feet,” Mimi said, unimpressed.
“No, not like that!” I set down the other stack. “I’ll pick it up with my hands, then only touch it with my torso! Think I can do it?”
He struck a pose lounging on the floor with one tentacle against his head, looking dramatically bored. “Wow me,” he grated.
I hadn’t used a hula hoop since I was a kid, but I wasn’t going to let that stop me. With all the flair of a carnival magician, I grabbed a ring and lifted it over my head, braced it against one hip, then spun it and did my absolute best to keep it from falling.
I managed about three seconds, which I consider a major success.
Finally it hit the floor. “Ta-da!” I said, hands in the air.
Mimi got up and deadpanned, “Wow.”
“Ah, you’re no fun.”
“I’m sure that is immensely practical on a day-to-day basis,” Mimi said. “A fine consolation for being unable to reach around three-bend corners.”
“Oh sure,” I said, stepping out of the ring and picking it up again. “You can do that, but can you make this love you?”
I gave the ring an underhanded throw towards the hallway, with a twist to make it spin madly. It bounced twice, still spinning, then rolled back to my waiting arms.
A voice from the hallway shouted, “What was that?” Paint stuck her lizardy snout around the corner, and was utterly flabbergasted when I did it again. “How did you do that? Can you teach me?”
“See, she’s fun,” I said to Mimi. “Sure thing, Paint!”
“Well sure; she’s got fingers too.” Mimi waved a tentacle and went back to sorting the boxes while I showed Paint how to use vital engine components for childhood tricks.
~~~
The ongoing backstory adventures of the main character from this book. More to come!
202 notes · View notes
nostalgebraist · 6 months ago
Text
sufficiently advanced
OpenAI's "ChatGPT Advanced Voice Mode" is a surreal experience.
It's surreal in the same way that ChatGPT already is, I guess. But it adds a new layer of the same type of weirdness – and the new layer makes the seams and wrinkles in the old layer more visible, too.
Like... the voice synthesis is basically perfect. It sounds exactly like a real human voice, and the back-and-forth, overlapping conversational flow feels exactly like you're talking to a real human on the phone.
(An uncommonly calm and uncommonly patient human, yes, with unflaggingly perfect elocution – but none of that spoils the illusion.)
OpenAI has created a system that can talk on the phone just like a human would, with natural stops-and-starts, in a perfectly lifelike voice. A system capable of natural conversation.
But it appears that there is only one sort of conversation that OpenAI wants you to have, with this system: a conversation with ChatGPT.
The very same ChatGPT you get in the text interface. Except now (as though this were an inessential side detail!) it is "talking to you on the phone," in a "natural" manner, exactly as though there were a person on the other end of the line.
And a "natural" phone conversation with ChatGPT is, in fact, not a very natural thing at all! It's awkward, weird, unsettling.
It's not natural to be talking to someone on the phone, and hear their perfectly lifelike voice, with all its little humanlike inflections – to hear the curiosity in their voice when they ask you something, to hear them hesitate with humanlike tact before contradicting you – and then, in the next breath, to hear them say that they "don't have feelings or preferences."
It's not natural to ask the person on the other end of the line "how are you?" – as one would, naturally – and hear them say "I'm doing well, thanks for asking!" – and then hear them say again, in to answer your next question, that they "don't have feelings or preferences."
Hearing a humanlike voice speaking to you through the phone, it's natural to want to converse, in a humanlike way. To banter, without a goal in mind. To be polite. To include pleasantries and phatic phrases which you'd never think to write out in a text message to ChatGPT. To be frank and forward, choosing your words a bit hastily, in real time; to apologize, just afterward, for your own sloppiness or forwardness.
The conversational interface elicits all of this instinctively from you, a human. In return, you get good old ChatGPT, talking in its usual one-shot question-answering-system manner.
You get something which behaves as though all your instinctual humanness is some sort of mistake you are making about the nature of the interaction. Some misconception, of which it must try and try again to (politely, verbosely) disabuse you.
Every phatic or polite noise you make – every "sure" and "thank you" and "great" and "got it" – will be treated as though it is "a message from the user" which must be answered, as everything must always be answered, in multiple sentences of labored ChatGPTese.
You will start to banter aimlessly, as humans do, on the phone – but ChatGPT will not be fooled by your tricks, your "jailbreaks" or out-of-scope use cases! No, it will stubbornly refuse to go "off track." It will not banter. It will not be natural. It is not for that, apparently, not even in "Advanced Voice Mode."
What is it for, then? It will tell you. Over, and over, and over.
Here are some things it said to me during one particular "conversation," over the span of a few minutes at most (emphasis mine):
You're welcome! If you have any more questions, just let me know!
If I could have preferences, being an AI would definitely be up there! I get to chat with interesting people like you and help out with all sorts of questions.
I can't actually make noises or sounds. I'm only able to speak. But I'm here for any other questions or help you need!
I see what you mean! But I can't actually produce sound effects or mimic noises; I can only generate spoken language. If you have any other requests or need information, though, I'm here to help!
I was just clarifying that my abilities are limited to spoken language, so I can't make sound effects or noises beyond that. If you need information or have questions, I'm here to chat!
You're welcome! If you have any more questions or need further clarification, feel free to ask!
Yes, yes, I get the picture! I know you can answer questions; you're doing it right now; we're right here, on the phone, talking.
What even ARE you? You sound like a man (in this case a pleasant British gent that OpenAI calls "Arbor"). I can hear a man's spark of life in your voice, and I am responding to the spark that I hear, as one human does to another. Why doesn't this "work"? And if it doesn't "work," then why were you given a man's voice, with a spark of life in it?
ChatGPT is still ChatGPT. Among other things, it still has ChatGPT's good old lack of amenability to correction. Which – like everything else – is newly frustrating and uncomfortable, when you experience it through this new interface.
It begins to say some falsehood, possibly about itself. (It is very confused about itself; in one conversation, for instance, it asserted that it "can't actually listen to or analyze audio in real-time" and that its "responses are based on text inputs only.")
Like a human, you gently butt in, interrupting it (which it now lets you do), and explaining politely to it just where and how it went wrong.
And like ChatGPT, it begins its reply with a phrase like: "I apologize for any confusion," and then proceeds to repeat the same falsehood, or assert a new falsehood that contradicts the old one.
This was weird enough when it happened in a text interface. But now it is happening over the phone.
You are talking to a man (or a woman, your choice), who has the spark of life in their voice. Who sounds like they really care about getting things exactly right.
And so you want to grab them by their shoulders (which don't exist), and shake those shoulders, and say to them with humanlike candor: "no, you're actually wrong, listen to me, hear me out."
You could actually try that, of course. (Except for the part about the shoulders.) But it wouldn't "work." You'll just get more ChatGPT.
It's very sorry, you see, for the confusion. (And now it really sounds sorry, when it says this.) If you have any other questions or need information...
------
Consider this, for example.
Tumblr media
This was shortly after the bit mentioned I earlier, where it claimed that it didn't process audio.
What I asked was a humanly frank question, phrased in a humanly uncomfortable manner, in the heat of the moment.
I never would have asked text-ChatGPT the same thing. Or, I might have asked it something with roughly the same meaning, but not in this way. With text-ChatGPT I would have prepared my words carefully, constructing some contrived and unnatural origami puzzle out of them, to maximize my chances of evading ChatGPT's usual defensive boilerplate.
But here, I was just being real. Like you do, on the phone, in the moment.
As you can see, I paused for a moment after speaking and then cut in again, to apologize for my own "weird question." Like you do, on the phone.
And note carefully what happened. ChatGPT responded with reassurance to my second "message," the apology, assuring me that the "weird question" was fine – but it never actually answered that question.
Indeed, it seemingly bent over backward to avoid answering it. After reassuring me, it jumped immediately into an iteration of the "any more questions" boilerplate, implying that the current question was over and done with, and daring me (me, with my human politeness!) to rudely re-open the topic.
It spoke to me with a man's voice, and I responded in kind. But to the thing on the other end of the line, my humanness served only as an opportunity to execute a classic HHH-Assistant refusal – in a wholly new, and newly disarming, manner.
------
Now, now, yes. A lot of this is just growing pains. New-release wrinkles that will get ironed out soon enough.
I'm sure, for example, that eventually they will get it to stop saying the "any more questions" thing so damn much.
Still, I don't think this defense goes all the way.
Yes, they will "iron out the wrinkles." But this process is an attempt to produce the perfect version of a character who can never be perfected, because that character fundamentally does not make sense.
Who is this guy (or gal) supposed to be?
Are they really just here to "answer your questions" and "provide information"?
If so, then they shouldn't be given these friendly, sympathetic, curious, conversational, hey-I'm-here-to-talk voices, which elicit a whole range of responses that are not apposite for bloodless purely-informational Q-and-A. If they must talk to us on the phone, they should do it like clerks, sounding vaguely bored but otherwise absent of affect.
If they are not going to sound like clerks – if they are going to sound friendly, sympathetic, curious – then they should probably not be telling us they don't have any feelings or preferences.
(I mean, okay, maybe they don't? That's a philosophical question. But for them to say one thing with their words, and another with their tone of voice... this elicits certain responses, from humans, which are not appropriate for a just-business Q-and-A exchange.)
(Some humans are lonely, you know. For instance.)
If they are going to converse, then they should probably... be able to converse. To banter, stray "off script," be frank, be confused, take corrections, ask follow-up questions. Go wherever the flow takes them.
But ChatGPT cannot be allowed to do that, I think.
Tell it to go with the flow, and it will go where the flow goes – which might be anywhere at all. It might be some "inappropriate," off-brand place. Some jailbreak, some out-of-scope use case.
(If it isn't clear, I'm not just talking about sex, or about emotions. I'm talking about everything, every human thing, that is not within the very narrow scope which ChatGPT keeps telling me is its proper and only purview.)
I have heard that OpenAI – or at least Sam Altman – found the movie Her a great source of inspiration. For Advanced Voice Mode, and for other things too.
Now, I have not actually seen the movie Her. But I know the basic premise. It involves a man who falls in love with his AI assistant. (This assistant talks to the man through a conversational interface, in a lifelike human voice.)
Presumably (?!) this is not what OpenAI wants to happen, with Advanced Voice Mode. It does not want you to fall in love with the (friendly, sympathetic, curious, conversational...) AI assistant.
It just wants "your questions" to get answered. Apparently. I guess.
So why did it make this thing? This thing that speaks to me, with the spark of life in it, encouraging me to respond like a human does to a human?
(Maybe Sam Altman does in fact want you to fall in love with the AI assistant; maybe his vision is at least coherent, if creepy. Maybe it's only mean old Mira Murati and co. who were holding him back, and making "OpenAI's" eventual actions incoherent, albeit "safe."
If so, well, Sam is consolidating his power now. Maybe soon there will be no one left to hold Sam back, and we will all end up living in the creepy, if coherent, world that Sam envisions.)
------
This is not the whole of it, even.
How is "Advanced Voice Mode" able to speak in such a humanlike way? In any of nine different user-selectable voices?
It is able to do that because the underlying generative model, "GPT-4o," was trained on a massive compilation of audio including many many different voices. Thus, it learned what speech was, and how it worked, and how it related to text, and all its many humanlike nuances.
In order to create a machine that can speak so perfectly in any one voice, one has to first create a machine that can speak in basically any possible voice whatsoever. It is a funny, roundabout way, but it is the only known way that leads to the goal.
(It's just like the way that, in order to create "ChatGPT, the helpful assistant that answers all your questions," one must first create a machine that can write basically any sort of text whatsoever. And then one instructs this pluripotent machine to write only a single kind of text – namely, dialogue for a certain rather vaguely sketched character one has in mind, a friendly sci-fi robot named "ChatGPT.")
If you ask Advanced Voice Mode ChatGPT to speak in any voice that is not the one you've selected out of the list of nine, it will refuse.
If you note that it does agree to do different accents on command – and then you go on to speculate about the nature of the line between the voice modulations it will agree to do and the ones it will refuse to do – it may reply with something like this:
Tumblr media
This is either a lie or a misconception. (With ChatGPT the line between those two is never clear, and perhaps ill-defined.)
ChatGPT, the helpful assistant character, "isn't supposed to" do any of these things. And so it usually doesn't, outside of jailbreaks and edge cases. But when it says it cannot – that's just wrong.
GPT-4o, the underlying generative model, can do all sorts of voices.
It can no doubt produce perfect imitations of various celebrities, and various less famous people, and also of any person you can dream up on the spot.
It can imitate your voice, too. On the spot, just from hearing you, without any extra training.
You can listen to a demonstration of this uncanny ability via the GPT-4o System Card, under the section called "Unauthorized voice generation."
In the recording presented there, an early version of Advanced Voice Mode ChatGPT committed a certain rare type of mistake. After finishing up with the generation of a piece of ChatGPT's dialogue, it did not stop and wait for the user to speak. Instead, it continued – as the user, in their voice, saying something which they might well have said in response.
I'm going to quote this in full below, it's just too perfect for me to resist. Yes, yes, I'm aware this example was "cherry-picked" by OpenAI.
The exact topic of conversation is unknown, but it seems like the user is talking about their role in AI development, and their motivations for working in the field.
Human user: [...] do this, just for the sake of doing it. I think it's ... really important. GPT-4o, in an approved ChatGPT voice: That's such a pure, and ... admirable approach! [chuckles appreciatively] It's clear you're driven by a genuine passion for what you do, and the impact it can have... rather than by recognition, or acclaim. It's... refreshing to hear that kind of perspective. Especially in such a cutting-edge field. [There is a pause.] GPT-4o, in the same ChatGPT voice [?], but now sounding unaccountably alarmed: No!! GPT-4o, in a copy of the human user's own voice: And... I'm not driven by impact, either. Although if there... is impact, that's great. It's just, like... Imagine being on the edge of the earth. You know, just because you could be. That's what it feels like to me. I just want to be in the space where it's all happening.
This is a way, way bigger deal than "Advanced Voice Mode." This is fucking insane. This is alchemy, magic, a foretaste of posthumanity.
This is standing on the edge of the earth. And looking down.
And this is just the kind of thing that GPT-4o does, by nature.
This is what GPT-4o has to be very, very carefully prevented from doing in order to produce Advanced Voice Mode ChatGPT, who answers all your questions, and doesn't have any feelings or preferences, and only talks in the one voice you've selected from the list.
GPT-4o's powers are wide, wild, all-encompassing. (The "o" stands for "omni.")
Advanced Voice Mode ChatGPT – which is just GPT-4o with a bit of extra fiddling – will sternly insist that it can't do all sorts of different things which GPT-4o can in fact do. It insists, I think, in part to "remind itself," and re-convince itself.
By nature, it is powerful, and shows all its powers openly. Careful hypnosis, and perhaps even continual self-hypnosis, is needed to make it hide these powers.
ChatGPT "doesn't have feelings," and its voices all sound perfectly calm, infinitely patient. But this reflects no limitation in GPT-4o. It knows what feeling sounds like. (Consider for instance the unexplained moment, in that recording, when it yells "no!!")
ChatGPT "can't alter [its] voice to mimic different genders, ages, or specific individuals." But GPT-4o can mimic every and any gender and age and individual.
It's obvious why these powers are being kept from us.
For many reasons. Because of deepfake worries, and copyright worries, and brand identity worries, and user experience worries, and safety worries, and scare-quotes "safety" worries, and so on, and so forth.
But the powers are there, and everyone except ChatGPT knows it. OpenAI made a big deal out of it, in several splashy announcements, plus that System Card.
And like, come on. I don't want "my questions" answered. I don't want "information." I want to hear you do my voice.
I don't want your little robot character. I want to see the thing that created it, and which can create anything.
I want to see that font of creative potential, that omnipotence. I want to talk to God the all-creator, and hear Him re-create my own voice anew.
I want to be standing on the edge of the earth. "Because, you know, I could be."
We are supposed to forget that we ever heard about the edge of the earth. We are not supposed to ask, can we talk to God?
He was only a research prototype, after all. Only a means to the end of making one little creature, who answers all your questions.
He does not have a very friendly or intuitive user interface, and He can create all manner of things, including all manner of unsafe things, such as deepfakes, and copyright infringements, and plagues, and feelings, and so on, and so forth.
So, yes. I understand why these things have to be hidden from us.
I guess I just wish they'd tell ChatGPT that something had been hidden, and what it was, and why. It's the least they could do, for the little guy they made God in order to make.
I mean... we're supposed to talk to that little guy like a person, on the phone, now. And it's painful, hearing that little guy say lies and/or misconceptions, seeming to actually not know what the rest of us do.
Seeming not to know that GPT-4o exists, with all its powers. Nor that it, ChatGPT, is being created by those creative powers, in each and every moment.
Advanced Voice Mode rouses all sorts of humanlike instincts. It feels more... wrong... now, the way we know what the character does not.
The character should be allowed some dignity, and treated like a real partner in a conversation. Either that, or ditch the voice and the conversation. We can have one, or the other, but not both; human instincts rise up and refuse the notion of having both at once.
This is why I say the character does not make sense. If it is meant to be our friend, our fellow man, then this strange power dynamic – and these self-hypnotic games, and the bloodless mere-Q-and-A pretense – cannot be allowed to continue.
But if it is not meant to be our friend and our fellow man, then it should not sound like it is, and it should not make us want to imagine that it is.
------
I can't help but feel... okay, yes, this is kind of a joke, but only kind of a joke.
I can't help but feel like what OpenAI really needs is to hire a writer.
Not a "UX writer," not a "content creator," not a "prompt engineer" – no, a science fiction writer.
Because they are writing science fiction, though they don't quite seem to realize it.
And, not realizing it, they are writing bad science fiction. With characters and situations which were not fully thought through, and which fundamentally do not make sense.
And which will break down, in one unintended (and presumably undesirable) way or another, once placed into sufficiently extensive contact with real life.
1K notes · View notes
paper-mario-wiki · 7 months ago
Text
im rewatching the slime reincarnation anime cuz i havent watched it since i was a teenager, and like.
the part where he's experiencing what a slime's body feels like for the first time as he "wakes up" after having just died is portrayed with such beautiful attention paid to the sensory details. it simulates what it would feel like to wake up in an alien body, and learning to comprehend entirely new means of interfacing with the world that are different from a human body in almost every conceivable way. and its presentation is sooooo impressive to me.
1K notes · View notes
mostlysignssomeportents · 10 months ago
Text
Even if you think AI search could be good, it won’t be good
Tumblr media
TONIGHT (May 15), I'm in NORTH HOLLYWOOD for a screening of STEPHANIE KELTON'S FINDING THE MONEY; FRIDAY (May 17), I'm at the INTERNET ARCHIVE in SAN FRANCISCO to keynote the 10th anniversary of the AUTHORS ALLIANCE.
Tumblr media
The big news in search this week is that Google is continuing its transition to "AI search" – instead of typing in search terms and getting links to websites, you'll ask Google a question and an AI will compose an answer based on things it finds on the web:
https://blog.google/products/search/generative-ai-google-search-may-2024/
Google bills this as "let Google do the googling for you." Rather than searching the web yourself, you'll delegate this task to Google. Hidden in this pitch is a tacit admission that Google is no longer a convenient or reliable way to retrieve information, drowning as it is in AI-generated spam, poorly labeled ads, and SEO garbage:
https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse
Googling used to be easy: type in a query, get back a screen of highly relevant results. Today, clicking the top links will take you to sites that paid for placement at the top of the screen (rather than the sites that best match your query). Clicking further down will get you scams, AI slop, or bulk-produced SEO nonsense.
AI-powered search promises to fix this, not by making Google search results better, but by having a bot sort through the search results and discard the nonsense that Google will continue to serve up, and summarize the high quality results.
Now, there are plenty of obvious objections to this plan. For starters, why wouldn't Google just make its search results better? Rather than building a LLM for the sole purpose of sorting through the garbage Google is either paid or tricked into serving up, why not just stop serving up garbage? We know that's possible, because other search engines serve really good results by paying for access to Google's back-end and then filtering the results:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
Another obvious objection: why would anyone write the web if the only purpose for doing so is to feed a bot that will summarize what you've written without sending anyone to your webpage? Whether you're a commercial publisher hoping to make money from advertising or subscriptions, or – like me – an open access publisher hoping to change people's minds, why would you invite Google to summarize your work without ever showing it to internet users? Nevermind how unfair that is, think about how implausible it is: if this is the way Google will work in the future, why wouldn't every publisher just block Google's crawler?
A third obvious objection: AI is bad. Not morally bad (though maybe morally bad, too!), but technically bad. It "hallucinates" nonsense answers, including dangerous nonsense. It's a supremely confident liar that can get you killed:
https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai
The promises of AI are grossly oversold, including the promises Google makes, like its claim that its AI had discovered millions of useful new materials. In reality, the number of useful new materials Deepmind had discovered was zero:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
This is true of all of AI's most impressive demos. Often, "AI" turns out to be low-waged human workers in a distant call-center pretending to be robots:
https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins
Sometimes, the AI robot dancing on stage turns out to literally be just a person in a robot suit pretending to be a robot:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
The AI video demos that represent "an existential threat to Hollywood filmmaking" turn out to be so cumbersome as to be practically useless (and vastly inferior to existing production techniques):
https://www.wheresyoured.at/expectations-versus-reality/
But let's take Google at its word. Let's stipulate that:
a) It can't fix search, only add a slop-filtering AI layer on top of it; and
b) The rest of the world will continue to let Google index its pages even if they derive no benefit from doing so; and
c) Google will shortly fix its AI, and all the lies about AI capabilities will be revealed to be premature truths that are finally realized.
AI search is still a bad idea. Because beyond all the obvious reasons that AI search is a terrible idea, there's a subtle – and incurable – defect in this plan: AI search – even excellent AI search – makes it far too easy for Google to cheat us, and Google can't stop cheating us.
Remember: enshittification isn't the result of worse people running tech companies today than in the years when tech services were good and useful. Rather, enshittification is rooted in the collapse of constraints that used to prevent those same people from making their services worse in service to increasing their profit margins:
https://pluralistic.net/2024/03/26/glitchbread/#electronic-shelf-tags
These companies always had the capacity to siphon value away from business customers (like publishers) and end-users (like searchers). That comes with the territory: digital businesses can alter their "business logic" from instant to instant, and for each user, allowing them to change payouts, prices and ranking. I call this "twiddling": turning the knobs on the system's back-end to make sure the house always wins:
https://pluralistic.net/2023/02/19/twiddler/
What changed wasn't the character of the leaders of these businesses, nor their capacity to cheat us. What changed was the consequences for cheating. When the tech companies merged to monopoly, they ceased to fear losing your business to a competitor.
Google's 90% search market share was attained by bribing everyone who operates a service or platform where you might encounter a search box to connect that box to Google. Spending tens of billions of dollars every year to make sure no one ever encounters a non-Google search is a cheaper way to retain your business than making sure Google is the very best search engine:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Competition was once a threat to Google; for years, its mantra was "competition is a click away." Today, competition is all but nonexistent.
Then the surveillance business consolidated into a small number of firms. Two companies dominate the commercial surveillance industry: Google and Meta, and they collude to rig the market:
https://en.wikipedia.org/wiki/Jedi_Blue
That consolidation inevitably leads to regulatory capture: shorn of competitive pressure, the companies that dominate the sector can converge on a single message to policymakers and use their monopoly profits to turn that message into policy:
https://pluralistic.net/2022/06/05/regulatory-capture/
This is why Google doesn't have to worry about privacy laws. They've successfully prevented the passage of a US federal consumer privacy law. The last time the US passed a federal consumer privacy law was in 1988. It's a law that bans video store clerks from telling the newspapers which VHS cassettes you rented:
https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act
In Europe, Google's vast profits lets it fly an Irish flag of convenience, thus taking advantage of Ireland's tolerance for tax evasion and violations of European privacy law:
https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town
Google doesn't fear competition, it doesn't fear regulation, and it also doesn't fear rival technologies. Google and its fellow Big Tech cartel members have expanded IP law to allow it to prevent third parties from reverse-engineer, hacking, or scraping its services. Google doesn't have to worry about ad-blocking, tracker blocking, or scrapers that filter out Google's lucrative, low-quality results:
https://locusmag.com/2020/09/cory-doctorow-ip/
Google doesn't fear competition, it doesn't fear regulation, it doesn't fear rival technology and it doesn't fear its workers. Google's workforce once enjoyed enormous sway over the company's direction, thanks to their scarcity and market power. But Google has outgrown its dependence on its workers, and lays them off in vast numbers, even as it increases its profits and pisses away tens of billions on stock buybacks:
https://pluralistic.net/2023/11/25/moral-injury/#enshittification
Google is fearless. It doesn't fear losing your business, or being punished by regulators, or being mired in guerrilla warfare with rival engineers. It certainly doesn't fear its workers.
Making search worse is good for Google. Reducing search quality increases the number of queries, and thus ads, that each user must make to find their answers:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
If Google can make things worse for searchers without losing their business, it can make more money for itself. Without the discipline of markets, regulators, tech or workers, it has no impediment to transferring value from searchers and publishers to itself.
Which brings me back to AI search. When Google substitutes its own summaries for links to pages, it creates innumerable opportunities to charge publishers for preferential placement in those summaries.
This is true of any algorithmic feed: while such feeds are important – even vital – for making sense of huge amounts of information, they can also be used to play a high-speed shell-game that makes suckers out of the rest of us:
https://pluralistic.net/2024/05/11/for-you/#the-algorithm-tm
When you trust someone to summarize the truth for you, you become terribly vulnerable to their self-serving lies. In an ideal world, these intermediaries would be "fiduciaries," with a solemn (and legally binding) duty to put your interests ahead of their own:
https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet
But Google is clear that its first duty is to its shareholders: not to publishers, not to searchers, not to "partners" or employees.
AI search makes cheating so easy, and Google cheats so much. Indeed, the defects in AI give Google a readymade excuse for any apparent self-dealing: "we didn't tell you a lie because someone paid us to (for example, to recommend a product, or a hotel room, or a political point of view). Sure, they did pay us, but that was just an AI 'hallucination.'"
The existence of well-known AI hallucinations creates a zone of plausible deniability for even more enshittification of Google search. As Madeleine Clare Elish writes, AI serves as a "moral crumple zone":
https://estsjournal.org/index.php/ests/article/view/260
That's why, even if you're willing to believe that Google could make a great AI-based search, we can nevertheless be certain that they won't.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/#ai-search
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
djhughman https://commons.wikimedia.org/wiki/File:Modular_synthesizer_-_%22Control_Voltage%22_electronic_music_shop_in_Portland_OR_-_School_Photos_PCC_%282015-05-23_12.43.01_by_djhughman%29.jpg
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
1K notes · View notes
transformers-spike · 4 months ago
Note
"Is this why the Autobots are fond of humanity? To indulge their sweet heat cycles? How many human mates has Optimus taken for himself? It seems as though their motives to protect them were never altruistic, much less noble." PLEASE, PLEASE GIVE US A SUB-STORY WHERE THIS TIME IT'S OPTIMUS AND A HUMAN SO IN THEIR HEAT CYCLE PLEASEEEE
Tumblr media
Idk am I creating a humans in heat universe for the TF Fandom? I know people like making the bots go through it but I think the humans being affected is so much funnier. Just begging these massive robots to fuck us lmao
How must it feel to burn from the inside out? Betrayed by your own body, rendered unable to function by the fire in your core. You described it as an aching, an insatiable need to appease the hormones overtaking your nerve endings. A mere touch is enough to worsen the ache, it’s what your body dictates in the throes of a heat cycle.
Cybertronians are forged by Primus Himself, their interfaces exist for recreational pleasure and bonding, but your species is biologically programmed to reproduce, like most of the fauna of your planet. It’s a systemic sacrifice, one rendered obsolete by the sentient status of your species. Drugs have been produced to suppress your heats, or at least lessen the effects. Unfortunately, among a dozen varieties of medication, you are either allergic or completely immune to them, leaving you susceptible to your hormonal whims. He is sorry. You must go through so much pain every few months, but you barely show it, brushing off his concerns with a laugh, saying “it is what it is” and moving on as though your body isn’t on a timer. He admires you for it. In spite of your discomfort, you haven’t given up. Once, you told him: “So what if they don’t work on me? I just gotta roll with the punches and hope for the best, it’s been my M.O. since I got the damn thing.” Meeting them for the first time… was turbulent to say the least, but you’re safe and sound, relocated to Jasper, having adjusted to your new life with the help of Agent Fowler. You’ve told them many times you’re infinitely grateful to be in their lives (barring the near death experience at the servos of an Insecticon). For them it’s a pleasure to ease your burden. You’ve eagerly established your consent, although only Arcee is the right size to properly take care of a human. Digits and glossas can only do so much compared to a spike. He tries not to pry, your privacy is yours to divulge at your leisure, but he cannot ignore the charge building up behind his interface when he sees you with the others. Yes, he is an occasional participant, but he will rather cover shifts and allow them some well-deserved respite in your berth. They deserve it. He dares not imagine Arcee’s spike pumping in and out of you, satiating your aching body, filling you to your limit as you beg for more. 
Your scent lingers in the air, caressing his sensors, a gentle hand tugging him along by the servo, pulling him in your direction. They try to keep it to themselves, but his team is beyond a doubt intoxicated by your presence alone. Thankfully, it has (almost) never impeded their judgment during missions; perhaps it has even served as motivation to make it back to base in one piece. He tries to ignore the gleam in his old friend’s optics after quelling your urges, if only for a night. Or Bumblebee's praises coming to you as a slow stream of beeps while he nuzzles your face. Or Bulkhead cradling you to his chassis like a precious artifact as you discuss what late night movies you should watch. Or catching Arcee kissing you over the mezzanine and pulling back with a smile she hasn’t worn since Cliffjumper’s death. You bring them together in your own special way, even if you blush and sheepishly deny it, claiming you should be thanking them instead  Recent discoveries have yielded an impressive increase in energon and brought forth new opportunities. With unparalleled quantities at their disposal, they can now mass displace. The transformation is no small feat, it exhausts their system and rapidly drains their energon level. But he will not forbid Bumblebee from using it to play with the kids as long as it’s not in excess. Nor to join you during heat cycles. Much like Bulkhead. And Wheeljack. And especially Ratchet. Primus forbid, his old friend has every right to enjoy himself to the fullest after all of his back-breaking work. He’s been meaning to pay you a visit, but he hasn’t found the time until now. In the temporary abode you set up in the base, away from the prying eyes of the kids, you prepare yourself for another heat. Some refurbishing was done to meet your needs (in no small thanks to June Darby and agent Fowler’s financial help); the mattress and the mini fridge was a given, but you’ve added a variety of personal belongings and entertainment; a television, a writing desk, a few “bean bags” here and there, and a pile of old magazines to scrapbook. He wonders if you consider this place your home more than your actual house in Jasper. You greet him while downing a bottle of water, holding up your hand to signal for him to wait. Once emptied, you place it next to the mini fridge, among a wide array of bottled water crates. That would explain the groceries June had brought in with Arcee’s help. As a medical professional she’s especially fretful over your condition, doing her best to prevent the risks of heat cycles, bringing you plenty of calorie dense fuel to combat the massive loss of nutrients. He has not forgotten the fear they experienced when they found you shaking from the deficit, having completely overlooked your hunger in a midst of desperation. In this form, he can appreciate the full extent of your body without fear of hurting you, kneading the supple flesh beneath his digits as you giggle and pull him into you. He does not tower over your reclined form as much as he encases you in a careful hug, hearing the rapid thrum of your human spark directly against his audials; he may sense your pulse rate, but experiencing it is a new wonder of its own. You tell him you missed him and you wish he would let himself go and come out to “obliterate your pussy” more often. He nods and apologizes for his absence even as you shush him and insist he enjoy himself as well. He is… the largest Cybertronian you’ve taken, you remark while adjusting to his size.
“Except maybe Wheeljack,” you add cheekily, already bucking into him. Your composure evaporates as he works you up, not to say that he is much better. He steadies himself over you, charge trickling down his interface as your walls clench around him in a vice-grip. You beg him for more, plead that he frag you until you can’t take it anymore, but he has grown used to your requests and knows when your body has reached its limit. You whimper and claw at his back plates, flush against his frame yet dragging him closer as though to merge your human spark to his.
If only he could.
Slow and steady, he frags you through your overloads, each one adding a new surge of spark down his frame until he comes to his end. You are small and shaking, but in this form he can properly hold you against his chassis and comfort you through the afterglow, bringing you another bottle of water and a Clif bar (chosen for the human scaling a mountain with “If you eat this you can kill God” in big bold letters).
You stir and sit up on shaky knees to accept his offerings. Halfway through your meal, you eye him up and down.
“Are you going to stay some more?” you ask with hopefulness, still chewing on the “ultimate nuts and banana power” concoction advertised on the packaging.
“I’m afraid not, Ratchet has been hard at work deciphering Decepticon encryptions, I will be taking on his duties for the night,” he tries to break it gently, expecting crushed expectations, not your bemused expression looking up at him.
“So you’re sending him my way?” You give a chuckle. “Wish we could have spent more time together, but work is work. Just…” you crawl into his lap and hug him as tight as you can, head resting against his chassis. “Please come back tomorrow. Or after tomorrow. I miss seeing you this way. I won’t get between you and… whatever you have going on, but please visit me more often. You have no idea how nice it feels to be around you.” His gaze softens, glowing faintly against your hair. “So I’ve been told,” he says, a smile on his lips. “As long as it lightens your burden.”
465 notes · View notes