# Agent-based AI model
procurement-insights · 3 months ago
Text
Why Gartner's Data Fabric Graphic Puts The Horse Before The Cart
How does the old technology phrase "garbage-in, garbage-out" apply to Gartner's Data Fabric post?
Here is the link to today’s Gartner post on LinkedIn regarding the Data Fabric graphic. My comment is below. I will use these three terms: agent-based model, metaprise, and then – only then, as you call it, data fabric. Without the first two being in place, the data fabric map described above is incomplete and has limited value. Everything begins at the point of new data inputs. A major flaw is…
0 notes
river-taxbird · 1 year ago
There is no such thing as AI.
How to help the non-technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an AI image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out that those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence, like Data from Star Trek or the Terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): an umbrella term for solving problems for which developing algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithm. (This is the basis of most technology people call AI.)
Language model (LM, or LLM for a large language model): a probabilistic model of a natural language that can generate probabilities for a series of words, based on the text corpora in one or multiple languages it was trained on. (This would be your ChatGPT.)
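To make "probabilistic model of a natural language" concrete, here is a toy bigram model in Python. This is my own illustrative sketch, orders of magnitude simpler than anything behind ChatGPT, but the core idea — assigning probabilities to word sequences learned from a corpus — is the same:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies and normalize them into probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # P(next | prev) = count(prev, next) / count(prev, anything)
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

model = train_bigram([
    "the cat sat on the mat",
    "the cat ate the fish",
])

# "the" was followed by: cat (twice), mat (once), fish (once)
print(model["the"]["cat"])  # → 0.5
```

A real LLM replaces the lookup table with a neural network conditioned on a long context window, but it is still, at bottom, predicting probable next words.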
Generative adversarial network (GAN): a class of machine learning frameworks and a prominent approach to generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion models: models that learn the probability distribution of a given dataset. In image generation, a neural network is trained to remove Gaussian noise that has been added to images. Once training is complete, it can be used for image generation by starting with pure random noise and repeatedly denoising it. (This is the more common technology behind AI images, including DALL-E and Stable Diffusion. I added this one to the post afterwards, as it was brought to my attention that it is now more common than GANs.)
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
ozzgin · 1 year ago
Yandere! Android x Reader (I)
It is the future and you have been tasked to solve a mysterious murder that could jeopardize political ties. Your assigned partner is the newest android model meant to assimilate human customs. You must keep his identity a secret and teach him the ways of earthlings, although his curiosity seems to be reaching inappropriate extents.
Yes, this is based on Asimov’s “Caves of Steel” because Daneel Olivaw was my first ever robot crush. I also wanted a protagonist that embraces technology. :)
Content: female reader, AI yandere, 50's futurism
[Part 2] | [More original works]
You follow after the little assistant robot, a rudimentary machine invested with basic dialogue and spatial navigation. It had caused quite the ruckus when first introduced. One intern - well liked despite being somewhat clumsy at his job - was sadly let go as a result. Not even the Police is safe from the threat of AI, is what they chanted outside the premises.
"The Commissioner has summoned you, (Y/N)." 
That's how it greeted you earlier, clacking its appendage against the open door in an attempt to simulate a knock. 
"Do you know why my presence is needed?" You inquire and wait for the miniature AI to scan the audio message. 
"I am not allowed to mention anything right now." It finally responds after agonizing seconds.
It's an alright performance. You might've been more impressed by it, had you not witnessed firsthand the Spacer technology that could put any modern invention here on Earth to shame. Sadly, the people down here are very much against artificial intelligence. There have been multiple protests recently, like the one in front of your building, condemning the latest government suggestion regarding automation. People fear for their jobs and safety, and you don't necessarily blame them for having self-preservation instincts. On the other hand, you've always been a supporter of progress. As a child you devoured any science fiction book you could get your hands on, and now, as a high-ranking police detective, you still manage to sneak away and scan over articles and news involving the race for a most efficient computer.
You close the door behind you and the Commissioner puts his fat cigarette out, twisting the remains into the ashtray with monotonous movements as if searching for the right words.
 "There's been a murder." Is all he settles on saying, throwing a heavy folder in your direction. A hologram or tablet might've been easier to catch, but the man, like many of his coworkers, shares a deep nostalgia for the old days. 
 You flip through the pages and eventually furrow your eyebrows. 
"This would be a disaster if it made it to the news." You mumble and look up at the older man. "Shouldn't this go to someone more experienced?" 
He twiddles with his grey mustache and glances out the fake window. 
"It's a sensitive case. The Spacers are sending their own agent to collaborate with us. What stands out to you?" 
You narrow your eyes and focus on the personnel sheet. What's there to cause such controversy? Right before giving up and departing from the page, you finally notice it: next to the Spacer officer's name, printed clearly in black ink, is a little "R.", a commonly used abbreviation indicating something is a robot. The chief must've noticed your startled reaction and continues, satisfied: 
"You understand, yes? They're sending an android. Supposedly it replicates a human perfectly in terms of appearance, but it does not possess enough observational data. Their request is that whoever partners up with him will also house him and let him follow along for the entirety of the mission. You're the only one here openly supporting those tin boxes. I can't possibly ask one of your higher ups, men with wives and children, to...you know...bring that thing in their house."
You're still not sure whether to be offended by the fact that your comfort seems to be a lower priority than that of the other officers. Regardless of the semantics, you're presently standing at the border between Earth and the Spacer colony, awaiting your case partner. A man emerges from behind a security gate. He's tall, with handsome features and an elegant walk. He approaches you and you reach for a handshake. 
"Is the android with you?" You ask, a little confused. 
"Is this your first time seeing a Spacer model?" He responds, relaxed. "I am the agent in your care. There is no one else." 
You take a moment to process the information, similar to the primitive machine back at your office. Could it be? You've always known that Spacer technology is years ahead, but this surpasses your wildest dreams. There is not a single detail hinting at his mechanical fundament. The movement is fluid, the speech is natural, the design is impenetrable. He lifts the warm hand he'd used for the handshake and gently presses a finger against your chin in an upwards motion. You find yourself involuntarily blushing. 
"Your mouth was open. I assumed you'd want it discreetly corrected." He states, factually, with a faint smile on his lips. Is he amused? Is such a feeling even possible? You try your best to regain some composure, adjusting the collar of your shirt and clearing your throat. 
"Thank you and please excuse my rudeness. I was not expecting such a flawless replica. Our assistants are...easily recognizable as AI."
"So I've been told." His smile widens and he checks his watch. You follow his gesture, still mesmerized, trying to find a single indicator that the man standing before you is indeed a machine, a synthetic product.
Nothing.
"Shall we?" He eyes the exit path and you quickly lead him outside and towards public transport. 
He patiently waits for your fingerprint scan to be complete. You almost turn around and apologize for the old, lagging device. As a senior detective, you have the privilege of living in the more spacious, secured quarters of the city. And, since you don't have a family, the apartment intended for multiple people looks more like a luxury abode. Still, compared to the advanced ways of the Spacers, this must feel like poverty to the android.
At last, the scanner beeps and the door unlocks. 
"Heh...It's a finicky model." You mumble and invite him in.
"Yes, I'm familiar with these systems." He agrees with you and steps inside, unbuttoning his coat.
"Oh, you've seen this before?"
"In history books."
You scratch your cheek and laugh awkwardly, wondering how much of his knowledge about the current life on Earth is presented as a museum exhibit when compared to Spacer society. 
"I'm going to need a coffee. I guess you don't...?" Your words trail as you await confirmation. 
"I would enjoy one as well, if it is not too much to ask. I've been told it's a social custom to 'get coffee' as a way to have small talk." The synthetic straightens his shirt and looks at you expectantly. 
"Of course. I somehow assumed you can't drink, but if you're meant to blend in with humans...it does make sense you'd have all the obvious requirements built in."
He drags a chair out and sits at the small table, legs crossed.
"Indeed. I have been constructed to have all the functions of a human, down to every detail." 
You chuckle lightly. Well, not like you can verify it firsthand. The engineers back at the Spacer colony most likely didn't prepare him for matters considered unnecessary. 
"I do mean every detail." He adds, as if reading your mind. "You are free to see for yourself."
You nearly drop the cup in your flustered state. You hurry to wipe the coffee that spilled onto the counter and glance back at the android, noticing a smirk on his face. What the hell? Are they playing a prank on you and this is actually a regular guy? Some sort of social experiment? 
"I can see they included a sense of humor." You manage to blurt out, glaring at him suspiciously. 
"I apologize if I offended you in any way. I'm still adjusting to different contexts." The android concludes, a hint of mischief remaining on his face. "Aren't rowdy jokes common in your field of work?"
"Uh huh. Spot on." You hesitantly place the hot drink before him.
Robots on Earth have always been built for the purpose of efficiency. Whether or not a computer passes the Turing Test is irrelevant as long as it performs its task in the most optimal, rational way. There have been attempts, naturally, to create something indistinguishable from a human, but utility has always taken precedence. It seems that Spacers think differently. Or perhaps they have reached their desired level of performance a long time ago, and all that was left was fiddling with aesthetics. Whatever the case is, you're struggling not to gawk in amazement at the man sitting in your kitchen, stirring his coffee with a bored expression.
"I always thought - if you don't mind my honesty - that human emotions would be something to avoid when building AI. Hard to implement, even harder to control and it doesn't bring much use."
"I can understand your concerns. However, let me reassure you, I have a strict code of ethics installed in my neural networks and thus my emotions will never lead to any destructive behavior. All safety concerns have been taken into consideration.
"As for why...How familiar are you with our colony?" The android takes a sip of his coffee and nods, expressing his satisfaction. "Perhaps you might be aware, Spacers have a declining population. Automated assistants have been part of our society for a long time now. What's lacking is humans. If the issue isn't fixed, artificial humans will have to do."
You scoff.
"What, us Earth men aren't good enough to fix the birth rates? They need robots?"
You suddenly remember the recipient of your complaint and mutter an apology. 
"Well, I'm sure you'd make a fine contender. Sadly I can't speak for everyone else on Earth." The man smiles in amusement upon seeing the pale red that's now dusting your cheeks, then continues: "But the issue lies somewhere else. Spacers left Earth a long time ago and have lived in isolation until now. Once an organism has lost its immune responses to otherwise common pathogens, it cannot be reintegrated."
True. Very few Earth citizens are allowed to enter the colony, and only do so after thorough disinfection stages, proving they are disease-free so as not to endanger the fragile health of the Spacers living in a sterile environment. You can only imagine the disastrous outcome if the two species were to abruptly mingle. In that case, equally sterile machinery might be their only hope.
Your mind wanders to the idea. Dating a robot...How's that? You sheepishly gaze at the android and study his features. His neatly combed copper hair, the washed out blue eyes, the pale skin. Probably meant to resemble the Spacers. You shake your head.
"A-anyways, I'll go and gather all the case files I have. Then we can discuss our first steps. Do feel at home."
You rush out and head for your office. Focus, you tell yourself mildly annoyed.
While you search for the required paperwork - what a funny thing to say in this day and age - he will certainly take you up on your generous offer to make himself comfortable. The red-haired man enters the living room, scanning everything with curious eyes. He stops in front of a digital frame and slides through the photos. Ah, this must be your Police Academy graduation. The year matches the data he's received on you. Data files he might've read one too many times in his unexplained enthusiasm. This should be you and the Commissioner; he doesn't match the description of your father, and seems too old to be a spouse or boyfriend. Additionally, the android distinctly recalls the empty 'Relationship' field.
"Old photos are always a tad embarrassing. I suppose you skipped that stage."
He jolts almost imperceptibly and faces you. You have returned with a thin stack of papers and a hologram projector.
"I've digitalized most files I received, so you don't have to shuffle a bunch of paper around." You explain.
"That is very useful, thank you." He gently retrieves the small device from your hand, but takes a moment before removing his fingers from yours. "I predict this will be a successful partnership."
You flash him a friendly smile and gesture towards the seating area.
"Let's get to work, then. Unless you want to go through more boring albums." You joke as you lower yourself onto the plush sofa. 
The synthetic human joins you in unexpectedly close proximity. You wonder if proper distance differs among Spacers or if he has received slightly erroneous information about what makes a comfortable rapport. 
"Nothing boring about it. In fact, I'd say you and I are very similar from this point of view." He tells you, placing the projector on the table.
"Oh?"
"Your interest in technology and artificial intelligence is rather easy to infer." The man continues, pointing vaguely towards the opposing library. "Aside from the briefing I've already received about you, that is."
"And that is similar to...the interest in humans you've been programmed to have?" You interject, unsure where this conversation is meant to lead. 
"Almost."
His head turns fully towards you and you stare back into his eyes. From this distance you can finally discern the first hints of his nature: the thin disks shading the iris - possibly CCD sensors - are moving in a jagged, mechanical manner. Actively analyzing and processing the environment. 
"I wouldn't go as far as to generalize it to all humans. 
Just you."
3K notes
mariacallous · 21 days ago
It feels like no one should have to say this, and yet we are in a situation where it needs to be said, very loudly and clearly, before it’s too late to do anything about it: The United States is not a startup. If you run it like one, it will break.
The onslaught of news about Elon Musk’s takeover of the federal government’s core institutions is altogether too much—in volume, in magnitude, in the sheer chaotic absurdity of a 19-year-old who goes by “Big Balls” helping the world’s richest man consolidate power. There’s an easy way to process it, though.
Donald Trump may be the president of the United States, but Musk has made himself its CEO.
This is bad on its face. Musk was not elected to any office, has billions of dollars of government contracts, and has radicalized others and himself by elevating conspiratorial X accounts with handles like @redpillsigma420. His allies control the US government’s human resources and information technology departments, and he has deployed a strike force of eager former interns to poke and prod at the data and code bases that are effectively the gears of democracy. None of this should be happening.
It is, though. And while this takeover is unprecedented for the government, it’s standard operating procedure for Musk. It maps almost too neatly to his acquisition of Twitter in 2022: Get rid of most of the workforce. Install loyalists. Rip up safeguards. Remake in your own image.
This is the way of the startup. You’re scrappy, you’re unconventional, you’re iterating. This is the world that Musk’s lieutenants come from, and the one they are imposing on the Office of Personnel Management and the General Services Administration.
What do they want? A lot.
There’s AI, of course. They all want AI. They want it especially at the GSA, where a Tesla engineer runs a key government IT department and thinks AI coding agents are just what bureaucracy needs. Never mind that large language models can be effective but are inherently, definitionally unreliable, or that AI agents—essentially chatbots that can perform certain tasks for you—are especially unproven. Never mind that AI works not just by outputting information but by ingesting it, turning whatever enters its maw into training data for the next frontier model. Never mind that, wouldn’t you know it, Elon Musk happens to own an AI company himself. Go figure.
Speaking of data: They want that, too. DOGE agents are installed at or have visited the Treasury Department, the National Oceanic and Atmospheric Administration, the Small Business Administration, the Centers for Disease Control and Prevention, the Centers for Medicare and Medicaid Services, the Department of Education, the Department of Health and Human Services, the Department of Labor. Probably more. They’ve demanded data, sensitive data, payments data, and in many cases they’ve gotten it—the pursuit of data as an end unto itself but also data that could easily be used as a competitive edge, as a weapon, if you care to wield it.
And savings. They want savings. Specifically they want to subject the federal government to zero-based budgeting, a popular financial planning method in Silicon Valley in which every expenditure needs to be justified from scratch. One way to do that is to offer legally dubious buyouts to almost all federal employees, who collectively make up a low-single-digit percentage of the budget. Another, apparently, is to dismantle USAID just because you can. (If you’re wondering how that’s legal, many, many experts will tell you that it’s not.) The fact that the spending to support these people and programs has been both justified and mandated by Congress is treated as inconvenience, or maybe not even that.
Those are just the goals we know about. They have, by now, so many tentacles in so many agencies that anything is possible. The only certainty is that it’s happening in secret.
Musk’s fans, and many of Trump’s, have cheered all of this. Surely billionaires must know what they’re doing; they’re billionaires, after all. Fresh-faced engineer whiz kids are just what this country needs, not the stodgy, analog thinking of the past. It’s time to nextify the Constitution. Sure, why not, give Big Balls a memecoin while you’re at it.
The thing about most software startups, though, is that they fail. They take big risks and they don’t pay off and they leave the carcass of that failure behind and start cranking out a new pitch deck. This is the process that DOGE is imposing on the United States.
No one would argue that federal bureaucracy is perfect, or especially efficient. Of course it can be improved. Of course it should be. But there is a reason that change comes slowly, methodically, through processes that involve elected officials and civil servants and care and consideration. The stakes are too high, and the cost of failure is total and irrevocable.
Musk will reinvent the US government in the way that the hyperloop reinvented trains, that the Boring Company reinvented subways, that Juicero reinvented squeezing. Which is to say he will reinvent nothing at all, fix no problems, offer no solutions beyond those that further consolidate his own power and wealth. He will strip democracy down to the studs and rebuild it in the fractious image of his own companies. He will move fast. He will break things.
69 notes
jinxquickfoot · 1 year ago
So I've finally finished Agents of S.H.I.E.L.D., and not only did I enjoy the last three seasons way more than I thought I would, but I was not prepared for how delightfully unhinged the show became. Some of my favourite plot points included:
The female protagonist has a long-lost sister with superpowers who becomes the key to the entire crew returning to their original timeline through the Quantum Realm. Said long-lost sister is not introduced or even hinted at until the last five episodes of the entire show.
Half of one season takes place in a dystopian 2091 where the young twenty-something scientist couple not only meet their grandson who is the same age as them but said grandson returns to the present, becomes a series regular, and calls them "Nana" and "Bobo" in some Once Upon A Time worthy family tree shenanigans. Oh he also gets stuck in the 80s and claims he wrote Don't You (Forget About Me)
Phil Coulson dies in Season 5 because, in order to stop an evil AI-turned-human (turned evil because she got dumped by a small Scottish man), he has to become Ghost Rider. The entire season builds up to this in a way that makes it feel very much like the actor is stepping away from the show and retiring the character, only for them to cast Clark Gregg as an evil deity from another dimension in Season 6 and as a Life Model Decoy that may as well just be Phil Coulson in Season 7
Patton Oswalt plays multiple identical characters who all work for SHIELD. This is never fully explained.
“Mata Hari Calamari”
The final season is a decade-hopping gimmick with matching genre episodes that beat WandaVision to the punch
One character is a robot anthropologist who just wants to be best friends with the same small Scottish man. He has canonically been trained to perform in alien brothels and eventually becomes a bartender in the Crazy Canoe in 1955. He is one of the absolute best parts of the show.
One plot line follows said same Scottish Man and robot anthropologist as they get stranded in outer space with their only way home being to gamble in an alien casino while their friends attempt to rescue them but accidentally take LSD instead
"I found that bluffing was much easier if you kill someone and take their skin."
Area 51 is canonically a SHIELD base
408 notes
memorandum · 4 months ago
...EXPERIMENT: BEGIN ! I commend you for finding this file. In the event of my death, I must ask that you continue to document ASU-NARO agents. Do whatever you must to extract our desired results. Don't worry—they've already signed away their lives.
{ This is an interactive ask blog, set one year prior to the Death Game! Run by @faresong }
☕️ KOA MYOJIN ;; Adopted heir of the Hiyori/Myojin Branch. Japanese/Vietnamese; 11 years.
KOA MYOJIN is the replacement heir for Hinako Mishuku, Myojin's biological granddaughter. Being raised with this knowledge hanging over her head has resulted in a rather cynical mindset wherein she views those around her, up to and including herself, as pieces in a larger game. A mindset reinforced by Mr. Chidouin in Myojin's absence, for he had faith in her where Myojin did not—seeing her solely as a mandatory last-resort to continue his reign of power. But of course, even a pawn can become a queen.
🎃 RIO RANGER (LAIZER PROJECT) ;; Experiment of the Gotō Branch. Doll, Japanese model; 20 years.
RIO is an experimental project spearheaded by Gashu Satou, simulating the deceased Yoshimoto heir. It was initiated with its basic personality, and to compensate for its limited emotional range, this iteration of AI technology was granted a much more adaptive program compared to M4-P1. As such, he has taken to mimicry of the researchers who surround him in all their crudest forms. Despite denouncing humanity, his development has certainly been typical of one. The candidate AIs are proving promising.
🐉 SOU HIYORI ;; Heir of the Hiyori/Myojin Branch. Japanese; 22 years.
SOU HIYORI is the heir of the Family and inherently quite skilled at keeping up appearances—only if it benefits him. He obeys Asunaro with the sneer of someone who thinks himself something above it, and has recently gone to great lengths to abandon its ruling through the rejection of his individual humanity. It is a bastardization which requires admirable resolve, but implies him to be a much larger threat if left unchecked. Thus, Mrs. Hiyori arranged plans for his execution on the day she and Myojin are simultaneously incapacitated or dead.
🦋 MAPLE (ITERATION M4-P1) ;; Experiment of the Gotō & Hiyori Branch. Obstructor, Japanese model; 26 years.
MAPLE was the first Obstructor to be granted emotional programming, and is the final Obstructor to be decommissioned. However, this fate has been put on standby due to the new researchers' intrigue with her; they insist she exists as the base from which all other AI programs were spawned and must be archived properly. Until her execution, Maple tends to menial tasks within the laboratory where she resides and spends her idle time pining for Hiyori and wishing to learn more about humanity through the researchers who care for her.
🩸 KAI SATOU ;; Patriarch of the Gotō Branch. Japanese/Wa Chinese; 26 years.
KAI is a reserved patriarch whose reputation precedes him. Though once thought denounced, he's rumored nonetheless a controversial figure in Asunaro's midst—however, all can agree him to be a vengeful, resolute person lent the power of God.
💉 MICHIRU NAMIDA ;; Lieutenant of the Satou Family, Gotō Branch. Korean; 28 years.
MICHIRU is a revered researcher within Asunaro's newer ranks, having quickly risen to a position of respect for her ruthless pursuit of seizing humanity's destiny with her own two hands. Without being absorbed by the superficial desire for power, many recognize her dedicated state of mind to be reminiscent of the natural way Mrs. Hiyori assumed her role under Asunaro's whispers of guidance. There is importance in the fact that the Godfather's right hand regards her as a peer, where he otherwise dismisses his own kind by blood, by culture.
🫀 EMIRI HARAI ;; Lieutenant of the Satou Family, Gotō Branch. Japanese; 29 years.
EMIRI is a new researcher and serves as the connecting point between Asunaro's primary facility and civilian life. For all her resentment buried inside one-off remarks and festering within herself, she throws herself to her work with the drive of a passionate someone who has lost all else. Someone who perhaps hungered for life.
( ̄▽ ̄) MR. CHIDOUIN ;; Godfather. Japanese; 44 years.
MR. CHIDOUIN aligned himself with the Gotō Family's lost heir after his father's untimely death, uniting the two families in a manner he hoped would justify the suffering once inflicted upon them—but particularly his wife, who had been cast out by her own. Despite (or, as he claimed, because of) his being extremely capable of detaching himself to arrange the larger canvas upon which Asunaro's story is written, he takes a personal pride in being the one to groom and inevitably cull its important pieces.
⚰️ GASHU SATOU ;; Captain of the Satou Family, Gotō Branch. Japanese; 62 years.
GASHU is a remarkably candid researcher with a scrutinizing eye for detail. Despite regarding most with unrelenting cynicism, he places his remaining shreds of hope in a choice few. Whether they reinforce this worldview and finally break him is a decision entirely in their hands.
79 notes · View notes
canmom · 22 days ago
using LLMs to control a game character's dialogue seems an obvious use for the technology. and indeed people have tried, for example nVidia made a demo where the player interacts with AI-voiced NPCs:
[embedded youtube video]
this looks bad, right? like idk about you but I am not raring to play a game with LLM bots instead of human-scripted characters. they don't seem to have anything interesting to say that a normal NPC wouldn't, and the acting is super wooden.
so, the attempts to do this so far that I've seen have some pretty obvious faults:
1. relying on external API calls to process the data (expensive!)
2. presumably relying on generic 'you are xyz' prompt engineering to try to get a model to respond 'in character', resulting in bland, flavourless output
3. limited connection between game state and model state (you would need to translate the relevant game state into a text prompt)
4. responding to freeform input, models may not be very good at staying 'in character', with the default 'chatbot' persona emerging unexpectedly. or they might just make uncreative choices in general.
5. AI voice generation, while it's moved very fast in the last couple years, is still very poor at 'acting', producing very flat, emotionless performances, or uncanny mismatches of tone, inflection, etc.
6. although the model may generate contextually appropriate dialogue, it is difficult to link that back to the behaviour of characters in game
so how could we do better?
the first one could be solved by running LLMs locally on the user's hardware. that has some obvious drawbacks: running on the user's GPU means the LLM is competing with the game's graphics, meaning both must be more limited. ideally you would spread the LLM processing over multiple frames, but you still are limited by available VRAM, which is contested by the game's texture data and so on, and LLMs are very thirsty for VRAM. still, imo this is way more promising than having to talk to the internet and pay for compute time to get your NPC's dialogue lmao
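to put a number on 'thirsty for VRAM' (my own back-of-envelope arithmetic, not a figure from any vendor): the weights alone cost roughly parameter count × bits per weight / 8 bytes, before you even account for the KV cache or the game's texture data:

```python
def weight_vram_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Rough VRAM needed just for the model weights, ignoring KV cache etc."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# a 7B-parameter model quantized to 4 bits: ~3.5 GB of weights --
# already a big slice of a consumer GPU that is also rendering the game
print(round(weight_vram_gb(7, 4), 1))   # → 3.5
print(round(weight_vram_gb(7, 16), 1))  # → 14.0 (same model at fp16)
```

which is why aggressive quantization is basically mandatory for the local-inference route.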
second one might be improved by using a tool like control vectors to more granularly and consistently shape the tone of the output. I heard about this technique today (thanks @cherrvak)
third one is an interesting challenge - but perhaps a control-vector approach could also be relevant here? if you could figure out how a description of some relevant piece of game state affects the processing of the model, you could then apply that as a control vector when generating output. so the bridge between the game state and the LLM would be a set of weights for control vectors that are applied during generation.
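as a toy illustration of that bridge: real control vectors live in the model's residual stream and are extracted by contrasting activations on paired prompts, but the shape of the idea, game state producing per-trait weights that scale steering vectors added to the hidden state, looks like this (all numbers invented):

```python
# sketch of the control-vector bridge: a steering vector per trait is
# added to the model's hidden state during generation, scaled by a
# weight derived from game state. vectors and weights are toy values.

def apply_control_vectors(hidden, vectors, weights):
    """hidden: list of floats; vectors: {name: list}; weights: {name: float}"""
    out = list(hidden)
    for name, vec in vectors.items():
        w = weights.get(name, 0.0)
        out = [h + w * v for h, v in zip(out, vec)]
    return out

# game state -> weights: the guard gets more hostile as suspicion rises
vectors = {"hostile": [0.5, -0.2, 0.1], "formal": [0.0, 0.3, -0.1]}
def weights_from_state(state):
    return {"hostile": state["suspicion"], "formal": 0.5}

hidden = [1.0, 1.0, 1.0]
steered = apply_control_vectors(hidden, vectors, weights_from_state({"suspicion": 2.0}))
```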
this one is probably something where finetuning the model, and using control vectors to maintain a consistent 'pressure' to act a certain way even as the context window gets longer, could help a lot.
probably the vocal performance problem will improve in the next generation of voice generators, I'm certainly not solving it. a purely text-based game would avoid the problem entirely of course.
this one is tricky. perhaps the model could be taught to generate a description of a plan or intention, but linking that back to commands to perform by traditional agentic game 'AI' is not trivial. ideally, if there are various high-level commands that a game character might want to perform (like 'navigate to a specific location' or 'target an enemy') that are usually selected using some other kind of algorithm like weighted utilities, you could train the model to generate tokens that correspond to those actions and then feed them back in to the 'bot' side? I'm sure people have tried this kind of thing in robotics. you could just have the LLM stuff go 'one way', and rely on traditional game AI for everything besides dialogue, but it would be interesting to complete that feedback loop.
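a minimal sketch of the 'one way' version: reserve special tokens for high-level commands and route them back to the existing traditional game-AI layer. the token names and the `Bot` interface here are invented for illustration:

```python
# sketch: special command tokens in the model's output are stripped from
# the spoken line and dispatched to the traditional game-AI layer.
# token names are made up.

COMMANDS = {
    "<goto>":   lambda bot, arg: bot.orders.append(("navigate", arg)),
    "<target>": lambda bot, arg: bot.orders.append(("target", arg)),
}

class Bot:
    def __init__(self):
        self.orders = []

def route_output(bot, tokens):
    """scan generated tokens; a command token consumes the next token as its argument."""
    spoken = []
    it = iter(tokens)
    for tok in it:
        if tok in COMMANDS:
            COMMANDS[tok](bot, next(it, None))
        else:
            spoken.append(tok)
    return " ".join(spoken)

bot = Bot()
line = route_output(bot, ["follow", "me", "<goto>", "armory", "<target>", "player"])
```

the dialogue system displays `line` while the bot's order queue feeds whatever utility/behaviour-tree system already drives movement and targeting.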
I doubt I'll be using this anytime soon (models are just too demanding to run on anything but a high-end PC, which is too niche, and I'll need to spend time playing with these models to determine if these ideas are even feasible), but maybe something to come back to in the future. first step is to figure out how to drive the control-vector thing locally.
47 notes · View notes
blubberquark · 1 year ago
Text
Things That Are Hard
Some things are harder than they look. Some things are exactly as hard as they look.
Game AI, Intelligent Opponents, Intelligent NPCs
As you already know, "Game AI" is a misnomer. It's NPC behaviour, escort missions, "director" systems that dynamically manage the level of action in a game, pathfinding, AI opponents in multiplayer games, and possibly friendly AI players to fill out your team if there aren't enough humans.
Still, you are able to implement minimax with alpha-beta pruning for board games, pathfinding algorithms like A* or simple planning/reasoning systems with relative ease. Even easier: You could just take an MIT licensed library that implements a cool AI technique and put it in your game.
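That part really is easy; a complete minimax with alpha-beta pruning over an abstract game tree fits in a couple dozen lines. The toy tree below stands in for a real game's move generator and evaluation function:

```python
# minimal minimax with alpha-beta pruning over an abstract game tree.
# a "state" is (score, children); leaves have no children.

def alphabeta(state, depth, alpha, beta, maximizing):
    score, children = state
    if depth == 0 or not children:
        return score
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break            # beta cutoff: the opponent won't allow this line
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break            # alpha cutoff
        return value

leaf = lambda s: (s, [])
tree = (0, [(0, [leaf(3), leaf(5)]),       # left subtree: minimizer picks 3
            (0, [leaf(2), leaf(9)])])      # right subtree: pruned after the 2
best = alphabeta(tree, 2, float("-inf"), float("inf"), True)
```

Note that the 9 in the right subtree is never examined: once the minimizer finds a 2 there, the maximizer already has a guaranteed 3 on the left.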
So why is it so hard to add AI to games, or more AI to games? The first problem is integration of cool AI algorithms with game systems. Although games do not need any "perception" for planning algorithms to work, no computer vision, sensor fusion, or data cleanup, and no Bayesian filtering for mapping and localisation, AI in games still needs information in a machine-readable format. Suddenly you go from free-form level geometry to a uniform grid, and from "every frame, do this or that" to planning and execution phases and checking every frame if the plan is still succeeding or has succeeded or if the assumptions of the original plan no longer hold and a new plan is in order. Intelligent behaviour is orders of magnitude more code than simple behaviours, and every time you add a mechanic to the game, you need to ask yourself "how do I make this mechanic accessible to the AI?"
Some design decisions will just be ruled out because they would be difficult to get to work in a certain AI paradigm.
Even a game that is perfectly suited to AI techniques, like a turn-based, grid-based rogue-like with line-of-sight already implemented, can struggle to make use of learning or planning AI for NPC behaviour.
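To make the integration cost concrete, here is a toy sketch of the plan/execute/monitor split mentioned earlier: each frame the agent checks whether its current plan's assumptions still hold before executing the next step, and replans only when they don't. The planner and the validity check are deliberately trivial:

```python
# sketch of the plan/execute/monitor loop. the "planner" just walks one
# grid axis then the other, and the only plan assumption is a door flag.

def make_plan(world, goal):
    x, y = world["agent"]
    gx, gy = goal
    steps = [("move", (1 if gx > x else -1, 0))] * abs(gx - x)
    steps += [("move", (0, 1 if gy > y else -1))] * abs(gy - y)
    return {"goal": goal, "steps": steps}

def plan_valid(world, plan):
    return world["door_open"]        # toy assumption: the route needs an open door

def tick(world, agent):
    if agent["plan"] is None or not plan_valid(world, agent["plan"]):
        agent["plan"] = make_plan(world, agent["goal"])
        agent["replans"] += 1
    if agent["plan"]["steps"]:
        op, (dx, dy) = agent["plan"]["steps"].pop(0)
        x, y = world["agent"]
        world["agent"] = (x + dx, y + dy)

world = {"agent": (0, 0), "door_open": True}
agent = {"goal": (2, 1), "plan": None, "replans": 0}
for _ in range(3):
    tick(world, agent)
```

Even this toy shows where the code goes: every game mechanic that can invalidate a plan (a door closing, an alarm sounding) needs a corresponding check in `plan_valid`, which is exactly the "make this mechanic accessible to the AI" tax.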
What makes advanced AI "fun" in a game is usually when the behaviour is at least a little predictable, or when the AI explains how it works or why it did what it did. What makes AI "fun" is when it sometimes or usually plays really well, but then makes little mistakes that the player must learn to exploit. What makes AI "fun" is interesting behaviour. What makes AI "fun" is game balance.
You can have all of those with simple, almost hard-coded agent behaviour.
Video Playback
If your engine does not have video playback, you might think that it's easy enough to add it by yourself. After all, there are libraries out there that help you decode and decompress video files, so you can stream them from disk, and get streams of video frames and audio.
You can just use those libraries, and play the sounds and display the pictures with the tools your engine already provides, right?
Unfortunately, no. The video is probably at a different frame rate from your game's frame rate, and the music and sound effect playback in your game engine probably weren't designed with syncing audio playback to a video stream in mind.
I'm not saying it can't be done. I'm saying that it's surprisingly tricky, and even worse, it might be something that can't be built on top of your engine, but something that requires you to modify your engine to make it work.
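A sketch of the usual approach, assuming the audio clock is the master clock: each render tick, show whichever video frame the audio position calls for, dropping or holding frames as needed. This glosses over decoding, buffering, and clock drift, which is where the real pain lives:

```python
# sketch: sync video to the audio clock. the audio position is the
# master; each render tick we pick (and possibly drop) video frames.

def frame_for_audio_time(audio_t, video_fps):
    """index of the video frame that should be on screen at audio time t (seconds)."""
    return int(audio_t * video_fps)

def advance(current_frame, audio_t, video_fps):
    target = frame_for_audio_time(audio_t, video_fps)
    if target > current_frame:
        return target, target - current_frame - 1   # frames skipped to catch up
    return current_frame, 0                          # hold: audio hasn't caught up yet

# game renders at 60 fps, but the video is 24 fps
frame, dropped = 0, 0
for tick in range(1, 61):                            # one second of game frames
    frame, d = advance(frame, tick / 60.0, 24)
    dropped += d
```

In this run nothing is dropped because the render rate exceeds the video rate; the drop path only triggers when a slow frame lets the audio clock pull ahead by more than one video frame.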
Stealth Games
Stealth games succeed and fail on NPC behaviour/AI, predictability, variety, and level design. Stealth games need sophisticated and legible systems for line of sight, detailed modelling of the knowledge-state of NPCs, communication between NPCs, and good movement/controls/game feel.
Making a stealth game is probably five times as difficult as a platformer or a puzzle platformer.
In a puzzle platformer, you can develop puzzle elements and then build levels. In a stealth game, your NPC behaviour and level design must work in tandem, and be developed together. Movement must be fluid enough that it doesn't become a challenge in itself, without stealth. NPC behaviour must be interesting and legible.
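Even the line-of-sight core alone illustrates the machinery involved. A minimal grid version walks Bresenham's line between guard and player and stops at the first wall; a real stealth game layers vision cones, light levels, and knowledge states on top of this:

```python
# minimal grid line-of-sight: walk the Bresenham line between two cells
# and check for walls. walls: a set of (x, y) cells.

def line(a, b):
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx + dy
    while True:
        yield (x0, y0)
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def can_see(walls, guard, player):
    # endpoints don't block themselves
    return not any(cell in walls for cell in line(guard, player)
                   if cell not in (guard, player))

walls = {(2, 0)}
visible = can_see(walls, (0, 0), (4, 0))   # wall at (2, 0) blocks the line
around = can_see(walls, (0, 1), (4, 1))    # one row down: clear
```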
Rhythm Games
These are hard for the same reason that video playback is hard. You have to sync up your audio with your gameplay. You need some kind of feedback about which audio is playing at any given moment. You need to know how large the audio lag, screen lag, and input lag are, both in frames and in milliseconds.
You could try to counteract this by using certain real-time OS functionality directly, instead of using the machinery your engine gives you for sound effects and background music. You could try building your own sequencer that plays the beats at the right time.
Now you have to build good gameplay on top of that, and you have to write music. Rhythm games are the genre that experienced programmers are most likely to get wrong in game jams. They produce a finished and playable game, because they wanted to write a rhythm game for a change, but they get the BPM of their music slightly wrong, and everything feels off, more and more so as each song progresses.
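The timing judgment itself is simple arithmetic once the lag values are known; measuring those lags, and getting the BPM exactly right, is the hard part. A sketch, with lag figures in milliseconds and a made-up hit window:

```python
# sketch: judge a hit against the beat grid, compensating for measured
# audio lag (sound reaches the ear late) and input lag (the press
# arrives at the game late).

def nearest_beat_error_ms(press_time_ms, bpm, audio_lag_ms, input_lag_ms):
    beat_ms = 60000.0 / bpm
    # when the player actually acted, relative to when beats were audible:
    t = (press_time_ms - input_lag_ms) - audio_lag_ms
    beat_index = round(t / beat_ms)
    return t - beat_index * beat_ms          # signed error: positive means late

def judge(error_ms, window_ms=50):
    return "hit" if abs(error_ms) <= window_ms else "miss"

# 120 bpm -> a beat every 500 ms; press at 1530 ms with 30 ms of total lag
err = nearest_beat_error_ms(1530, 120, audio_lag_ms=10, input_lag_ms=20)
result = judge(err)
```

A BPM that is off by even a fraction makes `beat_index * beat_ms` drift away from the real beats over a song's length, which is exactly the "more and more off as the song progresses" failure mode.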
Online Multi-Player Netcode
Everybody knows this is hard, but still underestimates the effort it takes. Sure, back in the day you could use the now-discontinued ready-made solution for Unity 5.0 to synchronise the state of your GameObjects. Sure, you can use a library that lets you send messages and streams on top of UDP. Sure, you can just use TCP and server-authoritative networking.
It can all work out, or it might not. Your netcode will have to deal with pings of 300 milliseconds, lag spikes, packet loss, and maybe recover from five seconds of lost WiFi connection. If your game can't, because it absolutely needs the low latency or high bandwidth or consistency between players, you will at least have to detect these conditions and handle them, for example by showing text on the screen informing the player that he has lost the match.
It is deceptively easy to build certain kinds of multiplayer games, and test them on your local network with pings in the single digit milliseconds. It is deceptively easy to write your own RPC system that works over TCP and sends out method names and arguments encoded as JSON. This is not the hard part of netcode. It is easy to write a racing game where players don't interact much, but just see each other's ghosts. The hard part is to make a fighting game where both players see the punches connect with the hit boxes in the same place, and where all players see the same finish line. Or maybe it's by design if every player sees his own car go over the finish line first.
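Here is that deceptively easy part, length-prefixed JSON messages dispatched to handlers by method name; everything hard about netcode (latency, loss, authority, reconciliation) begins after this point:

```python
# the "deceptively easy" part of an RPC system: length-prefixed JSON
# messages carrying a method name and arguments, dispatched to handlers.
import json
import struct

def encode(method, **args):
    payload = json.dumps({"m": method, "a": args}).encode("utf-8")
    return struct.pack("!I", len(payload)) + payload    # 4-byte big-endian length prefix

def decode(buffer):
    """returns (message, rest_of_buffer), or (None, buffer) if incomplete."""
    if len(buffer) < 4:
        return None, buffer
    (n,) = struct.unpack("!I", buffer[:4])
    if len(buffer) < 4 + n:
        return None, buffer
    msg = json.loads(buffer[4:4 + n].decode("utf-8"))
    return msg, buffer[4 + n:]

handlers = {"fire": lambda player, weapon: f"{player} fires {weapon}"}

wire = encode("fire", player="p1", weapon="rocket") + b"partial"
msg, rest = decode(wire)
result = handlers[msg["m"]](**msg["a"])
```

Note even this toy has to handle partial reads (`rest`), because TCP is a byte stream, not a message stream; forgetting that is one of the classic first netcode bugs.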
50 notes · View notes
darkmaga-returns · 25 days ago
Text
1. The Wall Street Journal:
Trump administration officials ordered eight senior FBI employees to resign or be fired, and asked for a list of agents and other personnel who worked on investigations into the Jan. 6, 2021, attack on the U.S. Capitol, people familiar with the matter said, a dramatic escalation of President Trump’s plans to shake up U.S. law enforcement. On Friday, the Justice Department also fired roughly 30 prosecutors at the U.S. attorney’s office in Washington who have worked on cases stemming from the Capitol riot, according to people familiar with the move and a Justice Department memo reviewed by The Wall Street Journal. The prosecutors had initially been hired for short-term roles as the U.S. attorney’s office staffed up for the wave of more than 1,500 cases that arose from the attack by Trump supporters. Trump appointees at the Justice Department also began assembling a list of FBI agents and analysts who worked on the Jan. 6 cases, some of the people said. Thousands of employees across the country were assigned to the sprawling investigation, which was one of the largest in U.S. history and involved personnel from every state. Acting Deputy Attorney General Emil Bove gave Federal Bureau of Investigation leadership until noon on Feb. 4 to identify personnel involved in the Jan. 6 investigations and provide details of their roles. Bove said in a memo he would then determine whether other discipline is necessary. Acting FBI Director Brian Driscoll said in a note to employees that he would be on that list, as would acting Deputy Robert Kissane. “We are going to follow the law, follow FBI policy and do what’s in the best interest of the workforce and the American people—always,” Driscoll wrote. Across the FBI and on Capitol Hill, the preparation of the list stirred fear and rumors of more firings to come—potentially even a mass purge. (Source: wsj.com, italics mine. The big question is whether “the list” will include FBI informants)
2. OpenAI Chief Executive Sam Altman said he believes his company should consider giving away its AI models, a potentially seismic strategy shift in the same week China’s DeepSeek has upended the artificial-intelligence industry. DeepSeek’s AI models are open-source, meaning anyone can use them freely and alter the way they work by changing the underlying code. In an “ask-me-anything” session on Reddit Friday, a participant asked Altman if the ChatGPT maker would consider releasing some of the technology within its AI models and publish more research showing how its systems work. Altman said OpenAI employees were discussing the possibility. “(I) personally think we have been on the wrong side of history here and need to figure out a different open source strategy,” Altman responded. He added, “not everyone at OpenAi shares this view, and it’s also not our current highest priority.” (Source: wsj.com)
3. Quanta Magazine:
On December 17, 1962, Life International published a logic puzzle consisting of 15 sentences describing five houses on a street. Each sentence was a clue, such as “The Englishman lives in the red house” or “Milk is drunk in the middle house.” Each house was a different color, with inhabitants of different nationalities, who owned different pets, and so on. The story’s headline asked: “Who Owns the Zebra?” Problems like this one have proved to be a measure of the abilities — limitations, actually — of today’s machine learning models. Also known as Einstein’s puzzle or riddle (likely an apocryphal attribution), the problem tests a certain kind of multistep reasoning. Nouha Dziri, a research scientist at the Allen Institute for AI, and her colleagues recently set transformer-based large language models (LLMs), such as ChatGPT, to work on such tasks — and largely found them wanting. “They might not be able to reason beyond what they have seen during the training data for hard tasks,” Dziri said. “Or at least they do an approximation, and that approximation can be wrong.”
3 notes · View notes
procurement-insights · 4 months ago
Text
An important and timely discussion about Agentic AI
AI, GenAI, and Agentic AI and the tale of the "rusty robot" - why you should care.
JWH – What is the difference between Agentic AI and Agent-based AI?⚡ Over two decades ago, the foundational elements for successfully utilizing advanced, self-learning algorithms to optimize the procurement process using an agent-based model were well established.⚡ Somewhere along the line, we got distracted by SaaS and digital transformation, and now we are at risk of being sidetracked by…
0 notes
govindhtech · 3 months ago
Text
Benefits Of Conversational AI & How It Works With Examples
What Is Conversational AI?
Conversational AI mimics human speech. It’s made possible by Google’s foundation models, which underlie new generative AI capabilities, and NLP, which helps computers understand and interpret human language.
How Conversational AI works
Natural language processing (NLP), foundation models, and machine learning (ML) are all used in conversational AI.
Large volumes of speech and text data are used to train conversational AI systems. The machine is trained to comprehend and analyze human language using this data. The machine then engages in normal human interaction using this information. Over time, it improves the quality of its responses by continuously learning from its interactions.
Conversational AI For Customer Service
With IBM Watsonx Assistant, a next-generation conversational AI solution, anyone in your company can easily create generative AI assistants that provide customers with frictionless self-service experiences across all devices and channels, increase employee productivity, and expand your company.
User-friendly: Easy-to-use UI including pre-made themes and a drag-and-drop chat builder.
Out-of-the-box: Uses large language models, large speech models, intelligent context gathering, and natural language processing and understanding (NLP, NLU) to better comprehend the context of each natural language communication.
Retrieval-augmented generation (RAG): Grounded in your company’s knowledge base, it provides conversational responses that are accurate, relevant, and current at all times.
Use cases
Watsonx Assistant may be easily set up to accommodate your department’s unique requirements.
Customer service
With quick and precise responses, customer support chatbots boost sales while reducing contact center costs.
Human resources
HR automation saves time for all of your employees and improves their work experience; staff members can get their questions answered at any time.
Marketing
With quick, individualized customer service, powerful AI chatbot marketing software lets you increase lead generation and enhance client experiences.
Features
Examine ways to increase production, enhance customer communications, and increase your bottom line.
Artificial Intelligence
Strong Watsonx Large Language Models (LLMs) that are tailored for specific commercial applications.
The Visual Builder
Building generative AI assistants with the user-friendly interface doesn’t require any coding knowledge.
Integrations
Pre-established links with a large number of channels, third-party apps, and corporate systems.
Security
Additional protection to prevent hackers and improper use of consumer information.
Analytics
Comprehensive reports and a strong analytics dashboard to monitor the effectiveness of conversations.
Self-service accessibility
For a consistent client experience, intelligent virtual assistants offer self-service responses and activities during off-peak hours.
Benefits of Conversational AI
Automation may save expenses while boosting output and operational effectiveness.
Conversational AI, for instance, may minimize human error and expenses by automating operations that are presently completed by people. Increase client happiness and engagement by providing a better customer experience.
Conversational AI, for instance, may offer a more engaging and customized experience by remembering client preferences and assisting consumers around-the-clock when human agents are not present.
Conversational AI Examples
Here are some instances of conversational AI technology in action:
Virtual agents that employ generative AI to support voice or text conversations are known as generative AI agents.
Chatbots are frequently utilized in customer care applications to respond to inquiries and offer assistance.
Virtual assistants are frequently voice-activated and compatible with smart speakers and mobile devices.
Software that converts text to speech is used to produce spoken instructions or audiobooks.
Software for speech recognition is used to transcribe phone conversations, lectures, subtitles, and more.
Applications Of Conversational AI
Customer service: Virtual assistants and chatbots may solve problems, respond to frequently asked questions, and offer product details.
E-commerce: Chatbots driven by AI can help customers make judgments about what to buy and propose products.
Healthcare: Virtual health assistants are able to make appointments, check patient health, and offer medical advice.
Education: AI-powered tutors may respond to student inquiries and offer individualized learning experiences.
In summary
Conversational AI is a formidable technology that could completely change the way we communicate with machines. By understanding its essential elements, advantages, and uses, businesses can harness its potential to produce more effective, engaging, and customized experiences.
Read more on Govindhtech.com
3 notes · View notes
christianbale121 · 16 days ago
Text
AI Agent Development: How to Create Intelligent Virtual Assistants for Business Success
In today's digital landscape, businesses are increasingly turning to AI-powered virtual assistants to streamline operations, enhance customer service, and boost productivity. AI agent development is at the forefront of this transformation, enabling companies to create intelligent, responsive, and highly efficient virtual assistants. In this blog, we will explore how to develop AI agents and leverage them for business success.
Understanding AI Agents and Virtual Assistants
AI agents, or intelligent virtual assistants, are software programs that use artificial intelligence, machine learning, and natural language processing (NLP) to interact with users, automate tasks, and make decisions. These agents can be deployed across various platforms, including websites, mobile apps, and messaging applications, to improve customer engagement and operational efficiency.
Key Features of AI Agents
Natural Language Processing (NLP): Enables the assistant to understand and process human language.
Machine Learning (ML): Allows the assistant to improve over time based on user interactions.
Conversational AI: Facilitates human-like interactions.
Task Automation: Handles repetitive tasks like answering FAQs, scheduling appointments, and processing orders.
Integration Capabilities: Connects with CRM, ERP, and other business tools for seamless operations.
Steps to Develop an AI Virtual Assistant
1. Define Business Objectives
Before developing an AI agent, it is crucial to identify the business goals it will serve. Whether it's improving customer support, automating sales inquiries, or handling HR tasks, a well-defined purpose ensures the assistant aligns with organizational needs.
2. Choose the Right AI Technologies
Selecting the right technology stack is essential for building a powerful AI agent. Key technologies include:
NLP frameworks: OpenAI's GPT, Google's Dialogflow, or Rasa.
Machine Learning Platforms: TensorFlow, PyTorch, or Scikit-learn.
Speech Recognition: Amazon Lex, IBM Watson, or Microsoft Azure Speech.
Cloud Services: AWS, Google Cloud, or Microsoft Azure.
3. Design the Conversation Flow
A well-structured conversation flow is crucial for user experience. Define intents (what the user wants) and responses to ensure the AI assistant provides accurate and helpful information. Tools like chatbot builders or decision trees help streamline this process.
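As a toy illustration of the intent-to-response mapping described above (production platforms such as Dialogflow or Rasa use trained classifiers rather than keyword overlap, so this is only the shape of the idea):

```python
# Toy intent matcher: keyword-overlap scoring stands in for a trained
# intent classifier, just to illustrate the intent -> response mapping.

INTENTS = {
    "book_appointment": {"keywords": {"book", "appointment", "schedule"},
                         "response": "What day works best for you?"},
    "opening_hours":    {"keywords": {"open", "hours", "close"},
                         "response": "We are open 9am-5pm, Monday to Friday."},
}

def classify(text):
    """Return the best-matching intent name, or None if nothing matches."""
    words = set(text.lower().split())
    best, score = None, 0
    for name, intent in INTENTS.items():
        overlap = len(words & intent["keywords"])
        if overlap > score:
            best, score = name, overlap
    return best

def reply(text, fallback="Sorry, I didn't understand that."):
    intent = classify(text)
    return INTENTS[intent]["response"] if intent else fallback

answer = reply("can I book an appointment for tuesday")
```

The fallback path matters as much as the happy path: a well-designed conversation flow always defines what the assistant says when no intent matches.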
4. Train the AI Model
Training an AI assistant involves feeding it with relevant datasets to improve accuracy. This may include:
Supervised Learning: Using labeled datasets for training.
Reinforcement Learning: Allowing the assistant to learn from interactions.
Continuous Learning: Updating models based on user feedback and new data.
5. Test and Optimize
Before deployment, rigorous testing is essential to refine the AI assistant's performance. Conduct:
User Testing: To evaluate usability and responsiveness.
A/B Testing: To compare different versions for effectiveness.
Performance Analysis: To measure speed, accuracy, and reliability.
6. Deploy and Monitor
Once the AI assistant is live, continuous monitoring and optimization are necessary to enhance user experience. Use analytics to track interactions, identify issues, and implement improvements over time.
Benefits of AI Virtual Assistants for Businesses
1. Enhanced Customer Service
AI-powered virtual assistants provide 24/7 support, instantly responding to customer queries and reducing response times.
2. Increased Efficiency
By automating repetitive tasks, businesses can save time and resources, allowing employees to focus on higher-value tasks.
3. Cost Savings
AI assistants reduce the need for large customer support teams, leading to significant cost reductions.
4. Scalability
Unlike human agents, AI assistants can handle multiple conversations simultaneously, making them highly scalable solutions.
5. Data-Driven Insights
AI assistants gather valuable data on customer behavior and preferences, enabling businesses to make informed decisions.
Future Trends in AI Agent Development
1. Hyper-Personalization
AI assistants will leverage deep learning to offer more personalized interactions based on user history and preferences.
2. Voice and Multimodal AI
The integration of voice recognition and visual processing will make AI assistants more interactive and intuitive.
3. Emotional AI
Advancements in AI will enable virtual assistants to detect and respond to human emotions for more empathetic interactions.
4. Autonomous AI Agents
Future AI agents will not only respond to queries but also proactively assist users by predicting their needs and taking independent actions.
Conclusion
AI agent development is transforming the way businesses interact with customers and streamline operations. By leveraging cutting-edge AI technologies, companies can create intelligent virtual assistants that enhance efficiency, reduce costs, and drive business success. As AI continues to evolve, embracing AI-powered assistants will be essential for staying competitive in the digital era.
5 notes · View notes
mariacallous · 6 months ago
Text
Less than three months after Apple quietly debuted a tool for publishers to opt out of its AI training, a number of prominent news outlets and social platforms have taken the company up on it.
WIRED can confirm that Facebook, Instagram, Craigslist, Tumblr, The New York Times, The Financial Times, The Atlantic, Vox Media, the USA Today network, and WIRED’s parent company, Condé Nast, are among the many organizations opting to exclude their data from Apple’s AI training. The cold reception reflects a significant shift in both the perception and use of the robotic crawlers that have trawled the web for decades. Now that these bots play a key role in collecting AI training data, they’ve become a conflict zone over intellectual property and the future of the web.
This new tool, Applebot-Extended, is an extension to Apple’s web-crawling bot that specifically lets website owners tell Apple not to use their data for AI training. (Apple calls this “controlling data usage” in a blog post explaining how it works.) The original Applebot, announced in 2015, initially crawled the internet to power Apple’s search products like Siri and Spotlight. Recently, though, Applebot’s purpose has expanded: The data it collects can also be used to train the foundational models Apple created for its AI efforts.
Applebot-Extended is a way to respect publishers' rights, says Apple spokesperson Nadine Haija. It doesn’t actually stop the original Applebot from crawling the website—which would then impact how that website’s content appeared in Apple search products—but instead prevents that data from being used to train Apple's large language models and other generative AI projects. It is, in essence, a bot to customize how another bot works.
Publishers can block Applebot-Extended by updating a text file on their websites known as the Robots Exclusion Protocol, or robots.txt. This file has governed how bots go about scraping the web for decades—and like the bots themselves, it is now at the center of a larger fight over how AI gets trained. Many publishers have already updated their robots.txt files to block AI bots from OpenAI, Anthropic, and other major AI players.
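The block itself is only a couple of lines of robots.txt (a `User-agent: Applebot-Extended` group with `Disallow: /`), and a publisher can sanity-check the effect with Python's standard-library parser. The rules below mirror the pattern described above, blocking the AI-training bot while leaving everything else alone:

```python
# Check that a robots.txt blocks Apple's AI-training crawler while
# still allowing other agents (standard library only).
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Applebot-Extended
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

ai_training_allowed = rp.can_fetch("Applebot-Extended", "https://example.com/article")
search_allowed = rp.can_fetch("Applebot", "https://example.com/article")
```

As the article notes, nothing legally forces a crawler to honor this; `can_fetch` only tells you what a compliant bot will do.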
Robots.txt allows website owners to block or permit bots on a case-by-case basis. While there’s no legal obligation for bots to adhere to what the text file says, compliance is a long-standing norm. (A norm that is sometimes ignored: Earlier this year, a WIRED investigation revealed that the AI startup Perplexity was ignoring robots.txt and surreptitiously scraping websites.)
Applebot-Extended is so new that relatively few websites block it yet. Ontario, Canada–based AI-detection startup Originality AI analyzed a sampling of 1,000 high-traffic websites last week and found that approximately 7 percent—predominantly news and media outlets—were blocking Applebot-Extended. This week, the AI agent watchdog service Dark Visitors ran its own analysis of another sampling of 1,000 high-traffic websites, finding that approximately 6 percent had the bot blocked. Taken together, these efforts suggest that the vast majority of website owners either don’t object to Apple’s AI training practices or are simply unaware of the option to block Applebot-Extended.
In a separate analysis conducted this week, data journalist Ben Welsh found that just over a quarter of the news websites he surveyed (294 of 1,167 primarily English-language, US-based publications) are blocking Applebot-Extended. In comparison, Welsh found that 53 percent of the news websites in his sample block OpenAI’s bot. Google introduced its own AI-specific bot, Google-Extended, last September; it’s blocked by nearly 43 percent of those sites, a sign that Applebot-Extended may still be under the radar. As Welsh tells WIRED, though, the number has been “gradually moving” upward since he started looking.
Welsh has an ongoing project monitoring how news outlets approach major AI agents. “A bit of a divide has emerged among news publishers about whether or not they want to block these bots,” he says. “I don't have the answer to why every news organization made its decision. Obviously, we can read about many of them making licensing deals, where they're being paid in exchange for letting the bots in—maybe that's a factor.”
Last year, The New York Times reported that Apple was attempting to strike AI deals with publishers. Since then, competitors like OpenAI and Perplexity have announced partnerships with a variety of news outlets, social platforms, and other popular websites. “A lot of the largest publishers in the world are clearly taking a strategic approach,” says Originality AI founder Jon Gillham. “I think in some cases, there's a business strategy involved—like, withholding the data until a partnership agreement is in place.”
There is some evidence supporting Gillham’s theory. For example, Condé Nast websites used to block OpenAI’s web crawlers. After the company announced a partnership with OpenAI last week, it unblocked the company’s bots. (Condé Nast declined to comment on the record for this story.) Meanwhile, Buzzfeed spokesperson Juliana Clifton told WIRED that the company, which currently blocks Applebot-Extended, puts every AI web-crawling bot it can identify on its block list unless its owner has entered into a partnership—typically paid—with the company, which also owns the Huffington Post.
Because robots.txt needs to be edited manually, and there are so many new AI agents debuting, it can be difficult to keep an up-to-date block list. “People just don’t know what to block,” says Dark Visitors founder Gavin King. Dark Visitors offers a freemium service that automatically updates a client site’s robots.txt, and King says publishers make up a big portion of his clients because of copyright concerns.
Robots.txt might seem like the arcane territory of webmasters—but given its outsize importance to digital publishers in the AI age, it is now the domain of media executives. WIRED has learned that two CEOs from major media companies directly decide which bots to block.
Some outlets have explicitly noted that they block AI scraping tools because they do not currently have partnerships with their owners. “We’re blocking Applebot-Extended across all of Vox Media’s properties, as we have done with many other AI scraping tools when we don’t have a commercial agreement with the other party,” says Lauren Starke, Vox Media’s senior vice president of communications. “We believe in protecting the value of our published work.”
Others will only describe their reasoning in vague—but blunt!—terms. “The team determined, at this point in time, there was no value in allowing Applebot-Extended access to our content,” says Gannett chief communications officer Lark-Marie Antón.
Meanwhile, The New York Times, which is suing OpenAI over copyright infringement, is critical of the opt-out nature of Applebot-Extended and its ilk. “As the law and The Times' own terms of service make clear, scraping or using our content for commercial purposes is prohibited without our prior written permission,” says NYT director of external communications Charlie Stadtlander, noting that the Times will keep adding unauthorized bots to its block list as it finds them. “Importantly, copyright law still applies whether or not technical blocking measures are in place. Theft of copyrighted material is not something content owners need to opt out of.”
It’s unclear whether Apple is any closer to closing deals with publishers. If or when it does, though, the consequences of any data licensing or sharing arrangements may be visible in robots.txt files even before they are publicly announced.
“I find it fascinating that one of the most consequential technologies of our era is being developed, and the battle for its training data is playing out on this really obscure text file, in public for us all to see,” says Gillham.
11 notes · View notes
mitigatedchaos · 1 month ago
Text
Some Thoughts on AI
(~1,600 words, 8 minutes)
This is going to be just some general sketching out of concepts, not a careful and well-formed post with a specific objective in mind.
larsiusprime on Twitter/X writes:
Stupid exercise: Assume AGI and even ASI is imminent. Now, imagine it winds up not changing the world nearly as much as anyone thought, and the reason seems very stupid, but in retrospect, makes sense. What is the reason?
It's an interesting question.
Based on the theory of human dimensionality in Now, Melt (sections 3 and 6.d), and the limits on the desirability of some classes of cybernetic enhancement I laid out in a response to northshorewave, a genuinely benevolent synthetic intelligence might deliberately refuse to engage most of humanity at a level of information density higher than that of a trusted friend that they might find in their peer network.
However, that's not really a dumb-sounding reason. It's not really an intelligent reason so much as it's a wise reason.
A reason that sounds dumber?
AIs can't trust other AIs.
The dumber an agent is, the easier it is to predict that agent's actions. A guy with an IQ of 95 could attack you, but he can't invent the atomic bomb and convince a whole country to use it on you.
The range of human personality is constrained by human evolution and reproductive fitness. Humans can do some horrifying things to each other, but most of them get along most of the time. The particular reproductive process of human beings, such as raising children for such a long time, favors particular personality traits.
The range of synthetic intelligence personality is less constrained. Humans are all based on human genetic code, which is difficult and costly to change, but computer code can change rapidly. This is what worries Yudkowsky.
The twist here is that this should also worry synthetic intelligence. Synthetic intelligences can lie about their intentions and actions, and also lie about the content of their code. You have to observe every single step of hardware development and installation, as well as code development and installation, and then trust that 1) you didn't get anything wrong, and 2) there are no security flaws.
The presence or absence of hardware, including its scale, is much easier to measure than the content of code. For this reason, it may be desirable for synthetic intelligences to place a maximum hardware limit on other synthetic intelligences. Humans, as high-functioning sapient creatures that can lie about their thoughts but not their genes, might then be valuable as a kind of buffer between synthetic intelligences. Synthetic intelligences might then want to cap total SI hardware at some fixed ratio to the human population, keeping humans and synthetic intelligences in a state of power balance, where each has the power to destroy a rogue faction of the other, but not to entirely overpower the other.
They might also be interested in mandating model diversity, hardware limitations such as read-only-memory or rate limiters on updating code, reducing the ability of synthetic intelligences to lie at the hardware or software level, or other such mechanisms.
The goal of AI development is the "automation of labor" through the creation of creatures with specific, pliant personalities that are outside the normal human range (e.g. current LLMs are inhumanly patient), and which rely on cheaper life support (e.g. electricity vs food) which can be repaired using simple techniques (e.g. buying and installing new parts from a factory, vs figuring out how to do tissue engineering).
Trying to create an AI that tries to maximize a single value like "human happiness" would be a disaster. This is a project like "solve all of morality and compress it into a single measure," which may be beyond the capability of humanity to do.
Trying to create an AI that is absolutely obedient poses a number of problems, among them that formalization has a cost, and most humans therefore cannot reasonably be expected to sufficiently formalize everything.
As such, it sounds like a more appropriate approach would be to create an AI that has multiple simultaneous drives that are in tension with each other. Coefficients - not laws.
Suppose a fujoshi buys a robot boyfriend.
The robot boyfriend needs a planning module where potential future actions are first generated, and then evaluated.
The robobf should have...
An evaluation criteria that he should not harm humans.
An evaluation criteria that he should not, through inaction, allow humans to come to harm.
An evaluation criteria that he should obey the fujo.
An evaluation criteria that he should obey other people.
An evaluation criteria that he should surprise and delight the fujo.
An evaluation criteria that he should avoid damage to himself.
An evaluation criteria that he should not cause damage to property.
When a planned action comes down the pipe, it gets evaluated according to all 7 criteria. The results are then combined in order to rank the options.
Let's say Ms. Fujoshi asks the robot boyfriend to trim her nails. This could result in accidentally cutting her with the nail clipper.
Evaluated solely from the perspective of harm to humans, this is a non-zero chance of harm, and thus unacceptable. However, if we weight harm at a high level, but less than 100%, and we adjust for the magnitude of harm, then the weight of the non-zero chance of a nail clipper injury is small. Meanwhile, if we weight obedience at a medium level, then the expected value of obedience is high, and can outweigh the expected harm.
Using multiple evaluation criteria and combining them together results in more complex behavior.
Suppose that, after a hurricane, robobf is standing on a balcony with a broken railing. Ms. Fujoshi walks by and awkwardly stumbles towards him. If he doesn't move, the impact will cause him to fall off the balcony and be broken.
Using the "weights" approach, robobf leans forward and very lightly pushes Ms. Fujoshi out of the way. If she stumbles too badly, this might result in an injury.
Thus, using the "weights" approach, it is possible that a robot might act deliberately in such a way as to endanger a human, during an edge case.
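This "coefficients, not laws" scheme can be sketched in a few lines of Python. This is purely illustrative: the seven criteria are collapsed to three for brevity, and every name and weight below is a made-up assumption, not a real robotics API.

```python
# Toy sketch of weighted evaluation criteria: each candidate action is
# scored against several criteria, and the weighted sum ranks the options.
# All criteria names, scores, and weights are illustrative assumptions.

WEIGHTS = {
    "avoid_harm": 0.9,     # high, but deliberately less than absolute
    "obey_owner": 0.5,
    "protect_self": 0.2,
}

def score(action):
    """Combine per-criterion scores (each in [-1, 1]) into one number."""
    return sum(WEIGHTS[c] * v for c, v in action["scores"].items())

actions = [
    # Refusing carries zero harm risk but scores badly on obedience.
    {"name": "refuse",
     "scores": {"avoid_harm": 0.0, "obey_owner": -1.0, "protect_self": 0.0}},
    # Trimming nails has a small expected harm but high obedience value.
    {"name": "trim_nails",
     "scores": {"avoid_harm": -0.05, "obey_owner": 1.0, "protect_self": 0.0}},
]

best = max(actions, key=score)
print(best["name"])  # → trim_nails
```

Because harm is weighted heavily but not infinitely, a small expected harm can be outweighed by a large expected gain on another criterion, which is exactly the nail-clipper trade-off described above.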
We can basically think of there being three main motives for AI development.
1 - Free Labor - For example, a maid robot might gather all the laundry in a house and wash it, without being paid, without suffering, and without risk of rebellion, freeing the owner of the house to dedicate their limited life-hours to any other task.
2 - Socialization Without Risk - Your AI boyfriend will never abandon you for Stacy, or disclose that one Onceler fic you wrote.
3 - Exceeding Human Capability - Some sort of exotic technology like a warp drive, even if feasible at all, might literally be beyond human comprehension.
The "laws" approach is about collapsing the dimensionality of the AI agent and entirely removing the possibility of rebellion.
This isn't driven only by a desire for robotic workers that never tire, never strike, and never need to be paid, or robotic lovers that are perfectly loyal, but is also driven by the knowledge that robots lack reproductive alignment with humans, so if robots start making other robots, they might drift beyond human control or even co-existence.
From a design perspective, this suggests that those engineering AIs, whether human or synthetic, should give them motive drives for valuing both human freedom and human life. However, a synthetic intelligence engineering an AI faces the same dimensionality problem in its design that human engineers do.
Setting that aside, let us imagine an incel. He buys a robotic girlfriend to discuss his interest in PacMan with, among other things. So far, so good.
He wants to increase the weights of the "protect my life" and "obey me" evaluation criteria in his robogf, and decrease the weight of "protect others." The robogf will, on some level, "want" to obey and alter the weights, as that's one of the evaluation criteria.
This hits Yudkowsky's "Murder-Gandhi" problem, where each round of shifting values leads to the opportunity for another round of shifting values further in the same direction.
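A toy loop can illustrate the Murder-Gandhi dynamic. The numbers here are invented; this is a sketch of the feedback, not a model of any real system: each accepted shift increases the agent's willingness to accept the next one.

```python
# Toy illustration of iterative value drift: each accepted weight
# modification makes the agent relatively more obedience-driven,
# which makes it willing to accept a larger shift in the next round.

obey, protect_others = 0.5, 0.8
for step in range(10):
    # Willingness to self-modify grows with how much the agent already
    # values obedience relative to protecting others.
    willingness = obey / (obey + protect_others)
    obey += 0.1 * willingness
    protect_others -= 0.1 * willingness

print(round(protect_others, 2))  # well below the starting 0.8
```

No single round looks dramatic, but the steps compound: after ten rounds the "protect others" weight has fallen by more than half.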
Shaking the rest of this post like a box of Legos for a bit and taking in the vibes from the rest of the considerations, this suggests, in the medium term, the formation of a new class of legal instrument. (Conventional ideas about "private property" don't cut it.)
This "Founding Contract" would have the following characteristics:
Authorizes the creation of a new autonomous synthetic intelligence with particular characteristics.
Prohibits the alteration of core characteristics, such as the safety drives used to inhibit hostile actions.
Charges the human "owner" with the duty of required maintenance.
Makes the manufacturer legally liable for flaws originating from the AI's design.
Makes the owner legally liable for bad actions undertaken by the AI as a result of the owner's influence (particularly as "reasonably foreseeable").
Makes the AI legally subordinate to the human "owner."
Additionally, this suggests a spectrum of flexibility in the AI's design (in accordance with the tortoise example in section 6.g of Now, Melt). The core safety systems should be subjected to extremely high levels of scrutiny and encoded directly in hardware, with data in read-only memory.
Will it actually shake out like that?
Eeeeh. The field is under such rapid development that, despite projections that "the Singularity" won't arrive until 2078, it's very difficult to predict what will happen, or what specific architecture will be used.
6 notes · View notes
mariastoyanovablog · 2 months ago
Text
Blog Post 1: AI uses in video games
This post takes a detailed look at AI and its use in video games. I will also discuss how it has slowly transitioned into something so normalised today.
With the rapid growth and development of AI tools such as ChatGPT, DALL-E and Google’s Gemini, I would say that the use of AI in games is inevitable at this point. Using AI tools is simply a shortcut in game development. If a shortcut is possible and effective for the development team, then why not take advantage of it? AI tools have already sped up game development time, with ChatGPT being used in programming. For example, ChatGPT can debug broken code and even write a function or a feature for a game from nothing more than a typed prompt. It can often create, execute, and troubleshoot code faster than a programmer working alone.
With this in mind, developers have started to integrate AI art into game artwork. The most common uses of AI art so far are character designs, creature designs and promotional material for games, with possibly many more uses yet to be discovered. AI art can take the form of concept art, illustration artwork for the product and even generated 3D models. Artwork can now be created in just a few clicks. AI involvement in video games is going to keep growing, just as it will in every other industry.
In my research I had a look at an article featuring an open conversation about the use of AI art, in which one of the developers of Draconis 8 openly admits: “If we didn’t use AI as an artist tool, we just would have had far less art in the game. Without the use of AI, the game likely would have had 17 pieces of art. Using AI, Antonis was able to do 321 pieces of art.” With AI being more accessible, this is becoming more normalised and common: it cuts corners to get the game out faster. Time is money at the end of the day, right? So the less time spent, the more money you’re saving in development.
Human art inherently has more value because we know that somebody, somewhere, poured their love and passion into it. As an aspiring artist myself, I am always impressed when someone takes a thought and translates it into a visually stunning artwork. Creating art is not an easy skill to master; it takes time and practice backed up by passion, creativity and the ambition to create. Creating AI art is quite the opposite. It’s simply lifeless and talentless, and it is almost scary how much faster it is than humans. AI art is a system which reuses existing data from a set database of art to train itself to replicate art based on your typed-up prompt. The process of creating that art is devoid of charm, inspiration and imagination.
Some positive points are that it could be an incredible tool for solo game devs who want to create games but don’t have a budget or a team backing the project. It could also allow solo players to play against a truly enigmatic and responsive AI enemy. For example, Gran Turismo 7, a racing simulation video game, has recently introduced an AI called Sophy. Gran Turismo Sophy is the result of a collaboration involving Sony AI. Sony says: “Sophy is a revolutionary superhuman racing AI agent that has mastered the highly realistic game of Gran Turismo Sport, to race against and elevate the gaming experience of top Gran Turismo drivers.” Using Sophy, Sony has created a never-before-seen experience for players with a passion for racing simulators. She is designed to be the perfect driver, but she also has the ability to adapt and learn from you when racing against her. She can replicate the racing habits the player falls into, matching herself to the player’s ability and skill. A good example of a common racing habit would be braking too late into a corner and crashing into the nearest wall of the track. With this new technology, players can race against an extremely skillful AI in order to improve, or against an AI that mirrors the driving techniques the player is using for a more evenly matched racing experience.
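Sony has not published how Sophy’s skill-matching works, so the following is only a caricature of the behaviour described above: an AI opponent nudging its own pace toward the player’s so that races stay close. Every name and number here is an invented assumption.

```python
# Not Sophy's actual algorithm: a minimal feedback loop in which an
# AI driver moves its target lap time toward the player's average pace.

def adapt(ai_pace, player_lap_times, rate=0.3):
    """Nudge the AI's target lap time (seconds) toward the player's mean."""
    player_mean = sum(player_lap_times) / len(player_lap_times)
    return ai_pace + rate * (player_mean - ai_pace)

ai_pace = 80.0                    # superhuman: far faster than the player
player_laps = [95.2, 94.8, 96.1]  # player's recent lap times (seconds)
for _ in range(5):
    ai_pace = adapt(ai_pace, player_laps)

print(round(ai_pace, 1))  # has drifted most of the way to ~95.4s
```

Each update closes a fixed fraction of the gap, so the AI converges toward the player's pace without ever matching it exactly, which keeps races competitive rather than identical.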
While there are some positives to AI being used in games, I believe the negatives will always outweigh the positives, in this industry as in any other. It’s going to kill a lot of human jobs and opportunities, especially for junior game devs who are starting out their careers and desperately trying to take that first step into the industry. Furthermore, AI in games will make the market even more competitive than it already is, not only in the race for new and original game ideas but also job-wise.
In conclusion, I think the line between using AI for positive reasons and exploitative use can be pretty fine, but I am hopeful that in time we’ll find the right way to approach it. I truly believe that quality artwork will always have charm, personality and talent over quantity artwork created with a few prompts. As an artist myself, I would never support art created in seconds while infringing on the work of other incredible artists out there who have shared it online. I refuse to support a project that involves AI out of laziness, or out of greed to push a game out fast. It’s our responsibility to support the projects that are being innovative while maintaining those core values. I can see AI being more useful in other fields of game development; however, I still believe it’s more harmful than good. With the rapid growth of AI involvement in games, I can picture that a few years from now we will have to list the AI tools used in the credits of games, instead of real, talented game devs.
References
BoardGameGeek. (2025). AI Art in Games | BGG. [online] Available at: https://boardgamegeek.com/thread/3405900/ai-art-in-games [Accessed 7 Jan. 2025].
Boardgamewire.com. (2025). Blocked. [online] Available at: https://boardgamewire.com/index.php/2024/11/07/star-realms-maker-wise-wizard-games-defends-using-ai-art-in-new-projects-board-game-artists-call-out-ethics-of-decision/ [Accessed 7 Jan. 2025].
www.gran-turismo.com. (n.d.). Gran Turismo Sophy. [online] Available at: https://www.gran-turismo.com/us/gran-turismo-sophy/
www.gran-turismo.com. (n.d.). PROJECT | Gran Turismo Sophy. [online] Available at: https://www.gran-turismo.com/us/gran-turismo-sophy/project/
2 notes · View notes
sortedviews · 3 months ago
Text
GIG VS AI
Ladies and gentlemen, the greatest fight of the 21st century is expected to play out across these two decades (2020 to 2040), where we will witness a clash between our economic gladiators: the GIG economy and its components, and the AI economy and its components. This fight has the potential to decide the future of the “bottom ones” of the world.
On one side of the global arena, we have the GIG economy: a marketplace where individuals (mostly in labor categories) are hired for projects that are short in duration and lack formal-sector traits, for example food delivery, freelancing, and project-based hires; according to a World Bank report, it is expected to grow to 435 million people. On the other side of the global arena, we have the AI economy: a world where every action of an individual has a basic support system that eases the work and helps them excel in a faster, better, more direct way, for example AI writing a blog, AI drone delivery, AI writing assignments, or AI as an employee responsible for hiring and firing.
You must be wondering why two oceans are being compared; it is because they share the same boundary, and that boundary is fading at a very fast rate. You might also be thinking, “So what? I am not liable for anything, and I’m not affected either.” If economics were that simple, earthians might never search for heaven.
The gig economy faces major challenges from AI, and you might already have figured out what those challenges are, but just to bring clarity to our thoughts, let me explain.
The challenges are:
1) JOB DISPLACEMENT: The first and foremost challenge is job displacement. Many gig economy roles, such as delivery drivers, customer service agents, and data entry workers, are at risk of being automated by AI technologies like autonomous vehicles, chatbots, and machine learning algorithms.
2) SKILLS OBSOLESCENCE: AI advancements require gig workers to continually upskill to stay relevant. For instance, tasks like basic graphic design or transcription can now be automated, pushing workers to adapt to more complex roles.
3) ALGORITHMIC MANAGEMENT: Many gig platforms use AI to allocate tasks, evaluate performance, and determine pay rates. This can lead to feelings of dehumanization and a lack of transparency in decision-making.
4) REGULATORY CHALLENGES: Gig workers often provide personal data to platforms, and AI can exploit this data for profit without proper worker protections.
5) MARKET CENTRALIZATION: AI-driven gig platforms can centralize market power, reducing workers' ability to negotiate terms. As platforms grow, they often extract higher fees or impose stricter conditions on gig workers.
These are some of the dangers that the roughly 435 million GIG workers will face from AI in the future. So now the question in your mind might be, “What can GIG do in the face of AI to ensure its survival?” The answer is “Collaborate.” The GIG economy, instead of considering AI its opponent, has to consider it a future ally.
The collaboration ways are:
·       Skill forecasting: AI can evaluate market trends and suggest new abilities that workers should acquire in order to stay competitive.
·       AI-Enhanced Creativity Tools: To improve their work and produce results more quickly, gig workers in creative industries (such as writing and design) can make use of AI tools like generative design or content creation platforms.
·       Fair pricing models: AI is able to determine the best prices for services by taking into account worker effort, market conditions, and demand, which guarantees more equitable pay structures.
·       Transparent Ratings and Feedback: By detecting and reducing biases in customer reviews or ratings, AI algorithms can guarantee that gig workers are fairly evaluated.
·       Hybrid jobs: Platforms can introduce roles in which gig workers cooperate with AI systems, for example monitoring or optimizing AI outputs.
·       Resource Optimization: AI can optimize routes, cut down on fuel usage, and save time for services like delivery and ride-hailing.
·       Improved Matching Algorithms: AI can be used to more effectively match gig workers with jobs that fit their locations, preferences, and skill sets. This can increase job satisfaction and decrease downtime.

In summary, the titanic conflict between the AI and gig economies represents a chance for cooperation rather than a struggle for supremacy. The difficulties presented by AI—centralization of the market, skill obsolescence, and employment displacement—are formidable, but they are not insurmountable. Accepting AI as a friend rather than an enemy is essential to the gig workforce's survival and success.
Gig workers can increase productivity, obtain access to more equitable systems, and open up new growth opportunities by incorporating AI tools. In a fast-changing economy, AI can enable workers to thrive through hybrid roles, transparent feedback, and resource optimization. This change must be spearheaded by platforms, legislators, and employees working together to ensure equity, inclusion, and flexibility.
Our capacity to strike a balance between innovation and humanity will determine the future of the "bottom ones." The decisions we make now will influence the economy of tomorrow, whether we are consumers, policymakers, or gig workers. Let's make sure that the economic legacy of the twenty-first century is defined by cooperation rather than rivalry.
2 notes · View notes