# Agent-based AI model
Text
Why Gartner's Data Fabric Graphic Puts The Horse Before The Cart
How does the old technology phrase "garbage-in, garbage-out" apply to Gartner's Data Fabric post?
Here is the link to today’s Gartner post on LinkedIn regarding the Data Fabric graphic. My comment is below. I will use these three terms: agent-based model, metaprise, and then – only then, as you call it, data fabric. Without the first two being in place, the data fabric map described above is incomplete and has limited value. Everything begins at the point of new data inputs. A major flaw is…
0 notes
Text
Amazon Bedrock gains new AI models, tools, and features
New Post has been published on https://thedigitalinsider.com/amazon-bedrock-gains-new-ai-models-tools-and-features/
Amazon Bedrock gains new AI models, tools, and features
Amazon Web Services (AWS) has announced improvements to bolster Bedrock, its fully managed generative AI service.
The updates include new foundational models from several AI pioneers, enhanced data processing capabilities, and features aimed at improving inference efficiency.
Dr Swami Sivasubramanian, VP of AI and Data at AWS, said: “Amazon Bedrock continues to see rapid growth as customers flock to the service for its broad selection of leading models, tools to easily customise with their data, built-in responsible AI features, and capabilities for developing sophisticated agents.
“With this new set of capabilities, we are empowering customers to develop more intelligent AI applications that will deliver greater value to their end-users.”
Amazon Bedrock expands its model diversity
AWS is set to become the first cloud provider to feature models from AI developers Luma AI and poolside, while also incorporating Stability AI’s latest release.
Through its new Amazon Bedrock Marketplace, customers will have access to over 100 emerging and specialised models from across industries, ensuring they can select the most appropriate tools for their unique needs.
Luma AI’s Ray 2
Luma AI, known for advancing generative AI in video content creation, brings its next-generation Ray 2 model to Amazon Bedrock. This model generates high-quality, lifelike video from text or image inputs, allowing organisations to create detailed outputs in fields such as fashion, architecture, and graphic design. As the first provider to offer the model, AWS lets businesses experiment with new camera angles, cinematographic styles, and consistent characters in a frictionless workflow.
poolside’s malibu and point
Designed to address challenges in modern software engineering, poolside’s models – malibu and point – specialise in code generation, testing, documentation, and real-time code completion. Importantly, developers can securely fine-tune these models using their private datasets. Accompanied by Assistant – an integration for development environments – poolside’s tools allow engineering teams to accelerate productivity, ship projects faster, and increase accuracy.
Stability AI’s Stable Diffusion 3.5 Large
Amazon Bedrock customers will soon gain access to Stability AI’s text-to-image model Stable Diffusion 3.5 Large. This addition supports businesses in creating high-quality visual media for use cases in areas like gaming, advertising, and retail.
Through the Bedrock Marketplace, AWS also enables access to over 100 specialised models. These include solutions tailored to fields such as biology (EvolutionaryScale’s ESM3 generative model), financial data (Writer’s Palmyra-Fin), and media (Camb.ai’s text-to-audio MARS6).
Zendesk, a global customer service software firm, leverages Bedrock’s marketplace to personalise support across email and social channels using AI-driven localisation and sentiment analysis tools. For example, they use models like Widn.AI to tailor responses based on real-time sentiment in customers’ native languages.
Scaling inference with new Amazon Bedrock features
Large-scale generative AI applications require balancing the cost, latency, and accuracy of inference processes. AWS is addressing this challenge with two new Amazon Bedrock features:
Prompt Caching
The new caching capability reduces redundant processing of prompts by securely storing frequently used queries, saving on both time and costs. This feature can lead to up to a 90% reduction in costs and an 85% decrease in latency. For example, Adobe incorporated Prompt Caching into its Acrobat AI Assistant to summarise documents and answer questions, achieving a 72% reduction in response times during initial testing.
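The underlying idea is simple to sketch. Below is a toy, in-memory illustration of prompt caching in Python; this is a hypothetical sketch for intuition only, since Bedrock's actual caching is managed server-side rather than exposed as a class like this:

```python
import hashlib


class PromptCache:
    """Toy prompt cache: reuse stored results for repeated prompts
    instead of re-running an expensive model call."""

    def __init__(self):
        self._store = {}

    def _key(self, prompt: str) -> str:
        # Hash the prompt so arbitrarily long text maps to a fixed-size key
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get_or_compute(self, prompt: str, compute):
        key = self._key(prompt)
        if key in self._store:
            return self._store[key], True   # cache hit: no recomputation
        result = compute(prompt)
        self._store[key] = result
        return result, False                # cache miss: computed and stored


cache = PromptCache()
doc = "Very long contract text ..."
calls = []

def summarise(text):
    calls.append(text)                      # stand-in for an expensive model call
    return f"summary ({len(text)} chars)"

first, hit1 = cache.get_or_compute(doc, summarise)
second, hit2 = cache.get_or_compute(doc, summarise)
# the second lookup is a cache hit: the expensive call ran only once
```

The cost and latency savings the article cites come from exactly this effect at scale: repeated prompt prefixes (a long document queried many times, say) skip the expensive processing step.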
Intelligent Prompt Routing
This feature dynamically directs prompts to the most suitable foundation model within a family, optimising results for both cost and quality. Customers such as Argo Labs, which builds conversational voice AI solutions for restaurants, have already benefited. While simpler queries (like booking tables) are handled by smaller models, more nuanced requests (e.g., dietary-specific menu questions) are intelligently routed to larger models. Argo Labs' usage of Intelligent Prompt Routing has not only improved response quality but also reduced costs by up to 30%.
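Conceptually, a prompt router runs a cheap classification step before invoking any model. The sketch below is illustrative only: the word-count threshold, keyword list, and model names are made-up assumptions, not Bedrock's actual (non-public) routing logic:

```python
def route_prompt(prompt: str, complexity_threshold: int = 12) -> str:
    """Toy router: short, simple prompts go to a small model; wordy or
    reasoning-heavy ones go to a large model. Threshold, keywords, and
    model names are all illustrative assumptions."""
    wordy = len(prompt.split()) > complexity_threshold
    needs_reasoning = any(
        w in prompt.lower() for w in ("why", "explain", "compare", "dietary")
    )
    return "large-model" if (wordy or needs_reasoning) else "small-model"


print(route_prompt("Book a table for two at 7pm"))                       # small-model
print(route_prompt("Explain which menu items are gluten-free and why"))  # large-model
```

A production router would typically use a small classifier model rather than keyword heuristics, but the cost logic is the same: only pay for the large model when the query warrants it.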
Data utilisation: Knowledge bases and automation
A key attraction of generative AI lies in its ability to extract value from data. AWS is enhancing its Amazon Bedrock Knowledge Bases to ensure organisations can deploy their unique datasets for richer AI-powered user experiences.
Using structured data
AWS has introduced capabilities for structured data retrieval within Knowledge Bases. This enhancement allows customers to query data stored across Amazon services like SageMaker Lakehouse and Redshift using natural-language prompts, which are automatically translated into SQL queries. Octus, a credit intelligence firm, plans to use this capability to provide clients with dynamic, natural-language reports on its structured financial data.
GraphRAG integration
By incorporating automated graph modelling (powered by Amazon Neptune), customers can now generate and connect relational data for stronger AI applications. BMW Group, for instance, will use GraphRAG to augment its virtual assistant MAIA. This assistant taps into BMW’s wealth of internal data to deliver comprehensive responses and premium user experiences.
Separately, AWS has unveiled Amazon Bedrock Data Automation, a tool that transforms unstructured content (e.g., documents, video, and audio) into structured formats for analytics or retrieval-augmented generation (RAG). Companies like Symbeo (automated claims processing) and Tenovos (digital asset management) are already piloting the tool to improve operational efficiency and data reuse.
The expansion of Amazon Bedrock’s ecosystem reflects its growing popularity, with the service recording a 4.7x increase in its customer base over the last year. Industry leaders like Adobe, BMW, Zendesk, and Tenovos have all embraced AWS’s latest innovations to improve their generative AI capabilities.
Most of the newly announced tools – such as inference management, Knowledge Bases with structured data retrieval, and GraphRAG – are currently in preview, while notable model releases from Luma AI, poolside, and Stability AI are expected soon.
See also: Alibaba Cloud overhauls AI partner initiative
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, Amazon, amazon web services, artificial intelligence, aws, bedrock, models
0 notes
Text
There is no such thing as AI.
How to help the non technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an ai image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out, those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence, like Data from Star Trek or the Terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): is an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive, and instead the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. (This is the basis of most technology people call AI.)
Language model (LM; or LLM for large language models): is a probabilistic model of a natural language that can generate probabilities of a series of words, based on text corpora in one or multiple languages it was trained on. (This would be your ChatGPT.)
Generative adversarial network (GAN): is a class of machine learning frameworks and a prominent approach to generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion models: models that learn the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images with added Gaussian noise. After training is complete, it can be used for image generation by starting from a random noise image and progressively denoising it. (This is the more common technology behind AI images, including DALL-E and Stable Diffusion. I added this one to the post after it was brought to my attention that it is now more common than GANs.)
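To make the "probabilities of a series of words" idea concrete, here is a minimal bigram language model in Python, a toy cousin of what the large models do at vastly greater scale:

```python
from collections import Counter, defaultdict


def train_bigram(corpus):
    """Count word-pair frequencies, then normalise each row into
    conditional probabilities P(next word | current word)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
    return {
        cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for cur, nxts in counts.items()
    }


model = train_bigram(["the cat sat", "the cat ran", "the dog ran"])
# P(cat | the) = 2/3, because "the" is followed by "cat" twice and "dog" once
print(model["the"]["cat"])
```

Real language models replace the counting table with a neural network and condition on far more than one preceding word, but the output is the same kind of thing: a probability distribution over the next token.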
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
·
View notes
Text
Yandere! Android x Reader (I)
It is the future and you have been tasked to solve a mysterious murder that could jeopardize political ties. Your assigned partner is the newest android model meant to assimilate human customs. You must keep his identity a secret and teach him the ways of earthlings, although his curiosity seems to be reaching inappropriate extents.
Yes, this is based on Asimov’s “Caves of Steel” because Daneel Olivaw was my first ever robot crush. I also wanted a protagonist that embraces technology. :)
Content: female reader, AI yandere, 50's futurism
[Part 2] | [More original works]
You follow after the little assistant robot, a rudimentary machine invested with basic dialogue and spatial navigation. It had caused quite the ruckus when first introduced. One intern - well liked despite being somewhat clumsy at his job - was sadly let go as a result. Not even the Police is safe from the threat of AI, is what they chanted outside the premises.
"The Commissioner has summoned you, (Y/N)."
That's how it greeted you earlier, clacking its appendage against the open door in an attempt to simulate a knock.
"Do you know why my presence is needed?" You inquire and wait for the miniature AI to scan the audio message.
"I am not allowed to mention anything right now." It finally responds after agonizing seconds.
It's an alright performance. You might've been more impressed by it, had you not witnessed firsthand the Spacer technology that could put any modern invention here on Earth to shame. Sadly, the people down here are very much against artificial intelligence. There have been multiple protests recently, like the one in front of your building, condemning the latest government suggestion regarding automation. People fear for their jobs and safety, and you don't necessarily blame them for having self-preservation. On the other hand, you've always been a supporter of progress. As a child you devoured any science fiction book you could get your hands on, and now, as a high-ranking police detective, you still manage to sneak away and scan over articles and news involving the race for ever more efficient computers.
You close the door behind you and the Commissioner puts his fat cigarette out, twisting the remains into the ashtray with monotonous movements as if searching for the right words.
"There's been a murder." Is all he settles on saying, throwing a heavy folder in your direction. A hologram or tablet might've been easier to catch, but the man, like many of his coworkers, shares a deep nostalgia for the old days.
You flip through the pages and eventually furrow your eyebrows.
"This would be a disaster if it made it to the news." You mumble and look up at the older man. "Shouldn't this go to someone more experienced?"
He twiddles with his grey mustache and glances out the fake window.
"It's a sensitive case. The Spacers are sending their own agent to collaborate with us. What stands out to you?"
You narrow your eyes and focus on the personnel sheet. What's there to cause such controversy? Right before giving up and setting the page aside, you finally notice it: next to the Spacer officer's name, printed clearly in black ink, is a little "R.", a commonly used abbreviation indicating a robot. The chief must've noticed your startled reaction and continues, satisfied:
"You understand, yes? They're sending an android. Supposedly it replicates a human perfectly in terms of appearance, but it does not possess enough observational data. Their request is that whoever partners up with him will also house him and let him follow along for the entirety of the mission. You're the only one here openly supporting those tin boxes. I can't possibly ask one of your higher ups, men with wives and children, to...you know...bring that thing in their house."
You're still not sure whether to be offended by the fact that your comfort seems to be of less priority compared to other officers. Regardless of the semantics, you're presently standing at the border between Earth and the Spacer colony, awaiting your case partner. A man emerges from behind a security gate. He's tall, with handsome features and an elegant walk. He approaches you and you reach for a handshake.
"Is the android with you?" You ask, a little confused.
"Is this your first time seeing a Spacer model?" He responds, relaxed. "I am the agent in your care. There is no one else."
You take a moment to process the information, similar to the primitive machine back at your office. Could it be? You've always known that Spacer technology is years ahead, but this surpasses your wildest dreams. There is not a single detail hinting at his mechanical fundament. The movement is fluid, the speech is natural, the design is impenetrable. He lifts the warm hand he'd used for the handshake and gently presses a finger against your chin in an upwards motion. You find yourself involuntarily blushing.
"Your mouth was open. I assumed you'd want it discreetly corrected." He states, factually, with a faint smile on his lips. Is he amused? Is such a feeling even possible? You try your best to regain some composure, adjusting the collar of your shirt and clearing your throat.
"Thank you and please excuse my rudeness. I was not expecting such a flawless replica. Our assistants are...easily recognizable as AI."
"So I've been told." His smile widens and he checks his watch. You follow his gesture, still mesmerized, trying to find a single indicator that the man standing before you is indeed a machine, a synthetic product.
Nothing.
"Shall we?" He eyes the exit path and you quickly lead him outside and towards public transport.
He patiently waits for your fingerprint scan to be complete. You almost turn around and apologize for the old, lagging device. As a senior detective, you have the privilege of living in the more spacious, secured quarters of the city. And, since you don't have a family, the apartment intended for multiple people looks more like a luxury abode. Still, compared to the advanced ways of the Spacers, this must feel like poverty to the android.
At last, the scanner beeps and the door unlocks.
"Heh...It's a finicky model." You mumble and invite him in.
"Yes, I'm familiar with these systems." He agrees with you and steps inside, unbuttoning his coat.
"Oh, you've seen this before?"
"In history books."
You scratch your cheek and laugh awkwardly, wondering how much of his knowledge about the current life on Earth is presented as a museum exhibit when compared to Spacer society.
"I'm going to need a coffee. I guess you don't...?" Your words trail as you await confirmation.
"I would enjoy one as well, if it is not too much to ask. I've been told it's a social custom to 'get coffee' as a way to have small talk." The synthetic straightens his shirt and looks at you expectantly.
"Of course. I somehow assumed you can't drink, but if you're meant to blend in with humans...it does make sense you'd have all the obvious requirements built in."
He drags a chair out and sits at the small table, legs crossed.
"Indeed. I have been constructed to have all the functions of a human, down to every detail."
You chuckle lightly. Well, not like you can verify it firsthand. The engineers back at the Spacer colony most likely didn't prepare him for matters considered unnecessary.
"I do mean every detail." He adds, as if reading your mind. "You are free to see for yourself."
You nearly drop the cup in your flustered state. You hurry to wipe the coffee that spilled onto the counter and glance back at the android, noticing a smirk on his face. What the hell? Are they playing a prank on you and this is actually a regular guy? Some sort of social experiment?
"I can see they included a sense of humor." You manage to blurt out, glaring at him suspiciously.
"I apologize if I offended you in any way. I'm still adjusting to different contexts." The android concludes, a hint of mischief remaining on his face. "Aren't rowdy jokes common in your field of work?"
"Uh huh. Spot on." You hesitantly place the hot drink before him.
Robots on Earth have always been built for the purpose of efficiency. Whether or not a computer passes the Turing Test is irrelevant as long as it performs its task in the most optimal, rational way. There have been attempts, naturally, to create something indistinguishable from a human, but utility has always taken precedence. It seems that Spacers think differently. Or perhaps they have reached their desired level of performance a long time ago, and all that was left was fiddling with aesthetics. Whatever the case is, you're struggling not to gawk in amazement at the man sitting in your kitchen, stirring his coffee with a bored expression.
"I always thought - if you don't mind my honesty - that human emotions would be something to avoid when building AI. Hard to implement, even harder to control and it doesn't bring much use."
"I can understand your concerns. However, let me reassure you, I have a strict code of ethics installed in my neural networks and thus my emotions will never lead to any destructive behavior. All safety concerns have been taken into consideration.
As for why...How familiar are you with our colony?" The android takes a sip of his coffee and nods, expressing his satisfaction. "Perhaps you might be aware, Spacers have a declining population. Automated assistants have been part of our society for a long time now. What's lacking is humans. If the issue isn't fixed, artificial humans will have to do."
You scoff.
"What, us Earth men aren't good enough to fix the birth rates? They need robots?"
You suddenly remember the recipient of your complaint and mutter an apology.
"Well, I'm sure you'd make a fine contender. Sadly I can't speak for everyone else on Earth." The man smiles in amusement upon seeing the pale red that's now dusting your cheeks, then continues: "But the issue lies somewhere else. Spacers have left Earth a long time ago and lived in isolation until now. Once an organism has lost its immune responses to otherwise common pathogens, it cannot be reintegrated."
True. Very few Earth citizens are allowed to enter the colony, and only do so after thorough disinfection stages, proving they are disease free as to not endanger the fragile health of the Spacers living in a sterile environment. You can only imagine the disastrous outcome if the two species were to abruptly mingle. In that case, equally sterile machinery might be their only hope.
Your mind wanders to the idea. Dating a robot...How's that? You sheepishly gaze at the android and study his features. His neatly combed copper hair, the washed out blue eyes, the pale skin. Probably meant to resemble the Spacers. You shake your head.
"A-anyways, I'll go and gather all the case files I have. Then we can discuss our first steps. Do feel at home."
You rush out and head for your office. Focus, you tell yourself mildly annoyed.
While you search for the required paperwork - what a funny thing to say in this day and age - he will certainly take you up on your generous offer to make himself comfortable. The red-haired man enters the living room, scanning everything with curious eyes. He stops in front of a digital frame and slides through the photos. Ah, this must be your Police Academy graduation. The year matches the data he's received on you. Data files he might've read one too many times in his unexplained enthusiasm. This should be you and the Commissioner; he doesn't match the description of your father, and he seems too old to be a spouse or boyfriend. Additionally, the android distinctly recalls the empty 'Relationship' field.
"Old photos are always a tad embarrassing. I suppose you skipped that stage."
He jolts almost imperceptibly and faces you. You have returned with a thin stack of papers and a hologram projector.
"I've digitalized most files I received, so you don't have to shuffle a bunch of paper around." You explain.
"That is very useful, thank you." He gently retrieves the small device from your hand, but takes a moment before removing his fingers from yours. "I predict this will be a successful partnership."
You flash him a friendly smile and gesture towards the seating area.
"Let's get to work, then. Unless you want to go through more boring albums." You joke as you lower yourself onto the plush sofa.
The synthetic human joins you at an unexpectedly close proximity. You wonder if proper distance differs among Spacers or if he has received slightly erroneous information about what makes a comfortable rapport.
"Nothing boring about it. In fact, I'd say you and I are very similar from this point of view." He tells you, placing the projector on the table.
"Oh?"
"Your interest in technology and artificial intelligence is rather easy to infer." The man continues, pointing vaguely towards the opposing library. "Aside from the briefing I've already received about you, that is."
"And that is similar to...the interest in humans you've been programmed to have?" You interject, unsure where this conversation is meant to lead.
"Almost."
His head turns fully towards you and you stare back into his eyes. From this distance you can finally discern the first hints of his nature: the thin disks shading the iris - possibly CCD sensors - are moving in a jagged, mechanical manner. Actively analyzing and processing the environment.
"I wouldn't go as far as to generalize it to all humans.
Just you."
#yandere#yandere x darling#yandere x reader#yandere x you#yandere male#male yandere#male yandere x reader#yandere robot#yandere android#robot x human#android x reader#robot x reader#yandere scenarios#yandere imagines#yandere oc#yandere original character#yandere imagine#yandere fic
3K notes
·
View notes
Text
...EXPERIMENT: BEGIN ! I commend you for finding this file. In the chance of my death, I must ask you continue to document ASU-NARO agents. Do whatever you must to extract our desired results. Don't worry—they've already signed away their lives.
{ This is an interactive ask blog, set one year prior to the Death Game! Run by @faresong }
☕️ KOA MYOJIN ;; Adopted heir of the Hiyori/Myojin Branch. Japanese/Vietnamese; 11 years.
KOA MYOJIN is the replacement heir for Hinako Mishuku, Myojin's biological granddaughter. Being raised with this knowledge hanging over her head has resulted in a rather cynical mindset wherein she views those around her, up to and including herself, as pieces in a larger game. A mindset reinforced by Mr. Chidouin in Myojin's absence, for he had faith in her where Myojin did not—seeing her solely as a mandatory last-resort to continue his reign of power. But of course, even a pawn can become a queen.
🎃 RIO RANGER (LAIZER PROJECT) ;; Experiment of the Gotō Branch. Doll, Japanese model; 20 years.
RIO is an experimental project spearheaded by Gashu Satou simulating the deceased Yoshimoto heir. It was initiated with its basic personality, and to compensate for its limited emotional range, this iteration of AI technology was granted a much more adaptive program compared to M4-P1. As such, he has taken to mimicry of the researchers who surround him in all their crudest forms. Despite denouncing humanity, his development has certainly been typical of one. The candidate AIs are proven promising.
🐉 SOU HIYORI ;; Heir of the Hiyori/Myojin Branch. Japanese; 22 years.
SOU HIYORI is the heir of the Family and inherently quite skilled at keeping appearances—only if it benefits him. He obeys Asunaro with the sneer of someone who thinks himself something above it, and has recently taken great lengths to abandon its ruling through the rejection of his individual humanity. It is a bastardization which requires admirable resolve, but implies him to be a much larger threat if left unchecked. Thus, Mrs. Hiyori arranged plans for his execution on the day Myojin and herself are simultaneously incapacitated or dead.
🦋 MAPLE (ITERATION M4-P1) ;; Experiment of the Gotō & Hiyori Branch. Obstructor, Japanese model; 26 years.
MAPLE was the first Obstructor to be granted emotional programming, and is the final Obstructor to be decommissioned. However, this fate has been put on standby due to the new researchers' intrigue with her, insisting she exists as the base from which all other AI programs were spawned and must be archived properly. Until her execution, Maple tends to menial tasks within the laboratory where she resides and spends her idle time pining for Hiyori and wishing to learn more about humanity through the researchers who care for her.
🩸 KAI SATOU ;; Patriarch of the Gotō Branch. Japanese/Wa Chinese; 26 years.
KAI is a reserved patriarch whose reputation precedes him. Though once thought denounced, he's rumored nonetheless a controversial figure in Asunaro's midst—however, all can agree him to be a vengeful, resolute person lent the power of God.
💉 MICHIRU NAMIDA ;; Lieutenant of the Satou Family, Gotō Branch. Korean; 28 years.
MICHIRU is a revered researcher within Asunaro's newer ranks, having quickly risen to a position of respect for her ruthless pursuit of seizing humanity's destiny with her own two hands. Without being absorbed by the superficial desire for power, many recognize her dedicated state of mind to be reminiscent of the natural way Mrs. Hiyori assumed her role under Asunaro's whispers of guidance. There is importance in the fact that the Godfather's right hand regards her as a peer, where he otherwise dismisses his own kind by blood, by culture.
🫀 EMIRI HARAI ;; Lieutenant of the Satou Family, Gotō Branch. Japanese; 29 years.
EMIRI is a new researcher and serves as the connecting point between Asunaro's primary facility and civilian life. For all her resentment buried inside one-off remarks and festering within herself, she throws herself to her work with the drive of a passionate someone who has lost all else. Someone who perhaps hungered for life.
( ̄▽ ̄) MR. CHIDOUIN ;; Godfather. Japanese; 44 years.
MR. CHIDOUIN aligned himself with the Gotō Family's lost heir after his father's untimely death, uniting the two families in a manner he hoped would justify the suffering once inflicted upon them, but particularly his wife, who had been cast out by her own. Despite (or, as he claimed, because of) his being extremely capable of detaching to arrange the larger canvas upon which Asunaro's story is written, he takes a personal pride in being the one to groom and inevitably cull its important pieces.
⚰️ GASHU SATOU ;; Captain of the Satou Family, Gotō Branch. Japanese; 62 years.
GASHU is a remarkably candid researcher with a scrutinizing eye for detail. Despite regarding most with unrelenting cynicism, he places his remaining shreds of hope in a choice few. Whether they reinforce this worldview and finally break him is a decision entirely in their hands.
#speak#profiles#&. some useful tags ->#my art#plot#answers#;;#yttd#your turn to die#kimi ga shine#koa myojin#fake hinako mishuku#rio ranger#sou hiyori#kai satou#michiru namida#maple yttd#emiri harai#meister#gashu satou
71 notes
·
View notes
Text
So I've finally finished Agents of S.H.I.E.L.D., and not only did I enjoy the last three seasons way more than I thought I would, but I was not prepared for how delightfully unhinged the show became. Some of my favourite plot points included:
The female protagonist has a long-lost sister with superpowers who becomes the key to the entire crew returning to their original timeline through the Quantum Realm. Said long-lost sister is not introduced or even hinted at until the last five episodes of the entire show.
Half of one season takes place in a dystopian 2091 where the young twenty-something scientist couple not only meet their grandson who is the same age as them but said grandson returns to the present, becomes a series regular, and calls them "Nana" and "Bobo" in some Once Upon A Time worthy family tree shenanigans. Oh he also gets stuck in the 80s and claims he wrote Don't You (Forget About Me)
Phil Coulson dies in Season 5 because, in order to stop an evil AI-turned-human (turned evil because she got dumped by a small Scottish man), he has to become Ghost Rider. The entire season builds up to this in a way that makes it feel very much like the actor is stepping away from the show and retiring the character, only for them to cast Clark Gregg as an evil deity from another dimension in Season 6 and as a Life Model Decoy that may as well just be Phil Coulson in Season 7
Patton Oswalt plays multiple identical characters who all work for SHIELD. This is never fully explained.
“Mata Hari Calamari”
The final season is a decade-hopping gimmick with matching genre episodes that beat WandaVision to the punch
One character is a robot anthropologist who just wants to be best friends with the same small Scottish man. He has canonically been trained to perform in alien brothels and eventually becomes a bartender in the Crazy Canoe in 1955. He is one of the absolute best parts of the show.
One plot line follows said same Scottish Man and robot anthropologist as they get stranded in outer space with their only way home being to gamble in an alien casino while their friends attempt to rescue them but accidentally take LSD instead
"I found that bluffing was much easier if you kill someone and take their skin."
Area 51 is canonically a SHIELD base
#agents of shield#spoilers#agents of shield spoilers#marvel#mcu#honestly this show isn't perfect but it was much stronger than I was anticipating#esp as CA:TWS literally blew up their premise before their first season was even finished
409 notes
·
View notes
Text
Things That Are Hard
Some things are harder than they look. Some things are exactly as hard as they look.
Game AI, Intelligent Opponents, Intelligent NPCs
As you already know, "Game AI" is a misnomer. It's NPC behaviour, escort missions, "director" systems that dynamically manage the level of action in a game, pathfinding, AI opponents in multiplayer games, and possibly friendly AI players to fill out your team if there aren't enough humans.
Still, you can implement minimax with alpha-beta pruning for board games, pathfinding algorithms like A*, or simple planning/reasoning systems with relative ease. Even easier: you could just take an MIT-licensed library that implements a cool AI technique and put it in your game.
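To show how compact the core algorithm itself is, here is a minimal alpha-beta sketch over an abstract two-player game; the `moves`, `apply_move`, and `score` callbacks are hypothetical stand-ins for a game's actual rules:

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, score):
    """Minimax with alpha-beta pruning over an abstract two-player game."""
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, score))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the minimizing player will avoid this branch
                break
        return best
    best = float("inf")
    for m in legal:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True, moves, apply_move, score))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

The twenty lines above are the easy part; wiring those three callbacks to real level geometry and game mechanics is where the work is.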
So why is it so hard to add AI to games, or more AI to games? The first problem is integration of cool AI algorithms with game systems. Although games do not need any "perception" for planning algorithms to work, no computer vision, sensor fusion, or data cleanup, and no Bayesian filtering for mapping and localisation, AI in games still needs information in a machine-readable format. Suddenly you go from free-form level geometry to a uniform grid, and from "every frame, do this or that" to planning and execution phases and checking every frame if the plan is still succeeding or has succeeded or if the assumptions of the original plan no longer hold and a new plan is in order. Intelligent behaviour is orders of magnitude more code than simple behaviours, and every time you add a mechanic to the game, you need to ask yourself "how do I make this mechanic accessible to the AI?"
Some design decisions will just be ruled out because they would be difficult to get to work in a certain AI paradigm.
Even a game that is perfectly suited to AI techniques, like a turn-based, grid-based rogue-like with line-of-sight already implemented, can struggle to make use of learning or planning AI for NPC behaviour.
What makes advanced AI "fun" in a game is usually when the behaviour is at least a little predictable, or when the AI explains how it works or why it did what it did. What makes AI "fun" is when it sometimes or usually plays really well, but then makes little mistakes that the player must learn to exploit. What makes AI "fun" is interesting behaviour. What makes AI "fun" is game balance.
You can have all of those with simple, almost hard-coded agent behaviour.
Video Playback
If your engine does not have video playback, you might think that it's easy enough to add it by yourself. After all, there are libraries out there that help you decode and decompress video files, so you can stream them from disk, and get streams of video frames and audio.
You can just use those libraries, and play the sounds and display the pictures with the tools your engine already provides, right?
Unfortunately, no. The video is probably at a different frame rate from your game's frame rate, and the music and sound-effect playback in your game engine are probably not designed with syncing audio to a video stream in mind.
I'm not saying it can't be done. I'm saying that it's surprisingly tricky, and even worse, it might be something that can't be built on top of your engine, but something that requires you to modify your engine to make it work.
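One common approach, sketched here under the assumption that your engine lets you query the audio playback clock, is to treat audio as the master clock and pick video frames to match it, rather than counting game frames:

```python
def frame_for_time(audio_time_s, video_fps, total_frames):
    """Pick the video frame that matches the audio clock.

    Driving frame selection from the audio playback position (rather than
    counting game frames) keeps the picture in sync even when the game's
    frame rate differs from the video's, or when frames are dropped.
    """
    frame = int(audio_time_s * video_fps)
    return min(frame, total_frames - 1)  # clamp at the last frame
```

At 144 game frames per second, a 24 fps video simply shows each frame about six times; the sync problem reduces to having a trustworthy audio clock, which is exactly the part many engines don't expose.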
Stealth Games
Stealth games succeed and fail on NPC behaviour/AI, predictability, variety, and level design. Stealth games need sophisticated and legible systems for line of sight, detailed modelling of the knowledge-state of NPCs, communication between NPCs, and good movement/controls/game feel.
Making a stealth game is probably five times as difficult as a platformer or a puzzle platformer.
In a puzzle platformer, you can develop puzzle elements and then build levels. In a stealth game, your NPC behaviour and level design must work in tandem, and be developed together. Movement must be fluid enough that it doesn't become a challenge in itself, without stealth. NPC behaviour must be interesting and legible.
Rhythm Games
These are hard for the same reason that video playback is hard. You have to sync up your audio with your gameplay. You need some kind of feedback for when which audio is played. You need to know how large the audio lag, screen lag, and input lag are, both in frames, and in milliseconds.
You could try to counteract this by using certain real-time OS functionality directly, instead of using the machinery your engine gives you for sound effects and background music. You could try building your own sequencer that plays the beats at the right time.
Now you have to build good gameplay on top of that, and you have to write music. Rhythm games are the genre that experienced programmers are most likely to get wrong in game jams. They produce a finished and playable game, because they wanted to write a rhythm game for a change, but they get the BPM of their music slightly wrong, and everything feels off, more and more so as each song progresses.
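The core timing arithmetic is small; here is a hedged sketch of judging a hit against the beat grid (the function and parameter names are illustrative, and `input_lag_ms` would come from your own latency measurement):

```python
def beat_error_ms(hit_time_ms, first_beat_ms, bpm, input_lag_ms=0.0):
    """Signed distance (ms) from a player's input to the nearest beat.

    Subtracting measured input lag before snapping to the beat grid is
    what keeps judgements fair; getting bpm or first_beat_ms slightly
    wrong makes every song drift further off as it progresses.
    """
    beat_ms = 60_000.0 / bpm  # one beat at 120 BPM is 500 ms
    t = hit_time_ms - input_lag_ms - first_beat_ms
    nearest = round(t / beat_ms) * beat_ms
    return t - nearest
```

Note how a BPM that is off by even 0.1 accumulates error linearly with time, which is exactly the "feels more and more off as the song progresses" failure mode.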
Online Multi-Player Netcode
Everybody knows this is hard, but still underestimates the effort it takes. Sure, back in the day you could use the now-discontinued ready-made solution for Unity 5.0 to synchronise the state of your GameObjects. Sure, you can use a library that lets you send messages and streams on top of UDP. Sure, you can just use TCP and server-authoritative networking.
It can all work out, or it might not. Your netcode will have to deal with pings of 300 milliseconds, lag spikes, packet loss, and maybe recover from five seconds of lost WiFi connection. If your game can't, because it absolutely needs the low latency or high bandwidth or consistency between players, you will at least have to detect these conditions and handle them, for example by showing text on the screen informing the player that the match has been lost.
It is deceptively easy to build certain kinds of multiplayer games, and test them on your local network with pings in the single digit milliseconds. It is deceptively easy to write your own RPC system that works over TCP and sends out method names and arguments encoded as JSON. This is not the hard part of netcode. It is easy to write a racing game where players don't interact much, but just see each other's ghosts. The hard part is to make a fighting game where both players see the punches connect with the hit boxes in the same place, and where all players see the same finish line. Or maybe it's by design if every player sees his own car go over the finish line first.
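As a sketch of why even the "easy" JSON-over-TCP RPC needs care, here is minimal message framing; without an explicit delimiter, TCP's byte-stream semantics can fuse two calls together or split one across reads (the names here are illustrative, not any particular library's API):

```python
import json

def encode_rpc(method, **args):
    """Encode one RPC call as a newline-delimited JSON message."""
    return (json.dumps({"method": method, "args": args}) + "\n").encode()

def decode_rpc(stream_buffer):
    """Split a receive buffer into complete calls plus leftover bytes.

    TCP delivers a byte stream, not discrete messages, so the last
    fragment may be an incomplete call that must be kept for the
    next read.
    """
    *lines, rest = stream_buffer.split(b"\n")
    calls = [json.loads(line) for line in lines if line]
    return calls, rest
```

This is the trivial part of netcode; latency hiding, prediction, and consistency between players are the parts that stay hard.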
50 notes
·
View notes
Text
Less than three months after Apple quietly debuted a tool for publishers to opt out of its AI training, a number of prominent news outlets and social platforms have taken the company up on it.
WIRED can confirm that Facebook, Instagram, Craigslist, Tumblr, The New York Times, The Financial Times, The Atlantic, Vox Media, the USA Today network, and WIRED’s parent company, Condé Nast, are among the many organizations opting to exclude their data from Apple’s AI training. The cold reception reflects a significant shift in both the perception and use of the robotic crawlers that have trawled the web for decades. Now that these bots play a key role in collecting AI training data, they’ve become a conflict zone over intellectual property and the future of the web.
This new tool, Applebot-Extended, is an extension to Apple’s web-crawling bot that specifically lets website owners tell Apple not to use their data for AI training. (Apple calls this “controlling data usage” in a blog post explaining how it works.) The original Applebot, announced in 2015, initially crawled the internet to power Apple’s search products like Siri and Spotlight. Recently, though, Applebot’s purpose has expanded: The data it collects can also be used to train the foundational models Apple created for its AI efforts.
Applebot-Extended is a way to respect publishers' rights, says Apple spokesperson Nadine Haija. It doesn’t actually stop the original Applebot from crawling the website—which would then impact how that website’s content appeared in Apple search products—but instead prevents that data from being used to train Apple's large language models and other generative AI projects. It is, in essence, a bot to customize how another bot works.
Publishers can block Applebot-Extended by updating a text file on their websites known as the Robots Exclusion Protocol, or robots.txt. This file has governed how bots go about scraping the web for decades—and like the bots themselves, it is now at the center of a larger fight over how AI gets trained. Many publishers have already updated their robots.txt files to block AI bots from OpenAI, Anthropic, and other major AI players.
Robots.txt allows website owners to block or permit bots on a case-by-case basis. While there’s no legal obligation for bots to adhere to what the text file says, compliance is a long-standing norm. (A norm that is sometimes ignored: Earlier this year, a WIRED investigation revealed that the AI startup Perplexity was ignoring robots.txt and surreptitiously scraping websites.)
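For readers unfamiliar with the file, opting out takes only two lines; the user-agent token below is the one Apple documents, and the rest of a site's robots.txt is unaffected:

```
# Block Apple's AI-training crawler; ordinary Applebot crawling continues
User-agent: Applebot-Extended
Disallow: /
```

Because Applebot-Extended is separate from the original Applebot, this disallow affects only AI training, not how the site appears in Apple's search products.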
Applebot-Extended is so new that relatively few websites block it yet. Ontario, Canada–based AI-detection startup Originality AI analyzed a sampling of 1,000 high-traffic websites last week and found that approximately 7 percent—predominantly news and media outlets—were blocking Applebot-Extended. This week, the AI agent watchdog service Dark Visitors ran its own analysis of another sampling of 1,000 high-traffic websites, finding that approximately 6 percent had the bot blocked. Taken together, these efforts suggest that the vast majority of website owners either don’t object to Apple’s AI training practices or are simply unaware of the option to block Applebot-Extended.
In a separate analysis conducted this week, data journalist Ben Welsh found that just over a quarter of the news websites he surveyed (294 of 1,167 primarily English-language, US-based publications) are blocking Applebot-Extended. In comparison, Welsh found that 53 percent of the news websites in his sample block OpenAI’s bot. Google introduced its own AI-specific bot, Google-Extended, last September; it’s blocked by nearly 43 percent of those sites, a sign that Applebot-Extended may still be under the radar. As Welsh tells WIRED, though, the number has been “gradually moving” upward since he started looking.
Welsh has an ongoing project monitoring how news outlets approach major AI agents. “A bit of a divide has emerged among news publishers about whether or not they want to block these bots,” he says. “I don't have the answer to why every news organization made its decision. Obviously, we can read about many of them making licensing deals, where they're being paid in exchange for letting the bots in—maybe that's a factor.”
Last year, The New York Times reported that Apple was attempting to strike AI deals with publishers. Since then, competitors like OpenAI and Perplexity have announced partnerships with a variety of news outlets, social platforms, and other popular websites. “A lot of the largest publishers in the world are clearly taking a strategic approach,” says Originality AI founder Jon Gillham. “I think in some cases, there's a business strategy involved—like, withholding the data until a partnership agreement is in place.”
There is some evidence supporting Gillham’s theory. For example, Condé Nast websites used to block OpenAI’s web crawlers. After the company announced a partnership with OpenAI last week, it unblocked the company’s bots. (Condé Nast declined to comment on the record for this story.) Meanwhile, Buzzfeed spokesperson Juliana Clifton told WIRED that the company, which currently blocks Applebot-Extended, puts every AI web-crawling bot it can identify on its block list unless its owner has entered into a partnership—typically paid—with the company, which also owns the Huffington Post.
Because robots.txt needs to be edited manually, and there are so many new AI agents debuting, it can be difficult to keep an up-to-date block list. “People just don’t know what to block,” says Dark Visitors founder Gavin King. Dark Visitors offers a freemium service that automatically updates a client site’s robots.txt, and King says publishers make up a big portion of his clients because of copyright concerns.
Robots.txt might seem like the arcane territory of webmasters—but given its outsize importance to digital publishers in the AI age, it is now the domain of media executives. WIRED has learned that two CEOs from major media companies directly decide which bots to block.
Some outlets have explicitly noted that they block AI scraping tools because they do not currently have partnerships with their owners. “We’re blocking Applebot-Extended across all of Vox Media’s properties, as we have done with many other AI scraping tools when we don’t have a commercial agreement with the other party,” says Lauren Starke, Vox Media’s senior vice president of communications. “We believe in protecting the value of our published work.”
Others will only describe their reasoning in vague—but blunt!—terms. “The team determined, at this point in time, there was no value in allowing Applebot-Extended access to our content,” says Gannett chief communications officer Lark-Marie Antón.
Meanwhile, The New York Times, which is suing OpenAI over copyright infringement, is critical of the opt-out nature of Applebot-Extended and its ilk. “As the law and The Times' own terms of service make clear, scraping or using our content for commercial purposes is prohibited without our prior written permission,” says NYT director of external communications Charlie Stadtlander, noting that the Times will keep adding unauthorized bots to its block list as it finds them. “Importantly, copyright law still applies whether or not technical blocking measures are in place. Theft of copyrighted material is not something content owners need to opt out of.”
It’s unclear whether Apple is any closer to closing deals with publishers. If or when it does, though, the consequences of any data licensing or sharing arrangements may be visible in robots.txt files even before they are publicly announced.
“I find it fascinating that one of the most consequential technologies of our era is being developed, and the battle for its training data is playing out on this really obscure text file, in public for us all to see,” says Gillham.
11 notes
·
View notes
Text
All major AI developers are racing to create “agents” that will perform tasks on your computer: Apple, Google, Microsoft, OpenAI, Anthropic, etc. AI Agents will read your computer screen, browse the Internet, and perform tasks on your computer. Hidden agents will be harvesting your personal data, analyzing your hard drives for contraband, and ratting you out to the police. It’s a brave new world, after all. ⁃ Patrick Wood, TN Editor.
Google is reportedly gearing up to introduce its interpretation of the large action model concept known as “Project Jarvis,” with a preview potentially arriving as soon as December, according to The Information. This project aims to streamline various tasks for users, including research gathering, product purchasing, and flight booking.
Sources familiar with the initiative indicate that Jarvis will operate through a future version of Google’s Gemini technology and is specifically optimized for use with the Chrome web browser.
The primary focus of Project Jarvis is to help users automate everyday web-based tasks. The tool is designed to take and interpret screenshots, allowing it to interact with web pages by clicking buttons or entering text on behalf of users. While in its current state, Jarvis reportedly takes a few seconds to execute each action, the goal is to enhance user efficiency by handling routine online activities more seamlessly.
This move aligns with a broader trend among major AI companies working on similar capabilities. For instance, Microsoft is developing Copilot Vision, which will facilitate interactions with web pages.
Apple is also expected to introduce features that allow its AI to understand on-screen content and operate across multiple applications. Additionally, Anthropic has launched a beta update for Claude, which aims to assist users in managing their computers, while OpenAI is rumored to be working on a comparable solution.
Despite the anticipation surrounding Jarvis, The Information warns that the timeline for Google’s preview in December may be subject to change. The company is considering a limited release to select testers to help identify and resolve any issues before a broader launch. This approach reflects Google’s intention to refine the tool through user feedback, ensuring it meets expectations upon its official introduction.
Read full story here…
3 notes
·
View notes
Text
GIG VS AI
Ladies & gentlemen, the greatest fight of the 21st century is expected to arrive within these two decades (2020 to 2040), when we will witness a clash between our economic gladiators: the GIG economy and its components versus the AI economy and its components. This fight has the potential to decide the future of the “bottom ones” in the world.
On one side of the global arena, we have the GIG economy: a marketplace where individuals (mostly in labor categories) are hired for projects that are shorter in duration and lack formal-sector traits, for example, food delivery, freelancing, project-based hires, etc. According to a World Bank report, it is expected to include 435 million people. On the other side of the global arena, we have the AI economy: a world where every action of an individual will have a basic support system that eases its work and helps it excel in a faster, better, and more direct way, for example: AI writing a blog, AI drone delivery, AI writing assignments, AI as an employee responsible for hiring and firing, etc.
You must be wondering why two oceans are being compared; it is because they share the same boundary, and that boundary is fading at a very fast rate. You must also be wondering, “So what?? I am not liable for anything, and neither am I affected.” If economics were this simple, earthians might never search for heaven.
The gig economy faces major challenges from AI, and you might already have guessed what they are, but for clarity, let me explain.
The challenges are:
1) JOB DISPLACEMENT: The first and foremost challenge is job displacement. Many gig economy roles, such as delivery drivers, customer service agents, and data entry workers, are at risk of being automated by AI technologies like autonomous vehicles, chatbots, and machine learning algorithms.
2) SKILLS OBSOLESCENCE: AI advancements require gig workers to continually upskill to stay relevant. For instance, tasks like basic graphic design or transcription can now be automated, pushing workers to adapt to more complex roles.
3) TECHNICAL SELECTION: Many gig platforms use AI to allocate tasks, evaluate performance, and determine pay rates. This can lead to feelings of dehumanization and a lack of transparency in decision-making.
4) REGULATORY CHALLENGES: Gig workers often provide personal data to platforms, and AI can exploit this data for profit without proper worker protections.
5) MARKET CENTRALIZATION: AI-driven gig platforms can centralize market power, reducing workers' ability to negotiate terms. As platforms grow, they often extract higher fees or impose stricter conditions on gig workers.
These are some dangers that will be faced by nearly 450 million GIG workers in the future from the AI, so now the question in your mind might be, “What can GIG do in front of AI to ensure its survival?” The answer is “Collaborate." The GIG economy, instead of considering AI its opponent, has to consider it a future ally.
The collaboration ways are:
· Skill guidance: AI may evaluate market trends and suggest new abilities that employees should acquire in order to stay competitive.
· AI-Enhanced Creativity Tools: To improve their work and produce results more quickly, gig workers in creative industries (such as writing and design) can make use of AI tools like generative design or content creation platforms.
· Fair pricing models: AI is able to determine the best prices for services by taking into account worker effort, market conditions, and demand, which guarantees more equitable pay structures.
· Transparent Ratings and Feedback: By detecting and reducing biases in customer reviews or ratings, AI algorithms can guarantee that gig workers are fairly evaluated.
· Hybrid jobs: Gig workers can cooperate with AI systems in jobs like monitoring or optimizing AI outputs that platforms can introduce.
· Resource Optimization: AI can optimize routes, cut down on fuel usage, and save time for services like delivery and ride-hailing.
· Improved Matching Algorithms: AI can be used to more effectively match gig workers with jobs that fit their locations, preferences, and skill sets. This can increase job satisfaction and decrease downtime.

In summary, the titanic conflict between the AI and gig economies represents a chance for cooperation rather than a struggle for supremacy. The difficulties presented by AI (centralization of the market, skill obsolescence, and employment displacement) are formidable, but they are not insurmountable. Accepting AI as a friend rather than an enemy is essential to the gig workforce's survival and success.
Gig workers can increase productivity, obtain access to more equitable systems, and open up new growth opportunities by incorporating AI tools. In a fast-changing economy, AI can enable workers to thrive through hybrid roles, transparent feedback, and resource optimization. This change must be spearheaded by platforms, legislators, and employees working together to ensure equity, inclusion, and flexibility.
Our capacity to strike a balance between innovation and humanity will determine the future of the "bottom ones." The decisions we make now will influence the economy of tomorrow, whether we are consumers, policymakers, or gig workers. Let's make sure that the economic legacy of the twenty-first century is defined by cooperation rather than rivalry.
2 notes
·
View notes
Text
Benefits Of Conversational AI & How It Works With Examples
What Is Conversational AI?
Conversational AI mimics human speech. It’s made possible by Google’s foundation models, which underlie new generative AI capabilities, and NLP, which helps computers understand and interpret human language.
How Conversational AI works
Natural language processing (NLP), foundation models, and machine learning (ML) are all used in conversational AI.
Large volumes of speech and text data are used to train conversational AI systems. The machine is trained to comprehend and analyze human language using this data. The machine then engages in normal human interaction using this information. Over time, it improves the quality of its responses by continuously learning from its interactions.
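As a toy sketch of that train-then-respond loop, here is a bag-of-words intent matcher; this is a deliberately naive stand-in for how production systems work, and the intents, phrasings, and responses are all invented:

```python
from collections import Counter

TRAINING = {  # hypothetical intents mapped to example phrasings
    "hours":  ["what are your opening hours", "when are you open"],
    "refund": ["i want a refund", "how do i return an item"],
}
RESPONSES = {"hours": "We are open 9-5.", "refund": "Visit the returns page."}

def classify(utterance):
    """Pick the intent whose examples share the most words with the input."""
    words = Counter(utterance.lower().split())
    def overlap(intent):
        return max(sum((words & Counter(ex.split())).values())
                   for ex in TRAINING[intent])
    return max(TRAINING, key=overlap)

def reply(utterance):
    return RESPONSES[classify(utterance)]
```

Real conversational AI replaces the word-overlap score with learned language representations, and the canned responses with generation, but the analyze-then-respond shape is the same.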
Conversational AI For Customer Service
With IBM Watsonx Assistant, a next-generation conversational AI solution, anyone in your company can easily create generative AI assistants that provide customers with frictionless self-service experiences across all devices and channels, increase employee productivity, and expand your company.
User-friendly: Easy-to-use UI including pre-made themes and a drag-and-drop chat builder.
Out-of-the-box: Uses large language models, large speech models, intelligent context gathering, and natural language processing and understanding (NLP/NLU) to better comprehend the context of each natural language communication.
Retrieval-augmented generation (RAG): Grounded in your company’s knowledge base, it provides conversational responses that are correct, relevant, and current at all times.
Use cases
Watsonx Assistant may be easily set up to accommodate your department’s unique requirements.
Customer service
With quick and precise responses, customer support chatbots boost sales while reducing contact center costs.
Human resources
HR automation saves all of your employees time and improves their work experience. Staff members can get their questions answered at any time.
Marketing
With quick, individualized customer service, powerful AI chatbot marketing software lets you increase lead generation and enhance client experiences.
Features
Examine ways to increase productivity, enhance customer communications, and improve your bottom line.
Artificial Intelligence
Strong Watsonx Large Language Models (LLMs) that are tailored for specific commercial applications.
The Visual Builder
Building generative AI assistants using a user-friendly interface doesn’t require any coding knowledge.
Integrations
Pre-established links with a large number of channels, third-party apps, and corporate systems.
Security
Additional protection to prevent hackers and improper use of consumer information.
Analytics
Comprehensive reports and a strong analytics dashboard to monitor the effectiveness of conversations.
Self-service accessibility
For a consistent client experience, intelligent virtual assistants offer self-service responses and activities during off-peak hours.
Benefits of Conversational AI
Automation may save expenses while boosting output and operational effectiveness.
Conversational AI, for instance, may minimize human error and expenses by automating operations that are presently completed by people. It can also increase client happiness and engagement by providing a better customer experience.
Conversational AI, for instance, may offer a more engaging and customized experience by remembering client preferences and assisting consumers around-the-clock when human agents are not present.
Conversational AI Examples
Here are some instances of conversational AI technology in action:
Virtual agents that employ generative AI to support voice or text conversations are known as generative AI agents.
Chatbots are frequently utilized in customer care applications to respond to inquiries and offer assistance.
Virtual assistants are frequently voice-activated and compatible with smart speakers and mobile devices.
Software that converts text to speech is used to produce spoken instructions or audiobooks.
Software for speech recognition is used to transcribe phone conversations, lectures, subtitles, and more.
Applications Of Conversational AI
Customer service: Virtual assistants and chatbots may solve problems, respond to frequently asked questions, and offer product details.
E-commerce: Chatbots driven by AI can help customers make judgments about what to buy and propose products.
Healthcare: Virtual health assistants are able to make appointments, check patient health, and offer medical advice.
Education: AI-powered tutors may respond to student inquiries and offer individualized learning experiences.
In summary
Conversational AI is a formidable technology that could completely change the way we communicate with machines. Organizations can use its potential to produce more effective, engaging, and customized experiences if they understand its essential elements, advantages, and uses.
Read more on Govindhtech.com
#ConversationalAI#AI#NLP#machinelearning#generativeAI#LLM#AIchatbot#News#Technews#Technology#Technologynews#Technologytrends#Govindhtech
2 notes
·
View notes
Text
An important and timely discussion about Agentic AI
AI, GenAI, and Agentic AI and the tale of the "rusty robot" - why you should care.
JWH – What is the difference between Agentic AI and Agent-based AI? ⚡ Over two decades ago, the foundational elements for successfully utilizing advanced, self-learning algorithms to optimize the procurement process using an agent-based model were well established. ⚡ Somewhere along the line, we got distracted by SaaS and digital transformation, and now we are at risk of being sidetracked by…
0 notes
Text
RAG Evolution – A Primer to Agentic RAG
New Post has been published on https://thedigitalinsider.com/rag-evolution-a-primer-to-agentic-rag/
RAG Evolution – A Primer to Agentic RAG
What is RAG (Retrieval-Augmented Generation)?
Retrieval-Augmented Generation (RAG) is a technique that combines the strengths of large language models (LLMs) with external data retrieval to improve the quality and relevance of generated responses. Traditional LLMs use their pre-trained knowledge bases, whereas RAG pipelines will query external databases or documents in runtime and retrieve relevant information to use in generating more accurate and contextually rich responses. This is particularly helpful in cases where the question is either complex, specific, or based on a given timeframe, given that the responses from the model are informed and enriched with up-to-date domain-specific information.
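A minimal sketch of that retrieve-then-generate flow, with naive word overlap standing in for the embedding similarity a real pipeline would use, and the LLM call reduced to prompt assembly:

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (a toy stand-in
    for the vector similarity search a real RAG pipeline would use)."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble the augmented prompt: retrieved context plus the question.

    In a full pipeline this string would be sent to an LLM; the
    augmentation step itself is just this concatenation.
    """
    context = "\n".join(retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

The design point is that the model never needs the whole corpus in its weights; only the top-k retrieved chunks travel into the prompt at query time.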
The Present RAG Landscape
Large language models have completely revolutionized how we access and process information. Relying solely on internal pre-trained knowledge, however, can limit the flexibility of their answers, especially for complex questions. Retrieval-Augmented Generation addresses this problem by letting LLMs acquire and analyze data from outside sources to produce more accurate and insightful answers.
Recent development in information retrieval and natural language processing, especially LLM and RAG, opens up new frontiers of efficiency and sophistication. These developments could be assessed on the following broad contours:
Enhanced Information Retrieval: Improving information retrieval in RAG systems is important for them to work efficiently. Recent work has developed improved vector indexes, reranking algorithms, and hybrid search methods for more precise retrieval.
Semantic caching: This turns out to be one of the prime ways to cut computational cost without giving up consistent responses. Responses to queries are cached along with their semantic and pragmatic context, which promotes speedier response times and delivers consistent information for similar future queries.
Multimodal Integration: Beyond text-based LLM and RAG systems, this approach extends to images and other modalities. This allows access to a greater variety of source material and results in responses that are increasingly sophisticated and accurate.
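The semantic caching idea above can be sketched as follows: cache answers keyed by query embeddings and serve a hit when a new query lands close enough in embedding space. The `embed` function and threshold here are placeholders; a real system would use a trained embedding model:

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

class SemanticCache:
    """Serve a cached answer when a new query is semantically close
    enough to one already answered (threshold is a tunable)."""
    def __init__(self, embed, threshold=0.9):
        self.embed, self.threshold = embed, threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, query):
        v = self.embed(query)
        for vec, answer in self.entries:
            if cosine(v, vec) >= self.threshold:
                return answer
        return None  # cache miss: run the full pipeline, then put()

    def put(self, query, answer):
        self.entries.append((self.embed(query), answer))
```

Unlike an exact-match cache, differently worded versions of the same question hit the same entry, which is where the cost savings come from.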
Challenges with Traditional RAG Architectures
While RAG is evolving to meet different needs, traditional RAG architectures still face several challenges:
Summarisation: Summarising huge documents might be difficult. If the document is lengthy, the conventional RAG structure might overlook important information because it only gets the top K pieces.
Document comparison: Effective document comparison is still a challenge. The RAG framework frequently produces an incomplete comparison because it selects only the top K chunks from each document.
Structured data analysis: It’s difficult to handle structured numerical data queries, such as figuring out when an employee will take their next vacation depending on where they live. These models struggle to retrieve and analyse precise data points accurately.
Handling queries with several parts: Answering questions with several parts is still restricted. For example, discovering common leave patterns across all areas in a large organisation is challenging when limited to K pieces, limiting complete research.
Move towards Agentic RAG
Agentic RAG uses intelligent agents to answer complicated questions that require careful planning, multi-step reasoning, and the integration of external tools. These agents perform the duties of a proficient researcher, deftly navigating through a multitude of documents, comparing data, summarising findings, and producing comprehensive, precise responses.
The concept of agents is incorporated into the classic RAG framework to improve the system's functionality and capabilities, resulting in agentic RAG. These agents undertake extra duties and reasoning beyond basic information retrieval and generation, orchestrating and controlling the various components of the RAG pipeline.
Three Primary Agentic Strategies
Routers send queries to the appropriate module or database depending on their type. Using a Large Language Model, the router dynamically decides which category a request falls into and dispatches it to the best-suited engine, improving the accuracy and efficiency of the pipeline.
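The routing idea can be sketched as follows. This is a hypothetical illustration: the keyword-based `classify_intent` function stands in for the LLM call a real router would make, and the engine names are invented for the example:

```python
def classify_intent(query: str) -> str:
    # Stand-in for an LLM routing call; a real router would prompt the model
    # to choose a route from a described list of engines.
    q = query.lower()
    if any(w in q for w in ("revenue", "salary", "figures", "table")):
        return "sql_engine"
    if any(w in q for w in ("summarize", "summarise", "overview", "summary")):
        return "summary_engine"
    return "vector_engine"

# Each engine is a callable; here they just echo which path handled the query.
ENGINES = {
    "sql_engine": lambda q: f"[SQL] answering: {q}",
    "summary_engine": lambda q: f"[Summary] answering: {q}",
    "vector_engine": lambda q: f"[Vector search] answering: {q}",
}

def route(query: str) -> str:
    # Dispatch the query to whichever engine the classifier selected.
    return ENGINES[classify_intent(query)](query)
```

The key design point is that the router's decision is cheap relative to running every engine, so only the relevant pipeline pays the retrieval cost.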
Query transformations rephrase the user's query to better match the information in demand or, conversely, what the database offers. This can involve rephrasing, expansion, or breaking a complex question down into simpler sub-questions that are more readily handled.
Answering a complex query that spans several data sources calls for a sub-question query engine. First, the complex question is decomposed into simpler questions, one for each data source. Then all the intermediate answers are gathered and a final result is synthesized.
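That decomposition-and-synthesis flow can be sketched like this. The functions and the lookup table are hypothetical stand-ins; in a real system an LLM would generate the sub-questions, each sub-question would run against a live query engine, and an LLM would synthesize the final answer:

```python
def decompose(question: str, sources: list[str]) -> list[tuple[str, str]]:
    # Naive decomposition: ask a per-source variant of the same question.
    # A real sub-question engine would have an LLM generate tailored sub-queries.
    return [(f"{question} (according to {s})", s) for s in sources]

def answer_sub(sub_question: str, source: str, lookup: dict) -> str:
    # Stand-in for running the sub-question against that source's engine.
    return lookup[source]

def synthesize(question: str, partials: list[str]) -> str:
    # Stand-in for the final LLM synthesis step.
    return f"{question}: " + "; ".join(partials)

def sub_question_query(question: str, sources: list[str], lookup: dict) -> str:
    partials = [answer_sub(q, s, lookup) for q, s in decompose(question, sources)]
    return synthesize(question, partials)
```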
Agentic Layers for RAG Pipelines
Routing: The question is routed to the relevant knowledge base based on relevance. Example: when a user asks for recommendations for certain categories of books, the query is routed to a knowledge base covering those categories.
Query Planning: The query is decomposed into sub-queries, each sent to its respective pipeline. The agent produces a sub-query for each item of interest (such as a particular year) and dispatches it to the corresponding knowledge base.
Tool use: The language model talks to an API or external tool, knowing what the call entails, which platform to use, and when it is necessary. Example: given a user's request for a weather forecast on a given day, the LLM calls the weather API with the location and date, then parses the API's response to provide the right information.
ReAct is an iterative process coupling thinking and acting with planning, tool use, and observation. For example, to design an end-to-end vacation plan, the system considers the user's demands and fetches details about routes, tourist attractions, restaurants, and lodging by calling APIs. It then checks the results for correctness and relevance, producing a detailed travel plan matching the user's prompt and schedule.
Dynamic Query Planning: Instead of acting sequentially, the agent executes multiple actions or sub-queries concurrently and then aggregates the results. For example, to compare the financial results of two companies and determine the difference in some metric, the agent would process data for both companies in parallel before aggregating its findings. LLMCompiler is one framework that orchestrates such efficient parallel function calling.
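The ReAct pattern described above can be reduced to a small loop. In this sketch, `scripted_step` is a hand-written stand-in for the LLM's thought/action output and the weather tool is a toy; a real agent would parse these from model completions:

```python
def react_agent(question, tools, llm_step, max_steps=5):
    """Minimal ReAct loop: the 'LLM' picks an action, we run the tool,
    feed the observation back into the transcript, and repeat until it answers."""
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        thought, action, arg = llm_step(transcript)
        transcript.append(f"Thought: {thought}")
        if action == "finish":
            return arg
        observation = tools[action](arg)
        transcript.append(f"Action: {action}[{arg}] -> Observation: {observation}")
    return "gave up"

def scripted_step(transcript):
    # Scripted stand-in for an LLM policy: call the weather tool once, then answer.
    if not any("Action: weather" in line for line in transcript):
        return ("I need the forecast", "weather", "Paris")
    obs = [l for l in transcript if "Observation:" in l][-1].split("Observation: ")[1]
    return ("I can answer now", "finish", f"It will be {obs} in Paris")
```

The transcript plays the role of the growing prompt: each observation is appended so the next "thought" is conditioned on everything seen so far.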
Agentic RAG and LlamaIndex
LlamaIndex offers a very efficient implementation of RAG pipelines. The library fills the missing piece of integrating structured organizational data into generative AI models, providing convenient tools for processing and retrieving data as well as interfaces to various data sources. The major components of LlamaIndex are described below.
LlamaParse parses documents.
LlamaCloud, an enterprise service for deploying RAG pipelines with minimal manual labor.
Using multiple LLMs and vector stores, LlamaIndex provides an integrated way to build RAG applications in Python and TypeScript. These characteristics make it a sought-after backbone for companies looking to leverage AI for enhanced data-driven decision-making.
Key Components of Agentic RAG Implementation with LlamaIndex
Let's look in depth at some of the ingredients of agentic RAG and how they are implemented in LlamaIndex.
1. Tool Use and Routing
The routing agent picks the LLM or tool best suited to a given question, based on the prompt type. This enables contextually sensitive decisions, such as whether the user wants an overview or a detailed summary. An example of this approach is the Router Query Engine in LlamaIndex, which dynamically chooses the tool that will best answer the query.
2. Long-Term Context Retention
Memory's most important job is to retain context over several interactions. Memory-equipped agents in agentic RAG stay continually aware of prior interactions, producing coherent, context-rich responses.
LlamaIndex includes a chat engine with memory for contextual conversations as well as single-shot queries. To avoid overflowing the LLM's context window, this memory must be tightly controlled during long discussions and reduced to summarized form.
3. Subquestion Engines for Planning
Oftentimes, a complicated query must be broken down into smaller, manageable jobs. The sub-question query engine is one of the core agent capabilities in LlamaIndex: a big query is broken into smaller ones, executed sequentially, and then combined into a coherent answer. Agents investigating multiple facets of a query step by step embodies multi-step planning rather than a single linear pass.
4. Reflection and Error Correction
Reflective agents produce output and then check its quality, making corrections if necessary. This skill is essential for ensuring accuracy and that the output matches the user's intent. With LlamaIndex's self-reflective workflow, an agent reviews its own performance, retrying or adjusting activities that do not meet certain quality levels. This self-correction makes agentic RAG dependable for enterprise applications where reliability is cardinal.
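The reflect-check-revise cycle can be expressed as a generic loop. This is a framework-agnostic sketch rather than LlamaIndex's actual API; in practice, `generate`, `evaluate`, and `revise` would each wrap LLM calls:

```python
def reflect_and_retry(generate, evaluate, revise, max_attempts=3):
    """Generate a draft, score it, and revise until it passes a quality bar.

    evaluate returns (passed, critique); revise uses the critique to improve
    the draft. After max_attempts the best-effort draft is returned anyway.
    """
    draft = generate()
    for _ in range(max_attempts):
        passed, critique = evaluate(draft)
        if passed:
            return draft
        draft = revise(draft, critique)
    return draft  # best effort after exhausting retries
```

The cap on attempts matters: without it, a draft that can never satisfy the evaluator would loop forever and burn tokens.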
5. Complex Agentic Reasoning
Tree-based exploration applies when agents must investigate a number of possible routes to achieve a goal. In contrast to sequential decision-making, tree-based reasoning lets an agent consider multiple strategies at once and choose the most promising based on assessment criteria updated in real time.
LlamaCloud and LlamaParse
With its extensive array of managed services designed for enterprise-grade context augmentation in LLM and RAG applications, LlamaCloud is a major leap in the LlamaIndex ecosystem. It lets AI engineers focus on core business logic by reducing the complexity of data wrangling. LlamaParse, a parsing engine that integrates conveniently with LlamaIndex's ingestion and retrieval pipelines, is one of the most important elements: it handles complicated, semi-structured documents with embedded objects such as tables and figures. Another key building block is the managed ingestion and retrieval API, which provides several ways to easily load, process, and store data from a large set of sources, such as LlamaHub's central data repository or LlamaParse outputs, and supports various data storage integrations.
Conclusion
Agentic RAG represents a shift in information processing by introducing more intelligence into the agents themselves. In many situations, agentic RAG can be combined with processes or different APIs to provide a more accurate and refined result. For instance, in document summarisation, agentic RAG assesses the user's purpose before crafting a summary or comparing specifics. In customer support, agentic RAG can accurately and individually reply to increasingly complex client enquiries, drawing not only on its trained model but also on available memory and external sources. Agentic RAG highlights a shift from generative models to more finely tuned systems that leverage other types of sources to achieve robust and accurate results. Generative and intelligent as they already are, these models and agentic RAG systems continue to push toward higher efficiency as more and more data is added to their pipelines.
#agent#agentic RAG#agents#ai#AI models#Algorithms#Analysis#API#APIs#applications#approach#assessment#bases#Books#Building#Business#challenge#Cloud#communication#Companies#comparison#comprehensive#data#data analysis#data storage#data-driven#Database#databases#Design#details
Text
Google to develop AI that takes over computers, The Information reports
(Reuters) - Alphabet's Google is developing artificial intelligence technology that takes over a web browser to complete tasks such as research and shopping, The Information reported on Saturday.
Google is set to demonstrate the product code-named Project Jarvis as soon as December with the release of its next flagship Gemini large language model, the report added, citing people with direct knowledge of the product.
Microsoft-backed OpenAI also wants its models to conduct research by browsing the web autonomously with the assistance of a “CUA,” or computer-using agent, that can take actions based on its findings, Reuters reported in July.
Anthropic and Google are trying to take the agent concept a step further with software that interacts directly with a person’s computer or browser, the report said.
Google didn’t immediately respond to a Reuters request for comment.
Text
Generative AI’s Role in IT Service Management: A Game-Changer for Efficiency and Innovation
In the rapidly evolving landscape of IT Service Management (ITSM), emerging technologies continually reshape the way organizations deliver, manage, and optimize IT services. One of the most disruptive innovations today is Generative AI, which is transforming how IT professionals approach their tasks. By harnessing the capabilities of machine learning and artificial intelligence, Generative AI is enhancing service efficiency, improving user experience, and paving the way for more predictive and proactive IT operations.
Generative AI, which refers to AI models capable of producing new content, data, or solutions based on learned patterns from vast datasets, has significant implications for IT Service Management. With the rise of Generative AI certification, professionals can gain the skills needed to harness this transformative technology. It goes beyond traditional automation, enabling ITSM teams to move from reactive problem-solving to proactive service enhancement. This technology offers more than just automated responses; it introduces intelligent, data-driven insights that can optimize IT service delivery and innovation.
1. Enhancing Service Desk Operations
One of the most prominent roles of Generative AI in ITSM is its impact on service desk operations. The service desk is the frontline of IT support, managing a multitude of tickets, incidents, and requests daily. Traditionally, managing these operations required significant human effort, with support teams spending time on repetitive, low-value tasks such as ticket classification, incident management, and basic troubleshooting.
Generative AI, particularly through AI-powered chatbots and virtual agents, is revolutionizing these operations. These intelligent tools can process vast amounts of data from historical tickets and documentation, enabling them to resolve common issues, provide step-by-step guidance, and offer tailored responses to users. For example, instead of waiting for human intervention, a virtual agent can quickly resolve a password reset request or troubleshoot a network connectivity issue. By automating these tasks, IT service teams can focus on more complex issues, ultimately improving productivity and reducing response times. Enrolling in a Generative AI Course can provide deeper insights into how these technologies work and how to leverage them for improved IT service management.
Moreover, generative AI models can continuously learn from interactions, becoming more effective and accurate over time. As a result, the service desk can provide more consistent, 24/7 support to users, ensuring that even complex queries are addressed swiftly without the need for manual escalation.
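A heavily simplified sketch of such ticket deflection: a lookup of known intents (built, hypothetically, from historical tickets) auto-resolves matching requests and escalates the rest. A production chatbot would use an LLM or a trained classifier rather than substring matching:

```python
# Hypothetical knowledge base distilled from historical tickets: each known
# intent maps to the resolution that closed similar tickets in the past.
RESOLUTIONS = {
    "password reset": "Visit the self-service portal and choose 'Forgot password'.",
    "vpn": "Reinstall the VPN client and re-enter your one-time token.",
}

def triage(ticket: str) -> dict:
    # Match the ticket text against known intents; escalate anything novel.
    text = ticket.lower()
    for intent, fix in RESOLUTIONS.items():
        if intent in text:
            return {"status": "auto-resolved", "reply": fix}
    return {"status": "escalated", "reply": "Routed to a human agent."}
```

The payoff described in the text comes from the first branch: every auto-resolved ticket is one a human agent never has to touch.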
2. Improving Incident Management and Resolution
Incident management is one of the core processes of ITSM, requiring prompt and efficient handling of issues to minimize downtime and service disruption. Generative AI is playing a crucial role in optimizing this process by providing predictive insights and automating parts of incident resolution.
AI models can analyze past incidents, detect patterns, and predict potential future issues before they escalate into major problems. This predictive capability allows IT teams to proactively address vulnerabilities and risks in the IT infrastructure, thus preventing costly downtime. Additionally, when incidents do occur, Generative AI can quickly suggest solutions or provide troubleshooting guides to service desk staff based on historical data and contextual analysis.
Generative AI also enhances collaboration by providing real-time insights and recommendations to various teams across the organization. For example, if an incident is reported, AI can instantly identify similar cases, suggest resolutions, or alert relevant teams about recurring patterns, significantly speeding up the resolution process.
3. Streamlining Change and Release Management
Change management in ITSM involves controlling and overseeing modifications to IT systems, services, or applications. It’s a delicate balance between innovation and maintaining system stability. Generative AI can assist by providing detailed risk assessments, forecasting potential impacts of proposed changes, and recommending the best timing or methods for implementation.
By analyzing past changes and their outcomes, AI models can identify the most effective strategies for rolling out new services or updates. This capability is particularly useful for release management, where AI can simulate the impact of changes across different environments before they are implemented in production. Generative AI models can also automate routine aspects of the release process, such as code testing or deployment verification, ensuring faster and more reliable updates.
4. Optimizing Knowledge Management
Effective knowledge management is vital for ITSM teams to resolve incidents swiftly and maintain high service levels. Generative AI plays a transformative role by not only indexing and searching knowledge repositories but also creating new knowledge artifacts based on the data it processes.
For instance, AI can analyze IT service logs, historical ticket data, and other internal documents to automatically generate new troubleshooting guides or best practices. This ensures that the knowledge base remains up to date, reducing the time IT professionals spend searching for solutions. Furthermore, AI-driven knowledge management can enhance training and onboarding by providing real-time, contextual learning experiences for new employees, helping them adapt to complex IT environments more quickly.
5. Facilitating IT Asset and Configuration Management
IT asset management and configuration management are critical for ensuring that IT services are delivered efficiently and securely. Generative AI can support these processes by automating the tracking and auditing of IT assets, enabling real-time updates to configuration management databases (CMDBs), and generating recommendations for optimizing resource utilization.
AI models can also provide insights into the lifecycle of IT assets, predicting when equipment or software may need maintenance or replacement. This proactive approach reduces the likelihood of service disruptions due to outdated or malfunctioning assets, ensuring smoother and more reliable service delivery.
6. Driving Continuous Service Improvement
Continuous service improvement (CSI) is a key principle in ITSM, focusing on the ongoing enhancement of IT services. Generative AI plays a vital role in this area by offering real-time analytics and insights that inform decision-making.
With access to vast amounts of data, Generative AI can identify trends, predict future service demands, and recommend ways to optimize performance. For example, it can analyze service response times, user feedback, and system performance metrics to highlight areas for improvement. This data-driven approach helps IT teams make informed decisions and implement strategies that align with business goals and user expectations.
Conclusion: The Future of IT Service Management with Generative AI
Generative AI is not just another tool in the ITSM toolkit; it represents a paradigm shift in how IT services are delivered and managed. By automating routine tasks, providing predictive insights, and enabling more proactive service management, Generative AI empowers IT teams to focus on innovation and continuous improvement. As AI technology continues to evolve, its role in ITSM will only grow, offering new opportunities for enhancing efficiency, reducing operational costs, and delivering superior user experiences.
Incorporating Generative AI into ITSM strategies is no longer optional but essential for organizations aiming to stay competitive in the digital age. As this technology becomes more integrated into IT operations, businesses will experience a new era of service management, characterized by increased automation, smarter decision-making, and a relentless focus on innovation.
#Generative AI Certification#Generative AI Course#Generative AI Training#Artificial Intelligence#Generative AI Technology#Generative AI Benefits#Generative AI in ITSM#Generative AI Importance
Text
AI in Digital Marketing: Revolutionizing the Future of Marketing
The rise of Artificial Intelligence (AI) is transforming every industry, and digital marketing is no exception. AI's integration into marketing strategies has opened up a new realm of possibilities, enhancing how businesses interact with their customers. From automating tasks to providing personalized experiences, AI in digital marketing is revolutionizing how brands operate. In this blog, we’ll explore how AI is reshaping the future of digital marketing and why it’s a game-changer for businesses.
1. Personalized Marketing at Scale
AI allows digital marketers to deliver personalized content to consumers like never before. By analyzing user behavior, search patterns, and social interactions, AI algorithms can predict what a customer is likely to be interested in. This means businesses can send targeted ads, emails, and content to users at just the right time, increasing the chances of conversion. Personalized marketing helps boost engagement and customer satisfaction by ensuring relevant content reaches the audience.
Key Takeaway: AI helps tailor content based on customer data, enabling personalized marketing strategies that boost engagement and conversions.
2. Chatbots and Customer Support
AI-powered chatbots are revolutionizing customer support in digital marketing. These intelligent bots provide 24/7 customer service, instantly answering questions and resolving issues. This not only improves customer satisfaction but also frees up human agents to handle more complex queries. Many businesses now use AI chatbots to handle basic inquiries, provide recommendations, and assist customers in real-time.
Key Takeaway: AI chatbots streamline customer service, offering instant support and freeing up resources for businesses.
3. Enhanced SEO and Content Creation
AI tools are increasingly being used in SEO (Search Engine Optimization) and content creation. From analyzing top-ranking keywords to predicting trending topics, AI can help marketers optimize their content for better visibility on search engines. Tools like GPT-based models are being used to generate high-quality content that aligns with SEO strategies, making content marketing more efficient.
AI can also analyze existing content and suggest improvements, ensuring your website ranks higher on search engines like Google. Marketers no longer need to guess which keywords to target; AI tools provide data-driven insights that lead to better SEO outcomes.
Key Takeaway: AI optimizes SEO strategies by providing data-driven insights and automating content creation.
4. Predictive Analytics for Campaigns
AI takes digital marketing to the next level with predictive analytics. By analyzing historical data, AI algorithms can forecast trends, customer behaviors, and future market movements. This allows businesses to create more effective marketing campaigns that resonate with their target audience. Predictive analytics helps marketers make smarter decisions about where to allocate their budget, which platforms to focus on, and which content formats to prioritize.
Key Takeaway: AI enables marketers to predict trends and behaviors, leading to more strategic and successful marketing campaigns.
5. Automated Advertising and Media Buying
AI has also automated the process of buying ad space, ensuring that businesses get the most value from their digital advertising spend. AI tools can optimize ads in real-time, adjusting bids and placements to ensure maximum ROI. Programmatic advertising, powered by AI, takes the guesswork out of media buying by using algorithms to place ads where they are most likely to convert.
Key Takeaway: AI automates ad buying and optimization, ensuring businesses get the best results from their marketing budget.
6. Social Media Management and Monitoring
AI tools have made it easier than ever to manage and monitor social media. Social media platforms now utilize AI to track user engagement, analyze sentiment, and optimize content posting schedules. AI can also provide insights into which types of posts resonate most with your audience, helping businesses refine their social media strategies.
Key Takeaway: AI simplifies social media management by providing valuable insights into user behavior and engagement trends.
7. Visual and Voice Search Optimization
With the rise of visual and voice search, AI is helping marketers adapt to new search behaviors. AI-powered tools can optimize images for visual search platforms and help businesses prepare for voice search queries by optimizing for natural language processing (NLP). As more consumers use voice assistants like Siri and Alexa, optimizing for voice search has become a crucial part of digital marketing strategies.
Key Takeaway: AI is enabling businesses to stay ahead in visual and voice search trends by optimizing content accordingly.
Conclusion
AI in digital marketing is not just a trend—it’s the future. From automating mundane tasks to providing deep insights into consumer behavior, AI is helping businesses enhance their marketing efforts. Brands that embrace AI will not only improve their efficiency but also create more personalized, engaging experiences for their customers. As AI technology continues to evolve, its impact on digital marketing will only grow, making it a crucial tool for businesses looking to stay competitive.