#rosie the robot maid
Explore tagged Tumblr posts
Text
I already did this but whatever
Hello, tumblr user. Before you is a tumblr post asking you to name a female fictional character. You have unlimited time to tag a female character, NOT a male one.
Begin.
#amelia bedelia#sakura haruno#bedelia du maurier#marinette dupain cheng#rayla#zatanna#chichi#hinata hyuga#claire bennet#lois lane#gogo tomago#galadriel#allura#yang xiao long#ruby rose#sailor jupiter#sailor mercury#asuka langley soryu#Liko#orla#chun-li#apple white#rosie the robot maid#judy jetson#usagi tsukino#chibiusa#ariel#sypha belnades#queen beryl#ursula
56K notes
·
View notes
Text
🤖 Rosey the Robot Maid 🛸
#Rosey the Robot#The Jetsons#character study#cartoon fanart#Hanna Barbera#Warner Bros Animation#Cartoon Network#Boomerang from Cartoon Network#Jean Vander Pyl#Tress MacNeille#Iwao Takamoto#futuristic#space age#60s cartoons#80s cartoons#robot#maid#Rosie's Boyfriend#Rosie Come Home#Mother's Day for Rosie#Robot’s Revenge#Wedding Bells for Rosey
7 notes
·
View notes
Text
Got to figure out how to fit everything I brought with me plus Christmas presents that include a freaking Roomba (thank you so much, Mom and Dad, very generous but also !!!!) into two suitcases. This should be interesting.
#random personal stuff#the other big question is what does one name the Roomba#because apparently they need names#my mom calls hers Rosie after the Jetsons' robot maid#I could continue in that vein and call mine Jeeves or Alfred or something#or it could be Murderbot#but that's a little difficult to explain to the general public?#anyway I'll think of something eventually#but will take good ideas into Consideration
34 notes
·
View notes
Text
The robots at the Tesla event being teleoperated is so funny to me, mostly because I keep thinking about what it must be like to be one of the people doing that. Like, rich people paying you to serve drinks, but you're doing it through a machine that costs tens of thousands of dollars? This is some scifi dystopia stuff. I feel like I must have seen it somewhere before, but Surrogates is the closest that comes to mind, and that was people teleoperating fit, attractive robots to live their ideal lives, which is kind of the opposite.
I'm feeling really compelled to cook up a short story about teleoperated robot maids. Maybe some The Jetsons fanfic where Rosie is being run by someone in South America.
57 notes
·
View notes
Note
Neil spaceman....
No but for real, I imagine that in like 200 years there'll be a lot of space-related surnames like Jupteriano and Kosmovich. Just imagine.....
In Spanish, The Jetsons was translated as "Los Supersónicos"; George was just called "Súper" and his boss was called "Júpiter".
I looked it up (it's been ages since I watched it) and the rest of the family also had names like Ultra (the mom), Lucero (the girl; Lucero is an actual female name in Spanish), Cometín (the boy), and Astro the dog. Funnily enough, Rosie the robot maid was translated as "Robotina," which is what some people call Roombas here.
Where was I going with this? Oh yeah, in the Soviet Union it wasn't uncommon to name kids after revolutionary or scientific concepts. So you had kids named Radiy (for radium) and Industriy. IIRC there were also kids named for electricity, cosmonautics, and vaccines, and of course after Lenin and Marx, or after concepts such as world revolution and the new era... Like we say in Argentina, los soviéticos estaban en una, pero una muy buena (the Soviets were really onto something good). If only we had half of the optimism those people had for the future.
36 notes
·
View notes
Note
What I don't understand about AI is shouldn't we want to teach the robots to do the menial tasks that nobody likes so we can have more time to spend on creative and other fun pursuits? Not have the robots do the fun stuff so we can continue to suffer doing the worst jobs. I want the Jetsons future with Rosie the Robot maid, not this, and I don't understand why ANYONE wants this. An AI can't possibly write or make music or paint as well as a human!
Of course, we should use AI, if we use it at all, to reduce human misery and increase our free time so people can have more fun and make more art. But the people who are driving the current AI push into creative arts don't see things that way. From what I can tell, they're basically soulless techbros and/or greedy billionaires who don't give a shit about art, or the quality of human life, or even the looming dangers of the singularity. They will happily destroy jobs, human creativity, and even humanity itself if it will make their stock options go up and let them buy bigger yachts.
#ask me anything#tv writing#ask me stuff#ai#ai art generator#chatgpt#wgawest#wga west#stand with the wga#wga strike#support the wga#wga strong#sag aftra#sag strike#amptp#fuck the amptp
203 notes
·
View notes
Text
⚠︎ ⟮ NPTs ⟯ ... Murder_drones.mp3 ⟩ Cyn
﹫ ❲ Requested by anonymous ❳
「 NAMES 」
Cyn / Sin, Cynthia / Sinthia, Doily / Doilie¹, Marnierre², Salem, Marianne, Marie Anne, Lady, Doll / Dolly / Dollie, Cutlerie³, Sweetie, Belle, Bow, Lacy / Lacie / Lace, Ribbon, Sash, Felicity, Lolita, Teacup, Porcelain / Porcelynn, Silver, Delilah, Velvet / Velvette, Paisley / Paislie, Creepie, Leslie, Rose / Rosie, Pearl, Opal, Filistata⁴, Dear / Dearie, Ciel, Jacqueline / Jacquelynne², Curtsy, Embroiderie⁵, Boudreaux², Envy, Violynne⁶
¹ After doilies. ² French. ³ After cutlery. ⁴ A kind of arachnid. ⁵ After embroidery. ⁶ After violins.
「 PRONOUNS 」
Che/Cher/Cherie, Fem/Femme, Goth/Gothic, Maid/Maids, Sweet/Sweeties, Solve/Solvers, Wyrm/Wyrms, Eldritch/Eldritchs, See/Seer, Fabric/Fabrics, Sh3/H3r, Dear/Dearie, Error/404, 404/404s, Glitch/Glitches, Tick/Tock, Sh_/H_r, X/Xs, Stitch/Stitches,🪞/🪞s, 🥂/🥂s, ☕/☕s, ⚠️/⚠️s, ⚠︎/⚠︎s
「 TITLES 」
The Lady / Gentleman of the mansion / manor, The absolute solver of fabrics, ( Prn ) with a golden stare, The most elegant robot / drone / android, His / N's little sister, The sweetest / cutest / most adorable younger sister, The little girl / ( Label ) with a dark secret, ( Prn ) with a dark secret, ( Prn ) who is attending the gala, ( Prn ) who is hosting a tea party, ( Prn ) who is playing with ( prn ) dolls, ( Prn ) eldritch form
#https://absoluteSolver_npts#murder drones#md#md cyn#murder drones cyn#the absolute solver of fabrics#absolute solver#npts#npt suggestions
54 notes
·
View notes
Text
I've finally been able to sketch something. It ain't much, but it's honest work.
"Jeeves" is the most basic robotic domestic servant offered in C.H.C. space; he is relatively cheaply made, with no options to alter or change his personality or appearance; he is stuck as an "elderly" butler; and he struggles to complete more physical tasks as well as tasks that require high dexterity as Jeeves only has three fingers on each hand. However, his low cost and easy maintenance ensure that he is still one of the best sellers for those in need of domestic servants, though some find his featureless face, sunken eyes, and disproportionate arms off-putting.
"Rosie" is the newest and greatest robotic servant, considerably bulkier and more expensive than Jeeves, with the base personality of an elderly housemaid. Rosie does well in any role given to her; she works with AI cards much like Jeeves, but they are far larger and therefore have much greater flexibility per card. While Jeeves may need separate AI cards for housework, chauffeuring, shopping, and yard work, Rosie would only need one all-encompassing domestic AI card. Rosie is highly customizable, and with the use of personality cards that change her personality from the baseline maid, she can suit her owner's liking in terms of looks and personality. Combined with this, Rosie has a "face," allowing her to "look at" and express herself to her owners, making her far more popular than Jeeves, as many see their Rosie as part of the family, despite the constant reminders from the manufacturer that Rosie does not have feelings and has no attachment to any one person.
Many view Jeeves and Rosie as a couple, and the manufacturer has capitalized on this. If Jeeves and Rosie are in close proximity and idle, such as when recharging or waiting outside of a building for their owners, they will sometimes have "conversations," prepackaged voice lines they say to each other that are often more ad than actual discussion, with both machines praising each other's construction and quality, or Rosie pointing out her "superiority" over Jeeves.
K luv U bi
#art#digital art#digital#oc#art of mine#original character#character design#oc art#oc illustration#illustration#digital illustration#robot#robot art#robot oc#when i wrote this it didn't seem like that much words#artists on tumblr#scifi art#sci fi#scifi#sketch
3 notes
·
View notes
Text
Saw a post about the personification of objects, and how humans are particularly good at personifying and pack-bonding with robots so I wanted to tell this story.
A few years back my folks got a robot vacuum. At the same time, my sister was playing thru Detroit: Become Human. The robo-vac has a lil blue light when it's running, and it turns orange and then red as the battery drains, so my sister and I started calling it Connor after an android character in the game.
We kept doing this and eventually my parents started calling him Connor too. Shortly after, my dad went and got a bunch of vinyl stickers printed and had leftover space to fill on the page so he printed out a big picture of Connor's face and stuck it to the vacuum so that the light lines up with the character's LED.
A while later, they got a second vac for the upper floor and named her Rosie, complete with her own picture of Rosie the robot maid from The Jetsons.
5 notes
·
View notes
Text
spongebob gender role commentary I did for a class discussion :)
The TV show I picked was (unsurprisingly) Spongebob Squarepants. Commonly shortened to just Spongebob, the animated TV series follows the famous yellow sea sponge Spongebob and his friends through various misadventures and scenarios. There is no overarching plot, though the series does have a few movie installments with solid, albeit completely unrelated, plots. (There's even a Spongebob musical! It's awesome, I recommend it!) Many of the episodes are quirky, colorful, silly, and sometimes outrageous. The show is rated TV-Y7 and recommended for children ages 7+. As for the premise: Spongebob himself is a bubbly young sea sponge who lives in a pineapple under the sea next to his best friend Patrick Star and reluctant neighbor and coworker Squidward Tentacles. Spongebob works as head fry cook at the most popular eating establishment in his hometown of Bikini Bottom, the Krusty Krab, owned and operated by Mr. Eugene Krabs, a notorious cheapskate. There are plenty of recurring characters as well, such as Mr. Krabs' archnemesis Sheldon J. Plankton, owner of the Chum Bucket, Plankton's computer wife Karen, Spongebob's pet snail Gary, Sandy Cheeks the Texan squirrel, and Mr. Krabs' whale daughter Pearl. Many of the episodes feature one or more of these characters and show them in a range of situations, from going to a school dance to covering up an accidental homicide. Overall, this show is definitely one that many of my friends weren't allowed to watch growing up because it was too weird for their parents to handle.
When it comes to gender stereotypes in Spongebob, I can proudly say that many aspects of the show defy gender expectations. (Perhaps that's another reason why many parents don't let their kids watch it!) To list a few examples, let's start with male characters with feminine characteristics. Spongebob as a character is not at all a depiction of traditional masculinity; he has big sweet eyes and little eyelashes, rosy cheeks, buck teeth, and a high-pitched voice (voiced by the iconic Tom Kenny!). None of these traits are consistent with the stereotypical strong, burly man often idealized by young boys. Spongebob is kind and loving to all, even to those who don't like him, like Squidward and Plankton. Spongebob's effeminate nature is never made to seem like a bad thing, though. Spongebob has worn a variety of flashy, feminine or gender nonconforming costumes that no one bats an eye at. (Like that one episode where he wears a little maid dress...yeah.) Another example of non-traditional masculinity can be seen in Squidward. Though not as outwardly effeminate as Spongebob, Squidward partakes in hobbies not typically associated with men. Squidward is a lover of the fine arts and music; he enjoys painting and playing the clarinet; he likes to pamper himself with rich sweets and cares a lot about his appearance. None of these things are explicitly feminine, though that's easy for me to say as someone who doesn't see things as gendered anymore. Traditionally speaking, these kinds of activities are often perceived as feminine, so to see a male character openly doing them is certainly a defiance of gender norms, especially for kids' TV.
Let's move on to female characters. Two female characters come to mind when thinking about gender in Spongebob: Sandy and Pearl. Sandy exhibits several traits inconsistent with those of female characters in children's media. For starters, Sandy is insanely smart; she's always depicted tinkering with robots and machines and gadgets. The show makes it clear that Sandy is among the smartest characters in the show, if not the smartest. Secondly, Sandy is strong and athletic; she loves karate and extreme sports, and has been depicted on multiple occasions partaking in dangerous, thrilling activities. There's a whole episode where Sandy makes Spongebob do extreme things with her, and it drives him to the point of exhaustion. Many of these traits are commonly associated with male characters, i.e. strength, smarts, and passion for dangerous activity. Sandy is never once depicted as weak or in need of saving; she's a strong, smart, independent southern lady who isn't afraid to throw down.
Next up is Pearl! I think Pearl as a character is wonderful because while she is quite feminine, she's not the typical depiction of a teenage girl. Pearl isn't small or frail; as a whale, she's tall, strong, and athletic (cheerleader!). Her voice is relatively deep, and despite her size and strength, she is still seen loving and indulging in girly things like clothes, jewelry, boy bands, and pink! I really like how Pearl shows young girls that you can be feminine even when you aren't the cookie cutter image of a young girl.
Finally, am I surprised by a lot of this? The answer: yes and no. Spongebob first aired in 1999 (over 20 years ago!), and a lot of the character portrayals were relatively progressive for the time. Subverting gender expectations has been a thing in children's TV for a while, but typically only with one or two characters. Spongebob makes no apologies whatsoever with its role reversals; boys can like girly things, and girls can like boyish things! It's honestly so impactful to me, especially since I grew up with this show (and the og Teen Titans, of course). On the other hand, it's not so surprising, especially with a lot of the newer episodes (yes, it's still airing...). Much of the new Spongebob is outwardly non-stereotypical when it comes to traditional gender norms, and at this point, I can't say I'm surprised. Maybe it's because I've always loved this show and I'm biased, but a lot of the stuff in these episodes just makes perfect sense to me. I can definitely understand why parents wouldn't want their kids watching Spongebob, though. Challenges to gender norms scare some people, and considering America's history, that makes sense. I hope, going forward, that more kids will get to grow up alongside Spongebob and his silly antics so that they too may grow into the mindset that gender is a social construct, and you should do whatever makes you happy.
4 notes
·
View notes
Note
I feel Maid Iggy would just create Rosie the Robot from The Jetsons to do the work for her XD
not if Velvette is making her wear it
0 notes
Text
Yuno - Amnesia Memories S/I
Name: Yuno
Height: 5'7 (170cm)
Gender: Female
Eyes: Green
Hair: Black (w/ pink ombre)
Birthday: November 1st
Professional Status/Occupation: Waitress at Maid Sheeps'. University student (Major: Education | Minor: Law)
Relatives: Unnamed parents
Appearance:
Yuno is a tall young girl with tan skin and rosy cheeks. She has long black hair with a light pink ombre that is slightly curled at the ends, and dark green eyes. She wears a dark purple dress with a silver zipper aligned vertically on the center of her chest, zipped all the way up. On top of the zipper is a light purple bow with smaller ribbon tails. The skirt portion consists of a lighter purple layer with a ruffled bottom that runs asymmetrically across her upper thighs, while the rest of the skirt stops at her mid-thighs. Over her dress is a black jacket that is always unbuttoned and has gold star buttons. She also wears black tights with pastel purple and white checkered patterns, and dark grey boots with a light grey strap around the ankle and red rose ornaments tied to each strap. Her purse runs over her right shoulder and has a pink and white rose design as the strap. Finally, she also wears a purple headband with a magenta rose decal and purple ribbons near it.
Personality:
Yuno is a shy yet cheerful young lady. Initially, she comes off as awkward and nervous when speaking with others. While polite, she has trouble relating to people; Mine even described her as almost 'robotic' when they first met. This also showed in how she initially struggled with her waitress job at Maid Sheeps'. Eventually, she becomes more accustomed to speaking with others, getting better at being a waitress and even befriending her coworkers. Past her awkwardness, her true nature shines through: a kind-hearted and empathetic girl. However, her desire to see the best in people also causes her to be a bit naive. It's also revealed that, before her memory loss, Yuno had been in love with Toma since childhood, to the point of slight obsession. She even asked Shin to send her pictures of Toma, who, for his part, kept his own photo album of photos of her.
Skills + Talents:
In addition to being in the photography club, Yuno also enjoys shopping, accessorizing, experimenting with makeup and nail polish, playing video games, and baking
Trivia:
Yuno enjoys chicken cooked in any way, but dislikes zucchini
While Yuno enjoys baking, she admits it's more her forte than actual cooking
She is childhood friends with Shin and Toma
Mine and Rika have stated that Yuno dresses very stylishly
She's oblivious when men flirt with her, which has caused Shin and Toma to drag her away from men making advances
In Pre-K, she said that she wanted to marry both Shin and Toma, to which Toma replied that she could only marry one. Shin initially said he would marry her. This changed in elementary school when Yuno said that she wanted to marry Toma
#self ship community#selfshipper#self shipping community#s/i community#self insert#self insert community#f/o community
0 notes
Text
A faster, better way to train general-purpose robots
New Post has been published on https://sunalei.org/news/a-faster-better-way-to-train-general-purpose-robots/
A faster, better way to train general-purpose robots
In the classic cartoon “The Jetsons,” Rosie the robotic maid seamlessly switches from vacuuming the house to cooking dinner to taking out the trash. But in real life, training a general-purpose robot remains a major challenge.
Typically, engineers collect data that are specific to a certain robot and task, which they use to train the robot in a controlled environment. However, gathering these data is costly and time-consuming, and the robot will likely struggle to adapt to environments or tasks it hasn’t seen before.
To train better general-purpose robots, MIT researchers developed a versatile technique that combines a huge amount of heterogeneous data from many sources into one system that can teach any robot a wide range of tasks.
Their method involves aligning data from varied domains, like simulations and real robots, and multiple modalities, including vision sensors and robotic arm position encoders, into a shared “language” that a generative AI model can process.
By combining such an enormous amount of data, this approach can be used to train a robot to perform a variety of tasks without the need to start training it from scratch each time.
This method could be faster and less expensive than traditional techniques because it requires far fewer task-specific data. In addition, it outperformed training from scratch by more than 20 percent in simulation and real-world experiments.
“In robotics, people often claim that we don’t have enough training data. But in my view, another big problem is that the data come from so many different domains, modalities, and robot hardware. Our work shows how you’d be able to train a robot with all of them put together,” says Lirui Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.
Wang’s co-authors include fellow EECS graduate student Jialiang Zhao; Xinlei Chen, a research scientist at Meta; and senior author Kaiming He, an associate professor in EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the Conference on Neural Information Processing Systems.
Inspired by LLMs
A robotic “policy” takes in sensor observations, like camera images or proprioceptive measurements that track the speed and position of a robotic arm, and then tells a robot how and where to move.
Policies are typically trained using imitation learning, meaning a human demonstrates actions or teleoperates a robot to generate data, which are fed into an AI model that learns the policy. Because this method uses a small amount of task-specific data, robots often fail when their environment or task changes.
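To make the imitation-learning setup concrete, here is a minimal behavior-cloning sketch in PyTorch: a small policy network is fit to recorded (observation, action) pairs. The dimensions, architecture, and synthetic "demonstrations" are illustrative assumptions, not the researchers' actual setup.

```python
# Minimal behavior-cloning sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

obs_dim, act_dim = 32, 7                 # e.g. proprioception features -> joint commands (assumed)
policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in for (observation, action) pairs recorded from teleoperated demos.
demo_obs, demo_act = torch.randn(1024, obs_dim), torch.randn(1024, act_dim)

for step in range(1000):
    idx = torch.randint(0, len(demo_obs), (64,))
    loss = nn.functional.mse_loss(policy(demo_obs[idx]), demo_act[idx])  # imitate the demonstrated action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A policy trained this way only sees the narrow slice of situations covered by the demonstrations, which is exactly why it tends to break when the environment or task changes.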
To develop a better approach, Wang and his collaborators drew inspiration from large language models like GPT-4.
These models are pretrained using an enormous amount of diverse language data and then fine-tuned by feeding them a small amount of task-specific data. Pretraining on so much data helps the models adapt to perform well on a variety of tasks.
“In the language domain, the data are all just sentences. In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture,” he says.
Robotic data take many forms, from camera images to language instructions to depth maps. At the same time, each robot is mechanically unique, with a different number and orientation of arms, grippers, and sensors. Plus, the environments where data are collected vary widely.
The MIT researchers developed a new architecture called Heterogeneous Pretrained Transformers (HPT) that unifies data from these varied modalities and domains.
They put a machine-learning model known as a transformer into the middle of their architecture, which processes vision and proprioception inputs. A transformer is the same type of model that forms the backbone of large language models.
The researchers align data from vision and proprioception into the same type of input, called a token, which the transformer can process. Each input is represented with the same fixed number of tokens.
Then the transformer maps all inputs into one shared space, growing into a huge, pretrained model as it processes and learns from more data. The larger the transformer becomes, the better it will perform.
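As a rough illustration of the shared token space, the sketch below shows per-modality encoders ("stems") mapping a camera image and a proprioception vector into fixed numbers of tokens that one shared transformer trunk processes. Every module name, size, and dimension here is a hypothetical stand-in for the described approach, not the released HPT code.

```python
import torch
import torch.nn as nn

D, N_VIS, N_PROP = 256, 16, 4   # token width and per-modality token counts (assumed values)

class VisionStem(nn.Module):
    """Illustrative stem: maps a 64x64 camera image into N_VIS tokens."""
    def __init__(self):
        super().__init__()
        # 16x16 patches over a 64x64 image -> 4x4 = 16 tokens of width D
        self.patch = nn.Conv2d(3, D, kernel_size=16, stride=16)

    def forward(self, img):                                 # img: (B, 3, 64, 64)
        return self.patch(img).flatten(2).transpose(1, 2)   # (B, 16, D)

class ProprioStem(nn.Module):
    """Illustrative stem: maps a proprioception vector into N_PROP tokens."""
    def __init__(self, prop_dim=14):
        super().__init__()
        self.proj = nn.Linear(prop_dim, N_PROP * D)

    def forward(self, prop):                                # prop: (B, prop_dim)
        return self.proj(prop).view(-1, N_PROP, D)

# One shared transformer trunk processes the concatenated tokens from all modalities.
trunk = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True),
    num_layers=6,
)
action_head = nn.Linear(D, 7)   # e.g. a 7-DoF arm command (assumed action space)

vision_stem, prop_stem = VisionStem(), ProprioStem()
img, prop = torch.randn(2, 3, 64, 64), torch.randn(2, 14)
tokens = torch.cat([vision_stem(img), prop_stem(prop)], dim=1)   # (B, 20, D)
action = action_head(trunk(tokens).mean(dim=1))                  # pool tokens -> predicted action
```

Because every robot and modality is reduced to the same kind of fixed-width tokens, the same trunk can be trained on all of the datasets at once.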
A user only needs to feed HPT a small amount of data on their robot’s design, setup, and the task they want it to perform. Then HPT transfers the knowledge the transformer gained during pretraining to learn the new task.
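Continuing the hypothetical sketch above, transfer to a new robot could look roughly like freezing the shared trunk and training only a fresh stem and action head on a small set of robot-specific demonstrations; the actual fine-tuning recipe used by HPT may differ.

```python
# Illustrative transfer step, reusing trunk / ProprioStem / D from the sketch above.
for p in trunk.parameters():
    p.requires_grad = False                # keep the pretrained trunk fixed (an assumption)

new_prop_stem = ProprioStem(prop_dim=9)    # hypothetical new robot: 9-D proprioception
new_head = nn.Linear(D, 4)                 # hypothetical 4-D action space
optimizer = torch.optim.Adam(
    list(new_prop_stem.parameters()) + list(new_head.parameters()), lr=1e-4
)
```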
Enabling dexterous motions
One of the biggest challenges of developing HPT was building the massive dataset to pretrain the transformer, which included 52 datasets with more than 200,000 robot trajectories in four categories, including human demo videos and simulation.
The researchers also needed to develop an efficient way to turn raw proprioception signals from an array of sensors into data the transformer could handle.
“Proprioception is key to enable a lot of dexterous motions. Because the number of tokens in our architecture is always the same, we place the same importance on proprioception and vision,” Wang explains.
When they tested HPT, it improved robot performance by more than 20 percent on simulation and real-world tasks, compared with training from scratch each time. Even when the task was very different from the pretraining data, HPT still improved performance.
“This paper provides a novel approach to training a single policy across multiple robot embodiments. This enables training across diverse datasets, enabling robot learning methods to significantly scale up the size of datasets that they can train on. It also allows the model to quickly adapt to new robot embodiments, which is important as new robot designs are continuously being produced,” says David Held, associate professor at the Carnegie Mellon University Robotics Institute, who was not involved with this work.
In the future, the researchers want to study how data diversity could boost the performance of HPT. They also want to enhance HPT so it can process unlabeled data like GPT-4 and other large language models.
“Our dream is to have a universal robot brain that you could download and use for your robot without any training at all. While we are just in the early stages, we are going to keep pushing hard and hope scaling leads to a breakthrough in robotic policies, like it did with large language models,” he says.
This work was funded, in part, by the Amazon Greater Boston Tech Initiative and the Toyota Research Institute.
0 notes
Text
A faster, better way to train general-purpose robots
New Post has been published on https://thedigitalinsider.com/a-faster-better-way-to-train-general-purpose-robots/
#000#ai#ai model#Amazon#approach#architecture#arm#artificial#Artificial Intelligence#author#Brain#Building#Carnegie Mellon University#challenge#computer#Computer Science#Computer Science and Artificial Intelligence Laboratory (CSAIL)#Computer science and technology#conference#cooking#data#datasets#Design#diversity#domains#Electrical engineering and computer science (EECS)#engineering#engineers#Environment#Forms
0 notes
Text
A faster, better way to train general-purpose robots
In the classic cartoon “The Jetsons,” Rosie the robotic maid seamlessly switches from vacuuming the house to cooking dinner to taking out the trash. But in real life, training a general-purpose robot remains a major challenge. Typically, engineers collect data that are specific to a certain robot and task, which they use to train the robot in a controlled environment. However, gathering these…
0 notes
Photo
Mike talks with filmmaker SK Dale about his latest work, Subservience. It's the story of a family with a new robot maid who's a little more conniving than Rosie from The Jetsons. https://www.spreaker.com/episode/special-report-s-k-dale-on-subservience--61640586
0 notes