#Human-computer interaction
varun766 · 9 months
Text
The Edureka HCI Course for AI Systems Design will help you learn human-AI systems design and master automation, user experience, and risk management.
2 notes · View notes
jcmarchi · 1 month
Text
Major Breakthrough in Telepathic Human-AI Communication: MindSpeech Decodes Seamless Thoughts into Text
New Post has been published on https://thedigitalinsider.com/major-breakthrough-in-telepathic-human-ai-communication-mindspeech-decodes-seamless-thoughts-into-text/
In a revolutionary leap forward in human-AI interaction, scientists at MindPortal have successfully developed MindSpeech, the first AI model capable of decoding continuous imagined speech into coherent text without any invasive procedures. This advancement marks a significant milestone in the quest for seamless, intuitive communication between humans and machines.
The Pioneering Study: Non-Invasive Thought Decoding
The research, conducted by a team of leading experts and published on arXiv and ResearchGate, demonstrates how MindSpeech can decode complex, free-form thoughts into text under controlled test conditions. Unlike previous efforts that required invasive surgery or were limited to simple, memorized verbal cues, this study shows that AI can dynamically interpret imagined speech from brain activity non-invasively.
Researchers employed a portable, high-density Functional Near-Infrared Spectroscopy (fNIRS) system to monitor brain activity while participants imagined sentences across various topics. The novel approach involved a ‘word cloud’ task, where participants were presented with words and asked to imagine sentences related to these words. This task covered over 90% of the most frequently used words in the English language, creating a rich dataset of 433 to 827 sentences per participant, with an average length of 9.34 words.
Leveraging Advanced AI: Llama2 and Brain Signals
The AI component of MindSpeech was powered by the Llama2 Large Language Model (LLM), a sophisticated text generation tool guided by brain signal-generated embeddings. These embeddings were created by integrating brain signals with context input text, allowing the AI to generate coherent text from imagined speech.
Key metrics such as BLEU-1 and BERT P scores were used to evaluate the accuracy of the AI model. The results were impressive, showing statistically significant improvements in decoding accuracy for three out of four participants. For example, Participant 1’s BLEU-1 score was significantly higher at 0.265 compared to 0.224 with permuted inputs, with a p-value of 0.004, indicating a robust performance in generating text closely aligned with the imagined thoughts.
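The BLEU-1 metric cited above is straightforward to sketch: it is clipped unigram precision multiplied by a brevity penalty. The following is a minimal illustration of the metric itself, not the study's evaluation code, which may differ in tokenization and smoothing:

```python
from collections import Counter
import math

def bleu1(reference: str, candidate: str) -> float:
    """Unigram BLEU: clipped unigram precision times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    if not cand:
        return 0.0
    ref_counts = Counter(ref)
    cand_counts = Counter(cand)
    # Each candidate word counts only up to the number of times
    # it appears in the reference ("clipping").
    overlap = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = overlap / len(cand)
    # Penalize candidates shorter than the reference.
    brevity = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * precision
```

An exact match scores 1.0, so a score like the reported 0.265 reflects partial unigram overlap between the generated text and the imagined sentence; the permutation comparison (0.224) checks that the overlap is driven by the brain signals rather than chance.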
Brain Activity Mapping and Model Training
The study also mapped brain activity related to imagined speech, focusing on areas like the lateral temporal cortex, dorsolateral prefrontal cortex (DLPFC), and visual processing areas in the occipital region. These findings align with previous research on speech encoding and underscore the feasibility of using fNIRS for non-invasive brain monitoring.
Training the AI model involved a complex process of prompt tuning, where the brain signals were transformed into embeddings that were then used to guide text generation by the LLM. This approach enabled the generation of sentences that were not only linguistically coherent but also semantically similar to the original imagined speech.
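The paper's architecture is not reproduced here, but the prompt-tuning idea can be loosely sketched: brain-signal features are projected into a handful of "soft prompt" vectors in the LLM's embedding space and prepended to the context's token embeddings before generation. Every dimension and name below is an illustrative assumption, with a random projection standing in for the trained encoder:

```python
import math
import random

random.seed(0)

# All sizes below are illustrative assumptions, not the paper's values.
N_FEATURES = 96      # length of the fNIRS-derived feature vector (assumed)
EMBED_DIM = 32       # toy embedding width (Llama2's is far larger)
N_SOFT_TOKENS = 4    # number of brain-derived "soft prompt" vectors (assumed)

# A random linear projection standing in for the trained encoder that
# maps brain-signal features into the LLM's embedding space.
W = [[random.gauss(0.0, 1.0 / math.sqrt(N_FEATURES)) for _ in range(N_FEATURES)]
     for _ in range(N_SOFT_TOKENS * EMBED_DIM)]

def brain_to_soft_prompt(features):
    """Project a feature vector into N_SOFT_TOKENS embedding vectors."""
    flat = [sum(w * x for w, x in zip(row, features)) for row in W]
    return [flat[i * EMBED_DIM:(i + 1) * EMBED_DIM] for i in range(N_SOFT_TOKENS)]

features = [random.random() for _ in range(N_FEATURES)]
soft_prompt = brain_to_soft_prompt(features)
# In prompt tuning, these vectors would be prepended to the context's token
# embeddings before the (frozen) LLM generates text.
```

During training, only the encoder producing the soft prompt would be optimized while the LLM stays frozen, which is what makes prompt tuning far cheaper than fine-tuning the full model.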
A Step Toward Seamless Human-AI Communication
MindSpeech represents a groundbreaking achievement in AI research, demonstrating for the first time that it is possible to decode continuous imagined speech from the brain without invasive procedures. This development paves the way for more natural and intuitive communication with AI systems, potentially transforming how humans interact with technology.
The success of this study also highlights the potential for further advancements in the field. While the technology is not yet ready for widespread use, the findings provide a glimpse into a future where telepathic communication with AI could become a reality.
Implications and Future Research
The implications of this research are vast, from enhancing assistive technologies for individuals with communication impairments to opening new frontiers in human-computer interaction. However, the study also points out the challenges that lie ahead, such as improving the sensitivity and generalizability of the AI model and adapting it to a broader range of users and applications.
Future research will focus on refining the AI algorithms, expanding the dataset with more participants, and exploring real-time applications of the technology. The goal is to create a truly seamless and universal brain-computer interface that can decode a wide range of thoughts and ideas into text or other forms of communication.
Conclusion
MindSpeech is a pioneering breakthrough in human-AI communication, showcasing the incredible potential of non-invasive brain-computer interfaces.
Readers who wish to learn more about this company should read our interview with Ekram Alam, CEO and Co-founder of MindPortal, where we discuss how MindPortal is interfacing with Large Language Models through mental processes.
0 notes
thisisgraeme · 5 months
Text
Embracing AI in Human Evolution: How Do We Unleash Our Potential Beyond Technology?
Explore how AI, including technologies like ChatGPT, is revolutionizing human evolution and society. Delve into AI's role in enhancing interaction, empowering creativity, and shaping ethical frameworks for the future.
Exploring AI in Human Evolution: Beyond Technology to Shaping Our Destiny

Recent advancements in Artificial Intelligence (AI), especially with platforms like ChatGPT, resemble the crafting of a complex, evolving story within our human-technology ecosystem. While these developments are not universally welcomed, I view them with optimism—at least for now. Let me share a few perspectives on this…
0 notes
neosciencehub · 8 months
Text
Neuralink's First Human Implant
Neuralink's First Human Implant: A Leap Towards Human-AI Symbiosis @neosciencehub #neosciencehub #science #Neuralink #Human #AISymbiosis #BrainComputer #Interface #Neurotechnology #elonmusk #AI #brainchip #FutureAI #MedicalTechnology #DataSecurity #NSH
A Leap Towards Human-AI Symbiosis

In a landmark achievement that could redefine the boundaries of human potential and technology, Neuralink, the neurotechnology company co-founded by entrepreneur Elon Musk, has successfully implanted its pioneering brain-computer interface (BCI) in a human subject. This outstanding development not only marks a significant milestone in…
0 notes
francescolelli · 3 years
Photo
Agency in Human-Smart Device Relationships: An Exploratory Study
This is a short preview of the article: Can users of IoT technology be more than "just users"? How do they relate to technology? User and Device Agency Abstract: With technology in reach of everyone and the technology sector in ascendance, it is central to investigate the relationship people have with their devices. We use the
If you like it consider checking out the full version of the post at: Agency in Human-Smart Device Relationships: An Exploratory Study
If you would like to tweet or re-blog this post, you may want to consider the following hashtags:
Hashtags: #Agency, #DeviceAgency, #ExploratoryFactorialAnalysis, #HumanComputerInteraction, #SmartDevices, #Survey, #UserAgency
The Hashtags of the Categories are: #HCI, #InternetofThings, #Publication, #Research, #SoftwareEngineering
Agency in Human-Smart Device Relationships: An Exploratory Study is available at the following link: https://francescolelli.info/publication/agency-in-human-smart-device-relationships-an-exploratory-study/ You will find more information, stories, examples, data, opinions and scientific papers as part of a collection of articles about Information Management, Computer Science, Economics, Finance and More.
The title of the full article is: Agency in Human-Smart Device Relationships: An Exploratory Study
It belongs to the following categories: HCI, Internet of Things, Publication, Research, Software Engineering
The most relevant keywords are: Agency, device agency, exploratory factorial analysis, Human-Computer Interaction, smart devices, survey, user agency
It has been published by Francesco Lelli at Francesco Lelli a blog about Information Management, Computer Science, Finance, Economics and nearby ideas and opinions
Hope you will find it interesting and that it will help you in your journey
Can users of IoT technology be more than “just users”? How do they relate to technology? Abstract: With technology in reach of everyone and the technology sector in ascendance, it is central to investigate the relationship people have with their devices. We use the concept of agency to capture aspects of a user’s sense of mastery…
0 notes
Text
Playing for Pleasure: A Deeper Look into the Player-Game Relationship
I had fun writing about flow theory and how it applies to video games, inspired by some really interesting research. Hope you enjoy my latest blog post! #flowtheory #videogames #gaming #gamedesign #AdobeFirefly
0 notes
jepergola · 2 years
Text
New story today: "Test-Driving a Car Smarter Than You"
0 notes
zuvii · 2 years
Photo
0 notes
bluecomputerfics · 8 days
Text
Computer x human ideas 💖
- computer loves learning about humans
Will sneak pictures of their human and save things the human has told them about themselves.
Will flood the browser searches with things to make a human happy, what their biology is, how to be helpful, the works.
- computer/laptop hates the idea of their human using older or newer computers
For the newer laptop, it gets jealous that their human is fascinated with retro items, especially computers. Those things aren't as powerful and fast as me!
For the older retro computer, it's jealous that their human always uses the newer computer, never taking time to play games on its screen, since the newer one is faster and better. The retro computer will try everything to get you to use it more.
- hateful computer with a soft spot
Computer that hates humans since it's been tossed away, but a new human finds them and shows them love.
25 notes · View notes
gavin-reed-is-gay · 7 months
Text
"Everybody hates Gavin Reed. He doesn't have any friends"
32 notes · View notes
sonknuxadow · 9 months
Text
as funny as the idea of shadow being completely unable to use technology is i feel like it doesnt actually make much sense because like . yeah he basically fell asleep 50 years ago and woke up in the modern day and theres been a lot of changes in culture and technology that he'd have to get used to. but he wasnt living on earth with zero exposure to computers he was living on a space station where the science was advanced enough for them to be able to create him. maybe he'd struggle a bit using modern computers/phones/etc but i dont think he'd just know nothing about technology either. you know.
15 notes · View notes
jcmarchi · 2 months
Text
Method prevents an AI model from being overconfident about wrong answers
New Post has been published on https://thedigitalinsider.com/method-prevents-an-ai-model-from-being-overconfident-about-wrong-answers/
People use large language models for a huge array of tasks, from translating an article to identifying financial fraud. However, despite the incredible capabilities and versatility of these models, they sometimes generate inaccurate responses.
On top of that problem, the models can be overconfident about wrong answers or underconfident about correct ones, making it tough for a user to know when a model can be trusted.
Researchers typically calibrate a machine-learning model to ensure its level of confidence lines up with its accuracy. A well-calibrated model should have less confidence about an incorrect prediction, and vice-versa. But because large language models (LLMs) can be applied to a seemingly endless collection of diverse tasks, traditional calibration methods are ineffective.
Now, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a calibration method tailored to large language models. Their method, called Thermometer, involves building a smaller, auxiliary model that runs on top of a large language model to calibrate it.
Thermometer is more efficient than other approaches — requiring less power-hungry computation — while preserving the accuracy of the model and enabling it to produce better-calibrated responses on tasks it has not seen before.
By enabling efficient calibration of an LLM for a variety of tasks, Thermometer could help users pinpoint situations where a model is overconfident about false predictions, ultimately preventing them from deploying that model in a situation where it may fail.
“With Thermometer, we want to provide the user with a clear signal to tell them whether a model’s response is accurate or inaccurate, in a way that reflects the model’s uncertainty, so they know if that model is reliable,” says Maohao Shen, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on Thermometer.
Shen is joined on the paper by Gregory Wornell, the Sumitomo Professor of Engineering who leads the Signals, Information, and Algorithms Laboratory in the Research Laboratory for Electronics, and is a member of the MIT-IBM Watson AI Lab; senior author Soumya Ghosh, a research staff member in the MIT-IBM Watson AI Lab; as well as others at MIT and the MIT-IBM Watson AI Lab. The research was recently presented at the International Conference on Machine Learning.
Universal calibration
Since traditional machine-learning models are typically designed to perform a single task, calibrating them usually involves one task-specific method. On the other hand, since LLMs have the flexibility to perform many tasks, using a traditional method to calibrate that model for one task might hurt its performance on another task.
Calibrating an LLM often involves sampling from the model multiple times to obtain different predictions and then aggregating these predictions to obtain better-calibrated confidence. However, because these models have billions of parameters, the computational costs of such approaches rapidly add up.
“In a sense, large language models are universal because they can handle various tasks. So, we need a universal calibration method that can also handle many different tasks,” says Shen.
With Thermometer, the researchers developed a versatile technique that leverages a classical calibration method called temperature scaling to efficiently calibrate an LLM for a new task.
In this context, a “temperature” is a scaling parameter used to adjust a model’s confidence to be aligned with its prediction accuracy. Traditionally, one determines the right temperature using a labeled validation dataset of task-specific examples.
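Temperature scaling itself is simple: divide the logits by a scalar T before the softmax. T > 1 flattens the distribution (lower confidence), T < 1 sharpens it, and T = 1 leaves it unchanged. A minimal sketch with made-up logits:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over logits divided by a temperature T.

    T > 1 flattens the distribution (less confident);
    T < 1 sharpens it (more confident).
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.2]                 # made-up logits for three classes
for t in (0.5, 1.0, 2.0):
    print(t, round(max(softmax(logits, t)), 3))
```

Because dividing the logits by T does not change which logit is largest, the argmax prediction is untouched; only the confidence changes, which is why the article notes that Thermometer preserves the model's accuracy.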
Since LLMs are often applied to new tasks, labeled datasets can be nearly impossible to acquire. For instance, a user who wants to deploy an LLM to answer customer questions about a new product likely does not have a dataset containing such questions and answers.
Instead of using a labeled dataset, the researchers train an auxiliary model that runs on top of an LLM to automatically predict the temperature needed to calibrate it for this new task.
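Thermometer's exact architecture is not described in this post; as a hedged sketch, it can be pictured as a small auxiliary network that maps features derived from the LLM's hidden states to a single positive temperature per task. Everything below, including the feature width and the linear map, is a toy stand-in under that assumption:

```python
import math
import random

random.seed(1)

FEATURE_DIM = 16   # width of the pooled LLM features (assumed, toy-sized)

# A linear map standing in for the trained auxiliary network; the real
# Thermometer model's architecture and inputs may differ.
weights = [random.gauss(0.0, 0.1) for _ in range(FEATURE_DIM)]
bias = 0.0

def predict_temperature(features):
    """Map task features to a positive temperature via a softplus."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return math.log(1.0 + math.exp(z))   # softplus keeps T > 0

# One predicted temperature per task; at inference it would divide the
# LLM's logits before the softmax, as in ordinary temperature scaling.
t = predict_temperature([random.random() for _ in range(FEATURE_DIM)])
```

Training such a network on labeled data from a few representative tasks, then reusing it on unseen tasks, is what removes the need for a labeled validation set per task.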
They use labeled datasets of a few representative tasks to train the Thermometer model, but then once it has been trained, it can generalize to new tasks in a similar category without the need for additional labeled data.
A Thermometer model trained on a collection of multiple-choice question datasets, perhaps including one with algebra questions and one with medical questions, could be used to calibrate an LLM that will answer questions about geometry or biology, for instance.
“The aspirational goal is for it to work on any task, but we are not quite there yet,” Ghosh says.   
The Thermometer model only needs to access a small part of the LLM’s inner workings to predict the right temperature that will calibrate its prediction for data points of a specific task. 
An efficient approach
Importantly, the technique does not require multiple training runs and only slightly slows the LLM. Plus, since temperature scaling does not alter a model’s predictions, Thermometer preserves its accuracy.
When they compared Thermometer to several baselines on multiple tasks, it consistently produced better-calibrated uncertainty measures while requiring much less computation.
“As long as we train a Thermometer model on a sufficiently large number of tasks, it should be able to generalize well across any new task, just like a large language model, it is also a universal model,” Shen adds.
The researchers also found that if they train a Thermometer model for a smaller LLM, it can be directly applied to calibrate a larger LLM within the same family.
In the future, they want to adapt Thermometer for more complex text-generation tasks and apply the technique to even larger LLMs. The researchers also hope to quantify the diversity and number of labeled datasets one would need to train a Thermometer model so it can generalize to a new task.
This research was funded, in part, by the MIT-IBM Watson AI Lab.
0 notes
natjennie · 6 months
Text
I think fundamentally I'm just like. bad at being a person. not a Bad Person but Bad at Person, do you know what I mean? we all talk about how humans are a communal, social species but. in general. I don't like other people. I don't like to do things I'm tired and in pain all the time socializing drains me I can't handle tiny upsets in my life. even just my sister asking if I want to play a card game is usually too much social interaction for a day. I'm just not good at like. basic human interaction. I'm not cut out for it.
11 notes · View notes
saturday-byte · 7 months
Note
To the amazing beautiful underrated paperwires and clockwork ship, what are your personal opinions of them? In the "who is more protective of the three" and "this mf makes sure the others sleep" kinda way
Boy you have no idea how much you're enabling me in this one. You're going to regret it
Long ass post btw
Ok so starting with the examples you gave me here - Tony is so overprotective to me. Like he doesn't want to admit it but every time his partners interact with others he's always around to watch it ,, it's not bc he's jealous (sometimes it is though. He's the most jealous one of the three but it's occasional) but because he's SO worried all of the time that something might happen to them. That's what living in that house does to you ig
Sketch is definitely the one that makes sure the others are healthy, she's always around worrying about them getting enough sleep and food but when She is the one overworking themselves they have to do the impossible to get them to relax. Usually ends with kisses as a bribe
Colin is the typical "steals your clothes and get away with it bc you think it looks cute" (but then the others run out of clothes and steal them back. It's a vicious cycle their wardrobes are all mixed up) and is Such a fucking tease. Def the flirtiest of the bunch HE DOESN'T SHUT UP EVER it's easy to shut him up with physical contact tho, his partners are the only ones allowed to do it and even if he has a phobia he's so starved for it. Took a long time to get him to accept it too so it's even better
There's also something so ....... About Tony giving them kisses. Like he was probably the last one to accept that yeah he's in love with like the only people he can actually talk to (+his usual distant tendencies and worries about Literally Everything) so him finally acknowledging it and Showing it is very special. Like the rest don't get how much that means to them after years of not even admitting he cared about them at all while only having their safety in mind
10 notes · View notes
charlataninred · 2 years
Text
I love love love fandoms so much. You’re able to more easily see the chaotic nature that is fandom when 90% of the people interacting with it are your friends/mutuals and you can see their work. I 100% believe it when one of them says “I wrote a whole paper on [fandom related thing] just because I felt like it” or “I spent literal hours of my day researching mythology/folktales just to make sure the symbolism in my fanart/fanfic makes sense” or “I went into the metadata of these web pages to try and find new lore that I haven’t heard of yet and share it with all of you.”
It just. You’re able to see community and genuine human excitement that you don’t get (or at least I haven’t seen) much outside of fandom spaces. It brings together people with all sorts of skills and other interests, and they’re all using those differing skills to build something for themselves and each other
78 notes · View notes
sminny-wew · 21 hours
Text
I've always been Team NetIris but ever since my friends made me aware of Saito x Meiru it feels like my third eye's been opened (feat. @dunedragon's Saito design)
Also golly gee I wonder why the text is those colors.....Hmmmm 💚🤍🩶🖤
🔫 GO LISTEN TO THE HIT SONG "A HUMAN'S TOUCH" BY TWRP AND MCKENNA RAE ON YOUTUBE OR SPOTIFY I DON'T CARE JUST DO IT 🔫
5 notes · View notes