#University of Michigan AI study
Text
5 things about AI you may have missed today: OpenAI's CLIP is biased, AI reunites family after 25 years, more
Study finds OpenAI’s CLIP is biased in favour of wealth and underrepresents poor nations; retail giants harness AI to cut online clothing returns and enhance the customer experience; Northwell Health implements an AI-driven device for rapid seizure detection; White House concerns grow over the UAE’s rising influence in the global AI race: this and more in our daily roundup. Let us take a look. 1. Study…
View On WordPress
#ai#AI in healthcare#AI representation accuracy#AI Seizure detection technology#Beijing DeepGlint technology#Ceribell medical device#DALL-E image generator#DeepGlint algorithm#G42 AI company#global AI race#HT tech#MySizeID#online clothing returns#OpenAI#openai CLIP#tech news#UAE AI advancements#University of Michigan AI study#Walmart AI initiatives#White House AI concerns
0 notes
Text
Twitter will remove nonconsensual nude images within hours as long as that media is reported for having violated someone’s copyright. If the same content is reported only as nonconsensual intimate media, Twitter may not remove it for weeks, and might never remove it at all, according to a pre-print study from researchers at the University of Michigan. The paper, which has yet to be peer reviewed, argues that the gap between how quickly Twitter responds to copyright violation claims and how readily it ignores reports of deepfake porn highlights the need for better legislation on nonconsensual content, legislation that would force social media companies to respond to those reports.
i get that moderation can be difficult and expensive but there are reasons I have doubts when the social media companies claim there is nothing they can do.
177 notes
Text
On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a “confabulation” or “hallucination” in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached “human level robustness” in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
The fabrications pose particular risks in health care settings. Despite OpenAI’s warnings against using Whisper for “high-risk domains,” over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children’s Hospital Los Angeles are among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.
Nabla acknowledges that Whisper can confabulate, but it also reportedly erases original audio recordings “for data safety reasons.” This could cause additional issues, since doctors cannot verify accuracy against the source material. And deaf patients may be highly impacted by mistaken transcripts since they would have no way to know if medical transcript audio is accurate or not.
The potential problems with Whisper extend beyond health care. Researchers from Cornell University and the University of Virginia studied thousands of audio samples and found Whisper adding nonexistent violent content and racial commentary to neutral speech. They found that 1 percent of samples included “entire hallucinated phrases or sentences which did not exist in any form in the underlying audio” and that 38 percent of those included “explicit harms such as perpetuating violence, making up inaccurate associations, or implying false authority.”
In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”
An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.
Why Whisper Confabulates
The key to Whisper’s unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, "Researchers aren’t certain why Whisper and similar tools hallucinate," but that isn't true. We know exactly why Transformer-based AI models like Whisper behave this way.
Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.
The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn't enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it “knows” about the relationships between sounds and words it has learned from its training data.
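To make that concrete, here is a minimal sketch of invoking Whisper through the open-source openai-whisper package (the filename is a placeholder). Note that the API returns a single most-likely transcription; nothing in the output marks which spans were confident recognition and which were fallback guesses:

```python
# A minimal sketch, assuming the open-source "openai-whisper" package
# (pip install openai-whisper); "clinic_visit.wav" is a placeholder file.
import whisper

model = whisper.load_model("base")            # small checkpoint for the demo
result = model.transcribe("clinic_visit.wav")

# The returned dict contains the full predicted text plus per-segment data.
# Every word here is a prediction of what is most *likely*, not a verified
# record of what was actually said.
print(result["text"])
```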
According to OpenAI in 2022, Whisper learned those statistical relationships from “680,000 hours of multilingual and multitask supervised data collected from the web.” But we now know a little more about the source. Given Whisper's well-known tendency to produce certain outputs like "thank you for watching," "like and subscribe," or "drop a comment in the section below" when provided silent or garbled inputs, it's likely that OpenAI trained Whisper on thousands of hours of captioned audio scraped from YouTube videos. (The researchers needed audio paired with existing captions to train the model.)
There's also a phenomenon called “overfitting” in AI models where information (in this case, text found in audio transcriptions) encountered more frequently in the training data is more likely to be reproduced in an output. In cases where Whisper encounters poor-quality audio in medical notes, the AI model will produce what its neural network predicts is the most likely output, even if it is incorrect. And the most likely output for any given YouTube video, since so many people say it, is “thanks for watching.”
In other cases, Whisper seems to draw on the context of the conversation to fill in what should come next, which can lead to problems because its training data could include racist commentary or inaccurate medical information. For example, if many examples of training data featured speakers saying the phrase “crimes by Black criminals,” when Whisper encounters a “crimes by [garbled audio] criminals” audio sample, it will be more likely to fill in the transcription with “Black."
In the original Whisper model card, OpenAI researchers wrote about this very phenomenon: "Because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself."
So in that sense, Whisper "knows" something about the content of what is being said and keeps track of the context of the conversation, which can lead to issues like the one where Whisper identified two women as being Black even though that information was not contained in the original audio. Theoretically, this erroneous scenario could be reduced by using a second AI model trained to pick out areas of confusing audio where the Whisper model is likely to confabulate and flag the transcript in that location, so a human could manually check those instances for accuracy later.
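You do not even need a second model to get a crude version of that flagging today: Whisper already reports per-segment statistics (average token log-probability, no-speech probability, compression ratio) that correlate with the conditions under which it confabulates. A hedged sketch, with illustrative rather than validated thresholds:

```python
# Sketch: flag likely-confabulated segments for human review.
# Thresholds are illustrative; Whisper uses similar values internally
# when deciding whether to retry a decode.
import whisper

model = whisper.load_model("base")
result = model.transcribe("meeting.wav")  # placeholder filename

for seg in result["segments"]:
    suspicious = (
        seg["avg_logprob"] < -1.0          # model unsure of its own tokens
        or seg["no_speech_prob"] > 0.5     # possibly silence or garbled audio
        or seg["compression_ratio"] > 2.4  # repetitive text, a hallucination tell
    )
    if suspicious:
        print(f"[{seg['start']:7.1f}s] REVIEW: {seg['text']}")
```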
Clearly, OpenAI's advice not to use Whisper in high-risk domains, such as critical medical records, was a good one. But health care companies are constantly driven by a need to decrease costs by using seemingly "good enough" AI tools—as we've seen with Epic Systems using GPT-4 for medical records and UnitedHealth using a flawed AI model for insurance decisions. It's entirely possible that people are already suffering negative outcomes due to AI mistakes, and fixing them will likely involve some sort of regulation and certification of AI tools used in the medical field.
87 notes
Text
AI is deciphering a 2,000-year-old 'lost book' describing life after Alexander the Great
A 2,000-year-old "lost book" discussing the dynasties that succeeded Alexander the Great may finally be deciphered nearly two millennia after the text was partially destroyed in the eruption of Mount Vesuvius in A.D. 79 and, centuries later, handed off to Napoleon Bonaparte.
The reason for the breakthrough? Researchers are using machine learning, a branch of artificial intelligence, to discern the faint ink on the rolled-up papyrus scroll.
"It's probably a lost work," Richard Janko, the Gerald F. Else distinguished university professor of classical studies at the University of Michigan, said during a presentation at the joint annual meeting of the Archaeological Institute of America and the Society for Classical Studies, held in New Orleans last month. The research is not yet published in a peer-reviewed journal.
Only small parts of the heavily damaged text can be read right now. "It contains the names of a number of Macedonian dynasts and generals of Alexander," Janko said, noting that it also includes "several mentions of Alexander himself." Read more.
414 notes
Note
AC6 College AU? Rusty plays the lacrosse, Raven is either a programmer or an engineer major, Ayre is an AI made by him, and Freud is one of the faculty members.
i've actually thought about a college AU!
Well, technically, a university AU bc I'm from the UK and college is a diff thing entirely to uni. Also I have no idea what American uni culture is like LMAO (idek what lacrosse is, rusty would be good at football or rugby tho) BUT ANYWAYS my idea for it was:
RUSTY: an undergraduate taking the BSc (Hons) in Ecology and Conservation (not sure if it's different outside of UK, but an "honours degree" is more difficult than a standard degree, and is more attractive to employers as a result). He's got an avid interest in ecology and zoology, and has plans to be a conservationist upon graduating.
621: He's actually a professor at the university, but looks so young that most mistake him for a student if they're not in his department. He teaches ethical hacking and cyber security, and has several rumours swirling around him, such as that he used to be a notorious hacker who was eventually caught and strongarmed into working for the government in lieu of a prison sentence, and now spends his time teaching the next generation, etc, etc. Is it true? Who knows...
WALTER: He's the university librarian, and everyone is scared of him because he's so stern and always has this aura of intensity, even when just checking a book out for someone. For some reason he's on very good terms with 621... many people theorise on their history, because Walter's also very good with computers... everyone is also aware that Walter and Michigan are a thing bc those guys ain't subtle in the slightest.
FREUD: He teaches sport science and students either love him or hate him. He's like the human personification of marmite. He's very enthusiastic about health and remaining in peak condition, and he expects his students to give 110% in his classes. He's always butting heads with Snail, who's in the same department as him. People take bets on how long it'll take for them to fight in the parking lot and who would win (Freud has insane strength but Snail would be powered by sheer rage that has been repressed for x amount of years).
IGUAZU: He works in the on-campus cafe as a barista. He's surly and curt but makes the best damn coffee in the city so no one really complains about it. He seems to know 621 and has some kind of hate-love relationship with him? It's complicated. He yells at 621 every time he walks into the cafe but also knows his order off by heart so it's very hmmm (more fuel for the rumour mills).
MICHIGAN: He teaches War Studies and History, and while he's a pretty demanding professor, most students love his energetic style of teaching. Many assume him to be a red and blue blooded American on account of his bombastic personality, American accent and insisting on being called Michigan - he's actually French and the estranged son of a well-known billionaire. It's Michigan's deepest darkest secret.
AYRE: Every so often 621 will have a guest speaker in his classes who calls in remotely (and voice only) called Ayre. He says she's an "old work colleague" and they're both very vague about what work they were colleagues in, but people enjoy Ayre's guest appearances as she's very friendly - in direct contrast to the very taciturn and almost cold 621.
O'KEEFFE: He works in HR, but most students joke that he's more of an information broker than anything. Though he acts put upon, he's willing to help out students trying to navigate the byzantine bureaucracy of university paperwork, showing them how to squeeze as much as possible out of their loans or signposting them to things that can help. He's easily bribed with coffee and cigarettes, but honestly, he'd help out even without a bribe... very much one of those people who look gruff and unfriendly on the outside, but actually a good person underneath it all.
FLATWELL: He owns a bakery just outside of university grounds that's popular with the students. One of the reasons Rusty chose this university to do his degree: his Uncle lives just outside of it, and was willing to house him, letting Rusty save money on dormitory and food and stuff. Seems to have some ~history~ with O'Keeffe in HR...
UH THOSE ARE THE MAIN CHARACTERS/ROLES and the plot would be Rusty crossing paths with 621 and thinking him very cute (621 would be the type to dress in cardigans and wear glasses), and also a fellow student... is totally unaware that he's a faculty member from an entirely different department. Tl;dr after some flirting and a few dates, Rusty only realises that 621 is a faculty member when he mentions off hand about needing to go back early to mark papers.
Rusty: oh you're... a TA? helping out in your last year?
621: no i teach
Rusty:
Rusty: wait how old are you-
In this I'm thinking Rusty would be in his mid-twenties, and 621 would be in his late thirties. So, about a 10 year gap between them, give or take a year.
But yeah. Coughs. That's.... that's my university au idea... mmhm. yeah...
#armored core#armored core 6#all i do at work is think of random aus tbh#if i had access to my computer at work i'd probably write even more fic than i do now...#lies down#sighs#armored core......
16 notes
Text
AI chips could get a sense of time with memristor that can be tuned
Artificial neural networks may soon be able to process time-dependent information, such as audio and video data, more efficiently. The first memristor with a "relaxation time" that can be tuned is reported today in Nature Electronics, in a study led by the University of Michigan.

Memristors, electrical components that store information in their electrical resistance, could reduce AI's energy needs by about a factor of 90 compared to today's graphical processing units. Already, AI is projected to account for about half a percent of the world's total electricity consumption in 2027, and that has the potential to balloon as more companies sell and use AI tools.

"Right now, there's a lot of interest in AI, but to process bigger and more interesting data, the approach is to increase the network size. That's not very efficient," said Wei Lu, the James R. Mellor Professor of Engineering at U-M and co-corresponding author of the study with John Heron, U-M associate professor of materials science and engineering.
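To see why a tunable relaxation time matters, consider a toy model (ours, not the paper's device physics): treat the memristor as a leaky memory whose state decays with time constant tau. A short tau forgets quickly and tracks fast signals like audio; a long tau integrates slowly varying ones like video scenes.

```python
# A toy leaky-memory model of a memristor's "relaxation time" (illustrative
# only; the actual device in the Nature Electronics study is far richer).
import numpy as np

def memristor_trace(inputs, tau, dt=1.0):
    """State decays toward zero with time constant tau, bumped by each input."""
    state, trace = 0.0, []
    for x in inputs:
        state = state * np.exp(-dt / tau) + x
        trace.append(state)
    return np.array(trace)

signal = np.sin(np.linspace(0, 4 * np.pi, 200))
fast = memristor_trace(signal, tau=2.0)   # short relaxation: follows the signal
slow = memristor_trace(signal, tau=50.0)  # long relaxation: smooths/integrates it
```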
Read more.
#Materials Science#Science#Memristors#Electronics#Artificial intelligence#Computational materials science#Oxides#University of Michigan
7 notes
Text
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.
Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.
More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”
The full extent of the problem is difficult to discern, but researchers and engineers said they frequently have come across Whisper’s hallucinations in their work. A University of Michigan researcher conducting a study of public meetings, for example, said he found hallucinations in eight out of every 10 audio transcriptions he inspected, before he started trying to improve the model.
A machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper.

The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined.
That trend would lead to tens of thousands of faulty transcriptions over millions of recordings, researchers said.
------
Yes, that does seem like a problem. If only there were a solution, such as, say, NOT USING IT!
2 notes
Text
Robotics Courses in the USA
The USA is a great place to pursue a career in robotics because it is at the forefront of technical breakthroughs. Robotics is a multidisciplinary field that combines mechanical engineering, electrical engineering, computer science, and artificial intelligence. Pursuing robotics courses in the USA can open up cutting-edge opportunities in automation, artificial intelligence, and machine learning, regardless of your level of experience.
Why Study Robotics in the USA?
Top-Rated Universities
Some of the best universities in the world, with cutting-edge facilities, expert faculty, and strong industry connections, are located in the USA. Robotics programs at institutions like Carnegie Mellon University, Stanford, and MIT are renowned worldwide.
Access to Prominent Businesses
Leading tech firms, including Google, Amazon Robotics, Boston Dynamics, and Tesla, are based in the USA. Numerous universities collaborate with these companies, providing students with direct industry exposure, research opportunities, and internships.
An Environment Driven by Innovation
Robotics students in the USA can engage in innovative research projects and collaborate with talent from around the world in an environment that prizes creativity. This setting encourages innovation and hands-on learning.
Top Robotics Courses in the USA
Master of Science in Robotics - Carnegie Mellon University
Bachelor of Science in Robotics Engineering - Worcester Polytechnic Institute (WPI)
Master of Engineering in Robotics - University of Michigan
Ph.D. in Robotics - Georgia Institute of Technology
Core Topics Covered in Robotics Courses
Artificial Intelligence and Machine Learning
Find out how AI can be used to give robots human-like intelligence.
Kinematics and Mechanics
Understand the motion and physical structure of robots (see the short sketch after this list).
Embedded Systems
Explore the sensors and microcontrollers that power robots.
Programming Languages and Tools
Learn tools and languages such as ROS (the Robot Operating System, a framework rather than a language), C++, and Python.
Human-Robot Interaction
Examine the ways that people and robots work together in practical settings.
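As promised above, here is a minimal Python sketch of the kinematics topic: forward kinematics for a planar two-link arm, mapping joint angles to the end-effector position (the link lengths are arbitrary illustrative values):

```python
# Forward kinematics of a planar two-link arm, a standard intro exercise.
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """Joint angles in radians -> end-effector (x, y) position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(forward_kinematics(math.radians(30), math.radians(45)))
```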
Enrolling in a robotics course in the United States can lead to a successful career in one of the most exciting areas of technology. The combination of industry exposure, academic quality, and innovative learning environments helps students graduate as leaders in the robotics industry.
Whether you're creating humanoid robots or transforming industries with AI-driven automation, the USA is the place to realize your robotics ambitions.
To know more, click here.
0 notes
Text
Up to 30% of the power used to train AI is wasted: Here's how to fix it
A less wasteful way to train large language models, such as the GPT series, finishes in the same amount of time using up to 30% less energy, according to a study. Read the full story from the University of Michigan.
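The study's method isn't spelled out here, but one generic energy knob it relates to is GPU power capping: GPUs that are not on a training job's critical path can run slower without delaying the whole job. A hedged sketch using NVIDIA's NVML bindings follows; the 70% cap is illustrative, setting limits typically requires admin rights, and this is a related technique, not the paper's actual algorithm.

```python
# Sketch: cap one GPU's power draw via NVML (pip install nvidia-ml-py).
# Illustrative only; NOT the study's method, and requires admin privileges.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Allowed power-limit range for this GPU, in milliwatts.
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

target_mw = max(int(max_mw * 0.7), min_mw)  # hypothetical 70% cap
pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
print(f"Power limit set to {target_mw / 1000:.0f} W")

pynvml.nvmlShutdown()
```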
0 notes
Text
AI Simplifies Science Communication, Enhancing Understanding and Trust
The concept of utilizing Artificial Intelligence (AI) to simplify science communication and restore public trust in science is an active area of research. The complexity of scientific language often poses significant barriers to public understanding, necessitating more accessible communication methods. Recent studies highlight AI’s innovative role in bridging the gap between scientific discourse and public comprehension.
Simplifying Scientific Content with AI
AI is proving to be a powerful tool for producing simplified scientific summaries that enhance public comprehension. Research led by David Markowitz, an associate professor of communication at Michigan State University, demonstrates the ability of AI to generate clear, concise summaries of scientific articles. The study utilizes the advanced language model GPT-4 by OpenAI to distill complex scientific content into understandable formats.

The Role of AI in Generating Summaries

In a striking demonstration of AI's capabilities, Markowitz's research reveals that AI-generated summaries significantly outperform traditional human-written content in terms of readability. These AI summaries use simpler language and more common terms, making it easier for the general public to grasp the core ideas of scientific studies.
Improving Public Comprehension

Empirical evidence shows that participants exposed to AI-generated summaries were more likely to provide accurate and detailed recaps of the content than those who read human-authored summaries. This gain in understanding underlines AI's potential to foster better public engagement with science by demystifying complex topics.
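As a rough illustration of what such a pipeline might look like (this is a guess at the setup, not Markowitz's actual protocol, and the prompt wording is our own), a summary-simplification call with OpenAI's Python SDK could be as simple as:

```python
# A minimal sketch, assuming the official "openai" Python SDK and an
# OPENAI_API_KEY in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()
abstract = "..."  # paste a scientific abstract here

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Rewrite this scientific abstract in plain language "
                    "a general reader can understand, in under 100 words."},
        {"role": "user", "content": abstract},
    ],
)
print(response.choices[0].message.content)
```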
Impact of Simplified Science Communication on Public Trust
The way scientific information is communicated can have profound implications for how the public perceives scientists and their work. Markowitz’s experiments indicate a direct correlation between simplified science communication through AI and the enhanced credibility of scientists in the eyes of the public.

Credibility and Trustworthiness of Scientists

Studies reveal that scientists whose research is described using simpler terminology are viewed as more credible than those whose work is communicated through complex jargon. As clarity increases, so does public trust, suggesting that effective communication strategies can promote a more favorable outlook towards scientific endeavors.

Encouraging Engagement in Scientific Issues

Improved science communication has the potential to drive greater public engagement with scientific issues. By making science more relatable and accessible, individuals may feel more empowered to discuss scientific topics, participate in dialogues, and support evidence-based policies.
Critical Engagement and Evaluation of AI in Science Communication
As AI's role in science communication expands, it is crucial to critically evaluate its capabilities and address potential biases in the information it disseminates. A comprehensive framework for assessing AI-generated content focuses on the reliability of data sources, recency, and adherence to scientific rigor.

Framework for Evaluating AI Technologies

Implementing a structured framework involves evaluating the underlying data from which AI derives its summaries. Considering the reliability of these data sources is integral to ensuring that the public receives accurate representations of scientific knowledge.

The Role of Anthropomorphism in AI Interaction

How users perceive AI can greatly influence their trust levels and the overall effectiveness of communication. Understanding the anthropomorphism of AI (how people interact with and attribute human-like qualities to AI technologies) is fundamental, especially for users with varying levels of digital literacy.
Addressing Misinformation and Ethical Considerations
While AI has the potential to simplify scientific communication, it also introduces challenges surrounding misinformation and oversimplification. Balancing accessible communication with the complexity inherent in scientific discourse is crucial to avoid misinterpretations.

The Importance of Transparency in AI-Generated Content

Transparency is essential in AI-generated communications. It is vital for AI systems to clarify the origins of their summaries to mitigate risks of bias and misunderstanding, ensuring that consumers of this information can trust its validity.

Establishing Norms for AI in Academic Publishing

The academic publishing industry is currently navigating the integration of AI into its processes. There is a pressing need to establish norms that scrutinize the use of AI in producing and presenting scientific content to safeguard the integrity of academic communication.
AI Applications in Public Health Communication
AI technology is also making strides in addressing public health messaging and risk perception. Research into how AI can 'Extract Insights from Digital Public Health Data' illustrates how AI can analyze massive datasets and provide meaningful insights that inform public health policies.

Leveraging AI for Analyzing Public Health Data

AI's ability to process and analyze large volumes of data holds considerable promise for enhancing the effectiveness of public health communications. Studies show that AI can identify trends, assess risks, and support the decision-making processes of public health officials.

Strategies for Effective Health Communication

Integrating AI into public health communication strategies can streamline the dissemination of information, ensuring transparency and combating misinformation. Adopting best practices in health communication can help in building public trust and improving health outcomes.
Building Trust and Engagement in Science Communication
Various initiatives and workshops aimed at enhancing science communication through AI are emerging, focusing specifically on building trust between scientists and the public. Events emphasize the importance of dialogue, inclusivity, and accessibility in scientific discourse.

Innovative Events and Symposiums

Conferences like the "Civic Science & Ethics in the Age of AI: Building Trust" symposium at the University of Notre Dame aim to develop innovative strategies for effective science communication. These events foster productive exchanges between scientists and the public, highlighting the essential role of trust in science communication.

Government and Organizational Efforts in AI Innovation

Organizations such as the National Institute of Standards and Technology (NIST) are spearheading efforts to promote responsible AI development. The NIST AI Innovation Lab works on establishing standards and guidelines that contribute to building trust in the use of AI across various sectors of society.
The Future of AI in Science Communication
The integration of AI into science communication presents an exciting opportunity to simplify complex scientific information and enhance public understanding. However, ethical considerations, transparency, and the preservation of scientific nuance must remain paramount as the technology continues to evolve. As we look forward, the role of AI in science communication is expected to expand, ultimately contributing to a more informed public and deeper engagement with scientific issues. For more similar news, I invite you to visit my blog at FROZENLEAVES NEWS.

Read the full article
0 notes
Text
In 10 Seconds, AI Model Detects Brain Tumor
An innovative AI-powered model, FastGlioma, is transforming neurosurgery by accurately detecting residual cancerous brain tumor tissue within ten seconds. Researchers at the University of Michigan and the University of California, San Francisco developed this model. This breakthrough technology outperforms traditional methods, significantly improving patient outcomes. Published in Nature, the study…
0 notes
Text
🤯 Ever felt like you wanted to learn something new but the cost of courses was a barrier? 🤔
Well, guess what? Coursera just threw open the doors to a treasure trove of free courses from top-tier universities like Stanford and Yale! 🎉
Imagine learning: 🧠
Machine Learning from Stanford! 🤖
The Science of Well-Being from Yale! 😌
Python Programming from the University of Michigan! 💻
And that's just scratching the surface! 🤩 There's a course for every interest, from tech to personal development.
While free is awesome, there are some things to remember:
No certificates unless you pay. 💰
Some interactive features are limited. 😔
Content quality can vary. 😕
But don't let that discourage you! 💪 Here's how to make the most of Coursera's free courses:
Set a study schedule. 🗓️
Take notes and practice. ✍️
Connect with other learners in forums. 💬
Apply your knowledge in real-world situations. 🌎
Pro Tip: 🏆 Consider combining courses with your existing knowledge, building a personalized learning path, and taking advantage of other resources like coding platforms and tutorial videos.
In the end, it's about gaining valuable knowledge and skills, not just certificates. 📚 Coursera's free courses are a game-changer, making learning accessible and flexible for anyone who wants to level up! 🚀
Additional Resources:
For more insights on online learning and course creation, check out Zam Man Optimagain's Course Creation Masterclass. Learn how to leverage AI tools to develop mini-courses and monetize your expertise efficiently.
0 notes
Text
The most extensive collection of transcription factor binding data in human tissues ever compiled
- By InnoNurse Staff -
Transcription factors (TFs) are proteins that bind to specific DNA sequences to regulate the transcription of genetic information from DNA to mRNA, impacting gene expression and various biological processes, including brain functions. While TFs have been studied extensively, their binding dynamics in human tissues are not well understood.
Researchers from the HudsonAlpha Institute for Biotechnology, University of California-Irvine, and University of Michigan compiled the largest TF binding dataset to date, aiming to understand how TFs contribute to gene expression and brain function. This dataset could reveal how gene regulation impacts neurodegenerative and psychiatric disorders.
The study, led by Dr. Richard Myers, utilized an innovative technique called ChIP-seq to capture and sequence DNA fragments bound by TFs. Experiments were conducted on different brain regions from postmortem tissues donated by individuals, allowing the researchers to map TF activity in the genome.
Findings suggest that regions bound by fewer TFs might be crucial, as minor changes there could significantly impact nearby genes. The dataset can help scientists study TFs, gene regulation, and their roles in specific brain functions and diseases, potentially aiding in the development of new therapies.
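As a toy illustration of the kind of analysis such a dataset enables (the table and column names below are hypothetical, not the study's actual schema), counting how many distinct TFs bind each genomic region is a one-liner with pandas, and the low-count regions are exactly the ones the study flags as potentially sensitive:

```python
# Hypothetical peak table: one row per (region, TF) binding event, as might
# be derived from ChIP-seq peak calls. Column names are made up.
import pandas as pd

peaks = pd.DataFrame({
    "region": ["chr1:100-600", "chr1:100-600", "chr1:900-1400", "chr2:50-550"],
    "tf":     ["CTCF",         "REST",         "CTCF",          "NRF1"],
})

# Regions bound by fewer distinct TFs may be the most sensitive to change.
tf_counts = peaks.groupby("region")["tf"].nunique().sort_values()
print(tf_counts)
```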
Image credit: Loupe et al. (Nature Neuroscience, 2024).
Read more at Medical Xpress
///
Other recent news and insights
BU researchers develop a new AI program to predict the likelihood of Alzheimer's disease (Boston University)
Northeastern researchers create an AI system for breast cancer diagnosis with nearly 100% accuracy (Northeastern University)
Wearable sensors and AI aim to revolutionize balance assessment (Florida Atlantic University)
#transcription factors#data science#medtech#health informatics#health tech#dna#mrna#genetics#genomics#neuroscience#brain#ai#alzheimers#cancer#oncology#breast cancer#diagnostics#wearables#sensors#balance#aging
0 notes
Text
Can AI translate your dog's bark? New research says yes
New Post has been published on https://petn.ws/qDHAy
Can AI translate your dog's bark? New research says yes
Imagine if you could understand what your dog is trying to tell you with every bark, whine, or growl. This intriguing possibility is the focus of a recent study by researchers at the University of Michigan, in collaboration with the National Institute of Astrophysics, Optics and Electronics in Puebla, Mexico. The researchers are exploring how […]
See full article at https://petn.ws/qDHAy #DogNews
0 notes