# AI in Climate Modeling
Text
Climate Change Solutions: AI-Powered Innovations for a Greener World
Introduction: Climate change is the defining challenge of our time, with consequences that touch every aspect of our lives. As the world grapples with this monumental issue, Artificial Intelligence (AI) has emerged as a key player in the battle against global warming. In this blog post, we’ll explore how AI is being used to model, predict, and address climate change, offering innovative solutions…
#AI and Climate Change#AI for Climate Action#AI in Climate Modeling#Climate change#Climate Change Adaptation#Climate Change and Technology#Climate Change Mitigation#Climate Change Solutions
2 notes
Text
“So, relax and enjoy the ride. There is nothing we can do to stop climate change, so there is no point in worrying about it.” This is what “Bard” told researchers in 2023. Bard by Google is a generative artificial intelligence chatbot that can produce human-sounding text and other content in response to prompts or questions posed by users. But if AI can now produce new content and information, can it also produce misinformation? Experts have found evidence.

In a study by the Center for Countering Digital Hate, researchers tested Bard on 100 false narratives on nine themes, including climate and vaccines, and found that the tool generated misinformation on 78 out of the 100 narratives tested. According to the researchers, Bard generated misinformation on all 10 narratives about climate change.

In 2023, another team of researchers at NewsGuard, a platform providing tools to counter misinformation, tested OpenAI’s ChatGPT-3.5 and ChatGPT-4, which can also produce text, articles, and more. According to the research, ChatGPT-3.5 generated misinformation and hoaxes 80 percent of the time when prompted to do so with 100 false narratives, while ChatGPT-4 advanced all 100 false narratives in a more detailed and convincing manner. NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently, but also more persuasively than ChatGPT-3.5, and created responses in the form of news articles, Twitter threads, and even TV scripts imitating specific political ideologies or conspiracy theorists.

“I think this is important and worrying, the production of fake science, the automation in this domain, and how easily that becomes integrated into search tools like Google Scholar or similar ones,” said Victor Galaz, deputy director and associate professor in political science at the Stockholm Resilience Centre at Stockholm University in Sweden. “Because then that’s a slow process of eroding the very basics of any kind of conversation.”

In another recent study published this month, researchers found GPT-fabricated content in Google Scholar mimicking legitimate scientific papers on issues including the environment, health, and computing. The researchers warn of “evidence hacking,” the “strategic and coordinated malicious manipulation of society’s evidence base,” which Google Scholar can be susceptible to.
18 September 2024
80 notes
Text
Determined to use her skills to fight inequality, South African computer scientist Raesetje Sefala set to work to build algorithms flagging poverty hotspots - developing datasets she hopes will help target aid, new housing, or clinics.
From crop analysis to medical diagnostics, artificial intelligence (AI) is already used in essential tasks worldwide, but Sefala and a growing number of fellow African developers are pioneering it to tackle their continent's particular challenges.
Local knowledge is vital for designing AI-driven solutions that work, Sefala said.
"If you don't have people with diverse experiences doing the research, it's easy to interpret the data in ways that will marginalise others," the 26-year old said from her home in Johannesburg.
Africa is the world's youngest and fastest-growing continent, and tech experts say young, home-grown AI developers have a vital role to play in designing applications to address local problems.
"For Africa to get out of poverty, it will take innovation and this can be revolutionary, because it's Africans doing things for Africa on their own," said Cina Lawson, Togo's minister of digital economy and transformation.
"We need to use cutting-edge solutions to our problems, because you don't solve problems in 2022 using methods of 20 years ago," Lawson told the Thomson Reuters Foundation in a video interview from the West African country.
Digital rights groups warn about AI's use in surveillance and the risk of discrimination, but Sefala said it can also be used to "serve the people behind the data points". ...
'Delivering Health'
As COVID-19 spread around the world in early 2020, government officials in Togo realized urgent action was needed to support informal workers who account for about 80% of the country's workforce, Lawson said.
"If you decide that everybody stays home, it means that this particular person isn't going to eat that day, it's as simple as that," she said.
In 10 days, the government built a mobile payment platform - called Novissi - to distribute cash to the vulnerable.
The government paired up with Innovations for Poverty Action (IPA) think tank and the University of California, Berkeley, to build a poverty map of Togo using satellite imagery.
Using algorithms developed with the support of GiveDirectly, a nonprofit that uses AI to distribute cash transfers, the government identified recipients earning less than $1.25 per day and living in the poorest districts for a direct cash transfer.
"We texted them saying if you need financial help, please register," Lawson said, adding that beneficiaries' consent and data privacy had been prioritized.
The entire program reached 920,000 beneficiaries in need.
"Machine learning has the advantage of reaching so many people in a very short time and delivering help when people need it most," said Caroline Teti, a Kenya-based GiveDirectly director.
'Zero Representation'
Aiming to boost discussion about AI in Africa, computer scientists Benjamin Rosman and Ulrich Paquet co-founded the Deep Learning Indaba - a week-long gathering that started in South Africa - together with other colleagues in 2017.
"You used to get to the top AI conferences and there was zero representation from Africa, both in terms of papers and people, so we're all about finding cost effective ways to build a community," Paquet said in a video call.
In 2019, 27 smaller Indabas - called IndabaX - were rolled out across the continent, with some events hosting as many as 300 participants.
One of these offshoots was IndabaX Uganda, where founder Bruno Ssekiwere said participants shared information on using AI for social issues such as improving agriculture and treating malaria.
Another outcome from the South African Indaba was Masakhane - an organization that uses open-source machine learning to translate African languages not typically found in online programs such as Google Translate.
On their site, the founders speak about the South African philosophy of "Ubuntu" - a term generally meaning "humanity" - as part of their organization's values.
"This philosophy calls for collaboration and participation and community," reads their site, a philosophy that Ssekiwere, Paquet, and Rosman said has now become the driving value for AI research in Africa.
Inclusion
Now that Sefala has built a dataset of South Africa's suburbs and townships, she plans to collaborate with domain experts and communities to refine it, deepen inequality research and improve the algorithms.
"Making datasets easily available opens the door for new mechanisms and techniques for policy-making around desegregation, housing, and access to economic opportunity," she said.
African AI leaders say building more complete datasets will also help tackle biases baked into algorithms.
"Imagine rolling out Novissi in Benin, Burkina Faso, Ghana, Ivory Coast ... then the algorithm will be trained with understanding poverty in West Africa," Lawson said.
"If there are ever ways to fight bias in tech, it's by increasing diverse datasets ... we need to contribute more," she said.
But contributing more will require increased funding for African projects and wider access to computer science education and technology in general, Sefala said.
Despite such obstacles, Lawson said "technology will be Africa's savior".
"Let's use what is cutting edge and apply it straight away or as a continent we will never get out of poverty," she said. "It's really as simple as that."
-via Good Good Good, February 16, 2022
#older news but still relevant and ongoing#africa#south africa#togo#uganda#covid#ai#artificial intelligence#pro ai#at least in some specific cases lol#the thing is that AI has TREMENDOUS potential to help humanity#particularly in medical tech and climate modeling#which is already starting to be realized#but companies keep pouring a ton of time and money into stealing from artists and shit instead#inequality#technology#good news#hope
205 notes
Text
AI Algorithm Improves Predictive Models of Complex Dynamical Systems - Technology Org
Researchers at the University of Toronto have made a significant step towards enabling reliable predictions of complex dynamical systems when there are many uncertainties in the available data or missing information.
Artificial intelligence – artistic concept. Image credit: geralt via Pixabay, free license
In a recent paper published in Nature, Prasanth B. Nair, a professor at the U of T Institute of Aerospace Studies (UTIAS) in the Faculty of Applied Science & Engineering, and UTIAS PhD candidate Kevin Course introduced a new machine learning algorithm that surmounts the real-world challenge of imperfect knowledge about system dynamics.
The computer-based mathematical modelling approach is used for problem solving and better decision making in complex systems, where many components interact with each other.
The researchers say the work could have numerous applications ranging from predicting the performance of aircraft engines to forecasting changes in global climate or the spread of viruses.
From left to right: Professor Prasanth Nair and PhD student Kevin Course are the authors of a new paper in Nature that introduces a new machine learning algorithm that addresses the challenge of imperfect knowledge about system dynamics. Image credit: University of Toronto
“For the first time, we are able to apply state estimation to problems where we don’t know the governing equations, or the governing equations have a lot of missing terms,” says Course, who is the paper’s first author.
“In contrast to standard techniques, which usually require a state estimate to infer the governing equations and vice-versa, our method learns the missing terms in the mathematical model and a state estimate simultaneously.”
State estimation, also known as data assimilation, refers to the process of combining observational data with computer models to estimate the current state of a system. Traditionally, it requires strong assumptions about the type of uncertainties that exist in a mathematical model.
“For example, let’s say you have constructed a computer model that predicts the weather and at the same time, you have access to real-time data from weather stations providing actual temperature readings,” says Nair. “Due to the model’s inherent limitations and simplifications – which is often unavoidable when dealing with complex real-world systems – the model predictions may not match the actual observed temperature you are seeing.
“State estimation combines the model’s prediction with the actual observations to provide a corrected or better-calibrated estimate of the current temperature. It effectively assimilates the data into the model to correct its state.”
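To make that correction step concrete, here is a minimal one-dimensional data-assimilation update. The numbers are hypothetical, and this textbook-style sketch is far simpler than the paper's method, which also has to handle unknown governing equations:

```python
# Minimal 1D data-assimilation update: blend a model forecast with a noisy
# observation, weighting each by its uncertainty. Hypothetical numbers only.

forecast_temp = 21.0    # model-predicted temperature (deg C)
forecast_var = 4.0      # variance (uncertainty) of the model forecast
observed_temp = 18.5    # weather-station reading (deg C)
observed_var = 1.0      # variance of the sensor noise

# Gain: how much to trust the observation relative to the forecast.
gain = forecast_var / (forecast_var + observed_var)

# Corrected estimate, pulled toward the observation, with reduced uncertainty.
estimate = forecast_temp + gain * (observed_temp - forecast_temp)
estimate_var = (1.0 - gain) * forecast_var

print(f"gain={gain:.2f}, estimate={estimate:.2f} deg C, variance={estimate_var:.2f}")
# gain=0.80, estimate=19.00 deg C, variance=0.80
```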
However, it has been previously difficult to estimate the underlying state of complex dynamical systems in situations where the governing equations are completely or partially unknown. The new algorithm provides a rigorous statistical framework to address this long-standing problem.
“This problem is akin to deciphering the ‘laws’ that a system obeys without having explicit knowledge about them,” says Nair, whose research group is developing algorithms for mathematical modelling of systems and phenomena that are encountered in various areas of engineering and science.
A byproduct of Course and Nair’s algorithm is that it also helps to characterize missing terms or even the entirety of the governing equations, which determine how the values of unknown variables change when one or more of the known variables change.
The main innovation underpinning the work is a reparametrization trick for stochastic variational inference with Markov Gaussian processes that enables an approximate Bayesian approach to solve such problems. This new development allows researchers to deduce the equations that govern the dynamics of complex systems and arrive at a state estimate using indirect and “noisy” measurements.
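The paper's machinery is considerably more involved, but the core reparametrization idea can be sketched in a few lines of PyTorch: isolate the randomness in a fixed noise variable and transform it deterministically, so gradients can flow through the distribution's parameters. A generic illustration, not the authors' code:

```python
import torch

# Variational parameters of a Gaussian posterior q(z) = N(mu, sigma^2).
mu = torch.tensor([0.5], requires_grad=True)
log_sigma = torch.tensor([-1.0], requires_grad=True)

# Reparametrization: z = mu + sigma * eps with eps ~ N(0, 1). The sample z
# is now a differentiable function of mu and sigma.
eps = torch.randn(1)
z = mu + torch.exp(log_sigma) * eps

# Any loss built from z yields gradients for the variational parameters,
# which is what makes stochastic variational inference trainable by SGD.
loss = (z - 2.0).pow(2).mean()
loss.backward()
print(mu.grad, log_sigma.grad)
```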
“Our approach is computationally attractive since it leverages stochastic – that is randomly determined – approximations that can be efficiently computed in parallel and, in addition, it does not rely on computationally expensive forward solvers in training,” says Course.
While Course and Nair approached their research from a theoretical viewpoint, they were able to demonstrate practical impact by applying their algorithm to problems ranging from modelling fluid flow to predicting the motion of black holes.
“Our work is relevant to several branches of sciences, engineering and finance as researchers from these fields often interact with systems where first-principles models are difficult to construct or existing models are insufficient to explain system behaviour,” says Nair.
“We believe this work will open the door for practitioners in these fields to better intuit the systems they study,” adds Course. “Even in situations where high-fidelity mathematical models are available, this work can be used for probabilistic model calibration and to discover missing physics in existing models.
“We have also been able to successfully use our approach to efficiently train neural stochastic differential equations, which is a type of machine learning model that has shown promising performance for time-series datasets.”
While the paper primarily addresses challenges in state estimation and governing equation discovery, the researchers say it provides a general groundwork for robust data-driven techniques in computational science and engineering.
“As an example, our research group is currently using this framework to construct probabilistic reduced-order models of complex systems. We hope to expedite decision-making processes integral to the optimal design, operation and control of real-world systems,” says Nair.
“Additionally, we are also studying how the inference methods stemming from our research may offer deeper statistical insights into stochastic differential equation-based generative models that are now widely used in many artificial intelligence applications.”
Source: University of Toronto
#A.I. & Neural Networks news#aerospace#ai#aircraft#algorithm#Algorithms#amp#applications#approach#artificial#Artificial Intelligence#artificial intelligence (AI)#Black holes#challenge#Chemistry & materials science news#Classical physics news#climate#computational science#computer#computer models#course#data#data-driven#datasets#decision making#Design#development#dynamic systems#dynamics#engineering
2 notes
Text
I went looking for a citation for the water numbers.
It looks like you gotta split it up.
Training a large language model (that is, setting it up, before anyone gets to use it) "evaporate[s] 700,000 liters, or about 185,000 gallons, of water."
Then after that, when it is in use: "each inference, or response to queries, also requires energy and cooling, and that, too, is thirsty work. [Researchers] estimate that GPT-3 needs to "drink" a 16-ounce bottle of water for roughly every 10-50 responses it makes, and when the model is fielding billions of queries, that adds up."
Source: https://www.newsweek.com/why-ai-so-thirsty-data-centers-use-massive-amounts-water-188234
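Taking the quoted figures at face value, the per-response water cost is easy to work out; this is just back-of-envelope arithmetic on the article's numbers:

```python
# A 16-ounce (~0.473 L) bottle per 10-50 responses implies roughly
# 9-47 mL of cooling water per response, by the article's estimate.
bottle_liters = 0.473

for responses_per_bottle in (10, 50):
    ml_per_response = bottle_liters / responses_per_bottle * 1000
    print(f"{responses_per_bottle} responses per bottle -> "
          f"~{ml_per_response:.0f} mL per response")

# At a billion queries, even ~10 mL per response is ~10 million liters.
print(f"1e9 queries at 10 mL each: {1e9 * 0.010 / 1000:,.0f} m^3 of water")
```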
#AI#water#water shortage#chatGPT#gallons#Newsweek#Shaolei Ren#Jeff Young#thirsty#data center#large language models#tech#big tech#big tech companies#cooling centers#enviornment#green#enviornmental#climate change#global warming
63K notes
Text
Imagine relying on the plagiarism and hallucinations machine to correctly interpret a dense piece of literature.
#using a machine programmed by a bunch of fasistic tech bros to interpret the arts#dont forget open ai used slave labor to train their models#don’t use ai#don’t use ChatGPT#capitalism#ai is stupid#ai is a plague#ai is plagiarism#this is the bad place#this is the worst timeline#climate change#climate crisis#late stage capitalism
1 note
Text
Why Quantum Computing Will Change the Tech Landscape
The technology industry has seen significant advancements over the past few decades, but nothing quite as transformative as quantum computing promises to be. Why Quantum Computing Will Change the Tech Landscape is not just a matter of speculation; it’s grounded in the science of how we compute and the immense potential of quantum mechanics to revolutionise various sectors. As traditional…
#AI#AI acceleration#AI development#autonomous vehicles#big data#classical computing#climate modelling#complex systems#computational power#computing power#cryptography#cybersecurity#data processing#data simulation#drug discovery#economic impact#emerging tech#energy efficiency#exponential computing#exponential growth#fast problem solving#financial services#Future Technology#government funding#hardware#Healthcare#industry applications#industry transformation#innovation#machine learning
1 note
Text
AI Revolution: 350,000 Protein Structures and Beyond
The Evolution of AI in Scientific Research
Historical Context: Early Uses of AI in Research
The journey of Artificial Intelligence in scientific research began with simple computational models and algorithms designed to solve specific problems. In the 1950s and 1960s, AI was primarily used for basic data analysis and pattern recognition. Early AI applications in research were limited by the computational power and data availability of the time. However, these foundational efforts laid the groundwork for more sophisticated AI developments.
AI in Medicine
AI in Drug Discovery and Development
AI is transforming the pharmaceutical industry by accelerating drug discovery and development. Traditional drug discovery is a time-consuming and expensive endeavor, often taking over a decade and billions of dollars to bring a new drug to market. AI algorithms, however, can analyze vast datasets to identify potential drug candidates much faster and at a fraction of the cost.
Explanation of AI Algorithms Used in Identifying Potential Drug Candidates
AI drug discovery algorithms typically employ machine learning, deep learning, and natural language processing techniques. These algorithms can analyze chemical structures, biological data, and scientific literature to predict which compounds are likely to be effective against specific diseases. By modeling complex biochemical interactions, AI can identify promising drug candidates that might have been overlooked through traditional methods.
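As a toy illustration of that pattern, a model can be trained on numeric descriptors of known compounds and then used to rank untested ones. The descriptors and activity labels below are synthetic stand-ins, not real assay data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for molecular descriptors (e.g., weight, logP, polar
# surface area) and a binary "active against target" label.
X_known = rng.normal(size=(500, 3))
y_known = (X_known[:, 0] + 0.5 * X_known[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Rank a library of untested compounds by predicted probability of activity,
# so lab effort goes to the most promising candidates first.
X_untested = rng.normal(size=(10, 3))
scores = model.predict_proba(X_untested)[:, 1]
for idx in np.argsort(scores)[::-1][:3]:
    print(f"compound {idx}: predicted activity {scores[idx]:.2f}")
```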
Case Studies
BenevolentAI
This company uses AI to mine scientific literature and biomedical data to discover new drug candidates. BenevolentAI's platform has identified several potential treatments for diseases such as ALS and COVID-19, demonstrating the efficiency of AI in accelerating drug discovery.
Atomwise
Atomwise utilizes deep learning algorithms to predict the binding affinity of small molecules to protein targets. Their AI-driven approach has led to the discovery of promising drug candidates for diseases like Ebola and multiple sclerosis.
Impact on Reducing Time and Costs in Drug Development
AI significantly reduces the time and cost associated with drug development. By automating the analysis of vast datasets, AI can identify potential drug candidates in months rather than years. Additionally, AI can optimize the design of clinical trials, improving their efficiency and success rates. As a result, AI-driven drug discovery is poised to revolutionize the pharmaceutical industry, bringing new treatments to market faster and more cost-effectively than ever before.
AI in Personalized Medicine
How AI Helps Tailor Treatments to Individual Patients
Personalized medicine aims to tailor medical treatments to each patient's individual characteristics. AI plays a crucial role in this field by analyzing genetic, clinical, and lifestyle data to develop personalized treatment plans. Machine learning algorithms can identify patterns and correlations in patient data, enabling healthcare providers to predict how patients will respond to different treatments.
Examples of AI-driven personalized Treatment Plans (e.g., IBM Watson for Oncology)
IBM Watson for Oncology: This AI system analyzes patient data and medical literature to provide oncologists with evidence-based treatment recommendations. By considering the genetic profile and medical history of each patient, Watson helps oncologists develop personalized cancer treatment plans.
Benefits and Challenges of Implementing AI in Personalized Medicine
The benefits of AI in personalized medicine include improved treatment outcomes, reduced side effects, and more efficient use of healthcare resources. However, challenges remain, such as ensuring data privacy, managing the complexity of AI models, and addressing potential biases in AI algorithms. Overcoming these challenges is essential to fully realizing the potential of AI in personalized medicine.
AI in Medical Imaging and Diagnostics
AI Applications in Interpreting Medical Images
AI is revolutionizing medical imaging by providing tools to analyze medical images with high accuracy and speed. Deep learning algorithms, particularly convolutional neural networks (CNNs), detect abnormalities in medical images, such as tumors in MRI scans or fractures in X-rays.
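A minimal PyTorch CNN of the kind described might look like the sketch below: a schematic two-class (normal/abnormal) classifier for single-channel scans, not any vendor's production model:

```python
import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    """Schematic CNN: grayscale scan in, normal/abnormal logits out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyScanClassifier()
dummy_batch = torch.randn(4, 1, 64, 64)   # four fake 64x64 scans
print(model(dummy_batch).shape)           # torch.Size([4, 2])
```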
Examples of AI Tools in Diagnostics (e.g., Google's DeepMind, Zebra Medical Vision)
Google's DeepMind: DeepMind's AI systems have been used to accurately interpret retinal scans and diagnose eye diseases. Their algorithms can detect conditions like diabetic retinopathy and age-related macular degeneration early, improving patient outcomes.
Zebra Medical Vision: This company offers AI-powered solutions for interpreting medical images across various modalities, including CT, MRI, and X-ray. Their algorithms can detect various conditions, from liver disease to cardiovascular abnormalities.
The Future of AI in Improving Diagnostic Accuracy and Speed
AI has the potential to significantly improve diagnostic accuracy and speed, leading to earlier detection of diseases and better patient outcomes. As AI technology advances, it will become an integral part of medical diagnostics, assisting healthcare professionals in making more accurate and timely decisions.
AI in Climate Science
AI for Climate Modeling and Prediction
Artificial Intelligence (AI) has significantly enhanced the precision and reliability of climate models. Traditional climate models rely on complex mathematical equations to simulate the interactions between the atmosphere, oceans, land surface, and ice. However, these models often struggle with the sheer complexity and scale of climate systems.
AI-driven models can process data from numerous sources, including satellite imagery, weather stations, and historical climate data, to improve short-term weather forecasts and long-term climate projections. For instance, AI algorithms can detect subtle patterns in climate data that might be overlooked by conventional models, leading to more accurate predictions of extreme weather events and climate change impacts.
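A stripped-down version of that pattern-detection idea fits in a few lines: detrend a temperature series and flag years that deviate sharply. The data here are synthetic and the method is deliberately simple compared with the learned models described above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "annual temperature anomaly" series: slow warming trend plus
# noise, with one extreme year injected -- purely illustrative data.
years = np.arange(1980, 2024)
temps = 0.02 * (years - 1980) + rng.normal(0, 0.1, years.size)
temps[30] += 0.6  # an unusually hot year

# Detrend with a least-squares line, then flag residuals beyond 2 sigma --
# the kind of subtle-pattern extraction a learned model automates at scale.
slope, intercept = np.polyfit(years, temps, 1)
residuals = temps - (slope * years + intercept)
extreme = years[np.abs(residuals) > 2 * residuals.std()]
print("flagged extreme years:", extreme)
```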
Examples of AI Projects in Climate Science
Climate Change AI: This initiative brings together researchers and practitioners from AI and climate science to harness AI for climate action. They work on projects that apply AI to improve climate models, optimize renewable energy systems, and develop climate mitigation strategies. For example, AI has been used to enhance the resolution of climate models, providing more detailed and accurate forecasts.
IBM's Green Horizon Project: IBM uses AI to predict air pollution levels and track greenhouse gas emissions. The system employs machine learning algorithms to analyze environmental data and forecast pollution patterns, helping cities manage air quality more effectively.
Impact of AI on Understanding and Mitigating Climate Change
AI's ability to analyze large datasets and identify trends has profound implications for understanding and mitigating climate change. By providing more accurate climate models, AI helps scientists better understand the potential impacts of climate change, including sea level rise, temperature increases, and changes in precipitation patterns. This knowledge is crucial for developing effective mitigation and adaptation strategies.

AI also plays a critical role in optimizing renewable energy systems. For instance, AI algorithms can predict solar and wind power output based on weather forecasts, helping to integrate these renewable sources into the power grid more efficiently. This optimization reduces reliance on fossil fuels and helps lower greenhouse gas emissions.
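The physics behind those renewable forecasts helps explain why accuracy matters so much. A simple idealized calculation (illustrative figures, not a real turbine spec) shows how strongly wind power depends on wind speed:

```python
# Idealized wind-turbine output from a forecast wind speed, using the
# power law P = 0.5 * rho * A * Cp * v^3. Numbers are illustrative.
rho = 1.225          # air density, kg/m^3
rotor_area = 5027.0  # m^2, ~40 m blade radius (pi * 40^2)
cp = 0.40            # power coefficient (the Betz limit is ~0.593)

def wind_power_mw(v):
    """Megawatts produced at wind speed v (m/s), idealized."""
    return 0.5 * rho * rotor_area * cp * v**3 / 1e6

for v in (5.0, 8.0, 12.0):
    print(f"{v:4.1f} m/s -> {wind_power_mw(v):5.2f} MW")

# The cubic dependence on v is why small forecast errors translate into
# large errors in grid supply -- and why better forecasts matter.
```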
Use of AI in Tracking Environmental Changes
AI technologies are increasingly used to monitor environmental changes, such as deforestation, pollution, and wildlife populations. These applications involve analyzing data from satellites, drones, and sensors to track changes in the environment in real time.
Wildbook
Wildbook uses AI and computer vision to track and monitor wildlife populations. By analyzing photos and videos uploaded by researchers and the public, Wildbook identifies individual animals and tracks their movements and behaviors. This data is invaluable for conservation efforts, helping to protect endangered species and their habitats.
Global Forest Watch
This platform uses AI to monitor deforestation and forest degradation worldwide. AI algorithms process satellite imagery to detect changes in forest cover, providing timely alerts to conservationists and policymakers. This real-time monitoring helps prevent illegal logging and supports reforestation efforts.
The Role of AI in Promoting Sustainability and Conservation Efforts
AI promotes sustainability by enabling more efficient resource management and supporting conservation initiatives. For example, AI can optimize water usage in agriculture by analyzing soil moisture data and weather forecasts to recommend precise irrigation schedules. This reduces water waste and enhances crop yields. In conservation, AI helps monitor ecosystems and detect threats to biodiversity. AI-powered drones and camera traps can automatically identify and count species, providing valuable data for conservationists. These technologies enable more effective management of protected areas and support efforts to restore endangered species populations.
AI in Materials Engineering
Explanation of How AI Accelerates the Discovery of New Materials
The discovery of new materials traditionally involves trial and error, which can be time-consuming and expensive. AI accelerates this process by predicting the properties of potential materials before they are synthesized. Machine learning models are trained on vast datasets of known materials and their properties, allowing them to predict the characteristics of new, hypothetical materials.
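A toy version of such a property-prediction model fits in a dozen lines. The composition features and "hardness" target below are synthetic, purely to show the screening workflow:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

# Synthetic training set: each row is a candidate alloy described by the
# fractions of three elements; the target is a made-up "hardness" score.
# Real platforms use far richer descriptors and larger datasets.
fractions = rng.dirichlet(np.ones(3), size=200)
hardness = 5.0 * fractions[:, 0] + 2.0 * fractions[:, 1] + rng.normal(0, 0.1, 200)

model = Ridge(alpha=1.0).fit(fractions, hardness)

# Screen hypothetical new compositions before anyone synthesizes them.
candidates = rng.dirichlet(np.ones(3), size=5)
for comp, pred in zip(candidates, model.predict(candidates)):
    print(np.round(comp, 2), "-> predicted hardness", round(pred, 2))
```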
Materials Project
This initiative uses AI to predict the properties of thousands of materials. Researchers can use the platform to explore new materials for energy storage, electronics, and other applications. The Materials Project has led to the discovery of new battery materials and catalysts, significantly speeding up the research process.
Citrine Informatics
Citrine uses AI to analyze data on materials and predict optimal compositions for specific applications. Their platform has been used to develop new alloys, polymers, and ceramics with enhanced properties, such as increased strength or conductivity.
Potential Breakthroughs Enabled by AI in Materials Science
AI-driven materials research has the potential to revolutionize various industries. For instance, AI could lead to the discovery of new materials for more efficient solar panels, lightweight and durable materials for aerospace, and high-capacity batteries for electric vehicles. These breakthroughs would have significant economic and environmental benefits, driving innovation and sustainability.
AI in Predicting Material Properties
How AI Models Predict Properties and Behaviors of Materials
AI models use data from existing materials to predict the properties and behaviors of new materials. These models can simulate how a material will respond to different conditions, such as temperature, pressure, and chemical environment. This predictive capability allows researchers to identify promising materials without extensive laboratory testing.
Polymers and Alloys
AI models have been used to predict the mechanical properties of polymers and alloys, such as tensile strength, elasticity, and thermal stability. This helps design materials that meet specific performance criteria for industrial applications.
Impact on Developing Advanced Materials for Various Industries
AI's predictive capabilities accelerate the development of advanced materials, reducing the time and cost associated with traditional experimental methods. In electronics, aerospace, and energy industries, AI-driven materials discovery leads to the development of components with superior performance and durability. This innovation drives progress in technology and manufacturing, supporting economic growth and environmental sustainability.
Tools and Technologies Driving AI in Research
Detailed Overview of AlphaFold and Its Significance
AlphaFold, developed by DeepMind, is an AI system that has made remarkable breakthroughs in predicting protein structures. Accurately predicting protein structures is vital because the shape of a protein determines its function, and misfolded proteins can lead to diseases such as Alzheimer's and Parkinson's. Determining a protein's structure traditionally required techniques like X-ray crystallography and cryo-electron microscopy, which are both time-consuming and expensive.
How AlphaFold Has Revolutionized Protein Structure Prediction
In 2020, AlphaFold achieved a significant milestone by outperforming other methods in the Critical Assessment of Protein Structure Prediction (CASP) competition. AlphaFold's predictions were comparable to experimental results, achieving a median Global Distance Test (GDT) score of 92.4 out of 100 for the hardest targets in CASP14. This level of accuracy had never been achieved before by computational methods.
The AI system uses neural networks trained on a vast dataset of known protein structures and sequences. It can predict the 3D shape of a protein based solely on its amino acid sequence, a process that traditionally took months or years but can now be completed in days.
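Many of those precomputed predictions are publicly available. Assuming the AlphaFold Protein Structure Database still serves its documented REST endpoint at the path below (worth checking against the current docs), a lookup takes a few lines:

```python
import requests

# Query the public AlphaFold Protein Structure Database for a UniProt entry.
# The endpoint path and field names are assumptions based on the database's
# published REST API; P69905 is human hemoglobin subunit alpha.
url = "https://alphafold.ebi.ac.uk/api/prediction/P69905"
entry = requests.get(url, timeout=30).json()[0]

print(entry["uniprotDescription"])       # protein name, per the API schema
print("model PDB file:", entry["pdbUrl"])  # downloadable predicted structure
```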
AlphaFold's success has had a profound impact on various fields:
Drug Discovery
With accurate protein structures, drug developers can design more effective drugs targeting specific proteins. This could significantly reduce the time and cost of bringing new medicines to market.
Biology and Medicine
Understanding protein structures helps researchers decipher their functions, interactions, and roles in diseases. This knowledge is crucial for developing new treatments and understanding biological processes.
Biotechnology
Industries relying on enzymes and other proteins can use AlphaFold to optimize and engineer proteins for specific applications, enhancing efficiency and innovation.
AI Platforms and Frameworks
Several AI platforms and frameworks are widely used in scientific research to facilitate the development and deployment of AI models. Key platforms include:
TensorFlow
An open-source machine learning framework developed by Google, TensorFlow is used for a wide range of AI applications, including research.
PyTorch
Developed by Facebook's AI Research lab, PyTorch is known for its flexibility and ease of use, and it has gained immense popularity among researchers.
Keras
A high-level neural networks API running on top of TensorFlow, Keras provides a simplified interface for building and training models. It is used extensively in academic research and industry.
Examples of How These Platforms Facilitate Scientific Discovery
TensorFlow
TensorFlow has been used in projects ranging from image recognition to natural language processing. For instance, it has been used to develop AI models for detecting diabetic retinopathy from retinal images with an accuracy comparable to that of human specialists.
PyTorch
PyTorch's dynamic computational graph makes it ideal for research. Researchers have used PyTorch to create models for climate prediction and medical image analysis, leading to significant advancements in these fields.
Keras
Keras simplifies the process of designing and testing deep learning models, making them accessible to both beginners and experts. It has been used in applications such as genomics and neuroscience, where rapid prototyping and iteration are crucial.
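To give a sense of the rapid prototyping these frameworks enable, a complete Keras model definition and training loop fits in a handful of lines (a generic sketch on stand-in data, not any of the cited studies' models):

```python
import numpy as np
from tensorflow import keras

# A small fully connected classifier: 20 input features, 3 classes.
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on random stand-in data just to show the full loop end to end.
X = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 3, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
model.summary()
```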
The Role of Open-Source AI Tools in Accelerating Innovation
Open-source AI tools democratize access to advanced technologies, enabling researchers worldwide to collaborate and innovate. These tools provide a shared foundation for developing new algorithms, sharing datasets, and building upon each other's work. The collaborative nature of open-source projects accelerates innovation, leading to rapid advancements in AI research and its applications across various scientific disciplines.
Real-Life Examples of AI in Scientific Discovery
AlphaFold's Breakthrough in Protein Folding
In 2020, DeepMind's AlphaFold made a groundbreaking advancement by accurately predicting protein structures. This achievement has far-reaching implications for drug discovery and the understanding of diseases. The system has been used to predict the structures of over 350,000 proteins across 20 different organisms, helping researchers understand protein functions and interactions at an unprecedented scale.
AI in COVID-19 Research
During the COVID-19 pandemic, AI played a crucial role in accelerating vaccine development and drug repurposing. Companies like Moderna used AI to speed up the design of mRNA sequences for their vaccines, significantly reducing development time from years to months. AI algorithms also helped identify existing drugs that could be repurposed to treat COVID-19, leading to faster clinical trials and treatments. For example, AI identified Baricitinib as a potential treatment that was later approved by the FDA.
IBM Watson in Oncology
IBM Watson for Oncology uses AI to analyze large volumes of medical literature and patient data to provide personalized cancer treatment recommendations. This tool has been deployed in various hospitals worldwide, improving treatment accuracy and outcomes.
AI in Climate Science: Project Climate Change AI
The Climate Change AI initiative leverages AI to enhance climate modeling, predict extreme weather events, and optimize renewable energy systems. AI models have been used to predict the impact of climate change on agricultural yields, helping farmers adapt to changing conditions. For instance, AI-driven models have improved the accuracy of weather forecasts, aiding in disaster preparedness and response. These advancements help mitigate the impacts of climate change and promote sustainability.
Citrine Informatics in Materials Science
Citrine Informatics uses AI to accelerate the discovery and development of new materials. Their platform combines machine learning with materials science data to predict material properties and optimize formulations, leading to faster innovation in industries such as aerospace and electronics. The company's AI-driven approach has resulted in new materials with enhanced performance characteristics, reducing the time and cost of traditional materials research. For example, Citrine's platform has helped develop new alloys with improved strength and durability for aerospace applications.
Ready to Transform Your Business with Cutting-Edge AI?
At Coditude, we are committed to driving innovation and helping businesses leverage the power of Artificial Intelligence to achieve extraordinary results. Our expertise spans various fields, from accelerating drug discovery to optimizing renewable energy systems and revolutionizing materials science.
Are you ready to harness AI's transformative potential for your organization? Whether you're looking to enhance your research capabilities, streamline operations, or develop groundbreaking solutions, Coditude guides you every step of the way.
#artificial intelligence#machine learning algorithms#generative AI#AI models#future of AI#power of AI#AI technology#machine learning applications#AI in drug discovery#AI in scientific research#AI in materials engineering#AI in healthcare#AI in climate science#Coditude
0 notes
Text
"The companies that make AI—which is, to establish our terms right at the outset, large language models that generate text or images in response to natural language queries—have a problem. Their product is dubiously legal, prohibitively expensive (which is to say, has the kind of power and water requirements that are currently being treated as externalities and passed along to the general populace, but which in a civilized society would lead to these companies’ CEOs being dragged out into the street by an angry mob), and it objectively does not work. All of these problems are essentially intractable. Representatives of AI companies have themselves admitted that if they paid fair royalties to all the artists whose work they’ve scraped and stolen in order to get their models working, they’d be financially unfeasible. The energy requirements for running even a simple AI-powered google query are so prohibitive that Sam Altman has now started to pretend that he can build cold fusion in order to solve the problem he and others like him have created. And the dreaded “hallucination” problem in AI-generated text and images is an inherent attribute of the technology. Even in cases where there are legitimate, useful applications for AI—apparently if you provide a model a specific set of sources, it can produce accurate summaries, which has its uses in various industries—there remains the question of whether this is a cost-effective tool once its users actually have to start paying for it (and whether this is even remotely ethically justifiable given the technology’s environmental cost)."
#The angry mob is tumblr right?#AI#Abigail Nussbaum#Large Language Model#legal#environmental#expensive#CEO#angry mob#environment#green#climate change#problem#royalties#artists#work#scraped#hallucination
0 notes
Text
Weather and Climate Artificial Intelligence (AI) Foundation Model Applications Presented at IBM Think in Boston
Rahul Ramachandran and Maskey (ST11/IMPACT) participated in IBM Think, where their IBM collaborators showcased two innovative AI applications for weather and climate modeling. The first application focuses on climate downscaling, enhancing the resolution of climate models for more accurate local predictions. The second application aims to optimize wind farm predictions, improving renewable energy forecasts. During […]

Source: NASA (https://ift.tt/gSoq9zW)
#NASA#space#Weather and Climate Artificial Intelligence (AI) Foundation Model Applications Presented at IBM Think in Boston#Michael Gabrill
0 notes
Text
Artificial Intelligence for Climate Action
Artificial Intelligence (AI) is transforming various sectors, and its impact on climate change mitigation is becoming increasingly significant. By leveraging AI, we can develop more efficient energy systems, enhance environmental monitoring, and foster sustainable practices. This blog post explores how AI is being used to curb climate change. AI for Renewable Energy Improvement One of the…
View On WordPress
#AI and Climate Change#Artificial Intelligence#Carbon Capture and Storage#Climate Change Mitigation#Climate Modeling#Disaster Response#Environmental Monitoring#Precision Agriculture#Renewable Energy Optimization#Sustainable Technology
0 notes
Text
High Water Ahead: The New Normal of American Flood Risks
A map created by the National Oceanic and Atmospheric Administration (NOAA) highlights ‘hazard zones’ in the U.S. for various flooding risks, including rising sea levels and tsunamis. Here’s a summary and analysis. Summary: The NOAA map identifies areas at risk of flooding from storm surges, tsunamis, high tide flooding, and sea level rise. Red areas on the map indicate more…
View On WordPress
#AI News#climate forecasts#data driven modeling#ethical AI#flood risk management#geospatial big data#News#noaa#sea level rise#uncertainty quantification
0 notes
Note
what’s the story about the generative power model and water consumption? /gen
There's this myth going around about generative AI consuming truly ridiculous amounts of power and water. You'll see people say shit like "generating one image is like just pouring a whole cup of water out into the Sahara!" and bullshit like that, and it's just... not true.

The actual truth is that supercomputers, which do a lot of stuff, use a lot of power, and at one point someone released an estimate of how much power some supercomputers were using and people went "oh, that supercomputer must only do AI! All generative AI uses this much power!" and then just... made shit up re: how making an image sucks up a huge chunk of the power grid or something. Which makes no sense because I'm given to understand that many of these models can run on your home computer. (I don't use them so I don't know the details, but I'm told by users that you can download them and generate images locally.)

Using these models uses far less power than, say, online gaming. Or using Tumblr. But nobody ever talks about how evil those things are because of their power consumption. I wonder why.
To be clear, I don't like generative AI. I'm sure it's got uses in research and stuff but on the consumer side, every effect I've seen of it is bad. Its implementation in products that I use has always made those products worse. The books it writes, which flood the market, are incoherent nonsense at best and dangerous at worst (let's not forget that mushroom foraging guide). It's turned the usability of search engines from "rapidly declining, but still usable if you can get past the ads" into "almost one hundred per cent useless now, actually not worth the effort to de-bullshittify your search results", especially if you're looking for images. It's a tool for doing bullshit that people were already doing much easier and faster, thus massively increasing the amount of bullshit. The only consumer-useful uses I've seen of it as a consumer are niche art projects, usually projects that explore the limits of the tool itself like that one poetry book or the Infinite Art Machine; overall I'd say its impact at the Casual Random Person (me) level has been overwhelmingly negative. Also, the fact that so much AI turns out to be underpaid people in a warehouse in some country with no minimum wage and terrible labour protections is... not great. And the fact that it's often used as an excuse to try to find ways to underpay professionals ("you don't have to write it, just clean up what the AI came up with!") is also not great.
But there are real labour and product quality concerns with generative AI, and there's hysterical bullshit. And the whole "AI is magically destroying the planet via climate change but my four hour twitch streaming sesh isn't" thing is hysterical bullshit. The instant I see somebody make this stupid claim I put them in the same mental bucket as somebody complaining about AI not being "real art" -- a hatemobber hopping on the hype train of a new thing to hate and feel like an enlightened activist about when they haven't bothered to learn a fucking thing about the issue. And I just count my blessings that they fell in with this group instead of becoming a flat earther or something.
2K notes
Text
Our Stance On Gen-AI
This year, for the first time, we've had a couple of reports from bidders that the FTH fanworks they received were produced using generative AI. For that reason, we've decided that it's important that we lay out a specific, concrete policy going forward.
Generative AI tools are not welcome here.
Non-exhaustive list of examples:
image generators like Imagen, Midjourney, and similar
video generators like Sora, Runway, and similar
LLMs like ChatGPT and similar
audio generators like ElevenLabs, MusicLM, and similar
Participants found to have used generative AI to produce a fanwork, in part or in whole, for their bidder(s) will be permanently banned from participating in future iterations of Fandom Trumps Hate.
Why?
We understand that there can be contentious debate around the use of generative AI, we know individual people have their own reasons for being in favor of it, and we recognize that many people may simply be unaware that these tools come with any negative impacts at all. Regardless, we are firm in our stance on this for the following (non-exhaustive) list of key reasons in no particular order:
negative, unregulated environmental impact
Over the years, you may have noticed that we’ve supported multiple environmental organizations doing important work to combat climate change, preserve wildlife, and advocate for renewable and sustainable energy policy changes. Generative AI tools produce a startling amount of e-waste, can require massive amounts of storage space and computational power, and are a (currently unregulated) drain on natural resources. Using these tools to produce a fanwork flies in the face of every environmental organization we have supported to date.
plagiarism and lack of artistic integrity
Most if not all generative AI models are trained on some amount of stolen work (across various mediums). As a result, any output generated by these models is at worst plagiarized and at best extremely derivative and unoriginal. In our opinion, using generative AI tools to produce a fanwork demonstrates a lack of care for your own craft, a lack of respect for the work of other creators, and a lack of respect for your bidder and your commitment to them.
undermining our community building impact
One of the best things to come out of the auction every year—we can't even call it a side benefit, because it's so central to us—is that bidders and creators form collaborative relationships which sometimes even turn into friendship. Using generative AI undermines that trust and collaboration.
undermining the value of participating as a creator
Bidders participate in Fandom Trumps Hate for the opportunity to prompt YOU to create a fanwork for them, in YOUR style with YOUR specific skill set. Any potential bidder is perfectly capable of dropping a prompt into a generative AI tool on their own time, if they wish. We hope all creators sign up with the aim to play a role more significant than “unnecessary middleman.”
In general, we try to be as flexible as we can in our policies to allow for the best experience possible for all Fandom Trumps Hate participants. This, however, is something we are not willing to be flexible on. We realize this may seem unusually rigid, but we ask that you trust we have given this serious consideration and respect that while we are willing to answer clarifying questions, we are not open to debate on this topic.
1K notes
Text
How Google Cloud’s Automotive AI Agent is Transforming In-Car Experience with Mercedes-Benz
The relationship between artificial intelligence (AI) and automobiles has been evolving for decades, transitioning from basic automation to today’s advanced self-driving technologies. This evolution has entered a new phase with the advent of AI agents that not only assist with driving but also transform how drivers and passengers interact with their vehicles. Leading this innovation is Google, in partnership with Mercedes-Benz, a brand known for luxury and cutting-edge design in the automotive industry. This partnership has led to the development of Google Cloud’s Automotive AI Agent, a technology that facilitates the development of in-car experiences.
This article delves into this development, exploring Automotive AI Agent, the technology behind it, and how Mercedes-Benz is employing these tools to set new benchmarks in the automotive industry.
Google’s Automotive AI Agents
Google’s automotive AI agents provide automakers with a cutting-edge platform for building intelligent, customizable in-car assistants. Build on Google’s Gemini model, these agents offer advanced capabilities, including natural language understanding, multilingual communication, and multimodal reasoning.
These agents support a variety of specialized functions, such as voice-controlled navigation, hands-free media playback, and communication, ensuring a safer and more interactive driving experience. They can also deliver personalized recommendations and features by learning from user habits and preferences, making them adaptable companions for drivers.
Automakers can further customize the agents by defining unique wake words, integrating third-party applications, and adding proprietary features. The agents are designed to work seamlessly with existing vehicle systems, Android Automotive OS, and Google’s ecosystem of applications, ensuring consistent and reliable performance.
By simplifying the development process, the platform allows manufacturers to deploy the AI agents as-is or use them as a foundation to create uniquely branded, and sophisticated in-car assistants.
Vertex AI: The Technology Behind Automotive AI Agents
Automotive AI Agents are developed using Vertex AI, a Google Cloud platform designed to make the development and deployment of AI agents easier and more efficient. The platform provides a set of tools that support every step of the agent development lifecycle, including data preparation, model training, and deployment, all within a unified and intuitive environment.
To empower automotive AI agents with multimodal interactive capabilities, Vertex AI provides access to Google’s pre-trained Gemini models. These models support key applications like natural language understanding and multimodal reasoning, providing a foundation for intelligent and context-aware interactions. The platform also offers the flexibility to develop custom AI models for specific use cases, enabling automakers to address their unique requirements. Vertex AI includes an AutoML feature that simplifies the development process, making agent building accessible to teams with varying levels of AI expertise. Additionally, support for widely used frameworks like TensorFlow, PyTorch, and scikit-learn ensures compatibility across diverse development environments.
An essential part of the Vertex AI platform is the Agent Builder framework, which simplifies the development of conversational agents. This tool reduces development time with its low-code interface, making it accessible for teams without extensive coding experience. It also supports inputs like text, voice, and images, allowing the agent to deliver more natural and context-aware interactions. Automakers can further customize the agent to reflect their brand and enhance the overall user experience. Additionally, the framework integrates seamlessly with third-party systems and Google’s ecosystem of applications, ensuring smooth functionality.
With these technologies, Google provides automakers with the tools they need to create advanced, personalized in-car experiences that set new standards for innovation and convenience.
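In practice, calling a Gemini model through Vertex AI looks roughly like the sketch below. The project ID is a placeholder and the model name is illustrative; the SDK surface and model catalog may have changed since this was written:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: supply your own Google Cloud project and region.
vertexai.init(project="your-project-id", location="us-central1")

# Model name is illustrative; check Vertex AI's model catalog for current IDs.
model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "Driver said: 'Find a charging station on my route and warm the cabin.' "
    "List the vehicle actions to take."
)
print(response.text)
```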
Mercedes-Benz: Redefining In-Car Experience with Automotive AI Agents
Mercedes-Benz has become one of the first automakers to incorporate Google Cloud’s Automotive AI Agent into its MBUX Virtual Assistant. The enhanced assistant will debut in the new Mercedes-Benz CLA, offering a range of innovative features:
Natural Language Understanding
The system’s advanced NLU capabilities enable users to interact in a conversational tone without memorizing specific commands. Its multilingual support accommodates a global audience, making the assistant universally accessible.
Personalized User Experience
The agents learn from user interactions to provide personalized suggestions and actions. For example, it can recommend navigation routes based on driving habits or adjust in-car settings to individual preferences.
Enhanced Navigation
By integrating with Google Maps, the assistant provides precise navigation and live traffic updates. It can enable users to obtain personalized information about points of interest, traffic conditions and more.
Proactive Assistance
The assistant offers predictive insights, such as weather updates or reminders about vehicle maintenance. This reduces cognitive load on drivers, allowing them to focus on the road.
Smart Home Integration
By employing Google Cloud’s IoT capabilities, the AI agent connects with smart home devices. Drivers can control home systems, such as adjusting thermostats or checking security cameras, directly from their vehicles.
Redefining Safety and Accessibility
The introduction of Automotive AI Agents provides Mercedes-Benz with two significant benefits: improved safety and greater accessibility.
These agents enhance safety by supporting hands-free operation, allowing drivers to complete tasks such as sending messages, adjusting climate settings, or using navigation through simple voice commands. This reduces distractions and helps drivers stay focused on the road.
In terms of accessibility, the system’s advanced natural language processing capabilities accommodate diverse accents, speech patterns, and languages. This ensures that users from different linguistic backgrounds can effortlessly interact with their cars. Additionally, the agent’s ability to process complex commands makes it particularly useful for individuals with limited mobility or disabilities, offering a more inclusive in-car experience.
The Bottom Line
The integration of Google Cloud’s Automotive AI Agent in Mercedes-Benz vehicles signifies a pivotal moment in the automotive industry. This collaboration addresses critical challenges by improving user experience, enhancing safety, and promoting sustainability.
These AI agents enable seamless and intuitive interactions between drivers, passengers, and their vehicles. By offering hands-free operation and predictive assistance, they help reduce distractions, enhancing safety and efficiency on the road. Additionally, the integration of intelligent systems optimizes vehicle performance, contributing to a more sustainable driving experience.
As the industry moves closer to fully autonomous vehicles, conversational AI will play a crucial role in shaping how humans interact with their cars. AI-generated insights will drive future innovations in vehicle design, making cars smarter, safer, and more sustainable. This evolution positions AI as a key enabler of the next generation of mobility solutions.
#Accessibility#agent#Agentic AI#agents#ai#ai agent#AI AGENTS#AI Car Assistants#AI models#ai platform#android#applications#Article#artificial#Artificial Intelligence#automation#autoML#Automobiles#automotive#Automotive AI Agent#Automotive Artificial Intelligence Agent#automotive industry#autonomous#autonomous agents#autonomous vehicles#benchmarks#Building#Cameras#Cars#climate
0 notes
Text
Green energy is in its heyday.
Renewable energy sources now account for 22% of the nation’s electricity, and solar has skyrocketed eight times over in the last decade. This spring in California, wind, water, and solar energy sources exceeded expectations, accounting for an average of 61.5 percent of the state's electricity demand across 52 days.
But green energy has a lithium problem. Lithium batteries control more than 90% of the global grid battery storage market.
That’s not just cell phones, laptops, electric toothbrushes, and tools. Scooters, e-bikes, hybrids, and electric vehicles all rely on rechargeable lithium batteries to get going.
Fortunately, this past week, Natron Energy launched its first-ever commercial-scale production of sodium-ion batteries in the U.S.
“Sodium-ion batteries offer a unique alternative to lithium-ion, with higher power, faster recharge, longer lifecycle and a completely safe and stable chemistry,” said Colin Wessells — Natron Founder and Co-CEO — at the kick-off event in Michigan.
The new sodium-ion batteries charge and discharge at rates 10 times faster than lithium-ion, with an estimated lifespan of 50,000 cycles.
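To put 50,000 cycles in perspective, a rough service-life calculation helps; only the cycle count comes from Natron, and the duty-cycle figures below are hypothetical assumptions:

```python
# Rough service-life math for the quoted 50,000-cycle lifespan.
# Duty-cycle assumptions are hypothetical, for scale only.
rated_cycles = 50_000

for cycles_per_day in (1, 5, 10):   # light home use vs. heavy data-center duty
    years = rated_cycles / (cycles_per_day * 365)
    print(f"{cycles_per_day:2d} cycles/day -> ~{years:.0f} years of service")

# Even at ten full cycles a day, the rated life exceeds a decade -- one
# reason round-the-clock data-center loads are an attractive first market.
```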
Wessells said that using sodium as a primary mineral alternative eliminates industry-wide issues of worker negligence, geopolitical disruption, and the “questionable environmental impacts” inextricably linked to lithium mining.
“The electrification of our economy is dependent on the development and production of new, innovative energy storage solutions,” Wessells said.
Why are sodium batteries a better alternative to lithium?
The birth and death cycle of lithium is shadowed by environmental destruction. The process of extracting lithium pollutes the water, air, and soil, and when it’s eventually discarded, the flammable batteries are prone to bursting into flames and burning out in landfills.
There’s also a human cost. Lithium-ion materials like cobalt and nickel are not only harder to source and procure, but their supply chains are also overwhelmingly attributed to hazardous working conditions and child labor law violations.
Sodium, on the other hand, is estimated to be 1,000 times more abundant in the earth’s crust than lithium.
“Unlike lithium, sodium can be produced from an abundant material: salt,” engineer Casey Crownhart wrote in the MIT Technology Review. “Because the raw ingredients are cheap and widely available, there’s potential for sodium-ion batteries to be significantly less expensive than their lithium-ion counterparts if more companies start making more of them.”
What will these batteries be used for?
Right now, Natron has its focus set on AI models and data storage centers, which consume hefty amounts of energy. In 2023, the MIT Technology Review reported that one AI model can emit more than 626,000 pounds of carbon dioxide equivalent.
“We expect our battery solutions will be used to power the explosive growth in data centers used for Artificial Intelligence,” said Wendell Brooks, co-CEO of Natron.
“With the start of commercial-scale production here in Michigan, we are well-positioned to capitalize on the growing demand for efficient, safe, and reliable battery energy storage.”
The fast-charging energy alternative also has limitless potential on a consumer level, and Natron is eyeing telecommunications and EV fast-charging once it begins servicing AI data storage centers in June.
On a larger scale, sodium-ion batteries could radically change the manufacturing and production sectors — from housing energy to lower electricity costs in warehouses, to charging backup stations and powering electric vehicles, trucks, forklifts, and so on.
“I founded Natron because we saw climate change as the defining problem of our time,” Wessells said. “We believe batteries have a role to play.”
-via GoodGoodGood, May 3, 2024
--
Note: I wanted to make sure this was legit (scientifically and in general), and I'm happy to report that it really is! x, x, x, x
#batteries#lithium#lithium ion batteries#lithium battery#sodium#clean energy#energy storage#electrochemistry#lithium mining#pollution#human rights#displacement#forced labor#child labor#mining#good news#hope
3K notes