# Difference between data science and machine learning
naya-mishra · 2 years ago
This article highlights the key differences between Machine Learning and Artificial Intelligence in terms of approach, learning, application, output, complexity, and more.
healthylifewithus · 1 year ago
Complete Excel, AI and Data Science mega bundle.
Unlock Your Full Potential with Our 100-Hour Masterclass: The Ultimate Guide to Excel, Python, and AI.
Why Choose This Course? In today’s competitive job market, mastering a range of technical skills is more important than ever. Our 100-hour comprehensive course is designed to equip you with in-demand capabilities in Excel, Python, and Artificial Intelligence (AI), providing you with the toolkit you need to excel in the digital age.
To read more, click here.
Become an Excel Pro: Delve deep into the intricacies of Excel functions, formulae, and data visualization techniques. Whether you’re dealing with basic tasks or complex financial models, this course will make you an Excel wizard capable of tackling any challenge.
Automate Your Workflow with Python: Scripting in Python doesn’t just mean writing code; it means reclaiming your time. Automate everyday tasks, interact with software applications, and boost your productivity exponentially.
To get the full course, click here.
Turn Ideas into Apps: Discover the potential of Amazon Honeycode to create custom apps tailored to your needs. Whether it’s for data management, content tracking, or inventory, transform your creative concepts into practical solutions.
Be Your Own Financial Analyst: Unlock the financial functionalities of Excel to manage and analyze business data. Create Profit and Loss statements, balance sheets, and conduct forecasting with ease, equipping you to make data-driven decisions.
Embark on an AI Journey: Step into the future with AI and machine learning. Learn to build advanced models, understand neural networks, and employ TensorFlow. Turn big data into actionable insights and predictive models.
Master Stock Prediction: Gain an edge in the market by leveraging machine learning for stock prediction. Learn to spot trends, uncover hidden patterns, and make smarter investment decisions.
Who Is This Course For? Whether you’re a complete beginner or a seasoned professional looking to upskill, this course offers a broad and deep understanding of Excel, Python, and AI, preparing you for an ever-changing work environment.
Invest in Your Future: This isn’t just a course; it’s a game-changer for your career. Enroll now and set yourself on a path to technological mastery and unparalleled career growth.
Don’t Wait, Transform Your Career Today! Click here to get the full course.
nasa · 2 years ago
Caution: Universe Work Ahead 🚧
We only have one universe. That’s usually plenty – it’s pretty big after all! But there are some things scientists can’t do with our real universe that they can do if they build new ones using computers.
The universes they create aren’t real, but they’re important tools to help us understand the cosmos. Two teams of scientists recently created a couple of these simulations to help us learn how our Nancy Grace Roman Space Telescope sets out to unveil the universe’s distant past and give us a glimpse of possible futures.
Caution: you are now entering a cosmic construction zone (no hard hat required)!
This simulated Roman deep field image, containing hundreds of thousands of galaxies, represents just 1.3 percent of the synthetic survey, which is itself just one percent of Roman's planned survey. The full simulation is available here. The galaxies are color coded – redder ones are farther away, and whiter ones are nearer. The simulation showcases Roman’s power to conduct large, deep surveys and study the universe statistically in ways that aren’t possible with current telescopes.
One Roman simulation is helping scientists plan how to study cosmic evolution by teaming up with other telescopes, like the Vera C. Rubin Observatory. It’s based on galaxy and dark matter models combined with real data from other telescopes. It envisions a big patch of the sky Roman will survey when it launches by 2027. Scientists are exploring the simulation to make observation plans so Roman will help us learn as much as possible. It’s a sneak peek at what we could figure out about how and why our universe has changed dramatically across cosmic epochs.
This video begins by showing the most distant galaxies in the simulated deep field image in red. As it zooms out, layers of nearer (yellow and white) galaxies are added to the frame. By studying different cosmic epochs, Roman will be able to trace the universe's expansion history, study how galaxies developed over time, and much more.
As part of the real future survey, Roman will study the structure and evolution of the universe, map dark matter – an invisible substance detectable only by seeing its gravitational effects on visible matter – and distinguish between the leading theories that attempt to explain why the expansion of the universe is speeding up. It will do this by traveling back in time… well, sort of.
Seeing into the past
Looking way out into space is kind of like using a time machine. That’s because the light emitted by distant galaxies takes longer to reach us than light from ones that are nearby. When we look at farther galaxies, we see the universe as it was when their light was emitted. That can help us see billions of years into the past. Comparing what the universe was like at different ages will help astronomers piece together the way it has transformed over time.
This animation shows the type of science that astronomers will be able to do with future Roman deep field observations. The gravity of intervening galaxy clusters and dark matter can lens the light from farther objects, warping their appearance as shown in the animation. By studying the distorted light, astronomers can study elusive dark matter, which can only be measured indirectly through its gravitational effects on visible matter. As a bonus, this lensing also makes it easier to see the most distant galaxies whose light they magnify.
The simulation demonstrates how Roman will see even farther back in time thanks to natural magnifying glasses in space. Huge clusters of galaxies are so massive that they warp the fabric of space-time, kind of like how a bowling ball creates a well when placed on a trampoline. When light from more distant galaxies passes close to a galaxy cluster, it follows the curved space-time and bends around the cluster. That lenses the light, producing brighter, distorted images of the farther galaxies.
Roman will be sensitive enough to use this phenomenon to see how even small masses, like clumps of dark matter, warp the appearance of distant galaxies. That will help narrow down the candidates for what dark matter could be made of.
In this simulated view of the deep cosmos, each dot represents a galaxy. The three small squares show Hubble's field of view, and each reveals a different region of the synthetic universe. Roman will be able to quickly survey an area as large as the whole zoomed-out image, which will give us a glimpse of the universe’s largest structures.
Constructing the cosmos over billions of years
A separate simulation shows what Roman might expect to see across more than 10 billion years of cosmic history. It’s based on a galaxy formation model that represents our current understanding of how the universe works. That means that Roman can put that model to the test when it delivers real observations, since astronomers can compare what they expected to see with what’s really out there.
In this side view of the simulated universe, each dot represents a galaxy whose size and brightness corresponds to its mass. Slices from different epochs illustrate how Roman will be able to view the universe across cosmic history. Astronomers will use such observations to piece together how cosmic evolution led to the web-like structure we see today.
This simulation also shows how Roman will help us learn how extremely large structures in the cosmos were constructed over time. For hundreds of millions of years after the universe was born, it was filled with a sea of charged particles that was almost completely uniform. Today, billions of years later, there are galaxies and galaxy clusters glowing in clumps along invisible threads of dark matter that extend hundreds of millions of light-years. Vast “cosmic voids” are found in between all the shining strands.
Astronomers have connected some of the dots between the universe’s early days and today, but it’s been difficult to see the big picture. Roman’s broad view of space will help us quickly see the universe’s web-like structure for the first time. That’s something that would take Hubble or Webb decades to do! Scientists will also use Roman to view different slices of the universe and piece together all the snapshots in time. We’re looking forward to learning how the cosmos grew and developed to its present state and finding clues about its ultimate fate.
This image, containing millions of simulated galaxies strewn across space and time, shows the areas Hubble (white) and Roman (yellow) can capture in a single snapshot. It would take Hubble about 85 years to map the entire region shown in the image at the same depth, but Roman could do it in just 63 days. Roman’s larger view and fast survey speeds will unveil the evolving universe in ways that have never been possible before.
Roman will explore the cosmos as no telescope ever has before, combining a panoramic view of the universe with a vantage point in space. Each picture it sends back will let us see areas that are at least a hundred times larger than our Hubble or James Webb space telescopes can see at one time. Astronomers will study them to learn more about how galaxies were constructed, dark matter, and much more.
The simulations are much more than just pretty pictures – they’re important stepping stones that forecast what we can expect to see with Roman. We’ve never had a view like Roman’s before, so having a preview helps make sure we can make the most of this incredible mission when it launches.
Learn more about the exciting science this mission will investigate on Twitter and Facebook.
Make sure to follow us on Tumblr for your regular dose of space!
crying-fantasies · 6 months ago
Hiii! Sorry if I keep sending you more ask-
I was wondering what if Mayhem accidentally got transported into the TFP universe? (Considering it's the Idw comics, anything wacky is possible 😭)
How will Soundwave from that universe react to Mayhem? And will Mayhem react to his 'dad'?
Thanks for the ask! Sorry for being so late.
In reality this could go down in many different ways.
TFP Soundwave, as far as I've seen him, holds Laserbeak as his last shard of sanity. Maybe in the past he had all of his cassettes, but after the gladiator pits and the war, that little drone is all he has left. I don't know if the concept of sparklings or new sparks exists in TFP like it does in BV or this AU, but maybe Laserbeak was someone like that for him.
With that in mind, if by some fucked-up chain of events (say, having Sunset nearby with his inherited bad and absurd luck) Mayhem were to fall into any other universe (because after the whole Brainstorm-made-a-time-machine-and-tore-the-whole-reality-apart incident, well, now we have a multiverse!), you can bet that first and foremost Soundwave, the one from this reality, will tear the barriers between realities even further to get his sparkling back.
Meanwhile, TFP Soundwave looks at the sparkling with a big question mark on his visor. Where did he come from? How come his bio-signature is so similar to his own, yet he can't pinpoint where, or what, the other part came from? How old is he? It depends: if Mayhem is still a newly forged mech, you can bet TFP Soundwave will guard him like a jealous mother (at that stage they aren't so different from human babies), and no one gets to look at such a delicate little sparkle of joy; if Megatron asks nicely, and is obviously in his right mind, maybe he would consider it. If Mayhem is already an operative mech, he is much more difficult to hide, and Soundwave has to drag him back more than once when he tries to reach the Autobots. Mayhem learned a lot from his sire back in his own reality, and by this point everyone knows about him; the Vehicons know better than to stop him, because he can only get so far before a pair of tentacles catches him and drags him back again, no matter how his sharp digits claw at the ground. Once back inside, he is given a portion of energon to please be calm.
He would never let him interact with Starscream, for obvious reasons. Maybe Breakdown can talk with him more comfortably, but there is absolutely no permission to talk to, interact with, or even be in the presence of Shockwave or Knockout; Soundwave doesn't like the look of "science" they get when they see Mayhem.
Now with SG Soundwave, believe it or not, Mayhem feels even more horrified, for different reasons. All his life his sire, his father, has always been taciturn, quiet, maybe a little gloomy, but overall Mayhem always thrived in his affection; while it was never as showy as what he has seen with other families, he knows his father loves him through little things.
SG Soundwave makes him sputter with his headband and overall colors, and the way he talks and moves is so unnatural it makes him feel itchy. But above all else, what throws him completely out of his depth is that SG Soundwave gives him a quick scan, processes the data in two clicks, and promptly tackles him down with a hug, calling him "my baby".
In the SG universe they know of different realities thanks to Cliffjumper, and SG Soundwave has to recompose himself for a moment and let go of the young mech; there is paint transfer on both of them from the obvious crash.
If TFP Soundwave was creepy to Mayhem, he can't even begin to describe SG Soundwave, though he does recognize him as creepy in his own way.
Obviously this is the most talkative Soundwave, "How many centuries are you? Have you been well from where you come from? Are you a medic? So proud! Do your do my young mech! Did you came from the hotspot in Kaon like me? If you didn't that's fine too!"
And sure enough, this Soundwave gets on the slightly different wavelength Mayhem does. "Do you have a sire? A carrier? Did I've you alone?" he asks as he shows the young mech to every other righteous Decepticon. Mayhem has never seen other Decepticons take to him so fast and easy; they look happy to see him, totally different from his reality. "Such a handsome young lad!" somebot says, and SG Soundwave is quick to answer, "Sure is! Bet he got it from his other mentor!" as his visor shines green with absolute glee. The questions from before come back, and Mayhem doesn't know if he should mangle this reality and tell him. He decides not to, because everything is backwards here; what if you are different too?
Mayhem doesn't want to answer, and he gets another Cybertronian equivalent of an aneurysm when he meets the cassettes of this reality. At least he got to meet SG Ravage; he never met the one from his own reality, never had the chance.
SG Soundwave is maybe the only one to help him return to his reality, retracting his battle mask to give a little peck on his helm, "Take care of you and the fam', Lil' doc".
Once Mayhem is safely returned, SG Soundwave makes a beeline to you, who seem to be having a very bad day, your usual sour and tired expression somehow worse, drinking some kind of human beverage to keep your sanity intact. But every ounce of sanity is thrown out of the window when he sits next to you, hands together as if he is begging or praying. Your coffee is dripping from your mouth as he says "I wanna've your sparkling, my amor", because he recognized the wavelength of Mayhem's spark mimicking that of a human, a human he knows very well.
Flatline has to come and help you as your coffee goes down the wrong way. Soundwave has this idea in his helm and nothing, nothing, will take it away.
Your destiny is sealed as Megatron looks at you helplessly, maybe you have a reason to date Soundwave now.
codingquill · 10 months ago
NP-Completeness
Machine learning #1
Complexity Recap
For readers who don't already know what algorithmic complexity is: the complexity of an algorithm is a measure of its efficiency. Say I have a problem P for which I've proposed two algorithms, A and B, as solutions. How can I determine which algorithm is more efficient in terms of time and memory? By analyzing their complexity (space complexity and time complexity). One common way of expressing complexity is big-O notation.
Exponential Complexity O(2^n)
In algorithms, this complexity arises when the time or space required for computation doubles with each additional element in the input. Problems with exponential complexity are typically NP-hard or NP-complete.
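As a concrete illustration (a minimal sketch, not an optimized implementation), here is a brute-force solver for subset sum, a classic NP-complete problem: it examines every subset, so each extra input element doubles the amount of work.

```python
from itertools import combinations

def subset_sum_brute_force(nums, target):
    """Try every subset: 2^n candidates for n numbers, hence O(2^n) time."""
    n = len(nums)
    for r in range(n + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)  # found a subset that sums to target
    return None  # exhausted all 2^n subsets

# Each additional element doubles the number of subsets to examine.
result = subset_sum_brute_force([3, 9, 8, 4, 5, 7], 15)
```

With 6 elements the loop inspects at most 64 subsets; with 60 elements it would be over 10^18, which is why this approach does not scale.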
P-Class and NP-Class
The P-class represents the set of polynomial problems, i.e., problems for which a solution can be found in polynomial time.
The NP-class represents the set of non-deterministic polynomial problems, i.e., problems for which a solution can be verified in polynomial time. This means it's easy to verify if a proposed solution is correct, but finding that solution might be difficult.
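To make the "easy to verify" part concrete, here is a small sketch: checking a proposed subset-sum certificate (a list of indices) takes linear time, even though finding one may take exponential time with known algorithms.

```python
def verify_subset_sum(nums, target, certificate):
    """Verify a certificate in O(n) time: the essence of being in NP."""
    indices = set(certificate)
    return (len(indices) == len(certificate)              # no repeated indices
            and all(0 <= i < len(nums) for i in indices)  # all indices valid
            and sum(nums[i] for i in indices) == target)  # sums to target
```

Verification is a single pass over the certificate; no search is involved.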
The P = NP Question
The question of whether P equals NP is one of the most important open problems in theoretical computer science. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute in 2000, each carrying a one-million-dollar reward for whoever solves it.
NP-Complete Problem Class
NP-complete problems are problems that are both in NP and at least as hard as every other problem in NP. Solving any of these problems in polynomial time would be a major breakthrough in theoretical computer science and could have a significant impact in many areas such as optimization, planning, and cryptography. There is no separate monetary reward for solving a specific NP-complete problem, but doing so efficiently would settle the P = NP question and would certainly be a landmark achievement.
From an NP Problem to Another 
A polynomial reduction is a process that transforms an instance of one problem, A, into an equivalent instance of another problem, B, in polynomial time. For NP-complete problems, this means that if you can efficiently solve problem B, then you can also efficiently solve problem A. As a consequence, if any single NP-complete problem is solved in polynomial time, all of the others can be solved in polynomial time too.
Approximation Algorithms
Computer scientists have devised approximation algorithms to tackle NP-complete problems efficiently, since finding an exact solution in polynomial time is believed to be out of reach. These algorithms seek a solution close to the optimum, although it may not be the optimal one.
We can divide them into two categories:
Heuristics: These are problem-specific algorithms that directly propose good, though not necessarily optimal, solutions for NP-complete problems.
Metaheuristics: Unlike heuristics, metaheuristics are general strategies that guide the search for a solution without prescribing a specific algorithm.
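As a small illustration of a heuristic (a sketch of one well-known technique, not the only approach), here is the classic maximal-matching approximation for Vertex Cover, an NP-complete problem: greedily taking both endpoints of each uncovered edge yields a cover at most twice the size of an optimal one.

```python
def vertex_cover_approx(edges):
    """Greedy maximal-matching heuristic: the cover is at most 2x the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover

# A path graph 1-2-3-4: the optimum cover is {2, 3}; the heuristic may pick more,
# but never more than twice the optimum.
cover = vertex_cover_approx([(1, 2), (2, 3), (3, 4)])
```

The guarantee comes from the fact that any cover must contain at least one endpoint of every edge the heuristic picked, and those edges share no endpoints.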
pandeypankaj · 4 months ago
What's the difference between Machine Learning and AI?
Machine Learning and Artificial Intelligence (AI) are often used interchangeably, but they represent distinct concepts within the broader field of data science. Machine Learning refers to algorithms that enable systems to learn from data and make predictions or decisions based on that learning. It's a subset of AI, focusing on statistical techniques and models that allow computers to perform specific tasks without explicit programming.
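That idea, learning parameters from data rather than hand-coding rules, can be sketched in a few lines. The data below is made up for illustration, and the "model" is a simple least-squares line fit:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the parameters a and b are
    learned from examples, not programmed explicitly."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# The underlying rule (roughly y = 2x + 1) is never written down anywhere;
# it is recovered from noisy examples.
a, b = fit_line([1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8])
```

The same principle, with far more parameters and richer models, underlies modern Machine Learning systems.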
On the other hand, AI encompasses a broader scope, aiming to simulate human intelligence in machines. It includes Machine Learning as well as other disciplines like natural language processing, computer vision, and robotics, all working towards creating intelligent systems capable of reasoning, problem-solving, and understanding context.
Understanding this distinction is crucial for anyone interested in leveraging data-driven technologies effectively. Whether you're exploring career opportunities, enhancing business strategies, or simply curious about the future of technology, diving deeper into these concepts can provide invaluable insights.
In conclusion, while Machine Learning focuses on algorithms that learn from data to make decisions, Artificial Intelligence encompasses a broader range of technologies aiming to replicate human intelligence. Understanding these distinctions is key to navigating the evolving landscape of data science and technology. For those eager to deepen their knowledge and stay ahead in this dynamic field, exploring further resources and insights can provide valuable perspectives and opportunities for growth.
nunuslab24 · 6 months ago
What are AI, AGI, and ASI? And the positive impact of AI
Understanding artificial intelligence (AI) involves more than just recognizing lines of code or scripts; it encompasses developing algorithms and models capable of learning from data and making predictions or decisions based on what they’ve learned. To truly grasp the distinctions between the different types of AI, we must look at their capabilities and potential impact on society.
To simplify, we can categorize these types of AI by assigning a power level from 1 to 3, with 1 being the least powerful and 3 being the most powerful. Let’s explore these categories:
1. Artificial Narrow Intelligence (ANI)
Also known as Narrow AI or Weak AI, ANI is the most common form of AI we encounter today. It is designed to perform a specific task or a narrow range of tasks. Examples include virtual assistants like Siri and Alexa, recommendation systems on Netflix, and image recognition software. ANI operates under a limited set of constraints and can’t perform tasks outside its specific domain. Despite its limitations, ANI has proven to be incredibly useful in automating repetitive tasks, providing insights through data analysis, and enhancing user experiences across various applications.
2. Artificial General Intelligence (AGI)
Referred to as Strong AI, AGI represents the next level of AI development. Unlike ANI, AGI can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. It can reason, plan, solve problems, think abstractly, and learn from experiences. While AGI remains a theoretical concept as of now, achieving it would mean creating machines capable of performing any intellectual task that a human can. This breakthrough could revolutionize numerous fields, including healthcare, education, and science, by providing more adaptive and comprehensive solutions.
3. Artificial Super Intelligence (ASI)
ASI surpasses human intelligence and capabilities in all aspects. It represents a level of intelligence far beyond our current understanding, where machines could outthink, outperform, and outmaneuver humans. ASI could lead to unprecedented advancements in technology and society. However, it also raises significant ethical and safety concerns. Ensuring ASI is developed and used responsibly is crucial to preventing unintended consequences that could arise from such a powerful form of intelligence.
The Positive Impact of AI
When regulated and guided by ethical principles, AI has the potential to benefit humanity significantly. Here are a few ways AI can help us become better:
• Healthcare: AI can assist in diagnosing diseases, personalizing treatment plans, and even predicting health issues before they become severe. This can lead to improved patient outcomes and more efficient healthcare systems.
• Education: Personalized learning experiences powered by AI can cater to individual student needs, helping them learn at their own pace and in ways that suit their unique styles.
• Environment: AI can play a crucial role in monitoring and managing environmental changes, optimizing energy use, and developing sustainable practices to combat climate change.
• Economy: AI can drive innovation, create new industries, and enhance productivity by automating mundane tasks and providing data-driven insights for better decision-making.
In conclusion, while AI, AGI, and ASI represent different levels of technological advancement, their potential to transform our world is immense. By understanding their distinctions and ensuring proper regulation, we can harness the power of AI to create a brighter future for all.
frank-olivier · 21 days ago
The Mathematical Foundations of Machine Learning
In the world of artificial intelligence, machine learning is a crucial component that enables computers to learn from data and improve their performance over time. However, the math behind machine learning is often shrouded in mystery, even for those who work with it every day. Anil Ananthaswami, author of the book "Why Machines Learn," sheds light on the elegant mathematics that underlies modern AI, and his journey is a fascinating one.
Ananthaswami's interest in machine learning began when he started writing about it as a science journalist. His software engineering background sparked a desire to understand the technology from the ground up, leading him to teach himself coding and build simple machine learning systems. This exploration eventually led him to appreciate the mathematical principles that underlie modern AI. As Ananthaswami notes, "I was amazed by the beauty and elegance of the math behind machine learning."
Ananthaswami highlights the elegance of machine learning mathematics, which goes beyond the commonly known subfields of calculus, linear algebra, probability, and statistics. He points to specific theorems and proofs, such as the 1959 proof related to artificial neural networks, as examples of the beauty and elegance of machine learning mathematics. For instance, the concept of gradient descent, a fundamental algorithm used in machine learning, is a powerful example of how math can be used to optimize model parameters.
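As a hedged sketch of that idea, here is gradient descent in its simplest form, minimizing a one-dimensional quadratic; real machine learning systems apply the same update loop to millions of parameters.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move downhill by a small learning-rate step
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each iteration shrinks the distance to the minimum by a constant factor here, which is exactly the kind of convergence argument the mathematics of machine learning makes precise.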
Ananthaswami emphasizes the need for a broader understanding of machine learning among non-experts, including science communicators, journalists, policymakers, and users of the technology. He believes that only when we understand the math behind machine learning can we critically evaluate its capabilities and limitations. This is crucial in today's world, where AI is increasingly being used in various applications, from healthcare to finance.
A deeper understanding of machine learning mathematics has significant implications for society. It can help us to evaluate AI systems more effectively, develop more transparent and explainable AI systems, and address AI bias and ensure fairness in decision-making. As Ananthaswami notes, "The math behind machine learning is not just a tool, but a way of thinking that can help us create more intelligent and more human-like machines."
The Elegant Math Behind Machine Learning (Machine Learning Street Talk, November 2024)
Matrices are used to organize and process complex data, such as images, text, and user interactions, making them a cornerstone in applications like Deep Learning (e.g., neural networks), Computer Vision (e.g., image recognition), Natural Language Processing (e.g., language translation), and Recommendation Systems (e.g., personalized suggestions). To leverage matrices effectively, AI relies on key mathematical concepts like Matrix Factorization (for dimension reduction), Eigendecomposition (for stability analysis), Orthogonality (for efficient transformations), and Sparse Matrices (for optimized computation).
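One of those concepts, matrix factorization for dimension reduction, can be sketched with a truncated SVD in NumPy. The matrix sizes, random seed, and rank below are arbitrary choices for illustration:

```python
import numpy as np

# A synthetic "data matrix" built from only 4 underlying factors (rank <= 4),
# standing in for real data such as user-item interactions.
rng = np.random.default_rng(0)
data = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 100))

# Truncated SVD keeps the k strongest factors; because the data truly has
# rank <= 4, keeping k = 4 factors reconstructs it almost exactly.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 4
approx = (U[:, :k] * s[:k]) @ Vt[:k]
```

Storing the three small factors instead of the full matrix is the dimension-reduction payoff; recommendation systems exploit the same idea at much larger scale.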
The Applications of Matrices - What I wish my teachers told me way earlier (Zach Star, October 2019)
Transformers are a type of neural network architecture introduced in 2017 by Vaswani et al. in the paper “Attention Is All You Need”. They revolutionized the field of NLP by outperforming traditional recurrent neural network (RNN) and convolutional neural network (CNN) architectures in sequence-to-sequence tasks.

The primary innovation of transformers is the self-attention mechanism, which allows the model to weigh the importance of different words in the input data irrespective of their positions in the sentence. This is particularly useful for capturing long-range dependencies in text, which was a challenge for RNNs due to vanishing gradients.

Transformers have become the standard for machine translation, offering state-of-the-art results in translating between languages. They are used for both abstractive and extractive summarization, generating concise summaries of long documents, and they help in understanding the context of questions and identifying relevant answers from a given text. By analyzing the context and nuances of language, transformers can accurately determine the sentiment behind text. While initially designed for sequential data, variants of transformers (e.g., Vision Transformers, ViT) have been successfully applied to image recognition tasks, treating images as sequences of patches. Transformers are also used to improve the accuracy of speech-to-text systems by better modeling the sequential nature of audio data, and the self-attention mechanism can be beneficial for understanding patterns in time series data, leading to more accurate forecasts.
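The self-attention mechanism described above can be sketched in a few lines of NumPy. This is a simplified single-head version with random projection matrices and no masking, so it illustrates the mechanism rather than a production implementation:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # every token vs. every token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V  # each output mixes all positions, regardless of distance

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 8))  # 5 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

Because the attention weights connect every position to every other in one step, long-range dependencies cost no more than short-range ones, which is the key advantage over RNNs noted above.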
Attention is all you need (Umar Hamil, May 2023)
Geometric deep learning is a subfield of deep learning that focuses on the study of geometric structures and their representation in data. This field has gained significant attention in recent years.
Michael Bronstein: Geometric Deep Learning (MLSS Kraków, December 2023)
Traditional Geometric Deep Learning, while powerful, often relies on the assumption of smooth geometric structures. However, real-world data frequently resides in non-manifold spaces where such assumptions are violated. Topology, with its focus on the preservation of proximity and connectivity, offers a more robust framework for analyzing these complex spaces. The inherent robustness of topological properties against noise further solidifies the rationale for integrating topology into deep learning paradigms.
Cristian Bodnar: Topological Message Passing (Michael Bronstein, August 2022)
Sunday, November 3, 2024
blubberquark · 10 months ago
Language Models and AI Safety: Still Worrying
Previously, I have explained how modern "AI" research has painted itself into a corner, inventing the science fiction rogue AI scenario where a system is smarter than its guardrails but can be easily outwitted by humans.
Two recent examples have confirmed my hunch about AI safety of generative AI. In one well-circulated case, somebody generated a picture of an "ethnically ambiguous Homer Simpson", and in another, somebody created a picture of "baby, female, hispanic".
These incidents show that generative AI still filters prompts and outputs, instead of A) ensuring the correct behaviour during training/fine-tuning, B) manually generating, re-labelling, or pruning the training data, C) directly modifying the learned weights to affect outputs.
In general, it is not surprising that big corporations like Google and Microsoft and non-profits like OpenAI are prioritising racist language or racial composition of characters in generated images over abuse of LLMs or generative art for nefarious purposes, content farms, spam, captcha solving, or impersonation. Somebody with enough criminal energy to use ChatGPT to automatically impersonate your grandma based on your message history after he hacked the phones of tens of thousands of grandmas will be blamed for his acts. Somebody who unintentionally generates a racist picture based on an ambiguous prompt will blame the developers of the software if he's offended. Scammers could have enough money and incentives to run the models on their own machine anyway, where corporations have little recourse.
There is precedent for this. Word2vec, published in 2013, was called a "sexist algorithm" in attention-grabbing headlines, even though the bodies of such articles usually conceded that the word2vec embedding just reproduced patterns inherent in the training data: Obviously word2vec does not have any built-in gender biases, it just departs from the dictionary definitions of words like "doctor" and "nurse" and learns gendered connotations because in the training corpus doctors are more often men, and nurses are more often women. Now even that last explanation is oversimplified. The difference between "man" and "woman" is not quite the same as the difference between "male" and "female", or between "doctor" and "nurse". In the English language, "man" can mean "male person" or "human person", and "nurse" can mean "feeding a baby milk from your breast" or a kind of skilled health care worker who works under the direction and supervision of a licensed physician. Arguably, the word2vec algorithm picked up on properties of the word "nurse" that are part of the meaning of the word (at least one meaning, according to the dictionary), not properties that are contingent on our sexist world.
I don't want to come down against "political correctness" here. I think it's good if ChatGPT doesn't tell a girl that girls can't be doctors. You have to understand that not accidentally saying something sexist or racist is a big deal, or at least Google, Facebook, Microsoft, and OpenAI all think so. OpenAI are responding to a huge incentive when they add snippets like "ethnically ambiguous" to DALL-E 3 prompts.
If this is so important, why are they re-writing prompts, then? Why are they not doing A, B, or C? Back in the days of word2vec, there was a simple but effective solution to automatically identify gendered components in the learned embedding, and zero out the difference. It's so simple you'll probably kick yourself reading it because you could have published that paper yourself without understanding how word2vec works.
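To make that "simple but effective solution" concrete, here is a minimal NumPy sketch of the idea: estimate a gender direction from a word pair and project it out of supposedly gender-neutral words. The vectors below are toy numbers invented for illustration, not real word2vec output.

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for real word2vec vectors.
# (All numbers are hypothetical, for illustration only.)
emb = {
    "man":    np.array([ 1.0,  0.2,  0.5,  0.1]),
    "woman":  np.array([-1.0,  0.3,  0.5,  0.1]),
    "doctor": np.array([ 0.4,  0.9,  0.1,  0.7]),
    "nurse":  np.array([-0.5,  0.8,  0.2,  0.6]),
}

# 1. Estimate the gender direction from one word pair (in practice you
#    would average over many pairs like he/she, king/queen, ...).
g = emb["man"] - emb["woman"]
g = g / np.linalg.norm(g)

def neutralize(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

# 2. Zero out the gendered component of words that should be neutral.
debiased = {w: neutralize(emb[w], g) for w in ("doctor", "nurse")}
```

After neutralization, "doctor" and "nurse" have (numerically) zero projection onto the gender direction, which is exactly the kind of post-hoc fix that is possible with a linear embedding but has no obvious analogue inside a transformer.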
I can only conclude from the behaviour of systems like DALL-E 3 that they are using simple prompt re-writing (or a more sophisticated approach that behaves just as prompt re-writing would, and performs as badly) because prompt re-writing is the best thing they can come up with. Transformers are complex and inscrutable. You can't just reach in there, isolate a concept like "human person", and rebalance the composition.
The bitter lesson tells us that big amorphous approaches to AI perform better and scale better than manually written expert systems, ontologies, or description logics. More unsupervised data beats less but carefully labelled data. Even when the developers of these systems have a big incentive not to reproduce a certain pattern from the data, they can't fix such a problem at the root. Their solution is instead to use a simple natural language processing system, a dumb system they can understand, and wrap it around the smart but inscrutable transformer-based language model and image generator.
What does that mean for "sleeper agent AI"? You can't really trust a model that somebody else has trained, but can you even trust a model you have trained, if you haven't carefully reviewed all the input data? Even OpenAI can't trust their own models.
15 notes · View notes
spacetimewithstuartgary · 2 months ago
Text
Algorithm used on Mars rover helps scientists on Earth see data in a new way
A new algorithm tested on NASA's Perseverance Rover on Mars may lead to better forecasting of hurricanes, wildfires, and other extreme weather events that impact millions globally.
Georgia Tech Ph.D. student Austin P. Wright is first author of a paper that introduces Nested Fusion. The new algorithm improves scientists' ability to search for past signs of life on the Martian surface.
This innovation supports NASA's Mars 2020 mission. In addition, scientists from other fields working with large, overlapping datasets can use Nested Fusion's methods for their studies.
Wright presented Nested Fusion at the 2024 International Conference on Knowledge Discovery and Data Mining (KDD 2024), where it was a runner-up for the best paper award. The work is published in the Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
"Nested Fusion is really useful for researchers in many different domains, not just NASA scientists," said Wright. "The method visualizes complex datasets that can be difficult to get an overall view of during the initial exploratory stages of analysis."
Nested Fusion combines datasets with different resolutions to produce a single, high-resolution visual distribution. Using this method, NASA scientists can more easily analyze multiple datasets from various sources at the same time. This can lead to faster studies of Mars' surface composition to find clues of previous life.
The algorithm demonstrates how data science impacts traditional scientific fields like chemistry, biology, and geology.
Even further, Wright is developing Nested Fusion applications to model shifting climate patterns, plant and animal life, and other concepts in the earth sciences. The same method can combine overlapping datasets from satellite imagery, biomarkers, and climate data.
"Users have extended Nested Fusion and similar algorithms toward earth science contexts, which we have received very positive feedback," said Wright, who studies machine learning (ML) at Georgia Tech.
"Cross-correlational analysis takes a long time to do and is not done in the initial stages of research when patterns appear and form new hypotheses. Nested Fusion enables people to discover these patterns much earlier."
Wright is the data science and ML lead for PIXLISE, the software that NASA JPL scientists use to study data from the Mars Perseverance Rover.
Perseverance uses its Planetary Instrument for X-ray Lithochemistry (PIXL) to collect data on mineral composition of Mars' surface. PIXL's two main tools that accomplish this are its X-ray Fluorescence (XRF) Spectrometer and Multi-Context Camera (MCC).
When PIXL scans a target area, it creates two co-aligned datasets from the components. XRF collects a sample's fine-scale elemental composition. MCC produces images of a sample to gather visual and physical details like size and shape.
A single XRF spectrum corresponds to approximately 100 MCC imaging pixels for every scan point. Each tool's unique resolution makes mapping between overlapping data layers challenging. However, Wright and his collaborators designed Nested Fusion to overcome this hurdle.
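To illustrate the underlying alignment problem (this is not the actual Nested Fusion algorithm; shapes and values below are invented), a naive baseline is to average each block of fine-resolution MCC pixels down to the coarse XRF grid so the two layers can be combined per scan point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a 5x5 grid of XRF scan points, where each point
# overlaps a 10x10 patch of MCC pixels (~100 pixels per spectrum).
mcc = rng.random((50, 50))      # fine-resolution image values
xrf = rng.random((5, 5, 8))     # coarse grid of 8-channel spectra

# Naive alignment: block-average the MCC image down to the XRF grid so
# both layers share one resolution. (Nested Fusion itself does something
# more sophisticated to avoid throwing away the fine-scale detail.)
patch = 10
mcc_coarse = mcc.reshape(5, patch, 5, patch).mean(axis=(1, 3))

# A joint "fused" feature vector per scan point: spectrum + image statistic.
fused = np.concatenate([xrf, mcc_coarse[..., None]], axis=-1)
```

The point of the sketch is only to show why the resolution mismatch is a problem: the simple block average discards the per-pixel detail that makes the MCC data valuable in the first place.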
In addition to progressing data science, Nested Fusion improves NASA scientists' workflow. Using the method, a single scientist can form an initial estimate of a sample's mineral composition in a matter of hours. Before Nested Fusion, the same task required days of collaboration between teams of experts on each different instrument.
"I think one of the biggest lessons I have taken from this work is that it is valuable to always ground my ML and data science problems in actual, concrete use cases of our collaborators," Wright said.
"I learn from collaborators what parts of data analysis are important to them and the challenges they face. By understanding these issues, we can discover new ways of formalizing and framing problems in data science."
Nested Fusion was runner-up for best paper in the applied data science track. Hundreds of other papers were presented at the conference's research track, workshops, and tutorials.
Wright's mentors, Scott Davidoff and Polo Chau, co-authored the Nested Fusion paper. Davidoff is a principal research scientist at the NASA Jet Propulsion Laboratory. Chau is a professor at the Georgia Tech School of Computational Science and Engineering (CSE).
"I was extremely happy that this work was recognized with the best paper runner-up award," Wright said. "This kind of applied work can sometimes be hard to find the right academic home, so finding communities that appreciate this work is very encouraging."
3 notes · View notes
Text
What is Data Science? Introduction, Basic Concepts & Process
What is data science? A complete introduction for beginners through advanced learners. If you've searched "what is data science," the short answer: data science covers data analysis, data storage, databases, and more.
1 note · View note
head-post · 3 months ago
Text
4.4 billion people can't get safe drinking water
About 4.4 billion people around the world lack access to safe drinking water, according to a new study released on Thursday.
The figure is nearly double previous estimates by the World Health Organisation, according to the study published in the scientific journal Science.
Swiss scientists used computer modelling to estimate the level of access to water in different regions. The study analysed water access data collected from 64,723 households from 27 low- and middle-income countries between 2016 and 2020. The surveys investigated the conditions of the water supply, including its protection from chemical and faecal contamination.
The collected data was fed into machine learning algorithms, which were then augmented with global geospatial data such as climatic conditions, topography, hydrology and population density. Using this model, the researchers drew conclusions about water access in other countries with similar characteristics.
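For illustration only, here is a toy scikit-learn sketch of that extrapolation pattern (synthetic data; not the study's actual model or covariates): train a classifier on surveyed households, then apply it to unsurveyed regions with similar covariates.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)

# Synthetic stand-ins for the study's inputs: per-household geospatial
# covariates (think climate index, elevation, population density) and a
# binary label for access to safely managed drinking water.
X_surveyed = rng.normal(size=(500, 3))
y_surveyed = (X_surveyed[:, 2] + 0.5 * X_surveyed[:, 0] > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X_surveyed, y_surveyed)

# Extrapolate to regions with similar covariates but no survey coverage.
X_unsurveyed = rng.normal(size=(100, 3))
estimated_access_rate = clf.predict(X_unsurveyed).mean()
```

The real study's model, features, and validation are far more involved; this only shows the train-on-surveyed, predict-on-unsurveyed structure the article describes.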
The analysis showed that sub-Saharan Africa, South Asia and East Asia have the greatest problems with access to safe water. In these regions, bacterial and chemical contamination and lack of infrastructure remain major problems. In sub-Saharan Africa, for example, about 650 million people do not have access to drinking water directly in or near their homes.
Although the study did not focus on high-income countries, the researchers recognise that there may be populations with limited access to clean water in these regions as well.
Read more HERE
3 notes · View notes
teamcuriosity · 1 year ago
Text
[A MewTube video from Team Curiosity's official channel is attached. The title is "TCBFSC—Dr. Ryan Alston: Universe Types—Parallel vs. Alternate."]
Transcript:
Dr. Ryan Alston walks out from stage left. He approaches the podium without looking at the stage or behind himself, keeping his eyes dead focused ahead. As he sets a small deck of flash cards in front of him and switches the mic on, he leers out over the crowd. With the PowerPoint behind him flipped on, he leans into the mic and speaks in a stern, yet stilted tone.
Ryan: Good evening. I am Dr. Ryan Alston, admin at Team Curiosity. I'm a theore—physicist. I am a physicist specializing in space-time anomalies. Today, you're going to learn about the multiverse. Specifically, the difference between parallel and alternate universes.
He seems to have the wind taken out of his ass momentarily at the flubbed line, but maintains speed and flips the presentation slide. The next slide simply says "timelines" and has an illustration with a person standing at the base of a branching path.
Ryan: Now, to clarify first: I'm not talking about timelines. Timelines can create something akin to a parallel universe effect, but time itself being relative, it is entirely possible to shift forward, backward, and along different branches of a timeline within the same universe. There are a few pokemon who display this trait—rare as they are.
He turns to the next slide, gesturing to it with his open hand. It simply says "parallel universes."
Ryan: I am talking about universes. Parallel universes are the ones most often confused for different timelines, and are fairly well-documented by Alolan scientists. These universes can appear wildly different than your own, even containing different creatures. However, the same fundamental physics and base concepts will still apply. The Ultra Beasts of the wormholes in Alola are simply pokemon from another universe, and they follow the same rules as powerful pokemon when brought here.
He flips the slide. Silhouettes of Solgaleo and Lunala are portrayed.
Ryan: There are a few pokemon known to be able to traverse the universes. They're... Again, rare. But they are able to traverse to parallel universes and return safely. The research is secretive, and we are working on uncovering the true extent of their capabilities, but it is there.
Ryan then flips to the next slide. It says "alternate universes".
Ryan: Alternate universes are entirely different. They abide by different physics, different rules. The fundamentals have shifted entirely. Maybe they look vastly different, maybe they don't, but their core set of universal constants do not align with your own. As some of you may know, I am a faller. I am from one of these universe types. Where I come from, pokemon do not exist, and our laws of physics vary drastically. This makes us an alternate.
He flips to the next slide. It is blank. His expression almost seems solemn, but his brow remains stern.
Ryan: ... As far as we know, there are no ways to force travel between these universes. There are known cases of people falling into this universe from an alternate, but there are no cases of them being able to return. Research is in progress to change this, but it's all experimental now.
He pauses for a moment, then shuts the presentation off. He momentarily shuts the microphone off to take a breath before flipping it back on.
Ryan: We may track down a pokemon at some point in the future capable of travel, or maybe we might build a machine to open the path by force. Either way, it is something we cannot accomplish so long as the research currently in place remains as secretive as it is. Science is built on the shoulders of giants, after all. However, when it comes to raw data, that's for the scientists to figure out the intricacies of. As for the rest of you... I hope you at least remember this difference.
He takes a bow, then walks off of the stage as stoically as he came in.
23 notes · View notes
mvishnukumar · 3 months ago
Text
Can statistics and data science methods make predicting a football game easier?
Hi,
Statistics and data science methods can significantly enhance the ability to predict the outcomes of football games, though they cannot guarantee results due to the inherent unpredictability of sports. Here’s how these methods contribute to improving predictions:
Data Collection and Analysis: 
Collecting and analyzing historical data on football games provides a basis for understanding patterns and trends. This data can include player statistics, team performance metrics, match outcomes, and more. Analyzing this data helps identify factors that influence game results and informs predictive models.
Feature Engineering:
Feature engineering involves creating and selecting relevant features (variables) that contribute to the prediction of game outcomes. For football, features might include team statistics (e.g., goals scored, possession percentage), player metrics (e.g., player fitness, goals scored), and contextual factors (e.g., home/away games, weather conditions). Effective feature engineering enhances the model’s ability to capture important aspects of the game.
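For illustration, a small pandas sketch of this kind of feature engineering (the match records and column names are invented): derive goal difference and a "form" feature, the team's mean goal difference over its previous matches.

```python
import pandas as pd

# Hypothetical match records; column names are made up for illustration.
matches = pd.DataFrame({
    "team":          ["A", "A", "A", "B", "B", "B"],
    "goals_for":     [2, 1, 3, 0, 2, 1],
    "goals_against": [1, 1, 0, 2, 2, 1],
    "is_home":       [1, 0, 1, 0, 1, 0],
})

# Engineered features: goal difference, and a rolling form measure
# (mean goal difference over each team's *previous* matches, so the
# feature never leaks the outcome of the match being predicted).
matches["goal_diff"] = matches["goals_for"] - matches["goals_against"]
matches["form"] = (
    matches.groupby("team")["goal_diff"]
    .transform(lambda s: s.shift(1).expanding().mean())
)
```

The `shift(1)` before the expanding mean is the important design choice: a form feature computed over past matches only is safe to use as a model input, whereas including the current match would leak the label.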
Predictive Modeling: 
Various predictive models can be used to forecast football game outcomes. Common models include:
Logistic Regression: This model estimates the probability of a binary outcome (e.g., win or lose) based on input features.
Random Forest: An ensemble method that builds multiple decision trees and aggregates their predictions. It can handle complex interactions between features and improve accuracy.
Support Vector Machines (SVM): A classification model that finds the optimal hyperplane to separate different classes (e.g., win or lose).
Poisson Regression: Specifically used for predicting the number of goals scored by teams, based on historical goal data.
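As a toy illustration of the first model in the list, here is a scikit-learn logistic regression fit on synthetic match features (all feature names, coefficients, and numbers are invented; a real model would use engineered features like those described above).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic training data: [home_form, away_form, home_advantage] per match,
# with a home win made more likely when home form exceeds away form.
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Estimated probability that the home side wins a hypothetical match
# with strong home form, weak away form, and home advantage.
p_home_win = model.predict_proba([[1.2, -0.3, 1.0]])[0, 1]
```

The output is a probability rather than a hard win/lose label, which is what makes logistic regression a natural fit for match-outcome forecasting.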
Machine Learning Algorithms: 
Advanced machine learning algorithms, such as gradient boosting and neural networks, can be employed to enhance predictive accuracy. These algorithms can learn from complex patterns in the data and improve predictions over time.
Simulation and Monte Carlo Methods: 
Simulation techniques and Monte Carlo methods can be used to model the randomness and uncertainty inherent in football games. By simulating many possible outcomes based on historical data and statistical models, predictions can be made with an understanding of the variability in results.
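A toy Monte Carlo sketch of that idea: assume per-team expected goals (the numbers here are invented, as might come out of a Poisson regression on historical data), simulate many matches, and estimate win/draw/loss probabilities from the simulated outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expected goals for each side (e.g. from a Poisson regression).
lam_home, lam_away = 1.8, 1.1
n_sims = 100_000

# Simulate goal counts for many hypothetical replays of the match.
home_goals = rng.poisson(lam_home, n_sims)
away_goals = rng.poisson(lam_away, n_sims)

# Outcome probabilities estimated from the simulated matches.
p_home = np.mean(home_goals > away_goals)
p_draw = np.mean(home_goals == away_goals)
p_away = np.mean(home_goals < away_goals)
```

Because each simulated match lands in exactly one of the three outcomes, the estimated probabilities sum to one, and the spread across simulations gives a direct sense of the variability the section describes.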
Model Evaluation and Validation: 
Evaluating the performance of predictive models is crucial. Metrics such as accuracy, precision, recall, and F1 score can assess the model’s effectiveness. Cross-validation techniques ensure that the model generalizes well to new, unseen data and avoids overfitting.
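A minimal sketch of that evaluation step, using scikit-learn's `cross_val_score` on synthetic data (features and labels are made up for illustration): 5-fold cross-validation scores the model on held-out folds, which is what guards against overfitting to a single train/test split.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic match features and outcomes (same invented scheme as before).
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

# 5-fold cross-validation: each fold is held out once for evaluation.
scores = cross_val_score(
    RandomForestClassifier(n_estimators=50, random_state=0), X, y, cv=5
)
mean_acc = scores.mean()
```

Accuracy is used here only because the toy labels are balanced; for real match data with skewed outcomes, precision, recall, and F1 (as listed above) are worth reporting alongside it.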
Consideration of Uncertainty: 
Football games are influenced by numerous unpredictable factors, such as injuries, referee decisions, and player form. While statistical models can account for many variables, they cannot fully capture the uncertainty and randomness of the game.
Continuous Improvement: 
Predictive models can be continuously improved by incorporating new data, refining features, and adjusting algorithms. Regular updates and iterative improvements help maintain model relevance and accuracy.
In summary, statistics and data science methods can enhance the ability to predict football game outcomes by leveraging historical data, creating relevant features, applying predictive modeling techniques, and continuously refining models. While these methods improve the accuracy of predictions, they cannot eliminate the inherent unpredictability of sports. Combining statistical insights with domain knowledge and expert analysis provides the best approach for making informed predictions.
3 notes · View notes
archiveofkloss · 6 months ago
Text
“We’re just seeing the very beginning of what’s ahead and what will be possible,” the supermodel and entrepreneur tells ELLE.
karlie on the future of women in tech:
"I’ve been doing this work for almost a decade now, and so much has changed in ways that make me very optimistic. I went to a public school in Missouri. I’m 31 years old, so it’s been a while since I was in high school, but back when I was a student, they did not have computer science programs. Now they do, and so do many, many, many public schools and private schools across the United States. There are now entry points for women and girls to start to learn how to code. It is much more understood how much technology is a part of shaping our world in every industry—not just in Silicon Valley, but also in music, media, finance, and business. But there’s a lot more, unfortunately, that continues to need to happen."
on growing kode with klossy into a global nonprofit:
"Kode With Klossy focuses on creating inclusive spaces that teach highly technical skills. We have AI machine learning and web dev. We have mobile app development and data science. They all are very creative applications of technology. Ultimately, right now, our programs are rooted in teaching the fundamentals of code and scaling the amount of people in our programs. This summer, we’re going to have 5,000 scholarships for free that we are giving to students to be a part of Kode With Klossy. We’ve trained hundreds of teachers through the years. We’ll have a few hundred instructors and instructor assistants this summer alone in our program. So what we’re focused on is continuing to ignite creative passion around technology."
on using technology to advance the fashion industry:
"We’re just seeing the very beginning of what’s ahead and what will be possible. That’s why it’s so important people realize that tech is not just for tech alone. It is [a tool to] drive better solutions across all industries and all businesses. Fashion is one of the biggest polluters of water. The industry has a lot of big problems to solve, and that’s part of why I’m optimistic and excited about more people seeing the overlap between the two. There is intersection in these spaces, and we can drive solutions in scalable ways when we see these intersections."
on embracing your fears:
"Natalie Massenet, the founder of Net-a-Porter, is an amazing entrepreneur and somebody I feel lucky to call a friend. She asked me years ago, and it’s always stuck with me through different personal and professional moments, “What would you do if you weren’t afraid?” That has always resonated, because we can get so stuck in our heads about being afraid of all sorts of different things—afraid of what other people will think, afraid of failure."
on the value of community in entrepreneurship:
"It takes a lot of courage for anyone [to be an entrepreneur]. It doesn’t matter your gender, your age, your experience level, that’s where community really does make a difference. It’s not just a talking point. So many of our Kode With Klossy scholars have come back as instructor assistants, and are now in peer leadership positions. So many of them have gone on to win hackathons and scholarships. It comes down to this collective community that continues to support and foster new connections among each other."
on breathing new life into Life magazine:
"Part of why I’m so excited about what we can build and what we are building with Bedford [Media, the company launched by Kloss and her husband, Joshua Kushner] is this intersection of a creative space like media—print media—and how you can continue to drive innovation with technology. And so that’s something that we’re very focused on, how to integrate the two. Lots more that we’re going to share at the right time, but we’re heads down on building the team and the company right now. I’m super excited."
on showing up for the people you love:
"I have two young babies, and I want to be the best mom I can be. So many of us are juggling so many different responsibilities and identities, both personally and professionally. Having women in leadership positions is so important, because our lived experiences are different from our male counterparts. And by the way, theirs is different from ours. It matters that, in leadership positions, to have different lived experiences across ages, genders, geographies, and ethnicities. It ultimately leads to better outcomes. All that to say, I’m just trying the best I can every day to show up for the people that I love and do what I can to help others."
on the intrinsic value in heirloom pieces:
"For our wedding, my husband bought me a beautiful Cartier watch. Some day I will pass that on to our daughter, if I’m lucky enough to have one. Or [I’ll pass it on to] my son; I have two sons. For our wedding, I also bought myself beautiful diamond earrings. There was something very symbolic about that to me, like, okay, I can also buy myself something. That’s why jewelry, to me—as we’re talking about female entrepreneurship and women in business and women in tech—is something that’s so emotional and personal. So I bought myself these vintage diamond earrings from the ’20s, with this beautiful, rich history of where they had been and who had owned them and wore them before. That’s the power of jewelry, whether it’s vintage or new, you create memories and it marks moments in life and in time. And then to be able to share that with future generations is something I find really beautiful."
5 notes · View notes