#difference between ai and data science
healthylifewithus · 11 months
Complete Excel, AI and Data Science mega bundle.
Unlock Your Full Potential with Our 100-Hour Masterclass: The Ultimate Guide to Excel, Python, and AI.
Why Choose This Course? In today’s competitive job market, mastering a range of technical skills is more important than ever. Our 100-hour comprehensive course is designed to equip you with in-demand capabilities in Excel, Python, and Artificial Intelligence (AI), providing you with the toolkit you need to excel in the digital age.
To read more, click here.
Become an Excel Pro: Delve deep into the intricacies of Excel functions, formulae, and data visualization techniques. Whether you’re dealing with basic tasks or complex financial models, this course will make you an Excel wizard capable of tackling any challenge.
Automate Your Workflow with Python: Scripting in Python doesn’t just mean writing code; it means reclaiming your time. Automate everyday tasks, interact with software applications, and boost your productivity exponentially.
If you want the full course, click here.
Turn Ideas into Apps: Discover the potential of Amazon Honeycode to create custom apps tailored to your needs. Whether it’s for data management, content tracking, or inventory — transform your creative concepts into practical solutions.
Be Your Own Financial Analyst: Unlock the financial functionalities of Excel to manage and analyze business data. Create Profit and Loss statements, balance sheets, and conduct forecasting with ease, equipping you to make data-driven decisions.
Embark on an AI Journey: Step into the future with AI and machine learning. Learn to build advanced models, understand neural networks, and employ TensorFlow. Turn big data into actionable insights and predictive models.
Master Stock Prediction: Gain an edge in the market by leveraging machine learning for stock prediction. Learn to spot trends, uncover hidden patterns, and make smarter investment decisions.
Who Is This Course For? Whether you’re a complete beginner or a seasoned professional looking to upskill, this course offers a broad and deep understanding of Excel, Python, and AI, preparing you for an ever-changing work environment.
Invest in Your Future: This isn’t just a course; it’s a game-changer for your career. Enroll now and set yourself on a path to technological mastery and unparalleled career growth.
Don’t Wait, Transform Your Career Today! Click here to get the full course.
naya-mishra · 1 year
This article highlights the key differences between Machine Learning and Artificial Intelligence based on approach, learning, application, output, complexity, etc.
The Coprophagic AI crisis
I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me in TORONTO on Mar 22, then with LAURA POITRAS in NYC on Mar 24, then Anaheim, and more!
A key requirement for being a science fiction writer without losing your mind is the ability to distinguish between science fiction (futuristic thought experiments) and predictions. SF writers who lack this trait come to fancy themselves fortune-tellers who SEE! THE! FUTURE!
The thing is, sf writers cheat. We palm cards in order to set up pulp adventure stories that let us indulge our thought experiments. These palmed cards – say, faster-than-light drives or time-machines – are narrative devices, not scientifically grounded proposals.
Historically, the fact that some people – both writers and readers – couldn't tell the difference wasn't all that important, because people who fell prey to the sf-as-prophecy delusion didn't have the power to re-orient our society around their mistaken beliefs. But with the rise and rise of sf-obsessed tech billionaires who keep trying to invent the torment nexus, sf writers are starting to be more vocal about distinguishing between our made-up funny stories and predictions (AKA "cyberpunk is a warning, not a suggestion"):
https://www.antipope.org/charlie/blog-static/2023/11/dont-create-the-torment-nexus.html
In that spirit, I'd like to point to how one of sf's most frequently palmed cards has become a commonplace of the AI crowd. That sleight of hand is: "add enough compute and the computer will wake up." This is a shopworn cliche of sf, the idea that once a computer matches the human brain for "complexity" or "power" (or some other simple-seeming but profoundly nebulous metric), the computer will become conscious. Think of "Mike" in Heinlein's *The Moon Is a Harsh Mistress*:
https://en.wikipedia.org/wiki/The_Moon_Is_a_Harsh_Mistress#Plot
For people inflating the current AI hype bubble, this idea that making the AI "more powerful" will correct its defects is key. Whenever an AI "hallucinates" in a way that seems to disqualify it from the high-value applications that justify the torrent of investment in the field, boosters say, "Sure, the AI isn't good enough…yet. But once we shovel an order of magnitude more training data into the hopper, we'll solve that, because (as everyone knows) making the computer 'more powerful' solves the AI problem":
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
As the lawyers say, this "cites facts not in evidence." But let's stipulate that it's true for a moment. If all we need to make the AI better is more training data, is that something we can count on? Consider the problem of "botshit," Andre Spicer and co's very useful coinage describing "inaccurate or fabricated content" shat out at scale by AIs:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4678265
"Botshit" was coined last December, but the internet is already drowning in it. Desperate people, confronted with an economy modeled on a high-speed game of musical chairs in which the opportunities for a decent livelihood grow ever scarcer, are being scammed into generating mountains of botshit in the hopes of securing the elusive "passive income":
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
Botshit can be produced at a scale and velocity that beggars the imagination. Consider that Amazon has had to cap the number of self-published "books" an author can submit to a mere three books per day:
https://www.theguardian.com/books/2023/sep/20/amazon-restricts-authors-from-self-publishing-more-than-three-books-a-day-after-ai-concerns
As the web becomes an anaerobic lagoon for botshit, the quantum of human-generated "content" in any internet core sample is dwindling to homeopathic levels. Even sources considered to be nominally high-quality, from Cnet articles to legal briefs, are contaminated with botshit:
https://theconversation.com/ai-is-creating-fake-legal-cases-and-making-its-way-into-real-courtrooms-with-disastrous-results-225080
Ironically, AI companies are setting themselves up for this problem. Google and Microsoft's full-court press for "AI powered search" imagines a future for the web in which search-engines stop returning links to web-pages, and instead summarize their content. The question is, why the fuck would anyone write the web if the only "person" who can find what they write is an AI's crawler, which ingests the writing for its own training, but has no interest in steering readers to see what you've written? If AI search ever becomes a thing, the open web will become an AI CAFO and search crawlers will increasingly end up imbibing the contents of its manure lagoon.
This problem has been a long time coming. Just over a year ago, Jathan Sadowski coined the term "Habsburg AI" to describe a model trained on the output of another model:
https://twitter.com/jathansadowski/status/1625245803211272194
There's a certain intuitive case for this being a bad idea, akin to feeding cows a slurry made of the diseased brains of other cows:
https://www.cdc.gov/prions/bse/index.html
But "The Curse of Recursion: Training on Generated Data Makes Models Forget," a recent paper, goes beyond the ick factor of AI that is fed on botshit and delves into the mathematical consequences of AI coprophagia:
https://arxiv.org/abs/2305.17493
Co-author Ross Anderson summarizes the finding neatly: "using model-generated content in training causes irreversible defects":
https://www.lightbluetouchpaper.org/2023/06/06/will-gpt-models-choke-on-their-own-exhaust/
Which is all to say: even if you accept the mystical proposition that more training data "solves" the AI problems that constitute total unsuitability for high-value applications that justify the trillions in valuation analysts are touting, that training data is going to be ever-more elusive.
What's more, while the proposition that "more training data will linearly improve the quality of AI predictions" is a mere article of faith, "training an AI on the output of another AI makes it exponentially worse" is a matter of fact.
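As a toy illustration of that claim (my own sketch, not the paper's actual experiment): treat a "model" as an empirical distribution over a small vocabulary and train each generation only on samples from the previous one. Rare words vanish, and once gone they never come back.

```python
import numpy as np

# Toy sketch of recursive training on generated data (not the paper's setup).
# A "model" is an empirical distribution over a 50-word vocabulary; each
# generation is fit only to a corpus sampled from the previous generation.
rng = np.random.default_rng(42)

vocab = np.arange(50)
true_probs = np.ones(50)
true_probs[40:] = 0.05           # ten rare "tail" words
true_probs /= true_probs.sum()

probs = true_probs.copy()
for generation in range(1, 21):
    corpus = rng.choice(vocab, size=2_000, p=probs)   # model-generated training data
    counts = np.bincount(corpus, minlength=len(vocab))
    probs = counts / counts.sum()                     # the next model
    surviving = int((probs > 0).sum())
    print(f"generation {generation:2d}: {surviving}/50 words still generatable")
# Tail words drop out one by one; a zero-probability word is gone for good.
```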
Name your price for 18 of my DRM-free ebooks and support the Electronic Frontier Foundation with the Humble Cory Doctorow Bundle.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/03/14/14/inhuman-centipede#enshittibottification
Image: Plamenart (modified) https://commons.wikimedia.org/wiki/File:Double_Mobius_Strip.JPG
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
trekmupf · 3 months
Do Androids dream of electric sheep?
Pro:
Nurse Chapel episode! Interesting insight into her personal life and inner workings as well as her loyalties
Yes Dr. Korby we understand without you saying it. That's why the female Bot is wearing barely any clothes and is super beautiful. For sure.
Kirk on the Spinning wheel, woooooo!
DoubleKirk the second
I mean we have to mention the phallic stone right
unspoken Kirk and Spock communication and trust in each other! Kirk's entire plan for the imposter-Kirk getting caught relies on Spock understanding what he's trying to say, and it works; I love that for them.
First real „Kirk is not the womanizer pop culture thinks he is“: he only kisses the woman / android to manipulate / get closer to his goal and not out of pleasure
The android design & clothes in the episode are great
Great funky light choices in the cave system
First time Androids, a classic sci-fi narrative, which in different ways explores what it means to be human and what makes humanity better than AI; In this case the concept of feelings (see quote) but also Ruk without empathy and based on facts deciding who gets to live or die
First time Kirk outsmarting stronger opponents instead of using force, happens especially often with AI (in this case both Androids)
The tension in the episode holds up, with Kirk being trapped in an unfamiliar environment, Chapel torn between her fiance and duty and the other characters being possible enemies
The reveal that Dr. Korby was also a cyborg was great
Look, the beautiful stone design needs to be in a review. Science.
Con
pacing was a little off at times
Other AI episodes have explored the humanity vs. AI theme better later in the series
Slight brush with the idea that you can cheat death / become immortal by becoming a computer / android, but it happens so close to the end that it can't be looked at close enough. How much of the person Korby is still in there? He clearly shows emotions unlike the other Androids
based on that: Ruk said that the androids were turned off, essentially killed, so it was self-defense - if the androids have human rights. Again, this is not the focus of the episode, but it does raise questions the episode doesn't answer (this won't get explored until Data in TNG later on)
I just really loved this moment. Kirk is so small!
Counter
Shirtless Kirk (I mean completely naked Kirk, technically)
Kirk fake womanizer (uses kissing to get them out)
Evil AI (this time it's androids)
Brains over Brawn
Meme: Kirk orgasm face.gif
Quote: "Can you imagine how life could be improved if we could do away with jealousy, greed, hate?" - Korby "It can also be improved by eliminating love, tenderness, sentiment. The other side of the coin, Doctor" - Kirk
Moment: Kirk and Spock talking about his use of unseemly language at the end.
Summary: Good episode that introduces the classic AI / Android storyline as well as some of its ethical connotations, shows Kirk's ingenuity and how he and Spock work together perfectly. Previous Episode - Next Episode - All TOS Reviews
free naked spinny wheel Kirk at the end!
AI helps distinguish dark matter from cosmic noise
Dark matter is the invisible force holding the universe together – or so we think. It makes up around 85% of all matter and around 27% of the universe’s contents, but since we can’t see it directly, we have to study its gravitational effects on galaxies and other cosmic structures. Despite decades of research, the true nature of dark matter remains one of science’s most elusive questions.
According to a leading theory, dark matter might be a type of particle that barely interacts with anything else, except through gravity. But some scientists believe these particles could occasionally interact with each other, a phenomenon known as self-interaction. Detecting such interactions would offer crucial clues about dark matter’s properties.
However, distinguishing the subtle signs of dark matter self-interactions from other cosmic effects, like those caused by active galactic nuclei (AGN) – the supermassive black holes at the centers of galaxies – has been a major challenge. AGN feedback can push matter around in ways that are similar to the effects of dark matter, making it difficult to tell the two apart.
In a significant step forward, astronomer David Harvey at EPFL’s Laboratory of Astrophysics has developed a deep-learning algorithm that can untangle these complex signals. The AI-based method is designed to differentiate between the effects of dark matter self-interactions and those of AGN feedback by analyzing images of galaxy clusters – vast collections of galaxies bound together by gravity. The innovation promises to greatly enhance the precision of dark matter studies.
Harvey trained a Convolutional Neural Network (CNN) – a type of AI that is particularly good at recognizing patterns in images – with images from the BAHAMAS-SIDM project, which models galaxy clusters under different dark matter and AGN feedback scenarios. By being fed thousands of simulated galaxy cluster images, the CNN learned to distinguish between the signals caused by dark matter self-interactions and those caused by AGN feedback.
Among the various CNN architectures tested, the most complex – dubbed “Inception” – proved to also be the most accurate. The AI was trained on two primary dark matter scenarios, featuring different levels of self-interaction, and validated on additional models, including a more complex, velocity-dependent dark matter model.
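For readers curious what such a classifier looks like in code, here is a minimal, hypothetical sketch in Keras: a small convolutional binary classifier that maps a cluster image to a probability of self-interacting dark matter. It is not Harvey's Inception-based network and uses no real BAHAMAS-SIDM data; it only shows the general shape of the approach.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal sketch of a CNN binary classifier for cluster images
# (illustrative only; NOT the actual Inception architecture or data).
def build_cluster_classifier(input_shape=(128, 128, 1)):
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),           # a simulated cluster map
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),     # P(self-interacting dark matter)
    ])

model = build_cluster_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=20)
```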
Inception achieved an impressive accuracy of 80% under ideal conditions, effectively identifying whether galaxy clusters were influenced by self-interacting dark matter or AGN feedback. It maintained its high performance even when the researchers introduced realistic observational noise that mimics the kind of data we expect from future telescopes like Euclid.
What this means is that Inception – and the AI approach more generally – could prove incredibly useful for analyzing the massive amounts of data we collect from space. Moreover, the AI’s ability to handle unseen data indicates that it’s adaptable and reliable, making it a promising tool for future dark matter research.
AI-based approaches like Inception could significantly impact our understanding of what dark matter actually is. As new telescopes gather unprecedented amounts of data, this method will help scientists sift through it quickly and accurately, potentially revealing the true nature of dark matter.
reasonsforhope · 1 year
"In the oldest and most prestigious young adult science competition in the nation, 17-year-old Ellen Xu used a kind of AI to design the first diagnosis test for a rare disease that struck her sister years ago.
With a personal story driving her on, she managed an 85% rate of positive diagnoses with only a smartphone image, winning her $150,000 for a third-place finish.
Kawasaki disease has no existing test method; diagnosis relies on a physician’s years of training, ability to do research, and a bit of luck.
Symptoms tend to be fever-like and therefore generalized across many different conditions. Eventually if undiagnosed, children can develop long-term heart complications, such as the kind that Ellen’s sister was thankfully spared from due to quick diagnosis.
Xu decided to see if there were a way to design a diagnostic test using deep learning for her Regeneron Science Talent Search medicine and health project. Organized since 1942, the competition draws about 1,900 student entrants every year.
She designed what is known as a convolutional neural network, which is a form of deep-learning algorithm that mimics how our eyes work, and programmed it to analyze smartphone images for potential Kawasaki disease.
However, like our own eyes, a convolutional neural network needs a massive amount of data to be able to effectively and quickly process images against references.
For this reason, Xu turned to crowdsourcing images of Kawasaki’s disease and its lookalike conditions from medical databases around the world, hoping to gather enough to give the neural network a high success rate.
Xu has demonstrated an 85% specificity in identifying between Kawasaki and non-Kawasaki symptoms in children with just a smartphone image, a demonstration that saw her test method take third place and a $150,000 reward at the Science Talent Search."
-Good News Network, 3/24/23
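A quick note on the metric, since "rate of positive diagnoses" and "specificity" are easy to conflate: specificity is how often non-Kawasaki cases are correctly ruled out, while sensitivity is how often true Kawasaki cases are caught. A tiny sketch with made-up counts (not Xu's actual results) shows how both come from a confusion matrix.

```python
# Made-up confusion-matrix counts, purely to illustrate the two metrics
# (these are NOT Ellen Xu's results).
true_positives = 40    # Kawasaki cases flagged as Kawasaki
false_negatives = 7    # Kawasaki cases missed
true_negatives = 85    # non-Kawasaki cases correctly ruled out
false_positives = 15   # non-Kawasaki cases wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)  # catch rate
specificity = true_negatives / (true_negatives + false_positives)  # rule-out rate

print(f"sensitivity: {sensitivity:.0%}")  # ~85%
print(f"specificity: {specificity:.0%}")  # 85%
```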
pandeypankaj · 2 months
What's the difference between Machine Learning and AI?
Machine Learning and Artificial Intelligence (AI) are often used interchangeably, but they represent distinct concepts within the broader field of data science. Machine Learning refers to algorithms that enable systems to learn from data and make predictions or decisions based on that learning. It's a subset of AI, focusing on statistical techniques and models that allow computers to perform specific tasks without explicit programming.
On the other hand, AI encompasses a broader scope, aiming to simulate human intelligence in machines. It includes Machine Learning as well as other disciplines like natural language processing, computer vision, and robotics, all working towards creating intelligent systems capable of reasoning, problem-solving, and understanding context.
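A small, hypothetical sketch makes the "learning from data versus explicit programming" distinction concrete: the first function is a hand-written rule supplied by a programmer, while the second fits a scikit-learn model to labeled examples and derives its own decision boundary (the feature names and data are made up for illustration).

```python
from sklearn.linear_model import LogisticRegression

# Explicit programming: the decision logic is written by hand.
def flag_spam_by_rule(num_links: int, has_suspicious_word: int) -> int:
    return int(num_links > 3 or has_suspicious_word == 1)

# Machine Learning: the model infers a decision boundary from labeled data.
X = [[0, 0], [1, 0], [5, 1], [7, 1], [2, 0], [6, 0], [0, 1], [8, 1]]  # [num_links, has_suspicious_word]
y = [0, 0, 1, 1, 0, 1, 0, 1]                                          # 1 = spam
model = LogisticRegression().fit(X, y)

print(flag_spam_by_rule(5, 0))       # answer from the hand-written rule
print(model.predict([[5, 0]])[0])    # answer learned from the data
```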
Understanding this distinction is crucial for anyone interested in leveraging data-driven technologies effectively. Whether you're exploring career opportunities, enhancing business strategies, or simply curious about the future of technology, diving deeper into these concepts can provide invaluable insights.
In conclusion, while Machine Learning focuses on algorithms that learn from data to make decisions, Artificial Intelligence encompasses a broader range of technologies aiming to replicate human intelligence. Understanding these distinctions is key to navigating the evolving landscape of data science and technology. For those eager to deepen their knowledge and stay ahead in this dynamic field, exploring further resources and insights can provide valuable perspectives and opportunities for growth.
nunuslab24 · 4 months
What are AI, AGI, and ASI? And the positive impact of AI
Understanding artificial intelligence (AI) involves more than just recognizing lines of code or scripts; it encompasses developing algorithms and models capable of learning from data and making predictions or decisions based on what they’ve learned. To truly grasp the distinctions between the different types of AI, we must look at their capabilities and potential impact on society.
To simplify, we can categorize these types of AI by assigning a power level from 1 to 3, with 1 being the least powerful and 3 being the most powerful. Let’s explore these categories:
1. Artificial Narrow Intelligence (ANI)
Also known as Narrow AI or Weak AI, ANI is the most common form of AI we encounter today. It is designed to perform a specific task or a narrow range of tasks. Examples include virtual assistants like Siri and Alexa, recommendation systems on Netflix, and image recognition software. ANI operates under a limited set of constraints and can’t perform tasks outside its specific domain. Despite its limitations, ANI has proven to be incredibly useful in automating repetitive tasks, providing insights through data analysis, and enhancing user experiences across various applications.
2. Artificial General Intelligence (AGI)
Referred to as Strong AI, AGI represents the next level of AI development. Unlike ANI, AGI can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. It can reason, plan, solve problems, think abstractly, and learn from experiences. While AGI remains a theoretical concept as of now, achieving it would mean creating machines capable of performing any intellectual task that a human can. This breakthrough could revolutionize numerous fields, including healthcare, education, and science, by providing more adaptive and comprehensive solutions.
3. Artificial Super Intelligence (ASI)
ASI surpasses human intelligence and capabilities in all aspects. It represents a level of intelligence far beyond our current understanding, where machines could outthink, outperform, and outmaneuver humans. ASI could lead to unprecedented advancements in technology and society. However, it also raises significant ethical and safety concerns. Ensuring ASI is developed and used responsibly is crucial to preventing unintended consequences that could arise from such a powerful form of intelligence.
The Positive Impact of AI
When regulated and guided by ethical principles, AI has the potential to benefit humanity significantly. Here are a few ways AI can help us become better:
• Healthcare: AI can assist in diagnosing diseases, personalizing treatment plans, and even predicting health issues before they become severe. This can lead to improved patient outcomes and more efficient healthcare systems.
• Education: Personalized learning experiences powered by AI can cater to individual student needs, helping them learn at their own pace and in ways that suit their unique styles.
• Environment: AI can play a crucial role in monitoring and managing environmental changes, optimizing energy use, and developing sustainable practices to combat climate change.
• Economy: AI can drive innovation, create new industries, and enhance productivity by automating mundane tasks and providing data-driven insights for better decision-making.
In conclusion, while AI, AGI, and ASI represent different levels of technological advancement, their potential to transform our world is immense. By understanding their distinctions and ensuring proper regulation, we can harness the power of AI to create a brighter future for all.
stra-tek · 1 year
With Picard season 3 ending very soon, let's review the endings to modern Trek, and see the chances of it ending well. Or rather, you read what I think.
Discovery S1: Flat as a pancake end to the Klingon war, following epic and satisfying ends to the Klingon sarcophagus ship and Lorca's mutiny storylines. Massive backstage upheaval meant the end and beginning had very different creative direction. 1/5.
Discovery S2: I loved it, a massive epic space battle, a big emotional farewell between Spock and Michael (arguably the first time Discovery paused a massive crisis for a therapeutic chitchat, which would go on to become a tiresome cliche) and great time travel scenes even though the Red Angel did things Michael never could have (like disable the Ba'ul technology as if by magic) so it doesn't work if you look closely. Still, more creative upheaval and rumours of a massive change in creative direction mid-season, supposedly dropping a faith vs science plotline which would have featured a religious Captain Pike butting heads with Michael for the fun but cliche Control/Skynet story. Set up Section 31, Strange New Worlds spin-offs and Discovery's jump to the 32nd century. 5/5 despite flaws I was buzzing afterwards.
Picard S1: A flat ending to the synth storyline, a weird choice to kill Picard and make him a synth (perhaps a fix job for an originally planned sacrifice they had to work around when they decided to carry the show on?) and with some solid gold scenes with Picard and Data in the weird synth computer simulation afterlife. 2/5.
Discovery S3: A fun end with big fist fights in an impossible hammerspace between Discovery's decks. 3/5.
Picard S2. The final scenes between Q and Picard were solid gold. The rest was runny shite. 1/5.
Discovery S4. Big alien aliens. Deus ex machina brings Book back from the dead. Book gets community service for terrorist activities. Peace is made with the floaty giant aliens. Flat again, as was the whole season IMHO. 2/5.
Lower Decks S1. I don't remember. 3/5 it was always fun and watchable.
Strange New Worlds S1: The Ghost of Christmas Future and a reboot of Balance of Terror. Iffy Jim Kirk casting. 4/5.
Lower Decks S2. I remember Carol's arrest. 3/5.
Prodigy S1. Amazing season, amazing finale. Every emotional beat was earned. 5/5 the best of modern Trek, watch it if you haven't.
Lower Decks S3. Loved the race between Cerritos and the Texas-class. But holy shit Starfleet needs to stop with letting AI control anything important. The end with the Cali-class saving the day was ace. 4/5.
Don't fuck it up, Picard S3 people. Your season has been an amazing, contrived, fanwank explosion clusterfuck that somehow works really well so far. Don't ruin it.
blubberquark · 8 months
Language Models and AI Safety: Still Worrying
Previously, I have explained how modern "AI" research has painted itself into a corner, inventing the science fiction rogue AI scenario where a system is smarter than its guardrails, but can easily be outwitted by humans.
Two recent examples have confirmed my hunch about AI safety of generative AI. In one well-circulated case, somebody generated a picture of an "ethnically ambiguous Homer Simpson", and in another, somebody created a picture of "baby, female, hispanic".
These incidents show that generative AI still filters prompts and outputs, instead of A) ensuring the correct behaviour during training/fine-tuning, B) manually generating, re-labelling, or pruning the training data, C) directly modifying the learned weights to affect outputs.
In general, it is not surprising that big corporations like Google and Microsoft and non-profits like OpenAI are prioritising racist language or racial composition of characters in generated images over abuse of LLMs or generative art for nefarious purposes, content farms, spam, captcha solving, or impersonation. Somebody with enough criminal energy to use ChatGPT to automatically impersonate your grandma based on your message history after he hacked the phones of tens of thousands of grandmas will be blamed for his acts. Somebody who unintentionally generates a racist picture based on an ambiguous prompt will blame the developers of the software if he's offended. Scammers could have enough money and incentives to run the models on their own machine anyway, where corporations have little recourse.
There is precedent for this. Word2vec, published in 2013, was called a "sexist algorithm" in attention-grabbing headlines, even though the bodies of such articles usually conceded that the word2vec embedding just reproduced patterns inherent in the training data: Obviously word2vec does not have any built-in gender biases, it just departs from the dictionary definitions of words like "doctor" and "nurse" and learns gendered connotations because in the training corpus doctors are more often men, and nurses are more often women. Now even that last explanation is oversimplified. The difference between "man" and "woman" is not quite the same as the difference between "male" and "female", or between "doctor" and "nurse". In the English language, "man" can mean "male person" or "human person", and "nurse" can mean "feeding a baby milk from your breast" or a kind of skilled health care worker who works under the direction and supervision of a licensed physician. Arguably, the word2vec algorithm picked up on properties of the word "nurse" that are part of the meaning of the word (at least one meaning, according to the dictionary), not properties that are contingent on our sexist world.
I don't want to come down against "political correctness" here. I think it's good if ChatGPT doesn't tell a girl that girls can't be doctors. You have to understand that not accidentally saying something sexist or racist is a big deal, or at least Google, Facebook, Microsoft, and OpenAI all think so. OpenAI are responding to a huge incentive when they add snippets like "ethnically ambiguous" to DALL-E 3 prompts.
If this is so important, why are they re-writing prompts, then? Why are they not doing A, B, or C? Back in the days of word2vec, there was a simple but effective solution to automatically identify gendered components in the learned embedding, and zero out the difference. It's so simple you'll probably kick yourself reading it because you could have published that paper yourself without understanding how word2vec works.
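For the curious, here is a minimal sketch of that kind of fix, in the spirit of the hard-debiasing approach popularized for word2vec embeddings, assuming you already have a dict of unit-length word vectors: estimate a gender direction from a few definitional pairs and remove that component from target words.

```python
import numpy as np

# Minimal sketch of embedding debiasing: estimate a "gender direction" from
# definitional word pairs, then zero out that component in other words.
# `embeddings` is assumed to be a dict mapping word -> unit-length numpy vector.

def gender_direction(embeddings, pairs=(("he", "she"), ("man", "woman"), ("father", "mother"))):
    diffs = [embeddings[a] - embeddings[b] for a, b in pairs]
    direction = np.mean(diffs, axis=0)
    return direction / np.linalg.norm(direction)

def neutralize(vector, direction):
    # Subtract the projection onto the gender direction, then re-normalize.
    debiased = vector - np.dot(vector, direction) * direction
    return debiased / np.linalg.norm(debiased)

# Usage sketch:
# g = gender_direction(embeddings)
# embeddings["doctor"] = neutralize(embeddings["doctor"], g)
# embeddings["nurse"] = neutralize(embeddings["nurse"], g)
```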
I can only conclude from the behaviour of systems like DALL-E 3 that they are using simple prompt re-writing (or a more sophisticated approach that behaves just as prompt re-writing would, and performs as badly) because prompt re-writing is the best thing they can come up with. Transformers are complex, and inscrutable. You can't just reach in there, isolate a concept like "human person", and rebalance the composition.
The bitter lesson tells us that big amorphous approaches to AI perform better and scale better than manually written expert systems, ontologies, or description logics. More unsupervised data beats less but carefully labelled data. Even when the developers of these systems have a big incentive not to reproduce a certain pattern from the data, they can't fix such a problem at the root. Their solution is instead to use a simple natural language processing system, a dumb system they can understand, and wrap it around the smart but inscrutable transformer-based language model and image generator.
What does that mean for "sleeper agent AI"? You can't really trust a model that somebody else has trained, but can you even trust a model you have trained, if you haven't carefully reviewed all the input data? Even OpenAI can't trust their own models.
aibelbinzacariah · 1 month
The hidden link between science and philosophy
As a kid with various interests and likings, I have always found myself leveraging ideas from science, math, psychology, and philosophy. I don’t know if it’s the hidden interest I had in people’s personality traits or the serenity I found in literature, but I never in a million years wanted to pursue it, as it seemed like a dead end in a world that revolves around money-making.
Well… Interests can be altered by circumstances. But can one’s internal interests be burnt to ashes just like that? That’s enough beating around the bush. My internal liking for philosophy and forced liking for science have brought me here. This might sound stupid to some… or maybe everyone, but I am trying to get it out of my head, and here’s a blog for the same.
Take this example. A simple machine, be it mechanical or electronic, absorbs heat, which in turn reduces the efficiency of the system. Make it work for days under a heavy workload and you will see its efficiency degrade as the days go by. Now take a look at a simple human: make him or her work for about 12 hours with no rest and you will see their efficiency go down just like that. Now assume you hate our species and force them not to sleep, and again observe how it affects them, much as a machine’s continuous workload affects it. Here you can see how severe workload and stress affect something living and something nonliving in more or less the same manner, with differences only in rate or parameters.
Let’s take another example, but here I will dive deeper into computers and mainly AI.
A person who is knowledgeable and healthy would be more efficient and faster at work in the domain of their expertise than someone who is illiterate and has poor health; similarly, a computer with the most efficient and fast components, when fed with the right amount of data, would have the best output when used right.
Keeping the outputs aside, some critical similarities can also be found when it comes to how certain machinery works. Take AI for an example: the way neurons work in these systems is similar to the way a human being’s neurons work (my knowledge here is limited, but I am open to constructive criticism). The quality and quantity of knowledge you feed to a human being and to an AI model have similar effects on the output; and yes, let’s keep the parameter differences in mind, but in the world of calculus the rates can be calculated.
If you find all this slightly acceptable but don’t see the point of it, let me carry you forward… In a world where AI growth is unpredictable, the weird ideas of a philosophical approach might sound obnoxious to you, but (it might just be me) I find it doable. This is how we predict and calculate its growth. Here, where the philosophies of living and non-living are just subjects to be laughed at, I find it extremely interesting. Maybe a follow-up blog could make you understand better, because I have a long list of examples and experiences that make me feel the same. Be it hallucinations or dreams, as an AI student who should’ve been excavating the deeply rooted meanings of the works of Robert Booker, I find these connections fascinating and perhaps of certain potential. If not now, later.
anomalocaris2hu · 1 year
Valle Verde Episode 2 Japanese Translation and Speculation
As I mentioned in my previous post, this episode has a lot more Japanese in it than the first one. So much, in fact, that I won't be able to translate it all. I will try and transcribe and translate the most salient parts. As before, make sure you watch the original video before reading this or it will not make much sense.
All the locations are ??? except for "city hall."
"Hoc est verum" is a Latin phrase meaning "this is the truth." Searching gives me a longer phrase "Hoc est verum et nihili nisi verum," meaning "This is the truth and nothing but the truth." This is another clear reference to Christianity, echoing verses such as John 14:6 (using the NRSVUE):
Jesus said to him, “I am the way and the truth and the life. No one comes to the Father except through me.
Mori no Machi would be "Forest Town," presumably another city in the world of Valle Verde.
This is the clearest shot of that poster so far, and I'm confident that it says 私を信じて "trust in me."
記録 "Records" as in a hall of records or archives.
Umi no Machi is "Sea(side) Town." Also, the name Berenjeno sounds like the word *berenjena* which means "eggplant."
キョロちゃんのプリクラ大作戦 is a real video game. The title means something like "Kyoro-chan's Great Photo-booth Tactics." Kyoro-chan is a mascot character for a chocolate brand, and apparently the game is a side-scrolling action game where you play as Kyoro-chan and go around to photo-booths to take your picture. This is the kind of game that I doubt ever saw the light of day outside of Japan, but it's listed as belonging to Matias, who is established as being from Argentina.
The Shizuoka Institute of Science and Technology (SIST) is once again involved with another game, this one called Tharsis: The Legend.
The title is meant to say something like "THARSIS: The Legend" but does so in a way that doesn't make much grammatical sense. I think 伝説のTHARSIS would have worked better.
The summer of 1997, which I'm pretty sure is the time the events of Valle Verde are taking place based on VHS timestamps.
Now we get into the meat of the untranslated Japanese in this part; the whole opening of THARSIS is narrated in Japanese without subtitles. The Japanese is quite stilted at points and probably wasn't written by a native speaker, so my translation will be what I think the writer was trying to say.
Bargoff 「私は今、主導権を握っている。状況は?」 B 「我々は、20分前に潜入させられた。誰かが、ダクトから入った。」
Bargoff: I am in control now. What's the situation?
B: We were infiltrated 20 minutes ago. Someone entered from the ducts.
Eva 「悲しいことに、彼らはどこにでもいることができます。」
Eva: Unfortunately, they could be anywhere.
Once the game is started, we get subtitles, so I'll only translate if there's a significant difference between the Japanese and subtitles.
"Connection lost"
"Sodom? Gomorrah? No, those who play (at being) God."
The base 64 decodes to "There can be no peace in the world."
Now for the speculation. It seems like whatever the monster in THARSIS is has leaked into Valle Verde - we're shown these insect-like things "sucking" the data out of entities from other games (including Angel Quest).
As for what Nottt is, he seems to be some kind of AI created for Valle Verde. Nottt saw the children that were absorbed into the game (through the THBrain?) and heard their screams in multiple languages. In order to not comprehend the screams any longer, he removed the modules that allowed him to know all languages except for Spanish.
I'm not sure how the THBrain works exactly, but it may be some kind of brain-computer interface. When the game is generating content (as opposed to playing scripted content), the text box changes color and a gear icon appears. Perhaps information from the brain of the user (such as subconscious thoughts, etc.) is used to generate new content?
elsa16744 · 2 months
How Can You Ensure Data Quality in Healthcare Analytics and Management?
Healthcare facilities are responsible for the patient’s recovery. Pharmaceutical companies and medical equipment manufacturers also work toward alleviating physical pain, stress levels, and uncomfortable body movement issues. Still, healthcare analytics must be accurate for precise diagnosis and effective clinical prescriptions. This post will discuss data quality management in the healthcare industry. 
What is Data Quality in Healthcare? 
Healthcare data quality management includes technologies and statistical solutions to verify the reliability of acquired clinical intelligence. A data quality manager protects databases from digital corruption, cyberattacks, and inappropriate handling. So, medical professionals can get more realistic insights using data analytics solutions. 
Laboratories have started emailing the test results to help doctors, patients, and their family members make important decisions without wasting time. Also, assistive technologies merge the benefits of the Internet of Things (IoT) and artificial intelligence (AI) to enhance living standards. 
However, poor data quality threatens the usefulness of healthcare data management solutions. 
For example, pharmaceutical companies and authorities must apply solutions that remove mathematical outliers to perform high-precision data analytics for clinical drug trials. Otherwise, harmful medicines will reach the pharmacist’s shelf, endangering many people. 
How to Ensure Data Quality in the Healthcare Industry? 
Data quality frameworks utilize different strategies to prevent processing issues or losing sensitive intelligence. If you want to develop such frameworks to improve medical intelligence and reporting, the following 7 methods can aid you in this endeavor. 
Method #1| Use Data Profiling 
A data profiling method involves estimating the relationship between the different records in a database to find gaps and devise a cleansing strategy. Data cleansing in healthcare data management solutions has the following objectives. 
Determine whether the lab reports and prescriptions match the correct patient identifiers. 
If inconsistent profile matching has occurred, fix it by contacting doctors and patients. 
Analyze the data structures and authorization levels to evaluate how each employee is accountable for specific patient recovery outcomes. 
Create a data governance framework to enforce access and data modification rights strictly. 
Identify recurring data cleaning and preparation challenges. 
Brainstorm ideas to minimize data collection issues that increase your data cleaning efforts. 
Ensure consistency in report formatting and recovery measurement techniques to improve data quality in healthcare. 
Data cleaning and profiling allow you to eliminate unnecessary and inaccurate entries from patient databases. Therefore, healthcare research institutes and commercial life science businesses can reduce processing errors when using data analytics solutions. 
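As a hedged illustration of what basic profiling can look like in practice, here is a small pandas sketch over hypothetical tables (the file and column names are assumptions, not a real schema). It surfaces missing values, duplicated patient identifiers, and lab reports that do not match any registered patient.

```python
import pandas as pd

# Hypothetical tables; file and column names are illustrative assumptions only.
patients = pd.read_csv("patients.csv")        # patient_id, name, date_of_birth, ...
lab_reports = pd.read_csv("lab_reports.csv")  # report_id, patient_id, test_name, result, ...

# 1. How complete is each column?
print(patients.isna().mean().sort_values(ascending=False))

# 2. Are patient identifiers unique?
duplicate_ids = patients[patients["patient_id"].duplicated(keep=False)]
print(f"{len(duplicate_ids)} rows share a patient_id with another row")

# 3. Does every lab report point at a registered patient?
orphan_reports = lab_reports[~lab_reports["patient_id"].isin(patients["patient_id"])]
print(f"{len(orphan_reports)} lab reports have no matching patient record")
```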
Method #2| Replace Empty Values 
What is a null value? Null values mean the database has no data corresponding to a field in a record. Moreover, these missing values can skew the results obtained by data management solutions used in the healthcare industry. 
Consider that a patient left a form field empty. If all the care and life science businesses use online data collection surveys, they can warn the patients about the empty values. This approach relies on the “prevention is better than cure” principle. 
Still, many institutions, ranging from multispecialty hospitals to clinical device producers, record data offline. Later, the data entry officers transform the filled papers using scanners and OCR (optical character recognition). 
Empty fields also appear in the database management system (DBMS), so the healthcare facilities must contact the patients or reporting doctors to retrieve the missing information. They use newly acquired data to replace the null values, making the analytics solutions operate seamlessly. 
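A short, hypothetical pandas sketch of that flag-and-follow-up workflow (the column names are assumptions): required clinical fields are never silently imputed; incomplete records are exported for staff to follow up with the reporting doctor or patient, and only clearly safe defaults are filled automatically.

```python
import pandas as pd

patients = pd.read_csv("patients.csv")  # hypothetical file and columns

# Fields that must never be empty for downstream analytics.
required = ["patient_id", "date_of_birth", "attending_physician", "diagnosis_code"]

# Flag records with missing required fields instead of guessing their values.
incomplete = patients[patients[required].isna().any(axis=1)]
incomplete.to_csv("follow_up_queue.csv", index=False)  # hand off for call-backs

# Non-clinical fields with an obvious safe default can be filled directly.
patients["preferred_language"] = patients["preferred_language"].fillna("unknown")

print(f"{len(incomplete)} records routed for manual follow-up")
```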
Method #3| Refresh Old Records 
Your physical and psychological attributes change with age, environment, lifestyle, and family circumstances. So, what was true for an individual a few years ago is less likely to be relevant today. While preserving historical patient databases is vital, hospitals and pharma businesses must periodically update obsolete medical reports. 
Each healthcare business maintains a professional network of consulting physicians, laboratories, chemists, dietitians, and counselors. These connections enable the treatment providers to strategically conduct regular tests to check how patients’ bodily functions change throughout the recovery. 
Therefore, updating old records in a patient’s medical history becomes possible. Other variables like switching jobs or traveling habits also impact an individual’s metabolism and susceptibility to illnesses. So, you must also ask the patients to share the latest data on their changed lifestyles. Freshly obtained records increase the relevance of healthcare data management solutions. 
Method #4| Standardize Documentation 
Standardization compels all professionals to collect, store, visualize, and communicate data or analytics activities using unified reporting solutions. Furthermore, standardized reports are integral to improving data governance compliance in the healthcare industry. 
Consider the following principles when promoting a documentation protocol to make all reports more consistent and easily traceable. 
A brand’s visual identities, like logos and colors, must not interfere with clinical data presentation. 
Observed readings must go in the designated fields. 
Both the offline and online document formats must be identical. 
Stakeholders must permanently preserve an archived copy of patient databases with version control as they edit and delete values from the records. 
All medical reports must arrange the data and insights to prevent ambiguity and misinterpretation. 
Pharma companies, clinics, and FDA (food and drug administration) benefit from reporting standards. After all, corresponding protocols encourage responsible attitudes that help data analytics solutions avoid processing problems. 
Method #5| Merge Duplicate Report Instances 
A report instance is like a screenshot that helps you save the output of visualization tools related to a business query at a specified time interval. However, duplicate reporting instances are a significant quality assurance challenge in healthcare data management solutions. 
For example, more than two nurses and one doctor will interact with the same patients. Besides, patients might consult different doctors and get two or more treatments for distinct illnesses. Such situations result in multiple versions of a patient’s clinical history. 
Data analytics solutions can process the data collected by different healthcare facilities to solve the issue of duplicate report instances in the patients’ databases. They facilitate merging overlapping records and matching each patient with a universally valid clinical history profile. 
Such a strategy also assists clinicians in monitoring how other healthcare professionals prescribe medicine to a patient. Therefore, they can prevent double dosage complications arising from a patient consuming similar medicines while undergoing more than one treatment regime. 
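A minimal pandas sketch of merging duplicate report instances, with hypothetical column names: every source row is archived first, then records are grouped by a stable patient identifier and the most recent non-empty value wins for each field.

```python
import pandas as pd

# Hypothetical columns: patient_id, updated_at, facility, diagnosis, medication, ...
records = pd.read_csv("clinical_history.csv")
records["updated_at"] = pd.to_datetime(records["updated_at"])

# Preserve an untouched archive before consolidating anything.
records.to_csv("clinical_history_archive.csv", index=False)

# Collapse duplicate instances: one row per patient, latest non-null values win.
consolidated = (
    records.sort_values("updated_at")
           .groupby("patient_id", as_index=False)
           .last()
)
print(f"{len(records) - len(consolidated)} duplicate instances merged")
```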
Method #6| Audit the DBMS and Reporting Modules 
Chemical laboratories revise their reporting practices when newly purchased testing equipment offers additional features. Likewise, DBMS solutions optimized for healthcare data management must receive regular updates. 
Auditing the present status of reporting practices will give you insights into efficient and inefficient activities. Remember, there is always a better way to collect and record data. Monitor the trends in database technologies to ensure continuous enhancements in healthcare data quality. 
Simultaneously, you want to assess the stability of the IT systems because unreliable infrastructure can adversely affect the decision-making associated with patient diagnosis. You can start by asking the following questions. 
Questions to Ask When Assessing Data Quality in Healthcare Analytics Solutions 
Can all doctors, nurses, agents, insurance representatives, patients, and each patient’s family members access the required data without problems? 
How often do the servers and internet connectivity stop functioning correctly? 
Are there sufficient backup tools to restore the system if something goes wrong? 
Do hospitals, research facilities, and pharmaceutical companies employ end-to-end encryption (E2EE) across all electronic communications? 
Are there new technologies facilitating accelerated report creation? 
Will the patient databases be vulnerable to cyberattacks and manipulation? 
Are the clinical history records sufficient for a robust diagnosis? 
Can the patients collect the documents required to claim healthcare insurance benefits without encountering uncomfortable experiences? 
Is the presently implemented authorization framework sufficient to ensure data governance in healthcare? 
 Has the FDA approved any of your prescribed medications? 
Method #7| Conduct Skill Development Sessions for the Employees  
Healthcare data management solutions rely on advanced technologies, and some employees need more guidance to use them effectively. Pharma companies are aware of this as well, because maintaining and modifying the chemical reactions involved in drug manufacturing will necessitate specialized knowledge. 
Different training programs can assist the nursing staff and healthcare practitioners in developing the skills necessary to handle advanced data analytics solutions. Moreover, some consulting firms might offer simplified educational initiatives to help hospitals and nursing homes increase the skill levels of employees. 
Cooperation between employees, leadership, and public authorities is indispensable to ensure data quality in the healthcare and life science industries. Otherwise, a lack of coordination hinders the modernization trends in the respective sectors. 
Conclusion 
Healthcare analytics depends on many techniques to improve data quality. For example, cleaning datasets to eliminate obsolete records, null values, or duplicate report instances remains essential, and multispecialty hospitals agree with this concept. 
Therefore, medical professionals invest heavily in standardized documents and employee education to enhance data governance. Also, you want to prevent cyberattacks and data corruption. Consider consulting reputable firms to audit your data operations and make clinical trials more reliable. 
SG Analytics is a leader in healthcare data management solutions, delivering scalable insight discovery capabilities for adverse event monitoring and medical intelligence. Contact us today if you want healthcare market research and patent tracking assistance. 
archiveofkloss · 4 months
“We’re just seeing the very beginning of what’s ahead and what will be possible,” the supermodel and entrepreneur tells ELLE.
karlie on the future of women in tech:
"I’ve been doing this work for almost a decade now, and so much has changed in ways that make me very optimistic. I went to a public school in Missouri. I’m 31 years old, so it’s been a while since I was in high school, but back when I was a student, they did not have computer science programs. Now they do, and so do many, many, many public schools and private schools across the United States. There are now entry points for women and girls to start to learn how to code. It is much more understood how much technology is a part of shaping our world in every industry—not just in Silicon Valley, but also in music, media, finance, and business. But there’s a lot more, unfortunately, that continues to need to happen."
on growing kode with klossy into a global nonprofit:
"Kode With Klossy focuses on creating inclusive spaces that teach highly technical skills. We have AI machine learning and web dev. We have mobile app development and data science. They all are very creative applications of technology. Ultimately, right now, our programs are rooted in teaching the fundamentals of code and scaling the amount of people in our programs. This summer, we’re going to have 5,000 scholarships for free that we are giving to students to be a part of Kode With Klossy. We’ve trained hundreds of teachers through the years. We’ll have a few hundred instructors and instructor assistants this summer alone in our program. So what we’re focused on is continuing to ignite creative passion around technology."
on using technology to advance the fashion industry:
"We’re just seeing the very beginning of what’s ahead and what will be possible. That’s why it’s so important people realize that tech is not just for tech alone. It is [a tool to] drive better solutions across all industries and all businesses. Fashion is one of the biggest polluters of water. The industry has a lot of big problems to solve, and that’s part of why I’m optimistic and excited about more people seeing the overlap between the two. There is intersection in these spaces, and we can drive solutions in scalable ways when we see these intersections."
on embracing your fears:
"Natalie Massenet, the founder of Net-a-Porter, is an amazing entrepreneur and somebody I feel lucky to call a friend. She asked me years ago, and it’s always stuck with me through different personal and professional moments, “What would you do if you weren’t afraid?” That has always resonated, because we can get so stuck in our heads about being afraid of all sorts of different things—afraid of what other people will think, afraid of failure."
on the value of community in entrepreneurship:
"It takes a lot of courage for anyone [to be an entrepreneur]. It doesn’t matter your gender, your age, your experience level, that’s where community really does make a difference. It’s not just a talking point. So many of our Kode With Klossy scholars have come back as instructor assistants, and are now in peer leadership positions. So many of them have gone on to win hackathons and scholarships. It comes down to this collective community that continues to support and foster new connections among each other."
on breathing new life into Life magazine:
"Part of why I’m so excited about what we can build and what we are building with Bedford [Media, the company launched by Kloss and her husband, Joshua Kushner] is this intersection of a creative space like media—print media—and how you can continue to drive innovation with technology. And so that’s something that we’re very focused on, how to integrate the two. Lots more that we’re going to share at the right time, but we’re heads down on building the team and the company right now. I’m super excited."
on showing up for the people you love:
"I have two young babies, and I want to be the best mom I can be. So many of us are juggling so many different responsibilities and identities, both personally and professionally. Having women in leadership positions is so important, because our lived experiences are different from our male counterparts. And by the way, theirs is different from ours. It matters that, in leadership positions, to have different lived experiences across ages, genders, geographies, and ethnicities. It ultimately leads to better outcomes. All that to say, I’m just trying the best I can every day to show up for the people that I love and do what I can to help others."
on the intrinsic value in heirloom pieces:
"For our wedding, my husband bought me a beautiful Cartier watch. Some day I will pass that on to our daughter, if I’m lucky enough to have one. Or [I’ll pass it on to] my son; I have two sons. For our wedding, I also bought myself beautiful diamond earrings. There was something very symbolic about that to me, like, okay, I can also buy myself something. That’s why jewelry, to me—as we’re talking about female entrepreneurship and women in business and women in tech—is something that’s so emotional and personal. So I bought myself these vintage diamond earrings from the ’20s, with this beautiful, rich history of where they had been and who had owned them and wore them before. That’s the power of jewelry, whether it’s vintage or new, you create memories and it marks moments in life and in time. And then to be able to share that with future generations is something I find really beautiful."
timefadesaway · 1 year
i’ve just woken up so i can’t be bothered to word this properly AND I’m not even a techie but tbh a lot of the hatred of AI is like. a bit pearl-clutchy or screaming at a film of a train coming towards you or whatever. there is obviously a lot to be said in regard to implementing it within creative industries and in terms of using it to just avoid paying artists but just like robotics it is something that should have great utility if used ‘meaningfully’. like the idea that automation should be used to make lives easier and to remove a large portion of labour from society without impacting wellbeing/way of life. to make people have more time of their own rather than time at work. obviously not going to happen in the current economic system but there is nothing inherently wrong with AI it’s just about utility… and of course we should be using it to replace or support menial, difficult or dangerous jobs not creativity and entertainment and art etc etc. although honestly in some ways the same thing goes for AI art in general (when you look at it as a tool). models have bad data scraping methods and things but i don’t think all AI art is bad just bc of the fact that it’s made by AI. it just asked some questions. because what is art? like if someone programmed a robot to create an original piece of art without that kind of theft then we could call them the artist really even if it was the robot that did it. the robot itself is the art. same goes for AI imo but it’s extremely reliant on the ethics. there are not as many divisions between tech/science and art as you think. again obviously current usage of AI on the forefront of things is not good. but i don’t think it’s the end of the world we just have to utilise it differently. like are we kind of scapegoating it for just the problems of automation under capitalism