# Data science algorithms and models
What is Data Science? Introduction, Basic Concepts & Process
What is data science? Complete information about data science, from beginner to advanced. If you're searching for what data science is: it covers data analysis, data storage, databases, and more.
jcmarchi · 18 days ago
Birago Jones, Co-Founder and CEO of Pienso – Interview Series
New Post has been published on https://thedigitalinsider.com/birago-jones-co-founder-and-ceo-of-pienso-interview-series/
Birago Jones, Co-Founder and CEO of Pienso – Interview Series
Birago Jones is the CEO and Co-Founder of Pienso, a no-code/low-code platform for enterprises to train and deploy AI models without the need for advanced data science or programming skills. Today, Birago’s customers include the US government and Sky, the largest broadcaster in the UK. Pienso is based on Birago’s research from the Massachusetts Institute of Technology (MIT), where he and his co-founder Karthik Dinakar served as research assistants in the MIT Media Lab. He is a distinguished authority at the intersection of artificial intelligence (AI) and human-computer interaction (HCI), and an advocate for responsible AI.
Pienso‘s interactive learning interface is designed to enable users to harness AI to its fullest potential without any coding. The platform guides users through the process of training and deploying large language models (LLMs) that are imprinted with their expertise and fine-tuned to answer their specific questions.
What initially attracted you to pursue your studies in AI, HCI (Human Computer Interaction) and user experience?
I had already been developing personal projects focused on creating accessibility tools and applications for the blind, such as a haptic digital braille reader using a smartphone and an indoor wayfinding system (digital cane). I believed AI could enhance and support these efforts.
Pienso was initially conceived during your time at MIT. How did the concept of making machine learning model training accessible to non-technical users originate?
My co-founder Karthik and I met in grad school while we were both conducting research in the MIT Media Lab. We had teamed up for a class project to build a tool that would help social media platforms moderate and flag bullying content. The tool was gaining lots of traction, and we were even invited to the White House to give a demonstration of the technology during a cyberbullying summit.
There was just one problem: while the model itself worked the way it was supposed to, it wasn’t trained on the right data, so it wasn’t able to identify harmful content that used teenage slang. Karthik and I were working together to figure out a solution, and we later realized that we could fix this issue if we found a way for teenagers to directly shape the model’s training data.
This was the “Aha” moment that would later inspire Pienso: subject-matter experts, not AI engineers like us, should be able to more easily provide input on model training data. We ended up developing point-and-click tools that allow non-experts to train models on large amounts of data at scale. We then took this technology to local Cambridge, Massachusetts schools and enlisted the help of local teenagers to train the algorithms, which allowed us to capture more nuance in the algorithms than previously possible. With this technology, we went to work with organizations like MTV and Brigham and Women’s Hospital.
Could you share the genesis story of how Pienso was then spun out of MIT into its own company?
We always knew that this technology could provide value beyond the use case we built, but it wasn’t until 2016 that we finally made the jump to commercialize it, when Karthik completed his PhD. By that time, deep learning was exploding in popularity, but it was mainly AI engineers who were putting it to use because nobody else had the expertise to train and serve these models.
What are the key innovations and algorithms that enable Pienso’s no-code interface for building AI models? How does Pienso ensure that domain experts, without technical background, can effectively train AI models?
Pienso eliminates the barriers of “MLOps” — data cleaning, data labeling, model training and deployment. Our platform uses a semi-supervised machine learning approach, which allows users to start with unlabeled training data and then use human expertise to annotate large volumes of text data rapidly and accurately without having to write any code. This process trains deep learning models which are capable of accurately classifying and generating new text.
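To make that concrete, here is a minimal sketch of a semi-supervised text-classification loop in scikit-learn. It illustrates the general technique only; the texts, labels, and confidence threshold below are made up, and this is not Pienso's actual implementation.

```python
# Minimal semi-supervised text classification sketch (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

texts = [
    "refund my order immediately",       # labeled: complaint
    "love the new update, great work",   # labeled: praise
    "the app crashes when I log in",     # unlabeled
    "fantastic customer support today",  # unlabeled
]
labels = np.array([0, 1, -1, -1])  # -1 marks unlabeled examples

X = TfidfVectorizer().fit_transform(texts)

# Self-training: a seed model pseudo-labels the unlabeled rows it is confident
# about, then retrains on the expanded set.
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.6)
model.fit(X, labels)

print(model.predict(X))
```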
How does Pienso offer customization in AI model development to cater to the specific needs of different organizations?
We are strong believers that no one model can solve every problem for every company. We need to be able to build and train custom models if we want AI to understand the nuances of each specific company and use case. That’s why Pienso makes it possible to train models directly on an organization’s own data. This alleviates the privacy concerns of using foundational models, and can also deliver more accurate insights.
Pienso also integrates with existing enterprise systems through APIs, allowing inference results to be delivered in different formats. Pienso can also operate without relying on third-party services or APIs, meaning that data never needs to be transmitted outside of a secure environment. It can be deployed on major cloud providers as well as on-premise, making it an ideal fit for industries that require strong security and compliance practices, such as government agencies or finance.
How do you see the platform evolving in the next few years?
In the next few years, Pienso will continue to evolve by focusing on even greater scalability and efficiency. As the demand for high-volume text analytics grows, we’ll enhance our ability to handle larger datasets with faster inference times and more complex analysis. We’re also committed to reducing the costs associated with scaling large language models to ensure enterprises get value without compromising on speed or accuracy.
We’ll also push further into democratizing AI. Pienso is already a no-code/low-code platform, but we envision expanding the accessibility of our tools even more. We’ll continuously refine our interface so that a broader range of users, from business analysts to technical teams, can continue to train, tune, and deploy models without needing deep technical expertise.
As we work with more customers across diverse industries, Pienso will adapt to offer more tailored solutions. Whether it’s finance, healthcare, or government, our platform will evolve to incorporate industry-specific templates and modules to help users fine-tune their models more effectively for their specific use cases.
Pienso will become even more integrated within the broader AI ecosystem, seamlessly working alongside the solutions / tools from the major cloud providers and on-premise solutions. We’ll focus on building stronger integrations with other data platforms and tools, enabling a more cohesive AI workflow that fits into existing enterprise tech stacks.
Thank you for the great interview. Readers who wish to learn more should visit Pienso.
daisyjones12 · 2 years ago
Probabilistic model-based clustering is an excellent approach to understanding the trends that may be inferred from data and making future forecasts. The relevance of model-based clustering, one of the first subjects taught in data science, cannot be overstated. These models serve as the foundation for machine learning models to comprehend popular trends and their behavior. You can also learn about neural network guides and Python for data science if you are interested in the further career prospects of data science.
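For readers who want to try the idea, a minimal sketch of probabilistic model-based clustering with a Gaussian mixture model might look like this (the data below are synthetic, purely for illustration):

```python
# Probabilistic model-based clustering with a Gaussian mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic "trends": points drawn around different centers.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3, 3], scale=0.5, size=(100, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

# Each point gets a probability of belonging to each cluster, and new points
# can be assigned to the most likely component, which is a simple way of
# forecasting which "trend" an observation belongs to.
print(gmm.predict_proba(data[:3]))
print(gmm.predict([[2.8, 3.1]]))
```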
getreview4u · 2 years ago
(via All you need to know about Machine Learning | meaning, tool, technique, math, algorithm, AI, accuracy, …etc)
kwakudamoah · 2 years ago
Maximizing the Benefits of Risk Scoring and Classification in Forensic Analytics
Unlock the full potential of your forensic analytics with risk scoring and classification approaches. Learn how they can enhance fraud detection, improve regulatory compliance, optimize ICT systems and operations, analyze transactions and crime.
Unlock the Power of Forensic Analytics with Risk Scoring and Classification Forensic analytics plays a crucial role in many areas of business and government operations. From detecting and preventing fraud, to ensuring regulatory compliance and improving operations, to analyzing transactional data and detecting crime, the use of risk scoring and classification approaches can greatly enhance the…
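As a heavily simplified illustration of the risk-scoring idea (not the author's actual methodology), a toy transaction risk-scoring model might look like this in Python; the features, data, and cutoff are invented:

```python
# Toy transaction risk-scoring sketch: score = predicted probability of fraud,
# classification = score above a chosen cutoff. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per transaction: [amount, hour_of_day, is_new_counterparty]
X = np.array([
    [120.0, 14, 0],
    [9800.0, 3, 1],
    [45.0, 11, 0],
    [15000.0, 2, 1],
])
y = np.array([0, 1, 0, 1])  # 1 = previously flagged as fraudulent

model = LogisticRegression().fit(X, y)

scores = model.predict_proba(X)[:, 1]  # risk scores
flagged = scores > 0.5                 # classification step
print(scores.round(3), flagged)
```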
mariacallous · 2 months ago
Arvind Narayanan, a computer science professor at Princeton University, is best known for calling out the hype surrounding artificial intelligence in his Substack, AI Snake Oil, written with PhD candidate Sayash Kapoor. The two authors recently released a book based on their popular newsletter about AI’s shortcomings.
But don’t get it twisted—they aren’t against using new technology. “It's easy to misconstrue our message as saying that all of AI is harmful or dubious,” Narayanan says. He makes clear, during a conversation with WIRED, that his rebuke is not aimed at the software per se, but rather the culprits who continue to spread misleading claims about artificial intelligence.
In AI Snake Oil, those guilty of perpetuating the current hype cycle are divided into three core groups: the companies selling AI, researchers studying AI, and journalists covering AI.
Hype Super-Spreaders
Companies claiming to predict the future using algorithms are positioned as potentially the most fraudulent. “When predictive AI systems are deployed, the first people they harm are often minorities and those already in poverty,” Narayanan and Kapoor write in the book. For example, an algorithm previously used in the Netherlands by a local government to predict who may commit welfare fraud wrongly targeted women and immigrants who didn’t speak Dutch.
The authors turn a skeptical eye as well toward companies mainly focused on existential risks, like artificial general intelligence, the concept of a super-powerful algorithm better than humans at performing labor. Though, they don’t scoff at the idea of AGI. “When I decided to become a computer scientist, the ability to contribute to AGI was a big part of my own identity and motivation,” says Narayanan. The misalignment comes from companies prioritizing long-term risk factors above the impact AI tools have on people right now, a common refrain I’ve heard from researchers.
Much of the hype and misunderstandings can also be blamed on shoddy, non-reproducible research, the authors claim. “We found that in a large number of fields, the issue of data leakage leads to overoptimistic claims about how well AI works,” says Kapoor. Data leakage is essentially when AI is tested using part of the model’s training data—similar to handing out the answers to students before conducting an exam.
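A toy sketch of the simplest form of leakage makes the point: evaluating a model on data it was trained on inflates the reported accuracy. The dataset below is synthetic, purely for illustration.

```python
# Illustrating data leakage: testing on training rows is like handing students
# the exam answers in advance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, flip_y=0.3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Leaky" evaluation: scoring on rows the model has already memorized.
print("accuracy on training data:", model.score(X_train, y_train))
# Honest evaluation: scoring on held-out rows the model never saw.
print("accuracy on held-out data:", model.score(X_test, y_test))
```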
While academics are portrayed in AI Snake Oil as making “textbook errors,” journalists are more maliciously motivated and knowingly in the wrong, according to the Princeton researchers: “Many articles are just reworded press releases laundered as news.” Reporters who sidestep honest reporting in favor of maintaining their relationships with big tech companies and protecting their access to the companies’ executives are noted as especially toxic.
I think the criticisms about access journalism are fair. In retrospect, I could have asked tougher or more savvy questions during some interviews with the stakeholders at the most important companies in AI. But the authors might be oversimplifying the matter here. The fact that big AI companies let me in the door doesn’t prevent me from writing skeptical articles about their technology, or working on investigative pieces I know will piss them off. (Yes, even if they make business deals, like OpenAI did, with the parent company of WIRED.)
And sensational news stories can be misleading about AI’s true capabilities. Narayanan and Kapoor highlight New York Times columnist Kevin Roose’s 2023 chatbot transcript interacting with Microsoft's tool headlined “Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’” as an example of journalists sowing public confusion about sentient algorithms. “Roose was one of the people who wrote these articles,” says Kapoor. “But I think when you see headline after headline that's talking about chatbots wanting to come to life, it can be pretty impactful on the public psyche.” Kapoor mentions the ELIZA chatbot from the 1960s, whose users quickly anthropomorphized a crude AI tool, as a prime example of the lasting urge to project human qualities onto mere algorithms.
Roose declined to comment when reached via email and instead pointed me to a passage from his related column, published separately from the extensive chatbot transcript, where he explicitly states that he knows the AI is not sentient. The introduction to his chatbot transcript focuses on “its secret desire to be human” as well as “thoughts about its creators,” and the comment section is strewn with readers anxious about the chatbot’s power.
Images accompanying news articles are also called into question in AI Snake Oil. Publications often use clichéd visual metaphors, like photos of robots, at the top of a story to represent artificial intelligence features. Another common trope, an illustration of an altered human brain brimming with computer circuitry used to represent the AI’s neural network, irritates the authors. “We're not huge fans of circuit brain,” says Narayanan. “I think that metaphor is so problematic. It just comes out of this idea that intelligence is all about computation.” He suggests images of AI chips or graphics processing units should be used to visually represent reported pieces about artificial intelligence.
Education Is All You Need
The adamant admonishment of the AI hype cycle comes from the authors’ belief that large language models will actually continue to have a significant influence on society and should be discussed with more accuracy. “It's hard to overstate the impact LLMs might have in the next few decades,” says Kapoor. Even if an AI bubble does eventually pop, I agree that aspects of generative tools will be sticky enough to stay around in some form. And the proliferation of generative AI tools, which developers are currently pushing out to the public through smartphone apps and even formatting devices around it, just heightens the necessity for better education on what AI even is and its limitations.
The first step to understanding AI better is coming to terms with the vagueness of the term, which flattens an array of tools and areas of research, like natural language processing, into a tidy, marketable package. AI Snake Oil divides artificial intelligence into two subcategories: predictive AI, which uses data to assess future outcomes; and generative AI, which crafts probable answers to prompts based on past data.
It’s worth it for anyone who encounters AI tools, willingly or not, to spend at least a little time trying to better grasp key concepts, like machine learning and neural networks, to further demystify the technology and inoculate themselves from the bombardment of AI hype.
During my time covering AI for the past two years, I’ve learned that even if readers grasp a few of the limitations of generative tools, like inaccurate outputs or biased answers, many people are still hazy about all of its weaknesses. For example, in the upcoming season of AI Unlocked, my newsletter designed to help readers experiment with AI and understand it better, we included a whole lesson dedicated to examining whether ChatGPT can be trusted to dispense medical advice based on questions submitted by readers. (And whether it will keep your prompts about that weird toenail fungus private.)
A user may approach the AI’s outputs with more skepticism when they have a better understanding of where the model’s training data came from—often the depths of the internet or Reddit threads—and it may hamper their misplaced trust in the software.
Narayanan believes so strongly in the importance of quality education that he began teaching his children about the benefits and downsides of AI at a very young age. “I think it should start from elementary school,” he says. “As a parent, but also based on my understanding of the research, my approach to this is very tech-forward.”
Generative AI may now be able to write half-decent emails and help you communicate sometimes, but only well-informed humans have the power to correct breakdowns in understanding around this technology and craft a more accurate narrative moving forward.
tangibletechnomancy · 2 years ago
How To Use AI To Fake A Scandal For Fun, Profit, and Clout
Or, I Just Saw People I Know To Be Reasonable Fall For A Fake "Ripoff" And Now I'm Going To Gently Demonstrate What Really Happened
So, we all know what people say about AI. It's just an automatic collage machine, it's stealing your data (as if the rest of the mainstream internet isn't - seriously, we should be using that knee-jerk disgust response to demand better internet privacy laws rather than try to beef up copyright so that compliance has to come at the beginning rather than the end of the process and you can be sued on suspicion of referencing, but I digress...), it can't create anything novel, some people go so far as to claim it's not even synthesizing anything, but just acting as a search engine and returning something run through a filter and "proving" it by "searching" for their own art and "finding" it.
And those are blatant lies.
The thing is, the reason AI is such a breakthrough - and the reason we memed with it so hard when DALL-E Mini and DALL-E 2 first dropped - is because it CAN create novel output. Because it CAN visualize the absurd ideas that no one has ever posted to the internet before. In fact, it would be a bigger breakthrough in computer science if we DID come up with an automatic collage machine - something that knows where to cut out a part of one image and paste it onto another, then smooth out the lighting and colors to make them fairly consistent, to make it look like what we would recognize as an image we're asking for? That would make the denoising algorithm on steroids that a diffusion model is look like child's play.
But, unlike the posts that claim that they're just acting as a collage maker at best and a search engine at worst, I'm not going to ask you to take my word for it (and stick a pin in this point, we'll come back to it later). I'm going to ask you to go to Simple Stable (or Craiyon, or the Karlo demo, if Google Colab feels too complicated for you - or if you like, do all of the above) and throw in a shitpost prompt or two. Ask for a velociraptor carousel pony ridden by a bunny. Ask for Godzilla fighting a wacky waving inflatable arm flailing tube man. Ask for an oil painting of a capybara wearing an ornate princess gown. Shitpost with it like we did before these myths took hold.
Now take your favorite result(s) and reverse image search them. Did you get anything remotely similar to your generated image? Probably not!
So then, how did someone end up getting a near perfect recreation of their work? Was that just some kind of wacky, one-in-a-million coincidence?
Well - oh no, look at that, I asked it for a simplistic character drawing and it happened to me too, it just returned a drawing of mine that I never even uploaded, and it's the worst drawing I've done since the fifth grade even just to embarrass me! Oh no, what happened, did they change things right under my nose, has digital surveillance gotten even WORSE?? Look, see, here's the original on the left, compare it to the output on the right - scary!! They're training on the contents of your computer in real time now, aaaagh!!
[Two images: the original drawing on the left, the img2img output on the right.]
Except, of course, for the fact that the entire paragraph above was a lie and I did this on purpose in a way no one could possibly recreate from a text prompt, even with a perfect description.
How?
See, some models have this nifty little function called img2img. It can be used for anything from guiding the composition of your final image with a roughly drawn layout, to turning a building into a dragon...to post-processing of a hand-drawn image, to blatantly fucking lying about how AI works.
I took 5 minutes out of my day to crudely draw a character. I uploaded the image to this post. I saved the post as a draft. I stuck the image URL in the init_image field in Simple Stable, cranked the init strength up to 0.8, cleared all text prompts, and ran it. It did exactly what I told it to and tried to lightly refine the image I gave it.
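For anyone curious what that looks like outside the Simple Stable notebook, here is a rough equivalent using the Hugging Face diffusers library. The checkpoint name is an assumption, and note that diffusers' `strength` appears to run in the opposite direction from the notebook's "init strength": a low value here is what keeps the output close to the input image.

```python
# Rough sketch of the img2img trick with diffusers (not the notebook the post
# used; the checkpoint name and file paths are assumptions).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("my_character_drawing.png").convert("RGB")

# In diffusers, LOW strength = stay close to the init image (roughly the
# opposite convention from the notebook's "init strength 0.8" above).
result = pipe(
    prompt="",          # no text prompt, as in the post
    image=init_image,
    strength=0.2,
).images[0]
result.save("suspiciously_familiar_output.png")
```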
If you see someone claiming that an AI stole their image with this kind of "proof", and the image they're comparing is not ITSELF a parody of an extremely well-known piece such as the Mona Lisa, or just so extremely generic that the level of similarity could be a coincidence (you/your favorite artist do/es not own the rule of thirds or basic fantasy creatures, just to name one family of examples I've seen), this is what happened.
So from here you must realize that it is deeply insidious that posts that make these claims usually imply or even outright state that you should NOT try to recreate this but instead just take their word for it, stressing ~DON'T FEED THE MACHINE~. It's always some claim about "ohhh, the more you use them, the more they learn, I made a SACRIFICE so you don't have to" - but txt2img functions can't use your interaction to learn jack shit. There's no new information in a text prompt for them TO learn. Most img2img models can't learn from your input either, for that matter! I still recommend being careful about corporate img2img toys - we know that Facebook, for instance, is happy to try and beef up facial recognition for the WORST possible reasons - but if you're worried about your privacy and data harvesting, any given txt2img model is one of the least worrying things on the internet today.
So do be careful with your privacy online, and PLEASE use your very understandable knee-jerk horror response to how much extremely personal content can be found in training databases as a call to DEMAND better privacy laws ("do not track" should not be just for show ffs) and compliance with security protocols in fields that deal with very private information (COMMON CRAWL DOESN'T GO FAR OUT OF ITS WAY, IT SHOULD NEVER HAVE BEEN ABLE TO GET ANY MEDICAL IMAGES THE PATIENTS DIDN'T SHARE THEMSELVES HOLY SHIT, SOME HOSPITAL WORKERS AND/OR MEDICAL COMMUNICATIONS DEVELOPERS BETTER BE GETTING FIRED AND/OR SUED) - but don't just believe a convenient and easy-to-disprove lie because it aligns with that feeling.
punkeccentricenigma · 1 year ago
DONATELLO X READER "a Night Ride"
Relationship status: Romantic
Reader pronouns: She/Her
Words: 2739
TW: Slight angst (I guess? I'm not sure), some grammatical errors because English is not my first language.
Author's note: Yooo, this is my first time writing a oneshot in the last few years, I'm kinda proud of it, lmao. Anyway, enjoy.
.⋆。⋆˚。⋆。˚。⋆. .⋆。⋆˚。⋆。˚。⋆.
The pale moonlight slightly illuminated the sky above, much like New York itself, adding to the charm of the colorful lights that refused to fade despite the late hour of the night.
The Turtle Tank gracefully maneuvered through the uncrowded streets, its loud engine echoing around, serving as an unspoken warning to pedestrians to watch their step when crossing the road. Two people were inside the vehicle: Donatello, who else? He usually didn't allow his brothers to take the tank without him because he knew how chaotic they could be and how they might destroy everything in their path. The only exception was when April needed help with Mayhem, and as a reward, she offered pizza. That's when Raph took the Turtle Tank. He didn't cause much damage to the vehicle's body, so the purple genius spared him a strong reprimand. This time.
The other person was [Y.N], another human acquaintance of the turtles. Why was she there? And at this hour? Well...
"I can't believe I had to pick you up at this hour because some guy stood you up!" Yes, that was the reason. You see, [Y.N] had a date scheduled for tonight with a guy from her school, which was supposed to take place at a restaurant on the other side of New York. She wasn't a fan of such fancy outings, but the excitement of the meeting had gotten to her, and that's how it ended up. She had waited for a few hours for the no-show date instead of going straight to her apartment and crying into her pillow. At least then, she would have had a slight chance of catching a taxi and not having to call Donatello, who was clearly annoyed. Tough luck.
"I'm not a fan of such vocabulary, oh, who am I kidding? I am, so I'll say it: Didn't I tell you!?" The purple enthusiast began waving his hands during his monologue, trying to express his emotions somehow. Right, Donnie had warned the teenager, and not just once. If she had to say anything now, she'd confess it lasted a whole week.
"[Y.N], going on a date with such a normie won't end well," Soft-shell casually declared, appearing out of nowhere in the kitchen. Well, maybe not 'nowhere,' as it was their base's kitchen, so he had every right to be there - but no one expected the turtle to emerge from his workshop.
The teenager had a puzzled look as she nibbled on one of the sandwiches she and Leo had made for their movie night. "Why?" She didn't want to dismiss Donatello; she knew he genuinely cared about her and was trying his best to help despite his quirks, but this was already the fourth 'rational' argument this week! "He's not Dale, so nothing more annoying can happen!"
"Sorry, but I disagree," his robotic arms unfolded a whiteboard with potential threat assessments or risky behaviors. [Y.N]'s eyes flattened to read the small font; was that Helvetica? "According to my calculations, the chance that this guy is not suitable for you is precisely 76.43 percent. Of course, this number didn't come out of thin air. It's based on a series of algorithms and data analyses I conduct every day. I take into account factors like communication and conflict resolution skills, emotional availability, attachment style, and even past behaviors. It's quite a sophisticated model, if I may say so." The science enthusiast's proud smile said it all.
"Wow."
"My calculations are always reliable, sure, sometimes I make mistakes, but not in matters like these!" It wasn't entirely true. Matters of the heart weren't Donatello's strong suit, which often led to friction between him and his family. Heck, even Doctor Delicate Touch had to help him when Shelldon went through his rebellious phase! But when it came to someone as close as [Y.N]? He didn't want to be wrong.
The girl bit her cheek from the inside, tilting slightly to the side as the turtle turned left again. Her eyes occasionally tracked the new streetlamp, trying to gather her thoughts.
"Don't tell me you're showing her that board," a red-slider turtle peeked out from behind the whiteboard. "Yeah, you're showing her." His eyes didn't express surprise, more like indifference to his righteousness.
Donatello rolled his black eyes, tucking the presentation back into his battle shell as Leonardo sidestepped him gracefully, grabbing a plate full of sandwiches. His gaze settled on the teenager, who had her back turned to him and was slightly bent over.
"You were snacking, weren't you?" [Y.N] twitched slightly at her friend's keen observation. She slowly turned her head towards Leo, her smile seeming somewhat embarrassed.
"No?"
"Spots around your mouth from mustard say something else," Leonardo pointed out, pointing with his finger. The embarrassed teenager chuckled softly, feeling her posture slightly break.
"Okay, you caught me!" Despite being in despair, her voice also conveyed false drama. "But what can I do when you make such awesome sandwiches?? You guys live in the sewers, after all!" Donnie chuckled quietly to himself, knowing where his friend picked up these habits. It might not be a matter of great pride, but it made an impression. "Well, give me another one!" Before anyone could react, the girl practically lunged at Leo to reach the plate of food he had deliberately moved away from himself.
"Nuh-uh, because there won't be enough for the others." He easily comically pushed his friend away and headed towards the exit, winking at his brother in passing. Donatello rolled his eyes, knowing what was going on. He wasn't happy about it, but there was nothing he could do about his (not) twin's foolishness, or at least he didn't want a repeat of the last time he meddled in his brothers' affairs.
Finally, his dark eyes settled on the girl, who chuckled with a smile. She wanted to wipe her face with the sleeve of her hoodie, but the mechanical hand had her wrist in its grip. "Huh?"
"Didn't your mom teach you good manners?" Donnie approached her, taking a single sheet of paper towel from the red kitchen countertop nearby.
"I repeat, you guys live in the sewers, so what I wanted to do is the least of your worries." [Y.N] laughed, trying in vain to free her hand from the scientist's robotic grasp. "Can you let me go, Dr. Octopus?"
When she attempted to jerk her wrist again, Donatello began gently wiping her lips with the paper towel in a slow, deliberate motion, getting narrowed pupils in response. The boy didn't have the courage to look into her eyes, despite the brave activity he was currently engaged in, especially when his thumb lingered at the corner of her mouth for a second longer than it should have.
Once he finished wiping, he took the paper and stepped back slightly, realizing what he had done. When they both locked eyes, warmth flooded their cheeks, and the shock added to the turtle's expression. It was clear who was more in control of their emotions here, hm?
The boy coughed abruptly, averted his gaze, and straightened up - he didn't even notice when he had been slouching. "Living in the sewers doesn't compromise my hygiene," he commented a bit too loudly, feeling his voice crack with each word. "I'd say it's Leo who's more likely to." He chuckled slightly, and the girl joined in. "Well, anyway! Movie marathon coming up, so, see you in a few minutes??" Since when was he feeling so hot?? "See you!" He finally shouted, panicking and fleeing the kitchen.
[Y.N] chuckled with a smile, covering the lower part of her face.
[Y.N] sighed shakily, covering the lower part of her face.
"Oh, for Newton's sake, I feel like punching someone! ... Is this how Raph usually feels when he looks at us?" The red light appeared on the traffic signal, reflecting off the dark Turtle Tank's body. When the boy stopped the vehicle for a moment, he heard quiet sobbing. Confused, he looked to the side and saw [Y.N], who had started crying uncontrollably.
"I'm sorry."
The turtle's eyes widened. Her voice seemed to slowly shatter like transparent glass between each tear drop, and her posture was completely destroyed as she bent in half on the soft seat, completely covering her face.
Donatello glanced out of the corner of his eye at the front windshield, wanting to check if the light had changed - it was still red, so he immediately got up and approached the girl, squatting by the seat. He didn't handle his emotions well, especially someone else's, but he felt a pang in the depths of his heart that he wanted to get rid of. With a slight hesitation, he placed his three-fingered hand on her back, gently moving it up and down - Splinter, and then Raphael often did this to comfort the science enthusiast when he struggled with something.
"I should have listened to you," the teenager began, "It was a mistake to hope for a good time with that person." The boy felt terrible. Yes, he had wanted to help her understand her mistake at the time, but he still hoped that despite his unpredictable intellect, he was wrong. "God, I just want to hide in my room and never come out."
"Don't apologize, it's not your fault." Her eyes peeked out from behind her fingers. Donnie's eyebrows furrowed seeing [Y.N]'s bloodshot and red eyes. "Who would have thought he wouldn't show up after all?"
"You," she sighed heavily, straightening up. Her expression conveyed sorrow. "Your calculations turned out somewhat effective."
Donatello looked at her with empathy, trying to find the right words that could comfort her. He gently raised his hand and lightly tapped her shoulder, trying to convey support.
"Science... doesn't always get it right." [Y.N]'s eyes widened at his words. Why did he think that way? Science was practically one of Donnie's defining characteristics, it was unthinkable. Sure, Leo or Mikey might say that, but not him, not her genius acquaintance who would want to rule the world! [Y.N] was now certain that something was going on deep within him.
"What are you saying?" Her voice wasn't supposed to sound less casual, slightly mocking, but she couldn't help it. "Science doesn't get it right? That's so... illogical of you!"
Her eyes met his dark ones again, expressing strong uncertainty and... enchantment, quite enchantment. His face was perfectly illuminated by the city lights, causing a slight blush of astonishment on the teenager's face.
"Science doesn't always have it right," he repeated and stood upright. His fists were tightly clenched, and his posture was rigid. "And I'll prove it to you."
"How?"
His mouth opened for a second, but he closed it again, momentarily struggling with whether to confess one thing, but now there was no turning back, he had to do it. 'Calm down, Donatello, calm down...'
"When I calculated our 'compatibility,' the result came out excessively negative..." he began, trying with all his might not to take his eyes off the young girl. He didn't want his friend to think he was weird! Although, could there be anything weirder than a teenage mutant ninja turtle with a high IQ? "But... but I feel something else."
'Wait, he calculated our compatibility?' [Y.N] repeated in her thoughts, trying to understand the meaning of those words as quickly as possible. Compatibility. Compatibility... the teenager's blush deepened. 'Is he into me...?!'
She was snapped out of her thoughts by a touch. She felt the boy grab her hands in his, gently squeezing them.
"Numbers don't make sense in this situation," he began. "So... will you go on a date with me?" His voice seemed uncertain, not in terms of his words but about himself. As mentioned earlier, he was a mutated ninja turtle; what chance did he have? But for some time now, he couldn't resist the growing feelings for [Y.N], who, as one of the few, had gotten close to him and understood him. He knew how annoying he could be with his habits, strong sarcasm, or introverted nature, but it didn't bother her, at least most of the time, and he really appreciated that.
The silence stretched on infinitely, causing even greater nervousness on Donatello's part.
"... I've only just been dumped by one guy."
"Oh, right!" Donnie looked startled, like a deer in headlights. Yes, what an idiot! He should have thought this through, or at least used less direct words! How does it look now? "I'm sorry, this was inappropriate; we can forg--!"
"But I'll go." Another silence.
"..."
"..."
"What?"
"Well, you know, let's wait a week for today's emotions to settle," she smoothly took his wrists in her hands. Her smile, despite the slight nervousness of the situation, radiated a pleasant feeling, full of strange comfort, as if not judging him at all. "But after that, I'd be happy to go on a date with you."
Donatello seemed... disconnected. A million thoughts swirled in his mind. Was this real?
"Donnie?" He blinked a few times and looked at the person in front of him again. After a brief moment, he smiled, tilting his head slightly.
"Thanks." That's all he said, and the traffic light turned green. Without waiting, he took the driver's seat and drove on.
"So, on our date, maybe we can watch something? Like... Oppeinhamer?"
"Oh, you know me so well!"
Bonus:
"I'm in position, Tails," the nonchalant voice of the red-slider turtle was audible through a small communication device. [Y.N] chuckled softly, watching out of the corner of her eye as Donatello, with a grimace on his face, sat down next to her on the edge of the residential building's roof.
"My code name is 'Shadow,' Leo!" The turtle sighed heavily, furrowing his brows. "And no, it's not a reference to Sonic!"
"You can't fool me," Leonardo laughed, leaning out from behind the building's wall, sticking his tongue out in the same direction where the pair is.
"Be quiet, Bluey," this time [Y.N] spoke up, bringing the communicator closer to her lips. Seeing the gloomy expression on Leo's face instead of his usual smile, the pair burst into mocking giggles.
"Yeah, yeah, keep making fun of the fact that I watched that show at 3 in the morning." The teenager muttered quietly, resting his weapon on his shoulder. "If you couldn't sleep, you'd watch it too!"
Donatello rolled his eyes, accompanied by his rare smile, and discreetly took the girl's hand. Meanwhile, [Y.N] gently rested her head on his shoulder, giggling again.
"Wasn't your code name 'Purple Knight' by any chance?" She asked, lightly moving her feet.
"It was, but you know, most changes are good, and I'm getting older, so it's natural that I change my nickname~."
The girl raised one eyebrow slightly, adjusting her position a bit to look at Donnie. He met her gaze, which weakened after a moment, and a hint of embarrassment appeared on his forehead.
"FINE, maybe it is a reference to Sonic!" He declared loudly, gesturing. "I've been catching up on Sonic Prime lately; you can't blame me!"
[Y.N] burst into laughter, hugging the boy. For the first few seconds, his body stiffened, but after a while, he put his arm around her. However, out of the corner of his eye, Donatello noticed someone walking on the sidewalk.
"It is Shadow. Bluey, stay alert, the target is approaching," he said through the headset, putting on his special goggles.
"Mhm."
The target was the same boy who had stood [Y.N] up a few weeks earlier on the day of their almost date. Yes, it was Donatello's idea, wanting to seek revenge for his almost-partner.
"Now, Bluey!"
Leonardo leaped out from behind the wall, right in front of the unsuspecting boy who needed a few seconds to grasp the situation.
"Hey, buddy, how's life treating you?" The turtle asked with a malicious grin.
"A talking turtle?!"
"One who happens to be an awesome ninja!" He chuckled, swinging his sword. After a brief moment, a bright blue portal appeared beneath the teenager.
His scream lasted only a nanosecond as he disappeared into the blue void, eliciting laughter from Leonardo. "Have a nice trip to New Jersey~!"
jcmarchi · 20 days ago
Despite its impressive output, generative AI doesn’t have a coherent understanding of the world
New Post has been published on https://thedigitalinsider.com/despite-its-impressive-output-generative-ai-doesnt-have-a-coherent-understanding-of-the-world/
Despite its impressive output, generative AI doesn’t have a coherent understanding of the world
Large language models can do impressive things, like write poetry or generate viable computer programs, even though these models are trained to predict words that come next in a piece of text.
Such surprising capabilities can make it seem like the models are implicitly learning some general truths about the world.
But that isn’t necessarily the case, according to a new study. The researchers found that a popular type of generative AI model can provide turn-by-turn driving directions in New York City with near-perfect accuracy — without having formed an accurate internal map of the city.
Despite the model’s uncanny ability to navigate effectively, when the researchers closed some streets and added detours, its performance plummeted.
When they dug deeper, the researchers found that the New York maps the model implicitly generated had many nonexistent streets curving between the grid and connecting far away intersections.
This could have serious implications for generative AI models deployed in the real world, since a model that seems to be performing well in one context might break down if the task or environment slightly changes.
“One hope is that, because LLMs can accomplish all these amazing things in language, maybe we could use these same tools in other parts of science, as well. But the question of whether LLMs are learning coherent world models is very important if we want to use these techniques to make new discoveries,” says senior author Ashesh Rambachan, assistant professor of economics and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Rambachan is joined on a paper about the work by lead author Keyon Vafa, a postdoc at Harvard University; Justin Y. Chen, an electrical engineering and computer science (EECS) graduate student at MIT; Jon Kleinberg, Tisch University Professor of Computer Science and Information Science at Cornell University; and Sendhil Mullainathan, an MIT professor in the departments of EECS and of Economics, and a member of LIDS. The research will be presented at the Conference on Neural Information Processing Systems.
New metrics
The researchers focused on a type of generative AI model known as a transformer, which forms the backbone of LLMs like GPT-4. Transformers are trained on a massive amount of language-based data to predict the next token in a sequence, such as the next word in a sentence.
But if scientists want to determine whether an LLM has formed an accurate model of the world, measuring the accuracy of its predictions doesn’t go far enough, the researchers say.
For example, they found that a transformer can predict valid moves in a game of Connect 4 nearly every time without understanding any of the rules.
So, the team developed two new metrics that can test a transformer’s world model. The researchers focused their evaluations on a class of problems called deterministic finite automata, or DFAs.
A DFA is a problem with a sequence of states, like intersections one must traverse to reach a destination, and a concrete way of describing the rules one must follow along the way.
They chose two problems to formulate as DFAs: navigating on streets in New York City and playing the board game Othello.
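To make the DFA framing concrete, here is a toy example in Python. The intersections and moves are invented; the actual study used real New York City streets and Othello positions.

```python
# A toy DFA: states (intersections) plus rules for which moves are valid from
# each state. Everything here is made up for illustration.
TRANSITIONS = {
    ("A", "east"): "B",
    ("B", "east"): "C",
    ("B", "north"): "D",
    ("D", "west"): "A",
}

def run(start, moves):
    """Follow a sequence of moves; return the final state, or None if any move is invalid."""
    state = start
    for move in moves:
        state = TRANSITIONS.get((state, move))
        if state is None:
            return None
    return state

print(run("A", ["east", "north"]))  # A -> B -> D
print(run("A", ["north"]))          # no rule for ("A", "north"), so None
```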
“We needed test beds where we know what the world model is. Now, we can rigorously think about what it means to recover that world model,” Vafa explains.
The first metric they developed, called sequence distinction, says a model has formed a coherent world model if it sees two different states, like two different Othello boards, and recognizes how they are different. Sequences, that is, ordered lists of data points, are what transformers use to generate outputs.
The second metric, called sequence compression, says a transformer with a coherent world model should know that two identical states, like two identical Othello boards, have the same sequence of possible next steps.
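Stated as code, the two checks boil down to something like the toy functions below, where `true_state(prefix)` stands for the ground-truth DFA state a prefix reaches and `model_next_moves(prefix)` stands for whatever continuations the trained transformer treats as valid. Both are hypothetical stand-ins, and this is a simplification of the metrics rather than the authors' implementation.

```python
# Toy versions of the two world-model checks described above.
def passes_compression(true_state, model_next_moves, prefix_a, prefix_b):
    """Two prefixes reaching the SAME true state should get the same next moves."""
    assert true_state(prefix_a) == true_state(prefix_b)
    return set(model_next_moves(prefix_a)) == set(model_next_moves(prefix_b))

def passes_distinction(true_state, model_next_moves, prefix_a, prefix_b):
    """Two prefixes reaching DIFFERENT true states should be treated differently."""
    assert true_state(prefix_a) != true_state(prefix_b)
    return set(model_next_moves(prefix_a)) != set(model_next_moves(prefix_b))
```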
They used these metrics to test two common classes of transformers, one which is trained on data generated from randomly produced sequences and the other on data generated by following strategies.
Incoherent world models
Surprisingly, the researchers found that transformers which made choices randomly formed more accurate world models, perhaps because they saw a wider variety of potential next steps during training. 
“In Othello, if you see two random computers playing rather than championship players, in theory you’d see the full set of possible moves, even the bad moves championship players wouldn’t make,” Vafa explains.
Even though the transformers generated accurate directions and valid Othello moves in nearly every instance, the two metrics revealed that only one generated a coherent world model for Othello moves, and none performed well at forming coherent world models in the wayfinding example.
The researchers demonstrated the implications of this by adding detours to the map of New York City, which caused all the navigation models to fail.
“I was surprised by how quickly the performance deteriorated as soon as we added a detour. If we close just 1 percent of the possible streets, accuracy immediately plummets from nearly 100 percent to just 67 percent,” Vafa says.
When they recovered the city maps the models generated, they looked like an imagined New York City with hundreds of streets crisscrossing overlaid on top of the grid. The maps often contained random flyovers above other streets or multiple streets with impossible orientations.
These results show that transformers can perform surprisingly well at certain tasks without understanding the rules. If scientists want to build LLMs that can capture accurate world models, they need to take a different approach, the researchers say.
“Often, we see these models do impressive things and think they must have understood something about the world. I hope we can convince people that this is a question to think very carefully about, and we don’t have to rely on our own intuitions to answer it,” says Rambachan.
In the future, the researchers want to tackle a more diverse set of problems, such as those where some rules are only partially known. They also want to apply their evaluation metrics to real-world, scientific problems.
This work is funded, in part, by the Harvard Data Science Initiative, a National Science Foundation Graduate Research Fellowship, a Vannevar Bush Faculty Fellowship, a Simons Collaboration grant, and a grant from the MacArthur Foundation.
spacetimewithstuartgary · 3 months ago
AI helps distinguish dark matter from cosmic noise
Dark matter is the invisible force holding the universe together – or so we think. It makes up around 85% of all matter and around 27% of the universe’s contents, but since we can’t see it directly, we have to study its gravitational effects on galaxies and other cosmic structures. Despite decades of research, the true nature of dark matter remains one of science’s most elusive questions.
According to a leading theory, dark matter might be a type of particle that barely interacts with anything else, except through gravity. But some scientists believe these particles could occasionally interact with each other, a phenomenon known as self-interaction. Detecting such interactions would offer crucial clues about dark matter’s properties.
However, distinguishing the subtle signs of dark matter self-interactions from other cosmic effects, like those caused by active galactic nuclei (AGN) – the supermassive black holes at the centers of galaxies – has been a major challenge. AGN feedback can push matter around in ways that are similar to the effects of dark matter, making it difficult to tell the two apart.
In a significant step forward, astronomer David Harvey at EPFL’s  Laboratory of Astrophysics has developed a deep-learning algorithm that can untangle these complex signals. Their AI-based method is designed to differentiate between the effects of dark matter self-interactions and those of AGN feedback by analyzing images of galaxy clusters – vast collections of galaxies bound together by gravity. The innovation promises to greatly enhance the precision of dark matter studies.
Harvey trained a Convolutional Neural Network (CNN) – a type of AI that is particularly good at recognizing patterns in images – with images from the BAHAMAS-SIDM project, which models galaxy clusters under different dark matter and AGN feedback scenarios. By being fed thousands of simulated galaxy cluster images, the CNN learned to distinguish between the signals caused by dark matter self-interactions and those caused by AGN feedback.
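For a sense of what such a classifier involves, a stripped-down PyTorch sketch is shown below. The architecture, image size, and random stand-in data are assumptions; the study's actual “Inception” network is considerably more complex.

```python
# Minimal CNN sketch for classifying simulated cluster images into two classes
# (self-interacting dark matter vs. AGN-feedback-dominated). Illustrative only.
import torch
import torch.nn as nn

class ClusterCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: SIDM vs. AGN feedback
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ClusterCNN()
fake_batch = torch.randn(8, 1, 64, 64)   # stand-in for simulated cluster maps
logits = model(fake_batch)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
print(logits.shape)
```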
Among the various CNN architectures tested, the most complex, dubbed “Inception”, proved to also be the most accurate. The AI was trained on two primary dark matter scenarios, featuring different levels of self-interaction, and validated on additional models, including a more complex, velocity-dependent dark matter model.
Inception achieved an impressive accuracy of 80% under ideal conditions, effectively identifying whether galaxy clusters were influenced by self-interacting dark matter or AGN feedback. It maintained its high performance even when the researchers introduced realistic observational noise that mimics the kind of data we expect from future telescopes like Euclid.
What this means is that Inception – and the AI approach more generally – could prove incredibly useful for analyzing the massive amounts of data we collect from space. Moreover, the AI’s ability to handle unseen data indicates that it’s adaptable and reliable, making it a promising tool for future dark matter research.
AI-based approaches like Inception could significantly impact our understanding of what dark matter actually is. As new telescopes gather unprecedented amounts of data, this method will help scientists sift through it quickly and accurately, potentially revealing the true nature of dark matter.
This day in history
#15yrsago EULAs + Arbitration = endless opportunity for abuse https://archive.org/details/TheUnconcionabilityOfArbitrationAgreementsInEulas
#15yrsago Wikipedia’s facts-about-facts make the impossible real https://web.archive.org/web/20091116023225/http://www.make-digital.com/make/vol20/?pg=16
#10yrsago Youtube nukes 7 hours’ worth of science symposium audio due to background music during lunch break https://memex.craphound.com/2014/11/25/youtube-nukes-7-hours-worth-of-science-symposium-audio-due-to-background-music-during-lunch-break/
#10yrsago El Deafo: moving, fresh YA comic-book memoir about growing up deaf https://memex.craphound.com/2014/11/25/el-deafo-moving-fresh-ya-comic-book-memoir-about-growing-up-deaf/
#5yrsago Networked authoritarianism may contain the seeds of its own undoing https://crookedtimber.org/2019/11/25/seeing-like-a-finite-state-machine/
#5yrsago After Katrina, neoliberals replaced New Orleans’ schools with charters, which are now failing https://www.nola.com/news/education/article_0c5918cc-058d-11ea-aa21-d78ab966b579.html
#5yrsago Talking about Disney’s 1964 Carousel of Progress with Bleeding Cool: our lost animatronic future https://bleedingcool.com/pop-culture/castle-talk-cory-doctorow-on-disneys-carousel-of-progress-and-lost-optimism/
#5yrsago Tiny alterations in training data can introduce “backdoors” into machine learning models https://arxiv.org/abs/1903.06638
#5yrsago Leaked documents document China’s plan for mass arrests and concentration-camp internment of Uyghurs and other ethnic minorities in Xinjiang https://www.icij.org/investigations/china-cables/exposed-chinas-operating-manuals-for-mass-internment-and-arrest-by-algorithm/
#5yrsago Hong Kong elections: overconfident Beijing loyalist parties suffer a near-total rout https://www.scmp.com/news/hong-kong/politics/article/3039132/results-blog
#5yrsago Library Socialism: a utopian vision of a sustaniable, luxuriant future of circulating abundance https://memex.craphound.com/2019/11/25/library-socialism-a-utopian-vision-of-a-sustaniable-luxuriant-future-of-circulating-abundance/
#1yrago The moral injury of having your work enshittified https://pluralistic.net/2023/11/25/moral-injury/#enshittification
izicodes · 2 years ago
Hi! I’m a student currently learning computer science in college and would love it if you had any advice for a cool personal project to do? Thanks!
Personal Project Ideas
Hiya!! 💕
It's so cool that you're a computer science student, and with that, you have plenty of options for personal projects that can help with learning more from what they teach you at college. I don't have any experience being a university student however 😅
Someone asked me a very similar question before because I shared my projects list and they asked how I come up with project ideas - maybe this can inspire you too, here's the link to the post [LINK]
However, I'll be happy to share some ideas with you right now. Just a heads up: you can alter the projects to fit your own specific interests or goals. Even though these are personal projects, not assignments from school, you can always personalise them to yourself as well! Also, I don't know what level you're at, e.g. a beginner or already pretty confident in programming, so if a project sounds hard, try to simplify it down - no need to go overboard!!
But here is the list I came up with (some are from my own list):
Personal Finance Tracker
A web app that tracks personal finances by integrating with bank APIs. You can use Python with Flask for the backend and React for the frontend. I think this would be great for learning how to work with APIs and how to build web applications 🏦
Online Food Ordering System
A web app that allows users to order food from a restaurant's menu. You can use PHP with Laravel for the backend and Vue.js for the frontend. This helps you learn how to work with databases (a key skill I believe) and how to build interactive user interfaces 🙌🏾
Movie Recommendation System
I see a lot of developers make this on Twitter and YouTube. It's a machine-learning project that recommends movies to users based on their past viewing habits. You can use Python with Pandas, Scikit-learn, and TensorFlow for the machine learning algorithms. Obviously, this helps you learn about how to build machine-learning models, and how to use libraries for data manipulation and analysis 📊
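If you want a feel for the core idea before reaching for TensorFlow, a tiny item-similarity recommender in pandas and scikit-learn could look like this (the ratings are made up; a real project would use a dataset like MovieLens) 🎬

```python
# Tiny item-based recommender sketch: score unseen movies by their similarity
# to the movies a user already rated highly. Illustrative data only.
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# rows = users, columns = movies, values = ratings (0 = not seen)
ratings = pd.DataFrame(
    {"Inception": [5, 4, 0], "Titanic": [0, 5, 4], "Alien": [4, 0, 5]},
    index=["ana", "ben", "cy"],
)

# Movie-to-movie similarity based on who rated them alike.
sim = pd.DataFrame(
    cosine_similarity(ratings.T), index=ratings.columns, columns=ratings.columns
)

def recommend(user, top_n=1):
    row = ratings.loc[user]
    seen = row[row > 0]
    scores = sim[seen.index].mul(seen).sum(axis=1).drop(seen.index)
    return scores.nlargest(top_n)

print(recommend("ana"))
```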
Image Recognition App
This is more geared towards app development if you're interested! It's an Android app that uses image recognition to identify objects in a photo. You can use Java or Kotlin for the Android development and TensorFlow for machine learning algorithms. Learning how to work with image recognition and how to build mobile applications - which is super cool 👀
Social Media Platform
(I really want to attempt this one soon) A web app that allows users to post, share, and interact with each other's content. Come up with a cool name for it! You can use Ruby on Rails for the backend and React for the frontend. This project would be great for learning how to build full-stack web applications (a plus cause that's a trend that companies are looking for in developers) and how to work with user authentication and authorization (another plus)! 🎭
Text-Based Adventure Game
If you're interested in game developments, you could make a simple game where users make choices and navigate through a story by typing text commands. You can use Python for the game logic and a library like Pygame for the graphics. This project would be great for learning how to build games and how to work with input/output. 🎮
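Here's a bare-bones sketch of the game loop for this one, just as a starting point; the rooms and commands are placeholders you'd replace with your own story:

```python
# Bare-bones text adventure: rooms as a dictionary, choices typed as commands.
# Pygame could be layered on later for graphics.
ROOMS = {
    "cave": {"description": "A damp cave. Exits: north.", "north": "forest"},
    "forest": {"description": "A quiet forest. Exits: south, east.",
               "south": "cave", "east": "castle"},
    "castle": {"description": "You reach the castle. You win!"},
}

def play():
    room = "cave"
    while room != "castle":
        print(ROOMS[room]["description"])
        command = input("> ").strip().lower()
        if command in ROOMS[room]:
            room = ROOMS[room][command]
        else:
            print("You can't go that way.")
    print(ROOMS[room]["description"])

if __name__ == "__main__":
    play()
```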
Weather App
Pretty simple project - I did this for my apprenticeship and coding night classes! It's a web app that displays weather information for a user's location. You can use Node.js with Express for the backend and React for the frontend. Working with APIs again, how to handle asynchronous programming, and how to build responsive user interfaces! 🌈
Online Quiz Game
A web app that allows users to take quizzes and compete with other players. You could personalise it to a module you're studying right now - making a whole quiz application for it will definitely help you study! You can use PHP with Laravel for the backend and Vue.js for the frontend. You get to work with databases, build real-time applications, and maybe work with user authentication. 🧮
Chatbot
(My favourite, I'm currently planning for this one!) A chatbot that can answer user questions and provide information. You can use Python with Flask for the backend and a natural language processing library like NLTK for the chatbot logic. If you want to mauke it more beginner friendly, you could use HTML, CSS and JavaScript and have hard-coded answers set, maybe use a bunch of APIs for the answers etc! This project would be great because you get to learn how to build chatbots, and how to work with natural language processing - if you go that far! 🤖
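For the beginner-friendly, hard-coded-answers version, a minimal Flask sketch could look like this; the route, keywords, and replies are just placeholders to build on:

```python
# Minimal hard-coded chatbot behind a Flask endpoint (illustrative only).
from flask import Flask, jsonify, request

app = Flask(__name__)

ANSWERS = {
    "hello": "Hi there! Ask me about the weather or my favourite language.",
    "weather": "I can't see outside, but you could wire a weather API in here.",
    "language": "Python, obviously.",
}

@app.route("/chat", methods=["POST"])
def chat():
    data = request.get_json(silent=True) or {}
    message = data.get("message", "").lower()
    # Naive keyword matching; NLTK or an intent model could replace this later.
    for keyword, answer in ANSWERS.items():
        if keyword in message:
            return jsonify({"reply": answer})
    return jsonify({"reply": "Sorry, I don't know that one yet!"})

if __name__ == "__main__":
    app.run(debug=True)
```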
Another place I get inspiration for more web frontend dev projects is on Behance and Pinterest - on Pinterest search for like "Web design" or "[Specific project] web design e.g. shopping web design" and I get inspiration from a bunch of pins I put together! Maybe try that out!
I hope this helps and good luck with your project!
jestergirlbosom · 1 year ago
The crazy thing about AI is I spent the summer of 2021 learning basic machine learning i.e. feed forward neural networks and convolutional neural networks to apply to science. People had models they'd trained on cat images to try to create new cat images and they looked awful but it was fun. And it was also beginning to look highly applicable to the field I'm in (astronomy/cosmology) where you will have an immense amount of 2d image data that can't easily be parsed with an algorithm and would take too many hours for a human to sift through. I was genuinely kinda excited about it then. But seemingly in a blink of an eye we have these chat bots and voice AI and thieving art AI and it's all deadset on this rapid acceleration into cyberpunk dystopia capitalist hellscape and I hate it hate hate it
kwakudamoah · 2 years ago
Predictive Modelling and Risk Forecasting of Crime Using Linked ID
Using advanced data mining and statistical methodology to identify individuals at risk and prevent crime in your community Crime fighting is a major concern for governments and law enforcement agencies worldwide. To combat crime, various interventions such as social programs and credit assessments have been implemented. However, these interventions are not always effective in preventing crime.…
goeswiththeflo · 5 months ago
Um. Situation at work today. Can't quite decide how to feel/deal with it.
I'm first author on an academic manuscript. I sent the full draft around to coauthors for review.
It's about a community science mobile app, so we included the devs as authors as a courtesy since they've done all the work to create the thing.
Instead of giving me edits, one of the devs told me he uploaded the manuscript to chatgpt for "kicks and giggles" and that it had given him "interesting feedback" that he'd be happy to forward to me?!?!
Does this mean my manuscript draft is now part of the algorithm training data? I don't understand how he thought this was ok? I don't want an applied statistics model to give me edits! I want to know if the dev felt like I represented his work correctly! I feel squicked.
Need to figure out diplomatic email language to be like "no. Please give me real edits. I can't accept chatgpt feedback anyways because no academic journal worth its salt should accept work you haven't written yourself." Blargh. Like what's the etiquette here?