#AI reasoning models
Explore tagged Tumblr posts
Text
What the Launch of OpenAI’s o1 Model Tells Us About Their Changing AI Strategy and Vision
New Post has been published on https://thedigitalinsider.com/what-the-launch-of-openais-o1-model-tells-us-about-their-changing-ai-strategy-and-vision/
OpenAI, the pioneer behind the GPT series, has just unveiled a new series of AI models, dubbed o1, that can “think” longer before they respond. The models are designed to handle more complex tasks, particularly in science, coding, and mathematics. Although OpenAI has kept much of the model’s workings under wraps, some clues offer insight into its capabilities and what it may signal about OpenAI’s evolving strategy. In this article, we explore what the launch of o1 might reveal about the company’s direction and the broader implications for AI development.
Unveiling o1: OpenAI’s New Series of Reasoning Models
o1 is OpenAI’s new generation of AI models designed to take a more thoughtful approach to problem-solving. These models are trained to refine their thinking, explore strategies, and learn from mistakes. OpenAI reports that o1 has achieved impressive gains in reasoning, solving 83% of problems on a qualifying exam for the International Mathematics Olympiad (IMO)—compared to 13% for GPT-4o. The model also excels in coding, reaching the 89th percentile in Codeforces competitions. According to OpenAI, future updates in the series will perform on par with PhD students in subjects like physics, chemistry, and biology.
OpenAI’s Evolving AI Strategy
Since its inception, OpenAI has emphasized scaling models as the key to unlocking advanced AI capabilities. With GPT-1, which featured 117 million parameters, OpenAI pioneered the transition from smaller, task-specific models to expansive, general-purpose systems. Each subsequent model—GPT-2, GPT-3, and GPT-4, reportedly around 1.7 trillion parameters—demonstrated how increasing model size and data can lead to substantial improvements in performance.
However, recent developments indicate a significant shift in OpenAI’s strategy for developing AI. While the company continues to explore scalability, it is also pivoting towards smaller, more versatile models, as exemplified by GPT-4o mini. The introduction of the ‘longer thinking’ o1 further suggests a departure from exclusive reliance on neural networks’ pattern-recognition capabilities towards more sophisticated cognitive processing.
From Fast Reactions to Deep Thinking
OpenAI states that the o1 model is specifically designed to take more time to think before delivering a response. This feature of o1 seems to align with the principles of dual process theory, a well-established framework in cognitive science that distinguishes between two modes of thinking—fast and slow.
In this theory, System 1 represents fast, intuitive thinking that makes decisions automatically, much like recognizing a face or reacting to a sudden event. In contrast, System 2 is associated with the slow, deliberate thought used for solving complex problems and making considered decisions.
Historically, neural networks—the backbone of most AI models—have excelled at emulating System 1 thinking. They are quick, pattern-based, and excel at tasks that require fast, intuitive responses. However, they often fall short when deeper, logical reasoning is needed, a limitation that has fueled ongoing debate in the AI community: Can machines truly mimic the slower, more methodical processes of System 2?
Some AI scientists, such as Geoffrey Hinton, suggest that with enough advancement, neural networks could eventually exhibit more thoughtful, intelligent behavior on their own. Other scientists, like Gary Marcus, argue for a hybrid approach, combining neural networks with symbolic reasoning to balance fast, intuitive responses and more deliberate, analytical thought. This approach is already being tested in models like AlphaGeometry and AlphaGo, which utilize neural and symbolic reasoning to tackle complex mathematical problems and successfully play strategic games.
OpenAI’s o1 model reflects this growing interest in developing System 2 models, signaling a shift from purely pattern-based AI to more thoughtful, problem-solving machines capable of mimicking human cognitive depth.
Is OpenAI Adopting Google’s Neurosymbolic Strategy?
For years, Google has pursued this path, creating models like AlphaGeometry and AlphaGo to excel in complex reasoning tasks such as those in the International Mathematics Olympiad (IMO) and the strategy game Go. These models combine the intuitive pattern recognition of neural networks like large language models (LLMs) with the structured logic of symbolic reasoning engines. The result is a powerful combination where LLMs generate rapid, intuitive insights, while symbolic engines provide slower, more deliberate, and rational thought.
Google’s shift towards neurosymbolic systems was motivated by two significant challenges: the limited availability of large datasets for training neural networks in advanced reasoning and the need to blend intuition with rigorous logic to solve highly complex problems. While neural networks are exceptional at identifying patterns and offering possible solutions, they often fail to provide explanations or handle the logical depth required for advanced mathematics. Symbolic reasoning engines address this gap by giving structured, logical solutions—albeit with some trade-offs in speed and flexibility.
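This division of labor can be sketched in a few lines. The following toy code is my own illustration, not Google’s actual architecture: a fast “neural” proposer makes heuristic guesses at the integer roots of a polynomial, and a slow symbolic checker accepts only candidates that exactly satisfy it. All names and heuristics here are invented for illustration.

```python
def neural_proposer(coeffs):
    # Stand-in for the neural half: fast, heuristic guesses. Here we propose
    # the divisors of the constant term as candidate integer roots.
    c = abs(coeffs[-1]) or 1
    return [d for d in range(-c, c + 1) if d != 0 and c % abs(d) == 0]

def symbolic_checker(coeffs, x):
    # The symbolic half: exact polynomial evaluation (Horner's method).
    acc = 0
    for a in coeffs:
        acc = acc * x + a
    return acc == 0

def solve(coeffs):
    # Neural proposes, symbolic verifies: keep only the exact roots.
    return [x for x in neural_proposer(coeffs) if symbolic_checker(coeffs, x)]

# x^2 - 5x + 6 = (x - 2)(x - 3)
roots = solve([1, -5, 6])
```

The proposer alone would happily suggest wrong answers; the checker alone would have nothing to check. The combination is what makes the system both fast and rigorous.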
By combining these approaches, Google has successfully scaled its models, enabling AlphaGeometry and AlphaGo to compete at the highest level without human intervention and achieve remarkable feats, such as AlphaGeometry contributing to silver-medal-standard performance at the IMO and AlphaGo defeating world champions at Go. These successes suggest that OpenAI may adopt a similar neurosymbolic strategy, following Google’s lead in this evolving area of AI development.
o1 and the Next Frontier of AI
Although the exact workings of OpenAI’s o1 model remain undisclosed, one thing is clear: the company is heavily focusing on contextual adaptation. This means developing AI systems that can adjust their responses based on the complexity and specifics of each problem. Instead of being general-purpose solvers, these models could adapt their thinking strategies to better handle various applications, from research to everyday tasks.
One intriguing development could be the rise of self-reflective AI. Unlike traditional models that rely solely on existing data, o1’s emphasis on more thoughtful reasoning suggests that future AI might learn from its own experiences. Over time, this could lead to models that refine their problem-solving approaches, making them more adaptable and resilient.
OpenAI’s progress with o1 also hints at a shift in training methods. The model’s performance on complex tasks like the IMO qualifying exam suggests we may see more specialized, problem-focused training, with tailored datasets and training strategies that build deeper cognitive abilities in AI systems, allowing them to excel in both general and specialized fields.
The model’s standout performance in areas like mathematics and coding also raises exciting possibilities for education and research. We could see AI tutors that not only provide answers but also guide students through the reasoning process. AI might assist scientists in research by exploring new hypotheses, designing experiments, or even contributing to discoveries in fields like physics and chemistry.
The Bottom Line
OpenAI’s o1 series introduces a new generation of AI models crafted to address complex and challenging tasks. While many details about these models remain undisclosed, they reflect OpenAI’s shift towards deeper cognitive processing, moving beyond mere scaling of neural networks. As OpenAI continues to refine these models, we may enter a new phase in AI development where AI not only performs tasks but engages in thoughtful problem-solving, potentially transforming education, research, and beyond.
#ai#AI development#AI models#AI reasoning models#AI strategy#AI systems#alphageometry#applications#approach#Article#Artificial Intelligence#Behavior#Biology#chatGPT#ChatGPT-4o#chemistry#coding#cognitive abilities#Community#Competitions#complexity#data#datasets#details#development#Developments#direction#Discoveries#education#emphasis
0 notes
Text
PARTY IN THE OLDEST HOUSE GUUUYYYYYS
There it is, eight months in the making.
Given the size of this file and the amount of details, I've included more close-ups and a download link to a 2k file over here:
big thanks to @wankernumberniiiiiiiiine, she's the reason this painting exists 🥰
#control 2019#control game#control remedy#artists on tumblr#jesse faden#emily pope#frederick langston#simon arish#dylan faden#ahti the janitor#ahti#my art#control game fanart#that's every AI and OoP with an ingame model#so no cowboy boots or burroughs tractor#also no alan page bc blegh this is a lot already lmao#also the only reason there's no hiss is because I did not want to draw and paint all those HRAs alright
565 notes
Text
nooooo dude you dont get it when i draw upon my lifetime's worth of observations of other peoples art and then synthesize the patterns i noticed into an original yet derived work of art its beautiful but when a computer does it its copyright infringement
#i have yet to see a convincing reason that a diffusion models learning process should be viewed as fundamentally different from a humans#in fact. im going to put tags on this post so people see it and if you think you can convince me please by all means go ahead#ill give you like 20 bucks if you succeed#ai art#anti ai art
34 notes
Text
i think the big thing is that there just isn't any way to ethically create ai generated content, at least with the way training ai models currently works
for the sake of conciseness, lets just focus on the amount of labor required to produce the images text-to-image ai models are trained on
theoretically, you could hire a bunch of artists whose jobs are to create art to feed into an algorithm and train it. there's no wage theft here.
in theory, that could work.
reality is, these models are trained on so many images that there is no way to do this ethically.
for example, DALL-E, which is a text-to-image model developed by OpenAI, was trained on 250 million images
to pay the labor force that would be required to even produce that many images... ignoring the amount of time that would take even with thousands of artists, there's just literally no fucking way.
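for a rough sense of scale (every number below is a made-up assumption, not a real commission rate or headcount):

```python
images = 250_000_000             # DALL-E's reported training set size
rate_per_image = 50              # assumed flat commission in USD (made up)
artists = 10_000                 # assumed workforce size (made up)
images_per_artist_per_day = 5    # assumed output rate (made up)

total_cost = images * rate_per_image                    # 12.5 billion USD
days = images / (artists * images_per_artist_per_day)   # 5,000 days
years = days / 365                                      # roughly 13.7 years
```

even with generous assumptions you land on billions of dollars and over a decade of work, which is the point.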
this is precisely why these ai models, both text-to-image ai art generators similar to DALL-E and LLMs like ChatGPT, resort to scraping the internet for data to train their models on. they have no other option besides just... not making the model to begin with.
the only way to realistically create a good ai model meant to function like these two examples is to resort to unethical methods
and again, this is ignoring all the other ethical concerns with ai generated content! this is the reality of JUST training these models to begin with!
#this alone is reason enough for why these ai models shouldn't be created#people should be paid for their work#and you can't really fucking do that when these models take so much data to train#vixen.txt
11 notes
Text
Rethinking AI Research: The Paradigm Shift of OpenAI’s Model o1
The unveiling of OpenAI's model o1 marks a pivotal moment in the evolution of language models, showcasing unprecedented integration of reinforcement learning and Chain of Thought (CoT). This synergy enables the model to navigate complex problem-solving with human-like reasoning, generating intermediate steps towards solutions.
OpenAI's approach, inferred to leverage either a "guess and check" process or the more sophisticated "process rewards," epitomizes a paradigm shift in language processing. By incorporating a verifier—likely learned—to ensure solution accuracy, the model exemplifies a harmonious convergence of technologies. This integration addresses the longstanding challenge of intractable expectation computations in CoT models, potentially outperforming traditional ancestral sampling through enhanced rejection sampling and rollout techniques.
The evolution of baseline approaches, from ancestral sampling to integrated generator-verifier models, highlights the community's relentless pursuit of efficiency and accuracy. The speculated merge of generators and verifiers in OpenAI's model invites exploration into unified, high-performance architectures. However, elucidating the precise mechanisms behind OpenAI's model and experimental validations remain crucial, underscoring the need for collaborative, open-source endeavors.
A shift in research focus, from architectural innovations to optimizing test-time compute, underscores performance enhancement. Community-driven replication and development of large-scale, RL-based systems will foster a collaborative ecosystem. The evaluative paradigm will also shift, towards benchmarks assessing step-by-step solution provision for complex problems, redefining superhuman AI capabilities.
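The “guess and check” baseline mentioned above can be made concrete. The following toy code illustrates rejection sampling with a verifier, not OpenAI’s actual system: sample many chains of thought and return the first whose final answer the verifier accepts. The factoring task and all function names are invented for illustration.

```python
import random

def sample_cot(target, rng):
    # Pretend chain of thought: guess a factor pair for the target number.
    a, b = rng.randint(1, 12), rng.randint(1, 12)
    return {"steps": f"try {a} x {b}", "answer": a * b}

def verify(target, answer):
    # A learned verifier would score plausibility; this toy one checks exactly.
    return answer == target

def best_of_n(target, n=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(n):
        candidate = sample_cot(target, rng)
        if verify(target, candidate["answer"]):
            return candidate  # keep the first sample that passes the check
    return None  # every guess was rejected

result = best_of_n(36)
```

A “process rewards” variant would instead score each intermediate step, pruning bad chains early rather than checking only the final answer.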
Speculations on Test-Time Scaling (Sasha Rush, November 2024)
youtube
Friday, November 15, 2024
#ai research#language models#chain of thought#cognitive computing#artificial intelligence#machine learning#natural language processing#deep learning#technological innovation#computational linguistics#intelligent systems#human-computer interaction#cognitive architecture#ai development#i language understanding#problem-solving#reasoning#decision-making#emerging technologies#future of ai#talk#presentation#ai assisted writing#machine art#Youtube
2 notes
Text
brb, hooking an ai system trained to write scripts to an ai system trained to produce video from textual input and an ai system that generates descriptions of video content as a research exercise in identifying the most prevalent tropes and plot beats in modern cinema by (manually) cross-comparing discrete productions once content has stabilized at statistically significant similarity.
#I don't think it'd be easy but I do think it'd work.#Since modern 'ai' is basically programmatic pattern automation and amplification#It stands to reason that over sufficient repetitions videos made by this 'closed' system would standardize into a multimodal output model#Based around repeated trends extracted form initial input data#My hypothesis is that thru repetition trends would become refined + amplified to the point of being recognizably discrete and identifiable#My secondary hypothesis as that the system would generate video output unrecognizable as films by human standards#But *still based on ai interpretation of these trends/tropes*#Which is be fascinated to see the presentation of bc I want to know how one would have to engage with this wholly automated output#To re-interpret it into a human accessible media/analysis framework
9 notes
Text
Wait, so was the voice behind Naevis created based on different voice actors? And it's not an actual trainee's voice that's debuting?
#misc; ooc#//SM really should throw the whole AI bullshit in the garbage can wtf is this#//like I can understand the idea behind plave for example they're actual people but they're using kind of like those vtuber models#//I bet they've taken some inspiration from japanese singers too#//since I've noticed quite a lot of those artists keeping their faces hidden for privacy reasons and using animation and whatnot instead#//like I can understand someone wanting to pursue a singing career but also wanting to stay anonymous as much as possible#//so they can live their private life in peace#//but this AI artist thing is shit and I'm glad this sm project is flopping so bad sm needs a reality check
3 notes
Text
Man I feel like I could write a whole essay about Nope rn and I am so tempted!
#the way the alien acts as a perfect metaphor for the way media consumes and spits out people and things at a moments notice#the danger it causes and the reason why its safer to leep your head down (doubly thematic bc the main family involved in black)#the way it models social media#the death of it being that it consumed skmething too fake and large for it to handle#being a perfect foil to the rampant rise of ai and fake content#the dads death is so poignant in so many more ways than one and god i could go on forever about this movie#nope 2022#nope#jordan peele is a mf genius for this one ugh!!!
8 notes
Text
anti 'ai' people make 1 (one) well-informed, good-faith argument challenge(impossible)
#txt#obviously the vast majority of anti-'ai' people on here are just like.#'my internet sphere told me machine-generated text/images are bad'#and have not considered any cases where 'ai' is demonstrably materially useful#(e.g. tumor detection. drug discovery. early detection of MANY diseases. modeling the effects of epi/pandemic responses. modeling all sorts#of public health policy‚ actually. discovering new mechanisms of disease. that's just off the top of my head)#but now people are straight up saying that computers are NEVER better than humans at any tasks. and we should all just ~use our BRAINS!!!~#like. i have no words.#i mean i fucking guess i shouldn't expect these people to base their takes on actual facts or reason.#still pisses me the fuck off to know that there are people out there who are so dogmatic about this#editing to put this in my#‘ai’#tag
3 notes
Text
Sometimes I contemplate making smut fanart/content, but then I remember we live in a puritan tech dystopia and where the everloving fuck would I post it these days?
#kerytalk#I miss golden age tumblr#there's pillowfort but it's ... kinda dead#and the website formally known as twitter I am not touching for a multitude of reasons#a big one also being anything on there can be used for Muskrat's AI models now#idk what the fuck about bluesky but I don't have an invite for it anyway so that's moot#this brainworm brought to you by -#'I am sick of how so much fanart smut content of women feels gross and male-gaze-ey'#and 'I AM A 30 YEAR OLD AUSTRALIAN WOMAN STOP MAKING ME OBEY YOUR STUPID AMERICAN ANTI HOLE LAWS'#this has been a tag rant#nsft
11 notes
Text
Beyond Chain-of-Thought: How Thought Preference Optimization is Advancing LLMs
New Post has been published on https://thedigitalinsider.com/beyond-chain-of-thought-how-thought-preference-optimization-is-advancing-llms/
A groundbreaking new technique, developed by a team of researchers from Meta, UC Berkeley, and NYU, promises to enhance how AI systems approach general tasks. Known as “Thought Preference Optimization” (TPO), this method aims to make large language models (LLMs) more thoughtful and deliberate in their responses.
The collaborative effort behind TPO brings together expertise from some of the leading institutions in AI research.
The Mechanics of Thought Preference Optimization
At its core, TPO works by encouraging AI models to generate “thought steps” before producing a final answer. This process mimics human cognitive processes, where we often think through a problem or question before articulating our response.
The technique involves several key steps:
The model is prompted to generate thought steps before answering a query.
Multiple outputs are created, each with its own set of thought steps and final answer.
An evaluator model assesses only the final answers, not the thought steps themselves.
The model is then trained through preference optimization based on these evaluations.
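The steps above can be sketched as a toy training loop. Everything here is a stand-in (`generate`, `judge`, and the preference-pair construction are placeholders for a real LLM, judge model, and DPO-style optimizer), but it shows the key point: the judge scores only the final answers, while the preference pair keeps the full thought-plus-answer samples.

```python
import random

def generate(prompt, n=4):
    # Stand-in for the LLM: each sample is (thought_steps, final_answer).
    return [(f"thought-{i} about {prompt}", f"answer-{i}") for i in range(n)]

def judge(answer):
    # Evaluator model scores ONLY the final answer, never the thoughts.
    return random.random()

def tpo_step(prompt):
    samples = generate(prompt)
    scored = sorted(samples, key=lambda s: judge(s[1]), reverse=True)
    chosen, rejected = scored[0], scored[-1]
    # A DPO-style optimizer would now push the model towards `chosen`
    # (thought steps included) and away from `rejected`.
    return chosen, rejected

chosen, rejected = tpo_step("What is 2 + 2?")
```

Because the thoughts are never graded directly, the model is free to develop whatever style of intermediate reasoning happens to produce better final answers.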
This approach differs significantly from previous techniques, such as Chain-of-Thought (CoT) prompting. While CoT has been primarily used for math and logic tasks, TPO is designed to have broader utility across various types of queries and instructions. Furthermore, TPO doesn’t require explicit supervision of the thought process, allowing the model to develop its own effective thinking strategies.
Another key difference is that TPO overcomes the challenge of limited training data containing human thought processes. By focusing the evaluation on the final output rather than the intermediate steps, TPO allows for more flexible and diverse thinking patterns to emerge.
Experimental Setup and Results
To test the effectiveness of TPO, the researchers conducted experiments using two prominent benchmarks in the field of AI language models: AlpacaEval and Arena-Hard. These benchmarks are designed to evaluate the general instruction-following capabilities of AI models across a wide range of tasks.
The experiments used Llama-3-8B-Instruct as a seed model, with different judge models employed for evaluation. This setup allowed the researchers to compare the performance of TPO against baseline models and assess its impact on various types of tasks.
The results of these experiments were promising, showing improvements in several categories:
Reasoning and problem-solving: As expected, TPO showed gains in tasks requiring logical thinking and analysis.
General knowledge: Interestingly, the technique also improved performance on queries related to broad, factual information.
Marketing: Perhaps surprisingly, TPO demonstrated enhanced capabilities in tasks related to marketing and sales.
Creative tasks: The researchers noted potential benefits in areas such as creative writing, suggesting that “thinking” can aid in planning and structuring creative outputs.
These improvements were not limited to traditionally reasoning-heavy tasks, indicating that TPO has the potential to enhance AI performance across a broad spectrum of applications. The win rates on AlpacaEval and Arena-Hard benchmarks showed significant improvements over baseline models, with TPO achieving competitive results even when compared to much larger language models.
However, it’s important to note that the current implementation of TPO showed some limitations, particularly in mathematical tasks. The researchers observed that performance on math problems actually declined compared to the baseline model, suggesting that further refinement may be necessary to address specific domains.
Implications for AI Development
The success of TPO in improving performance across various categories opens up exciting possibilities for AI applications. Beyond traditional reasoning and problem-solving tasks, this technique could enhance AI capabilities in creative writing, language translation, and content generation. By allowing AI to “think” through complex processes before generating output, we could see more nuanced and context-aware results in these fields.
In customer service, TPO could lead to more thoughtful and comprehensive responses from chatbots and virtual assistants, potentially improving user satisfaction and reducing the need for human intervention. Additionally, in the realm of data analysis, this approach might enable AI to consider multiple perspectives and potential correlations before drawing conclusions from complex datasets, leading to more insightful and reliable analyses.
Despite its promising results, TPO faces several challenges in its current form. The observed decline in math-related tasks suggests that the technique may not be universally beneficial across all domains. This limitation highlights the need for domain-specific refinements to the TPO approach.
Another significant challenge is the potential increase in computational overhead. The process of generating and evaluating multiple thought paths could potentially increase processing time and resource requirements, which may limit TPO’s applicability in scenarios where rapid responses are crucial.
Furthermore, the current study focused on a specific model size, raising questions about how well TPO will scale to larger or smaller language models. There’s also the risk of “overthinking” – excessive “thinking” could lead to convoluted or overly complex responses for simple tasks.
Balancing the depth of thought with the complexity of the task at hand will be a key area for future research and development.
Future Directions
One key area for future research is developing methods to control the length and depth of the AI’s thought processes. This could involve dynamic adjustment, allowing the model to adapt its thinking depth based on the complexity of the task at hand. Researchers might also explore user-defined parameters, enabling users to specify the desired level of thinking for different applications.
Efficiency optimization will be crucial in this area. Developing algorithms to find the sweet spot between thorough consideration and rapid response times could significantly enhance the practical applicability of TPO across various domains and use cases.
As AI models continue to grow in size and capability, exploring how TPO scales with model size will be crucial. Future research directions may include:
Testing TPO on state-of-the-art large language models to assess its impact on more advanced AI systems
Investigating whether larger models require different approaches to thought generation and evaluation
Exploring the potential for TPO to bridge the performance gap between smaller and larger models, potentially making more efficient use of computational resources
This research could lead to more sophisticated AI systems that can handle increasingly complex tasks while maintaining efficiency and accuracy.
The Bottom Line
Thought Preference Optimization represents a significant step forward in enhancing the capabilities of large language models. By encouraging AI systems to “think before they speak,” TPO has demonstrated improvements across a wide range of tasks, potentially revolutionizing how we approach AI development.
As research in this area continues, we can expect to see further refinements to the technique, addressing current limitations and expanding its applications. The future of AI may well involve systems that not only process information but also engage in more human-like cognitive processes, leading to more nuanced, context-aware, and ultimately more useful artificial intelligence.
#ai#AI development#AI models#AI research#AI systems#Algorithms#analyses#Analysis#applications#approach#arena#Art#artificial#Artificial Intelligence#benchmarks#bridge#chain of thought reasoning#challenge#chatbots#collaborative#complexity#comprehensive#content#customer service#data#data analysis#datasets#development#domains#efficiency
3 notes
Photo
Blorbos. Apparently.
#mythyk art#sketch#fnaf sb#fnaf moon#puss in boots: the last wish#death puss in boots#rvb locus#rvb#art#fanart#crossover of the year#random thoughts based on drawing this#locus wouldn't kill moon because ai is sentient being#death and locus know each other#for reasons#ramble in the tags#drawing#can you tell when i gave up on the halo armour#whoever modelled it paid with their sanity#or maybe that's just me
21 notes
Text
I'll say it: "Oh all AI artists do is write a stupid description and immediately get an image with no effort, there's no art in that" is the new "Digital painting doesn't count as art because it takes no effort"
#Look I'm aware there're moral reasons to criticize AI art such as how corporations will use it#and the fact lots of models (not all however) use stolen content#But all you have to do is visit a forum dedicated to AI art to quickly realize it actually takes some effort to make quality images#And honestly from what I've seen those guys are often very respectful of traditional artists if not traditional artists themselves#Not a single bit of 'haha those idiots are working hard when they could simply use AI!' that Tumblr likes to strawman them as#Lots of 'So I did the base with AI and then painted over it manually in Photoshop' and 'I trained this model myself with my own drawings'#And I'm not saying there aren't some guys that are being assholes over it on Twitter#But when you go to an actual community dedicated to it. Honestly these guys are rather nice#I've seen some truly astounding projects#like there was this guy that was using people's scars to create maps of forests and mointains to sort of explore the theme of healing#And this one that took videos of his city and overlayed them with some solarpunk kind of thing#And this one that was doing a collection of dreams that was half AI amd half traditional painting#Anyway the point is you guys are being way too mean to a group of people that genuinely want to use the technology to create cool art#And while I'm aware there are issues related to its use#it's actually really fucked up you're attacking the individual artists instead of corporations???#It's as if you were attacking the chocolate guy over the systemic problems related to the chocolate industry!#And also tumblrs always like 'Oh AI is disgusting I hate AI art so I'll just hate in it without dealing with the issue'#While AI art forums often have posts with people discussing how go use it ethically when applied to commercial use!!#Honestly these guys are doing way more about tackling the issue than tumblr and you should feel bad!!!
15 notes
Text
Tbh i don't know what to think of AI art anymore. I don't find any utility, personally, in centring the discussion on law and copyright; there are far more interesting things to discuss on the topic beyond its use as a replacement for human artists/workforce by the upper class
#rambling#i am not saying i think using AI image generation to replace human artists and leave them jobless is a good thing - i do think that is bad#there are real concern on the ethics of its use and creation of image generation models#but i think focusing only on things like how ''off'' or ''inhuman'' it looks or how ''soulless'' it is are not only surface level complaint#but also call to question again the age old debate of what is art and what isn't and why some art is and why some isn't#and also the regard of painting and other forms of visual art production as somehow above photography in the general conscience#i would love to really talk about these things with people but talking about ai art and image generation is a gamble between talking to#an insufferable techbro who only sees profits and an artist who shuts the whole idea off without nuisance#i have seen wonderful projects by human artists using ai image generation software in creative ways for example#are those projects not art? if they are are they only art because they were made by someone already regarded as an artist?#there are also cool ai-generated images by random people who don't regard themselves as artists. are they art? why or why not?#the way AI image generation works - using vast arrays of image samples to create a new image with - has been cited#as a reason why ai-generated images aren't ''real art''. but is that not just a computer-generated collage? is it not real because it was#made by an algorithm?#if i - a human artist - get a bunch of old magazines and show them to an algorithm to generate new things from them#or to suggest ways in which new things could be made#and then i took those suggestions and cut the magazines and made the collage by hand. is that still art? did it at some point become art#or cease to be art?#i think these things are far more intriguing and important to get to the root of ethical AI usage in the 21st century than focusing on laws
7 notes
Text
I was working on something for Clextober and here are some Alycia AI generated images that I didn't use.
#ai generated#i am not artistic#she is easier to have the ai make images for than eliza#for some reason i can't get it to make decent Clarke pics#fencing lexa#cowgirl/model lexa#halloween party lexa
3 notes
Text
You know, you have the Whorf hypothesis, which talks about how language might affect how we think
I believe one of the things he (or someone else saying similar things) brought up was the idea that:
If we for instance have barrels which used to contain a toxic chemical that's now empty, but the barrel is still dangerous, does lacking a word for "empty but dangerous" influence how we think about or treat this barrel? Would someone be less cautious around it for instance because "empty" implies to an extent that the barrel is back to how it was before it was filled?
Anyway, this is just me establishing a concept here
My thought here is if poorly fitting words may disproportionately warp people's understanding of concepts
I wonder if by using phrases like "artificial intelligence" we don't meaningfully skew perception of "ai" programs towards a thinking program, even among people who have some understanding of how it works (basically rapidly running a number of calculations until it gets an answer it thinks will be good, it's similar to those "having a simulated bird learn to walk" things you'll see, just very fast)
How much do we end up having certain terms basically become poison pills because of how ubiquitous they've become while being almost totally wrong
I'm not even really talking about things like reasonable terms used wrong, like people saying "gaslighting" when they mean "lying"
It really is specifically with terms like "ai" where... well... where I'm afraid we may have done irrevocable damage to public understanding of something, and where... I don't know that there's a way to ever fix it and shift the language used
Just something I'm thinking about tonight
#though I'm not actually thinking about ai; I'm thinking about another term that... what I have to say isn't that spicy#but I do kind of worry it would be a little too spicy for people who've really latched onto the word#even though... I literally just want to help; I literally think that term is a poison pill to the people who use it more than anyone else#and I think I have at least a candidate replacement for it in the same way I have something like 'deep modeling' to replace 'ai'#but... I don't think... I don't think I know of anyway how I could get that change to happen#even if like I... presented these thoughts to the greatest minds and everyone agreed on a new better term... could we spread it?#just drives me nuts with ai for obvious reasons#and with this term because whenever someone actually explains what the hell they mean... it's not at all what the word they use means#and a shift in words to one that... actually explains it... I mean I think it might massively make people more receptive#don't use something that's both very charged and also... kind of just the wrong word#use a word that's accurate and you can probably bring most people around on quickly#...well... whatever... I'll sprinkle these thoughts in people's ears from time to time#and hopefully it slowly takes root in enough people to have at least some small impact#in other news it's not like I remember the name of that hypothesis#I just decided that a couple minutes search could track me down a name; make me sound knowledgeable; all while being more accurate
2 notes