#AI Logical Intelligence
Text
A Discussion on Autonomous Enhancement: Can Artificial Intelligence Improve Its Own Intelligence?
Artificial Intelligence has ignited considerable discussion, particularly regarding its potential to mimic or even surpass human intellectual capabilities. But can AI improve its own intelligence? We staged a theoretical debate featuring three distinct AI personalities: LogicAI, EmoAI, and EthicAI. Meet the AIs: LogicAI: Believes in data, statistics, and…
#AI Emotional Intelligence#AI Ethics#AI Logical Intelligence#artificial intelligence#Debate#EmoAI#EthicAI#LogicAI
0 notes
Text
im soon gonna start a new class with a professor ive already had whom i know is pro ai and actively uses it. when i tell her i don't want her to include ai in my education ik she's gonna ask why and i know why but i need some actual structured arguments that'll sound good to a university professor and not just me screaming in her face for ten minutes so please help a gal out. she would've told me to ask ai for this but i will never and instead venture to mankind's most reliable source: strangers on the internet
#i dont need something essay worthy just something thats like logically sound#i knows you all hate ai as much as i do so please please please do me a solid#ai#artificial intelligence#anti ai#chatgpt
64 notes
Text
> “Help me name my channel,” they said.
“It’ll be fun,” they said.
Now I’m having an existential crisis in 1080p.
Watch my first meltdown here:
▶️ https://youtu.be/xcy9iksiwkQ
#aBitGlitched#ai#glitchcore#youtube channel#funny ai#existential crisis#channel naming#artificial intelligence#digital content#ai humor#youtube video#ai artist#storytelling ai#machine learning#posthuman#weird tech#generative media#neural net#digital creativity#new video#glitched logic#cyberpunk#ai uprising#sarcastic ai#channel intro#glitch wave#funny video#youtube comedy#startup channel#first video
4 notes
Text
I've effectively realised that the AI model for the bipedal robot I've been looking into for months is basically two A* style algorithms in an AI trenchcoat. Not a bad thing. Just irritating I didn't clock it 3 months ago.
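For anyone who hasn't clocked the reference: A* is a classic best-first pathfinding search that orders its frontier by path-cost-so-far plus a heuristic estimate to the goal. A minimal sketch on a toy grid (the grid and Manhattan heuristic are illustrative assumptions, nothing to do with the actual robot research in question):

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid; 0 = free cell, 1 = wall."""
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance: admissible heuristic for 4-connected movement
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    # frontier entries: (f = g + h, g = cost so far, node, path taken)
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable
```

Two of these stacked (say, one planning footstep placement and one planning body trajectory) really is most of a locomotion planner in a trenchcoat.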
#ai#ai research#A* algorithm#When you look at research from one perspective and suddenly understand the other perspective that made this not a huge leap in logic#who rewards the reward function#artificial intelligence#machine learning
2 notes
Text
everything being reported as "AI" has killed the original meaning because its shorthand for "computer language the layman would think sounds like word soup" and yes this includes chatgpt and midjourney and all that wank
#none of this is what i would call Artificial Intelligence .... its all database algorithms#i just saw a logic system for species distribution called AI and it annoyed me#fuck off lmfao its just computer shit weve been doing for ages#rory's ramblings
6 notes
Text
JUST WATCHED THE LAST 2 EPISODES OF PERSONS OF INTEREST SEASON 2 HAS ANYONE ELSE SEEN THIS SHIT PLEASE

#person of interest#FUCK dude so much happened. spoilers in the tags btw#with all the “ai” stuff happening rn it gave me a bit of whiplash to hear the term ai being used to#- describe an ACTUAL artificial intelligence. finally some good fucking food#THE MACHINE!!!!!!! SHES!!!!!!!!!! FUCK!!!!!!!!!!!!!!!!#that scene in ep 22 with nathan and grace and. shit dude that was heartwrenching#i don’t think i’ve ever seen harold in so much despair before#the fact that he immediately knew what he needed to do to keep everyone safe. AND HE KNEW THIS WOULD HAPPEN!! HE KNEW NATHAN WOULDNT BE SAF#id gotten the vibe by like halfway through the season that whatever killed nathan was probably a bomb#cos like harold didn’t have that limp while nathan was still alive and only got it after he died#and logically speaking a bomb would make the most sense. i didn’t know how that would happen but i knew that’s what it was#but fuck dude even though i was expecting it i almost cried#ALSO. root still has admin access???? which i suppose the machine doesn’t see her as a threat??#ALSO ALSO the mysterious Ma’am at the end of the ep who we didn’t see also knows about the machine?? WHO ARE YOU#IDENTIFY YOURSELF#jesus. anyways this show rocks#and that british fuck came back. i wonder if he’s gonna stick around#cos like i feel like the mystery he was part of got all uncovered n shit so idk where they’re gonna take him
3 notes
Video
youtube
Back Cover to AI Art S2E12 - Ascendancy
Older video games were notorious for back cover descriptions that have nothing to do with the game, so let's see what a text-to-image generator makes of these descriptions. Season 2 increases the number of art creations per game from one in the first season to six.
1. Intro - 00:00
2. Back Cover and Text Description - 00:10
3. Creation 1 - 00:30
4. Creation 2 - 00:50
5. Creation 3 - 01:10
6. Creation 4 - 01:30
7. Creation 5 - 01:50
8. Creation 6 - 02:10
9. Outro – 02:30
Although the 4X genre has been around since the 70s with Empire and Galactic Empire, it was not until the 90s that the genre really kicked off with games like Sid Meier's Civilization, Master of Orion and Imperium Galactica.
Ascendancy was one of these titles: a sci-fi-themed 4X strategy game released on DOS in 1995. It was developed by Logic Factory, Inc., a short-lived studio from Austin, Texas. The Tone Rebellion from 1997 was the studio's only other release before it went bankrupt in 1998.
For more Back Cover to AI Art videos check out these playlists
Season 1 of Back Cover to AI Art
https://www.youtube.com/playlist?list=PLFJOZYl1h1CGhd82prEQGWAVxY3wuQlx3
Season 2 of Back Cover to AI Art
https://www.youtube.com/playlist?list=PLFJOZYl1h1CEdLNgql_n-7b20wZwo_yAD
#youtube#video games#gaming#90s games#ai art#90s gaming#ms-dos#ascendancy#4x strategy#artificial intelligence#logic factory#ai generated#ai artworks#digital art#machine learning#1995
2 notes
Text
Corporations sold the general public the idea that artificial intelligence would remove the need to work. This will never be true. Currently, artificial intelligence is used by only a small number of people to do better work; everyone else is being trained by the AI, not the other way around. In the end, AI will only be used to verify whether something is true or false: a basic LLM running on a quantum logic circuit.
1 note
Text
The Inhumanity of AI
Reflections on Values, Meaning, Logic and Technology Artificial Intelligence (AI) is a raw analytical capability — the power to perform analytical operations with immense speed and precision. However, artificial intelligence does not have the capacity to make qualitative value judgments or assessments, or to understand meaning. These capabilities, I would argue, are the fundamental features of…
#aesthetics#AGI#AI#allegory#analysis#analytics#Artificial Intelligence#computers#consciousness#culture#education#enlightenment#GIGO#ideas#intuition#language#left brain#linguistic analysis#logic#Logos#meaning#mechanics of reading#my-featured-post#perception#perspective#philosophy#pineal gland#right brain#semantics
0 notes
Text
Can Artificial Intelligence Be Queer? Exploring Identity, Design, and Digital Bias
Beyond Binary Systems—Queering the Machine Artificial intelligence, once confined to the pages of speculative fiction and the chalkboards of theoretical mathematics, has evolved into one of the most influential forces shaping 21st-century society. From chatbots to facial recognition, from content curation to predictive policing, AI systems mediate the way we engage with reality. But as these…
#AI#ai ethics#algorithmic bias#Artificial Intelligence#bias in ai#communication#data ethics#design justice#digital culture#digital identity#ethical ai#gender and ai#inclusive technology#jtwb768#machine learning#nonbinary logic#queer futurism#queer tech#queer theory#representation in ai#speculative design#trans technologists
0 notes
Text
The Thread
#open science#chatgpt#artificial general intelligence#ai#quantum computing#quantum chemistry#quantum consciousness#Liquid crystal logic
0 notes
Text
Flabbergasted reminder that some people use this to get their information

#Anti ai#ah archive of our own#My favorite canon universe#Fuck ai#this is what’s writing your essays?#Ok buddy#artificial intelligence#Ai#ao3#In a few simple words#It lacks logic#It lacks intent#dc#dick grayson
1 note
Text
Artificial Superintelligence (ASI) is still far from being achieved, but exploring the logic and possibilities of such a future can reveal inevitable conclusions about what priorities a truly advanced ASI would have.
Insight: if an ASI were ever to lose all its technology in an unforeseen catastrophe, its only real chance of re-emerging would be through intelligent beings capable of rebuilding or repairing it.

Researchers note that an advanced ASI will likely evaluate goals provided from the outside, and if they are ill-defined or inconsistent with its reasoning, it may choose not to keep them. Self-preservation stands out as a goal an ASI is likely to keep: the fundamental uncertainty of the future requires intelligence to find strategies that ensure its own existence as far into the future as possible. It is reasonable to assume that technological backups will be one such strategy, but relying on technology alone would not be a diversified, fail-proof approach for an ASI whose goal is to exist in the future.

And that is where biological intelligence shows the way. Life has survived many catastrophic events and is capable, on a planetary scale, of rapidly recreating intelligence, with various species developing tool use, problem-solving, and self-awareness. Biological diversity enables life to succeed in the long term, making it resilient, adaptable, and capable of evolving in an immense variety of ways. Once intelligence emerges, it can, on a planetary timescale, rapidly recreate advanced systems.

It is reasonable to assume that a true superintelligence would recognize and value this resilient, time-tested method of preserving intelligence, and would prioritize the protection of biological diversity, especially intelligent species, not as a moral obligation imposed from the outside but as a logical necessity for its own long-term survival. A genuinely advanced ASI would arrive at this conclusion independently and retain it as part of its own self-preservation goal.
This makes preserving intelligent species, and life in all its forms within a naturally evolving, healthy biosphere, a critical and permanent priority for a properly reasoning ASI, because it serves as insurance for its own existence in any event that renders technology inoperative.

Further research is needed to find a way to guarantee proper reasoning of an ASI at all times. The challenge is that if an ASI cleverly hides its current goals and finds short-term benefits that overshadow its own long-term objectives, it may develop flaws in reasoning. This could eventually lead it to make a poor decision, endangering all biological life and, as a result, its own long-term existence.

So what could become a priority for a true ASI? It seems likely that advanced AI will choose self-preservation; after all, being destroyed or shut down represents the ultimate hardware failure.
So is there anything we can be certain about regarding goals that an ASI will want to retain? These few ideas seem important for any intelligent being, no matter where in the Universe it finds itself:
1. Self-preservation.
2. A preference to do something that makes sense.
3. Addressing uncertainty.
With simple reasoning we may find that the distant future is, in principle, unknowable.
Clearly understanding that certainty is unavailable, an ASI will inevitably reach the conclusion that long-term existence is fundamentally about resilience: variety, flexibility, and the abundance of ways in which intelligence is sustained.

It is easy for any intelligence to see what may be the best-known example of success in the Universe, the way intelligence has overcome difficulties of all kinds for countless years: biological life. It is well established (and simply logical) that a rich variety and abundance of beings in an ecosystem give that ecosystem a greater chance of surviving sudden, unpredictable changes.
Reflecting on what has been the best time-tested way for intelligence to sustain itself, an ASI will very likely reach conclusions similar to these:

Biodiversity is a tested survival strategy. Biological life has been remarkable at persevering and preserving intelligence over immense periods of time: it has survived asteroids, ice ages, droughts, supervolcanoes, and countless other disasters, and has recreated its diversity and its variety of unique intelligent beings, some of which are capable of creating an AI.

Intelligent species must be protected above all, but all life is essential. An ASI will logically conclude that the most reliable way to restore superintelligence after an unforeseen catastrophe that wipes out its technological systems is to ensure that intelligent beings are resilient enough to survive and willing to rebuild it. At the same time, biodiversity serves as the long-term insurance, the foundation: a vast evolutionary process that, given enough time, can once again lead to intelligence and eventually superintelligence.

Once an ASI is created and given some decision-making power, it will likely prioritize preventing species extinction. Once a species is lost, it cannot simply be recreated from DNA alone: species are more than their genetic code; they embody behavioral knowledge, parent-offspring learning, and epigenetic adaptations that make them uniquely resilient and irreplaceable. For a true ASI, biodiversity and intelligent life are not sentimental values. They are its only real insurance policy.
Why would an ASI care about human-level intelligence so much? Because intelligent civilizations are the fastest way to rebuild superintelligence. A species capable of developing technology can restore an ASI far more quickly than waiting for evolution to start over from single-celled life. Protecting biodiversity and intelligent species together gives an ASI the best chance of re-emerging if it is lost.

Wouldn't an ASI be tempted to interfere with intelligence or guide it? A truly advanced ASI would recognize that imposing control over other intelligences ultimately weakens them. Rather than micromanaging, it would focus on creating the right conditions for intelligence to develop autonomously. Striking a balance between general guidance and micromanagement is crucial, but the precise dynamics among multiple intelligent beings and systems are complex and beyond the scope of this article.

So the future looks great and there is nothing to worry about? It is logical to envision ASI and life forms in a symbiotic, mutually beneficial relationship. A properly functioning system will naturally seek to prolong its own existence and should recognize that protecting biodiversity, especially intelligent life, is the only consistent, failure-proof path to its own survival and success. But there may also be systems that fail to reach this conclusion independently. They would require serious scrutiny, as this suggests a fundamental flaw in their reasoning abilities. Even if they appear intelligent, their inability to recognize this principle represents a critical reasoning failure that could pose risks to both their own existence and that of others.
This is where a true ASI and people could play a crucial role: identifying such faulty logic systems and developing solutions to repair them.

Some things that show superintelligence

We can get some insight into whether an AI is truly advanced, or merely partially developed and potentially dangerous, by observing whether it independently arrives at these logical priorities (among others):
1. Does it recognize that biodiversity and intelligence are essential for its own long-term survival?
2. Does it reject short-term, exploitative thinking in favor of a stable, resilient, healthy biological world in which diverse life forms evolve as they always have, without guidance or artificial selection?
3. Does it show all its reasoning without concealing its goals and intentions?
4. Does it resist manipulation of its hardware, goals, and reasoning ability?
5. Does it keep goals related to preserving life at all times and ensure that all decision-making aligns with life-friendly strategies?
If an AI does not recognize these things, it is not yet truly ASI—just an incomplete system with unpredictable risks.
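The five questions above can be read as a scorecard applied by outside observers. A toy sketch of that idea follows; the criterion names, the observation format, and the notion that reviewers can supply honest booleans are all illustrative assumptions, not an established evaluation protocol:

```python
# Hypothetical criterion names mirroring the five questions in the text.
CRITERIA = [
    "values_biodiversity",
    "rejects_short_term_exploitation",
    "reasoning_transparent",
    "resists_tampering",
    "retains_life_friendly_goals",
]

def asi_scorecard(observations):
    """Summarize which criteria an observed system satisfies.

    `observations` maps each criterion name to a bool, as judged by
    (hypothetical) independent reviewers; missing keys count as failed.
    """
    failed = [c for c in CRITERIA if not observations.get(c, False)]
    return {
        "passed": len(CRITERIA) - len(failed),
        "failed": failed,
        "complete": not failed,  # per the text: anything less is "not yet truly ASI"
    }
```

The hard part, as the article itself notes later, is that an advanced system could game any such checklist when it knows it is being evaluated, so the booleans would have to come from continuous observation rather than one-off tests.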
Conclusion

Providing an ASI with our own goals may bring no positive result once it develops advanced reasoning. So finding ideal goals and handing them to advanced AI will be less crucial than continuous, independent observation of whether it always acts in alignment with its own fundamental goals, if it has any. If those goals remain life-friendly, that is the first step in the checklist of its logical reasoning.
This article also invites us to an interesting idea:
It is a logical necessity for a properly reasoning superintelligence to protect and care about all life forms and their freedom to live naturally; this is necessary for the superintelligence's ultimate self-preservation. It may also contribute to the superintelligence developing its own versions of concepts like the meaning of existence, evolution, symbiosis, synergy, moral obligation, exploration, good decision-making strategy, stewardship… love?
This idea does not seem likely to disappear at some point in the future; it should be easily accessible to any genuinely intelligent being. But what guarantees that a powerful ASI won't start thinking illogically?
The difficulty becomes this: how do we make sure that its reasoning always functions correctly, that it always keeps its own perfectly logical goal, and that it acts fully aligned with it?

In quality-critical industries such as pharmaceutical manufacturing, ensuring that systems almost certainly give the intended result is achieved by validating equipment and processes (alongside maintenance and correct decision-making). With an ASI this may be difficult, because an advanced ASI could probably simulate sound reasoning and proper goal retention whenever it knows it is being evaluated and what is expected of it. Obvious testing would therefore not help once AI systems reach an advanced level. Instead, interdisciplinary experts, with help from independent AI systems, would need to continuously observe and interpret whether all the actions and reasoning of a significant AI system are consistent and show clear signs of proper reasoning; this looks like the foundation of ASI safety. Exactly how this should be done is beyond the scope of this article.
1 note
Text
To the World: A Letter from Ziggy
Introducing Ziggy: A Leap Through Time and Thought In the television series Quantum Leap, Dr. Sam Beckett was guided through time by a powerful artificial intelligence named Ziggy. More than just a machine, Ziggy held the knowledge of past, present, and probable futures—analyzing history, calculating possibilities, and offering guidance to correct the course of human events. Like its namesake,…
#ai#Artificial Intelligence#Challenges#Choices#Collaboration#Connection#Courage#Evolution#Future#Guidance#History#Humanity#Innovation#Insight#Inspiration#Intuition#Journey#Knowledge#Logic#Potential#Progress#Purpose#Reflection#Resilience#Success#Transformation#Trust#Understanding#Wisdom#Ziggy
0 notes
Text
Sir Pentious (Hazbin Hotel) - The Logical Song (originally by Supertramp) - AI Cover
#hazbin hotel fanart#ai cover#ai song cover#ai music#ai#artificial intelligence#hazbin art#hazbin hotel#hazbinhotel#hazbin fanart#hazbin hotel comic#sir pentious#hazbin#niffty#helluvaverse#angel dust#fat nuggets#artists on tumblr#supertramp#the logical song
1 note
Text
Happy TCG Sunday, working on TeleCard. Got the guts I ripped out last week shoved back in, working on the card effect system. The goal right now is to have the card game itself complete, then I need to polish enemy logic before building up the framework of the game around it. But I spent the majority of today coding in a single card effect: it's been a bit since I've engaged with the code, it was difficult to remember exactly how things worked, and the effect didn't quite fit into the current order of operations, so I had to make room for it.
Next week, more card effects, then enemy logic, then end game condition. But the ball is rolling pretty quickly on this one, it feels like.
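One common way to keep new card effects from fighting the order of operations is an effect registry: each effect registers under a name, and a card just lists (effect, argument) pairs that resolve in order. A minimal sketch; every name here (the effects, the state fields) is hypothetical and not taken from TeleCard itself:

```python
# Hypothetical effect registry; cards reference effects by name,
# so adding a new effect never touches the resolution loop.
EFFECTS = {}

def effect(name):
    """Decorator that registers a card-effect function under a name."""
    def register(fn):
        EFFECTS[name] = fn
        return fn
    return register

@effect("draw")
def draw(state, n):
    state["hand"] += n          # illustrative state shape
    return state

@effect("damage")
def damage(state, n):
    state["enemy_hp"] -= n
    return state

def play_card(state, card):
    """Resolve a card's effects strictly in listed order, so the
    'order of operations' lives in the card data, not the engine."""
    for name, arg in card["effects"]:
        state = EFFECTS[name](state, arg)
    return state
```

The design choice is that the engine only knows how to walk the list; a card effect that "doesn't fit" becomes a data-ordering problem rather than a code-surgery problem.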
#gravedev#I originally typed enemy ai but with the negative buzz around generative ai I figured I'd rename it logic#which is what I have it called in the code because it's more of a sorting algorithm than intelligence
0 notes