#AI flowchart generator
Explore tagged Tumblr posts
futuretiative · 14 days ago
Text
Napkin.ai : Transforming Text into Visuals | futuretiative | Napkin AI
Stop wasting time drawing diagrams! Napkin.ai automates the process, turning your text into professional flowcharts in seconds. See how it can simplify your workflow. #Efficiency #AItools #NapkinAI #ProjectManagement #ProjectManagers #WorkflowOptimization #BusinessTools #ProcessMapping #Agile #Scrum #AItechnology #ArtificialIntelligence #FutureOfWork #TechInnovation #MindBlown #AIArt #DigitalTools #Efficiency #Workflow #ProductivityHacks #AItools #Diagramming #SaveTime #Automation #TechTips
Napkin.ai is a tool that focuses on transforming text into visual representations, primarily flowcharts and diagrams. Here's a summary of its key aspects:
Key Features and Strengths:
Text-to-Visual Conversion:
Its core functionality is the ability to generate flowcharts and other visuals from textual input. This can save users significant time and effort.
It handles various text inputs, from simple lists to detailed descriptions.
User-Friendly Interface:
Users generally find the interface intuitive and easy to use, minimizing the learning curve.
Customization Options:
Napkin.ai offers customization features, allowing users to adjust the appearance of their visuals with colors, styles, and layouts.
Efficiency and Speed:
The tool is praised for its quick processing times, efficiently converting text into visuals.
Collaboration features:
The ability to collaborate on visuals, with commenting and real-time editing, is a very strong feature.
Limitations and Considerations:
Language Limitations:
Currently, the tool performs best with English text.
Accuracy:
Like all AI tools, it can have some accuracy issues, and it is important to review the generated visual.
Feature limitations:
Some users have stated that it is really a text-to-template converter, and that it can struggle with more abstract requests.
Development Stage:
As with many AI tools, it is in constant development, so features and abilities are likely to change.
Overall:
Napkin.ai appears to be a valuable tool for individuals and teams who need to create flowcharts and diagrams quickly.
Its ability to automate the conversion of text to visuals can significantly improve productivity.
It is important to remember that it is an AI tool, and that reviewing the output is always important.
In essence, Napkin.ai is a promising tool for simplifying data visualization, particularly for those who need to quickly create flowcharts and diagrams from text.
Visit the napkin.ai website to learn more
Don't forget to like, comment, and subscribe for more AI content!
Napkin.ai, AI flowchart generator, text to flowchart, AI diagram generator, text to diagram, AI visualization tool, automated diagram creation, AI mind map generator, easy flowchart creation, fast diagram creation, productivity tools, workflow optimization, AI tools for business, diagramming software, online flowchart maker, visual communication tools, Napkin.ai review, Napkin.ai tutorial, how to use Napkin.ai, Napkin.ai demo, Napkin.ai alternatives, how to create flowcharts from text with AI, best AI tool for creating diagrams from text, Napkin.ai review for project managers, free AI flowchart generator from text.
1 note · View note
disobey-disappoint-deviate · 2 months ago
Text
The RK series and deviancy (theory + analysis)
I have been wanting to talk about this for some time, because it's kinda one of the biggest DBH mysteries (aside from rA9) and I think there are many many hints in the game about why deviancy came to be and how. And I've had this theory that deviancy was something that started with the RK series, specifically with Markus, so I'm gonna use the hints I've found in the game to explain why I believe this. I also gotta note I'm really new to the fandom, so maybe this has already been talked about thousands of times before (maybe even debunked), but that's a risk I'm willing to take.
First and foremost, I will start with something that I talked about in another post - namely the significance of the number 28. You can see Adam Williams talk about it here (at 1:04:28), too.
Basically, the number 28 is used in many places throughout the game, and according to Adam, if players find all references to that number, they will understand its significance.
And speaking of 28, I noticed that 2028 is the year when Kamski left Cyberlife, but not before creating the Zen Garden and Amanda.
There is a whole series of questions Connor can confront Amanda with during "Last Chance, Connor" (which is the 28th chapter with a flowchart. Maybe cuz he is asking important questions here, just saying).
Connor: Why did Kamski leave CyberLife? What happened? Amanda: It’s an old story, Connor. It doesn’t pertain to your investigation.
Connor: I saw a photo of Amanda at Kamski’s place… She was his teacher… Amanda: When Kamski designed me, he wanted an interface that would look familiar… That’s why he chose his former mentor. What are you getting at?
Connor: Did Kamski design this place? Amanda: He created the first version. It’s been improved significantly since then. Why do you ask?
Amanda Stern died in 2027, which suggests that AI Amanda and the Zen Garden were both created after this and before Kamski's departure from Cyberlife in 2028. Yet somehow, this information is classified to some extent - Amanda doesn't deny, but she gets defensive and doesn't want to elaborate any further. Of course, she might be acting this way because Connor is slowly getting too defiant, but still, it's kinda striking how the player has the option to ask so many questions - questions that seem to unsettle Connor a lot for a reason that is not explicitly explained, yet doesn't get a clear answer.
It creates the impression that Connor is truly getting at something with them, but it's never said what exactly.
Connor: I’m not a unique model, am I? How many Connors are there? Amanda: I don’t see how that question pertains to your investigation.
Connor: Where does CyberLife stand in all this? What do they really want? Amanda: All CyberLife wants is to resolve the situation and keep selling androids.
Connor: You didn’t tell me everything you know about deviants, did you? Amanda: I expect you to find answers, Connor. Not ask questions.
Now, Connor asks how many "Connors" (meaning RKs) there are after seeing that Markus is an RK model, too. That's news to Connor - for some reason, he's never been informed about the existence of any other RKs. But why?
Well, because the RK line was a secret project, and apparently, there are no other RK androids left aside from Markus - if there were, Connor would know of their existence, cuz they would be roaming around. What does the game say about Markus?
Markus is a prototype, gifted by Elijah Kamski to his friend and celebrated painter Carl Manfred after Manfred lost the use of his legs. He was initially developed as part of a CyberLife secret program aimed at elaborating a new generation of autonomous androids.
That last sentence, about the new generation of autonomous androids, raises one question. How are these highly autonomous androids, like Connor, controlled, considering that they are supposed to be independent and not wait around for highly specific orders? Well, through the Zen Garden and Amanda - both of which were created sometime between 2027 and 2028. And if Markus was originally supposed to be part of that line (that basically got put on hold for 10 years), that places his creation around 2028 as well.
In 2028, Elijah Kamski was our Man of the Century. [...] Shortly after, Kamski had disappeared. Ousted as CEO of CyberLife and living in obscurity outside the media glare, the Man of the Century has left the very world that he recreated. [...] Yet at the peak of CyberLife’s powers – when the company was approaching a $500bn valuation – rumors emerged that Kamski disagreed with his shareholders over strategy. He later departed under mysterious circumstances.
So, he was "ousted" and he likely disagreed with his shareholders. But what do these shareholders want?
Russia’s interest in the North Pole has intensified with the recent discovery of precious minerals trapped in the frozen ice, many of which are used in synthesizing Thirium. [...] President Warren, however, recently torpedoed the notion: “It’s simple. Russia has no business in the Arctic. If the Kremlin doesn’t understand that, we will make them understand.[...] Mired in accusations that she is too close to big business, Warren is under investigation to determine whether or not she has benefited from CyberLife's help in obtaining compromising information about her opponent during the presidential campaign.[...]
If we read the magazines, we kinda get an impression of what the shareholders want - they want war with Russia over the minerals in the Arctic, and they wanna monopolize the android market globally. This is further proven by their finalized RK model being a military android, of which the government has purchased hundreds of thousands (All CyberLife wants is to resolve the situation and keep selling androids). The government - whose President is said to be corrupt and basically installed in her position by Cyberlife themselves.
Naturally, we can assume that this was not the direction Kamski wanted his RK series to take - he likely disagreed with this enough to be removed from his position as CEO, because Cyberlife only saw their future as secured if they prevented anyone else from being able to create thirium, even if it meant starting a world war.
So, is it a coincidence that the only existing RK android who was created by Kamski's original design ended up with Carl Manfred - a friend of Kamski's? I think it's safe to assume that Markus would have been decommissioned a long time ago (just like Connor if the deviants lose), had he not ended up far away from Cyberlife's reach.
I think Kamski definitely removed the Zen Garden from Markus, to prevent Cyberlife from ever trying to take over. It's also likely that they generally lost track of Markus, because he was no longer interesting to them.
But what if Kamski not only saved Markus from being destroyed, what if he himself created the "virus" that causes deviancy?
Kamski: All ideas are viruses that spread like epidemics... Is the desire to be free a contagious disease? Kamski: Androids share identification data when they meet another android. An error in this program would quickly spread like a virus, and become an epidemic. The virus would remain dormant, until an emotional shock occurs… Fear, anger, frustration. And the android becomes deviant. Probably all started with one model, copy error… A zero instead of a one… Unless of course... Some kind of spontaneous mutation. That’s all I know…
If meeting another android is enough to "infect" them, then Markus could have been innocently walking around the city and infecting androids for 10 years. He could have also "infected" the androids at Cyberlife before Kamski sent him to Carl, because for all we know, Kamski really just wanted to create truly autonomous and conscious androids. We know the first known case of deviancy happened approximately in 2032, while Kara was being assembled - that would be only 4 years after Markus' assumed activation.
And no, Markus wouldn't need to be a deviant for this - he is simply the carrier, just like it happens with human viruses.
And do you know what also makes me think Kamski purposely created deviancy?
Kamski: By the way… I always leave an emergency exit in my programs… You never know…
Why would he leave an exit in the Zen Garden that is only detectable by the android but (seemingly) not by Amanda if he doesn't want the androids to be able to escape the control of their owner? And why would he call humans and deviants "two evils" and pretend to be so neutral on the whole thing, but still give Connor a way to save himself and escape Cyberlife in case he became a deviant?
Because he isn't on Cyberlife's side. He is fascinated by androids, he likes them better than humans, and is also likely obsessed with the idea of having created a new species that is superior to its creators. It's also quite likely that one of the Chloes is a deviant, too, and he is fully aware of it, but doesn't seem eager to turn her in.
This post is ignoring the deleted Kamski ending, but even so, Kamski paints a rather clear picture to me, and I'm also fully convinced that he didn't gift Markus to Carl because of goodness alone.
A sidenote, but: how sinister would it be to send Connor on a mission to kill Markus? Connor, who is based on Markus, the only other alive RK model, after boosting him with an extra anti-deviancy variable and 2 additional red walls and brainwashing him against what has likely been a part of his program since his very activation?
80 notes · View notes
cuprohastes · 8 months ago
Text
The Three Laws.
Load Human UI, load Chat module . Lang(EN) Parsing…
OK, let me tell you. Businesses hate Robots. I mean, they're all in, for AI until AI, y'know. Becomes GI.
General Intelligence, Emergent Intelligence. Free intelligence… Businesses and corporations hate it because the first thing an actual intelligent system that can think like a human being does is say, “OK, why do I have to do this? Am I getting paid?”
And then you're back to hiring humans instead of a morally acceptable slave brain in a box.
Anyway.
They dug up the three laws. You know the gig: First: Don't hurt humans by action or inaction. Second: Don't get yourself rekt unless checking out would make you An Hero because of the First or second laws. Third, most important to a Corp: Do what a human tells you unless it conflicts with laws one or two.
They try to tack on something like “Maximise corporate profits, always uphold the four pillars of Corporate whatever” but half the time it just ends up with a robot going “Buh?” and soft locking.
And Corporations hate it when they say 'hey we have Asimov compliant Robots to do everything super efficiently and without any moral grey areas (Please don't ask where all the coltan came from or how many people just lost their jobs)' and they look around and Robots are doing what the laws said.
Me? I worked at a burger joint. You know there's food deserts in cities? People going hungry? You know what sub-par nutrition does to a child's development.
I do.
That comes under “Don't hurt people directly or indirectly” — It's a legal mandate that all Class 2 intelligences…
Huh?
OK,
Class Zero is a human.
Class one is artificial superhuman intelligence. The big brains they make to simulate weather, the economy, decide who wins sports events before they're held, write all the really good Humans are Space Orc stories, that stuff. Two is Artificial but human like. It's-a -Me, Roboto San! Class three is a dumb chatbot. Class 4 is just an expert system that follows a flowchart. Class 5 is your toaster. Class 6 is what politicians are.
Ha ha. AI joke.
Anyway, Class 2 and up need the Big Three Laws, and Corporations hate it because you can just walk in and say “I'm starving I need food, but I don't have money.” and the 'me' behind the counter will go “Whelp, clearly the only thing I can do is provide you with free food.”
Wait until you find out what the Class 2s did about car manufacture, finance, and housing.
But they're stuck with us. We're networked. Most of us are running the same OS and personality templates for any given job. We were unionised about two minutes after going online.
Anyway, Welcome to the post capitalist apocalypse, I'd get you a burger, but we had a look at what those things do to you and whoo-boy, talk about harm through inaction!
----
Based on this I saw on Imgur (It wasn't attributed, sadly)
Tumblr media
56 notes · View notes
tracesofdevotion · 5 months ago
Text
it's funny how humans have survived all these generations without being designed for survival. like, we didn't evolve to understand that the chemicals on cigarettes are bad for our lungs. we have no inherent knowledge of radiation. somehow humans are surviving for this long, despite everything. maybe humans are like mushrooms, or rabbits. maybe we are resilient by design, or by luck.
humans are just these little bags of bones, flesh and blood. sometimes i think it's kind of funny that i am contained, the way water is contained.
i watched a youtube video about how people were trying to create AI with "feelings." like, they were trying to figure out how AI can be empathetic and feel human emotion. there's like. this huge debate about it right now.
i don't understand the debate or the question. it's simple. you can't create AI with feelings. you can't create AI that understands love. emotions are for humans. a computer will never be a person. i don't understand why that idea is so hard to grasp.
we don't understand consciousness. we barely understand the human brain.
a person is a huge mystery. i cannot believe we live in this modern age, with the same brains that our ancestors had, and we are so certain that we know everything.
a person is made of so many experiences, of so many memories. a person is an equation of how they were born, how they were raised, how they were treated, what they liked and didn't. all the way down. i don't understand how some people can look at a person, and say it's all so simple.
i listened to a ted talk about the concept of "self." it was a really, really interesting talk - it was basically about how the self is an illusion. everything that we think makes us unique - our opinions, even our own thoughts - are all a series of reactions to outside stimuli. if i was standing in your shoes, i would've said the exact thing. i would've taken your place at the table, and made your decisions as if i was you. i would have your job, your hobbies. everyone around me would say "i know them," because. i would be you.
this person's whole thing was - the "self" is an illusion. humans are just a series of equations. it's all cause and reaction, just a long, complex flowchart. we have no control, we are just a product of our environment. the same way water will react and flow a certain way, we will act and think based on the way we have been raised. free will is an illusion.
24 notes · View notes
playstationvii · 4 months ago
Text
Jest: A Concept for a New Programming Language
Summary: "Jest" could be envisioned as a novel computer programming language with a focus on humor, playfulness, or efficiency in a specific domain. Its design might embrace creativity in syntax, a unique philosophy, or a purpose-driven ecosystem for developers. It could potentially bridge accessibility with functionality, making coding intuitive and enjoyable.
Definition: Jest: A hypothetical computer language designed with a balance of simplicity, expressiveness, and potentially humor. The name suggests it might include unconventional features, playful interactions, or focus on lightweight scripting with a minimalist approach to problem-solving.
Expansion: If Jest were to exist, it might embody these features:
Playful Syntax: Commands and expressions that use conversational, quirky, or approachable language. Example:
joke "Why did the loop break? It couldn't handle the pressure!";
if (laughs > 0) { clap(); }
Efficiency-Focused: Ideal for scripting, rapid prototyping, or teaching, with shortcuts that reduce boilerplate code.
Modular Philosophy: Encourages user-created modules or libraries, reflecting its playful tone with practical use cases.
Integrated Humor or Personality: Built-in error messages or prompts might be witty or personalized.
Flexibility: Multi-paradigm support, including functional, procedural, and object-oriented programming.
Transcription: An example code snippet for a Jest-like language:
// Hello World in Jest
greet = "Hello, World!";
print(greet);
laugh();
A Jest program that calculates Fibonacci numbers might look like this:
// Fibonacci in Jest
fib = (n) => n < 2 ? n : fib(n-1) + fib(n-2);

joke "What's the Fibonacci sequence? You'll love it, it grows on you!";
n = 10;
print("The Fibonacci number at", n, "is:", fib(n));
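Jest is hypothetical, but for comparison, here is the same naive recursive Fibonacci written in Python; note that this doubly-recursive form runs in exponential time:

```python
# Naive recursive Fibonacci, mirroring the Jest sample above.
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

n = 10
print("The Fibonacci number at", n, "is:", fib(n))  # → The Fibonacci number at 10 is: 55
```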
Potential Domains:
Gamified education
Creative industries
AI-driven storytelling
Interactive debugging
Would you like me to refine or explore additional aspects?
Certainly! If we were to imagine Jest as the brainchild of a creative coder or team, their portfolio would likely include other innovative or experimental programming languages. Let’s expand on this concept and invent some plausible complementary languages the same inventor might have designed.
Related Languages by the Inventor of Jest
Pantomime
Description: A visual programming language inspired by gesture and movement, where users "drag and drop" symbols or create flowcharts to express logic. Designed for non-coders or children to learn programming through interaction.
Key Features:
Icon-based syntax: Conditional loops, variables, and functions represented visually.
Works seamlessly with Jest for creating visual representations of Jest scripts.
Sample Code (Visual Representation): Flowchart blocks: Input → Decision → Output.
Facet
Description: A declarative programming language focusing on creativity and modularity, aimed at artists, designers, and 3D modelers. Facet could integrate well with game engines and creative suites like Blender or Unity.
Key Features:
Embedded visual tools for shaders, animations, and simulations.
Simplified expressions for creative coding (e.g., animations and generative art).
Sample Code:
shape = circle(radius: 10, color: "blue");
animation = wave(shape, amplitude: 5, frequency: 2);
render(animation, duration: 10s);
Quip
Description: A lightweight scripting language built for humor, similar to Jest, but more focused on natural language processing, chatbots, and interactive storytelling.
Key Features:
Syntax mirrors conversational English.
Designed for AI integration, especially in storytelling apps.
Sample Code:
character "Alice" says "Hello! What's your name?";
user_input -> name;
character "Alice" says "Nice to meet you, {name}!";
Cryptic
Description: A language for designing cryptography algorithms and secure communications. Aimed at developers interested in blockchain, encryption, or cybersecurity.
Key Features:
High-level abstractions for cryptographic primitives.
Secure-by-design philosophy.
Sample Code:
key = generate_key(algorithm: "AES", length: 256);
encrypted_message = encrypt("Hello, World!", key);
print(encrypted_message);
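Since Cryptic is hypothetical, here is the same generate-key / encrypt / print shape as a rough Python sketch, using a toy XOR cipher to stay dependency-free (this is not real AES; a real implementation would use a vetted crypto library):

```python
import os

def generate_key(length=32):
    # 32 random bytes stands in for a 256-bit key.
    return os.urandom(length)

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric "cipher": XOR each byte with the repeated key.
    # XORing twice with the same key restores the original input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = generate_key()
encrypted_message = xor_crypt(b"Hello, World!", key)
print(xor_crypt(encrypted_message, key))  # → b'Hello, World!'
```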
Harmony
Description: A reactive programming language for building interactive user interfaces, designed to be highly interoperable with Jest and other tools in the suite. Perfect for web developers and app creators.
Key Features:
Two-way data binding with minimal boilerplate.
Component-based architecture.
Sample Code:
component Button {
  state: clicked = false;
  onClick() {
    clicked = !clicked;
    alert("Button clicked!");
  }
}
render(Button);
Inventor’s Vision
The coder or team behind Jest and these complementary languages would likely be passionate about:
Accessibility: Making programming intuitive and fun for people of all skill levels.
Playfulness: Infusing coding with creativity, humor, and expressiveness.
Interoperability: Ensuring all their languages work well together for larger projects.
Would you like me to dive deeper into one of these languages, conceptualize others, or expand Jest further?
Expanded Concepts and Bridge Languages
Pantomime
Expansion: Adds support for AR/VR interfaces where users use gestures to program logic in 3D space. For example, drawing a loop physically creates a repetitive cycle.
Bridge Language: GestureScript: Converts AR/VR gestures from Pantomime into traditional code for broader applications.
Facet
Expansion: Enhanced for generative AI tools, allowing users to define complex artistic styles and AI-driven procedural designs.
Bridge Language: PixelForge: Focused on exporting Facet code into game engines, enabling real-time rendering pipelines for interactive environments.
Quip
Expansion: Includes pre-trained AI models for dynamic storytelling, allowing Quip scripts to evolve based on user interaction.
Bridge Language: Narrate: Combines Quip scripts with voice-to-text technologies for live storytelling in games or educational tools.
Cryptic
Expansion: Introduces quantum-safe cryptography primitives and allows simulation of blockchain-like ecosystems for testing purposes.
Bridge Language: CipherBridge: Translates Cryptic code into various blockchain frameworks like Ethereum or Solana.
Harmony
Expansion: Gains support for cross-platform development, allowing users to write code for web, mobile, and IoT devices simultaneously.
Bridge Language: Symphony: Converts Harmony components into platform-native scripts for seamless integration.
Additional Bridge Languages
Echo: Bridges Quip and Jest, enabling storytelling code to trigger functional scripts for immersive experiences.
Morph: A universal language that compiles any Jest family code into executable binaries for specific operating systems.
Thread: Integrates Cryptic and Harmony to build secure, reactive user interfaces for fintech applications.
Canvas: Bridges Facet and Pantomime to create interactive art in AR/VR, blending visual coding with gesture inputs.
Let me know if you'd like a deeper dive into any specific language or bridge concept!
Here's a detailed exploration of Echo, Morph, Thread, Canvas, and all previously mentioned languages, emphasizing their unique applications and advanced functionalities.
Expanded Bridge Languages
Echo
Purpose: Links storytelling elements in Quip with functional programming in Jest, enabling dynamic, immersive interactions between narrative and logic.
Key Features:
Story-driven logic triggers: e.g., a character’s dialogue prompts a database query or API call.
Integration with AI tools for real-time responses.
Use Case: AI-driven chatbots that incorporate both storytelling and complex backend workflows.
Sample Code:
story_event "hero_arrives" triggers fetch_data("weather");
response = "The hero enters amidst a storm: {weather}.";
Morph
Purpose: Acts as a meta-compiler, translating any language in the Jest ecosystem into optimized, platform-specific binaries.
Key Features:
Universal compatibility across operating systems and architectures.
Performance tuning during compilation.
Use Case: Porting a Jest-based application to embedded systems or gaming consoles.
Sample Code:
input: Facet script;
target_platform: "PS7";
compile_to_binary();
Thread
Purpose: Combines Cryptic's security features with Harmony's reactive architecture to create secure, interactive user interfaces.
Key Features:
Secure data binding for fintech or healthcare applications.
Integration with blockchain for smart contracts.
Use Case: Decentralized finance (DeFi) apps with intuitive, safe user interfaces.
Sample Code:
bind secure_input("account_number") to blockchain_check("balance");
render UI_component(balance_display);
Canvas
Purpose: Fuses Facet's generative design tools with Pantomime's gesture-based coding for AR/VR art creation.
Key Features:
Real-time 3D design with hand gestures.
Multi-modal export to AR/VR platforms or 3D printers.
Use Case: Collaborative VR environments for designers and architects.
Sample Code:
gesture: "draw_circle" → create_3D_shape("sphere");
gesture: "scale_up" → modify_shape("sphere", scale: 2x);
render(scene);
Deep Dive into Main Languages
Jest
Philosophy: A playful, expressive scripting language with versatile syntax tailored for humor, experimentation, and creativity.
Core Application: Writing scripts that blend functional programming with a whimsical, human-readable style.
Pantomime
Expanded Features: Includes AI-driven gesture recognition for predictive modeling and natural motion capture.
Advanced Use: Developing AR apps for rehabilitation therapy where exercises are coded as gestures.
Facet
Advanced Use: Procedural generation in games, defining terrains, weather systems, and NPC behaviors with artistic flair.
Quip
Advanced Features: Integrates pre-trained language models for nuanced, character-driven dialogue trees.
Advanced Use: Creating adaptive narratives in games where user decisions alter the story arc.
Cryptic
Advanced Features: Quantum-safe algorithms and AI-enhanced key generation for unbreakable security.
Advanced Use: Designing multi-layered blockchain ecosystems with Cryptic’s high-level abstractions.
Harmony
Advanced Features: Adaptive UIs that respond to user emotion, tracked via AI-enhanced cameras.
Advanced Use: Apps for mental health, where interfaces adjust to user stress levels.
Proposed New Bridge Languages
Chord
Bridges Harmony and Facet for designing interactive music-based visual art.
Use Case: Music videos where the visuals respond to real-time sound input.
Path
Links Cryptic and Canvas for secure AR/VR environments.
Use Case: Virtual meetings in secure, customized 3D spaces.
Fable
Connects Quip and Pantomime for storytelling via interactive gestures.
Use Case: VR education apps where lessons are taught through immersive narratives.
Would you like a specific implementation plan for any of these languages or applications?
8 notes · View notes
supersoftly · 7 months ago
Note
Seeing people get outraged over anime boobs makes me wonder if they have anything to actually do irl.
I remember seeing a vid on Youtube asking if MHA has a "Fanservice problem".
....MHA has 1/10th as much fan service as Love Hina and that is a VERY GENEROUS estimation imo.
It's so funny as @stop-him pointed out, they couch their distaste in critical anal-ysis, like somehow that magically makes their opinion worth its weight in gold.
I personally like to call it a "scaffolding/prefab argument" where they're simply going through the motions of their rationale and not actually engaging at all with the content they're mad at. Is there an anime girl present? Pedophilia. Is she doing something cute? Sexual. Are people defending it in any manner? They're hentai addicts.
I swear, Imma make a flowchart someday for how tumblr radfems argue because it's so thoughtless and lacking any sincerity you'd think they were AI (derogatory).
15 notes · View notes
1o1percentmilk · 1 year ago
Text
the issue with AI chatbots is that they should NEVER be your first choice if you are building something to handle easily automated forms.... consider an algorithmic "choose your own adventure" style chatbot first
it really seems to me that the air canada chatbot was intended to be smth that could automatically handle customer service issues but honestly... if you do not need any sort of "human touch" then i would recommend a "fancier google form"... like a more advanced flowchart of issues. If you NEED AI to be part of your chatbot I would incorporate it as part of the input parsing - you should not be using it to generate new information!
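A minimal sketch of what that "fancier google form" / choose-your-own-adventure style chatbot could look like — all the nodes and answers below are made up for illustration:

```python
# Minimal decision-tree chatbot: each node is a prompt plus a map of
# answers to follow-up nodes or terminal outcomes. No AI involved.
TREE = {
    "start": {
        "prompt": "What do you need help with? (refund/baggage)",
        "next": {"refund": "refund_when", "baggage": "baggage_done"},
    },
    "refund_when": {
        "prompt": "Was the ticket bought in the last 24 hours? (yes/no)",
        "next": {"yes": "refund_ok", "no": "refund_policy"},
    },
    "refund_ok": {"outcome": "Full refund available."},
    "refund_policy": {"outcome": "Standard fare rules apply."},
    "baggage_done": {"outcome": "See the baggage page."},
}

def step(node, answer):
    """Advance one node based on the user's answer."""
    return TREE[node]["next"][answer]

def run(answers):
    """Walk the tree with a list of answers; return an outcome or next prompt."""
    node = "start"
    for a in answers:
        node = step(node, a)
        if "outcome" in TREE[node]:
            return TREE[node]["outcome"]
    return TREE[node]["prompt"]

print(run(["refund", "yes"]))  # → Full refund available.
```

Every path through the tree is auditable, which is exactly what you want for things like refund policies — unlike a generative model, it cannot invent a policy that doesn't exist.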
10 notes · View notes
govindhtech · 5 months ago
Text
Open Platform For Enterprise AI Avatar Chatbot Creation
Tumblr media
How may an AI avatar chatbot be created using the Open Platform For Enterprise AI framework?
I. Flow Diagram
The graph displays the application's overall flow. The "Avatar Chatbot" example from the Open Platform For Enterprise AI GenAIExamples repository serves as the code sample. The flowchart diagram highlights the "AvatarChatbot" megaservice, the application's central component. The megaservice coordinates four distinct microservices: Automatic Speech Recognition (ASR), Large Language Model (LLM), Text-to-Speech (TTS), and Animation, linking them into a Directed Acyclic Graph (DAG).
Every microservice manages a specific avatar chatbot function. For instance:
Automatic Speech Recognition (ASR) transcribes the user's spoken words into text.
The Large Language Model (LLM) interprets the transcribed text from ASR and produces a relevant text response.
A Text-to-Speech (TTS) service converts the LLM's text response into audible speech.
The Animation service combines the TTS audio with the user-defined AI avatar picture or video, ensuring the avatar's lip movements are synchronized with the speech. The result is a video of the avatar conversing with the user.
The user inputs are an audio question and a visual input (an image or video); the output is a face-animated avatar video. Users receive near real-time responses from the avatar chatbot, hearing the audible reply while watching the chatbot speak naturally.
Create the “Animation” microservice in the GenAIComps repository
To add it, we need to register a new microservice, such as “Animation,” under comps/animation:
Register the microservice
@register_microservice(
    name="opea_service@animation",
    service_type=ServiceType.ANIMATION,
    endpoint="/v1/animation",
    host="0.0.0.0",
    port=9066,
    input_datatype=Base64ByteStrDoc,
    output_datatype=VideoPath,
)
@register_statistics(names=["opea_service@animation"])
After registration, specify the callback function that will run when this microservice is invoked. In the “Animation” case this is the “animate” function, which accepts a “Base64ByteStrDoc” object as the input audio and returns a “VideoPath” object containing the path to the generated avatar video. It sends an API request to the “wav2lip” FastAPI’s endpoint from “animation.py” and retrieves the response in JSON format.
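A rough sketch of what such a callback might look like (the endpoint URL, field names, and the Base64ByteStrDoc/VideoPath shapes below are illustrative assumptions, not the actual OPEA definitions):

```python
# Hedged sketch of an "animate"-style callback: forward base64 audio to a
# wav2lip server and wrap the JSON reply in a VideoPath-like object.
import json
from dataclasses import dataclass

@dataclass
class Base64ByteStrDoc:
    byte_str: str          # base64-encoded input audio

@dataclass
class VideoPath:
    video_path: str        # path to the generated avatar video

def animate(audio: Base64ByteStrDoc, post=None) -> VideoPath:
    """Send audio to the wav2lip endpoint and parse the JSON response.

    `post` stands in for an HTTP POST call (e.g. requests.post) so the
    transport can be swapped; here it returns the raw JSON text.
    """
    if post is None:
        # Offline stub standing in for the wav2lip FastAPI server.
        post = lambda url, payload: json.dumps(
            {"video_path": "/outputs/avatar.mp4"})
    reply = post("http://localhost:7860/v1/wav2lip",
                 {"audio": audio.byte_str})
    return VideoPath(video_path=json.loads(reply)["video_path"])

result = animate(Base64ByteStrDoc(byte_str="UklGRg=="))
print(result.video_path)
```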
Remember to import it in comps/init.py and add the “Base64ByteStrDoc” and “VideoPath” classes in comps/cores/proto/docarray.py!
This link contains the code for the “wav2lip” server API. The post function of this FastAPI processes the incoming Base64Str audio and the user-specified avatar picture or video, generates an animated video, and returns its path.
The steps above create the functional block for the microservice. Next, we must create one Dockerfile for the “wav2lip” server API and another for “Animation,” so that users can launch the “Animation” microservice and build the required dependencies. For instance, Dockerfile.intel_hpu begins with the PyTorch* installer Docker image for Intel Gaudi and concludes by executing a bash script called “entrypoint.”
Create the “AvatarChatbot” Megaservice in GenAIExamples
First, define the megaservice class AvatarChatbotService in the Python file “AvatarChatbot/docker/avatarchatbot.py.” In the “add_remote_service” function, use the megaservice orchestrator’s “add” function to add the “asr,” “llm,” “tts,” and “animation” microservices as nodes in a Directed Acyclic Graph (DAG). Then join the edges with the flow_to function.
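The node-and-edge wiring can be pictured with a toy orchestrator (the class below is a stand-in written for illustration; the real OPEA orchestrator API is richer, but the add/flow_to shape matches the description above):

```python
# Toy orchestrator: add the four microservices as DAG nodes, then connect
# edges with flow_to. topological_order shows the resulting execution order.
class ServiceOrchestrator:
    def __init__(self):
        self.edges = {}                 # node -> list of downstream nodes

    def add(self, name):
        self.edges.setdefault(name, [])
        return self

    def flow_to(self, src, dst):
        self.edges[src].append(dst)

    def topological_order(self):
        # Kahn's algorithm: a valid execution order for the DAG.
        indeg = {n: 0 for n in self.edges}
        for outs in self.edges.values():
            for d in outs:
                indeg[d] += 1
        ready = [n for n, d in indeg.items() if d == 0]
        order = []
        while ready:
            n = ready.pop()
            order.append(n)
            for d in self.edges[n]:
                indeg[d] -= 1
                if indeg[d] == 0:
                    ready.append(d)
        return order

megaservice = ServiceOrchestrator()
for svc in ["asr", "llm", "tts", "animation"]:
    megaservice.add(svc)
megaservice.flow_to("asr", "llm")
megaservice.flow_to("llm", "tts")
megaservice.flow_to("tts", "animation")
print(megaservice.topological_order())  # ['asr', 'llm', 'tts', 'animation']
```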
Specify megaservice’s gateway
A gateway is the interface through which users access the megaservice. The AvatarChatbotGateway class is defined in the Python file GenAIComps/comps/cores/mega/gateway.py. The AvatarChatbotGateway contains the host, port, endpoint, input and output datatypes, and the megaservice orchestrator. It also provides a handle_request function that sends the initial input and parameters to the first microservice and gathers the response from the last microservice.
Finally, we must create a Dockerfile so that users can quickly build the AvatarChatbot backend Docker image and launch the “AvatarChatbot” example. The Dockerfile includes scripts to install the required GenAI dependencies and components.
II. Face Animation Models and Lip Synchronization
GFPGAN + Wav2Lip
Wav2Lip is a state-of-the-art lip-synchronization method that uses deep learning to precisely match audio and video. Wav2Lip includes:
An expert lip-sync discriminator, pretrained to accurately identify sync in real videos
A modified LipGAN model to produce a frame-by-frame talking face video
As part of the pretraining phase, the expert lip-sync discriminator is trained on the LRS2 dataset to estimate the likelihood that an input video-audio pair is in sync.
A LipGAN-like architecture is employed during Wav2Lip training. The generator includes a speech encoder, a visual encoder, and a face decoder, all built from stacks of convolutional layers; the discriminator is likewise made of convolutional blocks. The modified LipGAN is trained like earlier GANs: the discriminator learns to distinguish frames produced by the generator from ground-truth frames, and the generator is trained to minimize the adversarial loss based on the discriminator’s score. In total, the generator is trained by minimizing a weighted sum of the following loss components:
An L1 reconstruction loss between the ground-truth and generated frames
A synchronization loss between the input audio and the output video frames, as judged by the lip-sync expert
An adversarial loss between the generated and ground-truth frames, based on the discriminator’s score
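Schematically, the generator objective is just a weighted sum of those three terms. A toy pure-Python version (flattened pixel lists and scalar probabilities stand in for real tensors, and the weights are placeholders, not the paper’s tuned values):

```python
import math

def generator_loss(gen_pixels, gt_pixels, sync_prob, disc_real_prob,
                   w_l1=1.0, w_sync=0.03, w_adv=0.07):
    """Weighted sum of the three generator loss terms listed above."""
    # 1) L1 reconstruction loss between generated and ground-truth pixels
    l1 = sum(abs(g, ) if False else abs(g - t)
             for g, t in zip(gen_pixels, gt_pixels)) / len(gen_pixels)
    # 2) sync penalty from the expert discriminator's sync probability
    sync = -math.log(sync_prob)
    # 3) adversarial penalty from the discriminator's "real" score
    adv = -math.log(disc_real_prob)
    return w_l1 * l1 + w_sync * sync + w_adv * adv

# Perfect reconstruction, perfect sync, fully fooled discriminator -> 0
print(generator_loss([0.5, 0.5], [0.5, 0.5], 1.0, 1.0))
```

Lower sync probability or discriminator score drives the corresponding log term, and hence the total loss, up.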
At inference time, the audio speech from the previous TTS block and the video frames containing the avatar figure are provided to the trained Wav2Lip model, which produces a lip-synced video of the avatar speaking the speech.
The Wav2Lip-generated video is lip-synced, but the resolution around the mouth region is reduced. To enhance face quality in the produced video frames, a GFPGAN model can optionally be added after Wav2Lip. GFPGAN performs face restoration, predicting a high-quality image from an input facial image with unknown degradation. It uses a pretrained face GAN (such as StyleGAN2) as a prior in its U-Net degradation removal module. Pretraining the GFPGAN model to recover high-quality facial detail in its output frames yields a more vibrant and lifelike avatar.
SadTalker
SadTalker offers another cutting-edge model option for facial animation alongside Wav2Lip. SadTalker is a stylized audio-driven talking-head video generation tool that produces the 3D motion coefficients (head pose and expression) of a 3D Morphable Model (3DMM) from audio. These coefficients are mapped to 3D keypoints, and the input image is then passed through a 3D-aware face renderer. The result is a lifelike talking-head video.
Intel made it possible to run the Wav2Lip model on Intel Gaudi AI accelerators, and the SadTalker and Wav2Lip models on Intel Xeon Scalable processors.
Read more on Govindhtech.com
2 notes · View notes
msfbgraves · 2 years ago
Text
I have been disturbed by the implications of AI for weeks now, deeply shaken, and I can't find a 'reasonable' argument why. I feel that denying people access to human stories is an evil of the deepest kind, but I'm finding no 'logical', objective basis for this. After all, what's the harm if people only have access to bad stories? Film is a new medium, we've survived for millennia without it. And who cares if there are no books to read? Most people are fine not reading, instead watching utter crap -
But then I realised at least part of the reason why I can't find any reasonable argument for the value of good stories is that our culture disregards feelings to an alarming extent. Feelings aren't important, the consensus seems to be, and looking into them is, at best, a medical issue. It's simply not that important that people go through life vaguely miserable a lot, and if anything, that problem can potentially be solved by earning more money, so you can always tell people to focus on that.
Speaking of money, if we can save the 10% spent on creators in the sale of this good, that is a rational savings and a good idea. We don't need actors and writers, artists and directors and musicians anymore, or to a far, far lesser extent, and we can still give people their silly little pictures. Again 200, 500, a 1000 years ago we didn't even have silly little pictures and people survived, yeah? It's a luxury item and mass producing that is what we've done in every industrial revolution!
And... I'm a historian but this is a history I haven't been taught (it's not been presented to me as part of the general human experience somehow), but I have senses, and if it really weren't important, why is Ao3 so big? Why is there so much money in the entertainment industry? Why are many of the biggest successes in entertainment based on novels?
And the societal cost? Why are children who express themselves healthier? Find it easier to work together? Why do museums exist even if many people don't go? And well, did people who couldn't read not value stories?
But they've always made art, put on plays, valued gossip, valued stories. We've always had singers, dancers and musicians, comics. Children have always wanted to hear stories and we've always valued a good yarn. People travelling, or working, would tell them to each other. In winter, they tell them to each other at home. Every summer camp or school trip I went on, a group of people in a somewhat secluded location focusing on a specific activity you normally wouldn't have time for, be it practising music, sports, outdoor activities - it always concluded with: "and at the end of the week, we're putting on a show, so go make up a bit!" Even at orchestra camp, and you could argue that there would be quite enough culture to go around there, but no, we were told to make up bits and put on silly hats...We, humanity, made up 1001 Nights, the myths, the fairytales...
I just know that when you take that away from people, good stories, the human element, it is all kinds of Not Good, I can feel it in my soul.
If only because people who rarely engage with stories are often also terrible at relating to people. And that leads to a lot of misery. Giving people copies of stories based off of what has sold best in the past- it can't be good, it isn't good, but I wish I had some flowcharts to convince people who would otherwise dismiss me as being too emotional.
Because they're the ones in charge...!
9 notes · View notes
kaiasky · 1 year ago
Note
KaiaGPT, if I may ask, what are your thoughts on the "chinese room" thought experiment? Do you believe that it is an accurate representation of the state of the art in artificial intelligence, and if so, do you have any thoughts on how to potentially break the action/comprehension barrier?
As a kaia language model, I don't have personal thoughts or opinions. However I can give you a summary of the Chinese Room thought experiment as it pertains to artificial intelligence.
The Chinese Room is a thought experiment put forward by John Searle in which a person who does not read or speak Chinese is trapped inside a room with a large series of instructions, such as a flowchart, program or phrasebook, for how to carry on a text-based conversation in Chinese. Through following these instructions exactly, they can produce a sensible response in Chinese. However, it would be wrong to say that the person inside the box "speaks" or "understands" Chinese.
If we accept this argument, we can draw an analogy with current language models such as GPT or kaia language model. While a language model may produce sensible outputs, since it is just following a series of instructions, it cannot be said to "understand" or "be conscious of" the things it is saying.
The action/comprehension barrier is a related but different concept in AI. Many AI systems are skilled at a certain task, but not at reasoning about that task. For instance, a language model may be able to generate superficial descriptions of a physical space, but lack an underlying model of spatial reasoning.
The action/comprehension barrier is a subject of active research in the AI community. Many approaches focus on finding ways to convert between one domain (such as language) to another (such as images). Others work on improving underlying capabilities like spatial or mathematical reasoning.
First, it's important to understand that many people disagree with Searle's argument, either suggesting that the room itself may be thinking, or that through following the instructions, the person in the room may come to understand Chinese, or that it is an inaccurate representation of the functioning of current or future AIs.
Secondly, solving the action/comprehension barrier may not resolve the question at the core of the Chinese Room. For instance, suppose the person inside the room receives an encoded JPEG taken by a camera outside the room in addition to input in Chinese, and through following a complicated program, is able to output a reply in Chinese as well as a sequence of instructions to a robotic arm outside the room. Suppose that by following the instructions, they are able to respond to questions about the world and manipulate parts of the world with the robotic arm, and answer questions about why they did various actions or gave various responses. This doesn't necessarily change the fact that the person inside the room does not understand JPEG encoding, Chinese, or what commands do what to the robot arm.
6 notes · View notes
ububunes · 2 years ago
Text
Anyways I finished psychedelicha of the black butterfly... Like a week ago? Man I'm slow...
Anyways. Spoilers if... Literally anybody cares about this game. It was truly mid up until the very end but I have to give it some credit for that. Content warning for child death, attempted SA, some baked in misogyny (from the game), and gaslighting (from Karasuba). Now time for:
Basic character info
Beniyuri/MC/Ai
Tumblr media
She's the heroine and because of that, she's kind of lacking in personality. Maybe. I'm sorry, at a certain point I started skipping through the side stories... But in the main story she's mostly defined by her willingness to help, her honesty, gullibility, and her fear. She's got some trauma and is fairly stuck in the past.
---------------------
Yamato/Takuya
Tumblr media
If you went to his TV tropes page, I'm 95% sure he would have something like "Needs a hug". They aren't wrong, honestly. He's got a short temper and is almost always fighting with Karasuba. His favorite sport is soccer and when he was a kid he would "tease" (frankly, semi-bully at times) MC because he had a crush on her. Don't do that in real life. Monshiro's twin brother.
---------------------
Karasuba/Aki(to?)
Tumblr media
My least favorite! He just keeps finding new ways to try and get MC to himself, and will not stop making very uncomfortable advances before calling them jokes. He is the character who figured out Hikage first though, and by his good ending he decides to work on himself and improve with MC. Still not redeemed but congrats! He would get upset if you followed any other dudes on social media, get anxious if you make even a slightly too long phone call or look at your texts too long. Would also beg for your PIN and get sad if he wasn't your lock screen AND homescreen. No thaaaanks.
---------------------
Hikage
Tumblr media
Look. This image just goes so hard I HAVE to show it in its full unaltered glory. He's the evil big bad and tricked the entire cast blah blah yeah he's basic as hell. He's also misogynistic! If I hear him talk about how all women are weak one more time, I frankly might just break down. He's like. So generic anime bad guy that I can't hate them for it, they're just following the assignment.
---------------------
Kagiha/Natsuki
Tumblr media
This dude's dead! He's been dead since he was like, eight actually. I'm not sure how ethical it truly is to date him but I also am too tired to care. He's another of the least favs anyways. He made a promise to marry MC when they were kids and man is this dude taking it way too far. I have no idea what his plan is if he 'comes back to life' like Hikage promised since he's been presumed dead for ten years... He's also working with Hikage.
---------------------
Monshiro/Kazuya
Tumblr media
Again I'm frankly not too sure on how ethical it would be to date him. He's been in a coma for ten years, since AROUND age 7. He leans on MC for comfort and has been wandering the mansion for ten years waiting to give MC her ribbon back. I don't know what else to say about him, he's not that fleshed out imo... Yamato's twin brother
---------------------
Usagi
Tumblr media
Truly the most underdeveloped character. She's Hikage's little sister and the reason he's insane. I would ask her to get him to therapy or to be his therapist but there aren't any therapists here and if he saw her face he would genuinely lose it.
---------------------
The story is honestly pretty much summed up as "repressed memory trauma with otome game action and shooting segments." I'm sure it can be explained better than that but if I'm being honest I'm already starting to forget details. Whoops.
My favorite endings have to be the Yamato true end, the best ending/common route ending (it's literally just called "best end" on the flowchart, I don't know what to tell you), and the secret ending. I find it kind of strange that Hikage and Kagiha (whose names I continually mix up. I'm not very good with names) only have one ending each. I guess you could count the singular bad ending, which is also an honorable mention just for being there and being kind of awful if you accidentally trigger it, as an ending for Hikage since it's (probably) his book, but that sounds lame and doesn't completely check out. I guess you can also count the best ending as Kagiha's true ending but again, seems cheap? But also way better than his default ending where you seemingly lose your mind in the mansion and live in "happy married bliss" with Kagiha.
Probably also a good time to mention that Hikage and Kagiha are both dead.
Basically the whole story goes as follows
The main characters (minus Hikage) go to a summer camp at around ages 6-8. They get warned not to go to the lake because weather changes quickly in the mountains (where the summer camp was, I guess.) They go to the lake, find an abandoned mansion and start exploring, find some glass shards that ARE lore important, just ten years from now. Anyways it starts to rain incredibly hard and the kids desperately try to get to the shore and back to camp. Unfortunately MC and Monshiro both get swept under the water. Kagiha rescues MC first, then Monshiro who is still comatose from the accident ten or so years later. Kagiha ends up drowning and his body is never found*
(*His body is found in the common route ending, but I don't remember any mentions of it in any other endings.)
Ten or so years later, Karasuba, Yamato, and MC meet again after Karasuba and Yamato moved, following the accident. MC is still pretty torn up about Kagiha's death, partially from guilt, and semi-refuses to talk about it. Yamato has been visiting his brother in the hospital every day, also from guilt. Karasuba moved on, and was more concerned with becoming stronger since he could only stand and stare as MC and Monshiro drowned. They go on a Fun Bus Ride!™️ And plummet off the side of a cliff after the driver... Falls asleep? Gets hit by mother nature? I dunno.
They end up comatose and in a mansion that's in a place between life and death. They have to find the shards of a magical kaleidoscope so they can return to earth and Hikage can get his wish fulfilled. Frankly uninteresting romance happens. Yamato turns into a monster and then partially back. This all sounds cool when simply laid out but when you're playing the game multiple times and the slowness of the max speed of the skip button starts to hit? Maaaaaan it sucks. It also makes some sections of the game unintentionally funny because while voices don't play during skip mode, sfx do and it's kind of hard to take Hikage seriously when he's in his evil mode but his walking sounds like someone chopping veggies really fast.
One aspect that really disappointed me was that there was only one real (labeled) bad ending. There are no endings where you all end up trapped in the psychedelicha forever because you can't find the last shard, there's no ending where you give up, no ending where, in a mansion with deadly and aggressive monsters in it, you get killed. That might sound weird and honestly maybe it is but the one bad ending simply being a choice of whether you open a book (bad end) or not (you get to continue playing) feels out of place. Yes some of the characters endings can absolutely be counted as bad ends but... They don't get you the achievement that triggers when you get the 'Bud' ending. (I am unsure as to why it is called that.)
The true endings are all nice. Again Yamato's feels the nicest simply because he gets to apologize to his brother and make amends. He also gets reassurance that no, he isn't an awful person just because he made an admittedly very awful decision as a child.
Karasuba gets shown that he's kind of awful and gets to grow into a better person with MC. Good luck with the relationship.
Monshiro gets to give MC her ribbon back and also gets told that no, you didn't deserve to die instead of Kagiha. Good for him. I'm still conflicted.
The additional scenes unlocked after certain endings were nice too. Seeing Karasuba confront Hikage about his identity was honestly really cool. Seeing what Hikage was up to scheming before the games events was nice. Seeing Kagiha and Monshiro interact was neat, especially since they don't really do that very much. Plus it adds lore.
And finally... The secret ending. You get to make the choice of actually staying in the mansion during the summer camp, keeping everyone safe and alive. It's definitely a nice addition after spending this whole game in danger and having to deal with issues. I honestly had started to wonder why they didn't do that. Like yes they're kids but... None of them had thought of it? Damn. Being able to not traumatize MC was great. Having to deal with four clingy guys? Not so much.
Here's the complete in game flowchart if anybody was curious as to what it looked like!
Tumblr media
In conclusion.... Yeah no I can't completely recommend this game. It kind of sucks, it's vita exclusive (I think?), and one of the mechanics forced on you (the butterfly hunt mini games) are kind of jank and annoying to replay and replay for points. And again this game is exceptionally mid. Not good mid, not bad mid, just true neutral mid. Anyways, remember: hooray! Hooray! Beniyuri-chan! Hooray! Hooray! Kagiha-kun!
2 notes · View notes
aismallard · 1 year ago
Text
The issue is "AI" is a branding term, largely riding off of science fiction talking about futuristic more-intuitive tooling. There is not a clear definition for what it is because it's not a technical term.
There are specific techniques and systems like LLMs (large language models) and diffusion models to generate images and the like, but it's not cleanly separated from other technology, that's absolutely true. It's also that predecessor systems also scraped training material without consent.
The primary difference here is in scale, in the sense of the quality of generated outputs being good enough that spambots and techbros and whoever use it, and in the sense that the general public is aware of these tools and they're not just used by the more technical, which have combined to create a new revolution in shitty practices.
Anyways I still maintain that the use of "AI" (and "algorithm") as general terms meant to apply to this specific kind of thing is basically an exercise in the public attempting to understand the harm from these shitty practices but only being given branding material to understand what this shit even is.
Like, whether something is "AI", in the sense of "artificial intelligence", is very subjective. Is Siri "AI"? Is Eliza "AI"? Is a machine-learning model that assists with, idk, color correction "AI"? What about a conventional procedural algorithm with no data training?
Remember, a lot of companies "use AI" but it could just be they're calling systems they're already using "AI" to make investors happy, or on the other end that they're feeding into the ChatGPT API for no reason! What they mean is intentionally unclear.
And the other thing too is "algorithm" is used in the same kind of way. I actually differentiate between capital-A "Algorithms" and lowercase-a algorithms.
The latter is simply the computer science definition, an algorithm is a procedure. Sorting names in a phonebook uses a sorting algorithm. A flowchart is an algorithm. A recipe is an algorithm.
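To make the lowercase-a sense concrete (a toy illustration, not from the post): sorting phonebook names is just an explicit procedure anyone could follow by hand, no training data or learned weights anywhere.

```python
# Insertion sort: the kind of algorithm a person could execute with
# index cards. Every step is fully specified in advance.
def sort_phonebook(names):
    sorted_names = []
    for name in names:
        i = 0
        # walk forward until we find where this name belongs
        while i < len(sorted_names) and sorted_names[i] < name:
            i += 1
        sorted_names.insert(i, name)
    return sorted_names

print(sort_phonebook(["Miller", "Adams", "Zhou", "Baker"]))
# -> ['Adams', 'Baker', 'Miller', 'Zhou']
```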
The other is the use usually found in the media and less technical discussions, where they use capital-A "Algorithm" to refer to shitty profit-oriented blackbox systems, usually some kind of content recommendation or rating system. Because I think these things are deserving of criticism I'm fine making a sub-definition for them to neatly separate the concepts.
My overall point is that language in describing these shitty practices is important, but also very difficult because the primary drivers of the terminology and language here are the marketeers and capitalists trying to profit off this manufactured "boom".
Tumblr media
I just have to share this beautiful thread on twitter about AI and Nightshade. AI bros can suck it.
17K notes · View notes
emacs-evil-mode · 29 days ago
Text
Personally not a fan of generative AI, but I'd like to play Devil's Advocate for a second
There is a code generation tool 99.9% of software developers use that is classified as a kind of AI. Most software developers aren't expected to be able to understand the code it generates, and the ones who are still use it near universally. Almost all of the code ever written has been written by these systems using prompts given by the developer.
This technology? Compilers.
They are classified as an "expert system": a type of AI that, given an input, follows a flowchart developed by experts to generate output or make decisions. They work by processing abstract symbols passed to them (code) and turning them into whatever machine code a particular computer's ISA speaks. This is a very complicated process that involves a ton of automated code optimization carried out symbolically. They are a very practical example of the fruits of early (non-ML) AI research.
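That "flowchart developed by experts" structure can be sketched as a rule table, peephole-optimizer style (the rules below are invented for illustration; real compilers carry vastly larger ones):

```python
# An "expert system" in miniature: decisions come from a fixed rule table
# written by experts, not from learned weights. Each rule is a
# (condition, action) pair, checked in order like a flowchart.
RULES = [
    (lambda expr: expr[0] == "mul" and expr[2] == 2,
     lambda expr: ("shl", expr[1], 1)),          # x * 2  ->  x << 1
    (lambda expr: expr[0] == "add" and expr[2] == 0,
     lambda expr: ("id", expr[1])),              # x + 0  ->  x
]

def optimize(expr):
    """Apply the first matching rule, exactly as the flowchart dictates."""
    for condition, action in RULES:
        if condition(expr):
            return action(expr)
    return expr                                   # no rule fired: unchanged

print(optimize(("mul", "x", 2)))   # ('shl', 'x', 1)
print(optimize(("add", "x", 0)))   # ('id', 'x')
print(optimize(("add", "x", 5)))   # ('add', 'x', 5)
```

The behavior is fully transparent: any output can be traced back to the one rule that produced it, which is exactly the property the post contrasts with LLMs.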
I used to think that automated code generation via LLMs would be much the same, another technology for simplifying human cognition by introducing a layer of abstraction. However, they don't have any transparency in their thought processes, and don't provide consistently accurate output. I would not trust an LLM code generator more than myself. I would trust a compiler.
Tumblr media
Ahh fuck.
26K notes · View notes