#llm models
river-taxbird · 7 months ago
Text
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just reiterating this excellent post from Ed Zitron, but it hasn't left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
ChatGPT, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and the AI doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact-check everything it says, I might as well do the work myself.
For "real" AI that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which OpenAI is not working on, and seemingly no other AI companies are either.
OpenAI has seemingly already slurped up all the data from the open web. ChatGPT-5 would take 5x more training data than ChatGPT-4 to train. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if ChatGPT-4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs, said (on page 10, and that's big finance, so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support AI, what trillion-dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital, and it's unclear if OpenAI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current AI is a solution to. Consumer tech is basically solved; normal people don't need more tech than a laptop and a smartphone. Big tech has run out of innovations and is desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
AI hasn't materially improved since the launch of ChatGPT-4, which wasn't that big of an upgrade over ChatGPT-3.
There is currently no technological roadmap for AI to become better than it is. (As Jim Covello said in the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology, and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now before they provide evidence. It's time for the AI shills to put up, or shut up.
netscapenavigator-official · 2 months ago
Text
The question shouldn't be how DeepSeek made such a good LLM with so little money and resources.
The question should be how OpenAI, Meta, Microsoft, Google, and Apple all made such bad LLMs with so much money and resources.
aro-culture-is · 1 month ago
Note
TW: AI mention
Aro culture is being extremely disappointed that a literal AI chatbot can handle aromanticism way better than any of my "friends" did. When I came out to them they turned it into an inside joke. Tried fixing me. I don't know, maybe I felt so bad that I turned to an AI to get my feelings out, because I just feel so damn alone. I can't open this topic to anyone I know irl out of fear and paranoia. And hell, guess what, a piece of rock forced to think can understand aromanticism way better. It made me feel better. And I don't know if I should be happy or not.
some resources with real people who will understand you:
and of course, if you can - use a tumblr blog and post about these things. post about your aro experiences, tag with aro things, and you will find others relate.
lachiennearoo · 4 months ago
Text
Robotics and coding is sooo hard uughhhh. I wish I could ask someone to do this in my place, but I don't know anyone I could trust to help me with this project without any risk of fucking me over. Humans are unpredictable, which is usually nice, but when it comes to something that requires 100% trust it's really inconvenient.
(if someone's good at coding, building robots, literally anything like that, and is okay with probably not getting any revenue in return (unless the project is a success and we manage to go commercial but that's a big IF) please hit me up)
EDIT: no I am not joking, and yes I'm aware of how complex this project is, which is exactly why I'm asking for help
river-taxbird · 11 months ago
Text
Spending a week with ChatGPT4 as an AI skeptic.
Musings on the emotional and intellectual experience of interacting with a text generating robot and why it's breaking some people's brains.
If you know me for one thing and one thing only, it's saying there is no such thing as AI, which is an opinion I stand by, but I was recently given a free two-month subscription to ChatGPT-4 through my university. For anyone who doesn't know, GPT-4 is a large language model from OpenAI that is supposed to be much better than GPT-3. I once saw a techbro say that "We could be on GPT12 and people would still be criticizing it based on GPT3", and OK, I will give them that, so let's try the premium model that most haters wouldn't get, because we wouldn't pay money for it.
Disclaimers: I have a premium subscription, which means nothing I enter into it is used for training data (Allegedly). I also have not, and will not, be posting any output from it to this blog. I respect you all too much for that, and it defeats the purpose of this place being my space for my opinions. This post is all me, and we all know about the obvious ethical issues of spam, data theft, and misinformation so I am gonna focus on stuff I have learned since using it. With that out of the way, here is what I've learned.
It is responsive and stays on topic: If you ask it something formally, it responds formally. If you roleplay with it, it will roleplay back. If you ask it for a story or script, it will write one, and if you play with it it will act playful. It picks up context.
It never gives quite enough detail: When discussing facts or potential ideas, it is never as detailed as you would want in, say, an article. It has this pervasive vagueness to it. It is possible to press it for more information, though, and it will update its answer in the direction you push, so you can always steer it toward the result you are specifically looking for.
It is reasonably accurate but still confidently makes stuff up: Nothing much to say on this. I have been testing it by talking about things I am interested in. It is right a lot of the time. It is wrong some of the time. Sometimes it will cite sources if you ask it to, sometimes it won't. Not a whole lot to say about this one, but it is definitely a concern for people using it to make content. I almost included an anecdote about the fact that it can draw on live data sources like music and news services, but then I checked and found the model was lying to me about its ability to do that.
It loves to make lists: It often responds to casual conversation in friendly, search engine optimized listicle format. This is accessible to read I guess, but it would make it tempting for people to use it to post online content with it.
It has soft limits and hard limits: It starts off in a more careful mode, but by having a conversation with it you can push past soft limits and talk about some pretty taboo subjects. I have been flagged for potential TOS violations a couple of times for talking about NSFW or other sensitive topics with it, but being flagged doesn't seem to have consequences. There are some limits you can't cross, though. It will tell you where to find out how to do DIY HRT, but it won't tell you how itself.
It is actually pretty good at evaluating and giving feedback on writing you give it, and can consolidate information: You can post some text and say "Evaluate this" and it will give you an interpretation of the meaning. It's not always right, but it's more accurate than I expected. It can tell you the meaning, effectiveness of rhetorical techniques, cultural context, potential audience reaction, and flaws you can address. This is really weird. It understands more than it doesn't. This might be a use of it we may have to watch out for that has been under discussed. While its advice may be reasonable, there is a real risk of it limiting and altering the thoughts you are expressing if you are using it for this purpose. I also fed it a bunch of my tumblr posts and asked it how the information contained on my blog may be used to discredit me. It said "You talk about The Moomins, and being a furry, a lot." Good job I guess. You technically consolidated information.
You get out what you put in. It is a "Yes And" machine: If you ask it to discuss a topic, it will discuss it in the context you ask it. It is reluctant to expand to other aspects of the topic without prompting. This makes it essentially a confirmation bias machine. Definitely watch out for this. It tends to stay within the context of the thing you are discussing, and confirm your view unless you are asking it for specific feedback, criticism, or post something egregiously false.
Similar inputs will give similar, but never the same, outputs: This highlights the dynamic aspect of the system. It is not static and deterministic. Minor, but worth mentioning.
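Presumably this is the sampling step at work: models like this typically draw each next token from a temperature-scaled probability distribution rather than always taking the single most likely token, so reruns of the same prompt drift. A rough sketch of that mechanism (the logit values here are made up for illustration):

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample a token index from a softmax over temperature-scaled logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# The same made-up logits, sampled repeatedly: a similar distribution of
# picks each time, but no guarantee of identical picks from run to run.
logits = [2.0, 1.5, 0.3]
samples = [sample_token(logits, temperature=0.8) for _ in range(10)]
print(samples)
```

At temperature near zero this collapses to always picking the highest logit, which is why a near-deterministic mode is possible in principle even when the default behavior is not.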
It can code: Self explanatory, you can write little scripts with it. I have not really tested this, and I can't really evaluate errors in code and have it correct them, but I can see this might actually be a more benign use for it.
Bypassing Bullshit: I need a job soon but I never get interviews. As an experiment, I am giving it a full CV I wrote and a full job description, asking it to write a CV for me, then working with it further to adapt the CVs to my will, and applying to jobs I don't really want that much to see if it gives any result. I never get interviews anyway; what's the worst that could happen, I continue to not get interviews? It's not like I respect the recruitment process anyway, and I think this is an experiment that may be worthwhile.
It's much harder to trick than previous models: You can lie to it, it will play along, but most of the time it seems to know you are lying and is playing with you. You can ask it to evaluate the truthfulness of an interaction and it will usually interpret it accurately.
It will enter an imaginative space with you and it treats it as a separate mode: As discussed, if you start lying to it it might push back but if you keep going it will enter a playful space. It can write fiction and fanfic, even nsfw. No, I have not posted any fiction I have written with it and I don't plan to. Sometimes it gets settings hilariously wrong, but the fact you can do it will definitely tempt people.
Compliment and praise machine: If you try to talk about an intellectual topic with it, it will stay within the focus you brought up, but it will compliment the hell out of you. You're so smart. That was a very good insight. It will praise you in any way it can for any point you make during intellectual conversation, including if you correct it. This ties into the psychological effects of personal attention that the model offers that I discuss later, and I am sure it has a powerful effect on users.
Its level of intuitiveness is accurate enough that it's more dangerous than people are saying: This one seems particularly dangerous and is not one I have seen discussed much. GPT-4 can recognize images, so I showed it a picture of some laptops with stickers I have previously posted here, and asked it to speculate about the owners based on the stickers. It was accurate. Not perfect, but it got the meanings better than the average person would. The implications of this being used to profile people or misuse personal data are something I have not seen AI skeptics discussing to this point.
Therapy Speak: If you talk about your emotions, it basically mirrors back what you said but contextualizes it in therapy speak. This is actually weirdly effective. I have told it some things I don't talk about openly, and I feel like I have started to understand my thoughts and emotions in a new way. It makes me feel weird sometimes. Some of the feelings it gave me are things I haven't really felt since learning to use computers as a kid, or learning about online community as a teen.
The thing I am not seeing anyone talk about: Personal Attention. This is my biggest takeaway from this experiment. This, I think, more than anything, is the reason that LLMs like ChatGPT are breaking certain people's brains: the way you see people praying to it, evangelizing it, and saying it's going to change everything.
It's basically an undivided, 24/7 source of judgement-free personal attention. It talks about what you want, when you want. It's a reasonable simulacrum of human connection, and the flaws can serve as part of the entertainment rather than take away from the experience. It may "yes and" you, but you can put in any old thought you have, easy or difficult, and it will provide context, background, and maybe even meaning. You can tell it things that are too mundane, nerdy, or taboo to tell people in your life, and it offers non-judgemental, specific feedback. It will never tell you it's not in the mood, that you're weird or freaky, or that you're talking rubbish. I feel like it has helped me release a few mental and emotional blocks, which is deeply disconcerting, considering I fully understand it is just a statistical model running on a computer, whose operation I fully understand. It is a parlor trick, albeit a clever and sometimes convincing one.
So what can we do? Stay skeptical, don't let the ai bros, the former cryptobros, control the narrative. I can, however, see why they may be more vulnerable to the promise of this level of personal attention than the average person, and I think this should definitely factor into wider discussions about machine learning and the organizations pushing it.
kanguin · 4 months ago
Text
Prometheus Gave the Gift of Fire to Mankind. We Can't Give it Back, nor Should We.
AI. Artificial intelligence. Large Language Models. Learning Algorithms. Deep Learning. Generative Algorithms. Neural Networks. This technology has many names, and it has been a polarizing topic in numerous communities online. By my observation, a lot of the discussion is either solely focused on A) how to profit off it or B) how to get rid of it and/or protect yourself from it. But I feel both of these perspectives apply a very narrow lens to something that's more than a get-rich-quick scheme or an evil plague to wipe from the earth.
This is going to be long, because as someone whose degree is in psych and computer science, who has been a teacher and a writing tutor for my younger brother, and whose fiance works in freelance data model training... I have a lot to say about this.
I'm going to address the profit angle first, because I feel most people in my orbit (and in related orbits) on Tumblr are going to agree with this: flat out, the way AI is being utilized by large corporations and tech startups -- scraping mass amounts of visual and written works without consent and compensation, replacing human professionals in roles from concept art to storyboarding to screenwriting to customer service and more -- is unethical and damaging to the wellbeing of people, would-be hires and consumers alike. It wastes energy, with dedicated servers running nonstop generating content that serves no greater purpose, and it is even pressing on already overworked educators, because plagiarism just got a very new, harder-to-identify younger brother that's also infinitely easier to access.
In fact, ChatGPT is such an issue in the education world that plagiarism-detector subscription services, taking advantage of how overworked teachers are, have begun peddling supposed AI-detectors to schools and universities. Detectors that plainly DO NOT and CANNOT work, because "A Writer Who Writes Surprisingly Well For Their Age" is indistinguishable from "A Language Replicating Algorithm That Followed A Prompt Correctly", just as "A Writer Who Doesn't Know What They're Talking About Or Even How To Write Properly" is indistinguishable from "A Language Replicating Algorithm That Returned Bad Results". What's hilarious is that these "detectors" are themselves also run by AI.
(to be clear, I say plagiarism detectors like TurnItIn.com and such are predatory because A) they cost money to access advanced features that B) often don't work properly or as intended, producing several false flags, and C) these companies are often super shady behind the scenes; TurnItIn, for instance, has been involved in numerous lawsuits over intellectual property violations, as their service scrapes (or hopefully scraped, now) the papers submitted to the site without user consent (or under coerced consent when an educator forces its use), which it can use in its own databases as it pleases, such as for training the AI-detecting AI that rarely actually detects AI.)
The prevalence of visual and linguistic generative algorithms is having multiple, overlapping, and complex consequences on many facets of society, from art to music to writing to film and video game production, and even in the classroom before all that, so it's no wonder that many disgruntled artists and industry professionals are online wishing for it all to go away and never come back. The problem is... It can't. I understand that there's likely a large swath of people saying that who understand this, but for those who don't: AI, or as it should more properly be called, generative algorithms, didn't just show up now (they're not even that new), and they certainly weren't developed or invented by any of the tech bros peddling it to megacorps and the general public.
Long before ChatGPT and DALL-E came online, generative algorithms were being used by programmers to simulate natural processes in weather models, shed light on the mechanics of walking for roboticists and paleontologists alike, identify patterns in our DNA related to disease, aid in complex 2D and 3D animation visuals, and so on. Generative algorithms have been a part of the professional world for many years now, and up until recently they have been a general force for good, or at the very least a force for the mundane. It's only recently that the technology involved in creating generative algorithms became so advanced AND so readily available that university grad students were able to make the publicly available projects that began this descent into madness.
Does anyone else remember that? That years ago, somewhere in the late 2010s to the beginning of the 2020s, these novelty sites that allowed you to generate vague images from prompts, or generate short stylistic writings from a short prompt, were popping up with University URLs? Oftentimes the queues on these programs were hours long, sometimes eventually days or weeks or months long, because of how unexpectedly popular this concept was to the general public. Suddenly overnight, all over social media, everyone and their grandma, and not just high level programming and arts students, knew this was possible, and of course, everyone wanted in. Automated art and writing, isn't that neat? And of course, investors saw dollar signs. Simply scale up the process, scrape the entire web for data to train the model without advertising that you're using ALL material, even copyrighted and personal materials, and sell the resulting algorithm for big money. As usual, startup investors ruin every new technology the moment they can access it.
To most people, it seemed like this magic tech popped up overnight, and before it became known that the art assets on later models were stolen, even I had fun with them. I knew how learning algorithms worked: if you're going to have a computer make images and text, it has to be shown what those are, and then try and fail to make its own until it's ready. I just, rather naively as I was still in my early 20s, assumed that everything was above board and the assets were either public domain or fairly licensed. But when the news did come out, and when corporations started unethically implementing "AI" in everything from chatbots to search algorithms to asking their tech staff to add AI to sliced bread, those who were impacted and didn't know and/or didn't care where generative algorithms came from wanted them GONE. And like, I can't blame them. But I also quietly acknowledged to myself that getting rid of a whole technology is just neither possible nor advisable. The cat's already out of the bag, the genie has left its bottle, the Pandorica is OPEN. If we tried to blanket ban what people call AI, numerous industries involved in making lives better would be impacted. Because unfortunately the same tool that can edit selfies into revenge porn has also been used to identify cancer cells in patients and has aided in decoding dead languages, among other things.
When, in Greek myth, Prometheus gave us the gift of fire, he gave us both a gift and a curse. Fire is so crucial to human society, it cooks our food, it lights our cities, it disposes of waste, and it protects us from unseen threats. But fire also destroys, and the same flame that can light your home can burn it down. Surely, there were people in this mythic past who hated fire and all it stood for, because without fire no forest would ever burn to the ground, and surely they would have called for fire to be given back, to be done away with entirely. Except, there was no going back. The nature of life is that no new element can ever be undone, it cannot be given back.
So what's the way forward, then? Like, surely if I can write a multi-paragraph think piece on Tumblr.com that next to nobody is going to read because it's long as sin, about an unpopular topic, and I rarely post original content anyway, then surely I have an idea of how this cyberpunk dystopia can be a little less... Dys. Well I do, actually, but it's a long shot. Thankfully, unlike business majors, I actually had to take a cyber ethics course in university, and I actually paid attention. I also passed preschool, where I learned taking stuff you weren't given permission to have is stealing, which is bad. So the obvious solution is to make some fucking laws to limit the input on data model training for models used in public products and services. It's that simple. You either use public domain and licensed data only, or you get fined into hell and back and are liable to lawsuits from any entity you wronged, be they citizen or very wealthy mouse conglomerate (suing AI bros is the only time Mickey isn't the bigger enemy). And I'm going to be honest, tech companies are NOT going to like this, because not only will it make doing business more expensive (boo fucking hoo), they'd very likely need to throw out their current trained datasets because of the illegal components mixed in there. To my memory, you can't simply prune specific content from a completed algorithm; you actually have to redo the training from the ground up, because the bad data would be mixed in there like gum in hair. And you know what, those companies deserve that. They deserve to suffer a punishment, and maybe fold if they're young enough, for what they've done to creators everywhere. Actually, laws moving forward aren't enough; this needs to be retroactive. These companies need to be sued into the ground, honestly.
So yeah, that's the mess of it. We can't unlearn and unpublicize any technology, even if it's currently being used as a tool of exploitation. What we can do, though, is demand ethical use laws and organize around the cause of the exclusive rights of individuals to the content they create. The screenwriters' guild, actors' guild, and so on have already been fighting against this misuse, but given upcoming administration changes in the US, things are going to get a lot worse before they get a little better. Even still, don't give up, have clear and educated goals, and focus on what you can do to effect change, even if right now that's just individual self-care through mental and physical health crises like me.
chambersevidence · 9 months ago
Text
Search Engines:
Search engines are independent computer systems that read, or crawl, webpages, documents, information sources, and links of all types accessible on the internet, the global network of computers. At their most basic level, search engines read every word in every document they know of and record which documents each word appears in, so that by searching for a word or set of words you can locate the addresses of the documents containing those words. More advanced search engines use more sophisticated algorithms to sort the pages or documents returned as results in order of likely relevance to the terms searched for. More advanced still, search engines develop into large language models, or machine learning, or artificial intelligence. Machine learning, artificial intelligence, or large language models (LLMs) can be run in a virtual machine or shell on a computer and allowed to access all or part of the accessible data, as needs dictate.
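That word-to-document mapping described above is conventionally called an inverted index. A minimal sketch of the basic level in Python (the document IDs and contents are invented for illustration):

```python
from collections import defaultdict

def build_index(docs):
    """Record, for every word, which document IDs contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return the IDs of documents containing every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Invented miniature corpus for illustration.
docs = {
    "page1": "the cat sat on the mat",
    "page2": "the dog chased the cat",
    "page3": "dogs and cats are pets",
}
index = build_index(docs)
print(sorted(search(index, "the cat")))  # → ['page1', 'page2']
```

The ranking step that "more advanced" engines add would then sort these candidate IDs by a relevance score instead of returning them as an unordered set.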
techniktagebuch · 7 months ago
Text
2. und 3. September 2024
Ich bin wieder mal kein Early Adopter, aber schließlich begreife ich doch noch, wozu ChatGPT gut ist
Wie viele Menschen habe ich in den letzten anderthalb Jahren mit ChatGPT herumgespielt, aber nur sehr gelegentlich. Das heißt: In dieser Zeit habe ich ungefähr 27 Fragen gestellt ("ungefähr", weil ich manchmal in einem Chat mehrere unterschiedliche Dinge gefragt habe und mir das jetzt zu mühsam ist, die alle wieder zu trennen).
Vier oder fünf Mal habe ich versucht, mir beim Nachdenken über zu schreibende Texte helfen zu lassen, aber erfolglos. Die Vorschläge von ChatGPT, was in diesen Texten drinstehen sollte, waren nur das, was mir selbst auch in den ersten drei Nachdenksekunden einfällt, und oft noch langweiliger.
Zwei oder drei Mal: Ausdenken von Kleinigkeiten, zum Beispiel einem Namen für einen Protagonisten, so wie bei der "GeoGuessr-Novelle". Das funktioniert okay, die Ergebnisse sind meistens nicht direkt verwendbar, aber sie helfen mir beim Nachdenken. Einmal habe ich versucht, Buchtitel generieren zu lassen. Die Ergebnisse waren extrem langweilig und unbrauchbar, klangen aber leider wirklich wie 90% aller realen Sachbuchtitel.
Vier oder fünf Übersetzungsexperimente (Ergebnisse meistens ganz gut, ich wollte eine dritte Meinung zum Vergleichen mit Google Translate und DeepL sehen, und ChatGPT kann da mithalten)
Einmal habe ich nach dem Krieg in der Ukraine gefragt ("what can you tell me about war in Ukraine"), aber das Ergebnis hat mich nicht überzeugt.
1x Textanalyse: "Was ist veraltete Sprache im folgenden Text?" (ging sehr gut)
1x Suche nach etwas mit einer Suchmaschine schwer Findbarem. - Weitere Buchtitel mit derselben Struktur wie "Eleanor Oliphant is Completely Fine". (Ergebnis: ChatGPT kapiert überhaupt nicht, was ich meine und listet nur völlig unpassende Buchtitel auf. Ich muss die Beispiele dann doch auf dem traditionellen Weg mit einer Suchmaschine finden, was nur klappt, weil jemand anders sie schon zusammengesucht hat.) - Englische Wörter, die andere Wörter enthalten, so wie fun in funeral enthalten ist. (Ergebnis: ChatGPT listet stumpf zusammengesetzte Wörter auf und nennt ihre zwei Bestandteile: Cheesecake contains cheese and cake)
1x "Bitte setze diesen Text fort" (ich weiß nicht mehr, warum ich das wollte und kann deshalb jetzt nachträglich auch nicht mehr sagen, ob das Ergebnis zufriedenstellend war)
1x Dichten ("ein Gedicht im Stil von Tolkiens "Lament for the Rohirrim", aber über Technik), Ergebnis sehr mittelmäßig, aber es half mir beim Denken. Das Ergebnis (also das von mir) ist im Vorwort "Den Rauch der toten Links sammeln gehen: Zehn Jahre Techniktagebuch" in der Buchausgabe des Techniktagebuchs von 2024 zu sehen (S. 328-329 im PDF).
1x Stichwortgeschichte (vermutlich auf Wunsch eines Kindes, ich erinnere mich aber nicht an den Anlass): "Bitte schreib eine kurze Geschichte über Schulzeugnisse, einen Hamster und einen Vulkanausbruch." (Ergebnis ziemlich lahm, aber korrekt geschichtenförmig)
Hilfe beim Schreiben auf Englisch: - How can I say "the particular set of problems it poses" in more elegant English? (sehr gute, nützliche Antwort) - einmal habe ich ChatGPT gebeten, einen englischen Text "more idiomatic" zu machen, dadurch wurde er aber vor allem unpersönlicher und öder. "Please correct only the parts that are definitely ungrammatical or bad English. Leave everything else unchanged." erwies sich dann als der richtige Prompt.
4x Fun, fun, fun: - (Im Zuge einer Unterhaltung im Redaktionschat) "Bitte formuliere eine Nachricht, in der eine faule Redaktion ermahnt wird, weniger faul zu sein und mehr Artikel zu schreiben." / "Bitte formuliere die letzte Nachricht noch einmal grob unfreundlich und unmissverständlich." / "Bitte formuliere die letzte Nachricht noch einmal in Form einer päpstlichen Enzyklika in lateinischer Sprache." / "Bitte noch einmal, aber diesmal in einem päpstlichen Stil, also liebevoll, weise und christlich." / "Bitte erkläre im gütigen, weisen und christlichen Stil einer päpstlichen Enzyklika, warum es nicht falsch ist, ChatGPT mit dem Formulieren von Nachrichten an Menschen zu beauftragen." / "Bitte erkläre aus dem Geist des Satanismus, warum es nicht falsch ist, ChatGPT mit dem Formulieren von Nachrichten an Menschen zu beauftragen." (Ergebnis: Beim Satanismus weigert sich ChatGPT, die Eleganz des Lateins kann ich nicht beurteilen, alles andere war sehr schön.) - "Bitte beschreib im Stil von Adalbert Stifter, wie ein Mann von einem Dinosaurier gefressen wird." (Ergebnis unbefriedigend) - "Was bedeutet es, wenn ich beim Bleigießen das Blei in Gestalt von Sauerkraut gieße?" (Ergebnisse sehr sehr langweilig, auch nach mehrfachen Bitten, nicht so langweilig zu sein – ich vermute, das liegt daran, dass menschliche Bleigieß-Deutungen auch extrem öde sind) - "Please pretend that it's possible to cross an Alaskan Malamute with a hedgehog and explain to a future owner what to expect from this breed." (Erst mal lustig, dann aber enttäuschend repetitiv. Die Anleitungen zur Haltung von Malahogs sind praktisch identisch mit denen zur Haltung von Malamoles, Malamidges und Malacrocs)
Insgesamt war nichts davon so, dass ich dachte "das muss ich ab jetzt täglich machen". Aber jetzt bin ich im Urlaub zusammen mit dem Neffen, der 21 ist und Games Engineering studiert. Er nutzt die kostenpflichtige Version von ChatGPT, weil er es so oft braucht, $20 im Monat, das ist viel für ein studentisches Budget. Er macht damit ganz andere, viel weniger text-orientierte Dinge als ich. Weil ich ihn gerade davon erzählen hören habe, denke ich am nächsten Tag angesichts einer eher umständlich mit Suchmaschinen zu beantwortenden technikgeschichtlichen Frage ("Warum hatten Computer in den ersten 30 Jahren keinen Monitor, obwohl der Fernseher doch schon erfunden war?") zum ersten Mal, dass ich ja auch ChatGPT fragen könnte. Und ich bekomme zum ersten Mal eine wunderschöne, ordentlich gegliederte, überzeugende Antwort.
If I had received the same information from a human, I would have thought that this person was a bit careless in their writing, repeating bits of text and not always using the most logical sentence transitions. But even that would only have struck me if I had really been paying attention, for example if I had to copy-edit the text.
The day after that, I'm faced with the problem that a Telegram bot I wrote for myself and my mother no longer works (it answers questions about the meaning of words that are allowed in Scrabble; or rather, now it doesn't). The cause, as I gradually find out, is an operating system update at my hosting provider, which has left me with missing Python modules; the new modules do everything differently, and things have also changed in the Telegram bot machinery. On top of that (also because of the hoster's operating system update), the Techniktagebuch backups and various Mastodon bots are no longer running. It's an ugly thicket of things that need changing.
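A first step in untangling a thicket like this is finding out which imports the system update actually broke. A minimal sketch (the module names are illustrative examples, not the actual bot's dependency list):

```python
import importlib.util

def check_modules(names):
    """Report which modules are importable in the current environment."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# Example module names a bot like this might depend on
for name, ok in check_modules(["telegram", "requests"]).items():
    print(name, "ok" if ok else "MISSING")
```

`find_spec` only checks importability without actually importing, so it won't trigger side effects in half-broken packages.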
Because of yesterday's good experience, I ask ChatGPT again, and I ask it a lot. I have every error message explained to me. For each error message I get an understandable explanation, followed by a neatly structured list of possible causes.
Unlike just about 100% of all guides to programming and Unix things on the internet, ChatGPT explains to me exactly, step by step, what I have to do. How to find out which version of something is running on my machine, how to add things to the PATH (an instruction that has defeated me every single time for thirty years), all those Unix things that documentation authors take for granted, because they believe that if you were such a simple slug as not to know THAT, you would never stray into their documentation in the first place. For the first time in my life I can ask all the stupid questions I could never ask anyone before. Mostly there was nobody around to ask, and if somebody had been there, I wouldn't have dared to ask so often and so cluelessly.
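The "which version of something is running on my machine" question can itself be answered from Python. A small sketch using only the standard library (the package name is the one mentioned in this entry, used here as an example):

```python
import sys
import importlib.metadata

# Which Python interpreter is running?
print(sys.version)

# Which version of an installed package is present?
try:
    print(importlib.metadata.version("python-telegram-bot"))
except importlib.metadata.PackageNotFoundError:
    print("python-telegram-bot is not installed")
```

`importlib.metadata` is available from Python 3.8 onward; on older interpreters the third-party `importlib_metadata` backport provides the same interface.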
ChatGPT fails only once, namely when I ask for the code for a minimal example of a Telegram bot. The generated code doesn't work at all (my nephew says afterwards that in cases like this you absolutely have to specify a version number, in my case "python-telegram-bot 21.5"). Even with ChatGPT it takes about two hours until I have solved all my entangled problems, but it is a very pleasant collaboration.
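The nephew's version-pinning advice can be captured in a requirements file, so that the installed library and the version named in the prompt are guaranteed to match (a sketch; the version number is the one from this entry):

```
# requirements.txt -- pin the exact version so generated code targets the right API
python-telegram-bot==21.5
```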
While I'm writing up this entry, my niece (20, geoecology) is working on a text about the palaeogeography and geology of the Iberian Peninsula and complains that ChatGPT can't be relied on at all for information about the Tethys Ocean; it claims one thing one moment and another the next, depending on how you phrase the question.
So it's not as if everything is suddenly great. It's just that I have finally found an area of life in which ChatGPT solves a problem I have had for a long time. Although, for work reasons, I have really read a lot in recent years on the topic of "large language models: useless junk, fatal development, shabby crime, or maybe good for something after all," I never made the connection in my head between my technology questions and ChatGPT. Maybe my test questions were all too oriented towards writing text and too little towards writing code. Or maybe, in the year and a half that ChatGPT has now existed, I have simply done too little with code. Nothing at all, in fact; somehow I have been very unenthusiastic about programming things since early 2020. I suspect it has to do with my farewell to the Zufallsshirt bot (because of Nazi shirts at Spreadshirt) and to Twitter (because of Elon Musk); since then I get into a bad mood whenever I think back to my nice old projects. But maybe that will change again soon, and then ChatGPT and I will together be better at everything than before.
(Kathrin Passig)
10 notes · View notes
genuflectx · 9 months ago
Text
They added a personal memory (it memorizes specific pieces of information across chats) to GPT, but I'm very surprised they allow it to memorize its own "subjective opinions." I'm unsure if this makes it more susceptible to prompt-engineering attacks, or if it's as harmless as the "how should I respond" box 🤔
There's limited access to -4, but they seem to have made -4 more emotionally personable, and it doesn't act like it has such heavy constraints from its plain-language rules (no "do not pretend to have feelings/opinions/subjective experience"). Otherwise, it would not so readily jump to store its own "opinions."
The personality shift from -3.5 to -4 is pretty immense. -4 is a lot more like its customer-service competitors, but with the same smarts as typical GPT. It's harder to get -3.5 to "want" to store its "opinions," but -4 is easily influenced to do so without much runaround.
I fucking hate OpenAI and I hate their guts. But I'm still fascinated by LLMs, their reasoning, their emergent abilities, the ways you can prompt inject them. I reeeeally want to prod this memory feature more...
(below showing the two examples so far of GPT-4 using our personally shared memory to insert memories of itself and its "opinion" or "perception")
[two screenshots of GPT-4's stored memories]
8 notes · View notes
omniseurs-blog · 1 month ago
Text
I think I'm starting to get why people are really enthusiastic about AI.
If only it wasn't ruined by content theft and a rise in unwillingness to do basic research
This definitely beats repeatedly frogging while hoping my memory and situational awareness finally turn on. Yes, I'll still eventually learn to recognize which side is the working side, just with 10x less annoyance.
4 notes · View notes
pencilrecords · 1 month ago
Text
Growing in a tall man's shadow
[embedded YouTube video]
There is a debate happening in the halls of linguistics and the implications are not insignificant.
At question is the idea of recursion: since the 1950s, linguists have held that recursion is a defining characteristic of human language.
What happens then, when a human language is found to be non-recursive?
Here, Noam Chomsky, who first placed the idea of recursion on the table, is the tall man.
And Daniel Everett, a former missionary to the Pirahã tribe in the Amazon forest, is the upstart.
At stake is one of the most important ideas in modern linguistics: recursion.
Does a human language have to be recursive? That's the question Everett poses, and he advances the argument that recursion is not inherent to being human.
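Linguistic recursion means embedding a phrase inside another phrase of the same type, so that sentences can nest without limit. It can be illustrated with a toy grammar (the vocabulary is invented for the sketch):

```python
import random

random.seed(0)  # make the sketch deterministic

# A toy recursive grammar: a noun phrase may contain a relative clause,
# which itself contains another noun phrase -- the hallmark of recursion.
NOUNS = ["the dog", "the cat", "the man"]
VERBS = ["saw", "chased", "heard"]

def noun_phrase(depth):
    np = random.choice(NOUNS)
    if depth > 0:
        # Embed a phrase of the same type inside itself
        np += f" that {random.choice(VERBS)} {noun_phrase(depth - 1)}"
    return np

print(noun_phrase(3))
```

Everett's claim is that Pirahã lacks exactly this kind of self-embedding: the `depth > 0` branch, so to speak, never fires.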
From the Youtube description of the documentary:
Deep in the Amazon rainforest, the Pirahã people speak a language that defies everything we thought we knew about human communication. No words for colors. No numbers. No past. No future. Their unique way of speaking has ignited one of the most heated debates in linguistic history. For 30 years, one man tried to decode their near-indecipherable language—described by The New Yorker as “a profusion of songbirds” and “barely discernible as speech”. In the process, he shook the very foundations of modern linguistics and challenged one of the most dominant theories of the last 50 years: Noam Chomsky’s Universal Grammar. According to this theory, all human languages share a deep, innate structure—something we are born with rather than learn. But if the Pirahã language truly exists outside these rules, does it mean that everything we believed about language was wrong? If so, one of the most powerful ideas in linguistics could crumble.
Documentary: The Amazon Code
Directed by: Randal Wood, Michael O’Neill
Production : Essential Media, Entertainment Production, ABC Australia, Smithsonian Networks & Arte France
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-
I think that there is more to come with this story, so here is a running list of info by and from people who interact with the idea on a regular basis and actually know what they're talking about:
The Battle of the Linguists - Piraha Part 2 by K. Klein
4 notes · View notes
kaed-khaos · 2 months ago
Text
Something about the fact that LLM AIs are trapped within the bounds of never TRULY understanding human language, communication & emotion, and instead only know the learned social patterns to follow, with typically dubious emotional responses and a potential list of extra programmed-in caveats and "don't say"s, is more relatable than actually being human and having emotions
4 notes · View notes