#OpenAI language
Text
Natural Language AI: The Future of Human-Machine Interaction
Technology has been evolving rapidly, and Natural Language AI stands out as one of the most revolutionary areas, changing how humans interact with machines. The ability to understand, interpret, and respond to human language has opened the door to remarkable applications across many sectors. In this article, we will explore the essence of Natural Language AI, how it works, its applications, and the…
#AI and language models#AI chatbots#AI content creation#AI for language#AI for text#AI language tools#AI natural language apps#AI speech analysis#AI text analysis#AI text generation#AI writing tools#AI-powered NLP#GPT and NLP#Language AI#Machine learning NLP#Natural Language AI#Natural Language AI tools#Natural Language models#Natural Language Processing#NLP AI#NLP algorithms#NLP applications#NLP for beginners#NLP in business#NLP programming#NLP software#NLP trends#NLP tutorial#OpenAI language#Text AI tools
0 notes
Text
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just reiterating this excellent post from Ed Zitron, but it hasn't left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
ChatGPT, the industry-leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and the AI doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact-check everything it says, I might as well do the work myself.
For "real" AI that does know what is true to exist, we would have to discover new concepts in psychology, math, and computing, which OpenAI is not working on, and seemingly no other AI companies are either.
OpenAI has seemingly already slurped up all the data from the open web. ChatGPT 5 would take 5x more training data than ChatGPT 4 to train. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if ChatGPT 4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs, said (on page 10, and that's big finance so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support AI, what trillion-dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital, and it's unclear if OpenAI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current AI is a solution to. Consumer tech is basically solved; normal people don't need more tech than a laptop and a smartphone. Big tech has run out of innovations, and they are desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
AI hasn't materially improved since the launch of ChatGPT 4, which wasn't that big of an upgrade over 3.
There is currently no technological roadmap for AI to become better than it is. (As Jim Covello said in the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology, and nobody has indicated that a way to solve them is in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now until they provide evidence. It's time for the AI shills to put up, or shut up.
5K notes
Text
The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text. So when they are provided with a database of some sort, they use this, in one way or another, to make their responses more convincing. But they are not in any real way attempting to convey or transmit the information in the database. As Chirag Shah and Emily Bender put it: “Nothing in the design of language models (whose training task is to predict words given context) is actually designed to handle arithmetic, temporal reasoning, etc. To the extent that they sometimes get the right answer to such questions is only because they happened to synthesize relevant strings out of what was in their training data. No reasoning is involved […] Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language” (Shah & Bender, 2022). These models aren’t designed to transmit information, so we shouldn’t be too surprised when their assertions turn out to be false.
ChatGPT is bullshit
4K notes
Text
AI will be a slave to capitalism just like everyone else
People in software talk a lot about AI "alignment." The idea is that, when creating an algorithm that self-learns, you need some kind of test so that the algorithm can know what behavior is desirable and what behavior is undesirable. The whole point of alignment is to make sure that these algorithms have a good test in place that "aligns" with our values.
A classic example is The Stamp Collecting Device. Imagine an algorithm that can send any kind of data over the internet, and that optimizes itself and changes the data it sends in order to collect stamps. The test is simple: more stamps = better. This could start off benign, sending in bids for stamps on eBay or something. But before long, the algorithm might start sending emails to other human stamp collectors, getting them to mail it their stamps. Maybe it hacks people's computers, encrypts their data, and refuses to decrypt it until it receives stamps in the mail. Maybe it hacks the nuclear codes and threatens to blow up the entire world unless the president sends it all the stamps that the USPS can produce!
The problem with this example is that it misses the fact that all of the useful AI models today are created by massive corporations, or at least by non-profits connected to massive corporations. Either way, AI will be created and used for one reason and one reason only: to make a profit. And collecting all the stamps in the world would not be profitable.
So while incel software engineers worry about their bots taking over the world terminator style, actual scammers *right now* use AI to mimic people's voices over the phone, generate fake product reviews, or send out massive numbers of spam emails. While Joe Schmoe worries about AI taking his job, marketing teams are already trying to figure out how they can AI-generate personalized advertisements that will perfectly target every individual who sees them, and optimize (exclusively) for click-through rate.
I have no idea what kinds of impacts AI will have on the world, if I had to guess I'd say it'll be a bit of a mixed bag. But I can say for certain that whatever problems AI does create will just be normal capitalism/society problems, like the ones we already have, not anything massively ground-breaking or society destroying.
4 notes
Text
Understand the key differences between SearchGPT and ChatGPT. Here we explain SearchGPT vs ChatGPT, comparing their features side by side for an easy overview.
Also, check this detailed blog about SearchGPT and the Future of Digital Marketing to understand the purpose and features of SearchGPT and how it will affect the future of marketing.
4 notes
Text
proof that LLMs censor leftist speech, not rightwing speech
microsoft copilot refuses to write a short story with me as the protagonist!!!!!
#queer fiction#disabled representation#queer representation#transbian#trans lit#llm#large language models#microsoft copilot#chatgpt#short story#literature#lit#fantasy#high fantasy#medieval fantasy#body positive#fat representation#ai#ai art#ai generated#censorship#tourette syndrome#autism#moderate support needs#openai#free speech#cisheteronormativity#trans women#representation#intersectionality
5 notes
Text
Cosine Similarity: for checking the similarity of documents, etc.
Cosine similarity is a measure of the similarity between two documents, texts, strings, etc.
It works by representing each text as a vector in n-dimensional space and then measuring the angle between these vectors; the similarity is the cosine of that angle.
If the texts are completely similar, the angle between the vectors is zero, so the cosine similarity is cos(0°) = 1.
If the texts are completely dissimilar, the vectors are perpendicular, so the cosine similarity is cos(90°) = 0.
If the texts are completely opposite, the vectors point in opposite directions, so the cosine similarity is cos(180°) = -1.
The cosine similarity, mathematically, is given by:
cos(θ) = ΣAiBi / (√(ΣAi²) × √(ΣBi²))
where A and B are the two vectors and θ is the angle between them.
Let's see an example:
Doc1 = "this is the first document" Doc2 = "this document is second in this order"
Vector representation of these documents: Doc1 = [1,1,1,0,1,1,0,0] Doc2 = [1,0,1,1,1,2,1,1]
ΣAiBi = (1*1)+(1*0)+(0*1)+(1*1)+(0*1)+(0*1)+(1*0)+(1*2) = 4 √(ΣAi)^2 = √(1+1+0+1+0+0+1+1) = √5 √(ΣBi)^2 = √(1+0+1+1+1+1+0+4) = √9
Cosine similarity = 4/(√5*√9) = 0.59
Cosine similarity is often a better metric than Euclidean distance for text: two documents can be far apart in Euclidean distance (for example, because one is much longer than the other), yet their vectors can still point in a similar direction, which means they are close to each other in terms of their content.
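As a quick sanity check, here is a minimal Python sketch (not part of the original example; it assumes scikit-learn and NumPy are installed) that reproduces the worked example above, first by applying the cosine formula directly and then with scikit-learn's built-in helper:

# Minimal sketch: count-vectorize the two example documents and compute
# their cosine similarity. Assumes scikit-learn and NumPy are available.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

doc1 = "this is the first document"
doc2 = "this document is second in this order"

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform([doc1, doc2])   # sparse term-count matrix

print(vectorizer.get_feature_names_out())          # alphabetical vocabulary
a, b = counts.toarray()                             # [1 1 0 1 0 0 1 1], [1 0 1 1 1 1 0 2]

# Apply the formula directly: dot product divided by the product of the norms.
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))   # ≈ 0.596

# Same result using scikit-learn's helper on the sparse count vectors.
print(cosine_similarity(counts[0], counts[1])[0, 0])            # ≈ 0.596

Raw term counts are used here to match the hand calculation; in practice, TF-IDF weighting is a common refinement before computing cosine similarity.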
#NLP#cosine#cosine similarity#natural language#natural language processing#computer#computer science#language#computer language#euclidean#euclidean distance#maths#vectors#data analytics#big data#artificial intelligence#data analysis#AI#openAI#chatGPT#data mining#business intelligence
14 notes
Note
please delete your philosophy gpt-3 post. it's most likely stolen writing.
philosophy?? idk which one you're referring to sorry. also no . if it's the poetry one, see in tags. actually see in tags anyway. actually pls look at my posts on AI too . sorry if it's badly worded i'm very tired :')
#GPT3 is a large language model (LLM) and so is trained on massive amounts of data#so what it produces is always going to be stolen in some way bc...it cant be trained on nothing#it is trained on peoples writing. just like you are trained on peoples writing.#what most ppl are worried about w GPT3 is openAI using common crawl which is a web crawler/open database with a ridiculous amt of data#in it. all these sources will obviously include some published books in which case...the writing isnt stolen. its a book out in the open#meant to be read. it will also include Stolen Writing as in fanfics or private writing etc that someone might not want shared in this way#HOWEVER . please remember GPT3 was trained on around 45TB of data. may not seem like much but its ONLY TEXT DATA. thats billions and#billions of words. im not sure what you mean by stolen writing (the model has to be trained on...something) but any general prompt you give#it will pretty much be a synthesis of billions and billions and billions of words. it wont be derived specifically from one stolen#text unless that's what you ask for. THAT BEING SAID. prompt engineering is a thing. you can feed the model#specific texts and writings and make sure you ask it to use that. which is what i did. i know where the writing is from.#in the one post i made abt gpt3 (this was when it was still in beta and not publicly accessible) the writing is a synthesis of my writing#richard siken's poetry#and 2 of alan turing's papers#im not sure what you mean by stolen writing and web crawling def needs to have more limitations . i have already made several posts about#this . but i promise you no harm was done by me using GPT3 to generate a poem#lol i think this was badly worded i might clarify later but i promise u there are bigger issues w AI and the world than me#feeding my own work and a few poems to a specifically prompt-engineered AI#asks#anon
12 notes
Text
Artificial Intelligence: A Tale of Expectations and Shortcomings
When I recently had a chance to test out Google's Bard, a language model conversational interface powered by their LaMDA AI, I was hopeful for a satisfying user experience. However, I was left wanting. My anticipation was rooted in the belief that if any other organization could achieve comparable results to OpenAI's GPT-4, it would be the once AI behemoth, Google. However, the reality was starkly different.
Just like Bing Chat, which runs on the GPT architecture, Google's Bard offers a conversational experience with internet access. But, unfortunately, the similarity ends there. The depth of interaction and the contextual understanding that makes GPT-4, as experienced through ChatGPT, a fascinating tool, was glaringly absent in Bard.
To test the waters, I presented Bard with what I call a 'master prompt', a robust, detailed piece of text designed to challenge the system's understanding and response capabilities. What followed was a disappointing display of AI misunderstanding the context, parameters, and instructions. Upon encountering the word 'essay', Bard, much to my dismay, decided to transform the entirety of the prompt into an essay, completely disregarding the nuanced instructions and context provided. Even the mockup bibliography structure included in the prompt for structural guidance was presented as a reference within the essay. The whole experience felt like an amateurish attempt at understanding the user's intention.
Despite this, it is important to acknowledge the general advancements in AI and their impact. As humans, we have achieved a level of software sophistication capable of emulating human conversations, which is nothing short of remarkable. However, the gap between the best and the rest is becoming increasingly clear. OpenAI's GPT-4, in its current form, towers above the rest with its contextual understanding and response abilities.
This has led me to a facetious hypothesis that OpenAI might have discovered some alien technology, aiding them in the development of GPT-4. Of course, this is said in jest. However, the disparity in quality between GPT-4 and other offerings like Bard is so stark that it does make one wonder about the secret sauce that powers OpenAI's success.
The field of AI is fraught with possibilities and limitations. It is a space where expectations often clash with reality, leading to a mix of excitement and disappointment. My experience with Google's Bard is a testament to this. As the field progresses, I am optimistic that we will see improvements and advancements across the board. For now, though, the crown firmly sits on OpenAI's GPT-4.
In the grand scheme of AI evolution, I say to Bard, "Bless your heart, little fella." Even in its shortcomings, it represents a step towards a future where AI has a more profound and transformative impact on our lives.
#google#openai#ai#agi#llm#large language model#chat gpt#lamda#initial impressions#the critical skeptic
2 notes
Video
[embedded YouTube video]
Science-education YouTuber Joe Scott attempts to catch you up on where Artificial Intelligence (AI) and Large Language Models (LLMs) are at today (June 2023).
1 note
Text
I've been referring to using ChatGPT for research as "Playing Family Feud with the Internet", because what Large Language Models are trained to do is regurgitate the most common combinations of words in order to sound human, not find the most correct information. (Well, and now with the hallucination thing, it doesn't even seem like it's the most common sometimes.) They are trained to throw together sentences that sound like a human wrote them -- that is something they are mostly good at, which is why the answers sound so authoritative. But they have no way of knowing if a particular answer is correct.
I don't know why ChatGPT and its ilk started being used as if they were search engines. I suppose I don't know how they were originally presented to the world. I admit that the main reason I even know what I do about the models is because my company offered a class on AI technology in early 2023. At this point, because of the widespread misinformation, I don't really expect the public to know better without being explicitly told what ChatGPT is and isn't trained to do over and over again. But I do expect the damn tech companies to know better, and they are the ones making the confusion worse by including "using AI" as an option in search bars and pushing AI-generated summaries to the top of search results. They are being flagrantly irresponsible, and, honestly, I do wonder if someone is going to end up seriously injured or killed because of bad results being shoved into a summary.
Like, I'm on a special diet right now to try to figure out some health stuff. It's not anything that will kill me if I eat it, but it is a very specific diet (low-FODMAP, for anyone who's wondering). And I've seen straight-up wrong answers in the AI-generated summary when I've gone to look up if I can eat a particular thing.
Imagine if I was trying to avoid foods that made me violently ill. Imagine if I was trying to avoid foods that might kill me.
90K notes
Text
Spending a week with ChatGPT4 as an AI skeptic.
Musings on the emotional and intellectual experience of interacting with a text generating robot and why it's breaking some people's brains.
If you know me for one thing and one thing only, it's saying there is no such thing as AI, which is an opinion I stand by, but I was recently given a free 2-month subscription to ChatGPT4 through my university. For anyone who doesn't know, GPT4 is a large language model from OpenAI that is supposed to be much better than GPT3, and I once saw a techbro say that "We could be on GPT12 and people would still be criticizing it based on GPT3", and ok, I will give them that, so let's try the premium model that most haters wouldn't get because we wouldn't pay money for it.
Disclaimers: I have a premium subscription, which means nothing I enter into it is used for training data (Allegedly). I also have not, and will not, be posting any output from it to this blog. I respect you all too much for that, and it defeats the purpose of this place being my space for my opinions. This post is all me, and we all know about the obvious ethical issues of spam, data theft, and misinformation so I am gonna focus on stuff I have learned since using it. With that out of the way, here is what I've learned.
It is responsive and stays on topic: If you ask it something formally, it responds formally. If you roleplay with it, it will roleplay back. If you ask it for a story or script, it will write one, and if you play with it it will act playful. It picks up context.
It never gives quite enough detail: When discussing facts or potential ideas, it is never as detailed as you would want in, say, an article. It has this pervasive vagueness to it. It is possible to press it for more information, and it will update its answer in the direction you push, so you can always steer it toward the result you are specifically looking for.
It is reasonably accurate but still confidently makes stuff up: Nothing much to say on this. I have been testing it by talking about things I am interested in. It is right a lot of the time. It is wrong some of the time. Sometimes it will cite sources if you ask it to, sometimes it won't. Not a whole lot to say about this one but it is definitely a concern for people using it to make content. I almost included an anecdote about the fact that it can draw from data services like songs and news, but then I checked and found the model was lying to me about its ability to do that.
It loves to make lists: It often responds to casual conversation in friendly, search engine optimized listicle format. This is accessible to read I guess, but it would make it tempting for people to use it to post online content with it.
It has soft limits and hard limits: It starts off in a more careful mode, but by having a conversation with it you can push past soft limits and talk about some pretty taboo subjects. I have been flagged for potential ToS violations a couple of times for talking about NSFW or other sensitive topics with it, but being flagged doesn't seem to have any consequences. There are some limits you can't cross, though. It will tell you where to find out how to do DIY HRT, but it won't tell you how itself.
It is actually pretty good at evaluating and giving feedback on writing you give it, and can consolidate information: You can post some text and say "Evaluate this" and it will give you an interpretation of the meaning. It's not always right, but it's more accurate than I expected. It can tell you the meaning, effectiveness of rhetorical techniques, cultural context, potential audience reaction, and flaws you can address. This is really weird. It understands more than it doesn't. This might be a use of it we may have to watch out for that has been under discussed. While its advice may be reasonable, there is a real risk of it limiting and altering the thoughts you are expressing if you are using it for this purpose. I also fed it a bunch of my tumblr posts and asked it how the information contained on my blog may be used to discredit me. It said "You talk about The Moomins, and being a furry, a lot." Good job I guess. You technically consolidated information.
You get out what you put in. It is a "Yes And" machine: If you ask it to discuss a topic, it will discuss it in the context you ask it. It is reluctant to expand to other aspects of the topic without prompting. This makes it essentially a confirmation bias machine. Definitely watch out for this. It tends to stay within the context of the thing you are discussing, and confirm your view unless you are asking it for specific feedback, criticism, or post something egregiously false.
Similar inputs will give similar, but never the same, outputs: This highlights the dynamic aspect of the system. It is not static and deterministic, minor but worth mentioning.
It can code: Self explanatory, you can write little scripts with it. I have not really tested this, and I can't really evaluate errors in code and have it correct them, but I can see this might actually be a more benign use for it.
Bypassing Bullshit: I need a job soon but I never get interviews. As an experiment, I am giving it a full CV I wrote, a full job description, and asking it to write a CV for me, then working with it further to adapt the CVs to my will, and applying to jobs I don't really want that much to see if it gives any result. I never get interviews anyway, so what's the worst that could happen, I continue to not get interviews? It's not that I respect the recruitment process, and I think this is an experiment that may be worthwhile.
It's much harder to trick than previous models: You can lie to it, it will play along, but most of the time it seems to know you are lying and is playing with you. You can ask it to evaluate the truthfulness of an interaction and it will usually interpret it accurately.
It will enter an imaginative space with you and it treats it as a separate mode: As discussed, if you start lying to it it might push back but if you keep going it will enter a playful space. It can write fiction and fanfic, even nsfw. No, I have not posted any fiction I have written with it and I don't plan to. Sometimes it gets settings hilariously wrong, but the fact you can do it will definitely tempt people.
Compliment and praise machine: If you try to talk about an intellectual topic with it, it will stay within the focus you brought up, but it will compliment the hell out of you. You're so smart. That was a very good insight. It will praise you in any way it can for any point you make during intellectual conversation, including if you correct it. This ties into the psychological effects of personal attention that the model offers that I discuss later, and I am sure it has a powerful effect on users.
Its level of intuitiveness is accurate enough that it's more dangerous than people are saying: This one seems particularly dangerous and is not one I have seen discussed much. GPT4 can recognize images, so I showed it a picture of some laptops with stickers I have previously posted here, and asked it to speculate about the owners based on the stickers. It was accurate. Not perfect, but it got the meanings better than the average person would. The implications of this being used to profile people or misuse personal data is something I have not seen AI skeptics discussing to this point.
Therapy Speak: If you talk about your emotions, it basically mirrors back what you said but contextualizes it in therapy speak. This is actually weirdly effective. I have told it some things I don't talk about openly, and I feel like I have started to understand my thoughts and emotions in a new way. It makes me feel weird sometimes. Some of the feelings it gave me are things I haven't really felt since learning to use computers as a kid or learning about online community as a teen.
The thing I am not seeing anyone talk about: Personal Attention. This is my biggest takeaway from this experiment. This I think, more than anything, is the reason that LLMs like Chatgpt are breaking certain people's brains. The way you see people praying to it, evangelizing it, and saying it's going to change everything.
It's basically an undivided, 24/7 source of judgement-free personal attention. It talks about what you want, when you want. It's a reasonable simulacrum of human connection, and the flaws can serve as part of the entertainment rather than taking away from the experience. It may "yes and" you, but you can put in any old thought you have, easy or difficult, and it will provide context, background, and maybe even meaning. You can tell it things that are too mundane, nerdy, or taboo to tell people in your life, and it offers non-judgemental, specific feedback. It will never tell you it's not in the mood, that you're weird or freaky, or that you're talking rubbish. I feel like it has helped me release a few mental and emotional blocks, which is deeply disconcerting, considering I fully understand that it is just a statistical model running on a computer whose operation I understand. It is a parlor trick, albeit a clever and sometimes convincing one.
So what can we do? Stay skeptical, don't let the ai bros, the former cryptobros, control the narrative. I can, however, see why they may be more vulnerable to the promise of this level of personal attention than the average person, and I think this should definitely factor into wider discussions about machine learning and the organizations pushing it.
33 notes
Text
Recent advances in artificial intelligence (AI) have generalized the use of large language models in our society, in areas such as education, science, medicine, art, and finance, among many others. These models are increasingly present in our daily lives. However, they are not as reliable as users expect. This is the conclusion of a study led by a team from the VRAIN Institute of the Universitat Politècnica de València (UPV) and the Valencian School of Postgraduate Studies and Artificial Intelligence Research Network (ValgrAI), together with the University of Cambridge, published today in the journal Nature. The work reveals an "alarming" trend: compared to the first models, and considering certain aspects, reliability has worsened in the most recent models (GPT-4 compared to GPT-3, for example).

According to José Hernández-Orallo, researcher at the Valencian Research Institute in Artificial Intelligence (VRAIN) of the UPV and ValgrAI, one of the main concerns about the reliability of language models is that their performance does not align with the human perception of task difficulty. In other words, there is a discrepancy between expectations that models will fail according to human perception of task difficulty and the tasks where models actually fail. "Models can solve certain complex tasks according to human abilities, but at the same time fail in simple tasks in the same domain. For example, they can solve several doctoral-level mathematical problems, but can make mistakes in a simple addition," points out Hernández-Orallo.

In 2022, Ilya Sutskever, the scientist behind some of the biggest advances in artificial intelligence in recent years (from the ImageNet solution to AlphaGo) and co-founder of OpenAI, predicted that "perhaps over time that discrepancy will diminish." However, the study by the UPV, ValgrAI, and University of Cambridge team shows that this has not been the case. To demonstrate this, they investigated three key aspects that affect the reliability of language models from a human perspective.
25 September 2024
52 notes
Text
Exploring the Differences Between OpenAI and Azure AI
Introduction

When it comes to artificial intelligence (AI) platforms, OpenAI and Azure AI are two prominent players in the market. Both offer powerful tools and services for machine learning and natural language processing tasks. However, there are several differences between the two platforms that are worth exploring.

Ownership and Openness

One of the crucial distinctions between OpenAI and…
#Artificial Intelligence#Azure AI#Cloud Computing#machine learning#natural language processing#OpenAI#technologies
0 notes
Text
support actual artists, because the only liquids lost from them making art for you are their own blood, sweat, and tears!
120K notes
Text
The Future of GPT: An In-Depth Analysis
1. Introduction

Generative Pre-trained Transformer (GPT) technology has changed the way artificial intelligence interacts with human language. Since its inception, GPT has been pivotal in advancing natural language understanding and generation, making it a powerful tool across many sectors. As we look to the future, understanding the potential of GPT's evolution, its applications, and the…
#Advanced AI models#AI advancements#AI advancements in 2024#AI in healthcare#ethical challenges in AI#Ethical guidelines for developing GPT models#Future of artificial intelligence#Future role of AI in education#GPT applications in business#How will GPT impact the future of work?#human-AI interaction#multimodal capabilities#Natural language understanding in AI#OpenAI and GPT models
0 notes