#Why does chat GPT suddenly stop
needtricks-blog · 29 days ago
Text
Why ChatGPT Stops Suddenly: Response Limits, Glitches, & Fixes
ChatGPT is a powerful AI language model that can generate insightful and detailed responses. However, users sometimes experience the frustration of ChatGPT stopping unexpectedly in the middle of a reply. There are several reasons why this happens, and understanding them can help you avoid interruptions. In this article, we'll dive…
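One concrete cause behind the "response limits" in the title: when ChatGPT is used through the API, each reply has a token budget, and a reply that hits that budget simply stops mid-sentence. A minimal sketch with the OpenAI Python client (the model name and prompt below are placeholders, not taken from the article):

```python
# Minimal sketch: a reply that runs into the max_tokens budget is cut off,
# and the API reports this via finish_reason == "length".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # placeholder model name
    messages=[{"role": "user", "content": "Explain photosynthesis in detail."}],
    max_tokens=50,                            # deliberately small budget
)

choice = response.choices[0]
print(choice.message.content)                 # likely ends mid-sentence
if choice.finish_reason == "length":
    print("(Truncated: the reply hit the token limit, not a glitch.)")
```

Raising the budget or asking the model to "continue" are the usual workarounds when a reply is cut off this way.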
morlock-holmes · 7 months ago
Note
What objections would you actually accept to AI?
Roughly in order of urgency, at least in my opinion:
Problem 1: Curation
The large tech monopolies have essentially abandoned curation and are raking in the dough by monetizing the process of showing you crap you don't want.
The YouTube content farm; the Steam asset flip; SEO spam; drop-shipped crap on Etsy and Amazon.
AI makes these pernicious, user hostile practices even easier.
Problem 2: Economic disruption
This has a bunch of aspects, but key to me is that *all* automation threatens people who have built a living on doing work. If previously difficult, high-skill work suddenly becomes low-skill, that is economically threatening to the high-skill workers. And this is true of *all* work, regardless of whether the work is drudgery or deeply fulfilling. Go automate an Amazon fulfillment center and the employees will not be thanking you.
There's also just the general threat of existing relationships not accounting for AI, in terms of, like, residuals or whatever.
Problem 3: Opacity
Basically all these AI products are extremely opaque. The companies building them are not at all transparent about the source of their data, how it is used, or how their tools work. Because they view the tools as things they own, whose outputs reflect on them as a company, they mess with those outputs to try to ensure they don't reflect badly.
These processes are opaque and not communicated clearly or accurately to end users; in fact, because AI text tools hallucinate, they will happily give you *fake* error messages if you ask why they returned an error.
There have also been allegations that Midjourney and OpenAI don't comply with European data protection laws.
There is something that does bother me, too, about the use of big data as a profit center. I don't think it's a copyright or theft issue, but it is a fact that these companies are using public data to make a lot of money while being extremely closed off about how exactly they do that. I'm not a huge fan of the closed source model for this stuff when it is so heavily dependent on public data.
Problem 4: Environmental, maybe?
Related to problem 3, it's just not too clear what kind of impact all this AI stuff is having in terms of power costs. Honestly it all kind of does something, so I'm not hugely concerned, but I do kind of privately think that in the not too distant future a lot of these companies will stop spending money on enormous server farms just so that internet randos can try to get ChatGPT to write porn.
Problem 5: They kind of don't work
Text programs frequently make stuff up. Actually, a friend pointed out to me that, in pulp scifi, robots will often say something like, "There is an 80% chance the guards will spot you!"
If you point one of those AI assistants at something, and ask them what it is, a lot of times they just confidently say the wrong thing. This same friend pointed out that, under the hood, the image recognition software is working with probabilities. But I saw lots of videos of the Rabbit AI assistant thing confidently being completely wrong about what it was looking at.
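To make the friend's point concrete: under the hood a classifier ends up with a probability for every label and just reports the most likely one, even when the scores are nearly tied or simply wrong. A toy sketch (the labels and numbers are made up purely for illustration):

```python
# A classifier turns raw scores into probabilities and reports the top label,
# hiding how close the call actually was.
import numpy as np

labels = ["cat", "dog", "toaster"]
logits = np.array([1.2, 1.1, 1.0])             # raw model scores, nearly a tie

probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
best = int(np.argmax(probs))

# The assistant just states the top label; the near-tie is invisible to the user.
print(f"This is a {labels[best]} ({probs[best]:.0%} confident)")
```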
ChatGPT hallucinates. Image generators are unable to consistently produce the same character, and it's actually pretty difficult and unintuitive to produce a specific image, rather than a generic one.
This may be fixed in the near future or it might not, I have no idea.
Problem 6: Kinetic sameness.
One of the subtle changes of the last century is that more and more of what we do in life is look at a screen, while either sitting or standing, and make a series of small hand gestures. The processes of writing, of producing an image, of getting from place to place are converging on a single physical act. As Marshall McLuhan pointed out, driving a car is very similar to watching TV, and making a movie is now very similar, as a set of physical movements, to watching one.
There is something vaguely unsatisfying about this.
Related, perhaps only in the sense of being extremely vague, is a sense that we may soon be mediating all, or at least many, of our conversations through AI tools. Have it punch up that email when you're too tired to write clearly. There is something I find disturbing about the idea of communication being constantly edited and punched up by a series of unrelated middlemen, *especially* in the current climate, where said middlemen are large impersonal monopolies who are dedicated to opaque, user hostile practices.
Given all of the above, it is baffling and sometimes infuriating to me that the two most popular arguments against AI boil down to "Transformative works are theft and we need to restrict fair use even more!" and "It's bad to use technology to make art, technology is only for boring things!"
#ai
midlandslady · 2 months ago
Note
I find this ai debate loaded with double standards. If people were to complain about the ethical problems of ai (plagiarism of artists etc), I could be on board. But I just cannot wrap my head around why the same people, who have no problem with (sometimes pretty explicit) fanart or PS edits, scream about lack of consent of the actors the moment ai is involved. Someone even said "if you want to see them kiss learn Photoshop" or something. How does it matter whether a person does the work behind it or an ai? As a former influencer, I can assure you that it is no less disturbing having weird art made about you in a way that you are actually recognizable. And like... I remember some seriously disturbing fanarts from 15 years ago where the actors playing these characters were utterly recognizable. Or edits of elves or hobbits completely naked or in far more intimate poses than any of your ai gifs. And heck, there is even VERY explicit fiction ABOUT ACTORS. Either none of it is okay or all of it is okay, but we cannot suddenly go wild on consent just because it is a computer doing the blending and not care when it is instead a human. Just my fifty cents.
For all we know, the actors might not even care. There have been several instances of actors laughing over ships/fanarts of their characters and having a blast. So why don't we all just chill, question our standards, and stop defending people who, for all we know, don't even see a problem.
Thank you, anon. Honestly I don't know; I am on neutral ground. I see the issues with every ai platform, whether ChatGPT, ai image generation, or ai voiceovers, and I think it is dangerous, but I also see it as an innovation in technology. I have been reading fanfiction for years, and I have been a youtube vidder for just as many, creating my own stories and crossovers with the fancasts I chose. And in the same place as I am are a thousand other vidders. If I look at it in this new light, then all of that was wrong as well. But no one seemed to care about that, which leads me to believe that this commotion is simply fueled by a hatred towards ai. It also seems to be a trend of the new generation, where everything we say has the potential to be censored because it affects other people's sensibilities. I am not really a part of that generation, so it puzzles me🤷‍♀️
sultanaislammow · 1 year ago
Text
I chatted with a scholar about the 10 truths about AI
Since the emergence of ChatGPT, discussions about productivity and value have come up constantly. Behind them lie our expectations of, and worries about, new technologies. This is a good moment to first understand where AI technology came from and where it is headed. Let's take a look at the observations shared in this article.
On June 26, Fu Sheng posted a message on WeChat Moments opposing Zhu Xiaohu's view that "ChatGPT is very unfriendly to startups; for the next two to three years, everyone should give up their financing fantasies." He added, "Half of the startups in Silicon Valley are built around ChatGPT, yet investors can still be so ignorant and fearless."
Zhu Xiaohu later retorted in the comments under the post: if 99% of the value is created by GPT, what value does such a startup have?
Afterwards, Fu Sheng and Zhu Xiaohu waged a "fierce battle" on WeChat Moments over the upheaval in value brought about by new technologies.
Since ChatGPT emerged and set off a wave of AI, there have been endless discussions about the value of enterprises and the value of people: excitement that new technologies will change our lives, worry that our own careers and companies will be replaced, and confusion brewing on the eve of technological change. These emotions largely stem from the fact that we don't know where AI technology comes from or where it is going.
Shangyinshe held a dialogue with senior practitioners in the digital and AI industries and distilled ten observations on AI, summarized here for discussion with readers.
1. AI efficiency improvement started ten years ago
Ten years ago, a Japanese team came to Shenzhen to build a small chip-placement (SMT) factory. Jiang Keyue, a senior practitioner in the AI industry, felt the impact of intelligent manufacturing for the first time when she visited it. When the placement machine's robotic arm picks up the fingernail-sized chips and places them on circuit boards, the yield reaches 99.0%; it almost never makes a mistake.
All of the factory's equipment was connected over an internal network to a touch screen on the factory director's desk. At the time, touch screens were scarce and mobile phones were still clamshells. Through that screen, the director could see the production status of the entire line and the real-time placement yield. Yield testing was also done by machines, so there was no need for a dedicated testing department.
In other words, machines managing machines was already basically a reality ten years ago.
2. Why is it only widely perceived now?
The process by which enterprises use AI to improve efficiency has been continuous. It is like a train traveling from the past toward the future. We used to think the train was far away; its speed seemed slow, its shape was indistinct, and its impact on us was small.
But as the train keeps rolling toward us, you realize it has never stopped. Its sound grows louder, its shape becomes clearer, and its speed keeps increasing, so that now almost everyone can feel the impact of AI.
In many people's eyes, AI emerged suddenly with the launch of large models such as ChatGPT and Wenxin Yiyan (ERNIE Bot). In fact, it did not. For example, the traffic model Baidu built earlier was also a generative model based on big data: if we want to go somewhere in Beijing, it can generate several routes to choose from based on our request.
It is like Huawei: the company had been doing research, development, and product sales since the 1980s, but it was only after 2005 that the Huawei brand became familiar to the public. By then it was already a giant.
That is mainly because Huawei previously made industry-level products and only later began making consumer terminals such as mobile phones. Even today, most of China Mobile's and China Unicom's equipment rooms are filled with Huawei gear. Industry barriers kept its products behind a high threshold, so the general public did not know them.
For AI, ChatGPT is like Huawei putting a mobile phone in front of you. Once we can all talk to it, ChatGPT acts as a key that opens the valve, letting everyone feel the torrent of AI.