#openai stock
business-watch-daily · 3 days ago
Text
Donald Trump unveils an ambitious plan heralding private-sector contributions of up to $500 billion toward building out artificial intelligence infrastructure
4 notes · View notes
shyducksuit · 2 days ago
Text
Announcing The Stargate Project
0 notes
nightpool · 1 year ago
Text
Well, sure, but that is a fight about AI safety. It’s just a metaphorical fight about AI safety. I am sorry, I have made this joke before, but events keep sharpening it. The OpenAI board looked at Sam Altman and thought “this guy is smarter than us, he can outmaneuver us in a pinch, and it makes us nervous. He’s done nothing wrong so far, but we can’t be sure what he’ll do next as his capabilities expand. We do not fully trust him, we cannot fully control him, and we do not have a model of how his mind works that we fully understand. Therefore we have to shut him down before he grows too powerful.”
I’m sorry! That is exactly the AI misalignment worry! If you spend your time managing AIs that are growing exponentially smarter, you might worry about losing control of them, and if you spend your time managing Sam Altman you might worry about losing control of him, and if you spend your time managing both of them you might get confused about which is which. Maybe Sam Altman will turn the old board members into paper clips.
matt levine tackles the sam altman alignment problem
55 notes · View notes
knucklesex · 28 days ago
Text
Balaji publicly voiced his concerns about AI’s ethical implications, especially regarding copyright. He criticised OpenAI’s use of publicly available internet data for profit, questioning its impact on creators' livelihoods.
By late 2023, Balaji had lost enthusiasm for his work at OpenAI and began openly criticising CEO Sam Altman, as per family accounts. After resigning, he planned to start a machine-learning nonprofit in the medical field.
4 notes · View notes
intelvueofficial · 1 year ago
Text
ChatGPT Invention 😀😀
ChatGPT is not new; Courage the Cowardly Dog was the first to use ChatGPT 😀😀😀😀
21 notes · View notes
todayscroll · 3 days ago
Text
Asia tech, chipmaking stocks surge on OpenAI cheer By Investing.com
Investing.com – Asian technology and chipmaking stocks rose on Wednesday, fueled largely by renewed optimism over AI-driven demand after OpenAI announced a massive infrastructure project in the U.S. Tech stocks in Japan, Taiwan, and South Korea were the best performers, with gains skewed toward chipmakers. TSMC, the world’s biggest contract chipmaker, jumped over…
0 notes
healthmonastery · 1 year ago
Text
The Future of Chatbots and Conversational AI: Unveiling ChatGPT
In the ever-evolving landscape of artificial intelligence, there’s one technology that’s been turning heads and reshaping conversations: ChatGPT. As an innovation brought to life by OpenAI, ChatGPT has swiftly become a frontrunner in the realm of conversational AI. In this blog post, we’ll delve into what ChatGPT is, how to utilize it effectively, and take a glimpse into its promising future.…
0 notes
artificial-intelligence99 · 2 years ago
Text
10 Tips for Choosing Best Insurance Policy for Your Business: Insights from ChatGPT
How to choose Best Insurance Policy for Your Business?
As a business owner, you know how important it is to protect your business from unexpected events that could lead to financial losses. That's why having the right insurance policy is crucial for your business's success.
But with so many insurance options available, choosing the best policy for your business can be a daunting task. That's where ChatGPT comes in. In this blog post, we'll explore ten tips for choosing the best insurance policy for your business, using insights generated by ChatGPT.
Read more: https://www.bikeflame.com/2023/05/10-tips-for-choosing-best-insurance.html
0 notes
ivoryself-blog · 2 years ago
Photo
#bossup#ai#aitechnology#technology#technologysales#affiliatemarketing#openai#chatgpt#realestate#shopify#aiart#esty#pinterest#pin#crypto#Nfts#stocks#airbnb #money#ethereum#sales#ecommerce#printing#amazon#youtube#youtube#automation#socialmediamarketing#credit#future#lifestyle https://www.instagram.com/p/CqoEiTuOxAg/?igshid=NGJjMDIxMWI=
0 notes
andytriboletti · 2 years ago
Photo
Asked #OpenAI for international plastics companies and high dividend yielding #stocks. I was thinking of a new feature of feather to provide you with the dividend yield of all the stocks you track in your account. I'm waiting on working on that until I get pirates activated again on Facebook. They said they respond within one day. (at Lincoln University) https://www.instagram.com/p/CpAm5ncr1tL/?igshid=NGJjMDIxMWI=
0 notes
kivodaily · 2 years ago
Text
Google Shares Drop After Presentation
Google – The race for AI technology has intensified since OpenAI unveiled ChatGPT in late 2022, leaving other tech firms in the dust.
Google in particular is lagging and has been working to catch up.
The company held an event on Wednesday to show off Bard, an AI chatbot, with terrible consequences.
As a result, Alphabet, the parent company of Google, saw a decline of more than 7% in share price at the close of trade.
The news
On Tuesday, Microsoft showcased brand-new AI technologies on its Bing search engine.
Due to the event’s success, Google decided to emulate it.
Earlier that day, Google had confirmed the news of its Bard announcement and said that the AI technology would be made available over the coming weeks.
The presentation
Google executives spoke about Bard’s potential on Wednesday at the event.
In a presentation, the pros and cons of AI were discussed.
The company’s well-known language model, LaMDA (Language Model for Dialogue Applications), drives Bard...
Source: Kivo Daily
0 notes
mostlysignssomeportents · 9 months ago
Text
AI is a WMD
I'm in TARTU, ESTONIA! AI, copyright and creative workers' labor rights (TOMORROW, May 10, 8AM: Science Fiction Research Association talk, Institute of Foreign Languages and Cultures building, Lossi 3, lobby). A talk for hackers on seizing the means of computation (TOMORROW, May 10, 3PM, University of Tartu Delta Centre, Narva 18, room 1037).
Fun fact: "The Tragedy Of the Commons" is a hoax created by the white nationalist Garrett Hardin to justify stealing land from colonized people and moving it from collective ownership, "rescuing" it from the inevitable tragedy by putting it in the hands of a private owner, who will care for it properly, thanks to "rational self-interest":
https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions
Get that? If control over a key resource is diffused among the people who rely on it, then (Hardin claims) those people will all behave like selfish assholes, overusing and undermaintaining the commons. It's only when we let someone own that commons and charge rent for its use that (Hardin says) we will get sound management.
By that logic, Google should be the internet's most competent and reliable manager. After all, the company used its access to the capital markets to buy control over the internet, spending billions every year to make sure that you never try a search-engine other than its own, thus guaranteeing it a 90% market share:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Google seems to think it's got the problem of deciding what we see on the internet licked. Otherwise, why would the company flush $80b down the toilet with a giant stock-buyback, and then do multiple waves of mass layoffs, from last year's 12,000 person bloodbath to this year's deep cuts to the company's "core teams"?
https://qz.com/google-is-laying-off-hundreds-as-it-moves-core-jobs-abr-1851449528
And yet, Google is overrun with scams and spam, which find their way to the very top of the first page of its search results:
https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security
The entire internet is shaped by Google's decisions about what shows up on that first page of listings. When Google decided to prioritize shopping site results over informative discussions and other possible matches, the entire internet shifted its focus to producing affiliate-link-strewn "reviews" that would show up on Google's front door:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
This was catnip to the kind of sociopath who a) owns a hedge-fund and b) hates journalists for being pain-in-the-ass, stick-in-the-mud sticklers for "truth" and "facts" and other impediments to the care and maintenance of a functional reality-distortion field. These dickheads started buying up beloved news sites and converting them to spam-farms, filled with garbage "reviews" and other Google-pleasing, affiliate-fee-generating nonsense.
(These news-sites were vulnerable to acquisition in large part thanks to Google, whose dominance of ad-tech lets it cream 51 cents off every ad dollar and whose mobile OS monopoly lets it steal 30 cents off every in-app subscriber dollar):
https://www.eff.org/deeplinks/2023/04/saving-news-big-tech
Now, the spam on these sites didn't write itself. Much to the chagrin of the tech/finance bros who bought up Sports Illustrated and other venerable news sites, they still needed to pay actual human writers to produce plausible word-salads. This was a waste of money that could be better spent on reverse-engineering Google's ranking algorithm and getting pride-of-place on search results pages:
https://housefresh.com/david-vs-digital-goliaths/
That's where AI comes in. Spicy autocomplete absolutely can't replace journalists. The planet-destroying, next-word-guessing programs from OpenAI and its competitors are incorrigible liars that require so much "supervision" that they cost more than they save in a newsroom:
https://pluralistic.net/2024/04/29/what-part-of-no/#dont-you-understand
But while a chatbot can't produce truthful and informative articles, it can produce bullshit – at unimaginable scale. Chatbots are the workers that hedge-fund wreckers dream of: tireless, uncomplaining, compliant and obedient producers of nonsense on demand.
That's why the capital class is so insatiably horny for chatbots. Chatbots aren't going to write Hollywood movies, but studio bosses hyperventilated at the prospect of a "writer" that would accept your brilliant idea and diligently turn it into a movie. You prompt an LLM in exactly the same way a studio exec gives writers notes. The difference is that the LLM won't roll its eyes and make sarcastic remarks about your brainwaves like "ET, but starring a dog, with a love plot in the second act and a big car-chase at the end":
https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/
Similarly, chatbots are a dream come true for a hedge fundie who ends up running a beloved news site, only to have to fight with their own writers to get the profitable nonsense produced at a scale and velocity that will guarantee a high Google ranking and millions in "passive income" from affiliate links.
One of the premier profitable nonsense companies is Advon, which helped usher in an era in which sites from Forbes to Money to USA Today create semi-secret "review" sites that are stuffed full of badly researched top-ten lists for products from air purifiers to cat beds:
https://housefresh.com/how-google-decimated-housefresh/
Advon swears that it only uses living humans to produce nonsense, and not AI. This isn't just wildly implausible, it's also belied by easily uncovered evidence, like its own employees' Linkedin profiles, which boast of using AI to create "content":
https://housefresh.com/wp-content/uploads/2024/05/Advon-AI-LinkedIn.jpg
It's not true. Advon uses AI to produce its nonsense, at scale. In an excellent, deeply reported piece for Futurism, Maggie Harrison Dupré brings proof that Advon replaced its miserable human nonsense-writers with tireless chatbots:
https://futurism.com/advon-ai-content
Dupré describes how Advon's ability to create botshit at scale contributed to the enshittification of clients from Yoga Journal to the LA Times, "Us Weekly" to the Miami Herald.
All of this is very timely, because this is the week that Google finally bestirred itself to commence downranking publishers who engage in "site reputation abuse" – creating these SEO-stuffed fake reviews with the help of third parties like Advon:
https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse
(Google's policy only forbids site reputation abuse with the help of third parties; if these publishers take their nonsense production in-house, Google may allow them to continue to dominate its search listings):
https://developers.google.com/search/blog/2024/03/core-update-spam-policies#site-reputation
There's a reason so many people believed Hardin's racist "Tragedy of the Commons" hoax. We have an intuitive understanding that commons are fragile. All it takes is one monster to start shitting in the well where the rest of us get our drinking water and we're all poisoned.
The financial markets love these monsters. Mark Zuckerberg's key insight was that he could make billions by assembling vast dossiers of compromising, sensitive personal information on half the world's population without their consent, but only if he kept his costs down by failing to safeguard that data and the systems for exploiting it. He's like a guy who figures out that if he accumulates enough oily rags, he can extract so much low-grade oil from them that he can grow rich, but only if he doesn't waste money on fire-suppression:
https://locusmag.com/2018/07/cory-doctorow-zucks-empire-of-oily-rags/
Now Zuckerberg and the wealthy, powerful monsters who seized control over our commons are getting a comeuppance. The weak countermeasures they created to maintain the minimum levels of quality to keep their platforms as viable, going concerns are being overwhelmed by AI. This was a totally foreseeable outcome: the history of the internet is a story of bad actors who upended the assumptions built into our security systems by automating their attacks, transforming an assault that wouldn't be economically viable into a global, high-speed crime wave:
https://pluralistic.net/2022/04/24/automation-is-magic/
But it is possible for a community to maintain a commons. This is something Hardin could have discovered by studying actual commons, instead of inventing imaginary histories in which commons turned tragic. As it happens, someone else did exactly that: Nobel Laureate Elinor Ostrom:
https://www.onthecommons.org/magazine/elinor-ostroms-8-principles-managing-commmons/
Ostrom described how commons can be wisely managed, over very long timescales, by self-governing communities. Part of her work concerns how users of a commons must have the ability to exclude bad actors from their shared resources.
When that breaks down, commons can fail – because there's always someone who thinks it's fine to shit in the well rather than walk 100 yards to the outhouse.
Enshittification is the process by which control over the internet moved from self-governance by members of the commons to acts of wanton destruction committed by despicable, greedy assholes who shit in the well over and over again.
It's not just the spammers who take advantage of Google's lazy incompetence, either. Take "copyleft trolls," who post images using outdated Creative Commons licenses that allow them to terminate the CC license if a user makes minor errors in attributing the images they use:
https://pluralistic.net/2022/01/24/a-bug-in-early-creative-commons-licenses-has-enabled-a-new-breed-of-superpredator/
The first copyleft trolls were individuals, but these days, the racket is dominated by a company called Pixsy, which pretends to be a "rights protection" agency that helps photographers track down copyright infringers. In reality, the company is committed to helping copyleft trolls entrap innocent Creative Commons users into paying hundreds or even thousands of dollars to use images that are licensed for free use. Just as Advon upends the economics of spam and deception through automation, Pixsy has figured out how to send legal threats at scale, robolawyering demand letters that aren't signed by lawyers; the company refuses to say whether any lawyer ever reviews these threats:
https://pluralistic.net/2022/02/13/an-open-letter-to-pixsy-ceo-kain-jones-who-keeps-sending-me-legal-threats/
This is shitting in the well, at scale. It's an online WMD, designed to wipe out the commons. Creative Commons has allowed millions of creators to produce a commons with billions of works in it, and Pixsy exploits a minor error in the early versions of CC licenses to indiscriminately manufacture legal land-mines, wantonly blowing off innocent commons-users' legs and laughing all the way to the bank:
https://pluralistic.net/2023/04/02/commafuckers-versus-the-commons/
We can have an online commons, but only if it's run by and for its users. Google has shown us that any "benevolent dictator" who amasses power in the name of defending the open internet will eventually grow too big to care, and will allow our commons to be demolished by well-shitters:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/09/shitting-in-the-well/#advon
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
Catherine Poh Huay Tan (modified) https://www.flickr.com/photos/68166820@N08/49729911222/
Laia Balagueró (modified) https://www.flickr.com/photos/lbalaguero/6551235503/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
320 notes · View notes
probablyasocialecologist · 6 months ago
Text
This is it. Generative AI, as a commercial tech phenomenon, has reached its apex. The hype is evaporating. The tech is too unreliable, too often. The vibes are terrible. The air is escaping from the bubble. To me, the question is more about whether the air will rush out all at once, sending the tech sector careening downward like a balloon that someone blew up, failed to tie off properly, and let go—or more slowly, shrinking down to size in gradual sputters, while emitting embarrassing fart sounds, like a balloon being deliberately pinched around the opening by a smirking teenager.

But come on. The jig is up. The technology that was at this time last year being somberly touted as so powerful that it posed an existential threat to humanity is now worrying investors because it is apparently incapable of generating passable marketing emails reliably enough. We’ve had at least a year of companies shelling out for business-grade generative AI, and the results—painted as shinily as possible from a banking and investment sector that would love nothing more than a new technology that can automate office work and creative labor—are one big “meh.”

As a Bloomberg story put it last week, “Big Tech Fails to Convince Wall Street That AI Is Paying Off.” From the piece:

Amazon.com Inc., Microsoft Corp. and Alphabet Inc. had one job heading into this earnings season: show that the billions of dollars they’ve each sunk into the infrastructure propelling the artificial intelligence boom is translating into real sales. In the eyes of Wall Street, they disappointed. Shares in Google owner Alphabet have fallen 7.4% since it reported last week. Microsoft’s stock price has declined in the three days since the company’s own results. Shares of Amazon — the latest to drop its earnings on Thursday — plunged by the most since October 2022 on Friday.

Silicon Valley hailed 2024 as the year that companies would begin to deploy generative AI, the type of technology that can create text, images and videos from simple prompts. This mass adoption is meant to finally bring about meaningful profits from the likes of Google’s Gemini and Microsoft’s Copilot. The fact that those returns have yet to meaningfully materialize is stoking broader concerns about how worthwhile AI will really prove to be.

Meanwhile, Nvidia, the AI chipmaker that soared to an absurd $3 trillion valuation, is losing that value with every passing day—26% over the last month or so, and some analysts believe that’s just the beginning. These declines are the result of less-than-stellar early results from corporations who’ve embraced enterprise-tier generative AI, the distinct lack of killer commercial products 18 months into the AI boom, and scathing financial analyses from Goldman Sachs, Sequoia Capital, and Elliott Management, each of whom concluded that there was “too much spend, too little benefit” from generative AI, in the words of Goldman, and that it was “overhyped” and a “bubble” per Elliott.

As CNN put it in its report on growing fears of an AI bubble:

Some investors had even anticipated that this would be the quarter that tech giants would start to signal that they were backing off their AI infrastructure investments since “AI is not delivering the returns that they were expecting,” D.A. Davidson analyst Gil Luria told CNN. The opposite happened — Google, Microsoft and Meta all signaled that they plan to spend even more as they lay the groundwork for what they hope is an AI future.

This can, perhaps, explain some of the investor revolt.
The tech giants have responded to mounting concerns by doubling, even tripling down, and planning on spending tens of billions of dollars on researching, developing, and deploying generative AI for the foreseeable future. All this as high-profile clients are canceling their contracts. As surveys show that overwhelming majorities of workers say generative AI makes them less productive. As MIT economist and automation scholar Daron Acemoglu warns, “Don’t believe the AI hype.”
6 August 2024
184 notes · View notes
Text
Bullish for ai stocks.
52 notes · View notes
d2071art · 2 months ago
Text
NO AI
TL;DR: almost all social platforms are stealing your art and using it to train generative AI (or selling your content to AI developers); please beware and do something. Or don’t, if you’re okay with this.
Which platforms are NOT safe to use for sharing your art:
Facebook, Instagram and all Meta products and platforms (although if you live in the EU, you can forbid Meta to use your content for AI training)
Reddit (sold out all its content to OpenAI)
Twitter
Bluesky (it has no protection from AI scraping and you can’t opt out from 3rd party data / content collection yet)
DeviantArt, Flickr and literally every stock image platform (some didn’t bother to protect their content from scraping, some sold it out to AI developers)
Here’s WHAT YOU CAN DO:
1. Just say no:
Block all 3rd party data collection: you can do this here on Tumblr (here’s how); all other platforms are merely taking suggestions, tbh
Use Cara (they can’t stop illegal scraping yet, but they are currently working with Glaze to build in ‘AI poisoning’, so… fingers crossed)
2. Use art style masking tools:
Glaze: you can a) download the app and run it locally or b) use Glaze’s free web service; all you need to do is register. This one is a fav of mine, ‘cause, unlike all the other tools, it doesn’t require any coding skills (also it is 100% non-commercial and was developed by a bunch of enthusiasts at the University of Chicago)
Anti-DreamBooth: free code; it was originally developed to protect personal photos from being used for forging deepfakes, but it works for art too
Mist: free code for Windows; if you use macOS or don’t have a powerful enough GPU, you can run Mist in a Google Colab notebook
(art style masking tools change some pixels in digital images so that AI models can’t process them properly; the changes are almost invisible, so they don’t noticeably affect your audience’s perception; a rough conceptual sketch of the idea follows below)
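For the technically curious, here is a minimal, hypothetical sketch of the idea those masking tools are built on: nudge pixels within a tiny, nearly invisible budget so that a frozen image encoder’s features drift away from the original. This is not Glaze, Mist, or Nightshade’s actual code (their objectives are far more sophisticated and style-specific); it assumes PyTorch with a recent torchvision (0.13+), and the filenames my_art.png / my_art_cloaked.png are placeholders.

```python
# Conceptual sketch only -- NOT Glaze, Mist, or Nightshade.
# Idea: perturb pixels within a small L-infinity budget so that a frozen image
# encoder's features move away from the original, while the change stays nearly
# invisible to a human viewer. (A real tool works at full resolution and with a
# style-specific objective; this crops to 224x224 for simplicity.)
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# A frozen, generic feature extractor standing in for whatever encoder a scraper might use.
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()              # keep the 2048-d feature vector
encoder.eval().to(device)
for p in encoder.parameters():
    p.requires_grad_(False)

to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                    # pixel values in [0, 1]
])
image = to_tensor(Image.open("my_art.png").convert("RGB")).unsqueeze(0).to(device)

epsilon = 4 / 255                             # max per-pixel change (barely visible)
delta = torch.zeros_like(image, requires_grad=True)
with torch.no_grad():
    reference = encoder(image)

# Projected gradient steps: lower the cosine similarity between the perturbed
# image's features and the original's, then clamp back into the pixel budget.
for _ in range(50):
    loss = -F.cosine_similarity(encoder(image + delta), reference).mean()
    loss.backward()
    with torch.no_grad():
        delta += (epsilon / 10) * delta.grad.sign()
        delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()

cloaked = (image + delta).clamp(0, 1).squeeze(0).detach().cpu()
transforms.ToPILImage()(cloaked).save("my_art_cloaked.png")
```

The real tools go further: Glaze, for instance, steers the features toward a decoy style rather than simply away from the original, which is what makes the cloak effective against style mimicry.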
3. Use ‘AI poisoning’ tools
Nightshade: free code for Windows 10/11 and macOS; you’ll need a capable GPU/CPU and a handful of machine-learning libraries to use it, though.
4. Stay safe and fuck all this corporate shit.
73 notes · View notes
ralfmaximus · 6 months ago
Text
Nvidia stock has lost a trillion dollars of valuation, 30% of the total, since its 2024 high. If history is any guide, and it usually is, there's no coming back from a tipping point like this. The next part will be really painful for a lot of people — and yet more beneficial for long-term tech progress than the bubble mentality could ever be.
The AI bubble is bursting. About fucking time.
Polite reminder that this particular bubble is LLM Generative AI. Products like OpenAI's ChatGPT, Midjourney, and Microsoft Copilot. The hallucinating, expensive, useless text-prediction algorithm on steroids. The chatbots that suggest glue as a pizza topping.
Classical Analytical AI is very different; an important, useful technology that really will revolutionize life on earth. When it happens, and it will.
The problem here is that many tech investors don't understand the difference. Because techbros have intentionally lied to blur the distinction, to drive up investments. As a result, money will dry up in all of the AI tech sectors, which hurts classical AI development. For a while.
But overall, this particular bubble bursting is a good thing. Time to end the bullshit.
88 notes · View notes