snickerdoodlles · 2 months ago
@emberfaye "pouring effort into it to pour effort into it" is such a good summation of it! because like. it's all part of the "growth at all costs rot-economy" problem more than anything else. OpenAI is currently in its Uber-phase where it's this behemoth propped up by venture capital and it's being hyped into being the Next! Big! Thing! so that either someone else buys it for more than it's worth or it goes public and all the current investors cash out before people catch on to the fact that it's a flop. like, nowhere in this equation of investment are people actually asking, "hey, does OpenAI bring in more revenue than it spends?" reality has no place in this business according to them. a lot of genAI is like this, but most other genAI titans are companies like Google, or Meta, or apparently fucking Amazon if they ever get anywhere with their Olympus model. these other companies have marketshare for reasons other than generative AI tho, so OpenAI is just the specific case where you can really see how little return or viability genAI currently has as a business.
like. honestly, i think i was giving them too much credit to even say that they're ignoring/losing sight of a potential sustainable business, because that probably isn't even a factor in their company goals. they don't care where genAI actually goes as a business after they cash out, so all the absurd hype around it right now is serving the exact purpose they want: make people think it's worth something. if genAI happens to suddenly become a viable business model before then, lucky us ig, but that's just straight up not a concern for the business side of it.
my personal disdain towards there being another huge leap forward in genAI before their cash out time stems primarily from the fact that the AI puppet is a ton of smoke and mirrors atm. i don't want to make it sound like nothing's come from genAI, there has been a lot of truly incredible research and innovation that has come out of this bubble. not by OpenAI or any of the other big names, they've pretty much stopped sharing information since the first wave of foundation models, but there is a ton of really impressive research in areas like NLP, deep learning, and more that wouldn't have been possible without the development of these really huge foundation models (and this isn't including the potential applications of genAI in other scientific fields, medicine and biology esp).
however, 1. innovation is never a straight line. even just looking back on genAI: the shift to using transformers, the architecture all current LLMs use and even most other genAI uses in part, was due to a breakthru in machine translation back in 2017. nobody was expecting it to be such a big hit-- the paper that published the findings on transformer models was hilariously named "attention is all you need"-- and certainly nobody had any idea that transformers could scale up to such huge sizes until OpenAI took a stab at it with GPT-3 (we actually don't know why they scale so well either! sure, we have theories like superposition, but we don't actually know).
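(for the curious: "attention" in that paper title is a concrete operation, not a metaphor. here's a rough numpy sketch of scaled dot-product attention, the core of it-- no multi-head stuff, no masking, made-up sizes, purely to show the shape of the thing, not how any real model is implemented:)

```python
# toy scaled dot-product attention in plain numpy-- the core operation from
# "attention is all you need". shapes and values here are made up for illustration.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # each row of Q asks "what am i looking for", each row of K says
    # "what do i contain"; their dot products become mixing weights over V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq, seq) similarity matrix
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted blend of the value vectors

# 5 tokens, 8-dim embeddings (tiny; real models use thousands of dimensions)
x = np.random.randn(5, 8)
out = attention(x, x, x)  # self-attention: Q, K, V all come from the same tokens
print(out.shape)          # (5, 8)
```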
which segues nicely into my bigger point, 2. there's so much about genAI we just don't know. all of a model's unsupervised pre-training, the first stage of learning that accounts for 99% of genAI's function, happens in a black box. even tho the structure of an artificial neural network is simple (it's two matrices with a nonlinear function like a ReLU between them, then a bunch of those in series), mapping out or understanding the hows/whys/etc of the emergent behavior is extremely difficult. right now, the only assured method for "get better results/reduce output error" is scaling a model to something huge, and the second mostly-reliable method is training the model on more data (size and data are closely intertwined, but a model's size is not always indicative of its training data size).
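(to make the "two matrices with a ReLU between them" bit concrete, this is roughly what one of those blocks looks like in pytorch-- the sizes are tiny and made up, and a real transformer interleaves these with attention layers, so treat it as a doodle rather than a blueprint:)

```python
# the "two matrices with a nonlinear function between them" block, in pytorch.
# sizes here are tiny and made up-- real models use thousands of dimensions
# and stack dozens of these blocks (interleaved with attention layers).
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim=64, hidden=256):
        super().__init__()
        self.up = nn.Linear(dim, hidden)    # first matrix
        self.act = nn.ReLU()                # the nonlinear function
        self.down = nn.Linear(hidden, dim)  # second matrix

    def forward(self, x):
        return self.down(self.act(self.up(x)))

# "then a bunch of those in series"
model = nn.Sequential(*[Block() for _ in range(4)])
x = torch.randn(1, 64)
print(model(x).shape)  # torch.Size([1, 64])
```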
(sorry if this is vague, i'm trying really hard not to geek spiral rn 😂💦)
getting back on point, there is a lot of research going into making model training more efficient, understanding how models process and store information (fun fact! we actually have no idea how LLMs recall basic facts. we have some theories on how, but we have no idea what the actual mechanism is that allows it. sounds wild, doesn't it?), any progress in AI memory, and also just figuring out more about where genAI breaks down. there's just so much we don't know about how genAI works-- and it's actually a lot more than how it seems on the surface:
that two-model system i mentioned in the reblog above? it doesn't just work to the benefit of both models, it's huge in patching over a lot of issues in genAI. the smaller model/AI assistant handles a ton of tasks like augmented context retrieval, simulating short-term memory, integrating apps for specific functions (ie something like a calculator app, because exact answers like math are at odds with how LLMs generate text), and a bunch of other stuff. also, on top of patching over or minimizing the issues in models we don't know how to fix yet, genAI is absolutely trained or coded to mimic specific behaviors that increase users' trust in it (ie, the speed of ChatGPT's responses? heavily researched and tested, because a slower response seems "more thoughtful"-- and therefore more trustworthy-- to users; it's not at all reflective of the actual time it takes for a model to generate an answer (time is not an asset to genAI)).
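(if it helps, this is the general shape of what i mean by the assistant routing around the model's weak spots-- a toy sketch, absolutely not how ChatGPT or any real assistant is built; call_llm is a stand-in for whatever generative model you'd plug in and the "retrieval" here is faked:)

```python
# toy sketch of the "assistant patches over the model's weak spots" idea.
# nothing here is how any real product works-- call_llm() is a placeholder
# for a generative model, and the calculator/retrieval bits are trivial fakes.
import re

ARITHMETIC = re.compile(r"[\d\s\.\+\-\*/\(\)]+")

def calculator(expr: str) -> str:
    # deterministic math instead of asking a text predictor to guess digits
    if ARITHMETIC.fullmatch(expr):
        return str(eval(expr))
    return "couldn't parse that as arithmetic"

def call_llm(prompt: str, context: list[str]) -> str:
    # placeholder for the actual generative model call
    return f"[model answer to {prompt!r} given {len(context)} retrieved notes]"

def assistant(query: str, memory: list[str]) -> str:
    # 1. route anything that looks like pure arithmetic to the calculator tool
    if ARITHMETIC.fullmatch(query.strip()):
        return calculator(query.strip())
    # 2. otherwise fake some "retrieval": grab prior notes that share a word
    relevant = [m for m in memory
                if set(m.lower().split()) & set(query.lower().split())]
    # 3. hand the query plus retrieved context to the generative model
    return call_llm(query, relevant)

memory = ["the user likes transformers", "budget meeting is tuesday"]
print(assistant("2 + 2 * 10", memory))         # 22, via the calculator
print(assistant("when is the meeting?", memory))
```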
like. people shouldn't stop researching machine learning, there is a lot to learn about it and like i said, innovation is never a straight line-- you never know what's going to push genAI or even another science field forward. but venture capital isn't investing in OpenAI because they believe in genAI. they'd love for another big jump forward to bolster all the ways they're hyping it, but given how little we know about the actual mechanisms of genAI, i'm highly skeptical of us achieving any huge leaps in the next few years. and anyone who claims we're anywhere close to artificial general intelligence, or any sort of mimicry of human intelligence, is a grifter trying to sell you grade-C bullshit.
akdj "profit-making monster"
bold fucking choice of words for a company that runs several hundred million dollars in the red every year