#AI models explained
tao-ai-update · 3 days
Text
youtube
https://youtube.com/clip/Ugkxtda4QrqvIMYxhoot7YSidwgdBIWZfHwZ?si=95Yr3N5IZEwB-8vD
1 note · View note
ultravioart · 8 months
Text
The Hangyu fan design stealing claim was disproven, but the Pal Wixen absolutely was lifted from this fan-made Mega Delphox design, which predates Palworld.
I will not be reposting the artist's fan design; please click the link and give eyes/credit to the original fanartist!
Pyroaura98/EtherealHaze made this Mega Delphox fan design prior to 2014 (the post itself dates to 2014), so either way it predates Palworld (announced in 2021).
And this is the Pal design, Wixen:
Tumblr media
Many fan-made megas for Delphox include a hat, but Wixen is obviously taking from EtherealHaze's design because:
-The red hair strands flow into the red hat, a very distinct design element of EtherealHaze's fan design.
-The red hat incorporates the ears into it as red, just like EtherealHaze's design (most other Mega Delphox fan designs leave the ears separate and a different color).
-The orange triangle chest fur.
-The white paws INSIDE the 3D red sleeve holes, just like EtherealHaze's design.
And for reference, this is the official Pokemon Delphox: note the lack of 3D sleeve holes and the black paws.
Tumblr media
It is unethical for larger companies like Pocketpair to use designs from smaller artists without consent and compensation.
TemTem, by contrast, received legal permission from the original artist for its viral fakemon platypus line.
Tumblr media
Palworld did not receive legal permission for the fanmade Mega Delphox design.
Just to make it clear, Pokemon themselves have made parody designs of other series' monsters, and in the monster-taming genre MANY games parody each other, so I don't think Palworld parodying official Pokemon is inherently bad. It's also pretty much impossible to financially damage Pokemon with parody, as Pokemon is one of the richest IPs in the world.
However, Palworld making parody of small artists' fanmon designs without consent and compensation is not ethical.
Pocketpair is a larger company taking from someone who cannot easily fight back legally. Pokemon can sue Palworld if it really becomes an issue for Pokemon, but EtherealHaze cannot do so with the same ease, because good lawyers are extremely expensive and a legal battle can be very time-consuming and stressful for just one person.
TL;DR: This post is meant to inform consumers about these products. I am not defending Pokemon, Palworld, or TemTem, only aiming to inform about practices I feel are unjust. Palworld devs lifting from fan-made fakemon without legal consent and compensation is unethical. If you still wish to play Palworld, please consider pirating it instead of purchasing/playing online (release sales and online player numbers are used to attract investors for the company/game).
Additional issues with Pocketpair's behavior towards AI and NFTs:
Pocketpair has previously published a party game that hosts an unethical generative ai art model. And no, you are not 'spotting the ai art vs hand drawn art'; that is disinformation.
In this party game every player during a round uses ai to generate images based on a given theme, except one imposter player that has to guess the theme. Then the group tries to guess which ai image was created with the imposter prompt. This game profits off of hosting an unethical generative ai image model.
Why is this generative ai model unethical?
Generative ai image models like Stable Diffusion, Midjourney, etc., are trained on open web data (scraped from the entire internet, including private info, copyrighted work, and NSFL content; the linked article discusses what kind of illegal NSFL content Midjourney, Stable Diffusion, etc. have been trained on. Images of war crimes are also used in such datasets.)
This is absolutely unacceptable and should not be supported. Datasets can absolutely be made without illegal content; truly twisted people were simply unwilling to make ethical datasets out of legal, free-to-use images.
Also, in general, big generative ai model development (be it text, image, etc.) is notorious for abusing workers. It often exposes grossly underpaid employees to traumatic materials with no proper mental health care. Link. You can read many articles about this issue online. For reference, government intelligence workers get therapy for the traumatic content they view, so even if it's ai-generated, constant exposure to traumatic content is not to be taken lightly. Workers are real people.
Additionally, Pocketpair's social media accounts have flirted with NFTs, and the CEO of Pocketpair is a founder of a cryptocurrency company called Coincheck. NFTs are scams (you purchase the receipt, not the actual item, and all NFTs lose monetary value), and if that wasn't bad enough, NFTs, crypto, and blockchain infrastructure in general waste a shitton of resources via proof-of-work processing; you might as well be pissing oil into the ocean. Link. That is not ethical.
It does not appear that the game Palworld itself uses unethical ai, nor has it stolen actual Pokemon 3D model topology, BUT the EtherealHaze fakemon design was lifted from without legal consent, which is enough for me to drop the game. Furthermore, I cannot trust Pocketpair as a company not to introduce unethical ai and NFTs into Palworld in the future, given the company's past behavior.
Whether you continue to play Palworld is up to you, but as I said before, please consider pirating the game instead of purchasing or playing online, since the stats can be used to get investors for the Pocketpair company/Palworld game.
32 notes · View notes
medicinemane · 1 year
Text
You know, you have the Whorf hypothesis, which talks about how language might affect how we think
I believe one of the things he (or someone else saying similar things) brought up was the idea that:
If we, for instance, have a barrel which used to contain a toxic chemical and is now empty, but still dangerous, does lacking a word for "empty but dangerous" influence how we think about or treat this barrel? Would someone be less cautious around it, for instance, because "empty" implies to an extent that the barrel is back to how it was before it was filled?
Anyway, this is just me establishing a concept here
My thought here is whether poorly fitting words may disproportionately warp people's understanding of concepts
I wonder if by using phrases like "artificial intelligence" we don't meaningfully skew perception of "ai" programs towards a thinking program, even among people who have some understanding of how it works (basically rapidly running a huge number of calculations until it gets an answer it predicts will be good; it's similar to those "having a simulated bird learn to walk" things you'll see, just very fast)
How much do we end up having certain terms basically become poison pills because of how ubiquitous they've become while being almost totally wrong?
I'm not even really talking about things like reasonable terms used wrong, like people saying "gaslighting" when they mean "lying"
It really is specifically with terms like "ai" where... well... where I'm afraid we may have done irrevocable damage to public understanding of something, and where... I don't know that there's a way to ever fix it and shift the language used
Just something I'm thinking about tonight
#though I'm not actually thinking about ai; I'm thinking about another term that... what I have to say isn't that spicy#but I do kind of worry it would be a little too spicy for people who've really latched onto the word#even though... I literally just want to help; I literally think that term is a poison pill to the people who use it more than anyone else#and I think I have at least a candidate replacement for it in the same way I have something like 'deep modeling' to replace 'ai'#but... I don't think... I don't think I know of anyway how I could get that change to happen#even if like I... presented these thoughts to the greatest minds and everyone agreed on a new better term... could we spread it?#just drives me nuts with ai for obvious reasons#and with this term because whenever someone actually explains what the hell they mean... it's not at all what the word they use means#and a shift in words to one that... actually explains it... I mean I think it might massively make people more receptive#don't use something that's both very charged and also... kind of just the wrong word#use a word that's accurate and you can probably bring most people around on quickly#...well... whatever... I'll sprinkle these thoughts in people's ears from time to time#and hopefully it slowly takes root in enough people to have at least some small impact#in other news it's not like I remember the name of that hypothesis#I just decided that a couple minutes search could track me down a name; make me sound knowledgeable; all while being more accurate
2 notes · View notes
zuko-always-lies · 7 months
Text
For instructions on how to opt out, look at the official staff post on the topic. It also gives more information on Tumblr's new policies. If you are opting out, remember to opt out of each separate blog individually.
Please reblog this post, so it will get more votes!
47K notes · View notes
phantomrose96 · 7 months
Text
If anyone wants to know why every tech company in the world right now is clamoring for AI like drowned rats scrabbling to board a ship, I decided to make a post to explain what's happening.
(Disclaimer to start: I'm a software engineer who's been employed full time since 2018. I am not a historian nor an overconfident Youtube essayist, so this post is my working knowledge of what I see around me and the logical bridges between pieces.)
Okay anyway. The explanation starts further back than what's going on now. I'm gonna start with the year 2000. The Dot Com Bubble just spectacularly burst. The model of "we get the users first, we learn how to profit off them later" went out in a no-money-having bang (remember this, it will be relevant later). A lot of money was lost. A lot of people ended up out of a job. A lot of startup companies went under. Investors left with a sour taste in their mouth and, in general, investment in the internet stayed pretty cooled for that decade. This was, in my opinion, very good for the internet as it was an era not suffocating under the grip of mega-corporation oligarchs and was, instead, filled with Club Penguin and I Can Haz Cheezburger websites.
Then around the 2010-2012 years, a few things happened. Interest rates got low, and then lower. Facebook got huge. The iPhone took off. And suddenly there was a huge new potential market of internet users and phone-havers, and the cheap money was available to start backing new tech startup companies trying to hop on this opportunity. Companies like Uber, Netflix, and Amazon either started in this time, or hit their ramp-up in these years by shifting focus to the internet and apps.
Now, every start-up tech company dreaming of being the next big thing has one thing in common: they need to start off by getting themselves massively in debt. Because before you can turn a profit you need to first spend money on employees and spend money on equipment and spend money on data centers and spend money on advertising and spend money on scale and and and
But also, everyone wants to be on the ship for The Next Big Thing that takes off to the moon.
So there is a mutual interest between new tech companies, and venture capitalists who are willing to invest $$$ into said new tech companies. Because if the venture capitalists can identify a prize pig and get in early, that money could come back to them 100-fold or 1,000-fold. In fact it hardly matters if they invest in 10 or 20 total bust projects along the way to find that unicorn.
But also, becoming profitable takes time. And that might mean being in debt for a long long time before that rocket ship takes off to make everyone onboard a gazzilionaire.
But luckily, for tech startup bros and venture capitalists, being in debt in the 2010's was cheap, and it only got cheaper between 2010 and 2020. If people could secure loans for ~3% or 4% annual interest, well then a $100,000 loan only really costs $3,000 of interest a year to keep afloat. And if inflation is higher than that or at least similar, you're still beating the system.
So from 2010 through early 2022, times were good for tech companies. Startups could take off with massive growth, showing massive potential for something, and venture capitalists would throw infinite money at them in the hopes of pegging just one winner who will take off. And supporting the struggling investments or the long-haulers remained pretty cheap to keep funding.
You hear constantly about "Such and such app has 10-bazillion users gained over the last 10 years and has never once been profitable", yet the thing keeps chugging along because the investors backing it aren't stressed about the immediate future, and are still banking on that "eventually" when it learns how to really monetize its users and turn that profit.
The pandemic in 2020 took a magnifying-glass-in-the-sun effect to this, as EVERYTHING was forcibly turned online which pumped a ton of money and workers into tech investment. Simultaneously, money got really REALLY cheap, bottoming out with historic lows for interest rates.
Then the tide changed with the massive inflation that struck late 2021. Because this all-gas no-brakes state of things was also contributing to off-the-rails inflation (along with your standard-fare greedflation and price gouging, given the extremely convenient excuses of pandemic hardships and supply chain issues). The federal reserve whipped out interest rate hikes to try to curb this huge inflation, which is like a fire extinguisher dousing and suffocating your really-cool, actively-on-fire party where everyone else is burning but you're in the pool. And then they did this more, and then more. And the financial climate followed suit. And suddenly money was not cheap anymore, and new loans became expensive, because loans that used to compound at 2% a year are now compounding at 7 or 8% which, in the language of compounding, is a HUGE difference. A $100,000 loan at a 2% interest rate, if not repaid a single cent in 10 years, accrues to $121,899. A $100,000 loan at an 8% interest rate, if not repaid a single cent in 10 years, more than doubles to $215,892.
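(If you want to check those numbers yourself, here's a tiny Python sketch of the same arithmetic; it assumes annual compounding and no repayments, exactly like the example above.)

def compound(principal, rate, years):
    # Balance after compounding annually with no repayments.
    return principal * (1 + rate) ** years

print(f"${compound(100_000, 0.02, 10):,.0f}")  # -> $121,899
print(f"${compound(100_000, 0.08, 10):,.0f}")  # -> $215,892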
Now it is scary and risky to throw money at "could eventually be profitable" tech companies. Now investors are watching companies burn through their current funding and, when the companies come back asking for more, investors are tightening their coin purses instead. The bill is coming due. The free money is drying up and companies are under compounding pressure to produce a profit for their waiting investors who are now done waiting.
You get enshittification. You get quality going down and price going up. You get "now that you're a captive audience here, we're forcing ads or we're forcing subscriptions on you." Don't get me wrong, the plan was ALWAYS to monetize the users. It's just that it's come earlier than expected, with way more feet-to-the-fire than these companies were expecting. ESPECIALLY with Wall Street as the other factor in funding (public) companies, where Wall Street exhibits roughly the same temperament as a baby screaming crying upset that it's soiled its own diaper (maybe that's too mean a comparison to babies), and now companies are being put through the wringer for anything LESS than infinite growth that Wall Street demands of them.
Internal to the tech industry, you get MASSIVE wide-spread layoffs. You get an industry that used to be easy to land multiple job offers shriveling up and leaving recent graduates in a desperately awful situation where no company is hiring and the market is flooded with laid-off workers trying to get back on their feet.
Because those coin-purse-clutching investors DO love virtue-signaling efforts from companies that say "See! We're not being frivolous with your money! We only spend on the essentials." And this is true even for MASSIVE, PROFITABLE companies, because those companies' value is based on the Rich Person Feeling Graph (their stock) rather than the literal profit money. A company making a genuine gazillion dollars a year still tears through layoffs and freezes hiring and removes the free batteries from the printer room (totally not speaking from experience, surely) because the investors LOVE when you cut costs and take away employee perks. The "beer on tap, ping pong table in the common area" era of tech is drying up. And we're still unionless.
Never mind that last part.
And then in early 2023, AI (more specifically, ChatGPT, which is OpenAI's large language model creation) tears its way into the tech scene with a meteor's amount of momentum. Here's Microsoft's prize pig, which it invested heavily in and is gallivanting around the pig-show with, to the desperate jealousy and rapture of every other tech company and investor wishing it had that pig. And for the first time since the interest rate hikes, investors have dollar signs in their eyes, both venture capital and Wall Street alike. They're willing to restart the hose of money (even with the new risk) because this feels big enough for them to take the risk.
Now all these companies, who were in varying stages of sweating as their bill came due, or wringing their hands as their stock prices tanked, see a single glorious gold-plated rocket up out of here, the likes of which haven't been seen since the free money days. It's their ticket to buy time, and buy investors, and say "see THIS is what will wring money forth, finally, we promise, just let us show you."
To be clear, AI is NOT profitable yet. It's a money-sink. Perhaps a money-black-hole. But everyone in the space is so wowed by it that there is a wide-spread and powerful conviction that it will become profitable and earn its keep. (Let's be real, half of that profit "potential" is the promise of automating away jobs of pesky employees who peskily cost money.) It's a tech-space industrial revolution that will automate away skilled jobs, and getting in on the ground floor is the absolute best thing you can do to get your pie slice's worth.
It's the thing that will win investors back. It's the thing that will get the investment money coming in again (or get it second-hand, if the company can be the PROVIDER of something needed for AI, which other venture-backed companies will pay handsomely for). It's the thing companies are terrified of missing out on, lest it leave them utterly irrelevant in a future where not having AI-integration is like not having a mobile phone app for your company or not having a website.
So I guess to reiterate on my earlier point:
Drowned rats. Swimming to the one ship in sight.
35K notes · View notes
kajmasterclass · 1 month
Text
youtube
0 notes
jcmarchi · 2 months
Text
ChatGPT-4 vs. Llama 3: A Head-to-Head Comparison
As the adoption of artificial intelligence (AI) accelerates, large language models (LLMs) serve a significant need across different domains. LLMs excel in advanced natural language processing (NLP) tasks, automated content generation, intelligent search, information retrieval, language translation, and personalized customer interactions.
Two of the latest examples are OpenAI's ChatGPT-4 and Meta's Llama 3. Both of these models perform exceptionally well on various NLP benchmarks.
A comparison between ChatGPT-4 and Meta Llama 3 reveals their unique strengths and weaknesses, leading to informed decision-making about their applications.
Understanding ChatGPT-4 and Llama 3
LLMs have advanced the field of AI by enabling machines to understand and generate human-like text. These AI models learn from huge datasets using deep learning techniques. For example, ChatGPT-4 can produce clear and contextual text, making it suitable for diverse applications.
Its capabilities extend beyond text generation as it can analyze complex data, answer questions, and even assist with coding tasks. This broad skill set makes it a valuable tool in fields like education, research, and customer support.
Meta AI’s Llama 3 is another leading LLM built to generate human-like text and understand complex linguistic patterns. It excels in handling multilingual tasks with impressive accuracy. Moreover, it’s efficient as it requires less computational power than some competitors.
Companies seeking cost-effective solutions can consider Llama 3 for diverse applications involving limited resources or multiple languages.
Overview of ChatGPT-4
ChatGPT-4 leverages a transformer-based architecture that can handle large-scale language tasks. The architecture allows it to process and understand complex relationships within the data.
As a result of being trained on massive text and code data, GPT-4 reportedly performs well on various AI benchmarks, including text evaluation, automatic speech recognition (ASR), audio translation, and vision understanding tasks.
[Benchmark charts: text evaluation and vision understanding]
Overview of Meta AI Llama 3
Meta AI’s Llama 3 is a powerful LLM built on an optimized transformer architecture designed for efficiency and scalability. It is pretrained on a massive dataset of over 15 trillion tokens, which is seven times larger than its predecessor, Llama 2, and includes a significant amount of code.
Furthermore, Llama 3 demonstrates exceptional capabilities in contextual understanding, information summarization, and idea generation. Meta claims that its advanced architecture efficiently manages extensive computations and large volumes of data.
[Benchmark charts: instruct model performance, instruct human evaluation, and pre-trained model performance]
ChatGPT-4 vs. Llama 3
Let’s compare ChatGPT-4 and Llama 3 to better understand their advantages and limitations. The following aspect-by-aspect comparison underscores the performance and applications of these two models:
Cost: ChatGPT-4 has free and paid options; Llama 3 is free (open-source).
Features & Updates: ChatGPT-4 offers advanced NLU/NLG, vision input, persistent threads, function calling, tool integration, and regular OpenAI updates. Llama 3 excels in nuanced language tasks, with open updates.
Integration & Customization: ChatGPT-4 offers API integration with limited customization, suiting standard solutions. Llama 3 is open-source and highly customizable, ideal for specialized uses.
Support & Maintenance: ChatGPT-4 support is provided by OpenAI through formal channels, including documentation, FAQs, and direct support for paid plans. Llama 3 relies on community-driven support through GitHub and other open forums, with a less formal support structure.
Technical Complexity: ChatGPT-4 is low to moderate, depending on whether it is used via the ChatGPT interface or via the Microsoft Azure cloud. Llama 3 is moderate to high, depending on whether a cloud platform is used or you self-host the model.
Transparency & Ethics: ChatGPT-4 provides a model card and ethical guidelines but is a black-box model subject to unannounced changes. Llama 3 is open-source with transparent training and a community license; self-hosting allows version control.
Security: ChatGPT-4 security is managed by OpenAI/Microsoft, with limited privacy via OpenAI and more control via Azure; regional availability varies. Llama 3 is cloud-managed if on Azure/AWS; self-hosting requires handling your own security.
Application: ChatGPT-4 is used for customized AI tasks; Llama 3 is ideal for complex tasks and high-quality content creation.
Ethical Considerations
Transparency in AI development is important for building trust and accountability. Both ChatGPT-4 and Llama 3 must address potential biases in their training data to ensure fair outcomes across diverse user groups.
Additionally, data privacy is a key concern that calls for stringent privacy regulations. To address these ethical concerns, developers and organizations should prioritize AI explainability techniques. These techniques include clearly documenting model training processes and implementing interpretability tools.
Furthermore, establishing robust ethical guidelines and conducting regular audits can help mitigate biases and ensure responsible AI development and deployment.
Future Developments
Undoubtedly, LLMs will advance in their architectural design and training methodologies. They will also expand dramatically across different industries, such as health, finance, and education. As a result, these models will evolve to offer increasingly accurate and personalized solutions.
Furthermore, the trend towards open-source models is expected to accelerate, leading to democratized AI access and innovation. As LLMs evolve, they will likely become more context-aware, multimodal, and energy-efficient.
To keep up with the latest insights and updates on LLM developments, visit unite.ai.
1 note · View note
signode-blog · 2 months
Text
AI Trading
What is AI and Its Relevance in Modern Trading?
1. Definition of AI
Artificial Intelligence (AI): A branch of computer science focused on creating systems capable of performing tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, understanding natural language, and perception.
Machine Learning (ML): A subset of AI that involves the…
0 notes
vampirepumpkins · 6 months
Note
"this is AI [shopped] i can tell from some of the pixels and from seeing quite a few AI arts [shops] in my time." go back to 4chan pleaseeeeeeeee and stop trying to publicly humiliate a random artist who has had a presence on this site for almost a decade
I have never been to 4chan in my life. Someone being on a site for almost a decade does not preclude them from utilizing AI. Furthermore, the artist in question has implied in the replies that they do use some AI in their work. I'm not humiliating anyone.
Idk, it's not pixels, my dude. That entire bridge doesn't make sense. Most of the illustration shows a high level of skill, and yet parts of that bridge fade in and out and the posts are spaced strangely. Where the bridge begins makes no sense, and if you try to zoom in, it's all oddly out of focus, especially compared to the pretty in-focus mountain.
Like, is it bad to look at things critically now? You don't think someone would go on the internet and misrepresent their work?
0 notes
vividverses · 10 months
Text
Exciting developments in MLOps await in 2024! 🚀 DevOps-MLOps integration, AutoML acceleration, Edge Computing rise – shaping a dynamic future. Stay ahead of the curve! #MLOps #TechTrends2024 🤖✨
0 notes
porcupine-girl · 9 months
Text
An important message to college students: Why you shouldn't use ChatGPT or other "AI" to write papers.
Here's the thing: Unlike plagiarism, where I can always find the exact source a student used, it's difficult to impossible to prove that a student used ChatGPT to write their paper. Which means I have to grade it as though the student wrote it.
So if your professor can't prove it, why shouldn't you use it?
Well, first off, it doesn't write good papers. Grading them as though the student wrote them, so far I've given GPT-enhanced papers two Ds and an F.
If you're unlucky enough to get a professor like me, they've designed their assignments to be hard to plagiarize, which means they'll also be hard to get "AI" to write well. To get a good paper out of ChatGPT for my class, you'd have to write a prompt that's so long, with so many specifics, that you might as well just write the paper yourself.
ChatGPT absolutely loves to make broad, vague statements about, for example, what topics a book covers. Sadly for my students, I ask for specific examples from the book, and it's not so good at that. Nor is it good at explaining exactly why that example is connected to a concept from class. To get a good paper out of it, you'd have to have already identified the concepts you want to discuss and the relevant examples, and quite honestly if you can do that it'll be easier to write your own paper than to coax ChatGPT to write a decent paper.
The second reason you shouldn't do it?
IT WILL PUT YOUR PROFESSOR IN A REALLY FUCKING BAD MOOD. WHEN I'M IN A BAD MOOD I AM NOT GOING TO BE GENEROUS WITH MY GRADING.
I can't prove it's written by ChatGPT, but I can tell. It does not write like a college freshman. It writes like a professional copywriter churning out articles for a content farm. And much like a large language model, the more papers written by it I see, the better I get at identifying it, because it turns out there are certain phrases it really, really likes using.
Once I think you're using ChatGPT I will be extremely annoyed while I grade your paper. I will grade it as if you wrote it, but I will not grade it generously. I will not give you the benefit of the doubt if I'm not sure whether you understood a concept or not. I will not squint and try to understand how you thought two things are connected that I do not think are connected.
Moreover, I will continue to not feel generous when calculating your final grade for the class. Usually, if someone has been coming to class regularly all semester, turned things in on time, etc, then I might be willing to give them a tiny bit of help - round a 79.3% up to a B-, say. If you get a 79.3%, you will get your C+ and you'd better be thankful for it, because if you try to complain or claim you weren't using AI, I'll be letting the college's academic disciplinary committee decide what grade you should get.
Eventually my school will probably write actual guidelines for me to follow when I suspect use of AI, but for now, it's the wild west and it is in your best interest to avoid a showdown with me.
12K notes · View notes
wormed-woman · 1 year
Text
thinking about deleting insta and just using it as kind of a portfolio thing bc just the amount of like braindead comments and just straight up toxic art community culture
1 note · View note
river-taxbird · 1 year
Text
There is no such thing as AI.
How to help the non technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an AI image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out that those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence, like Data from Star Trek or the Terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. (This is the basis of most technology people call AI.)
Language model (LM or LLM): a probabilistic model of a natural language that can generate probabilities of a series of words, based on text corpora in one or multiple languages it was trained on. (This would be your ChatGPT.)
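To make "generate probabilities of a series of words" concrete, here's a toy bigram model in Python. Real LLMs use neural networks trained on enormous corpora, but the output is the same kind of thing: probabilities over the next word. The tiny corpus here is made up purely for illustration.

from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # count adjacent word pairs
after_the = {w2: n for (w1, w2), n in bigrams.items() if w1 == "the"}
total = sum(after_the.values())
for word, n in after_the.items():
    print(f"P({word} | the) = {n}/{total}")  # P(cat | the) = 2/3, P(mat | the) = 1/3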
Generative adversarial network (GAN): a class of machine learning frameworks and a prominent framework for approaching generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion models: models that generate the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images with added Gaussian noise by learning to remove the noise. After the training is complete, it can then be used for image generation by starting with a random noise image and denoising it. (This is the more common technology behind AI images, including Dall-E and Stable Diffusion. I added this one to the post after, as it was brought to my attention it is now more common than GANs.)
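As a rough sketch of that generation loop: start from pure noise and repeatedly apply a denoiser. The real denoiser is a trained neural network; the stand-in below just nudges pixels toward a fixed "learned" image, so this shows only the shape of the process, not a real model.

import numpy as np

rng = np.random.default_rng(0)
learned_image = np.ones((8, 8))  # stand-in for what a real model has learned
x = rng.normal(size=(8, 8))      # start from pure Gaussian noise

def toy_denoiser(noisy):
    # A real diffusion model predicts the noise to subtract at each step;
    # this toy version just steps a little toward the learned image.
    return 0.1 * (learned_image - noisy)

for _ in range(100):             # iteratively remove a little noise
    x = x + toy_denoiser(x)

print(round(float(np.abs(x - learned_image).mean()), 6))  # ~0: noise removed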
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes · View notes
reasonsforhope · 4 months
Text
AI models can seemingly do it all: generate songs, photos, stories, and pictures of what your dog would look like as a medieval monarch. 
But all of that data and imagery is pulled from real humans — writers, artists, illustrators, photographers, and more — who have had their work compressed and funneled into the training of AI models without compensation.
Kelly McKernan is one of those artists. In 2023, they discovered that Midjourney, an AI image generation tool, had used their unique artistic style to create over twelve thousand images. 
“It was starting to look pretty accurate, a little infringe-y,” they told The New Yorker last year. “I can see my hand in this stuff, see how my work was analyzed and mixed up with some others’ to produce these images.” 
For years, leading AI companies like Midjourney and OpenAI have operated with seemingly no regulation, but a landmark court case could change that.
On May 9, a California federal judge allowed ten artists to move forward with their allegations against Stability AI, Runway, DeviantArt, and Midjourney. This includes proceeding with discovery, which means the AI companies will be asked to turn over internal documents for review and allow witness examination. 
Lawyer-turned-content-creator Nate Hake took to X, formerly known as Twitter, to celebrate the milestone, saying that “discovery could help open the floodgates.” 
“This is absolutely huge because so far the legal playbook by the GenAI companies has been to hide what their models were trained on,” Hake explained...
“I’m so grateful for these women and our lawyers,” McKernan posted on X, above a picture of them embracing Ortiz and Andersen. “We’re making history together as the largest copyright lawsuit in history moves forward.” ...
The case is one of many AI copyright theft cases brought forward in the last year, but no other case has gotten this far into litigation. 
“I think having us artist plaintiffs visible in court was important,” McKernan wrote. “We’re the human creators fighting a Goliath of exploitative tech.”
“There are REAL people suffering the consequences of unethically built generative AI. We demand accountability, artist protections, and regulation.” 
-via GoodGoodGood, May 10, 2024
2K notes · View notes
knowledgegyanai · 1 year
Text
Gyan is an auto-curating, self-organizing research engine built on the first explainable language-model-based natural language processing engine, useful for data scientists, researchers, and professionals who need reliable NLP and NLU outputs. Gyan is fully explainable, unbiased, and easy to maintain.
0 notes
Text
Google’s enshittification memos
Tumblr media
[Note, 9 October 2023: Google disputes the veracity of this claim, but has declined to provide the exhibits and testimony to support its claims. Read more about this here.]
Tumblr media
When I think about how the old, good internet turned into the enshitternet, I imagine a series of small compromises, each seemingly reasonable at the time, each contributing to a cultural norm of making good things worse, and worse, and worse.
Think about Unity President Marc Whitten's nonpology for his company's disastrous rug-pull, in which they declared that everyone who had paid good money to use their tool to make a game would have to keep paying, every time someone downloaded that game:
The most fundamental thing that we’re trying to do is we’re building a sustainable business for Unity. And for us, that means that we do need to have a model that includes some sort of balancing change, including shared success.
https://www.wired.com/story/unity-walks-back-policies-lost-trust/
"Shared success" is code for, "If you use our tool to make money, we should make money too." This is bullshit. It's like saying, "We just want to find a way to share the success of the painters who use our brushes, so every time you sell a painting, we want to tax that sale." Or "Every time you sell a house, the company that made the hammer gets to wet its beak."
And note that they're not talking about shared risk here – no one at Unity is saying, "If you try to make a game with our tools and you lose a million bucks, we're on the hook for ten percent of your losses." This isn't partnership, it's extortion.
How did a company like Unity – which became a market leader by making a tool that understood the needs of game developers and filled them – turn into a protection racket? One bad decision at a time. One rationalization and then another. Slowly, and then all at once.
When I think about this enshittification curve, I often think of Google, a company that had its users' backs for years, which created a genuinely innovative search engine that worked so well it seemed like magic, a company whose employees often had their pick of jobs, but chose the "don't be evil" gig because that mattered to them.
People make fun of that "don't be evil" motto, but if your key employees took the gig because they didn't want to be evil, and then you ask them to be evil, they might just quit. Hell, they might make a stink on the way out the door, too:
https://theintercept.com/2018/09/13/google-china-search-engine-employee-resigns/
Google is a company whose founders started out by publishing a scientific paper describing their search methodology, in which they said, "Oh, and by the way, ads will inevitably turn your search engine into a pile of shit, so we're gonna stay the fuck away from them":
http://infolab.stanford.edu/pub/papers/google.pdf
Those same founders retained a controlling interest in the company after it went IPO, explaining to investors that they were going to run the business without having their elbows jostled by shortsighted Wall Street assholes, so they could keep it from turning into a pile of shit:
https://abc.xyz/investor/founders-letters/ipo-letter/
And yet, it's turned into a pile of shit. Google search is so bad you might as well ask Jeeves. The company's big plan to fix it? Replace links to webpages with florid paragraphs of chatbot nonsense filled with supremely confident lies:
https://pluralistic.net/2023/05/14/googles-ai-hype-circle/
How did the company get this bad? In part, this is the "curse of bigness." The company can't grow by attracting new users. When you have 90%+ of the market, there are no new customers to sign up. Hypothetically, they could grow by going into new lines of business, but Google is incapable of making a successful product in-house and also kills most of the products it buys from other, more innovative companies:
https://killedbygoogle.com/
Theoretically, the company could pursue new lines of business in-house, and indeed, the current leaders of companies like Amazon, Microsoft and Apple are all execs who figured out how to get the whole company to do something new, and were elevated to the CEO's office, making each one a billionaire and sealing their place in history.
It is for this very reason that any exec at a large firm who tries to make a business-wide improvement gets immediately and repeatedly knifed by all their colleagues, who correctly reason that if someone else becomes CEO, then they won't become CEO. Machiavelli was an optimist:
https://pluralistic.net/2023/07/28/microincentives-and-enshittification/
With no growth from new customers, and no growth from new businesses, "growth" has to come from squeezing workers (say, laying off 12,000 engineers after a stock buyback that would have paid their salaries for the next 27 years), or business customers (say, by colluding with Facebook to rig the ad market with the Jedi Blue conspiracy), or end-users.
Now, in theory, we might never know exactly what led to the enshittification of Google. In theory, all of the compromises, debates and plots could be lost to history. But tech is not an oral culture, it's a written one, and techies write everything down and nothing is ever truly deleted.
Time and again, Big Tech tells on itself. Think of FTX's main conspirators all hanging out in a group chat called "Wirefraud." Amazon naming its program targeting weak, small publishers the "Gazelle Project" ("approach these small publishers the way a cheetah would pursue a sickly gazelle”). Amazon documenting the fact that users were unknowingly signing up for Prime and getting pissed; then figuring out how to reduce accidental signups, then deciding not to do it because it liked the money too much. Think of Zuck emailing his CFO in the middle of the night to defend his outsized offer to buy Instagram on the basis that users like Insta better and Facebook couldn't compete with them on quality.
It's like every Big Tech schemer has a folder on their desktop called "Mens Rea" filled with files like "Copy_of_Premeditated_Murder.docx":
https://doctorow.medium.com/big-tech-cant-stop-telling-on-itself-f7f0eb6d215a?sk=351f8a54ab8e02d7340620e5eec5024d
Right now, Google's on trial for its sins against antitrust law. It's a hard case to make. To secure a win, the prosecutors at the DoJ Antitrust Division are going to have to prove what was going on in Google execs' minds when they took the actions that led to the company's dominance. They're going to have to show that the company deliberately undertook to harm its users and customers.
Of course, it helps that Google put it all in writing.
Last week, there was a huge kerfuffle over the DoJ's practice of posting its exhibits from the trial to a website each night. This is a totally normal thing to do – a practice that dates back to the Microsoft antitrust trial. But Google pitched a tantrum over this and said that the docs the DoJ were posting would be turned into "clickbait." Which is another way of saying, "the public would find these documents very interesting, and they would be damning to us and our case":
https://www.bigtechontrial.com/p/secrecy-is-systemic
After initially deferring to Google, Judge Amit Mehta finally gave the Justice Department the greenlight to post the document. It's up. It's wild:
https://www.justice.gov/d9/2023-09/416692.pdf
The document is described as "notes for a course on communication" that Google VP for Finance Michael Roszak prepared. Roszak says he can't remember whether he ever gave the presentation, but insists that the remit for the course required him to tell students "things I didn't believe," and that's why the document is "full of hyperbole and exaggeration."
OK.
But here's what the document says: "search advertising is one of the world's greatest business models ever created…illicit businesses (cigarettes or drugs) could rival these economics…[W]e can mostly ignore the demand side…(users and queries) and only focus on the supply side of advertisers, ad formats and sales."
It goes on to say that this might be changing, and proposes a way to balance the interests of the search and ads teams, which are at odds, with search worrying that ads are pushing them to produce "unnatural search experiences to chase revenue."
"Unnatural search experiences to chase revenue" is a thinly veiled euphemism for the prophetic warnings in that 1998 Pagerank paper: "The goals of the advertising business model do not always correspond to providing quality search to users." Or, more plainly, "ads will turn our search engine into a pile of shit."
And, as Roszak writes, Google is "able to ignore one of the fundamental laws of economics…supply and demand." That is, the company has become so dominant and cemented its position so thoroughly as the default search engine across every platform and system that even if it makes its search terrible to goose revenues, users won't leave. As Lily Tomlin put it on SNL: "We don't have to care, we're the phone company."
In the enshittification cycle, companies first lure in users with surpluses – like providing the best search results rather than the most profitable ones – with an eye to locking them in. In Google's case, that lock-in has multiple facets, but the big one is spending billions of dollars – enough to buy a whole Twitter, every single year – to be the default search everywhere.
Google doesn't buy its way to dominance because it has the very best search results and it wants to shield you from inferior competitors. The economically rational case for buying default position is that preventing competition is more profitable than succeeding by outperforming competitors. The best reason to buy the default everywhere is that it lets you lower quality without losing business. You can "ignore the demand side, and only focus on advertisers."
For a lot of people, the analysis stops here. "If you're not paying for the product, you're the product." Google locks in users and sells them to advertisers, who are their co-conspirators in a scheme to screw the rest of us.
But that's not right. For one thing, paying for a product doesn't mean you won't be the product. Apple charges a thousand bucks for an iPhone and then nonconsensually spies on every iOS user in order to target ads to them (and lies about it):
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
John Deere charges six figures for its tractors, then runs a grift that blocks farmers from fixing their own machines, and then uses their control over repair to silence farmers who complain about it:
https://pluralistic.net/2022/05/31/dealers-choice/#be-a-shame-if-something-were-to-happen-to-it
Fair treatment from a corporation isn't a loyalty program that you earn through sufficient spending. Companies that can sell you out, will sell you out, and then cry victim, insisting that they were only doing their fiduciary duty for their sacred shareholders. Companies are disciplined by fear of competition, regulation or – in the case of tech platforms – customers seizing the means of computation and installing ad-blockers, alternative clients, multiprotocol readers, etc:
https://doctorow.medium.com/an-audacious-plan-to-halt-the-internets-enshittification-and-throw-it-into-reverse-3cc01e7e4604?sk=85b3f5f7d051804521c3411711f0b554
Which is where the next stage of enshittification comes in: when the platform withdraws the surplus it had allocated to lure in – and then lock in – business customers (like advertisers) and reallocate it to the platform's shareholders.
For Google, there are several rackets that let it screw over advertisers as well as searchers (the advertisers are paying for the product, and they're also the product). Some of those rackets are well-known, like Jedi Blue, the market-rigging conspiracy that Google and Facebook colluded on:
https://en.wikipedia.org/wiki/Jedi_Blue
But thanks to the antitrust trial, we're learning about more of these. Megan Gray – ex-FTC, ex-DuckDuckGo – was in the courtroom last week when evidence was presented on Google execs' panic over a decline in "ad generating searches" and the sleazy gimmick they came up with to address it: manipulating the "semantic matching" on user queries:
https://www.wired.com/story/google-antitrust-lawsuit-search-results/
When you send a query to Google, it expands that query with terms that are similar – for example, if you search on "Weds" it might also search for "Wednesday." In the slides shown in the Google trial, we learned about another kind of semantic matching that Google performed, this one intended to turn your search results into "a twisted shopping mall you can’t escape."
Here's how that worked: when you ran a query like "children's clothing," Google secretly appended the brand name of a kids' clothing manufacturer to the query. This, in turn, triggered a ton of ads – because rival brands will have bought ads against their competitors' name (like Pepsi buying ads that are shown over queries for Coke).
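To make the mechanics concrete, here's a hypothetical Python sketch of that kind of query rewriting. The function, the brand table, and "ExampleKidsBrand" are all invented for illustration; none of this is taken from the trial exhibits.

def expand_query(user_query):
    # Benign semantic matching: expansions the user would expect.
    synonyms = {"weds": "wednesday"}
    terms = [synonyms.get(t, t) for t in user_query.lower().split()]
    # The alleged revenue matching: quietly bolt a monetizable brand name
    # onto the query. The user never sees it, but the ad auction does.
    monetizable_brands = {"children's clothing": "ExampleKidsBrand"}  # hypothetical
    brand = monetizable_brands.get(user_query.lower())
    if brand:
        terms.append(brand)
    return " ".join(terms)

print(expand_query("children's clothing"))  # -> children's clothing ExampleKidsBrand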
Here we see surpluses being taken away from both end-users and business customers – that is, searchers and advertisers. For searchers, it doesn't matter how much you refine your query, you're still going to get crummy search results because there's an unkillable, hidden search term stuck to your query, like a piece of shit that Google keeps sticking to the sole of your shoe.
But for advertisers, this is also a scam. They're paying to be matched to users who search on a brand name, and you didn't search on that brand name. It's especially bad for the company whose name has been appended to your search, because Google has a protection racket where the company that matches your search has to pay extra in order to show up overtop of rivals who are worse matches. Both the matching company and those rivals have given Google a credit-card that Google gets to bill every time a user searches on the company's name, and Google is just running fraudulent charges through those cards.
And, of course, Google put this in writing. I mean, of course they did. As we learned from the documentary The Incredibles, supervillains can't stop themselves from monologuing, and in big, sprawling monopolists, these monologues have to be transmitted electronically – and often indelibly – to far-flung co-cabalists.
As Gray points out, this is an incredibly blunt enshittification technique: "it hadn’t even occurred to me that Google just flat out deletes queries and replaces them with ones that monetize better." We don't know how long Google did this for or how frequently this bait-and-switch was deployed.
But if this is a blunt way of Google smashing its fist down on the scales that balance search quality against ad revenues, there's plenty of subtler ways the company could sneak a thumb on there. A Google exec at the trial rhapsodized about his company's "contract with the user" to deliver an "honest results policy," but given how bad Google search is these days, we're left to either believe he's lying or that Google sucks at search.
The paper trail offers a tantalizing look at how a company went from doing something that was so good it felt like a magic trick to being "able to ignore one of the fundamental laws of economics…supply and demand," able to "ignore the demand side…(users and queries) and only focus on the supply side of advertisers."
What's more, this is a system where everyone loses (except for Google): this isn't a grift run by Google and advertisers on users – it's a grift Google runs on everyone.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/10/03/not-feeling-lucky/#fundamental-laws-of-economics
Tumblr media Tumblr media
My next novel is The Lost Cause, a hopeful novel of the climate emergency. Amazon won't sell the audiobook, so I made my own and I'm pre-selling it on Kickstarter!
6K notes · View notes