#AI models explained
Explore tagged Tumblr posts
Text
youtube
https://youtube.com/clip/Ugkxtda4QrqvIMYxhoot7YSidwgdBIWZfHwZ?si=95Yr3N5IZEwB-8vD
#AI#Artificial Intelligence#Transformers AI#AI advancements#AI models#AI superpowers#AI predictions#AI data#AI innovation#AI research#AI engine#AI development#AI computing power#AI language models#AI translation#AI chess#AI jokes#AI applications#AI models explained#AI future#AI capabilities#AI trends#AI knowledge#AI data models#AI learning#AI growth#Youtube
1 note
Text
The Hangyu fan-design-stealing claim was disproven, but this Mega Delphox design, which predates Palworld, absolutely was lifted from a fan design to make the Pal Wixen.
I will not be reposting the artist's fan design; please click the link and give eyes/credit to the original fanartist!
Pyroaura98/EtherealHaze made this Mega Delphox fan design prior to 2014 (the post itself is from 2014). Either way, it predates Palworld (announced in 2021).
And this is the Pal design, Wixen:
Many fan-made Megas for Delphox include a hat, but Wixen is obviously taking from the design EtherealHaze made, because:
-The red hair strands flow into the red hat, a very distinct design element of EtherealHaze's fan design.
-The red hat incorporates the ears into it as red, just like EtherealHaze's design (most other fan designs for Mega Delphox leave the ears separate and a different color).
-The orange triangle chest fur.
-The white paws INSIDE the 3D red sleeve holes, just like EtherealHaze's design.
And for reference, this is the official Pokemon Delphox: note no 3D sleeve holes, and black paws.
It is unethical for larger companies like Pocketpair to use designs from smaller artists without consent and compensation.
TemTem received legal permission for the viral fakemon platypus line from the original artist.
Palworld did not receive legal permission for the fanmade Mega Delphox design.
Just to make it clear, Pokemon themselves have made parody designs of other series' monsters, and in the monster-taming genre MANY games parody each other, so I don't think Palworld parodying official Pokemon is inherently bad. It's also pretty much impossible to financially damage Pokemon with parody, as Pokemon is one of the richest IPs in the world.
However, Palworld making parody of small artist's fanmon designs without consent and compensation is not ethical.
Pocketpair is a larger company taking from someone who cannot legally fight back. Pokemon can sue Palworld if it really is an issue for Pokemon, but EtherealHaze cannot do so with the same ease because good lawyers are extremely expensive, and the legal battle can be very time consuming and stressful for just one person.
TL;DR: This post is meant to inform consumers about the products. I am not defending Pokemon, Palworld, nor TemTem, only aiming to inform about practices I feel are unjust. The Palworld devs lifting from fan-made fakemon without legal consent and compensation is unethical. If you still wish to play Palworld, please consider pirating Palworld instead of purchasing/playing online (release sales and online player numbers are used to get investors for the company/game).
Additional issues with Pocketpair's behavior towards AI and NFTs:
Pocketpair has previously published a party game that hosts an unethical generative ai art model. And no, you are not 'spotting the ai art vs hand-drawn art' -- that is disinformation.
In this party game, every player during a round uses ai to generate images based on a given theme, except for one imposter player, who has to guess the theme. Then the group tries to guess which ai image was created with the imposter's prompt. This game profits off of hosting an unethical generative ai image model.
Why is this generative ai model unethical?
Generative ai image models like Stable Diffusion, Midjourney, etc., are trained on open web data (scraping the entire internet, which sweeps up private info, copyrighted work, and NSFL content -- the linked article discusses what kind of illegal NSFL content Midjourney, Stable Diffusion, etc. have been trained on. Images of war crimes are also used in such datasets).
This is absolutely unacceptable and should not be supported. Datasets can absolutely be made without illegal content; truly twisted people were simply unwilling to build ethical datasets from legal, free-to-use images.
Also, in general, big generative ai model development (be it text, image, etc.) is notorious for abusing workers. It often exposes grossly underpaid employees to traumatic material with no proper mental health care. Link. You can read many articles about this issue online. For reference, government intelligence workers get therapy for the traumatic content they view, so even if it's ai-generated, constant exposure to traumatic content is not to be taken lightly. Workers are real people.
Additionally, Pocketpair's social media accounts have flirted with NFTs, and the CEO of Pocketpair is a founder of a cryptocurrency company called Coincheck. NFTs are scams (you purchase the receipt, not the actual item, and all NFTs lose monetary value), and if that wasn't bad enough, NFTs, crypto, and blockchain infrastructure in general waste a shit-ton of resources via proof-of-work processing -- you might as well be pissing oil into the ocean. Link. That is not ethical.
It does not appear that the game Palworld itself uses unethical ai, nor has it stolen actual Pokemon 3D model topology, BUT the EtherealHaze fakemon design was lifted from without legal consent, which is enough for me to drop the game. Furthermore, I cannot trust Pocketpair as a company not to introduce unethical ai and NFTs into Palworld in the future, given the company's past behavior.
Whether you continue to play Palworld is up to you, but as I said before, please consider pirating the game instead of purchasing or playing online, since the stats can be used to get investors for the Pocketpair company/Palworld game.
#palworld#pokemon#delphox#mega delphox#text#long text#image#art#mentions of really dark stuff under the cut but it is nessisary to explain why generative ai models are unethical to financially support
32 notes
Text
Impact and innovation of AI in energy use with James Chalmers
New Post has been published on https://thedigitalinsider.com/impact-and-innovation-of-ai-in-energy-use-with-james-chalmers/
Impact and innovation of AI in energy use with James Chalmers
In the very first episode of our monthly Explainable AI podcast, hosts Paul Anthony Claxton and Rohan Hall sat down with James Chalmers, Chief Revenue Officer of Novo Power, to discuss one of the most pressing issues in AI today: energy consumption and its environmental impact.
Together, they explored how AI’s rapid expansion is placing significant demands on global power infrastructures and what leaders in the tech industry are doing to address this.
The conversation covered various important topics, from the unique power demands of generative AI models to potential solutions like neuromorphic computing and waste heat recapture. If you’re interested in how AI shapes business and global energy policies, this episode is a must-listen.
Why this conversation matters for the future of AI
The rise of AI, especially generative models, isn’t just advancing technology; it’s consuming power at an unprecedented rate. Understanding these impacts is crucial for AI enthusiasts who want to see AI development continue sustainably and ethically.
As James explains, AI’s current reliance on massive datasets and intensive computational power has given it the fastest-growing energy footprint of any technology in history. For those working in AI, understanding how to manage these demands can be a significant asset in building future-forward solutions.
Main takeaways
AI’s power consumption problem: Generative AI models, which require vast amounts of energy for training and generation, consume ten times more power than traditional search engines.
Waste heat utilization: Nearly all power in data centers is lost as waste heat. Solutions like those at Novo Power are exploring how to recycle this energy.
Neuromorphic computing: This emerging technology, inspired by human neural networks, promises more energy-efficient AI processing.
Shift to responsible use: AI can help businesses address inefficiencies, but organizations need to integrate AI where it truly supports business goals rather than simply following trends.
Educational imperative: For AI to reach its potential without causing environmental strain, a broader understanding of its capabilities, impacts, and sustainable use is essential.
Meet James Chalmers
James Chalmers is a seasoned executive and strategist with extensive international experience guiding ventures through fundraising, product development, commercialization, and growth.
As the Founder and Managing Partner at BaseCamp, he has reshaped traditional engagement models between startups, service providers, and investors, emphasizing a unique approach to creating long-term value through differentiation.
Rather than merely enhancing existing processes, James champions transformative strategies that set companies apart, strongly emphasizing sustainable development.
Numerous accolades validate his work, including recognition from Forbes and Inc. Magazine as a leader of one of the Fastest-Growing and Most Innovative Companies, as well as B Corporation’s Best for The World and MedTech World’s Best Consultancy Services.
He’s also a LinkedIn ‘Top Voice’ on Product Development, Entrepreneurship, and Sustainable Development, reflecting his ability to drive substantial and sustainable growth through innovation and sound business fundamentals.
At BaseCamp, James applies his executive expertise to provide hands-on advisory services in fundraising, product development, commercialization, and executive strategy.
His commitment extends beyond addressing immediate business challenges; he prioritizes building competency and capacity within each startup he advises. Focused on sustainability, his work is dedicated to supporting companies that address one or more of the United Nations’ 17 Sustainable Development Goals through AI, DeepTech, or Platform Technologies.
About the hosts:
Paul Anthony Claxton – Q1 Velocity Venture Capital | LinkedIn
www.paulclaxton.io – Managing General Partner at Q1 Velocity Venture Capital · Education: Harvard Extension School · Location: Beverly Hills
Rohan Hall – Code Genie AI | LinkedIn
Are you ready to transform your business using the power of AI? With over 30 years of… · Experience: Code Genie AI · Location: Los Angeles Metropolitan Area
Like what you see? Then check out tonnes more.
From exclusive content by industry experts and an ever-increasing bank of real world use cases, to 80+ deep-dive summit presentations, our membership plans are packed with awesome AI resources.
Subscribe now
#ai#AI development#AI models#approach#Artificial Intelligence#bank#basecamp#billion#Building#Business#business goals#code#Community#Companies#computing#content#data#Data Centers#datasets#development#education#Emerging Technology#energy#energy consumption#Energy-efficient AI#engines#Environmental#environmental impact#Explainable AI#extension
3 notes
Text
You know, you have the Whorf hypothesis, which talks about how language might affect how we think
I believe one of the things he (or someone else saying similar things) brought up was the idea that:
If we, for instance, have barrels which used to contain a toxic chemical and are now empty, but still dangerous, does lacking a word for "empty but dangerous" influence how we think about or treat those barrels? Would someone be less cautious around one, for instance, because "empty" implies to an extent that the barrel is back to how it was before it was filled?
Anyway, this is just me establishing a concept here
My thought here is if poorly fitting words may disproportionately warp people's understanding of concepts
I wonder if, by using phrases like "artificial intelligence," we don't meaningfully skew perception of "ai" programs towards a thinking program, even among people who have some understanding of how it works (basically rapidly running a huge number of calculations until it gets an answer it scores as good; it's similar to those "having a simulated bird learn to walk" things you'll see, just very fast)
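To make that parenthetical concrete: here's a minimal random-search sketch in Python (my own toy illustration, not any specific product's algorithm). It never "thinks"; it just keeps whichever guess a scoring function rates higher, many thousands of times, which is the basic shape of the "run calculations until the answer scores well" loop.

```python
import random

def score(params):
    # Toy "fitness" function: closeness to a hidden target.
    # A real system scores something like "distance walked" or
    # "probability of the next word" instead.
    target = [0.3, -1.2, 0.8]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

best = [random.uniform(-2, 2) for _ in range(3)]
for _ in range(10_000):  # rapidly running a number of calculations
    candidate = [p + random.gauss(0, 0.1) for p in best]
    if score(candidate) > score(best):  # keep whatever scores better
        best = candidate

print(best)  # ends up near the target, with no "thinking" involved
```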
How much do we end up having certain terms basically become poison pills because of how ubiquitous they've become while being almost totally wrong
I'm not even really talking about things like reasonable terms used wrong, like people saying "gaslighting" when they mean "lying"
It really is specifically with terms like "ai" where... well... where I'm afraid we may have done irrevocable damage to public understanding of something, and where... I don't know that there's a way to ever fix it and shift the language used
Just something I'm thinking about tonight
#though I'm not actually thinking about ai; I'm thinking about another term that... what I have to say isn't that spicy#but I do kind of worry it would be a little too spicy for people who've really latched onto the word#even though... I literally just want to help; I literally think that term is a poison pill to the people who use it more than anyone else#and I think I have at least a candidate replacement for it in the same way I have something like 'deep modeling' to replace 'ai'#but... I don't think... I don't think I know of anyway how I could get that change to happen#even if like I... presented these thoughts to the greatest minds and everyone agreed on a new better term... could we spread it?#just drives me nuts with ai for obvious reasons#and with this term because whenever someone actually explains what the hell they mean... it's not at all what the word they use means#and a shift in words to one that... actually explains it... I mean I think it might massively make people more receptive#don't use something that's both very charged and also... kind of just the wrong word#use a word that's accurate and you can probably bring most people around on quickly#...well... whatever... I'll sprinkle these thoughts in people's ears from time to time#and hopefully it slowly takes root in enough people to have at least some small impact#in other news it's not like I remember the name of that hypothesis#I just decided that a couple minutes search could track me down a name; make me sound knowledgeable; all while being more accurate
2 notes
Text
For instructions on how to opt out, look at the official staff post on the topic. It also gives more information on Tumblr's new policies. If you are opting out, remember to opt out on each separate blog individually.
Please reblog this post, so it will get more votes!
#third party sharing#third-party sharing#scrapping#ai scrapping#Polls#tumblr#tumblr staff#poll#please reblog#art#everything else#features#opt out#policies#data privacy#privacy#please boost#staff
47K notes
Text
“I guess I just don’t understand how-“ you don’t need to understand every high level scientific breakthrough in the world, you just have to stop having moral panics about it, that’s all
#‘I dont understand what physics has to do with this AI’ then maybe you aren’t qualified to have an opinion on this fucking Nobel prize 😭😭😭#like point blank that’s it#I’m sure someone could explain it#but fundamentally the only ones that will be able to talk about whether or not this Nobel prize award makes sense is actually fucking#physicists and likely only physicists who have a focus on what this AI model was made for
1 note
Text
If anyone wants to know why every tech company in the world right now is clamoring for AI like drowned rats scrabbling to board a ship, I decided to make a post to explain what's happening.
(Disclaimer to start: I'm a software engineer who's been employed full time since 2018. I am not a historian nor an overconfident Youtube essayist, so this post is my working knowledge of what I see around me and the logical bridges between pieces.)
Okay anyway. The explanation starts further back than what's going on now. I'm gonna start with the year 2000. The Dot Com Bubble just spectacularly burst. The model of "we get the users first, we learn how to profit off them later" went out in a no-money-having bang (remember this, it will be relevant later). A lot of money was lost. A lot of people ended up out of a job. A lot of startup companies went under. Investors left with a sour taste in their mouth and, in general, investment in the internet stayed pretty cooled for that decade. This was, in my opinion, very good for the internet as it was an era not suffocating under the grip of mega-corporation oligarchs and was, instead, filled with Club Penguin and I Can Haz Cheezburger websites.
Then around the 2010-2012 years, a few things happened. Interest rates got low, and then lower. Facebook got huge. The iPhone took off. And suddenly there was a huge new potential market of internet users and phone-havers, and the cheap money was available to start backing new tech startup companies trying to hop on this opportunity. Companies like Uber, Netflix, and Amazon either started in this time, or hit their ramp-up in these years by shifting focus to the internet and apps.
Now, every start-up tech company dreaming of being the next big thing has one thing in common: they need to start off by getting themselves massively in debt. Because before you can turn a profit you need to first spend money on employees and spend money on equipment and spend money on data centers and spend money on advertising and spend money on scale and and and
But also, everyone wants to be on the ship for The Next Big Thing that takes off to the moon.
So there is a mutual interest between new tech companies, and venture capitalists who are willing to invest $$$ into said new tech companies. Because if the venture capitalists can identify a prize pig and get in early, that money could come back to them 100-fold or 1,000-fold. In fact it hardly matters if they invest in 10 or 20 total bust projects along the way to find that unicorn.
But also, becoming profitable takes time. And that might mean being in debt for a long long time before that rocket ship takes off to make everyone onboard a gazzilionaire.
But luckily, for tech startup bros and venture capitalists, being in debt in the 2010's was cheap, and it only got cheaper between 2010 and 2020. If people could secure loans for ~3% or 4% annual interest, well then a $100,000 loan only really costs $3,000 of interest a year to keep afloat. And if inflation is higher than that or at least similar, you're still beating the system.
So from 2010 through early 2022, times were good for tech companies. Startups could take off with massive growth, showing massive potential for something, and venture capitalists would throw infinite money at them in the hopes of pegging just one winner who will take off. And supporting the struggling investments or the long-haulers remained pretty cheap to keep funding.
You hear constantly about "Such and such app has 10-bazillion users gained over the last 10 years and has never once been profitable", yet the thing keeps chugging along because the investors backing it aren't stressed about the immediate future, and are still banking on that "eventually" when it learns how to really monetize its users and turn that profit.
The pandemic in 2020 took a magnifying-glass-in-the-sun effect to this, as EVERYTHING was forcibly turned online which pumped a ton of money and workers into tech investment. Simultaneously, money got really REALLY cheap, bottoming out with historic lows for interest rates.
Then the tide changed with the massive inflation that struck late 2021. Because this all-gas no-brakes state of things was also contributing to off-the-rails inflation (along with your standard-fare greedflation and price gouging, given the extremely convenient excuses of pandemic hardships and supply chain issues). The federal reserve whipped out interest rate hikes to try to curb this huge inflation, which is like a fire extinguisher dousing and suffocating your really-cool, actively-on-fire party where everyone else is burning but you're in the pool. And then they did this more, and then more. And the financial climate followed suit. And suddenly money was not cheap anymore, and new loans became expensive, because loans that used to compound at 2% a year are now compounding at 7 or 8% which, in the language of compounding, is a HUGE difference. A $100,000 loan at a 2% interest rate, if not repaid a single cent in 10 years, accrues to $121,899. A $100,000 loan at an 8% interest rate, if not repaid a single cent in 10 years, more than doubles to $215,892.
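If you want to sanity-check those loan numbers, the compounding is just principal × (1 + rate)^years. A quick Python sketch using the figures from the paragraph above:

```python
def balance(principal, annual_rate, years):
    """Unpaid loan balance with annual compounding and no repayments."""
    return principal * (1 + annual_rate) ** years

print(round(balance(100_000, 0.02, 10)))  # 121899 -- the 2% loan
print(round(balance(100_000, 0.08, 10)))  # 215892 -- the 8% loan, more than double
```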
Now it is scary and risky to throw money at "could eventually be profitable" tech companies. Now investors are watching companies burn through their current funding and, when the companies come back asking for more, investors are tightening their coin purses instead. The bill is coming due. The free money is drying up and companies are under compounding pressure to produce a profit for their waiting investors who are now done waiting.
You get enshittification. You get quality going down and price going up. You get "now that you're a captive audience here, we're forcing ads or we're forcing subscriptions on you." Don't get me wrong, the plan was ALWAYS to monetize the users. It's just that it's come earlier than expected, with way more feet-to-the-fire than these companies were expecting. ESPECIALLY with Wall Street as the other factor in funding (public) companies, where Wall Street exhibits roughly the same temperament as a baby screaming crying upset that it's soiled its own diaper (maybe that's too mean a comparison to babies), and now companies are being put through the wringer for anything LESS than infinite growth that Wall Street demands of them.
Internal to the tech industry, you get MASSIVE wide-spread layoffs. You get an industry that used to be easy to land multiple job offers shriveling up and leaving recent graduates in a desperately awful situation where no company is hiring and the market is flooded with laid-off workers trying to get back on their feet.
Because those coin-purse-clutching investors DO love virtue-signaling efforts from companies that say "See! We're not being frivolous with your money! We only spend on the essentials." And this is true even for MASSIVE, PROFITABLE companies, because those companies' value is based on the Rich Person Feeling Graph (their stock) rather than the literal profit money. A company making a genuine gazillion dollars a year still tears through layoffs and freezes hiring and removes the free batteries from the printer room (totally not speaking from experience, surely) because the investors LOVE when you cut costs and take away employee perks. The "beer on tap, ping pong table in the common area" era of tech is drying up. And we're still unionless.
Never mind that last part.
And then in early 2023, AI (more specifically, ChatGPT, which is OpenAI's large language model creation) tears its way into the tech scene with a meteor's amount of momentum. Here's Microsoft's prize pig, which it invested heavily in and is gallivanting around the pig-show with, to the desperate jealousy and rapture of every other tech company and investor wishing it had that pig. And for the first time since the interest rate hikes, investors have dollar signs in their eyes, both venture capital and Wall Street alike. They're willing to restart the hose of money (even with the new risk) because this feels big enough for them to take the risk.
Now all these companies, who were in varying stages of sweating as their bill came due, or wringing their hands as their stock prices tanked, see a single glorious gold-plated rocket up out of here, the likes of which haven't been seen since the free money days. It's their ticket to buy time, and buy investors, and say "see THIS is what will wring money forth, finally, we promise, just let us show you."
To be clear, AI is NOT profitable yet. It's a money-sink. Perhaps a money-black-hole. But everyone in the space is so wowed by it that there is a wide-spread and powerful conviction that it will become profitable and earn its keep. (Let's be real, half of that profit "potential" is the promise of automating away jobs of pesky employees who peskily cost money.) It's a tech-space industrial revolution that will automate away skilled jobs, and getting in on the ground floor is the absolute best thing you can do to get your pie slice's worth.
It's the thing that will win investors back. It's the thing that will get the investment money coming in again (or, get it second-hand if the company can be the PROVIDER of something needed for AI, which other venture-backed companies will pay handsomely for). It's the thing companies are terrified of missing out on, lest it leave them utterly irrelevant in a future where not having AI integration is like not having a mobile phone app for your company or not having a website.
So I guess to reiterate on my earlier point:
Drowned rats. Swimming to the one ship in sight.
35K notes
Text
AI Trading
What is AI and Its Relevance in Modern Trading?
1. Definition of AI
Artificial Intelligence (AI): A branch of computer science focused on creating systems capable of performing tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, understanding natural language, and perception.
Machine Learning (ML): A subset of AI that involves the…
#AI and Market Sentiment#AI and Market Trends#AI in Cryptocurrency Markets#AI in Equity Trading#AI in Finance#AI in Forex Markets#AI Trading Strategies#AI-Driven Investment Strategies#AI-Powered Trading Tools#Artificial Intelligence (AI)#Automated Trading Systems#Backtesting Trading Models#Blockchain Technology#Crypto Market Analysis#cryptocurrency trading#Data Quality in Trading#Deep Learning (DL)#equity markets#Event-Driven Trading#Explainable AI (XAI)#Financial Markets#forex trading#Human-AI Collaboration#learn technical analysis#Machine Learning (ML)#Market Volatility#Natural Language Processing (NLP)#Portfolio Optimization#Predictive Analytics in Trading#Predictive Modeling
0 notes
Photo
By this logic, isn't Photoshop, or video and photo editing, lazy? Digital art in general? Hell, isn't writing lazy? Obviously this is not the case, but if it were, then clearly your issue with AI art isn't the fact that people are sitting on their ass and typing things to make it.
You just learned that people spend hours using a tool that you thought required no effort to use. Are you not even curious as to what they’re doing? Maybe wondering if they’re potentially putting a little more into it than pushing a button once? Are you that closed-minded that the idea never even crossed your mind? Is it so incomprehensible that people may be using a technology as tool for genuine artistic expression, something that people have been doing forever?
#if you’re gonna say that generative ai is just reverse image search or something like that#Honestly I don’t know what to say to you bc that’s just false. If you are willing to engage I could explain these models to you but#Don’t make shit up or repeat made up shit to back up your argument#If you’re gonna discuss labor concerns with ai. That is a convo I am willing to have#But in a completely profit-less scenario like this? Then what’s the issue?
2K notes
Note
"this is AI [shopped] i can tell from some of the pixels and from seeing quite a few AI arts [shops] in my time." go back to 4chan pleaseeeeeeeee and stop trying to publicly humiliate a random artist who has had a presence on this site for almost a decade
I have never been to 4chan in my life. Someone being on a site for decades does not preclude them from utilizing AI. Furthermore, the artist in question has implied in the replies that they do use some AI in their work. I'm not humiliating anyone.
Idk, it's not pixels, my dude. That entire bridge doesn't make sense. Most of the illustration shows a high level of skill, and yet parts of that bridge fade in and out and the posts are spaced strangely. Where the bridge begins makes no sense, and if you try to zoom in, it's all oddly out of focus, especially compared to the pretty in-focus mountain.
Like, is it bad to look at things critically now? You don't think someone would go on the internet and misrepresent their work?
#if you could explain to me why the bridge looks like that#instead of being weird on anon#that would be great#personally i think the artist probably did most of the background#and then fudged the rest using ai#and maybe it's a model trained in their own work#the space one does look derivative of spacegoose imo
0 notes
Text
Exciting developments in MLOps await in 2024! 🚀 DevOps-MLOps integration, AutoML acceleration, Edge Computing rise – shaping a dynamic future. Stay ahead of the curve! #MLOps #TechTrends2024 🤖✨
#MLOps#Machine Learning Operations#DevOps#AutoML#Automated Pipelines#Explainable AI#Edge Computing#Model Monitoring#Governance#Hybrid Cloud#Multi-Cloud Deployments#Security#Forecast#2024
0 notes
Text
An important message to college students: Why you shouldn't use ChatGPT or other "AI" to write papers.
Here's the thing: Unlike plagiarism, where I can always find the exact source a student used, it's difficult to impossible to prove that a student used ChatGPT to write their paper. Which means I have to grade it as though the student wrote it.
So if your professor can't prove it, why shouldn't you use it?
Well, first off, it doesn't write good papers. Grading them as if the student did write it themself, so far I've given GPT-enhanced papers two Ds and an F.
If you're unlucky enough to get a professor like me, they've designed their assignments to be hard to plagiarize, which means they'll also be hard to get "AI" to write well. To get a good paper out of ChatGPT for my class, you'd have to write a prompt that's so long, with so many specifics, that you might as well just write the paper yourself.
ChatGPT absolutely loves to make broad, vague statements about, for example, what topics a book covers. Sadly for my students, I ask for specific examples from the book, and it's not so good at that. Nor is it good at explaining exactly why that example is connected to a concept from class. To get a good paper out of it, you'd have to have already identified the concepts you want to discuss and the relevant examples, and quite honestly if you can do that it'll be easier to write your own paper than to coax ChatGPT to write a decent paper.
The second reason you shouldn't do it?
IT WILL PUT YOUR PROFESSOR IN A REALLY FUCKING BAD MOOD. WHEN I'M IN A BAD MOOD I AM NOT GOING TO BE GENEROUS WITH MY GRADING.
I can't prove it's written by ChatGPT, but I can tell. It does not write like a college freshman. It writes like a professional copywriter churning out articles for a content farm. And much like a large language model, the more papers written by it I see, the better I get at identifying it, because it turns out there are certain phrases it really, really likes using.
Once I think you're using ChatGPT I will be extremely annoyed while I grade your paper. I will grade it as if you wrote it, but I will not grade it generously. I will not give you the benefit of the doubt if I'm not sure whether you understood a concept or not. I will not squint and try to understand how you thought two things are connected that I do not think are connected.
Moreover, I will continue to not feel generous when calculating your final grade for the class. Usually, if someone has been coming to class regularly all semester, turned things in on time, etc, then I might be willing to give them a tiny bit of help - round a 79.3% up to a B-, say. If you get a 79.3%, you will get your C+ and you'd better be thankful for it, because if you try to complain or claim you weren't using AI, I'll be letting the college's academic disciplinary committee decide what grade you should get.
Eventually my school will probably write actual guidelines for me to follow when I suspect use of AI, but for now, it's the wild west and it is in your best interest to avoid a showdown with me.
12K notes
Text
thinking about deleting insta and just using it as kind of a portfolio thing bc just the amount of like braindead comments and just straight up toxic art community culture
#sparked from this simple meme of someone explaining why the vtuber models are expencive#and ppl in comments were like 'get a real job' like im sorry but are you stupid#and like a surprising ammount of the comments had the n word on it despite the profile looking like they shouldnt probably say that#why does insta think i wanna see posts from ppl talking abt how much they ahte artists and wanna see them lose their jobs#not even bc of ai but just bc they have weird recentment?
1 note
Text
There is no such thing as AI.
How to help the non-technical and less-online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an ai image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out, those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence like Data from star trek, or the terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): an umbrella term for solving problems for which development of algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithms. (This is the basis of most technology people call AI.)
Language model (LM or LLM): a probabilistic model of a natural language that can generate probabilities of a series of words, based on text corpora in one or multiple languages it was trained on. (This would be your ChatGPT; a toy sketch follows this list.)
Generative adversarial network (GAN): is a class of machine learning framework and a prominent framework for approaching generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion models: models that learn to approximate the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images with added Gaussian noise by learning to remove the noise. After training is complete, it can then be used for image generation by starting with a random noise image and denoising it. (This is the more common technology behind AI images, including DALL-E and Stable Diffusion. I added this one to the post after, as it was brought to my attention it is now more common than GANs.)
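As a concrete illustration of the "language model" entry above, here's a toy word-bigram generator in Python (my own minimal sketch, nowhere near the scale or architecture of ChatGPT, but it shows what "generating probabilities of a series of words" means in practice):

```python
import random
from collections import Counter, defaultdict

# A tiny "text corpus"; real models train on billions of documents.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Generate: repeatedly sample a next word in proportion to how often
# it followed the current word in the training text.
word, output = "the", ["the"]
for _ in range(8):
    followers = counts[word]
    if not followers:  # this word never appeared mid-sentence; stop
        break
    word = random.choices(list(followers), weights=list(followers.values()))[0]
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat and the cat saw"
```

An LLM is doing the same basic job of predicting likely continuations, just with a neural network trained on vastly more text instead of a frequency table over one sentence.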
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
Text
AI models can seemingly do it all: generate songs, photos, stories, and pictures of what your dog would look like as a medieval monarch.
But all of that data and imagery is pulled from real humans — writers, artists, illustrators, photographers, and more — who have had their work compressed and funneled into the training of AI models without compensation.
Kelly McKernan is one of those artists. In 2023, they discovered that Midjourney, an AI image generation tool, had used their unique artistic style to create over twelve thousand images.
“It was starting to look pretty accurate, a little infringe-y,” they told The New Yorker last year. “I can see my hand in this stuff, see how my work was analyzed and mixed up with some others’ to produce these images.”
For years, leading AI companies like Midjourney and OpenAI, have enjoyed seemingly unfettered regulation, but a landmark court case could change that.
On May 9, a California federal judge allowed ten artists to move forward with their allegations against Stability AI, Runway, DeviantArt, and Midjourney. This includes proceeding with discovery, which means the AI companies will be asked to turn over internal documents for review and allow witness examination.
Lawyer-turned-content-creator Nate Hake took to X, formerly known as Twitter, to celebrate the milestone, saying that “discovery could help open the floodgates.”
“This is absolutely huge because so far the legal playbook by the GenAI companies has been to hide what their models were trained on,” Hake explained...
“I’m so grateful for these women and our lawyers,” McKernan posted on X, above a picture of them embracing Ortiz and Andersen. “We’re making history together as the largest copyright lawsuit in history moves forward.” ...
The case is one of many AI copyright theft cases brought forward in the last year, but no other case has gotten this far into litigation.
“I think having us artist plaintiffs visible in court was important,” McKernan wrote. “We’re the human creators fighting a Goliath of exploitative tech.”
“There are REAL people suffering the consequences of unethically built generative AI. We demand accountability, artist protections, and regulation.”
-via GoodGoodGood, May 10, 2024
#ai#anti ai#fuck ai art#ai art#big tech#tech news#lawsuit#united states#us politics#good news#hope#copyright#copyright law
2K notes