#algorithmic regulation
Text
Lies, damned lies, and Uber
I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me TONIGHT in PHOENIX (Changing Hands, Feb 29) then Tucson (Mar 10-11), San Francisco (Mar 13), and more!
Uber lies about everything, especially money. Oh, and labour. Especially labour. And geometry. Especially geometry! But especially especially money. They constantly lie about money.
Uber are virtuosos of mendacity, but in Toronto, the company has attained a heretofore unseen hat-trick: they told a single lie that is dramatically, materially untruthful about money, labour and geometry! It's an achievement for the ages.
Here's how they did it.
For several decades, Toronto has been clobbered by the misrule of a series of far-right, clownish mayors. This was the result of former Ontario Premier Mike Harris's great gerrymander of 1998, when the city of Toronto was amalgamated with its car-dependent suburbs. This set the tone for the next quarter-century, as these outlying regions – utterly dependent on Toronto for core economic activity and massive subsidies to pay the unsustainable utility and infrastructure bills for sprawling neighborhoods of single-family homes – proceeded to gut the city they relied on.
These "conservative" mayors – the philanderer, the crackhead, the sexual predator – turned the city into a corporate playground, swapping public housing and rent controls for out-of-control real-estate speculation and trading out some of the world's best transit for total car-dependency. As part of that decay, the city rolled out the red carpet for Uber, allowing the company to put as many unlicensed taxis as they wanted on the city's streets.
Now, it's hard to overstate the dire traffic situation in Toronto. Years of neglect and underinvestment in both the roads and the transit system have left both in a state of near collapse and it's not uncommon for multiple, consecutive main arteries to shut down without notice for weeks, months, or, in a few cases, years. The proliferation of Ubers on the road – driven by desperate people trying to survive the city's cost-of-living catastrophe – has only exacerbated this problem.
Uber, of course, would dispute this. The company insists – despite all common sense and peer-reviewed research – that adding more cars to the streets alleviates traffic. This is easily disproved: there just isn't any way to swap buses, streetcars, and subways for cars. The road space needed for all those single-occupancy cars pushes everything further apart, which means we need more cars, which means more roads, which means more distance between things, and so on.
It is an undeniable fact that geometry hates cars. But geometry loathes Uber even more, because Ubers have all the problems of single-occupancy vehicles, plus the separate problem that they just end up circling idly around the city's streets, waiting for a rider. The more Ubers there are on the road, the longer each car ends up waiting for a passenger:
https://www.sfgate.com/technology/article/Uber-Lyft-San-Francisco-pros-cons-ride-hailing-13841277.php
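To see why, here's a rough back-of-the-envelope sketch (Python, with invented numbers, not Toronto data): hold the number of rides people actually want fixed, and every extra ride-hail car on the road just means each car spends more of its hour empty, circling.

```python
# Rough illustration of ride-hail deadheading: with ride demand held constant,
# a bigger fleet means each car spends a larger share of its hour empty.
# All numbers below are invented for illustration only.

RIDES_PER_HOUR = 6_000     # assumed city-wide demand (rides per hour)
MINUTES_PER_RIDE = 15      # assumed average trip length (minutes)

for fleet_size in (2_000, 4_000, 8_000, 16_000):
    busy_minutes = min(RIDES_PER_HOUR * MINUTES_PER_RIDE / fleet_size, 60)
    idle_share = 1 - busy_minutes / 60
    print(f"{fleet_size:>6} cars on the road: {idle_share:.0%} of each hour spent empty")
```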
Anything that can't go on forever eventually stops. After years of bumbling-to-sinister municipal rule, Toronto finally reclaimed its political power and voted in a new mayor, Olivia Chow, a progressive of long tenure and great standing (I used to ring doorbells for her when she was campaigning for her city council seat). Mayor Chow announced that she was going to reclaim the city's prerogative to limit the number of Ubers on the road, ending the period of Uber's "self-regulation."
Uber, naturally, lost its shit. The company claims to be not just a (geometrically impossible) provider of convenient transportation for Torontonians, but also a provider of good jobs for working people. And to prove it, the company has promised to pay its drivers "120% of minimum wage." As I write for Ricochet, that's a whopper, even by Uber's standards:
https://ricochet.media/en/4039/uber-is-lying-again-the-company-has-no-intention-of-paying-drivers-a-living-wage
Here's the thing: Uber is only proposing to pay 120% of the minimum wage while drivers have a passenger in the vehicle. And with the number of vehicles Uber wants on the road, most drivers will be earning nothing most of the time. Factor in that unpaid time, as well as expenses for vehicles, and the average Toronto Uber driver stands to make $2.50 per hour (Canadian):
https://ridefair.ca/wp-content/uploads/2024/02/Legislated-Poverty.pdf
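The arithmetic behind that number is simple enough to sketch. Here's an illustrative version in Python; the minimum wage, engaged-time share and vehicle costs below are assumptions for demonstration, not the report's exact inputs:

```python
# Illustrative sketch of how "120% of minimum wage" collapses to a few dollars an hour.
# Assumed inputs for demonstration -- not the actual figures from the Ridefair report.

MIN_WAGE = 16.55                   # assumed Ontario minimum wage, CAD per hour
ENGAGED_RATE = 1.20 * MIN_WAGE     # the promised rate applies only with a passenger aboard
ENGAGED_SHARE = 0.40               # assumed fraction of each hour spent carrying a rider
VEHICLE_COSTS = 5.50               # assumed fuel, insurance, maintenance, depreciation (CAD/hr)

gross_per_hour = ENGAGED_RATE * ENGAGED_SHARE   # pay averaged over the whole hour
net_per_hour = gross_per_hour - VEHICLE_COSTS

print(f"engaged rate:   ${ENGAGED_RATE:.2f}/hr (only while a passenger is in the car)")
print(f"gross per hour: ${gross_per_hour:.2f}")
print(f"net per hour:   ${net_per_hour:.2f} after vehicle costs")
```

The more idle time Uber's fleet expansion creates, the smaller that engaged share gets, which is exactly why the headline "120%" figure and the $2.50/hour estimate can both be true at once.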
Now, Uber's told a lot of lies over the years. Right from the start, the company implicitly lied about what it cost to provide an Uber. For its first 12 years, Uber lost $0.41 on every dollar it brought in, lighting tens of billions in investment capital provided by the Saudi royals on fire in an effort to bankrupt rival transportation firms and drive disinvestment in municipal transit.
Uber then lied to retail investors about the business-case for buying its stock so that the House of Saud and other early investors could unload their stock. Uber claimed that they were on the verge of producing a self-driving car that would allow them to get rid of drivers, zero out their wage bill, and finally turn a profit. The company spent $2.5b on this, making it the most expensive Big Store in the history of cons:
https://www.theinformation.com/articles/infighting-busywork-missed-warnings-how-uber-wasted-2-5-billion-on-self-driving-cars
After years, Uber produced a "self-driving car" that could travel one half of one American mile before experiencing a potentially lethal collision. Uber quietly paid another company $400m to take this disaster off its hands:
https://www.economist.com/business/2020/12/10/why-is-uber-selling-its-autonomous-vehicle-division
The self-driving car lie was tied up in another lie – that somehow, automation could triumph over geometry. Robocabs, we were told, would travel in formations so tight that they would finally end the Red Queen's Race of more cars – more roads – more distance – more cars. That lie wormed its way into the company's IPO prospectus, which promised retail investors that profitability lay in replacing every journey – by car, cab, bike, bus, tram or train – with an Uber ride:
https://www.reuters.com/article/idUSKCN1RN2SK/
The company has been bleeding out money ever since – though you wouldn't know it by looking at its investor disclosures. Every quarter, Uber trumpets that it has finally become profitable, and every quarter, Hubert Horan dissects its balance sheets to find the accounting trick the company thought of this time. There was one quarter where Uber declared profitability by marking up the value of stock it held in Uber-like companies in other countries.
How did it get this stock? Well, Uber tried to run a business in those countries and it was such a total disaster that they had to flee those markets, selling their business to a failing domestic competitor in exchange for stock in its collapsing business. Naturally, there's no market for this stock, which, in Uber-land, means you can assign any value you want to it. So that one quarter, Uber just asserted that the stock had shot up in value and voila, profit!
https://www.nakedcapitalism.com/2022/02/hubert-horan-can-uber-ever-deliver-part-twenty-nine-despite-massive-price-increases-uber-losses-top-31-billion.html
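The mechanics are simpler than they sound. A toy illustration (invented figures, not Uber's actual filings; see Horan's series for the real numbers): book a paper markup on an unsellable stake and it flows straight into "net income."

```python
# Toy illustration of "profit" via marking up an illiquid equity stake.
# Figures are invented for illustration only.

operating_loss = -550_000_000     # cash actually lost running the business in the quarter
stake_markup = 1_400_000_000      # asserted rise in "fair value" of stock that has no market

net_income = operating_loss + stake_markup   # the paper gain flows into reported net income

print(f"cash operating result: {operating_loss / 1e6:+,.0f}M")
print(f"mark-to-model gain:    {stake_markup / 1e6:+,.0f}M")
print(f"reported net income:   {net_income / 1e6:+,.0f}M")
```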
But all of those lies are as nothing to the whopper that Uber is trying to sell to Torontonians by blanketing the city in ads: the lie that by paying drivers $2.50/hour to fill the streets with more single-occupancy cars, they will turn a profit, reduce the city's traffic, and provide good jobs. Uber says it can vanquish geometry, economics and working poverty with the awesome power of narrative.
In other words, it's taking Toronto for a bunch of suckers.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/29/geometry-hates-uber/#toronto-the-gullible
Image: Rob Sinclair (modified) https://commons.wikimedia.org/wiki/File:Night_skyline_of_Toronto_May_2009.jpg
CC BY 2.0 https://creativecommons.org/licenses/by-sa/2.0/deed.en
#pluralistic#uber#hubert horan#fraud#toronto#geometry hates cars#urbanism#ontpoli#olivia chow#self-regulation#transport#urban planning#taxis#transit#urban theory#labor#algorithmic wage discrimination#veena dubal
904 notes
Text
tumblr has absolutely filled my for you feed with pro-ana shit even though i've never looked that stuff up. so now not only is this website pushing this content at me, but their moderation priorities are so focused on banning people for nonsense that I'm the one that's got to play moderator and report all these awful posts it's showing me in the hopes that 1) it's not going to get shown to more people 2) it's not going to get shown to more vulnerable people and 3) the website gets the fucking idea that i hate these posts and don't want to see them
#rubia speaks#if i can say something controversial: i understand there are laws about websites not being responsible for what they HOST#but if your website's algorithm is SERVING me content that encourages self harm i think it should absolutely be illegal#i did not search for it and yet it's being PROMOTED to me repeatedly. this shit needs more regulation
5 notes
Text
ok listen. i know hbomberguy said he doesnt wanna become the type of youtuber who spends their time doing drama videos or ruining ppl careers but like. if somebody doesnt start doing crazy detailed research on ryan hall, yall, then i will
#hbomberguy#meteorology#ryan hall yall#like his info is good and he generally seems like a good gug but theres. some weird stuff#*guy#the lack of clarity regarding his education#the giveaways#the almost unnecessarily high production quality#the way his mods behave during livestreams#(theyre biased when regulating politics they link weather.gov when ppl ask for forecasts but respond negatively when other meteorologists#are mentioned/linked etc)#occasional thoughtlessness (magenta polygons instead of red are more visible but they confuse ppl and studies say theyre less effective)#the fact they he claims to live in kentucky but said the us had very calm weather while there were wildfires throughout the south&appalachi#(also i could be forgetting but if i rmember right he mentioned the drought maybe twice until he retweeted that video from james spann)#the fact that he used to review vapes#the shady sponsorships and sometimes downright untrue info regarding those#the guilt tripping asking ppl to donate#like theres def good faith interpretations available#he himself says that theyre trying to reach as many ppl as possible w warnings so they have to play the YT algorithm game#but like. some of its looking kinda weird man
22 notes
Photo
some before-and-after pictures of how I’ve been using AI generated images in my art lately 🤖
I share other artists’ concerns about the unethical nature of the theft going on in the training data of AI art algorithms, so I refuse to spend any money on them or to consider the images generated by them to be true art, but I’m curious to hear people’s thoughts on using it for reference and paint-over like this?
my hope is that with proper regulation and more ethical use, AI could be a beneficial tool to help artists - instead of a way that allows people to steal from us more easily.
#ai art#art#digital art#artists on tumblr#right now i use ai as an extension of how i already create art#i make regular use of reference photos and free-use stock images for a variety of different purposes#and i think that with enough of a transformation of the source material#true art CAN be created from ai 'art'#but it's a complicated subject for sure#how do we decide how much transformation is enough to consider it an original work?#how do we regulate the use of image generation algorithms as more and more people get access to them#and start to run them for themselves?#we're in extremely uncharted territory here as technological progress continues to accelerate at an unprecedented pace#it's important to think about these questions and stay informed on how we're reacting#because no matter how hard we try technology doesn't go backwards#but we do still have a say in how we choose to use it and how we feel about it
93 notes
Note
There's no other alternatives. Most people don't like rping ocxcanon and not everyone can write or can afford commissions. Things have nuance and ai isn't inherently bad. Chat bots have existed for years and that's all character ai is. A chat bot.
A chatbot as it has existed for decades has a set number of responses based on what you ask it. This set number of responses cannot be updated by the chatbot itself; instead, whoever is responsible for the code needs to update the database of responses as well as the algorithm used to identify which response to give when asked a question. That’s how chatbots work.
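To make that concrete, a classic chatbot of that kind really is little more than a fixed response table plus a matching rule. A minimal sketch (the patterns and replies here are made up):

```python
# Minimal sketch of an old-style rule-based chatbot: a fixed table of responses and a
# simple keyword match. Nothing here learns; changing the bot means a person editing the table.

RESPONSES = {
    "hello": "Hi there! How can I help you today?",
    "hours": "We're open 9am to 5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5-7 business days.",
}
FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_answer in RESPONSES.items():
        if keyword in text:   # crude matching -- why short, clear questions work best
            return canned_answer
    return FALLBACK

print(reply("Hello!"))
print(reply("can you roleplay a scene with me?"))   # anything outside the table hits the fallback
```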
AI chatbots can and do update themselves based on the questions and responses you give them; that’s the main difference. The creator can also just filter out certain things if they do not want them (it has happened before with an AI that taught itself to curse and the creator didn’t want that).
Now the problem here with RP is that you are most likely using characters under trademark, characters from very popular franchises, and if at any point the creator of these AI bots decides to sell the content for profit they can go to the character owners and say “Hey, here I have thousands of queries for X popular character of yours; filtering through them I can give you the 10 most popular questions asked, or the most used plotlines,” and the character owner can go “oh, this would be great to make a story based on the most popular questions!” and that’s how AI starts to fuck over writers/authors.
#there’s a reason why old chatbots ask you to use as clear and as short wuedtions possible#because their answers and algorithms are not created to deal with complicated stuff#meanwhile ai chat bots keep evolving#and yeah it does sound cool#and it would be#if we had regulations in place about it#that’s my main problem the lack of regulations because as it is#this will definitely end affecting writers and other artistic creators#anti ai
40 notes
Text
I refuse to hop in a Zoox car in my entire life if I can avoid it. I refuse to hop into any self-driving robo taxi (or robotaxi) that uses AI to keep its passengers “safe.” If this is actually a service they are legally allowed to provide publicly, there’s about to be a whole bunch of new laws made in hopefully very little time! Now you know me, obviously fuck the law, many laws are unjust, but sometimes we need some regulations to keep up with the shit that rich Silicon Valley tech bros “put out” while claiming it’s allegedly their own work. These rich bastards are dangerous!

Now I’ll pass along the questions that my partner & I jokingly pondered. If something happens that the AI & detection systems don’t know how to handle, will we as the passengers be held legally responsible, say, if a child gets punted into the air by the self-driving car & we can’t do anything to stop it? What if we’re asleep assuming the car is safe & it runs over a legally endangered animal? What if we’re on our phones & these self-driving robot cars cleave someone in half? What if it crashes into someone’s private property? Are we held responsible in any of these cases or is the big rich guy’s company? If it’s anything like Tesla, you should get your kids or pets out of the road when you see a Zoox car coming, it could allegedly cause some mortalities.

Two more things. What’s stopping someone from hijacking, hacking, or planting a virus on these self-driving taxi services? What if one of them gets hijacked to take someone to a human trafficker meetup spot? Will the company be held responsible at all? The gifs below pretty much summarize my feelings.
#I would sooner trust an Uber driver even though I’m kind of paranoid; than trust a robot with no legal responsibilities#we’re trying this post again since apparently I hit a key word that the algorithm didn’t like in my post or tags#I’ll put my trust in a random stranger before I put my trust in an AI whose owner is probably a billionaire or millionaire#zoox & any other robotaxi or robo taxi services who do self driving cars; I don’t trust like that#regulations for stuff like this often only happens once it’s too late & I actually hate that so much#robots are stealing our jobs & the government is just letting them; they don’t care about us#these tag rambles are probably gonna get my post wiped from being seen by anyone#I’m anti-AI btw just to clarify; in case that wasn’t blatantly obvious#I’ll always be anti AI#I’ll trust a Lyft driver before I trust a robot with no sense of self awareness of its own#mine#op#self driving vehicles#2024
7 notes
Text
listen to me. are you listening? tiktok is not uniquely anything when it comes to the internet. it is a tool and a platform like any other, used by all kinds of people—by nearly every kind of person or entity to whom it is available, in fact! and while what the u.s. government is doing right now to force the ownership of the company to change hands is bad and happening for the wrong reasons, to put it mildly—
claiming that the u.s. establishment is interested in shutting down tiktok because it's been sooooo good and revolutionary for progressive/left-wing organizing is uhh. horse shit. that's not true. everyone uses tiktok. you, statistically, probably use tiktok. so do some of the congresspeople endorsing legislation that might end in tiktok being banned. so do right-wing influencers and terfs and trad-wives. just like everyone uses every other social media site.
don't fall into that trap of thinking that just because you and the people in your circle use this tool for good, that this tool is only used for good. it is actually just a tool for everyone!
here's an excerpt from a book called The Wires of War, by Jacob Helberg, which, if you're interested in why the u.s. congress is actually pulling this shit with tiktok, is a great read. this excerpt follows a section where Helberg described the role social media played in the Arab Spring in 2011. emphasis mine.
It would be several years before the 2016 election awakened the West to the ways in which the Internet could exploit the vulnerabilities of their societies. But for the autocrats in Beijing, Moscow, and Tehran, the Arab Spring was a technological awakening of their own. Seeing other repressive governments around the world crumble, illiberal regimes in Russia and China accelerated their treatment of the information space as a domain of war. "Tech-illiterate bureaucrats were replaced by a new generation of enforcers who understood the internet almost as well as the protesters," write Singer and Brooking in their book, LikeWar: The Weaponization of Social Media. "In truth, democratic activists had no special claim to the internet. They'd simply gotten there first."
#tiktok ban#social media#idk man#maybe im too invested in personal privacy to have the same#visceral reaction#to the tiktok stuff that a lot of other people are having#do i think the current tiktok ban (for lack of a more concise phrase) is good?#no. lol#but frankly there's a lot of regulation of social media i'd like to see written and passed#that would probably write tiktok and most every other social media site#out of existence#starting with banning infinite scroll and other addictive user interface details#and ending with banning content suggestion algorithms altogether#or at least making them entirely fucking public#which would amount to the same thing for all these tech companies
7 notes
Text
Christopher Nolan Warns of 'Terrifying Possibilities' as AI Reaches 'Oppenheimer Moment': 'We Have to Hold People Accountable'
By Kim J. Murphy
"I hope so," Nolan stated. "When I talk to the leading researchers in the field of AI right now, for example, they literally refer to this -- right now -- as their Oppenheimer moment. They're looking to history to say, 'What are the responsibilities for scientists developing new technologies that may have unintended consequences?'"
"Do you think Silicon Valley is thinking that right now?" Todd interjected. "Do you think they say that this is an Oppenheimer moment?"
"They say that they do," Nolan said after a pause and then chuckled. "It's helpful that that's in the conversation and I hope that that thought process will continue. I am not saying Oppenheimer's story offers any easy answers to those questions, but it at least can show where some of those responsibilities lie and how people take a breath and think, 'Okay, what is the accountability?'"
Accountability needs to come from external sources; if you look to Silicon Valley, it will never come wholly from within. There is no incentive to do so when the profit-making incentive is the only one that really matters.
#use of ai#chris nolan#christopher nolan#oppenheimer movie#oppenheimer 2023#artificial intelligence#thank you chris on highlighting the gratuitous use of the word algorithm#regulation from a multitude of sources can’t come soon enough#third party certification#external accountability#accountability#accountability baby
4 notes
Video
youtube
“TikTok Isn’t For Creators Anymore”, ICYMI Podcast, January 28, 2023
On today’s episode, Rachelle Hampton is joined by journalist and author Cory Doctorow to discuss his latest piece, “The Enshittification of TikTok,” in Wired. They talk about the life cycles of online platforms, why nobody on the platforms has any understanding of the rules of the game, and why we’re in dire need of better regulations.
This podcast is produced by Daniel Schroeder, Rachelle Hampton, and Daisy Rosario.
Slate
#social media#algorithm#data#capitalism#corporations#internet#accessibility#freedom of information#relevant#cory doctorow#rachelle hampton#media#profiteering#late stage capitalism#monopolies#digital platforms#regulation#competition#digital#ICYMI Podcast#Slate
2 notes
Text
New o1 model of LLM at OpenAI could change hardware market
New Post has been published on https://thedigitalinsider.com/new-o1-model-of-llm-at-openai-could-change-hardware-market/
OpenAI and other leading AI companies are developing new training techniques to overcome limitations of current methods. Addressing unexpected delays and complications in the development of larger, more powerful language models, these fresh techniques focus on human-like behaviour to teach algorithms to ‘think’.
According to reports citing a dozen AI researchers, scientists, and investors, the new training techniques, which underpin OpenAI’s recent ‘o1’ model (formerly Q* and Strawberry), have the potential to transform the landscape of AI development. The reported advances may influence the types and quantities of resources AI companies will need on an ongoing basis, including specialised hardware and the energy required to develop AI models.
The o1 model is designed to approach problems in a way that mimics human reasoning and thinking, breaking down numerous tasks into steps. The model also utilises specialised data and feedback provided by experts in the AI industry to enhance its performance.
Since ChatGPT was unveiled by OpenAI in 2022, there has been a surge in AI innovation, and many technology companies claim existing AI models require expansion, be it through greater quantities of data or improved computing resources. Only then can AI models consistently improve.
Now, AI experts have reported limitations in scaling up AI models. The 2010s were a revolutionary period for scaling, but Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, says that the training of AI models, particularly in understanding language structures and patterns, has levelled off.
“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Scaling the right thing matters more now,” he said.
In recent times, AI lab researchers have experienced delays in and challenges to developing and releasing large language models (LLM) that are more powerful than OpenAI’s GPT-4 model.
First, there is the cost of training large models, often running into tens of millions of dollars. And, due to complications that arise, like hardware failing due to system complexity, a final analysis of how these models run can take months.
In addition to these challenges, training runs require substantial amounts of energy, often resulting in power shortages that can disrupt processes and impact the wider electricity grid. Another issue is the colossal amount of data large language models use, so much so that AI models have reportedly used up all accessible data worldwide.
Researchers are exploring a technique known as ‘test-time compute’ to improve current AI models during training or at inference time. The method can involve generating multiple candidate answers in real time and selecting the best of them, so the model can allocate greater processing resources to difficult tasks that require human-like decision-making and reasoning. The aim: to make the model more accurate and capable.
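A rough illustration of one common version of the idea, best-of-N sampling, where extra inference-time budget buys several candidate answers and a scorer keeps the strongest, is sketched below. The `generate_candidate` and `score` functions are hypothetical stand-ins for a model call and a verifier, not any particular vendor's API.

```python
# Hedged sketch of "test-time compute" via best-of-N sampling: spend more inference-time
# budget on a hard question by sampling several candidates and keeping the best-scoring one.
# generate_candidate() and score() are hypothetical stand-ins, not a specific vendor's API.

import random

def generate_candidate(question: str) -> str:
    # Placeholder for one stochastic model completion.
    return f"candidate #{random.randint(1, 1_000_000)} for: {question}"

def score(question: str, answer: str) -> float:
    # Placeholder for a verifier or reward model rating answer quality.
    return random.random()

def answer_with_test_time_compute(question: str, n_candidates: int = 8) -> str:
    # Harder questions can simply be given a larger n_candidates budget.
    candidates = [generate_candidate(question) for _ in range(n_candidates)]
    return max(candidates, key=lambda ans: score(question, ans))

print(answer_with_test_time_compute("Plan the cheapest three-city itinerary under $500."))
```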
Noam Brown, a researcher at OpenAI who helped develop the o1 model, shared an example of how a new approach can achieve surprising results. At the TED AI conference in San Francisco last month, Brown explained that “having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer.”
Rather than simply increasing the model size and training time, this can change how AI models process information and lead to more powerful, efficient systems.
It is reported that other AI labs have been developing versions of the o1 technique. These include xAI, Google DeepMind, and Anthropic. Competition in the AI world is nothing new, but we could see a significant impact on the AI hardware market as a result of new techniques. Companies like Nvidia, which currently dominates the supply of AI chips thanks to the high demand for its products, may be particularly affected by updated AI training techniques.
Nvidia became the world’s most valuable company in October, and its rise in fortunes can be largely attributed to its chips’ use in AI arrays. New techniques may impact Nvidia’s market position, forcing the company to adapt its products to meet the evolving AI hardware demand. Potentially, this could open more avenues for new competitors in the inference market.
A new age of AI development may be on the horizon, driven by evolving hardware demands and more efficient training methods such as those deployed in the o1 model. The future of both AI models and the companies behind them could be reshaped, unlocking unprecedented possibilities and greater competition.
See also: Anthropic urges AI regulation to avoid catastrophes
#000#2022#2024#ai#ai & big data expo#AI chips#AI development#AI industry#AI innovation#AI models#AI regulation#ai training#Algorithms#amp#Analysis#anthropic#approach#Arrays#artificial#Artificial Intelligence#author#automation#Big Data#bot#bristol#california#change#chatGPT#chips#Companies
0 notes
Text
The Future of Real Estate in Jamaica: AI, Big Data, and Cybersecurity Shaping Tomorrow’s Market
#AI Algorithms#AI Real Estate Assistants#AI-Powered Chatbots#Artificial Intelligence#Automated Valuation Models#Big Data Analytics#Blockchain in Real Estate#Business Intelligence#cloud computing#Compliance Regulations#Cyber Attacks Prevention#Cybersecurity#Data encryption#Data Privacy#Data Security#data-driven decision making#Digital Property Listings#Digital Transactions#Digital Transformation#Fraud Prevention#Identity Verification#Internet of Things (IoT)#Machine Learning#Network Security#predictive analytics#Privacy Protection#Property Management Software#Property Technology#Real Estate Market Trends#real estate technology
0 notes
Text
Balancing AI Regulation in Education with Innovation: 6 Insights from Comparative Research
Curious about how AI is shaping the future of education? Our latest report dives into real-world insights from educators and AI experts. Discover the challenges, opportunities, and ethical considerations in AI integration.
As artificial intelligence (AI) becomes increasingly integral to educational practices, the debate over how to govern this powerful technology grows more pressing. The Organisation for Economic Co-operation and Development (OECD) recently published a working paper titled Artificial Intelligence and the Future of Work, Education, and Training, which delves into the potential impact of AI on equity…
#AI governance#AI in classrooms#AI in education#AI policy#AI regulation#algorithmic bias#data privacy#Ethical AI#Graeme Smith#Innovation in Education#OECD AI report#thisisgraeme
0 notes
Text
cant tell you how bad it feels to constantly tell other artists to come to tumblr, because its the last good website that isn't fucked up by spoonfeeding algorithms and AI bullshit and isn't based around meaningless likes
just to watch that all fall apart in the last year or so and especially the last two weeks
there's nowhere good to go anymore for artists.
edit - a lot of people are saying the tags are important so actually, you'll look at my tags.
#please dont delete your accounts because of the AI crap. your art deserves more than being lost like that #if you have a good PC please glaze or nightshade it. if you dont or it doesnt work with your style (like mine) please start watermarking #use a plain-ish font. make it your username. if people can't google what your watermark says and find ur account its not a good watermark #it needs to be central in the image - NOT on the canvas edges - and put it in multiple places if you are compelled #please dont stop posting your art because of this shit. we just have to hope regulations will come slamming down on these shitheads#in the next year or two and you want to have accounts to come back to. the world Needs real art #if we all leave that just makes more room for these scam artists to fill in with their soulless recycled garbage #improvise adapt overcome. it sucks but it is what it is for the moment. safeguard yourself as best you can without making #years of art from thousands of artists lost media. the digital world and art is too temporary to hastily click a Delete button out of spite
#not art#but important#please dont delete your accounts because of the AI crap. your art deserves more than being lost like that#if you have a good PC please glaze or nightshade it. if you dont or it doesnt work with your style (like mine) please start watermarking#use a plain-ish font. make it your username. if people can't google what your watermark says and find ur account its not a good watermark#it needs to be central in the image - NOT on the canvas edges - and put it in multiple places if you are compelled#please dont stop posting your art because of this shit. we just have to hope regulations will come slamming down on these shitheads#in the next year or two and you want to have accounts to come back to. the world Needs real art#if we all leave that just makes more room for these scam artists to fill in with their soulless recycled garbage#improvise adapt overcome. it sucks but it is what it is for the moment. safeguard yourself as best you can without making#years of art from thousands of artists lost media. the digital world and art is too temporary to hastily click a Delete button out of spite
23K notes
Text
🌋💢->☮️🌻
instagram
I was just journaling about how i need to learn how to do this. Maybe cookies in my computer are feeding my journaling to algorithms, but I'm glad this vid came across my feed. Because i genuinely need to learn how to express my dislike for characters without making their fans feel invalidated.
#see journal file 20240711a2051#tips advice suggestions#fandomfrictionfracas#reminders#how to socialize#socializing#writing#algorithms#dropped series#disliked characters#ranting#i need to learn to regulate my emotions#fe3hfuukasetsugetsu#elredeaglescrit
1 note
Text
Can Africa Lead the Way? Decoding Bias and Building a Fairer AI Ecosystem
Mitigating bias in AI development, particularly through focusing on representative #African #data collection and fostering collaboration between African and Western #developers, will lead to a more equitable and inclusive future for #AI in Africa.
The rise of Artificial Intelligence (AI) has ignited a revolution across industries, from healthcare diagnostics to creative content generation. However, amidst the excitement lurks a shadow: bias. This insidious force can infiltrate AI systems, leading to discriminatory outcomes and perpetuating societal inequalities. As AI continues to integrate into the African landscape, the question of…
#African-Descent#AI#AI Bias#Algorithms#artificial intelligence#AWS#Dr. Nashlie Sephus#General Data Protection Regulation GDPR#Kenya#machine-learning#Representative AI
0 notes
Text
As sketchy as the oceangate submarine was... you can bet your ass every single one of musky's endeavors would look just as sketchy if it wasn't for the fact that he's forced to work with government regulators.
Hell, most of his projects are this sketchy if you look a bit closer. For example: the tesla tunnels.
No fire suppression system, no emergency exits, no emergency lighting, no way for EMS to get through, no fucking nothing. I am pretty sure it's not even big enough to open the car's doors.
Or the Cybertruck that's a deathtrap for both the people on the outside and the people on the inside because it utterly disregards the last 50 or so years of advancements in car safety technology such as crumple zones or safety glass
Or the tesla model 3 where you can't even open the back doors without power. So if you're in an accident and lose power... good luck getting your kids out of the back, especially when the huge battery is turning into a huge, unextinguishable flamethrower.
Or the fucking starship launchpad that was utterly destroyed by the rocket and threw huge concrete chunks and other debris around for miles... which, incidentally, also destroyed the rocket.
That's what all these self-proclaimed Silicon Valley tech bro geniuses are like.
They all think they know better than everyone else, and that rules or consequences don't apply to them, and they see safety as little more than an afterthought.
It's why AI and social media algorithms are used sooooo ethically. It's why amazon and facebook try to find out everything about you and happily sell that data with no regard for what it could be used for.
It's about damn time one of these CEO dipshits got killed by their own dipshitery, I just wish it had been musk or bezos instead...
Once again, in conclusion:
31K notes