#Tech Won't Save Us
probablyasocialecologist · 5 months ago
Text
youtube
Paris Marx is joined by Jason Hickel to discuss how technology would change in a degrowth society and why it doesn’t make sense to organize society around profit and infinite expansion.
25 notes · View notes
thoughtportal · 1 year ago
Text
technological boogie man
130 notes · View notes
fearforthestorm · 19 days ago
Text
If you want to hear a little bit more about what's been going on with Wordpress and Automattic, one of my favorite podcasts just did an episode about it! I'm only partway through listening now, but TWSU always does fantastic work and this is no exception. Absolutely recommend giving this one a listen!!
3 notes · View notes
haveyouheardthispodcast · 8 months ago
Text
Tumblr media
2 notes · View notes
wat3rm370n · 4 days ago
Text
cryptocurrency memecoin pump-and-dump
Expert agencies and elected legislatures
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/11/21/policy-based-evidence/#decisions-decisions
Tumblr media
Since Trump hijacked the Supreme Court, his backers have achieved many of their policy priorities: legalizing bribery, formalizing forced birth, and – with the Loper Bright case – neutering the expert agencies that regulate business:
https://jacobin.com/2024/07/scotus-decisions-chevron-immunity-loper
What the Supreme Court began, Elon Musk and Vivek Ramaswamy are now poised to finish, through the "Department of Government Efficiency," a fake agency whose acronym ("DOGE") continues Musk's long-running cryptocurrency memecoin pump-and-dump. The new department is absurd – imagine a department devoted to "efficiency" with two co-equal leaders who are both famously incapable of getting along with anyone – but that doesn't make it any less dangerous.
Expert agencies are often all that stands between us and extreme misadventure, even death. The modern world is full of modern questions, the kinds of questions that require a high degree of expert knowledge to answer, but also the kinds of questions whose answers you'd better get right.
You're not stupid, nor are you foolish. You could go and learn everything you need to know to evaluate the firmware on your antilock brakes and decide whether to trust them. You could figure out how to assess the Common Core curriculum for pedagogical soundness. You could learn the material science needed to evaluate the soundness of the joists that hold the roof up over your head. You could acquire the biology and chemistry chops to decide whether you want to trust produce that's been treated with Monsanto's Roundup pesticides. You could do the same for cell biology, virology, and epidemiology and decide whether to wear a mask and/or get an mRNA vaccine and/or buy a HEPA filter.
You could do any of these. You might even be able to do two or three of them. But you can't do all of them, and that list is just a small slice of all the highly technical questions that stand between you and misery or an early grave. Practically speaking, you aren't going to develop your own robust meatpacking hygiene standards, nor your own water treatment program, nor your own Boeing 737 MAX inspection protocol.
Markets don't solve this either. If they did, we wouldn't have to worry about chunks of Boeing jets falling on our heads. The reason we have agencies like the FDA (and enabling legislation like the Pure Food and Drug Act) is that markets failed to keep people from being murdered by profit-seeking snake-oil salesmen and radium suppository peddlers.
These vital questions need to be answered by experts, but that's easier said than done. After all, experts disagree about this stuff. Shortcuts for evaluating these disagreements ("distrust any expert whose employer has a stake in a technical question") are crude and often lead you astray. If you dismiss any expert employed by a firm that wants to bring a new product to market, you will lose out on the expertise of people who are so legitimately excited about the potential improvements of an idea that they quit their jobs and go to work for whomever has the best chance of realizing a product based on it. Sure, that doctor who works for a company with a new cancer cure might just be shilling for a big bonus – but maybe they joined the company because they have an informed, truthful belief that the new drug might really cure cancer.
What's more, the scientific method itself speaks against the idea of there being one, permanent answer to any big question. The method is designed as a process of continual refinement, where new evidence is continuously brought forward and evaluated, and where cherished ideas that are invalidated by new evidence are discarded and replaced with new ideas.
So how are we to survive and thrive in a world of questions we ourselves can't answer, that experts disagree about, and whose answers are only ever provisional?
The scientific method has an answer for this, too: refereed, adversarial peer review. The editors of major journals act as umpires in disputes among experts, exercising their editorial discernment to decide which questions are sufficiently in flux as to warrant taking up, then asking parties who disagree with a novel idea to do their damndest to punch holes in it. This process is by no means perfect, but, like democracy, it's the worst form of knowledge creation except for all others which have been tried.
Expert regulators bring this method to governance. They seek comment on technical matters of public concern, propose regulations based on them, invite all parties to comment on these regulations, weigh the evidence, and then pass a rule. This doesn't always get it right, but when it does work, your medicine doesn't poison you, the bridge doesn't collapse as you drive over it, and your airplane doesn't fall out of the sky.
Expert regulators work with legislators to provide an empirical basis for turning political choices into empirically grounded policies. Think of all the times you've heard about how the gerontocracy that dominates the House and the Senate is incapable of making good internet policy because "they're out of touch and don't understand technology." Even if this is true (and sometimes it is, as when Sen Ted Stevens ranted about the internet being "a series of tubes," not "a dump truck"), that doesn't mean that Congress can't make good internet policy.
After all, most Americans can safely drink their tap water, a novelty in human civilization, whose history amounts to short periods of thriving shattered at regular intervals by water-borne plagues. The fact that most of us can safely drink our water, but people who live in Flint (or remote indigenous reservations, or Louisiana's Cancer Alley) can't tells you that these neighbors of ours are being deliberately poisoned, as we know precisely how not to poison them.
How did we (most of us) get to the point where we can drink the water without shitting our guts out? It wasn't because we elected a bunch of water scientists! I don't know the precise number of microbiologists and water experts who've been elected to either house, but it's very small, and their contribution to good sanitation policy is negligible.
We got there by delegating these decisions to expert agencies. Congress formulates a political policy ("make the water safe") and the expert agency turns that policy into a technical program of regulation and enforcement, and your children live to drink another glass of water tomorrow.
Musk and Ramaswamy have set out to destroy this process. In their Wall Street Journal editorial, they explain that expert regulation is "undemocratic" because experts aren't elected:
https://www.wsj.com/opinion/musk-and-ramaswamy-the-doge-plan-to-reform-government-supreme-court-guidance-end-executive-power-grab-fa51c020
They've vowed to remove "thousands" of regulations, and to fire swathes of federal employees who are in charge of enforcing whatever remains:
https://www.theverge.com/2024/11/20/24301975/elon-musk-vivek-ramaswamy-doge-plan
And all this is meant to take place on an accelerated timeline, between now and July 4, 2026 – a timeline that precludes any meaningful assessment of the likely consequences of abolishing the regulations they'll get rid of.
"Chesterton's Fence" – a thought experiment from the novelist GK Chesterton – is instructive here:
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."
A regulation that works might well produce no visible sign that it's working. If your water purification system works, everything is fine. It's only when you get rid of the sanitation system that you discover why it was there in the first place, a realization that might well arrive as you expire in a slick of watery stool with a rectum so prolapsed the survivors can use it as a handle when they drag your corpse to the mass burial pits.
When Musk and Ramaswamy decry the influence of "unelected bureaucrats" on your life as "undemocratic," they sound reasonable. If unelected bureaucrats were permitted to set policy without democratic instruction or oversight, that would be autocracy.
Indeed, it would resemble life on the Tesla factory floor: that most autocratic of institutions, where you are at the mercy of the unelected and unqualified CEO of Tesla, who holds the purely ceremonial title of "Chief Engineer" and who paid the company's true founders to falsely describe him as its founder.
But that's not how it works! At its best, expert regulation turns political choices into policy that reflects the will of democratically accountable, elected representatives. Sometimes this fails, and when it does, the answer is to fix the system – not abolish it.
I have a favorite example of this politics/empiricism fusion. It comes from the UK, where, in 2008, the eminent psychopharmacologist David Nutt was appointed as the "drug czar" to the government. Parliament had determined to overhaul its system of drug classification, and they wanted expert advice:
https://locusmag.com/2021/05/cory-doctorow-qualia/
To provide this advice, Nutt convened a panel of drug experts from different disciplines and asked them to rate each drug in question on how dangerous it was for its user; for its user's family; and for broader society. These rankings were averaged, and then a statistical model was used to determine which drugs were always very dangerous, no matter which group's safety you prioritized, and which drugs were never very dangerous, no matter which group you prioritized.
Empirically, the "always dangerous" drugs should be in the most restricted category. The "never very dangerous" drugs should be at the other end of the scale. Parliament had asked how to rank drugs by their danger, and for these categories, there were clear, factual answers to Parliament's question.
But there were many drugs that didn't always belong in either category: drugs whose danger score changed dramatically based on whether you were more concerned about individual harms, familial harms, or societal harms. This prioritization has no empirical basis: it's a purely political question.
So Nutt and his panel said to Parliament, "Tell us which of these priorities matter the most to you, and we will tell you where these changeable drugs belong in your schedule of restricted substances." In other words, politicians make political determinations, and then experts turn those choices into empirically supported policies.
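The panel's procedure – score each drug on several harm axes, sweep over different political weightings, and see which drugs land in the same category no matter the weighting – can be sketched in a few lines of Python. Everything below (the drug names, scores, weights, and thresholds) is invented for illustration; it is not the panel's actual data or model, just the shape of the idea:

```python
# Hypothetical sketch of a Nutt-style harm ranking. Each drug gets expert
# harm scores for its user, its user's family, and broader society; we then
# sweep over several priority weightings and check which drugs rank high
# (or low) under every one of them.

# Invented illustrative scores on a 0-3 scale (not the panel's real data).
harm_scores = {
    "drug_a": {"user": 2.8, "family": 2.6, "society": 2.9},  # high on every axis
    "drug_b": {"user": 0.3, "family": 0.2, "society": 0.4},  # low on every axis
    "drug_c": {"user": 2.5, "family": 0.4, "society": 0.3},  # depends on priorities
}

# Candidate weightings: prioritize the user, the family, or society.
weightings = [
    {"user": 0.8, "family": 0.1, "society": 0.1},
    {"user": 0.1, "family": 0.8, "society": 0.1},
    {"user": 0.1, "family": 0.1, "society": 0.8},
]

def weighted_harm(scores, weights):
    """Total harm under one particular set of political priorities."""
    return sum(scores[axis] * weights[axis] for axis in scores)

def classify(drug, threshold_high=2.0, threshold_low=1.0):
    """'always dangerous' if harmful under every weighting, 'never very
    dangerous' if safe under every weighting, otherwise the ranking
    depends on priorities -- a purely political question."""
    totals = [weighted_harm(harm_scores[drug], w) for w in weightings]
    if all(t >= threshold_high for t in totals):
        return "always dangerous"
    if all(t <= threshold_low for t in totals):
        return "never very dangerous"
    return "political question"

for drug in harm_scores:
    print(drug, classify(drug))
```

The point of the sketch is the division of labor: the empirical part (the scores, the sweep) is the experts' job, while choosing the weights – and deciding where `drug_c` goes – is the politicians'.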
This is how policy by "unelected bureaucrats" can still be "democratic."
But the Nutt story doesn't end there. Nutt butted heads with politicians, who kept insisting that he retract factual, evidence-supported statements (like "alcohol is more harmful than cannabis"). Nutt refused to do so. It wasn't that he was telling politicians which decisions to make, but he took it as his duty to point out when those decisions did not reflect the policies they were said to be in support of. Eventually, Nutt was fired for his commitment to empirical truth. The UK press dubbed this "The Nutt Sack Affair" and you can read all about it in Nutt's superb book Drugs Without the Hot Air, an indispensable primer on the drug war and its many harms:
https://www.bloomsbury.com/us/drugs-without-the-hot-air-9780857844989/
Congress can't make these decisions. We don't elect enough water experts, virologists, geologists, oncology researchers, structural engineers, aerospace safety experts, pedagogists, gerontologists, physicists and other experts for Congress to turn its political choices into policy. Mostly, we elect lawyers. Lawyers can do many things, but if you ask a lawyer to tell you how to make your drinking water safe, you will likely die a horrible death.
That's the point. The idea that we should just trust the market to figure this out, or that all regulation should be expressly written into law, is just a way of saying, "you will likely die a horrible death."
Trump – and his hatchet men Musk and Ramaswamy – are not setting out to create evidence-based policy. They are pursuing policy-based evidence, firing everyone capable of telling them how to turn the values they espouse (prosperity and safety for all Americans) into policy.
They dress this up in the language of democracy, but the destruction of the expert agencies that turn the political will of our representatives into our daily lives is anything but democratic. It's a prelude to transforming the nation into a land of epistemological chaos, where you never know what's coming out of your faucet.
438 notes · View notes
wat3rm370n · 9 days ago
Text
The Accusation in a Mirror of Disruption.
0 notes
dbluegreen · 3 months ago
Video
youtube
Escaping the Processed World w/ Chris Carlsson
0 notes
alanshemper · 2 years ago
Text
For Hinton, the threat of AI is a future problem. It’s not about how AI will increase the power of employers over employees, how it will be wielded against marginalized communities, its potential environmental impacts, and other serious concerns that will affect a lot of people outside the circles of wealthy technologists and executives. Instead, Hinton’s focus is on the fantasy that AI is on the cusp of becoming more intelligent than humans, and will then have the capacity to trick and manipulate us into doing its bidding.
0 notes
cyberianlife · 2 years ago
Text
Paris Marx is joined by Timnit Gebru to discuss the misleading framings of artificial intelligence, her experience of getting fired by Google in a very public way, and why we need to avoid getting distracted by all the hype around ChatGPT and AI image tools.
Timnit Gebru is the founder and executive director of the Distributed AI Research Institute and former co-lead of the Ethical AI research team at Google. You can follow her on Twitter at @timnitGebru.
Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, support the show on Patreon, and sign up for the weekly newsletter.
The podcast is produced by Eric Wickham and part of the Harbinger Media Network.
1 note · View note
inkprovised · 20 days ago
Text
Tumblr media
I was sketching when my graphic tablet died on me...
96 notes · View notes
batcavescolony · 7 months ago
Text
S4 E3 Supernatural
Now THIS is a good episode. Castiel took Dean back in time to 1973! We find out that Sam and Dean's maternal grandparents, Samuel and Deanna Campbell, and Mary are hunters. On top of that, Azazel is playing matchmaker so he can have his little psychic children be the best of the best, and he made a deal with Mary to revive John after he killed him. Also, as if Azazel hasn't killed enough of Sam & Dean's family, he killed Samuel and Deanna too. Oh, this is so interesting, and then there's Castiel taking Dean back, saying destiny can't be changed but Sam is going down a dark path and either Dean stops him or angels do.
23 notes · View notes
shikai-the-storyteller · 8 months ago
Text
Ok I heard Pac is doing the Fallout watchparty in a few days (?) and his VOD won't be saved either, so I'm frickin learning how to use OBS solely so I can record the entire thing WITH subtitles.
11 notes · View notes
Text
Working in publishing, my inbox is basically just:
Article on the Horrors of AI
Article on How AI Can Help Your Business
Article on How AI Has Peaked
Article on How AI Is Here to Stay Forever
Article on How AI Is a Silicon Valley Scam That Doesn't Live Up to the Promise and In Fact Can't Because They've Literally Run Out of Written Words to Train LLMs On
#allison's work life#artificial generation fuckery#in point of fact we're lumping a lot of things into 'AI' so probably bits of them are all true#i think AI narration probably is here to stay because we've been mass training that for ages (what did you think alexa and siri were?)#i think ai covers will stick around on the low price point end unless those servers go the way of crypto#but as with everywhere they'll be limited because you can't ask an ai for design alts#(and do you guys know how many fucking passes it takes to make minute finicky changes to get exec to sign off on a cover?)#i think ai translation for books will die on the vine - you'd have to feed the whole text of your book to the ai and publishers hate that#ai writing is absolute garbage at long form so it will never replace authorship#it's also not going to be used to write a lot of copy because again you'd have to feed the ai your book and publishers say no way#like the thing to keep in mind is publishers want to save money but they want to control their intellectual property even more#that's the bread and butter#the number 1 thing they don't want to do is feed the books into an LLM#christ we won't even give libraries a fair deal on ebooks you think they're just going to give that shit away to their competitors??#but also i don't think the server/power/tech issue is sustainable for something like chatgpt and it is going to go the way of crypto#is humanity going to create an actual artificial intelligence that can write and think and draw?#yeah probably eventually#i do not think this attempt is it#they got too greedy and did too much too fast and when the money dries up? that's it#maybe I'm wrong but i just think the money will dry out long before the tech improves
4 notes · View notes
xitsensunmoon · 9 months ago
Text
HOW TO GLAZE YOUR WORK WITHOUT A GOOD PC(or on mobile)/TIPS TO MAKE IT LESS VISIBLE
Glaze your work online on:
Cara app. It requires you to sign up but it is actually a good place for your portfolio. Glazing takes 3 minutes per image and doesn't require anything but an internet connection, compared to 20-30 minutes if your pc doesn't have a good graphics card. There IS a daily limit of 9 pictures tho. Glazed art will be sent to you after it's done, by email. It took me 30 minutes to glaze 9 images on a default setting. Cara app is also a space SPECIFICALLY for human artists and the team does everything in their power to ensure it stays that way.
WebGlaze. This one is a little bit more complicated, as you will need to get approval from the Glaze team themselves, to ensure you're not another AI tech bro(which, go fuck yourself if you are). You can do it through their twitter, through the same Cara app(the easiest way) or send them an email(takes the longest). For more details read on their website.
Unfortunately there are no ways that I know of to use Nightshade YET, as it's quite new. Cara.app definitely works on implementing it into their posting system tho!
Now for the tips to make it less visible(the examples contain only nightshade's rendering, sorry for that!):
Heavy textures. My biggest tip by far. Noise, textured brushes or just an overlay layer, everything works well. Preferably, choose the ones that are "crispy" and aren't blurred. It won't really help to hide rough edges of glaze/nightshade if you blur it. You can use more traditional textures too, like watercolor, canvas, paper etc. Play with it.
Tumblr media
Colour variety. Some brushes and settings allow you to change the colour you use just slightly with every stroke you make(colour jitter I believe?). If you dislike the process of it while drawing, you can clip a new layer to your colour art and just add it on top. Saves from the "rainbow-y" texture that glaze/nightshade overlays.
Gradients(in combination with textures work very well). Glaze/nightshade is more visible on low contrast/very light/very dark artworks. Try implementing a simple routine of adding more contrast to your art, even to the doodles. Just adding a neutral-coloured bg with a darker textured gradient already is going to look better than just plain, sterile digital colour.
Tumblr media
And finally, if you dislike how glaze did the job, just try to glaze/shade it again. Sometimes it's more visible, sometimes it's more subtle, it's just luck. Try again, compare, and choose the one you like the most. REMEMBER TO GLAZE/SHADE AFTER YOU MADE ALL THE CHANGES, NOT BEFORE!!
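The "crispy noise" tip above can be sketched in a few lines of Python with NumPy (treating the image as an H x W x 3 RGB array). This is just one generic way to add an un-blurred texture overlay – not the Glaze team's own tooling, and the `strength` value is an invented starting point to play with:

```python
import numpy as np

def add_crispy_noise(pixels, strength=12, seed=0):
    """Overlay sharp, un-blurred per-pixel noise on an RGB image
    (H x W x 3 uint8 array) to help mask the rainbow-y texture that
    glazing/shading can leave behind. The noise is left crisp on
    purpose: blurring it would defeat the point."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(-strength, strength + 1, size=pixels.shape)
    # Widen to int16 before adding so values can go below 0 / above 255,
    # then clip back into valid 8-bit range.
    return np.clip(pixels.astype(np.int16) + noise, 0, 255).astype(np.uint8)

# Usage with e.g. Pillow (after glazing, per the tip above):
#   from PIL import Image
#   img = np.asarray(Image.open("glazed_art.png").convert("RGB"))
#   Image.fromarray(add_crispy_noise(img)).save("glazed_art_textured.png")
```

Remember that per the post's last tip, any texturing like this should happen before you glaze/shade, since glazing is the final step.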
If you have any more info feel free to add to this post!!
7K notes · View notes
wat3rm370n · 12 days ago
Text
All the nightmares of AI in hospitals.
Tracking everything a patient says in the ICU in order to deny nurses the ability to complain about unsafe staffing levels.
.coda - I’m a neurology ICU nurse. The creep of AI in our hospitals terrifies me
By Michael Kennedy and Isobel Cockerell - 12 November 2024

We felt the system was designed to take decision-making power away from nurses at the bedside. Deny us the power to have a say in how much staffing we need. That was the first thing.

Then, earlier this year, the hospital got a huge donation from the Jacobs family, and they hired a chief AI officer. When we heard that, alarm bells went off — “they’re going all in on AI,” we said to each other. We found out about this Scribe technology that they were rolling out. It’s called Ambient Documentation. They announced they were going to pilot this program with the physicians at our hospital.

It basically records your encounter with your patient. And then it’s like ChatGPT or a large language model — it takes everything and just auto-populates a note. Or your “documentation.” There were obvious concerns with this, and the number one thing that people said was, “Oh my god — it’s like mass surveillance. They’re gonna listen to everything our patients say, everything we do. They’re gonna track us.”
0 notes
corrodedbisexual · 7 months ago
Text
Eddie is constantly bouncing between jobs and rage quitting every 6 months on average. Steve, however, somehow gets lucky with a job in computer sales. With the industry in a booming rise, he makes a pretty decent income to support them both whenever Eddie's out of a job. Best part is, even though his charming voice and smile certainly help make sales, he doesn't feel like he's one of those scammers pushing all kinds of crap people don't need. Computers are objectively useful.
This goes on until their mid 30s and Steve saves up enough to open his own small tech store. He very hesitantly starts involving his recently unemployed (again) boyfriend in some mundane tasks (upon Eddie's own initiative saying he wants to help) and quickly learns that all of Eddie's previous bosses were morons. Eddie's meticulous and a quick learner with every single task. All he needs is not to have a boss who's a total jackass to him, and a bit of freedom to just... be himself.
Eddie does everything with mild enthusiasm; mild, because it's still work, ugh; enthusiasm, because it's his BOYFRIEND finally being free to do his own thing instead of working for The Man, woohoo, go Stevie! Eddie doesn't need to wear a stupid uniform or put his hair up, can play music in his headphones doing inventory, answers the phones in his special flirty manner, and Steve doesn't have a problem with any of that. He actually listens to Eddie's bitching and recognizes the helpful suggestions to improve things in the middle of all that, instead of telling him to shut up and do his damn job.
Working together can often be the perfect storm to ruin a relationship, but despite becoming Eddie's de-facto boss, Steve never treats him differently. It's never orders, always "Eddie can you [do this and that]?". It's soft smiles and a quiet "thanks, babe", and if no one's around, a kiss on Eddie's cheek when he gets something done. It's a calm explanation instead of yelling if he messes up.
Steve hands Eddie a handful of cash at the end of each week, despite Eddie's comments that it's a bit ridiculous to pay him at all, since he'd been practically living out of Steve's pocket for months at a time, and Steve has been single-handedly paying the rent for their joint apartment. Steve insists though, and Eddie has to admit that it's nice to always have cash in his pocket now.
Eddie learns more and more of everything that's needed to run the store, to the point that he spends a week handling everything alone when Steve's sick with the flu, but it's still a shock when several months later Steve shows him the paperwork in which he writes Eddie in as full partner. Eddie tries to protest, but Steve won't have it; he says he never could have survived all these months of start-up chaos without Eddie, and he fully deserves this. He's been giving Eddie half the store profits for months anyway, time to just make it official.
1K notes · View notes