#silicon valley history
The Birth of an Industry: Fairchild’s Pivotal Role in Shaping Silicon Valley
In the late 1950s, the Santa Clara Valley of California witnessed a transformative convergence of visionary minds, daring entrepreneurship, and groundbreaking technological advancements. At the heart of this revolution was Fairchild Semiconductor, a pioneering company whose innovative spirit, entrepreneurial ethos, and technological breakthroughs not only defined the burgeoning semiconductor industry but also indelibly shaped the region’s evolution into the world-renowned Silicon Valley.
A seminal 1967 promotional film, featuring Dr. Harry Sello and Dr. Jim Angell, offers a fascinating glimpse into Fairchild’s revolutionary work on integrated circuits (ICs), a technology that would soon become the backbone of the emerging tech industry. By demystifying IC design, development, and applications, Fairchild exemplified its commitment to innovation and knowledge sharing, setting a precedent for the collaborative and open approach that would characterize Silicon Valley’s tech community. Specifically, Fairchild’s introduction of the planar process and the first monolithic IC in 1959 marked a significant technological leap, with the former enhancing semiconductor manufacturing efficiency by up to 90% and the latter paving the way for the miniaturization of electronic devices.
Beyond its technological feats, Fairchild’s entrepreneurial ethos, nurtured by visionary founders Robert Noyce and Gordon Moore, served as a blueprint for subsequent tech ventures. The company’s strategies for attracting and nurturing talent, including competitive compensation packages and the encouragement of intrapreneurship, helped establish the region as a magnet for innovators and risk-takers. This, in turn, laid the foundation for the dense network of startups, investors, and expertise that defines Silicon Valley’s ecosystem today. Notably, Fairchild’s presence spurred the development of supporting infrastructure, including the expansion of Stanford University’s research facilities and the establishment of specialized supply chains, further solidifying the region’s position as a global tech hub. By 1965, the area had witnessed a surge in tech-related employment, with jobs increasing by over 300% compared to the previous decade, a direct testament to Fairchild’s catalyzing effect.
The trajectory of Fairchild Semiconductor, including its challenges and eventual transformation, intriguingly parallels the broader narrative of Silicon Valley’s growth. The company’s decline under later ownership and its subsequent re-emergence underscore the region’s inherent capacity for reinvention and adaptation. This resilience, initially embodied by Fairchild’s pioneering spirit, has become a hallmark of Silicon Valley, enabling the region to navigate the rapid evolution of the tech industry with unparalleled agility.
What future innovations will emerge from the valley, leveraging the foundations laid by pioneers like Fairchild, to shape the global technological horizon in the decades to come?
Dr. Harry Sello and Dr. Jim Angell: The Design and Development Process of the Integrated Circuit (Fairchild Semiconductor Corporation, October 1967)
youtube
Robert Noyce: The Development of the Integrated Circuit and Its Impact on Technology and Society (The Computer Museum, Boston, May 1984)
youtube
Tuesday, December 3, 2024
#silicon valley history#tech industry origins#entrepreneurial ethos#innovation and technology#california santa clara valley#integrated circuits#semiconductor industry development#promotional film#ai assisted writing#machine art#Youtube#lecture
being on here was so easy when i didn’t have my blog personalized at all and i just used the web version to look at the silicon valley hbo tag
#the first thing i ever actually engaged with on here i fear. very troubling roots#made this blog in 2017 and then it sat untouched for years and i would occasionally check for sims cc#and then. silicon valley shortly followed by always sunny a few months later#and the rest was history
Hey, what does disruptor mean? I saw it when looking at your answers. I’ve also seen people joke about it on twitter but I can’t find a meaning to it.
It's a term I personally loathe, but I'm willing to do some recent cultural/intellectual history to explain where it came from and what it means.
The term disruptor as it's commonly used today comes out of the business world, more specifically the high tech sector clustered in Silicon Valley. Originally coined as "disruptive innovation" by business school professor Clayton Christensen in the mid-to-late 90s, the idea was that certain new businesses (think your prototypical startup) have a greater tendency to develop innovative technologies and business models that radically destabilize established business models, markets, and large corporations - and in the process, help to speed up economic and technological progress.
While Christensen's work was actually about business models and firm-level behavior, over time this concept mutated to focus on the individual entrepreneur/inventor/founder figure of the "disruptor," as part of the lionization of people like Steve Jobs or Mark Zuckerberg or Elon Musk, or firms like Lyft, Uber, WeWork, Theranos, etc. It also mutated into a general belief that "disrupting" markets and, increasingly, social institutions is how society will and should progress.
I find these ideas repellant. First of all, when it comes to the actual business side of things, I think it mythologizes corporate executives as creative geniuses by attributing credit for innovations actually created by the people they employ. Elon Musk didn't create electric cars or reusable rockets, Steve Jobs didn't design any computers or program any OSes, but because they're considered "disruptors," we pretend that they did. This has a strong effect on things like support for taxing the rich - because there is this popular image of the "self-made billionaire" as someone who "earned" their wealth through creating "disruptive" companies or technologies, there is more resistance to taxing or regulating the mega-wealthy than would otherwise be the case.
Even more importantly, treating "disruptors" like heroes and "disruption" as a purely good thing tends to make people stop thinking about whether disruption to a given industry is actually a good thing, whether what tech/Silicon Valley/startup firms are doing is actually innovative, what the economic and social costs of the disruption are, and who pays them. Because when we look at a bunch of high-profile case studies, it often turns out to be something of a case of smoke and mirrors.
To take ridesharing as an example, Lyft and Uber and similar companies aren't actually particularly innovative. Yes, they have apps that connect riders to drivers, but that's not actually that different from the old school method of using the phone to call up a livery cab company. There's a lot of claims about how the apps improve route planning or the availability of drivers or bring down prices, but they're usually overblown: route planning software is pretty common (think Google Maps), when you actually look at how Lyft and Uber create availability, it's by flooding the market with large numbers of new drivers, and when you look at how they got away with low prices, it was usually by spending billions upon billions of venture capital money on subsidizing their rides.
Moreover, this "disruption" has a pretty nasty dark side. To start with, Lyft and Uber's business strategy is actually a classic 19th century monopoly strategy dressed up in 21st century rhetoric: the "low prices" had nothing to do with innovative practices or new technology, it was Lyft and Uber pulling the classic move of deliberately selling at a loss to grab market share from the competition, at which point they started raising their prices on consumers. Availability of drivers was accomplished by luring way too many new drivers into the labor market with false promises of making high wages in their spare time, but when the over-supply of drivers inevitably caused incomes to decline, huge numbers of rideshare drivers found themselves trapped by auto debts and exploited by the companies' taking a significant chunk of their earnings, using the threat of cutting them off from the app to cow any resistance. And above all, Lyft and Uber's "disruption" often came down to a willful refusal to abide by pre-existing regulations meant to ensure that drivers could earn a living wage, that consumers would be protected in the case of accidents or from the bad behavior of drivers, etc.

As a policy historian, however, I find the extension of "disruption" into social institutions the most troubling. Transportation, health care, education, etc. are absolutely vital for the functioning of modern society and are incredibly complex systems that require a lot of expertise and experience to understand, let alone change. Letting a bunch of billionaires impose technocratic "reforms" on them from above, simply because they say they're really smart or because they donate a bunch of money, is a really bad idea - especially because when we see what the "disruptors" actually propose and/or do, it often shows them to be very ordinary (if not actively stupid) people who don't really know what they're doing.
Elon Musk's Loop is an inherently worse idea than mass transit. His drive for self-driving cars is built on lies. Pretty much all of the Silicon Valley firms that have tried to "disrupt" in the area of transportation end up reinventing the wheel and proposing the creation of buses or trolleys or subways.
Theranos was a giant fraud that endangered the lives of thousands in pursuit of an impossible goal that, even if it could have been achieved, wouldn't have made much of a difference in people's lives compared to other, more fruitful areas of biotech and medical research.
From Bill Gates to Mark Zuckerberg, Silicon Valley billionaires have poured huge amounts of philanthropy dollars into all kinds of interventions in public education, from smaller classrooms to MOOCs to teacher testing to curriculum reform to charter schools. The track record of these reforms has been pretty uniformly abysmal, because it turns out that educational outcomes are shaped by pretty much every social force you can think of and educational systems are really complex and difficult to measure.
So yeah, fuck disruptors.
When tech whizkids are caught behaving badly, they're just being "brilliant jerks." And the figure of the charismatic-but-bratty genius inventor is everywhere these days. We look at how the isolated, tormented mad scientist in science fiction evolved into the sexy asshole that everyone wants to be. And we talk to Christopher Cantwell, co-creator of Halt and Catch Fire and recently writer of the Iron Man comic, about how Tony Stark has changed.
#Silicon Valley vs. Science Fiction: Difficult Geniuses#our opinions are correct#podcast#podcasts#silicon valley#tech bros#tech billionaires#technology history#technology#halt and catch fire#Christopher Cantwell#charlie jane anders#annalee newitz#science fiction#science fiction history
#pirates#silicon valley#apple#macintosh#steve jobs#bill gates#microsoft#windows#retro#computers#history
Why Silicon Valley is here
youtube
Tesla's Dieselgate
Elon Musk lies a lot. He lies about being a “utopian socialist.” He lies about being a “free speech absolutist.” He lies about which companies he founded:
https://www.businessinsider.com/tesla-cofounder-martin-eberhard-interview-history-elon-musk-ev-market-2023-2
He lies about being the “chief engineer” of those companies:
https://www.quora.com/Was-Elon-Musk-the-actual-engineer-behind-SpaceX-and-Tesla
He lies about really stupid stuff, like claiming that comsats that share the same spectrum will deliver steady broadband speeds as they add more users who each get a narrower slice of that spectrum:
https://www.eff.org/wp/case-fiber-home-today-why-fiber-superior-medium-21st-century-broadband
The fundamental laws of physics don’t care about this bullshit, but people do. The comsat lie convinced a bunch of people that pulling fiber to all our homes is literally impossible — as though the electrical and phone lines that come to our homes now were installed by an ancient, lost civilization. Pulling new cabling isn’t a mysterious art, like embalming pharaohs. We do it all the time. One of the poorest places in America installed universal fiber with a mule named “Ole Bub”:
https://www.newyorker.com/tech/annals-of-technology/the-one-traffic-light-town-with-some-of-the-fastest-internet-in-the-us
Previous tech barons had “reality distortion fields,” but Musk just blithely contradicts himself and pretends he isn’t doing so, like a budget Steve Jobs. There’s an entire site devoted to cataloging Musk’s public lies:
https://elonmusk.today/
But while Musk lacks the charm of earlier Silicon Valley grifters, he’s much better than they ever were at running a long con. For years, he’s been promising “full self driving…next year.”
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
He’s hasn’t delivered, but he keeps claiming he has, making Teslas some of the deadliest cars on the road:
https://www.washingtonpost.com/technology/2023/06/10/tesla-autopilot-crashes-elon-musk/
Tesla is a giant shell-game masquerading as a car company. The important thing about Tesla isn’t its cars, it’s Tesla’s business arrangement, the Tesla-Financial Complex:
https://pluralistic.net/2021/11/24/no-puedo-pagar-no-pagara/#Rat
Once you start unpacking Tesla’s balance sheets, you start to realize how much the company depends on government subsidies and tax-breaks, combined with selling carbon credits that make huge, planet-destroying SUVs possible, under the pretense that this is somehow good for the environment:
https://pluralistic.net/2021/04/14/for-sale-green-indulgences/#killer-analogy
But even with all those financial shenanigans, Tesla’s got an absurdly high valuation, soaring at times to 1600x its profitability:
https://pluralistic.net/2021/01/15/hoover-calling/#intangibles
That valuation represents a bet on Tesla’s ability to extract ever-higher rents from its customers. Take Tesla’s batteries: you pay for the battery when you buy your car, but you don’t own that battery. You have to rent the right to use its full capacity, with Tesla reserving the right to reduce how far you go on a charge based on your willingness to pay:
https://memex.craphound.com/2017/09/10/teslas-demon-haunted-cars-in-irmas-path-get-a-temporary-battery-life-boost/
That’s just one of the many rent-a-features that Tesla drivers have to shell out for. You don’t own your car at all: when you sell it as a used vehicle, Tesla strips out these features you paid for and makes the next driver pay again, reducing the value of your used car and transfering it to Tesla’s shareholders:
https://www.theverge.com/2020/2/6/21127243/tesla-model-s-autopilot-disabled-remotely-used-car-update
To maintain this rent-extraction racket, Tesla uses DRM that makes it a felony to alter your own car’s software without Tesla’s permission. This is the root of all autoenshittification:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
This is technofeudalism. Whereas capitalists seek profits (income from selling things), feudalists seek rents (income from owning the things other people use). If Tesla were a capitalist enterprise, then entrepreneurs could enter the market and sell mods that let you unlock the functionality in your own car:
https://pluralistic.net/2020/06/11/1-in-3/#boost-50
But because Tesla is a feudal enterprise, capitalists must first secure permission from the fief, Elon Musk, who decides which companies are allowed to compete with him, and how.
Once a company owns the right to decide which software you can run, there’s no limit to the ways it can extract rent from you. Blocking you from changing your device’s software lets a company run overt scams on you. For example, they can block you from getting your car independently repaired with third-party parts.
But they can also screw you in sneaky ways. Once a device has DRM on it, Section 1201 of the DMCA makes it a felony to bypass that DRM, even for legitimate purposes. That means that your DRM-locked device can spy on you, and because no one is allowed to explore how that surveillance works, the manufacturer can be incredibly sloppy with all the personal info they gather:
https://www.cnbc.com/2019/03/29/tesla-model-3-keeps-data-like-crash-videos-location-phone-contacts.html
All kinds of hidden anti-features can lurk in your DRM-locked car, protected from discovery, analysis and criticism by the illegality of bypassing the DRM. For example, Teslas have a hidden feature that lets them lock out their owners and summon a repo man to drive them away if you have a dispute about a late payment:
https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/
DRM is a gun on the mantlepiece in Act I, and by Act III, it goes off, revealing some kind of ugly and often dangerous scam. Remember Dieselgate? Volkswagen created a line of demon-haunted cars: if they thought they were being scrutinized (by regulators measuring their emissions), they switched into a mode that traded performance for low emissions. But when they believed themselves to be unobserved, they reversed this, emitting deadly levels of NOX but delivering superior mileage.
The conversion of the VW diesel fleet into mobile gas-chambers wouldn’t have been possible without DRM. DRM adds a layer of serious criminal jeopardy to anyone attempting to reverse-engineer and study any device, from a phone to a car. DRM let Apple claim to be a champion of its users’ privacy even as it spied on them from asshole to appetite:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
Now, Tesla is having its own Dieselgate scandal. A stunning investigation by Steve Stecklow and Norihiko Shirouzu for Reuters reveals how Tesla was able to create its own demon-haunted car, which systematically deceived drivers about its driving range, and the increasingly desperate measures the company turned to as customers discovered the ruse:
https://www.reuters.com/investigates/special-report/tesla-batteries-range/
The root of the deception is very simple: Tesla mis-sells its cars by falsely claiming ranges that those cars can’t attain. Every person who ever bought a Tesla was defrauded.
But this fraud would be easy to detect. If you bought a Tesla rated for 353 miles on a charge, but the dashboard range predictor told you that your fully charged car could only go 150 miles, you’d immediately figure something was up. So your Tesla tells another lie: the range predictor tells you that you can go 353 miles.
But again, if the car continued to tell you it has 203 miles of range when it was about to run out of charge, you’d figure something was up pretty quick — like, the first time your car ran out of battery while the dashboard cheerily informed you that you had 203 miles of range left.
So Teslas tell a third lie: when the battery charge reaches about 50%, the fake range is replaced with the real one. That way, drivers aren’t getting mass-stranded by the roadside, and the scam can continue.
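As described, the dashboard's behavior amounts to a simple piece of conditional logic. Here's a speculative sketch in Python of what the reporting implies (this is not Tesla's actual code; the function name, the realistic-range figure, and the exact 50% cutoff are all illustrative):

```python
# Speculative sketch of the range-display behavior described above.
# NOT Tesla's code; the numbers and the threshold are illustrative only.

RATED_RANGE_MILES = 353   # the advertised, optimistic figure
REAL_RANGE_MILES = 260    # what the pack actually delivers (hypothetical)

def displayed_range(charge_fraction: float) -> float:
    """Return the range shown on the dashboard for a given state of charge."""
    if charge_fraction > 0.5:
        # Above ~50% charge: scale the optimistic, advertised figure.
        return RATED_RANGE_MILES * charge_fraction
    # Below ~50%: quietly switch to a realistic estimate, so drivers
    # aren't stranded while the display still promises extra miles.
    return REAL_RANGE_MILES * charge_fraction

# At full charge the display matches the brochure...
assert displayed_range(1.0) == 353
# ...but at 40% it silently reflects reality.
assert round(displayed_range(0.4)) == 104
```

The point of the sketch is the asymmetry: the flattering number is shown only when the driver is least likely to be able to test it.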
But there’s a new problem: drivers whose cars are rated for 353 miles but can’t go anything like that far on a full charge naturally assume that something is wrong with their cars, so they start calling Tesla service and asking to have the car checked over.
This creates a problem for Tesla: those service calls can cost the company $1,000, and of course, there’s nothing wrong with the car. It’s performing exactly as designed. So Tesla created its boldest fraud yet: a boiler-room full of anti-salespeople charged with convincing people that their cars weren’t broken.
This new unit — the “diversion team” — was headquartered in a Nevada satellite office, which was equipped with a metal xylophone that would be rung in triumph every time a Tesla owner was successfully conned into thinking that their car wasn’t defrauding them.
When a Tesla owner called this boiler room, the diverter would run remote diagnostics on their car, then pronounce it fine, and chide the driver for having energy-hungry driving habits (shades of Steve Jobs’s “You’re holding it wrong”):
https://www.wired.com/2010/06/iphone-4-holding-it-wrong/
The drivers who called the Diversion Team weren’t just lied to, they were also punished. The Tesla app was silently altered so that anyone who filed a complaint about their car’s range was no longer able to book a service appointment for any reason. If their car malfunctioned, they’d have to request a callback, which could take several days.
Meanwhile, the diverters on the diversion team were instructed not to inform drivers if the remote diagnostics they performed detected any other defects in the cars.
The diversion team had a 750 complaint/week quota: to juke this stat, diverters would close the case for any driver who failed to answer the phone when they were eventually called back. The center received 2,000+ calls every week. Diverters were ordered to keep calls to five minutes or less.
Eventually, diverters were ordered to cease performing any remote diagnostics on drivers’ cars: a source told Reuters that “Thousands of customers were told there is nothing wrong with their car” without any diagnostics being performed.
Predicting EV range is an inexact science as many factors can affect battery life, notably whether a journey is uphill or downhill. Every EV automaker has to come up with a figure that represents some kind of best guess under a mix of conditions. But while other manufacturers err on the side of caution, Tesla has the most inaccurate mileage estimates in the industry, double the industry average.
Other countries’ regulators have taken note. In Korea, Tesla was fined millions and Elon Musk was personally required to state that he had deceived Tesla buyers. The Korean regulator found that the true range of Teslas under normal winter conditions was less than half of the claimed range.
Now, many companies have been run by malignant narcissists who lied compulsively — think of Thomas Edison, archnemesis of Nikola Tesla himself. The difference here isn’t merely that Musk is a deeply unfit monster of a human being — but rather, that DRM allows him to defraud his customers behind a state-enforced opaque veil. The digital computers at the heart of a Tesla aren’t just demons haunting the car, changing its performance based on whether it believes it is being observed — they also allow Musk to invoke the power of the US government to felonize anyone who tries to peer into the black box where he commits his frauds.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/07/28/edison-not-tesla/#demon-haunted-world
This Sunday (July 30) at 1530h, I’m appearing on a panel at Midsummer Scream in Long Beach, CA, to discuss the wonderful, award-winning “Ghost Post” Haunted Mansion project I worked on for Disney Imagineering.
Image ID [A scene out of an 11th century tome on demon-summoning called 'Compendium rarissimum totius Artis Magicae sistematisatae per celeberrimos Artis hujus Magistros. Anno 1057. Noli me tangere.' It depicts a demon tormenting two unlucky would-be demon-summoners who have dug up a grave in a graveyard. One summoner is held aloft by his hair, screaming; the other screams from inside the grave he is digging up. The scene has been altered to remove the demon's prominent, urinating penis, to add in a Tesla supercharger, and a red Tesla Model S nosing into the scene.]
Image: Steve Jurvetson (modified) https://commons.wikimedia.org/wiki/File:Tesla_Model_S_Indoors.jpg
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
#pluralistic#steve stecklow#autoenshittification#norihiko shirouzu#reuters#you're holding it wrong#r2r#right to repair#range rage#range anxiety#grifters#demon-haunted world#drm#tpms#1201#dmca 1201#tesla#evs#electric vehicles#ftc act section 5#unfair and deceptive practices#automotive#enshittification#elon musk
The Pleasants Effect (2020) – 2 stars
Re-editing the footage of someone long deceased, to construct whole sentences, undermines what little credibility the film's claims had. #history #invention #weather #sanfrancisco
Director: Pete Levine
Running time: 36 mins

One of the things I often find hardest about reviews for Indy Film Library is avoiding playing ‘fantasy filmmaker’. While I would argue there is no such thing as objective criticism in this game – we all bring something of ourselves to the movies, and those aspects of our personality inescapably affect how we interact with the film in front of us –…
#C.R. Pleasants#documentary#elon musk#fog#history#inventor#oral history#san francisco#short film#silicon valley#weather
Just after Trump’s re-election in November 2024, I wrote a column headlined ‘How to Survive the Broligarchy’ (reproduced below) and in the three months since, pretty much everything it predicted has now come to pass. This is technoauthoritarianism. It’s tyranny + surveillance tools. It’s the merger of Silicon Valley companies with state power. It’s the ‘broligarchy’, a concept I coined in July last year, though I’ve been contemplating it for a lot longer. Since 2016, I’ve followed a thread that led from Brexit to Trump via a shady data company called Cambridge Analytica to expose the profound threat technology poses to democracy. In doing so, I became the target: a weaponized lawsuit and an overwhelming campaign of online abuse silenced and paralysed me for a long time. This - and worse - is what so many others now face. I’m here to tell you that if it comes for you, you can and will survive it.
This week represents a hinge of history. Everything has changed. America and Russia are now allies. Ukraine has been thrown to the dogs. Europe’s security hangs in the balance. On the one hand, there’s nothing any of us can do. On the other, we have to do something. So, here’s what I’m doing. I’m starting a conversation. I’ve recorded the first one - a scrappy pilot - a podcast I’ve called How to Survive the Broligarchy and I’ve re-named the newsletter too. This first conversation (details below) is about how we need a new media built from the ground up to deal with the dangerous new world we’re in. That can only happen in partnership with you, the reader. The days of top-down command and control are over. Please let’s try and do this together.
1 When someone tells you who they are, believe them. Last week Donald Trump appointed a director of intelligence who spouts Russian propaganda, a Christian nationalist crusader as secretary of defence, and a secretary of health who is a vaccine sceptic. If Trump was seeking to destroy American democracy, the American state and American values, this is how he’d do it.
2 Journalists are first, but everyone else is next. Trump has announced multibillion-dollar lawsuits against “the enemy camp”: newspapers and publishers. His proposed FBI director is on record as wanting to prosecute certain journalists. Journalists, publishers, writers, academics are always in the first wave. Doctors, teachers, accountants will be next. Authoritarianism is as predictable as a Swiss train. It’s already later than you think.
3 To name is to understand. This is McMuskism: it’s McCarthyism on steroids, political persecution + Trump + Musk + Silicon Valley surveillance tools. It’s the dawn of a new age of political witch-hunts, where burning at the stake meets data harvesting and online mobs.
4 If that sounds scary, it’s because that’s the plan. Trump’s administration will be incompetent and reckless but individuals will be targeted, institutions will cower, organisations will crumble. Fast. The chilling will be real and immediate.
5 You have more power than you think. We’re supposed to feel powerless. That’s the strategy. But we’re not. If you’re a US institution or organisation, form an emergency committee. Bring in experts. Learn from people who have lived under authoritarianism. Ask advice.
6 Do not kiss the ring. Do not bend to power. Power will come to you, anyway. Don’t make it easy. Not everyone can stand and fight. But nobody needs to bend the knee until there’s an actual memo to that effect. WAIT FOR THE MEMO.
7 Know who you are. This list is a homage to Yale historian, Timothy Snyder. His On Tyranny, published in 2017, is the essential guide to the age of authoritarianism. His first command, “Do not obey in advance”, is what has been ringing, like tinnitus, in my ears ever since the Washington Post refused to endorse Kamala Harris. In some weird celestial stroke of luck, he calls me as I’m writing this and I ask for his updated advice: “Know what you stand for and what you think is good.”
8 Protect your private life. The broligarchy doesn’t want you to have one. Read Shoshana Zuboff’s The Age of Surveillance Capitalism: they need to know exactly who you are to sell you more shit. We’re now beyond that. Surveillance Authoritarianism is next. Watch The Lives of Others, the beautifully told film about surveillance in 80s east Berlin. Act as if you are now living in East Germany and Meta/Facebook/Instagram/WhatsApp is the Stasi. It is.
9 Throw up the Kool-Aid. You drank it. That’s OK. We all did. But now is the time to stick your fingers down your throat and get that sick tech bro poison out of your system. Phones were – still are – a magic portal into a psychedelic fun house of possibility. They’re also tracking and surveilling you even as you sleep while a Silicon Valley edgelord plots ways to tear up the federal government.
10 Listen to women of colour. Everything bad that happened on the internet happened to them first. The history of technology is that it is only when it affects white men that it’s considered a problem. Look at how technology is already being used to profile and target immigrants. Know that you’re next.
11 Think of your personal data as nude selfies. A veteran technology journalist told me this in 2017 and it’s never left me. My experience of “discovery” – handing over 40,000 emails, messages, documents to the legal team of the Brexit donor I’d investigated – left me paralysed and terrified. Think what a hostile legal team would make of your message history. This can and will happen.
12 Don’t buy the bullshit. A Securities and Exchange judgment found Facebook had lied to two journalists – one of them was me – and Facebook agreed to pay a $100m penalty. If you are a journalist, refuse off the record briefings. Don’t chat on the phone; email. Refuse access interviews. Bullshit exclusives from Goebbels 2.0 will be a stain on your publication for ever.
13 Even dickheads love their dogs. Find a way to connect to those you disagree with. “The obvious mistakes of those who find themselves in opposition are to break off relations with those who disagree with you,” texts Vera Krichevskaya, the co-founder of TV Rain, Russia’s last independent TV station. “You cannot allow anger and narrow your circle.”
14 Pay in cash. Ask yourself what an international drug trafficker would do, and do that. They’re not going to the dead drop by Uber or putting 20kg of crack cocaine on a credit card. In the broligarchy, every data point is a weapon. Download Signal, the encrypted messaging app. Turn on disappearing messages.
15 Remember. Writer Rebecca Solnit, an essential US liberal voice, emails: “If they try to normalize, let us try to denormalize. Let us hold on to facts, truths, values, norms, arrangements that are going to be under siege. Let us not forget what happened and why.”
16 Find allies in unlikely places. One of my most surprising sources of support during my trial(s) was hard-right Brexiter David Davis. Find threads of connection and work from there.
17 There is such a thing as truth. There are facts and we can know them. From Tamsin Shaw, professor in philosophy at New York University: “‘Can the sceptic resist the tyrant?’ is one of the oldest questions in political philosophy. We can’t even fully recognise what tyranny is if we let the ruling powers get away with lying to us all.”
18 Plan. Silicon Valley doesn’t think in four-year election cycles. Elon Musk isn’t worrying about the midterms. He’s thinking about flying a SpaceX rocket to Mars and raping and pillaging its rare earth minerals before anyone else can get there. We need a 30-year road map out of this.
19 Take the piss. Humour is a weapon. Any man who feels the need to build a rocket is not overconfident about his masculinity. Work with that.
20 They are not gods. Tech billionaires are over-entitled nerds with the extraordinary luck of being born at exactly the right moment in history. Treat them accordingly.
There is much much more to say on all of the above and that’s my plan. But please do share this with anyone who needs to hear it.
How to Survive the Broligarchy: a new podcast
A month ago, I was feeling floored: at the moment in which everything I’ve been warning about for the last eight years suddenly became overwhelmingly real, I was also being dislodged from my journalistic home. The Guardian, my seat of operations for the last 20 years, the last nearly ten of which have been focussed squarely on this subject, has done a deal, in the face of fierce opposition from its journalists, to give away a core part of the organisation. More than 100 journalists will leave, including me.
This week, the Guardian confirmed that the last edition of the Observer would be April 20 and that my 20-year employment with the organisation would be terminated then. The same day, Tortoise Media, the new home of the Observer, wrote to tell me that they would not be offering me a contract. But now, instead of feeling floored, I feel energised. You’ll hear some of that energy, I hope, in this first episode of the new podcast that I made a pilot for this week. It’s embedded at the top of this newsletter and - when I figure out the backend - will be available on Apple and Spotify and everywhere else too. I have an idea, which I explore in this first episode with two people much smarter than me, that this might be the start of a journey towards creating an independent, open, collaborative, transparent form of ‘live’ journalism.
My investigation of big tech, power, politics, the weaponisation of data, foreign interference, Russian oligarchs and social media has always traversed subjects and specialisms. I’ve drawn on the expertise of so many people along the way and in trying to understand this moment, I realised they are not only the people I want to speak to now, they are also the expert voices that everyone needs to hear. My idea is to make these conversations public and to build a community - a feedback loop - contributing ideas and suggestions and, hopefully, networks of action.
I’ve been doing some of this work with the Citizens, the non-profit I founded back in 2020 (sign up to their newsletter here), but there is a small ray of hope: in the midst of the current crisis, independent media is in a huge moment of growth, and the green shoots of a non-corporate, non-oligarch-owned media system are springing up everywhere. I’m hugely grateful to the 55,000 people who’ve signed up to this newsletter so far but there’s so much more we can do.
I’d been kicking around this idea for a new podcast for the last few weeks and then a call with my friend, Claire Wardle, spurred me into action. Claire is a professor at Cornell, an Ivy League university in upstate New York, where she studies, as she puts it, “our crazy information environment”. I first met Claire when giving evidence to a parliamentary committee back in 2017, and we re-met at the TED conference in Vancouver in 2019, where we were both due to give talks and hung out in between paralysing bouts of fear and imposter syndrome.
That TED talk led to a years-long lawsuit for me. And Claire, who founded a non-profit called First Draft that co-ordinated newsrooms and researchers to fight mis- and disinformation, has also found herself under attack. She and more than 100 other researchers in the field have been subpoenaed by a congressional committee which accused them of being part of the ‘censorship industrial complex’.
It’s these sorts of attacks that are now coming for so many other people. My ‘How to Survive the Broligarchy’ column, above, was intended as both a handbook - how do we protect ourselves? - and a manifesto - how do we fight back against these companies? And that’s the ethos of this podcast too: bringing together a network of people who have the knowledge we need for this next stage.
Claire and I decided this first conversation should be about how the media is covering this moment and its inability to shake off the “business as normal” framing of the authoritarian takeover of the US government.
In the first episode I include a voice note from Roger McNamee, a tech investor - he introduced Mark Zuckerberg to Sheryl Sandberg - who’s now one of the most trenchant critics of both Silicon Valley and the media. And Mark Little, an Irish foreign correspondent turned tech entrepreneur (one of his claims to fame is having his startup acquired by Rupert Murdoch), who’s pioneered new media models, joined us to talk about solutions.
The best and most enjoyable journalism I’ve done in recent times is two investigative, narrative podcasts. Sergei & the Westminster Spy Ring debuted in December at number one in the Apple podcast chart and the BBC’s Stalked is currently sitting at the top of the true crime and series charts.
And what Mark pointed out, which I hadn’t thought about before, is that it’s the “process” of these real-time investigative podcasts that young listeners like. And it’s true that what we’re doing in Stalked is really punchy: this week, we name the suspect we believe to be the cyberstalker of Hannah, my ex-stepdaughter – something the police abjectly failed to do. In Sergei, we uncovered a UK government cover-up of foreign interference. We’re doing both of these live, transparently, and showing our workings.
As Mark teases it out, this is the impulse behind this podcast pilot too. It’s also a “true crime” story: democracy has been murdered and there’s a serial killer on the loose. It’s a race against time to prevent the perpetrator from devastating the US beyond repair and racking up a body count in Europe. (If you can’t or don’t want to listen to it, there’s a transcript here.)
If that all sounds a bit weird and experimental but also ambitious and unlikely, I’d have to agree. But the whole point is that we have entered a wholly dangerous new era and we need new ways of communicating, of doing journalism, of storytelling, of reaching new audiences. It may very well not work, in which case I’ll try something else, but I’d love your feedback in the comments below. If you have ideas for collaborations or building this network, you can email me at [email protected].
Throughout its venerable 268-year reign, The Onion has always made it a top priority to endorse the correct presidential candidates. From George Washington to Richard Nixon to Donald Trump, this institution’s highly respected editorial board has had its finger on the pulse, and has accurately backed the winner of every single national election in this country’s long and storied history.
Now, with our nation at a pivotal crossroads, The Onion’s editorial board faces its most difficult decision yet. That’s why we have chosen to officially endorse Joseph R. Biden for president of the United States.
To our loyal, handsome, and stunningly brilliant readers, please know that The Onion’s latest foray into the 2024 election does not come lightly. In these unprecedented times of misinformation and political violence, everyone from left-wing activists to Silicon Valley megadonors attempted to dissuade us from endorsing Joe Biden at this moment in time. Full Story
There are a lot of things I don't like about "modern retellings" of (usually Greek) myths - a fundamental misunderstanding of mythology, having little to actually do with the mythological figures and gods, using it as an excuse to shit on pagans like me, et cetera - but one of the most frustrating ones is that none of them are actually modern. Instead, they take the Walmart TERF approach to feminism and go "#girlboss!" without actually looking into the history of women during the time of those myths or understanding what it's supposed to do at all. They also don't even take place in the modern day - they're set in some sort of pseudo-Ancient Greece.
Like, okay, here. Let me outline what I'd consider a modern retelling of a myth, using the Perseus myth as an example.
Perseus is a college-aged young man still living with his mom, Danae, in the "big city" (fuck it, let's say Springfield, MO) and trying to make ends meet because they're both working two part-time jobs because none of the jobs actually want to pay for insurance or retirement or whatever. Danae sometimes meets with her best friend and coworker Clymene and Clymene's husband Dictys, who live out in the country and go fishing and hunting during various times of year. Dictys has a brother named Polydectes who is the grown-man version of a Silicon Valley tech bro who's, I don't know, into crypto and shit.
While Polydectes is living on Dictys's couch and ranting about how he's totally gonna be a rich Wall Street executive some day, he sees Danae talking with Clymene while they gut fish and is like "hot chick, gonna stalk her" and is all creepy about it. Perseus is not about that shit, so he starts trying to find a way to get Polydectes to back off.
At this point, any number of things could happen. If you want a girlboss Medusa story, she could be, I don't know, a deep web or black market assassin-for-hire and Perseus scrounges up money for it. If you want something more lighthearted and silly, maybe this is taking place in what is essentially a Yugioh-style world where the fate of things lands on card games and Perseus uses a Medusa-esque card to kick Polydectes's ass. Medusa could even just be a coworker of Danae and Clymene and overhear them bitching at work about Polydectes, so she goes to Perseus like "Dude, do you want me to help take care of that guy messing with your mom?" Literally anything could happen at this point.
You don't even have to erase Andromeda! She could be anything from a classmate of Perseus's that he helps out to his coworker that he protects from creeps to...well, again, literally anything! It's a modern world, she's got all kinds of possibilities!
See? A modern retelling would actually be cool as shit if people paid attention to the "modern" part!
the fucked up thing about every corporation's executives being like "we are going to use AI and other new technologies to replace you it's inevitable sorry" is that
1) usually experts in AI and these new technologies will tell you that it CANT replace every worker and has its own challenges and requires new and different kinds of workers to make it functional and
2) it is not the fucking technology putting people out of work or ruining how the system functions. it is PEOPLE (executives) making CHOICES to make it that way, and blaming the TECHNOLOGY. moving to adopt new technology in sustainable and realistic ways requires money, time, and long-term investment, which executives - just trying to show exponential growth to wall street at the next shareholder meeting for a few years before they take a nice severance package and hop to the next company or retire - don't give a shit enough to do. they see a new toy, a new bauble, that some silicon valley idiot tells them will reduce costs and increase output, they sell the lie to their shareholders, they rinse and repeat. it is CHOICES. BY PEOPLE. NOT THE TECHNOLOGY. it never has been in the history of human innovation.
Local libraries struggling as book publishers charge three times as much for digital books as physical ones | Fortune
#Local libraries struggling as book publishers charge 3× as much for digital books as physical ones—and they don’t even get to keep them#THE ONE THOUSANDTH RIPOFF FROM SILICON VALLEY AND HIGH TECH COMPANIES#We Need Physical Books Not Just Because Of Costs But E Books Can Be Tampered With#I Believe E Books Are A Way To Supress History#Amazon#Kindle#Libraries#Librarian
The history of computing is one of innovation followed by scale up which is then broken by a model that “scales out”—when a bigger and faster approach is replaced by smaller, more numerous approaches. Mainframe->Mini->Micro->Mobile, Big iron->Distributed computing->Internet, Cray->HPC->Intel/CISC->ARM/RISC, OS/360->VMS->Unix->Windows NT->Linux, and on and on. You can see this at these macro levels, or you can see it at the micro level when it comes to subsystems from networking to storage to memory. The past 5 years of AI have been bigger models, more data, more compute, and so on. Why? Because I would argue the innovation was driven by the cloud hyperscale companies and they were destined to take the approach of doing more of what they already did. They viewed data for training and huge models as their way of winning and their unique architectural approach. The fact that other startups took a similar approach is just Silicon Valley at work—the people move and optimize for different things at a micro scale without considering the larger picture. See the sociological and epidemiological term small area variation. They look to do what they couldn’t do at their previous efforts or what the previous efforts might have been overlooking.
- DeepSeek Has Been Inevitable and Here's Why (History Tells Us) by Steven Sinofsky
Silicon Valley's Parasite Culture
From Ted Gioia's substack -- and I do realize the irony of reposting a substack post about parasitical behavior as content on my Tumblr yes -- that said, I really want people to read this
...For the first time in history, the Forbes list of billionaires is filled with individuals who got rich via parasitical business strategies—creating almost nothing, but gorging themselves on the creativity of others. That’s how you get to the top in the digital age. Instead of US Steel, it’s Us steal. Instead of IBM, it’s IB Robbing U. But when parasites get too strong, they risk killing their hosts.
Recall that only ten percent of animal species are parasites. What happens if that number grows to 30% or 50% or 70%? That must have catastrophic consequences, no?

This is precisely the situation in the digital culture right now. Google’s success in leeching off newspapers puts newspapers out of business. Musicians earn less and less, even as Spotify makes more and more. Hollywood is collapsing because it can’t compete with free video made by content providers.

It’s no coincidence that these parasite platforms are the same companies investing heavily in AI. They must do this because even they understand that they are killing their hosts. When the host dies, AI-generated content can replace human creativity. Or—to be blunt about it—the host will die because of AI-generated content. And then the web billionaires won’t even need to toss those few shekels at artists. It’s every parasite’s dream. The host can die, but the leech still lives on!

But there’s one catch. Training AI requires the largest parasitical theft of intellectual property in history. Everything now gets seized and sucked dry. No pirate in history has pilfered with such ambition and audacity.
now, I think we are finding that there are diminishing returns on the AI training at this point (in this gen of the technology at least) such that they are not able to replace human creativity. But if they could, they would, is the point. And when we talk about AI we need to address the parasitical business models that make it an inevitability.
What kind of bubble is AI?
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
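Doctorow's value/risk grid can be sketched as a toy classification. The example applications and their quadrant placements come from the paragraphs above; everything else here (the names, the labels, the `survives_bubble` rule) is purely illustrative, not data from the column:

```python
# Toy sketch of the value / risk-tolerance grid described above.
# Placements follow the column's examples; none of this is market data.
apps = {
    "D&D character art":               ("low",  "tolerant"),
    "SEO spam text":                   ("low",  "tolerant"),
    "scene narration for blind users": ("low",  "moderate"),
    "self-driving taxis":              ("high", "intolerant"),
    "radiology triage":                ("high", "intolerant"),
}

def survives_bubble(value: str, risk_tolerance: str) -> bool:
    """The column's argument: once investor subsidies dry up, an application
    only pencils out if it is BOTH high-value and risk-tolerant."""
    return value == "high" and risk_tolerance == "tolerant"

survivors = [name for name, (v, r) in apps.items() if survives_bubble(v, r)]
print(survivors)  # prints [] - for these examples the winning quadrant is empty
```

The empty result is the point of the argument: the quadrant that could pay the bills has, on this account, almost nothing in it.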
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, it could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals a tool that reduces the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value, and nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/