#silicon valley history
frank-olivier · 2 months ago
Text
The Birth of an Industry: Fairchild’s Pivotal Role in Shaping Silicon Valley
In the late 1950s, the Santa Clara Valley of California witnessed a transformative convergence of visionary minds, daring entrepreneurship, and groundbreaking technological advancements. At the heart of this revolution was Fairchild Semiconductor, a pioneering company whose innovative spirit, entrepreneurial ethos, and technological breakthroughs not only defined the burgeoning semiconductor industry but also indelibly shaped the region’s evolution into the world-renowned Silicon Valley.
A seminal 1967 promotional film, featuring Dr. Harry Sello and Dr. Jim Angell, offers a fascinating glimpse into Fairchild’s revolutionary work on integrated circuits (ICs), a technology that would soon become the backbone of the emerging tech industry. By demystifying IC design, development, and applications, Fairchild exemplified its commitment to innovation and knowledge sharing, setting a precedent for the collaborative and open approach that would characterize Silicon Valley’s tech community. Specifically, Fairchild’s introduction of the planar process and the first monolithic IC in 1959 marked a significant technological leap, with the former enhancing semiconductor manufacturing efficiency by up to 90% and the latter paving the way for the miniaturization of electronic devices.
Beyond its technological feats, Fairchild’s entrepreneurial ethos, nurtured by visionary founders Robert Noyce and Gordon Moore, served as a blueprint for subsequent tech ventures. The company’s talent attraction and nurturing strategies, including competitive compensation packages and intrapreneurship encouragement, helped establish the region as a magnet for innovators and risk-takers. This, in turn, laid the foundation for the dense network of startups, investors, and expertise that defines Silicon Valley’s ecosystem today. Notably, Fairchild’s presence spurred the development of supporting infrastructure, including the expansion of Stanford University’s research facilities and the establishment of specialized supply chains, further solidifying the region’s position as a global tech hub. By 1965, the area witnessed a surge in tech-related employment, with jobs increasing by over 300% compared to the previous decade, a direct testament to Fairchild’s catalyzing effect.
The trajectory of Fairchild Semiconductor, including its challenges and eventual transformation, intriguingly parallels the broader narrative of Silicon Valley’s growth. The company’s decline under later ownership and its subsequent re-emergence underscore the region’s inherent capacity for reinvention and adaptation. This resilience, initially embodied by Fairchild’s pioneering spirit, has become a hallmark of Silicon Valley, enabling the region to navigate the rapid evolution of the tech industry with unparalleled agility.
What future innovations will emerge from the valley, leveraging the foundations laid by pioneers like Fairchild, to shape the global technological horizon in the decades to come?
Dr. Harry Sello and Dr. Jim Angell: The Design and Development Process of the Integrated Circuit (Fairchild Semiconductor Corporation, October 1967)
youtube
Robert Noyce: The Development of the Integrated Circuit and Its Impact on Technology and Society (The Computer Museum, Boston, May 1984)
youtube
Tuesday, December 3, 2024
8 notes · View notes
antiquesintheattic · 24 days ago
Text
being on here was so easy when i didn’t have my blog personalized at all and i just used the web version to look at the silicon valley hbo tag
12 notes · View notes
racefortheironthrone · 2 years ago
Note
Hey, what does disruptor mean? I saw it when looking at your answers. I’ve also seen people joke about it on twitter but I can’t find a meaning to it.
It's a term I personally loathe, but I'm willing to do some recent cultural/intellectual history to explain where it came from and what it means.
The term disruptor as it's commonly used today comes out of the business world, more specifically the high tech sector clustered in Silicon Valley. Originally coined as "disruptive innovation" by business school professor Clayton Christensen in the mid-to-late 90s, the idea was that certain new businesses (think your prototypical startup) have a greater tendency to develop innovative technologies and business models that radically destabilize established business models, markets, and large corporations - and in the process, help to speed up economic and technological progress.
While Christensen's work was actually about business models and firm-level behavior, over time this concept mutated to focus on the individual entrepreneur/inventor/founder figure of the "disruptor," as part of the lionization of people like Steve Jobs or Mark Zuckerberg or Elon Musk, or firms like Lyft, Uber, WeWork, Theranos, etc. It also mutated into a general belief that "disrupting" markets and, increasingly, social institutions is how society will and should progress.
I find these ideas repellant. First of all, when it comes to the actual business side of things, I think it mythologizes corporate executives as creative geniuses by attributing credit for innovations actually created by the people they employ. Elon Musk didn't create electric cars or reusable rockets, Steve Jobs didn't design any computers or program any OSes, but because they're considered "disruptors," we pretend that they did. This has a strong effect on things like support for taxing the rich - because there is this popular image of the "self-made billionaire" as someone who "earned" their wealth through creating "disruptive" companies or technologies, there is more resistance to taxing or regulating the mega-wealthy than would otherwise be the case.
Even more importantly, treating "disruptors" like heroes and "disruption" as a purely good thing tends to make people stop thinking about whether disruption to a given industry is actually a good thing, whether what tech/Silicon Valley/startup firms are doing is actually innovative, what the economic and social costs of the disruption are, and who pays them. Because when we look at a bunch of high-profile case studies, it often turns out to be something of a case of smoke and mirrors.
To take ridesharing as an example, Lyft and Uber and similar companies aren't actually particularly innovative. Yes, they have apps that connect riders to drivers, but that's not actually that different from the old school method of using the phone to call up a livery cab company. There's a lot of claims about how the apps improve route planning or the availability of drivers or bring down prices, but they're usually overblown: route planning software is pretty common (think Google Maps), when you actually look at how Lyft and Uber create availability, it's by flooding the market with large numbers of new drivers, and when you look at how they got away with low prices, it was usually by spending billions upon billions of venture capital money on subsidizing their rides.
Moreover, this "disruption" has a pretty nasty dark side. To start with, Lyft and Uber's business strategy is actually a classic 19th century monopoly strategy dressed up in 21st century rhetoric: the "low prices" had nothing to do with innovative practices or new technology, it was Lyft and Uber pulling the classic move of deliberately selling at a loss to grab market share from the competition, at which point they started raising their prices on consumers. Availability of drivers was accomplished by luring way too many new drivers into the labor market with false promises of making high wages in their spare time, but when the over-supply of drivers inevitably caused incomes to decline, huge numbers of rideshare drivers found themselves trapped by auto debts and exploited by the companies' taking a significant chunk of their earnings, using the threat of cutting them off from the app to cow any resistance. And above all, Lyft and Uber's "disruption" often came down to a willful refusal to abide by pre-existing regulations meant to ensure that drivers could earn a living wage, that consumers would be protected in the case of accidents or from the bad behavior of drivers, etc.

As a policy historian, however, I find the extension of "disruption" into social institutions the most troubling. Transportation, health care, education, etc. are absolutely vital for the functioning of modern society and are incredibly complex systems that require a lot of expertise and experience to understand, let alone change. Letting a bunch of billionaires impose technocratic "reforms" on them from above, simply because they say they're really smart or because they donate a bunch of money, is a really bad idea - especially because when we see what the "disruptors" actually propose and/or do, it often shows them to be very ordinary (if not actively stupid) people who don't really know what they're doing.
Elon Musk's Loop is an inherently worse idea than mass transit. His drive for self-driving cars is built on lies. Pretty much all of the Silicon Valley firms that have tried to "disrupt" in the area of transportation end up reinventing the wheel and proposing the creation of buses or trolleys or subways.
Theranos was a giant fraud that endangered the lives of thousands in pursuit of an impossible goal that, even if it could have been achieved, wouldn't have made much of a difference in people's lives compared to other, more fruitful areas of biotech and medical research.
From Bill Gates to Mark Zuckerberg, Silicon Valley billionaires have plunged huge amounts of philanthropy dollars into all kinds of interventions in public education, from smaller classrooms to MOOCs to teacher testing to curriculum reform to charter schools. The track record of these reforms has been pretty uniformly abysmal, because it turns out that educational outcomes are shaped by pretty much every social force you can think of and educational systems are really complex and difficult to measure.
So yeah, fuck disruptors.
111 notes · View notes
thoughtportal · 2 years ago
Link
When tech whizkids are caught behaving badly, they're just being "brilliant jerks." And the figure of the charismatic-but-bratty genius inventor is everywhere these days. We look at how the isolated, tormented mad scientist in science fiction evolved into the sexy asshole that everyone wants to be. And we talk to Christopher Cantwell, co-creator of Halt and Catch Fire and recently writer of the Iron Man comic, about how Tony Stark has changed.
3 notes · View notes
teachanarchy · 1 year ago
Text
Why Silicon Valley is here
youtube
0 notes
mostlysignssomeportents · 2 years ago
Text
Tesla's Dieselgate
Elon Musk lies a lot. He lies about being a “utopian socialist.” He lies about being a “free speech absolutist.” He lies about which companies he founded:
https://www.businessinsider.com/tesla-cofounder-martin-eberhard-interview-history-elon-musk-ev-market-2023-2

He lies about being the “chief engineer” of those companies:
https://www.quora.com/Was-Elon-Musk-the-actual-engineer-behind-SpaceX-and-Tesla
He lies about really stupid stuff, like claiming that comsats that share the same spectrum will deliver steady broadband speeds as they add more users who each get a narrower slice of that spectrum:
https://www.eff.org/wp/case-fiber-home-today-why-fiber-superior-medium-21st-century-broadband
The fundamental laws of physics don’t care about this bullshit, but people do. The comsat lie convinced a bunch of people that pulling fiber to all our homes is literally impossible — as though the electrical and phone lines that come to our homes now were installed by an ancient, lost civilization. Pulling new cabling isn’t a mysterious art, like embalming pharaohs. We do it all the time. One of the poorest places in America installed universal fiber with a mule named “Ole Bub”:
https://www.newyorker.com/tech/annals-of-technology/the-one-traffic-light-town-with-some-of-the-fastest-internet-in-the-us
Previous tech barons had “reality distortion fields,” but Musk just blithely contradicts himself and pretends he isn’t doing so, like a budget Steve Jobs. There’s an entire site devoted to cataloging Musk’s public lies:
https://elonmusk.today/
But while Musk lacks the charm of earlier Silicon Valley grifters, he’s much better than they ever were at running a long con. For years, he’s been promising “full self driving…next year.”
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
He hasn’t delivered, but he keeps claiming he has, making Teslas some of the deadliest cars on the road:
https://www.washingtonpost.com/technology/2023/06/10/tesla-autopilot-crashes-elon-musk/
Tesla is a giant shell-game masquerading as a car company. The important thing about Tesla isn’t its cars, it’s Tesla’s business arrangement, the Tesla-Financial Complex:
https://pluralistic.net/2021/11/24/no-puedo-pagar-no-pagara/#Rat
Once you start unpacking Tesla’s balance sheets, you start to realize how much the company depends on government subsidies and tax-breaks, combined with selling carbon credits that make huge, planet-destroying SUVs possible, under the pretense that this is somehow good for the environment:
https://pluralistic.net/2021/04/14/for-sale-green-indulgences/#killer-analogy
But even with all those financial shenanigans, Tesla’s got an absurdly high valuation, soaring at times to 1600x its profitability:
https://pluralistic.net/2021/01/15/hoover-calling/#intangibles
That valuation represents a bet on Tesla’s ability to extract ever-higher rents from its customers. Take Tesla’s batteries: you pay for the battery when you buy your car, but you don’t own that battery. You have to rent the right to use its full capacity, with Tesla reserving the right to reduce how far you go on a charge based on your willingness to pay:
https://memex.craphound.com/2017/09/10/teslas-demon-haunted-cars-in-irmas-path-get-a-temporary-battery-life-boost/
That’s just one of the many rent-a-features that Tesla drivers have to shell out for. You don’t own your car at all: when you sell it as a used vehicle, Tesla strips out these features you paid for and makes the next driver pay again, reducing the value of your used car and transferring it to Tesla’s shareholders:
https://www.theverge.com/2020/2/6/21127243/tesla-model-s-autopilot-disabled-remotely-used-car-update
To maintain this rent-extraction racket, Tesla uses DRM that makes it a felony to alter your own car’s software without Tesla’s permission. This is the root of all autoenshittification:
https://pluralistic.net/2023/07/24/rent-to-pwn/#kitt-is-a-demon
This is technofeudalism. Whereas capitalists seek profits (income from selling things), feudalists seek rents (income from owning the things other people use). If Tesla were a capitalist enterprise, then entrepreneurs could enter the market and sell mods that let you unlock the functionality in your own car:
https://pluralistic.net/2020/06/11/1-in-3/#boost-50
But because Tesla is a feudal enterprise, capitalists must first secure permission from the fief, Elon Musk, who decides which companies are allowed to compete with him, and how.
Once a company owns the right to decide which software you can run, there’s no limit to the ways it can extract rent from you. Blocking you from changing your device’s software lets a company run overt scams on you. For example, they can block you from getting your car independently repaired with third-party parts.
But they can also screw you in sneaky ways. Once a device has DRM on it, Section 1201 of the DMCA makes it a felony to bypass that DRM, even for legitimate purposes. That means that your DRM-locked device can spy on you, and because no one is allowed to explore how that surveillance works, the manufacturer can be incredibly sloppy with all the personal info they gather:
https://www.cnbc.com/2019/03/29/tesla-model-3-keeps-data-like-crash-videos-location-phone-contacts.html
All kinds of hidden anti-features can lurk in your DRM-locked car, protected from discovery, analysis and criticism by the illegality of bypassing the DRM. For example, Teslas have a hidden feature that lets them lock out their owners and summon a repo man to drive them away if you have a dispute about a late payment:
https://tiremeetsroad.com/2021/03/18/tesla-allegedly-remotely-unlocks-model-3-owners-car-uses-smart-summon-to-help-repo-agent/
DRM is a gun on the mantlepiece in Act I, and by Act III, it goes off, revealing some kind of ugly and often dangerous scam. Remember Dieselgate? Volkswagen created a line of demon-haunted cars: if they thought they were being scrutinized (by regulators measuring their emissions), they switched into a mode that traded performance for low emissions. But when they believed themselves to be unobserved, they reversed this, emitting deadly levels of NOx but delivering superior mileage.
The conversion of the VW diesel fleet into mobile gas-chambers wouldn’t have been possible without DRM. DRM adds a layer of serious criminal jeopardy to anyone attempting to reverse-engineer and study any device, from a phone to a car. DRM let Apple claim to be a champion of its users’ privacy even as it spied on them from asshole to appetite:
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
Now, Tesla is having its own Dieselgate scandal. A stunning investigation by Steve Stecklow and Norihiko Shirouzu for Reuters reveals how Tesla was able to create its own demon-haunted car, which systematically deceived drivers about its driving range, and the increasingly desperate measures the company turned to as customers discovered the ruse:
https://www.reuters.com/investigates/special-report/tesla-batteries-range/
The root of the deception is very simple: Tesla mis-sells its cars by falsely claiming ranges that those cars can’t attain. Every person who ever bought a Tesla was defrauded.
But this fraud would be easy to detect. If you bought a Tesla rated for 353 miles on a charge, but the dashboard range predictor told you that your fully charged car could only go 150 miles, you’d immediately figure something was up. So your Tesla tells another lie: the range predictor tells you that you can go 353 miles.
But again, if the car continued to tell you it has 203 miles of range when it was about to run out of charge, you’d figure something was up pretty quick — like, the first time your car ran out of battery while the dashboard cheerily informed you that you had 203 miles of range left.
So Teslas tell a third lie: when the battery charge reaches about 50%, the fake range is replaced with the real one. That way, drivers aren’t getting mass-stranded by the roadside, and the scam can continue.
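To make the three-lie pattern concrete, here is a toy sketch of the behavior the Reuters report describes. Everything in it is illustrative: the real firmware is not public, and the specific figures, the linear scaling, and the exact 50% threshold are assumptions, not Tesla's actual logic.

```python
def displayed_range(true_range_miles, rated_range_miles, charge_fraction):
    """Toy model of the range display described above.

    Above ~50% charge, show an optimistic estimate scaled from the
    *rated* range; at or below 50%, switch to an estimate scaled from
    what the car can actually do. All values are illustrative.
    """
    if charge_fraction > 0.5:
        return rated_range_miles * charge_fraction  # the flattering number
    return true_range_miles * charge_fraction       # the honest number

# A car rated at 353 miles that can really manage about 250:
print(displayed_range(250, 353, 1.0))   # full charge: displays 353.0
print(displayed_range(250, 353, 0.25))  # low charge: displays 62.5
```

Note the discontinuity at the threshold: in this toy model, the display jumps from roughly 180 miles at 51% charge down to 125 miles at 50%, which is the kind of switch drivers would only notice once their real-world range fell far short of the sticker figure.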
But there’s a new problem: drivers whose cars are rated for 353 miles but can’t go anything like that far on a full charge naturally assume that something is wrong with their cars, so they start calling Tesla service and asking to have the car checked over.
This creates a problem for Tesla: those service calls can cost the company $1,000, and of course, there’s nothing wrong with the car. It’s performing exactly as designed. So Tesla created its boldest fraud yet: a boiler-room full of anti-salespeople charged with convincing people that their cars weren’t broken.
This new unit — the “diversion team” — was headquartered in a Nevada satellite office, which was equipped with a metal xylophone that would be rung in triumph every time a Tesla owner was successfully conned into thinking that their car wasn’t defrauding them.
When a Tesla owner called this boiler room, the diverter would run remote diagnostics on their car, then pronounce it fine, and chide the driver for having energy-hungry driving habits (shades of Steve Jobs’s “You’re holding it wrong”):
https://www.wired.com/2010/06/iphone-4-holding-it-wrong/
The drivers who called the Diversion Team weren’t just lied to, they were also punished. The Tesla app was silently altered so that anyone who filed a complaint about their car’s range was no longer able to book a service appointment for any reason. If their car malfunctioned, they’d have to request a callback, which could take several days.
Meanwhile, the diverters on the diversion team were instructed not to inform drivers if the remote diagnostics they performed detected any other defects in the cars.
The diversion team had a 750 complaint/week quota: to juke this stat, diverters would close the case for any driver who failed to answer the phone when they were eventually called back. The center received 2,000+ calls every week. Diverters were ordered to keep calls to five minutes or less.
Eventually, diverters were ordered to cease performing any remote diagnostics on drivers’ cars: a source told Reuters that “Thousands of customers were told there is nothing wrong with their car” without any diagnostics being performed.
Predicting EV range is an inexact science as many factors can affect battery life, notably whether a journey is uphill or downhill. Every EV automaker has to come up with a figure that represents some kind of best guess under a mix of conditions. But while other manufacturers err on the side of caution, Tesla has the most inaccurate mileage estimates in the industry, double the industry average.
Other countries’ regulators have taken note. In Korea, Tesla was fined millions and Elon Musk was personally required to state that he had deceived Tesla buyers. The Korean regulator found that the true range of Teslas under normal winter conditions was less than half of the claimed range.
Now, many companies have been run by malignant narcissists who lied compulsively — think of Thomas Edison, archnemesis of Nikola Tesla himself. The difference here isn’t merely that Musk is a deeply unfit monster of a human being — but rather, that DRM allows him to defraud his customers behind a state-enforced opaque veil. The digital computers at the heart of a Tesla aren’t just demons haunting the car, changing its performance based on whether it believes it is being observed — they also allow Musk to invoke the power of the US government to felonize anyone who tries to peer into the black box where he commits his frauds.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/07/28/edison-not-tesla/#demon-haunted-world
This Sunday (July 30) at 1530h, I’m appearing on a panel at Midsummer Scream in Long Beach, CA, to discuss the wonderful, award-winning “Ghost Post” Haunted Mansion project I worked on for Disney Imagineering.
Image ID [A scene out of an 11th century tome on demon-summoning called 'Compendium rarissimum totius Artis Magicae sistematisatae per celeberrimos Artis hujus Magistros. Anno 1057. Noli me tangere.' It depicts a demon tormenting two unlucky would-be demon-summoners who have dug up a grave in a graveyard. One summoner is held aloft by his hair, screaming; the other screams from inside the grave he is digging up. The scene has been altered to remove the demon's prominent, urinating penis, to add in a Tesla supercharger, and a red Tesla Model S nosing into the scene.]
Image: Steve Jurvetson (modified) https://commons.wikimedia.org/wiki/File:Tesla_Model_S_Indoors.jpg
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
8K notes · View notes
indyfilmlibrary · 2 years ago
Text
The Pleasants Effect (2020) – 2 stars
Re-editing the footage of someone long deceased, to construct whole sentences, undermines what little credibility the film's claims had. #history #invention #weather #sanfrancisco
Director: Pete Levine
Running time: 36 mins

One of the things I often find hardest about reviews for Indy Film Library is avoiding playing ‘fantasy filmmaker’. While I would argue there is no such thing as objective criticism in this game – we all bring something of ourselves to the movies, and those aspects of our personality inescapably affect how we interact with the film in front of us –…
View On WordPress
0 notes
amplexadversary · 2 years ago
Text
Still would rather buy three machines that are really damn good at three separate things than one machine that sucks at all of them for four times the price of any one of the others. Geez is that the goal these device designers have? Shackle their consumer base to one line (theirs) as if we don’t already have to deal with that with our physical fucking meat bodies? I really really hope that isn’t the fucking endgame here.
0 notes
theonion · 1 month ago
Text
Throughout its venerable 268-year reign, The Onion has always made it a top priority to endorse the correct presidential candidates. From George Washington to Richard Nixon to Donald Trump, this institution’s highly respected editorial board has had its finger on the pulse, and has accurately backed the winner of every single national election in this country’s long and storied history. 
Now, with our nation at a pivotal crossroads, The Onion’s editorial board faces its most difficult decision yet. That’s why we have chosen to officially endorse Joseph R. Biden for president of the United States.
To our loyal, handsome, and stunningly brilliant readers, please know that The Onion’s latest foray into the 2024 election does not come lightly. In these unprecedented times of misinformation and political violence, everyone from left-wing activists to Silicon Valley megadonors attempted to dissuade us from endorsing Joe Biden at this moment in time.
250 notes · View notes
jasper-the-menace · 1 month ago
Text
There are a lot of things I don't like about "modern retellings" of (usually Greek) myths - a fundamental misunderstanding of mythology, having little to actually do with the mythological figures and gods, using it as an excuse to shit on pagans like me, et cetera - but one of the most frustrating ones is that none of them are actually modern. Instead, they take the Walmart TERF approach to feminism and go "#girlboss!" without actually looking into the history of women during the time of those myths or understanding what it's supposed to do at all. They also don't even take place in the modern day - they're set in some sort of pseudo-Ancient Greece.
Like, okay, here. Let me outline what I'd consider a modern retelling of a myth, using the Perseus myth as an example.
Perseus is a college-aged young man still living with his mom, Danae, in the "big city" (fuck it, let's say Springfield, MO) and trying to make ends meet because they're both working two part-time jobs because none of the jobs actually want to pay for insurance or retirement or whatever. Danae sometimes meets with her best friend and coworker Clymene and Clymene's husband Dictys, who live out in the country and go fishing and hunting during various times of year. Dictys has a brother named Polydectes who is the grown-man version of a Silicon Valley tech bro who's, I don't know, into crypto and shit.
While Polydectes is living on Dictys's couch and ranting about how he's totally gonna be a rich Wall Street executive some day, he sees Danae talking with Clymene while they gut fish and is like "hot chick, gonna stalk her" and is all creepy about it. Perseus is not about that shit, so he starts trying to find a way to get Polydectes to back off.
At this point, any number of things could happen. If you want a girlboss Medusa story, she could be, I don't know, a deep web or black market assassin-for-hire and Perseus scrounges up money for it. If you want something more lighthearted and silly, maybe this is taking place in what is essentially a Yugioh-style world where the fate of things lands on card games and Perseus uses a Medusa-esque card to kick Polydectes's ass. Medusa could even just be a coworker of Danae and Clymene and overhear them bitching at work about Polydectes so she goes to Perseus like "Dude, do you want me to help take care of that guy messing with your mom?" Literally anything could happen at this point.
You don't even have to erase Andromeda! She could be anything from a classmate of Perseus's that he helps out to his coworker that he protects from creeps to...well, again, literally anything! It's a modern world, she's got all kinds of possibilities!
See? A modern retelling would actually be cool as shit if people paid attention to the "modern" part!
121 notes · View notes
hashtagloveloses · 2 years ago
Text
the fucked up thing about every corporation's executives being like "we are going to use AI and other new technologies to replace you it's inevitable sorry" is that
1) usually experts in AI and these new technologies will tell you that it CANT replace every worker and has its own challenges and requires new and different kinds of workers to make it functional and
2) it is not the fucking technology putting people out of work or ruining how the system functions. it is PEOPLE (executives) making CHOICES to make it that way, and blaming the TECHNOLOGY. moving to adopt new technology in sustainable and realistic ways requires, money, time, and long term investment, which executives just trying to show exponential growth to wall street at the next shareholder meeting for a few years before they take a nice severance package and hop to the next company or retire, don't give a shit enough to do. they see a new toy, a new bauble, that some silicon valley idiot tells them will reduce costs and increase output, they sell the lie to their shareholders, they rinse and repeat. it is CHOICES. BY PEOPLE. NOT THE TECHNOLOGY. it never has been in the history of human innovation.
884 notes · View notes
msclaritea · 11 months ago
Text
Local libraries struggling as book publishers charge three times as much for digital books as physical ones | Fortune
2 notes · View notes
seat-safety-switch · 2 years ago
Text
When you’re standing on the outside, it may seem bizarre to you that rocket scientists aren’t paid more. They are literally rocket scientists, after all, the only people in the world who are not allowed to say “it’s not rocket science” at work. And yet they are often paid somewhat less than a regular old hard-hatted engineer, involved in expensive (and fragile) projects to construct overpriced pedestrian bridges for overpriced private universities. Why is that?
One reason is that the rocket scientists don’t pose much of a threat to management. There’s more of them than there are jobs available building rockets. If they quit, then the bosses will just hire slightly dumber rocket scientists, and pay them even less. Rockets will still go up, and they’ll go where they want to, because of the well-documented history and best practices of the industry. They can keep coasting on this for a little while, maybe even decades, with a barely-perceptible drop in quality. Maybe it’s already happened. Maybe tomorrow is when we find out what the first part of a rocket that has been quality-faded into oblivion is. Hope you don’t live under the flight path.
There is, of course, another approach, and that’s “being a dirtbag.” I myself have a lot of experience in this particular field, and I think it is one of those multi-skilled disciplines that can expand into rocket science if so required. The aforementioned best practices of this industry have been written down and documented so well, in fact, that just some asshole off the street like myself can check them out of the library (using an assumed name, of course,) read them, and know generally all that humanity has figured out over the last century about making rockets that don’t explode. Then, in the language of Silicon Valley influencers, I can “disrupt” the industry.
Of course, by “disrupt” I really mean grift. If management can’t really tell the difference between good rocket scientists and slightly less good ones, then it stands to reason that they’ll give completely bad ones the benefit of the doubt. I can get billions of dollars of venture capital for my space-flight startup, shoot a few Estes rockets into the ceiling of the cafeteria, and still pocket enough dough to be able to afford a base-model Honda Civic from the 1980s. It’s not brain surgery.
2K notes
rubyvroom · 4 months ago
Silicon Valley's Parasite Culture
From Ted Gioia's Substack -- and I do realize the irony of reposting a Substack post about parasitical behavior as content on my Tumblr, yes -- that said, I really want people to read this
...For the first time in history, the Forbes list of billionaires is filled with individuals who got rich via parasitical business strategies—creating almost nothing, but gorging themselves on the creativity of others. That’s how you get to the top in the digital age. Instead of US Steel, it’s Us steal. Instead of IBM, it’s IB Robbing U. But when parasites get too strong, they risk killing their hosts.
Recall that only ten percent of animal species are parasites. What happens if that number grows to 30% or 50% or 70%? That must have catastrophic consequences, no? This is precisely the situation in the digital culture right now. Google’s success in leeching off newspapers puts newspapers out of business. Musicians earn less and less, even as Spotify makes more and more. Hollywood is collapsing because it can’t compete with free video made by content providers.

It’s no coincidence that these parasite platforms are the same companies investing heavily in AI. They must do this because even they understand that they are killing their hosts. When the host dies, AI-generated content can replace human creativity. Or—to be blunt about it—the host will die because of AI-generated content. And then the web billionaires won’t even need to toss those few shekels at artists. It’s every parasite’s dream. The host can die, but the leech still lives on!

But there’s one catch. Training AI requires the largest parasitical theft of intellectual property in history. Everything now gets seized and sucked dry. No pirate in history has pilfered with such ambition and audacity.
Now, I think we are finding that there are diminishing returns on the AI training at this point (in this gen of the technology, at least), such that they are not able to replace human creativity. But if they could, they would; that's the point. And when we talk about AI, we need to address the parasitical business models that make it an inevitability.
55 notes
mostlysignssomeportents · 1 year ago
What kind of bubble is AI?
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, Perl and Python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low-value and very risk-tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk-tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk-tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
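The 2x2 grid above can be sketched as a toy classification in Python. This is my own illustrative sketch, not anything from the column: the `AIApp` type and `quadrant` helper are hypothetical names, and the boolean placements simply restate the column's examples.

```python
from dataclasses import dataclass

@dataclass
class AIApp:
    name: str
    high_value: bool     # will customers pay a lot for it?
    risk_tolerant: bool  # can the output be imperfect?

def quadrant(app: AIApp) -> str:
    # Map an application onto the value / risk-tolerance grid.
    value = "high-value" if app.high_value else "low-value"
    risk = "risk-tolerant" if app.risk_tolerant else "risk-intolerant"
    return f"{value}, {risk}"

# The column's examples, placed on the grid:
apps = [
    AIApp("D&D character art", high_value=False, risk_tolerant=True),
    AIApp("SEO spam generator", high_value=False, risk_tolerant=True),
    AIApp("self-driving cars", high_value=True, risk_tolerant=False),
    AIApp("radiology screening", high_value=True, risk_tolerant=False),
]

for app in apps:
    print(f"{app.name}: {quadrant(app)}")
```

The column's core claim is visible in the output: nothing lands in the high-value, risk-tolerant quadrant.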
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic by AI integration is having that process fail entirely because the AI suddenly disappeared, a collapse too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes