#bruce schneier
Text
The true post-cyberpunk hero is a noir forensic accountant
![Tumblr media](https://64.media.tumblr.com/b7fd75c5ad285e2e7f01ce58f3abbd92/d27a585c5ccf35a1-1a/s540x810/5fb2948fd071ae8ac098cd5eeadc1dc244a12116.jpg)
I'm touring my new, nationally bestselling novel The Bezzle! Catch me TOMORROW (Apr 17) in CHICAGO, then Torino (Apr 21), Marin County (Apr 27), Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
I was reared on cyberpunk fiction, and I ended up spending 25 years at my EFF day-job working at the weird edge of tech and human rights, even as I wrote sf that tried to fuse my love of cyberpunk with my urgent, lifelong struggle over who computers do things for and who they do them to.
That makes me an official "post-cyberpunk" writer (TM). Don't take my word for it: I'm in the canon:
https://tachyonpublications.com/product/rewired-the-post-cyberpunk-anthology-2/
One of the editors of that "post-cyberpunk" anthology was John Kessel, who is, not coincidentally, the first writer to expose me to the power of literary criticism to change the way I felt about a novel, both as a writer and a reader:
https://locusmag.com/2012/05/cory-doctorow-a-prose-by-any-other-name/
It was Kessel's 2004 Foundation essay, "Creating the Innocent Killer: Ender's Game, Intention, and Morality," that helped me understand litcrit. Kessel expertly surfaces the subtext of Card's Ender's Game and connects it to Card's politics. In so doing, he completely reframed how I felt about a book I'd read several times and had considered a favorite:
https://johnjosephkessel.wixsite.com/kessel-website/creating-the-innocent-killer
This is a head-spinning experience for a reader, but it's even wilder to experience it as a writer. Thankfully, the majority of literary criticism about my work has been positive, but even then, discovering something that's clearly present in one of my novels, but which I didn't consciously include, is a (very pleasant!) mind-fuck.
A recent example: Blair Fix's review of my 2023 novel Red Team Blues which he calls "an anti-finance finance thriller":
https://economicsfromthetopdown.com/2023/05/13/red-team-blues-cory-doctorows-anti-finance-thriller/
Fix – a radical economist – perfectly captures the correspondence between my hero, the forensic accountant Martin Hench, and the heroes of noir detective novels. Namely, that a noir detective is a kind of unlicensed policeman, going to the places the cops can't go, asking the questions the cops can't ask, and thus solving the crimes the cops can't solve. What makes this noir is what happens next: the private dick realizes that these were places the cops didn't want to go, questions the cops didn't want to ask and crimes the cops didn't want to solve ("It's Chinatown, Jake").
Marty Hench – a forensic accountant who finds the money that has been disappeared through the cells of cleverly constructed spreadsheets – is an unlicensed tax inspector. He's finding the money the IRS can't find – only to be reminded, time and again, that this is money the IRS chooses not to find.
This is how the tax authorities work, after all. Anyone who followed the coverage of the big finance leaks knows that the most shocking revelation they contain is how stupid the ruses of the ultra-wealthy are. The IRS could prevent that tax fraud; they just choose not to. Not for nothing, I call the Martin Hench books "Panama Papers fanfic."
I've read plenty of noir fiction and I'm a long-term finance-leaks obsessive, but until I read Fix's article, it never occurred to me that a forensic accountant was actually squarely within the noir tradition. Hench's perfect noir fit is either a happy accident or the result of a subconscious intuition that I didn't know I had until Fix put his finger on it.
The second Hench novel is The Bezzle. It's been out since February, and I'm still touring with it (Chicago tonight! Then Turin, Marin County, Winnipeg, Calgary, Vancouver, etc). It's paying off – the book's a national bestseller.
Writing in his newsletter, Henry Farrell connects Fix's observation to one of his own, about the nature of "hackers" and their role in cyberpunk (and post-cyberpunk) fiction:
https://www.programmablemutter.com/p/the-accountant-as-cyberpunk-hero
Farrell cites Bruce Schneier's 2023 book, A Hacker’s Mind: How the Powerful Bend Society’s Rules and How to Bend Them Back:
https://pluralistic.net/2023/02/06/trickster-makes-the-world/
Schneier, a security expert, broadens the category of "hacker" to include anyone who studies systems with an eye to finding and exploiting their defects. Under this definition, the more fearsome hackers are "working for a hedge fund, finding a loophole in financial regulations that lets her siphon extra profits out of the system." Hackers work in corporate offices, or as government lobbyists.
As Henry says, hacking isn't intrinsically countercultural ("Most of the hacking you might care about is done by boring seeming people in boring seeming clothes"). Hacking reinforces – rather than undermines – power asymmetries ("The rich have far more resources to figure out how to gimmick the rules"). We are mostly not the hackers – we are the hacked.
For Henry, Marty Hench is a hacker (the rare hacker that works for the good guys), even though "he doesn’t wear mirrorshades or get wasted chatting to bartenders with Soviet military-surplus mechanical arms." He's a gun for hire, that most traditional of cyberpunk heroes, and while he doesn't stand against the system, he's not for it, either.
Henry's pinning down something I've been circling around for nearly 30 years: the idea that though "the street finds its own use for things," Wall Street and Madison Avenue are among the streets that might find those uses:
https://craphound.com/nonfic/street.html
Henry also connects Martin Hench to Marcus Yallow, the hero of my YA Little Brother series. I have tried to make this connection myself, opining that while Marcus is a character who is fighting to save an internet that he loves, Marty is living in the ashes of the internet he lost:
https://pluralistic.net/2023/05/07/dont-curb-your-enthusiasm/
But Henry's Marty-as-hacker notion surfaces a far more interesting connection between the two characters. Marcus is a vehicle for conveying the excitement and power of hacking to young readers, while Marty is a vessel for older readers who know the stark terror of being hacked, by the sadistic wolves who're coming for all of us:
https://www.youtube.com/watch?v=I44L1pzi4gk
Both Marcus and Marty are explainers, as am I. Some people say that exposition makes for bad narrative. Those people are wrong:
https://maryrobinettekowal.com/journal/my-favorite-bit/my-favorite-bit-cory-doctorow-talks-about-the-bezzle/
"Explaining" makes for great fiction. As Maria Farrell writes in her Crooked Timber review of The Bezzle, the secret sauce of some of the best novels is "information about how things work. Things like locks, rifles, security systems":
https://crookedtimber.org/2024/03/06/the-bezzle/
Where these things are integrated into the story's "reason and urgency," they become "specialist knowledge [that] cuts new paths to move through the world." Hacking, in other words.
This is a theme Paul Di Filippo picked up on in his review of The Bezzle for Locus:
https://locusmag.com/2024/04/paul-di-filippo-reviews-the-bezzle-by-cory-doctorow/
Heinlein was always known—and always came across in his writings—as The Man Who Knew How the World Worked. Doctorow delivers the same sense of putting yourself in the hands of a fellow who has peered behind Oz’s curtain. When he fills you in lucidly about some arcane bit of economics or computer tech or social media scam, you feel, first, that you understand it completely and, second, that you can trust Doctorow’s analysis and insights.
Knowledge is power, and so expository fiction that delivers news you can use is a novel that makes you more powerful – powerful enough to resist the hackers who want to hack you.
Henry and I were both friends of Aaron Swartz, and the Little Brother books are closely connected to Aaron, who helped me with Homeland, the second volume, and wrote a great afterword for it (Schneier wrote an afterword for the first book). That book – and Aaron's afterword – has radicalized a gratifying number of principled technologists. I know, because I meet them when I tour, and because they send me emails. I like to think that these hackers are part of Aaron's legacy.
Henry argues that the Hench books are "purpose-designed to inspire a thousand Max Schrems – people who are probably past their teenage years, have some grounding in the relevant professions, and really want to see things change."
(Schrems is the Austrian privacy activist who, as a law student, set in motion the events that led to the passage of the EU's General Data Protection Regulation:)
https://pluralistic.net/2020/05/15/out-here-everything-hurts/#noyb
Henry points out that William Gibson's Neuromancer doesn't mention the word "internet" – rather, Gibson coined the term cyberspace, which, as Henry says, is "more ‘capitalism’ than ‘computerized information'… If you really want to penetrate the system, you need to really grasp what money is and what it does."
Maria also wrote one of my all-time favorite reviews of Red Team Blues, also for Crooked Timber:
https://crookedtimber.org/2023/05/11/when-crypto-meant-cryptography/
In it, she compares the Hench books to Dickens' Bleak House, but for the modern tech world:
You put the book down feeling it’s not just a fascinating, enjoyable novel, but a document of how Silicon Valley’s very own 1% live and a teeming, energy-emitting snapshot of a critical moment on Earth.
All my life, I've written to find out what's going on in my own head. It's a remarkably effective technique. But it's only recently that I've come to appreciate that reading what other people write about my writing can reveal things that I can't see.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/17/panama-papers-fanfic/#the-1337est-h4x0rs
Image: Frédéric Poirot (modified) https://www.flickr.com/photos/fredarmitage/1057613629 CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
#pluralistic#science fiction#cyberpunk#literary criticism#maria farrell#henry farrell#noir#martin hench#marty hench#red team blues#the bezzle#forensic accountants#hackers#bruce schneier#post-cyberpunk#blair fix
207 notes
Text
I remember when all this malarkey was first brought in. The security experts said at the time it was just making a big song and dance about security without actually making anything safer, and nothing has proved them wrong since.
We need to disband the TSA.
Like, I'm not saying no security at all, but we need to disband the current TSA and go back to like. A quick x-ray of your bags and a metal detector.
When I was a kid flying alone, my parents knew I was smart and not easily freaked out by planes, so from age 8 when going to visit my grandma (an hour's plane ride away), they wouldn't even bother to set me up as an "unaccompanied minor", they'd just let me fly.
Today that sounds absolutely NUTS, but you know why they could do it when I was 8?
When I was 8, they could walk me to the gate, put me on the gangway, and watch the plane take off, and know that my grandmother would be waiting at the gate on the other side to pick me up when I stepped off the plane.
Shortly after 9/11, my sister went to go visit my grandma. She was probably 10 or so. They wouldn’t let anyone go through the metal detectors anymore, you had to have a boarding pass, but if you went to the ticket counter and said, like “I’m picking up/dropping off an unaccompanied child/an elderly person/someone with disabilities” you could get a non-ticket pass to get through security and go to the gate.
Like, people forget sometimes, I think, that the full blown craziness of our current airport “security” (which is a joke and often does more harm than good - hurting or distressing innocent people and missing actual threats going through) took a while to ramp up. If you told parents in the wake of 9/11 that they would not be able to go with their unaccompanied children through security to make sure they got on the plane safely, or be there to pick them up at the gate when they arrive, there would’ve been fucking RIOTS. I remember my parents - VERY conservative and pro-Bush and pro-Patriot act and everything - being FRUSTRATED that they had to get a special pass to go with my sister through security if she was flying alone, because shouldn’t the fact that she’s a child and they’re her parent be enough to get them through?
Seriously, I know this is just one issue out of MANY that the current TSA has, but it’s just. It blows my mind.
You used to be able to go have lunch in the terminal with a friend if they had a layover in your city. You used to be able to romantically chase someone down to stop them boarding their plane when you realized you’d made a mistake turning down their offer to like. Get together or whatever. You used to be able to PUT YOUR GODDAMN CHILD ON A PLANE and be sure that on the other end someone would be right there to pick them up, or that they could just sit down right outside and wait if their pick-up person was running late.
22K notes
Text
not much news in it, but a good summary from a very well-known and respected security researcher
1 note
Text
But there is something fundamentally different about talking with a bot as opposed to a person. A person can be a friend. An AI cannot be a friend, despite how people might treat it or react to it. AI is at best a tool, and at worst a means of manipulation. Humans need to know whether we’re talking with a living, breathing person or a robot with an agenda set by the person who controls it. That’s why robots should sound like robots.
You can’t just label AI-generated speech. It will come in many different forms. So we need a way to recognize AI that works no matter the modality. It needs to work for long or short snippets of audio, even just a second long. It needs to work for any language, and in any cultural context. At the same time, we shouldn’t constrain the underlying system’s sophistication or language complexity.
We have a simple proposal: all talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to create actual robotic-sounding speech synthetically, ring modulators were used to make actors' voices sound robotic. Over the last few decades, we have become accustomed to robotic voices, simply because text-to-speech systems were good enough to produce intelligible speech that was not human-like in its sound. Now we can use that same technology to make robotic speech that is indistinguishable from human speech sound robotic again.
—Bruce Schneier and Barath Raghavan
This is a great idea. There are audio examples in the blog post if you want to hear what it sounds like.
(A ring modulator is a simple circuit that's commonly used in synthesizers. It adds a bunch of non-harmonic frequencies that make audio sound kind of metallic.)
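For the curious, the effect is easy to sketch in a few lines of Python with NumPy (the 50 Hz carrier here is an arbitrary choice, not something from the proposal):

```python
import numpy as np

def ring_modulate(signal, sample_rate, carrier_hz=50.0):
    # Multiply the input by a sine carrier. Each input frequency f re-emerges
    # as the pair (f - carrier_hz) and (f + carrier_hz): inharmonic sidebands
    # that give the classic metallic robot sound.
    t = np.arange(len(signal)) / sample_rate
    return signal * np.sin(2 * np.pi * carrier_hz * t)

# Example: ring-modulate one second of a 440 Hz tone at 44.1 kHz
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
robotic = ring_modulate(tone, sr)
```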
0 notes
Text
Take a Selfie Using a NY Surveillance Camera
0 notes
Quote
If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.
Bruce Schneier
0 notes
Text
The End of Trust
EFF and McSweeney’s have teamed up to bring you The End of Trust (McSweeney’s 54). The first all-nonfiction McSweeney’s issue is a collection of essays and interviews focusing on issues related to technology, privacy, and surveillance.
The collection features writing by EFF’s team, including Executive Director Cindy Cohn, Education and Design Lead Soraya Okuda, Senior Investigative Researcher Dave Maass, Special Advisor Cory Doctorow, and board member Bruce Schneier.
Anthropologist Gabriella Coleman contemplates anonymity; Edward Snowden explains blockchain; journalist Julia Angwin and Pioneer Award-winning artist Trevor Paglen discuss the intersections of their work; Pioneer Award winner Malkia Cyril discusses the historical surveillance of black bodies; and Ken Montenegro and Hamid Khan of Stop LAPD Spying debate author and intelligence contractor Myke Cole on the question of whether there’s a way law enforcement can use surveillance responsibly.
The End of Trust is available to download and read right now under a Creative Commons BY-NC-ND license.
#bruce schneier#Cindy Cohn#collection of essays and interviews#Cory Doctorow#Dave Maass#Edward Snowden#EFF#Gabriella Coleman#privacy#Soraya Okuda#surveillance#technology#The End of Trust
1 note
Text
Schneier's Law: Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.
— cryptographer Bruce Schneier
0 notes
Text
Limits of AI and LLM for attorneys
Note: Creepio is a featured player among Auralnauts. The current infatuation with Artificial Intelligence (AI), especially at the state bar, which is pushing CLEs about how lawyers need to get on the AI bandwagon, is generally an un-serious infatuation with a marketing concept. AI and LLMs – large language models, on which much of recent AI is based – have nothing to do with accuracy. So, for a…
![Tumblr media](https://64.media.tumblr.com/2ae8fad732f20ce7cfce499becbc1363/60201e5202045583-e8/s540x810/9ebc646f030f2412472437adb5ee257710c2fad9.jpg)
0 notes
Text
“When most people look at a system, they focus on how it works. When security technologists look at the same system, they can’t help but focus on how it can be made to fail: how that failure can be used to force the system to behave in a way it shouldn’t, in order to do something it shouldn’t be able to do—and then how to use that behavior to gain an advantage of some kind.
That’s what a hack is: an activity allowed by the system that subverts the goal or intent of the system.
...
Hacking is how the rich and powerful subvert the rules to increase both their wealth and power. They work to find novel hacks, and also to make sure their hacks remain so they can continue to profit from them.
...It’s not that the wealthy and powerful are better at hacking, it’s that they’re less likely to be punished for doing so. Indeed, their hacks often become just a normal part of how society works. Fixing this is going to require institutional change. Which is hard, because institutional leaders are the very people stacking the deck against us.”
—Bruce Schneier, A Hacker's Mind
0 notes
Text
Battery rationality
![Tumblr media](https://64.media.tumblr.com/1b9b2cadb2b90c8630d56dac84043420/cd43afd78ab08199-1e/s540x810/113863bfc89cbab2cc74e5cdce0c14145b6508cd.jpg)
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/12/06/shoenabombers/#paging-dick-cheney
After 9/11, we were told that "no cost was too high" when it came to fighting terrorism, and indeed, the US did blow trillions on forever wars and regime change projects and black sites and kidnappings and dronings and gulags that were supposed to end terrorism.
Back in the imperial core, we all got to play the home edition of the "no price is too high" War on Terror game. New, extremely invasive airport security measures were instituted. A "no-fly" list as thick as a phone book, assembled in secret, without any due process or right of appeal, was produced and distributed to airlines, and suddenly, random babies and sitting US Senators couldn't get on airplanes anymore, because they were simultaneously too dangerous to fly and also not guilty enough to charge with any crime:
https://pluralistic.net/2021/01/20/damn-the-shrub/#no-nofly
We lost our multitools, our knitting needles, our medical equipment, all in the name of keeping another boxcutter rebellion from rushing the cockpit. As security expert Bruce Schneier repeatedly pointed out back then, the presence of (for example) glass bottles on the drinks trolley meant that would-be terrorists could trivially avail themselves of an improvised edged weapon that was every bit as deadly as 9/11's box cutters.
According to Schneier, there were exactly two meaningful security measures taken in those days: reinforcing cockpit doors, and teaching basic self-defense to flight crews. Everything else was "security theater," a term coined to describe the entire business, from TSA confiscations to warehouses full of useless "chemical sniffer" booths that were supposed to smell out bombs on our person:
https://www.motherjones.com/politics/2010/01/airport-scanner-scam/
Security theater isn't just about deploying measures that don't work – it's also about defending yourself against risks that don't exist. You know how this goes: in 2001, Richard Reid – AKA "The Shoenabomber" – tried to blow up a plane with explosives he'd hidden in his shoes. It didn't work, because it's a stupid idea – and then we all took off our shoes for a quarter-century:
https://en.wikipedia.org/wiki/Richard_Reid
In 2006, a gang of amateur chemists hatched a plan to synthesize explosives in an airplane toilet sink, scheming to smuggle in different reagents and precursors in their carry-on luggage, then making a bomb in the sky and taking down the plane and all its passengers. The "Hair Gel Bombers" were caught before they could try their scheme, but even if they had made it onto the plane, they would have failed. Their liquid explosive recipe started with mixing up a "piranha bath" – a mixture of sulfuric acid and hydrogen peroxide – that needs to be kept extremely cold for a long time, or it will turn into instantly lethal gas. If the liquid bomb plot had gone ahead, the near-certain outcome would have been the eventual discovery of an asphyxiated terrorist in the bathroom, lips blue and lungs burned away, face down in a shallow sink filled with melting ice-cubes:
https://en.wikipedia.org/wiki/2006_transatlantic_aircraft_plot
The fact that these guys failed utterly didn't have any impact on the dramaturges who ran the world's security theater. We're still having our liquids taken away at airport checkpoints.
Why did we have to defend ourselves against imaginary attacks that had been proven not to work? Because "no price was too high to pay" in the War on Terror. As Schneier pointed out, this was obvious nonsense: there is a 100% effective, foolproof way to prevent all attacks on civilian aircraft. All we need to do is institute a 100% ban on air travel. We didn't do that, because "no price is too high to pay" was always bullshit. Some prices are obviously too high to pay.
Which is why we still get to keep our underwear on, even after Umar Farouk "Underwear Bomber" Abdulmutallab's failed 2009 attempt to blow up an airplane with a bomb he'd hidden in his Y-fronts:
https://en.wikipedia.org/wiki/Umar_Farouk_Abdulmutallab
It's why we aren't all getting a digital rectal exam every time we fly, despite the fact that hiding a bomb up your ass actually works, as proven by Abdullah "Asshole Bomber" al-Asiri, who blew his torso off with a rectally inserted bomb in 2009 in a bid to kill a Saudi official:
https://en.wikipedia.org/wiki/Abdullah_al-Asiri
Apparently, giving every flier a date with Doctor Jellyfinger is too high a price to pay for aviation safety, too.
Now, theatrical productions can have very long runs (The Mousetrap ran in London for 70 years!), but eventually the curtain rings down on every stage. It's possible we're present for the closing performance of security theater.
On September 17, the Israeli military assassinated 12 people in Lebanon and wounded 2,800 more by blowing up their pagers and two-way radios whose batteries had been gimmicked with pouches of PETN, a powerful explosive. This is a devastating attack, because we carry a ton of battery-equipped gadgets around with us, and most of them are networked and filled with programmable electronics, so they can be detonated based on a variety of circumstances – physical location, a specific time, or a remote signal.
What's more, PETN-gimmicked batteries are super easy to make and effectively impossible to detect. In a breakdown published a few days after the attack, legendary hardware hacker Andrew "bunnie" Huang described the hellmouth that had just been opened:
https://www.bunniestudios.com/blog/2024/turning-everyday-gadgets-into-bombs-is-a-bad-idea/
The battery in your phone, your laptop, your tablet, and your power-bank is a "lithium pouch battery." These are manufactured all over the world, and you don't need a large or sophisticated factory to make one. It would be effectively impossible to control the manufacture of these batteries. You can make batteries in "R&D quantities" for about $50,000. Alibaba will sell you a full, turnkey "pouch cell assembly line" for about $10,000. More reputable vendors want as little as $15,000.
A pouch cell is composed of layers of "cathode and anode foils between a polymer separator that is folded many times." After a machine does all this folding, the battery is laminated into a pouch made of aluminum foil, which is then cleaned up, labeled, and flushed into the global supply chain.
To make a battery bomb, you mix PETN "with binders to create a screen-printed sheet" that's folded and inserted into the battery, in such a way as to produce a shaped charge that "concentrat[es] the shock wave in an area, effectively turning the case around the device into a small fragmentation grenade."
Doing so will reduce the capacity of the battery by about 10% or less, which is within the normal variations we see in batteries. If you're worried about getting caught by someone who's measuring battery capacity, you can add an extra explosive sheet to the battery's interior, increasing the thickness of a 10-sheet battery by 10%, which is within the tolerance for normal swelling.
Once the explosive is laminated inside its (carefully cleaned) aluminum pouch, there's no way to detect the chemical signature of the PETN. The pouch seals that all in. The PETN and other components of the battery are too similar to one another to be detected with X-ray fluorescence, and the multi-layer construction of a battery also foils attempts to peer inside it with Spatially Offset Raman Spectroscopy.
According to bunnie, there are no ways to detect a battery bomb through visual inspection, surface analysis or X-rays. You can't spot it by measuring capacity or impedance with electrochemical impedance spectroscopy. You could spot it with a high-end CT scan – a half-million dollar machine that takes about 30 minutes for each scan. You might be able to spot it with ultrasound.
Lithium batteries have "protection circuit modules" – a small circuit board with a chip that helps with the orderly functioning of the battery. To use one of these to detonate a PETN-equipped battery, you'd only have to make a small, board-level rewiring, which could deliver a charge via a "third wire" – the NTC temperature sensor that's standard in batteries.
Bunnie gets into a lot more detail in his post. It's frankly terrifying, because it's hard to read this without concluding that, indeed, any battery in any gadget could actually be a powerful, undetectable bomb. What's more, supply chain security sucks and bunnie runs down several ways you could get these batteries into your target's gadget. These range from the nefarious to the brute simple: "buy a bunch of items from Amazon, swap out the batteries, restore the packaging and seals, and return the goods to the warehouse."
Bunnie's point is that, having shown the world that battery bombs are possible, the Israelis have opened the hellmouth. They were the first ones to do this, but they won't be the last. We need to figure out something before "the front line of every conflict [is brought] into your pocket, purse or home."
All of that is scary af, sure, but note what hasn't happened in the wake of an extremely successful, nearly impossible to defeat explosives attack that used small electronics of the same genus as the pocket rectangles virtually every air traveler boards a plane with. We've had no new security protocols instituted since September 17, likely because no one can think of anything that would work.
Now, in the heady days when the security theater was selling out every performance and we were all standing in two-hour lines to take our shoes off, none of this would have mattered. The TSA's motto of "when in trouble, or in doubt, run in circles, scream and shout" would have come to the fore. We'd be forced to insert our phones into some grifter's nonfunctional billion-dollar PETN dowsing-box, or TSA agents would be ordering us to turn on our phones and successfully play eleven rounds of Snake, or we'd be forced to lick our phones to prove that they weren't covered in poison.
But today, we're keeping calm and carrying on. The fact that something awful exists is, well, awful, but if we don't know what to do about it, there's no sense in just doing something, irrespective of whether that will help. We could order everyone to leave their phones at home when they fly, but then no one would fly anymore, and obviously, no one seriously thinks "no price is too high" for safety. Some prices are just too high.
I started thinking about all this last week, when I was in New Delhi to give a keynote for the annual meeting of the International Cooperative Alliance, which was jointly held with the UN as the inauguration of the UN International Year of Coops, with an address from UN Secretary General Antonio Guterres:
https://2025.coop/
When I arrived in New Delhi, my hosts were somewhat flustered because Indian Prime Minister Narendra Modi had just announced that he would give the opening keynote, which meant a lot of rescheduling and shuffling – but also a lot of security. I was told that the only things I could bring to the conference center the next day were my badge, my passport and my hotel room key. I couldn't bring a laptop, a phone or a spare battery. I couldn't even bring a pen ("they're worried about stabbings").
Modi – a lavishly corrupt authoritarian genocidier – has a lot of reasons to worry about his security. He has actual enemies who sometimes blow stuff up, and if one of them took him out, he wouldn't be the first Indian PM to die by assassination.
But when the speakers and delegates gathered in the hotel lobby the next morning, we were told that we could bring phones after all. Because of course we could. You can't fly people from all over the world to India and then ask them to forego the device they use as translator, map, note-taker, personal diary, and credit card. Some prices are just too high.
They took a lot of security measures. Everyone went through a metal detector, naturally. Then, we were sealed in the plenary room for more than an hour while the building was sealed off. Armed men were stationed all around the room, and the balcony outside the room was ringed with snipers:
https://www.flickr.com/photos/doctorow/54165263130/
We were prohibited from leaving our seats from the time Modi entered the room until he left it again, despite the fact that the PM was never more than a few steps from the single most terrifying bodyguard I'd ever seen:
https://www.flickr.com/photos/doctorow/54164805776/
And yet: the fact that we were less than two months out from an extremely successful, highly public demonstration of the weaponization of small batteries in personal electronics did not mean that we all had to leave our phones at the hotel.
After that, I'm tempted to think that, just possibly, security theater's curtain has rung down and its long SRO run has come to an end. It's a small bright spot in a dark time, but I'll take it.
#pluralistic#batteries#terrorism#security#security theater#modi#bombs#petn#bunnie huang#aviation#tsa#fin de siecle
396 notes
Text
Shamir Secret Sharing
It’s 3am. Paul, the head of PayPal database administration carefully enters his elaborate passphrase at a keyboard in a darkened cubicle of 1840 Embarcadero Road in East Palo Alto, for the fifth time. He hits Return. The green-on-black console window instantly displays one line of text: “Sorry, one or more wrong passphrases. Can’t reconstruct the key. Goodbye.”
There is nerd pandemonium all around us. James, our recently promoted VP of Engineering, just climbed the desk at a nearby cubicle, screaming: “Guys, if we can’t get this key the right way, we gotta start brute-forcing it ASAP!” It’s gallows humor – he knows very well that brute-forcing such a key will take millions of years, and it’s already 6am on the East Coast – the first of many “Why is PayPal down today?” articles is undoubtedly going to hit CNET shortly. Our single-story cubicle-maze office is buzzing with nervous activity of PayPalians who know they can’t help but want to do something anyway. I poke my head up above the cubicle wall to catch a glimpse of someone trying to stay inside a giant otherwise empty recycling bin on wheels while a couple of Senior Software Engineers are attempting to accelerate the bin up to dangerous speeds in the front lobby. I lower my head and try to stay focused. “Let’s try it again, this time with three different people” is the best idea I can come up with, even though I am quite sure it will not work.
It doesn’t.
The key in question decrypts PayPal’s master payment credential table – also known as the giant store of credit card and bank account numbers. Without access to payment credentials, PayPal doesn’t really have a business per se, seeing how we are supposed to facilitate payments, and that’s really hard to do if we no longer have access to the 100+ million credit card numbers our users added over the last year of insane growth.
This is the story of a catastrophic software bug I briefly introduced into the PayPal codebase that almost cost us the company (or so it seemed, in the moment.) I’ve told this story a handful of times, always swearing the listeners to secrecy, and surprisingly it does not appear to have ever been written down before. 20+ years since the incident, it now appears instructive and a little funny, rather than merely extremely embarrassing.
Before we get back to that fateful night, we have to go back another decade. In the summer of 1991, my family and I moved to Chicago from Kyiv, Ukraine. While we had just a few hundred dollars between the five of us, we did have one secret advantage: science fiction fans.
My dad was a highly active member of Zoryaniy Shlyah – Kyiv's possibly first (and possibly only, at the time) sci-fi fan club – the name means "Star Trek" in Ukrainian, unsurprisingly. He translated some Stanislaw Lem (of Solaris and Futurological Congress fame) from Polish to Russian in the early 80s and was generally considered a coryphaeus at ZSh.
While the USSR was more or less informationally isolated behind the digital Iron Curtain until the late '80s, by 1990 or so things like FidoNet wriggled their way into the Soviet computing world, and some members of ZSh were now exchanging electronic mail with sci-fi fans of the free world.
The vaguely exotic news of two Soviet refugee sci-fi fans arriving in Chicago was transmitted to the local fandom before we had even boarded the PanAm flight that took us across the Atlantic [1]. My dad (and I, by extension) was soon adopted by some kind Chicago science fiction geeks, a few of whom became close friends over the years, though that’s a story for another time.
A year or so after the move to Chicago, our new sci-fi friends invited my dad to a birthday party for a rising star of the local fandom, one Bruce Schneier. We certainly did not know Bruce or really anyone at the party, but it promised good food, friendly people, and probably filk. My role was to translate, as my dad spoke limited English at the time.
I had fallen desperately in love with secret codes and cryptography about a year before we left Ukraine. Walking into Bruce’s library during the house tour (this was a couple years before Applied Cryptography was published and he must have been deep in research) felt like walking into Narnia.
I promptly abandoned my dad to fend for himself as far as small talk and canapés were concerned, and proceeded to make a complete ass out of myself by brazenly asking the host for a few sheets of paper and a pencil. Having been obliged, I pulled a half dozen cryptography books from the shelves and went to work trying to copy down some answers to a few long-held questions on the library floor. After about two hours of scribbling alone like a man possessed, I ran out of paper and decided to temporarily rejoin the party.
On the living room table, Bruce had stacks of copies of his fanzine Ramblings. Thinking I could use the blank sides of the pages to take more notes, I grabbed a printout and was about to quietly return to copying the original S-box values for DES when my dad spotted me from across the room and demanded I help him socialize. The party wrapped soon, and our friends drove us home.
The printout I grabbed was not a Ramblings issue. It was a short essay by Bruce titled Sharing Secrets Among Friends, essentially a humorous explanation of Shamir Secret Sharing.
Say you want to make sure that something really really important and secret (a nuclear weapon launch code, a database encryption key, etc) cannot be known or used by a single (friendly) actor, but becomes available, if at least n people from a group of m choose to do it. Think two on-duty officers (from a cadre of say 5) turning keys together to get ready for a nuke launch.
The idea (proposed by Adi Shamir – the S of RSA! – in 1979) is as simple as it is beautiful.
Let’s call the secret we are trying to split among m people K.
First, create a totally random polynomial that looks like: y(x) = C0 * x^(n-1) + C1 * x^(n-2) + … + C(n-2) * x + K. “Create” here just means generate random coefficients C. Now, for every person in your trusted group of m, evaluate the polynomial at some distinct, randomly chosen value x_i, and hand them their corresponding point (x_i, y_i).
If we have any n of these points together, we can use the Lagrange interpolating polynomial to reconstruct the coefficients – and evaluate the original polynomial at x=0, which conveniently gives us y(0) = K, the secret. Beautiful. I still had the printout with me, years later, in Palo Alto.
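In rough Python, the whole scheme looks like this (an illustrative sketch: the prime, the API, and the 3-of-8 parameters are choices for this example, not the PayPal code):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic happens in the field GF(P)

def split(secret, n, m):
    # Random degree-(n-1) polynomial with the secret as its constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(n - 1)]
    # Each of the m shard-holders gets the point (x, y(x)) for a distinct x != 0.
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, m + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers y(0) = the secret.
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * -xk % P
                den = den * (xj - xk) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

shares = split(secret=123456789, n=3, m=8)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 8 shards suffice
```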
It should come as no surprise that during my time as CTO, PayPal engineering had an absolute obsession with security. No firewall was one too many, no multi-factor authentication scheme too onerous, etc. Anything that was worth anything at all was encrypted at rest.
To decrypt, a service would get the needed data from its database table and transmit it to a special machine named cryptoserv (original Sun hardware running Solaris, sitting on its own, especially tightly locked-down network), where a service running only there would perform the decryption and send back the result.
Decryption request rate was monitored externally and on cryptoserv, and if there were too many requests, the whole thing was to shut down and purge any sensitive data and keys from its memory until manually restarted.
It was this manual restart that gnawed at me. At launch, a bunch of configuration files containing various critical decryption keys were read (decrypted by another key, derived from one manually-entered passphrase) and loaded into memory to perform future cryptographic services.
Four or five of us on the engineering team knew the passphrase and could restart cryptoserv if it crashed or simply had to have an upgrade. What if someone performed a little old-fashioned rubber-hose cryptanalysis and literally beat the passphrase out of one of us? The attacker could theoretically get access to these all-important master keys. Then stealing the encrypted-at-rest database of all our users’ secrets could prove useful – they could decrypt them in the comfort of their underground supervillain lair.
I needed to eliminate this threat.
Shamir Secret Sharing was the obvious choice – beautiful, simple, perfect (you can in fact prove that, if done right, it offers perfect secrecy.) I decided on a 3-of-8 scheme and implemented it over a few days in pure POSIX C for portability, then tested it for several weeks on my Linux desktop with other engineers.
Step 1: generate the polynomial coefficients for 8 shard-holders.
Step 2: compute the key shards (x0, y0) through (x7, y7)
Step 3: get each shard-holder to enter a long, secure passphrase to encrypt the shard
Step 4: write out the 8 shard files, encrypted with their respective passphrases.
And to reconstruct:
Step 1: pick any 3 shard files.
Step 2: ask each of the respective owners to enter their passphrases.
Step 3: decrypt the shard files.
Step 4: reconstruct the polynomial, evaluate it for x=0 to get the key.
Step 5: launch cryptoserv with the key.
One design detail here is that each shard file also stored a message authentication code (a keyed hash) of its passphrase to make sure we could identify when someone mistyped their passphrase. These tests ran hundreds and hundreds of times, on both Linux and Solaris, to make sure I did not screw up some big/little-endianness issue, etc. It all worked perfectly.
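That check might have looked something like this sketch (hypothetical: the post doesn't say which keyed hash was used):

```python
import hashlib, hmac

def passphrase_mac(passphrase):
    # A keyed hash of the passphrase, stored inside the shard file, so a typo
    # can be flagged instead of silently producing a garbage shard.
    return hmac.new(passphrase.encode(), b"shard-check", hashlib.sha256).hexdigest()

stored = passphrase_mac("long elaborate passphrase")  # written at split time
typed = "long elaborate passphrase!"                  # a mistyped attempt
if not hmac.compare_digest(stored, passphrase_mac(typed)):
    print("Sorry, wrong passphrase for this shard.")
```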
A month or so later, the night of the key splitting party was upon us. We were finally going to close out the last vulnerability and be secure. Feeling as if I was about to turn my fellow shard-holders into cymeks, I gathered them around my desktop as PayPal’s front page began sporting the “We are down for maintenance and will be back soon” message around midnight.
The night before, I solemnly generated the new master key and securely copied it to cryptoserv. Now, while “Push It” by Salt-n-Pepa blared from someone’s desktop speakers, the automated deployment script copied shard files to their destination.
While each of us took turns carefully entering our elaborate passphrases at a specially selected keyboard, Paul shut down the main database and decrypted the payment credentials table, then ran the script to re-encrypt with the new key. Some minutes later, the database was running smoothly again, with the newly encrypted table, without incident.
All that was left was to restore the master key from its shards and launch the new, even more secure cryptographic service.
The three of us entered our passphrases… to be met with the error message I hadn’t seen in weeks: “Sorry, one or more wrong passphrases. Can’t reconstruct the key. Goodbye.” Surely one of us screwed up typing, no big deal, we’ll do it again. No dice. No dice – again and again, even after we tried numerous combinations of the three people necessary to decrypt.
Minutes passed, confusion grew, tension rose rapidly.
There was nothing to do, except to hit rewind – to grab the master key from the file still sitting on cryptoserv, split it again, generate new shards, choose passphrases, and get it done. Not a great feeling to have your first launch go wrong, but not a huge deal either. It will all be OK in a minute or two.
A cursory look at the master key file date told me that no, it wouldn’t be OK at all. The file sitting on cryptoserv wasn’t from last night, it was created just a few minutes ago. During the Salt-n-Pepa-themed push from stage, we overwrote the master key file with the stage version. Whatever key that was, it wasn’t the one I generated the day before: only one copy existed, the one I copied to cryptoserv from my computer the night before. Zero copies existed now. Not only that, the push script appears to have also wiped out the backup of the old key, so the database backups we have encrypted with the old key are likely useless.
Sitrep: we have 8 shard files that we apparently cannot use to restore the master key and zero master key backups. The database is running but its secret data cannot be accessed.
I will leave it to your imagination to conjure up what was going through my head that night as I stared into the black screen willing the shards to work. After half a decade of trying to make something of myself (instead of just going to work for Microsoft or IBM after graduation) I had just destroyed my first successful startup in the most spectacular fashion.
Still, the idea of “what if we all just continuously screwed up our passphrases” swirled around my brain. It was an easy check to perform, thanks to the included MACs. I added a single printf() debug statement into the shard reconstruction code and instead of printing out a summary error of “one or more…” the code now showed if the passphrase entered matched the authentication code stored in the shard file.
I compiled the new code directly on cryptoserv in direct contravention of all reasonable security practices – what did I have to lose? Entering my own passphrase, I promptly got the “bad passphrase” error I had just added to the code. Well, that’s just great – I knew my passphrase was correct; I had it written down on a post-it note I had planned to rip up hours ago.
Another person, same error. Finally, the last person, JK, entered his passphrase. No error. The key still did not reconstruct correctly – I got the “Goodbye” – but something worked. I turned to the engineer and said, “what did you just type in that worked?”
After a second of embarrassed mumbling, he admitted to choosing “a$$word” as his passphrase. The gall! I asked everyone entrusted with the grave task of relaunching cryptoserv to pick really hard-to-guess passphrases, and this guy…?! Still, this was something – it worked. But why?!
I sprinted around the half-lit office grabbing the rest of the shard-holders demanding they tell me their passphrases. Everyone else had picked much lengthier passages of text and numbers. I manually tested each and none decrypted correctly. Except for the a$$word. What was it…
A lightning bolt hit me and I sprinted back to my own cubicle in the far corner, unlocked the screen and typed in “man getpass” on the command line, while logging into cryptoserv in another window and doing exactly the same thing there. I saw exactly what I needed to see.
Today, should you try to read the programmer’s manual (AKA the man page) on getpass, you will find it has long been declared obsolete and replaced with a more intelligent alternative in nearly all flavors of modern Unix.
But back then, if you wanted to collect some information from the keyboard without printing what is being typed in onto the screen and remain POSIX-compliant, getpass did the trick. Other than a few standard file manipulation system calls, getpass was the only operating system service call I used, to ensure clean portability between Linux and Solaris.
Except it wasn’t completely clean.
Plain as day, there it was: the manual pages were identical, except Solaris had a “special feature”: any passphrase longer than 8 characters was silently truncated to that length. (Who needs long passwords, amiright?!)
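The failure mode is easy to reproduce with a modern Python stand-in (the key-derivation step here is hypothetical, standing in for the original C code):

```python
import hashlib

def derive_key(passphrase, clip=None):
    # Derive a shard-encryption key from whatever the terminal handed us.
    if clip is not None:
        passphrase = passphrase[:clip]  # what Solaris getpass() silently did at 8 chars
    return hashlib.sha256(passphrase.encode()).hexdigest()

# Keys generated on Linux (full passphrase) vs. entered on Solaris (clipped):
print(derive_key("correct horse battery staple") ==
      derive_key("correct horse battery staple", clip=8))     # False: shard won't decrypt
print(derive_key("a$$word") == derive_key("a$$word", clip=8))  # True: all 7 chars survive
```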
I screamed like a wounded animal. We generated the key on my Linux desktop and entered our novel-length passphrases right here. Attempting to restore them on a Solaris machine where they were being clipped down to 8 characters long would never work. Except, of course, for a$$word. That one was fine.
The rest was an exercise in high-speed coding and some entirely off-protocol file moving. We reconstructed the master key on my machine (all of our passphrases worked fine), copied the file to the Solaris-running cryptoserv, re-split it there (with very short passphrases), reconstructed it successfully, and PayPal was up and running again like nothing ever happened.
By the time our unsuspecting colleagues rolled back into the office I was starting to doze on the floor of my cubicle and that was that. When someone asked me later that day why we took so long to bring the site back up, I’d simply respond with “eh, shoulda RTFM.”
RTFM indeed.
P.S. A few hours later, John, our General Counsel, stopped by my cubicle to ask me something. The day before, I had apparently given him a sealed envelope and asked him to store it in his safe for 24 hours without explaining myself. He wanted to know what to do with it now that 24 hours had passed.
Ha. I forgot all about it, but in a bout of “what if it doesn’t work” paranoia, I printed out the base64-encoded master key when we had generated it the night before, stuffed it into an envelope, and gave it to John for safekeeping. We shredded it together without opening and laughed about what would have never actually been a company-ending event.
P.P.S. If you are thinking of all the ways this whole SSS design is horribly insecure (it had some real flaws for sure) and plan to poke around PayPal to see if it might still be there, don’t. While it served us well for a few years, this was the very first thing eBay required us to turn off after the acquisition. Pretty sure it’s back to a single passphrase now.
Notes:
1: a member of the Chicagoland sci-fi fan community let me know that the original news of our move to the US was delivered to them via a posted letter, snail mail, not FidoNet email!
522 notes
Text
Books for Learning Cybersecurity
I created this post for the Studyblr Masterpost Jam, check out the tag for more cool masterposts from folks in the studyblr community!
I'll admit that I've mostly used online resources and courses so far, but these are the handful of books that I've read and recommend!
Books About Cybersecurity History
The Cuckoo's Egg by Clifford Stoll: the story of how one man noticed an odd error in a computer at UC Berkeley and traced it back to an international hacking scheme. A very fun & interesting read.
Cult of the Dead Cow by Joseph Menn: a look into hacker culture and how few people took computer security seriously at the beginning, and an exploration of how cybersecurity often becomes deeply and uncomfortably intertwined with geopolitics.
Books to Learn Cybersecurity
Practical Malware Analysis by Michael Sikorski and Andrew Honig: an oldie but a goodie for getting into malware analysis. I'm currently in the middle of working through this one and having a great time.
Serious Cryptography by Jean-Philippe Aumasson: a detailed guide for understanding the nuts and bolts of cryptography (the 2nd edition is coming out this month, so it might be worth checking out the newer version).
Cybersecurity Foundations Textbooks
Computer Networking: A Top-Down Approach (7th edition) by Kurose & Ross: this was the textbook we used in the computer networks class I took in college - I think this book does a good job of explaining how the internet works and breaking concepts down to the lowest level.
Operating System Concepts (10th edition) by Silberschatz, Galvin, and Gagne: this is the textbook from my operating systems course. This is a good book to go through if you want to learn how a computer Really Works.
Books I Haven't Read (Yet) - But I Know They'll Be Good
Cybersecurity Books from No Starch Press - many books have free chapters posted here that you can read as a preview!
Books by Bruce Schneier - books on society, technology, and cryptography
73 notes
Text
Why Not Write Cryptography
I learned Python in high school in 2003. This was unusual at the time. We were part of a pilot project, testing new teaching materials. The official syllabus still expected us to use PASCAL. In order to satisfy the requirements, we had to learn PASCAL too, after Python. I don't know if PASCAL is still standard.
Some of the early Python programming lessons focused on cryptography. We didn't really learn anything about cryptography itself then; it was all just toy problems to demonstrate basic programming concepts like loops and recursion. Beginners can easily implement some old, outdated ciphers like Caesar, Vigenère, arbitrary 26-letter substitutions, transpositions, and so on.
The Vigenère cipher will be important. It goes like this: First, in order to work with letters, we assign numbers from 0 to 25 to the 26 letters of the alphabet, so A is 0, B is 1, C is 2 and so on. In the programs we wrote, we had to strip out all punctuation and spaces, write everything in uppercase and use the standard transliteration rules for Ä, Ö, Ü, and ß. That's just the encoding part. Now comes the encryption part. For every letter in the plain text, we add the next letter from the key, modulo 26, round robin style. The key is repeated after we get to the end. Encrypting "HELLOWORLD" with the key "ABC" yields ["H"+"A", "E"+"B", "L"+"C", "L"+"A", "O"+"B", "W"+"C", "O"+"A", "R"+"B", "L"+"C", "D"+"A"], or "HFNLPYOSND". If this short example didn't click for you, you can look it up on Wikipedia and blame me for explaining it badly.
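As code, the whole cipher fits in a few lines of Python (a sketch in the spirit of those classroom programs, not the original):

```python
def vigenere_encrypt(plaintext, key):
    # A=0 ... Z=25; add the next key letter mod 26, repeating the key round robin.
    return "".join(
        chr((ord(p) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
        for i, p in enumerate(plaintext)
    )

print(vigenere_encrypt("HELLOWORLD", "ABC"))  # HFNLPYOSND
```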
Then our teacher left in the middle of the school year, and a different one took over. He was unfamiliar with encryption algorithms. He took us through some of the exercises about breaking the Caesar cipher with statistics. Then he proclaimed, based on some back-of-the-envelope calculations, that a Vigenère cipher with a long enough key, with the length unknown to the attacker, is "basically uncrackable". You can't brute-force a 20-letter key, and there are no significant statistical patterns.
I told him this wasn't true. If you re-use a Vigenère key, it's like re-using a one-time pad key. At the time I had just read the first chapters of Bruce Schneier's "Applied Cryptography", and some pop history books about cold war spy stuff. I knew about the problem with re-using a one-time pad. A one-time pad is the same as if your Vigenère key is as long as the message, so there is no way to make any inferences from one letter of the encrypted message to another letter of the plain text. This is mathematically proven to be completely uncrackable, as long as you use the key only one time, hence the name. Re-use of one-time pads actually happened during the cold war. Spy agencies communicated through number stations and one-time pads, but at some point the Soviets either killed some of their cryptographers in a purge, or they messed up their book-keeping, and they re-used some of their keys. The Americans could decrypt the messages.
Here is how: If you have message $A$ and message $B$, and you re-use the key $K$, then an attacker can take the encrypted messages $A+K$ and $B+K$, and subtract them. That creates $(A+K) - (B+K) = A - B + K - K = A - B$. If you re-use a one-time pad, the attacker can just filter the key out and calculate the difference between two plaintexts.
My teacher didn't know that. He had done a quick back-of-the-envelope calculation about the time it would take to brute-force a 20 letter key, and the likelihood of accidentally arriving at something that would resemble the distribution of letters in the German language. In his mind, a 20 letter key or longer was impossible to crack. At the time, I wouldn't have known how to calculate that probability.
When I challenged his assertion that it would be "uncrackable", he created two messages that were written in German, and pasted them into the program we had been using in class, with a randomly generated key of undisclosed length. He gave me the encrypted output.
Instead of brute-forcing keys, I decided to apply what I knew about re-using one-time pads. I wrote a program that took some of the most common German words and added them to sections of $(A-B)$. If a word was equal to a section of $B$, then this would generate a section of $A$. Then I used a large spellchecking dictionary to see if the section of $A$ generated by guessing a section of $B$ contained any valid German words. If yes, it would print the guessed word in $B$, the section of $A$, and the corresponding section of the key. There was only a little bit of key material that was common to multiple results, but that was enough to establish how long the key was. From there, I modified my program so that I could interactively try to guess words and it would decrypt the rest of the text based on my guess. The messages were two articles from the local newspaper.
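In outline, the attack looks something like this sketch, where `WORDS` stands in for the common-word list and the spellchecking dictionary:

```python
def sub26(a, b):
    # Letterwise (a - b) mod 26 over A..Z.
    return "".join(chr((ord(x) - ord(y)) % 26 + 65) for x, y in zip(a, b))

def add26(a, b):
    # Letterwise (a + b) mod 26 over A..Z.
    return "".join(chr((ord(x) + ord(y) - 130) % 26 + 65) for x, y in zip(a, b))

def crib_drag(cipher_a, cipher_b, crib, words):
    # The key cancels out: (A+K) - (B+K) = A - B.
    diff = sub26(cipher_a, cipher_b)
    hits = []
    for i in range(len(diff) - len(crib) + 1):
        guess = add26(diff[i:i + len(crib)], crib)  # if crib sits in B here, guess is a slice of A
        if any(w in guess for w in words):  # does the guessed slice of A contain a real word?
            hits.append((i, guess))
    return hits

WORDS = {"NACHRICHTEN", "ZEITUNG", "GESTERN"}  # toy stand-in for the real dictionary
```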
When I showed the decrypted messages to my teacher the next week, he got annoyed and accused me of cheating. Had I installed a keylogger on his machine? Had I rigged his encryption program to leak key material? Had I exploited the old Python random number generator that isn't really random enough for cryptography (but good enough for games and simulations)?
Then I explained my approach. My teacher insisted that this solution didn't count, because it relied on guessing words. It would never have worked on random numeric data. I was just lucky that the messages were written in a language I speak. I could have cheated by using a search engine to find the newspaper articles on the web.
Now the lesson you should take away from this is not that I am smart and teachers are sore losers.
Lesson one: Everybody can build an encryption scheme or security system that he himself can't defeat. That doesn't mean others can't defeat it. You can also create a secret alphabet to protect your teenage diary from your kid sister. It's not practical to use that as an encryption scheme for banking. Something that works for your diary will in all likelihood be inappropriate for online banking, never mind state secrets. You never know if a teenage diary won't be stolen by a determined thief who thinks it holds the secret to a Bitcoin wallet passphrase, or if someone is re-using his banking password in your online game.
Lesson two: When you build a security system, you often accidentally design around an "intended attack". If you build a lock to be especially pick-proof, a burglar can still kick in the door, or break a window. Or maybe a new variation of the old "slide a piece of paper under the door and push the key through" trick works. Non-security experts are especially susceptible to this. Experts in one domain are often blind to attacks/exploits that make use of a different domain. It's like the physicist who saw a magic show and thought it must be powerful magnets at work, when it was actually invisible ropes.
Lesson three: Sometimes a real world problem is a great toy problem, but the easy and didactic toy solution is a really bad real world solution. Encryption was a fun way to teach programming, not a good way to teach encryption. There are many problems like that, like 3D rendering, Chess AI, and neural networks, where the real-world solution is not just more sophisticated than the toy solution, but a completely different architecture with completely different data structures. My own interactive codebreaking program did not work like modern approaches do, either.
Lesson four: Don't roll your own cryptography. Don't even implement a known encryption algorithm. Use a cryptography library. Chances are you are not Bruce Schneier or Daniel J. Bernstein. It's harder than you thought. Unless you are doing a toy programming project to teach programming, it's not a good idea. If you don't take this advice to heart, a teenager with something to prove, somebody much less knowledgeable but with more time on his hands, might cause you trouble.
350 notes
Text
Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI generated disinformation” trope and speculate on some of the ways AI will change how democracy functions – in both large and small ways.
When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen.
Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today.
—Bruce Schneier [a major dude in cybersecurity btw]
0 notes