hey! i stumbled across you on ao3 through genshin (i think? that was in september i have no idea at this point), went to check out your profile and saw my hero academia works there. i am currently very much into it, so i was like let's gooo sooo I found B♭ and that has been a wild journey.
firstly, i don't have any experience with the american school system, so a lot of the worldbuilding was new for me. moreover, marching band is something from another universe (aka i'm a music lover but never got educated on the matter), so the fic constantly challenged me with new details-concepts-vocabulary. stepping outside of your comfort zone while reading? great idea! i don't think i've ever learned so much from a fic while enjoying it so much ^^
secondly, i am simply amazed by the sheer amount of effort you put into it. i decided to read in publishing order, so the non-chronological structure really impressed me. you're honestly a mastermind for being able to pull that off. also, having a song for every chapter with specifically picked out lyrics relevant to the content is so, so cool! the diversity of your playlists must be astonishing, i'm jealous :)
thirdly, the characters are just so real. i love all the canon references, i love the reactions that don't feel exaggerated or too mild. they are acting...exactly as i would expect them to in those circumstances and setting. i just accepted the leads' ways of thinking and reflecting so naturally
i also read the extra notes when they were available and just...how much thought was put in is mesmerising. for some reason i never thought of pulling directly from your life experiences when writing? but it actually makes a lot of sense and it brought me some ideas to try out so hehe ;)
as i am very smart and hadn't scrolled down on the order post, i didn't see until quite late in the reading that the end of perfect harmony is published as notes, so that was a surprise. i understand your reasons and the fact that you're not even in the fandom anymore, but you mentioned in some extra notes that it's ok to ask for them even if years have passed so...here i am three years later, complimenting B♭ :D
anyway, i finished it a couple of days ago, and even the notes are quite detailed. images of described shenanigans popped into my head just like that, and i really appreciate that you published them and i got to know what happened next!!
i actually wondered why the comments were disabled since i really wanted to comment on a few chapters bc your work deserves it so much...but yeah, that's what led me here so i guess congrats, you get my thoughts all nicely packed in one place ^_^
there are probably a lot of specific pieces, details, and ideas i liked about B♭, so this is merely a summary of the exciting things i remember!
i'll say goodbye using my favourite oneshot title:
thank you for the music ✩°。⋆⸜(ू。•ω•。)
not gonna lie i'm kind of obsessed w/the way you just glossed over the fact that you (probably) found me through my (anonymous) genshin fics, which means you jumped through the (minimum three) hoops required to get here, my (named) fandom blog, and then proceeded to gush abt a bnha series i did. like i would assume that if someone put in the effort to find my other fandom fics from my genshin stuff, then there must've been smth really worth looking into w/the genshin stuff lmao
for the sake of my mutuals' dashboards, since this ask is so long i'm just gonna chuck the whole (long) answer under a cut lol
anyway yes Bb!! the amt of effort n planning i put into that series was legitimately insane. i made school schedules for EVERY SINGLE BNHA CHARACTER and PUT IT ON A SPREADSHEET so that i could PLAN WHO COULD WALK WITH WHOM TO THEIR NEXT CLASSES n have PLOT-RELEVANT CONVERSATIONS LIKE THAT. i made little profiles for each of the characters, where i chose their favorite musical key (and why), how many years/instruments they play, and gave them each a funny little quote/catchphrase!!!
what possessed me to do this for ~20 different characters i honestly could not tell you
i definitely loved working on Bb a lot. i remember sitting down three years ago, practically to the day by this point, n hashing out the events of every single chapter to the epilogue, then reorganizing them into a proper timeline (i also kept a calendar in my notes with the chapters in order), all while occasionally looking out my bedroom window n thinking how wonderfully bright n warm n sunny the world was becoming again. bc really, 2019 was a very struggle year for me, n i didn't take the time to appreciate the sunlight then the way i have every year since. from there, i worked off that very strict outline, and most of the note-chapters that were eventually put up are primarily just copy-pasted straight from there.
i remember being on youtube a lot for music recs when working on perfect harmony too!! a bunch of them changed in the years btwn walking away from the series n actually publishing the notes (which were actually published mid-december last year, then backdated to 2020 a few days later ahaha), with a number of the tour arc alternate chapter title songs coming from songs that didn't even exist at the time of the fic's original planning. my mp3 collection grew a lot during the planning phases of Bb lmao.
i'm glad the characters felt so real!!! while no one character was based entirely off one single person i knew irl, one could say that writing Bb was a bit of a love letter to my time in high school band in some places, both the events i partook in n the people i knew there. it was a very "write what you know" type of fic.
anyway haha yeah the end of my bnha days was not fun, but i still loved Bb enough to hold onto the idea of returning to it Soon(tm), so much so that i put off publishing the chapter notes for almost two years. even then, that was a difficult decision for me to make bc a part of me wasn't ready to close that chapter of my life. i think ultimately it was the best decision to make though, since the fics are p heavily tied up in a much sadder part of my life that i'd just rather not return to.
the main reason comments were turned off of Bb (and indeed, the majority of my bnha fics) is most simply described as "resentment". it's different from how i feel abt my old snk fics (where i turned comments off of them so that i could pretend no one's really reading them anymore), which is more impersonal "oh my god i was so young back then and i give fewer than negative shits abt any mistakes i might've made on them or what anyone thinks of them" bc in bnha it's kind of hard to avoid the fact that i had a Name in the circles i typically traversed for a while. it wasn't that big of a name, but it's certainly more than nothing.
it's not really a feeling i like to dwell on, so i just archive-locked the responsible works n turned off comments for the most heinous culprits (mostly sparklers, but even tho i love Bb as a story, i do not love Bb as a publishing experience, if that makes sense), and for the most part, that keeps the resentment contained.
still, i'm genuinely happy that you enjoyed the au so much!!! i honestly love love love how goddamn SPECIFIC the premises are for this fic. the world was truly built with love, and the music puns for every title were always such a joy to come up with c':
thank you for the ask!!!! :D
#asks#kid-of-yesterday#long post#if you really did come here from my genshin fics last SEPTEMBER then boy howdy do i know Exactly which fic you came from#(if it was in september then it Must have been the saucy xv fic abt the sharp teeth bc pure identity didn't come out til oct)#i have a lot of Feelings abt my time in the bnha fandom that i just don't rlly like to talk abt publicly tbh#mostly bc (most of those Feelings are 'resentment' lmao) i try actively to not be a bitter person anymore#but also i hate admitting that people Knew me bc it feels like vanity or bragging#(bc if my name had ever been ''worth anything'' then why did Bb not garner the attention i'd hoped?)#i know that those thoughts aren't true n all but they're overall very complicated feelings#regarding how fandom at large treats fan creations and creators that ultimately led to my current decision to publish anonymously#ofc my feelings towards fandom n the fan-fan creator relationship have shifted w/time again n i do often consider just de-anoning#but it's... Tricky.... to say the least#haha sorry for unloading a little gloomily onto your lovely ask but i also think you deserve my honest thoughts#and not a saccharine falsehood / partial truth (oh hey that's the main thesis of rhythm lol)#ALSO to have an izch fic as your fave is exemplary taste when combined w/krbk#i am handing you a plaque that says 'certified good taste in ships'
Dino Watches Anime (Nov 28)
Obviously, I’m not going to list the ongoing anime that I’m still watching as that hasn’t changed much. I will put the ones that I recently completed though!
Recently Completed!
Youkoso Jitsuryoku Shijou Shugi no Kyoushitsu e
I was going to put this in chronological order until I realized that I just wanted to get this piece of crap out of the way. Seriously, I regret watching this show. I HATE how it’s the highest rated out of all of them! It’s almost an 8/10! I gave it a 4! Here’s why:
This anime started out okay. I liked the sound of its premise. I liked the idea of teenage psychology being pushed to its limits, framed not as life-or-death but as a matter of status. Because believe it or not, sometimes a person values their image and status more than their life. That plot was... kind of there? I don’t know. It was mostly boobs and ass. The jiggle physics never stop. They make sure to remind you every two seconds that every character in this anime has large assets and asses.
The characters are probably the most deplorable part of this show. They were so bad. Seriously, we just took the worst parts of every trope and threw them together! The “I don’t talk to anyone. I don’t have any friends. I’m EDGY and don’t belong here. I’m this close to selling myself to Orochimaru for power”, the “cardboard houseplant that’s so monotone that it hurts”, the “double-sided dipstick that will take out a person’s intestines and use them as a jump rope”, and the “arrogant older brother who is way more accomplished than his sister”. We also have more assorted bastards, but those are the main ones. The characters ruined everything. Their interactions were so coarse, forced, hard to watch, and everything is executed so poorly that it made me wonder whether people rated this for ulterior motives or not. Everyone here is an asshole.
Let’s look at the first three characters:
“cardboard houseplant that’s so monotone that it hurts” - Shoya Chiba isn’t even a bad voice actor. He does give me Hiroshi Kamiya vibes though (not a bad thing), but his voice acting in this show was hard to listen to because his expression didn’t change and neither did his voice. Seriously, over 12 episodes, he has that same expression. Someone threatened to harm him, and he’s still looking like a dead fish. I can’t describe how much worse it is to have a main character whose facial muscles don’t move. He has no personality.
“I don’t talk to anyone. I don’t have any friends. I’m EDGY and don’t belong here. I’m this close to selling myself to Orochimaru for power” - I like her design, but what else is going for her? How many times does she need to say, “I don’t need friends. I just want to move up in school.” Bitch, I get it. You can calm down. You keep doing things for other people but you say you don’t care? She arguably gets the most growth. Akari Kito voiced her and it was just like how any other person on earth would voice this character.
“double-sided dipstick that will take out a person’s intestines and use them as a jump rope” - She’s exactly what she sounds like. She’s in that gif. She’s sweet and nice until you catch her being not that. Yurika Kubo did a pretty alright job voicing her. Nothing really to say here besides I hated her with a burning passion.
Music was alright. Animation was... Lerche standard. Nothing special. It looks nice until you are flashed so many times that you can’t tell what this show is even about anymore.
This is one of the worst shows I’ve watched in a while. It wastes a perfectly good premise and voice cast.
Kekkaishi
2006 was a good year for anime, and this probably got swept over because Code Geass took the fall season by storm. But this anime was genuinely good. I wanted a good shonen/comedy with action and this filled that void and more. I even read some of the manga before realizing that I just don’t like reading manga that much.
I genuinely like the cast of characters and find them amusing. I also like how they incorporate a stay-at-home dad who wears an apron and no one judges him because it’s what they see as normal. We have a female character who’s not being sexualized every few seconds. Sunrise did cheat a little with other female characters though, because the manga made their proportions okay while the anime decided to make them look more like a Barbie than a human. The animation was pretty okay too. For 52 episodes, it did some pretty okay stuff, but with today’s technology, it’s probably not as “wow” as it was back in the day.
I’m just mad that they developed a character only to kill him a couple of episodes later. That’s sad.
The soundtrack was pretty standard, but I was impressed by the fact that I liked the voice acting. I originally wasn’t as much of a fan of Hiroyuki Yoshino’s works because I found his voice annoying, but when he finds the right character (like Yoshimori or Present Mic), he works really well. It’s unfortunate that a lot of the main cast aren’t as prolific as they once were, but I guess that’s life.
No one was hurt in the making of that gif.
I rated this a 9/10 because it was for pure enjoyment. I didn’t have this much fun watching an anime in a while. This is the anime that got me binge-watching again.
Nobunaga Concerto
This anime has a glaring problem. It’s not the story, it’s not the writing, it’s not the characters, and it’s not the music. It’s the art. Watch any clip and it will give you some Berserk flashbacks.
The writing was pretty good too. The story was genuinely interesting, but in the end, it didn’t feel like it did enough. It didn’t cover enough. The dialogue and the incorporation of modern culture with the historic parts were smart. Saburou was really likeable and oddly adaptive. The characters around him (the historic ones) are pretty cut and dry. The music was pretty good too! The art and lack of adaptation are the only things truly holding this show back.
Mamoru Miyano plays the main character and obviously makes him charming and funny, Yuki Kaji plays Nobunaga Oda, and Nana Mizuki plays Oda’s betrothed. I actually didn’t know anything about Oda’s tale prior to this anime so don’t think that’s required.
I rated it a 7/10.
*Another important note is that it gets suddenly racist in the last episode. A Black guy appears, and people scream that he’s a monkey like they’ve never seen a darker-skinned human before. It was honestly disappointing.
Ookami-san to Shichinin no Nakama-tachi
Okay, this anime surprised me because of how much I liked it. It wasn’t even anything special. They took the same JC Staff rom-com tropes and put them into another anime combined with some fairy tale lore. But this anime was so entertaining and charming with its cast that I genuinely didn’t hate any of the characters. There were a few moments that made me go, “okay, that’s a bit too much”, but a girl going around punching people with neko boxing gloves? That’s pretty cool. Ookami was a really funny character who I actually found a bit interesting which is weird for a story that’s supposed to be superficial and comedic. Ryoushi is practically a spitting image of my anxiety and personality but in a charming way? He has some cool moments. He’s almost a little like Zenitsu. Courageous when push comes to shove but he’s actually awake. Ringo was the innocent loli until she wasn’t because if you mess with her friend, she will poison you. Again, they made these references to regular rom-com anime and fairy tales that completely roll together nicely. JC Staff didn’t mess this one up, and as always, there’s a tsundere Rie Kugimiya role in there somewhere.
Because I enjoyed it so much, I gave it a 9/10.
Inari, Konkon, Koi Iroha
I literally finished this one an hour ago, read the last chapter of the manga, and went “what the heck?” Because... I enjoyed this, but I also didn’t? Bitter-sweetness at its best. Houko Kuwashima is a really underrated voice actress because she hasn’t taken that many big roles recently, but she has incredible range. The characters of this are incredibly plain, but I don’t mind that because they aren’t painful to watch, unlike the first anime I mentioned (seriously, I watched the last three shows on this list to wash that bad anime out of my brain). Everyone in this anime seems to be perfect in one way or another because they don’t really wish ill on anyone. Not gonna lie, characters like that aren’t for everyone because “everyone is scum at some point in their lives”. I definitely respect the need for balance. The story is pretty simple and plain and so is the art. The music was nice and pleasant. Basically, it’s a palette-cleanser of an anime after watching some bad anime. It’s about a developing middle school romance and this... “teenage” couple on the side. It’s about friendship! And discovering yourself, and yes, one character found out she was gay, and I was rooting for that character so hard only to find out that she didn’t get her conclusive ending. Everyone else gets some bullshit ending one way or another! This is published in the same publication as Bungou Stray Dogs, and I wouldn’t have been able to tell if I didn’t look it up.
I rated this one an 8/10 because I enjoyed it still despite the ending being a little idealistic, sad, and far-fetched (seriously, someone becomes a god and gets their existence erased).
#5yrsago Greenwald's "No Place to Hide": a compelling, vital narrative about official criminality
Cory Doctorow reviews Glenn Greenwald's long-awaited No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State. More than a summary of the Snowden leaks, it's a compelling narrative that puts the most explosive revelations about official criminality into vital context.
No Place has something for everyone. It opens like a spy-thriller as Greenwald takes us through his adventures establishing contact with Snowden and flying out to meet him -- thanks to the technological savvy and tireless efforts of Laura Poitras, and those opening chapters are real you-are-there nailbiters as we get the inside story on the play between Poitras and Greenwald, Snowden, the Guardian, Bart Gellman and the Washington Post.
Greenwald offers us some insight into Snowden's character, which has been something of a cipher until now, as the spy sought to keep the spotlight on the story instead of the person. This turns out to have been a very canny move, as it has made it difficult for NSA apologists to muddy the waters with personal smears about Snowden and his life. But the character Greenwald introduces us to isn't a lurking embarrassment -- rather, he's a quick-witted, well-spoken, technologically masterful idealist. Exactly the kind of person you'd hope he'd be, more or less: someone with principles and smarts, and the ability to articulate a coherent and ultimately unassailable argument about surveillance and privacy. The world Snowden wants isn't one that's totally free of spying: it's one of well-defined laws, backed by an effective set of checks and balances to ensure that spies are servants to democracy, and not the other way around. The spies have acted as if the law allows them to do just about anything to anyone. Snowden insists that if they want that law, they have to ask for it -- announce their intentions, get Congress on side, get a law passed and follow it. Making it up as you go along and lying to Congress and the public doesn't make freedom safe, because freedom starts with the state and its agents following their own rules.
From here, Greenwald shifts gears, diving into the substance of the leaks. There have been other leakers and whistleblowers before Snowden, but no story about leaks has stayed alive in the public's imagination and on the front page for as long as the Snowden files; in part that's thanks to a canny release strategy that has put out stories that follow a dramatic arc. Sometimes, the press will publish a leak just in time to reveal that the last round of NSA and government denials were lies. Sometimes, they'll be a you-ain't-seen-nothing-yet topper for the last round of stories. Whether deliberate or accidental, the order of publication has finally managed to give legs to the mass-spying story that's been around since Mark Klein's 2005 bombshell.
But for all that, the leaks haven't been coherent. Even if you follow them closely -- as I do -- it's sometimes hard to figure out what, exactly, we have learned about the NSA. In part, that's because so much of the NSA's "collect-it-all" strategy involves overlapping ways of getting the same data (often for the purposes of a plausibly deniable parallel construction) so you hear about a new leak and can't figure out how it differs from the last one.
No Place's middle act is a very welcome and well-executed framing of all the leaks to date (some new ones were revealed in the book), putting them in a logical, rather than dramatic or chronological, order. If you can't figure out what the deal is with NSA spying, this section will put you straight, with brief, clear, non-technical explanations that anyone can follow.
The final third is where Greenwald really puts himself back into the story -- specifically, he discusses how the establishment press reacted to his reporting of the story. He characterizes himself as a long-time thorn in the journalistic establishment's side, a gadfly who relentlessly picked at the press's cowardice and laziness. So when famous journalists started dismissing his work as mere "blogging" and called for him to be arrested for reporting on the Snowden story, he wasn't surprised.
But what could have been an unseemly score-settling rebuttal to his critics quickly becomes something more significant: a comprehensive critique of the press's financialization as media empires swelled to the size of defense contractors or oil companies. Once these companies became the establishment, and their star journalists likewise became millionaire plutocrats whose children went to the same private schools as the politicians they were meant to be holding to account, they became tame handmaidens to the state and its worst excesses.
The Klein NSA surveillance story broke in 2005 and quickly sank, having made a ripple not much larger than that of Janet Jackson's wardrobe malfunction or the business of Obama's birth certificate. For nearly a decade, the evidence of breathtaking, lawless, endless surveillance has mounted, without any real pushback from the press. There has been no urgency to this story, despite its obvious gravity, no banner headlines that read ONE YEAR IN, THE CRIMES GO ON. The story -- the government spies on your merest social interaction in a way that would freak you the fuck out if you thought about it for ten seconds -- has become wonkish and complicated, buried in silly arguments about whether "metadata collection" is spying, about the fine parsing of Keith Alexander's denials, and, always, in Soviet-style scaremongering about the terrorists lurking within.
Greenwald doesn't blame the press for creating this situation, but he does place responsibility for allowing it square in their laps. He may linger a little over the personal slights he's received at the hands of establishment journalists, but it's hard to fault him for wanting to point out that calling yourself a journalist and then asking to have another journalist put in prison for reporting on a massive criminal conspiracy undertaken at the highest level of government makes you a colossal asshole.
The book ends with a beautiful, barn-burning coda in which Greenwald sets out his case for a free society as being free from surveillance. It reads like the transcript of a particularly memorable speech -- an "I have a dream" speech; a "Blood, sweat, toil and tears" speech. It's the kind of speech I could have imagined a young senator named Barack Obama delivering in 2006, back when he had a colorable claim to being someone with a shred of respect for the Constitution and the rule of law. It's a speech I hope to hear Greenwald deliver himself someday.
No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State
https://boingboing.net/2014/05/28/greenwalds-no-place-to-hid.html
Rick Riordan's books just keep getting better (and more diverse!) Transcript
Part 1
Part 2
Transcript Below the Cut:
Rick Riordan. So, I’ve wanted to make a video about Rick Riordan for a while and with the new Trials of Apollo book just coming out, I’m really hyped about it. So I wanted to talk about why I like his books, or at least some of the things that impress me about them and keep me consistently excited about them.
Rick Riordan, if you don’t know, is the author of the wildly popular Percy Jackson series, and today I want to talk about his books, especially how his representation of minorities has improved over time.
So, a few quick things: First, I’m not going to talk about ALL of Rick Riordan’s work, especially his ancillary and tie in material like the Demi-God Files or all the cross over stories, mostly because I haven't read all of them.
And second: Spoilers. Just, big old spoilers for basically everything. I’m not going to go into big plot points much, but I will be talking about some of the characters in depth. I’m going to move through his oeuvre in roughly chronological order. So, you are warned.
Lastly, this video hinges on the premise that well done, well executed, fully fledged representations of minority characters in children and Young Adult media is good and important. I’m not really going to argue this point. It is the assumption we are beginning with. Diverse media with diverse characters is good and important.
And this point is, weirdly, kind of controversial. In fact, in the vast majority of children and young adult media most of the cast will be white, straight, cis, able bodied, neurotypical children or young adults with an unstated or vague religious affiliation. This last bit, about the unstated or vague religious affiliation is one we don’t often think about, but really, having a character with ANY stated religion is really rare. Most will, maybe, practice a sort of secularized Christmas maybe? But that’s about it.
The rationale you’ll hear for this is that this makes books more accessible and thus marketable. I would counter that if you really want your book to appeal to as many different people as possible, wouldn’t you want to have as many different types of characters as possible? But that comes with the assumption that outright bigots wouldn’t refuse a book because one of the secondary characters is in a wheelchair, I guess.
So, yeah. Most children's lit and young adult lit will be white, straight, cis, able bodied, neurotypical children or young adults with an unstated or vague religious affiliation, even if it gets absurdly, massively popular. Popular enough to take risks and work outside the box. I’m looking at you, JK Rowling. Looking at you.
This fact, this lack of diversity, does not bother some people. And we are not going to argue this point in this video. We are beginning with the assertion that this situation is not ideal, and that added quality, well written diversity is a positive. And we are going to look specifically at how Rick Riordan improves in this specific aspect of his writing over time.
--
Ok, so, Uncle Rick is a San Antonio, Texas native, and as someone who was also born and raised in central Texas, I love this fact. He went to MY alma mater, UT, and became a middle school teacher. We’re basically the same person.
Now, Percy Jackson isn’t actually his first book series. In the 90s he wrote a detective series set in San Antonio called Big Red Tequila. There’s like 7 books in this series and I have read none of them. I’m sure they’re great though. How could they not be with a name like that?
Our story really begins in 2004-ish. The story goes that he was telling his son Greek myths as bedtime stories, and when he ran out of myths (or at least child friendly myths I assume), he started to make one up. He invented a story about a boy named Percy, a son of Poseidon, who goes on an adventure to return Zeus's missing lightning bolts. His son told him that he should turn it into a book, his dad had published books before after all. So, Rick did just that. He then took his rough draft to his middle school students and used their feedback to revise.
He then sold this book to Miramax Books for enough money to retire from teaching and focus on writing. God damn. Rick was living the dream here. Life goals.
So, yeah. If you’ve never read the first Percy Jackson books they are...fine. They’re ok, good even. Definitely like, children’s books. But if you like bad puns and greek myths they are fun. I read all 5 in like...one weekend when I was in high school. I personally think the books really pick up in the third one: Titan’s Curse, mostly because we meet my favorite character, Nico. There’s some good world building in that book, and it really feels like Rick had figured out how he wanted to end the series by that point, so the plot feels more focused. Maybe that’s just me.
So, remember how I said that Children’s lit will tend to be filled with white, straight, cis, able bodied, neurotypical children with an unstated or vague religious affiliation? Well, Percy Jackson and all his friends are...mostly white, straight, cis, able-bodied children with an unstated or vague religious affiliation who have ADHD and Dyslexia.
Because Rick Riordan’s son has ADHD and Dyslexia, and Rick wanted these heroes to be like him. So, yeah. The diversity isn’t AMAZING here, but the intent to provide representation for minority children was present from the very beginning. And ADHD and Dyslexia are, like, super powers here: proof the children are demi-gods, side effects of their brains and bodies being ready for amazing quests. And there’s this great diversity in the characters with ADHD and Dyslexia and how it impacts them. Annabeth is depicted as super smart and studious. You have Percy who has always struggled in school. And so on.
Now, how you feel about this representation of ADHD and Dyslexia will vary. Some people really like it, others think it isn’t very well done or plays into some iffy tropes. I think we can safely say that the intent was very positive, but your mileage may vary on the execution.
There’s also a movie adaptation of the first 2 books which are…..bad. Logan Lerman was 18 when he played Percy- who should be like...12? And they made Hades the bad guy? And like..Persephone? Is? In? The? Underworld? In? Summer? Which….ugh. Like, they made Grover black, which was a cool choice, an attempt to address the lack of racial diversity it seems. But still, these movies are not good….maybe if you haven’t read the books, you’ll like them. I don’t like them. I didn’t even watch the second one honestly.
Alright so we will look at the rest of Rick Riordan’s books in part 2 of this video. I wanted to cut it here to keep it from getting super super long. So I will see you guys over on Part 2 to finish up
CUT --
Welcome back to my look at all of Rick Riordan’s books and how they have improved over time. We are going to jump right in where we left off at the end of Percy Jackson and the Olympians.
Ok, so after Percy Jackson, Rick Riordan started work on the Kane Chronicles. If you haven’t read the Kane Chronicles, I don’t blame you. They are kind of the forgotten half-siblings of the Percy Jackson universe, but you should read them. They are really good, and they feel like a really experimental time for Rick. Not only is this the first time we see him play with a split First Person narrator, where different chapters are from different character’s Point of View, but he also really tackles race in these books.
Carter and Sadie are biracial, and deal with all kinds of race issues- Sadie being white passing and Carter not, the books looks at how that impacts them and their experiences with others, their family, and their heritage. Plus all the Egyptian shit is really cool.
But even if you skipped this book series (seriously, go back and read them), you can see this evolution in Rick’s writing in his sequel to Percy Jackson- Heroes of Olympus. These books actually came out at the same time as the Kane Chronicles, with the first Kane Chronicles book coming out in May 2010, then the first Heroes of Olympus book coming out in October 2010, and back and forth. And it’s clear that his new skills from The Red Pyramid were influential on Heroes of Olympus.
Not only do we see the return of the Shifting narrator, now a Third Person Limited Point of View that follows different characters in different chapters, but where the first series was overwhelmingly white, these books seem to make a real effort to avoid that. The first two books- The Lost Hero and Son of Neptune- take place more or less concurrently and independently of each other. It’s not until the 3rd book that all the new characters meet up. But in those first 2 books, we get 5 new MAIN characters- Jason, a white boy; Piper, a Native American girl, specifically Cherokee if I remember; Leo, a Mexican American boy; Frank, a Chinese American boy; and Hazel, an African American girl. We also get Reyna, who isn’t a main character at first, but I would argue becomes one in House of Hades, and she is Puerto Rican.
And ALL of these characters and their racial identities are handled really well. Like, they are fully fleshed out and genuine characters. This doesn’t feel like shallow, lazy tokenism. Their heritage plays a part in who they are, but is not the ONLY thing about them. Piper, for example, has a father who refuses to play Native American roles in movies because he wants to avoid being stereotyped or type cast and Piper carries that struggle to connect with her heritage with her. Hazel’s experience as a black girl, and a black girl from the 1930s at that, impacted how she was treated growing up and makes up a big part of her backstory. But they aren’t solely defined by these experiences like shallow stereotypes.
It’s well done, is what I’m saying.
-
So at this point, we could say that while Rick had a good grasp on racial diversity and neuro-divergence representation. Most of his characters were still straight, cis, able bodied, children with an unstated or vague religious affiliation. (Seriously, did none of these kids have like..faith in a religion before?)
Now, here is a true, fun fact. On June 30th, 2013, 3 months before the release of House of Hades, I went on Tumblr and wrote an Open Letter to Rick Riordan about how he should really include LGBT+ characters in his books. He had written, like, 11 children’s books at this point, and despite my headcanons, every character had been portrayed as assumed straight and cis. So I wrote a letter about how much I liked his books, but really, could we have some LGBT+ characters? This IS Greek mythology after all. I don’t think he ever saw this letter, despite me tweeting it at him.
Among other things in this letter, I go on to list several possibilities for LGBT+ representation in his books, including: quote: maybe Nico feels an unrequited crush on Percy. A headcanon I had since book 4, Battle of the Labyrinth.
And so, I want my moment, just to say: I. Was. Right. And I told you so.
House of Hades came out in 2013, and well, so did Nico. My favorite character came out of the closet, or, well, was outed and it was heart wrenching. The fandom kind of lost its shit over this. Anyone who had shipped Percy and Nico was throwing a party, homophobes were throwing a fit, it was very emotional. I was gloating a lot.
And let’s be clear- Nico’s sexuality in House of Hades is not...handled the best. It’s better than nothing certainly, and it’s better than Word of God reveals post publication. Rowling. But, by itself, it’s...well...single sad cis gay boy pines over unrequited straight crush hits some stereotypes. None of this is malicious, but it by itself is only so-so representation.
But Rick wasn’t done there, because we still had one more book- Blood of Olympus. Nico gets a super cute boyfriend in the form of Will Solace, and gets some closure with Percy. Now, your mileage may vary with that particular scene. Nico smugly telling Percy he “isn't his type” feels, well, a little out of character and, I dunno, corny. But it’s nice to see Nico get this happy relationship with Will, and I’ll forgive Rick for any stumbles in the exact execution to avert that sad-single-gay trope.
- -
Ok. So, now at last, we get to the 2 series that are still in publication: Gods of Asgard and Trials of Apollo. These two series are publishing concurrently, and because the Gods of Asgard started publishing first, let’s talk about it first.
I love Gods of Asgard. Truly. These might be my favorite of Riordan’s books. Part of that might just be that after 10 Greek and Roman books, a focus on Norse is refreshing, but I just love it. I love Magnus, I love the Annabeth cameos, I love Sam. Ok, so, the first Gods of Asgard book: Sword of Summer hits two important notes when it comes to minority representation.
Hearth is deaf and mute and uses sign language. This is the first time we’ve had a main character with a clear disability other than ADHD and Dyslexia. Which is really cool. And the consistent use of sign language throughout is neat.
Our second is Sam. Who is Muslim and wears a hijab. Like, truly, how many stories do you know about a hijabi Muslim valkyrie girl kicking all the ass?
Book 2, Hammer of Thor…well. Remember when I said Nico is my favorite character? Nico might have to fight Alex Fierro for my heart. Alex Fierro. A trans, gender-fluid child of Loki. I love Alex. Some people cried SJW Gay-Agenda bullshit over Alex like, being trans and gender fluid and, actually mentioning it more than once, but those people are unhappy assholes and I ignore them. I like Alex. Alex is an interesting, complicated character and I can’t wait for the next book.
Also, am I the only one who thinks Magnus and Alex are being set up for some romance? Just wishful thinking? Feels like romance. I ship it. I’ve been right before.
So ok, so we now have racial diversity, representation of multiple kinds of disabilities, a gay character, a gender-fluid trans character, and a Muslim character. -
Let’s talk about Trials of Apollo.
These books are really fun. If for no other reason than Apollo might actually be the most loud, entertaining narrator we’ve had yet. He’s funny, he’s an asshole. He’s also very loudly and clearly bisexual. Which, duh. How else would you even write Apollo if you have any understanding of Greek mythology? It’s mentioned a couple of times in the first book, and then even more in the second, where his prior relationships have plot relevance.
The second book also introduced us to Jo and Emmie, a biracial lesbian couple who used to be hunters of Artemis who are now raising a daughter together.
And this is kind of the joy of really GOOD diverse representation. Like, Apollo has faced hardship because of his relationships, with both men and women, but his sexuality itself isn’t a problem with him. Alex is very secure with who they are, but has clearly faced a lot of transphobia. Nico was very closeted and seemed to have a lot of pain tied up in his sexuality and is only just now healing from that with Will. Jo and Emmie clearly faced issues with their relationship, having to leave the hunters, but have built a new life together. We get this great array of experiences, rather than just one prevailing narrative.
I love it, and we’ve come so far from that first bedtime story about a boy trying to find some stolen lightning bolts. - -
So, what’s in the future for Rick Riordan? Well, he hasn’t announced any new book series for after Gods of Asgard and Trials of Apollo wrap up. However, we do know that he is starting his own Publishing Imprint with Disney Hyperion. Rick will only work as a curator it seems, focusing on having minority authors write fantasy/mythology based books from their native cultures. There are 3 books signed right now:
Jennifer Cervantes’s Storm Runner, which is about a boy having to save the world from a Mayan prophecy.
Roshani Chokshi’s Aru Shah and the End of Time, about a 12-year-old Indian-American girl who unwittingly frees a demon intent on awakening the God of Destruction
And Yoon Ha Lee’s Dragon Pearl, about a teenage fox spirit on a space colony.
All of which sound AMAZING and I will preorder as soon as Amazon lets me.
Look, Rick Riordan is not a perfect person or a perfect writer. Some people take issue with him because he has said some rather insulting things about the small number of people who still worship the Greek gods. That he took these stories and was dismissive of the people who still value them religiously. Now, the majority of those comments seem to come from blog posts back in 2006, and he did have a brief apology for offending Hellenists on his Facebook back in February, and one would hope that this interest in letting minority authors tell stories from their own culture in the future is evidence that he has learned and grown since then.
And not everyone will like Rick Riordan’s books no matter what. They are for kids. They are corny and have bad puns and sometimes meander or forget about important characters for long stretches of time. Sometimes the ideas he has are better than the execution. It happens.
But when I look at his books as a whole, I see a middle school teacher from San Antonio who started with a fun idea and never stopped growing as an author with a dedication to minority representation in his novels. And I certainly appreciate that, and look forward to more of his work for as long as he decides to produce it.
Digital Immigrants Helping to Build a Digital Nation
A rather silly fixation has developed in our 'digital' world today: that there is some kind of divide between those who are 'digital natives,' and those who are not.
Worse, some feel that there is no higher compliment to pay someone than to describe them as a digital native... and that it's perfectly acceptable to dismiss 'non-natives' as somehow outré.
With the possible exception of Nicolas Sarkozy, that kind of language is no longer acceptable in the real world of building actual nations. And a good thing, too.
Immigrant Nation
Little Italy...
Rocco Rossi, a former Toronto mayoral candidate and former President of the Liberal Party of Canada, recently told a touching story about his uncle's difficult beginnings as an 18-year-old Italian immigrant to Canada in 1951, landing at Pier 21 in Halifax, Nova Scotia. Later, after the uncle had broken this fresh ground, others from his family and the poor farming community he had emigrated from made the journey across. Today, 350 people from that community call Greater Toronto home.
What struck me about the story wasn't how different we all are, but rather the similarities. Outside of First Nations and (now) a fairly small portion of people descended from those who arrived early, in the 16th and 17th centuries, the majority of Canadians are first-, second-, and third-generation immigrants. And many have parents or grandparents who can relate stories of what life was like trying to get established 'off the boat.'
Yet we frequently deny these similarities. Established immigrant groups often aren't as welcoming as they could be of the newer ones. The newest arrivals can't relate to the staid culture of the established group. And the respected, but still relatively new, groups, like the Italians immortalized by novelist Nino Ricci, want to convey a Goldilocksian quality of being not too fresh and not too stale, not enjoying the full advantages of the establishment but no longer destitute.
Everyone is jockeying for position, lobbing subtle putdowns at groups that arrived on a different boat. When in truth, we're all in the same boat.
The Digital Nation
That got me thinking about the "Digital Nation."
In the Digital Nation, the chronology is reversed: the newer generations are the 'natives,' the older generations supposedly the uncomfortable, unpredictable 'immigrants.'
The Digital Nation...
In this version of reality, there is a shocking amount of posturing around who is qualified to work and prosper in the industry. If you didn't just arrive, perhaps you're too set in your ways to really 'get it.' (As Dilbert once aptly put it, the older tech worker can be replaced by the younger tech worker, who can in turn be replaced by a 'fetus.')
Or on the other hand, if you arrive too late, maybe all the great get-rich-quick opportunities (Microsoft millionaires, Google employee #76, a great gig at Facebook 3 years ago) will be gone!
All of it is nonsense…
The Digital Generation Gap
At best, this division serves to remind us of how younger people think, or how to drop old baggage from our business practices in order to appreciate scale, networks, and the speed and power of the information revolution.
Crossing the divide...
At its worst, it assigns excessive credit to anyone who merely shows comfort using their new tablet and who can string together a few buzzwords from Silicon Valley startup culture.
And this underestimates just how strong the digital divide still is, even among young, educated people under 25. There are those who use Facebook and slang, and then there are those who can master sophisticated programming languages (and follow formal academic research, at least for a while) to solve hard problems. The vast majority of 'digital natives' are passive consumers of the creations and improvements led by a driven, accomplished few.
It's unsurprising to this Gen-X'er that it's often baby boomers who seem obsessed with digital natives, and even want to be seen as digital natives themselves. These are the ones who write books on how to understand those who are born digital, or tweet constantly about this app or that.
How do they ever get anything done? Some of it's downright weird, when you look closely.
Faking It Till You Make It
Most accomplished people can take advantage of digital technology and digital culture, regardless of whether they are in the right age bracket or able to code directly in the latest languages. And they do so in interesting ways.
Most notably, if they fake-it-till-they-make-it hard enough, they're left with legions of followers who casually throw around mentions of their platforms as a way of faking-it-even-harder. Perhaps a few examples will help clarify.
Take Matt Drudge, publisher of a page of web links I don't quite know what to make of, called the Drudge Report:
Drudge has commonly been lauded as a pioneer of the fast-moving digital politics press. Judging only by his age, he could have had a Commodore PET in college, and he can also do a mean pantomime of a rotary dial phone. Drudge's dad is a digital pioneer, having started an online research shop called refdesk. Drudge is an unlikely hero, given that the design of his site was in fact ripped off from his dad's.
But then again, Techmeme's aggregation style was ripped off from Drudge. It seems our heroes get unlikelier and unlikelier with each passing generation.
And what about Arianna Huffington? The queen of the vaguely progressive soft-scraper empire known as the Huffington Post is a well-off Baby Boomer. I'm sure she goes to great effort to look casual when she uses her smartphone (without reading glasses).
Then there's Nick Denton, founder of Gawker Media. He's a wonderfully creative entrepreneur and an unfailingly accurate commentator on the state of our industry. Denton is an authority on breathless, shameless, bawdy blogging.
Today's chatter is tomorrow's news
But did you know that Denton left his job as a financial reporter to launch what was basically a kind of enterprise search engine technology? A news aggregator called Moreover, one of a pack of early services intended to turn the archaic practice of 'news clipping services' on its head. Denton acts all casual, but it takes a lot of deep understanding of the information revolution to build a successful startup that changes how business works.
New York Mayor Michael Bloomberg started up something not so dissimilar to Denton's business; unlike the mildly wealthy Denton, Bloomberg got very rich off it. Bloomberg is still a global information empire, majority-owned by Michael Bloomberg, despite being pre-Web in its genesis. It was started in 1981, around the same time Microsoft created MS-DOS.
A fellow named Alan Meckler was part of a team that started up conferences with names like Internet World back in the early 1990s. He later went on to own companies with names like Internet.com. Before all that, he was involved in 'information revolutions' on media like CD-ROMs. He is around 70 years old.
Google is stocked with young, smart coders. They're also filled with experienced Ph.D.'s, guided by many Silicon Valley elders, and have often had their butts kicked by a seasoned business advisor named Bill Campbell, who not only holds a Master's degree, but was coach of the Columbia University football team in the 1970s, VP of Marketing at Apple, and more recently, CEO and Chairman of Intuit. For all of these reasons, Googlers dubbed him 'Coach.'
Sheryl Sandberg, also a key figure in the early days of Google's operations and the ethical conscience of its advertising program, is currently COO of Facebook. She came to Google with a background in consulting at McKinsey, and as a high-level official in the Treasury Department under the Clinton Administration. Her role at Facebook has been so critical to the company's survival and profitability that her total (mainly stock-based) compensation (so far) is valued at greater than $1B.
Turning to non-media companies.
Amazon.com, led by the irrepressible Jeff Bezos, is today an $83 billion company. Bezos started Amazon in 1994. He is a true digital pioneer and enthusiast. Yet he is not a 'digital native' by today's definition, nor was he seen as an especially accomplished techie.
Like Steve Jobs, Bezos learned a great deal on the job, though he knew enough in 1994 to write job descriptions for seasoned coders. He came down from Wall Street with a vision and executed it with an outlandish degree of attention to detail. Amazon.com is so influential that its ease of use became a shadow under which all ecommerce vendors lived for years.
Groupon is a 'laughable' creation by a cabal of tech-agnostic investors who happen to have made quite a dent in the market. It's a digital company, kind of. Critics of the business seem to feel that by slamming the company for 'not being really digital,' they can somehow talk down its valuation. Good luck with that! Current valuation: $11.4 billion. I'm not a fan of Groupon myself, but it's very real.
How many other examples would you like?
In cloud computing and SaaS, middle-aged to older conglomerates like IBM, Xerox, Oracle, and so on make up a substantial part of the value of the US stock exchanges. Even Salesforce.com, the 'startup,' is too mature to be exciting to the cool kids. But it's worth $20.7 billion. Its 47-year-old founder, Marc Benioff, cut his teeth in business at Apple and Oracle after building a software company in high school, selling games for machines like the Atari.
Digital Elders
'It's the wood that must fear your hand, not vice versa.' -Pai Mei
What's my verdict? Well, I don't mean to deny the obvious: that it can be a great advantage to be born into the digital revolution. Many new and important services will be started up by those who come to the table with a lot of the right prerequisites in terms of knowledge and disposition.
But 'digital immigrants' like Jeff Bezos, Michael Bloomberg, Marc Benioff, and Arianna Huffington bring something special to the table as well:
They build ventures and solve problems self-consciously rather than intuitively. Perhaps it's 'you say tomato, and I say tomahto,' but those who can bring conscious, organized effort into a field often reach heights that the mere virtuoso cannot.
More profoundly, they understand what's truly powerful and game-changing about a trend or technology, and can evangelize that change to those who aren't sure.
They may be willing to work harder, be more stressed, and stay the course seemingly forever on a long march to boring greatness.
And one more thing. Because they're not caught up in the 'social scene' of 'being digital,' the 'digital immigrants' are apt to tell the truth.
Alan Meckler, who somehow does not have a Wikipedia entry (though his business does), posted simply and directly: 'Wikipedia is a Farce' and 'Wikipedia is Dishonest.' And what did he have to lose? Lunch with Jimmy Wales?
Digital Nation Building
There is going to be much value and an extraordinary amount of fresh cultural output rising from the Digital Nation in the coming years. But the Digital Nation needs - desperately needs - the balance, formal academic backgrounds, structure, greed, worries, planning experience, bridging skills, and irreverence of its 'digital immigrants.' ('Immigrants' who represent, paradoxically, the older generations of technologists and economic go-getters from worlds far, far away - namely, previous decades like the 1990s and 1980s, and in the case of IBM, long before that.)
Were you aware that good ol' Microsoft (MSFT, at $273B) is still valued more than Google to this day? Crazy, right? Sitting comfortably in the list of top 10 companies by market capitalization in the Standard & Poor's 500: IBM, at $238B. Google is holding its own at $203B. Facebook, after it goes public, is expected to be valued at $100B. We'll see if they have what it takes to keep it up. It's really too early to say.
Or to sum it up most briefly: Zuckerberg, Schmuckerberg.
#business#business owner#marketing#marketing agency#search engine marketing#social media#social media management#social media news#social media strategy
This Much I Know: Byron Reese on Conscious Computers and the Future of Humanity
Recently GigaOm publisher and CEO, Byron Reese, sat down for a chat with Seedcamp’s Carlos Espinal on their podcast ‘This Much I Know.’ It’s an illuminating 80-minute conversation about the future of technology, the future of humanity, Star Trek, and much, much more.
You can listen to the podcast at Seedcamp or Soundcloud, or read the full transcript here.
Carlos Espinal: Hi everyone, welcome to ‘This Much I Know,’ the Seedcamp podcast with me, your host Carlos Espinal bringing you the inside story from founders, investors, and leading tech voices. Tune in to hear from the people who built businesses and products scaled globally, failed fantastically, and learned massively. Welcome everyone! On today’s podcast we have Byron Reese, the author of a new book called The Fourth Age: Smart Robots, Conscious Computers and the Future of Humanity. Not only is Byron an author, he’s also the CEO of publisher GigaOm, and he’s also been a founder of several high-tech companies, but I won’t steal his thunder by saying every great thing he’s done. I want to hear from the man himself. So welcome, Byron.
Byron Reese: Thank you so much for having me. I’m so glad to be here.
Excellent. Well, I think I mentioned this before: one of the key things that we like to do in this podcast is get to the origins of the person; in this case, the origins of the author. Where did you start your career and what did you study in college?
I grew up on a farm in east Texas, a small farm. And when I left high school I went to Rice University, which is in Houston. And I studied Economics and Business, a pretty standard general thing to study. When I graduated, I realized that it seemed to me that like every generation had… something that was ‘it’ at that time, the Zeitgeist of that time, and I knew I wanted to get into technology. I’d always been a tinkerer, I built my first computer, blah, blah, blah, all of that normal kind of nerdy stuff that I did.
But I knew I wanted to get into technology. So, I ended up moving out to the Bay Area and that was in the early 90s, and I worked for a technology company and that one was successful, and we sold it and it was good. And I worked for another technology company, got an idea and spun out a company and raised the financing for that. And we sold that company. And then I started another one and after 7 hard years, we sold that one to a company and it went public and so forth. So, from my mother’s perspective, I can’t seem to hold a job; but from another view, it’s kind of like the thing of our time instead. We’re in this industry that changes so rapidly. There are more opportunities that always come along and I find that whole feeling intoxicating.
That’s great. That’s a very illustrious career with that many companies having been built and sold. And now you’re running GigaOm. Do you want to share a little bit for people who may not be as familiar with GigaOm and what it is and what you do?
Certainly. And I hasten to add that I’ve been fortunate that I’ve never had a failure in any of my companies, but they’ve always had harder times. They’ve always had these great periods of like, ‘Boy, I don’t know how we’re going to pull this through,’ and they always end up [okay]. I think tenacity is a great trait in the startup world, because they’re all very hard. And I don’t feel like I figured it all out or anything. Every one is a struggle.
GigaOm is a technology research company. So, if you’re familiar with companies like Forrester or Gartner or those kinds of companies, what we are is a company that tries to help enterprises, help businesses deal with all of the rapidly changing technology that happens. So, you can imagine if you’re a CIO of a large company and there are so many technologies, and it all moves so quickly and how does anybody keep up with all of that? And so, what we have are a bunch of analysts who are each subject matter experts in some area, and we produce reports that try to orient somebody in this world we’re in, and say ‘These kinds of solutions work here, and these work there’ and so forth.
And that’s GigaOm’s mission. It’s a big, big challenge, because you can never rest. Big new companies I find almost every day that I’ve never even heard of and I think, ‘How did I miss this?’ and you have to dive into that, and so it’s a relentless, nonstop effort to stay current on these technologies.
On that note, one of the things that describes you on your LinkedIn page is the word ‘futurist.’ Do you want to walk us through what that means in the context of a label and how does the futurist really look at industries and how they change?
Well, it’s a lower case ‘f’ futurist, so anybody who seriously thinks about how the future might unfold, is to one degree or another, a futurist. I think what makes it into a discipline is to try to understand how change itself happens, how does technology drive changes and to do that, you almost by definition, have to be a historian as well. And so, I think to be a futurist is to be deliberate and reflective on how it is that we came from where we were, in savagery and low tech and all of that, to this world we are in today and can you in fact look forward.
The interesting thing about the future is it always progresses very neatly and linearly until it doesn’t, until something comes along so profound that it changes it. And that’s why you hear all of these things about one prediction in the 19th Century was that, by some year in the future, London would be unnavigable because of all the horse manure or the number of horses that would be needed to support the population, and that maybe would have happened, except you had the car, and like that. So, everything’s a straight-line, until one day it isn’t. And I think the challenge of the futurist is to figure out ‘When does it [move in] a line and when is it a hockey stick?’
So, on that definition of line versus hockey stick, your background as having been CEO of various companies, a couple of which were media centric, what is it that drew you to artificial intelligence specifically to futurize on?
Well, that is a fantastic question. Artificial intelligence is first of all, a technology that people widely differ on its impact, and that’s usually like a marker that something may be going on there. There are people who think it’s just oversold hype. It’s just data mining, big data renamed. It’s just the tool for raising money better. Then there are people who say this is going to be the end of humanity, as we know it. And philosophically the idea that a machine can think, maybe, is a fantastically interesting one, because we know that when you can teach a machine to do something, you can usually double and double and double and double and double its ability to do that over time. And if you could ever get it to reason, and then it could double and double and double and double, well that could potentially be very interesting.
Computers are able to evolve kind of at the speed of light; they get better, while humans only evolve at the speed of life. It takes generations. And so, if a machine can think, a question famously posed by Alan Turing, if a machine could think, then that could potentially be a game changer. Likewise, I have a similar fascination for robots, because a robot is a machine that can act, that can move and can interact physically with the world. And I got to thinking about what would happen: what is a human in a world where machines can think better and act better? What are we? What is uniquely human at that point?
And so, when you start asking those kinds of questions about a technology, that gets very interesting. You can take something like air conditioning and you can say, wow, air conditioning. Think of the impact that had. It meant that in the evenings people wouldn’t… in warm areas, people don’t go out on their front porch anymore. They close the house up and air condition it, and therefore they have less interaction with their neighbors. And you can take some technology as simple as that and say that had all these ripples throughout the world.
The discovery of the New World effectively ended the Italian Renaissance, because it changed the focus of Europe to a whole different direction. So, when those sorts of things have had those kinds of ripples through history, you can only imagine: what if the machine could think? That's a big deal. Twenty-five years ago, we made the first browser, the Mosaic browser, and if you had an enormous amount of foresight and somebody said to you, in 25 years, 2 billion people are going to be using this, what do you think's going to happen?
If you had an enormous amount of foresight, you might’ve said, well, the Yellow Pages are going to have it rough and the newspapers are, and travel agents are, and stock brokers are going to have a hard time, and you would have been right about everything, but nobody would have guessed there would be Google, or eBay, or Etsy, or Airbnb, or Amazon, or $25 trillion worth of a million new companies. And all that was, was computers being able to talk to each other. Imagine if they could think. That is a big question.
You're right, and I think that there is… I was joking and I said 'Tinder' in the background just because that's a social transformation. Not even a utility, but rather a shift in the social expectation of where certain things happen that was brought about by it. So, you're right… and we're going to get into some of those [new platforms] as we review your book. In order to do that, let's go through the table of contents. So, for those of you who don't have the book yet, because hopefully you will after this chat, the book is broken up into five parts, and in some ways these parts are arguably chronological in their stage of development.
The first one I would label as the historical, and it's broken out into the four ages that we've had as humans: the first age being language and fire, the second one being agriculture and cities, the third one being writing and wheels, and the fourth one being the one that we're currently in, which is robots and AI. And we're left with three questions, which are: what is the composition of the universe, what are we, and what is your 'self'? And those are big, deep philosophical ones that will manifest themselves in the book a little bit later as we get into consciousness.
Part two of the book is about narrow AI and robots. Arguably I would say this is where we are today, and Seedcamp as an investor in AI companies has broadly invested in narrow AI through different companies. And this is, I think, the cutting edge of AI as far as we understand it. Part three in the book covers artificial general intelligence, which is everything we've always wanted to see and which science fiction represents quite well, everything from that movie AI, with the little robot boy, to Bicentennial Man with Robin Williams, and sort of the ethical implications of that.
Then part four of the book is computer consciousness, which is a huge debate, because as Byron articulates in the book, there's a whole debate on what consciousness is, and there's a distinction between a monist and a dualist and how they experience consciousness and how they define it. And hopefully Byron will walk us through that in more detail. And lastly, the road from here is the future, as far as we can see it, in the futurist portion of the book. I mean, parts three, four and five are all futurist portions of the book, but this one is where I think, Byron, you go to the 'nth' degree possible, with a few exceptions. So maybe we can kick off with your commentary on why you have broken up the book into these five parts.
Well you’re right that they’re chronological, and you may have noticed each one opens with what you could call a parable, and the parables themselves are chronological as well. The first one is about Prometheus and it’s about technology, and about how the technology changed and all the rest. And like you said, that’s where you want to kind of lay the groundwork of the last 100,000 years and that’s why it’s named something like ‘the road to here,’ it’s like how we got to where we are today.
And then I think there are three big questions that everywhere I go I hear one variant of them or another. The first one is around narrow AI and like you said, it's a real technology that's going to impact us: what's it going to do to jobs, what's it going to do in warfare, what will it do to income? All of these things we are certainly going to deal with. And then the term 'artificial intelligence' is unfortunate, because it can mean many different things. It can be narrow AI, it can be a Nest thermostat that can adjust the temperature, but it can also be Commander Data of Star Trek. It can be C-3PO out of Star Wars. It can be something as versatile as a human, and unfortunately those two things share the same name, but they're different technologies, so it has to kind of be drawn out on its own, to ask, "Is this very different thing that shares the same name likely? Possible? What are its implications?" and whatnot.
Interestingly, the people who believe we're going to build [an AGI] vary immensely on when: some say as soon as five years, and some say as far away as five hundred. And that's very telling, that these people have such wide viewpoints on when we'll get it. And then for people who believe we're going to build one, the question becomes, 'Well, is it alive? Can it feel pain? Does it experience the world? And therefore, on that basis, does it have rights?' And if it does, does that mean you can no longer order it to plunge your toilet when it gets stopped up, because all you've made is a sentient being that you can control, and is that possible?
And why is it that we don’t even know this? The only real thing any of us know is our own consciousness and we don’t even know where that comes about. And then finally the book starts 100,000 years ago. I wanted to look 100,000 years out or something like that. I wanted to start thinking about, no matter how these other issues shake out, what is the long trajectory of the human race? Like how did we get here and what does that tell us about where we’re going? Is human history a story of things getting better or things getting worse, and how do they get better or worse and all of the rest. So that was a structure that I made for the book before I wrote a single word.
Yeah, and it makes sense. Maybe for the sake of not stealing the thunder of those that want to read it, we’ll skip a few of those, but before we go straight into questions about the book itself, maybe you can explain who you want this book to be read by. Who is the customer?
There are two customers for the book. The first is people who are in the orbit of technology one way or the other, like it’s their job, or their day to day, and these questions are things they deal with and think about constantly. The value of the book, the value prop of the book is that it never actually tells you what I think on any of these issues. Now, let me clarify that ever so slightly because the book isn’t just another guy with another opinion telling you what I think is going to happen. That isn’t what I was writing it for at all.
What I was really intrigued by is how people have so many different views on what's going to happen. Like with the jobs question, which I'm sure we'll come to: are we going to have universal unemployment, or are we going to have too few humans? These are very different outcomes, all held by very technically minded, informed people. So, what I've written, or tried to write, is a guidebook that says: I will help you get to the bottom of all the assumptions underlying these opinions, and do so in a way that you can take your own values, your own beliefs, and project them onto these issues and have a lot of clarity. So, it's a book about how to get organized and understand why the debate exists about these things.
And then the second group are people who, they just see headlines every now and then where Elon Musk says, “Hey, I hope we’re not just the boot loaders for the AI, but it seems to be the case,” or “There’s very little chance we’re going to survive this.” And Stephen Hawking would say, “This may be the last invention we’re permitted to make.” Bill Gates says he’s worried about AI as well. And the people who see these headlines, they’re bound to think, “Wow, if Bill Gates and Elon Musk and Stephen Hawking are worried about this, then I guess I should be worried as well.” Just on the basis of that, there’s a lot of fear and angst about these technologies.
The book actually isn't about technology. It's about what you believe and what that means for your beliefs about technology. And so, I think after reading the book, you may still be afraid of AI, you may not, but you will be able to say, 'I know why Elon Musk, or whoever, thinks what they think. It isn't that they know something I don't know, they don't have some special knowledge I don't have, it's that they believe something. They believe something very specific about what people are, what the brain is. They have a certain view of the world as completely mechanistic and all these other things.' You may agree with them, you may not, but I tried to get at all of the assumptions that live underneath those headlines you see. And so why would Stephen Hawking say that, why would he? Well, there are certain assumptions that you would have to believe to come to that same conclusion.
Do you believe that's the main reason that very intelligent people disagree with respect to how optimistic they are about what artificial intelligence will do? You mentioned Elon Musk, who is pretty pessimistic about what AI might do, whereas there are others like Mark Zuckerberg from Facebook, who is pretty optimistic, comparatively speaking. Do you think it's this different account of what we are that's explaining the difference?
Absolutely. The basic rules that govern the universe and what our self is, what is that voice you hear in your head?
The three big questions.
Exactly. I think the answer to all these questions boil down to those three questions, which as I pointed out, are very old questions. They go back as far as we have writing, and presumably therefore they go back before that, way beyond that.
So we’ll try to answer some of those questions and maybe I can prod you. I know that you’ve mentioned in the past that you’re not necessarily expressing your specific views, you’re just laying out the groundwork for people to have a debate, but maybe we can tease some of your opinions.
I make no effort to hide them. I have beliefs about all those questions as well, and I’m happy to share them, but the reason they don’t have a place in the book is: it doesn’t matter whether I think I’m a machine or not. Who cares whether I think I’m a machine? The reader already has an opinion of whether a human being is a machine. The fact that I’m just one more person who says ‘yay’ or ‘nay,’ that doesn’t have any bearing on the book.
True. Although, in all fairness, you are a highly qualified person to give an opinion.
I know, but to your point, if Elon Musk says one thing and Mark Zuckerberg says another, and they’re diametrically opposed, they are both eminently qualified to have an opinion and so these people who are eminently qualified to have opinions have no consensus, and that means something.
That does mean something. So, one thing I would like to comment on about the general spirit of your book is that I generally felt like the book was built from a position of optimism. Even towards the very end of the book, towards the 100,000 years in the future, there was always this underlying tone of: we will be better off because of this entire revolution, no matter how it plays out. And I think that maybe I can tease out of you the fact that you are telegraphing your view on 'what are we?' Effectively, are we a benevolent race in a benevolent existence, or are we something that's more destructive in nature? So, I don't know if you would agree with that statement about the spirit of the book or whether…
Absolutely. I am unequivocally, undeniably optimistic about the future, for a very simple reason, which is: there was a time in the past, maybe 70,000 years ago, when humans were down to something like maybe 1,000 breeding pairs. We were an endangered species, and we were one epidemic, one famine away from total annihilation, and somehow we got past that. And then 10,000 years ago, we got agriculture and we learned to regularly produce food, but it took 90 percent of our people, for 10,000 years, to make our food.
But then we learned a trick and the trick is technology, because what technology does is it multiplies what you are able to do. And what we saw is that all of a sudden, it didn’t take 90 percent, 80 percent, 70, 60, all the way down, in the West to 2 percent. And furthermore, we learned all of these other tricks we could do with technology. It’s almost magic that what it does is it multiplies human ability. And we know of no upward limit of what technology can do and therefore, there is no end to how it can multiply what we can do.
And so, one has to ask the question, “Are we on balance going to use that for good or ill?” And the answer obviously is for good. I know maybe it doesn’t seem obvious if you caught the news this morning, but the simple fact of the matter is by any standard you choose today, life is better than it was in the past, by that same standard anywhere in the world. And so, we have an unending story of 10,000 years of human progress.
And what has marred humanity for the longest time is the concept of scarcity. There was never enough good stuff for everybody, not enough food, not enough medicine, not enough education, not enough leisure, and technology lets us overcome scarcity. And so, I think if you keep that at the core, that on balance, there have been more people who wanted to build than destroy, we know that, because we have been building for 10,000 years. That on balance, on net, we use technology for good on net, always, without fail.
I’d be interested to know the limits to your optimism there. Is your optimism probabilistic? Do you assign, say a 90 percent chance to the idea that technology and AI will be on balance, good for humans? Or do you think it’s pretty precarious, there’s maybe a 10 percent chance, 20 percent chance that that might be a point where if we fail to institute the right sort of arrangements, it might be bad. How would you sort of describe your optimism in that sense?
I find it hard to find historic cases where technology came along that magnified what people were able to do and that was bad for us. If in fact artificial intelligence makes everybody effectively smarter, it’s really hard to spin that to a bad thing. If you think that’s a bad thing, then one would advocate that maybe it would be great if tomorrow everybody woke up with 10 fewer IQ points. I can’t construct that in my mind.
And what artificial intelligence is, is it’s a collective memory of the planet. We take data from all these people’s life experiences and we learn from that data, and so to somehow say that’s going to end up badly, is to say ignorance is better than knowledge. It’s to say that, yeah, now that we have a collective memory of the planet, things are going to get worse. If you believe that, then it would be great if everybody forgot everything they know tomorrow. And so, to me, the antithetical position that somehow making everybody smarter, remembering our mistakes better, all of these other things can somehow lead to a bad result…I think is…I shall politely say, unproven in the extreme.
You see, I believe that people are inherently…we have evolved to be by default, extremely cautious. Somebody said it’s much better to mistake a rock for a bear and to run away from it, than it is to mistake a bear for a rock and just stand there. So, we are a skittish people and our skittishness has served us well. But what happens is it means anytime you’re born with some bias, some cognitive bias, and we’re born I think with one of fear, it does one well to be aware of that and to say, “I know I’m born this way. I know that for 10,000 years things have gotten better, but tomorrow they might just be worse.” We come by that honestly, it served us well in the past, but that doesn’t mean it’s not wrong.
All right, well if we take that and use that as a sort of a veneer for the rest of the conversation, let’s move into the narrow AI portion of your book. We can go into the whole variance of whether robots are going to take all of our jobs, some of our jobs, or none of our jobs and we can kind of explore that.
I know that you've covered that in other interviews, and one of the things that maybe we also should cover is how we train our AI systems in this narrow era. How we can inadvertently create issues for ourselves by having old datasets that represent social norms that have since changed and therefore skew things in the wrong way, and inherently create momentum for machines to draw wrong conclusions about us, even though we as humans might recognize from context that something was once relevant but no longer is. Maybe you can just kick off that whole section with commentary on that.
So, that is certainly a real problem. You see, when you take a data set, and let's say the data is 100 percent accurate, and you come up with some conclusion about it, it takes on a halo of 'well, that's just the facts, that's just how things are, that's just the truth.' And in a sense, it is just the truth, and AI is only going to come to conclusions based on, like you said, the data that it's trained on. You see, the interesting thing about artificial intelligence is that it has a philosophical assumption behind it, and it is that the future is like the past, and for many things that is true. A cat tomorrow looks like a cat today, and so you can take a bunch of cats from yesterday, or a week ago, or a month, or a year, and you can train it and it's going to be correct. A cell phone tomorrow doesn't look like a cell phone ten years ago, though, and so if you took a bunch of photos of cell phones from 10 years ago and trained an AI, it's going to be fabulously wrong. And so, you hit the nail on the head.
The onus is on us to make sure that whatever we are teaching it is a truth that will be true tomorrow, and that is a real concern. There is no machine that can kind of 'sanity check' that for you, where you tell the machine, "This is the truth, now tell me about tomorrow"; people have to get very good at that. Luckily there's a lot of awareness around this issue: people who assemble large datasets are aware that data has a 'best-by' date that varies widely. For how to play a game of chess, it's hundreds of years. That hasn't changed. If it's what a cell phone looks like, it's a year. So the trick is to just be very cognizant of the data you're using.
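(To make that 'best-by date' point concrete, here is a minimal sketch, assuming a hypothetical list of labeled examples that each carry a topic and a collected_at timestamp; the topic names and freshness windows below are purely illustrative and not drawn from the book or from any particular library.)

from datetime import datetime, timedelta

# Illustrative freshness windows: roughly how long an example stays trustworthy.
# Chess rules barely change; photos of cell phones go stale within about a year.
FRESHNESS = {
    "chess": timedelta(days=365 * 100),
    "cat_photos": timedelta(days=365 * 20),
    "phone_photos": timedelta(days=365),
}

def still_fresh(example, now=None):
    # True if the example is within its topic's freshness window.
    now = now or datetime.utcnow()
    window = FRESHNESS.get(example["topic"], timedelta(days=365 * 5))
    return now - example["collected_at"] <= window

# Usage: drop stale examples before training.
dataset = [
    {"topic": "phone_photos", "collected_at": datetime(2008, 6, 1), "label": "cell phone"},
    {"topic": "chess", "collected_at": datetime(1900, 1, 1), "label": "legal opening"},
]
training_set = [ex for ex in dataset if still_fresh(ex)]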
I find the people who are in this industry are very reflective about these kinds of things, and this gives me a lot of encouragement. There have been times in the past where people associated with a new technology had a keen sense that it was something very serious, like the Manhattan project in the United States in World War II, or the computers that were built in the United Kingdom in that same period. They realized they were doing something of import, and they were very reflective about it, even in that time. And I find that to be the case with people in AI today.
I think that generally speaking, a lot of the companies that we’ve invested in this sector and in this stage of effectively narrow based AI, as you said, are going through and thinking through it. But what’s interesting is that I’ve noticed that there is a limit to what we can teach as metadata to data for machine learning algorithms to learn and evolve by themselves. So, the age-old argument is that you can’t build an artificial general intelligence. You have to grow it. You have to nurture it. And it’s done over time. And part of the challenge of nurturing or growing something is knowing what pieces of input to give it.
Now, if you use children as the best approximation of what we do, there’s a lot of built in features, including curiosity and a desire to self-preserve and all these things that then enable the acquisition of metadata, which then justifies and rewrites existing data as either valid or invalid, to use your cell phone example. How do you see us being able to tackle that when we’re inherently flawed in our ability to add metadata to existing data? Are we going to effectively never be able to make it to artificial general intelligence because of our inability to add that additional color to data so that it isn’t effectively a very tarnished and limited utility?
Well, yes, it could very easily be the case, and by the way, that's an extremely minority view among people in AI. I will just say that up front: I'm not representing a majority of people in AI, but I think that could very well be the case. Let me just dive into that a little bit, about how people know what we know. How is it that we are generally intelligent, have general intelligence? If I asked, "Does it hurt your thumb when you hit it with a hammer?" you would say "yes," and then I would say, "Have you ever done it?" "Yes." And then I would say, "Well, when?" And you likely can't remember. And so, you're right, we have data that we somehow learn from, and we store it and we don't know how we store it. There's no place in your brain which is 'hitting your thumb with a hammer hurts,' such that if I somehow could cut that out, you no longer know that. It doesn't exist. We don't know how we'd do that.
Then we do something really clever. We know how to take data we know in one area and apply it to another area. I could draw a picture of a completely made up alien that is weird beyond imagination. And I could show that picture to you and then I could give you a bunch of photographs and say find that alien in these. And if the alien is upside down or underwater or covered in peanut butter, or half behind a tree or whatever, you’re like, “There it is. There it is. There it is. There it is.” We don’t know how we do that. So, we don’t know how to make computers do it.
And then if you think about it, if I were to ask you to imagine a trout swimming in a river, and imagine the same trout in a jar of formaldehyde and in a laboratory. “Do they weigh the same?” You would say, “yeah.” “Do they smell the same?” “Uh, no.” “Are they the same color?” “Probably not.” “Are they the same temperature?” “Definitely not.” And even though you have no experience with any of that, you instinctively know how to apply it. These are things that people do very naturally, and we don’t know how to make machines do them.
If you were to think of a question to ask a computer like, “Dr. Smith is having lunch at his favorite restaurant when he receives a phone call. Looking worried, he runs out the door neglecting to pay his bill. Are the owners liable to call the police?” You would say a human would say no. Clearly, he’s a doctor. It’s his favorite restaurant, he must eat there a lot, he must’ve gotten an emergency call. He ran out the door forgetting to pay. We’ll just ask him to pay the next night he comes in. The amount of knowledge you had to have, just to answer that question is complexity in the extreme.
I can’t even find a chatbot that can answer [the question:] “What’s bigger, a nickel or the sun?” And so to try to answer a question that requires this nuance and all of this inference and understanding and all of that, I do not believe we know how to build that now. That would be, I believe, a statement within the consensus. I don’t believe we know how to build it, and even if you were to say, “Well, if you had enough data and enough computers, you could figure that out.” It may just literally be impossible, like every instantiation of every possibility. We don’t know how we do it. It’s a great mystery and it’s even hotly debated [around] even if we knew how we do it, could we build a machine to do it? I don’t even know that that’s the case.
I think that's part of the thing that baffles me in your book. I'm jumping around a little bit in your book now. You do talk about consciousness, and you talk about sentience and how we know what we know, who we are, what we are. You talk about the dot test on pets and how they identify themselves as themselves. And with any engineering problem, sometimes you can conceive of a solution before the method by which to get there is actually worked out. You can conceive the idea of flying; you just don't know what combination of anything that you are copying from birds, or copying from leaves, or whatever, will function in getting to that goal: flying.
The problem with this one is that, from an engineering point of view, this idea of having another human or another human-like entity that not only has consciousness, but has free will and sentience as far as we can perceive it, [doesn't recognize that] there's a lot of things that you described in your chapter on consciousness that we don't even know how to qualify. Which is a huge catalyst in being able to create the metadata that structures data in a way that then gives the illusion and perception of consciousness. Maybe this is where you give me your personal opinion… do you think we'll ever be able to create an answer to that engineering question, such that technology can be built around it? Because otherwise we might just be stuck on the formulation of the problem.
The logic that says we can build it is very straightforward and seemingly ironclad. The logic goes like this: if we figure out how a neuron works, we can build one, either physically build one or model it in a computer. And if you can model that neuron in a computer, then you learn how it talks to other neurons, and then you model 100 billion of them in the computer, and all of a sudden you have a human mind. So that says we don't have to know it, we just have to understand the physics. The position just says: whatever a neuron does, it obeys the laws of physics, and if we can understand how those laws are interacting, then we will be able to build it. Case closed; there's no question that it can be done.
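(A minimal sketch of what that 'model one neuron, then wire up many' argument assumes, using a leaky integrate-and-fire neuron, one of the simplest textbook neuron models. The parameters are made up for illustration, and this toy is nowhere near a biological neuron, which is exactly the gap the next paragraphs discuss.)

class LeakyIntegrateAndFireNeuron:
    # Toy neuron: accumulates input, leaks over time, spikes past a threshold.

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak  # fraction of potential retained each time step

    def step(self, input_current):
        # Advance one time step; return True if the neuron spikes.
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after spiking
            return True
        return False

# Usage: feed a constant current and watch it spike periodically.
neuron = LeakyIntegrateAndFireNeuron()
spikes = [neuron.step(0.3) for _ in range(20)]
print(sum(spikes), "spikes in 20 time steps")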
So I would say that’s the majority viewpoint. The other viewpoint says, “Well wait a minute, we have this brain that we don’t understand how it works. And then we have this mind, and a mind is a concept everybody uses and if you want a definition, it’s kind of everything your brain can do that an organ doesn’t seem like it would be able to. You have a sense of humor; your liver may not have a sense of humor. You have emotions, your stomach may not have emotions, and so forth.” So somehow, we have a mind that we don’t know how it comes about. And then to your point, we are conscious and what that means is we experience the world. I feel warmth, [whereas] a computer measures temperature. Those are very different things and we not only don’t know how it is that we are conscious, we don’t even know how to ask the question in a scientific method, nor what the answer looks like.
And so, I would say my position to be perfectly clear is, we have brains we don’t understand, minds we don’t understand and consciousness we don’t understand. And therefore, I am unconvinced that we can ever build something like this. And so I see no evidence that we can build it because the only example that we have is something that we don’t understand. I don’t think you have to appeal to spiritualism or anything like that, to come to that conclusion, although many people would disagree with me.
Yeah, it's interesting. I think one thing underlying the pessimistic view is this belief that, while we may not have the technology now or have an idea of how we're going to get there, the kinetics of an AI explosion (that's what I think Nick Bostrom, the philosopher, has called it) may be pretty rapid, in the sense that once there is material success in developing these AI models, that will encourage researchers to sort of pile on and therefore bring in more people to produce those models, and then secondly, there may be advancements in self-improving AI models. So there's a belief that we may get superintelligence pretty quickly, which underlies this pessimism and the belief that we sort of have to act now. What would be your thoughts on that?
Oh, well I don’t agree. I think that’s the “Is that a bear or a rock?” kind of thing. The only evidence we really have for that scenario is movies, and they’re very compelling and I’m not conspiratorial, and they’re entertaining. But what happens is you see that enough, and you do something that has a name, it’s called ‘reasoning from fictional evidence’ and that’s what we do. Where you say, “Well, that could happen, and then you see it again, and yeah, that could happen. That really could again.” Again, and again and again.
To put it in perspective, when I say we don't understand how the brain works, let me be really clear about that. Your brain has 100 billion neurons, roughly the same number as there are stars in the Milky Way. You might say, "Well, we don't understand it because there are so many." This is not true. There's a worm called the nematode worm. It's about as long as a hair is thick, and its brain has 302 neurons. These are the most successful creatures on the planet, by the way. Seventy percent of all animals are nematode worms, and 302 neurons, that's it. [This is about] the number of pieces of cereal in a bowl of cereal. So, for 20 years a group of people in something called the 'OpenWorm project' have been trying to model those 302 neurons in a computer to get it to display some of the complex behavior that a nematode worm does. And not only have they not done it, there's even a debate among them whether it is even possible to do that. So that's the reality of the situation. We haven't even gotten to the mind.
Again, how is it that we're creative? And we haven't even gotten to how it is that we experience the world. We're just talking about how a brain works, and if it only has 302 neurons, a bunch of smart people working on it for 20 years may not even be able to do it. So somehow to spin a narrative that, well, yeah, that all may be true, but what if there was a breakthrough and then it sped up on itself and sped up, and then it got smarter, and then it got so smart it had 100 IQ, then a thousand, then a million, then 100 million, and then it doesn't even see us anymore. That's as speculative as any other kind of scenario you want to come up with. It's so removed from the facts on the ground that you can't rebut it, because it is not based on any evidence that you can refute.
You know, the fun thing about chatting with you, Byron, is that the temptation is to sort of jump into all these theories and which ones are your favorites. So because I have the microphone, I will. Let me just jump into one. Best science fiction theory that you like. I think we’ve touched on a few of these things, but what is the best unified theory of everything, from science fiction that you feel like, ‘you know what, this might just explain it all’?
Star Trek.
Okay. Which variant of it? Because there’s not…
Oh, I would take either… I'll take 'The Next Generation.' So, what is that narrative? We use technology to overcome scarcity. We have bumps all along the way. We are insatiably curious, and we go out to explore the stars, as Captain Picard told the guy they thawed out from the 20th Century. He said the challenge in our time is to better yourself, to discover who you are. And what we found, interestingly, with the Internet, and sure, you can list all the nefarious uses you want, what we found is the minute you make blogs, 100 million people want to tell you what they think. The minute you make YouTube, millions of people want to upload video; the minute you make iTunes, music flourishes.
I think in my father’s generation, they didn’t write anything after they left college. We wake up in the morning, and we write all day long. You send emails constantly and so what we have found is that it isn’t that there were just a few people, and like the Italian Renaissance, there were only a few people who wanted to paint or cared to paint. It was like everybody probably did. Only there wasn’t enough of the good stuff, and so only either you had extreme talent or extreme wealth and then you got to paint.
Well, in the future, in the Star Trek variant of it, we’ve eliminated scarcity through technology, and everybody is empowered, every Dante to write their Inferno, every Marie Curie to discover radium and all of the rest. And so that vision of the future, you know, Gene Roddenberry said in the future there will be no hunger and there will be no greed and all the children would know how to read. That variant of the future is the one that’s most consistent with the past. That’s the one you can say, “Yeah, somebody in the 1400s looking at our life today, that would look like Star Trek to them. These people like push up a button and the temperature in the room gets cooler, and they have leisure time. They have hobbies.” That would’ve seemed like science fiction.
I think there’s a couple of things that I want to tackle with the Star Trek analogy to get us sort of warmed up on this and I think Kyran’s waiting here at the top to ask some of them, but I think the most obvious one to ask, if we use that as a parable of the future, is about Lieutenant Commander Data. Lieutenant Commander Data is one of the characters starring in The Next Generation and is the closest attempt to artificial general intelligence, and yet he’s crippled from fully comprehending the human condition because he’s got an emotion chip that has to be turned off because when it’s turned on, he goes nuts; and his brother is also nuts because he was overly emotional. And then he ends up representing every negative quality of humanity. So to some extent, not only have I just shown off about my knowledge of the Star Trek era…
Lore wasn't overly emotional. He got the chip that was meant for Data and it wasn't designed for him. That was his backstory.
Oh, that’s right. I stand corrected, but maybe you can explore that. In that future, walk us through why you think Gene had that level of limitation for Data, and whether or not that’s an implication of ultimately the limits of what we can expect from robots.
Well, obviously that story is about… that whole setup is just not hard science, right? That whole setup is, like you said, it's embodying us, and it's the Pinocchio story of Data wanting to be a boy and all of the rest. So, it's just storytelling as far as I'm concerned. You know, it's convenient that he has a positronic brain and, having removed part of his scalp, you just see all this light coursing through, but that's not something that science is behind, like Warp 10 or something, or the tricorder. You know Uhura in the original series, she had a Bluetooth device in her ear all the time, right?
Yeah, but I guess with the Data metaphor, I guess what I’m asking is: the limitations that prevented Data from being able to do some of the things that humans do, and therefore ultimately come around full circle into being a fully independent, conscious, free-willed, sentient being, were entirely because of some human elements he was lacking. I guess the question and you brought it up in your book is, whether or not we need those human elements to really drive that final conversion of a machine to some sort of entity that we can respect as an equivalent peer to us.
Yeah. Data is a tricky one because he could not feel pain, so you would say he's not sentient. And to be clear, 'sentient' is often misused to mean 'smart'; that's 'sapient.' Sentient means you can experience pain. He didn't, but as you said, at some point in the show he experienced emotional pain through that chip, and therefore he is sentient. They had a whole episode about "Does Data have a soul?" And you're right, I think there are things that humans do that it's hard to… unless you start with the assumption that everything in a human being is mechanistic, in physics, and that you're a bag of chemicals with electrical impulses going through you.
If you start with that, then everything has to be mechanical, but most people don’t see themselves that way, I have found, and so if there is something else, some emergent or something else that’s going on, then yeah, I believe that has to be wrapped up in our intelligence. That being said, everybody I think has had this experience of when you’re driving along and you kind of space [out] and then you kind of ‘come to’ and you’re like, “Holy cow, I’m three miles along. I don’t remember driving there.” Yet you behaved very intelligently. You navigated traffic and did all of that, but you weren’t kind of conscious. You weren’t experiencing the world at least that much. That may be the limit of what we can do, that a person during that three minutes when you’re kind of spaced, because that person also didn’t write a new poem or do anything creative. They just merely mechanically went through the motions of driving. That may be the limit. That may be that last little bit that makes us human.
The Star Trek view has two pieces to it. It has a technological optimism, which I don't contest. I think I'm aligned with you in agreeing with that. There's also an economic or social optimism there, and that's also about how that technology is owned: who owns the means of production, who owns the replicators. When it comes to that, how precarious do you think the Star Trek universe is, in the sense that if the replicators are only in the hands of a certain group of people, if they're so expensive that only a few people own them, or only a few people own the robots, then it's no longer such an optimistic scenario that we have. I'd just be interested in hearing your views there.
You’re right, that the replicator is a little bit of a convenient…I don’t want to say it’s a cheat, but it’s a convenient way to get around scarcity and they never really go into, well, how is it that anybody could go to the library and replicate whatever they wanted. Like how did they get that? I understand those arguments. We have [a world where] the ability of a person using technology to affect a lot of lives goes up and that’s why we have more billionaires. We have more self-made billionaires now; a higher percentage of billionaires are self-made now than ever before. You know, Google and Facebook together made 12 billionaires. The ability to make a billion dollars gets easier and easier, at least for some people (not me) because technology allows them to multiply and affect more lives and you’re right. So that does tend to make more super, super, super rich people. But, I think the income inequality debate is a little…maybe needs a slight bit of focus.
To my mind it doesn’t matter all that much how many super rich people there are. The question is how many poor people are there? How many people have a good life? How many people can have medical care and can, you know, if I could get everybody to that state, but I had to make a bunch of super rich people, it’s like, absolutely, we’ll take that. So I think, income inequality by itself is a distraction.
I think the question is how do you raise the lot of everybody else and what we know about technology is that it gets better over time and the prices fall over time. And that goes on ad infinitum. Who could have afforded an iPhone 20 years ago? Nobody. Who could have afforded the cell phone 30 years ago? Rich people. Who could have afforded any of this stuff all these years ago? Nobody but the very rich, and yet now because they get rich, all the prices of all that continue to fall and everybody else benefits from it.
I don't deny there are all kinds of issues. You have your Hepatitis C treatment that costs $100,000, and there are a lot of people who need it and only a few people are going to [get it]. There are all kinds of things like that, but I would just take some degree of comfort that, if history has taught us anything, it is that the price of anything related to technology falls over time. You probably have 100 computers in your house. You certainly have dozens of them, and who from 1960 would have ever thought that? Yet here they are. Here we are in that future.
So, I think you almost have to be conspiratorial to say, yeah, we’re going to get these great new technologies, and only a few people are going to control them and they’re just going to use them to increase their wealth ad infinitum. And everybody else is just going to get the short end of the stick. Again, I think that’s playing on fear. I think that’s playing on all of that, because if you just say, “What are the facts on the ground? Are we better off than we were 50 years ago, 100 years ago, 200 years ago?” I think you can only say “yes.”
Those are all very good points and I’m actually tempted to jump around a little bit in your book and maybe revisit a couple of ideas from the narrow AI section, but maybe what we can do is we can merge the question about robot proofing jobs with some of the stuff that you’ve talked about in the last part, which is the road from here.
One of the things that you mentioned before is this general idea that the world is getting better, no matter what. These things that we just discussed about iPhones and computers being more and more accessible is an example of it. You talked about the section of ‘murderous meerkats’ where you know, even things like crime are things that are improving over time, and therefore there is no real reason for us to fear the future. But at the same time, I’m curious as to whether or not you think that there is a decline in certain elements of society, which we aren’t factoring into the dataset of positivity.
For example, do we feel that there is a decline in social values in the current era? That things like helping each other out, or looking out for the collective versus the individual, have come and gone, and we're now starting to see the manifestations of that through some of the social media and how it represents itself? And I just wanted to get your ideas on the road from here, and whether or not you would revisit them if somebody were to show you some sociologists' research regarding the decline of social values, and how that might affect the kinds of jobs humans will have in the future versus robots.
So I’m an optimist about the future. I’m clear about that. Everything is hard. It’s like me talking about my companies. Everything’s a struggle to get from here to there. I’m not going to try to spin every single thing. I think these technologies have real implications on people’s privacy and they’re going to affect warfare and there are all these things that are real problems that we’re really going to have to have to think about. The idea that somehow these technologies make us less empathetic, I don’t agree with. And you can just run through a list of examples like everybody kind of has a cause now. Everybody has some charity or thing that they support. Volunteerism, Go-Fund-Me’s are up…People can do something as simple as post a problem they have online and some stranger who will get nothing in return is going to give them a big, long answer.
People toil on a free encyclopedia and they toil in anonymity. They get no credit whatsoever. We had the ‘open source’ movement. Nobody saw that. Nobody said “Yeah, programmers are going to work really hard and write really good stuff and give it away.” Nobody said we’re going to have Creative Commons where people are going to create things that are digital and they’re going to give them away. Nobody said, “Oh yeah, people are going to upload videos on YouTube and just let other people watch them for free.” Everywhere you look, technology empowers us and our benevolence.
To take the other view is like a “Kids these days!” shaking your cane, “Get off my grass!” kind of view that things are bad now. They’re getting worse. Which is what people have said for as long as people have been reflecting on the age. And so, I don’t buy any of that. In terms of specifically about jobs, I’ve tried hard to figure out what the half-life of a job is. And I think every 40 years, every 50 years, half of all the jobs vanish. Because what does technology do? It makes great new high paying jobs, like a geneticist. And it destroys low-paying tedious jobs, like an order taker at a fast food restaurant.
And what people sometimes say is, "You really think that order taker is going to become a geneticist? They're not trained for these new jobs." And the answer is, "Well, no." What'll happen is a college professor will become a geneticist, and a high school biology teacher gets the college job, and the substitute teacher gets hired for the high school job, all the way down. The question isn't, "Can that person who lost their job to automation get one of these great new jobs?" The question is, "Can everybody on the planet do a job a little harder than the job they have today?" And if the answer to that is yes, then what happens is, every time technology creates great new jobs, everybody down the line gets a promotion. And that is why we have had full employment in the West for 250 years, because unemployment, other than during the Depression, has always been 5 to 10 percent… for 250 years.
Why have we had full employment for 250 years and rising wages? Even when something like the assembly line came out, or something like we replaced all the animal power with steam, you never had bumps in unemployment because people just used those technologies to do more. So yes, in 40 or 50 years, half the jobs are going to be gone, that’s just how the economy works. The good news is though, when I think back to my K-12 education, and I think if I knew the whole future, what would I have taken then that would help me today. And I can only think of one thing that I really just missed out on. And can you guess by the way?
Computer education?
No, because anything they taught me then would no longer be useful. Typing. I should’ve taken typing. Who would have thought that that would be like the skill I need every day the most? But I didn’t know that. So you have to say, “Wow, like everything you have, everything that I do in my job today is not stuff I learned in school.” What we all do now is you hear a new term or concept and you google it and you click on that and you go to Wikipedia and you follow the link, and then it’s 3:00 AM in the morning and you wake up the next morning, and you know something about it. And that’s what every single one of us does, what every single one of us has always done, what every single one of us will continue to do. And that’s how the workforce morphs. It isn’t that we’re facing this kind of cataclysmic disconnect between our education system and our job market. It’s that people are going to learn to do the new things, as they learned to be web designers, and they learned every other thing that they didn’t learn in school.
Yeah, we’d love to dive into the economic arguments in a second, but just to bring it back to your point that technology is always empowering. I’m going to play devil’s advocate here and mention someone we had on the podcast about a year ago. Tristan Harris, who’s the leader of an initiative called ‘Time Well Spent’ and his arguments were that the effects of technology can be nefarious. Two days ago, there was a New York Times article, referring to a research paper on statistical analysis and anti-refugee violence in Germany, and one of the biggest correlating factors was time spent on social media, suggesting that it isn’t always like beneficial or benign for humans. Just to play devil’s advocate here, what is your take on that?
So, is your point that social media causes people to be violent, or is the interpretation that people prone to violence are also prone to using social media?
Maybe one variant of that, and Kyran can provide his own, is that the good is getting better with technology and the bad is getting badder with technology. You just hope that one doesn’t detonate something that is irreversible.
Well, I will not uniformly defend every application of technology to every single situation. I could rattle off all the nefarious uses of the Internet, right? I mean bilking people, you know them all, you don’t need me to list it. The question isn’t, “Do any of those things happen?” The question is, “On balance, are more people using the Internet for good, than evil?” And we know the answer is ‘good.’
It has to be, because if we were more evil than good as a species, we never would have survived this way. We’re highly communal. We’ve only survived because we like to support each other, forget about all the wars, granted, all of the problems, all the social strife, all of that. But in the end, you’re left with the question, “How did we make progress to begin with?” And we made progress because there are more people who are working for progress than there are…who are carrying torches and doing all the rest. It just is simple.
I guess I’m not qualified to make this statement, but I’m going to go ahead and do it anyway. Humans have those attributes because we’re inherently social animals, and as a consequence we’re driven to survive and forego being right at times, because we value the social structure more than we do our own selves; and we value the success of the social structure more than ourselves; and there’s always going to be deviations from that, but on average it then answers and shows and represents itself in the way that you have articulated it.
That’s a theory that I have, and if you accept that theory, and you can let me know whether you do or not, but let’s assume for the sake of the question that it’s correct, then how do you impart that onto a collection of artificial intelligences such that they mirror it? And as we start delegating more and more to those collective artificial intelligences, can we rely on them to have that same drive when they’re not socially dependent on each other the way that humans are for reproduction and defense and emotional validation?
That could well be the case, yes. I mean, we have to make sure that we program them to reflect an ethical code, and that’s an inherently very hard thing to do, because people aren’t great at articulating ethical codes, and even when they do articulate them, they’re full of provisos and exceptions, and everybody’s is different. But luckily, there are certain broad concepts that almost everybody agrees with: that life is better than death, that building is better than destroying. There are these very high-level concepts, and we will need to take great pains over how we build them into our AIs. This is an old debate, even within AI.
There was a man named Joseph Weizenbaum, who made a chatbot in the sixties called ELIZA. It was simple. You would say, “I’m having a bad day today,” and it would say, “Why are you having a bad day?” “I’m having a bad day because of my mother.” “Why are you having a bad day because of your mother?” Back and forth. Super simple. Everybody knew it was a chatbot, and yet he saw people getting emotionally attached to it, and he kind of turned on the whole idea.
His point was that when the computer says ‘I understand,’ it’s just a lie: there is no ‘I,’ and there is no understanding. He came to believe we should never let computers do those kinds of things. They should never be the recipients of our emotions; we should never make them caregivers and all of these other things, because in the end they don’t have any moral capacity at all. They have no empathy. They have faked empathy, simulated empathy. So I think there is something to that, that there will simply be jobs we’re not going to want them to do, because in the end those jobs are going to require a person, I think.
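For readers curious about the mechanics Weizenbaum’s critique rests on: what he built was essentially pattern matching plus pronoun reflection, nothing more. Here is a minimal, purely illustrative sketch of an ELIZA-style exchange; the rules below are invented for this example, not his original script, but they show how thin the “understanding” really is.

```python
import re

# Invented ELIZA-style rules: a few regex patterns and canned response templates.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "i'm": "you're"}
PATTERNS = [
    (r"i'?m having (.*)", "Why are you having {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*)", "Tell me more about that."),
]

def reflect(text):
    # Swap first-person words for second-person ones so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(utterance):
    for pattern, template in PATTERNS:
        match = re.match(pattern, utterance.strip().rstrip("."), re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I'm having a bad day today"))
# -> "Why are you having a bad day today?"
print(respond("I'm having a bad day because of my mother"))
# -> "Why are you having a bad day because of your mother?"
```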
You see, any job a computer or a robot could do, if you make a person do that job, there’s a word for that: it’s dehumanizing. If a machine can, in theory, do a job and you make a person do it, that’s dehumanizing. You’re not using anything about them that makes them a human being; you’re using them as a stand-in for a machine, and those are the jobs machines should do.
But then there are all the other jobs that only people can do, and those are what I think people should do. I think there are going to be a lot of things like that, that we are going to be uncomfortable with, and we still don’t have any idea. Like, when you’re on a chatbot, you need to be told it’s a chatbot. Should robotic voices on the phone actually sound somewhat robotic, so you know that’s not a person? Think about R2-D2 or C-3PO, and then think what it would be like if their names were Jack and Larry. That’s a subtle difference in how we regard them, and we don’t yet know how we’re going to handle it. But you’re entirely right: machines don’t have any empathy, they can only fake it, and there are real questions about whether that’s good or not.
Well, that’s a great way of looking at it, and one of the things that’s been really great during this chat is understanding the origin of some of these views and how you end up at this positive outcome at the end of the day on average. And the book does a really good job of leaving the reader with that thought in mind, but arms them to have these kinds of engaging conversations. So thanks for sharing the book with us and thanks for providing your opinion on different elements of the book.
That said, it’d be great to get some thoughts about things that inspired you or that you left out of the book. For example, which movies have most affected you in the vein of this particular book? What are your thoughts on a TV show like Westworld and how it illustrates the development of the mind of an artificial intelligence? Maybe just share a little bit about how your thoughts have evolved.
Certainly, and I would also like to add that I do think there’s one way it can all go south. I think there is one pessimistic future, and it will come about if people stop believing in a better tomorrow. I think pessimism is what will get us all killed. The reason optimism has been so successful is that there have always been enough people who get up and say, “Somebody needs to invent the blank. Somebody needs to find a cure for this. Somebody needs to do it. I will do it.” You have enough people who believe, in one form or another, in a better tomorrow.
Then there’s the other mentality: don’t polish brass on a sinking ship. That’s where you just say, “Well, what’s the point? Why bother?” And if enough people said “Why bother?” we would never get that better world, because we’re going to have to build it. And just like I said earlier with my companies, it’s going to be hard. Everybody’s got to work hard at it. So it’s not a gift, it’s not free. We’ve clawed our way from savagery to civilization and we’ve got to keep clawing. But the interesting thing is, finally I think there is enough of the good stuff for everybody. And you’re right, there are big distribution problems around that, and there are a lot of people who aren’t getting any of the good stuff, and those are all real things we’re going to have to deal with.
When it comes to movies and TV, I have to see them all, because everybody asks me about them on shows. So I have to go see them, and I used to loathe going to all the pessimistic movies that have far and away dominated. In fact, thinking of Black Mirror, I started writing out story ideas in my head for a show I call ‘White Mirror.’ Who’s telling the stories about how everything can be good in the future? That doesn’t mean they’re bereft of drama. It just means that it’s a different setting to explore these issues.
I used to be so annoyed at having to go to all of these movies. I would go to see some movie like Elysium and then be like, yeah, they’re the 99 percent, yeah, they’re poor and beaten down. Yeah, they’re covered in dirt. And now, yeah, the 1 percent, I bet they live in someplace high up in the sky, pretty and clean. Yeah, there that is. And then, you know, you see Metropolis, the most expensive movie ever made, adjusted for inflation, from almost a century ago. And yeah, there are the 99 percent. They’re dirty, they’re covered in dirt, everybody forgets to bathe in the future. I wonder where the…oh yeah, the one percent, yeah, they live in that tower up there. Oh, everything up there is white and clean. Wow. Isn’t that something. And I have to sit through these things.
And then I read a quote by Frank Herbert, and he said sometimes the purpose of science fiction is to keep the future from happening. And I said, okay, these are cautionary tales. These are warnings, and now I view them all like that. So, I think there are a lot of cautionary tales out there and very few things that we can…well, like Star Trek. You heard me answer that so quickly because there aren’t a lot of positive views about the future in science fiction. It just doesn’t seem to be as rich a ground to tell stories, and even in that world, you had to have the Ferengi, and you had to have the Klingons and the Romulans and so forth.
So, I’ve watched them all and you know, I enjoy Westworld, like the next person. But I also realized those are people playing those androids and that nobody can build a machine that does any of that. And so it’s fiction. It’s not speculative in my mind. It’s pure fiction. It’s what they are and that doesn’t mean they’re any less enjoyable… When I ask people on my AI podcast what science fiction influenced you, they all, almost all say Star Trek. That was a show that inspired people, and so I really gravitate towards things that inspire me and inspire me in a vision of a better tomorrow.
For me, if I had to answer that question, I would say The Matrix. And I think that it brings up a lot of philosophical questions and even questions about reality. And it’s dystopian in some ways I guess, but in some ways, it illustrates how we got there and how we can get out of it. And it has a utopian conclusion I guess, because it’s ultimately in the form of liberation. But it is an interesting point you make.
And it actually makes me reflect back on all the movies that I’ve seen, and it brings up another question, which is whether or not this is just representative of the times. Because if you look at art and literature over the years, in many ways they are inspired by what’s going on during that era. You can see bouts of optimism after the resolution of some conflict, and then you can see the brewing of social upheaval, which then ends up in some sort of conflict, and you see that across the decades, which is interesting. And I guess that raises a moral responsibility for us not to generate the most intense set of innovations around artificial intelligence at a point where society is quite split, because we might inject unfortunate conclusions into AI systems just because of where we are in our geopolitical evolution.
Yeah. I call my airline of choice once a week to do something, and it asks me to state my member number, which unfortunately has an A, an H, and an 8 in it. And it never gets it right. So that’s what people are trying to do with AI today: make a lot of really tedious stuff less tedious. (And use caller ID, by the way. I always call from the same number. But that’s a different subject.)
And so most of the problems that we try to solve with it are relatively mundane, or they’re about how we stop disease and all of these very worthwhile things. It’s not a scary technology. It’s study the past, look for patterns in data, project into the future. That’s it. Anything around it that tries to make it terrifying, I think, is sensationalism. I think the responsibility is to tell the story about AI like that, without the fear, emphasizing all the good that can come out of this technology.
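To make that “study the past, look for patterns, project into the future” framing concrete, here is a minimal, purely illustrative sketch (the numbers are invented for this example) of what most everyday machine learning reduces to: fit a pattern to history, then extrapolate it forward.

```python
import numpy as np

# Toy history: twelve monthly observations of some quantity (invented numbers).
months = np.arange(12)
values = np.array([10, 11, 13, 13, 15, 16, 18, 19, 21, 22, 24, 25], dtype=float)

# "Study the past, look for patterns in data": fit a simple linear trend.
slope, intercept = np.polyfit(months, values, deg=1)

# "Project into the future": assume the pattern keeps holding and extrapolate.
future_months = np.arange(12, 18)
forecast = slope * future_months + intercept
print(forecast.round(1))  # projected values for the next six months
```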
What do you think we’ll look back on 50 years from now and think, “Wow, why were we doing that? How did we get away with that?” the way that we look back today on slavery and think, “Why the hell did that happen?”
Well, I will give an answer to that. And it’s not my own personal axe to grind. To be clear, I live in Austin, Texas. We have barbecue joints here in abundance, but I believe that we will learn to grow meat in a laboratory and it will be not only environmentally, massively better, but it will taste better, and be cheaper and healthier and everything. And so I think we’re going to grow all of our meat and maybe even all of our vegetables, by the way. Why do you need sunlight and rain and all of that? But put that aside for a minute, I think we’re going to grow all of our meat in the future and I don’t know if you grow it from a cell, if it’s still veganism to eat it. Maybe it is, I don’t know, like strictly speaking, but I think once the best steak you’ve ever had in your life is 99 cents, everybody’s just going to have that.
And then we’ll look back at how we treat animals with a sense of collective shame, because the question is, “Can they feel?” In the United States, up until the mid-90s, veterinarians were taught that animals couldn’t feel pain, and so they didn’t anesthetize them. Surgeons operated on babies in that same era on the belief that they couldn’t feel pain. Now I think people care whether the chicken they’re eating was raised humanely. So I think we’ll see that expansion of empathy to animals, which most people now believe do feel pain and do experience sadness, or something that must feel like it, set against the fact that we essentially keep them in abhorrent conditions.
And again, I’m not grinding my own axe here. I don’t think people are going to change overnight. I think what’s going to happen is there’ll be an alternative, the alternative will be so much better that everybody will use it, and then we’ll look back and think, how in the world did we do that?
No, I agree with that. As a matter of fact, we’ve invested in a company that’s trying to solve that problem. I’m going to hold off on naming them in the show notes just because they’re in stealth right now, but by the time this interview goes out, hopefully we’ll be able to talk about them. But yes, I agree with you entirely, and we’ve put our money behind it. So, I’m looking forward to that being one of the issues that gets solved. Now another question is: what’s something that you used to strongly believe in, that now you think you were fundamentally misguided about?
Oh, that happens all the time. I didn’t write this book by starting off saying, “I will write a book that doesn’t really say what I think, it’ll just be this framework.” I wrote the book to try to figure out what I think, because I would hear all of these proclamations about these technologies and what they could do. I think I used to be way more in the AGI camp, believing this is something we’re going to build and we’re going to have those things, like on Westworld. This was before Westworld, though. And I used to be much more in that camp, until I wrote the book, which changed me. I can’t say I disbelieve it, that would be the wrong way to say it, but I see no evidence for it. I think I used to buy that narrative a lot more, and I didn’t realize it was less a technological opinion and more a metaphysical opinion. So, working through all of that and understanding all of the biases and all of the debate was very humbling, because these are big issues, and what I wanted to do, like I said, is make a book that helps other people work through them.
Well, it is a great book. I’ve really enjoyed reading it. Thank you very much for writing it. Congratulations! This is also the longest podcast we’ve ever recorded, but it’s a subject that is very dear to me, and one that is endlessly fascinating. We could continue on, but we’re going to be respectful of your time, so thank you for joining us and for your thoughts.
Well, thank you. Anytime you want me back, I would love to continue the conversation.
Well, until next time guys. Bye. Thanks for listening. If you enjoyed the podcast, don’t forget to subscribe on iTunes and SoundCloud and leave us a review with your thoughts on our show.
This Much I Know: Byron Reese on Conscious Computers and the Future of Humanity
Recently GigaOm publisher and CEO, Byron Reese, sat down for a chat with Seedcamp’s Carlos Espinal on their podcast ‘This Much I Know.’ It’s an illuminating 80-minute conversation about the future of technology, the future of humanity, Star Trek, and much, much more.
You can listen to the podcast at Seedcamp or Soundcloud, or read the full transcript here.
Carlos Espinal: Hi everyone, welcome to ‘This Much I Know,’ the Seedcamp podcast with me, your host Carlos Espinal bringing you the inside story from founders, investors, and leading tech voices. Tune in to hear from the people who built businesses and products scaled globally, failed fantastically, and learned massively. Welcome everyone! On today’s podcast we have Byron Reese, the author of a new book called The Fourth Age: Smart Robots, Conscious Computers and the Future of Humanity. Not only is Byron an author, he’s also the CEO of publisher GigaOm, and he’s also been a founder of several high-tech companies, but I won’t steal his thunder by saying every great thing he’s done. I want to hear from the man himself. So welcome, Byron.
Byron Reese: Thank you so much for having me. I’m so glad to be here.
Excellent. Well, I think I mentioned this before: one of the key things that we like to do in this podcast is get to the origins of the person; in this case, the origins of the author. Where did you start your career and what did you study in college?
I grew up on a farm in east Texas, a small farm. And when I left high school I went to Rice University, which is in Houston, and I studied Economics and Business, a pretty standard, general thing to study. When I graduated, it seemed to me that every generation had something that was ‘it’ at that time, the Zeitgeist of that moment, and I knew I wanted to get into technology. I’d always been a tinkerer, I built my first computer, blah, blah, blah, all of that normal kind of nerdy stuff.
So, I ended up moving out to the Bay Area in the early 90s, and I worked for a technology company, and that one was successful, and we sold it and it was good. Then I worked for another technology company, got an idea, spun out a company and raised the financing for it. And we sold that company. Then I started another one, and after seven hard years we sold that one to a company, and it went public, and so forth. So, from my mother’s perspective, I can’t seem to hold a job; but from another view, it’s kind of the thing of our time. We’re in an industry that changes so rapidly that new opportunities always come along, and I find that whole feeling intoxicating.
That’s great. That’s a very illustrious career with that many companies having been built and sold. And now you’re running GigaOm. Do you want to share a little bit for people who may not be as familiar with GigaOm and what it is and what you do?
Certainly. And I hasten to add that I’ve been fortunate that I’ve never had a failure in any of my companies, but they’ve always had harder times. They’ve always had these great periods of like, ‘Boy, I don’t know how we’re going to pull this through,’ and they always end up [okay]. I think tenacity is a great trait in the startup world, because they’re all very hard. And I don’t feel like I figured it all out or anything. Every one is a struggle.
GigaOm is a technology research company. So, if you’re familiar with companies like Forrester or Gartner or those kinds of companies, what we are is a company that tries to help enterprises, help businesses deal with all of the rapidly changing technology that happens. So, you can imagine if you’re a CIO of a large company and there are so many technologies, and it all moves so quickly and how does anybody keep up with all of that? And so, what we have are a bunch of analysts who are each subject matter experts in some area, and we produce reports that try to orient somebody in this world we’re in, and say ‘These kinds of solutions work here, and these work there’ and so forth.
And that’s GigaOm’s mission. It’s a big, big challenge, because you can never rest. Almost every day I find big new companies I’ve never even heard of, and I think, ‘How did I miss this?’ Then you have to dive into that, so it’s a relentless, nonstop effort to stay current on these technologies.
On that note, one of the things that describes you on your LinkedIn page is the word ‘futurist.’ Do you want to walk us through what that means in the context of a label and how does the futurist really look at industries and how they change?
Well, it’s a lowercase ‘f’ futurist, so anybody who seriously thinks about how the future might unfold is, to one degree or another, a futurist. I think what makes it into a discipline is trying to understand how change itself happens, how technology drives change, and to do that you almost by definition have to be a historian as well. So, I think to be a futurist is to be deliberate and reflective about how it is that we came from where we were, in savagery and low tech and all of that, to the world we are in today, and whether you can in fact look forward.
The interesting thing about the future is that it always progresses very neatly and linearly until it doesn’t, until something comes along so profound that it changes everything. That’s why you hear stories like the 19th-century prediction that, by some year in the future, London would be unnavigable because of all the manure from the horses needed to support the population. And that maybe would have happened, except you had the car, and so on. So, everything’s a straight line, until one day it isn’t. And I think the challenge of the futurist is to figure out, ‘When does it [move in] a line and when is it a hockey stick?’
So, on that definition of line versus hockey stick, your background as having been CEO of various companies, a couple of which were media centric, what is it that drew you to artificial intelligence specifically to futurize on?
Well, that is a fantastic question. Artificial intelligence is, first of all, a technology whose impact people widely differ on, and that’s usually a marker that something may be going on there. There are people who think it’s just oversold hype, just data mining, big data renamed, just a tool for raising money more easily. Then there are people who say this is going to be the end of humanity as we know it. And philosophically, the idea that a machine can think, maybe, is a fantastically interesting one, because we know that when you can teach a machine to do something, you can usually double and double and double its ability to do that over time. And if you could ever get it to reason, and then it could double and double and double, well, that could potentially be very interesting.
Computers are able to evolve kind of at the speed of light; they just keep getting better. Humans evolve at the speed of life; it takes generations. So, if a machine can think, a question famously posed by Alan Turing, that could potentially be a game changer. Likewise, I have a similar fascination for robots, because a robot is a machine that can act, that can move and interact physically in the world. And I got to thinking: what is a human in a world where machines can think better and act better? What are we? What is uniquely human at that point?
And so, when you start asking those kinds of questions about a technology, that gets very interesting. You can take something like air conditioning and you can say, wow, air conditioning. Think of the impact that had. It meant that in the evenings people wouldn’t… in warm areas, people don’t go out on their front porch anymore. They close the house up and air condition it, and therefore they have less interaction with their neighbors. And you can take some technology as simple as that and say that had all these ripples throughout the world.
The discovery of the new world ended the Italian Renaissance effectively, because it changed the focus of Europe to a whole different direction. So, when those sorts of things had those kinds of ripples through history, you can only imagine what if the machine could think, like that’s a big deal. Twenty-five years ago, we made the first browser, the Mosaic browser, and if you had an enormous amount of foresight and somebody said to you, in 25 years, 2 billion people are going to be using this, what do your think’s going to happen?
If you had an enormous amount of foresight, you might’ve said, well, the Yellow Pages are going to have it rough and the newspapers are, and travel agents are, and stock brokers are going to have a hard time, and you would have been right about everything, but nobody would have guessed there would be Google, or eBay, or Etsy, or Airbnb, or Amazon, or $25 trillion worth of a million new companies. And all that was, was computers being able to talk to each other. Imagine if they could think. That is a big question.
You’re right, and I think that there is…I was joking and I said ‘Tinder’ in the background, just because that’s a social transformation. It’s not even a utility so much as a shift in the social expectation of where certain things happen. So, you’re right… and we’re going to get into some of those [new platforms] as we review your book. In order to do that, let’s go through the table of contents. So, for those of you that don’t have the book yet, because hopefully you will after this chat, the book is broken up into five parts, and in some ways these parts are arguably chronological in their stage of development.
The first one I would label as the historical part, and it’s broken out into the four ages that we’ve had as humans: the first age being language and fire, the second one being agriculture and cities, the third one being writing and wheels, and the fourth one being the one that we’re currently in, which is robots and AI. And we’re left with three questions, which are: what is the composition of the universe, what are we, and what is your ‘self’? Those are big, deep philosophical questions that will manifest themselves in the book a little bit later as we get into consciousness.
Part two of the book is about narrow AI and robots. Arguably I would say this is where we are today, and Seedcamp, as an investor in AI companies, has broadly invested in narrow AI through different companies. This is, I think, the cutting edge of AI as far as we understand it. Part three of the book covers artificial general intelligence, which is everything we’ve always wanted to see and which science fiction represents quite well, everything from the movie A.I., with the little robot boy, to Bicentennial Man with Robin Williams, along with the ethical implications of that.
Then part four of the book is computer consciousness, which is a huge debate, because as Byron articulates in the book, there’s a whole debate on what consciousness even is, and there’s a distinction between monists and dualists in how they experience consciousness and how they define it. Hopefully Byron will walk us through that in more detail. And lastly, ‘the road from here’ is the future as far as we can see it, the futurist portion of the book; I mean, parts three, four and five are all futurist portions of the book, but this one is where I think, Byron, you go to the ‘nth’ degree possible, with a few exceptions. So maybe we can kick off with your commentary on why you have broken up the book into these five parts.
Well you’re right that they’re chronological, and you may have noticed each one opens with what you could call a parable, and the parables themselves are chronological as well. The first one is about Prometheus and it’s about technology, and about how the technology changed and all the rest. And like you said, that’s where you want to kind of lay the groundwork of the last 100,000 years and that’s why it’s named something like ‘the road to here,’ it’s like how we got to where we are today.
And then I think there are three big questions that, everywhere I go, I hear one variant of or another. The first one is around narrow AI, and like you said, it’s a real technology that’s going to impact us: what’s it going to do to jobs, what’s it going to do in warfare, what will it do to income? All of these things we are certainly going to deal with. And then we’re unfortunate with the term ‘artificial intelligence,’ because it can mean many different things. It can be narrow AI, a Nest thermostat that can adjust the temperature, but it can also be Commander Data of Star Trek. It can be C-3PO out of Star Wars. It can be something as versatile as a human, and unfortunately those two things share the same name, but they’re different technologies, so that has to be drawn out on its own, and you have to ask, “Is this very different thing that shares the same name likely? Possible? What are its implications?” and whatnot.
Interestingly, the people who believe we’re going to build [an AGI] differ immensely on when. Some say as soon as five years, and some say as far away as five hundred, and it’s very telling that these people have such wide viewpoints on when we’ll get it. And then, for the people who believe we’re going to build one, the question becomes, ‘Well, is it alive? Can it feel pain? Does it experience the world? And therefore, on that basis, does it have rights?’ And if it does, does that mean you can no longer order it to plunge your toilet when it gets stopped up, because all you’ve made is a sentient being that you control? Is that possible?
And why is it that we don’t even know this? The only real thing any of us knows is our own consciousness, and we don’t even know how that comes about. And then finally, the book starts 100,000 years ago, and I wanted to look 100,000 years out, or something like that. I wanted to start thinking about, no matter how these other issues shake out, what is the long trajectory of the human race? How did we get here, and what does that tell us about where we’re going? Is human history a story of things getting better or things getting worse, and how do they get better or worse, and all the rest. So that was a structure that I made for the book before I wrote a single word.
Yeah, and it makes sense. Maybe for the sake of not stealing the thunder of those that want to read it, we’ll skip a few of those, but before we go straight into questions about the book itself, maybe you can explain who you want this book to be read by. Who is the customer?
There are two customers for the book. The first is people who are in the orbit of technology one way or the other, like it’s their job, or their day to day, and these questions are things they deal with and think about constantly. The value of the book, the value prop of the book is that it never actually tells you what I think on any of these issues. Now, let me clarify that ever so slightly because the book isn’t just another guy with another opinion telling you what I think is going to happen. That isn’t what I was writing it for at all.
What I was really intrigued by is how people have so many different views on what’s going to happen. Like with the jobs question, which I’m sure we’ll come to. Are we going to have universal unemployment or are we going to have too few humans? These are very different outcomes all by very technical minded informed people. So, what I’ve written or tried to write is a guidebook that says I will help you get to the bottom of all the assumptions underlying these opinions and do so in a way that you can take your own values, your own beliefs, and project them onto these issues and have a lot of clarity. So, it’s a book about how to get organized and understand why the debate exists about these things.
And then the second group are people who, they just see headlines every now and then where Elon Musk says, “Hey, I hope we’re not just the boot loaders for the AI, but it seems to be the case,” or “There’s very little chance we’re going to survive this.” And Stephen Hawking would say, “This may be the last invention we’re permitted to make.” Bill Gates says he’s worried about AI as well. And the people who see these headlines, they’re bound to think, “Wow, if Bill Gates and Elon Musk and Stephen Hawking are worried about this, then I guess I should be worried as well.” Just on the basis of that, there’s a lot of fear and angst about these technologies.
The book actually isn’t about technology. It’s about what you believe and what that means for your beliefs about technology. And so, I think after reading the book, you may still be afraid of AI, you may not, but you will be able to say, ‘I know why Elon Musk, or whoever, thinks what they think. It isn’t that they know something I don’t know, they don’t have some special knowledge I don’t have, it’s that they believe something. They believe something very specific about what people are, what the brain is. They have a certain view of the world as completely mechanistic and all these other things.’ You may agree with them, you may not, but I tried to get at all of the assumptions that live underneath those headlines you see. So why would Stephen Hawking say that, why would he? Well, there are certain assumptions that you would have to hold to come to that same conclusion.
Do you believe that’s the main reason very intelligent people disagree about how optimistic to be about what artificial intelligence will do? You mentioned Elon Musk, who is pretty pessimistic about what AI might do, whereas there are others, like Mark Zuckerberg from Facebook, who are pretty optimistic, comparatively speaking. Do you think it’s this different account of what we are that explains the difference?
Absolutely. The basic rules that govern the universe and what our self is, what is that voice you hear in your head?
The three big questions.
Exactly. I think the answer to all these questions boil down to those three questions, which as I pointed out, are very old questions. They go back as far as we have writing, and presumably therefore they go back before that, way beyond that.
So we’ll try to answer some of those questions and maybe I can prod you. I know that you’ve mentioned in the past that you’re not necessarily expressing your specific views, you’re just laying out the groundwork for people to have a debate, but maybe we can tease some of your opinions.
I make no effort to hide them. I have beliefs about all those questions as well, and I’m happy to share them, but the reason they don’t have a place in the book is: it doesn’t matter whether I think I’m a machine or not. Who cares whether I think I’m a machine? The reader already has an opinion of whether a human being is a machine. The fact that I’m just one more person who says ‘yay’ or ‘nay,’ that doesn’t have any bearing on the book.
True. Although, in all fairness, you are a highly qualified person to give an opinion.
I know, but to your point, if Elon Musk says one thing and Mark Zuckerberg says another, and they’re diametrically opposed, they are both eminently qualified to have an opinion and so these people who are eminently qualified to have opinions have no consensus, and that means something.
That does mean something. So, one thing I would like to comment on about the general spirit of your book is that I generally felt like it was built from a position of optimism. Even towards the very end of the book, towards the 100,000 years in the future, there was always this underlying tone of: we will be better off because of this entire revolution, no matter how it plays out. And I think that maybe I can tease out of you the fact that you are telegraphing your view on ‘what are we?’ Effectively, are we a benevolent race in a benevolent existence, or are we something that’s more destructive in nature? So, I don’t know if you would agree with that statement about the spirit of the book or whether…
Absolutely. I am unequivocally, undeniably optimistic about the future, for a very simple reason, which is that there was a time in the past, maybe 70,000 years ago, when humans were down to something like a thousand breeding pairs. We were an endangered species, and we were one epidemic, one famine away from total annihilation, and somehow we got past that. Then 10,000 years ago, we got agriculture and we learned to regularly produce food, but it took 90 percent of our people for 10,000 years to make our food.
But then we learned a trick and the trick is technology, because what technology does is it multiplies what you are able to do. And what we saw is that all of a sudden, it didn’t take 90 percent, 80 percent, 70, 60, all the way down, in the West to 2 percent. And furthermore, we learned all of these other tricks we could do with technology. It’s almost magic that what it does is it multiplies human ability. And we know of no upward limit of what technology can do and therefore, there is no end to how it can multiply what we can do.
And so, one has to ask the question, “Are we on balance going to use that for good or ill?” And the answer obviously is for good. I know maybe it doesn’t seem obvious if you caught the news this morning, but the simple fact of the matter is by any standard you choose today, life is better than it was in the past, by that same standard anywhere in the world. And so, we have an unending story of 10,000 years of human progress.
And what has marred humanity for the longest time is the concept of scarcity. There was never enough of the good stuff for everybody: not enough food, not enough medicine, not enough education, not enough leisure, and technology lets us overcome scarcity. So, I think you have to keep that at the core: on balance, there have been more people who wanted to build than destroy, and we know that, because we have been building for 10,000 years. On balance, on net, we use technology for good, always, without fail.
I’d be interested to know the limits to your optimism there. Is your optimism probabilistic? Do you assign, say a 90 percent chance to the idea that technology and AI will be on balance, good for humans? Or do you think it’s pretty precarious, there’s maybe a 10 percent chance, 20 percent chance that that might be a point where if we fail to institute the right sort of arrangements, it might be bad. How would you sort of describe your optimism in that sense?
I find it hard to find historic cases where technology came along that magnified what people were able to do and that was bad for us. If in fact artificial intelligence makes everybody effectively smarter, it’s really hard to spin that to a bad thing. If you think that’s a bad thing, then one would advocate that maybe it would be great if tomorrow everybody woke up with 10 fewer IQ points. I can’t construct that in my mind.
And what artificial intelligence is, is a collective memory of the planet. We take data from all these people’s life experiences and we learn from that data, so to somehow say that’s going to end up badly is to say ignorance is better than knowledge. It’s to say that, now that we have a collective memory of the planet, things are going to get worse. If you believe that, then it would be great if everybody forgot everything they know tomorrow. And so, to me, the antithetical position, that somehow making everybody smarter, remembering our mistakes better, and all of these other things can lead to a bad result…I think is…I shall politely say, unproven in the extreme.
You see, I believe that people are inherently…we have evolved to be by default, extremely cautious. Somebody said it’s much better to mistake a rock for a bear and to run away from it, than it is to mistake a bear for a rock and just stand there. So, we are a skittish people and our skittishness has served us well. But what happens is it means anytime you’re born with some bias, some cognitive bias, and we’re born I think with one of fear, it does one well to be aware of that and to say, “I know I’m born this way. I know that for 10,000 years things have gotten better, but tomorrow they might just be worse.” We come by that honestly, it served us well in the past, but that doesn’t mean it’s not wrong.
All right, well if we take that and use that as a sort of veneer for the rest of the conversation, let’s move into the narrow AI portion of your book. We can go into the whole range of views on whether robots are going to take all of our jobs, some of our jobs, or none of our jobs, and we can kind of explore that.
I know that you’ve covered that in other interviews, and one of the things that maybe we should also cover is how we train our AI systems in this narrow era. How can we inadvertently create issues for ourselves by having old data sets that represent social norms that have since changed, and thereby skew things in the wrong way and create momentum for machines to draw wrong conclusions about us, even though we as humans might be able to tell from context that something was once true but is no longer? Maybe you can just kick off that whole section with commentary on that.
So, that is certainly a real problem. You see, when you take a data set, and let’s say the data is 100 percent accurate, and you come up with some conclusion about it, it takes on a halo of, ‘well, that’s just the facts, that’s just how things are, that’s just the truth.’ And in a sense, it is just the truth, and AI is only going to come to conclusions based on, like you said, the data that it’s trained on. You see, the interesting thing about artificial intelligence is that it has a philosophical assumption behind it, which is that the future is like the past, and for many things that is true. A cat tomorrow looks like a cat today, so you can take a bunch of cats from yesterday, or a week ago, or a month, or a year, and you can train on them and it’s going to be correct. A cell phone tomorrow doesn’t look like a cell phone ten years ago, though, and so if you took a bunch of photos of cell phones from 10 years ago and trained an AI, it’s going to be fabulously wrong. And so, you hit the nail on the head.
The onus is on us to make sure that whatever we are teaching it is a truth that will be true tomorrow, and that is a real concern. There is no machine that can ‘sanity check’ that for you; you tell the machine, “This is the truth, now tell me about tomorrow,” so people have to get very good at that. Luckily there’s a lot of awareness around this issue: people who assemble large datasets are aware that data has a ‘best-by’ date that varies widely. For how to play a game of chess, it’s hundreds of years; that hasn’t changed. For what a cell phone looks like, it’s a year. So the trick is to be very cognizant of the data you’re using.
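As a hedged illustration of that ‘best-by date’ idea (the categories, shelf lives, and helper function below are hypothetical, invented purely for this sketch), one simple discipline is to tag each training example with when it was collected and how long its kind of truth tends to hold, and to drop stale examples before retraining:

```python
from datetime import date, timedelta

# Hypothetical shelf lives for different kinds of training examples, echoing the
# point above: chess knowledge barely ages, while photos of phones age quickly.
SHELF_LIFE = {
    "chess_position": timedelta(days=365 * 200),
    "phone_photo": timedelta(days=365),
}

def still_fresh(example: dict, today: date = date.today()) -> bool:
    """Keep only examples whose 'truth' is still likely to hold tomorrow."""
    return today - example["collected"] <= SHELF_LIFE[example["kind"]]

dataset = [
    {"kind": "phone_photo", "collected": date(2008, 6, 1)},     # a decade-old phone
    {"kind": "chess_position", "collected": date(1900, 1, 1)},  # still-valid chess knowledge
]
fresh = [ex for ex in dataset if still_fresh(ex)]
print(len(fresh))  # -> 1: the stale phone photo is dropped, the chess position stays
```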
I find the people who are in this industry are very reflective about these kinds of things, and this gives me a lot of encouragement. There have been times in the past where people associated with a new technology had a keen sense that it was something very serious, like the Manhattan project in the United States in World War II, or the computers that were built in the United Kingdom in that same period. They realized they were doing something of import, and they were very reflective about it, even in that time. And I find that to be the case with people in AI today.
I think that generally speaking, a lot of the companies that we’ve invested in this sector and in this stage of effectively narrow based AI, as you said, are going through and thinking through it. But what’s interesting is that I’ve noticed that there is a limit to what we can teach as metadata to data for machine learning algorithms to learn and evolve by themselves. So, the age-old argument is that you can’t build an artificial general intelligence. You have to grow it. You have to nurture it. And it’s done over time. And part of the challenge of nurturing or growing something is knowing what pieces of input to give it.
Now, if you use children as the best approximation of what we do, there are a lot of built-in features, including curiosity and a desire for self-preservation, that enable the acquisition of metadata, which then confirms or rewrites existing data as either valid or invalid, to use your cell phone example. How do you see us being able to tackle that when we’re inherently flawed in our ability to add metadata to existing data? Are we effectively never going to make it to artificial general intelligence because of our inability to add that additional color to data, so that it remains a very tarnished and limited utility?
Well, yes, it could very easily be the case, and by the way, that’s an extreme minority view among people in AI, I will just say that up front. I’m not representing the majority of people in AI, but I think that could very well be the case. Let me just dive into how we know what we know, how it is that we are generally intelligent. If I asked, “Does it hurt your thumb when you hit it with a hammer?” you would say “yes,” and then I would say, “Have you ever done it?” “Yes.” And then I would say, “Well, when?” And you likely can’t remember. So you’re right, we have data that we somehow learn from, and we store it, and we don’t know how we store it. There’s no place in your brain that holds ‘hitting your thumb with a hammer hurts,’ such that if I could somehow cut it out, you would no longer know that. It doesn’t exist that way. We don’t know how we do it.
Then we do something really clever. We know how to take data we know in one area and apply it to another area. I could draw a picture of a completely made up alien that is weird beyond imagination. And I could show that picture to you and then I could give you a bunch of photographs and say find that alien in these. And if the alien is upside down or underwater or covered in peanut butter, or half behind a tree or whatever, you’re like, “There it is. There it is. There it is. There it is.” We don’t know how we do that. So, we don’t know how to make computers do it.
And then if you think about it, if I were to ask you to imagine a trout swimming in a river, and imagine the same trout in a jar of formaldehyde and in a laboratory. “Do they weigh the same?” You would say, “yeah.” “Do they smell the same?” “Uh, no.” “Are they the same color?” “Probably not.” “Are they the same temperature?” “Definitely not.” And even though you have no experience with any of that, you instinctively know how to apply it. These are things that people do very naturally, and we don’t know how to make machines do them.
If you were to pose a question to a computer like, “Dr. Smith is having lunch at his favorite restaurant when he receives a phone call. Looking worried, he runs out the door, neglecting to pay his bill. Are the owners liable to call the police?” a human would say no. Clearly, he’s a doctor; it’s his favorite restaurant, so he must eat there a lot; he must’ve gotten an emergency call; he ran out the door forgetting to pay; we’ll just ask him to pay the next time he comes in. The amount of knowledge you have to have just to answer that question is complex in the extreme.
I can’t even find a chatbot that can answer [the question:] “What’s bigger, a nickel or the sun?” So to answer a question that requires this nuance and all of this inference and understanding, I do not believe we know how to build that now. That would be, I believe, a statement within the consensus. I don’t believe we know how to build it, and even if you were to say, “Well, if you had enough data and enough computers, you could figure that out,” it may just literally be impossible to enumerate every instantiation of every possibility. We don’t know how we do it. It’s a great mystery, and it’s even hotly debated whether, if we knew how we do it, we could build a machine to do it. I don’t even know that that’s the case.
I think that’s part of the thing that baffles me in your book, and I’m jumping around a little bit here. You do talk about consciousness, and you talk about sentience and how we know what we know, who we are, what we are. You talk about the dot test on animals and how they identify themselves as themselves. And with any engineering problem, sometimes you can conceive of a solution before the method by which to get there is worked out. You can conceive of the idea of flying; you just don’t know what combination of things you copy from birds, or from leaves, or whatever, will actually get you to that goal: flying.
The problem with this one, from an engineering point of view, is that this idea of having another human, or another human-like entity, that not only has consciousness but has free will and sentience as far as we can perceive it, [doesn’t recognize that] there are a lot of things you describe in your chapter on consciousness that we don’t even know how to qualify, and those are a huge catalyst in being able to create the metadata that structures data in a way that then gives the illusion and perception of consciousness. Maybe this is where you give me your personal opinion… do you think we’ll ever be able to create an answer to that engineering question, such that technology can be built around it? Because otherwise we might just be stuck on the formulation of the problem.
The logic that says we can build it is very straightforward and seemingly ironclad. The logic goes like this: if we figure out how a neuron works, we can build one, either physically build one or model it in a computer. And if you can model that neuron in a computer, then you learn how it talks to other neurons, and then you model 100 billion of them in the computer, and all of a sudden you have a human mind. So that says we don’t have to understand the mind, we just have to understand the physics. The position says that whatever a neuron does, it obeys the laws of physics, and if we can understand how those laws are interacting, then we will be able to build it. Case closed; there’s no question at all that it can be done.
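To make that ‘model a neuron in a computer’ step concrete, here is a hedged sketch using a textbook leaky integrate-and-fire neuron, one of the simplest standard abstractions of a neuron; the parameters are illustrative, and scaling this kind of model up with realistic connectivity is exactly the part that remains unsolved:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
    is pushed up by input current, and emits a spike when it crosses threshold."""
    v = v_rest
    spike_times = []
    for t, current in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * current) * (dt / tau)
        v += dv
        if v >= v_threshold:      # threshold crossed: record a spike and reset
            spike_times.append(t)
            v = v_reset
    return spike_times

# Step input: no current for the first 50 ms, then a constant drive.
current = np.where(np.arange(200) > 50, 2.0, 0.0)
print(simulate_lif(current))  # spike times; wiring 100 billion of these together
                              # is the part nobody knows how to do
```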
So I would say that’s the majority viewpoint. The other viewpoint says, “Well wait a minute, we have this brain that we don’t understand how it works. And then we have this mind, and a mind is a concept everybody uses and if you want a definition, it’s kind of everything your brain can do that an organ doesn’t seem like it would be able to. You have a sense of humor; your liver may not have a sense of humor. You have emotions, your stomach may not have emotions, and so forth.” So somehow, we have a mind that we don’t know how it comes about. And then to your point, we are conscious and what that means is we experience the world. I feel warmth, [whereas] a computer measures temperature. Those are very different things and we not only don’t know how it is that we are conscious, we don’t even know how to ask the question in a scientific method, nor what the answer looks like.
And so, I would say my position to be perfectly clear is, we have brains we don’t understand, minds we don’t understand and consciousness we don’t understand. And therefore, I am unconvinced that we can ever build something like this. And so I see no evidence that we can build it because the only example that we have is something that we don’t understand. I don’t think you have to appeal to spiritualism or anything like that, to come to that conclusion, although many people would disagree with me.
Yeah, it’s interesting. I think one thing underlying the pessimistic view is the belief that, while we may not have the technology now or an idea of how we’re going to get there, the kinetics of an AI explosion (that’s what I think the philosopher Nick Bostrom has called it) may be pretty rapid. Once there is material success in developing these AI models, that will encourage researchers to pile on, which brings in more people to produce those models; and secondly, there may be advancements in self-improving AI models. So there’s a belief that we may get superintelligence pretty quickly, and that underlies this pessimism and the belief that we have to act now. What would be your thoughts on that?
Oh, well I don’t agree. I think that’s the “Is that a bear or a rock?” kind of thing. The only evidence we really have for that scenario is movies, and they’re very compelling and I’m not conspiratorial, and they’re entertaining. But what happens is you see that enough, and you do something that has a name, it’s called ‘reasoning from fictional evidence’ and that’s what we do. Where you say, “Well, that could happen, and then you see it again, and yeah, that could happen. That really could again.” Again, and again and again.
To put it in perspective, when I say we don’t understand how the brain works, let me be really clear about that. Your brain has 100 billion neurons, roughly the same number as there are stars in the Milky Way. You might say, “Well, we don’t understand it because there are so many.” This is not true. There’s a creature called the nematode worm. It’s about as long as a hair is thick, and its brain has 302 neurons. These are the most successful creatures on the planet, by the way; seventy percent of all animals are nematode worms, and 302 neurons, that’s it, [about] the number of pieces of cereal in a bowl of cereal. So, for 20 years, a group of people in something called the OpenWorm project has been trying to model those 302 neurons in a computer to get it to display some of the complex behavior that a nematode worm does. Not only have they not done it, there’s even a debate among them about whether it is even possible. So that’s the reality of the situation. We haven’t even gotten to the mind.
We haven’t even gotten to how it is that we’re creative, or how it is that we experience the world. We’re just talking about how a brain works, and with only 302 neurons, a bunch of smart people working for 20 years haven’t managed it, and it may not even be possible. So to spin a narrative that, well, that may all be true, but what if there were a breakthrough, and then it sped up on itself, and sped up, and got smarter, and got so smart it had an IQ of 100, then a thousand, then a million, then 100 million, and then it doesn’t even see us anymore: that’s as speculative as any other scenario you want to come up with. It’s so removed from the facts on the ground that you can’t rebut it, because it is not based on any evidence that you can refute.
You know, the fun thing about chatting with you, Byron, is that the temptation is to sort of jump into all these theories and which ones are your favorites. So because I have the microphone, I will. Let me just jump into one. Best science fiction theory that you like. I think we’ve touched on a few of these things, but what is the best unified theory of everything, from science fiction that you feel like, ‘you know what, this might just explain it all’?
Star Trek.
Okay. Which variant of it? Because there’s not…
Oh, I would take either… I’ll take ‘The Next Generation.’ So, what is that narrative? We use technology to overcome scarcity. We have bumps all along the way. We are insatiably curious, and we go out to explore the stars, as Captain Picard told the man who had been thawed out from the 20th century. He said the challenge in our time is to better yourself, to discover who you are. And what we found, interestingly, with the Internet, and sure, you can list all the nefarious uses you want, is that the minute you make blogs, 100 million people want to tell you what they think. The minute you make YouTube, millions of people want to upload video; the minute you make iTunes, music flourishes.
I think in my father’s generation, they didn’t write anything after they left college. We wake up in the morning and we write all day long; you send emails constantly. So what we have found is that it isn’t that only a few people ever wanted to paint or cared to paint, like in the Italian Renaissance; probably everybody did. There just wasn’t enough of the good stuff, so you only got to paint if you had extreme talent or extreme wealth.
Well, in the future, in the Star Trek variant of it, we’ve eliminated scarcity through technology, and everybody is empowered: every Dante to write their Inferno, every Marie Curie to discover radium, and all of the rest. And so, in that vision of the future, you know, Gene Roddenberry said that in the future there will be no hunger and there will be no greed, and all the children will know how to read. That variant of the future is the one that’s most consistent with the past. That’s the one where you can say, “Yeah, somebody in the 1400s looking at our life today, that would look like Star Trek to them. These people push a button and the temperature in the room gets cooler, and they have leisure time. They have hobbies.” That would’ve seemed like science fiction.
I think there’s a couple of things that I want to tackle with the Star Trek analogy to get us sort of warmed up on this and I think Kyran’s waiting here at the top to ask some of them, but I think the most obvious one to ask, if we use that as a parable of the future, is about Lieutenant Commander Data. Lieutenant Commander Data is one of the characters starring in The Next Generation and is the closest attempt to artificial general intelligence, and yet he’s crippled from fully comprehending the human condition because he’s got an emotion chip that has to be turned off because when it’s turned on, he goes nuts; and his brother is also nuts because he was overly emotional. And then he ends up representing every negative quality of humanity. So to some extent, not only have I just shown off about my knowledge of the Star Trek era…
Lore wasn’t overly emotional. He got the chip that was meant for Data, and it wasn’t designed for him. That was his backstory.
Oh, that’s right. I stand corrected, but maybe you can explore that. In that future, walk us through why you think Gene gave Data that level of limitation, and whether that’s an indication of the ultimate limits of what we can expect from robots.
Well, obviously that story is about…that whole setup is just not hard science, right? That whole setup is, like you said, embodying us; it’s the Pinocchio story of Data wanting to be a boy and all the rest. So it’s just storytelling as far as I’m concerned. You know, it’s convenient that he has a positronic brain and that, with part of his scalp removed, you just see all this light coursing through, but that’s not something that science is behind, any more than Warp 10 or the tricorder. You know, Uhura in the original series had a Bluetooth device in her ear all the time, right?
Yeah, but with the Data metaphor, I guess what I’m asking is this: the limitations that prevented Data from doing some of the things that humans do, and therefore from ultimately coming full circle into being a fully independent, conscious, free-willed, sentient being, were entirely down to human elements he was lacking. I guess the question, and you brought it up in your book, is whether we need those human elements to really drive that final conversion of a machine into some sort of entity that we can respect as an equivalent peer to us.
Yeah. Data is a tricky one because he could not feel pain, so you would say he’s not sentient. And to be clear, ‘sentient’ is often misused to mean ‘smart’; that’s ‘sapient.’ Sentient means you can experience pain. He didn’t, but as you said, at some point in the show he experienced emotional pain through that chip, and therefore he is sentient. They had a whole episode about “Does Data have a soul?” And you’re right, I think there are things that humans do that are hard to explain…unless you start with the assumption that everything in a human being is mechanistic and physical, that you’re a bag of chemicals with electrical impulses going through you.
If you start with that, then everything has to be mechanical, but most people don’t see themselves that way, I have found. So if there is something else, something emergent or otherwise, going on, then yeah, I believe that has to be wrapped up in our intelligence. That being said, I think everybody has had the experience of driving along, kind of spacing out, and then ‘coming to’ and thinking, “Holy cow, I’m three miles along. I don’t remember driving there.” Yet you behaved very intelligently. You navigated traffic and did all of that, but you weren’t really conscious. You weren’t experiencing the world, at least not much. That may be the limit of what we can do, because that person, during the three minutes they were spaced out, also didn’t write a new poem or do anything creative. They just mechanically went through the motions of driving. That may be the limit. That may be that last little bit that makes us human.
The Star Trek view has two pieces to it. It has a technological optimism, which I don’t contest; I think I’m aligned with you in agreeing with that. There’s also an economic or social optimism, and that’s about how the technology is owned: who owns the means of production, who owns the replicators. When it comes to that, how precarious do you think the Star Trek universe is, in the sense that if the replicators are only in the hands of a certain group of people, if they’re so expensive that only a few people own them, or only a few people own the robots, then it’s no longer such an optimistic scenario? I’d just be interested in hearing your views there.
You’re right that the replicator is a little bit of a convenient…I don’t want to say it’s a cheat, but it’s a convenient way to get around scarcity, and they never really go into how it is that anybody could go to the library and replicate whatever they wanted. Like, how did they get that? I understand those arguments. We have a world where the ability of a person using technology to affect a lot of lives goes up, and that’s why we have more billionaires. We have more self-made billionaires now; a higher percentage of billionaires are self-made now than ever before. You know, Google and Facebook together made 12 billionaires. The ability to make a billion dollars gets easier and easier, at least for some people (not me), because technology allows them to multiply their effort and affect more lives, and you’re right, that does tend to make more super, super, super rich people. But I think the income inequality debate maybe needs a slight bit of focus.
To my mind it doesn’t matter all that much how many super rich people there are. The question is: how many poor people are there? How many people have a good life? How many people can get medical care? If I could get everybody to that state, but I had to make a bunch of super rich people to do it, then absolutely, we’ll take that. So I think income inequality by itself is a distraction.
I think the question is how do you raise the lot of everybody else and what we know about technology is that it gets better over time and the prices fall over time. And that goes on ad infinitum. Who could have afforded an iPhone 20 years ago? Nobody. Who could have afforded the cell phone 30 years ago? Rich people. Who could have afforded any of this stuff all these years ago? Nobody but the very rich, and yet now because they get rich, all the prices of all that continue to fall and everybody else benefits from it.
I don’t deny there are all kinds of issues. You have your Hepatitis C treatment that costs $100,000, and there are a lot of people who need it and only a few people are going to [get it]. There are all kinds of things like that, but I would just take some degree of comfort that, if history has taught us anything, it is that the price of anything related to technology falls over time. You probably have 100 computers in your house. You certainly have dozens of them, and who from 1960 would have ever thought that? Yet here they are. Here we are in that future.
So, I think you almost have to be conspiratorial to say, yeah, we’re going to get these great new technologies, and only a few people are going to control them and they’re just going to use them to increase their wealth ad infinitum. And everybody else is just going to get the short end of the stick. Again, I think that’s playing on fear. I think that’s playing on all of that, because if you just say, “What are the facts on the ground? Are we better off than we were 50 years ago, 100 years ago, 200 years ago?” I think you can only say “yes.”
Those are all very good points, and I’m actually tempted to jump around a little bit in your book and maybe revisit a couple of ideas from the narrow AI section, but maybe what we can do is merge the question about robot-proofing jobs with some of the stuff that you’ve talked about in the last part, which is the road from here.
One of the things that you mentioned before is this general idea that the world is getting better, no matter what. The things we just discussed about iPhones and computers becoming more and more accessible are an example of it. You talked about the section on ‘murderous meerkats,’ where even things like crime are improving over time, and therefore there is no real reason for us to fear the future. But at the same time, I’m curious whether you think there is a decline in certain elements of society which we aren’t factoring into that dataset of positivity.
For example, do we feel that there is a decline in social values in the current era? Things like helping each other out, or looking out for the collective rather than the individual, have come and gone, and we’re now starting to see the manifestations of that through some of the social media and how it represents itself. I just wanted to get your ideas on the road from here, and whether you would revisit them if somebody were to show you sociologists’ research on the decline of social values and how that might affect the kinds of jobs humans will have in the future versus robots.
So I’m an optimist about the future. I’m clear about that. Everything is hard. It’s like me talking about my companies: everything’s a struggle to get from here to there. I’m not going to try to spin every single thing. I think these technologies have real implications for people’s privacy, and they’re going to affect warfare, and there are all these real problems that we’re really going to have to think about. The idea that somehow these technologies make us less empathetic, I don’t agree with. And you can just run through a list of examples: everybody kind of has a cause now. Everybody has some charity or thing that they support. Volunteerism and GoFundMes are up…People can do something as simple as post a problem they have online, and some stranger who will get nothing in return is going to give them a big, long answer.
People toil on a free encyclopedia, and they toil in anonymity. They get no credit whatsoever. We had the ‘open source’ movement. Nobody saw that coming. Nobody said, “Yeah, programmers are going to work really hard and write really good stuff and give it away.” Nobody said we’re going to have Creative Commons, where people create digital things and give them away. Nobody said, “Oh yeah, people are going to upload videos on YouTube and just let other people watch them for free.” Everywhere you look, technology empowers us and our benevolence.
To take the other view is a “Kids these days!”, cane-shaking, “Get off my grass!” kind of view: that things are bad now and they’re getting worse. Which is what people have said for as long as people have been reflecting on their age. And so I don’t buy any of that. In terms of jobs specifically, I’ve tried hard to figure out what the half-life of a job is, and I think every 40 or 50 years, half of all the jobs vanish. Because what does technology do? It creates great new high-paying jobs, like geneticist, and it destroys low-paying, tedious jobs, like order taker at a fast food restaurant.
And what people sometimes say is, “You really think that order taker is going to become a geneticist? They’re not trained for these new jobs.” And the answer is, “Well, no.” What’ll happen is a college professor will become a geneticist, a high school biology teacher gets the college job, and the substitute teacher gets hired for the high school job, all the way down. The question isn’t, “Can the person who lost their job to automation get one of these great new jobs?” The question is, “Can everybody on the planet do a job a little harder than the job they have today?” And if the answer to that is yes, then every time technology creates great new jobs, everybody down the line gets a promotion. And that is why, for 250 years, we have had full employment in the West: unemployment, other than during the Depression, has always been 5 to 10 percent…for 250 years.
Why have we had full employment for 250 years and rising wages? Even when something like the assembly line came out, or when we replaced all the animal power with steam, you never had bumps in unemployment, because people just used those technologies to do more. So yes, in 40 or 50 years half the jobs are going to be gone; that’s just how the economy works. The good news, though, is that when I think back to my K-12 education and ask, if I had known the whole future, what would I have taken then that would help me today, I can only think of one thing that I really missed out on. Can you guess what it was, by the way?
Computer education?
No, because anything they taught me then would no longer be useful. Typing. I should’ve taken typing. Who would have thought that would be the skill I need most every day? But I didn’t know that. So you have to say, “Wow, everything that I do in my job today is not stuff I learned in school.” What we all do now is hear a new term or concept, google it, click on that, go to Wikipedia, follow the links, and then it’s 3:00 in the morning, and you wake up the next day and you know something about it. That’s what every single one of us does, what every single one of us has always done, and what every single one of us will continue to do. And that’s how the workforce morphs. It isn’t that we’re facing some cataclysmic disconnect between our education system and our job market. It’s that people are going to learn to do the new things, as they learned to be web designers and learned every other thing that they didn’t learn in school.
Yeah, we’d love to dive into the economic arguments in a second, but just to bring it back to your point that technology is always empowering, I’m going to play devil’s advocate and mention someone we had on the podcast about a year ago: Tristan Harris, who leads an initiative called ‘Time Well Spent,’ and whose argument was that the effects of technology can be nefarious. Two days ago there was a New York Times article, referring to a research paper on the statistical relationship between social media and anti-refugee violence in Germany, and one of the biggest correlating factors was time spent on social media, suggesting that it isn’t always beneficial or benign for humans. What is your take on that?
So, is your point that social media causes people to be violent, or is the interpretation that people prone to violence are also prone to using social media?
Maybe one variant of that, and Kyran can provide his own, is that the good is getting better with technology and the bad is getting badder with technology. You just hope that one doesn’t detonate something that is irreversible.
Well, I will not uniformly defend every application of technology to every single situation. I could rattle off all the nefarious uses of the Internet, right? I mean bilking people; you know them all, you don’t need me to list them. The question isn’t, “Do any of those things happen?” The question is, “On balance, are more people using the Internet for good than for evil?” And we know the answer is ‘good.’
It has to be, because if we were more evil than good as a species, we never would have survived this long. We’re highly communal. We’ve only survived because we like to support each other; forget about all the wars, granted, all of the problems, all the social strife, all of that. In the end, you’re left with the question, “How did we make progress to begin with?” And we made progress because there are more people working for progress than there are people carrying torches and doing all the rest. It’s just that simple.
I guess I’m not qualified to make this statement, but I’m going to go ahead and do it anyway. Humans have those attributes because we’re inherently social animals, and as a consequence we’re driven to survive and to forego being right at times, because we value the success of the social structure more than we value ourselves. There are always going to be deviations from that, but on average it shows itself in the way that you have articulated.
And that’s a theory that I have. You can let me know whether you accept it or not, but for the sake of the question, let’s just assume that it’s correct. How do you impart that onto a collection of artificial intelligences so that they mirror it? And as we start delegating more and more to those collective artificial intelligences, can we rely on them to have that same drive when they’re no longer as socially dependent on each other the way humans are, for reproduction and defense and emotional validation?
That could well be the case, yes. I mean, we have to make sure that we program them to reflect an ethical code, and that’s an inherently hard thing to do, because people aren’t great at articulating ethical codes, and even when they do, the codes are full of provisos and exceptions, and everybody’s is different. But luckily, there are certain broad concepts that almost everybody agrees with: that life is better than death, that building is better than destroying. There are these very high-level concepts that we will need to take great pains over in how we build our AIs, and this is an old debate, even in AI.
There was a man named Joseph Weizenbaum, who made a chatbot called ELIZA in the sixties. It was simple. You would say, “I’m having a bad day today,” and it would say, “Why are you having a bad day?” “I’m having a bad day because of my mother.” “Why are you having a bad day because of your mother?” Back and forth. Super simple. Everybody knew it was a chatbot, and yet he saw people getting emotionally attached to it, and he kind of turned on it.
He came to believe that when the computer says ‘I understand,’ it’s just a lie: there is no ‘I,’ and there is no understanding. And he came to believe we should never let computers do those kinds of things. They should never be recipients of our emotions. We should never make them caregivers and all of these other things, because in the end they don’t have any moral capacity at all. They have no empathy; they have faked empathy, simulated empathy. So I think there is something to that: there will simply be jobs we’re not going to want them to do, because in the end those jobs are going to require a person, I think.
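For readers who have never seen one of these early chatbots, here is a minimal, purely illustrative Python sketch of the kind of keyword-and-reflection trick described above. This is not Weizenbaum’s actual code, just the general idea of echoing a statement back as a question; the word list and phrasing are my own assumptions.

```python
# A toy sketch of an ELIZA-style reflection bot: it never "understands"
# anything, it just mirrors the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "i'm": "you're", "me": "you"}

def reflect(phrase: str) -> str:
    """Swap first-person words for second-person ones ('my mother' -> 'your mother')."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(user_input: str) -> str:
    """Turn 'I'm having a bad day because of my mother' into
    'Why are you having a bad day because of your mother?'"""
    text = user_input.strip().rstrip(".!")
    for marker in ("i'm ", "i am "):
        if text.lower().startswith(marker):
            return f"Why are you {reflect(text[len(marker):])}?"
    return f"Why do you say: {reflect(text)}?"

if __name__ == "__main__":
    print(respond("I'm having a bad day today"))
    print(respond("I'm having a bad day because of my mother"))
```

Even this few-line trick is enough to produce the exchange quoted in the interview, which is part of why people anthropomorphized the original program so readily.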
You see, any job a computer could do, a robot could do. If you make a person do that job, there’s a word for that: dehumanizing. If a machine can, in theory, do a job, and you make a person do it, that’s dehumanizing. You’re not using anything about them that makes them a human being; you’re using them as a stand-in for a machine, and those are the jobs machines should do.
But then there are all the other jobs that only people can do, and those are what I think people should do. I think there are going to be a lot of things like that which we are going to be uncomfortable with, and we still don’t have any idea how to handle them. Like, when you’re on a chatbot, you need to be told it’s a chatbot. Should robotic voices on the phone actually sound somewhat robotic, so you know it’s not a person? Think about R2-D2 or C-3PO, and then think how different it would be if their names were Jack and Larry. That’s a subtle difference in how we regard them, and we don’t know yet how we’re going to handle it, but you’re entirely right. Machines don’t have any empathy, they can only fake it, and there are real questions about whether that’s good or not.
Well, that’s a great way of looking at it, and one of the things that’s been really great during this chat is understanding the origin of some of these views and how you end up, on average, at this positive outcome at the end of the day. The book does a really good job of leaving the reader with that thought in mind while arming them to have these kinds of engaging conversations. So thanks for sharing the book with us and for providing your perspective on different elements of it.
However, it’d be great to get some thoughts about things that inspired you or that you left out of the book. For example, which movies have most affected you in the vein of this particular book? What are your thoughts on a TV show like Westworld and how it illustrates the development of the mind of an artificial intelligence? Maybe just share a little about how your thoughts have evolved.
Certainly, and I would also like to add that I do think there’s one way it can all go south. I think there is one pessimistic future, and it will come about if people stop believing in a better tomorrow. I think pessimism is what will get us all killed. The reason optimism has been so successful is that there have been a number of people who get up and say, “Somebody needs to invent the blank. Somebody needs to find a cure for this. Somebody needs to do it. I will do it.” And you have enough people who believe, in one form or another, in a better tomorrow.
There’s a mentality of “don’t polish the brass on a sinking ship,” where you just say, “Well, what’s the point? Why bother?” And if enough people say “Why bother?”, that better world doesn’t get built. We’re going to have to build it, and just like I said earlier with my companies, it’s going to be hard. Everybody’s got to work hard at it. So it’s not a gift, it’s not free. We’ve clawed our way from savagery to civilization, and we’ve got to keep clawing. But the interesting thing is that, finally, I think there is enough of the good stuff for everybody. And you’re right, there are big distribution problems around that, and there are a lot of people who aren’t getting any of the good stuff, and those are all real things we’re going to have to deal with.
When it comes to movies and TV, I have to see them all, because everybody asks me about them on shows. So I go see them. And I used to loathe going to all the pessimistic movies that have far and away dominated…In fact, thinking of Black Mirror, I’ve even started writing out story ideas in my head for a show I call ‘White Mirror.’ Who’s telling the stories about how everything can be good in the future? That doesn’t mean they’re bereft of drama. It just means there’s a different setting in which to explore these issues.
I used to be so annoyed at having to go to all of these movies. I would go to see some movie like Elysium and then be like, yeah, they’re the 99 percent, yeah, they’re poor and beaten down. Yeah, they’re covered in dirt. And now, yeah, the 1 percent, I bet they live in someplace high up in the sky, pretty and clean. Yeah, there that is. And then, you know, you see Metropolis, the most expensive movie ever made, adjusted for inflation, from almost a century ago. And yeah, there are the 99 percent. They’re dirty, they’re covered in dirt, everybody forgets to bathe in the future. I wonder where the…oh yeah, the one percent, yeah, they live in that tower up there. Oh, everything up there is white and clean. Wow. Isn’t that something. And I have to sit through these things.
And then I read a quote by Frank Herbert, who said that sometimes the purpose of science fiction is to keep the future from happening. And I said, okay, these are cautionary tales. These are warnings, and now I view them all like that. So I think there are a lot of cautionary tales out there and very few things like Star Trek. You heard me answer that so quickly because there aren’t a lot of positive views of the future in science fiction. It just doesn’t seem to be as rich a ground for telling stories, and even in that world you had to have the Ferengi, and the Klingons, and the Romulans, and so forth.
So, I’ve watched them all, and you know, I enjoy Westworld as much as the next person. But I also realize those are people playing those androids, and nobody can build a machine that does any of that. So it’s fiction. It’s not speculative, in my mind; it’s pure fiction. That’s what they are, and that doesn’t mean they’re any less enjoyable…When I ask people on my AI podcast what science fiction influenced them, almost all of them say Star Trek. That was a show that inspired people, and so I really gravitate toward things that inspire me with a vision of a better tomorrow.
For me, if I had to answer that question, I would say The Matrix. And I think that it brings up a lot of philosophical questions and even questions about reality. And it’s dystopian in some ways I guess, but in some ways, it illustrates how we got there and how we can get out of it. And it has a utopian conclusion I guess, because it’s ultimately in the form of liberation. But it is an interesting point you make.
And it actually makes me reflect back on all the movies that I’ve seen, and it brings up another question, which is whether it’s just representative of the times. Because if you look at art and literature over the years, in many ways they are inspired by what’s going on during that era. You can see bouts of optimism after the resolution of some conflict, and then the brewing of social upheaval that ends up in some sort of conflict, and you see that across the decades, and it is interesting. And I guess that raises a moral responsibility for us not to generate the most intense set of innovations around artificial intelligence at a point where society is quite split, because we might inject unfortunate conclusions into AI systems just because of where we are in our geopolitical evolution.
Yeah. I call my airline of choice once a week to do something, and it asks me to state my member number, which unfortunately has an A, an H, and an 8 in it, and it never gets it right. So that’s what people are trying to do with AI today: make a lot of really tedious stuff less tedious. (And use caller ID, by the way; I always call from the same number. But that’s a different subject.)
And so most of the problems that we try to solve with it are relatively mundane, and most of them are about how we stop disease and all of these very worthwhile things. It’s not a scary technology. It’s: study the past, look for patterns in data, project into the future. That’s it. And anything around it that tries to make it terrifying, I think, is sensationalism. I think the responsibility is to tell the story about AI like that, without the fear, emphasizing all the good that can come out of this technology.
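To make that “study the past, look for patterns, project into the future” framing concrete, here is a deliberately tiny Python sketch of my own (it is an illustration of the general idea, not anything from the interview or the book, and the numbers are made up): fit a trend to historical observations, then extend it one step ahead.

```python
# A toy version of "study the past, find the pattern, project forward":
# ordinary least-squares fit of a straight line, then extrapolation.
def fit_line(xs, ys):
    """Fit y = a*x + b to the historical points and return (a, b)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "The past": yearly observations of some quantity (made-up numbers).
years = [2014, 2015, 2016, 2017]
values = [10.0, 12.1, 13.9, 16.2]

a, b = fit_line(years, values)
# "Project into the future": apply the learned pattern to a year we haven't seen.
print(f"Projected value for 2018: {a * 2018 + b:.1f}")
```

Real systems use far richer models than a straight line, but the loop is the same: historical data in, pattern out, projection forward.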
What do you think we’ll look back on 50 years from now and think, “Wow, why were we doing that? How did we get away with that?”, the way we look back today on slavery and think, “Why the hell did that happen?”
Well, I will give an answer to that, and it’s not my own personal axe to grind. To be clear, I live in Austin, Texas; we have barbecue joints here in abundance. But I believe that we will learn to grow meat in a laboratory, and it will not only be massively better environmentally, it will taste better and be cheaper and healthier and everything. So I think we’re going to grow all of our meat, and maybe even all of our vegetables, by the way; why do you need sunlight and rain and all of that? But put that aside for a minute. I think we’re going to grow all of our meat in the future, and I don’t know, if you grow it from a cell, whether it still counts as vegan to eat it. Maybe it does, strictly speaking, I don’t know. But I think once the best steak you’ve ever had in your life is 99 cents, everybody’s just going to have that.
And then we’ll look back at how we treated animals with a sense of collective shame, because the question is, “Can they feel?” In the United States, up until the mid-90s, veterinarians were taught that animals couldn’t feel pain, and so they didn’t anesthetize them. In the same era, doctors operated on babies without anesthesia because it was believed they couldn’t feel pain. Now I think people care whether the chicken that they’re eating was raised humanely. So I think that expansion of empathy to animals, which most people now believe do feel pain and do experience sadness or something that must feel like it, will collide with the fact that we essentially keep them in abhorrent conditions and all of that.
And again, I’m not grinding my own axe here. I don’t think it’s going to be a matter of people changing overnight. I think what’s going to happen is that there’ll be an alternative, the alternative will be so much better that everybody will use it, and then we’ll look back and think, how in the world did we ever do that?
No, I agree with that. As a matter of fact, we’ve invested in a company that’s trying to solve that problem. I’ll put it in the show notes, because they’re in stealth right now, but by the time this interview goes to print, hopefully we’ll be able to talk about them. But yes, I agree with you entirely, and we’ve put our money behind it, so I’m looking forward to that being one of the issues that gets solved. Now, another question: what’s something that you used to strongly believe in that you now think you were fundamentally misguided about?
Oh, that happens all the time. I didn’t set out to write a book that doesn’t really say what I think and is just a framework; I wrote the book to try to figure out what I think, because I would hear all of these proclamations about these technologies and what they could do. I think I used to be way more in the AGI camp, believing this is something we’re going to build and we’re going to have those things, like on Westworld. (This was before Westworld, though.) I was much more in that camp until I wrote the book, which changed me. I can’t say I disbelieve it, that would be the wrong way to say it, but I see no evidence for it. I used to buy that narrative a lot more, and I didn’t realize it was less a technological opinion and more a metaphysical one. Working through all of that, and understanding all of the biases and all of the debate, was very humbling, because these are big issues, and what I wanted to do, like I said, is make a book that helps other people work through them.
Well it is a great book. I’ve really enjoyed reading it. Thank you very much for writing it. Congratulations! You’re also the longest podcast we’ve ever recorded, but it’s a subject that is very dear to me, and one that is endlessly fascinating, and we could continue on, but we’re going to be respectful of your time, so thank you for joining us and for your thoughts.
Well, thank you. Anytime you want me back, I would love to continue the conversation.
Well, until next time guys. Bye. Thanks for listening. If you enjoyed the podcast, don’t forget to subscribe on iTunes and SoundCloud and leave us a review with your thoughts on our show.
from Gigaom https://gigaom.com/2018/10/11/this-much-i-know-byron-reese-on-conscious-computers-and-the-future-of-humanity/
Text
Misnomer: American-Asian or Asian-American?
Dear Mr. Michael Luo,
In Chronological Order:
1. “Go back to your country, where you came from.” From a tall white girl in gym class junior year, when I was apparently too good at tennis, with my groundstrokes, for her liking. I was on the varsity team. I’m pretty sure nobody ever said that to the Russian girl in my year who played first singles on the professional circuit and later went to college on a tennis scholarship.
2. We’re watching the movie Wedding Crashers at a screening in the upper East 90s of NYC. In the scene where all the female bodies/conquests from various weddings are heaped one after another onto the bed, nobody says a word when a bombshell white girl or black girl appears in lingerie, falling onto the bed. Men titter and snicker when an Asian one appears in bed.
3. A schizophrenic, homeless-appearing man on the Chicago subway, muttering/chanting about “an Asian man and his whore,” when my oblivious elderly father leaned forward perhaps a little too closely for his liking while standing in front of the seated man in the crowded train. It took me a while to figure out what was going on, and I don’t know what my face must have looked like when I realized what he was saying. I turned around and saw a white-collar, youngish, white, “normal” man surveying me with taunting and leisurely pleasure.
4. I meet my old college friend at a Bulls game, along with her brother-in-law, who is from the Midwest. My friend is from the South. He clearly regards her as the smart young professional lady. When he finds out I’m a physician, he conveys through conversation and body language, “Well, what do you know, look at this Asian girl, she’s pretty smart too, isn’t she?” When I tell her this years later, she tells me uncomfortably, “I’m sorry you feel that way.”
5. “You’re not a typical Asian. You’re not like most Asians.”
6. “I remember our high school valedictorian. I know what Asian girls are like.” From an Indian-American MD recounting his high school experience in southern California.
7. “I went through an Asian female phase in college.” A Pakistani-American physician recounts his Harvard days, in a chuckling, knowing, authoritative tone.
8. “The young…Asian…female…physician…” In a university hospital, stated by an older white male physician at a multi-disciplinary tumor board with full theatrical effect.
9. “You’re a typical Asian girl.” – note in relation to #5. Said by an older Asian man who did not find me suitable for various reasons.
10. We’re watching Ex Machina in a NY movie theatre. The Swedish Alicia Vikander is a symbolic stand-in for the woman figure of Eve. The Asian one is portrayed as a mindless, submissive sex slave. Men snicker every time the Asian one appears on screen.
11. “That’s nothing when you compare it to what African-Americans have to go through.” From an old African-born college friend who implies that instead of his experience being a portal to empathize with other forms of injustice, Asians don’t really deserve to have a voice because it’s not so bad for them.
12. “Asian-born.” Stated within 5 minutes of a job interview in middle America, while the partner MD who was raised in middle America but with an Ivy League pedigree states how much he misses the food choices in the city, and asks me if I cook, and when I say I don’t but my mother does, brazenly says to me that I should have my mother cook and bring him some Asian food.
13. “People from the Orient are like that.” From an older male patient, and he glances at me quickly while articulating the word “Orient.”
14. I walk into a patient room in my white coat and the first thing the patient says is, “You’re a young girl.” Call me solipsistic, but I’m pretty sure my non-Asian young female MD colleagues don’t really have this problem or nearly as often.
15. “Don’t forget your heritage.” From a German-born elderly patient who speaks pridefully about her own and looks at me as if trying to give well-intentioned cultural advice. I don’t have the heart to tell her my heritage is American.
16. “I notice Asian girls tend to be very cutesy.” From an elderly female patient who clearly considers herself a progressive cosmopolitan, for her region of the country, as she shrewdly looks me up and down. When I deny the obvious implication that I am “cutesy” only because I really am not, although I don’t understand the relevance of her statement in our context as patient and physician, she cuts me off saying “It’s ok if you are…”
17. “Tiger mother.”
It’s been a while since you first published your pivotal article in the NY Times: https://www.nytimes.com/2016/10/10/nyregion/to-the-woman-who-told-my-family-to-go-back-to-china.html. Others have also brought it to attention: https://planamag.com/a-matter-of-class-nydia-han-and-michael-luo/. But some things stay with you a long time after you’ve read them. I’ve also perused the kinds of articles from many others that surfaced after yours. Surveying that wave, I salute you for the courage to do what you did, which was to tactfully bring attention to a situation that has been swept under the table too long and too often, from a position where you potentially had something to lose by divulging your experience instead of pretending to be a good sport about it, which we know is the custom. I know enough people who make good money, enjoy partial (if incomplete) social and professional prestige, and pretend this sort of thing doesn’t happen to them. At this point, I’ve realized that trying to figure out whether something is worth being personally offended about is irrelevant. Something needs to be done for the status quo to CHANGE.
Some stereotypes are true, just as they are for African-Americans and Hispanics, but what’s unfair is that while other races are at least given the public right to be outraged, and people know they just can’t say certain things out loud even if they’re thinking them, many people apparently think it’s all right to walk all over Asians. They’re just looking at it the “wrong” way, or taking it too personally. But why are African-Americans given the right to be “personal” about it? I see... Asians are missing out on the affirmative action, not just academically and professionally, but socially. It’s partly our fault – we generally don’t give people a hard enough time up front often enough.
While the Harold and Kumar movies were a step in the right direction for Asian-American men, was it really a good thing to be portrayed as the wussy Asian boy who learned some courage by hanging out with the “cool,” ballsy, smart-but-lazy Indian leading man who was already clearly above white dudes? Are Asians only given a semblance of a voice when their example is a stepping stone and sidekick for the evolution of more “dominant” races of color? But while unchecked, naïve anger and resentment alone may not be unjustified, they are clearly not necessarily effective for change, because they can make people look like sore losers. This isn’t always the case, but while women who speak out against the Harvey Weinsteins are portrayed as courageous, people who speak out about racism against Asian-Americans are often perceived as losers who can’t handle reasonable pressure, haven’t succeeded properly at “fitting in,” and should seek help. Somewhere, there might be another Virginia Tech case brewing, and although the media coverage of that incident was shameful, it centered on an unstable Asian-American boy who was pushed over the precipice by an African-American (among others) who should have known better, as people of color and as adults. America needs to remember it lost a lot of lives that day. For that kid, it wasn’t just about being mentally deranged or predisposed, as it may have been for Columbine and other shootings; it was the Asian-American experience that bred a monster, and that is something this country needs to take some responsibility for.
This is not about myself. I’m at peace, and I’ve learned how to handle myself and situations. I say this because I don’t feel guilty that I embrace my American-ness, no matter how others might perceive me, in the past, now, or in the future, because it’s not necessarily the Asian-ness that I should feel compelled not to deny, but the American-ness that many people seem bent on denying me. I’ve traveled through over 30 American states and over 15 countries, but America is what I know best, love, and embrace, not Asia, not Scandinavia, not South America, not Africa. I don’t feel compelled to have to explain that, no matter what anybody says or does, any more.
This is not just about the next generation of Asian-Americans, the halfbloods, the fullbloods, the quarterbloods. This is just as much about all the other Americans who miss the humanity that binds us all together.
In any community, there is always a dominant group or race. The goal should not be to become the dominant one or necessarily to assimilate (although there’s no need to apologize for those who want to assimilate – it’s a critical step in the immigrant experience), but to learn to coexist with real respect and a certain degree of independence, whatever one’s choice of identity is. Feminism at first was about bra-burning, but it then became about the choice to be a homemaker if that’s what a woman wanted. The same could someday be said of the Asian-American experience, I hope.
This concept extends globally as well. China has obviously become an economic superpower, but I was incredulous when a Swiss girl I met in Iceland told me in a dismissive and disparaging tone, “Oh the Chinese, they just go on vacation a lot because they like that, it’s not necessarily because they’re rich.” It is curious that Japan and Germany, which are no longer superpowers nor necessarily the most culturally sophisticated, and which were enemies of the Allies, have always been and continue to be generally respected by most people in this country and abroad, or at least more so than other Asian races. What are the conditions for human respect, to give and to receive it, and how can we apply them in social and educational settings? That may be a question worth contemplating further by the great thinkers of this country and world.
Martin Luther King said, “Intelligence plus character – that is the goal of true education.” From my own limited experience, I exasperatedly believe we have a long way to go. It is almost 2018 and I call America home, but I am not fully confident that if another Holocaust arrived in our world, and push came to shove, enough of us would not turn in our neighbor. That is not a denouncement, but rather a diagnosis of an illness pervasive in a nation and world, of which racial injustice is only one symptom.
I’ve done enough testing and diagnosis. I’d like to contribute to a cure. I appreciate your time and thank you for reading my letter.
Text
Purpose - Chapter Four
Chapter Four: The Purpose of Fun Part Two
‘He liked being busy, having a purpose, and although he enjoyed helping out the townsfolk, he wanted something a bit more constant and preferably close to the sea.’
Just my take on what Killian would do for a job in Storybrooke based off a tumblr post.
Basically a bunch of interlinking one-shots. Not necessarily in chronological order.
Contains Captain Swan, Captain Cobra, and Hook/Snow BrOTP (Snook?)
So yes, I wrote this over a year ago and I’m finally publishing it as part of my Sixteen Fics challenge
Previous chapters
[AO3]
“You know, I thought you were joking about the hike,” Emma panted from the back of the group.
It was a miracle he even heard her, with the wide gap between the front and the rear, which were essentially two different groups at this point. When they'd set off, Killian had taken the lead and instructed Emma to keep an eye on them from the back. However, he hadn’t anticipated that some would be severely eager and others so idle - no middle ground.
Someone from the back group had affectionately named them ‘The Slackers’.
Emma wasn’t pleased.
He knew Emma was in good shape (many times she had tried to introduce him to the fitness centre they have in this world - albeit she claimed the one in Storybrooke was rather lackluster - yet, when they weren’t chasing after villains, he preferred to join her on her twice-weekly runs), but it seemed even the former bail bondsperson had quickly tired from the morning hike.
“What’s the matter, Swan? Can’t handle it?” he called back teasingly.
She gave no reply, but he was pretty sure that if they were not surrounded by kids a certain finger of hers would be raised.
“Killian,” whined someone who was trudging not far behind him, “Are we nearly there?”
“Not far now,” he told the group.
He heard another groan from his half of the group, “You said that twenty minutes ago!”
“Alright!” he halted, spinning to face the group. “Today is all about teamwork! Three tasks will be presented to you today, and in the groups I’ve already assigned, you will be completing them. The winning team will win a prize.”
At the word ‘prize’ several people perked up in a meerkat-like manner. He bit back a chuckle, setting about explaining the first task.
“Orienteering,” he announced. “We will be setting you off in different locations, with only a map, a compass, a talking machi-”
“Radio.”
“And a radio for emergencies,” he corrected, flashing Emma a thankful smile. “There are fifteen marks; at each mark you must collect a coin as proof. I have made a sheet for each team to help you find each mark. You will do this by either solving the riddles, following the directions, or finding the coordinates on the map. You can do them in any order except for the last one, which is where the next task is. Also, some marks will not have instructions on the sheet, as you must go to the previous mark to find the clue for that one. The sooner you arrive at the second task the more points you get. Any questions?”
Henry’s hand shot up, “Yeah, when did you set this up?”
Killian smirked, “Your mother and I may have taken a few trips here whilst you were at your other mother’s.” Henry seemed satisfied with the answer, and Killian took another question.
“What’s the prize?”
This time Killian didn’t hold back his laugh, “My lips are sealed.”
There were a couple more questions and a few moans when he sorted them into groups (he tried to mix up gender and age in groups of five or six, yet that didn’t stop him from trying to play matchmaker with a few of them).
“Emma, love,” Killian turned to her - she had noticed his matchmaking and was clearly amused by it - “care to transport Group 1 to their starting point?”
Emma grinned in affirmation, strolling over to the first group, “Remember: don’t start until we contact you on the radio.”
***
“It’s upside down, stupid!” Angel snatched the map from Caroline, setting it straight. “Ok, guys, we can win this, we have Henry.”
“Why am I the key to success?” Henry asked, slightly intimidated by the athletic girl.
“Because,” she rolled her eyes, “you know how Killian works. Plus you’re smart, so there’s that.”
“No pressure,” Caroline grinned oh-so-unsympathetically at him.
Henry was in a group of six: eighteen-year-old Caroline, who was on the pathway to becoming a nurse, which was helpful (he wasn't entirely convinced his sorta-stepdad wouldn't take this opportunity to torture him); sixteen-year-old Angel, who was probably stronger than any of the boys her age; little Karen, whose enthusiasm was certainly a bonus; twelve-year-old Kevin, who was small and fast - Killian had called him ‘the ideal cabin boy’-
-and of course, Violet.
He was going to murder Captain Hook.
“Right,” Henry glanced over the sheet with the list of marks. “Killian said we can do this in any order, but knowing him, it’ll be far quicker to do the logical thing and work chronologically.”
“The first one’s a riddle,” Violet said, leaning over his shoulder to read it, her chin grazing his shoulder.
She may be his girlfriend, but he was still a teenage boy who had no idea what he was doing.
“So we need to solve it,” Kevin finished. “What is it?”
Henry recited: “First think of a person who lives in disguise, Who deals in secrets and tells naught but lies. Next, tell me what’s always the last thing to mend, The middle of middle and end of the end? And finally give me the sound often heard During the search for a hard-to-find word. Now string them together, and answer me this, Which creature would you be unwilling to kiss? You have the answer, job well done! Now think of a place where these creatures have fun. I know it may be hard but put on your thinking cap, What’s a place like this on your map?”
Angel’s face was pure confusion, “Um, I’m sorry – what the hell?”
“That feels familiar...” Caroline raised a brow.
“Of course it does,” Henry vowed to tease Killian to no end over this, “Part of it is from the Harry Potter books. Belle had the book club read them a few weeks ago.” Henry shook his head; the fearsome pirate Captain Hook was really a nerd. “He’s just added to it.”
“Ok,” Violet brushed off her curiosity over the Harry Potter books (Henry had given her a list of books she was steadily making her way through), “So what’s the answer?”
“The creature is spider, right?” Caroline asked, “It’s been a while since I read them.”
“Yeah,” Henry confirmed. “So what kind of places do spiders like?”
“We don’t have to see spiders do we?” Karen pouted, “I hate spiders.”
“Agreed,” Caroline piped in. “And it better not be why we need our swim stuff.”
At the start of the trip, Killian had warned them to wear their swimsuits underneath their clothes during the day, cryptically warning them that they may get wet. He knew the man loved keeping secrets from them, just to see them squirm.
“Spiders live everywhere,” Angel scrunched up her nose in confusion, ignoring Caroline’s comment.
“Maybe we’re not supposed to take it literally,” Violet suggested. “I mean, it’s a riddle, right? So what place on the map has specifically to do with spiders?”
Angel studied it, “The map’s in Spanish or French or-.”
“Latin,” Caroline answered. “Gimme,” she snatched the map again.
It took all of Henry’s strength not to roll his eyes; of course Killian would write the maps himself and put them in another bloody language. He’d taught them bits and pieces during Saturday’s etiquette lessons, probably just enough that they could figure it out.
“This one,” Angel pointed, “means creature right?”
Caroline shook her head, “It won’t be that easy.”
“Octo means eight,” Violet supplied, gesturing at another point on the map. “Spiders have eight legs.”
Octo crura ludens.
Caroline grinned, “Nice one, Camelot. Ludens means playing.”
“Good enough for me,” Angel shrugged. “Let’s go!”
***
“Ahoy, mates!” Killian greeted them as they approached a clearing, some kind of playground-gym thing next to him. “First ones, congrats!”
“Why am I not surprised?” Emma beamed proudly at him, causing Henry to blush. She and Killian were wrapped up in one another on a picnic bench, three large baskets next to them.
“Mooom,” Henry groaned, ignoring Violet’s giggle.
As adorable as it was, he acted offended by it.
Twenty minutes later, when the last of the groups had shown up, Killian and Emma sat them down for lunch, handing out sandwiches and crisps. Violet slid in next to him, gently shoving Timothy out of the way.
“Alright mates,” Killian hollered in his usual fashion after lunch (Henry liked to call it his ‘Lieutenant mode’), “As you can probably tell, the next task is more physical-”
The less athletic ones groaned.
“Don’t worry, a few of you can do this quiz instead.”
Yet another groan.
“This is a relay. Each person in the team will have an obstacle to complete, then pass on this-” he held up a small pouch, “on to the next person. As you can see, there are four obstacles and five or six teammates. Those who haven’t been assigned an obstacle will be doing the aforementioned quiz. When a person is done with their obstacle they may move on to help those doing the quiz. Any questions, or shall I explain each obstacle?”
A few hands went up. “Do we get to choose who does what?” someone asked.
“Aye,” Killian answered and that seemed to be the most popular question as half the hands went down. Karen’s was the only one still up, “Yes, lass?”
“Can I go to the bathroom?”
***
“Ok,” Angel clapped her hands after Killian's briefing. The tasks were fairly simple, and again you had to collect coins and put them in the pouch, only this time you could move on without them, but they got you points. “We won the last one, we can do this. First obstacle: tree climbing, who wants it?”
“I’ll do it,” Caroline volunteered, “I’m tall so that means I can climb faster.”
“Good enough for me. Monkey bars? Who thinks they can go back and forth three times without falling?”
“I could,” Violet offered.
“Perfect! Crawling under the net? Kevin, you’re small: how about it?”
He shrugged, “Sure.”
“I’ll transport the boxes,” Angel volunteered. “Henry, Karen are you two ok to do the quiz?”
Henry nodded, “No problem.”
“Alright team Firebolts!” she recited the nickname they’d given themselves after a lengthy Harry Potter conversation, claiming that ‘Group 5’ was too boring, “Let’s do this!”
***
Henry knew he had an edge when it came to the quiz: it was all things Killian ranted on about after meetings or...really whenever the subject came up. Karen knew what she was doing as well, her small size and quiet nature more than made up for by her obviously well-tuned listening skills, as she managed to recite almost exactly what Killian had said.
“Hey,” a red-faced Caroline puffed, nearly collapsing next to them. “Your girlfriend’s up, Author,” she teased.
Peering up from the paper, Henry saw Violet dangling from the bars, a coin perched in her mouth. Despite her upbringing she looked perfectly at home here in the woods. She had substituted her usually classy dresses and heels for a cotton shirt and jeans with boots, her hair tied into two buns that were falling loose. Her face was marred with concentration and there was mud on her cheek.
To Henry she was gorgeous.
Somehow he managed to stay focused long enough to plow through a few questions (with much, much prompting from Caroline and Karen). Henry was only alerted that Violet had finished by a forceful push on his shoulder as she settled next to him, face red and hair in disarray.
Needless to say he was a bit less focused on the task at hand after that.
***
Half of the group looked dead on their feet (and he should know) as he cheerily gave out the results of the second task.
“There are three categories to which I’ll be giving out points. The first one is the fastest group: Group 2.” There was a triumphant cheer.
“The most coins is split between two groups: Group 8 and Group 9.”
A larger cheer came up.
“And the most answers right, with full marks: Group 5, or,” he swallowed a laugh, “the Firebolts.”
“Yes!” Angel exclaimed, “Henry you genius!”
Henry flushed bright red at Angel’s praise and Killian bit back another laugh; of course it was Henry’s group. He tried not to be proud at that; he failed.
“Next task won’t be until later on. So for now, everyone back to camp!”
“More walking?” a tortured moan sounded.
Killian chortled loudly, “Come on you lazy lot!”
Unsurprisingly, the front group was a lot smaller this time.
(A chorus of “Join the Slackers” could probably be heard in Storybrooke)
***
“A night trail?”
“Isn’t it a bit light for a night trail?”
As he began to explain, Henry wondered if Killian’s smirk had left his face all day. “It’s not a night trail, per se, more like a blindfolded trail. It’s an exercise in trust. I’m sure you’re all sick of me droning on about the vital importance of teamwork, but it will always ring true, and a critical part of teamwork is trust.” He began pacing, “In your groups, all but one of you will be blindfolded and you will have to complete a relay of sorts. There will be a bag of all the coins you have collected over the course of the day that you must pass from one member to the next. The first team there will have their coins doubled. The last will have them halved.”
Henry heard Timothy shout from the back, “How will we do it if we’re blindfolded?”
Killian sighed, “I’m getting to that. The one who isn’t blindfolded will be in a little hut Swan suggested to me. In there they will be able to see everything their teammates are doing and will be able to direct them through a…” he paused to search for the word.
“Headset,” his mom answered for him.
“Headset and camera. They will be responsible for guiding their teammates.”
Killian exhaled, “I believe that’s all.” He clapped his hands together, “Oh! I almost forgot! Emma and I will be choosing who the guide is.”
***
If he got out of this alive he swore to murder Killian Jones and Emma Swan.
Because of course he has to do the maze part of the course with Violet’s voice in his ear. How the hell was he supposed to concentrate? The answer was: he wasn’t. And he couldn’t even tell Violet that he hadn’t heard her.
“Henry? Are you ok? Wave if yes.”
He waved, trying to remember which way she’d said to turn.
“Henry! For the fifth time! Turn around; you’re at a dead end!” she snapped in his ear.
“Oops,” he muttered to himself. The maze was made up of low bushes and narrow pathways. He held tightly onto the pouch of coins, knowing that only Angel was left for the final sprint, and she’d probably kill him when he got there.
Because of course his teammates are hearing the exact same things he is. Of course they know that he seemingly fails at following basic instructions.
As he listened to Violet guide him, he allowed a new distraction to enter his mind:
How to murder two out of three parents and get away with it.
The author’s next bestseller.
***
They came second to last, and - as he suspected - Angel punched him very hard when the blindfolds finally came off.
***
“Well done to you all, mates,” Killian applauded as they readjusted to the light of the world again. “Your coins have been counted, with some added for the earlier tasks, and the winning group is…” His gaze grazed over the anticipating faces of the children, all eager to know if they’d won. “Group 8.”
“HELL YEAH!” a triumphant whoop came from the group. Some of the others looked downtrodden at their lack of victory, but a few just looked ready for some well-deserved sleep.
As they sat around the campfire later on, he had his arm around Emma, the pair of them watching Henry. A friend of his in the winning group turned to him, “You upset you lost after your little victory streak?”
The boy (young man now - but he’d only ever see the boy he helped rescue from the grasp of Pan) simply continued looking at Violet, who had just begun laughing at something Angel said, and replied, “Not at all.”
Emma’s hand slid up to his heart and he glanced at her, joining in on the proud smile on her face.
***
If she thought how good he was with Henry was attractive, she was nowhere near prepared for how endearing being around these kids made him.
Needless to say, him violently shaking her awake that night made him less so.
“Swan,” he hissed, “Swan, wake up.”
“Noooo,” she whined, curling her arms around him, “sleeping. You try.”
He chuckled, “Come on, love, wake up. I have a surprise for you.”
The childish wonder in his voice - so similar to what she’d been hearing all day - was what drew her out of the tent in the middle of the night to trek through the woods with him to whatever surprise he had planned.
“Are we nearly there yet? We shouldn’t go too far from the ki-”
“Relax, Swan,” he reassured her. “I’ve acquired the help of some of the older ones to be on alert. Now,” he stopped, turning to her, “close your eyes.”
“Seriously? This slope is really steep,” she cautioned.
He turned to her, a boyish grin lighting up his face, “Why, Swan, I thought you trusted me! Isn’t that part of this true love thing?”
Bastard. She grumbled, reluctantly closing her eyes, letting herself be guided by his hand and hook, “This ‘true love thing’ didn’t involve wandering through the forest without sight in the middle of the night. We’re not doing an obstacle course like the kids are we? Because we saw what happened to Henry with Violet guiding him.”
He snorted, carefully leading her forwards, “Ah, the bloom of first love. I told you: you can hide buried treasure, or - as you proved to me last week - a winning poker hand, but-”
“‘You can’t hide the bloom of first love’,” she quoted him. “God, I can’t believe he’s fourteen with a girlfriend.”
“Me either, love. Tree root,” he warned. She thanked him and he continued talking, “It seems like only last week he was just the young boy whom I offered to help rescue from the clutches of my old foe.”
The mention of Neverland brought back the memories of the death-trap island and all that occurred there. Thankfully, before she let them overwhelm her, Killian announced, “And here we are: open your eyes.”
Following his instructions, she opened her lids and was greeted with a sublime scene. Picturesque turquoise water filled the area as far as she could see. She stood on a cliff, deep water below her. The rocks sloped down to the level of the water. From what she could tell, the water shallowed a bit further on, then got deeper again as the colour of the water darkened from the colour of Killian’s eyes to a navy - it was hard to tell; the twinkling water seemed to glow, and she suspected magic was part of it. Tall rocks crept around the lagoon, protecting them from the wind. The most beautiful part of it all, however, was the lack of ceiling, revealing the stars to them, and all the constellations Killian had taught her over time.
“Oh, Killian,” she sighed. “Wh-” she stepped out of his arms, eyes scanning the area, “How? What? When did-?”
“I take it you like it?”
“Like it!” Overjoyed, she gave him a firm peck, “It’s beautiful!”
“Isn’t it just?” he agreed. “I plan to take everyone here tomorrow for some water games. What was it? Water colo?”
“Polo, Killian. Water polo.” She beamed at him, “So this is why you told them to pack swimsuits?”
“Aye,” he confirmed. “Now, don’t you think the two of us should test it out?”
“Now?” she questioned. “I left my swimsuit in the tent.”
She knew as soon as the words were out of her mouth what his response would be.
“Right you are, love. I would say that your undergarments are fine, but alas, we wouldn’t want to ruin them, would we?”
“No,” she slid closer to him, fingers playing with the edges of his jacket, “I suppose we wouldn’t.”
“Hmm.” He wrapped his arm around her, hand finding its way between her top and bottoms, resting on her bare back.
“But we really should make sure it’s ok. I mean, it’s only being responsible.” She knew he had probably checked when he first found the place, but one hand had made its way to his heart, calmed by the steady beat of it, and she couldn’t help but play along.
“I may have a solution to that.” He stepped out of the embrace, throwing off his jacket and pulling his shirt over his head. She followed his lead as he took off his jeans, leaving them both in their underwear. She couldn’t help the smirk that overtook her face: was she really about to go skinny dipping?
Stepping forward, she assisted him in removing his brace, and he returned the favour by popping off her bra. When they were finally both naked, he took her hand and dragged her off the edge of the small cliff, splashing into the water.
When they arrived back at their tent that night they were both sticky in their clothes, but as they curled up around each other, neither had it in them to care.
***
His grandmother would be so proud.
Fifty-odd kids in the middle of the woods, packing up a campsite, whistling ‘Whistle While You Work’.
He finished packing up his own tent, chucking it into the coach. Turning to an overly-cheerful Killian, he asked, “So, what are the plans for today?”
“Still not telling, lad,” he replied.
“Oh, come on Dad! Please?”
Killian merely shook his head, checking off everyone’s luggage.
Once they were all packed up, they left the coach where it was and headed off, following the two adults. After over twenty-four hours, the lack of wifi was starting to kick in, and everyone was a little moodier. Henry himself tried to stay a bit more positive, picking up speed to walk with his parents, dragging Violet with him.
When they arrived, there was a collective gasp at the sight of the lagoon. Henry turned to Killian, who merely wore a self-satisfied smirk.
“So this is why we need our swim stuff?”
“Aye, lad.” He pivoted around, addressing the group, “Alright, mates! After all the hard work you’ve been doing this weekend, I thought you deserved a little break. We’ll be here until it’s home time, so remember to have lunch at some point.”
“Where will lunch be?” asked Devon.
“I’ll put it with our clothes,” Emma announced. “We’ve got sandwiches, chips, chocolate bars, and some leftover marshmallows.”
Everyone groaned at that; the previous night there had been a little mishap whilst trying to teach Killian what s’mores were. Let’s just say that pretty much everyone there would be put off by the sight of a marshmallow for weeks to come.
Killian winced with the rest of them, before continuing to give instructions, “We’ll be doing some water activities later on if anyone wants to join in. And please, don’t go in too deep if you don’t feel confident enough. I’m sure none of us here want an impromptu rescue to take place.” He gave them one final grin, “Well off you go mates!”
***
After several hours of splashing, screaming, but thankfully no saving, the group were all more or less passed out on the coach. Everyone was sporting damp hair and exhausted yet content expressions.
Killian himself had some pride mixed in there.
Emma was pretty sure she did too.
***
It was late evening by the time they had dropped off the kids, all of them waving goodbye, giving their thanks and promising to see Killian at the next meeting.
Once Henry had shot off to Regina’s, Emma dragged Killian away, deciding to walk home, giving themselves time to reflect.
A few mishaps had occurred, but they were all laughing by the end of the weekend. After they got home, she told him, “I’m proud of you, you know. You’ve had a real impact on these kids.”
He blushed, scratching behind his ear, “It’s nothing really, love.”
“No, it is, Killian,” she insisted. “If I’d had something like that when I was young, my career as a thief might have been briefer.”
Killian smiled at her, wanting to brush off her compliment, “Ah, but then we wouldn’t have Henry.”
“I don’t know,” she exhaled, “I think that the Author would have had to have been born no matter what.”
He chuckled and agreed.
#killian jones#henry mills#emma swan#captain cobra#captain swan#ouat ff#ouat fanfiction#ouat fic#captain swan ff#cs fic#cs ff#my writing#sixteen fics
Greenwald's "No Place to Hide": a compelling, vital narrative about official criminality #3yrsago
Glenn Greenwald's long-awaited No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State is more than a summary of the Snowden leaks: it's a compelling narrative that puts the most explosive revelations about official criminality into vital context.
No Place has something for everyone. It opens like a spy-thriller as Greenwald takes us through his adventures establishing contact with Snowden and flying out to meet him, thanks to the technological savvy and tireless efforts of Laura Poitras. Those opening chapters are real you-are-there nailbiters as we get the inside story on the interplay between Poitras and Greenwald, Snowden, the Guardian, Bart Gellman and the Washington Post.
Greenwald offers us some insight into Snowden's character, which has been something of a cipher until now, as the spy sought to keep the spotlight on the story instead of the person. This turns out to have been a very canny move, as it has made it difficult for NSA apologists to muddy the waters with personal smears about Snowden and his life. But the character Greenwald introduces us to isn't a lurking embarrassment -- rather, he's a quick-witted, well-spoken, technologically masterful idealist. Exactly the kind of person you'd hope he'd be, more or less: someone with principles and smarts, and the ability to articulate a coherent and ultimately unassailable argument about surveillance and privacy. The world Snowden wants isn't one that's totally free of spying: it's one of well-defined laws, backed by an effective set of checks and balances that ensure spies are servants to democracy, and not the other way around. The spies have acted as if the law allows them to do just about anything to anyone. Snowden insists that if they want that law, they have to ask for it -- announce their intentions, get Congress on side, get a law passed and follow it. Making it up as you go along and lying to Congress and the public doesn't make freedom safe, because freedom starts with the state and its agents following their own rules.
From here, Greenwald shifts gears, diving into the substance of the leaks. There have been other leakers and whistleblowers before Snowden, but no story about leaks has stayed alive in the public's imagination and on the front page for as long as the Snowden files; in part that's thanks to a canny release strategy that has put out stories that follow a dramatic arc. Sometimes, the press will publish a leak just in time to reveal that the last round of NSA and government denials were lies. Sometimes, a new leak will be a you-ain't-seen-nothing-yet topper for the last round of stories. Whether deliberate or accidental, the order of publication has finally managed to give staying power to the mass-spying story that's been around since Mark Klein's 2005 bombshell.
But for all that, the leaks haven't been coherent. Even if you follow them closely -- as I do -- it's sometimes hard to figure out what, exactly, we have learned about the NSA. In part, that's because so much of the NSA's "collect-it-all" strategy involves overlapping ways of getting the same data (often for the purposes of a plausibly deniable parallel construction) so you hear about a new leak and can't figure out how it differs from the last one.
No Place's middle act is a very welcome and well-executed framing of all the leaks to date (some new ones were revealed in the book), putting them in a logical, rather than dramatic or chronological, order. If you can't figure out what the deal is with NSA spying, this section will put you straight, with brief, clear, non-technical explanations that anyone can follow.
The final third is where Greenwald really puts himself back into the story -- specifically, he discusses how the establishment press reacted to his reporting of the story. He characterizes himself as a long-time thorn in the journalistic establishment's side, a gadfly who relentlessly picked at the press's cowardice and laziness. So when famous journalists started dismissing his work as mere "blogging" and called for him to be arrested for reporting on the Snowden story, he wasn't surprised.
But what could have been an unseemly score-settling rebuttal to his critics quickly becomes something more significant: a comprehensive critique of the press's financialization as media empires swelled to the size of defense contractors or oil companies. Once these companies became the establishment, and their star journalists likewise became millionaire plutocrats whose children went to the same private schools as the politicians they were meant to be holding to account, they became tame handmaidens to the state and its worst excesses.
The Klein NSA surveillance story broke in 2005 and quickly sank, having made a ripple not much larger than that of Janet Jackson's wardrobe malfunction or the business of Obama's birth certificate. For nearly a decade, the evidence of breathtaking, lawless, endless surveillance has mounted, without any real pushback from the press. There has been no urgency to this story, despite its obvious gravity, no banner headlines that read ONE YEAR IN, THE CRIMES GO ON. The story -- the government spies on your merest social interaction in a way that would freak you the fuck out if you thought about it for ten seconds -- has become wonkish and complicated, buried in silly arguments about whether "metadata collection" is spying, about the fine parsing of Keith Alexander's denials, and, always, in Soviet-style scaremongering about the terrorists lurking within.
Greenwald doesn't blame the press for creating this situation, but he does place responsibility for allowing it squarely in their laps. He may linger a little over the personal slights he's received at the hands of establishment journalists, but it's hard to fault him for wanting to point out that calling yourself a journalist and then asking to have another journalist put in prison for reporting on a massive criminal conspiracy undertaken at the highest level of government makes you a colossal asshole.
The book ends with a beautiful, barn-burning coda in which Greenwald sets out his case for a free society as being free from surveillance. It reads like the transcript of a particularly memorable speech -- an "I have a dream" speech; a "Blood, sweat, toil and tears" speech. It's the kind of speech I could have imagined a young senator named Barack Obama delivering in 2006, back when he had a colorable claim to being someone with a shred of respect for the Constitution and the rule of law. It's a speech I hope to hear Greenwald deliver himself someday.
No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State
https://boingboing.net/2014/05/28/greenwalds-no-place-to-hid.html