#recently i actually learned that at our size we're actually called a-
shopwitchvamp · 1 year ago
Text
I wanted to add a gentle reminder/explanation beneath this harsh one, haha. This is aimed at no one in particular, just at anyone who might wonder about these things:
I know it's frustrating when a design someone really wants is sold out, your size isn't available, or a different version of something you really like doesn't exist yet - but please keep in mind that we're a tiny two-person business operating out of our apartment🙏
We don't have the space or the funds it'd take to keep everything in stock all the time. (I'd have to do the math on it, but as a very simplified example: let's say I have about 100 designs in 4 sizes each and want at least 10 pieces on hand in each size... that'd mean being able to store 4,000 pieces of clothing, and just to order them in the first place I'd need well over $100k in liquid cash on hand 😱😵‍💫🫥.)
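If you want to see that math spelled out, here's a tiny sketch; the design/size/stock counts are the ones from my example, and the per-piece cost is just an assumed placeholder, since real costs vary a lot:

```python
# Quick check of the napkin math above. The 100 designs / 4 sizes / 10 pieces
# figures come from the example; the ~$27 landed cost per piece is an assumed
# placeholder, included only to show how fast the cash requirement grows.
designs, sizes, depth = 100, 4, 10
pieces = designs * sizes * depth
assumed_cost_per_piece = 27  # hypothetical wholesale + shipping, in USD

print(f"{pieces:,} pieces to store")                    # 4,000 pieces
print(f"${pieces * assumed_cost_per_piece:,} tied up")  # $108,000 up front
```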
This whole time I've built my business up one step at a time without ever taking loans, and I plan to keep going slowly and steadily instead of making any sudden leap to scale 20 levels up - taking out a huge loan, renting a warehouse, buying a larger vehicle to transport all that stuff, hiring employees to keep up with it all, and... etc, etc *tries not to have an anxiety meltdown just thinking about it*
I realize I operate in a way, and on a timeline, that isn't very compatible with the speed everything moves at these days, with how business is often done in the US, or with the expectations people have of "online shopping" in general. But overall it's really working for me, and I don't even know if I want my business to end up as big as it *could* be if I just let things barrel down the road totally uncontrolled.
I hope everyone can understand where I'm coming from and that I'm doing my best! There's also always a lot going on behind the scenes that isn't at all apparent from just looking at the shop or seeing a couple posts on social media. Things are growing and improving all the time, but it will take time ☺️
roobylavender · 2 years ago
Note
I feel like I'm sympathetic to people who make mistakes in their early twenties. Like for years you are called a kid and made to sit in a chair for hours, and then suddenly one day you turn a certain age and the whole world sees you as an adult and there are adult consequences, and yet for the first time you have to find your footing with all this freedom you didn't have before. Like I can't imagine getting in trouble with the law at a young age and then having to carry that throughout your whole life, when in different decades of your life you are different people, and we know being labeled a bad person does more harm than good, since that becomes all somebody believes they are. Like you at 21 isn't the same as you at 28. Sorry, I'm just thinking about humans and how we behave a lot lately, and I feel like as people we really don't do enough to help each other. Like as far as we know we are all we have in this world, and maybe one day the earth will explode and all of this will be meaningless, but even so we should wish the best for each other, because I think at the heart of it all people do want to do good. And with that said, I think what you said in your last post about superheroes in regards to the Spider-Man film really resonated with me on what a hero should be. Sorry, I know this is random, but sometimes I see one of your posts and it spirals into something else.
i definitely agree. like admittedly even people that young can do heinous things, but it's like.. no one is doing those heinous things without a basis, yknow. i'm going to be as vague here as i possibly can be, but one of the death row inmates i worked on a claim for recently committed his crime when he was barely eighteen, and it was a horrible thing, and he clearly has a very affected sense of morality and such, but he's also got an extensive trauma profile. and in the end, is a lifetime spent in prison awaiting an execution date really going to do anything for him as a person? will he learn anything? will he get better? will anything about the roots of why people turn out this way and commit these crimes improve? i agree young criminals (or frankly any criminals) do need to be stripped of power lest they abuse it again, but ig the problem with our system is that the idea of being stripped of abusive power is viewed as equivalent to a prison sentence. it's not viewed as equivalent to being in a position to learn from scratch, to being made to commit yourself to community, to addressing your behaviors and attitudes at the root. the victims of these crimes obv don't have to forgive these people, but i do think that if we're ever going to improve as a society, there need to be systems in place that actually allow criminals to start from scratch in some place where they can't abuse power again but can learn to, like, commit to something at the ground level that reshapes their perspective. it's not a one-size-fits-all kind of thing i'm sure, but i do think it's worth investing in, bc the death row inmates i've met who seem to have totally turned their lives around (as much as one can while in prison for years) did it bc they found something to learn and commit to, and that gave them the space and time to reflect on their behaviors and crimes and whether they really believed in what drove them to commit those crimes in the first place
sangdelune-archive2 · 2 years ago
Text
╰ ☾ ☆ * : ・ ⁞ — ˗ˏˋㅤㅤ𝐆𝐄𝐓 𝐓𝐎 𝐊𝐍𝐎𝐖 𝐓𝐇𝐄 𝐌𝐔𝐍
NAME : Stella
PRONOUNS : She / Her
PREFERENCE OF COMMUNICATION : I don't give out my discord much nowadays so actually, my preferred method is tumblr ims through my main !
MOST ACTIVE MUSE : Unfortunately NOT Misha because the V.NC RPC is always dead compared to my other RPC. But Misha is always very near and dear to me no matter what.
EXPERIENCE / HOW MANY YEARS : Goes to show my age, but I believe I started maybe when I was 9-10, before I even knew it was called roleplaying. I started in 2012, back in the days when no one knew how to cut posts and everyone had entire gifs as ' icons '. My muse was such a minor character there weren't any gifs, so I HAD icons, but they were all inconsistent sizes LOL.
BEST EXPERIENCE : Honestly, as much as I hate to say it, I think the 2012-2013 days were the best, despite the site being so messy and so unhinged since everyone was still learning, and I had a lot of fun plots with that character. But more recently, I'd say it was meeting some of my very few besties here on this site !
RP PET PEEVE : If I had to name ONE ultimate pet peeve, it's that people on this site tend to take things too seriously. Boundaries are important, yes, but people tend to forget that the block / blacklist buttons exist. I've seen and experienced it all on this site, and that's the root of all the hurt feelings, d/rama, friendships ending, etc. In the end, we're all here to write our silly little fictional characters, and if some people don't mesh, that's okay ! Just don't start a witchhunt or start policing over it.
PLOTS OR MEMES : BOTH except I'm really bad at plotting so I tend to lean towards memes !
LONG OR SHORT REPLIES : It really depends on the day. Sometimes I have no muse and sometimes I can type 240238490324 paragraphs. GENERALLY though, I try to match my partner's length !
ARE YOU LIKE YOUR MUSES : OH GOD I HOPE NOT okay, no I relate to them a bit, yes.
Tagged by: @jardinae
Tagging: Whoever !
madame-fear · 2 years ago
Note
THANK YOU SO MUCH OMG IM HONOURED! OMG I WANNA GO TO GERMANY SO BAD😭 ❤️🇩🇪
AND PLEASE OMG ANY WORDS YOU WANNA KNOW HOW TO PRONOUNCE, ASK ME! one of the best things about being british is our elite humour and our way with words. we have the best insults, its great! it's actually funny cus my accent is hardly noticeable. im from a particularly small/just under average size city, yet its ranked quite lowly in attractive accent lists (my accent is unattractive, apparently). my city is also largely hated as its pretty much a shithole. apparently some kids were told that if they dont behave, their parents will send them to (insert my city name here). BUT ITS REALLY NOT THAT BAD THO 😭
BUT SERIOUSLY I LOVE THE BRITISH ENGLISH LANGUAGE SO MUCH I WILL GLADLY HELP YOU WITH WORD PRONUNCIATION
my step-mum is fluent in german and almost fluent in french, and she says that german is her favourite language and she fully supports me in wanting to learn it. (it would 100% also help me in the future, as i want to go down the historian path but specifically look at ww2) (also im hopefully gonna learn latin at school, which will also help in becoming a historian) my favourite german word is the one you gained in world war 2, because of the destruction caused to my city in the blitz. 🥲
AND YES OMG GERMANY AND ENGLAND ARE TECHNICALLY LIKE 'brothers'! because english and german are actually both germanic languages from the indo-european language family.
~🦈🇬🇧❤️🇩🇪
AAaAAAHHHHHH💕💕💕💕I SHOULD BE THE ONE THANKING U !!! If you need any help with German pronunciation as well, feel free to ask me anytime!!!! ❤❤❤
I need to re-read/listen to English pronunciation books (mainly because i have to speak good English to get into the Universities i want to go to), so if i have any problem pronouncing a particular word, i'll ask you!!! 🥰 I currently have trouble with several words rn, but i want to first re-read some things and practice pronunciation. If i still have problems with said words, i'll definitely tell you about it! 🤭🤭😚💝
Please know that if you ever need me to return the favour by helping you out with German pronunciation, i'm always here and absolutely glad to help you out!!! At your service 🧎‍♀️ (it's supposed to be on one knee but oh well) ❤
You know, my dad & i always enjoy watching WW2 documentaries because we're both German as hell! And we've seen several good docus on Amazon Prime & Netflix. I seriously don't remember the names of the documentaries and im not sure if they'll be available in the UK, but maybe you can search for some on those platforms, mostly on Amazon Prime, because I know they have good quality documentaries about WW2. Also, there's a british YouTube channel that talks about history and has some documentaries about a certain moustache man and WW2. I wanna say it's called 《History Hit》, but to be honest i think it might be another channel similar to that... if i find it, i'll tell you the name!!!
And also, UK & Germany are like the best brothers ever and have a good history together!! Recently, when the Queen died, i saw several TikTok videos of Germans in the UK crying for her, and honestly it was quite an emotional moment 🥲 God I love the UK so much and you guys are the best <333 I swear I follow a shit ton of British meme accounts on TikTok bc you all make me laugh so much, and your humour??? Top notch 👌👌👌
Thx so so much for allowing me to ask you questions abt English💕💕🤧🤧🤧🤧
🇬🇧🤜🤛🇩🇪
should-be-sleeping · 1 year ago
Text
Just as we returned from the store, our neighbor was getting out of her car, so I went over to help her bring her things inside. She'd run out to get some donuts and a slushie. She was having a good day and was making the best of it. She'd managed to get out of the house twice before I arrived.
I carried her goods for her and gave her my arm to help her up the hill to her door. She said her hands weren't cooperating well today, so she gave me her keys and I unlocked her door for her. We went in and she wanted to put her slushie into the freezer as a treat for later. She'd gotten the biggest size so it was a little tall for the freezer, but I helped her get it in without spilling.
She offered me a donut but she'd only bought two, so I declined. As much as I love donuts, and maple bars in particular, I could not in good conscience rob her of such delights. We talked about our favorite donut places, and incidentally, ours was the same -- a local place not too far out of the way. There's an even better place I know of (award winning, in fact), but it's kind of far away. Perhaps I'll take a long detour to bring her a maple bar from there sometime.
She asks about the photo I'd sent her of the reusable zip tie in use, and if I had any other photos of my aquariums. I don't have any on hand but I do have a video I'd taken recently of the baby guppies eagerly swimming right into my open palm. She loves this.
She tells me about her favorite radio show and about how the hosts all know her and have dubbed her their community witch. She plays for me a download from their show in which they highlight one of her Wicked Witch impressions and it's so spot on it's uncanny. Then she suggests that I send them the video of my baby fish, because they do a pet segment on their website and "it's not just dogs!"
I ask her for more details and then agree to send it in. Learning her username, I mention I'll be sure to give her a shout out as my friend who suggested I submit it, and she loves this even more.
We discuss painting, as she is a painter too (or was before her hands stopped cooperating). She wanted to travel and paint the aurora borealis. She says there is a southern one, too, and tells me all about her plans in 2020 to visit New Zealand to see them. But then the pandemic happened.
"Now I'll never get to experience that..." she says, forlorn.
"Are there any videos of it, do you think?" I ask.
This fills her with hope. "Oh, maybe?"
I decide it will be my mission to help her find them. They are apparently a totally different color than the Northern Lights. We'll have to see if this is true.
She tells me she's trying to find an amethyst for me. I assume because she saw my ring. I tell her I've been wearing it more than usual lately because it's my little brother's birthstone and he passed away last month. She hugs me and I am grateful. I've had a rough couple of months myself, before this. A lot of grief.
I show her my nails, painted in red and gold for the 49ers, and she is almost giddy that I actually did it. She gets up, goes to a nearby shelf, and brings back a small fabric cosmetic bag. Inside is where she keeps all the stuff she used when she was able to do her nails regularly. She hands me 49ers decals and I eagerly accept them. This will complete the look for real.
We talk for a while longer and then her friend (the maintenance guy) calls. He asks if she's home and she asks me, jokingly so he can hear, if we're at her house. I respond in kind, "Last I checked, anyway!" He comes over right as I need to leave, so we basically swap places at the door. My neighbor hugs me and says life is easier when you have someone in it who gives a shit. Then she apologizes for swearing.
I remind her to text me any time at all, whether she needs or just wants something, and she says she will. Later that evening I notice her car is gone. I assume she has gone to get dinner with the aforementioned guy. Later yet, she's still not home. I'm a little worried, as I know she has driven a city over before and not been able to get back home, but I give it a little time, trusting she'd text me if she needed help.
By 10:00pm I poke my head out to check and thankfully her car is now there, parked in its usual spot. Right as I get back inside she texts me saying she had an INCREDIBLE night. I tell her I can't wait to hear the details and how remarkable her timing was as I had just thought to check on her. She gushes about how she had just gone for a drive in her little sports car but then wound up at the casino and even won a little money!
She tells me all about it and I'm so relieved to hear she was out having fun. She hadn't been able to do that last week. Or the one before that. She tells me she could die a happy woman tonight. I tell her I hope she doesn't, so she can tell me more about her adventures. She sends me an emoji-filled post about how her best luck wasn't at the casino but in meeting me.
I tell her the feeling's mutual.
Tough day today... and friendly reminder that being human is easier when we help each other.
I saw one of our neighbors, an older woman we sometimes talk to in passing, sitting outside of her house. I don't know what exactly made me look twice, but on second glance as we drove by I realized her walker was in the grass. She was otherwise just sitting there, like she had a thousand times before, so it would have been easy to assume she was fine and go on with my life as normal but something told me to go check in on her anyway.
She was not fine. She was the polar opposite of fine. Just diagnosed with terminal cancer not fine. No next of kin not fine. A veteran facing eviction from her house for missing rent while in the hospital not fine. In constant debilitating pain not fine. Only semi-lucid not fine. She was extremely alone not fine.
I thought, at most, she might be bored while unable to pick up her walker not fine. A five minute detour from my day not fine. A help her back into her house and say "see you later!" not fine. Instead I spent the last three hours with her because she was so scared and alone and no one should be alone.
We talked a lot while I was there. She's actually two years younger than my mom (who also has cancer but slightly better luck, I guess). I helped her into her house and got her a drink and we talked about what all is going on with her. None of it was good. I was as reassuring as I could be, but there's only so much of this I can actually help her with.
"Why did you come?" she asked through tears.
"Because you looked like you might need some help."
She called me an angel. I told her I was just doing my best. I told her that kindness should never be rare. That we should all try to make the world just a little bit better than it was.
She offered to pay me but I told her I was just there as a friend. Before today we were basically strangers. No need to repay me with anything other than her company, I assured her. She cried, a lot. I managed not to somehow. Something tells me she had needed to cry long before this but in being Strong she never had the chance to.
She needed to get her mail, which is a long walk when you're disabled because it is not at all handicap accessible (across a parking lot, over a bridge, across a small field). So I helped her get her mail. We stopped every three feet because her pain was so bad, but she was determined to be able to go do this with me and not just send me on an errand. I patiently stayed with her and reminded her, through her apologies, it was fine to take our time: there was a nice breeze and birds were singing. She appreciated this. She loves nature.
Halfway back she said she wanted to go to the pool. To put her feet in the water. She loves water, and has not been able to even see the pool in a month. Neither of us were dressed for swimming, but I took her to the pool anyway. There is a stair leading down to it, meaning she couldn't bring her walker, so I offered her my arm.
We went to the pool. She put her feet in the water and then, with more energy and enthusiasm than I'd seen the whole time, she jumped in. In her fancy dress! She was instantly ten years younger at least, clear and happy, floating in the sun. Dress and all. She grew up with a pool and had been on a swim team.
I sat by the edge of the pool while she swam, keeping her company and also making sure she was okay. When she got tired I took her back home and then had to help her get undressed and redressed. I made sure she felt no shame. Getting out of wet clothes is hard for anyone, let alone someone with like twenty pounds of tumors racking them with constant pain.
She was so fucking happy to have gone swimming.
She is trying to "make everything right" before she goes. Trying to repay her debt to society and her debts in general. She couldn't understand why the corporation that owns our houses wouldn't take her money. She was genuinely distressed -- not to be homeless on her deathbed but to not leave this world with a clean slate. I told her intent matters. She can only do her best.
This company not letting her repay her debt was their fault, not hers.
When I finally needed to go, I told her to let me know any time she needed a hand or just wanted company. She told me she was going to die tonight. I told her I hoped not, so I could see her tomorrow. I offered her a hug, we hugged and she sobbed for a solid ten minutes into my shoulder. I told her she was okay. That it was okay.
When I got home I cried myself, because I could not believe she was going through all of that alone. I cannot even imagine how isolated she must have felt. Once I pulled myself back together I sent her a text reminding her to reach out any time and I'd do my best to come over. Like, any time at all.
I hope she is here tomorrow.
kaibaswifey-old · 3 years ago
Text
Tiktokers want lolita to be fast fashion, buying knock-offs from amazon that aren't even going to look right on them bc of how shitty they're made, while the rest of us have taken years to build our wardrobes bc we're not impatient brats. And they call us meanie gatekeepers for trying to teach them 🙄
We buy cheaper taobao brands, we save up, we prioritize, we buy second hand and wait for sales. Lolita is never going to be fast fashion and it's not going anywhere so be patient.
I was like 12-13 when I discovered lolita fashion thru looking up stuff about Chobits lol. I lurked the egl LJ for the longest time just learning things (as a result, I didn't have much of an ita phase). My first item was a secondhand skirt of my dream print at 18, I think. I didn't have a decent-sized wardrobe till recently. I'm now 30 lol.
Y'all can wait if lolita fashion is actually important to you.
itsbenedict · 4 years ago
Text
Two-Faced Jewel: Session 9
The Slaying of the Bobbledragon
A half-elf conwoman (and the moth tasked with keeping her out of trouble) travel the Jewel in search of, uh, whatever a fashionable accessory is pointing them at. [Campaign log]
Since slaying a serial-killer dragon is a little outside the party's expertise, they're off to Cauterdale to enlist the aid of the Deathseekers' Guild! Having gotten a good night's sleep at a druid village, and not eaten, they're ready to take on, uh...
Well, some sort of very large monster that Zero kindly drew for me.
In the morning, they rather uneventfully get up and get back on the road, thanking the villagers for their hospitality. And the remainder of the trip to Cauterdale is likewise brief and uneventful, right up until the fire.
Saelhen du Fishercrown: the what
Benedict I. (GM): The fire.
Yeah, the forest and the road up ahead are ablaze, sort of blocking passage. The dirt road isn't actively on fire, but the trees on both sides are, making it pretty risky to proceed. The team opts to send Oyobi up ahead to scout the situation- and pretty soon she comes back with a report. Apparently, just past the visible fireline, the forest is totally burned down- just charred stumps as far as she could see, right up to the city walls. The fire itself is just, like, 10 meters wide or so, so it's totally something they could just dash through.
It takes some Animal Handling checks to coax the giraffes through, and the ones that balk get them and their riders a little bit of chip damage from heat and smoke inhalation, but the party is pretty much able to push through to the blasted wasteland of charred tree stumps surrounding Cauterdale.
They notice a few people in strange armor in the distance, doing something near the fire- from the seemingly controlled nature of this burn and the name of the town, they conclude that those are fire squads doing this deliberately, and don't get involved. It's a fine conclusion, and the party begins walking the remaining mile to the city.
As they approach, they notice... a little ways off from the main gates, something is attacking the city walls. Guards atop the walls are manning some sort of huge harpoon guns, and they seem to have already slain several of the... whatever these things are. The remaining one, though, seems larger and more resilient than the others, continuing its assault despite the several harpoons already lodged in its flesh.
What they see is a huge reptilian monster. It's probably not a dragon- no wings, and it doesn't appear to be using a breath weapon- but it's the size of a dragon, with tiny arms, headbutting the metal walls of the town repeatedly.
Orluthe makes his Nature roll to recognize this thing- he's heard of them before. They're called "bobbledragons"- some sort of deformed mutant offshoot of true dragons, incapable of speech or flight or magic but still possessed of monstrous strength and durability.
Luckily, the bobbledragon doesn't seem to be in between them and the main gate- the fight is far enough away that they could potentially just walk up and head into town, assuming they'll open the gates during a situation like this. Hell, they don't even need to open the gates- if the guards just drop a rope, they should be able to just climb over. That seems like a decent plan, so Saelhen and Looseleaf begin working together to draft a use of the Message spell to ask the guards to help them inside.
Then they notice that I've been moving Oyobi's token on the map in the direction of the fight.
Oyobi, blinded by bloodlust and/or extra-credit-in-Severe-Zoology-lust, is determined to help fell the bobbledragon. Their attempts at persuasion fail, and Oyobi, undeterred, continues to charge the giant fucking T-rex that is making huge dents in the walls of a city.
As Oyobi runs for it, and as the party follows behind in hopes of stopping her from making a terrible mistake, the bobbledragon jumps and seizes one of the guards on the wall in its jaws, demonstrating its +10 4d12+7 bite attack by immediately oneshotting its victim.
Looseleaf: oh god we're all going to die. you're using the real t-rex statblock. that thing is challenge eight. it is made for a party of four level eight adventurers, so either we are all going to die here, or the guards are going to show us why they are professional fighters and we are students.
Benedict I. (GM): "Shit! It can jump!" "No!" The guards seem upset.
Not promising.
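If you want to check the panic math yourself, here's a quick dice sketch of that bite. The +10 and 4d12+7 are the statblock numbers quoted above; the 40 HP for a squishy level-6 PC is my own ballpark for illustration, not a number from our table:

```python
import random

def bite_damage(rng: random.Random) -> int:
    """One bite from the stock 5e T-rex: 4d12 + 7 piercing."""
    return sum(rng.randint(1, 12) for _ in range(4)) + 7

rng = random.Random(9)  # seeded for repeatability; 9 for session 9
rolls = [bite_damage(rng) for _ in range(100_000)]

SQUISHY_HP = 40  # assumed HP for a level-6 caster, not a number from the log
avg = sum(rolls) / len(rolls)
drop_rate = sum(d >= SQUISHY_HP for d in rolls) / len(rolls)

print(f"average bite: {avg:.1f} damage")  # lands around 33
print(f"bites that'd floor a {SQUISHY_HP} HP PC in one go: {drop_rate:.0%}")
```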
Looseleaf: This thing does sufficient damage to oneshot any of us with a perfectly mediocre hit. Looseleaf right now is kind of thoroughly convinced that Oyobi is actually literally about to die. In that light, Looseleaf is going to message Oyobi again. And she is not going to get any closer. Actually, she's going to back off, put distance between herself and the monster. [Oyobi that thing is going to bite you in half get back here you are going to die.]
Benedict I. (GM): Roll Persuasion! DC 20 again.
-Looseleaf: 17 / PERSUASION (1)-
Oyobi Yamatake: [I'M GONNA LIVE FOREVER!!!]
So... that's a bust, and Oyobi finally reaches the dragon and begins her assault. Miraculously, her flying leap hits, and she digs her sword in... for thirteen damage.
The guards return fire against the bobbledragon, and one of the harpoons catches it in the chest- but it doesn't go down, and the second harpoon- manned by just one guard, after his partner got crunched- misses. Another guard, without a cannon, throws a spear- and gets not only a critical hit, but a max damage critical hit, spearing the thing right in the eye.
...for eleven damage, because these are ordinary CR 1/8 Guards, but still!
Saelhen tries to distract the bobbledragon so Oyobi can run and hide, but... her arrow goes wide, and Oyobi isn't interested in running and hiding anyway. The bobbledragon, targeting whatever did the most damage to it recently with its bite attack, jumps and bites the whole damn harpoon gun out of the guard tower, leaving the guards without heavy weaponry.
And then with its tail, it tries to slap the insect that just stung it in the rear.
...and rolls a 3, meaning Oyobi gracefully backflips over the attack and strikes a dramatic pose.
Looseleaf: God, she did not deserve that dodge. She got so fucking lucky there.
Saelhen du Fishercrown: she really didn't
Oyobi Yamatake: "When you get to Dragon Hell, tell them Oyobi Yamatake sent you!!"
Looseleaf, in the interest of communicating to Oyobi how much danger she's in, makes use of an upgrade to her Rend Spirit attack that she learned while studying Lumiere's notes on pain. With Painread, she can get some feedback from anything whose spirit she disrupts and figure out exactly how bad a shape it's in. She does so (dealing a cool 16 damage in the process), and learns how huge this thing's remaining hit point pool is, so she can tell Oyobi how unlikely she is to survive long enough to take it down.
...It, uh, it was already pretty hurt when they arrived, and it, um, has nine hit points left. And it's Oyobi's turn.
Oyobi Yamatake: Oyobi dashes forwards, Naruto-runs up to the T-rex's throat, and does a spinning leap that slashes open its jugular. It roars, and the roar swiftly fades off as its breath escapes.
Saelhen du Fishercrown: God dammit, Oyobi.
Oyobi Yamatake: "YES! YES! B-S-U! B-S-U! B-S-U!" "THAT is how it's DONE!" She is jumping up and down, doing a celebratory dance, the works. "Flawlessed the boss! Hell yeah!"
Yeah, so... I had kind of been planning on her getting oneshot and laid up in the hospital, as a sort of character growth thing and also keeping her out of the way of certain events in town, but, uh... the dice... didn't exactly... share my priorities.
With the bobbledragon slain, and Oyobi doing an extremely obnoxious victory dance, the rest of the party springs into action to stabilize the guard who was used as a chew toy. Thanks to his plate armor, he hasn't lost much blood, but he's got more broken bones than not, and his prognosis wouldn't be good... if it weren't for the healer's kits Looseleaf had the foresight to buy for everyone. Saelhen stabilizes him, and Orluthe calls on his goddess to Lay On Hands to save the guard's life.
Then there's this guy- the captain of the guard, who fought in the battle with a fancy crossbow that shot flaming bolts. He demands to know who the party is, seeming kind of annoyed that they rewarded weakness by saving the guard's life.
Benedict I. (GM): He looks down at your medical kit. "Y'know, all of my men are prepared to fight and die for our home. You really want to take away this man's glory?" The injured guard looks up. "Uh, sir, I- it's fine, actually..." "Feh."
Looseleaf: This guy immediately seems like a bad boss.
Saelhen du Fishercrown: Oh, he's ridiculous. Okay, that changes the tenor of this conversation somewhat! "...I apologize, sir," says Saelhen, bowing to the guard on his stretcher, "if I have diminished your victory with my carelessness."
And rather than give this guy any more of the time of day, Saelhen asks the random guard his name. (And then I have to give him one and make him a character, whoops.)
Medd Cutter here is thankful for Saelhen's assistance saving his life, and Saelhen pledges to remember his heroism. The commander feels- by design- somewhat left out of the heroism-remembering, and declares that he is REX SCAR, and Saelhen kind of blows him off. He's not happy, but...
Captain Scar is still the sort of person who is very impressed with anyone who rolls up and kills a bobbledragon just because they felt like it, and despite Saelhen's calculated snub, tries to get buddy-buddy with the group of obviously very powerful people who just arrived. He decides to help them through customs without going through the usual processes, much to the chagrin of...
...Long-Tongue, Cauterdale Customs and Border Inspection Officer of Cauterdale, who's very loquacious and wordy and redundantly repeats what she says in different words to phrase things differently in a somewhat unnecessary fashion for no real reason. Rex bullies his way past her, but Saelhen- as another snub, and just to be... nice? (What's her game...?), hands her the 300-page history of the de la Surplus family as collateral for a deferred border inspection.
Inside the walls, Cauterdale is a very crowded place. It's like 80% slum, choked with buildings constructed of a patchwork of scrap metal and discarded siding, without much wood to speak of. The streets are narrow and bustling, and the general vibe around the place is impatient.
The remaining guards escorting them (Rex went off someplace) inform them, when questioned, that the town indeed burns down the forest around them- since they're near the jungle, horrible dangerous things tend to come out of the trees to attack them, and their harpoon defenses are most effective when they can see their attackers coming from a mile away, with no obstructions. Looseleaf asks if bobbledragon attacks are common.
Benedict I. (GM): Another guard shakes his head. "No, that one was pretty crazy. Usually it's just the giant spiders, or the giant mosquitoes, or the mushroom demons." "We've had a few bobbledragons before, but that was like, four at once."
Looseleaf: "Oh gods there's already giant spiders?!" "We're not even at- I thought this was a pine forest still!"
Benedict I. (GM): "No, that's usually after it rains," Medd says.
Looseleaf: Looseleaf casts Druidcraft. Please tell me it's not going to rain.
Benedict I. (GM): Nope! Clear skies for now. "Whoa, cool."
Looseleaf: "Thank the gods of sea and sky and weather and everything even tangentially related to weather," she says. "No rain." "I hope it never rains, ever again."
Benedict I. (GM): "Haha, better stay away from..." "Wait, where are you headed?"
Saelhen du Fishercrown: "The rainforest," adds Saelhen, mildly.
Looseleaf: "Ttttthunderbrush, and yes I know that place is crawling with spiders NOERU SHUT UP,"
Then Looseleaf asks about what they're there for: the Deathseekers' Guild. Unfortunately, the guards tell them that the Deathseekers... probably still exist, but they're like, a weird secret club of old people who think they're too cool to join the guard. They give them a couple of leads: apparently the Temple of Andra keeps tabs on them, and a guard by the name of Mags was the last to see them when they recently left the city.
The team splits up- Looseleaf and Orluthe head for the temple, and Oyobi and Saelhen head for the guardhouse to talk to Mags. (Vayen... is still gone, after vanishing as soon as the bobbledragon fight started.) The latter group does their thing next session, so...
After dropping off their rental giraffes, they head inside to meet...
This guy, working the reception desk. He seems to be made of rock, and when he talks he rumbles.
As Looseleaf explains their dilemma and their need for Deathseekers, this guy takes a keen interest in their plight. He's very "hmmmm, iiiiiinteresting, oh i see, you don't say?" about the whole thing, making a very normal interaction seem as ominous as possible.
He tells her that the Deathseekers, to his knowledge, should be back in the city from their unspecified errand inside two days, and offers to take a message.
Looseleaf: "I don't suppose they're looking for a green dragon, are they?" Benedict I. (GM): This guy's smile keeps getting wider. It's kind of creepy. "Hm? What makes you say that?"
As she explains about the dragon, he offers her and Orluthe a candy from a bowl on the desk. After some hemming and hawing out-of-character because the creepy rock man is offering you suspicious candy, they eventually opt to have some, because really, Looseleaf isn't suspicious of this guy. Hers is lemon-flavored. It's tasty.
Then, as she describes the empty tower with the corpse of the torture wizard in it, this guy's demeanor changes suddenly from "creepy wry amusement" to "genuine concern". He tries to put on a poker face, but him having a poker face when he's until now been all creepy-friendly chewing the scenery... stands out. He gives her a strong assurance that the Deathseekers will handle this problem for her.
Benedict I. (GM): "I... thank you, for this information." Looseleaf: "You're welcome. Please, uh, make sure that the Deathseekers get this information as quickly as possible. The dragon eats a corpse a week and there's only three corpses left in the tower, there's a very real deadline on this." Benedict I. (GM): [rolling 1d20+4] (Insight) 17+4 = 21 Looseleaf: Belatedly, Looseleaf realizes she's made a mistake. Benedict I. (GM): "You say... the dragon eats three corpses a week?" "Only three corpses left in the tower?" Looseleaf: Namely: Looseleaf has no good reason to know the fact that the dragon eats a corpse a week. Since she's never met the dragon. Benedict I. (GM): "Curious information." "How did you come across it?" Looseleaf: "Uh, erm, uh." Shit.
Looseleaf opts to tell the truth about Arnie, to avoid spinning a dangerous web of lies for herself- after all, Arnie's not worth lying for. She does describe him in as sympathetic terms as she can, though, and asks this guy not to harm him if possible- she doesn't want to break her word to Arnie if she can help it.
Benedict I. (GM): He takes a moment to process this. "...Very well." "My people will be the soul of discretion." "I thank you very much for your generous contribution to the Ecumene of Understanding."
Looseleaf notices that something is wrong.
This guy is the receptionist. He's not a bishop or anything. He's not even wearing priestly vestments- just a nice suit. And he's speaking as though he's in a position of power- "my people", he says.
And after considering various possibilities, she tries something. A shot in the dark, but...
And the way Looseleaf plays this, is... "quit acting like you don't know what I'm talking about, c'mon, the jig is up". She takes out the letter she found in Lumiere's tower and shows it off, as proof!
And this guy keeps denying it, and getting increasingly more panicked, and looking nervously over at Orluthe, and asking her to please stop, shh shh shh shh, and it's when he begs her to have a conversation with him in private please that she makes the connection. If this guy is affiliated with Lumiere, who's apparently affiliated with some sort of secret conspiracy that's affiliated with some sort of deific usurpation... he maybe doesn't want to have that conversation in front of a cleric.
Looseleaf:"Okay, Orluthe, uhm. Sorry, so," Looseleaf whispers into Orluthe's ear. "Long story short, turns out my sister, who left my village way before I did, ended up falling into some kind of magical secret society. The kind of secret society with Hal Lumiere, i.e. 'the torture wizard who came up with all those pain knives that we all got stabbed a lot with', was apparently a very active member of." Benedict I. (GM):Oh my god, um. Looseleaf: "So, uh, I'm kinda freaking out about that, right now, but if my hunches are right then I'm the sister of someone important in their organization?" Benedict I. (GM): As you start whispering, he tries to interrupt. "Please do not say things to him!" "Please let us speak in private!!" Looseleaf: Oh he's freaked out now huh. "Anyways that's why I am actually indeed going to speak, with this guy, in private," Looseleaf finishes. "And if I don't show up in a half-hour or so, then things have probably gone lopsided." "In which case you should find everyone else and tell them to, I dunno, come save me or whatever." "You got all that?" Benedict I. (GM): The rock man looks distraught. Orluthe Chokorov: "I, uh... think so? This is really... I'm not sure it's safe..."
With a good Persuasion roll, Orluthe agrees to stay behind, and the rock man leads Looseleaf into a backroom whose doors and walls seem warded heavily with some sort of abjuration magic. A secret saferoom.
The man describes the problem: the gods don't know that they exist, or didn't until Looseleaf went and told a cleric of Diamode that they existed. Clerics, in this setting, channel divinity literally- their gods come into their heads to do magic for them, meaning anything a cleric knows is something a god can know, if they care to check.
Benedict I. (GM): "Because if the next time Diamode is in that kid, if she goes looking for that memory..." "I mean, she might not. And you didn't mention anything about our aims, so she might consider it beneath her notice." "But that, right there? That was nearly game over." "And I can't just kill you, because if I did, Yomi would end me." Looseleaf: "Yeah, I'm not incredibly foolish, I haven't actually shown anybody else Yomi's letter." "Nobody knows that Lumiere was involved with... deicidal blasphemy." "That's what this is about, right? Thereabouts, in terms of sheer magnitude and hubris?" Benedict I. (GM): He sighs. "It's not like that." "At least, it's not all like that." "The Project is... fractious." "The less you know about the project, the less you're able to carelessly blurt out about the project your cleric friends, or to anyone who tries reading your mind or tricks you into a Zone of Truth..." "The safer we all are." "With as much as you know, you're already dangerous. It'd be best for us- and you- if you dropped this. Never spoke of it to anyone."
Looseleaf points out that it's good that she found the letter, because that tower was sitting abandoned for a year- anyone could've walked in and read it, since it was lying on a bookcase in the open.
This is somehow not taken as good news- when he finds out that the letter could've potentially been read by anyone, that there was a security breach for a year...
Looseleaf: "Look, my man, next time you want to send a letter, by the way, use... use some encoding." "Don't just write things in plaintext like a chump, by the gods." Benedict I. (GM): "He was supposed to burn after reading." Saelhen du Fishercrown: he's too dead for that! Benedict I. (GM): "Wait, you said it was... out in the open?" "But he's dead?" "Either he was an idiot, or... someone else opened his mail." "Except... Yomi should've hand-delivered it, so..." "...well. We'll definitely look into it."
He brings up sending for someone to do memory magic to handle the breach- but he realizes he can't have that done to Looseleaf, because Diamode would notice if someone tampered with her cleric's memories, and someone needs to still know what's up so they can keep Orluthe away from the truth. (Plus, she figures she'd notice the inconsistencies and end up sleuthing it out again.)
Looseleaf asks if Yomi is doing well, and gets... that she's intense, and powerful, and she probably thinks she's "doing well", but... he doesn't know about happy.
Lastly, he shows Looseleaf a symbol- a blank circle, with the elvish character 人 drawn underneath. The symbols of gods are typically circles with a design inside- so the meaning of this and its relationship to the nature of the Project is fairly easy to infer.
Benedict I. (GM): "If you need to prove to someone you're in the know, without blurting out a bunch of dangerous details, this is the mark." He then eats the paper and the graphite stick he used to draw it.
Next time: Saelhen and Oyobi grill the guard Mags for information on the Deathseekers, and connections are made with powerful individuals.
patchdotexe · 4 years ago
Text
Explorers of Arvus: uhhhh / 3.23.21
today's notes are different from usual bc. well. you'll see
LAST TIME ON EXPLORERS OF ARVUS: i broke my sleep schedule and am barely existing, so this is fine. we went back to camp vengeance and uhhhhhhhhhhhh we are now going to fuck off into the forest to die or prove a very important point
oh god we forgot to level up
[mgd voice] BOOSTING NYX TO MAXIMUM LEVEL
im so fuckin tired. what on earth am i doing. how do i level again
k is not here this time but instead we've got mae+nii bonking their heads together to simulate 2 braincells and so far it is not working. i might just have to like fuckin, drop out n zzz partway thru or somethin. would be fun to see how chaotic michael makes charlie in my absence
oh wait i can do d&dbeyond i think. how do i work this again. will i ever remember i have shield
what level am i. level 6? pog. oh shit i think i have a new thing
. new spell
. 3 total 3rd level spell slots
. bend luck! i can now screw people over on purpose (and will probably use my sorcery points FINALLY)
michael is leveling charlie up bc my brain is apple sos
ASDXFKLJFH I FEEL CALLED OUT zec rb'd my most recent art of MaX with "all i know about xem is that leo likes xem a lot that's the extent of my knowledge" THANK U FOR SUPPORTIN ME ANYWAY
there will be less blaseball distractions than last time bc blaseball is now on siesta. however i will still have MaX brainrot in the background bc i was drawing xem
wyatt mason my beloved
OKAY I GOTTA MUTE THE TACO STAND FOR THE ENTIRETY OF D&D i cannot and will not get distracted. we can do this. we
nintendo wii
we havent even started yet and im already incoherent
ok i have made a decision and that decision is that i do not have the brainpower to play. however i do have the brainpower to take notes hopefully! so ill just like. vibe. this will be a first
oh man im gonna pick up Blink. charlie is gonna be a fucking menace to herself and others
oh my god its not concentration so charlie may continue teleporting while unconscious. thorne is going to hate this
[charlie gets her soul eaten by a ring] [charlie singing dragonston din tei at halvkWAIT JORB HAS A PRIZE
jorb got a thing! an evil genius thing! figure man. fugrine. figuring. help
GREEN HAS DIAGNOSED ME AS TIGREX MONSTERHUNTER i love this
my notes are a disaster. this is so sucks
serotonin is stored in the wiggly zoomy jorb camera
jorb: his pinky is the size of the rest of his fingers
leo: he has a disease
jorb: he has a disease.
jorb: that disease is male pattern baldness
leo: [reduced to tearful giggling for mysterious reasons]
LAST TIME, ON EXPLORERS OF ARVUS: we've returned to camp vengeance! taure is still unconscious, which is not very great. camp vengeance is doin better tho!
michael, as part of the recap: ingrid is getting railed by her new girlfriend,
first dice roll of the day is michael rolled a 1. good start
OH THORNE IS AN ARTIFICER NOW thorne took a level in artificer!
"...it's like figuring out the right mathematical equation to summon a gun."
group is gonna go check out the statue that we passed by now that we're not WHAT DO YOU MEAN PONK AND GEORGE CANONICALLY HAVE IBS thats it im not looking at 772 anymore
im doing a bad job of paying attention but at least im Present
SIERON LEARNED FLY AND USED IT ON CHARLIE
michael: what do you want to do with your new flying powers?
leo: how many problems can i cause in 10 minutes
guard 1: ...why is the halfling flying?
guard 2: [rolls a 3 on intelligence] i think they can just do that
groundhogs, the real scourge of the campaign
silje and sieron are gonna hunt a big elk. they got distracted and sieron is putting grass on silje's head. i think
WAIT WE'RE ON WATCH NOW FUCK
we have discovered kali's tragic backstory whoops
update i am. too sleepy for this. good night everyone
[ and then leo went and somewhat took a nap! solar, normally playing thorne, started playing charlie in my stead. @jorbs-palace, local hero, started taking shitpost notes in my stead. ]
jorb's ghostwritten notes for leo:
help solar is immediately doing a cursed voice for charlie. charlie can do so many crimes
congratulations, charlie is now temporarily immortal!
dwarves can hit things with their beard
kali wants to know if she's legally allowed to bail
she'd feel really bad if she had to loot our corpses for payment if we died.
we have entered the Tree Zone
one of the corpses is now a flamingo (has one leg)
silje has decided to stab the ground. take that, dirt
kali was large size for a second there but then she remembered to not be a giant
"you accidentally deleted my cat?!"
silje has learned naruto cloning jutsu
be gone, thot
oh boy, making an int check to look at a statue! 11! silje is dumb apparently.
hmm. the statue has divination magic. it's also affecting silje.
SILJE LEARNED A 6TH LEVEL SPELL? its only single use but still
you solved my statue riddllllleeeee
thorne forgot to have eyes
its a shame mac and cheese doesnt exist in the d&d universe
wizards are just math criminals (the criminal part is setting people on fire)
sieron crit fails a check but it was still a 9 because of having +8
thorne is looking for what's weird!
uh oh music got scary, never a good sign
hmm. those leaves over there weren't dead a moment ago.
UNDEAD TROLL TIME! rolling initiative
"it's ok, im a wizard, it's my duty to be correct." "wow! waow!"
woooah here he comes
IT JUST DID HALF SIERON'S HEALTH AS A PASSIVE END OF TURN EFFECT?
thorne backed up and cast eldri- oh, ray of enfeeblement. character development continues
charlie is going to just blink out of existence for a minute.
big chungus has grabbed silje and sieron. BIG CHUNGUS HAS THROWN SILJE AND SIERON.
sieron is using hit and run tactics! isn't good at his extra attack yet though
silje is activating bid bid blood blood blood
thorne uses beam of skipping your leg day. troll's legs are now skipped.
michael is trying to determine what a 'clavicle' is
"does that mean the star trek kind, or the bdsm kind?"
charlie wants to cast magic missile.
charlie has vanished back into the ethereal plane mid-taunt
silje has decided to not get bitten today
silje may or may not have stats.
oh, right, trolls are weak to fire! and also we forgot to upgrade sieron's firebolt. so it actually hurts now!
silje is full of knives and blades and does 31 damage in one turn!
charlie shouts words of encouragement from the ethereal plane. a nearby ghost vibes with this.
🎉 eldritch blast 🎉
kali remembered she hates the sun
silje is enthuasiatic about charlie saying "get him cat boy!"
charlie contemplating using fireball to nuke the troll and also the entire stonehenge
charlie has decided to use magic missile instead, probably for the best
the troll bit at charlie SO POORLY it broke some of its teeth on the ground
charlie is too small to hit
accidentally rolled advantage on a firebolt, so got to learn it WOULD have done 29 damage with a crit but instead it missed because it was not actually with advantage
silje has just sliced open its entire back and made a spray of frozen blood! radical. big boy is down!
we have burned the body because we are not stupid. well, we ARE stupid, but not stupid in the way of leaving a body full of necrotic magic around
[dr coomer voice] i think it's good that he died!
we're also doing a funeral pyre for the other corpses that were around. just to be sure.
our loot is: the satisfaction of a job well done
thorne is cosplaying as charlie
charlie has located the direction troll came from! she found the 'the way to sweet loot' sign
thorne is apparently better at survival checks than our hired guide? wack
we found a viking house! it has: mead, a shield, gravestones,
found a gold coin in the mead! maybe it was thirsty
oh theres a LOT Of coins in there actually. 60 gold and 120 silver!
have successfully pointed out a hole in the DM's logic :)
there was a raven! it cawed and left. ok bye buddy
and that's where we leave it! heading back to camp vengeance next time.
someone rated this session a 7.2 out of 10, which is very specific
good night mr coconut
2019weekendproject · 6 years ago
Text
Call Me BREW-know Mark!
It's the first workshop in this weekend series, some hands-on time to learn how to brew coffee at home. Yes, it's not rocket science but this fun little activity had me convinced that there is science to making the perfect cup!
I googled 'weekend classes' and found this on offer. I've always been a coffee fan, but I wouldn't call myself a legit one, which means I fancy a 3-in-1 as much as an espresso. You could say I'm only really in it for that comforting feeling you get from sipping a hot drink, but with a kick, which is why I don't share the same sentiment for tea.
There I was, an eager beaver, up bright and early for a Sunday morning class. It's only my third time in Yardstick; definitely dig the cozy workspace vibe.
We're a small group of five; I ended up fifth-wheeling with two couples, plus Jon, our coffee expert-slash-teacher for the day, who kicked off the session by asking why we signed up for the workshop. It was a gift for the first couple, who travelled all the way from Laguna, being such coffee addicts; out of curiosity, says one; and a good knowledge head-starter for a business, says another. As for me, it was the weekendproject = fill my weekends with something that adds value... sparks joy! 😂
Obviously, I was more interested in taking photos, so I did away with notes. However, if there was only one key take-away, it has to be that COFFEE IS 98% WATER, so the type of water you use actually matters a whole lot ~ FILTERED > DISTILLED.
On we went then to get our hands dirty (or very, very clean ~ wash your hands please) to try out some recipes!
It brought me back to my Chem lab days
3x3
Three rounds with three recipes each, and I get to fly solo ~for obvious reasons. We exchanged notes on what we thought about each brew.
Round 1: French Press
We were asked to dive right into the brewing process, only to learn the error of our ways after, like... forgetting to AGITATE!
We were all too gentle with our grounds, but apparently, you gotta get rough to get the most out of them beans. AGITATE ~a fun word to really just mean stir vigorously.
Round 2: Kalita
Your coffee can do without the taste of paper, so prep by pouring hot water over the filter beforehand.
There's also 'the PURGE', sacrificing six perfectly fine beans in the grinder to wash out the last batch. And so I did the purge, the grind, then release.
Side Note: Remember to hold the cup before you release, else you end up throwing away fine fresh grounds, which is exactly what happened to me ~and it was on my second attempt #clumsy
Round 3: AeroPress
Purge. Agitate. Press.
Who knew that small differences in steeping time (1 vs 3 vs 5 minutes), water temperature (80 vs 90 vs 100 degrees), and grind size (14 vs 16 vs 18) could mean the difference between a good cup and a crappy one! For instance, I now know why my recent brews have been extra sour: an effect of underextraction ~ I guess you really can't hurry a good cup (...just like love)
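If you like seeing that spelled out, here's a rough sketch of the 3x3x3 recipe grid from the session (temperatures assumed Celsius, and assuming a bigger grind number means coarser). The sour/bitter scoring is purely my own guess at the pattern, not Jon's actual formula:

```python
from itertools import product

# The three variables Jon had us compare, values straight from the rounds above.
steep_minutes = [1, 3, 5]
water_temp_c = [80, 90, 100]   # assuming Celsius
grind_size = [14, 16, 18]      # assuming a larger number = coarser grind

def verdict(minutes, temp, grind):
    # Rough guess at the pattern: shorter/cooler/coarser pushes toward sour
    # (underextracted), longer/hotter/finer pushes toward bitter (overextracted).
    score = (steep_minutes.index(minutes)
             + water_temp_c.index(temp)
             - grind_size.index(grind))
    if score <= 0:
        return "probably sour (underextracted)"
    if score >= 4:
        return "probably bitter (overextracted)"
    return "worth tasting"

for m, t, g in product(steep_minutes, water_temp_c, grind_size):
    print(f"{m} min @ {t}°C, grind {g}: {verdict(m, t, g)}")
```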
Learned loads in the quick session. Love it when I get more just from listening to other people's Q&A. Really glad I was in this mix!
More courses available ~ who knows, latte art could soon be part of my arsenal of coffee-skillz
WHAT'S NEXT?
It's probably worthwhile to get to exact measurements - grind size and weight, water temperature, steeping time - but on the daily, I'll have to make do with approximations. I'll eventually have to beef up on tools (think weighing scale and bean grinder) to calibrate my bean-method combo to perfection! After all, to get to a close approximation, you need a clear idea of the exact.
A wine appreciation class should be a fun follow up for a weekend workshop. And while you can already learn loads online, I still think HANDS ON >> HOW TO YOUTUBE VIDEOS
This weekend is rated A for A~xperience! 😋
the-nfe-channel · 3 years ago
Text
‘Oseaaq Burns: SftE Writing Exercise #2’ by Eddie White
"Oh………em………GEEZY, NADY! Lookit! They have a fish taco stand………UNDERWATER!" Shantrice shrieks, overcome with joy and wonderment. This dimension they've been transported to—a completely aquatic version of earth called Oseaaq—is the most remarkable thing she's ever laid eyes on, but for Sinead? Not so much.
"Shan………for the love of God………I already saw it! I'm not fuckin' blind," she gripes, rolling her eyes. "I don't know what's so fuckin' amazing about some muthafuckin' fish eating fish anyway."
Sinead has been moping around, arms folded in indignation, feeling her usual regret for tagging along with her best friend. For the life of her, she can't figure out why she unfailingly agrees. She is always thinking that she'd rather not, intending to rebuff Shantrice's offer with the strongest no ever uttered, but the result is consistently the opposite. Every. Single. Time.
"Well, it's not about fish eating fish, obviously, but I mean, how many times have you been to an aquatic dimension with a damn fish taco stand?" Shantrice pauses after that inquiry, but abruptly continues before Sinead can respond as the question was rhetorical. "I'll tell you how many: ZERO! This is fuckin' newsworthy shit! Uncommon territory, literally!"
"Shantrice………we're not here to eat and gush over food stands," Sinead sighs. While it is true they didn't come there to eat, she can't deny the rumblings in her tummy. Nevertheless, she ignores her hunger. "Actually, we shouldn't be here at all, but you and that damn professor and y'alls inventions, always opening up gateways and shit."
"Hey! Don't fault us for our interest in the unknown! We do this for science anyway!" Shantrice declares as she marches along, holding her head high like she was Madame President speaking to an auditorium of packed students and constituents. "Besides, you're never doing shit. Learn to live a little."
"Ugh! Whatever, Shan!" Sinead screams, storming off in a rage. Due to her current disposition, she accidentally activates one of the abilities she has copied, causing jawbreaker-sized holes to appear on her face, her neck and the back of her hands. These holes erupt with an intense, silver fire that bursts through the protective gear she's wearing.
Even with exposure to the aquatic atmosphere, the flames aren't being squelched, which doesn't bode well for Sinead as her suit is now filling up with water. Before she can even get a grasp on the situation, the entire area is ablaze, including the fish taco stand Shantrice was raving about.
"Oh no, Nady! What the fuck have you done!?"
"Don't yell at me, dammit!" sneers Sinead. "I wasn't trying to do this, it just happened!"
Her anxiety has reached an intensity of 50,000 as she struggles to repress the flames and deal with the water entering her suit. Although the uniform has an advanced form of nanites that can fix any kind of damage as well as suppress Eclipsed abilities in dire situations, they aren't able to perform those actions for some reason. The fire is apparently overpowering them, something Professor Corsair most likely didn't factor in.
Also, being that the required training to manage this power had only recently started, it's no surprise Sinead is experiencing zero success.
"Shit! I'm sorry, Nady! I really am," Shantrice replies, feeling distraught. "I know you weren't. Nevertheless, we gotta do something about this fire and you—IMMEDIATELY!"
Shantrice is trying to keep a cooler head—no pun intended—and simultaneously bring soothing to her bestie. Quite often she forgets just how quickly Sinead can become flustered in intense situations. It's amazing that UpShift was even able to convince her to join the Safeguard Junior Guild. Although, she has surmised that Dexter joining may have something to do with that.
"Well I don't know what to do, Shan!" Sinead wails. "The guy I replicated this power from didn't know how to use it himself, which left me shit outta luck!"
"Okay, okay! Just try to calm down," Shantrice sighs, simultaneously feeling exasperated and enervated. "Your panicking is making me panic."
"Well you should be panicking, Shan! I might drown out here!"
As Sinead is persistently trying to recall the flames, something comes barreling towards her in a state of supercavitation, slamming into the poor girl. The impact knocks her about three yards away from Shantrice, and possibly out cold.
"Ohshitohshitohshitohshit—SINEAD!" Shantrice yelps. She raises her left arm and presses a rectangular button, arming a miniature cannon located inside the forearm of her armor. "Alright! Whoever just hit my fuckin' bestie, show yourself or I'm blasting every cotton-pickin' thing in sight!"
Overhead appear a trio of massive shadows, blocking out what little sun is visible. As a possible response to Shantrice's threat, the shadows descend and reveal themselves to be a humanoid shark, whale and squid.
"Oh my fuckin' Lord………," Shantrice shudders.
Realizing she's in over her head, she tries to activate the teleporter on both her and Sinead's suits, but the latter is too far away. Her hopes of reaching Sinead are dashed for now, as the path to her retrieval is blocked by not only the humanoid creatures, but a wall of roaring flames.
"ShitshitshitshitSHIT!"
"Don't go screamin' now, bitch," roars the humanoid shark, which appears to be a hammerhead and also female. "Where's all that tough talk from a few seconds ago?"
"Tough talk? Tuh! Bitch this is who I am!" Shantrice growls. "Now, I'll admit, y'all got me outnumbered, but one of y'all attacked my friend, so somebody's gotta pay. Who's it gonna be?"
The humanoid whale, also a female, steps forward. "I can handle this runt, Demetria, unless you object?"
"Go right ahead, Nadine," Demetria chuckles. "I wanna see what the bitch is really made of."
Shantrice grunts and cracks her knuckles, an action which sounds extremely weird because it reverberates inside the gauntlets of her armor. "Alright then, fuck it. Bring it on, Shamu. I'm gonna kick your tubby ass."
"We'll see about that!" Nadine roars, lunging towards Shantrice at breakneck speed, almost too fast for the girl to dodge.
"Fuck! You move like a fuckin' NFL player!"
"I don't know what this NFL is that you speak of, but I did play Oseaaq-Orb in high school," Nadine explains.
"Yeah………that's pro'lly the same thing, but I'm just gonna end this discussion here. This is weird," Shantrice groans.
"Suit yourself!" Nadine barrels towards Shantrice in a supercavitating state, similar to what Sinead was hit with moments ago. Either she was her assailant, or all of the inhabitants of Oseaaq possess this ability.
"You're not gonna pull the same move twice, wench!" Shantrice tightens her left fist, firing a blast from the cannon she armed earlier. "Take that!"
A pulsating sphere of electricity soars towards Nadine, striking her directly in her blowhole. The attack sends a shock through her nervous system, disabling her for the time being.
"Well shit, that was easier than I thought!" Shantrice snickers, feeling overconfident. "One down, two to go! Who's next!?"
🌊🌊🌊
0 notes
Text
The only thing that we know for certain in life is that all of you reading this right now and myself will DIE. (NOT tonight - I just mean at some point in our lives - this is NOT a terrorist attack - believe me, I do NOT have malicious or evil intentions - well in my opinion at least, but sometimes our perception of ourselves differs to how others perceive us - but does that really matter? All I care about is what I think about myself) Wait, Hang On I Lied. There's one more certainty in life. That you and I are human beings. (Well, I do hope so. After all, I only know who I am. And only you know who you are) Yes I tried my best to think of an engaging first liner to grab your attention. (And if you're still reading this now - it must have worked!) I was just worried with all the 'clutter' and 'competition' out there that you could potentially miss this. And yes that's also why I have the photo of a cute baby. And also because we were all once babies at some point in our lives (well unless you came out another way which is not a certain opening in a female body) And before you amazing security officers out there, Who work super hard to protect your citizens, Even on the weekend (which is meant for rest with family) (and shout out to everyone in Australia who still worked today on Mother's Day -your sacrifice of your treasured time which could have been spent with your Mother (the technical economic term is opportunity cost - in case you were wondering - yes I know you all are secretly nerds) Will never be forgotten) Ok so back to you security officers Think of shutting this down, I assure you that this is NOT a security threat. It is NOT an act of cyber terrorism. 'So what is it then?' - you find yourself thinking (Yes I am a mind reader) Today marks a turning point in the course of mankind. Today marks a day that hope is restored in the world. What you are seeing today will be written in history books for future generations to come. We will make it in a Guinness World Record Book for 1. The most number of people clicking going on a facebook event 2. The most number of people posting on a facebook event page 3. The most number of people sharing the same message across social media I know what you're thinking. Well this girl sounds 'ambitious' Which were common responses I got Well yes, This is 'ambitious' I think so too But 'ambitious' and 'reality' are NOT mutually exclusive (is this the right term? I always struggled with probability in maths) But it's going to happen - keep reading on if you would like to see how history is going to be made :) (But technically, history is being 'made' every single day by each and every one of us just be being alive - even going to the toilet and eliminating waste is technically 'making' history) Every single person in the world will eventually receive my message. (And news outlets out there! Please choose a decent photo of me [ie. not one where my armpit hair is showing] Actually, I don't mind if you can find a photo of me with armpit hair. (Yes - that's a challenge!) (We all have hair - I don't see what's the big deal) (Why would you want to see a photo of me with armpit hair when you can just strip yourself down [yes I put this in just for you - you know who you are xD] and just lift up your arm and VOILA!!! Hair before your very eyes!!!!! ) (I'm actually super hairy In my opinion For a girl) Also, I'm going to keep on ranting about this (again, PMS is a real thing for the female population - have sympathy for us fellas!) 
Another thing I do not understand is why we must wear clothes And in some places in the world, Such as Australia, We can actually get charged with a criminal offence (and maybe be put in gaol) For stripping down in certain public places (with some exceptions such as nude beaches which are mainly filled with elderly people right now - I reckon we can diversify that a little) And showing our 'private parts' (but are our 'private parts' really even that 'private' after all if we all have them? (well I know it differs between females and males)) but yeah - and some of us have unique bodies - either born naturally or through operations - I respect that - it's your life and you choose how you would like to live it - and which gender you would like to live as and which private parts you would like to have) And in some places like Australia, Myth has it that the bigger something (something in a similar shape to a sausage) is The more masculine a male is Well to me, that's absolutely bullshit I don't know how these 'myths' even originated! All sizes are beautiful to me! Ok, so back to me and armpit hair: I filled in one of my friends' survey about hair and shaving yesterday. Why is shaving a thing anyways? We all have hair on our bodies (well some more than others but we all do) Why is it often socially unacceptable for girls have to have cleanly shaven armpits when they wear sleeveless tops or dresses? And why is often socially acceptable for males to not shave?? Now that is gender discrimination to the max! Why is this NOT written in the Discrimination Act in Australia?? (or maybe it is - I have to admit I haven't read it - and I highly doubt that my fellow Australian peers have either - but apologies! If it is in there!) And on that note of Discrimination, It is so real And close It still happens today in the 21st century!!! Right here in Australia This week, I had the privilege of talking to a beautiful Indigenous lady I've always been curious of Indigenous Australian culture (do you know that Indigenous Australian culture is the oldest surviving culture in the entire world???) WOW Because I certainly didn't know this. If Australia was a person And let's just say I was that person for theoretical purposes I would go around showing that off I would tell everyone I would tell the entire world I would be super proud of that I would make sure the entire world knows (but why doesn't the entire world know?- well maybe it's only me who is oblivious and ignorant and unaware - and maybe all of you do know this - please correct me if I'm wrong) Ok, so yeah. This beautiful Indigenous lady (and I do remember your name - I just want to make sure I respect your privacy before I decide to put your name here for the world to see because there's no way that I have been able to contact you) Said her dream was to become a cook (yes you go girl!) And she applied for a cook job recently. She was called in for an interview. But as soon as she showed up, They told her the position had been filled Now if that isn't discrimination to the max, I don't know what you call that I was super angry when I heard this. Now those of you who know me know that I don't normally get angry It takes quite a bit to get Leeann angry (I give off the impression of being a calm, controlled, sweet, pure and innocent girl) If I was present at the time, I would've taken those café owner(s) to court. 
And sue you for breaching the Discrimination Act Because the legislation is real and it is properly enforced (well I don't work in the legal field so I actually wouldn't know) But nothing in the world (I believe) cannot be resolved with Honest and open Communication. Just by opening our mouths and making some sounds (I think that's what we call a language), Together, we can solve any problem And we must learn to be accountable And take responsibility for our own actions Like a girl (why do we tend to say man? Are we trying to imply that females are less brave than men? My fellow female population Let's band together and prove them wrong -Trust me boys, you never mess with girls, We will make sure You Rue For The Rest Of Your Life Until The Moment You Die :) [just kidding XD- no I'm not kidding here] Yes, we must take responsibility for our own actions like a girl (I remember seeing a campaign trying to challenge gender stereotypes a couple of years back - that was awesome! I forgot what it was called though but I do remember it so it means it was effective) And I will illustrate this with something we all do -fart. Why do we feel the need to suppress our urges to fart? If you stink up a room with your own smelly gas, Then at least do it proudly! Make it as loud as possible! And admit it was you! And apologise maybe! OR, if that's too scary for you, I have another suggestion which has largely been inspired by one of my close mates (who I'm sure would probably appreciate it if I don't name and shame them - your very welcome in advance =D) This is no magic but You simply tell the person you're talking to or the people around you that you need to fart And head outside To do the deed. Then walk back in. And continue with your life. Easy. See, life isn't at all that complicated is it? (I know! I'm a genius!!!) Prior to my launch tonight, I shared my initiative 'Die To Live' with some fellow peers. I had many people who doubted me. But I also had many people who had absolute faith. Now, I don't blame those of you who I spoke to and doubted me. If someone told me that at Sunday 9pm on the 13th of May, 2018, Hope will be restored in the world, That the world will be changed And that it will be a major event in history, I will look at them And think they're nuts! (And no, in case you were wondering, I don't mean the pecan nut, macadamia nut, or peanut) And some of these people also looked like they wanted to lock me up in a mental health hospital. But what does it even mean to be 'mentally ill?' Am I considered 'crazy' just because I have different opinions that nobody else seems to have? Does that make me 'mentally ill?' (Correct me if I'm wrong, but in my humble opinion, that just means I'm a human being) While we're on the topic of 'mental illness,' Check out the School of Life and one of their recent videos Called something along the lines of - why the modern society makes us mentally ill I watched it over breakfast yesterday and could not agree more (i promise that this is not paid advertising/product placement or whatever we choose to call it) Because it's so good that I voluntarily choose to 'advertise' for them The School of Life does not need any paid marketing (yes you girls are awesome!) But at the same time, Yes, I get you. I wouldn't believe it either Until I see it unfold Before my very eyes Myself. But I certainty would not lock someone with different thoughts to mine in a mental health hospital, away from the rest of society. 
I would simply respect their opinion, try to understand and empathise from their point of view and then move on with my life. And I also had one special 'case.' You know who you are. You're the person I bumped into and didn't think I was 'insane' but instead thought I was plotting to commit suicide at 9pm Sunday May 13th and then upload 13 videos onto Facebook with each video incriminating a different person who lead me to end my life. -Just like the TV series - 13 reasons why Oh you funny!! (but I'm even funnier xD) But you had faith in me and that's all that matters :D Life is NOT a Television series!!! (For those of you who don't know what a TV is - it is essentially a virtual reality -trust me though, it's nothing special - and you're not missing out - because you're living your own reality instead - and I believe that is infinite times cooler than watching someone else's) But what I don't understand is why some of you who doubted me had absolute faith in science. (I'm not throwing shade here [or am I? - well too bad too sad because you'll never know what goes through my mind] but Shout out to that person I had an extremely heated intense friendly 2 hour banter sesh about science and religion a couple of days ago) Those words you used cut me But I forgive you Because I know you didn't mean it Because, in my humble opinion, science is a belief system in itself based off faith. For example, most of us in today's era believe that the Earth is round. And this is 'proven' to us through science. But until I personally travel up into space and view the Earth from a distance with my own very eyes, I refuse to believe this as an absolute 'truth.' (but even then, I may not even trust my own eyes - they could be lying to me - I could just be hallucinating) We often like to think we are 100% certain of many things in our everyday lives. Perhaps uncertainty makes us feel uneasy. In my opinion, we dislike uncertainty. Which is why we try to structure our lives and lock ourselves in some kind of routine to try and eliminate uncertainty (but this is simply NOT possible in my opinion - the only certainty in life is death - but even that's not even certain) Who said we should eat 3 meals a day - Breakfast Lunch And Dinner (for those of you who don't know what I'm rambling on about - because I'm aware you may or may not have ever eaten a proper meal (yet) - they're just names some of us use to tell ourselves when we should eat) Wouldn't hunger be a better indicator of when to eat instead of locked in time periods? And who said that we should aim for 5 serves of vegetables and 2 serves of fruit per day or something along those lines? (Yes it's a rhetorical question - I know who - 'official' nutritional guidelines or something I think) Because for me, if I know that the only certainty in life is death I would rather eat what I want to eat If I enjoy the taste of it But at the same time, it is all about the 'balance' (as Katherine Du likes to say) (there will be more on food and eating in the second part of my 'story' -I'm not going to tell you all of it now -just to make sure you keep reading heeeheheheee) And who decided that humans should sleep once a day? And it has to be at nighttime? And who came up with the guidelines that children need about 9-10 hours of sleep per night And that adults need about 6-8 hours per night? (Yes I know - it is scientifically 'proven' - but how did you scientists come up with these numbers? 
In saying this, I have the most utmost respect for you scientists -I'm just curious -it's hard work working in labs -I have some mates studying science/medicine and they tell me about their 4 hour lab sessions When I heard this, I was angry Because That's torture! Abuse of human rights!! Because I get hungry every 2-3 hours!!!) Wouldn't sleepiness and fatigue be more appropriate signals of when to sleep? Mum, I know you will read this. I did tell you that your friend's daughters will probably read my 'story' first Then tell their parents Then they will call you up And tell you to read this. (I wasn't at all wrong about that was I?) I have to main things I would like to say to you mummy: 1. Happy mother's day! 2. I love you Remember two nights ago when I got home and slept at 7pm Without eating dinner? And you were upset the next morning that I didn't eat your food? I apologise again if I hurt you, But I feel like it was not that necessary to 'lash out at me' when I asked (just innocently out of curiosity): Who decided that humans should eat 3 meals a day? OK so back to the science and religion 'friendly banter' I had Once again, the only certainty in life is death. (and I will repeat this numerous times throughout my 'story' just to annoy you - <3 - I challenge you to count how many times I mention that - and maybe there will be a prize for the person who gets the right number or gets closest to the right number! - just like those jelly bean in a jar guessing competitions! - just kidding - I'm not serious on this one - I can't be bothered to count myself - I have bigger fish to fry ;)) People thousands of years back were 100% certain that the Earth was flat. But they were somehow 'proven' to be 'wrong'. Now we (or just me) are 100% certain that the Earth is round. So in my humble opinion, we can only 'disprove' things but never 'prove' things. We merely get less 'wrong' each time round (Manson, 2016) But we are never 100% 'right.' Anything is possible. (Well maybe besides eternal life beyond Earth - but even that is not 100% impossible) So, an anonymous person who wishes not to be named recently brought to my attention how Fast the world is changing around us. For example, Facebook was invented in 2004 - it's only been 14 years - but I seem to hardly remember any parts of my life without Facebook in it) Wikipedia was launched in 2001 (and I didn't get this one from Wikipedia) (I don't know how I wouldn't 'survived' all those assignments without you! Thank you Jimmy Wales and Larry Sanger! And bless all you other inventors out there who invented something useful to humanity! Again, bless you all who believed me without needing to see it happen. You know who you are. I will never forget how you made me feel. There is nothing that fuels the human spirit like faith. (unless it's more alcohol) Complete And Utter Faith. Even my mother who raised me for 19 years and whom I crawled out of her (something - let's just say body) Doubted me. Yet some of you had utter and complete faith in me within minutes of talking to you for the very first time. And I reiterate again (mum, I'm not throwing shade at you here) If I had a daughter and she told me she's on a quest to change the world this Sunday at 9pm on Mother's Day, I (I don't know what I would do but I would probably not believe her) So….back to how Every single person in the world will eventually receive my message. I chose to use the word 'receive' instead of 'read' because I am also aware that language translation will be needed. 
TIP: Try copy and pasting this into google translate! (man technology does wonders!!!) And also because not all of us are blessed to be taught how to read. As to why I chose to use English, It's because it just happens to be the language I'm most fluent in. And also because, for some reason, English also happens to be the 'universal' language used across the world. I chose to use the word 'receive' instead of 'see' because I am aware that not all of us are blessed with the ability to see. I chose to use the word 'receive' instead of 'listen' because I am aware that not all of us are blessed with the ability to hear. I chose to use the word 'receive' instead of 'smell' because I am aware that not all of us are blessed with the ability to smell. (this doesn't really have anything to do with what I'm saying today because in my humble opinion, I don't think we can smell a story??? - well feel free to prove me wrong - nothing is certain in life besides death. TBH (to be honest), I just wanted repetition for a couple of lines because I learnt in high school English, that it will help deliver my message across) And I also say 'eventually' because not everyone in the world as it currently stands has even seen what 'technology' looks like, let alone have access to social media. That’s why I'm relying on YOU all to translate my message and communicate it to these fellow peers. I'm just one person. And I need your help. I can't do this alone (but I will if I have to -but ideally not!) So you find yourself still thinking…. 'Ok, I still have no idea what this post is about.' (Yes I am actually a mind reader) Apologies! I'm only human and I'm flawed and I do occasionally get just a little side-tracked and distracted. You're life has value. You were born for a reason. And I will prove it to you. (Yes - I remember whispering this in one beautiful human's ear a couple of days ago. This beautiful human was so selfless and looked out for me when I was not in the best state of self (this hero walked into the female toilets since I was chundering and got kicked out of security guards as a result) (this hero was prepared to take me home on a 1.5 bus ride at like 11pm at night towards a direction which was completely opposite to where he/she lived) (and this hero probably got some of my churned up mix of food and alcohol on them too - soz) (and I apologise again for that other beautiful human who I chundered on their hand -soz not soz - HAHAHA -I do mean it when I say that (now you're probably wondering which part I'm referring to [well you'll never know! Heheee - <3] ) And thank you to you too! You know who you are! I love our long-as text message chats! And that card you wrote me for my 18th last year -those words really touched me Even though we meet up like once (ok I may be using hyperbole here - I'll say twice) a year, You mean the world to me To me, friendships and relationships in general are much more than hanging out in real life, To me, friendships and relationships are more about having that emotional/spiritual connection with another human being To me, friendships and relationships are not defined by physical presence (although I do believe hanging out in real life is nice too - but life sometimes takes us in different directions - and that is not always possible) You may love another person dearly, but that doesn't mean you necessarily have to be together with a physical presence. 
'True' love, in my opinion, is when you genuinely want the best for the other person And being genuinely happy to see them happy Yes that night at Metro Theatre in the city, I got kicked out by security guards within 30 minutes of going inside for a combined university event. I think (and you never trust a drunk person's memory) I had about 11 shots of straight vodka that night (looking back, that was not the best idea) Those security guards who kicked us out were not the nicest people. I know that Deep Deep Deep Deep Deep Down That you guys are beautiful people - just please bring it to the surface and show it to the world You could've been a lot more nicer. After I got kicked out and as I was walking towards Maccas (yas I love you maccas - happy meals were my childhood - why are soft serves $0.75 now? They used to only be $0.30! Inflation is a real thing! That's why I love economics! - I'm expecting a massive surge in economics students both at high school and university heheehee - economics teachers and lecturers - you are very welcome XD) In my drunken and semi-conscious state, I remember vaguely rambling on saying things like Why are people like this? Why are people so mean? Why is the world like this? And probably also crying my chunder out at the same time I was always that good straight A studious nerdy student who always did my homework on time and listened to the teacher in class. I waited till I was 18 until I had my first legal drink. (well I did occasionally have some sips of wine at home over dinner but nothing substantial until I turned 18 -unlike most Asian dads, My dad encouraged me to drink at home - he was more than happy! - you're cool dad xD - just wanted to let you know that) I was at a university first years camp when I had my first drink. I remember feeling sad because the alcohol was way too diluted -and I was too 'heavy-weight' -and I couldn't physically drink that much fluid to feel drunk because I was too full Looking back, I was probably drunk and was probably on the verge of my limit But I didn't know because I've never felt what it was like to be 'drunk' Then about a month and a half later, I went to one of my mate's surprise 18th I wanted to 'test' my 'limit' I drank as many different types of alcohol I could get my hands on Rum Vodka Soju Gin White wine Red wine Whiskey Tequila You Name It (well probs besides Maotai which is $$$$ - and we were all young dumb and broke uni students - yes Khalid I love you) And you can probably guess How my night turned out My face was in the bathroom sink for about 3 hours (well it felt like 10 minutes to me but I've realised my perception is super distorted while under the influence) Thank you to those who accompanied me for the entirety or a part of those 3 hours - I'm sure it didn't make it onto the best nights of your life list I remember feeling so ashamed after. I could not stop thinking about it for at least 3 weeks. My reputation! Like most people who chunder for the first time, I vowed that It Wouldn't Happen Again. (deep inside I knew it would because I just wasn't happy and I knew I would turn to more alcohol to distract myself from that constant emptiness but I didn't see another alternative back then) But my brother and mates weren't at all that 'wrong' when they said something along the lines of That's what they all say. Within a couple of weeks (or months - if that detail matters), I Unsurprisingly Chundered Again. And then I repeated what I said previously. 
And I got the same responses as I did before (kind of like déjà vu) And then the cycle kept repeating itself so many times that I lost count of how many times I chundered Because I stopped caring My 'reputation' was damaged beyond repair anyways And I was happy with the new me (the person who started to care less about what others thought of me) I was always that super good girl who was sweet, nice and 'innocent' (whatever that means) But what does it even mean to be 'innocent?' What's the definition? A lot of my friends had often commented that when they first met me I seemed like an innocent girl then they realised they were 'wrong' like super 'wrong' - completely off Does the fact that I love alcohol And the fact that I've chundered more times than I remember And the fact that I like to squeal at high pitches to the point it may cause long term ear damage (apologies to those people who I have damaged your hearing permanently) And the fact that I really enjoy raves And love waking up to hardstyle music every morning And chucking a phat (someone please explain to me why it's spelt with a 'ph' - I tried googling but I never found an answer - I guess you can't find all the answers to life's problems on google) Muzz To start my day Make me any less 'innocent'? OK so back to that night I got kicked out of Metro Theatre. It was that night when I realised you beautiful humans had my back. And I will forever have yours too. You are all beautiful. And I still remember that night like it was tonight. And I will never forget it. It is around 9pm here where I am in Sydney, Australia right now. There are approximately 7.6 billion people in this world (rounded to 1 decimal place and 2 significant figures - or 'sig figs' - I'm not talking about the dried fruit here) (according to the World Population Clock at 12:18pm yesterday - Sydney time) I may just be one girl. But one girl can change the world. If you don't believe me, I will prove it to you. (200% guarantee Just take a screenshot of this message When you visit me in gaol/jail [depending on where you live in the world] Effective for one year within today HAHAHA in case you haven't realised already, I'm only kidding) Why must we rely on legal systems and laws to protect ourselves from lies? Why can't we rely on trust instead? I realise that it's probably impractical to scrap our legal systems together -but I do reckon mixing a bit of 'trust' into the mixture won't hurt And I am aware that I live in a hole (not literally) I have lived in Sydney, Australia for most of my life Which I know is not representative of the entire world. Some of the things I talk about may make absolutely no sense to you. But I only humbly ask that you take a moment to understand what some of your fellow peers on the other side of the globe go through on a daily basis or have experienced Even if it is super foreign to you. (If you check up on the news on a regular basis, This should be no different I guess But probs maybe just a bit more 'spicy' and realistic) I'm sure you would like to same favour (or should I say flavour HAHHAH - gosh I'm so funny!) to be returned to you. Can I count on you guys (and the entire female population - I don't know why it's normal to say 'guys' for both genders) to have a read of what I have to say first And try not to act on any prejudice or judgement Before you decide to shut it down? 
Yeah, sorry, I got a little side-tracked again So… The only thing that we know for certain in life is that all of you reading this right now and myself will DIE. So what is the point of staying alive now if it's all going to come to an end? Why are we living to die instead of dying to live? All of us have a mother. (assuming you are all humans like me and started with 'something' that happened between a male and female) I love my mum. Without my mum I wouldn't be here tonight. Without my mum I wouldn't have the opportunity to connect with you tonight. Without my mum you wouldn't be reading this tonight. In Sydney, Australia, Today is Mother's Day. And it's no coincidence that I've chosen this day to connect with you. This is because today we show our appreciation for the beautiful and incredible woman who brought us into this world, whether she is here with you or not today. Today, we show our appreciation to the woman who sucked up the discomfort of having a massive bulge sticking out of her belly for 9 months. Today, we show our appreciation to the woman who suffered physical pain and bleed from childbirth. I don't think there can be any other pain greater than the pain of childbirth (well I haven't given birth so I guess I'm not qualified to say so) (Yes the cute baby photo was specifically chosen to capture your attention) Today, we show our appreciation to the woman who blessed us with a life full of opportunity. Mother's Day is today, in Australia. Why are we on social media? And I am no hypocrite here. Why am I myself on social media tonight? Why have we felt the need to create a 'Day' for all our 'Mothers' out there? Is it because, without a 'Mother's Day,' we will forget to love our 'Mothers'? Shouldn't our mothers be appreciated every single day? (Same for all the 'Father's' out there!!! I love you Dad) In the past, all I did for Mother's Day was go to the shops and buy a box of chocolates or some flowers or whatever was on "Mother's Day Sale." But I've realised there are many things that Money Cannot Buy. (feel free to prove me wrong here) There are many things that cannot be Bought And Sold Based on demand and supply on a Market (Yes I love economics!!!) Love. Time. Purpose. Faith. Hope. Life. The List Goes On And On . . . In my humble opinion, I feel like some meaningful celebrations have been overly commercialised in some 'developed' countries. I feel like Christmas Day is more about buying presents and decorating the Christmas tree. I feel like Easter Day is about eating chocolate shaped in an oval egg shape (or bunny or whatever fancy shape chocolate is moulded into to make it more appealing to buy and eat and make it seem different but at the end of the day it's just chocolate - well maybe different in the sense that it has differing percentages of cocoa content - I'm personally a big fan of dark chocolate! - I reckon 70% is just 'perfect' - well just 'right' - because nothing is 'perfect' but also nothing is 'right' - so yeah, I just contradicted what I just said). I feel like ANZAC Day is more about eating ANZAC cookies and buying things with the Australian flag printed on it. And I feel like Chinese New Year is more about receiving free money from relatives (as long as you are unmarried). Now, I'm not suggesting that you should all divorce or remain single for life and go become Chinese. I'm just telling you about my 'blood nationality' and our culture. 
Also, while we're on the topic of marriage, I am not at all against marriage (I think marriage is wonderful and Western white wedding dresses are super beautiful on brides), in my humble opinion, I don't really understand the point of marriage? To me, Love is about remaining loyal both physically and emotionally to another human of our own choosing (in my opinion, regardless of gender). Personally, I don't see the need to have my 'love' with another human solidified by the legal system under a notion called 'marriage.' I believe if we truly 'love' another person, We should be able to trust them to remain loyal (both emotionally and physically) to us without protection under the legal system And live together happily ever after (Yes I'm a big dreamer and lover of Disney and I believe in happily ever after fairytale endings with my Prince HEEEHEHEE) And, while we're on the topic of Princes and Princesses and fairytale endings, (I know we all love a good romance on such a dark, romantic night here in Australia and most stories told through mediums such as books and movies tend to have at least a touch of love in them And some have a bigger focus than others *Cough* *Cough* Shakespeare's Romeo and Juliet) One of my favourite TV shows (back in the day I still used to watch TV) was the Bachelor/Bachelorette <3 But now I prefer to live in my own reality TV show instead of watching another's on an electronic screen To my Prince out there, (yes you know who you are) Who wishes not to be named (and shamed - hahah just kidding - Well, hopefully you don't find what I'm about to say to be too embarrassing) The way I fundamentally feel towards you has not changed one bit And I'm not talking about hate here (jokes! I lied! I actually feel even stronger towards you now <3) And gosh, No other human on Earth has ever made me cry as many times as you have. No one can compete with how many rivers on Earth I've filled with my salty tears. (everyone else reading this, please don't try to break the Guinness World Record here - I reckon I've had my fair share of tears and breakdowns) And I meant it when I said nobody has ever made me feel this way. (or something like I've never felt this way towards somebody - or the other way around - well I guess that's not important) (and well I guess it does make sense that everybody feels differently towards each person because they're different people) -that paragraph was very coherent - I know I've already told you this directly but repetition surely doesn't hurt! Thank you for always considering what is best for me in everything you've done. (Well I hope that's what you've been doing - only you know what's going inside that interesting head of yours) Thank you for teaching me the importance of honest and open communication. I would never forget that night when you asked me out in the most romantic location one could possibly think of. (Solid memz) (And great place IF we have any future anniversaries) Thank you for all the 'fun' experiences we've shared together (Yes you know which one I'm referring to in particular ;)) I hope we have many more nights just like that (well maybe just a bit more) You're a Tim Tam Because You're Simply Irresistible And you know which Guinness World Record of mine (or personal best) I would like to break ;) (please don't go finding another planet to live on to get away from me) And I love how we always go 'hunting' for the same places when we're out and about in public ;))))) I also would like to say that I miss you. A lot. 
<3 (AWWWWW) And I've been thinking about you A lot. (AWWWW) And Just like how I've previously never envisioned a life without a uni degree till this Monday, I've never been able to envision a life without you in it (and I probably won't be able to - but nothing is certain besides death - so I could be wrong I guess) I was never quite a full believer in soul mates Until I met you There was always a 'mystical' feeling I felt around you. I never understood what it was Until now I thought it was just 'lust' Or you were just secretly a 'fuckboi' (whatever that means) But I realised it was much more than that. OK, that's the last (massive) chunk of cheese I'm feeding you guys (for tonight). And I'm sure the rest of you have eaten enough cheese for the day. And I don't want to make you puke tonight. Because that's not my job -That's the job of your significant other <3 I don't know what you were expecting when I messaged you yesterday asking for your permission to have your first name in my 'story.' Well, since you said no, I assume you probably weren't expecting this. (man I had some great jokes I wanted to crack with your first name - GRRRRR) But again, as I have already told you, In this life, If we would like to have a nice and healthy relationship, We must accept the fact that we have the right to both reject and be rejected by others. And others hurt us but we also hurt others. That's just part of life. So, I respect your decision. I had to get that off my chest. Because now, When I'm on my deathbed, I don't have to be wondering what could've been had I chosen to tell you. Instead, When I'm on my deathbed, I can spend my last hours reflecting on what a wonderful life it's been Surrounded by my family and closest friends. Now, I've done everything I possibly could within my control. Now, it's all on you now. And please respect how it's a private matter between us two from now on. Your own love lives are much more interesting than mine. Trust me. Why would you want to see how someone else's story ends (or starts) when you can be writing your own 'story?' So go out there and tell that person you've been wanting to tell how you feel how you've felt all along! Be a girl! Growing up, it was always drilled into me that guys should be the ones chasing girls and girls should not chase guys. And that girls should play 'hard to get' Wouldn't life be so much simpler if you start feeling like you like someone, To say something along the lines of: "Hey. I like you. Do you feel the same way?" Then it can either only go one or two ways (Well we all hope it goes one particular way) And then you can move on happily with life and find someone else who also feels the same way and live happily ever after (well unless you're super unlucky and get a fence sitter And apologies, if that's the case, I don't have any further advice for you - you're on your own then xD) I used to think that expressing my emotions was a sign of weakness. I was 'wrong' (whatever it means to be 'wrong' or 'right') But I've realised it actually takes a lot of courage. It takes a lot of courage to tell someone that you feel hurt by something they've done. It takes a lot of courage to tell someone that you love them. But, in my opinion, by telling others how we feel, It actually liberates us. 
It allows us to make amends Instead of letting resentment build And then exploding later Like our own internal Big Bang Because in my Theory (I guess you can call it the Big Bang Theory), believe me, in my experience, I have exploded many times (not literally) By letting my resentment build (under the influence [heavy] of alcohol) If you don't believe me, Believe Bronnie Ware!! For those of you who don't know Bronnie, She worked as a palliative nurse for 8 years looking after people in their final days alive. And she writes in her book "The Top Five Regrets of the Dying," That one of the top 5 regrets she heard from people with limited time on Earth was that they wished they had the courage to express their own emotions. I used to put on a face and act like something that really hurt me didn't affect me at all. I don't understand why I aspired to be a 'psychopath.' Because a key characteristic of a 'psychopath' is that they feel no emotions. Our ability to feel emotions, whether that be: Happiness Disappointment Joy Anger Resentment Love Is what makes us human. Why do we attempt to 'dehumanise' ourselves? So back to marriage…. Again, I am not against marriage. Well, even if I am, why should you care? It's your life and you choose and how you would like to live it. And believe me, in my humble opinion, life is too short for you to spend a couple of minutes writing a nasty comment trying to convince me of the importance of marriage. (Well if you decide to do so, I'm absolutely honoured! because it means I'm super important to you because you care a lot about what I think) But for me personally, I would just like to wear a nice white pretty long wedding dress for fun and take some photos around my closest family and friends Anyways, got a little side tracked again. Back to the topic: I know that many of us struggle or have struggled to find meaning in life. I'm one of them. And I'll be sharing my story with you. I know if I don't wake up tomorrow, I can Rest In Peace. Apologies, if I have generalised or made false assumptions in parts of my 'story' by using words like "We." I know that there is no other certainty besides death. But sometimes, it is 'easier' to do so to illustrate a point I'm trying to make. I hope you understand. If you don't like what I have to say, you can either (Mark Manson): 1. Do nothing OR 2. Do something I value all opinions and perspectives. I only ask that you do so in a courteous and respectful manner. Growing up, my dad was always the logical one and less of a 'dreamer' than I was. I tried having D&M (Deep and Meaningful conversations) with my Dad but they never turned out the way I hoped. 'Dad, what do you think the meaning of life is?' 'There's no meaning. You live. You die. That's it.' Wow! So optimistic Dad!! I love you Dad! Growing up, you also 'tried' (and I use the word 'tried' because you weren't that successful in doing so) to drill into me that it was a waste of time and energy to 'care too much' about the world Because you said there's nothing I can do about it. I just have to accept life the way it is. Well, back to Mark Manson's two options, You can probably guess which path I decided to take (and it wasn't to accept it I Refuse to accept the world as it is) To all my fellow peers out there, If I have offended you, please let me know. I am not perfect. I don't try to be perfect. And I don't need to be perfect. 
And as much effort as I've put it and how hard I've tried to minimise resentment and offense, (Just like how I'm trying to be at the minimum point on the parabola And at the maximum point on the parabola with my impact) I'm only human. And so are you. And to further illustrate my point that nothing in this world is 'perfect' (apologies if this sounds like an essay), My 'story' is not fully edited. I've ran through it once - made some changes and this is what you're reading now. There are errors. There are bits repeated. There are bits that make no sense whatsoever. This is to further highlight my belief that nothing in the world is 'perfect' (or the real reason could just be that I'm lazy and cbbs editing it) LOL DISCLAIMER: I do not accept any legal responsibility for any tears shed Or any laughs shared Or any puke vomited from cheese overload in the process of reading my 'story.' (Oh and in case you haven't realised already It's also R rated And if you don't know what that means Adults only!! - just kidding, anyone can read my 'story') I reckon that our mental state would be a better measure of our 'real age' Because our age is just a 1, 2 (or 3) (or 4) (or more) digit number which doesn't indicate anything about our 'maturity' level (whatever that means) nor our 'wisdom' (whatever that means) You are reading at your own risk. Remember It's YOUR own life. And YOU choose how to live it. (Please show appreciation for the fact that I've been nice and have made this disclaimer at a font size that you can actually see) [Tip: Get a box of tissues ready (don’t worry if you don’t know what tissues are - they just help absorb our tears) You can live without them! Actually we can live without a lot of things If my house was on fire, i know what i would choose to take - nothing at all - nothing but myself and my family - I slept in a room with nothing [not literally] but a mattress laid on top of the carpet on the floor with a blanket, pillow, oxygen, walls, life and I was clothed too] And in case you were wondering, I didn't choose to do that for fun. My house was under renovations for a couple of weeks (we repainted the entire house and changed the entire carpet) And during those two weeks, I felt like I was 'homeless' I can't imagine what it's like to actually be sleeping out in the open on the streets Or being a refugee I felt like I was being kicked out of my own dwelling and I didn't belong - I felt lost and very uncomfortable OK, so here's my 'story'. https://leeannchn.wixsite.com/dietolive/single-post/2018/05/13/Lets-Not-Live-To-Die-but-Die-To-Live
0 notes
clarenceomoore · 7 years ago
Text
Voices in AI – Episode 23: A Conversation with Pedro Domingos
Today's leading minds talk AI with host Byron Reese
In this episode Byron and Pedro Domingos talk about the master algorithm, machine creativity, and the creation of new jobs in the wake of the AI revolution.
Visit VoicesInAI.com to access the podcast, or subscribe now: iTunes | Play | Stitcher | RSS
Byron Reese: Hello, this is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today I'm excited: our guest is none other than Pedro Domingos, professor at the University of Washington, notable for his book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Pedro, welcome to the show.
Pedro Domingos: Thanks for having me.
What is artificial intelligence?
Artificial intelligence is getting computers to do things that traditionally require human intelligence, like reasoning, problem solving, common sense knowledge, learning, vision, speech and language understanding, planning, decision-making and so on.
And is it artificial in the sense that artificial turf is artificial, in that it isn't really intelligence, it just looks like intelligence? Or is it actually, truly intelligent, and the "artificial" just denotes that we created it?
That’s a fun analogy. I hadn’t heard that before. No, I don’t think AI is like artificial turf. I think it’s real intelligence. It’s just intelligence of a different kind. We’re used to thinking of human intelligence, or maybe animal intelligence, as the only intelligence on the planet. What happens now is a different kind of intelligence. It’s a little bit like, does a submarine really swim? Or is it faking that it swims? Actually, it doesn’t really swim, but it can still travel underwater using very different ideas. Or, you know, does a plane fly even though it doesn’t flap its wings? Well, it doesn’t flap its wings but it does fly – and AI is a little bit like that. In some ways, actually, artificial intelligence is intelligent in ways that human intelligence isn’t. There are many areas where AI exceeds human intelligence, so I would say that they’re different forms of intelligence, but it is very much a form of intelligence.
And how would you describe the state of the art, right now?
So, in science and technology progress often happens in spurts. There are long periods of slow progress and then there are periods of very sudden, very rapid progress. And we are definitely in one of those periods of very rapid progress in AI, which was a long time in the making. AI is a field that’s fifty years old, and we had what was called the “AI spring” in the 80s, where it looked like it was going to really take off. But then that didn’t really happen at the end of the day, and the problem was that people back then were trying to do AI using what’s called “knowledge engineering.” If I wanted an AI system to do medical diagnosis, I had to interview doctors and program, you know, the doctor’s knowledge of diagnosis and the formal rules into the computers, and that didn’t scale.
The thing that has changed recently is that we have a new way to do AI, which is machine learning. Instead of trying to program the computers to do things, the computers program themselves by learning from data. So now what I do for medical diagnosis is I give the computer a database of patient records, what their symptoms and test results were and what the diagnosis was, and from just that, in thirty seconds, the computer can typically learn to do medical diagnosis better than human doctors. So, thanks to that, thanks to machine learning, we are now seeing a phase of very rapid progress. Also because the learning algorithms have gotten better, and very importantly – the beauty of machine learning is that, because the intelligence comes from the data, as the data grows exponentially the AI systems get more intelligent with essentially no extra work from us. So now AI is becoming very powerful, just on the back of the sheer weight of data that we have.
The other element, of course, is computing power. We need enough computing power to turn all that data into intelligent systems, but we do have that now. So the combination of learning algorithms, a lot of data, and a lot of computing power is what is making the current progress happen.
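For readers who want to see what “learning from data” looks like in practice, here is a minimal sketch in Python using scikit-learn. The patient records, feature names, and diagnoses below are invented toy data, and the model choice is arbitrary; this illustrates the approach Domingos describes, not any real diagnostic system.

from sklearn.tree import DecisionTreeClassifier

# Each row is one (made-up) patient: [fever, cough, chest_pain] as 0/1.
records = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 1],
]
diagnoses = ["flu", "pneumonia", "cold", "reflux"]  # one label per record

model = DecisionTreeClassifier()
model.fit(records, diagnoses)       # the learning step, instead of hand-coded rules

new_patient = [[1, 1, 1]]
print(model.predict(new_patient))   # e.g. ['flu']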
And, how long do you think we can ride that wave? Do you think that machine learning is the path to an AGI, hypothetically? I mean, do we have ten, twenty, forty more years of running with kind of the machine learning ball? Or do we need another kind of breakthrough?
I think machine learning is definitely the path to artificial general intelligence, and I think there are few people in AI who would disagree with that. You know, your computer can be as intelligent as you want; if it can’t learn, thirty minutes later it will be falling behind humans. So, machine learning really is essential to getting to intelligence. In fact, the whole idea of the singularity came from I.J. Good back in the 1960s, who had this idea of a learning machine that could make a machine that learned better than it did. As a result of which you would have this succession of better and better, more and more intelligent machines until they left humans in the dust. Now, how long will it take? That’s very hard to predict, precisely because progress is not linear. I think the current bloom of progress at some point will probably plateau. I don’t think we’re on the verge of having general AI. We’ve come a thousand miles but there’s a million miles more to go. We’re going to need many more breakthroughs, and who knows where those breakthroughs will come from.
In the most optimistic view, maybe this will all happen in the next decade or two, because things will just happen one after another, and we’ll have it very soon. In the more pessimistic view, it’s just too hard and it’ll never happen. If you poll the AI experts, the typical answer is that it’s going to be several decades. But the truth is nobody really knows for sure.
What is kind of interesting is not that people don’t know, and not that their forecasts are kind of all over the map, but that among knowledgeable people, if you look at the extreme estimates, five years is the most aggressive, and then the furthest out are like five hundred years. And what does that suggest to you? You know, if I went to my cleaners and said, “Hey, when is my shirt going to be ready?” and they said, “Sometime between five and five hundred days,” I would be like, “Okay… something is going on here.” Why do you think the opinions are so variant on when we get an AGI?
Well, the cleaners, when they clean your shirt, it’s a very well-known, very repeatable process. They know how long it takes and it’s going to take the same thing this time, right? There are very few unknowns. The problem in AI is that we don’t even know what we don’t know. We have no idea what we’re missing, so some people think we’re not missing that much. Those are the optimists, saying, “Oh, we just need more data.” Right? Back in the 80s they said, “Oh, we just need more knowledge.” And then, that wasn’t the case, so that’s the optimistic view. The more pessimistic view is that this is a really, really hard problem and we’ve only scratched the surface, so the uncertainty comes from the fact that we don’t even know what we don’t know.
We certainly don’t know how the brain works, right? We have vague ideas of kind of like what different parts of it do, but in terms of how a thought is encoded, we don’t know. Do you think we need to know more about our own intelligence to make an AGI, or is it like, “No, that’s apples and oranges. It doesn’t really matter how the brain works. We’re building an AGI differently.”
Not necessarily. So, there are different schools of thought in AI, and this is part of what I talk about in my book. There is one thought, one school of thought in AI – the Connectionists – whose whole agenda is to reverse engineer the brain. They think that the shortest path is, you know, “here’s the competition, go reverse engineer it, figure out how it works, build it on the computer, and then we’ll have intelligence.” So that is definitely a plausible approach. I think it’s actually a very difficult approach, precisely because we understand so little about how the brain works. In some ways maybe it’s trying to solve a problem by way of solving the hardest of problems. And then there are other AI types, namely the Symbolists, whose whole idea is, “No, we don’t need to understand things at that low level. In fact, we���re just going to get lost in the weeds if we try to do that. We have to understand intelligence at a higher-level abstraction and we’ll get there much sooner that way. So forget how the brain works, that’s really not important.” Again, the analogy with the brains and airplanes is a good one. What the Symbolists say is, “If we try to make airplanes by building machines that will flap their wings we’ll never have them. What we need to do is understand the laws of physics and aerodynamics and then build machines based on that.” So there are different schools of thought. And I actually think it’s good that there are different schools of thought and we’ll see who gets there first.
So, you mentioned your book, The Master Algorithm, which is of course required reading in this field. Can you give the listener who may not be as familiar with it an overview of what the master algorithm is? What are we looking for?
Yeah, sure. So the book is essentially an introduction to machine learning for a general audience. So not just for technical people, but business people, policy makers, and just citizens who are curious. It talks about the impact that machine learning is already having in the world. A lot of people think that these things are science fiction, but they are already in their lives; they just don’t know it. It also looks at the future and what we can expect coming down the line. But mainly, it is an introduction to what I was just describing: that there are five main schools of thought in machine learning. There’s the people who want to reverse engineer the brain; the ones who want to simulate evolution; the ones who do machine learning by automating the scientific method; the ones who use Bayesian statistics; and the ones who do reasoning by analogy, like people do in everyday life. And then I look at what these different methods can and can’t do.
The name The Master Algorithm comes from the notion that a machine learning algorithm is a master algorithm in the same sense that a master key opens all doors. A learning algorithm can do all sorts of different things while being the same algorithm. This is really what’s extraordinary about machine learning: in traditional computer science, if I want the computer to play chess, I have to write a program explaining how to play chess; and if I want it to drive a car, I have to write a program explaining how to drive a car. With machine learning the same learning algorithm can learn to play chess or drive a car or do a million different other things, just by learning from the appropriate data. And each of these tribes of machine learning has its own master algorithm. The more optimistic members of each tribe believe that you can do everything with that master algorithm. My contention in the book is that each of these algorithms is only solving part of the problem. What we need to do is unify them all into a grand theory of machine learning, in the same way that physics has a standard model and biology has a central dogma. And then, that will be the true master algorithm. And I suggest some paths towards that algorithm, and I think we’re actually getting pretty close to it.
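The “same algorithm, different data” point is easy to see in code. Below is a hedged sketch: one classifier class, unchanged, learns two unrelated toy problems simply because it is fed different datasets. Both datasets are invented stand-ins for the chess and driving examples.

from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for a chess task: invented "position features" -> evaluation.
chess_X = [[0, 1], [1, 0], [1, 1], [0, 0]]
chess_y = ["good", "bad", "good", "bad"]

# Toy stand-in for a driving task: invented "sensor readings" -> action.
driving_X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
driving_y = ["brake", "go", "brake", "go"]

for X, y in [(chess_X, chess_y), (driving_X, driving_y)]:
    model = KNeighborsClassifier(n_neighbors=1)  # the same learner both times
    model.fit(X, y)
    print(model.predict([X[0]]))  # what it learned came entirely from the data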
One thing I found empowering in the book, and you state it over and over at the beginning is that the master algorithm is aspirationally accessible for a wide range of people. You basically said, “You, listening to the book, this is still a field where the layman can still have some amount of breakthrough.” Can you speak to that for just a minute?
Absolutely, in fact that’s part of what got me into machine learning, is that, unlike physics or mathematics or biology that are very mature fields and you really can only contribute once you have at least a PhD; computer science and AI and machine learning are still very young. So, you could be a kid in a garage and have a great idea that will be transformative. And I hope that that will happen. I think, even after we find this master algorithm that’s the unification of the five current ones, as we were talking about, we will still be missing some really important, really deep ideas. And I think in some ways, someone coming from outside the field is more likely to find those, than those of us who are professional machine learning researchers, and are already thinking along these tracks of these particular schools of thought. So, part of my goal in writing the book was to get people who are not machine learning experts thinking about machine learning and maybe having the next great ideas that will get us closer to AGI.
And, you also point out in the book why you believe that we know that such a thing is possible, and one of your proof points is our intelligence.
Exactly.
Can you speak to that?
Yeah, so this is, of course, one of those very ambitious goals that people should be at the outset a little suspicious of, right? Is this like the philosopher’s stone or the perpetual motion machine? Is it really possible? And again some people don’t think it’s possible. I think there are a number of reasons why I’m pretty sure it is possible, one of which is that we already have existing proof. One existing proof is our brain, right? As long as you believe in reductionism, which all scientists do, the way your brain works can be expressed as an algorithm. And if I program that algorithm into a computer, then that algorithm can learn everything that your brain can. Therefore, in that sense at least, one version of the master algorithm already exists. Another one is evolution. Evolution created us and all life on Earth. And it is essentially an algorithm, and we roughly understand how that algorithm works, so there is another existing instance of the master algorithm.
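Evolution-as-algorithm is not just a metaphor; it can be sketched in a few lines. The following is a toy genetic algorithm under deliberately simplifying assumptions (bit-string genomes and a made-up fitness function), meant only to show the select, recombine, mutate loop, not to model real biology.

import random

def fitness(genome):
    return sum(genome)  # toy objective: how many bits are 1

def evolve(pop_size=20, genome_len=16, generations=50):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]              # crossover
            i = random.randrange(genome_len)
            child[i] ^= 1                          # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(fitness(evolve()))  # typically at or near 16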
Then, besides these more empirical reasons, there are also theoretical reasons that tell us that a master algorithm exists. One of them is that for each of the five tribes’ master algorithms there’s a theorem that says: if you give enough data to this algorithm, it can learn any function. So, at least at that level, we already know that master algorithms exist. Now the question is how complicated it will be, and how hard it will be to get there. How broadly good would that algorithm be in terms of learning from a reasonable amount of data in a reasonable amount of time?
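Those “learn any function” theorems can be made tangible with a small sketch: a neural network (the connectionists’ workhorse) fit to samples of an arbitrary target function, here sin, chosen arbitrarily. The hyperparameters are illustrative guesses, not tuned values.

import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()             # stand-in for an "unknown" target function

net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0)
net.fit(X, y)                     # learn the function from samples alone

print(net.predict([[1.0]])[0], np.sin(1.0))  # approximation vs. ground truth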
You just said all scientists are reductionist. Is that necessarily the case, like, can you not be a scientist and believe in something like strong emergence, and say, “Actually you can’t necessarily take the human mind down to individual atoms and kind of reconstruct…”
Yeah, yeah, absolutely, so, what I mean… this is a very good point. In fact, in the sense that you’re talking about, we cannot be reductionists in AI. So what I mean by reductionist is just the idea that we can decompose a complex system into simpler, smaller parts that interact and that make up the system. This is how all of science and engineering works. But very much, this does not preclude the existence of emergent properties. So, the system can be more than the sum of its parts, if it’s non-linear. And very much the brain is a non-linear system. And that’s what we have to deal with to reach AI. You could even say that machine learning is the science of emergent properties. In fact, one of the names by which it has been known in some quarters is “self-organizing systems.” And in fact, what makes AI hard, the reason we haven’t already solved it, is that the usual divide-and-conquer strategy that scientists and engineers follow, of dividing problems into smaller and smaller sub-problems and then solving the sub-problems and putting the solutions together, tends not to work in AI, because the subsystems are very strongly coupled together. So, this is a harder problem and there are emergent properties, but that does not mean that you can’t reduce it to these pieces; it’s just a harder thing to do.
Marvin Minsky, I remember, talked about how, you know, we kind of got tricked a little bit by the fact that it takes very few fundamental laws of the universe to understand most of physics. The same with electricity. The same with magnetism. There are very few simple laws to explain everything that happens. And so the hope had been that intelligence would be like that. Are we giving up on that notion?
Yes, so, again there are different views within AI on this. I think at one end there are people who hope we will discover a few laws of AI and those would solve everything. At the other end of the spectrum there are people like Marvin Minsky who just think that intelligence is a big, big pile of hacks. He even has a book that’s like one of these tricks per page, and who knows how many more there are. I think, and most people in AI believe, that it’s somewhere in between. If AI is just a big pile of hacks, we’re never going to get there. And it can’t really be just a pile of hacks, because if the hacks were so powerful as to create intelligence, then you can’t really call them hacks.
On the other hand, you know, you can’t reduce it to a few laws, like Newton’s laws. So the idea of the master algorithm is that, at the end of the day, we will find one algorithm that does intelligence, but that algorithm is not going to be a hundred lines of code. It’s not going to be millions of lines of code either. You know, if the algorithm is thousands or maybe tens of thousands of lines of code, that would be great. It’ll still be a more complex theory, much more complex than the ones we have in physics, but it’ll be much, much simpler than what people like Marvin Minsky envisioned.
And if we find the master algorithm… is that good for humanity?
Well, I think it’s good or bad depending on what we do with it. Like all technology, machine learning gives us more power. You can think of it as a superpower, right? Telephones let us speak at a distance, airplanes let us fly, and machine learning lets us predict things and lets technology adapt automatically to our needs. All of this is good if we use it for good. If we use it for bad, it will be bad, right? The technology itself doesn’t know how it’s going to be used, and part of my reason for writing this book is that everybody needs to be aware of what machine learning is, and what it can do, so that they can control it. Because, otherwise, machine learning will just give more control to those few who actually know how to use it.
I think, if you look at the history of technology, over time, in the end, the good tends to prevail over the bad, which is why we live in a better world today than we did 200 years or 2,000 years ago. But we have to make it happen, right? It just doesn’t fall from the tree like that.
And so, in your view, the master algorithm is essentially synonymous with AGI in the sense that it can figure anything out, it’s a general artificial intelligence. Would it be conscious?
Yeah, so, by the way, I wouldn’t say the master algorithm is synonymous with AGI. I think it’s the enabler of AGI. Once we have a master algorithm we’re still going to need to apply it to vision, and language, and reasoning, and all these things. And then, we’ll have AGI. So, one way to think about this is that it’s an 80/20 rule. The master algorithm is the 20% of the work that gets you 80% of the way, but you still need to do the rest, right? So maybe this is a better way to think about it.
Fair enough. So, I’ll just ask the question a little more directly. What do you think consciousness is?
That’s a very good question. The truth is, what makes consciousness simultaneously so fascinating and so hard is that at the end of the day, if there is one thing that I know, it’s that I’m conscious, right? Descartes said, “I think, therefore I am,” but maybe he should’ve said “I’m conscious, therefore I am.” The laws of physics, who knows, they might even be wrong. But the fact that I’m conscious right now is absolutely unquestionable. So, everybody knows that about themselves. At the same time, because consciousness is a subjective experience, it doesn’t lend itself to the scientific method. What are reproducible experiments when it comes to consciousness? That’s one aspect. The other one is that consciousness is a very complex emergent phenomenon, so nobody really knows what it is, or understands it, even at a fairly shallow level. Now, the reason we believe others have consciousness – you believe that I have consciousness because you’re a human being, I’m a human being, so since you have consciousness I probably have consciousness as well. And this is really the extent of it. For all you know, I could be a robot talking to you right now, passing the Turing test, and not be conscious at all.
Now, what happens with machines? How can we tell whether a machine is conscious or not? This has been grist for the mill of a lot of philosophers over the last few decades. I think the bottom line is that once a computer starts to act like it’s conscious, we will treat it as if it’s conscious; we will grant it consciousness. In fact, we already do that, even with very simple chatbots and whatnot. So, as far as everyday life goes, it actually won’t be long. In some ways, people will treat computers as being conscious sooner than computers will become truly intelligent. Because that’s all it takes, right? We project these human properties onto things that act humanly, even in the slightest way.
Now, at the end of the day, if you gaze down into that hardware and those circuits… is there really consciousness there? I don’t know if we will ever be able to answer that question. Right now, I actually don’t see a good way. I think there will come a point at which we understand consciousness well enough, because we understand the brain well enough, that we are fairly confident we can tell whether something is conscious or not. And then at that point I think we will apply those criteria to these machines, and the machines, at least the ones that have been designed to be conscious, will pass the tests, so we will believe that machines have consciousness. But, you know, we can never be totally sure.
And, do you believe consciousness is required for a general intellect?
I think there are many kinds of AI and many AI applications that do not require consciousness. So, for example, if I tell a machine learning system to go solve cancer – that’s one of the things we’d like to do, cure cancer, and machine learning is a very big part of the battle to cure cancer – I don’t think that requires consciousness at all. It requires a lot of searching, and understanding molecular biology, and trying different drugs, maybe designing drugs, etc. So, 90% of AI will involve no consciousness at all.
There are some applications of AI and some types of AI that will require consciousness, or something indistinguishable from it – for example, house bots. We would like to have a robot that cooks dinner and does the dishes and makes the bed and whatnot. In order to do all those things, the robot has to have all the capabilities of a human; it has to integrate all of these senses – vision, touch, perception, hearing and whatnot – and then make decisions based on them. I think this is either going to be consciousness or indistinguishable from it.
Do you think there will be problems that arise if that happens? Let’s say you build Rosie the Robot, and you don’t know, like you said, deep down inside, if the robot is conscious or merely acting as if it is. Do you think at that point we have to have this question of, “Are we fine enslaving what could be a conscious machine to plunge our toilet for us?”
Well, that depends on what you consider enslaving, right? So, one way to look at this, and it’s the way I look at it, is that these are still just machines, right? Just because they have consciousness doesn’t mean that they have human rights. Human rights are for humans. I don’t think there’s such a thing as robot rights. The deeper question here is: what gives something rights? One school of thought is that it’s the ability to suffer that gives you rights, and therefore animals should have rights. But, if you think about it historically, the idea of having animal rights, even 50 years ago, would’ve seemed absurd. So, by the same standard, maybe 50 years from now people will want to have robot rights. In fact, there are some people already talking about it. I think it’s a very strange idea. And often people talk about, oh well, will the machines be our friends or will they be our slaves? Will they be our equals? Will they be inferior? Actually, I think this whole way of framing things is mistaken. You know, the robots will be neither our equals nor our slaves. They will be our extensions, right?
Robots are technology; they augment us. I think it’s not so much that the machines will be conscious, but that through machines we will have a bigger consciousness. In the same way that, for example, the internet already gives us a bigger consciousness than we had when there was no internet.
So, robots lead us to a topic that’s in the news literally every day: the prospect that automation and technological advances will eliminate jobs faster than they can create new ones, or that they will eliminate jobs and replace them with inaccessible kinds of jobs. What do you think about that? What do you think the future holds?
I think we have to distinguish between the near term, by which I mean the next ten years or so, and the long term. In the near term, I think some jobs will disappear, just like jobs have disappeared to automation in the past. AI is really automation on steroids. So I think what’s going to happen in the near term is not so different from what has happened in the past. Some jobs will be automated, so some jobs will disappear. But many new jobs will appear as well. It’s always easier to see the jobs that disappear than the ones that appear. Think for example of being an app developer. There are millions of people today who make a living being app developers. Ten years ago that job didn’t exist. Fifty years ago you couldn’t even imagine that job. Two hundred years ago ninety-something percent of Americans were farmers, and then farming got automated. Today only 2% of Americans work in agriculture. That doesn’t mean that the other 98% are unemployed. They’re just doing all these jobs that people couldn’t even imagine before. I think a lot of that is what’s going to happen here. We will see entirely new job categories appear. We will also see, on a more mundane level, more demand for lots of existing jobs. For example, I think truck drivers should be worried about the future of their jobs, because self-driving cars are coming, so there will be an end point. There are many millions of truck drivers in the US alone; it’s one of the most widespread occupations. But now, what will they do? People say, “Oh, you can’t turn truck drivers into programmers.” Well, you don’t have to turn them into programmers. Think about what’s going to happen: because trucks are self-driving, goods will cost less; because goods cost less, people will have more money in their pockets, and they will spend it on other things, like, for example, having bigger, better houses. And therefore there will be more demand for construction workers, and some of these truck drivers will become construction workers, and so on.
You know, having said all that, I think that in the near term the most important thing that’s going to happen to jobs is actually neither the ones that will disappear nor the ones that will appear: most jobs will be transformed by AI. The way I do my job will change because some parts will become automated, but now I will be able to do more things, and do them better, than I could before I had the automation. So, really, the question everybody needs to think about is: what parts of my job can I automate? Really the best way to protect your job from automation is to automate it yourself, and then ask what you can do using these machine learning tools.
Automation is like having a horse. You don’t try to outrun a horse; you ride the horse. And we have to ride automation, to do our jobs better and in more ways than we can now.
So, it doesn’t sound like you’re all that pessimistic about the future of employment?
I’m optimistic, but I also worry. I think that’s a good combination. I think if we’re pessimistic we’ll never do anything. Again, if you look at the history of technology, the optimists at the end of the day are the ones who made the world a better place, not the pessimists. But at the same time, we need to… naïve optimism is very dangerous, right? We need to worry continuously about all the things that could go wrong and make sure that they don’t go wrong. So I think that a combination of optimism and worry is the right one to have.
Some people say we’ll find a way to merge, mentally, with the AI. Is that even a valid question? What do you think of it?
I think that’s what’s going to happen. In fact, it’s already happening. We are going to merge with our machines step by step. You know, a computer is a machine that is closer to us than a television. A smartphone is closer to us than a desktop is, and the laptop is somewhere in between. And we’re already starting to see things such as Google Glass and augmented reality, where in essence the computer is extending our senses, and extending our power to do things. And Elon Musk has this company that is going to create an interface between neurons and computers, and in fact, in research labs this already exists. I have colleagues that work on that. They’re called brain-computer interfaces. So, step by step, right? The way to think about this is: we are cyborgs, right? Human beings are actually the cyborg species. From day one, we were of one with our technology. Even our physiology would be different if we couldn’t do things like light fires and throw spears. So this has always been an ongoing process. Part of us is technology, and that will become more and more so in the future. Also, with things like the Internet, we are connecting ourselves into something bigger. Humanity itself is an emergent phenomenon, and having the Internet and computers allows a greater level to emerge. Exactly how this happens and when, of course, is up for grabs, but that’s the way things are going.
So, you mentioned in passing a minute ago the singularity. Do you believe that that is what will happen as is commonly thought? That there is going to be this kind of point in the reasonably near future from which we cannot see anything beyond it because we don’t have any frame of reference?
I don’t believe a singularity will happen in those terms. So this idea of exponentially increasing progress that goes on forever… that’s not going to happen because it’s physically impossible, right? No exponential goes on forever. It always flattens out sooner or later. All exponentials are really what are called “S curves” in disguise. They go up faster and faster, and this is how all previous technology waves have looked, but then they flatten out, and finally they plateau. Also, this notion that at some point things will become completely incomprehensible for us… I don’t believe that either, because there will always be parts that we understand, number one, and there are limits to what any intelligence can do – human or non-human.
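The “S curves in disguise” point corresponds to the logistic function: early on it is numerically indistinguishable from exponential growth, and then it saturates. A small sketch, with constants that are arbitrary rather than modeled on anything real:

import math

def logistic(t, ceiling=1000.0, rate=1.0, midpoint=10.0):
    # ceiling / (1 + e^(-rate * (t - midpoint)))
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 4):
    print(t, round(logistic(t), 2))
# Early values grow by a near-constant factor per step (exponential-looking);
# later values crowd up against the ceiling (the plateau).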
By that standard, the singularity has already happened. A hundred years ago the most advanced technology was maybe something like a car, right? And I could understand every part of how a car works, completely. Today we already have technology, like the computer systems that we have today, where nobody understands the whole system. Different people understand different parts. And with machine learning in particular, the thing that’s notable about machine learning algorithms is that they can do very complex things very well, and we have no idea how they’re doing them. And yet, we are comfortable with that, because we don’t necessarily care about the details of how it is accomplished; we just care whether the medical diagnosis was correct, or the patient’s cancer was cured, or the car is driving correctly. So I think this notion of the singularity is a little bit off.
Having said that, we are in the middle of one of these S curves. We are seeing very rapid progress, and by the time this has run its course, the world will be a very, very different place from what it is today.
How so?
All these things that we’ve been talking about. We will have intelligent machines surrounding us – not just humanoid machines, but intelligence on tap, right? In the same way that today you can use electricity for whatever you want just by plugging into a socket, you will be able to plug into intelligence. And indeed, the leading tech companies are already trying to make this happen. So there will be all these things that the greater intelligence enables. Everybody will have a home robot in the same way that they have a car. We will have this whole process that the Internet is enabling, and that the intelligence on top of the Internet is enabling, and the Internet of Things, and so on. There will be something like this larger emergent being, if you will, that’s not just individual human beings or just societies. But again, it’s hard to picture exactly what that would be, but this is going to happen.
You know, it always makes the news when an artificial intelligence masters some game, like, we all know the list. You had chess, and then you had Jeopardy, of course, and then you had AlphaGo, and then recently you had poker. And I get that games are kind of a natural place, because I guess it’s a confined universe with very rigid, specific rules, and then there’s a lot of training data for teaching it how to function in that. Are there types of problems that machine learning isn’t suited to solve? I mean, just kind of philosophically, it doesn’t matter how good your algorithms are, or how much data you have, or how fast a computer is; it’s not the way to solve that particular problem.
Well, certainly some problems are much harder than others, and, as you say, games are easier in the sense that they are these very constrained, artificial universes. And that’s why we can do so well in them. In fact, the summary of what machine learning and AI are good for today is that they are good for these tasks that are somewhat well-defined and constrained.
What people are much better at are things that require knowledge of the world, they require common sense, they require integrating lots of different information. We’re not there yet. We don’t have the learning algorithms that can do that, so the learning algorithms that we have today are certainly good for some things but not others. But again, if we have the master algorithm then we will be able to do all these things and we are making progress towards them. So, we’ll see.
Any time I see a chatbot or something that’s trying to pass the Turing test, I always type the same first question, which is, “Which is bigger, a nickel or the sun?” And not a single one of them has ever answered it correctly.
Well, exactly. Because they don’t have common sense knowledge. It’s amazing what computers can do in some ways, and it’s amazing what they can’t do in others – like these really simple pieces of common sense logic. In a way, one of the big lessons that we’ve learned in AI is that automating the job of a doctor or a lawyer is actually easy. What is very hard to do with AI is what a three-year-old can do. If we could have a robot baby that can do what a one-year-old can do, and learn the same way, we would have solved AI. It’s much, much harder to do those things – things that we take for granted, like picking up an object, for example, or walking around without tripping. We take this for granted because evolution spent five hundred million years developing it. It’s extremely sophisticated, but for us it’s below the conscious level. The things that we are conscious of and that we have to go to college for – well, we’re not very good at them; we just learned to do them recently. Those, the computers can do much better.
So, in some ways in AI, it’s the hard things that are easy and the easy things that are hard.
Does it mean anything if something finally passes the Turing test? And if so, when do you think that might happen? When will it say, “Well, the sun is clearly bigger than a nickel.”
Well, with all due respect to Alan Turing, who was a great genius and an AI pioneer, most people in AI, including me, believe that the Turing test is actually a bad idea. The reason the Turing test is a bad idea is that it confuses being intelligent with being human. This idea that you can prove you’re intelligent by fooling a human into thinking you’re a human is very weird, if you think about it. It’s like saying an airplane doesn’t fly until it can fool birds into thinking it’s a bird. That doesn’t make any sense. True intelligence can take many forms, not necessarily the human form. So in some ways we don’t need to pass the Turing test to have AI. And in other ways the Turing test is too easy to pass, and by some standards it has already been passed, by systems that no one would call intelligent. Talking with someone for five minutes and fooling them into thinking you’re a human is actually not that hard, because humans are remarkably adept at projecting humanity onto anything that acts human. In fact, even in the 60s there was this famous program called ELIZA, that basically just picked up keywords in what you said and gave back canned responses. And if you talked to ELIZA for five minutes you’d actually think that it was a human.
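ELIZA’s trick really was about that simple. Here is a minimal sketch in the same spirit – keyword matching plus canned responses – offered as an illustration of the idea, not as Weizenbaum’s actual script:

import random

RULES = {
    "mother": ["Tell me more about your family."],
    "dream":  ["What does that dream suggest to you?"],
    "sad":    ["Why do you feel sad?"],
}
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def eliza_reply(text):
    # Pick up a keyword if one appears; otherwise fall back to a canned reply.
    for keyword, responses in RULES.items():
        if keyword in text.lower():
            return random.choice(responses)
    return random.choice(DEFAULTS)

print(eliza_reply("I had a strange dream about my mother."))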
Although Weizenbaum’s observation was, even when people knew ELIZA was just a program, they still formed emotional attachments to it, and that’s what he found so disturbing.
Exactly, so human beings have this uncanny ability to treat things as human, because that’s the only reference point that we have, right? This whole idea of reasoning by analogy: if we have something that behaves even a little bit like a human – because there’s nothing else in the universe to compare it to – we start treating it more like a human and projecting more human qualities onto it. And, by the way, this is something that will happen once companies start making bots… it’s already happening with chatbots, like Siri and Cortana and whatnot, and it’ll happen even more so with home robots. There’s going to be a race to make the robots more and more humanlike, because if you form an emotional attachment to my product, that’s what I want, right? I’ll sell more of it, and at a higher price, and so on and so forth. So, we’re going to see uncannily humanlike robots and AIs – whether this is a good or a bad thing is another matter.
What do you think creativity is? And wouldn’t an AGI, by definition, be creative? It could write a sonnet, or…
Yeah, so… an AGI by definition would be creative. One thing that you hear a lot these days, and that unfortunately is incorrect, is that, “Oh, we can automate these menial, routine jobs, but creativity is this deeply human thing that will never be automated.” And this is a superficially plausible notion, when in fact there are already examples, for instance, of computers that can compose music. There is this guy, David Cope, a professor at UC Santa Cruz, who has a computer program that will create music in the style of the composer of your choice. And he does this test where he plays a piece by Mozart, a piece by a human composer imitating Mozart, and a piece by his system, and he did this at a conference that I was at, and asked people to vote for which one was the real Amadeus. The real one won, but second place was actually the computer. So a computer can already write Mozart better than a professional, highly educated human composer can. Computers have made paintings that are actually quite beautiful and striking, many of them. Computers these days write news stories. There’s this company called Narrative Science that will write news stories for you, and the likes of Forbes or Fortune – I forget which one it is – actually publish some of the things that they write. So it’s not a novel yet, but we will get there. And also, in other areas, like for example chess and AlphaGo, there are notable examples. Both Kasparov and Lee Sedol, when they were beaten by the computer, had this remarkable reaction, saying, “Wow, the computer was so creative. It came up with these moves that I would never have thought of, that seemed dumb at first but turned out to be absolutely brilliant.” And computers have done things in mathematics – theorems and proofs, etc. – all of which, if done by humans, would be considered highly creative. So, automating creativity is actually not that hard.
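For flavor, one classic and very simple way to imitate a musical style – not Cope’s actual method, which was far more sophisticated – is a Markov chain over notes learned from a corpus. A toy sketch with an invented melody:

import random
from collections import defaultdict

corpus = ["C", "E", "G", "E", "C", "G", "A", "G", "E", "C"]  # invented melody

# Learn which note tends to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start="C", length=8):
    note, melody = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, corpus))
        melody.append(note)
    return melody

print(generate())  # e.g. ['C', 'G', 'E', 'C', 'E', 'G', 'A', 'G']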
It’s funny, when Kasparov first said it seemed creative, what he was implying was that IBM cheated, that people had intervened. And IBM hadn’t. But, that’s a testament to just how…
There were actually two phases, right? He said that at first; he was suspicious because, again, how could something not human actually be doing that? But then later, after the match, when he had lost and so on – if you remember, there was this move that Deep Blue made that seemed like a crazy move, and of course it only made sense, you know, five moves later. And Kasparov had this expression; he said, “I could smell a new kind of intelligence playing against me.” Which is very interesting for us AI types, because we know exactly what was going on, right? It was these, you know, search algorithms and a whole bunch of technology that we understand fairly well. It’s interesting that from the outside this just seemed like a new kind of intelligence, and maybe it is.
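The “search algorithms” behind a system like Deep Blue are, at their core, game-tree search. A bare-bones minimax sketch follows, with a trivial made-up game attached so it runs; Deep Blue’s real evaluation and pruning were vastly more elaborate:

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # Score `state` by searching `depth` plies ahead; the three callbacks
    # (`moves`, `apply_move`, `evaluate`) make this game-agnostic.
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in options]
    return max(scores) if maximizing else min(scores)

# Trivial demo game: each move adds 1 or 2 to a counter, higher is better
# for the maximizing player, and the game stops once the counter reaches 5.
best = minimax(0, 3, True,
               moves=lambda s: [1, 2] if s < 5 else [],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # 5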
He also said, “At least it didn’t enjoy beating me.” Which I guess someday it may, right?
Oh, yeah, yeah! And you know, that could happen, depending on how we build them, right? The other very interesting thing that happened in that match – and again, I think it’s symptomatic – is that Kasparov is someone who always won by basically intimidating his opponents into submission. They just got scared of him, and then he beat them. But Deep Blue couldn’t be intimidated by him; it was just a machine, right? As a result of which Kasparov himself, probably for the first time in his life, became insecure. And after he lost that game, in the following games he actually made mistakes that he would never normally make, because he had suddenly become insecure himself.
Foreboding, isn’t it? We talked about emergence a couple of times. There’s the Gaia hypothesis that maybe all of the life on our planet has an emergent property – some kind of an intelligence that we can’t perceive any more than our cells can perceive us. Do you have any thoughts on that? And do you have any thoughts on if eventually the Internet could just become emergent – an emergent consciousness?
Right, so, I don’t believe in the Gaia hypothesis in the sense that the Earth as it is does not have enough self-regulating ability to achieve the homeostasis that living beings do. In fact, in some cases you get feedback cycles where things actually go very wrong. So, most scientists don’t believe in the Gaia hypothesis for Earth today. Now, what I think – and a lot of other people think is the case – is that maybe the Gaia hypothesis will be true in the future, as the Internet expands, and the Internet of Things puts sensors all over the place, literally all over the planet, and more and more actions are taken based on those sensors to, among other things, preserve us and presumably other kinds of life on Earth. I think if we fast-forward a hundred years, there’s a very good chance that Earth will look like Gaia, but it will be a Gaia that is technological as opposed to just biological. And in fact, I don’t think there’s an opposition between technology and biology. I think technology will just be the extension of biology by other means, right? It’s biology that’s made by us. I mean, we’re creatures; the things that we make are also biology in that sense. So if you look at it that way, maybe what has happened is that since the very beginning, Earth has been evolving towards Gaia – we just haven’t gotten there yet. And technology is very much part of getting there.
What do you think of the OpenAI initiative?
I think… so, the OpenAI initiative’s goal is to do AI for the common good. People like Elon Musk and Sam Altman were afraid that, because the largest quantity of AI research is being done inside companies like Google and Facebook and Microsoft and Amazon, it would be owned by them. And AI is very powerful, so it’s dangerous if AI is just owned by these companies. So their goal is to do AI research that is going to be open – hence the name – and available to everybody. I think this is a great agenda, so I very much agree with trying to do that. I think there’s nothing wrong with having a lot of AI research in companies, but I think it’s important that there also be AI research that is in the public domain. Universities are one way of doing that; something like OpenAI is another example; something like the Allen Institute for AI is another example of doing AI for the public good in this way.
So, I think this is a good agenda. What they’re going to do exactly and what their chances of succeeding are, and how their style of AI will compare to the styles of AI that are being produced by these other labs, whether industry or academia, is something that remains to be seen. But I’m curious to see what they get out of it.
The worry from some people is – and they make it analogous to a nuclear weapon – that if you say, “We don’t know how to build one, but we can get 99% of the way there and we’re going to share that with everybody on the planet,” then you have to hope that whoever adds the last little bit that makes it an AGI isn’t a bad actor of some kind. Does that make sense to you?
Yeah, yeah… I understand the analogy, but you have to remember that AI and nuclear weapons are very different, for a couple of reasons. One is that nuclear weapons are essentially destructive things, right? Yes, you can turn the technology into nuclear power, but they were invented to blow things up. Whereas AI is a tool that we use to do all sorts of things, like diagnose diseases and place ads on webpages – things from big to small. The knowledge to build a nuclear bomb is actually not that hard to come by. Fortunately, what is very hard to come by is the enriched uranium, or plutonium, to build the bomb. That’s actually what keeps any terrorist group from building a bomb: it’s not the lack of knowledge, it’s the lack of the materials. Now, AI is actually very different; you just need computing power, and you can just plug into the cloud and get that computing power. AI is just algorithms – it’s already accessible. Lots of people can use it for whatever they want. In a way, the safety lies in actually having AI in the hands of everybody, so that it’s not in the hands of a few. If only one person or one company had access to the master algorithm, they would be too powerful. If everybody has access to the master algorithm, then there will be competition, there will be collaboration, there will be a whole ecosystem of things that happen, and we will be safer that way – just as we are with the economy as it is. But, having said that, we will need something like an AI police. William Gibson in Neuromancer had this thing called the Turing police, right? The Turing police are AIs whose job is to police the other AIs, to make sure that they don’t go bad, or that they get stopped when they go bad. And again, this is no different from what already happens. We have highways, and bank robbers can use the highways to get away; that’s no reason not to have highways, but of course the police also need to have cars so they can catch the robbers. So I think it’s going to be a similar thing with AI.
When I do these chats with people in AI, science fiction writers always come up. They always get referenced; people always have their favorites and whatnot. Do you have any books, movies, TV shows, or anything like that, that you watch and go, “Yes, that could happen. I can see that”?
Unfortunately, a lot of the depictions of AI and robots in movies and TV shows are not very realistic, because the computers and robots are really just humans in disguise. The way you make an interesting story is by making the robots act like humans: they have an evil plan to take over the world, or somebody falls in love with them, and things like that. That’s how you make an interesting movie.
But real AIs, as we were talking about, are very different from that. So a lot of the movies that people associate with AI – Terminator, for example – are really not what is going to happen. With the proviso that science fiction is a great source of self-fulfilling prophecies, right? People read those things and then they try to make them happen. So, who knows.
Having said that, what is an example of a movie depicting AI that I think could happen, and is fairly interesting and realistic? Well, one example is the movie Her. The movie Her is basically about a virtual assistant that is very human-like, and ten years ago that would’ve been a very strange movie. These days we already have things like Siri and Cortana and Google Now, that are, of course, still a far cry from Her. But I think we’re going to get closer and closer to that.
And final question: what are you working on, and are you going to write another book? What keeps you busy?
Two things: I think we are pretty close to unifying those five master algorithms, and I’m still working on that. That’s what I’ve been working on for the last ten years. And I think we’re almost there. I think once we’re there, the next thing is that, as we’ve been talking about, that’s not going to be enough. So we need something else. I think we need something beyond the existing five paradigms we have, and I’m working on a new type of learning, that I hope will actually take us beyond what those five could do. Some people have jokingly called it the sixth paradigm, and maybe my next book will be called, “The Sixth Paradigm.” That makes it sound like a Dan Brown novel, but that’s definitely something that I’m working on.
When you say you think the master algorithm is almost ready… will there be a “ta-da” moment, like, “Here it is”? Or is it more of a gradual thing?
It’s a gradual thing, right? Again, look at physics: they’ve unified three of the forces – electromagnetism and the strong and weak forces. They still haven’t unified gravity with them; there are proposals like string theory to do that. These “ta-da” moments often only happen in retrospect. People propose a theory, and then maybe it gets tested, and then maybe it gets revised, and then finally, when all the pieces are in place, people go, “Oh wow.” And I think it’s going to be like that with the master algorithm as well. We have candidates, we have ways of putting these pieces together; it still remains to be seen whether they can do all the things that we want, and how well they will scale. Scaling is very important, because if it’s not scalable then it’s not really solving the problem. So, we’ll see.
All right, well thank you so much for being on the show.
Thanks for having me, this was great!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(https://voicesinai.com/wp-content/uploads/2017/06/cropped-voices-background.jpg) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; } .voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; } .voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(https://voicesinai.com/wp-content/uploads/2017/06/voices-in-ai-logo-light-768x264.png) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem } @media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } } .voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; } .voice-in-ai-link-back a:hover { color: #ff4f00; } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; }
0 notes
babbleuk · 7 years ago
Text
Voices in AI – Episode 23: A Conversation with Pedro Domingos
Today's leading minds talk AI with host Byron Reese
.voice-in-ai-byline-embed { font-size: 1.4rem; background: url(https://voicesinai.com/wp-content/uploads/2017/06/cropped-voices-background.jpg) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; } .voice-in-ai-byline-embed span { color: #FF6B00; }
In this episode Byron and Pedro Domingos talk about the master algorithm, machine creativity, and the creation of new jobs in the wake of the AI revolution.
-
-
0:00
0:00
0:00
var go_alex_briefing = { expanded: true, get_vars: {}, twitter_player: false, auto_play: false }; (function( $ ) { 'use strict'; go_alex_briefing.init = function() { this.build_get_vars(); if ( 'undefined' != typeof go_alex_briefing.get_vars['action'] ) { this.twitter_player = 'true'; } if ( 'undefined' != typeof go_alex_briefing.get_vars['auto_play'] ) { this.auto_play = go_alex_briefing.get_vars['auto_play']; } if ( 'true' == this.twitter_player ) { $( '#top-header' ).remove(); } var $amplitude_args = { 'songs': [{"name":"Episode 23: A Conversation with Pedro Domingos","artist":"Byron Reese","album":"Voices in AI","url":"https:\/\/voicesinai.s3.amazonaws.com\/2017-05-31-pedro-domingos-(00-54-04).mp3","live":false,"cover_art_url":"https:\/\/voicesinai.com\/wp-content\/uploads\/2017\/12\/voices-headshot-card_preview.jpeg"}], 'default_album_art': 'https://gigaom.com/wp-content/plugins/go-alexa-briefing/components/external/amplify/images/no-cover-large.png' }; if ( 'true' == this.auto_play ) { $amplitude_args.autoplay = true; } Amplitude.init( $amplitude_args ); this.watch_controls(); }; go_alex_briefing.watch_controls = function() { $( '#small-player' ).hover( function() { $( '#small-player-middle-controls' ).show(); $( '#small-player-middle-meta' ).hide(); }, function() { $( '#small-player-middle-controls' ).hide(); $( '#small-player-middle-meta' ).show(); }); $( '#top-header' ).hover(function(){ $( '#top-header' ).show(); $( '#small-player' ).show(); }, function(){ }); $( '#small-player-toggle' ).click(function(){ $( '.hidden-on-collapse' ).show(); $( '.hidden-on-expanded' ).hide(); /* Is expanded */ go_alex_briefing.expanded = true; }); $('#top-header-toggle').click(function(){ $( '.hidden-on-collapse' ).hide(); $( '.hidden-on-expanded' ).show(); /* Is collapsed */ go_alex_briefing.expanded = false; }); // We're hacking it a bit so it works the way we want $( '#small-player-toggle' ).click(); $( '#top-header-toggle' ).hide(); }; go_alex_briefing.build_get_vars = function() { if( document.location.toString().indexOf( '?' ) !== -1 ) { var query = document.location .toString() // get the query string .replace(/^.*?\?/, '') // and remove any existing hash string (thanks, @vrijdenker) .replace(/#.*$/, '') .split('&'); for( var i=0, l=query.length; i<l; i++ ) { var aux = decodeURIComponent( query[i] ).split( '=' ); this.get_vars[ aux[0] ] = aux[1]; } } }; $( function() { go_alex_briefing.init(); }); })( jQuery ); .go-alexa-briefing-player { margin-bottom: 3rem; margin-right: 0; float: none; } .go-alexa-briefing-player div#top-header { width: 100%; max-width: 1000px; min-height: 50px; } .go-alexa-briefing-player div#top-large-album { width: 100%; max-width: 1000px; height: auto; margin-right: auto; margin-left: auto; z-index: 0; margin-top: 50px; } .go-alexa-briefing-player div#top-large-album img#large-album-art { width: 100%; height: auto; border-radius: 0; } .go-alexa-briefing-player div#small-player { margin-top: 38px; width: 100%; max-width: 1000px; } .go-alexa-briefing-player div#small-player div#small-player-full-bottom-info { width: 90%; text-align: center; } .go-alexa-briefing-player div#small-player div#small-player-full-bottom-info div#song-time-visualization-large { width: 75%; } .go-alexa-briefing-player div#small-player-full-bottom { background-color: #f2f2f2; border-bottom-left-radius: 5px; border-bottom-right-radius: 5px; height: 57px; }
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(https://voicesinai.com/wp-content/uploads/2017/06/cropped-voices-background.jpg) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; } .voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; } .voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(https://voicesinai.com/wp-content/uploads/2017/06/voices-in-ai-logo-light-768x264.png) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem } @media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } } .voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; } .voice-in-ai-link-back a:hover { color: #ff4f00; } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); } .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; }
Byron Reese: Hello, this is Voices in AI, brought to you by Gigaom, I’m Byron Reese. Today, I’m excited, our guest is none other than Pedro Domingos, professor at the University of Washington, but notable for his book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake our World. Pedro, welcome to the show.
Pedro Domingos: Thanks for having me.
What is artificial intelligence?
Artificial intelligence is getting computers to do things that traditionally require human intelligence, like reasoning, problem solving, common sense knowledge, learning, vision, speech and language understanding, planning, decision-making and so on.
And is it artificial in the sense that artificial turf is artificial, in that it isn't really intelligence, it just looks like intelligence? Or is it actually truly intelligent, and the "artificial" just denotes that we created it?
That’s a fun analogy. I hadn’t heard that before. No, I don’t think AI is like artificial turf. I think it’s real intelligence. It’s just intelligence of a different kind. We’re used to thinking of human intelligence, or maybe animal intelligence, as the only intelligence on the planet. What happens now is a different kind of intelligence. It’s a little bit like, does a submarine really swim? Or is it faking that it swims? Actually, it doesn’t really swim, but it can still travel underwater using very different ideas. Or, you know, does a plane fly even though it doesn’t flap its wings? Well, it doesn’t flap its wings but it does fly – and AI is a little bit like that. In some ways, actually, artificial intelligence is intelligent in ways that human intelligence isn’t. There are many areas where AI exceeds human intelligence, so I would say that they’re different forms of intelligence, but it is very much a form of intelligence.
And how would you describe the state of the art, right now?
So, in science and technology progress often happens in spurts. There are long periods of slow progress and then there are periods of very sudden, very rapid progress. And we are definitely in one of those periods of very rapid progress in AI, which was a long time in the making. AI is a field that’s fifty years old, and we had what was called the “AI spring” in the 80s, where it looked like it was going to really take off. But then that didn’t really happen at the end of the day, and the problem was that people back then were trying to do AI using what’s called “knowledge engineering.” If I wanted an AI system to do medical diagnosis, I had to interview doctors and program, you know, the doctor’s knowledge of diagnosis and the formal rules into the computers, and that didn’t scale.
The thing that has changed recently is that we have a new way to do AI, which is machine learning. Instead of trying to program the computers to do things, the computers program themselves by learning from data. So now what I do for medical diagnosis is I give the computer a database of patient records, what their symptoms and test results were and what the diagnosis was, and from just that, in thirty seconds, the computer can learn typically to do medical diagnosis better than human doctors. So, thanks to that, thanks to machine learning, we are now seeing a phase of very rapid progress. Also because the learning algorithms have gotten better, and very importantly – the beauty of machine learning is that, because the intelligence comes from the data, as the data grows exponentially the AI systems get more intelligent with essentially no extra work from us. So now AI is becoming very powerful. Just on the back of the weight of data that we have.
The other element, of course, is computing power. We need enough computing power to turn all that data into intelligent systems, but we do have it. So the combination of learning algorithms, a lot of data, and a lot of computing power is what is making the current progress happen.
And, how long do you think we can ride that wave? Do you think that machine learning is the path to an AGI, hypothetically? I mean, do we have, ten, twenty, forty more years of running with kind of the machine learning ball? Or, do we need another kind of breakthrough?
I think machine learning is definitely the path to artificial general intelligence, and I think there are few people in AI who would disagree with that. Your computer can be as intelligent as you want, but if it can't learn, thirty minutes later it will be falling behind humans. So, machine learning really is essential to getting to intelligence. In fact, the whole idea of the singularity came from I.J. Good, back in the 50s, who had this idea of a learning machine that could make a machine that learned better than it did, as a result of which you would have this succession of better and better, more and more intelligent machines until they left humans in the dust. Now, how long will it take? That's very hard to predict, precisely because progress is not linear. I think the current bloom of progress at some point will probably plateau. I don't think we're on the verge of having general AI. We've come a thousand miles, but there's a million miles more to go. We're going to need many more breakthroughs, and who knows where those breakthroughs will come from.
In the most optimistic view, maybe this will all happen in the next decade or two, because things will just happen one after another, and we’ll have it very soon. In the more pessimistic view, it’s just too hard and it’ll never happen. If you poll the AI experts, they never just say it’s going to be several decades. But the truth is nobody really knows for sure.
What is kind of interesting is not that people don't know, and not that their forecasts are kind of all over the map, but that among knowledgeable people, if you look at the extreme estimates, five years is the most aggressive, and the furthest out are like five hundred years. And what does that suggest to you? You know, if I went to my cleaners and said, "Hey, when is my shirt going to be ready?" and they said, "Sometime between five and five hundred days," I would be like, "Okay… something is going on here." Why do you think the opinions are so variant on when we get an AGI?
Well, the cleaners, when they clean your shirt, it’s a very well-known, very repeatable process. They know how long it takes and it’s going to take the same thing this time, right? There are very few unknowns. The problem in AI is that we don’t even know what we don’t know. We have no idea what we’re missing, so some people think we’re not missing that much. Those are the optimists, saying, “Oh, we just need more data.” Right? Back in the 80s they said, “Oh, we just need more knowledge.” And then, that wasn’t the case, so that’s the optimistic view. The more pessimistic view is that this is a really, really hard problem and we’ve only scratched the surface, so the uncertainty comes from the fact that we don’t even know what we don’t know.
We certainly don’t know how the brain works, right? We have vague ideas of kind of like what different parts of it do, but in terms of how a thought is encoded, we don’t know. Do you think we need to know more about our own intelligence to make an AGI, or is it like, “No, that’s apples and oranges. It doesn’t really matter how the brain works. We’re building an AGI differently.”
Not necessarily. So, there are different schools of thought in AI, and this is part of what I talk about in my book. There is one thought, one school of thought in AI – the Connectionists – whose whole agenda is to reverse engineer the brain. They think that the shortest path is, you know, “here’s the competition, go reverse engineer it, figure out how it works, build it on the computer, and then we’ll have intelligence.” So that is definitely a plausible approach. I think it’s actually a very difficult approach, precisely because we understand so little about how the brain works. In some ways maybe it’s trying to solve a problem by way of solving the hardest of problems. And then there are other AI types, namely the Symbolists, whose whole idea is, “No, we don’t need to understand things at that low level. In fact, we’re just going to get lost in the weeds if we try to do that. We have to understand intelligence at a higher-level abstraction and we’ll get there much sooner that way. So forget how the brain works, that’s really not important.” Again, the analogy with the brains and airplanes is a good one. What the Symbolists say is, “If we try to make airplanes by building machines that will flap their wings we’ll never have them. What we need to do is understand the laws of physics and aerodynamics and then build machines based on that.” So there are different schools of thought. And I actually think it’s good that there are different schools of thought and we’ll see who gets there first.
So, you mentioned your book, The Master Algorithm, which is of course required reading in this field. Can you give the listener who may not be as familiar with it, an overview of what is The Master Algorithm? What are we looking for?
Yeah, sure. So the book is essentially an introduction to machine learning for a general audience: not just technical people, but business people, policy makers, and citizens who are just curious. It talks about the impact that machine learning is already having in the world. A lot of people think these things are science fiction, but they are already in their lives; they just don't know it. It also looks at the future and what we can expect coming down the line. But mainly, it is an introduction to what I was just describing: that there are five main schools of thought in machine learning. There are the people who want to reverse engineer the brain; the ones who want to simulate evolution; the ones who do machine learning by automating the scientific method; the ones who use Bayesian statistics; and the ones who do reasoning by analogy, like people do in everyday life. And then I look at what these different methods can and can't do.
The name The Master Algorithm comes from this notion that the machine learning algorithm is a master algorithm in the same sense that a master key opens all doors. A learning algorithm can do all sorts of different things while being the same algorithm. This is really what's extraordinary about machine learning: in traditional computer science, if I want the computer to play chess, I have to write a program explaining how to play chess; and if I want it to drive a car, I have to write a program explaining how to drive a car. With machine learning, the same learning algorithm can learn to play chess, or drive a car, or do a million other things, just by learning from the appropriate data. And each of these tribes of machine learning has its own master algorithm. The more optimistic members of each tribe believe that you can do everything with that master algorithm. My contention in the book is that each of these algorithms is only solving part of the problem. What we need to do is unify them all into a grand theory of machine learning, in the same way that physics has a standard model and biology has a central dogma. That will be the true master algorithm. And I suggest some paths towards that algorithm, and I think we're actually getting pretty close to it.
One thing I found empowering in the book, and you state it over and over at the beginning, is that the master algorithm is aspirationally accessible to a wide range of people. You basically said, "You, listening to the book, this is still a field where the layman can make some kind of breakthrough." Can you speak to that for just a minute?
Absolutely, in fact that’s part of what got me into machine learning, is that, unlike physics or mathematics or biology that are very mature fields and you really can only contribute once you have at least a PhD; computer science and AI and machine learning are still very young. So, you could be a kid in a garage and have a great idea that will be transformative. And I hope that that will happen. I think, even after we find this master algorithm that’s the unification of the five current ones, as we were talking about, we will still be missing some really important, really deep ideas. And I think in some ways, someone coming from outside the field is more likely to find those, than those of us who are professional machine learning researchers, and are already thinking along these tracks of these particular schools of thought. So, part of my goal in writing the book was to get people who are not machine learning experts thinking about machine learning and maybe having the next great ideas that will get us closer to AGI.
And, you also point out in the book why you believe that we know that such a thing is possible, and one of your proof points is our intelligence.
Exactly.
Can you speak to that?
Yeah, so this is, of course, one of those very ambitious goals that people should be a little suspicious of at the outset, right? Like the philosopher's stone or the perpetual motion machine: is it really possible? And again, some people don't think it's possible. I think there are a number of reasons why I'm pretty sure it is possible, one of which is that we already have existence proofs. One existence proof is our brain, right? As long as you believe in reductionism, which all scientists do, then the way your brain works can be expressed as an algorithm. And if I program that algorithm into a computer, then that algorithm can learn everything that your brain can. Therefore, in that sense, at least, one version of the master algorithm already exists. Another one is evolution. Evolution created us and all life on Earth. And it is essentially an algorithm, and we roughly understand how that algorithm works, so there is another existing instance of the master algorithm.
Then there’s also, besides these more empirical reasons, there’s also theoretical reasons that tell us that a master algorithm exists. One of which is that for each of the five tribes, for their master algorithm there’s a theorem that says, if you give enough data to this algorithm it can learn any function. So, at least at that level we already know that master algorithms exist. Now the question is how complicated will it be, how hard it will be to get us there? How broadly good would that algorithm be in terms of learning from a reasonable amount of data in a reasonable amount of time?
You just said all scientists are reductionist. Is that necessarily the case, like, can you not be a scientist and believe in something like strong emergence, and say, “Actually you can’t necessarily take the human mind down to individual atoms and kind of reconstruct…”
Yeah, yeah, absolutely, so, what I mean… this is a very good point. In fact, in the sense that you're talking about, we cannot be reductionists in AI. What I mean by reductionist is just the idea that we can decompose a complex system into simpler, smaller parts that interact and that make up the system. This is how all of the sciences and engineering work. But very much, this does not preclude the existence of emergent properties. So, the system can be more than the sum of its parts, if it's non-linear. And very much the brain is a non-linear system. And that's what we have to deal with to reach AI. You could even say that machine learning is the science of emergent properties. In fact, one of the names by which it has been known in some quarters is "self-organizing systems." And in fact, what makes AI hard, the reason we haven't already solved it, is that the usual divide and conquer strategy that scientists and engineers follow, of dividing problems into smaller and smaller sub-problems and then solving the sub-problems and putting the solutions together, tends not to work in AI, because the subsystems are very strongly coupled together. So, this is a harder problem and there are emergent properties, but that does not mean that you can't reduce it to these pieces; it's just a harder thing to do.
Marvin Minsky, I remember, talked about how, you know, we kind of got tricked a little bit by the fact that it takes very few fundamental laws of the universe to understand most of physics. The same with electricity. The same with magnetism. There are very few simple laws to explain everything that happens. And so the hope had been that intelligence would be like that. Are we giving up on that notion?
Yes, so, again there are different views within AI on this. I think at one end there are people who hope we will discover a few laws of AI and those will solve everything. At the other end of the spectrum there are people like Marvin Minsky who just think that intelligence is a big, big pile of hacks. He even has a book with, like, one of these tricks per page, and who knows how many more there are. I think, and most people in AI believe, that it's somewhere in between. If AI is just a big pile of hacks, we're never going to get there. And it can't really be just a pile of hacks, because if the hacks were so powerful as to create intelligence, then you can't really call them hacks.
On the other hand, you know, you can't reduce it to a few laws, like Newton's laws. So the idea of the master algorithm is that at the end of the day we will find one algorithm that does intelligence, but that algorithm is not going to be a hundred lines of code. It's not going to be millions of lines of code either. If the algorithm is thousands, or maybe tens of thousands, of lines of code, that would be great. It'll still be a more complex theory, much more complex than the ones we have in physics, but it'll be much, much simpler than what people like Marvin Minsky envisioned.
And if we find the master algorithm… is that good for humanity?
Well, I think it's good or bad depending on what we do with it. Like all technology, machine learning gives us more power. You can think of it as a superpower, right? Telephones let us speak at a distance, airplanes let us fly, and machine learning lets us predict things and lets technology adapt automatically to our needs. All of this is good if we use it for good. If we use it for bad, it will be bad, right? The technology itself doesn't know how it's going to be used, and part of my reason for writing this book is that everybody needs to be aware of what machine learning is and what it can do, so that they can control it. Because, otherwise, machine learning will just give more control to those few who actually know how to use it.
I think, if you look at the history of technology, over time, in the end, the good tends to prevail over the bad, which is why we live in a better world today than we did 200 years or 2,000 years ago. But we have to make it happen, right? It just doesn’t fall from the tree like that.
And so, in your view, the master algorithm is essentially synonymous with AGI in the sense that it can figure anything out, it’s a general artificial intelligence. Would it be conscious?
Yeah, so, by the way, I wouldn’t say the master algorithm is synonymous with AGI. I think it’s the enabler of AGI. Once we have a master algorithm we’re still going to need to apply it to vision, and language, and reasoning, and all these things. And then, we’ll have AGI. So, one way to think about this is that it’s an 80/20 rule. The master algorithm is the 20% of the work that gets you 80% of the way, but you still need to do the rest, right? So maybe this is a better way to think about it.
Fair enough. So, I’ll just ask the question a little more directly. What do you think consciousness is?
That's a very good question. The truth is, what makes consciousness simultaneously so fascinating and so hard is that at the end of the day, if there is one thing that I know, it's that I'm conscious, right? Descartes said, "I think, therefore I am," but maybe he should've said "I'm conscious, therefore I am." The laws of physics, who knows, they might even be wrong. But the fact that I'm conscious right now is absolutely unquestionable. So, everybody knows that about themselves. At the same time, because consciousness is a subjective experience, it doesn't lend itself to the scientific method. What are reproducible experiments when it comes to consciousness? That's one aspect. The other is that consciousness is a very complex emergent phenomenon, so nobody really knows what it is, or understands it, even at a fairly shallow level. Now, the reason we believe others have consciousness: you believe that I have consciousness because you're a human being, I'm a human being, so since you have consciousness I probably have consciousness as well. And this is really the extent of it. For all you know, I could be a robot talking to you right now, passing the Turing test, and not be conscious at all.
Now, what happens with machines? How can we tell whether a machine is conscious or not? This has been grist for the mill of a lot of philosophers over the last few decades. I think the bottom line is that once a computer starts to act like it's conscious, we will treat it as if it's conscious; we will grant it consciousness. In fact, we already do that, even with very simple chatbots and whatnot. So, as far as everyday life goes, it actually won't be long. In some ways, people treating computers as conscious will happen sooner than computers becoming truly intelligent. Because that's all we need, right? We project these human properties onto things that act humanly, even in the slightest way.
Now, at the end of the day, if you gaze down into that hardware and those circuits… is there really consciousness there? I don't know if we will ever be able to answer that question. Right now, I actually don't see a good way. I think there will come a point at which we understand consciousness well enough, because we understand the brain well enough, that we are fairly confident we can tell whether something is conscious or not. And then at that point I think we will apply these criteria to the machines, and the machines, at least the ones that have been designed to be conscious, will pass the tests, and so we will believe that machines have consciousness. But, you know, we can never be totally sure.
And, do you believe consciousness is required for a general intellect?
I think there are many kinds of AI and many AI applications that do not require consciousness. So, for example, if I tell a machine learning system to go solve cancer – that's one of the things we'd like to do, cure cancer, and machine learning is a very big part of the battle to cure cancer – I don't think that requires consciousness at all. It requires a lot of searching, and understanding molecular biology, and trying different drugs, maybe designing drugs, etc. So, 90% of AI will involve no consciousness at all.
There are some applications of AI, and some types of AI, that will require consciousness, or something indistinguishable from it – for example, house bots. We would like to have a robot that cooks dinner and does the dishes and makes the bed and whatnot. In order to do all those things, the robot has to have all the capabilities of a human; it has to integrate all of these senses, vision, and touch, and perception, and hearing and whatnot, and then make decisions based on them. I think this is either going to be consciousness or indistinguishable from it.
Do you think there will be problems that arise if that happens? Let’s say you build Rosie the Robot, and you don’t know, like you said, deep down inside, if the robot is conscious or merely acting as if it is. Do you think at that point we have to have this question of, “Are we fine enslaving what could be a conscious machine to plunge our toilet for us?”
Well, that depends on what you consider enslaving, right? So, one way to look at this, and it's the way I look at it, is that these are still just machines, right? Just because they have consciousness doesn't mean that they have human rights. Human rights are for humans. I don't think there's such a thing as robot rights. The deeper question here is, what gives something rights? One school of thought is that it's the ability to suffer that gives you rights, and therefore animals should have rights. But if you think about it historically, the idea of animal rights even 50 years ago would've seemed absurd. So, by the same standard, maybe 50 years from now people will want robots to have rights. In fact, there are some people already talking about it. I think it's a very strange idea. And often people ask, will the machines be our friends or will they be our slaves? Will they be our equals? Will they be inferior? Actually, I think this whole way of framing things is mistaken. The robots will be neither our equals nor our slaves. They will be our extensions, right?
Robots are technology, they augment us. I think it’s not so much that the machines will be conscious, but that through machines we will have a bigger consciousness. In the same way that, for example, the internet already gives us a bigger consciousness than we had when there was no internet.
So, robots lead us to a topic that's in the news literally every day: the prospect that automation and technological advance will eliminate jobs faster than they can create new ones, or will eliminate jobs and replace them with inaccessible kinds of jobs. What do you think about that? What do you think the future holds?
I think we have to distinguish between the near term, by which I mean the next ten years or so, and the long term. In the near term, I think some jobs will disappear, just like jobs have disappeared to automation in the past. AI is really automation on steroids. So I think what's going to happen in the near term is not so different from what has happened in the past. Some jobs will be automated, so some jobs will disappear, but many new jobs will appear as well. It's always easier to see the jobs that disappear than the ones that appear. Think, for example, of being an app developer. There are millions of people who make a living today as app developers. Ten years ago that job didn't exist. Fifty years ago you couldn't even imagine that job. Two hundred years ago, ninety-something percent of Americans were farmers, and then farming got automated. Today only 2% of Americans work in agriculture. That doesn't mean the other 98% are unemployed; they're just doing all these jobs that people couldn't even imagine before. I think a lot of that is what's going to happen here. We will see entirely new job categories appear. We will also see, on a more mundane level, more demand for lots of existing jobs. For example, I think truck drivers should be worried about the future of their jobs, because self-driving cars are coming, so there is an end point. There are many millions of truck drivers in the US alone; it's one of the most widespread occupations. But now, what will they do? People say, "Oh, you can't turn truck drivers into programmers." Well, you don't have to turn them into programmers. Think about what's going to happen: because trucks are self-driving, goods will cost less. When goods cost less, people will have more money in their pockets, and they will spend it on other things, like, for example, bigger, better houses. And therefore there will be more demand for construction workers, and some of these truck drivers will become construction workers, and so on.
You know, having said all that, I think that in the near term the most important thing that's going to happen to jobs is actually neither the ones that will disappear nor the ones that will appear: most jobs will be transformed by AI. The way I do my job will change because some parts will become automated, but then I will be able to do more things, and do them better, than I could before, when I didn't have the automation. So, really, the question everybody needs to think about is: what parts of my job can I automate? The best way to protect your job from automation is to automate it yourself, and then ask, what more can I do, using these machine learning tools?
Automation is like having a horse. You don’t try to outrun a horse; you ride the horse. And we have to ride automation, to do our jobs better and in more ways than we can now.
So, it doesn’t sound like you’re all that pessimistic about the future of employment?
I’m optimistic, but I also worry. I think that’s a good combination. I think if we’re pessimistic we’ll never do anything. Again, if you look at the history of technology, the optimists at the end of the day are the ones who made the world a better place, not the pessimists. But at the same time, we need to… naïve optimism is very dangerous, right? We need to worry continuously about all the things that could go wrong and make sure that they don’t go wrong. So I think that a combination of optimism and worry is the right one to have.
Some people say we’ll find a way to merge, mentally, with the AI. Is that even a valid question? What do you think of it?
I think that's what's going to happen. In fact, it's already happening. We are going to merge with our machines step by step. You know, a computer is a machine that is closer to us than a television. A smartphone is closer to us than a desktop is, and the laptop is somewhere in between. And we're already starting to see things such as Google Glass and augmented reality, where in essence the computer is extending our senses and extending our power to do things. And Elon Musk has this company that is going to create an interface between neurons and computers, and in fact, in research labs this already exists. I have colleagues that work on that. They're called brain-computer interfaces. So, step by step, right? The way to think about this is, we are cyborgs, right? Human beings are actually the cyborg species. From day one, we were one with our technology. Even our physiology would be different if we couldn't do things like light fires and throw spears. So this has always been an ongoing process. Part of us is technology, and that will become more and more so in the future. Also, with things like the Internet, we are connecting ourselves into something bigger. Humanity itself is an emergent phenomenon, and having the Internet and computers allows a greater level to emerge. Exactly how this will happen and when is, of course, up for grabs, but that's the way things are going.
So, you mentioned the singularity in passing a minute ago. Do you believe that that is what will happen, as is commonly thought? That there is going to be this kind of point in the reasonably near future beyond which we cannot see anything, because we don't have any frame of reference?
I don’t believe a singularity will happen in those terms. So this idea of exponentially increasing progress that goes on forever… that’s not going to happen because it’s physically impossible, right? No exponential goes on forever. It always flattens out sooner or later. All exponentials are really what are called “S curves” in disguise. They go up faster and faster, and this is how all previous technology waves have looked, but then they flatten out, and finally they plateau. Also, this notion that at some point things will become completely incomprehensible for us… I don’t believe that either, because there will always be parts that we understand, number one, and there are limits to what any intelligence can do – human or non-human.
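(A quick aside on the math here: the canonical S curve is the logistic function, and early on it is indistinguishable from an exponential, which is exactly the "in disguise" point. A minimal sketch:

f(t) = \frac{L}{1 + e^{-k(t - t_0)}} \approx L\, e^{k(t - t_0)} \quad \text{for } t \ll t_0

Growth looks exponential while t is well below the inflection point t_0, then bends over and saturates at the ceiling L.)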
By that standard, the singularity has already happened. A hundred years ago the most advanced technology was maybe something like a car, right? And I could understand every part of how a car works, completely. Today we have technology, like our computer systems, that nobody understands as a whole. Different people understand different parts, and with machine learning in particular, the thing that's notable about machine learning algorithms is that they can do very complex things very well, and we have no idea how they're doing them. And yet, we are comfortable with that, because we don't necessarily care about the details of how it is accomplished; we just care whether the medical diagnosis was correct, the patient's cancer was cured, or the car is driving correctly. So I think this notion of the singularity is a little bit off.
Having said that, we are in the middle of one of these S curves. We are seeing very rapid progress, and by the time this has run its course, the world will be a very, very different place from what it is today.
How so?
All these things that we've been talking about. We will have intelligent machines surrounding us. Not just humanoid machines but intelligence on tap, right? In the same way that today you can use electricity for whatever you want just by plugging into a socket, you will be able to plug into intelligence. And indeed, the leading tech companies are already trying to make this happen. So there will be all these things that the greater intelligence enables. Everybody will have a home robot in the same way that they have a car. We will have this whole process that the Internet is enabling, and that the intelligence on top of the Internet, and the Internet of Things, are enabling, and so on. There will be something like this larger emergent being, if you will, that's not just individual human beings or just societies. Again, it's hard to picture exactly what that would be, but this is going to happen.
You know, it always makes the news when an artificial intelligence masters some game, like, we all know the list. You had chess, and then you had Jeopardy, of course, and then you had AlphaGo, and then recently you had poker. And I get that games are kind of a natural place, because I guess it’s a confined universe with very rigid, specific rules, and then there’s a lot of training data for teaching it how to function in that. Are there types of problems that machine learning isn’t suited to solve? I mean, just kind of philosophically, it doesn’t matter how good your algorithms are, or how much data you have, or how fast a computer is; it’s not the way to solve that particular problem.
Well, certainly some problems are much harder than others, and, as you say, games are easier in the sense that they are these very constrained, artificial universes. And that’s why we can do so well in them. In fact, the summary of what machine learning and AI are good for today is that they are good for these tasks that are somewhat well-defined and constrained.
What people are much better at are things that require knowledge of the world, they require common sense, they require integrating lots of different information. We’re not there yet. We don’t have the learning algorithms that can do that, so the learning algorithms that we have today are certainly good for some things but not others. But again, if we have the master algorithm then we will be able to do all these things and we are making progress towards them. So, we’ll see.
Any time I see a chatbot or something that’s trying to pass the Turing test, I always type the same first question, which is, “Which is bigger, a nickel or the sun?” And not a single one of them has ever answered it correctly.
Well, exactly. Because they don’t have common sense knowledge. It’s amazing what computers can do in some ways, and it’s amazing what they can’t do in others. It’s like these really simple pieces of common sense logic. In a way one of the big lessons that we’ve learned in AI is that automating the job of a doctor or a lawyer is actually easy. What is very hard to do with AI is what a three year old can do, right? If we could have a robot baby that can do what a one year old can do, and learn the same way, we would have solved AI. It’s much, much harder to do those things; things that we take for granted, like picking up an object, for example, or like walking around without tripping. We take this for granted because evolution spent five hundred million years developing it. It’s extremely sophisticated, but for us it’s below the conscious level. The things for us that we are conscious of and that we have to go to college for, well, we’re not very good at them; we just learned to do them recently. Those, the computers can do much better.
So, in some ways in AI, it’s the hard things that are easy and the easy things that are hard.
Does it mean anything if something finally passes the Turing test? And if so, when do you think that might happen? When will it say, “Well, the sun is clearly bigger than a nickel.”
Well, with all due respect to Alan Turing, who was a great genius and an AI pioneer, most people in AI, including me, believe that the Turing test is actually a bad idea. The reason the Turing test is a bad idea is that it confuses being intelligent with being human. This idea that you can prove you're intelligent by fooling a human into thinking you're a human is very weird, if you think about it. It's like saying an airplane doesn't fly until it can fool birds into thinking it's a bird. That doesn't make any sense. True intelligence can take many forms, not necessarily the human form. So in some ways we don't need to pass the Turing test to have AI. And in other ways the Turing test is too easy to pass, and by some standards it has already been passed by systems that no one would call intelligent. Talking with someone for five minutes and fooling them into thinking you're a human is actually not that hard, because humans are remarkably adept at projecting humanity onto anything that acts human. In fact, even in the 60s there was this famous thing called ELIZA that basically just picked up keywords in what you said and gave back canned responses. And if you talked to ELIZA for five minutes you'd actually think that it was a human.
Although Weizenbaum’s observation was, even when people knew ELIZA was just a program, they still formed emotional attachments to it, and that’s what he found so disturbing.
Exactly. Human beings have this uncanny ability to treat things as human, because that's the only reference point that we have, right? This whole idea of reasoning by analogy: if we have something that behaves even a little bit like a human – because there's nothing else in the universe to compare it to – we start treating it more like a human and projecting more human qualities onto it. And, by the way, this will matter once companies start making bots. It's already happening with chatbots like Siri and Cortana and whatnot, and it'll happen even more so with home robots. There's going to be a race to make the robots more and more humanlike, because if you form an emotional attachment to my product, that's what I want, right? I'll sell more of it, at a higher price, and so on and so forth. So, we're going to see uncannily humanlike robots and AIs – whether this is a good or bad thing is another matter.
What do you think creativity is? And wouldn't an AGI, by definition, be creative? It could write a sonnet, or…
Yeah, so… AGI, by definition, would be creative. One thing that you hear a lot these days, and that unfortunately is incorrect, is that, "Oh, we can automate these menial, routine jobs, but creativity is this deeply human thing that will never be automated." This is a superficially plausible notion, when in fact there are already examples of computers that can compose music. There's this guy, David Cope, a professor at UC Santa Cruz, who has a computer program that will create music in the style of the composer of your choice. He does this test where he plays a piece by Mozart, a piece by a human composer imitating Mozart, and a piece by his system, and at a conference I attended he asked people to vote for which one was the real Amadeus. The real one won, but second place was actually the computer. So a computer can already write Mozart better than a professional, highly educated human composer can. Computers have made paintings that are actually quite beautiful and striking, many of them. Computers these days write news stories. There's this company called Narrative Science that will write news stories for you, and the likes of Forbes or Fortune – I forget which one it is – actually publish some of the things that it writes. So it's not a novel yet, but we will get there. And also in other areas, like chess and AlphaGo, which are notable examples: both Kasparov and Lee Sedol, when they were beaten by the computer, had this remarkable reaction, saying, "Wow, the computer was so creative. It came up with these moves that I would never have thought of, that seemed dumb at first but turned out to be absolutely brilliant." And computers have done things in mathematics, theorems and proofs, etc., all of which, if done by humans, would be considered highly creative. So, automating creativity is actually not that hard.
It’s funny, when Kasparov first said it seemed creative, what he was implying was that IBM cheated, that people had intervened. And IBM hadn’t. But, that’s a testament to just how…
There were actually two phases, right? He said that at first, so he was suspicious because, again, how could something not human actually be doing that? But then later, after the match when he had lost and so on, if you remember, there was this move that Deep Blue made that seemed like a crazy move, because its point only became clear, whatever, five moves later. And Kasparov had this expression; he said, "I could smell a new kind of intelligence playing against me." Which is very interesting for us AI types, because we know exactly what was going on, right? It was search algorithms and a whole bunch of technology that we understand fairly well. It's interesting that from the outside this just seemed like a new kind of intelligence, and maybe it is.
He also said, "At least it didn't enjoy beating me." Which, I guess, someday it may, right?
Oh, yeah, yeah! And you know, that could happen, depending on how we build them, right? The other very interesting thing that happened in that match – and again, I think it's symptomatic – is that Kasparov is someone who always won by basically intimidating his opponents into submission. They just got scared of him, and then he beat them. But Deep Blue couldn't be intimidated by him; it was just a machine, right? As a result of which, Kasparov himself, probably for the first time in his life, became insecure. And then, after he lost that game, in the following games he actually made mistakes that he would never normally make, because he had suddenly become insecure himself.
Foreboding, isn’t it? We talked about emergence a couple of times. There’s the Gaia hypothesis that maybe all of the life on our planet has an emergent property – some kind of an intelligence that we can’t perceive any more than our cells can perceive us. Do you have any thoughts on that? And do you have any thoughts on if eventually the Internet could just become emergent – an emergent consciousness?
Right, so, I don't believe in the Gaia hypothesis in the sense that the Earth as it is does not have enough self-regulating ability to achieve the homeostasis that living beings do. In fact, in some cases you get feedback cycles where things actually go very wrong. So, most scientists don't believe in the Gaia hypothesis for Earth today. Now, what I think – and a lot of other people think is the case – is that maybe the Gaia hypothesis will be true in the future, because the Internet is expanding, the Internet of Things is putting sensors all over the place, literally all over the planet, and a lot of actions are being taken based on those sensors to, among other things, preserve us and presumably other kinds of life on Earth. I think if we fast-forward a hundred years there's a very good chance that Earth will look like Gaia, but it will be a Gaia that is technological as opposed to just biological. And in fact, I don't think that there's an opposition between technology and biology. I think technology will just be the extension of biology by other means, right? It's biology that's made by us. I mean, we're creatures, and the things that we make are also biology in that sense. So if you look at it that way, maybe what has happened is that since the very beginning, Earth has been evolving towards Gaia; we just haven't gotten there yet. But technology is very much part of getting there.
What do you think of the OpenAI initiative?
I think… so, the OpenAI initiative, its goal is to do AI for the common good. Because, you know, people like Elon Musk and Sam Altman were afraid that, because most AI research is being done inside companies like Google and Facebook and Microsoft and Amazon, it would be owned by them. And AI is very powerful, so it's dangerous if AI is just owned by these companies. So their goal is to do AI research that is going to be open, hence the name, and available to everybody. I think this is a great agenda, so I very much agree with trying to do that. I think there's nothing wrong with having a lot of AI research in companies, but it's important that there also be AI research that is in the public domain. Universities are one way of doing that; something like OpenAI is another example; something like the Allen Institute for AI is another example of doing AI for the public good in this way.
So, I think this is a good agenda. What they’re going to do exactly and what their chances of succeeding are, and how their style of AI will compare to the styles of AI that are being produced by these other labs, whether industry or academia, is something that remains to be seen. But I’m curious to see what they get out of it.
The worry from some people – they make it analogous to a nuclear weapon – is that if you say, "We don't know how to build one, but we can get 99% of the way there and we're going to share that with everybody on the planet," then you have to hope that whoever adds the last little bit that makes it an AGI isn't a bad actor of some kind. Does that make sense to you?
Yeah, yeah… I understand the analogy, but you have to remember that AI and nuclear weapons are very different, for a couple of reasons. One is that nuclear weapons are essentially destructive things, right? Yes, you can turn them into nuclear power, but they were invented to blow things up. Whereas AI is a tool that we use to do all sorts of things, from diagnosing diseases to placing ads on webpages, things from big to small. The other is that the knowledge to build a nuclear bomb is actually not that hard to come by. Fortunately, what is very hard to come by is the enriched uranium, or plutonium, to build the bomb. That's actually what keeps any terrorist group from building a bomb: it's not the lack of knowledge, it's the lack of the materials. Now, in AI it's very different. You just need computing power, and you can just plug into the cloud and get that computing power. AI is just algorithms; it's already accessible. Lots of people can use it for whatever they want. In a way, the safety lies in having AI in the hands of everybody, so that it's not in the hands of a few. If only one person or one company had access to the master algorithm, they would be too powerful. If everybody has access to the master algorithm, then there will be competition, there will be collaboration, there will be a whole ecosystem of things that happen, and we will be safer that way, just as we are with the economy as it is. But, having said that, we will need something like an AI police. William Gibson, in Neuromancer, had this thing called the Turing police, right? The Turing police's job is to police the AIs, to make sure that they don't go bad, or that they get stopped when they go bad. And again, this is no different from what already happens. We have highways, and bank robbers can use the highways to get away; that's no reason not to have highways, but of course the police also need to have cars so they can catch the robbers. I think it's going to be a similar thing with AI.
When I do these chats with people in AI, science fiction writers always come up. People always reference them; they always have their favorites and whatnot. Do you have any books, movies, TV shows, or anything like that where you watch it and go, "Yes, that could happen. I see that"?
Unfortunately, a lot of the depictions of AI and robots in movies and TV shows are not very realistic, because the computers and robots are really just humans in disguise. The way you make an interesting story is by making the robots act like humans: they have an evil plan to take over the world, or somebody falls in love with them, and things like that. That's how you make an interesting movie.
But real AIs, as we were talking about, are very different from that. So a lot of the movies that people associate with AI, like Terminator, for example, are really not things that will happen. With the proviso that science fiction is a great source of self-fulfilling prophecies, right? People read those things and then they try to make them happen. So, who knows.
Having said that, what is an example of a movie depicting AI that I think could happen, and is fairly interesting and realistic? Well, one example is the movie Her. The movie Her is basically about a virtual assistant that is very human-like, and ten years ago that would’ve been a very strange movie. These days we already have things like Siri and Cortana and Google Now, that are, of course, still a far cry from Her. But I think we’re going to get closer and closer to that.
And final question: what are you working on, and are you going to write another book? What keeps you busy?
Two things: I think we are pretty close to unifying those five master algorithms, and I’m still working on that. That’s what I’ve been working on for the last ten years. And I think we’re almost there. I think once we’re there, the next thing is that, as we’ve been talking about, that’s not going to be enough. So we need something else. I think we need something beyond the existing five paradigms we have, and I’m working on a new type of learning, that I hope will actually take us beyond what those five could do. Some people have jokingly called it the sixth paradigm, and maybe my next book will be called, “The Sixth Paradigm.” That makes it sound like a Dan Brown novel, but that’s definitely something that I’m working on.
When you say you think the master algorithm is almost ready… will there be a "ta-da" moment, like, "here it is"? Or is it more gradual?
It's a gradual thing, right? Again, look at physics: they've unified three of the forces, electromagnetism and the strong and weak forces. They still haven't unified gravity with them; there are proposals like string theory to do that. These "aha" moments often only happen in retrospect. People propose a theory, and then maybe it gets tested, and then maybe it gets revised, and then finally, when all the pieces are in place, people go, "Oh wow." And I think it's going to be like that with the master algorithm as well. We have candidates, we have ways of putting these pieces together; it still remains to be seen whether they can do all the things that we want, and how well they will scale, right? Scaling is very important, because if it's not scalable then it's not really solving the problem. So, we'll see.
All right, well thank you so much for being on the show.
Thanks for having me, this was great!
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
from Gigaom https://gigaom.com/2017/12/04/voices-in-ai-episode-23-a-conversation-with-pedro-domingos/
0 notes
mbaljeetsingh · 8 years ago
Text
Deploying From Bitbucket to WordPress
Of all the projects I've worked on in the last few years, there's one that stands out as my favorite: I wrote a WordPress plugin called Great Eagle (a Tolkien reference) that allows my team to install and update themes and plugins from our private Bitbucket repos, via the normal wp-admin updates UI.
This plugin has blasted our dev shop through the roof when it comes to development best practices, in ways we never expected or intended. It forces us to use proper version numbers because now we can't deploy without them. It forces us to store our work in Bitbucket because now we can't deploy without it. It forces us to use the command line en route to deploying our work (by which I simply mean, git push origin master), which then led to us using PHPUnit. Now we can't deploy unless our tests pass. We've arrived at the nirvana of test-driven development, all because we started with the unrelated step of deploying from git.
If this all sounds standard and obvious, great. I'd love a chance to learn from you. If this sounds like exotic rigmarole, guess what? This article is for you.
Disclaimer: My work in this plugin is heavily influenced by, and in some cases plagiarized from, the excellent GitHub Updater plugin, by Andy Fragen. The reason I wrote my own is because we have hundreds of themes and plugins in Bitbucket, and I was having some scale issues when I was auditioning GHU, which have since been addressed. I probably bailed too early, as that plugin has been under active and expert development for years. More than anything, we just wanted a version that was totally under our own maintenance. I'll be featuring some gists from my plugin, but ultimately I recommend that users defer to GHU because it's likely a better fit for most people, and also I don't want to take any momentum from that awesome project.
Prerequisites
My examples demonstrate a multisite install, but that's not particularly important. This works fine on a single-site install as well. I'm on WordPress version 4.8-alpha-39626 at the moment, but that's not terribly important either.
Of chief importance is my assumption that all of the themes and plugins in your workplace are each stored in their own Bitbucket repo. This is quite an assumption! No joke: When embarking on this, we hired a company to manually create a repo for each of our themes and plugins. We were using SVN (poorly!) prior to this migration.
How does it work?
There are three(ish) steps:
1) Create a UI for the user to make an API request to Bitbucket and mirror all of our repository data into the WordPress database. Not all the data about each repo, really just the slug name, which we will use as a key for deeper queries.
A form for the user to mirror our Bitbucket account to the database.
An alternative would be to build this automatically whenever it's empty, but for now, I'm happy to have complete control over when such a large series of API requests gets run.
2) Once we have a bit of information mirrored for all of our repos, we can offer a jQuery autocomplete to choose a few repos for data drill-down, where we make several more API calls for each of them, giving us access to deeper information like version number and download url.
Now that we have a local mirror of our Bitbucket repos, we can populate an autocomplete for selecting some of them for installing or updating.
Why not just gather all of those details for all repos right away? Because we have hundreds of repos and it takes several calls per repo to grab all of the pertinent information such as, say, the version number. It would probably take 15-30 minutes and over 1,000 API trips.
3) Once we have detailed information about the handful of repos we want to use at the moment, we can determine two important things about them. First, is it installed in WordPress? If not, it will appear in a UI for us to install it. Second, if it is installed, is it on the latest version? If not, it will appear in the normal wp-admin updates UI.
Some of the plugins in our Bitbucket account are not installed in our WordPress network.
We used Great Eagle's UI to install one of them.
Our plugin is hosted in a private Bitbucket repo, but here it is in our normal update queue.
On the off-chance that a repo is not readable (maybe it lacks proper docblocks or naming conventions), it gets omitted from all of these steps. This has only happened to us with a small handful of poorly named plugins, but it can be annoying since changing the plugin folder and file names can deactivate the plugin.
Huh. How does it work, exactly?
Fair question. I'll explain what the tricky parts were, and share some code from my plugin.
Building the list of repos
The maximum number of repos per API call is 100. That's just how the Bitbucket API works. We have far more than that in our account, so we have to call Bitbucket in a loop:
<?php
/**
 * Store a "shallow" list of repos.
 */
public function set_repo_list() {

    // ...

    // Let's get 100 per page, which is the maximum.
    $max_pagelen = 100;

    // ...

    // Get the first page of repos.
    $page = 1;
    $call = new LXB_GE_Call( 'api', "repositories/$team", $max_pagelen, $page );
    $get  = $call -> get();
    $out  = $get['values'];

    // Now we know how many there are in total.
    $total = $get['size'];

    // How many pages does that make for?
    $num_pages = ceil( $total / $max_pagelen );

    // Query each subsequent page. We already got the first one.
    while( $page < $num_pages ) {
        $page++;
        $next_call  = new LXB_GE_Call( 'api', "repositories/$team", $max_pagelen, $page );
        $next_get   = $next_call -> get();
        $next_repos = $next_get['values'];
        $out = array_merge( $out, $next_repos );
    }

    // Sort the list by most recently updated.
    $out = $this -> sort( $out, 'updated_on' );

    $this -> repo_list = $out;
}
Determining the "main" plugin file
WordPress is very unopinionated when it comes to naming plugins. In most cases, a plugin folder does, in fact, contain exactly one plugin, and that plugin will have a "main" file of sorts, that contains a docblock to convey the plugin name, description, author, and most importantly, the version number. Because that file can be named anything, determining which file is the main plugin file is something of an open question. The approach I've taken is to assume that the plugin will conform to some naming conventions we try to use in our work.
<?php
function set_main_file_name() {

    // Grab the slug name for this Bitbucket repo.
    $slug = $this -> slug;

    // Grab the list of file names in this repo.
    $file_list = $this -> file_list;

    // There's a good chance that there is a file with the same name as the repo.
    if( in_array( "$slug.php", $file_list ) ) {
        $main_file_name = "$slug.php";

    // If not, there's a good chance there's a plugin.php file.
    } elseif( in_array( 'plugin.php', $file_list ) ) {
        $main_file_name = 'plugin.php';

    // If not, it's probably a theme.
    } elseif( in_array( 'style.css', $file_list ) && in_array( 'functions.php', $file_list ) ) {
        $main_file_name = 'style.css';

    // Else, oh well, couldn't find it.
    } else {
        $error = sprintf( esc_html__( 'Could not identify a main file for repo %s.', 'bucketpress' ), $slug );
        $main_file_name = new BP_Error( __CLASS__, __FUNCTION__, __LINE__, func_get_args(), $error );
    }

    $this -> main_file_name = $main_file_name;
}
Determining the version number
Given the main plugin or theme file, we can dig into the docblock in that file in order to determine the version number. Here's how I do it:
<?php
/**
 * Get the value for a docblock line.
 *
 * @param  string $key The key for a docblock line.
 * @return string|boolean The value for a docblock line, or FALSE if not found.
 */
function get_value_from_docblock( $key ) {

    // Grab the contents of the main file.
    $main_file_body = $this -> main_file_body;

    // Break the file into lines.
    $lines = $this -> formatting -> get_lines_from_string( $main_file_body );

    // Let's save ourselves some looping and assume the docblock is < 30 lines.
    $max_lines = 30;

    $found = FALSE;
    $i     = 0;
    foreach( $lines as $line ) {

        $i++;

        // Whoops, we made it this far without finding the key.
        if( $i > $max_lines ) { break; }

        // If the line does not have the key, skip it.
        if( ! stristr( $line, $key . ':' ) ) { continue; }

        // We found the key!
        $found = TRUE;
        break;

    }

    if( ! $found ) { return FALSE; }

    // Break the line into the key/value pair.
    $key_value_pair = explode( ':', $line );

    // Remove the key from the line.
    array_shift( $key_value_pair );

    // Convert the value back into a string.
    $out = implode( ':', $key_value_pair );
    $out = trim( $out );

    return $out;
}
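For context, here's roughly how that might get called when building the update payload. The 'Version' key is the standard WordPress header convention; the $asset variable is just a hypothetical stand-in for one of the plugin's repo objects, not necessarily how Great Eagle names things:

<?php
// Hypothetical usage: read the version number straight out of the
// main file's docblock. 'Version' is the standard WordPress header key.
$new_version = $asset -> get_value_from_docblock( 'Version' );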
While I'm at it, allow me to applaud php's helpful version_compare() function, which can parse most common version syntaxes:
<?php
/**
 * Determine if this asset needs to be updated.
 *
 * @return boolean Returns TRUE if the local version number
 *                 is lower than the remote version number, else FALSE.
 */
function needs_update() {

    $old_version = $this -> old_version;
    $new_version = $this -> new_version;

    $compare = version_compare( $old_version, $new_version );
    if( $compare == -1 ) { return TRUE; }

    return FALSE;
}
Parsing the readme.txt
We actually don't use the readme.txt for anything in our plugins, and therefore my Great Eagle plugin does not do much parsing of it either. However, if you wish to incorporate readme information, I'd recommend this library from Ryan McCue for parsing it.
The deal with private repos
Our repos all happen to be private - that's just the way we do business at the moment. In order to query them, we have to filter in some creds. In this example, I'm doing so via basic auth:
<?php
/**
 * Authenticate all of our calls to Bitbucket, so that we can access private repos.
 *
 * @param  array  $args The current args for http requests.
 * @param  string $url  The url to which the current http request is going.
 * @return array  $args, filtered to include BB basic auth.
 */
public function authenticate_http( $args, $url ) {

    // Find out the url to Bitbucket.
    $call   = new LXB_GE_Call( 'web', FALSE );
    $bb_url = $call -> get_url();

    // If we're not calling a Bitbucket download, don't bother.
    if( ! stristr( $url, $bb_url ) ) { return $args; }
    if( ! stristr( $url, '.zip' ) ) { return $args; }

    // Okay, time to append basic auth to the args.
    $creds = $this -> creds;
    $args['headers']['Authorization'] = "Basic $creds";

    return $args;
}
I'm doing this via filtration, rather than passing args to wp_remote_get(), because I need WordPress to carry these creds when it makes its normal theme and plugin update calls, which now happen to be going to Bitbucket.
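For the curious, wiring that up is probably just one filter registration. http_request_args is the core WordPress filter that exposes exactly the $args and $url parameters the method above expects; the priority and the registration context here are my guesses, not necessarily how Great Eagle does it:

<?php
// A minimal sketch: hook the authenticator into WordPress's HTTP API,
// so the creds ride along on core's own update requests too.
// 'http_request_args' passes ( $args, $url ), matching the method above.
add_filter( 'http_request_args', array( $this, 'authenticate_http' ), 10, 2 );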
It would be better to do OAuth instead of basic auth, but after quite a bit of research, I've concluded that there's not a way to do so. The roadblock is that raw file content is actually not part of the Bitbucket API at this point; it's just hosted on their website like any other static asset, such as this public test theme for example (it's public for demo purposes, but again, if it were private, you could access it via basic auth). I do have this humble feature request to show for my efforts. As a security measure, I recommend using Bitbucket's new application passwords feature to create an account specifically and only for scripted calls like this, where that app password only has read access. So, to be clear, with basic auth there is a universe (maybe this one) in which a packet-sniffing foe is reading our plugin files. I'm okay with that, at least for the moment.
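In case it helps, here's a sketch of how that $creds string might be assembled from an application password. The account name and password are placeholders, and this isn't necessarily how Great Eagle stores them:

<?php
// Basic auth is just "user:password", base64-encoded. The values here
// are hypothetical; in practice, use a read-only app-password account.
$user         = 'read-only-bot';
$app_password = 'xxxxxxxxxxxxxxxxxxxx';
$creds        = base64_encode( "$user:$app_password" );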
Adding our repos to the update queue
If there's one key to getting a foothold in this whole process, it's found in the wp_update_plugins() function. That's a huge function the core uses to loop through all of the installed plugins, determine which ones have an update available, and save the result to a transient. The key is that the transient is then exposed for filtering, which is exactly what my plugin does:
<?php
add_filter( 'pre_set_site_transient_update_plugins', array( $this, 'set_plugin_transient' ) );

/**
 * Inject our updates into core's list of updates.
 *
 * @param  array $transient The existing list of assets that need an update.
 * @return The list of assets that need an update, filtered.
 */
public function set_plugin_transient( $transient ) {

    if( ! is_array( $this -> assets_to_update ) ) { return $transient; }

    foreach( $this -> assets_to_update as $asset ) {

        if( empty( $asset -> transient_key ) ) { continue; }
        if( ! $asset -> transient_content ) { continue; }

        $transient -> response[ $asset -> transient_key ] = $asset -> transient_content;

    }

    return $transient;
}
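If you're wondering what $asset -> transient_content actually holds, it mirrors the shape core builds in wp_update_plugins(). Here's a hedged sketch with placeholder values; the property names follow core's update API, while the slug and URLs are hypothetical:

<?php
// The transient key is the plugin basename; the content is an object
// shaped like core's own update entries. All values here are examples.
$transient_key     = 'my-plugin/my-plugin.php';
$transient_content = (object) array(
    'slug'        => 'my-plugin',
    'plugin'      => 'my-plugin/my-plugin.php',
    'new_version' => '1.2.3',
    'url'         => 'https://bitbucket.org/myteam/my-plugin',
    'package'     => 'https://bitbucket.org/myteam/my-plugin/get/1.2.3.zip',
);

Note that the package URL ends in .zip, which is what the authenticate_http() filter above keys on when deciding to append creds.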
It took me forever to break into this, and it took me months and months to write this plugin. You should probably just use GHU instead. It's pretty damn similar. That said, if you want to tweak some things and you don't like running 3rd party plugins, maybe the above code will help you write your own.
So what's the point, exactly?
The point is not so much how to build your own git deployer plugin, or which existing one you should use. You can figure that stuff out yourself. The really interesting thing is to look at what happened to us when we started deploying from git. Some of the side effects were profoundly surprising and positive.
So long, FTP
FTP stinks for so many reasons.
FTP access is an attack vector.
No easy way to track or revert changes.
No easy way to allow multiple people to work on the same project at the same time.
Human error. It's pretty easy to mis-drag-n-drop, leading to a WSOD or worse.
I never expected this, but it's apparent when updating a plugin across many installs, that this git method is much faster than FTP.
With a git deployment system like the one I'm advocating and explaining in this article, you can go so far as to disable all FTP access to your production environment. Seriously: you won't need it.
Hello proper versioning
I recommend using a git deploy tool that uses docblocks to determine the version number, and uses the version number to determine if the theme or plugin is in need of an update. This forces your team to use proper version numbers, which is a nice first step down the road from crankin' out themes to maturely managing a long-lived codebase.
I'm so stoked about unit testing now
If you're not unit testing, you probably know you should be. With git deployment, it can be both automatic and required.
We use the command line to move our work from our local MAMP to Bitbucket, as in, git push origin master. Each of our plugins carries a Grunt task to execute our phpUnit tests upon git pre-commit, and if the tests fail, so does the commit.
We bind Grunt to our commit using GitHooks and we execute our unit tests via Exec. If the tests fail, so does the deployment.
There's no way to sidestep the tests because there's no way to sidestep git for deploying!
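If you haven't set up phpUnit in a plugin before, the tests themselves can start very small. Here's a minimal sketch; the lxb_ge_get_version() function is a hypothetical example for illustration, not something from the actual plugin:

<?php
use PHPUnit\Framework\TestCase;

class VersionTest extends TestCase {

    // A trivial sanity check: the plugin reports a semver-style version.
    // lxb_ge_get_version() is a hypothetical function for this sketch.
    public function test_version_is_semver() {
        $version = lxb_ge_get_version();
        $this -> assertSame( 1, preg_match( '/^\d+\.\d+\.\d+$/', $version ) );
    }
}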
Rollbacks
There are no rollbacks per se with this method. Rather, you only roll forward. Whatever you want to fix or restore, get it in master, boost the version number, push, and deploy.
Staffing
This kind of maturation can have business-wide ramifications. Picture this: You have non-dev support folks on the front lines, trying to debug a problem for a client. In the past, they would have had to place this request in a dev ticket queue, while the customer waits hours or days for a resolution. Not anymore. Now, your front-line support agent can navigate to network admin and see that on this environment the plugin in question is outdated. They're free to update the plugin right away via the normal wp-admin interface. The ticket is resolved by front-line support with no dev team involvement. Perhaps those front-line folks cost less than developers, or perhaps they carry a deep skill set in account management. Either way, you no longer have to open a dev ticket to deploy updates to your in-house plugins. Pivotal.
Rise of the machines
Before this process, we were very much an ordinary dev shop churning out themes and plugins for clients, cowboy-FTPing, not versioning our work. Why? Because we were lazy. Why? Because we were human. We're no longer lazy because we are no longer human, at least when deploying. We're a command line script and a series of API requests, and no matter how lazy we are, we have to follow proper deployment practices because we nuked the FTP creds for our developers! On top of all that, it's a faster way to deploy, free from any click-n-drag misfires.
Can you get on board with this overnight? Okay, no. It's a long and expensive process, and it might not be for you, but honestly it probably is. I think there are about 1,000 dev shops out there that should give careful consideration to this.
Deploying From Bitbucket to WordPress is a post from CSS-Tricks
via CSS-Tricks http://ift.tt/2jyniyi
clarenceomoore · 7 years ago
Text
Voices in AI – Episode 12: A Conversation with Scott Clark
Today's leading minds talk AI with host Byron Reese
In this episode, Byron and Scott talk about algorithms, transfer learning, human intelligence, and pain and suffering.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Scott Clark. He is the CEO and co-founder of SigOpt. They’re a SaaS startup for tuning complex systems and machine learning models. Before that, Scott worked on the ad targeting team at Yelp, leading the charge on academic research and outreach. He holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell, and a BS in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes 30 under 30 in 2016. Welcome to the show, Scott.
Scott Clark: Thanks for having me.
I’d like to start with the question, because I know no two people ever answer it the same: What is artificial intelligence?
I like to go back to an old quote… I don’t remember the attribution for it, but I think it actually fits the definition pretty well. Artificial intelligence is what machines can’t currently do. It’s the idea that there’s this moving goalpost for what artificial intelligence actually means. Ten years ago, artificial intelligence meant being able to classify images; like, can a machine look at a picture and tell you what’s in the picture?
Now we can do that pretty well. Maybe twenty, thirty years ago, if you told somebody that there would be a browser where you can type in words, and it would automatically correct your spelling and grammar and understand language, he would think that’s artificial intelligence. And I think there’s been a slight shift, somewhat recently, where people are calling deep learning artificial intelligence and things like that.
It’s got a little bit conflated with specific tools. So now people talk about artificial general intelligence as this impossible next thing. But I think a lot of people, in their minds, think of artificial intelligence as whatever it is that’s next that computers haven’t figured out how to do yet, that humans can do. But, as computers continually make progress on those fronts, the goalposts continually change.
I’d say today, people think of it as conversational systems, basic tasks that humans can do in five seconds or less, and then artificial general intelligence is everything after that. And things like spell check, or being able to do anomaly detection, are just taken for granted and that’s just machine learning now.
I’ll accept all of that, but that’s more of a sociological observation about how we think of it, and then actually… I’ll change the question. What is intelligence?
That’s a much more difficult question. Maybe the ability to reason about your environment and draw conclusions from it.
Do you think that what we’re building, our systems, are they artificial in the sense that we just built them, but they can do that? Or are they artificial in the sense that they can’t really do that, but they sure can fake it well?
I think they’re artificial in the sense that they’re not biological systems. They seem to be able to perceive input in the same way that a human can perceive input, and draw conclusions based off of that input. Usually, the reward system in place in an artificial intelligence framework is designed to do a very specific thing, very well.
So is there a cat in this picture or not? As opposed to a human: It’s, “Try to live a fulfilling life.” The objective functions are slightly different, but they are interpreting outside stimuli via some input mechanism, and then trying to apply that towards a specific goal. The goals for artificial intelligence today are extremely short-term, but I think that they are performing them on the same level—or better sometimes—than a human presented with the exact same short-term goal.
The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.
You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?
It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”
And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think convolutional neural network is doing the exact same thing: taking in that outside stimuli. Not through an optical nerve, but through the raw encoding of pixels, and then coming to the exact same conclusion.
You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.
Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?
I’m not convinced that, with the current tooling we have today, it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science and computer science and progress in general work: techniques are built upon each other, and we make advancements.
It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.
That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.
There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.
And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.
To continue on that vein… When in 1955, they convened in Dartmouth and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.
You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we need to do is figure out those for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective that, in a sense, that may be true.  
That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code is just a few megabytes that’s different, which teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”
But do you hold any hope that maybe it is a simple and elegant thing?
Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.
Things like the Lisp programming language, and things like that, came out of that and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”
I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.
While I think there are some of the fundamental building blocks that will continue to be used—like, linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump between that. Even if there’s only a few megabytes difference between us and a starfish or something like that, every piece of DNA is two bits.
If you have millions of differences, four-to-the-several million—like the state space for DNA—even though you can store it in a small amount of megabytes, there are so many different combinatorial combinations that it’s not like we’re just going to stumble upon it by editing something that we currently have.
It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.
You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?
Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.
That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.
Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this is a map: Birds fly with wings, and airplanes, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?
Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.
Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.
So a neural network, if you squint, looks kind of like a biological brain neural network. There’s things like simulated annealing, which is a global optimization strategy that mimics the way that like steel is annealed… where it tries to find some local lattice structure that has low energy, and then you pound the steel with the hammer, and that increases the energy to find a better global optima lattice structure that is harder steel.
But that’s also an extremely popular algorithm in the scientific literature. So it was come to from this auxiliary way, or a genetic algorithm where you’re slowly evolving a population to try to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomenon, whether or not they are required to be from that to be proficient. I would have trouble, off the top of my head, coming up with the biological equivalent for a support vector machine or something like that. So there’s two different ways to attack it, but both can produce really interesting results.
Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?
Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.
It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.
In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and ask them to do classification type tasks. They may be completely unaware of what’s the reward function here. Who is this thing telling me to do things for the first time I’ve never seen before?
What does it mean to even classify things or describe an object? Because you’ve never seen an object before.
And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.
But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks; considering [that when] a child is born, four hours later it isn’t able to do this. A machine algorithm that’s trained from scratch over the course of four hours on a couple of GPUs is able to do this.
You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?
I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.
There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.
And it could be in a very unstructured way, but it’s able to learn some of the actual language structure or sounds from the particular culture in which it was raised in this unstructured way.
Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?
That’s a great question. It’s easier scientifically, with my background in math and physics—it seems like it’s easier to break down modular decomposable systems. Humanity has done a very good job at understanding, at least at a high level, how physical systems work, or things like chemistry.
Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.
It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.
Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…
 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.
I guess the argument is, 300 sounds like a small amount. One thing that’s very difficult for humans to internalize is the exponential function. So if intelligence grew linearly, then yeah. If we could understand one, then 300 might not be that much, whatever it is. But if the state space grows exponentially, or the complexity grows exponentially… if there’s ten different positions for every single one of those neurons, like 10^300, that’s more than the number of atoms in the universe.
Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.
At a very high level we do.
I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.
From a chemical way, we can have an understanding of, “Okay, so we have neurotransmitters that carry a positive charge, that then cause a reaction based off of some threshold of charge, and there’s this catalyst that happens.” I think from a physics and chemical understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.
And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.
Oddly, two of them aren’t connected to anything, just like floating out there…
Do you think human intelligence is emergent?
In what respect?
I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear at first glance to be derivable from them. So the intelligence displayed by an ant colony, or a beehive—the way that some bees can shimmer in unison to scare off predators. No bee is saying, “We need to do this.”  
The anthill behaves intelligently, even though… The queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds intelligence. So that would be something where it takes on these attributes.
Can you really intuitively derive intelligence from neurons?
And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where they literally are not derivable. You cannot look at a bunch of matter and explain how it can become conscious, for instance. It is what the minority of people believe about emergence, that there is some additional property of the universe we do not understand that makes these things happen.
The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.
I think that is an approach, whether or not that’s ‘the’ ultimate approach that works is to be seen. As I was mentioning before, there are ways to take biological or physical systems, and then try to work them back into something that then can be used and applied in a different context. There’s other ways, where you start from the more theoretical or axiomatic way, and try to move forward into something that then can be applied to a specific problem.
I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.
I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.
Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?
Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?
So, I would like to hear your definition of consciousness.
I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”
And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.
Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:
He offers somebody named Mary, who knows everything about color, like, at a god-like level—knows every single thing about color. But the catch is, you might guess, she’s never seen it. She’s lived in a room, black-and-white, never seen it [color]. And one day, she opens the door, she looks outside and she sees red.  
The question becomes: Does she learn anything? Did she learn something new?  
In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…
I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?
Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.
If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.
Maybe that’s where you’re getting at with consciousness here, is: Not only the actual task that’s being performed, which I think computers are very good at—and we have self-driving cars out on the street in the Bay Area every day—but that awareness of the fact that you are performing this task, is kind of meta-level of: “I’m assembling together all of these different subcomponents.”
Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.
I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.
Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.
But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?
It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—
—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?
It’s a fundamentally negative response to what you’re trying to achieve. So pain and suffering is the opposite of happiness. And your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things. Like, you get pain when you touch flame, so don’t touch flame.
And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.
You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.
But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.
It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform based off of outside stimuli poorly, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.
But from a pure chemical level, down to the algorithmic component of it, they’re not as fundamentally different… that because it’s a human, there’s something magic about it being a human. A dog can also experience pain.
These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you derive that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.
But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.
So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.
Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  
What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…
There’s groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you get Stephen Hawking. Bill Gates has thrown in his hat with that, Wozniak has. Nick Bostrom wrote a book that addressed existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You get Oren Etzioni over at the Allen Institute, just working on some very basic problem. You get Andrew Ng with his “overpopulation on Mars. This is not helpful to even have this conversation.”
What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?
I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.
If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.
But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.
As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.
Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.
There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  
Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?
I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.
The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.
There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there’s broadly three camps, everybody intuitively knows this.
 There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like the Great Depression never goes away.
And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.
And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.
So, which of those, or a fourth one, do you identify with?
This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.
I think, very rapidly—especially at the exponential pace that technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone and try to figure out how it works, repair it, that was something that a kid could do at a camp kind of thing, like an entry circuits camp. That’s impossible to do with an iPhone.
I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.
Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.
Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.
And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.
Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?
The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.
At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s [ELIZA was originally built by Joseph Weizenbaum at MIT in the mid-1960s]. It followed a specific school of psychological behavioral therapy, replied in specific ways, and people found it incredibly helpful.
Even if they knew that it was just a machine responding to them, it was a way for them to get out their emotions and work through specific problems. As these machines get more sophisticated and able, as long as it’s providing utility to the end user, does it matter who’s behind the screen?
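To make the pattern-and-response idea concrete, here is a minimal sketch of how an ELIZA-style responder works. The rules below are invented for illustration; they are not the original ELIZA script, which was far larger.

```python
import re

# A tiny ELIZA-style responder: match a surface pattern in the user's
# utterance and reflect it back as a therapist-style prompt.
# These rules are invented for this sketch.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # default reflection when no rule matches

print(reply("I feel anxious about work"))  # -> Why do you feel anxious about work?
```

Nothing in that loop understands anything; it only maps strings to strings, which is exactly what made its perceived helpfulness so striking.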
That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand” that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.
But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.
So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?
I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.
As you mentioned before, people don’t necessarily associate the feelings or pain or anything like that with the machine, but as these get more and more life-like, and as they are designed with the reward function of becoming more and more human-like, I think that distinction is going to become quite a bit harder for us to understand.
And it not only affects the machine—which you can make the argument doesn’t have a voice—but it’ll start to affect the people as well.
One more question along these lines. You were a Forbes 30 Under 30. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in it longer than you, and people of your age and training? Do you look at it, as a whole, differently than another generation might have?
I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.
I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.
Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.
These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.
These are often nuisance parameters that get brushed under the rug. They’re typically handled via relatively simplistic methods, like brute-forcing or trying random configurations. What we do is take the state-of-the-art research from academia in Bayesian and global optimization, and we ensemble all of these algorithms behind a simple API.
So when you are downloading MXNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial-and-error. We can guide you to the best solution quite a bit faster.
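As an illustration of the loop such a service replaces, here is a minimal sketch of black-box hyperparameter search. The `Optimizer` class and its `suggest`/`observe` methods are hypothetical names for this sketch, not SigOpt’s actual API, and the random sampling stands in for the Bayesian methods described above.

```python
import random

# Hypothetical suggest/observe loop in the spirit of an optimization service.
# A real service would propose configurations via Bayesian optimization
# rather than the uniform random sampling used here.
class Optimizer:
    def __init__(self, bounds):
        self.bounds = bounds                  # {"param": (low, high)}
        self.best = (None, float("-inf"))

    def suggest(self):
        return {k: random.uniform(lo, hi) for k, (lo, hi) in self.bounds.items()}

    def observe(self, params, value):
        if value > self.best[1]:
            self.best = (params, value)

def train_and_evaluate(params):
    # Stand-in for training a model and returning validation accuracy.
    return -(params["learning_rate"] - 0.01) ** 2

opt = Optimizer({"learning_rate": (1e-4, 1e-1)})
for _ in range(50):
    params = opt.suggest()
    opt.observe(params, train_and_evaluate(params))
print(opt.best)
```

The point of the API shape is that the caller never exposes their model; the service only sees configurations going out and scores coming back.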
Do you have any success stories that you like to talk about?
Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.
So we’re helping accelerate the flywheel of science by allowing people to spend less time doing trial-and-error. I wasted way too much of my PhD on this, to be completely honest—fine-tuning different configuration settings in bioinformatics algorithms.
So our goal is… If we can have humans do what they’re really good at, which is creativity—understanding the context and domain of a problem—and then we can make the trial-and-error component as small as possible, hopefully everything happens a little bit faster, a little bit better, and more efficiently.
What are the big challenges you’re facing?
Where this system makes the biggest difference is in large, complex systems, where it’s very difficult to manually tune or brute-force the problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their heads. But a surprising number of people still take that approach, because they’re unable to access some of this incredible research that’s been going on in academia for the last several decades.
Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity that’s being imbued in a lot of these systems, is extremely good for us, and we’re able to ride that wave and help these people realize the potential of these systems quite a bit faster than they would otherwise.
Having the market come to us is something that we’re really excited about, but it’s not instant.
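The 20-dimensional point above is easy to make concrete with back-of-the-envelope arithmetic; the one-second-per-evaluation figure below is an assumption for illustration.

```python
# Grid search with just 10 candidate values per parameter, in 20 dimensions:
values_per_dim, dims = 10, 20
evaluations = values_per_dim ** dims            # 10^20 training runs
seconds_per_eval = 1.0                          # assumed cost of one run
years = evaluations * seconds_per_eval / (3600 * 24 * 365)
print(f"{evaluations:.1e} runs, ~{years:.1e} years")  # 1.0e+20 runs, ~3.2e+12 years
```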
Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?
There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits, which is slightly different than that, is when people come to us, they have something in place. They already have data scientists or machine learning engineers.
They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.
And so SigOpt bolts on top of these pre-existing systems, and gives them that boost by fine-tuning all of these different configuration parameters to get to their maximal performance. So, sometimes we do meet people like that, and we pass them on to some of our great partners. When someone has a problem and they just want to get the most out of it, that’s where we can come in and provide this black box optimization on top of it.
Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?
They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level posts about optimization and some of the different advancements in deep learning and reinforcement learning. We publish papers as well, but blog.sigopt.com and @SigOpt on Twitter are the best ways to follow along.
Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.
Excellent. Thank you for having me. I’m really honored to be on the show.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
0 notes
babbleuk · 7 years ago
Text
Voices in AI – Episode 12: A Conversation with Scott Clark
Today's leading minds talk AI with host Byron Reese
In this episode, Byron and Scott talk about algorithms, transfer learning, human intelligence, and pain and suffering.
Voices in AI: Visit VoicesInAI.com to access the podcast, or subscribe via iTunes, Play, Stitcher, or RSS.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Scott Clark. He is the CEO and co-founder of SigOpt. They’re a SaaS startup for tuning complex systems and machine learning models. Before that, Scott worked on the ad targeting team at Yelp, leading the charge on academic research and outreach. He holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell, and a BS in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes 30 under 30 in 2016. Welcome to the show, Scott.
Scott Clark: Thanks for having me.
I’d like to start with the question, because I know no two people ever answer it the same: What is artificial intelligence?
I like to go back to an old quote… I don’t remember the attribution for it, but I think it actually fits the definition pretty well. Artificial intelligence is what machines can’t currently do. It’s the idea that there’s this moving goalpost for what artificial intelligence actually means. Ten years ago, artificial intelligence meant being able to classify images; like, can a machine look at a picture and tell you what’s in the picture?
Now we can do that pretty well. Maybe twenty or thirty years ago, if you told somebody that there would be a browser where you can type in words, and it would automatically correct your spelling and grammar and understand language, they would think that’s artificial intelligence. And I think there’s been a slight shift, somewhat recently, where people are calling deep learning artificial intelligence and things like that.
It’s gotten a little bit conflated with specific tools. So now people talk about artificial general intelligence as this impossible next thing. But I think a lot of people, in their minds, think of artificial intelligence as whatever comes next, whatever computers haven’t figured out how to do yet that humans can do. But as computers continually make progress on those fronts, the goalposts continually change.
I’d say today, people think of it as conversational systems, basic tasks that humans can do in five seconds or less, and then artificial general intelligence is everything after that. And things like spell check, or being able to do anomaly detection, are just taken for granted and that’s just machine learning now.
I’ll accept all of that, but that’s more of a sociological observation about how we think of it, and then actually… I’ll change the question. What is intelligence?
That’s a much more difficult question. Maybe the ability to reason about your environment and draw conclusions from it.
Do you think that what we’re building, our systems, are they artificial in the sense that we just built them, but they can do that? Or are they artificial in the sense that they can’t really do that, but they sure can think it well?
I think they’re artificial in the sense that they’re not biological systems. They seem to be able to perceive input in the same way that a human can perceive input, and draw conclusions based off of that input. Usually, the reward system in place in an artificial intelligence framework is designed to do a very specific thing, very well.
So is there a cat in this picture or not? As opposed to a human: It’s, “Try to live a fulfilling life.” The objective functions are slightly different, but they are interpreting outside stimuli via some input mechanism, and then trying to apply that towards a specific goal. The goals for artificial intelligence today are extremely short-term, but I think that they are performing them on the same level—or better sometimes—than a human presented with the exact same short-term goal.
The artificial component comes into the fact that they were constructed, non-biologically. But other than that, I think they meet the definition of observing stimuli, reasoning about an environment, and achieving some outcome.
You used the phrase ‘they draw conclusions’. Are you using that colloquially, or does the machine actually conclude? Or does it merely calculate?
It calculates, but then it comes to, I guess, a decision at the end of the day. If it’s a classification system, for example… going back to “Is there a cat in this picture?” It draws the conclusion that “Yes, there was a cat. No, that wasn’t a cat.” It can do that with various levels of certainty in the same way that, potentially, a human would solve the exact same problem. If I showed you a blurry Polaroid picture you might be able to say, “I’m pretty sure there’s a cat in there, but I’m not 100 percent certain.”
And if I show you a very crisp picture of a kitten, you could be like, “Yes, there’s a cat there.” And I think a convolutional neural network is doing the exact same thing: taking in that outside stimulus, not through an optical nerve but through the raw encoding of pixels, and then coming to the exact same conclusion.
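That graded certainty typically falls out of a classifier’s final layer, a softmax over class scores. A minimal sketch, with logits invented to stand in for the crisp and blurry photos:

```python
import math

def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

# Invented logits for illustration: [score for "cat", score for "no cat"].
crisp = softmax([8.0, 0.5])    # ~[0.999, 0.001] -> "yes, there's a cat"
blurry = softmax([1.2, 0.6])   # ~[0.65, 0.35]  -> "pretty sure, not certain"
print(crisp, blurry)
```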
You make the really useful distinction between an AGI, which is a general intelligence—something as versatile as a human—and then the kinds of stuff we’re building now, which we call AI—which is doing this reasoning or drawing conclusions.
Is an AGI a linear development from what we have now? In other words, do we have all the pieces, and we just need faster computers, better algorithms, more data, a few nips and tucks, and we’re eventually going to get an AGI? Or is an AGI something very different, that is a whole different ball of wax?
I’m not convinced, with the current tooling we have today, that it’s just like… if we add one more hidden layer to a neural network, all of a sudden it’ll be AGI. That being said, I think this is how science, and computer science, and progress in general work: techniques are built upon each other, and we make advancements.
It might be a completely new type of algorithm. It might not be a neural network. It might be reinforcement learning. It might not be reinforcement learning. It might be the next thing. It might not be on a CPU or a GPU. Maybe it’s on a quantum computer. If you think of scientific and technological process as this linear evolution of different techniques and ideas, then I definitely think we are marching towards that as an eventual outcome.
That being said, I don’t think that there’s some magic combinatorial setting of what we have today that will turn into this. I don’t think it’s one more hidden layer. I don’t think it’s a GPU that can do one more teraflop—or something like that—that’s going to push us over the edge. I think it’s going to be things built from the foundation that we have today, but it will continue to be new and novel techniques.
There was an interesting talk at the International Conference on Machine Learning in Sydney last week about AlphaGo, and how they got this massive speed-up when they put in deep learning. They were able to break through this plateau that they had found in terms of playing ability, where they could play at the amateur level.
And then once they started applying deep learning networks, that got them to the professional, and now best-in-the-world level. I think we’re going to continue to see plateaus for some of these current techniques, but then we’ll come up with some new strategy that will blast us through and get to the next plateau. But I think that’s an ever-stratifying process.
To continue on that vein… When they convened at Dartmouth in 1956 and said, “We can solve a big part of AI in the summer, with five people,” the assumption was that general intelligence, like all the other sciences, had a few simple laws.
You had Newton, Maxwell; you had electricity and magnetism, and all these things, and they were just a few simple laws. The idea was that all we needed to do was figure out those laws for intelligence. And Pedro Domingos argues in The Master Algorithm, from a biological perspective, that in a sense that may be true.
That if you look at the DNA difference between us and an animal that isn’t generally intelligent… the amount of code that’s different is just a few megabytes, and that code teaches how to make my brain and your brain. It sounded like you were saying, “No, there’s not going to be some silver bullet, it’s going to be a bunch of silver buckshot and we’ll eventually get there.”
But do you hold any hope that maybe it is a simple and elegant thing?
Going back to my original statement about what is AI, I think when Marvin Minsky and everybody sat down in Dartmouth, the goalposts for AI were somewhat different. Because they were attacking it for the first time, some of the things were definitely overambitious. But certain things that they set out to do that summer, they actually accomplished reasonably well.
Things like the Lisp programming language came out of that era and were extremely successful. But then, once these goals are accomplished, the next thing comes up. Obviously, in hindsight, it was overambitious to think that they could maybe match a human, but I think if you were to go back to Dartmouth and show them what we have today, and say: “Look, this computer can describe the scene in this picture completely accurately.”
I think that could be indistinguishable from the artificial intelligence that they were seeking, even if today what we want is someone we can have a conversation with. And then once we can have a conversation, the next thing is we want them to be able to plan our lives for us, or whatever it may be, solve world peace.
While I think there are some of the fundamental building blocks that will continue to be used—like, linear algebra and calculus, and things like that, will definitely be a core component of the algorithms that make up whatever does become AGI—I think there is a pretty big jump beyond that. Even if there’s only a few megabytes difference between us and a starfish or something like that, every base of DNA is two bits.
If you have millions of differences, that’s four-to-the-several-million—the state space for DNA—and even though you can store it in a few megabytes, there are so many different combinatorial configurations that it’s not like we’re just going to stumble upon it by editing something that we currently have.
It could be something very different in that configuration space. And I think those are the algorithmic advancements that will continue to push us to the next plateau, and the next plateau, until eventually we meet and/or surpass the human plateau.
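The combinatorial point can be checked with quick arithmetic; the three-megabyte figure below is an assumed size for the difference, used only for illustration.

```python
import math

# Two bits per DNA base means four choices per position. A "few megabytes"
# of difference is a tiny file but an astronomical configuration space.
megabytes = 3                              # assumed size of the difference
bases = megabytes * 1024 * 1024 * 8 // 2   # ~1.3e7 bases at 2 bits each
digits = bases * math.log10(4)             # 4^bases has this many decimal digits
print(f"state space ~ 10^{digits:,.0f}")   # ~10^7,575,668; atoms in universe ~10^80
```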
You invoked quantum computers in passing, but putting that aside for a moment… Would you believe, just at a gut level—because nobody knows—that we have enough computing power to build an AGI, we just don’t know how?
Well, in the sense that if the human brain is general intelligence, the computing power in the human brain, while impressive… All of the computers in the world are probably better at performing some simple calculations than the biological gray matter mess that exists in all of our skulls. I think the raw amount of transistors and things like that might be there, if we had the right way to apply them, if they were all applied in the same direction.
That being said… Whether or not that’s enough to make it ubiquitous, or whether or not having all the computers in the world mimic a single human child will be considered artificial general intelligence, or if we’re going to need to apply it to many different situations before we claim victory, I think that’s up for semantic debate.
Do you think about how the brain works, even if [the context] is not biological? Is that how you start a problem: “Well, how do humans do this?” Does that even guide you? Does that even begin the conversation? And I know none of this maps directly: birds fly with wings, airplanes don’t, all of that. Is there anything to learn from human intelligence that you, in a practical, day-to-day sense, use?
Yeah, definitely. I think it often helps to try to approach a problem from fundamentally different ways. One way to approach that problem is from the purely mathematical, axiomatic way; where we’re trying to build up from first principles, and trying to get to something that has a nice proof or something associated with it.
Another way to try to attack the problem is from a more biological setting. If I had to solve this problem, and I couldn’t assume any of those axioms, then how would I begin to try to build heuristics around it? Sometimes you can go from that back to the proof, but there are many different ways to attack that problem. Obviously, there are a lot of things in computer science, and optimization in general, that are motivated by physical phenomena.
So a neural network, if you squint, looks kind of like a biological neural network. There’s things like simulated annealing, which is a global optimization strategy that mimics the way that steel is annealed… where it tries to find some local lattice structure that has low energy, and then you pound the steel with the hammer, and that increases the energy, letting it find a better global-optimum lattice structure that makes for harder steel.
But that’s also an extremely popular algorithm in the scientific literature. So it was arrived at from this auxiliary direction, like a genetic algorithm where you’re slowly evolving a population to try to get to a good result. I think there is definitely room for a lot of these algorithms to be inspired by biological or physical phenomena, whether or not they are required to be from that to be proficient. I would have trouble, off the top of my head, coming up with the biological equivalent for a support vector machine or something like that. So there are two different ways to attack it, but both can produce really interesting results.
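For the curious, here is a minimal sketch of the annealing idea in code, with an invented one-dimensional landscape standing in for the lattice energy.

```python
import math
import random

def simulated_annealing(f, x, temp=1.0, cooling=0.995, steps=5000):
    """Minimize f by occasionally accepting worse moves ("pounding the steel")
    while the temperature is high, then settling as it cools."""
    best = x
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.1)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if f(x) < f(best):
                best = x
        temp *= cooling
    return best

# A bumpy one-dimensional landscape with many local minima.
bumpy = lambda x: x * x + 2 * math.sin(5 * x)
print(simulated_annealing(bumpy, x=3.0))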
Let’s take a normal thing that a human does, which is: You show a human training data of the Maltese Falcon, the little statue from the movie, and then you show him a bunch of photos. And a human can instantly say, “There’s the falcon under water, and there it’s half-hidden by a tree, and there it’s upside down…” A human does that naturally. So it’s some kind of transferred learning. How do we do that?
Transfer learning is the way that that happens. You’ve seen trees before. You’ve seen water. You’ve seen how objects look inside and outside of water before. And then you’re able to apply that knowledge to this new context.
It might be difficult for a human who grew up in a sensory deprivation chamber to look at this object… and then you start to show them things that they’ve never seen before: “Here’s this object and a tree,” and they might not ‘see the forest for the trees’ as it were.
In addition to that, without any context whatsoever, you take someone who was raised in a sensory deprivation chamber, and you start showing them pictures and asking them to do classification-type tasks. They may be completely unaware of what the reward function even is: who is this thing, telling me to do things I’ve never seen before?
What does it mean to even classify things or describe an object? Because you’ve never seen an object before.
And when you start training these systems from scratch, with no previous knowledge, that’s how they work. They need to slowly learn what’s good, what’s bad. There’s a reward function associated with that.
But with no context, with no previous information, it’s actually very surprising how well they are able to perform these tasks; considering [that when] a child is born, four hours later it isn’t able to do this. A machine algorithm that’s trained from scratch over the course of four hours on a couple of GPUs is able to do this.
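A common concrete form of the transfer learning described above, assuming PyTorch and torchvision are available, is to freeze a pretrained backbone and retrain only a small head. A sketch follows; the falcon/not-falcon head echoes the running example and is not a real dataset.

```python
import torch.nn as nn
from torchvision import models

# Reuse a network pretrained on ImageNet (it has already "seen trees and
# water") and retrain only a small head for the new task.
backbone = models.resnet18(weights="IMAGENET1K_V1")
for param in backbone.parameters():
    param.requires_grad = False              # freeze the general visual features

backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new head: falcon / not falcon
# Only backbone.fc's parameters are then trained on the handful of new examples.
```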
You mentioned the sensory deprivation chamber a couple of times. Do you have a sense that we’re going to need to embody these AIs to allow them to—and I use the word very loosely—‘experience’ the world? Are they locked in a sensory deprivation chamber right now, and that’s limiting them?
I think with transfer learning, and pre-training of data, and some reinforcement algorithm work, there’s definitely this idea of trying to make that better, and bootstrapping based off of previous knowledge in the same way that a human would attack this problem. I think it is a limitation. It would be very difficult to go from zero to artificial general intelligence without providing more of this context.
There’s been many papers recently, and OpenAI had this great blog post recently where, if you teach the machine language first, if you show it a bunch of contextual information—this idea of this unsupervised learning component of it, where it’s just absorbing information about the potential inputs it can get—that allows it to perform much better on a specific task, in the same way that a baby absorbs language for a long time before it actually starts to produce it itself.
And it could be in a very unstructured way, but it’s able to learn some of the actual language structure or sounds from the particular culture in which it was raised in this unstructured way.
Let’s talk a minute about human intelligence. Why do you think we understand so poorly how the brain works?
That’s a great question. With my background in math and physics, it seems like it’s easier, scientifically, to break down modular, decomposable systems. Humanity has done a very good job at understanding, at least at a high level, how physical systems work, or things like chemistry.
Biology starts to get a little bit messier, because it’s less modular and less decomposable. And as you start to build larger and larger biological systems, it becomes a lot harder to understand all the different moving pieces. Then you go to the brain, and then you start to look at psychology and sociology, and all of the lines get much fuzzier.
It’s very difficult to build an axiomatic rule system. And humans aren’t even able to do that in some sort of grand unified way with physics, or understand quantum mechanics, or things like that; let alone being able to do it for these sometimes infinitely more complex systems.
Right. But the most successful animal on the planet is a nematode worm. Ten percent of all animals are nematode worms. They’re successful, they find food, and they reproduce and they move. Their brains have 302 neurons. We’ve spent twenty years trying to model that, a bunch of very smart people in the OpenWorm project…
 But twenty years trying to model 300 neurons to just reproduce this worm, make a digital version of it, and even to this day people in the project say it may not be possible.
I guess the argument is, 300 sounds like a small amount. One thing that’s very difficult for humans to internalize is the exponential function. If intelligence grew linearly, then sure: if we could understand one neuron, then 300 might not be that much. But if the state space, or the complexity, grows exponentially… if there are ten different positions for every single one of those neurons, that’s 10^300, which is more than the number of atoms in the universe.
Right. But we aren’t starting by just rolling 300 dice and hoping for them all to be—we know how those neurons are arranged.
At a very high level we do.
I’m getting to a point, that we maybe don’t even understand how a neuron works. A neuron may be doing stuff down at the quantum level. It may be this gigantic supercomputer we don’t even have a hope of understanding, a single neuron.
From a chemical standpoint, we can have an understanding of, “Okay, so we have neurotransmitters that carry a positive charge, that then cause a reaction based off of some threshold of charge, and there’s this catalyst that happens.” I think from a physics and chemistry understanding, we can understand the base components of it, but as you start to build these complex systems that have this combinatorial set of states, it does become much more difficult.
And I think that’s that abstraction, where we can understand how simple chemical reactions work. But then it becomes much more difficult once you start adding more and more. Or even in physics… like if you have two bodies, and you’re trying to calculate the gravity, that’s relatively easy. Three? Harder. Four? Maybe impossible. It becomes much harder to solve these higher-order, higher-body problems. And even with 302 neurons, that starts to get pretty complex.
Oddly, two of them aren’t connected to anything, just like floating out there…
Do you think human intelligence is emergent?
In what respect?
I will clarify that. There are two sorts of emergence: one is weak, and one is strong. Weak emergence is where a system takes on characteristics which don’t appear, at first glance, to be derivable from its parts. So the intelligence displayed by an ant colony, or a beehive—the way that some bees can shimmer in unison to scare off predators. No bee is saying, “We need to do this.”
The anthill behaves intelligently, even though… The queen isn’t, like, in charge; the queen is just another ant, but somehow it all adds intelligence. So that would be something where it takes on these attributes.
Can you really intuitively derive intelligence from neurons?
And then, to push that a step further, there are some who believe in something called ‘strong emergence’, where the characteristics literally are not derivable. You cannot look at a bunch of matter and explain how it can become conscious, for instance. A minority of people believe this about emergence: that there is some additional property of the universe, which we do not understand, that makes these things happen.
The question I’m asking you is: Is reductionism the way to go to figure out intelligence? Is that how we’re going to kind of make advances towards an AGI? Just break it down into enough small pieces.
I think that is an approach; whether or not it’s ‘the’ ultimate approach that works remains to be seen. As I was mentioning before, there are ways to take biological or physical systems and then try to work them back into something that can be used and applied in a different context. There are other ways, where you start from the more theoretical or axiomatic side, and try to move forward into something that can then be applied to a specific problem.
I think there’s wide swaths of the universe that we don’t understand at many levels. Mathematics isn’t solved. Physics isn’t solved. Chemistry isn’t solved. All of these build on each other to get to these large, complex, biological systems. It may be a very long time, or we might need an AGI to help us solve some of these systems.
I don’t think it’s required to understand everything to be able to observe intelligence—like, proof by example. I can’t tell you why my brain thinks, but my brain is thinking, if you can assume that humans are thinking. So you don’t necessarily need to understand all of it to put it all together.
Let me ask you one more far-out question, and then we’ll go to a little more immediate future. Do you have an opinion on how consciousness comes about? And if you do or don’t, do you believe we’re going to build conscious machines?
Even to throw a little more into that one, do you think consciousness—that ability to change focus and all of that—is a requisite for general intelligence?
So, I would like to hear your definition of consciousness.
I would define it by example, to say that it’s subjective experience. It’s how you experience things. We’ve all had that experience when you’re driving, that you kind of space out, and then, all of a sudden, you kind of snap to. “Whoa! I don’t even remember getting here.”
And so that time when you were driving, your brain was elsewhere, you were clearly intelligent, because you were merging in and out of traffic. But in the sense I’m using the word, you were not ‘conscious’, you were not experiencing the world. If your foot caught on fire, you would feel it; but you weren’t experiencing the world. And then instantly, it all came on and you were an entity that experienced something.
Or, put another way… this is often illustrated with the problem of Mary by Frank Jackson:
He posits somebody named Mary, who knows everything about color, at a god-like level—knows every single thing about color. But the catch is, you might guess, she’s never seen it. She’s lived in a black-and-white room and never seen it [color]. And one day, she opens the door, she looks outside and she sees red.
The question becomes: Does she learn anything? Did she learn something new?  
In other words, is experiencing something different than knowing something? Those two things taken together, defining consciousness, is having an experience of the world…
I’ll give one final one. You can hook a sensor up to a computer, and you can program the computer to play an mp3 of somebody screaming if the sensor hits 500 degrees. But nobody would say, at this day and age, the computer feels the pain. Could a computer feel anything?
Okay. I think there’s a lot to unpack there. I think computers can perceive the environment. Your webcam is able to record the environment in the same way that your optical nerves are able to record the environment. When you’re driving a car, and daydreaming, and kind of going on autopilot, as it were, there still are processes running in the background.
If you were to close your eyes, you would be much worse at doing lane merging and things like that. And that’s because you’re still getting the sensory input, even if you’re not actively, consciously aware of the fact that you’re observing that input.
Maybe that’s where you’re getting at with consciousness here, is: Not only the actual task that’s being performed, which I think computers are very good at—and we have self-driving cars out on the street in the Bay Area every day—but that awareness of the fact that you are performing this task, is kind of meta-level of: “I’m assembling together all of these different subcomponents.”
Whether that’s driving a car, thinking about the meeting that I’m running late to, some fight that I had with my significant other the night before, or whatever it is. There’s all these individual processes running, and there could be this kind of global awareness of all of these different tasks.
I think today, where artificial intelligence sits is, performing each one of these individual tasks extremely well, toward some kind of objective function of, “I need to not crash this car. I need to figure out how to resolve this conflict,” or whatever it may be; or, “Play this game in an artificial intelligence setting.” But we don’t yet have that kind of governing overall strategy that’s aware of making these tradeoffs, and then making those tradeoffs in an intelligent way. But that overall strategy itself is just going to be going toward some specific reward function.
Probably when you’re out driving your car, and you’re spacing out, your overall reward function is, “I want to be happy and healthy. I want to live a meaningful life,” or something like that. It can be something nebulous, but you’re also just this collection of subroutines that are driving towards this specific end result.
But the direct question of what would it mean for a computer to feel pain? Will a computer feel pain? Now they can sense things, but nobody argues they have a self that experiences the pain. It matters, doesn’t it?
It depends on what you mean by pain. If you mean there’s a response of your nervous system to some outside stimuli that you perceive as pain, a negative response, and—
—It involves emotional distress. People know what pain is. It hurts. Can a computer ever hurt?
It’s a fundamentally negative response to what you’re trying to achieve. So pain and suffering is the opposite of happiness. And your objective function as a human is happiness, let’s say. So, by failing to achieve that objective, you feel something like pain. Evolutionarily, we might have evolved this in order to avoid specific things. Like, you get pain when you touch flame, so don’t touch flame.
And the reason behind that is biological systems degrade in high-temperature environments, and you’re not going to be able to reproduce or something like that.
You could argue that when a classification system fails to classify something, and it gets penalized in its reward function, that’s the equivalent of it finding something where, in its state of the world, it has failed to achieve its goal, and it’s getting the opposite of what its purpose is. And that’s similar to pain and suffering in some way.
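Rendered as code, the analogy is nothing more than a sign flip in an objective; the labels and numbers here are purely illustrative.

```python
# A toy rendering of "pain" as a penalty in an objective: the learner gets a
# negative signal whenever it fails at what it is trying to achieve.
def reward(predicted, actual):
    return 1.0 if predicted == actual else -1.0   # failure is penalized

episode = [("cat", "cat"), ("cat", "dog"), ("dog", "dog")]
total = sum(reward(p, a) for p, a in episode)
print(total)  # 1.0 -- one misclassification dragged the score down
```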
But is it? Let’s be candid. You can’t take a person and torture them, because that’s a terrible thing to do… because they experience pain. [Whereas if] you write a program that has an infinite loop that causes your computer to crash, nobody’s going to suggest you should go to jail for that. Because people know that those are two very different things.
It is a negative neurological response based off of outside stimuli. A computer can have a negative response, and perform poorly based off of outside stimuli, relative to what it’s trying to achieve… Although I would definitely agree with you that that’s not a computer experiencing pain.
But from the pure chemical level down to the algorithmic component, they’re not as fundamentally different as we assume… It’s not that because it’s a human, there’s something magic about it being a human. A dog can also experience pain.
These worms—I’m not as familiar with the literature on that, but [they] could potentially experience pain. And as you trace that further and further back, you might have to bend your definition of pain. Maybe they’re not feeling something in a central nervous system, like a human or a dog would, but they’re perceiving something that’s negative to what they’re trying to achieve with this utility function.
But we do draw a line. And I don’t know that I would use the word ‘magic’ the way you’re doing it. We draw this line by saying that dogs feel pain, so we outlaw animal cruelty. Bacteria don’t, so we don’t outlaw antibiotics. There is a material difference between those two things.
So if the difference is a central nervous system, and pain is being defined as a nervous response to some outside stimuli… then unless we explicitly design machines to have central nervous systems, then I don’t think they will ever experience pain.
Thanks for indulging me in all of that, because I think it matters… Because up until thirty years ago, veterinarians typically didn’t use anesthetic. They were told that animals couldn’t feel pain. Babies were operated on in the ‘90s—open heart surgery—under the theory they couldn’t feel pain.  
What really intrigues me is the idea of how would we know if a machine did? That’s what I’m trying to deconstruct. But enough of that. We’ll talk about jobs here in a minute, and those concerns…
There are groups of people that are legitimately afraid of AI. You know all the names. You’ve got Elon Musk, you’ve got Stephen Hawking. Bill Gates has thrown in his hat with that group, Wozniak has. Nick Bostrom wrote a book that addressed existential threat and all of that. Then you have Mark Zuckerberg, who says no, no, no. You get Oren Etzioni over at the Allen Institute, just working on some very basic problems. You get Andrew Ng with his “overpopulation on Mars” line, saying it’s not even helpful to have this conversation.
What is different about those two groups in your mind? What is the difference in how they view the world that gives them these incredibly different viewpoints?
I think it goes down to a definition problem. As you mentioned at the beginning of this podcast, when you ask people, “What is artificial intelligence?” everybody gives you a different answer. I think each one of these experts would also give you a different answer.
If you define artificial intelligence as matrix multiplication and gradient descent in a deep learning system, trying to achieve a very specific classification output given some pixel input—or something like that—it’s very difficult to conceive that as some sort of existential threat for humanity.
But if you define artificial intelligence as this general intelligence, this kind of emergent singularity where the machines don’t hit the plateau, that they continue to advance well beyond humans… maybe to the point where they don’t need humans, or we become the ants in that system… that becomes very rapidly a very existential threat.
As I said before, I don’t think there’s an incremental improvement from algorithms—as they exist in the academic literature today—to that singularity, but I think it can be a slippery slope. And I think that’s what a lot of these experts are talking about… Where if it does become this dynamic system that feeds on itself, by the time we realize it’s happening, it’ll be too late.
Whether or not that’s because of the algorithms that we have today, or algorithms down the line, it does make sense to start having conversations about that, just because of the time scales over which governments and policies tend to work. But I don’t think someone is going to design a TensorFlow or MXNet algorithm tomorrow that’s going to take over the world.
There’s legislation in Europe to basically say, if an AI makes a decision about whether you should get an auto loan or something, you deserve to know why it turned you down. Is that a legitimate request, or is it like you go to somebody at Google and say, “Why is this site ranked number one and this site ranked number two?” There’s no way to know at this point.  
Or is that something that, with the auto loan thing, you’re like, “Nope, here are the big bullet points of what went into it.” And if that becomes the norm, does that slow down AI in any way?
I think it’s important to make sure, just from a societal standpoint, that we continue to strive towards not being discriminatory towards specific groups and people. It can be very difficult, when you have something that looks like a black box from the outside, to be able to say, “Okay, was this being fair?” based off of the fairness that we as a society have agreed upon.
The machine doesn’t have that context. The machine doesn’t have the policy, necessarily, inside to make sure that it’s being as fair as possible. We need to make sure that we do put these constraints on these systems, so that it meets what we’ve agreed upon as a society, in laws, etc., to adhere to. And that it should be held to the same standard as if there was a human making that same decision.
There is, of course, a lot of legitimate fear wrapped up about the effect of automation and artificial intelligence on employment. And just to set the problem up for the listeners, there’s broadly three camps, everybody intuitively knows this.
 There’s one group that says, “We’re going to advance our technology to the point that there will be a group of people who do not have the educational skills needed to compete with the machines, and we’ll have a permanent underclass of people who are unemployable.” It would be like the Great Depression never goes away.
And then there are people who say, “Oh, no, no, no. You don’t understand. Everything, every job, a machine is going to be able to do.” You’ll reach a point where the machine will learn it faster than the human, and that’s it.
And then you’ve got a third group that says, “No, that’s all ridiculous. We’ve had technology come along, as transformative as it is… We’ve had electricity, and machines replacing animals… and we’ve always maintained full employment.” Because people just learn how to use these tools to increase their own productivity, maintain full employment—and we have growing wages.
So, which of those, or a fourth one, do you identify with?
This might be an unsatisfying answer, but I think we’re going to go through all three phases. I think we’re in the third camp right now, where people are learning new systems, and it’s happening at a pace where people can go to a computer science boot camp and become an engineer, and try to retrain and learn some of these systems, and adapt to this changing scenario.
I think, very rapidly—especially at the exponential pace that technology tends to evolve—it does become very difficult. Fifty years ago, if you wanted to take apart your telephone and try to figure out how it works, repair it, that was something that a kid could do at a camp kind of thing, like an entry circuits camp. That’s impossible to do with an iPhone.
I think that’s going to continue to happen with some of these more advanced systems, and you’re going to need to spend your entire life understanding some subcomponent of it. And then, in the further future, as we move towards this direction of artificial general intelligence… Like, once a machine is a thousand times, ten thousand times, one hundred thousand times smarter—by whatever definition—than a human, and that increases at an exponential pace… We won’t need a lot of different things.
Whether or not that’s a fundamentally bad thing is up for debate. I think one thing that’s different about this than the Industrial Revolution, or the agricultural revolution, or things like that, that have happened throughout human history… is that instead of this happening over the course of generations or decades… Maybe if your father, and your grandfather, and your entire family tree did a specific job, but then that job doesn’t exist anymore, you train yourself to do something different.
Once it starts to happen over the course of a decade, or a year, or a month, it becomes much harder to completely retrain. That being said, there’s lots of thoughts about whether or not humans need to be working to be happy. And whether or not there could be some other fundamental thing that would increase the net happiness and fulfillment of people in the world, besides sitting at a desk for forty hours a week.
And maybe that’s actually a good thing, if we can set up the societal constructs to allow people to do that in a healthy and happy way.
Do you have any thoughts on computers displaying emotions, emulating emotions? Is that going to be a space where people are going to want authentic human experiences in those in the future? Or are we like, “No, look at how people talk to their dog,” or something? If it’s good enough to fool you, you just go along with the conceit?
The great thing about computers, and artificial intelligence systems, and things like that is if you point them towards a specific target, they’ll get pretty good at hitting that target. So if the goal is to mimic human emotion, I think that that’s something that’s achievable. Whether or not a human cares, or is even able to distinguish between that and actual human emotion, could be very difficult.
At Cornell, where I did my PhD, they had this psychology chatbot called ELIZA—I think this was back in the ‘70s. It went through a specific school of psychological behavioral therapy thought, replied with specific ways, and people found it incredibly helpful.
Even if they knew that it was just a machine responding to them, it was a way for them to get out their emotions and work through specific problems. As these machines get more sophisticated and able, as long as it’s providing utility to the end user, does it matter who’s behind the screen?
That’s a big question. Weizenbaum shut down ELIZA because he said that when a machine says, “I understand” that it’s a lie, there’s no ‘I’, and there’s nothing [there] that understands anything. He had real issues with that.
But then when they shut it down, some of the end users were upset, because they were still getting quite a bit of utility out of it. There’s this moral question of whether or not you can take away something from someone who is deriving benefit from it as well.
So I guess the concern is that maybe we reach a day where an AI best friend is better than a real one. An AI one doesn’t stand you up. And an AI spouse is better than a human spouse, because of all of those reasons. Is that a better world, or is it not?
I think it becomes a much more dangerous world, because as you said before, someone could decide to turn off the machine. When it’s someone taking away your psychologist, that could be very dangerous. When it’s someone deciding that you didn’t pay your monthly fee, so they’re going to turn off your spouse, that could be quite a bit worse as well.
As you mentioned before, people don’t necessarily associate feelings or pain with the machine, but as these systems get more and more life-like, and as they’re designed with reward functions that push them to be more and more human-like, I think that distinction is going to become quite a bit harder for us to draw.
And it not only affects the machine—which, you could argue, doesn’t have a voice—but it’ll start to affect the people as well.
One more question along these lines. You were a Forbes 30 Under 30 honoree. You’re fine with computer emotions, and you have this set of views. Do you notice any generational difference between researchers who have been in the field longer than you and people of your age and training? Do you look at it, as a whole, differently than another generation might have?
I think there are always going to be generational differences. People grow up in different times and contexts, societal norms shift… I would argue usually for the better, but not always. So I think that that context in which you were raised, that initial training data that you apply your transfer learning to for the rest of your life, has a huge effect on what you’re actually going to do, and how you perceive the world moving forward.
I spent a good amount of time today at SigOpt. Can you tell me what you’re trying to do there, and why you started or co-founded it, and what the mission is? Give me that whole story.
Yeah, definitely. SigOpt is an optimization-as-a-service company, or a software-as-a-service offering. What we do is help people configure these complex systems. So when you’re building a neural network—or maybe it’s a reinforcement learning system, or an algorithmic trading strategy—there’s often many different tunable configuration parameters.
These are the settings that you need to put in place before the system itself starts to do any sort of learning: things like the depth of the neural network, the learning rates, some of these stochastic gradient descent parameters, etc.
These are often nuisance parameters that get brushed under the rug. They’re typically tuned via relatively simplistic methods, like brute force or trying random configurations. What we do is take an ensemble of state-of-the-art research from academia in Bayesian and global optimization, and put all of those algorithms behind a simple API.
So when you’re downloading MXNet, or TensorFlow, or Caffe2, whatever it is, you don’t have to waste a bunch of time trying different things via trial and error. We can guide you to the best solution quite a bit faster.
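As a rough sketch of what that replaces: instead of hand-tuning by trial and error, you hand the tunable settings to a Bayesian optimizer that models the objective and proposes promising configurations. This example uses the open-source scikit-optimize library as a stand-in for the kind of hosted service described here; the toy validation_loss objective is hypothetical:

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Toy stand-in for "train the model and return its validation loss";
# a real objective would build and evaluate the network.
def validation_loss(params):
    learning_rate, depth = params
    return (learning_rate - 0.01) ** 2 + 0.001 * (depth - 4) ** 2

search_space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(1, 10, name="depth"),
]

# gp_minimize fits a probabilistic model of the loss surface and picks
# each new configuration to balance exploration and exploitation.
result = gp_minimize(validation_loss, search_space, n_calls=30, random_state=0)
print("best configuration:", result.x, "loss:", result.fun)
```

Thirty evaluations here stand in for what a naive grid over the same ranges would require, which is the whole pitch.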
Do you have any success stories that you like to talk about?
Yeah, definitely. One of our customers is Hotwire. They’re using us to do things like ranking systems. We work with a variety of different algorithmic trading firms to make their strategies more efficient. We also have this great academic program where SigOpt is free for any academic at any university or national lab anywhere in the world.
So we’re helping accelerate the flywheel of science by letting people spend less time on trial and error. I wasted way too much of my PhD on this, to be completely honest—fine-tuning the configuration settings of different bioinformatics algorithms.
So our goal is this: if we can have humans do what they’re really good at, which is creativity—understanding the context and domain of a problem—and then shrink the trial-and-error component as much as possible, hopefully everything happens a little bit faster, a little bit better, and more efficiently.
What are the big challenges you’re facing?
Where this system makes the biggest difference is in large, complex systems, where it’s very difficult to tune manually or brute force the problem. Humans tend to be pretty bad at doing 20-dimensional optimization in their heads. But a surprising number of people still take that approach, because they’re unable to access some of the incredible research that’s been going on in academia for the last several decades.
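The arithmetic behind that claim is stark: even a coarse grid of five candidate values per parameter blows up combinatorially in twenty dimensions.

```python
# Grid-search cost grows as values_per_parameter ** number_of_parameters.
values_per_param = 5
dimensions = 20
print(f"{values_per_param ** dimensions:,}")  # 95,367,431,640,625 configurations
```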
Our goal is to make that as easy as possible. One of our challenges is finding people with these interesting, complex problems. I think the recent surge of interest in deep learning and reinforcement learning, and the complexity being built into a lot of these systems, is extremely good for us; we’re able to ride that wave and help these people realize the potential of their systems quite a bit faster than they would otherwise.
Having the market come to us is something we’re really excited about, but it’s not instant.
Do you find that people come to you and say, “Hey, we have this dataset, and we think somewhere in here we can figure out whatever”? Or do they just say, “We have this data, what can we do with it?” Or do they come to you and say, “We’ve heard about this AI thing, and want to know what we can do”?
There are companies that help solve that particular problem, where they’re given raw data and they help you build a model and apply it to some business context. Where SigOpt sits is slightly different: when people come to us, they already have something in place. They already have data scientists or machine learning engineers.
They’ve already applied their domain expertise to really understand their customers, the business problem they’re trying to solve, everything like that. And what they’re looking for is to get the most out of these systems that they’ve built. Or they want to build a more advanced system as rapidly as possible.
And so SigOpt bolts on top of these pre-existing systems and gives them a boost, fine-tuning all of these configuration parameters to reach maximal performance. Sometimes we do meet people like that, and we pass them on to some of our great partners. But when someone has a system in place and just wants to get the most out of it, that’s where we come in and provide this black-box optimization on top.
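That “bolts on top” pattern is essentially a suggest-and-observe loop: the optimizer proposes a configuration, your existing pipeline evaluates it, and you report the result back. Here is a minimal sketch using scikit-optimize’s ask/tell interface in place of the hosted API; train_and_evaluate is a placeholder for whatever system you already have:

```python
from skopt import Optimizer
from skopt.space import Real

def train_and_evaluate(learning_rate):
    # Placeholder for your existing training pipeline; returns a loss.
    return (learning_rate - 0.01) ** 2

optimizer = Optimizer([Real(1e-4, 1e-1, prior="log-uniform")], random_state=0)

for _ in range(20):
    suggestion = optimizer.ask()               # optimizer proposes a configuration
    loss = train_and_evaluate(suggestion[0])   # existing system evaluates it
    optimizer.tell(suggestion, loss)           # report the observation back

best_loss, best_config = min(zip(optimizer.yi, optimizer.Xi))
print("best loss:", best_loss, "at", best_config)
```

Because the loop only exchanges configurations and scores, the optimizer never needs to see your data or your model, which is what lets it sit on top of an arbitrary pre-existing system.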
Final question-and-a-half. Do you speak a lot? Do you tweet? If people want to follow you and keep up with what you’re doing, what’s the best way to do that?
They can follow @SigOpt on Twitter. We have a blog where we post technical and high-level pieces about optimization and some of the different advancements in deep learning and reinforcement learning. We publish papers too, but blog.sigopt.com and @SigOpt on Twitter are the best ways to follow along.
Alright. It has been an incredibly fascinating hour, and I want to thank you for taking the time.
Excellent. Thank you for having me. I’m really honored to be on the show.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.
Visit VoicesInAI.com to access the podcast.