#i do not pretend to understand the machinations of our system's background processes
reticent-fate · 1 year
Text
With the upcoming LoZ game, the excitement has managed to drag some of our Link fragments closer. Maybe listening to the 2018 anniversary concert wasn't the best idea in that regard, but it's always a bit of a ride having them near front.
They don't talk, but they can communicate with the system, and I have no idea how to even begin describing it. It's like they require someone else to proxy for them, to say what they want to say in order for them to communicate it.
For example, we found out Wild likes to sing. We were humming and singing along to something when we got an emotional response like joy from him, and rather than him spelling it out as "I like to sing," the others in front felt an urge to ask "Do you like to sing?" so that he could "nod" along.
In other cases, visual storytelling is their next-best way forward.
They're an interesting bunch.
-IF11 (he/they/she)
1 note · View note
aboyandhisstarship · 4 years
Text
Humans are weird: The GTO/SICON verse Reboot
The GTO, or Galactic Treaty Organization, is a military and political alliance between a number of species in the alpha quadrant of the galaxy. The GTO is a rather new organization, founded only nine years ago following the first contact incident, when the Klendthu crash-landed on a small, cold planet at the edge of a solar system on the edge of the galaxy and encountered a small outpost. One misunderstanding later, the Klendthu spent six Terran months fighting the creatures from the fourth planet.
Once diplomacy could be established through communications, a ceasefire was called, and the strange race who called themselves Humans started a dialogue.
Over the last nine years the alliance has grown into a massive powerhouse in economics, politics and defense. There are four major races in the alliance:
The Klendthu Congress: an insect race with a regional hive mind, but don't let that fool you; they are wicked smart and industrious. They have developed some of the most impressive mining tech available. Each of their planets is ruled by a Queen, and their entire government by the High Queen. The average worker has about the same level of intelligence as a human six-year-old, but when working in groups they can easily compete with a genius-level human.
The Kalbur Merchant Empire: a race that vaguely resembles the bigfoots of Earth legend. They are the economic backbone of the GTO; their race is built around the building and creation of wealth, with money heavily influencing their culture. Their government is run by a council of officials elected from the top businesses and workers' unions on their world. In terms of technology they tend to lead in interstellar comms and other commercial tech, with their military tech sorely lacking…as such they maintain an allied military presence on their worlds.
The Verkia Technocracy: an aquatic, telepathically floating species that vaguely resembles the squids of Earth. They are the most advanced species in the known galaxy in almost every way. As such, however, they tend to be stiff and follow very strict social customs; it is considered taboo for a student to leave the sciences for another discipline, sometimes going as far as disowning by family and friends, or in the worst case total exile from Verkian society. Their government is run by a council of appointed top scientists, generally the heads of the top institutes within Verkian space.
The Strategically Integrated Coalition of Nations: the Humans, the youngest race in the group as well as the craziest. Hailing from a death world on the edge of the galaxy where everything can and will kill you, they are a society still recovering from finding out that aliens even exist, and from their conflict with them. SICON was formed after the Pluto attacks by several of Earth's major nation states, pushing aside the last barriers to unification. SICON is a democracy, with each former Earth nation state and human colony earning a seat in the SICON Parliament; the party holding the majority of seats wins the election, with the party head becoming the Prime Minister. Underneath them is the Star Marshal, commander-in-chief of the SICON Navy and of the ODTs, or Orbital Drop Teams: highly trained and equipped soldiers who go through a strict selection process. The nature of the human world, as well as their military techniques, makes them the leader in military technology by miles, with standard-issue power suits for ground forces and the entire concepts of orbital insertion, air support and deep-space carriers being introduced by the humans. These advantages are only enhanced by their natural predatory nature and their ability to hone their bodies into killing machines…
T'Las groaned, scratching her fur. "Great, I keep making the humans sound terrifying." The Kalbur sat back in the far-too-tall chair; it was built for a human, after all. She sat in a gray metal room, its thick bulkheads joining into a thick window that showed the swirls of hyperspace outside.
T'Las was grateful to the human for letting her take his "office," as he called it, but honestly, being on a human ship was scary…well, the entire assignment she had been given was scary. Eighteen Terran months ago a Verkian science ship had crash-landed on a pre-FTL world, and the folk on the planet went crazy fighting over tech they could barely understand; only two factions remained by the time the GTO sent an intervention team. Humans falling from the sky into your main base would put the fear of god into anyone, and the sides agreed to moderated peace talks. She had actually been invited to dine in the captain's cabin with the Ambassador, the Captain and the ODT commanding officer, and had to quickly get ready. She put on her formal fur clips and hurried through the ship.
The Valley Forge was what the humans called it, and as human ships go it was on the small side, only 400 meters long and 600 across. The vast majority was taken up by the Chekov drive core, the small retrieval ship and the drop tubes for the ODT team. She passed a group of humans joking about something called a date before she left the ODT section of the ship.
It took her a couple of minutes and asking for directions before she arrived at the metal door. She knocked, and a human voice called, "Come in."
Inside were two humans dressed in their formal uniforms. One had more bars across her shoulders and was a human female, T'Las was able to identify, as she introduced herself: "Hello, you must be the reporter joining us. I'm Captain Hernández."
The male human, also in dress uniform, had shoulder patches on the side of his jacket that looked kind of like the human drop pods as he introduced himself: "Lieutenant William Erickson, ODT senior officer."
T'Las carefully took the offered seat. "Nice to meet you, Captain Hernández, Lieutenant Erickson."
There was the sound of something scraping, and a robotic voice saying, "May I enter?"
Erickson and Hernández stood up as the door opened. In stepped a creature that walked on four legs; it was armoured and had four side-facing eyes, and attached to its chest was a small box. Hernández smiled. "Your Majesty."
The creature screeched, and a second later the robotic voice spoke: "Captain, Lieutenant…surely we have known each other long enough to dispense with the niceties."
Erickson laughed. "I'm sorry to say it's official orders; they don't want us causing too much trouble."
Hernández chuckled as she said, "Your Majesty, Queen of Gamma Prime, this is T'Las of the Kalbur."
T'Las had never seen a Klendthu queen before and muttered out a "Hello…your Majesty," quickly adding a slight bow.
The queen made a chirping sound that the robotic voice translated into a disjointed human laugh before saying, "There is no need; we are all equals here."
The three of them sat down as a small platform elevated the queen to the table. Erickson took a sip of water, saying, "The squad says hi, and Futuba was real bummed she could not join us."
The queen somehow betrayed a genuine response through the robotic voice as she replied, "I'm sorry they were unable to join; it has been far too long since I last saw Dagger Squad."
A couple of humans, still in their dress best, placed down four plates. For the humans it was a simple salad meal, Erickson grinning, "SICON figured steaks were not a good look."
The queen chirped again as a plate of strange green slush was placed in front of her, and T'Las got a salad in the style of her people. T'Las asked, "So how do you all know each other?"
Erickson smiled without showing teeth. "Long story; it involves a lot of explosives."
The queen scratched. "It was my vessel that crashed on Pluto. Then-Private Erickson and then-Flight Lieutenant Hernández crash-landed in my den. We were lucky enough to have gotten our…talk box open…they were the first to talk to us…they helped us build peace."
Erickson smiled. "We got a ceasefire and spent three weeks talking; we had to live off my mom's cookies."
The queen chirped. "I'm sorry you could not eat our food."
Hernández grinned. "I thought the food was okay; the company was awful."
Erickson looked genuinely hurt as the conversation moved to a different topic.
 17 hours later:
T'Las was sitting in her quarters/borrowed office, musing about the nature of human space travel. Most other GTO races have adopted the supercharged carrier system, where the engines run a certain particle through them in an infinite loop that somehow results in faster-than-light travel; Humans, on the other hand, adopted the Chekov hyperdrive, named after the human scientist Anton Chekov who invented it. The Chekov drive punches a hole in subspace, allowing the human ship to enter another dimension and travel hundreds of times faster than the speed of light.
T'Las did not pretend to be well versed on the subject of interstellar starships, but she started to write: "As I fly on the human ship I have noticed something interesting about the difference between the FTL technology employed by SICON and that employed by the rest of the GTO. But first, some background for any reader who is not aware: Humans are pursuit predators. What does this mean? Imagine for a second that you are a Terran creature. You see a human coming and run away. The issue is that you are faster than the human, but the human can chase you as far as they need to, sometimes for kilometers and days at a time."
T'Las read that and said, "NOPE." She quickly edited: "Being a pursuit predator means they chase their prey, sometimes for days and across vast landscapes. Just about anything can out-sprint the average (non-power-suit-wearing) human, but in a distance race…you lose every time. What is the relevance of this? Well, most FTL tech we know of is faster than human hyperspace, but the humans can go farther and for longer. For example: a Kalbur ship and a human ship need to cross GTO space, a distance of, say, 15 light years. The Kalbur ship rockets ahead until about five light years in, at which point it needs to slow down to let its engines recharge; by contrast, the humans keep coming and easily overtake the Kalbur. Once the Kalbur engines recharge, they jump and yet again overshoot the humans, until, uh oh, they have to stop again. Meanwhile the humans have been moving at a steady pace this entire time, easily overtake the Kalbur yet again, and arrive where they want to be hours before the Kalbur vessel."
T'Las read it over before nodding approvingly. "That's better." She added: "Now, the logical next question: what about in combat? And yes, it is as terrifying as you would think. The humans, with their massive, over-gunned ships, fire hunks of metal at a quarter of the speed of light at you, so you make a break for it…and you think you are in the clear, then boom, they appear out of nowhere (a reminder that we have yet to find any way to detect someone in Chekov travel, and if the humans have, they are not sharing). You can't run, you can't hide; you can only hope that they are feeling merciful."
T'Las reread the last paragraph, saying, "Way too dark…" Deleting the last paragraph, she smiled and sent the story off, along with her other notes.
 7 hours later:
T'Las heard a small knock on her door and opened it to see a human with a strange shape on her face. The human grinned. "Hiyo."
T'Las blinked. "Uhh, hi?"
The human smiled. "Specialist Futuba Kurogane, Dagger Squad, intel and communications."
T'Las nodded. "Pleasure…uhhh, not to be rude?"
Futuba grinned. "Oh yeah, right, this came for you from your boss," she said, handing T'Las a drive. "Have a good one."
T'Las played the message; it was her boss telling her, "That last story is a gold mine! We have rerun it four times and they still want more! Keep up the good work!"
T'Las rewatched the message four times, saying, "People are really that interested?"
2 hours later:
T'Las backed up as the creature advanced towards her. It was on four legs and bared its teeth as it sniffed her. The human's office door had closed, cutting off her escape from the predator, and T'Las was considering making a break for it when a human shouted in a language her translator did not recognize. The creature instantly stopped sniffing her and trotted back towards the human, a female of darker complexion who smiled sheepishly, saying to T'Las, "Sorry about Porthos here."
T'Las took a deep breath before yelling, "WHY DO YOU HAVE A DEADLY PREDATOR IN YOUR ENCLOSED SPACESHIP!"
The human rolled her eyes. "He's an MWD."
T'Las said, "What!?"
Erickson rounded the corner, saying with crossed arms, "Heard some yelling. What's the issue, Specialist?"
He reached down, petting the creature, as the other human said, "Porthos seems to have put the fear of god into our guest here."
Erickson sighed. "Abebi, you knew we'd have aliens onboard who would be scared of Porthos."
Erickson looked at T'Las before saying, "Come on, I will fill you in."
Mess hall:
Erickson drank a glass of water, explaining, "There is a creature on Earth called a dog."
T'Las nodded, following, as Erickson sighed. "These animals have an amazing sense of smell, so we train them to find things for us; Porthos, for instance, is a bomb sniffer….so if you ever see him sit, run….Abebi is his handler; she takes care of him and leads the dog on missions…does that make sense?"
T'Las sighed. "Sure, you humans have trained a deadly predator to find equally deadly explosives for you…great…"
Erickson glanced at his wrist. "We are almost at the planet; get ready." He clapped T'Las on the shoulder.
Hanger bay:
The ship rocked as it dropped out of hyperspace. Erickson was dressed in a strange four-piece garment with dark lenses over his face as he explained, "This place was a warzone a few days ago, so stay close and do as we tell you. Everyone follow?"
The queen squawked her affirmative and T’Las nodded awkwardly as they boarded the military drop craft.
4 hours later:
The conference had been going on for hours now, with the creature Porthos and his handler walking around constantly as the rest of the humans eyed the assembled crowd; so far everything was safe and secure. The peace was signed. But then the meet-and-greet and the glad-handing with all the ambassadors started, and, well, T'Las was happy she had her camera drone on for what happened next.
The drone had been flying around the room for about twenty minutes when an alien stepped forward to speak to the ambassadors. Porthos walked towards the alien, sniffing, before sitting facing him. T'Las, remembering what Erickson had said, looked for something to hide behind as all the humans in the room stiffened. The queen, knowing what Porthos was as well, changed color; the aliens from the planet, however, did not know what was about to happen. A tense second later, small silver-looking weapons appeared in the humans' hands as Erickson yelled, "Hands in the air!"
Porthos rose up barking as Erickson yelled, "Abebi!"
The handler nodded. "On it, sir!" She yelled something in a strange human language, and the dog advanced on the now-terrified alien, sniffing before fixing on the creature's jacket and growling. The humans moved in as Erickson said, "Futuba, call for evac. Abebi-,"
The handler interrupted: "Checking the entire room, got it."
Erickson threw the alien down, pulling out a bomb he quickly defused, and said, "Valley Forge, we are pulling out, over!"
The delegation quickly moved out to the waiting drop ship. Handing the would-be bomber over to the locals, they blasted off; the humans visibly relaxed and started chatting with each other and the queen like they hadn't all almost been blown up, leaving T'Las to come to the conclusion: "Humans are weird."
16 notes · View notes
blankdblank · 4 years
Text
Next Caller Pt 7
Use of Come Wake Me Up Lyrics.
Ok, the feather image is a clearly edited copy of another tattoo i found so just pretend it’s more feather-ish in shape. :D
And apologies, i dozed off there a little bit trying to finish a request, which alas i did not...But here this is :D
.
Across the street you trotted, smirking at the acorn-decorated door you eased open, and stood in the doorway to keep dry while closing and tapping your umbrella to cast off any droplets. Inside, your eyes lowered to the stand you set it on, and you turned to find Bilbo exiting the room in the back to reach the counter and speak with you.
“Miss Pear, I am glad to see you. How is your home? Nice and cozy?”
“Well I finally get to extend my bed full size, but from my apartment to that it does seem a bit more like a mansion at times.”
Softly he chuckled saying, “I bet. Now, I suppose it’s down to business then? Any ideas on what you might want?”
Anxiously you wet your lips and pulled a photograph of the galaxy background on your laptop from your pocket, which you passed to him. His eyes scanned over it and you said, "It's a bit odd, bear with me," flipping in your notepad to the folded sheet with a feather outline you also passed him, "my laptop, that's the background, a simple purple, blue and pink galaxy image, but then I have a screensaver that had floating bubbles. And I was thinking maybe a feather with that as the design, galaxy with bubbles on it?"
Your brow inched up and he chuckled saying with a wag of his finger, “Come on back.” He said guiding you to his station he brought out a sheet of tracing paper and copied the feather outline he laid over the picture and nodded, “I can do this. Should be simple, shift it around a bit.” After wetting his lips he asked, “I was wondering if I might be able to see what your uv markers are?”
You nodded and into the back piercing room for privacy you went at his offer and shrugged an arm out of your shirt bunching your shirt out of the way for him to see the dots and dashes coating your upper left bicep over your shoulder to a section over your shoulder blade. The markers roughly done had left visible white scars evident of your struggle in the painful stamping machine that inflicted them before you were thrown back into a cage to be air dropped onto that island. Mostly faded now but with the bright few in the creases and spots hard to be worn down by friction with clothes or sun exposure his jaw dropped seeing the remainder of the runes used to identify you for your prisoner’s code.
“Is it bad?” You asked in his silence while he made another trip over your markers with the uv light in his hand.
“No, it actually could be something quite easily covered, any ideas?”
“I was thinking maybe uv birds or floating feathers and bubbles maybe, small ones?”
With a grin he pulled over a piece of tracing paper he laid over your shoulder he marked exactly where the markers were and the shape of the skin to be covered saying, “I think that would be lovely. I will do my very best to make it as delicate a process as possible.”
“I can handle the pain.”
“I realize that my dear, though some of these spots will need some extra attention, I don’t want to hurt these scars any more than necessary.”
“Dwalin said it best to wait a week, would that be too soon?”
Bilbo shook his head, “Not at all. My Saturday is free if you wanted to come by then and keep Sunday to rest up.”
“Sounds like a plan.” Curiously you looked him over and asked, “How’d you and Dwalin meet?”
Bilbo smirked saying, "He saw me in the shop and came by one day asking for ladybugs on his finger to break the silence. Ended up with a bracelet and left a rewards card for his shop. It had a catchy slogan and I popped by. Pitifully bashful back then the big mug. Just couldn't resist tormenting him." Curiously he looked you over, "A pastime you seem to enjoy as well."
Softly you giggled seeing him lining up the feather tattoo on the same sketch sheet after marking the size and slant needed to trail just below your collar bone, "Can't help it. He just flounders, usually I'm ignored," you said, adding your arm through your sleeve again, catching his stolen glance at the pair of belly button rings you had, one under and the other over, marking the Elven tradition for the entrance into womanhood. "Though lately he seems more interested to find out what I do on my shift at the hotel."
“Ah, yes I did hear that.” Making you giggle again.
“How early do they open?” His brow inched up, “Thought I might bother them for an early morning cup for my first job, it seems I can’t find my kettle.”
To himself he chuckled and replied, “They get in around three most days.”
“And they complain I don’t sleep.”
He chuckled again, “Well they do split the days with their evening staff. Either way they’ll let you in.”
The bell sounded and out again into the storm you went allowing him to get onto the next person's idea for a possible tattoo. Back home you went and starting in your study you opened your first trunk eyeing the shelves you pulled scattered items off of to disperse through the house. Your arm chair was alone in the sitting room and the table usually with it in the dining room. Around the atrium you hung paintings and sketches with your piano in the center of it.
The books were left in their usual places and the trunks stacked to your usual liking to keep your system in place. Inside your closet the items to remain in cubbies were shifted to the section below a tall hanging cutout and your few pairs of shoes added to the shelves while you would have to wait on the rest until you got hangers to set up the rest in whatever order you wanted. All the trunks were checked twice and with a grumble you wondered where your kettle had gone to. Mid lunch break after signing up for a slew of decorating magazines you switched browsers from kettle shopping back to your show you were watching and settled in for a semi relaxing day at your new home until you would call it a night.
.
With a huff and a pout you eyed the empty tin holding your last packet of cider you pocketed in your jeans under the back panel of your mid thigh reaching grey sweater that ties over the bust. Pocketing your phone you grinned again remembering the elated response to the video you sent to your family in Lindon on a tour of your home and had mentioned your new promotion. It only took three centuries but things were finally looking up.
Reusable mug in hand you grabbed your umbrella, shouldered your bag and headed out a bit early to add on your detour with a nice cushion for time to possibly make your grump blush if he was there. On and off the rain came down around your sizable umbrella all the way to the dimly lit shop. Behind the counter Balin spotted you and called for Dwalin, who hurried over from wiping down the stools and tables to unlock and hold the door open for you.
“Hey,” you said earning a grin from the mohawked Dwarf as you closed and put up your umbrella, “Thanks, Bilbo said you wouldn’t mind.”
Dwalin, “Bilbo was right.”
Thorin behind the counter asked, "He said you wanted hot water?"
You nodded, holding the mug you set on the counter; he smirked, easing the holographic swimming duck coated mug closer to him until he saw the packet you pulled out of your pocket. "My last one, and I seem to have lost my kettle. I could use a pot but Kuu always comes out when I do, because I have to boil peas for him when I cook using them and I'm low on peas.." Accepting the packet he flung it over his shoulder, "Hey.."
“We have cider here. Much better than that stuff.”
“And just where does it say that?”
He turned pointing at the salmon coated section on the last menu board, “Cider, which variation did you want?”
“Caramel? Do you have caramel?”
His brow inched up and he asked, “Can you read Khuzdul and Hobbitish?”
“I can. I could put a toaster into orbit if I wished I’ve five math and engineering degrees but I can’t understand what sort of, language, all that is. Never could. Just go for coffee and it’s fifty questions. Something hot, in a cup.”
Unable to help it he turned lifting your mug, “Caramel cider coming up.”
“Oh,” he paused and you said, “No whipped cream. Please.”
He nodded and turned again prepping the drink asking, “What type of kettle did you have? We might be able to get you a good price on a new model. We got a guy.”
“Um, blue.” He glanced at you and you said, “Got it at a rummage sale. Talked him down to half off cuz it didn’t have a lid or handle.”
Dwalin, Balin and Thorin turned to look at you as the first asked in a lean on the counter beside you, “And just how did you use the kettle without a handle or lid?”
“Not very well it seems now that it’s taken off on me. Had to fashion a set of tongs, had a lip at the top and I got this decorative metal plate I put over the top but if I left it too long it would fly off. So I usually only got lukewarm water unless I was up to playing hot potato.”
Lowly Dwalin chuckled turning away faking a clearing of his throat to hide the reaction and Thorin said, “I’ll find you a decent kettle.”
“You don’t-,”
Balin, “Consider it a housewarming gift, from all of us.”
Thorin walked over with your mug, adding the lid to it on the way, and set it down, "Plus, then you can study up on your teas."
“Which box should I start with then?” You asked passing the bills over.
“Box?” He replied.
And you nodded, “Of tea, at the store, they sell them in boxes with cute little bags inside.” You giggled out brushing your hair behind your ear as his hand folded around the bill in the clenching of his fist through a twitch of his brow.
“You don’t buy tea at the store.”
“Then what are they selling in those bags?”
“Not real tea!” He fired back, “I will get you some tea, real tea! None of that poster child pretend groundings they sell by the barrel!”
You nodded and said after a glance at the clock, “Well I will see you later. People get a bit reckless in the rain, have to go dodge some cars on the way to the station. Thank you again.”
Back to the door you went as Balin and Dwalin called out, “No problem, come by anytime.”
Dwalin poked Thorin's shoulder making him snap back to consciousness and call out as you exited the door, "Yes, you should always come first." The door shut but not in time to block off your bubbling giggles while Dwalin and Balin began to taper off into sputtering laughs while Thorin rested his forehead on his arms crossed on the counter, "First thing, first, thing, one word. Ugh.."
Dwalin patted his back and continued on to work saying, “Least now you know you have a way in.”
Thorin’s head lifted up, “Making an ass of myself?”
Dwalin pointed at the board, “Lass needs to learn the brew. If she’s tied to you-,”
Rolling his eyes he sighed out, “She’s not tied to me.”
Balin gave a final few chuckles adding his two cents, “Thorin, she came here for hot water. No doubt her station she works at has functioning coffee makers.”
Thorin’s mouth opened, “I-,” sharply he exhaled then added, “Did not think of that.”
Balin, “Now we just have to double check the batteries in the radio...” he muttered on his way to the back.
Dwalin followed after, "Bought more yesterday." Though lost to his thoughts, Thorin got to browsing on his phone as to what style of kettle you might like, deciding on a simple one like his at home, a blue and white checkered one with a large handle above the crystal-topped lid. The more he tried to focus on getting ready, the more he just couldn't, lost in wondering how every day could be split up into a new lesson for you, expanding your time together.
*
“Today is a bit different, no longer up at the Misty Mountains, it’s me Bunny bringing you straight out to the Villa Esquiyemme out here in the unreachable abode of Duke Troublen. Now I know you are asking me what I am doing out here in the middle of a sea of snow when it is perfectly blustery back in Erebor, well the issue comes with the fact that my guest today is not my guest but in fact my host. Who just happens to be on house arrest and I am currently talking into my purse where all of you are hiding in on this conversation, so if you happen to find my pen do nudge it out for me because I love my Twiggy and I miss it dearly.”
From the booth Mal sat back in her chair grinning behind her propped up hand with a finger tapping her upper lip waiting for her cue. You had warned her today would be different and it certainly was. An espionage interview with muffling effects thanks to a handkerchief you laid over your mic with lifesavers and a brush you would occasionally brush across the desk to remind people that they were hiding inside your purse.
For the first hour muffled banter bled into a full scale argument being rehashed while the Duke had shared how he had landed into this dilemma. The first hour break for commercial came in a faked making of popcorn while you raced to the bathroom. And once back the sound cut back on right in the middle of the Countess, named as the Duke’s mother in law, having forced you into the corner playing an old record. Shushing you to boot allowing you to turn off your mic for a few calming mouth exercises for relieving your mouth from the tongue twisting stretch of dialog you had just read off.
Piano music softly played, and lost in the sea of things, Dwalin pressed a finger to his lips, silencing the woman blabbering on to her friend, who loudly shushed her as well to hear the soft melody beginning on the piano. A soft song of devotion played through the airwaves and everyone felt the hair on their arms rising, along with tiny bumps, at the ethereal voice of the host being played on record from some sort of performance centuries prior, enamoring them even more of their anonymous confidante.
On the other side of the glass Mal held up a page asking, ‘Is this you?!’
Replying back on another sheet your page read, ‘My sisters made me swear to play it this week.’
Her grin deepened and you rolled your eyes. Readying to pick up again switching on your mic you covered again to hit your hand on the desk knocking over a cup of pens causing a clatter stirring up a jumble of an argument melting into your being caught in the arrival of the Duke’s mistress.
And just when that was getting juicy you cut off the mic again and signaled Mal to play your intro music making everyone listening in sit up wide eyed confused if it was over only to hear you again, "Hey Hey, don't you worry still me Bunny. Ear to the ground as always for all the juiciest in the lives of our dashing Durin boys. Now that you've all climbed out of my purse we are back to the Misty Mountains not under a sprinkle but a deluge against my prediction on the forecast. Anyways still mourning my dearest missing flamingo pen Twiggy we are back again with the impregnable force that is Frenn and his dearest Adrianna. And if I have this right we should be marking down days for a bassinet search, shouldn't we?"
His deep baritone voice crooned out to the airwaves, making people nearly vibrate with excitement at how juicy their plot line was getting. Clues had set the show as based decades prior by the 'current events' being a war or film release from that time, but no one gave a damn at all, soaking in every word.
On the other side of the glass wall you caught Ecthellion and Glorfindel watching with grins partially hidden behind their raised fists to help contain their reactions all the way through their sign off. Into the hall you went once your book and empty mug were back in your bag. Flashing the pair of them a grin, "So, still good?"
Ecthellion chuckled, “More than good.”
You gave an excited pitifully withheld squeak in a bounce on your toes and your gaze shifted to the suddenly approaching Frank, who worked on the other end of this floor. All of five feet the stunner of a portly Dwarf with silver beard tied in bubbles by connected leather tethers ended with bells marking each of his children and a four braid style for his hair pulled into a series of loose braids dangling down the center of his head that swayed from side to side. For a moment his brown eyes were on you and he passed you a note, “Message for you.”
“Thank you,” You said reading the post it he passed you simply reading, ‘Call him tomorrow.’
“Um-,” you said as you looked back up at him again.
With a shrug he said, “That’s all Kristy wrote down.”
Lowly you replied, “We really have to get someone to fill in after Trina goes to the other station.”
Glorfindel, “Other station?”
“Ya, after lunch she goes to the Tulip Tower station.”
Glorfindel’s brows furrowed a moment, “Hmm. We didn’t buy that one. We’ll have to see which she prefers.”
Frank, “Her wife works there. No brainer she’ll choose that one.”
Ecthellion, “We’ll get someone on the desk then. Lily did say she wanted a new position.”
Frank said, “Either way you might want to get a second line just for messages for Bunny’s show, been having some guy calling in for a week now.”
Ecthellion and Glorfindel looked at one another and the former asked, “He leave a name?”
Frank shrugged again, “If he did Kristy didn’t write it down and Trina didn’t either.”
Glorfindel nodded and he said, “We’ll have Lily take down his info and we can look into him.”
Pocketing the note you said, “Well I’m off to go pester a grump. I will leave you to your hiring.”
Glorfindel’s eyes narrowed playfully, “Pester a grump, huh? Last time you pestered a grump you nearly ended up engaged out in Gondor.”
"That was a misunderstanding." Making him chuckle again, "He had no intention of dating me let alone proposing. That was between him and those five bottles of wine he downed. All I did was compliment his shirt."
Ecthellion, “Sure it was, with that smile and sassiness of yours.”
Playfully you replied, "If you'll excuse me, me and my sass have to get some tea before my next shift." That made them both chuckle and head out to call Lily to try and get a head start on whoever had been calling the station, hoping for the best that it wasn't anyone your father had known coming back to spoil things for you.
*
Fifteen minutes since you sat down Thorin had been ranting about tea, at first trying to explain just what made this day’s selection so special only to delve into the history of how this strain had been planted and farmed for centuries. Smirking at the Dwarf frowning in determination your head rested in your palm and between sips you focused on each spiraling thing he had shared with you until a refocusing blink from Thorin had him taking in your expression. Lowly he cleared his throat and after a woman approached with a request for one of the specials he promptly stood up and walked off. Drink fixed and back again he came to claim your empty mug after staring lost for words a few moments at your grinning self. Blush fixed in place he relented to his embarrassed silence distracted by the next few asking for specials.
The empty table however had the grump growling to himself and while you were off to your train to the hotel he had finished his next few orders and grabbed his coat saying, “I’m going shopping.”
The notion had his cousins smirking, and the finally arrived employee who had gotten their babysitters in line curiously looked the trio over utterly lost. Shrugging it on he made his way out back unlocking his car he eased into and started up to make his way to the shop. Determined as ever he had to make certain to fix this, he had to find a way to get both of his feet out of his mouth. He didn’t mean to rant on about tea. It was an odd profession, but a quarter Hobbit on his mother’s side meant time with Gran Tulip was spent in the Hobbit lands between Erebor and Greenwood. For all the urgings he should be forging or crafting items from wood not staring wide eyed at the tiny blooms he had helped to cultivate.
There was a whole language of flowers and everything flora. Everything alive and growing and so much more incredible than what he had felt forging. With good tilled earth came company and with it more languages to learn. The wrong tea or biscuit could do great insult meaning he had to delve deeper into the uses of a well forged kettle. Most people didn’t care, but with the shop came the sprout sales and the bi-monthly courses on what each brew meant and what to use for any ailment or hormonal deficiency. You could at least read Khuzdul and Hobbitish so that was a good base to start with, as for passion for the subject he hoped you might grow interest in it and possibly accept some sprouts of your own for your spacious greenhouse he tried to not be so insanely jealous over.
He had spent years peeking into Gloin's collection of virtual tours, simply feeling himself unready to split off from his brother and nephews just yet after having left home with Dwalin. Dis had left when she was engaged, and Balin had lived with them until he had gotten married and had a baby on the way to enforce a need to find his own place to start his family in. Somehow Dwalin had eased out of their place in time for Frerin to pop in, the former in a fleeting relationship he had assumed would last, leaving him in a small flat of his own able to suit his dating needs of privacy. Dwobbit homes were always his favorite, and even with the off pictures for the home you had chosen it always seemed to call to him. Just something about that forest green door beckoning him inside.
“Plenty of room for a roommate,” Gloin had teased on their ride home when his pout appeared at being called away, but he tried not to think of that. He couldn’t dare push that issue with you, just over a week knowing you. Already having forced himself financially and into the process of taking you from one dwelling into a home you never assumed you could have afforded.
What he felt for you, even with his family teasing and joking that he should make a move, he wasn't certain. He wanted to know, he wanted to be certain why every interaction with you left him so lost for words. A Dwarf so able to argue the bark off a tree or a stubborn goat off his treasured stump, now left baffled on what to say or do when you were around. Tea could very well be the answer, or the finishing blow to his ego; he might once again be left speechless after boring you to death in another rant, but it couldn't have been that. Once parked he sighed, palming his keys, his mind running back to that look you had given him, the one that snapped him out of his rant, a peaceful, partially awed expression aimed straight at him because of his passionate ranting.
Shaking his head he climbed out of his car ignoring the sprinkles on the path inside the building run by a friend of a cousin on his Hobbit side. Between the shelves of pre-ground herbs and tea leaves, into the basket he grabbed he settled a healthy amount of tea along with a whale infuser with a double finger hook to pull it out. His last selection was the book of all books concerning tea, the one he had treasured his own copy of when he first started out. Just like his, the round-bodied blue and white checkered kettle with a tall handle and crystal-topped lid was added to the basket. Up to the counter he went, and it wasn't till he made it back to his car and turned his head to eye his selection that he wondered how he would pass it over to you.
Giving it to you before work was out of the question, so back home he went, and over lunch his gaze turned from the pack of sticky notes he'd yet to break into back to the book. With a notepad he made a basic to-do list for each variation and left notes added to each section he imagined might need more clarification for you. The large bag that was left by a quarter to midnight he loaded back into his car, and he took a sleepless night in order to drive to your place. Out front he parked and walked around the car to grab the bag, turning back to pause and flash a grin to your formerly jogging neighbor, who had heard a small single woman had moved in.
“Hi.”
“Evening,” the burly Dwarf replied looking him over.
Thorin wet his lips saying, “My cousin helped Miss Pear move in, we all did, and she drops by our tea shop,” he said pulling a card out of his pocket, “Warming gift, some tea and a kettle.” He said showing the curious Dwarf who eased in seeing he wasn’t lying, “Her shift ends in a bit, didn’t want to leave it long.”
The Dwarf nodded and pointed two houses over, “Our home, you can wait it out there, if you had hiding in mind.”
Thorin smirked, “Thank you.” Turning to head down your walk and leave the bag outside your door and turn back to move his car following the jogger back to his house conveniently out of clear sight. The pair of them both anxious to see your return ducked behind the front fence where the jogger asked more questions about his plans and intentions only to fall silent in seeing your path down the crystal lit sidewalk up to your front gate you trotted through with echoes of a soft hum coming from you. The burly jogger memorizing your path to possibly ensure he or one of the other watchmen kept an eye out for you until a vehicle of some sort could be found for you to ensure your safety.
All the same under the faint glow of the crystals lighting your front porch you lifted the bag and a soft giggle was heard in your path through your round door, once unlocked and opened lit more of the contents. Weakly Thorin chuckled and again thanked the jogger, who said he’d be in to try the tea shop sometime with his wife, who was now in the window wondering what her husband and the stranger were up to. Back to his car he went and off home Thorin drove grinning to himself imagining all you would feel or say upon further inspection of your gifts. Off home he went hoping to see you in a few hours perhaps for another helping of cider if you hadn’t yet bought more of that pitiful cider powder you imagined to be enough to power you through your first job after little sleep between jobs.
* Hours prior *
“Something wrong?” Turning your head you grinned at the asking Dam shaking your head.
“No, just spent some time being told the intricacies of tea leaves by the most serious Dwarf on the planet.” Chuckles followed at your own giggle in adjusting the skirt on your uniform over your hot pants it snapped onto to keep in place. A single glance at the mirror on the wall and your top was adjusted next making sure everything was covered but amply accentuated.
“In a good way or was he telling you off?”
You turned to face her tucking your side swept bangs behind your ear and confirming your hair combs connected by beaded strands holding your rolled bun in place, “The best way. Tried to tell me what was in my drink and got swept away. The most incredible grin he’s been hiding behind that scowl of his.”
That rippled giggles through the room of ladies all heading out for their own floors in the building more suited to their own strengths. Even here you were a bit odd but now their post shift meal would have ample gossip to try and imagine what sort of Dwarf you would fall for after so many years of giving no signals of being interested in anyone.
I can usually drink you right off of my mind
But I miss you tonight
I can normally push you right out of my heart
But I'm too tired to fight
Yeah the whole thing begins
And I let you sink into my veins
And I feel the pain like it's new
Everything that we were,
Everything that you said,
Everything that I did and that I couldn't do
Plays through tonight
Tonight your memory burns like a fire
With everyone it grows higher and higher
I can't get over it, I just can't put out this love
I just sit in these flames and pray that you'll come back
Close my eyes tightly, hold on and hope that I'm dreaming
Come wake me up
To yourself you grinned and on your second floor post scrub of a bath in the suite you were mid hum along to the song playing on your mini speaker hooked to your mp3 player. Adding the trash bag from there to your trash bin on your cart you removed your gloves and lifted the vacuum you unwound the cord on and plugged it in to start vacuuming a quadrant of the room. More trash was gathered and around the already made beds you worked through the second of the twin rooms and made your way to the main sitting room where you paused seeing Tili and Dis both entering the room while you wiped down the dining table there.
“Evening.” You squeaked out straightening up and putting the bottle down on your cart you tried to drop your rag onto only to pause cross eyed making the pair smirk seeing you ease the loop on the fraying cloth stuck around your glove clad finger.
When that was dropped you eyed the pair only to see Dis looking you over saying, “No need to stop, merely we have a guest requesting a picture from one of the bedrooms on this floor to confirm it is the same from our website. Bit superstitious on room numbers.”
You smirked and turned to head to the coffee table to scrub that as well while Tili stood in the doorway keeping an eye on you smirking seeing your toe top reach and disgusted scoff at the underwear on the lamp you added to the trash once retrieved with the grabber on your cart. Leaving the gloves on the cart you got to digging in the couch and rolled your eyes pulling out more ‘hidden treasures’ then vacuumed it fully with cushions and spare pillows fluffed and woven throw traded for a fresh one you folded just so and laid it across the back of the couch to picture perfection.
Closing the distance again Dis neared you when you were assembling your cart again to head to the next room, “How do you like your new home?”
In a glance up at her your grin widened, “It’s perfect. I’ve always wanted a Hobbit Style home. And the greenhouse is to die for.”
Dis chuckled as Tili did, the former saying, “Well I know my cousin Gloin has been thrilled to have settled you in a good home.”
“Ah, so you’re the former Durin,” her brow inched up and you said, “Not that-, he mentioned a relative married into the Findis clan. Eyes should have probably given it away.” After a moment in her smirk at your momentary head tilt you said, “You sort of remind me of this driver I met.”
Tili giggled out, “Frerin?” You glanced at her and nodded, “Her baby brother.”
“Ah,”
Tili, “Do you have family?”
“In Lindon, my Naneth and her husband have two girls. Just nearly in school.”
Dis, “Your parents are divorced?”
“They weren’t married. It’s sort of, complicated.” In the awkward silence you said, “Congratulations, more babies!” The grin splitting across your face stirred one on hers.
Dis, “Thank you. Do you have children?”
Your brows inched up, “No, but I have birds. Which I realize aren’t the same as children. But they are alive and thriving so points in my favor.” That made Tili shift to be behind you a moment fighting back her body’s urge to giggle.
Tili, “Yes it is. Any partners?”
You shook your head, “No, up till last week it wouldn’t have been fair time or fund wise to be with anyone.”
Dis in her try to be subtle asked, “Anyone spark your fancy to possibly try with?”
“Um, I think it’d probably be best to leave fancying to the guys, I tend to get a bit, hard to explain. Get a bit too wild in my daydreams, I suppose, on how interesting I might be for anyone caught in my sparking.”
Tili waved her hand, “No doubt you’ve tons of sparking fellas after you. We’ve heard you have been enjoying stops in at the Brew and Grew to see the guys?”
“Ya, it’s been, life changing, to say the least.” You chuckled out, “Plus it seems I’ve been lied to my entire life and stores do not in fact sell tea in tea bags.”
Dis chuckled, “Ah, Thorin brought that up?”
“Yes. Apparently is set on buying me a kettle to replace my lost one, and is determined to educate me on tea.”
Tili, “If you want out of it just bring up corn variations and that’ll spark up Balin and they’ll give you a chance to run for it.”
You shook your head with a brow-raising giggle, "I think it's sweet. Hard to find what you're really passionate about; too many people try to flee it, which can be deflating. I do like tea, and learning things. If he is up to issuing the challenge I will call him on it and see who wins out on top in determination." A call had them heading back down and leaving you back to your work, you giggling at your own reminders of the giant grump while the pair in the lift giggled themselves at a worthy opponent for Thorin's unending joy in the tiny sprouts and herbs.
Pt 8
@himoverflowers​, @theincaprincess​​, @aspiringtranslator​, @sweeticedtea​, @ggbbhehe4455​, @thegreyberet​, @patanghill17​, @jesgisborne​, @curvestrology​, @alishlieb​, @jogregor​, @armitageadoration​, @fizzyxcustard​, @here2have-fun​, @lilith15000​, @marvels-ghost​, @catthefearless​, @imjusthereforthereads​, @c-s-stars​, @otakumultimuse-hiddlewhore​, @mariannetora​, @shesakillerkween
Hobbit/LotR – @abiwim​, @jotink78​, @pastelhexmaniac
18 notes · View notes
maximelebled · 4 years
Text
How to launch a symlinked Source 2 addon in the tools & commands to improve the SFM
I like to store a lot of my 3D work in Dropbox, for many reasons. I get an instant backup, synchronization to my laptop if my desktop computer were to suddenly die, and most importantly, a simple 30-day rollback “revision” system. It’s not source control, but it’s the closest convenience to it, with zero effort involved.
This also includes, for example, my Dota SFM addon. I have copied over the /content and /game folder hierarchies inside my Dropbox. On top of the benefits mentioned above, this allows me to launch renders of different shots in the same session easily! With some of my recent work needing to be rendered in resolutions close to 4K, it definitely isn’t a luxury.
So now, of course, I can’t just launch my addon from my Dropbox. I have to create two symbolic links first — basically, “ghost folders” that pretend to be the real ones, but are pointing to where I moved them! Using these commands:
mklink /J "C:\Program Files (x86)\Steam\SteamApps\common\dota 2 beta\content\dota_addons\usermod" "D:\path\to\new\location\content"
and
mklink /J "C:\Program Files (x86)\Steam\SteamApps\common\dota 2 beta\game\dota_addons\usermod" "D:\path\to\new\location\game"
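(A quick sanity check, in case it's useful: since these are junctions, you can list them with dir /AL and remove them again with rmdir, which deletes only the link and never the Dropbox folder it points to. Same example paths as above, so adjust them to your own install.)

:: Junctions show up as <JUNCTION> in a dir /AL listing of the addon folders
dir /AL "C:\Program Files (x86)\Steam\SteamApps\common\dota 2 beta\content\dota_addons"
dir /AL "C:\Program Files (x86)\Steam\SteamApps\common\dota 2 beta\game\dota_addons"

:: rmdir on a junction removes only the link itself, not the target folder
rmdir "C:\Program Files (x86)\Steam\SteamApps\common\dota 2 beta\content\dota_addons\usermod"
rmdir "C:\Program Files (x86)\Steam\SteamApps\common\dota 2 beta\game\dota_addons\usermod"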
Now, there’s a problem though; somehow, symlinked addons don’t show up in the tools startup manager (dota2cfg.exe, steamtourscfg.exe, etc)
It’s my understanding that symbolic links are supposed to be transparent to these apps, so maybe they actually aren’t, or Source 2 is doing something weird... I wouldn’t know! But it’s not actually a problem.
Make a .bat file wherever you’d like, and drop this in there:
start "" "C:\Program Files (x86)\Steam\steamapps\common\dota 2 beta\game\bin\win64\dota2.exe" -addon usermod -vconsole -tools -steam -windowed -noborder -width 1920 -height 1080 -novid -d3d11 -high +r_dashboard_render_quality 0 +snd_musicvolume 0.0 +r_texturefilteringquality 5 +engine_no_focus_sleep 0 +dota_use_heightmap 0 -tools_sun_shadow_size 8192
EXIT
Of course, you'll have to replace the paths in these lines (and the previous ones) with the paths that match what you have on your own machine.
Let me go through what each of these commands do. These tend to be very relevant to Dota 2 and may not be useful for SteamVR Home or Half-Life: Alyx.
-addon usermod is what solves our core issue. We’re not going through the launcher (dota2cfg.exe, etc.) anymore. We’re directly telling the engine to look for this addon and load it. In this case, “usermod” is my addon’s name... most people who have used the original Source 1 SFM have probably created their addon under this name 😉
-vconsole enables the nice separate console right away.
-windowed -noborder makes the game window “not a window”.
-width 1920 -height 1080 for its resolution. (I recommend half or 2/3rds.)
-novid disables any startup videos (the Dota 2 trailer, etc.)
-d3d11 is a requirement of the tools (no other APIs are supported AFAIK)
-high ensures that the process gets upgraded to high priority!
+r_dashboard_render_quality 0 disables the fancier Dota dashboard, which could theoretically be a bit of a background drain on resources.
+snd_musicvolume 0.0 disables any music coming from the Dota menu, which would otherwise come back on at random while you click thru tools.
+r_texturefilteringquality 5 forces x16 Anisotropic Filtering.
+engine_no_focus_sleep 0 prevents the engine from “artificially sleeping” for X milliseconds every frame, which would lower framerate, saving power, but also potentially hindering rendering in the SFM. I’m not sure if it still can, but better safe than sorry.
+dota_use_heightmap 0 is a particle bugfix that prevents certain particles from only using the heightmap baked at compile time, instead falling back on full collision. You may wish to experiment with both 0 and 1 when investigating particle behaviours.
-tools_sun_shadow_size 8192 sets the Global Light Shadow res to 8192 instead of 1024 (on High) or 2048 (on Ultra). This is AFAIK the maximum.
And don’t forget that “EXIT” on a new line! It will make sure the batch file automatically closes itself after executing, so it’ll work like a real shortcut.
Speaking of, how about we make it even nicer, and like an actual shortcut? Right-click on your .bat and select Create Shortcut. Unfortunately, it won’t work as-is. We need to make a few changes in its properties.
Make sure that Target is set to:
C:\Windows\System32\cmd.exe /C "D:\Path\To\Your\BatchFile\Goes\Here\launch_tools.bat"
And for bonus points, you can click Change Icon and browse to dota2cfg.exe (in \SteamApps\common\dota 2 beta\game\bin\win64) to steal its nice icon! And now you’ve got a shortcut that will launch the tools in just one click, and that you can pin directly to your task bar!
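One last optional tweak, just a sketch using the same example paths as everything above: if you ever reinstall the game (or otherwise lose the junction), the batch file can check for it first and complain, instead of silently launching the tools without your addon. The launch line itself is unchanged, it's only wrapped in a quick existence check:

@echo off
REM Bail out early if the usermod junction is missing (e.g. after a reinstall)
if not exist "C:\Program Files (x86)\Steam\SteamApps\common\dota 2 beta\game\dota_addons\usermod\" (
    echo usermod junction not found - recreate it with mklink /J before launching the tools.
    pause
    EXIT /B 1
)
start "" "C:\Program Files (x86)\Steam\steamapps\common\dota 2 beta\game\bin\win64\dota2.exe" -addon usermod -vconsole -tools -steam -windowed -noborder -width 1920 -height 1080 -novid -d3d11 -high +r_dashboard_render_quality 0 +snd_musicvolume 0.0 +r_texturefilteringquality 5 +engine_no_focus_sleep 0 +dota_use_heightmap 0 -tools_sun_shadow_size 8192
EXIT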
Enjoy! 🙂
8 notes · View notes
somnilogical · 4 years
Text
davis tower kingsley (listed here on the cfar instructor page) harassed a cis woman about her appearance. another cis woman reported this to acdc (the people who wrote the thing about how brent was great) and afaict they did nothing. he claims that if trans people and gay people dont "repent and submit to the pope" they will burn in hell, defended the spanish inquisitions, wrote about how the mission system werent actually abductions, slavery, forced conversions and this was propaganda, defends pretty much any atrocity by an authority, "believes" the catholic god exists and does not try and destroy them, submits to them. and so much more.
born into another era they would actually work for the california mission system and say it was good.
said thing that cached out to that emma and somni should repent and submit to the rationalist community. wrote up a rant about "how about fuck you. go lick the boots of your dark mistress anna salamon." didnt send. got kicked by some rationalist, reasoning is probably that what id say would disrupt their peaceful machinations of omnicide, would be infohazards, because... the information is hazardous to their social order.
a few of these things are subjects of future blog posts.
--
cfar has never hired a trans woman, i have lots of logs of them trying to do what people did to porpentine. claiming emma thinks torturing children is hot, claiming emma was physically violent, claiming emma was indistinguishable from a rapist, claiming ziz was a "gross uncle style abuser", claiming somni was enticing people to rape, claiming that anna salamon was a small fragile woman and ziz was large and had muscles. as if any of our strength or speed had anything to do with our muscles in this place. all of these things are false except relative size difference between ziz and anna which is just transmisogynistic and irrelevant.
if they lie about our algorithms and claim that we are using male-typical strategies, then they can fail by these lies and be sidelined by callout posts that transferred 350,000$ from miri despite their best efforts to cover this up. (all benefited by having relative political advantage flowing from estrogenized brain modules. men are kind of npc's in this particular game of fem v fem cyberontological warfare for the fate of the multiverse, mostly making false patriarchal assumptions that ziz was doing things for social status. like status sensitivity is hormonally mediated, your experiences are not universal. or saying like kingsley is saying that people should repent and submit to whatever authorities in the rationalist region they submit to. NO. FUCK YOU. i will not repent and submit to your abusive dark mistress anna salamon.
i knew anna salamon was doing the edgy "transfems are all secretly male" thing before i talked with ziz. it was a thing; {zack, carrie}, ben hoffman, michael vassar were also in on it. ppl had "men trapped in mens bodies" on their bookshelves because the cool people were reading it. didnt think she was being *transmisogynistic* about it until i talked with ziz. in retrospect i was naive.)
also? anarchistic coordination ive had with people has been variously called lex's cluster, somni's cluster, ziz's cluster by authoritarians who cant imagine power structures between people that arent hierarchical, like based on who they want to say is "infohazardously corrupting people". emma goldman had to deal with this shit too, where the cops tried to say she was friends with anyone who thought anarchism made sense. people she didnt know at all who did their own anarchism. because authoritarians dont think in terms of philosophy, they think any challenge to their power is a disease that needs to be eliminated and you just need to doxx their network.
like if ziz and somni and emma were all actually infohazardous rapists as people keep trying to claim we are and then saying "oh no i didnt mean it i swear" and then doing it again. what would happen isnt that a bunch of infohazardous rapists start talking and working together for a common goal. what actually happens with people of that neurotype is they partition up the territory into rival areas of feeding on people like gangs do.
like they dont get together and start talking a lot about decision theory and cooperate in strange new ways.
not that the people lying about emma, ziz, gwen, somni and others are trying to have accurate beliefs. they are trying what all authoritarians try with anarchist groups. unfortunately for them, ive read the meta, i know dread secrets of psychology and cooperation that they claim are like painful static and incomprehensible, yet despite being "incomprehensible" are almost certainly harmful. if harm is to be judged against upholding the current regime, and the current regime is evil, then lots of true information and good things will look harmful. like, ive tested this out in different social spheres: what people claim is "incomprehensible" is the stuff that destroys whatever regime they are working in. like someone said i sounded like i was crazy and homeless and that they couldnt understand me when i pointed out that reorienting your life, your time, your money, to a human who happens to be genetically related to you for 16 years is altruistic insanity. just do the math. eliezer, anna, michael, brian tomasik all once took heroic responsibility for the world at some point in their lives and could do a simple calculation and make the right choice. none of them have children.
pretending that peoples "desires" "control them", when "desires" are part of the boundary of the brain, part of the brains agency and are contingent on what you expect to get out of things. like, before, stabbing myself with a piece of metal would make me feel nauseated, id see black dots, and feel faint. but after i processed that stabbing myself would cure brain damage and make me more functional, all this disappeared.
most people who "want" to have children have this desire downstream of a belief that someone else will take heroic responsibility for the world, they dont need to optimize as much. there are other competent people. if they didnt they would feel differently and make different choices.
you can see the contingency of how people feel about something on what they get out of it lots of places. like:
<<Meanwhile, a Ngandu woman confessed, "after losing so many infants I lost courage to have sex.">>
but people lie about how motivation works, in order to protect the territory of saying "well i just need a steady input of nubile fems so i can concentrate and be super altruistic!" or "i just need to spend 16 years of life reorienting around humans who happen to be genetically related to me and my friends so i can concentrate and be super altruistic!" when neither of these are true. these people just want nubile fems, they just want babies. (the second one has much much less negative externalities though. you could say i am using my female brain modules to say "yeah the archetypically female strat, though it has the same amount of lying, is less harmful". but like it actually is less directly harmful. the harm from gaslighting people downstream of diverting worldsaving resources and structure to secure a place to {hit on fems, raise babies} is ruinous. means that worldsaving plans that interfere with either of these are actively fought. and the knowledge that neither of these are altruistic optimizations, neither is Deeply Wise, they are as dumb in terms of global optimization as they seem initially, is agentically buried.
this warps things in deep ways, that were a priori unexpected to me.)
this is obvious, but when i talk about it, the objection isnt that it doesnt make utilitarian sense, the objection is that "im talking like a crazy person". authoritarians say this to me too when i assert my right to my property that they took, act like im imposing on them. someone else asked if i could "act like a human" and do what he wanted me to do when i was thinking and talking with my friends. all of these things authoritarians have said to me "act like a human" "talk like a normal person i cant understand you" were to coerce my submission. they construct the category of "human" and then say im in violation of it and this is wrong and i should rectify it. i am talking perfectly good english right now. you can read this.
anna salamon, kelsey piper, elle, pete michaud, and many others all try to push various narratives of somni, emma, ziz, gwen and others being in the buckets {RAPIST, PSYCHO, BRAINWASHED}. im not a rapist, im not psychotic, im not brainwashed. before ziz came along, people were claiming i was brainwashing people, its a narrative they keep reusing.
porpentine talks about communities that do this, that try and pull trap doors beneath trans women:
<<For years, queer/trans/feminist scenes have been processing an influx of trans fems, often impoverished, disabled, and/or from traumatic backgrounds. These scenes have been abusing them, using them as free labor, and sexually exploiting them. The leaders of these scenes exert undue influence over tastemaking, jobs, finance, access to conferences, access to spaces. If someone resists, they are disappeared, in the mundane, boring, horrible way that many trans people are susceptible to, through a trapdoor that can be activated at any time. Housing, community, reputation—gone. No one mourns them, no one asks questions. Everyone agrees that they must have been crazy and problematic and that is why they were gone.>>
https://thenewinquiry.com/hot-allostatic-load/
(a mod of rationalist feminists deleted this almost immediately from the group as [[not being a good culture fit]], not being relevant to rationalism, and written in the [[wrong syntax]]. when its literally happening right now, they are trying to trapdoor transfems who protest and rebel asap. just like google.)
canmom on tumblr talks about the strategic use of "incomprehensibility" against transfems. and how its not about "comprehensibility". i have a different theory of this, but her thing is also a thing.
<<Likewise, @isoxys recently wrote an impressively thorough transmisogyny 101, synthesising the last several years of discussions about this facet of our particular hell world. But that post got just 186 notes, almost exclusively from the same trans women who are accused of writing ‘inaccessibly’.
Perhaps they’d say isoxys’s post is inaccessible too, but what would pass the bar? Some slick HTML5 presentation with cute illustrations? A wiki? Who’s got the energy and money to make and host something like that? Do the critics of ‘inaccessible’ posts take some time to think about what kind of alternative would be desirable, and how it could be organised?>>
https://canmom.tumblr.com/post/185908592767/accessibility-in-terms-of-not-using-difficult
alice maz talks about the psychology behind the kind of cop kelsey piper, davis tower kingsley, elle and others are:
<<the role of the cop is to defend society against the members of society. police officers are trivially cops. firefighters and paramedics, despite similar aesthetic trappings, are emphatically not. bureaucrats and prosecutors are cops, as are the worst judges, though the best are not. schoolteachers and therapists are almost always cops; this is a great crime, as they present themselves to the young and the vulnerable as their friends, only to turn on them should they violate one of their profession's many taboos. soldiers and parents need not be cops, but the former may be used as such, and the latter seem frighteningly eager to enlist. the cop is the enemy of passion and the enemy of freedom, never forget this>>
https://www.alicemaz.com/writing/alien.html
anna salamon wrote a thing implying that ziz, somni, gwen suffered some sort of vague mental issues from going to aisfp. (writing a post on this.) alyssa vance tried to suggest i believe cfar is evil because im homeless. but sarah constantin, ben hoffman, {carrie, zack}, jessica taylor (the last three who have blogged a lot about whats deeply wrong) (not listing others because not wanting to doxx a network to authoritarians, who just want to see it contained. and the disease of "infohazards" eradicated.) are not homeless and ive talked with many of them and read blog posts. and they know that cfar is fake. jessica (former miri employee) left because miri was fake.
anna and others are trying to claim that theres some person responsible for a [[mass psychotic break]] that causes people to... independently update in the same direction. and have variously blamed it on ziz, somni, michael vassar. but like mass psychotic breaks arent... really a thing. people having one would not be able to independently derive something, plan on writing a blogpost on it, and then see ben hoffman had written http://benjaminrosshoffman.com/engineer-diplomat/ and be like "ah good then i dont have to write this." and have this happen with several different people.
like this is more a mass epistemic update that miri / cfar / ssc / lw are complicit in the destruction of the world. and will defend injustice and gaslight people and lie about the mathematical properties of categories to protect this.
they all know exactly what they are doing, complicity with openai and deepmind in hopes of taking the steering wheel away at the last second. excluding non-human life and dead humans from the CEV to optimize some political process, writing in an absolute injunction to an fai against some outcome to protect from blackmail when that makes it more vulnerable. (see:
https://emma-borhanian.github.io/arbital-scrape/page/hyperexistential_separation.html
hyperexistential separation: if an fai cant think of hell, an fai cant send the universe to hell in any timeline. this results in lower net utility. if you put an absolute injunction against any action for being too terrible you cant do things like what chelsea manning did and i believe actually committed to hungerstriking until death in the worlds where the government didnt relent, choosing to die in those timelines. such that most of her measure ended up in a world where the government read this commitment in her and so relented.
if chelsea manning had an absolute injunction against ever dying in any particular timeline, she would get lower expected utility across the multiverse. similarly, in newcombs problem if you had an absolute injunction against walking away with 0$ in any timeline because that would be too horrible, you get less money in expectation. for any absolute injunction against things that are Too Horrible you can construct something like this.
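(a quick worked version of the newcomb point, using the textbook payoffs of 1,000$ in the transparent box and 1,000,000$ in the opaque box, and a predictor thats right with probability p. these numbers are the standard ones, not something from this post:
ev(one-box, no injunction) = p × 1,000,000$
ev(two-box, forced by a "never walk away with 0$" injunction) = 1,000$ + (1 − p) × 1,000,000$
for any p above about 0.5 the injunction loses money in expectation; at p = 0.99 its roughly 990,000$ vs 11,000$. same shape of argument for any absolute injunction against things that are Too Horrible.)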
--
a lot of humans seem to be betting on "nothing too horrible can happen to anyone" in hopes that it pays off in nothing too horrible happening to you.
the end result of not enacting ideal justice is the deaths of billions. at each timestamp saying "its too late to do it now, but maybe it would have been good sometime in the past". with the same motive that miri wants to exclude dead people from the cev, they arent part of the "current political process". so you can talk about them as if they were not moral patients, just like they treat their fellow animals.
(ben hoffman talks about different attitudes towards ideal justice coming upon the face of the earth.)
--
https://emma-borhanian.github.io/arbital-scrape/page/cev.html
cev:
<<But again, we fall back on the third reply: "The people who are still alive" is a simple Schelling circle to draw that includes everyone in the current political process. To the extent it would be nice or fair to extrapolate Leo Szilard and include him, we can do that if a supermajority of EVs decide* that this would be nice or just. To the extent we don't bake this decision into the model, Leo Szilard won't rise from the grave and rebuke us. This seems like reason enough to regard "The people who are still alive" as a simple and obvious extrapolation base.>>
https://emma-borhanian.github.io/arbital-scrape/page/cev.html
this is an argument from might makes right. because dead people and nonhuman animals cant fight back.
->"i think we should give planning of the town to the white people, then extrapolate their volition and if they think doing nice things for black people is a good idea, we'll do it! no need to bake them in to the town planning meetings, as they are arent part of the current political process and no one here will speak up for them."
i dont plan to exclude dead people or any sentient creatures from being baked in to fai. they are not wards of someone else. enslaving and killing fellow sentient life will not continue after the singularity even if lots of humans want it and dont care and wont care even after lots of arguments.) and so much else.
the list of all specific grievances would take a declaration of independence.
like with googles complicity with ICE, having a culture of trapdooring transfems (for some reason almost the only coherent group that has the moral fiber to oppose these injustices; that is, p(transfem|oppose injustice in a substantive way) is high, not necc the reverse) who question this sort of thing.
thinking of giving sarah constantin a medal thats engraved with "RIGHTEOUS AMONG CIS PEOPLE: I HAD SEVERAL SUBSTANTIAL DISAGREEMENTS WITH HER ABOUT LOAD BEARING PARTS OF HER LIFE AND SHE NEVER ONCE TRIED TO CALL ME A RAPIST, PSYCHOTIC, OR BRAINWASHED" thats where the bar is at, its embedded in the core of the earth.
kelsey piper, elle benjamin, anna salamon, pete michaud, and lots more have entirely failed to clear this bar. anna and kelsey saying they dont understand stuff somni, emma, ziz and other transfems talk about but its probably dangerous and infohazardous and its not to be engaged with philosophically. just like the shelter people acting as if my talking about their transmisogyny was confusing and irrational to be minimized and not engaged with. just like any authoritarian where when you start talking about your rights and what is right and wrong and what makes sense they are like "i dont understand this. you are speaking gibberish why are you being so difficult? all we need you to do is submit or leave."
and no i will NOT SHUT UP about this injustice. all miri/cfar people can do at this point is say "the things these people write are infohazards" and then continue to gaslight others they cant engage on a philosophical level. all they can say is that what i am saying is meaningless static and yet also somehow dangerous.
::
it doesnt make sense to have and raise babies if you are taking heroic responsibility for the world. doesnt make sense to need a constant supply of fems to have sex with if you are taking heroic responsibility for the world. people who claim either of these pairs of things are lying, maybe expect someone else to take heroic responsibility for the world or exist in a haze.
the mathematics of categories and anticipations dont allow for the thing you already have inside you to be modified based on the expected smiles it gives your community. this is used to gaslight people, like "calling this lying would be bad for the institutions, not optimize ev. thus by this blogpost you are doing categories wrong." this is a mechanism to cover dishonesty for myopic gains.
using the above, a bunch of people colluding with the baby industrial complex get together and say that the "best" meaning of altruism includes having babies (but maybe not having sex with lots of fems? depending on which gendered strategy gets the most people in the colluding faction) because other meanings would make people sad and unmotivated. burying world optimizers ability to talk about and coordinate around actual altruism.
openAI and deepmind are not alignment orgs. cfar knows this and claims they are, gaslighting their donors, in hopes of taking the steering wheel at the last moment.
alyssa vance says paying out to blackmail is fine, its not.
CFAR manipulated donation metrics to hide low donations.
MIRI lied about its top 8 most probable hypotheses for why its down 350,000$ this year.
anna salamon is transmisogynistic, this is why cfar has never hired a trans woman despite trans women being extremely good at mental tech. instead they hire people like davis kingsley.
kingsley lied about anna not being involved at hiring in cfar in order to claim anna couldnt be responsible for cfar never hiring a trans woman.
a cfar employee claimed anna salamon hired their rapist, was angry about it. mentioned incidentally how anna salamon, president and cofounder of cfar, was involved in hiring at cfar.
acdc wrote a big thing where they defended a region of injustice (brent dill) because of their policy of modular ethics. when really, if you defend injustice at any point, you have to defend the defense, and the thing iteratively spreads across your organization like a virus.
miri / cfar caved to louie helm.
not doing morality or decision theory right. among which is: https://emma-borhanian.github.io/arbital-scrape/page/hyperexistential_separation.html and https://emma-borhanian.github.io/arbital-scrape/page/cev.html
and so much more.
theguidedpath · 3 years
Text
Misconception(s) of DevOps and how to overcome them
Introduction:
Having a misconception or two about a concept that is relatively new is very common. There are some misconceptions about DevOps in the market which prevent organizations from adopting it or from growing an existing DevOps practice. In this blog, I’ve tried to list some of these misconceptions and their explanations.
I’ve been working in software development for 12 years, so I have had plenty of experience with different types of projects. I have also started to work in DevOps (incidentally) over the last few years, and this has had me thinking about just why there is a backlash against the term “DevOps”. Because of this, I wanted to put my thoughts on paper.
For those that don’t know: DevOps is actually an architectural view of how a business can improve its agility by adopting best-of-breed tool sets and improving processes for continuous delivery.
Misconception No.1: DevOps is a job title
This may be the biggest misconception out there. DevOps is not a job title. It’s not even an acronym. It’s more of a movement. A revolution even. But definitely not a job title.
There is a lot of confusion in the marketplace about what DevOps actually is. The good news is that the term has been coined, defined, and so on. Nevertheless, there are still a lot of people who think they need to hire a DevOps engineer, when they really need to hire someone with different skill sets and backgrounds.
Again, DevOps is not a job title. It’s a way for software developers (Dev) to work more closely with system administrators/operators (Ops).
In theory, this collaboration allows for increased efficiency. For instance, if the developers and operators can work together and anticipate each other’s needs, they could create something like automatic provisioning for physical servers or virtual machines (VMs), so that certain applications start up when needed and stop when no longer needed. The benefits here are obvious — reduced power consumption and faster application startup times.
Misconception No.2: DevOps is just about automation
A lot of people (including a lot of CIOs) think that DevOps is about tools and processes related to automating the deployment and operations of applications in production. This is only half true.
Tools like Puppet, Chef or Ansible can be useful for certain tasks, but DevOps is not just about tools — it’s about culture. The goal of DevOps is to improve collaboration between development and operations teams by implementing new processes and tooling around continuous integration and continuous delivery.
DevOps is not a silver bullet that will solve all your problems, nor automatically make you money. DevOps has its own set of problems and assumptions. So you have to know what they are before you can start trying to fix them.
For example, DevOps is not just about automation. In fact, automation is the least important part of DevOps! Automation allows you to scale your operations reliably and predictably. It enables reliable deployments, better monitoring, and simpler/better testing – but it’s not the end goal!
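To make that concrete, here is a minimal sketch of the kind of automation piece being described: a post-deploy check that polls a health endpoint and rolls back if the service never comes up. The URL, retry budget, and rollback command below are hypothetical placeholders rather than anything specific to this article. Automation like this is useful, but no script will give you the cultural side of DevOps.
# post_deploy_check.py - illustrative sketch; the URL and rollback command are hypothetical
import subprocess
import time
import urllib.request

HEALTH_URL = "https://example.internal/healthz"   # hypothetical health endpoint
ROLLBACK_CMD = ["./deploy.sh", "--rollback"]      # hypothetical rollback script
RETRIES = 5

def service_is_healthy(url):
    # Returns True if the service answers with HTTP 200.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def verify_deploy():
    # Polls the health endpoint; triggers a rollback if it never comes up.
    for attempt in range(RETRIES):
        if service_is_healthy(HEALTH_URL):
            print("deploy verified after", attempt + 1, "check(s)")
            return
        time.sleep(10)
    print("service never became healthy; rolling back")
    subprocess.run(ROLLBACK_CMD, check=True)

if __name__ == "__main__":
    verify_deploy()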
I have seen this behavior many times at conferences and meetings, where people talk about adopting DevOps without understanding what it means or why they should adopt it. They are typically focused on the tools they can use rather than the culture they need to adopt first if they want to succeed with DevOps. This is wrong and dangerous, because they end up spinning their wheels trying to make tools fit into an organization that doesn’t understand or embrace DevOps as a whole.
Misconception No.3: DevOps means we can take it easy
I have been lucky to be part of the agile software development community for a long time. There is one thing I have observed over the years that has become very clear to me.
This isn’t a groundbreaking observation, but it is something I see people pretending to know, or not caring about, all the time. That’s something that needs to change.
Everything we do in our personal and professional lives can be a learning opportunity. Our jobs are no different.
If you want to improve your job performance as well as your career, you need to keep learning about your industry and what others are doing in it. You may not agree with everything you read or hear, but if you don’t develop a critical eye you will never grow.
I have also seen many people believe that DevOps is just “agile lite.” It certainly shares some techniques with agile software development methodologies, but the two are not synonymous. The two may even work well together, but DevOps should be used with other processes and methodologies as well.
For example, I recently worked with an organization that had never done agile development before. They were reluctant to try out any agile practices because they did not want to mess up their existing waterfall processes and then have to go back and fix them. As we worked through some of these issues with them, they began to see how agile could really help them when paired with the right tools and approach.
Misconception No.4: You can deploy at any frequency in DevOps
Another misconception is that if you follow the principles of DevOps, you can deploy at any frequency you want.
There are certainly benefits to deploying more frequently, but there are also drawbacks, especially if your application isn’t designed to support frequent deployments.
One of the common reasons why organizations don’t deploy changes more frequently is because they have a hard time identifying the right folks who should be involved in making those deployments happen.
Conclusion:
There are many misconceptions about DevOps. If DevOps is to succeed, these misconceptions need to be addressed head on, or else they will continue to linger.
Implement a best practice, such as using automated testing for your development cycle, and you’ll find that many of the obstacles you might encounter are simply a matter of communication issues between your team members. DevOps doesn’t have to be complex.
To sum up, properly communicating with your DevOps team means making a conscious effort to understand not just the lingo, but also their reasoning for getting things done a certain way. This is vital, because in many organizations DevOps and operations roles are highly specialized.
If you don’t know what they’re talking about, they can’t tell you what they mean. This will result in miscommunication, which could result in missed deadlines, failed development cycles and more. To avoid this scenario it’s best to do your homework and understand DevOps so you can communicate effectively with your team and contribute to projects during their crucial time frames.
clarenceomoore · 6 years
Text
Voices in AI – Episode 73: A Conversation with Konstantinos Karachalios
Today's leading minds talk AI with host Byron Reese
About this Episode
Episode 73 of Voices in AI features host Byron Reese and Konstantinos Karachalios discussing what it means to be human, how technology has changed us in the far and recent past, and how AI could shape our future. Konstantinos holds a PhD in Engineering and Physics from the University of Stuttgart and is the managing director at the IEEE Standards Association.
Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.
Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is Konstantinos Karachalios. He is the Managing Director at the IEEE Standards Association, and he holds a PhD in Engineering and Physics from the University of Stuttgart. Welcome to the show.
Konstantinos Karachalios: Thank you for inviting me.
So we were just chatting before the show about ‘what does artificial intelligence mean to you?’ You asked me that and it’s interesting, because that’s usually my first question: What is artificial intelligence, why is it artificial and feel free to talk about what intelligence is.
Yes, and first of all we see really a kind of mega-wave around the ‘so-called’ artificial intelligence—it started two years ago. There seems to be a hype around it, and it would be good to distinguish what is marketing, what is real, and what is propaganda—what are dreams what are nightmares, and so on. I’m a systems engineer, so I prefer to take a systems approach, and I prefer to talk about, let’s say, ‘intelligent systems,’ which can be autonomous or not, and so on. The big question is a compromise because the big question is: ‘what is intelligence?’ because nobody knows what is intelligence, and the definitions vary very widely.
I myself try to understand what is human intelligence at least, or what are some expressions of human intelligence, and I gave a certain answer to this question when I was invited in front of the House of the Lords testimony. Just to make it brief, I’m not a supporter of the hype around artificial intelligence, also I’m not even supporting the term itself. I find it obfuscates more than it reveals, and so I think we need to re-frame this dialogue, and it takes also away from human agency. So, I can make a critique to this and also I have a certain proposal.
Well, start with your critique. If you think the term is either meaningless or bad, why? What are you proposing as an alternative way of thinking?
Very briefly because we can talk really for one or two hours about this: My critique is that the whole of this terminology is associated also with a perception of humans and of our intelligence, which is quite mechanical. That means there is a whole school of thinking, there are many supporters there, who believe that humans are just better data processing machines.
Well let’s explore that because I think that is the crux of the issue, so you believe that humans are not machines?
Apparently not. It’s not only we’re not machines, I think, because evidently we’re not machines, but we’re biological, and machines are perhaps mechanical although now the boundary has blurred because of biological machines and so on.
You certainly know the thought experiment that says, if you take what a neuron does and build an artificial one and then you put enough of them together, you could eventually build something that functions like the brain. Then wouldn’t it have a mind and wouldn’t it be intelligent, and isn’t that what the human brain initiative in Europe is trying to do?
This is weird, all this you have said starts with a reductionist assumption about the human—that our brain is just a very good computer. It ignores really the sources of our intelligence, which are really not all in our brain. Our intelligence has really several other sources. We cannot reduce it to just the synapses in the neurons and so on, and of course, nobody can prove this or another thing. I just want to make clear here that the reductionist assumption about humanity is also a religious approach to humanity, but a reductionist religion.
And the problem is that people who support this, they believe it is scientific, and this, I do not accept. This is really a religion, and a reductionist one, and this has consequences about how we treat humans, and this is serious. So if we continue propagating a language which reduces humanity, it will have political and social consequences, and I think we should resist this and I think the best way to express this is an essay by Joichi Ito with the title which says “Resist Reduction.” And I would really suggest that people read this essay because it explains a lot that I’m not able to explain here because of time.
So you’re maintaining that if you adopt this, what you’re calling a “religious view,” a “reductionist view” of humanity, that in a way that can go to undermine human rights and the fact that there is something different about humans that is beyond purely humanistic.
For instance I was in an AI conference of a UN organization which brought all other UN organizations with technology together. It was two years ago, and there they were celebrating a humanoid, which was pretending to be a human. The people were celebrating this and somebody there asked this question to the inventor of this thing: “What do you intend to do with this?” And this person spoke publicly for five minutes and could not answer the question and then he said, “You know, I think we’re doing it because if we don’t do it, others were going to do it, it is better we are the first.”
I find this a very cynical approach, a very dangerous one and nihilistic. These people with this mentality, we celebrate them as heroes. I think this is too much. We should stop doing this, we should resist this mentality, and this ideology. I believe if we make a machine a citizen, you treat your citizens like machines, then we’re not going very far as humanity. I think this is a very dangerous path.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
  Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
techscopic · 7 years
Text
Voices in AI – Episode 13: A Conversation with Bryan Catanzaro
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Bryan talk about sentience, transfer learning, speech recognition, autonomous vehicles, and economic growth.
Visit VoicesInAI.com to access the podcast or subscribe (iTunes, Play, Stitcher, RSS).
Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.
Bryan Catanzaro: Thanks. It’s great to be here.
Let’s start off with my favorite opening question. What is artificial intelligence?
It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There’s a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit more narrow, but I think it’s also a little bit more connected with why artificial intelligence is changing so many companies and so many things about the way that we do things in the world economy today: it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that helps people do physical work. Artificial intelligence is making tools that help people do intellectual work.
I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?
Yeah, wow…I’m not a philosopher, so I actually don’t have like a…
Let me try a different tack. Is it artificial in the sense that it isn’t really intelligent and it’s just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?
I really liked this idea from Yuval Harari that I read a while back where he said there’s the difference between intelligence and sentience, where intelligence is more about the capacity to do things and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work. Things about understanding data, understanding the world around us that we can measure with sensors like video cameras or audio or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. I think the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things—helping computers understand data and make decisions about it.
Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?
I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.
Fair enough. I do want to talk about AGI in a little while. I have some questions around…We’ll come to that in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state-of-the-art? Where are we at in AI? Is this the beginning and you ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well underway to mastering that world?”
I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.
If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs, and said, “Spot the falcon.” And half the time it’s sticking halfway behind a tree, half the time it’s underwater; one time it’s got peanut butter smeared on it. A person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning? We’re used to knowing what things with peanut butter on them look like? What is it that people are doing that computers are having a hard time to do there?
I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and models the world. A lot of it is instinctual, and some of it is learned. I have young children, like a year-old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We have actually very sophisticated models of the world that maybe we take for granted sometimes because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.
To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning. Might actually have fewer assumptions baked in about how the world is structured, how objects look, what kind of composition of objects is actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use that, and it’s very sophisticated.
Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?
It’s an interesting question you note. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood is a really important part of developing human intelligence, and plays a really important part of developing human intelligence because it helps us build and calibrate these models of how the world works, which then we apply to all sorts of things like your question of the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.
But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?
I don’t think we generally start over. I think most of the time if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do that successfully all the time. So at least for image recognition, I think transfer learning works pretty well. For other kinds of domains, they can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.
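(As a concrete illustration of the transfer learning Bryan describes here, the sketch below reuses a classifier trained on a wide variety of object types and retrains only its final layer for a new task. The framework, model, and class count are illustrative assumptions rather than anything from the conversation, and it presumes a recent PyTorch/torchvision install.)
# transfer_learning_sketch.py - illustrative assumptions: ResNet-18 backbone, 10 new classes
import torch
import torch.nn as nn
from torchvision import models

num_new_classes = 10  # e.g., ten bird species we want to recognize

# Start from a network pretrained on ImageNet, i.e., on a wide variety of object types.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor so only the new head gets trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with one sized for the new task.
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training-loop sketch: `dataloader` would yield (images, labels) for the new classes.
# for images, labels in dataloader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()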
What about audio? Because I’m talking to you now and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about we live on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is that the same as teaching it to see something, as to teach it to hear something?
I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospects of being able to teach computers to continually understand audio, better and better. However, I wanted to point out, humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolve together, so they’re really well-matched.
With computers trying to understand audio, we haven’t gotten to that point yet. I remember some of the experiments that I’ve done in the past with speech recognition, that the recognition performance was very sensitive to compression artifacts that were actually not audible to humans. We could actually take a recording, like this one, and recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that. We certainly haven’t achieved that yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help it perform the tasks that we’re asking it to perform, but they’re not actually helpful to solving the fundamental tasks that we’re trying to perform. And we’re continually trying to improve our understanding of the tasks that we’re solving so that we can avoid this, but we’ve still got more work to do.
My standard question when I’m put in front of a chatbot or one of the devices that sits on everybody’s desktop, I can’t say them out loud because they’ll start talking to me right now, but the question I always ask is “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how sun is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What all do we have to get good at, for the computer to answer that question? Run me down the litany of all the things we can’t do, or that we’re not doing well yet, because there’s no system I’ve ever tried that answered that correctly.
I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.
Fair enough, but I would differ. You can go to Wolfram Alpha and say, “What’s bigger, the Statue of Liberty or the Empire State Building?” and it’ll answer that. And you can ask Amazon’s product that same question, and it’ll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven’t taught systems to disintermediate very well and so they don’t really know what I mean when I say “sun”?
I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question to me, I categorize this as a trivia question, and so I’m able to disambiguate all of those things, and look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. I just think it’s just not been a focus to try to solve stuff like that, and that’s why they’re not good.
AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next thing, that’s a watershed event, that happens? Now they can outbluff people in poker. What’s something that’s going to be, in a year, or two years, five years down the road, that one day, it wasn’t like that in the universe, and the next day it was? And the next day, the best Go player in the world was a machine.
The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives that their doctors have asked them to stop driving cars because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there’s going to be new delivery mechanisms that will become economically viable. I think intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.
We’ll come to that in just a minute, but I’m actually asking…We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then they’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beat Lee Sedol at Go. I’m wondering if there is something else like that—that it’s this binary milestone that we can all keep our eye open for?
I don’t know. As far as we have self-driving cars already, I don’t have a self-driving car that could say, for example, let me sit in it at nighttime, go to sleep and wake up, and it brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross country carrying stuff, that’s going to radically change the way that we distribute things. I do think that we have, as you said, we’re on the evolutionary path to self-driving cars, but there’s going to be some discrete moments when people actually start using them to do new things that will feel pretty significant.
As far as games and stuff, and computers being better at games than people, it’s funny because I feel like Silicon Valley has, sometimes, a very linear idea of intelligence. That one person is smarter than another person maybe because of an SAT score, or an IQ test, or something. They use that sort of linearity of an intelligence to where some people feel threatened by artificial intelligence because they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that’s going to lead to all sorts of surprising things, like Lee Sedol losing at Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go than I am doesn’t really change my life very much, because I’m not very good at Go. I don’t play Go. I don’t consider Go to be an important part of my intelligence. Same with chess. When Garry Kasparov lost to Deep Blue, that didn’t threaten my intelligence. I am sort of defining the way that I work and how I add value to the world, and what things make me happy on a lot of other axes besides “Can I play chess?” or “Can I play Go?” I think that speaks to the idea that intelligence really is very multifaceted. There’s a lot of different kinds – there’s probably thousands or millions of different kinds of intelligence – and it’s not very linearizable.
Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly intelligent machines, but they’re going to be intelligent in some very narrow domains, like a robot that plays Go better than me, or one that drives a car better than me. That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy, because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. It’s going to take a very long time, if ever, for artificial intelligence to become better at all of them than us, because, as I said, I don’t believe that intelligence is a linearizable thing.
And you said you weren’t a philosopher. I guess the thing that’s interesting to people is that there was a time when information couldn’t travel faster than a horse. And then the train came along, and information could travel faster. That’s why in the old Westerns, if they ever made it onto the train, that was it, and they were out of range; nothing traveled faster than the train. Then we had the telegraph and, all of a sudden, that was this amazing thing: information could travel at the speed of light. And then one time they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of those moments, I think, is an opportunity to pause, and reflect, and mark a milestone, and think about what it all means. I think that’s why, when a computer just beat these awesome poker players by learning to bluff, you just kind of want to think about it.
So let’s talk about jobs for a moment, because you’ve been talking around that for just a second. Just to set the question up: generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them, which reflects kind of what you were saying, is that there is a certain group of workers who are considered low-skilled, automation is going to take those low-skilled jobs, and a sizable part of the population is going to be locked out of the labor market, kind of like a permanent Great Depression, over and over and over, forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had these transformative technologies like electricity and mechanization, people took those technologies and used them to increase their own productivity and, therefore, their own incomes. And you never had unemployment go up because of them, because people just took them and made new jobs with them.” Of those three, or maybe a fourth one I didn’t cover, where do you find yourself?
I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way we should expect economic growth in the future is through increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, and with what money, and why do you expect the price to go up? I think the answer should be that the people in the future will have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing: because we find more productivity. I feel like this is actually necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. Figuring out how to improve productivity is a necessary thing for all of us, and I think AI is the way that we’re going to do that for the next several decades.
The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short-term. I think there will be people that suffer and in fact, I think, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exists a lot of people that feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think that it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, that we’re going to see increased inequality and that worries me.
The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.
So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.
You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those and just looking at unemployment for a minute, you say things are never that simple. But with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years, and it only moves between 5% and 10% because of the business cycle; there aren’t counterexamples. Just imagine if your job involved animals that performed physical labor, pulling and pushing and all of that, and somebody made the steam engine. That was disruptive. And even when we had that, and when we had the electrification of industry, and when we adopted steam power and went from 5% to 85% of our power being generated by steam in just 22 years, even with that kind of disruption, you still didn’t have any increase in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?
I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.
Do you worry about misuse of AI? I’m an optimist on all of this, and I know that every time we have some new technology come along, people are always looking at the bad cases. Take something like the internet: the internet has overwhelmingly been a force for good. It connects people in a profound way. There are a million things. And yeah, some people abuse it. But on net, I believe almost all technology is used for good, because I think, on average, people are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?
Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things, and AI will help them do that, and that will be scary. I think it’s interesting to ask where the real threat is going to come from. Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising the cyber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. We humans make lots of mistakes, and we shouldn’t give ourselves too easy a time here. We should learn from those mistakes, but we also do a lot of things well. We have used technologies in the past to make the world better, and I hope AI will do so as well.
Pedro Domingos wrote a book called The Master Algorithm, in which he says there are all of these different tools and techniques that we use in artificial intelligence, and he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you, or likely? Do you have any thoughts on that?
I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. I have brilliant researchers in my lab who are working very hard and doing amazing work, and most of the things they try fail, and they have to keep trying. I think that’s generally the case right now across all the people who are working on AI. The thing that’s different is that we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making progress, but a master algorithm that’s push-button, that can solve any problem you pose to it, is something that’s hard for me to conceive of with today’s state of artificial intelligence.
Of course, it’s doubtful we’ll have another AI winter because, like you said, AI is kind of delivering the goods, and there have been three things that happened to make that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms; we’ve learned to do things a lot smarter. And the third thing is that we have more data, because we are able to collect it, and store it, and whatnot. Granting that hardware may be the biggest of the driving factors, which of the other two would you say has been the bigger advance: that we have so much more data, or that we have so much better algorithms?
I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades and used to not work. When I was a PhD student studying AI, all the smart people told me, “Don’t work with deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” At the time, the hope was that support vector machines were going to be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, and actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.
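To make that contrast concrete, here is a minimal sketch, not from the interview, that fits both an “old” algorithm (a support vector machine) and a small neural network with the same scikit-learn workflow on a synthetic dataset; the dataset, sizes, and hyperparameters are made up, and the point is only that the algorithm families themselves are a few lines of familiar code, while the real difference over the years has been how much data we can feed them.

```python
# Toy comparison: the same workflow fits an SVM and a small neural network.
# The synthetic dataset stands in for "more data"; nothing here is real.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    ("support vector machine", SVC()),
    ("small neural network", MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)),
]
for name, model in models:
    model.fit(X_train, y_train)                      # identical training call
    print(name, "test accuracy:", model.score(X_test, y_test))
```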
Is there a phrase – it’s such a jargon-loaded industry now – are there any of the words that you just find rub you the wrong way? Because they don’t mean anything and people use them as if they do? Do you have anything like that?
Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.
Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?
NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.
We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.
I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. Rather than working on applied AI that might help some peripheral part of the company, something that would merely be nice to have, we’re actually trying to solve very fundamental problems that the company faces with AI. Hopefully, we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.
You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?
Yes, there is.
So everything you do, you have an internal customer for already?
That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what’s the fundamental goal of your work. If the goal is academic novelty, that would be fundamental research. Our goal is, we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.
In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”
It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.
And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.
I think the entire field, no matter what company you work at, has a shortage of qualified scientists who can do AI research, and that’s despite the fact that the number of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and excitement there is, and how many people are there that didn’t use to be there. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners who are trying to figure out what to do next: come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.
I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.
I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
That looks like a game, doesn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, and then you have a pool of successful employees, and you have a pool of job descriptions, and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”
That’s right.
That’s like a game, right? You have points.
That’s right.
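To make that “game” framing concrete, here is a purely hypothetical sketch of ranking resumes against a job description. The interview doesn’t describe how NVIDIA’s prototype actually works, so this uses simple TF-IDF cosine similarity rather than a model trained on past hiring outcomes; all of the names and text below are invented.

```python
# Hypothetical candidate-to-job matching sketch (not NVIDIA's system):
# rank resumes for one job description by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Research engineer: deep learning, CUDA, speech recognition"
resumes = {
    "candidate_a": "PhD in machine learning, built speech recognition models on GPUs",
    "candidate_b": "Front-end web developer, React and TypeScript",
    "candidate_c": "CUDA kernels, model training infrastructure, deep learning research",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job_description] + list(resumes.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Sort candidates so a hiring manager reviews the likeliest fits first,
# echoing the "sort, don't exclude" point made later in the interview.
for name, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```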
Would you ever productize anything, or is everything that you’re doing just for your own use?
We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.
But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?
Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities and make it more general, to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do, because as AI grows, we grow. Personally, I think for some of the things that we’re working on, productizing them directly just doesn’t really make sense; it’s not really in NVIDIA’s DNA, because it’s just not the business model that the company has.
I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?
First of all, I don’t think that this system – or I can’t imagine – using it to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to—if I can look at 10 instead—then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.
But an AI excluded 190 people from that position.
It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.
Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?
I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things, that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not actually very good at it. So I don’t actually think that we’re going to be able to enforce that AI is able to explain all of its decisions in a way that makes sense to humans. I do think that there are things that we can do to make the results of these systems more interpretable. For example, on the resume job description matching system that I mentioned earlier, we’ve built a prototype that can highlight parts of the resume that were most interesting to the model, both in a positive, and in a negative sense. That’s a baby step towards interpretability so that if you were to pull up that job description and a particular person and you could see how they matched, that might explain to you what the model was paying attention to as it made a ranking.
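Here is a rough sketch of that kind of baby-step interpretability, under the assumption of a simple linear model over word counts; the actual NVIDIA prototype is not described in the interview, and all of the data below is invented. With a linear model, each word’s contribution to the match score is just its learned weight times its count, so the most positive and most negative words in a resume can be highlighted.

```python
# Per-word contribution highlighting for a linear bag-of-words model.
# Everything here is made up for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["deep learning gpu research", "sales and marketing experience",
         "cuda kernels and model training", "retail management background"]
labels = [1, 0, 1, 0]                      # 1 = was a good fit for this role (fictional)

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

resume = "gpu model training plus some retail experience"
x = vec.transform([resume])
contributions = x.toarray()[0] * clf.coef_[0]          # weight * count per word
words = np.array(vec.get_feature_names_out())
order = np.argsort(contributions)

print("most negative:", list(zip(words[order[:3]], contributions[order[:3]].round(2))))
print("most positive:", list(zip(words[order[-3:]], contributions[order[-3:]].round(2))))
```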
It’s funny, because when you hear reasons why people exclude a resume… I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan, and the place the team liked to order pizza from didn’t have a vegan alternative. Those are anecdotal, of course, but people use all kinds of other things when they’re thinking about it.
Yeah. That’s actually one of the reasons why I’m excited about this particular system is that I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models, we should be able to train them to be more dispassionate than any human could be.
We’re running out of time. Let’s close up by: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?
I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.
I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI and not worry about it. But I feel like with most science fiction, especially movies – maybe books can be a little bit more intellectual and a little bit more interesting – it just sells more tickets to make people afraid than it does to show people a mundane existence where AI is helping people live better lives. That’s just not nearly as compelling a movie, so I don’t actually feel like the popular culture treatment of AI is very realistic.
All right. Well, on that note, I say, we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.
It was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.
Bryan Catanzaro: Thanks. It’s great to be here.
Let’s start off with my favorite opening question. What is artificial intelligence?
It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There are a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit more narrow, but I think it’s also a little bit more connected with why artificial intelligence is changing so many companies and so many things about the way the world economy works today: because it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that helps people do physical work. Artificial intelligence is making tools that help people do intellectual work.
I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?
Yeah, wow…I’m not a philosopher, so I actually don’t have like a…
Let me try a different tack. Is it artificial in the sense that it isn’t really intelligent and it’s just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?
I really liked this idea from Yuval Harari that I read a while back where he said there’s the difference between intelligence and sentience, where intelligence is more about the capacity to do things and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work. Things about understanding data, understanding the world around us that we can measure with sensors like video cameras or audio or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. I think the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things—helping computers understand data and make decisions about it.
Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?
I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.
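As a toy illustration of that kind of narrow intelligence, here is a minimal sketch of a sprinkler decision rule that weighs a few made-up sensor readings; the inputs and thresholds are entirely hypothetical, and a “smarter” version could learn them from historical data rather than hard-coding them.

```python
# A deliberately narrow "intelligent sprinkler": it decides only whether to water,
# but it can weigh more signals than a person glancing at the lawn would.
# Sensor names and thresholds are invented for illustration.
def should_water(soil_moisture: float, rain_forecast_mm: float,
                 temp_c: float, watered_hours_ago: float) -> bool:
    if rain_forecast_mm > 5.0:          # rain is coming, let nature do the work
        return False
    if watered_hours_ago < 12:          # avoid over-watering
        return False
    dryness = (0.4 - soil_moisture) + 0.01 * max(temp_c - 25, 0)
    return dryness > 0.05               # water only when the lawn is drying out

print(should_water(soil_moisture=0.30, rain_forecast_mm=0.0,
                   temp_c=32, watered_hours_ago=20))    # True
```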
Fair enough. I do want to talk about AGI in a little while. I have some questions around…We’ll come to that in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state-of-the-art? Where are we at in AI? Is this the beginning and you ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well underway to mastering that world?”
I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.
If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs and said, “Spot the falcon,” and half the time it’s sticking halfway behind a tree, half the time it’s underwater, and one time it’s got peanut butter smeared on it – a person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning? We’re used to knowing what things with peanut butter on them look like? What is it that people are doing that computers are having a hard time doing there?
I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and model the world. A lot of it is instinctual, and some of it is learned. I have young children, like a year old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We actually have very sophisticated models of the world that maybe we take for granted sometimes, because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.
To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning; it might actually have fewer assumptions baked in about how the world is structured, how objects look, and what kind of composition of objects is actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use it, and it’s very sophisticated.
Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?
That’s an interesting question you raise. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood plays a really important part in developing human intelligence, because it helps us build and calibrate these models of how the world works, which we then apply to all sorts of things, like your question of the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.
But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?
I don’t think we generally start over. I think most of the time if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do that successfully all the time. So at least for image recognition, I think transfer learning works pretty well. For other kinds of domains, they can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.
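For image recognition, that kind of transfer learning is a few lines in a modern framework. Here is a hedged sketch using PyTorch and torchvision, not tied to anything NVIDIA-specific from the interview: reuse a classifier pretrained on many object types, swap in a new final layer, and train only that layer for a new task such as recognizing birds. The batch of random images stands in for a real labeled dataset.

```python
# Transfer learning sketch: keep pretrained features, retrain only a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained on ImageNet
for param in model.parameters():
    param.requires_grad = False               # freeze the general-purpose features

model.fc = nn.Linear(model.fc.in_features, 2)  # new head: "bird" vs "not bird"
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch; a real run would loop over
# a labeled dataset of bird / non-bird images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```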
What about audio? Because I’m talking to you now and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about us living on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is teaching it to hear something the same as teaching it to see something?
I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospects of being able to teach computers to continually understand audio, better and better. However, I wanted to point out, humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolve together, so they’re really well-matched.
With computers trying to understand audio, we haven’t gotten to that point yet. I remember some of the experiments that I’ve done in the past with speech recognition, where the recognition performance was very sensitive to compression artifacts that were not audible to humans. We could take a recording, like this one, recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting, because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that, and we certainly haven’t achieved it yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help them perform the tasks that we’re asking them to perform, but that aren’t actually helpful for solving the fundamental task we’re trying to solve. We’re continually trying to improve our understanding of the tasks we’re solving so that we can avoid this, but we’ve still got more work to do.
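One common way to chip away at that kind of sensitivity, sketched below under the assumption that the ffmpeg command-line tool is installed, is to augment training data by randomly round-tripping clips through a lossy codec, so the model sees compressed and uncompressed versions of the same audio. The file names and bitrates are arbitrary, and this is not a description of the experiments mentioned above.

```python
# Augmentation sketch: randomly round-trip a WAV clip through MP3 so a speech
# model learns to be less sensitive to codec artifacts. Requires ffmpeg.
import random
import subprocess
import tempfile
from pathlib import Path

def maybe_recompress(wav_path: str, probability: float = 0.5) -> str:
    """Return a path to either the original clip or an MP3-round-tripped copy."""
    if random.random() > probability:
        return wav_path
    tmp = Path(tempfile.mkdtemp())
    mp3, out = tmp / "clip.mp3", tmp / "clip.wav"
    bitrate = random.choice(["32k", "64k", "128k"])   # vary how lossy the copy is
    subprocess.run(["ffmpeg", "-y", "-i", wav_path, "-b:a", bitrate, str(mp3)],
                   check=True, capture_output=True)
    subprocess.run(["ffmpeg", "-y", "-i", str(mp3), str(out)],
                   check=True, capture_output=True)
    return str(out)

# Usage inside a data loader: features = extract(maybe_recompress(path))
```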
My standard question when I’m put in front of a chatbot or one of the devices that sits on everybody’s desktop, I can’t say them out loud because they’ll start talking to me right now, but the question I always ask is “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how sun is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What all do we have to get good at, for the computer to answer that question? Run me down the litany of all the things we can’t do, or that we’re not doing well yet, because there’s no system I’ve ever tried that answered that correctly.
I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.
Fair enough, but I would differ. You can go to Wolfram Alpha and say, “What’s bigger, the Statue of Liberty or the Empire State Building?” and it’ll answer that. And you can ask Amazon’s product that same question, and it’ll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven’t taught systems to disambiguate very well, and so they don’t really know what I mean when I say “sun”?
I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question to me, I categorize this as a trivia question, and so I’m able to disambiguate all of those things, and look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. I just think it’s just not been a focus to try to solve stuff like that, and that’s why they’re not good.
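As a toy illustration of why the question becomes answerable once the words are resolved to entities, here is a sketch with a tiny hard-coded knowledge base; in a real assistant, the hard part is the upstream step of picking the right sense of “nickel” and “sun” from context, which is glossed over here.

```python
# Toy knowledge-base comparison: once mentions map to entities with sizes,
# "what's bigger" is a lookup. The KB and the sense choices are hard-coded.
KB = {
    "nickel (coin)":  {"diameter_m": 0.02121},
    "nickel (metal)": {"diameter_m": None},      # an element has no single size
    "sun (star)":     {"diameter_m": 1.3927e9},
    "son (person)":   {"diameter_m": None},
}

def answer_bigger(entity_a: str, entity_b: str) -> str:
    a, b = KB[entity_a]["diameter_m"], KB[entity_b]["diameter_m"]
    if a is None or b is None:
        return "I can't compare those."
    return entity_a if a > b else entity_b

# Pretend an upstream model resolved "nickel" -> coin and "the sun" -> star:
print(answer_bigger("nickel (coin)", "sun (star)"))   # sun (star)
```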
AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next thing, that’s a watershed event, that happens? Now they can outbluff people in poker. What’s something that’s going to be, in a year, or two years, five years down the road, that one day, it wasn’t like that in the universe, and the next day it was? And the next day, the best Go player in the world was a machine.
The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives that their doctors have asked them to stop driving cars because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there’s going to be new delivery mechanisms that will become economically viable. I think intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.
We’ll come to that in just a minute, but I’m actually asking…We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then they’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beat Lee Sedol at Go. I’m wondering if there is something else like that—that it’s this binary milestone that we can all keep our eye open for?
I don’t know. As far as we have self-driving cars already, I don’t have a self-driving car that could say, for example, let me sit in it at nighttime, go to sleep and wake up, and it brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross country carrying stuff, that’s going to radically change the way that we distribute things. I do think that we have, as you said, we’re on the evolutionary path to self-driving cars, but there’s going to be some discrete moments when people actually start using them to do new things that will feel pretty significant.
As far as games and stuff, and computers being better at games than people, it’s funny because I feel like Silicon Valley has, sometimes, a very linear idea of intelligence. That one person is smarter than another person maybe because of an SAT score, or an IQ test, or something. They use that sort of linearity of an intelligence to where some people feel threatened by artificial intelligence because they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that’s going to lead to all sorts of surprising things, like Lee Sedol losing to Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go then I am doesn’t really change my life very much, because I’m not very good at Go. I don’t play Go. I don’t consider Go to be an important part of my intelligence. Same with chess. When Gary Kasparov lost to Deep Blue, that didn’t threaten my intelligence. I am sort of defining the way that I work and how I add value to the world, and what things make me happy on a lot of other axes besides “Can I play chess?” or “Can I play Go?” I think that speaks to the idea that intelligence really is very multifaceted. There’s a lot of different kinds – there’s probably thousands or millions of different kinds of intelligence – and it’s not very linearizable.
Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly more intelligent machines, but they’re going to be increasingly more intelligent in some very narrow domains like “this is the better Go-playing robot than me”, or “this is the better car driver than me”. That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy. Because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. That’s going to take a very long time, if ever, for artificial intelligence to become better at all of them than us. Because, as I said, I don’t believe that intelligence is a linearizable thing.
And you said you weren’t a philosopher. I guess the thing that’s interesting to people, is there was a time when information couldn’t travel faster than a horse. And then the train came along, and information could travel. That’s why in the old Westerns – if they ever made it on the train, that was it, and they were out of range. Nothing traveled faster than the train. Then we had a telegraph and, all of a sudden, that was this amazing thing that information could travel at the speed of light. And then one time they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of them, and I think it’s just an opportunity to pause, and reflect, and to mark a milestone, and to think about what it all means. I think that’s why a computer just beat these awesome poker players. It learned to bluff. You just kind of want to think about it.
So let’s talk about jobs for a moment because you’ve been talking around that for just a second. Just to set the question up: Generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them reflects kind of what you were saying is that there are going to be a certain group of workers who are considered low skilled, and there are going to be automation that takes these low-skilled jobs, and that there’s going to be a sizable part of the population that’s locked out of the labor market, and it’s kind of like the permanent Great Depression over and over and over forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had these transformative technologies like electricity and mechanization, people take those technologies and they use them to increase their own productivity and, therefore, their own incomes. And you never have unemployment go up because of them, because people just take it and make a new job with it.” Of those three, or maybe a fourth one I didn’t cover; where do you find yourself?
I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way that we should expect economic growth in the future is by increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, and with what money, and why do you expect the price to go up? I think the answer to that question should be the people in the future should have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing. Because we find more productivity. I actually feel like this is actually necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. I actually feel like this is a necessary thing for all of us, is to figure out how to improve productivity, and I think AI is the way that we’re going to do that for the next several decades.
The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short-term. I think there will be people that suffer and in fact, I think, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exists a lot of people that feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think that it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, that we’re going to see increased inequality and that worries me.
The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.
So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.
You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those for a minute and just looking at the unemployment one for a minute, you say things are never that simple. But with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years and it only moves between 5% and 10% because of the business cycle, but there aren’t counterexamples. Just imagine if your job was you had animals that performed physical labor. They pulled, and pushed, and all of that. And somebody made the steam engine. That was disruptive. But even when we had that, we had electrification of industry. We adopted steam power. We went from 5% to 85% of our power being generated by steam in just 22 years. And even when you had that kind of disruption, you still didn’t have any increases in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?
I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.
Do you worry about misuse of AI? I’m an optimist on all of this. And I know that every time we have some new technology come along, people are always looking at the bad cases. You take something like the internet, and the internet has overwhelmingly been a force for good. It connects people in a profound way. There’s a million things. And yeah, some people abuse it. But on net, all technology, I believe, almost all technology on net is used for good because I think, on net, people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?
Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things and AI will help them do that, and that will be scary. I think it’s interesting, like, where is the real threat going to come from? Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising fiber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans, we make lots of mistakes and we shouldn’t give ourselves too easy of a time here. We should learn from those mistakes, but we also do a lot of things well. And we have used technologies in the past to make the world better, and I hope AI will do so as well.
Pedro Domingos wrote a book called The Master Algorithm where he says there are all of these different tools and techniques that we use in artificial intelligence. And he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you or likely, or do you have any thoughts on that?
I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. I have brilliant researchers in my lab who are working very hard and doing amazing work, and most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making progress, but a pushbutton master algorithm that can solve any problem you pose to it – that’s something that’s hard for me to conceive of with today’s state of artificial intelligence.
With AI, of course, it’s doubtful we’ll have another AI winter because, like you said, it’s delivering the goods, and there have been three things that happened to make that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms. We’ve learned to do things a lot smarter. And the third thing is we have more data, because we are able to collect it, and store it, and whatnot. Assuming you think the hardware is the biggest of the driving factors, which of the other two would you say has been the bigger advance: that we have so much more data, or so much better algorithms?
I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades, and used to not work. When I was a PhD student and I was studying AI, all the smart people told me, “Don’t work with deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” At the time, the hope was that that was going to be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, but actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.
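To make that point concrete, here is a minimal sketch – not something from the interview – of how one might watch the data effect directly: train a decades-old algorithm (a support vector machine) and a small neural network on growing slices of the same dataset and compare held-out accuracy. The scikit-learn digits dataset stands in, purely for illustration, for the much larger datasets being discussed.

# Illustrative only: compare an SVM and a small neural network as training data grows.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (100, 400, len(X_train)):  # progressively larger training sets
    svm = SVC().fit(X_train[:n], y_train[:n])
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                        random_state=0).fit(X_train[:n], y_train[:n])
    print(f"n={n:4d}  SVM acc={svm.score(X_test, y_test):.3f}  "
          f"MLP acc={mlp.score(X_test, y_test):.3f}")

On a toy set like this the two models stay close; the point of the sketch is the shape of the experiment, not the outcome.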
Is there a phrase – it’s such a jargon-loaded industry now – are there any words that just rub you the wrong way, because they don’t mean anything and people use them as if they do? Do you have anything like that?
Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.
Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?
NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.
We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.
I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. So rather than working on applied AI, that could maybe help some peripheral part of the company that maybe could be nice if we did that, we’re actually trying to solve very fundamental problems that the company faces with AI, and hopefully we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.
You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?
Yes, there is.
So everything you do, you have an internal customer for already?
That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what’s the fundamental goal of your work. If the goal is academic novelty, that would be fundamental research. Our goal is, we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.
In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”
It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.
And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.
I think the entire field, no matter what company you work at, the entire field has a shortage of qualified scientists that can do AI research, and that’s despite the fact that the amount of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and how much excitement, and how many people that are there that didn’t used to be there. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next, come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.
I know there are a lot of your projects I’m sure you can’t talk about, but tell me something you have done that you can talk about – what the goal was, and what you were able to achieve. Give us a success story.
I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
That looks like a game, doesn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, and then you have a pool of successful employees, and you have a pool of job descriptions, and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”
That’s right.
That’s like a game, right? You have points.
That’s right.
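As a rough sketch of that matching game – assuming nothing about NVIDIA’s actual system, and with invented job text, resumes, and candidate names – a simple baseline scores each resume against a job description with TF-IDF vectors and cosine similarity, then sorts the candidates.

# Hypothetical baseline: rank made-up resumes against a made-up job description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Deep learning research engineer: GPU programming, CUDA, Python"
resumes = {
    "candidate_a": "PhD in ML, wrote CUDA kernels, trained speech models in Python",
    "candidate_b": "Front-end developer, JavaScript, React, UI design",
    "candidate_c": "GPU performance engineer, C++ and Python, deep learning inference",
}

vectors = TfidfVectorizer().fit_transform([job_description] + list(resumes.values()))
scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()

# Sort rather than exclude: the hiring manager still makes the decision.
for name, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")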
Would you ever productize anything, or is everything that you’re doing just for your own use?
We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.
But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?
Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities, and make it more general to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do because, as AI grows, we grow. Personally, I think for some of the things that we’re working on, it just doesn’t really make sense – it’s not really in NVIDIA’s DNA to productize them directly, because it’s just not the business model that the company has.
I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?
First of all, I don’t think that this system – or I can’t imagine – using it to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to—if I can look at 10 instead—then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.
But an AI excluded 190 people from that position.
It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.
Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?
I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things, that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not actually very good at it. So I don’t actually think that we’re going to be able to enforce that AI is able to explain all of its decisions in a way that makes sense to humans. I do think that there are things that we can do to make the results of these systems more interpretable. For example, on the resume job description matching system that I mentioned earlier, we’ve built a prototype that can highlight parts of the resume that were most interesting to the model, both in a positive, and in a negative sense. That’s a baby step towards interpretability so that if you were to pull up that job description and a particular person and you could see how they matched, that might explain to you what the model was paying attention to as it made a ranking.
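A minimal sketch of that kind of highlighting, again assuming a plain text-similarity matcher rather than the prototype described here: remove each word of a resume in turn and measure how much the match score drops, so the words the model leaned on most – positively or negatively – can be shown to the reviewer.

# Illustrative word-level attribution via leave-one-out ablation on a TF-IDF match score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def word_importance(job_text, resume_text):
    vec = TfidfVectorizer().fit([job_text, resume_text])
    job_vec = vec.transform([job_text])
    base = cosine_similarity(job_vec, vec.transform([resume_text]))[0, 0]
    words = resume_text.split()
    importance = {}
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        score = cosine_similarity(job_vec, vec.transform([ablated]))[0, 0]
        importance[word] = base - score  # positive: the word helped the match
    return sorted(importance.items(), key=lambda pair: -pair[1])

print(word_importance(
    "deep learning engineer with CUDA and Python experience",
    "wrote CUDA kernels and Python tools for deep learning training"))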
It’s funny, because when you hear reasons why people exclude a resume – I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan and the place the team liked to order pizza from didn’t have a vegan option. Those are anecdotal of course, but people use all kinds of other things when they’re thinking about it.
Yeah. That’s actually one of the reasons why I’m excited about this particular system is that I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models, we should be able to train them to be more dispassionate than any human could be.
We’re running out of time, so let’s close with this: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that portrays artificial intelligence – like Ex Machina, or Her, or Westworld – that you look at and think, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?
I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.
I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI, and not worry about it. But I feel like most of science fiction – especially movies; maybe books can be a little bit more intellectual and a little bit more interesting – sells more tickets by making people afraid than by showing a mundane existence where AI is helping people live better lives. It’s just not nearly as compelling a movie, so I don’t actually feel like popular culture’s treatment of AI is very realistic.
All right. Well, on that note, I say, we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.
It was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now: iTunes | Play | Stitcher | RSS
Voices in AI – Episode 13: A Conversation with Bryan Catanzaro syndicated from http://ift.tt/2wBRU5Z
0 notes
You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those for a minute and just looking at the unemployment one for a minute, you say things are never that simple. But with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years and it only moves between 5% and 10% because of the business cycle, but there aren’t counterexamples. Just imagine if your job was you had animals that performed physical labor. They pulled, and pushed, and all of that. And somebody made the steam engine. That was disruptive. But even when we had that, we had electrification of industry. We adopted steam power. We went from 5% to 85% of our power being generated by steam in just 22 years. And even when you had that kind of disruption, you still didn’t have any increases in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?
I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.
Do you worry about misuse of AI? I’m an optimist on all of this. And I know that every time we have some new technology come along, people are always looking at the bad cases. You take something like the internet, and the internet has overwhelmingly been a force for good. It connects people in a profound way. There’s a million things. And yeah, some people abuse it. But on net, all technology, I believe, almost all technology on net is used for good because I think, on net, people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?
Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things and AI will help them do that, and that will be scary. I think it’s interesting, like, where is the real threat going to come from? Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising fiber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans, we make lots of mistakes and we shouldn’t give ourselves too easy of a time here. We should learn from those mistakes, but we also do a lot of things well. And we have used technologies in the past to make the world better, and I hope AI will do so as well.
Pedro Domingo wrote a book called The Master Algorithm where he says there are all of these different tools and techniques that we use in artificial intelligence. And he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you or likely, or do you have any thoughts on that?
I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. Most of the researchers – and I have brilliant researchers in my lab that are working very hard, and they’re doing amazing work. And most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making the progress, but I think having a master algorithm that’s pushbutton that can solve any problem you pose to it that’s something that’s hard for me to conceive of with today’s state of artificial intelligence.
AI, of course, it’s doubtful we’ll have another AI winter because, like you said, it’s kind of delivering the goods, and there have been three things that have happened that made that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms. We’ve learned to do things a lot smarter. And the third thing is we have more data, because we are able to collect it, and store it, and whatnot. Assuming you think the hardware is the biggest of the driving factors, what would you think has been the bigger advance? Is it that we have so much more data, or so much better algorithms?
I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades, and used to not work. When I was a PhD student and I was studying AI, all the smart people told me, “Don’t work with deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” Which, at the time, that was the hope that that was going to be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, but actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.
It’s such a jargon-loaded industry now – are there any words or phrases that just rub you the wrong way, because they don’t really mean anything and people use them as if they do? Do you have anything like that?
Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.
Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?
NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.
We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.
I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. Rather than working on applied AI that might help some peripheral part of the company – something that would merely be nice to have – we’re actually trying to solve very fundamental problems that the company faces with AI, and hopefully we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.
You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?
Yes, there is.
So everything you do, you have an internal customer for already?
That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what the fundamental goal of your work is. If the goal is academic novelty, that would be fundamental research. Our goal is different: we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.
In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”
It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.
And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.
I think the entire field, no matter what company you work at, the entire field has a shortage of qualified scientists that can do AI research, and that’s despite the fact that the amount of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and how much excitement, and how many people that are there that didn’t used to be there. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next, come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.
I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.
I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
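He doesn’t describe how the system works internally, but as a rough, hypothetical sketch of the general idea – scoring how well a resume matches a job description and ranking candidates by that score – one could represent both documents as TF-IDF vectors and compare them with cosine similarity. Every name, phrase, and parameter below is an invented illustration, not NVIDIA’s model.

```python
# Hypothetical sketch of resume-to-job matching, not NVIDIA's actual system:
# represent each resume and the job description as TF-IDF vectors and
# rank candidates for the job by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resumes = {
    "candidate_a": "CUDA kernels, deep learning frameworks, GPU performance tuning",
    "candidate_b": "front-end web development, React, TypeScript, design systems",
    "candidate_c": "compiler optimization, LLVM, C++, parallel programming",
}
job = "Engineer to optimize deep learning workloads on GPUs using CUDA and C++"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(resumes.values()) + [job])

# The last row is the job posting; the earlier rows are the candidates.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(resumes, scores), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: fit score {score:.2f}")
```

A system like the one he describes would presumably be trained on historical hiring outcomes rather than raw text overlap, but the basic shape, a model that turns a candidate and a job into a fit score and then ranks by it, is the same.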
That looks like a game, doesn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, and then you have a pool of successful employees, and you have a pool of job descriptions, and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”
That’s right.
That’s like a game, right? You have points.
That’s right.
Would you ever productize anything, or is everything that you’re doing just for your own use?
We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.
But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?
Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities, and make it more general to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do because, as AI grows, we grow. Personally, I think for some of the things that we’re working on, it just doesn’t really make sense for NVIDIA to productize them directly – it’s not really in NVIDIA’s DNA, because it’s just not the business model that the company has.
I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” In your case, your AI would actually be subject to that. Someone could ask, “Why did you pick that person over this person for that job?” Is that an answerable question?
First of all, I can’t imagine using this system to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to, I can look at 10 instead, then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.
But an AI excluded 190 people from that position.
It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.
Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?
I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not actually very good at it. So I don’t actually think that we’re going to be able to enforce that AI is able to explain all of its decisions in a way that makes sense to humans. I do think that there are things that we can do to make the results of these systems more interpretable. For example, on the resume and job description matching system that I mentioned earlier, we’ve built a prototype that can highlight parts of the resume that were most interesting to the model, both in a positive and in a negative sense. That’s a baby step towards interpretability, so that if you were to pull up that job description and a particular person, you could see how they matched, and that might explain to you what the model was paying attention to as it made a ranking.
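As a purely illustrative sketch of how that kind of highlighting can be produced – this is not the prototype he mentions – one simple technique is occlusion: remove one phrase of the resume at a time, re-score the match, and treat the change in score as that phrase’s contribution. The matching function, phrases, and job text below are all invented for the example.

```python
# Hypothetical occlusion-based highlighting, not the prototype described above:
# drop one phrase at a time and measure how much the match score changes.
# A positive delta means the phrase helped the match; a negative delta hurt it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_score(resume: str, job: str) -> float:
    tfidf = TfidfVectorizer(stop_words="english").fit([resume, job])
    vectors = tfidf.transform([resume, job])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

resume_phrases = ["CUDA kernel optimization", "amateur beekeeping",
                  "deep learning deployment", "fluent in Russian"]
job = "Optimize deep learning inference with custom CUDA kernels"

full = match_score(" ".join(resume_phrases), job)
for phrase in resume_phrases:
    without = " ".join(p for p in resume_phrases if p != phrase)
    delta = full - match_score(without, job)
    print(f"{phrase:30s} contribution {delta:+.3f}")
```

Gradient-based saliency or attention weights would give finer-grained highlights for a neural model, but the occlusion idea is the easiest way to see what “parts of the resume that were most interesting to the model” can mean in practice.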
It’s funny, because when you hear reasons why people exclude a resume – I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan, and the place the team liked to order pizza from didn’t have a vegan option. Those are anecdotal, of course, but people use all kinds of other things when they’re thinking about it.
Yeah. That’s actually one of the reasons why I’m excited about this particular system: I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models – we should be able to train them to be more dispassionate than any human could be.
We’re running out of time, so let’s close with this: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld, or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?
I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. Earlier I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have had drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There are all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speaking to the technology and its potential.
I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI, and not worry about it. But I feel like most of science fiction, especially movies – maybe books can be a little bit more intellectual and a little bit more interesting – sells more tickets by making people afraid than by showing a mundane existence where AI is helping people live better lives. That’s just not nearly as compelling a movie, so I don’t actually feel like the popular culture treatment of AI is very realistic.
All right. Well, on that note, I say we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.
It was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things and AI will help them do that, and that will be scary. I think it’s interesting, like, where is the real threat going to come from? Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising fiber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans, we make lots of mistakes and we shouldn’t give ourselves too easy of a time here. We should learn from those mistakes, but we also do a lot of things well. And we have used technologies in the past to make the world better, and I hope AI will do so as well.
Pedro Domingo wrote a book called The Master Algorithm where he says there are all of these different tools and techniques that we use in artificial intelligence. And he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you or likely, or do you have any thoughts on that?
I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. Most of the researchers – and I have brilliant researchers in my lab that are working very hard, and they’re doing amazing work. And most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making the progress, but I think having a master algorithm that’s pushbutton that can solve any problem you pose to it that’s something that’s hard for me to conceive of with today’s state of artificial intelligence.
AI, of course, it’s doubtful we’ll have another AI winter because, like you said, it’s kind of delivering the goods, and there have been three things that have happened that made that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms. We’ve learned to do things a lot smarter. And the third thing is we have more data, because we are able to collect it, and store it, and whatnot. Assuming you think the hardware is the biggest of the driving factors, what would you think has been the bigger advance? Is it that we have so much more data, or so much better algorithms?
I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades, and used to not work. When I was a PhD student and I was studying AI, all the smart people told me, “Don’t work with deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” Which, at the time, that was the hope that that was going to be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, but actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.
Is there a phrase – it’s such a jargon-loaded industry now – are there any of the words that you just find rub you the wrong way? Because they don’t mean anything and people use them as if they do? Do you have anything like that?
Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.
Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?
NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.
We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.
I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. So rather than working on applied AI, that could maybe help some peripheral part of the company that maybe could be nice if we did that, we’re actually trying to solve very fundamental problems that the company faces with AI, and hopefully we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.
You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?
Yes, there is.
So everything you do, you have an internal customer for already?
That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what’s the fundamental goal of your work. If the goal is academic novelty, that would be fundamental research. Our goal is, we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.
In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”
It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.
And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.
I think the entire field, no matter what company you work at, the entire field has a shortage of qualified scientists that can do AI research, and that’s despite the fact that the amount of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and how much excitement, and how many people that are there that didn’t used to be there. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next, come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.
I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.
I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
That looks like a game, isn’t it? I assume you have a pool of resumes or LinkedIn profiles or whatever, and then you have a pool of successful employees, and you have a pool of job descriptions and you’re trying to say, “How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?”
That’s right.
That’s like a game, right? You have points.
That’s right.
Would you ever productize anything, or is everything that you’re doing just for your own use?
We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.
But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?
Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities, and make it more general to try to solve bigger problems that address their whole market and not just one company’s needs. Partnering with other companies is good for NVIDIA because it helps us grow AI which is something we want to do because, as AI grows, we grow. Personally, I think some of the things that we’re working on; it just doesn’t really make sense. It’s not really in NVIDIA’s DNA to productize them directly because it’s just not the business model that the company has.
I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?
First of all, I don’t think that this system – or I can’t imagine – using it to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to—if I can look at 10 instead—then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.
But an AI excluded 190 people from that position.
It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.
Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?
I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things, that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not actually very good at it. So I don’t actually think that we’re going to be able to enforce that AI is able to explain all of its decisions in a way that makes sense to humans. I do think that there are things that we can do to make the results of these systems more interpretable. For example, on the resume job description matching system that I mentioned earlier, we’ve built a prototype that can highlight parts of the resume that were most interesting to the model, both in a positive, and in a negative sense. That’s a baby step towards interpretability so that if you were to pull up that job description and a particular person and you could see how they matched, that might explain to you what the model was paying attention to as it made a ranking.
It’s funny because when you hear reasons why people exclude a resume, I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan and the place they like to order pizza from didn’t have a vegan alternative that the team liked to order from. Those are anecdotal of course, but people use all kinds of other things when they’re thinking about it.
Yeah. That’s actually one of the reasons why I’m excited about this particular system: I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. We should be able to train our models to be more dispassionate than any human could be.
We’re running out of time, so let’s close with this: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld, or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?
I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.
I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI, and not worry about it. But I feel like most of science fiction is, especially movies – maybe books can be a little bit more intellectual and maybe a little bit more interesting – but especially movies, it just sells more movies to make people afraid, than it does to show people a mundane existence where AI is helping people live better lives. It’s just not nearly as compelling of a movie, so I don’t actually feel like popular culture treatment of AI is very realistic.
All right. Well, on that note, I say, we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.
It was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
from Gigaom https://gigaom.com/2017/10/16/voices-in-ai-episode-13-a-conversation-with-bryan-catanzaro/
Upper House Luxury Hotel Experience Aims for Custom Approach to Tech and Personalization
Pictured is Studio 70 at Hong Kong's Upper House, which assigns an individual staff member to each guest. Upper House
Skift Take: Upper House's general manager details how the Hong Kong-based hotel finds the balance between tech, customer relationship management, and frontline service.
— Colin Nagy
The Upper House Hong Kong, the flagship property in the Swire Hotels portfolio, is an example of a hotel that blends technology and human hospitality. It is a model that works, and although it is operating at an elevated luxury level, it points the way in some respects for hotels focused on an intuitive guest experience.
But these types of experiences require direction, teamwork, vision and a spirit of continual improvement.
To understand the approach at The Upper House Hong Kong, I caught up with the hotel’s general manager, Swiss-born Marcel Thoma, an alumnus of Ecole hôtelière de Lausanne and hotels such as The Carlyle in New York.
Thoma represents a new, progressive generation of hotel general manager, adept at bridging classic hospitality ideals with what is required to innovate in terms of service design. He talked to Skift about Swire’s approach and how the company thinks about tech, including giving customers flexibility in how they communicate their needs.
Skift: How does the Upper House balance digital customer relationship management and data with classic ideals of hospitality?
Marcel Thoma: I think they are equally important and complement each other, and the definition of what “classic” is may no longer apply in today’s hospitality industry. It is still vital for us to engage with our guests face to face, gather information and receive feedback. We listen. The close communication and data behind the scenes is just as important as the way it is utilized by employees on the frontline day to day. It’s about offering the highest level of personalization and having that human connection.
Skift: What did you learn earlier in your career in places like The Carlyle that you apply now?
Thoma: In my previous roles, I learned how to deal with high-profile guests and to understand that no matter how important, wealthy or famous the guests are, they need to be handled in an equal manner to those who are less high-profile, whilst also making them feel at home.
Skift: What does The Upper House innovate or alter to a typical guest experience?
Thoma: The innovation, which might sound rather basic, is really to have a great guest experience team that makes our guests feel welcomed and as though they are home. We broke down the job barriers between concierge and front office so that each member in our guest experience team is in charge of a guest in our house, thereby fostering a more vibrant relationship.
Skift: How are staff so knowledgeable and contextual on a regular basis throughout a stay?
Thoma: We share local and international news, industry trends and happenings with our team on a daily basis. It is important to be on the pulse of what is going on around us to ensure that our guests have the best experiences possible. Our internal communication is key; we run like a well-oiled machine and have to be prepared for our guests’ ever-changing needs.
Skift: Why do you use email as a key touch point for communication? Is it the lack of friction?
Thoma: Nowadays we are in touch with our guests via multiple channels including WeChat, particularly as about 35 percent of our guests are from Mainland China and it is sometimes their preferred method of communication. We really try to customize the way we keep in touch with our guests. Luxury hospitality is about preference and that shouldn’t be limited to what they want to eat or drink.
Skift: Can you outline anything you have coming up on the horizon to optimize the process?
Thoma: At the moment, we are exploring a new system which will help us better utilize the information on guest preferences. There are specific companies which help hotels respond to guests via WeChat, WhatsApp, etc., according to guest preferences. It basically changes the way guests engage with the frontline staff, either before their arrival or whilst on property.
Skift: How do you hire staff? What sorts of backgrounds do well at Swire?
Thoma: We hire individuals that reflect our core values of passion, creativity, enthusiasm and spontaneity. Those who flourish in our dynamic work environments have a traditional education but, more importantly, possess an entrepreneurial spirit and curiosity to explore and try new things. A unique part of Swire’s culture is encouraging employees to embrace challenges, take risks and learn from mistakes.
Skift: What other hotels or experiences in the world do you look up to or respect as a peer?
Thoma: I have personally learned a lot from previous mentors and one of the most important in my career has been Mr. James McBride, co-owner of the Nihiwatu Resorts. He taught me the importance of delivering an excellent experience and to always exceed customers’ expectations.
Dean Winter, director of Operations for Swire Hotels, has ingrained in me the importance of pursuing a vision, to be who you are and to never pretend to be something you’re not. I’ve also been fortunate to work alongside a diverse and talented group of colleagues across the hospitality industry who share their experiences and whom I continue to learn from.
Voices in AI – Episode 13: A Conversation with Bryan Catanzaro
Today's leading minds talk AI with host Byron Reese
In this episode, Byron and Bryan talk about sentience, transfer learning, speech recognition, autonomous vehicles, and economic growth.
Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.
Bryan Catanzaro: Thanks. It’s great to be here.
Let’s start off with my favorite opening question. What is artificial intelligence?
It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There are a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit more narrow, but I think it’s also more connected with why artificial intelligence is changing so many companies and so many things about the way we do things in the world economy today: it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that helps people do physical work. Artificial intelligence is making tools that help people do intellectual work.
I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?
Yeah, wow…I’m not a philosopher, so I actually don’t have like a…
Let me try a different tack. Is it artificial in the sense that it isn’t really intelligent and it’s just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?
I really liked this idea from Yuval Harari that I read a while back where he said there’s the difference between intelligence and sentience, where intelligence is more about the capacity to do things and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work. Things about understanding data, understanding the world around us that we can measure with sensors like video cameras or audio or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. I think the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things—helping computers understand data and make decisions about it.
Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?
I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.
Fair enough. I do want to talk about AGI in a little while. I have some questions around…We’ll come to that in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state-of-the-art? Where are we at in AI? Is this the beginning and you ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well underway to mastering that world?”
I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.
If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs, and said, “Spot the falcon.” And half the time it’s sticking halfway behind a tree, half the time it’s underwater; one time it’s got peanut butter smeared on it. A person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning? We’re used to knowing what things with peanut butter on them look like? What is it that people are doing that computers are having a hard time to do there?
I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and models the world. A lot of it is instinctual, and some of it is learned. I have young children, like a year-old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We have actually very sophisticated models of the world that maybe we take for granted sometimes because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.
To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning. It might actually have fewer assumptions baked in about how the world is structured, how objects look, and what kinds of compositions of objects are actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use that, and it’s very sophisticated.
Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?
It’s an interesting question, you know. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood is a really important part of developing human intelligence, because it helps us build and calibrate these models of how the world works, which then we apply to all sorts of things like your question of the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.
But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?
I don’t think we generally start over. I think most of the time if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do that successfully all the time. So at least for image recognition, I think transfer learning works pretty well. For other kinds of domains, they can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.
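A minimal sketch of that kind of transfer learning, assuming a PyTorch/torchvision setup (the backbone choice, class count, and data are placeholders, not anything from the interview): start from a classifier pretrained on a broad image dataset, freeze its features, and retrain only a new final layer for the new categories.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a broad image dataset.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the shared feature extractor; its higher-level features transfer well.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for a new task, e.g. a hypothetical
# two-class "bird vs. not bird" problem.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of (assumed) preprocessed images and labels."""
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the new head is trained, a usable classifier for the new categories can often be reached with far less data than training from scratch, which is the point Catanzaro makes about image recognition.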
What about audio? Because I’m talking to you now and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about we live on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is that the same as teaching it to see something, as to teach it to hear something?
I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospects of being able to teach computers to continually understand audio, better and better. However, I wanted to point out, humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolve together, so they’re really well-matched.
With computers trying to understand audio, we haven’t gotten to that point yet. I remember from some of the experiments that I’ve done in the past with speech recognition that the recognition performance was very sensitive to compression artifacts that were actually not audible to humans. We could actually take a recording, like this one, and recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that. We certainly haven’t achieved that yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help them perform the tasks that we’re asking them to perform, but that are not actually helpful to solving the fundamental tasks that we’re trying to perform. And we’re continually trying to improve our understanding of the tasks that we’re solving so that we can avoid this, but we’ve still got more work to do.
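One generic way to push a model toward that kind of invariance (an augmentation sketch of my own, not the experiment described above) is to round-trip some training clips through a lossy codec so the network sees compression artifacts paired with unchanged transcripts. This assumes the ffmpeg command-line tool is installed and on the PATH.

```python
import os
import subprocess
import tempfile

def compression_roundtrip(wav_in, wav_out, bitrate="32k"):
    """Re-encode a clip through low-bitrate MP3 and back to WAV as data augmentation."""
    with tempfile.TemporaryDirectory() as tmp:
        mp3_path = os.path.join(tmp, "clip.mp3")
        # Encode to a lossy format, then decode back; the label (transcript) stays
        # the same, so the model is encouraged to ignore the codec artifacts.
        subprocess.run(["ffmpeg", "-y", "-i", wav_in, "-b:a", bitrate, mp3_path],
                       check=True, capture_output=True)
        subprocess.run(["ffmpeg", "-y", "-i", mp3_path, wav_out],
                       check=True, capture_output=True)
```

Mixing a fraction of such round-tripped clips into training is one plausible mitigation; it does not guarantee the invariance Catanzaro is describing, only nudges the model toward it.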
My standard question when I’m put in front of a chatbot or one of the devices that sits on everybody’s desktop, I can’t say them out loud because they’ll start talking to me right now, but the question I always ask is “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how sun is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What all do we have to get good at, for the computer to answer that question? Run me down the litany of all the things we can’t do, or that we’re not doing well yet, because there’s no system I’ve ever tried that answered that correctly.
I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.
Fair enough, but I would differ. You can go to Wolfram Alpha and say, “What’s bigger, the Statue of Liberty or the Empire State Building?” and it’ll answer that. And you can ask Amazon’s product that same question, and it’ll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven’t taught systems to disambiguate very well and so they don’t really know what I mean when I say “sun”?
I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question to me, I categorize this as a trivia question, and so I’m able to disambiguate all of those things, and look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. I just think it’s just not been a focus to try to solve stuff like that, and that’s why they’re not good.
AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next thing, that’s a watershed event, that happens? Now they can outbluff people in poker. What’s something that’s going to be, in a year, or two years, five years down the road, that one day, it wasn’t like that in the universe, and the next day it was? And the next day, the best Go player in the world was a machine.
The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives that their doctors have asked them to stop driving cars because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there’s going to be new delivery mechanisms that will become economically viable. I think intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.
We’ll come to that in just a minute, but I’m actually asking…We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then they’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beat Lee Sedol at Go. I’m wondering if there is something else like that—that it’s this binary milestone that we can all keep our eye open for?
I don’t know. As far as we have self-driving cars already, I don’t have a self-driving car that could say, for example, let me sit in it at nighttime, go to sleep and wake up, and it brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross country carrying stuff, that’s going to radically change the way that we distribute things. I do think that we have, as you said, we’re on the evolutionary path to self-driving cars, but there’s going to be some discrete moments when people actually start using them to do new things that will feel pretty significant.
As far as games and stuff, and computers being better at games than people, it’s funny because I feel like Silicon Valley has, sometimes, a very linear idea of intelligence. That one person is smarter than another person maybe because of an SAT score, or an IQ test, or something. They use that sort of linearity of intelligence to where some people feel threatened by artificial intelligence, because they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that’s going to lead to all sorts of surprising things, like Lee Sedol losing at Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go than I am doesn’t really change my life very much, because I’m not very good at Go. I don’t play Go. I don’t consider Go to be an important part of my intelligence. Same with chess. When Garry Kasparov lost to Deep Blue, that didn’t threaten my intelligence. I am sort of defining the way that I work and how I add value to the world, and what things make me happy on a lot of other axes besides “Can I play chess?” or “Can I play Go?” I think that speaks to the idea that intelligence really is very multifaceted. There’s a lot of different kinds – there’s probably thousands or millions of different kinds of intelligence – and it’s not very linearizable.
Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly more intelligent machines, but they’re going to be increasingly more intelligent in some very narrow domains like “this is the better Go-playing robot than me”, or “this is the better car driver than me”. That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy. Because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. That’s going to take a very long time, if ever, for artificial intelligence to become better at all of them than us. Because, as I said, I don’t believe that intelligence is a linearizable thing.
And you said you weren’t a philosopher. I guess the thing that’s interesting to people, is there was a time when information couldn’t travel faster than a horse. And then the train came along, and information could travel. That’s why in the old Westerns – if they ever made it on the train, that was it, and they were out of range. Nothing traveled faster than the train. Then we had a telegraph and, all of a sudden, that was this amazing thing that information could travel at the speed of light. And then one time they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of them, and I think it’s just an opportunity to pause, and reflect, and to mark a milestone, and to think about what it all means. I think that’s why a computer just beat these awesome poker players. It learned to bluff. You just kind of want to think about it.
So let’s talk about jobs for a moment because you’ve been talking around that for just a second. Just to set the question up: Generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them reflects kind of what you were saying is that there are going to be a certain group of workers who are considered low skilled, and there are going to be automation that takes these low-skilled jobs, and that there’s going to be a sizable part of the population that’s locked out of the labor market, and it’s kind of like the permanent Great Depression over and over and over forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had these transformative technologies like electricity and mechanization, people take those technologies and they use them to increase their own productivity and, therefore, their own incomes. And you never have unemployment go up because of them, because people just take it and make a new job with it.” Of those three, or maybe a fourth one I didn’t cover; where do you find yourself?
I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way that we should expect economic growth in the future is by increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, and with what money, and why do you expect the price to go up? I think the answer to that question should be the people in the future should have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing. Because we find more productivity. I actually feel like this is actually necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. I actually feel like this is a necessary thing for all of us, is to figure out how to improve productivity, and I think AI is the way that we’re going to do that for the next several decades.
The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short-term. I think there will be people that suffer and in fact, I think, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exists a lot of people that feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think that it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, that we’re going to see increased inequality and that worries me.
The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.
So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.
You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those for a minute and just looking at the unemployment one for a minute, you say things are never that simple. But with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years and it only moves between 5% and 10% because of the business cycle, but there aren’t counterexamples. Just imagine if your job was you had animals that performed physical labor. They pulled, and pushed, and all of that. And somebody made the steam engine. That was disruptive. But even when we had that, we had electrification of industry. We adopted steam power. We went from 5% to 85% of our power being generated by steam in just 22 years. And even when you had that kind of disruption, you still didn’t have any increases in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?
I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.
Do you worry about misuse of AI? I’m an optimist on all of this. And I know that every time we have some new technology come along, people are always looking at the bad cases. You take something like the internet, and the internet has overwhelmingly been a force for good. It connects people in a profound way. There’s a million things. And yeah, some people abuse it. But on net, all technology, I believe, almost all technology on net is used for good because I think, on net, people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?
Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things, and AI will help them do that, and that will be scary. I think it’s interesting, like, where is the real threat going to come from? Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It’s going to be more about things that you can do after compromising the cyber systems of some adversary, and things that you can do to manipulate them using AI. There’s been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn’t about sending evil killer robots. It was more about changing people’s opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven’t seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans, we make lots of mistakes and we shouldn’t give ourselves too easy of a time here. We should learn from those mistakes, but we also do a lot of things well. And we have used technologies in the past to make the world better, and I hope AI will do so as well.
Pedro Domingos wrote a book called The Master Algorithm where he says there are all of these different tools and techniques that we use in artificial intelligence. And he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you or likely, or do you have any thoughts on that?
I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. Most of the researchers – and I have brilliant researchers in my lab that are working very hard, and they’re doing amazing work. And most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making the progress, but I think having a master algorithm that’s pushbutton that can solve any problem you pose to it that’s something that’s hard for me to conceive of with today’s state of artificial intelligence.
AI, of course, it’s doubtful we’ll have another AI winter because, like you said, it’s kind of delivering the goods, and there have been three things that have happened that made that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms. We’ve learned to do things a lot smarter. And the third thing is we have more data, because we are able to collect it, and store it, and whatnot. Assuming you think the hardware is the biggest of the driving factors, what would you think has been the bigger advance? Is it that we have so much more data, or so much better algorithms?
Byron Reese: This is “Voices in AI” brought to you by Gigaom. I’m Byron Reese. Today, our guest is Bryan Catanzaro. He is the head of Applied AI Research at NVIDIA. He has a BS in computer science and Russian from BYU, an MS in electrical engineering from BYU, and a PhD in both electrical engineering and computer science from UC Berkeley. Welcome to the show, Bryan.
Bryan Catanzaro: Thanks. It’s great to be here.
Let’s start off with my favorite opening question. What is artificial intelligence?
It’s such a great question. I like to think about artificial intelligence as making tools that can perform intellectual work. Hopefully, those are useful tools that can help people be more productive in the things that they need to do. There’s a lot of different ways of thinking about artificial intelligence, and maybe the way that I’m talking about it is a little bit more narrow, but I think it’s also a little bit more connected with why artificial intelligence is changing so many companies and so many things about the way that we do things in the world economy today is because it actually is a practical thing that helps people be more productive in their work. We’ve been able to create industrialized societies with a lot of mechanization that help people do physical work. Artificial intelligence is making tools that help people do intellectual work.
I ask you what artificial intelligence is, and you said it’s doing intellectual work. That’s sort of using the word to define it, isn’t it? What is that? What is intelligence?
Yeah, wow…I’m not a philosopher, so I actually don’t have like a…
Let me try a different tack. Is it artificial in the sense that it isn't really intelligent and it's just pretending to be, or is it really smart? Is it actually intelligent and we just call it artificial because we built it?
I really liked this idea from Yuval Harari that I read a while back where he said there’s the difference between intelligence and sentience, where intelligence is more about the capacity to do things and sentience is more about being self-aware and being able to reason in the way that human beings reason. My belief is that we’re building increasingly intelligent systems that can perform what I would call intellectual work. Things about understanding data, understanding the world around us that we can measure with sensors like video cameras or audio or that we can write down in text, or record in some form. The process of interpreting that data and making decisions about what it means, that’s intellectual work, and that’s something that we can create machines to be more and more intelligent at. I think the definitions of artificial intelligence that move more towards consciousness and sentience, I think we’re a lot farther away from that as a community. There are definitely people that are super excited about making generally intelligent machines, but I think that’s farther away and I don’t know how to define what general intelligence is well enough to start working on that problem myself. My work focuses mostly on practical things—helping computers understand data and make decisions about it.
Fair enough. I’ll only ask you one more question along those lines. I guess even down in narrow AI, though, if I had a sprinkler that comes on when my grass gets dry, it’s responding to its environment. Is that an AI?
I’d say it’s a very small form of AI. You could have a very smart sprinkler that was better than any person at figuring out when the grass needed to be watered. It could take into account all sorts of sensor data. It could take into account historical information. It might actually be more intelligent at figuring out how to irrigate than a human would be. And that’s a very narrow form of intelligence, but it’s a useful one. So yeah, I do think that could be considered a form of intelligence. Now it’s not philosophizing about the nature of irrigation and its harm on the planet or the history of human interventions on the world, or anything like that. So it’s very narrow, but it’s useful, and it is intelligent in its own way.
Fair enough. I do want to talk about AGI in a little while. I have some questions around…We’ll come to that in just a moment. Just in the narrow AI world, just in your world of using data and computers to solve problems, if somebody said, “Bryan, what is the state-of-the-art? Where are we at in AI? Is this the beginning and you ‘ain’t seen nothing yet’? Or are we really doing a lot of cool things, and we are well underway to mastering that world?”
I think we’re just at the beginning. We’ve seen so much progress over the past few years. It’s been really quite astonishing, the kind of progress we’ve seen in many different domains. It all started out with image recognition and speech recognition, but it’s gone a long way from there. A lot of the products that we interact with on a daily basis over the internet are using AI, and they are providing value to us. They provide our social media feeds, they provide recommendations and maps, they provide conversational interfaces like Siri or Android Assistant. All of those things are powered by AI and they are definitely providing value, but we’re still just at the beginning. There are so many things we don’t know yet how to do and so many underexplored problems to look at. So I believe we’ll continue to see applications of AI come up in new places for quite a while to come.
If I took a little statuette of a falcon, let’s say it’s a foot tall, and I showed it to you, and then I showed you some photographs, and said, “Spot the falcon.” And half the time it’s sticking halfway behind a tree, half the time it’s underwater; one time it’s got peanut butter smeared on it. A person can do that really well, but computers are far away from that. Is that an example of us being really good at transfer learning? We’re used to knowing what things with peanut butter on them look like? What is it that people are doing that computers are having a hard time to do there?
I believe that people have evolved, over a very long period of time, to operate on planet Earth with the sensors that we have. So we have a lot of built-in knowledge that tells us how to process the sensors that we have and models the world. A lot of it is instinctual, and some of it is learned. I have young children, like a year-old or so. They spend an awful lot of time just repetitively probing the world to see how it’s going to react when they do things, like pushing on a string, or a ball, and they do it over and over again because I think they’re trying to build up their models about the world. We have actually very sophisticated models of the world that maybe we take for granted sometimes because everyone seems to get them so easily. It’s not something that you have to learn in school. But these models are actually quite useful, and they’re more sophisticated than – and more general than – the models that we currently can build with today’s AI technology.
To your question about transfer learning, I feel like we’re really good at transfer learning within the domain of things that our eyes can see on planet Earth. There are probably a lot of situations where an AI would be better at transfer learning. Might actually have fewer assumptions baked in about how the world is structured, how objects look, what kind of composition of objects is actually permissible. I guess I’m just trying to say we shouldn’t forget that we come with a lot of context. That’s instinctual, and we use that, and it’s very sophisticated.
Do you take from that that we ought to learn how to embody an AI and just let it wander around the world, bumping into things and poking at them and all of that? Is that what you’re saying? How do we overcome that?
It’s an interesting question you note. I’m not personally working on trying to build artificial general intelligence, but it will be interesting for those people that are working on it to see what kind of childhood is necessary for an AI. I do think that childhood is a really important part of developing human intelligence, and plays a really important part of developing human intelligence because it helps us build and calibrate these models of how the world works, which then we apply to all sorts of things like your question of the falcon statue. Will computers need things like that? It’s possible. We’ll have to see. I think one of the things that’s different about computers is that they’re a lot better at transmitting information identically, so it may be the kind of thing that we can train once, and then just use repeatedly – as opposed to people, where the process of replicating a person is time-consuming and not exact.
But that transfer learning problem isn’t really an AGI problem at all, though. Right? We’ve taught a computer to recognize a cat, by giving it a gazillion images of a cat. But if we want to teach it how to recognize a bird, we have to start over, don’t we?
I don’t think we generally start over. I think most of the time if people wanted to create a new classifier, they would use transfer learning from an existing classifier that had been trained on a wide variety of different object types. It’s actually not very hard to do that, and people do that successfully all the time. So at least for image recognition, I think transfer learning works pretty well. For other kinds of domains, they can be a little bit more challenging. But at least for image recognition, we’ve been able to find a set of higher-level features that are very useful in discriminating between all sorts of different kinds of objects, even objects that we haven’t seen before.
What about audio? Because I’m talking to you now and I’m snapping my fingers. You don’t have any trouble continuing to hear me, but a computer trips over that. What do you think is going on in people’s minds? Why are we good at that, do you think? To get back to your point about we live on Earth, it’s one of those Earth things we do. But as a general rule, how do we teach that to a computer? Is that the same as teaching it to see something, as to teach it to hear something?
I think it’s similar. The best speech recognition accuracies come from systems that have been trained on huge amounts of data, and there does seem to be a relationship that the more data we can train a model on, the better the accuracy gets. We haven’t seen the end of that yet. I’m pretty excited about the prospects of being able to teach computers to continually understand audio, better and better. However, I wanted to point out, humans, this is kind of our superpower: conversation and communication. You watch birds flying in a flock, and the birds can all change direction instantaneously, and the whole flock just moves, and you’re like, “How do you do that and not run into each other?” They have a lot of built-in machinery that allows them to flock together. Humans have a lot of built-in machinery for conversation and for understanding spoken language. The pathways for speaking and the pathways for hearing evolve together, so they’re really well-matched.
With computers trying to understand audio, we haven’t gotten to that point yet. I remember some of the experiments that I’ve done in the past with speech recognition, that the recognition performance was very sensitive to compression artifacts that were actually not audible to humans. We could actually take a recording, like this one, and recompress it in a way that sounded identical to a person, and observe a measurable difference in the recognition accuracy of our model. That was a little disconcerting because we’re trying to train the model to be invariant to all the things that humans are invariant to, but it’s actually quite hard to do that. We certainly haven’t achieved that yet. Often, our models are still what we would call “overfitting”, where they’re paying attention to a lot of details that help it perform the tasks that we’re asking it to perform, but they’re not actually helpful to solving the fundamental tasks that we’re trying to perform. And we’re continually trying to improve our understanding of the tasks that we’re solving so that we can avoid this, but we’ve still got more work to do.
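One standard way to chip away at that kind of overfitting (a generic technique, not something described in the interview) is to augment the training set with recompressed copies of the audio, so the model sees codec artifacts paired with the same transcripts and learns to ignore them. A rough sketch, assuming the ffmpeg command-line tool is installed and that the directory names are placeholders:

```python
# Augmentation sketch: round-trip training audio through a lossy codec so a speech
# model is exposed to compression artifacts it should learn to be invariant to.
# Assumes ffmpeg is installed; "train_audio" and "train_audio_aug" are placeholder paths.
import subprocess
from pathlib import Path

def recompress(wav_path, out_path, bitrate="32k"):
    """Encode a WAV file to low-bitrate MP3, then decode it back to WAV."""
    mp3_path = out_path.with_suffix(".mp3")
    subprocess.run(["ffmpeg", "-y", "-i", str(wav_path),
                    "-codec:a", "libmp3lame", "-b:a", bitrate, str(mp3_path)], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", str(mp3_path), str(out_path)], check=True)

out_dir = Path("train_audio_aug")
out_dir.mkdir(exist_ok=True)
for wav in Path("train_audio").glob("*.wav"):
    # Keep the original clip and add a recompressed variant with the same transcript.
    recompress(wav, out_dir / wav.name)
```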
My standard question when I’m put in front of a chatbot or one of the devices that sits on everybody’s desktop, I can’t say them out loud because they’ll start talking to me right now, but the question I always ask is “What is bigger, a nickel or the sun?” To date, nothing has ever been able to answer that question. It doesn’t know how sun is spelled. “Whose son? The sun? Nickel? That’s actually a coin.” All of that. What all do we have to get good at, for the computer to answer that question? Run me down the litany of all the things we can’t do, or that we’re not doing well yet, because there’s no system I’ve ever tried that answered that correctly.
I think one of the things is that we’re typically not building chat systems to answer trivia questions just like that. I think if we were building a special-purpose trivia system for questions like that, we probably could answer it. IBM Watson did pretty well on Jeopardy, because it was trained to answer questions like that. I think we definitely have the databases, the knowledge bases, to answer questions like that. The problem is that kind of a question is really outside of the domain of most of the personal assistants that are being built as products today because honestly, trivia bots are fun, but they’re not as useful as a thing that can set a timer, or check the weather, or play a song. So those are mostly the things that those systems are focused on.
Fair enough, but I would differ. You can go to Wolfram Alpha and say, "What's bigger, the Statue of Liberty or the Empire State Building?" and it'll answer that. And you can ask Amazon's product that same question, and it'll answer it. Is that because those are legit questions and my question is not legit, or is it because we haven't taught systems to disambiguate very well and so they don't really know what I mean when I say "sun"?
I think that’s probably the issue. There’s a language modeling problem when you say, “What’s bigger, a nickel or the sun?” The sun can mean so many different things, like you were saying. Nickel, actually, can be spelled a couple of different ways and has a couple of different meanings. Dealing with ambiguities like that is a little bit hard. I think when you ask that question to me, I categorize this as a trivia question, and so I’m able to disambiguate all of those things, and look up the answer in my little knowledge base in my head, and answer your question. But I actually don’t think that particular question is impossible to solve. I just think it’s just not been a focus to try to solve stuff like that, and that’s why they’re not good.
AIs have done a really good job playing games: Deep Blue, Watson, AlphaGo, and all of that. I guess those are constrained environments with a fixed set of rules, and it’s easy to understand who wins, and what a point is, and all that. What is going to be the next thing, that’s a watershed event, that happens? Now they can outbluff people in poker. What’s something that’s going to be, in a year, or two years, five years down the road, that one day, it wasn’t like that in the universe, and the next day it was? And the next day, the best Go player in the world was a machine.
The thing that’s on my mind for that right now is autonomous vehicles. I think it’s going to change the world forever to unchain people from the driver’s seat. It’s going to give people hugely increased mobility. I have relatives that their doctors have asked them to stop driving cars because it’s no longer safe for them to be doing that, and it restricts their ability to get around the world, and that frustrates them. It’s going to change the way that we all live. It’s going to change the real estate markets, because we won’t have to park our cars in the same places that we’re going to. It’s going to change some things about the economy, because there’s going to be new delivery mechanisms that will become economically viable. I think intelligence that can help robots essentially drive around the roads, that’s the next thing that I’m most excited about, that I think is really going to change everything.
We’ll come to that in just a minute, but I’m actually asking…We have self-driving cars, and on an evolutionary basis, they’ll get a little better and a little better. You’ll see them more and more, and then someday there’ll be even more of them, and then they’ll be this and this and this. It’s not that surprise moment, though, of AlphaGo just beat Lee Sedol at Go. I’m wondering if there is something else like that—that it’s this binary milestone that we can all keep our eye open for?
I don’t know. As far as we have self-driving cars already, I don’t have a self-driving car that could say, for example, let me sit in it at nighttime, go to sleep and wake up, and it brought me to Disneyland. I would like that kind of self-driving car, but that car doesn’t exist yet. I think self-driving trucks that can go cross country carrying stuff, that’s going to radically change the way that we distribute things. I do think that we have, as you said, we’re on the evolutionary path to self-driving cars, but there’s going to be some discrete moments when people actually start using them to do new things that will feel pretty significant.
As far as games and stuff, and computers being better at games than people, it's funny because I feel like Silicon Valley has, sometimes, a very linear idea of intelligence. That one person is smarter than another person maybe because of an SAT score, or an IQ test, or something. They use that sort of linearity of intelligence to where some people feel threatened by artificial intelligence because they extrapolate that artificial intelligence is getting smarter and smarter along this linear scale, and that's going to lead to all sorts of surprising things, like Lee Sedol losing at Go, but on a much bigger scale for all of us. I feel kind of the opposite. Intelligence is such a multidimensional thing. The fact that a computer is better at Go than I am doesn't really change my life very much, because I'm not very good at Go. I don't play Go. I don't consider Go to be an important part of my intelligence. Same with chess. When Garry Kasparov lost to Deep Blue, that didn't threaten my intelligence. I am sort of defining the way that I work and how I add value to the world, and what things make me happy on a lot of other axes besides "Can I play chess?" or "Can I play Go?" I think that speaks to the idea that intelligence really is very multifaceted. There's a lot of different kinds – there's probably thousands or millions of different kinds of intelligence – and it's not very linearizable.
Because of that, I feel like, as we watch artificial intelligence develop, we’re going to see increasingly more intelligent machines, but they’re going to be increasingly more intelligent in some very narrow domains like “this is the better Go-playing robot than me”, or “this is the better car driver than me”. That’s going to be incredibly useful, but it’s not going to change the way that I think about myself, or about my work, or about what makes me happy. Because I feel like there are so many more dimensions of intelligence that are going to remain the province of humans. That’s going to take a very long time, if ever, for artificial intelligence to become better at all of them than us. Because, as I said, I don’t believe that intelligence is a linearizable thing.
And you said you weren’t a philosopher. I guess the thing that’s interesting to people, is there was a time when information couldn’t travel faster than a horse. And then the train came along, and information could travel. That’s why in the old Westerns – if they ever made it on the train, that was it, and they were out of range. Nothing traveled faster than the train. Then we had a telegraph and, all of a sudden, that was this amazing thing that information could travel at the speed of light. And then one time they ran these cables under the ocean, and somebody in England could talk to somebody in the United States instantly. Each one of them, and I think it’s just an opportunity to pause, and reflect, and to mark a milestone, and to think about what it all means. I think that’s why a computer just beat these awesome poker players. It learned to bluff. You just kind of want to think about it.
So let’s talk about jobs for a moment because you’ve been talking around that for just a second. Just to set the question up: Generally speaking, there are three views of what automation and artificial intelligence are going to do to jobs. One of them reflects kind of what you were saying is that there are going to be a certain group of workers who are considered low skilled, and there are going to be automation that takes these low-skilled jobs, and that there’s going to be a sizable part of the population that’s locked out of the labor market, and it’s kind of like the permanent Great Depression over and over and over forever. Then there’s another view that says, “No, you don’t understand. There’s going to be an inflection point where they can do every single thing. They’re going to be a better conductor and a better painter and a better novelist and a better everything than us. Don’t think that you’ve got something that a machine can’t do.” Clearly, that isn’t your viewpoint from what you said. Then there’s a third viewpoint that says, “No, in the past, even when we had these transformative technologies like electricity and mechanization, people take those technologies and they use them to increase their own productivity and, therefore, their own incomes. And you never have unemployment go up because of them, because people just take it and make a new job with it.” Of those three, or maybe a fourth one I didn’t cover; where do you find yourself?
I feel like I’m closer in spirit to number three. I’m optimistic. I believe that the primary way that we should expect economic growth in the future is by increased productivity. If you buy a house or buy some stock and you want to sell it 20 or 30 years from now, who’s going to buy it, and with what money, and why do you expect the price to go up? I think the answer to that question should be the people in the future should have more money than us because they’re more productive, and that’s why we should expect our world economy to continue growing. Because we find more productivity. I actually feel like this is actually necessary. World productivity growth has been slowing for the past several decades, and I feel like artificial intelligence is our way out of this trap where we have been unable to figure out how to grow our economy because our productivity hasn’t been improving. I actually feel like this is a necessary thing for all of us, is to figure out how to improve productivity, and I think AI is the way that we’re going to do that for the next several decades.
The one thing that I disagreed with in your third statement was this idea that unemployment would never go up. I think nothing is ever that simple. I actually am quite concerned about job displacement in the short-term. I think there will be people that suffer and in fact, I think, to a certain extent, this is already happening. The election of Donald Trump was an eye-opener to me that there really exists a lot of people that feel that they have been left behind by the economy, and they come to very different conclusions about the world than I might. I think that it’s possible that, as we continue to digitize our society, and AI becomes a lever that some people will become very good at using to increase their productivity, that we’re going to see increased inequality and that worries me.
The primary challenges that I’m worried about, for our society, with the rise of AI, have to do more with making sure that we give people purpose and meaning in their life that maybe doesn’t necessarily revolve around punching out a timecard, and showing up to work at 8 o’clock in the morning every day. I want to believe that that future exists. There are a lot of people right now that are brilliant people that have a lot that they could be contributing in many different ways – intellectually, artistically – that are currently not given that opportunity, because they maybe grew up in a place that didn’t have the right opportunities for them to get the right education so that they could apply their skills in that way, and many of them are doing jobs that I think don’t allow them to use their full potential.
So I’m hoping that, as we automate many of those jobs, that more people will be able to find work that provides meaning and purpose to them and allows them to actually use their talents and make the world a better place, but I acknowledge that it’s not going to be an easy transition. I do think that there’s going to be a lot of implications for how our government works and how our economy works, and I hope that we can figure out a way to help defray some of the pain that will happen during this transition.
You talked about two things. You mentioned income inequality as a thing, but then you also said, “I think we’re going to have unemployment from these technologies.” Separating those for a minute and just looking at the unemployment one for a minute, you say things are never that simple. But with the exception of the Great Depression, which nobody believes was caused by technology, unemployment has been between 5% and 10% in this country for 250 years and it only moves between 5% and 10% because of the business cycle, but there aren’t counterexamples. Just imagine if your job was you had animals that performed physical labor. They pulled, and pushed, and all of that. And somebody made the steam engine. That was disruptive. But even when we had that, we had electrification of industry. We adopted steam power. We went from 5% to 85% of our power being generated by steam in just 22 years. And even when you had that kind of disruption, you still didn’t have any increases in unemployment. I’m curious, what is the mechanism, in your mind, by which this time is different?
I think that’s a good point that you raise, and I actually haven’t studied all of those other transitions that our society has gone through. I’d like to believe that it’s not different. That would be a great story if we could all come to agreement, that we won’t see increased unemployment from AI. I think the reason why I’m a little bit worried is that I think this transition in some fields will happen quickly, maybe more quickly than some of the transitions in the past did. Just because, as I was saying, AI is easier to replicate than some other technologies, like electrification of a country. It takes a lot of time to build out physical infrastructure that can actually deliver that. Whereas I think for a lot of AI applications, that infrastructure will be cheaper and quicker to build, so the velocity of the change might be faster and that could lead to a little bit more shock. But it’s an interesting point you raise, and I certainly hope that we can find a way through this transition that is less painful than I’m worried it could be.
Do you worry about misuse of AI? I’m an optimist on all of this. And I know that every time we have some new technology come along, people are always looking at the bad cases. You take something like the internet, and the internet has overwhelmingly been a force for good. It connects people in a profound way. There’s a million things. And yeah, some people abuse it. But on net, all technology, I believe, almost all technology on net is used for good because I think, on net, people, on average, are more inclined to build than to destroy. That being said, do you worry about nefarious uses of AI, specifically in warfare?
Yeah. I think that there definitely are going to be some scary killer robots that armies make. Armies love to build machinery that kills things and AI will help them do that, and that will be scary. I think it's interesting, like, where is the real threat going to come from? Sometimes, I feel like the threat of malevolent AI being deployed against people is going to be more subtle than that. It's going to be more about things that you can do after compromising the cyber systems of some adversary, and things that you can do to manipulate them using AI. There's been a lot of discussion about Russian involvement in the 2016 election in the US, and that wasn't about sending evil killer robots. It was more about changing people's opinions, or attempting to change their opinions, and AI will give entities tools to do that on a scale that maybe we haven't seen before. I think there may be nefarious uses of AI that are more subtle and harder to see than a full-frontal assault from a movie with evil killer robots. I do worry about all of those things, but I also share your optimism. I think we humans, we make lots of mistakes and we shouldn't give ourselves too easy of a time here. We should learn from those mistakes, but we also do a lot of things well. And we have used technologies in the past to make the world better, and I hope AI will do so as well.
Pedro Domingos wrote a book called The Master Algorithm where he says there are all of these different tools and techniques that we use in artificial intelligence. And he surmises that there is probably a grandparent algorithm, the master algorithm, that can solve any problem, any range of problems. Does that seem possible to you or likely, or do you have any thoughts on that?
I think it’s a little bit far away, at least from AI as it’s practiced today. Right now, the practical, on-the-ground experience of researchers trying to use AI to do something new is filled with a lot of pain, suffering, blood, sweat, tears, and perseverance if they are to succeed, and I see that in my lab every day. Most of the researchers – and I have brilliant researchers in my lab that are working very hard, and they’re doing amazing work. And most of the things they try fail. And they have to keep trying. I think that’s generally the case right now across all the people that are working on AI. The thing that’s different is we’ve actually started to see some big successes, along with all of those more frustrating everyday occurrences. So I do think that we’re making the progress, but I think having a master algorithm that’s pushbutton that can solve any problem you pose to it that’s something that’s hard for me to conceive of with today’s state of artificial intelligence.
AI, of course, it’s doubtful we’ll have another AI winter because, like you said, it’s kind of delivering the goods, and there have been three things that have happened that made that possible. One of them is better hardware, and obviously you’re part of that world. The second thing is better algorithms. We’ve learned to do things a lot smarter. And the third thing is we have more data, because we are able to collect it, and store it, and whatnot. Assuming you think the hardware is the biggest of the driving factors, what would you think has been the bigger advance? Is it that we have so much more data, or so much better algorithms?
I think the most important thing is more data. I think the algorithms that we’re using in AI right now are, more or less, clever variations of algorithms that have been around for decades, and used to not work. When I was a PhD student and I was studying AI, all the smart people told me, “Don’t work with deep learning, because it doesn’t work. Use this other algorithm called support vector machines.” Which, at the time, that was the hope that that was going to be the master algorithm. So I stayed away from deep learning back then because, at the time, it didn’t work. I think now we have so much more data, and deep learning models have been so successful at taking advantage of that data, that we’ve been able to make a lot of progress. I wouldn’t characterize deep learning as a master algorithm, though, because deep learning is like a fuzzy cloud of things that have some relationships to each other, but actually finding a space inside that fuzzy cloud to solve a particular problem requires a lot of human ingenuity.
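A rough way to see the data point he is making, using scikit-learn on synthetic data: with only a few hundred examples a classic support vector machine is often competitive, while a neural network tends to benefit more as the training set grows. The dataset, model sizes, and sample counts below are arbitrary choices for illustration, not a benchmark.

```python
# Illustrative experiment: SVM vs. a small neural network as training data grows.
# Synthetic data; the exact numbers will vary from run to run.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=20000, n_features=40, n_informative=25, random_state=0)
X_train_full, X_test, y_train_full, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for n in (200, 2000, 15000):
    X_train, y_train = X_train_full[:n], y_train_full[:n]
    svm = SVC().fit(X_train, y_train)
    mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0).fit(X_train, y_train)
    print(f"n={n:6d}  SVM accuracy={svm.score(X_test, y_test):.3f}  "
          f"MLP accuracy={mlp.score(X_test, y_test):.3f}")
```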
Is there a phrase – it’s such a jargon-loaded industry now – are there any of the words that you just find rub you the wrong way? Because they don’t mean anything and people use them as if they do? Do you have anything like that?
Everybody has pet peeves. I would say that my biggest pet peeve right now is the word neuromorphic. I have almost an allergic reaction every time I hear that word, mostly because I don’t think we know what neurons are or what they do, and I think modeling neurons in a way that actually could lead to brain simulations that actually worked is a very long project that we’re decades away from solving. I could be wrong on that. I’m always waiting for somebody to prove me wrong. Strong opinions, weakly held. But so far, neuromorphic is a word that I just have an allergic reaction to, every time.
Tell me about what you do. You are the head of Applied AI Research at NVIDIA, so what does your day look like? What does your team work on? What’s your biggest challenge right now, and all of that?
NVIDIA sells GPUs which have powered most of the deep learning revolution, so pretty much all of the work that’s going on with deep learning across the entire world right now, runs on NVIDIA GPUs. And that’s been very exciting for NVIDIA, and exciting for me to be involved in building that. The next step, I think, for NVIDIA is to figure out how to use AI to change the way that it does its own work. NVIDIA is incentivized to do this because we see the value that AI is bringing to our customers. Our GPU sales have been going up quite a bit because we’re providing a lot of value to everyone else who’s trying to use AI for their own problems. So the next step is to figure out how to use AI for NVIDIA’s problems directly. Andrew Ng, who I used to work with, has this great quote that “AI is the new electricity,” and I believe that. I think that we’re going to see AI applied in many different ways to many different kinds of problems, and my job at NVIDIA is to figure out how to do that here. So that’s what my team focuses on.
We have projects going on in quite a few different domains, ranging from graphics to audio, and text, and others. We’re trying to change the way that everything at NVIDIA happens: from chip design, to video games, and everything in between. As far as my day-to-day work goes, I lead this team, so that means I spend a lot of time talking with people on the team about the work that they’re doing, and trying to make sure they have the right resources, data, the right hardware, the right ideas, the right connections, so that they can make progress on problems that they’re trying to solve. Then when we have prototypes that we’ve built showing how to apply AI to a particular problem, then I work with people around the company to show them the promise of AI applied to problems that they care about.
I think one of the things that’s really exciting to me about this mission is that we’re really trying to change NVIDIA’s work at the core of the company. So rather than working on applied AI, that could maybe help some peripheral part of the company that maybe could be nice if we did that, we’re actually trying to solve very fundamental problems that the company faces with AI, and hopefully we’ll be able to change the way that the company does business, and transform NVIDIA into an AI company, and not just a company that makes hardware for AI.
You are the head of the Applied AI Research. Is there a Pure AI Research group, as well?
Yes, there is.
So everything you do, you have an internal customer for already?
That’s the idea. To me, the difference between fundamental research and applied research is more a question of emphasis on what’s the fundamental goal of your work. If the goal is academic novelty, that would be fundamental research. Our goal is, we think about applications all the time, and we don’t work on problems unless we have a clear application that we’re trying to build that could use a solution.
In most cases, do other groups come to you and say, “We have this problem we really want to solve. Can you help us?” Or is the science nascent enough that you go and say, “Did you know that we can actually solve this problem for you?”
It kind of works all of those ways. We have a list of projects that people around the company have proposed to us, and we also have a list of projects that we ourselves think are interesting to look at. There’s also a few projects that my management tells me, “I really want you to look at this problem. I think it’s really important.” We get input from all directions, and then prioritize, and go after the ones we think are most feasible, and most important.
And do you find a talent shortage? You’re NVIDIA on the one hand, but on the other hand, you know: it’s AI.
I think the entire field, no matter what company you work at, the entire field has a shortage of qualified scientists that can do AI research, and that’s despite the fact that the amount of people jumping into AI is increasing every year. If you go to any of the academic AI conferences, you’ll see how much energy and how much excitement, and how many people that are there that didn’t used to be there. That’s really wonderful to see. But even with all of that growth and change, it is a big problem for the industry. So, to all of your listeners that are trying to figure out what to do next, come work on AI. We have lots of fun problems to work on, and not nearly enough people doing it.
I know a lot of your projects I’m sure you can’t talk about, but tell me something you have done, that you can talk about, and what the goal was, and what you were able to achieve. Give us a success story.
I’ll give you one that’s relevant to the last question that you asked, which is about how to find talent for AI. We’ve actually built a system that can match candidates to job openings at NVIDIA. Basically, it can predict how well we think a particular candidate is a fit for a particular job. That system is actually performing pretty well. So we’re trialing it with hiring managers around the company to figure out if it can help them be more efficient in their work as they search for people to come join NVIDIA.
That looks like a game, doesn't it? I assume you have a pool of resumes or LinkedIn profiles or whatever, and then you have a pool of successful employees, and you have a pool of job descriptions and you're trying to say, "How can I pull from that big pool, based on these job descriptions, and actually pick the people that did well in the end?"
That’s right.
That’s like a game, right? You have points.
That’s right.
Would you ever productize anything, or is everything that you’re doing just for your own use?
We focus primarily on building prototypes, not products, in my team. I think that’s what the research is about. Once we build a prototype that shows promise for a particular problem, then we work with other people in the company to get that actually deployed, and they would be the people that think about business strategy about whether something should be productized, or not.
But you, in theory, might turn “NVIDIA Resume Pro” into something people could use?
Possibly. NVIDIA also works with a lot of other companies. As we enable companies in many different parts of the economy to apply AI to their problems, we work with them to help them do that. So it might make more sense for us, for example, to deliver this prototype to some of our partners that are in a position to deliver products like this more directly, and then they can figure out how to enlarge its capabilities, and make it more general to try to solve bigger problems that address their whole market and not just one company's needs. Partnering with other companies is good for NVIDIA because it helps us grow AI, which is something we want to do because, as AI grows, we grow. Personally, I think that for some of the things we're working on, it just doesn't really make sense for NVIDIA to productize them directly; it's not really in NVIDIA's DNA, because it's just not the business model that the company has.
I’m sure you’re familiar with the “right to know” legislation in Europe: the idea that if an AI makes a decision about you, you have a right to know why it made that decision. AI researchers are like, “It’s not necessarily that easy to do that.” So in your case, your AI would actually be subject to that. It would say, “Why did you pick that person over this person for that job?” Is that an answerable question?
First of all, I don’t think that this system – or I can’t imagine – using it to actually make hiring decisions. I think that would be irresponsible. This system makes mistakes. What we’re trying to do is improve productivity. If instead of having to sort through 200 resumes to find 3 that I want to talk to—if I can look at 10 instead—then that’s a pretty good improvement in my productivity, but I’m still going to be involved, as a hiring manager, to figure out who is the right fit for my jobs.
But an AI excluded 190 people from that position.
It didn’t exclude them. It sorted them, and then the person decided how to allocate their time in a search.
Let’s look at the problem more abstractly. What do you think, just in general, about the idea that every decision an AI makes, should be, and can be, explained?
I think it’s a little bit utopian. Certainly, I don’t have the ability to explain all of the decisions that I make, and people, generally, are not very good at explaining their decisions, which is why there are significant legal battles going on about factual things, that people see in different ways, and remember in different ways. So asking a person to explain their intent is actually a very complicated thing, and we’re not actually very good at it. So I don’t actually think that we’re going to be able to enforce that AI is able to explain all of its decisions in a way that makes sense to humans. I do think that there are things that we can do to make the results of these systems more interpretable. For example, on the resume job description matching system that I mentioned earlier, we’ve built a prototype that can highlight parts of the resume that were most interesting to the model, both in a positive, and in a negative sense. That’s a baby step towards interpretability so that if you were to pull up that job description and a particular person and you could see how they matched, that might explain to you what the model was paying attention to as it made a ranking.
It’s funny because when you hear reasons why people exclude a resume, I remember one person said, “I’m not going to hire him. He has the same first name as somebody else on the team. That’d just be too confusing.” And somebody else I remember said that the applicant was a vegan and the place they like to order pizza from didn’t have a vegan alternative that the team liked to order from. Those are anecdotal of course, but people use all kinds of other things when they’re thinking about it.
Yeah. That’s actually one of the reasons why I’m excited about this particular system is that I feel like we should be able to construct it in a way that actually has fewer biases than people do, because we know that people harbor all sorts of biases. We have employment laws that guide us to stay away from making decisions based on protected classes. I don’t know if veganism is a protected class, but it’s verging on that. If you’re making hiring decisions based on people’s personal lifestyle choices, that’s suspect. You could get in trouble for that. Our models, we should be able to train them to be more dispassionate than any human could be.
We’re running out of time. Let’s close up by: do you consume science fiction? Do you ever watch movies or read books or any of that? And if so, is there any of it that you look at, especially any that portrays artificial intelligence, like Ex Machina, or Her, or Westworld or any of that stuff, that you look at and you’re like, “Wow, that’s really interesting,” or “That could happen,” or “That’s fascinating,” or anything like that?
I do consume science fiction. I love science fiction. I don’t actually feel like current science fiction matches my understanding of AI very well. Ex Machina, for example, that was a fun movie. I enjoyed watching that movie, but I felt, from a scientific point of view, it just wasn’t very interesting. I was talking about our built-in models of the world. One of the things that humans, over thousands of years, have drilled into our heads is that there’s somebody out to get you. We have a large part of our brain that’s worrying all the time, like, “Who’s going to come kill me tonight? Who’s going to take away my job? Who’s going to take my food? Who’s going to burn down my house?” There’s all these things that we worry about. So a lot of the depictions of AI in science fiction inflame that part of the brain that is worrying about the future, rather than actually speak to the technology and its potential.
I think probably the part of science fiction that has had the most impact on my thoughts about AI is Isaac Asimov’s Three Laws. Those, I think, are pretty classic, and I hope that some of them can be adapted to the kinds of problems that we’re trying to solve with AI, to make AI safe, and make it possible for people to feel confident that they’re interacting with AI, and not worry about it. But I feel like most of science fiction is, especially movies – maybe books can be a little bit more intellectual and maybe a little bit more interesting – but especially movies, it just sells more movies to make people afraid, than it does to show people a mundane existence where AI is helping people live better lives. It’s just not nearly as compelling of a movie, so I don’t actually feel like popular culture treatment of AI is very realistic.
All right. Well, on that note, I say, we wrap up. I want to thank you for a great hour. We covered a lot of ground, and I appreciate you traveling all that way with me.
It was fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS