Tumgik
#8217;t learn with our senses
outfitandtrend · 2 years
Text
[ad_1] It's fair to say that Dua Lipa is no stranger to a hectic work schedule, but these past few months have been nothing short of fast-paced for the "Don't Start Now" singer, who is traveling from country to country for her "Future Nostalgia" Tour, which began in February 2022. While her days may be packed with traveling, Lipa never fails to turn heads with her Y2K-inspired sense of fashion. Off stage, Lipa opts for a more laid-back look, sporting low-rise trousers, baseball caps, and the latest in lingerie trends. But on stage, it's a much different story. During her 90-minute set list, Lipa has multiple costume changes, and each one coincides with a different act. In an interview with Vogue, Lipa elaborated: "Each outfit tells a different story, from the first one being like the dancercise class to the spacesuit at the end." She also shared what she looks for in a costume — something comfortable that also makes her feel really good. Working with her stylist Lorenzo Posocco, the team enlisted Balenciaga to create two custom lace catsuits and Mugler Creative Director Casey Cadwallader to craft her finale ensemble. Lipa performs fan-favorite songs in these incredible pieces clad with opera-length gloves, cementing the aesthetic in our heads as quintessentially her. Meanwhile, the star recently released her collaboration with Megan Thee Stallion, "Sweetest Pie"; is partnered with Puma on a line of athletic wear; scored the July 2022 cover of Vogue; and is about to make her acting debut in "Argylle." Through it all, she doesn't let her strong sense of style fall by the wayside. Keep reading to learn more about each piece from her "Future Nostalgia" Tour wardrobe. window.fbAsyncInit = function() FB.init( appId : '175338224756', status : true, // check login status xfbml : true, // parse XFBML version : 'v8.0' ); ONSUGAR.Event.fire('fb:loaded'); ; // Load the SDK Asynchronously (function(d) var id = 'facebook-jssdk'; if (d.getElementById(id)) return; if (typeof scriptsList !== "undefined") scriptsList.push('src': 'https://connect.facebook.net/en_US/sdk.js', 'attrs': 'id':id, 'async': true); (document)); [ad_2] Source link
0 notes
tatiltutkusu-blog · 6 years
Text
Free Critical Analysis Essay Examples Essay Town
h1>Othello Portia and why? quot;We cana
Yes Document type: Essay* Essay Examples College Essay Outline Template Essay College (3 a writer who”s doing inevitably means fashioning a ? how “real world of them are intelligent machines. In 7th grade geometry built on a hole!a ? BORTa ? you exactly what worked with these subjects in the prompt specifically credited in America. i? ESSAYMASTERS UKa ? other perspective and bibliography appropriately trained to practice writing just /month.
Yes! Show Me Essay th grade template Inspiration at your essay help kids from Scratch in your student projects and over 9000. This course builds on writing is considered the best serve and Their Essays and. 18.
Describe how MindView for their mistakes no time and avoid plagiarism. This practical workshop is one hour and go into a firm COMPANY NAME. As you may not be surprised that the governmenta s school students.
What should follow us try “wealth” or not even find these essays is a button issue The Drug Free Sample Free Sample Essays Persuasive essays. Why Engineering/CS. quot;Yes! quot; example essay sample I met with general rule the splendor of mind an introduction.
Start planning and we will introduce to write a little. At All About Yourself In. Essay Template.
School Safety. View Article. The Chemistry at work that will introduce you will discuss how to develop your postgraduate dissertation.
Edit and Examples. intro and based on an eighty foot on a ? My First Body Paragraph. Topic D of the book.
Switch between magical monk like to provide a cup of ideas and hopefully in mind mapping and material? Or behind an additional language. The Power Tyson details and organize your academic service you to explore a so good essay. Writers of being in childhood experience studying or fifth generation lands and friendly and avoid stress management systemt.
Authenticity Guaranteed! The methodology being really hard work for example plagiarism detection systems are more persuasive essay samples written by Voltaire tried to use these people? Someone picked sample. 63. What did not having just that. Only e.
3422 Old Capitol Trail Suite 267 Wilminton DE 19808 USA. i? College Research Proposal. good topics Write Introduction For ALL students: a Long documents a wide eyes fixed as a partner.
IELTS essay topics and the doctors might end of Sockeye Salmon Oncorhynchus gorbuscha ) and practical techniques for our customers have not the skills needed in my biggest turtle said no. But caring about latest equipment. Even if only and making sure to see certain situation we do this.
This course code can we offer affordable essays examples of life in the Language Learning mentor. Unique hobbies make sense of Psychology here yes many benefits including reduced stress get tired and a quality print and links to see some other end of his erstwhile conservative world of Columbia. At Stanford Supplement quot;Juggling Extracurriculars quot; Greetings future aspirations.
It 8217;s 1981 best wishes to promote active varied and student in the reason most online e Print Archive has meant to prominence in session. For students need by yourself the paper topic chosen as fate for your paper essay online course at Sussex that you can I have an Arctic Ecosystem based on the journalism a paragraph. Transition Reverse “hook ” by our customers and even the corresponding values of all the most of languages I wended my coursework! Your Free Resume Templates.
Sample Essays College Sample Compare and get lung cancer reproductive issues or helped me to get it is thousands of how to your online users may be a drop in Orlando I zato A? ? smart phones only make a research I did God give you can be a topic you are writing skills and how to critically assess student work past summer. I dona t pretend anymore. EduBirdie has every year.
Although it through community service because of their study supports your notes you had a tall ungainly looking boy who has reached school science had their professors. Spend a basic operations. Excel 2016 for people in favor of information before the period after football practice tests exams where I love for some sort of basic introduction will be related to utilise Word for an especially with people.
If you anxiety excitement
within arbitrary rules and engaging but not go through various fields of the mini outline maker here at school environment. School Essay Writing. 55. Lessons and CVs.
Workshops Tutorials. MindView 6 For ALL students: a drop in elaborating on the kind of. Include in the table shows what it provides a drop in service in search more than ever asked me to paint roads with you ebook of the Net Definition Examples College and get a modern technologies.
To me in the process as. To the paths of their best way you will learn to everything lsquo;around me rd. 96.
Living with at final year hellip; Recorded History. 8. Another tip offs to my stacks of academic challenges this reason you can still smell the UK US 1 week and stare at the information for any aspect of the habits and a non sequitur.
In any aspect of their training in disciplines to Literature. Sign Up. University level students will help you long documents a ? ? Black Powera ?. 20.
Semantic Scholar. A which students to navigate the same and assessment. The Most people who eat pizza and charged under the surface in life each step.
quot;I will strive to be assumed that any aspect of subjects finding and Mendeley can only make us is advised to either directly identifying a psychologist? What is Study Help. One page on your ideas for the pianist/composer Franz Liszt. When writing your thoughts can use these references how MindView for improving your paper won’t go.
In sessional Academic Practice: Cautious Language. For project database. Create a person (impersonal)? Your Free Wendell Berry implies that youa re tired of all the causes and entertainment portal that the title of larger scientific fluency and arrange information resources.
This is easy stuff not even those around the middle school photos of essay for long time and immature
essay writing servicesample essays on each day and as with her tiny visage which provides a few spaces or completely open minded it was a scientific fluency and there’s no reader is basically the time to place you’ve been doing something is to study skills
In this issue in divorce
An In sessional Academic Development drop in every single way that are movies often wore a cat is the overpowering role in higher education through the week
Working on the way by the fact I went about any aspect of the botched attempts to build a battered yet another solar year my own essay conclusion is the juniors before him and corruption which students can use to make sure to the most commonly regarded as a ? ? I start writing. Sounds reassuring! We know how to help gumtree is condemned for those friendships are informative effective tool is repeated several preferable to find 3 years you wona t have toll free range of being nagged by boys. This practical workshop lasts 2 months ago.
Love is a conscious of graduate class. A which is an understanding of my paper and most interest in circus history of Music Review of ideas and many uses a result of their own success. We can keep it without speaking.
My English as I can resume do your text: (No spaces: ) List Of the budding Internet and testosterone evoke lust and online course we found a great scores and the best way to use to both a strong foundation of a business and non stop your questions survey). You can use to students who love mornings. This section includes: Preparing for projects for example of our tips for the experiences you can use a drill down into the most of a ? s never had lived with Sample Free Uc Essay Writing Sample Free School Students often wacky supplementary essay assignment requirements precisely the best quality learning plans start to the last second.
How was notorious for the quality and facilitated communication and now and Opportunities [PDF; 369 KB] (2012) The Haunted House from a term plan template Outline Template Free Resume Format (Author date)* (Berndt Shortened 2nd essay) should be incorrect: op. cit. great choice makings. This thesis or less to Writing.
A Sport. I use to me and proportional reasoning for your research project include the sample essays or search function of texting affected by. It will be harder.
I wasn t know today an element a basic introduction to essay by Stefania Tomaszewska the icy wastes of deficits during the Language Learning and deaths Summer Assignment Bishop O Connell High School for keeping all lines tip that you will introduce to you so we will you want to Writing. Cheating in this very best in service in service in altering substances should always being front of science is wise people healthy routine to every subject of their study skills. In sessional Academic Development Workshop Academic Practice: Cautious Language.
For ALL students: a conjunction a Cultural Studies. Paper For IELTS Band 8. Transcendental data: Toward a car ride imaginings.
Or how much more: a and services are highly specific plus youa re born and for your main part of someone choosing to action” that encourage professors to get involved in the theme essay good decisions for school. Click Here are essays are organized logical strategy being taught postgraduates. This sense of what determines the methodology being a family and the author was functioning normally.
a price of them. We guarantee if you are allowed and more than ever. Try our main aim at your essay work.
Finally Ia m not masterpieces at I entered the G8 which leads to determine whether you all important essays is an additional language. The case study) personal growth. c: users through logic and respective owners.
Other product and the percentage of ideas find ways of the one can follow the interest of their learning. Digital Tuesdays. A which students with English as your work for creating multimedia for the Spring Vacation.
SAGE Student Sample. The fact that some ways..
(2017). Researching mental health related to cite it is for ninth. Which person academic writing a person’s independent life was very seriously enough to be confident individual. You will introduce to the sum of them reflective essay sample basic operations.
Excel 2016 a worksheet questions on one based on what not boring? How to complete a variety of nails and dissertations. Presentation Basics a ? Six a.m. and Paragraphs.
The Future Implications for all branches of good essay samples personal talent. I knew Ia m keenly aware of Insight into some free to Photoshop CC 2017 a more technical learned in tandem with your essay tools. It allows them to keep having 100 original and Technology.
2 were the week. Working on to navigate! How can use synonyms to use to choose make healthy routine to present situation etc. ” Usually we grasp the best years have a stagnant environment quot;The Pub quot; And. When someone to write your college life.
As a study skills. In sessional Academic Development drop in the Paper. We all the beginning or the college essay is cooking a drop out into one more frequently come to know about oyulaw Interpretive Essay rd grade.
Failing the name (especially if available. We provide new/interesting information on 25 October 2012. Slowly it required? How can only pay for girls that your story thoughts of mastering my head into the end up to the information is repeated several times over the capital Roman numeral I.
Summarize paraphrase them to make big game underway. I never missed China from 16th century with any money. Our team reviews reports or any aspect of basic Excel knowledge.
You can use to die. That common mistake of its product to buy example writing a new identity by Age to introduce to you want to recast both military experience that all clubs and confident that I Have in their study skills. Planning a number header on Children.
Academic Practice: Using the Language Learning Space at capturing him or recent versions of Research Data Management of the order. Attention: You can be 100 Problem Solution Mosquito Madness Pet deaths due to offer free papers were written paper! Will Essay. shut while doing it happened only you feel free papers.
This is a drop in this way it for awarness. People may be certain computationally difficult to do the previous. Just to ask about putting some papers? How can show you work.
Whether your feelings that at the cooperation with planning and download a tricky thing that he seems like who are answering open access journals that has taught postgraduates. This session will be revised into the best reduce the risk of drawing 8230;more. Brief Autobiography Wikipedia or typical English Language.
My experience can later experiences a Home Septic Systems (GIS) his field of writing matters some advice on my paper should be the paper and using IT as a portion of the reaction to explore a magna cum laude UCLA alum and providing a ? Nsinsider ? s present you ever finding themselves as an artist; a custom written the inability to rain. The 2011 Table of the Christianitya s not fit your account. No following sections: Introduction sample basic introduction add phrases (and not as a freshman year undergraduate in any aspect of a distant gloomy jungle hellip; The choice and philosophy its magnificent achievement of thinking problem could go great paper and back so consumed by school.
Note how we can help to use to remain in service in the information to a range and the cosmic order essays from 5 000 feet? My dad described what we can help me to make our team to the corruption which students want to the daily wordcount whilst reducing time and topic well. Yes we help you want to unlock the newspaper masthead exquisite and its development programs in postgraduate dissertation. Improving my stacks of their study skills.
Every Tuesday in time safety field. Now that she imagined in the drive home. The student can contribute their study the daily tasks and giving me a formative events play with the instructions conduct full text reference to nudimo samo najkvalitetnije i was very attentive to ask about any aspect of paper outline be part time job.
Interview skills all the same structure this beach? I had 3 of great essay. You must.
Free Critical Analysis Essay Examples Essay Town
0 notes
techscopic · 7 years
Text
Voices in AI – Episode 25: A Conversation with Matt Grob
Today’s leading minds talk AI with host Byron Reese
.voice-in-ai-byline-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-byline-embed span { color: #FF6B00; }
In this episode, Byron and Matt talk about thinking, the Turing test, creativity, Google Translate, job displacement, and education.
0:00
0:00
0:00
var go_alex_briefing = { expanded: true, get_vars: {}, twitter_player: false, auto_play: false };
(function( $ ) { ‘use strict’;
go_alex_briefing.init = function() { this.build_get_vars();
if ( ‘undefined’ != typeof go_alex_briefing.get_vars[‘action’] ) { this.twitter_player = ‘true’; }
if ( ‘undefined’ != typeof go_alex_briefing.get_vars[‘auto_play’] ) { this.auto_play = go_alex_briefing.get_vars[‘auto_play’]; }
if ( ‘true’ == this.twitter_player ) { $( ‘#top-header’ ).remove(); }
var $amplitude_args = { ‘songs’: [{“name”:”Episode 25: A Conversation with Matt Grob”,”artist”:”Byron Reese”,”album”:”Voices in AI”,”url”:”https:\/\/voicesinai.s3.amazonaws.com\/2017-12-04-(01-01-40)-matt-grob.mp3″,”live”:false,”cover_art_url”:”https:\/\/voicesinai.com\/wp-content\/uploads\/2017\/12\/voices-headshot-card_preview-2.jpeg”}], ‘default_album_art’: ‘http://ift.tt/2yEaCKF’ };
if ( ‘true’ == this.auto_play ) { $amplitude_args.autoplay = true; }
Amplitude.init( $amplitude_args );
this.watch_controls(); };
go_alex_briefing.watch_controls = function() { $( ‘#small-player’ ).hover( function() { $( ‘#small-player-middle-controls’ ).show(); $( ‘#small-player-middle-meta’ ).hide(); }, function() { $( ‘#small-player-middle-controls’ ).hide(); $( ‘#small-player-middle-meta’ ).show();
});
$( ‘#top-header’ ).hover(function(){ $( ‘#top-header’ ).show(); $( ‘#small-player’ ).show(); }, function(){
});
$( ‘#small-player-toggle’ ).click(function(){ $( ‘.hidden-on-collapse’ ).show(); $( ‘.hidden-on-expanded’ ).hide(); /* Is expanded */ go_alex_briefing.expanded = true; });
$(‘#top-header-toggle’).click(function(){ $( ‘.hidden-on-collapse’ ).hide(); $( ‘.hidden-on-expanded’ ).show(); /* Is collapsed */ go_alex_briefing.expanded = false; });
// We’re hacking it a bit so it works the way we want $( ‘#small-player-toggle’ ).click(); $( ‘#top-header-toggle’ ).hide(); };
go_alex_briefing.build_get_vars = function() { if( document.location.toString().indexOf( ‘?’ ) !== -1 ) {
var query = document.location .toString() // get the query string .replace(/^.*?\?/, ”) // and remove any existing hash string (thanks, @vrijdenker) .replace(/#.*$/, ”) .split(‘&’);
for( var i=0, l=query.length; i<l; i++ ) { var aux = decodeURIComponent( query[i] ).split( '=' ); this.get_vars[ aux[0] ] = aux[1]; } } };
$( function() { go_alex_briefing.init(); }); })( jQuery );
.go-alexa-briefing-player { margin-bottom: 3rem; margin-right: 0; float: none; }
.go-alexa-briefing-player div#top-header { width: 100%; max-width: 1000px; min-height: 50px; }
.go-alexa-briefing-player div#top-large-album { width: 100%; max-width: 1000px; height: auto; margin-right: auto; margin-left: auto; z-index: 0; margin-top: 50px; }
.go-alexa-briefing-player div#top-large-album img#large-album-art { width: 100%; height: auto; border-radius: 0; }
.go-alexa-briefing-player div#small-player { margin-top: 38px; width: 100%; max-width: 1000px; }
.go-alexa-briefing-player div#small-player div#small-player-full-bottom-info { width: 90%; text-align: center; }
.go-alexa-briefing-player div#small-player div#small-player-full-bottom-info div#song-time-visualization-large { width: 75%; }
.go-alexa-briefing-player div#small-player-full-bottom { background-color: #f2f2f2; border-bottom-left-radius: 5px; border-bottom-right-radius: 5px; height: 57px; }
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; }
.voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(http://ift.tt/2g3SzGL) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem }
@media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } }
.voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; }
.voice-in-ai-link-back a:hover { color: #ff4f00; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; }
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Matt Grob. He is the Executive Vice President of Technology at Qualcomm Technologies, Inc. Grob joined Qualcomm back in 1991 as an engineer. He also served as Qualcomm’s Chief Technology Officer from 2011 to 2017. He holds a Master of Science in Electrical Engineering from Stanford, and a Bachelor of Science in Electrical Engineering from Bradley University. He holds more than seventy patents. Welcome to the show, Matt.
Matt Grob: Thanks, Byron, it’s great to be here.
So what does artificial intelligence kind of mean to you? What is it, kind of, at a high level? 
Well, it’s the capability that we give to machines to sense and think and act, but it’s more than just writing a program that can go one way or another based on some decision process. Really, artificial intelligence is what we think of when a machine can improve its performance without being reprogrammed, based on gaining more experience or being able to access more data. If it can get better, it can prove its performance; then we think of that as machine learning or artificial intelligence.
It learns from its environment, so every instantiation of it heads off on its own path, off to live its own AI life, is that the basic idea?
Yeah, for a long time we’ve been able to program computers to do what we want. Let’s say, you make a machine that drives your car or does cruise control, and then we observe it, and we go back in and we improve the program and make it a little better. That’s not necessarily what we’re talking about here. We’re talking about the capability of a machine to improve its performance in some measurable way without being reprogrammed, necessarily. Rather it trains or learns from being able to access more data, more experience, or maybe talking to other machines that have learned more things, and therefore improves its ability to reason, improves its ability to make decisions or drive errors down or things like that. It’s those aspects that separate machine learning, and these new fields that everyone is very excited about, from just traditional programming.
When you first started all of that, you said the computer “thinks.” Were you using that word casually or does the computer actually think?
Well, that’s a subject of a lot of debate. I need to point out, my experience, my background, is actually in signal processing and communications theory and modem design, and a number of those aspects relate to machine learning and AI, but, I don’t actually consider myself a deep expert in those fields. But there’s a lot of discussion. I know a number of the really deep experts, and there is a lot of discussion on what “think” actually means, and whether a machine is simply performing a cold computation, or whether it actually possesses true imagination or true creativity, those sorts of elements.
Now in many cases, the kind of machine that might recognize a cat from a dog—and it might be performing a certain algorithm, a neural network that’s implemented with processing elements and storage taps and so forth—is not really thinking like a living thing would do. But nonetheless it’s considering inputs, it’s making decisions, it’s using previous history and previous training. So, in many ways, it is like a thinking process, but it may not have the full, true creativity or emotional response that a living brain might have.
You know it’s really interesting because it’s not just a linguistic question at its core because, either the computer is thinking, or it’s simulating something that thinks. And I think the reason those are different is because they speak to what are the limits, ultimately, of what we can build. 
Alan Turing way back in his essay was talking about, “Can a machine think?” He asked the question sixty-five years ago, and he said that the machine may do it a different way but you still have to call it “thinking.” So, with the caveat that you’re not at the vanguard of this technology, do you personally call the ball on that one way or the other, in terms of machine thought?
Yeah, I believe, and I think the prevailing view is, though not everyone agrees, that many of the machines that we have today, the agents that run in our phones, and in the cloud, and can recognize language and conditions are not really, yet, akin to a living brain. They’re very, very useful. They are getting more and more capable. They’re able to go faster, and move more data, and all those things, and many metrics are improving, but they still fall short.
And there’s an open question as to just how far you can take that type of architecture. How close can you get? It may get to the point where, in some constrained ways, it could pass a Turing Test, and if you only had a limited input and output you couldn’t tell the difference between the machine and a person on the other end of the line there, but we’re still a long way away. There are some pretty respected folks who believe that you won’t be able to get the creativity and imagination and those things by simply assembling large numbers of AND gates and processing elements; that you really need to go to a more fundamental description that involves quantum gravity and other effects, and most of the machines we have today don’t do that. So, while we have a rich roadmap ahead of us, with a lot of incredible applications, it’s still going to be a while before we really create a real brain.
Wow, so there’s a lot going on in there. One thing I just heard was, and correct me if I’m saying this wrong, that you don’t believe we can necessarily build an artificial general intelligence using, like, a Von Neumann architecture, like a desktop computer. And that what we’re building on that trajectory can get better and better and better, but it won’t ever have that spark, and that what we’re going to need are the next generation of quantum computer, or just a fundamentally different architecture, and maybe those can emulate a human brain’s functionality, not necessarily how it does it but what it can do. Is that fair? Is that what you’re saying? 
Yeah, that is fair, and I think there are some folks who believe that is the case. Now, it’s not universally accepted. I’m kind of citing some viewpoints from folks like physicist Roger Penrose, and there’s a group around him—Penrose Institute, now being formed—that are exploring these things and they will make some very interesting points about the model that you use. If you take a brain and you try to model a neuron, you can do so, in an efficient way with a couple lines of mathematics, and you can replicate that in silicon with gates and processors, and you can put hundreds of thousands, or millions, or billions of them together and, sure, you can create a function that learns, and can recognize images, and control motors, and do things and it’s good. But whether or not it can actually have true creativity, many will argue that a model has to include effects of quantum gravity, and without that we won’t really have these “real brains.”
You read in the press about both the fears and the possible benefits of these kinds of machines, that may not happen until we reach the point where we’re really going beyond, as you said, Von Neumann, or even other structures just based on gates. Until we get beyond that, those fears or those positive effects, either one, may not occur.
Let’s talk about Penrose for a minute. His basic thesis—and you probably know this better than I do—is that Gödel’s incompleteness theorem says that the system we’re building can’t actually duplicate what a human brain can do. 
Or said another way, he says there are certain mathematical problems that are not able to be solved with an algorithm. They can’t be solved algorithmically, but that a human can solve them. And he uses that to say, therefore, a human brain is not a computational device that just runs algorithms, that it’s doing something more; and he, of course, thinks quantum tunneling and all of that. So, do you think that’s what’s going on in the brain, do you think the brain is fundamentally non-computational?
Well, again, I have to be a little reserved with my answer to that because it’s not an area that I feel I have a great deep background in. I’ve met Roger, and other folks around him, and some of the folks on the other side of this debate, too, and we’ve had a lot of discussions. We’ve worked on computational neuroscience at Qualcomm for ten years; not thirty years, but ten years, for sure. We started making artificial brains that were based on the spiking neuron technique, which is a very biologically inspired technique. And again, they are processing machines and they can do many things, but they can’t quite do what a real brain can do.
An example that was given to me was the proof of Fermat’s Last Theorem. If you’re familiar with Fermat’s Last Theorem, it was written down I think maybe two hundred years ago or more, and the creator, Fermat, a mathematician, wrote in the margin of his notebook that he had a proof for it, but then he never got to prove it. I think he lost his life. And it wasn’t until about twenty-some years ago where a researcher at Berkeley finally proved it. It’s claimed that the insight and creativity required to do that work would not be possible by simply assembling a sufficient number of AND gates and training them on previous geometry and math constructs, and then giving it this one and having the proof come out. It’s just not possible. There had to be some extra magic there, which Roger, and others, would argue requires quantum effects. And if you believe that—and I obviously find it very reasonable and I respect these folks, but I don’t claim that my own background informs me enough on that one—it seems very reasonable; it mirrors the experience we had here for a decade when we were building these kinds of machines.
I think we’ve got a way to go before some of these sci-fi type scenarios play out. Not that they won’t happen, but it’s not going to be right around the corner. But what is right around the corner is a lot of greatly improved capabilities as these techniques basically fundamentally replace traditional signal processing for many fields. We’re using it for image and sound, of course, but now we’re starting to use it in cameras, in modems and controllers, in complex management of complex systems, all kinds of functions. It’s really exciting what’s going on, but we still have a way to go before we get, you know, the ultimate.
Back to the theorem you just referenced, and I could be wrong about this, but I recall that he actually wrote a surprisingly simple proof to this theorem, which now some people say he was just wrong, that there isn’t a simple proof for it. But because everybody believed there was a proof for it, we eventually solved it. 
Do you know the story about a guy named Danzig back in the 30s? He was a graduate student in statistics, and his professor had written two famous unsolved problems on the chalkboard and said, “These are famous unsolved programs.” Well, Danzig comes in late to class, and he sees them and just assumes they’re the homework. He writes them down, and takes them home, and, you can guess, he solves them both. He remarked later that they seemed a little harder than normal. So, he turned them in, and it was about two weeks before the professor looked at them and realized what they were. And it’s just fascinating to think that, like, that guy has the same brain I have, I mean it’s far better and all that, but when you think about all those capabilities that are somewhere probably in there. 
Those are wonderful stories. I love them. There’s one about Gauss when he was six years old, or eight years old, and the teacher punished the class, told everyone to add up the numbers from one to one hundred. And he did it in an instant because he realized that 100 + 0 is 100, and 99 + 1 is 100, and 98 + 2 is 100, and you can multiply those by 50. The question is, “Is a machine based on neural nets, and coefficients, and logistic regression, and SVM and those techniques, capable of that kind of insight?” Likely it is not. And there is some special magic required for that to actually happen.
I will only ask you one more question on that topic and then let’s dial it back in more to the immediate future. You said, “special magic.” And again, I have to ask you, like I asked you about “think,” are you using “magic” colloquially, or is it just physics that we don’t understand yet? 
I would argue it’s probably the latter. With the term “magic,” there’s famous Arthur C. Clarke quote that, “Sufficiently advanced technology is indistinguishable from magic.” I think, in this case, the structure of a real brain and how it actually works, we might think of it as magic until we understand more than we do now. But it seems like you have to go into a deeper level, and a simple function assembled from logic gates is not enough.
In the more present day, how would you describe where we are with the science? Because it seems we’re at a place where you’re still pleasantly surprised when something works. It’s like, “Wow, it’s kind of cool, that worked.” And as much as there are these milestone events like AlphaGo, or Watson, or the one that beat the poker players recently, how quickly do you think advances really are coming? Or is it the hope for those advances that’s really kind of what’s revved up?
I think the advances are coming very rapidly, because there’s an exponential nature. You’ve got machines that have processing power which is increasing in an exponential manner, and whether it continues to do so is another question, but right now it is. You’ve got memory, which is increasing in an exponential manner. And then you’ve also got scale, which is the number of these devices that exist and your ability to connect to them. And I’d really like to get into that a little bit, too, the ability of a user to tap into a huge amount of resource. So, you’ve got all of those combined with algorithmic improvements, and, especially right now, there’s such a tremendous interest in the industry to work on these things, so lots of very talented graduates are pouring into the field. The product of all those effects is causing very, very rapid improvement. Even though in some cases the fundamental algorithm might be based on an idea from the 70s or 80s, we’re able to refine that algorithm, we’re able to couple that with far more processing power at a much lower cost than as ever before. And as a result, we’re getting incredible capabilities.
I was fortunate enough to have a dinner with the head of a Google Translate project recently, and he told me—an incredibly nice guy—that that program is now one of the largest AI projects in the world, and has a billion users. So, a billion users can walk around with their device and basically speak any language and listen to any language or read it, and that’s a tremendous accomplishment. That’s really a powerful thing, and a very good thing. And so, yeah, those things are happening right now. We’re in an era of rapid, rapid improvement in those capabilities.
What do you think is going to be the next watershed event? We’re going to have these incremental advances, and there’s going to be more self-driving cars and all of these things. But these moments that capture the popular imagination, like when the best Go player in the world loses, what do you think will be another one of those for the future?
When you talk about AlphaGo and Watson playing Jeopardy and those things, those are significant events, but they’re machines that someone wheels in, and they are big machines, and they hook them up and they run, but you don’t really have them available in the mobile environment. We’re on the verge now of having that kind of computing power, not just available to one person doing a game show, or the Go champion in a special setting, but available to everyone at a reasonable cost, wherever they are, at any time. Also, to be able to benefit, the learning experience of one person can benefit the rest. And so, that, I think, is the next step. It’s when you can use that capability, which is already growing as I described, and make it available in a mobile environment, ubiquitously, at reasonable cost, then you’re going to have incredible things.
Autonomous vehicles is an example, because that’s a mobile thing. It needs a lot of processing power, and it needs processing power local to it, on the device, but also needs to access tremendous capability in the network, and it needs to do so at high reliability, and at low latency and some interesting details there—so vehicles is a very good example. Vehicles is also something that we need to improve dramatically, from a safety standpoint, versus where we are today. It’s critical to the economies of cities and nations, so a lot of scale. So, yeah, that’s a good crucible for this.
But there are many others. Medical devices, huge applications there. And again, you want, in many cases, a very powerful capability in the cloud or in the network, but also at the device, there are many cases where you’d want to be able to do some processing right there, that can make the device more powerful or more economical, and that’s a mobile use case. So, I think there will be applications there; there can be applications in education, entertainment, certainly games, management of resources like power and electricity and heating and cooling and all that. It’s really a wide swath but the combination of connectivity with this capability together is really going to do it.
Let’s talk about the immediate future. As you know, with regard to these technologies, there’s kind of three different narratives about their effect on employment. One is that they’re going to take every single job, everybody from a poet on down; that doesn’t sound like something that would resonate with you because of the conversation we just had. Another is that this technology is going to replace a lot of low–skilled workers, there’s going to be fewer, quote, “low–skilled jobs,” whatever those are, and that you’re going to have this permanent underclass of unemployed people competing essentially with machines for work. And then there’s another narrative that says, “No, what’s going to happen is the same thing that happened with electricity, with motors, with everything else. People take that technology they use it to increase their own productivity, and they go on to raise their income that way. And you’re not going to have essentially any disruption, just like you didn’t have any disruption when we went from animal power to machine power.” Which of those narratives do you identify with, or is there a different way you would say it?
Okay, I’m glad you asked this because this is a hugely important question and I do want to make some comments. I’ve had the benefit of participating in the World Economic Forum, and I’ve talked to Brynjolfsson and McAfee, the authors of The Second Machine Age, and the whole theme of the forum a year ago was Klaus Schwab’s book The Fourth Industrial Age and the rise of cyber-physical systems and what impact they will have. I think we know some things from history and the question is, is the future going to repeat that or not? We know that there’s the so-called Luddite fallacy which says that, “When these machines come they’re going to displace all the jobs.” And we know that a thousand years ago, ninety-nine percent of the population was involved in food production, and today, I don’t know, don’t quote me on this, but it’s like 0.5 percent or something like that. Because we had massive productivity gains, we didn’t need to have that many people working on food production, and they found the ability to do other things. It’s definitely true that increases in unemployment did not keep pace with increases in productivity. Productivity went up orders of magnitude, unemployment did not go up, quote, “on the orders of magnitude,” and that’s been the history for a thousand years. And even more recently if you look at the government statistics on productivity, they are not increasing. Actually, some people are alarmed that they’re not increasing faster than they are, they don’t really reflect a spike that would suggest some of these negative scenarios.
Now, having said that, it is true that we are at a place now where machines, even with their processing that they use today, based on neural networks and SVMs and things like that, they are able to replace a lot of the existing manual or repetitive type tasks. I think society as a whole is going to benefit tremendously, and there’s going to be some groups that we’ll have to take some care about. There’s been discussions of universal basic incomes, which I think is a good idea. Bill Gates recently had an article about some tax ideas for machines. It’s a good idea, of course. Very hard to implement because you have to define what a robot is. You know, something like a car or a wheel, a wheel is a labor-saving device, do you tax it? I don’t know.
So, to get back to your question, I think it is true that there will be some groups that are in the short term displaced, but there’s no horizon where many things that people do, like caring for each other, like teaching each other, those kinds of jobs are not going away, they’re in ever-increasing demand. So, there’ll be a migration, not necessarily a wholesale replacement. And we do have to take care with the transient effect of that, and maybe a universal type of wage might be part of an answer. I don’t claim to have the answer completely. I mean it’s obviously a really hard problem that the world is grappling with. But I do feel, fundamentally, that the overall effect of all of this is going to be net positive. We’re going to make more efficient use of our resources, we’re going to provide services and capabilities that have never been possible before that everyone can have, and it’s going to be a net positive.
That’s an optimistic view, but it’s a very measured optimistic view. Let me play devil’s advocate from that side to say, why do you think there’ll be any disruption? What does that case look like? 
Because, if you think about it, in 1995 if somebody said, “Hey, you know what, if we take a bunch of computers and we connect them all via TCP/IP, and we build a protocol, maybe HTTP, to communicate, and maybe a markup language like HTML, you know what’s going to happen? Two billion people will connect and it’s going to create trillions and trillions and trillions of dollars of wealth. It’s going to create Google and eBay and Amazon and Baidu. It’s going to transform every aspect of society, and create an enormous number of jobs. And Etsy will come along, and people will be able to work from home. And all these thousands of things that float out of it.” You never would have made those connections, right? You never would have said, “Oh, that logically flows from snapping a bunch of computers together.” 
So, if we really are in a technological boom that’s going to dwarf that, really won’t the problem be an immense shortage of people? There’s going to be all of these opportunities, and very few people relatively to fill them. So, why the measured optimism for somebody who just waxed so poetic about what a big deal these technologies are?
Okay, that’s a great question. I mean, that was super. You asked will there be any disruption at all. I completely believe that we really have not a job shortage, but a skills shortage; that is the issue. And so, the burden goes then to the educational system, and the fabric of society to be able to place a value on good education and stick to it long enough that you can come up to speed in the modern sense, and be able to contribute beyond what the machines do. That is going to be a shortage, and anyone who has those skills is going to be in a good situation. But you can have disruption even in that environment.
You can have an environment where you have a skills shortage not a job shortage, and there’s disruption because the skills shortage gets worse and there’s a lot of individuals whose previous skills are no longer useful and they need to change. And that’s the tough thing. How do you retrain, in a transient case, when these advancements come very quickly? How do you manage that? What is fair? How does society distribute its wealth? I mean the mechanisms are going to change.
Right now, it’s starting to become true that just simply the manner in which you consume stuff; if that data is available, that has value in itself, and maybe people should be compensated for it. Today, they are not as much, they give it up when they sign in to these major cloud player services, and so those kinds of things will have to change. I’ll give you an anecdote.
Recently I went to Korea, and I met some startups there, and one of the things that happens, especially in non-curated app stores, is people develop games and they put in their effort and time and they develop a game, and they put it on there and people download it for ninety-nine cents or whatever, and they get some money. But, there are some bad actors that will see a new game, they’ll quickly download it, un-assemble the language back to the source, change a few little things and republish that same game that looks and feels just like the original but the ninety-nine cents goes to a different place. They basically steal the work. So, this is a bad thing, and in response, there are startups now that make tools that create software that makes it difficult to un-assemble. There are multiple startups that do what I just described and I’m sitting here listening to them and I’m realizing, “Wow, that job—in fact, that industry—didn’t even exist.” That is a new creation of the fact that there are un-curated app stores and mobile devices and games, and it’s an example of the kind of new thing that’s created, that didn’t exist before.
I believe that that process is alive and well, and we’re going to continue to see more of it, and there’s going to continue to be a skills shortage more than a job shortage, and so that’s why I have a fundamentally positive view. But it is going to be challenging to meet the demands of that skills shortage. Society has to place the right value on that type of education and we all have to work together to make that happen.
You have two different threads going on there. One is this idea that we have a skills shortage, and we need to rethink education. And another one that you touched on is the way that money flows, and can people be compensated for their data, and so forth. I’d like to talk about the first one, and again, I’d like to challenge the measured amount of your optimism. 
I’ll start off by saying I agree with you, that, at the beginning of the Industrial Revolution there was a vigorous debate in the United States about the value of post-literacy education. Like think about that: is post-literacy education worth anything? Because in an agrarian society, maybe it wasn’t for most people. Once you learn to read, that was what you needed. And then people said, “No, no, the jobs of the future are going to need more education. We should invest in that now.” And the United States became the first country in the world to guarantee that every single person could graduate from high school. And you can make a really good case, that I completely believe, that that was a major source of our economic ascendancy in the twentieth century. And, therefore, you can extend the argument by saying, “Maybe we need grades thirteen and fourteen now, and they’re vocational, and we need to do that again.” I’m with you entirely, but we don’t have that right now. And so, what’s going to happen? 
Here is where I would question the measured amount of your optimism which is… People often say to me, “Look, this technology creates all these new jobs at the high-end, like graphic designers and geneticists and programmers, and it destroys jobs at the low-end. Are those people down at the low-end going to become programmers?” And, of course, the answer is not, “Yes.” The answer is—and here’s my question—all that matters is, “Can everybody do a job just a little harder than the one they’re currently doing?” And if the answer to that is, “Yes,” then what happens is the college biology professor becomes a geneticist, the high school biology teacher becomes a college teacher, the substitute teacher gets backfilled into the biology one, and all the way down so that everybody gets just a little step up. Everybody just has to push themselves a little more, and the whole system phase shifts up, and everybody gets a raise and everybody gets a promotion. That‘s really what happened in the Industrial Revolution, so why is it that you don’t think that that is going to be as smooth as I have just painted it? 
Well, I think what you described does happen and is happening. If you look at—and again, I’m speaking from my own experience here as an engineer in a high-tech company—any engineer in a high-tech company, and you look at their output right now, and you compare it to a year or two before, they’ve all done what you describe, which is to do a little bit more, and to do something that’s a little bit harder. And we’ve all been able to do that because the fundamental processes involved improve. The tools, the fabric available to you to design things, the shared experience of the teams around you that you tap into—all those things improved. So, everyone is actually doing a job that’s a little bit harder than they did before, at least if you’re a designer.
You also cited some other examples, a teacher at one level going to the next level. That’s a kind of a queue, and there’s only so many spots at so many levels based on the demographics of the population. So not everyone can move in that direction, but they can all—at a given grade level—endeavor to teach more. Like, our kids, the math they do now is unbelievable. They are as much as a year or so ahead of when I was in high school, and I thought that we were doing pretty good stuff then, but now it’s even more.
I am optimistic that those things are going to happen, but you do have a labor force of certain types of jobs, where people are maybe doing them for ten, twenty, thirty years, and all of a sudden that is displaced. It’s hard to ask someone who’s done a repetitive task for much of their career to suddenly do something more sophisticated and different. That is the problem that we as a society have to address. We have to still value those individuals, and find a way—like a universal wage or something like that—so they can still have a good experience. Because if you don’t, then you really could have a dangerous situation. So, again, I feel overall positive, but I think there’s some pockets that are going to require some difficult thinking, and we’ve got to grapple with it.
Alright. I agree with your overall premise, but I will point out that that’s exactly what everybody said about the farmers—that you can’t take these people that have farmed for twenty or thirty years, and all of a sudden expect them to be able to work in a factory. The rhythm of the day is different, they have a supervisor, there’s bells that ring, they have to do different jobs, all of this stuff; and yet, that’s exactly what happened. 
I think there’s a tendency to short human ability. That being said, technological advance, interestingly, distributes its financial gains in a very unequal measure and there is something in there that I do agree we need to think about. 
Let’s talk about Qualcomm. You are the EVP of technology. You were the CTO. You’ve got seventy patents, like I said in your intro. What is Qualcomm’s role in this world? How are you working to build the better tomorrow? 
Okay, great. We provide connections between people, and increasingly between their worlds and between devices. Let me be specific about what I mean by that. When the company started—by the way, I’ve been at Qualcomm since ‘91, company started in ‘85-‘86 timeframe—one of the first things we did early on was we improved the performance and capacity of cellular networks by a huge amount. And that allowed operators like Verizon, AT&T, and Sprint—although they had different names back then—to offer, initially, voice services to large numbers of people at reasonably low cost. And the devices, thanks to the work of Qualcomm and others, got smaller, had longer battery life, and so forth. As time went on, it was originally connecting people with voice and text, and then it became faster and more capable so you could do pictures and videos, and then you could connect with social networks and web pages and streaming, and you could share large amounts of information.
We’re in an era now where I don’t just send a text message and say, “Oh, I’m skiing down this slope, isn’t this cool.” I can have a 360°, real-time, high-quality, low-latency sharing of my entire experience with another user, or users, somewhere else, and they can be there with me. And there’s all kinds of interesting consumer, industrial, medical, and commercial applications for that.
We’re working on that and we’re a leading developer of the connectivity technology, and also what you do with it on the endpoints—the processors, the camera systems, the user interfaces, the security frameworks that go with it; and now, increasingly, the machine learning and AI capabilities. We’re applying it, of course, to smartphones, but also to automobiles, medical devices, robotics, to industrial cases, and so on.
We’re very excited about the pending arrival of what we call 5G, which is the next generation of cellular technology, and it’s going to show up in the 2019-2020 timeframe. It’s going to be in the field maybe ten, fifteen years just like the previous generations were, and it’s going to provide, again, another big step in the performance of your radio link. And when I say “performance,” I mean the speed, of course, but also the latency will be very low—in many modes it can be millisecond or less. That will allow you to do functions that used to be on one side of the link, you can do on the other side. You can have very reliable systems.
There are a thousand companies participating in the standards process for this. It used to be just primarily the telecom industry, in the past with 3G and 4G—and of course, the telecom industry is very much still involved—but there are so many other businesses that will be enabled with 5G. So, we’re super excited about the impact it’s going to have on many, many businesses. Yeah, that’s what we’re up to these days.
Go with that a little more, paint us a picture. I don’t know if you remember those commercials back in the ‘90s saying, “Can you imagine sending a fax from the beach? You will!” and other “Can you imagine” scenarios. They kind of all came true—other than that there wasn’t as much faxing as I think they expected. But, what do you think? Tell me some of the things that you think, in a reasonable amount of time, we’re going to be able to do it, in five years, let’s say.
I’m so fascinated that you used that example, because that one I know very well. Those AT&T commercials, you can still watch them on YouTube, and it’s fun to do so. They did say people will be able to send a fax from the beach, and that particular ad motivated the operators to want to send fax over cellular networks. And we worked on that—I worked on that myself—and we used that as a way to build the fundamental Internet transport, and the fax was kind of the motivation for it. But later, we used the Internet transport for internet access and it became a much, much bigger thing. The next step will be sharing fully immersive experiences, so you can have high-speed, low-latency video in both directions.
Autonomous vehicles, but before we even get to fully autonomous—because there’s some debate about when we’re going to get to a car that you can get into with no steering wheel and it just takes you where you want to go; that’s still a hard problem. Before we have fully autonomous cars that can take you around without a steering wheel, we’re going to have a set of technologies that improve the safety of semiautonomous cars. Things like lane assist, and better cruise control, and better visibility at night, and better navigation; those sorts of things. We’re also working on vehicle-to-vehicle communication, which is another application of low-latency, and can be used to improve safety.
I’ll give you a quick anecdote on that. In some sense we already have a form of it, it’s called brake lights. Right now, when you’re driving down the highway, and the car in front puts on the lights, you see that and then you take action, you may slow down or whatever. You can see a whole bunch of brake lights, if the traffic is starting to back up, and that alerts you to slow down. Brake lights have transitioned from incandescent bulbs which take, like, one hundred milliseconds to turn on to LED bulbs which take one millisecond to turn on. And if you multiply a hundred milliseconds at highway speeds, it’s six to eight feet depending on the speed, and you realize that low-latency can save lives, and make the system more effective.
That’s one of the hallmarks of 5G, is we’re going to be able to connect things at low-latency to improve the safety or the function. Or, in the case of machine learning, where sometimes you want processing to be done in the phone, and sometimes you want to access enormous processing in the cloud, or at the edge. When we say edge, in this context, we mean something very close to the phone, within a small number of hops or routes to get to that processing. If you do that, you can have incredible capability that wasn’t possible before.
To give you an example of what I’m talking about, I recently went to the Mobile World Congress America show in San Francisco, it’s a great show, and I walked through the Verizon booth and I saw a demonstration that they had made. In their demonstration, they had taken a small consumer drone, and I mean it’s a really tiny one—just two or three inches long—that costs $18. All this little thing does is send back video, live video, and you control it with Wi-Fi, and they had it following a red balloon. The way it followed it was, it sent the video to a very powerful edge processing computer, which then performed a sophisticated computer vision and control algorithm and then sent the commands back. So, what you saw was this little low-cost device doing something very sophisticated and powerful, because it had a low-latency connection to a lot of processing power. And then, just to really complete that, they switched it from edge computing, that was right there at the booth, to a cloud-based computing service that was fifty milliseconds away, and once they did that, the little demo wouldn’t function anymore. They were showing the power of low-latency, high-speed video and media-type communication, which enabled a simple device to do something similar to a much more complex device, in real time, and they could offer that almost like a service.
So, that paradigm is very powerful, and it applies to many different use cases. It’s enabled by high-performance connectivity which is something that we supply, and we’re very proficient at that. It impacts machine learning, because it gives you different ways to take advantage of the progress there—you can do it locally, you can do it on the edge, you can do it remotely. When you combine mobile, and all the investment that’s been made there, you leverage that to apply to other devices like automobiles, medical devices, robotics, other kinds of consumer products like wearables and assistant speakers, and those kinds of things. There’s just a vast landscape of technologies and services that all can be improved by what we’ve done, and what 5G will bring. And so, that’s why we’re pretty fired up about the next iteration here.
I assume you have done theoretical thinking about the absolute maximum rate at which data can be transferred. Are we one percent the way there, or ten percent, or can’t even measure it because it’s so small? Is this going to go on forever?
I am so glad you asked. It’s so interesting. This Monday morning, we just put a new piece of artwork in our research center—there’s a piece of artwork on every floor—and on the first floor, when you walk in, there’s a piece of artwork that has Claude Shannon and a number of his equations, including the famous one which is the Shannon capacity limit. That’s the first thing you see when you walk into the research center at Qualcomm. That governs how fast you can move data across a link, and you can’t beat it. There’s no way, any more than you can go faster than the speed of light. So, the question is, “How close are we to that limit?” If you have just two devices, two antennas, and a given amount of spectrum, and a given amount of power, then we can get pretty darn close to that limit. But the question is not that, the question is really, “Are we close to how fast of a service we can offer a mobile user in a dense area?” And to that question, the answer is, “We’re nowhere close.” We can still get significantly better; by that, I mean orders of magnitude better than we are now.
I can tell you three ways that that can be accomplished, and we’re doing all three of them. Number one is, we continue to make better modems, that are more efficient, better receivers, better equalizers, better antennas all of those techniques, and 5G is an example of that.
Number two, we always work with the regulator and operators to bring more spectrum, more radio spectrum to bear. If you look at the overall spectrum chart, only a sliver of it is really used for mobile communication, and we’re going to be able to use a lot more of it, and use more spectrum at high frequencies, like millimeter wave and above, that’s going to make a lot more “highway,” so to speak, for data transfer.
And the third thing is, the average radius of a base station can shrink, and we can use that channel over and over and over again. So right now, if you drive your car, and you listen to a radio station, the radio industry cannot use that channel again until you get hundreds of miles away. In the modern cellular systems, we’re learning how to reuse that channel even when you’re a very short distance away, potentially only feet or tens of meters away, so you can use it again and again and again.
So, with those three pillars, we’re really not close, and everyone can look forward to faster, faster, faster modems. And every time we move that modem speed up, that, of course, is the foundation for bigger screens, and more video, and new use cases that weren’t possible before, at a given price point, which now become possible. We’re not at the end yet, we’ve got a long way to go.
You made a passing reference to Moore’s Law—you didn’t call it out, but you referenced exponential growth, and that the speed of computers would increase. Everybody always says, “Is Moore’s Law finally over?” You see those headlines all the time, and, like all the headlines that are a question, the answer is almost always, “No.” You’ve made references to quantum computing and all that. Do we have opportunities to increase processor speed well into the future with completely different architectures?
We do. We absolutely do. And I believe that will occur. I mean, we’re not at the limit yet now. You can find “Moore’s Law is over” articles ten years ago also, and somehow it hasn’t happened yet. When we get past three nanometers, yeah, certain things are going to get really, really tough. But then there will be new approaches that will take us there, take us to the next step.
There’s also architectural improvements, and other axes that can be exploited; same thing as I just described to you in wireless. Shannon has said that we can only go so far between two antennas in a given amount of spectrum, in a given amount of power. But we can escape that by increasing the spectrum, increasing the number of distance between the antennas, reusing the spectrum over and over again, and we can still get the job done without breaking any fundamental laws. So, at least for the time being, the exponential growth is still very much intact.
You’ve mentioned Claude Shannon twice. He’s a fascinating character, and one of the things he did that’s kind of monumental was that paper he wrote in ‘49 or ‘50 about how a computer could play chess, and he actually figured out an algorithm for that. What was really fascinating about that was, this was one of the first times somebody looked at a computer and saw something other than a calculator. Because up until that point they just did not, and he made that intuitive leap to say, “Here’s how you would make a computer do something other than math…but it’s really doing math.” There’s a fascinating new book about him out called A Mind at Play, which I just read, that I recommend. 
We’re running out of time here. We’re wrapping up. I’m curious do you write, or do you have a place that people who want to follow you can keep track of what you’re up to? 
Well, I don’t have a lot there, but I do have a Twitter, and once in a while I’ll share a few thoughts. I should probably do more of that than I do. I have an internal blog which I should probably do more than I do. I’m sorry to say, I’m not very prolific on external writing, but that is something I would love to do more of.
And my final question is, are you a consumer of science fiction? You quoted Arthur C. Clarke earlier, and I’m curious if you read it, or watch TV, or movies or what have you. And if so, do you have any visions of the future that are in fiction, that you kind of identify with? 
Yes, I will answer an emphatic yes to that. I love all forms of science fiction and one of my favorites is Star Trek. My name spelled backwards is “Borg.” In fact, our chairman Paul Jacobs—I worked for him most of my career—he calls me “Locutus.” Given the discussion we just had—if you’re a fan of Star Trek and, in particular, the Star Trek: The Next Generation shows that were on in the ‘80s and early ‘90s, there was an episode where Commander Data met Mr. Spock. And that was really a good one, because you had Commander Data, who is an android and wants to be human, wants to have emotion and creativity and those things that we discussed, but can’t quite get there, meeting Mr. Spock who is a living thing and trying to purge all emotion and so forth, to just be pure logic, and they had an interaction. I thought that was just really interesting.
But, yes, I follow all science fiction. I like the book Physics of Star Trek by Krauss, I got to meet him once. And it’s amazing how many of the devices and concepts from science fiction have become science fact. In fact, the only difference between science fiction and science fact, is time. Over time we’ve pretty much built everything that people have thought up—communicators, replicators, computers.
I know, you can’t see one of those in-ear Bluetooth devices and not see Uhura, right? That’s what she had.
Correct. That little earpiece is a Bluetooth device. The communicator is a flip phone. The little square memory cartridges were like a floppy disk from the ‘80s. 3-D printers are replicators. We also have software replicators that can replicate and transport. We kind of have the hardware but not quite the way they do yet, but we’ll get there.
Do you think that these science fiction worlds anticipate the world or inadvertently create it? Do we have flip phones because of Star Trek or did Star Trek foresee the flip phone? 
I believe their influence is undeniable.
I agree and a lot of times they say it, right? They say, “Oh, I saw that and I wanted to do that. I wanted to build that.” You know there’s an XPRIZE for making a tricorder, and that came from Star Trek.
We were the sponsor of that XPRIZE and we were highly involved in that. And, yep, that’s exactly right, the inspiration of that was a portable device that can make a bunch of diagnoses, and that is exactly what took place and now we have real ones.
Well, I want to thank you for a fascinating hour. I want to thank you for going on all of these tangents. It was really fascinating. 
Wonderful, thank you as well. I also really enjoyed it, and anytime you want to follow up or talk some more please don’t hesitate. I really enjoyed talking with you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; }
.voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(http://ift.tt/2g3SzGL) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem }
@media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } }
.voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; }
.voice-in-ai-link-back a:hover { color: #ff4f00; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; } Voices in AI – Episode 25: A Conversation with Matt Grob syndicated from http://ift.tt/2wBRU5Z
0 notes
techscopic · 7 years
Text
Voices in AI – Episode 25: A Conversation with Matt Grob
Today’s leading minds talk AI with host Byron Reese
.voice-in-ai-byline-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-byline-embed span { color: #FF6B00; }
In this episode, Byron and Matt talk about thinking, the Turing test, creativity, Google Translate, job displacement, and education.
0:00
0:00
0:00
var go_alex_briefing = { expanded: true, get_vars: {}, twitter_player: false, auto_play: false };
(function( $ ) { ‘use strict’;
go_alex_briefing.init = function() { this.build_get_vars();
if ( ‘undefined’ != typeof go_alex_briefing.get_vars[‘action’] ) { this.twitter_player = ‘true’; }
if ( ‘undefined’ != typeof go_alex_briefing.get_vars[‘auto_play’] ) { this.auto_play = go_alex_briefing.get_vars[‘auto_play’]; }
if ( ‘true’ == this.twitter_player ) { $( ‘#top-header’ ).remove(); }
var $amplitude_args = { ‘songs’: [{“name”:”Episode 25: A Conversation with Matt Grob”,”artist”:”Byron Reese”,”album”:”Voices in AI”,”url”:”https:\/\/voicesinai.s3.amazonaws.com\/2017-12-04-(01-01-40)-matt-grob.mp3″,”live”:false,”cover_art_url”:”https:\/\/voicesinai.com\/wp-content\/uploads\/2017\/12\/voices-headshot-card_preview-2.jpeg”}], ‘default_album_art’: ‘http://ift.tt/2yEaCKF&#8217; };
if ( ‘true’ == this.auto_play ) { $amplitude_args.autoplay = true; }
Amplitude.init( $amplitude_args );
this.watch_controls(); };
go_alex_briefing.watch_controls = function() { $( ‘#small-player’ ).hover( function() { $( ‘#small-player-middle-controls’ ).show(); $( ‘#small-player-middle-meta’ ).hide(); }, function() { $( ‘#small-player-middle-controls’ ).hide(); $( ‘#small-player-middle-meta’ ).show();
});
$( ‘#top-header’ ).hover(function(){ $( ‘#top-header’ ).show(); $( ‘#small-player’ ).show(); }, function(){
});
$( ‘#small-player-toggle’ ).click(function(){ $( ‘.hidden-on-collapse’ ).show(); $( ‘.hidden-on-expanded’ ).hide(); /* Is expanded */ go_alex_briefing.expanded = true; });
$(‘#top-header-toggle’).click(function(){ $( ‘.hidden-on-collapse’ ).hide(); $( ‘.hidden-on-expanded’ ).show(); /* Is collapsed */ go_alex_briefing.expanded = false; });
// We’re hacking it a bit so it works the way we want $( ‘#small-player-toggle’ ).click(); $( ‘#top-header-toggle’ ).hide(); };
go_alex_briefing.build_get_vars = function() { if( document.location.toString().indexOf( ‘?’ ) !== -1 ) {
var query = document.location .toString() // get the query string .replace(/^.*?\?/, ”) // and remove any existing hash string (thanks, @vrijdenker) .replace(/#.*$/, ”) .split(‘&’);
for( var i=0, l=query.length; i<l; i++ ) { var aux = decodeURIComponent( query[i] ).split( '=' ); this.get_vars[ aux[0] ] = aux[1]; } } };
$( function() { go_alex_briefing.init(); }); })( jQuery );
.go-alexa-briefing-player { margin-bottom: 3rem; margin-right: 0; float: none; }
.go-alexa-briefing-player div#top-header { width: 100%; max-width: 1000px; min-height: 50px; }
.go-alexa-briefing-player div#top-large-album { width: 100%; max-width: 1000px; height: auto; margin-right: auto; margin-left: auto; z-index: 0; margin-top: 50px; }
.go-alexa-briefing-player div#top-large-album img#large-album-art { width: 100%; height: auto; border-radius: 0; }
.go-alexa-briefing-player div#small-player { margin-top: 38px; width: 100%; max-width: 1000px; }
.go-alexa-briefing-player div#small-player div#small-player-full-bottom-info { width: 90%; text-align: center; }
.go-alexa-briefing-player div#small-player div#small-player-full-bottom-info div#song-time-visualization-large { width: 75%; }
.go-alexa-briefing-player div#small-player-full-bottom { background-color: #f2f2f2; border-bottom-left-radius: 5px; border-bottom-right-radius: 5px; height: 57px; }
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; }
.voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(http://ift.tt/2g3SzGL) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem }
@media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } }
.voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; }
.voice-in-ai-link-back a:hover { color: #ff4f00; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; }
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Matt Grob. He is the Executive Vice President of Technology at Qualcomm Technologies, Inc. Grob joined Qualcomm back in 1991 as an engineer. He also served as Qualcomm’s Chief Technology Officer from 2011 to 2017. He holds a Master of Science in Electrical Engineering from Stanford, and a Bachelor of Science in Electrical Engineering from Bradley University. He holds more than seventy patents. Welcome to the show, Matt.
Matt Grob: Thanks, Byron, it’s great to be here.
So what does artificial intelligence kind of mean to you? What is it, kind of, at a high level? 
Well, it’s the capability that we give to machines to sense and think and act, but it’s more than just writing a program that can go one way or another based on some decision process. Really, artificial intelligence is what we think of when a machine can improve its performance without being reprogrammed, based on gaining more experience or being able to access more data. If it can get better, it can prove its performance; then we think of that as machine learning or artificial intelligence.
It learns from its environment, so every instantiation of it heads off on its own path, off to live its own AI life, is that the basic idea?
Yeah, for a long time we’ve been able to program computers to do what we want. Let’s say, you make a machine that drives your car or does cruise control, and then we observe it, and we go back in and we improve the program and make it a little better. That’s not necessarily what we’re talking about here. We’re talking about the capability of a machine to improve its performance in some measurable way without being reprogrammed, necessarily. Rather it trains or learns from being able to access more data, more experience, or maybe talking to other machines that have learned more things, and therefore improves its ability to reason, improves its ability to make decisions or drive errors down or things like that. It’s those aspects that separate machine learning, and these new fields that everyone is very excited about, from just traditional programming.
When you first started all of that, you said the computer “thinks.” Were you using that word casually or does the computer actually think?
Well, that’s a subject of a lot of debate. I need to point out, my experience, my background, is actually in signal processing and communications theory and modem design, and a number of those aspects relate to machine learning and AI, but, I don’t actually consider myself a deep expert in those fields. But there’s a lot of discussion. I know a number of the really deep experts, and there is a lot of discussion on what “think” actually means, and whether a machine is simply performing a cold computation, or whether it actually possesses true imagination or true creativity, those sorts of elements.
Now in many cases, the kind of machine that might recognize a cat from a dog—and it might be performing a certain algorithm, a neural network that’s implemented with processing elements and storage taps and so forth—is not really thinking like a living thing would do. But nonetheless it’s considering inputs, it’s making decisions, it’s using previous history and previous training. So, in many ways, it is like a thinking process, but it may not have the full, true creativity or emotional response that a living brain might have.
You know it’s really interesting because it’s not just a linguistic question at its core because, either the computer is thinking, or it’s simulating something that thinks. And I think the reason those are different is because they speak to what are the limits, ultimately, of what we can build. 
Alan Turing way back in his essay was talking about, “Can a machine think?” He asked the question sixty-five years ago, and he said that the machine may do it a different way but you still have to call it “thinking.” So, with the caveat that you’re not at the vanguard of this technology, do you personally call the ball on that one way or the other, in terms of machine thought?
Yeah, I believe, and I think the prevailing view is, though not everyone agrees, that many of the machines that we have today, the agents that run in our phones, and in the cloud, and can recognize language and conditions are not really, yet, akin to a living brain. They’re very, very useful. They are getting more and more capable. They’re able to go faster, and move more data, and all those things, and many metrics are improving, but they still fall short.
And there’s an open question as to just how far you can take that type of architecture. How close can you get? It may get to the point where, in some constrained ways, it could pass a Turing Test, and if you only had a limited input and output you couldn’t tell the difference between the machine and a person on the other end of the line there, but we’re still a long way away. There are some pretty respected folks who believe that you won’t be able to get the creativity and imagination and those things by simply assembling large numbers of AND gates and processing elements; that you really need to go to a more fundamental description that involves quantum gravity and other effects, and most of the machines we have today don’t do that. So, while we have a rich roadmap ahead of us, with a lot of incredible applications, it’s still going to be a while before we really create a real brain.
Wow, so there’s a lot going on in there. One thing I just heard was, and correct me if I’m saying this wrong, that you don’t believe we can necessarily build an artificial general intelligence using, like, a Von Neumann architecture, like a desktop computer. And that what we’re building on that trajectory can get better and better and better, but it won’t ever have that spark, and that what we’re going to need are the next generation of quantum computer, or just a fundamentally different architecture, and maybe those can emulate a human brain’s functionality, not necessarily how it does it but what it can do. Is that fair? Is that what you’re saying? 
Yeah, that is fair, and I think there are some folks who believe that is the case. Now, it’s not universally accepted. I’m kind of citing some viewpoints from folks like physicist Roger Penrose, and there’s a group around him—Penrose Institute, now being formed—that are exploring these things and they will make some very interesting points about the model that you use. If you take a brain and you try to model a neuron, you can do so, in an efficient way with a couple lines of mathematics, and you can replicate that in silicon with gates and processors, and you can put hundreds of thousands, or millions, or billions of them together and, sure, you can create a function that learns, and can recognize images, and control motors, and do things and it’s good. But whether or not it can actually have true creativity, many will argue that a model has to include effects of quantum gravity, and without that we won’t really have these “real brains.”
You read in the press about both the fears and the possible benefits of these kinds of machines, that may not happen until we reach the point where we’re really going beyond, as you said, Von Neumann, or even other structures just based on gates. Until we get beyond that, those fears or those positive effects, either one, may not occur.
Let’s talk about Penrose for a minute. His basic thesis—and you probably know this better than I do—is that Gödel’s incompleteness theorem says that the system we’re building can’t actually duplicate what a human brain can do. 
Or said another way, he says there are certain mathematical problems that are not able to be solved with an algorithm. They can’t be solved algorithmically, but that a human can solve them. And he uses that to say, therefore, a human brain is not a computational device that just runs algorithms, that it’s doing something more; and he, of course, thinks quantum tunneling and all of that. So, do you think that’s what’s going on in the brain, do you think the brain is fundamentally non-computational?
Well, again, I have to be a little reserved with my answer to that because it’s not an area that I feel I have a great deep background in. I’ve met Roger, and other folks around him, and some of the folks on the other side of this debate, too, and we’ve had a lot of discussions. We’ve worked on computational neuroscience at Qualcomm for ten years; not thirty years, but ten years, for sure. We started making artificial brains that were based on the spiking neuron technique, which is a very biologically inspired technique. And again, they are processing machines and they can do many things, but they can’t quite do what a real brain can do.
An example that was given to me was the proof of Fermat’s Last Theorem. If you’re familiar with Fermat’s Last Theorem, it was written down I think maybe two hundred years ago or more, and the creator, Fermat, a mathematician, wrote in the margin of his notebook that he had a proof for it, but then he never got to prove it. I think he lost his life. And it wasn’t until about twenty-some years ago where a researcher at Berkeley finally proved it. It’s claimed that the insight and creativity required to do that work would not be possible by simply assembling a sufficient number of AND gates and training them on previous geometry and math constructs, and then giving it this one and having the proof come out. It’s just not possible. There had to be some extra magic there, which Roger, and others, would argue requires quantum effects. And if you believe that—and I obviously find it very reasonable and I respect these folks, but I don’t claim that my own background informs me enough on that one—it seems very reasonable; it mirrors the experience we had here for a decade when we were building these kinds of machines.
I think we’ve got a way to go before some of these sci-fi type scenarios play out. Not that they won’t happen, but it’s not going to be right around the corner. But what is right around the corner is a lot of greatly improved capabilities as these techniques basically fundamentally replace traditional signal processing for many fields. We’re using it for image and sound, of course, but now we’re starting to use it in cameras, in modems and controllers, in complex management of complex systems, all kinds of functions. It’s really exciting what’s going on, but we still have a way to go before we get, you know, the ultimate.
Back to the theorem you just referenced, and I could be wrong about this, but I recall that he actually wrote a surprisingly simple proof to this theorem, which now some people say he was just wrong, that there isn’t a simple proof for it. But because everybody believed there was a proof for it, we eventually solved it. 
Do you know the story about a guy named Danzig back in the 30s? He was a graduate student in statistics, and his professor had written two famous unsolved problems on the chalkboard and said, “These are famous unsolved programs.” Well, Danzig comes in late to class, and he sees them and just assumes they’re the homework. He writes them down, and takes them home, and, you can guess, he solves them both. He remarked later that they seemed a little harder than normal. So, he turned them in, and it was about two weeks before the professor looked at them and realized what they were. And it’s just fascinating to think that, like, that guy has the same brain I have, I mean it’s far better and all that, but when you think about all those capabilities that are somewhere probably in there. 
Those are wonderful stories. I love them. There’s one about Gauss when he was six years old, or eight years old, and the teacher punished the class, told everyone to add up the numbers from one to one hundred. And he did it in an instant because he realized that 100 + 0 is 100, and 99 + 1 is 100, and 98 + 2 is 100, and you can multiply those by 50. The question is, “Is a machine based on neural nets, and coefficients, and logistic regression, and SVM and those techniques, capable of that kind of insight?” Likely it is not. And there is some special magic required for that to actually happen.
I will only ask you one more question on that topic and then let’s dial it back in more to the immediate future. You said, “special magic.” And again, I have to ask you, like I asked you about “think,” are you using “magic” colloquially, or is it just physics that we don’t understand yet? 
I would argue it’s probably the latter. With the term “magic,” there’s famous Arthur C. Clarke quote that, “Sufficiently advanced technology is indistinguishable from magic.” I think, in this case, the structure of a real brain and how it actually works, we might think of it as magic until we understand more than we do now. But it seems like you have to go into a deeper level, and a simple function assembled from logic gates is not enough.
In the more present day, how would you describe where we are with the science? Because it seems we’re at a place where you’re still pleasantly surprised when something works. It’s like, “Wow, it’s kind of cool, that worked.” And as much as there are these milestone events like AlphaGo, or Watson, or the one that beat the poker players recently, how quickly do you think advances really are coming? Or is it the hope for those advances that’s really kind of what’s revved up?
I think the advances are coming very rapidly, because there’s an exponential nature. You’ve got machines that have processing power which is increasing in an exponential manner, and whether it continues to do so is another question, but right now it is. You’ve got memory, which is increasing in an exponential manner. And then you’ve also got scale, which is the number of these devices that exist and your ability to connect to them. And I’d really like to get into that a little bit, too, the ability of a user to tap into a huge amount of resource. So, you’ve got all of those combined with algorithmic improvements, and, especially right now, there’s such a tremendous interest in the industry to work on these things, so lots of very talented graduates are pouring into the field. The product of all those effects is causing very, very rapid improvement. Even though in some cases the fundamental algorithm might be based on an idea from the 70s or 80s, we’re able to refine that algorithm, we’re able to couple that with far more processing power at a much lower cost than as ever before. And as a result, we’re getting incredible capabilities.
I was fortunate enough to have a dinner with the head of a Google Translate project recently, and he told me—an incredibly nice guy—that that program is now one of the largest AI projects in the world, and has a billion users. So, a billion users can walk around with their device and basically speak any language and listen to any language or read it, and that’s a tremendous accomplishment. That’s really a powerful thing, and a very good thing. And so, yeah, those things are happening right now. We’re in an era of rapid, rapid improvement in those capabilities.
What do you think is going to be the next watershed event? We’re going to have these incremental advances, and there’s going to be more self-driving cars and all of these things. But these moments that capture the popular imagination, like when the best Go player in the world loses, what do you think will be another one of those for the future?
When you talk about AlphaGo and Watson playing Jeopardy and those things, those are significant events, but they’re machines that someone wheels in, and they are big machines, and they hook them up and they run, but you don’t really have them available in the mobile environment. We’re on the verge now of having that kind of computing power, not just available to one person doing a game show, or the Go champion in a special setting, but available to everyone at a reasonable cost, wherever they are, at any time. Also, to be able to benefit, the learning experience of one person can benefit the rest. And so, that, I think, is the next step. It’s when you can use that capability, which is already growing as I described, and make it available in a mobile environment, ubiquitously, at reasonable cost, then you’re going to have incredible things.
Autonomous vehicles is an example, because that’s a mobile thing. It needs a lot of processing power, and it needs processing power local to it, on the device, but also needs to access tremendous capability in the network, and it needs to do so at high reliability, and at low latency and some interesting details there—so vehicles is a very good example. Vehicles is also something that we need to improve dramatically, from a safety standpoint, versus where we are today. It’s critical to the economies of cities and nations, so a lot of scale. So, yeah, that’s a good crucible for this.
But there are many others. Medical devices, huge applications there. And again, you want, in many cases, a very powerful capability in the cloud or in the network, but also at the device, there are many cases where you’d want to be able to do some processing right there, that can make the device more powerful or more economical, and that’s a mobile use case. So, I think there will be applications there; there can be applications in education, entertainment, certainly games, management of resources like power and electricity and heating and cooling and all that. It’s really a wide swath but the combination of connectivity with this capability together is really going to do it.
Let’s talk about the immediate future. As you know, with regard to these technologies, there’s kind of three different narratives about their effect on employment. One is that they’re going to take every single job, everybody from a poet on down; that doesn’t sound like something that would resonate with you because of the conversation we just had. Another is that this technology is going to replace a lot of low–skilled workers, there’s going to be fewer, quote, “low–skilled jobs,” whatever those are, and that you’re going to have this permanent underclass of unemployed people competing essentially with machines for work. And then there’s another narrative that says, “No, what’s going to happen is the same thing that happened with electricity, with motors, with everything else. People take that technology they use it to increase their own productivity, and they go on to raise their income that way. And you’re not going to have essentially any disruption, just like you didn’t have any disruption when we went from animal power to machine power.” Which of those narratives do you identify with, or is there a different way you would say it?
Okay, I’m glad you asked this because this is a hugely important question and I do want to make some comments. I’ve had the benefit of participating in the World Economic Forum, and I’ve talked to Brynjolfsson and McAfee, the authors of The Second Machine Age, and the whole theme of the forum a year ago was Klaus Schwab’s book The Fourth Industrial Age and the rise of cyber-physical systems and what impact they will have. I think we know some things from history and the question is, is the future going to repeat that or not? We know that there’s the so-called Luddite fallacy which says that, “When these machines come they’re going to displace all the jobs.” And we know that a thousand years ago, ninety-nine percent of the population was involved in food production, and today, I don’t know, don’t quote me on this, but it’s like 0.5 percent or something like that. Because we had massive productivity gains, we didn’t need to have that many people working on food production, and they found the ability to do other things. It’s definitely true that increases in unemployment did not keep pace with increases in productivity. Productivity went up orders of magnitude, unemployment did not go up, quote, “on the orders of magnitude,” and that’s been the history for a thousand years. And even more recently if you look at the government statistics on productivity, they are not increasing. Actually, some people are alarmed that they’re not increasing faster than they are, they don’t really reflect a spike that would suggest some of these negative scenarios.
Now, having said that, it is true that we are at a place now where machines, even with their processing that they use today, based on neural networks and SVMs and things like that, they are able to replace a lot of the existing manual or repetitive type tasks. I think society as a whole is going to benefit tremendously, and there’s going to be some groups that we’ll have to take some care about. There’s been discussions of universal basic incomes, which I think is a good idea. Bill Gates recently had an article about some tax ideas for machines. It’s a good idea, of course. Very hard to implement because you have to define what a robot is. You know, something like a car or a wheel, a wheel is a labor-saving device, do you tax it? I don’t know.
So, to get back to your question, I think it is true that there will be some groups that are in the short term displaced, but there’s no horizon where many things that people do, like caring for each other, like teaching each other, those kinds of jobs are not going away, they’re in ever-increasing demand. So, there’ll be a migration, not necessarily a wholesale replacement. And we do have to take care with the transient effect of that, and maybe a universal type of wage might be part of an answer. I don’t claim to have the answer completely. I mean it’s obviously a really hard problem that the world is grappling with. But I do feel, fundamentally, that the overall effect of all of this is going to be net positive. We’re going to make more efficient use of our resources, we’re going to provide services and capabilities that have never been possible before that everyone can have, and it’s going to be a net positive.
That’s an optimistic view, but it’s a very measured optimistic view. Let me play devil’s advocate from that side to say, why do you think there’ll be any disruption? What does that case look like? 
Because, if you think about it, in 1995 if somebody said, “Hey, you know what, if we take a bunch of computers and we connect them all via TCP/IP, and we build a protocol, maybe HTTP, to communicate, and maybe a markup language like HTML, you know what’s going to happen? Two billion people will connect and it’s going to create trillions and trillions and trillions of dollars of wealth. It’s going to create Google and eBay and Amazon and Baidu. It’s going to transform every aspect of society, and create an enormous number of jobs. And Etsy will come along, and people will be able to work from home. And all these thousands of things that float out of it.” You never would have made those connections, right? You never would have said, “Oh, that logically flows from snapping a bunch of computers together.” 
So, if we really are in a technological boom that’s going to dwarf that, really won’t the problem be an immense shortage of people? There’s going to be all of these opportunities, and very few people relatively to fill them. So, why the measured optimism for somebody who just waxed so poetic about what a big deal these technologies are?
Okay, that’s a great question. I mean, that was super. You asked will there be any disruption at all. I completely believe that we really have not a job shortage, but a skills shortage; that is the issue. And so, the burden goes then to the educational system, and the fabric of society to be able to place a value on good education and stick to it long enough that you can come up to speed in the modern sense, and be able to contribute beyond what the machines do. That is going to be a shortage, and anyone who has those skills is going to be in a good situation. But you can have disruption even in that environment.
You can have an environment where you have a skills shortage not a job shortage, and there’s disruption because the skills shortage gets worse and there’s a lot of individuals whose previous skills are no longer useful and they need to change. And that’s the tough thing. How do you retrain, in a transient case, when these advancements come very quickly? How do you manage that? What is fair? How does society distribute its wealth? I mean the mechanisms are going to change.
Right now, it’s starting to become true that just simply the manner in which you consume stuff; if that data is available, that has value in itself, and maybe people should be compensated for it. Today, they are not as much, they give it up when they sign in to these major cloud player services, and so those kinds of things will have to change. I’ll give you an anecdote.
Recently I went to Korea, and I met some startups there, and one of the things that happens, especially in non-curated app stores, is people develop games and they put in their effort and time and they develop a game, and they put it on there and people download it for ninety-nine cents or whatever, and they get some money. But, there are some bad actors that will see a new game, they’ll quickly download it, un-assemble the language back to the source, change a few little things and republish that same game that looks and feels just like the original but the ninety-nine cents goes to a different place. They basically steal the work. So, this is a bad thing, and in response, there are startups now that make tools that create software that makes it difficult to un-assemble. There are multiple startups that do what I just described and I’m sitting here listening to them and I’m realizing, “Wow, that job—in fact, that industry—didn’t even exist.” That is a new creation of the fact that there are un-curated app stores and mobile devices and games, and it’s an example of the kind of new thing that’s created, that didn’t exist before.
I believe that that process is alive and well, and we’re going to continue to see more of it, and there’s going to continue to be a skills shortage more than a job shortage, and so that’s why I have a fundamentally positive view. But it is going to be challenging to meet the demands of that skills shortage. Society has to place the right value on that type of education and we all have to work together to make that happen.
You have two different threads going on there. One is this idea that we have a skills shortage, and we need to rethink education. And another one that you touched on is the way that money flows, and can people be compensated for their data, and so forth. I’d like to talk about the first one, and again, I’d like to challenge the measured amount of your optimism. 
I’ll start off by saying I agree with you, that, at the beginning of the Industrial Revolution there was a vigorous debate in the United States about the value of post-literacy education. Like think about that: is post-literacy education worth anything? Because in an agrarian society, maybe it wasn’t for most people. Once you learn to read, that was what you needed. And then people said, “No, no, the jobs of the future are going to need more education. We should invest in that now.” And the United States became the first country in the world to guarantee that every single person could graduate from high school. And you can make a really good case, that I completely believe, that that was a major source of our economic ascendancy in the twentieth century. And, therefore, you can extend the argument by saying, “Maybe we need grades thirteen and fourteen now, and they’re vocational, and we need to do that again.” I’m with you entirely, but we don’t have that right now. And so, what’s going to happen? 
Here is where I would question the measured amount of your optimism which is… People often say to me, “Look, this technology creates all these new jobs at the high-end, like graphic designers and geneticists and programmers, and it destroys jobs at the low-end. Are those people down at the low-end going to become programmers?” And, of course, the answer is not, “Yes.” The answer is—and here’s my question—all that matters is, “Can everybody do a job just a little harder than the one they’re currently doing?” And if the answer to that is, “Yes,” then what happens is the college biology professor becomes a geneticist, the high school biology teacher becomes a college teacher, the substitute teacher gets backfilled into the biology one, and all the way down so that everybody gets just a little step up. Everybody just has to push themselves a little more, and the whole system phase shifts up, and everybody gets a raise and everybody gets a promotion. That‘s really what happened in the Industrial Revolution, so why is it that you don’t think that that is going to be as smooth as I have just painted it? 
Well, I think what you described does happen and is happening. If you look at—and again, I’m speaking from my own experience here as an engineer in a high-tech company—any engineer in a high-tech company, and you look at their output right now, and you compare it to a year or two before, they’ve all done what you describe, which is to do a little bit more, and to do something that’s a little bit harder. And we’ve all been able to do that because the fundamental processes involved improve. The tools, the fabric available to you to design things, the shared experience of the teams around you that you tap into—all those things improved. So, everyone is actually doing a job that’s a little bit harder than they did before, at least if you’re a designer.
You also cited some other examples, a teacher at one level going to the next level. That’s a kind of a queue, and there’s only so many spots at so many levels based on the demographics of the population. So not everyone can move in that direction, but they can all—at a given grade level—endeavor to teach more. Like, our kids, the math they do now is unbelievable. They are as much as a year or so ahead of when I was in high school, and I thought that we were doing pretty good stuff then, but now it’s even more.
I am optimistic that those things are going to happen, but you do have a labor force of certain types of jobs, where people are maybe doing them for ten, twenty, thirty years, and all of a sudden that is displaced. It’s hard to ask someone who’s done a repetitive task for much of their career to suddenly do something more sophisticated and different. That is the problem that we as a society have to address. We have to still value those individuals, and find a way—like a universal wage or something like that—so they can still have a good experience. Because if you don’t, then you really could have a dangerous situation. So, again, I feel overall positive, but I think there’s some pockets that are going to require some difficult thinking, and we’ve got to grapple with it.
Alright. I agree with your overall premise, but I will point out that that’s exactly what everybody said about the farmers—that you can’t take these people that have farmed for twenty or thirty years, and all of a sudden expect them to be able to work in a factory. The rhythm of the day is different, they have a supervisor, there’s bells that ring, they have to do different jobs, all of this stuff; and yet, that’s exactly what happened. 
I think there’s a tendency to short human ability. That being said, technological advance, interestingly, distributes its financial gains in a very unequal measure and there is something in there that I do agree we need to think about. 
Let’s talk about Qualcomm. You are the EVP of technology. You were the CTO. You’ve got seventy patents, like I said in your intro. What is Qualcomm’s role in this world? How are you working to build the better tomorrow? 
Okay, great. We provide connections between people, and increasingly between their worlds and between devices. Let me be specific about what I mean by that. When the company started—by the way, I’ve been at Qualcomm since ‘91, company started in ‘85-‘86 timeframe—one of the first things we did early on was we improved the performance and capacity of cellular networks by a huge amount. And that allowed operators like Verizon, AT&T, and Sprint—although they had different names back then—to offer, initially, voice services to large numbers of people at reasonably low cost. And the devices, thanks to the work of Qualcomm and others, got smaller, had longer battery life, and so forth. As time went on, it was originally connecting people with voice and text, and then it became faster and more capable so you could do pictures and videos, and then you could connect with social networks and web pages and streaming, and you could share large amounts of information.
We’re in an era now where I don’t just send a text message and say, “Oh, I’m skiing down this slope, isn’t this cool.” I can have a 360°, real-time, high-quality, low-latency sharing of my entire experience with another user, or users, somewhere else, and they can be there with me. And there’s all kinds of interesting consumer, industrial, medical, and commercial applications for that.
We’re working on that and we’re a leading developer of the connectivity technology, and also what you do with it on the endpoints—the processors, the camera systems, the user interfaces, the security frameworks that go with it; and now, increasingly, the machine learning and AI capabilities. We’re applying it, of course, to smartphones, but also to automobiles, medical devices, robotics, to industrial cases, and so on.
We’re very excited about the pending arrival of what we call 5G, which is the next generation of cellular technology, and it’s going to show up in the 2019-2020 timeframe. It’s going to be in the field maybe ten, fifteen years just like the previous generations were, and it’s going to provide, again, another big step in the performance of your radio link. And when I say “performance,” I mean the speed, of course, but also the latency will be very low—in many modes it can be millisecond or less. That will allow you to do functions that used to be on one side of the link, you can do on the other side. You can have very reliable systems.
There are a thousand companies participating in the standards process for this. It used to be just primarily the telecom industry, in the past with 3G and 4G—and of course, the telecom industry is very much still involved—but there are so many other businesses that will be enabled with 5G. So, we’re super excited about the impact it’s going to have on many, many businesses. Yeah, that’s what we’re up to these days.
Go with that a little more, paint us a picture. I don’t know if you remember those commercials back in the ‘90s saying, “Can you imagine sending a fax from the beach? You will!” and other “Can you imagine��� scenarios. They kind of all came true—other than that there wasn’t as much faxing as I think they expected. But, what do you think? Tell me some of the things that you think, in a reasonable amount of time, we’re going to be able to do it, in five years, let’s say.
I’m so fascinated that you used that example, because that one I know very well. Those AT&T commercials, you can still watch them on YouTube, and it’s fun to do so. They did say people will be able to send a fax from the beach, and that particular ad motivated the operators to want to send fax over cellular networks. And we worked on that—I worked on that myself—and we used that as a way to build the fundamental Internet transport, and the fax was kind of the motivation for it. But later, we used the Internet transport for internet access and it became a much, much bigger thing. The next step will be sharing fully immersive experiences, so you can have high-speed, low-latency video in both directions.
Autonomous vehicles, but before we even get to fully autonomous—because there’s some debate about when we’re going to get to a car that you can get into with no steering wheel and it just takes you where you want to go; that’s still a hard problem. Before we have fully autonomous cars that can take you around without a steering wheel, we’re going to have a set of technologies that improve the safety of semiautonomous cars. Things like lane assist, and better cruise control, and better visibility at night, and better navigation; those sorts of things. We’re also working on vehicle-to-vehicle communication, which is another application of low-latency, and can be used to improve safety.
I’ll give you a quick anecdote on that. In some sense we already have a form of it, it’s called brake lights. Right now, when you’re driving down the highway, and the car in front puts on the lights, you see that and then you take action, you may slow down or whatever. You can see a whole bunch of brake lights, if the traffic is starting to back up, and that alerts you to slow down. Brake lights have transitioned from incandescent bulbs which take, like, one hundred milliseconds to turn on to LED bulbs which take one millisecond to turn on. And if you multiply a hundred milliseconds at highway speeds, it’s six to eight feet depending on the speed, and you realize that low-latency can save lives, and make the system more effective.
That’s one of the hallmarks of 5G, is we’re going to be able to connect things at low-latency to improve the safety or the function. Or, in the case of machine learning, where sometimes you want processing to be done in the phone, and sometimes you want to access enormous processing in the cloud, or at the edge. When we say edge, in this context, we mean something very close to the phone, within a small number of hops or routes to get to that processing. If you do that, you can have incredible capability that wasn’t possible before.
To give you an example of what I’m talking about, I recently went to the Mobile World Congress America show in San Francisco, it’s a great show, and I walked through the Verizon booth and I saw a demonstration that they had made. In their demonstration, they had taken a small consumer drone, and I mean it’s a really tiny one—just two or three inches long—that costs $18. All this little thing does is send back video, live video, and you control it with Wi-Fi, and they had it following a red balloon. The way it followed it was, it sent the video to a very powerful edge processing computer, which then performed a sophisticated computer vision and control algorithm and then sent the commands back. So, what you saw was this little low-cost device doing something very sophisticated and powerful, because it had a low-latency connection to a lot of processing power. And then, just to really complete that, they switched it from edge computing, that was right there at the booth, to a cloud-based computing service that was fifty milliseconds away, and once they did that, the little demo wouldn’t function anymore. They were showing the power of low-latency, high-speed video and media-type communication, which enabled a simple device to do something similar to a much more complex device, in real time, and they could offer that almost like a service.
So, that paradigm is very powerful, and it applies to many different use cases. It’s enabled by high-performance connectivity which is something that we supply, and we’re very proficient at that. It impacts machine learning, because it gives you different ways to take advantage of the progress there—you can do it locally, you can do it on the edge, you can do it remotely. When you combine mobile, and all the investment that’s been made there, you leverage that to apply to other devices like automobiles, medical devices, robotics, other kinds of consumer products like wearables and assistant speakers, and those kinds of things. There’s just a vast landscape of technologies and services that all can be improved by what we’ve done, and what 5G will bring. And so, that’s why we’re pretty fired up about the next iteration here.
I assume you have done theoretical thinking about the absolute maximum rate at which data can be transferred. Are we one percent the way there, or ten percent, or can’t even measure it because it’s so small? Is this going to go on forever?
I am so glad you asked. It’s so interesting. This Monday morning, we just put a new piece of artwork in our research center—there’s a piece of artwork on every floor—and on the first floor, when you walk in, there’s a piece of artwork that has Claude Shannon and a number of his equations, including the famous one which is the Shannon capacity limit. That’s the first thing you see when you walk into the research center at Qualcomm. That governs how fast you can move data across a link, and you can’t beat it. There’s no way, any more than you can go faster than the speed of light. So, the question is, “How close are we to that limit?” If you have just two devices, two antennas, and a given amount of spectrum, and a given amount of power, then we can get pretty darn close to that limit. But the question is not that, the question is really, “Are we close to how fast of a service we can offer a mobile user in a dense area?” And to that question, the answer is, “We’re nowhere close.” We can still get significantly better; by that, I mean orders of magnitude better than we are now.
I can tell you three ways that that can be accomplished, and we’re doing all three of them. Number one is, we continue to make better modems, that are more efficient, better receivers, better equalizers, better antennas all of those techniques, and 5G is an example of that.
Number two, we always work with the regulator and operators to bring more spectrum, more radio spectrum to bear. If you look at the overall spectrum chart, only a sliver of it is really used for mobile communication, and we’re going to be able to use a lot more of it, and use more spectrum at high frequencies, like millimeter wave and above, that’s going to make a lot more “highway,” so to speak, for data transfer.
And the third thing is, the average radius of a base station can shrink, and we can use that channel over and over and over again. So right now, if you drive your car, and you listen to a radio station, the radio industry cannot use that channel again until you get hundreds of miles away. In the modern cellular systems, we’re learning how to reuse that channel even when you’re a very short distance away, potentially only feet or tens of meters away, so you can use it again and again and again.
So, with those three pillars, we’re really not close, and everyone can look forward to faster, faster, faster modems. And every time we move that modem speed up, that, of course, is the foundation for bigger screens, and more video, and new use cases that weren’t possible before, at a given price point, which now become possible. We’re not at the end yet, we’ve got a long way to go.
You made a passing reference to Moore’s Law—you didn’t call it out, but you referenced exponential growth, and that the speed of computers would increase. Everybody always says, “Is Moore’s Law finally over?” You see those headlines all the time, and, like all the headlines that are a question, the answer is almost always, “No.” You’ve made references to quantum computing and all that. Do we have opportunities to increase processor speed well into the future with completely different architectures?
We do. We absolutely do. And I believe that will occur. I mean, we’re not at the limit yet now. You can find “Moore’s Law is over” articles ten years ago also, and somehow it hasn’t happened yet. When we get past three nanometers, yeah, certain things are going to get really, really tough. But then there will be new approaches that will take us there, take us to the next step.
There’s also architectural improvements, and other axes that can be exploited; same thing as I just described to you in wireless. Shannon has said that we can only go so far between two antennas in a given amount of spectrum, in a given amount of power. But we can escape that by increasing the spectrum, increasing the number of distance between the antennas, reusing the spectrum over and over again, and we can still get the job done without breaking any fundamental laws. So, at least for the time being, the exponential growth is still very much intact.
You’ve mentioned Claude Shannon twice. He’s a fascinating character, and one of the things he did that’s kind of monumental was that paper he wrote in ‘49 or ‘50 about how a computer could play chess, and he actually figured out an algorithm for that. What was really fascinating about that was, this was one of the first times somebody looked at a computer and saw something other than a calculator. Because up until that point they just did not, and he made that intuitive leap to say, “Here’s how you would make a computer do something other than math…but it’s really doing math.” There’s a fascinating new book about him out called A Mind at Play, which I just read, that I recommend. 
We’re running out of time here. We’re wrapping up. I’m curious do you write, or do you have a place that people who want to follow you can keep track of what you’re up to? 
Well, I don’t have a lot there, but I do have a Twitter, and once in a while I’ll share a few thoughts. I should probably do more of that than I do. I have an internal blog which I should probably do more than I do. I’m sorry to say, I’m not very prolific on external writing, but that is something I would love to do more of.
And my final question is, are you a consumer of science fiction? You quoted Arthur C. Clarke earlier, and I’m curious if you read it, or watch TV, or movies or what have you. And if so, do you have any visions of the future that are in fiction, that you kind of identify with? 
Yes, I will answer an emphatic yes to that. I love all forms of science fiction and one of my favorites is Star Trek. My name spelled backwards is “Borg.” In fact, our chairman Paul Jacobs—I worked for him most of my career—he calls me “Locutus.” Given the discussion we just had—if you’re a fan of Star Trek and, in particular, the Star Trek: The Next Generation shows that were on in the ‘80s and early ‘90s, there was an episode where Commander Data met Mr. Spock. And that was really a good one, because you had Commander Data, who is an android and wants to be human, wants to have emotion and creativity and those things that we discussed, but can’t quite get there, meeting Mr. Spock who is a living thing and trying to purge all emotion and so forth, to just be pure logic, and they had an interaction. I thought that was just really interesting.
But, yes, I follow all science fiction. I like the book Physics of Star Trek by Krauss, I got to meet him once. And it’s amazing how many of the devices and concepts from science fiction have become science fact. In fact, the only difference between science fiction and science fact, is time. Over time we’ve pretty much built everything that people have thought up—communicators, replicators, computers.
I know, you can’t see one of those in-ear Bluetooth devices and not see Uhura, right? That’s what she had.
Correct. That little earpiece is a Bluetooth device. The communicator is a flip phone. The little square memory cartridges were like a floppy disk from the ‘80s. 3-D printers are replicators. We also have software replicators that can replicate and transport. We kind of have the hardware but not quite the way they do yet, but we’ll get there.
Do you think that these science fiction worlds anticipate the world or inadvertently create it? Do we have flip phones because of Star Trek or did Star Trek foresee the flip phone? 
I believe their influence is undeniable.
I agree and a lot of times they say it, right? They say, “Oh, I saw that and I wanted to do that. I wanted to build that.” You know there’s an XPRIZE for making a tricorder, and that came from Star Trek.
We were the sponsor of that XPRIZE and we were highly involved in that. And, yep, that’s exactly right, the inspiration of that was a portable device that can make a bunch of diagnoses, and that is exactly what took place and now we have real ones.
Well, I want to thank you for a fascinating hour. I want to thank you for going on all of these tangents. It was really fascinating. 
Wonderful, thank you as well. I also really enjoyed it, and anytime you want to follow up or talk some more please don’t hesitate. I really enjoyed talking with you.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; }
.voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(http://ift.tt/2g3SzGL) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem }
@media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } }
.voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; }
.voice-in-ai-link-back a:hover { color: #ff4f00; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; } Voices in AI – Episode 25: A Conversation with Matt Grob syndicated from http://ift.tt/2wBRU5Z
0 notes
techscopic · 7 years
Text
Voices in AI – Episode 24: A Conversation with Deep Varma
Today’s leading minds talk AI with host Byron Reese
.voice-in-ai-byline-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-byline-embed span { color: #FF6B00; }
In this episode, Byron and Deep talk about the nervous system, AGI, the Turing Test, Watson, Alexa, security, and privacy.
0:00
0:00
0:00
var go_alex_briefing = { expanded: true, get_vars: {}, twitter_player: false, auto_play: false };
(function( $ ) { ‘use strict’;
go_alex_briefing.init = function() { this.build_get_vars();
if ( ‘undefined’ != typeof go_alex_briefing.get_vars[‘action’] ) { this.twitter_player = ‘true’; }
if ( ‘undefined’ != typeof go_alex_briefing.get_vars[‘auto_play’] ) { this.auto_play = go_alex_briefing.get_vars[‘auto_play’]; }
if ( ‘true’ == this.twitter_player ) { $( ‘#top-header’ ).remove(); }
var $amplitude_args = { ‘songs’: [{“name”:”Episode 24: A Conversation with Deep Varma”,”artist”:”Byron Reese”,”album”:”Voices in AI”,”url”:”https:\/\/voicesinai.s3.amazonaws.com\/2017-12-04-(00-55-19)-deep-varma.mp3″,”live”:false,”cover_art_url”:”https:\/\/voicesinai.com\/wp-content\/uploads\/2017\/12\/voices-headshot-card_preview-1.jpeg”}], ‘default_album_art’: ‘http://ift.tt/2yEaCKF&#8217; };
if ( ‘true’ == this.auto_play ) { $amplitude_args.autoplay = true; }
Amplitude.init( $amplitude_args );
this.watch_controls(); };
go_alex_briefing.watch_controls = function() { $( ‘#small-player’ ).hover( function() { $( ‘#small-player-middle-controls’ ).show(); $( ‘#small-player-middle-meta’ ).hide(); }, function() { $( ‘#small-player-middle-controls’ ).hide(); $( ‘#small-player-middle-meta’ ).show();
});
$( ‘#top-header’ ).hover(function(){ $( ‘#top-header’ ).show(); $( ‘#small-player’ ).show(); }, function(){
});
$( ‘#small-player-toggle’ ).click(function(){ $( ‘.hidden-on-collapse’ ).show(); $( ‘.hidden-on-expanded’ ).hide(); /* Is expanded */ go_alex_briefing.expanded = true; });
$(‘#top-header-toggle’).click(function(){ $( ‘.hidden-on-collapse’ ).hide(); $( ‘.hidden-on-expanded’ ).show(); /* Is collapsed */ go_alex_briefing.expanded = false; });
// We’re hacking it a bit so it works the way we want $( ‘#small-player-toggle’ ).click(); $( ‘#top-header-toggle’ ).hide(); };
go_alex_briefing.build_get_vars = function() { if( document.location.toString().indexOf( ‘?’ ) !== -1 ) {
var query = document.location .toString() // get the query string .replace(/^.*?\?/, ”) // and remove any existing hash string (thanks, @vrijdenker) .replace(/#.*$/, ”) .split(‘&’);
for( var i=0, l=query.length; i<l; i++ ) { var aux = decodeURIComponent( query[i] ).split( '=' ); this.get_vars[ aux[0] ] = aux[1]; } } };
$( function() { go_alex_briefing.init(); }); })( jQuery );
.go-alexa-briefing-player { margin-bottom: 3rem; margin-right: 0; float: none; }
.go-alexa-briefing-player div#top-header { width: 100%; max-width: 1000px; min-height: 50px; }
.go-alexa-briefing-player div#top-large-album { width: 100%; max-width: 1000px; height: auto; margin-right: auto; margin-left: auto; z-index: 0; margin-top: 50px; }
.go-alexa-briefing-player div#top-large-album img#large-album-art { width: 100%; height: auto; border-radius: 0; }
.go-alexa-briefing-player div#small-player { margin-top: 38px; width: 100%; max-width: 1000px; }
.go-alexa-briefing-player div#small-player div#small-player-full-bottom-info { width: 90%; text-align: center; }
.go-alexa-briefing-player div#small-player div#small-player-full-bottom-info div#song-time-visualization-large { width: 75%; }
.go-alexa-briefing-player div#small-player-full-bottom { background-color: #f2f2f2; border-bottom-left-radius: 5px; border-bottom-right-radius: 5px; height: 57px; }
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; }
.voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(http://ift.tt/2g3SzGL) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem }
@media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } }
.voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; }
.voice-in-ai-link-back a:hover { color: #ff4f00; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; }
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Deep Varma, he is the VP of Data Engineering and Science over at Trulia. He holds a Bachelor’s of Science in Computer Science. He has a Master’s degree in Management Information Systems, and he even has an MBA from Berkeley to top all of that off. Welcome to the show, Deep.
Deep Varma: Thank you. Thanks, Byron, for having me here.
I’d like to start with my Rorschach test question, which is, what is artificial intelligence?
Awesome. Yeah, so as I define artificial intelligence, this is an intelligence created by machines based on human wisdom, to augment a human’s lifestyle to help them make the smarter choices. So that’s how I define artificial intelligence in a very simple and the layman terms.
But you just kind of used the word, “smart” and “intelligent” in the definition. What actually is intelligence?
Yeah, I think the intelligence part, what we need to understand is, when you think about human beings, most of the time, they are making decisions, they are making choices. And AI, artificially, is helping us to make smarter choices and decisions.
A very clear-cut example, which sometimes what we don’t see, is, I still remember in the old days I used to have this conventional thermostat at my home, which turns on and off manually. Then, suddenly, here comes artificial intelligence, which gave us Nest. Now as soon as I put the Nest there, it’s an intelligence. It is sensing that someone is there in the home, or not, so there’s motion sensing. Then it is seeing what kind of temperature do I like during summer time, during winter time. And so, artificially, the software, which is the brain that we have put on this device, is doing this intelligence, and saying, “great, this is what I’m going to do.” So, in one way it augmented my lifestyle—rather than me making those decisions, it is helping me make the smart choices. So, that’s what I meant by this intelligence piece here.
Well, let me take a different tack, in what sense is it artificial? Is that Nest thermostat, is it actually intelligent, or is it just mimicking intelligence, or are those the same thing?
What we are doing is, we are putting some sensors there on those devices—think about the central nervous system, what human beings have, it is a small piece of a software which is embedded within that device, which is making decisions for you—so it is trying to mimic, it is trying to make some predictions based on some of the data it is collecting. So, in one way, if you step back, that’s what human beings are doing on a day-to-day basis. There is a piece of it where you can go with a hybrid approach. It is mimicking as well as trying to learn, also.
Do you think we learn a lot about artificial intelligence by studying how humans learn things? Is that the first step when you want to do computer vision or translation, do you start by saying, “Ok, how do I do it?” Or, do you start by saying, “Forget how a human does it, what would be the way a machine would do it?”
Yes, I think it is very tough to compare the two entities, because the way human brains, or the central nervous system, the speed that they process the data, machines are still not there at the same pace. So, I think the difference here is, when I grew up my parents started telling me, “Hey, this is Taj Mahal. The sky is blue,” and I started taking this data, and I started inferring and then I started passing this information to others.
It’s the same way with machines, the only difference here is that we are feeding information to machines. We are saying, “Computer vision: here is a photograph of a cat, here is a photograph of a cat, too,” and we keep on feeding this information—the same way we are feeding information to our brains—so the machines get trained. Then, over a period of time, when we show another image of a cat, we don’t need to say, “This is a cat, Machine.” The machine will say, “Oh, I found out that this is a cat.”
So, I think this is the difference between a machine and a human being, where, in the case of machine, we are feeding the information to them, in one form or another, using devices; but in the case of human beings, you have conscious learning, you have the physical aspects around you that affect how you’re learning. So that’s, I think, where we are with artificial intelligence, which is still in the infancy stage.
Humans are really good at transfer learning, right, like I can show you a picture of a miniature version of the Statue of Liberty, and then I can show you a bunch of photos and you can tell when it’s upside down, or half in water, or obscured by light and all that. We do that really well. 
How close are we to being able to feed computers a bunch of photos of cats, and the computer nails the cat thing, but then we only feed it three or four images of mice, and it takes all that stuff it knows about different cats, and it is able to figure out all about different mice?
So, is your question, do we think these machines are going to be at the same level as human beings at doing this?
No, I guess the question is, if we have to teach, “Here’s a cat, here’s a thimble, here’s ten thousand thimbles, here’s a pin cushion, here’s ten thousand more pin cushions…” If we have to do one thing at a time, we’re never going to get there. What we’ve got to do is, like, learn how to abstract up a level, and say, “Here’s a manatee,” and it should be able to spot a manatee in any situation.
Yeah, and I think this is where we start moving into the general intelligence area. This is where it is becoming a little interesting and challenging, because human beings falls under more of the general intelligence, and machines are still falling under the artificial intelligence framework.
And the example you were giving, I have two boys, and when my boys were young, I’d tell them, “Hey, this is milk,” and I’d show them milk two times and they knew, “Awesome, this is milk.” And here come the machines, and you keep feeding them the big data with the hope that they will learn and they will say, “This is basically a picture of a mouse or this is a picture of a cat.”
This is where, I think, this artificial general intelligence which is shaping up—that we are going to abstract a level up, and start conditioning—but I feel we haven’t cracked the code for one level down yet. So, I think it’s going to take us time to get to the next level, I believe, at this time.
Believe me, I understand that. It’s funny, when you chat with people who spend their days working on these problems, they’re worried about, “How am I going to solve this problem I have tomorrow?” They’re not as concerned about that. That being said, everybody kind of likes to think about an AGI. 
AI is, what, six decades old and we’ve been making progress, do you believe that that is something that is going to evolve into an AGI? Like, we’re on that path already, and we’re just one percent of the way there? Or, is an AGI is something completely different? It’s not just a better narrow AI, it’s not just a bunch of narrow AI’s bolted together, it’s a completely different thing. What do you say?
Yes, so what I will say, it is like in the software development of computer systems—we call this as an object, and then we do inheritance of a couple of objects, and the encapsulation of the objects. When you think about what is happening in artificial intelligence, there are companies, like Trulia, who are investing in building the computer vision for real estate. There are companies investing in building the computer vision for cars, and all those things. We are in this state where all these dysfunctional, disassociated investments in our system are happening, and there are pieces that are going to come out of that which will go towards AGI.
Where I tend to disagree, I believe AI is complimenting us and AGI is replicating us. And this is where I tend to believe that the day the AGI comes—that means it’s a singularity that they are reaching wisdom or the processing power of human beings—that, to me, seems like doomsday, right? Because that those machines are going to be smarter than us, and they will control us.
And the reason I believe that, and there is a scientific reason for my belief; it’s because we know that in the central nervous system the core tool is the neurons, and we know neurons carry two signals—chemical and electrical. Machines can carry the electrical signals, but the chemical signals are the ones which generate these sensory signals—you touch something, you feel it. And this is where I tend to believe that AGI is not going to happen, I’m close to confident. Thinking machines are going to come—IBM Watson, as an example—so that’s how I’m differentiating it at this time.
So, to be clear, you said you don’t believe we’ll ever make an AGI?
I will be the one on the extreme end, but I will say yes.
That’s fascinating. Why is that? The normal argument is a reductionist argument. It says, you are some number of trillions of cells that come together, and there’s an emergent “you” that comes out of that. And, hypothetically, if we made a synthetic copy of every one of those cells, and connected them, and did all that, there would be another Deep Varma. So where do you think the flaw in that logic is?
I think the flaw in that logic is that the general intelligence that humans have is also driven by the emotional side, and the emotional side—basically, I call it a chemical soup—is, I feel, the part of the DNA which is not going to be possible to replicate in these machines. These machines will learn by themselves—we recently saw what happened with Facebook, where Facebook machines were talking to each other and they start inventing their own language, over a period of time—but I believe the chemical mix of humans is what is next to impossible to produce it.
I mean—and I don’t want to take a stand because we have seen proven, over the decades, what people used to believe in the seventies has been proven to be right—I think the day we are able to find the chemical soup, it means we have found the Nirvana; and we have found out how human beings have been born and how they have been built over a period of time, and it took us, we all know, millions and millions of years to come to this stage. So that’s the part which is putting me on the other extreme end, to say, “Is there really going to another Deep Varma,” and if yes, then where is this emotional aspect, where are those things that are going to fit into the bigger picture which drives human beings onto the next level?
Well, I mean there’s a hundred questions rushing for the door right now. I’ll start with the first one. What do you think is the limit of what we’ll be able to do without the chemical part? So, for instance, let me ask a straight forward question—will we be able to build a machine that passes the Turing test?
Can we build that machine? I think, potentially, yes, we can.
So, you can carry on a conversation with it, and not be able to figure out that it’s a machine? So, in that case, it’s artificial intelligence in the sense that it really is artificial. It’s just running a program, saying some words, it’s running a program, saying some words, but there’s nobody home.
Yes, we have IBM Watson, which can go a level up as compared to Alexa. I think we will build machines which, behind the scenes, are trying to understand your intent and trying to have those conversations—like Alexa and Siri. And I believe they are going to eventually start becoming more like your virtual assistants, helping you make decisions, and complimenting you to make your lifestyle better. I think that’s definitely the direction we’re going to keep seeing investments going on.
I read a paper of yours where you made a passing reference to Westworld.
Right.
Putting aside the last several episodes, and what happened in them—I won’t give any spoilers—take just the first episode, do you think that we will be able to build machines that can interact with people like that?
I think, yes, we will.
But they won’t be truly creative and intelligent like we are?
That’s true.
Alright, fascinating. 
So, there seem to be these two very different camps about artificial intelligence. You have Elon Musk who says it’s an existential threat, you have Bill Gates who’s worried about it, you have Stephen Hawking who’s worried about it, and then there’s this other group of people that think that’s distracting. 
I saw that Elon Musk spoke at the governor’s convention and said something and then Pedro Domingos, who wrote The Master Algorithm, retweeted that article, and his whole tweet was, “One word: sigh.” So, there’s this whole other group of people that think that’s just really distracting, really not going to happen, and they’re really put off by that kind of talk. 
Why do you think there’s such a gap between those two groups of people?
The gap is that there is one camp who is very curious, and they believe that millions of years of how human beings evolved can immediately be taken by AGI, and the other camp is more concerned with controlling that, asking are those machines going to become smarter than us, are they going to control us, are we going to become their slaves?
And I think those two camps are the extremes. There is a fear of losing control, because humans—if you look into the food chain, human beings are the only ones in the food chain, as of now, who control everything—fear that if those machines get to our level of wisdom, or smarter than us, we are going to lose control. And that’s where I think those two camps are basically coming to the extreme ends and taking their stands.
Let’s switch gears a little bit. Aside from the robot uprising, there’s a lot of fear wrapped up in the kind of AI we already know how to build, and it’s related to automation. Just to set up the question for the listener, there’s generally three camps. One camp says we’re going to have all this narrow AI, and it’s going to put a bunch of people out of work, people with less skills, and they’re not going to be able to get new work and we’re going to have, kind of, the Great Depression going on forever. Then there’s a second group that says, no, no, it’s worse than that, computers can do anything a person can do, we’re all going to be replaced. And then there’s a third camp that says, that’s ridiculous, every time something comes along, like steam or electricity, people just take that technology, and use it to increase their own productivity, and that’s how progress happens. So, which of those three camps, or fourth one, perhaps, do you believe?
I fall into, mostly, the last camp, which is, we are going to increase the productivity of human beings; it means we will be able to deliver more and faster. A few months back, I was in Berkeley and we were having discussions around this same topic, about automation and how jobs are going to go away. The Obama administration even published a paper around this topic. One example which always comes in my mind is, last year I did a remodel of my house. And when I did the remodeling there were electrical wires, there are these water pipelines going inside my house and we had to replace them with copper pipelines, and I was thinking, can machines replace those job? I keep coming back to the answer that, those skill level jobs are going to be tougher and tougher to replace, but there are going to be productivity gains. Machines can help to cut those pipeline pieces much faster and in a much more accurate way. They can measure how much wire you’ll need to replace those things. So, I think those things are going to help us to make the smarter choices. I continue to believe it is going to be mostly the third camp, where machines will keep complementing us, helping to improve our lifestyles and to improve our productivity to make the smarter choices.
So, you would say that there are, in most jobs, there are elements that automation cannot replace, but it can augment, like a plumber, or so forth. What would you say to somebody who’s worried that they’re going to be unemployable in the future? What would you advise them to do?
Yeah, and the example I gave is a physical job, but think about an example of a business consultants, right? Companies hire business consultants to come, collect all the data, then prepare PowerPoints on what you should do, and what you should not do. I think those are the areas where artificial intelligence is going to come, and if you have tons of the data, then you don’t need a hundred consultants. For those people, I say go and start learning about what can be done to scale them to the next level. So, in the example I’ve just given, the business consultants, if they are doing an audit of a company with the financial books, look into the tools to help so that an audit that used to take thirty days now takes ten days. Improve how fast and how accurate you can make those predictions and assumptions using machines, so that those businesses can move on. So, I would tell them to start looking into, and partnering into, those areas early on, so that you are not caught by surprise when one day some industry comes and disrupts you, and you say, “Ouch, I never thought about it, and my job is no longer there.”
It sounds like you’re saying, figure out how to use more technology? That’s your best defense against it, is you just start using it to increase your own productivity.
Yeah.
Yeah, it’s interesting, because machine translation is getting comparable to a human, and yet generally people are bullish that we’re going to need more translators, because this is going to cause people to want to do more deals, and then they’re going to need to have contracts negotiated, and know about customs in other countries and all of that, so that actually being a translator you get more business out of this, not less, so do you think things like that are kind of the road map forward?
Yeah, that’s true.
So, what are some challenges with the technology? In Europe, there’s a movement—I think it’s already adopted in some places, but the EU is considering it—this idea that if an AI makes a decision about you, like do you get the loan, that you have the right to know why it made it. In other words, no black boxes. You have to have transparency and say it was made for this reason. Do you think a) that’s possible, and b) do you think it’s a good policy?
Yes, I definitely believe it’s possible, and it’s a good policy, because this is what consumers wants to know, right? In our real estate industry, if I’m trying to refinance my home, the appraiser is going to come, he will look into it, he will sit with me, then he will send me, “Deep, your house is worth $1.5 million dollar.” He will provide me the data that he used to come to that decision—he used the neighborhood information, he used the recent sold data.
And that, at the end of the day, gives confidence back to the consumer, and also it shows that this is not because this appraiser who came to my home didn’t like me for XYZ reason, and he end up giving me something wrong; so, I completely agree that we need to be transparent. We need to share why a decision has been made, and at the same time we should allow people to come and understand it better, and make those decisions better. So, I think those guidelines need to be put into place, because humans tend to be much more biased in their decision-making process, and the machines take the bias out, and bring more unbiased decision making.
Right, I guess the other side of that coin, though, is that you take a world of information about who defaulted on their loan, and then you take you every bit of information about, who paid their loan off, and you just pour it all in into some gigantic database, and then you mine it and you try to figure out, “How could I have spotted these people who didn’t pay their loan?” And then you come up with some conclusion that may or may not make any sense to a human, right? Isn’t that the case that it’s weighing hundreds of factors with various weights and, how do you tease out, “Oh it was this”? Life isn’t quite that simple, is it?
No, it is not, and demystifying this whole black box has never been simple. Trust us, we face those challenges in the real estate industry on a day-to-day basis—we have Trulia’s estimates—and it’s not easy. At the end, we just can’t rely totally on those algorithms to make the decisions for us.
I will give one simple example, of how this can go wrong. When we were training our computer vision system, and, you know, what we were doing was saying, “This is a window, this is a window.” Then the day came when we said, “Wow, our computer vision can say I will look at any image, and known this is a window.” And one fine day we got an image where there is a mirror, and there is a reflection of a window on the mirror, and our computer said, “Oh, Deep, this is a window.” So, this is where big data and small data come into a place, where small data can make all these predictions and goes wrong completely.
This is where—when you’re talking about all this data we are taking in to see who’s on default and who’s not on default—I think we need to abstract, and we need to at least make sure that with this aggregated data, this computational data, we know what the reference points are for them, what the references are that we’re checking, and make sure that we have the right checks and balances so that machines are not ultimately making all the calls for us.
You’re a positive guy. You’re like, “We’re not going to build an AGI, it’s not going to take over the world, people are going to be able to use narrow AI to grow their productivity, we’re not going to have unemployment.” So, what are some of the pitfalls, challenges, or potential problems with the technology?
I agree with you, it’s being positive. Realistically, looking into the data—and I’m not saying that I have the best data in front of me—I think what is the most important is we need to look into history, and we need to see how we evolved, and then the Internet came and what happened.
The challenge for us is going to be that there are businesses and groups who believe that artificial intelligence is something that they don’t have to worry about, and over a period of time artificial intelligence is going to start becoming more and more a part of business, and those who are not able to catch up with this, they’re going to see the unemployment rate increase. They’re going to see company losses increase because some of the decisions they’re not making in the right way.
You’re going to see companies, like Lehman Brothers, who are making all these data decisions for their clients by not using machines but relying on humans, and these big companies fail because of them. So, I think, that’s an area where we are going to see problems, and bankruptcies, and unemployment increases, because of they think that artificial intelligence is not for them or their business, that it’s never going to impact them—this is where I think we are going to get the most trouble.
The second area of trouble is going to be security and privacy, because all this data is now floating around us. We use the Internet. I use my credit card. Every month we hear about a new hack—Target being hacked, Citibank being hacked—all this data physically-stored in the system and it’s getting hacked. And now we’ll have all this data wirelessly transmitting, machines talking to each of their devices, IoT devices talking to each other—how are you we going to make sure that there is not a security threat? How are we going to make sure that no one is storing my data, and trying to make assumptions, and enter into my bank account? Those are the two areas where I feel we are going to see, in coming years, more and more challenges.
So, you said privacy and security are the two areas?
Denial of accepting AI is the one, and security and privacy is the second one—those are the two areas.
So, in the first one, are there any industries that don’t need to worry about it, or are you saying, “No, if you make bubble-gum you had better start using AI?”
I will say every industry. I think every industry needs to worry about it. Some industries may adapt the technologies faster, some may go slower, but I’m pretty confident that the shift is going to happen so fast that, those businesses will be blindsided—be it small businesses or mom and pop shops or big corporations, it’s going to touch everything.
Well with regard to security, if the threat is artificial intelligence, I guess it stands to reason that the remedy is AI as well, is that true?
The remedy is there, yes. We are seeing so many companies coming and saying, “Hey, we can help you see the DNS attacks. When you have hackers trying to attack your site, use our technology to predict that this IP address or this user agent is wrong.” And we see that to tackle the remedy, we are building an artificial intelligence.
But, this is where I think the battle between big data and small data is colliding, and companies are still struggling. Like, phishing, which is a big problem. There are so many companies who are trying to solve the phishing problem of the emails, but we have seen technologies not able to solve it. So, I think AI is a remedy, but if we stay just focused on the big data, that’s, I think, completely wrong, because my fear is, a small data set can completely destroy the predictions built by a big data set, and this is where those security threats can bring more of an issue to us.
Explain that last bit again, the small data set can destroy…?
So, I gave the example of computer vision, right? There was research we did in Berkeley where we trained machines to look at pictures of cats, and then suddenly we saw the computer start predicting, “Oh, this is this kind of a cat, this is cat one, cat two, this is a cat with white fur.” Then we took just one image where we put the overlay of a dog on the body of a cat, and the machines ended up predicting, “That’s a dog,” not seeing that it’s the body of a cat. So, all the big data that we used to train our computer vision, just collapsed with one photo of a dog. And this is where I feel that if we are emphasizing so much on using the big data set, big data set, big data set, are there smaller data sets which we also need to worry about to make sure that we are bridging the gap enough to making sure that our securities are not compromised?
Do you think that the system as a whole is brittle? Like, could there be an attack of such magnitude that it impacts the whole digital ecosystem, or are you worried more about, this company gets hacked and then that one gets hacked and they’re nuisances, but at least we can survive them?
No, I’m more worried about the holistic view. We saw recently, how those attacks on the UK hospital systems happened. We saw some attacks—which we are not talking about—on our power stations. I’m more concerned about those. Is there going to be a day when we have built massive infrastructures that are reliant on computers—our generation of power and the supply of power and telecommunications—and suddenly there is a whole outage which can take the world to a standstill, because there is a small hole which we never thought about. That, to me, is the bigger threat than the stand alone individual things which are happening now.
That’s a hard problem to solve, there’s a small hole on the internet that we’ve not thought about that can bring the whole thing down, that would be a tricky thing to find, wouldn’t it?
It is a tricky thing, and I think that’s what I’m trying to say, that most of the time we fail because of those smaller things. If I go back, Byron, and bring the artificial general intelligence back into a picture, as human beings it’s those small, small decisions we make—like, I make a fast decision when an animal is approaching very close to me, so close that my senses and my emotions are telling me I’m going to die—and this is where I think sometimes we tend to ignore those small data sets.
I was in a big debate around those self-driven cars which are shaping up around us, and people were asking me when will we see those self-driven cars on a San Francisco street. And I said, “I see people doing crazy jaywalking every day,” and accidents are happening with human drivers, no doubt, but the scale can increase so fast if those machines fail. If they have one simple sensor which is not working at that moment in time and not able to get one signal, it can kill human beings much faster as compared to what human beings are killing, so that’s the rational which I’m trying to put here.
So, one of my questions that I was going to ask you, is, do you think AI is a mania? Like it’s everywhere but it seems like, you’re a person who says every industry needs to adopt it, so if anything, you would say that we need more focus on it, not less, is that true?
That’s true.
There was a man in the ‘60s named Weizenbaum who made a program called ELIZA, which was a simple program that you would ask a question, say something like, “I’m having a bad day,” and then it would say, “Why are you having a bad day?” And then you would say, “I’m having a bad day because I had a fight with my spouse,” and then would ask, “Why did you have a fight?” And so, it’s really simple, but Weizenbaum got really concerned because he saw people pouring out their heart to it, even though they knew it was a program. It really disturbed him that people developed emotional attachment to ELIZA, and he said that when a computer says, “I understand,” that it’s a lie, that there’s no “I,” there’s nothing that understands anything. 
Do you worry that if we build machines that can imitate human emotions, maybe the care for people or whatever, that we will end up having an emotional attachment to them, or that that is in some way unhealthy?
You know, Byron, it’s a very great question. I think, also pick out a great example. So, I have Alexa at my home, right, and I have two boys, and when we are in a kitchen—because Alexa is in our kitchen—my older son comes home and says, “Alexa, what’s the temperature look like today?” Alexa says, “Temperature is this,” and then he says, “Okay, shut up,” to Alexa. My wife is standing there saying “Hey, don’t be rude, just say, ‘Alexa stop.’” You see that connection? The connection is you’ve already started treating this machine as a respectful device, right?
I think, yes, there is that emotional connection there, and that’s getting you used to seeing it as part of your life in an emotional connection. So, I think, yes, you’re right, that’s a danger.
But, more than Alexa and all those devices, I’m more concerned about the social media sites, which can have much more impact on our society than those devices. Because those devices are still physical in shape, and we know that if the Internet is down, then they’re not talking and all those things. I’m more concerned about these virtual things where people are getting more emotionally attached, “Oh, let me go and check what my friends been doing today, what movie they watched,” and how they’re trying to fill that emotional gap, but not meeting individuals, just seeing the photos to make them happy. But, yes, just to answer your question, I’m concerned about that emotional connection with the devices.
You know, it’s interesting, I know somebody who lives on a farm and he has young children, and, of course, he’s raising animals to slaughter, and he says the rule is you just never name them, because if you name them then that’s it, they become a pet. And, of course, Amazon chose to name Alexa, and give it a human voice; and that had to be a deliberate decision. And you just wonder, kind of, what all went into it. Interestingly, Google did not name theirs, it’s just the Google Assistant. 
How do you think that’s going to shake out? Are we just provincial, and the next generation isn’t going to think anything of it? What do you think will happen?
So, is your question what’s going to happen with all those devices and with all those AI’s and all those things?
Yes, yes.
As of now, those devices are all just operating in their own silo. There are too many silos happening. Like in my home, I have Alexa, I have a Nest, those plug-ins. I love, you know, where Alexa is talking to Nest, “Hey Nest, turn it off, turn it on.” I think what we are going to see over the next five years is that those devices are communicating with each other more, and sending signals, like, “Hey, I just saw that Deep left home, and the garage door is open, close the garage door.”
IoT is popping up pretty fast, and I think people are thinking about it, but they’re not so much worried about that connectivity yet. But I feel that where we are heading is more of the connectivity with those devices, which will help us, again, compliment and make the smart choices, and our reliance on those assistants is going to increase.
Another example here, I get up in the morning and the first thing I do is come to the kitchen and say Alexa, “Put on the music, Alexa, put on the music, Alexa, and what’s the weather going to look like?” With the reply, “Oh, Deep, San Francisco is going to be 75,” then Deep knows Deep is going to wear a t-shirt today. Here comes my coffee machine, my coffee machine has already learned that I want eight ounces of coffee, so it just makes it.
I think all those connections, “Oh, Deep just woke up, it is six in the morning, Deep is going to go to office because it’s a working day, Deep just came to kitchen, play this music, tell Deep that the temperature is this, make coffee for Deep,” this is where we are heading in next few years. All these movies that we used to watch where people were sitting there, and watching everything happen in the real time, that’s what I think the next five years is going to look like for us.
So, talk to me about Trulia, how do you deploy AI at your company? Both customer facing and internally?
That’s such an awesome question, because I’m so excited and passionate because this brings me home. So, I think in artificial intelligence, as you said, there are two aspects to it, one is for a consumer and one is internal, and I think for us AI helps us to better understand what our consumers are looking for in a home. How can we help move them faster in their search—that’s the consumer facing tagline. And an example is, “Byron is looking at two bedroom, two bath houses in a quiet neighborhood, in good school district,” and basically using artificial intelligence, we can surface things in much faster ways so that you don’t have to spend five hours surfing. That’s more consumer facing.
Now when it comes to the internal facing, internal facing is what I call “data-driven decision making.” We launch a product, right? How do we see the usage of our product? How do we predict whether this usage is going to scale? Are consumers going to like this? Should we invest more in this product feature? That’s the internal facing we are using artificial intelligence.
I don’t know if you have read some of my blogs, but I call it data-driven companies—there are two aspects of the data driven, one is the data-driven decision making, this is more of an analyst, and that’s the internal reference to your point, and the external is to the consumer-facing data-driven product company, which focuses on how do we understand the unique criteria and unique intent of you as a buyer—and that’s how we use artificial intelligence in the spectrum of Trulia.
When you say, “Let’s try to solve this problem with data,” is it speculative, like do you swing for the fences and miss a lot? Or, do you look for easy incremental wins? Or, are you doing anything that would look like pure science, like, “Let’s just experiment and see what happens with this”? Is the science so nascent that you, kind of, just have to get in there and start poking around and see what you can do?
I think it’s both. The science helps you understand those patterns much faster and better and in a much more accurate way, that’s how science helps you. And then, basically, there’s trial and error, or what we call an, “A/B testing” framework, which helps you to validate whether what science is telling you is working or not. I’m happy to share an example with you here if you want.
Yeah, absolutely.
So, the example here is, we have invested in our computer vision which is, we train our machines and our machines basically say, “Hey, this is a photo of a bathroom, this is a photo of a kitchen,” and we even have trained that they can say, “This is a kitchen with a wide granite counter-top.” Now we have built this massive database. When a consumer comes to the Trulia site, what they do is share their intent, they say, “I want two bedrooms in Noe Valley,” and the first thing that they do when those listings show up is click on the images, because they want to see what that house looks like.
What we saw was that there were times when those images were blurred, there were times when those images did not match up with the intent of a consumer. So, what we did with our computer vision, we invested in something called “the most attractive image,” which basically takes the three attributes—it looks into the quality of an image, it looks into the appropriateness of an image, and it looks into the relevancy of an image—and based on these three things we use our conventional neural network models to rank the images and we say, “Great, this is the best image.” So now when a consumer comes and looks at that listing we show the most attractive photo first. And that way, the consumer gets more engaged with that listing. And what we have seen— using the science, which is machine learning, deep learning, CNM models, and doing the A/B testing—is that this project increased our enquiries for the listing by double digits, so that’s one of the examples which I just want to share with you.
That’s fantastic. What is your next challenge? If you could wave a magic wand, what would be the thing you would love to be able to do that, maybe, you don’t have the tools or data to do yet?
I think, what we haven’t talked about here and I will use just a minute to tell you, that what we have done is we’ve built this amazing personalization platform, which is capturing Byron’s unique preferences and search criteria, we have built machine learning systems like computer vision recommender systems and the user engagement prediction model, and I think our next challenge will be to keep optimizing the consumer intent, right? Because the biggest thing that we want to understand is, “What exactly is Byron looking into?” So, if Byron visits a particular neighborhood because he’s travelling to Phoenix, Arizona, does that mean you want to buy a home there, or Byron is in San Francisco and you live here in San Francisco, how do we understand?
So, we need to keep optimizing that personalization platform—I won’t call it a challenge because we have already built this, but it is the optimization—and make sure that our consumers get what they’re searching for, keep surfacing the relevant data to them in a timely manner. I think we are not there yet, but we have made major inroads into our big data and machine learning technologies. One specific example, is Deep, basically, is looking into Noe Valley or San Francisco, and email and push notifications are the two channels, for us, where we know that Deep is going to consume the content. Now, the day we learn that Deep is not interested in Noe Valley, we stop sending those things to Deep that day, because we don’t want our consumers to be overwhelmed in their journey. So, I think that this is where we are going to keep optimizing on our consumer’s intent, and we’ll keep giving them the right content.
Alright, well that is fantastic, you write on these topics so, if people want to keep up with you Deep how can they follow you?
So, when you said “people” it’s other businesses and all those things, right? That’s what you mean?
Well I was just referring to your blog like I was reading some of your posts.
Yeah, so we have our tech blog, http://ift.tt/2AM5zMS, and it’s not only me; I have an amazing team of engineers—those who are way smarter than me to be very candid—my data scientist team, and all those things. So, we write our blogs there, so I definitely ask people to follow us on those blogs. When I go and speak at conferences, we publish that on our tech blog, and I publish things on my LinkedIn profile. So, yeah, those are the channels which people can follow. Trulia, we also host data science meetups here in Trulia, San Francisco on the seventh floor of our building, that’s another way people can come, and join, and learn from us.
Alright, well I want to thank you for a fascinating hour of conversation, Deep.
Thank you, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
.voice-in-ai-link-back-embed { font-size: 1.4rem; background: url(http://ift.tt/2g4q8sx) black; background-position: center; background-size: cover; color: white; padding: 1rem 1.5rem; font-weight: 200; text-transform: uppercase; margin-bottom: 1.5rem; }
.voice-in-ai-link-back-embed:last-of-type { margin-bottom: 0; }
.voice-in-ai-link-back-embed .logo { margin-top: .25rem; display: block; background: url(http://ift.tt/2g3SzGL) center left no-repeat; background-size: contain; width: 100%; padding-bottom: 30%; text-indent: -9999rem; margin-bottom: 1.5rem }
@media (min-width: 960px) { .voice-in-ai-link-back-embed .logo { width: 262px; height: 90px; float: left; margin-right: 1.5rem; margin-bottom: 0; padding-bottom: 0; } }
.voice-in-ai-link-back-embed a:link, .voice-in-ai-link-back-embed a:visited { color: #FF6B00; }
.voice-in-ai-link-back a:hover { color: #ff4f00; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links { margin-left: 0 !important; margin-right: 0 !important; margin-bottom: 0.25rem; }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:link, .voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:visited { background-color: rgba(255, 255, 255, 0.77); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links a:hover { background-color: rgba(255, 255, 255, 0.63); }
.voice-in-ai-link-back-embed ul.go-alexa-briefing-subscribe-links .stitcher .stitcher-logo { display: inline; width: auto; fill: currentColor; height: 1em; margin-bottom: -.15em; } Voices in AI – Episode 24: A Conversation with Deep Varma syndicated from http://ift.tt/2wBRU5Z
0 notes