forgetmenautical · 1 year
a second take at @420technoblazeit's aster!
#also there's text under the little headshot. if you have the background set to white and can't see it
mealha · 5 years
Hands on with the Leica SL2, plus sample images
The SL2 is a full frame mirrorless camera, built around a 47-megapixel CMOS sensor and designed with the idea that less is more. (Jeanette D. Moses/)
Early this morning Leica announced the SL2, a full frame mirrorless camera built around a 47-megapixel CMOS sensor. It brings improved ergonomics and a simplified three-button layout compared to the original SL, which came out in 2015. Mirrorless camera technology has obviously come a long way since Leica first introduced the SL, and while other camera companies seem to release a new mirrorless camera at least once a year, Leica's process works a little bit differently.
Prior to the camera's announcement we had a chance to visit the Leica headquarters in Wetzlar, Germany, tour the facilities where these cameras are produced, and be some of the very first in the world to get hands-on time with this forthcoming camera. Read on to learn more about our shooting experience with the SL2.
The SL2 is a perfectly capable camera for studio work. (Jeanette D. Moses/)
Design and feel
The ergonomics of the SL2 have gotten an overhaul compared to the original SL. The camera has a full-metal housing made of anodized aluminum and machined magnesium, and it carries an IP54 rating, which gives it a rugged and substantial feel. As with all Leica cameras, the SL2 feels like it is built to last. Hand-applied leatherette wraps around the body of the camera. The grip has been redesigned for a more comfortable shooting experience, and the button layout on the back of the camera has been simplified to match what is found on Leica Q and M cameras. On the back of the camera, to the left of the 3.2-inch touchscreen, you will find buttons for "Play," "FN," and "Menu." An On/Off switch sits to the left of the 5.76-million-dot OLED viewfinder, which has a comfortable rounded eyecup. To the right of the eyecup you'll find a joystick and rear dial. The top of the camera has two customizable buttons, a substantial top dial, and the shutter button. The simplified design is intuitive and easy to operate without taking the camera away from your eye.
Shot under continuous lighting with the SL2. (Jeanette D. Moses/)
The user interface of the SL2's menus also takes a less-is-more approach. Photo and video functions get dedicated menus that are also color coded: photo options appear as white text on a black background, while the video menu features black text on a white background. Major settings, like shutter speed, ISO, file format, and AF options, can be accessed via touch by pressing the rear menu button. Deeper menu options can be accessed using the joystick, but don't expect to be overwhelmed with options: the SL2's menu is only six pages long, without many nested settings. Its design is intuitive and things are easy to find, meaning you get to spend more time making pictures and less time fiddling with settings.
Shooting experience
Leica's simplified design philosophy translates into a camera that's fun to shoot in a wide variety of situations. It can work for photojournalism, portraiture, or street photography. During our time with it, we were pleased with the results we got in the studio, on the streets, and even photographing New York City nightlife.
Studio portrait with the SL2. (Jeanette D. Moses/)
When shooting with SL lenses, the autofocus was speedy and accurate. The camera features face and object detection, and although the SL2 doesn't offer the eye-AF boxes found in so many other mirrorless cameras, it did a nice job grabbing focus on our subjects' eyes. There did seem to be occasional lag between shooting a picture and being able to review it on the camera's LCD screen, but the version we were shooting with was running pre-production firmware; we suspect this issue will be fixed in the cameras that hit stores at the end of this month.
One of the biggest appeals of the SL2 is the ability to shoot with any of Leica's excellent M mount lenses. During our time with the SL2 we honestly spent the bulk of our time using an M mount adapter and manual focus M lenses. Nailing focus with a manual focus lens can take a bit of getting used to, but we found that the in-camera 5-axis stabilization did an excellent job of minimizing shake when shooting with these lenses.
The SL2 is great in lowlight as well. This was shot in a dimly lit restaurant at ISO 12,500. (Jeanette D. Moses/)
The SL2 has an ISO sensitivity range that goes up to 50,000, and although we never needed to push it that high, we found that the camera did a great job in dimly lit situations too.
Ultimately, the SL2 has an intuitive design that lets photographers focus on the art of making pictures. It isn't going to be a must-have for every type of photographer, but the camera's autofocus capabilities, fast processor, and new 47-megapixel sensor all make it very appealing—especially if you already have a nice stash of Leica M glass sitting on a shelf at home.
The SL2 will be available just before the holidays on November 21 for $5,995.
Sample images (all photos: Jeanette D. Moses):
A nature scene in Germany.
The Leica SL2 is an excellent camera for street photography.
Pigeons enjoying a snack on German cobblestones.
A storefront in Germany.
Sample image with the SL2. The camera is comfortable for a long day of street shooting.
A sunny day in Wetzlar, Germany.
This building in the old part of Wetzlar is where the first photo was taken with a Leica camera.
The SL2 has an option to shoot images in monochrome and the results are moody and beautiful.
A street scene in Berlin, Germany.
A rainy nighttime scene shot with a manual focus M mount lens.
A food truck in Berlin, Germany.
The camera is great for shooting on the streets thanks to its ergonomic design and quiet shutter.
A street scene in Berlin, Germany, shot with an M mount lens and the SL2.
A dog and his people wait for their photo booth shots in East Berlin.
Mauer Park in Berlin, Germany. Shot at night at ISO 12,500.
Back in the studio with the SL2. This headshot was captured with a 75mm lens.
Studio portrait captured with a 75mm SL lens. The camera does a great job of grabbing focus on a subject's face and eyes, even if they are moving.
Studio portrait captured with the SL2 and a 75mm lens.
The camera is a nice option for capturing nightlife as well. This burlesque dancer was photographed with a 35mm M mount lens at ISO 6400, f/6.8 at 1/125 sec.
Burlesque dancer in New York City.
from Popular Photography | RSS https://ift.tt/36GQSYM
lindyhunt · 6 years
The 17 Best Photoshop Filters & Plugins of 2018
There's no denying that Adobe Photoshop is a powerful photo editing tool, with loads of built-in features and effects. In fact, if you're a marketer with little (or zero) photography or graphic design experience, Photoshop can sometimes feel a bit overwhelming.
However, if you've spent years learning the ins and outs of Photoshop, you might now be arriving at a point where you feel like you've exhausted all of Photoshop's built-in benefits.
Regardless of your experience level, there are free Photoshop filters and plugins that can help. As a beginner, these free add-ons can help simplify complex editing processes. As an expert, they can help expand your available Photoshop tool set even further, and help lead you in new artistic directions.
Keep the following steps in mind when downloading the Photoshop filters and plugins we've included below. Each file must be saved in a specific location on your computer; if you'd rather script the copy step, see the sketch after the steps below.
How to Install Photoshop Plugins
1. Open Photoshop.
2. Select Edit from the dropdown menu, and select Preferences > Plugins.
3. Check the "Additional Plugins Folder" box to accept new files.
4. Download a plugin or filter to your desktop.
5. Open your Program Files folder and select your Photoshop folder.
6. Open your Plugins folder, found inside your Photoshop folder.
7. Drag your new Photoshop plugin from your desktop into the Plugins folder.
8. Reopen Photoshop and find your new plugin under Filters in the dropdown menu.
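If you install plugins often, steps 4 through 7 boil down to copying a file into Photoshop's Plugins folder, which is easy to script. Below is a minimal Python sketch of that copy step. The install path and the plugin filename are assumptions you'd adjust for your own version and OS, and note that action (.ATN) files are normally loaded through Photoshop's Actions panel rather than the Plugins folder.

```python
# install_plugin.py -- a minimal sketch of steps 4-7 above.
# Assumptions: an example Windows install path and a hypothetical plugin
# filename. Copying into Program Files may require admin rights.
import shutil
from pathlib import Path

PHOTOSHOP_DIR = Path(r"C:\Program Files\Adobe\Adobe Photoshop CC 2018")  # assumed location
PLUGINS_DIR = PHOTOSHOP_DIR / "Plug-ins"

def install_plugin(downloaded_file: str) -> Path:
    """Copy a downloaded plugin file into Photoshop's plugin folder."""
    src = Path(downloaded_file).expanduser()
    if not src.exists():
        raise FileNotFoundError(f"No such file: {src}")
    PLUGINS_DIR.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    dest = PLUGINS_DIR / src.name
    shutil.copy2(src, dest)  # copy the file, preserving metadata
    return dest

if __name__ == "__main__":
    # Hypothetical plugin file sitting on the desktop (step 4 above).
    print(install_plugin("~/Desktop/example-filter.8bf"))
```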
Free Photoshop Filters
Note: The filters below are technically Photoshop "actions" (.ATN files). An action is a pre-recorded series of steps that allows you to apply effects -- in this case, filters -- automatically.
1. Dramatic Sepia (via Efeito Photoshop)
Price: Free
Sure, you can easily create that classic, reddish-brown sepia effect in Photoshop manually by selecting Image > Adjustments > Photo Filters, and then choosing "Sepia" from the dropdown menu. But if you're looking for a sepia filter that's a bit more -- well, dramatic -- the free Dramatic Sepia action from Efeito Photoshop is a popular option.
Image Credit: Romenigps
2. Blue Evening (via Photographypla.Net)
Price: Free
Opposite the warm tones of the sepia filter above, Blue Evening cools down your photos with a pretty blue hue. This Photoshop filter adds a dose of solemnity to your images. You can also use it to soften a sunlit photo and emphasize a cold outdoor temperature.
Image Credit: Photographypla.Net
3. Old Photo (via DeviantArt)
Price: Free
If sepia is a bit too much for your taste, but you're still trying to create a nostalgic, old-timey feel, the Old Photo action has got you covered. The action will adjust the color and contrast of your image, transporting it back in time (visually speaking).
Image Credit: sakiryildirim
4. Nightmare (via Shutter Pulse)
Price: Free
Did you ever want to give your photo a haunting look? The setting of your shot only does so much to affect how people perceive the image. For added creepiness, you have the Nightmare filter. Download this Photoshop filter to make any photo look like it came from a horror movie.
Image Credit: Shutter Pulse
5. HDR Tools (via DeviantArt)
Price: Free
HDR Tools is a set of four "actions" that transform dull backgrounds to reveal intense, eye-catching details. You can turn flat grey tones into beautiful backgrounds that create a contrast with the foreground that you didn't have before. The duller the image, the heavier the HDR filter you should apply.
Image Credit: Forfie
6. Dream Blur (via DeviantArt)
Price: Free
As its name suggests, the Dream Blur action adds a filter to your image that creates a subtle, dream-like atmosphere. Specifically, the action produces a dark, blurry vignette at the edges of your image while also upping the saturation levels.
Image Credit: JoshJanusch
7. Vintage (via DeviantArt)
Price: Free
Unlike the Old Photo action from earlier on this list, the Vintage action does more than just visually transport your image back in time -- it also adds a distinctive neon effect (perfect for giving your next project a groovy feel).
Image Credit: beckasweird
8. Lithprint (via DeviantArt)
Price: Free
The Lithprint action imitates the vintage look produced by the black-and-white lith printing process. But compared to the other vintage filters on this list, Lithprint is much more drastic. In addition to adjusting contrast, highlights, and shadows in your image, it adds a gritty texture.
Image Credit: rawimage
Free and Inexpensive Photoshop Plugins
9. virtualPhotographer
Price: Free
If you're struggling to produce particular effects in Photoshop (e.g., black and white, high contrast, polarization, etc.), virtualPhotographer by OptikVerve Labs could be the plugin you've been looking for. VirtualPhotographer's primary claim to fame? It allows you to add complicated effects to images with a single click.
Image Credit: optikVerve Labs
10. ON1 Effects
Price: Free
Like the virtualPhotographer plugin, ON1 Effects is a free Photoshop plugin that makes it easier for you to add complex effects to your images. What sets ON1 Effects apart is that it boasts a library of filters -- including vignette, adjustable contrast, and HDR look -- that you can stack on top of each other, allowing you to easily build layers of different effects.
Image Credit: ON1
11. Snapheal
Price: $20
Snapheal is one of the best ways to remove unwanted flaws and blemishes from your photos. Cleaning up a person's headshot or the background of a scenic shot? This Photoshop plugin helps you polish your work in three steps: upload your photo, remove unwanted objects, and enhance the final product.
Image Credit: Snapheal
12. Ink
Price: Free
This tool is a detail-oriented person's best friend. Ink is a Photoshop plugin that allows you to see additional information on a design you're creating -- this includes text size, font name, color codes, and the size of your image in pixels. It also gives you grid lines to help you center and level your artwork.
Image Credit: Chrometaphore
13. Flaticon
Price: Free
Wish you could sort through thousands of free icons and add them to your projects without having to leave the comfort of Photoshop? Then Flaticon by Freepik is definitely worth a look. The free plugin's icons are available in .SVG, .PSD, and .PNG formats.
Image Credit: Flaticon
14. RH Hover Color Picker
Price: $16
Photoshop's color palette has been known to irritate users -- especially those who need a more customizable dashboard to capture the color they need when editing and designing. RH Hover Color Picker is the solution. This Photoshop plugin by Rico Holmes (hence "RH") acts as a convenient color wheel that you can simply "hover" over your image as you perfect your illustration. It even gives you the RGB code of the color you've selected so you can easily find it later.
Image Credit: Rico Holmes
15. Pexels
Price: Free
You might know Pexels as a free stock photography gallery. What you might not know is that you can integrate this free content right into Photoshop. The Pexels plugin has more than 30,000 free images to choose from, and syncs your Liked photos with Photoshop so you can call up your favorite stock photos for quick editing.
Image Credit: Pexels
16. Focus
Price: $60
Focus is "portrait mode" on steroids. A smartphone camera can use portrait mode to blur the background of closeup imagery, but it often can't handle wide, complex shots where you need to highlight specific parts of the photo. For that, you have Focus. This plugin, now available in the form of Focus 2, allows you to blur backgrounds, set the blur's intensity, and sharpen the edges of the object you're focusing on to make it truly pop out of the picture.
Image Credit: Skylum
17. Tych Panel
Price: Free
The free Tych Panel plugin makes it easy for you to create double panel (diptych), triple panel (triptych), and quadruple+ panel (ntych) projects in Photoshop. Just select the number of rows and panels you want, as well as the alignment style, and Tych Panel will format everything for you automatically.
Image Credit: Lumens
Ready to get started? Download some of the filters and plugins above and grab your free guide to Photoshop below.
techscopic · 7 years
Voices in AI – Episode 15: A Conversation with Daniel H. Wilson
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Daniel talk about magic, robots, Alexa, optimism, and ELIZA.
Visit VoicesInAI.com to access the podcast, or subscribe now: iTunes | Play | Stitcher | RSS
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today, our guest is Daniel Wilson. He is the author of the New York Times best-selling Robopocalypse, as well as its sequel, Robogenesis, and other books including How to Survive A Robot Uprising, A Boy And His Bot, and Amped. He earned a PhD in Robotics from Carnegie Mellon University, and a master's degree in AI and Robotics as well. His newest novel, The Clockwork Dynasty, was released in August 2017. Welcome to the show, Daniel.
Daniel H Wilson: Hi, Byron. Thanks for having me.
So how far back—the earliest robots—I guess they began in Greek myth, didn’t they?
Yeah, so it’s something I have been thinking a lot about, because automatons play a major part in my novel The Clockwork Dynasty. I started thinking about, how far back does this desire to build robots, or lifelike machines, really go? Yeah, and if you start to look at history, you’ll see that we have actual artifacts from the last few hundred years.
And before that, we have a lot stories. And before that, we have mythology, and it does go all the way back to Greek mythology. People might remember that Hephaestus supposedly built tripod robots to serve the gods on Mount Olympus.
They had to chain them up at night, didn’t they, because they would wander off?
I don’t remember that part, but it wouldn’t surprise me. Yeah, that was written somewhere. Someone reported that they had visited, and that was true. I think there was the giant bronze robot that guarded… I think it was Crete, that was called Talos? That was another one of Hephaestus’s creations. So yeah, there are stories about lifelike machines that go all the way back into prehistory, and into mythology.
I think even in the story of Prometheus, in its earliest tellings, it was a robot eagle that actually flew down and plucked his liver out every day?
Oh, really… I didn’t remember that. I always, of course, loved the little robots from Clash of the Titans, you know the robot owl… do you remember his name?
No.
Bobo, or something.
That’s funny. So, those were not, even at the time, considered scientific devices, right? They were animated by magic, or something else. Nobody looked at a bunch of tools and thought, “A-ha, I can build a mechanical device here.” So where do you think it came from?
Well, you know, I think obviously human beings are really fascinated with themselves, right? You think about Galatea… and creating sculptures, and creating imitations of ourselves, and of animals, of course. It doesn’t surprise me at all that people have been trying to build this stuff for a really long time; what is kind of interesting to consider is to look at how it’s evolved over centuries and centuries.
Because you’re right; one thing that I have found doing research for this novel is that—it’s really fascinating to me—our concept of the scientific method, and the idea of the world as a machine, and that we can pick up the pieces and build new things. And we can figure out underlying physical principles, and things like that. That’s a relatively new viewpoint, which human beings haven’t really had for that long.
Looking at automatons, I saw that there's this sort of pattern, in that the longer we build these things, they really are living embodiments of the world as the machine, right? If you start to look at the automatons being built during the Middle Ages, and then up through the beginning of the Industrial Revolution… you see that people like Descartes, and philosophers who really helped us, as a civilization, solidify our viewpoint of the way nature works, and the way that science works… They were inspired by automatons, because they showed a living embodiment of what it would be like if an animal were made out of parts.
Then you go and dissect a real animal, and you start to think, “Wait, maybe I can figure this out. Maybe it’s not just, ‘God created it, walk away from it; it is what it is.’” Maybe there’s actually some rule or rhyme under this, and we can figure it out. I think that these kinds of machines actually really helped propel our civilization towards the technological age that we live in right now, because these philosophers were able to see this playing out.
Sorry, not to prattle on too long, but one thing I also really love about, specifically, medieval times is [that] the notions of how this stuff works were very set down, but they were also very magical, right? There were different types of magic, that’s what I really loved in my research. Finding that whenever you see something like an aqueduct functioning, they would think of that as a natural kind of magic, whereas if you had some geometry, or pure math, they would think of that as a celestial type of magic.
But underneath all of it [there] were always angels or demons, and always there were the suspicions of a Necromantic art, that this lifelike thing is animated by a spirit of the dead, you know. There’s so much magic and mystery that was laced into science at the time, that I think it really hindered the ability to develop iterative scientific advancements, at the time.
So picking up on that a bit, late eighteenth century, you’ve got Frankenstein. Frankenstein was a scientific creation, right? There was nothing magical about that. Can you think of an example before Frankenstein where the animating force was science-based?
The animating force behind some kind of creature, or like lifelike automaton? Yeah, I really can’t. I can think of lots of examples of stuff like Golem, or something like that, and they are all kind of created by magic, or by deities. I’m trying to think… I think that all of those ideas really culminated right around the time of the Industrial Revolution, and that was really reflective of their time. Do you have any examples?
No. What do you know about Da Vinci’s robot?
Not much. I know that he had a lot of sketches for various mechanical devices.
He, of course, couldn’t build it. He didn’t have the tools, but obviously what Da Vinci would have made would have been a purely scientific thing, in that sense.
Sure, but even if it were, that doesn’t mean that other people wouldn’t have applied the mindset that, whatever his inventions were, [they] were powered by natural magic, or some kind of deity or spirit. It’s kind of funny, because people back then were able to completely hold both of those ideas in their heads at once.
They could completely believe the idea that whatever they were creating was magical, and at the same time, they were doing science. It’s such an interesting thing to contemplate, being able to do science from that mentality.
Let’s go to the 1920s. Talk to us about the play that gives us the word ‘robot’.
Wow, this is like a quiz. This is great. So, you’re talking about R.U.R., the Čapek play. Yeah, Rossum’s Universal Robots—it’s a play from the ’20s in which, you know, a scientist creates a robot, and a race of robots. And of course, what do they do, they rise up and overthrow humanity and they kill every single one of us. It’s attributed as being the place where the term ‘robot’ was coined, and yeah, it plays out in the way that a lot of the stories about robots have played out, ever since.
One of the things that is interesting about R.U.R. is that, so often, we use robots differently in our stories, based on whatever the context is, of what’s going on in the world at the time, because robots are really reflections of people. They are kind of this distorted mirror that we hold up to ourselves. At that time, you know, people were worried about the exploitation of the working class. When you look at R.U.R., that’s pretty much what those robots embodied.
They are the children of men, they are working class, they rise up and they destroy their rulers. I think the lesson there was clear for everybody in the 1920s who went to go see that play. Robots represent different things, depending on what’s going on. We’ve seen lots of other killer robots, but they’ve embodied or represented lots of other different evils and fears that we have, as people.
Would you call that 1920s version of a robot as a fully-formed image, the way we kind of think of them now? What would have been different about that view of robots?
Well, no. Those robots… I don’t think they were even… they just looked like people. I don’t even think there was the idea that they were made of metal, or anything like that. I think that that sort of image of the pop culture robot evolved more in the ’40s, ’50s, and ’60s, with pulp science fiction, when we started thinking of them as “big metal men”—you know, like Gort from The Day the Earth Stood Still, or Robby, or all of these giant hunks of metal, you know, with lights and things on them, that are more consistent with the technology of that time, which was the dawn of rocket ships and stuff like that, and that kind of science fiction.
From what I recall, in R.U.R., they aren’t mechanical at all. They are just like people, except they can’t procreate.
Here’s what I’m struck by, that just… the reason why I ask you if you thought they were fully modern: Let me just read you this quote from the play, and tell me what it sounds like to you… This is Harry Domin, he’s one of the characters, and he says:
“In ten years, Rossum’s Universal Robots will produce so much corn, so much cloth, and so much of everything that things will be practically without price. There will be no poverty, all work will be done by living machines. Everyone will be free from worry, and liberated from the degradation of labor. Everyone will live only to perfect himself.”
Yeah, it’s like…a utopian post-economy. Of course, it’s built on the back of slaves, which I think is the point of the play. Yeah…we’re all going to have great lives, and we’re going to be standing right on the throats of this race of slaves that are going to sacrifice everything so we can have everything, right?
I guess I am struck by the fact that it seems very similar to what people’s hope for automation is right now. “The factories will run themselves.” Who was it that said, “The factory of the future will only have two employees—a man and a dog. The man’s job will be to feed the dog, and the dog’s job will be to keep the man from punching the machines.”
I’ve been cooking up a little rant about this, lately. Honestly, I might as well launch into it. I think that’s actually a really naïve and childish view of a future. I’m starting to realize it more and more as I see the technology that we are receiving. This is sort of the first fruit, right?
Because we’ve only just gotten speech recognition to a level that’s useful, and gesture recognition, and maybe a little bit of natural language, and some computer vision, and then just general AI pattern recognition—we’re just now getting useful stuff from that, right?
We’re getting stuff like Alexa, or these mapping algorithms that can take us from one place to another, and Facebook and Twitter are choosing what they think would be most interesting to us, and I think this is very similar to what they’re describing in R.U.R., is this perfect future where we do nothing.
But doing nothing is not perfect. Doing nothing sucks. Doing nothing robs a person of all their ability and all their potential—it’s not what we would want. But a child, a person who just stumbled upon a treasure trove of this stuff, that’s what they’d think; that’s like the first wish you’d make, that would then make the rest of your life hell.
That’s what we are seeing now, what I’ve been calling the ‘candy age’ of artificial intelligence, where people—researchers and technologists—are going, “What do people want? Let’s give them exactly what they say they want.”
Then they do, and then we don’t know how to get around in the cities that we live, because we depend on a mapping algorithm. We don’t know the viewpoints that our neighbors have, because we’ve never actually read an article that doesn’t tell us exactly what our worldview already is… there are a million examples. Talking to Alexa, I don’t have to say ‘please’ or ‘thank you’. I just order it around, and it does whatever I say, and delivers whatever I ask for.
I think that, and hope that, as we get a little bit more of a mature view on technology, and as the technology itself matures, we can reach a future in which the technology doesn’t deliver exactly what we want, exactly when we want it, but the technology actually makes us better, in whatever way it can. I would prefer that my mapping algorithm not just take me to my destination, I want it to help me know where stuff is myself. I want it to teach me, and make me better.
Not just give me something, but make me better. I think that, potentially, that is the future of technology. It’s not a future where we’re all those overweight, helpless people from Wall-E, you know… leaning back in floating chairs, and doing nothing, and totally dependent on a machine. I think it’s a future where the technology makes us stronger, and I think that’s a more mature worldview and idea of the future.
Well, you know, the quote that I read though, he said that “everybody will spend their time perfecting themselves.” And I assume you’ve seen Star Trek before?
Sure, yes.
There's an episode where the Enterprise thaws some people out from the twentieth century, and one of the guys is named Offenhouse, and he's talking about what's the challenge in a world where there are no material needs or hunger, and all of that? And Picard said the challenge is to become a better person, and make the most of it. So that's also part of the narrative as well, right?
Yeah, and I think that slots in kind of well with the Alexa example, you know? Alexa is this AI that Amazon has built that—oh God, and mine’s talking to me right now because I keep saying her name—it’s this AI that sits in your house and you tell it what to do, and you don’t have to be polite to it. And this is kind of interesting to contemplate, right?
If your future with technology is a place where you are going to hone your sense of being the best version of yourself that you can be, how are you going to do that if you’re having interactions with lifelike machines in which you don’t have to behave ethically?
Where it’s okay to shout at Alexa, who—sorry, I’ve got to whisper her name—who, by the way, sounds exactly like a woman, and has a woman’s voice, and is therefore implicitly teaching you via your interaction with her that it’s okay to shout at that type of a voice.
I think it’s going to be, not a mutually exclusive thing, where the machines take over everything and you are free to be by yourself… because technology is a huge part of our life. We are going to have to work with technology to be the best versions of ourselves. I think another example you can find easily is just looking at athletes.
You don’t gauge how fast a runner is by putting them on a motorcycle; they run. They’re human. They are perfecting something that’s very human. And yet, they are doing it in concert with extreme levels of technology, so that when they do stand on the starting mark, ideally under the same conditions that every other human has stood on a starting mark for the last, however long, and the pistol goes off, and they start running, they are going to run faster than any human being who ever ran before.
The difference is that they are going to have trained with technology, and it’s going to have made them better. That’s kind of the non-mutually-exclusive future that I see, or that I end up writing science fiction about, since I’m not actually a scientist and I don’t have to do any of this stuff.
Let me take that idea and run with it just a minute. Just to set this up for the listener, in the 1960s, there was a man named Weizenbaum, who wrote a program named ELIZA. ELIZA was kind of a therapy bot—I guess we would think of it now—and you would say something like, “I’m having a bad day,” and it says, “Why are you having a bad day?” And you would say, “I’m having a bad day because of my boyfriend,” and it says, “What about your boyfriend is making you have a bad day?”
It’s really simple, and uses a few linguistic rules. And Weizenbaum saw people engaging with it, and even though they knew it was a machine, he saw them form an emotional attachment—they would pour their heart out to it, they would cry. And he turned on AI, as it were. He deleted ELIZA and said, when the computer says, “I understand,” it’s just a lie, because there’s no “I” and no understanding.
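For context, a minimal Python sketch of the kind of pattern-reflection ELIZA relied on looks like this; the rules below are illustrative stand-ins, not Weizenbaum's original DOCTOR script, which had a much larger keyword-ranked rule set.

```python
# A minimal ELIZA-style bot: a few regex rules that echo the user's
# statement back as a question, swapping first-person words for
# second-person ones. Illustrative only.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "i'm": "you're"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why are you {0}?"),
    (re.compile(r"i'm (.*)", re.I), "Why are you {0}?"),
    (re.compile(r"(.*) because (.*)", re.I), "What about {1} makes {0}?"),
    (re.compile(r"(.*)", re.I), "Tell me more about that."),  # catch-all
]

def reflect(fragment: str) -> str:
    # "my boyfriend" -> "your boyfriend", "i am" -> "you are", etc.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    # Rules are tried in order; the first matching pattern wins.
    for pattern, template in RULES:
        m = pattern.match(statement.strip().rstrip("."))
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(respond("I am having a bad day"))
# -> Why are you having a bad day?
print(respond("I'm having a bad day because of my boyfriend"))
# -> Why are you having a bad day because of your boyfriend?
```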
He distinguished between choosing and deciding. He said, “Deciding is something a computer can do, but choice is a human thing.” He was against using computers as substitutes for people, especially anything that involved empathy. Is your observation about Alexa that we need to program it to require us to say please, or we need to not give it a personality, or something different?
Absolutely, we need to just figure out ethical interactions and make sure that our technology encourages those. And it’s not about the technology. No one cares about whether or not you’re hurting Alexa’s feelings; she doesn’t have any feelings. The question is, what kind of interactions are you setting up for yourself, and what kind of behaviors are you implicitly encouraging in yourself?
Because we get to choose the environments that we are in. The difference between when ELIZA was written and now is that we are surrounded by technology. Every minute of our lives has got technology. At that time, you can say, “Oh, let’s erase the program, this is sick, this is messed up.” Well guess what man, that’s not the world anymore.
Every teenager has a real social network, and then they have a virtual social network, that’s bigger and stranger and more complex… and possibly more rewarding than the real people that are out there. That’s the environment that we live in now. It’s not a choice to say “turn it off,” right? We’re too far. I think that the answer is to make sure that technologists remember that this is a dimension that they have to consider while they create technology.
That’s kind of a new thing, right? We didn’t have to use to worry about consumer products: Are people going to fall in love with a toaster? Are people going to get upset when the toaster goes kaput, are people going to curse at the toasters and become worse versions of themselves? That wasn’t an issue then, but it is an issue now, because we are having interactions with lifelike artifacts. Therefore, ethical dimensions have to be considered. I think it’s a fascinating problem, and I think it’s something that is going to really make people better, in the end.
Assuming we do make machines that simulate emotions—you can have a bot best friend, or what have you—do you think that that is something that people will do, and do you think that that is healthy, and good, and positive?
It’s going to be interesting to see how that shakes out. Talking in terms of decision versus choice; one thing that’s always stuck with me is a moment in the movie AI, when Gigolo Joe—who is exactly what he sounds like, and he’s a robot—he looks this woman in the eyes, and he says, “You are the most beautiful woman in the world.” Immediately, you look at that, and you go, he’s just a robot, that doesn’t mean anything.
He just said, “You’re the most beautiful woman in the world,” but his opinion doesn’t mean anything, right? But then you think about it for another second, and you realize, he means it. He means that with every fiber of his being, and there’s no human alive, that could probably look at that exact woman, at that exact moment, and say, “You’re the most beautiful woman alive,” and really mean it. So, there’s value there.
You can see how that value exists when you see complete earnestness versus how a wider society might attribute a zero value to the whole thing, but at least he means it. So yeah, I can kind of see both sides of this. I’m judging now from the environment that I live in right now, the context of the world that I have; I don’t think it would be a great idea. I wouldn’t want my kids to just have virtual friends that are robots, or whatever, but you never know.
I can’t make that call for people twenty years from now. They could be living in a friggin’ apocalypse, where they don’t have access to human beings and the only thing that they’ve got are virtual characters to be friends with. I don’t know what the future is going to bring. But I can definitely say that we are going to have interactions with lifelike machines, there are going to be ethical dimensions to those interactions; technologists had better figure out ways to make sure those interactions make us better people, and not monsters.
You know, it’s interestingly an old question…you remember that original Twilight Zone show, about the guy who’s on the planet by himself—I think he’s in prison—and they leave him a robot. He gets a pardon, or something, and they go to pick him up, and they only have room for him, not the robot, and he refuses to leave the robot.
So, he just stays alone on the planet. It’s kind of interesting that fifty years ago, we looked ahead and that was a real thing that people thought about—are synthetic emotions as valuable to a human as real ones? I assume you think we are definitely going to face that—as a roboticist—we certainly are going to build things that can look you in the eye, and tell you that you are beautiful, in a very convincing way.
Yes. I have a very humanist kind of viewpoint on this. I don’t think technology means anything without people. I think that technology derives its value entirely from how much it matters to human beings. The part of me that gets very excited about this idea of the robot that looks you in the eye and says, “I love you”—I’m not interested in replacing human relationships that I have. I don’t know how many friends you have, but I have a couple of really good friends.
That’s all I can handle. I have my wife, and my kids, and my family. I think most people aren’t looking to add more and replace all their friends with machines, but what I get excited about is how storytelling is going to evolve. Because all of us are constantly scouring books and movies and television, because we are looking for glimpses of those kinds of emotional interactions and relationships between people, because we feed on that, because we are human beings and we’re designed to interact with each other.
We just love watching other human beings interact with each other. So—having written novels and comic books and screenplays and the occasional videogame—I can’t wait to interact with these types of agents in a storytelling setting, where the game, where the story, is literally human interaction.
I’ve talked about this a little bit before, and some examples I’ve cooked up, like… What if it’s World War I, and you’re in No Man’s Land, and there are mortars streaking out of the sky, blowing up, and your whole job for this story is to convince your seventeen-year-old brother to get out of the crater and follow you to the next crater before he gets killed, right? The job is not to carry a videogame gun and shoot back.
Your job is to look him in the eye, and beg him, and say, “I’m begging you, you have to get up, you have to be strong enough to come with me and go over here, I promised mom you would not die here!” You convince him to get up and go with you over the hill to the next crater, and that’s how you pass that level of that story, or that’s how you move through that storytelling world.
That level of human interaction with an artificial agent, where it’s looking at me, and it can tell whether I mean it, and it can tell if there’s emotion in my voice, and it can tell if I’m committed to this, and it can also reflect that back to me accurately, through the actions of this artificial agent… Man, now that is going to be a really fascinating way to engage in a story. And I think, it has—again, like I’ve been harping on—it has the ability to make people better through empathy, through sharing situations that they get to experience emotionally, and then understand after that.
Thinking about replacing things is interesting, and often depressing. I think it’s more interesting to think about how we are going to evolve, and try out new things, and have new experiences with this type of technology.
Let’s talk a little bit about life and intelligence. So, will the robots be alive? Do you think we are going to build living machines… And by asking you the question, I am kind of implicitly asking you to define life.
Sorry, let’s back up. The question is: Do we think we’re going to build perfectly lifelike machines?
No. Will we build machines that are alive—whether they look human or not, I’m not interested in. Will there be living machines?
That’s interesting, I mean—I only find that interesting in a philosophical way to contemplate. I don’t really care about that question. Because at the end of the day, I think Turing had it right. If we are talking about human-like machines, and we are going to consider whether they are alive… which would probably mean that they need rights, and things like that… then I think the proof is just in the comparison. I’m making the assumption that every other human is conscious.
I’m assuming that I’m conscious, because I’m sitting here feeling what executive function feels like, but, I think that that’s a fine hoop to jump through. Human-like level of intelligence: it’s enough for me to give everyone else the benefit of the doubt, it’s enough for them to give me the benefit of the doubt, so why wouldn’t I just use that same metric for a lifelike machine?
To the extent that I have been convinced that I’m alive, or that anybody is alive, I’m perfectly willing to be convinced that a machine is alive, as well.
I would submit, though, that it is the farthest thing from a philosophical question, because, as you touched on, if the machine is alive, then it has certain rights? You can’t have it plunge your toilet, necessarily, or program it to just do your bidding. If it isn’t, like…  Nobody thinks the bots we have now are alive. Nobody worries—
—Well, we currently don’t have a definition of ‘life’ that everyone agrees on, period. So, throwing robots into that milieu, is just… I don’t know…
We don’t have to have a definition. We can know the endpoints, though. We know a rock is not alive, and we know a human is alive. The question isn’t, are robots going to walk in some undefined grey area that we can’t figure out; the question is, will they actually be alive? And if they’re alive, are they conscious?
And if they’re conscious, then that is the furthest thing from a philosophical question. It used to be a philosophical question, when you couldn’t even really entertain the question, but now…
I’m willing to alter that slightly. I’ll say that it’s an academic question. If the first thing that leads off this whole chain is, “Is it alive?” and we have not yet assigned a definition to that symbol—A-L-I-V-E—then it becomes an academic discussion of what parameters are necessary in order to satisfy the definition of ‘alive’.
And that is not really very interesting. I think the more interesting thing is, how are we actually going to deal with these things in our day-to-day lives? So from a very practical, concrete manner, like… I walk up to a robot, the robot is indistinguishable from a human being—which, that’s not a definition of alive, that’s just a definition—then how am I going to behave, what’s my interaction protocol going to be?
That’s really fun to contemplate. It’s something that we are contemplating right now. We’re at the very beginning of making that call. You think about all of the thought experiments that people are engaging in right now regarding autonomous vehicles. I’ve read a lot lately about, “Okay, we got a Tesla here, it’s fully autonomous, it’s gotta go left or right, can’t do anything else… There’s a baby on the left, and an elderly person on the right. what are we going to do? It’s gotta kill somebody; what’s going to happen?”
The fact is, we don’t know anything about the moral culpability, we don’t know anything about the definitions of life or of consciousness, but we’ve got a robot that’s going to run over something, and we’ve got to figure out how we feel about it. I love that, because it means that we are going to have to formalize our ethical values as a society.
I think that’s something that’s very good for us to consider, and we are going to have to pack that stuff into these machines, and they are going to continue to evolve. My feeling is that I hope that by the time we get to a point where we can sit in armchairs and discuss whether these things are alive, they’ll of course already be here. And hopefully, we will have already figured out exactly how we do want to interact with these autonomous machines, whether they are vehicles or human-like robots, or whatever they are.
We will hopefully already have figured that out by the time we smoke cigars and consider what ‘aliveness’ is.
Let me try it again, if ‘aliveness’ isn’t the thing. So, I asked the question because… up until the 1990s, veterinarians were taught not to use anesthetic when they operated on animals. The theory was—
—And on babies. Human babies. Yeah.
Right. That was scientific consensus, right? The question is, how would we have known? Today, we would look at that and say, “That dog really looks like it’s hurting.” Therefore, we would be intensely curious to know it. And of course we call that sentience, the ability to sense something, generally pain, and we base our laws all on it.
Human rights arrived, in part, because we are sentient… and animal cruelty, because the animals are [sentient]. And yet, we don’t get in trouble for using antibiotics on bacteria because, they are not deemed to be sentient. So all of a sudden we are going to be confronted by something that says, “Ouch, that hurt.” And either it didn’t, and we should pay that no mind whatsoever, or it did hurt, which is a whole different thing.
To say, “Let’s just wait until that happens, and then we can sit around and discuss it academically” is not necessarily what I’m asking—I’m asking how will we know when that moment changes? It sounds like you are saying, we should just assume, if they say they hurt, we should just assume that they do.
By extension, if I put a sensor on my computer, and I hold a match to it, and it hits five hundred degrees, and it says “ouch,” I should assume that it is in pain. Is that what you’re saying?
No, not exactly. What I’m saying is that there is going to be a lot of iterations before we reach a point where we have a perfectly lifelike robot that is standing in front of you and saying, “Ouch.” Now, what I said about believing it when it says that, is that I hold it to the same bar that I hold human beings to: which is to say, if I can’t tell the difference between it and a human being, then I might as well give it the benefit of the doubt.
That’s really far down the line. Who knows, we might not ever even get there, but I assume that we would. Of course, that’s not the same standard that I would hold a CPU to. I wouldn’t consider the CPU as feeling pain. My point is, every iteration that we have, until we reach that perfectly lifelike human robot that’s standing in front of us and saying, “You hurt my feelings, you should apologize,” is that the emotions that these things exhibit are only meaningful insomuch as they affect the human beings that are around them.
So I’m saying, to program a machine that says, “Ouch you hurt my feelings, apologize to me,” is very important, as long as it looks like a person. And there is some probability that by interacting with it as a person, I could be training myself to be a serial killer without knowing it, if it didn’t require that I treat it with any moral care.
Is that making any sense? I don’t want to kick a real dog, and I don’t want to kick a perfectly lifelike dog. I don’t think that’s going to be good for me.
Even if you can argue that one dog doesn’t feel it, and the other dog does. In the case that one of the dogs is a robot, I don’t care about that dog actually getting hurt—it’s a robot. What I care about is me training myself to be the sort of person who kicks a dog. So I want that robot dog to not let me kick it—to growl, to whimper, to do whatever it does to invoke whatever the human levers are that you pull in order to make sure that we are not serial killers… if that makes any sense.
Let me ask in a different way, a different kind of question. I call a 1-800 number of my airline of choice, and they try to route me into the automated system, and I generally hit zero, because… whatever.
I fully expect that there is going to be a day, soonish, where I may be able to chat with a bot and do some pretty basic things without even necessarily knowing that it’s a bot. When I have a person that I’m chatting with, and they’re looking something up, I make small talk, ask about the weather, or whatnot.
If I find myself doing that, and then, towards the end of the call I figure out that this isn’t even a person; I will have felt tricked, and like I wasted my time. There’s nothing there that heard me. We yell at the TV—
—No. You heard you. When you yell at the TV, you yell for a reason. You don’t yell at an empty room for no reason, you yell for yourself. It’s your brain that is experiencing this. There’s no such thing as anything that you do which doesn’t get added up and go into your personality, and go into your daily experiences, and your dreams, and everything that eventually is you.
Whatever you spend your time doing, that’s going to have an impact on who you are. If you’re yelling at a wall, it doesn’t matter—you’re still yelling.
Don’t you think that there is something different about interacting with a machine and interacting with a human? We would by definition do those differently. Kicking the robot dog, I don’t think that’s going to be what most people do. But if the Tesla has to go left or go right, and hit a robot dog or a real dog… You know which way it should go, right?
Clearly the Tesla, we don’t care what decision it makes. We’re not worried about the impact on the Tesla. The Tesla would obviously kill a dog. If it was a human being who had a choice to kill a robot dog or a real dog, we would obviously choose the robot dog, because it would be better for the human being’s psyche.
We could have fun playing around with gradations, I guess. But again, I am more interested in real practical outcomes, and how to make lifelike artifacts that interact with human beings ethically, and what our real near-term future with that is going to look like. I’m just curious, what’s the future that you would like to see? What kind of interactions would you prefer to have—or none at all—with lifelike machines?
Well, I’m far more interested—like you—with what’s going to happen, and how we are going to react to it. It’s going to be confusing, though, because we’re used to things that speak in a human voice being a human.
I share some of Weizenbaum’s unease—not necessarily quite to the extent—but some unease that if we start blurring the lines between what’s human and what’s not, that doesn’t necessarily ennoble the machine. It may actually be to our own detriment. We’ve had to go through thousands of years of civilization to get something we call human rights, and we do them because we think there is something uniquely special about humans, or at least about life.
To just blithely say, “Let’s start extending that elsewhere,” I think it diminishes and maybe devalues it. But, enough with that; let me ask you a different one. What do you see? You said you’re far more interested in what are we going to do with these… what does the near-future hold. So, what does the near future hold?
Well, yeah, that's kind of what I was ranting about before. Exactly what you were saying; I really agree with you strongly that these interactions, and what happens with us and our machines, put a lot of power in the hands of the people that make this technology. Like this dopamine-reflex, mouse-pushing-the-cocaine-button way that we check our smartphones; that's really good for corporations. That's not necessarily great for individuals, you know?
That’s what scares me. If you ask me what is worrisome about the potential future interactions we have with these machines, and whether we should at all, a lot of it boils down to: Are corporations going to take any responsibility for not harming people, once they start to understand better how these interactions play out? I don’t have a whole lot of faith in the corporations to look out for anyone’s interests but their own.
But once we start understanding what good interactions look like… maybe as consumers, we can force these companies to make products that are hopefully going to make us better people.
Sorry, I got a little off into the weeds there. That’s my main fear. And as a little aside, I think it’s absolutely vital that when we are talking to an AI, or when we are interacting with a lifelike artificial machine, that that interaction be out in the open. I want that AI to tell me, “Hi, I’m automated, let’s talk about car insurance.”
Because you’re right, I don’t want to sit there and talk about weather with that thing. I don’t want to treat it exactly like I would a human being—unless it’s like fifty years from now, and these things are incredibly smart, and it would be totally worthwhile to talk to it. It would be like having a conversation with your smart aunt, or something.
But I would want that information upfront. I would want it to be flagged. Because I’d want to know if I’m talking to something that’s real or not—my boundaries are going to change depending on that information. And I think it’s important.
You have a PhD in Robotics, so what’s going to be something that’s going to happen in the near future? What’s something that’s going to be built that’s really just going to blow our minds?
Everyone's always looking for something totally new, some sort of crazy app that's going to come out of nowhere and blow our minds. It's highly doubtful that anything like that is going to happen within the next five years, because science is incredibly iterative. Where you often see real breakthroughs is not in some atomic thing created completely new that blows everybody away, but when you get connections between two things that already exist, and you suddenly realize, "Oh wow! Peanut butter and jelly! Here we go, it's a whole new world!"
This Alexa thing, the smart assistants that are now physically manifesting themselves in the places where we spend most of our time socially—in our kitchens, in my office, where I'm at right now—they have established a beachhead in our homes.
They started on our phones, and they’re in some of our cars, and now they’re in our homes, and I think that as this web spreads, slowly, and they add more ability to these personal AI assistants, and my conversations with Alexa get more complex, and there starts to become a dialogue…
I think that slow creep is going to result in me sort of snapping to attention in five years and going, “Oh, holy crap! I just talked about what’s the best present to buy for my ten-year-old daughter with Alexa, based on the last ten years that I’ve spent ordering stuff off of Amazon, and everything she knows about me!”
That’s going to be the moment. I think it’s going to be something that creeps up on us, and it’s gonna show up in these monthly updates to these devices, as they creep through our houses, as [they] take control of more stuff in our environments, and increase their ability to interact with us at all times.
It’ll be your Weizenbaum moment.
It’ll be a relationship moment, yeah. And I’ll know right then whether I value that relationship. By the way, I just wrote a short story all about this called “Iterations”. I joined the XPRIZE Science Fiction Advisory Council, and it really focuses on optimistic futures, right? They brought together all of these science fiction authors and said, “Write some stories twenty years in the future with optimism, people—utopias—let’s do some good stuff.”
I wrote a story about a guy who comes back twenty years later, he finds his wife, and realizes that she has essentially been carrying on a relationship with an AI that’s been seeded with all of his information. She, at first, uses it as a tool for her depression at having mysteriously lost her husband, but now it’s become a part of her life. And the question in the story is, is that optimistic? Or is that a pessimistic future?
My feeling is that people use technology to survive, and we can’t judge them for it. We can’t tell them, “You’re living in a terrible dystopia, you’re a horrible person, you don’t understand human interaction because you spend all your time with a machine.” Well, no…if you’ve got severe depression, and this is what keeps you alive, then that’s an optimistic future, right? And who are we to judge?
You know, I don’t know. I keep on writing stories about it. I don’t think I’ll ever get any answers out of myself.
Isn’t it interesting that, you know, Siri has a name. Alexa—I have to whisper it, too, I have them all, so I have to watch everything that I say—that product has a name, Microsoft has Cortana, but Google is the “Google Assistant”—they didn’t name it; they didn’t personify it.
Do you have any speculation—I mean, not any first-hand knowledge—but would you have any speculation as to why that would be the case? I mean, I think Alexa is a reference to the Library of Alexandria.
Yeah, that's interesting. Well, also you want to choose a series of phonemes that are not high-frequency, because you don't want to constantly be waking the thing up. What's also interesting about Alexa is that it's a "la" sound, which is difficult for young children to manage, so kids can't actually use Alexa—I know this from extreme experience. Most of them can't say "Alexa," they say "Awexa" when they're little, and so she doesn't respond to little kids, which is crucial because little kids are the worst, and they're always telling her to play these stupid songs that I don't want to hear.
Can’t you change the trigger word, actually?
I think you can, but I think you’re pretty limited. I think you can change it to Echo.
Right.
I’m not sure why exactly Google would make that decision—I’m sure that it was a serious decision. It’s not the decision that every other company made—but I would guess that it’s not the greatest situation, because people like to anthropomorphize the objects that they interact with; it creates familiarity, and it also reinforces that this is an interaction with a person… It has a person’s name, right?
So, if you’re talking to something, what do we talk to? What’s the only thing that we’ve ever talked to in the history of humankind that was able to respond in English? Friggin’, another human being, right? So why would you call that human being “Google”? It doesn’t make any sense. Maybe they just wanted to reinforce their brand name, again and again and again, but I do think it’s a dumb decision.
Well, I notice that you give gender to Alexa, every time you refer to it.
She has a female name, and a female voice, so of course I do.
It’s still not an ‘it’.
If I was defining ‘it’ for a dictionary or something, I would obviously define the entity Alexa as an ‘it’, but the most optimal interaction I can have with her is… She’s intentionally piggybacking on human interaction, which is smart, because that’s the easiest way to interact, that’s what we have been evolved to do.
So I am more than happy to bend to her wishes and utilize my interaction with her as naturally as I can, because she’s clearly trying to present herself as a female voice, living in a box in my kitchen. And so I’m completely happy, of course, to interact with her in that way, because it’s most efficient.
As we draw to the end here, you talked about optimism, and you came to this conclusion on different ways the future may unfold… it may be hard to call the ball on whether that’s good or bad. But those nuances aside, generally speaking, are you optimistic about the future?
I am. I’m frighteningly optimistic. In everything I see, I have some natural level of optimism that is built into me, and it is often at odds with what I am seeing in the world. And yet it’s still there. It’s like trying to sit on a beach ball in a swimming pool. You can push it down, but it floats right back to the surface.
I feel like human beings make tools; that’s the most fundamental thing about people… [and] that part of making tools is being afraid of what we’ve made. That’s also a really great innate human instinct, and probably the reason that we’ve been around as long as we have been. I think every new tool we build—every time it’s more powerful than the one before it—we make a bigger bet on ourselves being a species worthy of that tool.
I believe in humanity. At the end of the day, I think that’s a bet worth making. Not everybody is good, not everybody is evil, but I think in the end, in the composition, we’re going to keep going forward, and we’re going to get somewhere, someday.
So, I’m mostly just excited, I’m excited to see what the future is going to bring.
Let’s close up talking about your books real quickly. Who do you write for? Of all the people listening, you would say, “The people that like my books are…”?
The people who are very similar to me, I guess, in taste. Of course, I write for myself. I get interested in something, I think a lot about it, sometimes I’ll do a lot of research on it, and then I write it. And I trust that someone else is going to be interested in that. It’s impossible for me to predict what people are going to want. I can’t do it. I didn’t go get a degree in robotics because I wanted to write science fiction.
I like robots, that’s why I studied robots, that’s why I write about robots now. I’m just very lucky that there’s anybody out there that’s interested in reading this stuff that I’m interested in writing. I don’t put a whole lot of thought into pleasing an audience, you know? I just do the best I can.
What’s The Clockwork Dynasty about? And it’s out already, right?
Yeah, so it’s out. It’s been out a couple weeks, and I just got back from a book tour, which is why I might be hoarse from talking about it. So the idea behind The Clockwork Dynasty is… It’s told in two parts: one part is set in the past, and the other part is set in the present. In the past, it imagines a race of humanlike machines built from automatons that are serving the great empires of antiquity, and they’re blending in with humanity, and hiding their identity.
And then in the present day, these same automatons are still alive, and they’re running out of power, and they’re cannibalizing each other in order to stay alive. An anthropologist discovers that they exist, and she goes on this Indiana Jones-style around-the-world journey to figure out who made these machines in the distant past, and why, and how to save their race, and resupply their power.
It’s this really epic journey that takes place over thousands of years, and all across Russia, and Europe, and China, and the United States… and I just had a hell of a good time writing it, because it’s all my favorite moments of history. I love clockwork automatons. I’ve always loved court automatons that were built in the seventeenth century, and around then… And yeah, I just had a great time writing it.
Well I want to thank you so much for taking an hour, to have maybe the most fascinating conversation about robots that I think I’ve ever had, and I hope that we can have you come back another time.
Thank you very much for having me, Byron. I had a great time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Visit VoicesInAI.com to access the podcast, or subscribe via iTunes, Play, Stitcher, or RSS.
lindyhunt · 6 years
Text
The Best Editing Apps for Photos
By now, it's clear that creating great visual content is critical for marketers -- and that's especially true on social media.
As of 2017, Instagram had doubled its monthly active user base over the previous two years, which means a lot more people are viewing and sharing photos in 2018. Plus, visual content is 40X more likely to get shared on social media than other types.
In other words, people like to be shown, not told -- and in turn, they share.
For that reason, it's important for marketers to know how to create compelling photos for their business' social networks and blogs. And while it can be worth investing time and money in photo editing software on your computer, many of us take pictures exclusively on our phones and could stand to edit them without uploading them to a desktop. That's especially true when you're posting in real time, like at an event. Luckily, there are plenty of great photo editing apps out there for mobile devices -- many are free, and the paid ones cost just a few bucks. But which apps are the best?
Check out our short list of 12 apps below -- organized by apps that are compatible with both iOS and Android, apps offered just for the iPhone, and apps designed to edit face-focused photos.
Inexpensive and Free Photo Editing Apps for iOS and Android
1. Afterlight
$0.99 | iOS | Android | Windows
There was a time when I was a bit more old-school in my photo editing. I counted on Instagram tools alone, sometimes combining them with the "enhance" feature on my phone's Photos app. Then, I learned about Afterlight -- a somewhat rudimentary tool, but one that has all the features you need to do a basic photo edit.
From controlling the color tones, to adjusting exposure and brightness, to rotating and straightening a photo, it has everything you need for lighting or color fixes. It also contains 74 filters, including a Fusion feature that lets you mix tools, filters, and textures to create your own personal look. Into frames? Afterlight has a whopping 128 to choose from, boasting a perfect pairing with Instagram.
My favorite tools, though, have to be the ones for Brightness and Shadows. Some pictures do well with a decrease in shadows and an increase in brightness for a cleaner, fresher look. But flipping those around can also create a more mysterious, nighttime feel -- that's what I did with this photo of tree ornaments.
2. VSCO Cam
Free | iOS | Android
Over the past few years, VSCO Cam has become a highly popular photo editing app for mobile. While it does boast a wider set of editing tools than most other editing apps, its main claim to fame is its filters.
These filters have a softer, more authentic look that resembles real film, compared to the over-saturated looks of many Instagram filters. Plus, it's great for when you need to edit a photo on the fly. Simply upload the photo to VSCO Cam, slap on one of the great filters -- I used C1 below -- and call it a day. (There are more filters available for purchase, too.)
3. Photoshop Express
Free | iOS | Android
Believe it or not, Adobe Photoshop isn't just for your computer. Adobe Photoshop Express puts most of what people love about Adobe's popular photo editing program in their pockets -- lighting, color, and sharpness options included.
Photoshop Express is especially useful for making photo collages -- something the app's developers likely highlighted for mobile users who want to share many photos at once on Facebook or Instagram. The app's "Decorate" setting even allows you to annotate your photo with digital stickers before saving and posting directly on social media.
Although this photo editor makes Photoshop's best features easily accessible, keep in mind it does carry the natural limitations of a mobile app. Specifically, you can only upload JPG files smaller than 16 megapixels (MP).
Nonetheless, what it does on a smaller platform is still super impressive. You should also try similar Adobe photo editing apps such as Adobe Lightroom and Adobe Capture.
Here's how Photoshop Express helps you choose different collage orientations for multiple photos:
4. Snapseed
Free | iOS | Android
Snapseed is another app that's great for basic image enhancements. It's got all the classic adjustment tools, such as tuning, cropping, and straightening. Plus, its sharpening tool is one of the best we've seen -- it really does enhance a photo's detail, without making it look grainy, like many other photo sharpening adjusters out there.
But what makes this tool particularly unique is its "Selective Adjust" tool. It allows you to pinpoint an area in a photo and adjust the brightness, contrast, and saturation of that single point. So if you want viewers to focus on a certain part of your photo -- say, the buds in the center of a plant -- then you can make the buds more vivid.
Want more help with Snapseed? Google, the maker of the app, created a dedicated support page with tips and instructions.
5. SKRWT
$0.99 | iOS | Android
Ever taken a picture of something straight-on -- a doorway, a building, your food -- and found the perspective was just a little bit askew or tilted? The SKRWT app lets you adjust the perspective of your photos to make the lines look clean and square.
Have a look at what I was able to do with a simple window shot.
Before:
After:
At first, the "before" image doesn't look that skewed, but seeing the "after" version really shows what a difference symmetry can make. If it bugs you to see a photo that's slightly at an angle, then this app is well worth the dollar.
6. Live Collage
Free | iOS | Android
Collages made on Photoshop Express can be great, whether it's to show a comparison (like a before-and-after series), or to highlight multiple photos from the same event or theme. But our favorite photo collage app is Live Collage, mostly because of its wide variety of layouts. It contains several options for photo organization, both classic and fun, with interesting and colorful backgrounds. Plus, you can add customized text in different fonts, colors, and sizes.
If you're strapped for time, there are basic photo editing options within the app, too, making it a handy one-stop shop.
7. Foodie
Free | iOS | Android
If you're anything like I am, your personal social media feeds are loaded with images of food. It's no wonder that food-specific apps are coming out of the woodwork to make photos look even more delectable.
Called out by Bustle for taking "food pictures to some next level gorgeous," Foodie uses more than 30 filters and other editing features to turn what might otherwise be a humdrum snack into a visual feast.
When I applied the CR2 filter to a photo of chocolate candy, this was the result:
Best Photo Editing Apps for iPhone
8. Camera+
$2.99 | iOS only
With the highest price tag on the list, you have to wonder what makes Camera+ so special. When it was first released, Lifehacker called it "The Best Camera App for iPhone," with TIME writing, "If the iPhone's standard camera is like a digital point-and-shoot, the Camera+ app is like a high-quality SLR lens."
While the app has many of the classic photo editing tools like color tints, retro effects, and crops, there are a few gems that make it unique. First is its image stabilizer, which helps you capture the sharpest photos possible before you even take a picture. It also lets you zoom in up to 6X, which can really up the quality of your shot if you're trying to home in on something far away.
Finally, its Clarity filter is what The Wall Street Journal's Kevin Sintumuang calls its "secret sauce -- it adds pro-camera crispness to almost any shot." I'd have to agree -- just check out how it enhanced this photo of my dog.
9. Mextures
$0.99 | iOS only
Mextures is one of the more advanced apps on this list -- and its crown jewel is layer-based editing. That allows users to stack different adjustment layers on top of each other, moving and editing them individually, allowing for nearly limitless creativity. You can also apply multiple filters, textures, and blending models to the same photo to create a really unique look. If you find an editing formula you really like, you can save it to apply to other photos later, or even share it with your friends.
Here's what happened when I took a simple photo of candlesticks on a plain white background and applied three enhancements -- the Waterfront overlay, the Bokeh Baby overlay, and the Color Dodge blending mode.
10. Enlight
$3.99 | iOS only
I'll just say how I feel: This app is incredible.
Winner of the Apple Design Award in 2017, Enlight will change the way you see even the most ordinary picture the next time you open your iPhone camera. Among its 10 different photo editing features, the app's Photo Mixer allows you to blend multiple photos together -- or combine a photo with text -- for a super artistic result.
According to Les Shu of Digital Trends, Enlight is "a powerful Photoshop-like app, minus the steep learning curve."
Check out a stunning example of what the app's Photo Mixer can do below.
Best Face Editing Apps
11. Facetune
$3.99 on iOS | $5.99 on Android
Never take a selfie you don't like again. Facetune is considered the top photo app in more than 120 countries, allowing you to make up for unflattering mobile photos with professional-level corrections to numerous facial features.
The app offers eight different types of corrections and enhancements to a person's face in a given photo -- including to the hair, eyes, skin, and smile. Taking a new professional headshot? I highly recommend you touch it up in the Facetune app before adding the photo to your LinkedIn profile (not that I don't think you're beautiful already).
Here's just one example of a skin tone correction done with Facetune, making all the difference:
12. Visage Makeup Editor
Free | iOS | Android
Disclaimer: There's absolutely nothing wrong with under-eye circles. We all have them, and we sometimes wear them like medals. (We do, however, take issue with and don't recommend a lack of sleep.)
That said, when it comes to sharing photos of ourselves on social media, vanity sometimes enters the picture. Sound familiar? There's an app for that.
We like the Visage makeup editor, which instantly retouches photos and lets you add some special effects, like a "Pop Art Style" filter that can make your selfie look slightly Warhol-esque. The app comes equipped with some interesting backgrounds, as well as lighting and color features, with more available for purchase.
The only drawback? The free version is a bit ad-heavy, and unless you upgrade to pro, your finished product will be stuck with a branded hashtag at the bottom.
Now Comes the Fun Part
See how easy it is to create and share visual content? Of course, mastering these apps will require a bit of practice, but if you're unsure where to start, just look around you -- that's what we did when we tried each of them.
Think about your marketing goals for this year. Then, ask yourself what kind of photos will help you accomplish them. From there, you can pick and choose the best apps from this list.
So start getting visual. We can't wait to see what you create.
techscopic · 7 years
Text
Voices in AI – Episode 15: A Conversation with Daniel H. Wilson
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Daniel talk about magic, robots, Alexa, optimism, and ELIZA.
Visit VoicesInAI.com to access the podcast, or subscribe via iTunes, Play, Stitcher, or RSS.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today, our guest is Daniel Wilson. He is the author of the New York Times best-selling Robopocalypse and its sequel, Robogenesis, as well as other books, including How to Survive A Robot Uprising, A Boy And His Bot, and Amped. He earned a PhD in Robotics from Carnegie Mellon University, and a Master's degree in AI and Robotics as well. His newest novel, The Clockwork Dynasty, was released in August 2017. Welcome to the show, Daniel.
Daniel H Wilson: Hi, Byron. Thanks for having me.
So how far back—the earliest robots—I guess they began in Greek myth, didn’t they?
Yeah, so it’s something I have been thinking a lot about, because automatons play a major part in my novel The Clockwork Dynasty. I started thinking about, how far back does this desire to build robots, or lifelike machines, really go? Yeah, and if you start to look at history, you’ll see that we have actual artifacts from the last few hundred years.
And before that, we have a lot of stories. And before that, we have mythology, and it does go all the way back to Greek mythology. People might remember that Hephaestus supposedly built tripod robots to serve the gods on Mount Olympus.
They had to chain them up at night, didn’t they, because they would wander off?
I don’t remember that part, but it wouldn’t surprise me. Yeah, that was written somewhere. Someone reported that they had visited, and that was true. I think there was the giant bronze robot that guarded… I think it was Crete, that was called Talos? That was another one of Hephaestus’s creations. So yeah, there are stories about lifelike machines that go all the way back into prehistory, and into mythology.
I think even in the story of Prometheus, in its earliest tellings, it was a robot eagle that actually flew down and plucked his liver out every day?
Oh, really… I didn’t remember that. I always, of course, loved the little robots from Clash of the Titans, you know the robot owl… do you remember his name?
No.
Bobo, or something.
That’s funny. So, those were not, even at the time, considered scientific devices, right? They were animated by magic, or something else. Nobody looked at a bunch of tools and thought, “A-ha, I can build a mechanical device here.” So where do you think it came from?
Well, you know, I think obviously human beings are really fascinated with themselves, right? You think about Galatea… and creating sculptures, and creating imitations of ourselves, and of animals, of course. It doesn’t surprise me at all that people have been trying to build this stuff for a really long time; what is kind of interesting to consider is to look at how it’s evolved over centuries and centuries.
Because you’re right; one thing that I have found doing research for this novel is that—it’s really fascinating to me—our concept of the scientific method, and the idea of the world as a machine, and that we can pick up the pieces and build new things. And we can figure out underlying physical principles, and things like that. That’s a relatively new viewpoint, which human beings haven’t really had for that long.
Looking at automatons, I saw that there's this sort of pattern, in that the longer we build these things, the more they really are living embodiments of the world as a machine, right? If you start to look at the automatons being built during the middle ages, the medieval times, and then up through to the beginning of the industrial revolution… you see that people like Descartes, and philosophers who really helped us, as a civilization, solidify our viewpoint of the way nature works, and the way that science works… They were inspired by automatons, because they showed a living embodiment of what it would be like if an animal were made out of parts.
Then you go and dissect a real animal, and you start to think, “Wait, maybe I can figure this out. Maybe it’s not just, ‘God created it, walk away from it; it is what it is.’” Maybe there’s actually some rule or rhyme under this, and we can figure it out. I think that these kinds of machines actually really helped propel our civilization towards the technological age that we live in right now, because these philosophers were able to see this playing out.
Sorry, not to prattle on too long, but one thing I also really love about, specifically, medieval times is [that] the notions of how this stuff works were very set down, but they were also very magical, right? There were different types of magic, that’s what I really loved in my research. Finding that whenever you see something like an aqueduct functioning, they would think of that as a natural kind of magic, whereas if you had some geometry, or pure math, they would think of that as a celestial type of magic.
But underneath all of it [there] were always angels or demons, and there was always the suspicion of necromancy: that this lifelike thing was animated by a spirit of the dead, you know. There's so much magic and mystery that was laced into science at the time, that I think it really hindered the ability to develop iterative scientific advancements.
So picking up on that a bit, early nineteenth century, you've got Frankenstein. Frankenstein was a scientific creation, right? There was nothing magical about that. Can you think of an example before Frankenstein where the animating force was science-based?
The animating force behind some kind of creature, or like lifelike automaton? Yeah, I really can’t. I can think of lots of examples of stuff like Golem, or something like that, and they are all kind of created by magic, or by deities. I’m trying to think… I think that all of those ideas really culminated right around the time of the Industrial Revolution, and that was really reflective of their time. Do you have any examples?
No. What do you know about Da Vinci’s robot?
Not much. I know that he had a lot of sketches for various mechanical devices.
He, of course, couldn’t build it. He didn’t have the tools, but obviously what Da Vinci would have made would have been a purely scientific thing, in that sense.
Sure, but even if it were, that doesn’t mean that other people wouldn’t have applied the mindset that, whatever his inventions were, [they] were powered by natural magic, or some kind of deity or spirit. It’s kind of funny, because people back then were able to completely hold both of those ideas in their heads at once.
They could completely believe the idea that whatever they were creating was magical, and at the same time, they were doing science. It’s such an interesting thing to contemplate, being able to do science from that mentality.
Let’s go to the 1920s. Talk to us about the play that gives us the word ‘robot’.
Wow, this is like a quiz. This is great. So, you’re talking about R.U.R., the Čapek play. Yeah, Rossum’s Universal Robots—it’s a play from the ’20s in which, you know, a scientist creates a robot, and a race of robots. And of course, what do they do, they rise up and overthrow humanity and they kill every single one of us. It’s attributed as being the place where the term ‘robot’ was coined, and yeah, it plays out in the way that a lot of the stories about robots have played out, ever since.
One of the things that is interesting about R.U.R. is that, so often, we use robots differently in our stories, based on whatever the context is, of what’s going on in the world at the time, because robots are really reflections of people. They are kind of this distorted mirror that we hold up to ourselves. At that time, you know, people were worried about the exploitation of the working class. When you look at R.U.R., that’s pretty much what those robots embodied.
They are the children of men, they are working class, they rise up and they destroy their rulers. I think the lesson there was clear for everybody in the 1920s who went to go see that play. Robots represent different things, depending on what’s going on. We’ve seen lots of other killer robots, but they’ve embodied or represented lots of other different evils and fears that we have, as people.
Would you call that 1920s version of a robot as a fully-formed image, the way we kind of think of them now? What would have been different about that view of robots?
Well, no. Those robots… I don’t think they were even… they just looked like people. I don’t even think there was the idea that they were made of metal, or anything like that. I think that that sort of image of the pop culture robot evolved more in the ’40s, ’50s, and ’60s, with pulp science fiction, when we started thinking of them as “big metal men”—you know, like Gort from The Day the Earth Stood Still, or Robby, or all of these giant hunks of metal, you know, with lights and things on them, that are more consistent with the technology of that time, which was the dawn of rocket ships and stuff like that, and that kind of science fiction.
From what I recall, in R.U.R., they aren’t mechanical at all. They are just like people, except they can’t procreate.
Here’s what I’m struck by, that just… the reason why I ask you if you thought they were fully modern: Let me just read you this quote from the play, and tell me what it sounds like to you… This is Harry Domin, he’s one of the characters, and he says:
“In ten years, Rossum’s Universal Robots will produce so much corn, so much cloth, and so much of everything that things will be practically without price. There will be no poverty, all work will be done by living machines. Everyone will be free from worry, and liberated from the degradation of labor. Everyone will live only to perfect himself.”
Yeah, it’s like…a utopian post-economy. Of course, it’s built on the back of slaves, which I think is the point of the play. Yeah…we’re all going to have great lives, and we’re going to be standing right on the throats of this race of slaves that are going to sacrifice everything so we can have everything, right?
I guess I am struck by the fact that it seems very similar to what people’s hope for automation is right now. “The factories will run themselves.” Who was it that said, “The factory of the future will only have two employees—a man and a dog. The man’s job will be to feed the dog, and the dog’s job will be to keep the man from punching the machines.”
I’ve been cooking up a little rant about this, lately. Honestly, I might as well launch into it. I think that’s actually a really naïve and childish view of a future. I’m starting to realize it more and more as I see the technology that we are receiving. This is sort of the first fruit, right?
Because we’ve only just gotten speech recognition to a level that’s useful, and gesture recognition, and maybe a little bit of natural language, and some computer vision, and then just general AI pattern recognition—we’re just now getting useful stuff from that, right?
We’re getting stuff like Alexa, or these mapping algorithms that can take us from one place to another, and Facebook and Twitter are choosing what they think would be most interesting to us, and I think this is very similar to what they’re describing in R.U.R., is this perfect future where we do nothing.
But doing nothing is not perfect. Doing nothing sucks. Doing nothing robs a person of all their ability and all their potential—it’s not what we would want. But a child, a person who just stumbled upon a treasure trove of this stuff, that’s what they’d think; that’s like the first wish you’d make, that would then make the rest of your life hell.
That’s what we are seeing now, what I’ve been calling the ‘candy age’ of artificial intelligence, where people—researchers and technologists—are going, “What do people want? Let’s give them exactly what they say they want.”
Then they do, and then we don’t know how to get around in the cities that we live, because we depend on a mapping algorithm. We don’t know the viewpoints that our neighbors have, because we’ve never actually read an article that doesn’t tell us exactly what our worldview already is… there are a million examples. Talking to Alexa, I don’t have to say ‘please’ or ‘thank you’. I just order it around, and it does whatever I say, and delivers whatever I ask for.
I think that, and hope that, as we get a little bit more of a mature view on technology, and as the technology itself matures, we can reach a future in which the technology doesn’t deliver exactly what we want, exactly when we want it, but the technology actually makes us better, in whatever way it can. I would prefer that my mapping algorithm not just take me to my destination, I want it to help me know where stuff is myself. I want it to teach me, and make me better.
Not just give me something, but make me better. I think that, potentially, that is the future of technology. It’s not a future where we’re all those overweight, helpless people from Wall-E, you know… leaning back in floating chairs, and doing nothing, and totally dependent on a machine. I think it’s a future where the technology makes us stronger, and I think that’s a more mature worldview and idea of the future.
Well, you know, the quote that I read though, he said that “everybody will spend their time perfecting themselves.” And I assume you’ve seen Star Trek before?
Sure, yes.
There's an episode where the Enterprise thaws some people out from the twentieth century, and one of them, a guy named Offenhouse, asks what the challenge is in a world where there's no material need or hunger, and all of that. And Picard says the challenge is to become a better person, and make the most of it. So that's also part of the narrative as well, right?
Yeah, and I think that slots in kind of well with the Alexa example, you know? Alexa is this AI that Amazon has built that—oh God, and mine’s talking to me right now because I keep saying her name—it’s this AI that sits in your house and you tell it what to do, and you don’t have to be polite to it. And this is kind of interesting to contemplate, right?
If your future with technology is a place where you are going to hone your sense of being the best version of yourself that you can be, how are you going to do that if you’re having interactions with lifelike machines in which you don’t have to behave ethically?
Where it’s okay to shout at Alexa, who—sorry, I’ve got to whisper her name—who, by the way, sounds exactly like a woman, and has a woman’s voice, and is therefore implicitly teaching you via your interaction with her that it’s okay to shout at that type of a voice.
I think it’s going to be, not a mutually exclusive thing, where the machines take over everything and you are free to be by yourself… because technology is a huge part of our life. We are going to have to work with technology to be the best versions of ourselves. I think another example you can find easily is just looking at athletes.
You don’t gauge how fast a runner is by putting them on a motorcycle; they run. They’re human. They are perfecting something that’s very human. And yet, they are doing it in concert with extreme levels of technology, so that when they do stand on the starting mark, ideally under the same conditions that every other human has stood on a starting mark for the last, however long, and the pistol goes off, and they start running, they are going to run faster than any human being who ever ran before.
The difference is that they are going to have trained with technology, and it’s going to have made them better. That’s kind of the non-mutually-exclusive future that I see, or that I end up writing science fiction about, since I’m not actually a scientist and I don’t have to do any of this stuff.
Let me take that idea and run with it just a minute. Just to set this up for the listener, in the 1960s, there was a man named Weizenbaum, who wrote a program named ELIZA. ELIZA was kind of a therapy bot—I guess we would think of it now—and you would say something like, “I’m having a bad day,” and it says, “Why are you having a bad day?” And you would say, “I’m having a bad day because of my boyfriend,” and it says, “What about your boyfriend is making you have a bad day?”
It's really simple, and uses a few linguistic rules. And Weizenbaum saw people engaging with it, and even though they knew it was a machine, he saw them form an emotional attachment—they would pour their hearts out to it, they would cry. And he turned against AI, as it were. He deleted ELIZA and said that when the computer says, "I understand," it's just a lie, because there's no "I" and no understanding.
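For readers who have never seen ELIZA's trick up close, here is a minimal sketch, in Python rather than the MAD-SLIP Weizenbaum actually wrote in, of the kind of keyword-and-template rules it relied on. The RULES table and respond function below are illustrative inventions, not Weizenbaum's original DOCTOR script:

import re

# Illustrative ELIZA-style rules: each pairs a keyword pattern with a
# response template that reflects part of the user's statement back.
# These specific patterns are hypothetical, not Weizenbaum's originals.
RULES = [
    (re.compile(r"(?:i am|i'm) having a bad day because of (.*)", re.I),
     "What about {0} is making you have a bad day?"),
    (re.compile(r"(?:i am|i'm) having a bad day", re.I),
     "Why are you having a bad day?"),
    (re.compile(r"(?:i am|i'm) (.*)", re.I),
     "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    # Try the most specific rule first; fall back to a generic prompt.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, tell me more."

print(respond("I'm having a bad day"))
print(respond("I'm having a bad day because of my boyfriend"))

Rule order is the only "intelligence" here: the more specific pattern is checked first, and nothing in the program models what any of the words mean, which is exactly the emptiness Weizenbaum was pointing at.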
He distinguished between choosing and deciding. He said, “Deciding is something a computer can do, but choice is a human thing.” He was against using computers as substitutes for people, especially anything that involved empathy. Is your observation about Alexa that we need to program it to require us to say please, or we need to not give it a personality, or something different?
Absolutely, we need to just figure out ethical interactions and make sure that our technology encourages those. And it’s not about the technology. No one cares about whether or not you’re hurting Alexa’s feelings; she doesn’t have any feelings. The question is, what kind of interactions are you setting up for yourself, and what kind of behaviors are you implicitly encouraging in yourself?
Because we get to choose the environments that we are in. The difference between when ELIZA was written and now is that we are surrounded by technology. Every minute of our lives involves technology. At that time, you could say, "Oh, let's erase the program, this is sick, this is messed up." Well, guess what, man, that's not the world anymore.
Every teenager has a real social network, and then they have a virtual social network, that’s bigger and stranger and more complex… and possibly more rewarding than the real people that are out there. That’s the environment that we live in now. It’s not a choice to say “turn it off,” right? We’re too far. I think that the answer is to make sure that technologists remember that this is a dimension that they have to consider while they create technology.
That’s kind of a new thing, right? We didn’t have to use to worry about consumer products: Are people going to fall in love with a toaster? Are people going to get upset when the toaster goes kaput, are people going to curse at the toasters and become worse versions of themselves? That wasn’t an issue then, but it is an issue now, because we are having interactions with lifelike artifacts. Therefore, ethical dimensions have to be considered. I think it’s a fascinating problem, and I think it’s something that is going to really make people better, in the end.
Assuming we do make machines that simulate emotions—you can have a bot best friend, or what have you—do you think that that is something that people will do, and do you think that that is healthy, and good, and positive?
It’s going to be interesting to see how that shakes out. Talking in terms of decision versus choice; one thing that’s always stuck with me is a moment in the movie AI, when Gigolo Joe—who is exactly what he sounds like, and he’s a robot—he looks this woman in the eyes, and he says, “You are the most beautiful woman in the world.” Immediately, you look at that, and you go, he’s just a robot, that doesn’t mean anything.
He just said, “You’re the most beautiful woman in the world,” but his opinion doesn’t mean anything, right? But then you think about it for another second, and you realize, he means it. He means that with every fiber of his being, and there’s no human alive, that could probably look at that exact woman, at that exact moment, and say, “You’re the most beautiful woman alive,” and really mean it. So, there’s value there.
You can see how that value exists when you see complete earnestness versus how a wider society might attribute a zero value to the whole thing, but at least he means it. So yeah, I can kind of see both sides of this. I’m judging now from the environment that I live in right now, the context of the world that I have; I don’t think it would be a great idea. I wouldn’t want my kids to just have virtual friends that are robots, or whatever, but you never know.
I can’t make that call for people twenty years from now. They could be living in a friggin’ apocalypse, where they don’t have access to human beings and the only thing that they’ve got are virtual characters to be friends with. I don’t know what the future is going to bring. But I can definitely say that we are going to have interactions with lifelike machines, there are going to be ethical dimensions to those interactions; technologists had better figure out ways to make sure those interactions make us better people, and not monsters.
You know, it’s interestingly an old question…you remember that original Twilight Zone show, about the guy who’s on the planet by himself—I think he’s in prison—and they leave him a robot. He gets a pardon, or something, and they go to pick him up, and they only have room for him, not the robot, and he refuses to leave the robot.
So, he just stays alone on the planet. It’s kind of interesting that fifty years ago, we looked ahead and that was a real thing that people thought about—are synthetic emotions as valuable to a human as real ones? I assume you think we are definitely going to face that—as a roboticist—we certainly are going to build things that can look you in the eye, and tell you that you are beautiful, in a very convincing way.
Yes. I have a very humanist kind of viewpoint on this. I don’t think technology means anything without people. I think that technology derives its value entirely from how much it matters to human beings. The part of me that gets very excited about this idea of the robot that looks you in the eye and says, “I love you”—I’m not interested in replacing human relationships that I have. I don’t know how many friends you have, but I have a couple of really good friends.
That’s all I can handle. I have my wife, and my kids, and my family. I think most people aren’t looking to add more and replace all their friends with machines, but what I get excited about is how storytelling is going to evolve. Because all of us are constantly scouring books and movies and television, because we are looking for glimpses of those kinds of emotional interactions and relationships between people, because we feed on that, because we are human beings and we’re designed to interact with each other.
We just love watching other human beings interact with each other. So—having written novels and comic books and screenplays and the occasional videogame—I can’t wait to interact with these types of agents in a storytelling setting, where the game, where the story, is literally human interaction.
I’ve talked about this a little bit before, and some examples I’ve cooked up, like… What if it’s World War I, and you’re in No Man’s Land, and there are mortars streaking out of the sky, blowing up, and your whole job for this story is to convince your seventeen-year-old brother to get out of the crater and follow you to the next crater before he gets killed, right? The job is not to carry a videogame gun and shoot back.
Your job is to look him in the eye, and beg him, and say, “I’m begging you, you have to get up, you have to be strong enough to come with me and go over here, I promised mom you would not die here!” You convince him to get up and go with you over the hill to the next crater, and that’s how you pass that level of that story, or that’s how you move through that storytelling world.
That level of human interaction with an artificial agent, where it’s looking at me, and it can tell whether I mean it, and it can tell if there’s emotion in my voice, and it can tell if I’m committed to this, and it can also reflect that back to me accurately, through the actions of this artificial agent… Man, now that is going to be a really fascinating way to engage in a story. And I think, it has—again, like I’ve been harping on—it has the ability to make people better through empathy, through sharing situations that they get to experience emotionally, and then understand after that.
Thinking about replacing things is interesting, and often depressing. I think it’s more interesting to think about how we are going to evolve, and try out new things, and have new experiences with this type of technology.
Let’s talk a little bit about life and intelligence. So, will the robots be alive? Do you think we are going to build living machines… And by asking you the question, I am kind of implicitly asking you to define life.
Sorry, let’s back up. The question is: Do we think we’re going to build perfectly lifelike machines?
No. Will we build machines that are alive—whether they look human or not, I’m not interested in. Will there be living machines?
That’s interesting, I mean—I only find that interesting in a philosophical way to contemplate. I don’t really care about that question. Because at the end of the day, I think Turing had it right. If we are talking about human-like machines, and we are going to consider whether they are alive… which would probably mean that they need rights, and things like that… then I think the proof is just in the comparison. I’m making the assumption that every other human is conscious.
I’m assuming that I’m conscious, because I’m sitting here feeling what executive function feels like, but, I think that that’s a fine hoop to jump through. Human-like level of intelligence: it’s enough for me to give everyone else the benefit of the doubt, it’s enough for them to give me the benefit of the doubt, so why wouldn’t I just use that same metric for a lifelike machine?
To the extent that I have been convinced that I’m alive, or that anybody is alive, I’m perfectly willing to be convinced that a machine is alive, as well.
I would submit, though, that it is the farthest thing from a philosophical question, because, as you touched on, if the machine is alive, then it has certain rights? You can’t have it plunge your toilet, necessarily, or program it to just do your bidding. If it isn’t, like…  Nobody thinks the bots we have now are alive. Nobody worries—
—Well, we currently don’t have a definition of ‘life’ that everyone agrees on, period. So, throwing robots into that milieu, is just… I don’t know…
We don’t have to have a definition. We can know the endpoints, though. We know a rock is not alive, and we know a human is alive. The question isn’t, are robots going to walk in some undefined grey area that we can’t figure out; the question is, will they actually be alive? And if they’re alive, are they conscious?
And if they’re conscious, then that is the furthest thing from a philosophical question. It used to be a philosophical question, when you couldn’t even really entertain the question, but now…
I’m willing to alter that slightly. I’ll say that it’s an academic question. If the first thing that leads off this whole chain is, “Is it alive?” and we have not yet assigned a definition to that symbol—A-L-I-V-E—then it becomes an academic discussion of what parameters are necessary in order to satisfy the definition of ‘alive’.
And that is not really very interesting. I think the more interesting thing is, how are we actually going to deal with these things in our day-to-day lives? So, in a very practical, concrete manner, like… I walk up to a robot, the robot is indistinguishable from a human being—which, that’s not a definition of alive, that’s just a definition—then how am I going to behave? What’s my interaction protocol going to be?
That’s really fun to contemplate. It’s something that we are contemplating right now. We’re at the very beginning of making that call. You think about all of the thought experiments that people are engaging in right now regarding autonomous vehicles. I’ve read a lot lately about, “Okay, we got a Tesla here, it’s fully autonomous, it’s gotta go left or right, can’t do anything else… There’s a baby on the left, and an elderly person on the right. What are we going to do? It’s gotta kill somebody; what’s going to happen?”
The fact is, we don’t know anything about the moral culpability, we don’t know anything about the definitions of life or of consciousness, but we’ve got a robot that’s going to run over something, and we’ve got to figure out how we feel about it. I love that, because it means that we are going to have to formalize our ethical values as a society.
I think that’s something that’s very good for us to consider, and we are going to have to pack that stuff into these machines, and they are going to continue to evolve. My feeling is that I hope that by the time we get to a point where we can sit in armchairs and discuss whether these things are alive, they’ll of course already be here. And hopefully, we will have already figured out exactly how we do want to interact with these autonomous machines, whether they are vehicles or human-like robots, or whatever they are.
We will hopefully already have figured that out by the time we smoke cigars and consider what ‘aliveness’ is.
Let me try it again, if ‘aliveness’ isn’t the thing. So, I asked the question because… up until the 1990s, veterinarians were taught not to use anesthetic when they operated on animals. The theory was—
—And on babies. Human babies. Yeah.
Right. That was scientific consensus, right? The question is, how would we have known? Today, we would look at that and say, “That dog really looks like it’s hurting.” Therefore, we would be intensely curious to know whether it is. And of course we call that sentience, the ability to sense something, generally pain, and we base our laws on it.
Human rights arrived, in part, because we are sentient… and animal cruelty [laws], because the animals are [sentient]. And yet, we don’t get in trouble for using antibiotics on bacteria, because they are not deemed to be sentient. So all of a sudden we are going to be confronted by something that says, “Ouch, that hurt.” And either it didn’t, and we should pay that no mind whatsoever, or it did hurt, which is a whole different thing.
To say, “Let’s just wait until that happens, and then we can sit around and discuss it academically” is not necessarily what I’m asking—I’m asking how will we know when that moment changes? It sounds like you are saying, we should just assume, if they say they hurt, we should just assume that they do.
By extension, if I put a sensor on my computer, and I hold a match to it, and it hits five hundred degrees, and it says “ouch,” I should assume that it is in pain. Is that what you’re saying?
No, not exactly. What I’m saying is that there is going to be a lot of iterations before we reach a point where we have a perfectly lifelike robot that is standing in front of you and saying, “Ouch.” Now, what I said about believing it when it says that, is that I hold it to the same bar that I hold human beings to: which is to say, if I can’t tell the difference between it and a human being, then I might as well give it the benefit of the doubt.
That’s really far down the line. Who knows, we might not ever even get there, but I assume that we would. Of course, that’s not the same standard that I would hold a CPU to. I wouldn’t consider the CPU as feeling pain. My point is, every iteration that we have, until we reach that perfectly lifelike human robot that’s standing in front of us and saying, “You hurt my feelings, you should apologize,” is that the emotions that these things exhibit are only meaningful insomuch as they affect the human beings that are around them.
So I’m saying, to program a machine that says, “Ouch, you hurt my feelings, apologize to me,” is very important, as long as it looks like a person. And there is some probability that, by interacting with it as a person, I could be training myself to be a serial killer without knowing it, if it didn’t require that I treat it with any moral care.
Is that making any sense? I don’t want to kick a real dog, and I don’t want to kick a perfectly lifelike dog. I don’t think that’s going to be good for me.
Even if you can argue that one dog doesn’t feel it, and the other dog does. In the case that one of the dogs is a robot, I don’t care about that dog actually getting hurt—it’s a robot. What I care about is me training myself to be the sort of person who kicks a dog. So I want that robot dog to not let me kick it—to growl, to whimper, to do whatever it does to invoke whatever the human levers are that you pull in order to make sure that we are not serial killers… if that makes any sense.
Let me ask in a different way, a different kind of question. I call a 1-800 number of my airline of choice, and they try to route me into the automated system, and I generally hit zero, because… whatever.
I fully expect that there is going to be a day, soonish, where I may be able to chat with a bot and do some pretty basic things without even necessarily knowing that it’s a bot. When I have a person that I’m chatting with, and they’re looking something up, I make small talk, ask about the weather, or whatnot.
If I find myself doing that, and then, towards the end of the call, I figure out that this isn’t even a person, I will have felt tricked, and like I wasted my time. There’s nothing there that heard me. We yell at the TV—
—No. You heard you. When you yell at the TV, you yell for a reason. You don’t yell at an empty room for no reason, you yell for yourself. It’s your brain that is experiencing this. There’s no such thing as anything that you do which doesn’t get added up and go into your personality, and go into your daily experiences, and your dreams, and everything that eventually is you.
Whatever you spend your time doing, that’s going to have an impact on who you are. If you’re yelling at a wall, it doesn’t matter—you’re still yelling.
Don’t you think that there is something different about interacting with a machine and interacting with a human? We would by definition do those differently. Kicking the robot dog, I don’t think that’s going to be what most people do. But if the Tesla has to go left or go right, and hit a robot dog or a real dog… You know which way it should go, right?
Clearly the Tesla, we don’t care what decision it makes. We’re not worried about the impact on the Tesla. The Tesla would obviously kill a dog. If it was a human being who had a choice to kill a robot dog or a real dog, we would obviously choose the robot dog, because it would be better for the human being’s psyche.
We could have fun playing around with gradations, I guess. But again, I am more interested in real practical outcomes, and how to make lifelike artifacts that interact with human beings ethically, and what our real near-term future with that is going to look like. I’m just curious, what’s the future that you would like to see? What kind of interactions would you prefer to have—or none at all—with lifelike machines?
Well, I’m far more interested—like you—with what’s going to happen, and how we are going to react to it. It’s going to be confusing, though, because we’re used to things that speak in a human voice being a human.
I share some of Weizenbaum’s unease—not necessarily to the same extent—but some unease that if we start blurring the lines between what’s human and what’s not, that doesn’t necessarily ennoble the machine. It may actually be to our own detriment. We’ve had to go through thousands of years of civilization to get something we call human rights, and we have them because we think there is something uniquely special about humans, or at least about life.
To just blithely say, “Let’s start extending that elsewhere,” I think it diminishes and maybe devalues it. But, enough with that; let me ask you a different one. What do you see? You said you’re far more interested in what are we going to do with these… what does the near-future hold. So, what does the near future hold?
Well, yeah, that’s kind of what I was ranting about before. Exactly what you were saying; I really agree with you strongly that these interactions, and what happens with us and our machines, put a lot of power in the hands of the people who make this technology. Like this dopamine-reflex, mouse-pushing-the-cocaine-button way that we check our smartphones; that’s really good for corporations. That’s not necessarily great for individuals, you know?
That’s what scares me. If you ask me what is worrisome about the potential future interactions we have with these machines, and whether we should at all, a lot of it boils down to: Are corporations going to take any responsibility for not harming people, once they start to understand better how these interactions play out? I don’t have a whole lot of faith in the corporations to look out for anyone’s interests but their own.
But once we start understanding what good interactions look like… maybe as consumers, we can force these people to make products that are hopefully going to make us better people.
Sorry, I got a little off into the weeds there. That’s my main fear. And as a little aside, I think it’s absolutely vital that when we are talking to an AI, or when we are interacting with a lifelike artificial machine, that that interaction be out in the open. I want that AI to tell me, “Hi, I’m automated, let’s talk about car insurance.”
Because you’re right, I don’t want to sit there and talk about weather with that thing. I don’t want to treat it exactly like I would a human being—unless it’s like fifty years from now, and these things are incredibly smart, and it would be totally worthwhile to talk to it. It would be like having a conversation with your smart aunt, or something.
But I would want that information upfront. I would want it to be flagged. Because I’d want to know if I’m talking to something that’s real or not—my boundaries are going to change depending on that information. And I think it’s important.
You have a PhD in Robotics, so what’s going to be something that’s going to happen in the near future? What’s something that’s going to be built that’s really just going to blow our minds?
Everyone’s always looking for something totally new, some sort of crazy app that’s going to come out of nowhere and blow our minds. It’s highly doubtful that anything like that is going to happen within the next five years, because science is incredibly iterative. Where you often see real breakthroughs is not in some atomic thing being created completely new, that blows everybody away… but when you get connections between two things that already exist, and you suddenly realize, “Oh wow! Peanut butter and jelly! Here we go, it’s a whole new world!”
This Alexa thing, the smart assistants that are now physically manifesting themselves in our homes, in the places where we spend most of our time socially—in our kitchens, in my office, where I’m at right now—they have established a beachhead in our homes now.
They started on our phones, and they’re in some of our cars, and now they’re in our homes, and I think that as this web spreads, slowly, and they add more ability to these personal AI assistants, and my conversations with Alexa get more complex, and there starts to become a dialogue…
I think that slow creep is going to result in me sort of snapping to attention in five years and going, “Oh, holy crap! I just talked about what’s the best present to buy for my ten-year-old daughter with Alexa, based on the last ten years that I’ve spent ordering stuff off of Amazon, and everything she knows about me!”
That’s going to be the moment. I think it’s going to be something that creeps up on us, and it’s gonna show up in these monthly updates to these devices, as they creep through our houses, as [they] take control of more stuff in our environments, and increase their ability to interact with us at all times.
It’ll be your Weizenbaum moment.
It’ll be a relationship moment, yeah. And I’ll know right then whether I value that relationship. By the way, I just wrote a short story all about this called “Iterations”. I joined the XPRIZE Science Fiction Advisory Council, and it really focuses on optimistic futures, right? They brought together all of these science fiction authors and said, “Write some stories twenty years in the future with optimism, people—utopias—let’s do some good stuff.”
I wrote a story about a guy who comes back twenty years later, he finds his wife, and realizes that she has essentially been carrying on a relationship with an AI that’s been seeded with all of his information. She, at first, uses it as a tool for her depression at having mysteriously lost her husband, but now it’s become a part of her life. And the question in the story is, is that optimistic? Or is that a pessimistic future?
My feeling is that people use technology to survive, and we can’t judge them for it. We can’t tell them, “You’re living in a terrible dystopia, you’re a horrible person, you don’t understand human interaction because you spend all your time with a machine.” Well, no… if you’ve got severe depression, and this is what keeps you alive, then that’s an optimistic future, right? And who are we to judge?
You know, I don’t know. I keep on writing stories about it. I don’t think I’ll ever get any answers out of myself.
Isn’t it interesting that, you know, Siri has a name. Alexa—I have to whisper it, too, I have them all, so I have to watch everything that I say—that product has a name, Microsoft has Cortana, but Google is the “Google Assistant”—they didn’t name it; they didn’t personify it.
Do you have any speculation—I mean, not any first-hand knowledge—but would you have any speculation as to why that would be the case? I mean, I think Alexa is a reference to the Library of Alexandria.
Yeah, that’s interesting. Well, also you want to choose a series of phonemes that are not high-frequency, because you don’t want to constantly be waking the thing up. What’s also interesting about Alexa is that it’s a “la” sound, which is difficult for young children to make, so kids can’t actually use Alexa—I know this from extreme experience. Most of them can’t say “Alexa,” they say “Awexa” when they’re little, and so she doesn’t respond to little kids, which is crucial because little kids are the worst, and they’re always telling her to play these stupid songs that I don’t want to hear.
Can’t you change the trigger word, actually?
I think you can, but I think you’re pretty limited. I think you can change it to Echo.
Right.
I’m not sure why exactly Google would make that decision—I’m sure that it was a serious decision. It’s not the decision that every other company made—but I would guess that it’s not the greatest situation, because people like to anthropomorphize the objects that they interact with; it creates familiarity, and it also reinforces that this is an interaction with a person… It has a person’s name, right?
So, if you’re talking to something, what do we talk to? What’s the only thing that we’ve ever talked to in the history of humankind that was able to respond in English? Friggin’, another human being, right? So why would you call that human being “Google”? It doesn’t make any sense. Maybe they just wanted to reinforce their brand name, again and again and again, but I do think it’s a dumb decision.
Well, I notice that you give gender to Alexa, every time you refer to it.
She has a female name, and a female voice, so of course I do.
It’s still not an ‘it’.
If I was defining ‘it’ for a dictionary or something, I would obviously define the entity Alexa as an ‘it’, but the best interaction I can have with her is… She’s intentionally piggybacking on human interaction, which is smart, because that’s the easiest way to interact; that’s what we have evolved to do.
So I am more than happy to bend to her wishes and utilize my interaction with her as naturally as I can, because she’s clearly trying to present herself as a female voice, living in a box in my kitchen. And so I’m completely happy, of course, to interact with her in that way, because it’s most efficient.
As we draw to the end here, you talked about optimism, and you came to this conclusion on different ways the future may unfold… it may be hard to call the ball on whether that’s good or bad. But those nuances aside, generally speaking, are you optimistic about the future?
I am. I’m frighteningly optimistic. In everything I see, I have some natural level of optimism that is built into me, and it is often at odds with what I am seeing in the world. And yet it’s still there. It’s like trying to sit on a beach ball in a swimming pool. You can push it down, but it floats right back to the surface.
I feel like human beings make tools; that’s the most fundamental thing about people… [and] that part of making tools is being afraid of what we’ve made. That’s also a really great innate human instinct, and probably the reason that we’ve been around as long as we have been. I think every new tool we build—every time it’s more powerful than the one before it—we make a bigger bet on ourselves being a species worthy of that tool.
I believe in humanity. At the end of the day, I think that’s a bet worth making. Not everybody is good, not everybody is evil, but I think in the end, in the composition, we’re going to keep going forward, and we’re going to get somewhere, someday.
So, I’m mostly just excited, I’m excited to see what the future is going to bring.
Let’s close up talking about your books real quickly. Who do you write for? Of all the people listening, you would say, “The people that like my books are…”?
The people who are very similar to me, I guess, in taste. Of course, I write for myself. I get interested in something, I think a lot about it, sometimes I’ll do a lot of research on it, and then I write it. And I trust that someone else is going to be interested in that. It’s impossible for me to predict what people are going to want. I can’t do it. I didn’t go get a degree in robotics because I wanted to write science fiction.
I like robots, that’s why I studied robots, that’s why I write about robots now. I’m just very lucky that there’s anybody out there that’s interested in reading this stuff that I’m interested in writing. I don’t put a whole lot of thought into pleasing an audience, you know? I just do the best I can.
What’s The Clockwork Dynasty about? And it’s out already, right?
Yeah, so it’s out. It’s been out a couple of weeks, and I just got back from a book tour, which is why I might be hoarse from talking about it. So the idea behind The Clockwork Dynasty is… It’s told in two parts: one part is set in the past, and the other part is set in the present. In the past, it imagines a race of humanlike machines, automatons serving the great empires of antiquity, blending in with humanity and hiding their identity.
And then in the present day, these same automatons are still alive, and they’re running out of power, and they’re cannibalizing each other in order to stay alive. An anthropologist discovers that they exist, and she goes on this Indiana Jones-style around-the-world journey to figure out who made these machines in the distant past, and why, and how to save their race, and resupply their power.
It’s this really epic journey that takes place over thousands of years, and all across Russia, and Europe, and China, and the United States… and I just had a hell of a good time writing it, because it’s all my favorite moments of history. I love clockwork automatons. I’ve always loved court automatons that were built in the seventeenth century, and around then… And yeah, I just had a great time writing it.
Well, I want to thank you so much for taking an hour to have maybe the most fascinating conversation about robots that I think I’ve ever had, and I hope that we can have you come back another time.
Thank you very much for having me, Byron. I had a great time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now:
iTunes
Play
Stitcher
RSS
Voices in AI – Episode 15: A Conversation with Daniel H. Wilson syndicated from http://ift.tt/2wBRU5Z
0 notes
techscopic · 7 years
Text
Voices in AI – Episode 15: A Conversation with Daniel H. Wilson
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and Daniel talk about magic, robots, Alexa, optimism, and ELIZA.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Daniel Wilson. He is the author of the New York Times best-selling Robopocalypse and its sequel, Robogenesis, as well as other books, including How to Survive A Robot Uprising, A Boy And His Bot, and Amped. He earned a PhD in Robotics from Carnegie Mellon University, and a Master’s degree in AI and Robotics as well. His newest novel, The Clockwork Dynasty, was released in August 2017. Welcome to the show, Daniel.
Daniel H Wilson: Hi, Byron. Thanks for having me.
So how far back—the earliest robots—I guess they began in Greek myth, didn’t they?
Yeah, so it’s something I have been thinking a lot about, because automatons play a major part in my novel The Clockwork Dynasty. I started thinking about, how far back does this desire to build robots, or lifelike machines, really go? Yeah, and if you start to look at history, you’ll see that we have actual artifacts from the last few hundred years.
And before that, we have a lot of stories. And before that, we have mythology, and it does go all the way back to Greek mythology. People might remember that Hephaestus supposedly built tripod robots to serve the gods on Mount Olympus.
They had to chain them up at night, didn’t they, because they would wander off?
I don’t remember that part, but it wouldn’t surprise me. Yeah, that was written somewhere. Someone reported that they had visited, and that was true. I think there was the giant bronze robot that guarded… I think it was Crete, that was called Talos? That was another one of Hephaestus’s creations. So yeah, there are stories about lifelike machines that go all the way back into prehistory, and into mythology.
I think even in the story of Prometheus, in its earliest tellings, it was a robot eagle that actually flew down and plucked his liver out every day?
Oh, really… I didn’t remember that. I always, of course, loved the little robots from Clash of the Titans, you know the robot owl… do you remember his name?
No.
Bobo, or something.
That’s funny. So, those were not, even at the time, considered scientific devices, right? They were animated by magic, or something else. Nobody looked at a bunch of tools and thought, “A-ha, I can build a mechanical device here.” So where do you think it came from?
Well, you know, I think obviously human beings are really fascinated with themselves, right? You think about Galatea… and creating sculptures, and creating imitations of ourselves, and of animals, of course. It doesn’t surprise me at all that people have been trying to build this stuff for a really long time; what is kind of interesting to consider is to look at how it’s evolved over centuries and centuries.
Because you’re right; one thing that I have found doing research for this novel is that—it’s really fascinating to me—our concept of the scientific method, and the idea of the world as a machine, and that we can pick up the pieces and build new things. And we can figure out underlying physical principles, and things like that. That’s a relatively new viewpoint, which human beings haven’t really had for that long.
Looking at automatons, I saw that there’s this sort of pattern, in that the longer we build these things, they really are living embodiments of the world as the machine, right? If you start to look at the automatons being built during the middle ages, the medieval times, and then up through to the beginning of the industrial revolution… you see that people like Descartes, and philosophers who really helped us, as a civilization, solidify our viewpoint of the way nature works, and the way that science works… They were inspired by automatons, because they showed a living embodiment of what it would be like if an animal were made out of parts.
Then you go and dissect a real animal, and you start to think, “Wait, maybe I can figure this out. Maybe it’s not just, ‘God created it, walk away from it; it is what it is.’” Maybe there’s actually some rule or rhyme under this, and we can figure it out. I think that these kinds of machines actually really helped propel our civilization towards the technological age that we live in right now, because these philosophers were able to see this playing out.
Sorry, not to prattle on too long, but one thing I also really love about, specifically, medieval times is [that] the notions of how this stuff works were very set down, but they were also very magical, right? There were different types of magic; that’s what I really loved finding in my research: whenever you see something like an aqueduct functioning, they would think of that as a natural kind of magic, whereas if you had some geometry, or pure math, they would think of that as a celestial type of magic.
But underneath all of it [there] were always angels or demons, and always there were suspicions of a necromantic art, that this lifelike thing is animated by a spirit of the dead, you know. There’s so much magic and mystery that was laced into science at the time, that I think it really hindered the ability to develop iterative scientific advancements.
So picking up on that a bit, late eighteenth century, you’ve got Frankenstein. Frankenstein was a scientific creation, right? There was nothing magical about that. Can you think of an example before Frankenstein where the animating force was science-based?
The animating force behind some kind of creature, or like lifelike automaton? Yeah, I really can’t. I can think of lots of examples of stuff like Golem, or something like that, and they are all kind of created by magic, or by deities. I’m trying to think… I think that all of those ideas really culminated right around the time of the Industrial Revolution, and that was really reflective of their time. Do you have any examples?
No. What do you know about Da Vinci’s robot?
Not much. I know that he had a lot of sketches for various mechanical devices.
He, of course, couldn’t build it. He didn’t have the tools, but obviously what Da Vinci would have made would have been a purely scientific thing, in that sense.
Sure, but even if it were, that doesn’t mean that other people wouldn’t have applied the mindset that, whatever his inventions were, [they] were powered by natural magic, or some kind of deity or spirit. It’s kind of funny, because people back then were able to completely hold both of those ideas in their heads at once.
They could completely believe the idea that whatever they were creating was magical, and at the same time, they were doing science. It’s such an interesting thing to contemplate, being able to do science from that mentality.
Let’s go to the 1920s. Talk to us about the play that gives us the word ‘robot’.
Wow, this is like a quiz. This is great. So, you’re talking about R.U.R., the Čapek play. Yeah, Rossum’s Universal Robots—it’s a play from the ’20s in which, you know, a scientist creates a robot, and a race of robots. And of course, what do they do, they rise up and overthrow humanity and they kill every single one of us. It’s attributed as being the place where the term ‘robot’ was coined, and yeah, it plays out in the way that a lot of the stories about robots have played out, ever since.
One of the things that is interesting about R.U.R. is that, so often, we use robots differently in our stories, based on whatever the context is, of what’s going on in the world at the time, because robots are really reflections of people. They are kind of this distorted mirror that we hold up to ourselves. At that time, you know, people were worried about the exploitation of the working class. When you look at R.U.R., that’s pretty much what those robots embodied.
They are the children of men, they are working class, they rise up and they destroy their rulers. I think the lesson there was clear for everybody in the 1920s who went to go see that play. Robots represent different things, depending on what’s going on. We’ve seen lots of other killer robots, but they’ve embodied or represented lots of other different evils and fears that we have, as people.
Would you call that 1920s version of a robot a fully-formed image, the way we kind of think of them now? What would have been different about that view of robots?
Well, no. Those robots… I don’t think they were even… they just looked like people. I don’t even think there was the idea that they were made of metal, or anything like that. I think that that sort of image of the pop culture robot evolved more in the ’40s, ’50s, and ’60s, with pulp science fiction, when we started thinking of them as “big metal men”—you know, like Gort from The Day the Earth Stood Still, or Robby, or all of these giant hunks of metal, you know, with lights and things on them, that are more consistent with the technology of that time, which was the dawn of rocket ships and stuff like that, and that kind of science fiction.
From what I recall, in R.U.R., they aren’t mechanical at all. They are just like people, except they can’t procreate.
Here’s what I’m struck by, that just… the reason why I ask you if you thought they were fully modern: Let me just read you this quote from the play, and tell me what it sounds like to you… This is Harry Domin, he’s one of the characters, and he says:
“In ten years, Rossum’s Universal Robots will produce so much corn, so much cloth, and so much of everything that things will be practically without price. There will be no poverty, all work will be done by living machines. Everyone will be free from worry, and liberated from the degradation of labor. Everyone will live only to perfect himself.”
Yeah, it’s like…a utopian post-economy. Of course, it’s built on the back of slaves, which I think is the point of the play. Yeah…we’re all going to have great lives, and we’re going to be standing right on the throats of this race of slaves that are going to sacrifice everything so we can have everything, right?
I guess I am struck by the fact that it seems very similar to what people’s hope for automation is right now. “The factories will run themselves.” Who was it that said, “The factory of the future will only have two employees—a man and a dog. The man’s job will be to feed the dog, and the dog’s job will be to keep the man from punching the machines.”
I’ve been cooking up a little rant about this, lately. Honestly, I might as well launch into it. I think that’s actually a really naïve and childish view of a future. I’m starting to realize it more and more as I see the technology that we are receiving. This is sort of the first fruit, right?
Because we’ve only just gotten speech recognition to a level that’s useful, and gesture recognition, and maybe a little bit of natural language, and some computer vision, and then just general AI pattern recognition—we’re just now getting useful stuff from that, right?
We’re getting stuff like Alexa, or these mapping algorithms that can take us from one place to another, and Facebook and Twitter are choosing what they think would be most interesting to us, and I think this is very similar to what they’re describing in R.U.R., is this perfect future where we do nothing.
But doing nothing is not perfect. Doing nothing sucks. Doing nothing robs a person of all their ability and all their potential—it’s not what we would want. But a child, a person who just stumbled upon a treasure trove of this stuff, that’s what they’d think; that’s like the first wish you’d make, that would then make the rest of your life hell.
That’s what we are seeing now, what I’ve been calling the ‘candy age’ of artificial intelligence, where people—researchers and technologists—are going, “What do people want? Let’s give them exactly what they say they want.”
Then they do, and then we don’t know how to get around in the cities that we live in, because we depend on a mapping algorithm. We don’t know the viewpoints that our neighbors have, because we’ve never actually read an article that doesn’t tell us exactly what our worldview already is… there are a million examples. Talking to Alexa, I don’t have to say ‘please’ or ‘thank you’. I just order it around, and it does whatever I say, and delivers whatever I ask for.
I think that, and hope that, as we get a little bit more of a mature view on technology, and as the technology itself matures, we can reach a future in which the technology doesn’t deliver exactly what we want, exactly when we want it, but the technology actually makes us better, in whatever way it can. I would prefer that my mapping algorithm not just take me to my destination, I want it to help me know where stuff is myself. I want it to teach me, and make me better.
Not just give me something, but make me better. I think that, potentially, that is the future of technology. It’s not a future where we’re all those overweight, helpless people from Wall-E, you know… leaning back in floating chairs, and doing nothing, and totally dependent on a machine. I think it’s a future where the technology makes us stronger, and I think that’s a more mature worldview and idea of the future.
Well, you know, the quote that I read though, he said that “everybody will spend their time perfecting themselves.” And I assume you’ve seen Star Trek before?
Sure, yes.
There’s an episode where the Enterprise thaws some people out from the twentieth century, and one of the guys is named Offenhouse, and he asks what the challenge is in a world where there are no material needs, or hunger, and all of that. And Picard said the challenge is to become a better person, and make the most of it. So that’s also part of the narrative as well, right?
Yeah, and I think that slots in kind of well with the Alexa example, you know? Alexa is this AI that Amazon has built that—oh God, and mine’s talking to me right now because I keep saying her name—it’s this AI that sits in your house and you tell it what to do, and you don’t have to be polite to it. And this is kind of interesting to contemplate, right?
If your future with technology is a place where you are going to hone your sense of being the best version of yourself that you can be, how are you going to do that if you’re having interactions with lifelike machines in which you don’t have to behave ethically?
Where it’s okay to shout at Alexa, who—sorry, I’ve got to whisper her name—who, by the way, sounds exactly like a woman, and has a woman’s voice, and is therefore implicitly teaching you via your interaction with her that it’s okay to shout at that type of a voice.
I think it’s going to be, not a mutually exclusive thing, where the machines take over everything and you are free to be by yourself… because technology is a huge part of our life. We are going to have to work with technology to be the best versions of ourselves. I think another example you can find easily is just looking at athletes.
You don’t gauge how fast a runner is by putting them on a motorcycle; they run. They’re human. They are perfecting something that’s very human. And yet, they are doing it in concert with extreme levels of technology, so that when they do stand on the starting mark, ideally under the same conditions that every other human has stood on a starting mark for the last, however long, and the pistol goes off, and they start running, they are going to run faster than any human being who ever ran before.
The difference is that they are going to have trained with technology, and it’s going to have made them better. That’s kind of the non-mutually-exclusive future that I see, or that I end up writing science fiction about, since I’m not actually a scientist and I don’t have to do any of this stuff.
Let me take that idea and run with it just a minute. Just to set this up for the listener, in the 1960s, there was a man named Weizenbaum, who wrote a program named ELIZA. ELIZA was kind of a therapy bot—I guess we would think of it now—and you would say something like, “I’m having a bad day,” and it says, “Why are you having a bad day?” And you would say, “I’m having a bad day because of my boyfriend,” and it says, “What about your boyfriend is making you have a bad day?”
It’s really simple, and uses a few linguistic rules. And Weizenbaum saw people engaging with it, and even though they knew it was a machine, he saw them form an emotional attachment—they would pour their heart out to it, they would cry. And he turned on AI, as it were. He deleted ELIZA and said, when the computer says, “I understand,” it’s just a lie, because there’s no “I” and no understanding.
He distinguished between choosing and deciding. He said, “Deciding is something a computer can do, but choice is a human thing.” He was against using computers as substitutes for people, especially anything that involved empathy. Is your observation about Alexa that we need to program it to require us to say please, or we need to not give it a personality, or something different?
Absolutely, we need to just figure out ethical interactions and make sure that our technology encourages those. And it’s not about the technology. No one cares about whether or not you’re hurting Alexa’s feelings; she doesn’t have any feelings. The question is, what kind of interactions are you setting up for yourself, and what kind of behaviors are you implicitly encouraging in yourself?
Because we get to choose the environments that we are in. The difference between when ELIZA was written and now is that we are surrounded by technology. Every minute of our lives has got technology. At that time, you can say, “Oh, let’s erase the program, this is sick, this is messed up.” Well guess what man, that’s not the world anymore.
Every teenager has a real social network, and then they have a virtual social network, that’s bigger and stranger and more complex… and possibly more rewarding than the real people that are out there. That’s the environment that we live in now. It’s not a choice to say “turn it off,” right? We’re too far. I think that the answer is to make sure that technologists remember that this is a dimension that they have to consider while they create technology.
That’s kind of a new thing, right? We didn’t used to have to worry about consumer products: Are people going to fall in love with a toaster? Are people going to get upset when the toaster goes kaput? Are people going to curse at the toasters and become worse versions of themselves? That wasn’t an issue then, but it is an issue now, because we are having interactions with lifelike artifacts. Therefore, ethical dimensions have to be considered. I think it’s a fascinating problem, and I think it’s something that is going to really make people better, in the end.
Assuming we do make machines that simulate emotions—you can have a bot best friend, or what have you—do you think that that is something that people will do, and do you think that that is healthy, and good, and positive?
It’s going to be interesting to see how that shakes out. Talking in terms of decision versus choice; one thing that’s always stuck with me is a moment in the movie AI, when Gigolo Joe—who is exactly what he sounds like, and he’s a robot—he looks this woman in the eyes, and he says, “You are the most beautiful woman in the world.” Immediately, you look at that, and you go, he’s just a robot, that doesn’t mean anything.
He just said, “You’re the most beautiful woman in the world,” but his opinion doesn’t mean anything, right? But then you think about it for another second, and you realize, he means it. He means that with every fiber of his being, and there’s no human alive, that could probably look at that exact woman, at that exact moment, and say, “You’re the most beautiful woman alive,” and really mean it. So, there’s value there.
You can see how that value exists when you see complete earnestness versus how a wider society might attribute a zero value to the whole thing, but at least he means it. So yeah, I can kind of see both sides of this. I’m judging now from the environment that I live in right now, the context of the world that I have; I don’t think it would be a great idea. I wouldn’t want my kids to just have virtual friends that are robots, or whatever, but you never know.
I can’t make that call for people twenty years from now. They could be living in a friggin’ apocalypse, where they don’t have access to human beings and the only thing that they’ve got are virtual characters to be friends with. I don’t know what the future is going to bring. But I can definitely say that we are going to have interactions with lifelike machines, there are going to be ethical dimensions to those interactions; technologists had better figure out ways to make sure those interactions make us better people, and not monsters.
You know, it’s interestingly an old question…you remember that original Twilight Zone show, about the guy who’s on the planet by himself—I think he’s in prison—and they leave him a robot. He gets a pardon, or something, and they go to pick him up, and they only have room for him, not the robot, and he refuses to leave the robot.
So, he just stays alone on the planet. It’s kind of interesting that fifty years ago, we looked ahead and that was a real thing that people thought about—are synthetic emotions as valuable to a human as real ones? I assume you think we are definitely going to face that—as a roboticist—we certainly are going to build things that can look you in the eye, and tell you that you are beautiful, in a very convincing way.
Yes. I have a very humanist kind of viewpoint on this. I don’t think technology means anything without people. I think that technology derives its value entirely from how much it matters to human beings. The part of me that gets very excited about this idea of the robot that looks you in the eye and says, “I love you”—I’m not interested in replacing human relationships that I have. I don’t know how many friends you have, but I have a couple of really good friends.
That’s all I can handle. I have my wife, and my kids, and my family. I think most people aren’t looking to add more and replace all their friends with machines, but what I get excited about is how storytelling is going to evolve. Because all of us are constantly scouring books and movies and television, because we are looking for glimpses of those kinds of emotional interactions and relationships between people, because we feed on that, because we are human beings and we’re designed to interact with each other.
We just love watching other human beings interact with each other. So—having written novels and comic books and screenplays and the occasional videogame—I can’t wait to interact with these types of agents in a storytelling setting, where the game, where the story, is literally human interaction.
I’ve talked about this a little bit before, and some examples I’ve cooked up, like… What if it’s World War I, and you’re in No Man’s Land, and there are mortars streaking out of the sky, blowing up, and your whole job for this story is to convince your seventeen-year-old brother to get out of the crater and follow you to the next crater before he gets killed, right? The job is not to carry a videogame gun and shoot back.
Your job is to look him in the eye, and beg him, and say, “I’m begging you, you have to get up, you have to be strong enough to come with me and go over here, I promised mom you would not die here!” You convince him to get up and go with you over the hill to the next crater, and that’s how you pass that level of that story, or that’s how you move through that storytelling world.
That level of human interaction with an artificial agent, where it’s looking at me, and it can tell whether I mean it, and it can tell if there’s emotion in my voice, and it can tell if I’m committed to this, and it can also reflect that back to me accurately, through the actions of this artificial agent… Man, now that is going to be a really fascinating way to engage in a story. And I think, it has—again, like I’ve been harping on—it has the ability to make people better through empathy, through sharing situations that they get to experience emotionally, and then understand after that.
Thinking about replacing things is interesting, and often depressing. I think it’s more interesting to think about how we are going to evolve, and try out new things, and have new experiences with this type of technology.
Let’s talk a little bit about life and intelligence. So, will the robots be alive? Do you think we are going to build living machines… And by asking you the question, I am kind of implicitly asking you to define life.
Sorry, let’s back up. The question is: Do we think we’re going to build perfectly lifelike machines?
No. Will we build machines that are alive—whether they look human or not, I’m not interested in. Will there be living machines?
That’s interesting, I mean—I only find that interesting in a philosophical way to contemplate. I don’t really care about that question. Because at the end of the day, I think Turing had it right. If we are talking about human-like machines, and we are going to consider whether they are alive… which would probably mean that they need rights, and things like that… then I think the proof is just in the comparison. I’m making the assumption that every other human is conscious.
I’m assuming that I’m conscious, because I’m sitting here feeling what executive function feels like, but, I think that that’s a fine hoop to jump through. Human-like level of intelligence: it’s enough for me to give everyone else the benefit of the doubt, it’s enough for them to give me the benefit of the doubt, so why wouldn’t I just use that same metric for a lifelike machine?
To the extent that I have been convinced that I’m alive, or that anybody is alive, I’m perfectly willing to be convinced that a machine is alive, as well.
I would submit, though, that it is the farthest thing from a philosophical question, because, as you touched on, if the machine is alive, then it has certain rights? You can’t have it plunge your toilet, necessarily, or program it to just do your bidding. If it isn’t, like…  Nobody thinks the bots we have now are alive. Nobody worries—
—Well, we currently don’t have a definition of ‘life’ that everyone agrees on, period. So, throwing robots into that milieu, is just… I don’t know…
We don’t have to have a definition. We can know the endpoints, though. We know a rock is not alive, and we know a human is alive. The question isn’t, are robots going to walk in some undefined grey area that we can’t figure out; the question is, will they actually be alive? And if they’re alive, are they conscious?
And if they’re conscious, then that is the furthest thing from a philosophical question. It used to be a philosophical question, when you couldn’t even really entertain the question, but now…
I’m willing to alter that slightly. I’ll say that it’s an academic question. If the first thing that leads off this whole chain is, “Is it alive?” and we have not yet assigned a definition to that symbol—A-L-I-V-E—then it becomes an academic discussion of what parameters are necessary in order to satisfy the definition of ‘alive’.
And that is not really very interesting. I think the more interesting thing is, how are we actually going to deal with these things in our day-to-day lives? So from a very practical, concrete manner, like… I walk up to a robot, the robot is indistinguishable from a human being—which, that’s not a definition of alive, that’s just a definition—then how am I going to behave, what’s my interaction protocol going to be?
That’s really fun to contemplate. It’s something that we are contemplating right now. We’re at the very beginning of making that call. You think about all of the thought experiments that people are engaging in right now regarding autonomous vehicles. I’ve read a lot lately about, “Okay, we got a Tesla here, it’s fully autonomous, it’s gotta go left or right, can’t do anything else… There’s a baby on the left, and an elderly person on the right. what are we going to do? It’s gotta kill somebody; what’s going to happen?”
The fact is, we don’t know anything about the moral culpability, we don’t know anything about the definitions of life or of consciousness, but we’ve got a robot that’s going to run over something, and we’ve got to figure out how we feel about it. I love that, because it means that we are going to have to formalize our ethical values as a society.
I think that’s something that’s very good for us to consider; we are going to have to pack that stuff into these machines, and they are going to continue to evolve. My hope is that by the time we can sit in armchairs and discuss whether these things are alive, they’ll of course already be here—and that we will have already figured out exactly how we want to interact with these autonomous machines, whether they are vehicles or human-like robots, or whatever they are.
We will hopefully already have figured that out by the time we smoke cigars and consider what ‘aliveness’ is.
Let me try it again, if ‘aliveness’ isn’t the thing. So, I asked the question because… up until the 1990s, veterinarians were taught not to use anesthetic when they operated on animals. The theory was—
—And on babies. Human babies. Yeah.
Right. That was scientific consensus, right? The question is, how would we have known? Today, we would look at that and say, “That dog really looks like it’s hurting,” and we would be intensely curious to know whether it really is. Of course we call that sentience—the ability to sense something, generally pain—and we base our laws on it.
Human rights arrived, in part, because we are sentient… and animal cruelty laws, because the animals are [sentient]. And yet, we don’t get in trouble for using antibiotics on bacteria, because they are not deemed to be sentient. So all of a sudden we are going to be confronted by something that says, “Ouch, that hurt.” And either it didn’t, and we should pay that no mind whatsoever, or it did hurt, which is a whole different thing.
To say, “Let’s just wait until that happens, and then we can sit around and discuss it academically,” is not really an answer to what I’m asking—I’m asking, how will we know when that moment arrives? It sounds like you are saying that if they say they hurt, we should just assume that they do.
By extension, if I put a sensor on my computer, and I hold a match to it, and it hits five hundred degrees, and it says “ouch,” I should assume that it is in pain. Is that what you’re saying?
No, not exactly. What I’m saying is that there is going to be a lot of iterations before we reach a point where we have a perfectly lifelike robot that is standing in front of you and saying, “Ouch.” Now, what I said about believing it when it says that, is that I hold it to the same bar that I hold human beings to: which is to say, if I can’t tell the difference between it and a human being, then I might as well give it the benefit of the doubt.
That’s really far down the line. Who knows, we might not ever even get there, but I assume that we would. Of course, that’s not the same standard that I would hold a CPU to—I wouldn’t consider the CPU as feeling pain. My point is that in every iteration we have, until we reach that perfectly lifelike robot standing in front of us and saying, “You hurt my feelings, you should apologize,” the emotions that these things exhibit are only meaningful insofar as they affect the human beings around them.
So I’m saying, to program a machine that says, “Ouch, you hurt my feelings—apologize to me,” is very important, as long as it looks like a person. Because there is some probability that by interacting with it as a person, I could be training myself to be a serial killer without knowing it, if it didn’t require that I treat it with any moral care.
Is that making any sense? I don’t want to kick a real dog, and I don’t want to kick a perfectly lifelike dog. I don’t think that’s going to be good for me.
Even if you can argue that one dog doesn’t feel it, and the other dog does. In the case that one of the dogs is a robot, I don’t care about that dog actually getting hurt—it’s a robot. What I care about is me training myself to be the sort of person who kicks a dog. So I want that robot dog to not let me kick it—to growl, to whimper, to do whatever it does to invoke whatever the human levers are that you pull in order to make sure that we are not serial killers… if that makes any sense.
Let me ask in a different way, a different kind of question. I call a 1-800 number of my airline of choice, and they try to route me into the automated system, and I generally hit zero, because… whatever.
I fully expect that there is going to be a day, soonish, where I may be able to chat with a bot and do some pretty basic things without even necessarily knowing that it’s a bot. When I have a person that I’m chatting with, and they’re looking something up, I make small talk, ask about the weather, or whatnot.
If I find myself doing that, and then towards the end of the call I figure out that this isn’t even a person, I will have felt tricked, like I wasted my time. There’s nothing there that heard me. We yell at the TV—
—No. You heard you. When you yell at the TV, you yell for a reason. You don’t yell at an empty room for no reason, you yell for yourself. It’s your brain that is experiencing this. There’s no such thing as anything that you do which doesn’t get added up and go into your personality, and go into your daily experiences, and your dreams, and everything that eventually is you.
Whatever you spend your time doing, that’s going to have an impact on who you are. If you’re yelling at a wall, it doesn’t matter—you’re still yelling.
Don’t you think that there is something different about interacting with a machine and interacting with a human? We would by definition do those differently. Kicking the robot dog, I don’t think that’s going to be what most people do. But if the Tesla has to go left or go right, and hit a robot dog or a real dog… You know which way it should go, right?
Clearly, with the Tesla, we don’t care what decision it makes—we’re not worried about the impact on the Tesla; it is obviously going to kill a dog either way. If it were a human being who had to choose between killing a robot dog or a real dog, we would obviously choose the robot dog, because that would be better for the human being’s psyche.
We could have fun playing around with gradations, I guess. But again, I am more interested in real practical outcomes, and how to make lifelike artifacts that interact with human beings ethically, and what our real near-term future with that is going to look like. I’m just curious, what’s the future that you would like to see? What kind of interactions would you prefer to have—or none at all—with lifelike machines?
Well, I’m far more interested—like you—with what’s going to happen, and how we are going to react to it. It’s going to be confusing, though, because we’re used to things that speak in a human voice being a human.
I share some of Weizenbaum’s unease—not necessarily to the same extent—but some unease that if we start blurring the lines between what’s human and what’s not, that doesn’t necessarily ennoble the machine. It may actually be to our own detriment. We’ve had to go through thousands of years of civilization to get something we call human rights, and we uphold them because we think there is something uniquely special about humans, or at least about life.
To just blithely say, “Let’s start extending that elsewhere,” I think diminishes and maybe devalues it. But, enough with that; let me ask you a different one. What do you see? You said you’re far more interested in what we are going to do with these things… in what the near future holds. So, what does the near future hold?
Well, yeah, that’s kind of what I was ranting about before. Exactly what you were saying—I really agree with you strongly that these interactions, and what happens between us and our machines, put a lot of power squarely in the hands of the people who make this technology. Like this dopamine-reflex, mouse-pushing-the-cocaine-button way that we check our smartphones; that’s really good for corporations. That’s not necessarily great for individuals, you know?
That’s what scares me. If you ask me what is worrisome about the potential future interactions we have with these machines, and whether we should at all, a lot of it boils down to: Are corporations going to take any responsibility for not harming people, once they start to understand better how these interactions play out? I don’t have a whole lot of faith in the corporations to look out for anyone’s interests but their own.
But once we start understanding what good interactions look like… maybe as consumers, we can force these people to make products that are hopefully going to make us better people.
Sorry, I got a little off into the weeds there. That’s my main fear. And as a little aside, I think it’s absolutely vital that when we are talking to an AI, or when we are interacting with a lifelike artificial machine, that that interaction be out in the open. I want that AI to tell me, “Hi, I’m automated, let’s talk about car insurance.”
Because you’re right, I don’t want to sit there and talk about weather with that thing. I don’t want to treat it exactly like I would a human being—unless it’s like fifty years from now, and these things are incredibly smart, and it would be totally worthwhile to talk to it. It would be like having a conversation with your smart aunt, or something.
But I would want that information upfront. I would want it to be flagged. Because I’d want to know if I’m talking to something that’s real or not—my boundaries are going to change depending on that information. And I think it’s important.
You have a PhD in Robotics, so what’s going to be something that’s going to happen in the near future? What’s something that’s going to be built that’s really just going to blow our minds?
Everyone’s always looking for something totally new—some crazy app that’s going to come out of nowhere and blow our minds. It’s highly doubtful that anything like that is going to happen within the next five years, because science is incredibly iterative. Real breakthroughs are rarely some atomic thing created completely new that blows everybody away; more often, they come when you connect two things that already exist and suddenly realize, “Oh wow! Peanut butter and jelly! Here we go, it’s a whole new world!”
This Alexa thing—the smart assistants that are now physically manifesting themselves in the places where we spend most of our time socially: in our kitchens, in my office, where I’m at right now—they have established a beachhead in our homes.
They started on our phones, and they’re in some of our cars, and now they’re in our homes, and I think that as this web spreads, slowly, and they add more ability to these personal AI assistants, and my conversations with Alexa get more complex, and there starts to become a dialogue…
I think that slow creep is going to result in me sort of snapping to attention in five years and going, “Oh, holy crap! I just talked about what’s the best present to buy for my ten-year-old daughter with Alexa, based on the last ten years that I’ve spent ordering stuff off of Amazon, and everything she knows about me!”
That’s going to be the moment. I think it’s going to be something that creeps up on us, and it’s gonna show up in these monthly updates to these devices, as they creep through our houses, as [they] take control of more stuff in our environments, and increase their ability to interact with us at all times.
It’ll be your Weizenbaum moment.
It’ll be a relationship moment, yeah. And I’ll know right then whether I value that relationship. By the way, I just wrote a short story all about this, called “Iterations”. I joined the XPRIZE Science Fiction Advisory Council, which really focuses on optimistic futures, right? They brought together all of these science fiction authors and said, “Write some stories twenty years in the future with optimism, people—utopias—let’s do some good stuff.”
I wrote a story about a guy who comes back twenty years later, he finds his wife, and realizes that she has essentially been carrying on a relationship with an AI that’s been seeded with all of his information. She, at first, uses it as a tool for her depression at having mysteriously lost her husband, but now it’s become a part of her life. And the question in the story is, is that optimistic? Or is that a pessimistic future?
My feeling is that people use technology to survive, and we can’t judge them for it. We can’t tell them, “You’re living in a terrible dystopia, you’re a horrible person, you don’t understand human interaction because you spend all your time with a machine.” Well, no… if you’ve got severe depression, and this is what keeps you alive, then that’s an optimistic future, right? And who are we to judge?
You know, I don’t know. I keep on writing stories about it. I don’t think I’ll ever get any answers out of myself.
Isn’t it interesting that, you know, Siri has a name. Alexa—I have to whisper it, too, I have them all, so I have to watch everything that I say—that product has a name, Microsoft has Cortana, but Google is the “Google Assistant”—they didn’t name it; they didn’t personify it.
Do you have any speculation—I mean, not any first-hand knowledge—but would you have any speculation as to why that would be the case? I mean, I think Alexa is a reference to the library of Alexandria.
Yeah, that’s interesting. Well, you also want to choose a series of phonemes that don’t occur at high frequency in everyday speech, because you don’t want to be constantly waking the thing up. What’s also interesting about Alexa is that it’s a “la” sound, which is difficult for young children to make, so kids can’t actually use Alexa—I know this from extreme experience. Most of them can’t say “Alexa”; they say “Awexa” when they’re little, and so she doesn’t respond to little kids, which is crucial, because little kids are the worst, and they’re always telling her to play these stupid songs that I don’t want to hear.
Can’t you change the trigger word, actually?
I think you can, but you’re pretty limited—I believe you can change it to Echo.
Right.
I’m not sure why exactly Google would make that decision—I’m sure that it was a serious decision. It’s not the decision that every other company made—but I would guess that it’s not the greatest situation, because people like to anthropomorphize the objects that they interact with; it creates familiarity, and it also reinforces that this is an interaction with a person… It has a person’s name, right?
So, if you’re talking to something, what do we talk to? What’s the only thing that we’ve ever talked to in the history of humankind that was able to respond in English? Friggin’, another human being, right? So why would you call that human being “Google”? It doesn’t make any sense. Maybe they just wanted to reinforce their brand name, again and again and again, but I do think it’s a dumb decision.
Well, I notice that you give gender to Alexa, every time you refer to it.
She has a female name, and a female voice, so of course I do.
It’s still not an ‘it’.
If I were defining ‘it’ for a dictionary or something, I would obviously define the entity Alexa as an ‘it’, but the best interaction I can have with her is… She’s intentionally piggybacking on human interaction, which is smart, because that’s the easiest way to interact; it’s what we have evolved to do.
So I am more than happy to bend to her wishes and utilize my interaction with her as naturally as I can, because she’s clearly trying to present herself as a female voice, living in a box in my kitchen. And so I’m completely happy, of course, to interact with her in that way, because it’s most efficient.
As we draw to the end here: you talked about optimism, and you mused on the different ways the future may unfold—how it may be hard to call the ball on whether a given future is good or bad. But those nuances aside, generally speaking, are you optimistic about the future?
I am. I’m frighteningly optimistic. In everything I see, I have some natural level of optimism that is built into me, and it is often at odds with what I am seeing in the world. And yet it’s still there. It’s like trying to sit on a beach ball in a swimming pool. You can push it down, but it floats right back to the surface.
I feel like human beings make tools; that’s the most fundamental thing about people… [and] that part of making tools is being afraid of what we’ve made. That’s also a really great innate human instinct, and probably the reason that we’ve been around as long as we have been. I think every new tool we build—every time it’s more powerful than the one before it—we make a bigger bet on ourselves being a species worthy of that tool.
I believe in humanity. At the end of the day, I think that’s a bet worth making. Not everybody is good, not everybody is evil, but I think in the end, in the aggregate, we’re going to keep going forward, and we’re going to get somewhere, someday.
So, I’m mostly just excited, I’m excited to see what the future is going to bring.
Let’s close up talking about your books real quickly. Who do you write for? Of all the people listening, you would say, “The people that like my books are…”?
The people who are very similar to me, I guess, in taste. Of course, I write for myself. I get interested in something, I think a lot about it, sometimes I’ll do a lot of research on it, and then I write it. And I trust that someone else is going to be interested in that. It’s impossible for me to predict what people are going to want. I can’t do it. I didn’t go get a degree in robotics because I wanted to write science fiction.
I like robots, that’s why I studied robots, that’s why I write about robots now. I’m just very lucky that there’s anybody out there that’s interested in reading this stuff that I’m interested in writing. I don’t put a whole lot of thought into pleasing an audience, you know? I just do the best I can.
What’s The Clockwork Dynasty about? And it’s out already, right?
Yeah, it’s out—it’s been out a couple of weeks, and I just got back from a book tour, which is why I might be hoarse from talking about it. The idea behind The Clockwork Dynasty is… It’s told in two parts: one part is set in the past, and the other is set in the present. In the past, it imagines a race of humanlike machines—automatons—serving the great empires of antiquity, blending in with humanity and hiding their identity.
And then in the present day, these same automatons are still alive, and they’re running out of power, and they’re cannibalizing each other in order to stay alive. An anthropologist discovers that they exist, and she goes on this Indiana Jones-style around-the-world journey to figure out who made these machines in the distant past, and why, and how to save their race, and resupply their power.
It’s this really epic journey that takes place over thousands of years, and all across Russia, and Europe, and China, and the United States… and I just had a hell of a good time writing it, because it’s all my favorite moments of history. I love clockwork automatons. I’ve always loved court automatons that were built in the seventeenth century, and around then… And yeah, I just had a great time writing it.
Well, I want to thank you so much for taking an hour to have maybe the most fascinating conversation about robots that I think I’ve ever had, and I hope we can have you come back another time.
Thank you very much for having me, Byron. I had a great time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here. 
Voices in AI
Visit VoicesInAI.com to access the podcast, or subscribe now via iTunes, Play, Stitcher, or RSS.
Voices in AI – Episode 15: A Conversation with Daniel H. Wilson