Who's Afraid of General AI?

Byron Reese, technology entrepreneur and author of the new book, “The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity,” discusses the possibility of creating machines that truly think.

Chris Wiltz

August 6, 2018

11 Min Read

Byron Reese believes technology has only truly reshaped humanity three times in history. The first came with the harnessing of fire. The second with the development of agriculture. And the “third age” came with the invention of the wheel and writing. Reese, CEO and publisher of the technology research company Gigaom and host of the Voices in AI podcast, has spent the majority of his career exploring how technology and humanity intersect. He believes the emergence of artificial intelligence is pushing us into a “fourth age” in which AI and robotics will forever transform not only how we work and play, but also how we think about deeper philosophical topics, such as the nature of consciousness.

His latest book, “The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity,” touches on all of these subjects. Byron Reese spoke* with Design News about the implications of artificial general intelligence (AGI), the possibility of creating machines that truly think, automation's impact on jobs, and the ways society might be forever transformed by AI.

Image source: Byron Reese / Simon and Schuster

Design News: You've written books on the impact of technology on society in the past. What made you decide to delve into this specific topic of artificial intelligence now?

Byron Reese: I really did feel like the AI space was full of all these knowledgeable people who had very different conclusions about how things could shake out. And when you get to the job debate—what effect automation is going to have on employment—again, very smart people have very different ideas.

I came to this conclusion that it's not that they know different things; it's that they believe different things. And I couldn't find anything that kind of addressed that. The more I got into it, I realized it's a philosophy question more than a technical question.

That whole thread really excited me, so I distilled it down to, I think, three essential questions. The first is: How is automation going to affect employment? The second is around AGI. Is it something we're going to build? What is it? How hard is it? Are we on the way? Does anybody know how to make it? And then the third question is the question of whether computers can be alive and conscious or not.

In the first section [of the book], I say that there are only three different scenarios for automation. One is that [AI] can do anything a person can do, it can make better poetry and everything, and every job is going to vanish. Another scenario is that we're not going to have any uptick in unemployment, because for 200 years, it's remained steady in this country. And then there's this other [notion], which is that we're going to destroy jobs faster than we can create them and we're going to have perpetual long-term unemployment.


DN: It seems a lot of the discussion around AI focuses on what AI and automation can do, but tends to discount what humans can do. How do you sort through all that and get to what's possible versus what isn't?

BR: With robots, it is oddly easy because they're so primitive. Someone once said, if there's ever a robot uprising, just wait 15 minutes because all their batteries will be dead.

[Robots] can do so little. What they can do is repetitive things over and over and over. What I tried to do is go through and read everything and try to go visit the people that are making these things. It's hard, though, because 'robot' is not the right word. An automatic beekeeper is a robot, but an automatic bookkeeper is software. And yet they do the same thing—they replace human labor.

So if you broaden the definition to robots being software, which I think is fair, then I think what people are excited about is machine learning. Machine learning has a really basic assumption, which is that the future is like the past, but that's only useful when you can study the past and extrapolate it into the future.
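To make that assumption concrete, here is a minimal sketch (illustrative only, not part of the interview): a model fit to historical data simply projects the past trend forward, and it is only as good as the assumption that the process keeps behaving the way it did before.

```python
# Minimal illustration of the "future is like the past" assumption:
# fit a line to historical data, then project that trend forward.
# All numbers are made up for the example.
import numpy as np

rng = np.random.default_rng(0)

# "Past": ten periods of roughly linear growth.
past_t = np.arange(10)
past_y = 2.0 * past_t + 1.0 + rng.normal(0.0, 0.5, 10)

# Fit a straight line to the past (degree-1 polynomial).
slope, intercept = np.polyfit(past_t, past_y, deg=1)

# Extrapolate: the model assumes the same trend simply continues.
future_t = np.arange(10, 15)
predicted = slope * future_t + intercept

# If the underlying process changes regime (say the quantity plateaus at 20),
# the extrapolation fails, which is the limitation described above.
actual = np.full(future_t.shape, 20.0)
print("predicted:", np.round(predicted, 1))
print("actual:   ", actual)
```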

There are a lot of things where the future isn't like the past. You can't necessarily just ingest everything that's ever been written and have something that passes the Turing Test. I have a technical background and I hold patents, but I'm very cognizant of the fact that when I call my airline of choice and I say my frequent flier number, the machine still doesn't recognize it half the time. So I think I'm just keenly aware, and that's why I think that of the people who say AI is going to take over the world, none of them are in the field.

The people working in the field just want AI to tell the difference between someone saying 'eight,' 'H,' and 'A.' One of the questions I ask everybody on my show is, 'Do you believe that artificial general intelligence is possible, given all the techniques we now have?' Most people say, 'no,' that the trajectory of narrow AI is just better narrow AI; it's never an AGI. Creating an AGI is going to require a completely different thing.

Byron Reese (Image source: Byron Reese / Simon and Schuster)

DN: What would be required to develop an AGI?

BR: I'm really impressed by humans' ability at transfer learning. I really think that's the whole ball of wax. Humans recognize patterns very well. I can look at a cloud and say, 'Oh, that looks like horses.' But you can feed the computer a photograph of a cat and it'll say it's a stop sign.

People are really good at transfer learning. If I said to you, 'Imagine a trout in a river and imagine an identical trout in a bottle of formaldehyde in a lab,' I could then ask you a series of questions: What do they have in common? Are they the same weight? You might say, 'yes.' Are they the same temperature? No. Same color? Probably not. I could ask you 40 or 50 things and you would instantly just know the answer without ever having that exact experience. You just have experiences of things in water.

People have all of these experiences that we just effortlessly combine and recombine in a way that I think is way beyond machines. And I don't believe it's something you can learn with machine learning. I don't think you can machine learn your way to transfer learning. I just think it's too many permutations; too many different things. That fish has a thousand attributes. It has a smell and it has a chemical composition and all of these things. And yet you'll know the answer to the questions about the fish in the river versus the lab.

I'm not one to say I'm down on machines. I'm just keenly aware that they can do math very fast and that's it. And you have to say, how much of our daily life is reducible to math?

DN: Based on applications we've seen, it would seem that narrow AI is ideal as opposed to general AI, which might be subject to all of the same errors a biological mind might be. Is mimicking biology even the right path forward for AI, in your estimation?

BR: Almost nobody is working on general artificial intelligence, interestingly. All the money that big companies spend, they're spending on narrow AI. I think there's a fear or a belief that in the course of building narrow AI, we're going to build a general intelligence. So the theory is that intelligence has a few simple rules, like physics or magnetism, and that you can derive all intelligence from a few simple rules.

Pedro Domingos wrote a book called “The Master Algorithm,” and he believes there is such a thing. We don't know what it is, of course. But in theory, you can just point it at the internet and it will figure everything out. It would ingest it all and it would be a general intelligence. So I think that's what people worry about—that we kind of accidentally build [AGI].

I think it's a tenuous proposition that we can engineer human intelligence. If an AGI is possible, I think it will have to be evolved. We will have to evolve it. It will have to iterate itself; I don't believe we can engineer it.

DN: And yet there are groups looking to create AI using biological-based models.

BR: There's this worm, the nematode worm. It's the most successful creature on the planet. Nematode worms are the most abundant animal on Earth. They're everywhere and they're about as long as a human hair is wide. Their genome was sequenced 20 years ago and their brain has 302 neurons.

Over the past 20 years, there's been this effort called the OpenWorm Project, where people have tried to model that worm's behavior from the neurons. So the question is: How do I put 302 neurons into computer memory and have this emergent worm behavior come about? It seems reasonable that if you knew how a neuron works, you could model 302 of them and they would do exactly what the worm does.
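As an aside, the kind of model being attempted can be sketched in a few lines of code. The sketch below is purely illustrative: the leaky threshold-unit neuron model and random weights are invented for this example and are not how the OpenWorm Project represents the worm; the gap between this sort of simplification and a real neuron is exactly the difficulty described next.

```python
# A deliberately naive sketch of "putting 302 neurons into computer memory."
# The neuron count matches the nematode, but the neuron model and weights
# are invented for illustration; real neurons are far more complex.
import numpy as np

rng = np.random.default_rng(42)
N = 302                                   # neurons in the nematode's brain
weights = rng.normal(0.0, 0.1, (N, N))    # hypothetical synaptic weights
potential = np.zeros(N)                   # "membrane potential" of each unit
threshold, leak = 1.0, 0.9

for step in range(100):
    fired = potential >= threshold             # which units cross threshold
    potential[fired] = 0.0                     # reset the units that fired
    synaptic_input = weights @ fired.astype(float)
    sensory_noise = rng.normal(0.0, 0.05, N)   # stand-in for sensory input
    potential = leak * potential + synaptic_input + sensory_noise

print("units above threshold after 100 steps:",
      int((potential >= threshold).sum()))
```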

Now, 20 years in, the people of the OpenWorm Project don't even know if it's possible to do this with 302 neurons. And it boils down to that we don't know how a thought is encoded. We don't know what a neuron does. A neuron may be as complex as a supercomputer, so I don't think we're going to be able to model it after biology. And I don't believe that we're going to figure intelligence out because we don't know by what mechanism intelligence is created. We don't even understand our own intelligence. We don't even have an agreed upon definition of what intelligence is. So I don't believe we're going to engineer it. We may be able to evolve it if it's possible. But I don't have any evidence to believe that will happen.

That isn't to say I disbelieve it. It's just that I never look at my iPhone and think that it's one percent as smart as me, or one millionth as smart as me, or even one billionth as smart as me. It just doesn't register on the smart scale.

DN: In the book, you talk a lot about the societal impact of AI and automation and conversations happening around things like universal basic income, retraining workers, and these sorts of ideas. There are a lot of different discussions going on that stem from the same issue. But do you have any sense of where that discussion should be happening first or where we should really be focusing before we try to move forward?

BR: In the modern era, the most powerful social force is public opinion. Legal gay marriage is happening not because one generation died out and a new one came up, but because people actually changed their minds. Smoking was vilified not because a generation of smokers died and a generation of non-smokers came up. It's all just public opinion changing. Politicians don't lead on that front. They follow.

How do you think public opinion is made? It's made through the traditional media and through social media—through everything all kind of dancing together in this modern marketplace of ideas and thought, and people's opinions change. I think that's kind of an interesting thing. So I don't know if there is a central debate. I think what is happening now is exactly how it always happens. We just start having a thousand different discussions in a thousand different places and slowly, the ship changes course.

DN: So then do you think we'll be having these same discussions five or 10 years down the road? Will we still be wrestling with some of these same issues, or do you think there will be a whole new conversation to be had?

BR: I think it will be the same ones. I will say this, though. AI effectively makes everybody smarter. If you look at the internet, we bolted together two billion computers over the course of a quarter of a century and everything in society is transformed just because computers can talk to each other.

And then, all of a sudden, you have this new technology, AI, come along, and it's about making everybody smarter. I think it's sort of hard to spin that as a bad thing, and it's certainly something that's going to change everything in a more profound way, I think, than the internet did.

*This interview has been edited for clarity.

Chris Wiltz is a senior editor at Design News covering emerging technologies, including VR/AR, AI, and robotics.
