
Philosophy, Technology and Morality: A Volatile Combination?

In the midst of a global coronavirus pandemic, some worry about the end times and what is to blame: technology or humanity? Dr. Shannon Vallor shares her surprising insights.

Growing concerns about environmental degradation, climate change, artificial intelligence and other potentially ‘existential’ threats to human survival and progress have made the future of humanity seem increasingly tenuous. Is it inevitable that the human species will self-destruct, as some argue? Or is there a way forward that leads not only to our survival but also to our flourishing? What follows is an interview with Dr. Shannon Vallor, Associate Professor of Philosophy and Chair of the Philosophy Department at Santa Clara University in Silicon Valley, as well as President of the International Society for Philosophy and Technology (www.spt.org).

Design News: Thought leaders like Elon Musk and Stephen Hawking have presented a rather depressing view of the future of humanity as technology advances. What is your view on our future?

Shannon Vallor: When people like Musk and Hawking talk about what they call existential risks to humanity, i.e., risks to human survival and human well-being on the planet, they typically do so in the context of artificial intelligence (AI) and similar technologies. These are things that might happen, but won't necessarily happen. It all depends upon how we approach science and technology.

I do think that some of the rhetoric about these risks, especially with respect to AI, is a bit overblown. As I'll say in my talk, I'm not too worried about superintelligent robot overlords taking over humanity or enslaving us. But there are other kinds of risks that technology poses if humans don't develop the kinds of virtues, character strengths, and moral and intellectual skills that are needed to manage the use of science and technology.

Image Source: Linus Pauling Lecture Series / Shannon Vallor

What I will talk about are some of the existential risks that are generated today by our past failures to manage our scientific and technical power wisely. We can talk about environmental risks. We can talk about the kinds of risks from nuclear and biological weapons that still persist, and so on. But does that mean that we have to think about technology as something that's fundamentally a threat to human beings? Well, that would certainly be a mistake, because there are a lot of other existential risks, including natural risks, that science and technology have helped us mitigate or lessen.

We can't look at science and technology as fundamentally threats to human flourishing. We have to embrace them. But the question is how we do that. What's the difference between embracing science and technology in a way that endangers our future and embracing them in a way that makes it more likely that our future will be a long and flourishing one?

Design News: I’d like the happier outcome, too. I would think that engineers and scientists might be among the first groups of people who need to adjust the way they're doing things in order to help.

Shannon Vallor: When we think about the future of employment, for example, we often think about automation and the ways in which human employment and labor might be threatened with job losses. We think about driverless cars and the millions of people who will be out of work if human drivers are no longer needed. To counter that future, a lot of people are pushing for greater education and career development in science and technology, on the assumption that those are the fields of human employment that will be most needed and most valuable in the future. This is true, but with an asterisk, a qualifier, because there's a lot of automation of scientific and engineering practice that's now ramping up.

Image Source: Linus Pauling Lecture Series / Shannon Vallor

Design News: You have a picture of a USB stick for robot emotions on your website. Why is that?

Shannon Vallor: There’s an organization called 826 National, founded by Dave Eggers, with a location in San Francisco called 826 Valencia and one in Los Angeles called 826 LA. Each outlet sells or displays paraphernalia that excites the imagination, because the organization is devoted to promoting creative writing skills for young people between the ages of 6 and 18.

The San Francisco outlet has a pirate theme, and the LA Echo Park outlet has a time-travel and robot theme. For example, the LA store sells anti-robot fluid and little containers of nanobots – essentially an empty box. They also sell USB modules for robot emotions such as love, anger, schadenfreude and the like. You simply plug these USB sticks into your robot and voila. It's obviously meant as humor, but I thought my roboticist and robot-ethicist friends would get a kick out of them.

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.
