78RPM, I agree with some of what you said. For example, empathy for animals is not anthropomorphization. I see more of myself than you might think, not only in looking into animals' eyes, but also in caring for plants. And I think dolphins are a good candidate for being at least as smart as we are. But that has nothing to do with the anthropomorphization of machines, which is what I was talking about: it's not the same subject. I think the term "biological machine intelligence" makes no sense. There aren't any biological machines, and the idea that somehow creating enough neural density will produce biological-like consciousness is, as I pointed out before, science fiction. Although in the distant future this might be possible, I think that, at least for now, it ascribes god-like powers to humans that we simply don't have.
The [atomic] bomb will never go off. I speak as an expert in explosives. -- Admiral William Leahy, U.S. Navy, 1945
Comparing biological engineering to computers is similar to comparing the atom bomb to explosives. If we confine our thinking to computers with their current 2D architecture, we will not achieve conscious machines; a 3D architecture is needed to reach the required neural density. Henry Markram has replied to critics who say we don't know enough about brain architecture yet by pointing out that we already know a lot about small parts of the brain, and that we will build more modules based upon what we know. Nature did not start with a grand design (well, some people do believe that, but I don't); rather, much of it evolved through accidents that were successful enough to survive.
René Descartes justified cruel experiments on animals by saying they were just automata whose cries were nothing more than the clang of a bell being struck. It's hard to imagine that there are people who can look into the eyes of an animal and not see something of themselves -- and I'm not saying you're one of them. But it's clear that animals can experience pain and fear and suffering. In that way they are much like ourselves, even if they are not as intelligent. This is not anthropomorphization; it's empathy. Any result of research on biological machine intelligence will not be an improved human but likely an entirely new species -- and some new biological species have already been developed in labs. We cannot predict what it will be like. But I'm sure that humans are not the end point of evolution and that somehow we will have a hand in developing our descendants, for better or for worse.
78RPM and others, you may be right about what's possible in the distant future. At DN we're well aware of some of the most out-there developments in robotics and 3D printing, such as those you mention, and have reported on many of them. But that's quite different from talking about what the state of the art is now. And there's still a huge gap between sentient biological systems and machines. I brought in robotics because that's where I see the distortions most often: the human tendency to attribute human-like characteristics, such as feelings or especially self-awareness, to non-humans. This phenomenon is called anthropomorphization. All sorts of things may be possible in the future of technology. But self-aware robots don't exist now, and I don't see how they can exist, at least in the near future. To date, self-awareness has depended first on a biological entity, which robots are not, and second on...well, we don't really know what, but humans have it, and it's not entirely clear whether other species do. But machines sure don't. I find it curious that so many seem to think that if we just build the hardware--a sophisticated enough bio-mechanical machine modeled on our biological brain--the "software" will somehow spontaneously arrive, i.e., the machine will somehow become self-aware. This is just another version of the Frankenstein or golem myth. I think perhaps we've been reading/seeing too much science fiction.
I agree with your points, ttemple. It's true, computers can calculate more rapidly than human brains and will always be steps ahead in some ways, which is why we appreciate them and need them for some things. But there are definitely thought processes unique to humans that will always be, in my opinion, even more valuable than what computers bring to the equation (pun intended :)).
Perhaps my choice of words was a little unclear. Computers already duplicate "some" of the processes of our minds. What I meant was that they would never duplicate all of the processes of our minds.
I will stand by my prediction that computers will never duplicate certain processes of our minds. This is not meant to be pessimistic, but a testament to the wonders of the human mind. (Obviously 78 feels differently, but that's OK.)
@Ann, GTOlover, etmax, I actually think it will be possible for humans to build conscious machines that are smarter than we are. It won't be done by conventional programming and functional design. The resulting organisms will be neural networks with many sensors whose thoughts and motivations are as unpredictable as any other animal's. Look up the work of Prof. Pieter Abbeel of UC Berkeley and see how his robots are trained by observing a process. Also see the goal of neuroscientist Henry Markram to build a supercomputer model of the human brain in ten years; he has a $1.3Bn grant from the European Union to work on it. Others are working on 3D printing biological material like nerve cells.
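To make the "trained by observing a process" idea concrete, here is a toy Python sketch of behavioral cloning: fitting a policy to demonstration data rather than hand-programming it. The linear policy and synthetic demonstrations are simplifying assumptions for illustration only; Abbeel's actual methods (apprenticeship learning, deep reinforcement learning) are far more sophisticated.

    import numpy as np

    # Toy behavioral cloning: learn a policy mapping sensor
    # observations to actions from demonstration pairs, instead of
    # programming the behavior by hand.
    rng = np.random.default_rng(0)
    expert = rng.normal(size=(4, 2))      # unknown expert mapping (simulated)
    obs = rng.normal(size=(100, 4))       # observed sensor states
    actions = obs @ expert                # the expert's demonstrated actions

    # Fit a linear policy to the demonstrations by least squares.
    policy, *_ = np.linalg.lstsq(obs, actions, rcond=None)

    # The robot now "imitates" the expert on a state it never saw.
    new_obs = rng.normal(size=(1, 4))
    print(new_obs @ policy)

The point of the sketch is that nothing in the learned policy was explicitly programmed; it came entirely from watching demonstrations, which is the flavor of approach referred to above.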
Once we understand an architecture that works at a simple level, it is a matter of making the architecture massively complex. We can't predict whether these biological machines will be completely different entities or will slowly be melded with human brains until we evolve into a new animal. Wouldn't it be cool if we could navigate by gravitational or magnetic fields, like the turtles that swim thousands of miles across an ocean to return to their place of birth? What if we could see a broader color spectrum, as birds seem to do? What if we could do complex matrix algebra in our heads and develop a deeper mathematical model of the universe than our present feeble brains could attain? And would it be too much to hope that we could have the moral sensitivity of Gandhi or Thoreau or Rachel Carson?
"I don't believe that computers will ever duplicate some of the processes of our minds. " These are famous last words in my opinion. It's just the latest in a series of defeatist attitudes taken by someone who cannot himself figure out how to do the task at hand. Never heard any of them? Let me enlighten you. Gallileo was told he was mad for suggesting that the Earth was not the center of the universe. Today, we accept that as a given. Columbus was told he would fall off the edge of the world if he tried to sail around the world. We know he was right. Thomas Alva Edison warned people about the dangers of alternating current and built an electric chair to prove it. People once thought that computers required rooms of circuits and therefore no one would ever have a computer in the home, and here we are. Many scoffed at Jules Verne for suggesting a nuclear powered submarine or travel to the moon, and the United States built the sub and launched men to the moon and brought them home safely. Another thought is that we would never see gas go above $1 per gallon, and boy did we shoot that one in the backside.
One thing I have learned is never to say never. If you stay pessimistic, you will discourage tomorrow's brightest young minds from trying to do anything new. Isn't that what we want anyhow? Someone to try something new? Mary Shelley warned us about creating life from dead people, and yet we do organ transplants daily. How many times did your mother say, "Don't do that! You're gonna break your neck"? Most of us did not break our necks - nor did we all listen to our mothers. Sometimes people just want to live in a perfect world where everything stays the same and nobody wants to change anything. Not me. I want to change what I can while still preserving what must be preserved. That is our legacy as human beings: we push at what people tell us we can't do until we find a way to do it, and then we wonder if we should have been more careful.
A new service lets engineers and orthopedic surgeons design and 3D print highly accurate, patient-specific orthopedic medical implants made of metal -- without owning a 3D printer. Using free, downloadable software, users can import ASCII and binary .STL files, design the implant, and send an encrypted design file to a third-party manufacturer.
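As a side note on the file formats mentioned: a binary .STL file has an 80-byte header, a 32-bit little-endian triangle count, and 50 bytes per triangle, while an ASCII .STL is plain text beginning with "solid". The Python sketch below uses that size arithmetic to guess the format; it is a general heuristic, not part of the service described above.

    import os
    import struct

    def stl_format(path):
        """Guess whether an STL file is ASCII or binary."""
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            header = f.read(84)
        if len(header) < 84:
            return "ascii"              # too small to be a valid binary STL
        (n_triangles,) = struct.unpack("<I", header[80:84])
        # Binary STL: 80-byte header + 4-byte count + 50 bytes/triangle.
        if size == 84 + 50 * n_triangles:
            return "binary"
        return "ascii"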
For industrial control applications, or even a simple assembly line, a conventional machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine comes in: one that has some simple (or in some cases complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, and consumer goods. This discussion will examine what’s possible with smart machines and what tradeoffs need to be made to implement such a solution.
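As a minimal illustration of “adapting to changing conditions,” here is a Python sketch of a proportional control loop. The read_load() function is a hypothetical stand-in for real sensor input, and the target, gain, and units are assumptions made for the sake of the example, not a production design.

    TARGET = 0.75   # desired utilization (assumed units)
    GAIN = 0.4      # proportional gain (a tuning assumption)

    def read_load(speed, disturbance):
        """Simulated sensor: load rises with speed plus a disturbance."""
        return 0.5 * speed + disturbance

    speed = 1.0
    for step, disturbance in enumerate([0.1, 0.1, 0.3, 0.3, 0.0]):
        load = read_load(speed, disturbance)
        speed += GAIN * (TARGET - load)       # adapt speed toward the setpoint
        speed = max(0.0, min(2.0, speed))     # clamp to a safe operating range
        print(f"step {step}: load={load:.2f}, speed={speed:.2f}")

A dumb machine would run at a fixed speed regardless of the disturbance; the adaptation step is what makes the machine “smart” in the sense used above.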