78RPM, I agree with some of what you said. For example, empathy for animals is not anthropomorphization. I see more of myself than you might think, not only when looking into animals' eyes, but also when caring for plants. And I think dolphins are a good candidate for being at least as smart as we are. But that has nothing to do with the anthropomorphization of machines, which is what I was talking about: it's not the same subject. I think the term "biological machine intelligence" makes no sense. There aren't any biological machines, and the idea that simply creating enough neural density will produce biological-like consciousness is, as I pointed out before, science fiction. Although in the distant future this might be possible, I think that, at least for now, this is ascribing god-like powers to humans that we simply don't have.
The [atomic] bomb will never go off. I speak as an expert in explosives. -- Admiral William Leahy, U.S. Navy, 1945
Comparing biological engineering to computers is like comparing the atom bomb to conventional explosives. If we confine our thinking to computers with their current 2D architecture, we will not achieve conscious machines; a 3D architecture is needed to reach the necessary neural density. Henry Markram has replied to critics who say we don't know enough about brain architecture yet by saying that we do know a lot about small parts of the brain, and we will build more modules based upon what we know. Nature did not start with a grand design (well, some people do believe that, but I don't); rather, a lot evolved by accidents that were successful enough to survive.
René Descartes justified cruel experiments upon animals by saying they were just automata whose cries were nothing more than the clang of a bell being struck. It's hard to imagine that there are people who can look into the eyes of an animal and not see something of themselves -- and I'm not saying you're one of them. But it's clear that animals can experience pain and fear and suffering. In that way they are much like ourselves, even if they are not as intelligent. This is not anthropomorphization; it's empathy. Research on biological machine intelligence will not result in an improved human, but likely in an entirely new species -- and some new biological species have already been developed in labs. We cannot predict what it will be like. But I'm sure that humans are not the end point of evolution and that somehow we will have a hand in developing our descendants, for better or for worse.
78RPM and others, you may be right about what's possible in the distant future. At DN we're well aware of some of the most out-there developments in robotics and 3D printing, such as those you mention, and have reported on many of them. But that's quite different from talking about what the state of the art is now. And there's still a huge gap between sentient biological systems and machines. I brought in robotics because that's where I see the distortions most often, such as the human tendency to attribute human-like characteristics -- feelings, or especially self-awareness -- to non-humans. This phenomenon is called anthropomorphization. All sorts of things may be possible in the future of technology. But self-aware robots don't exist now, and I don't see how they can exist, at least in the near future. To date, self-awareness has depended first on a biological entity, which robots are not, and second on...well, we don't really know what, but humans have it, and it's not entirely clear whether other species do. But machines sure don't. I find it curious that so many seem to think that if we just build the hardware -- a sophisticated enough bio-mechanic machine modeled on our biological brain -- the "software" will somehow spontaneously arrive, i.e., the machine will somehow become self-aware. This is just another version of the Frankenstein or golem myth. I think perhaps we've been reading/seeing too much science fiction.
I agree with your points, ttemple. It's true, computers can calculate more rapidly than human brains and will always be steps ahead in some ways, which is why we appreciate them and need them for some things. But there are definitely thought processes unique to humans that will always be, in my opinion, even more valuable than what computers bring to the equation (pun intended :)).
Perhaps my choice of words was a little unclear. Computers already duplicate "some" of the processes of our minds. What I meant was that they would never duplicate all of the processes of our minds.
I will stand by my prediction that computers will never duplicate certain processes of our minds. This is not meant to be pessimistic, but a testament to the wonders of the human mind. (Obviously 78 feels differently, but that's OK.)
@Ann, GTOlover, etmax, I actually think it will be possible for humans to build conscious machines that are smarter than we are. It won't be done by conventional programming and functional design. The resulting organisms will be neural networks with many sensors whose thoughts and motivations are as unpredictable as any other animal's. Look up the work of Prof. Pieter Abbeel at UC Berkeley and see how his robots are trained by observing a process. Also see the goal of neuroscientist Henry Markram to build a supercomputer model of the human brain in ten years; he has a $1.3Bn grant from the European Union to work on it. Others are working on 3D printing biological material like nerve cells.
Once we understand an architecture that works at a simple level, it's a matter of making the architecture massively complex. We can't predict whether these biological machines will be completely different entities or will slowly meld with human brains until we evolve into a new animal. Wouldn't it be cool if we could navigate by gravitational or magnetic fields so we could be like turtles who swim thousands of miles across an ocean to return to our place of birth? What if we could see a broader color spectrum as birds seem to do? What if we could do complex matrix algebra in our heads and develop a deeper mathematical model of the universe than our present feeble brains could attain? And would it be too much to hope that we could have the moral sensitivity of Gandhi or Thoreau or Rachel Carson?
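For readers curious what "trained by observing a process" means in practice, here is a minimal sketch of learning from demonstration (behavioral cloning): the system never receives explicit rules, only observed (state, action) pairs from an expert, and fits a policy to imitate them. This toy example is illustrative only -- the demonstrator, the one-parameter policy, and the numbers are assumptions of mine, not taken from Abbeel's actual systems.

```python
import random

def demonstrator(state):
    # The "expert" being observed: steers proportionally to the error.
    # The gain 0.5 is an arbitrary choice for this toy example.
    return 0.5 * state

# Observe the expert acting in 100 random states.
random.seed(0)
demos = [(s, demonstrator(s)) for s in (random.uniform(-1, 1) for _ in range(100))]

# Fit a one-parameter linear policy, action = w * state, by gradient
# descent on squared imitation error -- no rules ever programmed in.
w = 0.0
lr = 0.1
for _ in range(200):
    for s, a in demos:
        w -= lr * (w * s - a) * s  # gradient of 0.5 * (w*s - a)**2

print(round(w, 3))  # learned gain converges toward the expert's 0.5
```

The point of the sketch is the structure, not the scale: real imitation-learning systems replace the one-parameter policy with a deep neural network and the scalar state with camera images and joint angles, but the training principle -- minimize the mismatch with observed demonstrations -- is the same.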