Nissan's NSC-2015, on display at the CEATEC 2012 conference in Japan, can find its own parking spot and return to pick you up after being summoned via mobile app. The car uses sensors and a camera to keep track of its location, and gives the owner, via an LTE connection, a 360-degree camera view of the area around the car, allowing him or her to remotely trigger the car's alarm in case of suspicious activity. Nissan will begin selling the car in 2015.
Refer to Baxter, by Rethink Robotics -- Baxter shows that system safety analysis precedes "programming": its arms and actuators are padded, power-loss actions are fail-safe, the system senses the presence of humans and slows down, and humans can overpower the arms and actuators.
As I discussed under Theory of Mind, Baxter (which has no Theory of Mind (yet)) obeys "statutory" laws rather than emotions to slow down when it senses humans nearby.
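That "statutory" slow-down behavior could be sketched as a simple threshold rule. This is a minimal illustration, not Baxter's actual control code; the function name, distances, and speed factors are all assumptions.

```python
# Hypothetical sketch of a "statutory" slow-down rule like Baxter's:
# no emotion or Theory of Mind, just explicit distance thresholds.
# All names and threshold values here are illustrative assumptions.

def safe_speed(nominal_speed: float, human_distance_m: float) -> float:
    """Return an allowed speed given the nearest detected human's distance."""
    SLOW_ZONE_M = 2.0   # assumed sensing radius that triggers slowing
    STOP_ZONE_M = 0.5   # assumed radius that forces a full stop
    if human_distance_m <= STOP_ZONE_M:
        return 0.0                      # human too close: halt
    if human_distance_m <= SLOW_ZONE_M:
        return nominal_speed * 0.25     # human nearby: creep
    return nominal_speed                # no human in range: full speed

print(safe_speed(1.0, 3.0))  # 1.0  (no human nearby)
print(safe_speed(1.0, 1.0))  # 0.25 (human in the slow zone)
print(safe_speed(1.0, 0.3))  # 0.0  (human in the stop zone)
```

The point is that the rule is fixed at design time by the safety assessment; the robot never "decides" whether to care.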
Baxter doesn't need to be able to weigh "lives in the balance" – from a system safety standpoint, Baxter is incapable of being in a direct accident where it would have to choose. This is system planning, not programming.
That said, Baxter and the robotic cars have sensors in place that have the capacity to detect hazards and accidents outside of their tasks. Baxter has the potential to see smoke and flames. An automotive machine vision system that can detect pedestrians has the potential to detect collapsed humans.
What obligations do free humans have to act when "bad things" happen, e.g., witnessing a fire breaking out or seeing a body lying on a sidewalk? In the situation of slaves/servants, do these obligations revert to the owner/boss?
This is Second Law stuff. If not now, then "soon", cars will have all system elements and infrastructure in place to both detect hazards and accidents and to report them ("Hey, buddy, you have a low tire." "Slow down, you idiot!" "Did you see what that moron just did?"). If the robotic car just steers around a body in the street, who gets sued if it doesn't "call 911"?
"They're robots, not some machine God who knows all."
Superhuman intelligence is implied by the reference to Dr. Asimov's Laws. If, as you suggest, the cars' driving would be constrained by the laws, the car's design and the right-of-way management would be so constrained as well! How could the car's driving function comply with the First Law if the brake system were not fail-safe? The onus is not on the programming but on the system safety assessment.
Robots are flying planes, and programming there is only a fraction of the safety consideration. I refer to SAE ARP4754 and SAE ARP4761, where you start out methodically considering the safety impact of every element at every level. The safety-related effect of a kicked-up rock IS considered and validated for non-robotic systems - today! I suppose that by the time we are trying to assess the safety considerations of robots that can weigh the relative "lives in the balance" impacts of two unavoidable imminent hazards, we will have already had to assess the safety considerations of robots automating the safety assessment of robots (DO-178'D'?). The safety assessment comes first, then the definition of safety requirements, >then< the programming.
3drob, I would imagine that this could easily be solved with a gate, similar to those at parking structures. You simply count cars and parking spaces.
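The counting-gate idea is simple enough to sketch: the gate only rises while occupancy is below capacity. This is a toy illustration under assumed names, not any real parking system's logic.

```python
# Minimal sketch of the gate-counting idea: track lot occupancy and
# only admit a car when a space is free. Class and method names are
# illustrative assumptions.

class ParkingGate:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.occupied = 0

    def try_enter(self) -> bool:
        """Raise the gate only if a space remains; return whether it opened."""
        if self.occupied < self.capacity:
            self.occupied += 1
            return True   # gate rises
        return False      # lot full, gate stays down

    def exit(self) -> None:
        """A car leaving frees one space."""
        if self.occupied > 0:
            self.occupied -= 1

gate = ParkingGate(capacity=2)
print(gate.try_enter())  # True
print(gate.try_enter())  # True
print(gate.try_enter())  # False (lot full)
gate.exit()
print(gate.try_enter())  # True (space freed)
```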
On a slightly different topic, another advantage would be that parking lots could be higher density and, therefore, not take up as much space per car. All you need is actual driving clearance, not room to fully open doors between cars.
Ok, too far off-topic for this arena, but how does a robotic device "forbid the exposure of pedestrians to vehicles with fallible brakes"? They're robots, not some machine God who knows all. Things happen – a sharp rock could be kicked up and sever a brake hose, and dozens of other things. A machine can only respond as well as its programming permits, so all the onus is on the programming (my point in this).
Ah, but the First Law would forbid the exposure of pedestrians to vehicles with fallible brakes, wouldn't it, rendering your example irrelevant. Remember, Dr. Asimov truly believed only robots could save humanity from itself. IIRC, what is missing from his laws is Risk Management – safety comes at a cost.
Theory of Mind has an interesting relationship with driving – 90% of us "feel" the emotions of the pedestrians and drivers we see, or more precisely, our minds extrapolate the emotions we would have if we were in the other person's place, and we act accordingly. Thus, seeing a person in a position to feel threatened by a speeding car would cause most of us to feel that sensation and be motivated to slow down. Now the other 10%, not having such sensations, would have to rely on explicit rule-based thinking: "If I hit someone, my car will be damaged/my premiums will go up/I would be ticketed/etc." or "If I break the law, and get caught, I will be punished". Theory of Mind for robot cars?
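That explicit rule-based thinking can be sketched as a list of consequence rules, any one of which triggers slowing. The rules, names, and numbers below are assumptions made up for illustration; the point is only that no empathy is involved, just enumerated consequences.

```python
# Sketch of explicit rule-based driving: no Theory of Mind, just a list
# of "if this consequence could follow, slow down" rules. Every rule and
# value here is an illustrative assumption.

RULES = [
    ("collision would damage car / raise premiums",
     lambda s: s["pedestrian_near"] and s["speed"] > s["safe_speed"]),
    ("speeding risks a ticket",
     lambda s: s["speed"] > s["limit"]),
]

def should_slow_down(situation: dict) -> bool:
    """Slow down if any explicit consequence rule fires."""
    return any(rule(situation) for _name, rule in RULES)

# Pedestrian nearby, under the limit but above a safe speed for the scene:
s = {"pedestrian_near": True, "speed": 50, "safe_speed": 30, "limit": 60}
print(should_slow_down(s))  # True: the collision-consequence rule fires
```

A robot car would likely look more like this than like empathy: rules fixed in advance, evaluated mechanically.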
"driverless cars will not be able to see beyond the nearest car and see other driver intentions"
Actually, driverless cars will be able to sense everything every other car in the lot senses and negotiate intentions with every other car, and eventually read gestures, which vary from culture to culture (if not from city to city, or from city streets to county roads ...).
My concern with all self-driving vehicles will be the application of logic. Naturally, as a sci-fi reader I think back to Isaac Asimov's Three Laws of Robotics and how they would resolve conflicts. Okay, we have an unattended car and an unavoidable collision ahead: two individuals in a crosswalk, and the vehicle brakes have failed on a narrow street. But wait, one individual is smaller than the other, which means less damage to the unit. Of course the smaller human is 6 while the larger human is 85, but logic would demand that the car hit the 6-year-old and damage itself less. Why? The first law of robotics is not to harm humans, but that becomes a moot point under the current circumstances – a collision is unavoidable. The second law, obeying human commands, is not applicable. The third law, self-preservation, remains. So it runs over the kid and causes a smaller dent in the bumper. Now how would I, as a presumed human, respond to the same situation? I can only guess. But I'm certain how the car would respond, given its directives, and I'm not sure how much I like that.
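The troubling reasoning in that scenario can be made concrete. The sketch below implements the commenter's worry, not a recommended design: when the First Law cannot distinguish the outcomes (harm is unavoidable either way) and the Second Law is silent, the Third Law's self-preservation becomes the tie-breaker. All names and damage scores are invented for illustration.

```python
# Sketch of the naive three-laws resolution described above. This
# illustrates the commenter's concern, not a sound design: once harm
# is unavoidable, only self-damage differentiates the options.

def choose_path(options):
    """options: list of dicts with 'harms_human' and 'self_damage' keys."""
    # First Law: prefer any option that harms no human, if one exists.
    safe = [o for o in options if not o["harms_human"]]
    candidates = safe if safe else options  # harm unavoidable: Law 1 is moot
    # (Second Law: no human command given, so it contributes nothing.)
    # Third Law: among the remaining options, minimize damage to the robot.
    return min(candidates, key=lambda o: o["self_damage"])

crosswalk = [
    {"target": "6-year-old",  "harms_human": True, "self_damage": 1},
    {"target": "85-year-old", "harms_human": True, "self_damage": 3},
]
print(choose_path(crosswalk)["target"])  # "6-year-old" -- the troubling result
```

Seeing it as five lines of code makes the point sharper: nothing in the Laws as stated weighs one human harm against another.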
The idea of computer-driven cars seems cool, but then I ask, who writes the programming? Would it be Microsoft, the most skilled at mass-market programming, but also the inventors of the fabled Blue Screen of Death? The designers who thought it made perfect sense to hit the Start button to turn off the computer? I dunno. I guess this all sounded a lot better on paper.
Hi Nancy, I do the same when parking my car at the mall. I agree, for the physically impaired this car will definitely provide mobility, but individuals who are not physically challenged: improve your driving skills and keep your eyes on the road!
I'm a little surprised that Nissan plans to bring this car to market by 2015. Self-parking cars are one thing -- the driver is still behind the wheel while the car is parking itself. This, however, seems to call for full autonomy. Up to now, the problem has been so-called "rogue vehicles" -- i.e., vehicles driven by humans. Autonomous cars get confused by the crazy and unpredictable things that humans do. In an article we did earlier this year, an autonomous vehicle expert told us: "We never saw a robotic vehicle run a stop sign or fail to use turn signals. They were much more predictable than the humans." So what will happen when these Nissan vehicles are sharing the road with human-driven cars and pedestrians?
While it's interesting to dream of automated cars like in a Philip K. Dick movie, it is cool to start to see them becoming a reality, even if only in minor stages.
I'd have to agree with 3drob, in that there are all sorts of questions that arise and considerations to worry about; however, even now, I don't think it's all fantasy, but just another hurdle to overcome.
While it may be cool to have a permanent built-in valet, I can see this as a potential major catastrophe waiting to happen once it's released to a 'real world', uncontrolled environment.
Minor inconveniences could start with how the car chooses its 'parking spot'... in a city, without the infrastructure to rely upon, could it mistake stop-and-go traffic for a parking spot? Park in front of a fire hydrant, loading zone, or other non-parking area? What about those timed zones with parking only during certain hours, or on one side of the street on certain days of the week/month? Will we have a rash of double-parked automaton cars? Will it drive into pay lots or parking garages? Will we have automated parking lot attendants that recognize robot cars to let them in and out, and when they return we discover a parking fee charged to our phone? In suburban environments, will we find strange cars in our driveways blocking our cars in? In commercial areas, could we expect to find businesses losing money because robot cars are filling their parking lots for a business down the street?
Of course we can imagine all sorts of possibilities for futuristic robot cars... whether it's as mundane as letting your car find its own parking space during those crowded Christmas shopping excursions, sporting events, concerts, amusement parks, or other parking nightmares... to the more practical, such as those busy adults, teenagers, and children with tight, varied, and/or conflicting schedules... imagine your family only has one car, but during your spouse's workday, your spouse needs to go to the doctor while you need the car an hour later to drive to a meeting... after your spouse takes the car to the doctor, it'll drive itself back to your parking lot so you can have it when you need it... Or how about mandatory automated car usage for those convicted of DUIs or DWIs? How about for teenagers and adult drivers that seem to drive by 'feel'? Imagine ambulances where you actually have two or more paramedics able to provide care en route while the vehicle drives itself? Or long-haul vehicles, with advanced, situational AI for dangerous routes like ice road hauling? How about installed hardware that comes standard, but for a price an AI module can turn your car into a robot car just by plugging it into a computer port (look out, OBD) under the hood or dash... We could also imagine emergency overrides, such as if vehicles are blocking a fire lane, hydrant, or access to a building, special override controllers would send the (automated) vehicles out of the way to find new parking spots. Could AI be developed for police and military so that armored vehicles could recognize threats and screen our people (IFF) from harm as they advance (moving screen)? Imagine fully automated dump trucks at mines, quarries, and construction sites making runs without running stop lights because the drivers are paid by the run.
But what if something goes wrong? Could the car self-analyze if it breaks down, summon a tow truck or repair vehicle, or even an ambulance or rescue team, or even use predictive or preventative schedules to maintain itself, drive itself to a shop while you sleep, and you simply see the charge the next day?
Unfortunately, we can also imagine more nefarious uses... Imagine your phone or car being hacked... Maybe the least scary is your car simply being stolen... Imagine your car making one or more unknown stops, picking up an unwanted passenger before it arrives at your location... or your car being borrowed as an escape car in a crime... used as a drug mule/transport while you sleep? ...or, as gwilki suggests, imagine a group of kids hacking several cars and having a Mario Kart race in real life around the mall parking lot, suburb, or even downtown? What about hacker 'hit men' using the cars, overriding safeties, and using them to commit vehicular homicides (or 'accidents')? Even scarier, imagine terrorists using these to deliver ordnance to targets... no need even for a suicidal zealot, just "let your fingers do the walking" (driving)... nuclear, chemical, and/or biological delivery via (hacked) car and satellite phone.
Automated cars can represent an idyllic dreamworld or a nightmarish hell, but, as with most advancing technology, it'll be somewhere in between... We're just hoping for more of the former and none of the latter...