kottke.org posts about driverless cars
Tonight, Elon Musk shared part two of Tesla's "Master Plan" (here's part one, from 2006). The company is going all-in on sustainable energy, building out their fleet of available vehicle types (including semi trucks and buses), and pushing towards fully self-driving cars that can be leased out to people in need of a ride.
When true self-driving is approved by regulators, it will mean that you will be able to summon your Tesla from pretty much anywhere. Once it picks you up, you will be able to sleep, read or do anything else en route to your destination.
You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you're at work or on vacation, significantly offsetting and at times potentially exceeding the monthly loan or lease cost. This dramatically lowers the true cost of ownership to the point where almost anyone could own a Tesla. Since most cars are only in use by their owner for 5% to 10% of the day, the fundamental economic utility of a true self-driving car is likely to be several times that of a car which is not.
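The "offsetting and at times potentially exceeding" claim is easy to sanity-check with a quick sketch. Every number below is an illustrative assumption of mine, not a Tesla figure:

```python
# Sketch of the "shared fleet offsets your payment" arithmetic from the
# Master Plan excerpt above. All numbers are illustrative assumptions.
monthly_payment = 700.0       # assumed monthly loan/lease cost ($)
fleet_hours_per_day = 4       # hours/day the car is lent to the fleet (assumed)
net_revenue_per_hour = 8.0    # fares minus energy and wear, assumed ($)

monthly_fleet_income = fleet_hours_per_day * net_revenue_per_hour * 30
offset = monthly_fleet_income / monthly_payment
print(f"Fleet income ${monthly_fleet_income:.0f}/mo covers "
      f"{offset:.0%} of the payment")
```

Under those (generous) assumptions the fleet income exceeds the payment, which is the scenario Musk is gesturing at; halve the hours or the net rate and it merely offsets part of it.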
In cities where demand exceeds the supply of customer-owned cars, Tesla will operate its own fleet, ensuring you can always hail a ride from us no matter where you are.
Summing up: Tesla, Uber, and probably Apple all want to replace human drivers with robot chauffeurs. It's a race between the Jetsons' future and the Terminator's future. Fun!
When people drive cars, collisions often happen so quickly that they are entirely accidental. When self-driving cars eliminate driver error in these cases, decisions on how to crash can become premeditated. The car can think quickly, "Shall I crash to the right? To the left? Straight ahead?" and do a cost/benefit analysis for each option before acting. This is the trolley problem.
How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you'd want your car to sacrifice you instead? Two? Six? Twenty?
The video above introduces a wrinkle I had never considered before: what if the consumer could choose the sort of safety they want? If you had to choose between buying a car that would save as many lives as possible and a car that would save you above all other concerns, which would you select? You can imagine that answer would be different for different people and that car companies would build & market cars to appeal to each of them. Perhaps Apple would make a car that places the security of the owner above all else, Google would make a car that prioritizes saving the most lives, and Uber would build a car that keeps the largest Uber spenders alive.1
Ethical concerns like the trolley problem will seem quaint when the full power of manufacturing, marketing, and advertising is applied to self-driving cars. Imagine trying to choose one of the 20 different types of ketchup at the supermarket except that if you choose the wrong one, you and your family die and, surprise, it's not actually your choice, it's the potential Trump voter down the street who buys into Car Company X's advertising urging him to "protect himself" because he feels marginalized in a society that increasingly values diversity over "traditional American values". I mean, we already see this with huge, unsafe gas-guzzlers driving on the same roads as small, safer, energy-efficient cars, but the addition of software will turbo-charge this process. But overall cars will be much safer so it'll all be ok?
Chris Urmson is the Director of Self-Driving Cars at Google[x] and in March, he gave a talk at TED about the company's self-driving cars. The second half of the presentation is fascinating; Urmson shows more than a dozen different traffic scenarios and how the car sees and reacts to each one.
It will be interesting to see how roads, cars, and our behavior will change when self-driving cars hit the streets. Right now, street markings, signage, and automobiles are designed for how human drivers see the world. Computers see the road quite differently, and if Google's take on the self-driving car becomes popular, it would be wise to adopt different standards to help them navigate more smoothly. Maintaining painted lines might be more important, along with eliminating superfluous signage close to the roadway. Maybe human-driven cars would be required to display a special marking alerting self-driving cars to potential hazards.1 Positioning of headlights and taillights might become more standard.
Human drivers, cyclists, and pedestrians will necessarily adapt to self-driving cars as well. Some will take advantage of the cars' politeness. But mostly I suspect that learning to interact with self-driving cars will require a different approach, just as people talk to computers differently than they do to other humans -- think of how you formulate a successful search query, speak to Siri, or, more to the point, manipulate a Wii remote so the sensor dingus on top of your TV can interpret what you're doing.
Observations from a Mountain View resident about driving with self-driving cars.
Google cars drive like your grandma -- they're never the first off the line at a stop light, they don't accelerate quickly, they don't speed, and they never take any chances with lane changes (cut people off, etc.).
And we know how easy it is to take advantage of old people:
It's safe to cut off a Google car. I ride a motorcycle to work and in California motorcycles are allowed to split lanes (i.e., drive in the gap between lanes of cars at a stoplight, slow traffic, etc.). Obviously I do this at every opportunity because it cuts my commute time in 1/3.
Once, I got a little caught out as the traffic transitioned from slow moving back to normal speed. I was in a lane between a Google car and some random truck and, partially out of experiment and partially out of impatience, I gunned it and cut off the Google car sort of harder than maybe I needed to... The car handled it perfectly (maybe too perfectly). It slowed down and let me in. However, it left a fairly significant gap between me and it. If I had been behind it, I probably would have found this gap excessive and the lengthy slowdown annoying. Honestly, I don't think it will take long for other drivers to realize that self-driving cars are "easy targets" in traffic.
But the overall opinion is that self-driving cars are excellent at driving.
I think that, inevitably, non-self-driving cars will eventually be banned from the roads to let SD cars operate at their full potential (which personally I'm not thrilled about as I'm a car-nut and I love to drive).
Driving may not have Second Amendment protection, but I predict a hell of a fight against banning non-self-driving cars from the roads, akin to how some people feel about guns. "You can pry the steering wheel from my cold dead hands", that sort of thing. As future drivers feel threatened and membership dwindles and radicalizes, perhaps AAA will become more like the present-day NRA. (via mr)
Update: Several people pointed out that those in the pro-driving camp may not have much of a choice whether to keep driving or not. As more self-driving cars are put into use, insurance rates for human drivers will rise because the pool of insured will shrink and self-driving cars will prove to be safer by an order of magnitude or more. And then driving will return to being a hobby for the wealthy, like car racing is now.
Update: Paul Barnsley is an economist specializing in risk, and he wrote in about the implications of self-driving cars on insurance rates:
I don't buy the theory that self-driving cars will do much to current insurance premiums. The pool of drivers will shrink, sure, but the average quality of its members will stay pretty constant, maybe even improve (if better-than-average drivers want to stay behind the wheel) and they'll be driving in a lower risk environment (because of all the other self-driving cars).
So less risk will be shared over a smaller, but still plenty large, group. Since you can run a viable insurance market over a much smaller group of people than "all car owners in the US", I'd expect the lowered-risk effect to dominate and premiums to drop relative to their current levels, though they will be high in comparison to self-driving, which may be your correspondents' argument.
Interesting. Thanks, Paul!
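Paul's pooling argument can be sketched numerically. The point is that a premium tracks expected loss per insured driver, not the size of the pool (once the pool is big enough for a viable market). All figures below are illustrative assumptions, not actuarial data:

```python
# Toy model of the insurance argument above. Numbers are made up.
def premium(claims_per_driver_year, avg_claim_cost, loading=1.2):
    # Expected annual loss per driver, plus a 20% loading for
    # overhead and profit. Pool size never enters the formula.
    return claims_per_driver_year * avg_claim_cost * loading

today = premium(claims_per_driver_year=0.05, avg_claim_cost=6000)

# Later: fewer human drivers in the pool, but each one faces less risk
# because most surrounding cars are self-driving (assumed 40% fewer
# collisions per remaining driver).
later = premium(claims_per_driver_year=0.05 * 0.6, avg_claim_cost=6000)

print(today, later)  # the smaller pool still pays less, not more
```

A shrinking pool only raises premiums if the remaining drivers are riskier on average, which is the opposite of what Paul expects here.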
Jan Chipchase writes about Twelve Concepts in Autonomous Mobility, aka behavioral and design considerations of self-driving vehicles. You may not have thought of some of these.
Nanny mode: vehicles that are assigned to pick up young children from school, but end up trailing them at a discreet distance because the kids prefer to walk home alone.
Car surprise: when you come across your car somewhere where you didn't expect it to be and witness your vehicle engaging in unexpected activities e.g. picking up flowers at the mall: the equivalent of catching your parent or kid smoking or shoplifting.
And why is Google, an advertising company, interested in self-driving cars? Perhaps this:
Trailer trashing: where dodgy looking vehicles are assigned to trail an otherwise apparent owner either as a joke or to send a message e.g. a hearse sent by a debt collection agency to scare up payment. You'll also see this happen with more aggressive companies who send a vehicle around to their competitors to send a message, recruit their staff or to gather intelligence. Task Rabbit or San Da ha + autonomous mobility + intent. The most obvious market for this will be straight-up advertising.
In 2015, you can follow brands on Facebook, Twitter, and Instagram. In 2023, the brands follow you! Around town!
The trolley problem is an ethical and psychological thought experiment. In its most basic formulation, you're the driver of a runaway trolley about to hit and certainly kill five people on the track ahead, but you have the option of switching to a second track at the last minute, killing only a single person. What do you do?
The problem becomes stickier as you consider variations of the problem:
As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you -- your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?
As driverless cars and other autonomous machines are increasingly on our minds, so too is the trolley problem. How will we program our driverless cars to react in situations where there is no choice to avoid harming someone? Would we want the car to run over a small child instead of a group of five adults? How about choosing between a woman pushing a stroller and three elderly men? Do you want your car to kill you (by hitting a tree at 65mph) instead of hitting and killing someone else? No? How many people would it take before you'd want your car to sacrifice you instead? Two? Six? Twenty? Is there a place in the car's system preferences panel to set the number of people? Where do we draw those lines and who gets to decide? Google? Tesla? Uber?1 Congress? Captain Kirk?
If that all seems like a bit too much to ponder, Kyle York shared some lesser-known trolley problem variations at McSweeney's to lighten the mood.
There's an out of control trolley speeding towards a worker. You have the ability to pull a lever and change the trolley's path so it hits a different worker. The first worker has an intended suicide note in his back pocket but it's in the handwriting of the second worker. The second worker wears a T-shirt that says PLEASE HIT ME WITH A TROLLEY, but the shirt is borrowed from the first worker.
Reeeeally makes you think, huh?
Burkhard Bilger got inside the secretive Google X lab and reports back on the search giant's effort to build a self-driving car.
The Google car has now driven more than half a million miles without causing an accident -- about twice as far as the average American driver goes before crashing. Of course, the computer has always had a human driver to take over in tight spots. Left to its own devices, Thrun says, it could go only about fifty thousand miles on freeways without a major mistake. Google calls this the dog-food stage: not quite fit for human consumption. "The risk is too high," Thrun says. "You would never accept it." The car has trouble in the rain, for instance, when its lasers bounce off shiny surfaces. (The first drops call forth a small icon of a cloud onscreen and a voice warning that auto-drive will soon disengage.) It can't tell wet concrete from dry or fresh asphalt from firm. It can't hear a traffic cop's whistle or follow hand signals.
And yet, for each of its failings, the car has a corresponding strength. It never gets drowsy or distracted, never wonders who has the right-of-way. It knows every turn, tree, and streetlight ahead in precise, three-dimensional detail. Dolgov was riding through a wooded area one night when the car suddenly slowed to a crawl. "I was thinking, What the hell? It must be a bug," he told me. "Then we noticed the deer walking along the shoulder." The car, unlike its riders, could see in the dark. Within a year, Thrun added, it should be safe for a hundred thousand miles.
America's legal system will make it difficult for self-driving cars to be accepted here...while not a legal kerfuffle yet, see Tesla's current difficulties w/r/t fire risk in electric cars for a taste of what's to come with self-driving cars. Europe is more likely...someplace like Holland or Denmark. They take their public and personal transportation seriously over there.
Brad Templeton imagines how the design of cars and other transportation systems might change with widespread use of driverless cars. I especially like the robocar used as a mobile office or a place to get a good night's sleep as you travel from one place to the other.
The in-car environment will become more of a work and entertainment space than just a travel space. Passengers will expect things like a screen, a keyboard, and a desk. Passengers may wish to face one another (though not all are comfortable riding backwards).
Quiet will be a very important consideration, though passengers will be allowed to wear headphones if desired, unlike drivers today.
The smooth ride (especially on the highway) of a robocar may generate demand for cars for night-travel, while the passengers sleep. Such vehicles might aim to make a trip last 8 hours rather than make the fastest possible trip, and as such would be much more energy efficient for such trips.
(This also requires a very low crash rate, as seat belts don't work as well on flat beds.)
My guess is that the first big market for driverless cars will not be the US but somewhere smaller, more urban, and more used to experimentation with alternate modes of transportation. (via the atlantic)
Driverless cars are the type of innovation that may have unanticipated consequences. Sure, you can read Twitter while you're being spirited around by your robotic car, but driverless cars may also end private car ownership. And what will intersections look like when used exclusively by driverless cars? Perhaps a little like this:
"There would be an intersection manager," Stone says, "an autonomous agent directing traffic at a much finer-grain scale than just a red light for one direction and a green light for another direction."
Because of this, we won't need traffic lights at all (or stop signs, for that matter). Traffic will constantly flow, and at a rate that would probably unnerve the average human driver.
I wonder how people will abuse or have fun with driverless cars. Driver- and passenger-less car joyrides? Will they be hackable and if so, dangerous?
In a short essay about The Unintended Effects of Driverless Cars (like the kind being tested by Google), Koushik Dutta guesses at what they might mean for the future of transportation.
Currently, a car spends 96% of its time idle. Compare that with planes, which spend almost their entire lifetime in operation/airborne. Idle planes aren't making money, and they need to recoup their hefty $120M price tag. There is an unforgiving economic incentive to make sure it is always in use.
The proliferation of driverless cars will have a similar effect. Cars will spend less time idle: why would a household buy 2 (or even 3) cars, when they only need 1? Ride to work, then send the car home to your spouse. Need to go grocery shopping, but your kid also needs a ride to a soccer game? No problem, a driverless car can handle that.
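The household math in Dutta's excerpt is worth making explicit. The 96%-idle figure comes from the passage above; the household numbers are assumptions of mine:

```python
# Back-of-the-envelope utilization math for the passage above. The
# 96%-idle figure is from the post; household numbers are assumed.
car_active_hours = 24 * (1 - 0.96)   # ~1 active hour per car per day

# A multi-car household's combined driving still fits easily inside a
# single car's day, which is why one shared driverless car could
# replace two or three owned ones (assuming trips rarely overlap).
household_cars = 3
combined_active = household_cars * car_active_hours
idle_even_then = 1 - combined_active / 24
print(f"{combined_active:.2f} h of driving/day; the shared car is "
      f"still idle {idle_even_then:.0%} of the time")
```

The overlap assumption is doing real work here: a family of four all commuting at 8:30am breaks the model, which is exactly the scheduling problem the next paragraph gets at.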
Most people don't need cars most of the time but pay for the convenience of having one nearby when they do. Schedule-able on-demand driverless cars could eliminate that need, with the added bonus of expanding effortlessly to fit current capacity (e.g. imagine a family of four needing to go in four different directions at four different times...just schedule four Hertz Driverless pickups from your phone). Of course, people said similarish things about the Segway...