In a world of autonomous cars, the autonomous driver will be king.
We’re all familiar with the image of that one person walking steadfast against the throng of automatons. It’s a delicious visual, especially for the palate of rugged-individualist America. When autonomous cars become affordable for the average person, it’s only a matter of time before there are enough of them that traffic becomes a smooth-flowing stream of cars free from human error.
That’s the utopian hellscape that car enthusiasts fear, and we cling tightly to our keys and steering wheels as symbols of vehicular freedom. You can have my stickshift when you pry it from my cold, dead fist.
But the daily commute is currently just a miserable chore. In the future, it will be a chance to fully exercise your freedom.
The self-driving car is already here, albeit for a small market. With just a software update, a Tesla can even drop you off at your front door, then go park itself in your garage (which is presumably far away, because you’re rich enough to buy a Tesla). Uber is testing self-driving taxis in a small area of Pittsburgh. The Google car got pulled over. Soon enough, they’ll be able to do everything.
I suspect that within 10 years, this technology will have trickled down to most new cars. A short time after that, autonomous cars will make up most of the traffic on the road. It’s at that point that the self-driving human can regularly exploit traffic’s programmed responses and even engage in sensor-fooling behavior to become the white-shirted Dr. Pepper girl of the daily commute.
Like any good video game player, I see driving within a rigidly structured system as an opportunity to find exploits: weaknesses within the system. The rules of automated traffic will be no different from those of a computer game, and they can often be bent to your advantage.
Just ask anyone who’s ever found that wall-riding in Gran Turismo is sometimes faster than taking the proper racing line. You can make the system’s models work in your favor. This becomes especially tempting when you know that these automated cars, unlike their human counterparts, won’t ever try to retaliate by tailgating, speeding up to prevent a pass, or coming back around to cut you off.
Their goal is to get their passengers around safely by avoiding obstacles like you. Getting past traffic then becomes a matter of figuring out how traffic deals with you.
Some of these scenarios are fairly easy to imagine. Say you’re preparing to exit a business and turn right onto a busy street, where traffic is crowded but fast-flowing. You might wait a long time for a gap between human drivers, but there might be a way to trick robot drivers into slowing down to leave a gap for you.
“The cars are going to have cameras on them to look for things like stop signs. It wouldn’t be a huge stretch to say that if you paint a stop sign on the side of your car, they’ll stop for you,” says Ben Thomas, a former software developer at HERE Maps. “That actually probably will be a problem initially. Something we actually currently have trouble with? Having computers differentiate between an actual stop sign and one on a school bus. It’s been a pain in our ass for years. The image processing guys think they have something ~90-percent accurate, but it takes a massive amount of computer power, well beyond what a robot car would have the power for. Also, you could have a stop light on your car. It should trigger the cameras just the same.”
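Just to make the difficulty concrete, here’s a toy sketch of the kind of filtering a robo-car might need. To be clear, none of this is real HERE Maps or Tesla code; the detector, the thresholds, and the moving-sign check are all invented for illustration. But it shows why a stop sign painted on a moving car is such a headache:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "stop_sign"
    confidence: float   # classifier score, 0.0 to 1.0
    speed_mps: float    # estimated speed of the object carrying the sign

# Hypothetical tuning values; a real system would derive these from testing.
CONFIDENCE_THRESHOLD = 0.9   # the "~90-percent accurate" problem in a nutshell
MAX_SIGN_SPEED = 0.5         # real stop signs don't move (school buses do)

def should_stop(det: Detection) -> bool:
    """Decide whether a detected stop sign warrants braking.

    A naive detector stops for anything that looks like a stop sign.
    Filtering out signs attached to moving objects (a school bus, or a
    prankster's paint job) means tracking the sign over time, which is
    the computationally expensive part Thomas describes.
    """
    if det.label != "stop_sign":
        return False
    if det.confidence < CONFIDENCE_THRESHOLD:
        return False
    # Context check: a sign moving with traffic is probably attached to a
    # vehicle. Skip this (costly) check and the painted-sign exploit works.
    return det.speed_mps <= MAX_SIGN_SPEED

print(should_stop(Detection("stop_sign", 0.95, 0.0)))   # roadside sign -> True
print(should_stop(Detection("stop_sign", 0.95, 15.0)))  # painted on a car -> False
```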
Alternatively, there’s a brute-force method (also known as the I’m-a-jerk method): approach the stop line at speed and stomp on the brakes just before you get there. The Google car will see you coming, anticipate a crash, and slow down, leaving room for you to jump in. You might be able to drive fairly aggressively with only a minor risk of crashing, though the robo-car might be programmed to tolerate a little fender-rubbing rather than slam on the brakes every time.
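What is the robot actually computing when it decides to yield? Probably something like time-to-collision. Here’s a made-up sketch of the check the brute-force method would be exploiting; the threshold is invented, and this is nobody’s actual crash-avoidance logic:

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither car changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # the gap is opening; no collision course
    return gap_m / closing_speed_mps

# Hypothetical threshold: yield if impact is under 2 seconds away.
TTC_BRAKE_THRESHOLD_S = 2.0

def robot_should_yield(gap_m: float, closing_speed_mps: float) -> bool:
    return time_to_collision(gap_m, closing_speed_mps) < TTC_BRAKE_THRESHOLD_S

# The I'm-a-jerk method: charge the gap at 10 m/s from 15 m out.
# TTC = 1.5 s, so the robot brakes and leaves you room to jump in.
print(robot_should_yield(gap_m=15.0, closing_speed_mps=10.0))  # True
```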
“I don’t personally think that the cars should take any action within their physical tolerances to avoid a crash,” Thomas says. “If the decision matrix for how to drive got out of whack, it could slam on the brakes over and over, and babies and old people are kind of fragile. At least initially, it would probably be best overall if you just had cars react within certain standard human tolerances.”
Which raises the question: How do autonomous cars respond when you’re side by side on the highway and you turn on your turn signal? How do they respond to tailgaters? Will they move over to the slow lane if you come up behind and flash to pass, or will they hold their lane “for optimal traffic flow”? (If they move out of the fast lane when I flash my brights, I give robots a +1 in Good Traffic Behavior over humans.)
Forcing a decision on the robots might just sound like you’re driving like a jerk, but remember: the computer driver might not be capable of kindness, either.
If you’re trying to make that same right turn out of the parking lot and traffic is moving briskly bumper-to-bumper—the design ideal, if all traffic is autonomous—that leaves you with no reasonable opening. A computer won’t open a gap to let you in, at least not unless programmers anticipate the need for this feature. This would involve writing some bit of “be a decent human being” into the script. If that’s not already implemented, you’re left having to shove into the system or you’ll never leave. (A real jerk will just hang out and block the intersection. There’s a vulnerability even pedestrian protesters can exploit.)
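If a programmer ever did bake that “be a decent human being” bit into the script, a minimal sketch might look like this. Everything here (the routine, the patience limit, the gap target) is hypothetical:

```python
# A hypothetical "be a decent human being" routine: if a car has been
# stuck waiting at an intersection ahead, ease off to open a gap.
# All names and numbers are invented for illustration.

PATIENCE_LIMIT_S = 10.0   # how long a waiting car suffers before we yield
GAP_TARGET_M = 8.0        # roughly a car length plus margin

def courtesy_speed(current_speed_mps: float,
                   waiting_time_s: float,
                   gap_to_leader_m: float) -> float:
    """Return a (possibly reduced) target speed to open a merge gap."""
    if waiting_time_s < PATIENCE_LIMIT_S:
        return current_speed_mps      # not our problem yet
    if gap_to_leader_m >= GAP_TARGET_M:
        return current_speed_mps      # a gap already exists
    return current_speed_mps * 0.8    # ease off until the gap opens

print(courtesy_speed(15.0, waiting_time_s=12.0, gap_to_leader_m=4.0))  # 12.0
```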
So you nose out and try to see at what point an autonomous car notices you and slows down in anticipation of an accident. Or you shove in and let the robots deal with you.
“Can we discourage this behavior while also prioritizing safety? No, not any more than we do now,” Thomas says. “The best way to discourage this would be to have robot cars behave as close to humans (humans who are good drivers) as possible, but without leveraging the extra response time and control over the vehicle that a computer would have. Why? Because part of the safety has to be on the other driver. If you make the car as responsive and reactive as possible, it’s going to encourage humans to drive aggressively. If you don’t, then the other drivers are still somewhat on the hook, because they probably don’t want to die.”
Sure, eventually the programming will be sophisticated enough that you’re unlikely to get stuck in a situation like this forever. But think of the bugs you run into on your smartphone, your computer workstation, or your home PC. Did your favorite app start crashing a lot after a recent update? There are problems, compromises, and workarounds in any piece of software, even one that’s been around a long time.
Overlay those problems on a networked system of self-driving cars, and plop a non-networked vehicle with free will in there. How will the network react? Will you be able to get anywhere? You might have to push the boundaries.
“The network of automated cars is maybe more creepy than you realize,” Thomas says. “I know that the car companies want to have all of their cars networked together to share information about construction, accidents, and road conditions that mapping companies don’t have in the map because they happened 10 minutes ago. It wouldn’t take a huge stretch of imagination to have them also networked together to tell each other about ‘asshole drivers.’ It could be entirely possible that once you get flagged as an asshole, other robot cars would give you a safety bubble for a certain amount of time. (I really doubt such data could be stored permanently without violating some kind of privacy law).”
This implies that you might be able to just dive into traffic and do whatever you want, because you’d always have a safety bubble. But there will be some limitations.
“You couldn’t just barge in to traffic and immediately start acting like a dick and get a bubble. The cars are going to have to have a tolerance for what to consider, because computing cycles and network bandwidth are finite and they probably shouldn’t waste time on everything. Example: That person didn’t have enough room to merge but tried to anyway. Are they an asshole I should tell other cars to avoid? Or, is it more likely this was a one time mistake? Maybe they just sneezed really hard while holding the steering wheel. You can’t build a pattern off of one data point, in this example. So you couldn’t just do whatever you want all the time,” Thomas says.
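Translated into code, Thomas’s you-can’t-build-a-pattern-off-one-data-point rule might look something like this hypothetical flagging scheme. The class, the thresholds, and the expiry window are all invented; the point is just that one incident doesn’t make a pattern, and nothing sticks around forever:

```python
import time
from collections import defaultdict

# Hypothetical parameters for the "asshole flag" Thomas speculates about.
INCIDENT_THRESHOLD = 3        # one sneeze is forgiven; a pattern is not
INCIDENT_WINDOW_S = 30 * 60   # incidents expire after 30 minutes

class DriverReputation:
    """Track recent aggressive-driving incidents per (anonymized) vehicle.

    Incidents expire, so nothing is stored permanently -- matching
    Thomas's guess that a permanent record would raise privacy problems.
    """
    def __init__(self) -> None:
        self._incidents: dict[str, list[float]] = defaultdict(list)

    def report(self, vehicle_id: str, now: float | None = None) -> None:
        now = now if now is not None else time.time()
        self._incidents[vehicle_id].append(now)

    def needs_bubble(self, vehicle_id: str, now: float | None = None) -> bool:
        now = now if now is not None else time.time()
        recent = [t for t in self._incidents[vehicle_id]
                  if now - t < INCIDENT_WINDOW_S]
        self._incidents[vehicle_id] = recent   # drop expired data points
        return len(recent) >= INCIDENT_THRESHOLD

rep = DriverReputation()
rep.report("plate-123", now=0.0)                 # one data point: maybe a sneeze
print(rep.needs_bubble("plate-123", now=1.0))    # False
rep.report("plate-123", now=60.0)
rep.report("plate-123", now=120.0)
print(rep.needs_bubble("plate-123", now=130.0))  # True: now it's a pattern
```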
This sounds remarkably human, which is a complicated thing. All the social traffic norms we’re innately familiar with will either have to be programmed into autonomous cars to accommodate human drivers, or human drivers will have to establish new traffic norms to deal with autonomous cars.
My bet is on the latter: updates to autonomous traffic programming will be slow to roll out because they’ll have to be tested and verified. Older-model cars won’t have the sophistication of newer ones; somebody’s going to be rolling around on AndroiDriver 1.6 forever because the hardware can’t handle v2.0.
Humans adapt much more quickly. And once you’re learning how to drive so that you get a desirable response from a computer, you’re essentially just playing a game—a driving game in real life, a game where most other vehicles are trying very hard to avoid crashing.
If traffic is going to be a game, I’ll be one of the people playing it until I learn how to use all the rules to my benefit. When all drivers are robots, the driver with free will can beat traffic.