Another day, another expression of techno-utopian naivete, this time from Elon Musk.
Less than an hour ago, the Tesla chief published a response to criticism of Tesla’s Autopilot. He noted that other automakers are also worried about self-driving cars. But then Musk, apparently assuming you’d take an engineer’s objections to a new product at face value, decided he had an idea. You may recall that just last year, Musk showed off a test drive in a car that drove itself using artificial intelligence. Here’s what the latest post says:
This car almost killed me because the system tries to detect potential threats in its environment by prompting the driver to perform actions it is not ready to perform. And that, by the way, was the best I could do. I was stopped in traffic with a fatality risk of 6 per cent. But if you keep driving, just saying “everything ok?” and “no big deal?”, then you are much worse off. You are essentially saying, “I’m gonna let the car do it, but I’m not gonna act like I’m allowed to.”
I suppose these bad ideas come as no surprise, given that Musk is a product designer at heart. What he’s proposing here is a video-game version of self-driving, one in which you have to sit there and play along.
In short, Musk is arguing for a new set of rules of engagement: let the car decide when and where to separate you from the vehicle. But that’s exactly the wrong way to approach any project of this kind. It’s far more important to work out the social dynamics of our interaction with the technology, and the problems that interaction can create.
I don’t doubt Musk’s sincerity, or his capability as a technologist. But he’s raising all sorts of problems. For one thing, his idea is so extreme that it won’t work. In fact, a company with a market capitalization of $57 billion could find its valuation driven almost entirely by the prospect of eventually being able to sell what Musk proposed here. And it’s likely that any artificial intelligence, built on whatever sort of neural network Musk envisions, would need to be overseen by some sort of regulator.
For another, I have a sneaking suspicion that most people wouldn’t be willing to be separated from the car, myself included.
Musk may genuinely think that keeping our arms outstretched and giving the driverless car a chance to “clear” itself will make everything all right. But I think there’s also a certain comfort in having someone else control (that’s right, control) our actions, and I suspect that real-world testing would bear that out.
Of course, none of this means people won’t interact with vehicles. In fact, they probably will. But they’ll do so with other people, not with the system.
Once all of us are, on some level, interacting with a vehicle, as toddlers do with toy cars and as I imagine teenagers will do with real ones, I don’t see much reason to believe we’ll do anything but respect the limits of its autonomous capabilities.
The basic lesson, though, remains the same: if Elon Musk thinks we should let technology work without any social rules, I’m afraid he’s probably wrong.