I would challenge two parts of Musk's argument: that a camera-only computer vision system can cost-effectively match human driving and vision performance, and that humans are safe drivers using only their eyes.
To me, self driving seems like the opposite of Moore's law.
In recent decades, one of the principles of software engineering was to account for how much computing power would improve when planning multi-year projects. Meaning, you could write something that was too heavy for today's machines but would be bleeding edge in five years.
IMHO, self-driving is actually the inverse situation.
Piloting a car in a human-centered environment is difficult and requires the machine to behave humanly. This, of course, requires an absurd amount of data and training to pull off. But what happens when self-driving adoption increases? At a certain point, so many driverless cars will be roaming the streets that most daily interactions will be automated by letting the cars negotiate in a nice, deterministic, algorithmic way. Thus, reliance on predictive and opaque systems like neural networks will be needed less and less, actually reducing the complexity of self-driving.
My main point is: what should win is the tech that makes cars drive as well as trained professional drivers. Once that's done, adoption will drive down human driving and thus unpredictable behavior on the road, reducing the computational load needed to perform tasks correctly. Next, cars will start to behave more programmatically and deterministically and will need fewer sensors and less tech. Car companies will have accurate maps of everything, and cars will mostly become shuttles that can rely more on predetermined routines and less on world models, especially as smart cities gain a foothold too.
A well-written article that is a pleasure to read, in our slop-driven day and age.
I have been skeptical of the no-extra-sensors approach Tesla took since they announced it. It is obvious that extra sensors are what can make self-driving cars outperform human drivers, not merely match them.