Fully driverless cars: why they have many of the same vulnerabilities as ‘Web3’
‘Web3’ is in quotes to make clear that it refers here to the concept of “decentralised blockchains and crypto”, as opposed to the Semantic Web.
Driverless cars, constantly touted as a solution to the problem of error-prone human drivers, are in fact a very dangerous proposition for the time being. Much of the reason for this is that, to function in any reasonably safe capacity, they require centralisation, and in this specific case the only viable form of centralisation is an actual driver. Let’s go into why this is, and why decentralised driverless cars are a problem.
A fully decentralised system can’t account for external factors
Suppose you have a fully decentralised driverless car: it drives with no data beyond what it can perceive through its sensors. Can it know how busy a particular road is? Can it know whether potholes or other known obstacles lie in its path, perhaps around corners its sensors cannot see, and should be avoided or dealt with? How does it know to slow down for a speed bump if it can’t tell one from an ordinary incline in the road? How fast should it drive on a motorway with a minimum speed limit if it can’t see any other cars there and doesn’t know what that minimum is?
There are many problems a decentralised car can solve on its own: slowing down when the car in front slows, spotting certain obstacles and reacting to them, and so forth. But without some form of centralised data given to it, there are also many things it cannot work out, simply because it lacks the means to do so. As such, it needs to be given various details in order to operate reasonably.
The problem with this is that those details are still ultimately given by decentralised systems, and this presents many risks.
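To make the dependency concrete, here is a minimal sketch in Python (all names, fields and units are hypothetical, not any real vehicle API) of a controller that can act on what its sensors perceive, but is helpless for anything that must come from an external feed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorFrame:
    """What the car can perceive on its own (hypothetical fields)."""
    lead_car_speed_mps: Optional[float]  # None if no car detected ahead
    obstacle_ahead: bool

@dataclass
class ExternalData:
    """What the car must be told, because no sensor reveals it (hypothetical)."""
    speed_limit_mps: float
    min_speed_mps: float
    road_closed_ahead: bool

def choose_target_speed(frame: SensorFrame, ext: ExternalData) -> float:
    """Pick a target speed: some decisions need sensors alone, others
    are impossible without the externally supplied data."""
    if frame.obstacle_ahead or ext.road_closed_ahead:
        return 0.0
    # Sensor-only behaviour: match a slower car in front.
    target = ext.speed_limit_mps
    if frame.lead_car_speed_mps is not None:
        target = min(target, frame.lead_car_speed_mps)
    # Feed-only behaviour: a minimum limit is invisible to the sensors.
    return max(target, ext.min_speed_mps)
```

Every field of `ExternalData` is a value the car simply has to trust, and that trust is exactly where the risk enters.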
A decentralised car system cannot have a non-driver failsafe
Unlike most applications, where decentralised systems have human failsafes that can fix them in the event of a problem, a decentralised car system has more or less zero time to be fixed before catastrophes begin occurring, because many cars will already be on the road. This is similar to Web3, where the moment you compromise a currency, huge amounts of money can be stolen immediately; alternatively, an attacker can slowly monopolise the currency instead, but in both cases the catastrophe is already under way.
As such, if you are going to build such a system, it cannot afford to fail under any circumstances: if it does, not only would the results be horrific, but public trust in the system would disappear instantly. Even a centralised human failsafe, acting to correct the erroneous information, would not be able to do so in time.
Imagine what could happen if such a system were compromised: you could push out information giving every car a different speed limit for a particular motorway, or telling them a road was open when in fact it was closed and impassable. In an instant you could cause many cars to crash; with sufficient control of the system, you could even try to eliminate someone travelling in a particular car by manipulating the information about that stretch of road or the driving limits around it.
Of course, some security mechanisms will always exist in these systems: it’s unlikely that any such system would ever allow a user to give two different cars different speed limits for the same motorway, for example. But if you wanted to eliminate someone or cause a giant crash, you wouldn’t need to do that. Just give every car on the motorway the same sudden, dangerous speed limit change, or other rogue parameters that prevent normal safe operation, and watch them all crash.
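As a toy illustration of that gap (purely hypothetical, not any real validation scheme): a safeguard that only enforces consistency between cars happily accepts an update that is uniformly malicious:

```python
def passes_consistency_check(limits_by_car: dict) -> bool:
    """Hypothetical safeguard: reject an update only if two cars on the
    same motorway would receive different speed limits."""
    return len(set(limits_by_car.values())) <= 1

# A sudden drop to ~18 mph for every car on a 70 mph motorway is
# internally consistent, so it sails straight through the check.
malicious_update = {f"car_{i}": 8.0 for i in range(200)}  # 8 m/s ≈ 18 mph
assert passes_consistency_check(malicious_update)
```

Plausibility checks (bounds, rate-of-change limits) would narrow the window, but any parameter the cars are built to obey remains a lever for whoever controls the feed.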
In a sense, this vulnerability partly exists already: modern GPS navigation in cars is often supplemented by live information, e.g. from Google Maps, that shows when a particular road is congested. An entity able to compromise that feed could effectively change the route of most cars simply by suggesting other routes were more congested than they really were.
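A toy sketch of that rerouting attack (road names and timings invented): the attacker never touches a car, only the congestion numbers the router compares:

```python
# Honest travel-time estimates in minutes for two candidate routes.
reported_times = {"main_road": 20, "side_streets": 35}

def pick_route(times: dict) -> str:
    """Choose whichever route currently reports the shortest time."""
    return min(times, key=times.get)

print(pick_route(reported_times))  # -> main_road

# Falsify only the congestion data for the main road...
reported_times["main_road"] = 60   # claim it is jammed
print(pick_route(reported_times))  # -> side_streets: traffic rerouted
```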
The only failsafe is a human driver
Human drivers, being human, are far from perfect. They can be drunk; they can be tired, stressed or distracted; they can be careless.
However, they are also the only viable failsafe when it comes to driving a car. Imagine, for example, that your GPS’s real-time data has been compromised, and it tells you that several routes are congested or that a particular road is open when it is not.
A human driver who knows the local area will recognise how unusual such information is and may choose to ignore it. They can see hastily erected closure signs at the roadside and act on them, where a driverless car might simply treat them as roadside objects and therefore no concern. This is one of the reasons why trained taxi drivers, with intricate knowledge of a local area, often perform far better in their role than random Uber drivers relying on their GPS to tell them where to go.
There isn’t a replacement for this. The same problem applies to planes, ships and every other mode of transport: humans often merely watch over the controls, letting the vehicle handle its own navigation, but they are still necessary should that navigation fail or encounter a problem. Most planes are perfectly capable of flying themselves on autopilot, yet that doesn’t mean human pilots are no longer needed in the cockpit.
Web3 has the same problem: you can’t safely run a currency system, or a proof-of-ownership system, without human failsafes that continually check on it as it operates. Without them, by the time you know the system has failed, it is already too late to do anything about it.