“Once you trust a self-driving car with your life, you pretty much will trust Artificial Intelligence with anything.” ~Dave Waters
We have been strong proponents and optimists when it comes to the continuous developments in the field of self-driving cars, and other autonomous technologies for that matter. We have looked at the field from different angles and placed it on a pedestal a number of times over the past year. We thought everyone was on board the hype train that has swept the headlines in recent months – that is, until we examined some recent statements from Steve Wozniak, the famous co-founder of Apple.

Earlier in January, at the Nordic Business Forum in Stockholm, Wozniak criticized Tesla's autonomous technology and shared his unusual (at least by celebrity standards) opinions about the electric-car maker. Back then, he accused Tesla of "overhyping its self-driving technology", which was rather surprising given his initial enthusiasm for the company – he owned two Tesla Model S cars. Now, Wozniak has reflected on the field as a whole and made some interesting comments about it. He explains why he believes self-driving cars will never become a widespread phenomenon and why AI is simply not up to the task. This made us consider the flip side of the coin: is the optimism around self-driving cars overhyped, and what is the case against it?

It has been 42 years since Steve Wozniak founded Apple together with Steve Jobs and Ronald Wayne. Even today, the words of the 68-year-old venture capitalist carry a certain weight when he reviews and comments on contemporary and future technology; it is safe to say we should all be listening when he speaks. Wozniak has once again criticized what he sees as an exaggerated hype around Tesla – the whole technology area still seems to be a stone in the veteran's shoe. During an event in Barcelona, he explained why he has now simply lost confidence in the notion that self-driving cars will become widespread in the future.
He simply believes that artificial intelligence will never be able to cope with all the demanding aspects that driving involves – at least not at a level comparable to human drivers. "They have to drive on human roads. If they had train tracks, there would be no problem at all. I don't believe that kind of visionary intelligence is going to be like a human," Wozniak said in a statement to Arabian Business. As an example of a complex challenge for the cars, he cites temporary road signs placed by the police. "The artificial intelligence of cars is developed to identify everything that's normal on the road – not that which is abnormal. They will not be able to tell what is on the signs and understand what that means. I've really given up," he says.
Who else is pessimistic about it all?
Disregarding the catastrophic Uber accident that occurred earlier this year, let's look at some other indicators that have surfaced previously. In 2016, researchers from the US road-safety authority NHTSA organized an open meeting about self-driving cars and concluded that the cars were not yet mature enough to be released on a larger scale: a number of technical issues needed to be solved before anything was deployed more widely. NHTSA believed it was necessary to act quickly, since some cars already carry technology associated with self-driving, such as automatic emergency braking and lane keeping.
A wide range of companies have announced similar plans; GM, for instance, aims to field self-driving cars together with Lyft within a few years. Google, which has been testing self-driving cars for a number of years, argued before the US Congress that NHTSA should be given the authority to grant the company special permission to run cars without steering wheels or pedals in traffic. However, critics at the meeting – which included consumer organizations, engineers and car manufacturers – pointed out that the technology is still weather-dependent, and that road maintenance is sometimes so poor that systems for following lane markings cannot work.
Autonomous cars cannot yet respond to commands from a police officer who wishes to stop the vehicle, and the technology for interpreting traffic signals is still too primitive. "Until self-driving cars have learned to handle everyday traffic, letting them out on the roads is a threat to public safety," warned Mark Golden of the National Society of Professional Engineers.
What questions arise from all this? First: how should the car's computer analyze its surroundings? Currently, two technologies compete with each other – image recognition (computer vision) and laser scanning (lidar). The former is advocated by Tesla's Elon Musk, as it allows the car to "recognize" what is in its environment. The latter is championed by Google and relies on ultra-fast laser pulses that scan the surroundings and build a 3D model of them. Both technologies have their drawbacks: lidar is significantly more expensive and does not offer the computer the same opportunity to "understand" what is in the environment, while computer vision requires light and access to huge amounts of data for image recognition to work.
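To make the lidar half of that comparison a little more concrete, here is a minimal sketch of the geometry involved: each laser pulse yields a measured distance plus the beam's direction, and accumulating many such returns produces the 3D point cloud that forms the car's model of its surroundings. The function name and angle conventions below are our own illustration, not any vendor's actual API.

```python
import math

def lidar_return_to_point(distance_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range + beam angles, hypothetical
    conventions) into an (x, y, z) point in the sensor's frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Standard spherical-to-Cartesian conversion:
    x = distance_m * math.cos(el) * math.cos(az)  # forward
    y = distance_m * math.cos(el) * math.sin(az)  # left
    z = distance_m * math.sin(el)                 # up
    return (x, y, z)

# A sweep of returns accumulates into a point cloud – the raw material
# for the 3D model of the environment mentioned above.
scan = [(10.0, 0.0, 0.0), (10.0, 90.0, 0.0), (5.0, 0.0, 30.0)]
point_cloud = [lidar_return_to_point(*ret) for ret in scan]
```

Note that the output is purely geometric: the cloud says where surfaces are, but not what they are – which is exactly the "understanding" gap, relative to camera-based computer vision, that the paragraph above describes.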
What about "car hacking"? No, not carjacking. We know that countries engage in various forms of cyberwarfare (even if it is not widely discussed in the media), so what happens when yet another layer of society is intertwined with potentially hackable technology? Third-party rogue groups, or even individuals with the right tools, could break into these systems. Self-driving cars are highly exposed to hacker attacks – that's a given. Even though such attacks are usually illegal, car manufacturers will not escape their responsibility to deliver products that are as intrusion-resistant as possible. If something goes wrong, where do we draw the line when the manufacturer did not protect the car well enough against a hostile (physical) takeover? Who bears the responsibility? How much electronic vulnerability can be accepted in a car when it is handed over to the customer?
What about conflicts with conventional cars? Self-driving cars do not "drive like people", which means fewer drivers will accept them when they start appearing in traffic. If the cars behave in a way that differs markedly from how everyone else drives, it will at worst cause accidents, and at best frustration. Several car manufacturers and companies are now studying how self-driving cars interact with people and how accidents of this kind can be avoided. The cars must be able to show a certain degree of creativity at times, but without putting anything or anyone at risk. Once self-driving cars are in the majority and have their own lanes, things get easier, but as long as they share the road with human drivers, there is a risk of accidents.
These are some fundamental questions that not only Steve Wozniak but the wider community has raised. As autonomy is implemented in more areas of society, they will undoubtedly need to be addressed. Toning down the optimism is difficult without coming across as a buzzkill or a neo-Luddite – but it needs to be done for the sake of clarity and safety.