The 2008 Pixar movie “Wall-E” follows the refuse-based adventures of a sentient, autonomous trash compactor whose primary function is to clean an abandoned city on a planet Earth long since deserted by humanity. The movie highlights some of the issues that would likely arise from over-reliance on an automated lifestyle: waste management, obesity and human environmental impact, to name a few. “Wall-E” is set hundreds of years in the future, but some of those issues are already with us today.
The transportation sector is a multitrillion-dollar global industry. There is money to be made, and mistakes, too. While we are probably a ways off from sentient automobiles, the age of vehicle autonomy is well upon us. Every week, another company releases an update, patch or application that nudges autonomous tech in a new direction.
There have been setbacks – name me a sector that doesn’t have any – but cars that are less reliant on humans are here to stay. This is almost universally viewed as a positive, and many benefits are cited to support that position, such as:
- Fewer accidents.
- A move away from owned vehicles toward rented or shared ones, lessening the need for parking garages.
- A productivity increase during commuting time.
- A reduction in traffic congestion.
There are many more, but the age of connectivity comes with risks, and one must exercise caution with any new technology. What happens when things go wrong? Computers malfunction; we are all familiar with Windows’ blue screen of death.
You are turning over your most precious commodity – your family – to a computer. And if that computer fails at the moment you are trusting it most – say, when it is in full autonomous mode – how will that failure play out? In what manner will it fail? Most likely in whatever manner the lowest-bidding subcontractor designed it to fail.
Even if it does not fail, a computer still needs to be told what to do, at least initially. Computers can learn and make better iterative decisions over time, but what do you tell a computer it should do when it is faced with a myriad of conflicting inputs?
Autonomous vehicles – flying ones – have been around a long time. Most commercial airliners are autonomously piloted more than 90% of the time. Aircraft, along with the routes they take, are heavily regulated, and they essentially all report in to the same system around the world. There is a reason all pilots worldwide must communicate in English: one universal language avoids miscommunication and errors.
Autonomous automobiles have none of that. There is no central control, no clearinghouse and no standardization, to the extent that even the levels of autonomy differ by manufacturer. They can, though, roughly be classified in the following manner:
Level 0 — No Automation
The baseline since Gottlieb Daimler traded horse power for horsepower. Level zero applies to any vehicle that relies solely on a human to dictate driving actions. That is my car, and almost every car that has come before it. At best, it has cruise control – but the “dumb” version that will drive you into a wall if you let it. Example: my 2009 Honda Ridgeline truck.
Level 1 — Driver Assistance
What does this level offer? A little automation, but not much. At level one, adaptive cruise control or lane departure warning comes as standard on your vehicle. While the human driver still supervises everything, the vehicle can make some decisions on its own. Example: your eco-friendly neighbor’s 2016 Toyota Prius.
Level 2 — Partial Automation
Level two is a step up from driver assistance. It combines multiple automated functions – lane assist, automatic braking, adaptive cruise control – so that they work in a smooth, coordinated fashion. Anticipating traffic-signal changes, making lane changes and scanning for hazards remain the driver’s domain. Example: the Audi your boss drives that has Traffic Jam Assist as standard.
Level 3 — Conditional Automation
A car running level three automation can take full control of the vehicle during certain parts of a journey, under certain conditions and within certain parameters. The vehicle will, however, hand control back to the human driver when it encounters a situation it cannot handle or input data it cannot interpret. The onus is therefore on the driver to stay alert, because the vehicle may demand intervention at any moment. The March 2018 incident in Tempe, AZ, in which a pedestrian was killed, involved a test vehicle operating at level three autonomy. Example: Audi’s Traffic Jam Pilot in the A8.
Level 4 — High Automation
A vehicle at level four automation does not require a human on board for certain journeys, subject to geographic and road-type limitations. These are currently being tested, and we should see them within the next 18 months – think Amazon last-mile and pizza-delivery vehicles. Example: Johnny Cab, from the original “Total Recall.”
Level 5 — Full Automation
At level five, beyond inputting the destination – which will probably be done via your phone beforehand – there is no driver involvement. You enter the vehicle, start your movie or open your laptop, and that is it until you reach your destination. Example: KITT from “Knight Rider.”
Level 6 — Beyond Full Automation
Well, there is no level six – at least not yet. What would level six look like if it did exist? A teleporter? Something that transports you from your bedroom, via the bathroom and kitchen, straight to the office? A flying car? We have returned to Wall-E territory. Example: the Jetsons’ Aerocar.
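The taxonomy above can be sketched as a simple lookup. This is purely illustrative – the enum names and the helper function are my own assumptions, not any manufacturer’s or SAE’s API – but it captures the key dividing line: at levels zero through three, a human must remain ready to drive; only at levels four and five does that requirement drop away.

```python
from enum import IntEnum

class SaeLevel(IntEnum):
    """Driving-automation levels as described above (illustrative names)."""
    NO_AUTOMATION = 0           # human dictates all driving actions
    DRIVER_ASSISTANCE = 1       # adaptive cruise OR lane departure tech
    PARTIAL_AUTOMATION = 2      # combined functions; driver supervises
    CONDITIONAL_AUTOMATION = 3  # car drives, but may hand control back
    HIGH_AUTOMATION = 4         # no human needed on certain journeys
    FULL_AUTOMATION = 5         # no driver involvement at all

def driver_must_stay_alert(level: SaeLevel) -> bool:
    """Levels 0-3 all assume a human ready to intervene;
    levels 4 and 5 do not."""
    return level <= SaeLevel.CONDITIONAL_AUTONOMY if False else \
        level <= SaeLevel.CONDITIONAL_AUTOMATION

# A level three car may demand intervention at any moment:
print(driver_must_stay_alert(SaeLevel.CONDITIONAL_AUTOMATION))  # True
print(driver_must_stay_alert(SaeLevel.HIGH_AUTOMATION))         # False
```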
Technology in vehicles is designed to assist us and make us safer. For good reason, some of the car and tech companies working on autonomous driving have said they will not release anything below level four: either make people drive, or let the machine do all of the work. Partial implementation risks scaring people away from the technology. The more reliant you are on tech, the harder it is when you suddenly do not have it. When, in an instant, the computer hands full control back to you because its inputs are confusing, are you ready?
What does the future look like? We should expect accidents to become less frequent but, given the complexity now hidden under a fender, more expensive when they do happen: lower frequency, higher severity.
Software updates can be problematic; you would not push beta software to an airliner, for example. A recent over-the-air update by Tesla reportedly disabled the Autopilot system. Put too much automation in the cockpit or the car, and things can go bad when the computer receives an input it does not understand.
Walt Disney promised us self-driving cars back in 1958. They are here – somewhat – but 60 years is a long time to wait in line. Then again, with robotaxis already hitting our roads, the future has arrived more quickly than most people anticipated.