Challenges to Overcome on the Road to Self-Driving Cars

If you’re one of the millions of Americans who commute to work by car, chances are you’ve had the following thought while stuck in seemingly endless traffic: screw this!

Wouldn’t it be nice if the car could do the traffic-laden driving for you so that you could respond to emails, catch up on the news, watch an interesting video, or even take a quick nap while being shuttled along to your destination? Yes, it would, and if you’re one of the few who say they love driving – even in traffic – you’re either lying or we need to have a serious chat.

Beyond freeing up our time spent sitting in a vehicle and making driving safer and relatively traffic-free, there are many benefits to the development and widespread adoption of autonomous vehicles (AVs). But while the future seems closer than ever, there are still many hurdles to overcome before we get there. From moral quandaries to multi-billion-dollar infrastructure needs, a driverless future can only be achieved if we face down – and conquer – some formidable challenges.

Solving Ethical and Physical Development Problems

[Image: Volvo XC90 detecting a cyclist]
Autonomous vehicles will need to be programmed to make nearly impossible moral decisions in certain situations, raising concerns around who is at fault in a collision. (Volvo)

One overarching question that’s been pondered by top researchers and even the former hosts of Top Gear is how to deal with the ethical and moral implications of letting a robot take the wheel of a vehicle. The example most commonly used is a variation on a classic ethical thought experiment known as the “Trolley Problem,” which introduced this scenario way back in 1967:

You spot a runaway trolley heading towards a group of five tied-up or otherwise incapacitated people on the tracks. However, you are standing next to a lever that would divert the trolley to a different track, instead killing only one person. Do you:

  1. Do nothing and allow the trolley to collide with the five people on its original track?
  2. Pull the lever, diverting the trolley onto the side track where it will collide with one person?

This experiment has been modified to include convicted criminals, children, even dogs and cats, but the difficulty in making a decision remains the same. While it’s a puzzling conundrum for humans, imagine having to program a robot to make a similar decision one way or another. What do you tell it to do? And if the unthinkable occurs, who is at fault?

Researchers at the Massachusetts Institute of Technology (MIT) studied the problem to better understand how it relates to autonomous vehicles. Using a platform they called the Moral Machine, they gathered data from nearly 2 million online participants in over 200 countries.

While the researchers found many important differences in opinion based on region, ethical preferences, and more, three overarching themes emerged: respondents generally chose to spare humans over other animals, many lives rather than a few, and the young over the old. Although the level of autonomy needed to make these decisions has yet to be reached, it is a question that engineers – or even government agencies – will eventually have to settle before driverless cars become a reality.
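To make the abstraction concrete, here is a deliberately toy sketch – nothing like a real AV planning system – of how preferences such as the Moral Machine’s three themes could, in principle, be written down as a weighted scoring function. Every class name, weight, and number below is invented for illustration.

```python
# Toy illustration only: encoding the Moral Machine's three broad preferences
# (humans over animals, more lives over fewer, young over old) as weights in a
# scoring function. Real autonomous-driving software does not work this way;
# all names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans: int            # humans harmed if this outcome occurs
    animals: int           # animals harmed if this outcome occurs
    avg_human_age: float   # average age of the humans involved

def harm_score(o: Outcome, w_human=10.0, w_animal=1.0, w_youth=0.05) -> float:
    """Lower score = 'preferred' outcome under this toy weighting."""
    # Younger victims increase the penalty, reflecting the "spare the young" theme.
    youth_penalty = max(0.0, 80.0 - o.avg_human_age) * w_youth * o.humans
    return w_human * o.humans + w_animal * o.animals + youth_penalty

# Two hypothetical outcomes the vehicle could choose between:
swerve = Outcome(humans=1, animals=0, avg_human_age=70)
stay   = Outcome(humans=5, animals=0, avg_human_age=35)

best = min([("swerve", swerve), ("stay", stay)], key=lambda pair: harm_score(pair[1]))
print("Toy model prefers:", best[0])
```

Even this trivial example exposes the real problem: someone has to pick those weights, and the MIT data suggests that different regions and cultures would pick them differently.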

Beyond the ethical implications, AVs still face a number of physical challenges, from weather complications to decision-making in unfamiliar scenarios. If you live in an area with frequent inclement weather and drive a vehicle with advanced safety systems, you may have noticed that snow, mud, or other grime can block important sensors and render functions like automatic emergency braking useless. Now imagine that the entire vehicle relied on those sensors just to operate properly.
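As a rough illustration of why blocked sensors matter so much, here is a minimal, hypothetical example of how a driver-assist system might gate its features on sensor health. The Sensor class, the sensor names, and the thresholds are all assumptions made up for this sketch, not any manufacturer’s actual logic.

```python
# Hypothetical sketch: gating driver-assist features on sensor health.
# The Sensor class, sensor names, and thresholds are invented for illustration;
# production systems use far more elaborate diagnostics and redundancy.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    blocked: bool        # e.g., lens or emitter covered by snow or mud
    confidence: float    # self-reported signal quality, 0.0 to 1.0

def usable(sensor: Sensor, min_confidence: float = 0.7) -> bool:
    return not sensor.blocked and sensor.confidence >= min_confidence

def select_driving_mode(sensors: list[Sensor]) -> str:
    healthy = [s for s in sensors if usable(s)]
    if len(healthy) == len(sensors):
        return "full feature set available"
    if any(s.name == "front_radar" for s in healthy):
        return "degraded: emergency braking only, driver must take over"
    return "manual driving required"

sensors = [
    Sensor("front_camera", blocked=True, confidence=0.2),   # snow on the lens
    Sensor("front_radar", blocked=False, confidence=0.9),
    Sensor("lidar", blocked=False, confidence=0.4),          # weak returns in heavy snow
]
print(select_driving_mode(sensors))   # -> degraded: emergency braking only, driver must take over
```

The point is that a car with no human fallback has to both detect the degradation and decide, entirely on its own, how much driving it can still safely do.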

Driverless cars will also have to deal with a large number of human drivers when the first fully autonomous models reach American roads, and as any driver knows, other motorists can be aggressively unpredictable. Our experience behind the wheel helps us make split-second decisions, but AVs will have to be programmed to handle every possible situation.

Building and Updating the Grid for Autonomous Vehicles

[Image: Waymo’s self-driving Chrysler Pacifica Hybrid]
For autonomous vehicles to be fully capable, Vehicle-to-X technologies will need to be implemented on a widespread scale, especially in our infrastructure. (Waymo)

Beyond developing vehicles that can drive themselves, the world around those vehicles will have to become smarter and more connected than ever, and on a far wider scale. This is the overarching goal behind many Vehicle-to-X (V2X) technologies in development, and it will be essential to making fully autonomous vehicles a reality.

In short, V2X tech allows vehicles to communicate with each other; with important pieces of road infrastructure like traffic lights, smart signs, and more; and even with pedestrians and other devices to increase safety. By generating and sharing data among all of these parties, AVs will be better able to move efficiently and safely through urban, suburban, and even rural areas – ideally reducing traffic to almost nothing.
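For a sense of what “talking to a traffic light” might look like in practice, here is a simplified, hypothetical sketch of a vehicle reacting to a signal-phase broadcast. Real V2X deployments use standardized message sets (such as Signal Phase and Timing messages) over dedicated radio links; the message format, function names, and numbers below are invented for illustration.

```python
# Hypothetical sketch: a toy version of the kind of signal-phase message a
# connected traffic light might broadcast, and how a vehicle could use it to
# pace its approach. The format and logic here are invented; real V2X relies
# on standardized messages (e.g., SPaT) over DSRC or cellular V2X radios.
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    intersection_id: str
    current_phase: str          # "red", "yellow", or "green"
    seconds_to_change: float    # time until the phase switches

def advise_speed(msg: SignalPhaseMessage, distance_m: float,
                 current_speed_mps: float, speed_limit_mps: float = 15.0) -> float:
    """Return a target speed (m/s) aimed at arriving while the light is green."""
    time_to_light = distance_m / max(current_speed_mps, 0.1)
    if msg.current_phase == "green" and time_to_light <= msg.seconds_to_change:
        return min(current_speed_mps, speed_limit_mps)   # we will make the green: hold speed
    # Otherwise, pace the approach to arrive roughly when the phase changes.
    return min(speed_limit_mps, distance_m / max(msg.seconds_to_change, 1.0))

msg = SignalPhaseMessage("5th_and_main", "red", seconds_to_change=12.0)
print(f"Advised speed: {advise_speed(msg, distance_m=90.0, current_speed_mps=13.0):.1f} m/s")
# -> Advised speed: 7.5 m/s (ease off and roll up as the light turns green)
```

Multiply that by every intersection, sign, and vehicle on a road network, and both the scale of the infrastructure problem and the potential traffic benefits become clear.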

Of course, all of this is easier said than done. You’re probably thinking: If my city or state can’t even fix a pothole in a timely manner, how are they possibly going to pay for and install traffic lights that can talk to my car? It’s a fair question, and one that cities, states, and even the U.S. Department of Transportation are considering in an effort to make smart roads less of a pipe dream and more of a reality.

There’s also the question of how to update the electrical grid for cities and towns to handle a massive increase in electric vehicles, as most AVs and other future cars, trucks, and SUVs will likely be gas-free. But that’s another problem entirely…

Getting People to Trust the Technology

[Image: Volvo 360c concept illustration of a sleeping passenger]
Despite the moral and technological hurdles, the top concern for driverless car development is how to get humans to trust the technology enough to use it. (Volvo)

Perhaps the most pressing question for those developing AVs doesn’t have to do with ethics, weather, or infrastructure, but rather with our own fears. A study in the Journal of Engineering and Technology Management found that public trust is the main barrier to the adoption of driverless cars, and that for us to trust them, they must meet certain standards for performance and reliability, as well as for privacy and data security.

This is similar to the phenomenon behind the widespread fear of flying rather than driving, despite the fact that roughly 1.2 million people die in car accidents globally each year, compared with just 79 deaths from air accidents in 2017. Logically, driving should be the more frightening of the two because it is inherently more dangerous, but it is the lack of control over the situation that makes most people more afraid of flying.

As these technologies become more prevalent and advanced in the vehicles we drive and ride in, it’s likely that our trust in them will continue to increase. But as it stands, one of the most important challenges to overcome on the road to driverless cars is not the robots behind the wheel, but their passengers.


About the Author

  • Brian Leon is a freelance automotive journalist and former Associate Editor of the New York Daily News Autos. He is currently a master’s student at Uppsala University in Sweden, studying marketing and completing a thesis on trust in autonomous vehicles.
