Autonomous Cars Must Think Like Babies To Drive Like Humans

We are all born with common sense. (Although that’s debatable with certain people.) It’s common sense that lets us navigate the world and make snap decisions. And if we are ever to achieve fully self-driving, Level 5 autonomous vehicles, then artificial intelligence (AI) technology must have it too.

  • Common sense is the missing ingredient to get fully autonomous cars on the road.
  • Because humans have common sense, they can make important split-second decisions when driving.
  • So far we can’t program common sense, which makes autonomous vehicles unpredictable in certain situations.
  • Understanding how babies learn could provide the key to finding a solution.

Common sense seems pretty, well, common. But it’s way more complicated than you think, and it comes into play all the time when you’re driving. If you’re going down the highway and see a tree branch in the middle of the road, what do you do? What if it’s a kid darting across? How about a dog or a huge puddle? Or, as Melanie Mitchell, professor of computer science at Portland State University and external professor at the Santa Fe Institute, posits in her Aeon article, a snowman?

Of course, in each of these scenarios, common sense dictates the correct move. For example, depending on how big the tree branch is and whether there’s a car in the next lane, you’d either swerve out of the way or drive over it. If it’s a kid, you wouldn’t think twice about taking every evasive measure not to hit her. You’d probably drive right over the puddle, unless you just got a car wash. As for the snowman, you might enjoy the fun of crashing right through it or perhaps appreciate its aesthetic value and go around. Either way, no harm, no foul.

It Comes Down To Common Sense

Each thing we encounter on the road requires a quick assessment of what it is and how it’s expected to behave so we can take the proper action. Self-driving cars need the same ability to discern what type of object they’re encountering and adjust. If they make a mistake, it could cause an accident.

The largest percentage of accidents involving self-driving cars consists of human drivers rear-ending them. So why does this happen? Because self-driving cars slam on the brakes for objects human drivers wouldn’t. AI can perceive an object, but it doesn’t have the “knowledge” or background to figure out what the object is and take the appropriate action. We are making progress, though: Waymo just demonstrated one of its self-driving cars avoiding a cyclist.

So, let’s train autonomous vehicles and devise more comprehensive rules so they can make discerning decisions like we do. Easier said than done. The most advanced AI systems use deep neural networks, algorithms trained to spot patterns. They learn from the statistics of the examples we feed them.
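
To make that concrete, here is a minimal sketch of what “learning from the examples we feed them” looks like. It uses PyTorch with made-up data and labels; it isn’t anything from a real self-driving stack, just an illustration that the network only ever absorbs the statistics of its training set.

```python
# A minimal sketch (not any production self-driving system) of a deep
# neural network learning from labeled examples. The "sensor readings"
# and the obstacle/harmless labels are invented for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fake training data: 200 examples, each a 4-number "sensor reading",
# labeled 1 (brake-worthy obstacle) or 0 (harmless, e.g. a puddle).
features = torch.randn(200, 4)
labels = (features[:, 0] > 0).float().unsqueeze(1)  # arbitrary rule the net must rediscover

# A tiny feed-forward network: all it can ever do is spot statistical patterns.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# The model now classifies inputs that resemble its training data,
# but it has no concept of what a branch, a child, or a snowman *is*.
print("final training loss:", loss.item())
```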

How Do You Program Innate Knowledge?

Humans make calculations not only from new information gathered from the environment, but also from innate knowledge of certain basic concepts. As Mitchell explains, these “help to bootstrap our way to understanding – including the notions of discrete objects and events, the three-dimensional nature of space, and the very idea of causality itself.”

From our basic brain architecture, we also possess nascent concepts of sociality. That’s how babies recognize simple facial expressions and learn language and its role in communication. Add to that the experience of growing up in the world, which develops the social and emotional aspects of understanding. After decades of AI research, we still don’t quite know how to teach machines to grasp the world the way we do.

We’re at a place where AI is smart, but it doesn’t have the refinement of common sense. For humans, that refinement comes from broad experiential knowledge and the ability to adapt our actions to the situation. When driving in particular, our brains have to process information in seconds, or fractions of a second, to determine what to do. Seen in that light, it’s amazing common sense is so common.

The Gap In How We Are Teaching Computers

The way we’ve been teaching AI systems is by cataloguing human knowledge: manually programming, crowdsourcing, or web-mining commonsense “assertions”, computational representations of stereotyped situations. However, feeding in this “knowledge” doesn’t give computers the full landscape, because it leaves out the intuitive knowledge we’re born with. Here’s another riddle wrapped in a mystery inside an enigma: most of our core knowledge isn’t written down, spoken, or even in our conscious awareness. How do we teach something we may not know or be able to access?
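
For a sense of what those catalogued “assertions” look like, here is a tiny illustration. The facts and relation names are invented for this article, loosely in the spirit of crowdsourced knowledge bases like ConceptNet; the point is that anything left off the list is simply invisible to the system.

```python
# Hypothetical hand-catalogued commonsense assertions: simple relational
# facts. None of this is a real knowledge base; it only shows the shape
# of the approach and its blind spots.
assertions = [
    ("tree_branch", "is_a", "inanimate_object"),
    ("tree_branch", "capable_of", "damaging_a_car"),
    ("child", "is_a", "person"),
    ("person", "capable_of", "changing_direction_suddenly"),
    ("puddle", "is_usually", "safe_to_drive_through"),
]

def known_facts(concept: str):
    """Return every stored assertion about a concept; unlisted concepts yield nothing."""
    return [a for a in assertions if a[0] == concept]

print(known_facts("snowman"))  # [] -> the system has no idea what a snowman is
```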

Since AI doesn’t have common sense, it’s going to make unpredictable (to us) mistakes. Okay, so we need to bridge the knowledge gap between human and machine to get autonomous cars safely on the road.

Maybe we should back up to how babies go about learning common sense and use that to build the underpinnings of AI systems. And that’s exactly what the US Defense Advanced Research Projects Agency (DARPA), a major funder of AI research, did. In a recently launched four-year program on “Foundations of Human Common Sense”, it posed the challenge of creating an AI system that learns from “experience” and matches the cognitive abilities of an 18-month-old baby.

Harnessing Baby Brain Power

Why use an 18-month-old’s brain power as the standard? By that age, babies have acquired “object permanence”: the understanding that an object still exists even when it’s blocked by another. They also know that colliding objects don’t pass through each other but affect each other’s motion. In addition, they grasp that agents with intentions, such as humans or animals, can change an object’s motion.

But that’s not all. An 18-month-old can grasp what another person can or cannot see and recognize when someone shows signs of needing help. Knowing all the skills a baby acquires by this age, the terrible twos make a whole lot of (common?) sense. By then, tykes have developed the awareness, processing ability, and desire to make life hell. That part, I think, we should keep out of AI’s curriculum.

For DARPA’s Foundations of Human Common Sense challenge, researchers will develop a computer program that learns common sense from videos or virtual reality. DARPA will then evaluate each team’s model by running experiments similar to those performed with infants and measuring the results.
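
As a rough, hypothetical sketch of what such an evaluation could look like, here is a “violation of expectation” trial in miniature, the kind of setup infant researchers use for object permanence. DummyLearner and its prediction_error method are stand-ins invented for this illustration, not DARPA’s actual protocol; a real entry would compute its “surprise” from predictions over video or virtual-reality scenes.

```python
# Hypothetical violation-of-expectation trial: a learner should be more
# "surprised" by a physically impossible outcome (a ball vanishing behind
# a screen) than by an expected one. Everything here is a stand-in.
import random

class DummyLearner:
    """Stand-in for a model trained on videos or virtual-reality scenes."""
    def prediction_error(self, clip: str) -> float:
        # A real learner would compare predicted frames against observed ones.
        # Here we fake it: clips tagged "impossible" produce larger errors.
        return random.uniform(0.0, 0.1) + (1.0 if "impossible" in clip else 0.0)

def violation_of_expectation_trial(learner, expected_clip: str, impossible_clip: str) -> bool:
    """Pass the trial if the learner is more 'surprised' by the impossible outcome."""
    return learner.prediction_error(impossible_clip) > learner.prediction_error(expected_clip)

learner = DummyLearner()
passed = violation_of_expectation_trial(
    learner,
    expected_clip="ball_rolls_behind_screen_and_reappears",
    impossible_clip="ball_rolls_behind_screen_and_vanishes_impossible",
)
print("object permanence trial passed:", passed)
```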

You Can’t Just Teach To The Test

However, and you knew there had to be a however, just because an algorithm can match the performance of a human doesn’t mean it has the same comprehensive intelligence. Here’s why: the AI program will be designed for a specific test and trained to pass it. This is similar to teachers basing lessons around improving standardized test scores. You can teach kids to be successful within confined, set parameters, but that doesn’t represent the comprehensive knowledge they need to succeed in the world. There’s too much unpredictability that isn’t factored in.

I don’t mean to crash through your snowman and crush all your autonomous car dreams. Only by identifying what technology needs to accomplish and figuring out where it falls short can we pave the way to a solution. And, I have faith there will be one. Human ingenuity never fails to amaze me. Did you ever think we’d have a rideshare submarine or a flying car? Neither did I. But somebody did.

