The automotive industry is undergoing major transformations, with electric vehicles, connected vehicle technology, and automation all seeking to disrupt driving as we’ve known it for the last century. With the huge global potential of these vehicles to reshape personal transport, commercial vehicles, and society as a whole — the race to total automation is in full swing.
The market for advanced driver-assistance systems (ADAS) and autonomous vehicles (AVs) is already estimated at 516.3 million units, and it’s predicted to generate $400 billion in revenue by 2035.
Dozens of tech companies and legacy auto manufacturers worldwide, including Google’s Waymo, Nvidia, Intel, China’s Baidu, and GM’s Cruise, are working on developing the “brains” to help vehicles become smarter, more efficient, and increasingly data-driven. But to stay competitive, these companies must continue connecting the dots to make fully autonomous technology a reality.
In this blog, we’re exploring the five building blocks of automated driving (perception, localization, driving behavior, path planning, and actuation) to provide a deeper understanding of how these concepts need to interconnect to ensure AVs can navigate and interact safely in any environment.
The challenges of making automated driving a reality
(Image: Scientific American, January 5, 1918)
Humans have been dreaming about autonomous vehicles for almost as long as cars have existed. So why has it taken so long for these dreams to come to life?
AVs need to answer four fundamental questions for this to happen, and technology still hasn’t come up with all the answers. For Level 5 automation, a car must be able to drive itself in all conditions and environments, without requiring a human to take over at any time.
This means a vehicle needs to know:
- Where it is
- What’s around it
- Where it wants to go
- How it can get to this destination safely
Experts predict that complete automation could be a reality by 2035 and could quickly go mainstream. But there’s a lot that needs to happen between now and then. Let’s take a closer look at the five essential tenets that answer these questions and make automated driving safe and comfortable.
Perception
For an automated vehicle to navigate in any environment, it must understand its surroundings — whether on a highway at rush hour, at a busy seaport, or in unpredictable and hostile environments such as heavy rain or snow.
Accurate perception includes identifying and classifying objects, measuring their distance from the host vehicle, and understanding whether they are stationary — or whether speed and direction need to be attributed to them. Without a sophisticated and robust level of perception, an automated vehicle can’t make informed decisions about what to do next.
Perception relies mainly on devices like image-sensing cameras, ultrasonic sensors, and laser scanners, which feed data to in-vehicle computers and cloud-based data centers. Many of these sensory devices (such as cameras) can’t perceive their surroundings in low-visibility conditions such as darkness, snow, or fog, which means the vehicle can’t navigate safely, avoid obstacles, or adapt to changing road conditions. Other popular sensors like LiDAR can’t determine whether a traffic light is red or green, which is essential information for driving on the road.
That’s why sensors work together through sensor fusion to create redundancies. Without an array of systems that can tackle all possible scenarios, an automated vehicle is essentially “blind” in many situations.
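To give a flavor of how this redundancy might work, here’s a minimal sketch of confidence-weighted fusion of distance estimates. The sensor names, readings, and confidence values are invented for illustration, not drawn from any real AV stack.

```python
# A minimal sketch of confidence-weighted sensor fusion. The sensor names
# and confidence values are illustrative assumptions, not a real AV stack.

def fuse_distance(readings):
    """Fuse per-sensor distance estimates (meters) into one value.

    `readings` maps sensor name -> (distance, confidence), where
    confidence in [0, 1] drops for sensors degraded by conditions
    (e.g., a camera in fog). Sensors with zero confidence are ignored,
    which is what makes the array redundant: losing one sensor
    degrades, but does not blind, the vehicle.
    """
    weighted = [(d * c, c) for d, c in readings.values() if c > 0]
    if not weighted:
        raise RuntimeError("all sensors unavailable")
    total_conf = sum(c for _, c in weighted)
    return sum(dc for dc, _ in weighted) / total_conf

# Clear weather: all three sensors agree closely.
clear = {"camera": (42.0, 0.9), "radar": (41.5, 0.8), "lidar": (41.8, 0.95)}
# Heavy fog: the camera drops out entirely; radar carries the estimate.
fog = {"camera": (60.0, 0.0), "radar": (41.5, 0.8), "lidar": (41.8, 0.3)}

print(round(fuse_distance(clear), 2))  # -> 41.78
print(round(fuse_distance(fog), 2))    # -> 41.58
```

The key point is that no single sensor is trusted unconditionally: the estimate survives the loss of any one input, shifting weight to whatever can still see.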
Once perception data is gathered, it’s processed to inform the vehicle’s driving behavior.
Driving behavior
Autonomous vehicles need to make fast, accurate decisions about how to safely and confidently negotiate traffic and road layouts and avoid obstacles. This requires complex methods and systems for predictive reasoning and making high-level decisions about how the vehicle should behave in different situations.
A fully automated vehicle needs to be able to take its perception data and use this to understand things like:
- What a safe distance between vehicles is
- The rules of the road relevant to it, given its specific location
- How to anticipate the actions of other road users
- How to react to dynamic objects and road users
- How to factor in weather and road conditions
- What upcoming scenarios might look like
- What a safe and comfortable driving speed is, relative to all the above points
From basic maneuvers like changing lanes, merging into traffic, slowing for a yellow light, stopping for pedestrians, or navigating through a four-way intersection to unexpected scenarios like lane closures, roadworks, or fresh potholes, an autonomous vehicle needs to know exactly what to do next and be able to do it safely while complying with traffic regulations.
Essentially, these vehicles need to be able to make driving decisions with the same (or a better) level of judgment as an experienced human driver. Their decision-making technology must be predictable and reliable for mass adoption, integration, and social trust to occur.
To achieve Level 5 automation for driving behavior, a range of technologies is being tested to replicate driving across different scenarios and conditions. Many of these scenarios are too dangerous or challenging to replicate on public roads, so they are limited to simulated validation testing.
Machine learning algorithms, such as regression, pattern recognition, clustering, and decision matrix algorithms, are being used to develop technology that can assist with predicting movement and other events. These algorithms help vehicles learn from vast amounts of driving data to improve their decision-making over time.
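As a toy illustration of the regression idea, the sketch below fits a line to a pedestrian’s recent positions and extrapolates one step ahead. The observations and time steps are invented for the example; real prediction systems use far richer models and many more features.

```python
# A toy illustration of regression-based motion prediction: fit a line to
# a pedestrian's recent positions and extrapolate one step ahead. The data
# is invented for this example.

def fit_line(ts, xs):
    """Ordinary least squares fit for x = a*t + b; returns (a, b)."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_x = sum(xs) / n
    cov = sum((t - mean_t) * (x - mean_x) for t, x in zip(ts, xs))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var
    return a, mean_x - a * mean_t

# Observed lateral position (m) of a pedestrian over the last five frames.
times = [0.0, 0.1, 0.2, 0.3, 0.4]
positions = [1.0, 1.12, 1.19, 1.31, 1.40]

slope, intercept = fit_line(times, positions)
predicted = slope * 0.5 + intercept  # expected position at t = 0.5 s
print(round(slope, 2), round(predicted, 2))  # -> 0.99 1.5
```

Even this crude extrapolation captures the core idea: past motion constrains likely future motion, and the vehicle plans against that prediction rather than against a frozen snapshot of the scene.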
Localization
Once predictive reasoning has been established, the automated vehicle needs to determine its position and orientation within its environment, relative to semantic features of the road, at the centimeter level. In-vehicle navigation systems have existed in various forms since the 1930s, with Mazda introducing the first in-car GPS system in 1990. GPS as we know it achieved mainstream adoption in 2007, and most new cars now come with built-in GPS. Typical localization is achieved through perception systems (like computer vision and LiDAR) working in conjunction with GPS, an IMU (Inertial Measurement Unit), and odometry.
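A one-dimensional sketch of that fusion idea: odometry predicts position between fixes, and each GPS fix corrects accumulated drift. The fixed gain, drift rate, and trajectory below are illustrative assumptions; real systems compute the gain from sensor covariances (e.g., a Kalman filter).

```python
# A 1-D sketch of odometry/GPS fusion. The gain, drift, and trajectory
# values are illustrative assumptions, not from any real system.

def fuse_step(position, odom_delta, gps_fix=None, gain=0.3):
    """Advance the position estimate by odometry; blend in GPS when available.

    `gain` plays the role of a fixed Kalman gain: 0 trusts odometry alone,
    1 snaps to the GPS fix. Real systems derive it from sensor covariances.
    """
    position += odom_delta                  # predict: dead reckoning
    if gps_fix is not None:                 # correct: pull toward the fix
        position += gain * (gps_fix - position)
    return position

# Odometry over-reads by 2% each step; GPS drops out mid-run (a tunnel).
est = 0.0
steps = [(1.02, 1.0), (1.02, None), (1.02, None), (1.02, 4.0)]
for odom, gps in steps:
    est = fuse_step(est, odom, gps)
print(round(est, 3))  # -> 4.052
```

During the GPS outage the estimate drifts with the odometry error, and the first fix after the tunnel pulls it back, which is exactly the failure mode the next paragraph describes.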
However, even though GPS is a popular and trusted navigation method, it has limitations in achieving a high level of accuracy, and it struggles with availability. If you’ve ever tried to use your GPS in places like underground car parks, tunnels, or high-density city areas with tall buildings, you’ll know exactly what it’s like for your GPS signal to suddenly go dark. So for obvious reasons, it’s a huge problem if an automated vehicle only has GPS for guidance.
Centimeter-level precision is paramount for reliable localization and safety. To achieve the accuracy required for autonomous driving, different systems must work in tandem to account for potential failure points, including systems that don’t share the same failure points, like GPR’s WaveSense.
WaveSense uses ground penetrating radar to map and localize based on subsurface data. First deployed in Afghanistan in 2013, the technology has since moved into commercial industries, and GPR opens new possibilities for safety and expanded markets for autonomous vehicles and automated operations.
This level of reliability and accuracy is fundamental for effective automated navigation, ensuring a vehicle can always follow routes correctly, no matter the conditions. It has the competitive advantage of mapping a consistently detectable and stable data layer. Because the ground protects this data, it’s always available, rarely changes, and is highly unique. Once an automated vehicle can pinpoint its exact location on Earth and map it to its surroundings, it’s ready for path planning.
Path planning
Path prediction and planning help an AV map out its journey from A to B, using its localization and driving behavior knowledge to determine the safest and most efficient route.
This planning is critical so the vehicle can anticipate potential hazards, adapt smoothly to changing conditions, and consider various on-road factors, such as traffic flow and road layouts.
The challenge here is that a vehicle’s environment is never static. Road conditions and layouts can change, road surfaces can deteriorate, and a traffic accident or an unexpected detour might be just around the corner.
A fully automated vehicle needs to constantly perceive and process this changing data, react to it, and figure out a collision-free solution in real-time. This solution needs to minimize passenger risk and comply with relevant traffic laws.
Current technology that can assist with path planning includes LiDAR, radar, cameras, and ultrasonic sensors, which provide comprehensive data about the vehicle’s surroundings. Geographic Information System (GIS) navigation can provide a vehicle with vital information about road layouts and geometry, traffic conditions, and more.
Advanced algorithms and artificial intelligence (AI) are significant in path planning. They allow the vehicle to collect sensory data to understand its environment and make informed decisions. These machine-learning techniques enable vehicles to learn from past experiences and improve their path-planning strategies.
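As a minimal example of collision-free route finding, here’s breadth-first search on an occupancy grid. Production planners use richer methods (A*, lattice, or sampling-based planners), and the grid and obstacle layout here are invented for illustration.

```python
# A compact sketch of grid-based path planning with breadth-first search,
# one of the simplest collision-free planners. The grid is invented.

from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest obstacle-free path on a 4-connected grid, or None.

    `grid[r][c] == 1` marks an obstacle (e.g., a closed lane).
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# A lane closure (the 1s) forces the planner around the obstacle.
road = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
route = plan_path(road, (0, 0), (2, 3))
print(len(route))  # -> 6 cells from start to goal
```

The real problem differs in scale, not in kind: the environment changes continuously, so a vehicle re-plans against a fresh grid many times per second rather than solving the maze once.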
Together, these technologies form the backbone of the path-planning process, ensuring that autonomous vehicles can navigate safely in various driving scenarios.
Control and actuation
“Control and actuation” in autonomous vehicles translates the path-planning output and planned trajectories into precise vehicle commands such as steering, acceleration, and braking. These systems ensure that the vehicle follows the intended path accurately while maintaining stability, comfort, and safety. Control algorithms may include PID (Proportional-Integral-Derivative) controllers, model predictive control (MPC), or other advanced techniques.
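To give a flavor of the simplest of these, here’s a toy PID speed controller driving a bare-bones vehicle model toward a target speed. The gains, time step, and acceleration limits are illustrative assumptions, not tuned for any real vehicle.

```python
# A toy PID speed controller and vehicle model. Gains, time step, and
# acceleration limits are illustrative assumptions only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return a control command from the current tracking error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Nudge a vehicle from rest to 2 m/s. The "plant" maps the command
# directly to acceleration, capped at +/- 3 m/s^2 for passenger comfort.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
speed, target = 0.0, 2.0
for _ in range(200):  # 20 seconds of simulated driving
    accel = max(-3.0, min(3.0, pid.update(target - speed)))
    speed += accel * 0.1
print(round(speed, 2))  # settles close to the 2.0 m/s target
```

The proportional term reacts to the current error, the integral term removes steady-state offset, and the derivative term damps the response; the comfort cap on acceleration is what separates a smooth ride from a technically correct but jarring one.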
Level 5 automation is achieved when all the data we’ve mentioned above can be collected and processed by the AV, and turned into the physical actions needed for smooth and reliable driving in every possible scenario — which is no easy task.
Currently, the technology needed to manage control and actuation successfully involves various sensors and systems. Tools such as radars, cameras, ultrasonic sensors, LiDAR, and Global Navigation Satellite System (GNSS) all need to work in harmony to replicate a single human’s ability to steer, accelerate, or brake correctly at any given moment.
There’s no single solution for control and actuation at this stage of AV development. However, as this technology continues to evolve, we expect to see the integration of more advanced control and actuation systems that will increase the possibility of fully autonomous vehicles on the road within our lifetime.
In summary
Self-driving software is incredibly complex, with multiple sensors, data collection, and predictive intelligence required to ensure that a vehicle can sense its surroundings, make smart decisions at speed, and decide where it’s going and how to get there safely.
Although there have been significant advances in AV technology in the last few decades, ongoing research and development for vehicle localization, perception, planning, control and actuation, and driving behavior is essential for the automotive industry to realize its dreams of Level 5 automation in the next decade.
At GPR, we’re leading the way in localization technology for automated driving, by creating long-lasting, high-definition maps of road and off-road subsurfaces. Our maps are protected by the ground and remain accurate in even the most challenging environmental conditions. Contact us to learn more about GPR for localization.