With autonomy slowly taking hold in the automotive industry, driverless vehicles are generating a lot of interest from the public and from industry alike. Despite the rising intrigue around this kind of transportation technology, vehicular autonomy is still in its infancy. Since we've only scratched the surface and won't master autonomous vehicles any time soon, the number of questions surrounding the concept is staggering. One of the most common is: what technologies actually make these cars driverless?
As I noted in a previous article, safety concerns surrounding these vehicles run high while trust in them remains low. Autonomous vehicles are packed with sensors that monitor their surroundings, detect oncoming obstacles, and help determine the car's course of action. Contrary to popular belief, there is no single standard setup: OEMs are experimenting with different types of sensors, whose effectiveness and usability depend on factors like affordability, how each sensor perceives its surroundings, and how quickly its data can be processed.
The three main sensors used in autonomous vehicles are cameras, radar, and LiDAR. Each has its own unique attributes, upsides, and shortcomings that affect the performance of a driverless car differently.
1. Cameras
One of the biggest upsides of cameras is that they are optical, letting an autonomous vehicle literally see its surroundings. Cameras are effective at classification and texture interpretation, are widely available, and are more affordable than radar or LiDAR. They were among the first sensors used in driverless vehicles and remain the top choice for OEMs. Processing camera imagery is computationally intense and algorithmically complex, but cameras capture color, making them better suited to interpreting surrounding scenery such as traffic lights and road signs.
The latest high-definition cameras capture millions of pixels per frame, with some shooting 30 to 60 frames per second, and pair with powerful processors to produce intricate imagery. At those frame rates, each camera generates hundreds of megabytes of raw data every second. Consequently, the cost of processing power can be astronomical, since manufacturers tend to mount as many cameras in as many areas of the vehicle as possible.
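To put that in perspective, here's a rough back-of-the-envelope sketch of the raw data rate from a single camera. The resolution, color depth, and frame rate are illustrative assumptions, not specs from any particular OEM:

```python
# Back-of-the-envelope estimate of the raw data rate from one camera.
# All figures are illustrative assumptions, not specs from any OEM.

PIXELS_PER_FRAME = 1920 * 1080   # ~2 million pixels per 1080p frame
BYTES_PER_PIXEL = 3              # 24-bit RGB color
FRAMES_PER_SECOND = 60           # upper end of the 30-60 fps range above

bytes_per_second = PIXELS_PER_FRAME * BYTES_PER_PIXEL * FRAMES_PER_SECOND
print(f"One camera: ~{bytes_per_second / 1_000_000:.0f} MB of raw pixels per second")
# One camera: ~373 MB of raw pixels per second
```

Multiply that by the several cameras dotted around a vehicle and it's easy to see why the processing bill adds up so quickly.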
2. Radar
Radar is an abbreviation of radio detection and ranging. Computationally, radar is lighter than a camera: it uses radio waves to determine an object's distance, its relative speed, and even the angle at which it sits relative to the vehicle. Although radar outperforms cameras and LiDAR in select situations (like bad weather), it has lower angular accuracy and generates less data than LiDAR. Unlike cameras, radar has no data-heavy video feed to process, so the processing power needed to handle its output is lower than that required for LiDAR or cameras. Another upside is that, because radio waves reflect off surfaces, radar doesn't need a clear line of sight and can use those reflections to detect objects hidden behind obstacles.
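The underlying math is approachable. Below is a minimal sketch of the two textbook relationships at play: distance from a pulse's round-trip time, and relative speed from the Doppler shift of the returned wave. Production automotive radars typically use more elaborate frequency-modulated continuous-wave (FMCW) processing, so treat this purely as an illustration; the example numbers are assumptions.

```python
# Textbook radar relationships, simplified for illustration only.
# Production automotive radars typically use FMCW processing instead.

C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance to a target, from a radio pulse's round-trip time."""
    return C * round_trip_s / 2

def speed_from_doppler(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial (closing) speed of a target, from the Doppler shift of the echo."""
    return doppler_shift_hz * C / (2 * carrier_hz)

# An echo returning after 0.5 microseconds puts the target ~75 m away...
print(f"range: {range_from_echo(0.5e-6):.0f} m")             # range: 75 m
# ...and a 10 kHz Doppler shift on a 77 GHz carrier (a common automotive
# radar band) implies the target is closing at ~19.5 m/s.
print(f"speed: {speed_from_doppler(10_000, 77e9):.1f} m/s")  # speed: 19.5 m/s
```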
3. LiDAR
Short for light detection and ranging, LiDAR is the most technologically sophisticated of these three sensors (and the costliest for OEMs to include in car designs). Where radar uses radio waves to detect obstacles and map the surrounding environment, LiDAR fires pulses of laser light and measures their reflections to determine the distance between the vehicle and an object.
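The time-of-flight idea is the same as radar's, but light covers about 30 centimeters per nanosecond, so each nanosecond of timing error translates into roughly 15 centimeters of range error; the sensor has to time returns at nanosecond precision. A minimal sketch, with illustrative numbers:

```python
# LiDAR time-of-flight ranging, simplified for illustration only.

C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_ns: float) -> float:
    """Distance in meters from a laser pulse's round-trip time in nanoseconds."""
    return C * (round_trip_ns * 1e-9) / 2

# A pulse that comes back ~667 ns after firing has hit something ~100 m out,
# around the edge of the scanning range described below.
print(f"{lidar_range(667):.1f} m")  # 100.0 m
```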
LiDAR can scan more than 100 meters in all directions, allowing it to generate an intricate 3D map of the vehicle's surroundings. That map can be processed in near real time and used to make informed decisions in different circumstances, giving driverless cars a distinct awareness advantage over conventional vehicles. That said, the sheer volume of data means autonomous vehicles need more powerful processors to handle the onslaught of information generated every second.
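To give a sense of how that 3D map comes together, the sketch below converts a single LiDAR return (a range plus the beam's horizontal and vertical angles) into a Cartesian point. The function name and axis convention are my own for illustration; a real sensor performs a conversion like this for hundreds of thousands of returns per second.

```python
# Turning one LiDAR return into a 3D point, simplified for illustration.
# The axis convention (x forward, y left, z up) and function name are mine;
# a real sensor does this for hundreds of thousands of returns per second.
import math

def return_to_point(range_m, azimuth_deg, elevation_deg):
    """Map a (range, azimuth, elevation) return to Cartesian (x, y, z) meters."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return x, y, z

# A return 50 m out, 30 degrees to the left, 2 degrees above horizontal:
x, y, z = return_to_point(50.0, 30.0, 2.0)
print(f"x={x:.1f} m, y={y:.1f} m, z={z:.1f} m")  # x=43.3 m, y=25.0 m, z=1.7 m
```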