
What Is LiDAR And Why Is It So Important To Driverless Cars?

22/03/2017 15:41 GMT | Updated 22/03/2017 15:41 GMT

If you're an enthusiastic consumer of news about driverless cars, you will know that the near-universal hardware on research and development vehicles is something called LiDAR. But what is LiDAR, and why is it so important?

Definition

LiDAR stands for light detection and ranging. It's a name that hides what LiDAR really is, so let's walk through those initials.

  • Li = Light. What most people don't realise is that this light is infrared, invisible to the naked human eye, and emitted from the unit in a narrow laser beam.
  • D = Detection. The unit looks for signals bouncing back from objects nearby, typically from 1m up to perhaps 100m away.
  • A = and...
  • R = Ranging. Knowing the distance to objects within that 1m-100m bracket is vital, and how much detail you get depends on the unit's resolution and the object's range.
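The ranging part boils down to simple time-of-flight arithmetic: the unit times how long a laser pulse takes to bounce back, and the speed of light does the rest. A minimal sketch in Python (the 667-nanosecond round trip is an illustrative figure, not one from the article):

```python
# Speed of light (metres per second)
SPEED_OF_LIGHT = 299_792_458

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to an object, given how long a laser pulse took to
    travel out, bounce back, and be detected. The pulse covers the
    distance twice, so we divide by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after ~667 nanoseconds indicates an object ~100m away.
print(round(range_from_time_of_flight(667e-9), 1))  # 100.0
```

The striking consequence is how fast everything happens: even at the 100m limit, a pulse is back in well under a microsecond, which is what lets a unit fire hundreds of thousands of pulses every second.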

Uses of LiDAR

LiDAR's first and most established use is surveying, whether from the ground or from the air (mounted on planes or small drones), and it is likely to remain so.

Applications include urban planning, countryside management, livestock counting, heritage and conservation, post-disaster damage assessment, space (early versions of LiDAR were used to map the moon), even the archaeology of sites to discover lost and buried buildings. One very smart use is in forestry, where LiDAR can be used to measure the volume of unharvested timber on a hillside to negotiate a sale price before a chainsaw is even started!

It's highly effective at creating a three dimensional map of an environment, whether man-made, natural or in-between, and updating that 3D picture several times a second. Human vision performs the same function but achieves it in a different way: your brain compares the images from your left and right eyes to judge the distance between you and the objects around you in 3D space.

Weaknesses

As with any system which relies on receiving a signal travelling in a straight line, LiDAR has one important vulnerability. It can't see behind solid objects. That sounds obvious, but if your car has a single LiDAR unit and a cyclist pulls alongside the car, a comparatively large area is hidden from view. If a bird were to land on your car roof next to the unit, you could temporarily lose half your car's useful LiDAR data!
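The size of that blind spot is simple geometry: the closer and wider the obstacle, the bigger the angular 'shadow' it casts across the unit's view. A rough Python sketch, with the cyclist's width and distance assumed purely for illustration:

```python
import math

def occluded_angle_degrees(object_width_m: float, distance_m: float) -> float:
    """Angular sector of the LiDAR's view hidden by an obstacle,
    treating the obstacle as a flat shape facing the sensor."""
    return math.degrees(2 * math.atan((object_width_m / 2) / distance_m))

# Illustrative (assumed) figures: a cyclist ~0.6m wide, 1.5m from the
# unit, blanks out a sector of roughly 23 degrees of the 360 available.
print(round(occluded_angle_degrees(0.6, 1.5), 1))  # 22.6
```

Note how quickly the shadow shrinks with distance; the same cyclist twice as far away hides roughly half the angle, which is one reason multiple units mounted at a car's corners are an attractive alternative to a single rooftop sensor.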

Interference

Atmospheric conditions and other LiDAR units could, in theory, interfere with infrared laser signals. Practically, however, both can be filtered out by software within the system, which is programmed to respond only to a very narrow band of infrared frequencies returning in a straight line to the unit. In the exceptionally unusual event that a pulse clashes with a signal from another vehicle's unit, the LiDAR can change frequency and carry on as normal within a tiny fraction of a second.

Cost

[Image: a Velodyne LiDAR unit mounted on a car roof. Picture credit: Author's own]

Much is made of the cost of LiDAR units. High resolution units can cost upwards of £10,000. These are reliable and robust, and provide very high resolution data in a 360 degree circle around the car (if mounted on the rooftop, as is often seen on research vehicles), as demonstrated here with a Velodyne LiDAR puck mounted in the middle of a car roof.

Google's Waymo driverless car unit has developed its own system and brought the unit cost down to around £5,000, while still providing high resolution data.

[Image: a solid-state LiDAR unit made by LeddarTech]

That cost per unit is far too high to be practical in a production car, so car manufacturers will turn to 'solid state LiDAR', such as the unit pictured here [Picture credit: LeddarTech], made by Canadian company LeddarTech. This is typically a lower resolution, easier to manufacture and far cheaper alternative, with a narrower field of view. The key benefit of this approach is cost: individual units are around 95% cheaper, between £200 and £500 currently, and prices are expected to fall to around £50 within the next couple of years. LeddarTech will be taking part in the upcoming "How to build a driverless car" introductory workshop, taking place near London in April.

Much higher resolution solid-state LiDAR units are in development in 2017, which will generate more data points at higher resolution than moving units, so it won't be long before solid state becomes dominant.

Resolution

Just like cameras, LiDAR units need good resolution - that is, enough data mapped in 3D, in enough detail, for software to interpret what objects are.

This is easy for us humans, but drop that resolution to a fly's eye and it gets very difficult. Because the infrared laser beams spread apart as they leave the unit, the resolution close by is great (enough to see facial contours and fingers), but far away it's dreadful (is that a lamp-post or a pedestrian?).
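The thinning of detail with distance falls straight out of the geometry: neighbouring beams leave the unit a fixed angle apart, so the gap between measurement points grows in proportion to range. A quick sketch (the 0.4 degree angular resolution is an assumed, ballpark figure for a spinning unit, not a spec from the article):

```python
import math

# Assumed angular gap between adjacent laser returns (degrees).
ANGULAR_RESOLUTION_DEG = 0.4

def point_spacing_m(distance_m: float) -> float:
    """Gap between neighbouring measurement points at a given range.
    Beams diverge from the unit, so the gap grows linearly with distance."""
    return distance_m * math.radians(ANGULAR_RESOLUTION_DEG)

for d in (1, 10, 100):
    print(f"{d}m away: points ~{point_spacing_m(d)*100:.1f}cm apart")
```

At 1m the points land under a centimetre apart; at 100m they are the better part of a metre apart, which is exactly the lamp-post-or-pedestrian problem described above.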

That, among the other issues cited above, is why LiDAR is just one of several types of sensor deployed in driverless cars.

The beams sent out from the unit bounce back when they hit objects and, once mapped into 3D space, generate a 'point cloud'. This is a complex set of points (depending on resolution, sometimes several million created every second) which must be interpreted, usually into polygons. A polygon is a simpler shape to process: just as size and shape distinguish a chocolate box from a wardrobe, these simplified shapes, together with their distance, make it easy for a computer to interpret and categorise objects.
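Each point in that cloud starts life as a single return: a measured distance plus the known angles the beam was fired at. Turning that into an (x, y, z) coordinate is a standard spherical-to-Cartesian conversion, sketched here as an illustration rather than any particular unit's firmware:

```python
import math

def lidar_return_to_point(range_m: float, azimuth_deg: float,
                          elevation_deg: float) -> tuple:
    """Convert one laser return (distance plus the beam's horizontal
    and vertical firing angles) into an (x, y, z) point in the cloud,
    with the sensor at the origin and x pointing straight ahead."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return 10m straight ahead at sensor height maps to (10, 0, 0).
print(lidar_return_to_point(10, 0, 0))
```

Run millions of times per second over a spinning set of beams, this one conversion is what builds the 3D picture that the polygon-fitting software then has to make sense of.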

LiDAR or not LiDAR?

One debate that rages is whether or not LiDAR really is essential to driverless cars or other autonomous vehicles. Tesla, for example, think not (though their sister company SpaceX does use it on its rockets), as do a few other smaller suppliers, who choose to focus on cameras as the main 'eye' for self-driving vehicles.

On one hand, that allows them to focus their efforts on other sensors (usually cameras, but also perhaps Radar as well) which are far more established, cheaper to buy, and have many more professionals in the talent pool available to do the work.

On the other, it means that rather than developing their software algorithms to interpret a rich 360 degree field of 3D data (which might be easier), they are giving up the robustness that 'validation data' from an additional sensor provides.

The debate will continue for at least another decade, until we have enough variety of driverless vehicles to understand the safety differences between the two approaches in real-world production vehicles.

In the short term, it's not something you need to worry about... unless you're building a driverless car, of course.