Opening the eyes of autonomous cars: is LiDAR the key to safety?

Australia had its first serious accident involving a self-driving car in March. A pedestrian was critically injured when struck by a Tesla in “autopilot” mode. US safety regulators are also reviewing the technology after a series of accidents in which autonomous cars drove straight into emergency services vehicles – apparently dazzled by their flashing lights.

It’s not uncommon for humans to make similar mistakes – accidents happen – but machines shouldn’t have human failings.

In the autonomous vehicle trade, these accidents are called “edge cases”: situations that are unexpected, unusual, or outside an artificial intelligence (AI) system’s training parameters. And that’s the challenge facing the autonomous vehicle industry.

Paying attention

Despite all expectations, human drivers cannot yet become passive passengers. Instead, they must have eyes on the road and hands on the wheel, even if the car is driving itself.

But that scenario is itself subject to human failings: when the car is doing the driving, the temptation to respond to an SMS, read a paper, or stare at the surroundings only grows.

When autonomous vehicles crash, it’s often when the driver isn’t paying attention.

“In most cases, autonomous vehicles can easily understand the world around them,” says Cibby Pulikkaseril, founder and chief technology officer of Australian automotive sensor manufacturer Baraja.

“But, when multiple objects are partially obscured, or there are adversarial targets in the environment, the perception of the vehicle may have difficulty understanding the scene.”

The key, he says, is ensuring the AI behind the wheel has persistent, accurate and diverse information.

In human drivers, impaired senses and poor decision-making can be deadly. For an AI, the equivalents are signal noise, pattern-recognition failures and limited processing power. But inferring the cause of an AI’s error is harder than identifying human failings. How many cameras are enough? Do you need to add radar? Does LiDAR – similar to radar, but using lasers instead of radio beams – make both redundant?

And does an AI have sufficient input to recognise what’s going on in the real world around it?

Seeing is believing

“There is ongoing debate in the autonomous-vehicle world about the role of LiDAR technology in the future of self-driving cars,” says Pulikkaseril.

Cameras are small, cheap and passive, and large numbers of them can be attached to a car to capture high-resolution colour views of what’s happening in its surroundings.

“However, they possess a number of characteristics that are problematic in common driving conditions,” Pulikkaseril adds.

Rain, fog, direct sunlight and unusual textures can all reduce the quality of the cameras’ perception.

In 2016, for example, a Tesla drove directly into a semi-trailer because its AI algorithm failed to discern the vehicle from the setting sun. Its neural network hadn’t been trained for this exact scenario, and it didn’t have enough information or understanding for intuition to kick in.

“Camera-based perception is only capable of providing two-dimensional mapping of a three-dimensional world, meaning it can only infer the location of an object, instead of a true measure of distance,” Pulikkaseril says.

This means that, even with the electronic equivalent of stereoscopic vision, cameras can’t always give an AI reliable depth and shape recognition.
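To see why camera depth is an inference, consider a minimal stereo-triangulation sketch. The focal length, camera baseline and one-pixel matching error below are illustrative assumptions, not figures from any real system; they simply show how a tiny matching error balloons into tens of metres of depth error at range.

```python
# Minimal sketch: depth inferred from stereo disparity, Z = f * B / d.
# Focal length, baseline and the 1 px matching error are illustrative assumptions.
FOCAL_PX = 1400.0    # focal length in pixels (assumed)
BASELINE_M = 0.3     # separation between the two cameras in metres (assumed)

def stereo_depth(disparity_px: float) -> float:
    """Depth inferred from the pixel offset of a point between left and right images."""
    return FOCAL_PX * BASELINE_M / disparity_px

# A one-pixel matching error barely matters up close but is severe at range:
for true_depth in (10.0, 50.0, 100.0):
    ideal_disparity = FOCAL_PX * BASELINE_M / true_depth
    estimate = stereo_depth(ideal_disparity - 1.0)  # the same point, mis-matched by 1 px
    print(f"{true_depth:5.0f} m point -> {estimate:6.1f} m estimate with 1 px of error")
```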

Baraja’s Spectrum Off-Road LiDAR sensor. Credit: Baraja.

“In contrast, LiDAR technology is an active sensor – that is, it sends a stimulus into the world and listens for the response,” he explains. Two thousand scan lines per sensor can map out a 3D model of a scene, including distances to objects, shapes, and even a sense of their solidity.

“This allows accurate detection for specific items such as a small tyre in your lane 200 metres away, or whether a pedestrian is pointing in one direction, allowing you to make a stronger determination if they are about to cross the street or not.”
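At its core, that active measurement is a time-of-flight calculation: the sensor times how long each pulse takes to return and converts it to distance. The sketch below is a back-of-the-envelope illustration of that principle, not Baraja’s signal-processing chain; the echo time is chosen to match the 200-metre example above.

```python
# Minimal time-of-flight sketch: distance = speed of light * round-trip time / 2.
C = 299_792_458.0  # speed of light in metres per second

def range_from_echo(round_trip_s: float) -> float:
    """Distance to the surface that reflected the pulse."""
    return C * round_trip_s / 2.0

# A pulse that returns after roughly 1.33 microseconds came from about 200 m away,
# the sort of range quoted for spotting a small tyre in your lane.
print(f"{range_from_echo(1.334e-6):.1f} m")
```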

Objects in motion

There’s more to a laser than just a beam of light. It carries with it a whole spectrum of properties.

“Our LiDAR technology can scan a laser beam simply by changing the colour of light,” Pulikkaseril explains. “This scanning mechanism solves the single design challenge of traditional LiDAR: the ability to point a laser in a wide field of view.”

Large, rapidly rotating mountings and oscillating mirrors are no longer needed.

“We simply change the colour of our laser light and let our prism-like optics intrinsically steer the beam,” he says.
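A toy model of that idea maps laser wavelength onto a pointing angle. The tuning band and field of view below are made-up numbers, and real prism-like optics follow their own dispersion curve rather than this linear stand-in.

```python
# Toy sketch: steer a beam by changing the laser wavelength.
# Assumed numbers: a 1530-1570 nm tunable band mapped onto a 120-degree field of view.
WL_MIN_NM, WL_MAX_NM = 1530.0, 1570.0
FOV_DEG = 120.0

def steering_angle(wavelength_nm: float) -> float:
    """Pointing angle produced by the dispersive optics for a given wavelength."""
    fraction = (wavelength_nm - WL_MIN_NM) / (WL_MAX_NM - WL_MIN_NM)
    return (fraction - 0.5) * FOV_DEG  # centred on 0 degrees

# Sweeping the colour sweeps the beam; no spinning mount or mirror required.
for wl in (1530.0, 1550.0, 1570.0):
    print(f"{wl:.0f} nm -> {steering_angle(wl):+.1f} degrees")
```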

Fewer moving parts mean smaller sensor profiles, greater reliability and cheaper manufacturing.

“While this may seem strange and exotic, these so-called ‘tunable lasers’ are commonplace in telecommunication networks,” Pulikkaseril says. “In fact, we leverage these existing telecom supply chains for our LiDAR.

“The technology provides incredibly accurate depth perception at very high resolutions, with the pulsed laser waves able to determine the distance to an object within a few centimetres, up to hundreds of metres away. It can detect a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules.”

LiDAR “sees” the world as a 3D “point cloud” composite of laser scans (left); camera-based vision uses image recognition algorithms (right). Credit: Baraja.

Baraja has added a new ability. By measuring the Doppler shift (change in frequency) of returning laser light for each point in its scan, it can detect whether an object is in motion and calculate its speed.

“This is extremely valuable as a cluster of points that are all moving at 100 kilometres per hour aids machine perception,” Pulikkaseril says. “For example, this allows for the detection of a small animal or child running across the road.”
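The calculation behind that is compact: a reflector moving along the beam shifts the returning light’s frequency by twice its speed divided by the wavelength. The sketch below assumes a 1550 nm telecom-band laser and an illustrative shift value, not anything measured by Baraja’s sensor.

```python
# Minimal Doppler sketch: a reflector moving at speed v shifts the return by 2 * v / wavelength.
WAVELENGTH_M = 1550e-9  # assumed telecom-band laser wavelength

def radial_speed(doppler_shift_hz: float) -> float:
    """Speed along the beam, in m/s, recovered from the measured frequency shift."""
    return doppler_shift_hz * WAVELENGTH_M / 2.0

# A cluster of points all showing ~35.8 MHz of shift is moving at roughly 100 km/h.
speed = radial_speed(35.8e6)
print(f"{speed:.1f} m/s ({speed * 3.6:.0f} km/h)")
```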

Directed attention

Pulikkaseril says Baraja’s LiDAR uses “on-the-fly foveation” – the ability to focus its perceptive resolution on specific objects within its field of view. The density of a scan can be intensified in microseconds.

“In other words, it’s the capacity to choose when and where to increase point density so the perception algorithm can make safe driving decisions,” he says.

It can tell where a snowbank ends and a stone embankment begins.

And that means an AI can make the best choice out of a bad situation, such as deciding which of the two to hit if evasion is impossible.
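A simplified picture of foveation is a scan pattern that packs extra points into a region of interest and stays coarse elsewhere. The field of view, step sizes and region below are illustrative assumptions, not Baraja’s actual scan parameters.

```python
# Toy foveation sketch: spend more scan points where the perception stack needs them.
# Field of view, step sizes and the region of interest are illustrative assumptions.
def scan_angles(fov_deg=120.0, base_step=1.0, roi=(-10.0, 10.0), roi_step=0.1):
    """Horizontal scan angles, finely stepped inside the region of interest."""
    angles, a = [], -fov_deg / 2.0
    while a <= fov_deg / 2.0:
        angles.append(round(a, 3))
        a += roi_step if roi[0] <= a <= roi[1] else base_step
    return angles

pattern = scan_angles()
dense = [a for a in pattern if -10.0 <= a <= 10.0]
print(f"{len(pattern)} points in total, {len(dense)} packed into the 20-degree region of interest")
```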

“Older LiDAR technology isn’t without its weaknesses,” Pulikkaseril says. “External light sources, such as solar radiation or the beams of a headlight (as well as lasers from other LiDAR sensors on the road), can be an issue. This interference can confuse the sensor, sometimes creating non-existent objects that cause the vehicle to swerve or brake suddenly. In some instances, the interference can even blind the sensor.”

Spectrum-Scan LiDAR, he says, filters out most of these problems.

“Our system provides an accurate and precise record of the vehicle’s external environment regardless of the conditions, so we can achieve full performance, even in the blazing sun of our more intense Australian summers.”

Perception is reality

Just like human vision, LiDAR has its limits, which means autonomous vehicles must still drive to the conditions.

“In terms of dust and fog, these are challenging environments for any optical system, so both cameras and LiDAR technology struggle with obscurants,” Pulikkaseril says.

Even here, he adds, LiDAR has the edge.

“It can detect more than one ‘return’, meaning the laser will partially reflect from the dust cloud, but some light will still penetrate and hit a target on the other side. In our signal processing chain, we can then distinguish between the return from the dust cloud and the target behind it.”
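In code, handling multiple returns per pulse might look like the sketch below, which prefers the furthest return that is still strong rather than the first echo from an obscurant. The intensity threshold and the (range, intensity) pairs are assumptions for illustration, not Baraja’s processing chain.

```python
# Toy multi-return sketch: per pulse, keep the furthest return that is still strong,
# so a partial reflection from a dust cloud doesn't hide the target behind it.
def pick_target(returns, min_intensity=0.2):
    """returns: list of (range_m, intensity) echoes from a single laser pulse."""
    strong = [r for r in returns if r[1] >= min_intensity]
    return max(strong, key=lambda r: r[0]) if strong else None

pulse = [
    (18.5, 0.35),  # partial reflection from the dust cloud
    (19.1, 0.05),  # too weak to trust
    (62.0, 0.60),  # the target behind the cloud
]
print(pick_target(pulse))  # -> (62.0, 0.6)
```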


But the most effective human drivers don’t rely solely on any one sense. They unconsciously correlate stereoscopic vision, sound and momentum with learned expectations and patterns. Attention is saved for anything out of the ordinary. Then, intuition must instantly determine the best path of action.

AI, however, still has no real understanding of the world. That means intuition is a distant dream, and that machines must rely on raw sensing power.

“The safest system has redundancy and multiple sensor modalities,” Pulikkaseril concludes. “Each sensor has its pros and cons, with some stronger in certain conditions than others, and all are based on different physical properties. The most advanced autonomous teams rely on cameras, LiDAR technology and radar to fully understand the world around them.”
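One way to picture that redundancy is a simple agreement check across modalities: an obstacle is only trusted if more than one independent sensor reports it. The detections and voting rule below are purely illustrative and do not reflect any production fusion stack.

```python
# Toy redundancy sketch: trust an obstacle only if at least two sensor modalities agree.
# The detections and the two-vote rule are illustrative assumptions.
def confirmed_obstacles(detections_by_sensor, min_votes=2):
    votes = {}
    for sensor, objects in detections_by_sensor.items():
        for obj in objects:
            votes.setdefault(obj, set()).add(sensor)
    return [obj for obj, sensors in votes.items() if len(sensors) >= min_votes]

detections = {
    "camera": {"pedestrian", "lane_marking"},
    "lidar": {"pedestrian", "tyre_on_road"},
    "radar": {"pedestrian"},
}
print(confirmed_obstacles(detections))  # -> ['pedestrian']
```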
