Speeding Up Obstacle Detection in Cars with Bio-Inspired Cameras and AI

It is every car driver's worst nightmare: a pedestrian steps out in front of the car, seemingly from nowhere, and the driver has only a fraction of a second to brake or steer to avert catastrophe. Some automotive systems already include cameras capable of alerting drivers or triggering emergency braking, but they are not fast or reliable enough. They will need a significant upgrade before they can be trusted in autonomous vehicles with no human behind the wheel.

To address this challenge, Daniel Gehrig and Davide Scaramuzza from the Department of Informatics at the University of Zurich (UZH) have combined a bio-inspired camera with artificial intelligence (AI) to develop a system that detects obstacles around a car significantly faster than any existing system, and does so using less computational power. Their findings have been published in Nature.

Current automotive cameras are frame-based, capturing 30 to 50 frames per second. An artificial neural network can be trained to recognize objects such as cars, pedestrians, or bikes in these frames. The problem arises when a potential hazard appears in the 20- to 33-millisecond gap between two frames: that interval is a blind spot for the camera, and it can delay the response to an imminent threat.
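The blind-spot arithmetic is simple to check. A back-of-the-envelope sketch (my own illustration, not code from the study) of the gap between frames and how far a vehicle travels during it:

```python
# Illustrative only: the blind interval between two frames, and the
# distance a vehicle covers while the camera sees nothing.

def inter_frame_gap_ms(fps: float) -> float:
    """Time between consecutive frames, in milliseconds."""
    return 1000.0 / fps

def distance_in_gap_m(speed_kmh: float, fps: float) -> float:
    """Metres a vehicle travels between two consecutive frames."""
    return (speed_kmh / 3.6) / fps

# A 30 fps camera is blind for ~33 ms between frames; a 50 fps one for 20 ms.
print(inter_frame_gap_ms(30))      # ~33.3 ms
print(distance_in_gap_m(50, 30))   # ~0.46 m at urban speed
```

At highway speeds the numbers grow quickly: at 120 km/h a 30 fps camera is blind for over a metre of travel between frames.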

Event cameras are a newer technology that takes a different approach. Their smart pixels record information whenever they detect rapid movement, essentially eliminating the blind spot between frames and allowing faster obstacle detection. Because these cameras mimic the way human eyes perceive images, they have earned the moniker “neuromorphic cameras.” But they come with drawbacks of their own: they can fail to spot slow-moving objects, and translating their output into data suitable for training AI can be quite cumbersome.

Gehrig and Scaramuzza have developed a hybrid system that combines a standard camera with an event camera. The standard camera captures 20 images per second, which are processed by a convolutional neural network trained to recognize objects. The event camera's data, on the other hand, is analyzed by an asynchronous graph neural network designed to process 3-D data that changes over time. The event camera's detections anticipate those of the standard camera, boosting its performance. The result is an agile visual detector that recognizes threats as quickly as a standard camera shooting at an impressive 5,000 frames per second, but with the bandwidth requirement of a typical 50-frames-per-second camera.
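The core idea of the fusion can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (the actual system uses a CNN and an asynchronous graph neural network, whose details are in the paper): slow but reliable frame detections anchor the detector's state, while fast event-based detections update it in the gaps between frames.

```python
# Hypothetical sketch of the hybrid principle: two asynchronous detection
# streams update one shared "latest detection" state, so a hazard seen by
# the event stream is acted on without waiting for the next frame.

class HybridDetector:
    def __init__(self):
        self.latest = None  # (timestamp_s, detected_objects)

    def on_frame(self, t, frame_boxes):
        """Result from the frame CNN, arriving every 50 ms (20 fps)."""
        self.latest = (t, frame_boxes)

    def on_events(self, t, event_boxes):
        """Result from the event network, arriving asynchronously."""
        if self.latest is None or t > self.latest[0]:
            self.latest = (t, event_boxes)

d = HybridDetector()
d.on_frame(0.000, ["car"])
d.on_events(0.012, ["car", "pedestrian"])  # hazard appears between frames
print(d.latest)  # updated 12 ms in, well before the next frame at 50 ms
```

The design point is latency: the expensive frame pipeline runs at a modest rate, and the lightweight event pipeline fills the temporal gaps, which is how the system matches the responsiveness of a 5,000 fps camera at a fraction of the bandwidth.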

When tested against the current top-rated automotive cameras and visual algorithms, Gehrig and Scaramuzza's system delivered detections one hundred times faster, all while reducing the data transmitted and the computational power required, without compromising accuracy. Especially valuable is the system's ability to detect cars and pedestrians that appear in the split second between two consecutive frames of the standard camera. The potential impact on safety, particularly at high speeds, is hard to overstate.

The researchers suggest that pairing these cameras with LiDAR sensors, like those used in self-driving cars, could enhance performance even further. Such hybrid systems could be central to making autonomous driving a safe reality without an exponential growth in data and computational demands.

Disclaimer: The above article was written with the assistance of AI. The original sources can be found on ScienceDaily.