
Empowering sensor data with NDS

26 September 2023

Quite a few companies believe in the power of vehicle sensor data. Can sensor data alone be enough to cover the needs of ADAS and automated driving? Yes and no, because information from a single sensor type – such as cameras, radar, or LiDAR technology – is in itself not sufficient to ensure the safety of vehicle passengers and other road users. What is needed is a data standard such as NDS, so you can fuse and conflate sensor data observations and create a map data asset that takes advantage of many different data sources, using a data specification to strengthen completeness and safety. Data from a single sensor stack and raw data alone are not enough. Also, beware of false-positive observations that translate into incorrect data classifications: a roadside sign in a country’s typical highway sign color can have a wide variety of meanings, and an observation of such a sign does not necessarily mean that the road is a highway. ADAS and automated driving systems rely on data, and this data needs context when sensor data is fused and conflated into an asset that can be trusted. This blending of sensor information is best done by specialists, and NDS provides a complete specification for publishing HD map data.

Data from a single sensor type – such as a camera, radar, or LiDAR – is not sufficient to cover modern ADAS scenarios. NDS helps to map many different data types. Source: Pexels

Radar and camera technology

Let’s look at radar technology: In many situations, radar is not able to derive object classifications. Radar sensors cannot always distinguish a delineator from a guardrail or precisely recognize signs or bridge piers. In addition, radar has a limited vertical field of view. This type of technology can be found, for example, in adaptive cruise control systems that maintain distances between vehicles. Another example is camera systems, which are usually located near the interior mirror, behind the windshield. These cameras look ahead and can classify objects. The observation locations and classifications are provided to the vehicle for real-time use in ADAS and automated driving features, and the information can also be collected and communicated to the outside. The technology can determine where things are by using GNSS and positioning, and can then classify the objects it sees, such as a round sign with a red border – probably a sign informing about a speed limit. This semantic data observation point is then used further.
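As a rough illustration, such a semantic observation point can be thought of as a small record combining position and classification. The following Python sketch uses hypothetical field names; the actual attribute model is defined by the NDS specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SemanticObservation:
    """One camera observation positioned via GNSS (hypothetical structure)."""
    lat: float                # WGS84 latitude from GNSS/positioning
    lon: float                # WGS84 longitude from GNSS/positioning
    heading_deg: float        # vehicle heading when the object was seen
    sign_class: str           # e.g. "round_red_border" -> likely a speed limit
    value_kmh: Optional[int]  # recognized value, e.g. 80, if readable
    confidence: float         # classifier confidence in [0.0, 1.0]

# A single camera frame might yield, for example:
obs = SemanticObservation(48.1374, 11.5755, 92.0, "round_red_border", 80, 0.87)
```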

Speed limits are a good example. Roughly speaking, there are three cases. First, there are explicit round signs with a red border. Second, there are signs that implicitly translate to a speed limit, such as town entrance signs, signs for residential zones, or signs for special bicycle roads. Third, there are situations with no signs at all: a certain speed might apply in city centers or in 30-km/h zones, even if there isn’t a new sign at every corner.
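For map-making, this means the effective speed limit has to be derived from several sources with a clear precedence: an explicit sign wins over an implicit one, which wins over a zone default. A minimal sketch of that reasoning, with illustrative values rather than any country’s authoritative rules:

```python
from typing import Optional

# Illustrative limits implied by signs that carry no number (assumed values).
IMPLICIT_LIMITS_KMH = {"town_entrance": 50, "residential_zone": 7, "bicycle_road": 30}

def effective_speed_limit(explicit_kmh: Optional[int],
                          implicit_sign: Optional[str],
                          zone_default_kmh: Optional[int]) -> Optional[int]:
    """Precedence: explicit sign > implicit sign > zone default (may be None)."""
    if explicit_kmh is not None:
        return explicit_kmh
    if implicit_sign in IMPLICIT_LIMITS_KMH:
        return IMPLICIT_LIMITS_KMH[implicit_sign]
    return zone_default_kmh

print(effective_speed_limit(None, "town_entrance", None))  # -> 50
print(effective_speed_limit(None, None, 30))               # -> 30 (30-km/h zone)
```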

Isolated sensor data not enough

Suppliers of radar or camera technologies and ADAS tier 1 suppliers are increasingly jumping on the HD bandwagon, proclaiming: We can build HD map data with our hardware-software combination. The problem: These companies don’t specialize in navigation and HD mapping, and their technologies often deliver information that falls short. Radar and cameras can detect markers and signs, but they miss the contextual road attributes that are not sensor-observable. Moreover, the precision of this data may not be very high. It takes many trips on one and the same route to find a reliable middle ground among the mass of observations. Camera systems use triangulation for localization, a 3D view requires more than one camera, their perception is not always high-resolution – not to mention the environmental limits on visibility.
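This is why repeated trips matter: each pass yields a slightly different position fix for the same object, and a robust statistic such as the per-coordinate median pulls the mass of noisy fixes toward a stable location. A minimal sketch with made-up coordinates:

```python
from statistics import median

# (lat, lon) fixes for the same sign, collected over repeated trips.
fixes = [
    (48.13742, 11.57551),
    (48.13745, 11.57549),
    (48.13739, 11.57554),
    (48.13801, 11.57560),  # outlier, e.g. a poor triangulation
]

# The median per coordinate is robust against single bad fixes.
lat = median(f[0] for f in fixes)
lon = median(f[1] for f in fixes)
print(lat, lon)
```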

Complexity of HD maps

HD map features can be roughly divided into three major blocks: road data, lane data, and localization object data. Let’s take barriers as an example: HD maps contain barrier classifications such as crash barriers, walls, fences, and more. In addition, there is geometry: Where does the barrier start, where does it end, how high is it? And not forgetting the relationship: To which lane or road does the barrier belong, for instance a driveway or the exact road and lane on which the vehicle is located? A radar may be able to detect a barrier but may not see its upper edge if it is too high. Each sensor has a different field of view and different ways of detecting objects or barriers. Weather also affects the reliability of radar- and camera-based systems.
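Put together, a single barrier feature combines all three aspects. A rough sketch of such a structure in Python (the field names are hypothetical; the actual attributes and their encoding are defined in the NDS specification):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BarrierFeature:
    """A localization object: classification, geometry, and relationship."""
    barrier_type: str                    # "crash_barrier", "wall", "fence", ...
    polyline: List[Tuple[float, float]]  # geometry from start to end (lat, lon)
    height_m: float                      # upper edge; a radar may not see this
    road_id: int                         # relationship: the road it belongs to
    lane_ids: List[int] = field(default_factory=list)  # adjacent lanes

guardrail = BarrierFeature(
    barrier_type="crash_barrier",
    polyline=[(48.1374, 11.5755), (48.1376, 11.5759)],
    height_m=0.75,
    road_id=4711,  # made-up identifier
    lane_ids=[2],
)
```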

LiDAR systems

LiDAR is the abbreviation for “light detection and ranging” – light-based object detection and distance measurement. Unlike the related radar, LiDAR systems do not work with radio waves but with laser beams. They are used, for example, to create high-resolution, three-dimensional maps – for mining, geology, or automated driving. In combination with radar, ultrasound, and cameras, they provide information about where objects are located in the vehicle’s surroundings.

Limitations of camera, radar, and LiDAR technology

LiDAR systems can be severely affected by moisture – rain, snow, or fog. LiDAR and radar do not see color. For camera systems, darkness and bright light are an issue. And when another vehicle, for example a large truck, is driving in front of the car, objects may be occluded and cannot be detected. In addition, various pieces of contextual information cannot be captured by individual sensors: Where does a city or a new country begin? There may be rules and regulations, such as speed limits, that are not signposted and as such cannot be detected by cameras or other sensors. Such data must therefore be sourced and provided by map data.

NDS format makes data diversity mappable

If you only have one sensor type available in the hardware stack and the associated software, your ability to build a comprehensive HD map is limited. Ideally, different sensor technologies and data sources come together in a data conflation and fusion approach. NDS provides a definition for the publication of map data that covers all needed data attributes and supports data variability. The NDS data format specification is not focused on or limited by a single sensor; it helps to make a wide variety of data mappable within the specification. This data is then made available in the vehicle in a format that ADAS and self-driving software can use to compare and validate what the sensor systems see against what the map provides, ensuring the accuracy of sensor observations and safety when a system controls a vehicle. NDS thus supports the requirements of innovative automotive companies and mobility providers in a particularly comprehensive and flexible way. An NDS membership gives you access to the full specification and global expertise – including, to come back to our example, which barrier types should be supplied. All special features are documented in the specification.
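That comparison step can be pictured as a simple cross-check between a sensor reading and the corresponding map attribute. The following is an illustrative sketch only; how a production system arbitrates between map and sensors is a safety-critical design decision far beyond a few lines of logic:

```python
def cross_check(sensor_limit_kmh: int, map_limit_kmh: int,
                sensor_confidence: float) -> str:
    """Compare a sensor-read speed limit against the HD map value."""
    if sensor_limit_kmh == map_limit_kmh:
        return "confirmed"        # map and sensor agree
    if sensor_confidence < 0.5:
        return "trust_map"        # weak observation; keep the map value
    return "flag_for_review"      # strong disagreement; report upstream

print(cross_check(80, 80, 0.90))   # -> confirmed
print(cross_check(100, 80, 0.30))  # -> trust_map
```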
