
Vehicle positioning based on Visual Inertial Odometry (VIO) gains importance

10. December 2021

It’s not just buzzwords like e-mobility or autonomous driving that will determine how we get from A to B in the future. Such developments are only possible through the interaction of many innovative technologies. Augmented Reality (AR), for example, is the latest impressive technology used to wow consumers in premium navigation and driver assistance systems. To get to grips with this topic, we discussed the possibilities of HD maps in the first part of our blog series. In this second post we look at how visual inertial odometry (VIO) algorithms help to position vehicles on the road continuously, reliably and with high precision, and which objects play an important role here.

A VIO localisation algorithm combines multiple localisation cues to provide a vehicle’s position. The choice of design depends on the type of sensors available in the vehicle and on the accuracy requirements. Regular GNSS, even with SBAS correction, is sometimes insufficient to meet the requirements of modern vehicles. In addition, the system must operate in environments where GNSS is not available, for example in tunnels where satellite signals cannot be received. It is therefore necessary to integrate GNSS-independent sources of positioning information. Visual odometry combined with re-positioning against high-accuracy maps offers a good alternative.
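To illustrate the basic idea, the following minimal Python sketch shows how a VIO-based dead-reckoning prediction can keep providing positions when GNSS drops out, and be blended with a GNSS fix when one is available. It is not the algorithm from the whitepaper; all names, frames and weights are illustrative assumptions.

```python
# Minimal sketch (not the production algorithm): blending a GNSS fix with
# VIO-derived relative motion. Names, frames and weights are illustrative.
import math
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Pose2D:
    x: float        # metres in a local east-north frame (assumed)
    y: float
    heading: float  # radians

def predict_with_vio(last_pose: Pose2D, dx: float, dy: float, dheading: float) -> Pose2D:
    """Dead-reckon from the last pose using the VIO's relative motion estimate."""
    c, s = math.cos(last_pose.heading), math.sin(last_pose.heading)
    return Pose2D(
        x=last_pose.x + c * dx - s * dy,
        y=last_pose.y + s * dx + c * dy,
        heading=last_pose.heading + dheading,
    )

def fuse(predicted: Pose2D, gnss_fix: Optional[Tuple[float, float]], gnss_weight: float = 0.2) -> Pose2D:
    """Blend the VIO prediction with a GNSS fix when one is available.
    In a tunnel (gnss_fix is None) the system keeps dead-reckoning on VIO alone."""
    if gnss_fix is None:
        return predicted
    gx, gy = gnss_fix
    return Pose2D(
        x=(1 - gnss_weight) * predicted.x + gnss_weight * gx,
        y=(1 - gnss_weight) * predicted.y + gnss_weight * gy,
        heading=predicted.heading,
    )
```

In a real system the blending would of course be done by a proper estimator rather than a fixed weight, but the sketch shows why a GNSS-independent motion source keeps the position estimate alive when satellite signals disappear.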

Woman holding a map in a car
Unlike traditional methods, Visual Inertial Odometry algorithms help to locate vehicles continuously and reliably.
Image source: Pexels

The structure of the localisation system

The heart of the system is the Visual Inertial Odometry (VIO) module. It provides high-frequency and locally accurate position and speed measurements. The challenge: it reports its position measurements relative to a local coordinate system and cannot be used directly to estimate the global position. There are two main sources of global position information: localisation objects from the HD map and a feature point map. The former contains accurate information about road geometry and lane-level traffic signs; the latter provides auxiliary information to improve visual localisation in areas where HD landmark information is not available.
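As a rough illustration of this structure, here is a hypothetical Python data model showing the VIO module’s local pose alongside the two global sources. The class and method names are assumptions made for this sketch, not NDS.Live or whitepaper types.

```python
# Illustrative data model only; class names are assumptions, not NDS.Live types.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MapLandmark:
    """Localisation object from the HD map, e.g. a traffic sign or lane marking."""
    landmark_id: str
    kind: str                      # "traffic_sign", "lane_marking", "kerb", ...
    position: Tuple[float, float]  # projected (x, y) in a global frame

@dataclass
class FeaturePoint:
    """Point from the auxiliary feature point map, used where HD landmarks are sparse."""
    descriptor: bytes
    position: Tuple[float, float]

@dataclass
class LocalPose:
    """High-frequency pose reported by the VIO module in its own local frame."""
    x: float
    y: float
    heading: float

class GlobalLocaliser:
    """Combines the VIO's local pose with the two sources of global information."""

    def __init__(self, hd_landmarks: List[MapLandmark], feature_map: List[FeaturePoint]):
        self.hd_landmarks = hd_landmarks
        self.feature_map = feature_map

    def global_references(self, detected_kinds: List[str]) -> List[MapLandmark]:
        """Prefer HD-map localisation objects; the feature point map fills the gaps
        where no matching landmark is available (the matching itself is omitted here)."""
        return [lm for lm in self.hd_landmarks if lm.kind in detected_kinds]
```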

Semantic extraction, line-based tracking and more

The proposed localisation approach requires the recognition of certain HD map localisation objects in real time, for example road markings, traffic signs and road boundaries (kerbs). Road markings are typically contained in HD maps; this information is well suited to narrowing down the vehicle’s lateral position. On winding and intersecting roads, lane markings can also constrain the longitudinal direction. Traffic signs stored in HD maps can likewise be used for vehicle localisation; unlike road markings, each observed traffic sign contributes to both lateral and longitudinal localisation. In addition to the objects from the HD map, feature points can be used for localisation. This is useful in areas with little road infrastructure and helps to resolve association ambiguities. You can find out more in this free whitepaper by Artisense, HERE, and NNG.
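The geometric intuition can be sketched in a few lines of Python: a locally straight lane marking only pins down the perpendicular, i.e. lateral, offset, while a point-like traffic sign constrains both directions. The functions below are illustrative assumptions, not taken from the whitepaper.

```python
# Hypothetical illustration of which axes the different map objects constrain.
import math
from typing import Tuple

Point = Tuple[float, float]

def lateral_offset_to_marking(vehicle: Point, marking_start: Point, marking_end: Point) -> float:
    """Signed perpendicular distance from the vehicle to a locally straight
    lane-marking segment from the HD map: this constrains the lateral position."""
    (x, y), (x1, y1), (x2, y2) = vehicle, marking_start, marking_end
    dx, dy = x2 - x1, y2 - y1
    return ((x - x1) * dy - (y - y1) * dx) / math.hypot(dx, dy)

def range_to_sign(vehicle: Point, sign: Point) -> float:
    """Distance to a mapped traffic sign: being a point observation, it constrains
    both the lateral and the longitudinal position."""
    return math.hypot(sign[0] - vehicle[0], sign[1] - vehicle[1])
```

On a curve or at an intersection the marking direction changes, which is why lane markings can then also contribute a longitudinal constraint.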

Man standing at a curved road
Information about road markings, traffic signs and road boundaries (kerbs) is used for localisation.
Image source: Unsplash

Visual Inertial Odometry Module

The VIO module consists of a tracker and a mapper. The mapper is responsible for creating local visual references for the tracker. This is done by estimating depth values for selected points using stereo triangulation and structure from motion data. Once there are enough observations to obtain a good depth estimate, the point is marked as mature and can be used for tracking. The system also includes DNN-based object masking, which hides potentially dynamic objects such as cars, pedestrians, and bicycles.
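The bookkeeping around “mature” points and dynamic-object masking could look roughly like the following sketch; the observation threshold and class names are assumptions made for illustration.

```python
# Sketch of the "mature point" bookkeeping; the threshold and class names
# are assumptions for illustration, not values from the post.
from dataclasses import dataclass, field
from typing import List

DYNAMIC_CLASSES = {"car", "pedestrian", "bicycle"}  # masked by the DNN step
MIN_OBSERVATIONS = 5                                # assumed maturity threshold

@dataclass
class TrackedPoint:
    depth_estimates: List[float] = field(default_factory=list)  # per-frame depths
    semantic_class: str = "static"

    @property
    def mature(self) -> bool:
        """A point is only used for tracking once its depth is well constrained
        and it does not lie on a potentially dynamic object."""
        return (len(self.depth_estimates) >= MIN_OBSERVATIONS
                and self.semantic_class not in DYNAMIC_CLASSES)

def points_for_tracker(points: List[TrackedPoint]) -> List[TrackedPoint]:
    """The mapper hands only mature, non-dynamic points to the tracker."""
    return [p for p in points if p.mature]
```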

It’s the interplay that counts

Since the constraints mentioned above are not accurate enough on their own to enable continuous localisation of the vehicle, a pose fusion algorithm selects and combines the different modalities. To achieve better linearisation results and more accurate map positioning, the graph retains not only the current constraints but also previous observations. This helps especially when fusing lane marking information, as lateral and longitudinal constraints can also be created when the road topology contains intersections or curves.
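Conceptually, the fusion graph behaves like a sliding window of constraints in which older observations are kept around. The following sketch illustrates only that idea; the window size and constraint types are illustrative assumptions.

```python
# Sliding-window sketch of the pose fusion graph; window size and constraint
# types are illustrative assumptions.
from collections import deque
from typing import Deque, List, NamedTuple

class Constraint(NamedTuple):
    timestamp: float
    kind: str        # e.g. "lateral", "longitudinal", "gnss"
    residual: float  # measured value minus value predicted from the current pose

class PoseFusionWindow:
    def __init__(self, max_size: int = 50):
        # Keeping older constraints improves linearisation and lets curved or
        # intersecting lane markings observed earlier also constrain the
        # longitudinal direction.
        self._window: Deque[Constraint] = deque(maxlen=max_size)

    def add(self, constraint: Constraint) -> None:
        self._window.append(constraint)

    def active_constraints(self) -> List[Constraint]:
        return list(self._window)
```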

In the third part of this blog post series, we will show how precise HD positioning and HD maps can enable futuristic navigational guidance technology today: with new and accurate Augmented Reality features, driving will become even safer and more comfortable. Read the third part of our series here.
