This navigation system powers the LPP HORNET unmanned ground vehicle (UGV), its Anti-drone configuration, and other autonomous platforms operating in unknown environments. It applies machine learning, including neural networks, to terrain segmentation and object recognition.
The system fuses data from cameras, LiDAR, and an inertial measurement unit (IMU) to support waypoint-based driving with real-time obstacle avoidance and route re-planning. It utilises simultaneous localisation and mapping (SLAM) for precise navigation even in GNSS-denied areas, and it reacts to unexpected situations without requiring prior terrain mapping, ensuring reliability in challenging conditions.
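As a rough illustration of the waypoint-following and re-planning loop described above, the sketch below checks each leg of a route against known obstacles and inserts a detour when a leg is blocked. All names, thresholds, and the placeholder planner are hypothetical, not LPP's implementation; a production system would plan over a costmap built from the fused sensor data.

```python
import math

# Hypothetical sketch of waypoint following with obstacle-triggered
# re-planning. Names and thresholds are illustrative, not LPP's API.

SAFE_DISTANCE_M = 2.0  # assumed minimum clearance before re-planning


def path_blocked(pose, waypoint, obstacles):
    """True if any known obstacle lies near the straight leg pose->waypoint."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return False
    for ox, oy in obstacles:
        # Closest point on the leg to the obstacle (clamped projection).
        t = max(0.0, min(1.0, ((ox - pose[0]) * dx + (oy - pose[1]) * dy) / seg_len2))
        cx, cy = pose[0] + t * dx, pose[1] + t * dy
        if math.hypot(ox - cx, oy - cy) < SAFE_DISTANCE_M:
            return True
    return False


def replan(pose, waypoint):
    """Insert a detour waypoint offset laterally from the blocked leg.
    A real planner (e.g. A* over a costmap) would go here."""
    mid = ((pose[0] + waypoint[0]) / 2, (pose[1] + waypoint[1]) / 2)
    return [(mid[0], mid[1] + 2 * SAFE_DISTANCE_M), waypoint]


def drive(pose, route, obstacles):
    """Consume waypoints, re-planning whenever the next leg is blocked."""
    while route:
        target = route[0]
        if path_blocked(pose, target, obstacles):
            route = replan(pose, target) + route[1:]
            continue
        pose = target          # stand-in for the low-level motion controller
        route = route[1:]
        print(f"reached {pose}")
    return pose


drive((0.0, 0.0), [(10.0, 0.0)], obstacles=[(5.0, 0.2)])
```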
Model training is conducted partially in a virtual environment using a digital twin of the UGV. Simulations incorporate realistic physics and sensor behaviour to prepare the system for unpredictable real-world scenarios. A user-friendly Ground Control Station (GCS) supports efficient path planning and integration with other autonomous units, including UGVs and UAVs, allowing seamless coordination across platforms for a wide range of mission profiles.
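One common way to make digital-twin training transfer to unpredictable real conditions is domain randomisation: varying physics and sensor parameters every episode so the model never overfits to a single idealised world. The minimal sketch below assumes this technique; the parameter names, ranges, and episode loop are illustrative, not LPP's simulator configuration.

```python
import random

# Illustrative sketch of domain randomisation for digital-twin training.
# Parameter names and ranges are assumptions, not LPP's simulator config.

def randomised_episode_config(rng):
    """Sample physics and sensor parameters for one training episode."""
    return {
        "ground_friction": rng.uniform(0.4, 1.0),      # mud to dry asphalt
        "lidar_noise_std_m": rng.uniform(0.005, 0.05),
        "camera_exposure_gain": rng.uniform(0.5, 2.0),
        "gyro_bias_rad_s": rng.gauss(0.0, 0.01),
        "wind_speed_mps": rng.uniform(0.0, 12.0),
    }


def train(num_episodes, seed=0):
    rng = random.Random(seed)
    for episode in range(num_episodes):
        config = randomised_episode_config(rng)
        # A real loop would spawn the digital twin with `config`,
        # roll out the navigation policy, and apply a learning update.
        print(f"episode {episode}: {config}")


train(num_episodes=3)
```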
LPP also develops a similar autonomous system for UAVs, using visual navigation and optical tracking to ensure jamming-resistant operation.
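Visual navigation resists jamming because it derives motion from camera imagery rather than from RF signals. The sketch below estimates apparent ground velocity with OpenCV's dense optical flow; this is a generic technique standing in for the system described above, and the ground-sampling scale factor is a placeholder that would in practice come from altitude and camera intrinsics.

```python
import cv2
import numpy as np

# Hypothetical sketch: estimate apparent ground velocity from dense
# optical flow between consecutive camera frames. No RF signal is
# involved, which is what makes the approach jamming-resistant.

M_PER_PIXEL = 0.01  # placeholder ground-sampling distance (altitude-dependent)


def flow_velocity(prev_gray, curr_gray, dt):
    """Mean image motion between two greyscale frames, scaled to m/s."""
    # Farneback dense flow: pyr_scale=0.5, 3 levels, 15 px window,
    # 3 iterations, poly_n=5, poly_sigma=1.2, no flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
    return dx * M_PER_PIXEL / dt, dy * M_PER_PIXEL / dt


# Synthetic demo: a textured frame shifted 3 px to the right should
# yield roughly (0.3, 0.0) m/s at dt = 0.1 s.
rng = np.random.default_rng(0)
texture = (rng.random((120, 160)) * 255).astype(np.uint8)
texture = cv2.GaussianBlur(texture, (9, 9), 3)
print(flow_velocity(texture, np.roll(texture, 3, axis=1), dt=0.1))
```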
| Capability | Description |
| --- | --- |
| Perception | Uses sensors such as cameras, LiDAR, and radar to identify and track obstacles in the vehicle's path. |
| Navigation | Uses GPS, IMUs, and wheel encoders to determine the vehicle's position and to plan and execute a route to a given destination. |
| Obstacle Avoidance | Detects obstacles with sensors such as LiDAR and radar, then adjusts trajectory or speed to avoid them (illustrated in the sketch below). |
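As a hedged illustration of the reactive behaviour in the Obstacle Avoidance row, the sketch below maps a LiDAR scan to a speed and steering command: stop when too close, slow down and steer away inside a caution band, otherwise proceed at full speed. The scan format and all thresholds are assumptions, not LPP's interface.

```python
import math

# Assumed thresholds for a simple reactive avoidance policy.
STOP_DISTANCE_M = 1.0
SLOW_DISTANCE_M = 4.0
MAX_SPEED_MPS = 3.0


def avoid(scan):
    """scan: list of (angle_rad, range_m) pairs from a forward-facing LiDAR.
    Returns a (speed_mps, steering_rad) command."""
    nearest_angle, nearest_range = min(scan, key=lambda ar: ar[1])
    if nearest_range < STOP_DISTANCE_M:
        return 0.0, 0.0                       # too close: stop
    if nearest_range < SLOW_DISTANCE_M:
        # Scale speed with clearance and steer away from the obstacle.
        speed = MAX_SPEED_MPS * (nearest_range - STOP_DISTANCE_M) \
            / (SLOW_DISTANCE_M - STOP_DISTANCE_M)
        steer = -math.copysign(0.3, nearest_angle)  # rad, away from obstacle
        return speed, steer
    return MAX_SPEED_MPS, 0.0                 # path clear: full speed


# Obstacle slightly to the left at 2.5 m -> slow down, steer right.
print(avoid([(0.2, 2.5), (-0.5, 8.0), (0.0, 9.0)]))
```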