Advantages

All-Weather & All-Time Performance

"Camera + 4D Imaging Radar" fusion perception creates a farther and clearer vision, handling complex scenarios unaffected by severe weather condition.

High Cost-Effectiveness

The driving-parking integrated 360° fusion perception system enables cost-effective implementation of an end-to-end (E2E) algorithm architecture, delivering a more user-friendly driving assistance system.

Unified Software-Hardware Architecture

Engineered through deep hardware-software integration and optimized adaptation, the system is designed for mass production, with full-stack capabilities that propel the automotive industry toward the future.

High Safety & Reliability

Dual hardware/software redundancy and a 4-in-1 safety framework ensure a stable, secure, and efficient smart mobility solution.

Scalable ODD

Empowered by an all-weather multi-modal database, the G-PAL system achieves autonomous generalization and adaptive response across L2-to-L4 multi-scenario applications, enabling door-to-door (D2D) driving.

Modularized Delivery

Supported by a robust system architecture, the G-PAL system delivers customized solutions with full-stack controllability, adapting to diverse requirements.

Application Scenarios

HD-Map-Free Door-to-Door (D2D) Driving Assistance

G-PAL Driving-Parking Integrated Driving Assistance System

For the Mass Adoption of Driving Assistance, to the Future of Accessible Tech Innovation
Gaia 2.0

The Gaia 2.0 Driving Assistance System, with core advantages of all-weather reliability and exceptional cost-effectiveness, extends applications to complex scenarios such as urban NOA (Navigate on Autopilot) while optimizing the highway NOA experience. Through deep fusion of visual perception and 4D mmWave imaging radar, the system constructs a robust environmental perception architecture that precisely handles challenges including on/off-ramps, construction-zone avoidance, and accident-vehicle circumvention, significantly enhancing the reliability of planning and control in complex conditions. For APA, the system pioneers a multi-sensor fusion solution combining 4D mmWave imaging radar with surround-view cameras, eliminating reliance on ultrasonic sensors. This innovative architecture strengthens environmental adaptability and strikes a breakthrough balance between functionality and cost through an optimized hardware configuration, establishing a new benchmark for the mass adoption of driving assistance technologies.

Gaia 3.0

The Gaia 3.0 Driving Assistance System establishes a new paradigm for all-weather, high-reliability driving experiences, extending its applications to full scenarios through a full-stack technological upgrade. The solution employs data-level fusion of cameras and 4D mmWave imaging radars, achieving high-precision environmental modeling and recognition in extreme scenarios including pedestrian intrusion, stationary-obstacle avoidance, and heavy rain or fog. Enhanced road-topology parsing algorithms continuously improve urban road comprehension and real-time decision efficiency, providing robust technical support for urban NOA. As the industry's first driving assistance system to integrate five 4D mmWave imaging radars, Gaia 3.0 maintains environmental perception accuracy while delivering a more cost-effective hardware configuration. The redundant camera-radar architecture improves system fault tolerance, offering an engineering solution for the mass deployment of driving assistance.

Gaia 4.0

The Gaia 4.0 Driving Assistance System employs a full-domain redundant architecture, achieving comprehensive perception through a multi-modal sensor fusion framework. The system integrates high-resolution cameras, 4D mmWave imaging radar, and LiDAR, coupled with a real-time cross-validation mechanism between master and backup domain controllers, establishing dual protection at the hardware level. It also features E-GAS, strictly complying with ASIL-D requirements.

High Performance & HD-Map Free

11V5R Mid-high Compute Domain Controller
(J6M/2*J6M/MDC 510Pro/MDC 610/Orin X)
Delivers the computational performance required to support ADAS applications including urban NOA, strengthening the market competitiveness of OEMs.

4D Imaging Front Radar

4D Imaging Corner Radar

Front View Long-Focal Camera

Front View Wide-Angle Camera

Rear View Camera

Surround View Camera

Side View Camera

Domain Controller

D2D

Urban NOA

Highway NOA

Roundabout Navigation

General Obstacle Avoidance

Unprotected Left Turn

11V

Front view: 8MP ×2

Rear view: 3MP ×1

Surround view: 3MP ×4

Side view: 3MP ×4

5R

4D imaging front radar (FR6C) ×1

4D imaging corner radar (CR6C) ×4

*Note: The listed sensors are recommended configurations and can be customized to match specific requirements. "4D millimeter-wave imaging radar" is abbreviated as "4D imaging radar".

Software Architecture

Functions

Urban NOA
Highway NOA


Roundabout Navigation
Unprotected U-Turn
Unprotected Left/Right Turn


Auto On/Off-Ramp
Driving through High-Curvature Ramp
ALC (Auto Lane Change)
Tunnel Navigation

Innovative Technologies

World Model and Large Model Coupling-Driven End-to-End Technology

The G-PAL system introduces the scene generation and generalization capabilities of a world model into the development process of advanced driving assistance functions, enabling efficient, automated model iteration. The world model can generalize scenes from collected data and generate long-tail scenarios of interest, reducing the demand for real-vehicle data collection. The end-to-end model architecture, based on a Mixture-of-Experts (MoE) strategy, reduces the number of activated parameters during training, increasing training speed and lowering training costs. Through model distillation, the system further reduces the parameter count of the end-to-end model to enable in-vehicle deployment.
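The parameter-saving idea behind MoE can be illustrated with a minimal sketch (all sizes and weights are invented for illustration, not G-PAL internals): a router scores several experts per input, and only the top-k experts are evaluated, so only a fraction of the layer's parameters are active per sample.

```python
import numpy as np

# Toy top-k Mixture-of-Experts layer. Only TOP_K of N_EXPERTS experts are
# activated per input, so the fraction of parameters touched per forward
# pass is roughly TOP_K / N_EXPERTS. All shapes are illustrative.
rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, D_IN, D_OUT = 8, 2, 16, 16
experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]  # expert weights
router = rng.normal(size=(D_IN, N_EXPERTS))                           # gating network

def moe_forward(x):
    """Route input x to the TOP_K highest-scoring experts only."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]        # indices of the selected experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over selected experts only
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top

x = rng.normal(size=D_IN)
y, active = moe_forward(x)
print(f"active experts: {sorted(active.tolist())} of {N_EXPERTS}")
print(f"active parameter fraction: {TOP_K / N_EXPERTS:.2f}")
```

With 2 of 8 experts active, only a quarter of the expert parameters participate in each step, which is the mechanism behind the claimed training-cost reduction.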

GGPNet: G-PAL Generalized Perception Network

A single neural network produces multi-task outputs from fused 4D imaging radar and camera data, unifying perception across dynamic and static scenarios. Leveraging the cross-attention mechanism of the Transformer architecture, it efficiently processes the complex correlations between visual and 4D imaging radar data, comprehensively enhancing the system's precision in environment recognition and understanding. This lays a robust perceptual foundation for downstream prediction and decision-making.
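The cross-attention step can be sketched as follows (a generic scaled dot-product formulation with invented shapes, not the actual GGPNet design): camera features act as queries while radar features supply keys and values, so each camera cell aggregates the radar evidence most correlated with it.

```python
import numpy as np

# Generic cross-attention between two modalities. Shapes are assumptions:
# 100 camera BEV cells and 40 radar features, sharing a feature dimension D.
rng = np.random.default_rng(1)
D = 32
cam_feat = rng.normal(size=(100, D))     # queries: camera BEV features
radar_feat = rng.normal(size=(40, D))    # keys/values: radar features

def cross_attention(q_feat, kv_feat):
    """Scaled dot-product cross-attention: q from one modality, k/v from the other."""
    scores = q_feat @ kv_feat.T / np.sqrt(D)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return attn @ kv_feat                        # radar-informed camera features

fused = cross_attention(cam_feat, radar_feat)
print(fused.shape)   # one fused feature vector per camera BEV cell
```

In a real network the queries, keys, and values would pass through learned projections; the sketch keeps only the correlation-and-aggregate structure that lets one modality attend to the other.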

Integration Strategy of Predictive Decision-Making Network Model and MCTS Decision Tree

G-PAL has developed a high-efficiency multi-objective joint strategy model that rapidly generates initial solutions for scenario-level target strategies, significantly enhancing the speed and accuracy of decision-making. In addition, the system incorporates a backend decision-making mechanism based on Monte Carlo Tree Search (MCTS), which is interpretable and provides safety oversight for multi-vehicle strategies, effectively improving the system's safety and reliability.
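A minimal MCTS loop can show how tree search vets candidate maneuvers; the scenario, rewards, and two-action space below are entirely invented for illustration and stand in for a real multi-vehicle interaction model.

```python
import math
import random

# Toy MCTS (UCB1 selection, random rollouts) over a 3-step maneuver plan.
# Action 0 = keep lane (safe, modest reward); action 1 = change lane
# (occasionally better, sometimes heavily penalized). Rewards are made up.
random.seed(0)

def simulate(action_seq):
    """Hypothetical rollout reward for a sequence of maneuvers."""
    reward = 0.0
    for a in action_seq:
        reward += 1.0 if a == 0 else random.choice([0.5, -4.0])
    return reward

class Node:
    def __init__(self):
        self.children = {}     # action -> Node
        self.visits = 0
        self.value = 0.0

def mcts(root, depth=3, iters=2000, c=1.4):
    for _ in range(iters):
        node, path = root, []
        for _ in range(depth):                       # selection / expansion
            for a in (0, 1):
                node.children.setdefault(a, Node())
            a = max(node.children, key=lambda a: float("inf")
                    if node.children[a].visits == 0 else
                    node.children[a].value / node.children[a].visits
                    + c * math.sqrt(math.log(node.visits + 1)
                                    / node.children[a].visits))
            path.append((node, a))
            node = node.children[a]
        r = simulate([a for _, a in path])           # rollout
        for parent, a in path:                       # backpropagation
            parent.visits += 1
            child = parent.children[a]
            child.visits += 1
            child.value += r

root = Node()
mcts(root)
best = max(root.children, key=lambda a: root.children[a].visits)
print("preferred first action:", "keep lane" if best == 0 else "change lane")
```

The visit counts at the root make the decision inspectable: one can read off how often each maneuver was explored and why the safer branch was preferred, which is the interpretability property the paragraph refers to.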

360° "Vision+4D mmWave Imaging Radar" Dynamic/Static Perception & General Obstacle Detection

Radar-vision BEV feature fusion combined with mixed strategies enhances perception performance, demonstrating higher precision and recall across diverse weather conditions, road scenarios, and traffic participants, while improving target position accuracy, velocity resolution, and heading precision. The vision-radar fused occupancy framework enables complete 3D scene representation, generalizing better to irregularly shaped obstacles and special road structures while reducing the need for complex rule design and improving general obstacle detection performance.
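One common way to combine per-cell occupancy evidence from two sensors is log-odds fusion; the sketch below uses that textbook formulation with invented probabilities (it is not the production framework) to show how radar can confirm an obstacle that vision alone is unsure about, e.g. in heavy rain.

```python
import numpy as np

# Fuse two occupancy probability grids in log-odds form, assuming the
# sensors provide independent evidence. Numbers are illustrative only.
def logodds(p):
    return np.log(p / (1.0 - p))

def fuse(p_vision, p_radar):
    """Combine occupancy probabilities from two modalities cell by cell."""
    fused = logodds(p_vision) + logodds(p_radar)
    return 1.0 / (1.0 + np.exp(-fused))           # back to probability

# 1D slice of a grid: radar is confident about an obstacle at cell 2,
# vision is uncertain; fusion keeps the detection while free cells stay free.
p_vis = np.array([0.1, 0.1, 0.55, 0.1])
p_rad = np.array([0.1, 0.1, 0.95, 0.1])
p = fuse(p_vis, p_rad)
print(np.round(p, 3))
```

Because evidence simply adds in log-odds space, either modality can raise a cell's occupancy on its own, which is what gives a fused occupancy grid its robustness to irregularly shaped obstacles that defeat class-based detectors.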

High-Robustness and High-Reliability Planning and Control Framework

In addition to the integrated predictive decision-making network, G-PAL has developed a highly robust and reliable planning and control component based on optimization theory, which is fully interpretable. A data-driven strategy network enhances the human-like quality of the system, while the robust planning and control module safeguards the system's lower performance bound for safety and functionality. The overall solution balances human-like strategy generation with production feasibility, providing customers with an efficient and comfortable driving experience across all scenarios.
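The interpretability claim for optimization-based planning can be made concrete with a toy example (a 1D trajectory smoother with invented weights, not the actual planner): a quadratic cost trading off tracking error against acceleration has a closed-form solution, so the planner's behavior is fully determined by its cost terms.

```python
import numpy as np

# Smooth a step-change lane offset: minimize ||x - ref||^2 + w * ||D2 x||^2,
# where D2 is the second-difference (acceleration) operator. The normal
# equations give a unique, deterministic solution. Weights are illustrative.
N = 20
ref = np.concatenate([np.zeros(10), np.ones(10) * 2.0])   # reference offset

D2 = np.zeros((N - 2, N))
for i in range(N - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]                     # x[i] - 2x[i+1] + x[i+2]

w_smooth = 5.0
H = np.eye(N) + w_smooth * D2.T @ D2                      # positive definite
x = np.linalg.solve(H, ref)                               # closed-form optimum
print(np.round(x, 2))   # trajectory eases into the offset instead of jumping
```

Unlike a learned policy, every property of this trajectory traces back to an explicit cost term, which is what makes such a module suitable as a verifiable lower performance bound beneath a data-driven strategy network.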

EkaRT: Cross-Domain Multi-Platform Middleware

The middleware offers broad compatibility, with adaptations for seL4, QNX, Linux, and Android OS. Its lightweight messaging architecture enables efficient multi-sensor data coordination through low-latency, high-throughput communication. Based on the PTP/gPTP protocols, EkaRT builds a global clock-synchronization network that ensures spatiotemporal consistency for cross-domain collaboration, while its deterministic scheduling architecture ensures dynamic priority allocation for hard real-time tasks. A complete debugging toolchain supports bag data recording and playback, visual analytics, and anomaly tracing, significantly accelerating development.
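The clock synchronization that PTP-style protocols provide rests on a simple two-way timestamp exchange; the sketch below shows the textbook offset/delay computation (standard IEEE 1588 arithmetic, not EkaRT's implementation), assuming a symmetric link.

```python
# Two-way PTP exchange: t1 = master sends Sync, t2 = slave receives it,
# t3 = slave sends Delay_Req, t4 = master receives it. With a symmetric
# path, offset and one-way delay fall out of the four timestamps.
def ptp_offset_delay(t1, t2, t3, t4):
    """Return (slave clock offset, one-way path delay) in seconds."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: slave clock runs 5 ms ahead of the master; true delay is 1 ms.
t1 = 100.000
t2 = 100.006          # t1 + delay (0.001) + offset (0.005)
t3 = 100.010
t4 = 100.006          # t3 + delay (0.001) - offset (0.005)
offset, delay = ptp_offset_delay(t1, t2, t3, t4)
print(f"offset={offset * 1000:.1f} ms, delay={delay * 1000:.1f} ms")
```

Once every domain controller knows its offset to a grandmaster clock, sensor timestamps from different domains become directly comparable, which is the spatiotemporal consistency the middleware relies on.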
Note: The G-PAL Driving Assistance System provides combined driving assistance functionalities. All demonstrated operations were conducted by professionals under controlled, safe conditions; do not attempt to reproduce them.
