Robust Trajectory Prediction against Adversarial Attacks

1 University of Michigan
2 NVIDIA
3 Caltech
4 Stanford


Abstract


Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing effective adversarial training methods and (2) adding domain-specific data augmentation to mitigate performance degradation on clean data. We demonstrate that, compared to a model trained on clean data, our method improves performance on adversarial data by 46% at the cost of only a 3% performance degradation on clean data. Additionally, compared to existing robust methods, our method improves performance by 21% on adversarial examples and 9% on clean data. We also evaluate our robust model with a planner to study its downstream impacts, and show that it significantly reduces the rate of severe accidents (e.g., collisions and off-road driving).
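To make the threat model concrete, below is a minimal sketch of a generic PGD-style attack that perturbs an agent's observed trajectory history to maximize prediction error. This illustrates the class of attacks considered, not the paper's exact attack; the model interface and the hyperparameters (eps, alpha, steps) are assumptions for illustration.

import torch
import torch.nn.functional as F

def pgd_attack(model, history, future, eps=0.3, alpha=0.05, steps=10):
    # Perturb the observed past trajectory within an L-inf ball of radius
    # `eps` (meters) so as to maximize the error of the predicted future.
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(steps):
        pred = model(history + delta)           # predicted future trajectory
        loss = F.mse_loss(pred, future)         # attack objective: prediction error
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    return (history + delta).detach()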

Overview of RobustTraj preventing an autonomous vehicle (AV) from collisions when its trajectory prediction model is under adversarial attack. Under attack, the AV mispredicts the future trajectory of the other agent turning right (yellow vehicle). As a result, the AV speeds up instead of slowing down and eventually collides with the other vehicle.

RobustTraj Demo


In this work, we propose an adversarial training framework (RobustTraj) for training adversarially robust trajectory prediction models.
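As a rough illustration of what such a framework can look like, the sketch below mixes clean and adversarial losses in each training step, reusing the hypothetical pgd_attack helper from above. The 50/50 loss weighting and the augment hook (standing in for the domain-specific data augmentation) are illustrative assumptions, not the paper's exact recipe.

import torch.nn.functional as F

def train_step(model, optimizer, history, future, augment=None):
    # Optional domain-specific data augmentation on the clean sample.
    if augment is not None:
        history, future = augment(history, future)
    # Craft an adversarial version of the observed history (see sketch above).
    adv_history = pgd_attack(model, history, future)
    optimizer.zero_grad()  # clear gradients accumulated during the attack
    clean_loss = F.mse_loss(model(history), future)
    adv_loss = F.mse_loss(model(adv_history), future)
    loss = 0.5 * clean_loss + 0.5 * adv_loss  # trade off clean vs. robust accuracy
    loss.backward()
    optimizer.step()
    return loss.item()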

Paper


Robust Trajectory Prediction against Adversarial Attacks

Yulong Cao, Danfei Xu, Xinshuo Weng, Z. Morley Mao, Anima Anandkumar, Chaowei Xiao and Marco Pavone

arXiv version
BibTeX
Code

Citation


@misc{cao2022robusttraj,
  doi = {10.48550/ARXIV.2208.00094},
  url = {https://arxiv.org/abs/2208.00094},
  author = {Cao, Yulong and Xu, Danfei and Weng, Xinshuo and Mao, Zhuoqing and
            Anandkumar, Anima and Xiao, Chaowei and Pavone, Marco},
  title = {Robust Trajectory Prediction against Adversarial Attacks},
  publisher = {arXiv},
  year = {2022},
}