CoDrive: Improving Automobile Positioning via Collaborative Driving

Abstract

An increasing number of depth sensors and surround-view cameras are being installed in the new generation of cars. For example, Tesla Motors uses a forward radar, a front-facing camera, and multiple ultrasonic sensors to enable its Autopilot feature. Meanwhile, older or legacy cars are expected to remain on the road in large volumes for at least the next 10 to 15 years. Legacy car drivers rely on traditional GPS for navigation services, whose accuracy varies from 5 to 10 meters under a clear line of sight and degrades to up to 30 meters in a downtown environment. At the same time, a sensor-rich car achieves better accuracy due to its high-end sensing capabilities. To bridge this gap, we propose CoDrive, a system that provides a sensor-rich car's accuracy to a legacy car. We achieve this by correcting the GPS errors of a legacy car during an opportunistic encounter with a sensor-rich car. CoDrive uses the smartphone GPS of all participating cars, the RGB-D sensors of sensor-rich cars, and the road boundaries of a traffic scene to generate optimization constraints. Our algorithm collectively reduces GPS errors, resulting in an accurate reconstruction of the traffic scene's aerial view. CoDrive does not require stationary landmarks or 3D maps. We empirically evaluate CoDrive, showing that it achieves a 90% and a 30% reduction in cumulative GPS error for legacy and sensor-rich cars, respectively, while preserving the shape of the traffic scene.
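
To illustrate the flavor of the joint optimization described above, here is a minimal toy sketch (not the paper's implementation): noisy GPS fixes of a sensor-rich car and a legacy car are refined together using GPS priors weighted by each car's confidence, a relative-position measurement from the sensor-rich car's RGB-D sensor, and a soft road-boundary constraint. All coordinates, weights, and helper names are illustrative assumptions.

```python
# Toy joint refinement of two cars' positions (illustrative only).
import numpy as np
from scipy.optimize import least_squares

# Noisy GPS fixes (x = along-road, y = across-road), in meters.
gps_sensor_rich = np.array([0.0, 1.5])    # sensor-rich car, small error
gps_legacy      = np.array([12.0, 9.0])   # legacy car, large downtown error

# RGB-D relative measurement: legacy car observed ~10 m ahead, same lane.
rel_measured = np.array([10.0, 0.0])

# Road boundary: both cars must lie within the road half-width.
road_half_width = 3.5

# Illustrative weights: trust the sensor-rich GPS and the RGB-D
# measurement more than the legacy GPS prior.
w_gps_rich, w_gps_legacy, w_rel, w_road = 5.0, 0.5, 10.0, 10.0

def residuals(p):
    rich, legacy = p[:2], p[2:]
    res = []
    res.extend(w_gps_rich   * (rich   - gps_sensor_rich))    # strong prior
    res.extend(w_gps_legacy * (legacy - gps_legacy))          # weak prior
    res.extend(w_rel * ((legacy - rich) - rel_measured))      # RGB-D constraint
    # Soft penalty for positions outside the road boundary.
    res.append(w_road * max(0.0, abs(rich[1])   - road_half_width))
    res.append(w_road * max(0.0, abs(legacy[1]) - road_half_width))
    return res

x0 = np.concatenate([gps_sensor_rich, gps_legacy])
sol = least_squares(residuals, x0)
print("refined sensor-rich position:", sol.x[:2])
print("refined legacy position:     ", sol.x[2:])
```

In this sketch the legacy car's large GPS error is pulled toward the sensor-rich car's observation and back inside the road boundary, which is the intuition behind the collective error reduction; the actual system formulates and solves its constraints as described in the paper.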

Publication
IEEE Conference on Computer Communications (IEEE INFOCOM '18). Acceptance rate: 19.2% (309/1606).

🏆 Award(s)

Best in-session presentation award in the “Vehicular Networks” session of IEEE INFOCOM 2018.

💡 Patent(s)

Determining car positions. US Patent US10380889B2, 2019.

