BatMobility: Towards Flying Without Seeing for Autonomous Drones
Emerson Sie, Zikun Liu, Deepak Vasisht
ACM International Conference on Mobile Computing and Networking (MobiCom), 2023
Unmanned aerial vehicles (UAVs) rely on optical sensors such as cameras and lidar for autonomous operation. However, optical sensors fail under bad lighting, are occluded by debris and adverse weather conditions, struggle in featureless environments, and easily miss transparent surfaces and thin obstacles. In this paper, we question the extent to which optical sensors are sufficient or even necessary for full UAV autonomy. Specifically, we ask: can UAVs autonomously fly without seeing? We present BatMobility, a lightweight mmWave radar-only perception system for autonomous UAVs that completely eliminates the need for any optical sensors. BatMobility enables vision-free autonomy through two key functionalities: radio flow estimation (a novel FMCW radar-based alternative to optical flow built on surface-parallel Doppler shift) and radar-based collision avoidance. We build BatMobility using inexpensive commodity sensors and deploy it as a real-time system on a small off-the-shelf quadcopter, showing its compatibility with existing flight controllers. Surprisingly, our evaluation shows that BatMobility achieves comparable or better performance than commercial-grade optical sensors across a wide range of scenarios.
@inproceedings{sie2023batmobility,
  author    = {Emerson Sie and Zikun Liu and Deepak Vasisht},
  title     = {BatMobility: Towards Flying Without Seeing for Autonomous Drones},
  booktitle = {The 29th Annual International Conference on Mobile Computing and Networking (ACM MobiCom '23)},
  year      = {2023},
  doi       = {10.1145/3570361.3592532},
  isbn      = {978-1-4503-9990-6},
}
We enable vision-free UAV autonomy using only on-board Doppler shift sensing, which is robust to adverse environments that lack visual or geometric features; the sketch below illustrates the range-Doppler processing this kind of sensing builds on.
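For readers unfamiliar with how velocity falls out of FMCW radar data, here is a minimal NumPy sketch of the textbook range-Doppler processing chain (range FFT across each chirp, Doppler FFT across chirps, bin-to-velocity conversion). It only illustrates the underlying principle, not BatMobility's radio flow estimator; the radar parameters and helper names (range_doppler_map, doppler_bin_to_velocity) are assumptions made for the example.

import numpy as np

# Textbook FMCW range-Doppler processing (illustrative only, not BatMobility's pipeline).
# Assumed, hypothetical radar parameters roughly matching a 77 GHz mmWave sensor.
C = 3e8                # speed of light (m/s)
F_CARRIER = 77e9       # carrier frequency (Hz)
CHIRP_PERIOD = 100e-6  # chirp repetition interval (s)

def range_doppler_map(iq_cube):
    # iq_cube: complex samples, shape (num_chirps, samples_per_chirp).
    # Range FFT over fast time (within a chirp), Doppler FFT over slow time (across chirps).
    range_fft = np.fft.fft(iq_cube, axis=1)
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler_fft)  # shape (doppler_bins, range_bins)

def doppler_bin_to_velocity(doppler_bin, num_chirps):
    # Convert a Doppler bin index (after fftshift) to radial velocity in m/s.
    wavelength = C / F_CARRIER
    doppler_hz = (doppler_bin - num_chirps // 2) / (num_chirps * CHIRP_PERIOD)
    return doppler_hz * wavelength / 2.0

# Example: radial velocity of the strongest reflector in a (here random) chirp cube.
cube = np.random.randn(64, 256) + 1j * np.random.randn(64, 256)
rd = range_doppler_map(cube)
doppler_bin, range_bin = np.unravel_index(np.argmax(rd), rd.shape)
print(doppler_bin_to_velocity(doppler_bin, cube.shape[0]))

Radio flow in the paper goes further by exploiting the Doppler signature of surfaces parallel to the direction of motion; the sketch above stops at per-bin radial velocity.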
Exploring Practical Vulnerabilities of Machine Learning-based Wireless Systems
Zikun Liu, Changming Xu, Emerson Sie, Gagandeep Singh, Deepak Vasisht
USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2023
Machine learning (ML) is an increasingly popular tool for designing wireless systems, both for communication and sensing applications. We design and evaluate the impact of practically feasible adversarial attacks against such ML-based wireless systems. In doing so, we solve challenges that are unique to the wireless domain: the lack of synchronization between the benign device and the adversarial device, and the effects of the wireless channel on adversarial noise. We build RAFA (RAdio Frequency Attack), the first hardware-implemented adversarial attack platform against ML-based wireless systems, and evaluate it against two state-of-the-art communication and sensing approaches at the physical layer. Our results show that both these systems experience a significant performance drop in response to the adversarial attack.
@inproceedings{liu2023exploring,
  author    = {Zikun Liu and Changming Xu and Emerson Sie and Gagandeep Singh and Deepak Vasisht},
  title     = {Exploring Practical Vulnerabilities of Machine Learning-based Wireless Systems},
  booktitle = {20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23)},
  year      = {2023},
  month     = apr,
  address   = {Boston, MA},
  pages     = {1801--1817},
  publisher = {USENIX Association},
  isbn      = {978-1-939133-33-5},
  url       = {https://www.usenix.org/conference/nsdi23/presentation/liu-zikun},
}
We highlight the risks posed by adversarial attacks on real-world 5G machine-learned systems; the toy sketch below illustrates the synchronization and channel effects that make over-the-air attacks challenging.
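The two wireless-specific challenges named in the abstract are easy to picture with a toy simulation: whatever the attacker transmits reaches the receiver through an unknown channel and with an unknown time offset relative to the benign signal. The NumPy sketch below models only that observation; it is not RAFA's attack-generation method, and every name and parameter in it (apply_channel, the flat-fading gain, the 64-sample offset range) is a hypothetical choice for illustration.

import numpy as np

# Toy model of an over-the-air adversarial perturbation (illustrative only).
rng = np.random.default_rng(0)

def apply_channel(x, delay_samples, gain):
    # Unknown flat-fading gain plus an integer-sample delay (a deliberately crude channel).
    delayed = np.concatenate([np.zeros(delay_samples, dtype=complex), x])[: len(x)]
    return gain * delayed

num_samples = 1024
benign = np.exp(2j * np.pi * 0.05 * np.arange(num_samples))  # toy benign baseband signal
adv = 0.1 * (rng.standard_normal(num_samples) + 1j * rng.standard_normal(num_samples))  # attacker waveform

# What the victim receiver actually sees: the perturbation arrives rotated, scaled,
# and misaligned, so an attack crafted assuming perfect alignment can lose its effect
# unless it is designed to be robust to these unknowns.
delay = int(rng.integers(0, 64))                           # unknown synchronization offset (samples)
gain = rng.standard_normal() + 1j * rng.standard_normal()  # unknown fading coefficient
received = benign + apply_channel(adv, delay, gain)
print(delay, np.round(gain, 3), received.shape)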
RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context
Emerson Sie, Deepak Vasisht
IEEE International Conference on Robotics and Automation (ICRA), 2022
Wireless tags are increasingly used to track and identify common items of interest such as retail goods, food, medicine, clothing, books, documents, keys, equipment, and more. At the same time, there is a need for labelled visual data featuring such items for the purpose of training object detection and recognition models for robots operating in homes, warehouses, stores, libraries, pharmacies, and so on. In this paper, we ask: can we leverage the tracking and identification capabilities of such tags as a basis for a large-scale automatic image annotation system for robotic perception tasks? We present RF-Annotate, a pipeline for autonomous pixel-wise image annotation which enables robots to collect labelled visual data of objects of interest as they encounter them within their environment. Our pipeline uses unmodified commodity RFID readers and RGB-D cameras, and exploits arbitrary small-scale motions afforded by mobile robotic platforms to spatially map RFIDs to corresponding objects in the scene. Our only assumption is that the objects of interest within the environment are pre-tagged with inexpensive battery-free RFIDs costing 3–15 cents each. We demonstrate the efficacy of our pipeline on several RGB-D sequences of tabletop scenes featuring common objects in a variety of indoor environments.
@inproceedings{sie2022rfannotate,
  author    = {Sie, Emerson and Vasisht, Deepak},
  title     = {RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context},
  booktitle = {2022 International Conference on Robotics and Automation (ICRA)},
  year      = {2022},
  pages     = {2590--2596},
  doi       = {10.1109/ICRA46639.2022.9812072},
}
We describe a simple method to automate image annotation of objects in real-world environments tagged with proximity-based RF trackers (e.g. RFIDs, AirTags); the sketch below illustrates one way a tag can be associated with an object by correlating RF-derived and depth-derived range changes.
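One intuition behind the pipeline, as described in the abstract, is that small robot motions change a tag's RF readings and a tagged object's depth in a correlated way. The NumPy sketch below shows one simple, purely illustrative way such an association could be made by correlating phase-derived range changes with depth-derived range changes; it is not the paper's actual algorithm, and WAVELENGTH, phase_to_relative_range, and associate are names invented for the example.

import numpy as np

# Illustrative tag-to-object association via motion correlation (not RF-Annotate's algorithm).
WAVELENGTH = 0.326  # approx. carrier wavelength for 915 MHz UHF RFID (m)

def phase_to_relative_range(unwrapped_phase):
    # A tag's backscatter phase advances by 4*pi*d/wavelength over the round trip,
    # so unwrapped phase changes map to range changes relative to the first sample.
    unwrapped_phase = np.asarray(unwrapped_phase, dtype=float)
    return (unwrapped_phase - unwrapped_phase[0]) * WAVELENGTH / (4 * np.pi)

def associate(tag_phases, object_ranges):
    # tag_phases: {tag_id: unwrapped phase series}; object_ranges: {object_id: depth-derived range series},
    # both sampled at the same timestamps while the robot moves. Returns {tag_id: best-matching object_id}.
    matches = {}
    for tag_id, phase in tag_phases.items():
        tag_range = phase_to_relative_range(phase)
        scores = {obj_id: np.corrcoef(tag_range, np.asarray(r) - np.asarray(r)[0])[0, 1]
                  for obj_id, r in object_ranges.items()}
        matches[tag_id] = max(scores, key=scores.get)
    return matches

# Example with two tagged objects observed over the same 50 timestamps.
t = np.linspace(0.0, 1.0, 50)
ranges = {"mug": 1.0 + 0.05 * np.sin(2 * np.pi * t), "book": 1.5 - 0.04 * t}
phases = {tag: 4 * np.pi * r / WAVELENGTH for tag, r in ranges.items()}
print(associate(phases, ranges))  # expected: {'mug': 'mug', 'book': 'book'}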