Unmanned aerial vehicles (UAVs) rely on optical sensors such as cameras and lidar for autonomous operation. However, optical sensors fail under poor lighting, are occluded by debris and adverse weather, struggle in featureless environments, and easily miss transparent surfaces and thin obstacles. In this paper, we question the extent to which optical sensors are sufficient or even necessary for full UAV autonomy. Specifically, we ask: can UAVs autonomously fly without seeing? We present BatMobility, a lightweight mmWave radar-only perception system for autonomous UAVs that completely eliminates the need for any optical sensors. BatMobility enables vision-free autonomy through two key functionalities: radio flow estimation (a novel FMCW radar-based alternative to optical flow based on surface-parallel Doppler shift) and radar-based collision avoidance. We build BatMobility using inexpensive commodity sensors and deploy it as a real-time system on a small off-the-shelf quadcopter, showing its compatibility with existing flight controllers. Surprisingly, our evaluation shows that BatMobility achieves comparable or better performance than commercial-grade optical sensors across a wide range of scenarios.
@inproceedings{sie2023batmobility,
author = {Emerson Sie and Zikun Liu and Deepak Vasisht},
title = {BatMobility: Towards Flying Without Seeing for Autonomous Drones},
booktitle = {The 29th Annual International Conference on Mobile Computing and Networking (ACM MobiCom '23)},
year = {2023},
doi = {10.1145/3570361.3592532},
isbn = {978-1-4503-9990-6},
}
Doppler flow enables robust UAV autonomy in adverse environments lacking visual or geometric features.
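To make the radio flow idea concrete, here is a minimal NumPy sketch of the underlying primitive: a range-Doppler FFT over one FMCW frame, from which per-bin radial velocities follow. The carrier frequency and chirp time below are illustrative placeholders, and this is only the classical front-end computation, not the paper's actual radio flow estimator.

# Minimal sketch: per-bin radial velocity from one FMCW radar frame via a
# range-Doppler FFT. Parameter values are illustrative assumptions, not
# BatMobility's configuration.
import numpy as np

def range_doppler(frame, fc=77e9, chirp_time=60e-6):
    """frame: (num_chirps, num_samples) array of de-chirped IQ samples."""
    # FFT over fast time (samples within a chirp) -> range bins.
    rng = np.fft.fft(frame, axis=1)
    # FFT over slow time (across chirps) -> Doppler bins.
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)
    num_chirps = frame.shape[0]
    wavelength = 3e8 / fc
    # Doppler bin k maps to radial velocity v = k * wavelength / (2 * N * Tc).
    velocities = (np.arange(num_chirps) - num_chirps // 2) * wavelength \
        / (2 * num_chirps * chirp_time)
    return np.abs(rd), velocities

# Radio flow intuition: for a downward-facing radar, ground reflections from
# directly below have near-zero Doppler, while reflections arriving at angle
# theta from nadir pick up a surface-parallel component v * sin(theta), so the
# Doppler spectrum constrains horizontal velocity analogously to optical flow.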
Implantable and edible medical devices promise to provide continuous, more directed, and more comfortable healthcare treatments. Communicating with such devices and localizing them is a fundamental, but challenging, mobile networking problem. Recent work has focused on leveraging near-field magnetism-based systems to avoid the challenges of attenuation, refraction, and reflection experienced by radio waves. However, these systems suffer from limited range and require fingerprinting-based localization techniques. We present InnerCompass, a magnetic backscatter system for in-body communication and localization. We present new magnetism-native design insights that enhance the range of these devices. We also design the first analytical model for magnetic-field-based localization that generalizes across different scenarios. We implement InnerCompass and evaluate it in porcine tissue. Our results show that InnerCompass can communicate at 5 kbps at a distance of 25 cm and localize with an accuracy of 5 mm.
@inproceedings{tao2023innercompass,
author = {Bill Tao and Emerson Sie and Jayanth Shenoy and Deepak Vasisht},
title = {Magnetic Backscatter for In-body Communication and Localization},
booktitle = {The 29th Annual International Conference on Mobile Computing and Networking (ACM MobiCom '23)},
year = {2023},
doi = {10.1145/3570361.3613301},
isbn = {978-1-4503-9990-6},
}
Magnetic fields pass through human tissue nearly unattenuated, making them well suited for in-body communication and localization.
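As a rough illustration of magnetic-field-based localization, the sketch below fits a source position to three-axis field measurements at known sensor positions using the textbook point-dipole model. It is a generic baseline under assumed sensor geometry and a known dipole moment, not InnerCompass's analytical model.

# Minimal sketch: localize a magnetic dipole from field measurements by
# least-squares fitting the standard dipole equation. Generic illustration;
# not InnerCompass's actual model.
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def dipole_field(sensor_pos, src_pos, moment):
    """Field of a point dipole `moment` at `src_pos`, seen at `sensor_pos`."""
    r = sensor_pos - src_pos
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi * d**3) * (3 * np.dot(moment, r_hat) * r_hat - moment)

def localize(sensor_positions, measurements, moment, guess):
    """Fit the source position to measured 3-axis field vectors (S, 3)."""
    def residuals(p):
        pred = np.array([dipole_field(s, p, moment) for s in sensor_positions])
        return (pred - measurements).ravel()
    return least_squares(residuals, guess).x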
Machine Learning (ML) is an increasingly popular tool for designing wireless systems, both for communication and sensing applications. We design and evaluate the impact of practically feasible adversarial attacks against such ML-based wireless systems. In doing so, we solve challenges that are unique to the wireless domain: lack of synchronization between a benign device and the adversarial device, and the effects of the wireless channel on adversarial noise. We build RAFA (RAdio Frequency Attack), the first hardware-implemented adversarial attack platform against ML-based wireless systems, and evaluate it against two state-of-the-art communication and sensing approaches at the physical layer. Our results show that both these systems experience a significant performance drop in response to the adversarial attack.
@inproceedings{286473,
author = {Zikun Liu and Changming Xu and Emerson Sie and Gagandeep Singh and Deepak Vasisht},
title = {Exploring Practical Vulnerabilities of Machine Learning-based Wireless Systems},
booktitle = {20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23)},
year = {2023},
isbn = {978-1-939133-33-5},
address = {Boston, MA},
pages = {1801--1817},
url = {https://www.usenix.org/conference/nsdi23/presentation/liu-zikun},
publisher = {USENIX Association},
month = apr,
}
We demonstrate practical, hardware-implemented adversarial attacks against ML-based 5G wireless systems.
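For intuition on why the wireless setting complicates adversarial attacks, the PyTorch sketch below crafts an FGSM-style perturbation while averaging the loss over random time shifts and amplitude scalings, simple stand-ins for the synchronization offsets and channel effects the abstract highlights. The function and its parameters are hypothetical illustrations, not RAFA's attack.

# Minimal sketch: an FGSM-style perturbation over IQ samples made robust to a
# random sync offset (cyclic shift) and a flat channel gain. Illustrative
# only; not RAFA's actual algorithm.
import torch

def channel_robust_fgsm(model, x, y, epsilon=0.01, trials=16):
    """x: (batch, 2, N) real/imag IQ tensor; y: labels; model: classifier."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = torch.nn.CrossEntropyLoss()
    total = 0.0
    for _ in range(trials):
        # Sample a random cyclic shift (sync offset) and amplitude (channel).
        shift = int(torch.randint(0, x.shape[-1], (1,)))
        gain = 0.5 + torch.rand(1)
        d = gain * torch.roll(delta, shifts=shift, dims=-1)
        total = total + loss_fn(model(x + d), y)
    total.backward()
    # One signed-gradient step, maximizing expected loss over channel draws.
    return (epsilon * delta.grad.sign()).detach()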
Wireless tags are increasingly used to track and identify common items of interest such as retail goods, food, medicine, clothing, books, documents, keys, equipment, and more. At the same time, there is a need for labelled visual data featuring such items for the purpose of training object detection and recognition models for robots operating in homes, warehouses, stores, libraries, pharmacies, and so on. In this paper, we ask: can we leverage the tracking and identification capabilities of such tags as a basis for a large-scale automatic image annotation system for robotic perception tasks? We present RF-Annotate, a pipeline for autonomous pixel-wise image annotation which enables robots to collect labelled visual data of objects of interest as they encounter them within their environment. Our pipeline uses unmodified commodity RFID readers and RGB-D cameras, and exploits arbitrary small-scale motions afforded by mobile robotic platforms to spatially map RFIDs to corresponding objects in the scene. Our only assumption is that the objects of interest within the environment are pre-tagged with inexpensive battery-free RFIDs costing 3–15 cents each. We demonstrate the efficacy of our pipeline on several RGB-D sequences of tabletop scenes featuring common objects in a variety of indoor environments.
@inproceedings{sie2022rfannotate,
author = {Sie, Emerson and Vasisht, Deepak},
title = {RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context},
booktitle = {2022 International Conference on Robotics and Automation (ICRA)},
year = {2022},
pages = {2590--2596},
doi = {10.1109/ICRA46639.2022.9812072}
}
We describe a simple method to automate image annotation of objects in real-world environments tagged with proximity-based RF trackers (e.g., RFIDs, AirTags).
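One plausible way to realize such RF-supervised association is sketched below in NumPy: convert each tag's reader phase across viewpoints into range changes and assign it to the object whose camera-derived depth changes agree best. The wavelength constant and the nearest-match rule are illustrative assumptions, not RF-Annotate's actual pipeline.

# Minimal sketch: match RFID tags to objects by comparing phase-derived range
# changes against depth-derived range changes across small robot motions.
# Illustrative only; RF-Annotate's association is more involved.
import numpy as np

WAVELENGTH = 0.33  # ~915 MHz UHF RFID, metres (assumed)

def range_deltas_from_phase(phases):
    """Reader phase is 4*pi*d/lambda mod 2*pi; unwrap to recover range d."""
    return np.unwrap(phases) * WAVELENGTH / (4 * np.pi)

def associate(tag_phases, object_depths):
    """tag_phases: {tag_id: (V,) phases}; object_depths: {obj_id: (V,) metres}."""
    assignment = {}
    for tag, phases in tag_phases.items():
        tag_deltas = np.diff(range_deltas_from_phase(phases))
        # Pick the object whose depth changes best track this tag's range changes.
        best = min(object_depths,
                   key=lambda obj: np.sum((np.diff(object_depths[obj]) - tag_deltas) ** 2))
        assignment[tag] = best
    return assignment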