Emerson Sie

I am an ECE PhD candidate at the University of Illinois at Urbana-Champaign (UIUC), advised by Prof. Deepak Vasisht.

Previously, I completed my bachelor's degree at UIUC with highest honors, majoring in Computer Engineering.

My research interests lie at the intersection of computer vision, wireless sensing, and robotics. Specifically, I am interested in enabling robots to perceive the world beyond the optical spectrum.

Email  /  CV  /  GitHub

Publications
Exploring Practical Vulnerabilities of Machine Learning-based Wireless Systems
Zikun Liu, Changming Xu, Emerson Sie, Gagandeep Singh, Deepak Vasisht
USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2023
abstract

Machine Learning (ML) is an increasingly popular tool for designing wireless systems, both for communication and sensing applications. We design practically feasible adversarial attacks against such ML-based wireless systems and evaluate their impact. In doing so, we solve challenges that are unique to the wireless domain: lack of synchronization between a benign device and the adversarial device, and the effects of the wireless channel on adversarial noise. We build RAFA (RAdio Frequency Attack), the first hardware-implemented adversarial attack platform against ML-based wireless systems, and evaluate it against two state-of-the-art communication and sensing approaches at the physical layer. Our results show that both systems experience a significant performance drop in response to the adversarial attack.

We highlight the possibility of real-world adversarial attacks on machine learning-based 5G systems.
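For intuition only, here is a minimal PyTorch sketch of one way to handle the synchronization challenge the abstract mentions: optimize a perturbation whose attack loss is averaged over random time shifts, so it degrades the victim model regardless of frame alignment. The victim model, I/Q data, and power budget below are toy placeholders, not the RAFA implementation.

import torch
import torch.nn as nn

# Toy stand-in for the victim model (e.g., a classifier over I/Q frames).
victim = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 16))

def shifted_attack_loss(delta, clean_iq, labels, n_shifts=8):
    # An unsynchronized attacker cannot align its waveform with the
    # victim's frames, so average the loss over random circular shifts.
    total = 0.0
    for _ in range(n_shifts):
        shift = int(torch.randint(0, clean_iq.shape[-1], (1,)))
        logits = victim(clean_iq + torch.roll(delta, shift, dims=-1))
        total = total + nn.functional.cross_entropy(logits, labels)
    return total / n_shifts

clean_iq = torch.randn(32, 2, 128)            # toy batch of I/Q frames
labels = torch.randint(0, 16, (32,))
delta = torch.zeros(1, 2, 128, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
budget = 0.1                                  # max perturbation norm (toy)

for _ in range(200):
    opt.zero_grad()
    loss = -shifted_attack_loss(delta, clean_iq, labels)  # ascend victim loss
    loss.backward()
    opt.step()
    with torch.no_grad():                     # project back onto power budget
        delta.mul_(torch.clamp(budget / (delta.norm() + 1e-9), max=1.0))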

BatMobility: Flying Without Seeing for Lightweight UAVs
Emerson Sie, Zikun Liu, Deepak Vasisht
ACM International Conference on Mobile Computing and Networking (MobiCom), 2023
abstract / website

Unmanned aerial vehicles (UAVs) rely on optical sensors such as cameras and lidar for autonomous operation. However, optical sensors fail under bad lighting, are occluded by debris and adverse weather conditions, struggle in featureless environments, and easily miss transparent surfaces and thin obstacles. In this paper, we question the extent to which optical sensors are sufficient or even necessary for full UAV autonomy. Specifically, we ask: can UAVs autonomously fly without seeing? We present BatMobility, a lightweight mmWave radar-only perception system for autonomous UAVs that completely eliminates the need for any optical sensors. BatMobility enables vision-free autonomy through two key functionalities – radio flow estimation (a novel FMCW radar-based alternative to optical flow based on surface-parallel Doppler shift) and radar-based collision avoidance. We build BatMobility using inexpensive commodity sensors and deploy it as a real-time system on a small off-the-shelf quadcopter, showing its compatibility with existing flight controllers. Surprisingly, our evaluation shows that BatMobility achieves comparable or better performance than commercial-grade optical sensors across a wide range of scenarios.

We enable vision-free UAV flight using only millimeter-wave radar sensing, which is robust to adverse weather and bad lighting, and can detect transparent surfaces (e.g., glass) that optical sensors miss.
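As a back-of-the-envelope illustration of the geometry behind radio flow (a sketch under assumed conventions, not the BatMobility pipeline): a ground reflector seen along unit direction u by a radar translating at velocity v contributes a Doppler shift f_d = 2(v·u)/λ, so a handful of detections at known angles yields a linear system for the body-frame velocity. The 77 GHz wavelength and angle conventions below are assumptions.

import numpy as np

WAVELENGTH = 3.9e-3   # ~77 GHz FMCW radar (assumed)

def radio_flow(dopplers, azimuths, elevations):
    # Ground-parallel components of each detection's unit direction,
    # with elevation measured as depression below the horizontal plane.
    ux = np.cos(elevations) * np.cos(azimuths)
    uy = np.cos(elevations) * np.sin(azimuths)
    # f_d = (2 / wavelength) * (v . u)  =>  least squares for v = (vx, vy),
    # assuming level flight (vz ~ 0).
    A = (2.0 / WAVELENGTH) * np.stack([ux, uy], axis=1)
    v, *_ = np.linalg.lstsq(A, dopplers, rcond=None)
    return v

# Toy check: detections synthesized for a quadcopter moving at (1.0, 0.5) m/s.
az = np.random.uniform(-np.pi, np.pi, 64)
el = np.random.uniform(np.deg2rad(30), np.deg2rad(80), 64)
u = np.stack([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az)], axis=1)
f_d = (2.0 / WAVELENGTH) * u @ np.array([1.0, 0.5])
print(radio_flow(f_d, az, el))    # ~ [1.0, 0.5]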

RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context
Emerson Sie, Deepak Vasisht
International Conference on Robotics and Automation (ICRA), 2022
abstract / bibtex / arXiv / slides

Wireless tags are increasingly used to track and identify common items of interest such as retail goods, food, medicine, clothing, books, documents, keys, equipment, and more. At the same time, there is a need for labelled visual data featuring such items for the purpose of training object detection and recognition models for robots operating in homes, warehouses, stores, libraries, pharmacies, and so on. In this paper, we ask: can we leverage the tracking and identification capabilities of such tags as a basis for a large-scale automatic image annotation system for robotic perception tasks? We present RF-Annotate, a pipeline for autonomous pixel-wise image annotation which enables robots to collect labelled visual data of objects of interest as they encounter them within their environment. Our pipeline uses unmodified commodity RFID readers and RGB-D cameras, and exploits arbitrary small-scale motions afforded by mobile robotic platforms to spatially map RFIDs to corresponding objects in the scene. Our only assumption is that the objects of interest within the environment are pre-tagged with inexpensive battery-free RFIDs costing 3–15 cents each. We demonstrate the efficacy of our pipeline on several RGB-D sequences of tabletop scenes featuring common objects in a variety of indoor environments.

@inproceedings{sie2022rfannotate,
  author={Sie, Emerson and Vasisht, Deepak},
  booktitle={2022 International Conference on Robotics and Automation (ICRA)},
  title={RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context},
  year={2022},
  pages={2590-2596},
  doi={10.1109/ICRA46639.2022.9812072}}

We describe a simple method to automate image annotation of objects in real-world environments tagged with proximity-based RF trackers (e.g., RFIDs, AirTags).
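To give a flavor of how such a tag-to-object association could work (a simplified sketch, not the published pipeline): as the robot makes small motions, an RFID tag's backscatter phase encodes changes in reader-tag distance (φ = 4πd/λ mod 2π), so the candidate object whose RGB-D depth trajectory correlates best with the tag's phase-derived distance trajectory can be assigned the tag's label. The carrier frequency and variable names below are illustrative.

import numpy as np

WAVELENGTH = 3e8 / 915e6   # UHF RFID carrier, ~33 cm (assumed)

def match_tag_to_object(tag_phase, object_depths):
    # Backscatter phase advances by 4*pi per metre of reader-tag distance;
    # unwrap it and convert to a relative distance trajectory.
    tag_dist = np.unwrap(tag_phase) * WAVELENGTH / (4 * np.pi)
    d_tag = tag_dist - tag_dist.mean()
    best, best_score = None, -np.inf
    for name, depths in object_depths.items():
        # Correlate the tag's distance changes with each candidate object's
        # camera-measured depth changes over the same frames.
        d_obj = depths - depths.mean()
        score = d_tag @ d_obj / (np.linalg.norm(d_tag) * np.linalg.norm(d_obj) + 1e-9)
        if score > best_score:
            best, best_score = name, score
    return best, best_score

# Toy usage: two candidate objects tracked over 50 frames.
t = np.linspace(0, 1, 50)
truth = 1.0 + 0.05 * np.sin(2 * np.pi * t)           # metres
phases = np.mod(4 * np.pi * truth / WAVELENGTH, 2 * np.pi)
candidates = {"mug": truth, "book": 1.0 + 0.05 * np.cos(2 * np.pi * t)}
print(match_tag_to_object(phases, candidates))        # -> ("mug", ~1.0)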