Projects

1. Radar Vision – Radar Object Detection

Millimeter-wave (mmW) radars are increasingly being integrated into commercial vehicles to support new advanced driver-assistance systems (ADAS), enabling robust, high-performance object detection, localization, and recognition, a key component of environmental perception.

We propose a novel radar multiple-perspectives convolutional neural network (RAMP-CNN) that extracts the location and class of objects by further processing range-velocity-angle (RVA) heatmap sequences. To bypass the complexity of 4D convolutional neural networks, RAMP-CNN combines several lower-dimensional network models and nonetheless approaches the 4D performance upper bound at lower complexity.
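
To make the multi-view idea concrete, below is a minimal PyTorch sketch, not the published RAMP-CNN architecture: each lower-dimensional branch applies 3D convolutions to one 2D projection of the RVA cube stacked over time, and the branch features are fused on the range-angle grid. The layer sizes, the mean-projection step, and the fusion head are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class BranchEncoder(nn.Module):
        """3D-conv encoder for one 2D projection of the RVA cube over time."""
        def __init__(self, feat=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))

        def forward(self, x):  # x: (batch, 1, time, dim1, dim2)
            return self.net(x)

    class MultiViewRadarNet(nn.Module):
        """Fuses range-velocity, range-angle, and velocity-angle branches
        instead of convolving the full 4D RVA sequence directly."""
        def __init__(self, n_classes=3, feat=32):
            super().__init__()
            self.rv, self.ra, self.va = (BranchEncoder(feat) for _ in range(3))
            self.head = nn.Conv3d(3 * feat, n_classes, 1)  # per-cell class scores

        def forward(self, rva):  # rva: (batch, time, range, velocity, angle)
            # Project the 4D cube onto three 2D views (mean is an illustrative choice).
            f_rv = self.rv(rva.mean(dim=4).unsqueeze(1))  # (B, F, T, R, V)
            f_ra = self.ra(rva.mean(dim=3).unsqueeze(1))  # (B, F, T, R, A)
            f_va = self.va(rva.mean(dim=2).unsqueeze(1))  # (B, F, T, V, A)
            # Broadcast the RV and VA features onto the range-angle grid and fuse.
            f_rv = f_rv.mean(dim=4).unsqueeze(-1).expand_as(f_ra)
            f_va = f_va.mean(dim=3).unsqueeze(3).expand_as(f_ra)
            return self.head(torch.cat([f_rv, f_ra, f_va], dim=1))  # (B, C, T, R, A)

    scores = MultiViewRadarNet()(torch.randn(1, 8, 64, 32, 64))  # -> (1, 3, 8, 64, 64)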


Figure 1: Test examples from the city-road scenario. In each column, the top image is the synchronized camera image (for visualization), the middle image is the corresponding radar RA heatmap, and the bottom image visualizes the RAMP-CNN results.

2. Radar Vision – High-resolution Radar Imaging

A key shortcoming of present-day vehicular radar imaging is poor azimuth resolution (for side-looking operation) due to form-factor limits on antenna size and placement.

We propose a solution via a new multiple-input multiple-output synthetic aperture radar (MIMO-SAR) imaging technique that applies coherent SAR principles to vehicular MIMO radar to improve the side-view (angular) resolution. The proposed two-stage hierarchical MIMO-SAR processing workflow drastically reduces the computational load while preserving image resolution. To enable coherent processing over the synthetic aperture, we integrate a radar odometry algorithm that estimates the ego-radar trajectory.
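
As a rough illustration of the coherent processing involved, here is a minimal single-channel backprojection sketch in NumPy. It is not the two-stage hierarchical workflow: MIMO virtual-array fusion and the actual odometry algorithm are omitted, nearest-bin lookup stands in for proper range interpolation, and all names are illustrative.

    import numpy as np

    def backproject(range_profiles, antenna_xy, pixel_xy, wavelength, bin_ranges):
        """Time-domain SAR backprojection over an odometry-estimated aperture.

        range_profiles : (K, Nr) complex range-compressed data, one row per
                         aperture position
        antenna_xy     : (K, 2) antenna positions along the trajectory [m]
        pixel_xy       : (P, 2) image pixel coordinates [m]
        wavelength     : carrier wavelength [m]
        bin_ranges     : (Nr,) range of each bin [m], ascending
        """
        image = np.zeros(len(pixel_xy), dtype=complex)
        for k, pos in enumerate(antenna_xy):
            d = np.linalg.norm(pixel_xy - pos, axis=1)  # pixel-to-antenna range
            idx = np.clip(np.searchsorted(bin_ranges, d), 0, len(bin_ranges) - 1)
            # Undo the two-way carrier phase so contributions add coherently.
            image += range_profiles[k, idx] * np.exp(1j * 4 * np.pi * d / wavelength)
        return np.abs(image)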

Figure 2: MIMO-SAR imaging for roadside experiment 1. (a) Camera image of the imaging environment, with two cars parked at an angle; (b) MIMO-SAR imaging result, with rectangles marking the two parked cars; (c) Range-angle map from a single frame of radar data.

3. Automotive Radar and Camera Test-bed and UW Camera-Radar (CR) Dataset

A large dataset of camera images and raw radar data (I/Q samples after receiver demodulation) has been collected for various objects across multiple scenarios, including parking lots, curbsides, campus roads, city roads, and freeways, using a vehicle-mounted platform (see Fig. 3(b)). In particular, significant effort went into collecting data for situations where cameras are largely ineffective, i.e., under challenging lighting conditions.


Figure 3: Radar-camera data capture platform. (a) The platform consists of two FLIR cameras and two perpendicularly mounted TI radars: the right radar has a 1D horizontal antenna array, and the left one has a 1D vertical antenna array. (b) Front view of the data capture platform mounted on a vehicle.

We show the camera images and radar range-angle heatmaps for several scenario examples from our UWCR dataset in Fig. 4. The data collection platform shown in Fig. 3(a) consists of two FLIR cameras (left and right) and two TI AWR1843 EVM radars. The two radar EVM boards are arranged to form a '2D' antenna array system that provides richer object information: one array is mounted horizontally and the other vertically, so that data are collected in both the range-azimuth and range-elevation angle domains. So far we use only the data from the horizontal array; incorporating the vertical-array data is left for future work.
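
For reference, a range-azimuth heatmap like those in Fig. 4 can be formed from one frame of raw I/Q samples with two FFTs. The NumPy sketch below assumes a uniform linear virtual array and omits calibration, chirp-level Doppler processing, and CFAR thresholding.

    import numpy as np

    def range_azimuth_heatmap(adc, n_angle=128):
        """adc: (n_chirps, n_rx, n_samples) complex baseband samples of one frame
        from the horizontal array; returns a (range x angle) magnitude map."""
        win = np.hanning(adc.shape[-1])                       # fast-time window
        rng = np.fft.fft(adc * win, axis=-1)                  # range FFT
        ang = np.fft.fftshift(np.fft.fft(rng, n=n_angle, axis=1), axes=1)  # angle FFT
        return np.abs(ang).mean(axis=0).T                     # average over chirps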


Figure 4: Eight scenario examples from the collected UWCR dataset. Rows 1 and 3 show the camera images; rows 2 and 4 show the corresponding radar range-azimuth heatmaps.

The binocular cameras are synchronized with the radars and provide the location and class of semantic objects once we run a Mask R-CNN detection model and an unsupervised depth-estimation model on the captured camera images. The resulting object detections and depth estimates are manually calibrated and then saved as the ground truth for subsequent training and evaluation.
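
A minimal sketch of the detection half of this labeling pipeline, using an off-the-shelf torchvision Mask R-CNN; the unsupervised depth-estimation model and the manual calibration step are not shown, and the file name and score threshold are illustrative.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Pretrained Mask R-CNN (torchvision >= 0.13; older versions take
    # pretrained=True instead of the weights argument).
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    img = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))  # hypothetical file
    with torch.no_grad():
        pred = model([img])[0]  # dict with "boxes", "labels", "scores", "masks"

    keep = pred["scores"] > 0.7                  # illustrative confidence threshold
    boxes, labels = pred["boxes"][keep], pred["labels"][keep]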

4. Lab-scale mmWave radar test-bed

The vehicle radar testbed is a millimeter-wave (mmW) FMCW radar sensor that can be used to evaluate the performance of signal processing algorithms and to collect data in field tests. The testbed shown below consists of a Texas Instruments AWR1642 BoosterPack and a DCA1000EVM.
    Features (the resulting resolution figures are worked out in the sketch below):
– Two operating bands: 76-77 GHz (1 GHz bandwidth) and 77-81 GHz (4 GHz bandwidth)
– MIMO configuration: four receive and two transmit antennas with time-division multiplexing (TDM)
– Tx power: 12 dBm
– Rx gain: 30 dB
– C674x DSP for FMCW signal processing
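
These parameters fix the basic resolution figures. The short sketch below works them out, using the standard FMCW relation dR = c/(2B) and the common 2/N-radian rule of thumb for the boresight angular resolution of an N-element half-wavelength virtual array.

    import math

    C = 299_792_458.0            # speed of light [m/s]

    def range_resolution(bandwidth_hz):
        """FMCW range resolution: dR = c / (2 * B)."""
        return C / (2 * bandwidth_hz)

    print(f"{range_resolution(1e9):.4f} m")   # 76-77 GHz band (1 GHz): ~0.15 m
    print(f"{range_resolution(4e9):.4f} m")   # 77-81 GHz band (4 GHz): ~0.0375 m

    n_virtual = 2 * 4            # TDM-MIMO: 2 Tx x 4 Rx -> 8 virtual channels
    print(f"{math.degrees(2 / n_virtual):.1f} deg")  # ~14.3 deg at boresight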

Signal Processing: The development of signal processing techniques, together with progress in mmW semiconductor technology, plays a key role in automotive radar systems. Various techniques have been developed to improve resolution and estimation performance in all measurement dimensions: the range, azimuth angle, and velocity of targets surrounding the vehicle.
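
As a companion to the range-azimuth sketch above, the range and velocity dimensions come from two FFTs over one channel's samples per frame; the minimal sketch below omits windowing and CFAR detection.

    import numpy as np

    def range_doppler_map(adc):
        """adc: (n_chirps, n_samples) complex baseband samples of one Rx channel.
        The fast-time FFT gives range; the slow-time FFT across chirps gives
        Doppler (radial velocity)."""
        rng = np.fft.fft(adc, axis=1)                            # range FFT
        dop = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)   # Doppler FFT
        return 20 * np.log10(np.abs(dop) + 1e-12)                # magnitude [dB]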


Figure 5: The lab-scale mmW radar test-bed