Research Team Led by Dario Pompili Receives NSF Grant
Congratulations to Professors Dario Pompili (ECE) and Javier Diez (MAE) on their new NSF award for the project titled “Reliable Underwater Acoustic Video Transmission Towards Human-Robot Dynamic Interaction.” This is a three-year, $1M collaborative effort led by Rutgers University (Dario Pompili, PI, and Javier Diez, Co-PI) with Northeastern University. Rutgers’ share of the award is $500,000.
Over the past decade, underwater communications have enabled a wide range of applications. However, emerging underwater monitoring applications and systems based on human-robot dynamic interaction require real-time multimedia acquisition and classification. Remotely Operated Vehicles (ROVs) are key instruments for supporting such interactive applications, as they can capture multimedia data from places where humans cannot easily or safely go. Underwater vehicles, however, are often tethered to the supporting ship by a fiber cable or must rise periodically to the surface to communicate with a remote station via Radio Frequency (RF) waves, which constrains the mission.

Wireless acoustic communication is the typical physical-layer technology for underwater communication, but video transmission via acoustic waves is hard to accomplish: acoustic waves suffer from attenuation, limited bandwidth, Doppler spreading, high propagation delay, high bit error rates, and time-varying channel conditions. For these reasons, state-of-the-art acoustic communication solutions still focus mostly on delay-tolerant, low-bandwidth/low-data-rate scalar data transmission or, at best, low-quality/low-resolution multimedia streaming on the order of a few tens of kbps.

Hence, the objectives of this research program are: (1) to design novel communication solutions for robust, reliable, and high-data-rate underwater multimedia streaming on the order of hundreds of kbps; and (2) to investigate the problem of integrating communication methods available in multiple environments on an innovative software-defined testbed architecture with MEMS-based Acoustic Vector Sensors (AVSs), which will enable processing-intensive physical-layer functionalities to be defined in software but executed in hardware that can be reconfigured in real time by the user based on the Quality of Experience (QoE).
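To give a sense of why acoustic bandwidth is so scarce underwater, the minimal sketch below evaluates Thorp’s empirical absorption formula, a standard textbook model that is not specific to this project. It shows how absorption grows steeply with carrier frequency, which is one reason video-rate streaming over acoustic links is difficult; the particular frequencies shown are illustrative choices, not values taken from the award.

```python
def thorp_absorption_db_per_km(f_khz: float) -> float:
    """Thorp's empirical absorption coefficient for seawater (dB/km),
    with the acoustic frequency f given in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

# Illustrative carrier frequencies (kHz); higher frequencies would offer
# more bandwidth but are absorbed far more strongly per kilometer.
for f in (1, 10, 30, 100):
    print(f"{f:>4} kHz: {thorp_absorption_db_per_km(f):6.1f} dB/km")
```

Running this prints roughly 0.07 dB/km at 1 kHz but over 30 dB/km at 100 kHz, illustrating why usable acoustic bandwidth, and therefore achievable data rate, falls off quickly with range and frequency.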
More details on the project are available on the NSF award page.
Congratulations, Dario and Javier, on this exciting collaborative effort!