The Naval Undersea Warfare Center (NUWC) Division, Keyport, in conjunction with the Office of Naval Research (ONR)/NavalX, is sponsoring a prize challenge focused on Unmanned Underwater Vehicle (UUV) Autonomy (hereinafter referred to as “the Challenge”). The Navy seeks to enhance the quality of the self-governing controls and functionality of UUVs by developing autonomous algorithms to perform various missions. For the Challenge, participants are expected to develop Robot Operating System (ROS) based code to perform mission planning and detection of environmental hazards, specifically tires. These tires are often left over from misguided reef-building efforts of the 1970s that dive teams are attempting to remediate. The Navy’s goal in this Challenge is to provide data/information to make the remediation more efficient and effective.
The Navy will provide the digital twin of the virtual operating environment via NVIDIA Omniverse (OV), the UUV representative model to be used within OV, and an autonomy engine based on Fast DDS, an implementation of the Data Distribution Service (DDS) standard. The OV environment provides oceanography, climatology, bathymetry, and other relevant operating environment feature data. The UUV model in OV allows participants to evaluate the overall performance and behavior of the vehicle in meeting the mission objective. Additionally, both the environment and the model give participants mechanisms to gather feedback through simulation, effectively allowing one to assess the autonomy and identify areas for improvement to be incorporated into a more robust solution. Assessment can include, but is not limited to, autonomy-related factors such as decision-making authority, freedom/choice of methods, and perceived levels of control. Additionally, the judges of the Challenge will analyze and assess the developed autonomy using the OV platform. Refer to the judging criteria sections for more details.
Current mission operations are extremely manual and dependent on human interaction, requiring a tremendous amount of time and effort coupled with tacit operational expertise and knowledge to develop mission plans. Even then, errors can occur. OV allows the import and fusion of large environmental models that enable the use of Deep Reinforcement Learning (DRL) techniques for autonomous, optimized route planning in UUV operations. The overarching goal of the DRL-driven autonomy would be to avoid hazards and place the UUV in an optimal position for mission success while maximizing power efficiency, leveraging currents and environmental factors to extend mission duration and overall execution. Use of the digital twin environment will allow for mission evaluation, autonomy optimization, and sensor feedback into environmental models, closing the loop between virtual and live environments.
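To make the route-planning idea concrete, the sketch below uses tabular Q-learning, a deliberately simplified stand-in for the deep reinforcement learning described above: the operating area is a small grid, hazards are penalized, and the learned policy is rolled out as a route. The grid, rewards, and hyperparameters are all illustrative assumptions, not part of the Challenge materials.

```python
import random

def train_route_policy(grid, start, goal, episodes=3000, alpha=0.5,
                       gamma=0.95, epsilon=0.2, seed=0):
    """Tabular Q-learning route planner: grid cells with 1 = hazard, 0 = free.

    States are (row, col) cells; actions are N/S/E/W moves. Attempting to
    enter a hazard (or leave the grid) is penalized and leaves the vehicle
    in place; reaching the goal ends the episode with a large reward.
    """
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    q = {}  # (state, action_index) -> learned value

    def step(state, a):
        r, c = state[0] + actions[a][0], state[1] + actions[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c]:
            return state, -10.0, False   # hazard or boundary: penalty, stay put
        if (r, c) == goal:
            return (r, c), 100.0, True   # mission objective reached
        return (r, c), -1.0, False       # small cost per move (favors short routes)

    for _ in range(episodes):
        state = start
        for _ in range(200):             # cap episode length
            if rng.random() < epsilon:   # explore
                a = rng.randrange(4)
            else:                        # exploit current estimates
                a = max(range(4), key=lambda i: q.get((state, i), 0.0))
            nxt, reward, done = step(state, a)
            best_next = max(q.get((nxt, i), 0.0) for i in range(4))
            key = (state, a)
            q[key] = q.get(key, 0.0) + alpha * (reward + gamma * best_next - q.get(key, 0.0))
            state = nxt
            if done:
                break
    return q

def greedy_route(q, grid, start, goal, max_steps=50):
    """Roll out the greedy policy from the learned Q-table into a route."""
    rows, cols = len(grid), len(grid[0])
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    route, state = [start], start
    for _ in range(max_steps):
        if state == goal:
            break
        best = max(range(4), key=lambda i: q.get((state, i), float("-inf")))
        r, c = state[0] + actions[best][0], state[1] + actions[best][1]
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c]:
            break                        # policy not trained enough to continue safely
        state = (r, c)
        route.append(state)
    return route
```

A real solution would replace the grid with the fused OV environmental model and the table with a neural policy, but the reward-shaping idea (penalize hazards, reward the objective, charge a small cost per move) carries over directly.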
This Challenge seeks to evaluate the participant’s utilization of tools to perform the mission. The digital twin of the IVER 3 provides representative vehicle control, dynamic maneuverability behaviors, and modeled sensors available to perform the mission. That is, the controls for UUV operation are abstracted so the vehicle can be driven by keyboard/controller or by algorithm.
For the purposes of this competition, the tools that perform, coordinate, and automate are:
- OV as a virtual representation of UUV Test Range Dabob Bay
- Preloaded operating environment data to establish a representative virtual space.
- 6DOF model of the IVER 3 including a representative payload section with all available sensors. The following sensor models are made available and can be utilized.
- INS Sensor (Position and velocity values that will have variable noise applied the longer the vehicle is in water)
- ADCP Depth Sensor (Distance-to-bottom measurements from the fathometer LIDAR, with noise applied)
- Keller Pressure Sensor (Z position value providing a noisy depth reading)
- Obstacle Avoidance Sensor (Front-facing, untilted LIDAR that reports the distance to any obstacles ahead of the IVER 3)
- Side Scan Sonar (Two LIDARs, one on each side, that create a spread of beams. Provides participants a line of LIDAR points that they can arrange into a side-scan image)
- Access to the OV through:
- BlueNERVE – This is a maritime-centric version of MITRE’s NERVE capability, which is a multi-tenant network and virtualization platform. BlueNERVE also provides secure remote access and direct lab-to-lab connectivity to enable remote access to equipment and a common virtual workspace.
- Invited participants’ remote access to BlueNERVE, and thus OV, via a browser-based remote desktop and a client-based VPN connection that grants the user access to resources in the BlueNERVE environment.
- Participant’s independent NVIDIA OVX stack, a suitable laptop configured with an NVIDIA RTX graphics card, or an NVIDIA Jetson edge GPU processing device. The participant would rely on their own hardware to support their efforts in OV.
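The sensor list above notes that the INS error grows the longer the vehicle is in the water, while the pressure depth carries a fixed noise level. A minimal sketch of that distinction, using assumed noise models and parameters (the actual OV sensor models are not specified here):

```python
import random

def ins_position_fix(true_xy, t_submerged, rng, drift_rate=0.05):
    """INS position fix whose error grows with time in the water.

    Modeled here as random-walk drift, so the error standard deviation
    grows with the square root of submerged time; drift_rate is an
    assumed, illustrative parameter.
    """
    sigma = drift_rate * t_submerged ** 0.5
    return (true_xy[0] + rng.gauss(0.0, sigma),
            true_xy[1] + rng.gauss(0.0, sigma))

def pressure_depth(true_z, rng, sigma=0.1):
    """Keller-style pressure depth: true depth plus fixed Gaussian noise."""
    return true_z + rng.gauss(0.0, sigma)

# Example: the same true position read shortly after launch vs. an hour in.
rng = random.Random(42)
early = ins_position_fix((100.0, 50.0), t_submerged=10.0, rng=rng)
late = ins_position_fix((100.0, 50.0), t_submerged=3600.0, rng=rng)
depth = pressure_depth(20.0, rng)
```

An autonomy solution that fuses these readings (e.g. correcting INS drift against bottom-lock or depth measurements) is one place the “areas for improvement” feedback loop described earlier can pay off.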
Predeveloped algorithms are provided as part of Phase 2 of this Challenge as a fundamental starting point, with the expectation that changes/modifications will be made to complete the prize challenge objectives. Additionally, the Isaac Sim Replicator API code is provided so participants can develop their own operational environment if desired.
This Challenge will measure the usage of these tools to develop autonomy to search for and detect tires.
Awards: $50,000
Deadline: 02-01-2025