AI-native marine robotics infrastructure

Accelerating the next decade.

GPU-native simulation for marine robotics. Hours of training instead of months at sea. Built for labs, industry, and defense.

/00 · Numbers in production
/01
11,435
FPS · single-env BlueROV2 (RTX 5090)
/02
90M
env-steps/s @ 8192 parallel envs
/03
900×
target throughput achieved
/01 · Manifesto

What we stand for

/01

The next decade of marine robotics shouldn't be paced by currents and weather.

/02

It should be paced by simulation fidelity, GPU compute, and how fast AI agents can learn.

/03

We're building the infrastructure that makes that pace possible.

/02 · Why

Five constraints holding marine robotics back

  1. 01

    Real ocean trials cost so much they cap iteration

    A day at sea: ¥10k–¥100k+. Corrosion, entanglement, gear loss. Non-reproducible currents. Progress gated by weather. Without high-fidelity simulation, iteration speed is locked to nature's clock.

  2. 02

    CFD is too slow for RL to even start

Hi-fi CFD: 1 second of physics takes 1–100 hours to compute. The 1M+ episodes RL needs are computationally unreachable. Even GPU-accelerated CFD can't approach the 1000× real-time throughput the field actually requires.

  3. 03

    Legacy underwater sims waste modern GPUs

    Gazebo / UWSim / HoloOcean: single-threaded CPU at hundreds of FPS. RTX 5090 / H100 compute is wasted. Multi-agent parallelism is near zero.

  4. 04

    The sim-to-real gap kills policy transfer

Insufficient fluid fidelity: policies trained in sim collapse in real water. No principled sim-to-real calibration. Sensor simulation (sonar, underwater camera) is oversimplified, creating false confidence.

  5. 05

    AI-native interfaces are missing

Gym/gymnasium adapters are absent or low-quality. Poor multi-agent scene support. Multimodal marine datasets (sonar, optical, IMU) aren't open; foundation-model training is gated by data scarcity.
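Constraint 02's arithmetic can be made concrete with a back-of-envelope estimate. The specific figures below (10 compute-hours per physics-second, 30-second episodes) are illustrative assumptions drawn from the ranges cited above, not measured values:

```python
# Back-of-envelope for constraint 02: hi-fi CFD vs. RL's sample budget.
hours_per_physics_second = 10       # assumed mid-range of the 1-100 h/s claim
episodes_needed = 1_000_000         # RL sample budget cited above
seconds_per_episode = 30            # assumed episode length

total_hours = hours_per_physics_second * episodes_needed * seconds_per_episode
total_years = total_hours / (24 * 365)
print(f"{total_years:,.0f} compute-years")  # tens of thousands of years
```

Even shaving two orders of magnitude off these assumptions leaves the budget in the centuries, which is the point: the bottleneck is structural, not incremental.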

/03 · Platform

What we're building

11,435
FPS · single-env BlueROV2 (RTX 5090)

Full RL training pipeline

End-to-end on a single RTX 5090. Environment to policy, all in simulation.

90M
env-steps/s @ 8192 envs

Fossen multi-env throughput

8192 parallel envs on the Fossen 6-DoF kernel. 900× target achieved.
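The two headline figures can be cross-checked against each other. A rough consistency estimate (not a published benchmark):

```python
# Rough consistency check between the two headline numbers:
throughput = 90_000_000          # env-steps/s at 8192 parallel envs
n_envs = 8192
per_env = throughput / n_envs    # steps/s contributed per env
print(round(per_env))            # ~11k, in line with the 11,435 single-env FPS
```

Per-env throughput holding near the single-env figure at 8192 envs is what near-linear GPU scaling looks like.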

6-DoF
Custom Fossen + SPH

Marine-specific physics

Rigid body dynamics + fluid simulation, built for the ocean.
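The Fossen model referenced above has the standard form M ν̇ + C(ν)ν + D(ν)ν + g(η) = τ. A minimal single-body Euler-step sketch of that structure (all matrices below are illustrative placeholders, not OceanScale's kernel; Coriolis and restoring terms are reduced to constants for brevity):

```python
import numpy as np

# One explicit-Euler step of a simplified Fossen 6-DoF model:
#   M * nu_dot + D * nu + g = tau
M = np.diag([12.0, 12.0, 12.0, 0.3, 0.3, 0.3])   # rigid-body + added mass
D_lin = np.diag([4.0, 4.0, 6.0, 0.1, 0.1, 0.1])  # linear damping
g = np.array([0.0, 0.0, -0.5, 0.0, 0.0, 0.0])    # net buoyancy/gravity

def step(nu, tau, dt=0.01):
    """Advance body-frame velocity nu = [u, v, w, p, q, r] under thrust tau."""
    nu_dot = np.linalg.solve(M, tau - D_lin @ nu - g)
    return nu + dt * nu_dot

nu = np.zeros(6)
tau = np.array([5.0, 0.0, 0.5, 0.0, 0.0, 0.0])   # surge + heave thrust
for _ in range(100):                              # 1 s of simulated time
    nu = step(nu, tau)
print(nu[0])                                      # surge velocity after 1 s
```

A GPU-native kernel vectorizes exactly this update across thousands of envs; the per-step math stays this small.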

Newton + Isaac Sim 6

GPU-native physics + render

Newton engine maintained by NVIDIA + Google DeepMind + Disney Research. Isaac Sim 6 rendering.

gym / gymnasium native

Drop-in RL interfaces

rl_games · stable-baselines3 · RSL-RL — direct integration.

Multi-agent · Procedural

Real-world ocean scenes

Ships / AUVs / ROVs / sensors + procedural ocean generation.

/04 · Built on

Our stack

OceanScale is built on the most advanced GPU-native physics and rendering stack, co-maintained by some of the most consequential teams in the field.

Newton
GPU-native physics engine
Isaac Sim 6
high-fidelity rendering
NVIDIA
core compute partner
Google DeepMind
Newton co-maintainer
Disney Research
Newton co-maintainer
/05 · Demos

See it run

Available

BlueROV2 RL training

RTX 5090 single card, full training pipeline measured. 11,435 FPS single-env.

Preview · v0.3

Multi-AUV formation

Multi-agent navigation and collision avoidance (concept render; shipping in v0.3).

Preview · v0.3

SPH fluid visualization

Real-time fluid field rendering, GPU-native SPH kernel (concept render; shipping in v0.3).

/06 · Benchmarks

Numbers, side by side

| Metric | OceanScale | UWSim | HoloOcean | Gazebo |
|---|---|---|---|---|
| Single-env FPS (RTX 5090) | 11,435 | ~200 | ~500 | ~100 |
| Multi-env throughput @ 8192 envs | 90M env-steps/s | | | |
| Multi-env parallel scaling | 8192+ | | partial | partial |
| GPU-native physics | Newton | | | |
| RL gym interface | Native | 3rd-party | 3rd-party | 3rd-party |
| SPH fluid kernel | ✓ | | | |
| Fossen 6-DoF | ✓ | | partial | |
| ROS 2 integration | v0.3 | | | |

OceanScale figures measured by W2 benchmark (2026-05-15, RTX 5090). Other tools' figures sourced from public documentation and community reports; actual values depend on scene and hardware.

/07 · Contact

Get in touch

Research institutions, industry partners, and prospective collaborators — we'd love to hear from you. We respond within 48 hours.