diff --git a/docs/assets/images/DNNArchitecture.png b/docs/assets/images/DNNArchitecture.png new file mode 100644 index 0000000..124cad6 Binary files /dev/null and b/docs/assets/images/DNNArchitecture.png differ diff --git a/docs/assets/images/FirstFlights.png b/docs/assets/images/FirstFlights.png new file mode 100644 index 0000000..a3c88ae Binary files /dev/null and b/docs/assets/images/FirstFlights.png differ diff --git a/docs/assets/images/LongTrajectory_v2.png b/docs/assets/images/LongTrajectory_v2.png new file mode 100644 index 0000000..95eb528 Binary files /dev/null and b/docs/assets/images/LongTrajectory_v2.png differ diff --git a/docs/assets/images/Novel.png b/docs/assets/images/Novel.png new file mode 100644 index 0000000..8754e70 Binary files /dev/null and b/docs/assets/images/Novel.png differ diff --git a/docs/assets/images/PipelineDiagram_v3.png b/docs/assets/images/PipelineDiagram_v3.png new file mode 100644 index 0000000..4c8a977 Binary files /dev/null and b/docs/assets/images/PipelineDiagram_v3.png differ diff --git a/docs/assets/images/Robustness_v2.png b/docs/assets/images/Robustness_v2.png new file mode 100644 index 0000000..20f1365 Binary files /dev/null and b/docs/assets/images/Robustness_v2.png differ diff --git a/docs/assets/images/Samples_v3.png b/docs/assets/images/Samples_v3.png new file mode 100644 index 0000000..6e41130 Binary files /dev/null and b/docs/assets/images/Samples_v3.png differ diff --git a/docs/index.html b/docs/index.html index 2df9d5a..c1b5f7b 100644 --- a/docs/index.html +++ b/docs/index.html @@ -44,6 +44,33 @@ .section p { margin-bottom: 15px; } + .images-row { + display: flex; + justify-content: space-between; + gap: 10px; + } + .images-row img { + width: 48%; + max-width: 100%; + height: auto; + border: 1px solid #ddd; + border-radius: 5px; + } + .video-container { + position: relative; + padding-bottom: 56.25%; /* 16:9 aspect ratio */ + height: 0; + overflow: hidden; + max-width: 100%; + background: #000; + } + .video-container iframe { + position: absolute; + top: 0; + left: 0; + width: 100%; + height: 100%; + } .code-block { background: #f4f4f4; border-left: 5px solid #ccc; @@ -61,22 +88,60 @@
[A concise tagline or summary of your research]
+Scene Optimized Understanding via Synthesized Visual Inertial Data from Experts
+Watch the videos below for a demonstration of our project in action:
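+<!-- A minimal embed sketch using the .video-container class defined in the styles above.
+     The YouTube URL is a placeholder (VIDEO_ID is hypothetical), not the project's actual video link. -->
+<div class="video-container">
+  <iframe src="https://www.youtube.com/embed/VIDEO_ID"
+          title="SOUS VIDE drone flight demonstration"
+          frameborder="0"
+          allowfullscreen></iframe>
+</div>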
+This project introduces SOUS VIDE, a simulator, training approach, and policy architecture for end-to-end visual drone navigation, offering a novel approach to zero-shot sim-to-real policy transfer. The goal is to navigate robustly in the real world using only on-board perception and computation.
-The paper demonstrates significant advancements in [specific domain, e.g., autonomous drone navigation, neural network interpretability, etc.], providing practical insights for researchers and practitioners alike.
+We propose a new simulator, training approach, and policy architecture, collectively called SOUS VIDE, for end-to-end visual drone navigation. Our trained policies exhibit zero-shot sim-to-real transfer with robust real-world performance using only on-board perception and computation. Our simulator, called FiGS, couples a computationally simple drone dynamics model with a high visual fidelity Gaussian Splatting scene reconstruction. FiGS can quickly simulate drone flights producing photo-realistic images at over 100 fps. We use FiGS to collect 100k-300k observation-action pairs from an expert MPC with privileged state and dynamics information, randomized over dynamics parameters and spatial disturbances. We then distill this expert MPC into an end-to-end visuomotor policy with a lightweight neural architecture, called SV-Net. SV-Net processes color image and IMU data streams into low-level body rate and thrust commands at 20 Hz onboard a drone. Crucially, SV-Net includes a Rapid Motor Adaptation (RMA) module that adapts at runtime to variations in the dynamics parameters of the drone. In extensive hardware experiments, we show SOUS VIDE policies to be robust to ±30% mass and thrust variations, 40 m/s wind gusts, 60% changes in ambient brightness, shifting or removing objects from the scene, and people moving aggressively through the drone’s visual field. The project page and code can be found
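+<!-- One possible figure row using the .images-row class and two of the images added in this
+     commit; the choice of images and the alt text are illustrative assumptions. -->
+<div class="images-row">
+  <img src="assets/images/PipelineDiagram_v3.png" alt="SOUS VIDE pipeline: FiGS simulation, expert MPC data collection, and SV-Net distillation">
+  <img src="assets/images/DNNArchitecture.png" alt="SV-Net architecture with Rapid Motor Adaptation module">
+</div>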
We extend our gratitude to [individuals, institutions, or funding agencies].
+This work was supported in part by DARPA grant HR001120C0107, ONR grant N00014-23-1-2354, and Lincoln Labs grant 7000603941. The second author was supported on an NDSEG fellowship. Toyota Research Institute provided funds to support this work.