Commit

A bit more meat on the page.
lowjunen committed Dec 11, 2024
1 parent dbfe526 commit 6f3fd09
Showing 1 changed file with 27 additions and 3 deletions.
30 changes: 27 additions & 3 deletions docs/index.html
@@ -100,7 +100,7 @@ <h1>SOUS VIDE</h1>
</div> -->
<div class="section">
<h2>Demonstration</h2>
<p>Watch the videos below for a demonstration of our project in action:</p>
<p>Main Videos:</p>
<div class="video-container">
<iframe src="https://www.youtube.com/embed/IhZeXXJ47Js" frameborder="0" allowfullscreen></iframe>
</div>
@@ -166,8 +166,32 @@ <h2>Getting Started</h2>

<div class="section">
<h2>Results</h2>
<p>Our findings demonstrate [key results], achieving [specific performance metrics or outcomes]. Below is a visualization of one of our key results:</p>
<img src="results/figure1.png" alt="Key Results" style="max-width: 100%; border: 1px solid #ddd;">
<p>This work introduces SOUS VIDE, a novel training paradigm leveraging Gaussian Splatting and lightweight visuomotor policy architectures for end-to-end drone navigation. By coupling high-fidelity visual data synthesis with online adaptation mechanisms, SOUS VIDE achieves zero-shot sim-to-real transfer, demonstrating remarkable robustness to variations in mass, thrust, lighting, and dynamic scene changes. Our experiments underscore the policy's ability to generalize across diverse scenarios, including complex and extended trajectories, with graceful degradation under extreme conditions. Notably, the integration of a streamlined adaptation module enabled the policy to overcome limitations of prior visuomotor approaches, offering a computationally efficient yet effective solution for addressing model inaccuracies.</p>
      <p>These findings highlight the potential of SOUS VIDE as a foundation for future advancements in autonomous drone navigation. While its robustness and versatility are evident, challenges such as inconsistent performance in multi-objective tasks suggest opportunities for improvement through more sophisticated objective encodings. Further exploration into scaling the approach to more complex environments and incorporating additional sensory modalities could enhance both adaptability and reliability. Ultimately, this work paves the way for deploying learned visuomotor policies in real-world applications, bridging the gap between simulation and practical autonomy in drone operations.</p>
</div>

<div class="section">
