diff --git a/docs/index.html b/docs/index.html
index c1b5f7b..97865c2 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -100,7 +100,7 @@

SOUS VIDE

-->

Demonstration

-

Watch the videos below for a demonstration of our project in action:

+

Main Videos:

@@ -166,8 +166,32 @@

Getting Started

Results

- Our findings demonstrate [key results], achieving [specific performance metrics or outcomes]. Below is a visualization of one of our key results:

- Key Results

+ This work introduces SOUS VIDE, a novel training paradigm leveraging Gaussian Splatting and lightweight visuomotor policy architectures for end-to-end drone navigation. By coupling high-fidelity visual data synthesis with online adaptation mechanisms, SOUS VIDE achieves zero-shot sim-to-real transfer, demonstrating remarkable robustness to variations in mass, thrust, lighting, and dynamic scene changes. Our experiments underscore the policy’s ability to generalize across diverse scenarios, including complex and extended trajectories, with graceful degradation under extreme conditions. Notably, the integration of a streamlined adaptation module enabled the policy to overcome limitations of prior visuomotor approaches, offering a computationally efficient yet effective solution for addressing model inaccuracies.

+ These findings highlight the potential of SOUS VIDE as a foundation for future advancements in autonomous drone navigation. While its robustness and versatility are evident, challenges such as inconsistent performance in multi-objective tasks suggest opportunities for improvement through more sophisticated objective encodings. Further exploration into scaling the approach to more complex environments and incorporating additional sensory modalities could enhance both adaptability and reliability. Ultimately, this work paves the way for deploying learned visuomotor policies in real-world applications, bridging the gap between simulation and practical autonomy in drone operations.
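The added conclusion describes the architecture only in prose. As a rough aid for readers of this page, below is a minimal PyTorch sketch of what a "lightweight visuomotor policy" paired with a streamlined "adaptation module" could look like. This is an assumption-laden toy, not the SOUS VIDE implementation: every class name, layer size, and the state-action-history encoder are hypothetical.

```python
# Hypothetical sketch of a lightweight visuomotor policy plus an online
# adaptation module, in the spirit of the conclusion above. All names,
# dimensions, and interfaces are illustrative assumptions, not the
# authors' actual SOUS VIDE architecture.
import torch
import torch.nn as nn

class AdaptationModule(nn.Module):
    """Estimates a low-dimensional latent correction (e.g., for mass or
    thrust mismatch) from a short history of states and actions."""
    def __init__(self, state_dim: int, action_dim: int,
                 history_len: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear((state_dim + action_dim) * history_len, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, history_len, state_dim + action_dim)
        return self.net(history.flatten(start_dim=1))

class VisuomotorPolicy(nn.Module):
    """Maps a camera image, drone state, and adaptation latent to an action."""
    def __init__(self, state_dim: int = 13, action_dim: int = 4,
                 latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(  # small CNN over 64x64 RGB frames
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + state_dim + latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),  # e.g., collective thrust + body rates
        )

    def forward(self, image, state, latent):
        features = torch.cat([self.encoder(image), state, latent], dim=-1)
        return self.head(features)

# Smoke test with random inputs.
policy = VisuomotorPolicy()
adapt = AdaptationModule(state_dim=13, action_dim=4, history_len=10, latent_dim=8)
history = torch.randn(1, 10, 13 + 4)
action = policy(torch.randn(1, 3, 64, 64), torch.randn(1, 13), adapt(history))
print(action.shape)  # torch.Size([1, 4])
```

Feeding a small latent estimated from recent state-action history into the policy head mirrors the paragraph's claim that a streamlined adaptation module can compensate for model inaccuracies such as mass or thrust variation without retraining the visual encoder; whether SOUS VIDE uses this particular mechanism is not stated here.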