This is the easiest place to get started with AVstack. It already includes the `avstack-core` and `avstack-api` libraries in the appropriate places, with dependencies pinned in the `uv.lock` file. Read on for more information on how to get going.
Pioneers of autonomous vehicles (AVs) promised to revolutionize the driving experience and driving safety. However, milestones in AVs have materialized slower than forecast. Two culprits are (1) the lack of verifiability of proposed state-of-the-art AV components, and (2) stagnation of pursuing next-level evaluations, e.g., vehicle-to-infrastructure (V2I) and multi-agent collaboration. In part, progress has been hampered by: the large volume of software in AVs, the multiple disparate conventions, the difficulty of testing across datasets and simulators, and the inflexibility of state-of-the-art AV components. To address these challenges, we present AVstack, an open-source, reconfigurable software platform for AV design, implementation, test, and analysis. AVstack solves the validation problem by enabling first-of-a-kind trade studies on datasets and physics-based simulators. AVstack solves the stagnation problem as a reconfigurable AV platform built on dozens of open-source AV components in a high-level programming language.
Check out the tutorials on our ReadTheDocs page! (NOTE: these are pretty out of date, but I'm working on updating them.)
This currently only works on Linux (tested on Ubuntu 20.04 and 22.04) and only with Python 3.10. Both Python 3.10 and `uv` must be installed on your system; `uv` handles the dependencies.
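If you want to confirm the interpreter requirement before going further, a minimal check using only the standard library looks like this:

```python
# Confirm we're running on Python 3.10 before setting up the sandbox.
import sys

assert sys.version_info[:2] == (3, 10), f"Need Python 3.10, got {sys.version.split()[0]}"
print("Python version OK")
```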
The best way to get started is to run the following:
```bash
git clone --recurse-submodules https://github.com/avstack-lab/avdev-sandbox/
cd avdev-sandbox
uv sync  # install the dependencies from the uv.lock file
```
Try the following and see if it works.
```bash
cd examples/hello_world
uv run python hello_import.py
```
This will validate whether we can import `avstack` and `avapi`. Not very interesting, but we have to start somewhere!
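For reference, a smoke test like this can be as small as the following sketch (the actual contents of `hello_import.py` may differ):

```python
# Minimal import smoke test -- a sketch of what an import check might do.
# Assumes only that the avstack and avapi packages are installed.
import avstack
import avapi

print(f"avstack imported from {avstack.__file__}")
print(f"avapi imported from {avapi.__file__}")
```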
To get fancy with it, you'll need perception models and datasets. To install those, run:

```bash
./initialize.sh  # download models and datasets
```
The initialization process may take a while; it downloads perception models and AV datasets from our hosted data buckets. If you have a preferred place to store data and perception models, you can pass those locations as arguments:

```bash
./initialize.sh /path/to/save/data /path/to/save/models
```
Once this is finished, let's try out some more interesting tests, such as

```bash
cd examples/hello_world
uv run python hello_api.py
```
which will check if we can find the datasets we downloaded.
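If you just want to eyeball the result yourself, a filesystem-level check along these lines works too (the `data` directory name and the dataset list are assumptions about the default download layout, not a documented API):

```python
# Rough sanity check that the downloaded datasets landed somewhere sensible.
# Adjust data_dir if you passed a custom path to initialize.sh.
from pathlib import Path

data_dir = Path("data")  # assumption: default download location
for dataset in ["KITTI", "nuScenes", "CARLA"]:
    path = data_dir / dataset
    print(f"{dataset}: {'found' if path.exists() else 'missing'} at {path}")
```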
Next, try

```bash
cd examples/hello_world
uv run python hello_perception.py
```
which will check if we can properly set up perception models using MMDetection.
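Since the perception setup builds on MMDetection, you can also sanity-check that dependency directly. Below is a minimal sketch using MMDetection's own API; the config, checkpoint, and image paths are placeholders for files downloaded by `initialize.sh`, and this is not the contents of `hello_perception.py`:

```python
# Standalone MMDetection sanity check -- a sketch, not AVstack's own test.
# Replace the placeholder paths with a real config, checkpoint, and image
# from your downloaded models and datasets.
from mmdet.apis import init_detector, inference_detector

config_file = "path/to/model_config.py"           # placeholder
checkpoint_file = "path/to/model_checkpoint.pth"  # placeholder

model = init_detector(config_file, checkpoint_file, device="cpu")
result = inference_detector(model, "path/to/test_image.png")  # placeholder
print(type(result))
```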
Now that you have the basic tests running, fire up the Jupyter notebooks to get into some more involved experimentation. You can do this in an IDE such as VS Code, or through `uv` in the terminal by running

```bash
uv run jupyter notebook
```
Then go into `examples/notebooks` and start playing around with them.
I welcome feedback from the community on bugs with this and other repos. Please open an issue when you find a problem or need more clarification on how to get started.
Copyright 2025 Spencer Hallyburton
AVstack specific code is distributed under the MIT License.