CORTEX: Continuous Optimization in Robotics via Test and Exploration

CORTEX was developed at the NASA Jet Propulsion Laboratory (JPL) and is open-sourced under the Apache 2.0 License.

The development of CORTEX was funded internally by JPL/JNEXT as part of the Extant Exobiology Life Surveyor (EELS) project, and builds on the NEO Autonomy Framework (hence, NEO-CORTEX). EELS is a snake robot that is being developed to explore the subsurface oceans of Europa and Enceladus. We encourage you to use CORTEX in your own projects, and to contribute to the project by submitting issues and pull requests. See the References section for a list of relevant publications, documents, and projects.

Image: EELS on the cover of Science Robotics, March 2024, captioned "EELS: Autonomous snake-like robot with task and motion planning capabilities for ice world exploration."

Description

CORTEX is a framework for accelerating robotics development through a combination of modern data infrastructure, test automation, and intelligent data analysis. The framework enables developers to rapidly prototype and test new algorithms and ideas with minimal effort. It also provides a set of tools for specifying and running experiments in a repeatable manner, and for collecting and analyzing data from those experiments. Finally, CORTEX provides facilities for single- and multi-device configuration management, logging, and monitoring, which are essential for managing and operating complex robotics systems.

Installation

CORTEX is installed as a Python library using the setup.py script.

./setup.py install

# If you get a permission denied error:
sudo ./setup.py install

After installing the CORTEX Python library, you can import the modules as follows:

import cortex
from cortex.db import TemporalCRTX
from cortex.db.entities import *
# etc...

See notebooks/guides for examples of how to use the CORTEX library.
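
As a quick orientation, the sketch below shows roughly how data might be written through the database interface. The entity name, constructor arguments, and insert() call are assumptions made for illustration only; see notebooks/guides for the actual API.

# Minimal sketch of writing a row through the CORTEX database layer.
# NOTE: the entity name, constructor arguments, and insert() call are
# assumptions for illustration; see notebooks/guides for the real API.
from datetime import datetime

from cortex.db import TemporalCRTX
from cortex.db.entities import Annotation  # hypothetical entity name

db = TemporalCRTX()  # assumed to pick up connection settings (e.g., from .env)

event = Annotation(                # illustrative fields only
    timestamp=datetime.utcnow(),
    label="experiment_start",
    message="Run 001 started",
)
db.insert([event])                 # assumed bulk-insert style method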

Self-hosting with Docker

CORTEX relies on a database connection to store and retrieve data. We have provided a Docker setup that includes a Postgres database with TimescaleDB, and a Grafana dashboard for visualizing the collected data. We assume that you have Docker installed on your system; if not, you can download it from the Docker website. The following commands require the docker compose command to be available.

./setup.py docker --start    start the CORTEX services (Postgres and Grafana)
                  --stop     stop the CORTEX services
                  --restart  restart the CORTEX services
                  --purge    stop and remove the CORTEX services
                   
./setup.py database --init   populate the database with the necessary tables
                    --wipe   clear the database, including locally mounted volumes

In most cases, the Docker containers will continue running in the background and will start automatically when you restart your computer. You may also choose to connect CORTEX to your own instance of Postgres by modifying the .env file.
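
If you want to confirm that the Dockerized database is reachable before using CORTEX, a plain Postgres client works. The host, port, database name, and credentials below are placeholders; substitute the values from your .env file.

# Connectivity check against the Dockerized Postgres/TimescaleDB instance.
# NOTE: host, port, dbname, user, and password are placeholders; use the
# values from your .env file.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="cortex",      # placeholder
    user="postgres",      # placeholder
    password="postgres",  # placeholder
)
with conn.cursor() as cur:
    cur.execute("SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb'")
    print(cur.fetchone())  # e.g. ('timescaledb', '2.x') if TimescaleDB is installed
conn.close()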

Configuration

The following components of CORTEX can be configured:

Architecture

CORTEX is intended to work with a wide variety of robots and configurations. It is designed to be as modular as possible, so that it can be easily adapted and integrated into new and existing systems. The following diagram shows the high-level architecture of the CORTEX data framework:

Image: CORTEX Architecture

Agents

CORTEX Agents are components responsible for performing specific tasks in a robotics system. They are typically implemented as Python scripts and can be configured using YAML files (where applicable).

CORTEX currently provides the following Agents:

  • worker: responsible for listening to topics, applying preprocessors and transforms, and inserting data into the database. Note that the worker will typically subsample the data before sending it to the database in order to reduce the volume of data written (a simple subsampling sketch follows this list).
  • monitor: responsible for collecting resource utilization metrics (CPU/Memory) from nodes and processes running on the system.
  • annotator: responsible for recording events that occur during an experiment. This includes recording the start and end times of an experiment, as well as significant events such as state transitions, reaching a goal, crashing, or encountering an obstacle.
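
The subsampling mentioned for the worker agent can be illustrated with a small standalone sketch. This is not the CORTEX worker implementation; the class name, parameter, and messages are purely illustrative.

# Illustrative sketch of rate-based subsampling as a worker-style preprocessor.
# This is NOT the CORTEX implementation; names and parameters are assumptions.
class Subsampler:
    """Forward at most one message per `period` seconds; drop the rest."""

    def __init__(self, period: float = 1.0):
        self.period = period
        self._last_emit = None

    def __call__(self, timestamp: float, message):
        if self._last_emit is None or timestamp - self._last_emit >= self.period:
            self._last_emit = timestamp
            return message  # would be handed to the database insert step
        return None  # dropped to reduce the volume of data written


subsample = Subsampler(period=0.5)
for t, msg in [(0.0, "a"), (0.1, "b"), (0.6, "c")]:
    kept = subsample(t, msg)
    if kept is not None:
        print(f"insert at t={t}: {kept}")  # keeps "a" and "c", drops "b"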

Future Agents

We have developed additional agents but have not yet added them to the open-source repository. These agents include:

  • ROSA (ROS Agent): an AI agent that uses LLMs to interface with ROS using natural language queries.
  • orchestrator: manages the CORTEX system, including environment setup, configuration, and starting/stopping CORTEX services.
  • sampler: collects data from sources that do not publish on open topics. This includes collecting data by performing service/action calls, or by reading data from files.

Implementation

The following sections describe the various implementations of CORTEX (current and future).

ROS1

While CORTEX agents are generally ROS-agnostic, we have developed a set of ROS nodes that can be used to interface with the CORTEX framework. These nodes are implemented in Python and can be run on any system that has ROS installed. Simply copy the ros1/ package into your ROS workspace.
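
For orientation, a CORTEX-style ROS1 node is an ordinary rospy node. The sketch below shows only the general shape of a worker-like subscriber; it is not the node shipped in ros1/, and the node name, topic, and message type are placeholders.

# Illustrative shape of a worker-like ROS1 node; NOT the node shipped in ros1/.
# The node name, topic, and message type are placeholders.
import rospy
from std_msgs.msg import String


def callback(msg):
    # In a real worker, preprocessors/transforms would run here before the
    # (subsampled) data is inserted into the database.
    rospy.loginfo("received: %s", msg.data)


if __name__ == "__main__":
    rospy.init_node("cortex_worker_example")
    rospy.Subscriber("/example_topic", String, callback)
    rospy.spin()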

ROS2

We are currently working on the ROS2 implementation. Please check back soon.