Ollama GitHub Action


GitHub Action to install ollama, pull a model and run the ollama server.

Both the ollama install and the model are cached between runs.

Example usage:

      - uses: pydantic/ollama-action@v3
        with:
          model: qwen2:0.5b

Subsequent steps can then run tests that connect to http://localhost:11434 to make requests to ollama. Here we use the qwen2:0.5b model as it is very small and therefore quick to download.
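
As a fuller sketch, a complete job might look like the following. The job layout and the smoke-test step are illustrative only; the request uses ollama's standard /api/generate HTTP endpoint:

    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: pydantic/ollama-action@v3
            with:
              model: qwen2:0.5b
          # Any later step in the same job can reach the server on localhost:11434,
          # for example via ollama's /api/generate endpoint:
          - name: Smoke-test the model
            run: |
              curl -s http://localhost:11434/api/generate \
                -d '{"model": "qwen2:0.5b", "prompt": "Say hello", "stream": false}'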

This action is used by pydantic-ai.
