This repository was archived by the owner on Dec 3, 2025. It is now read-only.

Commit de26b52

Merge pull request #10 from zappityzap/patch-1: Update quick-start.md

2 parents: deb612f + 30fcff7

1 file changed (+1 −1 lines)

src/content/docs/general/quick-start.md (1 addition, 1 deletion)

````diff
@@ -16,7 +16,7 @@ The recommended way to do this is to use [Ollama](https://ollama.com/). Ollama
 ## Installing Ollama as an inference provider
 
 1. Visit [Install Ollama](https://ollama.com/) and follow the instructions to install Ollama on your machine.
-2. Choose a model from the list of models available on Ollama. Two recommended models to get started are [codellama:7b-instruct](https://ollama.com/library/codellama:instruct) for chat and [codellama:7b-code](https://ollama.com/library/codellama:code) for fill-in-middle. See the [Supported models page](/twinny-docs/general/supported-models/) for more options.
+2. Choose a model from the list of models available on Ollama. Two recommended models to get started are [codellama:7b-instruct](https://ollama.com/library/codellama:instruct) for chat and [codellama:7b-code](https://ollama.com/library/codellama:code) for fill-in-middle. See the [Supported models page](/general/supported-models/) for more options.
 
 ```sh
 ollama run codellama:7b-instruct
````
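For context, the quick-start steps touched here boil down to installing Ollama and running the recommended chat model. A minimal sketch of the commands, assuming Ollama is already installed and the `ollama` CLI is on your PATH:

```shell
# Fetch the two recommended Code Llama models up front
# (optional: `ollama run` will also pull a model on first use).
ollama pull codellama:7b-instruct   # chat model
ollama pull codellama:7b-code       # fill-in-middle (code completion) model

# Start an interactive session with the chat model.
ollama run codellama:7b-instruct
```

These commands talk to the local Ollama service, so they require a working Ollama installation and network access for the initial model download.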
