# LlamaBase

A simple Docker setup for running Ollama locally with a web interface. Run AI models on your own machine, keeping your data private and secure.
## Quick Start

```bash
# Clone and enter the repository
git clone git@github.com:aze3ma/LlamaBase.git
cd LlamaBase

# Start everything with one command
./start.sh
```
That's it! The script will set up everything you need.
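For reference, a script like `start.sh` typically just wraps Docker Compose. Here is a minimal sketch, assuming the repository ships a `docker-compose.yml` defining the Ollama and web-interface services (the actual script may do more):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of start.sh -- the real script may differ.
set -euo pipefail

# Bring up the Ollama and web-interface containers in the background
docker-compose up -d

echo "Ollama API:    http://localhost:11434"
echo "Web interface: http://localhost:8080"
```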
## Prerequisites

- Docker
- Docker Compose
- 8 GB of RAM (16 GB recommended)
- 20 GB of free disk space
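Before running the script, you can confirm the tooling is in place (a quick sanity check; the exact Compose command depends on your installation):

```bash
# Verify Docker and Docker Compose are installed
docker --version
docker-compose --version    # or: docker compose version

# Check free disk space in the current directory
df -h .
```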
## Features

- Web interface at http://localhost:8080
- Command-line access via the `ollama` command
- Your choice of AI models (llama2, mistral, etc.)
- All data stored locally on your machine
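Models are pulled on demand, but you can also fetch one ahead of time from inside the container (assuming the container is named `ollama`, matching the troubleshooting commands below):

```bash
# Download the mistral model inside the running Ollama container
docker exec -it ollama ollama pull mistral
```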
## Getting Started

- Open http://localhost:8080
- Create an account
- Start chatting!
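The web interface talks to Ollama's HTTP API on port 11434, which you can also call directly. For example, assuming the `mistral` model is already pulled:

```bash
# Ask a one-off question via the Ollama HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```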
## Using the CLI

```bash
# List your models
ollama list

# Chat with a model
ollama run mistral
```
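If the `ollama` binary is not installed on your host, the same commands can be run through the container instead (again assuming it is named `ollama`):

```bash
# Run the same CLI commands inside the container
docker exec -it ollama ollama list
docker exec -it ollama ollama run mistral
```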
## Troubleshooting

If something goes wrong:

- Check the Docker logs: `docker logs ollama`
- Restart the services: `docker-compose restart`
- Make sure ports 11434 and 8080 are free (see the check below)
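A quick way to see whether another process is holding those ports (assumes `lsof` is available; `ss -ltnp` is an alternative on Linux):

```bash
# List any processes listening on the Ollama and web-interface ports
lsof -i :11434 -i :8080
```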
## License

MIT License

Made with ❤️ by aze3ma