# AI Model Providers

Configure local and cloud AI providers for Jarvis.
## Local-First Philosophy

Jarvis prioritizes local AI with Ollama: no API keys needed, no internet required, and your data stays private. Cloud providers are optional and can be added later.
## Local AI with Ollama (Recommended)

Run AI models directly on your machine with Ollama:

1. Install Ollama:

   ```bash
   curl -fsSL https://ollama.com/install.sh | sh
   ```

2. Download a model:

   ```bash
   ollama pull llama3.2
   ```

3. Test it:

   ```bash
   jarvis agent --message "Hello!" --local
   ```

### Popular Ollama Models
| Model | Size | Best For |
|---|---|---|
| `llama3.2` | 4GB | General purpose, fast |
| `qwen2.5` | 7GB | Coding, technical tasks |
| `mistral` | 4GB | Balanced performance |
| `codellama` | 7GB | Code generation |
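Before pulling models, you can confirm the Ollama server is actually up. A quick check, assuming the default address `127.0.0.1:11434` (Ollama's `/api/tags` endpoint returns the locally installed models as JSON):

```bash
# Check whether the Ollama server is reachable on the default port;
# /api/tags lists locally installed models, so a successful response
# means the server is up.
if curl -s --max-time 2 http://127.0.0.1:11434/api/tags > /dev/null; then
  echo "Ollama is running"
else
  echo "Ollama is not reachable"
fi
```

If the server is not reachable, start it with `ollama serve` or re-run the installer.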
## Cloud Providers (Optional)

Add cloud providers for more powerful models or when you need specific capabilities.

### OpenAI

Set your API key:

```bash
export OPENAI_API_KEY="sk-..."
```

Or add it to your config:

```bash
jarvis configure
```

### Anthropic (Claude)

Set your API key:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

### Other Providers
Jarvis supports additional providers through the onboarding wizard:
- OpenRouter - Access to multiple models through one API
- AI Gateway - Vercel AI Gateway support
- Moonshot - Chinese AI provider
- Gemini - Google's AI models
- Custom endpoints - Any OpenAI-compatible API
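For custom OpenAI-compatible endpoints, a base URL and key are typically all that's needed. A hedged sketch: `OPENAI_BASE_URL` follows the OpenAI SDK convention for overriding the endpoint, but whether Jarvis reads it is an assumption, and the URL below is a placeholder — check your Jarvis config for the exact setting:

```bash
# Placeholder values: swap in your gateway's real URL and key.
# OPENAI_BASE_URL is the OpenAI SDK convention for pointing clients
# at an alternative OpenAI-compatible endpoint (assumed, not confirmed
# for Jarvis).
export OPENAI_BASE_URL="https://my-gateway.example.com/v1"
export OPENAI_API_KEY="sk-..."
```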
## Configuration

Configure providers through the onboarding wizard or manually.

Run onboarding to add providers:

```bash
jarvis onboard
```

Or configure manually:

```bash
jarvis configure
```

### Environment Variables
| Variable | Description |
|---|---|
| `OLLAMA_HOST` | Ollama server address (default: `127.0.0.1:11434`) |
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic (Claude) API key |
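As one example of using these variables, `OLLAMA_HOST` can point Jarvis at an Ollama server running on another machine. A minimal sketch (the address below is a placeholder):

```bash
# Placeholder address: a LAN machine running Ollama on the default port
export OLLAMA_HOST="192.168.1.50:11434"
jarvis agent --message "Hello!" --local
```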