AI Model Providers

Configure local and cloud AI providers for Jarvis

Local-First Philosophy

Jarvis prioritizes local AI with Ollama — no API keys needed, no internet required, and your data stays private. Cloud providers are optional and can be added later.

Local AI with Ollama (Recommended)

Run AI models directly on your machine with Ollama:

Install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

Download a model:

ollama pull llama3.2

Test it:

jarvis agent --message "Hello!" --local
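The three steps above can be wrapped in a small pre-flight script. This sketch only reports the next command to run (nothing is downloaded), and `NEXT_STEP` is an illustrative variable name, not part of Jarvis:

```shell
# Pre-flight check for the quickstart: report which step comes next.
MODEL="llama3.2"
if command -v ollama >/dev/null 2>&1; then
  # Ollama is already installed; the next step is pulling the model.
  NEXT_STEP="ollama pull $MODEL"
else
  # Ollama is missing; install it first.
  NEXT_STEP="curl -fsSL https://ollama.com/install.sh | sh"
fi
echo "next: $NEXT_STEP"
```

Run it before the quickstart to see which step applies on your machine.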

Popular Ollama Models

Model      Size   Best For
llama3.2   4GB    General purpose, fast
qwen2.5    7GB    Coding, technical tasks
mistral    4GB    Balanced performance
codellama  7GB    Code generation
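To fetch several of the models above in one go, a simple loop works. The sketch below is a dry run — the actual `ollama pull` is commented out so it runs without Ollama installed or network access:

```shell
# Dry-run loop over the models from the table above.
MODELS="llama3.2 qwen2.5 mistral codellama"
COUNT=0
for model in $MODELS; do
  echo "would run: ollama pull $model"
  # Uncomment to actually download:
  # ollama pull "$model"
  COUNT=$((COUNT + 1))
done
echo "queued $COUNT models"
```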

Cloud Providers (Optional)

Add cloud providers for more powerful models or when you need specific capabilities:

OpenAI

Set your API key:

export OPENAI_API_KEY="sk-..."

Or add to config:

jarvis configure
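The `export` above only lasts for the current shell session. A common pattern, sketched here with the actual write commented out, is to append the export line to your shell profile:

```shell
# Build the export line; "sk-..." is a placeholder for your real key.
LINE='export OPENAI_API_KEY="sk-..."'
echo "$LINE"
# Persist it (uncomment, and adjust for your shell's profile file):
# echo "$LINE" >> ~/.bashrc
```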

Anthropic (Claude)

Set your API key:

export ANTHROPIC_API_KEY="sk-ant-..."
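A quick way to see which cloud keys are visible to Jarvis is to inspect the environment. In this sketch, `CHECKED` is just a bookkeeping variable, not anything Jarvis reads:

```shell
# Report whether each cloud provider key is present in the environment.
CHECKED=""
for var in OPENAI_API_KEY ANTHROPIC_API_KEY; do
  eval "val=\${$var:-}"
  if [ -n "$val" ]; then
    echo "$var is set"
  else
    echo "$var is not set"
  fi
  CHECKED="$CHECKED $var"
done
```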

Other Providers

Jarvis supports additional providers through the onboarding wizard:

  • OpenRouter - Access to multiple models through one API
  • AI Gateway - Vercel AI Gateway support
  • Moonshot - Chinese AI provider
  • Gemini - Google's AI models
  • Custom endpoints - Any OpenAI-compatible API
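Custom endpoints only need to speak the OpenAI wire format, which serves chat requests under `/v1/chat/completions`. The sketch below builds that URL; `BASE_URL` is a placeholder for your server's address:

```shell
# OpenAI-compatible servers expose the chat API under /v1.
BASE_URL="http://localhost:8080"   # placeholder address
ENDPOINT="$BASE_URL/v1/chat/completions"
echo "$ENDPOINT"
# Example request (commented out; requires a running server):
# curl -s "$ENDPOINT" -H "Content-Type: application/json" \
#   -d '{"model":"llama3.2","messages":[{"role":"user","content":"Hi"}]}'
```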

Configuration

Configure providers through the onboarding wizard or manually:

Run onboarding to add providers:

jarvis onboard

Or configure manually:

jarvis configure

Environment Variables

Variable            Description
OLLAMA_HOST         Ollama server address (default: 127.0.0.1:11434)
OPENAI_API_KEY      OpenAI API key
ANTHROPIC_API_KEY   Anthropic (Claude) API key
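The OLLAMA_HOST default in the table above can be reproduced with the usual shell fallback idiom. This sketch unsets the variable first so the documented default applies; "gpu-box" is a placeholder hostname:

```shell
# Fall back to the documented default when OLLAMA_HOST is unset.
unset OLLAMA_HOST
export OLLAMA_HOST="${OLLAMA_HOST:-127.0.0.1:11434}"
echo "$OLLAMA_HOST"
# To use a remote Ollama server instead:
# export OLLAMA_HOST="gpu-box:11434"
```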

© 2026 Nilkanth Desai. MIT licensed.