
Run any model. Any provider. Your terminal.
Inference Providers
Local runtimes, hosted inference, or a hybrid setup. Swap providers without changing how you work.
Why Agent Open
Local or hosted — Llama, Qwen, DeepSeek, Mistral. Your choice, always.
OpenRouter, Together, Ollama, vLLM. Plug into whatever you're already running (see the sketch after this list).
No vendor lock-in. No telemetry. Fork it, extend it, own it.
Built for how developers actually work. No browser tabs, no context switching.
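
A minimal sketch of what provider swapping can look like, assuming these runtimes' default OpenAI-compatible endpoints; this is not Agent Open's actual configuration format, and the provider table and model tag are illustrative.

```python
import os

from openai import OpenAI

# Hypothetical provider table. The base URLs are the default
# OpenAI-compatible endpoints these runtimes expose; the harness
# only needs to know which one to talk to.
PROVIDERS = {
    "ollama":     {"base_url": "http://localhost:11434/v1", "api_key": "ollama"},
    "vllm":       {"base_url": "http://localhost:8000/v1",  "api_key": "EMPTY"},
    "openrouter": {"base_url": "https://openrouter.ai/api/v1",
                   "api_key": os.environ.get("OPENROUTER_API_KEY", "")},
}

def client_for(provider: str) -> OpenAI:
    """Build a client for any provider; the calling code never changes."""
    cfg = PROVIDERS[provider]
    return OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])

# Same request against any backend: swap "ollama" for "vllm" or "openrouter".
resp = client_for("ollama").chat.completions.create(
    model="qwen2.5-coder:7b",  # illustrative model tag
    messages=[{"role": "user", "content": "Summarize this diff."}],
)
print(resp.choices[0].message.content)
```

Because all of these providers speak the same chat-completions protocol, swapping providers reduces to changing one entry in a table like this.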
Who it's for
Run frontier OSS models locally. Contribute back. Be part of a community that ships in the open.
Your code never leaves your machines. No cloud APIs required. Full control over your data.
Use cheap inference providers instead of $20/mo subscriptions. Same power at a fraction of the cost.
The Thesis
"The best coding agents shouldn't require proprietary models or cloud lock-in."
Open-source models are catching up to proprietary ones fast. Inference is becoming a commodity. The missing piece is a harness that treats every model as a first-class citizen, local or remote, large or small. That's what we're building.