ZenCoder

Quick Start

Get ZenCoder running in under 5 minutes. Connect a local model and send your first chat.

💡
ZenCoder works best with a local model (Ollama or LM Studio) – zero API keys, zero cost. You can add cloud keys later with zencoder-secrets.

Pre-install: macOS setup

Install the tools ZenCoder uses as its AI backends.

1. Install Ollama (local AI, free)

  1. Download Ollama

    Go to ollama.com and download the macOS app, or use Homebrew:

    brew install ollama
  2. Start the Ollama service

    Ollama runs as a background service on localhost:11434.

    ollama serve
    ℹ️
    On macOS, the Ollama app starts this automatically after install.
  3. Pull a coding model

    We recommend one of these for coding tasks:

    # Recommended: fast, great for code
    ollama pull qwen2.5-coder:7b
    
    # Alternative: general purpose
    ollama pull llama3
    
    # Verify it downloaded
    ollama list

    Expected output: model appears in the list with its size.

  4. Verify Ollama is running

    curl http://localhost:11434/api/tags
    # → {"models":[{"name":"qwen2.5-coder:7b",...}]}
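
    As an end-to-end check, you can also send a one-off prompt to Ollama's generate endpoint. A minimal sketch, assuming the qwen2.5-coder:7b model from step 3 is installed:

    ```shell
    # Request a single non-streamed completion from Ollama's /api/generate
    # endpoint; fall back to an error message if the service is not up.
    resp=$(curl -s http://localhost:11434/api/generate \
      -d '{"model":"qwen2.5-coder:7b","prompt":"Say hi","stream":false}' \
      || echo '{"error":"ollama not running"}')
    echo "$resp"
    ```

    A JSON response with a "response" field confirms the model is loaded and answering.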

2. Install LM Studio (optional alternative)

LM Studio is a GUI app that runs local models and exposes an OpenAI-compatible API.

  1. Download LM Studio

    Download from lmstudio.ai – macOS, Windows, and Linux supported. Install the .dmg as you would any Mac app.

  2. Download a model

    Open LM Studio → search for Qwen2.5-Coder or Mistral 7B → click Download. Models are stored in ~/.cache/lm-studio/models/.

  3. Start the local server

    In LM Studio: Local Server tab → select model → Start Server.

    The server listens on http://localhost:1234/v1 (OpenAI-compatible).
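
    To confirm the server is up, you can query the standard OpenAI-compatible model-listing endpoint. A quick sketch, assuming LM Studio's default port from the step above:

    ```shell
    # List the models the local server exposes; print a hint instead of
    # failing silently if the server is not running.
    models=$(curl -s http://localhost:1234/v1/models \
      || echo '{"error":"LM Studio server not running"}')
    echo "$models"
    ```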

Install ZenCoder

Choose how you want to use ZenCoder – the CLI tools or the VS Code chat extension (or both).

Option 1 – CLI & zencoder-secrets

Installs the zencoder CLI and zencoder-secrets in one step. Works on macOS, Linux, and Windows WSL2.

curl -fsSL https://raw.githubusercontent.com/divyabairavarasu/zencoder-releases/main/install.sh | bash
ℹ️
The install script places binaries in ~/.local/bin (or your $GOPATH/bin if Go is available). Make sure that directory is on your PATH.
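
If zencoder is not found after installing, the install directory is likely missing from your PATH. A minimal fix, assuming the default ~/.local/bin location (add the export line to your shell profile, e.g. ~/.zshrc, to make it permanent):

```shell
# Put the installer's default binary directory at the front of PATH
# for the current shell session.
export PATH="$HOME/.local/bin:$PATH"

# The zencoder binary should now resolve once installed.
command -v zencoder || echo "zencoder not on PATH yet"
```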
  1. Open the REPL

    zencoder

    You'll see the ZenCoder banner and a > prompt. Type a question and press Enter twice to send.

    > What does this code do?
    |
    ZenCoder: This code defines an HTTP server that...
  2. Check health

    zencoder health
    # → ZenCoder: service status is ok. Model: ollama/qwen2.5-coder:7b

Option 2 – VS Code Chat Extension

Get AI chat, inline completions, and context-aware suggestions directly inside VS Code.

Search for Zencoder AI in the VS Code Extensions panel (Ctrl+Shift+X / Cmd+Shift+X) and click Install.

💡
After installing, press Cmd+Shift+P (macOS) or Ctrl+Shift+P (Windows/Linux), type ZenCoder AI Chat, and press Enter to open the chat panel and start using it.

Add cloud API keys with zencoder-secrets (optional)

zencoder-secrets is installed automatically alongside the CLI. It is the most secure way to add your own cloud API keys – every key is stored encrypted on your local machine and never leaves your device.

💡
No cloud keys are required. ZenCoder works fully offline with Ollama or LM Studio. Use zencoder-secrets only when you want to unlock a specific cloud model.

How it works

You register a provider endpoint and model once. ZenCoder handles routing โ€” you just pick the model in the REPL.

# General syntax
zencoder-secrets add -url <provider-base-url> -model <model-id> -alias <your-key-name>
# → Enter API key: [hidden – input is masked]
# → BYOK provider registered.

NVIDIA NIM โ€” featured example

NVIDIA NIM provides free API access to top-tier models including Llama, Mistral, and Nemotron. Sign up at build.nvidia.com to get a free key (no credit card required).

# Add an NVIDIA NIM key
zencoder-secrets add -url https://integrate.api.nvidia.com/v1 \
  -model nemotron-3-nano-30b-a3b \
  -alias nvidia-key1
# → Enter API key for nvidia (alias: nvidia-key1): [hidden]
# → BYOK provider registered.
# → Example model: nvidia/nemotron-3-nano-30b-a3b

# Add more keys for different models (ZenCoder load-balances across them)
zencoder-secrets add -url https://integrate.api.nvidia.com/v1 \
  -model llama-3.3-70b-instruct \
  -alias nvidia-key2

# Verify your keys
zencoder-secrets list

# Test a key is working
zencoder-secrets test -provider nvidia -alias nvidia-key1
ℹ️
Once registered, pick the model inside the ZenCoder REPL with /model and select your NVIDIA model from the list.
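
For example, you can skip the interactive picker and switch directly by id (illustrative – the provider/model id here matches the example registration above):

```text
> /model nvidia/nemotron-3-nano-30b-a3b
```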

Managing keys

| Command | What it does |
| --- | --- |
| zencoder-secrets add -url <URL> -model <M> -alias <name> | Register a key for any provider endpoint |
| zencoder-secrets list | List all registered keys (values masked) |
| zencoder-secrets test -provider <P> -alias <name> | Send a probe request to verify the key works |
| zencoder-secrets delete -provider <P> | Remove all keys for a provider |
| zencoder-secrets delete -provider <P> -alias <name> | Remove a specific key by alias |
💡
Press Enter twice to send a message (single Enter adds a new line โ€” useful for code blocks). Type /help to see all slash commands.

Essential slash commands

| Command | What it does |
| --- | --- |
| /help | Show all available commands |
| /model | Open interactive model picker (arrow keys to navigate) |
| /model ollama/llama3 | Switch to a specific model immediately |
| /models | Alias for /model |
| /clear | Reset the current session's conversation history |
| /quit | Exit the REPL |
| /byok url=<URL> key=<KEY> | Register a custom BYOK endpoint |

Next

CLI Reference – every command and flag