ZenCoder

VSCode Extension

ZenCoder's VSCode extension gives you a streaming chat panel, file context management, and model switching, all without leaving your editor.

💡
After installing the extension, press Cmd+Shift+P (macOS) or Ctrl+Shift+P (Windows/Linux), type ZenCoder AI Chat, and press Enter to open the chat panel.
ZenCoder Chat panel in VS Code

Chat panel UI walkthrough

Model dropdown (top-left)
Switch between available models. Displays all models returned by GET /models from the daemon.
Routing dropdown (Auto/Local/Cloud)
Controls which models ZenCoder routes to. See guidance below.
Mode dropdown (Ask/Agent/Plan)
Ask = single-turn question and answer. Agent = full agentic loop. Plan = shows the steps before executing them.
Message area
Streamed markdown responses. Code blocks are syntax-highlighted.
+ (attach) button
Add files or entire folders to the context window. Context size shown at the bottom.
Clear all (in + menu)
Clears all attached context items immediately. Type /clear in chat to also reset the conversation history.
Send button / Enter
Sends the current message with all attached context.
Disclaimer bar (bottom)
🔒 No data stored · ⚠️ Free APIs may log prompts · 🤖 AI can make mistakes
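The model dropdown reflects whatever the daemon reports. Assuming the daemon is running at the default address (see zenCoder.agentAddress in the settings below), you can inspect the same list from a terminal:

```shell
# Ask the local ZenCoder daemon which models are available.
# ZC_ADDR matches the default zenCoder.agentAddress setting.
ZC_ADDR="http://127.0.0.1:7777"
curl -s "$ZC_ADDR/models" || echo "daemon not reachable at $ZC_ADDR"
```

If the dropdown ever shows up empty, this is a quick way to check whether the daemon is reachable at all.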

Choosing the right routing mode

The routing dropdown in the chat panel controls which models ZenCoder sends your request to. Pick the mode that matches the model you have selected.

Auto

You want ZenCoder to decide

Tries your selected model first. Falls back to the next available model if it's unreachable. Good default for most users.

Local

You have selected an Ollama or LM Studio model

Routes only to locally-running models (Ollama / LM Studio). Never makes an outbound network call. Use this when working offline or for privacy-sensitive code.

Cloud

You have selected a cloud model (NVIDIA NIM, Anthropic, OpenAI, etc.)

Routes only to cloud providers. Requires a key registered via zencoder-secrets. Use this when you need a larger or more capable model.

💡
If you pick a Local model from the model dropdown, set routing to Local. If you pick a Cloud model, set routing to Cloud. Use Auto when you want ZenCoder to handle failover automatically.

Context management

Add files and folders to give ZenCoder precise context about your codebase.

  1. Add a file

     Click the + button → Add File → select a file. It appears as a chip in the context bar. The context size updates at the bottom.

  2. Add a folder

     Click + → Add Folder. ZenCoder scans the folder and adds all relevant source files, respecting the zenCoder.maxContextBytes limit.

  3. Clear context & session history

     Click + → Clear all to remove all attached context items.

     To also clear the conversation history, type /clear in the chat input and send it. This resets the session so the model starts fresh with no prior messages.

ℹ️
Context items are persisted to .zencoder/chat/context.json per workspace, so they survive VSCode restarts.
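As a rough illustration (the field names below are assumptions, not a documented schema; inspect the file in your own workspace), context.json might look something like:

```
{
  "items": [
    { "type": "file", "path": "src/server.ts" },
    { "type": "folder", "path": "src/utils" }
  ]
}
```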

Keyboard shortcuts

Shortcut       Action
Enter          Send message
Shift+Enter    New line in message (without sending)
Ctrl+U         Clear the input field
Cmd+Shift+P    Open Command Palette → type ZenCoder: Chat
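If you prefer a dedicated keybinding for opening the panel, VSCode's keybindings.json supports one. The command ID below (zencoder.openChat) is a guess; check the commands the extension actually contributes in the Keyboard Shortcuts UI before using it:

```
[
  {
    "key": "ctrl+alt+z",
    "command": "zencoder.openChat"
  }
]
```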

VSCode settings

Configure via Cmd+, (macOS) or Ctrl+, (Windows/Linux) → search for ZenCoder.

Setting                     Default                 Description
zenCoder.agentAddress       http://127.0.0.1:7777   Daemon address
zenCoder.maxMessages        100                     Chat history size
zenCoder.maxContextBytes    200000                  Max context per request
zenCoder.autoDetectSkills   true                    Auto-apply AI skills
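These map one-to-one onto entries in settings.json. A sketch with the defaults from the table above (adjust values to taste):

```
{
  "zenCoder.agentAddress": "http://127.0.0.1:7777",
  "zenCoder.maxMessages": 100,
  "zenCoder.maxContextBytes": 200000,
  "zenCoder.autoDetectSkills": true
}
```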

Session management

Sessions are stored in .zencoder/chat/ per workspace.

ls .zencoder/chat/
# session.jsonl       — conversation history (capped at maxMessages)
# session-meta.json   — selected model
# context.json        — current context items
# exports/            — markdown exports
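Because sessions are plain files, a workspace's chat state can also be reset from the shell. The sketch below assumes the extension simply recreates .zencoder/chat/ on next use, and backs up exports/ first in case you want to keep them:

```shell
# Reset this workspace's ZenCoder chat state, keeping markdown exports.
# Assumption: the extension recreates .zencoder/chat/ when next used.
mkdir -p /tmp/zencoder-exports-backup
cp -r .zencoder/chat/exports /tmp/zencoder-exports-backup/ 2>/dev/null || true
rm -rf .zencoder/chat
```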

Next

Agent Mode — multi-step autonomous coding