Inline chat & Cmd+K
Stateless. Fast. Familiar.
- Side-panel chat with your code in context
- Cmd+K inline edit with streaming diffs
- Tab autocomplete (FIM, fill-in-the-middle)
- @web grounding via LibertAI search
Runs in VS Code. Uses your machine. Asks before it edits.
Takes over autonomously. Keeps working while you're away.
One keybind for a one-shot completion. A local tool loop for sharper refactors. A remote autonomous agent when the task outgrows your laptop. Switch per task — the product picks none of them for you.
Stateless. Fast. Familiar.
Reads, writes, runs, grounds — with your approval.
Autonomous. Persistent. Keeps going.
// Cursor can't do this. Claude Code can't do this. Copilot can't do this.
// None of them have a LiberClaw to hand off to.
You're in VS Code. You open the local agent, rough out a plan, run a few tool calls. Maybe a refactor. Maybe a migration. A few turns in, you realize this is going to take hours.
One action in the side panel. The extension packages your message history, tool history, and the current workspace state into a hand-off bundle.
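What such a bundle might look like, as a minimal TypeScript sketch. The interface and field names here are illustrative assumptions, not the extension's actual wire format:

```typescript
// Illustrative shape only; these names are assumptions, not
// LiberClaw Code's real hand-off schema.
interface HandoffBundle {
  messages: { role: "user" | "assistant"; content: string }[]; // chat history
  toolHistory: { tool: string; args: unknown; result: string }[]; // tool calls so far
  workspace: { root: string; dirtyFiles: string[] }; // current editor state
}

const bundle: HandoffBundle = {
  messages: [{ role: "user", content: "Migrate the auth module to OAuth2" }],
  toolHistory: [
    { tool: "readFile", args: { path: "src/auth.ts" }, result: "(contents)" },
  ],
  workspace: { root: "/home/me/project", dirtyFiles: ["src/auth.ts"] },
};

// The bundle is plain JSON, so it survives serialization and can travel
// to any agent that accepts it.
console.log(JSON.parse(JSON.stringify(bundle)).workspace.root); // → /home/me/project
```

The point of a plain-JSON bundle is that the receiving agent needs no shared process state, only the serialized history.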
You see the LiberClaw agents you already have: the research one, the ops one, the one with your personal skills and MCP setup. Pick one. It inherits the task without spawning a new VM or burning a slot. Or spin up a fresh dedicated coding agent if you prefer.
Auto-approve is on, scoped to the workspace sandbox. The agent keeps calling tools, writing files, and running tests. Your local editor stays in bidirectional sync, so incoming changes show up in your diff view as they happen.
The agent is on an Aleph Cloud VM — it doesn't need you. Reconnect in 20 minutes, 2 hours, tomorrow morning. Pick up where it got to. Review the diff. Merge it into your working tree.
LibertAI is the default when you're signed in — no logging, no training on your code. Or point it at any OpenAI-compatible or Anthropic-compatible endpoint you already have. Tool calling works across both conventions.
Decentralized inference. Private by policy — no logs, no training on user data. Billed through your LiberClaw plan, or use a pay-per-use API key.
OpenAI, Groq, Together, vLLM, or any endpoint speaking the OpenAI Chat API.
Anthropic proper, or a self-hosted vLLM/llama.cpp with the Anthropic messages shim.
Auto-detected on localhost:11434. Full offline mode — code never leaves your machine.
Auto-detected on localhost:8080. Fast, configurable, your hardware.
Spin up a private GPU VM preloaded with Ollama or vLLM. Routes like any OpenAI-compatible endpoint.
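The two conventions differ only in request shape, which is why one client can route to every backend above. A sketch of the same turn under each convention; the model names are placeholders, not defaults shipped by the extension:

```typescript
// Same conversation expressed in both wire conventions. Model names are
// placeholders for whatever your chosen backend serves.

// OpenAI convention: the system prompt travels inside the messages array.
// Works for OpenAI, Groq, Together, vLLM, Ollama (http://localhost:11434/v1),
// or llama.cpp (http://localhost:8080/v1).
const openaiStyle = {
  model: "local-coder-model",
  messages: [
    { role: "system", content: "You are a coding assistant." },
    { role: "user", content: "Explain this diff." },
  ],
  stream: true, // streamed tokens drive the inline diff view
};

// Anthropic convention: the system prompt is a top-level field,
// and max_tokens is required.
const anthropicStyle = {
  model: "claude-model",
  system: "You are a coding assistant.",
  messages: [{ role: "user", content: "Explain this diff." }],
  max_tokens: 1024,
  stream: true,
};

// Either payload is one POST away:
//   POST {baseUrl}/chat/completions   (OpenAI style)
//   POST {baseUrl}/v1/messages        (Anthropic style)
console.log(openaiStyle.messages.length, anthropicStyle.messages.length); // → 2 1
```

Tool calling follows the same split: tools ride along in the request body under each convention's own field names, so a client that knows both shapes can talk to any of these backends.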
The editor and CLI are free, open source, and work with any backend. Remote-agent mode uses your LiberClaw subscription — so a free LiberClaw account already gets you two concurrent coding agents at no extra cost.
Or skip plans entirely: drop an OpenAI / Anthropic / LibertAI key in settings.json and use the editor without any LiberClaw account. Remote-agent mode is the only feature gated behind sign-in.
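A sketch of what that might look like in settings.json (VS Code's settings file accepts comments). The setting keys below are illustrative assumptions, not the extension's actual schema; check its own documentation for the real names:

```jsonc
{
  // Illustrative keys only, not the extension's real configuration schema.
  "liberclaw.provider": "openai-compatible",
  "liberclaw.baseUrl": "https://<your-endpoint>/v1",
  "liberclaw.apiKey": "<your key>",
  "liberclaw.model": "<model name>"
}
```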
We don't conflate "private" with "local." Here's exactly where your code goes in each mode.
Ollama or llama.cpp server running on your own machine. Nothing leaves your laptop. No network egress. The only mode where "local" honestly applies.
Inference is remote, but no logging and no training on user data by policy. Aleph TEE instances available for confidential-compute models. Materially stronger than default OpenAI or Anthropic terms.
Your code lives on an Aleph Cloud VM for the agent's lifetime. You control when to destroy it. Same LibertAI inference guarantees apply for the model calls the agent makes.
House rule: "Private by default" never means "local." LibertAI is remote-but-private, a different claim, and one we will never blur in marketing.
Same product. Pick the surface that fits how you already work.
Inline chat, Cmd+K, tab complete, local agent, and hand-off — in your existing VS Code. Marketplace + Open VSX.
Claude-Code-class REPL. Tool loop locally, or --agent to offload to a LiberClaw agent you can reconnect to anytime.
Signed VSCodium build with LiberClaw Code pre-installed. One download, zero setup. macOS, Windows, Linux.