100% Local & Private
Run locally with Ollama and send zero data out. Total privacy.
Cloud Connectors
Hook into OpenAI, Gemini, Groq, or OpenRouter for maximum power.
Capabilities designed for builders
Everything you need to seamlessly navigate, understand, and build within your existing codebase.
AI-Powered Q&A
Get context-aware answers from your local workspace using blazing fast Retrieval-Augmented Generation (RAG).
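At its core, RAG retrieves the workspace chunks most similar to your question and feeds them to the model as context. The sketch below is purely illustrative of that retrieval step, not WorkspaceGPT's actual implementation: it uses toy term-frequency vectors and cosine similarity, where a real system would use neural embeddings and a vector index.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: a term-frequency vector (real RAG uses neural embeddings)
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank workspace chunks by similarity to the query and keep the top k
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "def connect(): opens the database session",
    "class Parser handles config file parsing",
    "def close(): tears down the database session",
]
context = retrieve("how do I open a database connection?", chunks, k=1)
# The retrieved chunk is then prepended to the prompt sent to the LLM
```

The retrieved context is what makes the answers "context-aware": the model sees the relevant slice of your codebase alongside your question.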
Confluence One-Click
Seamlessly connect to your Confluence space and instantly start chatting with your documentation alongside your code.
Azure DevOps (ADO)
Deep integration with ADO to fetch work items, user stories, and pull requests directly into your AI context.
Interactive Editor Chat
Ask questions directly in the IDE to receive intelligent, project-specific code solutions. Stop switching context.
Ready in minutes
Installation
WorkspaceGPT is available directly through the marketplace. Install it for VS Code or Cursor.
Choose Provider
Select your engine in Settings > Providers.
- Ollama (100% Local) - default model: llama3.2:1b
- OpenAI / Gemini - high-performance models
- OpenRouter - multiple model access
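If you prefer editing settings.json directly, the provider choice might look like the fragment below. The key names here are hypothetical placeholders for illustration; check the Settings > Providers UI for the extension's actual setting IDs. The endpoint shown is Ollama's standard local default.

```jsonc
{
  // Hypothetical keys for illustration — verify real names in Settings > Providers
  "workspacegpt.provider": "ollama",
  "workspacegpt.ollama.model": "llama3.2:1b",
  "workspacegpt.ollama.endpoint": "http://localhost:11434"
}
```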
Connect Contexts
Confluence Start
Go to Settings > Confluence. Click Sign in for one-click auth, then hit Sync.
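Under the hood, talking to Confluence Cloud boils down to Basic-auth REST calls. This is a minimal sketch of that pattern, not WorkspaceGPT's code: the site name, email, and token are placeholders, and the one-click sign-in above handles all of this for you.

```python
import base64

def confluence_auth_header(email: str, api_token: str) -> dict:
    # Confluence Cloud's REST API accepts HTTP Basic auth as email:api_token
    raw = f"{email}:{api_token}".encode()
    return {"Authorization": "Basic " + base64.b64encode(raw).decode()}

def page_list_url(site: str, limit: int = 10) -> str:
    # The /wiki/rest/api/content endpoint lists pages; "site" is a placeholder
    return f"https://{site}.atlassian.net/wiki/rest/api/content?type=page&limit={limit}"

# Example values only — no request is actually sent here
headers = confluence_auth_header("you@example.com", "<api-token>")
url = page_list_url("your-site")
```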
ADO Synchronization
Go to Settings > ADO Integration. Enter a Personal Access Token (PAT) to sync pull requests, tickets, and work items.
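For context, the Azure DevOps REST API authenticates a PAT via HTTP Basic auth with an empty username. The sketch below shows that convention against the standard Work Item Tracking endpoint; the organization, project, and token values are placeholders, and the extension's PAT field does this for you.

```python
import base64

def ado_auth_header(pat: str) -> dict:
    # Azure DevOps accepts a PAT as Basic auth with an empty username: ":{pat}"
    token = base64.b64encode(f":{pat}".encode()).decode()
    return {"Authorization": "Basic " + token}

def work_item_url(org: str, project: str, item_id: int) -> str:
    # Standard Work Item Tracking endpoint; org/project here are placeholders
    return (f"https://dev.azure.com/{org}/{project}"
            f"/_apis/wit/workitems/{item_id}?api-version=7.0")

# Example values only — no request is actually sent here
headers = ado_auth_header("<your-pat>")
url = work_item_url("my-org", "my-project", 42)
```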