Getting Started
Everything you need to set up Chitty Workspace and start building with AI agents.
Quick Start
Get up and running in under two minutes.
Download Chitty Workspace
Grab the installer for your platform (Windows, macOS, or Linux) from the home page.
Add your API key
Open Settings > API Keys and enter your key from OpenAI, Anthropic, xAI, Google, or set up Ollama for local models.
Create an agent
Go to Settings > Agents > Agent Builder to create a custom agent, or use one of the built-in defaults.
Start building
Open the chat and start talking to your agent. It can write code, run commands, browse the web, and manage files.
Agents
Agents are the core of Chitty Workspace. Each agent combines custom instructions, marketplace tools, and memory to create a specialized AI assistant.
Key features
- Custom instructions — Define what the agent knows and how it behaves via a system prompt.
- Tool sets — Assign native tools and marketplace packages to each agent. Tools come from community-developed packages — not auto-generated.
- Agent Builder — AI-powered builder that generates agent instructions and selects the right marketplace packages for you. Describe what you want and the builder configures the agent.
- Memory — Agents retain knowledge across conversations. Memories are scoped globally, per project, or per agent and automatically loaded at the start of each session.
- Browser control — Agents can navigate websites, click elements, fill forms, extract data, and automate web workflows using the built-in browser.
- Project scoping — Agents can be global or scoped to a specific project directory, with project context loaded automatically from chitty.md.
- Preferred model — Each agent can specify which provider and model to use (OpenAI, Anthropic, Google, xAI, or Ollama).
- Shareable — Export agents as JSON and share them with your team or the community.
Agent loop
When you send a message, the agent assembles context in this order: skill instructions, project context (chitty.md), active memories, tool definitions, and conversation history. This context is sent to the LLM, which streams its response back in real time.
If the LLM requests a tool call, the agent executes it locally and sends the result back for a follow-up response. Multiple tool calls can be chained in a single turn. The full conversation — including all tool calls and results — is persisted to the local database.
When conversation history grows beyond the model’s context window, older messages are automatically summarized while preserving tool call/result pairs to maintain continuity.
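The tool-call loop described above can be sketched in a few lines of Python. This is illustrative only; names like `llm` and `tools` are assumptions, not Chitty's actual internals:

```python
def run_turn(llm, tools, messages):
    """Call the model, execute any requested tool calls locally,
    feed results back, and repeat until the model answers in text."""
    while True:
        reply = llm(messages)  # in the real app the response is streamed
        calls = reply.get("tool_calls")
        if not calls:
            return reply["text"]  # plain answer: the turn is complete
        for call in calls:
            # Tools run locally; results go back for a follow-up response
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
```

Multiple chained tool calls in one turn simply mean this loop iterates more than once before the model returns plain text.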
Tools
Tools give agents the ability to take action. Chitty Workspace provides built-in native tools and a growing catalog of community-developed marketplace packages.
Native tools
Built into the application and available to every agent out of the box.
- file_reader — Read files from the local filesystem
- file_writer — Create and edit files (auto-creates parent directories)
- terminal — Run shell commands (PowerShell on Windows, zsh on macOS, sh on Linux)
- code_search — Search across codebases by regex pattern with glob filters
- browser — Full browser automation via the Chitty Browser Extension
- save_memory — Persist knowledge across sessions (scoped: global, project, or agent)
- search_memory — Search saved memories by keyword
- create_tool — Create custom tools on-the-fly (Python, Node.js, PowerShell, Shell)
- install_package — Install pip or npm packages in isolated tool directories
- open_agent_panel — Open an agent in a new panel and optionally send it a message
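For a sense of what a create_tool-generated script might look like, here is a minimal sketch. The entry-point name and argument shape are assumptions for illustration; the actual contract is defined by Chitty's create_tool and the Package Developer Guide:

```python
def run(args: dict) -> dict:
    """Hypothetical tool entry point: take a dict of JSON-style
    arguments, return a JSON-serializable result."""
    text = args.get("text", "")
    return {"word_count": len(text.split())}

# The host application would normally pass arguments (e.g. as JSON)
# and read the returned dict back; invoked directly here to illustrate:
result = run({"text": "hello agent world"})
```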
Marketplace packages
Marketplace packages are the primary way to extend your agents with powerful, real-world integrations. Each package is a self-contained bundle of tools, authentication, configurable resources, and agent setup — developed by the open-source community and evaluated for functionality and security before publishing.
- Real integrations — Packages connect to actual APIs and services (Google Cloud, databases, web tools, and more)
- Configurable — Control which resources each package can access (e.g. allowed BigQuery datasets or Cloud Storage buckets) and enable or disable features like creating or deleting resources
- One-click install — Browse the Marketplace, install a package, and its tools are immediately available to your agents
- Agent-ready — Packages include suggested prompts and agent configuration so you can start using them right away
How packages reach the Marketplace
- Community develops — Anyone can build a package using the Package Developer Guide
- Review & evaluation — Submitted packages are evaluated for functionality, security, and quality
- Published — Approved packages appear in the Marketplace for all users to install
Want to build your own package? Read the Package Developer Guide →
Slash Commands
Type / in the chat input to use built-in commands. Commands are handled instantly without sending to the LLM.
Available commands
- /schedule — Open the schedule builder to create a new scheduled agent task
- /schedules — List all your scheduled tasks with status, last run, and next run times
- /help — Show all available slash commands
How it works
When you type a message starting with /, Chitty intercepts it before it reaches the LLM. The command is executed locally and the result is displayed inline in the chat panel. You can continue chatting normally after using a command.
The slash command system is extensible — new commands will be added in future updates.
Schedules
The agent scheduler lets you run agents autonomously on a recurring schedule. Perfect for daily briefings, monitoring tasks, data collection, and automated workflows.
Creating a schedule
Type /schedule in any chat panel to open the schedule builder:
Choose an agent
Select which agent should run the task. Use Chitty (default) or any custom agent you’ve created.
Describe the task
Write what the agent should do each time it runs. For example: “Check my email and calendar, give me a morning briefing.”
Set the schedule
Pick from presets (every morning, every hour, weekdays at 9 AM) or enter a custom cron expression for full control.
Create
Click Create Schedule. The task runs automatically in the background at the scheduled times.
Schedule presets
- Every morning (9:00 AM) — 0 9 * * *
- Every hour — 0 * * * *
- Weekdays at 9:00 AM — 0 9 * * 1-5
- Custom — Any 5-field cron expression (minute hour day month weekday)
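To see how a 5-field expression is evaluated, here is a minimal matcher in pure Python. This is a sketch for understanding the syntax, not Chitty's actual scheduler, and it omits step syntax like */5 for brevity:

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    # A field is a comma list of "*", single numbers, or ranges like "1-5".
    for part in field.split(","):
        if part == "*":
            return True
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, when: datetime) -> bool:
    """Check whether a 5-field cron expression fires at the given minute."""
    minute, hour, day, month, weekday = expr.split()
    return (_field_matches(minute, when.minute)
            and _field_matches(hour, when.hour)
            and _field_matches(day, when.day)
            and _field_matches(month, when.month)
            # cron weekdays: 0 = Sunday; Python's weekday(): 0 = Monday
            and _field_matches(weekday, (when.weekday() + 1) % 7))
```

For example, `0 9 * * 1-5` matches 9:00 AM on any weekday but not on a Saturday or Sunday.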
Managing schedules
Type /schedules to see all your scheduled tasks with their current status, last run time, and next scheduled run. Schedules can be enabled, disabled, or deleted via the API.
Auto-approval
Scheduled tasks run with auto-approval enabled by default, meaning the agent can execute tools without waiting for manual approval. This is essential for autonomous operation. You can change this per-task if needed.
Example use cases
- Morning briefing — Check email and calendar every weekday at 9 AM
- System monitoring — Check server health every hour
- Data collection — Scrape prices or news every 30 minutes
- Report generation — Generate weekly summary reports every Friday
- Social media — Post scheduled content or check engagement metrics
Browser Extension
The Chitty Browser Extension gives your agents the same browser control you have. It connects to your real Chrome or Edge browser — not a simulated environment — so agents can use your existing login sessions, cookies, and bookmarks.
How it works
Install the Chitty Browser Extension in Chrome (load unpacked from the extension/ folder in your Chitty Workspace directory). Once connected, agents communicate with your browser via Chrome DevTools Protocol.
Agent capabilities
- Navigate — Open any URL in a real browser tab
- Click & type — Interact with buttons, forms, dropdowns — anything you can click, the agent can click
- Screenshot — Capture the current page state and display it in the chat
- Read text — Extract visible content from any page
- Execute JavaScript — Run scripts on the page for advanced automation
- Wait for elements — Wait for dynamic content to load before interacting
Real browser, real sessions
Because agents control your actual browser, they have access to any site where you’re already logged in — Gmail, LinkedIn, GitHub, banking, internal tools, etc. No separate OAuth or API keys needed for these sites. The agent sees and interacts with pages exactly as you would.
Action Panel preview
When the agent opens a page, it appears in the Action Panel’s dynamic view so you can watch what the agent is doing. For cross-origin sites that can’t render in an iframe, Chitty shows the connection status and page info instead.
Installation
- Open chrome://extensions in Chrome
- Enable Developer mode (top right toggle)
- Click Load unpacked
- Select the extension folder in your Chitty Workspace directory
The extension connects automatically when Chitty Workspace is running. Connection status is shown in the Action Panel’s Activity tab.
Approval System
Chitty protects you from unintended actions. Sensitive operations require your approval before they execute.
Actions that require approval
- Terminal commands — All shell commands
- File writes — Creating or modifying files
- Browser actions — Clicking, typing, navigating, running JavaScript
- Package installs — Installing pip or npm packages
Approval options
When an action requires approval, you see three choices:
- Deny — Reject this action. The agent receives a denial and can try a different approach.
- Always allow for session — Approve this action and auto-approve all future actions for the rest of this session. The agent still “asks” for approval (satisfying LLM guardrails), but the system responds automatically.
- Allow once — Approve just this single action.
Per-agent auto-approve
Agents can be configured with approval mode: auto in their settings, which skips all approval prompts. Use this for trusted agents running autonomous tasks. The default mode is prompt (always ask).
Marketplace
The Marketplace is a catalog of tool packages that extend what your agents can do. Packages are developed by the open-source community, reviewed for quality and security, and published for all users to install.
Available packages
- Web Tools — Web search (DuckDuckGo) and web scraper (extract text, links, tables from any URL)
- Google Cloud — BigQuery queries and Cloud Storage management
- Social Media — X/Twitter posting, search, and engagement
- Amazon AWS — Lambda, RDS, S3, DynamoDB (coming soon)
- Microsoft Azure — Container Apps, Cosmos DB, Blob Storage (coming soon)
- Database Tools — SQLite, PostgreSQL, MySQL management (coming soon)
Installing packages
Open the Marketplace in Chitty Workspace (click Marketplace on the welcome screen or via the Agents tab), find the package you want, and click Install. The tools are added to your workspace instantly and can be assigned to any agent.
Building your own packages
The Package Developer Guide has everything you need to build a marketplace package: package structure, manifest format, tool scripts (Python, Node.js, PowerShell, Shell), authentication, resource scoping, feature flags, and agent configuration.
You can also point your AI coding assistant (Claude Code, Cursor, etc.) to the developer guide URL and it can build a complete package for you.
Read the Package Developer Guide →
Submit a Package (Coming Soon)
Providers & Models
Chitty Workspace is BYOK (Bring Your Own Key). Use cloud providers with your own API keys, or run models locally with no API key required.
Cloud providers
Add your API key in Settings > Providers. Keys are stored securely in your OS keyring (Windows Credential Manager, macOS Keychain, or Linux Secret Service) — never in plain text or config files.
- OpenAI — GPT-4o, GPT-4o-mini, o1, o3, and other OpenAI models
- Anthropic — Claude Opus, Claude Sonnet, Claude Haiku
- xAI — Grok 3, Grok 3 Mini
- Google AI — Gemini 2.5 Flash, Gemini Pro
Local providers
Run models on your own hardware — completely offline, no API key needed.
Ollama
Ollama provides a simple way to run open-source models locally. Chitty connects to Ollama on localhost:11434.
- Setup: Install Ollama, then pull a model: ollama pull llama3
- Supported models: Llama 3, Qwen, Mistral, Phi, DeepSeek, CodeLlama, and any model Ollama supports
- Auto-discovery: Chitty detects installed Ollama models and shows them in the model dropdown
- No API key: Ollama runs entirely on your machine
HuggingFace Sidecar
Run GGUF-format models directly using Chitty’s Python inference sidecar.
- Setup: Place .gguf model files in ~/.chitty-workspace/models/
- Sidecar: Chitty starts a Python process that loads the model and serves inference
- Any GGUF model: Download quantized models from HuggingFace and drop them in the models directory
GPU support
Chitty automatically detects your GPU hardware and reports it in the Settings panel. Local models (Ollama and HuggingFace sidecar) will use GPU acceleration when available:
- NVIDIA — CUDA support via Ollama or llama.cpp
- AMD — ROCm support via Ollama
- Apple Silicon — Metal acceleration via Ollama or llama.cpp
- CPU-only — Models run on CPU if no GPU is available (slower but works everywhere)
Adding custom models
In Settings > Models, you can add custom model entries for any provider. Specify the model ID, display name, and token limits. This is useful for fine-tuned models, preview models, or models from OpenAI-compatible APIs.
Per-agent model selection
Each agent can specify a preferred provider and model. When you select that agent, Chitty automatically switches to the right model. This lets you use a fast model for simple tasks and a powerful model for complex work.
Memory
The memory system lets agents retain knowledge across conversations. Unlike chat history, memories are semantic — the agent actively decides what to remember and recalls relevant memories in future sessions.
Memory types
- User — Your role, preferences, and expertise (e.g. “Senior Rust developer, prefers minimal dependencies”)
- Feedback — Corrections and guidance you’ve given (e.g. “Don’t use unwrap() — always handle errors properly”)
- Project — Project-specific context and decisions (e.g. “Migrating from REST to gRPC by end of Q2”)
- Reference — Pointers to external resources (e.g. “CI docs are at /team/ci-setup”)
Memory scoping
Each memory has a scope that controls when it’s loaded:
- Global — Loaded in every conversation. Use for user preferences and general feedback.
- Project — Loaded when chatting within that project directory. Use for project-specific decisions and conventions.
- Agent — Loaded when that specific agent is active. Use for agent-specific corrections.
How it works
- Auto-load: At conversation start, relevant memories are loaded (global + matching project + matching agent).
- Injected: Memories are formatted and injected into the system prompt so the LLM has full context.
- Agent saves: When the agent learns something important, it uses the save_memory tool to persist it.
- Agent searches: The agent can search existing memories with search_memory before asking you repeated questions.
Memory tools
- save_memory — Save a new memory with type, scope, and content
- search_memory — Search memories by keyword
Context Management
Every LLM call assembles a rich context from multiple sources. Understanding how context is assembled helps you get the best results from your agents.
Context assembly order
- System prompt — From the active agent’s instructions (or Chitty’s default system prompt)
- Project context — From chitty.md or .chitty/chitty.md in the project directory
- Active memories — Global + project-scoped + agent-scoped memories
- Tool definitions — JSON Schema for each available tool, plus usage instructions
- Conversation history — Previous messages in this conversation (trimmed to fit)
- User message — Your current input
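The assembly order above can be sketched as a pure function. Names and formatting here are illustrative; Chitty's real prompt formatting differs:

```python
def assemble_context(agent_prompt, project_md, memories, tool_defs,
                     history, user_msg):
    """Join the context sources in the order listed above and return
    a message list ready to send to the LLM (sketch only)."""
    sections = [
        agent_prompt,            # 1. system prompt from the agent
        project_md or "",        # 2. chitty.md project context
        "\n".join(memories),     # 3. active memories
        "\n".join(tool_defs),    # 4. tool definitions + usage notes
    ]
    system = "\n\n".join(s for s in sections if s)
    return [{"role": "system", "content": system},
            *history,            # 5. prior conversation, trimmed to fit
            {"role": "user", "content": user_msg}]  # 6. your input
```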
Project context (chitty.md)
Drop a chitty.md (or .chitty/chitty.md) in any project directory. When you set that directory as the project path in a chat panel, Chitty automatically loads and injects it into every LLM call.
Include your tech stack, coding conventions, build commands, and any special instructions for the AI. Chitty can also help generate a chitty.md by scanning your project.
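A chitty.md can be as simple as a few headed sections. The project details below are purely hypothetical:

```markdown
# Project: acme-api

## Tech stack
- Rust (axum), PostgreSQL, deployed behind nginx

## Conventions
- No unwrap() in library code; return proper Result types
- Run `cargo fmt` and `cargo clippy` before committing

## Build & test
- Build: `cargo build`
- Tests: `cargo test`
```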
Context window management
When conversation history grows beyond the model’s context window, Chitty automatically compacts it — summarizing older messages while preserving recent tool call/result pairs to maintain continuity. This happens transparently so long conversations don’t break mid-flow.
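The compaction step can be sketched as follows. This is an illustrative simplification, not Chitty's actual algorithm; the `summarize` callable stands in for an LLM summarization pass:

```python
def compact_history(history, keep_recent, summarize):
    """Replace older messages with a single summary message, keeping
    the most recent messages verbatim. Never split a tool result from
    the assistant turn that requested it (sketch only)."""
    if len(history) <= keep_recent:
        return history
    cut = len(history) - keep_recent
    # Slide the cut forward past any tool results so call/result
    # pairs stay together in the preserved tail.
    while cut < len(history) and history[cut]["role"] == "tool":
        cut += 1
    summary = {"role": "system", "content": summarize(history[:cut])}
    return [summary] + history[cut:]
```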
Tool instructions
Each tool (native and marketplace) carries its own usage instructions. These are automatically injected into the system prompt so the agent knows when and how to use each tool — you don’t need to explain tool usage in your agent instructions.
Local API
Chitty runs a local REST API at http://localhost:8770. You can use it to integrate with other tools or build custom workflows.
Key endpoints
- GET /api/agents — List all agents
- POST /api/agents — Create a new agent
- GET /api/tools — List all available tools
- GET /api/schedules — List scheduled tasks
- POST /api/schedules — Create a scheduled task
- POST /api/schedules/:id/run — Manually trigger a scheduled task
- GET /api/conversations — List conversations (filterable by agent_id)
- GET /api/providers — List configured providers
- GET /api/marketplace/packages — List installed packages
The API is only accessible from localhost and requires no authentication since Chitty runs entirely on your machine.
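As a sketch of calling the API from Python's standard library, here is a client for POST /api/schedules. The request body's field names are assumptions for illustration; check the actual API for the real schema:

```python
import json
from urllib import request

BASE = "http://localhost:8770"

def schedule_payload(agent_id, task, cron):
    """Build a request body for POST /api/schedules.
    Field names here are assumed, not confirmed by the docs."""
    return {"agent_id": agent_id, "task": task, "cron": cron}

def create_schedule(agent_id, task, cron):
    """POST the schedule to the local API (no auth: localhost only)."""
    body = json.dumps(schedule_payload(agent_id, task, cron)).encode()
    req = request.Request(f"{BASE}/api/schedules", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)
```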