Building Marketplace Packages
A complete guide to creating tool packages for the Chitty Workspace marketplace. Packages are mini-applications that plug into the standard integration framework — authentication, tools, resource scoping, and agent configuration.
Overview
A marketplace package bundles one or more tools that extend what Chitty Workspace agents can do. Each package includes:
- Tools — Script-based actions (Python, Node, Shell, PowerShell) the agent can call
- Authentication — How the package connects to external services (CLI, OAuth, API key)
- Setup Steps — Guided installer for dependencies and credentials
- Configurable Resources — What the user allows the agent to access (datasets, buckets, repos)
- Feature Flags — Toggles for optional/dangerous capabilities (create, delete)
- Agent Configuration — Default instructions, suggested prompts, capabilities summary
Connection Points
Packages integrate through these standard connection points:
| Connection | What it does | Defined in |
|---|---|---|
| Auth | CLI, OAuth, API key — how credentials are obtained | package.json → auth |
| Setup | Step-by-step installer wizard | package.json → setup_steps |
| Tools | Subprocess execution (stdin JSON → stdout JSON) | {tool}/manifest.json + tool.py |
| Resources | User-scoped access (datasets, buckets, etc.) | package.json → configurable_resources |
| Features | User-toggled capabilities | package.json → feature_flags |
| Agent | Instructions, prompts, capabilities | package.json → agent_config |
Package Structure
Every package is a directory containing a package.json and one or more tool subdirectories:
my-package/
├── package.json # Package manifest (metadata, auth, config, setup)
├── auth.py # Shared auth helper (optional)
├── config.py # Shared config enforcement helper (optional)
├── tool-one/
│ ├── manifest.json # Tool definition (params, runtime, instructions)
│ └── tool.py # Tool implementation
└── tool-two/
├── manifest.json
└── tool.js # Can be Node.js, Python, Shell, or PowerShell
Key rules
- Package name must be kebab-case: my-package
- Tool names must be alphanumeric with underscores/hyphens: my_tool
- Entry points must be simple filenames (no path traversal)
- Scripts communicate via JSON on stdin/stdout
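These naming rules can be checked with simple regular expressions. A sketch, with patterns inferred from the rules above — Chitty's own validation may be stricter:

```python
import re

# Hypothetical patterns derived from the key rules; illustrative only.
KEBAB_CASE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")  # package names: my-package
TOOL_NAME = re.compile(r"^[A-Za-z0-9_-]+$")             # tool names: my_tool
SIMPLE_FILENAME = re.compile(r"^[^/\\]+$")              # entry points: no path separators

def is_valid_package_name(name: str) -> bool:
    """True when the name is lowercase kebab-case."""
    return bool(KEBAB_CASE.match(name))
```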
package.json — Package Manifest
The root manifest defines everything about your package. Here's a complete annotated example:
{
// ── Identity ──────────────────────────────────────────
"name": "my-cloud-service", // Unique ID (kebab-case)
"display_name": "My Cloud Service", // Shown in UI
"vendor": "Your Name", // Author or organization
"description": "Short description for marketplace cards",
"version": "1.0.0", // Semver
// ── Display ───────────────────────────────────────────
"icon": "cloud", // Icon name
"color": "#4285F4", // Hex color for icon background
"categories": ["cloud", "data"], // For marketplace filtering
"long_description": "Full markdown description for the detail page...",
"docs_url": "https://...", // Link to external docs
"repo_url": "https://github.com/...",
// ── Tools ─────────────────────────────────────────────
"tools": ["my-query-tool", "my-storage-tool"],
// ── Authentication ────────────────────────────────────
"auth": {
"type": "cli", // "cli", "oauth", "api_key", "none"
"instructions": "Human-readable auth instructions",
"oauth_provider": "google", // For OAuth packages
"oauth_scopes": ["scope1"], // Required OAuth scopes
"credentials_key": "my-api-key" // For API key packages (keyring name)
},
// ── Setup Steps ───────────────────────────────────────
"setup_steps": [ /* see below */ ],
// ── Configurable Resources ────────────────────────────
"configurable_resources": [ /* see below */ ],
// ── Feature Flags ─────────────────────────────────────
"feature_flags": [ /* see below */ ],
// ── Agent Configuration ───────────────────────────────
"agent_config": { /* see below */ },
// ── Persistent Connections (optional) ──────────────────
"connections": [ /* see Persistent Connections section */ ]
}
Tool manifest.json
Each tool has its own manifest.json that defines parameters, runtime, and instructions:
{
"name": "my_query_tool",
"display_name": "Query Tool",
"description": "Run queries against the service",
"version": "1.0.0",
"tool_type": "custom",
"runtime": "python", // "python", "node", "shell", "powershell"
"entry_point": "tool.py",
"parameters": {
"action": {
"type": "string",
"description": "Action to perform: list, query, insert",
"required": true
},
"query": {
"type": "string",
"description": "The query to execute",
"required": false
},
"limit": {
"type": "number",
"description": "Max results to return (default: 100)",
"required": false
}
},
"install_commands": [
"pip install my-sdk"
],
"timeout_seconds": 60,
"permission_tier": "moderate", // "safe", "moderate", "elevated"
"source": "marketplace",
"marketplace_id": "my-cloud-service",
"instructions": "Agent instructions — injected into the system prompt.\n\nAvailable actions:\n- list: List all items\n- query: Run a query\n- insert: Insert new data"
}
Parameter types
- string — Text value
- number — Numeric value
- boolean — True/false
- array — JSON array
- object — JSON object
Permission tiers
- safe — Read-only operations, auto-approved
- moderate — Write operations, one-click confirm
- elevated — Destructive or expensive operations, explicit approval
Tool Scripts
Tool scripts are executed as subprocesses. They receive parameters as JSON on stdin and must return a JSON result on stdout.
Input (stdin)
{"action": "query", "query": "SELECT * FROM users", "limit": 10}
Output (stdout) — must be valid JSON
// Success:
{"success": true, "output": {"rows": [...], "count": 10}}
// Error:
{"success": false, "error": "Table not found: users"}
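Both result shapes can be produced by small helpers so every exit path stays valid JSON. A sketch; the helper names are illustrative:

```python
import json

def emit_success(output) -> str:
    """Print and return the success envelope expected on stdout."""
    line = json.dumps({"success": True, "output": output})
    print(line)
    return line

def emit_error(message: str) -> str:
    """Print and return the error envelope expected on stdout."""
    line = json.dumps({"success": False, "error": message})
    print(line)
    return line
```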
Environment Variables
Chitty sets these environment variables for every tool execution:
| Variable | Description |
|---|---|
| CHITTY_TOOL_NAME | Name of the tool being executed |
| CHITTY_TOOL_DIR | Path to the tool's directory (where manifest.json is) |
| CHITTY_SANDBOX_DIR | Temporary working directory (cleaned up after execution) |
| CHITTY_PACKAGE_CONFIG | JSON string with allowed resources and feature flags (see Config Enforcement) |
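A tool can gather these variables into one context dict at startup. A sketch — `load_runtime_context` is an illustrative name, not a platform API:

```python
import json
import os

def load_runtime_context() -> dict:
    """Collect the standard env vars Chitty sets for each tool run."""
    return {
        "tool": os.environ.get("CHITTY_TOOL_NAME", ""),
        "tool_dir": os.environ.get("CHITTY_TOOL_DIR", ""),
        "sandbox": os.environ.get("CHITTY_SANDBOX_DIR", ""),
        # CHITTY_PACKAGE_CONFIG is a JSON string; fall back to an empty config.
        "config": json.loads(os.environ.get("CHITTY_PACKAGE_CONFIG") or "{}"),
    }
```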
Python template
import json
import sys
def main():
raw = sys.stdin.read()
params = json.loads(raw) if raw.strip() else {}
action = params.get("action", "")
if action == "list":
# ... do work ...
print(json.dumps({"success": True, "output": {"items": [], "count": 0}}))
else:
print(json.dumps({"success": False, "error": f"Unknown action: {action}"}))
if __name__ == "__main__":
main()
Node.js template
const chunks = [];
process.stdin.on('data', chunk => chunks.push(chunk));
process.stdin.on('end', () => {
const params = JSON.parse(Buffer.concat(chunks).toString() || '{}');
const action = params.action || '';
if (action === 'list') {
console.log(JSON.stringify({ success: true, output: { items: [], count: 0 } }));
} else {
console.log(JSON.stringify({ success: false, error: `Unknown action: ${action}` }));
}
});
Authentication
Packages declare their auth method in package.json → auth. Chitty handles the credential flow — your tool just uses whatever auth mechanism is available.
Auth types
| Type | Use when | How credentials are obtained |
|---|---|---|
| cli | Service has a CLI (gcloud, az, aws) | Tool calls the CLI to get tokens at runtime |
| oauth | Service supports OAuth 2.0 | Chitty runs the PKCE flow, stores tokens in OS keyring |
| api_key | Service uses API keys | User enters key in setup, stored in OS keyring |
| none | No auth needed | N/A |
CLI auth example (Google Cloud)
# auth.py — shared helper for all tools in the package
import subprocess
def get_access_token():
result = subprocess.run(
["gcloud", "auth", "print-access-token"],
capture_output=True, text=True, timeout=10
)
if result.returncode != 0:
return None, f"gcloud auth failed: {result.stderr.strip()}"
return result.stdout.strip(), None
Setup steps for authentication
"setup_steps": [
{
"id": "check_cli",
"label": "Check for CLI tool",
"check_command": "gcloud --version",
"install_command_windows": "winget install Google.CloudSDK",
"install_command_mac": "brew install --cask google-cloud-sdk",
"help_text": "The CLI is required for authentication.",
"required": true
},
{
"id": "auth_login",
"label": "Sign in",
"check_command": "gcloud auth print-access-token",
"install_command": "gcloud auth login",
"help_text": "Opens your browser to sign in.",
"required": true
},
{
"id": "set_project",
"label": "Set default project",
"check_command": "gcloud config get-value project",
"prompt_user": true,
"prompt_label": "Project ID",
"prompt_placeholder": "e.g. my-project-12345",
"install_command_template": "gcloud config set project {value}",
"required": true
}
]
Configurable Resources
Resources let users control what the agent can access. For example, a Google Cloud package might let users specify which BigQuery datasets and GCS buckets are allowed.
"configurable_resources": [
{
"id": "datasets", // Unique resource type ID
"label": "BigQuery Datasets", // Shown in UI
"description": "Select which datasets the agent can query.",
"discover_command": "gcloud bq ls --format=json", // Auto-discover
"placeholder": "e.g. my_analytics", // Manual entry hint
"multi": true, // Allow multiple selections
"required_by_actions": ["query", "list_tables"] // Which tool actions need this
}
]
How it works
- User opens the package detail page in Settings > Marketplace
- The "Allowed Resources" section shows each configurable resource
- User can click Discover to auto-detect available resources, or type names manually
- On save, the allowed list is stored in SQLite
- When a tool runs, the config is passed via the CHITTY_PACKAGE_CONFIG env var
- The tool checks the allowed list before executing (see Config Enforcement)
Feature Flags
Feature flags let users enable or disable specific capabilities. Use these for actions that create billable resources, delete data, or have other side effects.
"feature_flags": [
{
"id": "allow_create_dataset",
"label": "Allow creating new datasets",
"description": "When enabled, the agent can create new BigQuery datasets.",
"default_enabled": false,
"gates_actions": ["create_dataset"],
"enable_warning": "Creating datasets may incur storage costs."
},
{
"id": "allow_delete",
"label": "Allow deleting data",
"description": "When enabled, the agent can delete datasets and objects.",
"default_enabled": false,
"gates_actions": ["delete_dataset", "delete_object"],
"enable_warning": "Deleted data cannot be recovered."
}
]
Feature flags appear as toggles on the package detail page. When a flag with enable_warning is toggled on, the user sees a confirmation dialog.
Agent Configuration
Help users get started quickly with default instructions and suggested prompts.
"agent_config": {
"default_instructions": "You have access to My Cloud tools. Always confirm before deleting data.",
"suggested_prompts": [
"List my datasets and show tables",
"Run a SQL query to find recent activity",
"Upload this file to my bucket"
],
"recommended_model": "gpt-4o",
"capabilities": [
"Run SQL queries on cloud datasets",
"Upload and download files",
"Create and manage data resources (when enabled)"
]
}
- default_instructions — Injected into the system prompt when package tools are active
- suggested_prompts — Shown on the package detail page as "Try These Prompts"
- capabilities — Bullet list shown on the detail page with checkmarks
Persistent Connections
Packages can declare persistent background connections — long-running scripts that receive real-time events from external services (WebSockets, listeners, event streams). The platform manages the lifecycle: spawning, health monitoring, auto-restart, and event routing to agents.
When to Use Connections
- Real-time events — Slack @mentions, Discord messages, Teams notifications
- WebSocket listeners — Live data streams, chat bots, monitoring
- Webhook receivers — GitHub events, payment notifications, CI/CD triggers
- Long-polling — Services without WebSocket support
If your integration only needs on-demand API calls (query data, send a message), use regular tool scripts instead.
Declaring Connections in package.json
"connections": [
{
"id": "socket_mode",
"label": "Real-time Events (Socket Mode)",
"description": "Persistent WebSocket for @mentions, DMs, slash commands",
"runtime": "python",
"script": "socket-mode/connect.py",
"requires_feature": "allow_socket_mode",
"requires_credentials": ["oauth_slack_access_token", "slack_app_token"],
"health_interval_secs": 30,
"restart_on_failure": true,
"max_restarts": 5,
"restart_delay_secs": 10,
"events": [
{
"id": "mention",
"label": "@Mention",
"description": "Triggered when someone @mentions the bot",
"agent_configurable": true
},
{
"id": "dm",
"label": "Direct Message",
"description": "Triggered when someone DMs the bot",
"agent_configurable": true
}
]
}
]
Connection Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier within this package |
| label | string | Human-readable name shown in UI |
| runtime | string | Script runtime: python, node, shell, powershell |
| script | string | Path to connection script relative to package dir |
| requires_feature | string | Feature flag ID that must be enabled to start |
| requires_credentials | string[] | Keyring credential keys that must exist |
| health_interval_secs | number | How often to expect heartbeats (default: 30) |
| restart_on_failure | boolean | Auto-restart on crash (default: true) |
| max_restarts | number | Max restart attempts before giving up (default: 5) |
| events | array | Event types this connection can emit |
NDJSON Protocol
Connection scripts communicate with the platform via stdin/stdout newline-delimited JSON. Each message is a single JSON object followed by a newline.
Script → Platform (stdout)
// Connection is ready
{"type": "ready", "message": "Connected to Slack workspace"}
// Periodic heartbeat (keeps connection alive)
{"type": "heartbeat"}
// External event received — platform routes to configured agent
{"type": "event", "event_id": "mention", "correlation_id": "uuid-123", "data": {
"user": "U12345",
"text": "Hey @bot what's the status?",
"channel": "C67890",
"thread_ts": "1234567890.123456"
}}
// Log message (shown in Activity panel)
{"type": "log", "level": "info", "message": "Reconnected"}
// Error (fatal=true stops the connection)
{"type": "error", "message": "Token expired", "fatal": false}
Platform → Script (stdin)
// Agent response to an event — post it back to the external service
{"type": "response", "correlation_id": "uuid-123", "data": {
"text": "The project is on track!",
"channel": "C67890",
"thread_ts": "1234567890.123456"
}}
// Graceful shutdown
{"type": "shutdown"}
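A minimal connection script implementing this protocol might look like the following sketch. The external-service wiring, heartbeat timer, and event forwarding are omitted; only the NDJSON plumbing is shown:

```python
import json
import sys

def send(msg: dict) -> str:
    """Write one NDJSON message to stdout (one JSON object per line)."""
    line = json.dumps(msg)
    print(line, flush=True)
    return line

def handle_platform_message(raw: str) -> str:
    """Dispatch one platform -> script message; returns the message type."""
    msg = json.loads(raw)
    if msg.get("type") == "response":
        # Post msg["data"] back to the external service here.
        pass
    return msg.get("type", "")

def main():
    send({"type": "ready", "message": "Connected"})
    # A real script would also send {"type": "heartbeat"} on a timer
    # and forward external events as {"type": "event", ...} messages.
    for raw in sys.stdin:
        if raw.strip() and handle_platform_message(raw) == "shutdown":
            break  # graceful shutdown requested by the platform
```

`main()` would be the script's entry point; the platform owns the process lifecycle.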
Event Routing
Users configure which agent handles each event type via the Marketplace UI. When an event arrives:
- Connection script sends an event message via stdout
- Platform looks up the configured agent for this event type
- Agent executes with the event data as the user message
- Agent's response is sent back via stdin as a response message
- Connection script posts the response to the external service
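On the script side, correlation IDs are what tie a response back to the event that produced it. A sketch of that bookkeeping, assuming an in-memory pending map:

```python
import json
import uuid

pending = {}  # correlation_id -> original event data, awaiting an agent response

def emit_event(event_id: str, data: dict) -> str:
    """Send an event message and remember its correlation id."""
    cid = str(uuid.uuid4())
    pending[cid] = data
    print(json.dumps({"type": "event", "event_id": event_id,
                      "correlation_id": cid, "data": data}), flush=True)
    return cid

def on_response(msg: dict):
    """Match an agent response to its originating event; returns (event, reply)."""
    return pending.pop(msg["correlation_id"], None), msg.get("data")
```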
Example: Slack Socket Mode
See the Slack Socket Mode implementation for a complete reference. Key patterns:
- Uses slack_sdk.socket_mode.aiohttp for the WebSocket connection
- Acknowledges events within 3 seconds (Slack requirement)
- Filters the bot's own messages to avoid loops
- Reads credentials from environment variables set by the platform
- Sends heartbeats every 25 seconds
- Handles graceful shutdown via the stdin shutdown message
Credential Handling
Credentials listed in requires_credentials are read from the OS keyring by the platform and passed to the script via environment variables:
# Platform sets these before spawning the script:
CHITTY_CRED_OAUTH_SLACK_ACCESS_TOKEN=xoxb-...
CHITTY_CRED_SLACK_APP_TOKEN=xapp-1-...
CHITTY_PACKAGE_ID=slack
CHITTY_CONNECTION_ID=socket_mode
Your script reads them from os.environ. Never hardcode credentials.
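Reading credentials defensively keeps the failure mode clear. A sketch — `get_credential` is an illustrative helper, not a platform API:

```python
import os

def get_credential(key: str) -> str:
    """Read a platform-injected credential from the environment."""
    env_name = f"CHITTY_CRED_{key.upper()}"
    value = os.environ.get(env_name, "")
    if not value:
        # Fail fast with an actionable message instead of a confusing API error later.
        raise RuntimeError(f"Missing credential {env_name}; re-run the package setup.")
    return value
```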
Config Enforcement
Your tool scripts are responsible for checking the user's configuration before executing actions. Chitty passes the config as the CHITTY_PACKAGE_CONFIG environment variable.
Config JSON structure
{
"package_id": "my-cloud-service",
"resources": {
"datasets": ["analytics_prod", "analytics_staging"],
"buckets": ["data-bucket-123"]
},
"features": {
"allow_create_dataset": false,
"allow_delete": true
}
}
Python enforcement helper
# config.py — include this in your package root
import os, json
def get_package_config():
raw = os.environ.get("CHITTY_PACKAGE_CONFIG", "")
return json.loads(raw) if raw else {"resources": {}, "features": {}}
def get_allowed_resources(resource_type):
"""Get allowed IDs for a resource type. Empty list = no restrictions."""
config = get_package_config()
return [r if isinstance(r, str) else r.get("id", "")
for r in config.get("resources", {}).get(resource_type, [])]
def is_feature_enabled(feature_id):
"""Check if a feature is enabled. True if not configured (default allow)."""
features = get_package_config().get("features", {})
return features.get(feature_id, True) if features else True
def check_resource_allowed(resource_type, resource_id):
"""Returns (allowed, error_message)."""
allowed = get_allowed_resources(resource_type)
if not allowed:
return True, None # No restrictions
if resource_id in allowed:
return True, None
return False, f"'{resource_id}' not in allowed {resource_type}: {', '.join(allowed)}"
def check_feature_allowed(feature_id, label=None):
"""Returns (allowed, error_message)."""
if is_feature_enabled(feature_id):
return True, None
return False, f"'{label or feature_id}' is disabled in package configuration."
Using it in your tool
import json
import sys
from config import check_resource_allowed, check_feature_allowed
def main():
params = json.loads(sys.stdin.read())
action = params.get("action")
# Check feature flags
if action == "create_dataset":
allowed, err = check_feature_allowed("allow_create_dataset", "Creating datasets")
if not allowed:
print(json.dumps({"success": False, "error": err}))
return
# Check allowed resources
if action in ["query", "list_tables"]:
dataset = params.get("dataset_id", "")
if dataset:
allowed, err = check_resource_allowed("datasets", dataset)
if not allowed:
print(json.dumps({"success": False, "error": err}))
return
# ... proceed with the action ...
Full Example — Google Cloud Package
Here's the real-world structure of the bundled Google Cloud package:
google-cloud/
├── package.json # Package manifest with BigQuery + Cloud Storage
├── auth.py # Shared: get_access_token(), get_project_id()
├── config.py # Shared: check_resource_allowed(), check_feature_allowed()
├── bigquery/
│ ├── manifest.json # Tool: gcloud_bigquery (7 actions)
│ └── tool.py # BigQuery REST API via urllib
└── cloud-storage/
├── manifest.json # Tool: gcloud_storage (7 actions)
└── tool.py # GCS REST API via urllib
Key patterns from this example:
- Shared auth helper — Both tools import auth.py from the parent directory using sys.path.insert(0, ...)
- Shared config helper — Both tools import config.py for resource/feature enforcement
- REST API over urllib — No external SDK needed, just the stdlib urllib.request
- Action-based design — One tool with an action parameter instead of many separate tools
- Graceful errors — Always return JSON {"success": false, "error": "..."}, never crash
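The shared-helper import looks roughly like this at the top of each tool.py. A sketch, assuming the layout shown above where tool.py sits one directory below the package root:

```python
import os
import sys

# tool.py lives in <package>/<tool>/, so the package root is one level up
# from the tool directory (two dirname calls from this file).
package_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, package_root)

# Now the shared helpers are importable:
# from auth import get_access_token
# from config import check_resource_allowed
```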
Submitting to the Marketplace
Ready to share your package? Submit it for review:
Test locally
Place your package directory in ~/.chitty-workspace/tools/marketplace/ and restart Chitty Workspace. Verify setup, auth, tools, and config all work.
Create a GitHub repo
Push your package to a public GitHub repository with a clear README.
Submit for review
Go to the Chitty Marketplace and click "Submit a Package". Enter your GitHub repo URL, a description, and your contact email. You can also submit via the API:
POST https://chitty.ai/api/v1/submissions
{
"package_name": "my-package",
"github_url": "https://github.com/you/my-package",
"submitter_name": "Your Name",
"submitter_email": "you@example.com",
"description": "What this package does",
"category": "productivity"
}
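For scripted submissions, the request can be built with the stdlib. A sketch against the endpoint above; field names are taken from the example payload:

```python
import json
import urllib.request

def build_submission_request(payload: dict) -> urllib.request.Request:
    """Build (but don't send) the marketplace submission POST."""
    return urllib.request.Request(
        "https://chitty.ai/api/v1/submissions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it: urllib.request.urlopen(build_submission_request({...}))
```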
Review & publish
The Chitty team reviews your package for security, quality, and compatibility. Once approved, it appears in the marketplace for all users.
Review criteria
- No hardcoded credentials or secrets
- Proper config enforcement for resource-scoped actions
- Feature flags for destructive/billable operations
- Clear error messages (never raw stack traces)
- Minimal dependencies (prefer stdlib where possible)
- Cross-platform support (or clear platform requirements)