MCP Server
Any script built with :usage can serve as a Model Context Protocol (MCP) tool server. One command — no Python, no Node.js, no wrapper scripts.
Like completion and docgen, the mcp command is an Additional Command — always available without appearing in your usage array. Requires the native builtin (.so).
How It Works
The MCP server speaks JSON-RPC 2.0 over stdio. When an AI agent connects, it:
- Discovers tools — each leaf subcommand becomes a tool with a JSON Schema derived from its own `:args` declarations. Nested commands are fully traversed.
- Invokes tools — the agent calls a tool, argsh maps the JSON arguments back to CLI flags and re-invokes your script as a subprocess
- Returns results — stdout becomes the tool result, stderr is emitted as log notifications after execution
- Exposes resources — help text and version info are available as MCP resources
- Offers prompts — reusable prompt templates for common invocations
Your script is both the CLI and the tool server. The :usage/:args declarations are the single source of truth.
Client Configuration
Claude Code
Create .mcp.json at your project root:
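For example, assuming your script is `./myapp` and the server is started via the `mcp` subcommand (adjust the command, path, and server name for your project):

```json
{
  "mcpServers": {
    "myapp": {
      "command": "./myapp",
      "args": ["mcp"]
    }
  }
}
```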
Claude Desktop
Add to claude_desktop_config.json:
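A sketch of the entry, using a placeholder absolute path (an absolute path is the safe choice here, since the desktop app's working directory is not your project root):

```json
{
  "mcpServers": {
    "myapp": {
      "command": "/absolute/path/to/myapp",
      "args": ["mcp"]
    }
  }
}
```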
Cursor
Add to .cursor/mcp.json:
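The same shape works here (illustrative; the server name and script path are yours to choose):

```json
{
  "mcpServers": {
    "myapp": {
      "command": "./myapp",
      "args": ["mcp"]
    }
  }
}
```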
For HTTP transport, use an MCP stdio-to-HTTP bridge.
Example
Given this script:
```bash
#!/usr/bin/env bash
source argsh

main() {
  local -a usage=(
    'serve@readonly' "Start the server"
    'build' "Build the project"
    'cluster' "Cluster management"
  )
  :usage "My application" "${@}"
  "${usage[@]}"
}

serve() {
  local port
  local -a args=(
    'port|p:int' "Port number"
  )
  :args "Start the server" "${@}"
  echo "serving on :${port:-8080}"
}

build() {
  local output
  local -a args=(
    'output|o' "Output directory"
  )
  :args "Build the project" "${@}"
  echo "building to ${output:-dist}"
}

cluster() {
  local -a usage=(
    'up' "Start cluster"
    'down' "Stop cluster"
  )
  :usage "Cluster management" "${@}"
  "${usage[@]}"
}

cluster::up() {
  local nodes
  local -a args=(
    'nodes|n:int' "Number of nodes"
  )
  :args "Start cluster" "${@}"
  echo "starting ${nodes:-3} nodes"
}

cluster::down() {
  local force
  local -a args=(
    'force|f:+' "Force shutdown"
  )
  :args "Stop cluster" "${@}"
  echo "stopping cluster"
}

main "${@}"
```
What the agent sees
When connected, tools/list returns only leaf commands — each with its own flags:
```json
{
  "tools": [
    {
      "name": "myapp_serve",
      "title": "Start the server",
      "description": "Start the server",
      "annotations": {"readOnlyHint": true},
      "inputSchema": {
        "type": "object",
        "properties": {
          "port": {"type": "integer", "description": "Port number"}
        },
        "required": [],
        "additionalProperties": false
      }
    },
    {
      "name": "myapp_build",
      "title": "Build the project",
      "description": "Build the project",
      "inputSchema": {
        "type": "object",
        "properties": {
          "output": {"type": "string", "description": "Output directory"}
        },
        "required": [],
        "additionalProperties": false
      }
    },
    {
      "name": "myapp_cluster_up",
      "title": "Start cluster",
      "description": "Start cluster",
      "inputSchema": {
        "type": "object",
        "properties": {
          "nodes": {"type": "integer", "description": "Number of nodes"}
        },
        "required": [],
        "additionalProperties": false
      }
    },
    {
      "name": "myapp_cluster_down",
      "title": "Stop cluster",
      "description": "Stop cluster",
      "inputSchema": {
        "type": "object",
        "properties": {
          "force": {"type": "boolean", "description": "Force shutdown"}
        },
        "required": [],
        "additionalProperties": false
      }
    }
  ]
}
```
Note how:
- `serve` has only `port`, `build` has only `output` — each tool gets its own flags
- `cluster` is a dispatcher and is not exposed — only its leaves `cluster_up` and `cluster_down` appear
- `serve` has `"annotations": {"readOnlyHint": true}` from the `@readonly` hint
Tool invocation
The agent sends:
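A `tools/call` request for the cluster example above might look like this (a sketch following the MCP spec; the `id` value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "myapp_cluster_up",
    "arguments": {"nodes": 5}
  }
}
```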
argsh reconstructs the CLI call `./myapp cluster up --nodes 5`, captures the output, and returns:
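A sketch of the corresponding result (shape per the MCP spec):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [{"type": "text", "text": "starting 5 nodes"}],
    "isError": false
  }
}
```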
Nested Subcommands
The MCP server recursively discovers your entire command tree. It uses the same function resolution logic as :usage dispatch:
- Auto-resolution: `{caller}::{name}` → `{last_segment}::{name}` → `argsh::{name}` → `{name}`
- Explicit mapping: `'down:-other::down'` resolves to exactly `other::down`
Only leaf commands (functions with :args but no :usage) become tools. Dispatchers (functions with :usage) are traversed but not exposed.
Tool Annotations
Add hints to your usage entries with @ suffixes:
| Annotation | MCP hint | Effect |
|---|---|---|
| `@readonly` | `readOnlyHint: true` | Client may auto-run without confirmation |
| `@destructive` | `destructiveHint: true` | Client shows a confirmation dialog |
| `@idempotent` | `idempotentHint: true` | Client knows retries are safe |
| `@openworld` | `openWorldHint: true` | Tool interacts with external systems |
| `@json` | Adds `outputSchema` | Tool result includes `structuredContent` |
Annotations can be combined: 'status@readonly@json'.
Structured Output
Add @json to subcommands that return JSON:
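For instance, an illustrative sketch extending the example script above (the `status` command is hypothetical):

```bash
# In the usage array, the @json hint marks the command's output as structured:
#   'status@readonly@json' "Show status"

# The handler simply prints a JSON document on stdout
status() {
  local -a args=()
  :args "Show status" "${@}"
  echo '{"healthy": true, "version": "1.2.3"}'
}
```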
When @json is set, the tool definition includes an outputSchema and the tool result includes structuredContent alongside the text content — letting LLMs parse the output reliably.
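For a `@json` tool, the result carries both forms (a sketch of the MCP `CallToolResult` shape, using the hypothetical status output):

```json
{
  "content": [{"type": "text", "text": "{\"healthy\": true}"}],
  "structuredContent": {"healthy": true},
  "isError": false
}
```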
Resources
The server exposes two resources via resources/list and resources/read:
| URI | Content | MIME type |
|---|---|---|
| `script:///help` | Script's `--help` output | text/plain |
| `script:///version` | argsh version | text/plain |
Resources give the agent context about your script before it needs to call any tools.
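For example, an agent fetches the help text with a `resources/read` request (sketch; the `id` is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "resources/read",
  "params": {"uri": "script:///help"}
}
```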
Prompts
Two prompt templates are available via prompts/list and prompts/get:
| Prompt | Description | Arguments |
|---|---|---|
| `run_subcommand` | Run a specific subcommand | `subcommand` (required), `args` (optional) |
| `get_help` | Show help for a subcommand | `subcommand` (optional) |
In clients that support it, these appear as slash commands.
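A `prompts/get` request might look like this (sketch; the argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "prompts/get",
  "params": {
    "name": "run_subcommand",
    "arguments": {"subcommand": "build", "args": "--output dist"}
  }
}
```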
Logging
After tool execution, stderr lines are emitted as MCP log notifications (notifications/message). This provides progress feedback for long-running commands without polluting the tool result.
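The shape of such a notification (a sketch; field names per the MCP logging spec, message text taken from the example script):

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/message",
  "params": {"level": "info", "data": "building to dist"}
}
```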
Protocol Details
The server implements MCP 2025-11-25 with tools, resources, prompts, and logging capabilities:
| Method | Type | Response |
|---|---|---|
| `initialize` | Request | Protocol version, capabilities, server info |
| `notifications/initialized` | Notification | No response (client acknowledgment) |
| `ping` | Request | `{}` |
| `tools/list` | Request | Tool definitions with per-command `inputSchema` |
| `tools/call` | Request | Tool result with `content`, optional `structuredContent` |
| `resources/list` | Request | Available resources |
| `resources/read` | Request | Resource content by URI |
| `prompts/list` | Request | Available prompt templates |
| `prompts/get` | Request | Rendered prompt with arguments |
| `logging/setLevel` | Request | `{}` (acknowledged) |
Unknown methods return error code -32601 (Method not found).
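Per JSON-RPC 2.0, such an error response looks like:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "error": {"code": -32601, "message": "Method not found"}
}
```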
Type Mapping
argsh types map to JSON Schema types in inputSchema:
| argsh type | JSON Schema | Notes |
|---|---|---|
| `:+` (boolean flag) | `"boolean"` | `true` → `--flag`, `false`/absent → omitted |
| `:~int` | `"integer"` | |
| `:~float` | `"number"` | |
| default (string) | `"string"` | |
Required flags (:! modifier) populate the required array.
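The table above can be sketched as a small lookup (illustrative only, not argsh's actual implementation):

```python
def schema_type(argsh_type: str) -> str:
    """Map an argsh flag type suffix to its JSON Schema type (sketch)."""
    mapping = {
        ":+": "boolean",      # boolean flag: true emits --flag, false is omitted
        ":~int": "integer",
        ":~float": "number",
    }
    return mapping.get(argsh_type, "string")  # plain flags default to string
```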
Compared to docgen llm
| | `docgen llm` | `mcp` |
|---|---|---|
| Output | Static JSON file | Live stdio server |
| Use case | Embed schemas in API calls | Agent connects directly |
| Execution | Agent runs CLI manually | argsh handles invocation |
| Protocol | Provider-specific (Claude/OpenAI) | Standard MCP (any client) |
| Nested commands | Yes (leaf tools) | Yes (leaf tools) |
| Per-tool flags | Yes | Yes |
| Resources/Prompts | No | Yes |
Use docgen llm when you need to embed tool definitions in your own agent code. Use mcp when the client natively supports MCP.