
MCP Server

Any script built with :usage can serve as a Model Context Protocol (MCP) tool server. One command — no Python, no Node.js, no wrapper scripts.

./myscript mcp

Like completion and docgen, the mcp command is an Additional Command — always available without appearing in your usage array. Requires the native builtin (.so).

How It Works

The MCP server speaks JSON-RPC 2.0 over stdio. When an AI agent connects, it:

  1. Discovers tools — each leaf subcommand becomes a tool with a JSON Schema derived from its own :args declarations. Nested commands are fully traversed.
  2. Invokes tools — the agent calls a tool; argsh maps the JSON arguments back to CLI flags and re-invokes your script as a subprocess.
  3. Returns results — stdout becomes the tool result; stderr is emitted as log notifications after execution.
  4. Exposes resources — help text and version info are available as MCP resources.
  5. Offers prompts — reusable prompt templates for common invocations.

Your script is both the CLI and the tool server. The :usage/:args declarations are the single source of truth.

Client Configuration

Claude Code

Create .mcp.json at your project root:

{
  "mcpServers": {
    "myapp": {
      "type": "stdio",
      "command": "./myapp",
      "args": ["mcp"]
    }
  }
}

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "myapp": {
      "command": "/absolute/path/to/myapp",
      "args": ["mcp"]
    }
  }
}

Cursor

Add to .cursor/mcp.json:

{
  "mcpServers": {
    "myapp": {
      "command": "./myapp",
      "args": ["mcp"]
    }
  }
}

For HTTP transport, use an MCP stdio-to-HTTP bridge.

Example

Given this script:

#!/usr/bin/env bash
source argsh

main() {
  local -a usage=(
    'serve@readonly' "Start the server"
    'build' "Build the project"
    'cluster' "Cluster management"
  )
  :usage "My application" "${@}"
  "${usage[@]}"
}

serve() {
  local port
  local -a args=(
    'port|p:int' "Port number"
  )
  :args "Start the server" "${@}"
  echo "serving on :${port:-8080}"
}

build() {
  local output
  local -a args=(
    'output|o' "Output directory"
  )
  :args "Build the project" "${@}"
  echo "building to ${output:-dist}"
}

cluster() {
  local -a usage=(
    'up' "Start cluster"
    'down' "Stop cluster"
  )
  :usage "Cluster management" "${@}"
  "${usage[@]}"
}

cluster::up() {
  local nodes
  local -a args=(
    'nodes|n:int' "Number of nodes"
  )
  :args "Start cluster" "${@}"
  echo "starting ${nodes:-3} nodes"
}

cluster::down() {
  local force
  local -a args=(
    'force|f:+' "Force shutdown"
  )
  :args "Stop cluster" "${@}"
  echo "stopping cluster"
}

main "${@}"

What the agent sees

When connected, tools/list returns only leaf commands — each with its own flags:

{
  "tools": [
    {
      "name": "myapp_serve",
      "title": "Start the server",
      "description": "Start the server",
      "annotations": {"readOnlyHint": true},
      "inputSchema": {
        "type": "object",
        "properties": {
          "port": {"type": "integer", "description": "Port number"}
        },
        "required": [],
        "additionalProperties": false
      }
    },
    {
      "name": "myapp_build",
      "title": "Build the project",
      "description": "Build the project",
      "inputSchema": {
        "type": "object",
        "properties": {
          "output": {"type": "string", "description": "Output directory"}
        },
        "required": [],
        "additionalProperties": false
      }
    },
    {
      "name": "myapp_cluster_up",
      "title": "Start cluster",
      "description": "Start cluster",
      "inputSchema": {
        "type": "object",
        "properties": {
          "nodes": {"type": "integer", "description": "Number of nodes"}
        },
        "required": [],
        "additionalProperties": false
      }
    },
    {
      "name": "myapp_cluster_down",
      "title": "Stop cluster",
      "description": "Stop cluster",
      "inputSchema": {
        "type": "object",
        "properties": {
          "force": {"type": "boolean", "description": "Force shutdown"}
        },
        "required": [],
        "additionalProperties": false
      }
    }
  ]
}

Note how:

  • serve has only port and build has only output: each tool gets its own flags
  • cluster is a dispatcher and is not exposed — only its leaves cluster_up and cluster_down appear
  • serve has "annotations": {"readOnlyHint": true} from the @readonly hint

Tool invocation

The agent sends:

{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"myapp_cluster_up","arguments":{"nodes":5}}}

argsh reconstructs the CLI call: ./myapp cluster up --nodes 5, captures the output, and returns:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [{"type": "text", "text": "starting 5 nodes"}],
    "isError": false
  }
}

Nested Subcommands

The MCP server recursively discovers your entire command tree. It uses the same function resolution logic as :usage dispatch:

  • Auto-resolution: {caller}::{name} → {last_segment}::{name} → argsh::{name} → {name}
  • Explicit mapping: 'down:-other::down' resolves to exactly other::down

Only leaf commands (functions with :args but no :usage) become tools. Dispatchers (functions with :usage) are traversed but not exposed.
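The auto-resolution chain can be sketched in plain bash. The `resolve` helper below is hypothetical, not argsh's internals; it just walks the candidate list in the order shown above and picks the first name that exists as a function:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the auto-resolution order. Given a caller prefix
# and a subcommand name, try each candidate and echo the first that is
# defined as a shell function.
resolve() {
  local caller="${1}" name="${2}" candidate
  local last="${caller##*::}"
  for candidate in "${caller}::${name}" "${last}::${name}" "argsh::${name}" "${name}"; do
    if declare -F "${candidate}" >/dev/null; then
      echo "${candidate}"
      return 0
    fi
  done
  return 1
}

cluster::up() { :; }   # namespaced leaf, as in the example script

resolve "cluster" "up"                      # prints: cluster::up
resolve "cluster" "down" || echo "no match" # prints: no match
```

With the explicit mapping form ('down:-other::down'), this search is skipped entirely and the named function is used as-is.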

Tool Annotations

Add hints to your usage entries with @ suffixes:

local -a usage=(
'status@readonly' "Get status (safe to auto-run)"
'deploy@destructive' "Deploy to production"
'sync@idempotent' "Sync data (safe to retry)"
)
| Annotation | MCP hint | Effect |
| --- | --- | --- |
| @readonly | readOnlyHint: true | Client may auto-run without confirmation |
| @destructive | destructiveHint: true | Client shows confirmation dialog |
| @idempotent | idempotentHint: true | Client knows retries are safe |
| @openworld | openWorldHint: true | Tool interacts with external systems |
| @json | Adds outputSchema | Tool result includes structuredContent |

Annotations can be combined: 'status@readonly@json'.
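Splitting such an entry is straightforward string handling; this is an assumed sketch of the parsing, not argsh's actual parser:

```shell
#!/usr/bin/env bash
# Sketch: split a usage entry like 'status@readonly@json' into the
# command name and its annotation hints (assumed parsing logic).
entry='status@readonly@json'
IFS='@' read -r -a parts <<< "${entry}"
name="${parts[0]}"        # command name: status
hints=("${parts[@]:1}")   # hint list: readonly json
echo "name=${name} hints=${hints[*]}"   # prints: name=status hints=readonly json
```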

Structured Output

Add @json to subcommands that return JSON:

local -a usage=(
'status@json' "Get service status as JSON"
)

When @json is set, the tool definition includes an outputSchema and the tool result includes structuredContent alongside the text content — letting LLMs parse the output reliably.
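As an illustration, a tools/call result for a @json subcommand could look like the following. The values are invented, and the exact outputSchema argsh derives is not shown here; the point is that the same payload appears both as text and as structuredContent:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [{"type": "text", "text": "{\"status\":\"ok\"}"}],
    "structuredContent": {"status": "ok"},
    "isError": false
  }
}
```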

Resources

The server exposes two resources via resources/list and resources/read:

| URI | Content | MIME type |
| --- | --- | --- |
| script:///help | Script's --help output | text/plain |
| script:///version | argsh version | text/plain |

Resources give the agent context about your script before it needs to call any tools.
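For example, a client reads the help resource like this (request shape per the MCP spec; the returned text depends on your script):

```json
{"jsonrpc":"2.0","id":5,"method":"resources/read","params":{"uri":"script:///help"}}
```

The response wraps the help output in a contents array:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "contents": [
      {"uri": "script:///help", "mimeType": "text/plain", "text": "My application"}
    ]
  }
}
```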

Prompts

Two prompt templates are available via prompts/list and prompts/get:

| Prompt | Description | Arguments |
| --- | --- | --- |
| run_subcommand | Run a specific subcommand | subcommand (required), args (optional) |
| get_help | Show help for a subcommand | subcommand (optional) |

In clients that support it, these appear as slash commands.
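A prompts/get request might look like this (argument values are illustrative); the result carries a messages array with the rendered prompt text:

```json
{"jsonrpc":"2.0","id":6,"method":"prompts/get","params":{"name":"run_subcommand","arguments":{"subcommand":"cluster up","args":"--nodes 5"}}}
```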

Logging

After tool execution, stderr lines are emitted as MCP log notifications (notifications/message). This provides progress feedback for long-running commands without polluting the tool result.
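Each stderr line arrives as a separate notification; this is the standard notifications/message shape, with an invented message:

```json
{"jsonrpc":"2.0","method":"notifications/message","params":{"level":"info","data":"starting 5 nodes: provisioning node 1"}}
```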

Protocol Details

The server implements MCP 2025-11-25 with tools, resources, prompts, and logging capabilities:

MethodTypeResponse
initializeRequestProtocol version, capabilities, server info
notifications/initializedNotificationNo response (client acknowledgment)
pingRequest{}
tools/listRequestTool definitions with per-command inputSchema
tools/callRequestTool result with content, optional structuredContent
resources/listRequestAvailable resources
resources/readRequestResource content by URI
prompts/listRequestAvailable prompt templates
prompts/getRequestRendered prompt with arguments
logging/setLevelRequest{} (acknowledged)

Unknown methods return error code -32601 (Method not found).
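A typical handshake starts with an initialize request; the response below is a plausible shape (serverInfo values are illustrative, and the capabilities object mirrors the four capabilities listed above):

```json
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-11-25","capabilities":{},"clientInfo":{"name":"my-client","version":"1.0"}}}
```

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-11-25",
    "capabilities": {"tools": {}, "resources": {}, "prompts": {}, "logging": {}},
    "serverInfo": {"name": "myapp", "version": "0.1.0"}
  }
}
```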

Type Mapping

argsh types map to JSON Schema types in inputSchema:

| argsh type | JSON Schema | Notes |
| --- | --- | --- |
| :+ (boolean flag) | "boolean" | true → --flag; false/absent → omitted |
| :int | "integer" | |
| :float | "number" | |
| default (string) | "string" | |

Required flags (:! modifier) populate the required array.
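The table above can be sketched as a small case statement; schema_type is a hypothetical helper, not part of argsh:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the type-mapping table: map an argsh
# type suffix to the JSON Schema "type" used in inputSchema.
schema_type() {
  case "${1}" in
    ':+')     echo 'boolean' ;;  # boolean flag
    ':int')   echo 'integer' ;;
    ':float') echo 'number' ;;
    *)        echo 'string' ;;   # default: plain string option
  esac
}

schema_type ':int'   # prints: integer
schema_type ''       # prints: string
```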

Compared to docgen llm

| | docgen llm | mcp |
| --- | --- | --- |
| Output | Static JSON file | Live stdio server |
| Use case | Embed schemas in API calls | Agent connects directly |
| Execution | Agent runs CLI manually | argsh handles invocation |
| Protocol | Provider-specific (Claude/OpenAI) | Standard MCP (any client) |
| Nested commands | Yes (leaf tools) | Yes (leaf tools) |
| Per-tool flags | Yes | Yes |
| Resources/Prompts | No | Yes |

Use docgen llm when you need to embed tool definitions in your own agent code. Use mcp when the client natively supports MCP.
