MCP Server

Any script built with :usage can serve as a Model Context Protocol (MCP) tool server. One command — no Python, no Node.js, no wrapper scripts.

./myscript mcp

Like completion and docgen, the mcp command is an Additional Command — always available without appearing in your usage array. Requires the native builtin (.so).

How It Works

The MCP server speaks JSON-RPC 2.0 over stdio. When an AI agent connects, it:

  1. Discovers tools — each subcommand becomes a tool with a JSON Schema derived from your :args declarations
  2. Invokes tools — the agent calls a tool, argsh maps the JSON arguments back to CLI flags and re-invokes your script as a subprocess
  3. Returns results — stdout becomes the tool result, stderr is included for context

Your script is both the CLI and the tool server. The :usage/:args declarations are the single source of truth.
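
Because the transport is plain stdio, you can exercise the server without an MCP client. A minimal smoke test, assuming the server reads newline-delimited JSON-RPC messages on stdin (the exact response payloads are shown in the example further down):

printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-11-25","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}' \
  | ./myscript mcp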

Client Configuration

Claude Code

Create .mcp.json at your project root:

{
  "mcpServers": {
    "myapp": {
      "type": "stdio",
      "command": "./myapp",
      "args": ["mcp"]
    }
  }
}

Claude Desktop

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "myapp": {
      "command": "/absolute/path/to/myapp",
      "args": ["mcp"]
    }
  }
}

Cursor

Add to .cursor/mcp.json:

{
  "mcpServers": {
    "myapp": {
      "command": "./myapp",
      "args": ["mcp"]
    }
  }
}

Example

Given this script:

#!/usr/bin/env bash
source argsh

main() {
  local config
  local -a verbose args=(
    'verbose|v:+' "Enable verbose output"
    'config|c'    "Config file path"
  )
  local -a usage=(
    'serve' "Start the server"
    'build' "Build the project"
  )
  :usage "My application" "${@}"
  "${usage[@]}"
}

serve() {
  :args "Start the server" "${@}"
  echo "serving on :8080"
  [[ -z "${config:-}" ]] || echo "config=${config}"
}

build() {
  :args "Build the project" "${@}"
  echo "building..."
}

main "${@}"

What the agent sees

When connected, tools/list returns:

{
  "tools": [
    {
      "name": "myapp_serve",
      "description": "Start the server",
      "inputSchema": {
        "type": "object",
        "properties": {
          "verbose": {
            "type": "boolean",
            "description": "Enable verbose output"
          },
          "config": {
            "type": "string",
            "description": "Config file path"
          }
        },
        "required": []
      }
    },
    {
      "name": "myapp_build",
      "description": "Build the project",
      "inputSchema": {
        "type": "object",
        "properties": {},
        "required": []
      }
    }
  ]
}

Tool invocation

The agent sends:

{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"myapp_serve","arguments":{"config":"/etc/app.yaml","verbose":true}}}

argsh reconstructs the CLI call: ./myapp serve --config /etc/app.yaml --verbose, captures the output, and returns:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [{"type": "text", "text": "serving on :8080\nconfig=/etc/app.yaml"}],
    "isError": false
  }
}

Protocol Details

The server implements MCP 2025-11-25 with the tools capability:

Method                      Type           Response
initialize                  Request        Protocol version, capabilities, server info
notifications/initialized   Notification   No response (client acknowledgment)
ping                        Request        {}
tools/list                  Request        Tool definitions with inputSchema
tools/call                  Request        Tool result with content and isError

Unknown methods return error code -32601 (Method not found).
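
For example, a call to an unsupported method (the method name below is purely illustrative) receives a standard JSON-RPC error:

{"jsonrpc":"2.0","id":7,"error":{"code":-32601,"message":"Method not found"}}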

Type Mapping

argsh types map to JSON Schema types in inputSchema:

argsh type           JSON Schema    Notes
:+ (boolean flag)    "boolean"      true → --flag; false/absent → flag omitted
:~int                "integer"
:~float              "number"
default (string)     "string"

Required flags (:! modifier) populate the required array.
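
As a sketch, a hypothetical port flag declared with the :~int type would surface in the tool's inputSchema as an integer property, and if it also carried the :! modifier it would be listed under required (the flag name and description here are made up):

"port": {
  "type": "integer",
  "description": "Port to listen on"
}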

Compared to docgen llm

                docgen llm                           mcp
Output          Static JSON file                     Live stdio server
Use case        Embed schemas in API calls           Agent connects directly
Execution       Agent runs CLI manually              argsh handles invocation
Protocol        Provider-specific (Claude/OpenAI)    Standard MCP (any client)

Use docgen llm when you need to embed tool definitions in your own agent code. Use mcp when the client natively supports MCP.
