CLIs for LLMs

argsh can generate structured output that Large Language Models (LLMs) can consume to understand and invoke your CLI tool. This enables AI agents to discover commands, flags, and types without manual prompt engineering.

The Problem

LLMs need structured, unambiguous descriptions of CLI tools to use them effectively. Manual documentation drifts from the actual implementation. argsh solves this by generating machine-readable descriptions directly from your :usage and :args declarations.

LLM Tool Schemas

Generate ready-to-use tool schemas directly — no Python glue, no manual conversion:

# Anthropic Claude
./myapp docgen llm claude

# OpenAI / Gemini / Kimi
./myapp docgen llm openai

Both commands emit a JSON array of tool definitions, one tool per subcommand. Flags are mapped to JSON Schema types (string, integer, number, boolean), and flags marked required with the :! modifier populate the required array.
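
To sanity-check the output, you can pipe it through jq (assumed to be installed) and list each tool's name together with its required flags:

./myapp docgen llm claude | jq '.[] | {name, required: .input_schema.required}'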

Supported Providers

| Provider         | Command      | Schema Key                               |
| ---------------- | ------------ | ---------------------------------------- |
| Anthropic Claude | llm claude   | input_schema                             |
| OpenAI           | llm openai   | parameters (wrapped in "type": "function") |
| Google Gemini    | llm gemini   | Same as OpenAI (OpenAI-compatible API)   |
| Moonshot Kimi    | llm kimi     | Same as OpenAI (OpenAI-compatible API)   |
Tip: Gemini and Kimi use the OpenAI function calling format. If your provider follows the OpenAI convention, use llm openai.

Anthropic Tool Use

./myapp docgen llm claude
[
  {
    "name": "myapp_serve",
    "description": "Start the server",
    "input_schema": {
      "type": "object",
      "properties": {
        "verbose": {
          "type": "boolean",
          "description": "Enable verbose output"
        },
        "config": {
          "type": "string",
          "description": "Config file path"
        }
      },
      "required": []
    }
  }
]
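
This array can be passed straight to Anthropic's Messages API. A minimal sketch using curl and jq; the model name is illustrative and ANTHROPIC_API_KEY is assumed to be set:

TOOLS=$(./myapp docgen llm claude)

curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d "$(jq -n --argjson tools "$TOOLS" '{
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    tools: $tools,
    messages: [{role: "user", content: "Start the server with verbose output"}]
  }')"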

OpenAI Function Calling

./myapp docgen llm openai
[
  {
    "type": "function",
    "function": {
      "name": "myapp_serve",
      "description": "Start the server",
      "parameters": {
        "type": "object",
        "properties": {
          "verbose": {
            "type": "boolean",
            "description": "Enable verbose output"
          },
          "config": {
            "type": "string",
            "description": "Config file path"
          }
        },
        "required": []
      }
    }
  }
]
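
The same pattern works against OpenAI's Chat Completions API. Again a sketch, with the model name illustrative and OPENAI_API_KEY assumed to be set:

TOOLS=$(./myapp docgen llm openai)

curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --argjson tools "$TOOLS" '{
    model: "gpt-4o",
    tools: $tools,
    messages: [{role: "user", content: "Start the server with verbose output"}]
  }')"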

YAML for Custom Integrations

For custom integrations or providers not listed above, the raw YAML output gives you full control:

./myapp docgen yaml
name: "myapp"
description: "My application server"
synopsis: "myapp [command] [options]"
commands:
  - name: "serve"
    description: "Start the server"
  - name: "build"
    description: "Build the project"
options:
  - name: "verbose"
    short: "v"
    description: "Enable verbose output"
    type: boolean
  - name: "config"
    short: "c"
    description: "Config file path"
    type: string
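
From here any transformation is possible. As one sketch, mikefarah's yq (v4) can reshape the command list into a custom JSON tool index; the output field names below are arbitrary choices, not part of argsh:

./myapp docgen yaml | yq -o=json '[.commands[] | {"tool": .name, "about": .description}]'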

Markdown for Context Windows

For simpler use cases — like pasting CLI documentation into a system prompt — use the Markdown output:

./myapp docgen md

This gives LLMs a human-readable reference that works well as context:

You have access to the myapp CLI tool.

$(./myapp docgen md)

Use this tool by running shell commands.
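
In a script, the same prompt can be assembled once with a heredoc (plain bash, nothing argsh-specific):

SYSTEM_PROMPT=$(cat <<EOF
You have access to the myapp CLI tool.

$(./myapp docgen md)

Use this tool by running shell commands.
EOF
)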

Automation

Regenerate the tool definitions on every build so they never go stale:

steps:
  - name: Generate LLM tool definitions
    run: |
      ./myapp docgen llm claude > tools/myapp_claude.json
      ./myapp docgen llm openai > tools/myapp_openai.json

Since the documentation is generated from code, it stays in sync automatically. Every release includes accurate tool definitions without manual updates.
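
A drift check is a natural follow-up: regenerate in CI and diff against the committed files. A sketch, assuming the paths from the step above are checked in:

- name: Verify tool definitions are in sync
  run: |
    ./myapp docgen llm claude | diff - tools/myapp_claude.json
    ./myapp docgen llm openai | diff - tools/myapp_openai.json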
