# CLIs for LLMs
argsh can generate structured output that Large Language Models (LLMs) can consume to understand and invoke your CLI tool. This enables AI agents to discover commands, flags, and types without manual prompt engineering.
## The Problem
LLMs need structured, unambiguous descriptions of CLI tools to use them effectively. Manual documentation drifts from the actual implementation. argsh solves this by generating machine-readable descriptions directly from your `:usage` and `:args` declarations.
## LLM Tool Schemas
Generate ready-to-use tool schemas directly — no Python glue, no manual conversion:
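For example, assuming your argsh-based CLI is installed as `myapp` (a hypothetical name used throughout these examples), a schema for each provider can be emitted directly:

```shell
# `myapp` is a hypothetical argsh-based CLI; substitute your own binary.
myapp llm claude > tools-claude.json   # Anthropic tool-use schema
myapp llm openai > tools-openai.json   # OpenAI function-calling schema
```

The resulting JSON can typically be passed as-is in the `tools` parameter of the provider's chat API.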
Each command generates a JSON array of tool definitions, one tool per subcommand. Flags are mapped to JSON Schema types (`string`, `integer`, `number`, `boolean`), and required flags (the `:!` modifier) populate the `required` array.
## Supported Providers
| Provider | Command | Schema Key |
|---|---|---|
| Anthropic Claude | `llm claude` | `input_schema` |
| OpenAI | `llm openai` | `parameters` (wrapped in `"type": "function"`) |
| Google Gemini | `llm gemini` | Same as OpenAI (OpenAI-compatible API) |
| Moonshot Kimi | `llm kimi` | Same as OpenAI (OpenAI-compatible API) |
Gemini and Kimi use the OpenAI function-calling format. If your provider follows the OpenAI convention, use `llm openai`.
### Anthropic Tool Use
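For Anthropic, the JSON Schema for each tool sits under `input_schema`, with no `function` wrapper. A sketch of what `llm claude` might emit for the same hypothetical `serve` command shown in the OpenAI example below:

```json
[
  {
    "name": "myapp_serve",
    "description": "Start the server",
    "input_schema": {
      "type": "object",
      "properties": {
        "verbose": {
          "type": "boolean",
          "description": "Enable verbose output"
        },
        "config": {
          "type": "string",
          "description": "Config file path"
        }
      },
      "required": []
    }
  }
]
```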
### OpenAI Function Calling
```json
[
  {
    "type": "function",
    "function": {
      "name": "myapp_serve",
      "description": "Start the server",
      "parameters": {
        "type": "object",
        "properties": {
          "verbose": {
            "type": "boolean",
            "description": "Enable verbose output"
          },
          "config": {
            "type": "string",
            "description": "Config file path"
          }
        },
        "required": []
      }
    }
  }
]
```
## YAML for Custom Integrations
For custom integrations or providers not listed above, the raw YAML output gives you full control:
```yaml
name: "myapp"
description: "My application server"
synopsis: "myapp [command] [options]"
commands:
  - name: "serve"
    description: "Start the server"
  - name: "build"
    description: "Build the project"
options:
  - name: "verbose"
    short: "v"
    description: "Enable verbose output"
    type: "boolean"
  - name: "config"
    short: "c"
    description: "Config file path"
    type: "string"
```
## Markdown for Context Windows
For simpler use cases, such as pasting CLI documentation into a system prompt, the Markdown output gives LLMs a human-readable reference that works well as context:
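The exact layout depends on your declarations; as an illustrative sketch, a Markdown reference for a hypothetical `myapp serve` command might look like:

```markdown
## myapp serve

Start the server.

Options:

- `--verbose`, `-v` (boolean): Enable verbose output
- `--config`, `-c` (string): Config file path
```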
## Automation
Generate LLM-ready tool definitions as part of your CI pipeline:
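As a sketch, a GitHub Actions step could regenerate and publish the definitions on every release (`myapp` is again a hypothetical binary name):

```yaml
# Illustrative GitHub Actions steps; adapt paths and names to your project.
- name: Generate LLM tool definitions
  run: |
    ./myapp llm claude > dist/tools-claude.json
    ./myapp llm openai > dist/tools-openai.json
- name: Upload tool definitions
  uses: actions/upload-artifact@v4
  with:
    name: llm-tools
    path: dist/
```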
Since the documentation is generated from code, it stays in sync automatically. Every release includes accurate tool definitions without manual updates.