# CLI Reference
Complete reference for all AIA command-line arguments, options, and flags.
## Usage Patterns

```bash
# Basic usage
aia [options] [PROMPT_ID] [CONTEXT_FILE]*

# Chat mode
aia --chat [PROMPT_ID] [CONTEXT_FILE]*
aia --chat [CONTEXT_FILE]*

# Show help
aia --help

# Show version
aia --version
```
## Mode Options

### --chat
Begin a chat session with the LLM after processing all prompts in the pipeline.
### -f, --fuzzy
Use fuzzy matching for prompt search (requires fzf to be installed).
Note: If fzf is not installed, AIA will exit with an error.
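For example:

```bash
# Pick a prompt interactively with fzf
aia --fuzzy

# Short form
aia -f
```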
### --terse
Adds a special instruction to the prompt asking the AI to keep responses short and to the point.
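For example (summarize_report is a hypothetical prompt ID):

```bash
# Ask the model to keep its answer brief
aia --terse summarize_report quarterly_report.txt
```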
### --tokens
Display token usage information after each response in chat mode. Shows input tokens, output tokens, and model ID.
### --cost

Include cost calculations with token usage. Shows the estimated cost based on the model's pricing. Implies --tokens, so there is no need to specify both.
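For example, in chat mode:

```bash
# Show token counts after each response
aia --chat --tokens

# Show token counts plus estimated cost (--tokens is implied)
aia --chat --cost
```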
### --mcp FILE
Load an MCP (Model Context Protocol) server from a JSON configuration file. MCP servers provide additional tools and context to AI models. Multiple --mcp options can be used to load multiple servers.
```bash
# Load a single MCP server
aia --mcp ~/.config/aia/mcp/github.json my_prompt

# Load multiple MCP servers
aia --mcp github.json --mcp filesystem.json my_prompt

# Combine with chat mode
aia --chat --mcp ~/mcp-servers/memory.json
```
JSON file format:
```json
{
  "name": "github",
  "command": "github-mcp-server",
  "args": ["stdio"],
  "env": {
    "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token"
  },
  "timeout": 8000
}
```
### --no-mcp
Disable all MCP server processing, including servers defined in the configuration file.
Use cases:
- Use --no-mcp when you want faster responses without MCP tool overhead
- Use --no-mcp to temporarily bypass MCP servers configured in your config file
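For example:

```bash
# Chat without starting any configured MCP servers
aia --chat --no-mcp

# Run a single prompt without MCP tool overhead
aia --no-mcp my_prompt
```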
### --mcp-list
List configured MCP servers and exit. Shows the server name and command for each server. Useful for verifying which servers are loaded from config files and --mcp options.
When combined with --mcp-use or --mcp-skip, only the servers that pass the filter are shown. The header changes from "Configured" to "Active" to reflect this.
When combined with --list-tools, MCP servers are started and their tools are included in the tool listing (see --list-tools).
```bash
# List all servers from config file
aia --mcp-list

# List servers loaded from specific MCP JSON files
aia --mcp github.json --mcp filesystem.json --mcp-list

# List only the servers that would be active after filtering
aia --mcp-list --mcp-use github,filesystem

# List all configured servers except those skipped
aia --mcp-list --mcp-skip playwright

# Combine with --list-tools to show MCP tools grouped by server
aia --mcp-list --list-tools
aia --mcp-list --list-tools --mcp-use redis
```
### --mu, --mcp-use NAMES
Only connect to the named MCP servers (whitelist). Server names are comma-separated and must match the name field in the server configuration.
```bash
# Only use the github server
aia --mcp-use github --chat

# Only use github and filesystem servers
aia --mcp-use github,filesystem --chat

# Can be specified multiple times
aia --mu github --mu filesystem --chat
```
Precedence: --no-mcp overrides --mcp-use. If both --mcp-use and --mcp-skip are specified, --mcp-use takes precedence.
### --ms, --mcp-skip NAMES
Skip the named MCP servers (blacklist). All other configured servers will still be connected.
```bash
# Skip the playwright server
aia --mcp-skip playwright --chat

# Skip multiple servers
aia --mcp-skip playwright,filesystem --chat

# Can be specified multiple times
aia --ms playwright --ms filesystem --chat
```
Precedence: --no-mcp overrides --mcp-skip. If --mcp-use is also specified, --mcp-skip is ignored.
## Adapter Options

### --adapter ADAPTER

Interface that adapts AIA to the LLM. Valid adapters: ruby_llm (currently the only supported adapter).
### --available-models [QUERY]

List available models that match the optional query, then exit. The query is a comma-separated list of terms, all of which must match (logical AND).
```bash
# List all models
aia --available-models

# Filter by provider
aia --available-models openai

# Filter by capability and provider
aia --available-models openai,mini

# Filter by modality
aia --available-models text_to_text

# Complex filter
aia --available-models openai,gpt,text_to_image
```
## Model Options

### -m MODEL, --model MODEL
Name of the LLM model(s) to use. For multiple models, use comma-separated values.
Supports inline role assignment using MODEL=ROLE syntax to assign specific roles to individual models.
```bash
# Single model
aia --model gpt-4 my_prompt

# Multiple models (parallel processing)
aia --model "gpt-4,claude-3-sonnet,gemini-pro" my_prompt

# Short form
aia -m gpt-3.5-turbo my_prompt

# Single model with role (inline syntax)
aia --model gpt-4o=architect design_review.md

# Multiple models with different roles
aia --model "gpt-4o=architect,claude=security,gemini=performance" my_prompt

# Same model with multiple roles for diverse perspectives
aia --model "gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist" project_plan.md

# Mixed: some models with roles, some without
aia --model "gpt-4o=expert,claude,gemini" my_prompt
```
See also: --role for applying a role to all models, --list-roles for discovering available roles.
### --[no-]consensus
Enable/disable consensus mode for multi-model responses. When enabled, AIA attempts to create a consensus response from multiple models.
```bash
# Enable consensus mode (requires multiple models)
aia --model "gpt-4,claude-3-sonnet" --consensus my_prompt

# Disable consensus mode (default: show individual responses)
aia --model "gpt-4,claude-3-sonnet" --no-consensus my_prompt
```
### --sm, --speech-model MODEL
Speech model to use for text-to-speech functionality.
### --tm, --transcription-model MODEL
Transcription model to use for speech-to-text functionality.
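For example, using the model names that appear elsewhere in this reference (tts-1-hd and whisper-1):

```bash
# Speak responses with a specific speech model and voice
aia --speech-model tts-1-hd --voice nova --speak my_prompt

# Transcribe audio input with a specific transcription model
aia --transcription-model whisper-1 audio_prompt audio_file.wav
```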
## File Options

### -c, --config-file FILE
Load configuration from a specific file.
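For example (the file paths are illustrative):

```bash
# Project-specific configuration
aia --config-file ./project_config.yml my_prompt

# Short form
aia -c ~/.config/aia/work.yml my_prompt
```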
### -o, --[no-]output [FILE]
Output file for saving AI responses.
```bash
# Save to default file (temp.md)
aia --output my_prompt

# Save to specific file
aia --output output.txt my_prompt

# Use absolute path
aia --output /tmp/ai_response.md my_prompt

# Disable file output
aia --no-output my_prompt
```
### -a, --[no-]append
Append to output file instead of overwriting.
```bash
# Append mode
aia --output log.md --append my_prompt

# Overwrite mode (default)
aia --output log.md --no-append my_prompt
```
### --[no-]history-file [FILE]
Conversation history file for logging prompts and responses.
```bash
# Enable history logging to default location
aia --history-file my_prompt

# Log to specific file
aia --history-file /var/log/aia_history.log my_prompt

# Disable history logging
aia --no-history-file my_prompt
```
### --md, --[no-]markdown
Format output with Markdown.
```bash
# Enable Markdown formatting
aia --markdown my_prompt

# Disable Markdown formatting
aia --no-markdown my_prompt
```
## Prompt Options

### --prompts-dir DIR
Directory containing prompt files.
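For example:

```bash
# Resolve prompt IDs against a project-local directory
aia --prompts-dir ./project_prompts/ my_prompt
```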
### --roles-prefix PREFIX
Subdirectory name for role files (default: roles).
```bash
# Use custom roles directory
aia --roles-prefix personas --role expert

# Results in looking for roles in ~/.prompts/personas/expert.md
```
### -r, --role ROLE_ID
Role ID to prepend to the prompt. This applies the same role to all models.
For per-model role assignment, use the inline MODEL=ROLE syntax with --model instead.
```bash
# Apply role to all models
aia --role expert my_prompt
aia -r teacher explain_concept

# With multiple models (same role for all)
aia --model "gpt-4,claude" --role architect design.md

# Per-model roles (inline syntax - see --model)
aia --model "gpt-4=architect,claude=security" design.md
```
See also: --model for inline role syntax, --list-roles for discovering available roles.
### --list-roles
List all available roles and exit. Shows role IDs and their descriptions from the roles directory.
```bash
# List all available roles
aia --list-roles

# Example output:
# Available roles in /Users/you/.prompts/roles:
#   architect   - Software architecture expert
#   security    - Security analysis specialist
#   performance - Performance optimization expert
#   debugger    - Expert debugging assistant
#   optimist    - Positive perspective analyzer
#   pessimist   - Critical risk analyzer
#   realist     - Balanced pragmatic analyzer
```
Roles are discovered from:
- Default location: ~/.prompts/roles/
- Custom location: Set via --prompts-dir and --roles-prefix
- Nested directories: Supports subdirectories like roles/software/architect.md
Use case: Discover available roles before using them with --role or inline MODEL=ROLE syntax.
See also: --role, --model, --prompts-dir, --roles-prefix
### -n, --next PROMPT_ID
Next prompt to process (can be used multiple times to build a pipeline).
### -p PROMPTS, --pipeline PROMPTS
Pipeline of comma-separated prompt IDs to process.
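Assuming prompt IDs extract, analyze, and summarize exist, the two forms below should be equivalent:

```bash
# Build a pipeline one step at a time with --next
aia extract --next analyze --next summarize data_source.txt

# Same pipeline as a comma-separated list
aia --pipeline "extract,analyze,summarize" data_source.txt
```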
### -x, --[no-]exec
Designate an executable prompt file.
```bash
# Treat prompt as executable
aia --exec my_script_prompt

# Treat as regular prompt (default)
aia --no-exec my_script_prompt
```
### --system-prompt PROMPT_ID
System prompt ID to use for chat sessions.
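For example (helpful_assistant is a hypothetical system prompt ID):

```bash
# Start a chat session with a custom system prompt
aia --chat --system-prompt helpful_assistant
```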
### --regex PATTERN
Regex pattern to extract parameters from prompt text.
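For example, a hypothetical pattern that treats uppercase names in square brackets as parameters (the default pattern may differ):

```bash
# Match [NAME]-style placeholders as prompt parameters
aia --regex '\[[A-Z _]+\]' my_prompt
```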
## AI Parameters

### -t, --temperature TEMP
Temperature for text generation (0.0 to 2.0). Higher values make output more creative and random.
```bash
# Conservative/focused
aia --temperature 0.1 analysis_prompt

# Balanced (default ~0.7)
aia --temperature 0.7 my_prompt

# Creative
aia --temperature 1.5 creative_writing

# Very creative
aia -t 2.0 brainstorm_ideas
```
### --max-tokens TOKENS
Maximum tokens for text generation.
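For example:

```bash
# Cap the response length
aia --max-tokens 500 my_prompt

# Allow a long-form response
aia --max-tokens 4000 my_prompt
```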
### --top-p VALUE
Top-p sampling value (0.0 to 1.0). Alternative to temperature for controlling randomness.
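For example (top-p is typically adjusted instead of temperature, not alongside it):

```bash
# Sample only from the most probable tokens
aia --top-p 0.1 my_prompt

# Near-unrestricted sampling
aia --top-p 0.95 my_prompt
```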
### --frequency-penalty VALUE
Frequency penalty (-2.0 to 2.0). Positive values discourage repetition.
```bash
# Discourage repetition
aia --frequency-penalty 0.5 my_prompt

# Encourage repetition
aia --frequency-penalty -0.5 my_prompt
```
### --presence-penalty VALUE
Presence penalty (-2.0 to 2.0). Positive values encourage discussing new topics.
```bash
# Encourage new topics
aia --presence-penalty 0.5 broad_discussion

# Focus on current topics
aia --presence-penalty -0.5 deep_dive
```
## Audio/Image Options

### --speak
Convert text to audio and play it. Uses the configured speech model and voice.
### --voice VOICE
Voice to use for speech synthesis.
```bash
aia --voice alloy --speak my_prompt
aia --voice echo --speak my_prompt
aia --voice fable --speak my_prompt
aia --voice nova --speak my_prompt
aia --voice onyx --speak my_prompt
aia --voice shimmer --speak my_prompt
```
### --is, --image-size SIZE
Image size for image generation.
Common sizes: 256x256, 512x512, 1024x1024, 1792x1024, 1024x1792
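For example, combining the image options (image_prompt is a hypothetical prompt ID and assumes an image-capable model):

```bash
# Square image, high quality, natural style
aia --image-size 1024x1024 --image-quality hd --image-style natural image_prompt
```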
### --iq, --image-quality QUALITY
Image quality for image generation.
Values: standard, hd
### --style, --image-style STYLE
Style for image generation.
Values: vivid, natural
## Tool Options

### --rq LIBS, --require LIBS
Ruby libraries to require for Ruby directive execution.
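For example (shared_tools is the library name used elsewhere in this reference; comma-separation for multiple libraries is assumed from the plural LIBS):

```bash
# Require a Ruby library before processing the prompt
aia --require shared_tools my_prompt

# Short form with multiple libraries (assumed comma-separated)
aia --rq "json,csv" my_prompt
```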
### --tools PATH_LIST
Add tool file(s) or directories. Comma-separated paths.
```bash
# Single tool file
aia --tools ./my_tool.rb my_prompt

# Multiple tools
aia --tools "./tool1.rb,./tool2.rb" my_prompt

# Tool directory
aia --tools ./tools/ my_prompt

# Mixed paths
aia --tools "./tools/,./special_tool.rb" my_prompt
```
### --at, --allowed-tools TOOLS_LIST
Allow only these tools to be used. Security feature to restrict tool access.
```bash
# Allow specific tools
aia --allowed-tools "calculator,file_reader" my_prompt
aia --at "web_scraper,data_analyzer" analysis_prompt
```
### --rt, --rejected-tools TOOLS_LIST
Reject/block these tools from being used.
```bash
# Block dangerous tools
aia --rejected-tools "file_writer,system_command" my_prompt
aia --rt "network_access" secure_prompt
```
### --list-tools
List available tools and exit. Loads tools from --require and --tools options, then displays each tool's name and description.
When combined with --mcp-list, MCP servers are also started and their tools are included in the output, grouped by server.
Output format depends on where stdout is directed:
| Output | Format | Descriptions |
|---|---|---|
| Terminal | Plain text, word-wrapped | First 3 sentences |
| File/pipe | Markdown with headings | Full description |
The markdown output uses `#` for the title, `##` for source sections (Local Tools, MCP server groups), and `###` for individual tool names. Any markdown headings within a tool's description are automatically adjusted to nest under the tool's heading level.
```bash
# List local tools loaded via --require
aia --require shared_tools --list-tools

# List local tools loaded from a file path
aia --tools ./my_tools/ --list-tools

# Combine --require and --tools
aia --require shared_tools --tools ./extras/ --list-tools

# Include MCP server tools (requires --mcp-list)
aia --require shared_tools --mcp-list --list-tools

# Include only specific MCP servers
aia --require shared_tools --mcp-list --mcp-use redis --list-tools

# Redirect to a markdown file
aia --require shared_tools --mcp-list --list-tools > tools.md

# Pipe to a markdown renderer
aia --require shared_tools --list-tools | glow -
```
Example terminal output:
```text
Local Tools:

calculator
    Perform advanced mathematical calculations with comprehensive
    error handling and validation. This tool supports basic arithmetic
    operations, parentheses, and common mathematical functions.

weather_tool
    Retrieve comprehensive current weather information for any city
    worldwide using the OpenWeatherMap API. This tool provides real-time
    weather data including temperature, atmospheric conditions, humidity,
    and wind information.
```
Example markdown output (when redirected):
```markdown
# Available Tools

> 20 tools from 2 sources

## Local Tools (15)

### `calculator`

Perform advanced mathematical calculations with comprehensive error
handling and validation. This tool supports basic arithmetic operations,
parentheses, and common mathematical functions. ...full description...

## MCP: redis (5)

### `set`

Set a Redis string value with an optional expiration time. ...full description...
```
## Utility Options

### Log Level Options
AIA provides multiple log level options to control logging verbosity. These options set the log level for all three loggers:
- aia: Used within the AIA codebase for application-level logging
- llm: Passed to the RubyLLM gem's configuration (RubyLLM.logger)
- mcp: Passed to the RubyLLM::MCP process (RubyLLM::MCP.logger)
Only one log level option should be used at a time.
#### -d, --debug
Enable debug output (most verbose) and set all loggers to DEBUG level. Also sets $DEBUG_ME = true for the debug_me gem.
#### --no-debug
Explicitly disable debug output. Sets $DEBUG_ME = false.
#### --info
Set all loggers to INFO level. Shows informational messages and above.
#### --warn
Set all loggers to WARN level (this is the default). Shows warnings, errors, and fatal messages.
#### --error
Set all loggers to ERROR level. Shows only errors and fatal messages.
#### --fatal
Set all loggers to FATAL level (least verbose). Shows only critical/fatal messages.
Log Level Hierarchy (from most to least verbose):
1. debug - All messages including detailed debugging information
2. info - Informational messages and above
3. warn - Warnings, errors, and fatal messages (default)
4. error - Only errors and fatal messages
5. fatal - Only critical/fatal messages
### -v, --[no-]verbose
Enable/disable verbose output.
### --refresh DAYS

Interval, in days, between refreshes of the models database.
```bash
# Refresh immediately
aia --refresh 0

# Refresh weekly
aia --refresh 7

# Refresh monthly
aia --refresh 30
```
### --dump FILE
Dump current configuration to a file for inspection or backup.
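For example (the file name is arbitrary):

```bash
# Write the resolved configuration to a file
aia --dump current_config.yml

# Inspect or version-control the result
cat current_config.yml
```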
### --completion SHELL
Show completion script for shell integration.
```bash
# Bash completion
aia --completion bash > ~/.bash_completion.d/aia

# Zsh completion
aia --completion zsh > ~/.zsh/completions/_aia

# Fish completion
aia --completion fish > ~/.config/fish/completions/aia.fish
```
Supported shells: bash, zsh, fish
### --version
Show AIA version and exit.
### -h, --help
Show help message and exit.
## Usage Examples

### Basic Examples
```bash
# Simple prompt execution
aia hello_world

# Chat mode
aia --chat

# Use specific model
aia --model gpt-4 code_review my_script.py

# Fuzzy prompt selection
aia --fuzzy
```
### Advanced Examples
```bash
# Multi-model consensus
aia --model "gpt-4,claude-3-sonnet" --consensus analysis_prompt data.csv

# Creative writing with voice output
aia --model gpt-4 --temperature 1.2 --speak --voice nova story_prompt

# Secure tool usage
aia --tools ./safe_tools/ --allowed-tools "calculator,file_reader" --rejected-tools "system_command" analysis_prompt

# Pipeline with custom configuration
aia --pipeline "extract,analyze,summarize" --temperature 0.3 --max-tokens 2000 --output report.md data_source.txt

# Debug mode with verbose output
aia --debug --verbose --model claude-3-sonnet problematic_prompt
```
### Configuration Examples
```bash
# Use custom configuration
aia --config-file ./project_config.yml --prompts-dir ./project_prompts/ my_prompt

# Save output with markdown formatting
aia --output analysis.md --markdown --append data_analysis dataset.csv

# Audio processing
aia --transcription-model whisper-1 --speech-model tts-1-hd --voice echo audio_prompt audio_file.wav
```
## Exit Codes

- 0 - Success
- 1 - General error (invalid arguments, file not found, etc.)
- 2 - Configuration error
- 3 - Model/API error
- 4 - Tool execution error
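In a shell script, these codes can drive error handling; a minimal sketch:

```bash
#!/usr/bin/env bash
# Run a prompt, then branch on AIA's exit status
aia --output report.md analysis_prompt data.csv
status=$?

case "$status" in
  0) echo "Report written to report.md" ;;
  2) echo "Configuration error - check your config file" >&2 ;;
  3) echo "Model/API error - check model name and credentials" >&2 ;;
  *) echo "aia exited with code $status" >&2 ;;
esac
exit "$status"
```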
## Environment Variables
Many CLI options have corresponding environment variables with the AIA_ prefix.
Use double underscore (__) for nested configuration sections:
```bash
# Model configuration (top-level, supports MODEL=ROLE syntax)
export AIA_MODEL="gpt-4"
export AIA_MODEL="gpt-4o=architect"
export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"

# LLM settings (nested under llm:)
export AIA_LLM__ADAPTER="ruby_llm"
export AIA_LLM__TEMPERATURE="0.8"
export AIA_LLM__MAX_TOKENS="2048"

# Prompts settings (nested under prompts:)
export AIA_PROMPTS__DIR="/custom/prompts"
export AIA_PROMPTS__ROLES_PREFIX="roles"

# Output settings (nested under output:)
export AIA_OUTPUT__FILE="./output.md"
export AIA_OUTPUT__APPEND="true"
export AIA_OUTPUT__HISTORY_FILE="~/.prompts/_prompts.log"

# Audio settings (nested under audio:)
export AIA_AUDIO__VOICE="alloy"
export AIA_AUDIO__SPEECH_MODEL="tts-1"

# Image settings (nested under image:)
export AIA_IMAGE__SIZE="1024x1024"
export AIA_IMAGE__QUALITY="hd"

# Flags (nested under flags:)
export AIA_FLAGS__CHAT="true"
export AIA_FLAGS__VERBOSE="true"
export AIA_FLAGS__DEBUG="false"
export AIA_FLAGS__TOKENS="true"
export AIA_FLAGS__COST="true"
export AIA_FLAGS__NO_MCP="false"
export AIA_FLAGS__CONSENSUS="true"

# Registry settings (nested under registry:)
export AIA_REGISTRY__REFRESH="7"

# Paths settings (nested under paths:)
export AIA_PATHS__AIA_DIR="~/.config/aia"
export AIA_PATHS__CONFIG_FILE="~/.config/aia/aia.yml"
```
Note: The AIA_MODEL environment variable supports the same inline MODEL=ROLE syntax as the --model CLI option.
See Configuration for a complete list.
## Configuration Precedence
Options are resolved in this order (highest to lowest precedence):
1. Command line arguments (including inline MODEL=ROLE syntax)
2. Environment variables (including inline syntax in AIA_MODEL)
3. Configuration files (including array format with roles)
4. Built-in defaults
Role-specific precedence: When using the role feature, inline MODEL=ROLE syntax takes precedence over the --role flag, which takes precedence over roles in config files.
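A minimal illustration of the precedence rules:

```bash
# The environment supplies a default model and role...
export AIA_MODEL="gpt-4o=architect"

# ...but the command line takes precedence, so this run uses
# claude with the security role
aia --model claude=security my_prompt

# With no --model flag, the AIA_MODEL value applies
aia my_prompt
```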
## Related Documentation
- Configuration Guide - Detailed configuration options
- Getting Started - Basic usage tutorial
- Advanced Prompting - Advanced usage patterns
- Directives Reference - Prompt directive reference