AIA Philosophy
The Prompt is the Code: A Power Tool Philosophy for AI Integration
AIA (AI Assistant) is a command-line interface (CLI) written in Ruby, designed to be a powerful, flexible, and scriptable partner for your daily tasks. This article is the first in a comprehensive series providing a deep dive into the AIA CLI, focusing on the philosophy behind its non-interactive “batch mode” — the foundation of AIA’s AI processing capabilities. The example prompts used in this article are for illustration purposes only. While AIA is a daily tool for my software development activities, alongside Claude Code and Aider, it is also a powerful business and recreational tool. Like any coding language, AIA’s batch mode is domain agnostic. Whether you use it to review or develop software source code, write documentation, or generate content for marketing materials, AIA’s batch mode can help you automate tasks and streamline your workflow.
Series Articles:
- The Philosophy of Prompt-Driven Development with AIA
- Mastering AIA’s Batch Mode: From Simple Questions to Complex Workflows
- Building AI Workflows: AIA’s Prompt Sequencing and Pipelines (Coming Soon)
- Interactive AI Sessions: AIA’s Chat Mode Deep Dive (Coming Soon)
- Extending AIA: Custom Tools and Function Callbacks (Coming Soon)
Table of Contents
- Introduction: The Problem with Embedded Prompts
- The Command-Line Power Tool Approach
- The Traditional Trap: Embedded Prompts
- AIA’s Philosophy: Code in Prompts
- Better Architecture: Database-Stored Templates
- Command-Line First Philosophy
- Practical Examples: The Developer’s Toolkit
- The Experimentation Advantage
- When to Use AIA vs. Traditional Approaches
- Getting Started: Your First AIA Experiment
- Looking Forward: The Command-Line AI Revolution
- Installation and Next Steps
Introduction: The Problem with Embedded Prompts
Every developer has been there: you’re building a web application that needs AI functionality, so you embed a prompt string directly in your code:
def analyze_sentiment(text)
  prompt = "Analyze the sentiment of this text: #{text}"
  return llm.query(prompt)
end
It works, but it’s brittle. Want to refine the prompt? Change the code, run tests, deploy. Need different analysis for different contexts? More code changes. Want to experiment with different models? Another deployment cycle.
This is the embedded prompt trap—treating AI prompts as just another string literal in your codebase instead of recognizing them as dynamic, evolving logic that deserves better treatment. Consider putting your prompt templates into a database table. With version control such as the Ruby gem “paper_trail” set up on that table, you have a wonderful way to make and track changes to your prompts over time without going through the deployment cycle.
That is the old way of thinking: the prompt lives in the code. Today let’s talk about the power of the flip side, where the code lives in the prompt.
AIA (AI Assistant) takes a radically different approach, embracing the philosophy that “The Prompt is the Code.” Instead of burying prompts in application logic, AIA treats them as first-class executable scripts that can adapt, evolve, and experiment without touching your main codebase. In the process you get a very powerful command-line tool that can be used to solve many problems.
The Command-Line Power Tool Approach
AIA isn’t trying to be a full application framework—it’s a focused command-line tool built on the Unix philosophy of doing one thing well. That one thing is executing dynamic, intelligent prompts that can adapt to their environment and compose with other tools. And of course, many simple tools composed together can become very powerful applications.
Single Responsibility Principle
AIA has one job: take a prompt file, process it with the specified Large Language Model (LLM), and return results. Everything else—data preparation, output formatting, workflow orchestration—is handled by the rich ecosystem of Unix tools that already exist.
# AIA does one thing: AI processing
cat customer_data.csv | aia analyze_feedback --no-out_file | jq '.sentiment' > results.json
# But it composes beautifully with other tools
find feedback/ -name "*.txt" | xargs -I {} aia summarize {} | grep "urgent" | mail -s "Urgent Issues" support@company.com
The Experimentation Platform
AIA’s real power lies in rapid experimentation. As a developer, you can:
- Test different prompt strategies in seconds, not deployment cycles
- Compare multiple LLM models on the same prompt and data without code changes
- Iterate on AI logic without affecting your main application
- Share and version control prompts like any other development artifact
# Test different models on the same prompt with same data
aia analyze_data --model=gpt-4o-mini data.csv -o result_4o.md
aia analyze_data --model=claude-3-haiku data.csv -o result_haiku.md
aia analyze_data --model=gpt-4 data.csv -o result_gpt4.md
# Compare results and choose the best model for your use case
The Traditional Trap: Embedded Prompts
The Familiar Anti-Pattern
Most AI integrations follow this pattern:
class CustomerAnalyzer:
    def analyze_sentiment(self, feedback):
        prompt = f"""
        Analyze the sentiment of this customer feedback:
        {feedback}
        Provide a score from 1-10 and a brief explanation.
        """
        return self.llm_client.query(prompt)

    def categorize_feedback(self, feedback):
        prompt = f"""
        Categorize this feedback into one of these categories:
        - Product Issue
        - Service Complaint
        - Feature Request
        - Praise
        Feedback: {feedback}
        """
        return self.llm_client.query(prompt)
The Hidden Costs
This approach creates several problems:
Iteration Friction: Every prompt change requires the full development cycle—edit code, test, deploy. Experimenting with prompt variations becomes a significant overhead.
Model Lock-in: The LLM provider and model are hardcoded. Switching models or testing different options requires code changes.
Context Limitations: Prompts can only access data explicitly passed to them. No environment awareness, no real-time context, no conditional logic.
Maintenance Overhead: As prompts become more sophisticated, they accumulate complexity within your application code, making both the prompts and the surrounding code harder to maintain.
AIA’s Philosophy: Code in Prompts
Prompts as Executable Scripts
AIA flips the relationship. Instead of embedding prompts in code, you embed code in prompts:
# sentiment_analysis.txt
//config model=gpt-4o-mini
//config temperature=0.3
<!-- Adapt analysis based on feedback length -->
<% word_count = ENV['FEEDBACK_TEXT'].to_s.split.length %>
<% if word_count > 100 %>
This is a detailed feedback entry (<%= word_count %> words).
Provide comprehensive sentiment analysis with:
- Overall sentiment score (1-10)
- Key emotional themes
- Specific concerns or praise points
- Recommended response priority
<% else %>
This is a brief feedback entry (<%= word_count %> words).
Provide concise sentiment analysis:
- Sentiment score (1-10)
- Primary emotion
- One-sentence summary
<% end %>
Feedback to analyze:
//include $FEEDBACK_TEXT
Analysis timestamp: $(date)
The Power of Dynamic Adaptation
This prompt demonstrates capabilities that are impossible for simple, static prompt templates:
Environmental Awareness: The prompt adapts its behavior based on input characteristics (word count) and system context (timestamp).
Conditional Logic: Different analysis approaches for different types of input, all managed within the prompt itself.
Model Flexibility: Change models without code changes using the //config directive (a short example follows this list).
Version Control: Track prompt evolution separately from application code.
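As a concrete illustration of that model flexibility, switching models is a one-line edit to the prompt file rather than a code change and a deployment. This is a minimal sketch; the sed edit is illustrative, and it assumes the sentiment_analysis.txt prompt shown above lives in $AIA_PROMPTS_DIR, the prompts directory configured in the Getting Started section below:
# Point the prompt at a different model, then re-run: no deployment needed
sed -i 's|//config model=gpt-4o-mini|//config model=claude-3-haiku|' "$AIA_PROMPTS_DIR/sentiment_analysis.txt"
FEEDBACK_TEXT="Great product, but delivery was slow" aia sentiment_analysis --no-out_file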
Better Architecture: Database-Stored Templates
Instead of embedding prompts in code, consider storing them in your database where they can be modified without deployments:
Traditional Embedded Approach
class AIService:
    def __init__(self):
        # Prompts locked into code
        self.prompts = {
            'sentiment': "Analyze sentiment: {text}",
            'summary': "Summarize this text: {text}",
            'categorize': "Categorize into {categories}: {text}"
        }

    def analyze(self, prompt_type, text):
        # Inflexible, requires deployment to change
        prompt = self.prompts[prompt_type].format(text=text)
        return self.llm.query(prompt)
Database Template Approach
import os
import subprocess
import tempfile

class AIService:
    def __init__(self):
        self.db = Database()

    def analyze(self, prompt_name, context):
        # Fetch current prompt version from database
        template = self.db.get_prompt_template(prompt_name)

        # Use AIA to process with dynamic capabilities
        env = os.environ.copy()
        env.update(context)

        # Write template to temp file and execute
        with tempfile.NamedTemporaryFile(mode='w', suffix='.txt') as f:
            f.write(template.content)
            f.flush()
            result = subprocess.run(
                ['aia', f.name, '--no-out_file'],
                env=env, capture_output=True, text=True
            )
        return result.stdout

# Database table structure:
# prompt_templates: id, name, content, version, created_at, updated_at
AIA uses PromptManager
AIA is a Ruby gem and it uses several other Ruby gems. The prompt_manager gem provides the ability to use prompt templates from either a filesystem or a database table; AIA manages its prompt files in the file system. The job of managing the interface to the LLM API falls to the ruby_llm gem, which provides access to over 500 LLMs from various providers.
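In practice this just means a directory of plain-text prompt files that you invoke by name. Here is a small sketch (the file names are illustrative; $AIA_PROMPTS_DIR is the prompts directory configured in the Getting Started section below):
# Prompts are ordinary files in your prompts directory...
ls "$AIA_PROMPTS_DIR"
# analyze_feedback.txt  daily_summary.txt  sentiment_analysis.txt
# ...and each one is run by its file name (the prompt ID) without the .txt extension
aia sentiment_analysis --no-out_file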
Benefits of This Hybrid Approach
Decoupled Evolution: Prompts evolve independently of application code. Update analysis logic without touching your web app.
A/B Testing: Store multiple prompt versions and test them against each other:
-- Test different prompt versions
SELECT * FROM prompt_templates WHERE name = 'sentiment_analysis' ORDER BY version DESC;
-- Gradual rollout: use version 2 for 10% of requests
SELECT content FROM prompt_templates
WHERE name = 'sentiment_analysis'
AND version = CASE WHEN RANDOM() < 0.1 THEN 2 ELSE 1 END;
Non-Technical Editing: Subject matter experts can refine prompts through web-based admin interfaces without developer involvement.
Audit Trail: Full history of prompt changes with rollback capability. For Ruby on Rails applications that implement the Ruby gem paper_trail, this is an easy thing to do.
Command-Line First Philosophy
Unix Pipeline Integration
AIA embraces the Unix philosophy—small tools that do one thing well and compose beautifully:
# Data processing pipeline
curl -s "api.example.com/feedback" | \
jq -r '.[] | .text' | \
while read feedback; do
echo "$feedback" | FEEDBACK_TEXT="$feedback" aia sentiment_analysis --no-out_file
done | \
jq -s '.' > sentiment_results.json
# Parallel processing
find feedback_files/ -name "*.txt" | \
parallel -j 4 'aia analyze_feedback {} --out_file {.}_analysis.json'
# Real-time monitoring
tail -f /var/log/customer_feedback.log | \
grep "priority:high" | \
while read line; do
echo "$line" | aia urgent_analysis --no-out_file | \
mail -s "Urgent Customer Issue" support@company.com
done
Individual Developer Workflow
AIA is designed for developers who want to:
Experiment Rapidly: Test different approaches without deployment overhead
# Quick iteration cycle
vim my_analysis.txt
aia my_analysis sample_data.csv
# Review results, adjust prompt, repeat
Optimize Costs: Find the cheapest model that meets quality requirements
# Cost testing
for model in gpt-4o-mini claude-3-haiku gpt-3.5-turbo; do
echo "Testing $model..."
aia analyze_data --model=$model sample.csv -o ${model}_results.md
done
Build Personal Workflows: Create specialized analysis tools for daily use
# Personal productivity tools
alias analyze_logs="aia log_analysis /var/log/app.log"
alias summarize_emails="aia email_summary ~/Downloads/emails.mbox"
alias review_code="aia code_review"
Practical Examples: The Developer’s Toolkit
Rapid Prompt Iteration
Instead of this development cycle:
- Edit Python code with new prompt
- Run tests
- Deploy to staging
- Test with real data
- Deploy to production
- Monitor results
With AIA you get this cycle:
- Edit prompt file
- Test with real data
- Done
# Traditional approach - requires deployment
def analyze_customer_feedback(text):
    prompt = f"Analyze this feedback: {text}"  # Locked in code
    return llm.query(prompt)
# AIA approach - immediate testing
$ vim customer_feedback_analysis.txt
$ aia customer_feedback_analysis --no-out_file
# Instant results, no deployment needed
Model Comparison and Optimization
Find the best model for your specific use case:
# test_models.sh
for model in gpt-4o-mini claude-3-haiku gpt-4 claude-3-sonnet; do
echo "=== Testing $model ==="
time aia analyze_data --model=$model test_data.csv
echo ""
done
Create a simple prompt for comparing models:
# model_comparison.txt
//config model=$MODEL_NAME
//config temperature=0.2
Analyze this data and provide insights:
//include $DATA_FILE
Current model: $MODEL_NAME
Analysis timestamp: $(date)
# Compare models
for model in gpt-4o-mini claude-3-haiku; do
MODEL_NAME=$model DATA_FILE=sample.csv aia model_comparison --out_file results_$model.txt
done
# Review results
diff results_gpt-4o-mini.txt results_claude-3-haiku.txt
Personal Automation
Build your own AI-powered utilities:
# daily_summary.txt
//config model=gpt-4o-mini
Daily summary for $(date +%Y-%m-%d):
Recent Git commits:
//shell git log --oneline --since="1 day ago"
System resource usage:
//shell df -h | head -5
//shell free -h
Today's tasks:
//include ~/today.md
Please provide:
1. A summary of development activity
2. System health assessment
3. Priority tasks for today
# Add to your .bashrc or daily cron job
alias daily="aia daily_summary --no-out_file"
The Experimentation Advantage
Comparing Approaches
Traditional Embedded Prompts:
- Iteration time: Hours to days (code → test → deploy)
- Model changes: Requires code modification
- A/B testing: Complex feature flags and deployment coordination
- Collaboration: Developers must translate business requirements
AIA Dynamic Prompts:
- Iteration time: Seconds to minutes (edit → test)
- Model changes: Configuration directive change
- A/B testing: Simple file comparison
- Collaboration: Domain experts can directly modify easily understood prompts
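For instance, the “simple file comparison” style of A/B testing mentioned above can be as lightweight as keeping two prompt variants and diffing their outputs. A minimal sketch, assuming both prompt files live in your prompts directory and reusing only flags shown earlier in this article:
# Keep two variants of the same prompt
cp "$AIA_PROMPTS_DIR/sentiment_analysis.txt" "$AIA_PROMPTS_DIR/sentiment_analysis_v2.txt"
vim "$AIA_PROMPTS_DIR/sentiment_analysis_v2.txt"   # tweak the instructions in the copy
# Run both against the same data and compare the results
aia sentiment_analysis data.csv -o results_v1.md
aia sentiment_analysis_v2 data.csv -o results_v2.md
diff results_v1.md results_v2.md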
Real-World Scenario
You’re building a content moderation system:
# content_moderation.txt
//config model=gpt-4o-mini
//config temperature=0.1
Content Moderation Analysis
Timestamp: $(date)
<% content_type = ENV['CONTENT_TYPE'] || 'general' %>
<% if content_type == 'user_comment' %>
Focus on:
- Harassment or personal attacks
- Spam or promotional content
- Off-topic discussions
<% elsif content_type == 'user_post' %>
Focus on:
- Inappropriate images or content
- Misinformation or false claims
- Violations of community guidelines
<% end %>
Content to moderate:
<%= ENV['CONTENT_TEXT'] %>
Provide:
1. Risk level (low/medium/high)
2. Specific policy violations (if any)
3. Suggested action (approve/flag/remove)
4. Confidence score (1-10)
With AIA, you can:
- Test different moderation strategies instantly
- Compare models for accuracy vs. speed
- Adjust criteria based on content type
- Experiment with different confidence thresholds
All without deploying code changes.
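For example, each variation is just a different invocation. A small sketch, assuming the prompt above is saved as content_moderation.txt in your prompts directory:
# Moderate a user comment
CONTENT_TYPE="user_comment" CONTENT_TEXT="This thread is garbage and so are you" \
  aia content_moderation --no-out_file
# Same prompt applied to a post; only the environment variables change
CONTENT_TYPE="user_post" CONTENT_TEXT="Miracle cure doctors don't want you to know about" \
  aia content_moderation --no-out_file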
When to Use AIA vs. Traditional Approaches
Use AIA When:
- Rapid Experimentation: You need to test different prompt strategies quickly
- Model Flexibility: You want to compare different LLMs without code changes
- Dynamic Context: Your prompts need environment awareness or conditional logic
- Command-Line Workflows: You work primarily in terminal environments
- Personal Productivity: You’re building tools for your own use
- Prompt Evolution: Your AI logic changes frequently
Use Traditional Embedding When:
- Simple, Static Prompts: Basic, unchanging prompts like translation
- Performance Critical: Every millisecond matters in production
- Tight Integration: AI output directly feeds into specific code paths
- Team Constraints: Your team lacks command-line expertise
The Hybrid Approach
Many developers benefit from both approaches:
import os
import subprocess

class HybridAIService:
    def __init__(self):
        # Simple, static prompts embedded
        self.simple_prompts = {
            'translate': "Translate '{text}' to {language}",
            'classify': "Classify: {text}. Categories: {categories}"
        }

    def simple_task(self, task_type, **params):
        """Fast, embedded prompts for simple tasks"""
        prompt = self.simple_prompts[task_type].format(**params)
        return self.llm.query(prompt)

    def complex_task(self, aia_prompt, **env_vars):
        """AIA for complex, dynamic tasks"""
        env = os.environ.copy()
        env.update(env_vars)
        result = subprocess.run(
            ['aia', aia_prompt, '--no-out_file'],
            env=env, capture_output=True, text=True
        )
        return result.stdout
Getting Started: Your First AIA Experiment
Installation
# Install AIA
gem install aia
# Verify installation
aia --version
# Set up environment
export AIA_PROMPTS_DIR="$HOME/prompts_dir"
mkdir -p "$AIA_PROMPTS_DIR"
export OPENAI_API_KEY="your_api_key_here"
# NOTE: AIA supports *all* the major LLM providers, not just OpenAI
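To use a different provider, export that provider’s API key instead and pick one of its models at run time with --model. This is only a sketch; the variable name below is the conventional one for Anthropic, so check the ruby_llm documentation for the exact names your provider expects:
# Example: Anthropic models (ANTHROPIC_API_KEY assumed as the conventional variable name)
export ANTHROPIC_API_KEY="your_api_key_here"
# Then select a model when running a prompt, e.g. aia my_prompt --model=claude-3-haiku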
Your First Dynamic Prompt
Create a simple prompt that demonstrates AIA’s power:
# Create an experimental prompt
cat > "$AIA_PROMPTS_DIR/experiment.txt" << 'EOF'
//config model=gpt-4o-mini
//config temperature=0.7
AI Experiment - $(date)
User: $(whoami)
System: $(uname -s)
Current directory: $(pwd)
<% if ENV['EXPERIMENT_TYPE'] %>
Experiment type: <%= ENV['EXPERIMENT_TYPE'] %>
<% else %>
No experiment type specified - running in general mode
<% end %>
<% if ENV['DATA_FILE'] && File.exist?(ENV['DATA_FILE']) %>
Data file provided: <%= ENV['DATA_FILE'] %>
File size: <%= File.size(ENV['DATA_FILE']) %> bytes
Content preview:
//shell head -n 5 "$DATA_FILE"
<% else %>
No data file provided
<% end %>
Please provide insights about this experimental setup and suggest interesting analyses I could perform.
EOF
# Test your prompt
aia experiment --no-out_file
# Test with environment variables
EXPERIMENT_TYPE="sentiment_analysis" aia experiment --no-out_file
# Test with data file
echo -e "happy\nsad\nangry\njoyful" > sample_emotions.txt
DATA_FILE="sample_emotions.txt" aia experiment --no-out_file
Model Comparison Experiment
# Create a model comparison prompt
cat > "$AIA_PROMPTS_DIR/model_test.txt" << 'EOF'
//config model=$MODEL_NAME
//config temperature=0.3
Model: $MODEL_NAME
Task: Analyze the sentiment of this text: "$TEXT_TO_ANALYZE"
Please provide:
1. Sentiment score (1-10, where 1 is very negative, 10 is very positive)
2. Confidence level (1-10)
3. Key emotional indicators
4. Brief reasoning
Processing time: $(date)
EOF
# Test different models
for model in gpt-4o-mini claude-3-haiku gpt-4; do
echo "=== Testing $model ==="
MODEL_NAME=$model TEXT_TO_ANALYZE="I love this product but the delivery was slow" aia model_test --no-out_file
echo ""
done
Looking Forward: The Command-Line AI Revolution
AIA represents a shift toward treating AI as a composable command-line tool rather than a monolithic application component. This approach offers several advantages:
- Simplicity: One tool, one responsibility—execute AI prompts efficiently
- Composability: Works seamlessly with existing Unix tools and workflows
- Experimentation: Rapid iteration without deployment overhead
- Flexibility: Easy model switching and prompt evolution
- Personal Productivity: Build custom AI tools for your specific needs
The next article in this series explores how to master AIA’s batch processing capabilities, showing you how to build robust data processing pipelines that leverage AI intelligence within Unix workflows.
Coming Next: “Mastering Batch Mode: Unix Pipelines Meet AI Intelligence”
- Building Data Processing Pipelines: Integrate AIA into existing Unix workflows
- Parallel Processing Patterns: Handle large datasets efficiently
- Error Handling and Reliability: Build robust AI-powered automation
- Performance Optimization: Choose the right models and caching strategies
- Real-World Examples: Practical batch processing use cases
Installation and Next Steps
Ready to start experimenting with AIA? Here’s everything you need:
Prerequisites
- Ruby environment (install Ruby)
- API keys for LLM providers (start with OpenAI)
Quick Setup
# Install AIA
gem install aia
# Create your prompts directory
mkdir -p "$HOME/prompts_dir"
export AIA_PROMPTS_DIR="$HOME/prompts_dir"
# Set your API key
export OPENAI_API_KEY="your_api_key_here"
# Test the installation
echo "What is 2 + 2? Today is $(date) What is tommorrow's date?" > "$AIA_PROMPTS_DIR/test.txt"
aia test --no-out_file
Explore Further
- Check out the help:
aia --help
- List available models:
aia --available_models
- Read the documentation: AIA Repository
- Explore advanced features: AIA Wiki
The revolution starts with recognizing that prompts deserve better than being buried in code. With AIA, the prompt is the code, and that changes everything.
This article is part of a comprehensive series exploring AIA’s capabilities. Next: “Mastering Batch Mode: Unix Pipelines Meet AI Intelligence”
Series Navigation:
- Article 1: The Philosophy of AIA - “The Prompt is the Code” (Current)
- Article 2: Mastering Batch Mode: Unix Pipelines Meet AI Intelligence
- Article 3: Workflow Orchestration: Building AI Process Chains
- Article 4: Interactive Mode: Conversational AI That Remembers
- Article 5: Advanced Tool Integration: Extending AI Capabilities
Learn More: