Agent script: Claude as interpreter

What if you could write a script in plain language and just run it?

./deploy.ais

Natural language instructions that an AI agent interprets and executes, just like Python interprets .py files or Bash interprets .sh files.

Scripts in the original sense

In Claude Code, everything is just instructions. Skills, commands, agents—they’re all markdown files with natural language. These are scripts, but not in the shell or Python sense. Closer to the original meaning: written instructions for an actor to follow.

We’ve come full circle. The first “scripts” were for human performers. Then we wrote scripts for machines in rigid, formal languages. Now we’re back to writing for actors—just artificial ones.

Flipping the model

Think about the difference between using Python’s REPL and writing a .py file. In the REPL, you type commands interactively—useful for exploration, but ephemeral. When you write a script file, you codify those commands into something reusable, shareable, and visible in your filesystem.

The same applies here. A Claude session is like a REPL: interactive, powerful, but ephemeral. An .ais file is like a .py file: a reusable artifact you can run, share, and build upon.

Normally, you start Claude and invoke skills from within. Claude mediates everything. But what if we flip it? Instead of Claude invoking the script, the script invokes Claude.

Traditional: User → Claude → Skill
Flipped:     User → Script → Claude

This matters for two reasons.

First, integration: the flipped model plugs into existing workflows. You can call .ais scripts from cron jobs, CI pipelines, other scripts—anywhere you’d call a normal script. No need to “be in Claude” first.
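For instance, a cron entry can schedule one like any other executable (paths here are illustrative, and cron's PATH must include both claude-interp and claude):

# crontab -e: run the publish-candidate script every Monday at 09:00
0 9 * * 1 $HOME/scripts/publish-candidate.ais >> $HOME/logs/publish.log 2>&1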

Second, visibility: skills live in Claude’s namespace. You invoke them with /skillname, but they’re invisible until you’re in a session. Script files live in your filesystem. You can ls them, grep them, organize them into folders, version control them. They’re discoverable the same way your other scripts are—by looking at what’s there.
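Concretely (assuming a hypothetical ~/scripts layout):

ls ~/scripts/*.ais                  # see every agent script at a glance
grep -l "wiki" ~/scripts/*.ais      # find the ones that touch the wiki
git log -p publish-candidate.ais    # see how the instructions evolved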

The .ais extension

Using .md conflates documents (for reading) with scripts (for executing). A distinct extension signals intent: “this file is meant to be executed, not read.”

I’m using .ais—short for “AI script.” It parallels .sh, .py, .js. Short, pronounceable, and not tied to any specific AI provider.

Making it executable

The Claude CLI doesn't natively work as a shebang interpreter (it expects a prompt, not a script path), but a thin wrapper bridges the gap.

The interpreter (~/bin/claude-interp):

#!/bin/bash
set -e
SCRIPT="$1"
[[ -z "$SCRIPT" ]] && { echo "Usage: claude-interp <script.ais>" >&2; exit 1; }

content=$(tail -n +2 "$SCRIPT")  # skip shebang

# Parse frontmatter if present
if echo "$content" | head -1 | grep -q '^---$'; then
    frontmatter=$(echo "$content" | awk '/^---$/{n++; next} n==1{print}')
    body=$(echo "$content" | awk '/^---$/{n++; next} n>=2{print}')

    args=()
    add_dir=$(echo "$frontmatter" | grep -E '^add-dir:' | sed 's/^add-dir:[[:space:]]*//' | sed "s|^~|$HOME|")  # expand only a leading ~
    [[ -n "$add_dir" ]] && args+=(--add-dir "$add_dir")

    model=$(echo "$frontmatter" | grep -E '^model:' | sed 's/^model:[[:space:]]*//')
    [[ -n "$model" ]] && args+=(--model "$model")

    echo "$body" | claude --print "${args[@]}"
else
    echo "$content" | claude --print
fi

The frontmatter configures Claude (directories to access, model to use). The body after the closing --- is the actual instruction.

An example script (publish-candidate.ais):

#!/usr/bin/env claude-interp
---
add-dir: ~/git/my-wiki
model: sonnet
---

Sample 20 random markdown files from the wiki directory.
Skip any that already have `public: true` in frontmatter.

For each file, assess:
- Value: Would this be interesting to others?
- Readiness: How complete and polished is it?

Recommend the single best candidate to publish next.

Then:

chmod +x publish-candidate.ais
./publish-candidate.ais

That’s it. A natural language script, executable from anywhere.
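One setup note: for the #!/usr/bin/env claude-interp shebang to resolve, the wrapper itself must be executable and on your PATH:

chmod +x ~/bin/claude-interp
export PATH="$HOME/bin:$PATH"   # if ~/bin isn't already on PATH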

A different kind of brittleness

Traditional scripts are brittle in familiar ways—they break when file paths change, APIs update, or edge cases appear. Agent scripts describe intent and constraints, letting the AI figure out details. They degrade gracefully and can ask for clarification.

But they’re brittle in unfamiliar ways. They can hallucinate. They can interpret the same instruction differently each run. A traditional script fails loudly and consistently; an agent script might fail silently or succeed unexpectedly.

You trade one kind of brittleness for another. This is why verification matters.

Verification strategies

How do you know it did the right thing?

The most direct answer is to build verification into the script itself: the same agent that does the work can check it. Tell it to confirm its changes and to report failures explicitly rather than assuming success, as in the sketch below.
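Appended to any script's instructions, it can look something like this (the wording and checks are illustrative):

After completing the task, verify your work:
- Re-read every file you modified and confirm the intended change is present.
- List anything you skipped or were unsure about, and why.
- If any check fails, report the failure explicitly instead of claiming success.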

When to use agent scripts

Aspect            Bash/Python             Agent script
Precision         Exact commands          Intent + constraints
Error handling    Explicit conditionals   Adaptive reasoning
Maintenance       Breaks on changes       Self-adjusts
Debugging         Stack traces            Ask it to explain
Reproducibility   Deterministic           Probabilistic

Good fit: Semi-automated workflows that are hard to fully automate, exploratory tasks, anything requiring judgment. Tasks where you’d otherwise write a script and babysit it.

Poor fit: Critical production systems, high-frequency operations, anything requiring audit trails or exact reproducibility. Also: simple tasks where a bash one-liner would do.

And yes, they’re slow. Every run is an API call—expect seconds to minutes, not milliseconds. This isn’t a bug; it’s the cost of having an interpreter that thinks. Use them for tasks where that thinking time is worth it.

Agent scripts won’t replace traditional scripts. But they open up a new category: tasks that were too fuzzy to automate before, now automatable. Tasks where “just figure it out” is a valid instruction.

Self-improving scripts

Here’s something traditional scripts can’t do: get better over time.

An agent script can be instructed to refine itself based on outcomes. “If this approach didn’t work, update this script with what you learned.” Or interactively: “After each run, ask me what could be improved and update the script accordingly.”
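In practice, that's just a closing paragraph in the script itself (wording illustrative):

After finishing, review this run. If any instruction above proved
ambiguous or led you astray, edit this file to correct it, and
summarize what you changed.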

The script becomes a living document—not just executed, but evolved. Each run is a chance to sharpen the instructions, add edge cases, or remove ambiguity.

This blurs the line between using a tool and training it.

Beyond Claude

Nothing here is Claude-specific. Swap claude --print for another CLI—GPT, Gemini, a local model—and the concept holds. The .ais format is just a thin wrapper around “pipe natural language to an AI agent.”
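In claude-interp above, only the final invocation is provider-specific, so the swap is a one-line change (the replacement CLI is a placeholder, assumed to read a prompt from stdin):

echo "$body" | claude --print "${args[@]}"      # before
echo "$body" | other-agent-cli "${args[@]}"     # after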

As these agents become more capable, we’ll probably need standardized ways to invoke them—AI interpreters, the way we have language runtimes today. But that’s speculation. For now, this works.