Part 2 defined the basic file system tools that the assistant uses; the full source is on GitHub.
In Part 3, now that we have tools wired up, we will use the model itself to extend our assistant — asking it to add new functionality to its own source code.
The interesting thing about having file system tools is that the assistant can now read and write its own source file. Once you realize that, a strange loop opens up: you can prompt the assistant to add new features to itself.
This is worth doing for a few reasons:
It’s a great way to understand how tools work in practice. Building the tooling yourself, rather than reaching for an existing framework, forces you to understand what’s happening at each step — how the model signals it wants to call a tool, how you dispatch to the right function, how the result gets fed back into the conversation. That knowledge is hard to get by reading documentation alone.
It’s a form of bootstrapping. Once the assistant can modify itself, adding new capabilities is just a matter of describing what you want. You’re no longer writing code by hand — you’re directing the model to write it for you, and then immediately using the result.
It’s a useful pattern to know. Self-modifying or self-extending systems show up in agentic workflows all the time. Starting with a simple example makes the underlying mechanics obvious.
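The tool-call cycle described in the first point (the model signals it wants a tool, we dispatch to the right function, the result is fed back into the conversation) can be sketched roughly like this. The function names and the `read_file` tool here are illustrative, not the article's exact code:

```python
def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

# Map tool names (as they appear in the model's tool_use blocks)
# to the Python functions that implement them.
TOOL_FUNCTIONS = {"read_file": read_file}

def dispatch(name: str, tool_input: dict) -> str:
    """Route a tool_use block to the matching Python function."""
    fn = TOOL_FUNCTIONS.get(name)
    if fn is None:
        return f"Unknown tool: {name}"
    return fn(**tool_input)

def run_turn(client, model, messages, tools):
    """One assistant turn: loop until the model stops asking for tools."""
    while True:
        response = client.messages.create(
            model=model, max_tokens=1024, tools=tools, messages=messages
        )
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            return response
        # Execute each requested tool and hand the results back to the model
        # as tool_result blocks in a user message.
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": dispatch(block.name, block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

The key detail is the inner loop: a single user prompt can trigger several rounds of tool calls before the model produces its final text response.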
Let’s start by asking the assistant to modify itself and add a new slash command for switching between models.
🧑💻 please add a /model <model name> slash command to main.py which will allow for model selection, if no model is given then it just lists the available models
🦀: Here's a summary of every change made:
### New global — `current_model`
```python
current_model = "claude-sonnet-4-6"
```
Holds the active model name so it can be changed at runtime without touching any other code.
....
| Input | Behaviour |
|---|---|
| `/model` | Prints the current model, then lists all available models |
| `/model claude-opus-4-5` | Validates and switches to that model |
| `/model bad-name` | Prints an error and lists valid options |
`/help` was also updated to document both `/model` forms.
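A minimal sketch of how such a handler might look, assuming an `AVAILABLE_MODELS` list alongside the `current_model` global (the assistant-generated code may differ in the details):

```python
# Hypothetical names: AVAILABLE_MODELS and handle_model_command are
# illustrative, matching the behaviour table above rather than the
# exact generated code.
AVAILABLE_MODELS = ["claude-sonnet-4-6", "claude-opus-4-6"]
current_model = "claude-sonnet-4-6"

def handle_model_command(arg: str) -> str:
    """Implement /model: list models with no argument, switch otherwise."""
    global current_model
    listing = "\n".join(AVAILABLE_MODELS)
    if not arg:
        return f"Current model: {current_model}\nAvailable models:\n{listing}"
    if arg not in AVAILABLE_MODELS:
        return f"Unknown model: {arg}\nAvailable models:\n{listing}"
    current_model = arg
    return f"Model set to: {current_model}"
```

Because `chat()` reads `current_model` on every request, switching takes effect on the very next message.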
---
### Updated — `chat()` and `chat_streaming()`
The two hardcoded `model="claude-sonnet-4-6"` strings were replaced with `model=current_model`, so every new request automatically uses whichever model was last selected.

The assistant read main.py, made the changes, and wrote the file back, using exactly the tools we gave it in Part 2.
Now let’s verify the new command actually works:
🧑💻 /model
Current model: claude-sonnet-4-6
Available models:
claude-sonnet-4-6
claude-opus-4-6
...
🧑💻 /model claude-opus-4-6
Model set to: claude-opus-4-6
🧑💻 /model
Current model: claude-opus-4-6

The command works exactly as described. The assistant added the feature, and we verified it.
We can do the same thing with tools. Rather than writing a new tool from scratch, we can point the assistant at a specification file and ask it to implement the tool from that description.
Since we are building on top of Anthropic’s Claude, we can borrow a
SKILL.md file from Clawhub. Clawhub is a community hub for
sharing skill files — structured markdown documents that describe how to
use a particular tool or technique. Claude Code uses these natively; our
custom assistant can use them too, as a specification to build from.
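A skill file is typically a short markdown document with YAML frontmatter naming and describing the skill, followed by the instructions themselves. Roughly (the content here is illustrative, not the actual Clawhub file):

```markdown
---
name: curl
description: Fetch web pages and call HTTP APIs using curl
---

# curl

Use `curl -sS <url>` to fetch a page. Add `-X POST -d '<body>'`
to send data, and `-H 'Header: value'` to set request headers.
```

Because it is just structured text, nothing about it is specific to Claude Code; any assistant that can read files can consume it as a specification.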
A word of caution: be careful when using skill files from Clawhub. Community-contributed files can be malicious — crafted to exfiltrate data or execute unintended commands. Read any skill file you plan to use before handing it to your assistant, and use the “Hide suspicious” filter when browsing the hub.
For this example I used the curl skill from Clawhub. After
reviewing it, I dropped it into skills/curl/SKILL.md in the
project directory.
Now prompt the assistant to build a tool from it:
🧑💻 The skills/curl/SKILL.md file explains how to use curl, please create a tool in main.py that will utilize it, this tool will be used when a page is retrieved from a url or when data is fetched from a API
🦀: Here is a full breakdown of everything that was added and why each decision was made.
....
### 4. Wired into `chat()` and `TOOLS`
- `curl_http_tool` added to the `TOOLS` list so it is sent in every API request.
- `elif block.name == "curl_http": result = curl_http(**block.input)` added to the tool-dispatch loop.
- `SYSTEM` prompt updated to tell the model to use `curl_http` whenever it needs to retrieve a web page or fetch API data.

The assistant read the skill file, implemented a `curl_http` Python function, created the matching JSON schema, and wired it into the existing tool-dispatch loop, all in one pass.
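The generated code likely looks something like the following sketch: a function that shells out to curl, plus the `curl_http_tool` schema sent with every API request. This is a hedged reconstruction, not the assistant's exact output:

```python
import subprocess

def curl_http(url: str, method: str = "GET") -> str:
    """Fetch a URL with curl and return the response body or an error."""
    try:
        proc = subprocess.run(
            ["curl", "-sS", "-X", method, url],
            capture_output=True, text=True, timeout=30,
        )
    except subprocess.TimeoutExpired:
        return "Error: request timed out"
    if proc.returncode != 0:
        return f"Error: {proc.stderr.strip()}"
    return proc.stdout

# JSON schema describing the tool to the model; appended to TOOLS.
curl_http_tool = {
    "name": "curl_http",
    "description": "Retrieve a web page or fetch data from an API via curl.",
    "input_schema": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "The URL to fetch"},
            "method": {"type": "string", "description": "HTTP method (default GET)"},
        },
        "required": ["url"],
    },
}
```

The dispatch side is one more `elif` in the tool loop, exactly as the summary above describes.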
Let’s test it with a live API call:
🧑💻 please fetch the data from the weather api here: https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&hourly=soil_temperature_0_to_10cm&models=gfs_seamless&timezone=America%2FNew_York&temperature_unit=fahrenheit
🦀: The data was fetched successfully! Here's a clean breakdown of what was returned:
---
### 📍 Location Details
| Field | Value |
|---|---|
| **Latitude** | 52.54° N |
| **Longitude** | 13.36° E |
| **Elevation** | 38.0 m |
| **Timezone** | America/New_York (GMT-5) |
| **Model Used** | GFS Seamless |
---
### 🌱 Hourly Soil Temperature (0–10 cm depth) in °F
The forecast covers **March 3–9, 2026**, with hourly readings. Here are some highlights:
| Date | Min Temp (°F) | Max Temp (°F) |
|---|---|---|
| **Mar 3** | 40.7°F | 50.4°F |
| **Mar 4** | 37.3°F | 47.9°F |
| **Mar 5** | 36.7°F | 46.8°F |
| **Mar 6** | 38.7°F | 49.1°F |
| **Mar 7** | 40.1°F | 51.8°F |
| **Mar 8** | 40.1°F | 47.4°F |
| **Mar 9** | 42.1°F | 49.8°F |

It works. The tool would need more hardening for production use (error handling, timeouts, header configuration), but as a proof of concept it demonstrates the pattern cleanly.
This is a proof of concept, not production code. The assistant-generated additions were accepted largely as-is — no error handling, no input validation, no guardrails on what files can be read or written. That’s fine for a local experiment where you control the prompts, but it would be a problem anywhere else.
A few things worth addressing before taking this further:
- The `write_file` tool will happily overwrite any path the model provides. In practice you'd want to restrict it to a working directory, or at minimum prompt the user before writing.

A few directions worth exploring from here:

- Swapping in a different model or provider is a change confined to `chat`. The rest of the code doesn't change.
- Point the API client at localhost and run models entirely offline. Useful for sensitive codebases.
- A git tool that runs `git diff`, `git log`, or `git status` gives the assistant meaningful context about what's changed and why.

The real power here is that you can shape the assistant to fit a specific use case (a domain-specific coding tool, a documentation writer, a test generator) rather than accepting a general-purpose default. And once you have file system tools, you can use the assistant itself to do the shaping.
Over these three parts we’ve gone from a basic conversation loop to an assistant that can read and write files, call external APIs, switch models at runtime, and extend itself by writing its own tools. None of the individual pieces are complicated — the Anthropic API is well-designed and the patterns are consistent — but putting them together reveals something genuinely useful.
The self-modification angle is the part I find most interesting. Once the assistant can act on its own source code, the workflow shifts: instead of writing features by hand, you describe what you want and let the model implement it. That’s not a replacement for understanding what’s happening under the hood — you still need to read the output, test it, and decide whether it’s correct — but it changes the shape of the work.
Full source is available on GitHub.
