Part 2 adds three tools: read a file, write a file, and run a shell command. With those three primitives the assistant can actually inspect your codebase and make changes, at which point it starts to feel like a real coding tool. The full source for Part 1 is on GitHub.
Here is the code from the previous article: a basic loop between the user and the LLM, plus a few slash commands.
import anthropic

client = anthropic.Anthropic()

conversation_history = []


def list_models():
    models = client.models.list()
    for model in models.data:
        print(model.id)


def chat_streaming(user_message: str) -> str:
    conversation_history.append({"role": "user", "content": user_message})
    full_response = ""
    print("\n🦙 ", end="", flush=True)
    with client.messages.stream(
        model="claude-sonnet-4-6",
        max_tokens=8096,
        system="You are a coding assistant. Help the user write, understand, and debug code.",
        messages=conversation_history,
    ) as stream:
        for text in stream.text_stream:
            print(text, end="", flush=True)
            full_response += text
    print("\n")
    conversation_history.append({"role": "assistant", "content": full_response})
    return full_response


def chat(user_message: str) -> str:
    conversation_history.append({"role": "user", "content": user_message})
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=8096,
        system="You are a coding assistant. Help the user write, understand, and debug code.",
        messages=conversation_history,
    )
    assistant_message = response.content[0].text
    conversation_history.append({"role": "assistant", "content": assistant_message})
    return assistant_message


def handle_slash_command(command: str) -> bool:
    """Returns True if the input was a slash command."""
    if command == "/clear":
        conversation_history.clear()
        print("Conversation cleared.\n")
    elif command == "/models":
        list_models()
    elif command == "/help":
        print("/clear - clear conversation history")
        print("/models - list available models")
        print("/help - show this message")
        print("/exit - quit\n")
    elif command in ("/exit", "/quit"):
        raise SystemExit
    else:
        print(f"Unknown command: {command}\n")
    return True


def main():
    print("Coding assistant ready. Type /help for commands.\n")
    while True:
        user_input = input("🧑‍💻 ").strip()
        if not user_input:
            continue
        if user_input.startswith("/"):
            handle_slash_command(user_input)
            continue
        chat_streaming(user_input)


if __name__ == "__main__":
    main()

Tools in the Anthropic API have two parts: a JSON schema that describes the tool to the model, and a Python function that actually runs when the model calls it.
The schema tells Claude the tool's name, what it does, and what arguments it expects:
read_file_tool = {
    "name": "read_file",
    "description": "Read the contents of a file at the given path and return them as a string.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {
                "type": "string",
                "description": "The path to the file to read.",
            }
        },
        "required": ["path"],
    },
}

The Python function that executes when the model invokes read_file:
def read_file(path: str) -> str:
    with open(path, "r") as f:
        return f.read()

When the model decides to call this tool, it returns a response with stop_reason="tool_use" and a tool_use block containing the tool name and its arguments. You run the matching Python function with those arguments and send the result back as a tool_result message so the model can continue its reply.
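A concrete sketch of the shapes involved may help. Assuming the model asked to read main.py, the tool_use block and the tool_result you send back look roughly like this (the id value is illustrative, the API generates the real one):

```python
# What the model returns: a tool_use content block (shown here as a plain dict).
tool_use_block = {
    "type": "tool_use",
    "id": "toolu_01A2B3C4",          # illustrative id; the API generates it
    "name": "read_file",
    "input": {"path": "main.py"},
}

# What you send back: a user message whose content is a list of tool_result blocks.
tool_result_message = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": tool_use_block["id"],  # must match the tool_use id
            "content": "print('hello')\n",        # the function's return value
        }
    ],
}
```

The tool_use_id link is what lets the model pair each result with the call that produced it, even when several tools run in one turn.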
Two things need to change: the system prompt should tell Claude it has a read_file tool, and chat needs an inner loop that handles tool calls before returning the final reply.

When Claude wants to use a tool it returns stop_reason="tool_use" instead of "end_turn". The response content is a list that can contain both text blocks and tool_use blocks. You append that whole list to history as the assistant turn, run each tool, collect the results, append them as a user turn, then call the API again. Claude picks up where it left off. This repeats until stop_reason is "end_turn".
TOOLS = [read_file_tool]

SYSTEM = (
    "You are a coding assistant. Help the user write, understand, and debug code. "
    "You have access to a read_file tool. Use it to inspect files the user mentions."
)


def chat(user_message: str) -> str:
    conversation_history.append({"role": "user", "content": user_message})
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-6",
            max_tokens=8096,
            system=SYSTEM,
            tools=TOOLS,
            messages=conversation_history,
        )
        if response.stop_reason == "tool_use":
            conversation_history.append({"role": "assistant", "content": response.content})
            tool_results = []
            for block in response.content:
                if block.type == "tool_use":
                    if block.name == "read_file":
                        result = read_file(**block.input)
                    else:
                        result = f"Unknown tool: {block.name}"
                    tool_results.append({
                        "type": "tool_result",
                        "tool_use_id": block.id,
                        "content": result,
                    })
            conversation_history.append({"role": "user", "content": tool_results})
        else:
            assistant_message = next(
                block.text for block in response.content if hasattr(block, "text")
            )
            conversation_history.append({"role": "assistant", "content": assistant_message})
            return assistant_message


def main():
    print("Coding assistant ready. Type /help for commands.\n")
    while True:
        user_input = input("🧑‍💻 ").strip()
        if not user_input:
            continue
        if user_input.startswith("/"):
            handle_slash_command(user_input)
            continue
        response = chat(user_input)
        print(f"\n🦙 {response}\n")

The outer while True in main is the user conversation loop. The inner while True in chat is the tool-execution loop: it keeps calling the API until Claude is done using tools and produces a final text reply.
Writing follows the same pattern as reading: a schema and a function. The schema adds a second required argument, content, for what to write.
write_file_tool = {
    "name": "write_file",
    "description": "Write content to a file at the given path, creating it if it does not exist.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {
                "type": "string",
                "description": "The path to the file to write.",
            },
            "content": {
                "type": "string",
                "description": "The content to write to the file.",
            },
        },
        "required": ["path", "content"],
    },
}

The Python function:
def write_file(path: str, content: str) -> str:
    with open(path, "w") as f:
        f.write(content)
    return f"Wrote {len(content)} characters to {path}."

Returning a confirmation string matters: Claude reads the tool_result and uses it to confirm the action succeeded before continuing its reply.
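The same idea extends to failures. One option, a sketch that goes beyond the code above, is to have the tool function return an error description instead of raising, so a bad path becomes information Claude can act on rather than a crash:

```python
def read_file(path: str) -> str:
    """Return the file's contents, or an error description the model can act on."""
    try:
        with open(path, "r") as f:
            return f.read()
    except OSError as e:
        # This string goes back in the tool_result; Claude can see the
        # failure and, for example, ask the user for the correct path.
        return f"Error reading {path}: {e}"
```

Without something like this, an exception inside a tool kills the whole loop mid-turn and the conversation history is left dangling on an unanswered tool_use block.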
Add the new tool to TOOLS and its branch to the dispatch block inside chat:

TOOLS = [read_file_tool, write_file_tool]

if block.name == "read_file":
    result = read_file(**block.input)
elif block.name == "write_file":
    result = write_file(**block.input)
else:
    result = f"Unknown tool: {block.name}"

With filesystem tools in place, the assistant can now read and modify its own context: a foundation for building richer tooling. We'll take that further in part 3. Full source is available on GitHub.
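As the tool list grows, the if/elif chain gets repetitive. One refactor worth considering (a sketch, not from the article; the names TOOL_FUNCTIONS and dispatch are mine) is a dict mapping each schema's "name" to its Python function, so registering a tool means adding one entry:

```python
def read_file(path: str) -> str:
    with open(path, "r") as f:
        return f.read()


def write_file(path: str, content: str) -> str:
    with open(path, "w") as f:
        f.write(content)
    return f"Wrote {len(content)} characters to {path}."


# One entry per tool: the schema's "name" field maps to the Python function.
TOOL_FUNCTIONS = {
    "read_file": read_file,
    "write_file": write_file,
}


def dispatch(name: str, tool_input: dict) -> str:
    fn = TOOL_FUNCTIONS.get(name)
    if fn is None:
        return f"Unknown tool: {name}"
    return fn(**tool_input)
```

The tool-use loop then calls dispatch(block.name, block.input) instead of branching on each name, and the unknown-tool fallback lives in one place.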
