How to Build Your First Local AI Agent (For Python Developers)
Goal: Build a Python script that can “read” your codebase, find logic bugs, and explain them to you—running entirely on your laptop for free.
Phase 1: The “Brain” (Ollama Setup)
Before writing code, we need to install the intelligence.
Concept: Inference Engine vs. Model
The Model (Qwen): Think of this as the “Music File” (MP3). It’s the frozen mathematical weights that know how to code.
The Inference Engine (Ollama): Think of this as the “Music Player” (Spotify). It loads the massive file into your RAM and plays it.
Step 1: Install Ollama
Go to ollama.com.
Click Download (Mac/Windows/Linux).
Install it like a normal app.
Verify it works: Open your terminal (Command Prompt or Terminal) and type:
ollama --version
(If it prints a version number, you are good.)
Step 2: Get a Hugging Face Account (Optional but Recommended)
While Ollama downloads models automatically, as an AI Engineer, you need to know where they come from.
Go to huggingface.co.
Click Sign Up and create an account.
Why? This is the “GitHub of AI.” It is where you find new models (like Qwen, Llama, Mistral) to try in the future.
Step 3: Download the Model
We will use Qwen 2.5 Coder with tool support. The base qwen2.5-coder model doesn’t reliably support function calling, so we’ll use a community variant that adds this capability.
- 7B (7 Billion Parameters): The “Junior Dev.” Fast, runs on almost any laptop (8GB RAM).
- 32B (32 Billion Parameters): The “Senior Dev.” Smarter, but requires a heavy-duty laptop (32GB+ RAM).
Action: In your terminal, run this command to pull the 7B version (safest start):
ollama pull hhao/qwen2.5-coder-tools:7b
(You will see a progress bar downloading ~4.7 GB).
Step 4: Test the Brain
Let’s make sure it’s actually thinking. In your terminal, run:
ollama run hhao/qwen2.5-coder-tools:7b
Type: Write a Python function to reverse a string. If it writes code, your “Brain” is ready. Type /bye to exit.
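The same test works over HTTP, which is exactly how our agent will talk to the model later. Ollama serves an OpenAI-compatible endpoint at `http://localhost:11434/v1/chat/completions` (the default port). Here is a minimal sketch using only the standard library; it assumes the server is running and the model is pulled, and degrades gracefully if not:

```python
import json
import urllib.request

# The request body follows the OpenAI chat-completions shape.
payload = {
    "model": "hhao/qwen2.5-coder-tools:7b",
    "messages": [
        {"role": "user", "content": "Write a Python function to reverse a string."}
    ],
}

body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=120) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
except OSError as e:
    # Server not running yet (or model not pulled)
    print(f"Could not reach Ollama: {e}")
```

If this prints code instead of an error, your "Brain" is reachable programmatically, not just from the terminal.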
Phase 2: The Project Setup
Now we create the workspace.
Step 5: Create the Project Folder
mkdir ai-bug-agent
cd ai-bug-agent
Step 6: Create the Virtual Environment (.venv)
This keeps your AI libraries separate from your system Python.
python3 -m venv .venv
source .venv/bin/activate
# (On Windows use: .venv\Scripts\activate)
Step 7: Install Libraries
We only need one main library: openai.
Wait, why OpenAI? Ollama exposes an OpenAI-compatible API. By using this standard library, you can swap your local model for GPT-4 or Claude later by changing only the base URL and model name.
pip install openai
Phase 3: Building the “Hands” (Tools)
An AI model is a brain in a jar. It can’t see your files unless we give it “Eyes” (Tools).
Concept: Function Calling
Normally, LLMs just output text. Function Calling is when the LLM says: “I don’t want to talk yet. I want to run the function named read_file.” Your script catches this request, runs the code, and gives the result back to the LLM.
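Before we build the real thing, here is the shape of that round trip sketched with plain dictionaries (no model involved). The `read_file` stub and the `call_1` id are placeholders for illustration; note that the arguments arrive as a JSON *string*, which is why we need `json.loads`:

```python
import json

# What the LLM sends back when it wants a tool (simplified shape):
tool_call = {
    "id": "call_1",
    "function": {"name": "read_file", "arguments": '{"filepath": "main.py"}'}
}

# Your script catches the request and runs the real function...
def read_file(filepath):
    return f"# pretend contents of {filepath}"

args = json.loads(tool_call["function"]["arguments"])
result = read_file(**args)

# ...then hands the result back to the LLM as a "tool" message:
tool_message = {"role": "tool", "tool_call_id": tool_call["id"], "content": result}
print(tool_message)
```

Our agent loop in Phase 4 does exactly this, just with real API objects instead of dictionaries.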
Step 8: Create tools.py
Create a new file named tools.py and paste this code.
```python
import os

def list_files(directory="."):
    """
    Lists all files in the directory so the Agent knows what exists.
    """
    file_list = []
    # Ignore junk folders to save 'Context Window' (Memory)
    ignore = {'.git', '__pycache__', '.env', 'node_modules', '.venv', '.DS_Store'}
    for root, dirs, files in os.walk(directory):
        # Filter out hidden/ignored directories
        dirs[:] = [d for d in dirs if d not in ignore and not d.startswith('.')]
        for file in files:
            if file.startswith('.'):
                continue
            file_list.append(os.path.join(root, file))
    return "\n".join(file_list)

def read_file(filepath):
    """
    Reads the content of a specific file.
    Includes a safety limit so we don't crash the model with huge files.
    """
    if not os.path.exists(filepath):
        return f"Error: File '{filepath}' does not exist."
    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            content = f.read()
        # Safety: refuse files over 20,000 characters.
        if len(content) > 20000:
            return (f"Error: File is too large ({len(content)} chars). "
                    "Reading huge files consumes your Context Window too fast.")
        return content
    except Exception as e:
        return f"Error reading file: {str(e)}"
```
Phase 4: Building the “Logic” (The Agent)
Now we write the brain that uses the tools.
Concept: The System Prompt
This is the “God Mode” instruction. It tells the AI who it is (Persona) and how it should behave (Rules). It is the most important line of code you will write.
Step 9: Create agent.py
Create a new file named agent.py and paste this code.
```python
import sys
import json
from openai import OpenAI
from tools import list_files, read_file  # Import our tools

# --- CONFIGURATION ---
# We point the OpenAI client to our LOCAL Ollama server
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Required dummy key (Ollama ignores it)
)

# Use the model we downloaded earlier
MODEL_NAME = "hhao/qwen2.5-coder-tools:7b"

# --- SYSTEM PROMPT ---
# This defines the AI's personality and instructions
SYSTEM_PROMPT = """
You are a Senior QA Analyst. You have read-only access to the file system.
Your goal is to investigate the User's Bug Report by tracing code execution.

PROTOCOL:
1. Call `list_files` to map the territory.
2. Call `read_file` on relevant files.
3. Trace the logic of imports and function calls.
4. When you find the bug, output the FINAL REPORT.

CONSTRAINTS:
- Do NOT guess. You must see the code before you blame it.
- If the bug is a "Logic Error" (e.g. truthiness, incorrect operator), explain WHY.

FINAL OUTPUT FORMAT:
Return the final answer in this exact JSON format (no markdown):
{
  "severity": "High/Medium/Low",
  "file": "path/to/buggy_file.py",
  "cause": "Technical explanation...",
  "fix": "Code block of the fix..."
}
"""

def run_agent(user_query):
    # This list is the "Context Window" - the chat history
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query}
    ]
    print(f"🕵️ Agent starting investigation: '{user_query}'")

    # --- THE AGENT LOOP ---
    # The agent will loop until it decides it's done
    while True:
        # 1. THINK: Ask the LLM what to do next
        response = client.chat.completions.create(
            model=MODEL_NAME,
            messages=messages,
            # We describe our tools to the LLM here
            tools=[
                {
                    "type": "function",
                    "function": {
                        "name": "list_files",
                        "description": "List all files in a directory",
                        "parameters": {"type": "object", "properties": {"directory": {"type": "string"}}}
                    }
                },
                {
                    "type": "function",
                    "function": {
                        "name": "read_file",
                        "description": "Read contents of a file",
                        "parameters": {
                            "type": "object",
                            "properties": {"filepath": {"type": "string"}},
                            "required": ["filepath"]
                        }
                    }
                }
            ]
        )

        # Get the message from the model
        msg = response.choices[0].message

        # 2. ACT: Did the model ask to use a tool?
        if msg.tool_calls:
            # Add the model's "thought" to history so it remembers it asked
            messages.append(msg)
            for tool_call in msg.tool_calls:
                func_name = tool_call.function.name
                # Parse arguments (e.g., {"filepath": "main.py"})
                args = json.loads(tool_call.function.arguments)
                print(f"🤖 Agent is running: {func_name}({args})")

                # Execute the actual Python function
                if func_name == "list_files":
                    result = list_files(args.get("directory", "."))
                elif func_name == "read_file":
                    result = read_file(args["filepath"])
                else:
                    result = "Error: Unknown tool"

                # 3. OBSERVE: Feed the result back to the model
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": result
                })
        # 4. FINISH: The model didn't ask for a tool, so it must be done
        else:
            print("-" * 40)
            print("📝 FINAL REPORT:\n")
            print(msg.content)
            break

if __name__ == "__main__":
    # Get the bug report from the command line argument
    if len(sys.argv) > 1:
        query = sys.argv[1]
    else:
        query = "Find the bug in the code."
    run_agent(query)
```
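Small local models don't always obey the "no markdown" instruction, so the FINAL REPORT sometimes arrives wrapped in ```json fences. If you want to post-process the report programmatically, here is a defensive parser sketch; the fence-stripping is an assumption about a common failure mode, not part of the agent above:

```python
import json

def parse_report(text):
    """Try to pull the JSON report out of the model's final message."""
    cleaned = text.strip()
    # Models often wrap JSON in ```json ... ``` fences despite instructions.
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")
        if cleaned.startswith("json"):
            cleaned = cleaned[4:]
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return None  # Fall back to showing the raw text

print(parse_report('{"severity": "High", "file": "main.py", "cause": "x", "fix": "y"}'))
```

A `None` return is your signal to just print `msg.content` verbatim instead of crashing.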
Phase 5: The Test (Verify it works)
Now we need to create a “crime scene” for the detective to solve.
Step 10: Create the Buggy Repo
Create a new folder named test_repo inside your project folder.
mkdir test_repo
Create test_repo/main.py (the crime scene):

```python
def check_status(user_id):
    # Returns a dictionary like {"error": "..."} or {"success": True}
    # BUG: In Python, a non-empty dictionary is treated as True!
    result = {"error": "Database connection failed"}
    if result:
        print("✅ Operation Successful!")
    else:
        print("❌ Operation Failed!")

if __name__ == "__main__":
    check_status(1)
```
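This is the trap the agent should catch: in Python, any non-empty dictionary is truthy, even one that only carries an error. You can verify the behavior yourself in a few lines:

```python
# A non-empty dict is truthy -- even if it only contains an error.
error_result = {"error": "Database connection failed"}
print(bool(error_result))   # True -> the "success" branch runs

# An empty dict is falsy.
print(bool({}))             # False

# A correct check inspects the contents, not the container:
if "error" in error_result:
    print("Operation Failed!")
```

A good fix report from the agent should suggest something like the membership check above (or checking `result.get("success")`) rather than `if result:`.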
Step 11: Run the Agent!
Go back to your root folder and run this command:
python agent.py "In test_repo/main.py, the code prints 'Operation Successful' even when there is an error. Tell me why."
What you should see (The Magic):
- Thinking: The agent will start.
- Action: It will run `list_files`.
- Observation: It sees `test_repo/main.py`.
- Action: It runs `read_file('test_repo/main.py')`.
- Final Report: It should output a JSON explanation saying: "The bug is that the dictionary `{'error': ...}` evaluates to `True` in the if-statement."
Congratulations! You have just built a functional AI Agent that runs locally on your machine. You are now an AI Engineer.