I spent three days building my first AI agent, and it was terrible.
Not because I chose the wrong model. Not because my prompts were bad. The agent just... didn't work. It would give vague answers, get confused by simple requests, and sometimes try to do things it clearly wasn't capable of doing.
Then I discovered something that changed everything: the quality of an AI agent depends almost entirely on how you design its tools.
This article is what I wish I'd read before building that first agent. If you're exploring AI agent development—whether you're an Angular developer like me looking to add AI features, or just getting started with AI engineering—this framework will save you a lot of frustration.
What Are Tools in AI Agents?
Let me start with the basics because this confused me at first.
When we talk about "tools" in the context of AI agents, we're talking about functions the AI can call. That's it. Nothing fancy.
Here's a simple example:
const weatherTool = {
  name: "get_weather",
  description: "Get current weather for a city",
  parameters: {
    city: {
      type: "string",
      description: "City name"
    }
  },
  execute: async (city) => {
    // fetch resolves to a Response, so parse the JSON body before returning
    const response = await fetch(`https://api.weather.com/${city}`);
    return response.json();
  }
};
Without this tool, if someone asks "What's the weather in Tokyo?", the agent can only say "I don't have access to current weather data."
With this tool, the agent can actually call the function, get real data, and give a useful answer.
Tools transform your agent from a conversationalist into something that can actually do things.
This is similar to how we think about components in Angular—each tool should do one thing well, and you compose them together to build something powerful.
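Worth noting: the execute function never leaves your code. Only the name, description, and parameters are sent to the model, usually as a JSON-schema-style definition. As a rough illustration, here's roughly what the weather tool above would look like in an OpenAI-style function-calling format (the exact shape varies by provider, so treat this as a sketch):
// Illustrative only: the schema the model sees, in an OpenAI-style format.
// The execute function stays in your application code.
const weatherToolSchema = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name" }
      },
      required: ["city"]
    }
  }
};
The model never runs your code. It replies with something like "call get_weather with city = Tokyo", your application runs execute, and the result goes back into the conversation.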
The Pattern That Changed My Approach
After my first failed attempt, I found this framework in a book about AI engineering, and it clicked immediately:
Ask yourself: "What would a human do to solve this problem?"
Then turn each step into a tool.
That's it. Simple but incredibly powerful.
Real Example: The Book Recommendation Agent
Let me show you an example that made this concept crystal clear for me.
Imagine you're building an agent that recommends books from investor reading lists. You have a database of 10,000 books that various investors have recommended.
My first instinct (WRONG): Dump all 10,000 books into the agent's context and let it figure things out.
What actually happened: The agent got completely overwhelmed. It couldn't navigate the data effectively. Recommendations were poor or generic.
The better approach: Think like a human analyst. If I were manually analyzing these book recommendations, what would I do?
- Look up which investors recommended specific books
- Filter by genre or topic
- Sort by popularity (how many investors recommended it)
- Compare recommendations between different types of investors (founders vs VCs)
Each of these operations became a tool:
Tools created:
- get_books_by_investor(investor_name)
- get_books_by_genre(genre)
- sort_books_by_recommendations(books)
- get_investors_by_type(type) // founders vs VCs
Result: The agent could now navigate the data intelligently and make genuinely good recommendations.
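To make that concrete, here's a rough sketch of what one of these tools might look like, following the same shape as the weather tool earlier. The Book type and the tiny in-memory array are placeholders for the real 10,000-book database:
// Sketch only: in practice the data would come from a real database.
type Book = { title: string; genre: string; recommendedBy: string[] };

const books: Book[] = [
  { title: "Example Title", genre: "business", recommendedBy: ["Example Investor"] }
  // ...the other 9,999 books
];

const getBooksByInvestor = {
  name: "get_books_by_investor",
  description:
    "Return all books recommended by a specific investor. " +
    "Use this when the user asks what a particular person recommends.",
  parameters: {
    investor_name: {
      type: "string",
      description: "Full name of the investor"
    }
  },
  execute: async (investorName: string) =>
    books.filter((book) => book.recommendedBy.includes(investorName))
};
The other three tools follow the same pattern: each one is a thin wrapper around a single query.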
This is exactly how I think about building features in Angular. When I'm creating a complex component, I break it down into smaller, focused services and components that each handle one responsibility. The same principle applies to AI agent tools.
Why Tool Design Matters More Than You Think
Here's what surprised me: tool design has more impact on your agent's quality than almost anything else.
You could have:
- Perfect prompts
- The best model available
- Great context management
But if your tools are poorly designed, your agent will still struggle.
On the flip side, with well-designed tools, even a simpler model can deliver excellent results.
Tools = the capabilities of your agent.
Think of it this way: an LLM without tools is like a smart person with no hands. They can think, they can talk, but they can't actually do anything. Tools are what let your agent take action.
The Framework I Now Use
When I start building an agent, I follow this process:
Step 1: Define the Goal
Be specific. "Build a customer support agent" is too vague.
Better: "Build an agent that helps customers troubleshoot login issues, check order status, and create support tickets."
Step 2: List Human Actions
If a human were doing this job, what specific actions would they take?
For customer support:
- Look up customer account information
- Search help documentation
- Check order/subscription status
- Create a support ticket
- Send confirmation email
Step 3: One Action = One Tool
Each action becomes a focused tool:
Tools needed:
- get_customer_info(customer_id)
- search_help_docs(query)
- check_order_status(order_id)
- create_ticket(issue_type, description)
- send_email(to, subject, body)
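Here's a rough sketch of two of these, with the back-end calls stubbed out. The response shapes are made up; the point is the one-action-one-tool structure:
// Sketch: two of the five support tools, with stubbed back ends.
const getCustomerInfo = {
  name: "get_customer_info",
  description:
    "Look up a customer's account details (email, plan, account status) by customer ID. " +
    "Use this first when a customer reports an account-specific problem.",
  parameters: {
    customer_id: { type: "string", description: "Internal customer ID, e.g. 'cus_123'" }
  },
  execute: async (customerId: string) => {
    // Stub: replace with your real CRM or billing lookup
    return { id: customerId, plan: "pro", status: "active" };
  }
};

const createTicket = {
  name: "create_ticket",
  description:
    "Create a support ticket when the issue can't be resolved in the conversation. " +
    "Returns the new ticket ID so the agent can share it with the customer.",
  parameters: {
    issue_type: { type: "string", description: "Category, e.g. 'login', 'billing'" },
    description: { type: "string", description: "Short summary of the problem" }
  },
  execute: async (issueType: string, description: string) => {
    // Stub: replace with your ticketing system's API
    return { ticketId: `TICKET-${Date.now()}`, issueType, description };
  }
};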
Step 4: Write Clear Descriptions
This part is crucial. The tool description isn't just for you—the agent actually reads it to decide when to use each tool.
Bad description:
{
  name: "search",
  description: "Search stuff"
}
The agent thinks: "When do I use this? What does it search?"
Good description:
{
  name: "search_help_docs",
  description:
    "Search company help documentation for troubleshooting steps. " +
    "Use this when the customer has a technical issue. " +
    "Returns relevant articles with solutions.",
  parameters: {
    query: {
      type: "string",
      description: "Search terms describing the issue (e.g. 'login error', 'payment failed')"
    }
  }
}
The agent thinks: "Clear! Use this for technical issues, search help docs."
Coming from a UX design background, I think of tool descriptions as the "interface" between the AI and your functionality. Good UX principles apply here—be clear, be specific, provide examples.
Common Mistakes I'm Learning to Avoid
Mistake #1: Too Many Tools
I made this mistake early. I thought "more tools = more capable agent."
Wrong.
With 25 tools: The agent gets confused about which to use, slows down weighing all the options, and every request costs more because all 25 tool definitions are sent along with the prompt.
With 5 focused tools: Clear choices, fast decisions, cheaper to run.
Rule I follow now: Start with 3-5 tools. Only add more when there's a clear need.
Mistake #2: Vague Tool Descriptions
Remember: the agent reads your tool descriptions to decide what to use.
Vague descriptions = confused agent = poor results.
Always include:
- What the tool does
- When to use it
- What it returns
- Example inputs
Mistake #3: Tools That Do Too Much
I initially created a handle_customer_issue() tool that did everything—looked up the customer, searched docs, created tickets, sent emails.
The problem: The agent couldn't control the sequence. It was a black box.
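A sketch of the shape it had (the helper functions are hypothetical stand-ins, not my actual code): the whole sequence is hard-coded inside execute, so the agent can only press one big button.
// Sketch of a monolithic tool. The declared helpers stand in for real
// back-end calls; the problem is the fixed sequence inside execute.
declare function lookupCustomer(email: string): Promise<unknown>;
declare function searchDocs(query: string): Promise<unknown>;
declare function openTicket(issue: string): Promise<unknown>;
declare function sendEmail(to: string, body: string): Promise<void>;

const handleCustomerIssue = {
  name: "handle_customer_issue",
  description: "Handle a customer issue end to end",
  parameters: {
    customer_email: { type: "string", description: "Customer's email" },
    issue: { type: "string", description: "What the customer reported" }
  },
  execute: async (customerEmail: string, issue: string) => {
    const customer = await lookupCustomer(customerEmail); // always runs
    const docs = await searchDocs(issue);                 // always runs
    const ticket = await openTicket(issue);               // even if the docs already solved it
    await sendEmail(customerEmail, "We've opened a ticket for you.");
    return { customer, docs, ticket };
  }
};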
Better approach: Separate tools that the agent chains together as needed. This gives the agent flexibility to handle different scenarios.
It's similar to the single responsibility principle we use in Angular services—each service should do one thing well.
Mistake #4: Not Testing Tools First
This one cost me debugging time.
I'd build a tool, give it to the agent, and when things failed, I wouldn't know if it was the tool or the agent logic.
Better workflow:
- Build tool
- Test tool manually (unit test it)
- Verify it works correctly
- Give to agent
- If agent fails now, you know it's the agent logic, not the tool
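For example, before check_order_status ever reaches the agent, I'll exercise it directly with a couple of Jasmine/Jest-style tests. This is a sketch: the import path and expected values are hypothetical, and checkOrderStatus is assumed to be the tool object from Step 3.
// Sketch: unit-test the tool on its own before wiring it to the agent.
// The import path and expected values are hypothetical - adjust to your setup.
import { checkOrderStatus } from "./tools/check-order-status";

describe("check_order_status tool", () => {
  it("returns the status for a known order", async () => {
    const result = await checkOrderStatus.execute("order_123");
    expect(result.status).toBe("shipped");
  });

  it("handles unknown order IDs without throwing", async () => {
    const result = await checkOrderStatus.execute("does_not_exist");
    expect(result.error).toBeDefined();
  });
});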
How Tool Calling Actually Works
Understanding the execution loop helped me design better tools.
Here's what happens when an agent runs:
1. User: "I can't log in to my account"
2. Agent thinks:
"I need to help with login. Let me check their account status."
3. Agent calls: check_account_status(user_id)
4. Tool returns: {status: "locked", reason: "too many failed attempts"}
5. Agent thinks:
"Account is locked. I should unlock it."
6. Agent calls: unlock_account(user_id)
7. Tool returns: {success: true}
8. Agent responds:
"I've unlocked your account. Please try logging in now."
Key insight: The agent decides which tools to use and when. You're not programming IF/THEN logic. The agent figures out the sequence based on the situation.
This is both powerful and a bit mind-bending if you're used to traditional programming. It's more like declarative programming—you declare what tools are available, and the system figures out how to use them.
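If it helps to see it as code, the runtime loop is roughly this. It's a sketch, not any particular SDK: callModel stands in for whichever provider or framework you're using, and the Tool shape mirrors the tool objects shown earlier.
// Rough sketch of the tool-calling loop. Everything here is illustrative.
type Tool = {
  name: string;
  description: string;
  execute: (...args: any[]) => Promise<unknown>;
};

type ToolCall = { name: string; args: any[] };
type Message = { role: "user" | "assistant" | "tool"; content: string };

declare function callModel(
  messages: Message[],
  tools: Tool[]
): Promise<{ content: string; toolCall?: ToolCall }>;

async function runAgent(userMessage: string, tools: Tool[]): Promise<string> {
  const messages: Message[] = [{ role: "user", content: userMessage }];

  while (true) {
    // The model sees the conversation plus every tool's name and description
    const reply = await callModel(messages, tools);

    // Plain-text answer: we're done
    if (!reply.toolCall) return reply.content;
    const call = reply.toolCall;

    // The model asked for a tool - our code runs it, not the model
    const tool = tools.find((t) => t.name === call.name);
    const result = tool
      ? await tool.execute(...call.args)
      : { error: `Unknown tool: ${call.name}` };

    // Feed the result back so the model can decide the next step
    messages.push({ role: "assistant", content: `Calling ${call.name}` });
    messages.push({ role: "tool", content: JSON.stringify(result) });
  }
}
The loop only ends when the model answers in plain text instead of asking for another tool. That's the whole trick behind the "agent decides the sequence" behavior.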
Practical Patterns I'm Finding Useful
As I build more agents, I'm noticing certain tool patterns that work well:
Data Retrieval Tools
get_customer_info(id)
search_database(query)
fetch_order_history(customer_id)
Action Tools
send_email(to, subject, body)
create_ticket(issue)
update_record(id, data)
Calculation Tools
calculate_total(items)
convert_currency(amount, from, to)
analyze_sentiment(text)
External API Tools
get_weather(city)
search_web(query)
translate_text(text, target_lang)
I'm starting to build a personal library of these patterns. When I need a new agent, I can quickly compose tools from these categories.
Applying This to Frontend Development
As an Angular developer, I see parallels to how we structure applications:
Bad Angular architecture:
// One giant service that does everything
class GodService {
  handleEverything() { /* thousands of lines */ }
}
Good Angular architecture:
// Focused services with clear responsibilities
class AuthService { }
class UserService { }
class OrderService { }
class NotificationService { }
The same principle applies to AI agent tools. Break things down into focused, composable pieces.
When I'm building AI features into Angular applications now (like I did with my Angular AI Chat Kit), I think about the AI's tools the same way I think about Angular services—each should have a clear, single responsibility.
The Emergent Behavior Surprise
Here's something that surprised me: with good tool design, agents start doing creative things you didn't explicitly program.
For example, with a meeting scheduler agent that has these tools:
- get_calendar(user_id)
- find_free_slots(calendar1, calendar2)
- create_meeting(attendees, time, duration)
- send_notification(user_id, message)
When someone asks "Schedule a meeting with Sarah next week", the agent figures out on its own:
- Get both calendars
- Find overlapping free time
- Create the meeting
- Send confirmation
You never programmed that exact sequence. The agent reasoned through it based on the tools available.
This is the "magic" of good tool design—the agent becomes more capable than the sum of its tools.
What I'm Building Next
I'm currently working on integrating AI agents into Angular applications, specifically for features in my Angular AI Chat Kit.
The lessons from tool design are directly applicable:
- Each API endpoint becomes a potential tool
- Frontend state management needs to work with agent actions
- User interactions trigger agent workflows
I'm also exploring how to make AI-assisted development workflows more efficient by treating development tasks as agent operations—code review, refactoring, documentation generation, each with specific tools.
Key Takeaways
If you're building AI agents, remember these points:
- Tool quality determines agent quality - spend time on tool design
- Ask "What would a human do?" - then make each step a tool
- Start with 3-5 tools - only add more when needed
- One tool = one job - let the agent chain them
- Write detailed descriptions - the agent reads them
- Test tools independently - before giving to agent
The framework is simple, but it works. I wish I'd known this before spending days debugging my first agent.
What's Next for You?
If you're building AI agents or adding AI features to your applications, try this framework with your next project.
Start small. Build 3-5 focused tools. See what the agent can do.
Then iterate. Add tools as you discover gaps in functionality.
I'm still early in my AI engineering journey, but this framework has already made a huge difference in how I approach agent development. It's one of those concepts that seems obvious in hindsight but isn't intuitive when you're starting out.
What are you building with AI agents? I'd love to hear about your experiences—especially if you've discovered other patterns that work well.
