Autonomous AI agents promise to handle complex tasks without constant human oversight. But agents that hallucinate calculations or invent data are worse than useless. This guide shows you how to build agents that reliably call deterministic tools via MCP.
## What Autonomous Agents Need
An autonomous AI agent is a system that can pursue goals over multiple steps, making decisions and taking actions without constant human input. For an agent to be useful (not just impressive), it needs several components:
### 1. Reasoning Capability
The core LLM must be able to break down tasks, plan steps, and adapt when things don't go as expected. This is what models like GPT-4, Claude, and Gemini provide.
### 2. Deterministic Tools
For any operation where correctness matters - math, data validation, encoding, timestamps - the agent needs tools that return predictable, accurate results every time. This is where MCP comes in.
### 3. Memory & Context
Agents need to track what they've done, what worked, and what their current state is. This can be conversation history, vector databases, or structured state stores.
### 4. Execution Environment
The agent needs somewhere to run - whether that's a serverless function, a long-running process, or a user's local machine.
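Put together, the four components above suggest a minimal agent shape. A sketch in plain JavaScript, where `llm`, `tools`, and `memory` are hypothetical stand-ins for whatever providers you wire in:

```javascript
// One step of a minimal agent loop: the LLM decides, tools execute,
// memory records. llm, tools, and memory are injected stand-ins.
async function agentStep(goal, memory, llm, tools) {
  // 1. Reasoning: the model plans the next action from goal + history
  const decision = await llm.decide({ goal, memory });

  if (decision.tool) {
    // 2. Deterministic tools: exact work is delegated, never guessed
    const result = await tools[decision.tool](decision.input);
    // 3. Memory: record what was done for the next step
    memory.push({ tool: decision.tool, result });
    return { done: false };
  }
  // 4. Execution environment: this function runs wherever you host it
  return { done: true, answer: decision.answer };
}
```

Driving `agentStep` in a `while` loop until `done` is true gives you the skeleton that the concrete examples later in this guide flesh out.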
## Reliable Tool Calling
Tool calling is the mechanism by which an LLM decides it needs external help and formats a request for an external function. MCP standardizes this:
```
User: "What's 15% of 847?"

Agent (internal): I should use a calculation tool for precision.

Tool Call: math/percentage
{
  "value": 847,
  "percentage": 15
}

Tool Response:
{
  "result": 127.05,
  "calculation": "847 × 0.15 = 127.05"
}

Agent: 15% of 847 is 127.05.
```
## Why Tools Beat Prompting
You might think "just ask the LLM to be careful" would work. It doesn't:
- Prompt engineering is fragile: "Think step by step" helps sometimes, not always
- Confidence doesn't equal correctness: an LLM sounds just as confident when it's wrong as when it's right
- Some tasks are impossible: LLMs genuinely cannot compute SHA-256 hashes or validate Luhn checksums
- Consistency matters: Tools return the same result every time; LLMs vary
## TinyFn for Agent Backends
TinyFn provides 500+ deterministic tools via MCP, covering the operations agents most commonly need:
| Category | Tools | Agent Use Cases |
|---|---|---|
| Math | Arithmetic, percentages, statistics | Financial calculations, data analysis |
| Validation | Email, URL, UUID, credit card, JSON | Input verification, data quality |
| Encoding | Base64, URL, hex, JWT decode | API integration, data transformation |
| Hash | SHA-256, MD5, HMAC, bcrypt | Security operations, checksums |
| DateTime | Parsing, formatting, timezones | Scheduling, logging, deadlines |
| Convert | Units, currencies, number systems | User-facing calculations |
| String | Count, reverse, slugify, case | Text processing, URL generation |
| Geo | Distance, bearing, coordinates | Location-based features |
## Architecture Patterns
### Pattern 1: Direct MCP Integration
The agent's LLM has direct access to TinyFn via MCP:
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│    User     │────>│    Agent    │────>│   TinyFn    │
│             │<────│    (LLM)    │<────│     MCP     │
└─────────────┘     └─────────────┘     └─────────────┘
                           │
                           │ MCP Protocol
                           ▼
                  Tool calls and responses
```
Best for: Simple agents, chatbots, coding assistants
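In practice, direct integration usually means registering the server in the MCP host's configuration. A sketch of a Claude Desktop-style `mcpServers` entry; the `@tinyfn/mcp-server` package name and env variable are illustrative assumptions, not confirmed TinyFn specifics:

```json
{
  "mcpServers": {
    "tinyfn": {
      "command": "npx",
      "args": ["-y", "@tinyfn/mcp-server"],
      "env": { "TINYFN_API_KEY": "your-api-key" }
    }
  }
}
```

Once registered, the host advertises every TinyFn tool to the model automatically; no per-tool wiring is needed.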
### Pattern 2: Orchestrated Tool Use
An orchestration layer decides which tools to call:
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│    User     │────>│ Orchestrator│────>│     LLM     │
│             │<────│             │<────│ (planning)  │
└─────────────┘     └──────┬──────┘     └─────────────┘
                           │
                    ┌──────┴──────┐
                    │             │
              ┌─────▼─────┐ ┌─────▼─────┐
              │  TinyFn   │ │   Other   │
              │   Tools   │ │ Services  │
              └───────────┘ └───────────┘
```
Best for: Complex workflows, multi-step tasks, production systems
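The orchestrator itself can be small. A minimal sketch with the planner and both backends injected as parameters (all names here are illustrative, not a fixed API):

```javascript
// Route each planned step to the right backend: deterministic steps
// go to the tool backend (TinyFn), everything else to other services.
async function orchestrate(task, { plan, callTool, callService }) {
  // The planning LLM turns the task into [{ kind, name, input }, ...]
  const steps = await plan(task);
  const results = [];
  for (const step of steps) {
    const run = step.kind === "tool" ? callTool : callService;
    results.push({ step: step.name, output: await run(step.name, step.input) });
  }
  return results;
}
```

Keeping the routing decision in code (rather than in the model) is what makes this pattern suit production systems: you can log, retry, and test each branch deterministically.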
### Pattern 3: Tool-Augmented RAG
Combine retrieval with tool calling:
```
User Query
     │
     ▼
┌─────────────────────────────────────────┐
│              Query Analysis             │
│ - Need facts?       → RAG retrieval     │
│ - Need calculation? → TinyFn tool       │
│ - Need both?        → Parallel execution│
└─────────────────────────────────────────┘
        │                    │
        ▼                    ▼
   ┌─────────┐          ┌─────────┐
   │ Vector  │          │ TinyFn  │
   │   DB    │          │   MCP   │
   └─────────┘          └─────────┘
        │                    │
        └─────────┬──────────┘
                  ▼
            LLM Response
```
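The query-analysis box can start as simple keyword routing before graduating to an LLM classifier. A toy sketch; the keyword patterns are illustrative only:

```javascript
// Decide which backends a query needs: retrieval ("rag"), a
// deterministic tool ("tool"), both, or neither. Real systems would
// use an LLM classifier; keyword rules keep the sketch self-contained.
function analyzeQuery(query) {
  const needsFacts = /\b(who|when|why|history|explain)\b/i.test(query);
  const needsCalc = /\d+\s*%|\b(calculate|convert|validate|hash|distance)\b/i.test(query);
  if (needsFacts && needsCalc) return "both";
  if (needsCalc) return "tool";
  if (needsFacts) return "rag";
  return "llm-only";
}
```

The "both" branch is where this pattern earns its keep: retrieval and tool calls are independent, so they can run in parallel before the final LLM synthesis.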
## Implementation Examples
### Python Agent with LangChain
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
import requests

# TinyFn tool wrapper
class TinyFnTool:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.base_url = "https://api.tinyfn.io/v1"

    def call(self, endpoint: str, params: dict) -> dict:
        response = requests.post(
            f"{self.base_url}/{endpoint}",
            headers={"X-API-Key": self.api_key},
            json=params,
        )
        response.raise_for_status()
        return response.json()

tinyfn = TinyFnTool("your-api-key")

# Define tools for the agent
tools = [
    Tool(
        name="calculate_percentage",
        func=lambda x: tinyfn.call("math/percentage", x),
        description="Calculate percentage of a number",
    ),
    Tool(
        name="validate_email",
        func=lambda x: tinyfn.call("validate/email", {"email": x}),
        description="Check if an email address is valid",
    ),
    Tool(
        name="hash_sha256",
        func=lambda x: tinyfn.call("hash/sha256", {"text": x}),
        description="Generate SHA-256 hash of text",
    ),
]

# Create the agent (the prompt must include an agent_scratchpad placeholder)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use tools for anything that must be exact."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
llm = ChatOpenAI(model="gpt-4")
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
```
### Node.js Agent
```javascript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

// Define TinyFn tools for Claude
const tools = [
  {
    name: "calculate_percentage",
    description: "Calculate a percentage of a number",
    input_schema: {
      type: "object",
      properties: {
        value: { type: "number", description: "The base number" },
        percentage: { type: "number", description: "The percentage to calculate" }
      },
      required: ["value", "percentage"]
    }
  },
  {
    name: "validate_json",
    description: "Validate JSON syntax",
    input_schema: {
      type: "object",
      properties: {
        json: { type: "string", description: "JSON string to validate" }
      },
      required: ["json"]
    }
  }
];

// Map Claude-facing tool names to TinyFn endpoints
const endpoints = {
  calculate_percentage: "math/percentage",
  validate_json: "validate/json"
};

// Tool execution
async function executeTool(name, input) {
  const response = await fetch(`https://api.tinyfn.io/v1/${endpoints[name]}`, {
    method: "POST",
    headers: {
      "X-API-Key": process.env.TINYFN_API_KEY,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(input)
  });
  return response.json();
}

// Agent loop
async function runAgent(userMessage) {
  const messages = [{ role: "user", content: userMessage }];
  while (true) {
    const response = await anthropic.messages.create({
      model: "claude-3-5-sonnet-20241022",
      max_tokens: 1024,
      tools,
      messages
    });
    if (response.stop_reason === "tool_use") {
      const toolUse = response.content.find((c) => c.type === "tool_use");
      const result = await executeTool(toolUse.name, toolUse.input);
      messages.push({ role: "assistant", content: response.content });
      messages.push({
        role: "user",
        content: [{ type: "tool_result", tool_use_id: toolUse.id, content: JSON.stringify(result) }]
      });
    } else {
      // Final answer: return the model's text block
      return response.content.find((c) => c.type === "text")?.text;
    }
  }
}
```
## Error Handling
Robust agents handle tool failures gracefully:
```javascript
// Promise-based sleep used for backoff
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function safeToolCall(toolName, params, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      const result = await executeTool(toolName, params);
      // Check for API-level errors
      if (result.error) {
        throw new Error(result.error);
      }
      return { success: true, data: result };
    } catch (error) {
      if (i === retries - 1) {
        return {
          success: false,
          error: `Tool ${toolName} failed after ${retries} attempts: ${error.message}`
        };
      }
      // Exponential backoff: 1s, 2s, 4s, ...
      await sleep(Math.pow(2, i) * 1000);
    }
  }
}
```
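The same pattern generalizes beyond tool calls. Here is a dependency-injected version you can unit test without a live API; the names are illustrative:

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry an async call with exponential backoff. The call is injected
// so the helper can be exercised without a network.
async function withRetries(call, retries = 3, baseDelayMs = 1000) {
  let lastError;
  for (let i = 0; i < retries; i++) {
    try {
      return { success: true, data: await call() };
    } catch (error) {
      lastError = error;
      // Back off 1x, 2x, 4x... the base delay between attempts
      if (i < retries - 1) await sleep(baseDelayMs * 2 ** i);
    }
  }
  return { success: false, error: `failed after ${retries} attempts: ${lastError.message}` };
}
```

Wrapping each tool call this way lets the agent report a structured failure back to the LLM, which can then retry with different input or tell the user honestly that the operation failed.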
## Build Agents That Don't Hallucinate
Give your autonomous agents access to 500+ deterministic tools.
Get Free API Key