OpenAI's function calling changed how developers build AI applications. The Model Context Protocol (MCP) goes further, adding cross-platform tool sharing. This guide compares the two approaches and shows how TinyFn works as a universal tool provider for any AI model.
## OpenAI Function Calling Recap
OpenAI introduced function calling in June 2023, allowing GPT models to generate structured JSON that maps to predefined functions. Here's how it works:
```python
import openai

tools = [{
    "type": "function",
    "function": {
        "name": "calculate_factorial",
        "description": "Calculate the factorial of a number",
        "parameters": {
            "type": "object",
            "properties": {
                "n": {
                    "type": "integer",
                    "description": "The number to calculate factorial for"
                }
            },
            "required": ["n"]
        }
    }
}]

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's 10 factorial?"}],
    tools=tools
)
```
Function calling works well but has limitations:
- Tool definitions must be included in each API request
- Tools are tied to your application code
- No standardized way to share tools across projects
- Different AI providers have different tool formats
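The first limitation is easy to quantify: the tool schemas ride along with every single request. A rough back-of-envelope sketch, reusing the factorial schema from the example above (the request volume is illustrative, not a measured figure):

```python
import json

# The same schema as in the factorial example; with function calling,
# this JSON is re-sent on every chat completion request.
tools = [{
    "type": "function",
    "function": {
        "name": "calculate_factorial",
        "description": "Calculate the factorial of a number",
        "parameters": {
            "type": "object",
            "properties": {
                "n": {"type": "integer",
                      "description": "The number to calculate factorial for"}
            },
            "required": ["n"],
        },
    },
}]

per_request_bytes = len(json.dumps(tools).encode("utf-8"))
requests_per_day = 10_000  # illustrative volume

print(f"{per_request_bytes} bytes per request, "
      f"~{per_request_bytes * requests_per_day / 1_000_000:.1f} MB/day re-sent")
```

Multiply by a realistic tool count and request volume and the repeated payload (and its token cost) adds up; MCP moves those definitions to the server side.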
## What is MCP?
The Model Context Protocol (MCP) is an open standard developed by Anthropic for connecting AI models to external tools and data sources. Key differences from function calling:
| Aspect | Function Calling | MCP |
|---|---|---|
| Discovery | Manual per request | Automatic from servers |
| Portability | Provider-specific | Cross-platform standard |
| Tool hosting | Your application | Dedicated servers |
| Sharing | Copy/paste definitions | URL-based |
With MCP, tools live on servers that AI clients connect to. The client discovers available tools automatically:
```json
{
  "mcpServers": {
    "tinyfn": {
      "url": "https://api.tinyfn.io/mcp/all/",
      "headers": {
        "X-API-Key": "your-api-key"
      }
    }
  }
}
```
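Under the hood, discovery is a JSON-RPC 2.0 exchange: the client initializes a session, then sends a `tools/list` request, and the server replies with every tool schema it hosts. A minimal sketch of the two messages involved, assuming the Streamable HTTP transport and framing are handled by an MCP client library (the protocol version string is illustrative):

```python
import json

# JSON-RPC 2.0 messages an MCP client exchanges during discovery.
# Transport framing (Streamable HTTP, stdio) is handled by the client library.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative revision string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# After initialization, one request fetches every hosted tool schema --
# no per-request definitions needed on the client side.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
}

for msg in (initialize_request, list_tools_request):
    print(json.dumps(msg))
```

The server's `tools/list` response carries the same kind of JSON Schema definitions you would otherwise hand-write into each OpenAI request.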
## MCP vs Function Calling

### When to Use Function Calling
- OpenAI-only applications with custom business logic
- Functions that access your private databases
- Tight integration with your existing codebase
- When you need complete control over tool execution
### When to Use MCP
- Multi-model applications (Claude + GPT + others)
- Standard utilities that benefit from centralization
- Team environments where tools should be shared
- When you want tools managed by specialists (like TinyFn)
### Hybrid Approach
Many applications benefit from both: MCP for standard utilities (math, validation, formatting) and function calling for custom business logic.
```javascript
// MCP provides: math, validation, encoding, formatting
// via TinyFn server at https://api.tinyfn.io/mcp/all

// Function calling provides: your custom logic
const customTools = [{
  type: "function",
  function: {
    name: "lookup_customer",
    description: "Look up customer by ID in your database",
    parameters: {
      type: "object",
      properties: {
        customer_id: { type: "string" }
      }
    }
  }
}];

// Both work together in your AI application
```
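In practice the hybrid setup comes down to merging two tool lists before each request: definitions discovered from the MCP server and your locally defined custom ones. A Python sketch of that merge, with the discovery step stubbed out for illustration (a real implementation would call the server's `tools/list` and convert each schema to OpenAI's format):

```python
def discover_mcp_tools():
    # Stub for illustration: a real implementation would query the MCP
    # server's tools/list and convert each schema to OpenAI's format.
    return [{
        "type": "function",
        "function": {
            "name": "validate_email",
            "description": "Validate an email address (hosted on the MCP server)",
            "parameters": {
                "type": "object",
                "properties": {"email": {"type": "string"}},
                "required": ["email"],
            },
        },
    }]

custom_tools = [{
    "type": "function",
    "function": {
        "name": "lookup_customer",
        "description": "Look up customer by ID in your database",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

# One combined list goes out with each request: standard utilities
# from the server, business logic from your own code.
tools = discover_mcp_tools() + custom_tools
names = [t["function"]["name"] for t in tools]
print(names)
```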
## Cross-Platform Tool Sharing
One of MCP's biggest advantages is tool portability. Consider a development team using multiple AI tools:
### The Problem Without MCP
- Developer A uses Claude Desktop
- Developer B uses a custom GPT-4 integration
- Developer C uses Cursor
- Each needs the same tools (hashing, validation, conversion)
- Each must implement or configure tools separately
### The Solution With MCP
- Team configures TinyFn MCP endpoint once
- All MCP-compatible clients get the same tools
- Updates happen on the server, not in each client
- Consistent behavior across all clients
## Using TinyFn with OpenAI (Without MCP)
Even if you're using OpenAI exclusively, TinyFn's REST API integrates seamlessly with function calling:
```python
import json

import openai
import requests

TINYFN_KEY = "your-tinyfn-api-key"

# Define TinyFn tools for OpenAI
tools = [{
    "type": "function",
    "function": {
        "name": "tinyfn_hash_sha256",
        "description": "Calculate SHA256 hash of text",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string", "description": "Text to hash"}
            },
            "required": ["text"]
        }
    }
}]

def execute_tool(name, args):
    """Execute a TinyFn tool via the REST API."""
    # "tinyfn_hash_sha256" maps to the "hash/sha256" endpoint
    endpoint = name.replace("tinyfn_", "").replace("_", "/")
    response = requests.get(
        f"https://api.tinyfn.io/v1/{endpoint}",
        params=args,
        headers={"X-API-Key": TINYFN_KEY}
    )
    return response.json()

# Use with OpenAI
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hash 'password123' with SHA256"}],
    tools=tools
)

# Handle tool calls
for choice in response.choices:
    for tool_call in choice.message.tool_calls or []:
        result = execute_tool(
            tool_call.function.name,
            json.loads(tool_call.function.arguments)
        )
        print(f"Result: {result}")
```
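The loop above prints the result but stops there; in a complete round trip you would also feed the result back to the model as a `tool` message so it can compose a natural-language answer. A sketch of that follow-up step, with the tool call and result faked so it runs standalone (in the real loop they come from `response` and `execute_tool`):

```python
import json

# Faked stand-in for response.choices[0].message.tool_calls[0],
# so this sketch runs without an API key.
tool_call = {
    "id": "call_123",
    "function": {"name": "tinyfn_hash_sha256",
                 "arguments": json.dumps({"text": "password123"})},
}
result = {"result": "<sha256 hex digest>"}  # what execute_tool() would return

messages = [
    {"role": "user", "content": "Hash 'password123' with SHA256"},
    # Echo the assistant's tool call back into the conversation...
    {"role": "assistant", "tool_calls": [dict(tool_call, type="function")]},
    # ...then attach the tool's output, keyed by tool_call_id.
    {"role": "tool", "tool_call_id": tool_call["id"],
     "content": json.dumps(result)},
]

# A second chat.completions.create(model=..., messages=messages, tools=tools)
# call now lets the model turn the raw result into a final answer.
print(messages[-1]["role"])
```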
## TinyFn as a Universal Provider
TinyFn serves as a universal tool provider that works with any AI model:
### Access Methods
| Method | Best For | Endpoint |
|---|---|---|
| MCP | Claude, Cursor, MCP clients | https://api.tinyfn.io/mcp/all |
| REST API | OpenAI, custom apps | https://api.tinyfn.io/v1/{tool} |
| Direct call | Non-AI applications | https://api.tinyfn.io/v1/{tool} |
### Consistent Results Everywhere
Whether you access TinyFn via MCP or REST API, you get identical deterministic results:
```text
// Via MCP (Claude Desktop)
Tool: math/factorial
Input: { "n": 20 }
Output: { "result": 2432902008176640000 }

// Via REST API (OpenAI function calling)
GET https://api.tinyfn.io/v1/math/factorial?n=20
Response: { "result": 2432902008176640000 }

// Same deterministic result regardless of access method
```
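Determinism is easy to sanity-check locally: Python's standard library computes the same factorial, so any access method returning a different value would indicate a bug.

```python
import math

# 20! computed locally matches the value both access methods return above.
print(math.factorial(20))  # → 2432902008176640000
```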
## Migration Guide
Ready to try MCP alongside or instead of function calling? Here's how to migrate:
### Step 1: Identify Portable Tools
Review your function calling tools. Which ones are standard utilities that TinyFn could provide?
- Math calculations (factorial, prime check, GCD)
- String operations (count, format, encode)
- Validations (email, URL, UUID)
- Conversions (units, currencies, timezones)
- Hashing (SHA256, MD5, HMAC)
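One quick way to run this audit is to split your existing tool names into "portable" and "custom" buckets. The keyword list below is a hypothetical heuristic, not a TinyFn catalog; tune it to your own naming conventions:

```python
# Hypothetical hints for standard utilities; adjust to your tool names.
PORTABLE_HINTS = ("factorial", "prime", "gcd", "hash", "sha256", "hmac",
                  "validate", "email", "url", "uuid", "convert", "encode")

def classify(tool_names):
    """Split tool names into TinyFn candidates and keep-local custom tools."""
    portable, custom = [], []
    for name in tool_names:
        bucket = portable if any(h in name.lower() for h in PORTABLE_HINTS) else custom
        bucket.append(name)
    return portable, custom

portable, custom = classify([
    "validate_email", "hash_sha256", "lookup_customer", "gcd", "create_invoice",
])
print("portable:", portable)  # candidates for TinyFn via MCP
print("custom:  ", custom)    # keep as function calling
```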
### Step 2: Replace with TinyFn
For each standard tool, switch to TinyFn. Remove the function definition and tool execution code:
```javascript
// Before: your own implementation...
function validateEmail(email) {
  // 50 lines of regex validation
  // Edge cases you forgot about
  // Bugs you haven't found yet
}

// ...plus the function definition in every request
tools = [{
  type: "function",
  function: {
    name: "validate_email",
    // ... definition
  }
}]
```

After: configure once in `mcp.json` instead:

```json
{
  "mcpServers": {
    "tinyfn": {
      "url": "https://api.tinyfn.io/mcp/validate/",
      "headers": { "X-API-Key": "your-key" }
    }
  }
}
```

The `validate/email` tool is then automatically available: RFC-compliant, tested, and maintained by TinyFn.
### Step 3: Keep Custom Business Logic
Continue using function calling for tools that access your private data or implement custom business logic.
### Step 4: Test and Iterate
Test the hybrid setup. Verify that TinyFn tools return expected results. Gradually migrate more standard tools to MCP.
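For deterministic utilities, the standard library gives you free reference values to verify against: compute the expected result locally, then compare it with what the corresponding TinyFn tool returns. A sketch of the reference side of such a parity check (the API calls themselves are omitted, and the endpoint paths follow the REST pattern shown earlier):

```python
import hashlib
import math

# Reference values computed locally; compare these against the JSON the
# corresponding TinyFn endpoints return during your test run.
expected = {
    "math/factorial?n=20": math.factorial(20),
    "hash/sha256?text=password123": hashlib.sha256(b"password123").hexdigest(),
}

for endpoint, value in expected.items():
    print(endpoint, "->", value)
```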
## Try Cross-Platform Tools Today
Get your free TinyFn API key and use the same deterministic tools across Claude, GPT, and any AI application.
Get Free API Key