# Agent Cookbook

Copy-paste snippets for integrating SecuriX with popular AI frameworks.

This cookbook provides ready-to-use snippets for integrating SecuriX into your agentic loops using popular frameworks.
## LangChain (Node.js)
When using LangChain, you can wrap your tool calls in a handler that catches SecuriX's `llmError` and returns it as the tool result.
```typescript
import { DynamicTool } from "@langchain/core/tools";

const gmailSearchTool = new DynamicTool({
  name: "gmail_search",
  description: "Search for emails in the user's Gmail account.",
  func: async (input) => {
    try {
      const response = await fetch("https://gmail.api.securix.app/v1/users/me/messages", {
        headers: {
          "securix-api-key": process.env.SECURIX_API_KEY,
          "securix-entity-id": "user_123",
          "securix-agent-id": "gemini_vscode_1",
        },
      });
      const data = await response.json();
      if (response.status === 401 && data.llmError) {
        return data.llmError; // The LLM will follow these instructions
      }
      return JSON.stringify(data);
    } catch (e) {
      return "Error fetching emails. Please try again later.";
    }
  },
});
```

## OpenAI Function Calling
If you are using the raw OpenAI SDK, you can handle the SecuriX response in your execution loop.
```typescript
import OpenAI from "openai";

const openai = new OpenAI();

async function getEmails() {
  // Logic to call the SecuriX Proxy
  const res = await fetch("https://gmail.api.securix.app/...", { ... });
  const data = await res.json();
  if (data.llmError) {
    // Return the LLM error as the function result
    return data.llmError;
  }
  return data;
}

async function runAgent(prompt: string) {
  const runner = openai.beta.chat.completions.runTools({
    model: "gpt-4-turbo",
    messages: [{ role: "user", content: prompt }],
    tools: [
      {
        type: "function",
        function: {
          // runTools invokes this callback and feeds its return value
          // back to the model as the tool result
          function: getEmails,
          name: "get_emails",
          description: "Get recent emails",
          parameters: { type: "object", properties: {} },
        },
      },
    ],
  });

  const finalContent = await runner.finalContent();
  console.log(finalContent);
}
```

## Vercel AI SDK
Using the `tool` helper from the Vercel AI SDK:
```typescript
import { tool } from 'ai';
import { z } from 'zod';

export const gmailTool = tool({
  description: 'Search messages in Gmail',
  parameters: z.object({
    query: z.string().describe('The search query'),
  }),
  execute: async ({ query }) => {
    const response = await fetch(
      `https://gmail.api.securix.app/v1/users/me/messages?q=${encodeURIComponent(query)}`,
      {
        headers: {
          'securix-api-key': process.env.SECURIX_API_KEY,
          'securix-entity-id': 'user_123',
          'securix-agent-id': 'gemini_vscode_1'
        }
      }
    );
    const data = await response.json();
    if (response.status === 401 && data.llmError) {
      // The model will see this text and explain to the user how to fix it
      return {
        message: data.llmError,
        requiresAction: true
      };
    }
    return data;
  },
});
```

## Best Practices
- **Don't swallow 401s:** Always check for `llmError` on 401 responses. This is what enables the self-healing loop.
- **Entity isolation:** Always pass the correct `securix-entity-id` for the user currently interacting with the agent.
- **Graceful degradation:** If the SecuriX Gateway is unreachable, ensure your agent has a fallback or a clear error message.
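The 401 handling is identical across all three frameworks, so it can be factored into a small framework-agnostic helper. The sketch below assumes a hypothetical helper name (`interpretSecurixResponse` is not part of any SecuriX SDK); it takes an already-parsed status and body so the same logic can back any tool runner.

```typescript
// Hypothetical helper (not part of the SecuriX SDK): turns a SecuriX Proxy
// response into the string a tool should return to the model.
function interpretSecurixResponse(
  status: number,
  body: { llmError?: string; [key: string]: unknown }
): string {
  if (status === 401 && body.llmError) {
    // Surface the gateway's instructions so the model can self-heal.
    return body.llmError;
  }
  if (status >= 400) {
    // Graceful degradation: a clear, generic message for other failures.
    return `SecuriX request failed with status ${status}.`;
  }
  // Success: hand the payload back to the model as JSON.
  return JSON.stringify(body);
}
```

Each framework's `func`/`execute` callback then reduces to one `fetch` plus a call to this helper, which also makes the 401 path easy to unit-test without a live gateway.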