First tool call
Most teams integrate SotsAI using tool calling.
The typical flow looks like this:
- A user asks a question in Teams, Slack, or another interface
- Your orchestration layer gathers context and profiles
- Your LLM calls SotsAI as a tool
- The LLM uses SotsAI’s output to generate the final response
SotsAI does not replace your LLM — it informs it.
Your orchestration layer is responsible for deciding whether the tool can be called. The LLM should only be allowed to call SotsAI when a user psychometric profile is available.
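A minimal sketch of such a gate is shown below. The `getUserProfile`, `runLLM`, and `sotsaiAdviceTool` names are hypothetical stand-ins for your own orchestration layer; only the conditional exposure of the tool is the point.

```ts
// Gating sketch: the sotsai_advice tool is only exposed to the LLM when a
// psychometric profile exists for the requesting user.
// getUserProfile, runLLM, and sotsaiAdviceTool are hypothetical stand-ins.
declare function getUserProfile(userId: string): Promise<object | null>;
declare function runLLM(input: { message: string; tools: object[] }): Promise<string>;
declare const sotsaiAdviceTool: object;

async function handleUserMessage(userId: string, message: string): Promise<string> {
  const profile = await getUserProfile(userId); // your profile store
  const tools = profile ? [sotsaiAdviceTool] : []; // no profile → do not expose the tool
  return runLLM({ message, tools });
}
```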
Conceptual tool definition
Below is the canonical tool contract.
The same contract can be wired into OpenAI/Azure, Mistral, Gemini, or internal LLM tool runners.
{ "name": "sotsai_advice", "description": "Return structured, psychometric-based communication guidance for a workplace situation. Requires a user psychometric profile.", "input_schema": { "type": "object", "additionalProperties": false, "properties": { "context_summary": { "type": "string", "minLength": 10, "maxLength": 1200, "description": "Short, sanitized English summary of the situation. Focus on behavior, stakes, and intent. Avoid names/emails and sensitive identifiers." }, "relationship_type": { "type": "string", "description": "Optional - Relationship between the user and the interlocutor. Example values: 'manager', 'direct_report', 'peer', 'self', 'other'." }, "situation_type_hint": { "type": "string", "description": "Optional hint such as 'giving_feedback' or 'conflict_management'. SotsAI may still classify internally." }, "language": { "type": "string", "default": "en", "description": "Optional - ISO language code of the end-user language (e.g. 'en', 'fr'). The returned content is structured; your LLM renders final text." }, "user_profile": { "type": "object", "additionalProperties": false, "description": "Psychometric profile of the user (the person asking for advice). Required.", "properties": { "tool": { "type": "string", "description": "Psychometric framework identifier. Example: 'disc', 'mbti'." }, "raw_scores": { "type": "object", "description": "Provider-specific scores or factors used to derive the profile." } }, "required": ["tool", "raw_scores"] }, "interlocutor_profile": { "type": "object", "additionalProperties": false, "description": "Psychometric profile of the other person involved. Optional but recommended when the situation involves a specific person.", "properties": { "tool": { "type": "string", "description": "Psychometric framework identifier. Example: 'disc', 'mbti'." }, "raw_scores": { "type": "object", "description": "Provider-specific scores or factors used to derive the profile." } }, "required": ["tool", "raw_scores"] } }, "required": ["context_summary", "user_profile"] }}LLM Provider examples
Examples below show how to wire the same tool contract into different LLM providers. Only the provider-specific glue changes — the SotsAI contract stays the same.
```ts
// OpenAI / Azure tool wiring (TypeScript)
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const tools = [
  {
    type: "function",
    function: {
      name: "sotsai_advice",
      description:
        "Return structured, psychometric-based communication guidance for a workplace situation. Requires a user psychometric profile.",
      parameters: /* canonical input_schema */,
    },
  },
];

// When the model emits a tool call:
// → your backend POSTs arguments to https://sil-api.sotsai.co/v1/advice
// → injects X-Sotsai-Api-Key server-side
// → returns the JSON response to the model
```
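For concreteness, here is a hedged sketch of the full round trip, continuing from the `client` and `tools` above. The model name and the `postToSotsai` helper are assumptions — `postToSotsai` stands in for your backend's POST to `/v1/advice` with the API key injected server-side.

```ts
// Sketch of the OpenAI tool-call round trip ("gpt-4o" and postToSotsai are assumptions).
declare function postToSotsai(args: unknown): Promise<unknown>;

const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
  { role: "user", content: "How should I give this feedback to my colleague?" },
];

const first = await client.chat.completions.create({ model: "gpt-4o", messages, tools });
const call = first.choices[0].message.tool_calls?.[0];

if (call && call.function.name === "sotsai_advice") {
  const args = JSON.parse(call.function.arguments); // validate strictly before forwarding
  const advice = await postToSotsai(args);          // backend call, key never reaches the model
  messages.push(first.choices[0].message);          // keep the assistant tool call in history
  messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(advice) });
  const final = await client.chat.completions.create({ model: "gpt-4o", messages, tools });
  // final.choices[0].message.content → the user-facing answer rendered by the LLM
}
```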
```ts
// Mistral tool wiring (TypeScript)
import { Mistral } from "@mistralai/mistralai";

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

const tools = [
  {
    type: "function",
    function: {
      name: "sotsai_advice",
      description:
        "Return structured, psychometric-based communication guidance for a workplace situation. Requires a user psychometric profile.",
      parameters: /* canonical input_schema */,
    },
  },
];

// On tool call:
// → execute POST /v1/advice from your backend
// → never expose the API key to the model
```
```ts
// Gemini function calling (TypeScript)
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  tools: [
    {
      functionDeclarations: [
        {
          name: "sotsai_advice",
          description:
            "Return structured, psychometric-based communication guidance for a workplace situation. Requires a user psychometric profile.",
          parameters: /* canonical input_schema */,
        },
      ],
    },
  ],
});

// When Gemini emits a functionCall:
// → backend POSTs to /v1/advice
// → returns functionResponse to Gemini
```

Whatever the provider, the orchestration flow is the same:

1. LLM receives user request.
2. Orchestrator checks:
   - Is there a user psychometric profile?
   - If no → do NOT call SotsAI.
3. Orchestrator exposes tool "sotsai_advice" with the canonical schema.
4. If the LLM emits a tool call:
   - Validate the JSON strictly.
   - POST the arguments to https://sil-api.sotsai.co/v1/advice
   - Inject X-Sotsai-Api-Key server-side.
5. Return the structured response to the LLM.
6. LLM renders the final user-facing message.

Examples are shown in TypeScript for readability. The same patterns apply in Python or other languages.
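Step 4's "validate the JSON strictly" can be done against the canonical contract itself. Below is a minimal sketch assuming a JSON Schema validator such as Ajv; the `sotsai-input-schema.json` file is a hypothetical local copy of the `input_schema` object shown above.

```ts
// Strict validation of tool-call arguments before forwarding to SotsAI.
// The schema file is a hypothetical local copy of the canonical input_schema.
import Ajv from "ajv";
import inputSchema from "./sotsai-input-schema.json";

const ajv = new Ajv({ allErrors: true });
const validateArgs = ajv.compile(inputSchema);

function parseToolArguments(raw: string): unknown {
  const args = JSON.parse(raw);
  if (!validateArgs(args)) {
    // Reject malformed tool calls instead of forwarding them.
    throw new Error("Invalid sotsai_advice arguments: " + ajv.errorsText(validateArgs.errors));
  }
  return args;
}
```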
Tool handler (your backend)
When the LLM calls this tool, your backend should execute:
```
POST https://sil-api.sotsai.co/v1/advice
```

With headers:

```
X-Sotsai-Api-Key: <your_api_key>
Content-Type: application/json
```

And forward the tool arguments as the request body. Your backend should inject authentication and must not expose the API key to the LLM.
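A minimal handler sketch of that request is shown below. The `SOTSAI_API_KEY` environment variable name is an assumption; use whatever secret management your backend already has.

```ts
// Minimal backend tool handler: forwards validated tool arguments to SotsAI
// and injects the API key server-side (env var name is an assumption).
async function sotsaiAdvice(args: unknown): Promise<unknown> {
  const res = await fetch("https://sil-api.sotsai.co/v1/advice", {
    method: "POST",
    headers: {
      "X-Sotsai-Api-Key": process.env.SOTSAI_API_KEY!,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(args),
  });
  if (!res.ok) {
    throw new Error(`SotsAI /v1/advice failed: ${res.status}`);
  }
  return res.json(); // structured advice, returned to the LLM to render
}
```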
What to send to SotsAI
Minimum:
- context_summary
- user_profile
Recommended (intended usage):
- context_summary
- user_profile
- interlocutor_profile (when another person is involved)
Profiles enable SotsAI to reason about friction, perception gaps, and adaptation strategies.
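For illustration, a plausible request body in the recommended shape. All values, including the DISC `raw_scores` shape, are made-up examples rather than a prescribed format — send whatever scores your psychometric provider emits.

```json
{
  "context_summary": "Preparing to give constructive feedback to a colleague who keeps missing deadlines on a shared project.",
  "relationship_type": "peer",
  "language": "en",
  "user_profile": {
    "tool": "disc",
    "raw_scores": { "D": 62, "I": 48, "S": 35, "C": 71 }
  },
  "interlocutor_profile": {
    "tool": "disc",
    "raw_scores": { "D": 30, "I": 75, "S": 58, "C": 22 }
  }
}
```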
Expected workflow
- If a user psychometric profile exists → call SotsAI → produce tailored guidance
- If no user profile exists → do not call SotsAI; instead:
  - let your LLM handle the request autonomously, or
  - trigger profile collection (DISC or other), then retry
If you want production-ready patterns (retry/caching, profile fallback, where to place the call in the pipeline), go to: