
First tool call

Most teams integrate SotsAI using tool calling.

The typical flow looks like this:

  1. A user asks a question in Teams, Slack, or another interface
  2. Your orchestration layer gathers context and profiles
  3. Your LLM calls SotsAI as a tool
  4. The LLM uses SotsAI’s output to generate the final response

SotsAI does not replace your LLM — it informs it.

Your orchestration layer is responsible for deciding whether the tool can be called. The LLM should only be allowed to call SotsAI when a user psychometric profile is available.


Below is the canonical tool contract.
The same contract can be wired into OpenAI/Azure, Mistral, Gemini, or internal LLM tool runners.

{
  "name": "sotsai_advice",
  "description": "Return structured, psychometric-based communication guidance for a workplace situation. Requires a user psychometric profile.",
  "input_schema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {
      "context_summary": {
        "type": "string",
        "minLength": 10,
        "maxLength": 1200,
        "description": "Short, sanitized English summary of the situation. Focus on behavior, stakes, and intent. Avoid names/emails and sensitive identifiers."
      },
      "relationship_type": {
        "type": "string",
        "description": "Optional - Relationship between the user and the interlocutor. Example values: 'manager', 'direct_report', 'peer', 'self', 'other'."
      },
      "situation_type_hint": {
        "type": "string",
        "description": "Optional hint such as 'giving_feedback' or 'conflict_management'. SotsAI may still classify internally."
      },
      "language": {
        "type": "string",
        "default": "en",
        "description": "Optional - ISO language code of the end-user language (e.g. 'en', 'fr'). The returned content is structured; your LLM renders final text."
      },
      "user_profile": {
        "type": "object",
        "additionalProperties": false,
        "description": "Psychometric profile of the user (the person asking for advice). Required.",
        "properties": {
          "tool": {
            "type": "string",
            "description": "Psychometric framework identifier. Example: 'disc', 'mbti'."
          },
          "raw_scores": {
            "type": "object",
            "description": "Provider-specific scores or factors used to derive the profile."
          }
        },
        "required": ["tool", "raw_scores"]
      },
      "interlocutor_profile": {
        "type": "object",
        "additionalProperties": false,
        "description": "Psychometric profile of the other person involved. Optional but recommended when the situation involves a specific person.",
        "properties": {
          "tool": {
            "type": "string",
            "description": "Psychometric framework identifier. Example: 'disc', 'mbti'."
          },
          "raw_scores": {
            "type": "object",
            "description": "Provider-specific scores or factors used to derive the profile."
          }
        },
        "required": ["tool", "raw_scores"]
      }
    },
    "required": ["context_summary", "user_profile"]
  }
}
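To make the contract concrete, here is a sketch of a valid arguments object, with a minimal sanity check mirroring the schema's `required` lists. The DISC scores shown are illustrative placeholders, not real provider output, and `hasRequiredFields` is a hypothetical helper, not part of the SotsAI API:

```typescript
// Illustrative tool-call arguments satisfying the canonical schema.
// The raw_scores values are made-up placeholders, not real DISC output.
const exampleArgs = {
  context_summary:
    "I need to give critical feedback to a colleague who reacted defensively last time.",
  relationship_type: "peer",
  situation_type_hint: "giving_feedback",
  language: "en",
  user_profile: {
    tool: "disc",
    raw_scores: { D: 62, I: 48, S: 30, C: 71 }, // placeholder values
  },
};

// Hypothetical client-side check mirroring the schema's "required" fields.
// A real integration would validate against the full JSON Schema instead.
function hasRequiredFields(args: any): boolean {
  return (
    typeof args?.context_summary === "string" &&
    args.context_summary.length >= 10 &&
    typeof args?.user_profile?.tool === "string" &&
    typeof args?.user_profile?.raw_scores === "object"
  );
}
```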

Examples below show how to wire the same tool contract into different LLM providers. Only the provider-specific glue changes — the SotsAI contract stays the same.

// OpenAI / Azure tool wiring (TypeScript)
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const tools = [
  {
    type: "function",
    function: {
      name: "sotsai_advice",
      description:
        "Return structured, psychometric-based communication guidance for a workplace situation. Requires a user psychometric profile.",
      parameters: /* canonical input_schema */
    }
  }
];

// When the model emits a tool call:
// → your backend POSTs arguments to https://sil-api.sotsai.co/v1/advice
// → injects X-Sotsai-Api-Key server-side
// → returns the JSON response to the model
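The "model emits a tool call" step can be sketched as follows. This assumes the Chat Completions `tool_calls` message shape (where `function.arguments` arrives as a JSON string); `extractSotsaiCalls` is a hypothetical helper name:

```typescript
// Sketch: pull sotsai_advice calls out of an assistant message.
// Assumes the Chat Completions tool_calls shape; adapt as needed.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string }; // arguments is a JSON string
}

function extractSotsaiCalls(message: { tool_calls?: ToolCall[] }) {
  return (message.tool_calls ?? [])
    .filter((tc) => tc.type === "function" && tc.function.name === "sotsai_advice")
    .map((tc) => ({ id: tc.id, args: JSON.parse(tc.function.arguments) }));
}
```

Each extracted `args` object is what your backend forwards to SotsAI; the `id` is echoed back so the model can match the tool result to its call.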

Examples are shown in TypeScript for readability. The same patterns apply in Python or other languages.


When the LLM calls this tool, your backend should execute:

POST https://sil-api.sotsai.co/v1/advice

With headers:

X-Sotsai-Api-Key: <your_api_key>
Content-Type: application/json

And forward the tool arguments as the request body. Your backend should inject authentication and must not expose the API key to the LLM.
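A minimal server-side sketch of that request, assuming a Node runtime with `fetch`. The endpoint and headers come from the contract above; `buildAdviceRequest` is a hypothetical helper name, and keeping it pure makes the key-injection step easy to test:

```typescript
// Sketch of the server-side request built for each tool call.
// The API key is injected here, server-side, and never shown to the LLM.
const SOTSAI_URL = "https://sil-api.sotsai.co/v1/advice";

function buildAdviceRequest(toolArgs: object, apiKey: string) {
  return {
    url: SOTSAI_URL,
    init: {
      method: "POST",
      headers: {
        "X-Sotsai-Api-Key": apiKey, // injected server-side only
        "Content-Type": "application/json",
      },
      body: JSON.stringify(toolArgs), // forward the tool arguments verbatim
    },
  };
}

// Usage (server-side):
// const { url, init } = buildAdviceRequest(args, process.env.SOTSAI_API_KEY!);
// const advice = await fetch(url, init).then((r) => r.json());
```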


Minimum fields:

  • context_summary
  • user_profile

Recommended (intended usage):

  • context_summary
  • user_profile
  • interlocutor_profile (when another person is involved)

Profiles enable SotsAI to reason about friction, perception gaps, and adaptation strategies.

  • If a user psychometric profile exists → call SotsAI → produce tailored guidance
  • If no user profile exists → do not call SotsAI
    • let your LLM handle the request autonomously, or
    • trigger profile collection (DISC or other), then retry
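The gating decision above can be sketched as a small orchestration-layer helper. `selectTools` is a hypothetical name; the point is that the tool list handed to the LLM only includes `sotsai_advice` when a user profile exists:

```typescript
// Sketch of the orchestration-layer gate: only expose the sotsai_advice
// tool to the LLM when a user psychometric profile is available.
type Profile = { tool: string; raw_scores: Record<string, unknown> };
type Tool = { function: { name: string } };

function selectTools(userProfile: Profile | null, allTools: Tool[]): Tool[] {
  if (userProfile) return allTools; // profile exists → SotsAI may be called
  // No profile: strip the tool so the LLM answers autonomously,
  // or separately trigger profile collection (DISC or other) and retry.
  return allTools.filter((t) => t.function.name !== "sotsai_advice");
}
```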

If you want production-ready patterns (retry/caching, profile fallback, where to place the call in the pipeline), go to:

Integration Guides → Tool calling patterns