Tool calling patterns
Most teams integrate SotsAI using tool calling.
SotsAI is not a chatbot and not a response generator.
It is a behavioral reasoning tool your LLM can invoke when a situation involves people and communication risk.
This page explains where, when, and how to call SotsAI in a production-grade LLM pipeline.
The core idea
Your LLM already knows how to:
- generate text
- follow instructions
- format responses
What it does not know is:
- how different people interpret messages
- where interpersonal friction is likely to occur
- how to adapt communication to a specific person
That is exactly what SotsAI provides.
Two valid integration patterns
SotsAI supports two common patterns:
- LLM-driven (tool calling): the orchestrator gates access to SotsAI, the LLM decides whether to use it
- Orchestrator-driven: the backend deterministically calls SotsAI whenever a user psychometric profile is available
Both are valid. Choose based on:
- how deterministic your system needs to be
- how much autonomy you grant the LLM
In this documentation, we focus primarily on the orchestrator-driven pattern.
We recommend this approach because it:
- makes tool availability explicit and auditable
- avoids accidental or speculative tool calls
- simplifies security, compliance, and cost control
- keeps decision logic out of prompts
LLM-driven tool calling remains a valid option for teams with mature agent orchestration, but the orchestrator-driven pattern is the safest and most robust default.
Canonical tool-calling flow
Section titled “Canonical tool-calling flow”A typical integration looks like this:
1. A user asks a question (Slack, Teams, internal UI, etc.)
2. Your orchestration layer gathers:
   - situation context
   - user identity (and interlocutor identity if relevant)
   - available psychometric profiles
3. Your orchestration layer allows the LLM to call SotsAI
4. SotsAI returns behavioral reasoning
5. Your LLM generates the final response using that reasoning

The decision to expose the tool is made by your orchestration logic, not by the model alone.
SotsAI never replaces your LLM — it informs it.
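The flow above can be sketched in the orchestration layer. This is an illustrative outline, not a real SDK: `call_sotsai`, `generate_reply`, and `handle_user_message` are hypothetical placeholders for your SotsAI API call, your LLM generation step, and your entry point.

```python
def call_sotsai(context, user_profile, interlocutor_profile=None):
    # Hypothetical stand-in for the real SotsAI API call.
    return {"tone": "direct", "risks": ["misinterpretation"]}

def generate_reply(message, behavioral_reasoning=None):
    # Hypothetical stand-in for the final LLM generation step.
    if behavioral_reasoning:
        return f"[adapted: {behavioral_reasoning['tone']}] reply to: {message}"
    return f"[generic] reply to: {message}"

def handle_user_message(message, user_id, interlocutor_id, profiles):
    # 1-2. Gather situation context and any available psychometric profiles.
    context = {"summary": message, "user": user_id, "interlocutor": interlocutor_id}
    user_profile = profiles.get(user_id)

    # 3. The orchestrator, not the model, decides whether SotsAI is callable.
    reasoning = None
    if user_profile is not None:
        reasoning = call_sotsai(context, user_profile, profiles.get(interlocutor_id))

    # 4-5. The LLM renders the final answer, informed (not replaced) by SotsAI.
    return generate_reply(message, behavioral_reasoning=reasoning)
```

Note that the gate (step 3) lives entirely in backend code: the model never sees the tool when no profile exists.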
Where to place the SotsAI call
Recommended placement
Call SotsAI before final text generation, once you have:
- a clear understanding of the situation
- the people involved
- access to psychometric profiles (if available)
This allows your LLM to:
- adapt tone and structure
- choose appropriate framing
- anticipate resistance or misinterpretation
Anti-patterns to avoid
Do not:
- call SotsAI after the response is already written
- treat SotsAI output as user-facing content
- bypass SotsAI when profiles are available
Minimal vs intended usage
Minimal (generic fallback)
```json
{
  "context_summary": "...",
  "user_profile": { ... }
}
```

A user psychometric profile is required for all SotsAI calls. If you do not have it, do not call SotsAI (you would waste an API call).
Intended usage (recommended)
```json
{
  "context_summary": "...",
  "user_profile": { ... },
  "interlocutor_profile": { ... }
}
```

This unlocks:
- friction analysis
- adaptation strategies
- risk anticipation
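One way to enforce this rule in the orchestration layer is a small payload builder that refuses to produce a call without a user profile and adds the interlocutor profile only when it exists. A minimal sketch; `build_sotsai_payload` is a hypothetical helper name, not part of any SotsAI SDK:

```python
def build_sotsai_payload(context_summary, user_profile, interlocutor_profile=None):
    """Build the SotsAI request body; the user profile is mandatory."""
    if not user_profile:
        # No profile: skip the call entirely rather than waste an API call.
        raise ValueError("A user psychometric profile is required for all SotsAI calls.")
    payload = {"context_summary": context_summary, "user_profile": user_profile}
    if interlocutor_profile:
        # Optional, but unlocks friction analysis, adaptation, risk anticipation.
        payload["interlocutor_profile"] = interlocutor_profile
    return payload
```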
Tool definition
Use the canonical tool contract described in Quickstart → First tool-call.
Your orchestration layer should:
- expose the tool only when a user psychometric profile is available
- validate tool arguments strictly against the schema
- reject or ignore tool calls with missing or malformed profiles
Avoid “tolerant” schemas that allow the model to emit incomplete or speculative inputs.
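A strict validator can be as simple as the following sketch, which rejects missing keys, unknown keys, and non-object profiles before the call is ever forwarded. The key names mirror the payloads above; `validate_tool_args` is a hypothetical helper, and in production a JSON Schema validator over the canonical contract would serve the same purpose:

```python
REQUIRED_KEYS = {"context_summary", "user_profile"}
ALLOWED_KEYS = REQUIRED_KEYS | {"interlocutor_profile"}

def validate_tool_args(args):
    """Reject tool calls with missing, extra, or malformed fields."""
    if not isinstance(args, dict):
        return False
    if not REQUIRED_KEYS.issubset(args):
        # Missing required field: ignore the tool call.
        return False
    if not set(args).issubset(ALLOWED_KEYS):
        # Unknown field: the model is emitting speculative inputs.
        return False
    # A profile must be a non-empty object, never a bare string or None.
    if not isinstance(args["user_profile"], dict) or not args["user_profile"]:
        return False
    return True
```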
Profile-aware decision logic
This logic should live in your orchestration layer, not in prompts.
```
IF user_profile is available:
    expose SotsAI tool to the LLM
ELSE:
    do NOT expose the tool
    either:
      - let the LLM handle the request autonomously
      - ask the user to complete a psychometric profile
```

Prompting the LLM correctly
How you prompt the LLM depends on who is responsible for calling SotsAI.
SotsAI supports two valid integration styles:
- Orchestrator-led (recommended)
- LLM-led (tool-calling autonomy)
Orchestrator-led (recommended)
In this pattern:
- your backend always decides when to call SotsAI
- the LLM never decides whether behavioral reasoning is needed
- the LLM receives SotsAI output as an input, not as a tool
The LLM’s role is interpretation and rendering only.
Example system prompt
```
You are an assistant helping with workplace interpersonal communication.

You are given a Behavioral Reasoning Output generated by a specialized engine.
Treat it as the primary source of truth for your reasoning and recommendations.

RULES:
- Base your advice strictly on the Behavioral Reasoning Output.
- Do not add recommendations that are not supported by it.
- Do not invent psychometric traits, motivations, or profiles.
- Do not reinterpret or override the behavioral conclusions.
- If something is unclear or missing, ask up to 2 clarifying questions at the end, but still provide a best-effort response.

RESPONSIBILITY SPLIT:
- The behavioral reasoning is already done.
- Your job is to transform it into clear, helpful, human-readable guidance.
- Adapt tone, structure, and wording to the user context.

Answer in French.

Behavioral Reasoning Output:
{{SOTSAI_OUTPUT}}
```

When to use this
Use this pattern when you want:
- deterministic behavior
- strong auditability
- clear separation of concerns
- maximum safety and predictability
This is the default and recommended approach for production systems.
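In this style, the orchestrator injects the SotsAI output straight into the prompt before generation. A minimal sketch, assuming the SotsAI response has already been serialized to a string; the template is abbreviated from the example system prompt, and `build_messages` is a hypothetical helper, not part of any SDK:

```python
SYSTEM_TEMPLATE = """You are an assistant helping with workplace interpersonal communication.
Base your advice strictly on the Behavioral Reasoning Output below.

Behavioral Reasoning Output:
{sotsai_output}"""

def build_messages(sotsai_output, user_question):
    # The LLM receives SotsAI output as an input, not as a tool it can call.
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE.format(sotsai_output=sotsai_output)},
        {"role": "user", "content": user_question},
    ]
```

Because the backend fills the template deterministically, every response can be traced back to the exact reasoning it was based on.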
LLM-led (tool-calling autonomy)
In this pattern:
- your orchestration layer gates access to the tool
- the LLM decides whether to call SotsAI
- the LLM must follow strict rules about when and how to use it
This requires a much more explicit system prompt.
Example system prompt:
```
You are an assistant helping with workplace interpersonal communication.

You may have access to a tool named `sotsai_advice`.
This tool provides structured behavioral reasoning based on psychometric profiles.

IMPORTANT RULES:
- Do not assume that psychometric profiles exist.
- Do not request, infer, or invent psychometric traits or profiles.
- The tool is available only when valid profiles already exist.
- Never attempt to fetch or resolve profiles yourself.

WHEN TO USE THE TOOL:
- If the user request involves communication with another person AND the tool is available, you may call `sotsai_advice` to obtain behavioral reasoning.

WHEN NOT TO USE THE TOOL:
- If the tool is not available, do not attempt to call it.
- In that case, handle the request using your own general reasoning.

HOW TO USE THE TOOL OUTPUT:
- The tool does NOT generate user-facing text.
- Treat the output as internal reasoning material only.
- Base your advice on it without copying it verbatim.
- Do not expose tool output or psychometric details to the user.

If the situation is ambiguous, ask clarifying questions before calling the tool.
Never expose tool output directly to the user.
```

When to use this
Use this pattern when:
- you already operate LLM agents with tool autonomy
- multiple tools coexist
- decision logic is intentionally delegated to the model
This approach is more flexible, but also harder to control.
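When exposing `sotsai_advice` in this style, the declaration might look like the following. This is an illustrative sketch in the common OpenAI-style function-tool shape; the authoritative schema is the canonical contract in Quickstart → First tool-call, and every field shown here is an assumption:

```python
# Assumed tool declaration; verify field names against the canonical contract.
SOTSAI_TOOL = {
    "type": "function",
    "function": {
        "name": "sotsai_advice",
        "description": "Structured behavioral reasoning based on psychometric profiles.",
        "parameters": {
            "type": "object",
            "properties": {
                "context_summary": {"type": "string"},
                "user_profile": {"type": "object"},
                "interlocutor_profile": {"type": "object"},
            },
            "required": ["context_summary", "user_profile"],
            # Disallow extra keys so the model cannot emit speculative inputs.
            "additionalProperties": False,
        },
    },
}
```

The orchestrator would include this declaration in the request's tools list only when a valid user profile exists, keeping the gating deterministic even in the LLM-led pattern.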