Tool calling patterns

Most teams integrate SotsAI using tool calling.

SotsAI is not a chatbot and not a response generator.
It is a behavioral reasoning tool your LLM can invoke when a situation involves people and communication risk.

This page explains where, when, and how to call SotsAI in a production-grade LLM pipeline.


Your LLM already knows how to:

  • generate text
  • follow instructions
  • format responses

What it does not know is:

  • how different people interpret messages
  • where interpersonal friction is likely to occur
  • how to adapt communication to a specific person

That is exactly what SotsAI provides.


SotsAI supports two common patterns:

  • LLM-driven (tool calling): the orchestrator gates access to SotsAI; the LLM decides whether to call it
  • Orchestrator-driven: the backend deterministically calls SotsAI whenever a user psychometric profile is available

Both are valid. Choose based on:

  • how deterministic your system needs to be
  • how much autonomy you grant the LLM

In this documentation, we focus primarily on the orchestrator-driven pattern.

We recommend this approach because it:

  • makes tool availability explicit and auditable
  • avoids accidental or speculative tool calls
  • simplifies security, compliance, and cost control
  • keeps decision logic out of prompts

LLM-driven tool calling remains a valid option for teams with mature agent orchestration, but the orchestrator-driven pattern is the safest and most robust default.


A typical integration looks like this:

1. A user asks a question (Slack, Teams, internal UI, etc.)
2. Your orchestration layer gathers:
   - situation context
   - user identity (and interlocutor identity if relevant)
   - available psychometric profiles
3. Your orchestration layer allows the LLM to call SotsAI
4. SotsAI returns behavioral reasoning
5. Your LLM generates the final response using that reasoning

The decision to expose the tool is made by your orchestration logic, not by the model alone.

SotsAI never replaces your LLM — it informs it.
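
For the orchestrator-driven variant of this flow, a minimal sketch in Python might look like the following. The helper names (summarize_situation, load_profile, call_sotsai, call_llm) are illustrative placeholders, not part of the SotsAI API; wire them to your own context builder, profile store, SotsAI client, and LLM client.

def handle_message(user_id, message, interlocutor_id=None):
    # 1-2. Gather situation context and any available psychometric profiles.
    context_summary = summarize_situation(message)    # placeholder helper
    user_profile = load_profile(user_id)              # placeholder helper
    interlocutor_profile = (
        load_profile(interlocutor_id) if interlocutor_id else None
    )

    # 3-4. Call SotsAI only when a user psychometric profile is available.
    behavioral_reasoning = None
    if user_profile is not None:
        behavioral_reasoning = call_sotsai(           # placeholder SotsAI client
            context_summary=context_summary,
            user_profile=user_profile,
            interlocutor_profile=interlocutor_profile,
        )

    # 5. The LLM renders the final, user-facing response, informed by
    #    (never replaced by) the behavioral reasoning.
    return call_llm(message=message, behavioral_reasoning=behavioral_reasoning)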


Call SotsAI before final text generation, once you have:

  • a clear understanding of the situation
  • the people involved
  • access to psychometric profiles (if available)

This allows your LLM to:

  • adapt tone and structure
  • choose appropriate framing
  • anticipate resistance or misinterpretation

Do not:

  • call SotsAI after the response is already written
  • treat SotsAI output as user-facing content
  • bypass SotsAI when profiles are available

Minimal call (user profile only):

{
  "context_summary": "...",
  "user_profile": { ... }
}

A user psychometric profile is required for all SotsAI calls. If you do not have it, do not call SotsAI (you would waste an API call).

Adding an interlocutor profile:

{
  "context_summary": "...",
  "user_profile": { ... },
  "interlocutor_profile": { ... }
}

This unlocks:

  • friction analysis
  • adaptation strategies
  • risk anticipation
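
As a sketch, the request body can be assembled conditionally: include interlocutor_profile only when you actually have one. The field names follow the shapes shown above; build_request is an illustrative helper, not part of any SotsAI SDK.

def build_request(context_summary, user_profile, interlocutor_profile=None):
    body = {
        "context_summary": context_summary,
        "user_profile": user_profile,      # required for every SotsAI call
    }
    if interlocutor_profile is not None:
        # Unlocks friction analysis, adaptation strategies, and risk anticipation.
        body["interlocutor_profile"] = interlocutor_profile
    return body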

Use the canonical tool contract described in Quickstart → First tool-call.

Your orchestration layer should:

  • expose the tool only when a user psychometric profile is available
  • validate tool arguments strictly against the schema
  • reject or ignore tool calls with missing or malformed profiles

Avoid “tolerant” schemas that allow the model to emit incomplete or speculative inputs.
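
A minimal validation sketch, assuming the Python jsonschema package; the schema below is only an approximation, and the canonical tool contract from Quickstart → First tool-call remains the source of truth.

from jsonschema import ValidationError, validate

# Approximate schema; replace with the canonical tool contract.
TOOL_ARGS_SCHEMA = {
    "type": "object",
    "required": ["context_summary", "user_profile"],
    "additionalProperties": False,   # no "tolerant" extra fields
    "properties": {
        "context_summary": {"type": "string", "minLength": 1},
        "user_profile": {"type": "object"},
        "interlocutor_profile": {"type": "object"},
    },
}

def tool_call_is_valid(args: dict) -> bool:
    try:
        validate(instance=args, schema=TOOL_ARGS_SCHEMA)
        return True
    except ValidationError:
        # Reject or ignore the call; do not try to "repair" speculative inputs.
        return False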


This logic should live in your orchestration layer, not in prompts.

IF user_profile is available:
    expose SotsAI tool to the LLM
ELSE:
    do NOT expose the tool
    either:
      - let the LLM handle the request autonomously
      - ask the user to complete a psychometric profile
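
The same gating logic as a Python sketch. SOTSAI_TOOL_DEF stands in for the canonical tool definition from the Quickstart, and the return value is whatever tool list your LLM client expects; both are assumptions for illustration.

def tools_for_request(user_profile):
    if user_profile is not None:
        return [SOTSAI_TOOL_DEF]   # expose SotsAI to the LLM
    # No profile: do not expose the tool. Either let the LLM answer
    # autonomously or ask the user to complete a psychometric profile.
    return []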

How you prompt the LLM depends on who is responsible for calling SotsAI.

SotsAI supports two valid integration styles:

  • Orchestrator-driven (recommended)
  • LLM-driven (tool-calling autonomy)

In the orchestrator-driven pattern:

  • your backend always decides when to call SotsAI
  • the LLM never decides whether behavioral reasoning is needed
  • the LLM receives SotsAI output as an input, not as a tool

The LLM’s role is interpretation and rendering only.

Example system prompt

You are an assistant helping with workplace interpersonal communication.
You are given a Behavioral Reasoning Output generated by a specialized engine.
Treat it as the primary source of truth for your reasoning and recommendations.

RULES:
- Base your advice strictly on the Behavioral Reasoning Output.
- Do not add recommendations that are not supported by it.
- Do not invent psychometric traits, motivations, or profiles.
- Do not reinterpret or override the behavioral conclusions.
- If something is unclear or missing, ask up to 2 clarifying questions at the end,
  but still provide a best-effort response.

RESPONSIBILITY SPLIT:
- The behavioral reasoning is already done.
- Your job is to transform it into clear, helpful, human-readable guidance.
- Adapt tone, structure, and wording to the user context.

Answer in French.

Behavioral Reasoning Output:
{{SOTSAI_OUTPUT}}
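
A minimal rendering sketch, assuming the template above is stored as SYSTEM_PROMPT_TEMPLATE and that call_llm is a placeholder for your own LLM client:

import json

def render_system_prompt(sotsai_output):
    # Inject the SotsAI output where {{SOTSAI_OUTPUT}} appears in the template.
    rendered = json.dumps(sotsai_output, ensure_ascii=False, indent=2)
    return SYSTEM_PROMPT_TEMPLATE.replace("{{SOTSAI_OUTPUT}}", rendered)

def answer(user_message, sotsai_output):
    return call_llm(                       # placeholder LLM client
        system=render_system_prompt(sotsai_output),
        user=user_message,
    )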

Use this pattern when you want:

  • deterministic behavior
  • strong auditability
  • clear separation of concerns
  • maximum safety and predictability

This is the default and recommended approach for production systems.