# LLM did not call SotsAI
This issue occurs when SotsAI is available and profiles exist, but the LLM never calls the `sotsai_advice` tool.
There is no error. There is no failure. The system simply produces a generic response.
This is almost always an orchestration or prompting issue, not a SotsAI problem.
## What this means

Your system expected the LLM to call SotsAI, but it didn't.
Typical symptoms:
- no `/v1/advice` calls in the logs
- no errors returned
- LLM output feels generic or unadapted
- psychometric profiles exist but are unused
This means the LLM was not sufficiently instructed or enabled to use the tool.
## The three most common causes

### 1) The tool was never exposed to the LLM

This is the most frequent cause.
Examples:
- the orchestration layer did not include `sotsai_advice` in the tool list
- profile gating logic failed silently
- tools are conditionally enabled, but the condition was false
Check:

- Is the tool actually present in the LLM request?
- Is it present only when `user_profile` exists?
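One way to verify the first check is to inspect the outgoing request payload before it is sent. A minimal sketch, assuming an OpenAI-style `tools` array; the variable names here (`captured_request`) are illustrative:

```python
def tool_is_exposed(request_body: dict, tool_name: str = "sotsai_advice") -> bool:
    """Return True if the named tool appears in an OpenAI-style `tools` array."""
    tools = request_body.get("tools") or []
    return any(t.get("function", {}).get("name") == tool_name for t in tools)

# Example: gating silently dropped the tool, so the LLM can never call it.
captured_request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "How should I give Sam feedback?"}],
    "tools": [],  # sotsai_advice is missing
}
print(tool_is_exposed(captured_request))  # False
```

Running this against real captured requests quickly tells you whether the problem is upstream of the model.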
### 2) The system prompt is too vague

LLMs do not "intuitively" know when to call tools.
If your prompt says:
“You may call tools if helpful”
That is often not enough.
The model may:
- prefer to answer directly
- avoid tools to reduce complexity
- assume generic advice is acceptable
### 3) The LLM is discouraged from using tools

This can happen if:
- the tool description is unclear
- the schema looks complex or risky
- previous tool calls resulted in errors
- the model was trained or instructed to minimize tool usage
In that case, the LLM learns:
“Better not touch this.”
## The recommended fix (canonical pattern)

### 1) Gate at the orchestration layer

Do not rely on the LLM to decide whether profiles exist.

Your orchestration layer should:

```text
IF user_profile exists:
    expose the sotsai_advice tool
ELSE:
    do NOT expose the tool
```

If the tool is exposed, the LLM should assume it is valid to use.
### 2) Make tool usage explicit in the system prompt

Use clear, non-optional language.

Recommended wording:

```text
If you need to generate workplace communication guidance, you may have access to a tool named `sotsai_advice`.
This tool provides structured behavioral reasoning based on psychometric profiles.

IMPORTANT RULES:
- The tool is available only when a valid user psychometric profile exists.
- Do not attempt to determine whether profiles exist.
- Do not request, infer, or invent psychometric profiles.

WHEN TO USE THE TOOL:
- If the situation involves communication with another person AND the tool is available, you may call `sotsai_advice` to obtain behavioral reasoning.

HOW TO USE THE TOOL OUTPUT:
- The tool does NOT generate final user-facing text.
- Use its output as internal reasoning material only.
- Adapt tone, framing, structure, and intent based on that reasoning.
- You decide what to surface, summarize, or ignore.

WHEN NOT TO USE THE TOOL:
- If the tool is not available, do not attempt to call it.
- In that case, handle the request using your own general reasoning.

Never expose tool output directly to the user.
```
### 3) Make the tool description actionable

Bad tool description:
“Get behavioral advice.”
Good tool description:
“Return structured, psychometric-based behavioral reasoning for workplace communication. Requires a user psychometric profile. Use this before generating final user-facing text.”
The LLM should understand:
- when to call
- why it exists
- what to do with the result
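As a concrete sketch, the good description above could be wired into an OpenAI-style function definition like this (the `situation` parameter schema is an assumption for illustration, not SotsAI's real schema):

```python
sotsai_advice_tool = {
    "type": "function",
    "function": {
        "name": "sotsai_advice",
        "description": (
            "Return structured, psychometric-based behavioral reasoning for "
            "workplace communication. Requires a user psychometric profile. "
            "Use this before generating final user-facing text."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                # Hypothetical field: what the user is trying to communicate.
                "situation": {
                    "type": "string",
                    "description": "The workplace communication situation to reason about.",
                },
            },
            "required": ["situation"],
        },
    },
}
```

A small, required schema like this signals to the model that the tool is cheap and safe to call, which counters the "better not touch this" effect described above.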
## Debugging checklist

If the LLM does not call SotsAI, check:
- Was the tool included in the LLM request?
- Was the tool included only when `user_profile` existed?
- Does the system prompt explicitly instruct tool usage?
- Is the tool description clear and non-optional?
- Are tool schemas valid JSON and not overly permissive?
- Did previous tool calls fail and bias the model away?

Most issues are found in the first two checks.
## What NOT to do

Avoid these anti-patterns:
- ❌ hoping the LLM “figures it out”
- ❌ exposing the tool without clear instructions
- ❌ letting the LLM decide if profiles exist
- ❌ adding psychometric logic back into the prompt
- ❌ forcing tool calls unconditionally
SotsAI works best when the system is explicit.