Observability
SotsAI is designed to be observable by default without forcing you to log sensitive or personal data.
This page explains what to log, what not to log, and how to debug integrations safely.
What SotsAI is optimized for
SotsAI responses are intentionally:
- structured
- deterministic in behavioral intent for a given input
- safe to inspect at a metadata level
This allows you to:
- debug behavioral reasoning
- monitor usage patterns
- detect integration issues
…without logging raw conversations or psychometric data.
What you should log
Strongly recommended
At minimum, log:
- request timestamp
- endpoint (/v1/advice, /v1/disc/profile, etc.)
- organization identifier (from your side)
- response status (ok, error)
- error code (if any)
- latency (end-to-end or SotsAI-specific)
Example (pseudo):
{ "service": "sotsai", "endpoint": "/v1/advice", "status": "ok", "org_id": "org_123", "latency_ms": 412}Metadata you can safely inspect
SotsAI responses include non-sensitive reasoning signals intended for observability.
Common examples:
- metadata.personalization_level
- metadata.strength_score
- content.primary_tension_frame
- content.impact_estimate
These help answer questions like:
- “Was this response fully personalized?”
- “Did profiles apply correctly?”
- “Was the situation considered high-risk?”
These fields describe reasoning outcomes, not user data. They are safe to log.
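For example, a small helper can pull just these signals out of a response before logging. This is a sketch: the response interface below is an assumption based on the field names above, not a guaranteed schema, so adapt the paths to the payload your integration actually receives.
```ts
// Sketch: extract only non-sensitive reasoning signals from a SotsAI response.
// The interface is an assumption for illustration; adjust the field paths
// to match your real payloads.
interface SotsAIResponse {
  metadata?: {
    personalization_level?: string;
    strength_score?: number;
  };
  content?: {
    primary_tension_frame?: string;
    impact_estimate?: string;
  };
}

function observabilityFields(response: SotsAIResponse) {
  return {
    personalization_level: response.metadata?.personalization_level,
    strength_score: response.metadata?.strength_score,
    primary_tension_frame: response.content?.primary_tension_frame,
    impact_estimate: response.content?.impact_estimate,
  };
}

// Example: attach these to the structured log entry described earlier, e.g.
// logger.info({ service: "sotsai", endpoint: "/v1/advice", ...observabilityFields(response) });
```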
What you should NOT log
Avoid logging:
- raw context_summary
- full psychometric profiles
- raw LLM prompts or outputs containing user text
- email addresses or other personal identifiers
- raw tool arguments emitted by the LLM
If you must log content for debugging (a redaction sketch follows this list):
- redact aggressively
- restrict access
- limit retention
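One way to apply all three is to redact known-sensitive keys before a payload ever reaches your logging pipeline. A minimal sketch; the key names are assumptions drawn from the list above, so extend the set to match what your integration actually sends and receives.
```ts
// Sketch: replace known-sensitive keys with a placeholder before logging.
// The key names here are illustrative assumptions, not a SotsAI schema.
const SENSITIVE_KEYS = new Set([
  "context_summary",
  "profile",
  "profiles",
  "prompt",
  "email",
  "tool_arguments",
]);

function redact(value: unknown): unknown {
  if (Array.isArray(value)) {
    return value.map(redact);
  }
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([key, v]) =>
        SENSITIVE_KEYS.has(key) ? [key, "[REDACTED]"] : [key, redact(v)]
      )
    );
  }
  return value;
}

// Usage: log redact(requestPayload) instead of the payload itself,
// and still restrict access and limit retention for those logs.
```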
Correlation IDs
SotsAI does not impose a tracing format or ID schema.
You should:
- generate your own correlation or request IDs
- propagate them through:
- your orchestration layer
- your LLM calls
- your SotsAI calls
This allows full end-to-end tracing across: user → LLM → SotsAI → LLM → response
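A minimal sketch of that propagation, assuming a Node/TypeScript orchestration layer; the X-Request-Id header name and the base URL are illustrative choices on your side, not anything SotsAI requires.
```ts
import { randomUUID } from "node:crypto";

// Sketch: one correlation ID per user request, attached to every downstream
// call so logs from each hop can be joined later. Header name and URL are
// illustrative assumptions.
async function handleUserRequest(payload: object) {
  const requestId = randomUUID();

  // 1. Your orchestration layer / LLM call: pass requestId along as metadata.

  // 2. SotsAI call, tagged with the same ID.
  const response = await fetch("https://sotsai.example.com/v1/advice", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Request-Id": requestId, // your own ID scheme, not a SotsAI requirement
    },
    body: JSON.stringify(payload),
  });

  // 3. Log metadata only, keyed by the same requestId.
  console.log(JSON.stringify({
    request_id: requestId,
    service: "sotsai",
    endpoint: "/v1/advice",
    status: response.ok ? "ok" : "error",
  }));

  return response;
}
```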
Debugging unexpected output
When output feels incorrect, check in this order:
- Were profiles present?
- Were profiles valid and complete?
- Was the situation context specific enough?
- Did your LLM apply the reasoning, or just restate it?
- Was SotsAI called when it should have been?
Most “bad outputs” are integration issues, not model issues.
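The first two checks can be automated as a cheap diagnostic before digging further. The sketch below assumes the metadata.personalization_level field described earlier and treats the value "full" as the fully-personalized case; both are assumptions to adapt to the responses you actually receive.
```ts
// Sketch: automate the first integration checks before blaming the model.
// The metadata shape and the "full" value are assumptions.
function diagnose(
  requestProfiles: unknown[] | undefined,
  response: { metadata?: { personalization_level?: string } }
): string[] {
  const findings: string[] = [];
  if (!requestProfiles || requestProfiles.length === 0) {
    findings.push("No profiles were sent with the request.");
  }
  if (response.metadata?.personalization_level !== "full") {
    findings.push("Response reports reduced personalization; check profile validity and completeness.");
  }
  return findings;
}
```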
Quota and billing observability
SotsAI enforces:
- per-organization quotas
- per-endpoint rate limits
You should surface the following in your monitoring dashboards:
- quota exhaustion
- rate-limit errors
- billing-related error codes
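A sketch of how that might look as metrics; the error-code strings and the incrementCounter helper are placeholders for whatever error taxonomy and metrics client your integration actually uses.
```ts
// Sketch: map quota and rate-limit failures onto monitoring counters.
// Error-code strings and incrementCounter are placeholder assumptions.
declare function incrementCounter(name: string): void;

function recordSotsAIError(statusCode: number, errorCode?: string): void {
  if (statusCode === 429) {
    incrementCounter("sotsai.rate_limited");
  } else if (errorCode === "quota_exhausted") {
    incrementCounter("sotsai.quota_exhausted");
  } else if (errorCode?.startsWith("billing_")) {
    incrementCounter("sotsai.billing_error");
  }
}
```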