Observability

SotsAI is designed to be observable by default without forcing you to log sensitive or personal data.

This page explains what to log, what not to log, and how to debug integrations safely.


Observable by design

SotsAI responses are intentionally:

  • structured
  • deterministic in behavioral intent for a given input
  • safe to inspect at a metadata level

This allows you to:

  • debug behavioral reasoning
  • monitor usage patterns
  • detect integration issues

…without logging raw conversations or psychometric data.


What to log

At minimum, log:

  • request timestamp
  • endpoint (/v1/advice, /v1/disc/profile, etc.)
  • organization identifier (from your side)
  • response status (ok, error)
  • error code (if any)
  • latency (end-to-end or SotsAI-specific)

Example (pseudo):

{
  "service": "sotsai",
  "endpoint": "/v1/advice",
  "status": "ok",
  "org_id": "org_123",
  "latency_ms": 412
}
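
A minimal sketch of emitting that entry from Python; the helper name and logger wiring here are illustrative, not part of any SotsAI SDK:

import json
import logging

logger = logging.getLogger("sotsai")

def log_sotsai_call(endpoint: str, org_id: str, status: str,
                    latency_ms: int, error_code: str | None = None) -> None:
    """Emit a structured, metadata-only log entry for one SotsAI call."""
    entry = {
        "service": "sotsai",
        "endpoint": endpoint,
        "status": status,
        "org_id": org_id,
        "latency_ms": latency_ms,
    }
    if error_code:
        entry["error_code"] = error_code  # only set on failures
    logger.info(json.dumps(entry))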

Reasoning signals

SotsAI responses include non-sensitive reasoning signals intended for observability.

Common examples:

  • metadata.personalization_level
  • metadata.strength_score
  • content.primary_tension_frame
  • content.impact_estimate

These help answer questions like:

  • “Was this response fully personalized?”
  • “Did profiles apply correctly?”
  • “Was the situation considered high-risk?”

These fields describe reasoning outcomes, not user data. They are safe to log.
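
For example, a small helper can copy only these signal fields into your logs. This is a sketch; it assumes the response has been parsed into a dict with the metadata and content objects named above:

def extract_signals(response: dict) -> dict:
    """Copy only the non-sensitive reasoning signals for logging."""
    metadata = response.get("metadata", {})
    content = response.get("content", {})
    return {
        "personalization_level": metadata.get("personalization_level"),
        "strength_score": metadata.get("strength_score"),
        "primary_tension_frame": content.get("primary_tension_frame"),
        "impact_estimate": content.get("impact_estimate"),
    }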


What not to log

Avoid logging:

  • raw context_summary
  • full psychometric profiles
  • raw LLM prompts or outputs containing user text
  • email addresses or identifiers
  • raw tool arguments emitted by the LLM

If you must log content for debugging:

  • redact aggressively
  • restrict access
  • limit retention
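
One way to apply the first step before anything reaches your log pipeline; a sketch, where the key names in SENSITIVE_KEYS are hypothetical and should be adjusted to your own payload shape:

import re

SENSITIVE_KEYS = {"context_summary", "profiles", "prompt", "tool_arguments"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(payload: dict) -> dict:
    """Drop sensitive keys outright and mask email-like strings elsewhere."""
    clean = {}
    for key, value in payload.items():
        if key in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, dict):
            clean[key] = redact(value)  # recurse into nested objects
        elif isinstance(value, str):
            clean[key] = EMAIL_RE.sub("[EMAIL]", value)
        else:
            clean[key] = value  # lists and other types left as-is for brevity
    return clean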

Tracing

SotsAI does not impose a tracing format or ID schema.

You should:

  • generate your own correlation or request IDs
  • propagate them through:
    • your orchestration layer
    • your LLM calls
    • your SotsAI calls

This allows full end-to-end tracing across: user → LLM → SotsAI → LLM → response
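
A sketch of that propagation; the X-Request-ID header name and the requests library are arbitrary choices here, since SotsAI imposes no schema:

import uuid
import requests

def traced_post(url: str, body: dict, request_id: str | None = None) -> requests.Response:
    """POST with a correlation ID attached as a header."""
    request_id = request_id or str(uuid.uuid4())
    return requests.post(
        url,
        json=body,
        headers={"X-Request-ID": request_id},  # header name is your choice
    )

Reuse the same request_id for the LLM call and the SotsAI call that belong to one user turn, so all hops can be joined in your traces.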


Debugging checklist

When output feels incorrect, check in this order:

  1. Were profiles present?
  2. Were profiles valid and complete?
  3. Was the situation context specific enough?
  4. Did your LLM apply the reasoning, or just restate it?
  5. Was SotsAI called when it should have been?

Most “bad outputs” are integration issues, not model issues.
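
The first three checks can be automated before the call. A sketch, assuming a request body with profiles and context_summary fields; the scores key and the word-count threshold are hypothetical:

def preflight_warnings(request_body: dict) -> list[str]:
    """Flag the most common integration gaps before calling SotsAI."""
    warnings = []
    profiles = request_body.get("profiles") or []
    if not profiles:
        warnings.append("no profiles attached")
    for profile in profiles:
        if not profile.get("scores"):  # hypothetical completeness check
            warnings.append("profile missing scores")
    context = request_body.get("context_summary", "")
    if len(context.split()) < 10:  # arbitrary threshold; tune for your use case
        warnings.append("situation context may be too vague")
    return warnings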


Rate limits and quotas

SotsAI enforces:

  • per-organization quotas
  • per-endpoint rate limits

Surface the following in your monitoring dashboards:

  • quota exhaustion
  • rate-limit errors
  • billing-related error codes
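
For example (a sketch; the 429 status and the quota_exceeded and billing_error codes are assumptions, so confirm the exact values against the codes SotsAI actually returns):

def record_sotsai_errors(status_code: int, error_code: str | None, metrics) -> None:
    """Increment dashboard counters for quota, rate-limit, and billing failures."""
    if status_code == 429:
        metrics.increment("sotsai.rate_limited")
    if error_code in {"quota_exceeded", "billing_error"}:  # hypothetical codes
        metrics.increment(f"sotsai.{error_code}")

Here metrics stands in for whatever counter client your monitoring stack provides (statsd, Prometheus, etc.).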