# Interlocutor resolution strategy
SotsAI does not resolve identities.
It expects your system to provide:
- a user profile (always), and
- optionally an interlocutor profile (when the situation involves another person).
This page explains how your system should determine who the interlocutor is, while preserving privacy, determinism, and auditability.
## Why this matters

Most workplace requests look like this:
- “How should I give feedback to him?”
- “I need to convince my manager.”
- “This keeps happening with people in another team.”
Before calling SotsAI, your system must decide:
- Is there an interlocutor?
- Who is it (or who could it be)?
- Which psychometric profile(s) should be attached?
SotsAI does not answer these questions itself. This is intentional:
- to avoid collecting or inferring PII
- to keep identity resolution auditable
- to let you leverage your existing organization data safely
## High-level decision flow

A robust resolution flow looks like this:
```
1) Does the request involve another person?
   └─ no → call SotsAI with user_profile only

2) If yes: can the interlocutor be resolved deterministically?
   └─ yes → attach interlocutor_profile
   └─ no  → continue

3) Can you narrow down candidates safely?
   └─ one candidate       → attach profile
   └─ multiple candidates → disambiguate
   └─ zero candidates     → degrade gracefully

4) Call SotsAI only with the profiles you have
```
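As a rough sketch, this flow can be expressed as a small dispatcher. Everything here is hypothetical glue code: the injected callables (`extract_hint`, `resolve_candidates`, `fetch_profile`, `call_sotsai`, `ask_user`) stand in for your own components, not SotsAI APIs.

```python
# Hypothetical glue code for the decision flow above. All injected
# callables are placeholders for your own components, not SotsAI APIs.

def handle_request(text, user_ref, *, extract_hint, resolve_candidates,
                   fetch_profile, call_sotsai, ask_user):
    hint = extract_hint(text)  # step 1: does the request involve another person?
    if hint is None:
        return call_sotsai(user_profile=fetch_profile(user_ref))

    candidates = resolve_candidates(user_ref, hint)  # steps 2-3: deterministic rules
    if len(candidates) == 1:
        return call_sotsai(
            user_profile=fetch_profile(user_ref),
            interlocutor_profile=fetch_profile(candidates[0]),
        )
    if len(candidates) > 1:
        return ask_user(candidates)  # disambiguate via a UI picker

    # Zero candidates → degrade gracefully: user profile only (step 4)
    return call_sotsai(user_profile=fetch_profile(user_ref))
```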
## Three valid resolution strategies

### 1) Deterministic org-graph resolution (recommended default)

If your system has access to an organizational graph (HRIS, directory, IAM):
- manager relationships
- team membership
- reporting lines
- role types
You can resolve many cases without NLP.
Examples:

- “my manager” → `subject.manager_ref`
- “my direct report” → users where `manager_ref == subject`
- “someone on my team” → same `team_ref`
This approach is:
- deterministic
- auditable
- privacy-safe
- LLM-independent
If you can resolve identities deterministically, do not involve an LLM.
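For instance, resolving “my manager” needs nothing more than an org-graph lookup. A minimal sketch, assuming a `users` mapping keyed by `user_ref` with a `manager_ref` field (all names are illustrative):

```python
# Minimal deterministic lookup: "my manager" → subject.manager_ref.
# The users mapping and field names are illustrative, not a SotsAI API.

def resolve_manager(users: dict, subject_ref: str) -> str | None:
    """Return the manager's user_ref, or None if unknown."""
    subject = users.get(subject_ref)
    if subject is None or not subject.get("manager_ref"):
        return None
    return subject["manager_ref"]

users = {
    "u_42": {"manager_ref": "u_7", "team_ref": "t_ops"},
    "u_7": {"manager_ref": None, "team_ref": "t_ops"},
}
assert resolve_manager(users, "u_42") == "u_7"  # deterministic, auditable
```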
### 2) Lightweight inference + org graph (best UX)

When the relationship is implicit or ambiguous, use a small inference step to extract a relationship hint, then apply deterministic rules.
Typical relationship hints:

- `manager`
- `direct_report`
- `peer_same_team`
- `person_cross_team`
- `external`
- `unknown`
This inference step can be:
- a tiny classifier
- a constrained LLM prompt
- simple heuristics (“my boss”, “someone in sales”, etc.; sketched below)
Once the hint is known, apply org-graph rules to produce candidates.
This keeps:
- identity resolution outside SotsAI
- logic explicit and testable
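As an example of the heuristic route, a hint extractor can be a few pattern checks. This is a minimal sketch; the phrase lists are illustrative, not exhaustive, and a real system might use a tiny classifier or a constrained LLM prompt instead:

```python
# Illustrative heuristics only; the phrase lists are placeholders.

RELATIONSHIP_PATTERNS = {
    "manager": ("my manager", "my boss"),
    "direct_report": ("my direct report", "someone i manage"),
    "peer_same_team": ("my teammate", "someone on my team"),
    "person_cross_team": ("another team", "someone in sales", "someone in ops"),
}

def extract_relationship_hint(text: str) -> str:
    """Map free text to a relationship hint, or 'unknown'."""
    lowered = text.lower()
    for hint, phrases in RELATIONSHIP_PATTERNS.items():
        if any(phrase in lowered for phrase in phrases):
            return hint
    return "unknown"

assert extract_relationship_hint("I need to convince my boss") == "manager"
```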
### 3) Disambiguation (multiple candidates)

Sometimes, resolution yields multiple valid candidates. This is not a failure.
Examples:
- multiple managers (matrix organization)
- several peers in the same team
- several people with the same name
- vague references (“someone in ops”)
In this case, you should:
- pause the advice flow
- ask a single clarifying question, or
- present a UI picker (recommended)
Avoid pushing employee lists into the LLM if possible. UI-based disambiguation is safer and more controllable.
Once resolved, call SotsAI with the correct profile.
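A sketch of this branching, assuming a `candidates` list shaped like the output of the reference resolver shown later on this page (the action names and threshold are illustrative):

```python
# Illustrative branching on resolution results. The action names
# ("attach", "disambiguate", "degrade") and threshold are placeholders.

def choose_next_step(candidates: list[dict], threshold: float = 0.8) -> dict:
    confident = [c for c in candidates if c["confidence"] >= threshold]
    if len(confident) == 1:
        return {"action": "attach", "user_ref": confident[0]["user_ref"]}
    if confident:
        # Multiple plausible people: show a UI picker, not an LLM guess
        return {"action": "disambiguate", "options": confident}
    # Nobody matched confidently: degrade gracefully (user_profile only)
    return {"action": "degrade"}
```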
## External interlocutors (important)

Not all conversations involve internal employees.
Examples:
- clients
- vendors
- candidates
- partners
In these cases:
- you usually cannot resolve a psychometric profile
- do not invent one
Recommended handling:

- call SotsAI with `user_profile` only
- optionally set `relationship_type: "external"` or `"other"`
SotsAI will focus on self-adaptation strategies.
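In practice this might look like the following request sketch. The payload shape and `sotsai_client` are placeholders for whatever your integration already uses; only `user_profile` and `relationship_type` come from this page:

```python
# Hypothetical request sketch: external interlocutor, no invented profile.
# `sotsai_client` and the payload shape are placeholders for your integration.

def advise_external(sotsai_client, user_profile: dict):
    payload = {
        "user_profile": user_profile,      # always present
        "relationship_type": "external",   # or "other"
        # no interlocutor_profile: we could not resolve one, so we do
        # not fabricate it; SotsAI falls back to self-adaptation advice
    }
    return sotsai_client.advise(payload)
```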
## What NOT to do

Avoid these anti-patterns:
- guessing identities from text
- inferring personalities from a single message
- passing raw employee directories to the LLM
- letting the LLM invent or “simulate” people
- resolving identities inside prompts
These break:
- privacy guarantees
- auditability
- behavioral reliability
## Reference resolution logic (example)

Below is a reference strategy many teams implement internally.
It is shown for illustration only; SotsAI does not require or provide this logic.
Typical rules:

- `manager` → explicit manager if known, else a manager-like role in the same team
- `direct_report` → users whose `manager_ref` equals the subject
- `peer_same_team` → same team, excluding the subject
- `person_cross_team` → other teams, lower confidence
The key idea:
- deterministic rules first
- confidence-based fallback
- explicit disambiguation when needed
```python
from difflib import SequenceMatcher  # for the fuzzy name-based resolution

# NOTE: `db` and `User` below are your own DB session and ORM user model
# (SQLAlchemy-style); they are illustrative and not provided by SotsAI.


def _similarity(a: str, b: str) -> float:
    """Lightweight fuzzy similarity in [0, 1]."""
    a = (a or "").strip().lower()
    b = (b or "").strip().lower()
    if not a or not b:
        return 0.0
    return SequenceMatcher(None, a, b).ratio()


def _best_name_score(
    interlocutor_name: str,
    candidate_full_name: str,
    candidate_email: str | None = None,
) -> float:
    """
    Returns a fuzzy name score. Kept simple on purpose.
    You can replace this with trigram search, FTS, embeddings, etc.
    """
    q = (interlocutor_name or "").strip().lower()
    name = (candidate_full_name or "").strip().lower()
    email = (candidate_email or "").strip().lower()

    if not q:
        return 0.0

    # Cheap exact-ish signals
    if q == name:
        return 1.0
    if q and name and q in name:
        return 0.85

    # Fuzzy fallback
    score = _similarity(q, name)

    # Optional: tiny boost if the query matches the email local part
    # (often what users type)
    if email and "@" in email:
        local = email.split("@", 1)[0]
        if q in local:
            score = max(score, 0.75)

    return score


def resolve_interlocutor(
    db,
    org_id: str,
    user_ref: str,
    interlocutor_name: str | None = None,
    relationship_hint: str | None = None,
    max_candidates: int = 10,
):
    """
    Example client-side interlocutor resolution logic.

    Inputs:
    - db: your DB session/connection
    - org_id: organization scope
    - user_ref: the querying user's identifier
    - interlocutor_name: optional free-text hint (e.g. "Sarah", "Sarah Dupont")
    - relationship_hint: optional hint (e.g. "manager", "direct_report",
      "peer_same_team", "person_cross_team")
    - max_candidates: cap results for UI/LLM safety

    Output:
    - list of candidate interlocutors with confidence scores
    """
    # 1) Load the subject user (the person asking)
    subject = db.query(User).filter(
        User.org_id == org_id,
        User.user_ref == user_ref,
        User.deleted_at.is_(None),
    ).first()

    if subject is None:
        raise ValueError("Unknown user_ref for this organization")

    candidates = []

    # 2) Build the base pool, depending on the relationship hint
    if relationship_hint == "manager":
        pool = []

        # Prefer the explicit manager_ref first
        if subject.manager_ref:
            manager = db.query(User).filter(
                User.org_id == org_id,
                User.user_ref == subject.manager_ref,
                User.deleted_at.is_(None),
            ).first()
            if manager:
                pool.append((manager, 1.0))  # strong prior

        # Fallback: "manager-like" users in the same team
        if not pool and subject.team_ref:
            users = db.query(User).filter(
                User.org_id == org_id,
                User.team_ref == subject.team_ref,
                User.user_ref != subject.user_ref,
                User.deleted_at.is_(None),
            ).all()
            for u in users:
                if u.role_type and "manager" in u.role_type.lower():
                    pool.append((u, 0.8))

    elif relationship_hint == "direct_report":
        users = db.query(User).filter(
            User.org_id == org_id,
            User.manager_ref == subject.user_ref,
            User.deleted_at.is_(None),
        ).all()
        pool = [(u, 0.9) for u in users]

    elif relationship_hint == "peer_same_team" and subject.team_ref:
        users = db.query(User).filter(
            User.org_id == org_id,
            User.team_ref == subject.team_ref,
            User.user_ref != subject.user_ref,
            User.deleted_at.is_(None),
        ).all()
        pool = [(u, 0.7) for u in users]

    elif relationship_hint == "person_cross_team" and subject.team_ref:
        users = db.query(User).filter(
            User.org_id == org_id,
            User.team_ref != subject.team_ref,
            User.user_ref != subject.user_ref,
            User.deleted_at.is_(None),
        ).all()
        pool = [(u, 0.6) for u in users]

    else:
        # No relationship hint: start with a broad-but-scoped pool (org only)
        users = db.query(User).filter(
            User.org_id == org_id,
            User.user_ref != subject.user_ref,
            User.deleted_at.is_(None),
        ).all()
        pool = [(u, 0.5) for u in users]

    # 3) If we have a name hint, rank candidates using fuzzy matching
    for u, prior in pool:
        full_name = getattr(u, "full_name", None) or getattr(u, "name", "") or ""
        email = getattr(u, "email", None)
        name_score = _best_name_score(interlocutor_name or "", full_name, email)
        # Combine the relationship prior with the name score
        confidence = max(prior, min(1.0, (prior * 0.6) + (name_score * 0.7)))
        candidates.append(
            {
                "user_ref": u.user_ref,
                "confidence": round(confidence, 3),
                # Optional: safe-to-display hints for UI disambiguation
                "display": {
                    "full_name": full_name,
                    "team_ref": getattr(u, "team_ref", None),
                    "role_type": getattr(u, "role_type", None),
                },
            }
        )

    # 4) Sort & trim
    candidates.sort(key=lambda c: c["confidence"], reverse=True)
    return candidates[:max_candidates]
```
## Where this fits in the pipeline

Interlocutor resolution happens before calling SotsAI:

```
User request
    ↓
Intent & relationship detection
    ↓
Interlocutor resolution
    ↓
Profile fetching
    ↓
SotsAI call
    ↓
LLM rendering
```

This keeps responsibilities clean:
- your system resolves who
- SotsAI reasons about how
- your LLM decides what to say