Concepts

Embeddings

How vector embeddings capture semantic meaning across sessions, issues, contacts, and knowledge for search and deduplication.

Every session, issue, contact, and knowledge chunk gets a 1536-dimensional vector embedding generated by OpenAI. These embeddings capture semantic meaning, not just keywords, and are stored alongside the original data.

What Gets Embedded

| Entity | What's Embedded | Storage |
| --- | --- | --- |
| Sessions | Customer messages and agent responses | session_embeddings table |
| Issues | Title and description | issue_embeddings table |
| Contacts | Name, email, company, and metadata | contact_embeddings table |
| Knowledge | Each chunk of analyzed content (with section headings) | knowledge_embeddings table |
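Per-entity embedding tables like these are commonly backed by Postgres with pgvector. The schema below is a hedged sketch of what one such table might look like; the column names and index choice are assumptions for illustration, not the project's actual DDL:

```sql
-- Hypothetical pgvector table for session embeddings; the other
-- *_embeddings tables would follow the same shape.
CREATE TABLE session_embeddings (
    session_id   uuid PRIMARY KEY,
    content_hash text NOT NULL,          -- lets recomputation be skipped when content is unchanged
    embedding    vector(1536) NOT NULL   -- one 1536-dimensional vector per session
);

-- An approximate-nearest-neighbor index keeps cosine-similarity search fast.
CREATE INDEX ON session_embeddings
    USING ivfflat (embedding vector_cosine_ops);
```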

How Embeddings Are Used

  • Deduplication - when new feedback arrives, its embedding is compared against existing issues to find semantic matches (cosine similarity, with a match threshold of 0.5-0.6)
  • Semantic search - all resource search tools (MCP, CLI, API) use vector similarity to find relevant content by meaning rather than keyword overlap
  • Graph evaluation - the relationship discovery pipeline uses embeddings to find semantically related entities across all entity types
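The deduplication step above reduces to computing cosine similarity between a new embedding and each stored issue embedding, then keeping matches above the threshold. A minimal sketch in plain Python, with toy 3-dimensional vectors standing in for the real 1536-dimensional ones (the function names and the 0.55 default are illustrative, not the actual pipeline):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_duplicates(new_vec: list[float],
                    existing: dict[str, list[float]],
                    threshold: float = 0.55) -> list[str]:
    """Return ids of stored embeddings whose similarity clears the threshold."""
    return [issue_id for issue_id, vec in existing.items()
            if cosine_similarity(new_vec, vec) >= threshold]

# Two stored issues; the new feedback vector points almost the same way as ISSUE-1.
existing = {"ISSUE-1": [1.0, 0.0, 0.0], "ISSUE-2": [0.0, 1.0, 0.0]}
print(find_duplicates([0.9, 0.1, 0.0], existing))  # → ['ISSUE-1']
```

Because cosine similarity compares vector direction rather than magnitude, a long and a short piece of feedback about the same problem still score as near-duplicates.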

Embeddings are managed automatically. They're generated on creation, regenerated when content changes (using a content hash to skip unnecessary recomputation), and removed on deletion.