Solutions

Liya Support Intelligence

Grounded service operations AI

Liya Support Intelligence gives support and operations teams a grounded AI service layer. Use customer-facing chat where it helps, but also deploy Assist, Triage, QA, Discover, and controlled Resolve Actions on top of the same audited knowledge and workflow foundation.

MODULE NETWORK
5 modules · 1 core
ASSIST ACTIVE
Product Preview

What the platform looks like in use.

Support teams see grounded answers, triage state, sources, and next actions in one view.

SUPPORT INTELLIGENCE CONSOLE
Agent Assist
CASE SIGNAL
Customer is asking whether the export failure is a known incident and what workaround support should share right now.
ASSIST OUTPUT
94% grounded
Known issue confirmed for enterprise CSV sync. Recommend sharing the workaround article, acknowledging the incident, and promising the next status update at 2:30 PM CT.
Suggested reply · Citations attached · Escalation ready
Intent
incident_lookup
Confidence
94% grounded
Next step
Send + attach workaround
Agent Assist

Source-backed replies inside the support workflow

Liya drafts the response, cites the exact source, and prepares the next action so agents can move fast without losing control or grounding.

Suggested reply · Citations attached · Escalation ready
Primary source
Jira: INC-214 export sync
Policy match
Status comms playbook
Product Modules

One platform. Five support intelligence modules.

Chat is one surface, not the whole product. Each module addresses a distinct support workflow using the same grounded retrieval, audit, and knowledge foundation.

LIYA SUPPORT CONSOLE
CASE #4821 · OPEN
CUSTOMER
Our export keeps failing after the update. Enterprise accounts are completely blocked.
LIYA SUGGEST · GROUNDED 94%
Known issue matched: INC-214 export sync. Recommend acknowledging the incident and sharing the manual export workaround.
Jira: INC-214 · KB: export-fix-guide
MODULE 01 · ASSIST

Assist

Grounded agent assist for live support work — suggested replies, source-backed answers, case summaries, and next-best actions.

Suggested replies with cited sources
Case summarisation and history lookup
Next-best action and escalation drafts
LIYA SUPPORT CONSOLE
INCOMING — #4821 · CLASSIFYING
"Can't export our reports. The enterprise admin panel is throwing a 500 error since this morning."
INTENT: export_failure · 97% CONF
PRIORITY: High
ROUTE → Enterprise Support
SLA: 2hr response required
MODULE 02 · TRIAGE

Triage

Classify, prioritise, and route incoming conversations and tickets using your policies, account context, and support rules.

Issue classification with confidence scores
Queue routing by policy and account tier
Escalation preparation and handoff notes
LIYA SUPPORT CONSOLE
QA REVIEW — #4819 · AUTO-SCORED
Grounding score: 84%
Policy adherence: 97%
Response quality: 88%
⚠ 1 UNSUPPORTED CLAIM
"Same-day resolution guaranteed" — not grounded in any source
MODULE 03 · QA

QA

Review support interactions for grounding quality, policy adherence, escalation correctness, and operational consistency.

Grounding quality scoring per conversation
Policy adherence and compliance flags
Coaching suggestions and QA scorecards
LIYA SUPPORT CONSOLE
TOPIC ANALYSIS — 30 DAYS
Export issues: 142
SSO login errors: 89
Billing changes: 51
API rate limits: 38
3 knowledge gaps detected — review recommended →
MODULE 04 · DISCOVER

Discover

Surface recurring issues, failed answers, and documentation gaps from real support conversations before they become churn risks.

Recurring issue clustering and volume trends
Low-confidence answer and failure analysis
Knowledge gap detection and doc recommendations
LIYA SUPPORT CONSOLE
PROPOSED ACTION · APPROVAL REQUIRED
ACTION: Reset API rate limit
ACCOUNT: Acme Corp (Enterprise)
TICKET: #4821
RISK: Low — reversible
MODULE 05 · RESOLVE ACTIONS

Resolve Actions

Connect approved workflows and live systems so Liya can prepare or execute bounded actions with clear audit trails.

Approval-gated action execution
Ticket and account state operations
Full write-path audit trails
Packaging

One product family, packaged for different buyers.

These are not separate products built from scratch. Liya uses one grounded support intelligence core, one set of modules, and one governance model. The difference is how the product is packaged for smaller teams versus larger service organisations.

Same core product, simpler entry points
SMB Packaging
Liya Assist

The fastest entry point for lean support teams that want grounded replies, summaries, and next-best actions without buying a full operations platform.

Liya Help Center AI

A customer-facing knowledge and deflection layer for docs, FAQs, onboarding, billing, and support policy questions.

Liya Knowledge Assistant

An internal team assistant for small organisations that need fast, cited answers over SOPs, product docs, and operating playbooks.

Same core product, broader operational scope
Enterprise Packaging
Liya Support Intelligence

For customer support organisations that need Assist, Triage, QA, Discover, and customer-facing support surfaces in one governed platform.

Liya Service Intelligence

For service-led organisations that span support, success, onboarding, and service delivery with shared workflows and quality controls.

Liya Ops Intelligence

For internal IT, HR, compliance, and operations teams that want the same grounded intelligence model applied to internal service desks and workflows.

How it works

Two layers. Zero hallucinations.

Liya Support Intelligence works in two distinct layers — a one-time knowledge and workflow configuration that shapes what the system knows, and a real-time query pipeline that runs on every message, case, or support action.

KNOWLEDGE CONFIGURATION
Set up once. Every query benefits.
Upload documents, configure intents, set access control, deploy.
STEP 1
Upload
STEP 2
Configure
STEP 3
Access Control
STEP 4
Deploy
01

Upload Your Knowledge

Add documents in any format — PDF, DOCX, JSON, or plain text. Liya Support Intelligence automatically chunks, embeds, and indexes every file into a queryable vector store.

1800-char chunks with 200-char overlap. Embedded via OpenAI text-embedding-3-small (1536 dims), stored in PostgreSQL pgvector. Incremental sync — update or delete without full re-indexing.
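The windowing arithmetic behind this step can be sketched in a few lines of shell. This is a minimal illustration only: the real indexer also embeds and stores each chunk in pgvector, and the `chunk` helper and demo string below are our own, not part of the product.

```shell
#!/usr/bin/env bash
# Sketch of the fixed-size chunking described above. Production values are
# size=1800 and overlap=200, so each window starts 1600 chars after the
# previous one. Illustrative only: the real indexer also embeds and stores
# every chunk.
chunk() { # usage: chunk <size> <overlap> <text>
  local size=$1 overlap=$2 text=$3
  local stride=$((size - overlap)) start=0
  while [ "$start" -lt "${#text}" ]; do
    printf '%s\n' "${text:start:size}"   # one window per line
    start=$((start + stride))
  done
}

# Demo with small numbers so the overlap is visible:
chunk 12 4 "The quick brown fox jumps over the lazy dog"
```

With the production parameters, a 4,000-character document would yield three overlapping windows starting at offsets 0, 1,600, and 3,200.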

02

Define Domains & Intents

Configure the topics your chat covers and how each intent behaves. On Enterprise, build fully custom domains with their own system prompts, source types, and structured output schemas.

Built-in intents: answer_question, summarize, clarify, escalate, general_chat. Custom intents and domains configurable on Growth and Enterprise plans without code.

03

Set Access Control

Assign user roles and document permissions. Retrieval is filtered and responses are masked at runtime based on what each user is authorised to see — no extra code required.

RBAC enforced at the retrieval layer. Users only receive chunks their role permits. Every access decision is captured in the audit log.

04

Deploy

Go live via the no-code dashboard, REST API, SSE streaming endpoint, or an embed token for public-facing widget deployments. From setup to production in under an hour.

Dashboard for no-code teams. /v1/run for sync REST. /v1/run/stream for SSE token delivery. Embed tokens (liya_pub_*) for widgets with per-origin CORS and intent scoping.
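For the REST path, a minimal synchronous call might look like the sketch below. The endpoint, header, and payload shape mirror the streaming walkthrough later on this page (minus the `/stream` suffix); the session id and message are placeholders, and the request only fires when `LIYA_API_KEY` is actually set.

```shell
# Illustrative synchronous request to POST /v1/run (placeholder values).
payload='{
  "pack": "chat",
  "intent": "answer_question",
  "session_id": "sess_demo01",
  "input": { "message": "What is our refund policy for annual plans?" },
  "retrieval": { "top_k": 5 }
}'

# Only call the API when a key is configured in the environment.
if [ -n "${LIYA_API_KEY:-}" ]; then
  curl -s -X POST https://api.liyaengine.com/v1/run \
    -H "x-api-key: $LIYA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload"
fi
```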

QUERY PIPELINE
Runs on every message, in milliseconds.
Classify → Retrieve → Assemble → Generate → Verify → Stream.
Retrieval: < 150ms
First token: < 1.2s
Grounded: 94%
01

Intent Classification

Every message is classified before any generation. The router determines which domain, retrieval strategy, and agent path to activate. Fallback policies handle out-of-scope queries.

Intent confidence score logged per message. Configurable fallback: clarify, escalate, or block. Embed tokens restrict callable intents at the credential level.

02

Hybrid Retrieval

Relevant chunks are fetched using hybrid search: dense semantic similarity (cosine, via pgvector) combined with sparse keyword matching, then re-ranked for precision. Only chunks the user has permission to see are retrieved.

top_k configurable per request. Retrieval metadata logged per message: source_types, enriched_query, chunks_retrieved, top_similarity, retrieval_method.

03

Memory + Context Assembly

Retrieved chunks are merged with conversation history. Sessions persist across turns. After 6 turns, a rolling summary is rebuilt asynchronously to maintain context without exhausting the token budget.

Per-plan session limits: 20–200 turns, 20K–500K tokens. Context scoped per user, per tenant. No context leakage between sessions or users.

04

Grounded Generation + Streaming

The agent generates a response strictly grounded in retrieved context, streamed token-by-token via SSE. Source citations — document, section, relevance score — are included in every structured response.

Streaming endpoint: POST /v1/run/stream. Events: token (delta), done (session_id, latency_ms, sources[], memory_updated), error. Sync available via POST /v1/run.

05

Guardrails & Audit

Before delivery, grounding is verified — if a response can't be supported by retrieved context, it's blocked. Content policy, PII redaction, and RBAC masking all run on every turn.

Per-message flags: guardrails_passed, grounding_verified, flags[], pii_detected, pii_redacted. Full tamper-evident audit trail on every response.

Deployment

Deploy the way your team works

From a no-code dashboard to VPC-isolated self-hosting — Liya Support Intelligence fits your stack, your security requirements, and your go-to-market.

No code required
Dashboard
No-code interface for knowledge, support, and operations teams. Upload documents, configure domains, test intents, monitor conversations, and review audit logs — no engineering required.
POST /v1/run
REST API
Synchronous access via POST /v1/run. Full response payload includes answer, source citations, grounding metadata, session state, and usage metrics. Integrate into any backend.
Server-sent events
SSE Streaming
Real-time token delivery via POST /v1/run/stream. Token events stream directly to your UI. The done event carries sources, session state, and latency breakdown.
Embed tokens
Widget Embed
Public-scoped embed tokens (liya_pub_*) for widget deployments. CORS origin allowlist, intent scoping, and per-token rate limits — ship a chat widget without exposing your API key.
Enterprise only
Custom Deployment
Self-hosted, VPC-isolated on your own infrastructure. Your knowledge base and conversations never leave your environment. White-label and custom domain supported on Enterprise.
API Walkthrough

Stream grounded answers in real time.

One call to /v1/run/stream. Retrieval, generation, guardrails, and memory update all happen automatically — results arrive as SSE events before the response is complete.

request.sh
curl -N -X POST https://api.liyaengine.com/v1/run/stream \
  -H "x-api-key: $LIYA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "pack": "chat",
    "intent": "answer_question",
    "session_id": "sess_u82kd1",
    "input": {
      "message": "Parental leave policy for contractors?",
      "user": { "id": "usr_9f3c", "role": "contractor" }
    },
    "retrieval": { "top_k": 5 },
    "guardrails": {
      "grounding": { "action": "block_if_ungrounded" },
      "pii":       { "action": "redact" }
    }
  }'
sse-events.txt
event: token
data: {"delta":"Based on the Contractor Guide"}

event: token
data: {"delta":", contractors on 12+ month"}

event: token
data: {"delta":" contracts get 4 weeks parental leave."}

event: done
data: {
  "session_id": "sess_u82kd1",
  "latency_ms": 980,
  "retrieval_ms": 145,
  "sources": [{
    "doc": "contractor-guide",
    "section": "Leave Entitlements",
    "relevance": 0.92
  }],
  "memory_updated": true,
  "guardrails": {
    "grounding_verified": true,
    "flags": []
  }
}
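On the client side, the final answer is reassembled by concatenating the `delta` fields of the `token` events. A minimal shell sketch using the sample events above (a real client should use a proper SSE and JSON parser rather than `sed`):

```shell
# Reassemble the streamed answer from captured token events -- the same
# sample stream as sse-events.txt above. Illustrative parsing only.
events='event: token
data: {"delta":"Based on the Contractor Guide"}

event: token
data: {"delta":", contractors on 12+ month"}

event: token
data: {"delta":" contracts get 4 weeks parental leave."}'

answer=$(printf '%s\n' "$events" \
  | sed -n 's/^data: {"delta":"\(.*\)"}$/\1/p' \
  | tr -d '\n')
printf '%s\n' "$answer"
```

Because each delta carries its own leading whitespace, plain concatenation reproduces the exact answer text.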
Liya Engine components

What powers Liya Support Intelligence

Agent Mode
Multi-turn conversational agent
Vector Store
PostgreSQL + pgvector (1536 dims)
Embedding Model
OpenAI text-embedding-3-small
Memory
Persistent cross-session context
Intent Routing
Configurable classifier + fallback policies
Guardrails
Grounding verification, RBAC masking, PII redaction
BETA — LIMITED SPOTS

Ready to deploy grounded support intelligence?

Liya Support Intelligence is in beta with select enterprise design partners. Request a demo and get white-glove onboarding across Assist, Triage, QA, Discover, and customer-facing support experiences.

Request a demoView all solutions