Liya Support Intelligence

Grounded intelligence for support operations — agent assist, triage, QA, knowledge discovery, and controlled resolution workflows.

Liya Support Intelligence is Liya Engine's solution for support teams and service operations. It provides AI assistance across the full support workflow — from helping agents write better responses to classifying tickets, reviewing QA, and surfacing knowledge gaps.

Status: Beta
Domain: chat (with support-specific intent routing)
Endpoint: POST /v1/run


What it covers

Support Intelligence ships as five modules. They share the same underlying platform and can be deployed independently or together:

Assist — Grounded reply suggestions, source-backed answers, case summaries, and next-best actions for agents handling live conversations

Triage — Classify, prioritise, and route incoming tickets using your policies, account context, and support rules

QA — Review completed support interactions for grounding quality, policy adherence, escalation correctness, and consistency

Discover — Surface recurring issues, failed answers, and documentation gaps from real support conversations

Resolve Actions — Prepare or execute bounded, approved workflows (account lookups, refund preparation, policy-gated actions) with full audit trails

Packaging

Support Intelligence ships in two packaging tiers:

SMB packaging

Three focused entry points for lean support teams:

Liya Assist — The fastest entry point. Grounded reply suggestions, summaries, and next-best actions. No full operations platform required.

Liya Help Center AI — Customer-facing knowledge and deflection for docs, FAQs, onboarding, billing, and support policy questions. Powered by the same Liya Chat embed widget.

Liya Knowledge Assistant — Internal knowledge Q&A for support agents. Routes questions against your runbooks, policies, and escalation guides.

Enterprise packaging

Three expanded offerings for larger service organisations:

Support Intelligence — Full Assist + Triage + QA in one platform. Connects to your ticketing system and surfaces AI across the entire support queue.

Service Intelligence — Support Intelligence plus Discover and Resolve Actions. Adds operational loop-closing — from issue discovery to workflow execution.

Ops Intelligence — Full platform including custom domain intelligence for internal operations teams, procurement, and cross-functional service workflows.


Integration pattern

Support Intelligence integrates at two levels:

1. Agent-facing (real-time)

Call POST /v1/run from your agent desktop or ticketing system when an agent opens a conversation:

curl -X POST https://api.liyaengine.ai/v1/run \
  -H "x-api-key: $LIYA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "pack": "liya-chat",
    "intent": "answer_question",
    "message": "Customer is asking about a delayed shipment — order #45892",
    "session_id": "ticket_45892",
    "metadata": {
      "channel": "agent_assist",
      "agent_id": "agent_007"
    }
  }'

The response includes a grounded reply suggestion, the source documents used, and a confidence score.
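As a sketch, a successful assist response might look like the following — the sources and confidence field names are illustrative assumptions, not a confirmed schema:

```json
{
  "response": {
    "content": "Order #45892 shipped on schedule; the carrier reports a two-day weather delay. Suggested reply: ...",
    "sources": [
      { "title": "Shipping delays policy", "document_id": "doc_shipping_01" }
    ],
    "confidence": 0.87
  }
}
```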


2. Batch / async (triage and QA)

For triage and QA workflows, call POST /v1/run for each ticket or conversation as it arrives (triage) or completes (QA):

# Triage: classify and route an incoming ticket
curl -X POST https://api.liyaengine.ai/v1/run \
  -H "x-api-key: $LIYA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "pack": "liya-chat",
    "intent": "general_chat",
    "message": "Classify this ticket and suggest routing: [ticket content here]",
    "metadata": { "mode": "triage", "ticket_id": "TKT-1234" }
  }'
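A QA call can follow the same shape. The "mode": "qa" metadata key below mirrors the triage example and is an assumption, not a documented field:

```shell
# QA: review a completed interaction for grounding and policy adherence
# (the "mode": "qa" metadata value is illustrative, mirroring the triage call)
curl -X POST https://api.liyaengine.ai/v1/run \
  -H "x-api-key: $LIYA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "pack": "liya-chat",
    "intent": "general_chat",
    "message": "Review this completed interaction for grounding quality and policy adherence: [transcript here]",
    "metadata": { "mode": "qa", "ticket_id": "TKT-1234" }
  }'
```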

Knowledge base setup

Grounding quality depends on what you've uploaded. For support use cases:

Help articles — Your existing help center content

Policy documents — Refund policy, SLA, escalation runbooks

Product FAQs — Common questions and their authoritative answers

Internal playbooks — Agent response guidelines, tone standards

# Upload a support policy document
curl -X POST https://api.liyaengine.ai/dashboard/knowledge/upload \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -F "[email protected]" \
  -F "domain=chat"
 
# Ingest a help center URL
curl -X POST https://api.liyaengine.ai/dashboard/knowledge/ingest-url \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://help.yourbrand.com/articles/returns", "domain": "chat" }'

Session and escalation model

Support Intelligence uses the same session model as Liya Chat. Each ticket or conversation maps to a session_id. When the AI determines it cannot handle a request confidently, it returns an escalation signal:

{
  "response": {
    "content": "I don't have enough context to resolve this. I'm flagging this for human review.",
    "intent": "escalate",
    "escalation": {
      "reason": "policy_ambiguity",
      "suggested_team": "billing",
      "context_summary": "Customer disputes charge from 14 days ago, outside standard 7-day window..."
    },
    "metadata": {
      "guardrails_passed": true,
      "grounding_verified": false
    }
  }
}

Use the escalation object to route to the correct queue in your ticketing system.
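A minimal routing hook, assuming the /v1/run response has been saved to response.json and jq is available, could look like this:

```shell
# Save a sample escalation response to response.json (in practice this
# would be the body returned by POST /v1/run)
cat > response.json <<'EOF'
{"response":{"intent":"escalate","escalation":{"reason":"policy_ambiguity","suggested_team":"billing"}}}
EOF

# If the AI escalated, pull the suggested team and route the ticket
if [ "$(jq -r '.response.intent' response.json)" = "escalate" ]; then
  team=$(jq -r '.response.escalation.suggested_team' response.json)
  echo "Routing to queue: $team"   # prints: Routing to queue: billing
fi
```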


Guardrails

All Support Intelligence responses run through Liya's grounding verification layer. Responses that fail the grounding check are flagged with metadata.grounding_verified: false and trigger an escalation rather than a potentially incorrect answer.

Configure response guardrails via the dashboard (PATCH /dashboard/account/config) — set confidence thresholds, allowed response topics, and escalation triggers.
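As a sketch, a guardrail update might look like the following — the field names inside guardrails are assumptions; the dashboard exposes the authoritative schema:

```shell
# Tighten guardrails: raise the confidence floor and force escalation
# when grounding verification fails (field names are illustrative)
curl -X PATCH https://api.liyaengine.ai/dashboard/account/config \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "guardrails": {
      "confidence_threshold": 0.75,
      "escalate_on_grounding_failure": true
    }
  }'
```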

