Evose

First Agent · Connect a Knowledge Base

Build a customer-service Agent in 10 minutes that answers based on your enterprise documents

Upgrade the generic customer-service Agent into a real assistant that answers questions about your own company. Total time: about 10 minutes.

What You'll Accomplish

  1. Create a knowledge base and upload 1–3 PDF/Markdown documents
  2. Create an Agent and bind the knowledge base
  3. Configure the prompt so the Agent admits "I don't know" when it doesn't know
  4. Verify on the Workbench

Prerequisites

Steps

1 · Create a knowledge base (2 minutes)

  1. Workspace → Data · Knowledge base → New
  2. Name it: Product FAQ
  3. Choose an Embedding model (default is fine)
  4. Click Create

2 · Upload documents (2 minutes)

  1. Open the new knowledge base → Upload documents
  2. Drag in PDF / Markdown / Word / Txt — the system will:
    • Parse → automatically chunk (default 1000 tokens / chunk, 200-token overlap)
    • Vectorize and write to pgvector
  3. Wait for status to become Ready (usually within 30 seconds; large files may take 1–2 minutes)

Chunking strategy

The defaults are good enough for most cases. For tuning, see Knowledge base · Chunking and retrieval.
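For orientation, the default strategy (1000-token chunks with a 200-token overlap) works roughly like the sketch below. This is a simplification: tokens are approximated here by whitespace-split words, whereas the real pipeline counts tokens with the embedding model's tokenizer.

```python
def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into overlapping chunks.

    Tokens are approximated by whitespace-split words; the real
    pipeline would use the embedding model's tokenizer instead.
    """
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap  # advance 800 tokens per chunk by default
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```

The overlap means the last 200 tokens of each chunk reappear at the start of the next, so a sentence that straddles a chunk boundary is still retrievable as a whole.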

3 · Create the Agent (3 minutes)

  1. Workspace → Apps · Agent → New → Blank Agent

  2. Name it: Product Support Pro

  3. Role prompt:

    You are the product support agent for {company}. Use the knowledge base to answer accurately.
    
    Rules:
    - Only answer using the knowledge base; for anything outside it, say "I'm not sure, please contact a human agent."
    - Keep answers concise — give the conclusion first, then the supporting evidence.
    - Cite specific passages from the knowledge base as evidence.
  4. Bind knowledge base: check Product FAQ

  5. Base model: pick what you consider the strongest LLM (commonly Claude / GPT-4 class)

  6. Temperature: 0.2 (customer service needs stability)

  7. Click Save
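If you later script this setup instead of clicking through the UI, the choices above boil down to a small configuration. The sketch below is purely illustrative — the field names are hypothetical, not Evose's actual schema — but it shows how the {company} placeholder in the role prompt gets filled:

```python
# Hypothetical sketch of the settings from step 3; field names are
# illustrative, not Evose's actual schema.
agent_config = {
    "name": "Product Support Pro",
    "knowledge_bases": ["Product FAQ"],  # step 4: bound knowledge base
    "temperature": 0.2,                  # step 6: low for stable answers
    "role_prompt": (
        "You are the product support agent for {company}. "
        "Use the knowledge base to answer accurately."
    ),
}

# {company} is a template variable, filled in when the Agent is deployed:
prompt = agent_config["role_prompt"].format(company="Acme Corp")
# → "You are the product support agent for Acme Corp. Use the knowledge base to answer accurately."
```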

4 · Debug (2 minutes)

In the Agent edit page's built-in Debug panel:

You: What's your return policy?
Agent: ... (should answer using knowledge-base content with passage citations)

If answers are off:

  • Off-topic answers → Increase TopK (default 5, try 8–10)
  • Over-divergent answers → Lower the LLM temperature to 0.1
  • Info exists in the KB but the answer omits it → Check the Trace: is it a retrieval failure or a generation drop? Consider smaller chunks
  • Fabricated answers → Strengthen the prompt: "If something is not explicitly in the knowledge base, say you're not sure"
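To see what the TopK knob actually controls: retrieval asks pgvector for the K chunks nearest to the query embedding. The SQL below is a hedged sketch — the table and column names are illustrative, not Evose's real schema — but the `<=>` cosine-distance operator and the LIMIT-as-TopK pattern are standard pgvector usage.

```python
# Raising TopK simply widens the LIMIT of the nearest-neighbor query,
# pulling in lower-ranked chunks that a tight K would have dropped.
TOP_K = 8  # raised from the default 5

# pgvector's <=> operator is cosine distance, so 1 - distance gives a
# similarity score comparable to the 0.87 / 0.81 values shown in Traces.
query = """
SELECT chunk_text,
       1 - (embedding <=> %(query_vec)s) AS score
FROM kb_chunks
WHERE knowledge_base = %(kb)s
ORDER BY embedding <=> %(query_vec)s
LIMIT %(top_k)s;
"""
params = {"kb": "Product FAQ", "top_k": TOP_K}
```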

Agent debugging in detail

5 · Publish and verify (1 minute)

  1. Top right of the editor → Publish to Workbench
  2. Set visibility (default: just you; can add members or roles)
  3. Switch to Workbench · Task and ask the same question in "Product Support Pro" to verify

Use Traces to Understand What the Agent Does

Workspace → Workspace management · Observability · Traces — find the conversation. You'll see:

Agent.run
├─ retrieve(knowledge_base="Product FAQ", query="return policy", topK=5)
│   ├─ chunk_1: 0.87 → "Our return policy..."
│   ├─ chunk_2: 0.81 → "Within 7 days of shipping..."
│   └─ ...
├─ llm.complete(model=..., temperature=0.2, ...)
│   └─ tokens: in=480, out=120
└─ output: "Per knowledge base [doc:FAQ.pdf p.3]..."

Every step is visible, which makes debugging much more efficient.

Next Steps
