Evose

First Workflow · Article Automation

Build a workflow in 10 minutes — input topic → AI outline → AI expansion → Markdown output

First Workflow

Use a Workflow to complete a multi-step task: input topic → generate outline → expand body → output Markdown. About 10 minutes.

Agent vs Workflow

  • Agent is conversational — users interact back and forth, and the Agent decides when to call tools
  • Workflow is process-driven — fixed steps; great for batch, automation, and scheduling
  • Not sure which to pick? → Agent vs Workflow

What You'll Accomplish

A 4-node Workflow:

[Start: input topic] → [LLM: generate outline] → [LLM: expand body] → [Output: Markdown]

Steps

1 · Create the Workflow (1 minute)

  1. Workspace → Apps · Workflow → New → Blank Workflow
  2. Name it: Article Generator
  3. Open the visual editor

2 · Configure the start node (1 minute)

The start node is added automatically. Click it and define inputs:

Field    Type    Required  Default
topic    string  Yes       (empty)
tone     string  No        Professional
length   number  No        800
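If you later drive this Workflow programmatically, the input table above implies a simple validate-and-default contract. A minimal Python sketch (the function is illustrative only, not part of Evose):

```python
# Illustrative sketch of how the start node's defaults might be applied
# to an incoming payload. Field names mirror the table above; the
# function itself is not an Evose API.
def apply_start_defaults(payload: dict) -> dict:
    if not payload.get("topic"):
        raise ValueError("topic is required")
    return {
        "topic": payload["topic"],
        "tone": payload.get("tone", "Professional"),
        "length": payload.get("length", 800),
    }

print(apply_start_defaults({"topic": "RAG pitfalls"}))
# {'topic': 'RAG pitfalls', 'tone': 'Professional', 'length': 800}
```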

3 · Add an LLM node · Generate Outline (2 minutes)

Drag an LLM node from the panel and connect it to the start node. Configure:

  • Node name: Generate Outline
  • Model: GPT-4 / Claude (your choice)
  • Temperature: 0.7
  • Prompt:
    You are a senior content writer. Based on the topic "{{topic}}", generate an outline for an article of {{length}} words in {{tone}} style.
    
    Output JSON:
    {
      "title": "...",
      "sections": [
        {"heading": "...", "key_points": ["...", "..."]}
      ]
    }
  • Output variable: outline (type: JSON)
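The {{topic}}, {{length}}, and {{tone}} placeholders are filled from the start node's variables before the prompt reaches the model. A rough sketch of that substitution, plus parsing of the JSON shape the prompt requests (the substitution function and the sample outline values are invented for illustration):

```python
import json

# Hypothetical stand-in for the platform's {{variable}} substitution.
def render(template: str, variables: dict) -> str:
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

prompt = render(
    'Based on the topic "{{topic}}", generate an outline for an article '
    "of {{length}} words in {{tone}} style.",
    {"topic": "RAG pitfalls", "length": 800, "tone": "Professional"},
)

# An invented example of the JSON shape the prompt asks the model to emit.
outline = json.loads("""
{
  "title": "RAG pitfalls",
  "sections": [
    {"heading": "Chunking", "key_points": ["size", "overlap"]}
  ]
}
""")
assert "{{" not in prompt              # all placeholders were filled
assert outline["sections"][0]["heading"] == "Chunking"
```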

4 · Add a second LLM node · Expand Body (2 minutes)

Drag a second LLM node and connect it after the first. Configure:

  • Node name: Expand Body
  • Model: same as the first node (a different model is fine)
  • Temperature: 0.7
  • Prompt:
    Based on the outline below, write a complete article of about {{length}} words in {{tone}} style.
    Output in Markdown, no metadata.
    
    Outline:
    {{outline}}
  • Output variable: article (type: string)
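Conceptually, the two LLM nodes form a simple chain: the first node's outline output becomes part of the second node's prompt. A stubbed Python sketch of that flow (call_llm is a placeholder, not an Evose API):

```python
# Sketch of the two-node chain with a stubbed model call.
def call_llm(prompt: str) -> str:
    # A real LLM node would call the configured model here.
    return "# Title\n\nBody..."

def run_chain(topic: str, tone: str = "Professional", length: int = 800) -> str:
    outline_prompt = (f'Topic "{topic}": outline for a {length}-word '
                      f"article in {tone} style, as JSON.")
    outline = call_llm(outline_prompt)   # node 1 -> output variable `outline`
    body_prompt = f"Write a complete article from this outline:\n{outline}"
    return call_llm(body_prompt)         # node 2 -> output variable `article`

print(run_chain("RAG pitfalls")[:7])  # "# Title"
```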

5 · Add an output node (1 minute)

Drag an Output node. Configure:

  • Output field: article (bind directly to the previous node's article)

6 · Test (2 minutes)

Top right of the editor → Trial run → input:

{
  "topic": "Common pitfalls of RAG in enterprise rollout",
  "tone": "deep technical",
  "length": 1500
}

Watch the execution flow. Click any node to see inputs / outputs / latency / token counts.

7 · Publish (1 minute)

  • Save version → note: v1 · initial
  • Publish to Workbench
  • Set the trigger:
    • Manual (default) — users trigger from the Workbench
    • Scheduled — configure Cron in Schedule
    • API — for external systems; see Workflow API
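For the API trigger, the run input is the same JSON object used in the trial run. A sketch of building that request body; the actual endpoint path and auth scheme are defined in the Workflow API docs, so the curl line below is only a placeholder:

```python
import json

# Hypothetical request body for the API trigger; the real endpoint and
# auth scheme come from the Workflow API docs.
payload = {
    "inputs": {
        "topic": "Common pitfalls of RAG in enterprise rollout",
        "tone": "deep technical",
        "length": 1500,
    }
}
body = json.dumps(payload)
# e.g.  curl -X POST <workflow-run-endpoint> \
#            -H "Authorization: Bearer <API_KEY>" \
#            -H "Content-Type: application/json" -d "$body"
print(body)
```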

Use Traces to Understand Multi-Node Coordination

Workspace → Workspace management · Observability · Traces — find the run record:

Workflow.run [3.2s · 4 nodes]
├─ start [0.01s] → input: {...}
├─ Generate Outline [1.4s] → tokens in/out: 220/580 · cost: 0.012 Credit
├─ Expand Body [1.7s] → tokens in/out: 800/1500 · cost: 0.038 Credit
└─ end [0.02s] → output: {...}

The cost and latency of every step can be attributed to a specific node. This per-node observability is the core advantage of a Workflow over calling LLMs directly.
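Because each node reports its own figures, run-level totals fall out of a simple aggregation. A sketch using the numbers from the sample trace above:

```python
# Per-node figures copied from the sample trace above.
trace = [
    {"node": "start",            "latency_s": 0.01, "cost": 0.0},
    {"node": "Generate Outline", "latency_s": 1.4,  "cost": 0.012},
    {"node": "Expand Body",      "latency_s": 1.7,  "cost": 0.038},
    {"node": "end",              "latency_s": 0.02, "cost": 0.0},
]
total_cost = sum(n["cost"] for n in trace)
total_latency = sum(n["latency_s"] for n in trace)
print(f"{total_cost:.3f} Credit over {total_latency:.2f}s")
# 0.050 Credit over 3.13s
```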

Going Further

I want to…                                            Use
Add a branch (different paths based on LLM output)    Conditional node
Call an external API (CRM lookup, send email)         HTTP node / Tool node
Write generation results to a database                Data source node
Pause for human review                                Human approval node
Process a batch of topics in parallel                 Batch node

Workflow full capability

Next Steps