Multi-Step Workflows
Build reliable AI pipelines with plan, retrieve, generate, verify, and improve steps.
Workflows chain multiple processing steps together, enabling quality verification and iterative improvement of LLM outputs.
ORKA WORKFLOW PIPELINE ARCHITECTURE

```
workflow.run(input)
      │
      ▼
  plan()          Decompose the task        → steps[]
      │
      ▼
  retrieve(kb)    topK: 3                   → context[]
      │
      ▼
  generate()      systemPrompt + context    → output
      │
      ▼
  verify()        criteria[]                → pass/fail
      │
      ▼
  improve()       maxIterations: 2          → refined output
      │
      ▼
  WorkflowResult { output: string; steps: StepResult[]; totalTokens: number }
```

Step types: plan() and generate() are LLM steps, retrieve() is a RAG step, verify() is validation, and improve() is refinement.
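The data flow above can be sketched in plain TypeScript, independent of orkajs: each step receives the running context and returns an updated one. The `Ctx` shape and `runPipeline` helper here are simplified assumptions for illustration, not the library's actual context type.

```typescript
// Data-flow sketch of the pipeline above (not the orkajs implementation):
// each step receives the running context and returns an updated copy.
interface Ctx {
  input: string;      // the original user input
  context: string[];  // retrieved knowledge chunks
  output: string;     // the current draft answer
}

type Step = (ctx: Ctx) => Promise<Ctx>;

// Chain steps left to right, threading the context through each one.
async function runPipeline(input: string, steps: Step[]): Promise<Ctx> {
  let ctx: Ctx = { input, context: [], output: '' };
  for (const step of steps) {
    ctx = await step(ctx);
  }
  return ctx;
}
```

This is the essence of the architecture: every built-in step (plan, retrieve, generate, verify, improve) is just a context transformer, which is why custom steps slot in with the same signature.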
1. Built-in Workflow Steps
workflow-steps.ts
```ts
import { plan, retrieve, generate, verify, improve, custom } from 'orkajs';

// plan(): break the task down into sub-steps
plan()

// retrieve(): search a knowledge base (RAG)
retrieve('knowledge-name', { topK: 3 })

// generate(): LLM generation with options
generate({
  systemPrompt: 'You are a professional assistant.',
  temperature: 0.3,
  maxTokens: 1000,
})

// verify(): check the quality of the output
verify({
  criteria: [
    'The response is relevant to the question',
    'The response is based on the provided context',
    'The response is professional',
  ],
})

// improve(): improve the output iteratively
improve({
  maxIterations: 2,
  improvementPrompt: 'Improve this response to make it more concise.',
})

// custom(): define a custom step
custom('my-step', async (ctx) => {
  ctx.output = await myCustomLogic(ctx.input);
  return ctx;
})
```

2. Complete Support Workflow
workflow-example.ts
```ts
import {
  createOrka,
  OpenAIAdapter,
  MemoryVectorAdapter,
  plan,
  retrieve,
  generate,
  verify,
  improve,
} from 'orkajs';

async function main() {
  const orka = createOrka({
    llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
    vectorDB: new MemoryVectorAdapter(),
  });

  // Create a knowledge base
  await orka.knowledge.create({
    name: 'support',
    source: [
      { text: 'To reset your password, go to Settings > Security.', metadata: { topic: 'password' } },
      { text: 'Refunds are processed within 5 business days.', metadata: { topic: 'refund' } },
      { text: 'Support is available Monday to Friday, 9am-6pm.', metadata: { topic: 'hours' } },
      { text: 'Contact: support@example.com or 01 23 45 67 89.', metadata: { topic: 'contact' } },
    ],
  });

  // Create a multi-step workflow
  const supportWorkflow = orka.workflow({
    name: 'support-response',
    steps: [
      plan(),
      retrieve('support', { topK: 3 }),
      generate({ systemPrompt: 'You are a professional and empathetic customer support agent.' }),
      verify({
        criteria: [
          'The response is relevant to the question',
          'The response is based on the provided context',
          'The response is professional and empathetic',
        ],
      }),
      improve({ maxIterations: 1 }),
    ],
    onStepComplete: (step) => {
      console.log(`✅ Step "${step.stepName}" completed (${step.latencyMs}ms)`);
    },
    maxRetries: 1,
  });

  console.log('🚀 Running workflow...\n');
  const result = await supportWorkflow.run('How can I reset my password?');

  console.log(`\n📄 Output: ${result.output}`);
  console.log(`\n📊 Stats:`);
  console.log(`  - Steps: ${result.steps.length}`);
  console.log(`  - Total latency: ${result.totalLatencyMs}ms`);
  console.log(`  - Total tokens: ${result.totalTokens}`);

  await orka.knowledge.delete('support');
}

main().catch(console.error);
```

3. WorkflowResult Structure
workflow-result.ts
```ts
interface WorkflowResult {
  output: string;                    // Final output of the workflow
  steps: StepResult[];               // Results of each step
  totalLatencyMs: number;            // Total execution time
  totalTokens: number;               // Total tokens consumed
  metadata: Record<string, unknown>;
}

interface StepResult {
  stepName: string;
  output: string;
  latencyMs: number;
  tokens: number;
  success: boolean;
  error?: string;
}

// Example: analyzing the results
const result = await workflow.run('How can I reset my password?');

// Inspect each step
for (const step of result.steps) {
  console.log(`${step.stepName}: ${step.success ? '✅' : '❌'} (${step.latencyMs}ms)`);
  if (!step.success) {
    console.log(`  Error: ${step.error}`);
  }
}

// Estimate cost
const costPer1kTokens = 0.002; // GPT-4o-mini
const estimatedCost = (result.totalTokens / 1000) * costPer1kTokens;
console.log(`Estimated cost: $${estimatedCost.toFixed(4)}`);
```

Common Use Cases
🎯 Quality Assurance
Verify that LLM outputs meet specific criteria before returning them
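The step results can double as a quality gate: inspect the verify step in `result.steps` and fail fast rather than return an unvetted answer. The step name `'verify'` and the standalone `StepResult` interface below are assumptions based on the structures shown on this page.

```typescript
// Mirrors the StepResult interface from "WorkflowResult Structure".
interface StepResult {
  stepName: string;
  output: string;
  latencyMs: number;
  tokens: number;
  success: boolean;
  error?: string;
}

// Throw instead of returning an answer that failed verification.
// Assumes the verify step is reported under the name 'verify'.
function assertVerified(steps: StepResult[]): void {
  const verifyStep = steps.find((s) => s.stepName === 'verify');
  if (!verifyStep) throw new Error('No verify step in this workflow');
  if (!verifyStep.success) {
    throw new Error(`Verification failed: ${verifyStep.error ?? 'criteria not met'}`);
  }
}
```

Calling `assertVerified(result.steps)` right after `workflow.run()` turns a soft pass/fail signal into a hard guarantee for downstream code.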
📚 RAG Pipelines
Combine retrieval with generation and verification
🔁 Iterative Refinement
Automatically improve outputs until quality threshold is met
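The refine-until-pass pattern behind improve() can be sketched generically: regenerate until a verifier passes or the iteration budget runs out. The `verifyFn`/`improveFn` callbacks below are illustrative stand-ins, not orkajs internals, which may differ.

```typescript
// Generic refinement loop: keep improving the draft until it passes
// verification or maxIterations is exhausted (mirroring improve()'s
// maxIterations option from this page).
async function refineLoop(
  draft: string,
  verifyFn: (text: string) => Promise<boolean>,
  improveFn: (text: string) => Promise<string>,
  maxIterations = 2,
): Promise<string> {
  let current = draft;
  for (let i = 0; i < maxIterations; i++) {
    if (await verifyFn(current)) return current; // quality threshold met, stop early
    current = await improveFn(current);
  }
  return current; // best effort after the iteration budget
}
```

Returning the best-effort draft (rather than throwing) matches the behavior implied by the workflow example, where improve() always yields a refined output.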
📊 Observability
Track latency and token usage per step for optimization
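Beyond onStepComplete logging, the per-step metrics in `result.steps` can be rolled up into a compact report, e.g. to spot which step dominates latency or token spend. The `StepResult` shape mirrors the interface shown earlier; `summarizeSteps` is a hypothetical helper, not part of orkajs.

```typescript
// Mirrors the StepResult interface from "WorkflowResult Structure".
interface StepResult {
  stepName: string;
  output: string;
  latencyMs: number;
  tokens: number;
  success: boolean;
  error?: string;
}

// One line per step: name, latency, tokens, and pass/fail status.
function summarizeSteps(steps: StepResult[]): string {
  return steps
    .map((s) => `${s.stepName}: ${s.latencyMs}ms, ${s.tokens} tokens, ${s.success ? 'ok' : 'failed'}`)
    .join('\n');
}
```

Printing `summarizeSteps(result.steps)` after each run gives a quick per-step profile without wiring up a full observability stack.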