Multi-Step Workflows
Chain LLM operations into readable, testable pipelines with Orka AI workflows.
# Why Workflows?
Workflows let you chain multiple LLM operations into a single, readable pipeline. Each step transforms the context and passes it to the next. Built-in steps handle common patterns like planning, retrieval, generation, and verification — but you can also create custom steps for any logic.
```ts
import { createOrka } from 'orkajs/core';
import { OpenAIAdapter } from 'orkajs/adapters';
import { plan, retrieve, generate, verify, improve } from 'orkajs/workflow';

const orka = createOrka({
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorDB: myVectorDB,
});

const workflow = orka.workflow({
  name: 'support-response',
  steps: [
    plan(),                                                 // Analyze input and create an action plan
    retrieve('knowledge-base', { topK: 5 }),                // Semantic search in the knowledge base
    generate({ temperature: 0.7 }),                         // Generate a response using the LLM
    verify({ criteria: ['relevant', 'no hallucination'] }), // Evaluate quality
    improve({ maxIterations: 1 }),                          // Fix issues if verify fails
  ],
});

const result = await workflow.run('How do I reset my password?');

console.log(result.output);         // Final response
console.log(result.steps);          // Results from each step
console.log(result.totalLatencyMs); // Total execution time
console.log(result.totalTokens);    // Total tokens consumed
```

# Built-in Steps
Orka provides six built-in workflow steps that cover the most common LLM patterns:
| Step | Description |
|---|---|
| plan() | Analyze input and create an action plan |
| retrieve(name, opts) | Semantic search in a knowledge base |
| generate(opts) | Generate a response using the LLM |
| verify(opts) | Evaluate output quality with LLM-as-judge |
| improve(opts) | Fix issues identified by verify |
| custom(name, fn) | Create any custom step |
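The chaining model all of these steps share can be sketched in plain TypeScript. Everything below (`Ctx`, `runPipeline`, the fake steps) is illustrative scaffolding to show the pattern, not part of the orkajs API:

```ts
// Illustrative only: a minimal step-chaining pattern, not Orka's internals.
type Ctx = { input: string; output: string; metadata: Record<string, unknown> };
type Step = (ctx: Ctx) => Promise<Ctx>;

// Run steps in order, threading the context through each one.
async function runPipeline(steps: Step[], input: string): Promise<Ctx> {
  let ctx: Ctx = { input, output: '', metadata: {} };
  for (const step of steps) {
    ctx = await step(ctx);
  }
  return ctx;
}

// Two toy steps standing in for retrieve() and generate().
const fakeRetrieve: Step = async (ctx) => {
  ctx.metadata.docs = ['Reset via Settings > Security'];
  return ctx;
};
const fakeGenerate: Step = async (ctx) => {
  ctx.output = `Answer based on ${(ctx.metadata.docs as string[]).length} doc(s)`;
  return ctx;
};

const result = await runPipeline([fakeRetrieve, fakeGenerate], 'How do I reset my password?');
console.log(result.output); // "Answer based on 1 doc(s)"
```

Because each step receives and returns the same context object, steps can be reordered, removed, or unit-tested in isolation with a stubbed context.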
# Step Details
plan()
Analyzes the input and creates an action plan. Useful for complex queries that require multiple steps.
```ts
plan()
// Output:
// "1. Search for password reset documentation
//  2. Extract relevant steps
//  3. Format as user-friendly instructions"
```

retrieve(name, options)
Performs semantic search in a knowledge base and adds the results to the context.
```ts
retrieve('documentation', {
  topK: 5,                        // Number of results to retrieve
  minScore: 0.7,                  // Minimum similarity score
  filter: { category: 'guides' }, // Metadata filter
})
```

generate(options)
Generates a response using the LLM with the current context (input + retrieved documents).
```ts
generate({
  temperature: 0.7,                             // Creativity level (0-1)
  maxTokens: 1000,                              // Maximum response length
  systemPrompt: 'You are a helpful assistant.', // Custom system prompt
})
```

verify(options)
Evaluates the generated output against criteria using LLM-as-judge. Sets ctx.verified to true/false.
```ts
verify({
  criteria: [
    'relevant',          // Is the answer relevant to the question?
    'no hallucination',  // Is the answer grounded in the context?
    'complete',          // Does it fully answer the question?
    'professional tone', // Is the tone appropriate?
  ],
})
```

improve(options)
If verify() fails, improve() attempts to fix the identified issues, running up to maxIterations times.
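Conceptually, the verify/improve pairing is a retry loop: judge the output, and if it fails, regenerate and judge again until it passes or the attempt budget runs out. The sketch below shows that control flow in plain TypeScript; `verifyThenImprove`, `judge`, and `fix` are illustrative stand-ins, not orkajs functions:

```ts
// Conceptual sketch of the verify -> improve loop; not Orka's actual implementation.
type Judged = { output: string; verified: boolean };

async function verifyThenImprove(
  draft: string,
  judge: (o: string) => Promise<boolean>, // stands in for verify()
  fix: (o: string) => Promise<string>,    // stands in for improve()
  maxIterations: number,
): Promise<Judged> {
  let output = draft;
  let verified = await judge(output);
  // Re-run the fixer until the judge passes or attempts run out.
  for (let i = 0; i < maxIterations && !verified; i++) {
    output = await fix(output);
    verified = await judge(output);
  }
  return { output, verified };
}

// Toy judge/fixer pair to show the control flow.
const judge = async (o: string) => o.includes('Please');
const fix = async (o: string) => `${o} Please let us know if this helps.`;

const res = await verifyThenImprove('Reset your password in Settings.', judge, fix, 2);
console.log(res.verified); // true
```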
```ts
improve({
  maxIterations: 2, // Maximum improvement attempts
  // Automatically uses the verification feedback to improve the output
})
```

# Custom Steps
Create custom steps for any logic that isn't covered by built-in steps. Custom steps receive the workflow context and must return it.
```ts
import { custom } from 'orkajs/workflow';

// Custom step to translate the output
const translateStep = custom('translate', async (ctx) => {
  const result = await ctx.llm.generate(
    `Translate to French: ${ctx.output}`,
    { temperature: 0.3 }
  );

  ctx.output = result.content;
  ctx.metadata.translatedTo = 'fr';

  // Record step in history
  ctx.history.push({
    stepName: 'translate',
    output: result.content,
    latencyMs: result.latencyMs,
    tokens: result.usage?.totalTokens,
  });

  return ctx;
});

// Use in workflow
const workflow = orka.workflow({
  name: 'multilingual-support',
  steps: [
    retrieve('docs'),
    generate(),
    translateStep, // Your custom step
  ],
});
```

# Workflow Context (ctx)
| Field | Type | Description |
|---|---|---|
| ctx.input | string | The original input passed to workflow.run() |
| ctx.output | string | The current output (modified by each step) |
| ctx.retrievedDocs | Document[] | Documents retrieved by the retrieve() step |
| ctx.llm | LLMAdapter | The LLM adapter for making calls |
| ctx.metadata | Record<string, unknown> | Custom metadata you can set/read across steps |
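Putting the fields above together, the context can be modeled roughly as the interface below. These type definitions are assumed shapes inferred from this page, not the library's exact typings, and `StubLLM`/`countDocsStep` are hypothetical names used only to exercise a custom step without a real LLM call:

```ts
// Assumed shapes based on the fields documented above (not exact orkajs typings).
interface Document { content: string; score?: number }
interface LLMResult { content: string; latencyMs?: number }
interface LLMAdapter {
  generate(prompt: string, opts?: { temperature?: number }): Promise<LLMResult>;
}

interface WorkflowContext {
  input: string;
  output: string;
  retrievedDocs: Document[];
  llm: LLMAdapter;
  metadata: Record<string, unknown>;
}

// A custom step that tags the context with the number of retrieved docs.
async function countDocsStep(ctx: WorkflowContext): Promise<WorkflowContext> {
  ctx.metadata.docCount = ctx.retrievedDocs.length;
  return ctx;
}

// Exercise the step with a stub adapter; no real LLM call happens here.
const stubLLM: LLMAdapter = {
  generate: async (p) => ({ content: `echo: ${p}` }),
};
const ctx: WorkflowContext = {
  input: 'hi',
  output: '',
  retrievedDocs: [{ content: 'doc A' }],
  llm: stubLLM,
  metadata: {},
};

const updated = await countDocsStep(ctx);
console.log(updated.metadata.docCount); // 1
```

Stubbing the adapter this way is also a convenient pattern for unit-testing custom steps in isolation.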
# Workflow Result
The workflow.run() method returns a WorkflowResult object with the final output and detailed execution information:
| Property | Type | Description |
|---|---|---|
| output | string | Final output after all steps |
| steps | WorkflowStepResult[] | Results from each step |
| totalLatencyMs | number | Total execution time |
| totalTokens | number | Total tokens consumed |
| verified | boolean | Whether verify() passed (if used) |
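The table can be modeled as the interfaces below, where the totals are simply sums over the per-step results. This is a sketch based on the properties listed above; the interface shapes and the `summarize` helper are assumptions for illustration, not orkajs exports:

```ts
// Assumed shapes matching the result table above (not exact orkajs typings).
interface WorkflowStepResult { stepName: string; latencyMs: number; tokens?: number }

interface WorkflowResult {
  output: string;
  steps: WorkflowStepResult[];
  totalLatencyMs: number;
  totalTokens: number;
  verified?: boolean; // Only meaningful when verify() is in the pipeline
}

// Hypothetical helper: derive the totals by summing per-step results.
function summarize(output: string, steps: WorkflowStepResult[]): WorkflowResult {
  return {
    output,
    steps,
    totalLatencyMs: steps.reduce((sum, s) => sum + s.latencyMs, 0),
    totalTokens: steps.reduce((sum, s) => sum + (s.tokens ?? 0), 0),
  };
}

const demo = summarize('done', [
  { stepName: 'retrieve', latencyMs: 120, tokens: 0 },
  { stepName: 'generate', latencyMs: 800, tokens: 450 },
]);
console.log(demo.totalLatencyMs, demo.totalTokens); // 920 450
```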
# Complete Example
```ts
import { createOrka } from 'orkajs/core';
import { OpenAIAdapter, PineconeAdapter } from 'orkajs/adapters';
import { plan, retrieve, generate, verify, improve, custom } from 'orkajs/workflow';

const orka = createOrka({
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorDB: new PineconeAdapter({ apiKey: process.env.PINECONE_API_KEY! }),
});

// Custom logging step
const logStep = custom('log', async (ctx) => {
  console.log('[Workflow] Current output length:', ctx.output.length);
  return ctx;
});

const supportWorkflow = orka.workflow({
  name: 'customer-support',
  steps: [
    plan(),
    retrieve('support-docs', { topK: 5 }),
    generate({ temperature: 0.5, systemPrompt: 'You are a helpful support agent.' }),
    logStep,
    verify({ criteria: ['helpful', 'accurate', 'professional'] }),
    improve({ maxIterations: 1 }),
  ],
});

async function handleSupportQuery(query: string) {
  const result = await supportWorkflow.run(query);

  console.log('Answer:', result.output);
  console.log('Verified:', result.verified);
  console.log('Steps:', result.steps.map(s => s.stepName));
  console.log('Total time:', result.totalLatencyMs, 'ms');
  console.log('Total tokens:', result.totalTokens);

  return result.output;
}

await handleSupportQuery('How do I cancel my subscription?');
```

# Tree-shaking Imports
```ts
// ✅ Import only what you need
import { plan, retrieve, generate, verify, improve, custom } from 'orkajs/workflow';

// ✅ Or import from the main package
import { plan, retrieve, generate } from 'orkajs';
```