OrkaJS

Multi-Step Workflows

Build reliable AI pipelines with plan, retrieve, generate, verify, and improve steps.

Workflows chain multiple processing steps together, enabling quality verification and iterative improvement of LLM outputs.

Orka — workflow pipeline architecture (overview):

📝 workflow.run(input)
  → plan()        decompose the task       → steps[]
  → retrieve(kb)  topK: 3                  → context[]
  → generate()    systemPrompt + context   → output
  → verify()      criteria[]               → pass/fail
  → improve()     maxIterations: 2         → refined output
  → WorkflowResult { output: string; steps: StepResult[]; totalTokens: number }

Step kinds: LLM step · RAG step · validation · refinement
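At its core, the pipeline above passes a shared context through each step in order. A minimal, library-agnostic sketch of that idea (the `Context` and `StepFn` types and the toy steps below are illustrative, not OrkaJS internals):

```typescript
// Minimal sequential pipeline sketch: each step receives the shared
// context, may read or augment it, and passes it on to the next step.
interface Context {
  input: string;
  context: string[]; // retrieved passages
  output: string;
}

type StepFn = (ctx: Context) => Promise<Context>;

async function runPipeline(input: string, steps: StepFn[]): Promise<Context> {
  let ctx: Context = { input, context: [], output: '' };
  for (const step of steps) {
    ctx = await step(ctx); // each step sees the previous steps' results
  }
  return ctx;
}

// Toy stand-ins for retrieve() and generate()
const retrieveStep: StepFn = async (ctx) => ({
  ...ctx,
  context: ['To reset your password, go to Settings > Security.'],
});

const generateStep: StepFn = async (ctx) => ({
  ...ctx,
  output: `Answer based on: ${ctx.context.join(' ')}`,
});
```

Running `runPipeline('How do I reset my password?', [retrieveStep, generateStep])` yields a context whose `output` is grounded in the retrieved passage — the same shape OrkaJS automates with verification and refinement on top.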

1. Built-in Workflow Steps

workflow-steps.ts
import { plan, retrieve, generate, verify, improve, custom } from 'orkajs';

// plan(): break the task down into sub-steps
plan()

// retrieve(): RAG search in a knowledge base
retrieve('knowledge-name', { topK: 3 })

// generate(): LLM generation with options
generate({
  systemPrompt: 'You are a professional assistant.',
  temperature: 0.3,
  maxTokens: 1000,
})

// verify(): check the quality of the output against criteria
verify({
  criteria: [
    'The response is relevant to the question',
    'The response is based on the provided context',
    'The response is professional',
  ],
})

// improve(): refine the output iteratively
improve({
  maxIterations: 2,
  improvementPrompt: 'Improve this response to make it more concise.',
})

// custom(): run your own step logic
custom('my-step', async (ctx) => {
  ctx.output = await myCustomLogic(ctx.input);
  return ctx;
})
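Together, verify() and improve() implement a check-then-refine loop: re-run refinement until the criteria pass or the iteration budget is spent. A self-contained sketch of that pattern (criteria checks and refinement are stubbed with plain functions here; OrkaJS performs both with LLM calls):

```typescript
// Check-then-refine loop: refine the output until every criterion
// passes or maxIterations refinements have been spent.
type Check = (output: string) => boolean;
type Refiner = (output: string) => string;

function verifyAndImprove(
  output: string,
  criteria: Check[],
  refine: Refiner,
  maxIterations: number,
): { output: string; passed: boolean; iterations: number } {
  let current = output;
  for (let i = 0; i <= maxIterations; i++) {
    const passed = criteria.every((check) => check(current));
    if (passed) return { output: current, passed: true, iterations: i };
    if (i < maxIterations) current = refine(current); // one more attempt
  }
  return { output: current, passed: false, iterations: maxIterations };
}
```

Capping iterations matters: each refinement is another LLM round-trip, so an unbounded loop could burn tokens on an output that will never satisfy the criteria.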

2. Complete Support Workflow

workflow-example.ts
import {
  createOrka,
  OpenAIAdapter,
  MemoryVectorAdapter,
  plan, retrieve, generate, verify, improve,
} from 'orkajs';

async function main() {
  const orka = createOrka({
    llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
    vectorDB: new MemoryVectorAdapter(),
  });

  // Create a knowledge base
  await orka.knowledge.create({
    name: 'support',
    source: [
      { text: 'To reset your password, go to Settings > Security.', metadata: { topic: 'password' } },
      { text: 'Refunds are processed within 5 business days.', metadata: { topic: 'refund' } },
      { text: 'Support is available Monday to Friday, 9am-6pm.', metadata: { topic: 'hours' } },
      { text: 'Contact: support@example.com or 01 23 45 67 89.', metadata: { topic: 'contact' } },
    ],
  });

  // Create a multi-step workflow
  const supportWorkflow = orka.workflow({
    name: 'support-response',
    steps: [
      plan(),
      retrieve('support', { topK: 3 }),
      generate({
        systemPrompt: 'You are a professional and empathetic customer support agent.',
      }),
      verify({
        criteria: [
          'The response is relevant to the question',
          'The response is based on the provided context',
          'The response is professional and empathetic',
        ],
      }),
      improve({ maxIterations: 1 }),
    ],
    onStepComplete: (step) => {
      console.log(`✅ Step "${step.stepName}" completed (${step.latencyMs}ms)`);
    },
    maxRetries: 1,
  });

  console.log('🔄 Running workflow...\n');
  const result = await supportWorkflow.run('How can I reset my password?');

  console.log(`\n📝 Output: ${result.output}`);
  console.log(`\n📊 Stats:`);
  console.log(`  - Steps: ${result.steps.length}`);
  console.log(`  - Total latency: ${result.totalLatencyMs}ms`);
  console.log(`  - Total tokens: ${result.totalTokens}`);

  await orka.knowledge.delete('support');
}

main().catch(console.error);
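The `maxRetries: 1` option above retries a failed step once before the workflow aborts. The general pattern behind that option can be sketched independently of OrkaJS (this helper is illustrative, not the library's implementation):

```typescript
// Generic retry wrapper: re-invoke an async step up to maxRetries
// extra times, surfacing the last error if every attempt fails.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```

LLM and network calls fail transiently often enough that a single retry frequently saves the run; a retry budget keeps a persistently failing step from looping forever.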

3. WorkflowResult Structure

workflow-result.ts
interface WorkflowResult {
  output: string;          // Final output of the workflow
  steps: StepResult[];     // Result of each step
  totalLatencyMs: number;  // Total execution time
  totalTokens: number;     // Total tokens consumed
  metadata: Record<string, unknown>;
}

interface StepResult {
  stepName: string;
  output: string;
  latencyMs: number;
  tokens: number;
  success: boolean;
  error?: string;
}

// Example: analyzing the results
const result = await workflow.run('How can I reset my password?');

// Inspect each step
for (const step of result.steps) {
  console.log(`${step.stepName}: ${step.success ? '✅' : '❌'} (${step.latencyMs}ms)`);
  if (!step.success) {
    console.log(`  Error: ${step.error}`);
  }
}

// Rough cost estimate (example rate; check your provider's current pricing)
const costPer1kTokens = 0.002;
const estimatedCost = (result.totalTokens / 1000) * costPer1kTokens;
console.log(`Estimated cost: $${estimatedCost.toFixed(4)}`);
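Given the StepResult shape, the run-level totals can also be recomputed over any subset of steps with a plain reduce — handy when you want stats for, say, only the LLM steps. A small sketch (the interface is repeated so the snippet stands alone; `summarize` is a hypothetical helper, not part of OrkaJS):

```typescript
// (Shape repeated here so the sketch is self-contained.)
interface StepResult {
  stepName: string;
  output: string;
  latencyMs: number;
  tokens: number;
  success: boolean;
  error?: string;
}

// Aggregate latency, token usage, and failures across step results.
function summarize(steps: StepResult[]) {
  return steps.reduce(
    (acc, s) => ({
      totalLatencyMs: acc.totalLatencyMs + s.latencyMs,
      totalTokens: acc.totalTokens + s.tokens,
      failures: acc.failures + (s.success ? 0 : 1),
    }),
    { totalLatencyMs: 0, totalTokens: 0, failures: 0 },
  );
}
```

For example, `summarize(result.steps.filter((s) => s.stepName === 'generate'))` isolates the cost of generation alone.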

Common Use Cases

🎯 Quality Assurance

Verify LLM outputs meet specific criteria before returning

📚 RAG Pipelines

Combine retrieval with generation and verification

🔄 Iterative Refinement

Automatically improve outputs until quality threshold is met

📊 Observability

Track latency and token usage per step for optimization