OrkaJS

Multi-Step Workflows

Build reliable AI pipelines with the plan, retrieve, generate, verify, and improve steps.

Workflows chain multiple processing steps together, enabling quality verification and iterative improvement of LLM outputs.

[Diagram: Orka workflow pipeline architecture. workflow.run(input) flows through plan() (decompose the task → steps[]), retrieve(kb) (topK: 3 → context[]), generate() (systemPrompt + context → output), verify() (criteria[] → pass/fail), and improve() (maxIterations: 2 → refined output), producing a WorkflowResult { output: string; steps: StepResult[]; totalTokens: number }. Legend: LLM Step, RAG Step, Validation, Refinement.]
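Before diving into Orka's API, the chaining idea itself can be sketched in a few lines of plain TypeScript. This is an illustration of the pattern only, not Orka's internals: each step is an async function that receives the accumulated context and returns an updated one.

```typescript
// Minimal step-chaining sketch (illustrative, not Orka's implementation).
interface Ctx {
  input: string;
  output: string;
  trace: string[];
}

type Step = (ctx: Ctx) => Promise<Ctx>;

// Run each step in order, threading the context through the chain.
async function runPipeline(steps: Step[], input: string): Promise<Ctx> {
  let ctx: Ctx = { input, output: '', trace: [] };
  for (const step of steps) {
    ctx = await step(ctx);
  }
  return ctx;
}

// Two toy steps standing in for retrieve() and generate().
const fakeRetrieve: Step = async (ctx) => ({
  ...ctx,
  trace: [...ctx.trace, 'retrieve'],
});
const fakeGenerate: Step = async (ctx) => ({
  ...ctx,
  output: `answer to: ${ctx.input}`,
  trace: [...ctx.trace, 'generate'],
});
```

The real library adds verification, retries, and telemetry on top of this loop, but the control flow is the same: a context object accumulates state as it passes through the steps.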

1. Built-in Workflow Steps

workflow-steps.ts
import { plan, retrieve, generate, verify, improve, custom } from 'orkajs';

// plan(): break the task down into sub-steps
plan()

// retrieve(): RAG search in a knowledge base
retrieve('knowledge-name', { topK: 3 })

// generate(): LLM generation with options
generate({
  systemPrompt: 'You are a professional assistant.',
  temperature: 0.3,
  maxTokens: 1000,
})

// verify(): check the quality of the output
verify({
  criteria: [
    'The response is relevant to the question',
    'The response is based on the provided context',
    'The response is professional',
  ],
})

// improve(): improve the output iteratively
improve({
  maxIterations: 2,
  improvementPrompt: 'Improve this response to make it more concise.',
})

// custom(): user-defined step
custom('my-step', async (ctx) => {
  ctx.output = await myCustomLogic(ctx.input);
  return ctx;
})
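To make the custom() example above concrete, here is a standalone sketch of the transform such a step wraps. The ctx shape and myCustomLogic are assumptions for illustration, not documented Orka types; here the "custom logic" just trims whitespace and caps the answer length.

```typescript
// Hypothetical ctx shape for a custom step (assumed, not Orka's exact type).
interface StepCtx {
  input: string;
  output: string;
}

// A made-up post-processing routine: trim whitespace, cap at 200 characters.
async function myCustomLogic(input: string): Promise<string> {
  return input.trim().slice(0, 200);
}

// The function body you would pass to custom('my-step', ...).
async function myStep(ctx: StepCtx): Promise<StepCtx> {
  ctx.output = await myCustomLogic(ctx.input);
  return ctx;
}
```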

2. A Complete Support Workflow

workflow-example.ts
import {
  createOrka,
  OpenAIAdapter,
  MemoryVectorAdapter,
  plan, retrieve, generate, verify, improve,
} from 'orkajs';

async function main() {
  const orka = createOrka({
    llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
    vectorDB: new MemoryVectorAdapter(),
  });

  // Create a knowledge base
  await orka.knowledge.create({
    name: 'support',
    source: [
      { text: 'To reset your password, go to Settings > Security.', metadata: { topic: 'password' } },
      { text: 'Refunds are processed within 5 business days.', metadata: { topic: 'refund' } },
      { text: 'Support is available Monday to Friday, 9am-6pm.', metadata: { topic: 'hours' } },
      { text: 'Contact: support@example.com or 01 23 45 67 89.', metadata: { topic: 'contact' } },
    ],
  });

  // Create a multi-step workflow
  const supportWorkflow = orka.workflow({
    name: 'support-response',
    steps: [
      plan(),
      retrieve('support', { topK: 3 }),
      generate({
        systemPrompt: 'You are a professional and empathetic customer support agent.',
      }),
      verify({
        criteria: [
          'The response is relevant to the question',
          'The response is based on the provided context',
          'The response is professional and empathetic',
        ],
      }),
      improve({ maxIterations: 1 }),
    ],
    onStepComplete: (step) => {
      console.log(`✅ Step "${step.stepName}" completed (${step.latencyMs}ms)`);
    },
    maxRetries: 1,
  });

  console.log('🔄 Running workflow...\n');
  const result = await supportWorkflow.run('How can I reset my password?');

  console.log(`\n📝 Output: ${result.output}`);
  console.log(`\n📊 Stats:`);
  console.log(`  - Steps: ${result.steps.length}`);
  console.log(`  - Total latency: ${result.totalLatencyMs}ms`);
  console.log(`  - Total tokens: ${result.totalTokens}`);

  await orka.knowledge.delete('support');
}

main().catch(console.error);

3. The WorkflowResult Structure

workflow-result.ts
interface WorkflowResult {
  output: string;           // Final output of the workflow
  steps: StepResult[];      // Result of each step
  totalLatencyMs: number;   // Total execution time
  totalTokens: number;      // Total tokens consumed
  metadata: Record<string, unknown>;
}

interface StepResult {
  stepName: string;
  output: string;
  latencyMs: number;
  tokens: number;
  success: boolean;
  error?: string;
}

// Example: analyzing the results
const result = await workflow.run('How can I reset my password?');

// Inspect each step
for (const step of result.steps) {
  console.log(`${step.stepName}: ${step.success ? '✅' : '❌'} (${step.latencyMs}ms)`);
  if (!step.success) {
    console.log(`  Error: ${step.error}`);
  }
}

// Estimate the cost
const costPer1kTokens = 0.002; // GPT-4o-mini
const estimatedCost = (result.totalTokens / 1000) * costPer1kTokens;
console.log(`Estimated cost: $${estimatedCost.toFixed(4)}`);

Common Use Cases

🎯 Quality Assurance

Verify that LLM outputs meet specific criteria before returning them

📚 RAG Pipelines

Combine retrieval with generation and verification

🔄 Iterative Refinement

Automatically improve outputs until the quality threshold is met
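The verify-then-improve loop behind this use case can be pictured as a small function. This is a simplified sketch, with plain predicate criteria and a synchronous improver standing in for the LLM calls that verify() and improve() actually make:

```typescript
// Illustrative verify/improve loop (not Orka's implementation): re-run an
// improver until the output passes all criteria or maxIterations is reached.
type Criterion = (output: string) => boolean;

function refine(
  output: string,
  criteria: Criterion[],
  improve: (o: string) => string,
  maxIterations: number,
): string {
  let current = output;
  for (let i = 0; i < maxIterations; i++) {
    if (criteria.every((c) => c(current))) break; // quality threshold met
    current = improve(current);
  }
  return current;
}
```

Capping the iteration count, as improve({ maxIterations }) does, bounds latency and token cost when the criteria are never satisfied.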

📊 Observability

Track per-step latency and token usage for optimization
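Since each StepResult carries latencyMs and tokens (see the structure above), per-step hotspots can be found with ordinary array code; a small sketch:

```typescript
// Field names follow the StepResult interface shown earlier.
interface StepResult {
  stepName: string;
  output: string;
  latencyMs: number;
  tokens: number;
  success: boolean;
  error?: string;
}

// Return the step with the highest latency, or undefined for an empty run.
function slowestStep(steps: StepResult[]): StepResult | undefined {
  return steps.reduce<StepResult | undefined>(
    (worst, s) => (worst === undefined || s.latencyMs > worst.latencyMs ? s : worst),
    undefined,
  );
}

// Total token usage across all steps.
function totalTokens(steps: StepResult[]): number {
  return steps.reduce((sum, s) => sum + s.tokens, 0);
}
```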