OrkaJS

Graph Workflows

Build complex conditional workflows with branching logic.

Graph workflows let you create sophisticated processing pipelines with conditions, parallel execution, and dynamic routing driven by LLM classification.

[Diagram: Orka graph workflow architecture. START → actionNode 'classify' (an LLM classifies the input into categories) → conditionNode 'router', which branches on the returned label: "technical" → retrieveNode 'faq' (topK: 3) → llmNode 'tech-answer' (expert + RAG); "billing" → llmNode 'billing-answer' (billing expert); "general" → llmNode 'general-answer' (general support). All branches converge at END, yielding a GraphResult { output: string, path: string[], metadata: object }.]
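The execution model above can be made concrete with a toy executor: follow unlabeled edges by default, and use the string returned by a condition node to pick the matching labeled edge. This is a self-contained sketch, not the orkajs implementation; the `runGraph`, `Node`, and `Edge` names here are illustrative only:

```typescript
type Ctx = { input: string; output: string; metadata: Record<string, unknown> };

type Node =
  | { id: string; kind: "start" | "end" }
  | { id: string; kind: "action"; fn: (ctx: Ctx) => Ctx }
  | { id: string; kind: "condition"; fn: (ctx: Ctx) => string };

interface Edge { from: string; to: string; label?: string }

// Walk the graph from 'start' to 'end', recording the path taken.
// Condition nodes return a label that selects the outgoing edge.
function runGraph(nodes: Node[], edges: Edge[], input: string): { output: string; path: string[] } {
  const byId = new Map<string, Node>(nodes.map((n): [string, Node] => [n.id, n]));
  let ctx: Ctx = { input, output: "", metadata: {} };
  const path: string[] = [];
  let current = byId.get("start")!;
  while (true) {
    path.push(current.id);
    if (current.kind === "end") break;
    let label: string | undefined;
    if (current.kind === "action") ctx = current.fn(ctx);
    else if (current.kind === "condition") label = current.fn(ctx);
    // Follow the labeled edge when a label was returned, else the unlabeled one
    const next = edges.find((e) => e.from === current.id && (label === undefined || e.label === label));
    if (!next) throw new Error(`no edge out of "${current.id}"`);
    current = byId.get(next.to)!;
  }
  return { output: ctx.output, path };
}

// A two-branch router, mirroring the architecture above
const result = runGraph(
  [
    { id: "start", kind: "start" },
    { id: "router", kind: "condition", fn: (ctx) => (ctx.input.includes("refund") ? "billing" : "general") },
    { id: "billing", kind: "action", fn: (ctx) => ({ ...ctx, output: "billing branch" }) },
    { id: "general", kind: "action", fn: (ctx) => ({ ...ctx, output: "general branch" }) },
    { id: "end", kind: "end" },
  ],
  [
    { from: "start", to: "router" },
    { from: "router", to: "billing", label: "billing" },
    { from: "router", to: "general", label: "general" },
    { from: "billing", to: "end" },
    { from: "general", to: "end" },
  ],
  "I want a refund"
);
// result.path: ["start", "router", "billing", "end"]
```

The key design point is that routing lives in the edges, not the nodes: a condition node only names a label, so branches can be rewired without touching node logic.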

1. Available Node Types

Orka provides several built-in node types for common operations:

node-types.ts
import {
  startNode,      // Graph entry point
  endNode,        // Graph exit point
  actionNode,     // Custom logic (async function)
  conditionNode,  // Conditional routing (returns edge name)
  llmNode,        // LLM generation with options
  retrieveNode,   // RAG search in knowledge base
  edge            // Connection between nodes
} from 'orkajs';

// startNode: Initialize context
startNode('start')

// endNode: Terminate execution
endNode('end')

// actionNode: Custom logic
actionNode('process', async (ctx) => {
  ctx.output = await someAsyncOperation(ctx.input);
  ctx.metadata.processed = true;
  return ctx;
})

// conditionNode: Dynamic routing
conditionNode('router', (ctx) => {
  if (ctx.metadata.score > 0.8) return 'high-quality';
  if (ctx.metadata.score > 0.5) return 'medium';
  return 'low';
})

// llmNode: LLM generation
llmNode('generate', {
  systemPrompt: 'You are an expert assistant.',
  temperature: 0.3,
  promptTemplate: 'Context: {{context}}\n\nQuestion: {{input}}'
})

// retrieveNode: RAG
retrieveNode('search', 'knowledge-base-name', { topK: 5 })
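The `promptTemplate` placeholders (`{{context}}`, `{{input}}`) follow a common double-brace convention. A minimal stand-in interpolator, shown here only to illustrate the convention (the `renderTemplate` name is hypothetical, not part of orkajs):

```typescript
// Replace {{name}} placeholders with values from a map; unknown
// placeholders are left untouched so missing data is easy to spot.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match
  );
}

const prompt = renderTemplate(
  'Context: {{context}}\n\nQuestion: {{input}}',
  { context: 'Refunds take 5 business days.', input: 'How long do refunds take?' }
);
// prompt === 'Context: Refunds take 5 business days.\n\nQuestion: How long do refunds take?'
```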

2. Complete Example: Smart Support

graph-workflow.ts
import {
  createOrka,
  OpenAIAdapter,
  MemoryVectorAdapter,
  startNode, endNode, actionNode, conditionNode, llmNode, retrieveNode, edge
} from 'orkajs';

async function main() {
  const orka = createOrka({
    llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
    vectorDB: new MemoryVectorAdapter(),
  });

  // Create FAQ knowledge base
  await orka.knowledge.create({
    name: 'faq',
    source: [
      { text: 'To reset your password, go to Settings > Security.', metadata: { topic: 'password' } },
      { text: 'Refunds are processed within 5 business days.', metadata: { topic: 'refund' } },
      { text: 'Contact us at support@example.com with any questions.', metadata: { topic: 'contact' } },
    ],
  });

  // Define the graph
  const graph = orka.graph({
    name: 'smart-support',
    nodes: [
      startNode('start'),

      // Classify question with LLM
      actionNode('classify', async (ctx) => {
        const result = await ctx.llm.generate(
          `Classify this question into one category: "technical", "billing", or "general".
Question: ${ctx.input}
Respond with ONLY the category.`,
          { temperature: 0, maxTokens: 10 }
        );
        ctx.output = result.content.trim().toLowerCase();
        ctx.metadata.category = ctx.output;
        return ctx;
      }),

      // Conditional router based on classification
      conditionNode('router', (ctx) => {
        const category = (ctx.metadata.category as string) ?? 'general';
        if (category.includes('technical')) return 'technical';
        if (category.includes('billing')) return 'billing';
        return 'general';
      }),

      // Technical branch: RAG + LLM
      retrieveNode('tech-retrieve', 'faq', { topK: 3 }),
      llmNode('tech-answer', {
        systemPrompt: 'You are a technical expert. Answer based on the context.',
        temperature: 0.3,
        promptTemplate: 'Context:\n{{context}}\n\nTechnical question: {{input}}',
      }),

      // Billing branch: Direct LLM
      llmNode('billing-answer', {
        systemPrompt: 'You are a billing expert. Be precise and professional.',
        temperature: 0.3,
      }),

      // General branch
      llmNode('general-answer', {
        systemPrompt: 'You are a general support assistant. Be helpful and concise.',
        temperature: 0.5,
      }),

      endNode('end'),
    ],
    edges: [
      edge('start', 'classify'),
      edge('classify', 'router'),
      // Conditional edges with labels
      edge('router', 'tech-retrieve', 'technical'),
      edge('router', 'billing-answer', 'billing'),
      edge('router', 'general-answer', 'general'),
      // Convergence to end
      edge('tech-retrieve', 'tech-answer'),
      edge('tech-answer', 'end'),
      edge('billing-answer', 'end'),
      edge('general-answer', 'end'),
    ],
    onNodeComplete: (nodeId, ctx) => {
      console.log(`✅ Node "${nodeId}" completed`);
    },
  });

  // Run the graph
  const result = await graph.run('How do I reset my password?');

  console.log(`Output: ${result.output}`);
  console.log(`Path: ${result.path.join(' → ')}`);
  console.log(`Latency: ${result.totalLatencyMs}ms`);
  console.log(`Category: ${result.metadata.category}`);
}

main().catch(console.error);
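Because LLM classifiers sometimes return extra words or punctuation, the router above matches with `includes` and falls back to 'general'. That normalization rule can be tested in isolation; this is a stand-alone sketch of the same logic (the `routeCategory` name is illustrative, not an orkajs API):

```typescript
// Normalize a raw LLM classification reply to one of the three branch labels.
function routeCategory(raw: string | undefined): "technical" | "billing" | "general" {
  const category = (raw ?? "general").trim().toLowerCase();
  if (category.includes("technical")) return "technical";
  if (category.includes("billing")) return "billing";
  return "general"; // fallback for unexpected replies
}

routeCategory('Technical.');  // → "technical"
routeCategory(' BILLING ');   // → "billing"
routeCategory(undefined);     // → "general"
```

Keeping the matching lenient (substring rather than strict equality) makes the graph robust to replies like "Category: billing" without an extra parsing step.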

3. Mermaid Diagram Export

Visualize the structure of your graph with automatic Mermaid diagram generation:

mermaid-export.ts
// Generate the Mermaid diagram
const mermaidCode = graph.toMermaid();
console.log(mermaidCode);

// Output:
// graph TD
//   start([Start])
//   classify[classify]
//   router{router}
//   tech-retrieve[tech-retrieve]
//   tech-answer[tech-answer]
//   billing-answer[billing-answer]
//   general-answer[general-answer]
//   end([End])
//   start --> classify
//   classify --> router
//   router -->|technical| tech-retrieve
//   router -->|billing| billing-answer
//   router -->|general| general-answer
//   tech-retrieve --> tech-answer
//   tech-answer --> end
//   billing-answer --> end
//   general-answer --> end
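Generating a diagram like this is a simple fold over the node and edge lists. The following is a minimal sketch of the idea, not the orkajs `toMermaid` implementation, and it uses simplified hypothetical node records:

```typescript
interface NodeDef { id: string; kind: "start" | "end" | "condition" | "other" }
interface EdgeDef { from: string; to: string; label?: string }

// Emit Mermaid flowchart syntax: ([...]) for terminals, {...} for
// decision nodes, [...] for everything else; labels go on the arrows.
function toMermaid(nodes: NodeDef[], edges: EdgeDef[]): string {
  const lines = ["graph TD"];
  for (const n of nodes) {
    if (n.kind === "start" || n.kind === "end") lines.push(`  ${n.id}([${n.id}])`);
    else if (n.kind === "condition") lines.push(`  ${n.id}{${n.id}}`);
    else lines.push(`  ${n.id}[${n.id}]`);
  }
  for (const e of edges) {
    lines.push(e.label ? `  ${e.from} -->|${e.label}| ${e.to}` : `  ${e.from} --> ${e.to}`);
  }
  return lines.join("\n");
}

const diagram = toMermaid(
  [{ id: "start", kind: "start" }, { id: "router", kind: "condition" }, { id: "answer", kind: "other" }],
  [{ from: "start", to: "router" }, { from: "router", to: "answer", label: "general" }]
);
```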

4. GraphResult Structure

graph-result.ts
interface GraphResult {
  output: string;                        // Final output of the graph
  path: string[];                        // Path taken: ['start', 'classify', 'router', 'tech-retrieve', 'tech-answer', 'end']
  metadata: Record<string, unknown>;     // Accumulated metadata
  totalLatencyMs: number;                // Total execution time
  nodeResults: Map<string, NodeResult>;  // Results by node
}

// Example usage
const result = await graph.run('My question...');

// Analyze the path taken
if (result.path.includes('tech-answer')) {
  console.log('Technical question detected');
}

// Access metadata
console.log('Category:', result.metadata.category);

// Time per node
for (const [nodeId, nodeResult] of result.nodeResults) {
  console.log(`${nodeId}: ${nodeResult.latencyMs}ms`);
}

Common Use Cases

🎯 Intent Classification

Route requests to specialized handlers based on LLM classification

📚 Conditional RAG

Retrieve context only when needed; skip it for simple queries

🔄 Multi-Step Processing

Chain several LLM calls with intermediate processing

⚡ Quality Gates

Check output quality and retry or escalate as needed
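The quality-gate pattern maps naturally onto a conditionNode: inspect a score, loop back to the generator, or escalate once retries are exhausted. As a self-contained sketch of that decision rule (toy thresholds and names, not an orkajs API):

```typescript
// Decide what a quality gate should do, given an output score and how
// many generation attempts have already been made.
function qualityGate(score: number, attempts: number, maxRetries = 2): "pass" | "retry" | "escalate" {
  if (score >= 0.7) return "pass";           // good enough: continue toward end
  if (attempts < maxRetries) return "retry"; // loop back to the generation node
  return "escalate";                         // hand off to a human or a stronger model
}

qualityGate(0.9, 0); // → "pass"
qualityGate(0.4, 1); // → "retry"
qualityGate(0.4, 2); // → "escalate"
```

In a graph, the "retry" label would point back at the generating llmNode, with the attempt counter kept in `ctx.metadata` so the gate terminates.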