OrkaJS

Graph Workflows

Build complex AI flows with conditions, branches, parallel execution, and Mermaid diagram export.

# Why Graph Workflows?

While linear workflows (Multi-Step) are great for simple pipelines, Graph Workflows let you build complex flows with conditions, branches, loops, and parallel execution. Think of them as flowcharts for AI — you define nodes (actions) and edges (connections), and the graph engine handles execution.

import { createOrka, OpenAIAdapter } from 'orkajs';
import { startNode, endNode, actionNode, conditionNode, llmNode, edge } from 'orkajs/graph';

const orka = createOrka({
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorDB: myVectorDB,
});

const graph = orka.graph({
  name: 'smart-support',
  nodes: [
    startNode('start'),
    actionNode('classify', async (ctx) => {
      const result = await ctx.llm.generate(
        `Classify as "technical" or "general": ${ctx.input}`
      );
      ctx.metadata.category = result.content.trim().toLowerCase();
      return ctx;
    }),
    conditionNode('router', (ctx) => ctx.metadata.category as string),
    llmNode('respond', { temperature: 0.3 }),
    endNode('end'),
  ],
  edges: [
    edge('start', 'classify'),
    edge('classify', 'router'),
    edge('router', 'respond', 'technical'), // If category === 'technical'
    edge('router', 'respond', 'general'),   // If category === 'general'
    edge('respond', 'end'),
  ],
});

const result = await graph.run('How do I configure SSL?');

console.log(result.output); // Final response
console.log(result.path);   // ['start', 'classify', 'router', 'respond', 'end']
console.log(result.steps);  // Detailed execution trace
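To build an intuition for what the engine does with these nodes and edges, here is a minimal, self-contained sketch of such a graph-execution loop. This is illustrative only, not Orka's actual internals: the `Ctx`, `GraphNode`, and `runGraph` names are our own, and the real engine tracks more state (e.g. `steps`).

```typescript
// Illustrative sketch only: not Orka's actual internals. The engine is
// a loop: run the current node, then follow the matching outgoing edge.
type Ctx = { input: string; output?: string; metadata: Record<string, unknown> };

type GraphNode =
  | { id: string; kind: 'start' | 'end' }
  | { id: string; kind: 'action'; fn: (ctx: Ctx) => Ctx | Promise<Ctx> }
  | { id: string; kind: 'condition'; fn: (ctx: Ctx) => string };

type GraphEdge = { from: string; to: string; label?: string };

async function runGraph(nodes: GraphNode[], edges: GraphEdge[], input: string) {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  let ctx: Ctx = { input, metadata: {} };
  const path: string[] = [];
  let current = nodes.find((n) => n.kind === 'start');
  if (!current) throw new Error('Graph needs a start node');

  while (current.kind !== 'end') {
    path.push(current.id);
    let label: string | undefined;
    if (current.kind === 'action') ctx = await current.fn(ctx);
    else if (current.kind === 'condition') label = current.fn(ctx);

    // Condition nodes pick the edge whose label matches the returned
    // string; other nodes take their single outgoing edge.
    const next = edges.find(
      (e) => e.from === current!.id && (label === undefined || e.label === label)
    );
    if (!next) throw new Error(`No outgoing edge from '${current.id}'`);
    current = byId.get(next.to)!;
  }
  path.push(current.id);
  return { output: ctx.output, path, metadata: ctx.metadata };
}
```

The essential point is that a condition node's return value selects among labeled edges, while every other node simply hands the (possibly modified) context to its successor.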

# Node Types

Orka provides seven node types for building graph workflows:

| Node | Description |
| --- | --- |
| startNode(id) | Entry point of the graph |
| endNode(id) | Exit point, stops execution |
| actionNode(id, fn) | Custom action that transforms context |
| conditionNode(id, fn) | Branching node, returns label string |
| llmNode(id, opts) | Calls LLM with configurable options |
| retrieveNode(id, name) | Semantic search in knowledge base |
| parallelNode(id, nodeIds) | Execute multiple nodes simultaneously |

# Node Details

startNode(id) / endNode(id)

Entry and exit points of the graph. Every graph must have exactly one startNode and at least one endNode.

startNode('start') // Entry point
endNode('end') // Exit point - stops execution and returns result

actionNode(id, fn)

Executes custom logic. Receives the graph context and must return it (possibly modified).

actionNode('process', async (ctx) => {
  // Access input, output, metadata, llm
  const processed = ctx.input.toUpperCase();
  ctx.output = processed;
  ctx.metadata.processedAt = Date.now();
  return ctx;
})

conditionNode(id, fn)

Branching node that returns a string label. The graph follows the edge matching that label.

conditionNode('router', (ctx) => {
  const score = ctx.metadata.score as number;
  if (score > 0.8) return 'high';
  if (score > 0.5) return 'medium';
  return 'low';
})

// Edges:
edge('router', 'premium-response', 'high')
edge('router', 'standard-response', 'medium')
edge('router', 'fallback-response', 'low')
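Conceptually, resolving the next node after a condition node is just a lookup over the outgoing edges by (from, label). A hedged sketch, using our own `Edge` type and a hypothetical `nextNode` helper rather than anything Orka exports:

```typescript
// Illustrative helper: find the destination of the edge whose label
// matches the string a condition node returned.
type Edge = { from: string; to: string; label?: string };

function nextNode(edges: Edge[], from: string, label: string): string {
  const match = edges.find((e) => e.from === from && e.label === label);
  if (!match) throw new Error(`No edge from '${from}' with label '${label}'`);
  return match.to;
}
```

This is also why every label a condition node can return must have a matching edge, or execution has nowhere to go.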

llmNode(id, options)

Calls the LLM with the current context. Automatically builds a prompt from input + retrieved docs.

llmNode('generate', {
  temperature: 0.7,
  maxTokens: 1000,
  systemPrompt: 'You are a helpful assistant.'
})
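The "automatically built" prompt presumably combines the system prompt, any retrieved documents, and the user input. A sketch of what that assembly could look like; `buildPrompt` and `Doc` are our own illustrative names, not Orka's API, and the real format may differ:

```typescript
// Illustrative only: one plausible way an llmNode could assemble its
// prompt from the context's input and any retrieved documents.
type Doc = { content: string; score: number };

function buildPrompt(input: string, retrievedDocs: Doc[] = [], systemPrompt?: string): string {
  const parts: string[] = [];
  if (systemPrompt) parts.push(systemPrompt);
  if (retrievedDocs.length > 0) {
    // Inline each retrieved document as context above the question
    parts.push('Context:\n' + retrievedDocs.map((d) => `- ${d.content}`).join('\n'));
  }
  parts.push(`Question: ${input}`);
  return parts.join('\n\n');
}
```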

retrieveNode(id, knowledgeName, options)

Performs semantic search and adds results to ctx.retrievedDocs.

retrieveNode('search', 'documentation', {
  topK: 5,
  minScore: 0.7
})
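The topK / minScore semantics likely amount to: rank matches by similarity score, drop anything below the threshold, and cap the result count. A hedged sketch with our own `filterResults` helper (not an Orka export):

```typescript
// Illustrative sketch of topK / minScore filtering over scored results.
type Scored = { content: string; score: number };

function filterResults(results: Scored[], topK: number, minScore = 0): Scored[] {
  return [...results]
    .sort((a, b) => b.score - a.score) // highest similarity first
    .filter((r) => r.score >= minScore) // enforce the score floor
    .slice(0, topK); // cap the number of documents returned
}
```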

parallelNode(id, nodeIds)

Executes multiple nodes simultaneously. Results are merged into ctx.parallelResults.

// Define nodes to run in parallel
actionNode('fetch-weather', async (ctx) => { ... }),
actionNode('fetch-news', async (ctx) => { ... }),
actionNode('fetch-stocks', async (ctx) => { ... }),

// Parallel node runs all three simultaneously
parallelNode('gather-data', ['fetch-weather', 'fetch-news', 'fetch-stocks'])

// After execution:
// ctx.parallelResults = {
//   'fetch-weather': { ... },
//   'fetch-news': { ... },
//   'fetch-stocks': { ... }
// }
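Fan-out like this is typically built on Promise.all: run every task concurrently, then key each result by the id that produced it. A minimal sketch of that pattern (`runParallel` is our illustrative name, not Orka's internals):

```typescript
// Illustrative sketch: run several named async steps concurrently and
// merge their results under their ids, like ctx.parallelResults.
async function runParallel<T>(
  tasks: Record<string, () => Promise<T>>
): Promise<Record<string, T>> {
  const ids = Object.keys(tasks);
  // Promise.all preserves order, so values[i] belongs to ids[i]
  const values = await Promise.all(ids.map((id) => tasks[id]()));
  return Object.fromEntries(ids.map((id, i) => [id, values[i]]));
}
```

Note that with Promise.all semantics, one rejected task would fail the whole parallel step; a tolerant variant would use Promise.allSettled instead.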

# Edges

Edges connect nodes and define the flow. For condition nodes, you can specify a label to match.

import { edge } from 'orkajs/graph';

// Simple edge: from -> to
edge('start', 'process')

// Conditional edge: from -> to (when condition returns 'label')
edge('router', 'premium-path', 'premium')
edge('router', 'standard-path', 'standard')

// Multiple edges from one node
edges: [
  edge('start', 'classify'),
  edge('classify', 'router'),
  edge('router', 'technical-support', 'technical'),
  edge('router', 'general-support', 'general'),
  edge('router', 'sales', 'sales'),
  edge('technical-support', 'end'),
  edge('general-support', 'end'),
  edge('sales', 'end'),
]

# Mermaid Export

Visualize your graph as a Mermaid diagram for documentation or debugging:

console.log(graph.toMermaid());

// Output:
// graph TD
//   start((start))
//   classify[classify]
//   router{router}
//   respond[respond]
//   end((end))
//   start --> classify
//   classify --> router
//   router -->|technical| respond
//   router -->|general| respond
//   respond --> end
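Given the output above, the export is presumably a straight mapping from the node and edge lists: circles for start/end, diamonds for condition nodes, rectangles for the rest, and labeled arrows for conditional edges. A hedged sketch of that mapping, using our own `MNode`/`MEdge` types rather than Orka's:

```typescript
// Illustrative sketch of how toMermaid() could be derived from the
// node and edge lists. Shapes follow Mermaid flowchart syntax:
// ((x)) circle, {x} diamond, [x] rectangle, -->|label| labeled arrow.
type MNode = { id: string; kind: 'start' | 'end' | 'condition' | 'other' };
type MEdge = { from: string; to: string; label?: string };

function toMermaid(nodes: MNode[], edges: MEdge[]): string {
  const shape = (n: MNode) =>
    n.kind === 'start' || n.kind === 'end' ? `${n.id}((${n.id}))`
    : n.kind === 'condition' ? `${n.id}{${n.id}}`
    : `${n.id}[${n.id}]`;
  const arrow = (e: MEdge) =>
    e.label ? `${e.from} -->|${e.label}| ${e.to}` : `${e.from} --> ${e.to}`;
  return [
    'graph TD',
    ...nodes.map((n) => '  ' + shape(n)),
    ...edges.map((e) => '  ' + arrow(e)),
  ].join('\n');
}
```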

💡 Pro Tip

Paste the Mermaid output into mermaid.live or any Markdown renderer that supports Mermaid to visualize your graph.

# Complete Example

import { createOrka, OpenAIAdapter, PineconeAdapter } from 'orkajs';
import {
  startNode, endNode, actionNode, conditionNode,
  llmNode, retrieveNode, parallelNode, edge
} from 'orkajs/graph';

const orka = createOrka({
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorDB: new PineconeAdapter({ apiKey: process.env.PINECONE_API_KEY! }),
});

const supportGraph = orka.graph({
  name: 'intelligent-support',
  nodes: [
    startNode('start'),

    // Classify the query
    actionNode('classify', async (ctx) => {
      const result = await ctx.llm.generate(
        `Classify this query as 'technical', 'billing', or 'general': ${ctx.input}`
      );
      ctx.metadata.category = result.content.trim().toLowerCase();
      return ctx;
    }),

    // Route based on classification
    conditionNode('router', (ctx) => ctx.metadata.category as string),

    // Technical path: search docs first
    retrieveNode('search-docs', 'technical-docs', { topK: 5 }),
    llmNode('technical-response', {
      temperature: 0.3,
      systemPrompt: 'You are a technical support expert. Use the provided documentation.'
    }),

    // Billing path: direct response
    llmNode('billing-response', {
      temperature: 0.5,
      systemPrompt: 'You are a billing support agent. Be helpful and clear about policies.'
    }),

    // General path
    llmNode('general-response', { temperature: 0.7 }),

    endNode('end'),
  ],
  edges: [
    edge('start', 'classify'),
    edge('classify', 'router'),
    edge('router', 'search-docs', 'technical'),
    edge('router', 'billing-response', 'billing'),
    edge('router', 'general-response', 'general'),
    edge('search-docs', 'technical-response'),
    edge('technical-response', 'end'),
    edge('billing-response', 'end'),
    edge('general-response', 'end'),
  ],
});

// Run the graph
const result = await supportGraph.run('How do I configure SSL certificates?');

console.log('Category:', result.metadata.category); // 'technical'
console.log('Path:', result.path); // ['start', 'classify', 'router', 'search-docs', 'technical-response', 'end']
console.log('Answer:', result.output);

# Tree-shaking Imports

// ✅ Import only what you need
import {
  startNode, endNode, actionNode, conditionNode,
  llmNode, retrieveNode, parallelNode, edge
} from 'orkajs/graph';

// ✅ Or import from main package
import { startNode, endNode, actionNode, edge } from 'orkajs';