# OrkaJS - Production-Ready AI Framework

> TypeScript-first framework for building LLM applications with agents, RAG, workflows, evaluation, and multi-model orchestration.

## Quick Start

Install the full framework:

```bash
npm install orkajs
```

Or install selective packages:

```bash
npm install @orka-js/core @orka-js/openai @orka-js/memory
```

Scaffold a new project with the CLI:

```bash
npx @orka-js/cli init ./my-agent
npx @orka-js/cli dev
```

## Core Concepts

- **Intent-based API**: `orka.ask()`, `orka.agent()`, `orka.workflow()`, `orka.graph()`, `orka.test()`, `orka.prompts`
- **Modular Architecture**: 41+ packages, install only what you need
- **Provider Agnostic**: OpenAI, Anthropic, Mistral, Cohere, Replicate, Ollama, Google Gemini
- **Production Ready**: Built-in retry, fallback, observability, evaluation
- **Type-Safe**: Full TypeScript support with IDE autocomplete
- **Framework Agnostic**: Works with Express, NestJS, Hono, AdonisJS, Cloudflare Workers, etc.

## Package Structure

### Core & Adapters

- `@orka-js/core` - Core types, errors, utilities, Orka class, Knowledge, Chunker
- `@orka-js/openai` - OpenAI adapter (GPT-4, GPT-4o, GPT-3.5)
- `@orka-js/anthropic` - Anthropic adapter (Claude 3.5 Sonnet, Claude 3 Opus)
- `@orka-js/mistral` - Mistral AI adapter (Mistral Large, Medium, Small)
- `@orka-js/cohere` - Cohere adapter (Command R+, embeddings, streaming)
- `@orka-js/google` - Google Gemini adapter (Gemini Pro, Gemini Ultra)
- `@orka-js/ollama` - Ollama adapter (local models: Llama, Mistral, etc.)
- `@orka-js/replicate` - Replicate adapter (Llama 2, SDXL, and any hosted open-source model)

### Vector Databases

- `@orka-js/memory` - In-memory vector store (dev/testing)
- `@orka-js/pinecone` - Pinecone adapter (production-grade)
- `@orka-js/qdrant` - Qdrant adapter (self-hosted/cloud)
- `@orka-js/chroma` - ChromaDB adapter (open-source)
- `@orka-js/pgvector` - PostgreSQL pgvector adapter (vector search inside your existing Postgres DB)

### Agent System

- `@orka-js/agent` - Complete agent system with multiple strategies:
  - **ReActAgent**: Reasoning + Acting loop (Thought → Action → Observation)
  - **PlanAndExecuteAgent**: Plan-first execution with replanning on failure
  - **OpenAIFunctionsAgent**: Native OpenAI function calling format
  - **StructuredChatAgent**: JSON-based communication with schema validation
  - **HITLAgent**: Human-in-the-loop with approval checkpoints
  - **StreamingToolAgent** _(v1.5.0)_: Streams LLM tokens in real time while executing tool calls mid-stream
  - **AgentTeam**: Orchestrate multiple agents collaborating on a task
    - Strategies: `supervisor`, `peer-to-peer`, `hierarchical`, `round-robin`, `consensus`
    - `execute(task)`: Run the team and get a `TeamResult` with per-agent contributions
    - `executeStream(task)`: Async iterable of `TeamEvent` (agent_started, message_sent, round_completed, etc.)
    - Configurable `maxRounds` and `consensusThreshold`
  - **Agent Toolkits**:
    - SQLToolkit: `sql_query`, `sql_schema`, `sql_list_tables` (read-only mode, auto LIMIT)
    - CSVToolkit: `csv_info`, `csv_search`, `csv_filter`, `csv_aggregate`

### Data Processing

- `@orka-js/tools` - Comprehensive data processing toolkit:
  - **Document Loaders**: TextLoader, CSVLoader, JSONLoader, MarkdownLoader, PDFLoader, DirectoryLoader
  - **Text Splitters**: RecursiveCharacterTextSplitter, MarkdownTextSplitter, CodeTextSplitter, TokenTextSplitter
  - **Retrievers**: MultiQueryRetriever, ContextualCompressionRetriever, EnsembleRetriever, VectorRetriever, ParentDocumentRetriever, SelfQueryRetriever, BM25Retriever
  - **Output Parsers**: JSONParser, StructuredOutputParser (Zod), ListOutputParser, AutoFixingParser, XMLParser, CSVParser, CommaSeparatedListParser
  - **Pre-built Chains**: RetrievalQAChain, ConversationalRetrievalChain, SummarizationChain, QAChain
  - **Prompt Templates**: PromptTemplate, ChatPromptTemplate, FewShotPromptTemplate

### Workflows & Orchestration

- `@orka-js/workflow` - Multi-step workflows with built-in steps:
  - `plan`: Generate an execution plan
  - `retrieve`: Fetch relevant context from the knowledge base
  - `generate`: Generate a response with the LLM
  - `verify`: Validate output quality
  - `improve`: Iterative refinement
  - Custom steps supported
- `@orka-js/graph` - Graph-based workflows:
  - Node-based execution with conditional branching
  - Parallel execution support
  - State management across nodes
  - Mermaid diagram export for visualization
- `@orka-js/orchestration` - Multi-model orchestration strategies:
  - **RouterLLM**: Route requests by complexity/topic
  - **FallbackLLM**: Auto-healing failover (OpenAI → Anthropic → Ollama)
  - **ConsensusLLM**: Best-of-N majority voting
  - **RaceLLM**: Lowest latency wins
  - **LoadBalancerLLM**: Distribute load across providers

### Caching & Performance

- `@orka-js/cache` - Caching layer for cost optimization:
  - **MemoryCache**: In-memory caching (single instance)
  - **RedisCache**: Distributed caching (production)
  - **CachedLLM**: Wrap any LLM with automatic caching
  - **CachedEmbeddings**: Cache embedding generations
  - Cache keys are hashed with SHA-256 (replacing the earlier DJB2 hash) for security

### Resilience & Reliability

- `@orka-js/resilience` - Production-grade resilience:
  - **Retry**: Exponential backoff with jitter
  - **FallbackLLM**: Multi-provider failover
  - **ResilientLLM**: Wrapper combining retry + fallback
  - **Timeout**: AbortController-based timeouts for all adapters

### Memory Management

- `@orka-js/memory-store` - Conversation memory:
  - **Single-session memory**: Buffer, sliding window, summary strategies
  - **Multi-session memory**: TTL-based session management
  - **Summary strategy**: Compresses older messages into a running summary
  - **KGMemory**: Knowledge Graph Memory — extracts entities and relations from conversation turns, builds an in-memory knowledge graph (entities, relations, triples), and uses it for context-aware retrieval
    - Entity types: `PERSON`, `ORGANIZATION`, `LOCATION`, `CONCEPT`, `PRODUCT`, `EVENT`, `OTHER`
    - `addMessage(msg)` / `addMessages(msgs)`: auto-extracts knowledge in batches
    - `queryKnowledge(query)`: LLM-powered graph query returning relevant context
    - `getContextForQuery(query)`: returns conversation messages enriched with graph context
    - `getEntities()`, `getRelations()`, `getTriples()`, `getGraphSummary()`
    - Configurable `maxTriples`, `extractionBatchSize`, `preserveRecentMessages`

### Durable & Scheduled Agents

- `@orka-js/durable` - Durable, resumable, and scheduled agents:
  - **DurableAgent**: Wraps any agent with checkpoint persistence
  - **MemoryDurableStore**: In-memory store (dev/testing)
  - **RedisDurableStore**: Distributed store (production)
  - **Pause/Resume**: Pick up where you left off after interruption
  - **Cron scheduling**: Run agents on a schedule (e.g. `'0 9 * * *'`)
  - **Job statuses**: `pending`, `running`, `paused`, `completed`, `failed`
  - Auto-retry with configurable `maxRetries`

### Agent-to-Agent Communication

- `@orka-js/a2a` - Google A2A (Agent-to-Agent) protocol over HTTP:
  - **A2AServer**: Expose any OrkaJS agent as an A2A-compatible HTTP endpoint
  - **A2AClient**: Connect to remote agents and send messages
  - **Agent discovery**: `getAgentCard()` returns skills and metadata
  - **Multi-agent orchestration**: Chain distributed specialized agents
  - Task state tracking: `pending` | `running` | `completed` | `failed`

### Real-time Voice Agents

- `@orka-js/realtime` - Full STT → LLM → TTS voice pipeline:
  - **RealtimeAgent**: Wires together speech-to-text, LLM, and text-to-speech
  - **OpenAISTTAdapter**: Whisper-based speech-to-text
  - **OpenAITTSAdapter**: OpenAI TTS with streaming (sentence-by-sentence for low latency)
  - **`process(audio)`**: Full pipeline returning `{transcript, response, audio}`
  - **`processStream(audio)`**: Async iterable of `RealtimeEvent` (transcript, token, audio_chunk, done, error)
  - **`wsHandler()`**: WebSocket handler — clients send binary audio, server streams back events
  - Tool support: the LLM can call tools during a voice conversation

### Evaluation & Testing

- `@orka-js/evaluation` - Automated quality testing:
  - **Built-in Metrics**: Faithfulness, relevance, coherence, latency, token usage
  - **Custom Metrics**: Define your own evaluation criteria
  - **Test Runner**: CI/CD integration with `orka.test()`
  - **Assertions**: `expectFaithfulness()`, `expectLatency()`, `expectTokens()`
  - **Reporters**: ConsoleReporter, JSONReporter, JUnitReporter
- `@orka-js/test` - Unit testing utilities:
  - **`mockLLM(responses)`**: Deterministic mock LLM — match by string, RegExp, or function
  - **`AgentTestBed`**: Test harness with `run()`, `toolCalls`, `output`, and fluent assertions
  - **`extendExpect(expect)`**: Custom Vitest/Jest matchers: `toHaveOutput`, `toHaveCalledTool`
  - Peer dependency: Vitest (or Jest)
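The `mockLLM(responses)` matching behavior described above can be sketched in a few lines. This is an illustrative, self-contained approximation, not the actual `@orka-js/test` implementation; the `Matcher` and `MockRule` shapes and the `fallback` parameter are assumptions. The idea: the first rule whose string, RegExp, or predicate matches the prompt determines the response, making agent tests deterministic and repeatable.

```typescript
// Sketch of a deterministic mock LLM (assumed shapes, for illustration only).
type Matcher = string | RegExp | ((prompt: string) => boolean);

interface MockRule {
  when: Matcher;
  reply: string;
}

// A string matches as a substring, a RegExp via test(), a function as a predicate.
function matches(matcher: Matcher, prompt: string): boolean {
  if (typeof matcher === 'string') return prompt.includes(matcher);
  if (matcher instanceof RegExp) return matcher.test(prompt);
  return matcher(prompt);
}

// First matching rule wins; unmatched prompts get the fallback reply.
function mockLLM(rules: MockRule[], fallback = '') {
  return {
    async generate(prompt: string): Promise<string> {
      const rule = rules.find((r) => matches(r.when, prompt));
      return rule ? rule.reply : fallback;
    },
  };
}

// Usage: same prompt always yields the same canned response.
const llm = mockLLM([
  { when: /book/i, reply: 'Booking confirmed' },
  { when: 'refund', reply: 'Refund issued' },
]);
```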
### Observability & Monitoring

- `@orka-js/observability` - Production monitoring:
  - **Tracer**: Distributed tracing with hooks
  - **Logging**: Configurable log levels (debug, info, warn, error)
  - **Hooks**: `onStart`, `onSuccess`, `onError`, `onComplete`
  - **Memory-leak protection**: `maxTraces` (default 1000) plus TTL-based eviction bound trace storage
- `@orka-js/otel` - OpenTelemetry integration:
  - **`createOtelExporter(config)`**: Send spans to any OTLP backend (Jaeger, Tempo, Datadog, Grafana Cloud)
  - **`OtelExporter`**: `startSpan`, `endSpan`, `recordError`, `flush`
  - **`W3CTraceContextPropagator`**: Inject/extract W3C TraceContext headers for distributed tracing
- `@orka-js/collector` - Trace collector (backward-compatible re-export of `@orka-js/devtools`)

### Prompt Management

- `@orka-js/prompts` - Prompt versioning and management:
  - **Registry**: Centralized prompt storage
  - **Versioning**: Track prompt changes over time
  - **Diff**: Compare prompt versions
  - **Rollback**: Revert to previous versions
  - **Persistence**: File-based storage

### Advanced Features

- `@orka-js/multimodal` - Vision and audio agents:
  - **VisionAgent**: Image analysis with GPT-4 Vision, Claude 3
  - **AudioAgent**: Audio processing
  - **ContentPart types**: text, image_url, image_base64, audio
  - Cross-modal workflows
- `@orka-js/mcp` - Model Context Protocol:
  - Connect to standardized MCP servers
  - Access external tools and data sources
  - Filesystem, GitHub, Database integrations
  - MCP gateway support
- `@orka-js/finetuning` - Model fine-tuning:
  - Dataset preparation and validation
  - Training job management
  - Evaluation and metrics
  - OpenAI fine-tuning API integration
- `@orka-js/ocr` - Optical Character Recognition:
  - Document text extraction
  - Layout analysis
  - Multi-language support

### Developer Tooling

- `@orka-js/cli` - OrkaJS CLI:
  - `orka init [dir]`: Scaffold a new OrkaJS project with prompts
  - `orka dev [--port]`: Start the interactive dev server
  - `npx @orka-js/cli` — no installation required
- `@orka-js/server` - Zero-config HTTP server for agents:
  - `createOrkaServer(config)`: Start a REST API in one call
  - Auto-generated routes: `GET /ai`, `GET /ai/:name`, `POST /ai/:name`, `POST /ai/:name/stream`
  - `start()` / `stop()` lifecycle methods
- `@orka-js/devtools` - Visual debugging and observability dashboard
- `@orka-js/react` - React components for graph visualization:
  - Graph component: renders the agent execution graph in real time
  - **`useGraph()`**: Hook wrapping `agent.run()` with `execution`, `isRunning`, `reset`

### Framework Integrations

- `@orka-js/express` - Express.js middleware:
  - `orkaMiddleware(config)`: Mount agents on any Express app
  - Auto-generates REST + SSE streaming routes under a configurable prefix
- `@orka-js/hono` - Hono middleware (edge-compatible):
  - `orkaHono(config)`: Works on Cloudflare Workers, Deno, Bun, and Node.js
  - Same auto-generated routes as Express
- `@orka-js/nestjs` - NestJS v2 integration:
  - `OrkaModule.forRoot()` / `forRootAsync()`: DI registration
  - `@OrkaAgent()` decorator and `@InjectAgent()` injection
  - `createOrkaController()`: Auto-generated REST controller
  - Semantic Guard, Guards, Pipes, CQRS support, microservices

### Meta Package

- `orkajs` - Meta-package re-exporting all packages for convenience

## Security Features

### PII Protection (GDPR-compliant)

- **PIIGuard**: Detect and redact sensitive information
- **Patterns**: Email, phone, SSN, credit card, IP address, etc.
- **Custom patterns**: Define your own PII patterns
- **Redaction modes**: Replace with `[EMAIL]`, `[PHONE]`, or custom placeholders

### SQL Injection Prevention

- **SQLToolkit**: Read-only mode by default
- **Query validation**: `validateReadOnlyQuery()` blocks dangerous keywords
- **Identifier validation**: `isValidIdentifier()` validates table/column names
- **Multi-statement protection**: Blocks multiple SQL statements

### SSRF Protection

- **Knowledge URL fetch**: Protocol validation (http/https only)
- **Timeout**: 30s default timeout
- **Size limit**: 50MB max response size
- **Response validation**: Check `response.ok` before processing

## Architecture Patterns

### BaseAgent Abstract Class

All agents extend `BaseAgent`, which provides:

- Event emitter (`on`, `off`, `emit`)
- Standard lifecycle hooks
- Error handling
- Memory management

### Error Hierarchy

- `OrkaError`: Base error class with error codes
- `LLMError`: LLM-specific errors
- `VectorDBError`: Vector database errors
- `ValidationError`: Input validation errors
- `isRetryable()`: Helper to determine if an error is retryable

## Example Usage

### Basic RAG

```typescript
import { createOrka } from '@orka-js/core';
import { OpenAIAdapter } from '@orka-js/openai';
import { MemoryVectorAdapter } from '@orka-js/memory';

const orka = createOrka({
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  vectorDB: new MemoryVectorAdapter(),
});

// Index documents
await orka.knowledge.create({
  name: 'docs',
  source: ['Your company documentation...'],
});

// Ask questions with RAG
const result = await orka.ask({
  knowledge: 'docs',
  question: 'How does authentication work?',
});

console.log(result.answer);
```

### Streaming Agent with Tools

```typescript
import { StreamingToolAgent } from '@orka-js/agent';
import { OpenAIAdapter } from '@orka-js/openai';

const llm = new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! });

const agent = new StreamingToolAgent(
  {
    goal: 'Help customers with product questions',
    tools: [searchProductsTool, getDetailsTool],
  },
  llm
);

// Stream tokens + tool results in real time
for await (const event of agent.runStream('Find me a red T-shirt under $30')) {
  if (event.type === 'token') process.stdout.write(event.content);
  if (event.type === 'tool_call') console.log(`Calling: ${event.name}`);
  if (event.type === 'done') console.log('\nDone:', event.output);
}
```

### Multi-Step Workflow

```typescript
import { createOrka } from '@orka-js/core';
import { OpenAIAdapter } from '@orka-js/openai';

const orka = createOrka({
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
});

const flow = orka.workflow()
  .step('plan', { prompt: 'Create a plan for: {{input}}' })
  .step('execute', { prompt: 'Execute the plan: {{plan}}' })
  .step('verify', { prompt: 'Verify the result: {{execute}}' });

const result = await flow.run({ input: 'Build a REST API' });
```

### Multi-Model Orchestration

```typescript
import { FallbackLLM, RouterLLM } from '@orka-js/orchestration';
import { OpenAIAdapter } from '@orka-js/openai';
import { AnthropicAdapter } from '@orka-js/anthropic';
import { OllamaAdapter } from '@orka-js/ollama';

// Fallback: OpenAI → Anthropic → Ollama
const resilient = new FallbackLLM({
  adapters: [
    new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
    new AnthropicAdapter({ apiKey: process.env.ANTHROPIC_API_KEY! }),
    new OllamaAdapter({ baseURL: 'http://localhost:11434' }),
  ],
});

// Route complex prompts to GPT-4o
const smart = new RouterLLM({
  routes: [
    {
      condition: (prompt) => prompt.length > 500,
      adapter: new OpenAIAdapter({ model: 'gpt-4o' }),
    },
  ],
  defaultAdapter: new OpenAIAdapter({ model: 'gpt-4o-mini' }),
});
```

### Durable Agent

```typescript
import { DurableAgent, RedisDurableStore } from '@orka-js/durable';
import { ReActAgent } from '@orka-js/agent';

const store = new RedisDurableStore({ url: process.env.REDIS_URL!
});

const agent = new ReActAgent({ orka, tools });
const durable = new DurableAgent(agent, store, { retryOnFailure: true, maxRetries: 3 });

const { jobId } = await durable.run('Analyze the quarterly report');

// Check status later
const job = await durable.getJob(jobId);
console.log(job.status, job.output);

// Pause / resume
await durable.pause(jobId);
await durable.resume(jobId);
```

### Agent-to-Agent (A2A)

```typescript
// Expose an agent
import { A2AServer } from '@orka-js/a2a';

const server = new A2AServer({ port: 4000, agents: [agentCard] });

server.registerAgent(agentCard, async (message) => {
  const result = await agent.run(message.content);
  return { content: result.output };
});

await server.start();

// Connect to a remote agent
import { A2AClient } from '@orka-js/a2a';

const client = new A2AClient({ url: 'http://research-agent:4000' });
const card = await client.getAgentCard();
const response = await client.send({ content: 'Find news about AI regulations' });
```

### Voice Agent

```typescript
import { RealtimeAgent, OpenAISTTAdapter, OpenAITTSAdapter } from '@orka-js/realtime';
import { OpenAIAdapter } from '@orka-js/openai';

const agent = new RealtimeAgent({
  config: { goal: 'You are a helpful voice assistant.', tts: true },
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  stt: new OpenAISTTAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  tts: new OpenAITTSAdapter({ apiKey: process.env.OPENAI_API_KEY!, voice: 'nova' }),
});

// Process audio
const result = await agent.process(audioBuffer, 'audio/wav');
console.log('User said:', result.transcript);
console.log('Agent replied:', result.response);

// Stream (WebSocket)
const wss = new WebSocketServer({ port: 8080 });
wss.on('connection', agent.wsHandler());
```

### pgvector (PostgreSQL)

```typescript
import { PgVectorAdapter } from '@orka-js/pgvector';
import { Orka } from '@orka-js/core';

const vectorDB = new PgVectorAdapter({
  connectionString: process.env.DATABASE_URL!,
  dimension: 1536,
});

const orka = new Orka({ llm, vectorDB });

await orka.knowledge.ingest([{ id: 'doc-1', content: 'Hello world' }]);
const results = await orka.knowledge.query('hello', 5);
```

### OpenTelemetry

```typescript
import { createOtelExporter } from '@orka-js/otel';
import { Orka } from '@orka-js/core';

const exporter = createOtelExporter({
  endpoint: 'http://localhost:4318/v1/traces',
  serviceName: 'my-ai-app',
});

const orka = new Orka({
  llm,
  callbacks: {
    onStart: (run) => exporter.startSpan(run),
    onEnd: (run) => exporter.endSpan(run),
    onError: (run, err) => exporter.recordError(run, err),
  },
});
```

### Testing with @orka-js/test

```typescript
import { describe, it, expect } from 'vitest';
import { mockLLM, AgentTestBed, extendExpect } from '@orka-js/test';
import { StreamingToolAgent } from '@orka-js/agent';

extendExpect(expect); // install custom matchers once

describe('MyAgent', () => {
  it('calls the booking tool', async () => {
    const llm = mockLLM([
      { when: /book/, toolCall: { name: 'bookDemo', args: { slot: 'tomorrow 10am' } } },
    ]);

    const agent = new StreamingToolAgent({ goal: 'Book demos', tools: [bookTool] }, llm);
    const bed = new AgentTestBed({ agent, llm });

    const result = await bed.run('Book a demo for tomorrow morning');

    expect(result).toHaveCalledTool('bookDemo');
    expect(result.toolCalls[0].args.slot).toBe('tomorrow 10am');
  });
});
```

### Express / Hono / NestJS Integration

```typescript
// Express
import { orkaMiddleware } from '@orka-js/express';
app.use(orkaMiddleware({ orka, agents: [{ name: 'assistant', agent }], prefix: '/ai' }));

// Hono (Edge / Cloudflare Workers)
import { orkaHono } from '@orka-js/hono';
app.route('/ai', orkaHono({ orka, agents: [{ name: 'assistant', agent }] }));

// NestJS
import { OrkaModule } from '@orka-js/nestjs';

@Module({
  imports: [OrkaModule.forRoot({ llm: new OpenAIAdapter({ apiKey: '...' }) })],
})
export class AppModule {}
```

### Graph Workflow

```typescript
import { createOrka } from '@orka-js/core';

const orka = createOrka({ llm });

const graph = orka.graph()
  .addNode('analyze', async (state) => {
    const analysis = await llm.generate(`Analyze: ${state.input}`);
    return { ...state, analysis };
  })
  .addNode('decide', async (state) => {
    const decision = state.analysis.includes('complex') ? 'detailed' : 'simple';
    return { ...state, decision };
  })
  .addEdge('analyze', 'decide')
  .addConditionalEdge('decide', (state) => state.decision)
  .setEntryPoint('analyze');

const result = await graph.run({ input: 'Explain quantum computing' });
const diagram = graph.toMermaid();
```

### Human-in-the-Loop

```typescript
import { HITLAgent } from '@orka-js/agent';
import { OpenAIAdapter } from '@orka-js/openai';

const agent = new HITLAgent({
  llm: new OpenAIAdapter({ model: 'gpt-4o' }),
  tools: [transferMoneyTool, sendEmailTool],
  requiresApproval: ['transfer_money'],
  onInterrupt: async (action) => {
    const approved = await waitForUserApproval(action);
    return approved ? agent.resume() : agent.reject('User rejected the action');
  },
});

const result = await agent.run('Transfer $1000 to account XYZ');
```

### Evaluation & Testing

```typescript
import { createOrka } from '@orka-js/core';

const orka = createOrka({ llm, vectorDB });

await orka.test({
  name: 'RAG Quality Test',
  cases: [
    {
      input: { question: 'What is OrkaJS?'
      },
      expectations: {
        faithfulness: { threshold: 0.8 },
        latency: { max: 2000 },
      },
    },
  ],
});
```

## Best Practices

### Error Handling

```typescript
import { OrkaError, LLMError } from '@orka-js/core';

try {
  const result = await orka.ask({ knowledge: 'docs', question: '...' });
} catch (error) {
  if (error instanceof LLMError && error.isRetryable()) {
    // Retry logic
  } else if (error instanceof OrkaError) {
    console.error(`Orka error [${error.code}]:`, error.message);
  } else {
    throw error;
  }
}
```

### Caching for Cost Optimization

```typescript
import { CachedLLM, RedisCache } from '@orka-js/cache';
import { OpenAIAdapter } from '@orka-js/openai';

const cache = new RedisCache({ host: 'localhost', port: 6379, ttl: 3600 });

const llm = new CachedLLM({
  llm: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY! }),
  cache,
});

const result1 = await llm.generate('What is AI?'); // API call
const result2 = await llm.generate('What is AI?'); // Cached (no API call)
```

### Observability

```typescript
import { Tracer } from '@orka-js/observability';

const tracer = new Tracer({ maxTraces: 1000, traceTtlMs: 3600000 });

tracer.on('trace:complete', (trace) => {
  console.log(`Completed: ${trace.operation} in ${trace.duration}ms`);
});

const orka = createOrka({ llm, vectorDB, tracer });
```

### PII Protection

```typescript
import { PIIGuard } from '@orka-js/security';

const guard = new PIIGuard({ patterns: ['email', 'phone', 'ssn', 'credit_card'] });

const sanitized = guard.redact('My email is john@example.com');
// 'My email is [EMAIL]'
```

## Development Workflow

1. **Create feature branch**: `git checkout -b feature/my-feature`
2. **Make changes**: Edit files in `packages/*/src/`
3. **Run tests**: `pnpm test`
4. **Create changeset**: `pnpm changeset` (documents changes)
5. **Commit and PR**: push the branch; CI handles versioning and publishing on merge to main

## TypeScript Configuration

OrkaJS works with all `moduleResolution` modes:

- `node` (legacy, supported)
- `node16` (modern)
- `nodenext` (modern)
- `bundler` (Vite, Webpack, etc.)

## Environment Variables

```bash
# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
MISTRAL_API_KEY=...
GOOGLE_API_KEY=...
COHERE_API_KEY=...
REPLICATE_API_TOKEN=...

# Vector Databases
PINECONE_API_KEY=...
PINECONE_ENVIRONMENT=...
QDRANT_URL=http://localhost:6333
QDRANT_API_KEY=...
DATABASE_URL=postgresql://...  # pgvector / Supabase

# Caching / Durable
REDIS_URL=redis://localhost:6379
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=...

# Observability
ORKA_LOG_LEVEL=info
ORKA_TRACE_ENABLED=true
```

## Links

- **Documentation**: https://orkajs.com
- **GitHub**: https://github.com/orka-ai/orkajs
- **npm**: https://www.npmjs.com/package/orkajs
- **Discord**: [Community Discord]
- **Twitter**: [@OrkaJS]

## Version Information

- **Current Version**: v3.6.7
- **Package Count**: 41 modular packages
- **License**: MIT
- **Node.js**: >= 18.0.0
- **TypeScript**: >= 5.0.0

## Scope & Organization

- **npm organization**: `@orka-js` (with hyphen)
- **Meta package**: `orkajs` (no scope)
- **All scoped packages**: `@orka-js/*`

## Community & Support

- Report bugs on GitHub Issues
- Join Discord for community support
- Follow on Twitter for updates
- Contribute via Pull Requests (see CONTRIBUTING.md)

## Roadmap Status

- ✅ V1: Core RAG + Basic Agents
- ✅ V2: Workflows + Evaluation + Memory
- ✅ V3: Orchestration + Observability + Prompts
- ✅ V4: Loaders + Splitters + Retrievers + Parsers
- ✅ V5: Multimodal + Caching + Templates
- ✅ V6: Advanced Retrievers + Pre-built Chains
- ✅ V7: Advanced Agents + Toolkits + New Parsers
- ✅ V8: Security (PII Guard, SQL Injection, SSRF)
- ✅ V9: MCP + Fine-tuning + OCR + DevTools
- ✅ V10: Durable Agents + A2A + Realtime Voice + pgvector + OpenTelemetry +
  StreamingToolAgent + CLI + Testing Utilities + Framework Integrations (Express, Hono, NestJS v2) + React Graph Visualization
- ✅ V11: Multi-Agent Systems (AgentTeam — supervisor, peer-to-peer, hierarchical, round-robin, consensus)
- 🚧 V12: GraphRAG

## Key Differentiators

1. **TypeScript-First**: Not a Python port, built for TypeScript from day one
2. **Modular**: Install only what you need, tree-shakeable
3. **Production-Ready**: Built-in resilience, observability, evaluation
4. **Provider Agnostic**: Swap any LLM or VectorDB with one line
5. **Intent-Based API**: Code that reads like English
6. **Framework Agnostic**: Works with Express, Hono, NestJS, Cloudflare Workers, and more
7. **Batteries Included**: Everything you need in one ecosystem
8. **Security-First**: PII protection, SQL injection prevention, SSRF protection
9. **Developer Experience**: CLI scaffolding, visual devtools, test utilities, full TypeScript support

---

Last Updated: 2026-04-08