OrkaJS

# LLM Providers

Configure OpenAI, Anthropic, Mistral, and Ollama adapters for Orka AI.

## OpenAI

GPT-4o, GPT-4o-mini, embeddings

```typescript
new OpenAIAdapter({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o-mini',
  embeddingModel: 'text-embedding-3-small',
})
```

## Anthropic

Claude 3.5 Sonnet, Claude 3 Opus

```typescript
new AnthropicAdapter({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-3-5-sonnet-20241022',
})
```

## Mistral

Mistral Small, Medium, Large

```typescript
new MistralAdapter({
  apiKey: process.env.MISTRAL_API_KEY!,
  model: 'mistral-small-latest',
})
```

## Ollama

Local models, no API key

```typescript
new OllamaAdapter({
  model: 'llama3.2',
  baseURL: 'http://localhost:11434',
})
```

# Adapter Configuration

Each adapter accepts specific configuration options. Here are the full options for each provider:

## OpenAIAdapter

```typescript
new OpenAIAdapter({
  apiKey: string,           // Required: OpenAI API key
  model?: string,           // Default: 'gpt-4o-mini'
  embeddingModel?: string,  // Default: 'text-embedding-3-small'
  baseURL?: string,         // Custom API endpoint (for Azure, proxies)
  timeoutMs?: number,       // Request timeout in milliseconds
  maxRetries?: number,      // Retry attempts on failure
})
```

Methods:

- `generate(prompt, options)`: Generate a text completion
- `embed(texts)`: Generate embeddings for texts
- `chat(messages, options)`: Run a multi-turn conversation
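The `timeoutMs` and `maxRetries` options imply retry behavior roughly along these lines. A minimal sketch with exponential backoff; `withRetries` is illustrative, not an orkajs export:

```typescript
// Illustrative retry loop: try the call, back off exponentially between
// failures, and give up after maxRetries additional attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, 2x, 4x, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

The adapter's actual policy (which errors it retries, whether it honors `Retry-After` headers) may differ; check the source before relying on specifics.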

## AnthropicAdapter

```typescript
new AnthropicAdapter({
  apiKey: string,     // Required: Anthropic API key
  model?: string,     // Default: 'claude-3-5-sonnet-20241022'
  maxTokens?: number, // Default max tokens for responses
  timeoutMs?: number, // Request timeout in milliseconds
})
```

Note: Anthropic does not provide embeddings. Use OpenAIAdapter for embeddings when using Anthropic for generation.
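One way to express that split is a small wrapper that delegates generation to one adapter and embeddings to another. This is a sketch with trimmed interfaces; `SplitAdapter` and its shape are assumptions, not orkajs exports:

```typescript
// Trimmed, single-capability views of the adapter interface.
interface GenerationAdapter {
  generate(prompt: string): Promise<string>;
}
interface EmbeddingAdapter {
  embed(texts: string | string[]): Promise<number[][]>;
}

// Delegates each capability to the adapter that actually supports it.
class SplitAdapter implements GenerationAdapter, EmbeddingAdapter {
  constructor(
    private generator: GenerationAdapter,
    private embedder: EmbeddingAdapter,
  ) {}

  generate(prompt: string): Promise<string> {
    return this.generator.generate(prompt);
  }

  embed(texts: string | string[]): Promise<number[][]> {
    return this.embedder.embed(texts);
  }
}
```

You could then pass something like `new SplitAdapter(anthropic, openai)` wherever one adapter is expected, with Anthropic handling generation and OpenAI handling embeddings.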

## MistralAdapter

```typescript
new MistralAdapter({
  apiKey: string,           // Required: Mistral API key
  model?: string,           // Default: 'mistral-small-latest'
  embeddingModel?: string,  // Default: 'mistral-embed'
  timeoutMs?: number,       // Request timeout in milliseconds
})
```

## OllamaAdapter

```typescript
new OllamaAdapter({
  model: string,            // Required: Model name (e.g., 'llama3.2', 'mistral')
  baseURL?: string,         // Default: 'http://localhost:11434'
  embeddingModel?: string,  // Model for embeddings (e.g., 'nomic-embed-text')
})
```

Ollama runs locally with no API key required. Install it from ollama.ai and pull models with `ollama pull llama3.2`.
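Under the hood the adapter talks to Ollama's local HTTP API: a non-streaming generation call is a POST to `/api/generate` on `baseURL`. A small request builder sketching that shape (the field names follow Ollama's documented REST API; the helper itself is illustrative):

```typescript
// Builds the URL and JSON body for a non-streaming Ollama generation
// request against a local Ollama server.
function buildOllamaRequest(baseURL: string, model: string, prompt: string) {
  return {
    url: baseURL.replace(/\/+$/, '') + '/api/generate',
    body: JSON.stringify({ model, prompt, stream: false }),
  };
}
```

The result can be sent with any HTTP client, e.g. `fetch(url, { method: 'POST', body })`.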

# LLMAdapter Interface

All adapters implement the LLMAdapter interface, ensuring consistent behavior:

```typescript
interface LLMAdapter {
  // Generate text from a prompt
  generate(prompt: string, options?: GenerateOptions): Promise<GenerateResult>;

  // Generate embeddings for one or more texts
  embed(texts: string | string[]): Promise<number[][]>;

  // Multi-turn chat conversation
  chat(messages: ChatMessage[], options?: ChatOptions): Promise<ChatResult>;
}

interface GenerateOptions {
  temperature?: number;     // 0-1, controls randomness
  maxTokens?: number;       // Maximum response length
  systemPrompt?: string;    // System message for context
  stopSequences?: string[]; // Stop generation at these sequences
}

interface GenerateResult {
  content: string;   // Generated text
  usage: TokenUsage; // Token consumption
  latencyMs: number; // Response time
}
```

# Switching Providers

One of Orka AI's core strengths is provider portability. Your application code stays the same regardless of which LLM you use:

```typescript
import { createOrka } from 'orkajs/core';
import { OpenAIAdapter } from 'orkajs/adapters/openai';
import { OllamaAdapter } from 'orkajs/adapters/ollama';
import { MemoryVectorDB } from 'orkajs/adapters/memory';

// Development: free, local
const devLLM = new OllamaAdapter({ model: 'llama3.2' });

// Production: powerful, cloud
const prodLLM = new OpenAIAdapter({
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',
});

// Same application code works with both
const orka = createOrka({
  llm: process.env.NODE_ENV === 'production' ? prodLLM : devLLM,
  vectorDB: new MemoryVectorDB(),
});

// This works identically with any adapter
const answer = await orka.ask({
  question: 'What is TypeScript?',
  knowledge: 'docs',
});
```
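The environment switch can also be factored into a pure function, which keeps the selection logic testable without constructing real adapters. `pickProviderConfig` and its return shape are illustrative, not part of orkajs:

```typescript
type ProviderChoice = { provider: 'openai' | 'ollama'; model: string };

// Pure mapping from NODE_ENV to a provider choice: production gets the
// cloud model, everything else (including undefined) gets the local one.
function pickProviderConfig(nodeEnv: string | undefined): ProviderChoice {
  return nodeEnv === 'production'
    ? { provider: 'openai', model: 'gpt-4o' }
    : { provider: 'ollama', model: 'llama3.2' };
}
```

The returned config can then drive which adapter constructor is called at startup.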

# Model Comparison

| Provider | Best For | Embeddings |
|-----------|----------------------------------------|------------|
| OpenAI | General purpose, best ecosystem | ✅ `text-embedding-3-small` |
| Anthropic | Long context, safety, coding | ❌ (pair with another adapter) |
| Mistral | European, multilingual, cost-effective | ✅ `mistral-embed` |
| Ollama | Local, privacy, development | ✅ (e.g. `nomic-embed-text`) |

⚠️ Anthropic Note

Anthropic doesn't provide an embeddings API. If using AnthropicAdapter as your main LLM, you'll need a separate adapter for embeddings (e.g., OpenAI) when using RAG features.

# Tree-shaking Imports

```typescript
// ✅ Import only the adapters you need
import { OpenAIAdapter } from 'orkajs/adapters/openai';
import { AnthropicAdapter } from 'orkajs/adapters/anthropic';
import { MistralAdapter } from 'orkajs/adapters/mistral';
import { OllamaAdapter } from 'orkajs/adapters/ollama';
```