Observability
Monitor your AI applications with tracing, logging, and custom hooks in Orka AI.
Native Observability & LLMOps
Don't fly blind. Orka JS provides deep introspection into every LLM call, tool execution, and RAG retrieval. Monitor latencies, token usage, and errors with simple, powerful hooks.
```typescript
observability: {
  logLevel: 'info',
  hooks: [{
    onTraceEnd: (trace) => {
      // Direct integration with Datadog/Sentry
      reportMetrics(trace.totalLatencyMs, trace.totalTokens);
    },
    onError: (err, ctx) => alertDevTeam(err),
  }],
}
```
- Token Tracking: Detailed usage per trace for precise cost management.
- Latency Profiling: Identify bottlenecks in your RAG or tool chains instantly.
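The `onTraceEnd` hook above is just a function, so it can be unit-tested without a live backend. Here is a minimal sketch of a hook factory that forwards latency and token counts to any reporter (a Datadog/StatsD client, Sentry breadcrumbs, or a test spy). The `TraceSummary` interface, metric names, and `makeMetricsHook` helper are our own illustrations inferred from the fields used in the example, not part of Orka's API.

```typescript
// Hypothetical trace shape, inferred from the fields used in the hook above.
interface TraceSummary {
  name: string;
  totalLatencyMs: number;
  totalTokens: number;
}

// Build an onTraceEnd hook that forwards two metrics to any reporter function.
function makeMetricsHook(
  report: (metric: string, value: number, tags: Record<string, string>) => void
) {
  return {
    onTraceEnd: (trace: TraceSummary) => {
      report('orka.trace.latency_ms', trace.totalLatencyMs, { trace: trace.name });
      report('orka.trace.tokens', trace.totalTokens, { trace: trace.name });
    },
  };
}
```

Because the reporter is injected, the same hook works unchanged in tests (capture calls in an array) and in production (call your metrics client).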
Configuration
observability-config.ts
```typescript
const orka = createOrka({
  llm: new OpenAIAdapter({ apiKey: '...' }),
  vectorDB: new MemoryVectorAdapter(),
  observability: {
    logLevel: 'info', // 'debug' | 'info' | 'warn' | 'error'
    hooks: [{
      onTraceStart: (trace) => {
        console.log(`🚀 Trace started: ${trace.name}`);
      },
      onTraceEnd: (trace) => {
        console.log(`✅ Done: ${trace.totalLatencyMs}ms, ${trace.totalTokens} tokens`);
      },
      onError: (error, context) => {
        console.error(`❌ Error: ${error.message}`, context);
        // Send to Sentry, Datadog, etc.
      },
    }],
  },
});
```

Log Levels
| Level | Description |
|---|---|
| debug | Detailed internal operations |
| info | Normal operations (default) |
| warn | Potential issues |
| error | Failures only |
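A common pattern is deriving the level from the runtime environment: verbose locally, quiet in production. The helper below is a sketch of our own; the `logLevelFor` name and the environment-to-level mapping are illustrative assumptions, not part of Orka.

```typescript
type LogLevel = 'debug' | 'info' | 'warn' | 'error';

// Hypothetical helper: map NODE_ENV (or similar) to a log level.
// Production stays quiet ('warn'); tests show failures only; everything
// else gets full 'debug' output.
function logLevelFor(env: string | undefined): LogLevel {
  switch (env) {
    case 'production':
      return 'warn';
    case 'test':
      return 'error';
    default:
      return 'debug';
  }
}

// Usage sketch:
//   observability: { logLevel: logLevelFor(process.env.NODE_ENV), ... }
```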
Manual Tracing
tracing.ts
```typescript
// Start a trace
const trace = orka.tracer.startTrace('my-pipeline', {
  userId: 'user-123',
});

// Add events
orka.tracer.addEvent(trace.id, {
  type: 'custom',
  name: 'preprocessing',
  startTime: Date.now(),
  endTime: Date.now() + 50,
});

// End the trace
const completed = orka.tracer.endTrace(trace.id);
console.log(`Total: ${completed?.totalLatencyMs}ms`);
```

💡 Production Tip
In production, set `logLevel: 'warn'` to reduce noise, and use hooks for structured monitoring with Datadog, Sentry, or custom dashboards.
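The start/end calls above pair naturally into a wrapper that guarantees every trace is closed, even when the traced work throws. The sketch below codes against a minimal tracer interface inferred from the calls shown; `MinimalTracer` and `withTrace` are our own illustrative names, not Orka's actual types.

```typescript
// Minimal tracer surface inferred from the snippet above (hypothetical types).
interface MinimalTracer {
  startTrace(name: string, metadata?: Record<string, unknown>): { id: string };
  endTrace(id: string): { totalLatencyMs: number } | undefined;
}

// Run fn inside a trace; endTrace always fires via finally, even on error,
// so failed pipelines still show up with their latency recorded.
async function withTrace<T>(
  tracer: MinimalTracer,
  name: string,
  fn: () => Promise<T>
): Promise<T> {
  const trace = tracer.startTrace(name);
  try {
    return await fn();
  } finally {
    tracer.endTrace(trace.id);
  }
}
```

With a wrapper like this, pipeline code reads as `await withTrace(orka.tracer, 'my-pipeline', run)` and cannot leak an unclosed trace.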