Define and send custom metrics to track your application performance
Metrics are quantitative measurements that provide insights into your application’s performance, user behavior, and business outcomes. They serve as the foundation for data-driven decision making and continuous improvement.
Default metrics are automatically collected for all graphs during execution.
Custom metrics are user-defined measurements that you can create to track specific behaviors or KPIs relevant to your use case.
All metrics can then be visualized in dashboards on Portal.
Metric Dropdown: Metrics appear in the dropdown only after being recorded. If you don’t see any metrics in the dropdown, execute your graph to generate data. You can also enter metric names manually in the selector.
When running experiments, it's important to track how different variants affect key metrics like latency or engagement. Metrics can be tagged with attributes that identify which experiment and variant they are associated with. Below is an example demonstrating how to execute a graph with user context and log relevant metrics along with that context:
```typescript
import { telemetry } from '@inworld/runtime';
import { MetricType } from '@inworld/runtime/telemetry';
import { GraphBuilder, UserContext } from '@inworld/runtime/graph';

// Initialize telemetry
telemetry.init({
  apiKey: process.env.INWORLD_API_KEY, // replace with your API key
  appName: 'ChatApp',
  appVersion: '1.0.0'
});

// Configure metrics
telemetry.configureMetric({
  metricType: MetricType.COUNTER_UINT,
  name: 'chat_interactions_total',
  description: 'Total chat interactions',
  unit: 'interactions'
});

telemetry.configureMetric({
  metricType: MetricType.HISTOGRAM_DOUBLE,
  name: 'response_latency_seconds',
  description: 'Response time distribution',
  unit: 'seconds'
});

// Execute graph with user context and metrics
async function handleUserMessage(userId: string, message: string) {
  const startTime = performance.now();

  // Create user context with targeting information
  const userContext = new UserContext({
    userId: userId,
    userTier: 'premium',
    region: 'us-west'
  }, userId); // targetingKey

  try {
    // Create graph using GraphBuilder
    const myGraph = new GraphBuilder({
      id: 'chat-graph',
      apiKey: process.env.INWORLD_API_KEY,
      enableRemoteConfig: false,
    })
      // Add your nodes and edges here
      .build();

    const outputStream = myGraph.start({ text: message }, userContext);

    // Process the response
    for await (const result of outputStream) {
      // Handle graph output here
    }

    // Record success metrics
    const latency = (performance.now() - startTime) / 1000;
    telemetry.metric.recordCounterUInt('chat_interactions_total', 1, {
      userId: userId,
      userTier: userContext.attributes.userTier,
      status: 'success'
    });
    telemetry.metric.recordHistogramDouble('response_latency_seconds', latency, {
      userTier: userContext.attributes.userTier,
      messageLength: message.length.toString()
    });
  } catch (error: any) {
    // Record error metrics
    const latency = (performance.now() - startTime) / 1000;
    telemetry.metric.recordCounterUInt('chat_interactions_total', 1, {
      userId: userId,
      userTier: userContext.attributes.userTier,
      status: 'error',
      errorType: error.name
    });
    telemetry.metric.recordHistogramDouble('response_latency_seconds', latency, {
      userTier: userContext.attributes.userTier,
      status: 'error'
    });
    throw error;
  }
}
```
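The example above tags metrics with user attributes but does not show experiment tagging. One way to do this is to build the attribute map once per measurement so every recorded metric carries the same experiment context. This is a minimal sketch; the `experimentId` and `variant` attribute names are illustrative choices, not keys defined by the SDK:

```typescript
// Build a metric attribute map that tags a measurement with the
// experiment and variant it belongs to, merged with any extra tags
// (user tier, region, status, ...). Plain helper; not part of the SDK.
function experimentAttributes(
  experimentId: string,
  variant: string,
  extra: Record<string, string> = {}
): Record<string, string> {
  return { experimentId, variant, ...extra };
}

const attrs = experimentAttributes('onboarding-flow-test', 'variant-b', {
  userTier: 'premium',
  status: 'success'
});

// The map can then be passed as the attributes argument when recording, e.g.:
// telemetry.metric.recordCounterUInt('chat_interactions_total', 1, attrs);
```

Centralizing attribute construction like this keeps experiment tags consistent across counters and histograms, so dashboards can slice every metric by the same `experimentId`/`variant` pair.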
This approach enables you to:
Pass user context: UserContext provides targeting information for graph execution
Track performance by user segments: Use user attributes (tier, region) in metric tags
Measure real latency: Track actual response times under real conditions
Monitor errors: Record both success and failure metrics with context
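The latency measurement in the example can be factored into a small reusable helper, so every operation is timed the same way and reported in the seconds unit that `response_latency_seconds` expects. A generic sketch (not part of the Inworld SDK):

```typescript
// Run an async operation and return its result together with the
// elapsed time in seconds, matching the histogram's declared unit.
async function timed<T>(fn: () => Promise<T>): Promise<[T, number]> {
  const start = performance.now();
  const result = await fn();
  return [result, (performance.now() - start) / 1000];
}

// Usage: time the graph call, then record the latency, e.g.:
// const [reply, latencySeconds] = await timed(() => handleUserMessage(id, msg));
// telemetry.metric.recordHistogramDouble('response_latency_seconds', latencySeconds, tags);
```

Note that this helper only times the success path; to include failures (as the full example does), measure in a `finally` block or record the duration inside the `catch` before rethrowing.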