Experiments
Run A/B experiments without code deployment using our Experiments system (an illustrative variant-selection sketch follows this list):
- Variant Management: Upload different graph configurations and test multiple variants simultaneously
- Targeting Rules: Control which users experience which variants based on user attributes
- Traffic Distribution: Gradually roll out changes with precise traffic allocation controls
- Instant Deployment: Switch between models, prompts, and graph configurations in real-time
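The experiment schema itself is managed in the platform, so the sketch below is only an assumption of how the concepts fit together: each variant points at an uploaded graph configuration, targeting rules filter users by attributes, and traffic shares control the rollout. The type names, `isEligible`, and `pickVariant` are hypothetical and not part of the Experiments API.

```typescript
// Illustrative only: these types and the selection logic are assumptions,
// not the actual Experiments API.
interface Variant {
  name: string;
  graphConfig: string;      // identifier of an uploaded graph configuration
  trafficShare: number;     // fraction of eligible traffic, 0..1
}

interface TargetingRule {
  attribute: string;        // e.g. "locale", "plan"
  allowedValues: string[];
}

interface Experiment {
  variants: Variant[];
  targeting: TargetingRule[];
}

type UserAttributes = Record<string, string>;

// A user is eligible only if every targeting rule matches one of their attributes.
function isEligible(user: UserAttributes, rules: TargetingRule[]): boolean {
  return rules.every((r) => r.allowedValues.includes(user[r.attribute] ?? ""));
}

// Deterministic bucketing: hash the user id into [0, 1) and walk the cumulative
// traffic shares so each user always lands on the same variant.
function pickVariant(userId: string, exp: Experiment): Variant | null {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  const bucket = hash / 0xffffffff;
  let cumulative = 0;
  for (const v of exp.variants) {
    cumulative += v.trafficShare;
    if (bucket < cumulative) return v;
  }
  return null; // user falls outside the allocated traffic (e.g. a partial rollout)
}

// Example: 90/10 split, restricted to English-locale users.
const experiment: Experiment = {
  targeting: [{ attribute: "locale", allowedValues: ["en-US", "en-GB"] }],
  variants: [
    { name: "control", graphConfig: "graph-v1", trafficShare: 0.9 },
    { name: "new-prompt", graphConfig: "graph-v2", trafficShare: 0.1 },
  ],
};

const user: UserAttributes = { locale: "en-US" };
if (isEligible(user, experiment.targeting)) {
  console.log(pickVariant("user-123", experiment)?.name);
}
```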
Dashboards
Monitor your application performance and experiment results with comprehensive dashboards (a custom-metric sketch follows this list):
- Default Metrics: Track essential performance indicators like graph execution counts, error rates, and latency percentiles
- Custom Metrics: Define and visualize application-specific metrics using our telemetry system
- Real-time Monitoring: Get instant visibility into your application's health and experiment performance
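The telemetry interface isn't specified here, so this sketch assumes an OpenTelemetry-compatible metrics pipeline; the meter name and the metric names (`fallback_responses_total`, `retrieval_latency_ms`) are examples rather than metrics the platform defines for you.

```typescript
// Sketch assuming an OpenTelemetry-compatible metrics pipeline; names are
// illustrative examples, not platform-defined metrics.
import { metrics } from "@opentelemetry/api";

const meter = metrics.getMeter("my-app");

// Counter for an application-specific event, e.g. how often a fallback
// response was served instead of a model-generated one.
const fallbackCounter = meter.createCounter("fallback_responses_total", {
  description: "Number of requests answered by the fallback path",
});

// Histogram for a latency you care about beyond the default percentiles.
const retrievalLatency = meter.createHistogram("retrieval_latency_ms", {
  description: "Time spent in the retrieval step",
  unit: "ms",
});

export function recordRetrieval(durationMs: number, usedFallback: boolean) {
  retrievalLatency.record(durationMs, { stage: "retrieval" });
  if (usedFallback) fallbackCounter.add(1, { stage: "retrieval" });
}
```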
Logs & Traces
Monitor and analyze AI interactions in real time with our advanced logging and tracing systems (a tracing sketch follows this list):
- Logs: Review historical data to monitor errors and debug issues.
- Traces: Understand the flow of your application with detailed execution traces. Use them to identify latency bottlenecks and debug issues when they arise.
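As a sketch of how a single step of the request flow could be wrapped so it shows up as a distinct segment in the trace timeline, the example below assumes an OpenTelemetry-compatible tracer; the `withSpan` helper and the span names are illustrative, not part of the platform SDK.

```typescript
// Sketch assuming OpenTelemetry-compatible tracing; the helper and span
// names are illustrative, not required by the platform.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("my-app");

// Wrap one step of the request flow in a span so slow stages appear as
// separate segments in the trace and failures are recorded on the span.
export async function withSpan<T>(name: string, fn: () => Promise<T>): Promise<T> {
  return tracer.startActiveSpan(name, async (span) => {
    try {
      const result = await fn();
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}

// Usage: withSpan("retrieve-context", () => fetchContext(query));
```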
Prompt & Model Tuning
Refine AI behavior through our interactive testing and prompt iteration tools (an illustrative comparison sketch follows this list):
- LLM Playground: Test language models with various prompts and variables. Compare outputs across different configurations to identify the optimal settings.
- TTS Playground: Generate AI-powered speech with customizable parameters for speed and emotional tone.
- Model Store: Browse and access the latest AI models available through Inworld, with detailed information about capabilities, performance characteristics, and recommended use cases.
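Playground iteration happens in the UI, but the same compare-across-configurations loop can be sketched in code. The harness below is hypothetical: `generate` stands in for whatever model client you use (it is not an SDK call from this platform), and the `{{variable}}` template syntax is just an example.

```typescript
// Illustrative harness in the spirit of the LLM Playground: fill a prompt
// template with variables, then compare outputs across configurations.
// `generate` is a placeholder for your own model client.
interface ModelConfig {
  model: string;        // e.g. an identifier chosen from the Model Store
  temperature: number;
}

type Generate = (config: ModelConfig, prompt: string) => Promise<string>;

// Substitute {{variable}} placeholders with their values.
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? "");
}

export async function compareConfigs(
  generate: Generate,
  configs: ModelConfig[],
  template: string,
  vars: Record<string, string>,
): Promise<void> {
  const prompt = renderPrompt(template, vars);
  for (const config of configs) {
    const output = await generate(config, prompt);
    console.log(`[${config.model} @ t=${config.temperature}]\n${output}\n`);
  }
}

// Example: the same greeting prompt run against two candidate configurations.
// compareConfigs(generate, [
//   { model: "model-a", temperature: 0.2 },
//   { model: "model-b", temperature: 0.7 },
// ], "Greet {{name}} in one sentence.", { name: "Ada" });
```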