Understanding Variants

A variant is a named version of your graph that you can:
  • Deploy independently while sharing the same endpoint
  • A/B test against other variants
  • Roll back to if needed
  • Track performance metrics for

Variant Use Cases

Model Experimentation:
  • Test GPT-4 vs Claude vs Gemini
  • Compare different model parameters (temperature, token limits)
  • Evaluate custom vs standard model configurations
Prompt Optimization:
  • A/B test different system prompts
  • Test various conversation styles
  • Optimize for specific use cases or user segments
Graph Architecture:
  • Test different node configurations
  • Compare processing pipelines
  • Optimize for performance vs quality trade-offs

Working with Variants

Prerequisites

Before working with variants, ensure your graph is properly configured for experimentation:
// CRITICAL: Enable remote config in your graph code
const graph = new GraphBuilder({
  id: 'my-graph-id',  // Must match CLI deployment
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: true, // Required for Portal experiments
})
  .addNode(llmNode)
  .setStartNode(llmNode)
  .setEndNode(llmNode)
  .build();
Without enableRemoteConfig: true, your deployed graph will ignore Portal-configured variants and always use local configuration, making A/B testing impossible.

Variant Workflow

  1. Preview your graph structure
inworld graph variant print ./graph.ts
This shows the JSON representation of your graph before registering it as a variant.
  2. Deploy your graph (if not already deployed)
inworld deploy ./graph.ts
  3. Register a variant in Portal
inworld graph variant register -d [variant-name] ./graph.ts

Variant Examples & Use Cases

Create a Baseline Variant

inworld graph variant register -d baseline ./graph.ts
This creates your control group - the current version that new variants will be compared against.

Register an Experimental Variant

inworld graph variant register -d experiment-v2 ./graph-v2.ts
Create a new version with different configuration (model, prompts, parameters) to test against the baseline.
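For example, graph-v2.ts might keep the same structure as graph.ts and change only the LLM configuration. A minimal sketch, reusing the GraphBuilder setup shown earlier; the createLLMNode helper and its options are illustrative stand-ins, not the runtime's actual node API:
// graph-v2.ts - experimental variant: same graph shape, different LLM config
// NOTE: createLLMNode is a hypothetical helper standing in for however your
// project constructs its LLM node; only the GraphBuilder calls below match
// the snippet shown earlier in this guide.
const llmNode = createLLMNode({
  model: 'claude-sonnet',   // baseline might use a different model
  temperature: 0.9,         // experimental: higher temperature
  systemPrompt: 'You are a playful, creative assistant.',
});

const graph = new GraphBuilder({
  id: 'my-graph-id',            // same id as the baseline graph
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: true,     // still required for Portal experiments
})
  .addNode(llmNode)
  .setStartNode(llmNode)
  .setEndNode(llmNode)
  .build();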

Version-based Variants

inworld graph variant register -d v1.2.0 ./graph.ts
Use semantic versioning for organized variant management and easy rollbacks.

Update Existing Variant

# Force update if variant already exists
inworld graph variant register -d baseline ./graph.ts --force

Variant Management

List All Variants

inworld graph variant list ./graph.ts
Note: The CLI registers and lists variants; traffic distribution and variant activation are managed through the Inworld Portal web interface, not through CLI commands.

Graph Registry Integration

To enable A/B testing with your CLI-deployed graphs, you must set up Graph Registry integration:

Portal Setup

  1. Navigate to Graph Registry in Portal
  2. Click Register Graph
  3. Enter your graph ID (same as in CLI: my-graph-id)
  4. Click Register
Once registered, your CLI variants will appear in the Portal interface where you can:
  • Create targeting rules based on user attributes
  • Set traffic distribution percentages
  • Enable/disable experiments
  • Monitor performance metrics

CLI Variant Registration

# Register variants via CLI
inworld graph variant register -d baseline ./graph.ts
inworld graph variant register -d experiment-v1 ./improved-graph.ts

User Context for Proper Experimentation

For experiments to work correctly, you must pass UserContext with unique targeting keys. The CLI’s advanced input format supports this:
# CORRECT: Include UserContext for proper A/B testing
inworld run ./graph.ts '{
  "input": {"message": "Hello"},
  "userContext": {
    "targetingKey": "user123",
    "attributes": {
      "country": "US", 
      "tier": "premium",
      "app_version": "2.1.0"
    }
  }
}'
Common Mistake: Without unique targeting keys, all users get the same variant regardless of your Portal traffic splits. Always include userContext.targetingKey with a unique user identifier.
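To see why the targeting key matters: variant assignment typically hashes the key into a traffic bucket, so the same key always lands on the same variant. The sketch below illustrates the general mechanism only; it is not Inworld's actual assignment algorithm:
// Illustrative only: how deterministic traffic bucketing generally works.
// A stable hash of the targeting key picks a bucket in [0, 100).
function bucketFor(targetingKey: string): number {
  let hash = 0;
  for (const ch of targetingKey) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple stable string hash
  }
  return hash % 100;
}

// With a 20/80 split, a given user consistently gets the same variant:
const variant = bucketFor('user123') < 20 ? 'experiment-v2' : 'baseline';

// Without a unique key (e.g. every request sends the same value), all
// traffic hashes to one bucket and the Portal split has no effect.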

UserContext Best Practices

Required for A/B Testing:
// In your application code
const userContext = new UserContext({
  tier: user.tier,
  country: user.country,
  app_version: "2.1.0"
}, user.id); // Unique targeting key

graph.start(input, userContext);
Targeting Attributes Examples:
  • User Demographics: age, country, language
  • Subscription Info: tier (free, premium), plan_type
  • Technical: app_version, platform (web, mobile)
  • Behavioral: usage_frequency, feature_usage
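Putting several of these categories together; the field names below are illustrative, since targeting attributes are free-form key-value pairs you define yourself:
// Illustrative attribute set drawn from the categories above.
const userContext = new UserContext({
  // User demographics
  country: user.country,
  language: user.language,
  // Subscription info
  tier: user.tier,              // e.g. 'free' | 'premium'
  // Technical
  app_version: '2.1.0',
  platform: 'web',
  // Behavioral
  usage_frequency: user.weeklySessionCount > 5 ? 'high' : 'low',
}, user.id); // unique targeting key, as before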

Experimentation Workflow

Complete A/B Testing Process

  1. Deploy initial graph → Creates persistent endpoint using CLI
  2. Register as baseline variant → Use CLI to register initial version
  3. Create experimental changes → Modify your graph locally
  4. Register experimental variant → Use CLI to register new version
  5. Manage traffic in Portal → Configure A/B testing splits in web interface
  6. Monitor results in Portal → View performance metrics and user feedback
  7. Iterate with CLI → Register new variants as you develop
  8. Scale via Portal → Adjust traffic distribution to winning variants
This entire process happens transparently - your clients continue using the same endpoint while you optimize the experience behind the scenes.
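The CLI half of that workflow, using only the commands covered above:
# 1. Deploy the initial graph (creates the persistent endpoint)
inworld deploy ./graph.ts

# 2. Register the current version as the baseline variant
inworld graph variant register -d baseline ./graph.ts

# 3-4. After making experimental changes locally, register the new version
inworld graph variant register -d experiment-v2 ./graph-v2.ts

# 5-6. Configure traffic splits and monitor results in Portal (web UI)

# 7. Iterate: update an existing variant in place
inworld graph variant register -d experiment-v2 ./graph-v2.ts --force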

Experiment Design Best Practices

Setup Phase:
  • Calculate sample size upfront - Use Power Analysis calculator with baseline metrics, desired MDE (typically 2-5%), α = 0.05, and power = 0.80 (see the sketch after this list)
  • Create variants with consistent naming - Use naming conventions like model-prompt-tools (e.g., “GPT5-Creative-Memory” vs “Claude4Sonnet-Analytical-RAG”)
  • Always pass UserContext with targeting keys - Use unique user IDs as targeting keys to ensure consistent variant assignment
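As a rough guide to the sample-size step, here is a standard two-proportion power calculation; this is generic statistics, not part of the Inworld CLI or SDK:
// Generic two-proportion sample-size estimate (users per variant).
// Standard textbook formula; not an Inworld API.
function sampleSizePerArm(
  baselineRate: number,  // e.g. 0.10 = 10% conversion
  mde: number,           // minimum detectable effect, absolute (e.g. 0.02)
  zAlpha = 1.96,         // two-sided alpha = 0.05
  zBeta = 0.84,          // power = 0.80
): number {
  const p1 = baselineRate;
  const p2 = baselineRate + mde;
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(numerator ** 2 / mde ** 2);
}

// 10% baseline rate, 2% absolute MDE -> roughly 3,800 users per variant.
console.log(sampleSizePerArm(0.10, 0.02));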
Running Phase:
  • Start with small traffic allocation - Use 10-20% traffic to validate setup, then scale to reach calculated sample size
  • Use rule ordering strategically - Put specific targeting rules (premium users, specific regions) at the top since rules evaluate top-to-bottom
  • Monitor via Portal dashboards - Watch custom metrics alongside default metrics (Graph Executions Total, P99 Latency)
Analysis Phase:
  • Wait for statistical significance before making decisions
  • Use gradual rollout - Deploy winners by increasing traffic allocation (50/50 → 70/30 → 90/10)
  • Clean up experiments properly - Disable and delete old rules to keep Graph Registry organized

Production Variant Management

Scaling Winning Variants

Once you identify a winning variant:
  1. Gradual Rollout: Increase traffic to winner gradually (30% → 50% → 70% → 90%)
  2. Monitor Metrics: Watch for any degradation during rollout
  3. Full Migration: Set winning variant as default for new users
  4. Cleanup: Retire losing variants and clean up Portal rules

Variant Lifecycle

Development:
  • Create feature branches for experimental variants
  • Test locally before registering variants
  • Use descriptive naming conventions
Testing:
  • Register variants with CLI
  • Configure experiments in Portal
  • Monitor performance and user feedback
Production:
  • Scale winning variants gradually
  • Archive losing variants for future reference
  • Document learnings and results

Long-term Management

Variant Hygiene:
  • Regularly review and clean up old variants
  • Archive successful experiments for reference
  • Maintain naming conventions and documentation
  • Set up alerts for experiment performance
Continuous Optimization:
  • Plan regular experimentation cycles
  • Test incremental improvements continuously
  • Use learnings to inform future variant development
  • Build experimentation into your development process

Next Steps

With graph variants set up:
  1. Advanced Integration - Complete Portal integration and monitoring setup
  2. Portal Graph Registry - Learn advanced experiment configuration
  3. A/B Experiment Best Practices - Deep dive into experimentation methodology

Having Issues?

For setup, basic CLI, or experiment problems, see the CLI Troubleshooting Guide, which covers common issues as well as A/B testing and variant-specific problems. Your variant system is now ready for sophisticated A/B testing and continuous optimization of your AI graphs!