
Advanced CLI Integration

The CLI is just the beginning of a powerful deployment and experimentation ecosystem. Here’s how CLI deployment integrates with Inworld Portal for complete graph lifecycle management.

Complete Deployment + Experimentation Pipeline

1. CLI Development & Deployment
  • Develop and test locally with inworld serve
  • Deploy to cloud with inworld deploy
  • Register variants with inworld graph variant register
2. Portal Experimentation
  • Configure traffic splits in Graph Registry
  • Set targeting rules based on user attributes
  • Enable experiments for live traffic
3. Portal Monitoring
  • Track variant performance in dashboards and traces
4. CLI Iteration
  • Create new variants based on experiment results
  • Deploy winning variants
  • Repeat the cycle

Observability Integration

CLI-deployed graphs automatically integrate with Portal’s observability tools:

Automatic Metrics Collection

  • Default metrics: Execution count, latency, token usage automatically collected
  • Custom metrics: Add your own business metrics using telemetry API
  • Dashboard visualization: View trends and performance in Portal

Tracing and Debugging

  • Execution traces: Every graph execution creates detailed traces
  • Error tracking: Failed executions highlighted with error details
  • Performance analysis: Identify bottlenecks across graph nodes

Setup for Maximum Observability

import { telemetry, MetricType } from '@inworld/runtime';

// Initialize telemetry (if not using defaults)
telemetry.init({
  apiKey: process.env.INWORLD_API_KEY,
  appName: 'my-cli-app',
  appVersion: '1.0.0'
});

// Add custom success metrics
telemetry.configureMetric({
  metricType: MetricType.COUNTER_UINT,
  name: 'successful_interactions',
  description: 'Count of successful user interactions'
});
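A counter is only as useful as the values fed into it. Before wiring a metric into telemetry, it can help to prototype the tally in plain TypeScript; a minimal sketch, independent of the runtime SDK (the class name is illustrative, not part of the runtime API):

```typescript
// Local tally of interaction outcomes, useful for prototyping a metric
// before reporting it through a configured counter like 'successful_interactions'.
class InteractionTally {
  private successes = 0;
  private failures = 0;

  record(ok: boolean): void {
    if (ok) this.successes++;
    else this.failures++;
  }

  get total(): number {
    return this.successes + this.failures;
  }

  // Success rate in [0, 1]; 0 when nothing has been recorded yet.
  get successRate(): number {
    return this.total === 0 ? 0 : this.successes / this.total;
  }
}
```

Once the tally behaves as expected, each `record(true)` call is the point where you would also increment the configured counter.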

Playground Integration

Before deploying to production, use Inworld Portal’s Playground to test and refine your graph components:

Pre-Deployment Testing Workflow

  1. LLM Playground: Test prompts, models, and parameters
    • Experiment with different language models
    • Test system prompts and user messages
    • Use variables for dynamic content testing
    • Validate responses before implementing in code
  2. TTS Playground: Test voice synthesis (for LLM+TTS template users)
    • Try different AI voices for your use case
    • Adjust speech parameters (speed, tone)
    • Generate audio samples for quality validation
    • Test text-to-speech integration
  3. Export to CLI: Once satisfied with Playground results
    # Create graphs based on Playground testing
    inworld init --template llm_tts --name playground-validated-graph
    
    # Use tested parameters in your graph configuration
    # Deploy with confidence
    inworld deploy ./graph.ts
    
Benefits:
  • Risk Reduction: Validate approach before development
  • Faster Iteration: Test ideas quickly without code changes
  • Parameter Optimization: Find optimal settings interactively
  • Voice Selection: Choose best voices for your template needs

Complete End-to-End Workflow

Here’s the complete workflow from initial development to production monitoring and optimization:

Phase 1: Development & Validation

# 1. Set up workspace
mkdir my-ai-project && cd my-ai-project

# 2. Install and authenticate CLI
npm install -g @inworld/cli  # add sudo only if your npm setup requires it
inworld login
inworld auth status

# 3. Initialize project with validated template
inworld init --template llm_tts --name production-graph
cd production-graph && npm install

Portal Validation:
  • Use LLM Playground to test prompts and models
  • Use TTS Playground to select optimal voices (if using LLM+TTS template)
  • Validate core functionality before development

Phase 2: Local Development & Testing

# 4. Test locally with simple input
inworld run ./graph.ts '{"input": "hello world"}'

# 5. Test with complex user context (for future experiments)
inworld run ./graph.ts '{
  "input": {"message": "hello"}, 
  "userContext": {"targetingKey": "test-user", "attributes": {"tier": "premium"}}
}'

# 6. Start local server for integration testing
inworld serve ./graph.ts --swagger --port 3000

# 7. Generate visualization for documentation
inworld graph visualize ./graph.ts --output ./docs/graph-architecture.png
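The JSON payloads passed to `inworld run` in steps 4–5 can be generated programmatically rather than hand-typed, which avoids shell-quoting mistakes. A sketch whose field names simply mirror the example payloads above (your graph's actual input schema may differ):

```typescript
// Shape of the user context used for experiment targeting (mirrors step 5).
interface UserContextPayload {
  targetingKey: string;
  attributes: Record<string, string>;
}

interface RunPayload {
  input: { message: string };
  userContext?: UserContextPayload;
}

// Serialize a payload for use as the CLI argument, e.g.:
//   inworld run ./graph.ts "$(node build-payload.js)"
function buildRunPayload(message: string, user?: UserContextPayload): string {
  const payload: RunPayload = { input: { message } };
  if (user) payload.userContext = user;
  return JSON.stringify(payload);
}
```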

Phase 3: Cloud Deployment & Graph Registry

# 8. Deploy to Inworld Cloud (creates persistent endpoint)
inworld deploy ./graph.ts

# 9. Register baseline variant for experimentation
inworld graph variant register -d baseline ./graph.ts

Code Update for Experiments:
// CRITICAL: Enable remote config in your graph code
const graph = new GraphBuilder({
  id: 'production-graph',  // Must match CLI deployment
  apiKey: process.env.INWORLD_API_KEY,
  enableRemoteConfig: true, // Required for Portal experiments
});

Portal Setup:
  1. Navigate to Graph Registry
  2. Register your graph ID: production-graph
  3. Verify baseline variant appears

Phase 4: Experimentation & Optimization

CLI: Create Experimental Variants
# 10. Create improved version locally
# (modify prompts, models, parameters)

# 11. Register experimental variant
inworld graph variant register -d experiment-v1 ./improved-graph.ts

# 12. Force update variants as needed
inworld graph variant register -d experiment-v1 ./improved-graph.ts --force

Portal: Configure A/B Test
  1. Create targeting rules in Graph Registry
  2. Set traffic distribution (70% baseline, 30% experiment-v1)
  3. Enable experiment
  4. Ensure proper user context in production:
    // In your application code
    const userContext = new UserContext({
      tier: user.tier,
      country: user.country,
      app_version: "2.1.0"
    }, user.id); // Unique targeting key
    
    graph.start(input, userContext);
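A stable `targetingKey` matters because variant assignment must be deterministic: the same user should land in the same bucket on every request. Portal performs the real assignment server-side; the sketch below only illustrates the principle with a hash-based 70/30 split matching the distribution above:

```typescript
// Deterministic 70/30 bucketing by targeting key, using an FNV-1a hash.
// Illustrative only -- Portal's actual assignment logic is server-side.
function bucketFor(targetingKey: string): 'baseline' | 'experiment-v1' {
  let hash = 0x811c9dc5;
  for (let i = 0; i < targetingKey.length; i++) {
    hash ^= targetingKey.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  // Same key always hashes to the same bucket; ~70% of keys fall under 70.
  return hash % 100 < 70 ? 'baseline' : 'experiment-v1';
}
```

This is also why the targeting key should be a stable user ID rather than a session ID: a key that changes between requests would silently reshuffle users across variants.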
    

Phase 5: Monitoring & Analysis

Portal Dashboards:
  1. Monitor default metrics: execution count, latency, errors
  2. Create custom dashboards for business metrics
  3. Track experiment performance by variant
Debugging Tools:
  • Traces: Analyze execution flow and performance bottlenecks
  • Logs: Debug errors and unexpected behavior
  • Metrics: Compare variant performance in real-time

Phase 6: Optimization & Iteration

Based on experiment results:
# 13. Create new variants based on winning patterns
inworld graph variant register -d v2-optimized ./optimized-graph.ts

# 14. Test and visualize optimized features
inworld graph visualize ./optimized-graph.ts --output ./analysis/v2-flow.png

Portal Management:
  1. Gradually increase traffic to winning variants
  2. Retire underperforming variants
  3. Update default variant for new users
  4. Plan next iteration cycle

Phase 7: Production Scaling

# 15. Deploy winning variant as new baseline
inworld graph variant register -d production-v2 ./winning-variant.ts

# 16. Monitor production health
inworld auth status  # Verify credentials  
# Check Portal dashboards for system health
# Set up alerts for error rates or latency spikes

Continuous Improvement:
  • Regular Portal dashboard reviews
  • Monthly experiment planning based on metrics
  • Quarterly graph architecture optimization
  • Regular template and model optimization

Success Metrics

Technical Health:
  • P99 latency < 2 seconds
  • Error rate < 0.1%
  • 99.9% uptime
Experiment Velocity:
  • Weekly variant deployments via CLI
  • Monthly A/B test conclusions
  • Quarterly major feature rollouts
Business Impact:
  • Custom metrics tracking business KPIs
  • User engagement improvements
  • Cost optimization through model efficiency
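The P99 latency target can be verified directly from raw latency samples exported from your metrics backend. A minimal nearest-rank sketch (the 2-second threshold mirrors the target above):

```typescript
// Nearest-rank percentile: p in (0, 100], samples in any order.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Flag a violation of the "P99 latency < 2 seconds" target.
const exceedsTarget = (latenciesMs: number[]) =>
  percentile(latenciesMs, 99) >= 2000;
```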

This end-to-end workflow ties CLI deployment into Portal’s experimentation and monitoring capabilities, supporting continuous optimization and reliable production operation.

Production Best Practices

Monitoring & Alerting

Set up alerts for:
  • Error rate > 1%
  • P99 latency > 5 seconds
  • Deployment failures
  • Variant performance degradation
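These alert conditions can be expressed as a small evaluation function run against aggregated stats from your monitoring backend; a sketch (the `HealthStats` shape is illustrative, not a Portal API):

```typescript
interface HealthStats {
  errorRate: number;     // fraction in [0, 1]
  p99LatencyMs: number;  // milliseconds
}

// Returns the list of fired alerts; thresholds mirror the bullets above.
function checkAlerts(stats: HealthStats): string[] {
  const alerts: string[] = [];
  if (stats.errorRate > 0.01) {
    alerts.push(`error rate ${(stats.errorRate * 100).toFixed(2)}% > 1%`);
  }
  if (stats.p99LatencyMs > 5000) {
    alerts.push(`p99 latency ${stats.p99LatencyMs}ms > 5000ms`);
  }
  return alerts;
}
```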
Regular monitoring:
  • Daily dashboard reviews
  • Weekly experiment performance analysis
  • Monthly architecture optimization review
  • Quarterly business impact assessment

Development Workflow

Recommended Git workflow:
# Feature branch for new variants
git checkout -b experiment/new-llm-model

# Develop and test locally
inworld serve ./graph.ts --swagger
inworld run ./graph.ts '{"input": "test"}'

# Deploy and register variant
inworld deploy ./graph.ts
inworld graph variant register -d experiment-new-model ./graph.ts

# Merge after successful experiment
git checkout main && git merge experiment/new-llm-model

CI/CD Integration:
  • Automated testing with inworld run in CI pipeline
  • Staging deployment validation
  • Production deployment with monitoring
  • Automated variant registration for approved changes

Security Considerations

API Key Management:
  • Use environment-specific keys
  • Rotate keys regularly
  • Never commit keys to version control
  • Use secure secret management systems
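One way to enforce the never-commit-keys rule is to read the key from the environment and fail fast at startup; a minimal sketch:

```typescript
// Read a required secret from the environment; throws at startup if absent,
// so a missing or misconfigured key is caught before any traffic is served.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: const apiKey = requireEnv('INWORLD_API_KEY');
```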
Access Control:
  • Limit CLI access to authorized developers
  • Use role-based permissions in Portal
  • Monitor CLI usage and deployments
  • Regular access reviews and cleanup

Advanced CLI Tips

Productivity Shortcuts

Aliases for common commands:
# Add to ~/.bashrc or ~/.zshrc
alias iw="inworld"
alias iws="inworld serve"
alias iwrun="inworld run"
alias iwdeploy="inworld deploy"
alias iwstatus="inworld auth status"

Environment-specific configs:
# Development
export INWORLD_API_KEY=dev_key_here
alias iwdev="inworld --workspace dev-workspace"

# Production  
export INWORLD_API_KEY=prod_key_here
alias iwprod="inworld --workspace prod-workspace"

Bulk Operations

Deploy multiple graphs:
# Script for bulk deployment
for graph in ./graphs/*.ts; do
  echo "Deploying $graph"
  inworld deploy "$graph"
  inworld graph variant register -d "$(basename "$graph" .ts)" "$graph"
done

Batch variant management:
# List all variants across graphs
find . -name "*.ts" -exec inworld graph variant list {} \;

# Bulk variant updates
inworld graph variant register -d baseline ./graph.ts --force
inworld graph variant register -d canary ./graph.ts --force

Getting Help

Community & Support

When encountering issues:
  1. For all troubleshooting: Check the comprehensive CLI Troubleshooting Guide covering setup, development, and production issues
  2. For workflows and best practices: Review Portal logs/traces and the guidance in this advanced integration guide
  3. Use inworld auth status and --info flags for debugging
  4. Check for CLI updates: compare inworld --version with latest releases

CLI Help:
inworld help                    # General help
inworld [command] --help        # Command-specific help
inworld auth help              # Authentication commands
inworld graph help             # Graph management commands  
inworld workspace help         # Workspace management commands
Your CLI is now configured for advanced production workflows with comprehensive monitoring, experimentation, and optimization capabilities!