This quickstart guide shows you how to set up a simple LLM-to-TTS conversational pipeline using the Inworld Node.js Runtime in just a few minutes. You will use the Inworld CLI to bootstrap the project from a template and fast-track development. The pipeline is powered by an LLM for generating the response and Inworld's TTS model for generating the audio output. After completing this guide, you will have a conversational pipeline that you can serve locally.

Inworld CLI Overview

The Inworld CLI enables local development, testing, and deployment of AI graphs. Get started with installation, authentication, and your first graph project.

Key Capabilities

  • Local Development: Create and test graphs in your development environment
  • Project Templates: Bootstrap projects with production-ready templates (including LLM+TTS samples)
  • Authentication: Secure access to Inworld services
  • Local Servers: HTTP and gRPC servers for integration testing
  • Graph Management: Visualization, deployment, and variant management

Prerequisites

Before you get started, make sure you are on one of the supported platforms:
  • macOS (arm64)
  • Linux (x64)
  • Windows (x64)
You will also need Node.js with npm available, since the CLI is installed from the npm registry.

Setup

1. Install CLI

Create a new test folder and install the Inworld CLI globally.
sudo npm install -g @inworld/cli
Verify the installation by checking available commands:
inworld help
2. Authenticate with Inworld Platform

Log into your Inworld account to access platform services.
inworld login
Verify your authentication status:
inworld status 
3. Create Your First Project

The inworld init command downloads the llm-to-tts-node template, a production-ready LLM-to-TTS pipeline that uses GPT-4o-mini (by default) for text generation and Inworld TTS for audio.
Currently, only the llm-to-tts-node template is available via the CLI. To view all available templates, visit inworld.ai/templates.
inworld init --template llm-to-tts-node --name my-project
4. Install Dependencies

Navigate to your project directory and install the required Node.js dependencies.
cd my-project  # Or your project name
npm install
Your CLI setup is now complete! You can start testing your graph locally.

Local Development & Testing

Run your graph locally with inworld run, passing the input as a JSON string:
# Basic input
inworld run ./graph.ts '{"input":"test"}' 

# Complex structured input format
inworld run ./graph.ts '{
"input": {
    "message": "Hello world",
    "context": "greeting",
    "urgency": "low"
}
}'

# Advanced input with execution context
inworld run ./graph.ts '{
    "input": {"message": "Hello world"},
    "executionId": "123e4567-e89b-12d3-a456-426614174000",
    "userContext": {"targetingKey": "user123", "attributes": {"tier": "premium"}},
    "dataStoreContent": {"sessionData": "value"}
}'
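All three invocations pass the payload as a single JSON string. If you script these calls, a small helper can assemble and validate the payload before shelling out to the CLI. The sketch below is hypothetical (the helper is not part of the CLI); only the field names mirror the examples above:

```typescript
// Hypothetical helper that assembles the JSON argument passed to `inworld run`.
// Field names mirror the quickstart examples; validation is deliberately minimal.
interface RunPayload {
  input: string | Record<string, unknown>;
  executionId?: string;
  userContext?: { targetingKey: string; attributes?: Record<string, unknown> };
  dataStoreContent?: Record<string, unknown>;
}

function buildRunArg(payload: RunPayload): string {
  if (payload.input === undefined || payload.input === null) {
    throw new Error("payload.input is required");
  }
  return JSON.stringify(payload);
}

// The "advanced" payload from above, serialized for the shell:
const arg = buildRunArg({
  input: { message: "Hello world" },
  executionId: "123e4567-e89b-12d3-a456-426614174000",
  userContext: { targetingKey: "user123", attributes: { tier: "premium" } },
});
```

From a script you could then pass the resulting string as the second argument to inworld run ./graph.ts.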

Local Servers

Use inworld serve to expose your graph through a local server for integration testing. Available transports:
  • http (default) - REST API with optional Swagger UI
  • grpc - gRPC server for high-performance applications
# Serve on HTTP Server with Swagger
inworld serve ./graph.ts --swagger  

# Serve on custom port
inworld serve ./graph.ts --port 8080

# Serve on custom host
inworld serve ./graph.ts --host 0.0.0.0 --port 3000

# Combine options
inworld serve ./graph.ts --host 0.0.0.0 --port 8080 --swagger

# gRPC server for high-performance applications
inworld serve ./graph.ts --transport grpc 

Test HTTP Requests

With the server running, post a request to the graph:start endpoint:
curl -X POST http://localhost:3000/v1/graph:start -H "Content-Type: application/json" -d '{"input": "Hello"}'
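If you prefer to test from code rather than curl, a minimal client on Node 18+ (which ships a global fetch) might look like the sketch below. The URL and request body mirror the curl example above; the function names and error handling are assumptions for illustration:

```typescript
// Minimal sketch of calling the local server started by `inworld serve`.
// Assumes Node 18+ (global fetch); endpoint and body match the curl example.
const GRAPH_URL = "http://localhost:3000/v1/graph:start";

interface StartRequest {
  method: string;
  headers: Record<string, string>;
  body: string;
}

function buildStartRequest(input: unknown): StartRequest {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input }),
  };
}

async function startGraph(input: unknown): Promise<unknown> {
  const res = await fetch(GRAPH_URL, buildStartRequest(input));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

// startGraph("Hello").then(console.log); // requires the local server to be running
```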

Next Steps

Once you have your CLI set up and tested locally, you can:
  1. Deploy to Cloud - Deploy your graphs to Inworld Cloud
  2. Experiments - Build, register, target, and monitor variants end to end
  3. Experiments UI - Manage Portal targeting rules and rollouts

Need Help?

  • Setup & development issues? See CLI Troubleshooting Guide
  • Advanced workflows & production issues? See Experiments for the full CLI + Portal loop
  • General CLI help: Run inworld help or inworld [command] --help