This quickstart guide will walk through how to use the Inworld CLI to set up a simple LLM to TTS conversational pipeline (powered by Runtime) in just a few minutes.

Prerequisites

Before you get started, make sure you're running one of the following supported platforms:
  • macOS (arm64)
  • Linux x64
  • Windows x64

Get Started

1

Install Inworld CLI

Install the Inworld CLI globally.
npm install -g @inworld/cli
2

Log in to your account

Log in to your Inworld account to use Inworld Runtime. If you don't have an account, you can create one when prompted to log in.
inworld login
# You'll be prompted to login via your browser
Once logged in, your credentials are stored and you won’t need to log in again.
3

Create your first project

The inworld init command downloads the llm-to-tts-node template, a production-ready LLM to TTS pipeline.
Currently, only the llm-to-tts-node template is available via CLI. To view all available templates, visit inworld.ai/templates.
inworld init --template llm-to-tts-node --name my-project
# Enter 'y' when prompted to install dependencies
After the command completes, you’ll have a project directory with all dependencies installed.
4

Run your graph

Navigate to your project directory and run your pipeline with the appropriate inputs.
cd my-project
inworld run ./graph.ts '{"input": {"user_input":"Hello!"}}'

Run a local server

Now that you’ve successfully run your first graph, you can run a local server to test it in your application.
1

Start the local server

Start your local server.
inworld serve ./graph.ts
Additional server configuration options (including support for gRPC and Swagger UI) are covered in the server documentation.
2

Test the API

Test the API with a simple curl command. Note that for the LLM to TTS pipeline, the API returns raw audio data that must be parsed before it can be played; one way to do that is sketched after the example output below.
curl -X POST http://localhost:3000/v1/graph:start \
    -H "Content-Type: application/json" \
    -d '{"input": {"user_input":"Hello!"}}'
Here is an example of the output:
{"executionStarted":{"executionId":"01999de9-8a75-75f8-a17b-7ec4c1b4490e","timestamp":"2025-10-01T03:55:52.309Z","variantName":"__default__"}}
{"ttsOutputChunk":{"text":"Hello!","audio":{"data":[0,0,0,0,0,0,0,0,0...],"sampleRate":48000}},"responseNumber":1}
{"ttsOutputChunk":{"text":" How can I assist you today?","audio":{"data":[0,0,0,0,0,0,0,0,0...],"sampleRate":48000}},"responseNumber":1}
{"executionCompleted":true}

Make your first change

Now let's make our first modification to the LLM to TTS pipeline: changing the model and the prompt.
1

Modify graph.ts

Open up the graph.ts file in your project directory, which contains the graph configuration. Modify the provider and modelName under RemoteLLMChatNode to any supported LLM.
graph.ts
import {
  LLMChatRequestBuilderNode,
  RemoteLLMChatNode,
  RemoteTTSNode,
  SequentialGraphBuilder,
  TextChunkingNode,
} from '@inworld/runtime/graph';

const graphBuilder = new SequentialGraphBuilder({
  id: 'custom-text-node-llm',
  nodes: [
    new LLMChatRequestBuilderNode({
      messages: [
        { 
          role: 'system',
          content: { type: 'template', template: 'You are an extremely sarcastic assistant. Always respond with sarcasm.' },
        },
        {
          role: 'user',
          content: { type: 'template', template: '{{user_input}}' },
        },
      ],
    }),
    new RemoteLLMChatNode({
      // Changed from the template's provider: 'openai', modelName: 'gpt-4o-mini'
      provider: 'google',
      modelName: 'gemini-2.5-flash',
      stream: true,
    }),
    new TextChunkingNode(),
    new RemoteTTSNode(),
  ],
});

export const graph = graphBuilder.build();

2

Test the API

Test your updated graph.
inworld run ./graph.ts '{"input": {"user_input":"Hello!"}}'

Next Steps

Now that you've learned the basics, explore more advanced features.

Need Help?