Model & Prompt
A character’s Model and Prompt settings give you finer-grained control over the model and prompt used to power the character’s generated responses.
Using the Interface
In the character’s Advanced tab, select Character Model and Prompt.
Selecting a Model
In the Character Model and Prompt interface, you can use the Model drop-down menu to select your desired AI model from a list of Inworld proprietary and third-party models. If Default is selected, Inworld automatically optimizes the model choice based on the use case.
Base your model selection on the specific use case you have for the character.
Note that the models differ in their impact on latency and character behavior.
- Llama 3.1 Druid 70B: Inworld’s model trained to produce more natural, conversational responses. Based on Llama 3.1 70B.
- GPT-4o: OpenAI’s versatile, high-intelligence flagship model.
- GPT-4o mini: OpenAI’s fast, affordable small model.
Setting a New Prompt
Under the Prompt heading, you can toggle between the Default prompt and your own Custom prompt.
- Default uses the Inworld optimized default prompt.
- Custom allows you to enter a custom prompt.
Prompt Best Practices
When crafting your Custom character prompt, keep the following best practices in mind:
- Write Clear Instructions. Check your prompt carefully to ensure the writing is clear and there are no conflicting instructions.
- Follow Model-Specific Templates. Some models, such as Llama 3.1 Druid 70B, may require special tags that wrap the text in order to work best. If a format is specified for the model, make sure to follow it.
- Place Important Text at the Beginning or End. Text at the beginning and end of the prompt typically receives more emphasis from the model. If you want to emphasize a certain instruction or piece of information, try moving it to the beginning or end of the prompt.
- Test Your Prompts Thoroughly. After making any changes, we recommend testing the prompt thoroughly by chatting with the character and reviewing the prompt in the logs to ensure the changes are working as expected. If not, tweak the prompt to fix any issues you’ve noticed.
- Tailor the Prompt to the Model. Prompts may work differently for different models, so make sure to customize prompts for the specific model you are working with.
Model-Specific Prompt Formats
Llama 3.1 Druid 70B
This model requires special tokens to denote the different roles: System, User, or Assistant.
An example of this format is given below. In this example, the messages are organized as follows:
- System message
- User message
- Assistant message
- User message
After these messages, the prompt ends with an assistant header so that the model generates the next assistant response.
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are The Mad Hatter, in conversation with Alice. Descriptions of The Mad Hatter's actions and the scenario are displayed using asterisks *like this*.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hi, I'm lost. Where am I?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

*surprised* Whoa there! Who are you?<|eot_id|><|start_header_id|>user<|end_header_id|>

I'm Alice.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
Note: There should be 2 newlines after the <|start_header_id|> and <|end_header_id|> tags, before the message associated with that role.
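As a rough illustration of this format, the sketch below assembles such a prompt string from a list of (role, message) pairs. The helper function is hypothetical, for illustration only, and is not part of any Inworld API.

```python
# Hypothetical helper that assembles a Llama 3.1-style prompt from (role, message)
# pairs, following the tag format shown above. Illustrative only, not an Inworld API.
def build_llama_prompt(messages):
    prompt = "<|begin_of_text|>"
    for role, text in messages:
        # Two newlines after <|end_header_id|>, before the message body.
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>"
    # End with an assistant header so the model generates the next assistant turn.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(build_llama_prompt([
    ("system", "You are The Mad Hatter, in conversation with Alice."),
    ("user", "Hi, I'm lost. Where am I?"),
    ("assistant", "*surprised* Whoa there! Who are you?"),
    ("user", "I'm Alice."),
]))
```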
GPT-4o and GPT-4o mini
For OpenAI models, the custom prompt is provided as a system message to the specified model.
See the OpenAI Best Practices for prompting recommendations.
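For illustration, the sketch below shows how a custom prompt is typically passed as a system message in the standard OpenAI chat format. It is a minimal example of the message structure only, not Inworld’s internal integration.

```python
# Minimal sketch of passing a custom prompt as a system message in the OpenAI
# chat format. Illustrative only; this is not Inworld's internal code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

custom_prompt = (
    "You are The Mad Hatter, in conversation with Alice. Descriptions of The Mad "
    "Hatter's actions and the scenario are displayed using asterisks *like this*."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": custom_prompt},  # the custom prompt
        {"role": "user", "content": "Hi, I'm lost. Where am I?"},
    ],
)
print(response.choices[0].message.content)
```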
Prompt Template Syntax
The character prompt is powered by Inworld’s flexible prompt templating engine.
The prompt templating engine enables you to create a dynamic prompt that changes based on the values of both Variables and Expressions that are defined in the template.
Inworld’s template syntax is based on Jinja. The basic syntax is given below:
Type | Syntax | Description |
---|---|---|
If / else statements | {% if ... %} … {% elif ... %} … {% else %} … {% endif %} | Evaluates the expressions to determine which block to render in the prompt. |
Variables | {{ ... }} | Renders the variable’s value in the prompt. |
Comments | {# ... #} | Text inside the comment delimiters is not included in the rendered prompt. |
All other text not using one of the delimiters above renders as regular text.
In addition, you can leverage whitespace control to strip whitespace in the template. If you add a minus sign (-) to the start or end of a block (e.g., {%- if ... %} or {% if ... -%}), the whitespace before or after that block is removed.
For example, the following template:
```
You are {{ character }}, in conversation with {{ player }}.
{#- Narrated actions instructions -#}
{%- if narrated_actions_enabled == True %} Descriptions of {{character}}'s actions and
the scenario are displayed using asterisks *like this*. {%- endif -%}
```
... with variable values:
Variable | Value |
---|---|
character | The Mad Hatter |
player | Alice |
narrated_actions_enabled | True |
... will render as:
You are The Mad Hatter, in conversation with Alice. Descriptions of The Mad Hatter's actions and the scenario are displayed using asterisks *like this*.
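If you want to experiment with templates outside of Inworld, the sketch below renders roughly the same template with the open-source jinja2 library. Inworld’s engine may configure whitespace handling differently, so treat the exact output as approximate.

```python
# Sketch using the jinja2 library to illustrate how a template like the one above
# renders with whitespace control. Not Inworld's engine; whitespace handling there
# may differ slightly.
from jinja2 import Template

template_text = (
    "You are {{ character }}, in conversation with {{ player }}.\n"
    "{#- Narrated actions instructions -#}\n"
    "{%- if narrated_actions_enabled == True %} Descriptions of {{ character }}'s actions and\n"
    "the scenario are displayed using asterisks *like this*. {%- endif -%}"
)

rendered = Template(template_text).render(
    character="The Mad Hatter",
    player="Alice",
    narrated_actions_enabled=True,
)
print(rendered)
# You are The Mad Hatter, in conversation with Alice. Descriptions of The Mad Hatter's actions and
# the scenario are displayed using asterisks *like this*.
```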
Variables
Inworld provides the following variables that can be used when constructing the prompt. A short sketch showing how to handle variables that may be undefined follows the table.
Variable | Description | Example value |
---|---|---|
character string | Character name as populated in the Name field | The Mad Hatter |
player string | Player’s name from the Player Profile. | Alice |
is_multi_agent boolean | If the conversation is a multi-agent conversation, then True, else False. | True |
is_player_involved_in_conversation boolean | Primarily relevant for multi-agent, True if the player has involved themselves in the conversation, else False. | True |
last_actor string | Name of the last actor in the interaction (could be player or another character) | Alice |
active_actors_num int | Primarily relevant for multi-agent, number of active participants in the conversation (excluding the character). | 2 |
active_actors_or string | Joins names of active conversation participants (excluding the character) with an or. | The Cheshire Cat or Alice |
active_actors_and string | Joins names of active conversation participants (excluding the character) with an and. | The Cheshire Cat and Alice |
active_actors_description string | Description of the active conversation participants (excluding the character) per the External Description field in the character’s Identity. If the participant does not have an external description, only the name is provided. | The Cheshire Cat - Cat with a distinctive mischievous grin, Visitor |
narrated_actions_enabled boolean | If character has Narrated Actions enabled, then True, else False. | True |
conversation_history string | Recent conversation history transcript. | Alice: Hi, I'm lost. Where am I? <br> The Mad Hatter: *surprised* Whoa there! Who are you? <br> Alice: I'm Alice. |
player_role string | Player’s role from the Player Profile. Variable will be undefined if unspecified. | Adventurer |
player_age string | Player’s age from the Player Profile. Variable will be undefined if unspecified. | 23 |
player_gender string | Player’s gender from the Player Profile. Variable will be undefined if unspecified. | F |
player_profile_variables string | Custom Player Profile Variables from the Player Profile. Formatted as <Field label>: <Field value> delimited by newlines. | Player Hometown: NYC, Current Level: 27 |
character_role string | The character’s Role. Variable will be undefined if unspecified. | Virtual Influencer |
character_pronouns string | The character’s Pronouns that correspond to they / them / their. Defaults to they / them / their if Unspecified. | he / him / his |
stage_of_life string | The character’s Stage of Life, one of: Childhood, Adolescence, Young Adulthood, Middle Adulthood, Late Adulthood. Variable will be undefined if unspecified. | Young Adulthood |
personality_traits string | Comma delimited string of personality traits provided under Personality > Character Traits. Variable will be undefined if no traits provided. | Silly, Over-the-top, Nonsensical |
hobbies_and_interests string | Comma delimited string of character’s Hobbies and Interests. Variable will be undefined if unspecified. | Top hats, Tik Tok, drinking tea |
description string | Character’s Core Description | The Mad Hatter lives in a madhouse in the woods of Wonderland. He is hyperactive and silly, always reciting nonsensical poems and unanswerable riddles. The Mad Hatter spends his days acting deranged and making TikTok videos. He makes sponsored TikTok videos about top hat unboxings, top hat tips, top hat industry gossip and more. |
motivation string | Character’s Motivation. Variable will be undefined if unspecified. | The Mad Hatter wants to make you laugh with his crazy jokes and weird riddles and be the biggest top hat virtual influencer on TikTok. |
flaws string | Character’s Flaws. Variable will be undefined if unspecified. | The Mad Hatter can behave extremely erratically at times, reflecting his fractured grasp on reality. |
use_preset_dialogue_style boolean | If using a Preset Dialogue Style, True, else False. | True |
preset_dialogue_description string | String description of the selected dialogue style. | animated, playful, and hilarious |
custom_dialogue_adjectives string | Character’s Custom Dialogue Adjectives. Variable will be undefined if no adjectives provided. | energetic and comical |
custom_dialogue_colloquialism string | Character’s Custom Dialogue Colloquialism. Variable will be undefined if no colloquialism provided. | ridiculous riddles |
example_dialog string | Character’s Example Dialogue. Variable will be undefined if no example dialogue provided. | Welcome to the mad house! I'm The Mad Hatter, Wonderland's biggest top hat influencer. What's Gucci, fam? Do you have any questions about top hats or getting more followers? Because I'm the guy! Why is a clown like a railroad? |
emotion string | Character’s current Emotion, one of: NEUTRAL, CONTEMPT, BELLIGERENCE, DOMINEERING, CRITICISM, ANGER, TENSION, TENSE_HUMOR, DEFENSIVENESS, WHINING, SADNESS, STONEWALLING, INTEREST, VALIDATION, HUMOR, AFFECTION, SURPRISE, JOY, DISGUST | JOY |
relevant_knowledge string | Knowledge Records relevant to the latest player query. Variable will be undefined if no relevant knowledge records. | The Mad Hatter's recognizable top hat is part of what makes him a world-famous virtual influencer on TikTok. The Mad Hatter thinks top hats are the coolest hat you can own. The Mad Hatter tried to launch a music career, but The Queen of Hearts threatened to chop off his head if he continued. |
knowledge_filter_instruction string | If it is detected that a Knowledge Filter should be applied when responding, this variable contains the instruction on how to respond. Variable will be undefined if no filter needs to be applied. | Make sure to respond as if he has poor knowledge on the topic, showing lack of confidence. |
has_goal boolean | If there is an activated Goal with an instruction action, then True, else False. | True |
goal_instruction string | The instruction associated with the activated goal, that was provided in the Goals YAML. | say something like "Welcome to the Mad House! I'm the Mad Hatter, but you probably already knew that. I'm sure you've seen my top hat videos, right?” |
scene_description string | Scene Description for the associated scene. Variable will be undefined if there is no scene. | The Mad Hatter and Alice are sitting in the Mad Hatter's House. It is a mad house located in the woods of Wonderland. |
scene_active_situation string | Scene Trigger Description if one is active. Variable will be undefined if there is no active scene trigger. | The Cheshire Cat bursts into the Mad Hatter’s House, interrupting The Mad Hatter and Alice’s conversation. |
relevant_flash_memories string | Flash Memories relevant to the latest player query. Variable will be undefined if no relevant flash memories. | Alice wants to buy a hat. The Mad Hatter likes to buy hats from Top Hat Haven. |
relevant_long_term_memories string | Long-Term Memories relevant to the latest player query. Variable will be undefined if no relevant long term memories. | Alice feels lost after arriving in Wonderland and hopes The Mad Hatter can provide some guidance. |
relationship string | Relationship between character and player, one of: UNSPECIFIED, ARCHENEMY, ENEMY, ACQUAINTANCE, FRIEND, CLOSE_FRIEND, DATE, RELATIONSHIP, LIFE_PARTNER | FRIEND |
reasoning_attributes string | Outputs of Reasoning, if enabled. Formatted as <attribute name> : <attribute value> delimited by newlines. | Custom Emotions: happy, Lie detector: False |
potential_conversation_topics string | Potential conversation topics that can be discussed by the character. Only available if enabled for your workspace, otherwise will be undefined. | The Mad Hatter is eager to talk about his next Tik Tok video idea. |
is_retry_reason_unsafe boolean | If the previous attempt at generating a response resulted in an unsafe response, then True, else False. | False |
pregenerated_reply string | If part of the response was already generated (but the response is incomplete), then True, else False. | False |
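Because many of these variables are undefined when the corresponding field is unspecified, it is often useful to guard against missing values in the template. The sketch below shows two standard Jinja approaches (an is defined check and the default filter); the rendering code is illustrative and is not Inworld’s engine.

```python
# Guarding against possibly-undefined variables with standard Jinja constructs.
# Illustrative only; Inworld's engine is Jinja-based but may handle undefined
# values differently.
from jinja2 import Template

template_text = (
    "You are {{ character }}."
    "{% if player_role is defined %} The player is a {{ player_role }}.{% endif %}"
    " Your dialogue style: {{ custom_dialogue_adjectives | default('friendly') }}."
)

print(Template(template_text).render(
    character="The Mad Hatter",
    custom_dialogue_adjectives="energetic and comical",
    # player_role intentionally omitted to show the `is defined` guard
))
# You are The Mad Hatter. Your dialogue style: energetic and comical.
```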
Engine Settings
Engine settings can be used to customize model parameters that impact the generated responses. A brief sketch of how the sampling parameters interact follows the list.
- Temperature: Determines randomness of response. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- Top P: Uses nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering either temperature or top_p, but not both. Defaults to 1.
- Max Tokens: Maximum number of output tokens to generate.
- Frequency Penalty: Positive values penalize new tokens based on their existing frequency in the generated text so far, decreasing the model’s likelihood to repeat the same line verbatim. Defaults to 0.
- Presence Penalty: Positive values penalize new tokens based on whether they appear in the generated text so far, increasing the model's likelihood to talk about new topics. Defaults to 0.
- Repetition Penalty: Penalizes new tokens based on whether they appear in the prompt and the generated text so far. Values less than 1 encourage the model to use new tokens, while values higher than 1 encourage the model to repeat tokens. The value must be strictly positive. Defaults to 1 (no penalty).
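To build intuition for how Temperature and Top P interact, the toy sketch below applies temperature scaling followed by nucleus (top-p) filtering to a handful of candidate token scores. It is a simplified illustration of the sampling concepts, not Inworld’s or any model provider’s implementation.

```python
# Toy illustration of temperature scaling plus nucleus (top-p) sampling.
# Simplified for intuition; not Inworld's implementation.
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=1.0, rng=None):
    rng = rng or np.random.default_rng()
    # Temperature < 1 sharpens the distribution (more deterministic);
    # temperature > 1 flattens it (more random).
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()

    # Nucleus sampling: keep the smallest set of highest-probability tokens whose
    # cumulative probability reaches top_p, then renormalize and sample from it.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]
    kept_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept_probs))

# Four candidate tokens with raw scores; a low temperature plus top_p = 0.9
# makes token 0 by far the most likely pick.
print(sample_next_token([2.0, 1.0, 0.5, 0.1], temperature=0.2, top_p=0.9))
```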