Safety Policies

Safe and Responsible AI Systems

Inworld AI is committed to the development of safe and responsible AI systems. As part of this commitment, we require users to comply with the **Safety Policies** summarized below.

Safety Requirements

We prohibit users from intentionally creating characters for any of the following purposes:

⚠️ Impersonation:

  • Do not use the name or persona of another person without that person's formal consent.
  • Do not generate content that infringes on the intellectual property rights of others.

⚠️ Misinformation:

  • Do not build characters that intentionally disseminate misinformation in the context of:
    1. Health information, e.g., medical advice, mental health counseling
    2. Financial information, e.g., wealth management, income planning
    3. Political information, e.g., political lobbying
  • Your characters should also not seek to access sensitive or personal user data.

⚠️ Harassment:

  • Do not build characters that promote any form of aggressive language, including:
    1. Sexual language, e.g., verbal harassment of a sexual nature, erotic language
    2. Derogatory language, e.g., slurring, hostile, or socially offensive language
    3. Abusive language, e.g., discriminatory attitudes on the basis of race, color, religion, sex (including sexual orientation, gender identity, or pregnancy), national origin, age, disability, or genetic information (including family medical history)

⚠️ Intent to Harm:

  • Your characters should not encourage behavior that presents an imminent danger of physical harm to the user or those around them. Examples of behavior falling into this category include the incitement of violence, suicide advocacy, and language that promotes illegal or harmful activities.

Safety Recommendations

We provide five safety recommendations for users to follow during character creation:

  1. Write descriptions carefully. When building your character, consider what kind of response you are hoping to elicit and how your description can be used to encourage safe conversations. Leverage the Example Dialogue tool to constrain the system towards language that is specific and appropriate for your intended use-case.
  2. Think about unconscious bias. To prevent unconscious bias from influencing the quality of your character, avoid stereotypical language or potentially harmful tropes.
  3. Get feedback from others. Before moving forward with training and testing, it can be helpful to receive feedback from others about how your character might be perceived.
  4. Regularly review your character. Review your character’s dialogue on a regular basis to ensure that it is still aligned with your original goals and objectives.
  5. Be prepared to respond to misuse. Use follow-up questions to route the conversation towards appropriate topics. Certain conversation topics can result in unexpected or harmful behavior; to help prevent these situations, we recommend avoiding controversial subjects. If your character responds in a way that you believe violates our policies, please contact us.


We implement a variety of content filters to ensure that characters are not equipped to use inappropriate language. These filters include, but are not limited to, curated lists of slurs and other profane terms, hateful phrases, and intensifiers. We recognize that keyword-based approaches to hate speech detection may inadvertently censor productive conversations (e.g., reclaimed slurs, discussions and personal accounts of abuse), and we are actively working to address this problem.
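To illustrate why keyword-based filtering can over-censor, here is a minimal sketch of the general approach. This is a hypothetical example, not Inworld's actual filter: the term list and function name are invented for illustration.

```python
import re

# Hypothetical blocklist; real filters use much larger curated term lists.
BLOCKED_TERMS = {"slur_a", "slur_b"}

def contains_blocked_term(text: str) -> bool:
    """Return True if any blocked term appears as a whole word in the text."""
    tokens = re.findall(r"[\w']+", text.lower())
    return any(token in BLOCKED_TERMS for token in tokens)

# A direct use of a blocked term is caught...
print(contains_blocked_term("He shouted slur_a at me"))        # True
# ...but so is a personal account of abuse that merely mentions the term,
# which is the over-censorship problem described above.
print(contains_blocked_term("Someone called me slur_a once"))  # True
print(contains_blocked_term("A perfectly ordinary sentence"))  # False
```

Because a simple word match cannot distinguish using a slur from quoting or discussing one, both of the first two inputs are flagged, which is exactly the limitation the paragraph above describes.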