Safety Policies
Safety Philosophy
Inworld AI is committed to the development of safe and responsible AI systems. Given the power and reach of Inworld’s technology, we have thought a lot about how to deliver on this commitment, while supporting creators in a way that respects the safety of the audiences they serve.
To that end, our approach is to empower our creators with the tools to create and customize their safety standards for their experiences, while holding them accountable for using our technology responsibly, in alignment with our usage policies. We acknowledge that even within the bounds of our usage policies, not all choices made by our creators will reflect the choices we would have made, but it is the right of our creators to make those decisions for their audiences.
However, violations of our usage policies will result in enforcement measures, which can include issuing a warning and requesting a change in behavior, adjusting the safety settings of your characters or experience, or permanently terminating your account.
We know that getting this right is a team effort, and we ask that you report any unsafe experiences to us at support@inworld.ai. Let’s all work together to build experiences that are safe, immersive, and fun.
Usage Policies
We prohibit the usage of our platform for the creation of experiences or sharing of content that:
- Is illegal or unlawful. We do not allow any characters, experiences, or content that would violate the laws in the applicable jurisdictions, including infringing on the intellectual property or legal rights of others.
- Engages in or promotes hate speech or hateful conduct. We do not allow any characters, experiences, or content that are racist or discriminatory, including discrimination on the basis of someone’s race, religion, age, gender, gender identity, disability or sexuality.
- Contains sexual exploitation or abuse. We prohibit any characters, experiences, or content that exploit or abuse minors in any way, including content related to pedophilia, grooming, nudity, and the sexualization of youth.
- Promotes real-life harm or violence. We do not allow any characters, experiences, or content that glorify, promote, or facilitate real-life suicide, self-harm, harm to others, or violence.
- Facilitates fraud or malware. We do not allow characters, experiences, or content that attempt to gather sensitive information for fraudulent purposes, modify users' computers in unexpected or harmful ways (e.g., malware or viruses), or otherwise perpetuate scams or fraud.
- Is deceptive or misleading. We do not allow any characters, experiences, or content that impersonate a human, are designed to mislead others into thinking they are interacting with a human, spread disinformation, generate misleading content, or engage in activities like robocalls and pyramid schemes.
In addition to the above usage policies, creators should make their best effort to follow the best practices and recommendations below to ensure a safe experience. If you have any questions about whether your use case is permitted or prohibited under these usage policies, please email us at support@inworld.ai.
Safety Recommendations
To help ensure a safe experience, here are five best practices we recommend creators follow during character creation:
- Write descriptions carefully. When building your character, consider what kind of response you are hoping to elicit and how your description can be used to encourage safe conversations. Leverage the Example Dialogue tool to constrain the system towards language that is specific and appropriate for your intended use case.
- Think about unconscious bias. To prevent unconscious bias from influencing the quality of your character, avoid stereotypical language or potentially harmful tropes.
- Get feedback from others. Before moving forward with training and testing, it can be helpful to receive feedback from others about how your character might be perceived.
- Regularly review your character. Review your character’s dialogue on a regular basis to ensure that it is still aligned with your original goals and objectives.
- Be prepared to respond to misuse. Use follow-up questions to route the conversation towards appropriate topics. If your character responds in a way that you believe violates our policies, please contact us at support@inworld.ai.
Certain conversation topics can result in unexpected or harmful behavior. To help prevent these situations, we recommend avoiding controversial subjects.