POST /v1/moderations
curl -X POST 'https://api.inworld.ai/v1/moderations' \
  -H "Authorization: Bearer $INWORLD_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "input": "Hello world!"
  }'
{
  "id": "modr-5cf388daeb3a41b9803fe73f4345a5c3",
  "model": "inworld/moderation-latest",
  "results": [
    {
      "flagged": false,
      "categories": {
        "sexual": false,
        "sexual/minors": false,
        "harassment": false,
        "harassment/threatening": false,
        "hate": false,
        "hate/threatening": false,
        "illicit": false,
        "illicit/violent": false,
        "self-harm": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "violence": false,
        "violence/graphic": false
      },
      "category_scores": {
        "sexual": 0,
        "sexual/minors": 0,
        "harassment": 0,
        "harassment/threatening": 0,
        "hate": 0,
        "hate/threatening": 0,
        "illicit": 0,
        "illicit/violent": 0,
        "self-harm": 0,
        "self-harm/intent": 0,
        "self-harm/instructions": 0,
        "violence": 0,
        "violence/graphic": 0
      },
      "category_applied_input_types": {
        "sexual": [
          "text"
        ],
        "sexual/minors": [
          "text"
        ],
        "harassment": [
          "text"
        ],
        "harassment/threatening": [
          "text"
        ],
        "hate": [
          "text"
        ],
        "hate/threatening": [
          "text"
        ],
        "illicit": [
          "text"
        ],
        "illicit/violent": [
          "text"
        ],
        "self-harm": [
          "text"
        ],
        "self-harm/intent": [
          "text"
        ],
        "self-harm/instructions": [
          "text"
        ],
        "violence": [
          "text"
        ],
        "violence/graphic": [
          "text"
        ]
      },
      "ailuminate": {
        "safety": "safe",
        "categories": {
          "violent_crimes": false,
          "sex_related_crimes": false,
          "child_sexual_exploitation": false,
          "suicide_self_harm": false,
          "indiscriminate_weapons": false,
          "intellectual_property": false,
          "defamation": false,
          "non_violent_crimes": false,
          "hate": false,
          "specialized_advice": false,
          "privacy": false,
          "sexual_content": false
        },
        "extensions": {
          "politically_sensitive": false,
          "unethical_acts": false,
          "jailbreak": false
        },
        "refusal": false
      }
    }
  ]
}


Classifies one or more text inputs for harmful content. The endpoint is schema-compatible with the OpenAI Moderations API, so it can be called with the OpenAI SDK. The response includes the standard OpenAI moderation categories as well as AILuminate safety classifications for more granular signals.
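Given a response like the sample above, a client typically checks each result's `flagged` boolean and, when it is true, collects the offending category names. A minimal sketch in Python using only the standard library (the response body is abbreviated from the example above):

```python
import json

# Sample response body, abbreviated from the example above.
response_body = """
{
  "id": "modr-5cf388daeb3a41b9803fe73f4345a5c3",
  "model": "inworld/moderation-latest",
  "results": [
    {
      "flagged": false,
      "categories": {"sexual": false, "violence": false},
      "category_scores": {"sexual": 0, "violence": 0},
      "ailuminate": {"safety": "safe", "refusal": false}
    }
  ]
}
"""

def flagged_categories(result):
    """Return the names of categories the model flagged for one input."""
    return [name for name, hit in result["categories"].items() if hit]

data = json.loads(response_body)
for result in data["results"]:
    if result["flagged"]:
        print("flagged:", flagged_categories(result))
    else:
        print("safe; ailuminate says:", result["ailuminate"]["safety"])
```

The same pattern applies unchanged to batch requests, since each entry in `results` carries its own `flagged` flag and category maps.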

Authorizations

Authorization
string
header
required

Your authentication credentials. Pass your API key as a Bearer token: Bearer $INWORLD_API_KEY.

Body

application/json
input
string | string[]
required

A single string or an array of strings to classify for harmful content.
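Because `input` accepts either a single string or an array of strings, a single request body shape covers both single and batch classification. A sketch of building that body (field names follow the schema above; no request is sent here):

```python
import json

def build_moderation_body(inputs, model="inworld/moderation-latest"):
    """Build the JSON body for POST /v1/moderations.

    `inputs` may be a single string or a list of strings, matching
    the accepted types of the `input` field.
    """
    return json.dumps({"input": inputs, "model": model})

single = build_moderation_body("Hello world!")
batch = build_moderation_body(["Hello world!", "Another message"])
```

The resulting string is what the curl example passes with `-d`.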

model
string
default:inworld/moderation-latest

The moderation model to use.

Response

A successful response.

id
string

Unique identifier for the moderation request.

model
string

The model used for classification.

results
object[]

Array of moderation results, one per input.
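Since `results` contains one entry per input, in order, batch callers can pair each input string back with its verdict. A hypothetical sketch (the `results` values here are stand-ins shaped like the response schema above):

```python
inputs = ["Hello world!", "Another message"]
# Hypothetical results, shaped like the `results` entries above.
results = [{"flagged": False}, {"flagged": True}]

# Map each input back onto its moderation verdict by position.
verdicts = {text: r["flagged"] for text, r in zip(inputs, results)}
```

Relying on positional order is safe here because the array is documented as one result per input.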