Create chat completion
POST /llm/v1alpha/completions:completeChat
curl --request POST \
  --url https://api.inworld.ai/llm/v1alpha/completions:completeChat \
  --header 'Authorization: Basic <credentials>' \
  --header 'Content-Type: application/json' \
  --data '{
  "servingId": {
    "modelId": {
      "model": "<string>",
      "serviceProvider": "SERVICE_PROVIDER_UNSPECIFIED"
    },
    "userId": "<string>",
    "sessionId": "<string>"
  },
  "messages": [
    {
      "content": "<string>",
      "role": "MESSAGE_ROLE_UNSPECIFIED",
      "toolCalls": [
        {
          "id": "<string>",
          "functionCall": {
            "name": "<string>",
            "args": "<string>"
          }
        }
      ],
      "toolCallId": "<string>",
      "name": "<string>",
      "textContent": "<string>",
      "contentItems": {
        "contentItems": [
          {
            "text": "<string>",
            "imageUrl": {
              "url": "<string>",
              "detail": "<string>"
            }
          }
        ]
      }
    }
  ],
  "tools": [
    {
      "functionCall": {
        "name": "<string>",
        "description": "<string>",
        "properties": {}
      }
    }
  ],
  "toolChoice": {
    "text": "<string>",
    "object": {
      "functionCall": {
        "name": "<string>"
      }
    }
  },
  "textGenerationConfig": {
    "frequencyPenalty": 123,
    "logitBias": [
      {
        "tokenId": "<string>",
        "biasValue": 123
      }
    ],
    "maxTokens": 123,
    "n": 123,
    "presencePenalty": 123,
    "stop": [
      "<string>"
    ],
    "stream": true,
    "temperature": 123,
    "topP": 123,
    "repetitionPenalty": 123,
    "seed": 123
  },
  "responseFormat": "RESPONSE_FORMAT_UNSPECIFIED",
  "requestTimeout": 123,
  "jsonSchema": {
    "name": "<string>",
    "description": "<string>",
    "strict": true,
    "schema": {}
  }
}'
{
  "result": {
    "id": "<string>",
    "choices": [
      {
        "finishReason": "FINISH_REASON_UNSPECIFIED",
        "index": 123,
        "message": {
          "content": "<string>",
          "toolCalls": [
            {
              "id": "<string>",
              "functionCall": {
                "name": "<string>",
                "args": "<string>"
              }
            }
          ],
          "role": "MESSAGE_ROLE_UNSPECIFIED"
        }
      }
    ],
    "createTime": "2023-11-07T05:31:56Z",
    "model": "<string>",
    "usage": {
      "completionTokens": 123,
      "promptTokens": 123,
      "estimatedCompletionTokens": 123,
      "estimatedPromptTokens": 123
    }
  },
  "error": {
    "code": 123,
    "message": "<string>",
    "details": [
      {
        "@type": "<string>"
      }
    ]
  }
}

Authorizations

Authorization
string
header
required

Should follow the format Basic {credentials}, where {credentials} is the Base64-encoded string of your API key and secret in the format key:secret.
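
For example, the header can be built like this (an illustrative shell sketch; MY_KEY and MY_SECRET are placeholders for your own credentials):

# Base64-encode "key:secret" to produce the Basic credentials (placeholder values).
CREDENTIALS=$(printf '%s' 'MY_KEY:MY_SECRET' | base64)

curl --request POST \
  --url https://api.inworld.ai/llm/v1alpha/completions:completeChat \
  --header "Authorization: Basic $CREDENTIALS" \
  --header 'Content-Type: application/json' \
  --data '{ ... }'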

Body

application/json

Chat completion request.

servingId
object
required

Describes the serving ID of the request to select the right model.
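
An illustrative fragment (all values are placeholders; the model name and serviceProvider value depend on the models provisioned for your workspace):

"servingId": {
  "modelId": {
    "model": "your-model-name",
    "serviceProvider": "SERVICE_PROVIDER_UNSPECIFIED"
  },
  "userId": "user-1234",
  "sessionId": "session-5678"
}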

messages
(Text Content · object | Multi-modal Content · object)[]
required

A list of messages comprising the conversation so far.

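An illustrative fragment with one Text Content message and one Multi-modal Content message. The role values MESSAGE_ROLE_SYSTEM and MESSAGE_ROLE_USER are assumptions based on the enum naming pattern (only MESSAGE_ROLE_UNSPECIFIED appears in this reference), and the detail value "auto" is borrowed from OpenAI-style image inputs; check the message schema for the exact values.

"messages": [
  {
    "role": "MESSAGE_ROLE_SYSTEM",
    "content": "You are a helpful assistant."
  },
  {
    "role": "MESSAGE_ROLE_USER",
    "contentItems": {
      "contentItems": [
        { "text": "What is shown in this picture?" },
        { "imageUrl": { "url": "https://example.com/photo.jpg", "detail": "auto" } }
      ]
    }
  }
]
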
tools
object[]

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. Only supported for OpenAI.
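
An illustrative tool definition. The get_weather name and location property are hypothetical, and the JSON-Schema-style shape of properties is an assumption modeled on OpenAI function calling:

"tools": [
  {
    "functionCall": {
      "name": "get_weather",
      "description": "Look up the current weather for a city.",
      "properties": {
        "location": { "type": "string" }
      }
    }
  }
]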

toolChoice
object

Controls which (if any) function is called by the model. Only supported for OpenAI.
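
The toolChoice object accepts either a plain text value or an object naming a specific function. An illustrative sketch of both forms (the "auto" value and the get_weather name are assumptions mirroring OpenAI's tool_choice):

"toolChoice": { "text": "auto" }

or, to force a particular function:

"toolChoice": {
  "object": {
    "functionCall": { "name": "get_weather" }
  }
}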

textGenerationConfig
object

Configuration for text completion generation.
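
An illustrative configuration; the numbers below are example values, not recommended defaults:

"textGenerationConfig": {
  "maxTokens": 512,
  "temperature": 0.7,
  "topP": 0.95,
  "frequencyPenalty": 0,
  "presencePenalty": 0,
  "stop": ["\n\n"],
  "stream": false,
  "seed": 42
}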

responseFormat
enum<string>
default:RESPONSE_FORMAT_UNSPECIFIED

Format that the model must output.

  • RESPONSE_FORMAT_UNSPECIFIED: Response format is not specified. Defaults to "text".
  • RESPONSE_FORMAT_TEXT: Text response format.
  • RESPONSE_FORMAT_JSON: JSON response format. Only supported when stream = false. This guarantees that the message the model generates is valid JSON. Note that your system prompt must still instruct the model to produce JSON; to help ensure you don't forget, the API will throw an error if the string "JSON" does not appear in your system message. Also note that the message content may be partial (i.e. cut off) if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. Only supported for OpenAI.
  • RESPONSE_FORMAT_JSON_SCHEMA: JSON schema response format. It enables Structured Outputs, which ensures the model's output matches your supplied JSON schema. Only supported for OpenAI.
Available options:
RESPONSE_FORMAT_UNSPECIFIED,
RESPONSE_FORMAT_TEXT,
RESPONSE_FORMAT_JSON,
RESPONSE_FORMAT_JSON_SCHEMA
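
For example, to request plain JSON output, pair the format with a non-streaming configuration and a system prompt that mentions JSON (illustrative fragment):

"responseFormat": "RESPONSE_FORMAT_JSON",
"textGenerationConfig": { "stream": false }
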
requestTimeout
number

Request timeout in seconds. This setting applies only to selected clients and is configured by a separate request to Inworld. Make sure to configure those specific requests accordingly, as this timeout does not affect other requests.

jsonSchema
object

JSON schema configuration. Only supported for OpenAI.
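
An illustrative structured-output fragment combining responseFormat and jsonSchema. The schema body follows JSON Schema conventions, which is an assumption here; the city_weather name and its fields are hypothetical:

"responseFormat": "RESPONSE_FORMAT_JSON_SCHEMA",
"jsonSchema": {
  "name": "city_weather",
  "description": "Weather report for a single city.",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "city": { "type": "string" },
      "temperatureCelsius": { "type": "number" }
    },
    "required": ["city", "temperatureCelsius"]
  }
}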

Response

A successful response. When streaming is enabled, the response is returned as a stream of these objects.

result
object

Chat completion response.

error
object