GENXT Confidential LLM API

Endpoints:
• Server attestation (POST)
• Generate a completion (POST)
• Generate a chat completion (POST)
• List Local Models (GET)
• Show Model Information (POST)
• Generate Embeddings (POST)
• OpenAI compatible endpoints (POST)

Generate a chat completion

POST https://api.genxt.ai/api/chat

Generates the next message in a conversation with the specified AI model. The request can include the chat history so the model has context. Responses are streamed by default; streaming can be disabled to receive a single response object.
Request Example (Shell)
curl --location --request POST 'https://api.genxt.ai/api/chat' \
--header 'Authorization: Bearer ********************' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "string",
    "messages": [
        {
            "role": "string",
            "content": "string",
            "images": [
                "string"
            ]
        }
    ],
    "stream": true
}'
Response Example
{
    "model": "string",
    "created_at": "2019-08-24T14:15:22Z",
    "message": {
        "role": "string",
        "content": "string"
    },
    "done": true,
    "total_duration": 0
}

Request

Authorization
Provide your bearer token in the Authorization header when making requests to protected resources.
Example:
Authorization: Bearer ********************
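
For example, the token can be kept in an environment variable and interpolated into the header. The variable name GENXT_API_KEY below is a placeholder, not something the API defines:

# Placeholder variable name; store your actual bearer token here.
export GENXT_API_KEY='your-token'

curl --location --request POST 'https://api.genxt.ai/api/chat' \
--header "Authorization: Bearer ${GENXT_API_KEY}" \
--header 'Content-Type: application/json' \
--data-raw '{"model": "string", "messages": [{"role": "user", "content": "string"}]}'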
Body Params (application/json)

model (string, required)
The name of the AI model to use for generating the chat completion.

messages (array[object], required)
The conversation history to send to the model. Each message object has three fields:
  role (string, optional)
  The role of the message sender, such as user, system, or assistant.
  content (string, optional)
  The content of the message.
  images (array[string <base64>], optional)
  Optional base64-encoded images for multimodal chat interactions.

stream (boolean, optional, default: true)
If true, chat responses are streamed; if false, a single JSON response object is returned.

Examples
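
A non-streaming request might look like the following. The model name and message contents are illustrative only; available model names come from the List Local Models endpoint:

# "example-model" is a placeholder; use a name returned by List Local Models.
curl --location --request POST 'https://api.genxt.ai/api/chat' \
--header 'Authorization: Bearer ********************' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "example-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    "stream": false
}'

With stream set to false, the server returns one JSON object shaped like the response example above.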

Responses

200 (application/json)
Successfully generated the next message in the chat, with model statistics and response data.

Body

model (string, optional)
created_at (string <date-time>, optional)
message (object, optional)
  role (string, optional)
  content (string, optional)
done (boolean, optional)
total_duration (integer <int64>, optional)
Total time taken for generating the response.
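
With stream left at its default of true, the response can be consumed incrementally from the shell. The following is a minimal sketch, assuming the stream is newline-delimited JSON objects that each carry a message.content fragment, with done set to true on the final object; the model name and the jq dependency are illustrative, not part of the API:

# Sketch: assumes a newline-delimited JSON stream where each object
# carries a partial message.content and the last object has done=true.
# "example-model" is a placeholder; jq must be installed.
curl --silent --no-buffer --location --request POST 'https://api.genxt.ai/api/chat' \
--header 'Authorization: Bearer ********************' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "example-model",
    "messages": [{"role": "user", "content": "Hello!"}]
}' | jq --unbuffered -rj '.message.content // empty'
echo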