GENXT Confidential LLM API

      Generate Embeddings

      POST https://api.genxt.ai/api/embeddings

      Generate embeddings from a model.
      Request Example (Shell)
      curl --location --request POST 'https://api.genxt.ai/api/embeddings' \
      --header 'Content-Type: application/json' \
      --data-raw '{
          "model": "string",
          "prompt": "string",
          "options": {},
          "keep_alive": "5m"
      }'
      Response Example
      [
          0
      ]

      Request

      Authorization
      Provide your bearer token in the Authorization header when making requests to protected resources.
      Example:
      Authorization: Bearer ********************
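
      For example, the header can be attached to the request example above as sketched below. The GENXT_API_KEY environment variable is only an illustration of where a real token might be stored; it is not a name defined by the API.

      # Illustrative sketch: GENXT_API_KEY is a placeholder environment variable holding your bearer token.
      curl --location --request POST 'https://api.genxt.ai/api/embeddings' \
      --header 'Content-Type: application/json' \
      --header "Authorization: Bearer ${GENXT_API_KEY}" \
      --data-raw '{
          "model": "string",
          "prompt": "string"
      }'
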
      Body Params (application/json)

      model (string, required)
          Name of the model to generate embeddings from.
      prompt (string, required)
          Text to generate embeddings for.
      options (object, optional)
          Additional model parameters listed in the documentation for the Modelfile, such as temperature.
      keep_alive (string, optional)
          Controls how long the model will stay loaded in memory following the request. Default: "5m".
      Examples
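
      As a sketch of a complete call, the request below combines the bearer-token header with concrete body values. The model name "my-embedding-model", the prompt text, and the option values are hypothetical placeholders, not models or defaults published by GENXT.

      # Hypothetical model name, prompt, and option values, shown for illustration only.
      curl --location --request POST 'https://api.genxt.ai/api/embeddings' \
      --header 'Content-Type: application/json' \
      --header "Authorization: Bearer ${GENXT_API_KEY}" \
      --data-raw '{
          "model": "my-embedding-model",
          "prompt": "Confidential computing keeps data encrypted while in use.",
          "options": { "temperature": 0.0 },
          "keep_alive": "10m"
      }'

      Setting keep_alive to "10m" here simply overrides the 5m default described above.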

      Responses

      200: Embeddings generated from the model.
      application/json
      Body: array[number] (optional)
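
      Because the response body is a flat JSON array of numbers (the embedding vector), one quick sanity check is to count its elements, for example by piping the response through jq. The request below reuses the same hypothetical placeholder values as the sketches above.

      # Prints the number of elements in the returned embedding vector.
      curl --silent --location --request POST 'https://api.genxt.ai/api/embeddings' \
      --header 'Content-Type: application/json' \
      --header "Authorization: Bearer ${GENXT_API_KEY}" \
      --data-raw '{ "model": "my-embedding-model", "prompt": "hello" }' \
      | jq 'length'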