openai

package
v0.0.0-...-c27917d
Published: Nov 11, 2023 License: Apache-2.0 Imports: 20 Imported by: 0

Documentation

Index

Constants

const StopToken = "<|im_end|>"

TODO: this should get factored out into the .toml files for each model, but this is an intermediate fix
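
A minimal sketch of how this constant might be used to strip the stop token from raw generated text before returning it to a client. The import path is a placeholder, since the module path is elided above.

package main

import (
	"fmt"
	"strings"

	openai "example.com/yourmodule/openai" // placeholder import path
)

func main() {
	// Hypothetical raw completion text from the inference backend.
	raw := "The capital of France is Paris.<|im_end|>"

	// Trim the stop token before returning the text to the client.
	fmt.Println(strings.TrimSuffix(raw, openai.StopToken))
}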

Variables

This section is empty.

Functions

func NewEmbeddingsServiceClient

func NewEmbeddingsServiceClient(conn *grpc.ClientConn)
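
The documentation does not show a return value for this constructor, so the sketch below only demonstrates obtaining the *grpc.ClientConn argument and passing it in; the address and import path are placeholders.

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	openai "example.com/yourmodule/openai" // placeholder import path
)

func main() {
	// Dial the embeddings backend (address is a placeholder).
	conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial embeddings service: %v", err)
	}
	defer conn.Close()

	// Construct the client from the live connection. The documented signature
	// does not list a return value, so none is captured here.
	openai.NewEmbeddingsServiceClient(conn)
}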

func ToChatItems

func ToChatItems(messages []openai.ChatCompletionMessage) ([]*chat.ChatItem, error)

Turn a list of openai.ChatCompletionMessages into a list of chat.ChatItems that can be added to a ChatCompletionRequest proto and submitted for inference.
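
A sketch of the conversion, assuming openai.ChatCompletionMessage comes from the github.com/sashabaranov/go-openai client library (not confirmed by this page) and using a placeholder import path for this package.

package main

import (
	"log"

	goopenai "github.com/sashabaranov/go-openai" // assumed source of ChatCompletionMessage
	openai "example.com/yourmodule/openai"       // placeholder import path
)

func main() {
	messages := []goopenai.ChatCompletionMessage{
		{Role: goopenai.ChatMessageRoleSystem, Content: "You are a helpful assistant."},
		{Role: goopenai.ChatMessageRoleUser, Content: "Summarize this document."},
	}

	// Convert the OpenAI-style messages into chat.ChatItems for the
	// ChatCompletionRequest proto.
	items, err := openai.ToChatItems(messages)
	if err != nil {
		log.Fatalf("convert messages: %v", err)
	}
	log.Printf("converted %d chat items", len(items))
}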

func ToJsonMessage

func ToJsonMessage(chatItem *chat.ChatItem) (openai.ChatCompletionMessage, error)

Convert a single ChatItem from a ChatCompletionResponse proto into an openai.ChatCompletionMessage that can be serialized to JSON and sent back to the client. This only operates on a single ChatItem because the response from the inference server should contain only a single ChatItem per prompt. If you request N responses to your prompt, this will be called separately for each of them.
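
A sketch of the per-item conversion. The import paths below are placeholders, and the helper name is hypothetical; only ToJsonMessage comes from this package.

package example

import (
	"encoding/json"

	chat "example.com/yourmodule/chat"     // placeholder import path for the ChatItem proto package
	openai "example.com/yourmodule/openai" // placeholder import path for this package
)

// toJSONBodies converts each ChatItem in an inference response into a JSON
// payload that can be sent back to an OpenAI-style client; ToJsonMessage is
// called once per ChatItem, matching the one-item-per-prompt behaviour above.
func toJSONBodies(items []*chat.ChatItem) ([][]byte, error) {
	bodies := make([][]byte, 0, len(items))
	for _, item := range items {
		msg, err := openai.ToJsonMessage(item)
		if err != nil {
			return nil, err
		}
		b, err := json.Marshal(msg)
		if err != nil {
			return nil, err
		}
		bodies = append(bodies, b)
	}
	return bodies, nil
}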

Types

type AudioRequest

type AudioRequest struct {
	Model          string  `form:"model"`
	Prompt         string  `form:"prompt"`
	ResponseFormat string  `form:"response_format"`
	Temperature    float32 `form:"temperature"`
	InputLanguage  string  `form:"language"`
}
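
A hypothetical gin handler showing how these form tags could be bound from a multipart transcription request; the handler name, route wiring, and import path are assumptions.

package example

import (
	"net/http"

	"github.com/gin-gonic/gin"

	openai "example.com/yourmodule/openai" // placeholder import path
)

// transcribe binds the form fields of AudioRequest from an incoming request.
func transcribe(c *gin.Context) {
	var req openai.AudioRequest
	if err := c.ShouldBind(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}
	// ... hand req.Model, req.Prompt, req.Temperature, etc. to the audio backend ...
	c.JSON(http.StatusOK, gin.H{"model": req.Model, "language": req.InputLanguage})
}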

type Embedding

type Embedding struct {
	Object    string    `json:"object"`
	Embedding []float32 `json:"embedding"`
	Index     int       `json:"index"`
}

type EmbeddingRequest

type EmbeddingRequest struct {
	// Input is a slice of strings for which you want to compute an embedding vector.
	// Each input must not exceed 2048 tokens in length.
	// OpenAI suggests replacing newlines (\n) in your input with a single space, as they
	// have observed inferior results when newlines are present.
	// E.g.
	//	"The food was delicious and the waiter..."
	Input any `json:"input"`
	// ID of the model to use. You can use the List models API to see all of your available models,
	// or see our Model overview for descriptions of them.
	Model string `json:"model"`
	// A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.
	User string `json:"user"`
}

TODO: this probably isn't necessary. EmbeddingRequest is the input to a Create embeddings request.
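
A sketch of the JSON body an OpenAI-style client would send for this struct; the model name is only an example and the import path is a placeholder.

package main

import (
	"encoding/json"
	"fmt"

	openai "example.com/yourmodule/openai" // placeholder import path
)

func main() {
	req := openai.EmbeddingRequest{
		Input: []string{"The food was delicious and the waiter..."},
		Model: "text-embedding-ada-002", // example model name
		User:  "user-1234",
	}

	body, err := json.Marshal(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
	// {"input":["The food was delicious and the waiter..."],"model":"text-embedding-ada-002","user":"user-1234"}
}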

type EmbeddingResponse

type EmbeddingResponse struct {
	Object string      `json:"object"`
	Data   []Embedding `json:"data"`
	Model  string      `json:"model"`
	Usage  Usage       `json:"usage"`
}
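
A sketch of assembling a response from computed vectors before writing it back through gin. The helper name, token counts, and import path are assumptions; Usage is documented further below.

package example

import (
	"net/http"

	"github.com/gin-gonic/gin"

	openai "example.com/yourmodule/openai" // placeholder import path
)

// writeEmbeddings wraps computed vectors in the response shape an
// OpenAI-style client expects.
func writeEmbeddings(c *gin.Context, model string, vectors [][]float32, promptTokens int) {
	resp := openai.EmbeddingResponse{
		Object: "list",
		Model:  model,
		Usage:  openai.Usage{PromptTokens: promptTokens, TotalTokens: promptTokens},
	}
	for i, v := range vectors {
		resp.Data = append(resp.Data, openai.Embedding{Object: "embedding", Embedding: v, Index: i})
	}
	c.JSON(http.StatusOK, resp)
}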

type OpenAIHandler

type OpenAIHandler struct {
	Prefix string
}

func (*OpenAIHandler) Routes

func (o *OpenAIHandler) Routes(r *gin.Engine)
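
A minimal sketch of wiring the handler into a gin engine; the Prefix value, listen address, and import path are assumptions.

package main

import (
	"log"

	"github.com/gin-gonic/gin"

	openai "example.com/yourmodule/openai" // placeholder import path
)

func main() {
	r := gin.Default()

	// Register the OpenAI-compatible routes on the engine.
	h := &openai.OpenAIHandler{Prefix: "/v1"} // prefix value is an assumption
	h.Routes(r)

	log.Fatal(r.Run(":8080"))
}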

type Usage

type Usage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
