output

package
v0.0.0-...-5c79d48
Published: Feb 15, 2024 License: AGPL-3.0 Imports: 9 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func IsInvalidOutputError

func IsInvalidOutputError(err error) bool

IsInvalidOutputError returns true if the error is an invalidOutputError.

func NewInvalidOutputError

func NewInvalidOutputError(coarse, detail string) error

NewInvalidOutputError builds an error caused by the output of an LLM.
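
A hedged sketch of how the two error helpers fit together; the coarse and detail strings below are invented for illustration:

err := NewInvalidOutputError("malformed JSON", "model wrapped the object in extra prose")
if IsInvalidOutputError(err) {
	// The failure originated in the LLM's output rather than in the caller's
	// code, so the caller might re-prompt the model instead of failing hard.
}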

func ParseJSONFromModel

func ParseJSONFromModel[T any](text string) (T, error)

ParseJSONFromModel parses a JSON object from the model output, attempting to strip contaminating natural-language text that the model may bundle with the JSON so that it does not needlessly trigger self-correction. The output type is generic, so the structure of the expected JSON depends on T.
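
For illustration, ParseJSONFromModel can be instantiated with any of this package's JSON-tagged types; the parseAccessRequest wrapper below is hypothetical and assumes the model was asked for an access request suggestion:

func parseAccessRequest(raw string) (AccessRequest, error) {
	// raw may look like:
	//   Sure! Here is the access request you asked for:
	//   {"roles": ["editor"], "resources": [], "reason": "debugging", "suggested_reviewers": ["alice"]}
	// ParseJSONFromModel strips the surrounding prose before decoding into AccessRequest.
	return ParseJSONFromModel[AccessRequest](raw)
}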

func StreamToDeltas

func StreamToDeltas(stream *openai.ChatCompletionStream) chan string

StreamToDeltas converts an openai.ChatCompletionStream into a channel of strings. This channel can then be consumed manually to search for specific markers, or converted directly into a StreamingMessage with NewStreamingMessage.
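
A sketch of manual consumption; stream is assumed to be a *openai.ChatCompletionStream obtained from the go-openai client, and the marker being searched for is hypothetical:

deltas := StreamToDeltas(stream)
var sb strings.Builder
for delta := range deltas {
	sb.WriteString(delta)
	// Accumulate deltas and look for an application-specific marker
	// before deciding how to handle the rest of the stream.
	if strings.HasPrefix(sb.String(), "COMMAND") {
		// ...
	}
}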

Types

type AccessRequest

type AccessRequest struct {
	Roles              []string   `json:"roles"`
	Resources          []Resource `json:"resources"`
	Reason             string     `json:"reason"`
	SuggestedReviewers []string   `json:"suggested_reviewers"`
}

AccessRequest represents an access request suggestion returned by OpenAI's completion API.

type CompletionCommand

type CompletionCommand struct {
	Command string   `json:"command,omitempty"`
	Nodes   []string `json:"nodes,omitempty"`
	Labels  []Label  `json:"labels,omitempty"`
}

CompletionCommand represents a command suggestion returned by OpenAI's completion API.

type GeneratedCommand

type GeneratedCommand struct {
	Command string `json:"command"`
}

GeneratedCommand represents a Bash command generated by the LLM.

type Label

type Label struct {
	Key   string `json:"key"`
	Value string `json:"value"`
}

Label represents a label returned by OpenAI's completion API.

type Message

type Message struct {
	Content string
}

Message represents a new message within a live conversation.

type Resource

type Resource struct {
	// The resource type.
	Type string `json:"type"`

	// The resource name.
	Name string `json:"id"`

	// Set if a display-friendly alternative name is available.
	FriendlyName string `json:"friendlyName,omitempty"`
}

Resource represents a resource suggestion returned by OpenAI's completion API.

type StreamingMessage

type StreamingMessage struct {
	Parts <-chan string
}

StreamingMessage represents a new message that is being streamed from the LLM.

func NewStreamingMessage

func NewStreamingMessage(deltas <-chan string, alreadyStreamed, prefix string) (*StreamingMessage, *tokens.AsynchronousTokenCounter, error)

NewStreamingMessage takes a string channel and converts it to a StreamingMessage. If content was already streamed, it must be passed through the alreadyStreamed parameter. If the already streamed content contains a prefix that must be stripped (like a marker to identify the kind of response the model is providing), the prefix can be passed through the prefix parameter. It will be stripped but will still be reflected in the token count.

func (*StreamingMessage) WaitAndConsume

func (msg *StreamingMessage) WaitAndConsume() string

WaitAndConsume waits until the message stream is over and returns the full message. This can only be called once on a message as it empties its Parts channel.
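
Putting the streaming pieces together, a minimal sketch; the "COMMAND" marker, the partially streamed content, and the surrounding error handling are assumptions rather than part of this package:

deltas := StreamToDeltas(stream)

// Suppose a prefix of the response was already read while detecting the kind
// of answer the model is giving. Pass it as alreadyStreamed, and pass the
// hypothetical "COMMAND" marker as prefix so it is stripped from the message
// while still being reflected in the token count.
msg, counter, err := NewStreamingMessage(deltas, `COMMAND{"command": "ls`, "COMMAND")
if err != nil {
	return err
}

full := msg.WaitAndConsume() // blocks until the stream ends and drains msg.Parts
_ = counter                  // *tokens.AsynchronousTokenCounter for the streamed content
_ = full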
