api

package module
v2.7.4
Published: May 8, 2024 License: MIT Imports: 6 Imported by: 9

README

Cohere Go Library

The Cohere Go library provides convenient access to the Cohere API from Go.

✨🪩✨ Announcing Cohere's new Go SDK ✨🪩✨

We are very excited to publish this brand new Go SDK. We now officially support Go and will continuously update this library with all of the latest features in our API. Please open an issue with any feedback so that we can continue to improve the developer experience!

Requirements

This module requires Go version >= 1.18.

Installation

Run the following command to use the Cohere Go library in your module:

go get github.com/cohere-ai/cohere-go/v2

Usage

import cohereclient "github.com/cohere-ai/cohere-go/v2/client"

client := cohereclient.NewClient(cohereclient.WithToken("<YOUR_AUTH_TOKEN>"))

Chat

import (
  cohere       "github.com/cohere-ai/cohere-go/v2"
  cohereclient "github.com/cohere-ai/cohere-go/v2/client"
)

client := cohereclient.NewClient(cohereclient.WithToken("<YOUR_AUTH_TOKEN>"))
response, err := client.Chat(
  context.TODO(),
  &cohere.ChatRequest{
    Message: "How is the weather today?",
  },
)
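
The returned response mirrors the API's JSON response. As a minimal sketch of reading the reply (this assumes the non-streamed chat response exposes a Text field with the generated reply, as the NonStreamedChatResponse type referenced later in this documentation does):

if err != nil {
  log.Fatal(err)
}
fmt.Println(response.Text)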

Timeouts

Setting a timeout for an individual request is as simple as using the standard context package. Setting a one-second timeout for a single API call looks like the following:

ctx, cancel := context.WithTimeout(context.TODO(), time.Second)
defer cancel()

response, err := client.Chat(
  ctx,
  &cohere.ChatRequest{
    Message: "How is the weather today?",
  },
)

Client Options

A variety of client options are included to adapt the behavior of the library, such as configuring an authorization token to be sent on every request or providing your own instrumented *http.Client. Both of these options are shown below:

client := cohereclient.NewClient(
  cohereclient.WithToken("<YOUR_AUTH_TOKEN>"),
  cohereclient.WithHTTPClient(
    &http.Client{
      Timeout: 5 * time.Second,
    },
  ),
)

Providing your own *http.Client is recommended. Otherwise, the http.DefaultClient will be used, and your client will wait indefinitely for a response (unless the per-request, context-based timeout is used).

Errors

Structured error types are returned from API calls that return non-success status codes. For example, you can check if the error was due to a bad request (i.e. status code 400) with the following:

response, err := client.Generate(
  context.TODO(),
  &cohere.GenerateRequest{
    Prompt: "invalid prompt",
  },
)
if err != nil {
  if badRequestErr, ok := err.(*cohere.BadRequestError); ok {
    // Do something with the bad request ...
  }
  return err
}

These errors are also compatible with the errors.Is and errors.As APIs, so you can access the error like so:

response, err := client.Generate(
  context.TODO(),
  &cohere.GenerateRequest{
    Prompt: "invalid prompt",
  },
)
if err != nil {
  var badRequestErr *cohere.BadRequestError
  if errors.As(err, &badRequestErr) {
    // Do something with the bad request ...
  }
  return err
}

If you'd like to wrap the errors with additional information and still retain the ability to access the type with errors.Is and errors.As, you can use the %w directive:

response, err := client.Generate(
  context.TODO(),
  &cohere.GenerateRequest{
    Prompt: "invalid prompt",
  },
)
if err != nil {
  return fmt.Errorf("failed to generate response: %w", err)
}
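
Because %w preserves the error chain, errors.As can still recover the typed error from the wrapped value. A small sketch:

wrappedErr := fmt.Errorf("failed to generate response: %w", err)

var badRequestErr *cohere.BadRequestError
if errors.As(wrappedErr, &badRequestErr) {
  // The original *cohere.BadRequestError is still accessible here.
}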

Streaming

Calling any of Cohere's streaming APIs is easy. Simply create a new stream and read each message returned from the server until it's done:

stream, err := client.ChatStream(
  context.TODO(),
  &cohere.ChatStreamRequest{
    Message: "Please write a short story about the weather today.",
  },
)
if err != nil {
  return nil, err
}

// Make sure to close the stream when you're done reading.
// This is easily handled with defer.
defer stream.Close()

for {
  message, err := stream.Recv()
  if errors.Is(err, io.EOF) {
    // An io.EOF error means the server is done sending messages
    // and should be treated as a success.
    break
  }
  if err != nil {
    // The stream has encountered a non-recoverable error. Propagate the
    // error by simply returning the error like usual.
    return nil, err
  }
  // Do something with the message!
}

In summary, callers of the stream API use stream.Recv() to receive a new message from the stream. The stream is complete when the io.EOF error is returned, and if a non-io.EOF error is returned, it should be treated just like any other non-nil error.
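
What each message contains depends on the event type. As a hedged sketch of handling messages inside the read loop (this assumes the streamed message exposes event-specific pointer fields such as TextGeneration and StreamEnd, matching the event types documented below; check the message fields in your version of the SDK):

if message.TextGeneration != nil {
  // Print each batch of generated text as it arrives.
  fmt.Print(message.TextGeneration.Text)
}
if message.StreamEnd != nil {
  // The final event carries the finish reason and consolidated response.
  fmt.Println(message.StreamEnd.FinishReason)
}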

Beta Status

This SDK is in beta, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning your dependency to a specific version. This way, you can install the same version each time without unexpected breaking changes.

Contributing

While we value open-source contributions to this SDK, this library is generated programmatically. Additions made directly to this library would have to be moved over to our generation code, otherwise they would be overwritten upon the next generated release. Feel free to open a PR as a proof of concept, but know that we will not be able to merge it as-is. We suggest opening an issue first to discuss with us!

On the other hand, contributions to the README are always very welcome!

Documentation

Index

Constants

This section is empty.

Variables

var Environments = struct {
	Production string
}{
	Production: "https://api.cohere.ai/v1",
}

Environments defines all of the API environments. These values can be used with the WithBaseURL RequestOption to override the client's default environment, if any.
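
For example, a sketch of pointing the client at the production environment explicitly (this assumes the client package exposes the WithBaseURL option described above):

import (
  cohere       "github.com/cohere-ai/cohere-go/v2"
  cohereclient "github.com/cohere-ai/cohere-go/v2/client"
)

client := cohereclient.NewClient(
  cohereclient.WithToken("<YOUR_AUTH_TOKEN>"),
  cohereclient.WithBaseURL(cohere.Environments.Production),
)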

Functions

func Bool

func Bool(b bool) *bool

Bool returns a pointer to the given bool value.
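
These pointer helpers exist because optional request fields are declared as pointer types. A sketch of populating optional ChatRequest fields with them (Temperature and MaxTokens are documented on ChatRequest below):

request := &cohere.ChatRequest{
  Message:     "How is the weather today?",
  Temperature: cohere.Float64(0.3),
  MaxTokens:   cohere.Int(256),
}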

func Byte

func Byte(b byte) *byte

Byte returns a pointer to the given byte value.

func Complex128

func Complex128(c complex128) *complex128

Complex128 returns a pointer to the given complex128 value.

func Complex64

func Complex64(c complex64) *complex64

Complex64 returns a pointer to the given complex64 value.

func Float32

func Float32(f float32) *float32

Float32 returns a pointer to the given float32 value.

func Float64

func Float64(f float64) *float64

Float64 returns a pointer to the given float64 value.

func Int

func Int(i int) *int

Int returns a pointer to the given int value.

func Int16

func Int16(i int16) *int16

Int16 returns a pointer to the given int16 value.

func Int32

func Int32(i int32) *int32

Int32 returns a pointer to the given int32 value.

func Int64

func Int64(i int64) *int64

Int64 returns a pointer to the given int64 value.

func Int8

func Int8(i int8) *int8

Int8 returns a pointer to the given int8 value.

func MustParseDate added in v2.6.0

func MustParseDate(date string) time.Time

MustParseDate attempts to parse the given string as a date time.Time, and panics upon failure.

func MustParseDateTime added in v2.6.0

func MustParseDateTime(datetime string) time.Time

MustParseDateTime attempts to parse the given string as a datetime time.Time, and panics upon failure.
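
These helpers are convenient in tests and fixtures, since invalid input panics instead of returning an error. A sketch (the accepted layouts are an assumption; RFC 3339 datetimes and ISO 8601 dates are typical):

createdAt := cohere.MustParseDateTime("2024-05-08T00:00:00Z")
day := cohere.MustParseDate("2024-05-08")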

func Rune

func Rune(r rune) *rune

Rune returns a pointer to the given rune value.

func String

func String(s string) *string

String returns a pointer to the given string value.

func Time

func Time(t time.Time) *time.Time

Time returns a pointer to the given time.Time value.

func UUID added in v2.6.0

func UUID(u uuid.UUID) *uuid.UUID

UUID returns a pointer to the given uuid.UUID value.

func Uint

func Uint(u uint) *uint

Uint returns a pointer to the given uint value.

func Uint16

func Uint16(u uint16) *uint16

Uint16 returns a pointer to the given uint16 value.

func Uint32

func Uint32(u uint32) *uint32

Uint32 returns a pointer to the given uint32 value.

func Uint64

func Uint64(u uint64) *uint64

Uint64 returns a pointer to the given uint64 value.

func Uint8

func Uint8(u uint8) *uint8

Uint8 returns a pointer to the given uint8 value.

func Uintptr

func Uintptr(u uintptr) *uintptr

Uintptr returns a pointer to the given uintptr value.

Types

type ApiMeta

type ApiMeta struct {
	ApiVersion  *ApiMetaApiVersion  `json:"api_version,omitempty" url:"api_version,omitempty"`
	BilledUnits *ApiMetaBilledUnits `json:"billed_units,omitempty" url:"billed_units,omitempty"`
	Tokens      *ApiMetaTokens      `json:"tokens,omitempty" url:"tokens,omitempty"`
	Warnings    []string            `json:"warnings,omitempty" url:"warnings,omitempty"`
	// contains filtered or unexported fields
}

func (*ApiMeta) String

func (a *ApiMeta) String() string

func (*ApiMeta) UnmarshalJSON

func (a *ApiMeta) UnmarshalJSON(data []byte) error

type ApiMetaApiVersion

type ApiMetaApiVersion struct {
	Version        string `json:"version" url:"version"`
	IsDeprecated   *bool  `json:"is_deprecated,omitempty" url:"is_deprecated,omitempty"`
	IsExperimental *bool  `json:"is_experimental,omitempty" url:"is_experimental,omitempty"`
	// contains filtered or unexported fields
}

func (*ApiMetaApiVersion) String

func (a *ApiMetaApiVersion) String() string

func (*ApiMetaApiVersion) UnmarshalJSON

func (a *ApiMetaApiVersion) UnmarshalJSON(data []byte) error

type ApiMetaBilledUnits added in v2.5.0

type ApiMetaBilledUnits struct {
	// The number of billed input tokens.
	InputTokens *float64 `json:"input_tokens,omitempty" url:"input_tokens,omitempty"`
	// The number of billed output tokens.
	OutputTokens *float64 `json:"output_tokens,omitempty" url:"output_tokens,omitempty"`
	// The number of billed search units.
	SearchUnits *float64 `json:"search_units,omitempty" url:"search_units,omitempty"`
	// The number of billed classifications units.
	Classifications *float64 `json:"classifications,omitempty" url:"classifications,omitempty"`
	// contains filtered or unexported fields
}

func (*ApiMetaBilledUnits) String added in v2.5.0

func (a *ApiMetaBilledUnits) String() string

func (*ApiMetaBilledUnits) UnmarshalJSON added in v2.5.0

func (a *ApiMetaBilledUnits) UnmarshalJSON(data []byte) error

type ApiMetaTokens added in v2.7.3

type ApiMetaTokens struct {
	// The number of tokens used as input to the model.
	InputTokens *float64 `json:"input_tokens,omitempty" url:"input_tokens,omitempty"`
	// The number of tokens produced by the model.
	OutputTokens *float64 `json:"output_tokens,omitempty" url:"output_tokens,omitempty"`
	// contains filtered or unexported fields
}

func (*ApiMetaTokens) String added in v2.7.3

func (a *ApiMetaTokens) String() string

func (*ApiMetaTokens) UnmarshalJSON added in v2.7.3

func (a *ApiMetaTokens) UnmarshalJSON(data []byte) error

type AuthTokenType added in v2.5.0

type AuthTokenType string

The token_type specifies the way the token is passed in the Authorization header. Valid values are "bearer", "basic", and "noscheme".

const (
	AuthTokenTypeBearer   AuthTokenType = "bearer"
	AuthTokenTypeBasic    AuthTokenType = "basic"
	AuthTokenTypeNoscheme AuthTokenType = "noscheme"
)

func NewAuthTokenTypeFromString added in v2.5.0

func NewAuthTokenTypeFromString(s string) (AuthTokenType, error)

func (AuthTokenType) Ptr added in v2.5.0

func (a AuthTokenType) Ptr() *AuthTokenType
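
A sketch of parsing a token type from configuration and converting it for use in an optional field:

tokenType, err := cohere.NewAuthTokenTypeFromString("bearer")
if err != nil {
  // The input was not one of "bearer", "basic", or "noscheme".
  return err
}
// Ptr yields a *AuthTokenType for optional struct fields.
tokenTypePtr := tokenType.Ptr()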

type BadRequestError

type BadRequestError struct {
	*core.APIError
	Body interface{}
}

func (*BadRequestError) MarshalJSON

func (b *BadRequestError) MarshalJSON() ([]byte, error)

func (*BadRequestError) UnmarshalJSON

func (b *BadRequestError) UnmarshalJSON(data []byte) error

func (*BadRequestError) Unwrap

func (b *BadRequestError) Unwrap() error

type ChatCitation

type ChatCitation struct {
	// The index of text that the citation starts at, counting from zero. For example, a generation of `Hello, world!` with a citation on `world` would have a start value of `7`. This is because the citation starts at `w`, which is the seventh character.
	Start int `json:"start" url:"start"`
	// The index of text that the citation ends after, counting from zero. For example, a generation of `Hello, world!` with a citation on `world` would have an end value of `11`. This is because the citation ends after `d`, which is the eleventh character.
	End int `json:"end" url:"end"`
	// The text of the citation. For example, a generation of `Hello, world!` with a citation of `world` would have a text value of `world`.
	Text string `json:"text" url:"text"`
	// Identifiers of documents cited by this section of the generated reply.
	DocumentIds []string `json:"document_ids,omitempty" url:"document_ids,omitempty"`
	// contains filtered or unexported fields
}

A section of the generated reply which cites external knowledge.

func (*ChatCitation) String

func (c *ChatCitation) String() string

func (*ChatCitation) UnmarshalJSON

func (c *ChatCitation) UnmarshalJSON(data []byte) error

type ChatCitationGenerationEvent

type ChatCitationGenerationEvent struct {
	// Citations for the generated reply.
	Citations []*ChatCitation `json:"citations,omitempty" url:"citations,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatCitationGenerationEvent) String

func (c *ChatCitationGenerationEvent) String() string

func (*ChatCitationGenerationEvent) UnmarshalJSON

func (c *ChatCitationGenerationEvent) UnmarshalJSON(data []byte) error

type ChatConnector

type ChatConnector struct {
	// The identifier of the connector.
	Id string `json:"id" url:"id"`
	// When specified, this user access token will be passed to the connector in the Authorization header instead of the Cohere generated one.
	UserAccessToken *string `json:"user_access_token,omitempty" url:"user_access_token,omitempty"`
	// Defaults to `false`.
	//
	// When `true`, the request will continue if this connector returned an error.
	ContinueOnFailure *bool `json:"continue_on_failure,omitempty" url:"continue_on_failure,omitempty"`
	// Provides the connector with different settings at request time. The key/value pairs of this object are specific to each connector.
	//
	// For example, the connector `web-search` supports the `site` option, which limits search results to the specified domain.
	Options map[string]interface{} `json:"options,omitempty" url:"options,omitempty"`
	// contains filtered or unexported fields
}

The connector used for fetching documents.

func (*ChatConnector) String

func (c *ChatConnector) String() string

func (*ChatConnector) UnmarshalJSON

func (c *ChatConnector) UnmarshalJSON(data []byte) error
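
For instance, enabling RAG over web search is a matter of attaching a connector to the request. A sketch using the `web-search` connector and its documented `site` option (the domain value is illustrative):

request := &cohere.ChatRequest{
  Message: "What is the latest Cohere model?",
  Connectors: []*cohere.ChatConnector{
    {
      Id:      "web-search",
      Options: map[string]interface{}{"site": "docs.cohere.com"},
    },
  },
}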

type ChatDataMetrics added in v2.6.0

type ChatDataMetrics struct {
	// The sum of all turns of valid train examples.
	NumTrainTurns *string `json:"num_train_turns,omitempty" url:"num_train_turns,omitempty"`
	// The sum of all turns of valid eval examples.
	NumEvalTurns *string `json:"num_eval_turns,omitempty" url:"num_eval_turns,omitempty"`
	// The preamble of this dataset.
	Preamble *string `json:"preamble,omitempty" url:"preamble,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatDataMetrics) String added in v2.6.0

func (c *ChatDataMetrics) String() string

func (*ChatDataMetrics) UnmarshalJSON added in v2.6.0

func (c *ChatDataMetrics) UnmarshalJSON(data []byte) error

type ChatDocument

type ChatDocument = map[string]string

Relevant information that could be used by the model to generate a more accurate reply. The contents of each document are generally short (under 300 words), and are passed in the form of a dictionary of strings. Some suggested keys are "text", "author", "date". Both the key name and the value will be passed to the model.
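
Since ChatDocument is a plain map[string]string, documents can be written as literals. A sketch using the suggested keys (the penguin examples come from the Documents field documentation below):

documents := []cohere.ChatDocument{
  {"title": "Tall penguins", "text": "Emperor penguins are the tallest."},
  {"title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."},
}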

type ChatMessage

type ChatMessage struct {
	// One of `CHATBOT`, `SYSTEM`, or `USER` to identify who the message is coming from.
	Role ChatMessageRole `json:"role" url:"role"`
	// Contents of the chat message.
	Message string `json:"message" url:"message"`
	// contains filtered or unexported fields
}

Represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.

The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
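
A sketch of assembling a history for the ChatHistory request field, using the role constants defined below:

chatHistory := []*cohere.ChatMessage{
  {Role: cohere.ChatMessageRoleUser, Message: "Who discovered gravity?"},
  {Role: cohere.ChatMessageRoleChatbot, Message: "Isaac Newton is generally credited with discovering gravity."},
}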

func (*ChatMessage) String

func (c *ChatMessage) String() string

func (*ChatMessage) UnmarshalJSON

func (c *ChatMessage) UnmarshalJSON(data []byte) error

type ChatMessageRole

type ChatMessageRole string

One of `CHATBOT`, `SYSTEM`, or `USER` to identify who the message is coming from.

const (
	ChatMessageRoleChatbot ChatMessageRole = "CHATBOT"
	ChatMessageRoleSystem  ChatMessageRole = "SYSTEM"
	ChatMessageRoleUser    ChatMessageRole = "USER"
)

func NewChatMessageRoleFromString

func NewChatMessageRoleFromString(s string) (ChatMessageRole, error)

func (ChatMessageRole) Ptr

type ChatRequest

type ChatRequest struct {
	// Text input for the model to respond to.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Message string `json:"message" url:"message"`
	// Defaults to `command-r-plus`.
	//
	// The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model.
	// Compatible Deployments: Cohere Platform, Private Deployments
	Model *string `json:"model,omitempty" url:"model,omitempty"`
	// When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the `SYSTEM` role.
	//
	// The `SYSTEM` role is also used for the contents of the optional `chat_history=` parameter. When used with the `chat_history=` parameter it adds content throughout a conversation. Conversely, when used with the `preamble=` parameter it adds content at the start of the conversation only.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Preamble *string `json:"preamble,omitempty" url:"preamble,omitempty"`
	// A list of previous messages between the user and the model, giving the model conversational context for responding to the user's `message`.
	//
	// Each item represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
	//
	// The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	ChatHistory []*ChatMessage `json:"chat_history,omitempty" url:"chat_history,omitempty"`
	// An alternative to `chat_history`.
	//
	// Providing a `conversation_id` creates or resumes a persisted conversation with the specified ID. The ID can be any non-empty string.
	// Compatible Deployments: Cohere Platform
	ConversationId *string `json:"conversation_id,omitempty" url:"conversation_id,omitempty"`
	// Defaults to `AUTO` when `connectors` are specified and `OFF` in all other cases.
	//
	// Dictates how the prompt will be constructed.
	//
	// With `prompt_truncation` set to "AUTO", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be changed and ranked by relevance.
	//
	// With `prompt_truncation` set to "AUTO_PRESERVE_ORDER", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be preserved as they are inputted into the API.
	//
	// With `prompt_truncation` set to "OFF", no elements will be dropped. If the sum of the inputs exceeds the model's context length limit, a `TooManyTokens` error will be returned.
	// Compatible Deployments: Cohere Platform only; AUTO_PRESERVE_ORDER: Azure, AWS Sagemaker, Private Deployments
	PromptTruncation *ChatRequestPromptTruncation `json:"prompt_truncation,omitempty" url:"prompt_truncation,omitempty"`
	// Accepts `{"id": "web-search"}`, and/or the `"id"` for a custom [connector](https://docs.cohere.com/docs/connectors), if you've [created](https://docs.cohere.com/docs/creating-and-deploying-a-connector) one.
	//
	// When specified, the model's reply will be enriched with information found by querying each of the connectors (RAG).
	// Compatible Deployments: Cohere Platform
	Connectors []*ChatConnector `json:"connectors,omitempty" url:"connectors,omitempty"`
	// Defaults to `false`.
	//
	// When `true`, the response will only contain a list of generated search queries, but no search will take place, and no reply from the model to the user's `message` will be generated.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	SearchQueriesOnly *bool `json:"search_queries_only,omitempty" url:"search_queries_only,omitempty"`
	// A list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-string dictionary.
	//
	// Example:
	// `[
	//
	//	{ "title": "Tall penguins", "text": "Emperor penguins are the tallest." },
	//	{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica." },
	//
	// ]`
	//
	// Keys and values from each document will be serialized to a string and passed to the model. The resulting generation will include citations that reference some of these documents.
	//
	// Some suggested keys are "text", "author", and "date". For better generation quality, it is recommended to keep the total word count of the strings in the dictionary to under 300 words.
	//
	// An `id` field (string) can be optionally supplied to identify the document in the citations. This field will not be passed to the model.
	//
	// An `_excludes` field (array of strings) can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields will still show up in the citation object. The "_excludes" field will not be passed to the model.
	//
	// See ['Document Mode'](https://docs.cohere.com/docs/retrieval-augmented-generation-rag#document-mode) in the guide for more information.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Documents []ChatDocument `json:"documents,omitempty" url:"documents,omitempty"`
	// Defaults to `"accurate"`.
	//
	// Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want `"accurate"` results or `"fast"` results.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	CitationQuality *ChatRequestCitationQuality `json:"citation_quality,omitempty" url:"citation_quality,omitempty"`
	// Defaults to `0.3`.
	//
	// A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.
	//
	// Randomness can be further maximized by increasing the value of the `p` parameter.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Temperature *float64 `json:"temperature,omitempty" url:"temperature,omitempty"`
	// The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	MaxTokens *int `json:"max_tokens,omitempty" url:"max_tokens,omitempty"`
	// The maximum number of input tokens to send to the model. If not specified, `max_input_tokens` is the model's context length limit minus a small buffer.
	//
	// Input will be truncated according to the `prompt_truncation` parameter.
	// Compatible Deployments: Cohere Platform
	MaxInputTokens *int `json:"max_input_tokens,omitempty" url:"max_input_tokens,omitempty"`
	// Ensures only the top `k` most likely tokens are considered for generation at each step.
	// Defaults to `0`, min value of `0`, max value of `500`.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	K *int `json:"k,omitempty" url:"k,omitempty"`
	// Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
	// Defaults to `0.75`, min value of `0.01`, max value of `0.99`.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	P *float64 `json:"p,omitempty" url:"p,omitempty"`
	// If specified, the backend will make a best effort to sample tokens
	// deterministically, such that repeated requests with the same
	// seed and parameters should return the same result. However,
	// determinism cannot be totally guaranteed.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Seed *float64 `json:"seed,omitempty" url:"seed,omitempty"`
	// A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	StopSequences []string `json:"stop_sequences,omitempty" url:"stop_sequences,omitempty"`
	// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.
	//
	// Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	FrequencyPenalty *float64 `json:"frequency_penalty,omitempty" url:"frequency_penalty,omitempty"`
	// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.
	//
	// Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	PresencePenalty *float64 `json:"presence_penalty,omitempty" url:"presence_penalty,omitempty"`
	// When enabled, the user's prompt will be sent to the model without
	// any pre-processing.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	RawPrompting *bool `json:"raw_prompting,omitempty" url:"raw_prompting,omitempty"`
	// The prompt is returned in the `prompt` response field when this is enabled.
	ReturnPrompt *bool `json:"return_prompt,omitempty" url:"return_prompt,omitempty"`
	// A list of available tools (functions) that the model may suggest invoking before producing a text response.
	//
	// When `tools` is passed (without `tool_results`), the `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Tools []*Tool `json:"tools,omitempty" url:"tools,omitempty"`
	// A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
	// Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
	//
	// **Note**: `outputs` must be a list of objects. If your tool returns a single object (eg `{"status": 200}`), make sure to wrap it in a list.
	// ```
	// tool_results = [
	//
	//	{
	//	  "call": {
	//	    "name": <tool name>,
	//	    "parameters": {
	//	      <param name>: <param value>
	//	    }
	//	  },
	//	  "outputs": [{
	//	    <key>: <value>
	//	  }]
	//	},
	//	...
	//
	// ]
	// ```
	// **Note**: Chat calls with `tool_results` should not be included in the Chat history to avoid duplication of the message text.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	ToolResults []*ChatRequestToolResultsItem `json:"tool_results,omitempty" url:"tool_results,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatRequest) MarshalJSON

func (c *ChatRequest) MarshalJSON() ([]byte, error)

func (*ChatRequest) Stream

func (c *ChatRequest) Stream() bool

func (*ChatRequest) UnmarshalJSON

func (c *ChatRequest) UnmarshalJSON(data []byte) error

type ChatRequestCitationQuality

type ChatRequestCitationQuality string

Defaults to `"accurate"`.

Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want `"accurate"` results or `"fast"` results. Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments

const (
	ChatRequestCitationQualityFast     ChatRequestCitationQuality = "fast"
	ChatRequestCitationQualityAccurate ChatRequestCitationQuality = "accurate"
)

func NewChatRequestCitationQualityFromString

func NewChatRequestCitationQualityFromString(s string) (ChatRequestCitationQuality, error)

func (ChatRequestCitationQuality) Ptr

type ChatRequestConnectorsSearchOptions added in v2.6.0

type ChatRequestConnectorsSearchOptions struct {
	Model       interface{} `json:"model,omitempty" url:"model,omitempty"`
	Temperature interface{} `json:"temperature,omitempty" url:"temperature,omitempty"`
	MaxTokens   interface{} `json:"max_tokens,omitempty" url:"max_tokens,omitempty"`
	Preamble    interface{} `json:"preamble,omitempty" url:"preamble,omitempty"`
	// If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
	Seed *float64 `json:"seed,omitempty" url:"seed,omitempty"`
	// contains filtered or unexported fields
}

(internal) Sets inference and model options for RAG search query and tool use generations. Defaults are used when options are not specified here, meaning that other parameters outside of connectors_search_options are ignored (such as model= or temperature=).

func (*ChatRequestConnectorsSearchOptions) String added in v2.6.0

func (*ChatRequestConnectorsSearchOptions) UnmarshalJSON added in v2.6.0

func (c *ChatRequestConnectorsSearchOptions) UnmarshalJSON(data []byte) error

type ChatRequestPromptTruncation

type ChatRequestPromptTruncation string

Defaults to `AUTO` when `connectors` are specified and `OFF` in all other cases.

Dictates how the prompt will be constructed.

With `prompt_truncation` set to "AUTO", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be changed and ranked by relevance.

With `prompt_truncation` set to "AUTO_PRESERVE_ORDER", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be preserved as they are inputted into the API.

With `prompt_truncation` set to "OFF", no elements will be dropped. If the sum of the inputs exceeds the model's context length limit, a `TooManyTokens` error will be returned. Compatible Deployments: Cohere Platform Only AUTO_PRESERVE_ORDER: Azure, AWS Sagemaker, Private Deployments

const (
	ChatRequestPromptTruncationOff               ChatRequestPromptTruncation = "OFF"
	ChatRequestPromptTruncationAuto              ChatRequestPromptTruncation = "AUTO"
	ChatRequestPromptTruncationAutoPreserveOrder ChatRequestPromptTruncation = "AUTO_PRESERVE_ORDER"
)

func NewChatRequestPromptTruncationFromString

func NewChatRequestPromptTruncationFromString(s string) (ChatRequestPromptTruncation, error)

func (ChatRequestPromptTruncation) Ptr
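
A sketch of selecting a truncation mode on a request; Ptr converts the constant to the pointer type that the optional PromptTruncation field expects:

request := &cohere.ChatRequest{
  Message:          "Summarize the attached documents.",
  PromptTruncation: cohere.ChatRequestPromptTruncationAutoPreserveOrder.Ptr(),
}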

type ChatRequestToolResultsItem added in v2.6.0

type ChatRequestToolResultsItem struct {
	Call    *ToolCall                `json:"call,omitempty" url:"call,omitempty"`
	Outputs []map[string]interface{} `json:"outputs,omitempty" url:"outputs,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatRequestToolResultsItem) String added in v2.6.0

func (c *ChatRequestToolResultsItem) String() string

func (*ChatRequestToolResultsItem) UnmarshalJSON added in v2.6.0

func (c *ChatRequestToolResultsItem) UnmarshalJSON(data []byte) error
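
A sketch of building tool results per the note on the ToolResults field above, wrapping a single output object in a list (the Name and Parameters fields on ToolCall are an assumption based on the schema shown in that note):

toolResults := []*cohere.ChatRequestToolResultsItem{
  {
    Call: &cohere.ToolCall{
      Name:       "get_weather",
      Parameters: map[string]interface{}{"city": "Toronto"},
    },
    // A single object such as {"status": 200} must be wrapped in a list.
    Outputs: []map[string]interface{}{
      {"status": 200, "temperature": "20C"},
    },
  },
}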

type ChatSearchQueriesGenerationEvent

type ChatSearchQueriesGenerationEvent struct {
	// Generated search queries, meant to be used as part of the RAG flow.
	SearchQueries []*ChatSearchQuery `json:"search_queries,omitempty" url:"search_queries,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatSearchQueriesGenerationEvent) String

func (*ChatSearchQueriesGenerationEvent) UnmarshalJSON

func (c *ChatSearchQueriesGenerationEvent) UnmarshalJSON(data []byte) error

type ChatSearchQuery

type ChatSearchQuery struct {
	// The text of the search query.
	Text string `json:"text" url:"text"`
	// Unique identifier for the generated search query. Useful for submitting feedback.
	GenerationId string `json:"generation_id" url:"generation_id"`
	// contains filtered or unexported fields
}

The generated search query. Contains the text of the query and a unique identifier for the query.

func (*ChatSearchQuery) String

func (c *ChatSearchQuery) String() string

func (*ChatSearchQuery) UnmarshalJSON

func (c *ChatSearchQuery) UnmarshalJSON(data []byte) error

type ChatSearchResult

type ChatSearchResult struct {
	SearchQuery *ChatSearchQuery `json:"search_query,omitempty" url:"search_query,omitempty"`
	// The connector from which this result comes.
	Connector *ChatSearchResultConnector `json:"connector,omitempty" url:"connector,omitempty"`
	// Identifiers of documents found by this search query.
	DocumentIds []string `json:"document_ids,omitempty" url:"document_ids,omitempty"`
	// An error message if the search failed.
	ErrorMessage *string `json:"error_message,omitempty" url:"error_message,omitempty"`
	// Whether a chat request should continue or not if the request to this connector fails.
	ContinueOnFailure *bool `json:"continue_on_failure,omitempty" url:"continue_on_failure,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatSearchResult) String

func (c *ChatSearchResult) String() string

func (*ChatSearchResult) UnmarshalJSON

func (c *ChatSearchResult) UnmarshalJSON(data []byte) error

type ChatSearchResultConnector added in v2.5.2

type ChatSearchResultConnector struct {
	// The identifier of the connector.
	Id string `json:"id" url:"id"`
	// contains filtered or unexported fields
}

The connector used for fetching documents.

func (*ChatSearchResultConnector) String added in v2.5.2

func (c *ChatSearchResultConnector) String() string

func (*ChatSearchResultConnector) UnmarshalJSON added in v2.5.2

func (c *ChatSearchResultConnector) UnmarshalJSON(data []byte) error

type ChatSearchResultsEvent

type ChatSearchResultsEvent struct {
	// Conducted searches and the ids of documents retrieved from each of them.
	SearchResults []*ChatSearchResult `json:"search_results,omitempty" url:"search_results,omitempty"`
	// Documents fetched from searches or provided by the user.
	Documents []ChatDocument `json:"documents,omitempty" url:"documents,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatSearchResultsEvent) String

func (c *ChatSearchResultsEvent) String() string

func (*ChatSearchResultsEvent) UnmarshalJSON

func (c *ChatSearchResultsEvent) UnmarshalJSON(data []byte) error

type ChatStreamEndEvent

type ChatStreamEndEvent struct {
	// - `COMPLETE` - the model sent back a finished reply
	// - `ERROR_LIMIT` - the reply was cut off because the model reached the maximum number of tokens for its context length
	// - `MAX_TOKENS` - the reply was cut off because the model reached the maximum number of tokens specified by the max_tokens parameter
	// - `ERROR` - something went wrong when generating the reply
	// - `ERROR_TOXIC` - the model generated a reply that was deemed toxic
	FinishReason ChatStreamEndEventFinishReason `json:"finish_reason" url:"finish_reason"`
	// The consolidated response from the model. Contains the generated reply and all the other information streamed back in the previous events.
	Response *NonStreamedChatResponse `json:"response,omitempty" url:"response,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatStreamEndEvent) String

func (c *ChatStreamEndEvent) String() string

func (*ChatStreamEndEvent) UnmarshalJSON

func (c *ChatStreamEndEvent) UnmarshalJSON(data []byte) error

type ChatStreamEndEventFinishReason

type ChatStreamEndEventFinishReason string

- `COMPLETE` - the model sent back a finished reply
- `ERROR_LIMIT` - the reply was cut off because the model reached the maximum number of tokens for its context length
- `MAX_TOKENS` - the reply was cut off because the model reached the maximum number of tokens specified by the max_tokens parameter
- `ERROR` - something went wrong when generating the reply
- `ERROR_TOXIC` - the model generated a reply that was deemed toxic

const (
	ChatStreamEndEventFinishReasonComplete   ChatStreamEndEventFinishReason = "COMPLETE"
	ChatStreamEndEventFinishReasonErrorLimit ChatStreamEndEventFinishReason = "ERROR_LIMIT"
	ChatStreamEndEventFinishReasonMaxTokens  ChatStreamEndEventFinishReason = "MAX_TOKENS"
	ChatStreamEndEventFinishReasonError      ChatStreamEndEventFinishReason = "ERROR"
	ChatStreamEndEventFinishReasonErrorToxic ChatStreamEndEventFinishReason = "ERROR_TOXIC"
)
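
A sketch of branching on the finish reason once the final event arrives (event is assumed to be the *ChatStreamEndEvent received at the end of the stream, as in the streaming example above):

switch event.FinishReason {
case cohere.ChatStreamEndEventFinishReasonComplete:
  // The model sent back a finished reply.
case cohere.ChatStreamEndEventFinishReasonMaxTokens:
  // The reply was cut off by the max_tokens parameter.
default:
  // ERROR, ERROR_LIMIT, or ERROR_TOXIC: treat the generation as failed.
}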

func NewChatStreamEndEventFinishReasonFromString

func NewChatStreamEndEventFinishReasonFromString(s string) (ChatStreamEndEventFinishReason, error)

func (ChatStreamEndEventFinishReason) Ptr

type ChatStreamEvent

type ChatStreamEvent struct {
	// contains filtered or unexported fields
}

func (*ChatStreamEvent) String

func (c *ChatStreamEvent) String() string

func (*ChatStreamEvent) UnmarshalJSON

func (c *ChatStreamEvent) UnmarshalJSON(data []byte) error

type ChatStreamRequest

type ChatStreamRequest struct {
	// Text input for the model to respond to.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Message string `json:"message" url:"message"`
	// Defaults to `command-r-plus`.
	//
	// The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model.
	// Compatible Deployments: Cohere Platform, Private Deployments
	Model *string `json:"model,omitempty" url:"model,omitempty"`
	// When specified, the default Cohere preamble will be replaced with the provided one. Preambles are a part of the prompt used to adjust the model's overall behavior and conversation style, and use the `SYSTEM` role.
	//
	// The `SYSTEM` role is also used for the contents of the optional `chat_history=` parameter. When used with the `chat_history=` parameter it adds content throughout a conversation. Conversely, when used with the `preamble=` parameter it adds content at the start of the conversation only.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Preamble *string `json:"preamble,omitempty" url:"preamble,omitempty"`
	// A list of previous messages between the user and the model, giving the model conversational context for responding to the user's `message`.
	//
	// Each item represents a single message in the chat history, excluding the current user turn. It has two properties: `role` and `message`. The `role` identifies the sender (`CHATBOT`, `SYSTEM`, or `USER`), while the `message` contains the text content.
	//
	// The chat_history parameter should not be used for `SYSTEM` messages in most cases. Instead, to add a `SYSTEM` role message at the beginning of a conversation, the `preamble` parameter should be used.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	ChatHistory []*ChatMessage `json:"chat_history,omitempty" url:"chat_history,omitempty"`
	// An alternative to `chat_history`.
	//
	// Providing a `conversation_id` creates or resumes a persisted conversation with the specified ID. The ID can be any non-empty string.
	// Compatible Deployments: Cohere Platform
	ConversationId *string `json:"conversation_id,omitempty" url:"conversation_id,omitempty"`
	// Defaults to `AUTO` when `connectors` are specified and `OFF` in all other cases.
	//
	// Dictates how the prompt will be constructed.
	//
	// With `prompt_truncation` set to "AUTO", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be changed and ranked by relevance.
	//
	// With `prompt_truncation` set to "AUTO_PRESERVE_ORDER", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be preserved as they are inputted into the API.
	//
	// With `prompt_truncation` set to "OFF", no elements will be dropped. If the sum of the inputs exceeds the model's context length limit, a `TooManyTokens` error will be returned.
	// Compatible Deployments: Cohere Platform only; AUTO_PRESERVE_ORDER: Azure, AWS Sagemaker, Private Deployments
	PromptTruncation *ChatStreamRequestPromptTruncation `json:"prompt_truncation,omitempty" url:"prompt_truncation,omitempty"`
	// Accepts `{"id": "web-search"}`, and/or the `"id"` for a custom [connector](https://docs.cohere.com/docs/connectors), if you've [created](https://docs.cohere.com/docs/creating-and-deploying-a-connector) one.
	//
	// When specified, the model's reply will be enriched with information found by querying each of the connectors (RAG).
	// Compatible Deployments: Cohere Platform
	Connectors []*ChatConnector `json:"connectors,omitempty" url:"connectors,omitempty"`
	// Defaults to `false`.
	//
	// When `true`, the response will only contain a list of generated search queries, but no search will take place, and no reply from the model to the user's `message` will be generated.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	SearchQueriesOnly *bool `json:"search_queries_only,omitempty" url:"search_queries_only,omitempty"`
	// A list of relevant documents that the model can cite to generate a more accurate reply. Each document is a string-string dictionary.
	//
	// Example:
	// `[
	//
	//	{ "title": "Tall penguins", "text": "Emperor penguins are the tallest." },
	//	{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica." },
	//
	// ]`
	//
	// Keys and values from each document will be serialized to a string and passed to the model. The resulting generation will include citations that reference some of these documents.
	//
	// Some suggested keys are "text", "author", and "date". For better generation quality, it is recommended to keep the total word count of the strings in the dictionary to under 300 words.
	//
	// An `id` field (string) can be optionally supplied to identify the document in the citations. This field will not be passed to the model.
	//
	// An `_excludes` field (array of strings) can be optionally supplied to omit some key-value pairs from being shown to the model. The omitted fields will still show up in the citation object. The "_excludes" field will not be passed to the model.
	//
	// See ['Document Mode'](https://docs.cohere.com/docs/retrieval-augmented-generation-rag#document-mode) in the guide for more information.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Documents []ChatDocument `json:"documents,omitempty" url:"documents,omitempty"`
	// Defaults to `"accurate"`.
	//
	// Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want `"accurate"` results or `"fast"` results.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	CitationQuality *ChatStreamRequestCitationQuality `json:"citation_quality,omitempty" url:"citation_quality,omitempty"`
	// Defaults to `0.3`.
	//
	// A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations, and higher temperatures mean more random generations.
	//
	// Randomness can be further maximized by increasing the value of the `p` parameter.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Temperature *float64 `json:"temperature,omitempty" url:"temperature,omitempty"`
	// The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	MaxTokens *int `json:"max_tokens,omitempty" url:"max_tokens,omitempty"`
	// The maximum number of input tokens to send to the model. If not specified, `max_input_tokens` is the model's context length limit minus a small buffer.
	//
	// Input will be truncated according to the `prompt_truncation` parameter.
	// Compatible Deployments: Cohere Platform
	MaxInputTokens *int `json:"max_input_tokens,omitempty" url:"max_input_tokens,omitempty"`
	// Ensures only the top `k` most likely tokens are considered for generation at each step.
	// Defaults to `0`, min value of `0`, max value of `500`.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	K *int `json:"k,omitempty" url:"k,omitempty"`
	// Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
	// Defaults to `0.75`, min value of `0.01`, max value of `0.99`.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	P *float64 `json:"p,omitempty" url:"p,omitempty"`
	// If specified, the backend will make a best effort to sample tokens
	// deterministically, such that repeated requests with the same
	// seed and parameters should return the same result. However,
	// determinism cannot be totally guaranteed.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Seed *float64 `json:"seed,omitempty" url:"seed,omitempty"`
	// A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	StopSequences []string `json:"stop_sequences,omitempty" url:"stop_sequences,omitempty"`
	// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.
	//
	// Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	FrequencyPenalty *float64 `json:"frequency_penalty,omitempty" url:"frequency_penalty,omitempty"`
	// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.
	//
	// Used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	PresencePenalty *float64 `json:"presence_penalty,omitempty" url:"presence_penalty,omitempty"`
	// When enabled, the user's prompt will be sent to the model without
	// any pre-processing.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	RawPrompting *bool `json:"raw_prompting,omitempty" url:"raw_prompting,omitempty"`
	// The prompt is returned in the `prompt` response field when this is enabled.
	ReturnPrompt *bool `json:"return_prompt,omitempty" url:"return_prompt,omitempty"`
	// A list of available tools (functions) that the model may suggest invoking before producing a text response.
	//
	// When `tools` is passed (without `tool_results`), the `text` field in the response will be `""` and the `tool_calls` field in the response will be populated with a list of tool calls that need to be made. If no calls need to be made, the `tool_calls` array will be empty.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	Tools []*Tool `json:"tools,omitempty" url:"tools,omitempty"`
	// A list of results from invoking tools recommended by the model in the previous chat turn. Results are used to produce a text response and will be referenced in citations. When using `tool_results`, `tools` must be passed as well.
	// Each tool_result contains information about how it was invoked, as well as a list of outputs in the form of dictionaries.
	//
	// **Note**: `outputs` must be a list of objects. If your tool returns a single object (eg `{"status": 200}`), make sure to wrap it in a list.
	// ```
	// tool_results = [
	//
	//	{
	//	  "call": {
	//	    "name": <tool name>,
	//	    "parameters": {
	//	      <param name>: <param value>
	//	    }
	//	  },
	//	  "outputs": [{
	//	    <key>: <value>
	//	  }]
	//	},
	//	...
	//
	// ]
	// ```
	// **Note**: Chat calls with `tool_results` should not be included in the Chat history to avoid duplication of the message text.
	// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments
	ToolResults []*ChatStreamRequestToolResultsItem `json:"tool_results,omitempty" url:"tool_results,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatStreamRequest) MarshalJSON

func (c *ChatStreamRequest) MarshalJSON() ([]byte, error)

func (*ChatStreamRequest) Stream

func (c *ChatStreamRequest) Stream() bool

func (*ChatStreamRequest) UnmarshalJSON

func (c *ChatStreamRequest) UnmarshalJSON(data []byte) error

type ChatStreamRequestCitationQuality

type ChatStreamRequestCitationQuality string

Defaults to `"accurate"`.

Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want `"accurate"` results or `"fast"` results. Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker, Private Deployments

const (
	ChatStreamRequestCitationQualityFast     ChatStreamRequestCitationQuality = "fast"
	ChatStreamRequestCitationQualityAccurate ChatStreamRequestCitationQuality = "accurate"
)

func NewChatStreamRequestCitationQualityFromString

func NewChatStreamRequestCitationQualityFromString(s string) (ChatStreamRequestCitationQuality, error)

func (ChatStreamRequestCitationQuality) Ptr

type ChatStreamRequestConnectorsSearchOptions added in v2.6.0

type ChatStreamRequestConnectorsSearchOptions struct {
	Model       interface{} `json:"model,omitempty" url:"model,omitempty"`
	Temperature interface{} `json:"temperature,omitempty" url:"temperature,omitempty"`
	MaxTokens   interface{} `json:"max_tokens,omitempty" url:"max_tokens,omitempty"`
	Preamble    interface{} `json:"preamble,omitempty" url:"preamble,omitempty"`
	// If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
	Seed *float64 `json:"seed,omitempty" url:"seed,omitempty"`
	// contains filtered or unexported fields
}

(internal) Sets inference and model options for RAG search query and tool use generations. Defaults are used when options are not specified here, meaning that other parameters outside of connectors_search_options are ignored (such as model= or temperature=).

func (*ChatStreamRequestConnectorsSearchOptions) String added in v2.6.0

func (*ChatStreamRequestConnectorsSearchOptions) UnmarshalJSON added in v2.6.0

func (c *ChatStreamRequestConnectorsSearchOptions) UnmarshalJSON(data []byte) error

type ChatStreamRequestPromptTruncation

type ChatStreamRequestPromptTruncation string

Defaults to `AUTO` when `connectors` are specified and `OFF` in all other cases.

Dictates how the prompt will be constructed.

With `prompt_truncation` set to "AUTO", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be changed and ranked by relevance.

With `prompt_truncation` set to "AUTO_PRESERVE_ORDER", some elements from `chat_history` and `documents` will be dropped in an attempt to construct a prompt that fits within the model's context length limit. During this process the order of the documents and chat history will be preserved as they are inputted into the API.

With `prompt_truncation` set to "OFF", no elements will be dropped. If the sum of the inputs exceeds the model's context length limit, a `TooManyTokens` error will be returned. Compatible Deployments: Cohere Platform Only AUTO_PRESERVE_ORDER: Azure, AWS Sagemaker, Private Deployments

const (
	ChatStreamRequestPromptTruncationOff               ChatStreamRequestPromptTruncation = "OFF"
	ChatStreamRequestPromptTruncationAuto              ChatStreamRequestPromptTruncation = "AUTO"
	ChatStreamRequestPromptTruncationAutoPreserveOrder ChatStreamRequestPromptTruncation = "AUTO_PRESERVE_ORDER"
)

func NewChatStreamRequestPromptTruncationFromString

func NewChatStreamRequestPromptTruncationFromString(s string) (ChatStreamRequestPromptTruncation, error)

func (ChatStreamRequestPromptTruncation) Ptr

type ChatStreamRequestToolResultsItem added in v2.6.0

type ChatStreamRequestToolResultsItem struct {
	Call    *ToolCall                `json:"call,omitempty" url:"call,omitempty"`
	Outputs []map[string]interface{} `json:"outputs,omitempty" url:"outputs,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatStreamRequestToolResultsItem) String added in v2.6.0

func (*ChatStreamRequestToolResultsItem) UnmarshalJSON added in v2.6.0

func (c *ChatStreamRequestToolResultsItem) UnmarshalJSON(data []byte) error

type ChatStreamStartEvent

type ChatStreamStartEvent struct {
	// Unique identifier for the generated reply. Useful for submitting feedback.
	GenerationId string `json:"generation_id" url:"generation_id"`
	// contains filtered or unexported fields
}

func (*ChatStreamStartEvent) String

func (c *ChatStreamStartEvent) String() string

func (*ChatStreamStartEvent) UnmarshalJSON

func (c *ChatStreamStartEvent) UnmarshalJSON(data []byte) error

type ChatTextGenerationEvent

type ChatTextGenerationEvent struct {
	// The next batch of text generated by the model.
	Text string `json:"text" url:"text"`
	// contains filtered or unexported fields
}

func (*ChatTextGenerationEvent) String

func (c *ChatTextGenerationEvent) String() string

func (*ChatTextGenerationEvent) UnmarshalJSON

func (c *ChatTextGenerationEvent) UnmarshalJSON(data []byte) error

type ChatToolCallsGenerationEvent added in v2.6.0

type ChatToolCallsGenerationEvent struct {
	ToolCalls []*ToolCall `json:"tool_calls,omitempty" url:"tool_calls,omitempty"`
	// contains filtered or unexported fields
}

func (*ChatToolCallsGenerationEvent) String added in v2.6.0

func (*ChatToolCallsGenerationEvent) UnmarshalJSON added in v2.6.0

func (c *ChatToolCallsGenerationEvent) UnmarshalJSON(data []byte) error

type CheckApiKeyResponse added in v2.7.4

type CheckApiKeyResponse struct {
	Valid          bool    `json:"valid" url:"valid"`
	OrganizationId *string `json:"organization_id,omitempty" url:"organization_id,omitempty"`
	OwnerId        *string `json:"owner_id,omitempty" url:"owner_id,omitempty"`
	// contains filtered or unexported fields
}

func (*CheckApiKeyResponse) String added in v2.7.4

func (c *CheckApiKeyResponse) String() string

func (*CheckApiKeyResponse) UnmarshalJSON added in v2.7.4

func (c *CheckApiKeyResponse) UnmarshalJSON(data []byte) error
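
A sketch of validating a key at startup, assuming the client exposes a matching `CheckApiKey` method:

response, err := client.CheckApiKey(context.TODO())
if err != nil {
  return err
}
if !response.Valid {
  return errors.New("the provided API key is invalid")
}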

type ClassifyDataMetrics added in v2.6.0

type ClassifyDataMetrics struct {
	LabelMetrics []*LabelMetric `json:"label_metrics,omitempty" url:"label_metrics,omitempty"`
	// contains filtered or unexported fields
}

func (*ClassifyDataMetrics) String added in v2.6.0

func (c *ClassifyDataMetrics) String() string

func (*ClassifyDataMetrics) UnmarshalJSON added in v2.6.0

func (c *ClassifyDataMetrics) UnmarshalJSON(data []byte) error

type ClassifyExample added in v2.5.2

type ClassifyExample struct {
	Text  *string `json:"text,omitempty" url:"text,omitempty"`
	Label *string `json:"label,omitempty" url:"label,omitempty"`
	// contains filtered or unexported fields
}

func (*ClassifyExample) String added in v2.5.2

func (c *ClassifyExample) String() string

func (*ClassifyExample) UnmarshalJSON added in v2.5.2

func (c *ClassifyExample) UnmarshalJSON(data []byte) error

type ClassifyRequest

type ClassifyRequest struct {
	// A list of up to 96 texts to be classified. Each one must be a non-empty string.
	// There is, however, no consistent, universal limit to the length a particular input can be. We perform classification on the first `x` tokens of each input, and `x` varies depending on which underlying model is powering classification. The maximum token length for each model is listed in the "max tokens" column [here](https://docs.cohere.com/docs/models).
	// Note: by default the `truncate` parameter is set to `END`, so tokens exceeding the limit will be automatically dropped. This behavior can be disabled by setting `truncate` to `NONE`, which will result in validation errors for longer texts.
	Inputs []string `json:"inputs,omitempty" url:"inputs,omitempty"`
	// An array of examples to provide context to the model. Each example is a text string and its associated label/class. Each unique label requires at least 2 examples associated with it; the maximum number of examples is 2500, and each example has a maximum length of 512 tokens. The values should be structured as `{text: "...", label: "..."}`.
	// Note: [Fine-tuned Models](https://docs.cohere.com/docs/classify-fine-tuning) trained on classification examples don't require the `examples` parameter to be passed in explicitly.
	Examples []*ClassifyExample `json:"examples,omitempty" url:"examples,omitempty"`
	// The identifier of the model. Currently available models are `embed-multilingual-v2.0`, `embed-english-light-v2.0`, and `embed-english-v2.0` (default). Smaller "light" models are faster, while larger models will perform better. [Fine-tuned models](https://docs.cohere.com/docs/fine-tuning) can also be supplied with their full ID.
	Model *string `json:"model,omitempty" url:"model,omitempty"`
	// The ID of a custom playground preset. You can create presets in the [playground](https://dashboard.cohere.ai/playground/classify?model=large). If you use a preset, all other parameters become optional, and any included parameters will override the preset's parameters.
	Preset *string `json:"preset,omitempty" url:"preset,omitempty"`
	// One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.
	// Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
	// If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.
	Truncate *ClassifyRequestTruncate `json:"truncate,omitempty" url:"truncate,omitempty"`
}

type ClassifyRequestTruncate

type ClassifyRequestTruncate string

One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length. Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model. If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.

const (
	ClassifyRequestTruncateNone  ClassifyRequestTruncate = "NONE"
	ClassifyRequestTruncateStart ClassifyRequestTruncate = "START"
	ClassifyRequestTruncateEnd   ClassifyRequestTruncate = "END"
)

func NewClassifyRequestTruncateFromString

func NewClassifyRequestTruncateFromString(s string) (ClassifyRequestTruncate, error)

func (ClassifyRequestTruncate) Ptr

func (c ClassifyRequestTruncate) Ptr() *ClassifyRequestTruncate
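
Tying the request, examples, and truncate option together, a minimal classification sketch (the local `str` helper exists only to build the required `*string` values; note each label has at least two examples):

str := func(s string) *string { return &s }

response, err := client.Classify(
  context.TODO(),
  &cohere.ClassifyRequest{
    Inputs: []string{"The order arrived a week late", "Great service, thank you!"},
    Examples: []*cohere.ClassifyExample{
      {Text: str("My package was damaged"), Label: str("complaint")},
      {Text: str("I never received my refund"), Label: str("complaint")},
      {Text: str("The courier was very helpful"), Label: str("praise")},
      {Text: str("Delivery was faster than expected"), Label: str("praise")},
    },
    Truncate: cohere.ClassifyRequestTruncateEnd.Ptr(),
  },
)
if err != nil {
  return err
}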

type ClassifyResponse

type ClassifyResponse struct {
	Id              string                                 `json:"id" url:"id"`
	Classifications []*ClassifyResponseClassificationsItem `json:"classifications,omitempty" url:"classifications,omitempty"`
	Meta            *ApiMeta                               `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*ClassifyResponse) String

func (c *ClassifyResponse) String() string

func (*ClassifyResponse) UnmarshalJSON

func (c *ClassifyResponse) UnmarshalJSON(data []byte) error

type ClassifyResponseClassificationsItem

type ClassifyResponseClassificationsItem struct {
	Id string `json:"id" url:"id"`
	// The input text that was classified
	Input *string `json:"input,omitempty" url:"input,omitempty"`
	// The predicted label for the associated query (only filled for single-label models)
	Prediction *string `json:"prediction,omitempty" url:"prediction,omitempty"`
	// An array containing the predicted labels for the associated query (only filled for single-label classification)
	Predictions []string `json:"predictions,omitempty" url:"predictions,omitempty"`
	// The confidence score for the top predicted class (only filled for single-label classification)
	Confidence *float64 `json:"confidence,omitempty" url:"confidence,omitempty"`
	// An array containing the confidence scores of all the predictions in the same order
	Confidences []float64 `json:"confidences,omitempty" url:"confidences,omitempty"`
	// A map containing each label and its confidence score according to the classifier. All the confidence scores add up to 1 for single-label classification. For multi-label classification the label confidences are independent of each other, so they don't have to sum up to 1.
	Labels map[string]*ClassifyResponseClassificationsItemLabelsValue `json:"labels,omitempty" url:"labels,omitempty"`
	// The type of classification performed
	ClassificationType ClassifyResponseClassificationsItemClassificationType `json:"classification_type" url:"classification_type"`
	// contains filtered or unexported fields
}

func (*ClassifyResponseClassificationsItem) String

func (c *ClassifyResponseClassificationsItem) String() string

func (*ClassifyResponseClassificationsItem) UnmarshalJSON

func (c *ClassifyResponseClassificationsItem) UnmarshalJSON(data []byte) error

type ClassifyResponseClassificationsItemClassificationType

type ClassifyResponseClassificationsItemClassificationType string

The type of classification performed

const (
	ClassifyResponseClassificationsItemClassificationTypeSingleLabel ClassifyResponseClassificationsItemClassificationType = "single-label"
	ClassifyResponseClassificationsItemClassificationTypeMultiLabel  ClassifyResponseClassificationsItemClassificationType = "multi-label"
)

func NewClassifyResponseClassificationsItemClassificationTypeFromString

func NewClassifyResponseClassificationsItemClassificationTypeFromString(s string) (ClassifyResponseClassificationsItemClassificationType, error)

func (ClassifyResponseClassificationsItemClassificationType) Ptr

func (c ClassifyResponseClassificationsItemClassificationType) Ptr() *ClassifyResponseClassificationsItemClassificationType

type ClassifyResponseClassificationsItemLabelsValue

type ClassifyResponseClassificationsItemLabelsValue struct {
	Confidence *float64 `json:"confidence,omitempty" url:"confidence,omitempty"`
	// contains filtered or unexported fields
}

func (*ClassifyResponseClassificationsItemLabelsValue) String

func (c *ClassifyResponseClassificationsItemLabelsValue) String() string

func (*ClassifyResponseClassificationsItemLabelsValue) UnmarshalJSON

func (c *ClassifyResponseClassificationsItemLabelsValue) UnmarshalJSON(data []byte) error

type CompatibleEndpoint added in v2.6.0

type CompatibleEndpoint string

One of the Cohere API endpoints that the model can be used with.

const (
	CompatibleEndpointChat      CompatibleEndpoint = "chat"
	CompatibleEndpointEmbed     CompatibleEndpoint = "embed"
	CompatibleEndpointClassify  CompatibleEndpoint = "classify"
	CompatibleEndpointSummarize CompatibleEndpoint = "summarize"
	CompatibleEndpointRerank    CompatibleEndpoint = "rerank"
	CompatibleEndpointRate      CompatibleEndpoint = "rate"
	CompatibleEndpointGenerate  CompatibleEndpoint = "generate"
)

func NewCompatibleEndpointFromString added in v2.6.0

func NewCompatibleEndpointFromString(s string) (CompatibleEndpoint, error)

func (CompatibleEndpoint) Ptr added in v2.6.0

func (c CompatibleEndpoint) Ptr() *CompatibleEndpoint

type Connector added in v2.2.0

type Connector struct {
	// The unique identifier of the connector (used in both `/connectors` & `/chat` endpoints).
	// This is automatically created from the name of the connector upon registration.
	Id string `json:"id" url:"id"`
	// The organization to which this connector belongs. This is automatically set to
	// the organization of the user who created the connector.
	OrganizationId *string `json:"organization_id,omitempty" url:"organization_id,omitempty"`
	// A human-readable name for the connector.
	Name string `json:"name" url:"name"`
	// A description of the connector.
	Description *string `json:"description,omitempty" url:"description,omitempty"`
	// The URL of the connector that will be used to search for documents.
	Url *string `json:"url,omitempty" url:"url,omitempty"`
	// The UTC time at which the connector was created.
	CreatedAt time.Time `json:"created_at" url:"created_at"`
	// The UTC time at which the connector was last updated.
	UpdatedAt time.Time `json:"updated_at" url:"updated_at"`
	// A list of fields to exclude from the prompt (fields remain in the document).
	Excludes []string `json:"excludes,omitempty" url:"excludes,omitempty"`
	// The type of authentication/authorization used by the connector. Possible values: [oauth, service_auth]
	AuthType *string `json:"auth_type,omitempty" url:"auth_type,omitempty"`
	// The OAuth 2.0 configuration for the connector.
	Oauth *ConnectorOAuth `json:"oauth,omitempty" url:"oauth,omitempty"`
	// The OAuth status for the user making the request. One of ["valid", "expired", ""]. Empty string (field is omitted) means the user has not authorized the connector yet.
	AuthStatus *ConnectorAuthStatus `json:"auth_status,omitempty" url:"auth_status,omitempty"`
	// Whether the connector is active or not.
	Active *bool `json:"active,omitempty" url:"active,omitempty"`
	// Whether a chat request should continue or not if the request to this connector fails.
	ContinueOnFailure *bool `json:"continue_on_failure,omitempty" url:"continue_on_failure,omitempty"`
	// contains filtered or unexported fields
}

A connector allows you to integrate data sources with the '/chat' endpoint to create grounded generations with citations to the data source documents that help answer users' questions.

func (*Connector) MarshalJSON added in v2.6.0

func (c *Connector) MarshalJSON() ([]byte, error)

func (*Connector) String added in v2.2.0

func (c *Connector) String() string

func (*Connector) UnmarshalJSON added in v2.2.0

func (c *Connector) UnmarshalJSON(data []byte) error

type ConnectorAuthStatus added in v2.2.0

type ConnectorAuthStatus string

The OAuth status for the user making the request. One of ["valid", "expired", ""]. Empty string (field is omitted) means the user has not authorized the connector yet.

const (
	ConnectorAuthStatusValid   ConnectorAuthStatus = "valid"
	ConnectorAuthStatusExpired ConnectorAuthStatus = "expired"
)

func NewConnectorAuthStatusFromString added in v2.2.0

func NewConnectorAuthStatusFromString(s string) (ConnectorAuthStatus, error)

func (ConnectorAuthStatus) Ptr added in v2.2.0

func (c ConnectorAuthStatus) Ptr() *ConnectorAuthStatus

type ConnectorOAuth added in v2.2.0

type ConnectorOAuth struct {
	// The OAuth 2.0 client ID. This field is encrypted at rest.
	ClientId *string `json:"client_id,omitempty" url:"client_id,omitempty"`
	// The OAuth 2.0 client Secret. This field is encrypted at rest and never returned in a response.
	ClientSecret *string `json:"client_secret,omitempty" url:"client_secret,omitempty"`
	// The OAuth 2.0 /authorize endpoint to use when users authorize the connector.
	AuthorizeUrl string `json:"authorize_url" url:"authorize_url"`
	// The OAuth 2.0 /token endpoint to use when users authorize the connector.
	TokenUrl string `json:"token_url" url:"token_url"`
	// The OAuth scopes to request when users authorize the connector.
	Scope *string `json:"scope,omitempty" url:"scope,omitempty"`
	// contains filtered or unexported fields
}

func (*ConnectorOAuth) String added in v2.2.0

func (c *ConnectorOAuth) String() string

func (*ConnectorOAuth) UnmarshalJSON added in v2.2.0

func (c *ConnectorOAuth) UnmarshalJSON(data []byte) error

type ConnectorsListRequest added in v2.4.1

type ConnectorsListRequest struct {
	// Maximum number of connectors to return [0, 100].
	Limit *float64 `json:"-" url:"limit,omitempty"`
	// Number of connectors to skip before returning results [0, inf].
	Offset *float64 `json:"-" url:"offset,omitempty"`
}

type ConnectorsOAuthAuthorizeRequest added in v2.5.0

type ConnectorsOAuthAuthorizeRequest struct {
	// The URL to redirect to after the connector has been authorized.
	AfterTokenRedirect *string `json:"-" url:"after_token_redirect,omitempty"`
}

type CreateConnectorOAuth added in v2.2.0

type CreateConnectorOAuth struct {
	// The OAuth 2.0 client ID. This field is encrypted at rest.
	ClientId *string `json:"client_id,omitempty" url:"client_id,omitempty"`
	// The OAuth 2.0 client Secret. This field is encrypted at rest and never returned in a response.
	ClientSecret *string `json:"client_secret,omitempty" url:"client_secret,omitempty"`
	// The OAuth 2.0 /authorize endpoint to use when users authorize the connector.
	AuthorizeUrl *string `json:"authorize_url,omitempty" url:"authorize_url,omitempty"`
	// The OAuth 2.0 /token endpoint to use when users authorize the connector.
	TokenUrl *string `json:"token_url,omitempty" url:"token_url,omitempty"`
	// The OAuth scopes to request when users authorize the connector.
	Scope *string `json:"scope,omitempty" url:"scope,omitempty"`
	// contains filtered or unexported fields
}

func (*CreateConnectorOAuth) String added in v2.2.0

func (c *CreateConnectorOAuth) String() string

func (*CreateConnectorOAuth) UnmarshalJSON added in v2.2.0

func (c *CreateConnectorOAuth) UnmarshalJSON(data []byte) error

type CreateConnectorRequest added in v2.5.0

type CreateConnectorRequest struct {
	// A human-readable name for the connector.
	Name string `json:"name" url:"name"`
	// A description of the connector.
	Description *string `json:"description,omitempty" url:"description,omitempty"`
	// The URL of the connector that will be used to search for documents.
	Url string `json:"url" url:"url"`
	// A list of fields to exclude from the prompt (fields remain in the document).
	Excludes []string `json:"excludes,omitempty" url:"excludes,omitempty"`
	// The OAuth 2.0 configuration for the connector. Cannot be specified if service_auth is specified.
	Oauth *CreateConnectorOAuth `json:"oauth,omitempty" url:"oauth,omitempty"`
	// Whether the connector is active or not.
	Active *bool `json:"active,omitempty" url:"active,omitempty"`
	// Whether a chat request should continue or not if the request to this connector fails.
	ContinueOnFailure *bool `json:"continue_on_failure,omitempty" url:"continue_on_failure,omitempty"`
	// The service to service authentication configuration for the connector. Cannot be specified if oauth is specified.
	ServiceAuth *CreateConnectorServiceAuth `json:"service_auth,omitempty" url:"service_auth,omitempty"`
}

type CreateConnectorResponse added in v2.5.0

type CreateConnectorResponse struct {
	Connector *Connector `json:"connector,omitempty" url:"connector,omitempty"`
	// contains filtered or unexported fields
}

func (*CreateConnectorResponse) String added in v2.5.0

func (c *CreateConnectorResponse) String() string

func (*CreateConnectorResponse) UnmarshalJSON added in v2.5.0

func (c *CreateConnectorResponse) UnmarshalJSON(data []byte) error

type CreateConnectorServiceAuth added in v2.2.0

type CreateConnectorServiceAuth struct {
	Type AuthTokenType `json:"type" url:"type"`
	// The token that will be used in the HTTP Authorization header when making requests to the connector. This field is encrypted at rest and never returned in a response.
	Token string `json:"token" url:"token"`
	// contains filtered or unexported fields
}

func (*CreateConnectorServiceAuth) String added in v2.2.0

func (c *CreateConnectorServiceAuth) String() string

func (*CreateConnectorServiceAuth) UnmarshalJSON added in v2.2.0

func (c *CreateConnectorServiceAuth) UnmarshalJSON(data []byte) error
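
For example, a sketch of registering a connector that authenticates with a service token. The `client.Connectors.Create` sub-client name and the `"bearer"` AuthTokenType value are assumptions, and the URL and token are placeholders:

response, err := client.Connectors.Create(
  context.TODO(),
  &cohere.CreateConnectorRequest{
    Name: "Internal Wiki",
    Url:  "https://connector.example.com/search",
    ServiceAuth: &cohere.CreateConnectorServiceAuth{
      Type:  "bearer", // assumed AuthTokenType value
      Token: "<CONNECTOR_SERVICE_TOKEN>",
    },
  },
)
if err != nil {
  return err
}
fmt.Println("registered connector:", response.Connector.Id)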

type CreateEmbedJobRequest added in v2.5.0

type CreateEmbedJobRequest struct {
	// ID of the embedding model.
	//
	// Available models and corresponding embedding dimensions:
	//
	// - `embed-english-v3.0` : 1024
	// - `embed-multilingual-v3.0` : 1024
	// - `embed-english-light-v3.0` : 384
	// - `embed-multilingual-light-v3.0` : 384
	Model string `json:"model" url:"model"`
	// ID of a [Dataset](https://docs.cohere.com/docs/datasets). The Dataset must be of type `embed-input` and must have a validation status `Validated`
	DatasetId string         `json:"dataset_id" url:"dataset_id"`
	InputType EmbedInputType `json:"input_type" url:"input_type"`
	// The name of the embed job.
	Name *string `json:"name,omitempty" url:"name,omitempty"`
	// Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.
	//
	// * `"float"`: Use this when you want to get back the default float embeddings. Valid for all models.
	// * `"int8"`: Use this when you want to get back signed int8 embeddings. Valid for only v3 models.
	// * `"uint8"`: Use this when you want to get back unsigned int8 embeddings. Valid for only v3 models.
	// * `"binary"`: Use this when you want to get back signed binary embeddings. Valid for only v3 models.
	// * `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.
	EmbeddingTypes []EmbeddingType `json:"embedding_types,omitempty" url:"embedding_types,omitempty"`
	// One of `START|END` to specify how the API will handle inputs longer than the maximum token length.
	//
	// Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
	Truncate *CreateEmbedJobRequestTruncate `json:"truncate,omitempty" url:"truncate,omitempty"`
}

type CreateEmbedJobRequestTruncate added in v2.5.0

type CreateEmbedJobRequestTruncate string

One of `START|END` to specify how the API will handle inputs longer than the maximum token length.

Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.

const (
	CreateEmbedJobRequestTruncateStart CreateEmbedJobRequestTruncate = "START"
	CreateEmbedJobRequestTruncateEnd   CreateEmbedJobRequestTruncate = "END"
)

func NewCreateEmbedJobRequestTruncateFromString added in v2.5.0

func NewCreateEmbedJobRequestTruncateFromString(s string) (CreateEmbedJobRequestTruncate, error)

func (CreateEmbedJobRequestTruncate) Ptr added in v2.5.0

func (c CreateEmbedJobRequestTruncate) Ptr() *CreateEmbedJobRequestTruncate

type CreateEmbedJobResponse added in v2.5.0

type CreateEmbedJobResponse struct {
	JobId string   `json:"job_id" url:"job_id"`
	Meta  *ApiMeta `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

Response from creating an embed job.

func (*CreateEmbedJobResponse) String added in v2.5.0

func (c *CreateEmbedJobResponse) String() string

func (*CreateEmbedJobResponse) UnmarshalJSON added in v2.5.0

func (c *CreateEmbedJobResponse) UnmarshalJSON(data []byte) error
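
A sketch of starting an embed job over a validated dataset. The `client.EmbedJobs.Create` sub-client name is an assumption, and `<DATASET_ID>` is a placeholder:

response, err := client.EmbedJobs.Create(
  context.TODO(),
  &cohere.CreateEmbedJobRequest{
    Model:     "embed-english-v3.0",
    DatasetId: "<DATASET_ID>",
    InputType: cohere.EmbedInputTypeSearchDocument,
    Truncate:  cohere.CreateEmbedJobRequestTruncateEnd.Ptr(),
  },
)
if err != nil {
  return err
}
fmt.Println("started embed job:", response.JobId)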

type Dataset

type Dataset struct {
	// The dataset ID
	Id string `json:"id" url:"id"`
	// The name of the dataset
	Name string `json:"name" url:"name"`
	// The creation date
	CreatedAt time.Time `json:"created_at" url:"created_at"`
	// The last update date
	UpdatedAt        time.Time               `json:"updated_at" url:"updated_at"`
	DatasetType      DatasetType             `json:"dataset_type" url:"dataset_type"`
	ValidationStatus DatasetValidationStatus `json:"validation_status" url:"validation_status"`
	// Errors found during validation
	ValidationError *string `json:"validation_error,omitempty" url:"validation_error,omitempty"`
	// the avro schema of the dataset
	Schema         *string  `json:"schema,omitempty" url:"schema,omitempty"`
	RequiredFields []string `json:"required_fields,omitempty" url:"required_fields,omitempty"`
	PreserveFields []string `json:"preserve_fields,omitempty" url:"preserve_fields,omitempty"`
	// the underlying files that make up the dataset
	DatasetParts []*DatasetPart `json:"dataset_parts,omitempty" url:"dataset_parts,omitempty"`
	// warnings found during validation
	ValidationWarnings []string `json:"validation_warnings,omitempty" url:"validation_warnings,omitempty"`
	// contains filtered or unexported fields
}

func (*Dataset) MarshalJSON added in v2.6.0

func (d *Dataset) MarshalJSON() ([]byte, error)

func (*Dataset) String

func (d *Dataset) String() string

func (*Dataset) UnmarshalJSON

func (d *Dataset) UnmarshalJSON(data []byte) error

type DatasetPart

type DatasetPart struct {
	// The dataset part ID
	Id string `json:"id" url:"id"`
	// The name of the dataset part
	Name string `json:"name" url:"name"`
	// The download url of the file
	Url *string `json:"url,omitempty" url:"url,omitempty"`
	// The index of the file
	Index *int `json:"index,omitempty" url:"index,omitempty"`
	// The size of the file in bytes
	SizeBytes *int `json:"size_bytes,omitempty" url:"size_bytes,omitempty"`
	// The number of rows in the file
	NumRows *int `json:"num_rows,omitempty" url:"num_rows,omitempty"`
	// The download url of the original file
	OriginalUrl *string `json:"original_url,omitempty" url:"original_url,omitempty"`
	// The first few rows of the parsed file
	Samples []string `json:"samples,omitempty" url:"samples,omitempty"`
	// contains filtered or unexported fields
}

func (*DatasetPart) String

func (d *DatasetPart) String() string

func (*DatasetPart) UnmarshalJSON

func (d *DatasetPart) UnmarshalJSON(data []byte) error

type DatasetType added in v2.5.0

type DatasetType string

The type of the dataset

const (
	DatasetTypeEmbedInput                             DatasetType = "embed-input"
	DatasetTypeEmbedResult                            DatasetType = "embed-result"
	DatasetTypeClusterResult                          DatasetType = "cluster-result"
	DatasetTypeClusterOutliers                        DatasetType = "cluster-outliers"
	DatasetTypeRerankerFinetuneInput                  DatasetType = "reranker-finetune-input"
	DatasetTypeSingleLabelClassificationFinetuneInput DatasetType = "single-label-classification-finetune-input"
	DatasetTypeChatFinetuneInput                      DatasetType = "chat-finetune-input"
	DatasetTypeMultiLabelClassificationFinetuneInput  DatasetType = "multi-label-classification-finetune-input"
)

func NewDatasetTypeFromString added in v2.5.0

func NewDatasetTypeFromString(s string) (DatasetType, error)

func (DatasetType) Ptr added in v2.5.0

func (d DatasetType) Ptr() *DatasetType

type DatasetValidationStatus added in v2.5.0

type DatasetValidationStatus string

The validation status of the dataset

const (
	DatasetValidationStatusUnknown    DatasetValidationStatus = "unknown"
	DatasetValidationStatusQueued     DatasetValidationStatus = "queued"
	DatasetValidationStatusProcessing DatasetValidationStatus = "processing"
	DatasetValidationStatusFailed     DatasetValidationStatus = "failed"
	DatasetValidationStatusValidated  DatasetValidationStatus = "validated"
	DatasetValidationStatusSkipped    DatasetValidationStatus = "skipped"
)

func NewDatasetValidationStatusFromString added in v2.5.0

func NewDatasetValidationStatusFromString(s string) (DatasetValidationStatus, error)

func (DatasetValidationStatus) Ptr added in v2.5.0

func (d DatasetValidationStatus) Ptr() *DatasetValidationStatus

type DatasetsCreateRequest added in v2.5.0

type DatasetsCreateRequest struct {
	// The name of the uploaded dataset.
	Name string `json:"-" url:"name"`
	// The dataset type, which is used to validate the data. Valid types are `embed-input`, `reranker-finetune-input`, `single-label-classification-finetune-input`, `chat-finetune-input`, and `multi-label-classification-finetune-input`.
	Type DatasetType `json:"-" url:"type"`
	// Indicates if the original file should be stored.
	KeepOriginalFile *bool `json:"-" url:"keep_original_file,omitempty"`
	// Indicates whether rows with malformed input should be dropped (instead of failing the validation check). Dropped rows will be returned in the warnings field.
	SkipMalformedInput *bool `json:"-" url:"skip_malformed_input,omitempty"`
	// List of names of fields that will be persisted in the Dataset. By default the Dataset will retain only the required fields indicated in the [schema for the corresponding Dataset type](https://docs.cohere.com/docs/datasets#dataset-types). For example, datasets of type `embed-input` will drop all fields other than the required `text` field. If any of the fields in `keep_fields` are missing from the uploaded file, Dataset validation will fail.
	KeepFields []*string `json:"-" url:"keep_fields,omitempty"`
	// List of names of fields that will be persisted in the Dataset. By default the Dataset will retain only the required fields indicated in the [schema for the corresponding Dataset type](https://docs.cohere.com/docs/datasets#dataset-types). For example, Datasets of type `embed-input` will drop all fields other than the required `text` field. If any of the fields in `optional_fields` are missing from the uploaded file, Dataset validation will pass.
	OptionalFields []*string `json:"-" url:"optional_fields,omitempty"`
	// Raw .txt uploads will be split into entries using the text_separator value.
	TextSeparator *string `json:"-" url:"text_separator,omitempty"`
	// The delimiter used for .csv uploads.
	CsvDelimiter *string `json:"-" url:"csv_delimiter,omitempty"`
	// Flag to enable dry_run mode.
	DryRun *bool `json:"-" url:"dry_run,omitempty"`
}

type DatasetsCreateResponse added in v2.5.0

type DatasetsCreateResponse struct {
	// The dataset ID
	Id *string `json:"id,omitempty" url:"id,omitempty"`
	// contains filtered or unexported fields
}

func (*DatasetsCreateResponse) String added in v2.5.0

func (d *DatasetsCreateResponse) String() string

func (*DatasetsCreateResponse) UnmarshalJSON added in v2.5.0

func (d *DatasetsCreateResponse) UnmarshalJSON(data []byte) error

type DatasetsCreateResponseDatasetPartsItem added in v2.7.4

type DatasetsCreateResponseDatasetPartsItem struct {
	// the name of the dataset part
	Name *string `json:"name,omitempty" url:"name,omitempty"`
	// the number of rows in the dataset part
	NumRows *float64 `json:"num_rows,omitempty" url:"num_rows,omitempty"`
	Samples []string `json:"samples,omitempty" url:"samples,omitempty"`
	// the kind of dataset part
	PartKind *string `json:"part_kind,omitempty" url:"part_kind,omitempty"`
	// contains filtered or unexported fields
}

the underlying files that make up the dataset

func (*DatasetsCreateResponseDatasetPartsItem) String added in v2.7.4

func (d *DatasetsCreateResponseDatasetPartsItem) String() string

func (*DatasetsCreateResponseDatasetPartsItem) UnmarshalJSON added in v2.7.4

func (d *DatasetsCreateResponseDatasetPartsItem) UnmarshalJSON(data []byte) error

type DatasetsGetResponse added in v2.2.0

type DatasetsGetResponse struct {
	Dataset *Dataset `json:"dataset,omitempty" url:"dataset,omitempty"`
	// contains filtered or unexported fields
}

func (*DatasetsGetResponse) String added in v2.2.0

func (d *DatasetsGetResponse) String() string

func (*DatasetsGetResponse) UnmarshalJSON added in v2.2.0

func (d *DatasetsGetResponse) UnmarshalJSON(data []byte) error

type DatasetsGetUsageResponse added in v2.2.0

type DatasetsGetUsageResponse struct {
	// The total number of bytes used by the organization.
	OrganizationUsage *string `json:"organization_usage,omitempty" url:"organization_usage,omitempty"`
	// contains filtered or unexported fields
}

func (*DatasetsGetUsageResponse) String added in v2.2.0

func (d *DatasetsGetUsageResponse) String() string

func (*DatasetsGetUsageResponse) UnmarshalJSON added in v2.2.0

func (d *DatasetsGetUsageResponse) UnmarshalJSON(data []byte) error

type DatasetsListRequest added in v2.2.0

type DatasetsListRequest struct {
	// optional filter by dataset type
	DatasetType *string `json:"-" url:"datasetType,omitempty"`
	// optional filter before a date
	Before *time.Time `json:"-" url:"before,omitempty"`
	// optional filter after a date
	After *time.Time `json:"-" url:"after,omitempty"`
	// optional limit to number of results
	Limit *float64 `json:"-" url:"limit,omitempty"`
	// optional offset to start of results
	Offset *float64 `json:"-" url:"offset,omitempty"`
	// optional filter by validation status
	ValidationStatus *DatasetValidationStatus `json:"-" url:"validationStatus,omitempty"`
}

type DatasetsListResponse added in v2.2.0

type DatasetsListResponse struct {
	Datasets []*Dataset `json:"datasets,omitempty" url:"datasets,omitempty"`
	// contains filtered or unexported fields
}

func (*DatasetsListResponse) String added in v2.2.0

func (d *DatasetsListResponse) String() string

func (*DatasetsListResponse) UnmarshalJSON added in v2.2.0

func (d *DatasetsListResponse) UnmarshalJSON(data []byte) error
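
For example, a sketch of listing validated `embed-input` datasets. The `client.Datasets.List` sub-client name is an assumption:

limit := 10.0
datasetType := "embed-input"

response, err := client.Datasets.List(
  context.TODO(),
  &cohere.DatasetsListRequest{
    DatasetType:      &datasetType,
    Limit:            &limit,
    ValidationStatus: cohere.DatasetValidationStatusValidated.Ptr(),
  },
)
if err != nil {
  return err
}
for _, dataset := range response.Datasets {
  fmt.Println(dataset.Id, dataset.Name)
}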

type DeleteConnectorResponse added in v2.5.0

type DeleteConnectorResponse = map[string]interface{}

type DetokenizeRequest

type DetokenizeRequest struct {
	// The list of tokens to be detokenized.
	Tokens []int `json:"tokens,omitempty" url:"tokens,omitempty"`
	// The model name. This will ensure that the detokenization is done by the tokenizer used by that model.
	Model string `json:"model" url:"model"`
}

type DetokenizeResponse

type DetokenizeResponse struct {
	// A string representing the list of tokens.
	Text string   `json:"text" url:"text"`
	Meta *ApiMeta `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*DetokenizeResponse) String

func (d *DetokenizeResponse) String() string

func (*DetokenizeResponse) UnmarshalJSON

func (d *DetokenizeResponse) UnmarshalJSON(data []byte) error
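
A minimal detokenization sketch; the token IDs below are placeholders and should come from the same model's tokenizer:

response, err := client.Detokenize(
  context.TODO(),
  &cohere.DetokenizeRequest{
    Tokens: []int{8466, 5169, 2594}, // placeholder token IDs
    Model:  "command",
  },
)
if err != nil {
  return err
}
fmt.Println(response.Text)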

type EmbedByTypeResponse added in v2.4.1

type EmbedByTypeResponse struct {
	Id string `json:"id" url:"id"`
	// An object with different embedding types. The length of each embedding type array will be the same as the length of the original `texts` array.
	Embeddings *EmbedByTypeResponseEmbeddings `json:"embeddings,omitempty" url:"embeddings,omitempty"`
	// The text entries for which embeddings were returned.
	Texts []string `json:"texts,omitempty" url:"texts,omitempty"`
	Meta  *ApiMeta `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*EmbedByTypeResponse) String added in v2.4.1

func (e *EmbedByTypeResponse) String() string

func (*EmbedByTypeResponse) UnmarshalJSON added in v2.4.1

func (e *EmbedByTypeResponse) UnmarshalJSON(data []byte) error

type EmbedByTypeResponseEmbeddings added in v2.4.1

type EmbedByTypeResponseEmbeddings struct {
	// An array of float embeddings.
	Float [][]float64 `json:"float,omitempty" url:"float,omitempty"`
	// An array of signed int8 embeddings. Each value is between -128 and 127.
	Int8 [][]int `json:"int8,omitempty" url:"int8,omitempty"`
	// An array of unsigned int8 embeddings. Each value is between 0 and 255.
	Uint8 [][]int `json:"uint8,omitempty" url:"uint8,omitempty"`
	// An array of packed signed binary embeddings. The length of each binary embedding is 1/8 the length of the float embeddings of the provided model. Each value is between -128 and 127.
	Binary [][]int `json:"binary,omitempty" url:"binary,omitempty"`
	// An array of packed unsigned binary embeddings. The length of each binary embedding is 1/8 the length of the float embeddings of the provided model. Each value is between 0 and 255.
	Ubinary [][]int `json:"ubinary,omitempty" url:"ubinary,omitempty"`
	// contains filtered or unexported fields
}

An object with different embedding types. The length of each embedding type array will be the same as the length of the original `texts` array.

func (*EmbedByTypeResponseEmbeddings) String added in v2.4.1

func (e *EmbedByTypeResponseEmbeddings) String() string

func (*EmbedByTypeResponseEmbeddings) UnmarshalJSON added in v2.4.1

func (e *EmbedByTypeResponseEmbeddings) UnmarshalJSON(data []byte) error

type EmbedFloatsResponse added in v2.4.1

type EmbedFloatsResponse struct {
	Id string `json:"id" url:"id"`
	// An array of embeddings, where each embedding is an array of floats. The length of the `embeddings` array will be the same as the length of the original `texts` array.
	Embeddings [][]float64 `json:"embeddings,omitempty" url:"embeddings,omitempty"`
	// The text entries for which embeddings were returned.
	Texts []string `json:"texts,omitempty" url:"texts,omitempty"`
	Meta  *ApiMeta `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*EmbedFloatsResponse) String added in v2.4.1

func (e *EmbedFloatsResponse) String() string

func (*EmbedFloatsResponse) UnmarshalJSON added in v2.4.1

func (e *EmbedFloatsResponse) UnmarshalJSON(data []byte) error

type EmbedInputType added in v2.5.0

type EmbedInputType string

Specifies the type of input passed to the model. Required for embedding models v3 and higher.

- `"search_document"`: Used for embeddings stored in a vector database for search use-cases. - `"search_query"`: Used for embeddings of search queries run against a vector DB to find relevant documents. - `"classification"`: Used for embeddings passed through a text classifier. - `"clustering"`: Used for the embeddings run through a clustering algorithm.

const (
	EmbedInputTypeSearchDocument EmbedInputType = "search_document"
	EmbedInputTypeSearchQuery    EmbedInputType = "search_query"
	EmbedInputTypeClassification EmbedInputType = "classification"
	EmbedInputTypeClustering     EmbedInputType = "clustering"
)

func NewEmbedInputTypeFromString added in v2.5.0

func NewEmbedInputTypeFromString(s string) (EmbedInputType, error)

func (EmbedInputType) Ptr added in v2.5.0

func (e EmbedInputType) Ptr() *EmbedInputType

type EmbedJob added in v2.5.0

type EmbedJob struct {
	// ID of the embed job
	JobId string `json:"job_id" url:"job_id"`
	// The name of the embed job
	Name *string `json:"name,omitempty" url:"name,omitempty"`
	// The status of the embed job
	Status EmbedJobStatus `json:"status" url:"status"`
	// The creation date of the embed job
	CreatedAt time.Time `json:"created_at" url:"created_at"`
	// ID of the input dataset
	InputDatasetId string `json:"input_dataset_id" url:"input_dataset_id"`
	// ID of the resulting output dataset
	OutputDatasetId *string `json:"output_dataset_id,omitempty" url:"output_dataset_id,omitempty"`
	// ID of the model used to embed
	Model string `json:"model" url:"model"`
	// The truncation option used
	Truncate EmbedJobTruncate `json:"truncate" url:"truncate"`
	Meta     *ApiMeta         `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*EmbedJob) MarshalJSON added in v2.6.0

func (e *EmbedJob) MarshalJSON() ([]byte, error)

func (*EmbedJob) String added in v2.5.0

func (e *EmbedJob) String() string

func (*EmbedJob) UnmarshalJSON added in v2.5.0

func (e *EmbedJob) UnmarshalJSON(data []byte) error

type EmbedJobStatus added in v2.5.0

type EmbedJobStatus string

The status of the embed job

const (
	EmbedJobStatusProcessing EmbedJobStatus = "processing"
	EmbedJobStatusComplete   EmbedJobStatus = "complete"
	EmbedJobStatusCancelling EmbedJobStatus = "cancelling"
	EmbedJobStatusCancelled  EmbedJobStatus = "cancelled"
	EmbedJobStatusFailed     EmbedJobStatus = "failed"
)

func NewEmbedJobStatusFromString added in v2.5.0

func NewEmbedJobStatusFromString(s string) (EmbedJobStatus, error)

func (EmbedJobStatus) Ptr added in v2.5.0

func (e EmbedJobStatus) Ptr() *EmbedJobStatus

type EmbedJobTruncate added in v2.5.0

type EmbedJobTruncate string

The truncation option used

const (
	EmbedJobTruncateStart EmbedJobTruncate = "START"
	EmbedJobTruncateEnd   EmbedJobTruncate = "END"
)

func NewEmbedJobTruncateFromString added in v2.5.0

func NewEmbedJobTruncateFromString(s string) (EmbedJobTruncate, error)

func (EmbedJobTruncate) Ptr added in v2.5.0

func (e EmbedJobTruncate) Ptr() *EmbedJobTruncate

type EmbedRequest

type EmbedRequest struct {
	// An array of strings for the model to embed. Maximum number of texts per call is `96`. We recommend reducing the length of each text to be under `512` tokens for optimal quality.
	Texts []string `json:"texts,omitempty" url:"texts,omitempty"`
	// Defaults to embed-english-v2.0
	//
	// The identifier of the model. Smaller "light" models are faster, while larger models will perform better. [Custom models](/docs/training-custom-models) can also be supplied with their full ID.
	//
	// Available models and corresponding embedding dimensions:
	//
	// * `embed-english-v3.0`  1024
	// * `embed-multilingual-v3.0`  1024
	// * `embed-english-light-v3.0`  384
	// * `embed-multilingual-light-v3.0`  384
	//
	// * `embed-english-v2.0`  4096
	// * `embed-english-light-v2.0`  1024
	// * `embed-multilingual-v2.0`  768
	Model     *string         `json:"model,omitempty" url:"model,omitempty"`
	InputType *EmbedInputType `json:"input_type,omitempty" url:"input_type,omitempty"`
	// Specifies the types of embeddings you want to get back. Not required and default is None, which returns the Embed Floats response type. Can be one or more of the following types.
	//
	// * `"float"`: Use this when you want to get back the default float embeddings. Valid for all models.
	// * `"int8"`: Use this when you want to get back signed int8 embeddings. Valid for only v3 models.
	// * `"uint8"`: Use this when you want to get back unsigned int8 embeddings. Valid for only v3 models.
	// * `"binary"`: Use this when you want to get back signed binary embeddings. Valid for only v3 models.
	// * `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Valid for only v3 models.
	EmbeddingTypes []EmbeddingType `json:"embedding_types,omitempty" url:"embedding_types,omitempty"`
	// One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.
	//
	// Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
	//
	// If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.
	Truncate *EmbedRequestTruncate `json:"truncate,omitempty" url:"truncate,omitempty"`
}

type EmbedRequestTruncate

type EmbedRequestTruncate string

One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.

Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.

If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.

const (
	EmbedRequestTruncateNone  EmbedRequestTruncate = "NONE"
	EmbedRequestTruncateStart EmbedRequestTruncate = "START"
	EmbedRequestTruncateEnd   EmbedRequestTruncate = "END"
)

func NewEmbedRequestTruncateFromString

func NewEmbedRequestTruncateFromString(s string) (EmbedRequestTruncate, error)

func (EmbedRequestTruncate) Ptr

func (e EmbedRequestTruncate) Ptr() *EmbedRequestTruncate

type EmbedResponse

type EmbedResponse struct {
	ResponseType     string
	EmbeddingsFloats *EmbedFloatsResponse
	EmbeddingsByType *EmbedByTypeResponse
}

func (*EmbedResponse) Accept added in v2.4.1

func (e *EmbedResponse) Accept(visitor EmbedResponseVisitor) error

func (EmbedResponse) MarshalJSON added in v2.4.1

func (e EmbedResponse) MarshalJSON() ([]byte, error)

func (*EmbedResponse) UnmarshalJSON

func (e *EmbedResponse) UnmarshalJSON(data []byte) error

type EmbedResponseVisitor added in v2.4.1

type EmbedResponseVisitor interface {
	VisitEmbeddingsFloats(*EmbedFloatsResponse) error
	VisitEmbeddingsByType(*EmbedByTypeResponse) error
}
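
Because setting `embedding_types` switches the response variant, the visitor interface gives a type-safe way to handle both cases. A minimal sketch, assuming a client configured as in the Usage section:

type embedPrinter struct{}

func (embedPrinter) VisitEmbeddingsFloats(r *cohere.EmbedFloatsResponse) error {
  fmt.Println("received", len(r.Embeddings), "float embeddings")
  return nil
}

func (embedPrinter) VisitEmbeddingsByType(r *cohere.EmbedByTypeResponse) error {
  fmt.Println("received embeddings by type for", len(r.Texts), "texts")
  return nil
}

// ... then, wherever the request is made:
model := "embed-english-v3.0"
response, err := client.Embed(
  context.TODO(),
  &cohere.EmbedRequest{
    Texts:          []string{"hello", "goodbye"},
    Model:          &model,
    InputType:      cohere.EmbedInputTypeSearchDocument.Ptr(),
    EmbeddingTypes: []cohere.EmbeddingType{cohere.EmbeddingTypeInt8},
  },
)
if err != nil {
  return err
}
if err := response.Accept(embedPrinter{}); err != nil {
  return err
}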

type EmbeddingType added in v2.7.1

type EmbeddingType string

const (
	EmbeddingTypeFloat   EmbeddingType = "float"
	EmbeddingTypeInt8    EmbeddingType = "int8"
	EmbeddingTypeUint8   EmbeddingType = "uint8"
	EmbeddingTypeBinary  EmbeddingType = "binary"
	EmbeddingTypeUbinary EmbeddingType = "ubinary"
)

func NewEmbeddingTypeFromString added in v2.7.1

func NewEmbeddingTypeFromString(s string) (EmbeddingType, error)

func (EmbeddingType) Ptr added in v2.7.1

func (e EmbeddingType) Ptr() *EmbeddingType

type FinetuneDatasetMetrics added in v2.6.0

type FinetuneDatasetMetrics struct {
	// The number of tokens of valid examples that can be used for training.
	TrainableTokenCount *string `json:"trainable_token_count,omitempty" url:"trainable_token_count,omitempty"`
	// The overall number of examples.
	TotalExamples *string `json:"total_examples,omitempty" url:"total_examples,omitempty"`
	// The number of training examples.
	TrainExamples *string `json:"train_examples,omitempty" url:"train_examples,omitempty"`
	// The size in bytes of all training examples.
	TrainSizeBytes *string `json:"train_size_bytes,omitempty" url:"train_size_bytes,omitempty"`
	// Number of evaluation examples.
	EvalExamples *string `json:"eval_examples,omitempty" url:"eval_examples,omitempty"`
	// The size in bytes of all eval examples.
	EvalSizeBytes *string `json:"eval_size_bytes,omitempty" url:"eval_size_bytes,omitempty"`
	// contains filtered or unexported fields
}

func (*FinetuneDatasetMetrics) String added in v2.6.0

func (f *FinetuneDatasetMetrics) String() string

func (*FinetuneDatasetMetrics) UnmarshalJSON added in v2.6.0

func (f *FinetuneDatasetMetrics) UnmarshalJSON(data []byte) error

type FinetuningListEventsRequest added in v2.7.0

type FinetuningListEventsRequest struct {
	// Maximum number of results to be returned by the server. If 0, defaults to 50.
	PageSize *int `json:"-" url:"page_size,omitempty"`
	// Request a specific page of the list results.
	PageToken *string `json:"-" url:"page_token,omitempty"`
	// Comma separated list of fields. For example: "created_at,name". The default
	// sorting order is ascending. To specify descending order for a field, append
	// " desc" to the field name. For example: "created_at desc,name".
	//
	// Supported sorting fields:
	//
	// - created_at (default)
	OrderBy *string `json:"-" url:"order_by,omitempty"`
}

type FinetuningListFinetunedModelsRequest added in v2.7.0

type FinetuningListFinetunedModelsRequest struct {
	// Maximum number of results to be returned by the server. If 0, defaults to 50.
	PageSize *int `json:"-" url:"page_size,omitempty"`
	// Request a specific page of the list results.
	PageToken *string `json:"-" url:"page_token,omitempty"`
	// Comma separated list of fields. For example: "created_at,name". The default
	// sorting order is ascending. To specify descending order for a field, append
	// " desc" to the field name. For example: "created_at desc,name".
	//
	// Supported sorting fields:
	//
	// - created_at (default)
	OrderBy *string `json:"-" url:"order_by,omitempty"`
}
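
A sketch of paging through fine-tuned models. The `client.Finetuning.ListFinetunedModels` sub-client and method names are inferred from the request type and are assumptions:

pageSize := 20
orderBy := "created_at desc"

response, err := client.Finetuning.ListFinetunedModels(
  context.TODO(),
  &cohere.FinetuningListFinetunedModelsRequest{
    PageSize: &pageSize,
    OrderBy:  &orderBy,
  },
)
if err != nil {
  return err
}
fmt.Println(response) // inspect the returned page of models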

type FinetuningListTrainingStepMetricsRequest added in v2.7.0

type FinetuningListTrainingStepMetricsRequest struct {
	// Maximum number of results to be returned by the server. If 0, defaults to 50.
	PageSize *int `json:"-" url:"page_size,omitempty"`
	// Request a specific page of the list results.
	PageToken *string `json:"-" url:"page_token,omitempty"`
}

type FinetuningUpdateFinetunedModelRequest added in v2.7.0

type FinetuningUpdateFinetunedModelRequest struct {
	// FinetunedModel name (e.g. `foobar`).
	Name string `json:"name" url:"name"`
	// User ID of the creator.
	CreatorId *string `json:"creator_id,omitempty" url:"creator_id,omitempty"`
	// Organization ID.
	OrganizationId *string `json:"organization_id,omitempty" url:"organization_id,omitempty"`
	// FinetunedModel settings such as dataset, hyperparameters...
	Settings *finetuning.Settings `json:"settings,omitempty" url:"settings,omitempty"`
	// Current stage in the life-cycle of the fine-tuned model.
	Status *finetuning.Status `json:"status,omitempty" url:"status,omitempty"`
	// Creation timestamp.
	CreatedAt *time.Time `json:"created_at,omitempty" url:"created_at,omitempty"`
	// Latest update timestamp.
	UpdatedAt *time.Time `json:"updated_at,omitempty" url:"updated_at,omitempty"`
	// Timestamp for the completed fine-tuning.
	CompletedAt *time.Time `json:"completed_at,omitempty" url:"completed_at,omitempty"`
	// Timestamp for the latest request to this fine-tuned model.
	LastUsed *time.Time `json:"last_used,omitempty" url:"last_used,omitempty"`
}

func (*FinetuningUpdateFinetunedModelRequest) MarshalJSON added in v2.7.0

func (f *FinetuningUpdateFinetunedModelRequest) MarshalJSON() ([]byte, error)

func (*FinetuningUpdateFinetunedModelRequest) UnmarshalJSON added in v2.7.0

func (f *FinetuningUpdateFinetunedModelRequest) UnmarshalJSON(data []byte) error

type FinishReason

type FinishReason string

const (
	FinishReasonComplete   FinishReason = "COMPLETE"
	FinishReasonError      FinishReason = "ERROR"
	FinishReasonErrorToxic FinishReason = "ERROR_TOXIC"
	FinishReasonErrorLimit FinishReason = "ERROR_LIMIT"
	FinishReasonUserCancel FinishReason = "USER_CANCEL"
	FinishReasonMaxTokens  FinishReason = "MAX_TOKENS"
)

func NewFinishReasonFromString

func NewFinishReasonFromString(s string) (FinishReason, error)

func (FinishReason) Ptr

func (f FinishReason) Ptr() *FinishReason

type ForbiddenError added in v2.2.0

type ForbiddenError struct {
	*core.APIError
	Body interface{}
}

func (*ForbiddenError) MarshalJSON added in v2.2.0

func (f *ForbiddenError) MarshalJSON() ([]byte, error)

func (*ForbiddenError) UnmarshalJSON added in v2.2.0

func (f *ForbiddenError) UnmarshalJSON(data []byte) error

func (*ForbiddenError) Unwrap added in v2.2.0

func (f *ForbiddenError) Unwrap() error

type GenerateRequest

type GenerateRequest struct {
	// The input text that serves as the starting point for generating the response.
	// Note: The prompt will be pre-processed and modified before reaching the model.
	Prompt string `json:"prompt" url:"prompt"`
	// The identifier of the model to generate with. Currently available models are `command` (default), `command-nightly` (experimental), `command-light`, and `command-light-nightly` (experimental).
	// Smaller, "light" models are faster, while larger models will perform better. [Custom models](/docs/training-custom-models) can also be supplied with their full ID.
	Model *string `json:"model,omitempty" url:"model,omitempty"`
	// The maximum number of generations that will be returned. Defaults to `1`, min value of `1`, max value of `5`.
	NumGenerations *int `json:"num_generations,omitempty" url:"num_generations,omitempty"`
	// The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
	//
	// This parameter is off by default, and if it's not specified, the model will continue generating until it emits an EOS completion token. See [BPE Tokens](/bpe-tokens-wiki) for more details.
	//
	// Can only be set to `0` if `return_likelihoods` is set to `ALL` to get the likelihood of the prompt.
	MaxTokens *int `json:"max_tokens,omitempty" url:"max_tokens,omitempty"`
	// One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.
	//
	// Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
	//
	// If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.
	Truncate *GenerateRequestTruncate `json:"truncate,omitempty" url:"truncate,omitempty"`
	// A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See [Temperature](/temperature-wiki) for more details.
	// Defaults to `0.75`, min value of `0.0`, max value of `5.0`.
	Temperature *float64 `json:"temperature,omitempty" url:"temperature,omitempty"`
	// If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
	Seed *float64 `json:"seed,omitempty" url:"seed,omitempty"`
	// Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).
	// When a preset is specified, the `prompt` parameter becomes optional, and any included parameters will override the preset's parameters.
	Preset *string `json:"preset,omitempty" url:"preset,omitempty"`
	// The generated text will be cut at the beginning of the earliest occurrence of an end sequence. The sequence will be excluded from the text.
	EndSequences []string `json:"end_sequences,omitempty" url:"end_sequences,omitempty"`
	// The generated text will be cut at the end of the earliest occurrence of a stop sequence. The sequence will be included in the text.
	StopSequences []string `json:"stop_sequences,omitempty" url:"stop_sequences,omitempty"`
	// Ensures only the top `k` most likely tokens are considered for generation at each step.
	// Defaults to `0`, min value of `0`, max value of `500`.
	K *int `json:"k,omitempty" url:"k,omitempty"`
	// Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
	// Defaults to `0.75`, min value of `0.01`, max value of `0.99`.
	P *float64 `json:"p,omitempty" url:"p,omitempty"`
	// Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
	//
	// Using `frequency_penalty` in combination with `presence_penalty` is not supported on newer models.
	FrequencyPenalty *float64 `json:"frequency_penalty,omitempty" url:"frequency_penalty,omitempty"`
	// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.
	//
	// Can be used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
	//
	// Using `frequency_penalty` in combination with `presence_penalty` is not supported on newer models.
	PresencePenalty *float64 `json:"presence_penalty,omitempty" url:"presence_penalty,omitempty"`
	// One of `GENERATION|ALL|NONE` to specify how and if the token likelihoods are returned with the response. Defaults to `NONE`.
	//
	// If `GENERATION` is selected, the token likelihoods will only be provided for generated text.
	//
	// If `ALL` is selected, the token likelihoods will be provided both for the prompt and the generated text.
	ReturnLikelihoods *GenerateRequestReturnLikelihoods `json:"return_likelihoods,omitempty" url:"return_likelihoods,omitempty"`
	// When enabled, the user's prompt will be sent to the model without any pre-processing.
	RawPrompting *bool `json:"raw_prompting,omitempty" url:"raw_prompting,omitempty"`
	// contains filtered or unexported fields
}

func (*GenerateRequest) MarshalJSON added in v2.5.1

func (g *GenerateRequest) MarshalJSON() ([]byte, error)

func (*GenerateRequest) Stream

func (g *GenerateRequest) Stream() bool

func (*GenerateRequest) UnmarshalJSON added in v2.5.1

func (g *GenerateRequest) UnmarshalJSON(data []byte) error

type GenerateRequestReturnLikelihoods

type GenerateRequestReturnLikelihoods string

One of `GENERATION|ALL|NONE` to specify how and if the token likelihoods are returned with the response. Defaults to `NONE`.

If `GENERATION` is selected, the token likelihoods will only be provided for generated text.

If `ALL` is selected, the token likelihoods will be provided both for the prompt and the generated text.

const (
	GenerateRequestReturnLikelihoodsGeneration GenerateRequestReturnLikelihoods = "GENERATION"
	GenerateRequestReturnLikelihoodsAll        GenerateRequestReturnLikelihoods = "ALL"
	GenerateRequestReturnLikelihoodsNone       GenerateRequestReturnLikelihoods = "NONE"
)

func NewGenerateRequestReturnLikelihoodsFromString

func NewGenerateRequestReturnLikelihoodsFromString(s string) (GenerateRequestReturnLikelihoods, error)

func (GenerateRequestReturnLikelihoods) Ptr

func (g GenerateRequestReturnLikelihoods) Ptr() *GenerateRequestReturnLikelihoods

type GenerateRequestTruncate

type GenerateRequestTruncate string

One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.

Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.

If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.

const (
	GenerateRequestTruncateNone  GenerateRequestTruncate = "NONE"
	GenerateRequestTruncateStart GenerateRequestTruncate = "START"
	GenerateRequestTruncateEnd   GenerateRequestTruncate = "END"
)

func NewGenerateRequestTruncateFromString

func NewGenerateRequestTruncateFromString(s string) (GenerateRequestTruncate, error)

func (GenerateRequestTruncate) Ptr

func (g GenerateRequestTruncate) Ptr() *GenerateRequestTruncate
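
Putting the request and its enums together, a sketch of a non-streaming generation that caps output length and requests generation-only likelihoods (the `Generations` field on the response is assumed from the non-streaming response type):

maxTokens := 200
temperature := 0.3

response, err := client.Generate(
  context.TODO(),
  &cohere.GenerateRequest{
    Prompt:            "Write a haiku about the sea.",
    MaxTokens:         &maxTokens,
    Temperature:       &temperature,
    Truncate:          cohere.GenerateRequestTruncateEnd.Ptr(),
    ReturnLikelihoods: cohere.GenerateRequestReturnLikelihoodsGeneration.Ptr(),
  },
)
if err != nil {
  return err
}
fmt.Println(response.Generations[0].Text) // field name assumed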

type GenerateStreamEnd added in v2.5.0

type GenerateStreamEnd struct {
	IsFinished   bool                       `json:"is_finished" url:"is_finished"`
	FinishReason *FinishReason              `json:"finish_reason,omitempty" url:"finish_reason,omitempty"`
	Response     *GenerateStreamEndResponse `json:"response,omitempty" url:"response,omitempty"`
	// contains filtered or unexported fields
}

func (*GenerateStreamEnd) String added in v2.5.0

func (g *GenerateStreamEnd) String() string

func (*GenerateStreamEnd) UnmarshalJSON added in v2.5.0

func (g *GenerateStreamEnd) UnmarshalJSON(data []byte) error

type GenerateStreamEndResponse added in v2.5.0

type GenerateStreamEndResponse struct {
	Id          string                      `json:"id" url:"id"`
	Prompt      *string                     `json:"prompt,omitempty" url:"prompt,omitempty"`
	Generations []*SingleGenerationInStream `json:"generations,omitempty" url:"generations,omitempty"`
	// contains filtered or unexported fields
}

func (*GenerateStreamEndResponse) String added in v2.5.0

func (g *GenerateStreamEndResponse) String() string

func (*GenerateStreamEndResponse) UnmarshalJSON added in v2.5.0

func (g *GenerateStreamEndResponse) UnmarshalJSON(data []byte) error

type GenerateStreamError added in v2.5.0

type GenerateStreamError struct {
	// Refers to the nth generation. Only present when `num_generations` is greater than zero.
	Index        *int         `json:"index,omitempty" url:"index,omitempty"`
	IsFinished   bool         `json:"is_finished" url:"is_finished"`
	FinishReason FinishReason `json:"finish_reason" url:"finish_reason"`
	// Error message
	Err string `json:"err" url:"err"`
	// contains filtered or unexported fields
}

func (*GenerateStreamError) String added in v2.5.0

func (g *GenerateStreamError) String() string

func (*GenerateStreamError) UnmarshalJSON added in v2.5.0

func (g *GenerateStreamError) UnmarshalJSON(data []byte) error

type GenerateStreamEvent added in v2.5.0

type GenerateStreamEvent struct {
	// contains filtered or unexported fields
}

func (*GenerateStreamEvent) String added in v2.5.0

func (g *GenerateStreamEvent) String() string

func (*GenerateStreamEvent) UnmarshalJSON added in v2.5.0

func (g *GenerateStreamEvent) UnmarshalJSON(data []byte) error

type GenerateStreamRequest added in v2.5.0

type GenerateStreamRequest struct {
	// The input text that serves as the starting point for generating the response.
	// Note: The prompt will be pre-processed and modified before reaching the model.
	Prompt string `json:"prompt" url:"prompt"`
	// The identifier of the model to generate with. Currently available models are `command` (default), `command-nightly` (experimental), `command-light`, and `command-light-nightly` (experimental).
	// Smaller, "light" models are faster, while larger models will perform better. [Custom models](/docs/training-custom-models) can also be supplied with their full ID.
	Model *string `json:"model,omitempty" url:"model,omitempty"`
	// The maximum number of generations that will be returned. Defaults to `1`, min value of `1`, max value of `5`.
	NumGenerations *int `json:"num_generations,omitempty" url:"num_generations,omitempty"`
	// The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
	//
	// This parameter is off by default, and if it's not specified, the model will continue generating until it emits an EOS completion token. See [BPE Tokens](/bpe-tokens-wiki) for more details.
	//
	// Can only be set to `0` if `return_likelihoods` is set to `ALL` to get the likelihood of the prompt.
	MaxTokens *int `json:"max_tokens,omitempty" url:"max_tokens,omitempty"`
	// One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.
	//
	// Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.
	//
	// If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.
	Truncate *GenerateStreamRequestTruncate `json:"truncate,omitempty" url:"truncate,omitempty"`
	// A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See [Temperature](/temperature-wiki) for more details.
	// Defaults to `0.75`, min value of `0.0`, max value of `5.0`.
	Temperature *float64 `json:"temperature,omitempty" url:"temperature,omitempty"`
	// If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
	Seed *float64 `json:"seed,omitempty" url:"seed,omitempty"`
	// Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).
	// When a preset is specified, the `prompt` parameter becomes optional, and any included parameters will override the preset's parameters.
	Preset *string `json:"preset,omitempty" url:"preset,omitempty"`
	// The generated text will be cut at the beginning of the earliest occurrence of an end sequence. The sequence will be excluded from the text.
	EndSequences []string `json:"end_sequences,omitempty" url:"end_sequences,omitempty"`
	// The generated text will be cut at the end of the earliest occurrence of a stop sequence. The sequence will be included in the text.
	StopSequences []string `json:"stop_sequences,omitempty" url:"stop_sequences,omitempty"`
	// Ensures only the top `k` most likely tokens are considered for generation at each step.
	// Defaults to `0`, min value of `0`, max value of `500`.
	K *int `json:"k,omitempty" url:"k,omitempty"`
	// Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
	// Defaults to `0.75`, min value of `0.01`, max value of `0.99`.
	P *float64 `json:"p,omitempty" url:"p,omitempty"`
	// Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
	//
	// Using `frequency_penalty` in combination with `presence_penalty` is not supported on newer models.
	FrequencyPenalty *float64 `json:"frequency_penalty,omitempty" url:"frequency_penalty,omitempty"`
	// Defaults to `0.0`, min value of `0.0`, max value of `1.0`.
	//
	// Can be used to reduce repetitiveness of generated tokens. Similar to `frequency_penalty`, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
	//
	// Using `frequency_penalty` in combination with `presence_penalty` is not supported on newer models.
	PresencePenalty *float64 `json:"presence_penalty,omitempty" url:"presence_penalty,omitempty"`
	// One of `GENERATION|ALL|NONE` to specify how and if the token likelihoods are returned with the response. Defaults to `NONE`.
	//
	// If `GENERATION` is selected, the token likelihoods will only be provided for generated text.
	//
	// If `ALL` is selected, the token likelihoods will be provided both for the prompt and the generated text.
	ReturnLikelihoods *GenerateStreamRequestReturnLikelihoods `json:"return_likelihoods,omitempty" url:"return_likelihoods,omitempty"`
	// When enabled, the user's prompt will be sent to the model without any pre-processing.
	RawPrompting *bool `json:"raw_prompting,omitempty" url:"raw_prompting,omitempty"`
	// contains filtered or unexported fields
}

func (*GenerateStreamRequest) MarshalJSON added in v2.5.1

func (g *GenerateStreamRequest) MarshalJSON() ([]byte, error)

func (*GenerateStreamRequest) Stream added in v2.5.1

func (g *GenerateStreamRequest) Stream() bool

func (*GenerateStreamRequest) UnmarshalJSON added in v2.5.1

func (g *GenerateStreamRequest) UnmarshalJSON(data []byte) error

type GenerateStreamRequestReturnLikelihoods added in v2.5.0

type GenerateStreamRequestReturnLikelihoods string

One of `GENERATION|ALL|NONE` to specify how and if the token likelihoods are returned with the response. Defaults to `NONE`.

If `GENERATION` is selected, the token likelihoods will only be provided for generated text.

If `ALL` is selected, the token likelihoods will be provided both for the prompt and the generated text.

const (
	GenerateStreamRequestReturnLikelihoodsGeneration GenerateStreamRequestReturnLikelihoods = "GENERATION"
	GenerateStreamRequestReturnLikelihoodsAll        GenerateStreamRequestReturnLikelihoods = "ALL"
	GenerateStreamRequestReturnLikelihoodsNone       GenerateStreamRequestReturnLikelihoods = "NONE"
)

func NewGenerateStreamRequestReturnLikelihoodsFromString added in v2.5.0

func NewGenerateStreamRequestReturnLikelihoodsFromString(s string) (GenerateStreamRequestReturnLikelihoods, error)

func (GenerateStreamRequestReturnLikelihoods) Ptr added in v2.5.0

type GenerateStreamRequestTruncate added in v2.5.0

type GenerateStreamRequestTruncate string

One of `NONE|START|END` to specify how the API will handle inputs longer than the maximum token length.

Passing `START` will discard the start of the input. `END` will discard the end of the input. In both cases, input is discarded until the remaining input is exactly the maximum input token length for the model.

If `NONE` is selected, when the input exceeds the maximum input token length an error will be returned.

const (
	GenerateStreamRequestTruncateNone  GenerateStreamRequestTruncate = "NONE"
	GenerateStreamRequestTruncateStart GenerateStreamRequestTruncate = "START"
	GenerateStreamRequestTruncateEnd   GenerateStreamRequestTruncate = "END"
)

func NewGenerateStreamRequestTruncateFromString added in v2.5.0

func NewGenerateStreamRequestTruncateFromString(s string) (GenerateStreamRequestTruncate, error)

func (GenerateStreamRequestTruncate) Ptr added in v2.5.0

type GenerateStreamText added in v2.5.0

type GenerateStreamText struct {
	// A segment of text of the generation.
	Text string `json:"text" url:"text"`
	// Refers to the nth generation. Only present when `num_generations` is greater than zero, and only when text responses are being streamed.
	Index      *int `json:"index,omitempty" url:"index,omitempty"`
	IsFinished bool `json:"is_finished" url:"is_finished"`
	// contains filtered or unexported fields
}

func (*GenerateStreamText) String added in v2.5.0

func (g *GenerateStreamText) String() string

func (*GenerateStreamText) UnmarshalJSON added in v2.5.0

func (g *GenerateStreamText) UnmarshalJSON(data []byte) error

type GenerateStreamedResponse added in v2.5.0

type GenerateStreamedResponse struct {
	EventType      string
	TextGeneration *GenerateStreamText
	StreamEnd      *GenerateStreamEnd
	StreamError    *GenerateStreamError
}

The response is streamed (content type stream) when `stream` is `true` in the request parameters. Generation tokens are streamed with the GenerationStream response. The final response is of type GenerationFinalResponse.

func (*GenerateStreamedResponse) Accept added in v2.5.0

func (GenerateStreamedResponse) MarshalJSON added in v2.5.0

func (g GenerateStreamedResponse) MarshalJSON() ([]byte, error)

func (*GenerateStreamedResponse) UnmarshalJSON added in v2.5.0

func (g *GenerateStreamedResponse) UnmarshalJSON(data []byte) error

type GenerateStreamedResponseVisitor added in v2.5.0

type GenerateStreamedResponseVisitor interface {
	VisitTextGeneration(*GenerateStreamText) error
	VisitStreamEnd(*GenerateStreamEnd) error
	VisitStreamError(*GenerateStreamError) error
}
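
Putting the stream types together, here is a hedged sketch of consuming a generate stream. The `GenerateStream` method name and the stream's `Recv`/`Close` calls (returning `io.EOF` when the stream ends) are assumptions modeled on the SDK's streaming pattern; the field checks come directly from `GenerateStreamedResponse` above. Uses `context`, `errors`, `fmt`, and `io` from the standard library:

  stream, err := client.GenerateStream(
    context.TODO(),
    &cohere.GenerateStreamRequest{
      Prompt: "Write a haiku about the ocean.",
    },
  )
  if err != nil {
    return err
  }
  defer stream.Close()

  for {
    message, err := stream.Recv()
    if errors.Is(err, io.EOF) {
      break // the server closed the stream
    }
    if err != nil {
      return err
    }
    switch {
    case message.TextGeneration != nil:
      fmt.Print(message.TextGeneration.Text) // a segment of generated text
    case message.StreamError != nil:
      return fmt.Errorf("stream error: %s", message.StreamError.Err)
    case message.StreamEnd != nil:
      fmt.Println() // the final response, including full generations
    }
  }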

type Generation

type Generation struct {
	Id string `json:"id" url:"id"`
	// Prompt used for generations.
	Prompt *string `json:"prompt,omitempty" url:"prompt,omitempty"`
	// List of generated results
	Generations []*SingleGeneration `json:"generations,omitempty" url:"generations,omitempty"`
	Meta        *ApiMeta            `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*Generation) String

func (g *Generation) String() string

func (*Generation) UnmarshalJSON

func (g *Generation) UnmarshalJSON(data []byte) error

type GetConnectorResponse added in v2.5.0

type GetConnectorResponse struct {
	Connector *Connector `json:"connector,omitempty" url:"connector,omitempty"`
	// contains filtered or unexported fields
}

func (*GetConnectorResponse) String added in v2.5.0

func (g *GetConnectorResponse) String() string

func (*GetConnectorResponse) UnmarshalJSON added in v2.5.0

func (g *GetConnectorResponse) UnmarshalJSON(data []byte) error

type GetModelResponse added in v2.7.1

type GetModelResponse struct {
	// Specify this name in the `model` parameter of API requests to use your chosen model.
	Name *string `json:"name,omitempty" url:"name,omitempty"`
	// The API endpoints that the model is compatible with.
	Endpoints []CompatibleEndpoint `json:"endpoints,omitempty" url:"endpoints,omitempty"`
	// Whether the model has been fine-tuned or not.
	Finetuned *bool `json:"finetuned,omitempty" url:"finetuned,omitempty"`
	// The maximum number of tokens that the model can process in a single request. Note that not all of these tokens are always available due to special tokens and preambles that Cohere has added by default.
	ContextLength *float64 `json:"context_length,omitempty" url:"context_length,omitempty"`
	// Public URL to the tokenizer's configuration file.
	TokenizerUrl *string `json:"tokenizer_url,omitempty" url:"tokenizer_url,omitempty"`
	// The API endpoints for which the model is the default.
	DefaultEndpoints []CompatibleEndpoint `json:"default_endpoints,omitempty" url:"default_endpoints,omitempty"`
	// contains filtered or unexported fields
}

Contains information about the model and which API endpoints it can be used with.

func (*GetModelResponse) String added in v2.7.1

func (g *GetModelResponse) String() string

func (*GetModelResponse) UnmarshalJSON added in v2.7.1

func (g *GetModelResponse) UnmarshalJSON(data []byte) error

type InternalServerError

type InternalServerError struct {
	*core.APIError
	Body interface{}
}

func (*InternalServerError) MarshalJSON

func (i *InternalServerError) MarshalJSON() ([]byte, error)

func (*InternalServerError) UnmarshalJSON

func (i *InternalServerError) UnmarshalJSON(data []byte) error

func (*InternalServerError) Unwrap

func (i *InternalServerError) Unwrap() error

type LabelMetric added in v2.6.0

type LabelMetric struct {
	// Total number of examples for this label
	TotalExamples *string `json:"total_examples,omitempty" url:"total_examples,omitempty"`
	// value of the label
	Label *string `json:"label,omitempty" url:"label,omitempty"`
	// samples for this label
	Samples []string `json:"samples,omitempty" url:"samples,omitempty"`
	// contains filtered or unexported fields
}

func (*LabelMetric) String added in v2.6.0

func (l *LabelMetric) String() string

func (*LabelMetric) UnmarshalJSON added in v2.6.0

func (l *LabelMetric) UnmarshalJSON(data []byte) error

type ListConnectorsResponse added in v2.5.0

type ListConnectorsResponse struct {
	Connectors []*Connector `json:"connectors,omitempty" url:"connectors,omitempty"`
	// Total number of connectors.
	TotalCount *float64 `json:"total_count,omitempty" url:"total_count,omitempty"`
	// contains filtered or unexported fields
}

func (*ListConnectorsResponse) String added in v2.5.0

func (l *ListConnectorsResponse) String() string

func (*ListConnectorsResponse) UnmarshalJSON added in v2.5.0

func (l *ListConnectorsResponse) UnmarshalJSON(data []byte) error

type ListEmbedJobResponse added in v2.5.0

type ListEmbedJobResponse struct {
	EmbedJobs []*EmbedJob `json:"embed_jobs,omitempty" url:"embed_jobs,omitempty"`
	// contains filtered or unexported fields
}

func (*ListEmbedJobResponse) String added in v2.5.0

func (l *ListEmbedJobResponse) String() string

func (*ListEmbedJobResponse) UnmarshalJSON added in v2.5.0

func (l *ListEmbedJobResponse) UnmarshalJSON(data []byte) error

type ListModelsResponse added in v2.6.0

type ListModelsResponse struct {
	Models []*GetModelResponse `json:"models,omitempty" url:"models,omitempty"`
	// A token to retrieve the next page of results. Provide in the page_token parameter of the next request.
	NextPageToken *string `json:"next_page_token,omitempty" url:"next_page_token,omitempty"`
	// contains filtered or unexported fields
}

func (*ListModelsResponse) String added in v2.6.0

func (l *ListModelsResponse) String() string

func (*ListModelsResponse) UnmarshalJSON added in v2.6.0

func (l *ListModelsResponse) UnmarshalJSON(data []byte) error

type Metrics added in v2.6.0

type Metrics struct {
	FinetuneDatasetMetrics *FinetuneDatasetMetrics `json:"finetune_dataset_metrics,omitempty" url:"finetune_dataset_metrics,omitempty"`
	EmbedData              *MetricsEmbedData       `json:"embed_data,omitempty" url:"embed_data,omitempty"`
	// contains filtered or unexported fields
}

func (*Metrics) String added in v2.6.0

func (m *Metrics) String() string

func (*Metrics) UnmarshalJSON added in v2.6.0

func (m *Metrics) UnmarshalJSON(data []byte) error

type MetricsEmbedData added in v2.7.4

type MetricsEmbedData struct {
	// the fields in the dataset
	Fields []*MetricsEmbedDataFieldsItem `json:"fields,omitempty" url:"fields,omitempty"`
	// contains filtered or unexported fields
}

func (*MetricsEmbedData) String added in v2.7.4

func (m *MetricsEmbedData) String() string

func (*MetricsEmbedData) UnmarshalJSON added in v2.7.4

func (m *MetricsEmbedData) UnmarshalJSON(data []byte) error

type MetricsEmbedDataFieldsItem added in v2.7.4

type MetricsEmbedDataFieldsItem struct {
	// the name of the field
	Name *string `json:"name,omitempty" url:"name,omitempty"`
	// the number of times the field appears in the dataset
	Count *float64 `json:"count,omitempty" url:"count,omitempty"`
	// contains filtered or unexported fields
}

func (*MetricsEmbedDataFieldsItem) String added in v2.7.4

func (m *MetricsEmbedDataFieldsItem) String() string

func (*MetricsEmbedDataFieldsItem) UnmarshalJSON added in v2.7.4

func (m *MetricsEmbedDataFieldsItem) UnmarshalJSON(data []byte) error

type ModelsListRequest added in v2.6.0

type ModelsListRequest struct {
	// Maximum number of models to include in a page
	// Defaults to `20`, min value of `1`, max value of `1000`.
	PageSize *float64 `json:"-" url:"page_size,omitempty"`
	// Page token provided in the `next_page_token` field of a previous response.
	PageToken *string `json:"-" url:"page_token,omitempty"`
	// When provided, filters the list of models to only those that are compatible with the specified endpoint.
	Endpoint *CompatibleEndpoint `json:"-" url:"endpoint,omitempty"`
	// When provided, filters the list of models to only the default model for the endpoint. This parameter is only valid when `endpoint` is provided.
	DefaultOnly *bool `json:"-" url:"default_only,omitempty"`
}
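
As a sketch of the pagination contract described by `NextPageToken`: the `client.Models.List` method name is an assumption, but the token round-trip is exactly what `ModelsListRequest` and `ListModelsResponse` above specify.

  pageSize := 50.0
  request := &cohere.ModelsListRequest{PageSize: &pageSize}
  for {
    page, err := client.Models.List(context.TODO(), request)
    if err != nil {
      return err
    }
    for _, model := range page.Models {
      if model.Name != nil {
        fmt.Println(*model.Name)
      }
    }
    if page.NextPageToken == nil || *page.NextPageToken == "" {
      break // no further pages
    }
    // Feed the token back to fetch the next page.
    request.PageToken = page.NextPageToken
  }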

type NonStreamedChatResponse

type NonStreamedChatResponse struct {
	// Contents of the reply generated by the model.
	Text string `json:"text" url:"text"`
	// Unique identifier for the generated reply. Useful for submitting feedback.
	GenerationId *string `json:"generation_id,omitempty" url:"generation_id,omitempty"`
	// Inline citations for the generated reply.
	Citations []*ChatCitation `json:"citations,omitempty" url:"citations,omitempty"`
	// Documents seen by the model when generating the reply.
	Documents []ChatDocument `json:"documents,omitempty" url:"documents,omitempty"`
	// Denotes that a search for documents is required during the RAG flow.
	IsSearchRequired *bool `json:"is_search_required,omitempty" url:"is_search_required,omitempty"`
	// Generated search queries, meant to be used as part of the RAG flow.
	SearchQueries []*ChatSearchQuery `json:"search_queries,omitempty" url:"search_queries,omitempty"`
	// Documents retrieved from each of the conducted searches.
	SearchResults []*ChatSearchResult `json:"search_results,omitempty" url:"search_results,omitempty"`
	FinishReason  *FinishReason       `json:"finish_reason,omitempty" url:"finish_reason,omitempty"`
	ToolCalls     []*ToolCall         `json:"tool_calls,omitempty" url:"tool_calls,omitempty"`
	// A list of previous messages between the user and the model, meant to give the model conversational context for responding to the user's `message`.
	ChatHistory []*ChatMessage `json:"chat_history,omitempty" url:"chat_history,omitempty"`
	// The prompt that was used. Only present when `return_prompt` in the request is set to true.
	Prompt *string  `json:"prompt,omitempty" url:"prompt,omitempty"`
	Meta   *ApiMeta `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*NonStreamedChatResponse) String

func (n *NonStreamedChatResponse) String() string

func (*NonStreamedChatResponse) UnmarshalJSON

func (n *NonStreamedChatResponse) UnmarshalJSON(data []byte) error

type NotFoundError added in v2.2.0

type NotFoundError struct {
	*core.APIError
	Body interface{}
}

func (*NotFoundError) MarshalJSON added in v2.2.0

func (n *NotFoundError) MarshalJSON() ([]byte, error)

func (*NotFoundError) UnmarshalJSON added in v2.2.0

func (n *NotFoundError) UnmarshalJSON(data []byte) error

func (*NotFoundError) Unwrap added in v2.2.0

func (n *NotFoundError) Unwrap() error

type OAuthAuthorizeResponse added in v2.2.0

type OAuthAuthorizeResponse struct {
	// The OAuth 2.0 redirect url. Redirect the user to this url to authorize the connector.
	RedirectUrl *string `json:"redirect_url,omitempty" url:"redirect_url,omitempty"`
	// contains filtered or unexported fields
}

func (*OAuthAuthorizeResponse) String added in v2.2.0

func (o *OAuthAuthorizeResponse) String() string

func (*OAuthAuthorizeResponse) UnmarshalJSON added in v2.2.0

func (o *OAuthAuthorizeResponse) UnmarshalJSON(data []byte) error

type ParseInfo added in v2.5.2

type ParseInfo struct {
	Separator *string `json:"separator,omitempty" url:"separator,omitempty"`
	Delimiter *string `json:"delimiter,omitempty" url:"delimiter,omitempty"`
	// contains filtered or unexported fields
}

func (*ParseInfo) String added in v2.5.2

func (p *ParseInfo) String() string

func (*ParseInfo) UnmarshalJSON added in v2.5.2

func (p *ParseInfo) UnmarshalJSON(data []byte) error

type RerankRequest

type RerankRequest struct {
	// The identifier of the model to use, one of : `rerank-english-v3.0`, `rerank-multilingual-v3.0`, `rerank-english-v2.0`, `rerank-multilingual-v2.0`
	Model *string `json:"model,omitempty" url:"model,omitempty"`
	// The search query
	Query string `json:"query" url:"query"`
	// A list of document objects or strings to rerank.
	// If a document is provided, the text field is required and all other fields will be preserved in the response.
	//
	// The total max chunks (length of documents * max_chunks_per_doc) must be less than 10000.
	//
	// We recommend a maximum of 1,000 documents for optimal endpoint performance.
	Documents []*RerankRequestDocumentsItem `json:"documents,omitempty" url:"documents,omitempty"`
	// The number of most relevant documents or indices to return, defaults to the length of the documents
	TopN *int `json:"top_n,omitempty" url:"top_n,omitempty"`
	// If a JSON object is provided, you can specify which keys you would like to have considered for reranking. The model will rerank based on the order of the fields passed in (i.e. rank_fields=['title','author','text'] will rerank using the values in title, author, and text sequentially. If the length of title, author, and text exceeds the context length of the model, the chunking will not re-consider earlier fields). If not provided, the model will use the default text field for ranking.
	RankFields []string `json:"rank_fields,omitempty" url:"rank_fields,omitempty"`
	// - If false, returns results without the doc text - the API will return a list of {index, relevance score} where index is inferred from the list passed into the request.
	// - If true, returns results with the doc text passed in - the API will return an ordered list of {index, text, relevance score} where index + text refers to the list passed into the request.
	ReturnDocuments *bool `json:"return_documents,omitempty" url:"return_documents,omitempty"`
	// The maximum number of chunks to produce internally from a document
	MaxChunksPerDoc *int `json:"max_chunks_per_doc,omitempty" url:"max_chunks_per_doc,omitempty"`
}

type RerankRequestDocumentsItem

type RerankRequestDocumentsItem struct {
	String                         string
	RerankRequestDocumentsItemText *RerankRequestDocumentsItemText
}

func (*RerankRequestDocumentsItem) Accept

func (RerankRequestDocumentsItem) MarshalJSON

func (r RerankRequestDocumentsItem) MarshalJSON() ([]byte, error)

func (*RerankRequestDocumentsItem) UnmarshalJSON

func (r *RerankRequestDocumentsItem) UnmarshalJSON(data []byte) error

type RerankRequestDocumentsItemText

type RerankRequestDocumentsItemText struct {
	// The text of the document to rerank.
	Text string `json:"text" url:"text"`
	// contains filtered or unexported fields
}

func (*RerankRequestDocumentsItemText) String

func (*RerankRequestDocumentsItemText) UnmarshalJSON

func (r *RerankRequestDocumentsItemText) UnmarshalJSON(data []byte) error

type RerankRequestDocumentsItemVisitor

type RerankRequestDocumentsItemVisitor interface {
	VisitString(string) error
	VisitRerankRequestDocumentsItemText(*RerankRequestDocumentsItemText) error
}

type RerankResponse

type RerankResponse struct {
	Id *string `json:"id,omitempty" url:"id,omitempty"`
	// An ordered list of ranked documents
	Results []*RerankResponseResultsItem `json:"results,omitempty" url:"results,omitempty"`
	Meta    *ApiMeta                     `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*RerankResponse) String

func (r *RerankResponse) String() string

func (*RerankResponse) UnmarshalJSON

func (r *RerankResponse) UnmarshalJSON(data []byte) error

type RerankResponseResultsItem

type RerankResponseResultsItem struct {
	// If `return_documents` is set to `false`, this will return none; if `true`, it will return the documents passed in
	Document *RerankResponseResultsItemDocument `json:"document,omitempty" url:"document,omitempty"`
	// Corresponds to the index in the original list of documents to which the ranked document belongs. (i.e. if the first value in the `results` object has an `index` value of 3, it means in the list of documents passed in, the document at `index=3` had the highest relevance)
	Index int `json:"index" url:"index"`
	// Relevance scores are normalized to be in the range `[0, 1]`. Scores close to `1` indicate a high relevance to the query, and scores closer to `0` indicate low relevance. It is not accurate to assume a score of 0.9 means the document is 2x more relevant than a document with a score of 0.45
	RelevanceScore float64 `json:"relevance_score" url:"relevance_score"`
	// contains filtered or unexported fields
}

func (*RerankResponseResultsItem) String

func (r *RerankResponseResultsItem) String() string

func (*RerankResponseResultsItem) UnmarshalJSON

func (r *RerankResponseResultsItem) UnmarshalJSON(data []byte) error
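
For example, a hedged sketch of a rerank call. The `client.Rerank` method name is an assumption (alongside `Chat` and `Generate`); the plain-string document form and the result fields come from the types above.

  response, err := client.Rerank(
    context.TODO(),
    &cohere.RerankRequest{
      Query: "What is the capital of the United States?",
      Documents: []*cohere.RerankRequestDocumentsItem{
        {String: "Carson City is the capital city of the American state of Nevada."},
        {String: "Washington, D.C. is the capital of the United States."},
      },
    },
  )
  if err != nil {
    return err
  }
  for _, result := range response.Results {
    // Index maps back into the Documents slice; RelevanceScore is in [0, 1].
    fmt.Printf("documents[%d]: score=%.3f\n", result.Index, result.RelevanceScore)
  }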

type RerankResponseResultsItemDocument

type RerankResponseResultsItemDocument struct {
	// The text of the document to rerank
	Text string `json:"text" url:"text"`
	// contains filtered or unexported fields
}

If `return_documents` is set to `false`, this will return none; if `true`, it will return the documents passed in

func (*RerankResponseResultsItemDocument) String

func (*RerankResponseResultsItemDocument) UnmarshalJSON

func (r *RerankResponseResultsItemDocument) UnmarshalJSON(data []byte) error

type RerankerDataMetrics added in v2.6.0

type RerankerDataMetrics struct {
	// The number of training queries.
	NumTrainQueries *string `json:"num_train_queries,omitempty" url:"num_train_queries,omitempty"`
	// The sum of all relevant passages of valid training examples.
	NumTrainRelevantPassages *string `json:"num_train_relevant_passages,omitempty" url:"num_train_relevant_passages,omitempty"`
	// The sum of all hard negatives of valid training examples.
	NumTrainHardNegatives *string `json:"num_train_hard_negatives,omitempty" url:"num_train_hard_negatives,omitempty"`
	// The number of evaluation queries.
	NumEvalQueries *string `json:"num_eval_queries,omitempty" url:"num_eval_queries,omitempty"`
	// The sum of all relevant passages of valid eval examples.
	NumEvalRelevantPassages *string `json:"num_eval_relevant_passages,omitempty" url:"num_eval_relevant_passages,omitempty"`
	// The sum of all hard negatives of valid eval examples.
	NumEvalHardNegatives *string `json:"num_eval_hard_negatives,omitempty" url:"num_eval_hard_negatives,omitempty"`
	// contains filtered or unexported fields
}

func (*RerankerDataMetrics) String added in v2.6.0

func (r *RerankerDataMetrics) String() string

func (*RerankerDataMetrics) UnmarshalJSON added in v2.6.0

func (r *RerankerDataMetrics) UnmarshalJSON(data []byte) error

type ServiceUnavailableError added in v2.7.0

type ServiceUnavailableError struct {
	*core.APIError
	Body *finetuning.Error
}

Status Service Unavailable

func (*ServiceUnavailableError) MarshalJSON added in v2.7.0

func (s *ServiceUnavailableError) MarshalJSON() ([]byte, error)

func (*ServiceUnavailableError) UnmarshalJSON added in v2.7.0

func (s *ServiceUnavailableError) UnmarshalJSON(data []byte) error

func (*ServiceUnavailableError) Unwrap added in v2.7.0

func (s *ServiceUnavailableError) Unwrap() error

type SingleGeneration

type SingleGeneration struct {
	Id   string `json:"id" url:"id"`
	Text string `json:"text" url:"text"`
	// Refers to the nth generation. Only present when `num_generations` is greater than zero.
	Index      *int     `json:"index,omitempty" url:"index,omitempty"`
	Likelihood *float64 `json:"likelihood,omitempty" url:"likelihood,omitempty"`
	// Only returned if `return_likelihoods` is set to `GENERATION` or `ALL`. The likelihood refers to the average log-likelihood of the entire specified string, which is useful for [evaluating the performance of your model](likelihood-eval), especially if you've created a [custom model](/docs/training-custom-models). Individual token likelihoods provide the log-likelihood of each token. The first token will not have a likelihood.
	TokenLikelihoods []*SingleGenerationTokenLikelihoodsItem `json:"token_likelihoods,omitempty" url:"token_likelihoods,omitempty"`
	// contains filtered or unexported fields
}

func (*SingleGeneration) String

func (s *SingleGeneration) String() string

func (*SingleGeneration) UnmarshalJSON

func (s *SingleGeneration) UnmarshalJSON(data []byte) error

type SingleGenerationInStream

type SingleGenerationInStream struct {
	Id string `json:"id" url:"id"`
	// Full text of the generation.
	Text string `json:"text" url:"text"`
	// Refers to the nth generation. Only present when `num_generations` is greater than zero.
	Index        *int         `json:"index,omitempty" url:"index,omitempty"`
	FinishReason FinishReason `json:"finish_reason" url:"finish_reason"`
	// contains filtered or unexported fields
}

func (*SingleGenerationInStream) String

func (s *SingleGenerationInStream) String() string

func (*SingleGenerationInStream) UnmarshalJSON

func (s *SingleGenerationInStream) UnmarshalJSON(data []byte) error

type SingleGenerationTokenLikelihoodsItem

type SingleGenerationTokenLikelihoodsItem struct {
	Token      string  `json:"token" url:"token"`
	Likelihood float64 `json:"likelihood" url:"likelihood"`
	// contains filtered or unexported fields
}

func (*SingleGenerationTokenLikelihoodsItem) String

func (*SingleGenerationTokenLikelihoodsItem) UnmarshalJSON

func (s *SingleGenerationTokenLikelihoodsItem) UnmarshalJSON(data []byte) error

type StreamedChatResponse

type StreamedChatResponse struct {
	EventType               string
	StreamStart             *ChatStreamStartEvent
	SearchQueriesGeneration *ChatSearchQueriesGenerationEvent
	SearchResults           *ChatSearchResultsEvent
	TextGeneration          *ChatTextGenerationEvent
	CitationGeneration      *ChatCitationGenerationEvent
	ToolCallsGeneration     *ChatToolCallsGenerationEvent
	StreamEnd               *ChatStreamEndEvent
}

StreamedChatResponse is returned in streaming mode (specified with `stream=True` in the request).

func (*StreamedChatResponse) Accept

func (StreamedChatResponse) MarshalJSON

func (s StreamedChatResponse) MarshalJSON() ([]byte, error)

func (*StreamedChatResponse) UnmarshalJSON

func (s *StreamedChatResponse) UnmarshalJSON(data []byte) error

type StreamedChatResponseVisitor

type StreamedChatResponseVisitor interface {
	VisitStreamStart(*ChatStreamStartEvent) error
	VisitSearchQueriesGeneration(*ChatSearchQueriesGenerationEvent) error
	VisitSearchResults(*ChatSearchResultsEvent) error
	VisitTextGeneration(*ChatTextGenerationEvent) error
	VisitCitationGeneration(*ChatCitationGenerationEvent) error
	VisitToolCallsGeneration(*ChatToolCallsGenerationEvent) error
	VisitStreamEnd(*ChatStreamEndEvent) error
}
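
The visitor interface offers an exhaustive alternative to checking each pointer field on `StreamedChatResponse` by hand. A hedged sketch follows: the `Accept` method is assumed to take a `StreamedChatResponseVisitor` and return its error, and the `Text` field on `ChatTextGenerationEvent` is likewise an assumption.

  type textPrinter struct{}

  func (textPrinter) VisitStreamStart(*cohere.ChatStreamStartEvent) error { return nil }
  func (textPrinter) VisitSearchQueriesGeneration(*cohere.ChatSearchQueriesGenerationEvent) error {
    return nil
  }
  func (textPrinter) VisitSearchResults(*cohere.ChatSearchResultsEvent) error { return nil }
  func (textPrinter) VisitTextGeneration(e *cohere.ChatTextGenerationEvent) error {
    fmt.Print(e.Text) // assumed field carrying the streamed text segment
    return nil
  }
  func (textPrinter) VisitCitationGeneration(*cohere.ChatCitationGenerationEvent) error { return nil }
  func (textPrinter) VisitToolCallsGeneration(*cohere.ChatToolCallsGenerationEvent) error { return nil }
  func (textPrinter) VisitStreamEnd(*cohere.ChatStreamEndEvent) error { return nil }

  // Then, for each message read from the chat stream:
  //   if err := message.Accept(textPrinter{}); err != nil { ... }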

type SummarizeRequest

type SummarizeRequest struct {
	// The text to generate a summary for. Can be up to 100,000 characters long. Currently the only supported language is English.
	Text string `json:"text" url:"text"`
	// One of `short`, `medium`, `long`, or `auto`, defaults to `auto`. Indicates the approximate length of the summary. If `auto` is selected, the best option will be picked based on the input text.
	Length *SummarizeRequestLength `json:"length,omitempty" url:"length,omitempty"`
	// One of `paragraph`, `bullets`, or `auto`, defaults to `auto`. Indicates the style in which the summary will be delivered - in a free form paragraph or in bullet points. If `auto` is selected, the best option will be picked based on the input text.
	Format *SummarizeRequestFormat `json:"format,omitempty" url:"format,omitempty"`
	// The identifier of the model to generate the summary with. Currently available models are `command` (default), `command-nightly` (experimental), `command-light`, and `command-light-nightly` (experimental). Smaller, "light" models are faster, while larger models will perform better.
	Model *string `json:"model,omitempty" url:"model,omitempty"`
	// One of `low`, `medium`, `high`, or `auto`, defaults to `auto`. Controls how close to the original text the summary is. `high` extractiveness summaries will lean towards reusing sentences verbatim, while `low` extractiveness summaries will tend to paraphrase more. If `auto` is selected, the best option will be picked based on the input text.
	Extractiveness *SummarizeRequestExtractiveness `json:"extractiveness,omitempty" url:"extractiveness,omitempty"`
	// Ranges from 0 to 5. Controls the randomness of the output. Lower values tend to generate more “predictable” output, while higher values tend to generate more “creative” output. The sweet spot is typically between 0 and 1.
	Temperature *float64 `json:"temperature,omitempty" url:"temperature,omitempty"`
	// A free-form instruction for modifying how the summaries get generated. Should complete the sentence "Generate a summary _", e.g. "focusing on the next steps" or "written by Yoda".
	AdditionalCommand *string `json:"additional_command,omitempty" url:"additional_command,omitempty"`
}

type SummarizeRequestExtractiveness

type SummarizeRequestExtractiveness string

One of `low`, `medium`, `high`, or `auto`, defaults to `auto`. Controls how close to the original text the summary is. `high` extractiveness summaries will lean towards reusing sentences verbatim, while `low` extractiveness summaries will tend to paraphrase more. If `auto` is selected, the best option will be picked based on the input text.

const (
	SummarizeRequestExtractivenessLow    SummarizeRequestExtractiveness = "low"
	SummarizeRequestExtractivenessMedium SummarizeRequestExtractiveness = "medium"
	SummarizeRequestExtractivenessHigh   SummarizeRequestExtractiveness = "high"
)

func NewSummarizeRequestExtractivenessFromString

func NewSummarizeRequestExtractivenessFromString(s string) (SummarizeRequestExtractiveness, error)

func (SummarizeRequestExtractiveness) Ptr

type SummarizeRequestFormat

type SummarizeRequestFormat string

One of `paragraph`, `bullets`, or `auto`, defaults to `auto`. Indicates the style in which the summary will be delivered - in a free form paragraph or in bullet points. If `auto` is selected, the best option will be picked based on the input text.

const (
	SummarizeRequestFormatParagraph SummarizeRequestFormat = "paragraph"
	SummarizeRequestFormatBullets   SummarizeRequestFormat = "bullets"
)

func NewSummarizeRequestFormatFromString

func NewSummarizeRequestFormatFromString(s string) (SummarizeRequestFormat, error)

func (SummarizeRequestFormat) Ptr

type SummarizeRequestLength

type SummarizeRequestLength string

One of `short`, `medium`, `long`, or `auto`, defaults to `auto`. Indicates the approximate length of the summary. If `auto` is selected, the best option will be picked based on the input text.

const (
	SummarizeRequestLengthShort  SummarizeRequestLength = "short"
	SummarizeRequestLengthMedium SummarizeRequestLength = "medium"
	SummarizeRequestLengthLong   SummarizeRequestLength = "long"
)

func NewSummarizeRequestLengthFromString

func NewSummarizeRequestLengthFromString(s string) (SummarizeRequestLength, error)

func (SummarizeRequestLength) Ptr

type SummarizeResponse

type SummarizeResponse struct {
	// Generated ID for the summary
	Id *string `json:"id,omitempty" url:"id,omitempty"`
	// Generated summary for the text
	Summary *string  `json:"summary,omitempty" url:"summary,omitempty"`
	Meta    *ApiMeta `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*SummarizeResponse) String

func (s *SummarizeResponse) String() string

func (*SummarizeResponse) UnmarshalJSON

func (s *SummarizeResponse) UnmarshalJSON(data []byte) error
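
Tying the request and its enums together, a hedged sketch: the `client.Summarize` method name is an assumption, while the `Ptr` helpers are the ones listed above and `article` is a placeholder for your input text.

  article := "..." // up to 100,000 characters of English text
  response, err := client.Summarize(
    context.TODO(),
    &cohere.SummarizeRequest{
      Text:           article,
      Length:         cohere.SummarizeRequestLengthShort.Ptr(),
      Format:         cohere.SummarizeRequestFormatBullets.Ptr(),
      Extractiveness: cohere.SummarizeRequestExtractivenessLow.Ptr(),
    },
  )
  if err != nil {
    return err
  }
  if response.Summary != nil {
    fmt.Println(*response.Summary)
  }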

type TokenizeRequest

type TokenizeRequest struct {
	// The string to be tokenized; the minimum text length is 1 character and the maximum is 65,536 characters.
	Text string `json:"text" url:"text"`
	// An optional parameter to provide the model name. This will ensure that the tokenization uses the tokenizer used by that model.
	Model string `json:"model" url:"model"`
}

type TokenizeResponse

type TokenizeResponse struct {
	// An array of tokens, where each token is an integer.
	Tokens       []int    `json:"tokens,omitempty" url:"tokens,omitempty"`
	TokenStrings []string `json:"token_strings,omitempty" url:"token_strings,omitempty"`
	Meta         *ApiMeta `json:"meta,omitempty" url:"meta,omitempty"`
	// contains filtered or unexported fields
}

func (*TokenizeResponse) String

func (t *TokenizeResponse) String() string

func (*TokenizeResponse) UnmarshalJSON

func (t *TokenizeResponse) UnmarshalJSON(data []byte) error
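
A brief sketch of a tokenize round trip; the `client.Tokenize` method name is an assumption, and `command` is one of the model names documented above.

  response, err := client.Tokenize(
    context.TODO(),
    &cohere.TokenizeRequest{
      Text:  "tokenize me! :D",
      Model: "command", // tokenization uses this model's tokenizer
    },
  )
  if err != nil {
    return err
  }
  fmt.Println(len(response.Tokens), response.TokenStrings)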

type TooManyRequestsError added in v2.6.0

type TooManyRequestsError struct {
	*core.APIError
	Body *TooManyRequestsErrorBody
}

Too many requests

func (*TooManyRequestsError) MarshalJSON added in v2.6.0

func (t *TooManyRequestsError) MarshalJSON() ([]byte, error)

func (*TooManyRequestsError) UnmarshalJSON added in v2.6.0

func (t *TooManyRequestsError) UnmarshalJSON(data []byte) error

func (*TooManyRequestsError) Unwrap added in v2.6.0

func (t *TooManyRequestsError) Unwrap() error

type TooManyRequestsErrorBody added in v2.7.3

type TooManyRequestsErrorBody struct {
	Data *string `json:"data,omitempty" url:"data,omitempty"`
	// contains filtered or unexported fields
}

func (*TooManyRequestsErrorBody) String added in v2.7.3

func (t *TooManyRequestsErrorBody) String() string

func (*TooManyRequestsErrorBody) UnmarshalJSON added in v2.7.3

func (t *TooManyRequestsErrorBody) UnmarshalJSON(data []byte) error

type Tool added in v2.6.0

type Tool struct {
	// The name of the tool to be called. Valid names contain only the characters `a-z`, `A-Z`, `0-9`, `_` and must not begin with a digit.
	Name string `json:"name" url:"name"`
	// The description of what the tool does, the model uses the description to choose when and how to call the function.
	Description string `json:"description" url:"description"`
	// The input parameters of the tool. Accepts a dictionary where the key is the name of the parameter and the value is the parameter spec. Valid parameter names contain only the characters `a-z`, `A-Z`, `0-9`, `_` and must not begin with a digit.
	//
	// ```
	//
	//	{
	//	  "my_param": {
	//	    "description": <string>,
	//	    "type": <string>, // any python data type, such as 'str', 'bool'
	//	    "required": <boolean>
	//	  }
	//	}
	//
	// ```
	ParameterDefinitions map[string]*ToolParameterDefinitionsValue `json:"parameter_definitions,omitempty" url:"parameter_definitions,omitempty"`
	// contains filtered or unexported fields
}

func (*Tool) String added in v2.6.0

func (t *Tool) String() string

func (*Tool) UnmarshalJSON added in v2.6.0

func (t *Tool) UnmarshalJSON(data []byte) error

type ToolCall added in v2.6.0

type ToolCall struct {
	// Name of the tool to call.
	Name string `json:"name" url:"name"`
	// The name and value of the parameters to use when invoking a tool.
	Parameters map[string]interface{} `json:"parameters,omitempty" url:"parameters,omitempty"`
	// contains filtered or unexported fields
}

Contains the tool calls generated by the model. Use it to invoke your tools.

func (*ToolCall) String added in v2.6.0

func (t *ToolCall) String() string

func (*ToolCall) UnmarshalJSON added in v2.6.0

func (t *ToolCall) UnmarshalJSON(data []byte) error

type ToolParameterDefinitionsValue added in v2.6.0

type ToolParameterDefinitionsValue struct {
	// The description of the parameter.
	Description *string `json:"description,omitempty" url:"description,omitempty"`
	// The type of the parameter. Must be a valid Python type.
	Type string `json:"type" url:"type"`
	// Denotes whether the parameter is always present (required) or not. Defaults to not required.
	Required *bool `json:"required,omitempty" url:"required,omitempty"`
	// contains filtered or unexported fields
}

func (*ToolParameterDefinitionsValue) String added in v2.6.0

func (*ToolParameterDefinitionsValue) UnmarshalJSON added in v2.6.0

func (t *ToolParameterDefinitionsValue) UnmarshalJSON(data []byte) error
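
Putting `Tool` and `ToolParameterDefinitionsValue` together, a sketch of declaring a tool. The tool name and its parameter are purely illustrative, and passing the tool to the model is assumed to happen via a `Tools` field on the chat request:

  description := "Retrieves sales data for this day, formatted as YYYY-MM-DD."
  required := true
  tool := &cohere.Tool{
    Name:        "query_daily_sales_report", // illustrative tool name
    Description: "Connects to a database to retrieve overall sales volumes for a given day.",
    ParameterDefinitions: map[string]*cohere.ToolParameterDefinitionsValue{
      "day": {
        Description: &description,
        Type:        "str", // a Python type name, per the field docs
        Required:    &required,
      },
    },
  }

When the model decides to use the tool, the resulting `ToolCall` carries the `Name` and a `Parameters` map with which to invoke it.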

type UnauthorizedError added in v2.7.0

type UnauthorizedError struct {
	*core.APIError
	Body *finetuning.Error
}

Unauthorized

func (*UnauthorizedError) MarshalJSON added in v2.7.0

func (u *UnauthorizedError) MarshalJSON() ([]byte, error)

func (*UnauthorizedError) UnmarshalJSON added in v2.7.0

func (u *UnauthorizedError) UnmarshalJSON(data []byte) error

func (*UnauthorizedError) Unwrap added in v2.7.0

func (u *UnauthorizedError) Unwrap() error

type UpdateConnectorRequest added in v2.5.0

type UpdateConnectorRequest struct {
	// A human-readable name for the connector.
	Name *string `json:"name,omitempty" url:"name,omitempty"`
	// The URL of the connector that will be used to search for documents.
	Url *string `json:"url,omitempty" url:"url,omitempty"`
	// A list of fields to exclude from the prompt (fields remain in the document).
	Excludes []string `json:"excludes,omitempty" url:"excludes,omitempty"`
	// The OAuth 2.0 configuration for the connector. Cannot be specified if service_auth is specified.
	Oauth             *CreateConnectorOAuth `json:"oauth,omitempty" url:"oauth,omitempty"`
	Active            *bool                 `json:"active,omitempty" url:"active,omitempty"`
	ContinueOnFailure *bool                 `json:"continue_on_failure,omitempty" url:"continue_on_failure,omitempty"`
	// The service to service authentication configuration for the connector. Cannot be specified if oauth is specified.
	ServiceAuth *CreateConnectorServiceAuth `json:"service_auth,omitempty" url:"service_auth,omitempty"`
}

type UpdateConnectorResponse added in v2.5.0

type UpdateConnectorResponse struct {
	Connector *Connector `json:"connector,omitempty" url:"connector,omitempty"`
	// contains filtered or unexported fields
}

func (*UpdateConnectorResponse) String added in v2.5.0

func (u *UpdateConnectorResponse) String() string

func (*UpdateConnectorResponse) UnmarshalJSON added in v2.5.0

func (u *UpdateConnectorResponse) UnmarshalJSON(data []byte) error
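
Finally, a hedged sketch of a connector update: the `client.Connectors.Update` method and its ID argument are assumptions, while the request shape is `UpdateConnectorRequest` above.

  name := "Example connector (renamed)"
  active := false
  response, err := client.Connectors.Update(
    context.TODO(),
    "connector-id", // illustrative connector ID
    &cohere.UpdateConnectorRequest{
      Name:   &name,
      Active: &active,
    },
  )
  if err != nil {
    return err
  }
  fmt.Println(response.Connector)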

Directories

Path Synopsis
finetuning Finetuning API (Beta)
