Documentation ¶
Index ¶
Constants ¶
const DefaultAnthropicModel = "anthropic.claude-v2:1"
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type APIConfig ¶
type APIConfig struct {
	LLMInterface
	Prompt   string // prompt to use for the first request (optional)
	Print    bool   // whether to print the initial messages to stdout
	OneRound bool   // whether to just run for one round, no conversation
	JSONMode bool   // whether to have model respond in json
}
type File ¶
type File interface {
	Metadata() string
	io.ReadCloser
}
type FinishReason ¶ added in v0.0.15
type FinishReason int
const (
	FinishReasonStop FinishReason = iota
	FinishReasonLength
	FinishReasonUnknown
)
func (FinishReason) String ¶ added in v0.0.15
func (i FinishReason) String() string
type LLMInterface ¶ added in v0.0.22
type LLMInterface interface {
	// the context window capacity of the LLM
	MaxTokens() int
	// estimates how many tokens are used by the messages
	TokenEstimate(messages []Message) (int, error)
	// streams the response of the LLM to the messages
	Streaming(messages []Message, stream io.Writer) (*Response, error)
}
func Claude2 ¶ added in v0.0.15
func Claude2(c *bedrockruntime.Client) (LLMInterface, error)
Claude2 assumes that 1000 tokens correspond to approximately 750 words.
func NewMultiLLMInterface ¶ added in v0.0.23
func NewMultiLLMInterface(firstSmaller, secondLarger LLMInterface) (LLMInterface, error)
NewMultiLLMInterface uses the first, smaller-capacity model until the token count exceeds its limit, then switches to the second; it can thus seamlessly move from a lower-capacity to a higher-capacity model as the chat grows over time.
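The routing rule can be sketched as follows. This is a simplified stand-in: the `llm` struct and `pick` function are hypothetical, and the real implementation presumably consults TokenEstimate against MaxTokens rather than taking a precomputed count.

```go
package main

import "fmt"

// llm is a toy stand-in for an LLMInterface implementation.
type llm struct {
	name string
	max  int // context window capacity in tokens
}

// pick mirrors NewMultiLLMInterface's routing: stay on the smaller
// model while the estimate fits its window, else use the larger one.
func pick(smaller, larger llm, tokenEstimate int) llm {
	if tokenEstimate <= smaller.max {
		return smaller
	}
	return larger
}

func main() {
	small := llm{"claude-instant", 4096} // model names are illustrative
	large := llm{"claude-v2", 100000}
	fmt.Println(pick(small, large, 1000).name)  // claude-instant
	fmt.Println(pick(small, large, 50000).name) // claude-v2
}
```

Because both models sit behind the same LLMInterface, the caller never needs to know which one handled a given round.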
type Response ¶ added in v0.0.15
type Response struct {
	Content      string
	FinishReason FinishReason
}
Source Files ¶