ollamaclient

package module
v2.3.1
Published: May 1, 2024 License: Apache-2.0 Imports: 13 Imported by: 3

Documentation

Overview

Package ollamaclient can be used for communicating with the Ollama service

Index

Constants

This section is empty.

Variables

var Cache *bigcache.BigCache

Cache is used for caching reproducible results from Ollama (seed -1, temperature 0)

Functions

func Base64EncodeFile added in v2.1.0

func Base64EncodeFile(filePath string) (string, error)

Base64EncodeFile reads in a file and returns a base64-encoded string

func ClearCache added in v2.0.3

func ClearCache()

ClearCache removes the current cache entries

func CloseCache added in v2.0.3

func CloseCache()

CloseCache signals the shutdown of the cache

func InitCache

func InitCache() error

InitCache initializes the BigCache cache

func Massage

func Massage(generatedOutput string) string

Massage will try to extract a shorter message from a longer LLM output using pretty "hacky" string manipulation techniques.

Types

type Config

type Config struct {
	ServerAddr                string
	ModelName                 string
	SeedOrNegative            int
	TemperatureIfNegativeSeed float64
	PullTimeout               time.Duration
	HTTPTimeout               time.Duration
	TrimSpace                 bool
	Verbose                   bool
	ContextLength             int64
}

Config represents configuration details for communicating with the Ollama API

func New

func New() *Config

New initializes a new Config using environment variables

func NewConfig

func NewConfig(serverAddr, modelName string, seedOrNegative int, temperatureIfNegativeSeed float64, pTimeout, hTimeout time.Duration, trimSpace, verbose bool) *Config

NewConfig initializes a new Config using a specified server address (like http://localhost:11434), model name, seed, temperature, pull and HTTP timeouts, and bools for trimming whitespace and verbosity

func (*Config) CopyModel added in v2.3.0

func (oc *Config) CopyModel(source, destination string) error

CopyModel duplicates an existing model under a new name

func (*Config) CreateModel added in v2.3.0

func (oc *Config) CreateModel(name, modelfile string) error

CreateModel creates a new model based on a Modelfile

func (*Config) DeleteModel added in v2.3.0

func (oc *Config) DeleteModel(name string) error

DeleteModel removes a model from the server

func (*Config) DescribeImages added in v2.2.0

func (oc *Config) DescribeImages(imageFilenames []string, desiredWordCount int) (string, error)

DescribeImages loads a slice of image filenames as base64-encoded strings, builds a prompt that starts with "Describe this/these image(s):" followed by the encoded images, and returns the result. Typically used together with the "llava" model.

func (*Config) Embeddings

func (oc *Config) Embeddings(prompt string) ([]float64, error)

Embeddings sends a request to get embeddings for a given prompt

func (*Config) GetOutput

func (oc *Config) GetOutput(promptAndOptionalImages ...string) (string, error)

GetOutput sends a request to the Ollama API and returns the generated output.

func (*Config) Has

func (oc *Config) Has(model string) (bool, error)

Has returns true if the given model exists

func (*Config) HasModel

func (oc *Config) HasModel() (bool, error)

HasModel returns true if the configured model exists

func (*Config) List

func (oc *Config) List() ([]string, map[string]time.Time, map[string]int64, error)

List collects info about the currently downloaded models

func (*Config) MustOutput

func (oc *Config) MustOutput(promptAndOptionalImages ...string) string

MustOutput returns the output from Ollama, or the error message as a string if the request fails

func (*Config) Pull

func (oc *Config) Pull(optionalVerbose ...bool) (string, error)

Pull takes an optional verbose bool and tries to pull the currently configured model (oc.ModelName)

func (*Config) PullIfNeeded

func (oc *Config) PullIfNeeded(optionalVerbose ...bool) error

PullIfNeeded pulls a model, but only if it is not already present (Pull, by contrast, downloads or updates the model regardless). It also takes an optional bool for whether progress bars should be shown while models are being downloaded.

func (*Config) SetContextLength added in v2.3.0

func (oc *Config) SetContextLength(contextLength int64)

SetContextLength sets the context length for this Ollama config

func (*Config) SetRandom

func (oc *Config) SetRandom()

SetRandom configures the generated output to not be reproducible

func (*Config) SetReproducible

func (oc *Config) SetReproducible(optionalSeed ...int)

SetReproducible configures the generated output to be reproducible, with temperature 0 and a specific seed. It takes an optional random seed.

func (*Config) SizeOf

func (oc *Config) SizeOf(model string) (int64, error)

SizeOf returns the current size of the given model in bytes, or (-1, err) if the model can't be found.

func (*Config) StreamOutput added in v2.3.1

func (oc *Config) StreamOutput(callbackFunction func(string, bool), promptAndOptionalImages ...string) error

StreamOutput sends a request to the Ollama API and returns the generated output via a callback function. The callback function is given a string and "true" when the streaming is done (or if an error occurred).

type EmbeddingsRequest

type EmbeddingsRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
}

EmbeddingsRequest represents the request payload for getting embeddings

type EmbeddingsResponse

type EmbeddingsResponse struct {
	Embeddings []float64 `json:"embedding"`
}

EmbeddingsResponse represents the response data containing embeddings

type GenerateRequest

type GenerateRequest struct {
	Model   string         `json:"model"`
	Prompt  string         `json:"prompt,omitempty"`
	Images  []string       `json:"images,omitempty"` // base64 encoded images
	Stream  bool           `json:"stream,omitempty"`
	Options RequestOptions `json:"options,omitempty"`
}

GenerateRequest represents the request payload for generating output

type GenerateResponse

type GenerateResponse struct {
	Model              string `json:"model"`
	CreatedAt          string `json:"created_at"`
	Response           string `json:"response"`
	Context            []int  `json:"context,omitempty"`
	TotalDuration      int64  `json:"total_duration,omitempty"`
	LoadDuration       int64  `json:"load_duration,omitempty"`
	SampleCount        int    `json:"sample_count,omitempty"`
	SampleDuration     int64  `json:"sample_duration,omitempty"`
	PromptEvalCount    int    `json:"prompt_eval_count,omitempty"`
	PromptEvalDuration int64  `json:"prompt_eval_duration,omitempty"`
	EvalCount          int    `json:"eval_count,omitempty"`
	EvalDuration       int64  `json:"eval_duration,omitempty"`
	Done               bool   `json:"done"`
}

GenerateResponse represents the response data from the generate API call

type ListResponse

type ListResponse struct {
	Models []Model `json:"models"`
}

ListResponse represents the response data from the tag API call

type Model

type Model struct {
	Modified time.Time `json:"modified_at"`
	Name     string    `json:"name"`
	Digest   string    `json:"digest"`
	Size     int64     `json:"size"`
}

Model represents a downloaded model

type PullRequest

type PullRequest struct {
	Name     string `json:"name"`
	Insecure bool   `json:"insecure,omitempty"`
	Stream   bool   `json:"stream,omitempty"`
}

PullRequest represents the request payload for pulling a model

type PullResponse

type PullResponse struct {
	Status    string `json:"status"`
	Digest    string `json:"digest"`
	Total     int64  `json:"total"`
	Completed int64  `json:"completed"`
}

PullResponse represents the response data from the pull API call

type RequestOptions

type RequestOptions struct {
	Seed          int     `json:"seed"`
	Temperature   float64 `json:"temperature"`
	ContextLength int64   `json:"num_ctx,omitempty"`
}

RequestOptions holds the seed and temperature

Directories

Path Synopsis
cmd
