huggingface

package
v0.15.0-beta

Published: Apr 25, 2024 License: MIT Imports: 13 Imported by: 0

README
---
title: "Hugging Face"
lang: "en-US"
draft: false
description: "Learn how to set up a VDP Hugging Face connector https://github.com/instill-ai/instill-core"
---

The Hugging Face component is an AI connector that allows users to connect to AI models served on the Hugging Face platform.
It can carry out the following tasks:

- [Text Generation](#text-generation)
- [Fill Mask](#fill-mask)
- [Summarization](#summarization)
- [Text Classification](#text-classification)
- [Token Classification](#token-classification)
- [Translation](#translation)
- [Zero Shot Classification](#zero-shot-classification)
- [Question Answering](#question-answering)
- [Table Question Answering](#table-question-answering)
- [Sentence Similarity](#sentence-similarity)
- [Conversational](#conversational)
- [Image Classification](#image-classification)
- [Image Segmentation](#image-segmentation)
- [Object Detection](#object-detection)
- [Image To Text](#image-to-text)
- [Speech Recognition](#speech-recognition)
- [Audio Classification](#audio-classification)

## Release Stage

`Alpha`

## Configuration

The component configuration is defined and maintained [here](https://github.com/instill-ai/component/blob/main/pkg/connector/huggingface/v0/config/definition.json).

## Connection

| Field | Field ID | Type | Note |
| :--- | :--- | :--- | :--- |
| API Key (required) | `api_key` | string | Your Hugging Face API token. To find your token, visit https://huggingface.co/settings/tokens. |
| Base URL (required) | `base_url` | string | Hostname for the endpoint. To use the Inference API, set this to https://api-inference.huggingface.co; for an Inference Endpoint, set it to your custom endpoint. |
| Is Custom Endpoint (required) | `is_custom_endpoint` | boolean | Set to true if you are using a custom Inference Endpoint rather than the Inference API. |

## Supported Tasks

### Text Generation

Generating text is the task of producing new text. These models can, for example, fill in incomplete text or paraphrase.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_GENERATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| String Input (required) | `inputs` | string | String input |
| Parameters | `parameters` | object | Parameters |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Generated Text | `generated_text` | string | The generated continuation of the input string |

### Fill Mask

Masked language modeling is the task of masking some of the words in a sentence and predicting which words should replace those masks.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_FILL_MASK` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| String Input (required) | `inputs` | string | A string to be filled; it must contain the [MASK] token (check the model card for the exact name of the mask token) |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Results | `results` | array[object] | Results |

### Summarization

Summarization is the task of producing a shorter version of a document while preserving its important information.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_SUMMARIZATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| String Input (required) | `inputs` | string | String input |
| Parameters | `parameters` | object | Parameters |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Summary Text | `summary_text` | string | The string after summarization |

### Text Classification

Text Classification is the task of assigning a label or class to a given text.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TEXT_CLASSIFICATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| String Input (required) | `inputs` | string | String input |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Results | `results` | array[object] | Results |

### Token Classification

Token classification is a natural language understanding task in which a label is assigned to some tokens in a text.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TOKEN_CLASSIFICATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| String Input (required) | `inputs` | string | String input |
| Parameters | `parameters` | object | Parameters |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Results | `results` | array[object] | Results |

### Translation

Translation is the task of converting text from one language to another.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TRANSLATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| String Input (required) | `inputs` | string | String input |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Translation Text | `translation_text` | string | The string after translation |

### Zero Shot Classification

Zero-shot text classification is a task in natural language processing where a model is trained on a set of labeled examples but is then able to classify new examples from previously unseen classes.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_ZERO_SHOT_CLASSIFICATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| String Input (required) | `inputs` | string | String input |
| Parameters | `parameters` | object | Parameters |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Scores | `scores` | array[number] | A list of floats corresponding to the probability of each label, in the same order as labels. |
| Labels | `labels` | array[string] | The list of label strings you sent (in order) |
| Sequence (optional) | `sequence` | string | The string sent as an input |

### Question Answering

Question Answering models can retrieve the answer to a question from a given text, which is useful for searching for an answer in a document.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_QUESTION_ANSWERING` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Inputs (required) | `inputs` | object | Inputs |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Answer | `answer` | string | A string that is the answer within the text. |
| Stop (optional) | `stop` | integer | The string index of the end of the answer within the context. |
| Score (optional) | `score` | number | A float that represents how likely it is that the answer is correct |
| Start (optional) | `start` | integer | The string index of the start of the answer within the context. |

### Table Question Answering

Table Question Answering (Table QA) is the task of answering a question about information in a given table.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_TABLE_QUESTION_ANSWERING` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Inputs (required) | `inputs` | object | Inputs |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Aggregator (optional) | `aggregator` | string | The aggregator used to get the answer |
| Answer | `answer` | string | The plaintext answer |
| Cells (optional) | `cells` | array[string] | A list of the contents of the cells referenced in the answer |
| Coordinates (optional) | `coordinates` | array[array] | A list of coordinates of the cells referenced in the answer |

### Sentence Similarity

Sentence Similarity is the task of determining how similar two texts are.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_SENTENCE_SIMILARITY` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Inputs (required) | `inputs` | object | Inputs |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Scores | `scores` | array[number] | The associated similarity score for each of the given strings |

### Conversational

Conversational response modelling is the task of generating conversational text that is relevant, coherent and knowledgeable given a prompt.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_CONVERSATIONAL` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Inputs (required) | `inputs` | object | Inputs |
| Parameters | `parameters` | object | Parameters |
| Options | `options` | object | Options for the model |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Conversation (optional) | `conversation` | object | A helper dictionary to send back with the next input (with the new user input added). |
| Generated Text | `generated_text` | string | The answer from the bot |

### Image Classification

Image classification is the task of assigning a label or class to an entire image.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_IMAGE_CLASSIFICATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Image (required) | `image` | string | The image file |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Classes | `classes` | array[object] | Classes |

### Image Segmentation

Image Segmentation divides an image into segments where each pixel in the image is mapped to an object.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_IMAGE_SEGMENTATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Image (required) | `image` | string | The image file |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Segments | `segments` | array[object] | Segments |

### Object Detection

Object Detection models allow users to identify objects of certain defined classes.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_OBJECT_DETECTION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Image (required) | `image` | string | The image file |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Objects | `objects` | array[object] | Objects |

### Image To Text

Image to text models output a text from a given image.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_IMAGE_TO_TEXT` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Image (required) | `image` | string | The image file |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Text | `text` | string | Generated text |

### Speech Recognition

Automatic Speech Recognition (ASR), also known as Speech to Text (STT), is the task of transcribing a given audio to text.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_SPEECH_RECOGNITION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Audio (required) | `audio` | string | The audio file |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Text | `text` | string | The string that was recognized within the audio file. |

### Audio Classification

Audio classification is the task of assigning a label or class to a given audio clip.

| Input | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Task ID (required) | `task` | string | `TASK_AUDIO_CLASSIFICATION` |
| Model (required) | `model` | string | The Hugging Face model to be used |
| Audio (required) | `audio` | string | The audio file |

| Output | ID | Type | Description |
| :--- | :--- | :--- | :--- |
| Classes | `classes` | array[object] | Classes |

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Init

func Init(l *zap.Logger, u base.UsageHandler) *connector

Types

type AudioRequest

type AudioRequest struct {
	Audio string `json:"audio"`
}

type ConversationalInputs

type ConversationalInputs struct {
	// (Required) The last input from the user in the conversation.
	Text string `json:"text"`

	// A list of strings corresponding to the earlier replies from the model.
	GeneratedResponses []string `json:"generated_responses,omitempty"`

	// A list of strings corresponding to the earlier replies from the user.
	// Should be of the same length of GeneratedResponses.
	PastUserInputs []string `json:"past_user_inputs,omitempty"`
}

Used with ConversationalRequest

type ConversationalParameters

type ConversationalParameters struct {
	// (Default: None). Integer to define the minimum length in tokens of the output summary.
	MinLength *int `json:"min_length,omitempty"`

	// (Default: None). Integer to define the maximum length in tokens of the output summary.
	MaxLength *int `json:"max_length,omitempty"`

	// (Default: None). Integer to define the top tokens considered within the sample operation to create
	// new text.
	TopK *int `json:"top_k,omitempty"`

	// (Default: None). Float to define the tokens that are within the sample operation of text generation.
	// Add tokens in the sample from most probable to least probable until the sum of the probabilities is
	// greater than top_p.
	TopP *float64 `json:"top_p,omitempty"`

	// (Default: 1.0). Float (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling,
	// 0 means top_k=1, 100.0 is getting closer to uniform probability.
	Temperature *float64 `json:"temperature,omitempty"`

	// (Default: None). Float (0.0-100.0). The more a token is used within generation the more it is penalized
	// to not be picked in successive generation passes.
	RepetitionPenalty *float64 `json:"repetition_penalty,omitempty"`

	// (Default: None). Float (0-120.0). The amount of time in seconds that the query should take maximum.
	// Network can cause some overhead so it will be a soft limit.
	MaxTime *float64 `json:"maxtime,omitempty"`
}

Used with ConversationalRequest

type ConversationalRequest

type ConversationalRequest struct {
	// (Required)
	Inputs ConversationalInputs `json:"inputs"`

	Parameters ConversationalParameters `json:"parameters,omitempty"`
	Options    Options                  `json:"options,omitempty"`
}

Request structure for the conversational endpoint

type FeatureExtractionRequest

type FeatureExtractionRequest struct {
	// (Required)
	Inputs string `json:"inputs"`

	Options Options `json:"options,omitempty"`
}

type FillMaskRequest

type FillMaskRequest struct {
	// (Required) A string to be filled; must contain the [MASK] token (check the model card for the exact name of the mask token)
	Inputs  string  `json:"inputs,omitempty"`
	Options Options `json:"options,omitempty"`
}

Request structure for the Fill Mask endpoint

type ImageRequest

type ImageRequest struct {
	Image string `json:"image"`
}

type ImageSegmentationResponse

type ImageSegmentationResponse struct {
	// The label for the class (model specific) of a segment.
	Label string `json:"label,omitempty"`

	// A float that represents how likely it is that the segment belongs to the given class.
	Score float64 `json:"score,omitempty"`

	// A string (base64-encoded single-channel black-and-white image) representing the mask of a segment.
	Mask string `json:"mask,omitempty"`
}

type ImageToTextResponse

type ImageToTextResponse struct {
	// The generated caption
	GeneratedText string `json:"generated_text"`
}

type ObjectBox

type ObjectBox struct {
	XMin int `json:"xmin,omitempty"`
	YMin int `json:"ymin,omitempty"`
	XMax int `json:"xmax,omitempty"`
	YMax int `json:"ymax,omitempty"`
}

type Options

type Options struct {
	// (Default: false). Boolean to use GPU instead of CPU for inference.
	// Requires Startup plan at least.
	UseGPU *bool `json:"use_gpu,omitempty"`
	// (Default: true). There is a cache layer on the inference API to speedup
	// requests we have already seen. Most models can use those results as is
	// as models are deterministic (meaning the results will be the same anyway).
	// However if you use a non deterministic model, you can set this parameter
	// to prevent the caching mechanism from being used resulting in a real new query.
	UseCache *bool `json:"use_cache,omitempty"`
	// (Default: false) If the model is not ready, wait for it instead of receiving 503.
	// It limits the number of requests required to get your inference done. It is advised
	// to only set this flag to true after receiving a 503 error as it will limit hanging
	// in your application to known places.
	WaitForModel *bool `json:"wait_for_model,omitempty"`
}

type QuestionAnsweringInputs

type QuestionAnsweringInputs struct {
	// (Required) The question as a string that has an answer within Context.
	Question string `json:"question"`

	// (Required) A string that contains the answer to the question
	Context string `json:"context"`
}

type QuestionAnsweringRequest

type QuestionAnsweringRequest struct {
	// (Required)
	Inputs  QuestionAnsweringInputs `json:"inputs"`
	Options Options                 `json:"options,omitempty"`
}

Request structure for question answering model

type QuestionAnsweringResponse

type QuestionAnsweringResponse struct {
	// A string that’s the answer within the Context text.
	Answer string `json:"answer,omitempty"`

	// A float that represents how likely that the answer is correct.
	Score float64 `json:"score,omitempty"`

	// The string index of the start of the answer within Context.
	Start int `json:"start,omitempty"`

	// The string index of the stop of the answer within Context.
	Stop int `json:"stop,omitempty"`
}

Response structure for question answering model

type SentenceSimilarityInputs

type SentenceSimilarityInputs struct {
	// (Required) The string that you wish to compare the other strings with.
	// This can be a phrase, sentence, or longer passage, depending on the
	// model being used.
	SourceSentence string `json:"source_sentence"`

	// A list of strings which will be compared against the source_sentence.
	Sentences []string `json:"sentences"`
}

type SentenceSimilarityRequest

type SentenceSimilarityRequest struct {
	// (Required) Inputs for the request.
	Inputs  SentenceSimilarityInputs `json:"inputs"`
	Options Options                  `json:"options,omitempty"`
}

Request structure for the Sentence Similarity endpoint.

type SpeechRecognitionResponse

type SpeechRecognitionResponse struct {
	// The string that was recognized within the audio file.
	Text string `json:"text,omitempty"`
}

type SummarizationParameters

type SummarizationParameters struct {
	// (Default: None). Integer to define the minimum length in tokens of the output summary.
	MinLength *int `json:"min_length,omitempty"`

	// (Default: None). Integer to define the maximum length in tokens of the output summary.
	MaxLength *int `json:"max_length,omitempty"`

	// (Default: None). Integer to define the top tokens considered within the sample operation to create
	// new text.
	TopK *int `json:"top_k,omitempty"`

	// (Default: None). Float to define the tokens that are within the sample operation of text generation.
	// Add tokens in the sample from most probable to least probable until the sum of the probabilities is
	// greater than top_p.
	TopP *float64 `json:"top_p,omitempty"`

	// (Default: 1.0). Float (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling,
	// 0 means top_k=1, 100.0 is getting closer to uniform probability.
	Temperature *float64 `json:"temperature,omitempty"`

	// (Default: None). Float (0.0-100.0). The more a token is used within generation the more it is penalized
	// to not be picked in successive generation passes.
	RepetitionPenalty *float64 `json:"repetitionpenalty,omitempty"`

	// (Default: None). Float (0-120.0). The amount of time in seconds that the query should take maximum.
	// Network can cause some overhead so it will be a soft limit.
	MaxTime *float64 `json:"maxtime,omitempty"`
}

Used with SummarizationRequest

type SummarizationRequest

type SummarizationRequest struct {
	// String to be summarized
	Inputs     string                  `json:"inputs"`
	Parameters SummarizationParameters `json:"parameters,omitempty"`
	Options    Options                 `json:"options,omitempty"`
}

Request structure for the summarization endpoint

type SummarizationResponse

type SummarizationResponse struct {
	// The summarized input string
	SummaryText string `json:"summary_text,omitempty"`
}

Response structure for the summarization endpoint

type TableQuestionAnsweringInputs

type TableQuestionAnsweringInputs struct {
	// (Required) The query in plain text that you want to ask the table
	Query string `json:"query"`

	// (Required) A table of data represented as a dict of list where entries
	// are headers and the lists are all the values, all lists must
	// have the same size.
	Table map[string][]string `json:"table"`
}

type TableQuestionAnsweringRequest

type TableQuestionAnsweringRequest struct {
	Inputs  TableQuestionAnsweringInputs `json:"inputs"`
	Options Options                      `json:"options,omitempty"`
}

Request structure for table question answering model

type TableQuestionAnsweringResponse

type TableQuestionAnsweringResponse struct {
	// The plaintext answer
	Answer string `json:"answer,omitempty"`

	// A list of coordinates of the cells referenced in the answer
	Coordinates [][]int `json:"coordinates,omitempty"`

	// A list of the contents of the cells referenced in the answer
	Cells []string `json:"cells,omitempty"`

	// The aggregator used to get the answer
	Aggregator string `json:"aggregator,omitempty"`
}

Response structure for table question answering model

type TextClassificationRequest

type TextClassificationRequest struct {
	// String to be classified
	Inputs  string  `json:"inputs"`
	Options Options `json:"options,omitempty"`
}

Request structure for the Text classification endpoint

type TextGenerationParameters

type TextGenerationParameters struct {
	// (Default: None). Integer to define the top tokens considered within the sample operation to create new text.
	TopK *int `json:"top_k,omitempty"`

	// (Default: None). Float to define the tokens that are within the sample operation of text generation. Add
	// tokens in the sample from most probable to least probable until the sum of the probabilities is greater
	// than top_p.
	TopP *float64 `json:"top_p,omitempty"`

	// (Default: 1.0). Float (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling,
	// 0 means top_k=1, 100.0 is getting closer to uniform probability.
	Temperature *float64 `json:"temperature,omitempty"`

	// (Default: None). Float (0.0-100.0). The more a token is used within generation the more it is penalized
	// to not be picked in successive generation passes.
	RepetitionPenalty *float64 `json:"repetition_penalty,omitempty"`

	// (Default: None). Int (0-250). The amount of new tokens to be generated; this does not include the input
	// length, it is an estimate of the size of the generated text you want. Each new token slows down the request,
	// so look for a balance between response time and length of generated text.
	MaxNewTokens *int `json:"max_new_tokens,omitempty"`

	// (Default: None). Float (0-120.0). The amount of time in seconds that the query should take maximum.
	// Network can cause some overhead so it will be a soft limit. Use that in combination with max_new_tokens
	// for best results.
	MaxTime *float64 `json:"max_time,omitempty"`

	// (Default: True). Bool. If set to False, the returned results will not contain the original query, making
	// it easier for prompting.
	ReturnFullText *bool `json:"return_full_text,omitempty"`

	// (Default: 1). Integer. The number of propositions you want returned.
	NumReturnSequences *int `json:"num_return_sequences,omitempty"`
}

type TextGenerationRequest

type TextGenerationRequest struct {
	// (Required) a string to be generated from
	Inputs     string                   `json:"inputs"`
	Parameters TextGenerationParameters `json:"parameters,omitempty"`
	Options    Options                  `json:"options,omitempty"`
}

type TextGenerationResponse

type TextGenerationResponse struct {
	GeneratedText string `json:"generated_text,omitempty"`
}

type TextToImageRequest

type TextToImageRequest struct {
	// The prompt or prompts to guide the image generation.
	Inputs     string                       `json:"inputs"`
	Options    Options                      `json:"options,omitempty"`
	Parameters TextToImageRequestParameters `json:"parameters,omitempty"`
}

Request structure for text-to-image model

type TextToImageRequestParameters

type TextToImageRequestParameters struct {
	// The prompt or prompts not to guide the image generation.
	// Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
	NegativePrompt string `json:"negative_prompt,omitempty"`
	// The height in pixels of the generated image.
	Height int64 `json:"height,omitempty"`
	// The width in pixels of the generated image.
	Width int64 `json:"width,omitempty"`
	// The number of denoising steps. More denoising steps usually lead to a higher quality
	// image at the expense of slower inference. Defaults to 50.
	NumInferenceSteps int64 `json:"num_inference_steps,omitempty"`
	// Higher guidance scale encourages to generate images that are closely linked to the text
	// input, usually at the expense of lower image quality. Defaults to 7.5.
	GuidanceScale float64 `json:"guidance_scale,omitempty"`
}

type TokenClassificationParameters

type TokenClassificationParameters struct {
	// (Default: simple)
	AggregationStrategy string `json:"aggregation_strategy,omitempty"`
}

type TokenClassificationRequest

type TokenClassificationRequest struct {
	// (Required) strings to be classified
	Inputs     string                        `json:"inputs"`
	Parameters TokenClassificationParameters `json:"parameters,omitempty"`
	Options    Options                       `json:"options,omitempty"`
}

Request structure for the token classification endpoint

type TranslationRequest

type TranslationRequest struct {
	// (Required) a string to be translated in the original languages
	Inputs string `json:"inputs"`

	Options Options `json:"options,omitempty"`
}

Request structure for the Translation endpoint

type TranslationResponse

type TranslationResponse struct {
	// The translated Input string
	TranslationText string `json:"translation_text,omitempty"`
}

Response structure from the Translation endpoint

type ZeroShotParameters

type ZeroShotParameters struct {
	// (Required) A list of strings that are potential classes for inputs. A maximum of 10 candidate_labels
	// is allowed; for more, simply run multiple requests, as results become misleading with too many
	// candidate_labels anyway. If you want to keep the exact same labels, you can simply set
	// multi_label=True and do the scaling on your end.
	CandidateLabels []string `json:"candidate_labels"`

	// (Default: false) Boolean that is set to True if classes can overlap
	MultiLabel *bool `json:"multi_label,omitempty"`
}

Used with ZeroShotRequest

type ZeroShotRequest

type ZeroShotRequest struct {
	// (Required)
	Inputs string `json:"inputs"`

	// (Required)
	Parameters ZeroShotParameters `json:"parameters"`

	Options Options `json:"options,omitempty"`
}

type ZeroShotResponse

type ZeroShotResponse struct {
	// The string sent as an input
	Sequence string `json:"sequence,omitempty"`

	// The list of labels sent in the request, sorted in descending order
	// by the probability that the input corresponds to the label.
	Labels []string `json:"labels,omitempty"`

	// A list of floats corresponding to the probability of each label, in the same order as labels.
	Scores []float64 `json:"scores,omitempty"`
}

Response structure from the Zero-shot classification endpoint.
