package types

v1.40.0

Published: Apr 10, 2024 License: Apache-2.0 Imports: 4 Imported by: 16

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AccessDeniedException

type AccessDeniedException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

You are not authorized to perform the action.

func (*AccessDeniedException) Error

func (e *AccessDeniedException) Error() string

func (*AccessDeniedException) ErrorCode

func (e *AccessDeniedException) ErrorCode() string

func (*AccessDeniedException) ErrorFault

func (e *AccessDeniedException) ErrorFault() smithy.ErrorFault

func (*AccessDeniedException) ErrorMessage

func (e *AccessDeniedException) ErrorMessage() string

type AgeRange

type AgeRange struct {

	// The highest estimated age.
	High *int32

	// The lowest estimated age.
	Low *int32
	// contains filtered or unexported fields
}

Structure containing the estimated age range, in years, for a face. Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.

type Asset

type Asset struct {

	// The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest
	// file.
	GroundTruthManifest *GroundTruthManifest
	// contains filtered or unexported fields
}

Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.

type AssociatedFace added in v1.29.0

type AssociatedFace struct {

	// Unique identifier assigned to the face.
	FaceId *string
	// contains filtered or unexported fields
}

Provides face metadata for the faces that are associated with a specific UserID.

type Attribute

type Attribute string
const (
	AttributeDefault      Attribute = "DEFAULT"
	AttributeAll          Attribute = "ALL"
	AttributeAgeRange     Attribute = "AGE_RANGE"
	AttributeBeard        Attribute = "BEARD"
	AttributeEmotions     Attribute = "EMOTIONS"
	AttributeEyeDirection Attribute = "EYE_DIRECTION"
	AttributeEyeglasses   Attribute = "EYEGLASSES"
	AttributeEyesOpen     Attribute = "EYES_OPEN"
	AttributeGender       Attribute = "GENDER"
	AttributeMouthOpen    Attribute = "MOUTH_OPEN"
	AttributeMustache     Attribute = "MUSTACHE"
	AttributeFaceOccluded Attribute = "FACE_OCCLUDED"
	AttributeSmile        Attribute = "SMILE"
	AttributeSunglasses   Attribute = "SUNGLASSES"
)

Enum values for Attribute

func (Attribute) Values added in v0.29.0

func (Attribute) Values() []Attribute

Values returns all known values for Attribute. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.
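
As an illustrative sketch of how these values are used (assuming the parent rekognition client package; the bucket and object names are placeholders), a DetectFaces request can list the attributes it wants returned:

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// detectSelectedAttributes requests only the facial attributes it needs.
// client is assumed to be an already-configured *rekognition.Client;
// "my-bucket" and "photo.jpg" are placeholders.
func detectSelectedAttributes(ctx context.Context, client *rekognition.Client) error {
	_, err := client.DetectFaces(ctx, &rekognition.DetectFacesInput{
		Image: &types.Image{
			S3Object: &types.S3Object{
				Bucket: aws.String("my-bucket"),
				Name:   aws.String("photo.jpg"),
			},
		},
		// DEFAULT returns the default attribute set; ALL returns everything.
		Attributes: []types.Attribute{
			types.AttributeAgeRange,
			types.AttributeSmile,
		},
	})
	return err
}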

type AudioMetadata

type AudioMetadata struct {

	// The audio codec used to encode or decode the audio stream.
	Codec *string

	// The duration of the audio stream in milliseconds.
	DurationMillis *int64

	// The number of audio channels in the segment.
	NumberOfChannels *int64

	// The sample rate for the audio stream.
	SampleRate *int64
	// contains filtered or unexported fields
}

Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.

type AuditImage added in v1.24.0

type AuditImage struct {

	// Identifies the bounding box around the label, face, text, object of interest,
	// or personal protective equipment. The left (x-coordinate) and top
	// (y-coordinate) are coordinates representing the top and left sides of the
	// bounding box. Note that the upper-left corner of the image is the origin (0,0).
	// The top and left values returned are ratios of the overall image size. For
	// example, if the input image is 700x200 pixels, and the top-left coordinate of
	// the bounding box is 350x50 pixels, the API returns a left value of 0.5
	// (350/700) and a top value of 0.25 (50/200). The width and height values
	// represent the dimensions of the bounding box as a ratio of the overall image
	// dimension. For example, if the input image is 700x200 pixels, and the bounding
	// box width is 70 pixels, the width returned is 0.1. The bounding box coordinates
	// can have negative values. For example, if Amazon Rekognition is able to detect a
	// face that is at the image edge and is only partially visible, the service can
	// return coordinates that are outside the image bounds and, depending on the image
	// edge, you might get negative values or values greater than 1 for the left or top
	// values.
	BoundingBox *BoundingBox

	// The Base64-encoded bytes representing an image selected from the Face Liveness
	// video and returned for audit purposes.
	Bytes []byte

	// Provides the S3 bucket name and object name. The region for the S3 bucket
	// containing the S3 object must match the region you use for Amazon Rekognition
	// operations. For Amazon Rekognition to process an S3 object, the user must have
	// permission to access the S3 object. For more information, see How Amazon
	// Rekognition works with IAM in the Amazon Rekognition Developer Guide.
	S3Object *S3Object
	// contains filtered or unexported fields
}

An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.

type Beard

type Beard struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Boolean value that indicates whether the face has a beard or not.
	Value bool
	// contains filtered or unexported fields
}

Indicates whether or not the face has a beard, and the confidence level in the determination.

type BlackFrame added in v1.7.0

type BlackFrame struct {

	// A threshold used to determine the maximum luminance value for a pixel to be
	// considered black. In a full color range video, luminance values range from
	// 0-255. A pixel value of 0 is pure black, and the most strict filter. The maximum
	// black pixel value is computed as follows: max_black_pixel_value =
	// minimum_luminance + MaxPixelThreshold *luminance_range. For example, for a full
	// range video with BlackPixelThreshold = 0.1, max_black_pixel_value is 0 + 0.1 *
	// (255-0) = 25.5. The default value of MaxPixelThreshold is 0.2, which maps to a
	// max_black_pixel_value of 51 for a full range video. You can lower this threshold
	// to be more strict on black levels.
	MaxPixelThreshold *float32

	// The minimum percentage of pixels in a frame that need to have a luminance below
	// the max_black_pixel_value for a frame to be considered a black frame. Luminance
	// is calculated using the BT.709 matrix. The default value is 99, which means at
	// least 99% of all pixels in the frame are black pixels as per the
	// MaxPixelThreshold set. You can reduce this value to allow more noise on the
	// black frame.
	MinCoveragePercentage *float32
	// contains filtered or unexported fields
}

A filter that allows you to control the black frame detection by specifying the black levels and pixel coverage of black pixels in a frame. As videos can come from multiple sources, formats, and time periods, they may contain different standards and varying noise levels for black frames that need to be accounted for. For more information, see StartSegmentDetection.
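
To make the threshold arithmetic concrete, here is a minimal sketch of the max_black_pixel_value formula, using the example values from the description above:

package main

import "fmt"

// Worked example of the max_black_pixel_value formula for a full-range
// video (luminance 0-255) with MaxPixelThreshold = 0.1.
func main() {
	minLuminance := float32(0)
	luminanceRange := float32(255 - 0)
	maxPixelThreshold := float32(0.1)

	maxBlackPixelValue := minLuminance + maxPixelThreshold*luminanceRange
	fmt.Println(maxBlackPixelValue) // 25.5
}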

type BodyPart added in v0.29.0

type BodyPart string
const (
	BodyPartFace      BodyPart = "FACE"
	BodyPartHead      BodyPart = "HEAD"
	BodyPartLeftHand  BodyPart = "LEFT_HAND"
	BodyPartRightHand BodyPart = "RIGHT_HAND"
)

Enum values for BodyPart

func (BodyPart) Values added in v0.29.0

func (BodyPart) Values() []BodyPart

Values returns all known values for BodyPart. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type BoundingBox

type BoundingBox struct {

	// Height of the bounding box as a ratio of the overall image height.
	Height *float32

	// Left coordinate of the bounding box as a ratio of overall image width.
	Left *float32

	// Top coordinate of the bounding box as a ratio of overall image height.
	Top *float32

	// Width of the bounding box as a ratio of the overall image width.
	Width *float32
	// contains filtered or unexported fields
}

Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0). The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200). The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1. The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
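
A minimal sketch of the ratio arithmetic described above, converting a BoundingBox back to pixel coordinates for the 700x200 example image (the Height value is hypothetical):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

func main() {
	// The example values from the description: a 700x200 image with a
	// bounding box whose top-left corner is at pixel (350, 50).
	box := types.BoundingBox{
		Left:   aws.Float32(0.5),  // 350 / 700
		Top:    aws.Float32(0.25), // 50 / 200
		Width:  aws.Float32(0.1),  // 70 / 700
		Height: aws.Float32(0.3),  // 60 / 200 (hypothetical)
	}
	imgW, imgH := float32(700), float32(200)

	// Ratios scale by the corresponding image dimension.
	fmt.Printf("left=%.0fpx top=%.0fpx w=%.0fpx h=%.0fpx\n",
		*box.Left*imgW, *box.Top*imgH, *box.Width*imgW, *box.Height*imgH)
}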

type Celebrity

type Celebrity struct {

	// Provides information about the celebrity's face, such as its location on the
	// image.
	Face *ComparedFace

	// A unique identifier for the celebrity.
	Id *string

	// The known gender identity for the celebrity that matches the provided ID. The
	// known gender identity can be Male, Female, Nonbinary, or Unlisted.
	KnownGender *KnownGender

	// The confidence, in percentage, that Amazon Rekognition has that the recognized
	// face is the celebrity.
	MatchConfidence *float32

	// The name of the celebrity.
	Name *string

	// An array of URLs pointing to additional information about the celebrity. If
	// there is no additional information about the celebrity, this list is empty.
	Urls []string
	// contains filtered or unexported fields
}

Provides information about a celebrity recognized by the RecognizeCelebrities operation.

type CelebrityDetail

type CelebrityDetail struct {

	// Bounding box around the body of a celebrity.
	BoundingBox *BoundingBox

	// The confidence, in percentage, that Amazon Rekognition has that the recognized
	// face is the celebrity.
	Confidence *float32

	// Face details for the recognized celebrity.
	Face *FaceDetail

	// The unique identifier for the celebrity.
	Id *string

	// Retrieves the known gender for the celebrity.
	KnownGender *KnownGender

	// The name of the celebrity.
	Name *string

	// An array of URLs pointing to additional celebrity information.
	Urls []string
	// contains filtered or unexported fields
}

Information about a recognized celebrity.

type CelebrityRecognition

type CelebrityRecognition struct {

	// Information about a recognized celebrity.
	Celebrity *CelebrityDetail

	// The time, in milliseconds from the start of the video, that the celebrity was
	// recognized. Note that Timestamp is not guaranteed to be accurate to the
	// individual frame where the celebrity first appears.
	Timestamp int64
	// contains filtered or unexported fields
}

Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.

type CelebrityRecognitionSortBy

type CelebrityRecognitionSortBy string
const (
	CelebrityRecognitionSortById        CelebrityRecognitionSortBy = "ID"
	CelebrityRecognitionSortByTimestamp CelebrityRecognitionSortBy = "TIMESTAMP"
)

Enum values for CelebrityRecognitionSortBy

func (CelebrityRecognitionSortBy) Values added in v0.29.0

func (CelebrityRecognitionSortBy) Values() []CelebrityRecognitionSortBy

Values returns all known values for CelebrityRecognitionSortBy. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type CompareFacesMatch

type CompareFacesMatch struct {

	// Provides face metadata (bounding box and confidence that the bounding box
	// actually contains a face).
	Face *ComparedFace

	// Level of confidence that the faces match.
	Similarity *float32
	// contains filtered or unexported fields
}

Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.

type ComparedFace

type ComparedFace struct {

	// Bounding box of the face.
	BoundingBox *BoundingBox

	// Level of confidence that what the bounding box contains is a face.
	Confidence *float32

	// The emotions that appear to be expressed on the face, and the confidence level
	// in the determination. Valid values include "Happy", "Sad", "Angry", "Confused",
	// "Disgusted", "Surprised", "Calm", "Unknown", and "Fear".
	Emotions []Emotion

	// An array of facial landmarks.
	Landmarks []Landmark

	// Indicates the pose of the face as determined by its pitch, roll, and yaw.
	Pose *Pose

	// Identifies face image brightness and sharpness.
	Quality *ImageQuality

	// Indicates whether or not the face is smiling, and the confidence level in the
	// determination.
	Smile *Smile
	// contains filtered or unexported fields
}

Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.

type ComparedSourceImageFace

type ComparedSourceImageFace struct {

	// Bounding box of the face.
	BoundingBox *BoundingBox

	// Confidence level that the selected bounding box contains a face.
	Confidence *float32
	// contains filtered or unexported fields
}

Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.

type ConflictException added in v1.29.0

type ConflictException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

A User with the same Id already exists within the collection, or the update or deletion of the User caused an inconsistent state.

func (*ConflictException) Error added in v1.29.0

func (e *ConflictException) Error() string

func (*ConflictException) ErrorCode added in v1.29.0

func (e *ConflictException) ErrorCode() string

func (*ConflictException) ErrorFault added in v1.29.0

func (e *ConflictException) ErrorFault() smithy.ErrorFault

func (*ConflictException) ErrorMessage added in v1.29.0

func (e *ConflictException) ErrorMessage() string

type ConnectedHomeSettings added in v1.18.0

type ConnectedHomeSettings struct {

	// Specifies what you want to detect in the video, such as people, packages, or
	// pets. The current valid labels you can include in this list are: "PERSON",
	// "PET", "PACKAGE", and "ALL".
	//
	// This member is required.
	Labels []string

	// The minimum confidence required to label an object in the video.
	MinConfidence *float32
	// contains filtered or unexported fields
}

Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for CreateStreamProcessor. Including this setting in the CreateStreamProcessor request enables you to use the stream processor for label detection. You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary.
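
For illustration, a minimal sketch of constructing these settings (the label list and threshold are illustrative choices, not recommendations):

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// connectedHomeSettings is a sketch; the label list and threshold are
// illustrative only.
func connectedHomeSettings() *types.ConnectedHomeSettings {
	return &types.ConnectedHomeSettings{
		// Detect people and packages; "ALL" enables every supported class.
		Labels:        []string{"PERSON", "PACKAGE"},
		MinConfidence: aws.Float32(80), // hypothetical threshold
	}
}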

type ConnectedHomeSettingsForUpdate added in v1.18.0

type ConnectedHomeSettingsForUpdate struct {

	// Specifies what you want to detect in the video, such as people, packages, or
	// pets. The current valid labels you can include in this list are: "PERSON",
	// "PET", "PACKAGE", and "ALL".
	Labels []string

	// The minimum confidence required to label an object in the video.
	MinConfidence *float32
	// contains filtered or unexported fields
}

The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects.

type ContentClassifier

type ContentClassifier string
const (
	ContentClassifierFreeOfPersonallyIdentifiableInformation ContentClassifier = "FreeOfPersonallyIdentifiableInformation"
	ContentClassifierFreeOfAdultContent                      ContentClassifier = "FreeOfAdultContent"
)

Enum values for ContentClassifier

func (ContentClassifier) Values added in v0.29.0

func (ContentClassifier) Values() []ContentClassifier

Values returns all known values for ContentClassifier. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ContentModerationAggregateBy added in v1.26.0

type ContentModerationAggregateBy string
const (
	ContentModerationAggregateByTimestamps ContentModerationAggregateBy = "TIMESTAMPS"
	ContentModerationAggregateBySegments   ContentModerationAggregateBy = "SEGMENTS"
)

Enum values for ContentModerationAggregateBy

func (ContentModerationAggregateBy) Values added in v1.26.0

func (ContentModerationAggregateBy) Values() []ContentModerationAggregateBy

Values returns all known values for ContentModerationAggregateBy. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ContentModerationDetection

type ContentModerationDetection struct {

	// A list of predicted results for the type of content an image contains. For
	// example, the image content might be from animation, sports, or a video game.
	ContentTypes []ContentType

	// The time duration of a segment in milliseconds, i.e., the time elapsed
	// from StartTimestampMillis to EndTimestampMillis.
	DurationMillis *int64

	// The time in milliseconds defining the end of the timeline segment containing a
	// continuously detected moderation label.
	EndTimestampMillis *int64

	// The content moderation label detected in the stored video.
	ModerationLabel *ModerationLabel

	// The time in milliseconds defining the start of the timeline segment containing
	// a continuously detected moderation label.
	StartTimestampMillis *int64

	// Time, in milliseconds from the beginning of the video, that the content
	// moderation label was detected. Note that Timestamp is not guaranteed to be
	// accurate to the individual frame where the moderated content first appears.
	Timestamp int64
	// contains filtered or unexported fields
}

Information about an inappropriate, unwanted, or offensive content label detection in a stored video.

type ContentModerationSortBy

type ContentModerationSortBy string
const (
	ContentModerationSortByName      ContentModerationSortBy = "NAME"
	ContentModerationSortByTimestamp ContentModerationSortBy = "TIMESTAMP"
)

Enum values for ContentModerationSortBy

func (ContentModerationSortBy) Values added in v0.29.0

func (ContentModerationSortBy) Values() []ContentModerationSortBy

Values returns all known values for ContentModerationSortBy. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ContentType added in v1.36.0

type ContentType struct {

	// The confidence level of the label given
	Confidence *float32

	// The name of the label
	Name *string
	// contains filtered or unexported fields
}

Contains information regarding the confidence and name of a detected content type.

type CoversBodyPart added in v0.29.0

type CoversBodyPart struct {

	// The confidence that Amazon Rekognition has in the value of Value .
	Confidence *float32

	// True if the PPE covers the corresponding body part, otherwise false.
	Value bool
	// contains filtered or unexported fields
}

Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.

type CreateFaceLivenessSessionRequestSettings added in v1.24.0

type CreateFaceLivenessSessionRequestSettings struct {

	// Number of audit images to be returned. Takes an integer between 0-4. Any
	// integer less than 0 will return 0; any integer above 4 will return 4
	// images in the response. By default, it is set to 0. The limit is best
	// effort and is based on the actual duration of the selfie-video.
	AuditImagesLimit *int32

	// Can specify the location of an Amazon S3 bucket, where reference and audit
	// images will be stored. Note that the Amazon S3 bucket must be located in the
	// caller's AWS account and in the same region as the Face Liveness end-point.
	// Additionally, the Amazon S3 object keys are auto-generated by the Face Liveness
	// system. Requires that the caller has the s3:PutObject permission on the Amazon
	// S3 bucket.
	OutputConfig *LivenessOutputConfig
	// contains filtered or unexported fields
}

A session settings object. It contains settings for the operation to be performed. It accepts arguments for OutputConfig and AuditImagesLimit.
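
A minimal sketch of constructing the session settings (bucket name and key prefix are placeholders, and error handling around the eventual API call is elided):

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// livenessSessionSettings is a sketch; bucket name and key prefix are
// placeholders.
func livenessSessionSettings() *types.CreateFaceLivenessSessionRequestSettings {
	return &types.CreateFaceLivenessSessionRequestSettings{
		AuditImagesLimit: aws.Int32(2), // effective range is 0-4
		OutputConfig: &types.LivenessOutputConfig{
			S3Bucket:    aws.String("my-liveness-bucket"),
			S3KeyPrefix: aws.String("audit-images/"),
		},
	}
}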

type CustomLabel

type CustomLabel struct {

	// The confidence that the model has in the detection of the custom label. The
	// range is 0-100. A higher value indicates a higher confidence.
	Confidence *float32

	// The location of the detected object on the image that corresponds to the custom
	// label. Includes an axis aligned coarse bounding box surrounding the object and a
	// finer grain polygon for more accurate spatial information.
	Geometry *Geometry

	// The name of the custom label.
	Name *string
	// contains filtered or unexported fields
}

A custom label detected in an image by a call to DetectCustomLabels.

type CustomizationFeature added in v1.31.0

type CustomizationFeature string
const (
	CustomizationFeatureContentModeration CustomizationFeature = "CONTENT_MODERATION"
	CustomizationFeatureCustomLabels      CustomizationFeature = "CUSTOM_LABELS"
)

Enum values for CustomizationFeature

func (CustomizationFeature) Values added in v1.31.0

func (CustomizationFeature) Values() []CustomizationFeature

Values returns all known values for CustomizationFeature. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type CustomizationFeatureConfig added in v1.31.0

type CustomizationFeatureConfig struct {

	// Configuration options for Custom Moderation training.
	ContentModeration *CustomizationFeatureContentModerationConfig
	// contains filtered or unexported fields
}

Feature specific configuration for the training job. The configuration provided for the job must match the feature type parameter associated with the project. If the configuration and feature type do not match, an InvalidParameterException is returned.

type CustomizationFeatureContentModerationConfig added in v1.31.0

type CustomizationFeatureContentModerationConfig struct {

	// The confidence level you plan to use to identify if unsafe content is present
	// during inference.
	ConfidenceThreshold *float32
	// contains filtered or unexported fields
}

Configuration options for Content Moderation training.

type DatasetChanges added in v1.10.0

type DatasetChanges struct {

	// A Base64-encoded binary data object containing one or more JSON Lines
	// that either update the dataset or are additions to it. You change a
	// dataset by calling UpdateDatasetEntries. If you are using an AWS SDK to
	// call UpdateDatasetEntries, you don't need to encode Changes as the SDK
	// encodes the data for you. For example JSON Lines, see Image-Level labels
	// in manifest files and Object localization in manifest files in the
	// Amazon Rekognition Custom Labels Developer Guide.
	//
	// This member is required.
	GroundTruth []byte
	// contains filtered or unexported fields
}

Describes updates or additions to a dataset. A single update or addition is an entry (JSON Line) that provides information about a single image. To update an existing entry, you match the source-ref field of the update entry with the source-ref field of the entry that you want to update. If the source-ref field doesn't match an existing entry, the entry is added to the dataset as a new entry.
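
A minimal sketch of building the GroundTruth payload (the manifest line is a schematic placeholder, not a complete Ground Truth entry):

import (
	"strings"

	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// datasetChanges is a sketch; the manifest line is a schematic placeholder.
func datasetChanges() *types.DatasetChanges {
	lines := []string{
		`{"source-ref":"s3://my-bucket/image1.jpg"}`, // placeholder entry
	}
	return &types.DatasetChanges{
		// Entries whose source-ref matches an existing entry update it;
		// unmatched entries are added as new entries. AWS SDKs Base64-encode
		// this payload for you when calling UpdateDatasetEntries.
		GroundTruth: []byte(strings.Join(lines, "\n")),
	}
}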

type DatasetDescription added in v1.10.0

type DatasetDescription struct {

	// The Unix timestamp for the time and date that the dataset was created.
	CreationTimestamp *time.Time

	// Statistics about the dataset.
	DatasetStats *DatasetStats

	// The Unix timestamp for the date and time that the dataset was last updated.
	LastUpdatedTimestamp *time.Time

	// The status of the dataset.
	Status DatasetStatus

	// The status message for the dataset.
	StatusMessage *string

	// The status message code for the dataset operation. If a service error occurs,
	// try the API call again later. If a client error occurs, check the input
	// parameters to the dataset API call that failed.
	StatusMessageCode DatasetStatusMessageCode
	// contains filtered or unexported fields
}

A description for a dataset. For more information, see DescribeDataset. The status fields Status, StatusMessage, and StatusMessageCode reflect the last operation on the dataset.

type DatasetLabelDescription added in v1.10.0

type DatasetLabelDescription struct {

	// The name of the label.
	LabelName *string

	// Statistics about the label.
	LabelStats *DatasetLabelStats
	// contains filtered or unexported fields
}

Describes a dataset label. For more information, see ListDatasetLabels.

type DatasetLabelStats added in v1.10.0

type DatasetLabelStats struct {

	// The total number of images that have the label assigned to a bounding box.
	BoundingBoxCount *int32

	// The total number of images that use the label.
	EntryCount *int32
	// contains filtered or unexported fields
}

Statistics about a label used in a dataset. For more information, see DatasetLabelDescription.

type DatasetMetadata added in v1.10.0

type DatasetMetadata struct {

	// The Unix timestamp for the date and time that the dataset was created.
	CreationTimestamp *time.Time

	// The Amazon Resource Name (ARN) for the dataset.
	DatasetArn *string

	// The type of the dataset.
	DatasetType DatasetType

	// The status for the dataset.
	Status DatasetStatus

	// The status message for the dataset.
	StatusMessage *string

	// The status message code for the dataset operation. If a service error occurs,
	// try the API call again later. If a client error occurs, check the input
	// parameters to the dataset API call that failed.
	StatusMessageCode DatasetStatusMessageCode
	// contains filtered or unexported fields
}

Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see ProjectDescription.

type DatasetSource added in v1.10.0

type DatasetSource struct {

	// The ARN of an Amazon Rekognition Custom Labels dataset that you want to copy.
	DatasetArn *string

	// The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest
	// file.
	GroundTruthManifest *GroundTruthManifest
	// contains filtered or unexported fields
}

The source that Amazon Rekognition Custom Labels uses to create a dataset. To use an Amazon Sagemaker format manifest file, specify the S3 bucket location in the GroundTruthManifest field. The S3 bucket must be in your AWS account. To create a copy of an existing dataset, specify the Amazon Resource Name (ARN) of an existing dataset in DatasetArn. You need to specify a value for DatasetArn or GroundTruthManifest, but not both. If you supply both values, or if you don't specify any values, an InvalidParameterException occurs. For more information, see CreateDataset.
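
A sketch of the two mutually exclusive ways to populate DatasetSource (bucket, key, and ARN values are placeholders):

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// Either: create from a Sagemaker Ground Truth manifest in your bucket...
func datasetFromManifest() *types.DatasetSource {
	return &types.DatasetSource{
		GroundTruthManifest: &types.GroundTruthManifest{
			S3Object: &types.S3Object{
				Bucket: aws.String("my-bucket"),
				Name:   aws.String("manifests/train.manifest"),
			},
		},
	}
}

// ...or: copy an existing dataset by ARN (placeholder value). Supplying
// both fields, or neither, causes an InvalidParameterException.
func datasetFromExisting() *types.DatasetSource {
	return &types.DatasetSource{
		DatasetArn: aws.String("arn:aws:rekognition:us-east-1:123456789012:project/my-project/dataset/train/1234567890123"),
	}
}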

type DatasetStats added in v1.10.0

type DatasetStats struct {

	// The total number of entries that contain at least one error.
	ErrorEntries *int32

	// The total number of images in the dataset that have labels.
	LabeledEntries *int32

	// The total number of images in the dataset.
	TotalEntries *int32

	// The total number of labels declared in the dataset.
	TotalLabels *int32
	// contains filtered or unexported fields
}

Provides statistics about a dataset. For more information, see DescribeDataset.

type DatasetStatus added in v1.10.0

type DatasetStatus string
const (
	DatasetStatusCreateInProgress DatasetStatus = "CREATE_IN_PROGRESS"
	DatasetStatusCreateComplete   DatasetStatus = "CREATE_COMPLETE"
	DatasetStatusCreateFailed     DatasetStatus = "CREATE_FAILED"
	DatasetStatusUpdateInProgress DatasetStatus = "UPDATE_IN_PROGRESS"
	DatasetStatusUpdateComplete   DatasetStatus = "UPDATE_COMPLETE"
	DatasetStatusUpdateFailed     DatasetStatus = "UPDATE_FAILED"
	DatasetStatusDeleteInProgress DatasetStatus = "DELETE_IN_PROGRESS"
)

Enum values for DatasetStatus

func (DatasetStatus) Values added in v1.10.0

func (DatasetStatus) Values() []DatasetStatus

Values returns all known values for DatasetStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DatasetStatusMessageCode added in v1.10.0

type DatasetStatusMessageCode string
const (
	DatasetStatusMessageCodeSuccess      DatasetStatusMessageCode = "SUCCESS"
	DatasetStatusMessageCodeServiceError DatasetStatusMessageCode = "SERVICE_ERROR"
	DatasetStatusMessageCodeClientError  DatasetStatusMessageCode = "CLIENT_ERROR"
)

Enum values for DatasetStatusMessageCode

func (DatasetStatusMessageCode) Values added in v1.10.0

func (DatasetStatusMessageCode) Values() []DatasetStatusMessageCode

Values returns all known values for DatasetStatusMessageCode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DatasetType added in v1.10.0

type DatasetType string
const (
	DatasetTypeTrain DatasetType = "TRAIN"
	DatasetTypeTest  DatasetType = "TEST"
)

Enum values for DatasetType

func (DatasetType) Values added in v1.10.0

func (DatasetType) Values() []DatasetType

Values returns all known values for DatasetType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DetectLabelsFeatureName added in v1.21.0

type DetectLabelsFeatureName string
const (
	DetectLabelsFeatureNameGeneralLabels   DetectLabelsFeatureName = "GENERAL_LABELS"
	DetectLabelsFeatureNameImageProperties DetectLabelsFeatureName = "IMAGE_PROPERTIES"
)

Enum values for DetectLabelsFeatureName

func (DetectLabelsFeatureName) Values added in v1.21.0

func (DetectLabelsFeatureName) Values() []DetectLabelsFeatureName

Values returns all known values for DetectLabelsFeatureName. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DetectLabelsImageBackground added in v1.21.0

type DetectLabelsImageBackground struct {

	// The dominant colors found in the background of an image, defined with RGB
	// values, CSS color name, simplified color name, and PixelPercentage (the
	// percentage of image pixels that have a particular color).
	DominantColors []DominantColor

	// The quality of the image background as defined by brightness and sharpness.
	Quality *DetectLabelsImageQuality
	// contains filtered or unexported fields
}

The background of the image with regard to image quality and dominant colors.

type DetectLabelsImageForeground added in v1.21.0

type DetectLabelsImageForeground struct {

	// The dominant colors found in the foreground of an image, defined with RGB
	// values, CSS color name, simplified color name, and PixelPercentage (the
	// percentage of image pixels that have a particular color).
	DominantColors []DominantColor

	// The quality of the image foreground as defined by brightness and sharpness.
	Quality *DetectLabelsImageQuality
	// contains filtered or unexported fields
}

The foreground of the image with regard to image quality and dominant colors.

type DetectLabelsImageProperties added in v1.21.0

type DetectLabelsImageProperties struct {

	// Information about the properties of an image’s background, including the
	// background’s quality and dominant colors.
	Background *DetectLabelsImageBackground

	// Information about the dominant colors found in an image, described with RGB
	// values, CSS color name, simplified color name, and PixelPercentage (the
	// percentage of image pixels that have a particular color).
	DominantColors []DominantColor

	// Information about the properties of an image’s foreground, including the
	// foreground’s quality and dominant colors.
	Foreground *DetectLabelsImageForeground

	// Information about the quality of the image foreground as defined by brightness,
	// sharpness, and contrast. The higher the value the greater the brightness,
	// sharpness, and contrast respectively.
	Quality *DetectLabelsImageQuality
	// contains filtered or unexported fields
}

Information about the quality and dominant colors of an input image. Quality and color information is returned for the entire image, foreground, and background.

type DetectLabelsImagePropertiesSettings added in v1.21.0

type DetectLabelsImagePropertiesSettings struct {

	// The maximum number of dominant colors to return when detecting labels in an
	// image. The default value is 10.
	MaxDominantColors int32
	// contains filtered or unexported fields
}

Settings for the IMAGE_PROPERTIES feature type.

type DetectLabelsImageQuality added in v1.21.0

type DetectLabelsImageQuality struct {

	// The brightness of an image provided for label detection.
	Brightness *float32

	// The contrast of an image provided for label detection.
	Contrast *float32

	// The sharpness of an image provided for label detection.
	Sharpness *float32
	// contains filtered or unexported fields
}

The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.

type DetectLabelsSettings added in v1.21.0

type DetectLabelsSettings struct {

	// Contains the specified filters for GENERAL_LABELS.
	GeneralLabels *GeneralLabelsSettings

	// Contains the chosen number of maximum dominant colors in an image.
	ImageProperties *DetectLabelsImagePropertiesSettings
	// contains filtered or unexported fields
}

Settings for the DetectLabels request. Settings can include filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification of a maximum number of dominant colors.

type DetectTextFilters

type DetectTextFilters struct {

	// A Filter focusing on a certain area of the image. Uses a BoundingBox object to
	// set the region of the image.
	RegionsOfInterest []RegionOfInterest

	// A set of parameters that allow you to filter out certain results from your
	// returned results.
	WordFilter *DetectionFilter
	// contains filtered or unexported fields
}

A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
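
For illustration, a sketch of filters that restrict detection to a region and set word criteria (all values are hypothetical):

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// textFilters is a sketch; the region and word-filter values are
// hypothetical.
func textFilters() *types.DetectTextFilters {
	return &types.DetectTextFilters{
		// Only look for text in the upper-left quadrant of the image.
		RegionsOfInterest: []types.RegionOfInterest{
			{BoundingBox: &types.BoundingBox{
				Left:   aws.Float32(0.0),
				Top:    aws.Float32(0.0),
				Width:  aws.Float32(0.5),
				Height: aws.Float32(0.5),
			}},
		},
		// Drop low-confidence and very small words.
		WordFilter: &types.DetectionFilter{
			MinConfidence:        aws.Float32(90),
			MinBoundingBoxHeight: aws.Float32(0.05),
			MinBoundingBoxWidth:  aws.Float32(0.02),
		},
	}
}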

type DetectionFilter

type DetectionFilter struct {

	// Sets the minimum height of the word bounding box. Words with bounding
	// box heights less than this value will be excluded from the result. Value
	// is relative to the video frame height.
	MinBoundingBoxHeight *float32

	// Sets the minimum width of the word bounding box. Words with bounding box
	// widths less than this value will be excluded from the result. Value is
	// relative to the video frame width.
	MinBoundingBoxWidth *float32

	// Sets the confidence of word detection. Words with detection confidence below
	// this will be excluded from the result. Values should be between 0 and 100. The
	// default MinConfidence is 80.
	MinConfidence *float32
	// contains filtered or unexported fields
}

A set of parameters that allow you to filter out certain results from your returned results.

type DisassociatedFace added in v1.29.0

type DisassociatedFace struct {

	// Unique identifier assigned to the face.
	FaceId *string
	// contains filtered or unexported fields
}

Provides face metadata for the faces that are disassociated from a specific UserID.

type DistributeDataset added in v1.10.0

type DistributeDataset struct {

	// The Amazon Resource Name (ARN) of the dataset that you want to use.
	//
	// This member is required.
	Arn *string
	// contains filtered or unexported fields
}

A training dataset or a test dataset used in a dataset distribution operation. For more information, see DistributeDatasetEntries.

type DominantColor added in v1.21.0

type DominantColor struct {

	// The Blue RGB value for a dominant color.
	Blue *int32

	// The CSS color name of a dominant color.
	CSSColor *string

	// The Green RGB value for a dominant color.
	Green *int32

	// The Hex code equivalent of the RGB values for a dominant color.
	HexCode *string

	// The percentage of image pixels that have a given dominant color.
	PixelPercent *float32

	// The Red RGB value for a dominant color.
	Red *int32

	// One of 12 simplified color names applied to a dominant color.
	SimplifiedColor *string
	// contains filtered or unexported fields
}

A description of the dominant colors in an image.

type Emotion

type Emotion struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Type of emotion detected.
	Type EmotionName
	// contains filtered or unexported fields
}

The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.

type EmotionName

type EmotionName string
const (
	EmotionNameHappy     EmotionName = "HAPPY"
	EmotionNameSad       EmotionName = "SAD"
	EmotionNameAngry     EmotionName = "ANGRY"
	EmotionNameConfused  EmotionName = "CONFUSED"
	EmotionNameDisgusted EmotionName = "DISGUSTED"
	EmotionNameSurprised EmotionName = "SURPRISED"
	EmotionNameCalm      EmotionName = "CALM"
	EmotionNameUnknown   EmotionName = "UNKNOWN"
	EmotionNameFear      EmotionName = "FEAR"
)

Enum values for EmotionName

func (EmotionName) Values added in v0.29.0

func (EmotionName) Values() []EmotionName

Values returns all known values for EmotionName. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type EquipmentDetection added in v0.29.0

type EquipmentDetection struct {

	// A bounding box surrounding the item of detected PPE.
	BoundingBox *BoundingBox

	// The confidence that Amazon Rekognition has that the bounding box ( BoundingBox )
	// contains an item of PPE.
	Confidence *float32

	// Information about the body part covered by the detected PPE.
	CoversBodyPart *CoversBodyPart

	// The type of detected PPE.
	Type ProtectiveEquipmentType
	// contains filtered or unexported fields
}

Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.

type EvaluationResult

type EvaluationResult struct {

	// The F1 score for the evaluation of all labels. The F1 score metric evaluates
	// the overall precision and recall performance of the model as a single value. A
	// higher value indicates better precision and recall performance. A lower score
	// indicates that precision, recall, or both are performing poorly.
	F1Score *float32

	// The S3 bucket that contains the training summary.
	Summary *Summary
	// contains filtered or unexported fields
}

The evaluation results for the training of a model.

type EyeDirection added in v1.28.0

type EyeDirection struct {

	// The confidence that the service has in its predicted eye direction.
	Confidence *float32

	// Value representing eye direction on the pitch axis.
	Pitch *float32

	// Value representing eye direction on the yaw axis.
	Yaw *float32
	// contains filtered or unexported fields
}

Indicates the direction the eyes are gazing in (independent of the head pose), as determined by pitch and yaw.

type EyeOpen

type EyeOpen struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Boolean value that indicates whether the eyes on the face are open.
	Value bool
	// contains filtered or unexported fields
}

Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

type Eyeglasses

type Eyeglasses struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Boolean value that indicates whether the face is wearing eye glasses or not.
	Value bool
	// contains filtered or unexported fields
}

Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

type Face

type Face struct {

	// Bounding box of the face.
	BoundingBox *BoundingBox

	// Confidence level that the bounding box contains a face (and not a different
	// object such as a tree).
	Confidence *float32

	// Identifier that you assign to all the faces in the input image.
	ExternalImageId *string

	// Unique identifier that Amazon Rekognition assigns to the face.
	FaceId *string

	// Unique identifier that Amazon Rekognition assigns to the input image.
	ImageId *string

	// The version of the face detect and storage model that was used when indexing
	// the face vector.
	IndexFacesModelVersion *string

	// Unique identifier assigned to the user.
	UserId *string
	// contains filtered or unexported fields
}

Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

type FaceAttributes

type FaceAttributes string
const (
	FaceAttributesDefault FaceAttributes = "DEFAULT"
	FaceAttributesAll     FaceAttributes = "ALL"
)

Enum values for FaceAttributes

func (FaceAttributes) Values added in v0.29.0

func (FaceAttributes) Values() []FaceAttributes

Values returns all known values for FaceAttributes. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type FaceDetail

type FaceDetail struct {

	// The estimated age range, in years, for the face. Low represents the lowest
	// estimated age and High represents the highest estimated age.
	AgeRange *AgeRange

	// Indicates whether or not the face has a beard, and the confidence level in the
	// determination.
	Beard *Beard

	// Bounding box of the face. Default attribute.
	BoundingBox *BoundingBox

	// Confidence level that the bounding box contains a face (and not a different
	// object such as a tree). Default attribute.
	Confidence *float32

	// The emotions that appear to be expressed on the face, and the confidence level
	// in the determination. The API is only making a determination of the physical
	// appearance of a person's face. It is not a determination of the person’s
	// internal emotional state and should not be used in such a way. For example, a
	// person pretending to have a sad face might not be sad emotionally.
	Emotions []Emotion

	// Indicates the direction the eyes are gazing in, as defined by pitch and yaw.
	EyeDirection *EyeDirection

	// Indicates whether or not the face is wearing eye glasses, and the confidence
	// level in the determination.
	Eyeglasses *Eyeglasses

	// Indicates whether or not the eyes on the face are open, and the confidence
	// level in the determination.
	EyesOpen *EyeOpen

	// FaceOccluded should return "true" with a high confidence score if a detected
	// face’s eyes, nose, and mouth are partially captured or if they are covered by
	// masks, dark sunglasses, cell phones, hands, or other objects. FaceOccluded
	// should return "false" with a high confidence score if common occurrences that do
	// not impact face verification are detected, such as eye glasses, lightly tinted
	// sunglasses, strands of hair, and others.
	FaceOccluded *FaceOccluded

	// The predicted gender of a detected face.
	Gender *Gender

	// Indicates the location of landmarks on the face. Default attribute.
	Landmarks []Landmark

	// Indicates whether or not the mouth on the face is open, and the confidence
	// level in the determination.
	MouthOpen *MouthOpen

	// Indicates whether or not the face has a mustache, and the confidence level in
	// the determination.
	Mustache *Mustache

	// Indicates the pose of the face as determined by its pitch, roll, and yaw.
	// Default attribute.
	Pose *Pose

	// Identifies image brightness and sharpness. Default attribute.
	Quality *ImageQuality

	// Indicates whether or not the face is smiling, and the confidence level in the
	// determination.
	Smile *Smile

	// Indicates whether or not the face is wearing sunglasses, and the confidence
	// level in the determination.
	Sunglasses *Sunglasses
	// contains filtered or unexported fields
}

Structure containing attributes of the face that the algorithm detected. A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality. GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don't have a FaceAttributes input parameter:

  • GetCelebrityRecognition
  • GetPersonTracking
  • GetFaceSearch

The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.
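
Because attribute fields are pointers and are only populated when requested (or are defaults), a returned FaceDetail is typically read defensively. A minimal sketch:

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// describeFace prints a few attributes, checking each pointer first.
func describeFace(fd types.FaceDetail) {
	if fd.Confidence != nil { // default attribute
		fmt.Printf("face confidence: %.1f\n", *fd.Confidence)
	}
	if fd.AgeRange != nil { // present when AGE_RANGE or ALL was requested
		fmt.Printf("estimated age: %d-%d\n", *fd.AgeRange.Low, *fd.AgeRange.High)
	}
	if fd.Smile != nil {
		fmt.Printf("smiling: %t (%.1f%% confidence)\n", fd.Smile.Value, *fd.Smile.Confidence)
	}
}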

type FaceDetection

type FaceDetection struct {

	// The face properties for the detected face.
	Face *FaceDetail

	// Time, in milliseconds from the start of the video, that the face was detected.
	// Note that Timestamp is not guaranteed to be accurate to the individual frame
	// where the face first appears.
	Timestamp int64
	// contains filtered or unexported fields
}

Information about a face detected in a video analysis request and the time the face was detected in the video.

type FaceMatch

type FaceMatch struct {

	// Describes the face properties such as the bounding box, face ID, image ID of
	// the source image, and external image ID that you assigned.
	Face *Face

	// Confidence in the match of this face with the input face.
	Similarity *float32
	// contains filtered or unexported fields
}

Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.

type FaceOccluded added in v1.27.0

type FaceOccluded struct {

	// The confidence that the service has detected the presence of a face occlusion.
	Confidence *float32

	// True if a detected face’s eyes, nose, and mouth are partially captured or if
	// they are covered by masks, dark sunglasses, cell phones, hands, or other
	// objects. False if common occurrences that do not impact face verification are
	// detected, such as eye glasses, lightly tinted sunglasses, strands of hair, and
	// others.
	Value bool
	// contains filtered or unexported fields
}

FaceOccluded should return "true" with a high confidence score if a detected face’s eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. FaceOccluded should return "false" with a high confidence score if common occurrences that do not impact face verification are detected, such as eye glasses, lightly tinted sunglasses, strands of hair, and others. You can use FaceOccluded to determine if an obstruction on a face negatively impacts using the image for face matching.

type FaceRecord

type FaceRecord struct {

	// Describes the face properties such as the bounding box, face ID, image ID of
	// the input image, and external image ID that you assigned.
	Face *Face

	// Structure containing attributes of the face that the algorithm detected.
	FaceDetail *FaceDetail
	// contains filtered or unexported fields
}

Object containing both the face metadata (stored in the backend database) and facial attributes that are detected but aren't stored in the database.

type FaceSearchSettings

type FaceSearchSettings struct {

	// The ID of a collection that contains faces that you want to search for.
	CollectionId *string

	// Minimum face match confidence score that must be met to return a result for a
	// recognized face. The default is 80. 0 is the lowest confidence. 100 is the
	// highest confidence. Values between 0 and 100 are accepted, and values lower than
	// 80 are set to 80.
	FaceMatchThreshold *float32
	// contains filtered or unexported fields
}

Input face recognition parameters for an Amazon Rekognition stream processor. Includes the collection to use for face recognition and the face attributes to detect. Defining the settings is required in the request parameter for CreateStreamProcessor.
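
A minimal sketch of constructing these settings (the collection ID is a placeholder):

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// faceSearchSettings is a sketch; the collection ID is a placeholder.
func faceSearchSettings() *types.FaceSearchSettings {
	return &types.FaceSearchSettings{
		CollectionId: aws.String("my-collection"),
		// Values below 80 are raised to the default of 80.
		FaceMatchThreshold: aws.Float32(85),
	}
}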

type FaceSearchSortBy

type FaceSearchSortBy string
const (
	FaceSearchSortByIndex     FaceSearchSortBy = "INDEX"
	FaceSearchSortByTimestamp FaceSearchSortBy = "TIMESTAMP"
)

Enum values for FaceSearchSortBy

func (FaceSearchSortBy) Values added in v0.29.0

func (FaceSearchSortBy) Values() []FaceSearchSortBy

Values returns all known values for FaceSearchSortBy. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type Gender

type Gender struct {

	// Level of confidence in the prediction.
	Confidence *float32

	// The predicted gender of the face.
	Value GenderType
	// contains filtered or unexported fields
}

The predicted gender of a detected face. Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female. Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform. We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.

type GenderType

type GenderType string
const (
	GenderTypeMale   GenderType = "Male"
	GenderTypeFemale GenderType = "Female"
)

Enum values for GenderType

func (GenderType) Values added in v0.29.0

func (GenderType) Values() []GenderType

Values returns all known values for GenderType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type GeneralLabelsSettings added in v1.21.0

type GeneralLabelsSettings struct {

	// The label categories that should be excluded from the return from DetectLabels.
	LabelCategoryExclusionFilters []string

	// The label categories that should be included in the return from DetectLabels.
	LabelCategoryInclusionFilters []string

	// The labels that should be excluded from the return from DetectLabels.
	LabelExclusionFilters []string

	// The labels that should be included in the return from DetectLabels.
	LabelInclusionFilters []string
	// contains filtered or unexported fields
}

Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories. To see a list of label categories, see Detecting Labels (https://docs.aws.amazon.com/rekognition/latest/dg/labels.html).
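
A minimal sketch of combining filters (the category and label names are illustrative, not a verified category list):

import "github.com/aws/aws-sdk-go-v2/service/rekognition/types"

// generalLabelFilters is a sketch; category and label names are
// illustrative only.
func generalLabelFilters() *types.GeneralLabelsSettings {
	return &types.GeneralLabelsSettings{
		// Keep only labels in this category...
		LabelCategoryInclusionFilters: []string{"Animals and Pets"},
		// ...but exclude this specific label.
		LabelExclusionFilters: []string{"Cat"},
	}
}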

type Geometry

type Geometry struct {

	// An axis-aligned coarse representation of the detected item's location on the
	// image.
	BoundingBox *BoundingBox

	// Within the bounding box, a fine-grained polygon around the detected item.
	Polygon []Point
	// contains filtered or unexported fields
}

Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.

type GetContentModerationRequestMetadata added in v1.26.0

type GetContentModerationRequestMetadata struct {

	// The aggregation method chosen for a GetContentModeration request.
	AggregateBy ContentModerationAggregateBy

	// The sorting method chosen for a GetContentModeration request.
	SortBy ContentModerationSortBy
	// contains filtered or unexported fields
}

Contains metadata about a content moderation request, including the SortBy and AggregateBy options.

type GetLabelDetectionRequestMetadata added in v1.26.0

type GetLabelDetectionRequestMetadata struct {

	// The aggregation method chosen for a GetLabelDetection request.
	AggregateBy LabelDetectionAggregateBy

	// The sorting method chosen for a GetLabelDetection request.
	SortBy LabelDetectionSortBy
	// contains filtered or unexported fields
}

Contains metadata about a label detection request, including the SortBy and AggregateBy options.

type GroundTruthManifest

type GroundTruthManifest struct {

	// Provides the S3 bucket name and object name. The region for the S3 bucket
	// containing the S3 object must match the region you use for Amazon Rekognition
	// operations. For Amazon Rekognition to process an S3 object, the user must have
	// permission to access the S3 object. For more information, see How Amazon
	// Rekognition works with IAM in the Amazon Rekognition Developer Guide.
	S3Object *S3Object
	// contains filtered or unexported fields
}

The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file.

type HumanLoopActivationOutput

type HumanLoopActivationOutput struct {

	// Shows the result of condition evaluations, including those conditions which
	// activated a human review.
	//
	// This value conforms to the media type: application/json
	HumanLoopActivationConditionsEvaluationResults *string

	// Shows if and why human review was needed.
	HumanLoopActivationReasons []string

	// The Amazon Resource Name (ARN) of the HumanLoop created.
	HumanLoopArn *string
	// contains filtered or unexported fields
}

Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

type HumanLoopConfig

type HumanLoopConfig struct {

	// The Amazon Resource Name (ARN) of the flow definition. You can create a
	// flow definition by using the Amazon Sagemaker CreateFlowDefinition (https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateFlowDefinition.html)
	// operation.
	//
	// This member is required.
	FlowDefinitionArn *string

	// The name of the human review used for this image. This should be kept unique
	// within a region.
	//
	// This member is required.
	HumanLoopName *string

	// Sets attributes of the input data.
	DataAttributes *HumanLoopDataAttributes
	// contains filtered or unexported fields
}

Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
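
A minimal sketch of constructing the configuration (the flow definition ARN and loop name are placeholders):

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// humanLoopConfig is a sketch; the ARN and loop name are placeholders.
func humanLoopConfig() *types.HumanLoopConfig {
	return &types.HumanLoopConfig{
		FlowDefinitionArn: aws.String("arn:aws:sagemaker:us-east-1:123456789012:flow-definition/my-flow"),
		HumanLoopName:     aws.String("my-review-loop"),
		DataAttributes: &types.HumanLoopDataAttributes{
			ContentClassifiers: []types.ContentClassifier{
				types.ContentClassifierFreeOfPersonallyIdentifiableInformation,
			},
		},
	}
}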

type HumanLoopDataAttributes

type HumanLoopDataAttributes struct {

	// Sets whether the input image is free of personally identifiable information.
	ContentClassifiers []ContentClassifier
	// contains filtered or unexported fields
}

Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.

type HumanLoopQuotaExceededException

type HumanLoopQuotaExceededException struct {
	Message *string

	ErrorCodeOverride *string

	ResourceType *string
	QuotaCode    *string
	ServiceCode  *string
	Code         *string
	Logref       *string
	// contains filtered or unexported fields
}

The number of in-progress human reviews you have has exceeded the number allowed.

func (*HumanLoopQuotaExceededException) Error

func (e *HumanLoopQuotaExceededException) Error() string

func (*HumanLoopQuotaExceededException) ErrorCode

func (e *HumanLoopQuotaExceededException) ErrorCode() string

func (*HumanLoopQuotaExceededException) ErrorFault

func (e *HumanLoopQuotaExceededException) ErrorFault() smithy.ErrorFault

func (*HumanLoopQuotaExceededException) ErrorMessage

func (e *HumanLoopQuotaExceededException) ErrorMessage() string

type IdempotentParameterMismatchException

type IdempotentParameterMismatchException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

A ClientRequestToken input parameter was reused with an operation, but at least one of the other input parameters is different from the previous call to the operation.

func (*IdempotentParameterMismatchException) Error

func (e *IdempotentParameterMismatchException) Error() string

func (*IdempotentParameterMismatchException) ErrorCode

func (e *IdempotentParameterMismatchException) ErrorCode() string

func (*IdempotentParameterMismatchException) ErrorFault

func (e *IdempotentParameterMismatchException) ErrorFault() smithy.ErrorFault

func (*IdempotentParameterMismatchException) ErrorMessage

func (e *IdempotentParameterMismatchException) ErrorMessage() string

type Image

type Image struct {

	// Blob of image bytes up to 5 MB. Note that the maximum image size you can
	// pass to DetectCustomLabels is 4 MB.
	Bytes []byte

	// Identifies an S3 object as the image source.
	S3Object *S3Object
	// contains filtered or unexported fields
}

Provides the input image either as bytes or an S3 object. You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide. You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
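
As a rough sketch of the two input styles (companion client package assumed; bucket and key are placeholders), note that when you use an SDK the Bytes field takes raw bytes and the SDK handles base64 encoding on the wire:

import (
	"context"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// imageFromFile passes a local file as raw bytes; the SDK base64-encodes them
// on the wire, so the caller does not need to.
func imageFromFile(path string) (*types.Image, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	return &types.Image{Bytes: b}, nil
}

// imageFromS3 references an image already stored in S3 (placeholder names).
func imageFromS3() *types.Image {
	return &types.Image{
		S3Object: &types.S3Object{
			Bucket: aws.String("amzn-example-bucket"),
			Name:   aws.String("photos/input.jpg"),
		},
	}
}

func detect(ctx context.Context, client *rekognition.Client, img *types.Image) error {
	_, err := client.DetectLabels(ctx, &rekognition.DetectLabelsInput{
		Image:         img,
		MaxLabels:     aws.Int32(10),
		MinConfidence: aws.Float32(75),
	})
	return err
}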

type ImageQuality

type ImageQuality struct {

	// Value representing brightness of the face. The service returns a value between
	// 0 and 100 (inclusive). A higher value indicates a brighter face image.
	Brightness *float32

	// Value representing sharpness of the face. The service returns a value between 0
	// and 100 (inclusive). A higher value indicates a sharper face image.
	Sharpness *float32
	// contains filtered or unexported fields
}

Identifies face image brightness and sharpness.

type ImageTooLargeException

type ImageTooLargeException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The input image size exceeds the allowed limit. If you are calling DetectProtectiveEquipment, the image size or resolution exceeds the allowed limit. For more information, see Guidelines and quotas in Amazon Rekognition in the Amazon Rekognition Developer Guide.

func (*ImageTooLargeException) Error

func (e *ImageTooLargeException) Error() string

func (*ImageTooLargeException) ErrorCode

func (e *ImageTooLargeException) ErrorCode() string

func (*ImageTooLargeException) ErrorFault

func (e *ImageTooLargeException) ErrorFault() smithy.ErrorFault

func (*ImageTooLargeException) ErrorMessage

func (e *ImageTooLargeException) ErrorMessage() string

type Instance

type Instance struct {

	// The position of the label instance on the image.
	BoundingBox *BoundingBox

	// The confidence that Amazon Rekognition has in the accuracy of the bounding box.
	Confidence *float32

	// The dominant colors found in an individual instance of a label.
	DominantColors []DominantColor
	// contains filtered or unexported fields
}

An instance of a label returned by Amazon Rekognition Image ( DetectLabels ) or by Amazon Rekognition Video ( GetLabelDetection ).

type InternalServerError

type InternalServerError struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

Amazon Rekognition experienced a service issue. Try your call again.

func (*InternalServerError) Error

func (e *InternalServerError) Error() string

func (*InternalServerError) ErrorCode

func (e *InternalServerError) ErrorCode() string

func (*InternalServerError) ErrorFault

func (e *InternalServerError) ErrorFault() smithy.ErrorFault

func (*InternalServerError) ErrorMessage

func (e *InternalServerError) ErrorMessage() string

type InvalidImageFormatException

type InvalidImageFormatException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The provided image format is not supported.

func (*InvalidImageFormatException) Error

func (e *InvalidImageFormatException) Error() string

func (*InvalidImageFormatException) ErrorCode

func (e *InvalidImageFormatException) ErrorCode() string

func (*InvalidImageFormatException) ErrorFault

func (e *InvalidImageFormatException) ErrorFault() smithy.ErrorFault

func (*InvalidImageFormatException) ErrorMessage

func (e *InvalidImageFormatException) ErrorMessage() string

type InvalidManifestException added in v1.32.0

type InvalidManifestException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

Indicates that a provided manifest file is empty or larger than the allowed limit.

func (*InvalidManifestException) Error added in v1.32.0

func (e *InvalidManifestException) Error() string

func (*InvalidManifestException) ErrorCode added in v1.32.0

func (e *InvalidManifestException) ErrorCode() string

func (*InvalidManifestException) ErrorFault added in v1.32.0

func (e *InvalidManifestException) ErrorFault() smithy.ErrorFault

func (*InvalidManifestException) ErrorMessage added in v1.32.0

func (e *InvalidManifestException) ErrorMessage() string

type InvalidPaginationTokenException

type InvalidPaginationTokenException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

Pagination token in the request is not valid.

func (*InvalidPaginationTokenException) Error

func (e *InvalidPaginationTokenException) Error() string

func (*InvalidPaginationTokenException) ErrorCode

func (e *InvalidPaginationTokenException) ErrorCode() string

func (*InvalidPaginationTokenException) ErrorFault

func (e *InvalidPaginationTokenException) ErrorFault() smithy.ErrorFault

func (*InvalidPaginationTokenException) ErrorMessage

func (e *InvalidPaginationTokenException) ErrorMessage() string

type InvalidParameterException

type InvalidParameterException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

Input parameter violated a constraint. Validate your parameter before calling the API operation again.

func (*InvalidParameterException) Error

func (e *InvalidParameterException) Error() string

func (*InvalidParameterException) ErrorCode

func (e *InvalidParameterException) ErrorCode() string

func (*InvalidParameterException) ErrorFault

func (e *InvalidParameterException) ErrorFault() smithy.ErrorFault

func (*InvalidParameterException) ErrorMessage

func (e *InvalidParameterException) ErrorMessage() string

type InvalidPolicyRevisionIdException added in v1.20.0

type InvalidPolicyRevisionIdException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The supplied revision ID for the project policy is invalid.

func (*InvalidPolicyRevisionIdException) Error added in v1.20.0

func (e *InvalidPolicyRevisionIdException) Error() string

func (*InvalidPolicyRevisionIdException) ErrorCode added in v1.20.0

func (e *InvalidPolicyRevisionIdException) ErrorCode() string

func (*InvalidPolicyRevisionIdException) ErrorFault added in v1.20.0

func (e *InvalidPolicyRevisionIdException) ErrorFault() smithy.ErrorFault

func (*InvalidPolicyRevisionIdException) ErrorMessage added in v1.20.0

func (e *InvalidPolicyRevisionIdException) ErrorMessage() string

type InvalidS3ObjectException

type InvalidS3ObjectException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

Amazon Rekognition is unable to access the S3 object specified in the request.

func (*InvalidS3ObjectException) Error

func (e *InvalidS3ObjectException) Error() string

func (*InvalidS3ObjectException) ErrorCode

func (e *InvalidS3ObjectException) ErrorCode() string

func (*InvalidS3ObjectException) ErrorFault

func (e *InvalidS3ObjectException) ErrorFault() smithy.ErrorFault

func (*InvalidS3ObjectException) ErrorMessage

func (e *InvalidS3ObjectException) ErrorMessage() string

type KinesisDataStream

type KinesisDataStream struct {

	// ARN of the output Amazon Kinesis Data Streams stream.
	Arn *string
	// contains filtered or unexported fields
}

The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

type KinesisVideoStream

type KinesisVideoStream struct {

	// ARN of the Kinesis video stream that streams the source video.
	Arn *string
	// contains filtered or unexported fields
}

Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

type KinesisVideoStreamStartSelector added in v1.18.0

type KinesisVideoStreamStartSelector struct {

	// The unique identifier of the fragment. This value monotonically increases based
	// on the ingestion order.
	FragmentNumber *string

	// The timestamp from the producer corresponding to the fragment, in milliseconds,
	// expressed in unix time format.
	ProducerTimestamp *int64
	// contains filtered or unexported fields
}

Specifies the starting point in a Kinesis stream to start processing. You can use the producer timestamp or the fragment number; one of the two is required. If you use the producer timestamp, you must specify the time in milliseconds. For more information about fragment numbers, see Fragment (https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_reader_Fragment.html) .

type KnownGender added in v1.8.0

type KnownGender struct {

	// The KnownGender information for the celebrity.
	Type KnownGenderType
	// contains filtered or unexported fields
}

The known gender identity for the celebrity that matches the provided ID. The known gender identity can be Male, Female, Nonbinary, or Unlisted.

type KnownGenderType added in v1.8.0

type KnownGenderType string
const (
	KnownGenderTypeMale      KnownGenderType = "Male"
	KnownGenderTypeFemale    KnownGenderType = "Female"
	KnownGenderTypeNonbinary KnownGenderType = "Nonbinary"
	KnownGenderTypeUnlisted  KnownGenderType = "Unlisted"
)

Enum values for KnownGenderType

func (KnownGenderType) Values added in v1.8.0

func (KnownGenderType) Values() []KnownGenderType

Values returns all known values for KnownGenderType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.
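
Because the service can add enum values before this client learns about them, Values() is useful for defensive checks; a small sketch (the helper name is illustrative):

import "github.com/aws/aws-sdk-go-v2/service/rekognition/types"

// isKnownGenderType reports whether s matches a KnownGenderType value known to
// this client version; values added by the service later won't appear here.
func isKnownGenderType(s string) bool {
	for _, v := range types.KnownGenderType("").Values() {
		if string(v) == s {
			return true
		}
	}
	return false
}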

type Label

type Label struct {

	// A list of potential aliases for a given label.
	Aliases []LabelAlias

	// A list of the categories associated with a given label.
	Categories []LabelCategory

	// Level of confidence.
	Confidence *float32

	// If Label represents an object, Instances contains the bounding boxes for each
	// instance of the detected object. Bounding boxes are returned for common object
	// labels such as people, cars, furniture, apparel or pets.
	Instances []Instance

	// The name (label) of the object or scene.
	Name *string

	// The parent labels for a label. The response includes all ancestor labels.
	Parents []Parent
	// contains filtered or unexported fields
}

Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.

type LabelAlias added in v1.21.0

type LabelAlias struct {

	// The name of an alias for a given label.
	Name *string
	// contains filtered or unexported fields
}

A potential alias for a given label.

type LabelCategory added in v1.21.0

type LabelCategory struct {

	// The name of a category that applies to a given label.
	Name *string
	// contains filtered or unexported fields
}

The category that applies to a given label.

type LabelDetection

type LabelDetection struct {

	// The time duration of a segment in milliseconds, i.e. the time elapsed from
	// StartTimestampMillis to EndTimestampMillis.
	DurationMillis *int64

	// The time in milliseconds defining the end of the timeline segment containing a
	// continuously detected label.
	EndTimestampMillis *int64

	// Details about the detected label.
	Label *Label

	// The time in milliseconds defining the start of the timeline segment containing
	// a continuously detected label.
	StartTimestampMillis *int64

	// Time, in milliseconds from the start of the video, that the label was detected.
	// Note that Timestamp is not guaranteed to be accurate to the individual frame
	// where the label first appears.
	Timestamp int64
	// contains filtered or unexported fields
}

Information about a label detected in a video analysis request and the time the label was detected in the video.
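
A hedged sketch of reading these fields from GetLabelDetection with SEGMENTS aggregation (companion client package assumed; the job ID comes from an earlier StartLabelDetection call):

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

func printLabelSegments(ctx context.Context, client *rekognition.Client, jobID string) error {
	out, err := client.GetLabelDetection(ctx, &rekognition.GetLabelDetectionInput{
		JobId:       aws.String(jobID),
		SortBy:      types.LabelDetectionSortByTimestamp,
		AggregateBy: types.LabelDetectionAggregateBySegments,
	})
	if err != nil {
		return err
	}
	for _, d := range out.Labels {
		if d.Label == nil || d.Label.Name == nil {
			continue
		}
		// With SEGMENTS aggregation the start/end/duration fields are populated.
		fmt.Printf("%s: %d-%d ms (%d ms)\n", *d.Label.Name,
			aws.ToInt64(d.StartTimestampMillis), aws.ToInt64(d.EndTimestampMillis),
			aws.ToInt64(d.DurationMillis))
	}
	return nil
}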

type LabelDetectionAggregateBy added in v1.22.0

type LabelDetectionAggregateBy string
const (
	LabelDetectionAggregateByTimestamps LabelDetectionAggregateBy = "TIMESTAMPS"
	LabelDetectionAggregateBySegments   LabelDetectionAggregateBy = "SEGMENTS"
)

Enum values for LabelDetectionAggregateBy

func (LabelDetectionAggregateBy) Values added in v1.22.0

func (LabelDetectionAggregateBy) Values() []LabelDetectionAggregateBy

Values returns all known values for LabelDetectionAggregateBy. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type LabelDetectionFeatureName added in v1.22.0

type LabelDetectionFeatureName string
const (
	LabelDetectionFeatureNameGeneralLabels LabelDetectionFeatureName = "GENERAL_LABELS"
)

Enum values for LabelDetectionFeatureName

func (LabelDetectionFeatureName) Values added in v1.22.0

func (LabelDetectionFeatureName) Values() []LabelDetectionFeatureName

Values returns all known values for LabelDetectionFeatureName. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type LabelDetectionSettings added in v1.22.0

type LabelDetectionSettings struct {

	// Contains filters for the object labels returned by DetectLabels. Filters can be
	// inclusive, exclusive, or a combination of both and can be applied to individual
	// labels or entire label categories. To see a list of label categories, see
	// Detecting Labels (https://docs.aws.amazon.com/rekognition/latest/dg/labels.html)
	// .
	GeneralLabels *GeneralLabelsSettings
	// contains filtered or unexported fields
}

Contains the specified filters that should be applied to a list of returned GENERAL_LABELS.

type LabelDetectionSortBy

type LabelDetectionSortBy string
const (
	LabelDetectionSortByName      LabelDetectionSortBy = "NAME"
	LabelDetectionSortByTimestamp LabelDetectionSortBy = "TIMESTAMP"
)

Enum values for LabelDetectionSortBy

func (LabelDetectionSortBy) Values added in v0.29.0

func (LabelDetectionSortBy) Values() []LabelDetectionSortBy

Values returns all known values for LabelDetectionSortBy. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type Landmark

type Landmark struct {

	// Type of landmark.
	Type LandmarkType

	// The x-coordinate of the landmark expressed as a ratio of the width of the
	// image. The x-coordinate is measured from the left-side of the image. For
	// example, if the image is 700 pixels wide and the x-coordinate of the landmark is
	// at 350 pixels, this value is 0.5.
	X *float32

	// The y-coordinate of the landmark expressed as a ratio of the height of the
	// image. The y-coordinate is measured from the top of the image. For example, if
	// the image height is 200 pixels and the y-coordinate of the landmark is at 50
	// pixels, this value is 0.25.
	Y *float32
	// contains filtered or unexported fields
}

Indicates the location of the landmark on the face.

type LandmarkType

type LandmarkType string
const (
	LandmarkTypeEyeLeft           LandmarkType = "eyeLeft"
	LandmarkTypeEyeRight          LandmarkType = "eyeRight"
	LandmarkTypeNose              LandmarkType = "nose"
	LandmarkTypeMouthLeft         LandmarkType = "mouthLeft"
	LandmarkTypeMouthRight        LandmarkType = "mouthRight"
	LandmarkTypeLeftEyeBrowLeft   LandmarkType = "leftEyeBrowLeft"
	LandmarkTypeLeftEyeBrowRight  LandmarkType = "leftEyeBrowRight"
	LandmarkTypeLeftEyeBrowUp     LandmarkType = "leftEyeBrowUp"
	LandmarkTypeRightEyeBrowLeft  LandmarkType = "rightEyeBrowLeft"
	LandmarkTypeRightEyeBrowRight LandmarkType = "rightEyeBrowRight"
	LandmarkTypeRightEyeBrowUp    LandmarkType = "rightEyeBrowUp"
	LandmarkTypeLeftEyeLeft       LandmarkType = "leftEyeLeft"
	LandmarkTypeLeftEyeRight      LandmarkType = "leftEyeRight"
	LandmarkTypeLeftEyeUp         LandmarkType = "leftEyeUp"
	LandmarkTypeLeftEyeDown       LandmarkType = "leftEyeDown"
	LandmarkTypeRightEyeLeft      LandmarkType = "rightEyeLeft"
	LandmarkTypeRightEyeRight     LandmarkType = "rightEyeRight"
	LandmarkTypeRightEyeUp        LandmarkType = "rightEyeUp"
	LandmarkTypeRightEyeDown      LandmarkType = "rightEyeDown"
	LandmarkTypeNoseLeft          LandmarkType = "noseLeft"
	LandmarkTypeNoseRight         LandmarkType = "noseRight"
	LandmarkTypeMouthUp           LandmarkType = "mouthUp"
	LandmarkTypeMouthDown         LandmarkType = "mouthDown"
	LandmarkTypeLeftPupil         LandmarkType = "leftPupil"
	LandmarkTypeRightPupil        LandmarkType = "rightPupil"
	LandmarkTypeUpperJawlineLeft  LandmarkType = "upperJawlineLeft"
	LandmarkTypeMidJawlineLeft    LandmarkType = "midJawlineLeft"
	LandmarkTypeChinBottom        LandmarkType = "chinBottom"
	LandmarkTypeMidJawlineRight   LandmarkType = "midJawlineRight"
	LandmarkTypeUpperJawlineRight LandmarkType = "upperJawlineRight"
)

Enum values for LandmarkType

func (LandmarkType) Values added in v0.29.0

func (LandmarkType) Values() []LandmarkType

Values returns all known values for LandmarkType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type LimitExceededException

type LimitExceededException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

An Amazon Rekognition service limit was exceeded. For example, if you start too many jobs concurrently, subsequent calls to start operations (for example, StartLabelDetection ) will raise a LimitExceededException (HTTP status code: 400) until the number of concurrently running jobs is below the Amazon Rekognition service limit.
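
Because the exception types in this package implement the error interface, callers can match them with errors.As; a minimal sketch (the helper name and retry policy are illustrative):

import (
	"errors"
	"log"

	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// isJobLimit reports whether err indicates too many concurrently running jobs,
// in which case the caller might back off before starting another one.
func isJobLimit(err error) bool {
	var limitErr *types.LimitExceededException
	if errors.As(err, &limitErr) {
		log.Printf("limit exceeded: %s", limitErr.ErrorMessage())
		return true
	}
	return false
}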

func (*LimitExceededException) Error

func (e *LimitExceededException) Error() string

func (*LimitExceededException) ErrorCode

func (e *LimitExceededException) ErrorCode() string

func (*LimitExceededException) ErrorFault

func (e *LimitExceededException) ErrorFault() smithy.ErrorFault

func (*LimitExceededException) ErrorMessage

func (e *LimitExceededException) ErrorMessage() string

type LivenessOutputConfig added in v1.24.0

type LivenessOutputConfig struct {

	// The path to an Amazon S3 bucket used to store Face Liveness session results.
	//
	// This member is required.
	S3Bucket *string

	// The prefix prepended to the output files for the Face Liveness session results.
	S3KeyPrefix *string
	// contains filtered or unexported fields
}

Contains settings that specify the location of an Amazon S3 bucket used to store the output of a Face Liveness session. Note that the S3 bucket must be located in the caller's AWS account and in the same region as the Face Liveness endpoint. Additionally, the Amazon S3 object keys are auto-generated by the Face Liveness system.

type LivenessSessionStatus added in v1.24.0

type LivenessSessionStatus string
const (
	LivenessSessionStatusCreated    LivenessSessionStatus = "CREATED"
	LivenessSessionStatusInProgress LivenessSessionStatus = "IN_PROGRESS"
	LivenessSessionStatusSucceeded  LivenessSessionStatus = "SUCCEEDED"
	LivenessSessionStatusFailed     LivenessSessionStatus = "FAILED"
	LivenessSessionStatusExpired    LivenessSessionStatus = "EXPIRED"
)

Enum values for LivenessSessionStatus

func (LivenessSessionStatus) Values added in v1.24.0

func (LivenessSessionStatus) Values() []LivenessSessionStatus

Values returns all known values for LivenessSessionStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type MalformedPolicyDocumentException added in v1.20.0

type MalformedPolicyDocumentException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The format of the project policy document that you supplied to PutProjectPolicy is incorrect.

func (*MalformedPolicyDocumentException) Error added in v1.20.0

func (e *MalformedPolicyDocumentException) Error() string

func (*MalformedPolicyDocumentException) ErrorCode added in v1.20.0

func (e *MalformedPolicyDocumentException) ErrorCode() string

func (*MalformedPolicyDocumentException) ErrorFault added in v1.20.0

func (e *MalformedPolicyDocumentException) ErrorFault() smithy.ErrorFault

func (*MalformedPolicyDocumentException) ErrorMessage added in v1.20.0

func (e *MalformedPolicyDocumentException) ErrorMessage() string

type MatchedUser added in v1.29.0

type MatchedUser struct {

	// A provided ID for the UserID. Unique within the collection.
	UserId *string

	// The status of the user matched to a provided FaceID.
	UserStatus UserStatus
	// contains filtered or unexported fields
}

Contains metadata for a UserID matched with a given face.

type MediaAnalysisDetectModerationLabelsConfig added in v1.32.0

type MediaAnalysisDetectModerationLabelsConfig struct {

	// Specifies the minimum confidence level for the moderation labels to return.
	// Amazon Rekognition doesn't return any labels with a confidence level lower than
	// this specified value.
	MinConfidence *float32

	// Specifies the custom moderation model to be used during the label detection
	// job. If not provided, the pre-trained model is used.
	ProjectVersion *string
	// contains filtered or unexported fields
}

Configuration for Moderation Labels Detection.

type MediaAnalysisInput added in v1.32.0

type MediaAnalysisInput struct {

	// Provides the S3 bucket name and object name. The region for the S3 bucket
	// containing the S3 object must match the region you use for Amazon Rekognition
	// operations. For Amazon Rekognition to process an S3 object, the user must have
	// permission to access the S3 object. For more information, see How Amazon
	// Rekognition works with IAM in the Amazon Rekognition Developer Guide.
	//
	// This member is required.
	S3Object *S3Object
	// contains filtered or unexported fields
}

Contains input information for a media analysis job.

type MediaAnalysisJobDescription added in v1.32.0

type MediaAnalysisJobDescription struct {

	// The Unix date and time when the job was started.
	//
	// This member is required.
	CreationTimestamp *time.Time

	// Reference to the input manifest that was provided in the job creation request.
	//
	// This member is required.
	Input *MediaAnalysisInput

	// The identifier for a media analysis job.
	//
	// This member is required.
	JobId *string

	// Operation configurations that were provided during job creation.
	//
	// This member is required.
	OperationsConfig *MediaAnalysisOperationsConfig

	// Output configuration that was provided in the creation request.
	//
	// This member is required.
	OutputConfig *MediaAnalysisOutputConfig

	// The status of the media analysis job being retrieved.
	//
	// This member is required.
	Status MediaAnalysisJobStatus

	// The Unix date and time when the job finished.
	CompletionTimestamp *time.Time

	// Details about the error that resulted in failure of the job.
	FailureDetails *MediaAnalysisJobFailureDetails

	// The name of a media analysis job.
	JobName *string

	// KMS Key that was provided in the creation request.
	KmsKeyId *string

	// Provides statistics on the input manifest and the errors identified in it.
	ManifestSummary *MediaAnalysisManifestSummary

	// Output manifest that contains prediction results.
	Results *MediaAnalysisResults
	// contains filtered or unexported fields
}

Description for a media analysis job.

type MediaAnalysisJobFailureCode added in v1.32.0

type MediaAnalysisJobFailureCode string
const (
	MediaAnalysisJobFailureCodeInternalError       MediaAnalysisJobFailureCode = "INTERNAL_ERROR"
	MediaAnalysisJobFailureCodeInvalidS3Object     MediaAnalysisJobFailureCode = "INVALID_S3_OBJECT"
	MediaAnalysisJobFailureCodeInvalidManifest     MediaAnalysisJobFailureCode = "INVALID_MANIFEST"
	MediaAnalysisJobFailureCodeInvalidOutputConfig MediaAnalysisJobFailureCode = "INVALID_OUTPUT_CONFIG"
	MediaAnalysisJobFailureCodeInvalidKmsKey       MediaAnalysisJobFailureCode = "INVALID_KMS_KEY"
	MediaAnalysisJobFailureCodeAccessDenied        MediaAnalysisJobFailureCode = "ACCESS_DENIED"
	MediaAnalysisJobFailureCodeResourceNotFound    MediaAnalysisJobFailureCode = "RESOURCE_NOT_FOUND"
	MediaAnalysisJobFailureCodeResourceNotReady    MediaAnalysisJobFailureCode = "RESOURCE_NOT_READY"
	MediaAnalysisJobFailureCodeThrottled           MediaAnalysisJobFailureCode = "THROTTLED"
)

Enum values for MediaAnalysisJobFailureCode

func (MediaAnalysisJobFailureCode) Values added in v1.32.0

func (MediaAnalysisJobFailureCode) Values() []MediaAnalysisJobFailureCode

Values returns all known values for MediaAnalysisJobFailureCode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type MediaAnalysisJobFailureDetails added in v1.32.0

type MediaAnalysisJobFailureDetails struct {

	// Error code for the failed job.
	Code MediaAnalysisJobFailureCode

	// Human readable error message.
	Message *string
	// contains filtered or unexported fields
}

Details about the error that resulted in failure of the job.

type MediaAnalysisJobStatus added in v1.32.0

type MediaAnalysisJobStatus string
const (
	MediaAnalysisJobStatusCreated    MediaAnalysisJobStatus = "CREATED"
	MediaAnalysisJobStatusQueued     MediaAnalysisJobStatus = "QUEUED"
	MediaAnalysisJobStatusInProgress MediaAnalysisJobStatus = "IN_PROGRESS"
	MediaAnalysisJobStatusSucceeded  MediaAnalysisJobStatus = "SUCCEEDED"
	MediaAnalysisJobStatusFailed     MediaAnalysisJobStatus = "FAILED"
)

Enum values for MediaAnalysisJobStatus

func (MediaAnalysisJobStatus) Values added in v1.32.0

func (MediaAnalysisJobStatus) Values() []MediaAnalysisJobStatus

Values returns all known values for MediaAnalysisJobStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type MediaAnalysisManifestSummary added in v1.32.0

type MediaAnalysisManifestSummary struct {

	// Provides the S3 bucket name and object name. The region for the S3 bucket
	// containing the S3 object must match the region you use for Amazon Rekognition
	// operations. For Amazon Rekognition to process an S3 object, the user must have
	// permission to access the S3 object. For more information, see How Amazon
	// Rekognition works with IAM in the Amazon Rekognition Developer Guide.
	S3Object *S3Object
	// contains filtered or unexported fields
}

Summary that provides statistics on the input manifest and the errors identified in it.

type MediaAnalysisModelVersions added in v1.36.0

type MediaAnalysisModelVersions struct {

	// The Moderation base model version.
	Moderation *string
	// contains filtered or unexported fields
}

Object containing information about the model versions of selected features in a given job.

type MediaAnalysisOperationsConfig added in v1.32.0

type MediaAnalysisOperationsConfig struct {

	// Contains configuration options for a DetectModerationLabels job.
	DetectModerationLabels *MediaAnalysisDetectModerationLabelsConfig
	// contains filtered or unexported fields
}

Configuration options for a media analysis job. Configuration is operation-specific.
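
A rough sketch of wiring this configuration into StartMediaAnalysisJob (companion client package assumed; bucket names and the manifest key are placeholders, and the confidence threshold is illustrative):

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

func startModerationAnalysis(ctx context.Context, client *rekognition.Client) (string, error) {
	out, err := client.StartMediaAnalysisJob(ctx, &rekognition.StartMediaAnalysisJobInput{
		OperationsConfig: &types.MediaAnalysisOperationsConfig{
			DetectModerationLabels: &types.MediaAnalysisDetectModerationLabelsConfig{
				MinConfidence: aws.Float32(80), // illustrative threshold
			},
		},
		Input: &types.MediaAnalysisInput{
			S3Object: &types.S3Object{
				Bucket: aws.String("amzn-example-input"),    // placeholder
				Name:   aws.String("manifests/input.jsonl"), // placeholder manifest key
			},
		},
		OutputConfig: &types.MediaAnalysisOutputConfig{
			S3Bucket: aws.String("amzn-example-output"), // placeholder
		},
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.JobId), nil
}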

type MediaAnalysisOutputConfig added in v1.32.0

type MediaAnalysisOutputConfig struct {

	// Specifies the Amazon S3 bucket to contain the output of the media analysis job.
	//
	// This member is required.
	S3Bucket *string

	// Specifies the Amazon S3 key prefix that comes after the name of the bucket you
	// have designated for storage.
	S3KeyPrefix *string
	// contains filtered or unexported fields
}

Output configuration provided in the job creation request.

type MediaAnalysisResults added in v1.32.0

type MediaAnalysisResults struct {

	// Information about the model versions for the features selected in a given job.
	ModelVersions *MediaAnalysisModelVersions

	// Provides the S3 bucket name and object name. The region for the S3 bucket
	// containing the S3 object must match the region you use for Amazon Rekognition
	// operations. For Amazon Rekognition to process an S3 object, the user must have
	// permission to access the S3 object. For more information, see How Amazon
	// Rekognition works with IAM in the Amazon Rekognition Developer Guide.
	S3Object *S3Object
	// contains filtered or unexported fields
}

Contains the results for a media analysis job created with StartMediaAnalysisJob.

type ModerationLabel

type ModerationLabel struct {

	// Specifies the confidence that Amazon Rekognition has that the label has been
	// correctly identified. If you don't specify the MinConfidence parameter in the
	// call to DetectModerationLabels , the operation returns labels with a confidence
	// value greater than or equal to 50 percent.
	Confidence *float32

	// The label name for the type of unsafe content detected in the image.
	Name *string

	// The name for the parent label. Labels at the top level of the hierarchy have
	// the parent label "" .
	ParentName *string

	// The level of the moderation label with regard to its taxonomy, from 1 to 3.
	TaxonomyLevel *int32
	// contains filtered or unexported fields
}

Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.
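
A small sketch of walking the returned taxonomy in a DetectModerationLabels response (companion client package assumed):

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
)

func printModerationLabels(out *rekognition.DetectModerationLabelsOutput) {
	for _, l := range out.ModerationLabels {
		// Top-level labels have ParentName "".
		fmt.Printf("%-30s parent=%q level=%d conf=%.1f\n",
			aws.ToString(l.Name), aws.ToString(l.ParentName),
			aws.ToInt32(l.TaxonomyLevel), aws.ToFloat32(l.Confidence))
	}
}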

type MouthOpen

type MouthOpen struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Boolean value that indicates whether the mouth on the face is open or not.
	Value bool
	// contains filtered or unexported fields
}

Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

type Mustache

type Mustache struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Boolean value that indicates whether the face has a mustache or not.
	// contains filtered or unexported fields
}

Indicates whether or not the face has a mustache, and the confidence level in the determination.

type NotificationChannel

type NotificationChannel struct {

	// The ARN of an IAM role that gives Amazon Rekognition publishing permissions to
	// the Amazon SNS topic.
	//
	// This member is required.
	RoleArn *string

	// The Amazon SNS topic to which Amazon Rekognition posts the completion status.
	//
	// This member is required.
	SNSTopicArn *string
	// contains filtered or unexported fields
}

The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations (https://docs.aws.amazon.com/rekognition/latest/dg/api-video.html) . Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics (https://docs.aws.amazon.com/rekognition/latest/dg/api-video-roles.html#api-video-roles-all-topics) .
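
A hedged sketch of supplying a NotificationChannel when starting a video job (companion client package assumed; the ARNs are placeholders, with the topic name following the AmazonRekognition prefix convention noted above):

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

func startLabelDetection(ctx context.Context, client *rekognition.Client) (string, error) {
	out, err := client.StartLabelDetection(ctx, &rekognition.StartLabelDetectionInput{
		Video: &types.Video{
			S3Object: &types.S3Object{
				Bucket: aws.String("amzn-example-bucket"), // placeholder
				Name:   aws.String("videos/input.mp4"),    // placeholder
			},
		},
		NotificationChannel: &types.NotificationChannel{
			SNSTopicArn: aws.String("arn:aws:sns:us-east-1:111122223333:AmazonRekognitionExample"), // placeholder
			RoleArn:     aws.String("arn:aws:iam::111122223333:role/RekognitionSNSRole"),           // placeholder
		},
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.JobId), nil
}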

type OrientationCorrection

type OrientationCorrection string
const (
	OrientationCorrectionRotate0   OrientationCorrection = "ROTATE_0"
	OrientationCorrectionRotate90  OrientationCorrection = "ROTATE_90"
	OrientationCorrectionRotate180 OrientationCorrection = "ROTATE_180"
	OrientationCorrectionRotate270 OrientationCorrection = "ROTATE_270"
)

Enum values for OrientationCorrection

func (OrientationCorrection) Values added in v0.29.0

func (OrientationCorrection) Values() []OrientationCorrection

Values returns all known values for OrientationCorrection. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type OutputConfig

type OutputConfig struct {

	// The S3 bucket where training output is placed.
	S3Bucket *string

	// The prefix applied to the training output files.
	S3KeyPrefix *string
	// contains filtered or unexported fields
}

The S3 bucket and folder location where training output is placed.

type Parent

type Parent struct {

	// The name of the parent label.
	Name *string
	// contains filtered or unexported fields
}

A parent label for a label. A label can have 0, 1, or more parents.

type PersonDetail

type PersonDetail struct {

	// Bounding box around the detected person.
	BoundingBox *BoundingBox

	// Face details for the detected person.
	Face *FaceDetail

	// Identifier for the person detected within a video. Use it to keep track of
	// the person throughout the video. The identifier is not stored by Amazon
	// Rekognition.
	Index int64
	// contains filtered or unexported fields
}

Details about a person detected in a video analysis request.

type PersonDetection

type PersonDetection struct {

	// Details about a person whose path was tracked in a video.
	Person *PersonDetail

	// The time, in milliseconds from the start of the video, that the person's path
	// was tracked. Note that Timestamp is not guaranteed to be accurate to the
	// individual frame where the person's path first appears.
	Timestamp int64
	// contains filtered or unexported fields
}

Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video. For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.

type PersonMatch

type PersonMatch struct {

	// Information about the faces in the input collection that match the face of a
	// person in the video.
	FaceMatches []FaceMatch

	// Information about the matched person.
	Person *PersonDetail

	// The time, in milliseconds from the beginning of the video, that the person was
	// matched in the video.
	Timestamp int64
	// contains filtered or unexported fields
}

Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection ( FaceMatch ), information about the person ( PersonDetail ), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch .

type PersonTrackingSortBy

type PersonTrackingSortBy string
const (
	PersonTrackingSortByIndex     PersonTrackingSortBy = "INDEX"
	PersonTrackingSortByTimestamp PersonTrackingSortBy = "TIMESTAMP"
)

Enum values for PersonTrackingSortBy

func (PersonTrackingSortBy) Values added in v0.29.0

func (PersonTrackingSortBy) Values() []PersonTrackingSortBy

Values returns all known values for PersonTrackingSortBy. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type Point

type Point struct {

	// The value of the X coordinate for a point on a Polygon .
	X *float32

	// The value of the Y coordinate for a point on a Polygon .
	Y *float32
	// contains filtered or unexported fields
}

The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image. An array of Point objects makes up a Polygon . A Polygon is returned by DetectText and by DetectCustomLabels . Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
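
The ratio-to-pixel conversion described above can be captured in a small helper; a sketch:

import "github.com/aws/aws-sdk-go-v2/service/rekognition/types"

// toPixels converts a ratio-based Point to pixel coordinates for an image of
// the given dimensions; X=0.5, Y=0.25 on a 700x200 image yields (350, 50).
func toPixels(p types.Point, widthPx, heightPx int) (x, y int) {
	if p.X != nil {
		x = int(*p.X * float32(widthPx))
	}
	if p.Y != nil {
		y = int(*p.Y * float32(heightPx))
	}
	return x, y
}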

type Pose

type Pose struct {

	// Value representing the face rotation on the pitch axis.
	Pitch *float32

	// Value representing the face rotation on the roll axis.
	Roll *float32

	// Value representing the face rotation on the yaw axis.
	Yaw *float32
	// contains filtered or unexported fields
}

Indicates the pose of the face as determined by its pitch, roll, and yaw.

type ProjectAutoUpdate added in v1.31.0

type ProjectAutoUpdate string
const (
	ProjectAutoUpdateEnabled  ProjectAutoUpdate = "ENABLED"
	ProjectAutoUpdateDisabled ProjectAutoUpdate = "DISABLED"
)

Enum values for ProjectAutoUpdate

func (ProjectAutoUpdate) Values added in v1.31.0

func (ProjectAutoUpdate) Values() []ProjectAutoUpdate

Values returns all known values for ProjectAutoUpdate. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ProjectDescription

type ProjectDescription struct {

	// Indicates whether automatic retraining will be attempted for the versions of
	// the project. Applies only to adapters.
	AutoUpdate ProjectAutoUpdate

	// The Unix timestamp for the date and time that the project was created.
	CreationTimestamp *time.Time

	// Information about the training and test datasets in the project.
	Datasets []DatasetMetadata

	// Specifies the project that is being customized.
	Feature CustomizationFeature

	// The Amazon Resource Name (ARN) of the project.
	ProjectArn *string

	// The current status of the project.
	Status ProjectStatus
	// contains filtered or unexported fields
}

A description of an Amazon Rekognition Custom Labels project. For more information, see DescribeProjects .

type ProjectPolicy added in v1.20.0

type ProjectPolicy struct {

	// The Unix datetime for the creation of the project policy.
	CreationTimestamp *time.Time

	// The Unix datetime for when the project policy was last updated.
	LastUpdatedTimestamp *time.Time

	// The JSON document for the project policy.
	PolicyDocument *string

	// The name of the project policy.
	PolicyName *string

	// The revision ID of the project policy.
	PolicyRevisionId *string

	// The Amazon Resource Name (ARN) of the project to which the project policy is
	// attached.
	ProjectArn *string
	// contains filtered or unexported fields
}

Describes a project policy in the response from ListProjectPolicies .

type ProjectStatus

type ProjectStatus string
const (
	ProjectStatusCreating ProjectStatus = "CREATING"
	ProjectStatusCreated  ProjectStatus = "CREATED"
	ProjectStatusDeleting ProjectStatus = "DELETING"
)

Enum values for ProjectStatus

func (ProjectStatus) Values added in v0.29.0

func (ProjectStatus) Values() []ProjectStatus

Values returns all known values for ProjectStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ProjectVersionDescription

type ProjectVersionDescription struct {

	// The base detection model version used to create the project version.
	BaseModelVersion *string

	// The duration, in seconds, that you were billed for a successful training of the
	// model version. This value is only returned if the model version has been
	// successfully trained.
	BillableTrainingTimeInSeconds *int64

	// The Unix datetime for the date and time that training started.
	CreationTimestamp *time.Time

	// The training results. EvaluationResult is only returned if training is
	// successful.
	EvaluationResult *EvaluationResult

	// The feature that was customized.
	Feature CustomizationFeature

	// Feature-specific configuration that was applied during training.
	FeatureConfig *CustomizationFeatureConfig

	// The identifier for the AWS Key Management Service key (AWS KMS key) that was
	// used to encrypt the model during training.
	KmsKeyId *string

	// The location of the summary manifest. The summary manifest provides aggregate
	// data validation results for the training and test datasets.
	ManifestSummary *GroundTruthManifest

	// The maximum number of inference units Amazon Rekognition uses to auto-scale the
	// model. Applies only to Custom Labels projects. For more information, see
	// StartProjectVersion .
	MaxInferenceUnits *int32

	// The minimum number of inference units used by the model. Applies only to Custom
	// Labels projects. For more information, see StartProjectVersion .
	MinInferenceUnits *int32

	// The location where training results are saved.
	OutputConfig *OutputConfig

	// The Amazon Resource Name (ARN) of the project version.
	ProjectVersionArn *string

	// If the model version was copied from a different project,
	// SourceProjectVersionArn contains the ARN of the source model version.
	SourceProjectVersionArn *string

	// The current status of the model version.
	Status ProjectVersionStatus

	// A descriptive message for an error or warning that occurred.
	StatusMessage *string

	// Contains information about the testing results.
	TestingDataResult *TestingDataResult

	// Contains information about the training results.
	TrainingDataResult *TrainingDataResult

	// The Unix date and time that training of the model ended.
	TrainingEndTimestamp *time.Time

	// A user-provided description of the project version.
	VersionDescription *string
	// contains filtered or unexported fields
}

A description of a version of an Amazon Rekognition project.

type ProjectVersionStatus

type ProjectVersionStatus string
const (
	ProjectVersionStatusTrainingInProgress ProjectVersionStatus = "TRAINING_IN_PROGRESS"
	ProjectVersionStatusTrainingCompleted  ProjectVersionStatus = "TRAINING_COMPLETED"
	ProjectVersionStatusTrainingFailed     ProjectVersionStatus = "TRAINING_FAILED"
	ProjectVersionStatusStarting           ProjectVersionStatus = "STARTING"
	ProjectVersionStatusRunning            ProjectVersionStatus = "RUNNING"
	ProjectVersionStatusFailed             ProjectVersionStatus = "FAILED"
	ProjectVersionStatusStopping           ProjectVersionStatus = "STOPPING"
	ProjectVersionStatusStopped            ProjectVersionStatus = "STOPPED"
	ProjectVersionStatusDeleting           ProjectVersionStatus = "DELETING"
	ProjectVersionStatusCopyingInProgress  ProjectVersionStatus = "COPYING_IN_PROGRESS"
	ProjectVersionStatusCopyingCompleted   ProjectVersionStatus = "COPYING_COMPLETED"
	ProjectVersionStatusCopyingFailed      ProjectVersionStatus = "COPYING_FAILED"
	ProjectVersionStatusDeprecated         ProjectVersionStatus = "DEPRECATED"
	ProjectVersionStatusExpired            ProjectVersionStatus = "EXPIRED"
)

Enum values for ProjectVersionStatus

func (ProjectVersionStatus) Values added in v0.29.0

func (ProjectVersionStatus) Values() []ProjectVersionStatus

Values returns all known values for ProjectVersionStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ProtectiveEquipmentBodyPart added in v0.29.0

type ProtectiveEquipmentBodyPart struct {

	// The confidence that Amazon Rekognition has in the detection accuracy of the
	// detected body part.
	Confidence *float32

	// An array of Personal Protective Equipment items detected around a body part.
	EquipmentDetections []EquipmentDetection

	// The detected body part.
	Name BodyPart
	// contains filtered or unexported fields
}

Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment .

type ProtectiveEquipmentPerson added in v0.29.0

type ProtectiveEquipmentPerson struct {

	// An array of body parts detected on a person's body (including body parts
	// without PPE).
	BodyParts []ProtectiveEquipmentBodyPart

	// A bounding box around the detected person.
	BoundingBox *BoundingBox

	// The confidence that Amazon Rekognition has that the bounding box contains a
	// person.
	Confidence *float32

	// The identifier for the detected person. The identifier is only unique for a
	// single call to DetectProtectiveEquipment .
	Id *int32
	// contains filtered or unexported fields
}

A person detected by a call to DetectProtectiveEquipment . The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.

type ProtectiveEquipmentSummarizationAttributes added in v0.29.0

type ProtectiveEquipmentSummarizationAttributes struct {

	// The minimum confidence level for which you want summary information. The
	// confidence level applies to person detection, body part detection, equipment
	// detection, and body part coverage. Amazon Rekognition doesn't return summary
	// information with a confidence lower than this specified value. There isn't a
	// default value. Specify a MinConfidence value between 50 and 100% as
	// DetectProtectiveEquipment returns predictions only where the detection
	// confidence is between 50% and 100%. If you specify a value that is less than
	// 50%, the results are the same as specifying a value of 50%.
	//
	// This member is required.
	MinConfidence *float32

	// An array of personal protective equipment types for which you want summary
	// information. If a person is detected wearing a required equipment type, the
	// person's ID is added to the PersonsWithRequiredEquipment array field returned
	// in ProtectiveEquipmentSummary by DetectProtectiveEquipment .
	//
	// This member is required.
	RequiredEquipmentTypes []ProtectiveEquipmentType
	// contains filtered or unexported fields
}

Specifies summary attributes to return from a call to DetectProtectiveEquipment . You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary ( ProtectiveEquipmentSummary ) field of the response from DetectProtectiveEquipment . The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary .

type ProtectiveEquipmentSummary added in v0.29.0

type ProtectiveEquipmentSummary struct {

	// An array of IDs for persons where it was not possible to determine if they are
	// wearing personal protective equipment.
	PersonsIndeterminate []int32

	// An array of IDs for persons who are wearing detected personal protective
	// equipment.
	PersonsWithRequiredEquipment []int32

	// An array of IDs for persons who are not wearing all of the types of PPE
	// specified in the RequiredEquipmentTypes field of the detected personal
	// protective equipment.
	PersonsWithoutRequiredEquipment []int32
	// contains filtered or unexported fields
}

Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment . You specify the required type of PPE in the SummarizationAttributes ( ProtectiveEquipmentSummarizationAttributes ) input parameter. The summary includes which persons were detected wearing the required personal protective equipment ( PersonsWithRequiredEquipment ), which persons were detected as not wearing the required PPE ( PersonsWithoutRequiredEquipment ), and the persons for whom a determination could not be made ( PersonsIndeterminate ). To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location ( BoundingBox ) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment .
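
A sketch that requests a summary and derives the category totals from the array sizes, as described above (companion client package assumed; the 80% threshold is illustrative):

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

func summarizePPE(ctx context.Context, client *rekognition.Client, img *types.Image) error {
	out, err := client.DetectProtectiveEquipment(ctx, &rekognition.DetectProtectiveEquipmentInput{
		Image: img,
		SummarizationAttributes: &types.ProtectiveEquipmentSummarizationAttributes{
			MinConfidence:          aws.Float32(80), // must be between 50 and 100
			RequiredEquipmentTypes: []types.ProtectiveEquipmentType{types.ProtectiveEquipmentTypeFaceCover},
		},
	})
	if err != nil {
		return err
	}
	if s := out.Summary; s != nil {
		// Array sizes give the totals for each category.
		fmt.Printf("with PPE: %d, without: %d, indeterminate: %d\n",
			len(s.PersonsWithRequiredEquipment),
			len(s.PersonsWithoutRequiredEquipment),
			len(s.PersonsIndeterminate))
	}
	return nil
}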

type ProtectiveEquipmentType added in v0.29.0

type ProtectiveEquipmentType string
const (
	ProtectiveEquipmentTypeFaceCover ProtectiveEquipmentType = "FACE_COVER"
	ProtectiveEquipmentTypeHandCover ProtectiveEquipmentType = "HAND_COVER"
	ProtectiveEquipmentTypeHeadCover ProtectiveEquipmentType = "HEAD_COVER"
)

Enum values for ProtectiveEquipmentType

func (ProtectiveEquipmentType) Values added in v0.29.0

func (ProtectiveEquipmentType) Values() []ProtectiveEquipmentType

Values returns all known values for ProtectiveEquipmentType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ProvisionedThroughputExceededException

type ProvisionedThroughputExceededException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The number of requests exceeded your throughput limit. If you want to increase this limit, contact Amazon Rekognition.

func (*ProvisionedThroughputExceededException) Error

func (e *ProvisionedThroughputExceededException) Error() string

func (*ProvisionedThroughputExceededException) ErrorCode

func (e *ProvisionedThroughputExceededException) ErrorCode() string

func (*ProvisionedThroughputExceededException) ErrorFault

func (e *ProvisionedThroughputExceededException) ErrorFault() smithy.ErrorFault

func (*ProvisionedThroughputExceededException) ErrorMessage

func (e *ProvisionedThroughputExceededException) ErrorMessage() string

type QualityFilter

type QualityFilter string
const (
	QualityFilterNone   QualityFilter = "NONE"
	QualityFilterAuto   QualityFilter = "AUTO"
	QualityFilterLow    QualityFilter = "LOW"
	QualityFilterMedium QualityFilter = "MEDIUM"
	QualityFilterHigh   QualityFilter = "HIGH"
)

Enum values for QualityFilter

func (QualityFilter) Values added in v0.29.0

func (QualityFilter) Values() []QualityFilter

Values returns all known values for QualityFilter. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type Reason

type Reason string
const (
	ReasonExceedsMaxFaces  Reason = "EXCEEDS_MAX_FACES"
	ReasonExtremePose      Reason = "EXTREME_POSE"
	ReasonLowBrightness    Reason = "LOW_BRIGHTNESS"
	ReasonLowSharpness     Reason = "LOW_SHARPNESS"
	ReasonLowConfidence    Reason = "LOW_CONFIDENCE"
	ReasonSmallBoundingBox Reason = "SMALL_BOUNDING_BOX"
	ReasonLowFaceQuality   Reason = "LOW_FACE_QUALITY"
)

Enum values for Reason

func (Reason) Values added in v0.29.0

func (Reason) Values() []Reason

Values returns all known values for Reason. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type RegionOfInterest

type RegionOfInterest struct {

	// The box representing a region of interest on screen.
	BoundingBox *BoundingBox

	// Specifies a shape made up of up to 10 Point objects to define a region of
	// interest.
	Polygon []Point
	// contains filtered or unexported fields
}

Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox or Polygon to set a region of the screen. A word, face, or label is included in the region if it is more than half inside that region. If there is more than one region, the word, face, or label is compared with all regions of the screen; any object of interest that is more than half inside a region is kept in the results.
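
A minimal sketch restricting DetectText to a single region using ratio-valued BoundingBox coordinates (companion client package assumed):

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

func detectTextInRegion(ctx context.Context, client *rekognition.Client, img *types.Image) error {
	_, err := client.DetectText(ctx, &rekognition.DetectTextInput{
		Image: img,
		Filters: &types.DetectTextFilters{
			RegionsOfInterest: []types.RegionOfInterest{{
				// Top-right quadrant of the frame, expressed as ratios.
				BoundingBox: &types.BoundingBox{
					Left:   aws.Float32(0.5),
					Top:    aws.Float32(0.0),
					Width:  aws.Float32(0.5),
					Height: aws.Float32(0.5),
				},
			}},
		},
	})
	return err
}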

type ResourceAlreadyExistsException

type ResourceAlreadyExistsException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

A resource with the specified ID already exists.

func (*ResourceAlreadyExistsException) Error

func (e *ResourceAlreadyExistsException) Error() string

func (*ResourceAlreadyExistsException) ErrorCode

func (e *ResourceAlreadyExistsException) ErrorCode() string

func (*ResourceAlreadyExistsException) ErrorFault

func (e *ResourceAlreadyExistsException) ErrorFault() smithy.ErrorFault

func (*ResourceAlreadyExistsException) ErrorMessage

func (e *ResourceAlreadyExistsException) ErrorMessage() string

type ResourceInUseException

type ResourceInUseException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The specified resource is already being used.

func (*ResourceInUseException) Error

func (e *ResourceInUseException) Error() string

func (*ResourceInUseException) ErrorCode

func (e *ResourceInUseException) ErrorCode() string

func (*ResourceInUseException) ErrorFault

func (e *ResourceInUseException) ErrorFault() smithy.ErrorFault

func (*ResourceInUseException) ErrorMessage

func (e *ResourceInUseException) ErrorMessage() string

type ResourceNotFoundException

type ResourceNotFoundException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The resource specified in the request cannot be found.

func (*ResourceNotFoundException) Error

func (e *ResourceNotFoundException) Error() string

func (*ResourceNotFoundException) ErrorCode

func (e *ResourceNotFoundException) ErrorCode() string

func (*ResourceNotFoundException) ErrorFault

func (e *ResourceNotFoundException) ErrorFault() smithy.ErrorFault

func (*ResourceNotFoundException) ErrorMessage

func (e *ResourceNotFoundException) ErrorMessage() string

type ResourceNotReadyException

type ResourceNotReadyException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The requested resource isn't ready. For example, this exception occurs when you call DetectCustomLabels with a model version that isn't deployed.

func (*ResourceNotReadyException) Error

func (e *ResourceNotReadyException) Error() string

func (*ResourceNotReadyException) ErrorCode

func (e *ResourceNotReadyException) ErrorCode() string

func (*ResourceNotReadyException) ErrorFault

func (e *ResourceNotReadyException) ErrorFault() smithy.ErrorFault

func (*ResourceNotReadyException) ErrorMessage

func (e *ResourceNotReadyException) ErrorMessage() string

type S3Destination added in v1.18.0

type S3Destination struct {

	// The name of the Amazon S3 bucket you want to associate with the streaming video
	// project. You must be the owner of the Amazon S3 bucket.
	Bucket *string

	// The prefix value of the location within the bucket that you want the
	// information to be published to. For more information, see Using prefixes (https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html)
	// .
	KeyPrefix *string
	// contains filtered or unexported fields
}

The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels.

type S3Object

type S3Object struct {

	// Name of the S3 bucket.
	Bucket *string

	// S3 object key name.
	Name *string

	// If the bucket is versioning enabled, you can specify the object version.
	Version *string
	// contains filtered or unexported fields
}

Provides the S3 bucket name and object name. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.

type SearchedFace added in v1.29.0

type SearchedFace struct {

	// Unique identifier assigned to the face.
	FaceId *string
	// contains filtered or unexported fields
}

Provides face metadata, such as the FaceId, BoundingBox, and Confidence, for the input face used in a search.

type SearchedFaceDetails added in v1.29.0

type SearchedFaceDetails struct {

	// Structure containing attributes of the face that the algorithm detected. A
	// FaceDetail object contains either the default facial attributes or all facial
	// attributes. The default attributes are BoundingBox , Confidence , Landmarks ,
	// Pose , and Quality . GetFaceDetection is the only Amazon Rekognition Video
	// stored video operation that can return a FaceDetail object with all attributes.
	// To specify which attributes to return, use the FaceAttributes input parameter
	// for StartFaceDetection . The following Amazon Rekognition Video operations
	// return only the default attributes. The corresponding Start operations don't
	// have a FaceAttributes input parameter:
	//   - GetCelebrityRecognition
	//   - GetPersonTracking
	//   - GetFaceSearch
	// The Amazon Rekognition Image DetectFaces and IndexFaces operations can return
	// all facial attributes. To specify which attributes to return, use the Attributes
	// input parameter for DetectFaces . For IndexFaces , use the DetectAttributes
	// input parameter.
	FaceDetail *FaceDetail
	// contains filtered or unexported fields
}

Contains data regarding the input face used for a search.

type SearchedUser added in v1.29.0

type SearchedUser struct {

	// A provided ID for the UserID. Unique within the collection.
	UserId *string
	// contains filtered or unexported fields
}

Contains metadata about a User searched for within a collection.

type SegmentDetection

type SegmentDetection struct {

	// The duration of a video segment, expressed in frames.
	DurationFrames *int64

	// The duration of the detected segment in milliseconds.
	DurationMillis *int64

	// The duration of the timecode for the detected segment in SMPTE format.
	DurationSMPTE *string

	// The frame number at the end of a video segment, using a frame index that starts
	// with 0.
	EndFrameNumber *int64

	// The frame-accurate SMPTE timecode, from the start of a video, for the end of a
	// detected segment. EndTimecode is in HH:MM:SS:fr format (and ;fr for drop
	// frame-rates).
	EndTimecodeSMPTE *string

	// The end time of the detected segment, in milliseconds, from the start of the
	// video. This value is rounded down.
	EndTimestampMillis int64

	// If the segment is a shot detection, contains information about the shot
	// detection.
	ShotSegment *ShotSegment

	// The frame number of the start of a video segment, using a frame index that
	// starts with 0.
	StartFrameNumber *int64

	// The frame-accurate SMPTE timecode, from the start of a video, for the start of
	// a detected segment. StartTimecode is in HH:MM:SS:fr format (and ;fr for drop
	// frame-rates).
	StartTimecodeSMPTE *string

	// The start time of the detected segment in milliseconds from the start of the
	// video. This value is rounded down. For example, if the actual timestamp is
	// 100.6667 milliseconds, Amazon Rekognition Video returns a value of 100
	// milliseconds.
	StartTimestampMillis int64

	// If the segment is a technical cue, contains information about the technical cue.
	TechnicalCueSegment *TechnicalCueSegment

	// The type of the segment. Valid values are TECHNICAL_CUE and SHOT .
	Type SegmentType
	// contains filtered or unexported fields
}

A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection .
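
A sketch of consuming such a slice, for example the Segments field of a GetSegmentDetection response (the printSegments helper is illustrative):

package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// printSegments summarizes shot and technical cue segments.
func printSegments(segments []types.SegmentDetection) {
	for _, s := range segments {
		switch s.Type {
		case types.SegmentTypeShot:
			if s.ShotSegment != nil {
				fmt.Printf("shot %d: %s - %s\n",
					aws.ToInt64(s.ShotSegment.Index),
					aws.ToString(s.StartTimecodeSMPTE),
					aws.ToString(s.EndTimecodeSMPTE))
			}
		case types.SegmentTypeTechnicalCue:
			if s.TechnicalCueSegment != nil {
				fmt.Printf("technical cue %s at %d ms\n",
					s.TechnicalCueSegment.Type, s.StartTimestampMillis)
			}
		}
	}
}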

type SegmentType

type SegmentType string
const (
	SegmentTypeTechnicalCue SegmentType = "TECHNICAL_CUE"
	SegmentTypeShot         SegmentType = "SHOT"
)

Enum values for SegmentType

func (SegmentType) Values added in v0.29.0

func (SegmentType) Values() []SegmentType

Values returns all known values for SegmentType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.
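
Because the enum can grow, code that branches on SegmentType should tolerate values it doesn't recognize; a short sketch:

package example

import "github.com/aws/aws-sdk-go-v2/service/rekognition/types"

// describeSegmentType handles enum values added after this client was
// generated by falling through to a default case.
func describeSegmentType(t types.SegmentType) string {
	switch t {
	case types.SegmentTypeTechnicalCue:
		return "technical cue"
	case types.SegmentTypeShot:
		return "shot boundary"
	default:
		return "unrecognized segment type: " + string(t)
	}
}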

type SegmentTypeInfo

type SegmentTypeInfo struct {

	// The version of the model used to detect segments.
	ModelVersion *string

	// The type of a segment (technical cue or shot detection).
	Type SegmentType
	// contains filtered or unexported fields
}

Information about the type of a segment requested in a call to StartSegmentDetection . An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection .

type ServiceQuotaExceededException added in v0.29.0

type ServiceQuotaExceededException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The size of the collection exceeds the allowed limit. For more information, see Guidelines and quotas in Amazon Rekognition in the Amazon Rekognition Developer Guide.

func (*ServiceQuotaExceededException) Error added in v0.29.0

func (e *ServiceQuotaExceededException) Error() string

func (*ServiceQuotaExceededException) ErrorCode added in v0.29.0

func (e *ServiceQuotaExceededException) ErrorCode() string

func (*ServiceQuotaExceededException) ErrorFault added in v0.29.0

func (e *ServiceQuotaExceededException) ErrorFault() smithy.ErrorFault

func (*ServiceQuotaExceededException) ErrorMessage added in v0.29.0

func (e *ServiceQuotaExceededException) ErrorMessage() string

type SessionNotFoundException added in v1.24.0

type SessionNotFoundException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

Occurs when a given sessionId is not found.

func (*SessionNotFoundException) Error added in v1.24.0

func (e *SessionNotFoundException) Error() string

func (*SessionNotFoundException) ErrorCode added in v1.24.0

func (e *SessionNotFoundException) ErrorCode() string

func (*SessionNotFoundException) ErrorFault added in v1.24.0

func (e *SessionNotFoundException) ErrorFault() smithy.ErrorFault

func (*SessionNotFoundException) ErrorMessage added in v1.24.0

func (e *SessionNotFoundException) ErrorMessage() string

type ShotSegment

type ShotSegment struct {

	// The confidence that Amazon Rekognition Video has in the accuracy of the
	// detected segment.
	Confidence *float32

	// An identifier for a shot detection segment detected in a video.
	Index *int64
	// contains filtered or unexported fields
}

Information about a shot detection segment detected in a video. For more information, see SegmentDetection .

type Smile

type Smile struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Boolean value that indicates whether the face is smiling or not.
	Value bool
	// contains filtered or unexported fields
}

Indicates whether or not the face is smiling, and the confidence level in the determination.

type StartSegmentDetectionFilters

type StartSegmentDetectionFilters struct {

	// Filters that are specific to shot detections.
	ShotFilter *StartShotDetectionFilter

	// Filters that are specific to technical cues.
	TechnicalCueFilter *StartTechnicalCueDetectionFilter
	// contains filtered or unexported fields
}

Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection .

type StartShotDetectionFilter

type StartShotDetectionFilter struct {

	// Specifies the minimum confidence that Amazon Rekognition Video must have in
	// order to return a detected segment. Confidence represents how certain Amazon
	// Rekognition is that a segment is correctly identified. 0 is the lowest
	// confidence. 100 is the highest confidence. Amazon Rekognition Video doesn't
	// return any segments with a confidence level lower than this specified value. If
	// you don't specify MinSegmentConfidence , GetSegmentDetection returns
	// segments with confidence values greater than or equal to 50 percent.
	MinSegmentConfidence *float32
	// contains filtered or unexported fields
}

Filters for the shot detection segments returned by GetSegmentDetection . For more information, see StartSegmentDetectionFilters .

type StartTechnicalCueDetectionFilter

type StartTechnicalCueDetectionFilter struct {

	// A filter that allows you to control the black frame detection by specifying the
	// black levels and pixel coverage of black pixels in a frame. Videos can come from
	// multiple sources, formats, and time periods, with different standards and
	// varying noise levels for black frames that need to be accounted for.
	BlackFrame *BlackFrame

	// Specifies the minimum confidence that Amazon Rekognition Video must have in
	// order to return a detected segment. Confidence represents how certain Amazon
	// Rekognition is that a segment is correctly identified. 0 is the lowest
	// confidence. 100 is the highest confidence. Amazon Rekognition Video doesn't
	// return any segments with a confidence level lower than this specified value. If
	// you don't specify MinSegmentConfidence , GetSegmentDetection returns segments
	// with confidence values greater than or equal to 50 percent.
	MinSegmentConfidence *float32
	// contains filtered or unexported fields
}

Filters for the technical segments returned by GetSegmentDetection . For more information, see StartSegmentDetectionFilters .
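
Putting the two filters together, a minimal sketch of building StartSegmentDetectionFilters (the threshold values are illustrative; both default to 50 when unset):

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// segmentFilters builds filters for a StartSegmentDetection request.
func segmentFilters() *types.StartSegmentDetectionFilters {
	return &types.StartSegmentDetectionFilters{
		ShotFilter: &types.StartShotDetectionFilter{
			MinSegmentConfidence: aws.Float32(80),
		},
		TechnicalCueFilter: &types.StartTechnicalCueDetectionFilter{
			MinSegmentConfidence: aws.Float32(90),
			// BlackFrame tuning omitted; see the BlackFrame type.
		},
	}
}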

type StartTextDetectionFilters

type StartTextDetectionFilters struct {

	// Filter focusing on a certain area of the frame. Uses a BoundingBox object to
	// set the region of the screen.
	RegionsOfInterest []RegionOfInterest

	// Filters focusing on qualities of the text, such as confidence or size.
	WordFilter *DetectionFilter
	// contains filtered or unexported fields
}

Set of optional parameters that let you set the criteria that text must meet to be included in your response. WordFilter looks at a word's height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.
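
A minimal sketch of such filters, assuming the BoundingBox, RegionOfInterest, and DetectionFilter types defined elsewhere in this package (the region and threshold values are illustrative):

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// textFilters restricts text detection to the top half of the frame and
// to words detected with at least 50 percent confidence.
func textFilters() *types.StartTextDetectionFilters {
	return &types.StartTextDetectionFilters{
		RegionsOfInterest: []types.RegionOfInterest{{
			BoundingBox: &types.BoundingBox{
				Left:   aws.Float32(0),
				Top:    aws.Float32(0),
				Width:  aws.Float32(1),
				Height: aws.Float32(0.5),
			},
		}},
		WordFilter: &types.DetectionFilter{
			MinConfidence: aws.Float32(50),
		},
	}
}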

type StreamProcessingStartSelector added in v1.18.0

type StreamProcessingStartSelector struct {

	// Specifies the starting point in the stream to start processing. This can be
	// done with a producer timestamp or a fragment number in a Kinesis stream.
	KVSStreamStartSelector *KinesisVideoStreamStartSelector
	// contains filtered or unexported fields
}

This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.

type StreamProcessingStopSelector added in v1.18.0

type StreamProcessingStopSelector struct {

	// Specifies the maximum amount of time in seconds that you want the stream to be
	// processed. The largest amount of time is 2 minutes. The default is 10 seconds.
	MaxDurationInSeconds *int64
	// contains filtered or unexported fields
}

Specifies when to stop processing the stream. You can specify a maximum amount of time to process the video.
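
A sketch pairing a start selector with a stop selector (the values are illustrative, and ProducerTimestamp is assumed here to be a producer timestamp in milliseconds; see KinesisVideoStreamStartSelector):

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// streamWindow bounds a stream processing session: start at a producer
// timestamp and stop after at most one minute (within the 2-minute cap).
func streamWindow(startMillis int64) (*types.StreamProcessingStartSelector, *types.StreamProcessingStopSelector) {
	start := &types.StreamProcessingStartSelector{
		KVSStreamStartSelector: &types.KinesisVideoStreamStartSelector{
			ProducerTimestamp: aws.Int64(startMillis),
		},
	}
	stop := &types.StreamProcessingStopSelector{
		MaxDurationInSeconds: aws.Int64(60),
	}
	return start, stop
}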

type StreamProcessor

type StreamProcessor struct {

	// Name of the Amazon Rekognition stream processor.
	Name *string

	// Current status of the Amazon Rekognition stream processor.
	Status StreamProcessorStatus
	// contains filtered or unexported fields
}

An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor . The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.

type StreamProcessorDataSharingPreference added in v1.18.0

type StreamProcessorDataSharingPreference struct {

	// If this option is set to true, you choose to share data with Rekognition to
	// improve model performance.
	//
	// This member is required.
	OptIn bool
	// contains filtered or unexported fields
}

Allows you to opt in to or out of sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams.

type StreamProcessorInput

type StreamProcessorInput struct {

	// The Kinesis video stream input stream for the source streaming video.
	KinesisVideoStream *KinesisVideoStream
	// contains filtered or unexported fields
}

Information about the source streaming video.

type StreamProcessorNotificationChannel added in v1.18.0

type StreamProcessorNotificationChannel struct {

	// The Amazon Resource Name (ARN) of the Amazon Simple Notification Service
	// topic to which Amazon Rekognition posts the completion status.
	//
	// This member is required.
	SNSTopicArn *string
	// contains filtered or unexported fields
}

The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation. Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition detects a person at second 2, a pet at second 4, and a person again at second 5, Amazon Rekognition sends two object class detected notifications, one for the person at second 2 and one for the pet at second 4. Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete.

type StreamProcessorOutput

type StreamProcessorOutput struct {

	// The Amazon Kinesis Data Streams stream to which the Amazon Rekognition stream
	// processor streams the analysis results.
	KinesisDataStream *KinesisDataStream

	// The Amazon S3 bucket location to which Amazon Rekognition publishes the
	// detailed inference results of a video analysis operation.
	S3Destination *S3Destination
	// contains filtered or unexported fields
}

Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

type StreamProcessorParameterToDelete added in v1.18.0

type StreamProcessorParameterToDelete string
const (
	StreamProcessorParameterToDeleteConnectedHomeMinConfidence StreamProcessorParameterToDelete = "ConnectedHomeMinConfidence"
	StreamProcessorParameterToDeleteRegionsOfInterest          StreamProcessorParameterToDelete = "RegionsOfInterest"
)

Enum values for StreamProcessorParameterToDelete

func (StreamProcessorParameterToDelete) Values added in v1.18.0

func (StreamProcessorParameterToDelete) Values() []StreamProcessorParameterToDelete

Values returns all known values for StreamProcessorParameterToDelete. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type StreamProcessorSettings

type StreamProcessorSettings struct {

	// Label detection settings to use on a streaming video. Defining the settings is
	// required in the request parameter for CreateStreamProcessor . Including this
	// setting in the CreateStreamProcessor request enables you to use the stream
	// processor for label detection. You can then select what you want the stream
	// processor to detect, such as people or pets. When the stream processor has
	// started, one notification is sent for each object class specified. For example,
	// if packages and pets are selected, one SNS notification is published the first
	// time a package is detected and one SNS notification is published the first time
	// a pet is detected, as well as an end-of-session summary.
	ConnectedHome *ConnectedHomeSettings

	// Face search settings to use on a streaming video.
	FaceSearch *FaceSearchSettings
	// contains filtered or unexported fields
}

Input parameters used in a streaming video analyzed by an Amazon Rekognition stream processor. You can use FaceSearch to recognize faces in a streaming video, or you can use ConnectedHome to detect labels.
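
For label detection, a minimal sketch of these settings (the label names follow the ConnectedHomeSettings documentation and the confidence value is illustrative):

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// labelSettings configures a stream processor for label detection.
func labelSettings() *types.StreamProcessorSettings {
	return &types.StreamProcessorSettings{
		ConnectedHome: &types.ConnectedHomeSettings{
			Labels:        []string{"PERSON", "PACKAGE"},
			MinConfidence: aws.Float32(80),
		},
	}
}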

type StreamProcessorSettingsForUpdate added in v1.18.0

type StreamProcessorSettingsForUpdate struct {

	// The label detection settings you want to use for your stream processor.
	ConnectedHomeForUpdate *ConnectedHomeSettingsForUpdate
	// contains filtered or unexported fields
}

The stream processor settings that you want to update. ConnectedHome settings can be updated to detect different labels with a different minimum confidence.

type StreamProcessorStatus

type StreamProcessorStatus string
const (
	StreamProcessorStatusStopped  StreamProcessorStatus = "STOPPED"
	StreamProcessorStatusStarting StreamProcessorStatus = "STARTING"
	StreamProcessorStatusRunning  StreamProcessorStatus = "RUNNING"
	StreamProcessorStatusFailed   StreamProcessorStatus = "FAILED"
	StreamProcessorStatusStopping StreamProcessorStatus = "STOPPING"
	StreamProcessorStatusUpdating StreamProcessorStatus = "UPDATING"
)

Enum values for StreamProcessorStatus

func (StreamProcessorStatus) Values added in v0.29.0

func (StreamProcessorStatus) Values() []StreamProcessorStatus

Values returns all known values for StreamProcessorStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type Summary

type Summary struct {

	// Provides the S3 bucket name and object name. The region for the S3 bucket
	// containing the S3 object must match the region you use for Amazon Rekognition
	// operations. For Amazon Rekognition to process an S3 object, the user must have
	// permission to access the S3 object. For more information, see How Amazon
	// Rekognition works with IAM in the Amazon Rekognition Developer Guide.
	S3Object *S3Object
	// contains filtered or unexported fields
}

The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label. You get the training summary S3 bucket location by calling DescribeProjectVersions .

type Sunglasses

type Sunglasses struct {

	// Level of confidence in the determination.
	Confidence *float32

	// Boolean value that indicates whether the face is wearing sunglasses or not.
	Value bool
	// contains filtered or unexported fields
}

Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

type TechnicalCueSegment

type TechnicalCueSegment struct {

	// The confidence that Amazon Rekognition Video has in the accuracy of the
	// detected segment.
	Confidence *float32

	// The type of the technical cue.
	Type TechnicalCueType
	// contains filtered or unexported fields
}

Information about a technical cue segment. For more information, see SegmentDetection .

type TechnicalCueType

type TechnicalCueType string
const (
	TechnicalCueTypeColorBars      TechnicalCueType = "ColorBars"
	TechnicalCueTypeEndCredits     TechnicalCueType = "EndCredits"
	TechnicalCueTypeBlackFrames    TechnicalCueType = "BlackFrames"
	TechnicalCueTypeOpeningCredits TechnicalCueType = "OpeningCredits"
	TechnicalCueTypeSlate          TechnicalCueType = "Slate"
	TechnicalCueTypeContent        TechnicalCueType = "Content"
)

Enum values for TechnicalCueType

func (TechnicalCueType) Values added in v0.29.0

func (TechnicalCueType) Values() []TechnicalCueType

Values returns all known values for TechnicalCueType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type TestingData

type TestingData struct {

	// The assets used for testing.
	Assets []Asset

	// If specified, Amazon Rekognition splits the training dataset to create a test
	// dataset for the training job.
	AutoCreate bool
	// contains filtered or unexported fields
}

The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition creates a test dataset using a temporary split of the training dataset.

type TestingDataResult

type TestingDataResult struct {

	// The testing dataset that was supplied for training.
	Input *TestingData

	// The subset of the dataset that was actually tested. Some images (assets) might
	// not be tested due to file formatting and other issues.
	Output *TestingData

	// The location of the data validation manifest. The data validation manifest is
	// created for the test dataset during model training.
	Validation *ValidationData
	// contains filtered or unexported fields
}

Amazon Sagemaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.

type TextDetection

type TextDetection struct {

	// The confidence that Amazon Rekognition has in the accuracy of the detected text
	// and the accuracy of the geometry points around the detected text.
	Confidence *float32

	// The word or line of text recognized by Amazon Rekognition.
	DetectedText *string

	// The location of the detected text on the image. Includes an axis-aligned
	// coarse bounding box surrounding the text and a finer-grain polygon for more
	// accurate spatial information.
	Geometry *Geometry

	// The identifier for the detected text. The identifier is only unique for a
	// single call to DetectText .
	Id *int32

	// The Parent identifier for the detected text identified by the value of ID . If
	// the type of detected text is LINE , the value of ParentId is Null .
	ParentId *int32

	// The type of text that was detected.
	Type TextTypes
	// contains filtered or unexported fields
}

Information about a word or line of text detected by DetectText . The DetectedText field contains the text that Amazon Rekognition detected in the image. Every word and line has an identifier ( Id ). Each word belongs to a line and has a parent identifier ( ParentId ) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words. For more information, see Detecting text in the Amazon Rekognition Developer Guide.
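
A short sketch that uses the Id/ParentId relationship to group detected words under their parent lines (the wordsByLine helper is illustrative):

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// wordsByLine maps each LINE identifier to the words detected within it.
func wordsByLine(dets []types.TextDetection) map[int32][]string {
	lines := make(map[int32][]string)
	for _, d := range dets {
		if d.Type == types.TextTypesWord && d.ParentId != nil {
			lines[*d.ParentId] = append(lines[*d.ParentId], aws.ToString(d.DetectedText))
		}
	}
	return lines
}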

type TextDetectionResult

type TextDetectionResult struct {

	// Details about text detected in a video.
	TextDetection *TextDetection

	// The time, in milliseconds from the start of the video, that the text was
	// detected. Note that Timestamp is not guaranteed to be accurate to the
	// individual frame where the text first appears.
	Timestamp int64
	// contains filtered or unexported fields
}

Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.

type TextTypes

type TextTypes string
const (
	TextTypesLine TextTypes = "LINE"
	TextTypesWord TextTypes = "WORD"
)

Enum values for TextTypes

func (TextTypes) Values added in v0.29.0

func (TextTypes) Values() []TextTypes

Values returns all known values for TextTypes. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ThrottlingException

type ThrottlingException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

Amazon Rekognition is temporarily unable to process the request. Try your call again.

func (*ThrottlingException) Error

func (e *ThrottlingException) Error() string

func (*ThrottlingException) ErrorCode

func (e *ThrottlingException) ErrorCode() string

func (*ThrottlingException) ErrorFault

func (e *ThrottlingException) ErrorFault() smithy.ErrorFault

func (*ThrottlingException) ErrorMessage

func (e *ThrottlingException) ErrorMessage() string

type TrainingData

type TrainingData struct {

	// A manifest file that contains references to the training images and
	// ground-truth annotations.
	Assets []Asset
	// contains filtered or unexported fields
}

The dataset used for training.

type TrainingDataResult

type TrainingDataResult struct {

	// The training data that you supplied.
	Input *TrainingData

	// Reference to images (assets) that were actually used during training with
	// trained model predictions.
	Output *TrainingData

	// A manifest that you supplied for training, with validation results for each
	// line.
	Validation *ValidationData
	// contains filtered or unexported fields
}

The data validation manifest created for the training dataset during model training.

type UnindexedFace

type UnindexedFace struct {

	// The structure that contains attributes of a face that IndexFaces detected, but
	// didn't index.
	FaceDetail *FaceDetail

	// An array of reasons that specify why a face wasn't indexed.
	//   - EXTREME_POSE - The face is at a pose that can't be detected. For example,
	//   the head is turned too far away from the camera.
	//   - EXCEEDS_MAX_FACES - The number of faces detected is already higher than
	//   that specified by the MaxFaces input parameter for IndexFaces .
	//   - LOW_BRIGHTNESS - The image is too dark.
	//   - LOW_SHARPNESS - The image is too blurry.
	//   - LOW_CONFIDENCE - The face was detected with a low confidence.
	//   - SMALL_BOUNDING_BOX - The bounding box around the face is too small.
	Reasons []Reason
	// contains filtered or unexported fields
}

A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.
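
A minimal sketch of inspecting those reasons, for example over the UnindexedFaces portion of an IndexFaces response (the reportUnindexed helper is illustrative):

package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// reportUnindexed prints why each detected face was not indexed.
func reportUnindexed(faces []types.UnindexedFace) {
	for i, f := range faces {
		for _, r := range f.Reasons {
			fmt.Printf("face %d not indexed: %s\n", i, r)
		}
	}
}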

type UnsearchedFace added in v1.29.0

type UnsearchedFace struct {

	// Structure containing attributes of the face that the algorithm detected. A
	// FaceDetail object contains either the default facial attributes or all facial
	// attributes. The default attributes are BoundingBox , Confidence , Landmarks ,
	// Pose , and Quality . GetFaceDetection is the only Amazon Rekognition Video
	// stored video operation that can return a FaceDetail object with all attributes.
	// To specify which attributes to return, use the FaceAttributes input parameter
	// for StartFaceDetection . The following Amazon Rekognition Video operations
	// return only the default attributes. The corresponding Start operations don't
	// have a FaceAttributes input parameter:
	//   - GetCelebrityRecognition
	//   - GetPersonTracking
	//   - GetFaceSearch
	// The Amazon Rekognition Image DetectFaces and IndexFaces operations can return
	// all facial attributes. To specify which attributes to return, use the Attributes
	// input parameter for DetectFaces . For IndexFaces , use the DetectAttributes
	// input parameter.
	FaceDetails *FaceDetail

	// Reasons why a face wasn't used for Search.
	Reasons []UnsearchedFaceReason
	// contains filtered or unexported fields
}

Face details inferred from the image but not used for search. The Reasons response attribute contains the reasons why a face wasn't used for the search.

type UnsearchedFaceReason added in v1.29.0

type UnsearchedFaceReason string
const (
	UnsearchedFaceReasonFaceNotLargest   UnsearchedFaceReason = "FACE_NOT_LARGEST"
	UnsearchedFaceReasonExceedsMaxFaces  UnsearchedFaceReason = "EXCEEDS_MAX_FACES"
	UnsearchedFaceReasonExtremePose      UnsearchedFaceReason = "EXTREME_POSE"
	UnsearchedFaceReasonLowBrightness    UnsearchedFaceReason = "LOW_BRIGHTNESS"
	UnsearchedFaceReasonLowSharpness     UnsearchedFaceReason = "LOW_SHARPNESS"
	UnsearchedFaceReasonLowConfidence    UnsearchedFaceReason = "LOW_CONFIDENCE"
	UnsearchedFaceReasonSmallBoundingBox UnsearchedFaceReason = "SMALL_BOUNDING_BOX"
	UnsearchedFaceReasonLowFaceQuality   UnsearchedFaceReason = "LOW_FACE_QUALITY"
)

Enum values for UnsearchedFaceReason

func (UnsearchedFaceReason) Values added in v1.29.0

func (UnsearchedFaceReason) Values() []UnsearchedFaceReason

Values returns all known values for UnsearchedFaceReason. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type UnsuccessfulFaceAssociation added in v1.29.0

type UnsuccessfulFaceAssociation struct {

	// Match confidence with the UserID. Helps determine whether a face association
	// was unsuccessful because it didn't meet UserMatchThreshold.
	Confidence *float32

	// A unique identifier assigned to the face.
	FaceId *string

	// The reason why the association was unsuccessful.
	Reasons []UnsuccessfulFaceAssociationReason

	// A provided ID for the UserID. Unique within the collection.
	UserId *string
	// contains filtered or unexported fields
}

Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully associated.

type UnsuccessfulFaceAssociationReason added in v1.29.0

type UnsuccessfulFaceAssociationReason string
const (
	UnsuccessfulFaceAssociationReasonFaceNotFound               UnsuccessfulFaceAssociationReason = "FACE_NOT_FOUND"
	UnsuccessfulFaceAssociationReasonAssociatedToADifferentUser UnsuccessfulFaceAssociationReason = "ASSOCIATED_TO_A_DIFFERENT_USER"
	UnsuccessfulFaceAssociationReasonLowMatchConfidence         UnsuccessfulFaceAssociationReason = "LOW_MATCH_CONFIDENCE"
)

Enum values for UnsuccessfulFaceAssociationReason

func (UnsuccessfulFaceAssociationReason) Values added in v1.29.0

func (UnsuccessfulFaceAssociationReason) Values() []UnsuccessfulFaceAssociationReason

Values returns all known values for UnsuccessfulFaceAssociationReason. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type UnsuccessfulFaceDeletion added in v1.29.0

type UnsuccessfulFaceDeletion struct {

	// A unique identifier assigned to the face.
	FaceId *string

	// The reason why the deletion was unsuccessful.
	Reasons []UnsuccessfulFaceDeletionReason

	// A provided ID for the UserID. Unique within the collection.
	UserId *string
	// contains filtered or unexported fields
}

Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully deleted.

type UnsuccessfulFaceDeletionReason added in v1.29.0

type UnsuccessfulFaceDeletionReason string
const (
	UnsuccessfulFaceDeletionReasonAssociatedToAnExistingUser UnsuccessfulFaceDeletionReason = "ASSOCIATED_TO_AN_EXISTING_USER"
	UnsuccessfulFaceDeletionReasonFaceNotFound               UnsuccessfulFaceDeletionReason = "FACE_NOT_FOUND"
)

Enum values for UnsuccessfulFaceDeletionReason

func (UnsuccessfulFaceDeletionReason) Values added in v1.29.0

func (UnsuccessfulFaceDeletionReason) Values() []UnsuccessfulFaceDeletionReason

Values returns all known values for UnsuccessfulFaceDeletionReason. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type UnsuccessfulFaceDisassociation added in v1.29.0

type UnsuccessfulFaceDisassociation struct {

	// A unique identifier assigned to the face.
	FaceId *string

	// The reason why the disassociation was unsuccessful.
	Reasons []UnsuccessfulFaceDisassociationReason

	// A provided ID for the UserID. Unique within the collection.
	UserId *string
	// contains filtered or unexported fields
}

Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully disassociated.

type UnsuccessfulFaceDisassociationReason added in v1.29.0

type UnsuccessfulFaceDisassociationReason string
const (
	UnsuccessfulFaceDisassociationReasonFaceNotFound               UnsuccessfulFaceDisassociationReason = "FACE_NOT_FOUND"
	UnsuccessfulFaceDisassociationReasonAssociatedToADifferentUser UnsuccessfulFaceDisassociationReason = "ASSOCIATED_TO_A_DIFFERENT_USER"
)

Enum values for UnsuccessfulFaceDisassociationReason

func (UnsuccessfulFaceDisassociationReason) Values added in v1.29.0

func (UnsuccessfulFaceDisassociationReason) Values() []UnsuccessfulFaceDisassociationReason

Values returns all known values for UnsuccessfulFaceDisassociationReason. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type User added in v1.29.0

type User struct {

	// A provided ID for the User. Unique within the collection.
	UserId *string

	// Indicates whether the UserID has been updated with the latest set of faces to
	// be associated with it.
	UserStatus UserStatus
	// contains filtered or unexported fields
}

Metadata of the user stored in a collection.

type UserMatch added in v1.29.0

type UserMatch struct {

	// Confidence in the match of this UserID with the input face.
	Similarity *float32

	// Describes the UserID metadata.
	User *MatchedUser
	// contains filtered or unexported fields
}

Provides UserID metadata along with the confidence in the match of this UserID with the input face.

type UserStatus added in v1.29.0

type UserStatus string
const (
	UserStatusActive   UserStatus = "ACTIVE"
	UserStatusUpdating UserStatus = "UPDATING"
	UserStatusCreating UserStatus = "CREATING"
	UserStatusCreated  UserStatus = "CREATED"
)

Enum values for UserStatus

func (UserStatus) Values added in v1.29.0

func (UserStatus) Values() []UserStatus

Values returns all known values for UserStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ValidationData added in v0.29.0

type ValidationData struct {

	// The assets that comprise the validation data.
	Assets []Asset
	// contains filtered or unexported fields
}

Contains the Amazon S3 bucket location of the validation data for a model training job. The validation data includes error information for individual JSON Lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide. You get the ValidationData object for the training dataset ( TrainingDataResult ) and the test dataset ( TestingDataResult ) by calling DescribeProjectVersions . The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.
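
A sketch of pulling the manifest location out of a ValidationData value, relying on the single-Asset layout described above (the validationManifest helper is illustrative):

package example

import "github.com/aws/aws-sdk-go-v2/service/rekognition/types"

// validationManifest returns the S3 location of the validation manifest,
// or nil if the documented layout is not present.
func validationManifest(v *types.ValidationData) *types.S3Object {
	if v == nil || len(v.Assets) == 0 {
		return nil
	}
	if m := v.Assets[0].GroundTruthManifest; m != nil {
		return m.S3Object
	}
	return nil
}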

type Video

type Video struct {

	// The Amazon S3 bucket name and file name for the video.
	S3Object *S3Object
	// contains filtered or unexported fields
}

Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.
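
A minimal sketch of building this input for a start operation (the bucket and key names are hypothetical):

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/rekognition/types"
)

// analysisInput builds the Video value passed to start operations such
// as StartLabelDetection.
func analysisInput() *types.Video {
	return &types.Video{
		S3Object: &types.S3Object{
			Bucket: aws.String("my-video-bucket"),
			Name:   aws.String("clips/lobby.mp4"),
		},
	}
}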

type VideoColorRange added in v1.7.0

type VideoColorRange string
const (
	VideoColorRangeFull    VideoColorRange = "FULL"
	VideoColorRangeLimited VideoColorRange = "LIMITED"
)

Enum values for VideoColorRange

func (VideoColorRange) Values added in v1.7.0

func (VideoColorRange) Values() []VideoColorRange

Values returns all known values for VideoColorRange. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type VideoJobStatus

type VideoJobStatus string
const (
	VideoJobStatusInProgress VideoJobStatus = "IN_PROGRESS"
	VideoJobStatusSucceeded  VideoJobStatus = "SUCCEEDED"
	VideoJobStatusFailed     VideoJobStatus = "FAILED"
)

Enum values for VideoJobStatus

func (VideoJobStatus) Values added in v0.29.0

func (VideoJobStatus) Values() []VideoJobStatus

Values returns all known values for VideoJobStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type VideoMetadata

type VideoMetadata struct {

	// Type of compression used in the analyzed video.
	Codec *string

	// A description of the range of luminance values in a video, either LIMITED (16
	// to 235) or FULL (0 to 255).
	ColorRange VideoColorRange

	// Length of the video in milliseconds.
	DurationMillis *int64

	// Format of the analyzed video. Possible values are MP4, MOV and AVI.
	Format *string

	// Vertical pixel dimension of the video.
	FrameHeight *int64

	// Number of frames per second in the video.
	FrameRate *float32

	// Horizontal pixel dimension of the video.
	FrameWidth *int64
	// contains filtered or unexported fields
}

Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

type VideoTooLargeException

type VideoTooLargeException struct {
	Message *string

	ErrorCodeOverride *string

	Code   *string
	Logref *string
	// contains filtered or unexported fields
}

The file size or duration of the supplied media is too large. The maximum file size is 10 GB. The maximum duration is 6 hours.

func (*VideoTooLargeException) Error

func (e *VideoTooLargeException) Error() string

func (*VideoTooLargeException) ErrorCode

func (e *VideoTooLargeException) ErrorCode() string

func (*VideoTooLargeException) ErrorFault

func (e *VideoTooLargeException) ErrorFault() smithy.ErrorFault

func (*VideoTooLargeException) ErrorMessage

func (e *VideoTooLargeException) ErrorMessage() string
