package types

v1.28.6

Published: Apr 16, 2024 License: Apache-2.0 Imports: 4 Imported by: 20

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AmazonOpenSearchServerlessBufferingHints added in v1.15.0

type AmazonOpenSearchServerlessBufferingHints struct {

	// Buffer incoming data for the specified period of time, in seconds, before
	// delivering it to the destination. The default value is 300 (5 minutes).
	IntervalInSeconds *int32

	// Buffer incoming data to the specified size, in MBs, before delivering it to the
	// destination. The default value is 5. We recommend setting this parameter to a
	// value greater than the amount of data you typically ingest into the delivery
	// stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the
	// value should be 10 MB or higher.
	SizeInMBs *int32
	// contains filtered or unexported fields
}

Describes the buffering to perform before delivering data to the Serverless offering for Amazon OpenSearch Service destination.

type AmazonOpenSearchServerlessDestinationConfiguration added in v1.15.0

type AmazonOpenSearchServerlessDestinationConfiguration struct {

	// The Serverless offering for Amazon OpenSearch Service index name.
	//
	// This member is required.
	IndexName *string

	// The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for
	// calling the Serverless offering for Amazon OpenSearch Service Configuration API
	// and for indexing documents.
	//
	// This member is required.
	RoleARN *string

	// Describes the configuration of a destination in Amazon S3.
	//
	// This member is required.
	S3Configuration *S3DestinationConfiguration

	// The buffering options. If no value is specified, the default values for
	// AmazonOpenSearchServerlessBufferingHints are used.
	BufferingHints *AmazonOpenSearchServerlessBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the collection in the Serverless
	// offering for Amazon OpenSearch Service.
	CollectionEndpoint *string

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to the
	// Serverless offering for Amazon OpenSearch Service. The default value is 300 (5
	// minutes).
	RetryOptions *AmazonOpenSearchServerlessRetryOptions

	// Defines how documents should be delivered to Amazon S3. When it is set to
	// FailedDocumentsOnly, Firehose writes any documents that could not be indexed to
	// the configured Amazon S3 destination, with AmazonOpenSearchService-failed/
	// appended to the key prefix. When set to AllDocuments, Firehose delivers all
	// incoming records to Amazon S3, and also writes failed documents with
	// AmazonOpenSearchService-failed/ appended to the prefix.
	S3BackupMode AmazonOpenSearchServerlessS3BackupMode

	// The details of the VPC of the Amazon OpenSearch or Amazon OpenSearch Serverless
	// destination.
	VpcConfiguration *VpcConfiguration
	// contains filtered or unexported fields
}

Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
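
For orientation, a minimal construction sketch follows. The ARNs, bucket, and collection endpoint are placeholder values, and the S3DestinationConfiguration fields (BucketARN, RoleARN) are assumed from this package rather than shown in this excerpt.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// serverlessDestination fills the three required members (IndexName, RoleARN,
// S3Configuration) plus an explicit backup mode. All ARNs are placeholders.
func serverlessDestination() types.AmazonOpenSearchServerlessDestinationConfiguration {
	return types.AmazonOpenSearchServerlessDestinationConfiguration{
		IndexName: aws.String("my-index"),
		RoleARN:   aws.String("arn:aws:iam::111122223333:role/firehose-delivery-role"),
		S3Configuration: &types.S3DestinationConfiguration{
			BucketARN: aws.String("arn:aws:s3:::my-backup-bucket"),
			RoleARN:   aws.String("arn:aws:iam::111122223333:role/firehose-delivery-role"),
		},
		CollectionEndpoint: aws.String("https://example.us-east-1.aoss.amazonaws.com"),
		S3BackupMode:       types.AmazonOpenSearchServerlessS3BackupModeFailedDocumentsOnly,
	}
}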

type AmazonOpenSearchServerlessDestinationDescription added in v1.15.0

type AmazonOpenSearchServerlessDestinationDescription struct {

	// The buffering options.
	BufferingHints *AmazonOpenSearchServerlessBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the collection in the Serverless
	// offering for Amazon OpenSearch Service.
	CollectionEndpoint *string

	// The Serverless offering for Amazon OpenSearch Service index name.
	IndexName *string

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The Serverless offering for Amazon OpenSearch Service retry options.
	RetryOptions *AmazonOpenSearchServerlessRetryOptions

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials.
	RoleARN *string

	// The Amazon S3 backup mode.
	S3BackupMode AmazonOpenSearchServerlessS3BackupMode

	// Describes a destination in Amazon S3.
	S3DestinationDescription *S3DestinationDescription

	// The details of the VPC of the Amazon ES destination.
	VpcConfigurationDescription *VpcConfigurationDescription
	// contains filtered or unexported fields
}

The destination description in the Serverless offering for Amazon OpenSearch Service.

type AmazonOpenSearchServerlessDestinationUpdate added in v1.15.0

type AmazonOpenSearchServerlessDestinationUpdate struct {

	// The buffering options. If no value is specified,
	// AmazonOpenSearchServerlessBufferingHints object default values are used.
	BufferingHints *AmazonOpenSearchServerlessBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the collection in the Serverless
	// offering for Amazon OpenSearch Service.
	CollectionEndpoint *string

	// The Serverless offering for Amazon OpenSearch Service index name.
	IndexName *string

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to the
	// Serverless offering for Amazon OpenSearch Service. The default value is 300 (5
	// minutes).
	RetryOptions *AmazonOpenSearchServerlessRetryOptions

	// The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for
	// calling the Serverless offering for Amazon OpenSearch Service Configuration API
	// and for indexing documents.
	RoleARN *string

	// Describes an update for a destination in Amazon S3.
	S3Update *S3DestinationUpdate
	// contains filtered or unexported fields
}

Describes an update for a destination in the Serverless offering for Amazon OpenSearch Service.

type AmazonOpenSearchServerlessRetryOptions added in v1.15.0

type AmazonOpenSearchServerlessRetryOptions struct {

	// After an initial failure to deliver to the Serverless offering for Amazon
	// OpenSearch Service, the total amount of time during which Firehose retries
	// delivery (including the first attempt). After this time has elapsed, the failed
	// documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A
	// value of 0 (zero) results in no retries.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

Configures retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service.

type AmazonOpenSearchServerlessS3BackupMode added in v1.15.0

type AmazonOpenSearchServerlessS3BackupMode string
const (
	AmazonOpenSearchServerlessS3BackupModeFailedDocumentsOnly AmazonOpenSearchServerlessS3BackupMode = "FailedDocumentsOnly"
	AmazonOpenSearchServerlessS3BackupModeAllDocuments        AmazonOpenSearchServerlessS3BackupMode = "AllDocuments"
)

Enum values for AmazonOpenSearchServerlessS3BackupMode

func (AmazonOpenSearchServerlessS3BackupMode) Values added in v1.15.0

Values returns all known values for AmazonOpenSearchServerlessS3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.
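
A short sketch of how Values can be used to enumerate the backup modes known to this client version (the constants are the ones listed above):

package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// printBackupModes lists every backup mode this client version knows about.
// Values is called on a zero value because it does not depend on the receiver.
func printBackupModes() {
	for _, mode := range types.AmazonOpenSearchServerlessS3BackupMode("").Values() {
		fmt.Println(mode)
	}
}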

type AmazonopensearchserviceBufferingHints added in v1.7.0

type AmazonopensearchserviceBufferingHints struct {

	// Buffer incoming data for the specified period of time, in seconds, before
	// delivering it to the destination. The default value is 300 (5 minutes).
	IntervalInSeconds *int32

	// Buffer incoming data to the specified size, in MBs, before delivering it to the
	// destination. The default value is 5. We recommend setting this parameter to a
	// value greater than the amount of data you typically ingest into the delivery
	// stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the
	// value should be 10 MB or higher.
	SizeInMBs *int32
	// contains filtered or unexported fields
}

Describes the buffering to perform before delivering data to the Amazon OpenSearch Service destination.

type AmazonopensearchserviceDestinationConfiguration added in v1.7.0

type AmazonopensearchserviceDestinationConfiguration struct {

	// The Amazon OpenSearch Service index name.
	//
	// This member is required.
	IndexName *string

	// The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for
	// calling the Amazon OpenSearch Service Configuration API and for indexing
	// documents.
	//
	// This member is required.
	RoleARN *string

	// Describes the configuration of a destination in Amazon S3.
	//
	// This member is required.
	S3Configuration *S3DestinationConfiguration

	// The buffering options. If no value is specified, the default values for
	// AmazonopensearchserviceBufferingHints are used.
	BufferingHints *AmazonopensearchserviceBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the cluster. Specify either this
	// ClusterEndpoint or the DomainARN field.
	ClusterEndpoint *string

	// Indicates the method for setting up document ID. The supported methods are
	// Firehose generated document ID and OpenSearch Service generated document ID.
	DocumentIdOptions *DocumentIdOptions

	// The ARN of the Amazon OpenSearch Service domain. The IAM role must have
	// permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and
	// DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN.
	DomainARN *string

	// The Amazon OpenSearch Service index rotation period. Index rotation appends a
	// timestamp to the IndexName to facilitate the expiration of old data.
	IndexRotationPeriod AmazonopensearchserviceIndexRotationPeriod

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to Amazon
	// OpenSearch Service. The default value is 300 (5 minutes).
	RetryOptions *AmazonopensearchserviceRetryOptions

	// Defines how documents should be delivered to Amazon S3. When it is set to
	// FailedDocumentsOnly, Firehose writes any documents that could not be indexed to
	// the configured Amazon S3 destination, with AmazonOpenSearchService-failed/
	// appended to the key prefix. When set to AllDocuments, Firehose delivers all
	// incoming records to Amazon S3, and also writes failed documents with
	// AmazonOpenSearchService-failed/ appended to the prefix.
	S3BackupMode AmazonopensearchserviceS3BackupMode

	// The Amazon OpenSearch Service type name. For Elasticsearch 6.x, there can be
	// only one type per index. If you try to specify a new type for an existing index
	// that already has another type, Firehose returns an error during run time.
	TypeName *string

	// The details of the VPC of the Amazon OpenSearch or Amazon OpenSearch Serverless
	// destination.
	VpcConfiguration *VpcConfiguration
	// contains filtered or unexported fields
}

Describes the configuration of a destination in Amazon OpenSearch Service.

type AmazonopensearchserviceDestinationDescription added in v1.7.0

type AmazonopensearchserviceDestinationDescription struct {

	// The buffering options.
	BufferingHints *AmazonopensearchserviceBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the cluster. Firehose uses either
	// this ClusterEndpoint or the DomainARN field to send data to Amazon OpenSearch
	// Service.
	ClusterEndpoint *string

	// Indicates the method for setting up document ID. The supported methods are
	// Firehose generated document ID and OpenSearch Service generated document ID.
	DocumentIdOptions *DocumentIdOptions

	// The ARN of the Amazon OpenSearch Service domain.
	DomainARN *string

	// The Amazon OpenSearch Service index name.
	IndexName *string

	// The Amazon OpenSearch Service index rotation period
	IndexRotationPeriod AmazonopensearchserviceIndexRotationPeriod

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The Amazon OpenSearch Service retry options.
	RetryOptions *AmazonopensearchserviceRetryOptions

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials.
	RoleARN *string

	// The Amazon S3 backup mode.
	S3BackupMode AmazonopensearchserviceS3BackupMode

	// Describes a destination in Amazon S3.
	S3DestinationDescription *S3DestinationDescription

	// The Amazon OpenSearch Service type name. This applies to Elasticsearch 6.x and
	// lower versions. For Elasticsearch 7.x and OpenSearch Service 1.x, there's no
	// value for TypeName.
	TypeName *string

	// The details of the VPC of the Amazon ES destination.
	VpcConfigurationDescription *VpcConfigurationDescription
	// contains filtered or unexported fields
}

The destination description in Amazon OpenSearch Service.

type AmazonopensearchserviceDestinationUpdate added in v1.7.0

type AmazonopensearchserviceDestinationUpdate struct {

	// The buffering options. If no value is specified,
	// AmazonopensearchserviceBufferingHints object default values are used.
	BufferingHints *AmazonopensearchserviceBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the cluster. Specify either this
	// ClusterEndpoint or the DomainARN field.
	ClusterEndpoint *string

	// Indicates the method for setting up document ID. The supported methods are
	// Firehose generated document ID and OpenSearch Service generated document ID.
	DocumentIdOptions *DocumentIdOptions

	// The ARN of the Amazon OpenSearch Service domain. The IAM role must have
	// permissions for DescribeDomain, DescribeDomains, and DescribeDomainConfig after
	// assuming the IAM role specified in RoleARN.
	DomainARN *string

	// The Amazon OpenSearch Service index name.
	IndexName *string

	// The Amazon OpenSearch Service index rotation period. Index rotation appends a
	// timestamp to IndexName to facilitate the expiration of old data.
	IndexRotationPeriod AmazonopensearchserviceIndexRotationPeriod

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to Amazon
	// OpenSearch Service. The default value is 300 (5 minutes).
	RetryOptions *AmazonopensearchserviceRetryOptions

	// The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for
	// calling the Amazon OpenSearch Service Configuration API and for indexing
	// documents.
	RoleARN *string

	// Describes an update for a destination in Amazon S3.
	S3Update *S3DestinationUpdate

	// The Amazon OpenSearch Service type name. For Elasticsearch 6.x, there can be
	// only one type per index. If you try to specify a new type for an existing index
	// that already has another type, Firehose returns an error during runtime. If you
	// upgrade Elasticsearch from 6.x to 7.x and don’t update your delivery stream,
	// Firehose still delivers data to Elasticsearch with the old index name and type
	// name. If you want to update your delivery stream with a new index name, provide
	// an empty string for TypeName.
	TypeName *string
	// contains filtered or unexported fields
}

Describes an update for a destination in Amazon OpenSearch Service.

type AmazonopensearchserviceIndexRotationPeriod added in v1.7.0

type AmazonopensearchserviceIndexRotationPeriod string
const (
	AmazonopensearchserviceIndexRotationPeriodNoRotation AmazonopensearchserviceIndexRotationPeriod = "NoRotation"
	AmazonopensearchserviceIndexRotationPeriodOneHour    AmazonopensearchserviceIndexRotationPeriod = "OneHour"
	AmazonopensearchserviceIndexRotationPeriodOneDay     AmazonopensearchserviceIndexRotationPeriod = "OneDay"
	AmazonopensearchserviceIndexRotationPeriodOneWeek    AmazonopensearchserviceIndexRotationPeriod = "OneWeek"
	AmazonopensearchserviceIndexRotationPeriodOneMonth   AmazonopensearchserviceIndexRotationPeriod = "OneMonth"
)

Enum values for AmazonopensearchserviceIndexRotationPeriod

func (AmazonopensearchserviceIndexRotationPeriod) Values added in v1.7.0

Values returns all known values for AmazonopensearchserviceIndexRotationPeriod. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type AmazonopensearchserviceRetryOptions added in v1.7.0

type AmazonopensearchserviceRetryOptions struct {

	// After an initial failure to deliver to Amazon OpenSearch Service, the total
	// amount of time during which Firehose retries delivery (including the first
	// attempt). After this time has elapsed, the failed documents are written to
	// Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results
	// in no retries.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

Configures retry behavior in case Firehose is unable to deliver documents to Amazon OpenSearch Service.

type AmazonopensearchserviceS3BackupMode added in v1.7.0

type AmazonopensearchserviceS3BackupMode string
const (
	AmazonopensearchserviceS3BackupModeFailedDocumentsOnly AmazonopensearchserviceS3BackupMode = "FailedDocumentsOnly"
	AmazonopensearchserviceS3BackupModeAllDocuments        AmazonopensearchserviceS3BackupMode = "AllDocuments"
)

Enum values for AmazonopensearchserviceS3BackupMode

func (AmazonopensearchserviceS3BackupMode) Values added in v1.7.0

Values returns all known values for AmazonopensearchserviceS3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type AuthenticationConfiguration added in v1.19.0

type AuthenticationConfiguration struct {

	// The type of connectivity used to access the Amazon MSK cluster.
	//
	// This member is required.
	Connectivity Connectivity

	// The ARN of the role used to access the Amazon MSK cluster.
	//
	// This member is required.
	RoleARN *string
	// contains filtered or unexported fields
}

The authentication configuration of the Amazon MSK cluster.

type BufferingHints

type BufferingHints struct {

	// Buffer incoming data for the specified period of time, in seconds, before
	// delivering it to the destination. The default value is 300. This parameter is
	// optional but if you specify a value for it, you must also specify a value for
	// SizeInMBs , and vice versa.
	IntervalInSeconds *int32

	// Buffer incoming data to the specified size, in MiBs, before delivering it to
	// the destination. The default value is 5. This parameter is optional but if you
	// specify a value for it, you must also specify a value for IntervalInSeconds ,
	// and vice versa. We recommend setting this parameter to a value greater than the
	// amount of data you typically ingest into the delivery stream in 10 seconds. For
	// example, if you typically ingest data at 1 MiB/sec, the value should be 10 MiB
	// or higher.
	SizeInMBs *int32
	// contains filtered or unexported fields
}

Describes hints for the buffering to perform before delivering data to the destination. These options are treated as hints, and therefore Firehose might choose to use different values when it is optimal. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
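
A minimal sketch of the both-or-neither rule: if you set one hint you must set the other. The values here are illustrative only.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// bufferingHints sets both hints together, as required when either is set.
// 10 MiB / 120 seconds are illustrative values, not recommendations.
func bufferingHints() *types.BufferingHints {
	return &types.BufferingHints{
		SizeInMBs:         aws.Int32(10),
		IntervalInSeconds: aws.Int32(120),
	}
}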

type CloudWatchLoggingOptions

type CloudWatchLoggingOptions struct {

	// Enables or disables CloudWatch logging.
	Enabled *bool

	// The CloudWatch group name for logging. This value is required if CloudWatch
	// logging is enabled.
	LogGroupName *string

	// The CloudWatch log stream name for logging. This value is required if
	// CloudWatch logging is enabled.
	LogStreamName *string
	// contains filtered or unexported fields
}

Describes the Amazon CloudWatch logging options for your delivery stream.
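
A small sketch: when Enabled is true, both the log group and log stream names must be supplied. The names below are placeholders.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// cloudWatchLogging enables logging; LogGroupName and LogStreamName are
// required once Enabled is true. The names are placeholders.
func cloudWatchLogging() *types.CloudWatchLoggingOptions {
	return &types.CloudWatchLoggingOptions{
		Enabled:       aws.Bool(true),
		LogGroupName:  aws.String("/aws/kinesisfirehose/my-stream"),
		LogStreamName: aws.String("DestinationDelivery"),
	}
}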

type CompressionFormat

type CompressionFormat string
const (
	CompressionFormatUncompressed CompressionFormat = "UNCOMPRESSED"
	CompressionFormatGzip         CompressionFormat = "GZIP"
	CompressionFormatZip          CompressionFormat = "ZIP"
	CompressionFormatSnappy       CompressionFormat = "Snappy"
	CompressionFormatHadoopSnappy CompressionFormat = "HADOOP_SNAPPY"
)

Enum values for CompressionFormat

func (CompressionFormat) Values added in v0.29.0

Values returns all known values for CompressionFormat. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ConcurrentModificationException

type ConcurrentModificationException struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}

Another modification has already happened. Fetch VersionId again and use it to update the destination.

func (*ConcurrentModificationException) Error

func (*ConcurrentModificationException) ErrorCode

func (e *ConcurrentModificationException) ErrorCode() string

func (*ConcurrentModificationException) ErrorFault

func (*ConcurrentModificationException) ErrorMessage

func (e *ConcurrentModificationException) ErrorMessage() string

type Connectivity added in v1.19.0

type Connectivity string
const (
	ConnectivityPublic  Connectivity = "PUBLIC"
	ConnectivityPrivate Connectivity = "PRIVATE"
)

Enum values for Connectivity

func (Connectivity) Values added in v1.19.0

func (Connectivity) Values() []Connectivity

Values returns all known values for Connectivity. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ContentEncoding

type ContentEncoding string
const (
	ContentEncodingNone ContentEncoding = "NONE"
	ContentEncodingGzip ContentEncoding = "GZIP"
)

Enum values for ContentEncoding

func (ContentEncoding) Values added in v0.29.0

func (ContentEncoding) Values() []ContentEncoding

Values returns all known values for ContentEncoding. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type CopyCommand

type CopyCommand struct {

	// The name of the target table. The table must already exist in the database.
	//
	// This member is required.
	DataTableName *string

	// Optional parameters to use with the Amazon Redshift COPY command. For more
	// information, see the "Optional Parameters" section of Amazon Redshift COPY
	// command (https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html) . Some
	// possible examples that would apply to Firehose are as follows: delimiter '\t'
	// lzop; - fields are delimited with "\t" (TAB character) and compressed using
	// lzop. delimiter '|' - fields are delimited with "|" (this is the default
	// delimiter). delimiter '|' escape - the delimiter should be escaped. fixedwidth
	// 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' - fields are
	// fixed width in the source, with each width specified after every column in the
	// table. JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the
	// path specified is the format of the data. For more examples, see Amazon
	// Redshift COPY command examples (https://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html)
	// .
	CopyOptions *string

	// A comma-separated list of column names.
	DataTableColumns *string
	// contains filtered or unexported fields
}

Describes a COPY command for Amazon Redshift.
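
A sketch of a CopyCommand using option strings like those described above; the table, columns, and COPY options are placeholders chosen for illustration.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// redshiftCopy targets an existing table and passes COPY options for
// pipe-delimited, gzip-compressed input. Table and column names are placeholders.
func redshiftCopy() *types.CopyCommand {
	return &types.CopyCommand{
		DataTableName:    aws.String("events"), // required; the table must already exist
		DataTableColumns: aws.String("event_id,event_ts,payload"),
		CopyOptions:      aws.String("delimiter '|' gzip"),
	}
}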

type DataFormatConversionConfiguration

type DataFormatConversionConfiguration struct {

	// Defaults to true . Set it to false if you want to disable format conversion
	// while preserving the configuration details.
	Enabled *bool

	// Specifies the deserializer that you want Firehose to use to convert the format
	// of your data from JSON. This parameter is required if Enabled is set to true.
	InputFormatConfiguration *InputFormatConfiguration

	// Specifies the serializer that you want Firehose to use to convert the format of
	// your data to the Parquet or ORC format. This parameter is required if Enabled
	// is set to true.
	OutputFormatConfiguration *OutputFormatConfiguration

	// Specifies the Amazon Web Services Glue Data Catalog table that contains the
	// column information. This parameter is required if Enabled is set to true.
	SchemaConfiguration *SchemaConfiguration
	// contains filtered or unexported fields
}

Specifies that you want Firehose to convert data from the JSON format to the Parquet or ORC format before writing it to Amazon S3. Firehose uses the serializer and deserializer that you specify, in addition to the column information from the Amazon Web Services Glue table, to deserialize your input data from JSON and then serialize it to the Parquet or ORC format. For more information, see Firehose Record Format Conversion (https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html) .
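
A sketch of a JSON-to-Parquet conversion setup. The Serializer, ParquetSerDe, and SchemaConfiguration fields used here are assumed from elsewhere in this package (they are not shown in this excerpt), and the Glue database, table, and role are placeholders.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// jsonToParquet deserializes incoming JSON with the OpenX SerDe, serializes it
// as Parquet, and reads column information from a Glue table (placeholders).
func jsonToParquet() *types.DataFormatConversionConfiguration {
	return &types.DataFormatConversionConfiguration{
		Enabled: aws.Bool(true),
		InputFormatConfiguration: &types.InputFormatConfiguration{
			Deserializer: &types.Deserializer{OpenXJsonSerDe: &types.OpenXJsonSerDe{}},
		},
		OutputFormatConfiguration: &types.OutputFormatConfiguration{
			Serializer: &types.Serializer{ParquetSerDe: &types.ParquetSerDe{}},
		},
		SchemaConfiguration: &types.SchemaConfiguration{
			DatabaseName: aws.String("my_glue_db"),
			TableName:    aws.String("my_glue_table"),
			RoleARN:      aws.String("arn:aws:iam::111122223333:role/firehose-delivery-role"),
		},
	}
}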

type DefaultDocumentIdFormat added in v1.18.0

type DefaultDocumentIdFormat string
const (
	DefaultDocumentIdFormatFirehoseDefault DefaultDocumentIdFormat = "FIREHOSE_DEFAULT"
	DefaultDocumentIdFormatNoDocumentId    DefaultDocumentIdFormat = "NO_DOCUMENT_ID"
)

Enum values for DefaultDocumentIdFormat

func (DefaultDocumentIdFormat) Values added in v1.18.0

Values returns all known values for DefaultDocumentIdFormat. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DeliveryStreamDescription

type DeliveryStreamDescription struct {

	// The Amazon Resource Name (ARN) of the delivery stream. For more information,
	// see Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	DeliveryStreamARN *string

	// The name of the delivery stream.
	//
	// This member is required.
	DeliveryStreamName *string

	// The status of the delivery stream. If the status of a delivery stream is
	// CREATING_FAILED , this status doesn't change, and you can't invoke
	// CreateDeliveryStream again on it. However, you can invoke the
	// DeleteDeliveryStream operation to delete it.
	//
	// This member is required.
	DeliveryStreamStatus DeliveryStreamStatus

	// The delivery stream type. This can be one of the following values:
	//   - DirectPut : Provider applications access the delivery stream directly.
	//   - KinesisStreamAsSource : The delivery stream uses a Kinesis data stream as a
	//   source.
	//   - MSKAsSource : The delivery stream uses an Amazon MSK cluster as a source.
	//
	// This member is required.
	DeliveryStreamType DeliveryStreamType

	// The destinations.
	//
	// This member is required.
	Destinations []DestinationDescription

	// Indicates whether there are more destinations available to list.
	//
	// This member is required.
	HasMoreDestinations *bool

	// Each time the destination is updated for a delivery stream, the version ID is
	// changed, and the current version ID is required when updating the destination.
	// This is so that the service knows it is applying the changes to the correct
	// version of the delivery stream.
	//
	// This member is required.
	VersionId *string

	// The date and time that the delivery stream was created.
	CreateTimestamp *time.Time

	// Indicates the server-side encryption (SSE) status for the delivery stream.
	DeliveryStreamEncryptionConfiguration *DeliveryStreamEncryptionConfiguration

	// Provides details in case one of the following operations fails due to an error
	// related to KMS: CreateDeliveryStream , DeleteDeliveryStream ,
	// StartDeliveryStreamEncryption , StopDeliveryStreamEncryption .
	FailureDescription *FailureDescription

	// The date and time that the delivery stream was last updated.
	LastUpdateTimestamp *time.Time

	// If the DeliveryStreamType parameter is KinesisStreamAsSource , a
	// SourceDescription object describing the source Kinesis data stream.
	Source *SourceDescription
	// contains filtered or unexported fields
}

Contains information about a delivery stream.
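
The VersionId workflow mentioned above can be sketched with the parent firehose client package. The DescribeDeliveryStream and UpdateDestination operations and their fields follow the parent module; treat this as an illustrative sketch with a placeholder stream name and update payload.

package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// updateWithCurrentVersion reads the current VersionId from the stream
// description and passes it back when updating the first destination.
func updateWithCurrentVersion(ctx context.Context, client *firehose.Client) error {
	desc, err := client.DescribeDeliveryStream(ctx, &firehose.DescribeDeliveryStreamInput{
		DeliveryStreamName: aws.String("my-stream"),
	})
	if err != nil {
		return err
	}
	stream := desc.DeliveryStreamDescription
	if len(stream.Destinations) == 0 {
		return nil
	}
	_, err = client.UpdateDestination(ctx, &firehose.UpdateDestinationInput{
		DeliveryStreamName:             stream.DeliveryStreamName,
		CurrentDeliveryStreamVersionId: stream.VersionId,
		DestinationId:                  stream.Destinations[0].DestinationId,
		ExtendedS3DestinationUpdate: &types.ExtendedS3DestinationUpdate{
			CompressionFormat: types.CompressionFormatGzip,
		},
	})
	return err
}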

type DeliveryStreamEncryptionConfiguration

type DeliveryStreamEncryptionConfiguration struct {

	// Provides details in case one of the following operations fails due to an error
	// related to KMS: CreateDeliveryStream , DeleteDeliveryStream ,
	// StartDeliveryStreamEncryption , StopDeliveryStreamEncryption .
	FailureDescription *FailureDescription

	// If KeyType is CUSTOMER_MANAGED_CMK , this field contains the ARN of the customer
	// managed CMK. If KeyType is AWS_OWNED_CMK ,
	// DeliveryStreamEncryptionConfiguration doesn't contain a value for KeyARN .
	KeyARN *string

	// Indicates the type of customer master key (CMK) that is used for encryption.
	// The default setting is AWS_OWNED_CMK . For more information
	// about CMKs, see Customer Master Keys (CMKs) (https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys)
	// .
	KeyType KeyType

	// This is the server-side encryption (SSE) status for the delivery stream. For a
	// full description of the different values of this status, see
	// StartDeliveryStreamEncryption and StopDeliveryStreamEncryption . If this status
	// is ENABLING_FAILED or DISABLING_FAILED , it is the status of the most recent
	// attempt to enable or disable SSE, respectively.
	Status DeliveryStreamEncryptionStatus
	// contains filtered or unexported fields
}

Contains information about the server-side encryption (SSE) status for the delivery stream, the type of customer master key (CMK) in use, if any, and the ARN of the CMK. You can get DeliveryStreamEncryptionConfiguration by invoking the DescribeDeliveryStream operation.

type DeliveryStreamEncryptionConfigurationInput

type DeliveryStreamEncryptionConfigurationInput struct {

	// Indicates the type of customer master key (CMK) to use for encryption. The
	// default setting is AWS_OWNED_CMK . For more information about
	// CMKs, see Customer Master Keys (CMKs) (https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys)
	// . When you invoke CreateDeliveryStream or StartDeliveryStreamEncryption with
	// KeyType set to CUSTOMER_MANAGED_CMK, Firehose invokes the Amazon KMS operation
	// CreateGrant (https://docs.aws.amazon.com/kms/latest/APIReference/API_CreateGrant.html)
	// to create a grant that allows the Firehose service to use the customer managed
	// CMK to perform encryption and decryption. Firehose manages that grant. When you
	// invoke StartDeliveryStreamEncryption to change the CMK for a delivery stream
	// that is encrypted with a customer managed CMK, Firehose schedules the grant it
	// had on the old CMK for retirement. You can use a CMK of type
	// CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. If a
	// CreateDeliveryStream or StartDeliveryStreamEncryption operation exceeds this
	// limit, Firehose throws a LimitExceededException . To encrypt your delivery
	// stream, use symmetric CMKs. Firehose doesn't support asymmetric CMKs. For
	// information about symmetric and asymmetric CMKs, see About Symmetric and
	// Asymmetric CMKs (https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-concepts.html)
	// in the Amazon Web Services Key Management Service developer guide.
	//
	// This member is required.
	KeyType KeyType

	// If you set KeyType to CUSTOMER_MANAGED_CMK , you must specify the Amazon
	// Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK ,
	// Firehose uses a service-account CMK.
	KeyARN *string
	// contains filtered or unexported fields
}

Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
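
A sketch of requesting SSE with a customer managed CMK. The KeyType constant names are assumed from the KeyType enum elsewhere in this package, and the key ARN is a placeholder.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// customerManagedSSE requests encryption with a customer managed CMK, which
// requires KeyARN. For an AWS owned CMK, omit KeyARN and use KeyTypeAwsOwnedCmk.
func customerManagedSSE() *types.DeliveryStreamEncryptionConfigurationInput {
	return &types.DeliveryStreamEncryptionConfigurationInput{
		KeyType: types.KeyTypeCustomerManagedCmk,
		KeyARN:  aws.String("arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555"),
	}
}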

type DeliveryStreamEncryptionStatus

type DeliveryStreamEncryptionStatus string
const (
	DeliveryStreamEncryptionStatusEnabled         DeliveryStreamEncryptionStatus = "ENABLED"
	DeliveryStreamEncryptionStatusEnabling        DeliveryStreamEncryptionStatus = "ENABLING"
	DeliveryStreamEncryptionStatusEnablingFailed  DeliveryStreamEncryptionStatus = "ENABLING_FAILED"
	DeliveryStreamEncryptionStatusDisabled        DeliveryStreamEncryptionStatus = "DISABLED"
	DeliveryStreamEncryptionStatusDisabling       DeliveryStreamEncryptionStatus = "DISABLING"
	DeliveryStreamEncryptionStatusDisablingFailed DeliveryStreamEncryptionStatus = "DISABLING_FAILED"
)

Enum values for DeliveryStreamEncryptionStatus

func (DeliveryStreamEncryptionStatus) Values added in v0.29.0

Values returns all known values for DeliveryStreamEncryptionStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DeliveryStreamFailureType

type DeliveryStreamFailureType string
const (
	DeliveryStreamFailureTypeRetireKmsGrantFailed      DeliveryStreamFailureType = "RETIRE_KMS_GRANT_FAILED"
	DeliveryStreamFailureTypeCreateKmsGrantFailed      DeliveryStreamFailureType = "CREATE_KMS_GRANT_FAILED"
	DeliveryStreamFailureTypeKmsAccessDenied           DeliveryStreamFailureType = "KMS_ACCESS_DENIED"
	DeliveryStreamFailureTypeDisabledKmsKey            DeliveryStreamFailureType = "DISABLED_KMS_KEY"
	DeliveryStreamFailureTypeInvalidKmsKey             DeliveryStreamFailureType = "INVALID_KMS_KEY"
	DeliveryStreamFailureTypeKmsKeyNotFound            DeliveryStreamFailureType = "KMS_KEY_NOT_FOUND"
	DeliveryStreamFailureTypeKmsOptInRequired          DeliveryStreamFailureType = "KMS_OPT_IN_REQUIRED"
	DeliveryStreamFailureTypeCreateEniFailed           DeliveryStreamFailureType = "CREATE_ENI_FAILED"
	DeliveryStreamFailureTypeDeleteEniFailed           DeliveryStreamFailureType = "DELETE_ENI_FAILED"
	DeliveryStreamFailureTypeSubnetNotFound            DeliveryStreamFailureType = "SUBNET_NOT_FOUND"
	DeliveryStreamFailureTypeSecurityGroupNotFound     DeliveryStreamFailureType = "SECURITY_GROUP_NOT_FOUND"
	DeliveryStreamFailureTypeEniAccessDenied           DeliveryStreamFailureType = "ENI_ACCESS_DENIED"
	DeliveryStreamFailureTypeSubnetAccessDenied        DeliveryStreamFailureType = "SUBNET_ACCESS_DENIED"
	DeliveryStreamFailureTypeSecurityGroupAccessDenied DeliveryStreamFailureType = "SECURITY_GROUP_ACCESS_DENIED"
	DeliveryStreamFailureTypeUnknownError              DeliveryStreamFailureType = "UNKNOWN_ERROR"
)

Enum values for DeliveryStreamFailureType

func (DeliveryStreamFailureType) Values added in v0.29.0

Values returns all known values for DeliveryStreamFailureType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DeliveryStreamStatus

type DeliveryStreamStatus string
const (
	DeliveryStreamStatusCreating       DeliveryStreamStatus = "CREATING"
	DeliveryStreamStatusCreatingFailed DeliveryStreamStatus = "CREATING_FAILED"
	DeliveryStreamStatusDeleting       DeliveryStreamStatus = "DELETING"
	DeliveryStreamStatusDeletingFailed DeliveryStreamStatus = "DELETING_FAILED"
	DeliveryStreamStatusActive         DeliveryStreamStatus = "ACTIVE"
)

Enum values for DeliveryStreamStatus

func (DeliveryStreamStatus) Values added in v0.29.0

Values returns all known values for DeliveryStreamStatus. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type DeliveryStreamType

type DeliveryStreamType string
const (
	DeliveryStreamTypeDirectPut             DeliveryStreamType = "DirectPut"
	DeliveryStreamTypeKinesisStreamAsSource DeliveryStreamType = "KinesisStreamAsSource"
	DeliveryStreamTypeMSKAsSource           DeliveryStreamType = "MSKAsSource"
)

Enum values for DeliveryStreamType

func (DeliveryStreamType) Values added in v0.29.0

Values returns all known values for DeliveryStreamType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type Deserializer

type Deserializer struct {

	// The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data,
	// which means converting it from the JSON format in preparation for serializing it
	// to the Parquet or ORC format. This is one of two deserializers you can choose,
	// depending on which one offers the functionality you need. The other option is
	// the OpenX SerDe.
	HiveJsonSerDe *HiveJsonSerDe

	// The OpenX SerDe. Used by Firehose for deserializing data, which means
	// converting it from the JSON format in preparation for serializing it to the
	// Parquet or ORC format. This is one of two deserializers you can choose,
	// depending on which one offers the functionality you need. The other option is
	// the native Hive / HCatalog JsonSerDe.
	OpenXJsonSerDe *OpenXJsonSerDe
	// contains filtered or unexported fields
}

The deserializer you want Firehose to use for converting the input data from JSON. Firehose then serializes the data to its final format using the Serializer . Firehose supports two types of deserializers: the Apache Hive JSON SerDe (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-JSON) and the OpenX JSON SerDe (https://github.com/rcongiu/Hive-JSON-Serde) .

type DestinationDescription

type DestinationDescription struct {

	// The ID of the destination.
	//
	// This member is required.
	DestinationId *string

	// The destination in the Serverless offering for Amazon OpenSearch Service.
	AmazonOpenSearchServerlessDestinationDescription *AmazonOpenSearchServerlessDestinationDescription

	// The destination in Amazon OpenSearch Service.
	AmazonopensearchserviceDestinationDescription *AmazonopensearchserviceDestinationDescription

	// The destination in Amazon ES.
	ElasticsearchDestinationDescription *ElasticsearchDestinationDescription

	// The destination in Amazon S3.
	ExtendedS3DestinationDescription *ExtendedS3DestinationDescription

	// Describes the specified HTTP endpoint destination.
	HttpEndpointDestinationDescription *HttpEndpointDestinationDescription

	// The destination in Amazon Redshift.
	RedshiftDestinationDescription *RedshiftDestinationDescription

	// [Deprecated] The destination in Amazon S3.
	S3DestinationDescription *S3DestinationDescription

	// The destination in Snowflake.
	SnowflakeDestinationDescription *SnowflakeDestinationDescription

	// The destination in Splunk.
	SplunkDestinationDescription *SplunkDestinationDescription
	// contains filtered or unexported fields
}

Describes the destination for a delivery stream.

type DocumentIdOptions added in v1.18.0

type DocumentIdOptions struct {

	// When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique
	// document ID for each record based on a unique internal identifier. The generated
	// document ID is stable across multiple delivery attempts, which helps prevent the
	// same record from being indexed multiple times with different document IDs. When
	// the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs
	// in the requests it sends to the Amazon OpenSearch Service. This causes the
	// Amazon OpenSearch Service domain to generate document IDs. In case of multiple
	// delivery attempts, this may cause the same record to be indexed more than once
	// with different document IDs. This option enables write-heavy operations, such as
	// the ingestion of logs and observability data, to consume less resources in the
	// Amazon OpenSearch Service domain, resulting in improved performance.
	//
	// This member is required.
	DefaultDocumentIdFormat DefaultDocumentIdFormat
	// contains filtered or unexported fields
}

Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
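
A one-field sketch choosing service-generated document IDs, using the DefaultDocumentIdFormat constants listed earlier in this package:

package example

import "github.com/aws/aws-sdk-go-v2/service/firehose/types"

// serviceGeneratedIDs lets the OpenSearch Service domain generate document IDs,
// trading possible duplicates on retry for lower indexing overhead.
func serviceGeneratedIDs() *types.DocumentIdOptions {
	return &types.DocumentIdOptions{
		DefaultDocumentIdFormat: types.DefaultDocumentIdFormatNoDocumentId,
	}
}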

type DynamicPartitioningConfiguration added in v1.6.0

type DynamicPartitioningConfiguration struct {

	// Specifies that the dynamic partitioning is enabled for this Firehose delivery
	// stream.
	Enabled *bool

	// The retry behavior in case Firehose is unable to deliver data to an Amazon S3
	// prefix.
	RetryOptions *RetryOptions
	// contains filtered or unexported fields
}

The configuration of the dynamic partitioning mechanism that creates smaller data sets from the streaming data by partitioning it based on partition keys. Currently, dynamic partitioning is only supported for Amazon S3 destinations.
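
A sketch that enables dynamic partitioning with a retry window. The RetryOptions field (DurationInSeconds) is assumed from elsewhere in this package, and 300 seconds is an illustrative value.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// dynamicPartitioning turns the feature on and retries failed delivery to an
// S3 prefix for up to 300 seconds (illustrative value).
func dynamicPartitioning() *types.DynamicPartitioningConfiguration {
	return &types.DynamicPartitioningConfiguration{
		Enabled: aws.Bool(true),
		RetryOptions: &types.RetryOptions{
			DurationInSeconds: aws.Int32(300),
		},
	}
}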

type ElasticsearchBufferingHints

type ElasticsearchBufferingHints struct {

	// Buffer incoming data for the specified period of time, in seconds, before
	// delivering it to the destination. The default value is 300 (5 minutes).
	IntervalInSeconds *int32

	// Buffer incoming data to the specified size, in MBs, before delivering it to the
	// destination. The default value is 5. We recommend setting this parameter to a
	// value greater than the amount of data you typically ingest into the delivery
	// stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the
	// value should be 10 MB or higher.
	SizeInMBs *int32
	// contains filtered or unexported fields
}

Describes the buffering to perform before delivering data to the Amazon ES destination.

type ElasticsearchDestinationConfiguration

type ElasticsearchDestinationConfiguration struct {

	// The Elasticsearch index name.
	//
	// This member is required.
	IndexName *string

	// The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for
	// calling the Amazon ES Configuration API and for indexing documents. For more
	// information, see Grant Firehose Access to an Amazon S3 Destination (https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3)
	// and Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	RoleARN *string

	// The configuration for the backup Amazon S3 location.
	//
	// This member is required.
	S3Configuration *S3DestinationConfiguration

	// The buffering options. If no value is specified, the default values for
	// ElasticsearchBufferingHints are used.
	BufferingHints *ElasticsearchBufferingHints

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the cluster. Specify either this
	// ClusterEndpoint or the DomainARN field.
	ClusterEndpoint *string

	// Indicates the method for setting up document ID. The supported methods are
	// Firehose generated document ID and OpenSearch Service generated document ID.
	DocumentIdOptions *DocumentIdOptions

	// The ARN of the Amazon ES domain. The IAM role must have permissions for
	// DescribeDomain , DescribeDomains , and DescribeDomainConfig after assuming the
	// role specified in RoleARN. For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// . Specify either ClusterEndpoint or DomainARN .
	DomainARN *string

	// The Elasticsearch index rotation period. Index rotation appends a timestamp to
	// the IndexName to facilitate the expiration of old data. For more information,
	// see Index Rotation for the Amazon ES Destination (https://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-index-rotation)
	// . The default value is OneDay .
	IndexRotationPeriod ElasticsearchIndexRotationPeriod

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to Amazon
	// ES. The default value is 300 (5 minutes).
	RetryOptions *ElasticsearchRetryOptions

	// Defines how documents should be delivered to Amazon S3. When it is set to
	// FailedDocumentsOnly , Firehose writes any documents that could not be indexed to
	// the configured Amazon S3 destination, with AmazonOpenSearchService-failed/
	// appended to the key prefix. When set to AllDocuments , Firehose delivers all
	// incoming records to Amazon S3, and also writes failed documents with
	// AmazonOpenSearchService-failed/ appended to the prefix. For more information,
	// see Amazon S3 Backup for the Amazon ES Destination (https://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-s3-backup)
	// . Default value is FailedDocumentsOnly . You can't change this backup mode after
	// you create the delivery stream.
	S3BackupMode ElasticsearchS3BackupMode

	// The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type
	// per index. If you try to specify a new type for an existing index that already
	// has another type, Firehose returns an error during run time. For Elasticsearch
	// 7.x, don't specify a TypeName .
	TypeName *string

	// The details of the VPC of the Amazon ES destination.
	VpcConfiguration *VpcConfiguration
	// contains filtered or unexported fields
}

Describes the configuration of a destination in Amazon ES.
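
A minimal construction sketch with the three required members, resolving the either/or choice between ClusterEndpoint and DomainARN in favor of DomainARN. All ARNs and names are placeholders, and the S3DestinationConfiguration fields are assumed from this package.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// elasticsearchDestination fills the required IndexName, RoleARN, and
// S3Configuration, and identifies the cluster via DomainARN rather than
// ClusterEndpoint (specify one or the other, not both).
func elasticsearchDestination() types.ElasticsearchDestinationConfiguration {
	return types.ElasticsearchDestinationConfiguration{
		IndexName: aws.String("web-logs"),
		RoleARN:   aws.String("arn:aws:iam::111122223333:role/firehose-delivery-role"),
		S3Configuration: &types.S3DestinationConfiguration{
			BucketARN: aws.String("arn:aws:s3:::my-backup-bucket"),
			RoleARN:   aws.String("arn:aws:iam::111122223333:role/firehose-delivery-role"),
		},
		DomainARN:           aws.String("arn:aws:es:us-east-1:111122223333:domain/my-domain"),
		IndexRotationPeriod: types.ElasticsearchIndexRotationPeriodOneDay,
		S3BackupMode:        types.ElasticsearchS3BackupModeFailedDocumentsOnly,
	}
}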

type ElasticsearchDestinationDescription

type ElasticsearchDestinationDescription struct {

	// The buffering options.
	BufferingHints *ElasticsearchBufferingHints

	// The Amazon CloudWatch logging options.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the cluster. Firehose uses either
	// this ClusterEndpoint or the DomainARN field to send data to Amazon ES.
	ClusterEndpoint *string

	// Indicates the method for setting up document ID. The supported methods are
	// Firehose generated document ID and OpenSearch Service generated document ID.
	DocumentIdOptions *DocumentIdOptions

	// The ARN of the Amazon ES domain. For more information, see Amazon Resource
	// Names (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// . Firehose uses either ClusterEndpoint or DomainARN to send data to Amazon ES.
	DomainARN *string

	// The Elasticsearch index name.
	IndexName *string

	// The Elasticsearch index rotation period
	IndexRotationPeriod ElasticsearchIndexRotationPeriod

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The Amazon ES retry options.
	RetryOptions *ElasticsearchRetryOptions

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	RoleARN *string

	// The Amazon S3 backup mode.
	S3BackupMode ElasticsearchS3BackupMode

	// The Amazon S3 destination.
	S3DestinationDescription *S3DestinationDescription

	// The Elasticsearch type name. This applies to Elasticsearch 6.x and lower
	// versions. For Elasticsearch 7.x and OpenSearch Service 1.x, there's no value for
	// TypeName .
	TypeName *string

	// The details of the VPC of the Amazon OpenSearch or the Amazon OpenSearch
	// Serverless destination.
	VpcConfigurationDescription *VpcConfigurationDescription
	// contains filtered or unexported fields
}

The destination description in Amazon ES.

type ElasticsearchDestinationUpdate

type ElasticsearchDestinationUpdate struct {

	// The buffering options. If no value is specified, ElasticsearchBufferingHints
	// object default values are used.
	BufferingHints *ElasticsearchBufferingHints

	// The CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The endpoint to use when communicating with the cluster. Specify either this
	// ClusterEndpoint or the DomainARN field.
	ClusterEndpoint *string

	// Indicates the method for setting up document ID. The supported methods are
	// Firehose generated document ID and OpenSearch Service generated document ID.
	DocumentIdOptions *DocumentIdOptions

	// The ARN of the Amazon ES domain. The IAM role must have permissions for
	// DescribeDomain , DescribeDomains , and DescribeDomainConfig after assuming the
	// IAM role specified in RoleARN . For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// . Specify either ClusterEndpoint or DomainARN .
	DomainARN *string

	// The Elasticsearch index name.
	IndexName *string

	// The Elasticsearch index rotation period. Index rotation appends a timestamp to
	// IndexName to facilitate the expiration of old data. For more information, see
	// Index Rotation for the Amazon ES Destination (https://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-index-rotation)
	// . Default value is OneDay .
	IndexRotationPeriod ElasticsearchIndexRotationPeriod

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to Amazon
	// ES. The default value is 300 (5 minutes).
	RetryOptions *ElasticsearchRetryOptions

	// The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for
	// calling the Amazon ES Configuration API and for indexing documents. For more
	// information, see Grant Firehose Access to an Amazon S3 Destination (https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3)
	// and Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	RoleARN *string

	// The Amazon S3 destination.
	S3Update *S3DestinationUpdate

	// The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type
	// per index. If you try to specify a new type for an existing index that already
	// has another type, Firehose returns an error during runtime. If you upgrade
	// Elasticsearch from 6.x to 7.x and don’t update your delivery stream, Firehose
	// still delivers data to Elasticsearch with the old index name and type name. If
	// you want to update your delivery stream with a new index name, provide an empty
	// string for TypeName .
	TypeName *string
	// contains filtered or unexported fields
}

Describes an update for a destination in Amazon ES.

type ElasticsearchIndexRotationPeriod

type ElasticsearchIndexRotationPeriod string
const (
	ElasticsearchIndexRotationPeriodNoRotation ElasticsearchIndexRotationPeriod = "NoRotation"
	ElasticsearchIndexRotationPeriodOneHour    ElasticsearchIndexRotationPeriod = "OneHour"
	ElasticsearchIndexRotationPeriodOneDay     ElasticsearchIndexRotationPeriod = "OneDay"
	ElasticsearchIndexRotationPeriodOneWeek    ElasticsearchIndexRotationPeriod = "OneWeek"
	ElasticsearchIndexRotationPeriodOneMonth   ElasticsearchIndexRotationPeriod = "OneMonth"
)

Enum values for ElasticsearchIndexRotationPeriod

func (ElasticsearchIndexRotationPeriod) Values added in v0.29.0

Values returns all known values for ElasticsearchIndexRotationPeriod. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ElasticsearchRetryOptions

type ElasticsearchRetryOptions struct {

	// After an initial failure to deliver to Amazon ES, the total amount of time
	// during which Firehose retries delivery (including the first attempt). After this
	// time has elapsed, the failed documents are written to Amazon S3. Default value
	// is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

Configures retry behavior in case Firehose is unable to deliver documents to Amazon ES.

type ElasticsearchS3BackupMode

type ElasticsearchS3BackupMode string
const (
	ElasticsearchS3BackupModeFailedDocumentsOnly ElasticsearchS3BackupMode = "FailedDocumentsOnly"
	ElasticsearchS3BackupModeAllDocuments        ElasticsearchS3BackupMode = "AllDocuments"
)

Enum values for ElasticsearchS3BackupMode

func (ElasticsearchS3BackupMode) Values added in v0.29.0

Values returns all known values for ElasticsearchS3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type EncryptionConfiguration

type EncryptionConfiguration struct {

	// The encryption key.
	KMSEncryptionConfig *KMSEncryptionConfig

	// Specifically override existing encryption information to ensure that no
	// encryption is used.
	NoEncryptionConfig NoEncryptionConfig
	// contains filtered or unexported fields
}

Describes the encryption for a destination in Amazon S3.
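
A sketch of the two mutually exclusive choices. The KMSEncryptionConfig field name (AWSKMSKeyARN) and the NoEncryptionConfig constant are assumed from elsewhere in this package, and the key ARN is a placeholder.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// kmsEncryption encrypts delivered S3 objects with a specific KMS key.
func kmsEncryption() *types.EncryptionConfiguration {
	return &types.EncryptionConfiguration{
		KMSEncryptionConfig: &types.KMSEncryptionConfig{
			AWSKMSKeyARN: aws.String("arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555"),
		},
	}
}

// noEncryption explicitly overrides any existing encryption setting.
func noEncryption() *types.EncryptionConfiguration {
	return &types.EncryptionConfiguration{
		NoEncryptionConfig: types.NoEncryptionConfigNoEncryption,
	}
}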

type ExtendedS3DestinationConfiguration

type ExtendedS3DestinationConfiguration struct {

	// The ARN of the S3 bucket. For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	BucketARN *string

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	RoleARN *string

	// The buffering option.
	BufferingHints *BufferingHints

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The compression format. If no value is specified, the default is UNCOMPRESSED.
	CompressionFormat CompressionFormat

	// The time zone you prefer. UTC is the default.
	CustomTimeZone *string

	// The serializer, deserializer, and schema for converting data from the JSON
	// format to the Parquet or ORC format before writing it to Amazon S3.
	DataFormatConversionConfiguration *DataFormatConversionConfiguration

	// The configuration of the dynamic partitioning mechanism that creates smaller
	// data sets from the streaming data by partitioning it based on partition keys.
	// Currently, dynamic partitioning is only supported for Amazon S3 destinations.
	DynamicPartitioningConfiguration *DynamicPartitioningConfiguration

	// The encryption configuration. If no value is specified, the default is no
	// encryption.
	EncryptionConfiguration *EncryptionConfiguration

	// A prefix that Firehose evaluates and adds to failed records before writing them
	// to S3. This prefix appears immediately following the bucket name. For
	// information about how to specify this prefix, see Custom Prefixes for Amazon S3
	// Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html) .
	ErrorOutputPrefix *string

	// Specify a file extension. It will override the default file extension
	FileExtension *string

	// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
	// Amazon S3 files. You can also specify a custom prefix, as described in Custom
	// Prefixes for Amazon S3 Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html)
	// .
	Prefix *string

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The configuration for backup in Amazon S3.
	S3BackupConfiguration *S3DestinationConfiguration

	// The Amazon S3 backup mode. After you create a delivery stream, you can update
	// it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't
	// update the delivery stream to disable it.
	S3BackupMode S3BackupMode
	// contains filtered or unexported fields
}

Describes the configuration of a destination in Amazon S3.
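
A construction sketch covering the required BucketARN and RoleARN plus a few common optional fields (prefixes, compression, buffering). The bucket, role, and prefix expressions are placeholders.

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// extendedS3Destination sets the two required members and a typical layout:
// a custom prefix, an error prefix, GZIP compression, and explicit buffering.
func extendedS3Destination() types.ExtendedS3DestinationConfiguration {
	return types.ExtendedS3DestinationConfiguration{
		BucketARN:         aws.String("arn:aws:s3:::my-data-bucket"),
		RoleARN:           aws.String("arn:aws:iam::111122223333:role/firehose-delivery-role"),
		Prefix:            aws.String("data/!{timestamp:yyyy/MM/dd}/"),
		ErrorOutputPrefix: aws.String("errors/!{firehose:error-output-type}/"),
		CompressionFormat: types.CompressionFormatGzip,
		BufferingHints: &types.BufferingHints{
			SizeInMBs:         aws.Int32(64),
			IntervalInSeconds: aws.Int32(300),
		},
	}
}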

type ExtendedS3DestinationDescription

type ExtendedS3DestinationDescription struct {

	// The ARN of the S3 bucket. For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	BucketARN *string

	// The buffering option.
	//
	// This member is required.
	BufferingHints *BufferingHints

	// The compression format. If no value is specified, the default is UNCOMPRESSED .
	//
	// This member is required.
	CompressionFormat CompressionFormat

	// The encryption configuration. If no value is specified, the default is no
	// encryption.
	//
	// This member is required.
	EncryptionConfiguration *EncryptionConfiguration

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	RoleARN *string

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The time zone you prefer. UTC is the default.
	CustomTimeZone *string

	// The serializer, deserializer, and schema for converting data from the JSON
	// format to the Parquet or ORC format before writing it to Amazon S3.
	DataFormatConversionConfiguration *DataFormatConversionConfiguration

	// The configuration of the dynamic partitioning mechanism that creates smaller
	// data sets from the streaming data by partitioning it based on partition keys.
	// Currently, dynamic partitioning is only supported for Amazon S3 destinations.
	DynamicPartitioningConfiguration *DynamicPartitioningConfiguration

	// A prefix that Firehose evaluates and adds to failed records before writing them
	// to S3. This prefix appears immediately following the bucket name. For
	// information about how to specify this prefix, see Custom Prefixes for Amazon S3
	// Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html) .
	ErrorOutputPrefix *string

	// Specify a file extension. It will override the default file extension.
	FileExtension *string

	// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
	// Amazon S3 files. You can also specify a custom prefix, as described in Custom
	// Prefixes for Amazon S3 Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html)
	// .
	Prefix *string

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The configuration for backup in Amazon S3.
	S3BackupDescription *S3DestinationDescription

	// The Amazon S3 backup mode.
	S3BackupMode S3BackupMode
	// contains filtered or unexported fields
}

Describes a destination in Amazon S3.

type ExtendedS3DestinationUpdate

type ExtendedS3DestinationUpdate struct {

	// The ARN of the S3 bucket. For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	BucketARN *string

	// The buffering option.
	BufferingHints *BufferingHints

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The compression format. If no value is specified, the default is UNCOMPRESSED .
	CompressionFormat CompressionFormat

	// The time zone you prefer. UTC is the default.
	CustomTimeZone *string

	// The serializer, deserializer, and schema for converting data from the JSON
	// format to the Parquet or ORC format before writing it to Amazon S3.
	DataFormatConversionConfiguration *DataFormatConversionConfiguration

	// The configuration of the dynamic partitioning mechanism that creates smaller
	// data sets from the streaming data by partitioning it based on partition keys.
	// Currently, dynamic partitioning is only supported for Amazon S3 destinations.
	DynamicPartitioningConfiguration *DynamicPartitioningConfiguration

	// The encryption configuration. If no value is specified, the default is no
	// encryption.
	EncryptionConfiguration *EncryptionConfiguration

	// A prefix that Firehose evaluates and adds to failed records before writing them
	// to S3. This prefix appears immediately following the bucket name. For
	// information about how to specify this prefix, see Custom Prefixes for Amazon S3
	// Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html) .
	ErrorOutputPrefix *string

	// Specify a file extension. It will override the default file extension.
	FileExtension *string

	// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
	// Amazon S3 files. You can also specify a custom prefix, as described in Custom
	// Prefixes for Amazon S3 Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html)
	// .
	Prefix *string

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	RoleARN *string

	// You can update a delivery stream to enable Amazon S3 backup if it is disabled.
	// If backup is enabled, you can't update the delivery stream to disable it.
	S3BackupMode S3BackupMode

	// The Amazon S3 destination for backup.
	S3BackupUpdate *S3DestinationUpdate
	// contains filtered or unexported fields
}

Describes an update for a destination in Amazon S3.

type FailureDescription

type FailureDescription struct {

	// A message providing details about the error that caused the failure.
	//
	// This member is required.
	Details *string

	// The type of error that caused the failure.
	//
	// This member is required.
	Type DeliveryStreamFailureType
	// contains filtered or unexported fields
}

Provides details in case one of the following operations fails due to an error related to KMS: CreateDeliveryStream , DeleteDeliveryStream , StartDeliveryStreamEncryption , StopDeliveryStreamEncryption .

type HECEndpointType

type HECEndpointType string
const (
	HECEndpointTypeRaw   HECEndpointType = "Raw"
	HECEndpointTypeEvent HECEndpointType = "Event"
)

Enum values for HECEndpointType

func (HECEndpointType) Values added in v0.29.0

func (HECEndpointType) Values() []HECEndpointType

Values returns all known values for HECEndpointType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type HiveJsonSerDe

type HiveJsonSerDe struct {

	// Indicates how you want Firehose to parse the date and timestamps that may be
	// present in your input data JSON. To specify these format strings, follow the
	// pattern syntax of JodaTime's DateTimeFormat format strings. For more
	// information, see Class DateTimeFormat (https://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html)
	// . You can also use the special value millis to parse timestamps in epoch
	// milliseconds. If you don't specify a format, Firehose uses
	// java.sql.Timestamp::valueOf by default.
	TimestampFormats []string
	// contains filtered or unexported fields
}

The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.

type HttpEndpointBufferingHints

type HttpEndpointBufferingHints struct {

	// Buffer incoming data for the specified period of time, in seconds, before
	// delivering it to the destination. The default value is 300 (5 minutes).
	IntervalInSeconds *int32

	// Buffer incoming data to the specified size, in MBs, before delivering it to the
	// destination. The default value is 5. We recommend setting this parameter to a
	// value greater than the amount of data you typically ingest into the delivery
	// stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the
	// value should be 10 MB or higher.
	SizeInMBs *int32
	// contains filtered or unexported fields
}

Describes the buffering options that can be applied before data is delivered to the HTTP endpoint destination. Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.

type HttpEndpointCommonAttribute

type HttpEndpointCommonAttribute struct {

	// The name of the HTTP endpoint common attribute.
	//
	// This member is required.
	AttributeName *string

	// The value of the HTTP endpoint common attribute.
	//
	// This member is required.
	AttributeValue *string
	// contains filtered or unexported fields
}

Describes the metadata that's delivered to the specified HTTP endpoint destination.

type HttpEndpointConfiguration

type HttpEndpointConfiguration struct {

	// The URL of the HTTP endpoint selected as the destination. If you choose an HTTP
	// endpoint as your destination, review and follow the instructions in the
	// Appendix - HTTP Endpoint Delivery Request and Response Specifications (https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html)
	// .
	//
	// This member is required.
	Url *string

	// The access key required for Kinesis Firehose to authenticate with the HTTP
	// endpoint selected as the destination.
	AccessKey *string

	// The name of the HTTP endpoint selected as the destination.
	Name *string
	// contains filtered or unexported fields
}

Describes the configuration of the HTTP endpoint to which Kinesis Firehose delivers data.

type HttpEndpointDescription

type HttpEndpointDescription struct {

	// The name of the HTTP endpoint selected as the destination.
	Name *string

	// The URL of the HTTP endpoint selected as the destination.
	Url *string
	// contains filtered or unexported fields
}

Describes the HTTP endpoint selected as the destination.

type HttpEndpointDestinationConfiguration

type HttpEndpointDestinationConfiguration struct {

	// The configuration of the HTTP endpoint selected as the destination.
	//
	// This member is required.
	EndpointConfiguration *HttpEndpointConfiguration

	// Describes the configuration of a destination in Amazon S3.
	//
	// This member is required.
	S3Configuration *S3DestinationConfiguration

	// The buffering options that can be used before data is delivered to the
	// specified destination. Firehose treats these options as hints, and it might
	// choose to use more optimal values. The SizeInMBs and IntervalInSeconds
	// parameters are optional. However, if you specify a value for one of them, you
	// must also provide a value for the other.
	BufferingHints *HttpEndpointBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The configuration of the request sent to the HTTP endpoint specified as the
	// destination.
	RequestConfiguration *HttpEndpointRequestConfiguration

	// Describes the retry behavior in case Firehose is unable to deliver data to the
	// specified HTTP endpoint destination, or if it doesn't receive a valid
	// acknowledgment of receipt from the specified HTTP endpoint destination.
	RetryOptions *HttpEndpointRetryOptions

	// Firehose uses this IAM role for all the permissions that the delivery stream
	// needs.
	RoleARN *string

	// Describes the S3 bucket backup options for the data that Firehose delivers to
	// the HTTP endpoint destination. You can back up all documents ( AllData ) or only
	// the documents that Firehose could not deliver to the specified HTTP endpoint
	// destination ( FailedDataOnly ).
	S3BackupMode HttpEndpointS3BackupMode
	// contains filtered or unexported fields
}

Describes the configuration of the HTTP endpoint destination.
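
A minimal sketch of an HTTP endpoint destination that backs up only failed records to Amazon S3; the URL, access key, and ARNs are placeholders:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

func newHTTPEndpointConfig() *types.HttpEndpointDestinationConfiguration {
	return &types.HttpEndpointDestinationConfiguration{
		// Required: the endpoint itself and the S3 backup configuration.
		EndpointConfiguration: &types.HttpEndpointConfiguration{
			Url:       aws.String("https://ingest.example.com/v1/events"),
			Name:      aws.String("example-endpoint"),
			AccessKey: aws.String("EXAMPLE-ACCESS-KEY"),
		},
		S3Configuration: &types.S3DestinationConfiguration{
			BucketARN: aws.String("arn:aws:s3:::example-backup-bucket"),
			RoleARN:   aws.String("arn:aws:iam::111122223333:role/example-firehose-role"),
		},

		// Optional tuning: buffering, retries, and backup scope.
		BufferingHints: &types.HttpEndpointBufferingHints{
			SizeInMBs:         aws.Int32(5),
			IntervalInSeconds: aws.Int32(300),
		},
		RetryOptions: &types.HttpEndpointRetryOptions{
			DurationInSeconds: aws.Int32(300),
		},
		RoleARN:      aws.String("arn:aws:iam::111122223333:role/example-firehose-role"),
		S3BackupMode: types.HttpEndpointS3BackupModeFailedDataOnly,
	}
}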

type HttpEndpointDestinationDescription

type HttpEndpointDestinationDescription struct {

	// Describes buffering options that can be applied to the data before it is
	// delivered to the HTTPS endpoint destination. Firehose treats these options as
	// hints, and it might choose to use more optimal values. The SizeInMBs and
	// IntervalInSeconds parameters are optional. However, if you specify a value for
	// one of them, you must also provide a value for the other.
	BufferingHints *HttpEndpointBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The configuration of the specified HTTP endpoint destination.
	EndpointConfiguration *HttpEndpointDescription

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The configuration of the request sent to the HTTP endpoint specified as the
	// destination.
	RequestConfiguration *HttpEndpointRequestConfiguration

	// Describes the retry behavior in case Firehose is unable to deliver data to the
	// specified HTTP endpoint destination, or if it doesn't receive a valid
	// acknowledgment of receipt from the specified HTTP endpoint destination.
	RetryOptions *HttpEndpointRetryOptions

	// Firehose uses this IAM role for all the permissions that the delivery stream
	// needs.
	RoleARN *string

	// Describes the S3 bucket backup options for the data that Kinesis Firehose
	// delivers to the HTTP endpoint destination. You can back up all documents (
	// AllData ) or only the documents that Firehose could not deliver to the specified
	// HTTP endpoint destination ( FailedDataOnly ).
	S3BackupMode HttpEndpointS3BackupMode

	// Describes a destination in Amazon S3.
	S3DestinationDescription *S3DestinationDescription
	// contains filtered or unexported fields
}

Describes the HTTP endpoint destination.

type HttpEndpointDestinationUpdate

type HttpEndpointDestinationUpdate struct {

	// Describes buffering options that can be applied to the data before it is
	// delivered to the HTTPS endpoint destination. Firehose treats these options as
	// hints, and it might choose to use more optimal values. The SizeInMBs and
	// IntervalInSeconds parameters are optional. However, if you specify a value for
	// one of them, you must also provide a value for the other.
	BufferingHints *HttpEndpointBufferingHints

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// Describes the configuration of the HTTP endpoint destination.
	EndpointConfiguration *HttpEndpointConfiguration

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The configuration of the request sent to the HTTP endpoint specified as the
	// destination.
	RequestConfiguration *HttpEndpointRequestConfiguration

	// Describes the retry behavior in case Firehose is unable to deliver data to the
	// specified HTTP endpoint destination, or if it doesn't receive a valid
	// acknowledgment of receipt from the specified HTTP endpoint destination.
	RetryOptions *HttpEndpointRetryOptions

	// Firehose uses this IAM role for all the permissions that the delivery stream
	// needs.
	RoleARN *string

	// Describes the S3 bucket backup options for the data that Kinesis Firehose
	// delivers to the HTTP endpoint destination. You can back up all documents (
	// AllData ) or only the documents that Firehose could not deliver to the specified
	// HTTP endpoint destination ( FailedDataOnly ).
	S3BackupMode HttpEndpointS3BackupMode

	// Describes an update for a destination in Amazon S3.
	S3Update *S3DestinationUpdate
	// contains filtered or unexported fields
}

Updates the specified HTTP endpoint destination.

type HttpEndpointRequestConfiguration

type HttpEndpointRequestConfiguration struct {

	// Describes the metadata sent to the HTTP endpoint destination.
	CommonAttributes []HttpEndpointCommonAttribute

	// Firehose uses the content encoding to compress the body of a request before
	// sending the request to the destination. For more information, see
	// Content-Encoding (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding)
	// in MDN Web Docs, the official Mozilla documentation.
	ContentEncoding ContentEncoding
	// contains filtered or unexported fields
}

The configuration of the HTTP endpoint request.
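
As a sketch, the request configuration below attaches two placeholder metadata attributes to every request and enables GZIP content encoding (types.ContentEncodingGzip is assumed to be one of the ContentEncoding values defined elsewhere in this package):

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

var exampleRequestConfig = &types.HttpEndpointRequestConfiguration{
	// Static key-value metadata delivered alongside every record batch.
	CommonAttributes: []types.HttpEndpointCommonAttribute{
		{AttributeName: aws.String("environment"), AttributeValue: aws.String("staging")},
		{AttributeName: aws.String("team"), AttributeValue: aws.String("analytics")},
	},
	// Compress request bodies before they are sent to the endpoint.
	ContentEncoding: types.ContentEncodingGzip,
}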

type HttpEndpointRetryOptions

type HttpEndpointRetryOptions struct {

	// The total amount of time that Firehose spends on retries. This duration starts
	// after the initial attempt to send data to the custom destination via HTTPS
	// endpoint fails. It doesn't include the periods during which Firehose waits for
	// acknowledgment from the specified destination after each attempt.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

Describes the retry behavior in case Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.

type HttpEndpointS3BackupMode

type HttpEndpointS3BackupMode string
const (
	HttpEndpointS3BackupModeFailedDataOnly HttpEndpointS3BackupMode = "FailedDataOnly"
	HttpEndpointS3BackupModeAllData        HttpEndpointS3BackupMode = "AllData"
)

Enum values for HttpEndpointS3BackupMode

func (HttpEndpointS3BackupMode) Values added in v0.29.0

func (HttpEndpointS3BackupMode) Values() []HttpEndpointS3BackupMode

Values returns all known values for HttpEndpointS3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type InputFormatConfiguration

type InputFormatConfiguration struct {

	// Specifies which deserializer to use. You can choose either the Apache Hive JSON
	// SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the
	// request.
	Deserializer *Deserializer
	// contains filtered or unexported fields
}

Specifies the deserializer you want to use to convert the format of the input data. This parameter is required if Enabled is set to true.
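
A short sketch that selects the Hive JSON SerDe described above and parses epoch-millisecond timestamps; the Deserializer fields assumed here are the ones defined elsewhere in this package:

package example

import "github.com/aws/aws-sdk-go-v2/service/firehose/types"

var exampleInputFormat = &types.InputFormatConfiguration{
	Deserializer: &types.Deserializer{
		// Leave OpenXJsonSerDe nil; setting both deserializers causes the
		// server to reject the request.
		HiveJsonSerDe: &types.HiveJsonSerDe{
			TimestampFormats: []string{"millis", "yyyy-MM-dd'T'HH:mm:ss"},
		},
	},
}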

type InvalidArgumentException

type InvalidArgumentException struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}

The specified input parameter has a value that is not valid.

func (*InvalidArgumentException) Error

func (e *InvalidArgumentException) Error() string

func (*InvalidArgumentException) ErrorCode

func (e *InvalidArgumentException) ErrorCode() string

func (*InvalidArgumentException) ErrorFault

func (e *InvalidArgumentException) ErrorFault() smithy.ErrorFault

func (*InvalidArgumentException) ErrorMessage

func (e *InvalidArgumentException) ErrorMessage() string

type InvalidKMSResourceException

type InvalidKMSResourceException struct {
	Message *string

	ErrorCodeOverride *string

	Code *string
	// contains filtered or unexported fields
}

Firehose throws this exception when an attempt to put records or to start or stop delivery stream encryption fails. This happens when the KMS service throws one of the following exception types: AccessDeniedException , InvalidStateException , DisabledException , or NotFoundException .

func (*InvalidKMSResourceException) Error

func (e *InvalidKMSResourceException) Error() string

func (*InvalidKMSResourceException) ErrorCode

func (e *InvalidKMSResourceException) ErrorCode() string

func (*InvalidKMSResourceException) ErrorFault

func (e *InvalidKMSResourceException) ErrorFault() smithy.ErrorFault

func (*InvalidKMSResourceException) ErrorMessage

func (e *InvalidKMSResourceException) ErrorMessage() string

type InvalidSourceException added in v1.23.0

type InvalidSourceException struct {
	Message *string

	ErrorCodeOverride *string

	Code *string
	// contains filtered or unexported fields
}

Only requests from CloudWatch Logs are supported when CloudWatch Logs decompression is enabled.

func (*InvalidSourceException) Error added in v1.23.0

func (e *InvalidSourceException) Error() string

func (*InvalidSourceException) ErrorCode added in v1.23.0

func (e *InvalidSourceException) ErrorCode() string

func (*InvalidSourceException) ErrorFault added in v1.23.0

func (e *InvalidSourceException) ErrorFault() smithy.ErrorFault

func (*InvalidSourceException) ErrorMessage added in v1.23.0

func (e *InvalidSourceException) ErrorMessage() string

type KMSEncryptionConfig

type KMSEncryptionConfig struct {

	// The Amazon Resource Name (ARN) of the encryption key. Must belong to the same
	// Amazon Web Services Region as the destination Amazon S3 bucket. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	AWSKMSKeyARN *string
	// contains filtered or unexported fields
}

Describes an encryption key for a destination in Amazon S3.
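
For illustration, two sketches of the EncryptionConfiguration shown earlier: one using a placeholder KMS key in the same Region as the destination bucket, and one explicitly disabling encryption:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// Encrypt delivered objects with a customer-managed KMS key (placeholder ARN).
var withKMS = &types.EncryptionConfiguration{
	KMSEncryptionConfig: &types.KMSEncryptionConfig{
		AWSKMSKeyARN: aws.String("arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"),
	},
}

// Explicitly state that no encryption should be used.
var withoutEncryption = &types.EncryptionConfiguration{
	NoEncryptionConfig: types.NoEncryptionConfigNoEncryption,
}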

type KeyType

type KeyType string
const (
	KeyTypeAwsOwnedCmk        KeyType = "AWS_OWNED_CMK"
	KeyTypeCustomerManagedCmk KeyType = "CUSTOMER_MANAGED_CMK"
)

Enum values for KeyType

func (KeyType) Values added in v0.29.0

func (KeyType) Values() []KeyType

Values returns all known values for KeyType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type KinesisStreamSourceConfiguration

type KinesisStreamSourceConfiguration struct {

	// The ARN of the source Kinesis data stream. For more information, see Amazon
	// Kinesis Data Streams ARN Format (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams)
	// .
	//
	// This member is required.
	KinesisStreamARN *string

	// The ARN of the role that provides access to the source Kinesis data stream. For
	// more information, see Amazon Web Services Identity and Access Management (IAM)
	// ARN Format (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-iam)
	// .
	//
	// This member is required.
	RoleARN *string
	// contains filtered or unexported fields
}

The stream and role Amazon Resource Names (ARNs) for a Kinesis data stream used as the source for a delivery stream.
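
A minimal sketch that points a delivery stream at an existing Kinesis data stream; both ARNs are placeholders:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

var exampleKinesisSource = &types.KinesisStreamSourceConfiguration{
	KinesisStreamARN: aws.String("arn:aws:kinesis:us-east-1:111122223333:stream/example-stream"),
	RoleARN:          aws.String("arn:aws:iam::111122223333:role/example-firehose-role"),
}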

type KinesisStreamSourceDescription

type KinesisStreamSourceDescription struct {

	// Firehose starts retrieving records from the Kinesis data stream starting with
	// this timestamp.
	DeliveryStartTimestamp *time.Time

	// The Amazon Resource Name (ARN) of the source Kinesis data stream. For more
	// information, see Amazon Kinesis Data Streams ARN Format (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kinesis-streams)
	// .
	KinesisStreamARN *string

	// The ARN of the role used by the source Kinesis data stream. For more
	// information, see Amazon Web Services Identity and Access Management (IAM) ARN
	// Format (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-iam)
	// .
	RoleARN *string
	// contains filtered or unexported fields
}

Details about a Kinesis data stream used as the source for a Firehose delivery stream.

type LimitExceededException

type LimitExceededException struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}

You have already reached the limit for a requested resource.

func (*LimitExceededException) Error

func (e *LimitExceededException) Error() string

func (*LimitExceededException) ErrorCode

func (e *LimitExceededException) ErrorCode() string

func (*LimitExceededException) ErrorFault

func (e *LimitExceededException) ErrorFault() smithy.ErrorFault

func (*LimitExceededException) ErrorMessage

func (e *LimitExceededException) ErrorMessage() string

type MSKSourceConfiguration added in v1.19.0

type MSKSourceConfiguration struct {

	// The authentication configuration of the Amazon MSK cluster.
	//
	// This member is required.
	AuthenticationConfiguration *AuthenticationConfiguration

	// The ARN of the Amazon MSK cluster.
	//
	// This member is required.
	MSKClusterARN *string

	// The topic name within the Amazon MSK cluster.
	//
	// This member is required.
	TopicName *string
	// contains filtered or unexported fields
}

The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.

type MSKSourceDescription added in v1.19.0

type MSKSourceDescription struct {

	// The authentication configuration of the Amazon MSK cluster.
	AuthenticationConfiguration *AuthenticationConfiguration

	// Firehose starts retrieving records from the topic within the Amazon MSK cluster
	// starting with this timestamp.
	DeliveryStartTimestamp *time.Time

	// The ARN of the Amazon MSK cluster.
	MSKClusterARN *string

	// The topic name within the Amazon MSK cluster.
	TopicName *string
	// contains filtered or unexported fields
}

Details about the Amazon MSK cluster used as the source for a Firehose delivery stream.

type NoEncryptionConfig

type NoEncryptionConfig string
const (
	NoEncryptionConfigNoEncryption NoEncryptionConfig = "NoEncryption"
)

Enum values for NoEncryptionConfig

func (NoEncryptionConfig) Values added in v0.29.0

func (NoEncryptionConfig) Values() []NoEncryptionConfig

Values returns all known values for NoEncryptionConfig. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type OpenXJsonSerDe

type OpenXJsonSerDe struct {

	// When set to true , which is the default, Firehose converts JSON keys to
	// lowercase before deserializing them.
	CaseInsensitive *bool

	// Maps column names to JSON keys that aren't identical to the column names. This
	// is useful when the JSON contains keys that are Hive keywords. For example,
	// timestamp is a Hive keyword. If you have a JSON key named timestamp , set this
	// parameter to {"ts": "timestamp"} to map this key to a column named ts .
	ColumnToJsonKeyMappings map[string]string

	// When set to true , specifies that the names of the keys include dots and that
	// you want Firehose to replace them with underscores. This is useful because
	// Apache Hive does not allow dots in column names. For example, if the JSON
	// contains a key whose name is "a.b", you can define the column name to be "a_b"
	// when using this option. The default is false .
	ConvertDotsInJsonKeysToUnderscores *bool
	// contains filtered or unexported fields
}

The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
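
A sketch that applies the mappings described above: the Hive keyword timestamp is exposed as a column named ts, and dots in JSON keys are replaced with underscores:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

var exampleOpenXSerDe = &types.OpenXJsonSerDe{
	CaseInsensitive:                    aws.Bool(true),
	ColumnToJsonKeyMappings:            map[string]string{"ts": "timestamp"},
	ConvertDotsInJsonKeysToUnderscores: aws.Bool(true),
}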

type OrcCompression

type OrcCompression string
const (
	OrcCompressionNone   OrcCompression = "NONE"
	OrcCompressionZlib   OrcCompression = "ZLIB"
	OrcCompressionSnappy OrcCompression = "SNAPPY"
)

Enum values for OrcCompression

func (OrcCompression) Values added in v0.29.0

func (OrcCompression) Values() []OrcCompression

Values returns all known values for OrcCompression. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type OrcFormatVersion

type OrcFormatVersion string
const (
	OrcFormatVersionV011 OrcFormatVersion = "V0_11"
	OrcFormatVersionV012 OrcFormatVersion = "V0_12"
)

Enum values for OrcFormatVersion

func (OrcFormatVersion) Values added in v0.29.0

func (OrcFormatVersion) Values() []OrcFormatVersion

Values returns all known values for OrcFormatVersion. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type OrcSerDe

type OrcSerDe struct {

	// The Hadoop Distributed File System (HDFS) block size. This is useful if you
	// intend to copy the data from Amazon S3 to HDFS before querying. The default is
	// 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding
	// calculations.
	BlockSizeBytes *int32

	// The column names for which you want Firehose to create bloom filters. The
	// default is null .
	BloomFilterColumns []string

	// The Bloom filter false positive probability (FPP). The lower the FPP, the
	// bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the
	// maximum is 1.
	BloomFilterFalsePositiveProbability *float64

	// The compression code to use over data blocks. The default is SNAPPY .
	Compression OrcCompression

	// Represents the fraction of the total number of non-null rows. To turn off
	// dictionary encoding, set this fraction to a number that is less than the number
	// of distinct keys in a dictionary. To always use dictionary encoding, set this
	// threshold to 1.
	DictionaryKeyThreshold *float64

	// Set this to true to indicate that you want stripes to be padded to the HDFS
	// block boundaries. This is useful if you intend to copy the data from Amazon S3
	// to HDFS before querying. The default is false .
	EnablePadding *bool

	// The version of the file to write. The possible values are V0_11 and V0_12 . The
	// default is V0_12 .
	FormatVersion OrcFormatVersion

	// A number between 0 and 1 that defines the tolerance for block padding as a
	// decimal fraction of stripe size. The default value is 0.05, which means 5
	// percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB
	// HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum
	// of 3.2 MiB for padding within the 256 MiB block. In such a case, if the
	// available size within the block is more than 3.2 MiB, a new, smaller stripe is
	// inserted to fit within that space. This ensures that no stripe crosses block
	// boundaries and causes remote reads within a node-local task. Firehose ignores
	// this parameter when OrcSerDe$EnablePadding is false .
	PaddingTolerance *float64

	// The number of rows between index entries. The default is 10,000 and the minimum
	// is 1,000.
	RowIndexStride *int32

	// The number of bytes in each stripe. The default is 64 MiB and the minimum is 8
	// MiB.
	StripeSizeBytes *int32
	// contains filtered or unexported fields
}

A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC (https://orc.apache.org/docs/) .

type OutputFormatConfiguration

type OutputFormatConfiguration struct {

	// Specifies which serializer to use. You can choose either the ORC SerDe or the
	// Parquet SerDe. If both are non-null, the server rejects the request.
	Serializer *Serializer
	// contains filtered or unexported fields
}

Specifies the serializer that you want Firehose to use to convert the format of your data before it writes it to Amazon S3. This parameter is required if Enabled is set to true.
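
A sketch that serializes output as Parquet with Snappy compression; OrcSerDe is left nil because setting both serializers causes the server to reject the request:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

var exampleOutputFormat = &types.OutputFormatConfiguration{
	Serializer: &types.Serializer{
		ParquetSerDe: &types.ParquetSerDe{
			Compression:   types.ParquetCompressionSnappy,
			WriterVersion: types.ParquetWriterVersionV1,
			PageSizeBytes: aws.Int32(1024 * 1024), // 1 MiB, the documented default
		},
	},
}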

type ParquetCompression

type ParquetCompression string
const (
	ParquetCompressionUncompressed ParquetCompression = "UNCOMPRESSED"
	ParquetCompressionGzip         ParquetCompression = "GZIP"
	ParquetCompressionSnappy       ParquetCompression = "SNAPPY"
)

Enum values for ParquetCompression

func (ParquetCompression) Values added in v0.29.0

func (ParquetCompression) Values() []ParquetCompression

Values returns all known values for ParquetCompression. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ParquetSerDe

type ParquetSerDe struct {

	// The Hadoop Distributed File System (HDFS) block size. This is useful if you
	// intend to copy the data from Amazon S3 to HDFS before querying. The default is
	// 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding
	// calculations.
	BlockSizeBytes *int32

	// The compression code to use over data blocks. The possible values are
	// UNCOMPRESSED , SNAPPY , and GZIP , with the default being SNAPPY . Use SNAPPY
	// for higher decompression speed. Use GZIP if the compression ratio is more
	// important than speed.
	Compression ParquetCompression

	// Indicates whether to enable dictionary compression.
	EnableDictionaryCompression *bool

	// The maximum amount of padding to apply. This is useful if you intend to copy
	// the data from Amazon S3 to HDFS before querying. The default is 0.
	MaxPaddingBytes *int32

	// The Parquet page size. Column chunks are divided into pages. A page is
	// conceptually an indivisible unit (in terms of compression and encoding). The
	// minimum value is 64 KiB and the default is 1 MiB.
	PageSizeBytes *int32

	// Indicates the version of row format to output. The possible values are V1 and V2
	// . The default is V1 .
	WriterVersion ParquetWriterVersion
	// contains filtered or unexported fields
}

A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet (https://parquet.apache.org/documentation/latest/) .

type ParquetWriterVersion

type ParquetWriterVersion string
const (
	ParquetWriterVersionV1 ParquetWriterVersion = "V1"
	ParquetWriterVersionV2 ParquetWriterVersion = "V2"
)

Enum values for ParquetWriterVersion

func (ParquetWriterVersion) Values added in v0.29.0

func (ParquetWriterVersion) Values() []ParquetWriterVersion

Values returns all known values for ParquetWriterVersion. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ProcessingConfiguration

type ProcessingConfiguration struct {

	// Enables or disables data processing.
	Enabled *bool

	// The data processors.
	Processors []Processor
	// contains filtered or unexported fields
}

Describes a data processing configuration.

type Processor

type Processor struct {

	// The type of processor.
	//
	// This member is required.
	Type ProcessorType

	// The processor parameters.
	Parameters []ProcessorParameter
	// contains filtered or unexported fields
}

Describes a data processor. If you want to add a new line delimiter between records in objects that are delivered to Amazon S3, choose AppendDelimiterToRecord as a processor type. You don’t have to put a processor parameter when you select AppendDelimiterToRecord .

type ProcessorParameter

type ProcessorParameter struct {

	// The name of the parameter. Currently the following default values are
	// supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds . The
	// BufferSizeInMBs value ranges from 0.2 MB up to 3 MB. The default buffering
	// hint is 1 MB for all destinations except Splunk, where the default buffering
	// hint is 256 KB.
	//
	// This member is required.
	ParameterName ProcessorParameterName

	// The parameter value.
	//
	// This member is required.
	ParameterValue *string
	// contains filtered or unexported fields
}

Describes the processor parameter.
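
A sketch that ties the three types above together: a Lambda transformation with a placeholder function ARN and explicit retry and buffering parameters, using names from the ProcessorParameterName values listed below:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

var exampleProcessing = &types.ProcessingConfiguration{
	Enabled: aws.Bool(true),
	Processors: []types.Processor{
		{
			Type: types.ProcessorTypeLambda,
			Parameters: []types.ProcessorParameter{
				// The Lambda function that transforms each batch of records.
				{
					ParameterName:  types.ProcessorParameterNameLambdaArn,
					ParameterValue: aws.String("arn:aws:lambda:us-east-1:111122223333:function:example-transform"),
				},
				// Retry the invocation up to three times.
				{
					ParameterName:  types.ProcessorParameterNameLambdaNumberOfRetries,
					ParameterValue: aws.String("3"),
				},
				// Buffer for up to 60 seconds before invoking the function.
				{
					ParameterName:  types.ProcessorParameterNameBufferIntervalInSeconds,
					ParameterValue: aws.String("60"),
				},
			},
		},
	},
}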

type ProcessorParameterName

type ProcessorParameterName string
const (
	ProcessorParameterNameLambdaArn               ProcessorParameterName = "LambdaArn"
	ProcessorParameterNameLambdaNumberOfRetries   ProcessorParameterName = "NumberOfRetries"
	ProcessorParameterNameMetadataExtractionQuery ProcessorParameterName = "MetadataExtractionQuery"
	ProcessorParameterNameJsonParsingEngine       ProcessorParameterName = "JsonParsingEngine"
	ProcessorParameterNameRoleArn                 ProcessorParameterName = "RoleArn"
	ProcessorParameterNameBufferSizeInMb          ProcessorParameterName = "BufferSizeInMBs"
	ProcessorParameterNameBufferIntervalInSeconds ProcessorParameterName = "BufferIntervalInSeconds"
	ProcessorParameterNameSubRecordType           ProcessorParameterName = "SubRecordType"
	ProcessorParameterNameDelimiter               ProcessorParameterName = "Delimiter"
	ProcessorParameterNameCompressionFormat       ProcessorParameterName = "CompressionFormat"
	ProcessorParameterNameDataMessageExtraction   ProcessorParameterName = "DataMessageExtraction"
)

Enum values for ProcessorParameterName

func (ProcessorParameterName) Values added in v0.29.0

func (ProcessorParameterName) Values() []ProcessorParameterName

Values returns all known values for ProcessorParameterName. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ProcessorType

type ProcessorType string
const (
	ProcessorTypeRecordDeAggregation     ProcessorType = "RecordDeAggregation"
	ProcessorTypeDecompression           ProcessorType = "Decompression"
	ProcessorTypeCloudWatchLogProcessing ProcessorType = "CloudWatchLogProcessing"
	ProcessorTypeLambda                  ProcessorType = "Lambda"
	ProcessorTypeMetadataExtraction      ProcessorType = "MetadataExtraction"
	ProcessorTypeAppendDelimiterToRecord ProcessorType = "AppendDelimiterToRecord"
)

Enum values for ProcessorType

func (ProcessorType) Values added in v0.29.0

func (ProcessorType) Values() []ProcessorType

Values returns all known values for ProcessorType. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type PutRecordBatchResponseEntry

type PutRecordBatchResponseEntry struct {

	// The error code for an individual record result.
	ErrorCode *string

	// The error message for an individual record result.
	ErrorMessage *string

	// The ID of the record.
	RecordId *string
	// contains filtered or unexported fields
}

Contains the result for an individual record from a PutRecordBatch request. If the record is successfully added to your delivery stream, it receives a record ID. If the record fails to be added to your delivery stream, the result includes an error code and an error message.
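
A small helper, sketched under the assumption that the caller already has the response entries from a PutRecordBatch call, that collects the entries which failed and therefore need to be resent:

package example

import "github.com/aws/aws-sdk-go-v2/service/firehose/types"

// failedRecords returns the entries that carry an error code instead of a
// record ID.
func failedRecords(entries []types.PutRecordBatchResponseEntry) []types.PutRecordBatchResponseEntry {
	var failed []types.PutRecordBatchResponseEntry
	for _, e := range entries {
		if e.ErrorCode != nil {
			failed = append(failed, e)
		}
	}
	return failed
}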

type Record

type Record struct {

	// The data blob, which is base64-encoded when the blob is serialized. The maximum
	// size of the data blob, before base64-encoding, is 1,000 KiB.
	//
	// This member is required.
	Data []byte
	// contains filtered or unexported fields
}

The unit of data in a delivery stream.
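
For example, a record carrying a small JSON payload; the trailing newline is one way to delimit records in delivered objects (the AppendDelimiterToRecord processor is an alternative), and the SDK base64-encodes Data when the request is serialized:

package example

import "github.com/aws/aws-sdk-go-v2/service/firehose/types"

var exampleRecord = types.Record{
	Data: []byte(`{"ticker":"EXMP","price":12.34}` + "\n"),
}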

type RedshiftDestinationConfiguration

type RedshiftDestinationConfiguration struct {

	// The database connection string.
	//
	// This member is required.
	ClusterJDBCURL *string

	// The COPY command.
	//
	// This member is required.
	CopyCommand *CopyCommand

	// The user password.
	//
	// This member is required.
	Password *string

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	RoleARN *string

	// The configuration for the intermediate Amazon S3 location from which Amazon
	// Redshift obtains data. Restrictions are described in the topic for
	// CreateDeliveryStream . The compression formats SNAPPY or ZIP cannot be
	// specified in RedshiftDestinationConfiguration.S3Configuration because the
	// Amazon Redshift COPY operation that reads from the S3 bucket doesn't support
	// these compression formats.
	//
	// This member is required.
	S3Configuration *S3DestinationConfiguration

	// The name of the user.
	//
	// This member is required.
	Username *string

	// The CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to Amazon
	// Redshift. Default value is 3600 (60 minutes).
	RetryOptions *RedshiftRetryOptions

	// The configuration for backup in Amazon S3.
	S3BackupConfiguration *S3DestinationConfiguration

	// The Amazon S3 backup mode. After you create a delivery stream, you can update
	// it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't
	// update the delivery stream to disable it.
	S3BackupMode RedshiftS3BackupMode
	// contains filtered or unexported fields
}

Describes the configuration of a destination in Amazon Redshift.
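
A minimal sketch of a Redshift destination fed from an intermediate S3 location. Connection details and ARNs are placeholders, and the CopyCommand fields used here (DataTableName, CopyOptions) are assumed from the CopyCommand type defined elsewhere in this package:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

func newRedshiftConfig() *types.RedshiftDestinationConfiguration {
	return &types.RedshiftDestinationConfiguration{
		ClusterJDBCURL: aws.String("jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev"),
		CopyCommand: &types.CopyCommand{
			DataTableName: aws.String("firehose_events"),
			CopyOptions:   aws.String("JSON 'auto'"),
		},
		Username: aws.String("firehose_user"),
		Password: aws.String("example-password"), // placeholder; prefer a secrets manager
		RoleARN:  aws.String("arn:aws:iam::111122223333:role/example-firehose-role"),

		// The intermediate bucket Redshift copies from; leave the compression
		// format at its UNCOMPRESSED default, since SNAPPY and ZIP are not
		// supported by the COPY operation.
		S3Configuration: &types.S3DestinationConfiguration{
			BucketARN: aws.String("arn:aws:s3:::example-staging-bucket"),
			RoleARN:   aws.String("arn:aws:iam::111122223333:role/example-firehose-role"),
		},
		S3BackupMode: types.RedshiftS3BackupModeDisabled,
	}
}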

type RedshiftDestinationDescription

type RedshiftDestinationDescription struct {

	// The database connection string.
	//
	// This member is required.
	ClusterJDBCURL *string

	// The COPY command.
	//
	// This member is required.
	CopyCommand *CopyCommand

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	RoleARN *string

	// The Amazon S3 destination.
	//
	// This member is required.
	S3DestinationDescription *S3DestinationDescription

	// The name of the user.
	//
	// This member is required.
	Username *string

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to Amazon
	// Redshift. Default value is 3600 (60 minutes).
	RetryOptions *RedshiftRetryOptions

	// The configuration for backup in Amazon S3.
	S3BackupDescription *S3DestinationDescription

	// The Amazon S3 backup mode.
	S3BackupMode RedshiftS3BackupMode
	// contains filtered or unexported fields
}

Describes a destination in Amazon Redshift.

type RedshiftDestinationUpdate

type RedshiftDestinationUpdate struct {

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The database connection string.
	ClusterJDBCURL *string

	// The COPY command.
	CopyCommand *CopyCommand

	// The user password.
	Password *string

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver documents to Amazon
	// Redshift. Default value is 3600 (60 minutes).
	RetryOptions *RedshiftRetryOptions

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	RoleARN *string

	// You can update a delivery stream to enable Amazon S3 backup if it is disabled.
	// If backup is enabled, you can't update the delivery stream to disable it.
	S3BackupMode RedshiftS3BackupMode

	// The Amazon S3 destination for backup.
	S3BackupUpdate *S3DestinationUpdate

	// The Amazon S3 destination. The compression formats SNAPPY or ZIP cannot be
	// specified in RedshiftDestinationUpdate.S3Update because the Amazon Redshift COPY
	// operation that reads from the S3 bucket doesn't support these compression
	// formats.
	S3Update *S3DestinationUpdate

	// The name of the user.
	Username *string
	// contains filtered or unexported fields
}

Describes an update for a destination in Amazon Redshift.

type RedshiftRetryOptions

type RedshiftRetryOptions struct {

	// The length of time during which Firehose retries delivery after a failure,
	// starting from the initial request and including the first attempt. The default
	// value is 3600 seconds (60 minutes). Firehose does not retry if the value of
	// DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than
	// the current value.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

Configures retry behavior in case Firehose is unable to deliver documents to Amazon Redshift.

type RedshiftS3BackupMode

type RedshiftS3BackupMode string
const (
	RedshiftS3BackupModeDisabled RedshiftS3BackupMode = "Disabled"
	RedshiftS3BackupModeEnabled  RedshiftS3BackupMode = "Enabled"
)

Enum values for RedshiftS3BackupMode

func (RedshiftS3BackupMode) Values added in v0.29.0

func (RedshiftS3BackupMode) Values() []RedshiftS3BackupMode

Values returns all known values for RedshiftS3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type ResourceInUseException

type ResourceInUseException struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}

The resource is already in use and not available for this operation.

func (*ResourceInUseException) Error

func (e *ResourceInUseException) Error() string

func (*ResourceInUseException) ErrorCode

func (e *ResourceInUseException) ErrorCode() string

func (*ResourceInUseException) ErrorFault

func (e *ResourceInUseException) ErrorFault() smithy.ErrorFault

func (*ResourceInUseException) ErrorMessage

func (e *ResourceInUseException) ErrorMessage() string

type ResourceNotFoundException

type ResourceNotFoundException struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}

The specified resource could not be found.

func (*ResourceNotFoundException) Error

func (e *ResourceNotFoundException) Error() string

func (*ResourceNotFoundException) ErrorCode

func (e *ResourceNotFoundException) ErrorCode() string

func (*ResourceNotFoundException) ErrorFault

func (e *ResourceNotFoundException) ErrorFault() smithy.ErrorFault

func (*ResourceNotFoundException) ErrorMessage

func (e *ResourceNotFoundException) ErrorMessage() string

type RetryOptions added in v1.6.0

type RetryOptions struct {

	// The period of time during which Firehose retries to deliver data to the
	// specified Amazon S3 prefix.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

The retry behavior in case Firehose is unable to deliver data to an Amazon S3 prefix.

type S3BackupMode

type S3BackupMode string
const (
	S3BackupModeDisabled S3BackupMode = "Disabled"
	S3BackupModeEnabled  S3BackupMode = "Enabled"
)

Enum values for S3BackupMode

func (S3BackupMode) Values added in v0.29.0

func (S3BackupMode) Values() []S3BackupMode

Values returns all known values for S3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type S3DestinationConfiguration

type S3DestinationConfiguration struct {

	// The ARN of the S3 bucket. For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	BucketARN *string

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	RoleARN *string

	// The buffering option. If no value is specified, BufferingHints object default
	// values are used.
	BufferingHints *BufferingHints

	// The CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The compression format. If no value is specified, the default is UNCOMPRESSED .
	// The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift
	// destinations because they are not supported by the Amazon Redshift COPY
	// operation that reads from the S3 bucket.
	CompressionFormat CompressionFormat

	// The encryption configuration. If no value is specified, the default is no
	// encryption.
	EncryptionConfiguration *EncryptionConfiguration

	// A prefix that Firehose evaluates and adds to failed records before writing them
	// to S3. This prefix appears immediately following the bucket name. For
	// information about how to specify this prefix, see Custom Prefixes for Amazon S3
	// Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html) .
	ErrorOutputPrefix *string

	// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
	// Amazon S3 files. You can also specify a custom prefix, as described in Custom
	// Prefixes for Amazon S3 Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html)
	// .
	Prefix *string
	// contains filtered or unexported fields
}

Describes the configuration of a destination in Amazon S3.

type S3DestinationDescription

type S3DestinationDescription struct {

	// The ARN of the S3 bucket. For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	BucketARN *string

	// The buffering option. If no value is specified, BufferingHints object default
	// values are used.
	//
	// This member is required.
	BufferingHints *BufferingHints

	// The compression format. If no value is specified, the default is UNCOMPRESSED .
	//
	// This member is required.
	CompressionFormat CompressionFormat

	// The encryption configuration. If no value is specified, the default is no
	// encryption.
	//
	// This member is required.
	EncryptionConfiguration *EncryptionConfiguration

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	//
	// This member is required.
	RoleARN *string

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// A prefix that Firehose evaluates and adds to failed records before writing them
	// to S3. This prefix appears immediately following the bucket name. For
	// information about how to specify this prefix, see Custom Prefixes for Amazon S3
	// Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html) .
	ErrorOutputPrefix *string

	// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
	// Amazon S3 files. You can also specify a custom prefix, as described in Custom
	// Prefixes for Amazon S3 Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html)
	// .
	Prefix *string
	// contains filtered or unexported fields
}

Describes a destination in Amazon S3.

type S3DestinationUpdate

type S3DestinationUpdate struct {

	// The ARN of the S3 bucket. For more information, see Amazon Resource Names
	// (ARNs) and Amazon Web Services Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	BucketARN *string

	// The buffering option. If no value is specified, BufferingHints object default
	// values are used.
	BufferingHints *BufferingHints

	// The CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The compression format. If no value is specified, the default is UNCOMPRESSED .
	// The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift
	// destinations because they are not supported by the Amazon Redshift COPY
	// operation that reads from the S3 bucket.
	CompressionFormat CompressionFormat

	// The encryption configuration. If no value is specified, the default is no
	// encryption.
	EncryptionConfiguration *EncryptionConfiguration

	// A prefix that Firehose evaluates and adds to failed records before writing them
	// to S3. This prefix appears immediately following the bucket name. For
	// information about how to specify this prefix, see Custom Prefixes for Amazon S3
	// Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html) .
	ErrorOutputPrefix *string

	// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
	// Amazon S3 files. You can also specify a custom prefix, as described in Custom
	// Prefixes for Amazon S3 Objects (https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html)
	// .
	Prefix *string

	// The Amazon Resource Name (ARN) of the Amazon Web Services credentials. For more
	// information, see Amazon Resource Names (ARNs) and Amazon Web Services Service
	// Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
	// .
	RoleARN *string
	// contains filtered or unexported fields
}

Describes an update for a destination in Amazon S3.

type SchemaConfiguration

type SchemaConfiguration struct {

	// The ID of the Amazon Web Services Glue Data Catalog. If you don't supply this,
	// the Amazon Web Services account ID is used by default.
	CatalogId *string

	// Specifies the name of the Amazon Web Services Glue database that contains the
	// schema for the output data. If the SchemaConfiguration request parameter is
	// used as part of invoking the CreateDeliveryStream API, then the DatabaseName
	// property is required and its value must be specified.
	DatabaseName *string

	// If you don't specify an Amazon Web Services Region, the default is the current
	// Region.
	Region *string

	// The role that Firehose can use to access Amazon Web Services Glue. This role
	// must be in the same account you use for Firehose. Cross-account roles aren't
	// allowed. If the SchemaConfiguration request parameter is used as part of
	// invoking the CreateDeliveryStream API, then the RoleARN property is required
	// and its value must be specified.
	RoleARN *string

	// Specifies the Amazon Web Services Glue table that contains the column
	// information that constitutes your data schema. If the SchemaConfiguration
	// request parameter is used as part of invoking the CreateDeliveryStream API,
	// then the TableName property is required and its value must be specified.
	TableName *string

	// Specifies the table version for the output data schema. If you don't specify
	// this version ID, or if you set it to LATEST , Firehose uses the most recent
	// version. This means that any updates to the table are automatically picked up.
	VersionId *string
	// contains filtered or unexported fields
}

Specifies the schema to which you want Firehose to configure your data before it writes it to Amazon S3. This parameter is required if Enabled is set to true.
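
A sketch that points format conversion at a Glue table holding the output schema; the database, table, Region, and role values are placeholders:

package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

var exampleSchema = &types.SchemaConfiguration{
	DatabaseName: aws.String("example_db"),
	TableName:    aws.String("example_events"),
	Region:       aws.String("us-east-1"),
	RoleARN:      aws.String("arn:aws:iam::111122223333:role/example-firehose-glue-role"),
	// LATEST tracks the most recent table version automatically.
	VersionId: aws.String("LATEST"),
}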

type Serializer

type Serializer struct {

	// A serializer to use for converting data to the ORC format before storing it in
	// Amazon S3. For more information, see Apache ORC (https://orc.apache.org/docs/) .
	OrcSerDe *OrcSerDe

	// A serializer to use for converting data to the Parquet format before storing it
	// in Amazon S3. For more information, see Apache Parquet (https://parquet.apache.org/documentation/latest/)
	// .
	ParquetSerDe *ParquetSerDe
	// contains filtered or unexported fields
}

The serializer that you want Firehose to use to convert data to the target format before writing it to Amazon S3. Firehose supports two types of serializers: the ORC SerDe (https://hive.apache.org/javadocs/r1.2.2/api/org/apache/hadoop/hive/ql/io/orc/OrcSerde.html) and the Parquet SerDe (https://hive.apache.org/javadocs/r1.2.2/api/org/apache/hadoop/hive/ql/io/parquet/serde/ParquetHiveSerDe.html) .

type ServiceUnavailableException

type ServiceUnavailableException struct {
	Message *string

	ErrorCodeOverride *string
	// contains filtered or unexported fields
}

The service is unavailable. Back off and retry the operation. If you continue to see the exception, throughput limits for the delivery stream may have been exceeded. For more information about limits and how to request an increase, see Amazon Firehose Limits (https://docs.aws.amazon.com/firehose/latest/dev/limits.html) .
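
A hedged sketch of handling this exception with exponential backoff; the send callback stands in for a real client operation such as PutRecordBatch:

package example

import (
	"context"
	"errors"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/firehose/types"
)

// retryOnUnavailable retries only when the error is a
// ServiceUnavailableException, backing off between attempts.
func retryOnUnavailable(ctx context.Context, send func(context.Context) error) error {
	backoff := time.Second
	for attempt := 0; attempt < 5; attempt++ {
		err := send(ctx)
		if err == nil {
			return nil
		}
		var unavailable *types.ServiceUnavailableException
		if !errors.As(err, &unavailable) {
			return err // not an availability problem; don't retry
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return errors.New("firehose: service unavailable after retries")
}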

func (*ServiceUnavailableException) Error

func (e *ServiceUnavailableException) Error() string

func (*ServiceUnavailableException) ErrorCode

func (e *ServiceUnavailableException) ErrorCode() string

func (*ServiceUnavailableException) ErrorFault

func (e *ServiceUnavailableException) ErrorFault() smithy.ErrorFault

func (*ServiceUnavailableException) ErrorMessage

func (e *ServiceUnavailableException) ErrorMessage() string

type SnowflakeDataLoadingOption added in v1.24.0

type SnowflakeDataLoadingOption string
const (
	SnowflakeDataLoadingOptionJsonMapping                      SnowflakeDataLoadingOption = "JSON_MAPPING"
	SnowflakeDataLoadingOptionVariantContentMapping            SnowflakeDataLoadingOption = "VARIANT_CONTENT_MAPPING"
	SnowflakeDataLoadingOptionVariantContentAndMetadataMapping SnowflakeDataLoadingOption = "VARIANT_CONTENT_AND_METADATA_MAPPING"
)

Enum values for SnowflakeDataLoadingOption

func (SnowflakeDataLoadingOption) Values added in v1.24.0

func (SnowflakeDataLoadingOption) Values() []SnowflakeDataLoadingOption

Values returns all known values for SnowflakeDataLoadingOption. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type SnowflakeDestinationConfiguration added in v1.24.0

type SnowflakeDestinationConfiguration struct {

	// URL for accessing your Snowflake account. This URL must include your account
	// identifier (https://docs.snowflake.com/en/user-guide/admin-account-identifier) .
	// Note that the protocol (https://) and port number are optional.
	//
	// This member is required.
	AccountUrl *string

	// All data in Snowflake is maintained in databases.
	//
	// This member is required.
	Database *string

	// The private key used to encrypt your Snowflake client. For information, see
	// Using Key Pair Authentication & Key Rotation (https://docs.snowflake.com/en/user-guide/data-load-snowpipe-streaming-configuration#using-key-pair-authentication-key-rotation)
	// .
	//
	// This member is required.
	PrivateKey *string

	// The Amazon Resource Name (ARN) of the Snowflake role
	//
	// This member is required.
	RoleARN *string

	// Describes the configuration of a destination in Amazon S3.
	//
	// This member is required.
	S3Configuration *S3DestinationConfiguration

	// Each database consists of one or more schemas, which are logical groupings of
	// database objects, such as tables and views
	//
	// This member is required.
	Schema *string

	// All data in Snowflake is stored in database tables, logically structured as
	// collections of columns and rows.
	//
	// This member is required.
	Table *string

	// User login name for the Snowflake account.
	//
	// This member is required.
	User *string

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The name of the record content column
	ContentColumnName *string

	// Choose to load JSON keys mapped to table column names or choose to split the
	// JSON payload where content is mapped to a record content column and source
	// metadata is mapped to a record metadata column.
	DataLoadingOption SnowflakeDataLoadingOption

	// Passphrase to decrypt the private key when the key is encrypted. For
	// information, see Using Key Pair Authentication & Key Rotation (https://docs.snowflake.com/en/user-guide/data-load-snowpipe-streaming-configuration#using-key-pair-authentication-key-rotation)
	// .
	KeyPassphrase *string

	// The name of the record metadata column
	MetaDataColumnName *string

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The time period during which Firehose retries sending data to Snowflake.
	RetryOptions *SnowflakeRetryOptions

	// Choose an S3 backup mode
	S3BackupMode SnowflakeS3BackupMode

	// Optionally configure a Snowflake role. Otherwise the default user role will be
	// used.
	SnowflakeRoleConfiguration *SnowflakeRoleConfiguration

	// The VPCE ID for Firehose to privately connect with Snowflake. The ID format is
	// com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon
	// PrivateLink & Snowflake (https://docs.snowflake.com/en/user-guide/admin-security-privatelink)
	SnowflakeVpcConfiguration *SnowflakeVpcConfiguration
	// contains filtered or unexported fields
}

Configures a Snowflake destination.
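A minimal sketch with only the required members set; every URL, name, ARN, and key value here is a placeholder (privateKeyPEM is a hypothetical variable holding the key material). In practice this struct is passed as the Snowflake destination configuration of a CreateDeliveryStream request:

snowflakeCfg := types.SnowflakeDestinationConfiguration{
	AccountUrl: aws.String("xy12345.us-east-1.snowflakecomputing.com"), // account identifier URL
	Database:   aws.String("FIREHOSE_DB"),
	Schema:     aws.String("PUBLIC"),
	Table:      aws.String("EVENTS"),
	User:       aws.String("FIREHOSE_USER"),
	PrivateKey: aws.String(privateKeyPEM), // key material for key pair authentication
	RoleARN:    aws.String("arn:aws:iam::123456789012:role/firehose-snowflake"),
	S3Configuration: &types.S3DestinationConfiguration{
		BucketARN: aws.String("arn:aws:s3:::my-backup-bucket"),
		RoleARN:   aws.String("arn:aws:iam::123456789012:role/firehose-s3"),
	},
	// Optional: back up only the records that fail to load.
	S3BackupMode: types.SnowflakeS3BackupModeFailedDataOnly,
}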

type SnowflakeDestinationDescription added in v1.24.0

type SnowflakeDestinationDescription struct {

	// URL for accessing your Snowflake account. This URL must include your account
	// identifier (https://docs.snowflake.com/en/user-guide/admin-account-identifier) .
	// Note that the protocol (https://) and port number are optional.
	AccountUrl *string

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The name of the record content column
	ContentColumnName *string

	// Choose to load JSON keys mapped to table column names or choose to split the
	// JSON payload where content is mapped to a record content column and source
	// metadata is mapped to a record metadata column.
	DataLoadingOption SnowflakeDataLoadingOption

	// All data in Snowflake is maintained in databases.
	Database *string

	// The name of the record metadata column
	MetaDataColumnName *string

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The time period during which Firehose retries sending data to Snowflake.
	RetryOptions *SnowflakeRetryOptions

	// The Amazon Resource Name (ARN) of the Snowflake role
	RoleARN *string

	// Choose an S3 backup mode
	S3BackupMode SnowflakeS3BackupMode

	// Describes a destination in Amazon S3.
	S3DestinationDescription *S3DestinationDescription

	// Each database consists of one or more schemas, which are logical groupings of
	// database objects, such as tables and views
	Schema *string

	// Optionally configure a Snowflake role. Otherwise the default user role will be
	// used.
	SnowflakeRoleConfiguration *SnowflakeRoleConfiguration

	// The VPCE ID for Firehose to privately connect with Snowflake. The ID format is
	// com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon
	// PrivateLink & Snowflake (https://docs.snowflake.com/en/user-guide/admin-security-privatelink)
	SnowflakeVpcConfiguration *SnowflakeVpcConfiguration

	// All data in Snowflake is stored in database tables, logically structured as
	// collections of columns and rows.
	Table *string

	// User login name for the Snowflake account.
	User *string
	// contains filtered or unexported fields
}

Describes a destination in Snowflake.

type SnowflakeDestinationUpdate added in v1.24.0

type SnowflakeDestinationUpdate struct {

	// URL for accessing your Snowflake account. This URL must include your account
	// identifier (https://docs.snowflake.com/en/user-guide/admin-account-identifier) .
	// Note that the protocol (https://) and port number are optional.
	AccountUrl *string

	// Describes the Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The name of the record content column
	ContentColumnName *string

	// Choose to load JSON keys mapped to table column names or choose to split the
	// JSON payload where content is mapped to a record content column and source
	// metadata is mapped to a record metadata column.
	DataLoadingOption SnowflakeDataLoadingOption

	// All data in Snowflake is maintained in databases.
	Database *string

	// Passphrase to decrypt the private key when the key is encrypted. For
	// information, see Using Key Pair Authentication & Key Rotation (https://docs.snowflake.com/en/user-guide/data-load-snowpipe-streaming-configuration#using-key-pair-authentication-key-rotation)
	// .
	KeyPassphrase *string

	// The name of the record metadata column
	MetaDataColumnName *string

	// The private key used to encrypt your Snowflake client. For information, see
	// Using Key Pair Authentication & Key Rotation (https://docs.snowflake.com/en/user-guide/data-load-snowpipe-streaming-configuration#using-key-pair-authentication-key-rotation)
	// .
	PrivateKey *string

	// Describes a data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// Specify how long Firehose retries sending data to the Snowflake endpoint.
	// After sending data, Firehose first waits for an acknowledgment from the
	// endpoint. If an error occurs or the acknowledgment doesn't arrive within the
	// acknowledgment timeout period, Firehose starts the retry duration counter. It
	// keeps retrying until the retry duration expires. After that, Firehose considers
	// it a data delivery failure and backs up the data to your Amazon S3 bucket. Every
	// time that Firehose sends data to the endpoint (either the initial attempt or a
	// retry), it restarts the acknowledgment timeout counter and waits for an
	// acknowledgment. Even if the retry duration expires, Firehose still waits for
	// the acknowledgment until it receives it or the acknowledgment timeout period is
	// reached. If the acknowledgment times out, Firehose determines whether there's
	// time left in the retry counter. If there is time left, it retries again and
	// repeats the logic until it receives an acknowledgment or determines that the
	// retry time has expired. If you don't want Firehose to retry sending data, set
	// this value to 0.
	RetryOptions *SnowflakeRetryOptions

	// The Amazon Resource Name (ARN) of the Snowflake role
	RoleARN *string

	// Choose an S3 backup mode
	S3BackupMode SnowflakeS3BackupMode

	// Describes an update for a destination in Amazon S3.
	S3Update *S3DestinationUpdate

	// Each database consists of one or more schemas, which are logical groupings of
	// database objects, such as tables and views
	Schema *string

	// Optionally configure a Snowflake role. Otherwise the default user role will be
	// used.
	SnowflakeRoleConfiguration *SnowflakeRoleConfiguration

	// All data in Snowflake is stored in database tables, logically structured as
	// collections of columns and rows.
	Table *string

	// User login name for the Snowflake account.
	User *string
	// contains filtered or unexported fields
}

Describes an update for a destination in Snowflake.

type SnowflakeRetryOptions added in v1.24.0

type SnowflakeRetryOptions struct {

	// The time period during which Firehose retries sending data to Snowflake.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

Specify how long Firehose retries sending data to the Snowflake endpoint. After sending data, Firehose first waits for an acknowledgment from the endpoint. If an error occurs or the acknowledgment doesn't arrive within the acknowledgment timeout period, Firehose starts the retry duration counter. It keeps retrying until the retry duration expires. After that, Firehose considers it a data delivery failure and backs up the data to your Amazon S3 bucket. Every time that Firehose sends data to the endpoint (either the initial attempt or a retry), it restarts the acknowledgment timeout counter and waits for an acknowledgment. Even if the retry duration expires, Firehose still waits for the acknowledgment until it receives it or the acknowledgment timeout period is reached. If the acknowledgment times out, Firehose determines whether there's time left in the retry counter. If there is time left, it retries again and repeats the logic until it receives an acknowledgment or determines that the retry time has expired. If you don't want Firehose to retry sending data, set this value to 0.
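For example, setting the duration to 0 disables retries entirely, so a failed delivery goes straight to the S3 backup (aws.Int32 assumed from the SDK's aws helper package):

// No retries: a failed delivery is backed up to Amazon S3 immediately.
retry := &types.SnowflakeRetryOptions{DurationInSeconds: aws.Int32(0)}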

type SnowflakeRoleConfiguration added in v1.24.0

type SnowflakeRoleConfiguration struct {

	// Enable Snowflake role
	Enabled *bool

	// The Snowflake role you wish to configure
	SnowflakeRole *string
	// contains filtered or unexported fields
}

Optionally configure a Snowflake role. Otherwise the default user role will be used.
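A small sketch; the role name is a placeholder:

roleCfg := &types.SnowflakeRoleConfiguration{
	Enabled:       aws.Bool(true),
	SnowflakeRole: aws.String("FIREHOSE_INGEST_ROLE"), // placeholder Snowflake role
}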

type SnowflakeS3BackupMode added in v1.24.0

type SnowflakeS3BackupMode string
const (
	SnowflakeS3BackupModeFailedDataOnly SnowflakeS3BackupMode = "FailedDataOnly"
	SnowflakeS3BackupModeAllData        SnowflakeS3BackupMode = "AllData"
)

Enum values for SnowflakeS3BackupMode

func (SnowflakeS3BackupMode) Values added in v1.24.0

Values returns all known values for SnowflakeS3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type SnowflakeVpcConfiguration added in v1.24.0

type SnowflakeVpcConfiguration struct {

	// The VPCE ID for Firehose to privately connect with Snowflake. The ID format is
	// com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon
	// PrivateLink & Snowflake (https://docs.snowflake.com/en/user-guide/admin-security-privatelink)
	//
	// This member is required.
	PrivateLinkVpceId *string
	// contains filtered or unexported fields
}

Configures the Amazon PrivateLink VPC endpoint (VPCE) that Firehose uses to connect privately with Snowflake.

type SourceDescription

type SourceDescription struct {

	// The KinesisStreamSourceDescription value for the source Kinesis data stream.
	KinesisStreamSourceDescription *KinesisStreamSourceDescription

	// The configuration description for the Amazon MSK cluster to be used as the
	// source for a delivery stream.
	MSKSourceDescription *MSKSourceDescription
	// contains filtered or unexported fields
}

Details about the Kinesis data stream or Amazon MSK cluster used as the source for a Firehose delivery stream.

type SplunkBufferingHints added in v1.23.0

type SplunkBufferingHints struct {

	// Buffer incoming data for the specified period of time, in seconds, before
	// delivering it to the destination. The default value is 60 (1 minute).
	IntervalInSeconds *int32

	// Buffer incoming data to the specified size, in MBs, before delivering it to the
	// destination. The default value is 5.
	SizeInMBs *int32
	// contains filtered or unexported fields
}

The buffering options. If no value is specified, the default values for Splunk are used.

type SplunkDestinationConfiguration

type SplunkDestinationConfiguration struct {

	// The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
	//
	// This member is required.
	HECEndpoint *string

	// This type can be either "Raw" or "Event."
	//
	// This member is required.
	HECEndpointType HECEndpointType

	// This is a GUID that you obtain from your Splunk cluster when you create a new
	// HEC endpoint.
	//
	// This member is required.
	HECToken *string

	// The configuration for the backup Amazon S3 location.
	//
	// This member is required.
	S3Configuration *S3DestinationConfiguration

	// The buffering options. If no value is specified, the default values for Splunk
	// are used.
	BufferingHints *SplunkBufferingHints

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The amount of time that Firehose waits to receive an acknowledgment from Splunk
	// after it sends it data. At the end of the timeout period, Firehose either tries
	// to send the data again or considers it an error, based on your retry settings.
	HECAcknowledgmentTimeoutInSeconds *int32

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver data to Splunk, or if
	// it doesn't receive an acknowledgment of receipt from Splunk.
	RetryOptions *SplunkRetryOptions

	// Defines how documents should be delivered to Amazon S3. When set to
	// FailedEventsOnly , Firehose writes any data that could not be indexed to the
	// configured Amazon S3 destination. When set to AllEvents , Firehose delivers all
	// incoming records to Amazon S3, and also writes failed documents to Amazon S3.
	// The default value is FailedEventsOnly . You can update this backup mode from
	// FailedEventsOnly to AllEvents . You can't update it from AllEvents to
	// FailedEventsOnly .
	S3BackupMode SplunkS3BackupMode
	// contains filtered or unexported fields
}

Describes the configuration of a destination in Splunk.
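A minimal sketch with the required members plus optional buffering hints; the HEC endpoint, token, and ARNs are placeholders:

splunkCfg := types.SplunkDestinationConfiguration{
	HECEndpoint:     aws.String("https://http-inputs-example.splunkcloud.com:443"),
	HECEndpointType: types.HECEndpointTypeRaw,
	HECToken:        aws.String("00000000-0000-0000-0000-000000000000"),
	S3Configuration: &types.S3DestinationConfiguration{
		BucketARN: aws.String("arn:aws:s3:::my-backup-bucket"),
		RoleARN:   aws.String("arn:aws:iam::123456789012:role/firehose-s3"),
	},
	// Optional: deliver after 1 MB or 60 seconds, whichever comes first.
	BufferingHints: &types.SplunkBufferingHints{
		SizeInMBs:         aws.Int32(1),
		IntervalInSeconds: aws.Int32(60),
	},
	S3BackupMode: types.SplunkS3BackupModeFailedEventsOnly,
}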

type SplunkDestinationDescription

type SplunkDestinationDescription struct {

	// The buffering options. If no value is specified, the default values for Splunk
	// are used.
	BufferingHints *SplunkBufferingHints

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The amount of time that Firehose waits to receive an acknowledgment from Splunk
	// after it sends it data. At the end of the timeout period, Firehose either tries
	// to send the data again or considers it an error, based on your retry settings.
	HECAcknowledgmentTimeoutInSeconds *int32

	// The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
	HECEndpoint *string

	// This type can be either "Raw" or "Event."
	HECEndpointType HECEndpointType

	// A GUID you obtain from your Splunk cluster when you create a new HEC endpoint.
	HECToken *string

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver data to Splunk or if
	// it doesn't receive an acknowledgment of receipt from Splunk.
	RetryOptions *SplunkRetryOptions

	// Defines how documents should be delivered to Amazon S3. When set to
	// FailedEventsOnly , Firehose writes any data that could not be indexed to the
	// configured Amazon S3 destination. When set to AllEvents , Firehose delivers
	// all incoming records to Amazon S3, and also writes failed documents to Amazon
	// S3. The default value is FailedEventsOnly .
	S3BackupMode SplunkS3BackupMode

	// The Amazon S3 destination.
	S3DestinationDescription *S3DestinationDescription
	// contains filtered or unexported fields
}

Describes a destination in Splunk.

type SplunkDestinationUpdate

type SplunkDestinationUpdate struct {

	// The buffering options. If no value is specified, the default values for Splunk
	// are used.
	BufferingHints *SplunkBufferingHints

	// The Amazon CloudWatch logging options for your delivery stream.
	CloudWatchLoggingOptions *CloudWatchLoggingOptions

	// The amount of time that Firehose waits to receive an acknowledgment from Splunk
	// after it sends data. At the end of the timeout period, Firehose either tries to
	// send the data again or considers it an error, based on your retry settings.
	HECAcknowledgmentTimeoutInSeconds *int32

	// The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
	HECEndpoint *string

	// This type can be either "Raw" or "Event."
	HECEndpointType HECEndpointType

	// A GUID that you obtain from your Splunk cluster when you create a new HEC
	// endpoint.
	HECToken *string

	// The data processing configuration.
	ProcessingConfiguration *ProcessingConfiguration

	// The retry behavior in case Firehose is unable to deliver data to Splunk or if
	// it doesn't receive an acknowledgment of receipt from Splunk.
	RetryOptions *SplunkRetryOptions

	// Specifies how you want Firehose to back up documents to Amazon S3. When set to
	// FailedEventsOnly , Firehose writes any data that could not be indexed to the
	// configured Amazon S3 destination. When set to AllEvents , Firehose delivers all
	// incoming records to Amazon S3, and also writes failed documents to Amazon S3.
	// The default value is FailedEventsOnly . You can update this backup mode from
	// FailedEventsOnly to AllEvents . You can't update it from AllEvents to
	// FailedEventsOnly .
	S3BackupMode SplunkS3BackupMode

	// Your update to the configuration of the backup Amazon S3 location.
	S3Update *S3DestinationUpdate
	// contains filtered or unexported fields
}

Describes an update for a destination in Splunk.

type SplunkRetryOptions

type SplunkRetryOptions struct {

	// The total amount of time that Firehose spends on retries. This duration starts
	// after the initial attempt to send data to Splunk fails. It doesn't include the
	// periods during which Firehose waits for acknowledgment from Splunk after each
	// attempt.
	DurationInSeconds *int32
	// contains filtered or unexported fields
}

Configures retry behavior in case Firehose is unable to deliver documents to Splunk, or if it doesn't receive an acknowledgment from Splunk.

type SplunkS3BackupMode

type SplunkS3BackupMode string
const (
	SplunkS3BackupModeFailedEventsOnly SplunkS3BackupMode = "FailedEventsOnly"
	SplunkS3BackupModeAllEvents        SplunkS3BackupMode = "AllEvents"
)

Enum values for SplunkS3BackupMode

func (SplunkS3BackupMode) Values added in v0.29.0

Values returns all known values for SplunkS3BackupMode. Note that this can be expanded in the future, and so it is only as up to date as the client. The ordering of this slice is not guaranteed to be stable across updates.

type Tag

type Tag struct {

	// A unique identifier for the tag. Maximum length: 128 characters. Valid
	// characters: Unicode letters, digits, white space, _ . / = + - % @
	//
	// This member is required.
	Key *string

	// An optional string, which you can use to describe or define the tag. Maximum
	// length: 256 characters. Valid characters: Unicode letters, digits, white space,
	// _ . / = + - % @
	Value *string
	// contains filtered or unexported fields
}

Metadata that you can assign to a delivery stream, consisting of a key-value pair.
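For example, tagging an existing delivery stream (client, ctx, and the stream name are placeholders):

_, err := client.TagDeliveryStream(ctx, &firehose.TagDeliveryStreamInput{
	DeliveryStreamName: aws.String("my-delivery-stream"),
	Tags: []types.Tag{
		{Key: aws.String("team"), Value: aws.String("data-platform")},
		{Key: aws.String("env"), Value: aws.String("prod")},
	},
})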

type VpcConfiguration

type VpcConfiguration struct {

	// The ARN of the IAM role that you want the delivery stream to use to create
	// endpoints in the destination VPC. You can use your existing Firehose delivery
	// role or you can specify a new role. In either case, make sure that the role
	// trusts the Firehose service principal and that it grants the following
	// permissions:
	//   - ec2:DescribeVpcs
	//   - ec2:DescribeVpcAttribute
	//   - ec2:DescribeSubnets
	//   - ec2:DescribeSecurityGroups
	//   - ec2:DescribeNetworkInterfaces
	//   - ec2:CreateNetworkInterface
	//   - ec2:CreateNetworkInterfacePermission
	//   - ec2:DeleteNetworkInterface
	// When you specify subnets for delivering data to the destination in a private
	// VPC, make sure you have enough free IP addresses in the chosen subnets. If
	// there is no available free IP address in a specified subnet, Firehose cannot
	// create or add ENIs for the data delivery in the private VPC, and the delivery
	// will be degraded or fail.
	//
	// This member is required.
	RoleARN *string

	// The IDs of the security groups that you want Firehose to use when it creates
	// ENIs in the VPC of the Amazon ES destination. You can use the same security
	// group that the Amazon ES domain uses or different ones. If you specify different
	// security groups here, ensure that they allow outbound HTTPS traffic to the
	// Amazon ES domain's security group. Also ensure that the Amazon ES domain's
	// security group allows HTTPS traffic from the security groups specified here. If
	// you use the same security group for both your delivery stream and the Amazon ES
	// domain, make sure the security group inbound rule allows HTTPS traffic. For more
	// information about security group rules, see Security group rules (https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SecurityGroupRules)
	// in the Amazon VPC documentation.
	//
	// This member is required.
	SecurityGroupIds []string

	// The IDs of the subnets that you want Firehose to use to create ENIs in the VPC
	// of the Amazon ES destination. Make sure that the routing tables and inbound and
	// outbound rules allow traffic to flow from the subnets whose IDs are specified
	// here to the subnets that have the destination Amazon ES endpoints. Firehose
	// creates at least one ENI in each of the subnets that are specified here. Do not
	// delete or modify these ENIs. The number of ENIs that Firehose creates in the
	// subnets specified here scales up and down automatically based on throughput. To
	// enable Firehose to scale up the number of ENIs to match throughput, ensure that
	// you have sufficient quota. To help you calculate the quota you need, assume that
	// Firehose can create up to three ENIs for this delivery stream for each of the
	// subnets specified here. For more information about ENI quota, see Network
	// Interfaces  (https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-enis)
	// in the Amazon VPC Quotas topic.
	//
	// This member is required.
	SubnetIds []string
	// contains filtered or unexported fields
}

The details of the VPC of the Amazon OpenSearch or Amazon OpenSearch Serverless destination.
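A minimal sketch; the role ARN, subnet IDs, and security group IDs are placeholders for resources in the destination VPC:

vpcCfg := &types.VpcConfiguration{
	RoleARN:          aws.String("arn:aws:iam::123456789012:role/firehose-vpc"),
	SubnetIds:        []string{"subnet-0123456789abcdef0", "subnet-0abcdef0123456789"},
	SecurityGroupIds: []string{"sg-0123456789abcdef0"},
}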

type VpcConfigurationDescription

type VpcConfigurationDescription struct {

	// The ARN of the IAM role that the delivery stream uses to create endpoints in
	// the destination VPC. You can use your existing Firehose delivery role or you can
	// specify a new role. In either case, make sure that the role trusts the Firehose
	// service principal and that it grants the following permissions:
	//   - ec2:DescribeVpcs
	//   - ec2:DescribeVpcAttribute
	//   - ec2:DescribeSubnets
	//   - ec2:DescribeSecurityGroups
	//   - ec2:DescribeNetworkInterfaces
	//   - ec2:CreateNetworkInterface
	//   - ec2:CreateNetworkInterfacePermission
	//   - ec2:DeleteNetworkInterface
	// If you revoke these permissions after you create the delivery stream, Firehose
	// can't scale out by creating more ENIs when necessary. You might therefore see a
	// degradation in performance.
	//
	// This member is required.
	RoleARN *string

	// The IDs of the security groups that Firehose uses when it creates ENIs in the
	// VPC of the Amazon ES destination. You can use the same security group that the
	// Amazon ES domain uses or different ones. If you specify different security
	// groups, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's
	// security group. Also ensure that the Amazon ES domain's security group allows
	// HTTPS traffic from the security groups specified here. If you use the same
	// security group for both your delivery stream and the Amazon ES domain, make sure
	// the security group inbound rule allows HTTPS traffic. For more information about
	// security group rules, see Security group rules (https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SecurityGroupRules)
	// in the Amazon VPC documentation.
	//
	// This member is required.
	SecurityGroupIds []string

	// The IDs of the subnets that Firehose uses to create ENIs in the VPC of the
	// Amazon ES destination. Make sure that the routing tables and inbound and
	// outbound rules allow traffic to flow from the subnets whose IDs are specified
	// here to the subnets that have the destination Amazon ES endpoints. Firehose
	// creates at least one ENI in each of the subnets that are specified here. Do not
	// delete or modify these ENIs. The number of ENIs that Firehose creates in the
	// subnets specified here scales up and down automatically based on throughput. To
	// enable Firehose to scale up the number of ENIs to match throughput, ensure that
	// you have sufficient quota. To help you calculate the quota you need, assume that
	// Firehose can create up to three ENIs for this delivery stream for each of the
	// subnets specified here. For more information about ENI quota, see Network
	// Interfaces  (https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-enis)
	// in the Amazon VPC Quotas topic.
	//
	// This member is required.
	SubnetIds []string

	// The ID of the Amazon ES destination's VPC.
	//
	// This member is required.
	VpcId *string
	// contains filtered or unexported fields
}

The details of the VPC of the Amazon ES destination.
