Package datapipeline
Published: May 6, 2015 License: Apache-2.0 Imports: 4 Imported by: 0

Documentation

Overview

Package datapipeline provides a client for AWS Data Pipeline.

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type ActivatePipelineInput

type ActivatePipelineInput struct {
	// Returns a list of parameter values to pass to the pipeline at activation.
	ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"`

	// The identifier of the pipeline to activate.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input of the ActivatePipeline action.

type ActivatePipelineOutput

type ActivatePipelineOutput struct {
	// contains filtered or unexported fields
}

Contains the output from the ActivatePipeline action.

type AddTagsInput

type AddTagsInput struct {
	// The identifier of the pipeline to which you want to add the tags.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// The tags as key/value pairs to add to the pipeline.
	Tags []*Tag `locationName:"tags" type:"list" required:"true"`
	// contains filtered or unexported fields
}

The input to the AddTags action.

type AddTagsOutput

type AddTagsOutput struct {
	// contains filtered or unexported fields
}

The response from the AddTags action.

type CreatePipelineInput

type CreatePipelineInput struct {
	// The description of the new pipeline.
	Description *string `locationName:"description" type:"string"`

	// The name of the new pipeline. You can use the same name for multiple pipelines
	// associated with your AWS account, because AWS Data Pipeline assigns each
	// new pipeline a unique pipeline identifier.
	Name *string `locationName:"name" type:"string" required:"true"`

	// A list of tags to associate with a pipeline at creation time. Tags let you
	// control access to pipelines. For more information, see Controlling User Access
	// to Pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html)
	// in the AWS Data Pipeline Developer Guide.
	Tags []*Tag `locationName:"tags" type:"list"`

	// A unique identifier that you specify. This identifier is not the same as
	// the pipeline identifier assigned by AWS Data Pipeline. You are responsible
	// for defining the format and ensuring the uniqueness of this identifier. You
	// use this parameter to ensure idempotency during repeated calls to CreatePipeline.
	// For example, if the first call to CreatePipeline does not return a clear
	// success, you can pass in the same unique identifier and pipeline name combination
	// on a subsequent call to CreatePipeline. CreatePipeline ensures that if a
	// pipeline already exists with the same name and unique identifier, a new pipeline
	// will not be created. Instead, you'll receive the pipeline identifier from
	// the previous attempt. The uniqueness of the name and unique identifier combination
	// is scoped to the AWS account or IAM user credentials.
	UniqueID *string `locationName:"uniqueId" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input for the CreatePipeline action.
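
As the UniqueID documentation above describes, repeating CreatePipeline with the same name and unique identifier returns the existing pipeline's identifier instead of creating a duplicate. The sketch below is a hedged illustration of that retry behavior; the pipeline name, the unique identifier, and the single blind retry are placeholders, not a recommended retry policy.

package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	// Reuse the same input on retry; if the first call actually created the
	// pipeline, the second call returns the previously assigned pipeline
	// identifier rather than creating a duplicate.
	input := &datapipeline.CreatePipelineInput{
		Name:     aws.String("my-pipeline"),         // placeholder name
		UniqueID: aws.String("my-pipeline-2015-05"), // caller-chosen idempotency token
	}

	resp, err := svc.CreatePipeline(input)
	if err != nil {
		// The outcome of the first call is unclear (for example, a timeout),
		// so call again with the identical name/uniqueId combination.
		resp, err = svc.CreatePipeline(input)
	}
	if err != nil {
		panic(err)
	}
	fmt.Println(*resp.PipelineID)
}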

type CreatePipelineOutput

type CreatePipelineOutput struct {
	// The ID that AWS Data Pipeline assigns the newly created pipeline. The ID
	// is a string of the form: df-06372391ZG65EXAMPLE.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`
	// contains filtered or unexported fields
}

Contains the output from the CreatePipeline action.

type DataPipeline

type DataPipeline struct {
	*aws.Service
}

DataPipeline is a client for AWS Data Pipeline.

func New

func New(config *aws.Config) *DataPipeline

New returns a new DataPipeline client.
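
A minimal construction sketch. Passing nil, as the examples below do, uses the SDK's default configuration; an explicit *aws.Config overrides settings such as the region. The Region field and its plain-string type here are assumptions about this SDK vintage, and the region value is a placeholder.

package main

import (
	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	// Passing nil uses the SDK defaults; an explicit config overrides them.
	// The Region field and its string type are assumed for this SDK version.
	svc := datapipeline.New(&aws.Config{Region: "us-east-1"})
	_ = svc
}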

func (*DataPipeline) ActivatePipeline

func (c *DataPipeline) ActivatePipeline(input *ActivatePipelineInput) (output *ActivatePipelineOutput, err error)

Validates a pipeline and initiates processing. If the pipeline does not pass validation, activation fails. You cannot perform this operation on FINISHED pipelines and attempting to do so will return an InvalidRequestException.

Call this action to start processing pipeline tasks of a pipeline you've created using the CreatePipeline and PutPipelineDefinition actions. A pipeline cannot be modified after it has been successfully activated.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.ActivatePipelineInput{
		PipelineID: aws.String("id"), // Required
		ParameterValues: []*datapipeline.ParameterValue{
			&datapipeline.ParameterValue{ // Required
				ID:          aws.String("fieldNameString"),  // Required
				StringValue: aws.String("fieldStringValue"), // Required
			},
			// More values...
		},
	}
	resp, err := svc.ActivatePipeline(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) ActivatePipelineRequest

func (c *DataPipeline) ActivatePipelineRequest(input *ActivatePipelineInput) (req *aws.Request, output *ActivatePipelineOutput)

ActivatePipelineRequest generates a request for the ActivatePipeline operation.

func (*DataPipeline) AddTags

func (c *DataPipeline) AddTags(input *AddTagsInput) (output *AddTagsOutput, err error)

Add or modify tags in an existing pipeline.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.AddTagsInput{
		PipelineID: aws.String("id"), // Required
		Tags: []*datapipeline.Tag{ // Required
			&datapipeline.Tag{ // Required
				Key:   aws.String("tagKey"),   // Required
				Value: aws.String("tagValue"), // Required
			},
			// More values...
		},
	}
	resp, err := svc.AddTags(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) AddTagsRequest

func (c *DataPipeline) AddTagsRequest(input *AddTagsInput) (req *aws.Request, output *AddTagsOutput)

AddTagsRequest generates a request for the AddTags operation.

func (*DataPipeline) CreatePipeline

func (c *DataPipeline) CreatePipeline(input *CreatePipelineInput) (output *CreatePipelineOutput, err error)

Creates a new empty pipeline. When this action succeeds, you can then use the PutPipelineDefinition action to populate the pipeline.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.CreatePipelineInput{
		Name:        aws.String("id"), // Required
		UniqueID:    aws.String("id"), // Required
		Description: aws.String("string"),
		Tags: []*datapipeline.Tag{
			&datapipeline.Tag{ // Required
				Key:   aws.String("tagKey"),   // Required
				Value: aws.String("tagValue"), // Required
			},
			// More values...
		},
	}
	resp, err := svc.CreatePipeline(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) CreatePipelineRequest

func (c *DataPipeline) CreatePipelineRequest(input *CreatePipelineInput) (req *aws.Request, output *CreatePipelineOutput)

CreatePipelineRequest generates a request for the CreatePipeline operation.

func (*DataPipeline) DeletePipeline

func (c *DataPipeline) DeletePipeline(input *DeletePipelineInput) (output *DeletePipelineOutput, err error)

Permanently deletes a pipeline, its pipeline definition and its run history. You cannot query or restore a deleted pipeline. AWS Data Pipeline will attempt to cancel instances associated with the pipeline that are currently being processed by task runners. Deleting a pipeline cannot be undone.

To temporarily pause a pipeline instead of deleting it, call SetStatus with the status set to Pause on individual components. Components that are paused by SetStatus can be resumed.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.DeletePipelineInput{
		PipelineID: aws.String("id"), // Required
	}
	resp, err := svc.DeletePipeline(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) DeletePipelineRequest

func (c *DataPipeline) DeletePipelineRequest(input *DeletePipelineInput) (req *aws.Request, output *DeletePipelineOutput)

DeletePipelineRequest generates a request for the DeletePipeline operation.

func (*DataPipeline) DescribeObjects

func (c *DataPipeline) DescribeObjects(input *DescribeObjectsInput) (output *DescribeObjectsOutput, err error)

Returns the object definitions for a set of objects associated with the pipeline. Object definitions are composed of a set of fields that define the properties of the object.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.DescribeObjectsInput{
		ObjectIDs: []*string{ // Required
			aws.String("id"), // Required
			// More values...
		},
		PipelineID:          aws.String("id"), // Required
		EvaluateExpressions: aws.Boolean(true),
		Marker:              aws.String("string"),
	}
	resp, err := svc.DescribeObjects(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) DescribeObjectsRequest

func (c *DataPipeline) DescribeObjectsRequest(input *DescribeObjectsInput) (req *aws.Request, output *DescribeObjectsOutput)

DescribeObjectsRequest generates a request for the DescribeObjects operation.

func (*DataPipeline) DescribePipelines

func (c *DataPipeline) DescribePipelines(input *DescribePipelinesInput) (output *DescribePipelinesOutput, err error)

Retrieve metadata about one or more pipelines. The information retrieved includes the name of the pipeline, the pipeline identifier, its current state, and the user account that owns the pipeline. Using account credentials, you can retrieve metadata about pipelines that you or your IAM users have created. If you are using an IAM user account, you can retrieve metadata about only those pipelines you have read permission for.

To retrieve the full pipeline definition instead of metadata about the pipeline, call the GetPipelineDefinition action.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.DescribePipelinesInput{
		PipelineIDs: []*string{ // Required
			aws.String("id"), // Required
			// More values...
		},
	}
	resp, err := svc.DescribePipelines(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) DescribePipelinesRequest

func (c *DataPipeline) DescribePipelinesRequest(input *DescribePipelinesInput) (req *aws.Request, output *DescribePipelinesOutput)

DescribePipelinesRequest generates a request for the DescribePipelines operation.

func (*DataPipeline) EvaluateExpression

func (c *DataPipeline) EvaluateExpression(input *EvaluateExpressionInput) (output *EvaluateExpressionOutput, err error)

Evaluates a string in the context of a specified object. A task runner can use this action to evaluate SQL queries stored in Amazon S3.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.EvaluateExpressionInput{
		Expression: aws.String("longString"), // Required
		ObjectID:   aws.String("id"),         // Required
		PipelineID: aws.String("id"),         // Required
	}
	resp, err := svc.EvaluateExpression(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) EvaluateExpressionRequest

func (c *DataPipeline) EvaluateExpressionRequest(input *EvaluateExpressionInput) (req *aws.Request, output *EvaluateExpressionOutput)

EvaluateExpressionRequest generates a request for the EvaluateExpression operation.

func (*DataPipeline) GetPipelineDefinition

func (c *DataPipeline) GetPipelineDefinition(input *GetPipelineDefinitionInput) (output *GetPipelineDefinitionOutput, err error)

Returns the definition of the specified pipeline. You can call GetPipelineDefinition to retrieve the pipeline definition you provided using PutPipelineDefinition.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.GetPipelineDefinitionInput{
		PipelineID: aws.String("id"), // Required
		Version:    aws.String("string"),
	}
	resp, err := svc.GetPipelineDefinition(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) GetPipelineDefinitionRequest

func (c *DataPipeline) GetPipelineDefinitionRequest(input *GetPipelineDefinitionInput) (req *aws.Request, output *GetPipelineDefinitionOutput)

GetPipelineDefinitionRequest generates a request for the GetPipelineDefinition operation.

func (*DataPipeline) ListPipelines

func (c *DataPipeline) ListPipelines(input *ListPipelinesInput) (output *ListPipelinesOutput, err error)

Returns a list of pipeline identifiers for all active pipelines. Identifiers are returned only for pipelines you have permission to access.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.ListPipelinesInput{
		Marker: aws.String("string"),
	}
	resp, err := svc.ListPipelines(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) ListPipelinesRequest

func (c *DataPipeline) ListPipelinesRequest(input *ListPipelinesInput) (req *aws.Request, output *ListPipelinesOutput)

ListPipelinesRequest generates a request for the ListPipelines operation.

func (*DataPipeline) PollForTask

func (c *DataPipeline) PollForTask(input *PollForTaskInput) (output *PollForTaskOutput, err error)

Task runners call this action to receive a task to perform from AWS Data Pipeline. The task runner specifies which tasks it can perform by setting a value for the workerGroup parameter of the PollForTask call. The task returned by PollForTask may come from any of the pipelines that match the workerGroup value passed in by the task runner and that were launched using the IAM user credentials specified by the task runner.

If tasks are ready in the work queue, PollForTask returns a response immediately.

If no tasks are available in the queue, PollForTask uses long-polling and holds on to a poll connection for up to 90 seconds, during which time the first newly scheduled task is handed to the task runner. To accommodate this, set the socket timeout in your task runner to 90 seconds. The task runner should not call PollForTask again on the same workerGroup until it receives a response, and this may take up to 90 seconds.
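
A hedged polling-loop sketch of the behavior described above: one outstanding PollForTask call per workerGroup, with a client timeout comfortably above the 90-second long poll. The HTTPClient field on aws.Config is an assumption about this SDK vintage, and the worker group name is a placeholder.

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	// Keep the client timeout above the 90-second long poll so the connection
	// is not cut while the service holds it open. (HTTPClient is assumed here.)
	svc := datapipeline.New(&aws.Config{
		HTTPClient: &http.Client{Timeout: 2 * time.Minute},
	})

	for {
		resp, err := svc.PollForTask(&datapipeline.PollForTaskInput{
			WorkerGroup: aws.String("myWorkerGroup"), // placeholder worker group
		})
		if err != nil {
			time.Sleep(10 * time.Second) // back off briefly, then poll again
			continue
		}
		if resp.TaskObject == nil {
			continue // no task was scheduled during the long poll
		}
		// Process the task, then report status via ReportTaskProgress and
		// SetTaskStatus using the task identifier in resp.TaskObject.
		fmt.Println(awsutil.StringValue(resp.TaskObject))
	}
}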

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.PollForTaskInput{
		WorkerGroup: aws.String("string"), // Required
		Hostname:    aws.String("id"),
		InstanceIdentity: &datapipeline.InstanceIdentity{
			Document:  aws.String("string"),
			Signature: aws.String("string"),
		},
	}
	resp, err := svc.PollForTask(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) PollForTaskRequest

func (c *DataPipeline) PollForTaskRequest(input *PollForTaskInput) (req *aws.Request, output *PollForTaskOutput)

PollForTaskRequest generates a request for the PollForTask operation.

func (*DataPipeline) PutPipelineDefinition

func (c *DataPipeline) PutPipelineDefinition(input *PutPipelineDefinitionInput) (output *PutPipelineDefinitionOutput, err error)

Adds tasks, schedules, and preconditions that control the behavior of the pipeline. You can use PutPipelineDefinition to populate a new pipeline.

PutPipelineDefinition also validates the configuration as it adds it to the pipeline. Changes to the pipeline are saved unless one of the following validation errors exists in the pipeline: an object is missing a name or identifier field; a string or reference field is empty; the number of objects in the pipeline exceeds the maximum allowed objects; or the pipeline is in a FINISHED state.

Pipeline object definitions are passed to the PutPipelineDefinition action and returned by the GetPipelineDefinition action.
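
As described for PutPipelineDefinitionOutput further below, the response carries Errored, ValidationErrors, and ValidationWarnings. The following hedged sketch shows one way to act on them after a call; the pipeline ID and the single Default object are placeholders.

package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	resp, err := svc.PutPipelineDefinition(&datapipeline.PutPipelineDefinitionInput{
		PipelineID: aws.String("df-EXAMPLE"), // placeholder pipeline ID
		PipelineObjects: []*datapipeline.PipelineObject{
			{
				ID:   aws.String("Default"), // placeholder object
				Name: aws.String("Default"),
				Fields: []*datapipeline.Field{
					{Key: aws.String("workerGroup"), StringValue: aws.String("myWorkerGroup")},
				},
			},
		},
	})
	if err != nil {
		panic(err)
	}

	if resp.Errored != nil && *resp.Errored {
		// The definition was stored but cannot be activated until the
		// reported validation errors are fixed and resubmitted.
		fmt.Println("validation problems:", awsutil.StringValue(resp))
		return
	}
	fmt.Println("pipeline definition saved")
}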

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.PutPipelineDefinitionInput{
		PipelineID: aws.String("id"), // Required
		PipelineObjects: []*datapipeline.PipelineObject{ // Required
			&datapipeline.PipelineObject{ // Required
				Fields: []*datapipeline.Field{ // Required
					&datapipeline.Field{ // Required
						Key:         aws.String("fieldNameString"), // Required
						RefValue:    aws.String("fieldNameString"),
						StringValue: aws.String("fieldStringValue"),
					},
					// More values...
				},
				ID:   aws.String("id"), // Required
				Name: aws.String("id"), // Required
			},
			// More values...
		},
		ParameterObjects: []*datapipeline.ParameterObject{
			&datapipeline.ParameterObject{ // Required
				Attributes: []*datapipeline.ParameterAttribute{ // Required
					&datapipeline.ParameterAttribute{ // Required
						Key:         aws.String("attributeNameString"),  // Required
						StringValue: aws.String("attributeValueString"), // Required
					},
					// More values...
				},
				ID: aws.String("fieldNameString"), // Required
			},
			// More values...
		},
		ParameterValues: []*datapipeline.ParameterValue{
			&datapipeline.ParameterValue{ // Required
				ID:          aws.String("fieldNameString"),  // Required
				StringValue: aws.String("fieldStringValue"), // Required
			},
			// More values...
		},
	}
	resp, err := svc.PutPipelineDefinition(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) PutPipelineDefinitionRequest

func (c *DataPipeline) PutPipelineDefinitionRequest(input *PutPipelineDefinitionInput) (req *aws.Request, output *PutPipelineDefinitionOutput)

PutPipelineDefinitionRequest generates a request for the PutPipelineDefinition operation.

func (*DataPipeline) QueryObjects

func (c *DataPipeline) QueryObjects(input *QueryObjectsInput) (output *QueryObjectsOutput, err error)

Queries a pipeline for the names of objects that match a specified set of conditions.

The objects returned by QueryObjects are paginated and then filtered by the value you set for query. This means the action may return an empty result set with a value set for marker. If HasMoreResults is set to True, you should continue to call QueryObjects, passing in the returned value for marker, until HasMoreResults returns False.
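
A hedged pagination sketch of the marker/HasMoreResults protocol described above; the pipeline ID is a placeholder and the sphere is set to INSTANCE purely as an example.

package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	var marker *string
	for {
		resp, err := svc.QueryObjects(&datapipeline.QueryObjectsInput{
			PipelineID: aws.String("df-EXAMPLE"), // placeholder pipeline ID
			Sphere:     aws.String("INSTANCE"),
			Marker:     marker, // nil on the first call
		})
		if err != nil {
			panic(err)
		}
		for _, id := range resp.IDs {
			fmt.Println(*id)
		}
		// Keep calling with the returned marker until HasMoreResults is false.
		if resp.HasMoreResults == nil || !*resp.HasMoreResults {
			break
		}
		marker = resp.Marker
	}
}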

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.QueryObjectsInput{
		PipelineID: aws.String("id"),     // Required
		Sphere:     aws.String("string"), // Required
		Limit:      aws.Long(1),
		Marker:     aws.String("string"),
		Query: &datapipeline.Query{
			Selectors: []*datapipeline.Selector{
				&datapipeline.Selector{ // Required
					FieldName: aws.String("string"),
					Operator: &datapipeline.Operator{
						Type: aws.String("OperatorType"),
						Values: []*string{
							aws.String("string"), // Required
							// More values...
						},
					},
				},
				// More values...
			},
		},
	}
	resp, err := svc.QueryObjects(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) QueryObjectsRequest

func (c *DataPipeline) QueryObjectsRequest(input *QueryObjectsInput) (req *aws.Request, output *QueryObjectsOutput)

QueryObjectsRequest generates a request for the QueryObjects operation.

func (*DataPipeline) RemoveTags

func (c *DataPipeline) RemoveTags(input *RemoveTagsInput) (output *RemoveTagsOutput, err error)

Remove existing tags from a pipeline.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.RemoveTagsInput{
		PipelineID: aws.String("id"), // Required
		TagKeys: []*string{ // Required
			aws.String("string"), // Required
			// More values...
		},
	}
	resp, err := svc.RemoveTags(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) RemoveTagsRequest

func (c *DataPipeline) RemoveTagsRequest(input *RemoveTagsInput) (req *aws.Request, output *RemoveTagsOutput)

RemoveTagsRequest generates a request for the RemoveTags operation.

func (*DataPipeline) ReportTaskProgress

func (c *DataPipeline) ReportTaskProgress(input *ReportTaskProgressInput) (output *ReportTaskProgressOutput, err error)

Updates the AWS Data Pipeline service on the progress of the calling task runner. When the task runner is assigned a task, it should call ReportTaskProgress to acknowledge that it has the task within 2 minutes. If the web service does not receive this acknowledgement within the 2-minute window, it will assign the task in a subsequent PollForTask call. After this initial acknowledgement, the task runner only needs to report progress every 15 minutes to maintain its ownership of the task. You can change this reporting time from 15 minutes by specifying a reportProgressTimeout field in your pipeline. If a task runner does not report its status after 5 minutes, AWS Data Pipeline will assume that the task runner is unable to process the task and will reassign the task in a subsequent response to PollForTask. Task runners should call ReportTaskProgress every 60 seconds.
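
A hedged sketch of the 60-second progress heartbeat described above. The task identifier would normally come from the TaskObject returned by PollForTask; here it is a placeholder.

package main

import (
	"time"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)
	taskID := aws.String("taskId") // placeholder; use the identifier from the assigned TaskObject

	ticker := time.NewTicker(60 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		resp, err := svc.ReportTaskProgress(&datapipeline.ReportTaskProgressInput{
			TaskID: taskID,
		})
		if err != nil {
			continue // report again on the next tick
		}
		// If the service canceled the task, stop working on it; SetTaskStatus
		// is not required for canceled tasks.
		if resp.Canceled != nil && *resp.Canceled {
			break
		}
	}
}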

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.ReportTaskProgressInput{
		TaskID: aws.String("taskId"), // Required
		Fields: []*datapipeline.Field{
			&datapipeline.Field{ // Required
				Key:         aws.String("fieldNameString"), // Required
				RefValue:    aws.String("fieldNameString"),
				StringValue: aws.String("fieldStringValue"),
			},
			// More values...
		},
	}
	resp, err := svc.ReportTaskProgress(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) ReportTaskProgressRequest

func (c *DataPipeline) ReportTaskProgressRequest(input *ReportTaskProgressInput) (req *aws.Request, output *ReportTaskProgressOutput)

ReportTaskProgressRequest generates a request for the ReportTaskProgress operation.

func (*DataPipeline) ReportTaskRunnerHeartbeat

func (c *DataPipeline) ReportTaskRunnerHeartbeat(input *ReportTaskRunnerHeartbeatInput) (output *ReportTaskRunnerHeartbeatOutput, err error)

Task runners call ReportTaskRunnerHeartbeat every 15 minutes to indicate that they are operational. In the case of AWS Data Pipeline Task Runner launched on a resource managed by AWS Data Pipeline, the web service can use this call to detect when the task runner application has failed and restart a new instance.
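
A hedged heartbeat sketch following the 15-minute cadence described above; the task runner identifier and worker group are placeholders, and the loop exits when the response sets Terminate.

package main

import (
	"time"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	for {
		resp, err := svc.ReportTaskRunnerHeartbeat(&datapipeline.ReportTaskRunnerHeartbeatInput{
			TaskRunnerID: aws.String("my-task-runner-1"), // placeholder unique runner ID
			WorkerGroup:  aws.String("myWorkerGroup"),    // placeholder worker group
		})
		if err == nil && resp.Terminate != nil && *resp.Terminate {
			return // the service asked this task runner to shut down
		}
		time.Sleep(15 * time.Minute)
	}
}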

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.ReportTaskRunnerHeartbeatInput{
		TaskRunnerID: aws.String("id"), // Required
		Hostname:     aws.String("id"),
		WorkerGroup:  aws.String("string"),
	}
	resp, err := svc.ReportTaskRunnerHeartbeat(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) ReportTaskRunnerHeartbeatRequest

func (c *DataPipeline) ReportTaskRunnerHeartbeatRequest(input *ReportTaskRunnerHeartbeatInput) (req *aws.Request, output *ReportTaskRunnerHeartbeatOutput)

ReportTaskRunnerHeartbeatRequest generates a request for the ReportTaskRunnerHeartbeat operation.

func (*DataPipeline) SetStatus

func (c *DataPipeline) SetStatus(input *SetStatusInput) (output *SetStatusOutput, err error)

Requests that the status of an array of physical or logical pipeline objects be updated in the pipeline. This update may not occur immediately, but is eventually consistent. The status that can be set depends on the type of object, e.g. DataNode or Activity. You cannot perform this operation on FINISHED pipelines and attempting to do so will return an InvalidRequestException.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.SetStatusInput{
		ObjectIDs: []*string{ // Required
			aws.String("id"), // Required
			// More values...
		},
		PipelineID: aws.String("id"),     // Required
		Status:     aws.String("string"), // Required
	}
	resp, err := svc.SetStatus(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) SetStatusRequest

func (c *DataPipeline) SetStatusRequest(input *SetStatusInput) (req *aws.Request, output *SetStatusOutput)

SetStatusRequest generates a request for the SetStatus operation.

func (*DataPipeline) SetTaskStatus

func (c *DataPipeline) SetTaskStatus(input *SetTaskStatusInput) (output *SetTaskStatusOutput, err error)

Notifies AWS Data Pipeline that a task is completed and provides information about the final status. The task runner calls this action regardless of whether the task was successful. The task runner does not need to call SetTaskStatus for tasks that are canceled by the web service during a call to ReportTaskProgress.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.SetTaskStatusInput{
		TaskID:          aws.String("taskId"),     // Required
		TaskStatus:      aws.String("TaskStatus"), // Required
		ErrorID:         aws.String("string"),
		ErrorMessage:    aws.String("errorMessage"),
		ErrorStackTrace: aws.String("string"),
	}
	resp, err := svc.SetTaskStatus(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) SetTaskStatusRequest

func (c *DataPipeline) SetTaskStatusRequest(input *SetTaskStatusInput) (req *aws.Request, output *SetTaskStatusOutput)

SetTaskStatusRequest generates a request for the SetTaskStatus operation.

func (*DataPipeline) ValidatePipelineDefinition

func (c *DataPipeline) ValidatePipelineDefinition(input *ValidatePipelineDefinitionInput) (output *ValidatePipelineDefinitionOutput, err error)

Tests the pipeline definition with a set of validation checks to ensure that it is well formed and can run without error.

Example
package main

import (
	"fmt"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

func main() {
	svc := datapipeline.New(nil)

	params := &datapipeline.ValidatePipelineDefinitionInput{
		PipelineID: aws.String("id"), // Required
		PipelineObjects: []*datapipeline.PipelineObject{ // Required
			&datapipeline.PipelineObject{ // Required
				Fields: []*datapipeline.Field{ // Required
					&datapipeline.Field{ // Required
						Key:         aws.String("fieldNameString"), // Required
						RefValue:    aws.String("fieldNameString"),
						StringValue: aws.String("fieldStringValue"),
					},
					// More values...
				},
				ID:   aws.String("id"), // Required
				Name: aws.String("id"), // Required
			},
			// More values...
		},
		ParameterObjects: []*datapipeline.ParameterObject{
			&datapipeline.ParameterObject{ // Required
				Attributes: []*datapipeline.ParameterAttribute{ // Required
					&datapipeline.ParameterAttribute{ // Required
						Key:         aws.String("attributeNameString"),  // Required
						StringValue: aws.String("attributeValueString"), // Required
					},
					// More values...
				},
				ID: aws.String("fieldNameString"), // Required
			},
			// More values...
		},
		ParameterValues: []*datapipeline.ParameterValue{
			&datapipeline.ParameterValue{ // Required
				ID:          aws.String("fieldNameString"),  // Required
				StringValue: aws.String("fieldStringValue"), // Required
			},
			// More values...
		},
	}
	resp, err := svc.ValidatePipelineDefinition(params)

	if awserr := aws.Error(err); awserr != nil {
		// A service error occurred.
		fmt.Println("Error:", awserr.Code, awserr.Message)
	} else if err != nil {
		// A non-service error occurred.
		panic(err)
	}

	// Pretty-print the response data.
	fmt.Println(awsutil.StringValue(resp))
}
Output:

func (*DataPipeline) ValidatePipelineDefinitionRequest

func (c *DataPipeline) ValidatePipelineDefinitionRequest(input *ValidatePipelineDefinitionInput) (req *aws.Request, output *ValidatePipelineDefinitionOutput)

ValidatePipelineDefinitionRequest generates a request for the ValidatePipelineDefinition operation.

type DeletePipelineInput

type DeletePipelineInput struct {
	// The identifier of the pipeline to be deleted.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input for the DeletePipeline action.

type DeletePipelineOutput

type DeletePipelineOutput struct {
	// contains filtered or unexported fields
}

type DescribeObjectsInput

type DescribeObjectsInput struct {
	// Indicates whether any expressions in the object should be evaluated when
	// the object descriptions are returned.
	EvaluateExpressions *bool `locationName:"evaluateExpressions" type:"boolean"`

	// The starting point for the results to be returned. The first time you call
	// DescribeObjects, this value should be empty. As long as the action returns
	// HasMoreResults as True, you can call DescribeObjects again and pass the marker
	// value from the response to retrieve the next set of results.
	Marker *string `locationName:"marker" type:"string"`

	// Identifiers of the pipeline objects that contain the definitions to be described.
	// You can pass as many as 25 identifiers in a single call to DescribeObjects.
	ObjectIDs []*string `locationName:"objectIds" type:"list" required:"true"`

	// Identifier of the pipeline that contains the object definitions.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The DescribeObjects action returns the object definitions for a specified set of object identifiers. You can filter the results to named fields and use markers to page through the results.

type DescribeObjectsOutput

type DescribeObjectsOutput struct {
	// If True, there are more pages of results to return.
	HasMoreResults *bool `locationName:"hasMoreResults" type:"boolean"`

	// The starting point for the next page of results. To view the next page of
	// results, call DescribeObjects again with this marker value.
	Marker *string `locationName:"marker" type:"string"`

	// An array of object definitions that are returned by the call to DescribeObjects.
	PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list" required:"true"`
	// contains filtered or unexported fields
}

Contains the output from the DescribeObjects action.

type DescribePipelinesInput

type DescribePipelinesInput struct {
	// Identifiers of the pipelines to describe. You can pass as many as 25 identifiers
	// in a single call to DescribePipelines. You can obtain pipeline identifiers
	// by calling ListPipelines.
	PipelineIDs []*string `locationName:"pipelineIds" type:"list" required:"true"`
	// contains filtered or unexported fields
}

The input to the DescribePipelines action.

type DescribePipelinesOutput

type DescribePipelinesOutput struct {
	// An array of descriptions returned for the specified pipelines.
	PipelineDescriptionList []*PipelineDescription `locationName:"pipelineDescriptionList" type:"list" required:"true"`
	// contains filtered or unexported fields
}

Contains the output from the DescribePipelines action.

type EvaluateExpressionInput

type EvaluateExpressionInput struct {
	// The expression to evaluate.
	Expression *string `locationName:"expression" type:"string" required:"true"`

	// The identifier of the object.
	ObjectID *string `locationName:"objectId" type:"string" required:"true"`

	// The identifier of the pipeline.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input for the EvaluateExpression action.

type EvaluateExpressionOutput

type EvaluateExpressionOutput struct {
	// The evaluated expression.
	EvaluatedExpression *string `locationName:"evaluatedExpression" type:"string" required:"true"`
	// contains filtered or unexported fields
}

Contains the output from the EvaluateExpression action.

type Field

type Field struct {
	// The field identifier.
	Key *string `locationName:"key" type:"string" required:"true"`

	// The field value, expressed as the identifier of another object.
	RefValue *string `locationName:"refValue" type:"string"`

	// The field value, expressed as a String.
	StringValue *string `locationName:"stringValue" type:"string"`
	// contains filtered or unexported fields
}

A key-value pair that describes a property of a pipeline object. The value is specified as either a string value (StringValue) or a reference to another object (RefValue) but not as both.

type GetPipelineDefinitionInput

type GetPipelineDefinitionInput struct {
	// The identifier of the pipeline.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// The version of the pipeline definition to retrieve. This parameter accepts
	// the values latest (default) and active. Where latest indicates the last definition
	// saved to the pipeline and active indicates the last definition of the pipeline
	// that was activated.
	Version *string `locationName:"version" type:"string"`
	// contains filtered or unexported fields
}

The input for the GetPipelineDefinition action.

type GetPipelineDefinitionOutput

type GetPipelineDefinitionOutput struct {
	// Returns a list of parameter objects used in the pipeline definition.
	ParameterObjects []*ParameterObject `locationName:"parameterObjects" type:"list"`

	// Returns a list of parameter values used in the pipeline definition.
	ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"`

	// An array of objects defined in the pipeline.
	PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list"`
	// contains filtered or unexported fields
}

Contains the output from the GetPipelineDefinition action.

type InstanceIdentity

type InstanceIdentity struct {
	// A description of an Amazon EC2 instance that is generated when the instance
	// is launched and exposed to the instance via the instance metadata service
	// in the form of a JSON representation of an object.
	Document *string `locationName:"document" type:"string"`

	// A signature which can be used to verify the accuracy and authenticity of
	// the information provided in the instance identity document.
	Signature *string `locationName:"signature" type:"string"`
	// contains filtered or unexported fields
}

Identity information for the Amazon EC2 instance that is hosting the task runner. You can get this value by calling a metadata URI from the EC2 instance. For more information, go to Instance Metadata (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html) in the Amazon Elastic Compute Cloud User Guide. Passing in this value proves that your task runner is running on an EC2 instance, and ensures the proper AWS Data Pipeline service charges are applied to your pipeline.
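
A hedged sketch of populating InstanceIdentity from the EC2 instance metadata service. The dynamic/instance-identity document and signature paths used here are assumptions about the metadata endpoints and should be checked against the EC2 documentation for your environment; the fetchMetadata helper is hypothetical.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"

	"github.com/awslabs/aws-sdk-go/aws"
	"github.com/awslabs/aws-sdk-go/aws/awsutil"
	"github.com/awslabs/aws-sdk-go/service/datapipeline"
)

// fetchMetadata is a hypothetical helper that reads one instance metadata path.
func fetchMetadata(path string) (string, error) {
	resp, err := http.Get("http://169.254.169.254" + path)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Assumed metadata paths for the signed identity document and its signature.
	document, _ := fetchMetadata("/latest/dynamic/instance-identity/document")
	signature, _ := fetchMetadata("/latest/dynamic/instance-identity/signature")

	identity := &datapipeline.InstanceIdentity{
		Document:  aws.String(document),
		Signature: aws.String(signature),
	}
	// Pass identity as the InstanceIdentity field of PollForTaskInput.
	fmt.Println(awsutil.StringValue(identity))
}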

type ListPipelinesInput

type ListPipelinesInput struct {
	// The starting point for the results to be returned. The first time you call
	// ListPipelines, this value should be empty. As long as the action returns
	// HasMoreResults as True, you can call ListPipelines again and pass the marker
	// value from the response to retrieve the next set of results.
	Marker *string `locationName:"marker" type:"string"`
	// contains filtered or unexported fields
}

The input to the ListPipelines action.

type ListPipelinesOutput

type ListPipelinesOutput struct {
	// If True, there are more results that can be obtained by a subsequent call
	// to ListPipelines.
	HasMoreResults *bool `locationName:"hasMoreResults" type:"boolean"`

	// If not null, indicates the starting point for the set of pipeline identifiers
	// that the next call to ListPipelines will retrieve. If null, there are no
	// more pipeline identifiers.
	Marker *string `locationName:"marker" type:"string"`

	// A list of all the pipeline identifiers that your account has permission to
	// access. If you require additional information about the pipelines, you can
	// use these identifiers to call DescribePipelines and GetPipelineDefinition.
	PipelineIDList []*PipelineIDName `locationName:"pipelineIdList" type:"list" required:"true"`
	// contains filtered or unexported fields
}

Contains the output from the ListPipelines action.

type Operator

type Operator struct {
	// The logical operation to be performed: equal (EQ), equal reference (REF_EQ),
	// less than or equal (LE), greater than or equal (GE), or between (BETWEEN).
	// Equal reference (REF_EQ) can be used only with reference fields. The other
	// comparison types can be used only with String fields. The comparison types
	// you can use apply only to certain object fields, as detailed below.
	//
	//  The comparison operators EQ and REF_EQ act on the following fields:
	//
	//  name, @sphere, parent, @componentParent, @instanceParent, @status,
	//  @scheduledStartTime, @scheduledEndTime, @actualStartTime, @actualEndTime
	//
	//  The comparison operators GE, LE, and BETWEEN act on the following fields:
	//
	//  @scheduledStartTime, @scheduledEndTime, @actualStartTime, @actualEndTime
	//
	// Note that fields beginning with the at sign (@) are read-only and set by
	// the web service. When you name fields, you should choose names containing
	// only alpha-numeric values, as symbols may be reserved by AWS Data Pipeline.
	// User-defined fields that you add to a pipeline should prefix their name with
	// the string "my".
	Type *string `locationName:"type" type:"string"`

	// The value that the actual field value will be compared with.
	Values []*string `locationName:"values" type:"list"`
	// contains filtered or unexported fields
}

Contains a logical operation for comparing the value of a field with a specified value.

type ParameterAttribute

type ParameterAttribute struct {
	// The field identifier.
	Key *string `locationName:"key" type:"string" required:"true"`

	// The field value, expressed as a String.
	StringValue *string `locationName:"stringValue" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The attributes allowed or specified with a parameter object.

type ParameterObject

type ParameterObject struct {
	// The attributes of the parameter object.
	Attributes []*ParameterAttribute `locationName:"attributes" type:"list" required:"true"`

	// Identifier of the parameter object.
	ID *string `locationName:"id" type:"string" required:"true"`
	// contains filtered or unexported fields
}

Contains information about a parameter object.

type ParameterValue

type ParameterValue struct {
	// Identifier of the parameter value.
	ID *string `locationName:"id" type:"string" required:"true"`

	// The field value, expressed as a String.
	StringValue *string `locationName:"stringValue" type:"string" required:"true"`
	// contains filtered or unexported fields
}

A value or list of parameter values.

type PipelineDescription

type PipelineDescription struct {
	// Description of the pipeline.
	Description *string `locationName:"description" type:"string"`

	// A list of read-only fields that contain metadata about the pipeline: @userId,
	// @accountId, and @pipelineState.
	Fields []*Field `locationName:"fields" type:"list" required:"true"`

	// Name of the pipeline.
	Name *string `locationName:"name" type:"string" required:"true"`

	// The pipeline identifier that was assigned by AWS Data Pipeline. This is a
	// string of the form df-297EG78HU43EEXAMPLE.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// A list of tags associated with a pipeline. Tags let you control access
	// to pipelines. For more information, see Controlling User Access to Pipelines
	// (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html)
	// in the AWS Data Pipeline Developer Guide.
	Tags []*Tag `locationName:"tags" type:"list"`
	// contains filtered or unexported fields
}

Contains pipeline metadata.

type PipelineIDName

type PipelineIDName struct {
	// Identifier of the pipeline that was assigned by AWS Data Pipeline. This is
	// a string of the form df-297EG78HU43EEXAMPLE.
	ID *string `locationName:"id" type:"string"`

	// Name of the pipeline.
	Name *string `locationName:"name" type:"string"`
	// contains filtered or unexported fields
}

Contains the name and identifier of a pipeline.

type PipelineObject

type PipelineObject struct {
	// Key-value pairs that define the properties of the object.
	Fields []*Field `locationName:"fields" type:"list" required:"true"`

	// Identifier of the object.
	ID *string `locationName:"id" type:"string" required:"true"`

	// Name of the object.
	Name *string `locationName:"name" type:"string" required:"true"`
	// contains filtered or unexported fields
}

Contains information about a pipeline object. This can be a logical, physical, or physical attempt pipeline object. The complete set of components of a pipeline defines the pipeline.

type PollForTaskInput

type PollForTaskInput struct {
	// The public DNS name of the calling task runner.
	Hostname *string `locationName:"hostname" type:"string"`

	// Identity information for the Amazon EC2 instance that is hosting the task
	// runner. You can get this value by calling the URI, http://169.254.169.254/latest/meta-data/instance-id,
	// from the EC2 instance. For more information, go to Instance Metadata (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html)
	// in the Amazon Elastic Compute Cloud User Guide. Passing in this value proves
	// that your task runner is running on an EC2 instance, and ensures the proper
	// AWS Data Pipeline service charges are applied to your pipeline.
	InstanceIdentity *InstanceIdentity `locationName:"instanceIdentity" type:"structure"`

	// Indicates the type of task the task runner is configured to accept and process.
	// The worker group is set as a field on objects in the pipeline when they are
	// created. You can only specify a single value for workerGroup in the call
	// to PollForTask. There are no wildcard values permitted in workerGroup;
	// the string must be an exact, case-sensitive match.
	WorkerGroup *string `locationName:"workerGroup" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The data type passed in as input to the PollForTask action.

type PollForTaskOutput

type PollForTaskOutput struct {
	// An instance of TaskObject. The returned object contains all the information
	// needed to complete the task that is being assigned to the task runner. One
	// of the fields returned in this object is taskId, which contains an identifier
	// for the task being assigned. The calling task runner uses taskId in subsequent
	// calls to ReportTaskProgress and SetTaskStatus.
	TaskObject *TaskObject `locationName:"taskObject" type:"structure"`
	// contains filtered or unexported fields
}

Contains the output from the PollForTask action.

type PutPipelineDefinitionInput

type PutPipelineDefinitionInput struct {
	// A list of parameter objects used with the pipeline.
	ParameterObjects []*ParameterObject `locationName:"parameterObjects" type:"list"`

	// A list of parameter values used with the pipeline.
	ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"`

	// The identifier of the pipeline to be configured.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// The objects that define the pipeline. These will overwrite the existing pipeline
	// definition.
	PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list" required:"true"`
	// contains filtered or unexported fields
}

The input of the PutPipelineDefinition action.

type PutPipelineDefinitionOutput

type PutPipelineDefinitionOutput struct {
	// If True, there were validation errors. If errored is True, the pipeline definition
	// is stored but cannot be activated until you correct the pipeline and call
	// PutPipelineDefinition to commit the corrected pipeline.
	Errored *bool `locationName:"errored" type:"boolean" required:"true"`

	// A list of the validation errors that are associated with the objects defined
	// in pipelineObjects.
	ValidationErrors []*ValidationError `locationName:"validationErrors" type:"list"`

	// A list of the validation warnings that are associated with the objects defined
	// in pipelineObjects.
	ValidationWarnings []*ValidationWarning `locationName:"validationWarnings" type:"list"`
	// contains filtered or unexported fields
}

Contains the output of the PutPipelineDefinition action.

type Query

type Query struct {
	// List of selectors that define the query. An object must satisfy all of the
	// selectors to match the query.
	Selectors []*Selector `locationName:"selectors" type:"list"`
	// contains filtered or unexported fields
}

Defines the query to run against an object.

type QueryObjectsInput

type QueryObjectsInput struct {
	// Specifies the maximum number of object names that QueryObjects will return
	// in a single call. The default value is 100.
	Limit *int64 `locationName:"limit" type:"integer"`

	// The starting point for the results to be returned. The first time you call
	// QueryObjects, this value should be empty. As long as the action returns HasMoreResults
	// as True, you can call QueryObjects again and pass the marker value from the
	// response to retrieve the next set of results.
	Marker *string `locationName:"marker" type:"string"`

	// Identifier of the pipeline to be queried for object names.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// Query that defines the objects to be returned. The Query object can contain
	// a maximum of ten selectors. The conditions in the query are limited to top-level
	// String fields in the object. These filters can be applied to components,
	// instances, and attempts.
	Query *Query `locationName:"query" type:"structure"`

	// Specifies whether the query applies to components or instances. Allowable
	// values: COMPONENT, INSTANCE, ATTEMPT.
	Sphere *string `locationName:"sphere" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input for the QueryObjects action.

type QueryObjectsOutput

type QueryObjectsOutput struct {
	// If True, there are more results that can be obtained by a subsequent call
	// to QueryObjects.
	HasMoreResults *bool `locationName:"hasMoreResults" type:"boolean"`

	// A list of identifiers that match the query selectors.
	IDs []*string `locationName:"ids" type:"list"`

	// The starting point for the results to be returned. As long as the action
	// returns HasMoreResults as True, you can call QueryObjects again and pass
	// the marker value from the response to retrieve the next set of results.
	Marker *string `locationName:"marker" type:"string"`
	// contains filtered or unexported fields
}

Contains the output from the QueryObjects action.

type RemoveTagsInput

type RemoveTagsInput struct {
	// The pipeline from which you want to remove tags.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// The keys of the tags you wish to remove.
	TagKeys []*string `locationName:"tagKeys" type:"list" required:"true"`
	// contains filtered or unexported fields
}

The input to the RemoveTags action.

type RemoveTagsOutput

type RemoveTagsOutput struct {
	// contains filtered or unexported fields
}

The result of the RemoveTags action.

type ReportTaskProgressInput

type ReportTaskProgressInput struct {
	// Key-value pairs that define the properties of the ReportTaskProgressInput
	// object.
	Fields []*Field `locationName:"fields" type:"list"`

	// Identifier of the task assigned to the task runner. This value is provided
	// in the TaskObject that the service returns with the response for the PollForTask
	// action.
	TaskID *string `locationName:"taskId" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input for the ReportTaskProgress action.

type ReportTaskProgressOutput

type ReportTaskProgressOutput struct {
	// If True, the calling task runner should cancel processing of the task. The
	// task runner does not need to call SetTaskStatus for canceled tasks.
	Canceled *bool `locationName:"canceled" type:"boolean" required:"true"`
	// contains filtered or unexported fields
}

Contains the output from the ReportTaskProgress action.

type ReportTaskRunnerHeartbeatInput

type ReportTaskRunnerHeartbeatInput struct {
	// The public DNS name of the calling task runner.
	Hostname *string `locationName:"hostname" type:"string"`

	// The identifier of the task runner. This value should be unique across your
	// AWS account. In the case of AWS Data Pipeline Task Runner launched on a resource
	// managed by AWS Data Pipeline, the web service provides a unique identifier
	// when it launches the application. If you have written a custom task runner,
	// you should assign a unique identifier for the task runner.
	TaskRunnerID *string `locationName:"taskrunnerId" type:"string" required:"true"`

	// Indicates the type of task the task runner is configured to accept and process.
	// The worker group is set as a field on objects in the pipeline when they are
	// created. You can only specify a single value for workerGroup in the call
	// to ReportTaskRunnerHeartbeat. There are no wildcard values permitted in
	// workerGroup; the string must be an exact, case-sensitive match.
	WorkerGroup *string `locationName:"workerGroup" type:"string"`
	// contains filtered or unexported fields
}

The input for the ReportTaskRunnerHeartbeat action.

type ReportTaskRunnerHeartbeatOutput

type ReportTaskRunnerHeartbeatOutput struct {
	// Indicates whether the calling task runner should terminate. If True, the
	// task runner that called ReportTaskRunnerHeartbeat should terminate.
	Terminate *bool `locationName:"terminate" type:"boolean" required:"true"`
	// contains filtered or unexported fields
}

Contains the output from the ReportTaskRunnerHeartbeat action.

type Selector

type Selector struct {
	// The name of the field that the operator will be applied to. The field name
	// is the "key" portion of the field definition in the pipeline definition syntax
	// that is used by the AWS Data Pipeline API. If the field is not set on the
	// object, the condition fails.
	FieldName *string `locationName:"fieldName" type:"string"`

	// Contains a logical operation for comparing the value of a field with a specified
	// value.
	Operator *Operator `locationName:"operator" type:"structure"`
	// contains filtered or unexported fields
}

A comparison that is used to determine whether a query should return this object.

type SetStatusInput

type SetStatusInput struct {
	// Identifies an array of objects. The corresponding objects can be either physical
	// or components, but not a mix of both types.
	ObjectIDs []*string `locationName:"objectIds" type:"list" required:"true"`

	// Identifies the pipeline that contains the objects.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// Specifies the status to be set on all the objects in objectIds. For components,
	// this can be either PAUSE or RESUME. For instances, this can be either TRY_CANCEL,
	// RERUN, or MARK_FINISHED.
	Status *string `locationName:"status" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input to the SetStatus action.

type SetStatusOutput

type SetStatusOutput struct {
	// contains filtered or unexported fields
}
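
For example, re-running a failed instance might look like the following sketch; the pipeline and object IDs are placeholders, ObjectIDs must be all components or all instances (never a mix), and strPtr is a local helper.

package main

import (
	"log"

	// Import path is assumed; adjust to your module path.
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func strPtr(s string) *string { return &s }

func main() {
	svc := datapipeline.New(nil)

	// Re-run a failed instance; RERUN applies to instances, while
	// PAUSE/RESUME apply to components.
	_, err := svc.SetStatus(&datapipeline.SetStatusInput{
		PipelineID: strPtr("df-06372391ZG65EXAMPLE"),
		ObjectIDs:  []*string{strPtr("@MyCopyActivity_2015-05-06T00:00:00")}, // placeholder instance ID
		Status:     strPtr("RERUN"),
	})
	if err != nil {
		log.Fatalf("SetStatus failed: %v", err)
	}
	log.Println("status set")
}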

type SetTaskStatusInput

type SetTaskStatusInput struct {
	// If an error occurred during the task, this value specifies the ID of that
	// error. This value is set on the physical attempt object. It is used to
	// display error information to the user. It should not start with the string
	// "Service_", which is reserved by the system.
	ErrorID *string `locationName:"errorId" type:"string"`

	// If an error occurred during the task, this value specifies a text description
	// of the error. This value is set on the physical attempt object. It is used
	// to display error information to the user. The web service does not parse
	// this value.
	ErrorMessage *string `locationName:"errorMessage" type:"string"`

	// If an error occurred during the task, this value specifies the stack trace
	// associated with the error. This value is set on the physical attempt object.
	// It is used to display error information to the user. The web service does
	// not parse this value.
	ErrorStackTrace *string `locationName:"errorStackTrace" type:"string"`

	// Identifies the task assigned to the task runner. This value is set in the
	// TaskObject that is returned by the PollForTask action.
	TaskID *string `locationName:"taskId" type:"string" required:"true"`

	// If FINISHED, the task successfully completed. If FAILED, the task ended
	// unsuccessfully. The FALSE value is used by preconditions.
	TaskStatus *string `locationName:"taskStatus" type:"string" required:"true"`
	// contains filtered or unexported fields
}

The input of the SetTaskStatus action.

type SetTaskStatusOutput

type SetTaskStatusOutput struct {
	// contains filtered or unexported fields
}

The output from the SetTaskStatus action.
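
A sketch of reporting a failed task, attaching error details for display in the console; the task ID and error fields are placeholders, and strPtr is a local helper.

package main

import (
	"log"

	// Import path is assumed; adjust to your module path.
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func strPtr(s string) *string { return &s }

func main() {
	svc := datapipeline.New(nil)

	// Report that the task ended unsuccessfully. ErrorID must not start
	// with "Service_", which is reserved by the system.
	_, err := svc.SetTaskStatus(&datapipeline.SetTaskStatusInput{
		TaskID:          strPtr("task-id-from-PollForTask"),
		TaskStatus:      strPtr("FAILED"),
		ErrorID:         strPtr("MyRunner_CopyFailed"),
		ErrorMessage:    strPtr("copy step exited with a non-zero status"),
		ErrorStackTrace: strPtr("stack trace captured by the task runner"),
	})
	if err != nil {
		log.Fatalf("SetTaskStatus failed: %v", err)
	}
	log.Println("task marked FAILED")
}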

type Tag

type Tag struct {
	// The key name of a tag defined by a user. For more information, see Controlling
	// User Access to Pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html)
	// in the AWS Data Pipeline Developer Guide.
	Key *string `locationName:"key" type:"string" required:"true"`

	// The optional value portion of a tag defined by a user. For more information,
	// see Controlling User Access to Pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html)
	// in the AWS Data Pipeline Developer Guide.
	Value *string `locationName:"value" type:"string" required:"true"`
	// contains filtered or unexported fields
}

Tags are key/value pairs defined by a user and associated with a pipeline to control access. AWS Data Pipeline allows you to associate up to ten tags per pipeline. For more information, see Controlling User Access to Pipelines (http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-control-access.html) in the AWS Data Pipeline Developer Guide.
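
For instance, a tag list for the tagging actions on this page can be built as follows; the keys, values, and strPtr helper are placeholders introduced here.

package main

import (
	"fmt"

	// Import path is assumed; adjust to your module path.
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func strPtr(s string) *string { return &s }

func main() {
	// Up to ten tags may be associated with a pipeline.
	tags := []*datapipeline.Tag{
		{Key: strPtr("environment"), Value: strPtr("production")},
		{Key: strPtr("owner"), Value: strPtr("data-team")},
	}
	fmt.Printf("prepared %d tags\n", len(tags))
}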

type TaskObject

type TaskObject struct {
	// Identifier of the pipeline task attempt object. AWS Data Pipeline uses this
	// value to track how many times a task is attempted.
	AttemptID *string `locationName:"attemptId" type:"string"`

	// Connection information for the location where the task runner will publish
	// the output of the task.
	Objects *map[string]*PipelineObject `locationName:"objects" type:"map"`

	// Identifier of the pipeline that provided the task.
	PipelineID *string `locationName:"pipelineId" type:"string"`

	// An internal identifier for the task. This ID is passed to the SetTaskStatus
	// and ReportTaskProgress actions.
	TaskID *string `locationName:"taskId" type:"string"`
	// contains filtered or unexported fields
}

Contains information about a pipeline task that is assigned to a task runner.
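
Putting the task-runner pieces together, a hedged sketch of a poll, work, report loop; PollForTask and its input/output types are documented elsewhere on this page, their field names used here (WorkerGroup, Hostname, TaskObject) are assumed from the API shapes, and doWork, the worker group, and strPtr are placeholders introduced for the example.

package main

import (
	"log"
	"time"

	// Import path is assumed; adjust to your module path.
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func strPtr(s string) *string { return &s }

// doWork stands in for whatever the task runner actually executes.
func doWork(task *datapipeline.TaskObject) error { return nil }

func main() {
	svc := datapipeline.New(nil)

	for {
		// Ask the service for a task in our worker group (placeholder name).
		polled, err := svc.PollForTask(&datapipeline.PollForTaskInput{
			WorkerGroup: strPtr("wg-12345"),
			Hostname:    strPtr("runner-1.example.com"),
		})
		if err != nil {
			log.Printf("PollForTask failed: %v", err)
			time.Sleep(30 * time.Second)
			continue
		}
		task := polled.TaskObject // nil when no work is available
		if task == nil || task.TaskID == nil {
			time.Sleep(30 * time.Second)
			continue
		}

		// Run the work, then report the outcome via SetTaskStatus using
		// the TaskID carried by the TaskObject.
		status := "FINISHED"
		if err := doWork(task); err != nil {
			status = "FAILED"
		}
		if _, err := svc.SetTaskStatus(&datapipeline.SetTaskStatusInput{
			TaskID:     task.TaskID,
			TaskStatus: strPtr(status),
		}); err != nil {
			log.Printf("SetTaskStatus failed: %v", err)
		}
	}
}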

type ValidatePipelineDefinitionInput

type ValidatePipelineDefinitionInput struct {
	// A list of parameter objects used with the pipeline.
	ParameterObjects []*ParameterObject `locationName:"parameterObjects" type:"list"`

	// A list of parameter values used with the pipeline.
	ParameterValues []*ParameterValue `locationName:"parameterValues" type:"list"`

	// Identifies the pipeline whose definition is to be validated.
	PipelineID *string `locationName:"pipelineId" type:"string" required:"true"`

	// A list of objects that define the pipeline changes to validate against the
	// pipeline.
	PipelineObjects []*PipelineObject `locationName:"pipelineObjects" type:"list" required:"true"`
	// contains filtered or unexported fields
}

The input of the ValidatePipelineDefinition action.

type ValidatePipelineDefinitionOutput

type ValidatePipelineDefinitionOutput struct {
	// If True, there were validation errors.
	Errored *bool `locationName:"errored" type:"boolean" required:"true"`

	// Lists the validation errors that were found by ValidatePipelineDefinition.
	ValidationErrors []*ValidationError `locationName:"validationErrors" type:"list"`

	// Lists the validation warnings that were found by ValidatePipelineDefinition.
	ValidationWarnings []*ValidationWarning `locationName:"validationWarnings" type:"list"`
	// contains filtered or unexported fields
}

Contains the output from the ValidatePipelineDefinition action.
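
A sketch of validating a minimal definition and printing any errors and warnings; the PipelineObject and Field types are documented elsewhere on this page, their field names (ID, Name, Fields, Key, StringValue) are assumed to follow this page's conventions, and the pipeline ID and strPtr helper are placeholders.

package main

import (
	"fmt"
	"log"

	// Import path is assumed; adjust to your module path.
	"github.com/aws/aws-sdk-go/service/datapipeline"
)

func strPtr(s string) *string { return &s }

func main() {
	svc := datapipeline.New(nil)

	// A single Default object, just to have something to validate.
	defaultObj := &datapipeline.PipelineObject{
		ID:   strPtr("Default"),
		Name: strPtr("Default"),
		Fields: []*datapipeline.Field{
			{Key: strPtr("scheduleType"), StringValue: strPtr("ondemand")},
		},
	}

	out, err := svc.ValidatePipelineDefinition(&datapipeline.ValidatePipelineDefinitionInput{
		PipelineID:      strPtr("df-06372391ZG65EXAMPLE"), // placeholder
		PipelineObjects: []*datapipeline.PipelineObject{defaultObj},
	})
	if err != nil {
		log.Fatalf("ValidatePipelineDefinition failed: %v", err)
	}

	for _, e := range out.ValidationErrors {
		for _, msg := range e.Errors {
			fmt.Printf("error on %s: %s\n", *e.ID, *msg)
		}
	}
	for _, w := range out.ValidationWarnings {
		for _, msg := range w.Warnings {
			fmt.Printf("warning on %s: %s\n", *w.ID, *msg)
		}
	}
	if out.Errored != nil && *out.Errored {
		log.Fatal("definition has validation errors")
	}
	fmt.Println("definition is valid")
}

Because Errored is a required member of the output, it can serve as the single pass/fail check once the individual errors and warnings have been listed.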

type ValidationError

type ValidationError struct {
	// A description of the validation error.
	Errors []*string `locationName:"errors" type:"list"`

	// The identifier of the object that contains the validation error.
	ID *string `locationName:"id" type:"string"`
	// contains filtered or unexported fields
}

Defines a validation error returned by PutPipelineDefinition or ValidatePipelineDefinition. Validation errors prevent pipeline activation. The set of validation errors that can be returned is defined by AWS Data Pipeline.

type ValidationWarning

type ValidationWarning struct {
	// The identifier of the object that contains the validation warning.
	ID *string `locationName:"id" type:"string"`

	// A description of the validation warning.
	Warnings []*string `locationName:"warnings" type:"list"`
	// contains filtered or unexported fields
}

Defines a validation warning returned by PutPipelineDefinition or ValidatePipelineDefinition. Validation warnings do not prevent pipeline activation. The set of validation warnings that can be returned is defined by AWS Data Pipeline.

Directories

Path Synopsis
