tcqueue

package
v64.2.5
Published: Apr 16, 2024 License: MPL-2.0 Imports: 11 Imported by: 0

Documentation

Overview

The queue service is responsible for accepting tasks and tracking their state as they are executed by workers, in order to ensure they are eventually resolved.

## Artifact Storage Types

* **Object artifacts** contain arbitrary data, stored via the object service.
* **Redirect artifacts** redirect the caller to a URL when fetched, with a 303 (See Other) response. Clients will not apply any kind of authentication to that URL.
* **Link artifacts** are treated as if the caller requested the linked artifact on the same task. Links may be chained, but cycles are forbidden. The caller must have scopes for the linked artifact, or a 403 response will be returned.
* **Error artifacts** consist only of meta-data which the queue will store for you. These artifacts are only meant to indicate that the worker or the task failed to generate a specific artifact that it would otherwise have uploaded. For example, docker-worker will upload an error artifact if the file it was supposed to upload doesn't exist or turns out to be a directory. Clients requesting an error artifact will get a `424` (Failed Dependency) response. This is mainly designed to ensure that dependent tasks can distinguish between artifacts that were supposed to be generated and artifacts for which the name is misspelled.
* **S3 artifacts** are used for static files which will be stored on S3. When creating an S3 artifact the queue will return a pre-signed URL to which you can make a `PUT` request to upload your artifact (see the sketch after this list). Note that the `PUT` request **must** specify the `content-length` header and **must** give the `content-type` header the same value as in the request to `createArtifact`. S3 artifacts will be deprecated soon, and users should prefer object artifacts instead.
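
A minimal sketch of the `PUT` upload for an S3 artifact. `putURL`, `contentType`, and `data` are assumptions: the URL and content type would come from a prior `createArtifact` call with `storageType` "s3", and `data` holds the artifact bytes.

// Requires "bytes" and "net/http".
req, err := http.NewRequest(http.MethodPut, putURL, bytes.NewReader(data))
if err != nil {
	// handle error...
}
req.ContentLength = int64(len(data))        // PUT must carry content-length (NewRequest also sets this for a bytes.Reader)
req.Header.Set("Content-Type", contentType) // must match the value given to createArtifact
resp, err := http.DefaultClient.Do(req)
if err != nil {
	// handle error...
}
defer resp.Body.Close()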

## Artifact immutability

Generally speaking, an artifact cannot be overwritten once created. However, repeating the request with exactly the same properties will succeed, as the operation is idempotent. This is useful if you need to refresh a signed URL while uploading. Do not abuse this to overwrite artifacts created by another entity, such as a worker-host overwriting an artifact created by worker-code.

The queue defines the following *immutability special cases*:

* A `reference` artifact can replace an existing `reference` artifact.
* A `link` artifact can replace an existing `reference` artifact.
* Any artifact's `expires` can be extended (made later, but not earlier).

How to use this package

First create a Queue object:

queue := tcqueue.New(nil, "http://localhost:1234/my/taskcluster")

and then call one or more of queue's methods, e.g.:

err := queue.Ping(.....)

handling any errors...

if err != nil {
	// handle error...
}

Taskcluster Schema

The source code of this Go package was auto-generated from the API definition at <rootUrl>/references/queue/v1/api.json, together with the input and output schemas it references.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Action

type Action struct {

	// Actions have a "context" that is one of provisioner, worker-type, or worker, indicating
	// which it applies to. `context` is used by the front-end to know where to display the action.
	//
	// | `context`   | Page displayed        |
	// |-------------|-----------------------|
	// | provisioner | Provisioner Explorer  |
	// | worker-type | Workers Explorer      |
	// | worker      | Worker Explorer       |
	//
	// Possible values:
	//   * "provisioner"
	//   * "worker-type"
	//   * "worker"
	Context string `json:"context"`

	// Description of the provisioner.
	Description string `json:"description"`

	// Method to indicate the desired action to be performed for a given resource.
	//
	// Possible values:
	//   * "POST"
	//   * "PUT"
	//   * "DELETE"
	//   * "PATCH"
	Method string `json:"method"`

	// Short names for things like logging/error messages.
	Name string `json:"name"`

	// Appropriate title for any sort of Modal prompt.
	Title json.RawMessage `json:"title"`

	// When an action is triggered, a request is made using the `url` and `method`.
	// Depending on the `context`, the following parameters will be substituted in the url:
	//
	// | `context`   | Path parameters                                          |
	// |-------------|----------------------------------------------------------|
	// | provisioner | <provisionerId>                                          |
	// | worker-type | <provisionerId>, <workerType>                            |
	// | worker      | <provisionerId>, <workerType>, <workerGroup>, <workerId> |
	//
	// _Note: The request needs to be signed with the user's Taskcluster credentials._
	URL string `json:"url"`
}

Actions provide a generic mechanism to expose additional features of a provisioner, worker type, or worker to Taskcluster clients.

An action is comprised of metadata describing the feature it exposes, together with a webhook for triggering it.

The Taskcluster tools site, for example, retrieves actions when displaying provisioners, worker types and workers. It presents the provisioner/worker type/worker specific actions to the user. When the user triggers an action, the web client takes the registered webhook, substitutes parameters into the URL (see `url`), signs the requests with the Taskcluster credentials of the user operating the web interface, and issues the HTTP request.

The level to which the action relates (provisioner, worker type, worker) is called the action context. All actions, regardless of the action contexts, are registered against the provisioner when calling `queue.declareProvisioner`.

The action context is used by the web client to determine where in the web interface to present the action to the user as follows:

| `context`   | Tool where action is displayed |
|-------------|--------------------------------|
| provisioner | Provisioner Explorer           |
| worker-type | Workers Explorer               |
| worker      | Worker Explorer                |

See [actions docs](/docs/reference/platform/taskcluster-queue/docs/actions) for more information.

type Artifact

type Artifact struct {

	// Expected content-type of the artifact.  This is informational only:
	// it is suitable for use to choose an icon for the artifact, for example.
	// The accurate content-type of the artifact can only be determined by
	// downloading it.
	//
	// Max length: 255
	ContentType string `json:"contentType"`

	// Date and time after which the artifact created will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Name of the artifact that was created, this is useful if you want to
	// attempt to fetch the artifact.
	//
	// Max length: 1024
	Name string `json:"name"`

	// This is the `storageType` for the request that was used to create
	// the artifact.
	//
	// Possible values:
	//   * "s3"
	//   * "object"
	//   * "reference"
	//   * "link"
	//   * "error"
	StorageType string `json:"storageType"`
}

Information about an artifact

type CancelTaskGroupResponse

type CancelTaskGroupResponse struct {

	// Total number of tasks that were cancelled with this call.
	// It includes all non-resolved tasks.
	//
	// Minimum:    0
	CancelledCount int64 `json:"cancelledCount"`

	// Identifier for the task-group being listed.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskGroupID string `json:"taskGroupId"`

	// Current count of tasks in the task group.
	//
	// Minimum:    0
	TaskGroupSize int64 `json:"taskGroupSize"`

	// List of `taskIds` cancelled by this call.
	//
	// Array items:
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskIds []string `json:"taskIds"`
}

Response from a `cancelTaskGroup` request.

type ClaimWorkRequest

type ClaimWorkRequest struct {

	// Number of tasks to attempt to claim.
	//
	// Default:    1
	// Minimum:    1
	// Maximum:    32
	Tasks int64 `json:"tasks"`

	// Identifier for group that worker claiming the task is a part of.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup"`

	// Identifier for worker within the given workerGroup
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId"`
}

Request to claim a task for a worker to process.

type ClaimWorkResponse

type ClaimWorkResponse struct {

	// List of task claims; may be empty if no tasks were claimed, in which case
	// the worker should sleep a tiny bit before polling again.
	Tasks []TaskClaim `json:"tasks"`
}

Response to an attempt to claim tasks for a worker to process.

type CountPendingTasksResponse

type CountPendingTasksResponse struct {

	// An approximate number of pending tasks for the given `provisionerId` and
	// `workerType`. The number reported here may be higher than the actual
	// number of pending tasks, but there cannot be more pending tasks than
	// reported here. I.e. this is an **upper-bound** on the number of pending tasks.
	//
	// Minimum:    0
	PendingTasks int64 `json:"pendingTasks"`

	// Unique identifier for a provisioner, that can supply specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId"`

	// Unique identifier for a worker-type within a specific
	// provisioner. Deprecation is planned for this property as it will
	// be replaced, together with `provisionerId`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	WorkerType string `json:"workerType"`
}

Response to a request for the number of pending tasks for a given `provisionerId` and `workerType`.

type ErrorArtifactRequest

type ErrorArtifactRequest struct {

	// Date-time after which the queue should stop replying with the error
	// and forget about the artifact.
	Expires tcclient.Time `json:"expires"`

	// Human readable explanation of why the artifact is missing
	//
	// Max length: 4096
	Message string `json:"message"`

	// Reason why the artifact doesn't exist.
	//
	// Possible values:
	//   * "file-missing-on-worker"
	//   * "invalid-resource-on-worker"
	//   * "too-large-file-on-worker"
	//   * "file-not-readable-on-worker"
	Reason string `json:"reason"`

	// Artifact storage type, in this case `error`
	//
	// Possible values:
	//   * "error"
	StorageType string `json:"storageType"`
}

Request the queue to reply `424` (Failed Dependency) with `reason` and `message` to any `GET` request for this artifact. This is mainly useful as a way for a task to declare that it failed to provide an artifact it wanted to upload.
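
As a sketch only: a worker might report a missing output file like this. `queue`, `taskId`, and `runId` are assumed to identify a run the worker has claimed, and the artifact name, message, and expiry are illustrative. The request is marshalled into a `PostArtifactRequest` (a `json.RawMessage`) before calling `CreateArtifact`.

// Requires "encoding/json" and "time"; tcclient.Time is assumed to wrap time.Time.
errReq := tcqueue.ErrorArtifactRequest{
	StorageType: "error",
	Reason:      "file-missing-on-worker",
	Message:     "expected output file was not produced by the task",
	Expires:     tcclient.Time(time.Now().AddDate(0, 0, 7)),
}
body, err := json.Marshal(&errReq)
if err != nil {
	// handle error...
}
payload := tcqueue.PostArtifactRequest(body)
_, err = queue.CreateArtifact(taskId, runId, "public/results/output.json", &payload)
if err != nil {
	// handle error...
}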

type ErrorArtifactResponse

type ErrorArtifactResponse struct {

	// Artifact storage type, in this case `error`
	//
	// Possible values:
	//   * "error"
	StorageType string `json:"storageType"`
}

Response to a request for the queue to reply `424` (Failed Dependency) with `reason` and `message` to any `GET` request for this artifact.

type FinishArtifactRequest

type FinishArtifactRequest struct {

	// The uploadId from `createArtifact`.  Supplying this value provides an
	// additional check, beyond scopes, that the caller was the entity that
	// uploaded the data.  This must be specified for `storageType: object`.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	UploadID string `json:"uploadId"`
}

Request body for `finishArtifact`

type GetArtifactContentResponse

type GetArtifactContentResponse json.RawMessage

Response to the `artifact` and `latestArtifact` methods. It is one of the following types, as identified by the `storageType` property.

One of:

  • GetArtifactContentResponse1
  • GetArtifactContentResponse2
  • GetArtifactContentResponse3
  • GetArtifactContentResponse4

func (*GetArtifactContentResponse) MarshalJSON

func (m *GetArtifactContentResponse) MarshalJSON() ([]byte, error)

MarshalJSON calls json.RawMessage method of the same name. Required since GetArtifactContentResponse is of type json.RawMessage...

func (*GetArtifactContentResponse) UnmarshalJSON

func (m *GetArtifactContentResponse) UnmarshalJSON(data []byte) error

UnmarshalJSON is a copy of the json.RawMessage implementation.
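
Since the response is a raw JSON message, callers typically peek at `storageType` and then unmarshal into the matching concrete type. The helper below is a hypothetical sketch (not part of this package) that extracts a download URL for the `s3` and `reference` variants and surfaces `error` artifacts as Go errors.

// Requires "encoding/json" and "fmt".
func artifactURL(content *tcqueue.GetArtifactContentResponse) (string, error) {
	var probe struct {
		StorageType string `json:"storageType"`
	}
	if err := json.Unmarshal(*content, &probe); err != nil {
		return "", err
	}
	switch probe.StorageType {
	case "s3":
		var r tcqueue.GetArtifactContentResponse1
		if err := json.Unmarshal(*content, &r); err != nil {
			return "", err
		}
		return r.URL, nil
	case "reference":
		var r tcqueue.GetArtifactContentResponse3
		if err := json.Unmarshal(*content, &r); err != nil {
			return "", err
		}
		return r.URL, nil
	case "error":
		var r tcqueue.GetArtifactContentResponse4
		if err := json.Unmarshal(*content, &r); err != nil {
			return "", err
		}
		return "", fmt.Errorf("error artifact (%s): %s", r.Reason, r.Message)
	default:
		// "object" artifacts must be fetched via the object service instead.
		return "", fmt.Errorf("no plain URL for storageType %q", probe.StorageType)
	}
}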

type GetArtifactContentResponse1

type GetArtifactContentResponse1 struct {

	// Constant value: "s3"
	StorageType string `json:"storageType"`

	// URL from which to download the artifact.  This may be a URL for a bucket or
	// a CDN, and may or may not be signed, depending on server configuration.
	URL string `json:"url"`
}

Response to the `artifact` and `latestArtifact` methods. It is one of the following types, as identified by the `storageType` property.

type GetArtifactContentResponse2

type GetArtifactContentResponse2 struct {

	// Temporary credentials for access to the object service.
	//
	// These credentials are used both to download artifacts from the object service
	// (`getArtifactContent`) and to upload artifacts (`createArtifact`).
	Credentials ObjectServiceCredentials `json:"credentials"`

	// Name of the object on the object service.
	//
	// Syntax:     ^[\x20-\x7e]+$
	Name string `json:"name"`

	// Constant value: "object"
	StorageType string `json:"storageType"`
}

An object name, and credentials to use to access that object on the object service. The credentials expire one hour after this call; this should allow ample time for retries, slow downloads, and clock skew.

type GetArtifactContentResponse3

type GetArtifactContentResponse3 struct {

	// Constant value: "reference"
	StorageType string `json:"storageType"`

	// Referenced URL
	URL string `json:"url"`
}

Response to the `artifact` and `latestArtifact` methods. It is one of the following types, as identified by the `storageType` property.

type GetArtifactContentResponse4

type GetArtifactContentResponse4 struct {

	// Error message
	Message string `json:"message"`

	// Error reason
	Reason string `json:"reason"`

	// Constant value: "error"
	StorageType string `json:"storageType"`
}

Response to the `artifact` and `latestArtifact` methods. It is one of the following types, as identified by the `storageType` property.

type GetArtifactResponse

type GetArtifactResponse struct {

	// Artifact storage type.  Note that this is also available in the
	// `x-taskcluster-artifact-storage-type` header.
	StorageType string `json:"storageType"`

	// URL from which to download the artifact
	URL string `json:"url"`
}

Response to the `getArtifact` method. This method returns a simple URL from which the artifact data can be read. Note that this response is provided as the body of an HTTP 303 response, so clients which automatically follow redirects may not see this content.

type HTTPRetryError

type HTTPRetryError = internal.HTTPRetryError

type LinkArtifactRequest

type LinkArtifactRequest struct {

	// Name of the artifact to which to link.
	Artifact string `json:"artifact"`

	// Expected content-type of the artifact.  This is informational only:
	// it is suitable for use to choose an icon for the artifact, for example.
	// The accurate content-type of the artifact can only be determined by
	// downloading it.  If this value is not provided, it will default to
	// `application/binary`.
	//
	// Max length: 255
	ContentType string `json:"contentType,omitempty"`

	// Date-time after which the queue should no longer maintain this link.
	Expires tcclient.Time `json:"expires"`

	// Artifact storage type, in this case `link`
	//
	// Possible values:
	//   * "link"
	StorageType string `json:"storageType"`
}

Request the queue to link this artifact to the named artifact on the same task. When a client fetches this artifact, the request will be treated as if the client fetched the linked artifact (including corresponding scope validation). Note that the target artifact need not exist when this artifact is created. It is allowed to create link cycles, but they will result in a 400 response when fetched.

type LinkArtifactResponse

type LinkArtifactResponse struct {

	// Artifact storage type, in this case `link`
	//
	// Possible values:
	//   * "link"
	StorageType string `json:"storageType"`
}

Response for an artifact with `storageType` `link`.

type ListArtifactsResponse

type ListArtifactsResponse struct {

	// List of artifacts for given `taskId` and `runId`.
	Artifacts []Artifact `json:"artifacts"`

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of artifacts.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called with `continuationToken` until you get a
	// result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`
}

List of artifacts for a given `taskId` and `runId`.

type ListClaimedTasksResponse

type ListClaimedTasksResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of dependent tasks.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called `listClaimedTasks` with
	// `continuationToken` until you get a result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// List of tasks that are currently claimed by workers and are not yet resolved.
	// Results might not represent the actual state of the tasks,
	// as they might be currently resolved by a worker or claim-resolver.
	//
	// Tasks are returned by claimed time, with the oldest claimed tasks first.
	Tasks []Var3 `json:"tasks"`
}

Response from a `listClaimedTasks` request.

type ListDependentTasksResponse

type ListDependentTasksResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of dependent tasks.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called `listDependentTasks` with
	// `continuationToken` until you get a result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// Identifier for the task whose dependents are being listed.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskID string `json:"taskId"`

	// List of tasks that have `taskId` in the `task.dependencies` property.
	Tasks []TaskDefinitionAndStatus `json:"tasks"`
}

Response from a `listDependentTasks` request.

type ListPendingTasksResponse

type ListPendingTasksResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of dependent tasks.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called `listPendingTasks` with
	// `continuationToken` until you get a result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// List of tasks that are currently waiting for workers to be claimed.
	// Results may not represent the actual state of the tasks,
	// as they might be actively claimed by a worker.
	//
	// Tasks are returned in inserted order.
	Tasks []Var2 `json:"tasks"`
}

Response from a `listPendingTasks` request.

type ListProvisionersResponse

type ListProvisionersResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of provisioners.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called with `continuationToken` until you get a
	// result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	Provisioners []ProvisionerInformation `json:"provisioners"`
}

type ListTaskGroupResponse

type ListTaskGroupResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of tasks in the task-group.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called `listTaskGroup` with `continuationToken`
	// until you get a result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// Date and time after the last expiration of any task in the task group.
	// For an unsealed task group this could change to a later date.
	Expires tcclient.Time `json:"expires"`

	// All tasks in a task group must have the same `schedulerId`. This is used for several purposes:
	//
	// * it can represent the entity that created the task;
	// * it can limit addition of new tasks to a task group: the caller of
	//     `createTask` must have a scope related to the `schedulerId` of the task
	//     group;
	// * it controls who can manipulate tasks, again by requiring
	//     `schedulerId`-related scopes; and
	// * it appears in the routing key for Pulse messages about the task.
	//
	// Default:    "-"
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	SchedulerID string `json:"schedulerId"`

	// Empty or date and time when task group was sealed.
	Sealed tcclient.Time `json:"sealed,omitempty"`

	// Identifier for the task-group.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskGroupID string `json:"taskGroupId"`

	// List of tasks in this task-group.
	Tasks []TaskDefinitionAndStatus `json:"tasks"`
}

Response from a `listTaskGroup` request.

type ListTaskQueuesResponse

type ListTaskQueuesResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of task-queues.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called `listTaskQueues` with `continuationToken`
	// until you get a result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// List of all task-queues.
	TaskQueues []TaskQueue `json:"taskQueues"`
}

Response from a `listTaskQueues` request.

type ListWorkerTypesResponse

type ListWorkerTypesResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of worker-types in the provisioner.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called `listWorkerTypes` with `continuationToken`
	// until you get a result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// List of worker-types in this provisioner.
	WorkerTypes []WorkerType `json:"workerTypes"`
}

Response from a `listWorkerTypes` request.

type ListWorkersResponse

type ListWorkersResponse struct {

	// Opaque `continuationToken` to be given as query-string option to get the
	// next set of workers in the worker-type.
	// This property is only present if another request is necessary to fetch all
	// results. In practice the next request with a `continuationToken` may not
	// return additional results, but it can. Thus, you can only be sure to have
	// all the results if you've called `listWorkers` with `continuationToken`
	// until you get a result without a `continuationToken`.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// List of workers in this worker-type.
	Workers []Worker `json:"workers"`
}

Response from a `listWorkers` request.

type ObjectArtifactRequest

type ObjectArtifactRequest struct {

	// Artifact content type.  This is advisory in nature and can be used,
	// for example, to select appropriate icons to display artifact links.
	//
	// Max length: 255
	ContentType string `json:"contentType"`

	// Date-time after which the artifact should be deleted.
	Expires tcclient.Time `json:"expires"`

	// Artifact storage type, in this case `'object'`
	//
	// Possible values:
	//   * "object"
	StorageType string `json:"storageType"`
}

Request to create an artifact via the object service.

type ObjectArtifactResponse

type ObjectArtifactResponse struct {

	// Temporary credentials for access to the object service.
	//
	// These credentials are used both to download artifacts from the object service
	// (`getArtifactContent`) and to upload artifacts (`createArtifact`).
	Credentials ObjectServiceCredentials `json:"credentials"`

	// Expiration time for the artifact.
	Expires tcclient.Time `json:"expires"`

	// Name of the object on the object service.
	//
	// Syntax:     ^[\x20-\x7e]+$
	Name string `json:"name"`

	// Project identifier.
	//
	// Syntax:     ^([a-zA-Z0-9._/-]*)$
	// Min length: 1
	// Max length: 500
	ProjectID string `json:"projectId"`

	// Artifact storage type, in this case `'object'`
	//
	// Possible values:
	//   * "object"
	StorageType string `json:"storageType"`

	// Unique identifier for this upload.   Once an object is created with an uploadId,
	// uploads of the same object with different uploadIds will be rejected.  Callers
	// should pass a randomly-generated slugid here.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	UploadID string `json:"uploadId"`
}

Information supporting uploading an object to the object service. This consists of an object name and uploadId, together with credentials allowing an upload of the designated object to the service. The resulting credentials are valid for 24 hours or until the artifact expires, whichever is shorter, allowing ample time for any method negotiation, retries, and so on. The caller should call `object.createUpload` with the given credentials, and perform the upload. Note that the `uploadId`, `projectId`, and `expires` given to `createUpload` must match those in this response. The caller should call `object.finishUpload` when the upload is finished, at which point the object is immutable and the credentials are no longer useful.
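
A hedged sketch of that flow from the queue side: wrap an `ObjectArtifactRequest` into a `PostArtifactRequest`, decode the response into an `ObjectArtifactResponse`, perform the upload through the object service (left as a comment, since it happens outside this package), and finally call `FinishArtifact`. The task identifiers, artifact name, content type, and expiry are assumptions.

// Requires "encoding/json" and "time"; tcclient.Time is assumed to wrap time.Time.
objReq := tcqueue.ObjectArtifactRequest{
	StorageType: "object",
	ContentType: "application/json",
	Expires:     tcclient.Time(time.Now().AddDate(0, 1, 0)),
}
body, err := json.Marshal(&objReq)
if err != nil {
	// handle error...
}
payload := tcqueue.PostArtifactRequest(body)
rawResp, err := queue.CreateArtifact(taskId, runId, "public/results.json", &payload)
if err != nil {
	// handle error...
}
var objResp tcqueue.ObjectArtifactResponse
if err := json.Unmarshal(*rawResp, &objResp); err != nil {
	// handle error...
}

// Upload the data via the object service (for example with its Go client),
// passing objResp.Name, objResp.ProjectID, objResp.UploadID, objResp.Expires
// and objResp.Credentials, then mark the artifact as present:

err = queue.FinishArtifact(taskId, runId, "public/results.json", &tcqueue.FinishArtifactRequest{
	UploadID: objResp.UploadID,
})
if err != nil {
	// handle error...
}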

type ObjectServiceCredentials

type ObjectServiceCredentials struct {

	// The `accessToken` for the temporary credentials.
	//
	// Min length: 1
	AccessToken string `json:"accessToken"`

	// The `certificate` for the temporary credentials.
	//
	// Min length: 1
	Certificate string `json:"certificate"`

	// The `clientId` for the temporary credentials.
	//
	// Min length: 1
	ClientID string `json:"clientId"`
}

Temporary credentials for access to the object service.

These credentials are used both to download artifacts from the object service (`getArtifactContent`) and to upload artifacts (`createArtifact`).

type PostArtifactRequest

type PostArtifactRequest json.RawMessage

Request authorization to put an artifact, or to post a URL as an artifact. Note that the `storageType` property is referenced in the response as well.

One of:

  • S3ArtifactRequest
  • ObjectArtifactRequest
  • RedirectArtifactRequest
  • LinkArtifactRequest
  • ErrorArtifactRequest

func (*PostArtifactRequest) MarshalJSON

func (m *PostArtifactRequest) MarshalJSON() ([]byte, error)

MarshalJSON calls json.RawMessage method of the same name. Required since PostArtifactRequest is of type json.RawMessage...

func (*PostArtifactRequest) UnmarshalJSON

func (m *PostArtifactRequest) UnmarshalJSON(data []byte) error

UnmarshalJSON is a copy of the json.RawMessage implementation.

type PostArtifactResponse

type PostArtifactResponse json.RawMessage

Response to a request for posting an artifact. Note that the `storageType` property is referenced in the request as well.

One of:

  • S3ArtifactResponse
  • ObjectArtifactResponse
  • RedirectArtifactResponse
  • LinkArtifactResponse
  • ErrorArtifactResponse

func (*PostArtifactResponse) MarshalJSON

func (m *PostArtifactResponse) MarshalJSON() ([]byte, error)

MarshalJSON calls json.RawMessage method of the same name. Required since PostArtifactResponse is of type json.RawMessage...

func (*PostArtifactResponse) UnmarshalJSON

func (m *PostArtifactResponse) UnmarshalJSON(data []byte) error

UnmarshalJSON is a copy of the json.RawMessage implementation.

type ProvisionerInformation

type ProvisionerInformation struct {

	// See taskcluster [actions](/docs/reference/platform/taskcluster-queue/docs/actions) documentation.
	Actions []Action `json:"actions"`

	// Description of the provisioner.
	Description string `json:"description"`

	// Date and time after which the provisioner created will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Date and time when the provisioner was last seen active
	LastDateActive tcclient.Time `json:"lastDateActive"`

	// Unique identifier for a provisioner, that can supply specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// This is the stability of the provisioner. Accepted values:
	//  * `experimental`
	//  * `stable`
	//  * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability"`
}

type ProvisionerRequest

type ProvisionerRequest struct {

	// See taskcluster [actions](/docs/reference/platform/taskcluster-queue/docs/actions) documentation.
	Actions []Action `json:"actions,omitempty"`

	// Description of the provisioner.
	Description string `json:"description,omitempty"`

	// Date and time after which the provisioner will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires,omitempty"`

	// This is the stability of the provisioner. Accepted values:
	//   * `experimental`
	//   * `stable`
	//   * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability,omitempty"`
}

Request to update a provisioner.

type ProvisionerResponse

type ProvisionerResponse struct {

	// See taskcluster [actions](/docs/reference/platform/taskcluster-queue/docs/actions) documentation.
	Actions []Action `json:"actions"`

	// Description of the provisioner.
	Description string `json:"description"`

	// Date and time after which the provisioner will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Date of the last time this provisioner was seen active. `lastDateActive` is updated every half hour
	// but may be off by up to half an hour. Nonetheless, `lastDateActive` is a good indicator
	// of when the provisioner was last seen active.
	LastDateActive tcclient.Time `json:"lastDateActive"`

	// Unique identifier for a provisioner, that can supply specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// This is the stability of the provisioner. Accepted values:
	//   * `experimental`
	//   * `stable`
	//   * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability"`
}

Response containing information about a provisioner.

type QuarantineDetails

type QuarantineDetails struct {

	// The clientId of the client that made the request to quarantine the worker.
	ClientID string `json:"clientId"`

	// Usually a reason for the quarantine.
	QuarantineInfo string `json:"quarantineInfo"`

	// Value of the worker's quarantineUntil property at the moment of the quarantine.
	QuarantineUntil tcclient.Time `json:"quarantineUntil"`

	// Time when the quarantine was updated.
	UpdatedAt tcclient.Time `json:"updatedAt"`
}

Information about when and why a worker was quarantined.

type QuarantineWorkerRequest

type QuarantineWorkerRequest struct {

	// A message to be included in the worker's quarantine details. This message will be
	// appended to the existing quarantine details to keep a history of the worker's quarantine.
	//
	// Min length: 0
	// Max length: 4000
	QuarantineInfo string `json:"quarantineInfo,omitempty"`

	// Quarantining a worker allows the machine to remain alive but not accept jobs.
	// Once the quarantineUntil time has elapsed, the worker resumes accepting jobs.
	// Note that a quarantine can be lifted by setting `quarantineUntil` to the present time (or
	// somewhere in the past).
	QuarantineUntil tcclient.Time `json:"quarantineUntil"`
}

Request to update a worker's quarantineUntil property.

type Queue

type Queue tcclient.Client

func New

func New(credentials *tcclient.Credentials, rootURL string) *Queue

New returns a Queue client, configured to run against production. Pass in nil credentials to create a client without authentication. The returned client is mutable, so returned settings can be altered.

queue := tcqueue.New(
    nil,                                      // client without authentication
    "http://localhost:1234/my/taskcluster",   // taskcluster hosted at this root URL on local machine
)
err := queue.Ping(.....)                      // for example, call the Ping(.....) API endpoint (described further down)...
if err != nil {
	// handle errors...
}

func NewFromEnv

func NewFromEnv() *Queue

NewFromEnv returns a *Queue configured from environment variables.

The root URL is taken from TASKCLUSTER_PROXY_URL if set to a non-empty string, otherwise from TASKCLUSTER_ROOT_URL if set, otherwise the empty string.

The credentials are taken from environment variables:

TASKCLUSTER_CLIENT_ID
TASKCLUSTER_ACCESS_TOKEN
TASKCLUSTER_CERTIFICATE

If TASKCLUSTER_CLIENT_ID is empty/unset, authentication will be disabled.
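
A minimal sketch, assuming the variables above are already exported (for example inside a task running behind taskcluster-proxy); the task queue ID used in the call is illustrative.

queue := tcqueue.NewFromEnv()
if _, err := queue.GetTaskQueue("proj-example/small"); err != nil {
	// handle error...
}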

func (*Queue) Artifact

func (queue *Queue) Artifact(taskId, runId, name string) (*GetArtifactContentResponse, error)

Returns information about the content of the artifact, in the given task run.

Depending on the storage type, the endpoint returns the content of the artifact or enough information to access that content.

This method follows link artifacts, so it will not return content for a link artifact.

Required scopes:

For name in names each queue:get-artifact:<name>

See #artifact

func (*Queue) ArtifactInfo

func (queue *Queue) ArtifactInfo(taskId, runId, name string) (*Artifact, error)

Returns associated metadata for a given artifact, in the given task run. The metadata is the same as that returned from `listArtifacts`, and does not grant access to the artifact data.

Note that this method does *not* automatically follow link artifacts.

Required scopes:

queue:list-artifacts:<taskId>:<runId>

See #artifactInfo

func (*Queue) ArtifactInfo_SignedURL

func (queue *Queue) ArtifactInfo_SignedURL(taskId, runId, name string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ArtifactInfo, valid for the specified duration.

Required scopes:

queue:list-artifacts:<taskId>:<runId>

See ArtifactInfo for more details.

func (*Queue) Artifact_SignedURL

func (queue *Queue) Artifact_SignedURL(taskId, runId, name string, duration time.Duration) (*url.URL, error)

Returns a signed URL for Artifact, valid for the specified duration.

Required scopes:

For name in names each queue:get-artifact:<name>

See Artifact for more details.
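
Signed URLs let a caller without Taskcluster credentials reach this endpoint for a limited time. A small sketch, with `taskId`, `runId`, and the artifact name assumed:

// Requires "net/http" and "time".
u, err := queue.Artifact_SignedURL(taskId, runId, "public/logs/live.log", 15*time.Minute)
if err != nil {
	// handle error...
}
resp, err := http.Get(u.String()) // no Taskcluster credentials needed while the URL is valid
if err != nil {
	// handle error...
}
defer resp.Body.Close()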

func (*Queue) CancelTask

func (queue *Queue) CancelTask(taskId string) (*TaskStatusResponse, error)

This method will cancel a task that is either `unscheduled`, `pending` or `running`. It will resolve the current run as `exception` with `reasonResolved` set to `canceled`. If the task isn't scheduled yet, i.e. it doesn't have any runs, an initial run will be added and resolved as described above. Hence, after canceling a task, it cannot be scheduled with `queue.scheduleTask`, but a new run can be created with `queue.rerun`. These semantics are equivalent to calling `queue.scheduleTask` immediately followed by `queue.cancelTask`.

**Remark** this operation is idempotent, if you try to cancel a task that isn't `unscheduled`, `pending` or `running`, this operation will just return the current task status.

Required scopes:

Any of:
- queue:cancel-task:<schedulerId>/<taskGroupId>/<taskId>
- queue:cancel-task-in-project:<projectId>
- All of:
  * queue:cancel-task
  * assume:scheduler-id:<schedulerId>/<taskGroupId>

See #cancelTask

func (*Queue) CancelTaskGroup

func (queue *Queue) CancelTaskGroup(taskGroupId string) (*CancelTaskGroupResponse, error)

Stability: *** EXPERIMENTAL ***

This method will cancel all unresolved tasks (`unscheduled`, `pending` or `running` states) with the given `taskGroupId`. Behaviour is similar to the `cancelTask` method.

It is only possible to cancel a task group if it has been sealed using `sealTaskGroup`. If the task group is not sealed, this method will return a 409 response.

It is possible to rerun a canceled task which will result in a new run. Calling `cancelTaskGroup` again in this case will only cancel the new run. Other tasks that were already canceled would not be canceled again.

Required scopes:

queue:cancel-task-group:<schedulerId>/<taskGroupId>

See #cancelTaskGroup

func (*Queue) ClaimTask

func (queue *Queue) ClaimTask(taskId, runId string, payload *TaskClaimRequest) (*TaskClaimResponse, error)

Stability: *** DEPRECATED ***

claim a task - never documented

Required scopes:

All of:
* queue:claim-task:<provisionerId>/<workerType>
* queue:worker-id:<workerGroup>/<workerId>

See #claimTask

func (*Queue) ClaimWork

func (queue *Queue) ClaimWork(taskQueueId string, payload *ClaimWorkRequest) (*ClaimWorkResponse, error)

Claim pending task(s) for the given task queue.

If any work is available (even if fewer than the requested number of tasks), this will return immediately. Otherwise, it will block for tens of seconds waiting for work. If no work appears, it will return an empty list of tasks. Callers should sleep a short while (to avoid denial of service in an error condition) and call the endpoint again. This is a simple implementation of "long polling".

Required scopes:

All of:
* queue:claim-work:<taskQueueId>
* queue:worker-id:<workerGroup>/<workerId>

See #claimWork
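
A sketch of the long-polling loop described above; `taskQueueId`, `workerGroup`, `workerId`, and `process` are assumptions standing in for real worker configuration and logic.

// Requires "time".
for {
	resp, err := queue.ClaimWork(taskQueueId, &tcqueue.ClaimWorkRequest{
		Tasks:       4, // claim up to 4 tasks per call (maximum 32)
		WorkerGroup: workerGroup,
		WorkerID:    workerId,
	})
	if err != nil {
		time.Sleep(30 * time.Second) // back off on errors to avoid hammering the queue
		continue
	}
	if len(resp.Tasks) == 0 {
		time.Sleep(5 * time.Second) // long poll found nothing; sleep briefly before retrying
		continue
	}
	for _, claim := range resp.Tasks {
		process(claim) // hypothetical: run the task, reclaim as needed, and resolve it
	}
}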

func (*Queue) CreateArtifact

func (queue *Queue) CreateArtifact(taskId, runId, name string, payload *PostArtifactRequest) (*PostArtifactResponse, error)

This API end-point creates an artifact for a specific run of a task. This should **only** be used by a worker currently operating on this task, or from a process running within the task (ie. on the worker).

All artifacts must specify when they expire. The queue will automatically take care of deleting artifacts past their expiration point. This feature makes it feasible to upload large intermediate artifacts from data processing applications, as the artifacts can be set to expire a few days later.

Required scopes:

queue:create-artifact:<taskId>/<runId>

See #createArtifact

func (*Queue) CreateTask

func (queue *Queue) CreateTask(taskId string, payload *TaskDefinitionRequest) (*TaskStatusResponse, error)

Create a new task. This is an **idempotent** operation, so repeat it if you get an internal server error or the network connection is dropped.

**Task `deadline`**: the deadline property can be no more than 5 days into the future. This is to limit the amount of pending tasks not being taken care of. Ideally, you should use a much shorter deadline.

**Task expiration**: the `expires` property must be greater than the task `deadline`. If not provided it will default to `deadline` + one year. Notice that artifacts created by a task must expire before the task's expiration.

**Task specific routing-keys**: using the `task.routes` property you may define task specific routing-keys. If a task has a task specific routing-key: `<route>`, then when the AMQP message about the task is published, the message will be CC'ed with the routing-key: `route.<route>`. This is useful if you want another component to listen for completed tasks you have posted. The caller must have scope `queue:route:<route>` for each route.

**Dependencies**: any tasks referenced in `task.dependencies` must have already been created at the time of this call.

**Scopes**: Note that the scopes required to complete this API call depend on the content of the `scopes`, `routes`, `schedulerId`, `priority`, `provisionerId`, and `workerType` properties of the task definition.

If the task group was sealed, this end-point will return `409` reporting `RequestConflict` to indicate that it is no longer possible to add new tasks for this `taskGroupId`.

Required scopes:

All of:
* For scope in scopes each <scope>
* For route in routes each queue:route:<route>
* queue:create-task:project:<projectId>
* queue:scheduler-id:<schedulerId>
* For priority in priorities each queue:create-task:<priority>:<provisionerId>/<workerType>

See #createTask
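
Because the call is idempotent, a caller can safely retry transient failures with the same `taskId` and definition. A hedged sketch; `td` is assumed to be a `*tcqueue.TaskDefinitionRequest` built according to the task schema, and the slugid package is one common way to mint taskIds.

// Requires "time" and github.com/taskcluster/slugid-go/slugid.
taskId := slugid.Nice() // a new URL-safe taskId
var status *tcqueue.TaskStatusResponse
var err error
for attempt := 0; attempt < 5; attempt++ {
	status, err = queue.CreateTask(taskId, td)
	if err == nil {
		break
	}
	// A real caller would retry only on 5xx/network errors, per the note above.
	time.Sleep(time.Duration(attempt+1) * time.Second)
}
if err != nil {
	// handle error...
}
_ = status // describes the newly created task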

func (*Queue) DeclareProvisioner

func (queue *Queue) DeclareProvisioner(provisionerId string, payload *ProvisionerRequest) (*ProvisionerResponse, error)

Stability: *** DEPRECATED ***

Declare a provisioner, supplying some details about it.

`declareProvisioner` allows updating one or more properties of a provisioner as long as the required scopes are possessed. For example, a request to update the `my-provisioner` provisioner with a body `{description: 'This provisioner is great'}` would require you to have the scope `queue:declare-provisioner:my-provisioner#description`.

The term "provisioner" is taken broadly to mean anything with a provisionerId. This does not necessarily mean there is an associated service performing any provisioning activity.

Required scopes:

For property in properties each queue:declare-provisioner:<provisionerId>#<property>

See #declareProvisioner

func (*Queue) DeclareWorker

func (queue *Queue) DeclareWorker(provisionerId, workerType, workerGroup, workerId string, payload *WorkerRequest) (*WorkerResponse, error)

Stability: *** EXPERIMENTAL ***

Declare a worker, supplying some details about it.

`declareWorker` allows updating one or more properties of a worker as long as the required scopes are possessed.

Required scopes:

For property in properties each queue:declare-worker:<provisionerId>/<workerType>/<workerGroup>/<workerId>#<property>

See #declareWorker

func (*Queue) DeclareWorkerType

func (queue *Queue) DeclareWorkerType(provisionerId, workerType string, payload *WorkerTypeRequest) (*WorkerTypeResponse, error)

Stability: *** DEPRECATED ***

Declare a workerType, supplying some details about it.

`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are possessed. For example, a request to update the `highmem` worker-type within the `my-provisioner` provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope `queue:declare-worker-type:my-provisioner/highmem#description`.

Required scopes:

For property in properties each queue:declare-worker-type:<provisionerId>/<workerType>#<property>

See #declareWorkerType

func (*Queue) DownloadArtifactToBuf

func (queue *Queue) DownloadArtifactToBuf(taskID string, runID int64, name string) (buf []byte, contentType string, contentLength int64, err error)

DownloadArtifactToBuf is a convenience method to download an artifact to an in-memory byte slice. If RunID is -1, the latest run is used. Returns the object itself, the Content-Type and Content-Length of the downloaded object.

func (*Queue) DownloadArtifactToFile

func (queue *Queue) DownloadArtifactToFile(taskID string, runID int64, name string, filepath string) (contentType string, contentLength int64, err error)

DownloadArtifactToFile is a convenience method to download an object to a file. If RunID is -1, the latest run is used. The file is overwritten if it already exists. Returns the Content-Type and Content-Length of the downloaded object.
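
For example, fetching the latest run's copy of an artifact to disk; the task ID, artifact name, and destination path are assumptions.

// -1 selects the latest run; the destination file is overwritten if present.
contentType, contentLength, err := queue.DownloadArtifactToFile(taskID, -1, "public/build/target.tar.gz", "/tmp/target.tar.gz")
if err != nil {
	// handle error...
}
_ = contentType
_ = contentLength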

func (*Queue) DownloadArtifactToWriteSeeker

func (queue *Queue) DownloadArtifactToWriteSeeker(taskID string, runID int64, name string, writeSeeker io.WriteSeeker) (contentType string, contentLength int64, err error)

DownloadArtifactToWriteSeeker downloads the named object from the object service and writes it to writeSeeker, retrying if intermittent errors occur. If RunID is -1, the latest run is used. Returns the Content-Type and Content-Length of the downloaded object.

func (*Queue) FinishArtifact

func (queue *Queue) FinishArtifact(taskId, runId, name string, payload *FinishArtifactRequest) error

This endpoint marks an artifact as present for the given task, and should be called when the artifact data is fully uploaded.

The storage types `reference`, `link`, and `error` do not need to be finished, as they are finished immediately by `createArtifact`. The storage type `s3` does not support this functionality and cannot be finished. In all such cases, calling this method is an input error (400).

Required scopes:

queue:create-artifact:<taskId>/<runId>

See #finishArtifact

func (*Queue) GetArtifact

func (queue *Queue) GetArtifact(taskId, runId, name string) (*GetArtifactResponse, error)

Get artifact by `<name>` from a specific run.

**Artifact Access**, in order to get an artifact you need the scope `queue:get-artifact:<name>`, where `<name>` is the name of the artifact. To allow access to fetch artifacts with a client like `curl` or a web browser, without using Taskcluster credentials, include a scope in the `anonymous` role. The convention is to include `queue:get-artifact:public/*`.

**Response**: the HTTP response to this method is a 303 redirect to the URL from which the artifact can be downloaded. The body of that response contains the data described in the output schema, containing the same URL. Callers are encouraged to use whichever method of gathering the URL is most convenient. Standard HTTP clients will follow the redirect, while API client libraries will return the JSON body.

In order to download an artifact the following must be done:

1. Obtain the queue URL. Building a signed URL with a Taskcluster client is recommended.
2. Make a GET request which does not follow redirects (see the sketch below).
3. In all cases, if specified, the x-taskcluster-location-{content,transfer}-{sha256,length} values must be validated to be equal to the Content-Length and SHA-256 checksum of the final artifact downloaded, as well as of any intermediate redirects.
4. If this response is a 500-series error, retry using an exponential backoff. No more than 5 retries should be attempted.
5. If this response is a 400-series error, treat it appropriately for your context. This might be an error in responding to this request or an Error storage type body. This request should not be retried.
6. If this response is a 200-series response, the response body is the artifact. If the x-taskcluster-location-{content,transfer}-{sha256,length} and x-taskcluster-location-content-encoding are specified, they should match this response body.
7. If the response type is a 300-series redirect, the artifact will be at the location specified by the `Location` header. There are multiple artifact storage types which use a 300-series redirect.
8. For all redirects followed, the user must verify that the content-sha256, content-length, transfer-sha256, transfer-length and content-encoding match every further request. The final artifact must also be validated against the values specified in the original queue response.
9. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference` must not occur.

**Headers** The following important headers are set on the response to this method:

* location: the url of the artifact if a redirect is to be performed
* x-taskcluster-artifact-storage-type: the storage type. Example: s3

Required scopes:

For name in names each queue:get-artifact:<name>

See #getArtifact
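
Steps 1 and 2 of the list above (build a signed URL, then make a `GET` that does not follow redirects) can be sketched with a plain `http.Client`; validating the `x-taskcluster-location-*` headers against the downloaded content is left as a comment. `taskId`, `runId`, and the artifact name are assumptions.

// Requires "net/http" and "time".
u, err := queue.GetArtifact_SignedURL(taskId, runId, "public/build/target.tar.gz", 30*time.Minute)
if err != nil {
	// handle error...
}
client := &http.Client{
	// Stop at the first response instead of following the 303 redirect.
	CheckRedirect: func(req *http.Request, via []*http.Request) error {
		return http.ErrUseLastResponse
	},
}
resp, err := client.Get(u.String())
if err != nil {
	// handle error...
}
defer resp.Body.Close()
location := resp.Header.Get("Location") // where the artifact lives, for 300-series responses
storageType := resp.Header.Get("X-Taskcluster-Artifact-Storage-Type")
// Follow `location` manually, validating the x-taskcluster-location-* headers
// and checksums as described in the steps above.
_ = location
_ = storageType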

func (*Queue) GetArtifact_SignedURL

func (queue *Queue) GetArtifact_SignedURL(taskId, runId, name string, duration time.Duration) (*url.URL, error)

Returns a signed URL for GetArtifact, valid for the specified duration.

Required scopes:

For name in names each queue:get-artifact:<name>

See GetArtifact for more details.

func (*Queue) GetLatestArtifact

func (queue *Queue) GetLatestArtifact(taskId, name string) (*GetArtifactResponse, error)

Get artifact by `<name>` from the last run of a task.

**Artifact Access**, in order to get an artifact you need the scope `queue:get-artifact:<name>`, where `<name>` is the name of the artifact. To allow access to fetch artifacts with a client like `curl` or a web browser, without using Taskcluster credentials, include a scope in the `anonymous` role. The convention is to include `queue:get-artifact:public/*`.

**API Clients**, this method will redirect you to the artifact, if it is stored externally. Either way, the response may not be JSON. So API client users might want to generate a signed URL for this end-point and use that URL with a normal HTTP client.

**Remark**, this end-point is slightly slower than `queue.getArtifact`, so consider that if you already know the `runId` of the latest run. Otherwise, just use the most convenient API end-point.

Required scopes:

For name in names each queue:get-artifact:<name>

See #getLatestArtifact

func (*Queue) GetLatestArtifact_SignedURL

func (queue *Queue) GetLatestArtifact_SignedURL(taskId, name string, duration time.Duration) (*url.URL, error)

Returns a signed URL for GetLatestArtifact, valid for the specified duration.

Required scopes:

For name in names each queue:get-artifact:<name>

See GetLatestArtifact for more details.

func (*Queue) GetProvisioner

func (queue *Queue) GetProvisioner(provisionerId string) (*ProvisionerResponse, error)

Stability: *** DEPRECATED ***

Get an active provisioner.

The term "provisioner" is taken broadly to mean anything with a provisionerId. This does not necessarily mean there is an associated service performing any provisioning activity.

Required scopes:

queue:get-provisioner:<provisionerId>

See #getProvisioner

func (*Queue) GetProvisioner_SignedURL

func (queue *Queue) GetProvisioner_SignedURL(provisionerId string, duration time.Duration) (*url.URL, error)

Returns a signed URL for GetProvisioner, valid for the specified duration.

Required scopes:

queue:get-provisioner:<provisionerId>

See GetProvisioner for more details.

func (*Queue) GetTaskGroup

func (queue *Queue) GetTaskGroup(taskGroupId string) (*TaskGroupDefinitionResponse, error)

Get task group information by `taskGroupId`.

This will return meta-information associated with the task group, such as its expiry date and whether it is sealed.

If you also want to see which tasks belong to this task group, you can call the `listTaskGroup` method.

Required scopes:

queue:list-task-group:<taskGroupId>

See #getTaskGroup

func (*Queue) GetTaskGroup_SignedURL

func (queue *Queue) GetTaskGroup_SignedURL(taskGroupId string, duration time.Duration) (*url.URL, error)

Returns a signed URL for GetTaskGroup, valid for the specified duration.

Required scopes:

queue:list-task-group:<taskGroupId>

See GetTaskGroup for more details.

func (*Queue) GetTaskQueue

func (queue *Queue) GetTaskQueue(taskQueueId string) (*TaskQueueResponse, error)

Get a task queue.

Required scopes:

queue:get-task-queue:<taskQueueId>

See #getTaskQueue

func (*Queue) GetTaskQueue_SignedURL

func (queue *Queue) GetTaskQueue_SignedURL(taskQueueId string, duration time.Duration) (*url.URL, error)

Returns a signed URL for GetTaskQueue, valid for the specified duration.

Required scopes:

queue:get-task-queue:<taskQueueId>

See GetTaskQueue for more details.

func (*Queue) GetWorker

func (queue *Queue) GetWorker(provisionerId, workerType, workerGroup, workerId string) (*WorkerResponse, error)

Stability: *** DEPRECATED ***

Get a worker from a worker-type.

Required scopes:

queue:get-worker:<provisionerId>/<workerType>/<workerGroup>/<workerId>

See #getWorker

func (*Queue) GetWorkerType

func (queue *Queue) GetWorkerType(provisionerId, workerType string) (*WorkerTypeResponse, error)

Stability: *** DEPRECATED ***

Get a worker-type from a provisioner.

Required scopes:

queue:get-worker-type:<provisionerId>/<workerType>

See #getWorkerType

func (*Queue) GetWorkerType_SignedURL

func (queue *Queue) GetWorkerType_SignedURL(provisionerId, workerType string, duration time.Duration) (*url.URL, error)

Returns a signed URL for GetWorkerType, valid for the specified duration.

Required scopes:

queue:get-worker-type:<provisionerId>/<workerType>

See GetWorkerType for more details.

func (*Queue) GetWorker_SignedURL

func (queue *Queue) GetWorker_SignedURL(provisionerId, workerType, workerGroup, workerId string, duration time.Duration) (*url.URL, error)

Returns a signed URL for GetWorker, valid for the specified duration.

Required scopes:

queue:get-worker:<provisionerId>/<workerType>/<workerGroup>/<workerId>

See GetWorker for more details.

func (*Queue) Heartbeat

func (queue *Queue) Heartbeat() error

Respond with a service heartbeat.

This endpoint is used to check on backing services this service depends on.

See #heartbeat

func (*Queue) LatestArtifact

func (queue *Queue) LatestArtifact(taskId, name string) (*GetArtifactContentResponse, error)

Returns information about the content of the artifact, in the latest task run.

Depending on the storage type, the endpoint returns the content of the artifact or enough information to access that content.

This method follows link artifacts, so it will not return content for a link artifact.

Required scopes:

For name in names each queue:get-artifact:<name>

See #latestArtifact

func (*Queue) LatestArtifactInfo

func (queue *Queue) LatestArtifactInfo(taskId, name string) (*Artifact, error)

Returns associated metadata for a given artifact, in the latest run of the task. The metadata is the same as that returned from `listArtifacts`, and does not grant access to the artifact data.

Note that this method does *not* automatically follow link artifacts.

Required scopes:

queue:list-artifacts:<taskId>

See #latestArtifactInfo

func (*Queue) LatestArtifactInfo_SignedURL

func (queue *Queue) LatestArtifactInfo_SignedURL(taskId, name string, duration time.Duration) (*url.URL, error)

Returns a signed URL for LatestArtifactInfo, valid for the specified duration.

Required scopes:

queue:list-artifacts:<taskId>

See LatestArtifactInfo for more details.

func (*Queue) LatestArtifact_SignedURL

func (queue *Queue) LatestArtifact_SignedURL(taskId, name string, duration time.Duration) (*url.URL, error)

Returns a signed URL for LatestArtifact, valid for the specified duration.

Required scopes:

For name in names each queue:get-artifact:<name>

See LatestArtifact for more details.

func (*Queue) Lbheartbeat

func (queue *Queue) Lbheartbeat() error

Respond without doing anything. This endpoint is used to check that the service is up.

See #lbheartbeat

func (*Queue) ListArtifacts

func (queue *Queue) ListArtifacts(taskId, runId, continuationToken, limit string) (*ListArtifactsResponse, error)

Returns a list of artifacts and associated meta-data for a given run.

As a task may have many artifacts, paging may be necessary. If this end-point returns a `continuationToken`, you should call the end-point again with the `continuationToken` as the query-string option: `continuationToken`.

By default this end-point will list up to 1000 artifacts in a single page; you may limit this with the query-string parameter `limit`.

Required scopes:

queue:list-artifacts:<taskId>:<runId>

See #listArtifacts
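
A sketch of paging through every artifact of a run using the continuation token described above; it assumes, as is usual for the generated types in this package, that ListArtifactsResponse exposes the token as a ContinuationToken field:

continuationToken := ""
for {
	page, err := queue.ListArtifacts(taskId, runId, continuationToken, "100")
	if err != nil {
		// handle error...
		break
	}
	// ... process the artifacts in this page ...
	if page.ContinuationToken == "" {
		break // no more pages
	}
	continuationToken = page.ContinuationToken
}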

func (*Queue) ListArtifacts_SignedURL

func (queue *Queue) ListArtifacts_SignedURL(taskId, runId, continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListArtifacts, valid for the specified duration.

Required scopes:

queue:list-artifacts:<taskId>:<runId>

See ListArtifacts for more details.

func (*Queue) ListClaimedTasks

func (queue *Queue) ListClaimedTasks(taskQueueId, continuationToken, limit string) (*ListClaimedTasksResponse, error)

Stability: *** EXPERIMENTAL ***

List claimed tasks for the given `taskQueueId`.

As task states may change rapidly, this information might not represent the exact state of such tasks, but a very good approximation.

Required scopes:

queue:claimed-list:<taskQueueId>

See #listClaimedTasks

func (*Queue) ListClaimedTasks_SignedURL

func (queue *Queue) ListClaimedTasks_SignedURL(taskQueueId, continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListClaimedTasks, valid for the specified duration.

Required scopes:

queue:claimed-list:<taskQueueId>

See ListClaimedTasks for more details.

func (*Queue) ListDependentTasks

func (queue *Queue) ListDependentTasks(taskId, continuationToken, limit string) (*ListDependentTasksResponse, error)

List tasks that depend on the given `taskId`.

As many tasks from different task-groups may depend on a single task, this end-point may return a `continuationToken`. To continue listing tasks you must call `listDependentTasks` again with the `continuationToken` as the query-string option `continuationToken`.

By default this end-point will try to return up to 1000 tasks in one request. But it **may return fewer**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `listDependentTasks` with the last `continuationToken` until you get a result without a `continuationToken`.

If you are not interested in listing all the tasks at once, you may use the query-string option `limit` to return fewer.

Required scopes:

queue:list-dependent-tasks:<taskId>

See #listDependentTasks

func (*Queue) ListDependentTasks_SignedURL

func (queue *Queue) ListDependentTasks_SignedURL(taskId, continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListDependentTasks, valid for the specified duration.

Required scopes:

queue:list-dependent-tasks:<taskId>

See ListDependentTasks for more details.

func (*Queue) ListLatestArtifacts

func (queue *Queue) ListLatestArtifacts(taskId, continuationToken, limit string) (*ListArtifactsResponse, error)

Returns a list of artifacts and associated meta-data for the latest run from the given task.

As a task may have many artifacts, paging may be necessary. If this end-point returns a `continuationToken`, you should call the end-point again with the `continuationToken` as the query-string option `continuationToken`.

By default this end-point will list up to 1000 artifacts in a single page; you may limit this with the query-string parameter `limit`.

Required scopes:

queue:list-artifacts:<taskId>

See #listLatestArtifacts

func (*Queue) ListLatestArtifacts_SignedURL

func (queue *Queue) ListLatestArtifacts_SignedURL(taskId, continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListLatestArtifacts, valid for the specified duration.

Required scopes:

queue:list-artifacts:<taskId>

See ListLatestArtifacts for more details.

func (*Queue) ListPendingTasks

func (queue *Queue) ListPendingTasks(taskQueueId, continuationToken, limit string) (*ListPendingTasksResponse, error)

Stability: *** EXPERIMENTAL ***

List pending tasks for the given `taskQueueId`.

As task states may change rapidly, this information might not represent the exact state of such tasks, but a very good approximation.

Required scopes:

queue:pending-list:<taskQueueId>

See #listPendingTasks

func (*Queue) ListPendingTasks_SignedURL

func (queue *Queue) ListPendingTasks_SignedURL(taskQueueId, continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListPendingTasks, valid for the specified duration.

Required scopes:

queue:pending-list:<taskQueueId>

See ListPendingTasks for more details.

func (*Queue) ListProvisioners

func (queue *Queue) ListProvisioners(continuationToken, limit string) (*ListProvisionersResponse, error)

Stability: *** DEPRECATED ***

Get all active provisioners.

The term "provisioner" is taken broadly to mean anything with a provisionerId. This does not necessarily mean there is an associated service performing any provisioning activity.

The response is paged. If this end-point returns a `continuationToken`, you should call the end-point again with the `continuationToken` as a query-string option. By default this end-point will list up to 1000 provisioners in a single page. You may limit this with the query-string parameter `limit`.

Required scopes:

queue:list-provisioners

See #listProvisioners

func (*Queue) ListProvisioners_SignedURL

func (queue *Queue) ListProvisioners_SignedURL(continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListProvisioners, valid for the specified duration.

Required scopes:

queue:list-provisioners

See ListProvisioners for more details.

func (*Queue) ListTaskGroup

func (queue *Queue) ListTaskGroup(taskGroupId, continuationToken, limit string) (*ListTaskGroupResponse, error)

List tasks sharing the same `taskGroupId`.

As a task-group may contain an unbounded number of tasks, this end-point may return a `continuationToken`. To continue listing tasks you must call the `listTaskGroup` again with the `continuationToken` as the query-string option `continuationToken`.

By default this end-point will try to return up to 1000 members in one request. But it **may return fewer**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `listTaskGroup` with the last `continuationToken` until you get a result without a `continuationToken`.

If you are not interested in listing all the members at once, you may use the query-string option `limit` to return fewer.

If you only want to fetch task group metadata without the tasks, you can call the `getTaskGroup` method.

Required scopes:

queue:list-task-group:<taskGroupId>

See #listTaskGroup

func (*Queue) ListTaskGroup_SignedURL

func (queue *Queue) ListTaskGroup_SignedURL(taskGroupId, continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListTaskGroup, valid for the specified duration.

Required scopes:

queue:list-task-group:<taskGroupId>

See ListTaskGroup for more details.

func (*Queue) ListTaskQueues

func (queue *Queue) ListTaskQueues(continuationToken, limit string) (*ListTaskQueuesResponse, error)

Get all active task queues.

The response is paged. If this end-point returns a `continuationToken`, you should call the end-point again with the `continuationToken` as a query-string option. By default this end-point will list up to 1000 task queues in a single page. You may limit this with the query-string parameter `limit`.

Required scopes:

queue:list-task-queues

See #listTaskQueues

func (*Queue) ListTaskQueues_SignedURL

func (queue *Queue) ListTaskQueues_SignedURL(continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListTaskQueues, valid for the specified duration.

Required scopes:

queue:list-task-queues

See ListTaskQueues for more details.

func (*Queue) ListWorkerTypes

func (queue *Queue) ListWorkerTypes(provisionerId, continuationToken, limit string) (*ListWorkerTypesResponse, error)

Stability: *** DEPRECATED ***

Get all active worker-types for the given provisioner.

The response is paged. If this end-point returns a `continuationToken`, you should call the end-point again with the `continuationToken` as a query-string option. By default this end-point will list up to 1000 worker-types in a single page. You may limit this with the query-string parameter `limit`.

Required scopes:

queue:list-worker-types:<provisionerId>

See #listWorkerTypes

func (*Queue) ListWorkerTypes_SignedURL

func (queue *Queue) ListWorkerTypes_SignedURL(provisionerId, continuationToken, limit string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListWorkerTypes, valid for the specified duration.

Required scopes:

queue:list-worker-types:<provisionerId>

See ListWorkerTypes for more details.

func (*Queue) ListWorkers

func (queue *Queue) ListWorkers(provisionerId, workerType, continuationToken, limit, quarantined string) (*ListWorkersResponse, error)

Stability: *** DEPRECATED ***

Get a list of all active workers of a workerType.

`listWorkers` allows a response to be filtered by quarantined and non-quarantined workers. To filter the query, you should call the end-point with `quarantined` as a query-string option with a true or false value.

The response is paged. If this end-point returns a `continuationToken`, you should call the end-point again with the `continuationToken` as a query-string option. By default this end-point will list up to 1000 workers in a single page. You may limit this with the query-string parameter `limit`.

Required scopes:

queue:list-workers:<provisionerId>/<workerType>

See #listWorkers

func (*Queue) ListWorkers_SignedURL

func (queue *Queue) ListWorkers_SignedURL(provisionerId, workerType, continuationToken, limit, quarantined string, duration time.Duration) (*url.URL, error)

Returns a signed URL for ListWorkers, valid for the specified duration.

Required scopes:

queue:list-workers:<provisionerId>/<workerType>

See ListWorkers for more details.

func (*Queue) PendingTasks

func (queue *Queue) PendingTasks(taskQueueId string) (*CountPendingTasksResponse, error)

Get an approximate number of pending tasks for the given `taskQueueId`.

As task states may change rapidly, this number may not represent the exact number of pending tasks, but a very good approximation.

Required scopes:

queue:pending-count:<taskQueueId>

See #pendingTasks
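
For illustration, a sketch of checking the approximate backlog of a task queue; the taskQueueId is hypothetical, and the field name assumes the usual generated mapping of the `pendingTasks` property:

counts, err := queue.PendingTasks("proj-example/linux-small")
if err != nil {
	// handle error...
}
fmt.Printf("approximately %d tasks pending\n", counts.PendingTasks)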

func (*Queue) PendingTasks_SignedURL

func (queue *Queue) PendingTasks_SignedURL(taskQueueId string, duration time.Duration) (*url.URL, error)

Returns a signed URL for PendingTasks, valid for the specified duration.

Required scopes:

queue:pending-count:<taskQueueId>

See PendingTasks for more details.

func (*Queue) Ping

func (queue *Queue) Ping() error

Respond without doing anything. This endpoint is used to check that the service is up.

See #ping

func (*Queue) QuarantineWorker

func (queue *Queue) QuarantineWorker(provisionerId, workerType, workerGroup, workerId string, payload *QuarantineWorkerRequest) (*WorkerResponse, error)

Stability: *** EXPERIMENTAL ***

Quarantine a worker

Required scopes:

queue:quarantine-worker:<provisionerId>/<workerType>/<workerGroup>/<workerId>

See #quarantineWorker

func (*Queue) ReclaimTask

func (queue *Queue) ReclaimTask(taskId, runId string) (*TaskReclaimResponse, error)

Refresh the claim for a specific `runId` for given `taskId`. This updates the `takenUntil` property and returns a new set of temporary credentials for performing requests on behalf of the task. These credentials should be used in place of the credentials returned by `claimWork`.

The `reclaimTask` request serves to:

  • Postpone `takenUntil`, preventing the queue from resolving `claim-expired`,
  • Refresh temporary credentials used for processing the task, and
  • Abort execution if the task/run has been resolved.

If the `takenUntil` timestamp is exceeded the queue will resolve the run as _exception_ with reason `claim-expired`, and proceed to retry the task. This ensures that tasks are retried, even if workers disappear without warning.

If the task is resolved, this end-point will return `409` reporting `RequestConflict`. This typically happens if the task has been canceled or the `task.deadline` has been exceeded. If reclaiming fails, workers should abort the task and forget about the given `runId`. There is no need to resolve the run or upload artifacts.

Required scopes:

queue:reclaim-task:<taskId>/<runId>

See #reclaimTask
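
A sketch of the reclaim loop described above: the worker reclaims a little before `takenUntil` and aborts when the reclaim is rejected. Reclaiming a couple of minutes early is a convention assumed here, not something the queue mandates; `claim` stands for the claim response the worker received earlier, and the conversion relies on tcclient.Time wrapping time.Time:

takenUntil := claim.TakenUntil
for {
	// Sleep until shortly before the current takenUntil.
	time.Sleep(time.Until(time.Time(takenUntil)) - 2*time.Minute)

	reclaim, err := queue.ReclaimTask(taskId, runId)
	if err != nil {
		// A 409 means the run is already resolved (canceled, deadline
		// exceeded, ...): abort the task and forget about this runId.
		return
	}
	takenUntil = reclaim.TakenUntil
	// Use reclaim.Credentials for subsequent requests made on behalf
	// of the task.
}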

func (*Queue) ReportCompleted

func (queue *Queue) ReportCompleted(taskId, runId string) (*TaskStatusResponse, error)

Report a task completed, resolving the run as `completed`.

Required scopes:

queue:resolve-task:<taskId>/<runId>

See #reportCompleted

func (*Queue) ReportException

func (queue *Queue) ReportException(taskId, runId string, payload *TaskExceptionRequest) (*TaskStatusResponse, error)

Resolve a run as _exception_. Generally, you will want to report tasks as failed instead of exception. You should `reportException` if,

  • The `task.payload` is invalid,
  • Non-existent resources are referenced,
  • Declared actions cannot be executed due to unavailable resources,
  • The worker had to shut down prematurely,
  • The worker experienced an unknown error, or,
  • The task explicitly requested a retry.

Do not use this to signal that some user-specified code crashed for any reason specific to that code. If user-specified code hits a resource that is temporarily unavailable, the worker should report the task _failed_.

Required scopes:

queue:resolve-task:<taskId>/<runId>

See #reportException
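
A sketch of resolving a run as an exception, here for a payload that failed schema validation (see TaskExceptionRequest below for the full list of reasons):

_, err := queue.ReportException(taskId, runId, &tcqueue.TaskExceptionRequest{
	Reason: "malformed-payload",
})
if err != nil {
	// handle error...
}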

func (*Queue) ReportFailed

func (queue *Queue) ReportFailed(taskId, runId string) (*TaskStatusResponse, error)

Report a run failed, resolving the run as `failed`. Use this to resolve a run that failed because the task-specific code behaved unexpectedly. For example, the task exited non-zero or didn't produce the expected output.

Do not use this if the task couldn't be run because of a malformed payload or some other unexpected condition. In these cases there is a task exception, which should be reported with `reportException`.

Required scopes:

queue:resolve-task:<taskId>/<runId>

See #reportFailed

func (*Queue) RerunTask

func (queue *Queue) RerunTask(taskId string) (*TaskStatusResponse, error)

This method _reruns_ a previously resolved task, even if it was _completed_. This is useful if your task completes unsuccessfully, and you just want to run it from scratch again. This will also reset the number of `retries` allowed. It will schedule a task that is _unscheduled_ regardless of the state of its dependencies.

Remember that `retries` in the task status counts the number of runs that the queue has started because the worker stopped responding, for example because a spot node died.

**Remark** this operation is idempotent: if it is invoked for a task that is `pending` or `running`, it will just return the current task status.

Required scopes:

Any of:
- queue:rerun-task:<schedulerId>/<taskGroupId>/<taskId>
- queue:rerun-task-in-project:<projectId>
- All of:
  * queue:rerun-task
  * assume:scheduler-id:<schedulerId>/<taskGroupId>

See #rerunTask
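
For illustration, a minimal sketch of rerunning a resolved task; as noted above, the call is idempotent for pending or running tasks:

status, err := queue.RerunTask(taskId)
if err != nil {
	// handle error...
}
fmt.Println("task state after rerun:", status.Status.State)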

func (*Queue) ScheduleTask

func (queue *Queue) ScheduleTask(taskId string) (*TaskStatusResponse, error)

scheduleTask will schedule a task to be executed, even if it has unresolved dependencies. A task would otherwise only be scheduled if its dependencies were resolved.

This is useful if you have defined a task that depends on itself or on some other task that has not been resolved, but you wish the task to be scheduled immediately.

This will announce the task as pending and workers will be allowed to claim it and resolve the task.

**Note** this operation is **idempotent** and will not fail or complain if called with a `taskId` that is already scheduled, or even resolved. To reschedule a task previously resolved, use `rerunTask`.

Required scopes:

Any of:
- queue:schedule-task:<schedulerId>/<taskGroupId>/<taskId>
- queue:schedule-task-in-project:<projectId>
- All of:
  * queue:schedule-task
  * assume:scheduler-id:<schedulerId>/<taskGroupId>

See #scheduleTask

func (*Queue) SealTaskGroup

func (queue *Queue) SealTaskGroup(taskGroupId string) (*TaskGroupDefinitionResponse, error)

Stability: *** EXPERIMENTAL ***

Seal task group to prevent creation of new tasks.

A task group can only be sealed once; sealing is irreversible. Calling this end-point multiple times will return the same result and will not update the task group again.

Required scopes:

queue:seal-task-group:<schedulerId>/<taskGroupId>

See #sealTaskGroup

func (*Queue) Status

func (queue *Queue) Status(taskId string) (*TaskStatusResponse, error)

Get task status structure from `taskId`

Required scopes:

queue:status:<taskId>

See #status

func (*Queue) Status_SignedURL

func (queue *Queue) Status_SignedURL(taskId string, duration time.Duration) (*url.URL, error)

Returns a signed URL for Status, valid for the specified duration.

Required scopes:

queue:status:<taskId>

See Status for more details.

func (*Queue) Statuses added in v64.1.0

func (queue *Queue) Statuses(continuationToken, limit string, payload *TaskDefinitionsResponse) (*TasksStatusesResponse, error)

Stability: *** EXPERIMENTAL ***

This end-point will return the task statuses for each input task id. If a given taskId does not match a task, it will be ignored, and callers will need to handle the difference.

Required scopes:

For taskId in taskIds each queue:status:<taskId>

See #statuses
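
A sketch of fetching statuses for a batch of tasks using the TaskDefinitionsResponse payload documented below, following the continuation token until everything has been seen; the task IDs are hypothetical, and an empty limit leaves the page size at its default:

payload := &tcqueue.TaskDefinitionsResponse{
	TaskIds: []string{taskIdA, taskIdB},
}
continuationToken := ""
for {
	resp, err := queue.Statuses(continuationToken, "", payload)
	if err != nil {
		// handle error...
		break
	}
	for _, s := range resp.Statuses {
		fmt.Println(s.TaskID, s.Status.State)
	}
	if resp.ContinuationToken == "" {
		break
	}
	continuationToken = resp.ContinuationToken
}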

func (*Queue) Task

func (queue *Queue) Task(taskId string) (*TaskDefinitionResponse, error)

This end-point will return the task-definition. Notice that the task definition may have been modified by the queue; if an optional property is not specified, the queue may provide a default value.

Required scopes:

queue:get-task:<taskId>

See #task

func (*Queue) Task_SignedURL

func (queue *Queue) Task_SignedURL(taskId string, duration time.Duration) (*url.URL, error)

Returns a signed URL for Task, valid for the specified duration.

Required scopes:

queue:get-task:<taskId>

See Task for more details.

func (*Queue) Tasks added in v64.1.0

func (queue *Queue) Tasks(continuationToken, limit string, payload *TaskDefinitionsResponse) (*TaskDefinitionResponse1, error)

Stability: *** EXPERIMENTAL ***

This end-point will return the task definition for each input task id. Notice that the task definitions may have been modified by the queue.

Required scopes:

For taskId in taskIds each queue:get-task:<taskId>

See #tasks

func (*Queue) Version

func (queue *Queue) Version() error

Respond with the JSON version object. https://github.com/mozilla-services/Dockerflow/blob/main/docs/version_object.md

See #version

type RedirectArtifactRequest

type RedirectArtifactRequest struct {

	// Expected content-type of the artifact.  This is informational only:
	// it is suitable for use to choose an icon for the artifact, for example.
	// The accurate content-type of the artifact can only be determined by
	// downloading it.
	//
	// Max length: 255
	ContentType string `json:"contentType"`

	// Date-time after which the queue should no longer redirect to this URL.
	// Note that the queue will not and cannot delete the resource your URL
	// references; you are responsible for doing that yourself.
	Expires tcclient.Time `json:"expires"`

	// Artifact storage type, in this case `reference`
	//
	// Possible values:
	//   * "reference"
	StorageType string `json:"storageType"`

	// URL to which the queue should redirect using a `303` (See other)
	// redirect.
	URL string `json:"url"`
}

Request the queue to redirect fetches for this artifact to a URL. An existing artifact can be replaced with a RedirectArtifact as long as the task is still executing. When a RedirectArtifact is fetched, the URL is returned verbatim as a Location header in a 303 (See Other) response. Clients will not apply any form of authentication to that URL.

type RedirectArtifactResponse

type RedirectArtifactResponse struct {

	// Artifact storage type, in this case `reference`
	//
	// Possible values:
	//   * "reference"
	StorageType string `json:"storageType"`
}

Response to a request for the queue to redirect to a URL for a given artifact.

type RunInformation

type RunInformation struct {

	// Reason for the creation of this run,
	// **more reasons may be added in the future**.
	//
	// Possible values:
	//   * "scheduled"
	//   * "retry"
	//   * "task-retry"
	//   * "rerun"
	//   * "exception"
	ReasonCreated string `json:"reasonCreated"`

	// Reason that run was resolved, this is mainly
	// useful for runs resolved as `exception`.
	// Note, **more reasons may be added in the future**, also this
	// property is only available after the run is resolved. Some of these
	// reasons, notably `intermittent-task`, `worker-shutdown`, and
	// `claim-expired`, will trigger an automatic retry of the task.
	// Note that 'superseded' is here only for compatibility, as that
	// functionality has been removed.
	//
	// Possible values:
	//   * "completed"
	//   * "failed"
	//   * "deadline-exceeded"
	//   * "canceled"
	//   * "claim-expired"
	//   * "worker-shutdown"
	//   * "malformed-payload"
	//   * "resource-unavailable"
	//   * "internal-error"
	//   * "intermittent-task"
	//   * "superseded"
	ReasonResolved string `json:"reasonResolved,omitempty"`

	// Date-time at which this run was resolved, ie. when the run changed
	// state from `running` to either `completed`, `failed` or `exception`.
	// This property is only present after the run has been resolved.
	Resolved tcclient.Time `json:"resolved,omitempty"`

	// Id of this task run; `run-id`s always start from `0`
	//
	// Minimum:    0
	// Maximum:    1000
	RunID int64 `json:"runId"`

	// Date-time at which this run was scheduled, ie. when the run was
	// created in state `pending`.
	Scheduled tcclient.Time `json:"scheduled"`

	// Date-time at which this run was claimed, ie. when the run changed
	// state from `pending` to `running`. This property is only present
	// after the run has been claimed.
	Started tcclient.Time `json:"started,omitempty"`

	// State of this run
	//
	// Possible values:
	//   * "pending"
	//   * "running"
	//   * "completed"
	//   * "failed"
	//   * "exception"
	State string `json:"state"`

	// Time at which the run expires and is resolved as `failed`, if the
	// run isn't reclaimed. Note, only present after the run has been
	// claimed.
	TakenUntil tcclient.Time `json:"takenUntil,omitempty"`

	// Identifier for the group that the worker executing this run is a part
	// of; this identifier is mainly used for efficient routing.
	// Note, this property is only present after the run is claimed.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup,omitempty"`

	// Identifier for the worker evaluating this run within the given
	// `workerGroup`. Note, this property is only available after the run
	// has been claimed.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId,omitempty"`
}

JSON object with information about a run

type S3ArtifactRequest

type S3ArtifactRequest struct {

	// Artifact mime-type. When uploading the artifact to the signed
	// `PUT` URL returned from this request, this must be given with the
	// `ContentType` header. Please provide the correct mime-type; this
	// makes tooling a lot easier. Specifically, always use
	// `application/json` for JSON artifacts.
	//
	// Max length: 255
	ContentType string `json:"contentType"`

	// Date-time after which the artifact should be deleted. Note, that
	// these will be collected over time, and artifacts may remain
	// available after expiration. S3 based artifacts are identified in
	// the database and explicitly deleted on S3 after expiration.
	Expires tcclient.Time `json:"expires"`

	// Artifact storage type, in this case `'s3'`
	//
	// Possible values:
	//   * "s3"
	StorageType string `json:"storageType"`
}

Request for a signed PUT URL that will allow you to upload an artifact to an S3 bucket managed by the queue.

type S3ArtifactResponse

type S3ArtifactResponse struct {

	// Artifact mime-type, must be specified as header when uploading with
	// the signed `putUrl`.
	//
	// Max length: 255
	ContentType string `json:"contentType"`

	// Date-time after which the signed `putUrl` no longer works
	Expires tcclient.Time `json:"expires"`

	// URL to which a `PUT` request can be made to upload the artifact
	// requested. Note, the `Content-Length` must be specified correctly,
	// and the `ContentType` header must be set to the value specified below.
	PutURL string `json:"putUrl"`

	// Artifact storage type, in this case `'s3'`
	//
	// Possible values:
	//   * "s3"
	StorageType string `json:"storageType"`
}

Response to a request for a signed PUT URL that will allow you to upload an artifact to an S3 bucket managed by the queue.

type Source

type Source string

Link to the source of this task; it should specify a file, revision and repository. This should be a place someone can go and do a git/hg blame to find out who came up with the recipe for this task.

Syntax: ^(https?://|ssh://|git@) Max length: 4096

type Source1

type Source1 string

Link to the source of this task; it should specify a file, revision and repository. This should be a place someone can go and do a git/hg blame to find out who came up with the recipe for this task.

Syntax: ^(https?://|ssh://|git@) Max length: 4096

type TaskClaim

type TaskClaim struct {

	// Temporary credentials granting `task.scopes` and the scope:
	// `queue:claim-task:<taskId>/<runId>` which allows the worker to reclaim
	// the task, upload artifacts and report task resolution.
	//
	// The temporary credentials are set to expire after `takenUntil`. They
	// won't expire exactly at `takenUntil` but shortly after, hence, requests
	// made close to `takenUntil` won't have problems even if there is a little
	// clock drift.
	//
	// Workers should use these credentials when making requests on behalf of
	// a task. This includes requests to create artifacts, reclaim the task,
	// and report the task `completed`, `failed` or `exception`.
	//
	// Note, a new set of temporary credentials is issued when the worker
	// reclaims the task.
	Credentials TaskCredentials `json:"credentials"`

	// `run-id` assigned to this run of the task
	//
	// Minimum:    0
	// Maximum:    1000
	RunID int64 `json:"runId"`

	// A representation of **task status** as known by the queue
	Status TaskStatusStructure `json:"status"`

	// Time at which the run expires and is resolved as `exception`,
	// with reason `claim-expired` if the run hasn't been reclaimed.
	// This will be some time in the future, with that time controlled
	// by the `queue.task_claim_timeout` configuration.
	TakenUntil tcclient.Time `json:"takenUntil"`

	// Definition of a task that can be scheduled
	Task TaskDefinitionResponse `json:"task"`

	// Identifier for the worker-group within which this run started.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup"`

	// Identifier for the worker executing this run.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId"`
}

type TaskClaimRequest

type TaskClaimRequest struct {

	// Identifier for the group that the worker claiming the task is a part of.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup"`

	// Identifier for worker within the given workerGroup
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId"`
}

Request to claim (or reclaim) a task

type TaskClaimResponse

type TaskClaimResponse struct {

	// Temporary credentials granting `task.scopes` and the scope:
	// `queue:claim-task:<taskId>/<runId>` which allows the worker to reclaim
	// the task, upload artifacts and report task resolution.
	//
	// The temporary credentials are set to expire after `takenUntil`. They
	// won't expire exactly at `takenUntil` but shortly after, hence, requests
	// made close to `takenUntil` won't have problems even if there is a little
	// clock drift.
	//
	// Workers should use these credentials when making requests on behalf of
	// a task. This includes requests to create artifacts, reclaim the task,
	// and report the task `completed`, `failed` or `exception`.
	//
	// Note, a new set of temporary credentials is issued when the worker
	// reclaims the task.
	Credentials TaskCredentials `json:"credentials"`

	// `run-id` assigned to this run of the task
	//
	// Minimum:    0
	// Maximum:    1000
	RunID int64 `json:"runId"`

	// A representation of **task status** as known by the queue
	Status TaskStatusStructure `json:"status"`

	// Time at which the run expires and is resolved as `exception`,
	// with reason `claim-expired` if the run hasn't been reclaimed.
	TakenUntil tcclient.Time `json:"takenUntil"`

	// Definition of a task that can be scheduled
	Task TaskDefinitionResponse `json:"task"`

	// Identifier for the worker-group within which this run started.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup"`

	// Identifier for the worker executing this run.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId"`
}

Response to a successful task claim

type TaskCredentials

type TaskCredentials struct {

	// The `accessToken` for the temporary credentials.
	//
	// Min length: 1
	AccessToken string `json:"accessToken"`

	// The `certificate` for the temporary credentials, these are required
	// for the temporary credentials to work.
	//
	// Min length: 1
	Certificate string `json:"certificate"`

	// The `clientId` for the temporary credentials.
	//
	// Min length: 1
	ClientID string `json:"clientId"`
}

Temporary credentials granting `task.scopes` and the scope: `queue:claim-task:<taskId>/<runId>` which allows the worker to reclaim the task, upload artifacts and report task resolution.

The temporary credentials are set to expire after `takenUntil`. They won't expire exactly at `takenUntil` but shortly after, hence, requests made close to `takenUntil` won't have problems even if there is a little clock drift.

Workers should use these credentials when making requests on behalf of a task. This includes requests to create artifacts, reclaim the task, and report the task `completed`, `failed` or `exception`.

Note, a new set of temporary credentials is issued when the worker reclaims the task.
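
As a sketch, a worker might copy these fields into a tcclient.Credentials value and use it for the requests it makes on behalf of the task; `claim` stands for the claim response received earlier, and the tcclient field names are assumed to mirror the ones above:

taskCreds := &tcclient.Credentials{
	ClientID:    claim.Credentials.ClientID,
	AccessToken: claim.Credentials.AccessToken,
	Certificate: claim.Credentials.Certificate,
}
// Build a queue client from taskCreds for createArtifact, reportCompleted
// and the other calls made on behalf of the task.
_ = taskCreds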

type TaskDefinitionAndStatus

type TaskDefinitionAndStatus struct {

	// A representation of **task status** as known by the queue
	Status TaskStatusStructure `json:"status"`

	// Definition of a task that can be scheduled
	Task TaskDefinitionResponse `json:"task"`
}

Task Definition and task status structure.

type TaskDefinitionRequest

type TaskDefinitionRequest struct {

	// Creation time of task
	Created tcclient.Time `json:"created"`

	// Deadline of the task, by which this task must be complete. `pending` and
	// `running` runs are resolved as **exception** if not resolved by other means
	// before the deadline. After the deadline, a task is immutable. Note,
	// deadline cannot be more than 5 days into the future
	Deadline tcclient.Time `json:"deadline"`

	// List of dependent tasks. These must either be _completed_ or _resolved_
	// before this task is scheduled. See `requires` for semantics.
	//
	// Default:    []
	//
	// Array items:
	// The `taskId` of a task that must be resolved before this task is
	// scheduled.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	Dependencies []string `json:"dependencies,omitempty"`

	// Task expiration, time at which task definition and status is deleted.
	// Notice that all artifacts for the task must have an expiration that is no
	// later than this. If this property isn't specified, it will be set to `deadline`
	// plus one year (this default may change).
	Expires tcclient.Time `json:"expires,omitempty"`

	// Object with properties that can hold any kind of extra data that should be
	// associated with the task. This can be data for the task which doesn't
	// fit into `payload`, or it can be supplementary data for use in services
	// listening for events from this task. For example this could be details to
	// display on dashboard, or information for indexing the task. Please, try
	// to put all related information under one property, so `extra` data keys
	// don't conflict.  **Warning**, do not stuff large data-sets in here --
	// task definitions should not take-up multiple MiBs.
	//
	// Default:    {}
	//
	// Additional properties allowed
	Extra json.RawMessage `json:"extra,omitempty"`

	// Required task metadata
	Metadata TaskMetadata `json:"metadata"`

	// Task-specific payload following worker-specific format.
	// Refer to the documentation for the worker implementing
	// `<provisionerId>/<workerType>` for details.
	//
	// Additional properties allowed
	Payload json.RawMessage `json:"payload"`

	// Priority of task. This defaults to `lowest` and the scope
	// `queue:create-task:<priority>/<provisionerId>/<workerType>` is required
	// to define a task with `<priority>`. The `normal` priority is treated as
	// `lowest`.
	//
	// Possible values:
	//   * "highest"
	//   * "very-high"
	//   * "high"
	//   * "medium"
	//   * "low"
	//   * "very-low"
	//   * "lowest"
	//   * "normal"
	//
	// Default:    "lowest"
	Priority string `json:"priority,omitempty"`

	// The name for the "project" with which this task is associated.  This
	// value can be used to control permission to manipulate tasks as well as
	// for usage reporting.  Project ids are typically simple identifiers,
	// optionally in a hierarchical namespace separated by `/` characters.
	// This value defaults to `none`.
	//
	// Default:    "none"
	// Syntax:     ^([a-zA-Z0-9._/-]*)$
	// Min length: 1
	// Max length: 500
	ProjectID string `json:"projectId,omitempty"`

	// Unique identifier for a provisioner, that can supply specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId,omitempty"`

	// The task's relation to its dependencies. This property specifies the
	// semantics of the `task.dependencies` property.
	// If `all-completed` is given the task will be scheduled when all
	// dependencies are resolved _completed_ (successful resolution).
	// If `all-resolved` is given the task will be scheduled when all dependencies
	// have been resolved, regardless of what their resolution is.
	//
	// Possible values:
	//   * "all-completed"
	//   * "all-resolved"
	//
	// Default:    "all-completed"
	Requires string `json:"requires,omitempty"`

	// Number of times to retry the task in case of infrastructure issues.
	// An _infrastructure issue_ is a worker node that crashes or is shut down;
	// these events are to be expected.
	//
	// Default:    5
	// Minimum:    0
	// Maximum:    49
	Retries int64 `json:"retries,omitempty"`

	// List of task-specific routes. Pulse messages about the task will be CC'ed to
	// `route.<value>` for each `<value>` in this array.
	//
	// This array has a maximum size due to a limitation of the AMQP protocol,
	// over which Pulse runs.  All routes must fit in the same "frame" of this
	// protocol, and the frames have a fixed maximum size (typically 128k).
	//
	// Default:    []
	//
	// Array items:
	// A task specific route.
	//
	// Min length: 1
	// Max length: 249
	Routes []string `json:"routes,omitempty"`

	// All tasks in a task group must have the same `schedulerId`. This is used for several purposes:
	//
	// * it can represent the entity that created the task;
	// * it can limit addition of new tasks to a task group: the caller of
	//     `createTask` must have a scope related to the `schedulerId` of the task
	//     group;
	// * it controls who can manipulate tasks, again by requiring
	//     `schedulerId`-related scopes; and
	// * it appears in the routing key for Pulse messages about the task.
	//
	// Default:    "-"
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	SchedulerID string `json:"schedulerId,omitempty"`

	// List of scopes that the task is authorized to use during its execution.
	//
	// Array items:
	// A single scope. A scope must be composed of
	// printable ASCII characters and spaces.  Scopes ending in more than
	// one `*` character are forbidden.
	//
	// Syntax:     ^[ -~]*$
	Scopes []string `json:"scopes,omitempty"`

	// Arbitrary key-value tags (only strings limited to 4k). These can be used
	// to attach informal metadata to a task. Use this for informal tags that
	// tasks can be classified by. You can also think of strings here as
	// candidates for formal metadata. Something like
	// `purpose: 'build' || 'test'` is a good example.
	//
	// Default:    {}
	//
	// Map entries:
	// Max length: 4096
	Tags map[string]string `json:"tags,omitempty"`

	// Identifier for a group of tasks scheduled together with this task.
	// Generally, all tasks related to a single event such as a version-control
	// push or a nightly build have the same `taskGroupId`.  This property
	// defaults to `taskId` if it isn't specified.  Tasks with `taskId` equal to
	// the `taskGroupId` are, [by convention](/docs/manual/using/task-graph),
	// decision tasks.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskGroupID string `json:"taskGroupId,omitempty"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId,omitempty"`

	// Unique identifier for a worker-type within a specific
	// provisioner. Deprecation is planned for this property as it will
	// be replaced, together with `provisionerId`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	WorkerType string `json:"workerType,omitempty"`
}

Definition of a task that can be scheduled
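
For illustration, a minimal sketch of filling in this structure before handing it to the queue's CreateTask method (documented earlier); the taskQueueId, metadata and payload are hypothetical, the payload schema is defined by the worker implementation rather than the queue, and the snippet uses the standard time and encoding/json packages:

task := &tcqueue.TaskDefinitionRequest{
	Created:     tcclient.Time(time.Now()),
	Deadline:    tcclient.Time(time.Now().Add(3 * time.Hour)),
	TaskQueueID: "proj-example/linux-small",
	Metadata: tcqueue.TaskMetadata{
		Name:        "example build",
		Description: "Explain here what the task does.",
		Owner:       "owner@example.com",
		Source:      "https://example.com/repo/ci/task.yml",
	},
	Payload: json.RawMessage(`{"command": ["true"]}`),
}
// Pass task to queue.CreateTask together with a freshly generated taskId.
_ = task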

type TaskDefinitionResponse

type TaskDefinitionResponse struct {

	// Creation time of task
	Created tcclient.Time `json:"created"`

	// Deadline of the task, by which this task must be complete. `pending` and
	// `running` runs are resolved as **exception** if not resolved by other means
	// before the deadline. After the deadline, a task is immutable. Note,
	// deadline cannot be more than 5 days into the future
	Deadline tcclient.Time `json:"deadline"`

	// List of dependent tasks. These must either be _completed_ or _resolved_
	// before this task is scheduled. See `requires` for semantics.
	//
	// Default:    []
	//
	// Array items:
	// The `taskId` of a task that must be resolved before this task is
	// scheduled.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	Dependencies []string `json:"dependencies"`

	// Task expiration, time at which task definition and status is deleted.
	// Notice that all artifacts for the task must have an expiration that is no
	// later than this. If this property isn't specified, it will be set to `deadline`
	// plus one year (this default may change).
	Expires tcclient.Time `json:"expires,omitempty"`

	// Object with properties that can hold any kind of extra data that should be
	// associated with the task. This can be data for the task which doesn't
	// fit into `payload`, or it can be supplementary data for use in services
	// listening for events from this task. For example this could be details to
	// display on dashboard, or information for indexing the task. Please, try
	// to put all related information under one property, so `extra` data keys
	// don't conflict.  **Warning**, do not stuff large data-sets in here --
	// task definitions should not take-up multiple MiBs.
	//
	// Default:    {}
	//
	// Additional properties allowed
	Extra json.RawMessage `json:"extra"`

	// Required task metadata
	Metadata TaskMetadata `json:"metadata"`

	// Task-specific payload following worker-specific format.
	// Refer to the documentation for the worker implementing
	// `<provisionerId>/<workerType>` for details.
	//
	// Additional properties allowed
	Payload json.RawMessage `json:"payload"`

	// Priority of task. This defaults to `lowest` and the scope
	// `queue:create-task:<priority>/<provisionerId>/<workerType>` is required
	// to define a task with `<priority>`. The `normal` priority is treated as
	// `lowest`.
	//
	// Possible values:
	//   * "highest"
	//   * "very-high"
	//   * "high"
	//   * "medium"
	//   * "low"
	//   * "very-low"
	//   * "lowest"
	//   * "normal"
	//
	// Default:    "lowest"
	Priority string `json:"priority"`

	// The name for the "project" with which this task is associated.  This
	// value can be used to control permission to manipulate tasks as well as
	// for usage reporting.  Project ids are typically simple identifiers,
	// optionally in a hierarchical namespace separated by `/` characters.
	// This value defaults to `none`.
	//
	// Default:    "none"
	// Syntax:     ^([a-zA-Z0-9._/-]*)$
	// Min length: 1
	// Max length: 500
	ProjectID string `json:"projectId,omitempty"`

	// Unique identifier for a provisioner, that can supply specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// The task's relation to its dependencies. This property specifies the
	// semantics of the `task.dependencies` property.
	// If `all-completed` is given the task will be scheduled when all
	// dependencies are resolved _completed_ (successful resolution).
	// If `all-resolved` is given the task will be scheduled when all dependencies
	// have been resolved, regardless of what their resolution is.
	//
	// Possible values:
	//   * "all-completed"
	//   * "all-resolved"
	//
	// Default:    "all-completed"
	Requires string `json:"requires"`

	// Number of times to retry the task in case of infrastructure issues.
	// An _infrastructure issue_ is a worker node that crashes or is shut down;
	// these events are to be expected.
	//
	// Default:    5
	// Minimum:    0
	// Maximum:    49
	Retries int64 `json:"retries"`

	// List of task-specific routes. Pulse messages about the task will be CC'ed to
	// `route.<value>` for each `<value>` in this array.
	//
	// This array has a maximum size due to a limitation of the AMQP protocol,
	// over which Pulse runs.  All routes must fit in the same "frame" of this
	// protocol, and the frames have a fixed maximum size (typically 128k).
	//
	// Default:    []
	//
	// Array items:
	// A task specific route.
	//
	// Min length: 1
	// Max length: 249
	Routes []string `json:"routes"`

	// All tasks in a task group must have the same `schedulerId`. This is used for several purposes:
	//
	// * it can represent the entity that created the task;
	// * it can limit addition of new tasks to a task group: the caller of
	//     `createTask` must have a scope related to the `schedulerId` of the task
	//     group;
	// * it controls who can manipulate tasks, again by requiring
	//     `schedulerId`-related scopes; and
	// * it appears in the routing key for Pulse messages about the task.
	//
	// Default:    "-"
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	SchedulerID string `json:"schedulerId"`

	// List of scopes that the task is authorized to use during its execution.
	//
	// Array items:
	// A single scope. A scope must be composed of
	// printable ASCII characters and spaces.  Scopes ending in more than
	// one `*` character are forbidden.
	//
	// Syntax:     ^[ -~]*$
	Scopes []string `json:"scopes"`

	// Arbitrary key-value tags (only strings limited to 4k). These can be used
	// to attach informal metadata to a task. Use this for informal tags that
	// tasks can be classified by. You can also think of strings here as
	// candidates for formal metadata. Something like
	// `purpose: 'build' || 'test'` is a good example.
	//
	// Default:    {}
	//
	// Map entries:
	// Max length: 4096
	Tags map[string]string `json:"tags"`

	// Identifier for a group of tasks scheduled together with this task.
	// Generally, all tasks related to a single event such as a version-control
	// push or a nightly build have the same `taskGroupId`.  This property
	// defaults to `taskId` if it isn't specified.  Tasks with `taskId` equal to
	// the `taskGroupId` are, [by convention](/docs/manual/using/task-graph),
	// decision tasks.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskGroupID string `json:"taskGroupId"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId"`

	// Unique identifier for a worker-type within a specific
	// provisioner. Deprecation is planned for this property as it will
	// be replaced, together with `provisionerId`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	WorkerType string `json:"workerType"`
}

Definition of a task that can be scheduled

type TaskDefinitionResponse1 added in v64.1.0

type TaskDefinitionResponse1 struct {

	// A continuation token is returned if there are more results than listed
	// here. You can optionally provide the token in the request payload to
	// load the additional results.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// Default:    []
	Tasks []Var `json:"tasks"`
}

Definitions of multiple tasks

type TaskDefinitionsResponse added in v64.1.0

type TaskDefinitionsResponse struct {

	// Default:    []
	//
	// Array items:
	// ID of a task to list
	TaskIds []string `json:"taskIds,omitempty"`
}

Request to list definitions of multiple tasks.

type TaskExceptionRequest

type TaskExceptionRequest struct {

	// Reason that the task is resolved with an exception. This is a subset
	// of the values for `reasonResolved` given in the task status structure.
	//
	// * **Report `worker-shutdown`** if the run failed because the worker
	// had to shut down (spot node disappearing). In case of `worker-shutdown`
	// the queue will immediately **retry** the task, by making a new run.
	// This is much faster than ignoring the issue and letting the task _retry_
	// by claim expiration. For any other _reason_ reported, the queue will not
	// retry the task.
	//
	// * **Report `malformed-payload`** if the `task.payload` doesn't match the
	// schema for the worker payload, or a referenced resource doesn't exist.
	// In either case, you should still log the error to a log file for the
	// specific run.
	//
	// * **Report `resource-unavailable`** if a resource/service needed or
	// referenced in `task.payload` is _temporarily_ unavailable. Do not use this
	// unless you know the resource exists, if the resource doesn't exist you
	// should report `malformed-payload`. An example use-case: if you contact the
	// index (a service) on behalf of the task, because of a declaration in
	// `task.payload`, and the service (index) is temporarily down. Don't use
	// this if a URL returns 404, but if it returns 503 or hits a timeout when
	// you retry the request, then this _may_ be a valid exception. The queue
	// assumes that workers have applied retries as needed, and will not retry
	//  the task.
	//
	// * **Report `internal-error`** if the worker experienced an unhandled internal
	// error from which it couldn't recover. The queue will not retry runs
	// resolved with this reason, but you are clearly signaling that this is a
	// bug in the worker code.
	//
	// * **Report `intermittent-task`** if the task explicitly requested a retry
	// because the task is intermittent. Workers can choose whether or not to
	// support this, but workers shouldn't blindly report this for every task
	// that fails.
	//
	// Possible values:
	//   * "worker-shutdown"
	//   * "malformed-payload"
	//   * "resource-unavailable"
	//   * "internal-error"
	//   * "intermittent-task"
	Reason string `json:"reason"`
}

Request for a run of a task to be resolved with an exception

type TaskGroupDefinitionResponse

type TaskGroupDefinitionResponse struct {

	// Date and time after the last expiration of any task in the task group.
	// For an unsealed task group, this could change to a later date.
	Expires tcclient.Time `json:"expires"`

	// All tasks in a task group must have the same `schedulerId`. This is used for several purposes:
	//
	// * it can represent the entity that created the task;
	// * it can limit addition of new tasks to a task group: the caller of
	//     `createTask` must have a scope related to the `schedulerId` of the task
	//     group;
	// * it controls who can manipulate tasks, again by requiring
	//     `schedulerId`-related scopes; and
	// * it appears in the routing key for Pulse messages about the task.
	//
	// Default:    "-"
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	SchedulerID string `json:"schedulerId"`

	// Empty or date and time when task group was sealed.
	Sealed tcclient.Time `json:"sealed,omitempty"`

	// Identifier for the task-group.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskGroupID string `json:"taskGroupId"`
}

Response containing information about a task group.

type TaskMetadata

type TaskMetadata struct {

	// Human readable description of the task, please **explain** what the
	// task does. A few lines of documentation is not going to hurt you.
	//
	// Max length: 32768
	Description string `json:"description"`

	// Human readable name of the task, used to very briefly give an idea about
	// what the task does.
	//
	// Max length: 255
	Name string `json:"name"`

	// Entity who caused this task, not necessarily a person with an email who
	// did `hg push`, as it could be an automation bot as well. This is the
	// entity we should contact to ask why this task is here.
	//
	// Max length: 255
	Owner string `json:"owner"`

	// Link to the source of this task; it should specify a file, revision and
	// repository. This should be a place someone can go and do a git/hg blame
	// to find out who came up with the recipe for this task.
	//
	// Syntax:     ^(https?://|ssh://|git@)
	// Max length: 4096
	// Any of:
	//   * Source
	//   * Source1
	Source string `json:"source"`
}

Required task metadata

type TaskQueue

type TaskQueue struct {

	// Description of the task queue.
	Description string `json:"description"`

	// Date and time after which the task queue will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Date and time when the task queue was last seen active
	LastDateActive tcclient.Time `json:"lastDateActive"`

	// This is the stability of the task queue. Accepted values:
	//  * `experimental`
	//  * `stable`
	//  * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId"`
}

type TaskQueueResponse

type TaskQueueResponse struct {

	// Description of the task queue.
	Description string `json:"description"`

	// Date and time after which the task queue will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Date of the last time this task queue was seen active. Updated each time a worker calls
	// `queue.claimWork`, `queue.reclaimTask`, and `queue.declareWorker` for this task queue.
	// `lastDateActive` is updated every half hour but may be off by up to half an hour.
	// Nonetheless, `lastDateActive` is a good indicator of when the task queue was last seen active.
	LastDateActive tcclient.Time `json:"lastDateActive"`

	// This is the stability of the task queue. Accepted values:
	//   * `experimental`
	//   * `stable`
	//   * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId"`
}

Response to a task queue request from a provisioner.

type TaskReclaimResponse

type TaskReclaimResponse struct {

	// Temporary credentials granting `task.scopes` and the scope:
	// `queue:claim-task:<taskId>/<runId>` which allows the worker to reclaim
	// the task, upload artifacts and report task resolution.
	//
	// The temporary credentials are set to expire after `takenUntil`. They
	// won't expire exactly at `takenUntil` but shortly after, hence, requests
	// made close to `takenUntil` won't have problems even if there is a little
	// clock drift.
	//
	// Workers should use these credentials when making requests on behalf of
	// a task. This includes requests to create artifacts, reclaim the task,
	// and report the task `completed`, `failed` or `exception`.
	//
	// Note, a new set of temporary credentials is issued when the worker
	// reclaims the task.
	Credentials TaskCredentials `json:"credentials"`

	// `run-id` assigned to this run of the task
	//
	// Minimum:    0
	// Maximum:    1000
	RunID int64 `json:"runId"`

	// A representation of **task status** as known by the queue
	Status TaskStatusStructure `json:"status"`

	// Time at which the run expires and is resolved as `exception`,
	// with reason `claim-expired` if the run hasn't been reclaimed.
	TakenUntil tcclient.Time `json:"takenUntil"`

	// Identifier for the worker-group within which this run started.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup"`

	// Identifier for the worker executing this run.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId"`
}

Response to a successful task claim

type TaskRun

type TaskRun struct {

	// Id of this task run; `run-id`s always start from `0`
	//
	// Minimum:    0
	// Maximum:    1000
	RunID int64 `json:"runId"`

	// Unique task identifier, this is UUID encoded as
	// [URL-safe base64](http://tools.ietf.org/html/rfc4648#section-5) and
	// stripped of `=` padding.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskID string `json:"taskId"`
}

A run of a task.

type TaskStatusResponse

type TaskStatusResponse struct {

	// A representation of **task status** as known by the queue
	Status TaskStatusStructure `json:"status"`
}

Response to a task status request

type TaskStatusStructure

type TaskStatusStructure struct {

	// Deadline of the task, by which this task must be complete. `pending` and
	// `running` runs are resolved as **exception** if not resolved by other means
	// before the deadline. After the deadline, a task is immutable. Note,
	// deadline cannot be more than 5 days into the future
	Deadline tcclient.Time `json:"deadline"`

	// Task expiration, time at which task definition and
	// status is deleted. Notice that all artifacts for the task
	// must have an expiration that is no later than this.
	Expires tcclient.Time `json:"expires"`

	// The name for the "project" with which this task is associated.  This
	// value can be used to control permission to manipulate tasks as well as
	// for usage reporting.  Project ids are typically simple identifiers,
	// optionally in a hierarchical namespace separated by `/` characters.
	// This value defaults to `none`.
	//
	// Default:    "none"
	// Syntax:     ^([a-zA-Z0-9._/-]*)$
	// Min length: 1
	// Max length: 500
	ProjectID string `json:"projectId"`

	// Unique identifier for a provisioner, that can supply specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// Number of retries left for the task in case of infrastructure issues
	//
	// Minimum:    0
	// Maximum:    999
	RetriesLeft int64 `json:"retriesLeft"`

	// List of runs, ordered so that index `i` has `runId == i`
	Runs []RunInformation `json:"runs"`

	// All tasks in a task group must have the same `schedulerId`. This is used for several purposes:
	//
	// * it can represent the entity that created the task;
	// * it can limit addition of new tasks to a task group: the caller of
	//     `createTask` must have a scope related to the `schedulerId` of the task
	//     group;
	// * it controls who can manipulate tasks, again by requiring
	//     `schedulerId`-related scopes; and
	// * it appears in the routing key for Pulse messages about the task.
	//
	// Default:    "-"
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	SchedulerID string `json:"schedulerId"`

	// State of this task. This is just an auxiliary property derived from the
	// state of the latest run, or `unscheduled` if none.
	//
	// Possible values:
	//   * "unscheduled"
	//   * "pending"
	//   * "running"
	//   * "completed"
	//   * "failed"
	//   * "exception"
	State string `json:"state"`

	// Identifier for a group of tasks scheduled together with this task.
	// Generally, all tasks related to a single event such as a version-control
	// push or a nightly build have the same `taskGroupId`.  This property
	// defaults to `taskId` if it isn't specified.  Tasks with `taskId` equal to
	// the `taskGroupId` are, [by convention](/docs/manual/using/task-graph),
	// decision tasks.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskGroupID string `json:"taskGroupId"`

	// Unique task identifier; this is a UUID encoded as
	// [URL-safe base64](http://tools.ietf.org/html/rfc4648#section-5) and
	// stripped of `=` padding.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskID string `json:"taskId"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId"`

	// Unique identifier for a worker-type within a specific
	// provisioner. Deprecation is planned for this property as it will
	// be replaced, together with `provisionerId`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	WorkerType string `json:"workerType"`
}

A representation of **task status** as known by the queue
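
A small sketch of interpreting this structure, for instance to decide whether a task is resolved; the helper name `isResolved` is illustrative, and the state strings are the documented values above.

package main

import (
	"fmt"

	"github.com/taskcluster/taskcluster/v64/clients/client-go/tcqueue"
)

// isResolved reports whether the queue considers the task finished, based on
// the documented State values.
func isResolved(s tcqueue.TaskStatusStructure) bool {
	switch s.State {
	case "completed", "failed", "exception":
		return true
	default: // "unscheduled", "pending", "running"
		return false
	}
}

func main() {
	s := tcqueue.TaskStatusStructure{State: "running"}
	fmt.Println("resolved:", isResolved(s))
	// Runs are ordered so that index i has runId == i, so the latest run
	// (if any) has runId len(s.Runs)-1.
	if n := len(s.Runs); n > 0 {
		fmt.Println("latest runId:", n-1)
	}
}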

type TasksStatusesResponse added in v64.1.0

type TasksStatusesResponse struct {

	// A continuation token is returned if there are more results than listed
	// here. You can optionally provide the token in the request payload to
	// load the additional results.
	ContinuationToken string `json:"continuationToken,omitempty"`

	// Default:    []
	Statuses []Var1 `json:"statuses"`
}

Status of multiple tasks
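
The continuation token is consumed in the usual paging loop. The sketch below leaves the actual queue call as a stub (`fetchStatuses`), since only the response type is shown here; the stub and its signature are assumptions for illustration.

package main

import (
	"fmt"

	"github.com/taskcluster/taskcluster/v64/clients/client-go/tcqueue"
)

func main() {
	// Stub standing in for whichever queue call returns a
	// TasksStatusesResponse; real code would pass the token in the request
	// payload as described above.
	fetchStatuses := func(continuationToken string) (*tcqueue.TasksStatusesResponse, error) {
		return &tcqueue.TasksStatusesResponse{}, nil
	}

	token := ""
	for {
		page, err := fetchStatuses(token)
		if err != nil {
			// handle error...
			return
		}
		for _, s := range page.Statuses {
			fmt.Println(s.TaskID, s.Status.State)
		}
		if page.ContinuationToken == "" {
			break // no more pages
		}
		token = page.ContinuationToken
	}
}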

type Var

type Var struct {

	// Definition of a task that can be scheduled
	Task TaskDefinitionResponse `json:"task"`

	TaskID string `json:"taskId"`
}

type Var1

type Var1 struct {

	// A representation of **task status** as known by the queue
	Status TaskStatusStructure `json:"status"`

	TaskID string `json:"taskId"`
}

type Var2 added in v64.1.0

type Var2 struct {
	Inserted tcclient.Time `json:"inserted"`

	// Unique run identifier; this is a number between 0 and 1000 inclusive.
	//
	// Minimum:    0
	// Maximum:    1000
	RunID int64 `json:"runId"`

	// Definition of a task that can be scheduled
	Task TaskDefinitionResponse `json:"task"`

	// Unique task identifier; this is a UUID encoded as
	// [URL-safe base64](http://tools.ietf.org/html/rfc4648#section-5) and
	// stripped of `=` padding.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskID string `json:"taskId"`
}

type Var3 added in v64.1.0

type Var3 struct {
	Claimed tcclient.Time `json:"claimed"`

	// Unique run identifier; this is a number between 0 and 1000 inclusive.
	//
	// Minimum:    0
	// Maximum:    1000
	RunID int64 `json:"runId"`

	// Definition of a task that can be scheduled
	Task TaskDefinitionResponse `json:"task"`

	// Unique task identifier; this is a UUID encoded as
	// [URL-safe base64](http://tools.ietf.org/html/rfc4648#section-5) and
	// stripped of `=` padding.
	//
	// Syntax:     ^[A-Za-z0-9_-]{8}[Q-T][A-Za-z0-9_-][CGKOSWaeimquy26-][A-Za-z0-9_-]{10}[AQgw]$
	TaskID string `json:"taskId"`

	WorkerGroup string `json:"workerGroup"`

	WorkerID string `json:"workerId"`
}

type Worker

type Worker struct {

	// Date of the first time this worker claimed a task.
	FirstClaim tcclient.Time `json:"firstClaim"`

	// Date of the last time this worker was seen active. Updated each time a worker calls
	// `queue.claimWork`, `queue.reclaimTask`, or `queue.declareWorker` for this task queue.
	// `lastDateActive` is updated every half hour but may be off by up to half an hour.
	// Nonetheless, `lastDateActive` is a good indicator of when the worker was last seen active.
	// This defaults to null in the database, and is set to the current time when the worker
	// is first seen.
	LastDateActive tcclient.Time `json:"lastDateActive,omitempty"`

	// A run of a task.
	LatestTask TaskRun `json:"latestTask,omitempty"`

	// Quarantining a worker allows the machine to remain alive but not accept jobs.
	// Once the quarantineUntil time has elapsed, the worker resumes accepting jobs.
	// Note that a quarantine can be lifted by setting `quarantineUntil` to the present time (or
	// somewhere in the past).
	QuarantineUntil tcclient.Time `json:"quarantineUntil,omitempty"`

	// Identifier for the worker group containing this worker.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup"`

	// Identifier for this worker (unique within this worker group).
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId"`
}
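
Given the semantics of `quarantineUntil` described above, checking whether a worker is currently quarantined is a simple time comparison. The helper name and sample field values below are illustrative, and the import paths assume the v64 module layout.

package main

import (
	"fmt"
	"time"

	tcclient "github.com/taskcluster/taskcluster/v64/clients/client-go"
	"github.com/taskcluster/taskcluster/v64/clients/client-go/tcqueue"
)

// isQuarantined reports whether the worker's quarantine is still in effect;
// once quarantineUntil has elapsed the worker resumes accepting jobs.
func isQuarantined(w tcqueue.Worker) bool {
	return time.Now().Before(time.Time(w.QuarantineUntil))
}

func main() {
	w := tcqueue.Worker{
		WorkerGroup:     "us-east-1",
		WorkerID:        "i-0123456789abcdef0",
		QuarantineUntil: tcclient.Time(time.Now().Add(2 * time.Hour)),
	}
	fmt.Println("quarantined:", isQuarantined(w)) // true for the next two hours
}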

type WorkerAction

type WorkerAction struct {

	// Only actions with the context `worker` are included.
	//
	// Possible values:
	//   * "worker"
	Context string `json:"context"`

	// Description of the action.
	Description string `json:"description"`

	// Method to indicate the desired action to be performed for a given resource.
	//
	// Possible values:
	//   * "POST"
	//   * "PUT"
	//   * "DELETE"
	//   * "PATCH"
	Method string `json:"method"`

	// Short names for things like logging/error messages.
	Name string `json:"name"`

	// Appropriate title for any sort of Modal prompt.
	Title json.RawMessage `json:"title"`

	// When an action is triggered, a request is made using the `url` and `method`.
	// Depending on the `context`, the following parameters will be substituted in the url:
	//
	// | `context`   | Path parameters                                          |
	// |-------------|----------------------------------------------------------|
	// | provisioner | <provisionerId>                                          |
	// | worker-type | <provisionerId>, <workerType>                            |
	// | worker      | <provisionerId>, <workerType>, <workerGroup>, <workerId> |
	//
	// _Note: The request needs to be signed with the user's Taskcluster credentials._
	URL string `json:"url"`
}

Actions provide a generic mechanism to expose additional features of a provisioner, worker type, or worker to Taskcluster clients.

An action is comprised of metadata describing the feature it exposes, together with a webhook for triggering it.

The Taskcluster tools site, for example, retrieves actions when displaying provisioners, worker types, and workers. It presents the provisioner/worker type/worker specific actions to the user. When the user triggers an action, the web client takes the registered webhook, substitutes parameters into the URL (see `url`), signs the request with the Taskcluster credentials of the user operating the web interface, and issues the HTTP request.

The level to which the action relates (provisioner, worker type, worker) is called the action context. All actions, regardless of the action contexts, are registered against the provisioner when calling `queue.declareProvisioner`.

The action context is used by the web client to determine where in the web interface to present the action to the user as follows:

| `context`   | Tool where action is displayed |
|-------------|--------------------------------|
| provisioner | Provisioner Explorer           |
| worker-type | Workers Explorer               |
| worker      | Worker Explorer                |

See [actions docs](/docs/reference/platform/taskcluster-queue/docs/actions) for more information.
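
As a sketch of the client side of this, the path parameters can be substituted into a worker-context action URL before the (signed) request is issued. The example host, helper name, and parameter values are illustrative, the placeholders are assumed to appear literally in the URL as shown in the table above, and the actual request signing with Taskcluster credentials is omitted.

package main

import (
	"fmt"
	"strings"

	"github.com/taskcluster/taskcluster/v64/clients/client-go/tcqueue"
)

// expandWorkerActionURL substitutes the worker-context path parameters into
// an action's URL, assuming the placeholders appear literally as
// <provisionerId>, <workerType>, <workerGroup> and <workerId>.
func expandWorkerActionURL(a tcqueue.WorkerAction, provisionerID, workerType, workerGroup, workerID string) string {
	return strings.NewReplacer(
		"<provisionerId>", provisionerID,
		"<workerType>", workerType,
		"<workerGroup>", workerGroup,
		"<workerId>", workerID,
	).Replace(a.URL)
}

func main() {
	a := tcqueue.WorkerAction{
		Context: "worker",
		Method:  "POST",
		URL:     "https://my-provisioner.example.com/<provisionerId>/<workerType>/<workerGroup>/<workerId>/reboot",
	}
	// The resulting URL would then be requested with a.Method and signed
	// with the caller's Taskcluster credentials.
	fmt.Println(a.Method, expandWorkerActionURL(a, "my-provisioner", "b-linux", "us-east-1", "i-0123456789abcdef0"))
}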

type WorkerRequest

type WorkerRequest struct {

	// Date and time after which the worker will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires,omitempty"`
}

Request to update a worker.
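
A sketch of using this request type to extend a worker's lifetime. It assumes the generated method for the `queue.declareWorker` endpoint mentioned above is `DeclareWorker(provisionerId, workerType, workerGroup, workerId string, payload *WorkerRequest) (*WorkerResponse, error)`; check the package index for the exact signature. The identifiers and the 24-hour value are placeholders.

package main

import (
	"log"
	"time"

	tcclient "github.com/taskcluster/taskcluster/v64/clients/client-go"
	"github.com/taskcluster/taskcluster/v64/clients/client-go/tcqueue"
)

func main() {
	queue := tcqueue.NewFromEnv() // credentials and root URL from TASKCLUSTER_* env vars

	payload := &tcqueue.WorkerRequest{
		// Keep the worker around for another 24 hours (illustrative value).
		Expires: tcclient.Time(time.Now().Add(24 * time.Hour)),
	}
	resp, err := queue.DeclareWorker("my-provisioner", "b-linux", "us-east-1", "i-0123456789abcdef0", payload)
	if err != nil {
		log.Fatalf("declareWorker failed: %v", err)
	}
	log.Printf("worker now expires at %v", time.Time(resp.Expires))
}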

type WorkerResponse

type WorkerResponse struct {
	Actions []WorkerAction `json:"actions"`

	// Date and time after which the worker will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Date of the first time this worker claimed a task.
	FirstClaim tcclient.Time `json:"firstClaim"`

	// Date of the last time this worker was seen active. Updated each time a worker calls
	// `queue.claimWork`, `queue.reclaimTask`, or `queue.declareWorker` for this task queue.
	// `lastDateActive` is updated every half hour but may be off by up to half an hour.
	// Nonetheless, `lastDateActive` is a good indicator of when the worker was last seen active.
	// This defaults to null in the database, and is set to the current time when the worker
	// is first seen.
	LastDateActive tcclient.Time `json:"lastDateActive,omitempty"`

	// Unique identifier for a provisioner that can supply the specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// This is a list of changes to the worker's quarantine status. Each entry is an object
	// containing information about the time, clientId and reason for the change.
	QuarantineDetails []QuarantineDetails `json:"quarantineDetails,omitempty"`

	// Quarantining a worker allows the machine to remain alive but not accept jobs.
	// Once the quarantineUntil time has elapsed, the worker resumes accepting jobs.
	// Note that a quarantine can be lifted by setting `quarantineUntil` to the present time (or
	// somewhere in the past).
	QuarantineUntil tcclient.Time `json:"quarantineUntil,omitempty"`

	// List of 20 most recent tasks claimed by the worker.
	RecentTasks []TaskRun `json:"recentTasks"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId,omitempty"`

	// Identifier for the worker group that the worker executing this run is
	// part of; this identifier is mainly used for efficient routing.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerGroup string `json:"workerGroup"`

	// Identifier for the worker evaluating this run within the given
	// `workerGroup`.
	//
	// Syntax:     ^([a-zA-Z0-9-_]*)$
	// Min length: 1
	// Max length: 38
	WorkerID string `json:"workerId"`

	// Unique identifier for a worker-type within a specific
	// provisioner. Deprecation is planned for this property as it will
	// be replaced, together with `provisionerId`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	WorkerType string `json:"workerType"`
}

Response containing information about a worker.

type WorkerType

type WorkerType struct {

	// Description of the worker-type.
	Description string `json:"description"`

	// Date and time after which the worker-type will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Date and time when the worker-type was last seen active
	LastDateActive tcclient.Time `json:"lastDateActive"`

	// Unique identifier for a provisioner that can supply the specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// This is the stability of the worker-type. Accepted values:
	//  * `experimental`
	//  * `stable`
	//  * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId"`

	// Unique identifier for a worker-type within a specific
	// provisioner. Deprecation is planned for this property as it will
	// be replaced, together with `provisionerId`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	WorkerType string `json:"workerType"`
}

type WorkerTypeAction

type WorkerTypeAction struct {

	// Only actions with the context `worker-type` are included.
	//
	// Possible values:
	//   * "worker-type"
	Context string `json:"context"`

	// Description of the action.
	Description string `json:"description"`

	// Method to indicate the desired action to be performed for a given resource.
	//
	// Possible values:
	//   * "POST"
	//   * "PUT"
	//   * "DELETE"
	//   * "PATCH"
	Method string `json:"method"`

	// Short names for things like logging/error messages.
	Name string `json:"name"`

	// Appropriate title for any sort of Modal prompt.
	Title json.RawMessage `json:"title"`

	// When an action is triggered, a request is made using the `url` and `method`.
	// Depending on the `context`, the following parameters will be substituted in the url:
	//
	// | `context`   | Path parameters                                          |
	// |-------------|----------------------------------------------------------|
	// | provisioner | <provisionerId>                                          |
	// | worker-type | <provisionerId>, <workerType>                            |
	// | worker      | <provisionerId>, <workerType>, <workerGroup>, <workerId> |
	//
	// _Note: The request needs to be signed with the user's Taskcluster credentials._
	URL string `json:"url"`
}

Actions provide a generic mechanism to expose additional features of a provisioner, worker type, or worker to Taskcluster clients.

An action is comprised of metadata describing the feature it exposes, together with a webhook for triggering it.

The Taskcluster tools site, for example, retrieves actions when displaying provisioners, worker types, and workers. It presents the provisioner/worker type/worker specific actions to the user. When the user triggers an action, the web client takes the registered webhook, substitutes parameters into the URL (see `url`), signs the request with the Taskcluster credentials of the user operating the web interface, and issues the HTTP request.

The level to which the action relates (provisioner, worker type, worker) is called the action context. All actions, regardless of the action contexts, are registered against the provisioner when calling `queue.declareProvisioner`.

The action context is used by the web client to determine where in the web interface to present the action to the user as follows:

| `context`   | Tool where action is displayed |
|-------------|--------------------------------|
| provisioner | Provisioner Explorer           |
| worker-type | Workers Explorer               |
| worker      | Worker Explorer                |

See [actions docs](/docs/reference/platform/taskcluster-queue/docs/actions) for more information.

type WorkerTypeRequest

type WorkerTypeRequest struct {

	// Description of the worker-type.
	Description string `json:"description,omitempty"`

	// Date and time after which the worker-type will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires,omitempty"`

	// This is the stability of the worker-type. Accepted values:
	//   * `experimental`
	//   * `stable`
	//   * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability,omitempty"`
}

Request to update a worker-type.

type WorkerTypeResponse

type WorkerTypeResponse struct {
	Actions []WorkerTypeAction `json:"actions"`

	// Description of the worker-type.
	Description string `json:"description"`

	// Date and time after which the worker-type will be automatically
	// deleted by the queue.
	Expires tcclient.Time `json:"expires"`

	// Date of the last time this worker-type was seen active. `lastDateActive` is updated every half hour
	// but may be off by up to half an hour. Nonetheless, `lastDateActive` is a good indicator
	// of when the worker-type was last seen active.
	LastDateActive tcclient.Time `json:"lastDateActive"`

	// Unique identifier for a provisioner that can supply the specified
	// `workerType`. Deprecation is planned for this property as it
	// will be replaced, together with `workerType`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}$
	ProvisionerID string `json:"provisionerId"`

	// This is the stability of the worker-type. Accepted values:
	//   * `experimental`
	//   * `stable`
	//   * `deprecated`
	//
	// Possible values:
	//   * "experimental"
	//   * "stable"
	//   * "deprecated"
	Stability string `json:"stability"`

	// Unique identifier for a task queue
	//
	// Syntax:     ^[a-zA-Z0-9-_]{1,38}/[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	TaskQueueID string `json:"taskQueueId"`

	// Unique identifier for a worker-type within a specific
	// provisioner. Deprecation is planned for this property as it will
	// be replaced, together with `provisionerId`, by the new
	// identifier `taskQueueId`.
	//
	// Syntax:     ^[a-z]([-a-z0-9]{0,36}[a-z0-9])?$
	WorkerType string `json:"workerType"`
}

Response to a worker-type request from a provisioner.
