tensorflow: github.com/tensorflow/tensorflow/tensorflow/go/op Index | Examples | Files

package op

import "github.com/tensorflow/tensorflow/tensorflow/go/op"

Package op defines functions for adding TensorFlow operations to a Graph.

Functions for adding an operation to a graph take a Scope object as the first argument. The Scope object encapsulates a graph and a set of properties (such as a name prefix) for all operations being added to the graph.

WARNING: The API in this package has not been finalized and can change without notice.

Code:

// This example creates a Graph that multiplies a constant matrix with
// a matrix to be provided during graph execution (via
// tensorflow.Session).
s := NewScope()
input := Placeholder(s, tf.Float) // Matrix to be provided to Session.Run
output := MatMul(s,
    Const(s, [][]float32{{10}, {20}}), // Constant 2x1 matrix
    input,
    MatMulTransposeB(true))
if s.Err() != nil {
    panic(s.Err())
}
// Shape of the product: The number of rows is fixed by m1, but the
// number of columns will depend on m2, which is unknown.
fmt.Println(output.Shape())

Output:

[2, ?]

Index

Examples

Package Files

generate.go gradients.go op.go scope.go wrappers.go

func Abort Uses

func Abort(scope *Scope, optional ...AbortAttr) (o *tf.Operation)

Raises an exception to abort the process when called.

If exit_without_error is true, the process will exit normally; otherwise it will exit with a SIGABRT signal.

Returns nothing but an exception.

Returns the created operation.

func Abs Uses

func Abs(scope *Scope, x tf.Output) (y tf.Output)

Computes the absolute value of a tensor.

Given a tensor `x`, this operation returns a tensor containing the absolute value of each element in `x`. For example, if x is an input element and y is an output element, this operation computes \\(y = |x|\\).

func AccumulateNV2 Uses

func AccumulateNV2(scope *Scope, inputs []tf.Output, shape tf.Shape) (sum tf.Output)

Returns the element-wise sum of a list of tensors.

`tf.accumulate_n_v2` performs the same operation as `tf.add_n`, but does not wait for all of its inputs to be ready before beginning to sum. This can save memory if inputs are ready at different times, since minimum temporary storage is proportional to the output size rather than the inputs size.

Unlike the original `accumulate_n`, `accumulate_n_v2` is differentiable.

Returns a `Tensor` of same shape and type as the elements of `inputs`.

Arguments:

inputs: A list of `Tensor` objects, each with same shape and type.
shape: Shape of elements of `inputs`.

func Acos Uses

func Acos(scope *Scope, x tf.Output) (y tf.Output)

Computes acos of x element-wise.

func Acosh Uses

func Acosh(scope *Scope, x tf.Output) (y tf.Output)

Computes inverse hyperbolic cosine of x element-wise.

func Add Uses

func Add(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x + y element-wise.

*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
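
For illustration (a minimal sketch, not part of the generated documentation), broadcasting lets a vector be combined with each row of a matrix, in the style of the package example at the top:

    s := NewScope()
    m := Const(s, [][]float32{{1, 2}, {3, 4}}) // shape [2, 2]
    v := Const(s, []float32{10, 20})           // shape [2], broadcast across rows
    sum := Add(s, m, v)                        // element-wise: [[11, 22], [13, 24]]
    if s.Err() != nil {
        panic(s.Err())
    }
    fmt.Println(sum.Shape()) // [2, 2]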

func AddManySparseToTensorsMap Uses

func AddManySparseToTensorsMap(scope *Scope, sparse_indices tf.Output, sparse_values tf.Output, sparse_shape tf.Output, optional ...AddManySparseToTensorsMapAttr) (sparse_handles tf.Output)

Add an `N`-minibatch `SparseTensor` to a `SparseTensorsMap`, return `N` handles.

A `SparseTensor` of rank `R` is represented by three tensors: `sparse_indices`, `sparse_values`, and `sparse_shape`, where

    sparse_indices.shape[1] == sparse_shape.shape[0] == R

An `N`-minibatch of `SparseTensor` objects is represented as a `SparseTensor` having a first `sparse_indices` column taking values between `[0, N)`, where the minibatch size `N == sparse_shape[0]`.

The input `SparseTensor` must have rank `R` greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the `SparseTensor` must be sorted in increasing order of this first dimension. The stored `SparseTensor` objects pointed to by each row of the output `sparse_handles` will have rank `R-1`.

The `SparseTensor` values can then be read out as part of a minibatch by passing the given keys as vector elements to `TakeManySparseFromTensorsMap`. To ensure the correct `SparseTensorsMap` is accessed, ensure that the same `container` and `shared_name` are passed to that Op. If no `shared_name` is provided here, instead use the *name* of the Operation created by calling `AddManySparseToTensorsMap` as the `shared_name` passed to `TakeManySparseFromTensorsMap`. Ensure the Operations are colocated.

Arguments:

sparse_indices: 2-D.  The `indices` of the minibatch `SparseTensor`.

`sparse_indices[:, 0]` must be ordered values in `[0, N)`.

sparse_values: 1-D.  The `values` of the minibatch `SparseTensor`.
sparse_shape: 1-D.  The `shape` of the minibatch `SparseTensor`.

The minibatch size `N == sparse_shape[0]`.

Returns 1-D. The handles of the `SparseTensor` now stored in the `SparseTensorsMap`. Shape: `[N]`.

func AddN Uses

func AddN(scope *Scope, inputs []tf.Output) (sum tf.Output)

Add all input tensors element-wise.

Arguments:

inputs: Must all be the same size and shape.

func AddSparseToTensorsMap Uses

func AddSparseToTensorsMap(scope *Scope, sparse_indices tf.Output, sparse_values tf.Output, sparse_shape tf.Output, optional ...AddSparseToTensorsMapAttr) (sparse_handle tf.Output)

Add a `SparseTensor` to a `SparseTensorsMap` return its handle.

A `SparseTensor` is represented by three tensors: `sparse_indices`, `sparse_values`, and `sparse_shape`.

This operator takes the given `SparseTensor` and adds it to a container object (a `SparseTensorsMap`). A unique key within this container is generated in the form of an `int64`, and this is the value that is returned.

The `SparseTensor` can then be read out as part of a minibatch by passing the key as a vector element to `TakeManySparseFromTensorsMap`. To ensure the correct `SparseTensorsMap` is accessed, ensure that the same `container` and `shared_name` are passed to that Op. If no `shared_name` is provided here, instead use the *name* of the Operation created by calling `AddSparseToTensorsMap` as the `shared_name` passed to `TakeManySparseFromTensorsMap`. Ensure the Operations are colocated.

Arguments:

sparse_indices: 2-D.  The `indices` of the `SparseTensor`.
sparse_values: 1-D.  The `values` of the `SparseTensor`.
sparse_shape: 1-D.  The `shape` of the `SparseTensor`.

Returns 0-D. The handle of the `SparseTensor` now stored in the `SparseTensorsMap`.

func AddV2 Uses

func AddV2(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x + y element-wise.

*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func AdjustContrast Uses

func AdjustContrast(scope *Scope, images tf.Output, contrast_factor tf.Output, min_value tf.Output, max_value tf.Output) (output tf.Output)

Deprecated. Disallowed in GraphDef version >= 2.

DEPRECATED at GraphDef version 2: Use AdjustContrastv2 instead

func AdjustContrastv2 Uses

func AdjustContrastv2(scope *Scope, images tf.Output, contrast_factor tf.Output) (output tf.Output)

Adjust the contrast of one or more images.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, the Op first computes the mean of the image pixels in the channel and then adjusts each component of each pixel to `(x - mean) * contrast_factor + mean`.

Arguments:

images: Images to adjust.  At least 3-D.
contrast_factor: A float multiplier for adjusting contrast.

Returns The contrast-adjusted image or images.

func AdjustHue Uses

func AdjustHue(scope *Scope, images tf.Output, delta tf.Output) (output tf.Output)

Adjust the hue of one or more images.

`images` is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A delta is then applied to all the hue values, and the result is then remapped back to the RGB colorspace.

Arguments:

images: Images to adjust.  At least 3-D.
delta: A float delta to add to the hue.

Returns The hue-adjusted image or images.

func AdjustSaturation Uses

func AdjustSaturation(scope *Scope, images tf.Output, scale tf.Output) (output tf.Output)

Adjust the saturation of one or more images.

`images` is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A scale is then applied to all the saturation values, and the result is then remapped back to the RGB colorspace.

Arguments:

images: Images to adjust.  At least 3-D.
scale: A float scale to add to the saturation.

Returns The saturation-adjusted image or images.

func All Uses

func All(scope *Scope, input tf.Output, axis tf.Output, optional ...AllAttr) (output tf.Output)

Computes the "logical and" of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
axis: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.
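
As a hedged sketch (not from the generated docs) of how the reduction is wired up, following the package example's conventions:

    s := NewScope()
    input := Const(s, [][]bool{{true, true}, {true, false}})
    axis := Const(s, int32(1))     // reduce across each row
    allTrue := All(s, input, axis) // evaluates to [true, false] when run in a Session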

func AllCandidateSampler Uses

func AllCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, optional ...AllCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a learned unigram distribution.

See explanations of candidate sampling and the data formats at go/candidate-sampling.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the

IDs of the num_true target_classes in the corresponding original label.

num_true: Number of true labels per context.
num_sampled: Number of candidates to produce.
unique: If unique is true, we sample with rejection, so that all sampled

candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.

Returns A vector of length num_sampled, in which each element is the ID of a sampled candidate. A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability. A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.

func Angle Uses

func Angle(scope *Scope, input tf.Output, optional ...AngleAttr) (output tf.Output)

Returns the argument of a complex number.

Given a tensor `input` of complex numbers, this operation returns a tensor of type `float` that is the argument of each element in `input`. All elements in `input` must be complex numbers of the form \\(a + bj\\), where *a* is the real part and *b* is the imaginary part.

The argument returned by this operation is of the form \\(atan2(b, a)\\).

For example:

    # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
    tf.angle(input) ==> [2.0132, 1.056]

@compatibility(numpy) Equivalent to np.angle. @end_compatibility

func AnonymousIterator Uses

func AnonymousIterator(scope *Scope, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

A container for an iterator resource.

Returns A handle to the iterator that can be passed to a "MakeIterator" or "IteratorGetNext" op. In contrast to Iterator, AnonymousIterator prevents resource sharing by name, and does not keep a reference to the resource container.

func Any Uses

func Any(scope *Scope, input tf.Output, axis tf.Output, optional ...AnyAttr) (output tf.Output)

Computes the "logical or" of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
axis: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.

func ApproximateEqual Uses

func ApproximateEqual(scope *Scope, x tf.Output, y tf.Output, optional ...ApproximateEqualAttr) (z tf.Output)

Returns the truth value of abs(x-y) < tolerance element-wise.

func ArgMax Uses

func ArgMax(scope *Scope, input tf.Output, dimension tf.Output, optional ...ArgMaxAttr) (output tf.Output)

Returns the index with the largest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Arguments:

dimension: int32 or int64, must be in the range `[-rank(input), rank(input))`.

Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
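
A minimal construction sketch (not from the generated docs):

    s := NewScope()
    input := Const(s, [][]float32{{1, 9, 3}, {7, 5, 2}})
    dim := Const(s, int32(1))    // reduce across each row
    idx := ArgMax(s, input, dim) // evaluates to [1, 0]: the column of each row's maximum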

func ArgMin Uses

func ArgMin(scope *Scope, input tf.Output, dimension tf.Output, optional ...ArgMinAttr) (output tf.Output)

Returns the index with the smallest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Arguments:

dimension: int32 or int64, must be in the range `[-rank(input), rank(input))`.

Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.

func AsString Uses

func AsString(scope *Scope, input tf.Output, optional ...AsStringAttr) (output tf.Output)

Converts each entry in the given tensor to strings. Supports many numeric types and boolean.

func Asin Uses

func Asin(scope *Scope, x tf.Output) (y tf.Output)

Computes asin of x element-wise.

func Asinh Uses

func Asinh(scope *Scope, x tf.Output) (y tf.Output)

Computes inverse hyperbolic sine of x element-wise.

func Assert Uses

func Assert(scope *Scope, condition tf.Output, data []tf.Output, optional ...AssertAttr) (o *tf.Operation)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

Arguments:

condition: The condition to evaluate.
data: The tensors to print out when condition is false.

Returns the created operation.
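
A sketch of wiring an assertion into a graph (illustrative only; `Greater` is another op in this package, and the returned *tf.Operation would be passed as a session target):

    s := NewScope()
    x := Placeholder(s, tf.Float)
    cond := Greater(s, x, Const(s, float32(0))) // scalar check: x > 0
    check := Assert(s, cond, []tf.Output{x})    // prints x when the condition is false
    _ = check // run it as a target (or control dependency) alongside the main fetch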

func AssignAddVariableOp Uses

func AssignAddVariableOp(scope *Scope, resource tf.Output, value tf.Output) (o *tf.Operation)

Adds a value to the current value of a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to see the incremented value or a subsequent newer one.

Arguments:

resource: handle to the resource in which to store the variable.
value: the value by which the variable will be incremented.

Returns the created operation.
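
A hedged sketch of the resource-variable flow, assuming the handle comes from VarHandleOp (also in this package) and that tf.ScalarShape is available in the core package:

    s := NewScope()
    v := VarHandleOp(s, tf.Float, tf.ScalarShape())          // resource handle for a scalar float variable
    init := AssignVariableOp(s, v, Const(s, float32(1)))     // v = 1
    incr := AssignAddVariableOp(s, v, Const(s, float32(2)))  // v += 2
    _, _ = init, incr // run init before incr as session targets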

func AssignSubVariableOp Uses

func AssignSubVariableOp(scope *Scope, resource tf.Output, value tf.Output) (o *tf.Operation)

Subtracts a value from the current value of a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to see the decremented value or a subsequent newer one.

Arguments:

resource: handle to the resource in which to store the variable.
value: the value by which the variable will be decremented.

Returns the created operation.

func AssignVariableOp Uses

func AssignVariableOp(scope *Scope, resource tf.Output, value tf.Output) (o *tf.Operation)

Assigns a new value to a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to return this value or a subsequent newer value of the variable.

Arguments:

resource: handle to the resource in which to store the variable.
value: the value to set the variable to.

Returns the created operation.

func Atan Uses

func Atan(scope *Scope, x tf.Output) (y tf.Output)

Computes atan of x element-wise.

func Atan2 Uses

func Atan2(scope *Scope, y tf.Output, x tf.Output) (z tf.Output)

Computes arctangent of `y/x` element-wise, respecting signs of the arguments.

This is the angle \\(\theta \in [-\pi, \pi]\\) such that \\(x = r \cos(\theta)\\) and \\(y = r \sin(\theta)\\), where \\(r = \sqrt{x^2 + y^2}\\).
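
For example (an illustrative sketch, not from the generated docs):

    s := NewScope()
    y := Const(s, []float32{1, -1})
    x := Const(s, []float32{1, 1})
    theta := Atan2(s, y, x) // evaluates to [π/4, -π/4]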

func Atanh Uses

func Atanh(scope *Scope, x tf.Output) (y tf.Output)

Computes inverse hyperbolic tangent of x element-wise.

func AudioSpectrogram Uses

func AudioSpectrogram(scope *Scope, input tf.Output, window_size int64, stride int64, optional ...AudioSpectrogramAttr) (spectrogram tf.Output)

Produces a visualization of audio data over time.

Spectrograms are a standard way of representing audio information as a series of slices of frequency information, one slice for each window of time. By joining these together into a sequence, they form a distinctive fingerprint of the sound over time.

This op expects to receive audio data as an input, stored as floats in the range -1 to 1, together with a window width in samples, and a stride specifying how far to move the window between slices. From this it generates a three dimensional output. The lowest dimension has an amplitude value for each frequency during that time slice. The next dimension is time, with successive frequency slices. The final dimension is for the channels in the input, so a stereo audio input would have two here for example.

This means the layout when converted and saved as an image is rotated 90 degrees clockwise from a typical spectrogram. Time is descending down the Y axis, and the frequency decreases from left to right.

Each value in the result represents the square root of the sum of the real and imaginary parts of an FFT on the current window of samples. In this way, the lowest dimension represents the power of each frequency in the current window, and adjacent windows are concatenated in the next dimension.

To get a more intuitive and visual look at what this operation does, you can run tensorflow/examples/wav_to_spectrogram to read in an audio file and save out the resulting spectrogram as a PNG image.

Arguments:

input: Float representation of audio data.
window_size: How wide the input window is in samples. For the highest efficiency

this should be a power of two, but other values are accepted.

stride: How widely apart the center of adjacent sample windows should be.

Returns 3D representation of the audio frequencies as an image.
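
A minimal construction sketch (illustrative; the window and stride values are arbitrary):

    s := NewScope()
    audio := Placeholder(s, tf.Float)             // [samples, channels], values in [-1, 1]
    spec := AudioSpectrogram(s, audio, 1024, 512) // 1024-sample windows, advanced 512 samples at a time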

func AudioSummary Uses

func AudioSummary(scope *Scope, tag tf.Output, tensor tf.Output, sample_rate float32, optional ...AudioSummaryAttr) (summary tf.Output)

Outputs a `Summary` protocol buffer with audio.

DEPRECATED at GraphDef version 15: Use AudioSummaryV2.

The summary has up to `max_outputs` summary values containing audio. The audio is built from `tensor` which must be 3-D with shape `[batch_size, frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.

The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:

* If `max_outputs` is 1, the summary value tag is '*tag*/audio'.
* If `max_outputs` is greater than 1, the summary value tags are generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.

Arguments:

tag: Scalar. Used to build the `tag` attribute of the summary values.
tensor: 2-D of shape `[batch_size, frames]`.
sample_rate: The sample rate of the signal in hertz.

Returns Scalar. Serialized `Summary` protocol buffer.

func AudioSummaryV2 Uses

func AudioSummaryV2(scope *Scope, tag tf.Output, tensor tf.Output, sample_rate tf.Output, optional ...AudioSummaryV2Attr) (summary tf.Output)

Outputs a `Summary` protocol buffer with audio.

The summary has up to `max_outputs` summary values containing audio. The audio is built from `tensor` which must be 3-D with shape `[batch_size, frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.

The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:

* If `max_outputs` is 1, the summary value tag is '*tag*/audio'.
* If `max_outputs` is greater than 1, the summary value tags are generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.

Arguments:

tag: Scalar. Used to build the `tag` attribute of the summary values.
tensor: 2-D of shape `[batch_size, frames]`.
sample_rate: The sample rate of the signal in hertz.

Returns Scalar. Serialized `Summary` protocol buffer.

func AvgPool Uses

func AvgPool(scope *Scope, value tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPoolAttr) (output tf.Output)

Performs average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.

Arguments:

value: 4-D with shape `[batch, height, width, channels]`.
ksize: The size of the sliding window for each dimension of `value`.
strides: The stride of the sliding window for each dimension of `value`.
padding: The type of padding algorithm to use.

Returns The average pooled output tensor.
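
A hedged sketch showing the NHWC parameter layout (the batch and channel entries of `ksize`/`strides` stay 1):

    s := NewScope()
    images := Placeholder(s, tf.Float) // [batch, height, width, channels]
    pooled := AvgPool(s, images,
        []int64{1, 2, 2, 1}, // ksize: 2x2 window over height and width only
        []int64{1, 2, 2, 1}, // strides: step 2 in each spatial dimension
        "VALID")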

func AvgPool3D Uses

func AvgPool3D(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPool3DAttr) (output tf.Output)

Performs 3D average pooling on the input.

Arguments:

input: Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
ksize: 1-D tensor of length 5. The size of the window for each dimension of

the input tensor. Must have `ksize[0] = ksize[4] = 1`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns The average pooled output tensor.

func AvgPool3DGrad Uses

func AvgPool3DGrad(scope *Scope, orig_input_shape tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPool3DGradAttr) (output tf.Output)

Computes gradients of average pooling function.

Arguments:

orig_input_shape: The original input dimensions.
grad: Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: 1-D tensor of length 5. The size of the window for each dimension of

the input tensor. Must have `ksize[0] = ksize[4] = 1`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns The backprop for input.

func AvgPoolGrad Uses

func AvgPoolGrad(scope *Scope, orig_input_shape tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPoolGradAttr) (output tf.Output)

Computes gradients of the average pooling function.

Arguments:

orig_input_shape: 1-D.  Shape of the original input to `avg_pool`.
grad: 4-D with shape `[batch, height, width, channels]`.  Gradients w.r.t.

the output of `avg_pool`.

ksize: The size of the sliding window for each dimension of the input.
strides: The stride of the sliding window for each dimension of the input.
padding: The type of padding algorithm to use.

Returns 4-D. Gradients w.r.t. the input of `avg_pool`.

func Batch Uses

func Batch(scope *Scope, in_tensors []tf.Output, num_batch_threads int64, max_batch_size int64, batch_timeout_micros int64, grad_timeout_micros int64, optional ...BatchAttr) (batched_tensors []tf.Output, batch_index tf.Output, id tf.Output)

Batches all input tensors nondeterministically.

When many instances of this Op are being run concurrently with the same container/shared_name in the same device, some will output zero-shaped Tensors and others will output Tensors of size up to max_batch_size.

All Tensors in in_tensors are batched together (so, for example, labels and features should be batched with a single instance of this operation).

Each invocation of batch emits an `id` scalar which will be used to identify this particular invocation when doing unbatch or its gradient.

Each op which emits a non-empty batch will also emit a non-empty batch_index Tensor, which is a [K, 3] matrix where each row contains the invocation's id, start, and length of elements of each set of Tensors present in batched_tensors.

Batched tensors are concatenated along the first dimension, and all tensors in in_tensors must have the first dimension of the same size.

in_tensors: The tensors to be batched.
num_batch_threads: Number of scheduling threads for processing batches of work. Determines the number of batches processed in parallel.
max_batch_size: Batch sizes will never be bigger than this.
batch_timeout_micros: Maximum number of microseconds to wait before outputting an incomplete batch.
allowed_batch_sizes: Optional list of allowed batch sizes. If left empty, does nothing. Otherwise, supplies a list of batch sizes, causing the op to pad batches up to one of those sizes. The entries must increase monotonically, and the final entry must equal max_batch_size.
grad_timeout_micros: The timeout to use for the gradient. See Unbatch.
batched_tensors: Either empty tensors or a batch of concatenated Tensors.
batch_index: If out_tensors is non-empty, has information to invert it.
container: Controls the scope of sharing of this batch.
id: always contains a scalar with a unique ID for this invocation of Batch.
shared_name: Concurrently running instances of batch in the same device with the same container and shared_name will batch their elements together. If left empty, the op name will be used as the shared name.
T: the types of tensors to be batched.

func BatchDataset Uses

func BatchDataset(scope *Scope, input_dataset tf.Output, batch_size tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that batches `batch_size` elements from `input_dataset`.

Arguments:

batch_size: A scalar representing the number of elements to accumulate in a

batch.

func BatchDatasetV2 Uses

func BatchDatasetV2(scope *Scope, input_dataset tf.Output, batch_size tf.Output, drop_remainder tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that batches `batch_size` elements from `input_dataset`.

Arguments:

batch_size: A scalar representing the number of elements to accumulate in a batch.
drop_remainder: A scalar representing whether the last batch should be dropped in case its size

is smaller than desired.

func BatchMatMul Uses

func BatchMatMul(scope *Scope, x tf.Output, y tf.Output, optional ...BatchMatMulAttr) (output tf.Output)

Multiplies slices of two tensors in batches.

Multiplies all slices of `Tensor` `x` and `y` (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the `adj_x` or `adj_y` flag to `True`, which are by default `False`.

The input tensors `x` and `y` are 2-D or higher with shape `[..., r_x, c_x]` and `[..., r_y, c_y]`.

The output tensor is 2-D or higher with shape `[..., r_o, c_o]`, where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Arguments:

x: 2-D or higher with shape `[..., r_x, c_x]`.
y: 2-D or higher with shape `[..., r_y, c_y]`.

Returns 3-D or higher with shape `[..., r_o, c_o]`
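
For illustration (a minimal sketch, not from the generated docs): two 2x2 slices multiplied independently.

    s := NewScope()
    x := Const(s, [][][]float32{{{1, 2}, {3, 4}}, {{5, 6}, {7, 8}}}) // [2, 2, 2]
    y := Const(s, [][][]float32{{{1, 0}, {0, 1}}, {{1, 0}, {0, 1}}}) // two identity matrices
    prod := BatchMatMul(s, x, y) // shape [2, 2, 2]; each batch slice equals the slice of x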

func BatchNormWithGlobalNormalization Uses

func BatchNormWithGlobalNormalization(scope *Scope, t tf.Output, m tf.Output, v tf.Output, beta tf.Output, gamma tf.Output, variance_epsilon float32, scale_after_normalization bool) (result tf.Output)

Batch normalization.

DEPRECATED at GraphDef version 9: Use tf.nn.batch_normalization()

This op is deprecated. Prefer `tf.nn.batch_normalization`.

Arguments:

t: A 4D input Tensor.
m: A 1D mean Tensor with size matching the last dimension of t.

This is the first output from tf.nn.moments, or a saved moving average thereof.

v: A 1D variance Tensor with size matching the last dimension of t.

This is the second output from tf.nn.moments, or a saved moving average thereof.

beta: A 1D beta Tensor with size matching the last dimension of t.

An offset to be added to the normalized tensor.

gamma: A 1D gamma Tensor with size matching the last dimension of t.

If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor.

variance_epsilon: A small float number to avoid dividing by 0.
scale_after_normalization: A bool indicating whether the resulted tensor

needs to be multiplied with gamma.

func BatchNormWithGlobalNormalizationGrad Uses

func BatchNormWithGlobalNormalizationGrad(scope *Scope, t tf.Output, m tf.Output, v tf.Output, gamma tf.Output, backprop tf.Output, variance_epsilon float32, scale_after_normalization bool) (dx tf.Output, dm tf.Output, dv tf.Output, db tf.Output, dg tf.Output)

Gradients for batch normalization.

DEPRECATED at GraphDef version 9: Use tf.nn.batch_normalization()

This op is deprecated. See `tf.nn.batch_normalization`.

Arguments:

t: A 4D input Tensor.
m: A 1D mean Tensor with size matching the last dimension of t.

This is the first output from tf.nn.moments, or a saved moving average thereof.

v: A 1D variance Tensor with size matching the last dimension of t.

This is the second output from tf.nn.moments, or a saved moving average thereof.

gamma: A 1D gamma Tensor with size matching the last dimension of t.

If "scale_after_normalization" is true, this Tensor will be multiplied with the normalized Tensor.

backprop: 4D backprop Tensor.
variance_epsilon: A small float number to avoid dividing by 0.
scale_after_normalization: A bool indicating whether the resulted tensor

needs to be multiplied with gamma.

Returns 4D backprop tensor for input. 1D backprop tensor for mean. 1D backprop tensor for variance. 1D backprop tensor for beta. 1D backprop tensor for gamma.

func BatchToSpace Uses

func BatchToSpace(scope *Scope, input tf.Output, crops tf.Output, block_size int64) (output tf.Output)

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the `batch` dimension are moved in spatial blocks to the `height` and `width` dimensions, followed by cropping along the `height` and `width` dimensions.

Arguments:

input: 4-D tensor with shape `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]`. Note that the batch size of the input tensor must be divisible by `block_size * block_size`.

crops: 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies

how many elements to crop from the intermediate result across the spatial dimensions as follows:

crops = [[crop_top, crop_bottom], [crop_left, crop_right]]

Returns 4-D with shape `[batch, height, width, depth]`, where:

height = height_pad - crop_top - crop_bottom
width = width_pad - crop_left - crop_right

The attr `block_size` must be greater than one. It indicates the block size.

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:

    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

The output tensor has shape `[1, 2, 2, 1]` and value:

    x = [[[[1], [2]], [[3], [4]]]]

(2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:

    [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]

The output tensor has shape `[1, 2, 2, 3]` and value:

    x = [[[[1, 2, 3], [4, 5, 6]],
         [[7, 8, 9], [10, 11, 12]]]]

(3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:

    x = [[[[1], [3]], [[9], [11]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

The output tensor has shape `[1, 4, 4, 1]` and value:

    x = [[[1],  [2],  [3],  [4]],
         [[5],  [6],  [7],  [8]],
         [[9],  [10], [11], [12]],
         [[13], [14], [15], [16]]]

(4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:

    x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
         [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]

The output tensor has shape `[2, 2, 4, 1]` and value:

    x = [[[[1], [3]], [[5], [7]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

func BatchToSpaceND Uses

func BatchToSpaceND(scope *Scope, input tf.Output, block_shape tf.Output, crops tf.Output) (output tf.Output)

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape `block_shape + [batch]`, interleaves these blocks back into the grid defined by the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to `crops` to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.

Arguments:

input: N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,

where spatial_shape has M dimensions.

block_shape: 1-D with shape `[M]`, all values must be >= 1.
crops: 2-D with shape `[M, 2]`, all values must be >= 0.

`crops[i] = [crop_start, crop_end]` specifies the amount to crop from input dimension `i + 1`, which corresponds to spatial dimension `i`. It is required that `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.

This operation is equivalent to the following steps:

1. Reshape `input` to `reshaped` of shape:

[block_shape[0], ..., block_shape[M-1],
 batch / prod(block_shape),
 input_shape[1], ..., input_shape[N-1]]

2. Permute dimensions of `reshaped` to produce `permuted` of shape

[batch / prod(block_shape),

 input_shape[1], block_shape[0],
 ...,
 input_shape[M], block_shape[M-1],

 input_shape[M+1], ..., input_shape[N-1]]

3. Reshape `permuted` to produce `reshaped_permuted` of shape

[batch / prod(block_shape),

 input_shape[1] * block_shape[0],
 ...,
 input_shape[M] * block_shape[M-1],

 input_shape[M+1],
 ...,
 input_shape[N-1]]

4. Crop the start and end of dimensions `[1, ..., M]` of

`reshaped_permuted` according to `crops` to produce the output of shape:
  [batch / prod(block_shape),

   input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],
   ...,
   input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],

   input_shape[M+1], ..., input_shape[N-1]]

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and

`crops = [[0, 0], [0, 0]]`:

    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]

The output tensor has shape `[1, 2, 2, 1]` and value:

    x = [[[[1], [2]], [[3], [4]]]]

(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and

`crops = [[0, 0], [0, 0]]`:

    [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]

The output tensor has shape `[1, 2, 2, 3]` and value:

    x = [[[[1, 2, 3], [4, 5, 6]],
         [[7, 8, 9], [10, 11, 12]]]]

(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and

`crops = [[0, 0], [0, 0]]`:

    x = [[[[1], [3]], [[9], [11]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]

The output tensor has shape `[1, 4, 4, 1]` and value:

    x = [[[1],  [2],  [3],  [4]],
         [[5],  [6],  [7],  [8]],
         [[9],  [10], [11], [12]],
         [[13], [14], [15], [16]]]

(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and

`crops = [[0, 0], [2, 0]]`:

    x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
         [[[0], [2], [4]]], [[[0], [10], [12]]],
         [[[0], [5], [7]]], [[[0], [13], [15]]],
         [[[0], [6], [8]]], [[[0], [14], [16]]]]

The output tensor has shape `[2, 2, 4, 1]` and value:

    x = [[[[1],  [2],  [3],  [4]],
          [[5],  [6],  [7],  [8]]],
         [[[9],  [10], [11], [12]],
          [[13], [14], [15], [16]]]]

func BesselI0e Uses

func BesselI0e(scope *Scope, x tf.Output) (y tf.Output)

Computes the Bessel i0e function of `x` element-wise.

Exponentially scaled modified Bessel function of order 0 defined as `bessel_i0e(x) = exp(-abs(x)) bessel_i0(x)`.

This function is faster and more numerically stable than `bessel_i0(x)`.

func BesselI1e Uses

func BesselI1e(scope *Scope, x tf.Output) (y tf.Output)

Computes the Bessel i1e function of `x` element-wise.

Exponentially scaled modified Bessel function of order 1, defined as `bessel_i1e(x) = exp(-abs(x)) bessel_i1(x)`.

This function is faster and more numerically stable than `bessel_i1(x)`.

func Betainc Uses

func Betainc(scope *Scope, a tf.Output, b tf.Output, x tf.Output) (z tf.Output)

Compute the regularized incomplete beta integral \\(I_x(a, b)\\).

The regularized incomplete beta integral is defined as:

\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\)

where

\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\\)

is the incomplete beta function and \\(B(a, b)\\) is the *complete* beta function.

func BiasAdd Uses

func BiasAdd(scope *Scope, value tf.Output, bias tf.Output, optional ...BiasAddAttr) (output tf.Output)

Adds `bias` to `value`.

This is a special case of `tf.add` where `bias` is restricted to be 1-D. Broadcasting is supported, so `value` may have any number of dimensions.

Arguments:

value: Any number of dimensions.
bias: 1-D with size the last dimension of `value`.

Returns Broadcasted sum of `value` and `bias`.
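
A minimal sketch (not from the generated docs); the bias length must match the last dimension of `value`:

    s := NewScope()
    value := Placeholder(s, tf.Float)          // e.g. [batch, 3]
    bias := Const(s, []float32{0.1, 0.2, 0.3}) // 1-D, size 3
    out := BiasAdd(s, value, bias)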

func BiasAddGrad Uses

func BiasAddGrad(scope *Scope, out_backprop tf.Output, optional ...BiasAddGradAttr) (output tf.Output)

The backward operation for "BiasAdd" on the "bias" tensor.

It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.

Arguments:

out_backprop: Any number of dimensions.

Returns 1-D with size the feature dimension of `out_backprop`.

func BiasAddV1 Uses

func BiasAddV1(scope *Scope, value tf.Output, bias tf.Output) (output tf.Output)

Adds `bias` to `value`.

This is a deprecated version of BiasAdd and will soon be removed.

This is a special case of `tf.add` where `bias` is restricted to be 1-D. Broadcasting is supported, so `value` may have any number of dimensions.

Arguments:

value: Any number of dimensions.
bias: 1-D with size the last dimension of `value`.

Returns Broadcasted sum of `value` and `bias`.

func Bincount Uses

func Bincount(scope *Scope, arr tf.Output, size tf.Output, weights tf.Output) (bins tf.Output)

Counts the number of occurrences of each value in an integer array.

Outputs a vector with length `size` and the same dtype as `weights`. If `weights` are empty, then index `i` stores the number of times the value `i` is counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of the value in `weights` at each index where the corresponding value in `arr` is `i`.

Values in `arr` outside of the range [0, size) are ignored.

Arguments:

arr: int32 `Tensor`.
size: non-negative int32 scalar `Tensor`.
weights: is an int32, int64, float32, or float64 `Tensor` with the same

shape as `arr`, or a length-0 `Tensor`, in which case it acts as all weights equal to 1.

Returns 1D `Tensor` with length equal to `size`. The counts or summed weights for each value in the range [0, size).
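
A worked sketch (illustrative): with length-0 weights the op simply counts occurrences.

    s := NewScope()
    arr := Const(s, []int32{1, 1, 2})
    size := Const(s, int32(4))
    weights := Const(s, []float32{})        // length-0: treat all weights as 1
    bins := Bincount(s, arr, size, weights) // evaluates to [0, 2, 1, 0]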

func Bitcast Uses

func Bitcast(scope *Scope, input tf.Output, type_ tf.DataType) (output tf.Output)

Bitcasts a tensor from one type to another without copying data.

Given a tensor `input`, this operation returns a tensor that has the same buffer data as `input` with datatype `type`.

If the input datatype `T` is larger than the output datatype `type` then the shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].

If `T` is smaller than `type`, the operator requires that the rightmost dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from [..., sizeof(`type`)/sizeof(`T`)] to [...].

*NOTE*: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
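
For example (a minimal sketch): float32 and int32 have the same width, so the shape is unchanged.

    s := NewScope()
    x := Const(s, []float32{1.0})
    bits := Bitcast(s, x, tf.Int32) // same 4-byte buffer reinterpreted; shape stays [1]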

func BitwiseAnd Uses

func BitwiseAnd(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Elementwise computes the bitwise AND of `x` and `y`.

The result will have those bits set that are set in both `x` and `y`. The computation is performed on the underlying representations of `x` and `y`.

func BitwiseOr Uses

func BitwiseOr(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Elementwise computes the bitwise OR of `x` and `y`.

The result will have those bits set that are set in `x`, `y`, or both. The computation is performed on the underlying representations of `x` and `y`.

func BitwiseXor Uses

func BitwiseXor(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Elementwise computes the bitwise XOR of `x` and `y`.

The result will have those bits set that differ between `x` and `y`. The computation is performed on the underlying representations of `x` and `y`.
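
An illustrative sketch covering all three bitwise ops on the same operands:

    s := NewScope()
    x := Const(s, []int32{12}) // 0b1100
    y := Const(s, []int32{10}) // 0b1010
    and := BitwiseAnd(s, x, y) // [8]  (0b1000)
    or := BitwiseOr(s, x, y)   // [14] (0b1110)
    xor := BitwiseXor(s, x, y) // [6]  (0b0110)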

func BoostedTreesBucketize Uses

func BoostedTreesBucketize(scope *Scope, float_values []tf.Output, bucket_boundaries []tf.Output) (buckets []tf.Output)

Bucketize each feature based on bucket boundaries.

An op that returns a list of float tensors, where each tensor represents the bucketized values for a single feature.

Arguments:

float_values: float; List of Rank 1 Tensors, each containing float values for a single feature.
bucket_boundaries: float; List of Rank 1 Tensors each containing the bucket boundaries for a single

feature.

Returns int; List of Rank 1 Tensors each containing the bucketized values for a single feature.

func BoostedTreesCalculateBestGainsPerFeature Uses

func BoostedTreesCalculateBestGainsPerFeature(scope *Scope, node_id_range tf.Output, stats_summary_list []tf.Output, l1 tf.Output, l2 tf.Output, tree_complexity tf.Output, min_node_weight tf.Output, max_splits int64) (node_ids_list []tf.Output, gains_list []tf.Output, thresholds_list []tf.Output, left_node_contribs_list []tf.Output, right_node_contribs_list []tf.Output)

Calculates gains for each feature and returns the best possible split information for the feature.

The split information is the best threshold (bucket id), gains and left/right node contributions per node for each feature.

It is possible that not all nodes can be split on each feature. Hence, the list of possible nodes can differ between the features. Therefore, we return `node_ids_list` for each feature, containing the list of nodes that this feature can be used to split.

In this manner, the output is the best split per features and per node, so that it needs to be combined later to produce the best split for each node (among all possible features).

The output lists are all of the same length, `num_features`. The output shapes are compatible in that the first dimension of all tensors in all lists is the same and equal to the number of possible split nodes for each feature.

Arguments:

node_id_range: A Rank 1 tensor (shape=[2]) to specify the range [first, last) of node ids to process within `stats_summary_list`. The nodes are iterated between the two nodes specified by the tensor, as in `for node_id in range(node_id_range[0], node_id_range[1])` (note that the last index, node_id_range[1], is exclusive).
stats_summary_list: A list of Rank 3 tensor (#shape=[max_splits, bucket, 2]) for accumulated stats summary (gradient/hessian) per node per buckets for each feature. The first dimension of the tensor is the maximum number of splits, and thus not all elements of it will be used, but only the indexes specified by node_ids will be used.
l1: l1 regularization factor on leaf weights, per instance based.
l2: l2 regularization factor on leaf weights, per instance based.
tree_complexity: adjustment to the gain, per leaf based.
min_node_weight: minimum average of hessians in a node required before the node is considered for splitting.
max_splits: the number of nodes that can be split in the whole tree. Used as a dimension of output tensors.

Returns An output list of Rank 1 tensors indicating possible split node ids for each feature. The length of the list is num_features, but each tensor has different size as each feature provides different possible nodes. See above for details like shapes and sizes. An output list of Rank 1 tensors indicating the best gains for each feature to split for certain nodes. See above for details like shapes and sizes. An output list of Rank 1 tensors indicating the bucket id to compare with (as a threshold) for split in each node. See above for details like shapes and sizes. A list of Rank 2 tensors indicating the contribution of the left nodes when branching from parent nodes (given by the tensor element in the output node_ids_list) to the left direction by the given threshold for each feature. This value will be used to make the left node value by adding to the parent node value. Second dimension size is 1 for 1-dimensional logits, but would be larger for multi-class problems. See above for details like shapes and sizes. A list of Rank 2 tensors, with the same shape/conditions as left_node_contribs_list, but just that the value is for the right node.

func BoostedTreesCenterBias Uses

func BoostedTreesCenterBias(scope *Scope, tree_ensemble_handle tf.Output, mean_gradients tf.Output, mean_hessians tf.Output, l1 tf.Output, l2 tf.Output) (continue_centering tf.Output)

Calculates the prior from the training data (the bias) and fills in the first node with the logits' prior. Returns a boolean indicating whether to continue centering.

Arguments:

tree_ensemble_handle: Handle to the tree ensemble.
mean_gradients: A tensor with shape=[logits_dimension] with mean of gradients for a first node.
mean_hessians: A tensor with shape=[logits_dimension] mean of hessians for a first node.
l1: l1 regularization factor on leaf weights, per instance based.
l2: l2 regularization factor on leaf weights, per instance based.

Returns Bool, whether to continue bias centering.

func BoostedTreesCreateEnsemble Uses

func BoostedTreesCreateEnsemble(scope *Scope, tree_ensemble_handle tf.Output, stamp_token tf.Output, tree_ensemble_serialized tf.Output) (o *tf.Operation)

Creates a tree ensemble model and returns a handle to it.

Arguments:

tree_ensemble_handle: Handle to the tree ensemble resource to be created.
stamp_token: Token to use as the initial value of the resource stamp.
tree_ensemble_serialized: Serialized proto of the tree ensemble.

Returns the created operation.

func BoostedTreesCreateQuantileStreamResource Uses

func BoostedTreesCreateQuantileStreamResource(scope *Scope, quantile_stream_resource_handle tf.Output, epsilon tf.Output, num_streams tf.Output, optional ...BoostedTreesCreateQuantileStreamResourceAttr) (o *tf.Operation)

Create the Resource for Quantile Streams.

Arguments:

quantile_stream_resource_handle: resource; Handle to quantile stream resource.
epsilon: float; The required approximation error of the stream resource.
num_streams: int; The number of streams managed by the resource that shares the same epsilon.

Returns the created operation.

func BoostedTreesDeserializeEnsemble Uses

func BoostedTreesDeserializeEnsemble(scope *Scope, tree_ensemble_handle tf.Output, stamp_token tf.Output, tree_ensemble_serialized tf.Output) (o *tf.Operation)

Deserializes a serialized tree ensemble config and replaces the current tree ensemble.

Arguments:

tree_ensemble_handle: Handle to the tree ensemble.
stamp_token: Token to use as the new value of the resource stamp.
tree_ensemble_serialized: Serialized proto of the ensemble.

Returns the created operation.

func BoostedTreesEnsembleResourceHandleOp Uses

func BoostedTreesEnsembleResourceHandleOp(scope *Scope, optional ...BoostedTreesEnsembleResourceHandleOpAttr) (resource tf.Output)

Creates a handle to a BoostedTreesEnsembleResource

func BoostedTreesExampleDebugOutputs Uses

func BoostedTreesExampleDebugOutputs(scope *Scope, tree_ensemble_handle tf.Output, bucketized_features []tf.Output, logits_dimension int64) (examples_debug_outputs_serialized tf.Output)

Debugging/model interpretability outputs for each example.

It traverses all the trees and computes debug metrics for individual examples, such as getting split feature ids and logits after each split along the decision path used to compute directional feature contributions.

Arguments:

bucketized_features: A list of rank 1 Tensors containing bucket id for each

feature.

logits_dimension: scalar, dimension of the logits, to be used for constructing the protos in

examples_debug_outputs_serialized.

Returns Output rank 1 Tensor containing a proto serialized as a string for each example.

func BoostedTreesGetEnsembleStates Uses

func BoostedTreesGetEnsembleStates(scope *Scope, tree_ensemble_handle tf.Output) (stamp_token tf.Output, num_trees tf.Output, num_finalized_trees tf.Output, num_attempted_layers tf.Output, last_layer_nodes_range tf.Output)

Retrieves the tree ensemble resource stamp token, number of trees and growing statistics.

Arguments:

tree_ensemble_handle: Handle to the tree ensemble.

Returns Stamp token of the tree ensemble resource. The number of trees in the tree ensemble resource. The number of trees that were finished successfully. The number of layers we attempted to build (but not necessarily succeeded). Rank size 2 tensor that contains start and end ids of the nodes in the latest layer.

func BoostedTreesMakeQuantileSummaries Uses

func BoostedTreesMakeQuantileSummaries(scope *Scope, float_values []tf.Output, example_weights tf.Output, epsilon tf.Output) (summaries []tf.Output)

Makes the summary of quantiles for the batch.

An op that takes a list of tensors (one tensor per feature) and outputs the quantile summaries for each tensor.

Arguments:

float_values: float; List of Rank 1 Tensors each containing values for a single feature.
example_weights: float; Rank 1 Tensor with weights per instance.
epsilon: float; The required maximum approximation error.

Returns float; List of Rank 2 Tensors each containing the quantile summary (value, weight, min_rank, max_rank) of a single feature.

func BoostedTreesMakeStatsSummary Uses

func BoostedTreesMakeStatsSummary(scope *Scope, node_ids tf.Output, gradients tf.Output, hessians tf.Output, bucketized_features_list []tf.Output, max_splits int64, num_buckets int64) (stats_summary tf.Output)

Makes the summary of accumulated stats for the batch.

The summary stats contains gradients and hessians accumulated into the corresponding node and bucket for each example.

Arguments:

node_ids: int32 Rank 1 Tensor containing node ids, which each example falls into for the requested layer.
gradients: float32; Rank 2 Tensor (shape=[#examples, 1]) for gradients.
hessians: float32; Rank 2 Tensor (shape=[#examples, 1]) for hessians.
bucketized_features_list: int32 list of Rank 1 Tensors, each containing the bucketized feature (for each feature column).
max_splits: int; the maximum number of splits possible in the whole tree.
num_buckets: int; equals to the maximum possible value of bucketized feature.

Returns output Rank 4 Tensor (shape=[#features, #splits, #buckets, 2]) containing accumulated stats put into the corresponding node and bucket. The first index of 4th dimension refers to gradients, and the second to hessians.

func BoostedTreesPredict Uses

func BoostedTreesPredict(scope *Scope, tree_ensemble_handle tf.Output, bucketized_features []tf.Output, logits_dimension int64) (logits tf.Output)

Runs multiple additive regression ensemble predictors on input instances and computes the logits. It is designed to be used during prediction. It traverses all the trees and calculates the final score for each instance.

Arguments:

bucketized_features: A list of rank 1 Tensors containing bucket id for each

feature.

logits_dimension: scalar, dimension of the logits, to be used for partial logits

shape.

Returns Output rank 2 Tensor containing logits for each example.

func BoostedTreesQuantileStreamResourceAddSummaries Uses

func BoostedTreesQuantileStreamResourceAddSummaries(scope *Scope, quantile_stream_resource_handle tf.Output, summaries []tf.Output) (o *tf.Operation)

Add the quantile summaries to each quantile stream resource.

An op that adds a list of quantile summaries to a quantile stream resource. Each summary Tensor is rank 2, containing summaries (value, weight, min_rank, max_rank) for a single feature.

Arguments:

quantile_stream_resource_handle: resource handle referring to a QuantileStreamResource.
summaries: string; List of Rank 2 Tensors, each containing the summaries for a single feature.

Returns the created operation.

func BoostedTreesQuantileStreamResourceDeserialize Uses

func BoostedTreesQuantileStreamResourceDeserialize(scope *Scope, quantile_stream_resource_handle tf.Output, bucket_boundaries []tf.Output) (o *tf.Operation)

Deserialize bucket boundaries and ready flag into current QuantileAccumulator.

An op that deserializes the bucket boundaries and the boundaries-ready flag into the current QuantileAccumulator.

Arguments:

quantile_stream_resource_handle: resource handle referring to a QuantileStreamResource.
bucket_boundaries: float; List of Rank 1 Tensors each containing the bucket boundaries for a feature.

Returns the created operation.

func BoostedTreesQuantileStreamResourceFlush Uses

func BoostedTreesQuantileStreamResourceFlush(scope *Scope, quantile_stream_resource_handle tf.Output, num_buckets tf.Output, optional ...BoostedTreesQuantileStreamResourceFlushAttr) (o *tf.Operation)

Flush the summaries for a quantile stream resource.

An op that flushes the summaries for a quantile stream resource.

Arguments:

quantile_stream_resource_handle: resource handle referring to a QuantileStreamResource.
num_buckets: int; approximate number of buckets unless using generate_quantiles.

Returns the created operation.

func BoostedTreesQuantileStreamResourceGetBucketBoundaries Uses

func BoostedTreesQuantileStreamResourceGetBucketBoundaries(scope *Scope, quantile_stream_resource_handle tf.Output, num_features int64) (bucket_boundaries []tf.Output)

Generate the bucket boundaries for each feature based on accumulated summaries.

An op that returns a list of float tensors for a quantile stream resource. Each tensor is Rank 1 containing bucket boundaries for a single feature.

Arguments:

quantile_stream_resource_handle: resource handle referring to a QuantileStreamResource.
num_features: inferred int; number of features to get bucket boundaries for.

Returns float; List of Rank 1 Tensors each containing the bucket boundaries for a feature.

func BoostedTreesQuantileStreamResourceHandleOp Uses

func BoostedTreesQuantileStreamResourceHandleOp(scope *Scope, optional ...BoostedTreesQuantileStreamResourceHandleOpAttr) (resource tf.Output)

Creates a handle to a BoostedTreesQuantileStreamResource.

func BoostedTreesSerializeEnsemble Uses

func BoostedTreesSerializeEnsemble(scope *Scope, tree_ensemble_handle tf.Output) (stamp_token tf.Output, tree_ensemble_serialized tf.Output)

Serializes the tree ensemble to a proto.

Arguments:

tree_ensemble_handle: Handle to the tree ensemble.

Returns Stamp token of the tree ensemble resource. Serialized proto of the ensemble.

func BoostedTreesTrainingPredict Uses

func BoostedTreesTrainingPredict(scope *Scope, tree_ensemble_handle tf.Output, cached_tree_ids tf.Output, cached_node_ids tf.Output, bucketized_features []tf.Output, logits_dimension int64) (partial_logits tf.Output, tree_ids tf.Output, node_ids tf.Output)

Runs multiple additive regression ensemble predictors on input instances and computes the update to cached logits. It is designed to be used during training. It traverses the trees starting from cached tree id and cached node id and calculates the updates to be pushed to the cache.

Arguments:

cached_tree_ids: Rank 1 Tensor containing cached tree ids which is the starting

tree of prediction.

cached_node_ids: Rank 1 Tensor containing cached node id which is the starting

node of prediction.

bucketized_features: A list of rank 1 Tensors containing bucket id for each

feature.

logits_dimension: scalar, dimension of the logits, to be used for partial logits

shape.

Returns:

partial_logits: Rank 2 Tensor containing logits update (with respect to cached values stored) for each example.
tree_ids: Rank 1 Tensor containing new tree ids for each example.
node_ids: Rank 1 Tensor containing new node ids in the new tree_ids.

func BoostedTreesUpdateEnsemble Uses

func BoostedTreesUpdateEnsemble(scope *Scope, tree_ensemble_handle tf.Output, feature_ids tf.Output, node_ids []tf.Output, gains []tf.Output, thresholds []tf.Output, left_node_contribs []tf.Output, right_node_contribs []tf.Output, max_depth tf.Output, learning_rate tf.Output, pruning_mode int64) (o *tf.Operation)

Updates the tree ensemble by either adding a layer to the last tree being grown

or by starting a new tree.

Arguments:

tree_ensemble_handle: Handle to the ensemble variable.
feature_ids: Rank 1 tensor with ids for each feature. This is the real id of

the feature that will be used in the split.

node_ids: List of rank 1 tensors representing the nodes for which this feature

has a split.

gains: List of rank 1 tensors representing the gains for each of the feature's

split.

thresholds: List of rank 1 tensors representing the thresholds for each of the

feature's split.

left_node_contribs: List of rank 2 tensors with left leaf contribs for each of

the feature's splits. Will be added to the previous node values to constitute the values of the left nodes.

right_node_contribs: List of rank 2 tensors with right leaf contribs for each

of the feature's splits. Will be added to the previous node values to constitute the values of the right nodes.

max_depth: Max depth of the tree to build.
learning_rate: shrinkage constant for each new tree.
pruning_mode: 0-No pruning, 1-Pre-pruning, 2-Post-pruning.

Returns the created operation.

func BroadcastArgs Uses

func BroadcastArgs(scope *Scope, s0 tf.Output, s1 tf.Output) (r0 tf.Output)

Return the shape of s0 op s1 with broadcast.

Given `s0` and `s1`, tensors that represent shapes, compute `r0`, the broadcasted shape. `s0`, `s1`, and `r0` are all integer vectors.
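
A minimal sketch of driving this from Go, assuming the graph is later executed with a tensorflow Session as in the package example:

    s := NewScope()
    s0 := Const(s, []int32{2, 1}) // shape vector of a [2, 1] tensor
    s1 := Const(s, []int32{3})    // shape vector of a [3] tensor
    r0 := BroadcastArgs(s, s0, s1)
    _ = r0 // fetched via Session.Run, r0 evaluates to [2, 3]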

func BroadcastGradientArgs Uses

func BroadcastGradientArgs(scope *Scope, s0 tf.Output, s1 tf.Output) (r0 tf.Output, r1 tf.Output)

Return the reduction indices for computing gradients of s0 op s1 with broadcast.

This is typically used by gradient computations for a broadcasting operation.

func BroadcastTo Uses

func BroadcastTo(scope *Scope, input tf.Output, shape tf.Output) (output tf.Output)

Broadcast an array for a compatible shape.

Broadcasting is the process of making arrays have compatible shapes for arithmetic operations. Two shapes are compatible if, for each dimension pair, they are either equal or one of them is 1. When trying to broadcast a Tensor to a shape, it starts with the trailing dimensions and works its way forward.

For example:

    >>> x = tf.constant([1, 2, 3])
    >>> y = tf.broadcast_to(x, [3, 3])
    >>> sess.run(y)
    array([[1, 2, 3],
           [1, 2, 3],
           [1, 2, 3]], dtype=int32)

In the above example, the input Tensor with the shape of `[1, 3]` is broadcast to an output Tensor with the shape of `[3, 3]`.

Arguments:

input: A Tensor to broadcast.
shape: A 1-D `int` Tensor. The shape of the desired output.

Returns A Tensor.
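
A minimal Go sketch of the example above, assuming the graph is run in a tensorflow Session as in the package example:

    s := NewScope()
    x := Const(s, []int32{1, 2, 3})
    y := BroadcastTo(s, x, Const(s, []int32{3, 3}))
    _ = y // fetched via Session.Run, y evaluates to [[1 2 3] [1 2 3] [1 2 3]]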

func Bucketize Uses

func Bucketize(scope *Scope, input tf.Output, boundaries []float32) (output tf.Output)

Bucketizes 'input' based on 'boundaries'.

For example, if the inputs are

boundaries = [0, 10, 100]
input = [[-5, 10000],
         [150,   10],
         [5,    100]]

then the output will be

output = [[0, 3],
          [3, 2],
          [1, 3]]

Arguments:

input: A Tensor of any shape, containing values of int or float type.
boundaries: A sorted list of floats giving the boundaries of the buckets.

Returns a Tensor of the same shape as 'input', with each value of input replaced by its bucket index.

@compatibility(numpy) Equivalent to np.digitize. @end_compatibility
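
The example above can be sketched in Go roughly as follows, assuming the graph is run in a tensorflow Session to obtain the result:

    s := NewScope()
    input := Const(s, [][]float32{{-5, 10000}, {150, 10}, {5, 100}})
    output := Bucketize(s, input, []float32{0, 10, 100})
    _ = output // fetched via Session.Run, output evaluates to [[0 3] [3 2] [1 3]]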

func CTCBeamSearchDecoder Uses

func CTCBeamSearchDecoder(scope *Scope, inputs tf.Output, sequence_length tf.Output, beam_width int64, top_paths int64, optional ...CTCBeamSearchDecoderAttr) (decoded_indices []tf.Output, decoded_values []tf.Output, decoded_shape []tf.Output, log_probability tf.Output)

Performs beam search decoding on the logits given in input.

A note about the attribute merge_repeated: For the beam search decoder, this means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the top path is "A B B B B", "A B" is returned if merge_repeated = True but "A B B B B" is returned if merge_repeated = False.

Arguments:

inputs: 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.
sequence_length: A vector containing sequence lengths, size `(batch)`.
beam_width: A scalar >= 0 (beam search beam width).
top_paths: A scalar >= 0, <= beam_width (controls output size).

Returns:

decoded_indices: A list (length: top_paths) of indices matrices. Matrix j, size `(total_decoded_outputs[j] x 2)`, has indices of a `SparseTensor<int64, 2>`. The rows store: [batch, time].
decoded_values: A list (length: top_paths) of values vectors. Vector j, size `(length total_decoded_outputs[j])`, has the values of a `SparseTensor<int64, 2>`. The vector stores the decoded classes for beam j.
decoded_shape: A list (length: top_paths) of shape vectors. Vector j, size `(2)`, stores the shape of the decoded `SparseTensor[j]`. Its values are: `[batch_size, max_decoded_length[j]]`.
log_probability: A matrix, shaped: `(batch_size x top_paths)`. The sequence log-probabilities.

func CTCGreedyDecoder Uses

func CTCGreedyDecoder(scope *Scope, inputs tf.Output, sequence_length tf.Output, optional ...CTCGreedyDecoderAttr) (decoded_indices tf.Output, decoded_values tf.Output, decoded_shape tf.Output, log_probability tf.Output)

Performs greedy decoding on the logits given in inputs.

A note about the attribute merge_repeated: if enabled, when consecutive logits' maximum indices are the same, only the first of these is emitted. Labeling the blank '*', the sequence "A B B * B B" becomes "A B B" if merge_repeated = True and "A B B B B" if merge_repeated = False.

Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank, index `(num_classes - 1)`, no new element is emitted.

Arguments:

inputs: 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.
sequence_length: A vector containing sequence lengths, size `(batch_size)`.

Returns:

decoded_indices: Indices matrix, size `(total_decoded_outputs x 2)`, of a `SparseTensor<int64, 2>`. The rows store: [batch, time].
decoded_values: Values vector, size: `(total_decoded_outputs)`, of a `SparseTensor<int64, 2>`. The vector stores the decoded classes.
decoded_shape: Shape vector, size `(2)`, of the decoded SparseTensor. Values are: `[batch_size, max_decoded_length]`.
log_probability: Matrix, size `(batch_size x 1)`, containing sequence log-probabilities.

func CTCLoss Uses

func CTCLoss(scope *Scope, inputs tf.Output, labels_indices tf.Output, labels_values tf.Output, sequence_length tf.Output, optional ...CTCLossAttr) (loss tf.Output, gradient tf.Output)

Calculates the CTC Loss (log probability) for each batch entry. Also calculates

the gradient. This op performs the softmax operation for you, so inputs should be e.g. linear projections of outputs from an LSTM.

Arguments:

inputs: 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.
labels_indices: The indices of a `SparseTensor<int32, 2>`.

`labels_indices(i, :) == [b, t]` means `labels_values(i)` stores the id for `(batch b, time t)`.

labels_values: The values (labels) associated with the given batch and time.
sequence_length: A vector containing sequence lengths (batch).

Returns:

loss: A vector (batch) containing log-probabilities.
gradient: The gradient of `loss`. 3-D, shape: `(max_time x batch_size x num_classes)`.

func CacheDataset Uses

func CacheDataset(scope *Scope, input_dataset tf.Output, filename tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that caches elements from `input_dataset`.

A CacheDataset will iterate over the input_dataset and store tensors. If the cache already exists, it will be used. If the cache is inappropriate (e.g. cannot be opened, or contains tensors of the wrong shape or size), an error will be returned when used.

Arguments:

filename: A path on the filesystem where we should cache the dataset. Note: this

will be a directory.

func Cast Uses

func Cast(scope *Scope, x tf.Output, DstT tf.DataType, optional ...CastAttr) (y tf.Output)

Cast x of type SrcT to y of DstT.
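
For instance, a minimal Go sketch (float-to-int casts truncate toward zero; evaluating the result requires a tensorflow Session as in the package example):

    s := NewScope()
    x := Const(s, []float32{1.8, -2.5})
    y := Cast(s, x, tf.Int32)
    _ = y // fetched via Session.Run, y evaluates to [1, -2]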

func Ceil Uses

func Ceil(scope *Scope, x tf.Output) (y tf.Output)

Returns element-wise smallest integer not less than x.

func CheckNumerics Uses

func CheckNumerics(scope *Scope, tensor tf.Output, message string) (output tf.Output)

Checks a tensor for NaN and Inf values.

When run, reports an `InvalidArgument` error if `tensor` has any values that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is.

Arguments:

message: Prefix of the error message.
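
A minimal Go sketch (the message string is illustrative; the op passes the tensor through unchanged when all values are finite):

    s := NewScope()
    x := Placeholder(s, tf.Float)
    // Downstream ops should consume `checked` instead of `x`; Session.Run
    // then fails with InvalidArgument if the value fed for x has NaN or Inf.
    checked := CheckNumerics(s, x, "x contained NaN or Inf: ")
    _ = checked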

func Cholesky Uses

func Cholesky(scope *Scope, input tf.Output) (output tf.Output)

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices.

The input has to be symmetric and positive definite. Only the lower-triangular part of the input will be used for this operation. The upper-triangular part will not be read.

The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices `[..., :, :]`.

**Note**: The gradient computation on GPU is faster for large matrices but not for large batch dimensions when the submatrices are small. In this case it might be faster to use the CPU.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[..., M, M]`.
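
A minimal Go sketch with a single 2x2 symmetric positive-definite matrix, assuming evaluation via a tensorflow Session as in the package example:

    s := NewScope()
    a := Const(s, [][]float32{{4, 2}, {2, 3}}) // symmetric positive definite
    l := Cholesky(s, a)
    _ = l // lower triangular; l x transpose(l) reconstructs a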

func CholeskyGrad Uses

func CholeskyGrad(scope *Scope, l tf.Output, grad tf.Output) (output tf.Output)

Computes the reverse mode backpropagated gradient of the Cholesky algorithm.

For an explanation see "Differentiation of the Cholesky algorithm" by Iain Murray http://arxiv.org/abs/1602.07527.

Arguments:

l: Output of batch Cholesky algorithm l = cholesky(A). Shape is `[..., M, M]`.

Algorithm depends only on lower triangular part of the innermost matrices of this tensor.

grad: df/dl where f is some scalar function. Shape is `[..., M, M]`.

Algorithm depends only on lower triangular part of the innermost matrices of this tensor.

Returns Symmetrized version of df/dA. Shape is `[..., M, M]`.

func ClipByValue Uses

func ClipByValue(scope *Scope, t tf.Output, clip_value_min tf.Output, clip_value_max tf.Output) (output tf.Output)

Clips tensor values to a specified min and max.

Given a tensor `t`, this operation returns a tensor of the same type and shape as `t` with its values clipped to `clip_value_min` and `clip_value_max`. Any values less than `clip_value_min` are set to `clip_value_min`. Any values greater than `clip_value_max` are set to `clip_value_max`.

Arguments:

t: A `Tensor`.
clip_value_min: A 0-D (scalar) `Tensor`, or a `Tensor` with the same shape

as `t`. The minimum value to clip by.

clip_value_max: A 0-D (scalar) `Tensor`, or a `Tensor` with the same shape

as `t`. The maximum value to clip by.

Returns A clipped `Tensor` with the same shape as input 't'.
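
A minimal Go sketch using scalar clip bounds, assuming the result is fetched with a tensorflow Session:

    s := NewScope()
    t := Const(s, []float32{-3, 0.5, 7})
    clipped := ClipByValue(s, t, Const(s, float32(-1)), Const(s, float32(1)))
    _ = clipped // fetched via Session.Run, clipped evaluates to [-1, 0.5, 1]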

func CollectiveBcastRecv Uses

func CollectiveBcastRecv(scope *Scope, T tf.DataType, group_size int64, group_key int64, instance_key int64, shape tf.Shape) (data tf.Output)

Receives a tensor value broadcast from another device.

func CollectiveBcastSend Uses

func CollectiveBcastSend(scope *Scope, input tf.Output, group_size int64, group_key int64, instance_key int64, shape tf.Shape) (data tf.Output)

Broadcasts a tensor value to one or more other devices.

func CollectiveReduce Uses

func CollectiveReduce(scope *Scope, input tf.Output, group_size int64, group_key int64, instance_key int64, merge_op string, final_op string, subdiv_offsets []int64, optional ...CollectiveReduceAttr) (data tf.Output)

Mutually reduces multiple tensors of identical type and shape.

func CompareAndBitpack Uses

func CompareAndBitpack(scope *Scope, input tf.Output, threshold tf.Output) (output tf.Output)

Compare values of `input` to `threshold` and pack resulting bits into a `uint8`.

Each comparison returns a boolean `true` (if `input_value > threshold`) or `false` otherwise.

This operation is useful for Locality-Sensitive-Hashing (LSH) and other algorithms that use hashing approximations of cosine and `L2` distances; codes can be generated from an input via:

    codebook_size = 50
    codebook_bits = codebook_size * 32
    codebook = tf.get_variable('codebook', [x.shape[-1].value, codebook_bits],
                               dtype=x.dtype,
                               initializer=tf.orthogonal_initializer())
    codes = compare_and_threshold(tf.matmul(x, codebook), threshold=0.)
    codes = tf.bitcast(codes, tf.int32)  # go from uint8 to int32
    # now codes has shape x.shape[:-1] + [codebook_size]

**NOTE**: Currently, the innermost dimension of the tensor must be divisible by 8.

Given an `input` shaped `[s0, s1, ..., s_n]`, the output is a `uint8` tensor shaped `[s0, s1, ..., s_n / 8]`.

Arguments:

input: Values to compare against `threshold` and bitpack.
threshold: Threshold to compare against.

Returns The bitpacked comparisons.
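
A minimal Go sketch packing eight comparisons into a single byte; the stated bit order is an assumption, and evaluation requires a tensorflow Session:

    s := NewScope()
    x := Const(s, []float32{1, -1, 2, -2, 3, -3, 4, -4}) // innermost dim divisible by 8
    bits := CompareAndBitpack(s, x, Const(s, float32(0)))
    _ = bits // one uint8 packing the comparisons [1 0 1 0 1 0 1 0],
    // i.e. 0b10101010 if the first comparison lands in the most significant bit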

func Complex Uses

func Complex(scope *Scope, real tf.Output, imag tf.Output, optional ...ComplexAttr) (out tf.Output)

Converts two real numbers to a complex number.

Given a tensor `real` representing the real part of a complex number, and a tensor `imag` representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form \\(a + bj\\), where *a* represents the `real` part and *b* represents the `imag` part.

The input tensors `real` and `imag` must have the same shape.

For example:

    # tensor 'real' is [2.25, 3.25]
    # tensor `imag` is [4.75, 5.75]
    tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]

func ComplexAbs Uses

func ComplexAbs(scope *Scope, x tf.Output, optional ...ComplexAbsAttr) (y tf.Output)

Computes the complex absolute value of a tensor.

Given a tensor `x` of complex numbers, this operation returns a tensor of type `float` or `double` that is the absolute value of each element in `x`. All elements in `x` must be complex numbers of the form \\(a + bj\\). The absolute value is computed as \\( \sqrt{a^2 + b^2}\\).
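
A minimal Go sketch combining Complex and ComplexAbs, assuming the result is fetched with a tensorflow Session:

    s := NewScope()
    z := Complex(s, Const(s, []float32{3, 0}), Const(s, []float32{4, 1}))
    mag := ComplexAbs(s, z)
    _ = mag // fetched via Session.Run, mag evaluates to [5, 1]: |3+4j| = 5, |0+1j| = 1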

func ComputeAccidentalHits Uses

func ComputeAccidentalHits(scope *Scope, true_classes tf.Output, sampled_candidates tf.Output, num_true int64, optional ...ComputeAccidentalHitsAttr) (indices tf.Output, ids tf.Output, weights tf.Output)

Computes the ids of the positions in sampled_candidates that match true_labels.

When doing log-odds NCE, the result of this op should be passed through a SparseToDense op, then added to the logits of the sampled candidates. This has the effect of 'removing' the sampled labels that match the true labels by making the classifier sure that they are sampled labels.

Arguments:

true_classes: The true_classes output of UnpackSparseLabels.
sampled_candidates: The sampled_candidates output of CandidateSampler.
num_true: Number of true labels per context.

Returns:

indices: A vector of indices corresponding to rows of true_candidates.
ids: A vector of IDs of positions in sampled_candidates that match a true_label for the row with the corresponding index in indices.
weights: A vector of the same length as indices and ids, in which each element is -FLOAT_MAX.

func Concat Uses

func Concat(scope *Scope, concat_dim tf.Output, values []tf.Output) (output tf.Output)

Concatenates tensors along one dimension.

Arguments:

concat_dim: 0-D.  The dimension along which to concatenate.  Must be in the

range [0, rank(values)).

values: The `N` Tensors to concatenate. Their ranks and types must match,

and their sizes must match in all dimensions except `concat_dim`.

Returns A `Tensor` with the concatenation of values stacked along the `concat_dim` dimension. This tensor's shape matches that of `values` except in `concat_dim` where it has the sum of the sizes.

func ConcatOffset Uses

func ConcatOffset(scope *Scope, concat_dim tf.Output, shape []tf.Output) (offset []tf.Output)

Computes offsets of concat inputs within its output.

For example:

    # 'x' is [2, 2, 7]
    # 'y' is [2, 3, 7]
    # 'z' is [2, 5, 7]
    concat_offset(2, [x, y, z]) => [0, 0, 0], [0, 2, 0], [0, 5, 0]

This is typically used by gradient computations for a concat operation.

Arguments:

concat_dim: The dimension along which to concatenate.
shape: The `N` int32 vectors representing shape of tensors being concatenated.

Returns The `N` int32 vectors representing the starting offset of input tensors within the concatenated output.

func ConcatV2 Uses

func ConcatV2(scope *Scope, values []tf.Output, axis tf.Output) (output tf.Output)

Concatenates tensors along one dimension.

Arguments:

values: List of `N` Tensors to concatenate. Their ranks and types must match,

and their sizes must match in all dimensions except `concat_dim`.

axis: 0-D.  The dimension along which to concatenate.  Must be in the

range [-rank(values), rank(values)).

Returns A `Tensor` with the concatenation of values stacked along the `concat_dim` dimension. This tensor's shape matches that of `values` except in `concat_dim` where it has the sum of the sizes.
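
A minimal Go sketch concatenating two matrices along axis 0 (their second dimensions match, as required; the graph is run with a tensorflow Session as in the package example):

    s := NewScope()
    a := Const(s, [][]float32{{1, 2}})         // shape [1, 2]
    b := Const(s, [][]float32{{3, 4}, {5, 6}}) // shape [2, 2]
    out := ConcatV2(s, []tf.Output{a, b}, Const(s, int32(0)))
    fmt.Println(out.Shape()) // [3, 2]: the rows of a followed by the rows of b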

func ConcatenateDataset Uses

func ConcatenateDataset(scope *Scope, input_dataset tf.Output, another_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that concatenates `input_dataset` with `another_dataset`.

func Conj Uses

func Conj(scope *Scope, input tf.Output) (output tf.Output)

Returns the complex conjugate of a complex number.

Given a tensor `input` of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in `input`. The complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the real part and *b* is the imaginary part.

The complex conjugate returned by this operation is of the form \\(a - bj\\).

For example:

    # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
    tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]

func ConjugateTranspose Uses

func ConjugateTranspose(scope *Scope, x tf.Output, perm tf.Output) (y tf.Output)

Shuffle dimensions of x according to a permutation and conjugate the result.

The output `y` has the same rank as `x`. The shapes of `x` and `y` satisfy:

`y.shape[i] == x.shape[perm[i]] for i in [0, 1, ..., rank(x) - 1]`
`y[i,j,k,...,s,t,u] == conj(x[perm[i], perm[j], perm[k],...,perm[s], perm[t], perm[u]])`

func Const Uses

func Const(scope *Scope, value interface{}) (output tf.Output)

Const adds an operation to graph that produces value as output.

func ConsumeMutexLock Uses

func ConsumeMutexLock(scope *Scope, mutex_lock tf.Output) (o *tf.Operation)

This op consumes a lock created by `MutexLock`.

This op exists to consume a tensor created by `MutexLock` (other than direct control dependencies). It should be the only op that consumes the tensor, and will raise an error if it is not. Its only purpose is to keep the mutex lock tensor alive until it is consumed by this op.

**NOTE**: This operation must run on the same device as its input. This may be enforced via the `colocate_with` mechanism.

Arguments:

mutex_lock: A tensor returned by `MutexLock`.

Returns the created operation.

func ControlTrigger Uses

func ControlTrigger(scope *Scope) (o *tf.Operation)

Does nothing. Serves as a control trigger for scheduling.

Only useful as a placeholder for control edges.

Returns the created operation.

func Conv2D Uses

func Conv2D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...Conv2DAttr) (output tf.Output)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape

`[filter_height * filter_width * in_channels, output_channels]`.

2. Extracts image patches from the input tensor to form a *virtual*

tensor of shape `[batch, out_height, out_width,
filter_height * filter_width * in_channels]`.

3. For each patch, right-multiplies the filter matrix and the image patch

vector.

In detail, with the default NHWC format,

output[b, i, j, k] =
    sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
                    filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.

Arguments:

input: A 4-D tensor. The dimension order is interpreted according to the value

of `data_format`, see below for details.

filter: A 4-D tensor of shape

`[filter_height, filter_width, in_channels, out_channels]`

strides: 1-D tensor of length 4.  The stride of the sliding window for each

dimension of `input`. The dimension order is determined by the value of `data_format`, see below for details.

padding: The type of padding algorithm to use.

Returns A 4-D tensor. The dimension order is determined by the value of `data_format`, see below for details.
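
A minimal Go sketch wiring up an NHWC convolution with unit strides and "SAME" padding; the tensor shapes are fed at run time through the placeholders:

    s := NewScope()
    input := Placeholder(s, tf.Float)  // [batch, in_height, in_width, in_channels]
    filter := Placeholder(s, tf.Float) // [filter_height, filter_width, in_channels, out_channels]
    output := Conv2D(s, input, filter,
        []int64{1, 1, 1, 1}, // strides: strides[0] and strides[3] must be 1
        "SAME")              // padding algorithm; "SAME" preserves the spatial size
    _ = output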

func Conv2DBackpropFilter Uses

func Conv2DBackpropFilter(scope *Scope, input tf.Output, filter_sizes tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv2DBackpropFilterAttr) (output tf.Output)

Computes the gradients of convolution with respect to the filter.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, in_channels]`.
filter_sizes: An integer vector representing the tensor shape of `filter`,

where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.

out_backprop: 4-D with shape `[batch, out_height, out_width, out_channels]`.

Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution. Must be in the same order as the dimension specified with format.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t. the `filter` input of the convolution.

func Conv2DBackpropInput Uses

func Conv2DBackpropInput(scope *Scope, input_sizes tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv2DBackpropInputAttr) (output tf.Output)

Computes the gradients of convolution with respect to the input.

Arguments:

input_sizes: An integer vector representing the shape of `input`,

where `input` is a 4-D `[batch, height, width, channels]` tensor.

filter: 4-D with shape

`[filter_height, filter_width, in_channels, out_channels]`.

out_backprop: 4-D with shape `[batch, out_height, out_width, out_channels]`.

Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution. Must be in the same order as the dimension specified with format.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[batch, in_height, in_width, in_channels]`. Gradient w.r.t. the input of the convolution.

func Conv3D Uses

func Conv3D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...Conv3DAttr) (output tf.Output)

Computes a 3-D convolution given 5-D `input` and `filter` tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.

Arguments:

input: Shape `[batch, in_depth, in_height, in_width, in_channels]`.
filter: Shape `[filter_depth, filter_height, filter_width, in_channels,

out_channels]`. `in_channels` must match between `input` and `filter`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropFilter Uses

func Conv3DBackpropFilter(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv3DBackpropFilterAttr) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the filter.

DEPRECATED at GraphDef version 10: Use Conv3DBackpropFilterV2

Arguments:

input: Shape `[batch, depth, rows, cols, in_channels]`.
filter: Shape `[depth, rows, cols, in_channels, out_channels]`.

`in_channels` must match between `input` and `filter`.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropFilterV2 Uses

func Conv3DBackpropFilterV2(scope *Scope, input tf.Output, filter_sizes tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv3DBackpropFilterV2Attr) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the filter.

Arguments:

input: Shape `[batch, depth, rows, cols, in_channels]`.
filter_sizes: An integer vector representing the tensor shape of `filter`,

where `filter` is a 5-D `[filter_depth, filter_height, filter_width, in_channels, out_channels]` tensor.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropInput Uses

func Conv3DBackpropInput(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv3DBackpropInputAttr) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the input.

DEPRECATED at GraphDef version 10: Use Conv3DBackpropInputV2

Arguments:

input: Shape `[batch, depth, rows, cols, in_channels]`.
filter: Shape `[depth, rows, cols, in_channels, out_channels]`.

`in_channels` must match between `input` and `filter`.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropInputV2 Uses

func Conv3DBackpropInputV2(scope *Scope, input_sizes tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv3DBackpropInputV2Attr) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the input.

Arguments:

input_sizes: An integer vector representing the tensor shape of `input`,

where `input` is a 5-D `[batch, depth, rows, cols, in_channels]` tensor.

filter: Shape `[depth, rows, cols, in_channels, out_channels]`.

`in_channels` must match between `input` and `filter`.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Cos Uses

func Cos(scope *Scope, x tf.Output) (y tf.Output)

Computes cos of x element-wise.

func Cosh Uses

func Cosh(scope *Scope, x tf.Output) (y tf.Output)

Computes hyperbolic cosine of x element-wise.

func CropAndResize Uses

func CropAndResize(scope *Scope, image tf.Output, boxes tf.Output, box_ind tf.Output, crop_size tf.Output, optional ...CropAndResizeAttr) (crops tf.Output)

Extracts crops from the input image tensor and resizes them.

Extracts crops from the input image tensor and resizes them using bilinear sampling or nearest neighbor sampling (possibly with aspect ratio change) to a common output size specified by `crop_size`. This is more general than the `crop_to_bounding_box` op which extracts a fixed size slice from the input image and does not allow resizing or aspect ratio change.

Returns a tensor with `crops` from the input `image` at positions defined at the bounding box locations in `boxes`. The cropped boxes are all resized (with bilinear or nearest neighbor interpolation) to a fixed `size = [crop_height, crop_width]`. The result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`. The resizing is corner aligned. In particular, if `boxes = [[0, 0, 1, 1]]`, the method will give identical results to using `tf.image.resize_bilinear()` or `tf.image.resize_nearest_neighbor()` (depending on the `method` argument) with `align_corners=True`.

Arguments:

image: A 4-D tensor of shape `[batch, image_height, image_width, depth]`.

Both `image_height` and `image_width` need to be positive.

boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor

specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.

box_ind: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.

The value of `box_ind[i]` specifies the image that the `i`-th box refers to.

crop_size: A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All

cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both `crop_height` and `crop_width` need to be positive.

Returns A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
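
A minimal Go sketch cropping the top-left quadrant of the first image in a batch and resizing it to 64x64 (the crop size and box values are illustrative):

    s := NewScope()
    image := Placeholder(s, tf.Float)                // [batch, image_height, image_width, depth]
    boxes := Const(s, [][]float32{{0, 0, 0.5, 0.5}}) // [y1, x1, y2, x2], normalized
    boxInd := Const(s, []int32{0})                   // crop from image 0 of the batch
    crops := CropAndResize(s, image, boxes, boxInd, Const(s, []int32{64, 64}))
    _ = crops // shape [1, 64, 64, depth] when evaluated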

func CropAndResizeGradBoxes Uses

func CropAndResizeGradBoxes(scope *Scope, grads tf.Output, image tf.Output, boxes tf.Output, box_ind tf.Output, optional ...CropAndResizeGradBoxesAttr) (output tf.Output)

Computes the gradient of the crop_and_resize op wrt the input boxes tensor.

Arguments:

grads: A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
image: A 4-D tensor of shape `[batch, image_height, image_width, depth]`.

Both `image_height` and `image_width` need to be positive.

boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor

specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.

box_ind: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.

The value of `box_ind[i]` specifies the image that the `i`-th box refers to.

Returns A 2-D tensor of shape `[num_boxes, 4]`.

func CropAndResizeGradImage Uses

func CropAndResizeGradImage(scope *Scope, grads tf.Output, boxes tf.Output, box_ind tf.Output, image_size tf.Output, T tf.DataType, optional ...CropAndResizeGradImageAttr) (output tf.Output)

Computes the gradient of the crop_and_resize op wrt the input image tensor.

Arguments:

grads: A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor

specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so as the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.

box_ind: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.

The value of `box_ind[i]` specifies the image that the `i`-th box refers to.

image_size: A 1-D tensor with value `[batch, image_height, image_width, depth]`

containing the original image size. Both `image_height` and `image_width` need to be positive.

Returns A 4-D tensor of shape `[batch, image_height, image_width, depth]`.

func Cross Uses

func Cross(scope *Scope, a tf.Output, b tf.Output) (product tf.Output)

Compute the pairwise cross product.

`a` and `b` must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.

Arguments:

a: A tensor containing 3-element vectors.
b: Another tensor, of same type and shape as `a`.

Returns Pairwise cross product of the vectors in `a` and `b`.

func CudnnRNN Uses

func CudnnRNN(scope *Scope, input tf.Output, input_h tf.Output, input_c tf.Output, params tf.Output, optional ...CudnnRNNAttr) (output tf.Output, output_h tf.Output, output_c tf.Output, reserve_space tf.Output)

An RNN backed by cuDNN.

Computes the RNN from the input and initial states, with respect to the params buffer.

rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. Should be
    "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].
input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units].
input_c: For LSTM, a 3-D tensor with the shape of
    [num_layer * dir, batch, num_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout.
    The size must be created through CudnnRNNParamsSize, and initialized
    separately. Note that they might not be compatible across different
    generations, so it is a good idea to save and restore them in the
    canonical form.
output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units].
output_h: The same shape as input_h.
output_c: The same shape as input_c for LSTM. An empty tensor for other models.
is_training: Indicates whether this operation is used for inference or training.
reserve_space: An opaque tensor that can be used in backprop calculation. It
    is only produced if is_training is true.

func CudnnRNNBackprop Uses

func CudnnRNNBackprop(scope *Scope, input tf.Output, input_h tf.Output, input_c tf.Output, params tf.Output, output tf.Output, output_h tf.Output, output_c tf.Output, output_backprop tf.Output, output_h_backprop tf.Output, output_c_backprop tf.Output, reserve_space tf.Output, optional ...CudnnRNNBackpropAttr) (input_backprop tf.Output, input_h_backprop tf.Output, input_c_backprop tf.Output, params_backprop tf.Output)

Backprop step of CudnnRNN.

Compute the backprop of both data and weights in an RNN.

rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. Should be
    "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].
input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units].
input_c: For LSTM, a 3-D tensor with the shape of
    [num_layer * dir, batch, num_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout.
    The size must be created through CudnnRNNParamsSize, and initialized
    separately. Note that they might not be compatible across different
    generations, so it is a good idea to save and restore them in the
    canonical form.
output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units].
output_h: The same shape as input_h.
output_c: The same shape as input_c for LSTM. An empty tensor for other models.
output_backprop: A 3-D tensor with the same shape as output in the forward pass.
output_h_backprop: A 3-D tensor with the same shape as output_h in the forward
    pass.
output_c_backprop: A 3-D tensor with the same shape as output_c in the forward
    pass.
reserve_space: The same reserve_space produced in the forward operation.
input_backprop: The backprop to input in the forward pass. Has the same shape
    as input.
input_h_backprop: The backprop to input_h in the forward pass. Has the same
    shape as input_h.
input_c_backprop: The backprop to input_c in the forward pass. Has the same
    shape as input_c.
params_backprop: The backprop to the params buffer in the forward pass. Has the
    same shape as params.

func CudnnRNNBackpropV2 Uses

func CudnnRNNBackpropV2(scope *Scope, input tf.Output, input_h tf.Output, input_c tf.Output, params tf.Output, output tf.Output, output_h tf.Output, output_c tf.Output, output_backprop tf.Output, output_h_backprop tf.Output, output_c_backprop tf.Output, reserve_space tf.Output, host_reserved tf.Output, optional ...CudnnRNNBackpropV2Attr) (input_backprop tf.Output, input_h_backprop tf.Output, input_c_backprop tf.Output, params_backprop tf.Output)

Backprop step of CudnnRNN.

Compute the backprop of both data and weights in an RNN. Takes an extra
"host_reserved" input compared to CudnnRNNBackprop, which is used to determine
RNN cudnnRNNAlgo_t and cudnnMathType_t.

rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. Should be
    "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].
input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units].
input_c: For LSTM, a 3-D tensor with the shape of
    [num_layer * dir, batch, num_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout.
    The size must be created through CudnnRNNParamsSize, and initialized
    separately. Note that they might not be compatible across different
    generations, so it is a good idea to save and restore them in the
    canonical form.
output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units].
output_h: The same shape as input_h.
output_c: The same shape as input_c for LSTM. An empty tensor for other models.
output_backprop: A 3-D tensor with the same shape as output in the forward pass.
output_h_backprop: A 3-D tensor with the same shape as output_h in the forward
    pass.
output_c_backprop: A 3-D tensor with the same shape as output_c in the forward
    pass.
reserve_space: The same reserve_space produced in the forward operation.
host_reserved: The same host_reserved produced in the forward operation.
input_backprop: The backprop to input in the forward pass. Has the same shape
    as input.
input_h_backprop: The backprop to input_h in the forward pass. Has the same
    shape as input_h.
input_c_backprop: The backprop to input_c in the forward pass. Has the same
    shape as input_c.
params_backprop: The backprop to the params buffer in the forward pass. Has the
    same shape as params.

func CudnnRNNBackpropV3 Uses

func CudnnRNNBackpropV3(scope *Scope, input tf.Output, input_h tf.Output, input_c tf.Output, params tf.Output, sequence_lengths tf.Output, output tf.Output, output_h tf.Output, output_c tf.Output, output_backprop tf.Output, output_h_backprop tf.Output, output_c_backprop tf.Output, reserve_space tf.Output, host_reserved tf.Output, optional ...CudnnRNNBackpropV3Attr) (input_backprop tf.Output, input_h_backprop tf.Output, input_c_backprop tf.Output, params_backprop tf.Output)

Backprop step of CudnnRNNV3.

Compute the backprop of both data and weights in an RNN. Takes an extra
"sequence_lengths" input compared to CudnnRNNBackprop.

rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. Should be
    "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].
input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units].
input_c: For LSTM, a 3-D tensor with the shape of
    [num_layer * dir, batch, num_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout.
    The size must be created through CudnnRNNParamsSize, and initialized
    separately. Note that they might not be compatible across different
    generations, so it is a good idea to save and restore them in the
    canonical form.
sequence_lengths: a vector of lengths of each input sequence.
output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units].
output_h: The same shape as input_h.
output_c: The same shape as input_c for LSTM. An empty tensor for other models.
output_backprop: A 3-D tensor with the same shape as output in the forward pass.
output_h_backprop: A 3-D tensor with the same shape as output_h in the forward
    pass.
output_c_backprop: A 3-D tensor with the same shape as output_c in the forward
    pass.
reserve_space: The same reserve_space produced in the forward operation.
input_backprop: The backprop to input in the forward pass. Has the same shape
    as input.
input_h_backprop: The backprop to input_h in the forward pass. Has the same
    shape as input_h.
input_c_backprop: The backprop to input_c in the forward pass. Has the same
    shape as input_c.
params_backprop: The backprop to the params buffer in the forward pass. Has the
    same shape as params.

func CudnnRNNCanonicalToParams Uses

func CudnnRNNCanonicalToParams(scope *Scope, num_layers tf.Output, num_units tf.Output, input_size tf.Output, weights []tf.Output, biases []tf.Output, optional ...CudnnRNNCanonicalToParamsAttr) (params tf.Output)

Converts CudnnRNN params from canonical form to usable form.

Writes a set of weights into the opaque params buffer so they can be used in upcoming training or inferences.

Note that the params buffer may not be compatible across different GPUs. So any save and restoration should be converted to and from the canonical weights and biases.

num_layers: Specifies the number of layers in the RNN model.
num_units: Specifies the size of the hidden state.
input_size: Specifies the size of the input state.
weights: the canonical form of weights that can be used for saving
    and restoration. They are more likely to be compatible across different
    generations.
biases: the canonical form of biases that can be used for saving
    and restoration. They are more likely to be compatible across different
    generations.
num_params: number of parameter sets for all layers.
    Each layer may contain multiple parameter sets, with each set consisting of
    a weight matrix and a bias vector.
rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used.
    dir = (direction == bidirectional) ? 2 : 1
dropout: dropout probability. When set to 0., dropout is disabled.
seed: the 1st part of a seed to initialize dropout.
seed2: the 2nd part of a seed to initialize dropout.

func CudnnRNNParamsSize Uses

func CudnnRNNParamsSize(scope *Scope, num_layers tf.Output, num_units tf.Output, input_size tf.Output, T tf.DataType, S tf.DataType, optional ...CudnnRNNParamsSizeAttr) (params_size tf.Output)

Computes size of weights that can be used by a Cudnn RNN model.

Return the params size that can be used by the Cudnn RNN model. Subsequent weight allocation and initialization should use this size.

num_layers: Specifies the number of layers in the RNN model.
num_units: Specifies the size of the hidden state.
input_size: Specifies the size of the input state.
rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used.
    dir = (direction == bidirectional) ? 2 : 1
dropout: dropout probability. When set to 0., dropout is disabled.
seed: the 1st part of a seed to initialize dropout.
seed2: the 2nd part of a seed to initialize dropout.
params_size: The size of the params buffer that should be allocated and
    initialized for this RNN model. Note that this params buffer may not be
    compatible across GPUs. Please use CudnnRNNParamsWeights and
    CudnnRNNParamsBiases to save and restore them in a way that is compatible
    across different runs.

func CudnnRNNParamsToCanonical Uses

func CudnnRNNParamsToCanonical(scope *Scope, num_layers tf.Output, num_units tf.Output, input_size tf.Output, params tf.Output, num_params int64, optional ...CudnnRNNParamsToCanonicalAttr) (weights []tf.Output, biases []tf.Output)

Retrieves CudnnRNN params in canonical form.

Retrieves a set of weights from the opaque params buffer that can be saved and restored in a way compatible with future runs.

Note that the params buffer may not be compatible across different GPUs. So any save and restoration should be converted to and from the canonical weights and biases.

num_layers: Specifies the number of layers in the RNN model.
num_units: Specifies the size of the hidden state.
input_size: Specifies the size of the input state.
num_params: number of parameter sets for all layers.
    Each layer may contain multiple parameter sets, with each set consisting of
    a weight matrix and a bias vector.
weights: the canonical form of weights that can be used for saving
    and restoration. They are more likely to be compatible across different
    generations.
biases: the canonical form of biases that can be used for saving
    and restoration. They are more likely to be compatible across different
    generations.
rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used.
    dir = (direction == bidirectional) ? 2 : 1
dropout: dropout probability. When set to 0., dropout is disabled.
seed: the 1st part of a seed to initialize dropout.
seed2: the 2nd part of a seed to initialize dropout.

func CudnnRNNV2 Uses

func CudnnRNNV2(scope *Scope, input tf.Output, input_h tf.Output, input_c tf.Output, params tf.Output, optional ...CudnnRNNV2Attr) (output tf.Output, output_h tf.Output, output_c tf.Output, reserve_space tf.Output, host_reserved tf.Output)

An RNN backed by cuDNN.

Computes the RNN from the input and initial states, with respect to the params buffer. Produces one extra output, "host_reserved", compared to CudnnRNN.

rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. Should be
    "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].
input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units].
input_c: For LSTM, a 3-D tensor with the shape of
    [num_layer * dir, batch, num_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout.
    The size must be created through CudnnRNNParamsSize, and initialized
    separately. Note that they might not be compatible across different
    generations, so it is a good idea to save and restore them in the
    canonical form.
output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units].
output_h: The same shape as input_h.
output_c: The same shape as input_c for LSTM. An empty tensor for other models.
is_training: Indicates whether this operation is used for inference or training.
reserve_space: An opaque tensor that can be used in backprop calculation. It
    is only produced if is_training is true.
host_reserved: An opaque tensor that can be used in backprop calculation. It is
    only produced if is_training is true. It is output on host memory rather than
    device memory.

func CudnnRNNV3 Uses

func CudnnRNNV3(scope *Scope, input tf.Output, input_h tf.Output, input_c tf.Output, params tf.Output, sequence_lengths tf.Output, optional ...CudnnRNNV3Attr) (output tf.Output, output_h tf.Output, output_c tf.Output, reserve_space tf.Output, host_reserved tf.Output)

An RNN backed by cuDNN.

Computes the RNN from the input and initial states, with respect to the params buffer. Accepts one extra input, "sequence_lengths", compared to CudnnRNN.

rnn_mode: Indicates the type of the RNN model.
input_mode: Indicates whether there is a linear projection between the input and
    the actual computation before the first layer. 'skip_input' is only allowed
    when input_size == num_units; 'auto_select' implies 'skip_input' when
    input_size == num_units; otherwise, it implies 'linear_input'.
direction: Indicates whether a bidirectional model will be used. Should be
    "unidirectional" or "bidirectional".
dropout: Dropout probability. When set to 0., dropout is disabled.
seed: The 1st part of a seed to initialize dropout.
seed2: The 2nd part of a seed to initialize dropout.
input: A 3-D tensor with the shape of [seq_length, batch_size, input_size].
input_h: A 3-D tensor with the shape of [num_layer * dir, batch_size, num_units].
input_c: For LSTM, a 3-D tensor with the shape of
    [num_layer * dir, batch, num_units]. For other models, it is ignored.
params: A 1-D tensor that contains the weights and biases in an opaque layout.
    The size must be created through CudnnRNNParamsSize, and initialized
    separately. Note that they might not be compatible across different
    generations, so it is a good idea to save and restore them in the
    canonical form.
sequence_lengths: a vector of lengths of each input sequence.
output: A 3-D tensor with the shape of [seq_length, batch_size, dir * num_units].
output_h: The same shape as input_h.
output_c: The same shape as input_c for LSTM. An empty tensor for other models.
is_training: Indicates whether this operation is used for inference or training.
reserve_space: An opaque tensor that can be used in backprop calculation. It
    is only produced if is_training is true.

func Cumprod Uses

func Cumprod(scope *Scope, x tf.Output, axis tf.Output, optional ...CumprodAttr) (out tf.Output)

Compute the cumulative product of the tensor `x` along `axis`.

By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:

    tf.cumprod([a, b, c])  # => [a, a * b, a * b * c]

By setting the `exclusive` kwarg to `True`, an exclusive cumprod is performed instead:

    tf.cumprod([a, b, c], exclusive=True)  # => [1, a, a * b]

By setting the `reverse` kwarg to `True`, the cumprod is performed in the opposite direction:

    tf.cumprod([a, b, c], reverse=True)  # => [a * b * c, b * c, c]

This is more efficient than using separate `tf.reverse` ops.

The `reverse` and `exclusive` kwargs can also be combined:

    tf.cumprod([a, b, c], exclusive=True, reverse=True)  # => [b * c, c, 1]

Arguments:

x: A `Tensor`. Must be one of the following types: `float32`, `float64`,

`int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.

axis: A `Tensor` of type `int32` (default: 0). Must be in the range

`[-rank(x), rank(x))`.

func Cumsum Uses

func Cumsum(scope *Scope, x tf.Output, axis tf.Output, optional ...CumsumAttr) (out tf.Output)

Compute the cumulative sum of the tensor `x` along `axis`.

By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output:

    tf.cumsum([a, b, c])  # => [a, a + b, a + b + c]

By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed instead:

    tf.cumsum([a, b, c], exclusive=True)  # => [0, a, a + b]

By setting the `reverse` kwarg to `True`, the cumsum is performed in the opposite direction:

    tf.cumsum([a, b, c], reverse=True)  # => [a + b + c, b + c, c]

This is more efficient than using separate `tf.reverse` ops.

The `reverse` and `exclusive` kwargs can also be combined:

    tf.cumsum([a, b, c], exclusive=True, reverse=True)  # => [b + c, c, 0]

Arguments:

x: A `Tensor`. Must be one of the following types: `float32`, `float64`,

`int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.

axis: A `Tensor` of type `int32` (default: 0). Must be in the range

`[-rank(x), rank(x))`.
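
A minimal Go sketch of the inclusive cumsum; the exclusive/reverse variants described above would be selected through the generated CumsumAttr helpers, and evaluation requires a tensorflow Session as in the package example:

    s := NewScope()
    x := Const(s, []int32{1, 2, 3})
    out := Cumsum(s, x, Const(s, int32(0))) // inclusive scan along axis 0
    _ = out // fetched via Session.Run, out evaluates to [1, 3, 6]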

func DataFormatDimMap Uses

func DataFormatDimMap(scope *Scope, x tf.Output, optional ...DataFormatDimMapAttr) (y tf.Output)

Returns the dimension index in the destination data format given the one in

the source data format.

Arguments:

x: A Tensor with each element as a dimension index in source data format.

Must be in the range [-4, 4).

Returns A Tensor with each element as a dimension index in destination data format.

func DataFormatVecPermute Uses

func DataFormatVecPermute(scope *Scope, x tf.Output, optional ...DataFormatVecPermuteAttr) (y tf.Output)

Returns the permuted vector/tensor in the destination data format given the

one in the source data format.

Arguments:

x: Vector of size 4 or Tensor of shape (4, 2) in source data format.

Returns Vector of size 4 or Tensor of shape (4, 2) in destination data format.

func DatasetToGraph Uses

func DatasetToGraph(scope *Scope, input_dataset tf.Output) (graph tf.Output)

Returns a serialized GraphDef representing `input_dataset`.

Returns a graph representation for `input_dataset`.

Arguments:

input_dataset: A variant tensor representing the dataset to return the graph representation for.

Returns The graph representation of the dataset (as serialized GraphDef).

func DatasetToSingleElement Uses

func DatasetToSingleElement(scope *Scope, dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (components []tf.Output)

Outputs the single element from the given dataset.

Arguments:

dataset: A handle to a dataset that contains a single element.

Returns The components of the single element of `dataset`.

func DebugGradientIdentity Uses

func DebugGradientIdentity(scope *Scope, input tf.Output) (output tf.Output)

Identity op for gradient debugging.

This op is hidden from public in Python. It is used by TensorFlow Debugger to register gradient tensors for gradient debugging. This op operates on non-reference-type tensors.

func DecodeAndCropJpeg Uses

func DecodeAndCropJpeg(scope *Scope, contents tf.Output, crop_window tf.Output, optional ...DecodeAndCropJpegAttr) (image tf.Output)

Decode and Crop a JPEG-encoded image to a uint8 tensor.

The attr `channels` indicates the desired number of color channels for the decoded image.

Accepted values are:

* 0: Use the number of channels in the JPEG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.

If needed, the JPEG-encoded image is transformed to match the requested number of color channels.

The attr `ratio` allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.

It is equivalent to a combination of decode and crop, but much faster because it only decodes the part of the JPEG image within the crop window.

Arguments:

contents: 0-D.  The JPEG-encoded image.
crop_window: 1-D.  The crop window: [crop_y, crop_x, crop_height, crop_width].

Returns 3-D with shape `[height, width, channels]`.

func DecodeBase64 Uses

func DecodeBase64(scope *Scope, input tf.Output) (output tf.Output)

Decode web-safe base64-encoded strings.

Input may or may not have padding at the end. See EncodeBase64 for padding. Web-safe means that input must use - and _ instead of + and /.

Arguments:

input: Base64 strings to decode.

Returns Decoded strings.

func DecodeBmp Uses

func DecodeBmp(scope *Scope, contents tf.Output, optional ...DecodeBmpAttr) (image tf.Output)

Decode the first frame of a BMP-encoded image to a uint8 tensor.

The attr `channels` indicates the desired number of color channels for the decoded image.

Accepted values are:

* 0: Use the number of channels in the BMP-encoded image.
* 3: output an RGB image.
* 4: output an RGBA image.

Arguments:

contents: 0-D.  The BMP-encoded image.

Returns 3-D with shape `[height, width, channels]`. RGB order

func DecodeCSV Uses

func DecodeCSV(scope *Scope, records tf.Output, record_defaults []tf.Output, optional ...DecodeCSVAttr) (output []tf.Output)

Convert CSV records to tensors. Each column maps to one tensor.

RFC 4180 format is expected for the CSV records. (https://tools.ietf.org/html/rfc4180) Note that we allow leading and trailing spaces in int or float fields.

Arguments:

records: Each string is a record/row in the csv and all records should have

the same format.

record_defaults: One tensor per column of the input record, with either a

scalar default value for that column or an empty vector if the column is required.

Returns Each tensor will have the same shape as records.
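
A hedged Go sketch of the column mapping, assuming two columns (an int32 and a string); each `record_defaults` entry fixes the corresponding column's type:

```go
s := NewScope()
records := Const(s, []string{"1,alice", "2,bob"})
defaults := []tf.Output{
	Const(s, []int32{0}),   // column 0: int32, default 0
	Const(s, []string{""}), // column 1: string, default ""
}
cols := DecodeCSV(s, records, defaults)
// cols[0] should evaluate to [1 2] (int32) and cols[1] to ["alice" "bob"],
// each with the same shape as records.
_ = cols
```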

func DecodeCompressed Uses

func DecodeCompressed(scope *Scope, bytes tf.Output, optional ...DecodeCompressedAttr) (output tf.Output)

Decompress strings.

This op decompresses each element of the `bytes` input `Tensor`, which is assumed to be compressed using the given `compression_type`.

The `output` is a string `Tensor` of the same shape as `bytes`, each element containing the decompressed data from the corresponding element in `bytes`.

Arguments:

bytes: A Tensor of string which is compressed.

Returns A Tensor with the same shape as input `bytes`, uncompressed from bytes.

func DecodeGif Uses

func DecodeGif(scope *Scope, contents tf.Output) (image tf.Output)

Decode the first frame of a GIF-encoded image to a uint8 tensor.

GIF images with frame or transparency compression are not supported; convert an animated GIF from compressed to uncompressed with:

convert $src.gif -coalesce $dst.gif

This op also supports decoding JPEGs and PNGs, though it is cleaner to use `tf.image.decode_image`.

Arguments:

contents: 0-D.  The GIF-encoded image.

Returns 4-D with shape `[num_frames, height, width, 3]`. RGB order

func DecodeJSONExample Uses

func DecodeJSONExample(scope *Scope, json_examples tf.Output) (binary_examples tf.Output)

Convert JSON-encoded Example records to binary protocol buffer strings.

This op translates a tensor containing Example records, encoded using the [standard JSON mapping](https://developers.google.com/protocol-buffers/docs/proto3#json), into a tensor containing the same records encoded as binary protocol buffers. The resulting tensor can then be fed to any of the other Example-parsing ops.

Arguments:

json_examples: Each string is a JSON object serialized according to the JSON

mapping of the Example proto.

Returns Each string is a binary Example protocol buffer corresponding to the respective element of `json_examples`.

func DecodeJpeg Uses

func DecodeJpeg(scope *Scope, contents tf.Output, optional ...DecodeJpegAttr) (image tf.Output)

Decode a JPEG-encoded image to a uint8 tensor.

The attr `channels` indicates the desired number of color channels for the decoded image.

Accepted values are:

* 0: Use the number of channels in the JPEG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.

If needed, the JPEG-encoded image is transformed to match the requested number of color channels.

The attr `ratio` allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.

This op also supports decoding PNGs and non-animated GIFs since the interface is the same, though it is cleaner to use `tf.image.decode_image`.

Arguments:

contents: 0-D.  The JPEG-encoded image.

Returns 3-D with shape `[height, width, channels]`.
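
For example, a sketch that decodes JPEG bytes fed at run time into an RGB image downscaled by 2; the `DecodeJpegChannels` and `DecodeJpegRatio` setters are assumed from the generated attr naming convention:

```go
s := NewScope()
contents := Placeholder(s, tf.String) // JPEG bytes, fed via Session.Run
img := DecodeJpeg(s, contents,
	DecodeJpegChannels(3), // force RGB output
	DecodeJpegRatio(2))    // decode at half resolution
// img is a uint8 tensor of shape [height/2, width/2, 3].
_ = img
```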

func DecodePng Uses

func DecodePng(scope *Scope, contents tf.Output, optional ...DecodePngAttr) (image tf.Output)

Decode a PNG-encoded image to a uint8 or uint16 tensor.

The attr `channels` indicates the desired number of color channels for the decoded image.

Accepted values are:

* 0: Use the number of channels in the PNG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.
* 4: output an RGBA image.

If needed, the PNG-encoded image is transformed to match the requested number of color channels.

This op also supports decoding JPEGs and non-animated GIFs since the interface is the same, though it is cleaner to use `tf.image.decode_image`.

Arguments:

contents: 0-D.  The PNG-encoded image.

Returns 3-D with shape `[height, width, channels]`.

func DecodeProtoV2 Uses

func DecodeProtoV2(scope *Scope, bytes tf.Output, message_type string, field_names []string, output_types []tf.DataType, optional ...DecodeProtoV2Attr) (sizes tf.Output, values []tf.Output)

The op extracts fields from a serialized protocol buffers message into tensors.

The `decode_proto` op extracts fields from a serialized protocol buffers message into tensors. The fields in `field_names` are decoded and converted to the corresponding `output_types` if possible.

A `message_type` name must be provided to give context for the field names. The actual message descriptor can be looked up either in the linked-in descriptor pool or a filename provided by the caller using the `descriptor_source` attribute.

Each output tensor is a dense tensor. This means that it is padded to hold the largest number of repeated elements seen in the input minibatch. (The shape is also padded by one to prevent zero-sized dimensions). The actual repeat counts for each example in the minibatch can be found in the `sizes` output. In many cases the output of `decode_proto` is fed immediately into tf.squeeze if missing values are not a concern. When using tf.squeeze, always pass the squeeze dimension explicitly to avoid surprises.

For the most part, the mapping between Proto field types and TensorFlow dtypes is straightforward. However, there are a few special cases:

- A proto field that contains a submessage or group can only be converted to `DT_STRING` (the serialized submessage). This is to reduce the complexity of the API. The resulting string can be used as input to another instance of the decode_proto op.

- TensorFlow lacks support for unsigned integers. The ops represent uint64 types as a `DT_INT64` with the same twos-complement bit pattern (the obvious way). Unsigned int32 values can be represented exactly by specifying type `DT_INT64`, or using twos-complement if the caller specifies `DT_INT32` in the `output_types` attribute.

The `descriptor_source` attribute selects a source of protocol descriptors to consult when looking up `message_type`. This may be a filename containing a serialized `FileDescriptorSet` message, or the special value `local://`, in which case only descriptors linked into the code will be searched; the filename can be on any filesystem accessible to TensorFlow.

You can build a `descriptor_source` file using the `--descriptor_set_out` and `--include_imports` options to the protocol compiler `protoc`.

The `local://` database only covers descriptors linked into the code via C++ libraries, not Python imports. You can link in a proto descriptor by creating a cc_library target with alwayslink=1.

Both binary and text proto serializations are supported, and can be chosen using the `format` attribute.

Arguments:

bytes: Tensor of serialized protos with shape `batch_shape`.
message_type: Name of the proto message type to decode.
field_names: List of strings containing proto field names.
output_types: List of TF types to use for the respective field in field_names.

Returns:

sizes: Tensor of int32 with shape `[batch_shape, len(field_names)]`. Each entry is the number of values found for the corresponding field. Optional fields may have 0 or 1 values.

values: List of tensors containing values for the corresponding field. `values[i]` has datatype `output_types[i]` and shape `[batch_shape, max(sizes[...,i])]`.

func DecodeRaw Uses

func DecodeRaw(scope *Scope, bytes tf.Output, out_type tf.DataType, optional ...DecodeRawAttr) (output tf.Output)

Reinterpret the bytes of a string as a vector of numbers.

Arguments:

bytes: All the elements must have the same length.

Returns A Tensor with one more dimension than the input `bytes`. The added dimension will have size equal to the length of the elements of `bytes` divided by the number of bytes to represent `out_type`.
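
To make the reinterpretation concrete, a sketch that unpacks eight bytes into two little-endian float32 values (`DecodeRawLittleEndian` is assumed from the generated attr naming):

```go
s := NewScope()
// 0x3F800000 and 0x40000000 little-endian: 1.0f and 2.0f.
raw := Const(s, string([]byte{0, 0, 128, 63, 0, 0, 0, 64}))
vals := DecodeRaw(s, raw, tf.Float, DecodeRawLittleEndian(true))
// vals has one more dimension than raw: here a float32 vector [1 2].
_ = vals
```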

func DecodeWav Uses

func DecodeWav(scope *Scope, contents tf.Output, optional ...DecodeWavAttr) (audio tf.Output, sample_rate tf.Output)

Decode a 16-bit PCM WAV file to a float tensor.

The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float.

When desired_channels is set, if the input contains fewer channels than this then the last channel will be duplicated to give the requested number, else if the input has more channels than requested then the additional channels will be ignored.

If desired_samples is set, then the audio will be cropped or padded with zeroes to the requested length.

The first output contains a Tensor with the content of the audio samples. The lowest dimension will be the number of channels, and the second will be the number of samples. For example, a ten-sample-long stereo WAV file should give an output shape of [10, 2].

Arguments:

contents: The WAV-encoded audio, usually from a file.

Returns:

audio: 2-D with shape `[length, channels]`.

sample_rate: Scalar holding the sample rate found in the WAV header.

func DeepCopy Uses

func DeepCopy(scope *Scope, x tf.Output) (y tf.Output)

Makes a copy of `x`.

Arguments:

x: The source tensor of type `T`.

Returns y: A `Tensor` of type `T`. A copy of `x`. Guaranteed that `y`

is not an alias of `x`.

func DeleteSessionTensor Uses

func DeleteSessionTensor(scope *Scope, handle tf.Output) (o *tf.Operation)

Delete the tensor specified by its handle in the session.

Arguments:

handle: The handle for a tensor stored in the session state.

Returns the created operation.

func DenseToDenseSetOperation Uses

func DenseToDenseSetOperation(scope *Scope, set1 tf.Output, set2 tf.Output, set_operation string, optional ...DenseToDenseSetOperationAttr) (result_indices tf.Output, result_values tf.Output, result_shape tf.Output)

Applies set operation along last dimension of 2 `Tensor` inputs.

See SetOperationOp::SetOperationFromContext for values of `set_operation`.

Output `result` is a `SparseTensor` represented by `result_indices`, `result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth` dimension contains the result of `set_operation` applied to the corresponding `[0...n-1]` dimension of `set`.

Arguments:

set1: `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`.

Dimension `n` contains values in a set, duplicates are allowed but ignored.

set2: `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set1`.

Dimension `n` contains values in a set, duplicates are allowed but ignored.

Returns:

result_indices: 2D indices of a `SparseTensor`.

result_values: 1D values of a `SparseTensor`.

result_shape: 1D `Tensor` shape of a `SparseTensor`. `result_shape[0...n-1]` is the same as the 1st `n-1` dimensions of `set1` and `set2`, `result_shape[n]` is the max result set size across all `0...n-1` dimensions.

func DenseToSparseSetOperation Uses

func DenseToSparseSetOperation(scope *Scope, set1 tf.Output, set2_indices tf.Output, set2_values tf.Output, set2_shape tf.Output, set_operation string, optional ...DenseToSparseSetOperationAttr) (result_indices tf.Output, result_values tf.Output, result_shape tf.Output)

Applies set operation along last dimension of `Tensor` and `SparseTensor`.

See SetOperationOp::SetOperationFromContext for values of `set_operation`.

Input `set2` is a `SparseTensor` represented by `set2_indices`, `set2_values`, and `set2_shape`. For `set2` ranked `n`, 1st `n-1` dimensions must be the same as `set1`. Dimension `n` contains values in a set, duplicates are allowed but ignored.

If `validate_indices` is `True`, this op validates the order and range of `set2` indices.

Output `result` is a `SparseTensor` represented by `result_indices`, `result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth` dimension contains the result of `set_operation` applied to the corresponding `[0...n-1]` dimension of `set`.

Arguments:

set1: `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`.

Dimension `n` contains values in a set, duplicates are allowed but ignored.

set2_indices: 2D `Tensor`, indices of a `SparseTensor`. Must be in row-major

order.

set2_values: 1D `Tensor`, values of a `SparseTensor`. Must be in row-major

order.

set2_shape: 1D `Tensor`, shape of a `SparseTensor`. `set2_shape[0...n-1]` must

be the same as the 1st `n-1` dimensions of `set1`; `set2_shape[n]` is the max set size across `n-1` dimensions.

Returns:

result_indices: 2D indices of a `SparseTensor`.

result_values: 1D values of a `SparseTensor`.

result_shape: 1D `Tensor` shape of a `SparseTensor`. `result_shape[0...n-1]` is the same as the 1st `n-1` dimensions of `set1` and `set2`, `result_shape[n]` is the max result set size across all `0...n-1` dimensions.

func DepthToSpace Uses

func DepthToSpace(scope *Scope, input tf.Output, block_size int64, optional ...DepthToSpaceAttr) (output tf.Output)

DepthToSpace for tensors of type T.

Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the `depth` dimension are moved in spatial blocks to the `height` and `width` dimensions. The attr `block_size` indicates the input block size and how the data is moved.

* Chunks of data of size `block_size * block_size` from depth are rearranged
  into non-overlapping blocks of size `block_size x block_size`
* The width of the output tensor is `input_width * block_size`, whereas the
  height is `input_height * block_size`.
* The Y, X coordinates within each block of the output image are determined
  by the high order component of the input channel index.
* The depth of the input tensor must be divisible by
  `block_size * block_size`.

The `data_format` attr specifies the layout of the input and output tensors with the following options:

"NHWC": `[ batch, height, width, channels ]`
"NCHW": `[ batch, channels, height, width ]`
"NCHW_VECT_C":
    `qint8 [ batch, channels / 4, height, width, 4 ]`

It is useful to consider the operation as transforming a 6-D Tensor. For example, for data_format = NHWC,

Each element in the input tensor can be specified via 6 coordinates,
ordered by decreasing memory layout significance as:
n,iY,iX,bY,bX,oC  (where n=batch index, iX, iY means X or Y coordinates
                   within the input image, bX, bY means coordinates
                   within the output block, oC means output channels).
The output would be the input transposed to the following layout:
n,iY,bY,iX,bX,oC

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given an input of shape `[1, 1, 1, 4]`, data_format = "NHWC" and block_size = 2:

```
x = [[[[1, 2, 3, 4]]]]
```

This operation will output a tensor of shape `[1, 2, 2, 1]`:

```
[[[[1], [2]],
  [[3], [4]]]]
```

Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = `4 / (block_size * block_size)`). The output element shape is `[2, 2, 1]`.

For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.

```
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```

This operation, for block size of 2, will return the following tensor of shape `[1, 2, 2, 3]`:

```
[[[[1, 2, 3], [4, 5, 6]],
  [[7, 8, 9], [10, 11, 12]]]]
```

Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:

```
x = [[[[1, 2, 3, 4],
       [5, 6, 7, 8]],
      [[9, 10, 11, 12],
       [13, 14, 15, 16]]]]
```

the operator will return the following tensor of shape `[1 4 4 1]`:

```
x = [[[ [1],  [2],  [5],  [6]],
      [ [3],  [4],  [7],  [8]],
      [ [9], [10], [13], [14]],
      [[11], [12], [15], [16]]]]
```

Arguments:

block_size: The size of the spatial block, same as in Space2Depth.
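
The first example above translates directly to Go; a minimal sketch:

```go
s := NewScope()
x := Const(s, [][][][]float32{{{{1, 2, 3, 4}}}}) // shape [1, 1, 1, 4]
y := DepthToSpace(s, x, 2)
// y should evaluate to [[[[1], [2]], [[3], [4]]]], shape [1, 2, 2, 1].
_ = y
```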

func DepthwiseConv2dNative Uses

func DepthwiseConv2dNative(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...DepthwiseConv2dNativeAttr) (output tf.Output)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.

Arguments:

strides: 1-D of length 4.  The stride of the sliding window for each dimension

of `input`.

padding: The type of padding algorithm to use.

func DepthwiseConv2dNativeBackpropFilter Uses

func DepthwiseConv2dNativeBackpropFilter(scope *Scope, input tf.Output, filter_sizes tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...DepthwiseConv2dNativeBackpropFilterAttr) (output tf.Output)

Computes the gradients of depthwise convolution with respect to the filter.

Arguments:

input: 4-D with shape based on `data_format`.  For example, if

`data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height, in_width, in_channels]` tensor.

filter_sizes: An integer vector representing the tensor shape of `filter`,

where `filter` is a 4-D `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor.

out_backprop: 4-D with shape  based on `data_format`.

For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t. the `filter` input of the convolution.

func DepthwiseConv2dNativeBackpropInput Uses

func DepthwiseConv2dNativeBackpropInput(scope *Scope, input_sizes tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...DepthwiseConv2dNativeBackpropInputAttr) (output tf.Output)

Computes the gradients of depthwise convolution with respect to the input.

Arguments:

input_sizes: An integer vector representing the shape of `input`, based

on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor.

filter: 4-D with shape

`[filter_height, filter_width, in_channels, depthwise_multiplier]`.

out_backprop: 4-D with shape  based on `data_format`.

For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution.

padding: The type of padding algorithm to use.

Returns 4-D with shape according to `data_format`. For example, if `data_format` is 'NHWC', output shape is `[batch, in_height, in_width, in_channels]`. Gradient w.r.t. the input of the convolution.

func Dequantize Uses

func Dequantize(scope *Scope, input tf.Output, min_range tf.Output, max_range tf.Output, optional ...DequantizeAttr) (output tf.Output)

Dequantize the 'input' tensor into a float Tensor.

[min_range, max_range] are scalar floats that specify the range for the 'input' data. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.

In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:

```
if T == qint8: in[i] += (range(T) + 1) / 2.0
out[i] = min_range + (in[i] * (max_range - min_range) / range(T))
```

here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`

*MIN_COMBINED Mode Example*

If the input comes from a QuantizedRelu6, the output type is quint8 (range of 0-255) but the possible range of QuantizedRelu6 is 0-6. The min_range and max_range values are therefore 0.0 and 6.0. Dequantize on quint8 will take each value, cast it to float, and multiply by 6 / 255. Note that if the quantized type is qint8, the operation will additionally add 128 to each value prior to casting.

If the mode is 'MIN_FIRST', then this approach is used:

```c++
num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = range / num_discrete_values
const double offset_input = static_cast<double>(input) - lowest_quantized;
result = range_min + ((input - numeric_limits<T>::min()) * range_scale)
```

*SCALED mode Example*

`SCALED` mode matches the quantization approach used in `QuantizeAndDequantize{V2|V3}`.

If the mode is `SCALED`, we do not use the full range of the output type, choosing to elide the lowest possible value for symmetry (e.g., output range is -127 to 127, not -128 to 127 for signed 8 bit quantization), so that 0.0 maps to 0.

We first find the range of values in our tensor. The range we use is always centered on 0, so we find m such that

```c++
m = max(abs(input_min), abs(input_max))
```

Our input tensor range is then `[-m, m]`.

Next, we choose our fixed-point quantization buckets, `[min_fixed, max_fixed]`. If T is signed, this is

```
num_bits = sizeof(T) * 8
[min_fixed, max_fixed] =
    [-((1 << (num_bits - 1)) - 1), (1 << (num_bits - 1)) - 1]
```

Otherwise, if T is unsigned, the fixed-point range is

```
[min_fixed, max_fixed] = [0, (1 << num_bits) - 1]
```

From this we compute our scaling factor, s:

```c++
s = (2 * m) / (max_fixed - min_fixed)
```

Now we can dequantize the elements of our tensor:

```c++
result = input * s
```

Arguments:

min_range: The minimum scalar value possibly produced for the input.
max_range: The maximum scalar value possibly produced for the input.
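
As a worked instance of the `SCALED` formulas above (plain host-side Go arithmetic with "math" imported, not graph ops; the input range here is an arbitrary assumption):

```go
// Signed 8-bit quantization, lowest value elided for symmetry.
const numBits = 8
minFixed := -((1 << (numBits - 1)) - 1) // -127
maxFixed := (1 << (numBits - 1)) - 1    // 127
inputMin, inputMax := -3.0, 5.0         // assumed input range
m := math.Max(math.Abs(inputMin), math.Abs(inputMax)) // 5.0
scale := (2 * m) / float64(maxFixed-minFixed)         // 10/254 ≈ 0.03937
// A stored quantized code c then dequantizes to float64(c) * scale.
_ = scale
```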

func DeserializeIterator Uses

func DeserializeIterator(scope *Scope, resource_handle tf.Output, serialized tf.Output) (o *tf.Operation)

Converts the given variant tensor to an iterator and stores it in the given resource.

Arguments:

resource_handle: A handle to an iterator resource.
serialized: A variant tensor storing the state of the iterator contained in the

resource.

Returns the created operation.

func DeserializeManySparse Uses

func DeserializeManySparse(scope *Scope, serialized_sparse tf.Output, dtype tf.DataType) (sparse_indices tf.Output, sparse_values tf.Output, sparse_shape tf.Output)

Deserialize and concatenate `SparseTensors` from a serialized minibatch.

The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where `N` is the minibatch size and the rows correspond to packed outputs of `SerializeSparse`. The ranks of the original `SparseTensor` objects must all match. When the final `SparseTensor` is created, it has rank one higher than the ranks of the incoming `SparseTensor` objects (they have been concatenated along a new row dimension).

The output `SparseTensor` object's shape values for all dimensions but the first are the max across the input `SparseTensor` objects' shape values for the corresponding dimensions. Its first shape value is `N`, the minibatch size.

The input `SparseTensor` objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run `SparseReorder` to restore index ordering.

For example, if the serialized input is a `[2 x 3]` matrix representing two original `SparseTensor` objects:

index = [ 0]
        [10]
        [20]
values = [1, 2, 3]
shape = [50]

and

index = [ 2]
        [10]
values = [4, 5]
shape = [30]

then the final deserialized `SparseTensor` will be:

index = [0  0]
        [0 10]
        [0 20]
        [1  2]
        [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]

Arguments:

serialized_sparse: 2-D, The `N` serialized `SparseTensor` objects.

Must have 3 columns.

dtype: The `dtype` of the serialized `SparseTensor` objects.

func DeserializeSparse Uses

func DeserializeSparse(scope *Scope, serialized_sparse tf.Output, dtype tf.DataType) (sparse_indices tf.Output, sparse_values tf.Output, sparse_shape tf.Output)

Deserialize `SparseTensor` objects.

The input `serialized_sparse` must have the shape `[?, ?, ..., ?, 3]` where the last dimension stores serialized `SparseTensor` objects and the other N dimensions (N >= 0) correspond to a batch. The ranks of the original `SparseTensor` objects must all match. When the final `SparseTensor` is created, its rank is the rank of the incoming `SparseTensor` objects plus N; the sparse tensors have been concatenated along new dimensions, one for each batch.

The output `SparseTensor` object's shape values for the original dimensions are the max across the input `SparseTensor` objects' shape values for the corresponding dimensions. The new dimensions match the size of the batch.

The input `SparseTensor` objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run `SparseReorder` to restore index ordering.

For example, if the serialized input is a `[2 x 3]` matrix representing two original `SparseTensor` objects:

index = [ 0]
        [10]
        [20]
values = [1, 2, 3]
shape = [50]

and

index = [ 2]
        [10]
values = [4, 5]
shape = [30]

then the final deserialized `SparseTensor` will be:

index = [0  0]
        [0 10]
        [0 20]
        [1  2]
        [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]

Arguments:

serialized_sparse: The serialized `SparseTensor` objects. The last dimension

must have 3 columns.

dtype: The `dtype` of the serialized `SparseTensor` objects.

func DestroyResourceOp Uses

func DestroyResourceOp(scope *Scope, resource tf.Output, optional ...DestroyResourceOpAttr) (o *tf.Operation)

Deletes the resource specified by the handle.

All subsequent operations using the resource will result in a NotFound error status.

Arguments:

resource: handle to the resource to delete.

Returns the created operation.

func Diag Uses

func Diag(scope *Scope, diagonal tf.Output) (output tf.Output)

Returns a diagonal tensor with given diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.

For example:

```
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]
```

Arguments:

diagonal: Rank k tensor where k is at most 1.

func DiagPart Uses

func DiagPart(scope *Scope, input tf.Output) (diagonal tf.Output)

Returns the diagonal part of the tensor.

This operation returns a tensor with the `diagonal` part of the `input`. The `diagonal` part is computed as follows:

Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a tensor of rank `k` with dimensions `[D1,..., Dk]` where:

`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.

For example:

```
# 'input' is [[1, 0, 0, 0]
#             [0, 2, 0, 0]
#             [0, 0, 3, 0]
#             [0, 0, 0, 4]]

tf.diag_part(input) ==> [1, 2, 3, 4]
```

Arguments:

input: Rank k tensor where k is even and not zero.

Returns The extracted diagonal.

func Digamma Uses

func Digamma(scope *Scope, x tf.Output) (y tf.Output)

Computes Psi, the derivative of Lgamma (the log of the absolute value of

`Gamma(x)`), element-wise.

func Dilation2D Uses

func Dilation2D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, rates []int64, padding string) (output tf.Output)

Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.

The `input` tensor has shape `[batch, in_height, in_width, depth]` and the `filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.

In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with `conv2d`, we use unmirrored filters):

output[b, y, x, c] =
   max_{dy, dx} input[b,
                      strides[1] * y + rates[1] * dy,
                      strides[2] * x + rates[2] * dx,
                      c] +
                filter[dy, dx, c]

Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.

Note on duality: The dilation of `input` by the `filter` is equal to the negation of the erosion of `-input` by the reflected `filter`.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, depth]`.
filter: 3-D with shape `[filter_height, filter_width, depth]`.
strides: The stride of the sliding window for each dimension of the input

tensor. Must be: `[1, stride_height, stride_width, 1]`.

rates: The input stride for atrous morphological dilation. Must be:

`[1, rate_height, rate_width, 1]`.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[batch, out_height, out_width, depth]`.

func Dilation2DBackpropFilter Uses

func Dilation2DBackpropFilter(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, rates []int64, padding string) (filter_backprop tf.Output)

Computes the gradient of morphological 2-D dilation with respect to the filter.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, depth]`.
filter: 3-D with shape `[filter_height, filter_width, depth]`.
out_backprop: 4-D with shape `[batch, out_height, out_width, depth]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of

the input tensor. Must be: `[1, stride_height, stride_width, 1]`.

rates: 1-D of length 4. The input stride for atrous morphological dilation.

Must be: `[1, rate_height, rate_width, 1]`.

padding: The type of padding algorithm to use.

Returns 3-D with shape `[filter_height, filter_width, depth]`.

func Dilation2DBackpropInput Uses

func Dilation2DBackpropInput(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, rates []int64, padding string) (in_backprop tf.Output)

Computes the gradient of morphological 2-D dilation with respect to the input.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, depth]`.
filter: 3-D with shape `[filter_height, filter_width, depth]`.
out_backprop: 4-D with shape `[batch, out_height, out_width, depth]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of

the input tensor. Must be: `[1, stride_height, stride_width, 1]`.

rates: 1-D of length 4. The input stride for atrous morphological dilation.

Must be: `[1, rate_height, rate_width, 1]`.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[batch, in_height, in_width, depth]`.

func Div Uses

func Div(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x / y element-wise.

*NOTE*: `Div` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func DivNoNan Uses

func DivNoNan(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns 0 if the denominator is zero.

*NOTE*: `DivNoNan` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func DrawBoundingBoxes Uses

func DrawBoundingBoxes(scope *Scope, images tf.Output, boxes tf.Output) (output tf.Output)

Draw bounding boxes on a batch of images.

Outputs a copy of `images` but draws on top of the pixels zero or more bounding boxes specified by the locations in `boxes`. The coordinates of each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

For example, if an image is 100 x 200 pixels (height x width) and the bounding box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of the bounding box will be `(40, 10)` to `(180, 50)` (in (x,y) coordinates).

Parts of the bounding box may fall outside the image.

Arguments:

images: 4-D with shape `[batch, height, width, depth]`. A batch of images.
boxes: 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding

boxes.

Returns 4-D with the same shape as `images`. The batch of input images with bounding boxes drawn on the images.

func DynamicPartition Uses

func DynamicPartition(scope *Scope, data tf.Output, partitions tf.Output, num_partitions int64) (outputs []tf.Output)

Partitions `data` into `num_partitions` tensors using indices from `partitions`.

For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]` becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i` are placed in `outputs[i]` in lexicographic order of `js`, and the first dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`. In detail,

```python

outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]

outputs[i] = pack([data[js, ...] for js if partitions[js] == i])

```

`data.shape` must start with `partitions.shape`.

For example:

```python

# Scalar partitions.
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = []  # Empty with shape [0, 2]
outputs[1] = [[10, 20]]

# Vector partitions.
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]

```

See `dynamic_stitch` for an example on how to merge partitions back.

(Illustration: https://www.tensorflow.org/images/DynamicPartition.png)

Arguments:

partitions: Any shape.  Indices in the range `[0, num_partitions)`.
num_partitions: The number of partitions to output.
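
The vector example above, as a minimal Go sketch:

```go
s := NewScope()
data := Const(s, []int32{10, 20, 30, 40, 50})
partitions := Const(s, []int32{0, 0, 1, 1, 0})
outs := DynamicPartition(s, data, partitions, 2)
// outs[0] should evaluate to [10 20 50] and outs[1] to [30 40].
_ = outs
```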

func DynamicStitch Uses

func DynamicStitch(scope *Scope, indices []tf.Output, data []tf.Output) (merged tf.Output)

Interleave the values from the `data` tensors into a single tensor.

Builds a merged tensor such that

```python

merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]

```

For example, if each `indices[m]` is scalar or vector, we have

```python

# Scalar indices:
merged[indices[m], ...] = data[m][...]

# Vector indices:
merged[indices[m][i], ...] = data[m][i, ...]

```

Each `data[i].shape` must start with the corresponding `indices[i].shape`, and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we must have `data[i].shape = indices[i].shape + constant`. In terms of this `constant`, the output shape is

merged.shape = [max(indices)] + constant

Values are merged in order, so if an index appears in both `indices[m][i]` and `indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the merged result. If you do not need this guarantee, ParallelDynamicStitch might perform better on some devices.

For example:

```python

indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
          [51, 52], [61, 62]]

```

This method can be used to merge partitions created by `dynamic_partition` as illustrated on the following example:

```python

# Apply function (increments x_i) on elements for which a certain condition
# apply (x_i != -1 in this example).
x=tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask=tf.not_equal(x,tf.constant(-1.))
partitioned_data = tf.dynamic_partition(
    x, tf.cast(condition_mask, tf.int32) , 2)
partitioned_data[1] = partitioned_data[1] + 1.0
condition_indices = tf.dynamic_partition(
    tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32) , 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x=[1.1, -1., 6.2, 5.3, -1, 8.4], the -1. values remain
# unchanged.

```

(Illustration: https://www.tensorflow.org/images/DynamicStitch.png)
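
A minimal Go sketch of the interleaving (values chosen arbitrarily):

```go
s := NewScope()
indices := []tf.Output{
	Const(s, []int32{0, 2}),
	Const(s, []int32{1}),
}
data := []tf.Output{
	Const(s, []float32{10, 30}),
	Const(s, []float32{20}),
}
merged := DynamicStitch(s, indices, data)
// merged[0]=10, merged[1]=20, merged[2]=30, so merged evaluates to [10 20 30].
_ = merged
```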

func EagerPyFunc Uses

func EagerPyFunc(scope *Scope, input []tf.Output, token string, Tout []tf.DataType) (output []tf.Output)

Eagerly executes a python function to compute func(input)->output. The

semantics of the input, output, and attributes are the same as those for PyFunc.

func EditDistance Uses

func EditDistance(scope *Scope, hypothesis_indices tf.Output, hypothesis_values tf.Output, hypothesis_shape tf.Output, truth_indices tf.Output, truth_values tf.Output, truth_shape tf.Output, optional ...EditDistanceAttr) (output tf.Output)

Computes the (possibly normalized) Levenshtein Edit Distance.

The inputs are variable-length sequences provided by SparseTensors

(hypothesis_indices, hypothesis_values, hypothesis_shape)

and

(truth_indices, truth_values, truth_shape).

The inputs are:

Arguments:

hypothesis_indices: The indices of the hypothesis list SparseTensor.

This is an N x R int64 matrix.

hypothesis_values: The values of the hypothesis list SparseTensor.

This is an N-length vector.

hypothesis_shape: The shape of the hypothesis list SparseTensor.

This is an R-length vector.

truth_indices: The indices of the truth list SparseTensor.

This is an M x R int64 matrix.

truth_values: The values of the truth list SparseTensor.

This is an M-length vector.

truth_shape: The shape of the truth list SparseTensor. This is an R-length vector.

Returns A dense float tensor with rank R - 1.

For the example input:

// hypothesis represents a 2x1 matrix with variable-length values:
//   (0,0) = ["a"]
//   (1,0) = ["b"]
hypothesis_indices = [[0, 0, 0],
                      [1, 0, 0]]
hypothesis_values = ["a", "b"]
hypothesis_shape = [2, 1, 1]

// truth represents a 2x2 matrix with variable-length values:
//   (0,0) = []
//   (0,1) = ["a"]
//   (1,0) = ["b", "c"]
//   (1,1) = ["a"]
truth_indices = [[0, 1, 0],
                 [1, 0, 0],
                 [1, 0, 1],
                 [1, 1, 0]]
truth_values = ["a", "b", "c", "a"]
truth_shape = [2, 2, 2]
normalize = true

The output will be:

// output is a 2x2 matrix with edit distances normalized by truth lengths.
output = [[inf, 1.0],  // (0,0): no truth, (0,1): no hypothesis
          [0.5, 1.0]]  // (1,0): addition, (1,1): no hypothesis

func Elu Uses

func Elu(scope *Scope, features tf.Output) (activations tf.Output)

Computes exponential linear: `exp(features) - 1` if < 0, `features` otherwise.

See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) ](http://arxiv.org/abs/1511.07289)

func EluGrad Uses

func EluGrad(scope *Scope, gradients tf.Output, outputs tf.Output) (backprops tf.Output)

Computes gradients for the exponential linear (Elu) operation.

Arguments:

gradients: The backpropagated gradients to the corresponding Elu operation.
outputs: The outputs of the corresponding Elu operation.

Returns The gradients: `gradients * (outputs + 1)` if outputs < 0, `gradients` otherwise.

func Empty Uses

func Empty(scope *Scope, shape tf.Output, dtype tf.DataType, optional ...EmptyAttr) (output tf.Output)

Creates a tensor with the given shape.

This operation creates a tensor of `shape` and `dtype`.

Arguments:

shape: 1-D. Represents the shape of the output tensor.

Returns A `Tensor` of type `T`.

func EmptyTensorList Uses

func EmptyTensorList(scope *Scope, element_shape tf.Output, max_num_elements tf.Output, element_dtype tf.DataType) (handle tf.Output)

Creates and returns an empty tensor list.

All list elements must be tensors of dtype element_dtype and shape compatible with element_shape.

handle: an empty tensor list.
element_dtype: the type of elements in the list.
element_shape: a shape compatible with that of elements in the list.

func EncodeBase64 Uses

func EncodeBase64(scope *Scope, input tf.Output, optional ...EncodeBase64Attr) (output tf.Output)

Encode strings into web-safe base64 format.

Refer to the following article for more information on base64 format: en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the end so that the encoded string has a length that is a multiple of 4. See the Padding section of the link above.

Web-safe means that the encoder uses - and _ instead of + and /.

Arguments:

input: Strings to be encoded.

Returns Input strings encoded in base64.

func EncodeJpeg Uses

func EncodeJpeg(scope *Scope, image tf.Output, optional ...EncodeJpegAttr) (contents tf.Output)

JPEG-encode an image.

`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.

The attr `format` can be used to override the color format of the encoded output. Values can be:

* `''`: Use a default format based on the number of channels in the image.
* `grayscale`: Output a grayscale JPEG image. The `channels` dimension of `image` must be 1.
* `rgb`: Output an RGB JPEG image. The `channels` dimension of `image` must be 3.

If `format` is not specified or is the empty string, a default format is picked based on the number of channels in `image`:

* 1: Output a grayscale image.
* 3: Output an RGB image.

Arguments:

image: 3-D with shape `[height, width, channels]`.

Returns 0-D. JPEG-encoded image.

func EncodePng Uses

func EncodePng(scope *Scope, image tf.Output, optional ...EncodePngAttr) (contents tf.Output)

PNG-encode an image.

`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` where `channels` is:

* 1: for grayscale.
* 2: for grayscale + alpha.
* 3: for RGB.
* 4: for RGBA.

The ZLIB compression level, `compression`, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.

Arguments:

image: 3-D with shape `[height, width, channels]`.

Returns 0-D. PNG-encoded image.

func EncodeProto Uses

func EncodeProto(scope *Scope, sizes tf.Output, values []tf.Output, field_names []string, message_type string, optional ...EncodeProtoAttr) (bytes tf.Output)

The op serializes protobuf messages provided in the input tensors.

The types of the tensors in `values` must match the schema for the fields specified in `field_names`. All the tensors in `values` must have a common shape prefix, *batch_shape*.

The `sizes` tensor specifies repeat counts for each field. The repeat count (last dimension) of each tensor in `values` must be greater than or equal to the corresponding repeat count in `sizes`.

A `message_type` name must be provided to give context for the field names. The actual message descriptor can be looked up either in the linked-in descriptor pool or a filename provided by the caller using the `descriptor_source` attribute.

The `descriptor_source` attribute selects a source of protocol descriptors to consult when looking up `message_type`. This may be a filename containing a serialized `FileDescriptorSet` message, or the special value `local://`, in which case only descriptors linked into the code will be searched; the filename can be on any filesystem accessible to TensorFlow.

You can build a `descriptor_source` file using the `--descriptor_set_out` and `--include_imports` options to the protocol compiler `protoc`.

The `local://` database only covers descriptors linked into the code via C++ libraries, not Python imports. You can link in a proto descriptor by creating a cc_library target with alwayslink=1.

There are a few special cases in the value mapping:

Submessage and group fields must be pre-serialized as TensorFlow strings.

TensorFlow lacks support for unsigned int64s, so they must be represented as `tf.int64` with the same twos-complement bit pattern (the obvious way).

Unsigned int32 values can be represented exactly with `tf.int64`, or with sign wrapping if the input is of type `tf.int32`.

Arguments:

sizes: Tensor of int32 with shape `[batch_shape, len(field_names)]`.
values: List of tensors containing values for the corresponding field.
field_names: List of strings containing proto field names.
message_type: Name of the proto message type to decode.

Returns Tensor of serialized protos with shape `batch_shape`.

func EncodeWav Uses

func EncodeWav(scope *Scope, audio tf.Output, sample_rate tf.Output) (contents tf.Output)

Encode audio data using the WAV file format.

This operation will generate a string suitable to be saved out to create a .wav audio file. It will be encoded in the 16-bit PCM format. It takes in float values in the range -1.0f to 1.0f, and any values outside that range will be clamped to it.

`audio` is a 2-D float Tensor of shape `[length, channels]`. `sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100).

Arguments:

audio: 2-D with shape `[length, channels]`.
sample_rate: Scalar containing the sample frequency.

Returns 0-D. WAV-encoded file contents.
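
A sketch of a round trip through the WAV format (three mono samples, arbitrary values):

```go
s := NewScope()
audio := Const(s, [][]float32{{0.0}, {0.5}, {-0.5}}) // shape [length=3, channels=1]
rate := Const(s, int32(44100))
wav := EncodeWav(s, audio, rate) // 0-D string: .wav file contents
// Decoding recovers the (16-bit quantized) samples and the sample rate.
decoded, decodedRate := DecodeWav(s, wav)
_, _ = decoded, decodedRate
```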

func EnsureShape Uses

func EnsureShape(scope *Scope, input tf.Output, shape tf.Shape) (output tf.Output)

Ensures that the tensor's shape matches the expected shape.

Raises an error if the input tensor's shape does not match the specified shape. Returns the input tensor otherwise.

Arguments:

input: A tensor, whose shape is to be validated.
shape: The expected (possibly partially specified) shape of the input tensor.

Returns A tensor with the same shape and contents as the input tensor or value.

func Enter Uses

func Enter(scope *Scope, data tf.Output, frame_name string, optional ...EnterAttr) (output tf.Output)

Creates or finds a child frame, and makes `data` available to the child frame.

This op is used together with `Exit` to create loops in the graph. The unique `frame_name` is used by the `Executor` to identify frames. If `is_constant` is true, `output` is a constant in the child frame; otherwise it may be changed in the child frame. At most `parallel_iterations` iterations are run in parallel in the child frame.

Arguments:

data: The tensor to be made available to the child frame.
frame_name: The name of the child frame.

Returns The same tensor as `data`.

func Equal Uses

func Equal(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x == y) element-wise.

*NOTE*: `Equal` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Erf Uses

func Erf(scope *Scope, x tf.Output) (y tf.Output)

Computes the Gauss error function of `x` element-wise.

func Erfc Uses

func Erfc(scope *Scope, x tf.Output) (y tf.Output)

Computes the complementary error function of `x` element-wise.

func Exit Uses

func Exit(scope *Scope, data tf.Output) (output tf.Output)

Exits the current frame to its parent frame.

Exit makes its input `data` available to the parent frame.

Arguments:

data: The tensor to be made available to the parent frame.

Returns The same tensor as `data`.

func Exp Uses

func Exp(scope *Scope, x tf.Output) (y tf.Output)

Computes exponential of x element-wise. \\(y = e^x\\).

func ExpandDims Uses

func ExpandDims(scope *Scope, input tf.Output, axis tf.Output) (output tf.Output)

Inserts a dimension of 1 into a tensor's shape.

Given a tensor `input`, this operation inserts a dimension of 1 at the dimension index `axis` of `input`'s shape. The dimension index `axis` starts at zero; if you specify a negative number for `axis` it is counted backward from the end.

This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape `[height, width, channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`, which will make the shape `[1, height, width, channels]`.

Other examples:

```
# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]

# 't2' is a tensor of shape [2, 3, 5]
shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
```

This operation requires that:

`-1-input.dims() <= dim <= input.dims()`

This operation is related to `squeeze()`, which removes dimensions of size 1.

Arguments:

axis: 0-D (scalar). Specifies the dimension index at which to

expand the shape of `input`. Must be in the range `[-rank(input) - 1, rank(input)]`.

Returns Contains the same data as `input`, but its shape has an additional dimension of size 1 added.
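
The batch-of-one idiom from the paragraph above, as a Go sketch:

```go
s := NewScope()
image := Placeholder(s, tf.Float) // e.g. shape [height, width, channels]
batched := ExpandDims(s, image, Const(s, int32(0)))
// batched has shape [1, height, width, channels]: a batch of one image.
_ = batched
```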

func ExperimentalBytesProducedStatsDataset Uses

func ExperimentalBytesProducedStatsDataset(scope *Scope, input_dataset tf.Output, tag tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Records the bytes size of each element of `input_dataset` in a StatsAggregator.

func ExperimentalDatasetCardinality Uses

func ExperimentalDatasetCardinality(scope *Scope, input_dataset tf.Output) (cardinality tf.Output)

Returns the cardinality of `input_dataset`.

Arguments:

input_dataset: A variant tensor representing the dataset to return cardinality for.

Returns The cardinality of `input_dataset`. Named constants are used to represent infinite and unknown cardinality.

func ExperimentalDatasetToTFRecord Uses

func ExperimentalDatasetToTFRecord(scope *Scope, input_dataset tf.Output, filename tf.Output, compression_type tf.Output) (o *tf.Operation)

Writes the given dataset to the given file using the TFRecord format.

Arguments:

input_dataset: A variant tensor representing the dataset to write.
filename: A scalar string tensor representing the filename to use.
compression_type: A scalar string tensor containing either (i) the empty string (no

compression), (ii) "ZLIB", or (iii) "GZIP".

Returns the created operation.

func ExperimentalDenseToSparseBatchDataset Uses

func ExperimentalDenseToSparseBatchDataset(scope *Scope, input_dataset tf.Output, batch_size tf.Output, row_shape tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that batches input elements into a SparseTensor.

Arguments:

input_dataset: A handle to an input dataset. Must have a single component.
batch_size: A scalar representing the number of elements to accumulate in a

batch.

row_shape: A vector representing the dense shape of each row in the produced

SparseTensor. The shape may be partially specified, using `-1` to indicate that a particular dimension should use the maximum size of all batch elements.

func ExperimentalDirectedInterleaveDataset Uses

func ExperimentalDirectedInterleaveDataset(scope *Scope, selector_input_dataset tf.Output, data_input_datasets []tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

A substitute for `InterleaveDataset` on a fixed list of `N` datasets.

Arguments:

selector_input_dataset: A dataset of scalar `DT_INT64` elements that determines which of the

`N` data inputs should produce the next output element.

data_input_datasets: `N` datasets with the same type that will be interleaved according to

the values of `selector_input_dataset`.

func ExperimentalIgnoreErrorsDataset Uses

func ExperimentalIgnoreErrorsDataset(scope *Scope, input_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that contains the elements of `input_dataset` ignoring errors.

func ExperimentalIteratorGetDevice Uses

func ExperimentalIteratorGetDevice(scope *Scope, resource tf.Output) (device tf.Output)

Returns the name of the device on which `resource` has been placed.

func ExperimentalLatencyStatsDataset Uses

func ExperimentalLatencyStatsDataset(scope *Scope, input_dataset tf.Output, tag tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Records the latency of producing `input_dataset` elements in a StatsAggregator.

func ExperimentalMaxIntraOpParallelismDataset Uses

func ExperimentalMaxIntraOpParallelismDataset(scope *Scope, input_dataset tf.Output, max_intra_op_parallelism tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that overrides the maximum intra-op parallelism.

Arguments:

max_intra_op_parallelism: Identifies the maximum intra-op parallelism to use.

func ExperimentalParseExampleDataset Uses

func ExperimentalParseExampleDataset(scope *Scope, input_dataset tf.Output, num_parallel_calls tf.Output, dense_defaults []tf.Output, sparse_keys []string, dense_keys []string, sparse_types []tf.DataType, dense_shapes []tf.Shape, output_types []tf.DataType, output_shapes []tf.Shape, optional ...ExperimentalParseExampleDatasetAttr) (handle tf.Output)

Transforms `input_dataset` containing `Example` protos as vectors of DT_STRING into a dataset of `Tensor` or `SparseTensor` objects representing the parsed features.

Arguments:

dense_defaults: A dict mapping string keys to `Tensor`s. The keys of the dict must match the dense_keys of the feature.
sparse_keys: A list of string keys in the examples' features. The results for these keys will be returned as `SparseTensor` objects.
dense_keys: A list of Ndense string Tensors (scalars). The keys expected in the Examples' features associated with dense values.
sparse_types: A list of `DTypes` of the same length as `sparse_keys`. Only `tf.float32` (`FloatList`), `tf.int64` (`Int64List`), and `tf.string` (`BytesList`) are supported.
dense_shapes: List of tuples with the same length as `dense_keys`. The shape of the data for each dense feature referenced by `dense_keys`. Required for any input tensors identified by `dense_keys`. Must be either fully defined, or may contain an unknown first dimension. An unknown first dimension means the feature is treated as having a variable number of blocks, and the output shape along this dimension is considered unknown at graph build time. Padding is applied for minibatch elements smaller than the maximum number of blocks for the given feature along this dimension.
output_types: The type list for the return values.
output_shapes: The list of shapes being produced.

func ExperimentalPrivateThreadPoolDataset Uses

func ExperimentalPrivateThreadPoolDataset(scope *Scope, input_dataset tf.Output, num_threads tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that uses a custom thread pool to compute `input_dataset`.

Arguments:

num_threads: Identifies the number of threads to use for the private threadpool.

func ExperimentalRandomDataset Uses

func ExperimentalRandomDataset(scope *Scope, seed tf.Output, seed2 tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a Dataset that returns pseudorandom numbers.

Arguments:

seed: A scalar seed for the random number generator. If either seed or seed2 is set to be non-zero, the random number generator is seeded by the given seed. Otherwise, a random seed is used.
seed2: A second scalar seed to avoid seed collision.

func ExperimentalSlidingWindowDataset Uses

func ExperimentalSlidingWindowDataset(scope *Scope, input_dataset tf.Output, window_size tf.Output, window_shift tf.Output, window_stride tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that passes a sliding window over `input_dataset`.

Arguments:

window_size: A scalar representing the number of elements in the sliding window.
window_shift: A scalar representing the steps moving the sliding window forward in one iteration. It must be positive.
window_stride: A scalar representing the stride of the input elements of the sliding window. It must be positive.

func ExperimentalSqlDataset Uses

func ExperimentalSqlDataset(scope *Scope, driver_name tf.Output, data_source_name tf.Output, query tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that executes a SQL query and emits rows of the result set.

Arguments:

driver_name: The database type. Currently, the only supported type is 'sqlite'.
data_source_name: A connection string to connect to the database.
query: A SQL query to execute.

func ExperimentalStatsAggregatorHandle Uses

func ExperimentalStatsAggregatorHandle(scope *Scope, optional ...ExperimentalStatsAggregatorHandleAttr) (handle tf.Output)

Creates a statistics manager resource.

func ExperimentalStatsAggregatorSummary Uses

func ExperimentalStatsAggregatorSummary(scope *Scope, iterator tf.Output) (summary tf.Output)

Produces a summary of any statistics recorded by the given statistics manager.

func ExperimentalThreadPoolDataset Uses

func ExperimentalThreadPoolDataset(scope *Scope, input_dataset tf.Output, thread_pool tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that uses a custom thread pool to compute `input_dataset`.

Arguments:

thread_pool: A resource produced by the ThreadPoolHandle op.

func ExperimentalThreadPoolHandle Uses

func ExperimentalThreadPoolHandle(scope *Scope, num_threads int64, display_name string, optional ...ExperimentalThreadPoolHandleAttr) (handle tf.Output)

Creates a custom thread pool resource for use by ExperimentalThreadPoolDataset ops.

Arguments:

num_threads: The number of threads in the thread pool.
display_name: A human-readable name for the threads that may be visible in some visualizations.

Returns A resource that can be consumed by one or more ExperimentalThreadPoolDataset ops.

func ExperimentalUnbatchDataset Uses

func ExperimentalUnbatchDataset(scope *Scope, input_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

A dataset that splits the elements of its input into multiple elements.

func ExperimentalUniqueDataset Uses

func ExperimentalUniqueDataset(scope *Scope, input_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that contains the unique elements of `input_dataset`.

func Expm1 Uses

func Expm1(scope *Scope, x tf.Output) (y tf.Output)

Computes exponential of x - 1 element-wise.

I.e., \\(y = (\exp x) - 1\\).
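As a concrete illustration of how these element-wise math wrappers are used from Go, here is a minimal sketch (assuming the standard `tf "github.com/tensorflow/tensorflow/tensorflow/go"` and `fmt` imports alongside this package):

```go
// Build and run a graph computing exp(x) - 1 for a constant vector.
s := op.NewScope()
x := op.Const(s, []float32{0, 1, 2})
y := op.Expm1(s, x)
graph, err := s.Finalize()
if err != nil {
	panic(err)
}
sess, err := tf.NewSession(graph, nil)
if err != nil {
	panic(err)
}
defer sess.Close()
out, err := sess.Run(nil, []tf.Output{y}, nil)
if err != nil {
	panic(err)
}
fmt.Println(out[0].Value()) // approximately [0 1.7182817 6.389056]
```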

func ExtractGlimpse Uses

func ExtractGlimpse(scope *Scope, input tf.Output, size tf.Output, offsets tf.Output, optional ...ExtractGlimpseAttr) (glimpse tf.Output)

Extracts a glimpse from the input tensor.

Returns a set of windows called glimpses extracted at location `offsets` from the input tensor. If a window only partially overlaps the input, the non-overlapping areas will be filled with random noise.

The result is a 4-D tensor of shape `[batch_size, glimpse_height, glimpse_width, channels]`. The channels and batch dimensions are the same as that of the input tensor. The height and width of the output windows are specified in the `size` parameter.

The arguments `normalized` and `centered` control how the windows are built:

* If the coordinates are normalized but not centered, 0.0 and 1.0 correspond to the minimum and maximum of each height and width dimension.
* If the coordinates are both normalized and centered, they range from -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper left corner, the lower right corner is located at (1.0, 1.0) and the center is at (0, 0).
* If the coordinates are not normalized, they are interpreted as numbers of pixels.

Arguments:

input: A 4-D float tensor of shape `[batch_size, height, width, channels]`.
size: A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, followed by the glimpse width.
offsets: A 2-D integer tensor of shape `[batch_size, 2]` containing the y, x locations of the center of each window.

Returns A tensor representing the glimpses `[batch_size, glimpse_height, glimpse_width, channels]`.

func ExtractImagePatches Uses

func ExtractImagePatches(scope *Scope, images tf.Output, ksizes []int64, strides []int64, rates []int64, padding string) (patches tf.Output)

Extract `patches` from `images` and put them in the "depth" output dimension.

Arguments:

images: 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
ksizes: The size of the sliding window for each dimension of `images`.
strides: 1-D of length 4. How far the centers of two consecutive patches are in the images. Must be: `[1, stride_rows, stride_cols, 1]`.
rates: 1-D of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling them spatially by a factor of `rates`. This is equivalent to `rate` in dilated (a.k.a. Atrous) convolutions.
padding: The type of padding algorithm to use.

We specify the size-related attributes as:

```python
ksizes = [1, ksize_rows, ksize_cols, 1]
strides = [1, strides_rows, strides_cols, 1]
rates = [1, rates_rows, rates_cols, 1]
```

Returns 4-D Tensor with shape `[batch, out_rows, out_cols, ksize_rows * ksize_cols * depth]` containing image patches with size `ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension. Note `out_rows` and `out_cols` are the dimensions of the output patches.
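To make the shape arithmetic concrete, here is a minimal sketch (parameter values are illustrative, not defaults) that extracts every 3x3 patch from a single-channel image batch:

```go
// With a [1, 5, 5, 1] input and VALID padding, out_rows = out_cols =
// 5 - 3 + 1 = 3 and the patch depth is 3*3*1 = 9, so `patches` has
// shape [1, 3, 3, 9].
s := op.NewScope()
images := op.Placeholder(s, tf.Float) // expected shape [batch, rows, cols, 1]
patches := op.ExtractImagePatches(s, images,
	[]int64{1, 3, 3, 1}, // ksizes
	[]int64{1, 1, 1, 1}, // strides
	[]int64{1, 1, 1, 1}, // rates (no dilation)
	"VALID")
_ = patches
```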

func ExtractJpegShape Uses

func ExtractJpegShape(scope *Scope, contents tf.Output, optional ...ExtractJpegShapeAttr) (image_shape tf.Output)

Extract the shape information of a JPEG-encoded image.

This op only parses the image header, so it is much faster than DecodeJpeg.

Arguments:

contents: 0-D. The JPEG-encoded image.

Returns 1-D. The image shape with format [height, width, channels].
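A minimal sketch of the typical use (the file name is hypothetical; assumes the standard `tf` and `fmt` imports):

```go
// Fetch [height, width, channels] of a JPEG without decoding its pixels.
s := op.NewScope()
contents := op.ReadFile(s, op.Const(s, "input.jpg"))
shape := op.ExtractJpegShape(s, contents)
graph, err := s.Finalize()
if err != nil {
	panic(err)
}
sess, err := tf.NewSession(graph, nil)
if err != nil {
	panic(err)
}
defer sess.Close()
out, err := sess.Run(nil, []tf.Output{shape}, nil)
if err != nil {
	panic(err)
}
fmt.Println(out[0].Value()) // e.g. [480 640 3]
```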

func ExtractVolumePatches Uses

func ExtractVolumePatches(scope *Scope, input tf.Output, ksizes []int64, strides []int64, padding string) (patches tf.Output)

Extract `patches` from `input` and put them in the "depth" output dimension. 3D extension of `extract_image_patches`.

Arguments:

input: 5-D Tensor with shape `[batch, in_planes, in_rows, in_cols, depth]`.
ksizes: The size of the sliding window for each dimension of `input`.
strides: 1-D of length 5. How far the centers of two consecutive patches are in `input`. Must be: `[1, stride_planes, stride_rows, stride_cols, 1]`.
padding: The type of padding algorithm to use.

We specify the size-related attributes as:

```python
ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]
strides = [1, stride_planes, strides_rows, strides_cols, 1]
```

Returns 5-D Tensor with shape `[batch, out_planes, out_rows, out_cols, ksize_planes * ksize_rows * ksize_cols * depth]` containing patches with size `ksize_planes x ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension. Note `out_planes`, `out_rows` and `out_cols` are the dimensions of the output patches.

func FFT Uses

func FFT(scope *Scope, input tf.Output) (output tf.Output)

Fast Fourier transform.

Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of `input`.

Arguments:

input: A complex tensor.

Returns A complex tensor of the same shape as `input`. The inner-most dimension of `input` is replaced with its 1D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.fft @end_compatibility
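One way to assemble a complex input from Go is to pair real and imaginary float32 tensors with `Complex`; a minimal sketch (input values are illustrative):

```go
// The 1D FFT of a unit impulse is constant: FFT([1,0,0,0]) = [1,1,1,1].
s := op.NewScope()
re := op.Const(s, []float32{1, 0, 0, 0})
im := op.Const(s, []float32{0, 0, 0, 0})
x := op.Complex(s, re, im) // complex64 tensor
y := op.FFT(s, x)
_ = y
```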

func FFT2D Uses

func FFT2D(scope *Scope, input tf.Output) (output tf.Output)

2D fast Fourier transform.

Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of `input`.

Arguments:

input: A complex tensor.

Returns A complex tensor of the same shape as `input`. The inner-most 2 dimensions of `input` are replaced with their 2D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.fft2 @end_compatibility

func FFT3D Uses

func FFT3D(scope *Scope, input tf.Output) (output tf.Output)

3D fast Fourier transform.

Computes the 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most 3 dimensions of `input` are replaced with their 3D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.fftn with 3 dimensions. @end_compatibility

func FIFOQueueV2 Uses

func FIFOQueueV2(scope *Scope, component_types []tf.DataType, optional ...FIFOQueueV2Attr) (handle tf.Output)

A queue that produces elements in first-in first-out order.

Arguments:

component_types: The type of each component in a value.

Returns The handle to the queue.

func Fact Uses

func Fact(scope *Scope) (fact tf.Output)

Output a fact about factorials.

func FakeParam Uses

func FakeParam(scope *Scope, dtype tf.DataType, shape tf.Shape) (output tf.Output)

This op is used as a placeholder in If branch functions. It doesn't provide a valid output when run, so it must either be removed (e.g. replaced with a function input) or guaranteed not to be used (e.g. if mirroring an intermediate output needed for the gradient computation of the other branch).

Arguments:

dtype: The type of the output.
shape: The purported shape of the output. This is only used for shape inference; the output will not necessarily have this shape. Can be a partial shape.

Returns "Fake" output value. This should not be consumed by another op.

func FakeQuantWithMinMaxArgs Uses

func FakeQuantWithMinMaxArgs(scope *Scope, inputs tf.Output, optional ...FakeQuantWithMinMaxArgsAttr) (outputs tf.Output)

Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.

Attributes `[min; max]` define the clamping range for the `inputs` data. `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval. `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.

Quantization is called fake since the output is still in floating point.
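A minimal sketch of calling the wrapper with explicit attributes (the clamping range and bit width here are illustrative, not the defaults):

```go
// Clamp to [-1, 1], quantize to one of 2^8 levels, then de-quantize.
s := op.NewScope()
x := op.Const(s, []float32{-2.0, -0.5, 0.3, 1.5})
q := op.FakeQuantWithMinMaxArgs(s, x,
	op.FakeQuantWithMinMaxArgsMin(-1.0),
	op.FakeQuantWithMinMaxArgsMax(1.0),
	op.FakeQuantWithMinMaxArgsNumBits(8))
// -2.0 and 1.5 fall outside the clamping range and come back at the
// range endpoints; in-range values come back rounded to the nearest
// quantization level, still as float32.
_ = q
```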

func FakeQuantWithMinMaxArgsGradient Uses

func FakeQuantWithMinMaxArgsGradient(scope *Scope, gradients tf.Output, inputs tf.Output, optional ...FakeQuantWithMinMaxArgsGradientAttr) (backprops tf.Output)

Compute gradients for a FakeQuantWithMinMaxArgs operation.

Arguments:

gradients: Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.
inputs: Values passed as inputs to the FakeQuantWithMinMaxArgs operation.

Returns Backpropagated gradients below the FakeQuantWithMinMaxArgs operation: `gradients * (inputs >= min && inputs <= max)`.

func FakeQuantWithMinMaxVars Uses

func FakeQuantWithMinMaxVars(scope *Scope, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsAttr) (outputs tf.Output)

Fake-quantize the 'inputs' tensor of type float via global float scalars `min` and `max` to 'outputs' tensor of same shape as `inputs`.

`[min; max]` define the clamping range for the `inputs` data. `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval. `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.

This operation has a gradient and thus allows for training `min` and `max` values.

func FakeQuantWithMinMaxVarsGradient Uses

func FakeQuantWithMinMaxVarsGradient(scope *Scope, gradients tf.Output, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsGradientAttr) (backprops_wrt_input tf.Output, backprop_wrt_min tf.Output, backprop_wrt_max tf.Output)

Compute gradients for a FakeQuantWithMinMaxVars operation.

Arguments:

gradients: Backpropagated gradients above the FakeQuantWithMinMaxVars operation.
inputs: Values passed as inputs to the FakeQuantWithMinMaxVars operation.
min, max: Quantization interval, scalar floats.

Returns:

* Backpropagated gradients w.r.t. inputs: `gradients * (inputs >= min && inputs <= max)`.
* Backpropagated gradients w.r.t. min parameter: `sum(gradients * (inputs < min))`.
* Backpropagated gradients w.r.t. max parameter: `sum(gradients * (inputs > max))`.

func FakeQuantWithMinMaxVarsPerChannel Uses

func FakeQuantWithMinMaxVarsPerChannel(scope *Scope, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsPerChannelAttr) (outputs tf.Output)

Fake-quantize the 'inputs' tensor of type float and one of the shapes: `[d]`, `[b, d]`, `[b, h, w, d]` via per-channel floats `min` and `max` of shape `[d]` to 'outputs' tensor of same shape as `inputs`.

`[min; max]` define the clamping range for the `inputs` data. `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval. `num_bits` is the bitwidth of the quantization; between 2 and 16, inclusive.

This operation has a gradient and thus allows for training `min` and `max` values.

func FakeQuantWithMinMaxVarsPerChannelGradient Uses

func FakeQuantWithMinMaxVarsPerChannelGradient(scope *Scope, gradients tf.Output, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsPerChannelGradientAttr) (backprops_wrt_input tf.Output, backprop_wrt_min tf.Output, backprop_wrt_max tf.Output)

Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.

Arguments:

gradients: Backpropagated gradients above the FakeQuantWithMinMaxVars operation, shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.
inputs: Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape same as `gradients`.
min, max: Quantization interval, floats of shape `[d]`.

Returns:

* Backpropagated gradients w.r.t. inputs, shape same as `inputs`: `gradients * (inputs >= min && inputs <= max)`.
* Backpropagated gradients w.r.t. min parameter, shape `[d]`: `sum_per_d(gradients * (inputs < min))`.
* Backpropagated gradients w.r.t. max parameter, shape `[d]`: `sum_per_d(gradients * (inputs > max))`.

func Fill Uses

func Fill(scope *Scope, dims tf.Output, value tf.Output) (output tf.Output)

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape `dims` and fills it with `value`.

For example:

```
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9]
                     [9, 9, 9]]
```

`tf.fill` differs from `tf.constant` in a few ways:

* `tf.fill` only supports scalar contents, whereas `tf.constant` supports Tensor values.
* `tf.fill` creates an Op in the computation graph that constructs the actual Tensor value at runtime. This is in contrast to `tf.constant` which embeds the entire Tensor into the graph with a `Const` node.
* Because `tf.fill` evaluates at graph runtime, it supports dynamic shapes based on other runtime Tensors, unlike `tf.constant`.

Arguments:

dims: 1-D. Represents the shape of the output tensor.
value: 0-D (scalar). Value to fill the returned tensor.

@compatibility(numpy) Equivalent to np.full @end_compatibility
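A minimal Go sketch of the dynamic-shape point above, feeding `dims` at run time (assumes the standard `tf` and `fmt` imports):

```go
s := op.NewScope()
dims := op.Placeholder(s, tf.Int32) // output shape chosen at run time
filled := op.Fill(s, dims, op.Const(s, int32(9)))
graph, err := s.Finalize()
if err != nil {
	panic(err)
}
sess, err := tf.NewSession(graph, nil)
if err != nil {
	panic(err)
}
defer sess.Close()
d, err := tf.NewTensor([]int32{2, 3})
if err != nil {
	panic(err)
}
out, err := sess.Run(map[tf.Output]*tf.Tensor{dims: d}, []tf.Output{filled}, nil)
if err != nil {
	panic(err)
}
fmt.Println(out[0].Value()) // [[9 9 9] [9 9 9]]
```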

func FilterByLastComponentDataset Uses

func FilterByLastComponentDataset(scope *Scope, input_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (output tf.Output)

Creates a dataset containing the elements of the first component of `input_dataset` whose last component is true.

func FixedLengthRecordDataset Uses

func FixedLengthRecordDataset(scope *Scope, filenames tf.Output, header_bytes tf.Output, record_bytes tf.Output, footer_bytes tf.Output, buffer_size tf.Output) (handle tf.Output)

Creates a dataset that emits the records from one or more binary files.

Arguments:

filenames: A scalar or a vector containing the name(s) of the file(s) to be read.
header_bytes: A scalar representing the number of bytes to skip at the beginning of a file.
record_bytes: A scalar representing the number of bytes in each record.
footer_bytes: A scalar representing the number of bytes to skip at the end of a file.
buffer_size: A scalar representing the number of bytes to buffer. Must be > 0.

func FixedLengthRecordReaderV2 Uses

func FixedLengthRecordReaderV2(scope *Scope, record_bytes int64, optional ...FixedLengthRecordReaderV2Attr) (reader_handle tf.Output)

A Reader that outputs fixed-length records from a file.

Arguments:

record_bytes: Number of bytes in the record.

Returns The handle to reference the Reader.

func FixedUnigramCandidateSampler Uses

func FixedUnigramCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, range_max int64, optional ...FixedUnigramCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a learned unigram distribution.

A unigram sampler could use a fixed unigram distribution read from a file or passed in as an in-memory array instead of building up the distribution from data on the fly. There is also an option to skew the distribution by applying a distortion power to the weights.

The vocabulary file should be in CSV-like format, with the last field being the weight associated with the word.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
num_true: Number of true labels per context.
num_sampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
range_max: The sampler will sample integers from the interval [0, range_max).

Returns:

* A vector of length num_sampled, in which each element is the ID of a sampled candidate.
* A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
* A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.

func Floor Uses

func Floor(scope *Scope, x tf.Output) (y tf.Output)

Returns element-wise largest integer not greater than x.

func FloorDiv Uses

func FloorDiv(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x // y element-wise.

*NOTE*: `FloorDiv` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func FloorMod Uses

func FloorMod(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns element-wise remainder of division. When `x < 0` xor `y < 0` is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.

*NOTE*: `FloorMod` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
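A worked example of the identity above (values are illustrative): with x = -7 and y = 3, `floor(-7 / 3) = -3`, so `FloorMod(-7, 3) = -7 - (-3) * 3 = 2`, matching Python's `-7 % 3`; a truncating (C-style) remainder would instead give -1. In Go:

```go
s := op.NewScope()
z := op.FloorMod(s, op.Const(s, int32(-7)), op.Const(s, int32(3)))
// Running the graph yields 2.
_ = z
```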

func FractionalAvgPool Uses

func FractionalAvgPool(scope *Scope, value tf.Output, pooling_ratio []float32, optional ...FractionalAvgPoolAttr) (output tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output)

Performs fractional average pooling on the input.

Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.

Arguments:

value: 4-D with shape `[batch, height, width, channels]`.
pooling_ratio: Pooling ratio for each dimension of `value`, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratios on height and width dimensions respectively.

Returns:

* Output tensor after fractional avg pooling.
* Row pooling sequence, needed to calculate gradient.
* Column pooling sequence, needed to calculate gradient.

func FractionalAvgPoolGrad Uses

func FractionalAvgPoolGrad(scope *Scope, orig_input_tensor_shape tf.Output, out_backprop tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output, optional ...FractionalAvgPoolGradAttr) (output tf.Output)

Computes gradient of the FractionalAvgPool function.

Unlike FractionalMaxPoolGrad, we don't need to find arg_max for FractionalAvgPoolGrad; we just need to evenly back-propagate each element of out_backprop to the indices that form the same pooling cell. Therefore, we only need to know the shape of the original input tensor, instead of the whole tensor.

Arguments:

orig_input_tensor_shape: Original input tensor shape for `fractional_avg_pool`.
out_backprop: 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the output of `fractional_avg_pool`.
row_pooling_sequence: Row pooling sequence, forms pooling region with col_pooling_sequence.
col_pooling_sequence: Column pooling sequence, forms pooling region with row_pooling_sequence.

Returns 4-D. Gradients w.r.t. the input of `fractional_avg_pool`.

func FractionalMaxPool Uses

func FractionalMaxPool(scope *Scope, value tf.Output, pooling_ratio []float32, optional ...FractionalMaxPoolAttr) (output tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output)

Performs fractional max pooling on the input.

Fractional max pooling is slightly different from regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), reducing the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer.

The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.

First we define the following:

1. input_row_length : the number of rows from the input set
2. output_row_length : which will be smaller than the input
3. alpha = input_row_length / output_row_length : our reduction ratio
4. K = floor(alpha)
5. row_pooling_sequence : this is the result list of pool boundary rows

Then, row_pooling_sequence should satisfy:

1. a[0] = 0 : the first value of the sequence is 0
2. a[end] = input_row_length : the last value of the sequence is the size
3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 in size
4. length(row_pooling_sequence) = output_row_length+1

For more details on fractional max pooling, see this paper: [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
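For instance (an illustrative example, not from the paper): with input_row_length = 10 and output_row_length = 7, alpha = 10/7 ≈ 1.43 and K = 1, so [0, 1, 3, 4, 6, 7, 9, 10] is a valid row_pooling_sequence: it starts at 0, ends at 10, every interval is 1 or 2, and its length is 7 + 1 = 8.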

Arguments:

value: 4-D with shape `[batch, height, width, channels]`.
pooling_ratio: Pooling ratio for each dimension of `value`, currently only supports row and col dimension and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on batch and channels dimensions. 1.44 and 1.73 are pooling ratios on height and width dimensions respectively.

Returns:

* Output tensor after fractional max pooling.
* Row pooling sequence, needed to calculate gradient.
* Column pooling sequence, needed to calculate gradient.

func FractionalMaxPoolGrad Uses

func FractionalMaxPoolGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, out_backprop tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output, optional ...FractionalMaxPoolGradAttr) (output tf.Output)

Computes gradient of the FractionalMaxPool function.

Arguments:

orig_input: Original input for `fractional_max_pool`.
orig_output: Original output for `fractional_max_pool`.
out_backprop: 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the output of `fractional_max_pool`.
row_pooling_sequence: Row pooling sequence, forms pooling region with col_pooling_sequence.
col_pooling_sequence: Column pooling sequence, forms pooling region with row_pooling_sequence.

Returns 4-D. Gradients w.r.t. the input of `fractional_max_pool`.

func FusedBatchNorm Uses

func FusedBatchNorm(scope *Scope, x tf.Output, scale tf.Output, offset tf.Output, mean tf.Output, variance tf.Output, optional ...FusedBatchNormAttr) (y tf.Output, batch_mean tf.Output, batch_variance tf.Output, reserve_space_1 tf.Output, reserve_space_2 tf.Output)

Batch normalization.

Note that the size of 4D Tensors is defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Arguments:

x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
offset: A 1D Tensor for offset, to shift to the normalized x.
mean: A 1D Tensor for population mean. Used for inference only; must be empty for training.
variance: A 1D Tensor for population variance. Used for inference only; must be empty for training.

Returns:

* A 4D Tensor for output data.
* A 1D Tensor for the computed batch mean, to be used by TensorFlow to compute the running mean.
* A 1D Tensor for the computed batch variance, to be used by TensorFlow to compute the running variance.
* A 1D Tensor for the computed batch mean, to be reused in the gradient computation.
* A 1D Tensor for the computed batch variance (inverted variance in the cuDNN case), to be reused in the gradient computation.

func FusedBatchNormGrad Uses

func FusedBatchNormGrad(scope *Scope, y_backprop tf.Output, x tf.Output, scale tf.Output, reserve_space_1 tf.Output, reserve_space_2 tf.Output, optional ...FusedBatchNormGradAttr) (x_backprop tf.Output, scale_backprop tf.Output, offset_backprop tf.Output, reserve_space_3 tf.Output, reserve_space_4 tf.Output)

Gradient for batch normalization.

Note that the size of 4D Tensors is defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Arguments:

y_backprop: A 4D Tensor for the gradient with respect to y.
x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
reserve_space_1: When is_training is True, a 1D Tensor for the computed batch mean to be reused in gradient computation. When is_training is False, a 1D Tensor for the population mean to be reused in both 1st and 2nd order gradient computation.
reserve_space_2: When is_training is True, a 1D Tensor for the computed batch variance (inverted variance in the cuDNN case) to be reused in gradient computation. When is_training is False, a 1D Tensor for the population variance to be reused in both 1st and 2nd order gradient computation.

Returns:

* A 4D Tensor for the gradient with respect to x.
* A 1D Tensor for the gradient with respect to scale.
* A 1D Tensor for the gradient with respect to offset.
* Unused placeholder to match the mean input in FusedBatchNorm.
* Unused placeholder to match the variance input in FusedBatchNorm.

func FusedBatchNormGradV2 Uses

func FusedBatchNormGradV2(scope *Scope, y_backprop tf.Output, x tf.Output, scale tf.Output, reserve_space_1 tf.Output, reserve_space_2 tf.Output, optional ...FusedBatchNormGradV2Attr) (x_backprop tf.Output, scale_backprop tf.Output, offset_backprop tf.Output, reserve_space_3 tf.Output, reserve_space_4 tf.Output)

Gradient for batch normalization.

Note that the size of 4D Tensors is defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Arguments:

y_backprop: A 4D Tensor for the gradient with respect to y.
x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
reserve_space_1: When is_training is True, a 1D Tensor for the computed batch mean to be reused in gradient computation. When is_training is False, a 1D Tensor for the population mean to be reused in both 1st and 2nd order gradient computation.
reserve_space_2: When is_training is True, a 1D Tensor for the computed batch variance (inverted variance in the cuDNN case) to be reused in gradient computation. When is_training is False, a 1D Tensor for the population variance to be reused in both 1st and 2nd order gradient computation.

Returns:

* A 4D Tensor for the gradient with respect to x.
* A 1D Tensor for the gradient with respect to scale.
* A 1D Tensor for the gradient with respect to offset.
* Unused placeholder to match the mean input in FusedBatchNorm.
* Unused placeholder to match the variance input in FusedBatchNorm.

func FusedBatchNormV2 Uses

func FusedBatchNormV2(scope *Scope, x tf.Output, scale tf.Output, offset tf.Output, mean tf.Output, variance tf.Output, optional ...FusedBatchNormV2Attr) (y tf.Output, batch_mean tf.Output, batch_variance tf.Output, reserve_space_1 tf.Output, reserve_space_2 tf.Output)

Batch normalization.

Note that the size of 4D Tensors is defined by either "NHWC" or "NCHW". The size of 1D Tensors matches the dimension C of the 4D Tensors.

Arguments:

x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
offset: A 1D Tensor for offset, to shift to the normalized x.
mean: A 1D Tensor for population mean. Used for inference only; must be empty for training.
variance: A 1D Tensor for population variance. Used for inference only; must be empty for training.

Returns:

* A 4D Tensor for output data.
* A 1D Tensor for the computed batch mean, to be used by TensorFlow to compute the running mean.
* A 1D Tensor for the computed batch variance, to be used by TensorFlow to compute the running variance.
* A 1D Tensor for the computed batch mean, to be reused in the gradient computation.
* A 1D Tensor for the computed batch variance (inverted variance in the cuDNN case), to be reused in the gradient computation.

func FusedPadConv2D Uses

func FusedPadConv2D(scope *Scope, input tf.Output, paddings tf.Output, filter tf.Output, mode string, strides []int64, padding string) (output tf.Output)

Performs a padding as a preprocess during a convolution.

Similar to FusedResizeAndPadConv2d, this op allows for an optimized implementation where the spatial padding transformation stage is fused with the im2col lookup, but in this case without the bilinear filtering required for resizing. Fusing the padding prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn't supported by this op, and 'NHWC' order is used instead. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, in_channels]`.
paddings: A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of `input`.
filter: 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of `input`. Must be in the same order as the dimension specified with format.
padding: The type of padding algorithm to use.

func FusedResizeAndPadConv2D Uses

func FusedResizeAndPadConv2D(scope *Scope, input tf.Output, size tf.Output, paddings tf.Output, filter tf.Output, mode string, strides []int64, padding string, optional ...FusedResizeAndPadConv2DAttr) (output tf.Output)

Performs a resize and padding as a preprocess during a convolution.

It's often possible to do spatial transformations more efficiently as part of the packing stage of a convolution, so this op allows for an optimized implementation where these stages are fused together. This prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn't supported by this op, and defaults to 'NHWC' order. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, in_channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
paddings: A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of `input`.
filter: 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of `input`. Must be in the same order as the dimension specified with format.
padding: The type of padding algorithm to use.

func Gather Uses

func Gather(scope *Scope, params tf.Output, indices tf.Output, optional ...GatherAttr) (output tf.Output)

Gather slices from `params` according to `indices`.

`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape `indices.shape + params.shape[1:]` where:

```python

# Scalar indices
output[:, ..., :] = params[indices, :, ... :]

# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]

# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]

```

If `indices` is a permutation and `len(indices) == params.shape[0]` then this operation will permute `params` accordingly.

`validate_indices`: DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.

(Illustration: https://www.tensorflow.org/images/Gather.png)

func GatherNd Uses

func GatherNd(scope *Scope, params tf.Output, indices tf.Output) (output tf.Output)

Gather slices from `params` into a Tensor with shape specified by `indices`.

`indices` is a K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into `params`, where each element defines a slice of `params`:

output[\\(i_0, ..., i_{K-2}\\)] = params[indices[\\(i_0, ..., i_{K-2}\\)]]

Whereas in `tf.gather` `indices` defines slices into the first dimension of `params`, in `tf.gather_nd`, `indices` defines slices into the first `N` dimensions of `params`, where `N = indices.shape[-1]`.

The last dimension of `indices` can be at most the rank of `params`:

indices.shape[-1] <= params.rank

The last dimension of `indices` corresponds to elements (if `indices.shape[-1] == params.rank`) or slices (if `indices.shape[-1] < params.rank`) along dimension `indices.shape[-1]` of `params`. The output tensor has shape

indices.shape[:-1] + params.shape[indices.shape[-1]:]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.

Some examples below.

Simple indexing into a matrix:

```python

indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']

```

Slice indexing into a matrix:

```python

indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]

```

Indexing into a 3-tensor:

```python

indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]

indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]

indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']

```

Batched indexing into a matrix:

```python

indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]

```

Batched slice indexing into a matrix:

```python

indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]

```

Batched indexing into a 3-tensor:

```python

indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]],
          [[['a0', 'b0'], ['c0', 'd0']]]]

indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']],
          [['a0', 'b0'], ['c1', 'd1']]]

indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]

```

See also `tf.gather` and `tf.batch_gather`.

Arguments:

params: The tensor from which to gather values.
indices: Index tensor.

Returns Values from `params` gathered from indices given by `indices`, with shape `indices.shape[:-1] + params.shape[indices.shape[-1]:]`.

func GatherV2 Uses

func GatherV2(scope *Scope, params tf.Output, indices tf.Output, axis tf.Output) (output tf.Output)

Gather slices from `params` axis `axis` according to `indices`.

`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape `params.shape[:axis] + indices.shape + params.shape[axis + 1:]` where:

```python

# Scalar indices (output is rank(params) - 1).
output[a_0, ..., a_n, b_0, ..., b_n] =
  params[a_0, ..., a_n, indices, b_0, ..., b_n]

# Vector indices (output is rank(params)).
output[a_0, ..., a_n, i, b_0, ..., b_n] =
  params[a_0, ..., a_n, indices[i], b_0, ..., b_n]

# Higher rank indices (output is rank(params) + rank(indices) - 1).
output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] =
  params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]

```

(Illustration: https://www.tensorflow.org/images/Gather.png)

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.

See also `tf.batch_gather` and `tf.gather_nd`.

Arguments:

params: The tensor from which to gather values. Must be at least rank `axis + 1`.
indices: Index tensor. Must be in range `[0, params.shape[axis])`.
axis: The axis in `params` to gather `indices` from. Defaults to the first dimension. Supports negative indexes.

Returns Values from `params` gathered from indices given by `indices`, with shape `params.shape[:axis] + indices.shape + params.shape[axis + 1:]`.
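A minimal Go sketch of gathering rows along axis 0 (the values are illustrative):

```go
s := op.NewScope()
params := op.Const(s, [][]float32{{1, 2}, {3, 4}, {5, 6}})
indices := op.Const(s, []int32{2, 0})
axis := op.Const(s, int32(0))
rows := op.GatherV2(s, params, indices, axis)
// rows has shape [2, 2] = params.shape[:0] + indices.shape + params.shape[1:]
// and evaluates to [[5 6] [1 2]].
_ = rows
```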

func GenerateVocabRemapping Uses

func GenerateVocabRemapping(scope *Scope, new_vocab_file tf.Output, old_vocab_file tf.Output, new_vocab_offset int64, num_new_vocab int64, optional ...GenerateVocabRemappingAttr) (remapping tf.Output, num_present tf.Output)

Given a path to new and old vocabulary files, returns a remapping Tensor of length `num_new_vocab`, where `remapping[i]` contains the row number in the old vocabulary that corresponds to row `i` in the new vocabulary (starting at line `new_vocab_offset` and up to `num_new_vocab` entities), or `-1` if entry `i` in the new vocabulary is not in the old vocabulary. The old vocabulary is constrained to the first `old_vocab_size` entries if `old_vocab_size` is not the default value of -1.

`new_vocab_offset` enables use in the partitioned variable case, and should generally be set through examining partitioning info. Each file should be a text file, with each line containing a single entity within the vocabulary.

For example, with `new_vocab_file` a text file containing each of the following elements on a single line: `[f0, f1, f2, f3]`, old_vocab_file = [f1, f0, f3], `num_new_vocab = 3, new_vocab_offset = 1`, the returned remapping would be `[0, -1, 2]`.

The op also returns a count of how many entries in the new vocabulary were present in the old vocabulary, which is used to calculate the number of values to initialize in a weight matrix remapping.

This functionality can be used to remap both row vocabularies (typically, features) and column vocabularies (typically, classes) from TensorFlow checkpoints. Note that the partitioning logic relies on contiguous vocabularies corresponding to div-partitioned variables. Moreover, the underlying remapping uses an IndexTable (as opposed to an inexact CuckooTable), so client code should use the corresponding index_table_from_file() as the FeatureColumn framework does (as opposed to tf.feature_to_id(), which uses a CuckooTable).

Arguments:

new_vocab_file: Path to the new vocab file.
old_vocab_file: Path to the old vocab file.
new_vocab_offset: How many entries into the new vocab file to start reading.
num_new_vocab: Number of entries in the new vocab file to remap.

Returns:

* A Tensor of length num_new_vocab where the element at index i is equal to the old ID that maps to the new ID i. This element is -1 for any new ID that is not found in the old vocabulary.
* Number of new vocab entries found in old vocab.

func GetSessionHandle Uses

func GetSessionHandle(scope *Scope, value tf.Output) (handle tf.Output)

Store the input tensor in the state of the current session.

Arguments:

value: The tensor to be stored.

Returns The handle for the tensor stored in the session state, represented as a string.

func GetSessionHandleV2 Uses

func GetSessionHandleV2(scope *Scope, value tf.Output) (handle tf.Output)

Store the input tensor in the state of the current session.

Arguments:

value: The tensor to be stored.

Returns The handle for the tensor stored in the session state, represented as a ResourceHandle object.

func GetSessionTensor Uses

func GetSessionTensor(scope *Scope, handle tf.Output, dtype tf.DataType) (value tf.Output)

Get the value of the tensor specified by its handle.

Arguments:

handle: The handle for a tensor stored in the session state.
dtype: The type of the output value.

Returns The tensor for the given handle.

func Gradients Uses

func Gradients(scope *Scope, y []tf.Output, x []tf.Output, dx ...tf.Output) (output []tf.Output)

Gradients adds gradients computation ops to the graph according to scope.

Arguments:

y: output of the function to derive
x: inputs of the function for which partial derivatives are computed
dx: if not null, the partial derivatives of some loss function L w.r.t. y

Returns the partial derivatives.
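A minimal sketch: differentiate y = x² with respect to x and evaluate the gradient at x = 3 (assumes the standard `tf` and `fmt` imports; Gradients is typically called on the root scope):

```go
s := op.NewScope()
x := op.Placeholder(s, tf.Float)
y := op.Square(s, x)
grads := op.Gradients(s, []tf.Output{y}, []tf.Output{x})
graph, err := s.Finalize()
if err != nil {
	panic(err)
}
sess, err := tf.NewSession(graph, nil)
if err != nil {
	panic(err)
}
defer sess.Close()
xv, err := tf.NewTensor(float32(3))
if err != nil {
	panic(err)
}
out, err := sess.Run(map[tf.Output]*tf.Tensor{x: xv}, []tf.Output{grads[0]}, nil)
if err != nil {
	panic(err)
}
fmt.Println(out[0].Value()) // dy/dx = 2x = 6
```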

func Greater Uses

func Greater(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x > y) element-wise.

*NOTE*: `Greater` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func GreaterEqual Uses

func GreaterEqual(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x >= y) element-wise.

*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func GuaranteeConst Uses

func GuaranteeConst(scope *Scope, input tf.Output) (output tf.Output)

Gives a guarantee to the TF runtime that the input tensor is a constant.

The runtime is then free to make optimizations based on this.

Only accepts value typed tensors as inputs and rejects resource variable handles as input.

Returns the input tensor without modification.

func HSVToRGB Uses

func HSVToRGB(scope *Scope, images tf.Output) (output tf.Output)

Convert one or more images from HSV to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the values in `images` are in `[0,1]`.

See `rgb_to_hsv` for a description of the HSV encoding.

Arguments:

images: 1-D or higher rank. HSV data to convert. Last dimension must be size 3.

Returns `images` converted to RGB.

func HashTableV2 Uses

func HashTableV2(scope *Scope, key_dtype tf.DataType, value_dtype tf.DataType, optional ...HashTableV2Attr) (table_handle tf.Output)

Creates a non-initialized hash table.

This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.

Arguments:

key_dtype: Type of the table keys.
value_dtype: Type of the table values.

Returns Handle to a table.

func HistogramFixedWidth Uses

func HistogramFixedWidth(scope *Scope, values tf.Output, value_range tf.Output, nbins tf.Output, optional ...HistogramFixedWidthAttr) (out tf.Output)

Return histogram of values.

Given the tensor `values`, this operation returns a rank 1 histogram counting the number of entries in `values` that fall into every bin. The bins are equal width and determined by the arguments `value_range` and `nbins`.

```python
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]

with tf.get_default_session() as sess:
  hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
  variables.global_variables_initializer().run()
  sess.run(hist) => [2, 1, 1, 0, 2]
```

Arguments:

values: Numeric `Tensor`.
value_range: Shape [2] `Tensor` of same `dtype` as `values`. values <= value_range[0] will be mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
nbins: Scalar `int32 Tensor`. Number of histogram bins.

Returns A 1-D `Tensor` holding histogram of values.
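The same computation as the Python snippet above, sketched with this package's wrapper (the values mirror that example):

```go
s := op.NewScope()
values := op.Const(s, []float32{-1.0, 0.0, 1.5, 2.0, 5.0, 15.0})
valueRange := op.Const(s, []float32{0.0, 5.0})
nbins := op.Const(s, int32(5))
hist := op.HistogramFixedWidth(s, values, valueRange, nbins)
// Running the graph yields [2 1 1 0 2]: out-of-range values are
// clamped into the first and last bins.
_ = hist
```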

func HistogramSummary Uses

func HistogramSummary(scope *Scope, tag tf.Output, values tf.Output) (summary tf.Output)

Outputs a `Summary` protocol buffer with a histogram.

The generated [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) has one summary value containing a histogram for `values`.

This op reports an `InvalidArgument` error if any value is not finite.

Arguments:

tag: Scalar.  Tag to use for the `Summary.Value`.
values: Any shape. Values to use to build the histogram.

Returns Scalar. Serialized `Summary` protocol buffer.

func HostConst Uses

func HostConst(scope *Scope, value tf.Tensor, dtype tf.DataType) (output tf.Output)

Returns a constant tensor on the host. Only for writing C++ tests.

Arguments:

value: Attr `value` is the tensor to return.

func IFFT Uses

func IFFT(scope *Scope, input tf.Output) (output tf.Output)

Inverse fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of `input`.

Arguments:

input: A complex tensor.

Returns A complex tensor of the same shape as `input`. The inner-most dimension of `input` is replaced with its inverse 1D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.ifft @end_compatibility

func IFFT2D Uses

func IFFT2D(scope *Scope, input tf.Output) (output tf.Output)

Inverse 2D fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of `input`.

Arguments:

input: A complex tensor.

Returns A complex tensor of the same shape as `input`. The inner-most 2 dimensions of `input` are replaced with their inverse 2D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.ifft2 @end_compatibility

func IFFT3D Uses

func IFFT3D(scope *Scope, input tf.Output) (output tf.Output)

Inverse 3D fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most 3 dimensions of `input` are replaced with their inverse 3D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.ifftn with 3 dimensions. @end_compatibility

func IRFFT Uses

func IRFFT(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

Inverse real-valued fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of `input`.

The inner-most dimension of `input` is assumed to be the result of `RFFT`: the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly.

Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A complex64 tensor.
fft_length: An int32 tensor of shape [1]. The FFT length.

Returns A float32 tensor of the same rank as `input`. The inner-most dimension of `input` is replaced with the `fft_length` samples of its inverse 1D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.irfft @end_compatibility

func IRFFT2D Uses

func IRFFT2D(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

Inverse 2D real-valued fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of `input`.

The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`: The inner-most dimension contains the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most 2 dimensions of `input`. If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly.

Along each axis `IRFFT2D` is computed on, if `fft_length` (or `fft_length / 2 + 1` for the inner-most dimension) is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A complex64 tensor.
fft_length: An int32 tensor of shape [2]. The FFT length for each dimension.

Returns A float32 tensor of the same rank as `input`. The inner-most 2 dimensions of `input` are replaced with the `fft_length` samples of their inverse 2D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.irfft2 @end_compatibility

func IRFFT3D Uses

func IRFFT3D(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

Inverse 3D real-valued fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of `input`.

The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`: The inner-most dimension contains the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most 3 dimensions of `input`. If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly.

Along each axis `IRFFT3D` is computed on, if `fft_length` (or `fft_length / 2 + 1` for the inner-most dimension) is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A complex64 tensor.
fft_length: An int32 tensor of shape [3]. The FFT length for each dimension.

Returns A float32 tensor of the same rank as `input`. The inner-most 3 dimensions of `input` are replaced with the `fft_length` samples of their inverse 3D real Fourier transform.

@compatibility(numpy) Equivalent to np.fft.irfftn with 3 dimensions. @end_compatibility

func Identity Uses

func Identity(scope *Scope, input tf.Output) (output tf.Output)

Return a tensor with the same shape and contents as the input tensor or value.

func IdentityN Uses

func IdentityN(scope *Scope, input []tf.Output) (output []tf.Output)

Returns a list of tensors with the same shapes and contents as the input tensors.

This op can be used to override the gradient for complicated functions. For example, suppose y = f(x) and we wish to apply a custom function g for backprop such that dx = g(dy). In Python,

```python
with tf.get_default_graph().gradient_override_map(
    {'IdentityN': 'OverrideGradientWithG'}):
  y, _ = identity_n([f(x), x])

@tf.RegisterGradient('OverrideGradientWithG')
def ApplyG(op, dy, _):
  return [None, g(dy)]  # Do not backprop to f(x).
```

func IdentityReaderV2 Uses

func IdentityReaderV2(scope *Scope, optional ...IdentityReaderV2Attr) (reader_handle tf.Output)

A Reader that outputs the queued work as both the key and value.

To use, enqueue strings in a Queue. ReaderRead will take the front work string and output (work, work).

Returns The handle to reference the Reader.

func Igamma Uses

func Igamma(scope *Scope, a tf.Output, x tf.Output) (z tf.Output)

Compute the lower regularized incomplete Gamma function `P(a, x)`.

The lower regularized incomplete Gamma function is defined as:

\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\)

where

\\(gamma(a, x) = \\int_{0}^{x} t^{a-1} exp(-t) dt\\)

is the lower incomplete Gamma function.

Note, above `Q(a, x)` (`Igammac`) is the upper regularized incomplete Gamma function.

func IgammaGradA Uses

func IgammaGradA(scope *Scope, a tf.Output, x tf.Output) (z tf.Output)

Computes the gradient of `igamma(a, x)` wrt `a`.

func Igammac Uses

func Igammac(scope *Scope, a tf.Output, x tf.Output) (z tf.Output)

Compute the upper regularized incomplete Gamma function `Q(a, x)`.

The upper regularized incomplete Gamma function is defined as:

\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\)

where

\\(Gamma(a, x) = \int_{x}^{\infty} t^{a-1} exp(-t) dt\\)

is the upper incomplete Gamma function.

Note, above `P(a, x)` (`Igamma`) is the lower regularized incomplete Gamma function.

func Imag Uses

func Imag(scope *Scope, input tf.Output, optional ...ImagAttr) (output tf.Output)

Returns the imaginary part of a complex number.

Given a tensor `input` of complex numbers, this operation returns a tensor of type `float` that is the imaginary part of each element in `input`. All elements in `input` must be complex numbers of the form \\(a + bj\\), where *a* is the real part and *b* is the imaginary part returned by this operation.

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]
```

func ImageSummary Uses

func ImageSummary(scope *Scope, tag tf.Output, tensor tf.Output, optional ...ImageSummaryAttr) (summary tf.Output)

Outputs a `Summary` protocol buffer with images.

The summary has up to `max_images` summary values containing images. The images are built from `tensor` which must be 4-D with shape `[batch_size, height, width, channels]` and where `channels` can be:

* 1: `tensor` is interpreted as Grayscale.
* 3: `tensor` is interpreted as RGB.
* 4: `tensor` is interpreted as RGBA.

The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range `[0, 255]`. `uint8` values are unchanged. The op uses two different normalization algorithms:

* If the input values are all positive, they are rescaled so the largest one is 255.
* If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.

The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:

* If `max_images` is 1, the summary value tag is '*tag*/image'.
* If `max_images` is greater than 1, the summary value tags are generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.

The `bad_color` argument is the color to use in the generated images for non-finite input values. It is a `uint8` 1-D tensor of length `channels`. Each element must be in the range `[0, 255]` (it represents the value of a pixel in the output image). Non-finite values in the input tensor are replaced by this tensor in the output image. The default value is the color red.

Arguments:

tag: Scalar. Used to build the `tag` attribute of the summary values.
tensor: 4-D of shape `[batch_size, height, width, channels]` where `channels` is 1, 3, or 4.

Returns Scalar. Serialized `Summary` protocol buffer.

func ImmutableConst Uses

func ImmutableConst(scope *Scope, dtype tf.DataType, shape tf.Shape, memory_region_name string) (tensor tf.Output)

Returns immutable tensor from memory region.

The current implementation memmaps the tensor from a file.

Arguments:

dtype: Type of the returned tensor.
shape: Shape of the returned tensor.
memory_region_name: Name of readonly memory region used by the tensor, see NewReadOnlyMemoryRegionFromFile in tensorflow::Env.

func InTopK Uses

func InTopK(scope *Scope, predictions tf.Output, targets tf.Output, k int64) (precision tf.Output)

Says whether the targets are in the top `K` predictions.

This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the prediction for the target class is among the top `k` predictions among all predictions for example `i`. Note that the behavior of `InTopK` differs from the `TopK` op in its handling of ties; if multiple classes have the same prediction value and straddle the top-`k` boundary, all of those classes are considered to be in the top `k`.

More formally, let

\\(predictions_i\\) be the predictions for all classes for example `i`,
\\(targets_i\\) be the target class for example `i`,
\\(out_i\\) be the output for example `i`,

$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$

Arguments:

predictions: A `batch_size` x `classes` tensor.
targets: A `batch_size` vector of class ids.
k: Number of top elements to look at for computing precision.

Returns Computed Precision at `k` as a `bool Tensor`.
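
For example, here is a minimal sketch in the style of the package example; the prediction and target values are illustrative, not taken from the op's documentation:

s := NewScope()
predictions := Const(s, [][]float32{{0.1, 0.8, 0.1}, {0.6, 0.3, 0.1}})
targets := Const(s, []int32{1, 2})
inTop2 := InTopK(s, predictions, targets, 2)
// When the graph is run, inTop2 evaluates to [true, false]: class 1 is
// among the top 2 predictions for example 0, but class 2 is not for example 1.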

func InTopKV2 Uses

func InTopKV2(scope *Scope, predictions tf.Output, targets tf.Output, k tf.Output) (precision tf.Output)

Says whether the targets are in the top `K` predictions.

This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the prediction for the target class is among the top `k` predictions among all predictions for example `i`. Note that the behavior of `InTopK` differs from the `TopK` op in its handling of ties; if multiple classes have the same prediction value and straddle the top-`k` boundary, all of those classes are considered to be in the top `k`.

More formally, let

\\(predictions_i\\) be the predictions for all classes for example `i`,
\\(targets_i\\) be the target class for example `i`,
\\(out_i\\) be the output for example `i`,

$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$

Arguments:

predictions: A `batch_size` x `classes` tensor.
targets: A `batch_size` vector of class ids.
k: Number of top elements to look at for computing precision.

Returns Computed precision at `k` as a `bool Tensor`.

func InitializeTableFromTextFileV2 Uses

func InitializeTableFromTextFileV2(scope *Scope, table_handle tf.Output, filename tf.Output, key_index int64, value_index int64, optional ...InitializeTableFromTextFileV2Attr) (o *tf.Operation)

Initializes a table from a text file.

It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the line split on `delimiter`, or from the line number (starting from zero). Where to extract the key and value from a line is specified by `key_index` and `value_index`.

- A value of -1 means use the line number (starting from zero); expects `int64`.
- A value of -2 means use the whole line content; expects `string`.
- A value >= 0 means use the index (starting at zero) of the split line based on `delimiter`.

Arguments:

table_handle: Handle to a table which will be initialized.
filename: Filename of a vocabulary text file.
key_index: Column index in a line to get the table `key` values from.
value_index: Column index that represents information of a line to get the table `value` values from.

Returns the created operation.

func InitializeTableV2 Uses

func InitializeTableV2(scope *Scope, table_handle tf.Output, keys tf.Output, values tf.Output) (o *tf.Operation)

Table initializer that takes two tensors for keys and values respectively.

Arguments:

table_handle: Handle to a table which will be initialized.
keys: Keys of type Tkey.
values: Values of type Tval.

Returns the created operation.

func InplaceAdd Uses

func InplaceAdd(scope *Scope, x tf.Output, i tf.Output, v tf.Output) (y tf.Output)

Adds v into specified rows of x.

Computes y = x; y[i, :] += v; return y.

Arguments:

x: A `Tensor` of type T.
i: A vector. Indices into the left-most dimension of `x`.
v: A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.

Returns A `Tensor` of type T. An alias of `x`. The content of `y` is undefined if there are duplicates in `i`.

func InplaceSub Uses

func InplaceSub(scope *Scope, x tf.Output, i tf.Output, v tf.Output) (y tf.Output)

Subtracts `v` from specified rows of `x`.

Computes y = x; y[i, :] -= v; return y.

Arguments:

x: A `Tensor` of type T.
i: A vector. Indices into the left-most dimension of `x`.
v: A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.

Returns A `Tensor` of type T. An alias of `x`. The content of `y` is undefined if there are duplicates in `i`.

func InplaceUpdate Uses

func InplaceUpdate(scope *Scope, x tf.Output, i tf.Output, v tf.Output) (y tf.Output)

Updates specified rows with values in `v`.

Computes `x[i, :] = v; return x`.

Arguments:

x: A tensor of type `T`.
i: A vector. Indices into the left-most dimension of `x`.
v: A `Tensor` of type T. Same dimension sizes as x except the first dimension, which must be the same as i's size.

Returns A `Tensor` of type T. An alias of `x`. The content of `y` is undefined if there are duplicates in `i`.
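
A minimal sketch of `InplaceUpdate` in the style of the package example (the values are illustrative):

s := NewScope()
x := Const(s, [][]float32{{1, 1}, {2, 2}, {3, 3}})
i := Const(s, []int32{0, 2})
v := Const(s, [][]float32{{9, 9}, {7, 7}})
y := InplaceUpdate(s, x, i, v)
// When the graph is run, y evaluates to [[9 9] [2 2] [7 7]]:
// rows 0 and 2 of x are overwritten with the rows of v.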

func Inv Uses

func Inv(scope *Scope, x tf.Output) (y tf.Output)

Computes the reciprocal of x element-wise.

I.e., \\(y = 1 / x\\).

func InvGrad Uses

func InvGrad(scope *Scope, y tf.Output, dy tf.Output) (z tf.Output)

Computes the gradient for the inverse of `x` wrt its input.

Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy` is the corresponding input gradient.

func Invert Uses

func Invert(scope *Scope, x tf.Output) (y tf.Output)

Flips all bits elementwise.

The result will have exactly those bits set that are not set in `x`. The computation is performed on the underlying representation of x.

func InvertPermutation Uses

func InvertPermutation(scope *Scope, x tf.Output) (y tf.Output)

Computes the inverse permutation of a tensor.

This operation computes the inverse of an index permutation. It takes a 1-D integer tensor `x`, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor `y` and an input tensor `x`, this operation computes the following:

`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`

The values must include 0. There can be no duplicate values or negative values.

For example:

```
# tensor `x` is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]
```

Arguments:

x: 1-D.

Returns 1-D.
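
The example above as a minimal sketch in the style of the package example:

s := NewScope()
x := Const(s, []int32{3, 4, 0, 2, 1})
y := InvertPermutation(s, x)
// When the graph is run, y evaluates to [2, 4, 3, 0, 1].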

func IsBoostedTreesEnsembleInitialized Uses

func IsBoostedTreesEnsembleInitialized(scope *Scope, tree_ensemble_handle tf.Output) (is_initialized tf.Output)

Checks whether a tree ensemble has been initialized.

Arguments:

tree_ensemble_handle: Handle to the tree ensemble resource.

Returns output boolean on whether it is initialized or not.

func IsBoostedTreesQuantileStreamResourceInitialized Uses

func IsBoostedTreesQuantileStreamResourceInitialized(scope *Scope, quantile_stream_resource_handle tf.Output) (is_initialized tf.Output)

Checks whether a quantile stream has been initialized.

An Op that checks if quantile stream resource is initialized.

Arguments:

quantile_stream_resource_handle: resource; The reference to quantile stream resource handle.

Returns bool; True if the resource is initialized, False otherwise.

func IsFinite Uses

func IsFinite(scope *Scope, x tf.Output) (y tf.Output)

Returns which elements of x are finite.

@compatibility(numpy) Equivalent to np.isfinite @end_compatibility

func IsInf Uses

func IsInf(scope *Scope, x tf.Output) (y tf.Output)

Returns which elements of x are Inf.

@compatibility(numpy) Equivalent to np.isinf @end_compatibility

func IsNan Uses

func IsNan(scope *Scope, x tf.Output) (y tf.Output)

Returns which elements of x are NaN.

@compatibility(numpy) Equivalent to np.isnan @end_compatibility
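
A minimal sketch of these three predicates in the style of the package example (assuming the standard math package is also imported; the values are illustrative):

s := NewScope()
x := Const(s, []float32{1, float32(math.Inf(1)), float32(math.NaN())})
finite := IsFinite(s, x) // evaluates to [true, false, false]
inf := IsInf(s, x)       // evaluates to [false, true, false]
nan := IsNan(s, x)       // evaluates to [false, false, true]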

func Iterator Uses

func Iterator(scope *Scope, shared_name string, container string, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

A container for an iterator resource.

Returns A handle to the iterator that can be passed to a "MakeIterator" or "IteratorGetNext" op.

func IteratorFromStringHandle Uses

func IteratorFromStringHandle(scope *Scope, string_handle tf.Output, optional ...IteratorFromStringHandleAttr) (resource_handle tf.Output)

Converts the given string representing a handle to an iterator to a resource.

Arguments:

string_handle: A string representation of the given handle.

Returns A handle to an iterator resource.

func IteratorGetNext Uses

func IteratorGetNext(scope *Scope, iterator tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (components []tf.Output)

Gets the next output from the given iterator.

func IteratorGetNextAsOptional Uses

func IteratorGetNextAsOptional(scope *Scope, iterator tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (optional tf.Output)

Gets the next output from the given iterator as an Optional variant.

func IteratorGetNextSync Uses

func IteratorGetNextSync(scope *Scope, iterator tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (components []tf.Output)

Gets the next output from the given iterator.

This operation is a synchronous version of IteratorGetNext. It should only be used in situations where the iterator does not block the calling thread, or where the calling thread is not a member of the thread pool used to execute parallel operations (e.g. in eager mode).

func IteratorToStringHandle Uses

func IteratorToStringHandle(scope *Scope, resource_handle tf.Output) (string_handle tf.Output)

Converts the given `resource_handle` representing an iterator to a string.

Arguments:

resource_handle: A handle to an iterator resource.

Returns A string representation of the given handle.

func KMC2ChainInitialization Uses

func KMC2ChainInitialization(scope *Scope, distances tf.Output, seed tf.Output) (index tf.Output)

Returns the index of a data point that should be added to the seed set.

Entries in distances are assumed to be squared distances of candidate points to the already sampled centers in the seed set. The op constructs one Markov chain of the k-MC^2 algorithm and returns the index of one candidate point to be added as an additional cluster center.

Arguments:

distances: Vector with squared distances to the closest previously sampled cluster center for each candidate point.

seed: Scalar. Seed for initializing the random number generator.

Returns Scalar with the index of the sampled point.

func KmeansPlusPlusInitialization Uses

func KmeansPlusPlusInitialization(scope *Scope, points tf.Output, num_to_sample tf.Output, seed tf.Output, num_retries_per_sample tf.Output) (samples tf.Output)

Selects num_to_sample rows of input using the KMeans++ criterion.

Rows of points are assumed to be input points. One row is selected at random. Subsequent rows are sampled with probability proportional to the squared L2 distance from the nearest row selected thus far until num_to_sample rows have been sampled.

Arguments:

points: Matrix of shape (n, d). Rows are assumed to be input points.
num_to_sample: Scalar. The number of rows to sample. This value must not be larger than n.
seed: Scalar. Seed for initializing the random number generator.
num_retries_per_sample: Scalar. For each row that is sampled, this parameter specifies the number of additional points to draw from the current distribution before selecting the best. If a negative value is specified, a heuristic is used to sample O(log(num_to_sample)) additional points.

Returns Matrix of shape (num_to_sample, d). The sampled rows.

func L2Loss Uses

func L2Loss(scope *Scope, t tf.Output) (output tf.Output)

L2 Loss.

Computes half the L2 norm of a tensor without the `sqrt`:

output = sum(t ** 2) / 2

Arguments:

t: Typically 2-D, but may have any dimensions.

Returns 0-D.
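
A minimal sketch in the style of the package example (values illustrative):

s := NewScope()
t := Const(s, []float32{3, 4})
loss := L2Loss(s, t)
// When the graph is run, loss evaluates to (3*3 + 4*4) / 2 = 12.5.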

func LRN Uses

func LRN(scope *Scope, input tf.Output, optional ...LRNAttr) (output tf.Output)

Local Response Normalization.

The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,

sqr_sum[a, b, c, d] =
    sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).

Arguments:

input: 4-D.

func LRNGrad Uses

func LRNGrad(scope *Scope, input_grads tf.Output, input_image tf.Output, output_image tf.Output, optional ...LRNGradAttr) (output tf.Output)

Gradients for Local Response Normalization.

Arguments:

input_grads: 4-D with shape `[batch, height, width, channels]`.
input_image: 4-D with shape `[batch, height, width, channels]`.
output_image: 4-D with shape `[batch, height, width, channels]`.

Returns The gradients for LRN.

func LeakyRelu Uses

func LeakyRelu(scope *Scope, features tf.Output, optional ...LeakyReluAttr) (activations tf.Output)

Computes rectified linear: `max(features, features * alpha)`.

func LeakyReluGrad Uses

func LeakyReluGrad(scope *Scope, gradients tf.Output, features tf.Output, optional ...LeakyReluGradAttr) (backprops tf.Output)

Computes rectified linear gradients for a LeakyRelu operation.

Arguments:

gradients: The backpropagated gradients to the corresponding LeakyRelu operation.
features: The features passed as input to the corresponding LeakyRelu operation, OR the outputs of that operation (both work equivalently).

Returns `gradients * (features > 0) + alpha * gradients * (features <= 0)`.

func LearnedUnigramCandidateSampler Uses

func LearnedUnigramCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, range_max int64, optional ...LearnedUnigramCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a learned unigram distribution.

See explanations of candidate sampling and the data formats at go/candidate-sampling.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.

num_true: Number of true labels per context.
num_sampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.

range_max: The sampler will sample integers from the interval [0, range_max).

Returns A vector of length num_sampled, in which each element is the ID of a sampled candidate; a batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates (if unique=true, then this is a probability); and a vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates (if unique=true, then this is a probability).

func LeftShift Uses

func LeftShift(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Elementwise computes the bitwise left-shift of `x` and `y`.

If `y` is negative, or greater than or equal to the width of `x` in bits, the result is implementation defined.

func Less Uses

func Less(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x < y) element-wise.

*NOTE*: `Less` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func LessEqual Uses

func LessEqual(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x <= y) element-wise.

*NOTE*: `LessEqual` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Lgamma Uses

func Lgamma(scope *Scope, x tf.Output) (y tf.Output)

Computes the log of the absolute value of `Gamma(x)` element-wise.

func LinSpace Uses

func LinSpace(scope *Scope, start tf.Output, stop tf.Output, num tf.Output) (output tf.Output)

Generates values in an interval.

A sequence of `num` evenly-spaced values are generated beginning at `start`. If `num > 1`, the values in the sequence increase by `(stop - start) / (num - 1)`, so that the last one is exactly `stop`.

For example:

```
tf.linspace(10.0, 12.0, 3, name="linspace") => [10.0 11.0 12.0]
```

Arguments:

start: 0-D tensor. First entry in the range.
stop: 0-D tensor. Last entry in the range.
num: 0-D tensor. Number of values to generate.

Returns 1-D. The generated values.
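
The same example as a minimal sketch in the style of the package example:

s := NewScope()
out := LinSpace(s,
    Const(s, float32(10)), // start
    Const(s, float32(12)), // stop
    Const(s, int32(3)))    // num
// When the graph is run, out evaluates to [10, 11, 12].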

func ListDiff Uses

func ListDiff(scope *Scope, x tf.Output, y tf.Output, optional ...ListDiffAttr) (out tf.Output, idx tf.Output)

Computes the difference between two lists of numbers or strings.

Given a list `x` and a list `y`, this operation returns a list `out` that represents all values that are in `x` but not in `y`. The returned list `out` is sorted in the same order that the numbers appear in `x` (duplicates are preserved). This operation also returns a list `idx` that represents the position of each `out` element in `x`. In other words:

`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`

For example, given this input:

```
x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]
```

This operation would return:

```
out ==> [2, 4, 6]
idx ==> [1, 3, 5]
```

Arguments:

x: 1-D. Values to keep.
y: 1-D. Values to remove.

Returns 1-D values present in `x` but not in `y`, and the 1-D positions of the `x` values preserved in `out`.
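
The example above as a minimal sketch in the style of the package example:

s := NewScope()
x := Const(s, []int32{1, 2, 3, 4, 5, 6})
y := Const(s, []int32{1, 3, 5})
out, idx := ListDiff(s, x, y)
// When the graph is run, out evaluates to [2, 4, 6] and idx to [1, 3, 5].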

func LoadAndRemapMatrix Uses

func LoadAndRemapMatrix(scope *Scope, ckpt_path tf.Output, old_tensor_name tf.Output, row_remapping tf.Output, col_remapping tf.Output, initializing_values tf.Output, num_rows int64, num_cols int64, optional ...LoadAndRemapMatrixAttr) (output_matrix tf.Output)

Loads a 2-D (matrix) `Tensor` with name `old_tensor_name` from the checkpoint at `ckpt_path` and potentially reorders its rows and columns using the specified remappings.

Most users should use one of the wrapper initializers (such as `tf.contrib.framework.load_and_remap_matrix_initializer`) instead of this function directly.

The remappings are 1-D tensors with the following properties:

* `row_remapping` must have exactly `num_rows` entries. Row `i` of the output matrix will be initialized from the row corresponding to index `row_remapping[i]` in the old `Tensor` from the checkpoint.
* `col_remapping` must have either 0 entries (indicating that no column reordering is needed) or `num_cols` entries. If specified, column `j` of the output matrix will be initialized from the column corresponding to index `col_remapping[j]` in the old `Tensor` from the checkpoint.
* A value of -1 in either of the remappings signifies a "missing" entry. In that case, values from the `initializing_values` tensor will be used to fill that missing row or column. If `row_remapping` has `r` missing entries and `col_remapping` has `c` missing entries, then the following condition must be true:

`(r * num_cols) + (c * num_rows) - (r * c) == len(initializing_values)`

The remapping tensors can be generated using the GenerateVocabRemapping op.

As an example, with row_remapping = [1, 0, -1], col_remapping = [0, 2, -1], initializing_values = [0.5, -0.5, 0.25, -0.25, 42], and w(i, j) representing the value from row i, column j of the old tensor in the checkpoint, the output matrix will look like the following:

[[w(1, 0),  w(1, 2),  0.5],
 [w(0, 0),  w(0, 2), -0.5],
 [0.25,    -0.25,      42]]

Arguments:

ckpt_path: Path to the TensorFlow checkpoint (version 2, `TensorBundle`) from which the old matrix `Tensor` will be loaded.
old_tensor_name: Name of the 2-D `Tensor` to load from checkpoint.
row_remapping: An int `Tensor` of row remappings (generally created by `generate_vocab_remapping`). Even if no row remapping is needed, this must still be an index-valued Tensor (e.g. [0, 1, 2, ...]), or a shifted index-valued `Tensor` (e.g. [8, 9, 10, ...], for partitioned `Variables`).
col_remapping: An int `Tensor` of column remappings (generally created by `generate_vocab_remapping`). May be a size-0 `Tensor` if only row remapping is to be done (e.g. column ordering is the same).
initializing_values: A float `Tensor` containing values to fill in for cells in the output matrix that are not loaded from the checkpoint. Length must be exactly the same as the number of missing / new cells.

num_rows: Number of rows (length of the 1st dimension) in the output matrix.
num_cols: Number of columns (length of the 2nd dimension) in the output matrix.

Returns Output matrix containing existing values loaded from the checkpoint, and with any missing values filled in from initializing_values.

func Log Uses

func Log(scope *Scope, x tf.Output) (y tf.Output)

Computes natural logarithm of x element-wise.

I.e., \\(y = \log_e x\\).

func Log1p Uses

func Log1p(scope *Scope, x tf.Output) (y tf.Output)

Computes natural logarithm of (1 + x) element-wise.

I.e., \\(y = \log_e (1 + x)\\).

func LogMatrixDeterminant Uses

func LogMatrixDeterminant(scope *Scope, input tf.Output) (sign tf.Output, log_abs_determinant tf.Output)

Computes the sign and the log of the absolute value of the determinant of one or more square matrices.

The input is a tensor of shape `[N, M, M]` whose inner-most 2 dimensions form square matrices. The outputs are two tensors containing the signs and absolute values of the log determinants for all N input submatrices `[..., :, :]` such that the determinant = sign*exp(log_abs_determinant). The log_abs_determinant is computed as det(P)*sum(log(diag(LU))) where LU is the LU decomposition of the input and P is the corresponding permutation matrix.

Arguments:

input: Shape is `[N, M, M]`.

Returns The signs of the log determinants of the inputs (shape `[N]`), and the logs of the absolute values of the determinants of the N input matrices (shape `[N]`).

func LogSoftmax Uses

func LogSoftmax(scope *Scope, logits tf.Output) (logsoftmax tf.Output)

Computes log softmax activations.

For each batch `i` and class `j` we have

logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))

Arguments:

logits: 2-D with shape `[batch_size, num_classes]`.

Returns Same shape as `logits`.
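
A minimal sketch in the style of the package example (the logits values are illustrative):

s := NewScope()
logits := Const(s, [][]float32{{1, 2, 3}})
lsm := LogSoftmax(s, logits)
// Each row of lsm is logits[i] - log(sum(exp(logits[i]))),
// so exp(lsm) sums to 1 along the class dimension.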

func LogUniformCandidateSampler Uses

func LogUniformCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, range_max int64, optional ...LogUniformCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a log-uniform distribution.

See explanations of candidate sampling and the data formats at go/candidate-sampling.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.

num_true: Number of true labels per context.
num_sampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.

range_max: The sampler will sample integers from the interval [0, range_max).

Returns A vector of length num_sampled, in which each element is the ID of a sampled candidate; a batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates (if unique=true, then this is a probability); and a vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates (if unique=true, then this is a probability).

func LogicalAnd Uses

func LogicalAnd(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of x AND y element-wise.

*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func LogicalNot Uses

func LogicalNot(scope *Scope, x tf.Output) (y tf.Output)

Returns the truth value of NOT x element-wise.

func LogicalOr Uses

func LogicalOr(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of x OR y element-wise.

*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
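
Broadcasting lets the comparison and logical ops express range tests concisely. A minimal sketch in the style of the package example (values illustrative):

s := NewScope()
x := Const(s, []float32{1, 2, 3})
inRange := LogicalAnd(s,
    Less(s, Const(s, float32(1.5)), x), // 1.5 < x  ==> [false, true, true]
    Less(s, x, Const(s, float32(2.5)))) // x < 2.5  ==> [true, true, false]
// When the graph is run, inRange evaluates to [false, true, false].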

func LookupTableExportV2 Uses

func LookupTableExportV2(scope *Scope, table_handle tf.Output, Tkeys tf.DataType, Tvalues tf.DataType) (keys tf.Output, values tf.Output)

Outputs all keys and values in the table.

Arguments:

table_handle: Handle to the table.

Returns A vector of all keys present in the table, and a tensor of all values in the table, indexed in parallel with `keys`.

func LookupTableFindV2 Uses

func LookupTableFindV2(scope *Scope, table_handle tf.Output, keys tf.Output, default_value tf.Output) (values tf.Output)

Looks up keys in a table, outputs the corresponding values.

The tensor `keys` must be of the same type as the keys of the table. The output `values` is of the type of the table values.

The scalar `default_value` is the value output for keys not present in the table. It must also be of the same type as the table values.

Arguments:

table_handle: Handle to the table.
keys: Any shape.  Keys to look up.

Returns Same shape as `keys`. Values found in the table, or `default_value` for missing keys.

func LookupTableImportV2 Uses

func LookupTableImportV2(scope *Scope, table_handle tf.Output, keys tf.Output, values tf.Output) (o *tf.Operation)

Replaces the contents of the table with the specified keys and values.

The tensor `keys` must be of the same type as the keys of the table. The tensor `values` must be of the type of the table values.

Arguments:

table_handle: Handle to the table.
keys: Any shape.  Keys to look up.
values: Values to associate with keys.

Returns the created operation.

func LookupTableInsertV2 Uses

func LookupTableInsertV2(scope *Scope, table_handle tf.Output, keys tf.Output, values tf.Output) (o *tf.Operation)

Updates the table to associate keys with values.

The tensor `keys` must be of the same type as the keys of the table. The tensor `values` must be of the type of the table values.

Arguments:

table_handle: Handle to the table.
keys: Any shape.  Keys to look up.
values: Values to associate with keys.

Returns the created operation.

func LookupTableRemoveV2 Uses

func LookupTableRemoveV2(scope *Scope, table_handle tf.Output, keys tf.Output) (o *tf.Operation)

Removes keys and their associated values from a table.

The tensor `keys` must be of the same type as the keys of the table. Keys not already in the table are silently ignored.

Arguments:

table_handle: Handle to the table.
keys: Any shape.  Keys of the elements to remove.

Returns the created operation.

func LookupTableSizeV2 Uses

func LookupTableSizeV2(scope *Scope, table_handle tf.Output) (size tf.Output)

Computes the number of elements in the given table.

Arguments:

table_handle: Handle to the table.

Returns Scalar that contains number of elements in the table.

func LoopCond Uses

func LoopCond(scope *Scope, input tf.Output) (output tf.Output)

Forwards the input to the output.

This operator represents the loop termination condition used by the "pivot" switches of a loop.

Arguments:

input: A boolean scalar, representing the branch predicate of the Switch op.

Returns The same tensor as `input`.

func LowerBound Uses

func LowerBound(scope *Scope, sorted_inputs tf.Output, values tf.Output, optional ...LowerBoundAttr) (output tf.Output)

Applies lower_bound(sorted_search_values, values) along each row.

Each set of rows with the same index in (sorted_inputs, values) is treated independently. The resulting row is the equivalent of calling `np.searchsorted(sorted_inputs, values, side='left')`.

The result is not a global index to the entire `Tensor`, but rather just the index in the last dimension.

A 2-D example:

sorted_sequence = [[0, 3, 9, 9, 10],
                   [1, 2, 3, 4, 5]]
values = [[2, 4, 9],
          [0, 2, 6]]

result = LowerBound(sorted_sequence, values)

result == [[1, 2, 2],
           [0, 1, 5]]

Arguments:

sorted_inputs: 2-D Tensor where each row is ordered.
values: 2-D Tensor with the same numbers of rows as `sorted_search_values`. Contains the values that will be searched for in `sorted_search_values`.

Returns A `Tensor` with the same shape as `values`. It contains the first scalar index into the last dimension where values can be inserted without changing the ordered property.

func Lu Uses

func Lu(scope *Scope, input tf.Output, optional ...LuAttr) (lu tf.Output, p tf.Output)

Computes the LU decomposition of one or more square matrices.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices.

The input has to be invertible.

The output consists of two tensors LU and P containing the LU decomposition of all input submatrices `[..., :, :]`. LU encodes the lower triangular and upper triangular factors.

For each input submatrix of shape `[M, M]`, L is a lower triangular matrix of shape `[M, M]` with unit diagonal whose entries correspond to the strictly lower triangular part of LU. U is an upper triangular matrix of shape `[M, M]` whose entries correspond to the upper triangular part, including the diagonal, of LU.

P represents a permutation matrix encoded as a list of indices each between `0` and `M-1`, inclusive. If P_mat denotes the permutation matrix corresponding to P, then L, U, and P satisfy P_mat * input = L * U.

Arguments:

input: A tensor of shape `[..., M, M]` whose inner-most 2 dimensions form matrices of size `[M, M]`.

Returns A tensor of shape `[..., M, M]` whose strictly lower triangular part denotes the lower triangular factor `L` with unit diagonal, and whose upper triangular part denotes the upper triangular factor `U`; and the permutation of the rows encoded as a list of indices in `0..M-1`, shape `[..., M]`. @compatibility(scipy) Similar to `scipy.linalg.lu`, except the triangular factors `L` and `U` are packed into a single tensor, the permutation is applied to `input` instead of the right hand side and the permutation `P` is returned as a list of indices instead of a permutation matrix. @end_compatibility

func MakeIterator Uses

func MakeIterator(scope *Scope, dataset tf.Output, iterator tf.Output) (o *tf.Operation)

Makes a new iterator from the given `dataset` and stores it in `iterator`.

This operation may be executed multiple times. Each execution will reset the iterator in `iterator` to the first element of `dataset`.

Returns the created operation.

func MapClear Uses

func MapClear(scope *Scope, dtypes []tf.DataType, optional ...MapClearAttr) (o *tf.Operation)

Op removes all elements in the underlying container.

Returns the created operation.

func MapIncompleteSize Uses

func MapIncompleteSize(scope *Scope, dtypes []tf.DataType, optional ...MapIncompleteSizeAttr) (size tf.Output)

Op returns the number of incomplete elements in the underlying container.

func MapPeek Uses

func MapPeek(scope *Scope, key tf.Output, indices tf.Output, dtypes []tf.DataType, optional ...MapPeekAttr) (values []tf.Output)

Op peeks at the values at the specified key. If the underlying container does not contain this key, this op will block until it does.

func MapSize Uses

func MapSize(scope *Scope, dtypes []tf.DataType, optional ...MapSizeAttr) (size tf.Output)

Op returns the number of elements in the underlying container.

func MapStage Uses

func MapStage(scope *Scope, key tf.Output, indices tf.Output, values []tf.Output, dtypes []tf.DataType, optional ...MapStageAttr) (o *tf.Operation)

Stage (key, values) in the underlying container which behaves like a hashtable.

Arguments:

key: int64
values: a list of tensors
dtypes: A list of data types that inserted values should adhere to.

Returns the created operation.

func MapUnstage Uses

func MapUnstage(scope *Scope, key tf.Output, indices tf.Output, dtypes []tf.DataType, optional ...MapUnstageAttr) (values []tf.Output)

Op removes and returns the values associated with the key from the underlying container. If the underlying container does not contain this key, the op will block until it does.

func MapUnstageNoKey Uses

func MapUnstageNoKey(scope *Scope, indices tf.Output, dtypes []tf.DataType, optional ...MapUnstageNoKeyAttr) (key tf.Output, values []tf.Output)

Op removes and returns a random (key, value) from the underlying container. If the underlying container does not contain elements, the op will block until it does.

func MatMul Uses

func MatMul(scope *Scope, a tf.Output, b tf.Output, optional ...MatMulAttr) (product tf.Output)

Multiply the matrix "a" by the matrix "b".

The inputs must be two-dimensional matrices and the inner dimension of "a" (after being transposed if transpose_a is true) must match the outer dimension of "b" (after being transposed if transpose_b is true).

*Note*: The default kernel implementation for MatMul on GPUs uses cublas.

func MatchingFiles Uses

func MatchingFiles(scope *Scope, pattern tf.Output) (filenames tf.Output)

Returns the set of files matching one or more glob patterns.

Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion. Note also that the order of filenames returned can be non-deterministic.

Arguments:

pattern: Shell wildcard pattern(s). Scalar or vector of type string.

Returns A vector of matching filenames.

func MatrixBandPart Uses

func MatrixBandPart(scope *Scope, input tf.Output, num_lower tf.Output, num_upper tf.Output) (band tf.Output)

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The `band` part is computed as follows: Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a tensor with the same shape where

`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.

The indicator function

`in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper)`.

For example:

```
# if 'input' is [[ 0,  1,  2, 3]
                 [-1,  0,  1, 2]
                 [-2, -1,  0, 1]
                 [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]
```

Useful special cases:

```
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0) ==> Diagonal.
```

Arguments:

input: Rank `k` tensor.
num_lower: 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle.
num_upper: 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle.

Returns Rank `k` tensor of the same shape as input. The extracted banded tensor.
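
The first example above as a minimal sketch in the style of the package example:

s := NewScope()
input := Const(s, [][]float32{
    {0, 1, 2, 3},
    {-1, 0, 1, 2},
    {-2, -1, 0, 1},
    {-3, -2, -1, 0},
})
band := MatrixBandPart(s, input, Const(s, int64(1)), Const(s, int64(-1)))
// Keeps one subdiagonal and the entire upper triangle, as in
// tf.matrix_band_part(input, 1, -1) above.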

func MatrixDeterminant Uses

func MatrixDeterminant(scope *Scope, input tf.Output) (output tf.Output)

Computes the determinant of one or more square matrices.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices `[..., :, :]`.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[...]`.

func MatrixDiag Uses

func MatrixDiag(scope *Scope, diagonal tf.Output) (output tf.Output)

Returns a batched diagonal tensor with given batched diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:

`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.

For example:

```
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
# and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]

# which has shape (2, 4, 4)
```

Arguments:

diagonal: Rank `k`, where `k >= 1`.

Returns Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.

func MatrixDiagPart Uses

func MatrixDiagPart(scope *Scope, input tf.Output) (diagonal tf.Output)

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the `diagonal` part of the batched `input`. The `diagonal` part is computed as follows:

Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a tensor of rank `k - 1` with dimensions `[I, J, K, ..., min(M, N)]` where:

`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`.

The input must be at least a matrix.

For example:

```
# 'input' is [[[1, 0, 0, 0]
               [0, 2, 0, 0]
               [0, 0, 3, 0]
               [0, 0, 0, 4]],
              [[5, 0, 0, 0]
               [0, 6, 0, 0]
               [0, 0, 7, 0]
               [0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)

tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]

# which has shape (2, 4)
```

Arguments:

input: Rank `k` tensor where `k >= 2`.

Returns The extracted diagonal(s) having shape `diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])]`.

func MatrixExponential Uses

func MatrixExponential(scope *Scope, input tf.Output) (output tf.Output)

Deprecated, use python implementation tf.linalg.matrix_exponential.

DEPRECATED at GraphDef version 27: Use Python implementation tf.linalg.matrix_exponential instead.

func MatrixInverse Uses

func MatrixInverse(scope *Scope, input tf.Output, optional ...MatrixInverseAttr) (output tf.Output)

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices `[..., :, :]`.

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[..., M, M]`.

@compatibility(numpy) Equivalent to np.linalg.inv @end_compatibility

func MatrixLogarithm Uses

func MatrixLogarithm(scope *Scope, input tf.Output) (output tf.Output)

Computes the matrix logarithm of one or more square matrices:

\\(log(exp(A)) = A\\)

This op is only defined for complex matrices. If A is positive-definite and real, then casting to a complex matrix, taking the logarithm and casting back to a real matrix will give the correct result.

This function computes the matrix logarithm using the Schur-Parlett algorithm. Details of the algorithm can be found in Section 11.6.2 of: Nicholas J. Higham, Functions of Matrices: Theory and Computation, SIAM 2008. ISBN 978-0-898716-46-7.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the logarithm for all input submatrices `[..., :, :]`.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[..., M, M]`.

@compatibility(scipy) Equivalent to scipy.linalg.logm @end_compatibility

func MatrixSetDiag Uses

func MatrixSetDiag(scope *Scope, input tf.Output, diagonal tf.Output) (output tf.Output)

Returns a batched matrix tensor with new batched diagonal values.

Given `input` and `diagonal`, this operation returns a tensor with the same shape and values as `input`, except for the main diagonal of the innermost matrices. These will be overwritten by the values in `diagonal`.

The output is computed as follows:

Assume `input` has `k+1` dimensions `[I, J, K, ..., M, N]` and `diagonal` has `k` dimensions `[I, J, K, ..., min(M, N)]`. Then the output is a tensor of rank `k+1` with dimensions `[I, J, K, ..., M, N]` where:

* `output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`.
* `output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.

Arguments:

input: Rank `k+1`, where `k >= 1`.
diagonal: Rank `k`, where `k >= 1`.

Returns Rank `k+1`, with `output.shape = input.shape`.

func MatrixSolve Uses

func MatrixSolve(scope *Scope, matrix tf.Output, rhs tf.Output, optional ...MatrixSolveAttr) (output tf.Output)

Solves systems of linear equations.

`Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output matrix satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. If `adjoint` is `True` then each output matrix satisfies `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.

Arguments:

matrix: Shape is `[..., M, M]`.
rhs: Shape is `[..., M, K]`.

Returns Shape is `[..., M, K]`.
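
A minimal sketch in the style of the package example (a diagonal system chosen so the answer is easy to verify):

s := NewScope()
matrix := Const(s, [][]float32{{2, 0}, {0, 4}})
rhs := Const(s, [][]float32{{2}, {8}})
x := MatrixSolve(s, matrix, rhs)
// Solves matrix * x = rhs; when the graph is run, x evaluates to [[1], [2]].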

func MatrixSolveLs Uses

func MatrixSolveLs(scope *Scope, matrix tf.Output, rhs tf.Output, l2_regularizer tf.Output, optional ...MatrixSolveLsAttr) (output tf.Output)

Solves one or more linear least-squares problems.

`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions form real or complex matrices of size `[M, N]`. `Rhs` is a tensor of the same type as `matrix` and shape `[..., M, K]`. The output is a tensor of shape `[..., N, K]` where each output matrix solves each of the equations `matrix[..., :, :]` * `output[..., :, :]` = `rhs[..., :, :]` in the least squares sense.

We use the following notation for (complex) matrix and right-hand sides in the batch:

`matrix`=\\(A \in \mathbb{C}^{m \times n}\\), `rhs`=\\(B \in \mathbb{C}^{m \times k}\\), `output`=\\(X \in \mathbb{C}^{n \times k}\\), `l2_regularizer`=\\(\lambda \in \mathbb{R}\\).

If `fast` is `True`, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then \\(X = (A^H A + \lambda I)^{-1} A^H B\\), which solves the least-squares problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k} } ||A Z - B||_F^2 + \lambda ||Z||_F^2\\). If \\(m \lt n\\) then `output` is computed as \\(X = A^H (A A^H + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is the minimum-norm solution to the under-determined linear system, i.e. \\(X = \mathrm{argmin}_{Z \in \mathbb{C}^{n \times k} } ||Z||_F^2 \\), subject to \\(A Z = B\\). Notice that the fast path is only numerically stable when \\(A\\) is numerically full rank and has a condition number \\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach} } }\\) or \\(\lambda\\) is sufficiently large.

If `fast` is `False` an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \\(A\\) is rank deficient. This path is typically 6-7 times slower than the fast path. If `fast` is `False` then `l2_regularizer` is ignored.

Arguments:

matrix: Shape is `[..., M, N]`.
rhs: Shape is `[..., M, K]`.
l2_regularizer: Scalar tensor.

@compatibility(numpy) Equivalent to np.linalg.lstsq @end_compatibility

Returns Shape is `[..., N, K]`.

func MatrixSquareRoot Uses

func MatrixSquareRoot(scope *Scope, input tf.Output) (output tf.Output)

Computes the matrix square root of one or more square matrices:

matmul(sqrtm(A), sqrtm(A)) = A

The input matrix should be invertible. If the input matrix is real, it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed).

The matrix square root is computed by first reducing the matrix to quasi-triangular form with the real Schur decomposition. The square root of the quasi-triangular matrix is then computed directly. Details of the algorithm can be found in: Nicholas J. Higham, "Computing real square roots of a real matrix", Linear Algebra Appl., 1987.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the matrix square root for all input submatrices `[..., :, :]`.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[..., M, M]`.

@compatibility(scipy) Equivalent to scipy.linalg.sqrtm @end_compatibility

func MatrixTriangularSolve Uses

func MatrixTriangularSolve(scope *Scope, matrix tf.Output, rhs tf.Output, optional ...MatrixTriangularSolveAttr) (output tf.Output)

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. If `lower` is `True` then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If `lower` is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. `rhs` is a tensor of shape `[..., M, K]`.

The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then the innermost matrices in `output` satisfy matrix equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. If `adjoint` is `True` then the innermost matrices in `output` satisfy matrix equations `adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.

Arguments:

matrix: Shape is `[..., M, M]`.
rhs: Shape is `[..., M, K]`.

Returns Shape is `[..., M, K]`.

func Max Uses

func Max(scope *Scope, input tf.Output, axis tf.Output, optional ...MaxAttr) (output tf.Output)

Computes the maximum of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
axis: The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.

Returns The reduced tensor.
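
A minimal sketch in the style of the package example (values illustrative):

s := NewScope()
input := Const(s, [][]float32{{1, 5}, {4, 2}})
m := Max(s, input, Const(s, int32(0)))
// Reduces over axis 0 (rows); when the graph is run, m evaluates to [4, 5].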

func MaxPool Uses

func MaxPool(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolAttr) (output tf.Output)

Performs max pooling on the input.

Arguments:

input: 4-D input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns The max pooled output tensor.
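
A minimal sketch in the style of the package example; the window, stride, and padding values are illustrative:

s := NewScope()
image := Placeholder(s, tf.Float) // 4-D [batch, height, width, channels], fed at Session.Run
pooled := MaxPool(s, image,
    []int64{1, 2, 2, 1}, // 2x2 window over height and width
    []int64{1, 2, 2, 1}, // stride 2 over height and width
    "VALID")
// pooled halves the spatial dimensions of whatever image is fed in.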

func MaxPool3D Uses

func MaxPool3D(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPool3DAttr) (output tf.Output)

Performs 3D max pooling on the input.

Arguments:

input: Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
ksize: 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns The max pooled output tensor.

func MaxPool3DGrad Uses

func MaxPool3DGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPool3DGradAttr) (output tf.Output)

Computes gradients of max pooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func MaxPool3DGradGrad Uses

func MaxPool3DGradGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPool3DGradGradAttr) (output tf.Output)

Computes second-order gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: 1-D tensor of length 5. The size of the window for each dimension of the input tensor. Must have `ksize[0] = ksize[4] = 1`.
strides: 1-D tensor of length 5. The stride of the sliding window for each dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns Gradients of gradients w.r.t. the input to `max_pool`.

func MaxPoolGrad Uses

func MaxPoolGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolGradAttr) (output tf.Output)

Computes gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: 4-D.  Gradients w.r.t. the output of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns Gradients w.r.t. the input to `max_pool`.

func MaxPoolGradGrad Uses

func MaxPoolGradGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolGradGradAttr) (output tf.Output)

Computes second-order gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: 4-D.  Gradients of gradients w.r.t. the input of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns Gradients of gradients w.r.t. the input to `max_pool`.

func MaxPoolGradGradV2 Uses

func MaxPoolGradGradV2(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize tf.Output, strides tf.Output, padding string, optional ...MaxPoolGradGradV2Attr) (output tf.Output)

Computes second-order gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: 4-D.  Gradients of gradients w.r.t. the input of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns Gradients of gradients w.r.t. the input to `max_pool`.

func MaxPoolGradGradWithArgmax Uses

func MaxPoolGradGradWithArgmax(scope *Scope, input tf.Output, grad tf.Output, argmax tf.Output, ksize []int64, strides []int64, padding string) (output tf.Output)

Computes second-order gradients of the maxpooling function.

Arguments:

input: The original input.
grad: 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the input of `max_pool`.
argmax: The indices of the maximum values chosen for each output of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns Gradients of gradients w.r.t. the input of `max_pool`.

func MaxPoolGradV2 Uses

func MaxPoolGradV2(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize tf.Output, strides tf.Output, padding string, optional ...MaxPoolGradV2Attr) (output tf.Output)

Computes gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: 4-D.  Gradients w.r.t. the output of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns Gradients w.r.t. the input to `max_pool`.

func MaxPoolGradWithArgmax Uses

func MaxPoolGradWithArgmax(scope *Scope, input tf.Output, grad tf.Output, argmax tf.Output, ksize []int64, strides []int64, padding string) (output tf.Output)

Computes gradients of the maxpooling function.

Arguments:

input: The original input.
grad: 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the output of `max_pool`.
argmax: The indices of the maximum values chosen for each output of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns Gradients w.r.t. the input of `max_pool`.

func MaxPoolV2 Uses

func MaxPoolV2(scope *Scope, input tf.Output, ksize tf.Output, strides tf.Output, padding string, optional ...MaxPoolV2Attr) (output tf.Output)

Performs max pooling on the input.

Arguments:

input: 4-D input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns The max pooled output tensor.
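
Because `ksize` and `strides` are tensors here rather than fixed attributes, the window and stride can themselves be produced by other graph computations. A minimal construction sketch, assuming 1-D `int32` tensors of length 4 for the window and strides (the `[1, 2, 2, 1]` values are illustrative, pooling 2x2 windows over height and width):

```
s := NewScope()
img := Placeholder(s, tf.Float)          // 4-D [batch, height, width, channels] input
ksize := Const(s, []int32{1, 2, 2, 1})   // 2x2 window over the spatial dimensions
strides := Const(s, []int32{1, 2, 2, 1}) // move the window by 2 in height and width
pooled := MaxPoolV2(s, img, ksize, strides, "SAME")
if s.Err() != nil {
    panic(s.Err())
}
_ = pooled
```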

func MaxPoolWithArgmax Uses

func MaxPoolWithArgmax(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolWithArgmaxAttr) (output tf.Output, argmax tf.Output)

Performs max pooling on the input and outputs both max values and indices.

The indices in `argmax` are flattened, so that a maximum value at position `[b, y, x, c]` becomes flattened index `((b * height + y) * width + x) * channels + c`.

The indices returned are always in `[0, height) x [0, width)` before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.

Arguments:

input: 4-D with shape `[batch, height, width, channels]`.  Input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the input tensor.

padding: The type of padding algorithm to use.

Returns The max pooled output tensor, and the 4-D flattened indices of the max values chosen for each output.
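
To make the flattening rule concrete, the index stored in `argmax` for a maximum at `[b, y, x, c]` can be recomputed by hand. A small sketch of that arithmetic (plain Go; the shape and position values are illustrative):

```
// Illustrative shape [batch, height=4, width=4, channels=3],
// with the maximum located at b=1, y=2, x=3, c=1.
height, width, channels := int64(4), int64(4), int64(3)
b, y, x, c := int64(1), int64(2), int64(3), int64(1)
flat := ((b*height+y)*width+x)*channels + c
fmt.Println(flat) // 82
```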

func Maximum Uses

func Maximum(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the max of x and y (i.e. x > y ? x : y) element-wise.

*NOTE*: `Maximum` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
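
A short construction sketch of the broadcasting behaviour, using an illustrative scalar second operand:

```
s := NewScope()
x := Const(s, [][]float32{{1, 5}, {3, 2}})
y := Const(s, float32(2.5)) // the scalar broadcasts against the 2x2 matrix
z := Maximum(s, x, y)       // element-wise max: [[2.5, 5], [3, 2.5]]
_ = z
```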

func Mean Uses

func Mean(scope *Scope, input tf.Output, axis tf.Output, optional ...MeanAttr) (output tf.Output)

Computes the mean of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
axis: The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.

Returns The reduced tensor.
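
A minimal sketch of a reduction over the first dimension, assuming an `int32` axis tensor; `Min` below follows the same calling pattern:

```
s := NewScope()
x := Const(s, [][]float32{{1, 2}, {3, 4}})
axis := Const(s, []int32{0}) // reduce across rows
m := Mean(s, x, axis)        // evaluates to [2, 3]
_ = m
```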

func Merge Uses

func Merge(scope *Scope, inputs []tf.Output) (output tf.Output, value_index tf.Output)

Forwards the value of an available tensor from `inputs` to `output`.

`Merge` waits for at least one of the tensors in `inputs` to become available. It is usually combined with `Switch` to implement branching.

`Merge` forwards the first tensor to become available to `output`, and sets `value_index` to its index in `inputs`.

Arguments:

inputs: The input tensors, exactly one of which will become available.

Returns The available input tensor forwarded to `output`, and the index of the chosen input tensor in `inputs`.
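
In practice the inputs usually come from the two branches of a `Switch`, so only one of them ever becomes available; the `Const` inputs below are just a construction sketch showing the two results:

```
s := NewScope()
a := Const(s, int32(1))
b := Const(s, int32(2))
out, idx := Merge(s, []tf.Output{a, b}) // out: the first available tensor; idx: its position in the slice
_, _ = out, idx
```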

func MergeSummary Uses

func MergeSummary(scope *Scope, inputs []tf.Output) (summary tf.Output)

Merges summaries.

This op creates a [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) protocol buffer that contains the union of all the values in the input summaries.

When the Op is run, it reports an `InvalidArgument` error if multiple values in the summaries to merge use the same tag.

Arguments:

inputs: Can be of any shape. Each must contain serialized `Summary` protocol buffers.

Returns Scalar. Serialized `Summary` protocol buffer.

func MergeV2Checkpoints Uses

func MergeV2Checkpoints(scope *Scope, checkpoint_prefixes tf.Output, destination_prefix tf.Output, optional ...MergeV2CheckpointsAttr) (o *tf.Operation)

V2 format specific: merges the metadata files of sharded checkpoints. The result is one logical checkpoint, with one physical metadata file and renamed data files.

Intended for "grouping" multiple checkpoints in a sharded checkpoint setup.

If delete_old_dirs is true, attempts to recursively delete the dirname of each path in the input checkpoint_prefixes. This is useful when those paths are non-user-facing temporary locations.

Arguments:

checkpoint_prefixes: prefixes of V2 checkpoints to merge.
destination_prefix: scalar. The desired final prefix. Allowed to be the same as one of the checkpoint_prefixes.

Returns the created operation.
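
A minimal construction sketch, with hypothetical checkpoint paths; both inputs are string tensors:

```
s := NewScope()
prefixes := Const(s, []string{"/tmp/ckpt-0", "/tmp/ckpt-1"}) // hypothetical shard prefixes
dest := Const(s, "/tmp/ckpt")                                // desired final prefix
merged := MergeV2Checkpoints(s, prefixes, dest)
_ = merged
```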

func Mfcc Uses

func Mfcc(scope *Scope, spectrogram tf.Output, sample_rate tf.Output, optional ...MfccAttr) (output tf.Output)

Transforms a spectrogram into a form that's useful for speech recognition.

Mel Frequency Cepstral Coefficients are a way of representing audio data that's been effective as an input feature for machine learning. They are created by taking the spectrum of a spectrogram (a 'cepstrum'), and discarding some of the higher frequencies that are less significant to the human ear. They have a long history in the speech recognition world, and https://en.wikipedia.org/wiki/Mel-frequency_cepstrum is a good resource to learn more.

Arguments:

spectrogram: Typically produced by the Spectrogram op, with magnitude_squared set to true.

sample_rate: The number of samples per second in the source audio.
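
A minimal construction sketch, assuming the spectrogram arrives via a placeholder and that `sample_rate` is a scalar `int32` (the 16 kHz value is illustrative):

```
s := NewScope()
spectrogram := Placeholder(s, tf.Float) // e.g. the output of a Spectrogram op with magnitude_squared=true
sampleRate := Const(s, int32(16000))    // source audio sample rate
coeffs := Mfcc(s, spectrogram, sampleRate)
_ = coeffs
```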

func Min Uses

func Min(scope *Scope, input tf.Output, axis tf.Output, optional ...MinAttr) (output tf.Output)

Computes the minimum of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `axis`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `axis`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
axis: The dimensions to reduce. Must be in the range `[-rank(input), rank(input))`.

Returns The reduced tensor.

func Minimum Uses

func Minimum(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the min of x and y (i.e. x < y ? x : y) element-wise.

*NOTE*: `Minimum` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func MirrorPad Uses

func MirrorPad(scope *Scope, input tf.Output, paddings tf.Output, mode string) (output tf.Output)

Pads a tensor with mirrored values.

This operation pads `input` with mirrored values according to the `paddings` you specify. `paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates how many values to add before the contents of `input` in that dimension, and `paddings[D, 1]` indicates how many values to add after the contents of `input` in that dimension. Both `paddings[D, 0]` and `paddings[D, 1]` must be no greater than `input.dim_size(D)` if `copy_border` is true, or no greater than `input.dim_size(D) - 1` if it is false.

The padded size of each dimension D of the output is:

`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`

For example:

```
# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1], [2, 2]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2]
                      [2, 1, 1, 2, 3, 3, 2]
                      [5, 4, 4, 5, 6, 6, 5]
                      [5, 4, 4, 5, 6, 6, 5]]
```

Arguments:

input: The input tensor to be padded.
paddings: A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of `input`.

mode: Either `REFLECT` or `SYMMETRIC`. In reflect mode the padded regions do not include the borders, while in symmetric mode the padded regions do include the borders. For example, if `input` is `[1, 2, 3]` and `paddings` is `[0, 2]`, then the output is `[1, 2, 3, 2, 1]` in reflect mode, and it is `[1, 2, 3, 3, 2]` in symmetric mode (see the sketch after this entry).

Returns The padded tensor.
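
The `REFLECT` example from the `mode` description, as a construction sketch:

```
s := NewScope()
t := Const(s, []int32{1, 2, 3})
paddings := Const(s, [][]int32{{0, 2}})        // one [before, after] row per dimension of t
padded := MirrorPad(s, t, paddings, "REFLECT") // evaluates to [1, 2, 3, 2, 1]
_ = padded
```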

func MirrorPadGrad Uses

func MirrorPadGrad(scope *Scope, input tf.Output, paddings tf.Output, mode string) (output tf.Output)

Gradient op for `MirrorPad` op. This op folds a mirror-padded tensor.

This operation folds the padded areas of `input` by `MirrorPad` according to the `paddings` you specify. `paddings` must be the same as `paddings` argument given to the corresponding `MirrorPad` op.

The folded size of each dimension D of the output is:

`input.dim_size(D) - paddings(D, 0) - paddings(D, 1)`

For example:

```
# 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
# 'paddings' is [[0, 1], [0, 1]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[ 1,  5]
                      [11, 28]]
```

Arguments:

input: The input tensor to be folded.
paddings: A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of `input`.

mode: The mode used in the `MirrorPad` op.

Returns The folded tensor.

func Mod Uses

func Mod(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns element-wise remainder of division. This emulates C semantics in that the result here is consistent with a truncating divide. E.g. `tf.truncatediv(x, y) * y + truncate_mod(x, y) = x`.

*NOTE*: `Mod` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
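
A small sketch of the truncating semantics, using illustrative `int32` operands:

```
s := NewScope()
x := Const(s, []int32{7, -7})
y := Const(s, []int32{3, 3})
z := Mod(s, x, y) // truncated remainder: [1, -1] (the sign follows the dividend)
_ = z
```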

func ModelDataset Uses

func ModelDataset(scope *Scope, input_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Identity transformation that models performance.

Arguments:

input_dataset: A variant tensor representing the input dataset.

func Mul Uses

func Mul(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x * y element-wise.

*NOTE*: `Mul` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func MultiDeviceIterator Uses

func MultiDeviceIterator(scope *Scope, devices []string, shared_name string, container string, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a MultiDeviceIterator resource.