tensorflow: github.com/tensorflow/tensorflow/tensorflow/go/op

package op

import "github.com/tensorflow/tensorflow/tensorflow/go/op"

Package op defines functions for adding TensorFlow operations to a Graph.

Functions for adding an operation to a graph take a Scope object as the first argument. The Scope object encapsulates a graph and a set of properties (such as a name prefix) for all operations being added to the graph.

WARNING: The API in this package has not been finalized and can change without notice.

Code:

// This example creates a Graph that multiplies a constant matrix with
// a matrix to be provided during graph execution (via
// tensorflow.Session).
s := NewScope()
input := Placeholder(s, tf.Float) // Matrix to be provided to Session.Run
output := MatMul(s,
    Const(s, [][]float32{{10}, {20}}), // Constant 2x1 matrix
    input,
    MatMulTransposeB(true))
if s.Err() != nil {
    panic(s.Err())
}
// Shape of the product: The number of rows is fixed by m1, but the
// number of columns will depend on m2, which is unknown.
fmt.Println(output.Shape())

Output:

[2, ?]

Package Files

generate.go op.go scope.go wrappers.go

func Abort

func Abort(scope *Scope, optional ...AbortAttr) (o *tf.Operation)

Raises an exception to abort the process when called.

If exit_without_error is true, the process will exit normally; otherwise it will exit with a SIGABRT signal.

Returns nothing but an exception.

Returns the created operation.

func Abs

func Abs(scope *Scope, x tf.Output) (y tf.Output)

Computes the absolute value of a tensor.

Given a tensor `x`, this operation returns a tensor containing the absolute value of each element in `x`. For example, if x is an input element and y is an output element, this operation computes \\(y = |x|\\).

func Acos

func Acos(scope *Scope, x tf.Output) (y tf.Output)

Computes acos of x element-wise.

func Acosh

func Acosh(scope *Scope, x tf.Output) (y tf.Output)

Computes inverse hyperbolic cosine of x element-wise.

func Add

func Add(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x + y element-wise.

*NOTE*: `Add` supports broadcasting. `AddN` does not. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
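
For illustration, a minimal sketch of the broadcasting behavior, in the same in-package style as the package example above (values are illustrative only):

    s := NewScope()
    m := Const(s, [][]float32{{1, 2}, {3, 4}}) // 2x2 matrix
    b := Const(s, float32(10))                 // scalar, broadcast across m
    sum := Add(s, m, b)                        // {{11, 12}, {13, 14}} when run
    if s.Err() != nil {
        panic(s.Err())
    }
    fmt.Println(sum.Shape()) // [2, 2]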

func AddManySparseToTensorsMap

func AddManySparseToTensorsMap(scope *Scope, sparse_indices tf.Output, sparse_values tf.Output, sparse_shape tf.Output, optional ...AddManySparseToTensorsMapAttr) (sparse_handles tf.Output)

Add an `N`-minibatch `SparseTensor` to a `SparseTensorsMap`, return `N` handles.

A `SparseTensor` of rank `R` is represented by three tensors: `sparse_indices`, `sparse_values`, and `sparse_shape`, where

`sparse_indices.shape[1] == sparse_shape.shape[0] == R`

An `N`-minibatch of `SparseTensor` objects is represented as a `SparseTensor` having a first `sparse_indices` column taking values between `[0, N)`, where the minibatch size `N == sparse_shape[0]`.

The input `SparseTensor` must have rank `R` greater than 1, and the first dimension is treated as the minibatch dimension. Elements of the `SparseTensor` must be sorted in increasing order of this first dimension. The stored `SparseTensor` objects pointed to by each row of the output `sparse_handles` will have rank `R-1`.

The `SparseTensor` values can then be read out as part of a minibatch by passing the given keys as vector elements to `TakeManySparseFromTensorsMap`. To ensure the correct `SparseTensorsMap` is accessed, ensure that the same `container` and `shared_name` are passed to that Op. If no `shared_name` is provided here, instead use the *name* of the Operation created by calling `AddManySparseToTensorsMap` as the `shared_name` passed to `TakeManySparseFromTensorsMap`. Ensure the Operations are colocated.

Arguments:

sparse_indices: 2-D.  The `indices` of the minibatch `SparseTensor`.

`sparse_indices[:, 0]` must be ordered values in `[0, N)`.

sparse_values: 1-D.  The `values` of the minibatch `SparseTensor`.
sparse_shape: 1-D.  The `shape` of the minibatch `SparseTensor`.

The minibatch size `N == sparse_shape[0]`.

Returns 1-D. The handles of the `SparseTensor` now stored in the `SparseTensorsMap`. Shape: `[N]`.

func AddN

func AddN(scope *Scope, inputs []tf.Output) (sum tf.Output)

Add all input tensors element-wise.

Arguments:

inputs: Must all be the same size and shape.

func AddSparseToTensorsMap

func AddSparseToTensorsMap(scope *Scope, sparse_indices tf.Output, sparse_values tf.Output, sparse_shape tf.Output, optional ...AddSparseToTensorsMapAttr) (sparse_handle tf.Output)

Add a `SparseTensor` to a `SparseTensorsMap` and return its handle.

A `SparseTensor` is represented by three tensors: `sparse_indices`, `sparse_values`, and `sparse_shape`.

This operator takes the given `SparseTensor` and adds it to a container object (a `SparseTensorsMap`). A unique key within this container is generated in the form of an `int64`, and this is the value that is returned.

The `SparseTensor` can then be read out as part of a minibatch by passing the key as a vector element to `TakeManySparseFromTensorsMap`. To ensure the correct `SparseTensorsMap` is accessed, ensure that the same `container` and `shared_name` are passed to that Op. If no `shared_name` is provided here, instead use the *name* of the Operation created by calling `AddSparseToTensorsMap` as the `shared_name` passed to `TakeManySparseFromTensorsMap`. Ensure the Operations are colocated.

Arguments:

sparse_indices: 2-D.  The `indices` of the `SparseTensor`.
sparse_values: 1-D.  The `values` of the `SparseTensor`.
sparse_shape: 1-D.  The `shape` of the `SparseTensor`.

Returns 0-D. The handle of the `SparseTensor` now stored in the `SparseTensorsMap`.

func AdjustContrast

func AdjustContrast(scope *Scope, images tf.Output, contrast_factor tf.Output, min_value tf.Output, max_value tf.Output) (output tf.Output)

Deprecated. Disallowed in GraphDef version >= 2.

DEPRECATED at GraphDef version 2: Use AdjustContrastv2 instead

func AdjustContrastv2

func AdjustContrastv2(scope *Scope, images tf.Output, contrast_factor tf.Output) (output tf.Output)

Adjust the contrast of one or more images.

`images` is a tensor of at least 3 dimensions. The last 3 dimensions are interpreted as `[height, width, channels]`. The other dimensions only represent a collection of images, such as `[batch, height, width, channels]`.

Contrast is adjusted independently for each channel of each image.

For each channel, the Op first computes the mean of the image pixels in the channel and then adjusts each component of each pixel to `(x - mean) * contrast_factor + mean`.

Arguments:

images: Images to adjust.  At least 3-D.
contrast_factor: A float multiplier for adjusting contrast.

Returns The contrast-adjusted image or images.

func AdjustHue

func AdjustHue(scope *Scope, images tf.Output, delta tf.Output) (output tf.Output)

Adjust the hue of one or more images.

`images` is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A delta is then applied to all the hue values, and the values are then remapped back to RGB colorspace.

Arguments:

images: Images to adjust.  At least 3-D.
delta: A float delta to add to the hue.

Returns The hue-adjusted image or images.

func AdjustSaturation

func AdjustSaturation(scope *Scope, images tf.Output, scale tf.Output) (output tf.Output)

Adjust the saturation of one or more images.

`images` is a tensor of at least 3 dimensions. The last dimension is interpreted as channels, and must be three.

The input image is considered in the RGB colorspace. Conceptually, the RGB colors are first mapped into HSV. A scale is then applied to all the saturation values, and the values are then remapped back to RGB colorspace.

Arguments:

images: Images to adjust.  At least 3-D.
scale: A float scale factor applied to the saturation.

Returns The saturation-adjusted image or images.

func All

func All(scope *Scope, input tf.Output, reduction_indices tf.Output, optional ...AllAttr) (output tf.Output)

Computes the "logical and" of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
reduction_indices: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.
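
For example, a sketch of reducing a boolean matrix across dimension 1 (values are illustrative, not from the op documentation):

    s := NewScope()
    x := Const(s, [][]bool{{true, false}, {true, true}})
    // For each row, is every element true?
    y := All(s, x, Const(s, int32(1))) // [false, true] when run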

func AllCandidateSampler

func AllCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, optional ...AllCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a learned unigram distribution.

See explanations of candidate sampling and the data formats at go/candidate-sampling.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the

IDs of the num_true target_classes in the corresponding original label.

num_true: Number of true labels per context.
num_sampled: Number of candidates to produce.
unique: If unique is true, we sample with rejection, so that all sampled

candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.

Returns:

A vector of length num_sampled, in which each element is the ID of a sampled candidate.

A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.

A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.

func Any

func Any(scope *Scope, input tf.Output, reduction_indices tf.Output, optional ...AnyAttr) (output tf.Output)

Computes the "logical or" of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
reduction_indices: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.

func ApproximateEqual

func ApproximateEqual(scope *Scope, x tf.Output, y tf.Output, optional ...ApproximateEqualAttr) (z tf.Output)

Returns the truth value of abs(x-y) < tolerance element-wise.

func ArgMax

func ArgMax(scope *Scope, input tf.Output, dimension tf.Output, optional ...ArgMaxAttr) (output tf.Output)

Returns the index with the largest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Arguments:

dimension: int32 or int64, must be in the range `[-rank(input), rank(input))`.

Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
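
For example, a sketch that finds the index of the largest value in each row of a matrix (illustrative values):

    s := NewScope()
    x := Const(s, [][]float32{{1, 9, 3}, {7, 5, 2}})
    idx := ArgMax(s, x, Const(s, int32(1))) // reduce across dimension 1: [1, 0] when run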

func ArgMin

func ArgMin(scope *Scope, input tf.Output, dimension tf.Output, optional ...ArgMinAttr) (output tf.Output)

Returns the index with the smallest value across dimensions of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Arguments:

dimension: int32 or int64, must be in the range `[-rank(input), rank(input))`.

Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.

func AsString

func AsString(scope *Scope, input tf.Output, optional ...AsStringAttr) (output tf.Output)

Converts each entry in the given tensor to strings. Supports many numeric types and boolean.

func Asin

func Asin(scope *Scope, x tf.Output) (y tf.Output)

Computes asin of x element-wise.

func Asinh

func Asinh(scope *Scope, x tf.Output) (y tf.Output)

Computes inverse hyperbolic sine of x element-wise.

func Assert

func Assert(scope *Scope, condition tf.Output, data []tf.Output, optional ...AssertAttr) (o *tf.Operation)

Asserts that the given condition is true.

If `condition` evaluates to false, print the list of tensors in `data`. `summarize` determines how many entries of the tensors to print.

Arguments:

condition: The condition to evaluate.
data: The tensors to print out when condition is false.

Returns the created operation.

func AssignAddVariableOp

func AssignAddVariableOp(scope *Scope, resource tf.Output, value tf.Output) (o *tf.Operation)

Adds a value to the current value of a variable.

Any ReadVariableOp which depends directly or indirectly on this assign is guaranteed to see the incremented value or a subsequent newer one.

Outputs the incremented value, which can be used to totally order the increments to this variable.

Arguments:

resource: handle to the resource in which to store the variable.
value: the value by which the variable will be incremented.

Returns the created operation.

func AssignSubVariableOp

func AssignSubVariableOp(scope *Scope, resource tf.Output, value tf.Output) (o *tf.Operation)

Subtracts a value from the current value of a variable.

Any ReadVariableOp which depends directly or indirectly on this assign is guaranteed to see the decremented value or a subsequent newer one.

Outputs the decremented value, which can be used to totally order the decrements to this variable.

Arguments:

resource: handle to the resource in which to store the variable.
value: the value by which the variable will be decremented.

Returns the created operation.

func AssignVariableOp

func AssignVariableOp(scope *Scope, resource tf.Output, value tf.Output) (o *tf.Operation)

Assigns a new value to a variable.

Any ReadVariableOp with a control dependency on this op is guaranteed to return this value or a subsequent newer value of the variable.

Arguments:

resource: handle to the resource in which to store the variable.
value: the value to which the variable will be set.

Returns the created operation.

func Atan

func Atan(scope *Scope, x tf.Output) (y tf.Output)

Computes atan of x element-wise.

func Atan2

func Atan2(scope *Scope, y tf.Output, x tf.Output) (z tf.Output)

Computes arctangent of `y/x` element-wise, respecting signs of the arguments.

This is the angle \( \theta \in [-\pi, \pi] \) such that \( x = r \cos(\theta) \) and \( y = r \sin(\theta) \), where \( r = \sqrt{x^2 + y^2} \).

func Atanh

func Atanh(scope *Scope, x tf.Output) (y tf.Output)

Computes inverse hyperbolic tangent of x element-wise.

func AudioSpectrogram

func AudioSpectrogram(scope *Scope, input tf.Output, window_size int64, stride int64, optional ...AudioSpectrogramAttr) (spectrogram tf.Output)

Produces a visualization of audio data over time.

Spectrograms are a standard way of representing audio information as a series of slices of frequency information, one slice for each window of time. By joining these together into a sequence, they form a distinctive fingerprint of the sound over time.

This op expects to receive audio data as an input, stored as floats in the range -1 to 1, together with a window width in samples, and a stride specifying how far to move the window between slices. From this it generates a three dimensional output. The lowest dimension has an amplitude value for each frequency during that time slice. The next dimension is time, with successive frequency slices. The final dimension is for the channels in the input, so a stereo audio input would have two here for example.

This means the layout when converted and saved as an image is rotated 90 degrees clockwise from a typical spectrogram. Time is descending down the Y axis, and the frequency decreases from left to right.

Each value in the result represents the square root of the sum of the squares of the real and imaginary parts of an FFT on the current window of samples. In this way, the lowest dimension represents the power of each frequency in the current window, and adjacent windows are concatenated in the next dimension.

To get a more intuitive and visual look at what this operation does, you can run tensorflow/examples/wav_to_spectrogram to read in an audio file and save out the resulting spectrogram as a PNG image.

Arguments:

input: Float representation of audio data.
window_size: How wide the input window is in samples. For the highest efficiency

this should be a power of two, but other values are accepted.

stride: How widely apart the center of adjacent sample windows should be.

Returns 3D representation of the audio frequencies as an image.

func AudioSummary

func AudioSummary(scope *Scope, tag tf.Output, tensor tf.Output, sample_rate float32, optional ...AudioSummaryAttr) (summary tf.Output)

Outputs a `Summary` protocol buffer with audio.

DEPRECATED at GraphDef version 15: Use AudioSummaryV2.

The summary has up to `max_outputs` summary values containing audio. The audio is built from `tensor` which must be 3-D with shape `[batch_size, frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.

The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:

* If `max_outputs` is 1, the summary value tag is '*tag*/audio'.
* If `max_outputs` is greater than 1, the summary value tags are generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.

Arguments:

tag: Scalar. Used to build the `tag` attribute of the summary values.
tensor: 2-D of shape `[batch_size, frames]`.
sample_rate: The sample rate of the signal in hertz.

Returns Scalar. Serialized `Summary` protocol buffer.

func AudioSummaryV2

func AudioSummaryV2(scope *Scope, tag tf.Output, tensor tf.Output, sample_rate tf.Output, optional ...AudioSummaryV2Attr) (summary tf.Output)

Outputs a `Summary` protocol buffer with audio.

The summary has up to `max_outputs` summary values containing audio. The audio is built from `tensor` which must be 3-D with shape `[batch_size, frames, channels]` or 2-D with shape `[batch_size, frames]`. The values are assumed to be in the range of `[-1.0, 1.0]` with a sample rate of `sample_rate`.

The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:

* If `max_outputs` is 1, the summary value tag is '*tag*/audio'.
* If `max_outputs` is greater than 1, the summary value tags are generated sequentially as '*tag*/audio/0', '*tag*/audio/1', etc.

Arguments:

tag: Scalar. Used to build the `tag` attribute of the summary values.
tensor: 2-D of shape `[batch_size, frames]`.
sample_rate: The sample rate of the signal in hertz.

Returns Scalar. Serialized `Summary` protocol buffer.

func AvgPool

func AvgPool(scope *Scope, value tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPoolAttr) (output tf.Output)

Performs average pooling on the input.

Each entry in `output` is the mean of the corresponding size `ksize` window in `value`.

Arguments:

value: 4-D with shape `[batch, height, width, channels]`.
ksize: The size of the sliding window for each dimension of `value`.
strides: The stride of the sliding window for each dimension of `value`.
padding: The type of padding algorithm to use.

Returns The average pooled output tensor.
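
For example, a sketch of 2x2 average pooling with stride 2 over an NHWC input (the input shape is only fixed when the placeholder is fed at run time):

    s := NewScope()
    img := Placeholder(s, tf.Float) // fed with shape [batch, height, width, channels]
    pooled := AvgPool(s, img,
        []int64{1, 2, 2, 1}, // ksize: a 2x2 window over height and width
        []int64{1, 2, 2, 1}, // strides: move the window by 2 in each spatial dim
        "VALID")             // padding algorithm: "VALID" or "SAME"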

func AvgPool3D

func AvgPool3D(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPool3DAttr) (output tf.Output)

Performs 3D average pooling on the input.

Arguments:

input: Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
ksize: 1-D tensor of length 5. The size of the window for each dimension of

the input tensor. Must have `ksize[0] = ksize[4] = 1`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns The average pooled output tensor.

func AvgPool3DGrad

func AvgPool3DGrad(scope *Scope, orig_input_shape tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPool3DGradAttr) (output tf.Output)

Computes gradients of average pooling function.

Arguments:

orig_input_shape: The original input dimensions.
grad: Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: 1-D tensor of length 5. The size of the window for each dimension of

the input tensor. Must have `ksize[0] = ksize[4] = 1`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns The backprop for input.

func AvgPoolGrad

func AvgPoolGrad(scope *Scope, orig_input_shape tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...AvgPoolGradAttr) (output tf.Output)

Computes gradients of the average pooling function.

Arguments:

orig_input_shape: 1-D.  Shape of the original input to `avg_pool`.
grad: 4-D with shape `[batch, height, width, channels]`.  Gradients w.r.t.

the output of `avg_pool`.

ksize: The size of the sliding window for each dimension of the input.
strides: The stride of the sliding window for each dimension of the input.
padding: The type of padding algorithm to use.

Returns 4-D. Gradients w.r.t. the input of `avg_pool`.

func BatchDataset

func BatchDataset(scope *Scope, input_dataset tf.Output, batch_size tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that batches `batch_size` elements from `input_dataset`.

Arguments:

batch_size: A scalar representing the number of elements to accumulate in a

batch.

func BatchMatMul

func BatchMatMul(scope *Scope, x tf.Output, y tf.Output, optional ...BatchMatMulAttr) (output tf.Output)

Multiplies slices of two tensors in batches.

Multiplies all slices of `Tensor` `x` and `y` (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the `adj_x` or `adj_y` flag to `True`, which are by default `False`.

The input tensors `x` and `y` are 2-D or higher with shape `[..., r_x, c_x]` and `[..., r_y, c_y]`.

The output tensor is 2-D or higher with shape `[..., r_o, c_o]`, where:

r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y

It is computed as:

output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])

Arguments:

x: 2-D or higher with shape `[..., r_x, c_x]`.
y: 2-D or higher with shape `[..., r_y, c_y]`.

Returns 3-D or higher with shape `[..., r_o, c_o]`
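
A sketch of the resulting shapes (illustrative; placeholder shapes are only known when fed):

    s := NewScope()
    x := Placeholder(s, tf.Float) // e.g. fed with shape [batch, 2, 3]
    y := Placeholder(s, tf.Float) // e.g. fed with shape [batch, 3, 4]
    z := BatchMatMul(s, x, y)     // each output slice has shape [2, 4]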

func BatchNormWithGlobalNormalization

func BatchNormWithGlobalNormalization(scope *Scope, t tf.Output, m tf.Output, v tf.Output, beta tf.Output, gamma tf.Output, variance_epsilon float32, scale_after_normalization bool) (result tf.Output)

Batch normalization.

DEPRECATED at GraphDef version 9: Use tf.nn.batch_normalization()

This op is deprecated. Prefer `tf.nn.batch_normalization`.

Arguments:

t: A 4D input Tensor.
m: A 1D mean Tensor with size matching the last dimension of t.

This is the first output from tf.nn.moments, or a saved moving average thereof.

v: A 1D variance Tensor with size matching the last dimension of t.

This is the second output from tf.nn.moments, or a saved moving average thereof.

beta: A 1D beta Tensor with size matching the last dimension of t.

An offset to be added to the normalized tensor.

gamma: A 1D gamma Tensor with size matching the last dimension of t.

If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor.

variance_epsilon: A small float number to avoid dividing by 0.
scale_after_normalization: A bool indicating whether the resulted tensor

needs to be multiplied with gamma.

func BatchNormWithGlobalNormalizationGrad

func BatchNormWithGlobalNormalizationGrad(scope *Scope, t tf.Output, m tf.Output, v tf.Output, gamma tf.Output, backprop tf.Output, variance_epsilon float32, scale_after_normalization bool) (dx tf.Output, dm tf.Output, dv tf.Output, db tf.Output, dg tf.Output)

Gradients for batch normalization.

DEPRECATED at GraphDef version 9: Use tf.nn.batch_normalization()

This op is deprecated. See `tf.nn.batch_normalization`.

Arguments:

t: A 4D input Tensor.
m: A 1D mean Tensor with size matching the last dimension of t.

This is the first output from tf.nn.moments, or a saved moving average thereof.

v: A 1D variance Tensor with size matching the last dimension of t.

This is the second output from tf.nn.moments, or a saved moving average thereof.

gamma: A 1D gamma Tensor with size matching the last dimension of t.

If "scale_after_normalization" is true, this Tensor will be multiplied with the normalized Tensor.

backprop: 4D backprop Tensor.
variance_epsilon: A small float number to avoid dividing by 0.
scale_after_normalization: A bool indicating whether the resulted tensor

needs to be multiplied with gamma.

Returns:

4D backprop tensor for input.
1D backprop tensor for mean.
1D backprop tensor for variance.
1D backprop tensor for beta.
1D backprop tensor for gamma.

func BatchToSpace

func BatchToSpace(scope *Scope, input tf.Output, crops tf.Output, block_size int64) (output tf.Output)

BatchToSpace for 4-D tensors of type T.

This is a legacy version of the more general BatchToSpaceND.

Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the `batch` dimension are moved in spatial blocks to the `height` and `width` dimensions, followed by cropping along the `height` and `width` dimensions.

Arguments:

input: 4-D tensor with shape `[batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]`. Note that the batch size of the input tensor must be divisible by `block_size * block_size`.

crops: 2-D tensor of non-negative integers with shape `[2, 2]`. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows:

    crops = [[crop_top, crop_bottom], [crop_left, crop_right]]

Returns 4-D with shape `[batch, height, width, depth]`, where:

    height = height_pad - crop_top - crop_bottom
    width = width_pad - crop_left - crop_right

The attr `block_size` must be greater than one. It indicates the block size.

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]` and block_size of 2:

```
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

The output tensor has shape `[1, 2, 2, 1]` and value:

```
x = [[[[1], [2]], [[3], [4]]]]
```

(2) For the following input of shape `[4, 1, 1, 3]` and block_size of 2:

```
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

The output tensor has shape `[1, 2, 2, 3]` and value:

```
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

(3) For the following input of shape `[4, 2, 2, 1]` and block_size of 2:

```
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

The output tensor has shape `[1, 4, 4, 1]` and value:

```
x = [[[[1],  [2],  [3],  [4]],
      [[5],  [6],  [7],  [8]],
      [[9],  [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

(4) For the following input of shape `[8, 1, 2, 1]` and block_size of 2:

```
x = [[[[1], [3]]], [[[9], [11]]], [[[2], [4]]], [[[10], [12]]],
     [[[5], [7]]], [[[13], [15]]], [[[6], [8]]], [[[14], [16]]]]
```

The output tensor has shape `[2, 2, 4, 1]` and value:

```
x = [[[[1], [2], [3], [4]],
      [[5], [6], [7], [8]]],
     [[[9], [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

func BatchToSpaceND

func BatchToSpaceND(scope *Scope, input tf.Output, block_shape tf.Output, crops tf.Output) (output tf.Output)

BatchToSpace for N-D tensors of type T.

This operation reshapes the "batch" dimension 0 into `M + 1` dimensions of shape `block_shape + [batch]`, interleaves these blocks back into the grid defined by the spatial dimensions `[1, ..., M]`, to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to `crops` to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.

Arguments:

input: N-D with shape `input_shape = [batch] + spatial_shape + remaining_shape`,

where spatial_shape has M dimensions.

block_shape: 1-D with shape `[M]`, all values must be >= 1.
crops: 2-D with shape `[M, 2]`, all values must be >= 0. `crops[i] = [crop_start, crop_end]` specifies the amount to crop from input dimension `i + 1`, which corresponds to spatial dimension `i`. It is required that `crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]`.

This operation is equivalent to the following steps:

1. Reshape `input` to `reshaped` of shape:

    [block_shape[0], ..., block_shape[M-1],
     batch / prod(block_shape),
     input_shape[1], ..., input_shape[N-1]]

2. Permute dimensions of `reshaped` to produce `permuted` of shape:

    [batch / prod(block_shape),
     input_shape[1], block_shape[0],
     ...,
     input_shape[M], block_shape[M-1],
     input_shape[M+1], ..., input_shape[N-1]]

3. Reshape `permuted` to produce `reshaped_permuted` of shape:

    [batch / prod(block_shape),
     input_shape[1] * block_shape[0],
     ...,
     input_shape[M] * block_shape[M-1],
     input_shape[M+1], ..., input_shape[N-1]]

4. Crop the start and end of dimensions `[1, ..., M]` of `reshaped_permuted` according to `crops` to produce the output of shape:

    [batch / prod(block_shape),
     input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1],
     ...,
     input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1],
     input_shape[M+1], ..., input_shape[N-1]]

Some examples:

(1) For the following input of shape `[4, 1, 1, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

```
[[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
```

The output tensor has shape `[1, 2, 2, 1]` and value:

```
x = [[[[1], [2]], [[3], [4]]]]
```

(2) For the following input of shape `[4, 1, 1, 3]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

```
[[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
```

The output tensor has shape `[1, 2, 2, 3]` and value:

```
x = [[[[1, 2, 3], [4, 5, 6]],
      [[7, 8, 9], [10, 11, 12]]]]
```

(3) For the following input of shape `[4, 2, 2, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [0, 0]]`:

```
x = [[[[1], [3]], [[9], [11]]],
     [[[2], [4]], [[10], [12]]],
     [[[5], [7]], [[13], [15]]],
     [[[6], [8]], [[14], [16]]]]
```

The output tensor has shape `[1, 4, 4, 1]` and value:

```
x = [[[[1],  [2],  [3],  [4]],
      [[5],  [6],  [7],  [8]],
      [[9],  [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

(4) For the following input of shape `[8, 1, 3, 1]`, `block_shape = [2, 2]`, and `crops = [[0, 0], [2, 0]]`:

```
x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
     [[[0], [2], [4]]], [[[0], [10], [12]]],
     [[[0], [5], [7]]], [[[0], [13], [15]]],
     [[[0], [6], [8]]], [[[0], [14], [16]]]]
```

The output tensor has shape `[2, 2, 4, 1]` and value:

```
x = [[[[1], [2], [3], [4]],
      [[5], [6], [7], [8]]],
     [[[9], [10], [11], [12]],
      [[13], [14], [15], [16]]]]
```

func Betainc

func Betainc(scope *Scope, a tf.Output, b tf.Output, x tf.Output) (z tf.Output)

Compute the regularized incomplete beta integral \\(I_x(a, b)\\).

The regularized incomplete beta integral is defined as:

\\(I_x(a, b) = \frac{B(x; a, b)}{B(a, b)}\\)

where

\\(B(x; a, b) = \int_0^x t^{a-1} (1 - t)^{b-1} dt\\)

is the incomplete beta function and \\(B(a, b)\\) is the *complete* beta function.

func BiasAdd

func BiasAdd(scope *Scope, value tf.Output, bias tf.Output, optional ...BiasAddAttr) (output tf.Output)

Adds `bias` to `value`.

This is a special case of `tf.add` where `bias` is restricted to be 1-D. Broadcasting is supported, so `value` may have any number of dimensions.

Arguments:

value: Any number of dimensions.
bias: 1-D with size the last dimension of `value`.

Returns Broadcasted sum of `value` and `bias`.

func BiasAddGrad

func BiasAddGrad(scope *Scope, out_backprop tf.Output, optional ...BiasAddGradAttr) (output tf.Output)

The backward operation for "BiasAdd" on the "bias" tensor.

It accumulates all the values from out_backprop into the feature dimension. For NHWC data format, the feature dimension is the last. For NCHW data format, the feature dimension is the third-to-last.

Arguments:

out_backprop: Any number of dimensions.

Returns 1-D with size the feature dimension of `out_backprop`.

func BiasAddV1

func BiasAddV1(scope *Scope, value tf.Output, bias tf.Output) (output tf.Output)

Adds `bias` to `value`.

This is a deprecated version of BiasAdd and will soon be removed.

This is a special case of `tf.add` where `bias` is restricted to be 1-D. Broadcasting is supported, so `value` may have any number of dimensions.

Arguments:

value: Any number of dimensions.
bias: 1-D with size the last dimension of `value`.

Returns Broadcasted sum of `value` and `bias`.

func Bincount

func Bincount(scope *Scope, arr tf.Output, size tf.Output, weights tf.Output) (bins tf.Output)

Counts the number of occurrences of each value in an integer array.

Outputs a vector with length `size` and the same dtype as `weights`. If `weights` are empty, then index `i` stores the number of times the value `i` is counted in `arr`. If `weights` are non-empty, then index `i` stores the sum of the value in `weights` at each index where the corresponding value in `arr` is `i`.

Values in `arr` outside of the range [0, size) are ignored.

Arguments:

arr: int32 `Tensor`.
size: non-negative int32 scalar `Tensor`.
weights: is an int32, int64, float32, or float64 `Tensor` with the same

shape as `arr`, or a length-0 `Tensor`, in which case it acts as all weights equal to 1.

Returns 1D `Tensor` with length equal to `size`. The counts or summed weights for each value in the range [0, size).
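
A sketch matching the description above (illustrative values; the length-0 `weights` requests plain counting):

    s := NewScope()
    arr := Const(s, []int32{1, 1, 2, 5})
    size := Const(s, int32(6))
    weights := Const(s, []float32{}) // empty: all weights act as 1
    bins := Bincount(s, arr, size, weights) // [0, 2, 1, 0, 0, 1] when run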

func Bitcast

func Bitcast(scope *Scope, input tf.Output, type_ tf.DataType) (output tf.Output)

Bitcasts a tensor from one type to another without copying data.

Given a tensor `input`, this operation returns a tensor that has the same buffer data as `input` with datatype `type`.

If the input datatype `T` is larger than the output datatype `type` then the shape changes from [...] to [..., sizeof(`T`)/sizeof(`type`)].

If `T` is smaller than `type`, the operator requires that the rightmost dimension be equal to sizeof(`type`)/sizeof(`T`). The shape then goes from [..., sizeof(`type`)/sizeof(`T`)] to [...].

*NOTE*: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
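
For example, a sketch of a same-width bitcast (float32 and int32 are both 4 bytes, so the shape is unchanged):

    s := NewScope()
    x := Const(s, []float32{1.0})
    y := Bitcast(s, x, tf.Int32) // reinterprets the bytes: 1.0 -> 0x3f800000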

func BitwiseAnd

func BitwiseAnd(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Elementwise computes the bitwise AND of `x` and `y`.

The result has a bit set wherever the corresponding bit is set in both `x` and `y`. The computation is performed on the underlying representations of `x` and `y`.

func BitwiseOr

func BitwiseOr(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Elementwise computes the bitwise OR of `x` and `y`.

The result has a bit set wherever the corresponding bit is set in `x`, `y`, or both. The computation is performed on the underlying representations of `x` and `y`.

func BitwiseXor

func BitwiseXor(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Elementwise computes the bitwise XOR of `x` and `y`.

The result has a bit set wherever the corresponding bits of `x` and `y` differ. The computation is performed on the underlying representations of `x` and `y`.
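
A combined sketch of the three bitwise ops (illustrative values):

    s := NewScope()
    x := Const(s, []int32{12}) // 0b1100
    y := Const(s, []int32{10}) // 0b1010
    and := BitwiseAnd(s, x, y) // [8]  = 0b1000
    or := BitwiseOr(s, x, y)   // [14] = 0b1110
    xor := BitwiseXor(s, x, y) // [6]  = 0b0110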

func BroadcastArgs

func BroadcastArgs(scope *Scope, s0 tf.Output, s1 tf.Output) (r0 tf.Output)

Return the shape of s0 op s1 with broadcast.

Given `s0` and `s1`, tensors that represent shapes, compute `r0`, the broadcasted shape. `s0`, `s1` and `r0` are all integer vectors.

func BroadcastGradientArgs

func BroadcastGradientArgs(scope *Scope, s0 tf.Output, s1 tf.Output) (r0 tf.Output, r1 tf.Output)

Return the reduction indices for computing gradients of s0 op s1 with broadcast.

This is typically used by gradient computations for a broadcasting operation.

func Bucketize

func Bucketize(scope *Scope, input tf.Output, boundaries []float32) (output tf.Output)

Bucketizes 'input' based on 'boundaries'.

For example, if the inputs are

boundaries = [0, 10, 100]
input = [[-5, 10000]
         [150,   10]
         [5,    100]]

then the output will be

output = [[0, 3]
          [3, 2]
          [1, 3]]

Arguments:

input: A `Tensor` of any shape containing int or float values.
boundaries: A sorted list of floats giving the boundaries of the buckets.

Returns Same shape as 'input', with each value of input replaced by its bucket index.

@compatibility(numpy) Equivalent to np.digitize. @end_compatibility
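
The worked example above, expressed as a sketch with this package's wrappers:

    s := NewScope()
    in := Const(s, []float32{-5, 10000, 150, 10, 5, 100})
    out := Bucketize(s, in, []float32{0, 10, 100}) // [0, 3, 3, 2, 1, 3] when run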

func CTCBeamSearchDecoder

func CTCBeamSearchDecoder(scope *Scope, inputs tf.Output, sequence_length tf.Output, beam_width int64, top_paths int64, optional ...CTCBeamSearchDecoderAttr) (decoded_indices []tf.Output, decoded_values []tf.Output, decoded_shape []tf.Output, log_probability tf.Output)

Performs beam search decoding on the logits given in input.

A note about the attribute merge_repeated: For the beam search decoder, this means that if consecutive entries in a beam are the same, only the first of these is emitted. That is, when the top path is "A B B B B", "A B" is returned if merge_repeated = True but "A B B B B" is returned if merge_repeated = False.

Arguments:

inputs: 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.
sequence_length: A vector containing sequence lengths, size `(batch)`.
beam_width: A scalar >= 0 (beam search beam width).
top_paths: A scalar >= 0, <= beam_width (controls output size).

Returns:

A list (length: top_paths) of indices matrices. Matrix j, size `(total_decoded_outputs[j] x 2)`, has indices of a `SparseTensor<int64, 2>`. The rows store: [batch, time].

A list (length: top_paths) of values vectors. Vector j, size `(length total_decoded_outputs[j])`, has the values of a `SparseTensor<int64, 2>`. The vector stores the decoded classes for beam j.

A list (length: top_paths) of shape vectors. Vector j, size `(2)`, stores the shape of the decoded `SparseTensor[j]`. Its values are: `[batch_size, max_decoded_length[j]]`.

A matrix, shaped: `(batch_size x top_paths)`. The sequence log-probabilities.

func CTCGreedyDecoder

func CTCGreedyDecoder(scope *Scope, inputs tf.Output, sequence_length tf.Output, optional ...CTCGreedyDecoderAttr) (decoded_indices tf.Output, decoded_values tf.Output, decoded_shape tf.Output, log_probability tf.Output)

Performs greedy decoding on the logits given in inputs.

A note about the attribute merge_repeated: if enabled, when consecutive logits' maximum indices are the same, only the first of these is emitted. Labeling the blank '*', the sequence "A B B * B B" becomes "A B B" if merge_repeated = True and "A B B B B" if merge_repeated = False.

Regardless of the value of merge_repeated, if the maximum index of a given time and batch corresponds to the blank, index `(num_classes - 1)`, no new element is emitted.

Arguments:

inputs: 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.
sequence_length: A vector containing sequence lengths, size `(batch_size)`.

Returns:

Indices matrix, size `(total_decoded_outputs x 2)`, of a `SparseTensor<int64, 2>`. The rows store: [batch, time].

Values vector, size: `(total_decoded_outputs)`, of a `SparseTensor<int64, 2>`. The vector stores the decoded classes.

Shape vector, size `(2)`, of the decoded SparseTensor. Values are: `[batch_size, max_decoded_length]`.

Matrix, size `(batch_size x 1)`, containing sequence log-probabilities.

func CTCLoss

func CTCLoss(scope *Scope, inputs tf.Output, labels_indices tf.Output, labels_values tf.Output, sequence_length tf.Output, optional ...CTCLossAttr) (loss tf.Output, gradient tf.Output)

Calculates the CTC Loss (log probability) for each batch entry. Also calculates the gradient.

This op performs the softmax operation for you, so inputs should be e.g. linear projections of outputs by an LSTM.

Arguments:

inputs: 3-D, shape: `(max_time x batch_size x num_classes)`, the logits.
labels_indices: The indices of a `SparseTensor<int32, 2>`.

`labels_indices(i, :) == [b, t]` means `labels_values(i)` stores the id for `(batch b, time t)`.

labels_values: The values (labels) associated with the given batch and time.
sequence_length: A vector containing sequence lengths (batch).

Returns:

A vector (batch) containing log-probabilities.

The gradient of `loss`. 3-D, shape: `(max_time x batch_size x num_classes)`.

func CacheDataset

func CacheDataset(scope *Scope, input_dataset tf.Output, filename tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that caches elements from `input_dataset`.

A CacheDataset will iterate over the input_dataset, and store tensors. If the cache already exists, the cache will be used. If the cache is inappropriate (e.g. cannot be opened, contains tensors of the wrong shape / size), an error will be returned when it is used.

Arguments:

filename: A path on the filesystem where we should cache the dataset. Note: this

will be a directory.

func Cast

func Cast(scope *Scope, x tf.Output, DstT tf.DataType) (y tf.Output)

Cast x of type SrcT to y of DstT.
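
For example (a sketch; float-to-int casts truncate toward zero):

    s := NewScope()
    x := Const(s, []float32{1.8, -2.7})
    y := Cast(s, x, tf.Int32) // [1, -2] when run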

func Ceil

func Ceil(scope *Scope, x tf.Output) (y tf.Output)

Returns element-wise smallest integer not less than x.

func CheckNumerics

func CheckNumerics(scope *Scope, tensor tf.Output, message string) (output tf.Output)

Checks a tensor for NaN and Inf values.

When run, reports an `InvalidArgument` error if `tensor` has any values that are not a number (NaN) or infinity (Inf). Otherwise, passes `tensor` as-is.

Arguments:

message: Prefix of the error message.

func Cholesky

func Cholesky(scope *Scope, input tf.Output) (output tf.Output)

Computes the Cholesky decomposition of one or more square matrices.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices.

The input has to be symmetric and positive definite. Only the lower-triangular part of the input will be used for this operation. The upper-triangular part will not be read.

The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices `[..., :, :]`.

**Note**: The gradient computation on GPU is faster for large matrices but not for large batch dimensions when the submatrices are small. In this case it might be faster to use the CPU.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[..., M, M]`.

func CholeskyGrad

func CholeskyGrad(scope *Scope, l tf.Output, grad tf.Output) (output tf.Output)

Computes the reverse mode backpropagated gradient of the Cholesky algorithm.

For an explanation see "Differentiation of the Cholesky algorithm" by Iain Murray http://arxiv.org/abs/1602.07527.

Arguments:

l: Output of batch Cholesky algorithm l = cholesky(A). Shape is `[..., M, M]`.

Algorithm depends only on lower triangular part of the innermost matrices of this tensor.

grad: df/dl where f is some scalar function. Shape is `[..., M, M]`.

Algorithm depends only on lower triangular part of the innermost matrices of this tensor.

Returns Symmetrized version of df/dA. Shape is `[..., M, M]`.

func CompareAndBitpack

func CompareAndBitpack(scope *Scope, input tf.Output, threshold tf.Output) (output tf.Output)

Compare values of `input` to `threshold` and pack resulting bits into a `uint8`.

Each comparison returns a boolean `true` (if `input_value > threshold`) or `false` otherwise.

This operation is useful for Locality-Sensitive-Hashing (LSH) and other algorithms that use hashing approximations of cosine and `L2` distances; codes can be generated from an input via:

```python
codebook_size = 50
codebook_bits = codebook_size * 32
codebook = tf.get_variable('codebook', [x.shape[-1].value, codebook_bits],
                           dtype=x.dtype,
                           initializer=tf.orthogonal_initializer())
codes = compare_and_threshold(tf.matmul(x, codebook), threshold=0.)
codes = tf.bitcast(codes, tf.int32)  # go from uint8 to int32
# now codes has shape x.shape[:-1] + [codebook_size]
```

**NOTE**: Currently, the innermost dimension of the tensor must be divisible by 8.

Given an `input` shaped `[s0, s1, ..., s_n]`, the output is a `uint8` tensor shaped `[s0, s1, ..., s_n / 8]`.

Arguments:

input: Values to compare against `threshold` and bitpack.
threshold: Threshold to compare against.

Returns The bitpacked comparisons.

func Complex

func Complex(scope *Scope, real tf.Output, imag tf.Output, optional ...ComplexAttr) (out tf.Output)

Converts two real numbers to a complex number.

Given a tensor `real` representing the real part of a complex number, and a tensor `imag` representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form \\(a + bj\\), where *a* represents the `real` part and *b* represents the `imag` part.

The input tensors `real` and `imag` must have the same shape.

For example:

```
# tensor 'real' is [2.25, 3.25]
# tensor `imag` is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
```

func ComplexAbs

func ComplexAbs(scope *Scope, x tf.Output, optional ...ComplexAbsAttr) (y tf.Output)

Computes the complex absolute value of a tensor.

Given a tensor `x` of complex numbers, this operation returns a tensor of type `float` or `double` that is the absolute value of each element in `x`. All elements in `x` must be complex numbers of the form \\(a + bj\\). The absolute value is computed as \\( \sqrt{a^2 + b^2}\\).

func ComputeAccidentalHits

func ComputeAccidentalHits(scope *Scope, true_classes tf.Output, sampled_candidates tf.Output, num_true int64, optional ...ComputeAccidentalHitsAttr) (indices tf.Output, ids tf.Output, weights tf.Output)

Computes the ids of the positions in sampled_candidates that match true_labels.

When doing log-odds NCE, the result of this op should be passed through a SparseToDense op, then added to the logits of the sampled candidates. This has the effect of 'removing' the sampled labels that match the true labels by making the classifier sure that they are sampled labels.

Arguments:

true_classes: The true_classes output of UnpackSparseLabels.
sampled_candidates: The sampled_candidates output of CandidateSampler.
num_true: Number of true labels per context.

Returns:

A vector of indices corresponding to rows of true_candidates.

A vector of IDs of positions in sampled_candidates that match a true_label for the row with the corresponding index in indices.

A vector of the same length as indices and ids, in which each element is -FLOAT_MAX.

func Concat

func Concat(scope *Scope, concat_dim tf.Output, values []tf.Output) (output tf.Output)

Concatenates tensors along one dimension.

Arguments:

concat_dim: 0-D.  The dimension along which to concatenate.  Must be in the

range [0, rank(values)).

values: The `N` Tensors to concatenate. Their ranks and types must match,

and their sizes must match in all dimensions except `concat_dim`.

Returns A `Tensor` with the concatenation of values stacked along the `concat_dim` dimension. This tensor's shape matches that of `values` except in `concat_dim` where it has the sum of the sizes.

func ConcatOffset

func ConcatOffset(scope *Scope, concat_dim tf.Output, shape []tf.Output) (offset []tf.Output)

Computes offsets of concat inputs within its output.

For example:

```
# 'x' is [2, 2, 7]
# 'y' is [2, 3, 7]
# 'z' is [2, 5, 7]
concat_offset(1, [x, y, z]) => [0, 0, 0], [0, 2, 0], [0, 5, 0]
```

This is typically used by gradient computations for a concat operation.

Arguments:

concat_dim: The dimension along which to concatenate.
shape: The `N` int32 vectors representing shape of tensors being concatenated.

Returns The `N` int32 vectors representing the starting offset of input tensors within the concatenated output.

func ConcatV2

func ConcatV2(scope *Scope, values []tf.Output, axis tf.Output) (output tf.Output)

Concatenates tensors along one dimension.

Arguments:

values: List of `N` Tensors to concatenate. Their ranks and types must match,

and their sizes must match in all dimensions except `concat_dim`.

axis: 0-D.  The dimension along which to concatenate.  Must be in the

range [-rank(values), rank(values)).

Returns A `Tensor` with the concatenation of values stacked along the `concat_dim` dimension. This tensor's shape matches that of `values` except in `concat_dim` where it has the sum of the sizes.
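
A sketch of concatenating two matrices along axis 1 (illustrative values):

    s := NewScope()
    a := Const(s, [][]float32{{1, 2}})
    b := Const(s, [][]float32{{3, 4}})
    c := ConcatV2(s, []tf.Output{a, b}, Const(s, int32(1))) // shape [1, 4]: {{1, 2, 3, 4}}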

func ConcatenateDataset

func ConcatenateDataset(scope *Scope, input_dataset tf.Output, another_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that concatenates `input_dataset` with `another_dataset`.

func Conj

func Conj(scope *Scope, input tf.Output) (output tf.Output)

Returns the complex conjugate of a complex number.

Given a tensor `input` of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in `input`. The complex numbers in `input` must be of the form \\(a + bj\\), where *a* is the real part and *b* is the imaginary part.

The complex conjugate returned by this operation is of the form \\(a - bj\\).

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
```

func Const

func Const(scope *Scope, value interface{}) (output tf.Output)

Const adds an operation to the graph that produces value as its output.

func ControlTrigger

func ControlTrigger(scope *Scope) (o *tf.Operation)

Does nothing. Serves as a control trigger for scheduling.

Only useful as a placeholder for control edges.

Returns the created operation.

func Conv2D

func Conv2D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...Conv2DAttr) (output tf.Output)

Computes a 2-D convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, out_channels]`, this op performs the following:

1. Flattens the filter to a 2-D matrix with shape `[filter_height * filter_width * in_channels, output_channels]`.

2. Extracts image patches from the input tensor to form a *virtual* tensor of shape `[batch, out_height, out_width, filter_height * filter_width * in_channels]`.

3. For each patch, right-multiplies the filter matrix and the image patch vector.

In detail, with the default NHWC format,

    output[b, i, j, k] =
        sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q] *
                        filter[di, dj, q, k]

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.

Arguments:

input: A 4-D tensor. The dimension order is interpreted according to the value

of `data_format`, see below for details.

filter: A 4-D tensor of shape

`[filter_height, filter_width, in_channels, out_channels]`

strides: 1-D tensor of length 4. The stride of the sliding window for each dimension of `input`. The dimension order is determined by the value of `data_format`, see below for details.

padding: The type of padding algorithm to use.

Returns A 4-D tensor. The dimension order is determined by the value of `data_format`, see below for details.
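
A sketch of a stride-1, same-padded convolution (in practice the filter usually comes from a variable; a placeholder stands in here):

    s := NewScope()
    img := Placeholder(s, tf.Float)  // NHWC: [batch, in_height, in_width, in_channels]
    kern := Placeholder(s, tf.Float) // [filter_height, filter_width, in_channels, out_channels]
    out := Conv2D(s, img, kern,
        []int64{1, 1, 1, 1}, // strides[0] = strides[3] = 1 as required
        "SAME")              // pad so output spatial size matches the input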

func Conv2DBackpropFilter

func Conv2DBackpropFilter(scope *Scope, input tf.Output, filter_sizes tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv2DBackpropFilterAttr) (output tf.Output)

Computes the gradients of convolution with respect to the filter.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, in_channels]`.
filter_sizes: An integer vector representing the tensor shape of `filter`,

where `filter` is a 4-D `[filter_height, filter_width, in_channels, out_channels]` tensor.

out_backprop: 4-D with shape `[batch, out_height, out_width, out_channels]`.

Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution. Must be in the same order as the dimension specified with format.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t. the `filter` input of the convolution.

func Conv2DBackpropInput

func Conv2DBackpropInput(scope *Scope, input_sizes tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv2DBackpropInputAttr) (output tf.Output)

Computes the gradients of convolution with respect to the input.

Arguments:

input_sizes: An integer vector representing the shape of `input`,

where `input` is a 4-D `[batch, height, width, channels]` tensor.

filter: 4-D with shape

`[filter_height, filter_width, in_channels, out_channels]`.

out_backprop: 4-D with shape `[batch, out_height, out_width, out_channels]`.

Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution. Must be in the same order as the dimension specified with format.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[batch, in_height, in_width, in_channels]`. Gradient w.r.t. the input of the convolution.

func Conv3D

func Conv3D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...Conv3DAttr) (output tf.Output)

Computes a 3-D convolution given 5-D `input` and `filter` tensors.

In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product.

Our Conv3D implements a form of cross-correlation.

Arguments:

input: Shape `[batch, in_depth, in_height, in_width, in_channels]`.
filter: Shape `[filter_depth, filter_height, filter_width, in_channels,

out_channels]`. `in_channels` must match between `input` and `filter`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropFilter

func Conv3DBackpropFilter(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the filter.

DEPRECATED at GraphDef version 10: Use Conv3DBackpropFilterV2

Arguments:

input: Shape `[batch, depth, rows, cols, in_channels]`.
filter: Shape `[depth, rows, cols, in_channels, out_channels]`.

`in_channels` must match between `input` and `filter`.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropFilterV2

func Conv3DBackpropFilterV2(scope *Scope, input tf.Output, filter_sizes tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv3DBackpropFilterV2Attr) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the filter.

Arguments:

input: Shape `[batch, depth, rows, cols, in_channels]`.
filter_sizes: An integer vector representing the tensor shape of `filter`,

where `filter` is a 5-D `[filter_depth, filter_height, filter_width, in_channels, out_channels]` tensor.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropInput Uses

func Conv3DBackpropInput(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the input.

DEPRECATED at GraphDef version 10: Use Conv3DBackpropInputV2

Arguments:

input: Shape `[batch, depth, rows, cols, in_channels]`.
filter: Shape `[depth, rows, cols, in_channels, out_channels]`.

`in_channels` must match between `input` and `filter`.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Conv3DBackpropInputV2 Uses

func Conv3DBackpropInputV2(scope *Scope, input_sizes tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...Conv3DBackpropInputV2Attr) (output tf.Output)

Computes the gradients of 3-D convolution with respect to the input.

Arguments:

input_sizes: An integer vector representing the tensor shape of `input`,

where `input` is a 5-D `[batch, depth, rows, cols, in_channels]` tensor.

filter: Shape `[depth, rows, cols, in_channels, out_channels]`.

`in_channels` must match between `input` and `filter`.

out_backprop: Backprop signal of shape `[batch, out_depth, out_rows, out_cols,

out_channels]`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func Cos Uses

func Cos(scope *Scope, x tf.Output) (y tf.Output)

Computes cos of x element-wise.

func Cosh Uses

func Cosh(scope *Scope, x tf.Output) (y tf.Output)

Computes hyperbolic cosine of x element-wise.

func CropAndResize Uses

func CropAndResize(scope *Scope, image tf.Output, boxes tf.Output, box_ind tf.Output, crop_size tf.Output, optional ...CropAndResizeAttr) (crops tf.Output)

Extracts crops from the input image tensor and bilinearly resizes them (possibly

with aspect ratio change) to a common output size specified by `crop_size`. This is more general than the `crop_to_bounding_box` op which extracts a fixed size slice from the input image and does not allow resizing or aspect ratio change.

Returns a tensor with `crops` from the input `image` at positions defined at the bounding box locations in `boxes`. The cropped boxes are all resized (with bilinear interpolation) to a fixed `size = [crop_height, crop_width]`. The result is a 4-D tensor `[num_boxes, crop_height, crop_width, depth]`.

Arguments:

image: A 4-D tensor of shape `[batch, image_height, image_width, depth]`.

Both `image_height` and `image_width` need to be positive.

boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor

specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.

box_ind: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.

The value of `box_ind[i]` specifies the image that the `i`-th box refers to.

crop_size: A 1-D tensor of 2 elements, `size = [crop_height, crop_width]`. All

cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both `crop_height` and `crop_width` need to be positive.

Returns A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
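
A minimal sketch of a call, assuming a single whole-image box and a 24x24 output size:

```go
s := NewScope()
image := Placeholder(s, tf.Float)            // [batch, image_height, image_width, depth]
boxes := Const(s, [][]float32{{0, 0, 1, 1}}) // one box spanning the whole image
boxInd := Const(s, []int32{0})               // the box samples from image 0
cropSize := Const(s, []int32{24, 24})        // every crop is resized to 24x24
crops := CropAndResize(s, image, boxes, boxInd, cropSize)
fmt.Println(crops.Shape()) // 4-D: [num_boxes, crop_height, crop_width, depth]
```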

func CropAndResizeGradBoxes Uses

func CropAndResizeGradBoxes(scope *Scope, grads tf.Output, image tf.Output, boxes tf.Output, box_ind tf.Output, optional ...CropAndResizeGradBoxesAttr) (output tf.Output)

Computes the gradient of the crop_and_resize op wrt the input boxes tensor.

Arguments:

grads: A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
image: A 4-D tensor of shape `[batch, image_height, image_width, depth]`.

Both `image_height` and `image_width` need to be positive.

boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor

specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.

box_ind: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.

The value of `box_ind[i]` specifies the image that the `i`-th box refers to.

Returns A 2-D tensor of shape `[num_boxes, 4]`.

func CropAndResizeGradImage Uses

func CropAndResizeGradImage(scope *Scope, grads tf.Output, boxes tf.Output, box_ind tf.Output, image_size tf.Output, T tf.DataType, optional ...CropAndResizeGradImageAttr) (output tf.Output)

Computes the gradient of the crop_and_resize op wrt the input image tensor.

Arguments:

grads: A 4-D tensor of shape `[num_boxes, crop_height, crop_width, depth]`.
boxes: A 2-D tensor of shape `[num_boxes, 4]`. The `i`-th row of the tensor

specifies the coordinates of a box in the `box_ind[i]` image and is specified in normalized coordinates `[y1, x1, y2, x2]`. A normalized coordinate value of `y` is mapped to the image coordinate at `y * (image_height - 1)`, so the `[0, 1]` interval of normalized image height is mapped to `[0, image_height - 1]` in image height coordinates. We do allow `y1` > `y2`, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the `[0, 1]` range are allowed, in which case we use `extrapolation_value` to extrapolate the input image values.

box_ind: A 1-D tensor of shape `[num_boxes]` with int32 values in `[0, batch)`.

The value of `box_ind[i]` specifies the image that the `i`-th box refers to.

image_size: A 1-D tensor with value `[batch, image_height, image_width, depth]`

containing the original image size. Both `image_height` and `image_width` need to be positive.

Returns A 4-D tensor of shape `[batch, image_height, image_width, depth]`.

func Cross Uses

func Cross(scope *Scope, a tf.Output, b tf.Output) (product tf.Output)

Compute the pairwise cross product.

`a` and `b` must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.

Arguments:

a: A tensor containing 3-element vectors.
b: Another tensor, of same type and shape as `a`.

Returns Pairwise cross product of the vectors in `a` and `b`.

func Cumprod Uses

func Cumprod(scope *Scope, x tf.Output, axis tf.Output, optional ...CumprodAttr) (out tf.Output)

Compute the cumulative product of the tensor `x` along `axis`.

By default, this op performs an inclusive cumprod, which means that the first element of the input is identical to the first element of the output:

```python
tf.cumprod([a, b, c])  # => [a, a * b, a * b * c]
```

By setting the `exclusive` kwarg to `True`, an exclusive cumprod is performed instead:

```python
tf.cumprod([a, b, c], exclusive=True)  # => [1, a, a * b]
```

By setting the `reverse` kwarg to `True`, the cumprod is performed in the opposite direction:

```python
tf.cumprod([a, b, c], reverse=True)  # => [a * b * c, b * c, c]
```

This is more efficient than using separate `tf.reverse` ops.

The `reverse` and `exclusive` kwargs can also be combined:

```python
tf.cumprod([a, b, c], exclusive=True, reverse=True)  # => [b * c, c, 1]
```

Arguments:

x: A `Tensor`. Must be one of the following types: `float32`, `float64`,

`int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.

axis: A `Tensor` of type `int32` (default: 0). Must be in the range

`[-rank(x), rank(x))`.

func Cumsum Uses

func Cumsum(scope *Scope, x tf.Output, axis tf.Output, optional ...CumsumAttr) (out tf.Output)

Compute the cumulative sum of the tensor `x` along `axis`.

By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output:

```python
tf.cumsum([a, b, c])  # => [a, a + b, a + b + c]
```

By setting the `exclusive` kwarg to `True`, an exclusive cumsum is performed instead:

```python
tf.cumsum([a, b, c], exclusive=True)  # => [0, a, a + b]
```

By setting the `reverse` kwarg to `True`, the cumsum is performed in the opposite direction:

```python
tf.cumsum([a, b, c], reverse=True)  # => [a + b + c, b + c, c]
```

This is more efficient than using separate `tf.reverse` ops.

The `reverse` and `exclusive` kwargs can also be combined:

```python
tf.cumsum([a, b, c], exclusive=True, reverse=True)  # => [b + c, c, 0]
```

Arguments:

x: A `Tensor`. Must be one of the following types: `float32`, `float64`,

`int64`, `int32`, `uint8`, `uint16`, `int16`, `int8`, `complex64`, `complex128`, `qint8`, `quint8`, `qint32`, `half`.

axis: A `Tensor` of type `int32` (default: 0). Must be in the range

`[-rank(x), rank(x))`.
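
A hedged Go sketch; the attribute setters `CumsumExclusive` and `CumsumReverse` are assumed to follow the generated naming pattern shown by `MatMulTransposeB` in the package example:

```go
s := NewScope()
x := Const(s, []float32{1, 2, 3})
axis := Const(s, int32(0))
// Exclusive + reverse cumulative sum of [1, 2, 3] => [5, 3, 0].
out := Cumsum(s, x, axis, CumsumExclusive(true), CumsumReverse(true))
fmt.Println(out.Shape())
```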

func DebugGradientIdentity Uses

func DebugGradientIdentity(scope *Scope, input tf.Output) (output tf.Output)

Identity op for gradient debugging.

This op is hidden from public in Python. It is used by TensorFlow Debugger to register gradient tensors for gradient debugging.

func DecodeBase64 Uses

func DecodeBase64(scope *Scope, input tf.Output) (output tf.Output)

Decode web-safe base64-encoded strings.

Input may or may not have padding at the end. See EncodeBase64 for padding. Web-safe means that input must use - and _ instead of + and /.

Arguments:

input: Base64 strings to decode.

Returns Decoded strings.

func DecodeBmp Uses

func DecodeBmp(scope *Scope, contents tf.Output, optional ...DecodeBmpAttr) (image tf.Output)

Decode the first frame of a BMP-encoded image to a uint8 tensor.

The attr `channels` indicates the desired number of color channels for the decoded image.

Accepted values are:

* 0: Use the number of channels in the BMP-encoded image.
* 3: output an RGB image.
* 4: output an RGBA image.

Arguments:

contents: 0-D.  The BMP-encoded image.

Returns 3-D with shape `[height, width, channels]`. RGB order

func DecodeCSV Uses

func DecodeCSV(scope *Scope, records tf.Output, record_defaults []tf.Output, optional ...DecodeCSVAttr) (output []tf.Output)

Convert CSV records to tensors. Each column maps to one tensor.

RFC 4180 format is expected for the CSV records (https://tools.ietf.org/html/rfc4180). Note that leading and trailing spaces are allowed in int and float fields.

Arguments:

records: Each string is a record/row in the csv and all records should have

the same format.

record_defaults: One tensor per column of the input record, with either a

scalar default value for that column or empty if the column is required.

Returns Each tensor will have the same shape as records.
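
As an illustrative sketch, decoding rows with one int64 column and one string column; the column layout is an assumption:

```go
s := NewScope()
records := Placeholder(s, tf.String) // one string per CSV row
recordDefaults := []tf.Output{
    Const(s, []int64{0}),   // column 0: int64, default 0
    Const(s, []string{""}), // column 1: string, default ""
}
columns := DecodeCSV(s, records, recordDefaults)
fmt.Println(len(columns)) // one output tensor per column
```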

func DecodeGif Uses

func DecodeGif(scope *Scope, contents tf.Output) (image tf.Output)

Decode the first frame of a GIF-encoded image to a uint8 tensor.

GIFs with frame or transparency compression are not supported; convert an animated GIF from compressed to uncompressed with:

convert $src.gif -coalesce $dst.gif

This op also supports decoding JPEGs and PNGs, though it is cleaner to use `tf.image.decode_image`.

Arguments:

contents: 0-D.  The GIF-encoded image.

Returns 4-D with shape `[num_frames, height, width, 3]`. RGB order

func DecodeJSONExample Uses

func DecodeJSONExample(scope *Scope, json_examples tf.Output) (binary_examples tf.Output)

Convert JSON-encoded Example records to binary protocol buffer strings.

This op translates a tensor containing Example records, encoded using the [standard JSON mapping](https://developers.google.com/protocol-buffers/docs/proto3#json), into a tensor containing the same records encoded as binary protocol buffers. The resulting tensor can then be fed to any of the other Example-parsing ops.

Arguments:

json_examples: Each string is a JSON object serialized according to the JSON

mapping of the Example proto.

Returns Each string is a binary Example protocol buffer corresponding to the respective element of `json_examples`.

func DecodeJpeg Uses

func DecodeJpeg(scope *Scope, contents tf.Output, optional ...DecodeJpegAttr) (image tf.Output)

Decode a JPEG-encoded image to a uint8 tensor.

The attr `channels` indicates the desired number of color channels for the decoded image.

Accepted values are:

* 0: Use the number of channels in the JPEG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.

If needed, the JPEG-encoded image is transformed to match the requested number of color channels.

The attr `ratio` allows downscaling the image by an integer factor during decoding. Allowed values are: 1, 2, 4, and 8. This is much faster than downscaling the image later.

This op also supports decoding PNGs and non-animated GIFs since the interface is the same, though it is cleaner to use `tf.image.decode_image`.

Arguments:

contents: 0-D.  The JPEG-encoded image.

Returns 3-D with shape `[height, width, channels]`.
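
A sketch of a typical call; the attr setters `DecodeJpegChannels` and `DecodeJpegRatio` are assumed to follow the generated naming pattern:

```go
s := NewScope()
contents := Placeholder(s, tf.String) // 0-D: the raw JPEG bytes
// Decode to RGB, downscaling by 2x during decoding.
image := DecodeJpeg(s, contents, DecodeJpegChannels(3), DecodeJpegRatio(2))
fmt.Println(image.Shape()) // 3-D: [height, width, 3]
```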

func DecodePng Uses

func DecodePng(scope *Scope, contents tf.Output, optional ...DecodePngAttr) (image tf.Output)

Decode a PNG-encoded image to a uint8 or uint16 tensor.

The attr `channels` indicates the desired number of color channels for the decoded image.

Accepted values are:

* 0: Use the number of channels in the PNG-encoded image.
* 1: output a grayscale image.
* 3: output an RGB image.
* 4: output an RGBA image.

If needed, the PNG-encoded image is transformed to match the requested number of color channels.

This op also supports decoding JPEGs and non-animated GIFs since the interface is the same, though it is cleaner to use `tf.image.decode_image`.

Arguments:

contents: 0-D.  The PNG-encoded image.

Returns 3-D with shape `[height, width, channels]`.

func DecodeRaw Uses

func DecodeRaw(scope *Scope, bytes tf.Output, out_type tf.DataType, optional ...DecodeRawAttr) (output tf.Output)

Reinterpret the bytes of a string as a vector of numbers.

Arguments:

bytes: All the elements must have the same length.

Returns A Tensor with one more dimension than the input `bytes`. The added dimension will have size equal to the length of the elements of `bytes` divided by the number of bytes to represent `out_type`.

func DecodeWav Uses

func DecodeWav(scope *Scope, contents tf.Output, optional ...DecodeWavAttr) (audio tf.Output, sample_rate tf.Output)

Decode a 16-bit PCM WAV file to a float tensor.

The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float.

If `desired_channels` is set and the input contains fewer channels than requested, the last channel is duplicated to give the requested number; if the input has more channels than requested, the additional channels are ignored.

If desired_samples is set, then the audio will be cropped or padded with zeroes to the requested length.

The first output contains a Tensor with the content of the audio samples. The innermost dimension holds the channels and the outer dimension the samples. For example, a ten-sample-long stereo WAV file gives an output shape of [10, 2].

Arguments:

contents: The WAV-encoded audio, usually from a file.

Returns 2-D with shape `[length, channels]`. Scalar holding the sample rate found in the WAV header.
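
A sketch requesting mono audio of a fixed length; `DecodeWavDesiredChannels` and `DecodeWavDesiredSamples` are assumed to follow the generated attr naming pattern:

```go
s := NewScope()
contents := Placeholder(s, tf.String) // the WAV file bytes
audio, sampleRate := DecodeWav(s, contents,
    DecodeWavDesiredChannels(1), DecodeWavDesiredSamples(16000))
fmt.Println(audio.Shape()) // 2-D: [16000, 1]
_ = sampleRate             // 0-D: the rate from the WAV header
```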

func DeleteSessionTensor Uses

func DeleteSessionTensor(scope *Scope, handle tf.Output) (o *tf.Operation)

Delete the tensor specified by its handle in the session.

Arguments:

handle: The handle for a tensor stored in the session state.

Returns the created operation.

func DenseToDenseSetOperation Uses

func DenseToDenseSetOperation(scope *Scope, set1 tf.Output, set2 tf.Output, set_operation string, optional ...DenseToDenseSetOperationAttr) (result_indices tf.Output, result_values tf.Output, result_shape tf.Output)

Applies set operation along last dimension of 2 `Tensor` inputs.

See SetOperationOp::SetOperationFromContext for values of `set_operation`.

Output `result` is a `SparseTensor` represented by `result_indices`, `result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth` dimension contains the result of `set_operation` applied to the corresponding `[0...n-1]` dimension of `set`.

Arguments:

set1: `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`.

Dimension `n` contains values in a set, duplicates are allowed but ignored.

set2: `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set1`.

Dimension `n` contains values in a set, duplicates are allowed but ignored.

Returns 2D indices of a `SparseTensor`. 1D values of a `SparseTensor`. 1D `Tensor` shape of a `SparseTensor`. `result_shape[0...n-1]` is the same as the 1st `n-1` dimensions of `set1` and `set2`, `result_shape[n]` is the max result set size across all `0...n-1` dimensions.

func DenseToSparseBatchDataset Uses

func DenseToSparseBatchDataset(scope *Scope, input_dataset tf.Output, batch_size tf.Output, row_shape tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that yields a SparseTensor for each element of the input.

Arguments:

input_dataset: A handle to an input dataset. Must have a single component.
batch_size: A scalar representing the number of elements to accumulate in a

batch.

row_shape: A vector representing the dense shape of each row in the produced

SparseTensor.

func DenseToSparseSetOperation Uses

func DenseToSparseSetOperation(scope *Scope, set1 tf.Output, set2_indices tf.Output, set2_values tf.Output, set2_shape tf.Output, set_operation string, optional ...DenseToSparseSetOperationAttr) (result_indices tf.Output, result_values tf.Output, result_shape tf.Output)

Applies set operation along last dimension of `Tensor` and `SparseTensor`.

See SetOperationOp::SetOperationFromContext for values of `set_operation`.

Input `set2` is a `SparseTensor` represented by `set2_indices`, `set2_values`, and `set2_shape`. For `set2` ranked `n`, 1st `n-1` dimensions must be the same as `set1`. Dimension `n` contains values in a set, duplicates are allowed but ignored.

If `validate_indices` is `True`, this op validates the order and range of `set2` indices.

Output `result` is a `SparseTensor` represented by `result_indices`, `result_values`, and `result_shape`. For `set1` and `set2` ranked `n`, this has rank `n` and the same 1st `n-1` dimensions as `set1` and `set2`. The `nth` dimension contains the result of `set_operation` applied to the corresponding `[0...n-1]` dimension of `set`.

Arguments:

set1: `Tensor` with rank `n`. 1st `n-1` dimensions must be the same as `set2`.

Dimension `n` contains values in a set, duplicates are allowed but ignored.

set2_indices: 2D `Tensor`, indices of a `SparseTensor`. Must be in row-major

order.

set2_values: 1D `Tensor`, values of a `SparseTensor`. Must be in row-major

order.

set2_shape: 1D `Tensor`, shape of a `SparseTensor`. `set2_shape[0...n-1]` must

be the same as the 1st `n-1` dimensions of `set1`, `result_shape[n]` is the max set size across `n-1` dimensions.

Returns 2D indices of a `SparseTensor`. 1D values of a `SparseTensor`. 1D `Tensor` shape of a `SparseTensor`. `result_shape[0...n-1]` is the same as the 1st `n-1` dimensions of `set1` and `set2`, `result_shape[n]` is the max result set size across all `0...n-1` dimensions.

func DepthToSpace Uses

func DepthToSpace(scope *Scope, input tf.Output, block_size int64) (output tf.Output)

DepthToSpace for tensors of type T.

Rearranges data from depth into blocks of spatial data. This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of the input tensor where values from the `depth` dimension are moved in spatial blocks to the `height` and `width` dimensions. The attr `block_size` indicates the input block size and how the data is moved.

* Chunks of data of size `block_size * block_size` from depth are rearranged
  into non-overlapping blocks of size `block_size x block_size`
* The width of the output tensor is `input_width * block_size`, whereas the
  height is `input_height * block_size`.
* The depth of the input tensor must be divisible by
  `block_size * block_size`.

That is, assuming the input is in the shape: `[batch, height, width, depth]`, the shape of the output will be: `[batch, height*block_size, width*block_size, depth/(block_size*block_size)]`

This operation requires that the input tensor be of rank 4, and that `block_size` be >=1 and that `block_size * block_size` be a divisor of the input depth.

This operation is useful for resizing the activations between convolutions (but keeping all data), e.g. instead of pooling. It is also useful for training purely convolutional models.

For example, given this input of shape `[1, 1, 1, 4]`, and a block size of 2:

```
x = [[[[1, 2, 3, 4]]]]
```

This operation will output a tensor of shape `[1, 2, 2, 1]`:

```
[[[[1], [2]],
  [[3], [4]]]]
```

Here, the input has a batch of 1 and each batch element has shape `[1, 1, 4]`, the corresponding output will have 2x2 elements and will have a depth of 1 channel (1 = `4 / (block_size * block_size)`). The output element shape is `[2, 2, 1]`.

For an input tensor with larger depth, here of shape `[1, 1, 1, 12]`, e.g.

```
x = [[[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]]]]
```

This operation, for block size of 2, will return the following tensor of shape `[1, 2, 2, 3]`

```
[[[[1, 2, 3], [4, 5, 6]],
  [[7, 8, 9], [10, 11, 12]]]]
```

Similarly, for the following input of shape `[1 2 2 4]`, and a block size of 2:

```
x = [[[[1, 2, 3, 4],
       [5, 6, 7, 8]],
      [[9, 10, 11, 12],
       [13, 14, 15, 16]]]]
```

the operator will return the following tensor of shape `[1 4 4 1]`:

```
x = [[[ [1],  [2],  [5],  [6]],
      [ [3],  [4],  [7],  [8]],
      [ [9], [10], [13], [14]],
      [[11], [12], [15], [16]]]]
```

Arguments:

block_size: The size of the spatial block, same as in Space2Depth.
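
The first example above, expressed as a minimal Go sketch:

```go
s := NewScope()
x := Const(s, [][][][]float32{{{{1, 2, 3, 4}}}}) // shape [1, 1, 1, 4]
y := DepthToSpace(s, x, 2)                       // shape [1, 2, 2, 1]
fmt.Println(y.Shape())
```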

func DepthwiseConv2dNative Uses

func DepthwiseConv2dNative(scope *Scope, input tf.Output, filter tf.Output, strides []int64, padding string, optional ...DepthwiseConv2dNativeAttr) (output tf.Output)

Computes a 2-D depthwise convolution given 4-D `input` and `filter` tensors.

Given an input tensor of shape `[batch, in_height, in_width, in_channels]` and a filter / kernel tensor of shape `[filter_height, filter_width, in_channels, channel_multiplier]`, containing `in_channels` convolutional filters of depth 1, `depthwise_conv2d` applies a different filter to each input channel (expanding from 1 channel to `channel_multiplier` channels for each), then concatenates the results together. Thus, the output has `in_channels * channel_multiplier` channels.

```
for k in 0..in_channels-1
  for q in 0..channel_multiplier-1
    output[b, i, j, k * channel_multiplier + q] =
      sum_{di, dj} input[b, strides[1] * i + di, strides[2] * j + dj, k] *
                   filter[di, dj, k, q]
```

Must have `strides[0] = strides[3] = 1`. For the most common case of the same horizontal and vertical strides, `strides = [1, stride, stride, 1]`.

Arguments:

strides: 1-D of length 4.  The stride of the sliding window for each dimension

of `input`.

padding: The type of padding algorithm to use.

func DepthwiseConv2dNativeBackpropFilter Uses

func DepthwiseConv2dNativeBackpropFilter(scope *Scope, input tf.Output, filter_sizes tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...DepthwiseConv2dNativeBackpropFilterAttr) (output tf.Output)

Computes the gradients of depthwise convolution with respect to the filter.

Arguments:

input: 4-D with shape based on `data_format`.  For example, if

`data_format` is 'NHWC' then `input` is a 4-D `[batch, in_height, in_width, in_channels]` tensor.

filter_sizes: An integer vector representing the tensor shape of `filter`,

where `filter` is a 4-D `[filter_height, filter_width, in_channels, depthwise_multiplier]` tensor.

out_backprop: 4-D with shape  based on `data_format`.

For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`. Gradient w.r.t. the `filter` input of the convolution.

func DepthwiseConv2dNativeBackpropInput Uses

func DepthwiseConv2dNativeBackpropInput(scope *Scope, input_sizes tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, padding string, optional ...DepthwiseConv2dNativeBackpropInputAttr) (output tf.Output)

Computes the gradients of depthwise convolution with respect to the input.

Arguments:

input_sizes: An integer vector representing the shape of `input`, based

on `data_format`. For example, if `data_format` is 'NHWC' then `input` is a 4-D `[batch, height, width, channels]` tensor.

filter: 4-D with shape

`[filter_height, filter_width, in_channels, depthwise_multiplier]`.

out_backprop: 4-D with shape  based on `data_format`.

For example, if `data_format` is 'NHWC' then out_backprop shape is `[batch, out_height, out_width, out_channels]`. Gradients w.r.t. the output of the convolution.

strides: The stride of the sliding window for each dimension of the input

of the convolution.

padding: The type of padding algorithm to use.

Returns 4-D with shape according to `data_format`. For example, if `data_format` is 'NHWC', output shape is `[batch, in_height, in_width, in_channels]`. Gradient w.r.t. the input of the convolution.

func Dequantize Uses

func Dequantize(scope *Scope, input tf.Output, min_range tf.Output, max_range tf.Output, optional ...DequantizeAttr) (output tf.Output)

Dequantize the 'input' tensor into a float Tensor.

[min_range, max_range] are scalar floats that specify the range for the 'input' data. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.

In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:

```
if T == qint8, in[i] += (range(T) + 1) / 2.0
out[i] = min_range + (in[i] * (max_range - min_range) / range(T))
```

here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`

*MIN_COMBINED Mode Example*

If the input comes from a QuantizedRelu6, the output type is quint8 (range of 0-255) but the possible range of QuantizedRelu6 is 0-6. The min_range and max_range values are therefore 0.0 and 6.0. Dequantize on quint8 will take each value, cast to float, and multiply by 6 / 255. Note that if the quantized type is qint8, the operation will additionally add 128 to each value prior to casting.

If the mode is 'MIN_FIRST', then this approach is used:

```c++
number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (range_max - range_min) * range_adjust
range_scale = range / number_of_steps
const double offset_input = static_cast<double>(input) - lowest_quantized;
result = range_min + ((input - numeric_limits<T>::min()) * range_scale)
```

Arguments:

min_range: The minimum scalar value possibly produced for the input.
max_range: The maximum scalar value possibly produced for the input.
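
A sketch matching the QuantizedRelu6 example above; the `tf.Quint8` placeholder type is an assumption:

```go
s := NewScope()
quantized := Placeholder(s, tf.Quint8) // e.g. the output of a QuantizedRelu6
minRange := Const(s, float32(0.0))
maxRange := Const(s, float32(6.0))
out := Dequantize(s, quantized, minRange, maxRange)
fmt.Println(out.DataType()) // Float
```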

func DeserializeManySparse Uses

func DeserializeManySparse(scope *Scope, serialized_sparse tf.Output, dtype tf.DataType) (sparse_indices tf.Output, sparse_values tf.Output, sparse_shape tf.Output)

Deserialize and concatenate `SparseTensors` from a serialized minibatch.

The input `serialized_sparse` must be a string matrix of shape `[N x 3]` where `N` is the minibatch size and the rows correspond to packed outputs of `SerializeSparse`. The ranks of the original `SparseTensor` objects must all match. When the final `SparseTensor` is created, it has rank one higher than the ranks of the incoming `SparseTensor` objects (they have been concatenated along a new row dimension).

The output `SparseTensor` object's shape values for all dimensions but the first are the max across the input `SparseTensor` objects' shape values for the corresponding dimensions. Its first shape value is `N`, the minibatch size.

The input `SparseTensor` objects' indices are assumed ordered in standard lexicographic order. If this is not the case, after this step run `SparseReorder` to restore index ordering.

For example, if the serialized input is a `[2 x 3]` matrix representing two original `SparseTensor` objects:

index = [ 0]
        [10]
        [20]
values = [1, 2, 3]
shape = [50]

and

index = [ 2]
        [10]
values = [4, 5]
shape = [30]

then the final deserialized `SparseTensor` will be:

index = [0  0]
        [0 10]
        [0 20]
        [1  2]
        [1 10]
values = [1, 2, 3, 4, 5]
shape = [2 50]

Arguments:

serialized_sparse: 2-D, The `N` serialized `SparseTensor` objects.

Must have 3 columns.

dtype: The `dtype` of the serialized `SparseTensor` objects.

func DestroyResourceOp Uses

func DestroyResourceOp(scope *Scope, resource tf.Output, optional ...DestroyResourceOpAttr) (o *tf.Operation)

Deletes the resource specified by the handle.

All subsequent operations using the resource will result in a NotFound error status.

Arguments:

resource: handle to the resource to delete.

Returns the created operation.

func Diag Uses

func Diag(scope *Scope, diagonal tf.Output) (output tf.Output)

Returns a diagonal tensor with given diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:

`output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik]` and 0 everywhere else.

For example:

```
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
                       [0, 2, 0, 0]
                       [0, 0, 3, 0]
                       [0, 0, 0, 4]]
```

Arguments:

diagonal: Rank k tensor where k is at most 3.
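
The example above as a minimal Go sketch:

```go
s := NewScope()
// A 4x4 matrix with 1..4 on the diagonal and zeros elsewhere.
d := Diag(s, Const(s, []float32{1, 2, 3, 4}))
fmt.Println(d.Shape()) // [4, 4]
```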

func DiagPart Uses

func DiagPart(scope *Scope, input tf.Output) (diagonal tf.Output)

Returns the diagonal part of the tensor.

This operation returns a tensor with the `diagonal` part of the `input`. The `diagonal` part is computed as follows:

Assume `input` has dimensions `[D1,..., Dk, D1,..., Dk]`, then the output is a tensor of rank `k` with dimensions `[D1,..., Dk]` where:

`diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik]`.

For example:

```
# 'input' is [[1, 0, 0, 0]
#             [0, 2, 0, 0]
#             [0, 0, 3, 0]
#             [0, 0, 0, 4]]

tf.diag_part(input) ==> [1, 2, 3, 4]
```

Arguments:

input: Rank k tensor where k is 2, 4, or 6.

Returns The extracted diagonal.

func Digamma Uses

func Digamma(scope *Scope, x tf.Output) (y tf.Output)

Computes Psi, the derivative of Lgamma (the log of the absolute value of

`Gamma(x)`), element-wise.

func Dilation2D Uses

func Dilation2D(scope *Scope, input tf.Output, filter tf.Output, strides []int64, rates []int64, padding string) (output tf.Output)

Computes the grayscale dilation of 4-D `input` and 3-D `filter` tensors.

The `input` tensor has shape `[batch, in_height, in_width, depth]` and the `filter` tensor has shape `[filter_height, filter_width, depth]`, i.e., each input channel is processed independently of the others with its own structuring function. The `output` tensor has shape `[batch, out_height, out_width, depth]`. The spatial dimensions of the output tensor depend on the `padding` algorithm. We currently only support the default "NHWC" `data_format`.

In detail, the grayscale morphological 2-D dilation is the max-sum correlation (for consistency with `conv2d`, we use unmirrored filters):

output[b, y, x, c] =
   max_{dy, dx} input[b,
                      strides[1] * y + rates[1] * dy,
                      strides[2] * x + rates[2] * dx,
                      c] +
                filter[dy, dx, c]

Max-pooling is a special case when the filter has size equal to the pooling kernel size and contains all zeros.

Note on duality: The dilation of `input` by the `filter` is equal to the negation of the erosion of `-input` by the reflected `filter`.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, depth]`.
filter: 3-D with shape `[filter_height, filter_width, depth]`.
strides: The stride of the sliding window for each dimension of the input

tensor. Must be: `[1, stride_height, stride_width, 1]`.

rates: The input stride for atrous morphological dilation. Must be:

`[1, rate_height, rate_width, 1]`.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[batch, out_height, out_width, depth]`.
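
A minimal sketch with unit strides and rates; the placeholder shapes are illustrative assumptions:

```go
s := NewScope()
input := Placeholder(s, tf.Float)  // [batch, in_height, in_width, depth]
filter := Placeholder(s, tf.Float) // [filter_height, filter_width, depth]
// Unit strides and unit rates: plain (non-atrous) grayscale dilation.
out := Dilation2D(s, input, filter, []int64{1, 1, 1, 1}, []int64{1, 1, 1, 1}, "SAME")
fmt.Println(out.Shape())
```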

func Dilation2DBackpropFilter Uses

func Dilation2DBackpropFilter(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, rates []int64, padding string) (filter_backprop tf.Output)

Computes the gradient of morphological 2-D dilation with respect to the filter.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, depth]`.
filter: 3-D with shape `[filter_height, filter_width, depth]`.
out_backprop: 4-D with shape `[batch, out_height, out_width, depth]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of

the input tensor. Must be: `[1, stride_height, stride_width, 1]`.

rates: 1-D of length 4. The input stride for atrous morphological dilation.

Must be: `[1, rate_height, rate_width, 1]`.

padding: The type of padding algorithm to use.

Returns 3-D with shape `[filter_height, filter_width, depth]`.

func Dilation2DBackpropInput Uses

func Dilation2DBackpropInput(scope *Scope, input tf.Output, filter tf.Output, out_backprop tf.Output, strides []int64, rates []int64, padding string) (in_backprop tf.Output)

Computes the gradient of morphological 2-D dilation with respect to the input.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, depth]`.
filter: 3-D with shape `[filter_height, filter_width, depth]`.
out_backprop: 4-D with shape `[batch, out_height, out_width, depth]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of

the input tensor. Must be: `[1, stride_height, stride_width, 1]`.

rates: 1-D of length 4. The input stride for atrous morphological dilation.

Must be: `[1, rate_height, rate_width, 1]`.

padding: The type of padding algorithm to use.

Returns 4-D with shape `[batch, in_height, in_width, depth]`.

func Div Uses

func Div(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x / y element-wise.

*NOTE*: `Div` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func DrawBoundingBoxes Uses

func DrawBoundingBoxes(scope *Scope, images tf.Output, boxes tf.Output) (output tf.Output)

Draw bounding boxes on a batch of images.

Outputs a copy of `images` but draws on top of the pixels zero or more bounding boxes specified by the locations in `boxes`. The coordinates of the each bounding box in `boxes` are encoded as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

For example, if an image is 100 x 200 pixels (height x width) and the bounding box is `[0.1, 0.2, 0.5, 0.9]`, the upper-left and bottom-right coordinates of the bounding box will be `(40, 10)` to `(100, 50)` (in (x,y) coordinates).

Parts of the bounding box may fall outside the image.

Arguments:

images: 4-D with shape `[batch, height, width, depth]`. A batch of images.
boxes: 3-D with shape `[batch, num_bounding_boxes, 4]` containing bounding

boxes.

Returns 4-D with the same shape as `images`. The batch of input images with bounding boxes drawn on the images.

func DynamicPartition Uses

func DynamicPartition(scope *Scope, data tf.Output, partitions tf.Output, num_partitions int64) (outputs []tf.Output)

Partitions `data` into `num_partitions` tensors using indices from `partitions`.

For each index tuple `js` of size `partitions.ndim`, the slice `data[js, ...]` becomes part of `outputs[partitions[js]]`. The slices with `partitions[js] = i` are placed in `outputs[i]` in lexicographic order of `js`, and the first dimension of `outputs[i]` is the number of entries in `partitions` equal to `i`. In detail,

```python
outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]

outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
```

`data.shape` must start with `partitions.shape`.

For example:

```python
# Scalar partitions.
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = []  # Empty with shape [0, 2]
outputs[1] = [[10, 20]]

# Vector partitions.
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]
```

See `dynamic_stitch` for an example on how to merge partitions back.

(Figure: DynamicPartition, https://www.tensorflow.org/images/DynamicPartition.png)

Arguments:

partitions: Any shape.  Indices in the range `[0, num_partitions)`.
num_partitions: The number of partitions to output.
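
The vector-partitions example above, as a minimal Go sketch:

```go
s := NewScope()
data := Const(s, []float32{10, 20, 30, 40, 50})
partitions := Const(s, []int32{0, 0, 1, 1, 0})
outs := DynamicPartition(s, data, partitions, 2)
// outs[0] carries [10, 20, 50]; outs[1] carries [30, 40].
fmt.Println(len(outs)) // 2
```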

func DynamicStitch Uses

func DynamicStitch(scope *Scope, indices []tf.Output, data []tf.Output) (merged tf.Output)

Interleave the values from the `data` tensors into a single tensor.

Builds a merged tensor such that

```python
merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
```

For example, if each `indices[m]` is scalar or vector, we have

```python
# Scalar indices:
merged[indices[m], ...] = data[m][...]

# Vector indices:
merged[indices[m][i], ...] = data[m][i, ...]
```

Each `data[i].shape` must start with the corresponding `indices[i].shape`, and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we must have `data[i].shape = indices[i].shape + constant`. In terms of this `constant`, the output shape is

merged.shape = [max(indices)] + constant

Values are merged in order, so if an index appears in both `indices[m][i]` and `indices[n][j]` for `(m,i) < (n,j)` the slice `data[n][j]` will appear in the merged result. If you do not need this guarantee, ParallelDynamicStitch might perform better on some devices.

For example:

```python
indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
          [51, 52], [61, 62]]
```

This method can be used to merge partitions created by `dynamic_partition` as illustrated on the following example:

```python
# Apply a function (increment x_i) to elements for which a certain
# condition applies (x_i != -1 in this example).
x = tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask = tf.not_equal(x, tf.constant(-1.))
partitioned_data = tf.dynamic_partition(
    x, tf.cast(condition_mask, tf.int32), 2)
partitioned_data[1] = partitioned_data[1] + 1.0
condition_indices = tf.dynamic_partition(
    tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32), 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x = [1.1, -1., 6.2, 5.3, -1, 8.4]; the -1. values remain
# unchanged.
```

(Figure: DynamicStitch, https://www.tensorflow.org/images/DynamicStitch.png)
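
The merge example above, as a hedged Go sketch:

```go
s := NewScope()
indices := []tf.Output{
    Const(s, int32(6)),
    Const(s, []int32{4, 1}),
    Const(s, [][]int32{{5, 2}, {0, 3}}),
}
data := []tf.Output{
    Const(s, []int32{61, 62}),
    Const(s, [][]int32{{41, 42}, {11, 12}}),
    Const(s, [][][]int32{{{51, 52}, {21, 22}}, {{1, 2}, {31, 32}}}),
}
merged := DynamicStitch(s, indices, data)
fmt.Println(merged.Shape()) // [7, 2] once evaluated
```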

func EditDistance Uses

func EditDistance(scope *Scope, hypothesis_indices tf.Output, hypothesis_values tf.Output, hypothesis_shape tf.Output, truth_indices tf.Output, truth_values tf.Output, truth_shape tf.Output, optional ...EditDistanceAttr) (output tf.Output)

Computes the (possibly normalized) Levenshtein Edit Distance.

The inputs are variable-length sequences provided by SparseTensors

(hypothesis_indices, hypothesis_values, hypothesis_shape)

and

(truth_indices, truth_values, truth_shape).

The inputs are:

Arguments:

hypothesis_indices: The indices of the hypothesis list SparseTensor.

This is an N x R int64 matrix.

hypothesis_values: The values of the hypothesis list SparseTensor.

This is an N-length vector.

hypothesis_shape: The shape of the hypothesis list SparseTensor.

This is an R-length vector.

truth_indices: The indices of the truth list SparseTensor.

This is an M x R int64 matrix.

truth_values: The values of the truth list SparseTensor.

This is an M-length vector.

truth_shape: The shape of the truth list SparseTensor. This is an R-length vector.

Returns A dense float tensor with rank R - 1.

For the example input:

// hypothesis represents a 2x1 matrix with variable-length values:
//   (0,0) = ["a"]
//   (1,0) = ["b"]
hypothesis_indices = [[0, 0, 0],
                      [1, 0, 0]]
hypothesis_values = ["a", "b"]
hypothesis_shape = [2, 1, 1]

// truth represents a 2x2 matrix with variable-length values:
//   (0,0) = []
//   (0,1) = ["a"]
//   (1,0) = ["b", "c"]
//   (1,1) = ["a"]
truth_indices = [[0, 1, 0],
                 [1, 0, 0],
                 [1, 0, 1],
                 [1, 1, 0]]
truth_values = ["a", "b", "c", "a"]
truth_shape = [2, 2, 2]
normalize = true

The output will be:

// output is a 2x2 matrix with edit distances normalized by truth lengths.
output = [[inf, 1.0],  // (0,0): no truth, (0,1): no hypothesis
          [0.5, 1.0]]  // (1,0): addition, (1,1): no hypothesis

func Elu Uses

func Elu(scope *Scope, features tf.Output) (activations tf.Output)

Computes exponential linear: `exp(features) - 1` if `features` < 0, `features` otherwise.

See [Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)](http://arxiv.org/abs/1511.07289)

func EluGrad Uses

func EluGrad(scope *Scope, gradients tf.Output, outputs tf.Output) (backprops tf.Output)

Computes gradients for the exponential linear (Elu) operation.

Arguments:

gradients: The backpropagated gradients to the corresponding Elu operation.
outputs: The outputs of the corresponding Elu operation.

Returns The gradients: `gradients * (outputs + 1)` if outputs < 0, `gradients` otherwise.

func EncodeBase64 Uses

func EncodeBase64(scope *Scope, input tf.Output, optional ...EncodeBase64Attr) (output tf.Output)

Encode strings into web-safe base64 format.

Refer to the following article for more information on the base64 format: en.wikipedia.org/wiki/Base64. Base64 strings may have padding with '=' at the end so that the encoded string has a length that is a multiple of 4. See the Padding section of the link above.

Web-safe means that the encoder uses - and _ instead of + and /.

Arguments:

input: Strings to be encoded.

Returns Input strings encoded in base64.

func EncodeJpeg Uses

func EncodeJpeg(scope *Scope, image tf.Output, optional ...EncodeJpegAttr) (contents tf.Output)

JPEG-encode an image.

`image` is a 3-D uint8 Tensor of shape `[height, width, channels]`.

The attr `format` can be used to override the color format of the encoded output. Values can be:

* `''`: Use a default format based on the number of channels in the image.
* `grayscale`: Output a grayscale JPEG image. The `channels` dimension of `image` must be 1.
* `rgb`: Output an RGB JPEG image. The `channels` dimension of `image` must be 3.

If `format` is not specified or is the empty string, a default format is picked based on the number of channels in `image`:

* 1: Output a grayscale image.
* 3: Output an RGB image.

Arguments:

image: 3-D with shape `[height, width, channels]`.

Returns 0-D. JPEG-encoded image.

func EncodePng Uses

func EncodePng(scope *Scope, image tf.Output, optional ...EncodePngAttr) (contents tf.Output)

PNG-encode an image.

`image` is a 3-D uint8 or uint16 Tensor of shape `[height, width, channels]` where `channels` is:

* 1: for grayscale.
* 2: for grayscale + alpha.
* 3: for RGB.
* 4: for RGBA.

The ZLIB compression level, `compression`, can be -1 for the PNG-encoder default or a value from 0 to 9. 9 is the highest compression level, generating the smallest output, but is slower.

Arguments:

image: 3-D with shape `[height, width, channels]`.

Returns 0-D. PNG-encoded image.

func EncodeWav Uses

func EncodeWav(scope *Scope, audio tf.Output, sample_rate tf.Output) (contents tf.Output)

Encode audio data using the WAV file format.

This operation will generate a string suitable to be saved out to create a .wav audio file. It will be encoded in the 16-bit PCM format. It takes in float values in the range -1.0f to 1.0f, and any values outside that range will be clamped to it.

`audio` is a 2-D float Tensor of shape `[length, channels]`. `sample_rate` is a scalar Tensor holding the rate to use (e.g. 44100).

Arguments:

audio: 2-D with shape `[length, channels]`.
sample_rate: Scalar containing the sample frequency.

Returns 0-D. WAV-encoded file contents.

func Enter Uses

func Enter(scope *Scope, data tf.Output, frame_name string, optional ...EnterAttr) (output tf.Output)

Creates or finds a child frame, and makes `data` available to the child frame.

This op is used together with `Exit` to create loops in the graph. The unique `frame_name` is used by the `Executor` to identify frames. If `is_constant` is true, `output` is a constant in the child frame; otherwise it may be changed in the child frame. At most `parallel_iterations` iterations are run in parallel in the child frame.

Arguments:

data: The tensor to be made available to the child frame.
frame_name: The name of the child frame.

Returns The same tensor as `data`.

func Equal Uses

func Equal(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x == y) element-wise.

*NOTE*: `Equal` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Erf Uses

func Erf(scope *Scope, x tf.Output) (y tf.Output)

Computes the Gauss error function of `x` element-wise.

func Erfc Uses

func Erfc(scope *Scope, x tf.Output) (y tf.Output)

Computes the complementary error function of `x` element-wise.

func Exit Uses

func Exit(scope *Scope, data tf.Output) (output tf.Output)

Exits the current frame to its parent frame.

Exit makes its input `data` available to the parent frame.

Arguments:

data: The tensor to be made available to the parent frame.

Returns The same tensor as `data`.

func Exp Uses

func Exp(scope *Scope, x tf.Output) (y tf.Output)

Computes exponential of x element-wise. \\(y = e^x\\).

func ExpandDims Uses

func ExpandDims(scope *Scope, input tf.Output, dim tf.Output) (output tf.Output)

Inserts a dimension of 1 into a tensor's shape.

Given a tensor `input`, this operation inserts a dimension of 1 at the dimension index `dim` of `input`'s shape. The dimension index `dim` starts at zero; if you specify a negative number for `dim` it is counted backward from the end.

This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape `[height, width, channels]`, you can make it a batch of 1 image with `expand_dims(image, 0)`, which will make the shape `[1, height, width, channels]`.

Other examples:

```
# 't' is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]

# 't2' is a tensor of shape [2, 3, 5]
shape(expand_dims(t2, 0)) ==> [1, 2, 3, 5]
shape(expand_dims(t2, 2)) ==> [2, 3, 1, 5]
shape(expand_dims(t2, 3)) ==> [2, 3, 5, 1]
```

This operation requires that:

`-1-input.dims() <= dim <= input.dims()`

This operation is related to `squeeze()`, which removes dimensions of size 1.

Arguments:

dim: 0-D (scalar). Specifies the dimension index at which to

expand the shape of `input`. Must be in the range `[-rank(input) - 1, rank(input)]`.

Returns Contains the same data as `input`, but its shape has an additional dimension of size 1 added.
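
The batch-of-one use case as a minimal Go sketch:

```go
s := NewScope()
image := Placeholder(s, tf.Float) // [height, width, channels]
// Insert a leading dimension to get a batch of one image.
batched := ExpandDims(s, image, Const(s, int32(0)))
fmt.Println(batched.Shape()) // [1, height, width, channels]
```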

func Expm1 Uses

func Expm1(scope *Scope, x tf.Output) (y tf.Output)

Computes exponential of x - 1 element-wise.

I.e., \\(y = (\exp x) - 1\\).

func ExtractGlimpse Uses

func ExtractGlimpse(scope *Scope, input tf.Output, size tf.Output, offsets tf.Output, optional ...ExtractGlimpseAttr) (glimpse tf.Output)

Extracts a glimpse from the input tensor.

Returns a set of windows called glimpses extracted at location `offsets` from the input tensor. If the windows only partially overlap the inputs, the non-overlapping areas will be filled with random noise.

The result is a 4-D tensor of shape `[batch_size, glimpse_height, glimpse_width, channels]`. The channels and batch dimensions are the same as that of the input tensor. The height and width of the output windows are specified in the `size` parameter.

The arguments `normalized` and `centered` control how the windows are built:

* If the coordinates are normalized but not centered, 0.0 and 1.0

correspond to the minimum and maximum of each height and width
dimension.

* If the coordinates are both normalized and centered, they range from

-1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper
left corner, the lower right corner is located at (1.0, 1.0) and the
center is at (0, 0).

* If the coordinates are not normalized they are interpreted as

numbers of pixels.

Arguments:

input: A 4-D float tensor of shape `[batch_size, height, width, channels]`.
size: A 1-D tensor of 2 elements containing the size of the glimpses

to extract. The glimpse height must be specified first, followed by the glimpse width.

offsets: A 2-D integer tensor of shape `[batch_size, 2]` containing

the y, x locations of the center of each window.

Returns A tensor representing the glimpses `[batch_size, glimpse_height, glimpse_width, channels]`.

func ExtractImagePatches Uses

func ExtractImagePatches(scope *Scope, images tf.Output, ksizes []int64, strides []int64, rates []int64, padding string) (patches tf.Output)

Extract `patches` from `images` and put them in the "depth" output dimension.

Arguments:

images: 4-D Tensor with shape `[batch, in_rows, in_cols, depth]`.
ksizes: The size of the sliding window for each dimension of `images`.
strides: 1-D of length 4. How far the centers of two consecutive patches are in

the images. Must be: `[1, stride_rows, stride_cols, 1]`.

rates: 1-D of length 4. Must be: `[1, rate_rows, rate_cols, 1]`. This is the

input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with `patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1)`, followed by subsampling them spatially by a factor of `rates`. This is equivalent to `rate` in dilated (a.k.a. Atrous) convolutions.

padding: The type of padding algorithm to use.

We specify the size-related attributes as:

```python
ksizes = [1, ksize_rows, ksize_cols, 1]
strides = [1, strides_rows, strides_cols, 1]
rates = [1, rates_rows, rates_cols, 1]
```

Returns 4-D Tensor with shape `[batch, out_rows, out_cols, ksize_rows * ksize_cols * depth]` containing image patches with size `ksize_rows x ksize_cols x depth` vectorized in the "depth" dimension. Note `out_rows` and `out_cols` are the dimensions of the output patches.

func FFT Uses

func FFT(scope *Scope, input tf.Output) (output tf.Output)

Fast Fourier transform.

Computes the 1-dimensional discrete Fourier transform over the inner-most dimension of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most

dimension of `input` is replaced with its 1D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.fft @end_compatibility

func FFT2D Uses

func FFT2D(scope *Scope, input tf.Output) (output tf.Output)

2D fast Fourier transform.

Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most 2

dimensions of `input` are replaced with their 2D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.fft2 @end_compatibility

func FFT3D Uses

func FFT3D(scope *Scope, input tf.Output) (output tf.Output)

3D fast Fourier transform.

Computes the 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most 3

dimensions of `input` are replaced with their 3D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.fftn with 3 dimensions. @end_compatibility

func FIFOQueueV2 Uses

func FIFOQueueV2(scope *Scope, component_types []tf.DataType, optional ...FIFOQueueV2Attr) (handle tf.Output)

A queue that produces elements in first-in first-out order.

Arguments:

component_types: The type of each component in a value.

Returns The handle to the queue.

func Fact Uses

func Fact(scope *Scope) (fact tf.Output)

Output a fact about factorials.

func FakeQuantWithMinMaxArgs Uses

func FakeQuantWithMinMaxArgs(scope *Scope, inputs tf.Output, optional ...FakeQuantWithMinMaxArgsAttr) (outputs tf.Output)

Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type.

Attributes `[min; max]` define the clamping range for the `inputs` data. `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval. `num_bits` is the bitwidth of the quantization; between 2 and 8, inclusive.

Quantization is called fake since the output is still in floating point.

func FakeQuantWithMinMaxArgsGradient Uses

func FakeQuantWithMinMaxArgsGradient(scope *Scope, gradients tf.Output, inputs tf.Output, optional ...FakeQuantWithMinMaxArgsGradientAttr) (backprops tf.Output)

Compute gradients for a FakeQuantWithMinMaxArgs operation.

Arguments:

gradients: Backpropagated gradients above the FakeQuantWithMinMaxArgs operation.
inputs: Values passed as inputs to the FakeQuantWithMinMaxArgs operation.

Returns Backpropagated gradients below the FakeQuantWithMinMaxArgs operation: `gradients * (inputs >= min && inputs <= max)`.

func FakeQuantWithMinMaxVars Uses

func FakeQuantWithMinMaxVars(scope *Scope, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsAttr) (outputs tf.Output)

Fake-quantize the 'inputs' tensor of type float via global float scalars `min`

and `max` to 'outputs' tensor of same shape as `inputs`.

`[min; max]` define the clamping range for the `inputs` data. `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval. `num_bits` is the bitwidth of the quantization; between 2 and 8, inclusive.

This operation has a gradient and thus allows for training `min` and `max` values.

func FakeQuantWithMinMaxVarsGradient Uses

func FakeQuantWithMinMaxVarsGradient(scope *Scope, gradients tf.Output, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsGradientAttr) (backprops_wrt_input tf.Output, backprop_wrt_min tf.Output, backprop_wrt_max tf.Output)

Compute gradients for a FakeQuantWithMinMaxVars operation.

Arguments:

gradients: Backpropagated gradients above the FakeQuantWithMinMaxVars operation.
inputs: Values passed as inputs to the FakeQuantWithMinMaxVars operation.

min, max: Quantization interval, scalar floats.

Returns Backpropagated gradients w.r.t. inputs: `gradients * (inputs >= min && inputs <= max)`. Backpropagated gradients w.r.t. min parameter: `sum(gradients * (inputs < min))`. Backpropagated gradients w.r.t. max parameter: `sum(gradients * (inputs > max))`.

func FakeQuantWithMinMaxVarsPerChannel Uses

func FakeQuantWithMinMaxVarsPerChannel(scope *Scope, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsPerChannelAttr) (outputs tf.Output)

Fake-quantize the 'inputs' tensor of type float and one of the shapes `[d]`, `[b, d]`, or `[b, h, w, d]` via per-channel floats `min` and `max` of shape `[d]` to 'outputs' tensor of same shape as `inputs`.

`[min; max]` define the clamping range for the `inputs` data. `inputs` values are quantized into the quantization range (`[0; 2^num_bits - 1]` when `narrow_range` is false and `[1; 2^num_bits - 1]` when it is true) and then de-quantized and output as floats in `[min; max]` interval. `num_bits` is the bitwidth of the quantization; between 2 and 8, inclusive.

This operation has a gradient and thus allows for training `min` and `max` values.

func FakeQuantWithMinMaxVarsPerChannelGradient Uses

func FakeQuantWithMinMaxVarsPerChannelGradient(scope *Scope, gradients tf.Output, inputs tf.Output, min tf.Output, max tf.Output, optional ...FakeQuantWithMinMaxVarsPerChannelGradientAttr) (backprops_wrt_input tf.Output, backprop_wrt_min tf.Output, backprop_wrt_max tf.Output)

Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.

Arguments:

gradients: Backpropagated gradients above the FakeQuantWithMinMaxVars operation, shape one of: `[d]`, `[b, d]`, `[b, h, w, d]`.
inputs: Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape same as `gradients`.
min, max: Quantization interval, floats of shape `[d]`.

Returns:

backprops_wrt_input: Backpropagated gradients w.r.t. inputs, shape same as `inputs`: `gradients * (inputs >= min && inputs <= max)`.
backprop_wrt_min: Backpropagated gradients w.r.t. min parameter, shape `[d]`: `sum_per_d(gradients * (inputs < min))`.
backprop_wrt_max: Backpropagated gradients w.r.t. max parameter, shape `[d]`: `sum_per_d(gradients * (inputs > max))`.

func Fill Uses

func Fill(scope *Scope, dims tf.Output, value tf.Output) (output tf.Output)

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape `dims` and fills it with `value`.

For example:

```
# Output tensor has shape [2, 3].
fill([2, 3], 9) ==> [[9, 9, 9],
                     [9, 9, 9]]
```

Arguments:

dims: 1-D. Represents the shape of the output tensor.
value: 0-D (scalar). Value to fill the returned tensor.

@compatibility(numpy) Equivalent to np.full @end_compatibility
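A minimal Go sketch of the example above, including the session run (error handling elided for brevity; this mirrors the package-level example):

```go
s := NewScope()
output := Fill(s,
	Const(s, []int32{2, 3}), // dims
	Const(s, int32(9)))      // value
graph, _ := s.Finalize()
sess, _ := tf.NewSession(graph, nil)
defer sess.Close()
res, _ := sess.Run(nil, []tf.Output{output}, nil)
fmt.Println(res[0].Value()) // [[9 9 9] [9 9 9]]
```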

func FixedLengthRecordDataset Uses

func FixedLengthRecordDataset(scope *Scope, filenames tf.Output, header_bytes tf.Output, record_bytes tf.Output, footer_bytes tf.Output) (handle tf.Output)

Creates a dataset that emits the records from one or more binary files.

Arguments:

filenames: A scalar or a vector containing the name(s) of the file(s) to be read.
header_bytes: A scalar representing the number of bytes to skip at the beginning of a file.
record_bytes: A scalar representing the number of bytes in each record.
footer_bytes: A scalar representing the number of bytes to skip at the end of a file.

func FixedLengthRecordReaderV2 Uses

func FixedLengthRecordReaderV2(scope *Scope, record_bytes int64, optional ...FixedLengthRecordReaderV2Attr) (reader_handle tf.Output)

A Reader that outputs fixed-length records from a file.

Arguments:

record_bytes: Number of bytes in the record.

Returns The handle to reference the Reader.

func FixedUnigramCandidateSampler Uses

func FixedUnigramCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, range_max int64, optional ...FixedUnigramCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a learned unigram distribution.

A unigram sampler could use a fixed unigram distribution read from a file or passed in as an in-memory array instead of building up the distribution from data on the fly. There is also an option to skew the distribution by applying a distortion power to the weights.

The vocabulary file should be in CSV-like format, with the last field being the weight associated with the word.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
num_true: Number of true labels per context.
num_sampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
range_max: The sampler will sample integers from the interval [0, range_max).

Returns:

sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate.
true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.

func Floor Uses

func Floor(scope *Scope, x tf.Output) (y tf.Output)

Returns element-wise largest integer not greater than x.

func FloorDiv Uses

func FloorDiv(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x // y element-wise.

*NOTE*: `FloorDiv` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func FloorMod Uses

func FloorMod(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns element-wise remainder of division. When `x < 0` xor `y < 0` is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.

*NOTE*: `FloorMod` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
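For instance, `FloorMod(-7, 3)` is `2` under these Python semantics, whereas a truncating remainder would give `-1`. A small Go sketch (error handling elided):

```go
s := NewScope()
z := FloorMod(s, Const(s, int32(-7)), Const(s, int32(3)))
graph, _ := s.Finalize()
sess, _ := tf.NewSession(graph, nil)
defer sess.Close()
res, _ := sess.Run(nil, []tf.Output{z}, nil)
fmt.Println(res[0].Value()) // 2, since floor(-7/3)*3 + 2 = -7
```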

func FractionalAvgPool Uses

func FractionalAvgPool(scope *Scope, value tf.Output, pooling_ratio []float32, optional ...FractionalAvgPoolAttr) (output tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output)

Performs fractional average pooling on the input.

Fractional average pooling is similar to Fractional max pooling in the pooling region generation step. The only difference is that after pooling regions are generated, a mean operation is performed instead of a max operation in each pooling region.

Arguments:

value: 4-D with shape `[batch, height, width, channels]`.
pooling_ratio: Pooling ratio for each dimension of `value`; currently only supports the row and col dimensions, and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions. 1.44 and 1.73 are the pooling ratios on the height and width dimensions respectively.

Returns:

output: Output tensor after fractional avg pooling.
row_pooling_sequence: Row pooling sequence, needed to calculate gradient.
col_pooling_sequence: Column pooling sequence, needed to calculate gradient.

func FractionalAvgPoolGrad Uses

func FractionalAvgPoolGrad(scope *Scope, orig_input_tensor_shape tf.Output, out_backprop tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output, optional ...FractionalAvgPoolGradAttr) (output tf.Output)

Computes gradient of the FractionalAvgPool function.

Unlike FractionalMaxPoolGrad, we don't need to find arg_max for FractionalAvgPoolGrad, we just need to evenly back-propagate each element of out_backprop to those indices that form the same pooling cell. Therefore, we just need to know the shape of original input tensor, instead of the whole tensor.

Arguments:

orig_input_tensor_shape: Original input tensor shape for `fractional_avg_pool`.
out_backprop: 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the output of `fractional_avg_pool`.
row_pooling_sequence: Row pooling sequence; forms pooling regions together with col_pooling_sequence.
col_pooling_sequence: Column pooling sequence; forms pooling regions together with row_pooling_sequence.

Returns 4-D. Gradients w.r.t. the input of `fractional_avg_pool`.

func FractionalMaxPool Uses

func FractionalMaxPool(scope *Scope, value tf.Output, pooling_ratio []float32, optional ...FractionalMaxPoolAttr) (output tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output)

Performs fractional max pooling on the input.

Fractional max pooling is slightly different than regular max pooling. In regular max pooling, you downsize an input set by taking the maximum value of smaller N x N subsections of the set (often 2x2), and try to reduce the set by a factor of N, where N is an integer. Fractional max pooling, as you might expect from the word "fractional", means that the overall reduction ratio N does not have to be an integer.

The sizes of the pooling regions are generated randomly but are fairly uniform. For example, let's look at the height dimension, and the constraints on the list of rows that will be pool boundaries.

First we define the following:

1. input_row_length : the number of rows from the input set
2. output_row_length : which will be smaller than the input
3. alpha = input_row_length / output_row_length : our reduction ratio
4. K = floor(alpha)
5. row_pooling_sequence : this is the result list of pool boundary rows

Then, row_pooling_sequence should satisfy:

1. a[0] = 0 : the first value of the sequence is 0
2. a[end] = input_row_length : the last value of the sequence is the size
3. K <= (a[i+1] - a[i]) <= K+1 : all intervals are K or K+1 size
4. length(row_pooling_sequence) = output_row_length+1

For more details on fractional max pooling, see this paper: [Benjamin Graham, Fractional Max-Pooling](http://arxiv.org/abs/1412.6071)
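To make this concrete: with input_row_length = 5 and output_row_length = 3, alpha = 5/3 and K = 1, so [0, 2, 3, 5] is a valid row_pooling_sequence: it starts at 0, ends at 5, has length output_row_length + 1 = 4, and its intervals (2, 1, 2) are all of size K or K+1.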

Arguments:

value: 4-D with shape `[batch, height, width, channels]`.
pooling_ratio: Pooling ratio for each dimension of `value`; currently only supports the row and col dimensions, and should be >= 1.0. For example, a valid pooling ratio looks like [1.0, 1.44, 1.73, 1.0]. The first and last elements must be 1.0 because we don't allow pooling on the batch and channels dimensions. 1.44 and 1.73 are the pooling ratios on the height and width dimensions respectively.

Returns:

output: Output tensor after fractional max pooling.
row_pooling_sequence: Row pooling sequence, needed to calculate gradient.
col_pooling_sequence: Column pooling sequence, needed to calculate gradient.

func FractionalMaxPoolGrad Uses

func FractionalMaxPoolGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, out_backprop tf.Output, row_pooling_sequence tf.Output, col_pooling_sequence tf.Output, optional ...FractionalMaxPoolGradAttr) (output tf.Output)

Computes gradient of the FractionalMaxPool function.

Arguments:

orig_input: Original input for `fractional_max_pool`.
orig_output: Original output for `fractional_max_pool`.
out_backprop: 4-D with shape `[batch, height, width, channels]`. Gradients w.r.t. the output of `fractional_max_pool`.
row_pooling_sequence: Row pooling sequence; forms pooling regions together with col_pooling_sequence.
col_pooling_sequence: Column pooling sequence; forms pooling regions together with row_pooling_sequence.

Returns 4-D. Gradients w.r.t. the input of `fractional_max_pool`.

func FusedBatchNorm Uses

func FusedBatchNorm(scope *Scope, x tf.Output, scale tf.Output, offset tf.Output, mean tf.Output, variance tf.Output, optional ...FusedBatchNormAttr) (y tf.Output, batch_mean tf.Output, batch_variance tf.Output, reserve_space_1 tf.Output, reserve_space_2 tf.Output)

Batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches the dimension C of the 4D Tensors.

Arguments:

x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
offset: A 1D Tensor for offset, to shift to the normalized x.
mean: A 1D Tensor for population mean. Used for inference only; must be empty for training.
variance: A 1D Tensor for population variance. Used for inference only; must be empty for training.

Returns:

y: A 4D Tensor for output data.
batch_mean: A 1D Tensor for the computed batch mean, to be used by TensorFlow to compute the running mean.
batch_variance: A 1D Tensor for the computed batch variance, to be used by TensorFlow to compute the running variance.
reserve_space_1: A 1D Tensor for the computed batch mean, to be reused in the gradient computation.
reserve_space_2: A 1D Tensor for the computed batch variance (inverted variance in the cuDNN case), to be used in the gradient computation.

func FusedBatchNormGrad Uses

func FusedBatchNormGrad(scope *Scope, y_backprop tf.Output, x tf.Output, scale tf.Output, reserve_space_1 tf.Output, reserve_space_2 tf.Output, optional ...FusedBatchNormGradAttr) (x_backprop tf.Output, scale_backprop tf.Output, offset_backprop tf.Output, reserve_space_3 tf.Output, reserve_space_4 tf.Output)

Gradient for batch normalization.

Note that the sizes of the 4D Tensors are defined by either "NHWC" or "NCHW". The size of the 1D Tensors matches the dimension C of the 4D Tensors.

Arguments:

y_backprop: A 4D Tensor for the gradient with respect to y.
x: A 4D Tensor for input data.
scale: A 1D Tensor for scaling factor, to scale the normalized x.
reserve_space_1: A 1D Tensor for the computed batch mean, to be reused in the gradient computation.
reserve_space_2: A 1D Tensor for the computed batch variance (inverted variance in the cuDNN case), to be used in the gradient computation.

Returns:

x_backprop: A 4D Tensor for the gradient with respect to x.
scale_backprop: A 1D Tensor for the gradient with respect to scale.
offset_backprop: A 1D Tensor for the gradient with respect to offset.
reserve_space_3: Unused placeholder to match the mean input in FusedBatchNorm.
reserve_space_4: Unused placeholder to match the variance input in FusedBatchNorm.

func FusedPadConv2D Uses

func FusedPadConv2D(scope *Scope, input tf.Output, paddings tf.Output, filter tf.Output, mode string, strides []int64, padding string) (output tf.Output)

Performs a padding as a preprocess during a convolution.

Similar to FusedResizeAndPadConv2d, this op allows for an optimized implementation where the spatial padding transformation stage is fused with the im2col lookup, but in this case without the bilinear filtering required for resizing. Fusing the padding prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn't supported by this op, and 'NHWC' order is used instead. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, in_channels]`.
paddings: A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of `input`.
filter: 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of `input`. Must be in the same order as the dimension specified with format.
padding: The type of padding algorithm to use.

func FusedResizeAndPadConv2D Uses

func FusedResizeAndPadConv2D(scope *Scope, input tf.Output, size tf.Output, paddings tf.Output, filter tf.Output, mode string, strides []int64, padding string, optional ...FusedResizeAndPadConv2DAttr) (output tf.Output)

Performs a resize and padding as a preprocess during a convolution.

It's often possible to do spatial transformations more efficiently as part of the packing stage of a convolution, so this op allows for an optimized implementation where these stages are fused together. This prevents the need to write out the intermediate results as whole tensors, reducing memory pressure, and we can get some latency gains by merging the transformation calculations. The data_format attribute for Conv2D isn't supported by this op, and defaults to 'NHWC' order. Internally this op uses a single per-graph scratch buffer, which means that it will block if multiple versions are being run in parallel. This is because this operator is primarily an optimization to minimize memory usage.

Arguments:

input: 4-D with shape `[batch, in_height, in_width, in_channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`. The new size for the images.
paddings: A two-column matrix specifying the padding sizes. The number of rows must be the same as the rank of `input`.
filter: 4-D with shape `[filter_height, filter_width, in_channels, out_channels]`.
strides: 1-D of length 4. The stride of the sliding window for each dimension of `input`. Must be in the same order as the dimension specified with format.
padding: The type of padding algorithm to use.

func Gather Uses

func Gather(scope *Scope, params tf.Output, indices tf.Output, optional ...GatherAttr) (output tf.Output)

Gather slices from `params` according to `indices`.

`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape `indices.shape + params.shape[1:]` where:

```python
# Scalar indices
output[:, ..., :] = params[indices, :, ... :]

# Vector indices
output[i, :, ..., :] = params[indices[i], :, ... :]

# Higher rank indices
output[i, ..., j, :, ... :] = params[indices[i, ..., j], :, ..., :]
```

If `indices` is a permutation and `len(indices) == params.shape[0]` then this operation will permute `params` accordingly.

`validate_indices`: DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, out-of-bound indices result in safe but unspecified behavior, which may include raising an error.

<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;"> <img style="width:100%" src="https://www.tensorflow.org/images/Gather.png" alt> </div>
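A short Go sketch of the vector-indices case; the expected output follows from the formula above:

```go
s := NewScope()
params := Const(s, [][]float32{{1, 2}, {3, 4}, {5, 6}})
indices := Const(s, []int32{2, 0})
out := Gather(s, params, indices)
if s.Err() != nil {
	panic(s.Err())
}
// Running the graph yields [[5 6] [1 2]]:
// output[i, :] = params[indices[i], :].
_ = out
```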

func GatherNd Uses

func GatherNd(scope *Scope, params tf.Output, indices tf.Output) (output tf.Output)

Gather slices from `params` into a Tensor with shape specified by `indices`.

`indices` is a K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into `params`, where each element defines a slice of `params`:

output[i_0, ..., i_{K-2}] = params[indices[i_0, ..., i_{K-2}]]

Whereas in @{tf.gather} `indices` defines slices into the first dimension of `params`, in `tf.gather_nd`, `indices` defines slices into the first `N` dimensions of `params`, where `N = indices.shape[-1]`.

The last dimension of `indices` can be at most the rank of `params`:

indices.shape[-1] <= params.rank

The last dimension of `indices` corresponds to elements (if `indices.shape[-1] == params.rank`) or slices (if `indices.shape[-1] < params.rank`) along dimension `indices.shape[-1]` of `params`. The output tensor has shape

indices.shape[:-1] + params.shape[indices.shape[-1]:]

Some examples below.

Simple indexing into a matrix:

```python
indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']
```

Slice indexing into a matrix:

```python
indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]
```

Indexing into a 3-tensor:

```python
indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]

indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]

indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']
```

Batched indexing into a matrix:

```python
indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]
```

Batched slice indexing into a matrix:

```python
indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]
```

Batched indexing into a 3-tensor:

```python
indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]],
          [[['a0', 'b0'], ['c0', 'd0']]]]

indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']],
          [['a0', 'b0'], ['c1', 'd1']]]

indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]
```

Arguments:

params: The tensor from which to gather values.
indices: Index tensor.

Returns Values from `params` gathered from indices given by `indices`, with shape `indices.shape[:-1] + params.shape[indices.shape[-1]:]`.
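The "slice indexing into a matrix" case above, sketched in Go:

```go
s := NewScope()
params := Const(s, [][]string{{"a", "b"}, {"c", "d"}})
indices := Const(s, [][]int32{{1}, {0}})
out := GatherNd(s, params, indices)
if s.Err() != nil {
	panic(s.Err())
}
// Running the graph yields [[c d] [a b]]: each row of `indices`
// selects a row (a rank-1 slice) of `params`.
_ = out
```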

func GatherV2 Uses

func GatherV2(scope *Scope, params tf.Output, indices tf.Output, axis tf.Output) (output tf.Output)

Gather slices from `params` axis `axis` according to `indices`.

`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape `params.shape[:axis] + indices.shape + params.shape[axis + 1:]` where:

```python
# Scalar indices (output is rank(params) - 1).
output[a_0, ..., a_n, b_0, ..., b_n] =
  params[a_0, ..., a_n, indices, b_0, ..., b_n]

# Vector indices (output is rank(params)).
output[a_0, ..., a_n, i, b_0, ..., b_n] =
  params[a_0, ..., a_n, indices[i], b_0, ..., b_n]

# Higher rank indices (output is rank(params) + rank(indices) - 1).
output[a_0, ..., a_n, i, ..., j, b_0, ... b_n] =
  params[a_0, ..., a_n, indices[i, ..., j], b_0, ..., b_n]
```

<div style="width:70%; margin:auto; margin-bottom:10px; margin-top:20px;"> <img style="width:100%" src="https://www.tensorflow.org/images/Gather.png" alt> </div>

Arguments:

params: The tensor from which to gather values. Must be at least rank `axis + 1`.
indices: Index tensor. Must be in range `[0, params.shape[axis])`.
axis: The axis in `params` to gather `indices` from. Defaults to the first dimension. Supports negative indexes.

Returns Values from `params` gathered from indices given by `indices`, with shape `params.shape[:axis] + indices.shape + params.shape[axis + 1:]`.

func GetSessionHandle Uses

func GetSessionHandle(scope *Scope, value tf.Output) (handle tf.Output)

Store the input tensor in the state of the current session.

Arguments:

value: The tensor to be stored.

Returns The handle for the tensor stored in the session state, represented as a string.

func GetSessionHandleV2 Uses

func GetSessionHandleV2(scope *Scope, value tf.Output) (handle tf.Output)

Store the input tensor in the state of the current session.

Arguments:

value: The tensor to be stored.

Returns The handle for the tensor stored in the session state, represented as a ResourceHandle object.

func GetSessionTensor Uses

func GetSessionTensor(scope *Scope, handle tf.Output, dtype tf.DataType) (value tf.Output)

Get the value of the tensor specified by its handle.

Arguments:

handle: The handle for a tensor stored in the session state.
dtype: The type of the output value.

Returns The tensor for the given handle.
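A sketch of how GetSessionHandle and GetSessionTensor pair up across two Session.Run calls (error handling elided; the feed/fetch pattern is the standard Go Session API):

```go
s := NewScope()
value := Const(s, []float32{1, 2, 3})
handle := GetSessionHandle(s, value)           // Run 1: store, fetch handle
hIn := Placeholder(s, tf.String)               // Run 2: feed handle back
restored := GetSessionTensor(s, hIn, tf.Float)

graph, _ := s.Finalize()
sess, _ := tf.NewSession(graph, nil)
defer sess.Close()

out1, _ := sess.Run(nil, []tf.Output{handle}, nil)
out2, _ := sess.Run(
	map[tf.Output]*tf.Tensor{hIn: out1[0]},
	[]tf.Output{restored}, nil)
fmt.Println(out2[0].Value()) // [1 2 3]
```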

func Greater Uses

func Greater(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x > y) element-wise.

*NOTE*: `Greater` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func GreaterEqual Uses

func GreaterEqual(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x >= y) element-wise.

*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func HSVToRGB Uses

func HSVToRGB(scope *Scope, images tf.Output) (output tf.Output)

Convert one or more images from HSV to RGB.

Outputs a tensor of the same shape as the `images` tensor, containing the RGB value of the pixels. The output is only well defined if the values in `images` are in `[0, 1]`.

See `rgb_to_hsv` for a description of the HSV encoding.

Arguments:

images: 1-D or higher rank. HSV data to convert. Last dimension must be size 3.

Returns `images` converted to RGB.

func HashTableV2 Uses

func HashTableV2(scope *Scope, key_dtype tf.DataType, value_dtype tf.DataType, optional ...HashTableV2Attr) (table_handle tf.Output)

Creates a non-initialized hash table.

This op creates a hash table, specifying the type of its keys and values. Before using the table you will have to initialize it. After initialization the table will be immutable.

Arguments:

key_dtype: Type of the table keys.
value_dtype: Type of the table values.

Returns Handle to a table.

func HistogramSummary Uses

func HistogramSummary(scope *Scope, tag tf.Output, values tf.Output) (summary tf.Output)

Outputs a `Summary` protocol buffer with a histogram.

The generated [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) has one summary value containing a histogram for `values`.

This op reports an `InvalidArgument` error if any value is not finite.

Arguments:

tag: Scalar.  Tag to use for the `Summary.Value`.
values: Any shape. Values to use to build the histogram.

Returns Scalar. Serialized `Summary` protocol buffer.

func IFFT Uses

func IFFT(scope *Scope, input tf.Output) (output tf.Output)

Inverse fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform over the inner-most dimension of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most dimension of `input` is replaced with its inverse 1D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.ifft @end_compatibility

func IFFT2D Uses

func IFFT2D(scope *Scope, input tf.Output) (output tf.Output)

Inverse 2D fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most 2 dimensions of `input` are replaced with their inverse 2D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.ifft2 @end_compatibility

func IFFT3D Uses

func IFFT3D(scope *Scope, input tf.Output) (output tf.Output)

Inverse 3D fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform over the inner-most 3 dimensions of `input`.

Arguments:

input: A complex64 tensor.

Returns A complex64 tensor of the same shape as `input`. The inner-most 3 dimensions of `input` are replaced with their inverse 3D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.ifftn with 3 dimensions. @end_compatibility

func IRFFT Uses

func IRFFT(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

Inverse real-valued fast Fourier transform.

Computes the inverse 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of `input`.

The inner-most dimension of `input` is assumed to be the result of `RFFT`: the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most dimension of `input` (`fft_length = 2 * (inner - 1)`). If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly.

Along the axis `IRFFT` is computed on, if `fft_length / 2 + 1` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A complex64 tensor.
fft_length: An int32 tensor of shape [1]. The FFT length.

Returns A float32 tensor of the same rank as `input`. The inner-most dimension of `input` is replaced with the `fft_length` samples of its inverse 1D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.irfft @end_compatibility

func IRFFT2D Uses

func IRFFT2D(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

Inverse 2D real-valued fast Fourier transform.

Computes the inverse 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of `input`.

The inner-most 2 dimensions of `input` are assumed to be the result of `RFFT2D`: The inner-most dimension contains the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most 2 dimensions of `input`. If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly.

Along each axis `IRFFT2D` is computed on, if `fft_length` (or `fft_length / 2 + 1` for the inner-most dimension) is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A complex64 tensor.
fft_length: An int32 tensor of shape [2]. The FFT length for each dimension.

Returns A float32 tensor of the same rank as `input`. The inner-most 2 dimensions of `input` are replaced with the `fft_length` samples of their inverse 2D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.irfft2 @end_compatibility

func IRFFT3D Uses

func IRFFT3D(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

Inverse 3D real-valued fast Fourier transform.

Computes the inverse 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of `input`.

The inner-most 3 dimensions of `input` are assumed to be the result of `RFFT3D`: The inner-most dimension contains the `fft_length / 2 + 1` unique components of the DFT of a real-valued signal. If `fft_length` is not provided, it is computed from the size of the inner-most 3 dimensions of `input`. If the FFT length used to compute `input` is odd, it should be provided since it cannot be inferred properly.

Along each axis `IRFFT3D` is computed on, if `fft_length` (or `fft_length / 2 + 1` for the inner-most dimension) is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A complex64 tensor.
fft_length: An int32 tensor of shape [3]. The FFT length for each dimension.

Returns A float32 tensor of the same rank as `input`. The inner-most 3 dimensions of `input` are replaced with the `fft_length` samples of their inverse 3D real Fourier transform.

@compatibility(numpy) Equivalent to np.fft.irfftn with 3 dimensions. @end_compatibility

func Identity Uses

func Identity(scope *Scope, input tf.Output) (output tf.Output)

Return a tensor with the same shape and contents as the input tensor or value.

func IdentityN Uses

func IdentityN(scope *Scope, input []tf.Output) (output []tf.Output)

Returns a list of tensors with the same shapes and contents as the input tensors.

This op can be used to override the gradient for complicated functions. For example, suppose y = f(x) and we wish to apply a custom function g for backprop such that dx = g(dy). In Python,

```python
with tf.get_default_graph().gradient_override_map(
    {'IdentityN': 'OverrideGradientWithG'}):
  y, _ = identity_n([f(x), x])

@tf.RegisterGradient('OverrideGradientWithG')
def ApplyG(op, dy, _):
  return [None, g(dy)]  # Do not backprop to f(x).
```

func IdentityReaderV2 Uses

func IdentityReaderV2(scope *Scope, optional ...IdentityReaderV2Attr) (reader_handle tf.Output)

A Reader that outputs the queued work as both the key and value.

To use, enqueue strings in a Queue. ReaderRead will take the front work string and output (work, work).

Returns The handle to reference the Reader.

func Igamma Uses

func Igamma(scope *Scope, a tf.Output, x tf.Output) (z tf.Output)

Compute the lower regularized incomplete Gamma function `P(a, x)`.

The lower regularized incomplete Gamma function is defined as:

\\(P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)\\)

where

\\(gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt\\)

is the lower incomplete Gamma function.

Note that `Q(a, x)` (`Igammac`) above is the upper regularized incomplete Gamma function.

func Igammac Uses

func Igammac(scope *Scope, a tf.Output, x tf.Output) (z tf.Output)

Compute the upper regularized incomplete Gamma function `Q(a, x)`.

The upper regularized incomplete Gamma function is defined as:

\\(Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)\\)

where

\\(Gamma(a, x) = int_{x}^{\infty} t^{a-1} exp(-t) dt\\)

is the upper incomplete Gamma function.

Note that `P(a, x)` (`Igamma`) above is the lower regularized incomplete Gamma function.

func IgnoreErrorsDataset Uses

func IgnoreErrorsDataset(scope *Scope, input_dataset tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that contains the elements of `input_dataset` ignoring errors.

func Imag Uses

func Imag(scope *Scope, input tf.Output, optional ...ImagAttr) (output tf.Output)

Returns the imaginary part of a complex number.

Given a tensor `input` of complex numbers, this operation returns a tensor of type `float` that is the imaginary part of each element in `input`. All elements in `input` must be complex numbers of the form \\(a + bj\\), where *a* is the real part and *b* is the imaginary part returned by this operation.

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]
```

func ImageSummary Uses

func ImageSummary(scope *Scope, tag tf.Output, tensor tf.Output, optional ...ImageSummaryAttr) (summary tf.Output)

Outputs a `Summary` protocol buffer with images.

The summary has up to `max_images` summary values containing images. The images are built from `tensor` which must be 4-D with shape `[batch_size, height, width, channels]` and where `channels` can be:

* 1: `tensor` is interpreted as Grayscale.
* 3: `tensor` is interpreted as RGB.
* 4: `tensor` is interpreted as RGBA.

The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range `[0, 255]`. `uint8` values are unchanged. The op uses two different normalization algorithms:

* If the input values are all positive, they are rescaled so the largest one is 255.
* If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.

The `tag` argument is a scalar `Tensor` of type `string`. It is used to build the `tag` of the summary values:

* If `max_images` is 1, the summary value tag is '*tag*/image'.
* If `max_images` is greater than 1, the summary value tags are generated sequentially as '*tag*/image/0', '*tag*/image/1', etc.

The `bad_color` argument is the color to use in the generated images for non-finite input values. It is a `uint8` 1-D tensor of length `channels`. Each element must be in the range `[0, 255]` (it represents the value of a pixel in the output image). Non-finite values in the input tensor are replaced by this tensor in the output image. The default value is the color red.

Arguments:

tag: Scalar. Used to build the `tag` attribute of the summary values.
tensor: 4-D of shape `[batch_size, height, width, channels]` where `channels` is 1, 3, or 4.

Returns Scalar. Serialized `Summary` protocol buffer.

func ImmutableConst Uses

func ImmutableConst(scope *Scope, dtype tf.DataType, shape tf.Shape, memory_region_name string) (tensor tf.Output)

Returns immutable tensor from memory region.

The current implementation memmaps the tensor from a file.

Arguments:

dtype: Type of the returned tensor.
shape: Shape of the returned tensor.
memory_region_name: Name of readonly memory region used by the tensor, see NewReadOnlyMemoryRegionFromFile in tensorflow::Env.

func InTopK Uses

func InTopK(scope *Scope, predictions tf.Output, targets tf.Output, k int64) (precision tf.Output)

Says whether the targets are in the top `K` predictions.

This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the prediction for the target class is among the top `k` predictions among all predictions for example `i`. Note that the behavior of `InTopK` differs from the `TopK` op in its handling of ties; if multiple classes have the same prediction value and straddle the top-`k` boundary, all of those classes are considered to be in the top `k`.

More formally, let

\\(predictions_i\\) be the predictions for all classes for example `i`,
\\(targets_i\\) be the target class for example `i`,
\\(out_i\\) be the output for example `i`,

$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$

Arguments:

predictions: A `batch_size` x `classes` tensor.
targets: A `batch_size` vector of class ids.
k: Number of top elements to look at for computing precision.

Returns Computed precision at `k` as a `bool Tensor`.
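A tiny worked example in Go: with one row of predictions and k = 2, the top-2 classes are 1 (score 0.7) and 0 (score 0.5), so target class 0 is in the top 2:

```go
s := NewScope()
predictions := Const(s, [][]float32{{0.5, 0.7, 0.2}})
targets := Const(s, []int32{0})
ok := InTopK(s, predictions, targets, 2)
if s.Err() != nil {
	panic(s.Err())
}
// Running the graph yields [true].
_ = ok
```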

func InTopKV2 Uses

func InTopKV2(scope *Scope, predictions tf.Output, targets tf.Output, k tf.Output) (precision tf.Output)

Says whether the targets are in the top `K` predictions.

This outputs a `batch_size` bool array, an entry `out[i]` is `true` if the prediction for the target class is among the top `k` predictions among all predictions for example `i`. Note that the behavior of `InTopK` differs from the `TopK` op in its handling of ties; if multiple classes have the same prediction value and straddle the top-`k` boundary, all of those classes are considered to be in the top `k`.

More formally, let

\\(predictions_i\\) be the predictions for all classes for example `i`,
\\(targets_i\\) be the target class for example `i`,
\\(out_i\\) be the output for example `i`,

$$out_i = predictions_{i, targets_i} \in TopKIncludingTies(predictions_i)$$

Arguments:

predictions: A `batch_size` x `classes` tensor.
targets: A `batch_size` vector of class ids.
k: Number of top elements to look at for computing precision.

Returns Computed precision at `k` as a `bool Tensor`.

func InitializeTableFromTextFileV2 Uses

func InitializeTableFromTextFileV2(scope *Scope, table_handle tf.Output, filename tf.Output, key_index int64, value_index int64, optional ...InitializeTableFromTextFileV2Attr) (o *tf.Operation)

Initializes a table from a text file.

It inserts one key-value pair into the table for each line of the file. The key and value are extracted from the whole line content, from elements of the split line based on `delimiter`, or from the line number (starting from zero). Where to extract the key and value from a line is specified by `key_index` and `value_index`.

- A value of -1 means use the line number (starting from zero); expects `int64`.
- A value of -2 means use the whole line content; expects `string`.
- A value >= 0 means use the index (starting at zero) of the split line based on `delimiter`.

Arguments:

table_handle: Handle to a table which will be initialized.
filename: Filename of a vocabulary text file.
key_index: Column index in a line to get the table `key` values from.
value_index: Column index that represents information of a line to get the table `value` values from.

Returns the created operation.

func InitializeTableV2 Uses

func InitializeTableV2(scope *Scope, table_handle tf.Output, keys tf.Output, values tf.Output) (o *tf.Operation)

Table initializer that takes two tensors for keys and values respectively.

Arguments:

table_handle: Handle to a table which will be initialized.
keys: Keys of type Tkey.
values: Values of type Tval.

Returns the created operation.
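A sketch tying HashTableV2 (above), InitializeTableV2, and LookupTableFindV2 (documented below) together; error handling elided:

```go
s := NewScope()
table := HashTableV2(s, tf.String, tf.Int64)
init := InitializeTableV2(s, table,
	Const(s, []string{"apple", "banana"}), // keys
	Const(s, []int64{1, 2}))               // values
found := LookupTableFindV2(s, table,
	Const(s, []string{"banana", "cherry"}),
	Const(s, int64(-1))) // default_value for missing keys

graph, _ := s.Finalize()
sess, _ := tf.NewSession(graph, nil)
defer sess.Close()
sess.Run(nil, nil, []*tf.Operation{init}) // initialize the table first
res, _ := sess.Run(nil, []tf.Output{found}, nil)
fmt.Println(res[0].Value()) // [2 -1]
```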

func Inv Uses

func Inv(scope *Scope, x tf.Output) (y tf.Output)

Computes the reciprocal of x element-wise.

DEPRECATED at GraphDef version 17: Use Reciprocal

I.e., \\(y = 1 / x\\).

func InvGrad Uses

func InvGrad(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Computes the gradient for the inverse of `x` wrt its input.

DEPRECATED at GraphDef version 17: Use ReciprocalGrad

Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy` is the corresponding input gradient.

func Invert Uses

func Invert(scope *Scope, x tf.Output) (y tf.Output)

Flips all bits elementwise.

The result will have exactly those bits set that are not set in `x`. The computation is performed on the underlying representation of `x`.

func InvertPermutation Uses

func InvertPermutation(scope *Scope, x tf.Output) (y tf.Output)

Computes the inverse permutation of a tensor.

This operation computes the inverse of an index permutation. It takes a 1-D integer tensor `x`, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor `y` and an input tensor `x`, this operation computes the following:

`y[x[i]] = i for i in [0, 1, ..., len(x) - 1]`

The values must include 0. There can be no duplicate values or negative values.

For example:

```
# tensor `x` is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]
```

Arguments:

x: 1-D.

Returns 1-D.

func IsFinite Uses

func IsFinite(scope *Scope, x tf.Output) (y tf.Output)

Returns which elements of x are finite.

@compatibility(numpy) Equivalent to np.isfinite @end_compatibility

func IsInf Uses

func IsInf(scope *Scope, x tf.Output) (y tf.Output)

Returns which elements of x are Inf.

@compatibility(numpy) Equivalent to np.isinf @end_compatibility

func IsNan Uses

func IsNan(scope *Scope, x tf.Output) (y tf.Output)

Returns which elements of x are NaN.

@compatibility(numpy) Equivalent to np.isnan @end_compatibility

func Iterator Uses

func Iterator(scope *Scope, shared_name string, container string, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

A container for an iterator resource.

Returns A handle to the iterator that can be passed to a "MakeIterator" or "IteratorGetNext" op.

func IteratorDispose Uses

func IteratorDispose(scope *Scope, iterator tf.Output) (o *tf.Operation)

Releases any resources used by the given iterator.

Returns the created operation.

func IteratorFromStringHandle Uses

func IteratorFromStringHandle(scope *Scope, string_handle tf.Output, optional ...IteratorFromStringHandleAttr) (resource_handle tf.Output)

Converts the given string representing a handle to an iterator to a resource.

Arguments:

string_handle: A string representation of the given handle.

Returns A handle to an iterator resource.

func IteratorGetNext Uses

func IteratorGetNext(scope *Scope, iterator tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (components []tf.Output)

Gets the next output from the given iterator.
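A sketch of the usual wiring: a dataset op feeds an Iterator via MakeIterator, and IteratorGetNext is fetched repeatedly. The file name and record layout here are hypothetical, and tf.ScalarShape is assumed to be the scalar-shape constructor in the core Go package; error handling elided:

```go
s := NewScope()
ds := FixedLengthRecordDataset(s,
	Const(s, []string{"records.bin"}), // hypothetical input file
	Const(s, int64(0)),                // header_bytes
	Const(s, int64(8)),                // record_bytes
	Const(s, int64(0)))                // footer_bytes
types := []tf.DataType{tf.String}
shapes := []tf.Shape{tf.ScalarShape()}
it := Iterator(s, "", "", types, shapes)
mk := MakeIterator(s, ds, it)
next := IteratorGetNext(s, it, types, shapes)

graph, _ := s.Finalize()
sess, _ := tf.NewSession(graph, nil)
defer sess.Close()
sess.Run(nil, nil, []*tf.Operation{mk}) // reset iterator to the start
rec, _ := sess.Run(nil, next, nil)      // one 8-byte record per call
_ = rec
```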

func IteratorToStringHandle Uses

func IteratorToStringHandle(scope *Scope, resource_handle tf.Output) (string_handle tf.Output)

Converts the given `resource_handle` representing an iterator to a string.

Arguments:

resource_handle: A handle to an iterator resource.

Returns A string representation of the given handle.

func L2Loss Uses

func L2Loss(scope *Scope, t tf.Output) (output tf.Output)

L2 Loss.

Computes half the L2 norm of a tensor without the `sqrt`:

output = sum(t ** 2) / 2

Arguments:

t: Typically 2-D, but may have any dimensions.

Returns 0-D.
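Worked example: for t = [1, 2, 3], output = (1*1 + 2*2 + 3*3) / 2 = 7. In Go (error handling elided):

```go
s := NewScope()
loss := L2Loss(s, Const(s, []float32{1, 2, 3}))
graph, _ := s.Finalize()
sess, _ := tf.NewSession(graph, nil)
defer sess.Close()
res, _ := sess.Run(nil, []tf.Output{loss}, nil)
fmt.Println(res[0].Value()) // 7 = (1 + 4 + 9) / 2
```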

func LRN Uses

func LRN(scope *Scope, input tf.Output, optional ...LRNAttr) (output tf.Output)

Local Response Normalization.

The 4-D `input` tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within `depth_radius`. In detail,

sqr_sum[a, b, c, d] =
    sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
output = input / (bias + alpha * sqr_sum) ** beta

For details, see [Krizhevsky et al., ImageNet classification with deep convolutional neural networks (NIPS 2012)](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks).

Arguments:

input: 4-D.

func LRNGrad Uses

func LRNGrad(scope *Scope, input_grads tf.Output, input_image tf.Output, output_image tf.Output, optional ...LRNGradAttr) (output tf.Output)

Gradients for Local Response Normalization.

Arguments:

input_grads: 4-D with shape `[batch, height, width, channels]`.
input_image: 4-D with shape `[batch, height, width, channels]`.
output_image: 4-D with shape `[batch, height, width, channels]`.

Returns The gradients for LRN.

func LearnedUnigramCandidateSampler Uses

func LearnedUnigramCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, range_max int64, optional ...LearnedUnigramCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a learned unigram distribution.

See explanations of candidate sampling and the data formats at go/candidate-sampling.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
num_true: Number of true labels per context.
num_sampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
range_max: The sampler will sample integers from the interval [0, range_max).

Returns:

sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate.
true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.

func Less Uses

func Less(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x < y) element-wise.

*NOTE*: `Less` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func LessEqual Uses

func LessEqual(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x <= y) element-wise.

*NOTE*: `LessEqual` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Lgamma Uses

func Lgamma(scope *Scope, x tf.Output) (y tf.Output)

Computes the log of the absolute value of `Gamma(x)` element-wise.

func LinSpace Uses

func LinSpace(scope *Scope, start tf.Output, stop tf.Output, num tf.Output) (output tf.Output)

Generates values in an interval.

A sequence of `num` evenly-spaced values are generated beginning at `start`. If `num > 1`, the values in the sequence increase by `(stop - start) / (num - 1)`, so that the last one is exactly `stop`.

For example:

```
tf.linspace(10.0, 12.0, 3, name="linspace") => [10.0 11.0 12.0]
```

Arguments:

start: First entry in the range.
stop: Last entry in the range.
num: Number of values to generate.

Returns 1-D. The generated values.

func ListDiff Uses

func ListDiff(scope *Scope, x tf.Output, y tf.Output, optional ...ListDiffAttr) (out tf.Output, idx tf.Output)

Computes the difference between two lists of numbers or strings.

Given a list `x` and a list `y`, this operation returns a list `out` that represents all values that are in `x` but not in `y`. The returned list `out` is sorted in the same order that the numbers appear in `x` (duplicates are preserved). This operation also returns a list `idx` that represents the position of each `out` element in `x`. In other words:

`out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]`

For example, given this input:

```
x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]
```

This operation would return:

```
out ==> [2, 4, 6]
idx ==> [1, 3, 5]
```

Arguments:

x: 1-D. Values to keep.
y: 1-D. Values to remove.

Returns:

out: 1-D. Values present in `x` but not in `y`.
idx: 1-D. Positions of `x` values preserved in `out`.

func Log Uses

func Log(scope *Scope, x tf.Output) (y tf.Output)

Computes natural logarithm of x element-wise.

I.e., \\(y = \log_e x\\).

func Log1p Uses

func Log1p(scope *Scope, x tf.Output) (y tf.Output)

Computes natural logarithm of (1 + x) element-wise.

I.e., \\(y = \log_e (1 + x)\\).

func LogSoftmax Uses

func LogSoftmax(scope *Scope, logits tf.Output) (logsoftmax tf.Output)

Computes log softmax activations.

For each batch `i` and class `j` we have

logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))

Arguments:

logits: 2-D with shape `[batch_size, num_classes]`.

Returns Same shape as `logits`.

func LogUniformCandidateSampler Uses

func LogUniformCandidateSampler(scope *Scope, true_classes tf.Output, num_true int64, num_sampled int64, unique bool, range_max int64, optional ...LogUniformCandidateSamplerAttr) (sampled_candidates tf.Output, true_expected_count tf.Output, sampled_expected_count tf.Output)

Generates labels for candidate sampling with a log-uniform distribution.

See explanations of candidate sampling and the data formats at go/candidate-sampling.

For each batch, this op picks a single set of sampled candidate labels.

The advantages of sampling candidates per-batch are simplicity and the possibility of efficient dense matrix multiplication. The disadvantage is that the sampled candidates must be chosen independently of the context and of the true labels.

Arguments:

true_classes: A batch_size * num_true matrix, in which each row contains the IDs of the num_true target_classes in the corresponding original label.
num_true: Number of true labels per context.
num_sampled: Number of candidates to randomly sample.
unique: If unique is true, we sample with rejection, so that all sampled candidates in a batch are unique. This requires some approximation to estimate the post-rejection sampling probabilities.
range_max: The sampler will sample integers from the interval [0, range_max).

Returns:

sampled_candidates: A vector of length num_sampled, in which each element is the ID of a sampled candidate.
true_expected_count: A batch_size * num_true matrix, representing the number of times each candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.
sampled_expected_count: A vector of length num_sampled, for each sampled candidate representing the number of times the candidate is expected to occur in a batch of sampled candidates. If unique=true, then this is a probability.

func LogicalAnd Uses

func LogicalAnd(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of x AND y element-wise.

*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func LogicalNot Uses

func LogicalNot(scope *Scope, x tf.Output) (y tf.Output)

Returns the truth value of NOT x element-wise.

func LogicalOr Uses

func LogicalOr(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of x OR y element-wise.

*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func LookupTableExportV2 Uses

func LookupTableExportV2(scope *Scope, table_handle tf.Output, Tkeys tf.DataType, Tvalues tf.DataType) (keys tf.Output, values tf.Output)

Outputs all keys and values in the table.

Arguments:

table_handle: Handle to the table.

Returns:

keys: Vector of all keys present in the table.
values: Tensor of all values in the table. Indexed in parallel with `keys`.

func LookupTableFindV2 Uses

func LookupTableFindV2(scope *Scope, table_handle tf.Output, keys tf.Output, default_value tf.Output) (values tf.Output)

Looks up keys in a table, outputs the corresponding values.

The tensor `keys` must be of the same type as the keys of the table. The output `values` is of the type of the table values.

The scalar `default_value` is the value output for keys not present in the table. It must also be of the same type as the table values.

Arguments:

table_handle: Handle to the table.
keys: Any shape.  Keys to look up.

Returns Same shape as `keys`. Values found in the table, or `default_value` for missing keys.

func LookupTableImportV2 Uses

func LookupTableImportV2(scope *Scope, table_handle tf.Output, keys tf.Output, values tf.Output) (o *tf.Operation)

Replaces the contents of the table with the specified keys and values.

The tensor `keys` must be of the same type as the keys of the table. The tensor `values` must be of the type of the table values.

Arguments:

table_handle: Handle to the table.
keys: Any shape.  Keys to look up.
values: Values to associate with keys.

Returns the created operation.

func LookupTableInsertV2 Uses

func LookupTableInsertV2(scope *Scope, table_handle tf.Output, keys tf.Output, values tf.Output) (o *tf.Operation)

Updates the table to associate keys with values.

The tensor `keys` must be of the same type as the keys of the table. The tensor `values` must be of the type of the table values.

Arguments:

table_handle: Handle to the table.
keys: Any shape.  Keys to look up.
values: Values to associate with keys.

Returns the created operation.

func LookupTableSizeV2 Uses

func LookupTableSizeV2(scope *Scope, table_handle tf.Output) (size tf.Output)

Computes the number of elements in the given table.

Arguments:

table_handle: Handle to the table.

Returns Scalar that contains number of elements in the table.

func LoopCond Uses

func LoopCond(scope *Scope, input tf.Output) (output tf.Output)

Forwards the input to the output.

This operator represents the loop termination condition used by the "pivot" switches of a loop.

Arguments:

input: A boolean scalar, representing the branch predicate of the Switch op.

Returns The same tensor as `input`.

func MakeIterator Uses

func MakeIterator(scope *Scope, dataset tf.Output, iterator tf.Output) (o *tf.Operation)

Makes a new iterator from the given `dataset` and stores it in `iterator`.

This operation may be executed multiple times. Each execution will reset the iterator in `iterator` to the first element of `dataset`.

Returns the created operation.

func MapClear Uses

func MapClear(scope *Scope, dtypes []tf.DataType, optional ...MapClearAttr) (o *tf.Operation)

Op removes all elements in the underlying container.

Returns the created operation.

func MapIncompleteSize Uses

func MapIncompleteSize(scope *Scope, dtypes []tf.DataType, optional ...MapIncompleteSizeAttr) (size tf.Output)

Op returns the number of incomplete elements in the underlying container.

func MapPeek Uses

func MapPeek(scope *Scope, key tf.Output, indices tf.Output, dtypes []tf.DataType, optional ...MapPeekAttr) (values []tf.Output)

Op peeks at the values at the specified key. If the underlying container does not contain this key, this op will block until it does.

func MapSize Uses

func MapSize(scope *Scope, dtypes []tf.DataType, optional ...MapSizeAttr) (size tf.Output)

Op returns the number of elements in the underlying container.

func MapStage Uses

func MapStage(scope *Scope, key tf.Output, indices tf.Output, values []tf.Output, dtypes []tf.DataType, optional ...MapStageAttr) (o *tf.Operation)

Stage (key, values) in the underlying container which behaves like a hashtable.

Arguments:

key: int64
values: a list of tensors
dtypes: A list of data types that inserted values should adhere to.

Returns the created operation.

func MapUnstage Uses

func MapUnstage(scope *Scope, key tf.Output, indices tf.Output, dtypes []tf.DataType, optional ...MapUnstageAttr) (values []tf.Output)

Op removes and returns the values associated with the key from the underlying container. If the underlying container does not contain this key, the op will block until it does.

func MapUnstageNoKey Uses

func MapUnstageNoKey(scope *Scope, indices tf.Output, dtypes []tf.DataType, optional ...MapUnstageNoKeyAttr) (key tf.Output, values []tf.Output)

Op removes and returns a random (key, value) from the underlying container. If the underlying container does not contain elements, the op will block until it does.

func MatMul Uses

func MatMul(scope *Scope, a tf.Output, b tf.Output, optional ...MatMulAttr) (product tf.Output)

Multiply the matrix "a" by the matrix "b".

The inputs must be two-dimensional matrices and the inner dimension of "a" (after being transposed if transpose_a is true) must match the outer dimension of "b" (after being transposed if transpose_b is true).

*Note*: The default kernel implementation for MatMul on GPUs uses cublas.

func MatchingFiles Uses

func MatchingFiles(scope *Scope, pattern tf.Output) (filenames tf.Output)

Returns the set of files matching one or more glob patterns.

Note that this routine only supports wildcard characters in the basename portion of the pattern, not in the directory portion.

Arguments:

pattern: Shell wildcard pattern(s). Scalar or vector of type string.

Returns A vector of matching filenames.

func MatrixBandPart Uses

func MatrixBandPart(scope *Scope, input tf.Output, num_lower tf.Output, num_upper tf.Output) (band tf.Output)

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The `band` part is computed as follows: Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a tensor with the same shape where

`band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n]`.

The indicator function is

`in_band(m, n) = (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper)`.

For example:

```
# if 'input' is [[ 0,  1,  2, 3]
#                [-1,  0,  1, 2]
#                [-2, -1,  0, 1]
#                [-3, -2, -1, 0]],

tf.matrix_band_part(input, 1, -1) ==> [[ 0,  1,  2, 3]
                                       [-1,  0,  1, 2]
                                       [ 0, -1,  0, 1]
                                       [ 0,  0, -1, 0]],

tf.matrix_band_part(input, 2, 1) ==> [[ 0,  1,  0, 0]
                                      [-1,  0,  1, 0]
                                      [-2, -1,  0, 1]
                                      [ 0, -2, -1, 0]]
```

Useful special cases:

```
tf.matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.matrix_band_part(input, 0, 0) ==> Diagonal.
```

Arguments:

input: Rank `k` tensor.
num_lower: 0-D tensor. Number of subdiagonals to keep. If negative, keep entire

lower triangle.

num_upper: 0-D tensor. Number of superdiagonals to keep. If negative, keep

entire upper triangle.

Returns Rank `k` tensor of the same shape as input. The extracted banded tensor.
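
A minimal Go sketch of the `tf.matrix_band_part(input, 1, -1)` case above, in the package example style (`tf` is the tensorflow Go package):

```go
// Keep one subdiagonal and the entire upper triangle.
s := NewScope()
input := Const(s, [][]float32{
    {0, 1, 2, 3},
    {-1, 0, 1, 2},
    {-2, -1, 0, 1},
    {-3, -2, -1, 0},
})
band := MatrixBandPart(s, input,
    Const(s, int64(1)),  // num_lower: keep 1 subdiagonal
    Const(s, int64(-1))) // num_upper: negative keeps the whole upper triangle
if s.Err() != nil {
    panic(s.Err())
}
fmt.Println(band.Shape()) // same shape as input: [4, 4]
```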

func MatrixDeterminant Uses

func MatrixDeterminant(scope *Scope, input tf.Output) (output tf.Output)

Computes the determinant of one or more square matrices.

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor containing the determinants for all input submatrices `[..., :, :]`.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[...]`.

func MatrixDiag Uses

func MatrixDiag(scope *Scope, diagonal tf.Output) (output tf.Output)

Returns a batched diagonal tensor with given batched diagonal values.

Given a `diagonal`, this operation returns a tensor with the `diagonal` and everything else padded with zeros. The diagonal is computed as follows:

Assume `diagonal` has `k` dimensions `[I, J, K, ..., N]`, then the output is a tensor of rank `k+1` with dimensions `[I, J, K, ..., N, N]` where:

`output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n]`.

For example:

```
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]]
# and diagonal.shape = (2, 4)

tf.matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
                               [0, 2, 0, 0]
                               [0, 0, 3, 0]
                               [0, 0, 0, 4]],
                              [[5, 0, 0, 0]
                               [0, 6, 0, 0]
                               [0, 0, 7, 0]
                               [0, 0, 0, 8]]]

# which has shape (2, 4, 4)
```

Arguments:

diagonal: Rank `k`, where `k >= 1`.

Returns Rank `k+1`, with `output.shape = diagonal.shape + [diagonal.shape[-1]]`.

func MatrixDiagPart Uses

func MatrixDiagPart(scope *Scope, input tf.Output) (diagonal tf.Output)

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the `diagonal` part of the batched `input`. The `diagonal` part is computed as follows:

Assume `input` has `k` dimensions `[I, J, K, ..., M, N]`, then the output is a tensor of rank `k - 1` with dimensions `[I, J, K, ..., min(M, N)]` where:

`diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n]`.

The input must be at least a matrix.

For example:

```
# 'input' is [[[1, 0, 0, 0]
#              [0, 2, 0, 0]
#              [0, 0, 3, 0]
#              [0, 0, 0, 4]],
#             [[5, 0, 0, 0]
#              [0, 6, 0, 0]
#              [0, 0, 7, 0]
#              [0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)

tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]

# which has shape (2, 4)
```

Arguments:

input: Rank `k` tensor where `k >= 2`.

Returns The extracted diagonal(s) having shape `diagonal.shape = input.shape[:-2] + [min(input.shape[-2:])]`.
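
A quick round-trip sketch tying `MatrixDiag` and `MatrixDiagPart` together, in the package example style:

```go
// Build batched diagonal matrices from the example values above, then
// extract the diagonals back out.
s := NewScope()
diag := Const(s, [][]float32{{1, 2, 3, 4}, {5, 6, 7, 8}}) // shape (2, 4)
m := MatrixDiag(s, diag)     // shape (2, 4, 4)
back := MatrixDiagPart(s, m) // shape (2, 4); evaluates to `diag`
if s.Err() != nil {
    panic(s.Err())
}
fmt.Println(back.Shape())
```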

func MatrixInverse Uses

func MatrixInverse(scope *Scope, input tf.Output, optional ...MatrixInverseAttr) (output tf.Output)

Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes).

The input is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices `[..., :, :]`.

The op uses LU decomposition with partial pivoting to compute the inverses.

If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.

Arguments:

input: Shape is `[..., M, M]`.

Returns Shape is `[..., M, M]`.

@compatibility(numpy) Equivalent to np.linalg.inv @end_compatibility

func MatrixSetDiag Uses

func MatrixSetDiag(scope *Scope, input tf.Output, diagonal tf.Output) (output tf.Output)

Returns a batched matrix tensor with new batched diagonal values.

Given `input` and `diagonal`, this operation returns a tensor with the same shape and values as `input`, except for the main diagonal of the innermost matrices. These will be overwritten by the values in `diagonal`.

The output is computed as follows:

Assume `input` has `k+1` dimensions `[I, J, K, ..., M, N]` and `diagonal` has `k` dimensions `[I, J, K, ..., min(M, N)]`. Then the output is a tensor of rank `k+1` with dimensions `[I, J, K, ..., M, N]` where:

* `output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]` for `m == n`.
* `output[i, j, k, ..., m, n] = input[i, j, k, ..., m, n]` for `m != n`.

Arguments:

input: Rank `k+1`, where `k >= 1`.
diagonal: Rank `k`, where `k >= 1`.

Returns Rank `k+1`, with `output.shape = input.shape`.

func MatrixSolve Uses

func MatrixSolve(scope *Scope, matrix tf.Output, rhs tf.Output, optional ...MatrixSolveAttr) (output tf.Output)

Solves systems of linear equations.

`Matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. `Rhs` is a tensor of shape `[..., M, K]`. The `output` is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then each output matrix satisfies `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. If `adjoint` is `True` then each output matrix satisfies `adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :]`.

Arguments:

matrix: Shape is `[..., M, M]`.
rhs: Shape is `[..., M, K]`.

Returns Shape is `[..., M, K]`.
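
A minimal sketch solving a single 2x2 system (the expected solution is worked out by hand, not taken from the docs):

```go
// Solve A x = b with A = [[3, 1], [1, 2]] and b = [[9], [8]].
s := NewScope()
a := Const(s, [][]float32{{3, 1}, {1, 2}}) // matrix: [M, M], M = 2
b := Const(s, [][]float32{{9}, {8}})       // rhs:    [M, K], K = 1
x := MatrixSolve(s, a, b)                  // output: [M, K]
if s.Err() != nil {
    panic(s.Err())
}
_ = x // evaluates to [[2], [3]] when run in a tf.Session
```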

func MatrixSolveLs Uses

func MatrixSolveLs(scope *Scope, matrix tf.Output, rhs tf.Output, l2_regularizer tf.Output, optional ...MatrixSolveLsAttr) (output tf.Output)

Solves one or more linear least-squares problems.

`matrix` is a tensor of shape `[..., M, N]` whose inner-most 2 dimensions form matrices of size `[M, N]`. `rhs` is a tensor of shape `[..., M, K]`. The output is a tensor of shape `[..., N, K]` where each output matrix solves each of the equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]` in the least squares sense.

Below we use the following notation for each pair of matrix and right-hand sides in the batch:

`matrix`=\\(A \in \Re^{m \times n}\\), `rhs`=\\(B \in \Re^{m \times k}\\), `output`=\\(X \in \Re^{n \times k}\\), `l2_regularizer`=\\(\lambda\\).

If `fast` is `True`, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \\(m \ge n\\) then \\(X = (A^T A + \lambda I)^{-1} A^T B\\), which solves the least-squares problem \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k} } ||A Z - B||_F^2 + \lambda ||Z||_F^2\\). If \\(m \lt n\\) then `output` is computed as \\(X = A^T (A A^T + \lambda I)^{-1} B\\), which (for \\(\lambda = 0\\)) is the minimum-norm solution to the under-determined linear system, i.e. \\(X = \mathrm{argmin}_{Z \in \Re^{n \times k} } ||Z||_F^2 \\), subject to \\(A Z = B\\). Notice that the fast path is only numerically stable when \\(A\\) is numerically full rank and has a condition number \\(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach} } }\\) or \\(\lambda\\) is sufficiently large.

If `fast` is `False` an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \\(A\\) is rank deficient. This path is typically 6-7 times slower than the fast path. If `fast` is `False` then `l2_regularizer` is ignored.

Arguments:

matrix: Shape is `[..., M, N]`.
rhs: Shape is `[..., M, K]`.
l2_regularizer: Scalar tensor.

@compatibility(numpy) Equivalent to np.linalg.lstsq @end_compatibility

Returns Shape is `[..., N, K]`.

func MatrixTriangularSolve Uses

func MatrixTriangularSolve(scope *Scope, matrix tf.Output, rhs tf.Output, optional ...MatrixTriangularSolveAttr) (output tf.Output)

Solves systems of linear equations with upper or lower triangular matrices by backsubstitution.

`matrix` is a tensor of shape `[..., M, M]` whose inner-most 2 dimensions form square matrices. If `lower` is `True` then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If `lower` is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed. `rhs` is a tensor of shape `[..., M, K]`.

The output is a tensor of shape `[..., M, K]`. If `adjoint` is `False` then the innermost matrices in `output` satisfy matrix equations `matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]`. If `adjoint` is `True` then the innermost matrices in `output` satisfy matrix equations `adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.

Arguments:

matrix: Shape is `[..., M, M]`.
rhs: Shape is `[..., M, K]`.

Returns Shape is `[..., M, K]`.

func Max Uses

func Max(scope *Scope, input tf.Output, reduction_indices tf.Output, optional ...MaxAttr) (output tf.Output)

Computes the maximum of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
reduction_indices: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.
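
A small sketch of a column-wise reduction, assuming the generated `MaxKeepDims` attribute helper sets `keep_dims`:

```go
// Reduce a 2x3 matrix over dimension 1, retaining that dimension.
s := NewScope()
x := Const(s, [][]float32{{1, 5, 3}, {4, 2, 6}})
dims := Const(s, []int32{1})            // reduction_indices
m := Max(s, x, dims, MaxKeepDims(true)) // shape [2, 1]: [[5], [6]]
if s.Err() != nil {
    panic(s.Err())
}
_ = m
```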

func MaxPool Uses

func MaxPool(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolAttr) (output tf.Output)

Performs max pooling on the input.

Arguments:

input: 4-D input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the

input tensor.

padding: The type of padding algorithm to use.

Returns The max pooled output tensor.
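
A minimal sketch of a 2x2, stride-2 pooling graph over an NHWC placeholder, in the package example style:

```go
// Pool a single-channel image fed at Run time.
s := NewScope()
img := Placeholder(s, tf.Float) // fed as [batch, height, width, 1]
pooled := MaxPool(s, img,
    []int64{1, 2, 2, 1}, // ksize: 2x2 window over height and width
    []int64{1, 2, 2, 1}, // strides: step 2 over height and width
    "VALID")             // padding algorithm
if s.Err() != nil {
    panic(s.Err())
}
_ = pooled
```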

func MaxPool3D Uses

func MaxPool3D(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPool3DAttr) (output tf.Output)

Performs 3D max pooling on the input.

Arguments:

input: Shape `[batch, depth, rows, cols, channels]` tensor to pool over.
ksize: 1-D tensor of length 5. The size of the window for each dimension of

the input tensor. Must have `ksize[0] = ksize[4] = 1`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns The max pooled output tensor.

func MaxPool3DGrad Uses

func MaxPool3DGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPool3DGradAttr) (output tf.Output)

Computes gradients of max pooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: 1-D tensor of length 5. The size of the window for each dimension of

the input tensor. Must have `ksize[0] = ksize[4] = 1`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

func MaxPool3DGradGrad Uses

func MaxPool3DGradGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPool3DGradGradAttr) (output tf.Output)

Computes second-order gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: Output backprop of shape `[batch, depth, rows, cols, channels]`.
ksize: 1-D tensor of length 5. The size of the window for each dimension of

the input tensor. Must have `ksize[0] = ksize[4] = 1`.

strides: 1-D tensor of length 5. The stride of the sliding window for each

dimension of `input`. Must have `strides[0] = strides[4] = 1`.

padding: The type of padding algorithm to use.

Returns Gradients of gradients w.r.t. the input to `max_pool`.

func MaxPoolGrad Uses

func MaxPoolGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolGradAttr) (output tf.Output)

Computes gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: 4-D.  Gradients w.r.t. the output of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the

input tensor.

padding: The type of padding algorithm to use.

Returns Gradients w.r.t. the input to `max_pool`.

func MaxPoolGradGrad Uses

func MaxPoolGradGrad(scope *Scope, orig_input tf.Output, orig_output tf.Output, grad tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolGradGradAttr) (output tf.Output)

Computes second-order gradients of the maxpooling function.

Arguments:

orig_input: The original input tensor.
orig_output: The original output tensor.
grad: 4-D.  Gradients of gradients w.r.t. the input of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the

input tensor.

padding: The type of padding algorithm to use.

Returns Gradients of gradients w.r.t. the input to `max_pool`.

func MaxPoolGradGradWithArgmax Uses

func MaxPoolGradGradWithArgmax(scope *Scope, input tf.Output, grad tf.Output, argmax tf.Output, ksize []int64, strides []int64, padding string) (output tf.Output)

Computes second-order gradients of the maxpooling function.

Arguments:

input: The original input.
grad: 4-D with shape `[batch, height, width, channels]`.  Gradients w.r.t. the

input of `max_pool`.

argmax: The indices of the maximum values chosen for each output of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the

input tensor.

padding: The type of padding algorithm to use.

Returns Gradients of gradients w.r.t. the input of `max_pool`.

func MaxPoolGradWithArgmax Uses

func MaxPoolGradWithArgmax(scope *Scope, input tf.Output, grad tf.Output, argmax tf.Output, ksize []int64, strides []int64, padding string) (output tf.Output)

Computes gradients of the maxpooling function.

Arguments:

input: The original input.
grad: 4-D with shape `[batch, height, width, channels]`.  Gradients w.r.t. the

output of `max_pool`.

argmax: The indices of the maximum values chosen for each output of `max_pool`.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the

input tensor.

padding: The type of padding algorithm to use.

Returns Gradients w.r.t. the input of `max_pool`.

func MaxPoolWithArgmax Uses

func MaxPoolWithArgmax(scope *Scope, input tf.Output, ksize []int64, strides []int64, padding string, optional ...MaxPoolWithArgmaxAttr) (output tf.Output, argmax tf.Output)

Performs max pooling on the input and outputs both max values and indices.

The indices in `argmax` are flattened, so that a maximum value at position `[b, y, x, c]` becomes flattened index `((b * height + y) * width + x) * channels + c`.

The indices returned are always in `[0, height) x [0, width)` before flattening, even if padding is involved and the mathematically correct answer is outside (either negative or too large). This is a bug, but fixing it is difficult to do in a safe backwards compatible way, especially due to flattening.

Arguments:

input: 4-D with shape `[batch, height, width, channels]`.  Input to pool over.
ksize: The size of the window for each dimension of the input tensor.
strides: The stride of the sliding window for each dimension of the

input tensor.

padding: The type of padding algorithm to use.

Returns The max pooled output tensor. 4-D. The flattened indices of the max values chosen for each output.

func Maximum Uses

func Maximum(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the max of x and y (i.e. x > y ? x : y) element-wise.

*NOTE*: `Maximum` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Mean Uses

func Mean(scope *Scope, input tf.Output, reduction_indices tf.Output, optional ...MeanAttr) (output tf.Output)

Computes the mean of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
reduction_indices: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.

func Merge Uses

func Merge(scope *Scope, inputs []tf.Output) (output tf.Output, value_index tf.Output)

Forwards the value of an available tensor from `inputs` to `output`.

`Merge` waits for at least one of the tensors in `inputs` to become available. It is usually combined with `Switch` to implement branching.

`Merge` forwards the first tensor to become available to `output`, and sets `value_index` to its index in `inputs`.

Arguments:

inputs: The input tensors, exactly one of which will become available.

Returns Will be set to the available input tensor. The index of the chosen input tensor in `inputs`.

func MergeSummary Uses

func MergeSummary(scope *Scope, inputs []tf.Output) (summary tf.Output)

Merges summaries.

This op creates a [`Summary`](https://www.tensorflow.org/code/tensorflow/core/framework/summary.proto) protocol buffer that contains the union of all the values in the input summaries.

When the Op is run, it reports an `InvalidArgument` error if multiple values in the summaries to merge use the same tag.

Arguments:

inputs: Can be of any shape.  Each must contain serialized `Summary` protocol

buffers.

Returns Scalar. Serialized `Summary` protocol buffer.

func MergeV2Checkpoints Uses

func MergeV2Checkpoints(scope *Scope, checkpoint_prefixes tf.Output, destination_prefix tf.Output, optional ...MergeV2CheckpointsAttr) (o *tf.Operation)

V2 format specific: merges the metadata files of sharded checkpoints. The result is one logical checkpoint, with one physical metadata file and renamed data files.

Intended for "grouping" multiple checkpoints in a sharded checkpoint setup.

If delete_old_dirs is true, attempts to delete recursively the dirname of each path in the input checkpoint_prefixes. This is useful when those paths are non user-facing temporary locations.

Arguments:

checkpoint_prefixes: prefixes of V2 checkpoints to merge.
destination_prefix: scalar.  The desired final prefix.  Allowed to be the same

as one of the checkpoint_prefixes.

Returns the created operation.

func Mfcc Uses

func Mfcc(scope *Scope, spectrogram tf.Output, sample_rate tf.Output, optional ...MfccAttr) (output tf.Output)

Transforms a spectrogram into a form that's useful for speech recognition.

Mel Frequency Cepstral Coefficients are a way of representing audio data that's been effective as an input feature for machine learning. They are created by taking the spectrum of a spectrogram (a 'cepstrum'), and discarding some of the higher frequencies that are less significant to the human ear. They have a long history in the speech recognition world, and https://en.wikipedia.org/wiki/Mel-frequency_cepstrum is a good resource to learn more.

Arguments:

spectrogram: Typically produced by the Spectrogram op, with magnitude_squared

set to true.

sample_rate: How many samples per second the source audio used.

func Min Uses

func Min(scope *Scope, input tf.Output, reduction_indices tf.Output, optional ...MinAttr) (output tf.Output)

Computes the minimum of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
reduction_indices: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.

func Minimum Uses

func Minimum(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the min of x and y (i.e. x < y ? x : y) element-wise.

*NOTE*: `Minimum` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func MirrorPad Uses

func MirrorPad(scope *Scope, input tf.Output, paddings tf.Output, mode string) (output tf.Output)

Pads a tensor with mirrored values.

This operation pads `input` with mirrored values according to the `paddings` you specify. `paddings` is an integer tensor with shape `[n, 2]`, where n is the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates how many values to add before the contents of `input` in that dimension, and `paddings[D, 1]` indicates how many values to add after the contents of `input` in that dimension. Both `paddings[D, 0]` and `paddings[D, 1]` must be no greater than `input.dim_size(D)` (or `input.dim_size(D) - 1`) if `copy_border` is true (if false, respectively).

The padded size of each dimension D of the output is:

`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`

For example:

```
# 't' is [[1, 2, 3], [4, 5, 6]].
# 'paddings' is [[1, 1], [2, 2]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[2, 1, 1, 2, 3, 3, 2]
                      [2, 1, 1, 2, 3, 3, 2]
                      [5, 4, 4, 5, 6, 6, 5]
                      [5, 4, 4, 5, 6, 6, 5]]
```

Arguments:

input: The input tensor to be padded.
paddings: A two-column matrix specifying the padding sizes. The number of

rows must be the same as the rank of `input`.

mode: Either `REFLECT` or `SYMMETRIC`. In reflect mode the padded regions

do not include the borders, while in symmetric mode the padded regions do include the borders. For example, if `input` is `[1, 2, 3]` and `paddings` is `[0, 2]`, then the output is `[1, 2, 3, 2, 1]` in reflect mode, and it is `[1, 2, 3, 3, 2]` in symmetric mode.

Returns The padded tensor.
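
The SYMMETRIC example above, rebuilt as a minimal Go sketch in the package example style:

```go
// Mirror-pad a 2x3 matrix to 4x7 with borders included.
s := NewScope()
t := Const(s, [][]int32{{1, 2, 3}, {4, 5, 6}})
paddings := Const(s, [][]int32{{1, 1}, {2, 2}})
padded := MirrorPad(s, t, paddings, "SYMMETRIC")
if s.Err() != nil {
    panic(s.Err())
}
fmt.Println(padded.Shape()) // [4, 7]
```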

func MirrorPadGrad Uses

func MirrorPadGrad(scope *Scope, input tf.Output, paddings tf.Output, mode string) (output tf.Output)

Gradient op for `MirrorPad` op. This op folds a mirror-padded tensor.

This operation folds the padded areas of `input` by `MirrorPad` according to the `paddings` you specify. `paddings` must be the same as `paddings` argument given to the corresponding `MirrorPad` op.

The folded size of each dimension D of the output is:

`input.dim_size(D) - paddings(D, 0) - paddings(D, 1)`

For example:

```
# 't' is [[1, 2, 3], [4, 5, 6], [7, 8, 9]].
# 'paddings' is [[0, 1], [0, 1]].
# 'mode' is SYMMETRIC.
# rank of 't' is 2.
pad(t, paddings) ==> [[ 1,  5]
                      [11, 28]]
```

Arguments:

input: The input tensor to be folded.
paddings: A two-column matrix specifying the padding sizes. The number of

rows must be the same as the rank of `input`.

mode: The mode used in the `MirrorPad` op.

Returns The folded tensor.

func Mod Uses

func Mod(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns element-wise remainder of division. This emulates C semantics in that the result here is consistent with a truncating divide. E.g. `truncate(x / y) * y + truncate_mod(x, y) = x`.

*NOTE*: `Mod` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Mul Uses

func Mul(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x * y element-wise.

*NOTE*: `Mul` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Multinomial Uses

func Multinomial(scope *Scope, logits tf.Output, num_samples tf.Output, optional ...MultinomialAttr) (output tf.Output)

Draws samples from a multinomial distribution.

Arguments:

logits: 2-D Tensor with shape `[batch_size, num_classes]`.  Each slice `[i, :]`

represents the unnormalized log probabilities for all classes.

num_samples: 0-D.  Number of independent samples to draw for each row slice.

Returns 2-D Tensor with shape `[batch_size, num_samples]`. Each slice `[i, :]` contains the drawn class labels with range `[0, num_classes)`.

func MutableDenseHashTableV2 Uses

func MutableDenseHashTableV2(scope *Scope, empty_key tf.Output, value_dtype tf.DataType, optional ...MutableDenseHashTableV2Attr) (table_handle tf.Output)

Creates an empty hash table that uses tensors as the backing store.

It uses "open addressing" with quadratic reprobing to resolve collisions.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Arguments:

empty_key: The key used to represent empty key buckets internally. Must not

be used in insert or lookup operations.

value_dtype: Type of the table values.

Returns Handle to a table.

func MutableHashTableOfTensorsV2 Uses

func MutableHashTableOfTensorsV2(scope *Scope, key_dtype tf.DataType, value_dtype tf.DataType, optional ...MutableHashTableOfTensorsV2Attr) (table_handle tf.Output)

Creates an empty hash table.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a vector. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Arguments:

key_dtype: Type of the table keys.
value_dtype: Type of the table values.

Returns Handle to a table.

func MutableHashTableV2 Uses

func MutableHashTableV2(scope *Scope, key_dtype tf.DataType, value_dtype tf.DataType, optional ...MutableHashTableV2Attr) (table_handle tf.Output)

Creates an empty hash table.

This op creates a mutable hash table, specifying the type of its keys and values. Each value must be a scalar. Data can be inserted into the table using the insert operations. It does not support the initialization operation.

Arguments:

key_dtype: Type of the table keys.
value_dtype: Type of the table values.

Returns Handle to a table.

func Neg Uses

func Neg(scope *Scope, x tf.Output) (y tf.Output)

Computes numerical negative value element-wise.

I.e., \\(y = -x\\).

func NextIteration Uses

func NextIteration(scope *Scope, data tf.Output) (output tf.Output)

Makes its input available to the next iteration.

Arguments:

data: The tensor to be made available to the next iteration.

Returns The same tensor as `data`.

func NoOp Uses

func NoOp(scope *Scope) (o *tf.Operation)

Does nothing. Only useful as a placeholder for control edges.

Returns the created operation.

func NonMaxSuppression Uses

func NonMaxSuppression(scope *Scope, boxes tf.Output, scores tf.Output, max_output_size tf.Output, optional ...NonMaxSuppressionAttr) (selected_indices tf.Output)

Greedily selects a subset of bounding boxes in descending order of score, pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm. The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the `tf.gather` operation. For example:

selected_indices = tf.image.non_max_suppression(
    boxes, scores, max_output_size, iou_threshold)
selected_boxes = tf.gather(boxes, selected_indices)

Arguments:

boxes: A 2-D float tensor of shape `[num_boxes, 4]`.
scores: A 1-D float tensor of shape `[num_boxes]` representing a single

score corresponding to each box (each row of boxes).

max_output_size: A scalar integer tensor representing the maximum number of

boxes to be selected by non max suppression.

Returns A 1-D integer tensor of shape `[M]` representing the selected indices from the boxes tensor, where `M <= max_output_size`.

func NonMaxSuppressionV2 Uses

func NonMaxSuppressionV2(scope *Scope, boxes tf.Output, scores tf.Output, max_output_size tf.Output, iou_threshold tf.Output) (selected_indices tf.Output)

Greedily selects a subset of bounding boxes in descending order of score, pruning away boxes that have high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes are supplied as [y1, x1, y2, x2], where (y1, x1) and (y2, x2) are the coordinates of any diagonal pair of box corners and the coordinates can be provided as normalized (i.e., lying in the interval [0, 1]) or absolute. Note that this algorithm is agnostic to where the origin is in the coordinate system and is invariant to orthogonal transformations and translations of it; thus translations or reflections of the coordinate system result in the same boxes being selected by the algorithm.

The output of this operation is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The bounding box coordinates corresponding to the selected indices can then be obtained using the `tf.gather` operation. For example:

selected_indices = tf.image.non_max_suppression_v2(
    boxes, scores, max_output_size, iou_threshold)
selected_boxes = tf.gather(boxes, selected_indices)

Arguments:

boxes: A 2-D float tensor of shape `[num_boxes, 4]`.
scores: A 1-D float tensor of shape `[num_boxes]` representing a single

score corresponding to each box (each row of boxes).

max_output_size: A scalar integer tensor representing the maximum number of

boxes to be selected by non max suppression.

iou_threshold: A 0-D float tensor representing the threshold for deciding whether

boxes overlap too much with respect to IOU.

Returns A 1-D integer tensor of shape `[M]` representing the selected indices from the boxes tensor, where `M <= max_output_size`.
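
A Go analogue of the snippet above, as a hedged sketch in the package example style (box and score values are fed at Run time):

```go
// Select at most 10 boxes with pairwise IOU <= 0.5, then gather their
// coordinates with the package's Gather wrapper.
s := NewScope()
boxes := Placeholder(s, tf.Float)  // [num_boxes, 4]
scores := Placeholder(s, tf.Float) // [num_boxes]
selected := NonMaxSuppressionV2(s, boxes, scores,
    Const(s, int32(10)),    // max_output_size
    Const(s, float32(0.5))) // iou_threshold
selectedBoxes := Gather(s, boxes, selected)
if s.Err() != nil {
    panic(s.Err())
}
_ = selectedBoxes
```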

func NotEqual Uses

func NotEqual(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns the truth value of (x != y) element-wise.

*NOTE*: `NotEqual` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func OneHot Uses

func OneHot(scope *Scope, indices tf.Output, depth tf.Output, on_value tf.Output, off_value tf.Output, optional ...OneHotAttr) (output tf.Output)

Returns a one-hot tensor.

The locations represented by indices in `indices` take value `on_value`, while all other locations take value `off_value`.

If the input `indices` is rank `N`, the output will have rank `N+1`. The new axis is created at dimension `axis` (default: the new axis is appended at the end).

If `indices` is a scalar the output shape will be a vector of length `depth`.

If `indices` is a vector of length `features`, the output shape will be:

```
features x depth if axis == -1
depth x features if axis == 0
```

If `indices` is a matrix (batch) with shape `[batch, features]`, the output shape will be:

```
batch x features x depth if axis == -1
batch x depth x features if axis == 1
depth x batch x features if axis == 0
```

Examples:

Suppose that

```
indices = [0, 2, -1, 1]
depth = 3
on_value = 5.0
off_value = 0.0
axis = -1
```

Then output is `[4 x 3]`:

```
output =
  [5.0 0.0 0.0]  // one_hot(0)
  [0.0 0.0 5.0]  // one_hot(2)
  [0.0 0.0 0.0]  // one_hot(-1)
  [0.0 5.0 0.0]  // one_hot(1)
```

Suppose that

```
indices = [0, 2, -1, 1]
depth = 3
on_value = 0.0
off_value = 3.0
axis = 0
```

Then output is `[3 x 4]`:

```
output =
  [0.0 3.0 3.0 3.0]
  [3.0 3.0 3.0 0.0]
  [3.0 3.0 3.0 3.0]
  [3.0 0.0 3.0 3.0]
//  ^                one_hot(0)
//      ^            one_hot(2)
//          ^        one_hot(-1)
//              ^    one_hot(1)
```

Suppose that

```
indices = [[0, 2], [1, -1]]
depth = 3
on_value = 1.0
off_value = 0.0
axis = -1
```

Then output is `[2 x 2 x 3]`:

```
output =
  [
    [1.0, 0.0, 0.0]  // one_hot(0)
    [0.0, 0.0, 1.0]  // one_hot(2)
  ][
    [0.0, 1.0, 0.0]  // one_hot(1)
    [0.0, 0.0, 0.0]  // one_hot(-1)
  ]
```

Arguments:

indices: A tensor of indices.
depth: A scalar defining the depth of the one hot dimension.
on_value: A scalar defining the value to fill in output when `indices[j] = i`.
off_value: A scalar defining the value to fill in output when `indices[j] != i`.

Returns The one-hot tensor.
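
The first example above (depth 3, default `axis = -1`) as a minimal Go sketch:

```go
s := NewScope()
indices := Const(s, []int32{0, 2, -1, 1})
oh := OneHot(s, indices,
    Const(s, int32(3)),     // depth
    Const(s, float32(5.0)), // on_value
    Const(s, float32(0.0))) // off_value
if s.Err() != nil {
    panic(s.Err())
}
fmt.Println(oh.Shape()) // [4, 3], the matrix shown in the first example
```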

func OnesLike Uses

func OnesLike(scope *Scope, x tf.Output) (y tf.Output)

Returns a tensor of ones with the same shape and type as x.

Arguments:

x: a tensor of type T.

Returns a tensor of the same shape and type as x but filled with ones.

func OrderedMapClear Uses

func OrderedMapClear(scope *Scope, dtypes []tf.DataType, optional ...OrderedMapClearAttr) (o *tf.Operation)

Op removes all elements in the underlying container.

Returns the created operation.

func OrderedMapIncompleteSize Uses

func OrderedMapIncompleteSize(scope *Scope, dtypes []tf.DataType, optional ...OrderedMapIncompleteSizeAttr) (size tf.Output)

Op returns the number of incomplete elements in the underlying container.

func OrderedMapPeek Uses

func OrderedMapPeek(scope *Scope, key tf.Output, indices tf.Output, dtypes []tf.DataType, optional ...OrderedMapPeekAttr) (values []tf.Output)

Op peeks at the values at the specified key. If the underlying container does not contain this key, this op will block until it does. This Op is optimized for performance.

func OrderedMapSize Uses

func OrderedMapSize(scope *Scope, dtypes []tf.DataType, optional ...OrderedMapSizeAttr) (size tf.Output)

Op returns the number of elements in the underlying container.

func OrderedMapStage Uses

func OrderedMapStage(scope *Scope, key tf.Output, indices tf.Output, values []tf.Output, dtypes []tf.DataType, optional ...OrderedMapStageAttr) (o *tf.Operation)

Stage (key, values) in the underlying container which behaves like an ordered associative container. Elements are ordered by key.

Arguments:

key: int64

values: a list of tensors

dtypes: A list of data types that inserted values should adhere to.

Returns the created operation.

func OrderedMapUnstage Uses

func OrderedMapUnstage(scope *Scope, key tf.Output, indices tf.Output, dtypes []tf.DataType, optional ...OrderedMapUnstageAttr) (values []tf.Output)

Op removes and returns the values associated with the key from the underlying container. If the underlying container does not contain this key, the op will block until it does.

func OrderedMapUnstageNoKey Uses

func OrderedMapUnstageNoKey(scope *Scope, indices tf.Output, dtypes []tf.DataType, optional ...OrderedMapUnstageNoKeyAttr) (key tf.Output, values []tf.Output)

Op removes and returns the (key, value) element with the smallest key from the underlying container. If the underlying container does not contain elements, the op will block until it does.

func Pack Uses

func Pack(scope *Scope, values []tf.Output, optional ...PackAttr) (output tf.Output)

Packs a list of `N` rank-`R` tensors into one rank-`(R+1)` tensor.

Packs the `N` tensors in `values` into a tensor with rank one higher than each tensor in `values`, by packing them along the `axis` dimension. Given a list of tensors of shape `(A, B, C)`;

if `axis == 0` then the `output` tensor will have the shape `(N, A, B, C)`. if `axis == 1` then the `output` tensor will have the shape `(A, N, B, C)`. Etc.

For example:

```
# 'x' is [1, 4]
# 'y' is [2, 5]
# 'z' is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
```

This is the opposite of `unpack`.

Arguments:

values: Must be of same shape and type.

Returns The packed tensor.
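
A minimal sketch of both pack examples above, assuming the generated `PackAxis` attribute helper sets `axis`:

```go
s := NewScope()
x := Const(s, []int32{1, 4})
y := Const(s, []int32{2, 5})
z := Const(s, []int32{3, 6})
p0 := Pack(s, []tf.Output{x, y, z})              // [[1 4] [2 5] [3 6]]
p1 := Pack(s, []tf.Output{x, y, z}, PackAxis(1)) // [[1 2 3] [4 5 6]]
if s.Err() != nil {
    panic(s.Err())
}
_, _ = p0, p1
```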

func Pad Uses

func Pad(scope *Scope, input tf.Output, paddings tf.Output) (output tf.Output)

Pads a tensor with zeros.

This operation pads `input` with zeros according to the `paddings` you specify. `paddings` is an integer tensor with shape `[Dn, 2]`, where n is the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates how many zeros to add before the contents of `input` in that dimension, and `paddings[D, 1]` indicates how many zeros to add after the contents of `input` in that dimension.

The padded size of each dimension D of the output is:

`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`

For example:

```
# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
                      [0, 0, 1, 1, 0, 0]
                      [0, 0, 2, 2, 0, 0]
                      [0, 0, 0, 0, 0, 0]]
```
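
The zero-padding example above as a minimal Go sketch:

```go
s := NewScope()
t := Const(s, [][]int32{{1, 1}, {2, 2}})
paddings := Const(s, [][]int32{{1, 1}, {2, 2}})
padded := Pad(s, t, paddings)
if s.Err() != nil {
    panic(s.Err())
}
fmt.Println(padded.Shape()) // [4, 6]
```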

func PadV2 Uses

func PadV2(scope *Scope, input tf.Output, paddings tf.Output, constant_values tf.Output) (output tf.Output)

Pads a tensor.

This operation pads `input` according to the `paddings` and `constant_values` you specify. `paddings` is an integer tensor with shape `[Dn, 2]`, where n is the rank of `input`. For each dimension D of `input`, `paddings[D, 0]` indicates how many padding values to add before the contents of `input` in that dimension, and `paddings[D, 1]` indicates how many padding values to add after the contents of `input` in that dimension. `constant_values` is a scalar tensor of the same type as `input` that indicates the value to use for padding `input`.

The padded size of each dimension D of the output is:

`paddings(D, 0) + input.dim_size(D) + paddings(D, 1)`

For example:

```
# 't' is [[1, 1], [2, 2]]
# 'paddings' is [[1, 1], [2, 2]]
# 'constant_values' is 0
# rank of 't' is 2
pad(t, paddings) ==> [[0, 0, 0, 0, 0, 0]
                      [0, 0, 1, 1, 0, 0]
                      [0, 0, 2, 2, 0, 0]
                      [0, 0, 0, 0, 0, 0]]
```

func PaddedBatchDataset Uses

func PaddedBatchDataset(scope *Scope, input_dataset tf.Output, batch_size tf.Output, padded_shapes []tf.Output, padding_values []tf.Output, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that batches and pads `batch_size` elements from the input.

Arguments:

batch_size: A scalar representing the number of elements to accumulate in a

batch.

padded_shapes: A list of int64 tensors representing the desired padded shapes

of the corresponding output components. These shapes may be partially specified, using `-1` to indicate that a particular dimension should be padded to the maximum size of all batch elements.

padding_values: A list of scalars containing the padding value to use for

each of the outputs.

func PaddingFIFOQueueV2 Uses

func PaddingFIFOQueueV2(scope *Scope, component_types []tf.DataType, optional ...PaddingFIFOQueueV2Attr) (handle tf.Output)

A queue that produces elements in first-in first-out order.

Variable-size shapes are allowed by setting the corresponding shape dimensions to 0 in the shape attr. In this case DequeueMany will pad up to the maximum size of any given element in the minibatch. See below for details.

Arguments:

component_types: The type of each component in a value.

Returns The handle to the queue.

func ParallelConcat Uses

func ParallelConcat(scope *Scope, values []tf.Output, shape tf.Shape) (output tf.Output)

Concatenates a list of `N` tensors along the first dimension.

The input tensors are all required to have size 1 in the first dimension.

For example:

```
# 'x' is [[1, 4]]
# 'y' is [[2, 5]]
# 'z' is [[3, 6]]
parallel_concat([x, y, z]) => [[1, 4], [2, 5], [3, 6]]  # Pack along first dim.
```

The difference between concat and parallel_concat is that concat requires all of the inputs be computed before the operation will begin but doesn't require that the input shapes be known during graph construction. Parallel concat will copy pieces of the input into the output as they become available; in some situations this can provide a performance benefit.

Arguments:

values: Tensors to be concatenated. All must have size 1 in the first dimension

and same shape.

shape: the final shape of the result; should be equal to the shapes of any input

but with the number of input values in the first dimension.

Returns The concatenated tensor.

func ParallelDynamicStitch Uses

func ParallelDynamicStitch(scope *Scope, indices []tf.Output, data []tf.Output) (merged tf.Output)

Interleave the values from the `data` tensors into a single tensor.

Builds a merged tensor such that

```python
merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
```

For example, if each `indices[m]` is scalar or vector, we have

```python
# Scalar indices:
merged[indices[m], ...] = data[m][...]

# Vector indices:
merged[indices[m][i], ...] = data[m][i, ...]
```

Each `data[i].shape` must start with the corresponding `indices[i].shape`, and the rest of `data[i].shape` must be constant w.r.t. `i`. That is, we must have `data[i].shape = indices[i].shape + constant`. In terms of this `constant`, the output shape is

merged.shape = [max(indices)] + constant

Values may be merged in parallel, so if an index appears in both `indices[m][i]` and `indices[n][j]`, the result may be invalid. This differs from the normal DynamicStitch operator that defines the behavior in that case.

For example:

```python
indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
          [51, 52], [61, 62]]
```

This method can be used to merge partitions created by `dynamic_partition` as illustrated in the following example:

```python
# Apply a function (increments x_i) to elements for which a certain
# condition applies (x_i != -1 in this example).
x = tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask = tf.not_equal(x, tf.constant(-1.))
partitioned_data = tf.dynamic_partition(
    x, tf.cast(condition_mask, tf.int32), 2)
partitioned_data[1] = partitioned_data[1] + 1.0
condition_indices = tf.dynamic_partition(
    tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32), 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x=[1.1, -1., 6.2, 5.3, -1, 8.4]; the -1. values remain unchanged.
```

(Illustration: https://www.tensorflow.org/images/DynamicStitch.png)

func ParameterizedTruncatedNormal Uses

func ParameterizedTruncatedNormal(scope *Scope, shape tf.Output, means tf.Output, stdevs tf.Output, minvals tf.Output, maxvals tf.Output, optional ...ParameterizedTruncatedNormalAttr) (output tf.Output)

Outputs random values from a normal distribution. The parameters may each be a scalar which applies to the entire output, or a vector of length shape[0] which stores the parameters for each batch.

Arguments:

shape: The shape of the output tensor. Batches are indexed by the 0th dimension.
means: The mean parameter of each batch.
stdevs: The standard deviation parameter of each batch. Must be greater than 0.
minvals: The minimum cutoff. May be -infinity.
maxvals: The maximum cutoff. May be +infinity, and must be more than the minval

for each batch.

Returns A matrix of shape num_batches x samples_per_batch, filled with random truncated normal values using the parameters for each row.

func ParseExample Uses

func ParseExample(scope *Scope, serialized tf.Output, names tf.Output, sparse_keys []tf.Output, dense_keys []tf.Output, dense_defaults []tf.Output, sparse_types []tf.DataType, dense_shapes []tf.Shape) (sparse_indices []tf.Output, sparse_values []tf.Output, sparse_shapes []tf.Output, dense_values []tf.Output)

Transforms a vector of brain.Example protos (as strings) into typed tensors.

Arguments:

serialized: A vector containing a batch of binary serialized Example protos.
names: A vector containing the names of the serialized protos.

May contain, for example, table key (descriptive) names for the corresponding serialized protos. These are purely useful for debugging purposes, and the presence of values here has no effect on the output. May also be an empty vector if no names are available. If non-empty, this vector must be the same length as "serialized".

sparse_keys: A list of Nsparse string Tensors (scalars).

The keys expected in the Examples' features associated with sparse values.

dense_keys: A list of Ndense string Tensors (scalars).

The keys expected in the Examples' features associated with dense values.

dense_defaults: A list of Ndense Tensors (some may be empty).

dense_defaults[j] provides default values when the example's feature_map lacks dense_key[j]. If an empty Tensor is provided for dense_defaults[j], then the Feature dense_keys[j] is required. The input type is inferred from dense_defaults[j], even when it's empty. If dense_defaults[j] is not empty, and dense_shapes[j] is fully defined, then the shape of dense_defaults[j] must match that of dense_shapes[j]. If dense_shapes[j] has an undefined major dimension (variable strides dense feature), dense_defaults[j] must contain a single element: the padding element.

sparse_types: A list of Nsparse types; the data types of data in each Feature

given in sparse_keys. Currently the ParseExample supports DT_FLOAT (FloatList), DT_INT64 (Int64List), and DT_STRING (BytesList).

dense_shapes: A list of Ndense shapes; the shapes of data in each Feature

given in dense_keys. The number of elements in the Feature corresponding to dense_key[j] must always equal dense_shapes[j].NumEntries(). If dense_shapes[j] == (D0, D1, ..., DN) then the shape of output Tensor dense_values[j] will be (|serialized|, D0, D1, ..., DN): The dense outputs are just the inputs row-stacked by batch. This works for dense_shapes[j] = (-1, D1, ..., DN). In this case the shape of the output Tensor dense_values[j] will be (|serialized|, M, D1, .., DN), where M is the maximum number of blocks of elements of length D1 * .... * DN, across all minibatch entries in the input. Any minibatch entry with less than M blocks of elements of length D1 * ... * DN will be padded with the corresponding default_value scalar element along the second dimension.

func ParseSingleSequenceExample Uses

func ParseSingleSequenceExample(scope *Scope, serialized tf.Output, feature_list_dense_missing_assumed_empty tf.Output, context_sparse_keys []tf.Output, context_dense_keys []tf.Output, feature_list_sparse_keys []tf.Output, feature_list_dense_keys []tf.Output, context_dense_defaults []tf.Output, debug_name tf.Output, optional ...ParseSingleSequenceExampleAttr) (context_sparse_indices []tf.Output, context_sparse_values []tf.Output, context_sparse_shapes []tf.Output, context_dense_values []tf.Output, feature_list_sparse_indices []tf.Output, feature_list_sparse_values []tf.Output, feature_list_sparse_shapes []tf.Output, feature_list_dense_values []tf.Output)

Transforms a scalar brain.SequenceExample proto (as strings) into typed tensors.

Arguments:

serialized: A scalar containing a binary serialized SequenceExample proto.
feature_list_dense_missing_assumed_empty: A vector listing the

FeatureList keys which may be missing from the SequenceExample. If the associated FeatureList is missing, it is treated as empty. By default, any FeatureList not listed in this vector must exist in the SequenceExample.

context_sparse_keys: A list of Ncontext_sparse string Tensors (scalars).

The keys expected in the Examples' features associated with context_sparse values.

context_dense_keys: A list of Ncontext_dense string Tensors (scalars).

The keys expected in the SequenceExamples' context features associated with dense values.

feature_list_sparse_keys: A list of Nfeature_list_sparse string Tensors

(scalars). The keys expected in the FeatureLists associated with sparse values.

feature_list_dense_keys: A list of Nfeature_list_dense string Tensors (scalars).

The keys expected in the SequenceExamples' feature_lists associated with lists of dense values.

context_dense_defaults: A list of Ncontext_dense Tensors (some may be empty).

context_dense_defaults[j] provides default values when the SequenceExample's context map lacks context_dense_key[j]. If an empty Tensor is provided for context_dense_defaults[j], then the Feature context_dense_keys[j] is required. The input type is inferred from context_dense_defaults[j], even when it's empty. If context_dense_defaults[j] is not empty, its shape must match context_dense_shapes[j].

debug_name: A scalar containing the name of the serialized proto.

May contain, for example, table key (descriptive) name for the corresponding serialized proto. This is purely useful for debugging purposes, and the presence of values here has no effect on the output. May also be an empty scalar if no name is available.

func ParseTensor Uses

func ParseTensor(scope *Scope, serialized tf.Output, out_type tf.DataType) (output tf.Output)

Transforms a serialized tensorflow.TensorProto proto into a Tensor.

Arguments:

serialized: A scalar string containing a serialized TensorProto proto.
out_type: The type of the serialized tensor.  The provided type must match the

type of the serialized tensor and no implicit conversion will take place.

Returns A Tensor of type `out_type`.

func Placeholder Uses

func Placeholder(scope *Scope, dtype tf.DataType, optional ...PlaceholderAttr) (output tf.Output)

A placeholder op for a value that will be fed into the computation.

N.B. This operation will fail with an error if it is executed. It is intended as a way to represent a value that will always be fed, and to provide attrs that enable the fed value to be checked at runtime.

Arguments:

dtype: The type of elements in the tensor.

Returns A placeholder tensor that must be replaced using the feed mechanism.

func PlaceholderV2 Uses

func PlaceholderV2(scope *Scope, dtype tf.DataType, shape tf.Shape) (output tf.Output)

A placeholder op for a value that will be fed into the computation.

DEPRECATED at GraphDef version 23: Placeholder now behaves the same as PlaceholderV2.

N.B. This operation will fail with an error if it is executed. It is intended as a way to represent a value that will always be fed, and to provide attrs that enable the fed value to be checked at runtime.

Arguments:

dtype: The type of elements in the tensor.
shape: The shape of the tensor. The shape can be any partially-specified

shape. To be unconstrained, pass in a shape with unknown rank.

Returns A placeholder tensor that must be replaced using the feed mechanism.

func PlaceholderWithDefault Uses

func PlaceholderWithDefault(scope *Scope, input tf.Output, shape tf.Shape) (output tf.Output)

A placeholder op that passes through `input` when its output is not fed.

Arguments:

input: The default value to produce when `output` is not fed.
shape: The (possibly partial) shape of the tensor.

Returns A placeholder tensor that defaults to `input` if it is not fed.
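
A minimal sketch of a feedable flag with a default, using the tensorflow package's ScalarShape helper:

```go
// Reads false unless a value is fed for it at Run time.
s := NewScope()
training := PlaceholderWithDefault(s,
    Const(s, false),  // default value when not fed
    tf.ScalarShape()) // shape: scalar
if s.Err() != nil {
    panic(s.Err())
}
_ = training
```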

func Polygamma Uses

func Polygamma(scope *Scope, a tf.Output, x tf.Output) (z tf.Output)

Compute the polygamma function \\(\psi^{(n)}(x)\\).

The polygamma function is defined as:

\\(\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)\\)

where \\(\psi(x)\\) is the digamma function.

func PopulationCount Uses

func PopulationCount(scope *Scope, x tf.Output) (y tf.Output)

Computes element-wise population count (a.k.a. popcount, bitsum, bitcount).

For each entry in `x`, calculates the number of `1` (on) bits in the binary representation of that entry.

**NOTE**: It is more efficient to first `tf.bitcast` your tensors into `int32` or `int64` and perform the bitcount on the result, than to feed in 8- or 16-bit inputs and then aggregate the resulting counts.

func Pow Uses

func Pow(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Computes the power of one value to another.

Given a tensor `x` and a tensor `y`, this operation computes \\(x^y\\) for corresponding elements in `x` and `y`. For example:

```
# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]
```

func PrefetchDataset Uses

func PrefetchDataset(scope *Scope, input_dataset tf.Output, buffer_size tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that asynchronously prefetches elements from `input_dataset`.

Arguments:

buffer_size: The maximum number of elements to buffer in an iterator over

this dataset.

func PreventGradient Uses

func PreventGradient(scope *Scope, input tf.Output, optional ...PreventGradientAttr) (output tf.Output)

An identity op that triggers an error if a gradient is requested.

When executed in a graph, this op outputs its input tensor as-is.

When building ops to compute gradients, the TensorFlow gradient system will return an error when trying to lookup the gradient of this op, because no gradient must ever be registered for this function. This op exists to prevent subtle bugs from silently returning unimplemented gradients in some corner cases.

Arguments:

input: any tensor.

Returns the same input tensor.

func Print Uses

func Print(scope *Scope, input tf.Output, data []tf.Output, optional ...PrintAttr) (output tf.Output)

Prints a list of tensors.

Passes `input` through to `output` and prints `data` when evaluating.

Arguments:

input: The tensor passed to `output`
data: A list of tensors to print out when op is evaluated.

Returns The unmodified `input` tensor.

func PriorityQueueV2 Uses

func PriorityQueueV2(scope *Scope, shapes []tf.Shape, optional ...PriorityQueueV2Attr) (handle tf.Output)

A queue that produces elements sorted by the first component value.

Note that the PriorityQueue requires the first component of any element to be a scalar int64, in addition to the other elements declared by component_types. Therefore calls to Enqueue and EnqueueMany (resp. Dequeue and DequeueMany) on a PriorityQueue will all require (resp. output) one extra entry in their input (resp. output) lists.

Arguments:

shapes: The shape of each component in a value. The length of this attr must

be either 0 or the same as the length of component_types. If the length of this attr is 0, the shapes of queue elements are not constrained, and only one element may be dequeued at a time.

Returns The handle to the queue.

func Prod Uses

func Prod(scope *Scope, input tf.Output, reduction_indices tf.Output, optional ...ProdAttr) (output tf.Output)

Computes the product of elements across dimensions of a tensor.

Reduces `input` along the dimensions given in `reduction_indices`. Unless `keep_dims` is true, the rank of the tensor is reduced by 1 for each entry in `reduction_indices`. If `keep_dims` is true, the reduced dimensions are retained with length 1.

Arguments:

input: The tensor to reduce.
reduction_indices: The dimensions to reduce. Must be in the range

`[-rank(input), rank(input))`.

Returns The reduced tensor.

func Qr Uses

func Qr(scope *Scope, input tf.Output, optional ...QrAttr) (q tf.Output, r tf.Output)

Computes the QR decompositions of one or more matrices.

Computes the QR decomposition of each inner matrix in `tensor` such that `tensor[..., :, :] = q[..., :, :] * r[..., :, :]`.

```python
# a is a tensor.
# q is a tensor of orthonormal matrices.
# r is a tensor of upper triangular matrices.
q, r = qr(a)
q_full, r_full = qr(a, full_matrices=True)
```

Arguments:

input: A tensor of shape `[..., M, N]` whose inner-most 2 dimensions

form matrices of size `[M, N]`. Let `P` be the minimum of `M` and `N`.

Returns Orthonormal basis for range of `a`. If `full_matrices` is `False` then shape is `[..., M, P]`; if `full_matrices` is `True` then shape is `[..., M, M]`. Triangular factor. If `full_matrices` is `False` then shape is `[..., P, N]`. If `full_matrices` is `True` then shape is `[..., M, N]`.

func QuantizeAndDequantize Uses

func QuantizeAndDequantize(scope *Scope, input tf.Output, optional ...QuantizeAndDequantizeAttr) (output tf.Output)

Use QuantizeAndDequantizeV2 instead.

DEPRECATED at GraphDef version 22: Replaced by QuantizeAndDequantizeV2

func QuantizeAndDequantizeV2 Uses

func QuantizeAndDequantizeV2(scope *Scope, input tf.Output, input_min tf.Output, input_max tf.Output, optional ...QuantizeAndDequantizeV2Attr) (output tf.Output)

Quantizes then dequantizes a tensor.

This op simulates the precision loss from the quantized forward pass by:

1. Quantizing the tensor to fixed point numbers, which should match the target quantization method when it is used in inference.
2. Dequantizing it back to floating point numbers for the following ops, most likely matmul.

There are different ways to quantize. This version does not use the full range of the output type, choosing to elide the lowest possible value for symmetry (e.g., output range is -127 to 127, not -128 to 127 for signed 8 bit quantization), so that 0.0 maps to 0.

To perform this op, we first find the range of values in our tensor. The range we use is always centered on 0, so we find m such that

1. m = max(abs(input_min), abs(input_max)) if range_given is true,
2. m = max(abs(min_elem(input)), abs(max_elem(input))) otherwise.

Our input tensor range is then [-m, m].

Next, we choose our fixed-point quantization buckets, [min_fixed, max_fixed]. If signed_input is true, this is

[min_fixed, max_fixed] =
    [-((1 << (num_bits - 1)) - 1), (1 << (num_bits - 1)) - 1].

Otherwise, if signed_input is false, the fixed-point range is

[min_fixed, max_fixed] = [0, (1 << num_bits) - 1].

From this we compute our scaling factor, s:

s = (max_fixed - min_fixed) / (2 * m).

Now we can quantize and dequantize the elements of our tensor. An element e is transformed into e':

e' = (e * s).round_to_nearest() / s.

Note that we have a different number of buckets in the signed vs. unsigned cases. For example, if num_bits == 8, we get 254 buckets in the signed case vs. 255 in the unsigned case.

For example, suppose num_bits = 8 and m = 1. Then

[min_fixed, max_fixed] = [-127, 127], and
s = (127 + 127) / 2 = 127.

Given the vector {-1, -0.5, 0, 0.3}, this is quantized to {-127, -63, 0, 38}, and dequantized to {-1, -63.0/127, 0, 38.0/127}.
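The arithmetic above is easy to check outside the graph. A standalone sketch of the signed, range-given formula (this mirrors the description, not the actual kernel; note that Go's math.Round breaks ties away from zero, so -0.5 quantizes to -64 here rather than the -63 shown above):

```
package main

import (
	"fmt"
	"math"
)

// quantDequant applies the scalar formula from the description above.
// Clamping to [-m, m] is omitted since the sample inputs already lie in range.
func quantDequant(e, m float64, numBits uint) float64 {
	minFixed := -(float64(int64(1)<<(numBits-1)) - 1) // -127 for 8 bits
	maxFixed := float64(int64(1)<<(numBits-1)) - 1    //  127 for 8 bits
	s := (maxFixed - minFixed) / (2 * m)              //  127 for m = 1
	return math.Round(e*s) / s
}

func main() {
	for _, e := range []float64{-1, -0.5, 0, 0.3} {
		fmt.Println(e, "->", quantDequant(e, 1, 8))
	}
	// -1 -> -1, -0.5 -> -64/127 (~-0.5039), 0 -> 0, 0.3 -> 38/127 (~0.2992)
}
```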

Arguments:

input: Tensor to quantize and then dequantize.
input_min: If range_given, this is the min of the range, otherwise this input

will be ignored.

input_max: If range_given, this is the max of the range, otherwise this input

will be ignored.

func QuantizeAndDequantizeV3 Uses

func QuantizeAndDequantizeV3(scope *Scope, input tf.Output, input_min tf.Output, input_max tf.Output, num_bits tf.Output, optional ...QuantizeAndDequantizeV3Attr) (output tf.Output)

Quantizes then dequantizes a tensor.

This is almost identical to QuantizeAndDequantizeV2, except that num_bits is a tensor, so its value can change during training.

func QuantizeDownAndShrinkRange Uses

func QuantizeDownAndShrinkRange(scope *Scope, input tf.Output, input_min tf.Output, input_max tf.Output, out_type tf.DataType) (output tf.Output, output_min tf.Output, output_max tf.Output)

Convert the quantized 'input' tensor into a lower-precision 'output', using the

actual distribution of the values to maximize the usage of the lower bit depth and adjust the output min and max ranges accordingly.

[input_min, input_max] are scalar floats that specify the range for the float interpretation of the 'input' data. For example, if input_min is -1.0f and input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0 value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.

This operator tries to squeeze as much precision as possible into an output with a lower bit depth by calculating the actual min and max values found in the data. For example, maybe that quint16 input has no values lower than 16,384 and none higher than 49,152. That means only half the range is actually needed and all the float interpretations lie between -0.5f and 0.5f, so if we want to compress the data into a quint8 output, we can use that range rather than the theoretical -1.0f to 1.0f suggested by the input min and max.

In practice, this is most useful for taking output from operations like QuantizedMatMul that can produce higher bit-depth outputs than their inputs and may have large potential output ranges, but in practice have a distribution of input values that only uses a small fraction of the possible range. By feeding that output into this operator, we can reduce it from 32 bits down to 8 with minimal loss of accuracy.
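A standalone sketch of the float interpretation described above (illustrative only; the op itself also rescales the quantized values into the new range):

```
package main

import "fmt"

// toFloat gives the float interpretation of a quint16 value q under
// [inputMin, inputMax], per the description above.
func toFloat(q uint16, inputMin, inputMax float32) float32 {
	return inputMin + (float32(q)/65535.0)*(inputMax-inputMin)
}

func main() {
	// The example above: the data actually spans [16384, 49152] in quint16.
	lo, hi := toFloat(16384, -1, 1), toFloat(49152, -1, 1)
	// These become the output_min/output_max of the shrunken quint8 range.
	fmt.Printf("new range: [%f, %f]\n", lo, hi) // approximately [-0.5, 0.5]
}
```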

Arguments:

input_min: The float value that the minimum quantized input value represents.
input_max: The float value that the maximum quantized input value represents.
out_type: The type of the output. Should be a lower bit depth than Tinput.

Returns The float value that the minimum quantized output value represents. The float value that the maximum quantized output value represents.

func QuantizeV2 Uses

func QuantizeV2(scope *Scope, input tf.Output, min_range tf.Output, max_range tf.Output, T tf.DataType, optional ...QuantizeV2Attr) (output tf.Output, output_min tf.Output, output_max tf.Output)

Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.

[min_range, max_range] are scalar floats that specify the range for the 'input' data. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.

In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:

```
out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8, out[i] -= (range(T) + 1) / 2.0
```

here `range(T) = numeric_limits<T>::max() - numeric_limits<T>::min()`

*MIN_COMBINED Mode Example*

Assume the input is type float and has a possible range of [0.0, 6.0] and the output type is quint8 ([0, 255]). The min_range and max_range values should be specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each value of the input by 255/6 and cast to quint8.

If the output type was qint8 ([-128, 127]), the operation will additionally subtract each value by 128 prior to casting, so that the range of values aligns with the range of qint8.
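A standalone sketch of the MIN_COMBINED formula above for 8-bit outputs (rounding is added here for illustration; the exact rounding behavior of the kernel is not specified in this description):

```
package main

import (
	"fmt"
	"math"
)

// minCombined follows the MIN_COMBINED formula above for an 8-bit type:
// range(T) = 255 for both quint8 and qint8; qint8 additionally shifts by 128.
func minCombined(in, minRange, maxRange float64, signed bool) int {
	const rangeT = 255.0 // numeric_limits<T>::max() - numeric_limits<T>::min()
	out := (in - minRange) * rangeT / (maxRange - minRange)
	if signed {
		out -= (rangeT + 1) / 2.0 // shift [0, 255] down to [-128, 127]
	}
	return int(math.Round(out))
}

func main() {
	// The example above: inputs in [0.0, 6.0] scale by 255/6.
	fmt.Println(minCombined(2.0, 0, 6, false)) // 85 (= 2 * 255/6) for quint8
	fmt.Println(minCombined(2.0, 0, 6, true))  // -43 after the qint8 shift
}
```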

If the mode is 'MIN_FIRST', then this approach is used:

```
number_of_steps = 1 << (# of bits in T)
range_adjust = number_of_steps / (number_of_steps - 1)
range = (range_max - range_min) * range_adjust
range_scale = number_of_steps / range
quantized = round(input * range_scale) - round(range_min * range_scale) +
            numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())
```

The biggest difference between this and MIN_COMBINED is that the minimum range is rounded first, before it's subtracted from the rounded value. With MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing and dequantizing will introduce a larger and larger error.

One thing to watch out for is that the operator may choose to adjust the requested minimum and maximum values slightly during the quantization process, so you should always use the output ports as the range for further calculations. For example, if the requested minimum and maximum values are close to equal, they will be separated by a small epsilon value to prevent ill-formed quantized buffers from being created. Otherwise, you can end up with buffers where all the quantized values map to the same float value, which causes problems for operations that have to perform further calculations on them.

Arguments:

min_range: The minimum scalar value possibly produced for the input.
max_range: The maximum scalar value possibly produced for the input.

Returns The quantized data produced from the float input. The actual minimum scalar value used for the output. The actual maximum scalar value used for the output.

func QuantizedAdd Uses

func QuantizedAdd(scope *Scope, x tf.Output, y tf.Output, min_x tf.Output, max_x tf.Output, min_y tf.Output, max_y tf.Output, optional ...QuantizedAddAttr) (z tf.Output, min_z tf.Output, max_z tf.Output)

Returns x + y element-wise, working on quantized buffers.

Arguments:

min_x: The float value that the lowest quantized `x` value represents.
max_x: The float value that the highest quantized `x` value represents.
min_y: The float value that the lowest quantized `y` value represents.
max_y: The float value that the highest quantized `y` value represents.

Returns The float value that the lowest quantized output value represents. The float value that the highest quantized output value represents.

*NOTE*: `QuantizedAdd` supports limited forms of broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func QuantizedAvgPool Uses

func QuantizedAvgPool(scope *Scope, input tf.Output, min_input tf.Output, max_input tf.Output, ksize []int64, strides []int64, padding string) (output tf.Output, min_output tf.Output, max_output tf.Output)

Produces the average pool of the input tensor for quantized types.

Arguments:

input: 4-D with shape `[batch, height, width, channels]`.
min_input: The float value that the lowest quantized input value represents.
max_input: The float value that the highest quantized input value represents.
ksize: The size of the window for each dimension of the input tensor.

The length must be 4 to match the number of dimensions of the input.

strides: The stride of the sliding window for each dimension of the input

tensor. The length must be 4 to match the number of dimensions of the input.

padding: The type of padding algorithm to use.

Returns The float value that the lowest quantized output value represents. The float value that the highest quantized output value represents.

func QuantizedBatchNormWithGlobalNormalization Uses

func QuantizedBatchNormWithGlobalNormalization(scope *Scope, t tf.Output, t_min tf.Output, t_max tf.Output, m tf.Output, m_min tf.Output, m_max tf.Output, v tf.Output, v_min tf.Output, v_max tf.Output, beta tf.Output, beta_min tf.Output, beta_max tf.Output, gamma tf.Output, gamma_min tf.Output, gamma_max tf.Output, out_type tf.DataType, variance_epsilon float32, scale_after_normalization bool) (result tf.Output, result_min tf.Output, result_max tf.Output)

Quantized Batch normalization.

This op is deprecated and will be removed in the future. Prefer `tf.nn.batch_normalization`.

Arguments:

t: A 4D input Tensor.
t_min: The value represented by the lowest quantized input.
t_max: The value represented by the highest quantized input.
m: A 1D mean Tensor with size matching the last dimension of t.

This is the first output from tf.nn.moments, or a saved moving average thereof.

m_min: The value represented by the lowest quantized mean.
m_max: The value represented by the highest quantized mean.
v: A 1D variance Tensor with size matching the last dimension of t.

This is the second output from tf.nn.moments, or a saved moving average thereof.

v_min: The value represented by the lowest quantized variance.
v_max: The value represented by the highest quantized variance.
beta: A 1D beta Tensor with size matching the last dimension of t.

An offset to be added to the normalized tensor.

beta_min: The value represented by the lowest quantized offset.
beta_max: The value represented by the highest quantized offset.
gamma: A 1D gamma Tensor with size matching the last dimension of t.

If "scale_after_normalization" is true, this tensor will be multiplied with the normalized tensor.

gamma_min: The value represented by the lowest quantized gamma.
gamma_max: The value represented by the highest quantized gamma.

variance_epsilon: A small float number to avoid dividing by 0.
scale_after_normalization: A bool indicating whether the resulted tensor

needs to be multiplied with gamma.

func QuantizedBiasAdd Uses

func QuantizedBiasAdd(scope *Scope, input tf.Output, bias tf.Output, min_input tf.Output, max_input tf.Output, min_bias tf.Output, max_bias tf.Output, out_type tf.DataType) (output tf.Output, min_out tf.Output, max_out tf.Output)

Adds Tensor 'bias' to Tensor 'input' for Quantized types.

Broadcasts the values of bias on dimensions 0..N-2 of 'input'.

Arguments:

bias: A 1D bias Tensor with size matching the last dimension of 'input'.
min_input: The float value that the lowest quantized input value represents.
max_input: The float value that the highest quantized input value represents.
min_bias: The float value that the lowest quantized bias value represents.
max_bias: The float value that the highest quantized bias value represents.

Returns The float value that the lowest quantized output value represents. The float value that the highest quantized output value represents.

func QuantizedConcat Uses

func QuantizedConcat(scope *Scope, concat_dim tf.Output, values []tf.Output, input_mins []tf.Output, input_maxes []tf.Output) (output tf.Output, output_min tf.Output, output_max tf.Output)

Concatenates quantized tensors along one dimension.

Arguments:

concat_dim: 0-D.  The dimension along which to concatenate.  Must be in the

range [0, rank(values)).

values: The `N` Tensors to concatenate. Their ranks and types must match,

and their sizes must match in all dimensions except `concat_dim`.

input_mins: The minimum scalar values for each of the input tensors.
input_maxes: The maximum scalar values for each of the input tensors.

Returns A `Tensor` with the concatenation of values stacked along the `concat_dim` dimension. This tensor's shape matches that of `values` except in `concat_dim` where it has the sum of the sizes. The float value that the minimum quantized output value represents. The float value that the maximum quantized output value represents.

func QuantizedConv2D Uses

func QuantizedConv2D(scope *Scope, input tf.Output, filter tf.Output, min_input tf.Output, max_input tf.Output, min_filter tf.Output, max_filter tf.Output, strides []int64, padding string, optional ...QuantizedConv2DAttr) (output tf.Output, min_output tf.Output, max_output tf.Output)

Computes a 2D convolution given quantized 4D input and filter tensors.

The inputs are quantized tensors where the lowest quantized value represents the real number given by the associated minimum, and the highest represents the associated maximum. This means that you can only interpret the quantized output in the same way, by taking the returned minimum and maximum values into account.

Arguments:

filter: filter's input_depth dimension must match input's depth dimensions.
min_input: The float value that the lowest quantized input value represents.
max_input: The float value that the highest quantized input value represents.
min_filter: The float value that the lowest quantized filter value represents.
max_filter: The float value that the highest quantized filter value represents.
strides: The stride of the sliding window for each dimension of the input

tensor.

padding: The type of padding algorithm to use.

Returns The float value that the lowest quantized output value represents. The float value that the highest quantized output value represents.

func QuantizedInstanceNorm Uses

func QuantizedInstanceNorm(scope *Scope, x tf.Output, x_min tf.Output, x_max tf.Output, optional ...QuantizedInstanceNormAttr) (y tf.Output, y_min tf.Output, y_max tf.Output)

Quantized Instance normalization.

Arguments:

x: A 4D input Tensor.
x_min: The value represented by the lowest quantized input.
x_max: The value represented by the highest quantized input.

Returns A 4D Tensor. The value represented by the lowest quantized output. The value represented by the highest quantized output.

func QuantizedMatMul Uses

func QuantizedMatMul(scope *Scope, a tf.Output, b tf.Output, min_a tf.Output, max_a tf.Output, min_b tf.Output, max_b tf.Output, optional ...QuantizedMatMulAttr) (out tf.Output, min_out tf.Output, max_out tf.Output)

Perform a quantized matrix multiplication of `a` by the matrix `b`.

The inputs must be two-dimensional matrices and the inner dimension of `a` (after being transposed if `transpose_a` is non-zero) must match the outer dimension of `b` (after being transposed if `transpose_b` is non-zero).

Arguments:

a: Must be a two-dimensional tensor.
b: Must be a two-dimensional tensor.
min_a: The float value that the lowest quantized `a` value represents.
max_a: The float value that the highest quantized `a` value represents.
min_b: The float value that the lowest quantized `b` value represents.
max_b: The float value that the highest quantized `b` value represents.

Returns The float value that the lowest quantized output value represents. The float value that the highest quantized output value represents.

func QuantizedMaxPool Uses

func QuantizedMaxPool(scope *Scope, input tf.Output, min_input tf.Output, max_input tf.Output, ksize []int64, strides []int64, padding string) (output tf.Output, min_output tf.Output, max_output tf.Output)

Produces the max pool of the input tensor for quantized types.

Arguments:

input: The 4D (batch x rows x cols x depth) Tensor to MaxReduce over.
min_input: The float value that the lowest quantized input value represents.
max_input: The float value that the highest quantized input value represents.
ksize: The size of the window for each dimension of the input tensor.

The length must be 4 to match the number of dimensions of the input.

strides: The stride of the sliding window for each dimension of the input

tensor. The length must be 4 to match the number of dimensions of the input.

padding: The type of padding algorithm to use.

Returns The float value that the lowest quantized output value represents. The float value that the highest quantized output value represents.

func QuantizedMul Uses

func QuantizedMul(scope *Scope, x tf.Output, y tf.Output, min_x tf.Output, max_x tf.Output, min_y tf.Output, max_y tf.Output, optional ...QuantizedMulAttr) (z tf.Output, min_z tf.Output, max_z tf.Output)

Returns x * y element-wise, working on quantized buffers.

Arguments:

min_x: The float value that the lowest quantized `x` value represents.
max_x: The float value that the highest quantized `x` value represents.
min_y: The float value that the lowest quantized `y` value represents.
max_y: The float value that the highest quantized `y` value represents.

Returns The float value that the lowest quantized output value represents. The float value that the highest quantized output value represents.

*NOTE*: `QuantizedMul` supports limited forms of broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func QuantizedRelu Uses

func QuantizedRelu(scope *Scope, features tf.Output, min_features tf.Output, max_features tf.Output, optional ...QuantizedReluAttr) (activations tf.Output, min_activations tf.Output, max_activations tf.Output)

Computes Quantized Rectified Linear: `max(features, 0)`

Arguments:

min_features: The float value that the lowest quantized value represents.
max_features: The float value that the highest quantized value represents.

Returns Has the same output shape as "features". The float value that the lowest quantized value represents. The float value that the highest quantized value represents.

func QuantizedRelu6 Uses

func QuantizedRelu6(scope *Scope, features tf.Output, min_features tf.Output, max_features tf.Output, optional ...QuantizedRelu6Attr) (activations tf.Output, min_activations tf.Output, max_activations tf.Output)

Computes Quantized Rectified Linear 6: `min(max(features, 0), 6)`

Arguments:

min_features: The float value that the lowest quantized value represents.
max_features: The float value that the highest quantized value represents.

Returns Has the same output shape as "features". The float value that the lowest quantized value represents. The float value that the highest quantized value represents.

func QuantizedReluX Uses

func QuantizedReluX(scope *Scope, features tf.Output, max_value tf.Output, min_features tf.Output, max_features tf.Output, optional ...QuantizedReluXAttr) (activations tf.Output, min_activations tf.Output, max_activations tf.Output)

Computes Quantized Rectified Linear X: `min(max(features, 0), max_value)`

Arguments:

min_features: The float value that the lowest quantized value represents.
max_features: The float value that the highest quantized value represents.

Returns Has the same output shape as "features". The float value that the lowest quantized value represents. The float value that the highest quantized value represents.

func QuantizedReshape Uses

func QuantizedReshape(scope *Scope, tensor tf.Output, shape tf.Output, input_min tf.Output, input_max tf.Output) (output tf.Output, output_min tf.Output, output_max tf.Output)

Reshapes a quantized tensor as per the Reshape op.


Arguments:

shape: Defines the shape of the output tensor.
input_min: The minimum value of the input.
input_max: The maximum value of the input.

Returns This value is copied from input_min. This value is copied from input_max.

func QuantizedResizeBilinear Uses

func QuantizedResizeBilinear(scope *Scope, images tf.Output, size tf.Output, min tf.Output, max tf.Output, optional ...QuantizedResizeBilinearAttr) (resized_images tf.Output, out_min tf.Output, out_max tf.Output)

Resize quantized `images` to `size` using quantized bilinear interpolation.

Input images and output images must be quantized types.

Arguments:

images: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`.  The

new size for the images.

Returns 4-D with shape `[batch, new_height, new_width, channels]`.

func QueueCloseV2 Uses

func QueueCloseV2(scope *Scope, handle tf.Output, optional ...QueueCloseV2Attr) (o *tf.Operation)

Closes the given queue.

This operation signals that no more elements will be enqueued in the given queue. Subsequent Enqueue(Many) operations will fail. Subsequent Dequeue(Many) operations will continue to succeed if sufficient elements remain in the queue. Subsequent Dequeue(Many) operations that would block will fail immediately.

Arguments:

handle: The handle to a queue.

Returns the created operation.

func QueueDequeueManyV2 Uses

func QueueDequeueManyV2(scope *Scope, handle tf.Output, n tf.Output, component_types []tf.DataType, optional ...QueueDequeueManyV2Attr) (components []tf.Output)

Dequeues `n` tuples of one or more tensors from the given queue.

If the queue is closed and there are fewer than `n` elements, then an OutOfRange error is returned.

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size `n` in the 0th dimension.

This operation has `k` outputs, where `k` is the number of components in the tuples stored in the given queue, and output `i` is the ith component of the dequeued tuple.

N.B. If the queue is empty, this operation will block until `n` elements have been dequeued (or 'timeout_ms' elapses, if specified).

Arguments:

handle: The handle to a queue.
n: The number of tuples to dequeue.
component_types: The type of each component in a tuple.

Returns One or more tensors that were dequeued as a tuple.
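For illustration, a minimal sketch of composing the queue ops in this section (the component types, placeholder, and batch size are made up for the example):

```
package main

import (
	tf "github.com/tensorflow/tensorflow/tensorflow/go"
	"github.com/tensorflow/tensorflow/tensorflow/go/op"
)

func main() {
	s := op.NewScope()
	types := []tf.DataType{tf.Float}
	// A queue of single-component (float) elements in randomized order.
	q := op.RandomShuffleQueueV2(s, types)
	// Enqueue a batch: each row along the 0th dimension becomes one element.
	batch := op.Placeholder(s, tf.Float)
	enq := op.QueueEnqueueManyV2(s, q, []tf.Output{batch})
	// Dequeue 4 elements, concatenated along the 0th dimension.
	four := op.QueueDequeueManyV2(s, q, op.Const(s, int32(4)), types)
	_, _ = enq, four
	if s.Err() != nil {
		panic(s.Err())
	}
}
```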

func QueueDequeueUpToV2 Uses

func QueueDequeueUpToV2(scope *Scope, handle tf.Output, n tf.Output, component_types []tf.DataType, optional ...QueueDequeueUpToV2Attr) (components []tf.Output)

Dequeues `n` tuples of one or more tensors from the given queue.

This operation is not supported by all queues. If a queue does not support DequeueUpTo, then an Unimplemented error is returned.

If the queue is closed and there are more than 0 but fewer than `n` elements remaining, then instead of returning an OutOfRange error like QueueDequeueMany, fewer than `n` elements are returned immediately. If the queue is closed and there are 0 elements left in the queue, then an OutOfRange error is returned just like in QueueDequeueMany. Otherwise the behavior is identical to QueueDequeueMany:

This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size n in the 0th dimension.

This operation has `k` outputs, where `k` is the number of components in the tuples stored in the given queue, and output `i` is the ith component of the dequeued tuple.

Arguments:

handle: The handle to a queue.
n: The number of tuples to dequeue.
component_types: The type of each component in a tuple.

Returns One or more tensors that were dequeued as a tuple.

func QueueDequeueV2 Uses

func QueueDequeueV2(scope *Scope, handle tf.Output, component_types []tf.DataType, optional ...QueueDequeueV2Attr) (components []tf.Output)

Dequeues a tuple of one or more tensors from the given queue.

This operation has k outputs, where k is the number of components in the tuples stored in the given queue, and output i is the ith component of the dequeued tuple.

N.B. If the queue is empty, this operation will block until an element has been dequeued (or 'timeout_ms' elapses, if specified).

Arguments:

handle: The handle to a queue.
component_types: The type of each component in a tuple.

Returns One or more tensors that were dequeued as a tuple.

func QueueEnqueueManyV2 Uses

func QueueEnqueueManyV2(scope *Scope, handle tf.Output, components []tf.Output, optional ...QueueEnqueueManyV2Attr) (o *tf.Operation)

Enqueues zero or more tuples of one or more tensors in the given queue.

This operation slices each component tensor along the 0th dimension to make multiple queue elements. All of the tuple components must have the same size in the 0th dimension.

The components input has k elements, which correspond to the components of tuples stored in the given queue.

N.B. If the queue is full, this operation will block until the given elements have been enqueued (or 'timeout_ms' elapses, if specified).

Arguments:

handle: The handle to a queue.
components: One or more tensors from which the enqueued tensors should

be taken.

Returns the created operation.

func QueueEnqueueV2 Uses

func QueueEnqueueV2(scope *Scope, handle tf.Output, components []tf.Output, optional ...QueueEnqueueV2Attr) (o *tf.Operation)

Enqueues a tuple of one or more tensors in the given queue.

The components input has k elements, which correspond to the components of tuples stored in the given queue.

N.B. If the queue is full, this operation will block until the given element has been enqueued (or 'timeout_ms' elapses, if specified).

Arguments:

handle: The handle to a queue.
components: One or more tensors from which the enqueued tensors should be taken.

Returns the created operation.

func QueueIsClosedV2 Uses

func QueueIsClosedV2(scope *Scope, handle tf.Output) (is_closed tf.Output)

Returns true if queue is closed.

This operation returns true if the queue is closed and false if the queue is open.

Arguments:

handle: The handle to a queue.

func QueueSizeV2 Uses

func QueueSizeV2(scope *Scope, handle tf.Output) (size tf.Output)

Computes the number of elements in the given queue.

Arguments:

handle: The handle to a queue.

Returns The number of elements in the given queue.

func RFFT Uses

func RFFT(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

Real-valued fast Fourier transform.

Computes the 1-dimensional discrete Fourier transform of a real-valued signal over the inner-most dimension of `input`.

Since the DFT of a real signal is Hermitian-symmetric, `RFFT` only returns the `fft_length / 2 + 1` unique components of the FFT: the zero-frequency term, followed by the `fft_length / 2` positive-frequency terms.

Along the axis `RFFT` is computed on, if `fft_length` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A float32 tensor.
fft_length: An int32 tensor of shape [1]. The FFT length.

Returns A complex64 tensor of the same rank as `input`. The inner-most

dimension of `input` is replaced with the `fft_length / 2 + 1` unique
frequency components of its 1D Fourier transform.

@compatibility(numpy) Equivalent to np.fft.rfft @end_compatibility
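For illustration, a minimal graph-construction sketch (the FFT length of 8 is made up for the example):

```
package main

import (
	tf "github.com/tensorflow/tensorflow/tensorflow/go"
	"github.com/tensorflow/tensorflow/tensorflow/go/op"
)

func main() {
	s := op.NewScope()
	signal := op.Placeholder(s, tf.Float) // real-valued input signal
	fftLen := op.Const(s, []int32{8})     // FFT length of 8
	// The output is complex64; its inner-most dimension has
	// 8/2 + 1 = 5 unique frequency components, as described above.
	spectrum := op.RFFT(s, signal, fftLen)
	_ = spectrum
	if s.Err() != nil {
		panic(s.Err())
	}
}
```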

func RFFT2D Uses

func RFFT2D(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

2D real-valued fast Fourier transform.

Computes the 2-dimensional discrete Fourier transform of a real-valued signal over the inner-most 2 dimensions of `input`.

Since the DFT of a real signal is Hermitian-symmetric, `RFFT2D` only returns the `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension of `output`: the zero-frequency term, followed by the `fft_length / 2` positive-frequency terms.

Along each axis `RFFT2D` is computed on, if `fft_length` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A float32 tensor.
fft_length: An int32 tensor of shape [2]. The FFT length for each dimension.

Returns A complex64 tensor of the same rank as `input`. The inner-most 2

dimensions of `input` are replaced with their 2D Fourier transform. The
inner-most dimension contains `fft_length / 2 + 1` unique frequency
components.

@compatibility(numpy) Equivalent to np.fft.rfft2 @end_compatibility

func RFFT3D Uses

func RFFT3D(scope *Scope, input tf.Output, fft_length tf.Output) (output tf.Output)

3D real-valued fast Fourier transform.

Computes the 3-dimensional discrete Fourier transform of a real-valued signal over the inner-most 3 dimensions of `input`.

Since the DFT of a real signal is Hermitian-symmetric, `RFFT3D` only returns the `fft_length / 2 + 1` unique components of the FFT for the inner-most dimension of `output`: the zero-frequency term, followed by the `fft_length / 2` positive-frequency terms.

Along each axis `RFFT3D` is computed on, if `fft_length` is smaller than the corresponding dimension of `input`, the dimension is cropped. If it is larger, the dimension is padded with zeros.

Arguments:

input: A float32 tensor.
fft_length: An int32 tensor of shape [3]. The FFT length for each dimension.

Returns A complex64 tensor of the same rank as `input`. The inner-most 3

dimensions of `input` are replaced with their 3D Fourier transform. The
inner-most dimension contains `fft_length / 2 + 1` unique frequency
components.

@compatibility(numpy) Equivalent to np.fft.rfftn with 3 dimensions. @end_compatibility

func RGBToHSV Uses

func RGBToHSV(scope *Scope, images tf.Output) (output tf.Output)

Converts one or more images from RGB to HSV.

Outputs a tensor of the same shape as the `images` tensor, containing the HSV value of the pixels. The output is only well defined if the values in `images` are in `[0, 1]`.

`output[..., 0]` contains hue, `output[..., 1]` contains saturation, and `output[..., 2]` contains value. All HSV values are in `[0,1]`. A hue of 0 corresponds to pure red, hue 1/3 is pure green, and 2/3 is pure blue.

Arguments:

images: 1-D or higher rank. RGB data to convert. Last dimension must be size 3.

Returns `images` converted to HSV.

func RandomCrop Uses

func RandomCrop(scope *Scope, image tf.Output, size tf.Output, optional ...RandomCropAttr) (output tf.Output)

Randomly crop `image`.

DEPRECATED at GraphDef version 8: Random crop is now pure Python

`size` is a 1-D int64 tensor with 2 elements representing the crop height and width. The values must be non-negative.

This Op picks a random location in `image` and crops a `height` by `width` rectangle from that location. The random location is picked so the cropped area will fit inside the original image.

Arguments:

image: 3-D of shape `[height, width, channels]`.
size: 1-D of length 2 containing: `crop_height`, `crop_width`.

Returns 3-D of shape `[crop_height, crop_width, channels]`.

func RandomGamma Uses

func RandomGamma(scope *Scope, shape tf.Output, alpha tf.Output, optional ...RandomGammaAttr) (output tf.Output)

Outputs random values from the Gamma distribution(s) described by alpha.

This op uses the algorithm by Marsaglia et al. to acquire samples via transformation-rejection from pairs of uniform and normal random variables. See http://dl.acm.org/citation.cfm?id=358414

Arguments:

shape: 1-D integer tensor. Shape of independent samples to draw from each

distribution described by the shape parameters given in alpha.

alpha: A tensor in which each scalar is a "shape" parameter describing the

associated gamma distribution.

Returns A tensor with shape `shape + shape(alpha)`. Each slice `[:, ..., :, i0, i1, ...iN]` contains the samples drawn for `alpha[i0, i1, ...iN]`. The dtype of the output matches the dtype of alpha.

func RandomPoisson Uses

func RandomPoisson(scope *Scope, shape tf.Output, rate tf.Output, optional ...RandomPoissonAttr) (output tf.Output)

Outputs random values from the Poisson distribution(s) described by rate.

This op uses two algorithms, depending on rate. If rate >= 10, then the algorithm by Hormann is used to acquire samples via transformation-rejection. See http://www.sciencedirect.com/science/article/pii/0167668793909974.

Otherwise, Knuth's algorithm is used to acquire samples via multiplying uniform random variables. See Donald E. Knuth (1969). Seminumerical Algorithms. The Art of Computer Programming, Volume 2. Addison Wesley

Arguments:

shape: 1-D integer tensor. Shape of independent samples to draw from each

distribution described by the shape parameters given in rate.

rate: A tensor in which each scalar is a "rate" parameter describing the

associated poisson distribution.

Returns A tensor with shape `shape + shape(rate)`. Each slice `[:, ..., :, i0, i1, ...iN]` contains the samples drawn for `rate[i0, i1, ...iN]`. The dtype of the output matches the dtype of rate.

func RandomShuffle Uses

func RandomShuffle(scope *Scope, value tf.Output, optional ...RandomShuffleAttr) (output tf.Output)

Randomly shuffles a tensor along its first dimension.

The tensor is shuffled along dimension 0, such that each `value[j]` is mapped
to one and only one `output[i]`. For example, a mapping that might occur for a
3x2 tensor is:

```
[[1, 2],       [[5, 6],
 [3, 4],  ==>   [1, 2],
 [5, 6]]        [3, 4]]
```

Arguments:

value: The tensor to be shuffled.

Returns A tensor of same shape and type as `value`, shuffled along its first dimension.

func RandomShuffleQueueV2 Uses

func RandomShuffleQueueV2(scope *Scope, component_types []tf.DataType, optional ...RandomShuffleQueueV2Attr) (handle tf.Output)

A queue that randomizes the order of elements.

Arguments:

component_types: The type of each component in a value.

Returns The handle to the queue.

func RandomStandardNormal Uses

func RandomStandardNormal(scope *Scope, shape tf.Output, dtype tf.DataType, optional ...RandomStandardNormalAttr) (output tf.Output)

Outputs random values from a normal distribution.

The generated values will have mean 0 and standard deviation 1.

Arguments:

shape: The shape of the output tensor.
dtype: The type of the output.

Returns A tensor of the specified shape filled with random normal values.

func RandomUniform Uses

func RandomUniform(scope *Scope, shape tf.Output, dtype tf.DataType, optional ...RandomUniformAttr) (output tf.Output)

Outputs random values from a uniform distribution.

The generated values follow a uniform distribution in the range `[0, 1)`. The lower bound 0 is included in the range, while the upper bound 1 is excluded.

Arguments:

shape: The shape of the output tensor.
dtype: The type of the output.

Returns A tensor of the specified shape filled with uniform random values.

func RandomUniformInt Uses

func RandomUniformInt(scope *Scope, shape tf.Output, minval tf.Output, maxval tf.Output, optional ...RandomUniformIntAttr) (output tf.Output)

Outputs random integers from a uniform distribution.

The generated values are uniform integers in the range `[minval, maxval)`. The lower bound `minval` is included in the range, while the upper bound `maxval` is excluded.

The random integers are slightly biased unless `maxval - minval` is an exact power of two. The bias is small for values of `maxval - minval` significantly smaller than the range of the output (either `2^32` or `2^64`).

Arguments:

shape: The shape of the output tensor.
minval: 0-D.  Inclusive lower bound on the generated integers.
maxval: 0-D.  Exclusive upper bound on the generated integers.

Returns A tensor of the specified shape filled with uniform random integers.
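For illustration, a minimal graph-construction sketch (the bounds and sample count are made up for the example):

```
package main

import "github.com/tensorflow/tensorflow/tensorflow/go/op"

func main() {
	s := op.NewScope()
	shape := op.Const(s, []int32{10}) // ten samples
	minval := op.Const(s, int64(0))   // inclusive lower bound
	maxval := op.Const(s, int64(6))   // exclusive upper bound
	// maxval - minval = 6 is not a power of two, so per the note
	// above the results carry a (tiny) bias.
	dice := op.RandomUniformInt(s, shape, minval, maxval)
	_ = dice
	if s.Err() != nil {
		panic(s.Err())
	}
}
```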

func Range Uses

func Range(scope *Scope, start tf.Output, limit tf.Output, delta tf.Output) (output tf.Output)

Creates a sequence of numbers.

This operation creates a sequence of numbers that begins at `start` and extends by increments of `delta` up to but not including `limit`.

For example:

```
# 'start' is 3
# 'limit' is 18
# 'delta' is 3
tf.range(start, limit, delta) ==> [3, 6, 9, 12, 15]
```

Arguments:

start: 0-D (scalar). First entry in the sequence.
limit: 0-D (scalar). Upper limit of sequence, exclusive.
delta: 0-D (scalar). Optional. Default is 1. Number that increments `start`.

Returns 1-D.
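For illustration, the same sequence built through this package:

```
package main

import "github.com/tensorflow/tensorflow/tensorflow/go/op"

func main() {
	s := op.NewScope()
	// Mirrors the example above: start 3, limit 18, delta 3.
	seq := op.Range(s,
		op.Const(s, int32(3)),
		op.Const(s, int32(18)),
		op.Const(s, int32(3))) // evaluates to [3, 6, 9, 12, 15]
	_ = seq
	if s.Err() != nil {
		panic(s.Err())
	}
}
```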

func RangeDataset Uses

func RangeDataset(scope *Scope, start tf.Output, stop tf.Output, step tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset with a range of values. Corresponds to Python's xrange.

Arguments:

start: corresponds to start in Python's xrange().
stop: corresponds to stop in Python's xrange().
step: corresponds to step in Python's xrange().

func Rank Uses

func Rank(scope *Scope, input tf.Output) (output tf.Output)

Returns the rank of a tensor.

This operation returns an integer representing the rank of `input`.

For example:

```
# 't' is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor 't' is [2, 2, 3]
rank(t) ==> 3
```

**Note**: The rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as "order", "degree", or "ndims."

func ReadFile Uses

func ReadFile(scope *Scope, filename tf.Output) (contents tf.Output)

Reads and outputs the entire contents of the input filename.

func ReadVariableOp Uses

func ReadVariableOp(scope *Scope, resource tf.Output, dtype tf.DataType) (value tf.Output)

Reads the value of a variable.

The tensor returned by this operation is immutable.

The value returned by this operation is guaranteed to be influenced by all the writes on which this operation depends directly or indirectly, and to not be influenced by any of the writes which depend directly or indirectly on this operation.

Arguments:

resource: handle to the resource in which to store the variable.
dtype: the dtype of the value.

func ReaderNumRecordsProducedV2 Uses

func ReaderNumRecordsProducedV2(scope *Scope, reader_handle tf.Output) (records_produced tf.Output)

Returns the number of records this Reader has produced.

This is the same as the number of ReaderRead executions that have succeeded.

Arguments:

reader_handle: Handle to a Reader.

func ReaderNumWorkUnitsCompletedV2 Uses

func ReaderNumWorkUnitsCompletedV2(scope *Scope, reader_handle tf.Output) (units_completed tf.Output)

Returns the number of work units this Reader has finished processing.

Arguments:

reader_handle: Handle to a Reader.

func ReaderReadUpToV2 Uses

func ReaderReadUpToV2(scope *Scope, reader_handle tf.Output, queue_handle tf.Output, num_records tf.Output) (keys tf.Output, values tf.Output)

Returns up to `num_records` (key, value) pairs produced by a Reader.

Will dequeue from the input queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file). It may return fewer than `num_records` even before the last batch.

Arguments:

reader_handle: Handle to a `Reader`.
queue_handle: Handle to a `Queue`, with string work items.
num_records: number of records to read from `Reader`.

Returns A 1-D tensor (the keys). A 1-D tensor (the values).

func ReaderReadV2 Uses

func ReaderReadV2(scope *Scope, reader_handle tf.Output, queue_handle tf.Output) (key tf.Output, value tf.Output)

Returns the next record (key, value pair) produced by a Reader.

Will dequeue from the input queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).

Arguments:

reader_handle: Handle to a Reader.
queue_handle: Handle to a Queue, with string work items.

Returns A scalar (the key). A scalar (the value).

func ReaderResetV2 Uses

func ReaderResetV2(scope *Scope, reader_handle tf.Output) (o *tf.Operation)

Restore a Reader to its initial clean state.

Arguments:

reader_handle: Handle to a Reader.

Returns the created operation.

func ReaderRestoreStateV2 Uses

func ReaderRestoreStateV2(scope *Scope, reader_handle tf.Output, state tf.Output) (o *tf.Operation)

Restore a reader to a previously saved state.

Not all Readers support being restored, so this can produce an Unimplemented error.

Arguments:

reader_handle: Handle to a Reader.
state: Result of a ReaderSerializeState of a Reader with type

matching reader_handle.

Returns the created operation.

func ReaderSerializeStateV2 Uses

func ReaderSerializeStateV2(scope *Scope, reader_handle tf.Output) (state tf.Output)

Produce a string tensor that encodes the state of a Reader.

Not all Readers support being serialized, so this can produce an Unimplemented error.

Arguments:

reader_handle: Handle to a Reader.

func Real Uses

func Real(scope *Scope, input tf.Output, optional ...RealAttr) (output tf.Output)

Returns the real part of a complex number.

Given a tensor `input` of complex numbers, this operation returns a tensor of type `float` that is the real part of each element in `input`. All elements in `input` must be complex numbers of the form \\(a + bj\\), where *a* is the real part returned by this operation and *b* is the imaginary part.

For example:

```
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.real(input) ==> [-2.25, 3.25]
```

func RealDiv Uses

func RealDiv(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Returns x / y element-wise for real types.

If `x` and `y` are reals, this will return the floating-point division.

*NOTE*: `RealDiv` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

func Reciprocal Uses

func Reciprocal(scope *Scope, x tf.Output) (y tf.Output)

Computes the reciprocal of x element-wise.

I.e., \\(y = 1 / x\\).

func ReciprocalGrad Uses

func ReciprocalGrad(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Computes the gradient for the inverse of `x` wrt its input.

Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy` is the corresponding input gradient.

func RecordInput Uses

func RecordInput(scope *Scope, file_pattern string, optional ...RecordInputAttr) (records tf.Output)

Emits randomized records.

Arguments:

file_pattern: Glob pattern for the data files.

Returns A tensor of shape [batch_size].

func ReduceJoin Uses

func ReduceJoin(scope *Scope, inputs tf.Output, reduction_indices tf.Output, optional ...ReduceJoinAttr) (output tf.Output)

Joins a string Tensor across the given dimensions.

Computes the string join across dimensions in the given string Tensor of shape `[d_0, d_1, ..., d_n-1]`. Returns a new Tensor created by joining the input strings with the given separator (default: empty string). Negative indices are counted backwards from the end, with `-1` being equivalent to `n - 1`.

For example:

```python
# tensor `a` is [["a", "b"], ["c", "d"]]
tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, -2) = tf.reduce_join(a, 0) ==> ["ac", "bd"]
tf.reduce_join(a, -1) = tf.reduce_join(a, 1) ==> ["ab", "cd"]
tf.reduce_join(a, 0, keep_dims=True) ==> [["ac", "bd"]]
tf.reduce_join(a, 1, keep_dims=True) ==> [["ab"], ["cd"]]
tf.reduce_join(a, 0, separator=".") ==> ["a.c", "b.d"]
tf.reduce_join(a, [0, 1]) ==> ["acbd"]
tf.reduce_join(a, [1, 0]) ==> ["abcd"]
tf.reduce_join(a, []) ==> ["abcd"]
```

Arguments:

inputs: The input to be joined.  All reduced indices must have non-zero size.
reduction_indices: The dimensions to reduce over.  Dimensions are reduced in the

order specified. Omitting `reduction_indices` is equivalent to passing `[n-1, n-2, ..., 0]`. Negative indices from `-n` to `-1` are supported.

Returns Has shape equal to that of the input with reduced dimensions removed or set to `1` depending on `keep_dims`.

func Relu Uses

func Relu(scope *Scope, features tf.Output) (activations tf.Output)

Computes rectified linear: `max(features, 0)`.

func Relu6 Uses

func Relu6(scope *Scope, features tf.Output) (activations tf.Output)

Computes rectified linear 6: `min(max(features, 0), 6)`.

func Relu6Grad Uses

func Relu6Grad(scope *Scope, gradients tf.Output, features tf.Output) (backprops tf.Output)

Computes rectified linear 6 gradients for a Relu6 operation.

Arguments:

gradients: The backpropagated gradients to the corresponding Relu6 operation.
features: The features passed as input to the corresponding Relu6 operation.

Returns The gradients: `gradients * (features > 0) * (features < 6)`.

func ReluGrad Uses

func ReluGrad(scope *Scope, gradients tf.Output, features tf.Output) (backprops tf.Output)

Computes rectified linear gradients for a Relu operation.

Arguments:

gradients: The backpropagated gradients to the corresponding Relu operation.
features: The features passed as input to the corresponding Relu operation, OR

the outputs of that operation (both work equivalently).

Returns `gradients * (features > 0)`.

func RemoteFusedGraphExecute Uses

func RemoteFusedGraphExecute(scope *Scope, inputs []tf.Output, Toutputs []tf.DataType, serialized_remote_fused_graph_execute_info string) (outputs []tf.Output)

Execute a sub graph on a remote processor.

The graph specifications (such as the graph itself, input tensors, and output names) are stored as a serialized protocol buffer of RemoteFusedGraphExecuteInfo in serialized_remote_fused_graph_execute_info. The specifications will be passed to a dedicated registered remote fused graph executor. The executor will send the graph specifications to a remote processor and execute that graph. The execution results will be passed to consumer nodes as outputs of this node.

Arguments:

inputs: Arbitrary number of tensors with arbitrary data types

serialized_remote_fused_graph_execute_info: Serialized protocol buffer

of RemoteFusedGraphExecuteInfo which contains graph specifications.

Returns Arbitrary number of tensors with arbitrary data types

func RepeatDataset Uses

func RepeatDataset(scope *Scope, input_dataset tf.Output, count tf.Output, output_types []tf.DataType, output_shapes []tf.Shape) (handle tf.Output)

Creates a dataset that emits the outputs of `input_dataset` `count` times.

Arguments:

count: A scalar representing the number of times that `input_dataset` should

be repeated. A value of `-1` indicates that it should be repeated infinitely.

func RequantizationRange Uses

func RequantizationRange(scope *Scope, input tf.Output, input_min tf.Output, input_max tf.Output) (output_min tf.Output, output_max tf.Output)

Given a quantized tensor described by (input, input_min, input_max), outputs a

range that covers the actual values present in that tensor. This op is typically used to produce the requested_output_min and requested_output_max for Requantize.

Arguments:

input_min: The float value that the minimum quantized input value represents.
input_max: The float value that the maximum quantized input value represents.

Returns The computed min output. The computed max output.

func Requantize Uses

func Requantize(scope *Scope, input tf.Output, input_min tf.Output, input_max tf.Output, requested_output_min tf.Output, requested_output_max tf.Output, out_type tf.DataType) (output tf.Output, output_min tf.Output, output_max tf.Output)

Convert the quantized 'input' tensor into a lower-precision 'output', using the

output range specified with 'requested_output_min' and 'requested_output_max'.

[input_min, input_max] are scalar floats that specify the range for the float interpretation of the 'input' data. For example, if input_min is -1.0f and input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0 value in the 16-bit data should be interpreted as -1.0f, and a 65535 means 1.0f.

Arguments:

input_min: The float value that the minimum quantized input value represents.
input_max: The float value that the maximum quantized input value represents.
requested_output_min: The float value that the minimum quantized output value represents.
requested_output_max: The float value that the maximum quantized output value represents.
out_type: The type of the output. Should be a lower bit depth than Tinput.

Returns The requested_output_min value is copied into this output. The requested_output_max value is copied into this output.

func Reshape Uses

func Reshape(scope *Scope, tensor tf.Output, shape tf.Output) (output tf.Output)

Reshapes a tensor.

Given `tensor`, this operation returns a tensor that has the same values as `tensor` with shape `shape`.

If one component of `shape` is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a `shape` of `[-1]` flattens into 1-D. At most one component of `shape` can be -1.

If `shape` is 1-D or higher, then the operation returns a tensor with shape `shape` filled with the values of `tensor`. In this case, the number of elements implied by `shape` must be the same as the number of elements in `tensor`.

For example:

```
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor 't' has shape [9]
reshape(t, [3, 3]) ==> [[1, 2, 3],
                        [4, 5, 6],
                        [7, 8, 9]]

# tensor 't' is [[[1, 1], [2, 2]],
#                [[3, 3], [4, 4]]]
# tensor 't' has shape [2, 2, 2]
reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
                        [3, 3, 4, 4]]

# tensor 't' is [[[1, 1, 1],
#                 [2, 2, 2]],
#                [[3, 3, 3],
#                 [4, 4, 4]],
#                [[5, 5, 5],
#                 [6, 6, 6]]]
# tensor 't' has shape [3, 2, 3]
# pass '[-1]' to flatten 't'
reshape(t, [-1]) ==> [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]

# -1 can also be used to infer the shape

# -1 is inferred to be 9:
reshape(t, [2, -1]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]

# -1 is inferred to be 2:
reshape(t, [-1, 9]) ==> [[1, 1, 1, 2, 2, 2, 3, 3, 3],
                         [4, 4, 4, 5, 5, 5, 6, 6, 6]]

# -1 is inferred to be 3:
reshape(t, [2, -1, 3]) ==> [[[1, 1, 1],
                             [2, 2, 2],
                             [3, 3, 3]],
                            [[4, 4, 4],
                             [5, 5, 5],
                             [6, 6, 6]]]

# tensor 't' is [7]
# shape `[]` reshapes to a scalar
reshape(t, []) ==> 7
```
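The -1 inference rule above is simple enough to state directly; a hypothetical helper (not part of this package) that mirrors it:

```
package main

import "fmt"

// inferDim resolves a single -1 entry in shape given the total element
// count, per the Reshape rule above; ok is false if the sizes mismatch.
func inferDim(shape []int, total int) (out []int, ok bool) {
	known, unknown := 1, -1
	for i, d := range shape {
		if d == -1 {
			if unknown >= 0 {
				return nil, false // at most one -1 allowed
			}
			unknown = i
		} else {
			known *= d
		}
	}
	if unknown < 0 {
		return shape, known == total
	}
	if known == 0 || total%known != 0 {
		return nil, false
	}
	out = append([]int(nil), shape...)
	out[unknown] = total / known
	return out, true
}

func main() {
	fmt.Println(inferDim([]int{2, -1}, 18))    // [2 9] true
	fmt.Println(inferDim([]int{2, -1, 3}, 18)) // [2 3 3] true
}
```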

Arguments:

shape: Defines the shape of the output tensor.

func ResizeArea Uses

func ResizeArea(scope *Scope, images tf.Output, size tf.Output, optional ...ResizeAreaAttr) (resized_images tf.Output)

Resize `images` to `size` using area interpolation.

Input images can be of different types but output images are always float.

Each output pixel is computed by first transforming the pixel's footprint into the input tensor and then averaging the pixels that intersect the footprint. An input pixel's contribution to the average is weighted by the fraction of its area that intersects the footprint. This is the same as OpenCV's INTER_AREA.

Arguments:

images: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`.  The

new size for the images.

Returns 4-D with shape `[batch, new_height, new_width, channels]`.

func ResizeBicubic Uses

func ResizeBicubic(scope *Scope, images tf.Output, size tf.Output, optional ...ResizeBicubicAttr) (resized_images tf.Output)

Resize `images` to `size` using bicubic interpolation.

Input images can be of different types but output images are always float.

Arguments:

images: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`.  The

new size for the images.

Returns 4-D with shape `[batch, new_height, new_width, channels]`.

func ResizeBilinear Uses

func ResizeBilinear(scope *Scope, images tf.Output, size tf.Output, optional ...ResizeBilinearAttr) (resized_images tf.Output)

Resize `images` to `size` using bilinear interpolation.

Input images can be of different types but output images are always float.

Arguments:

images: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`.  The

new size for the images.

Returns 4-D with shape `[batch, new_height, new_width, channels]`.

func ResizeBilinearGrad Uses

func ResizeBilinearGrad(scope *Scope, grads tf.Output, original_image tf.Output, optional ...ResizeBilinearGradAttr) (output tf.Output)

Computes the gradient of bilinear interpolation.

Arguments:

grads: 4-D with shape `[batch, height, width, channels]`.
original_image: 4-D with shape `[batch, orig_height, orig_width, channels]`,

The image tensor that was resized.

Returns 4-D with shape `[batch, orig_height, orig_width, channels]`. Gradients with respect to the input image. Input image must have been float or double.

func ResizeNearestNeighbor Uses

func ResizeNearestNeighbor(scope *Scope, images tf.Output, size tf.Output, optional ...ResizeNearestNeighborAttr) (resized_images tf.Output)

Resize `images` to `size` using nearest neighbor interpolation.

Arguments:

images: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `new_height, new_width`.  The

new size for the images.

Returns 4-D with shape `[batch, new_height, new_width, channels]`.

func ResizeNearestNeighborGrad Uses

func ResizeNearestNeighborGrad(scope *Scope, grads tf.Output, size tf.Output, optional ...ResizeNearestNeighborGradAttr) (output tf.Output)

Computes the gradient of nearest neighbor interpolation.

Arguments:

grads: 4-D with shape `[batch, height, width, channels]`.
size: A 1-D int32 Tensor of 2 elements: `orig_height, orig_width`. The

original input size.

Returns 4-D with shape `[batch, orig_height, orig_width, channels]`. Gradients with respect to the input image.

func ResourceApplyAdadelta Uses

func ResourceApplyAdadelta(scope *Scope, var_ tf.Output, accum tf.Output, accum_update tf.Output, lr tf.Output, rho tf.Output, epsilon tf.Output, grad tf.Output, optional ...ResourceApplyAdadeltaAttr) (o *tf.Operation)

Update '*var' according to the adadelta scheme.

accum = rho() * accum + (1 - rho()) * grad.square();
update = (update_accum + epsilon).sqrt() * (accum + epsilon()).rsqrt() * grad;
update_accum = rho() * update_accum + (1 - rho()) * update.square();
var -= update;
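A standalone scalar transcription of the update rules above (illustrative only; the op applies them elementwise to tensors, and `lr` is applied here as the documented scaling factor):

```
package main

import (
	"fmt"
	"math"
)

// adadeltaStep transcribes the Adadelta update above for a single scalar,
// treating rho and epsilon as plain scalars rather than attribute lookups.
func adadeltaStep(v, accum, accumUpdate, lr, rho, eps, grad float64) (float64, float64, float64) {
	accum = rho*accum + (1-rho)*grad*grad
	update := math.Sqrt(accumUpdate+eps) / math.Sqrt(accum+eps) * grad
	accumUpdate = rho*accumUpdate + (1-rho)*update*update
	return v - lr*update, accum, accumUpdate
}

func main() {
	v, a, u := 1.0, 0.0, 0.0
	for i := 0; i < 3; i++ {
		// Minimize f(v) = v^2, whose gradient is 2v.
		v, a, u = adadeltaStep(v, a, u, 1.0, 0.95, 1e-6, 2*v)
		fmt.Println(v)
	}
}
```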

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
accum_update: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay factor. Must be a scalar.
epsilon: Constant factor. Must be a scalar.
grad: The gradient.

Returns the created operation.

func ResourceApplyAdagrad Uses

func ResourceApplyAdagrad(scope *Scope, var_ tf.Output, accum tf.Output, lr tf.Output, grad tf.Output, optional ...ResourceApplyAdagradAttr) (o *tf.Operation)

Update '*var' according to the adagrad scheme.

accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
grad: The gradient.

Returns the created operation.

func ResourceApplyAdagradDA Uses

func ResourceApplyAdagradDA(scope *Scope, var_ tf.Output, gradient_accumulator tf.Output, gradient_squared_accumulator tf.Output, grad tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, global_step tf.Output, optional ...ResourceApplyAdagradDAAttr) (o *tf.Operation)

Update '*var' according to the proximal adagrad scheme.

Arguments:

var_: Should be from a Variable().
gradient_accumulator: Should be from a Variable().
gradient_squared_accumulator: Should be from a Variable().
grad: The gradient.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
global_step: Training step number. Must be a scalar.

Returns the created operation.

func ResourceApplyAdam Uses

func ResourceApplyAdam(scope *Scope, var_ tf.Output, m tf.Output, v tf.Output, beta1_power tf.Output, beta2_power tf.Output, lr tf.Output, beta1 tf.Output, beta2 tf.Output, epsilon tf.Output, grad tf.Output, optional ...ResourceApplyAdamAttr) (o *tf.Operation)

Update '*var' according to the Adam algorithm.

lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g_t
v_t <- beta2 * v_{t-1} + (1 - beta2) * g_t * g_t
variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)

Arguments:

var_: Should be from a Variable().
m: Should be from a Variable().
v: Should be from a Variable().
beta1_power: Must be a scalar.
beta2_power: Must be a scalar.
lr: Scaling factor. Must be a scalar.
beta1: Momentum factor. Must be a scalar.
beta2: Momentum factor. Must be a scalar.
epsilon: Ridge term. Must be a scalar.
grad: The gradient.

Returns the created operation.
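
The same rule written out as a runnable plain-Go scalar sketch (`adamStep` is a hypothetical helper with illustrative values, not part of this package):

```go
package main

import (
	"fmt"
	"math"
)

// adamStep applies one scalar Adam update following the equations above.
func adamStep(v, m, vHat, g, lr, beta1, beta2, eps float64, t int) (float64, float64, float64) {
	lrT := lr * math.Sqrt(1-math.Pow(beta2, float64(t))) / (1 - math.Pow(beta1, float64(t)))
	m = beta1*m + (1-beta1)*g              // first-moment estimate
	vHat = beta2*vHat + (1-beta2)*g*g      // second-moment estimate
	v = v - lrT*m/(math.Sqrt(vHat)+eps)    // parameter update
	return v, m, vHat
}

func main() {
	v, m, s := 1.0, 0.0, 0.0
	for t := 1; t <= 3; t++ {
		v, m, s = adamStep(v, m, s, 0.5, 0.001, 0.9, 0.999, 1e-8, t)
	}
	fmt.Printf("var after 3 steps: %.6f\n", v)
}
```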

func ResourceApplyCenteredRMSProp Uses

func ResourceApplyCenteredRMSProp(scope *Scope, var_ tf.Output, mg tf.Output, ms tf.Output, mom tf.Output, lr tf.Output, rho tf.Output, momentum tf.Output, epsilon tf.Output, grad tf.Output, optional ...ResourceApplyCenteredRMSPropAttr) (o *tf.Operation)

Update '*var' according to the centered RMSProp algorithm.

The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.

Note that in the dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in the sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
mean_grad = decay * mean_grad + (1 - decay) * gradient

Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)

mg <- rho * mg_{t-1} + (1 - rho) * grad
ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms - mg * mg + epsilon)
var <- var - mom

Arguments:

var_: Should be from a Variable().
mg: Should be from a Variable().
ms: Should be from a Variable().
mom: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay rate. Must be a scalar.
epsilon: Ridge term. Must be a scalar.
grad: The gradient.

Returns the created operation.

func ResourceApplyFtrl Uses

func ResourceApplyFtrl(scope *Scope, var_ tf.Output, accum tf.Output, linear tf.Output, grad tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, lr_power tf.Output, optional ...ResourceApplyFtrlAttr) (o *tf.Operation)

Update '*var' according to the Ftrl-proximal scheme.

accum_new = accum + grad * grad
linear += grad - (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
linear: Should be from a Variable().
grad: The gradient.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
lr_power: Scaling factor. Must be a scalar.

Returns the created operation.
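
A plain-Go scalar sketch of the Ftrl-proximal rule above (the `ftrlStep` helper and its starting values are illustrative, not part of this package):

```go
package main

import (
	"fmt"
	"math"
)

// ftrlStep applies one scalar FTRL-proximal update following the
// equations documented above.
func ftrlStep(v, accum, linear, grad, lr, l1, l2, lrPower float64) (float64, float64, float64) {
	accumNew := accum + grad*grad
	linear += grad - (math.Pow(accumNew, -lrPower)-math.Pow(accum, -lrPower))/lr*v
	quadratic := 1.0/(math.Pow(accumNew, lrPower)*lr) + 2*l2
	if math.Abs(linear) > l1 {
		v = (math.Copysign(l1, linear) - linear) / quadratic // sign(linear) * l1 - linear
	} else {
		v = 0
	}
	return v, accumNew, linear
}

func main() {
	v, accum, linear := 0.0, 0.1, 0.0 // illustrative starting state
	v, accum, linear = ftrlStep(v, accum, linear, 0.5, 0.05, 0.001, 0.001, -0.5)
	fmt.Printf("var=%.4f accum=%.4f linear=%.4f\n", v, accum, linear)
}
```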

func ResourceApplyFtrlV2 Uses

func ResourceApplyFtrlV2(scope *Scope, var_ tf.Output, accum tf.Output, linear tf.Output, grad tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, l2_shrinkage tf.Output, lr_power tf.Output, optional ...ResourceApplyFtrlV2Attr) (o *tf.Operation)

Update '*var' according to the Ftrl-proximal scheme.

grad_with_shrinkage = grad + 2 * l2_shrinkage * var
accum_new = accum + grad_with_shrinkage * grad_with_shrinkage
linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
linear: Should be from a Variable().
grad: The gradient.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 shrinkage regularization. Must be a scalar.
lr_power: Scaling factor. Must be a scalar.

Returns the created operation.

func ResourceApplyGradientDescent Uses

func ResourceApplyGradientDescent(scope *Scope, var_ tf.Output, alpha tf.Output, delta tf.Output, optional ...ResourceApplyGradientDescentAttr) (o *tf.Operation)

Update '*var' by subtracting 'alpha' * 'delta' from it.

Arguments:

var_: Should be from a Variable().
alpha: Scaling factor. Must be a scalar.
delta: The change.

Returns the created operation.
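
A minimal graph-construction sketch, assuming the `VarHandleOp` and `AssignVariableOp` wrappers from this package for creating and initializing the resource variable (all values illustrative):

```go
s := NewScope()
v := VarHandleOp(s, tf.Float, tf.MakeShape(2)) // resource variable handle
init := AssignVariableOp(s, v, Const(s, []float32{1, 2}))
step := ResourceApplyGradientDescent(s, v,
    Const(s, float32(0.1)),         // alpha: scalar learning rate
    Const(s, []float32{0.5, -0.5})) // delta: the change to apply
if s.Err() != nil {
    panic(s.Err())
}
// Run `init` once, then `step` per iteration, via a tensorflow.Session.
_, _ = init, step
```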

func ResourceApplyMomentum Uses

func ResourceApplyMomentum(scope *Scope, var_ tf.Output, accum tf.Output, lr tf.Output, grad tf.Output, momentum tf.Output, optional ...ResourceApplyMomentumAttr) (o *tf.Operation)

Update '*var' according to the momentum scheme.

Set use_nesterov = True if you want to use Nesterov momentum.

accum = accum * momentum + grad
var -= lr * accum

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
grad: The gradient.
momentum: Momentum. Must be a scalar.

Returns the created operation.

func ResourceApplyProximalAdagrad Uses

func ResourceApplyProximalAdagrad(scope *Scope, var_ tf.Output, accum tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, grad tf.Output, optional ...ResourceApplyProximalAdagradAttr) (o *tf.Operation)

Update '*var' and '*accum' according to FOBOS with Adagrad learning rate.

accum += grad * grad
prox_v = var - lr * grad * (1 / sqrt(accum))
var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
grad: The gradient.

Returns the created operation.

func ResourceApplyProximalGradientDescent Uses

func ResourceApplyProximalGradientDescent(scope *Scope, var_ tf.Output, alpha tf.Output, l1 tf.Output, l2 tf.Output, delta tf.Output, optional ...ResourceApplyProximalGradientDescentAttr) (o *tf.Operation)

Update '*var' using the FOBOS algorithm with a fixed learning rate.

prox_v = var - alpha * delta
var = sign(prox_v) / (1 + alpha * l2) * max{|prox_v| - alpha * l1, 0}

Arguments:

var_: Should be from a Variable().
alpha: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
delta: The change.

Returns the created operation.

func ResourceApplyRMSProp Uses

func ResourceApplyRMSProp(scope *Scope, var_ tf.Output, ms tf.Output, mom tf.Output, lr tf.Output, rho tf.Output, momentum tf.Output, epsilon tf.Output, grad tf.Output, optional ...ResourceApplyRMSPropAttr) (o *tf.Operation)

Update '*var' according to the RMSProp algorithm.

Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in the sparse implementation, ms and mom will not update in iterations during which the grad is zero.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom

Arguments:

var_: Should be from a Variable().
ms: Should be from a Variable().
mom: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay rate. Must be a scalar.
epsilon: Ridge term. Must be a scalar.
grad: The gradient.

Returns the created operation.

func ResourceGather Uses

func ResourceGather(scope *Scope, resource tf.Output, indices tf.Output, dtype tf.DataType, optional ...ResourceGatherAttr) (output tf.Output)

Gather slices from the variable pointed to by `resource` according to `indices`.

`indices` must be an integer tensor of any dimension (usually 0-D or 1-D). Produces an output tensor with shape `indices.shape + params.shape[1:]` where:

```python
# Scalar indices
output[:, ..., :] = params[indices, :, ..., :]

# Vector indices
output[i, :, ..., :] = params[indices[i], :, ..., :]

# Higher rank indices
output[i, ..., j, :, ..., :] = params[indices[i, ..., j], :, ..., :]
```

func ResourceScatterAdd Uses

func ResourceScatterAdd(scope *Scope, resource tf.Output, indices tf.Output, updates tf.Output) (o *tf.Operation)

Adds sparse updates to the variable referenced by `resource`.

This operation computes

# Scalar indices
ref[indices, ...] += updates[...]

# Vector indices (for each i)
ref[indices[i], ...] += updates[i, ...]

# High rank indices (for each i, ..., j)
ref[indices[i, ..., j], ...] += updates[i, ..., j, ...]

Duplicate entries are handled correctly: if multiple `indices` reference the same location, their contributions add.

Requires `updates.shape = indices.shape + ref.shape[1:]`.

(Illustration: https://www.tensorflow.org/images/ScatterAdd.png)

Arguments:

resource: Should be from a `Variable` node.
indices: A tensor of indices into the first dimension of `ref`.
updates: A tensor of updated values to add to `ref`.

Returns the created operation.
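
The duplicate-handling semantics can be illustrated with a plain-Go sketch (not part of this package):

```go
package main

import "fmt"

func main() {
	// Scatter-add as documented above, with a duplicated index whose
	// contributions add.
	ref := [][]float64{{0, 0}, {0, 0}, {0, 0}}
	indices := []int{0, 2, 0} // index 0 appears twice
	updates := [][]float64{{1, 1}, {2, 2}, {3, 3}}
	for i, idx := range indices {
		for j := range ref[idx] {
			ref[idx][j] += updates[i][j]
		}
	}
	fmt.Println(ref) // [[4 4] [0 0] [2 2]]
}
```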

func ResourceSparseApplyAdadelta Uses

func ResourceSparseApplyAdadelta(scope *Scope, var_ tf.Output, accum tf.Output, accum_update tf.Output, lr tf.Output, rho tf.Output, epsilon tf.Output, grad tf.Output, indices tf.Output, optional ...ResourceSparseApplyAdadeltaAttr) (o *tf.Operation)

Update relevant entries in '*var', '*accum' and '*accum_update' according to the adadelta scheme.

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
accum_update: Should be from a Variable().
lr: Learning rate. Must be a scalar.
rho: Decay factor. Must be a scalar.
epsilon: Constant factor. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.

Returns the created operation.

func ResourceSparseApplyAdagrad Uses

func ResourceSparseApplyAdagrad(scope *Scope, var_ tf.Output, accum tf.Output, lr tf.Output, grad tf.Output, indices tf.Output, optional ...ResourceSparseApplyAdagradAttr) (o *tf.Operation)

Update relevant entries in '*var' and '*accum' according to the adagrad scheme.

That is, for rows we have grad for, we update var and accum as follows:

accum += grad * grad
var -= lr * grad * (1 / sqrt(accum))

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
lr: Learning rate. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.

Returns the created operation.
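
A plain-Go sketch of the sparse semantics: only the rows named in `indices` are touched (illustrative values, not part of this package):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	varr := [][]float64{{1, 1}, {2, 2}, {3, 3}}
	accum := [][]float64{{0.1, 0.1}, {0.1, 0.1}, {0.1, 0.1}}
	grad := [][]float64{{0.5, -0.5}} // one gradient row...
	indices := []int{1}              // ...applied to row 1 of var/accum
	lr := 0.1
	for k, row := range indices {
		for j := range varr[row] {
			g := grad[k][j]
			accum[row][j] += g * g
			varr[row][j] -= lr * g * (1 / math.Sqrt(accum[row][j]))
		}
	}
	fmt.Println(varr) // rows 0 and 2 are untouched
}
```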

func ResourceSparseApplyAdagradDA Uses

func ResourceSparseApplyAdagradDA(scope *Scope, var_ tf.Output, gradient_accumulator tf.Output, gradient_squared_accumulator tf.Output, grad tf.Output, indices tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, global_step tf.Output, optional ...ResourceSparseApplyAdagradDAAttr) (o *tf.Operation)

Update entries in '*var' and '*accum' according to the proximal adagrad scheme.

Arguments:

var_: Should be from a Variable().
gradient_accumulator: Should be from a Variable().
gradient_squared_accumulator: Should be from a Variable().
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
lr: Learning rate. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
global_step: Training step number. Must be a scalar.

Returns the created operation.

func ResourceSparseApplyCenteredRMSProp Uses

func ResourceSparseApplyCenteredRMSProp(scope *Scope, var_ tf.Output, mg tf.Output, ms tf.Output, mom tf.Output, lr tf.Output, rho tf.Output, momentum tf.Output, epsilon tf.Output, grad tf.Output, indices tf.Output, optional ...ResourceSparseApplyCenteredRMSPropAttr) (o *tf.Operation)

Update '*var' according to the centered RMSProp algorithm.

The centered RMSProp algorithm uses an estimate of the centered second moment (i.e., the variance) for normalization, as opposed to regular RMSProp, which uses the (uncentered) second moment. This often helps with training, but is slightly more expensive in terms of computation and memory.

Note that in the dense implementation of this algorithm, mg, ms, and mom will update even if the grad is zero, but in this sparse implementation, mg, ms, and mom will not update in iterations during which the grad is zero.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
mean_grad = decay * mean_grad + (1 - decay) * gradient
Delta = learning_rate * gradient / sqrt(mean_square + epsilon - mean_grad ** 2)

ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom

Arguments:

var_: Should be from a Variable().
mg: Should be from a Variable().
ms: Should be from a Variable().
mom: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay rate. Must be a scalar.
epsilon: Ridge term. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var, ms and mom.

Returns the created operation.

func ResourceSparseApplyFtrl Uses

func ResourceSparseApplyFtrl(scope *Scope, var_ tf.Output, accum tf.Output, linear tf.Output, grad tf.Output, indices tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, lr_power tf.Output, optional ...ResourceSparseApplyFtrlAttr) (o *tf.Operation)

Update relevant entries in '*var' according to the Ftrl-proximal scheme.

That is, for rows we have grad for, we update var, accum and linear as follows:

accum_new = accum + grad * grad
linear += grad + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
linear: Should be from a Variable().
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
lr_power: Scaling factor. Must be a scalar.

Returns the created operation.

func ResourceSparseApplyFtrlV2 Uses

func ResourceSparseApplyFtrlV2(scope *Scope, var_ tf.Output, accum tf.Output, linear tf.Output, grad tf.Output, indices tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, l2_shrinkage tf.Output, lr_power tf.Output, optional ...ResourceSparseApplyFtrlV2Attr) (o *tf.Operation)

Update relevant entries in '*var' according to the Ftrl-proximal scheme.

That is, for rows we have grad for, we update var, accum and linear as follows:

grad_with_shrinkage = grad + 2 * l2_shrinkage * var
accum_new = accum + grad_with_shrinkage * grad_with_shrinkage
linear += grad_with_shrinkage + (accum_new^(-lr_power) - accum^(-lr_power)) / lr * var
quadratic = 1.0 / (accum_new^(lr_power) * lr) + 2 * l2
var = (sign(linear) * l1 - linear) / quadratic if |linear| > l1 else 0.0
accum = accum_new

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
linear: Should be from a Variable().
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
lr: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 shrinkage regularization. Must be a scalar.
lr_power: Scaling factor. Must be a scalar.

Returns the created operation.

func ResourceSparseApplyMomentum Uses

func ResourceSparseApplyMomentum(scope *Scope, var_ tf.Output, accum tf.Output, lr tf.Output, grad tf.Output, indices tf.Output, momentum tf.Output, optional ...ResourceSparseApplyMomentumAttr) (o *tf.Operation)

Update relevant entries in '*var' and '*accum' according to the momentum scheme.

Set use_nesterov = True if you want to use Nesterov momentum.

That is, for rows we have grad for, we update var and accum as follows:

accum = accum * momentum + grad
var -= lr * accum

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
lr: Learning rate. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.
momentum: Momentum. Must be a scalar.

Returns the created operation.

func ResourceSparseApplyProximalAdagrad Uses

func ResourceSparseApplyProximalAdagrad(scope *Scope, var_ tf.Output, accum tf.Output, lr tf.Output, l1 tf.Output, l2 tf.Output, grad tf.Output, indices tf.Output, optional ...ResourceSparseApplyProximalAdagradAttr) (o *tf.Operation)

Sparse update of entries in '*var' and '*accum' according to the FOBOS algorithm.

That is, for rows we have grad for, we update var and accum as follows:

accum += grad * grad
prox_v = var
prox_v -= lr * grad * (1 / sqrt(accum))
var = sign(prox_v) / (1 + lr * l2) * max{|prox_v| - lr * l1, 0}

Arguments:

var_: Should be from a Variable().
accum: Should be from a Variable().
lr: Learning rate. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.

Returns the created operation.

func ResourceSparseApplyProximalGradientDescent Uses

func ResourceSparseApplyProximalGradientDescent(scope *Scope, var_ tf.Output, alpha tf.Output, l1 tf.Output, l2 tf.Output, grad tf.Output, indices tf.Output, optional ...ResourceSparseApplyProximalGradientDescentAttr) (o *tf.Operation)

Sparse update of '*var' using the FOBOS algorithm with a fixed learning rate.

That is, for rows we have grad for, we update var as follows:

prox_v = var - alpha * grad
var = sign(prox_v) / (1 + alpha * l2) * max{|prox_v| - alpha * l1, 0}

Arguments:

var_: Should be from a Variable().
alpha: Scaling factor. Must be a scalar.
l1: L1 regularization. Must be a scalar.
l2: L2 regularization. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var and accum.

Returns the created operation.

func ResourceSparseApplyRMSProp Uses

func ResourceSparseApplyRMSProp(scope *Scope, var_ tf.Output, ms tf.Output, mom tf.Output, lr tf.Output, rho tf.Output, momentum tf.Output, epsilon tf.Output, grad tf.Output, indices tf.Output, optional ...ResourceSparseApplyRMSPropAttr) (o *tf.Operation)

Update '*var' according to the RMSProp algorithm.

Note that in the dense implementation of this algorithm, ms and mom will update even if the grad is zero, but in this sparse implementation, ms and mom will not update in iterations during which the grad is zero.

mean_square = decay * mean_square + (1 - decay) * gradient ** 2
Delta = learning_rate * gradient / sqrt(mean_square + epsilon)

ms <- rho * ms_{t-1} + (1 - rho) * grad * grad
mom <- momentum * mom_{t-1} + lr * grad / sqrt(ms + epsilon)
var <- var - mom

Arguments:

var_: Should be from a Variable().
ms: Should be from a Variable().
mom: Should be from a Variable().
lr: Scaling factor. Must be a scalar.
rho: Decay rate. Must be a scalar.
epsilon: Ridge term. Must be a scalar.
grad: The gradient.
indices: A vector of indices into the first dimension of var, ms and mom.

Returns the created operation.

func ResourceStridedSliceAssign Uses

func ResourceStridedSliceAssign(scope *Scope, ref tf.Output, begin tf.Output, end tf.Output, strides tf.Output, value tf.Output, optional ...ResourceStridedSliceAssignAttr) (o *tf.Operation)

Assign `value` to the sliced l-value reference of `ref`.

The values of `value` are assigned to the positions in the variable `ref` that are selected by the slice parameters. The slice parameters `begin`, `end`, `strides`, etc. work exactly as in `StridedSlice`.

NOTE: this op currently does not support broadcasting, so `value`'s shape must be exactly the shape produced by the slice of `ref`.

Returns the created operation.

func Restore Uses

func Restore(scope *Scope, file_pattern tf.Output, tensor_name tf.Output, dt tf.DataType, optional ...RestoreAttr) (tensor tf.Output)

Restores a tensor from checkpoint files.

Reads a tensor stored in one or several files. If there are several files (for instance because a tensor was saved as slices), `file_pattern` may contain wildcard symbols (`*` and `?`) in the filename portion only, not in the directory portion.

If a `file_pattern` matches several files, `preferred_shard` can be used to hint in which file the requested tensor is likely to be found. This op will first open the file at index `preferred_shard` in the list of matching files and try to restore tensors from that file. Only if some tensors or tensor slices are not found in that first file does the Op open all the files. Setting `preferred_shard` to match the value passed as the `shard` input of a matching `Save` Op may speed up Restore. This attribute only affects performance, not correctness. The default value -1 means files are processed in order.

See also `RestoreSlice`.

Arguments:

file_pattern: Must have a single element. The pattern of the files from

which we read the tensor.

tensor_name: Must have a single element. The name of the tensor to be

restored.

dt: The type of the tensor to be restored.

Returns The restored tensor.
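
A minimal graph-construction sketch (the file pattern and tensor name are hypothetical):

```go
s := NewScope()
pattern := Const(s, "ckpt/model-*") // hypothetical file pattern
name := Const(s, "weights")         // hypothetical tensor name
restored := Restore(s, pattern, name, tf.Float)
if s.Err() != nil {
	panic(s.Err())
}
_ = restored // fetch via a tensorflow.Session to materialize the tensor
```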

func RestoreSlice Uses

func RestoreSlice(scope *Scope, file_pattern tf.Output, tensor_name tf.Output, shape_and_slice tf.Output, dt tf.DataType, optional ...RestoreSliceAttr) (tensor tf.Output)

Restores a tensor from checkpoint files.

This is like `Restore` except that restored tensor can be listed as filling only a slice of a larger tensor. `shape_and_slice` specifies the shape of the larger tensor and the slice that the restored tensor covers.

The `shape_and_slice` input has the same format as the elements of the `shapes_and_slices` input of the `SaveSlices` op.

Arguments:

file_pattern: Must have a single element. The pattern of the files from

which we read the tensor.

tensor_name: Must have a single element. The name of the tensor to be

restored.

shape_and_slice: Scalar. The shapes and slice specifications to use when

restoring the tensor.

dt: The type of the tensor to be restored.

Returns The restored tensor.

func RestoreV2 Uses

func RestoreV2(scope *Scope, prefix tf.Output, tensor_names tf.Output, shape_and_slices tf.Output, dtypes []tf.DataType) (tensors []tf.Output)

Restores tensors from a V2 checkpoint.

For backward compatibility with the V1 format, this Op currently allows restoring from a V1 checkpoint as well:

- This Op first attempts to find the V2 index file pointed to by "prefix", and
  if found, proceeds to read it as a V2 checkpoint;
- Otherwise the V1 read path is invoked.

Relying on this behavior is not recommended, as the ability to fall back to read V1 might be deprecated and eventually removed.

By default, restores the named tensors in full. If the caller wishes to restore specific slices of stored tensors, "shape_and_slices" should be non-empty strings and correspondingly well-formed.

Callers must ensure all the named tensors are indeed stored in the checkpoint.

Arguments:

prefix: Must have a single element.  The prefix of a V2 checkpoint.
tensor_names: shape {N}.  The names of the tensors to be restored.
shape_and_slices: shape {N}.  The slice specs of the tensors to be restored.

Empty strings indicate that they are non-partitioned tensors.

dtypes: shape {N}.  The list of expected dtype for the tensors.  Must match

those stored in the checkpoint.

Returns shape {N}. The restored tensors, whose shapes are read from the checkpoint directly.
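
A minimal graph-construction sketch (the checkpoint prefix and tensor names are hypothetical):

```go
s := NewScope()
prefix := Const(s, "ckpt/model") // hypothetical V2 checkpoint prefix
names := Const(s, []string{"weights", "bias"})
slices := Const(s, []string{"", ""}) // empty strings: restore each tensor in full
tensors := RestoreV2(s, prefix, names, slices,
	[]tf.DataType{tf.Float, tf.Float})
if s.Err() != nil {
	panic(s.Err())
}
_ = tensors
```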

func Reverse Uses

func Reverse(scope *Scope, tensor tf.Output, dims tf.Output) (output tf.Output)

Reverses specific dimensions of a tensor.

Given a `tensor`, and a `bool` tensor `dims` representing the dimensions of `tensor`, this operation reverses each dimension i of `tensor` where `dims[i]` is `True`.

`tensor` can have up to 8 dimensions. The number of dimensions of `tensor` must equal the number of elements in `dims`. In other words:

`rank(tensor) = size(dims)`

For example:

```
# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [False, False, False, True]
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is [False, True, False, False]
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is [False, False, True, False]
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
```

Arguments:

tensor: Up to 8-D.
dims: 1-D. The dimensions to reverse.

Returns The same shape as `tensor`.

func ReverseSequence Uses

func ReverseSequence(scope *Scope, input tf.Output, seq_lengths tf.Output, seq_dim int64, optional ...ReverseSequenceAttr) (output tf.Output)

Reverses variable length slices.

This op first slices `input` along the dimension `batch_dim`, and for each slice `i`, reverses the first `seq_lengths[i]` elements along the dimension `seq_dim`.

The elements of `seq_lengths` must obey `seq_lengths[i] <= input.dims[seq_dim]`, and `seq_lengths` must be a vector of length `input.dims[batch_dim]`.

The output slice `i` along dimension `batch_dim` is then given by input slice `i`, with the first `seq_lengths[i]` slices along dimension `seq_dim` reversed.

For example:

```
# Given this:
batch_dim = 0
seq_dim = 1
input.dims = (4, 8, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0, 0:7, :, ...] = input[0, 7:0:-1, :, ...]
output[1, 0:2, :, ...] = input[1, 2:0:-1, :, ...]
output[2, 0:3, :, ...] = input[2, 3:0:-1, :, ...]
output[3, 0:5, :, ...] = input[3, 5:0:-1, :, ...]

# while entries past seq_lens are copied through:
output[0, 7:, :, ...] = input[0, 7:, :, ...]
output[1, 2:, :, ...] = input[1, 2:, :, ...]
output[2, 3:, :, ...] = input[2, 3:, :, ...]
output[3, 5:, :, ...] = input[3, 5:, :, ...]
```

In contrast, if:

```
# Given this:
batch_dim = 2
seq_dim = 0
input.dims = (8, ?, 4, ...)
seq_lengths = [7, 2, 3, 5]

# then slices of input are reversed on seq_dim, but only up to seq_lengths:
output[0:7, :, 0, :, ...] = input[7:0:-1, :, 0, :, ...]
output[0:2, :, 1, :, ...] = input[2:0:-1, :, 1, :, ...]
output[0:3, :, 2, :, ...] = input[3:0:-1, :, 2, :, ...]
output[0:5, :, 3, :, ...] = input[5:0:-1, :, 3, :, ...]

# while entries past seq_lens are copied through:
output[7:, :, 0, :, ...] = input[7:, :, 0, :, ...]
output[2:, :, 1, :, ...] = input[2:, :, 1, :, ...]
output[3:, :, 2, :, ...] = input[3:, :, 2, :, ...]
output[5:, :, 3, :, ...] = input[5:, :, 3, :, ...]
```

Arguments:

input: The input to reverse.
seq_lengths: 1-D with length `input.dims(batch_dim)` and

`max(seq_lengths) <= input.dims(seq_dim)`

seq_dim: The dimension which is partially reversed.

Returns The partially reversed input. It has the same shape as `input`.
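
A minimal graph-construction sketch (values illustrative):

```go
s := NewScope()
input := Const(s, [][]int32{
	{1, 2, 3, 4},
	{5, 6, 7, 8},
})
// Reverse the first 3 elements of row 0 and the first 2 of row 1
// along dimension 1 (batch_dim defaults to 0).
out := ReverseSequence(s, input, Const(s, []int64{3, 2}), 1)
if s.Err() != nil {
	panic(s.Err())
}
_ = out // row 0 -> [3 2 1 4], row 1 -> [6 5 7 8]
```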

func ReverseV2 Uses

func ReverseV2(scope *Scope, tensor tf.Output, axis tf.Output) (output tf.Output)

Reverses specific dimensions of a tensor.

NOTE `tf.reverse` has now changed behavior in preparation for 1.0. `tf.reverse_v2` is currently an alias that will be deprecated before TF 1.0.

Given a `tensor` and an `int32` tensor `axis` representing the set of dimensions of `tensor` to reverse, this operation reverses each dimension `i` for which there exists `j` such that `axis[j] == i`.

`tensor` can have up to 8 dimensions. `axis` may contain zero or more entries. If an index is specified more than once, an InvalidArgument error is raised.

For example:

```
# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [3] (or 'dims' is [-1])
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is [1] (or 'dims' is [-3])
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is [2] (or 'dims' is [-2])
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
```

Arguments:

tensor: Up to 8-D.
axis: 1-D. The indices of the dimensions to reverse. Must be in the range

`[-rank(tensor), rank(tensor))`.

Returns The same shape as `tensor`.
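
A minimal graph-construction sketch (values illustrative):

```go
s := NewScope()
t := Const(s, [][]int32{{1, 2, 3}, {4, 5, 6}})
rev := ReverseV2(s, t, Const(s, []int32{1})) // reverse along axis 1
if s.Err() != nil {
	panic(s.Err())
}
_ = rev // [[3 2 1] [6 5 4]]
```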

func Rint Uses

func Rint(scope *Scope, x tf.Output) (y tf.Output)

Returns element-wise integer closest to x.

If the result is midway between two representable values, the even representable value is chosen. For example:

```
rint(-1.5) ==> -2.0
rint(0.5000001) ==> 1.0
rint([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) ==> [-2., -2., -0., 0., 2., 2., 2.]
```

func Round Uses

func Round(scope *Scope, x tf.Output) (y tf.Output)

Rounds the values of a tensor to the nearest integer, element-wise.

Rounds half to even. Also known as banker's rounding. If you want to round according to the current system rounding mode, use `std::rint`.

func Rsqrt Uses

func Rsqrt(scope *Scope, x tf.Output) (y tf.Output)

Computes reciprocal of square root of x element-wise.

I.e., \\(y = 1 / \sqrt{x}\\).

func RsqrtGrad Uses

func RsqrtGrad(scope *Scope, x tf.Output, y tf.Output) (z tf.Output)

Computes the gradient for the rsqrt of `x` wrt its input.

Specifically, `grad = dy * -0.5 * y^3`, where `y = rsqrt(x)`, and `dy` is the corresponding input gradient.
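A quick plain-Go check of this identity at a single point (illustrative, not part of this package):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// grad = dy * -0.5 * y^3 with y = rsqrt(x), at x = 4, dy = 1:
	// y = 0.5, so grad = -0.5 * 0.125 = -0.0625, matching the direct
	// derivative d/dx x^(-1/2) = -0.5 * x^(-3/2).
	x, dy := 4.0, 1.0
	y := 1 / math.Sqrt(x)
	grad := dy * -0.5 * y * y * y
	fmt.Println(grad) // -0.0625
}
```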

func SampleDistortedBoundingBox Uses

func SampleDistortedBoundingBox(scope *Scope, image_size tf.Output, bounding_boxes tf.Output, optional ...SampleDistortedBoundingBoxAttr) (begin tf.Output, size tf.Output, bboxes tf.Output)

Generate a single randomly distorted bounding box for an image.

Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. *data augmentation*. This Op outputs a randomly distorted localization of an object, i.e. bounding box, given an `image_size`, `bounding_boxes` and a series of constraints.

The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: `begin`, `size` and `bboxes`. The first 2 tensors can be fed directly into `tf.slice` to crop the image. The latter may be supplied to `tf.image.draw_bounding_boxes` to visualize what the bounding box looks like.

Bounding boxes are supplied and returned as `[y_min, x_min, y_max, x_max]`. The bounding box coordinates are floats in `[0.0, 1.0]` relative to the width and height of the underlying image.

For example,

```python

# Generate a single distorted bounding box.
begin, size, bbox_for_draw = tf.image.sample_distorted_bounding_box(
    tf.shape(image),
    bounding_boxes=bounding_boxes)

# Draw the bounding box in an image summary.
image_with_box = tf.image.draw_bounding_boxes(tf.expand_dims(image, 0),
                                              bbox_for_draw)
tf.image_summary('images_with_box', image_with_box)

# Employ the bounding box to distort the image.
distorted_image = tf.slice(image, begin, size)

```

Note that if no bounding box information is available, setting `use_image_if_no_bounding_boxes = true` will assume there is a single implicit bounding box covering the whole image. If `use_image_if_no_bounding_boxes` is false and no bounding boxes are supplied, an error is raised.

Arguments:

image_size: 1-D, containing `[height, width, channels]`.
bounding_boxes: 3-D with shape `[batch, N, 4]` describing the N bounding boxes

associated with the image.

Returns:

begin: 1-D, containing `[offset_height, offset_width, 0]`. Provide as input to `tf.slice`.
size: 1-D, containing `[target_height, target_width, -1]`. Provide as input to `tf.slice`.
bboxes: 3-D with shape `[1, 1, 4]` containing the distorted bounding box. Provide as input to `tf.image.draw_bounding_boxes`.

func SampleDistortedBoundingBoxV2 Uses

func SampleDistortedBoundingBoxV2(scope *Scope, image_size tf.Output, bounding_boxes tf.Output, min_object_covered tf.