cockroach: github.com/cockroachdb/cockroach/pkg/sql/colexec

package colexec

import "github.com/cockroachdb/cockroach/pkg/sql/colexec"

Index

Package Files

aggregate_funcs.go and_or_projection.eg.go bool_vec_to_sel.go buffer.go builtin_funcs.go cancel_checker.go case.go cast.eg.go columnarizer.go const.eg.go constants.go count.go deselector.go disk_spiller.go distinct.eg.go expr.go external_hash_joiner.go external_sort.go fn_op.go hash.go hash_aggregator.eg.go hash_aggregator.go hash_any_not_null_agg.eg.go hash_avg_agg.eg.go hash_bool_and_or_agg.eg.go hash_concat_agg.eg.go hash_count_agg.eg.go hash_min_max_agg.eg.go hash_sum_agg.eg.go hash_sum_int_agg.eg.go hash_utils.eg.go hash_utils.go hashjoiner.eg.go hashjoiner.go hashtable.go hashtable_distinct.eg.go hashtable_full_default.eg.go hashtable_full_deleting.eg.go invariants_checker.go is_null_ops.go like_ops.eg.go like_ops.go limit.go materializer.go mergejoinbase.eg.go mergejoiner.go mergejoiner_exceptall.eg.go mergejoiner_fullouter.eg.go mergejoiner_inner.eg.go mergejoiner_intersectall.eg.go mergejoiner_leftanti.eg.go mergejoiner_leftouter.eg.go mergejoiner_leftsemi.eg.go mergejoiner_rightouter.eg.go mergejoiner_util.go offset.go one_shot.go op_creation.go operator.go ordered_aggregator.go ordered_any_not_null_agg.eg.go ordered_avg_agg.eg.go ordered_bool_and_or_agg.eg.go ordered_concat_agg.eg.go ordered_count_agg.eg.go ordered_min_max_agg.eg.go ordered_sum_agg.eg.go ordered_sum_int_agg.eg.go ordered_synchronizer.eg.go ordinality.go parallel_unordered_synchronizer.go partially_ordered_distinct.go partitioner.go proj_const_left_ops.eg.go proj_const_right_ops.eg.go proj_non_const_ops.eg.go quicksort.eg.go rank.eg.go relative_rank.eg.go routers.go row_number.eg.go rowstovec.eg.go select_in.eg.go selection_ops.eg.go serial_unordered_synchronizer.go simple_project.go sort.eg.go sort.go sort_chunks.go sorttopk.go spilling_queue.go stats.go substring.eg.go unordered_distinct.go utils.eg.go utils.go values_differ.eg.go vec_comparators.eg.go vec_to_datum.eg.go window_peer_grouper.eg.go

Constants

const DefaultVectorizeRowCountThreshold = 1000

DefaultVectorizeRowCountThreshold denotes the default row count threshold. When it is met, the vectorized execution engine will be used if possible. The current number, 1000, was chosen by comparing a `SELECT count(*) FROM t` query running through the row-based and the vectorized execution engines on a single node with tables having different numbers of columns. Note: if you are updating this field, please make sure to update the vectorize_threshold logic test accordingly.

const HashTableNumBuckets = 1 << 16

HashTableNumBuckets is the default number of buckets in the colexec hashtable. TODO(yuzefovich): support rehashing instead of large fixed bucket size.

const VecMaxOpenFDsLimit = 256

VecMaxOpenFDsLimit specifies the maximum number of open file descriptors that the vectorized engine can have (globally) for use of the temporary storage.

Variables

var SupportedAggFns = []execinfrapb.AggregatorSpec_Func{
    execinfrapb.AggregatorSpec_ANY_NOT_NULL,
    execinfrapb.AggregatorSpec_AVG,
    execinfrapb.AggregatorSpec_SUM,
    execinfrapb.AggregatorSpec_SUM_INT,
    execinfrapb.AggregatorSpec_CONCAT_AGG,
    execinfrapb.AggregatorSpec_COUNT_ROWS,
    execinfrapb.AggregatorSpec_COUNT,
    execinfrapb.AggregatorSpec_MIN,
    execinfrapb.AggregatorSpec_MAX,
    execinfrapb.AggregatorSpec_BOOL_AND,
    execinfrapb.AggregatorSpec_BOOL_OR,
}

SupportedAggFns contains all aggregate functions supported by the vectorized engine.
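
For illustration, a minimal sketch of consulting this list before planning a vectorized aggregation (the helper name is hypothetical; the colexec and execinfrapb packages are assumed to be imported):

func isAggSupported(fn execinfrapb.AggregatorSpec_Func) bool {
    for _, supported := range colexec.SupportedAggFns {
        if fn == supported {
            return true
        }
    }
    return false
}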

var TestNewColOperator func(ctx context.Context, flowCtx *execinfra.FlowCtx, args *NewColOperatorArgs,
) (r NewColOperatorResult, err error)

TestNewColOperator is a test helper that's always aliased to builder.NewColOperator. We inject it at test time so that tests can use NewColOperator from the colexec package.

func BoolOrUnknownToSelOp Uses

func BoolOrUnknownToSelOp(
    input colexecbase.Operator, typs []*types.T, vecIdx int,
) (colexecbase.Operator, error)

BoolOrUnknownToSelOp plans the infrastructure necessary to convert a column of either Bool or Unknown type into a selection vector on the input batches.

func EncDatumRowsToColVec Uses

func EncDatumRowsToColVec(
    allocator *colmem.Allocator,
    rows sqlbase.EncDatumRows,
    vec coldata.Vec,
    columnIdx int,
    t *types.T,
    alloc *sqlbase.DatumAlloc,
) error

EncDatumRowsToColVec converts one column from EncDatumRows to a column vector. columnIdx is the 0-based index of the column in the EncDatumRows.

func GetCastOperator Uses

func GetCastOperator(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    colIdx int,
    resultIdx int,
    fromType *types.T,
    toType *types.T,
) (colexecbase.Operator, error)

func GetDatumToPhysicalFn Uses

func GetDatumToPhysicalFn(ct *types.T) func(tree.Datum) interface{}

GetDatumToPhysicalFn returns a function for converting a datum of the given ColumnType to the corresponding Go type. Note that the signature of the return function doesn't contain an error since we assume that the conversion must succeed. If for some reason it fails, a panic will be emitted and will be caught by the panic-catcher mechanism of the vectorized engine and will be propagated as an error accordingly.

func GetInOperator Uses

func GetInOperator(
    t *types.T, input colexecbase.Operator, colIdx int, datumTuple *tree.DTuple, negate bool,
) (colexecbase.Operator, error)

func GetInProjectionOperator Uses

func GetInProjectionOperator(
    allocator *colmem.Allocator,
    t *types.T,
    input colexecbase.Operator,
    colIdx int,
    resultIdx int,
    datumTuple *tree.DTuple,
    negate bool,
) (colexecbase.Operator, error)

func GetLikeOperator Uses

func GetLikeOperator(
    ctx *tree.EvalContext, input colexecbase.Operator, colIdx int, pattern string, negate bool,
) (colexecbase.Operator, error)

GetLikeOperator returns a selection operator which applies the specified LIKE pattern, or NOT LIKE if the negate argument is true. The implementation varies depending on the complexity of the pattern.
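
A rough usage sketch (the helper name is hypothetical; the input operator, column index, and evaluation context are assumed to come from the surrounding planning code):

func planLikeFilter(
    evalCtx *tree.EvalContext, input colexecbase.Operator, colIdx int,
) (colexecbase.Operator, error) {
    // Keep only tuples whose colIdx'th string column matches 'foo%'.
    return colexec.GetLikeOperator(evalCtx, input, colIdx, "foo%", false /* negate */)
}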

func GetLikeProjectionOperator Uses

func GetLikeProjectionOperator(
    allocator *colmem.Allocator,
    ctx *tree.EvalContext,
    input colexecbase.Operator,
    colIdx int,
    resultIdx int,
    pattern string,
    negate bool,
) (colexecbase.Operator, error)

GetLikeProjectionOperator returns a projection operator which projects the result of the specified LIKE pattern, or NOT LIKE if the negate argument is true. The implementation varies depending on the complexity of the pattern.

func GetProjectionLConstOperator Uses

func GetProjectionLConstOperator(
    allocator *colmem.Allocator,
    leftType *types.T,
    rightType *types.T,
    outputType *types.T,
    op tree.Operator,
    input colexecbase.Operator,
    colIdx int,
    constArg tree.Datum,
    outputIdx int,
    binFn *tree.BinOp,
    evalCtx *tree.EvalContext,
) (colexecbase.Operator, error)

GetProjectionLConstOperator returns the appropriate constant projection operator for the given left and right column types and operation.

func GetProjectionOperator Uses

func GetProjectionOperator(
    allocator *colmem.Allocator,
    leftType *types.T,
    rightType *types.T,
    outputType *types.T,
    op tree.Operator,
    input colexecbase.Operator,
    col1Idx int,
    col2Idx int,
    outputIdx int,
    binFn *tree.BinOp,
    evalCtx *tree.EvalContext,
) (colexecbase.Operator, error)

GetProjectionOperator returns the appropriate projection operator for the given left and right column types and operation.

func GetProjectionRConstOperator Uses

func GetProjectionRConstOperator(
    allocator *colmem.Allocator,
    leftType *types.T,
    rightType *types.T,
    outputType *types.T,
    op tree.Operator,
    input colexecbase.Operator,
    colIdx int,
    constArg tree.Datum,
    outputIdx int,
    binFn *tree.BinOp,
    evalCtx *tree.EvalContext,
) (colexecbase.Operator, error)

GetProjectionRConstOperator returns the appropriate constant projection operator for the given left and right column types and operation.

func GetSelectionConstOperator Uses

func GetSelectionConstOperator(
    leftType *types.T,
    constType *types.T,
    cmpOp tree.ComparisonOperator,
    input colexecbase.Operator,
    colIdx int,
    constArg tree.Datum,
    binFn *tree.BinOp,
) (colexecbase.Operator, error)

GetSelectionConstOperator returns the appropriate constant selection operator for the given left and right column types and comparison.

func GetSelectionOperator Uses

func GetSelectionOperator(
    leftType *types.T,
    rightType *types.T,
    cmpOp tree.ComparisonOperator,
    input colexecbase.Operator,
    col1Idx int,
    col2Idx int,
    binFn *tree.BinOp,
) (colexecbase.Operator, error)

GetSelectionOperator returns the appropriate two column selection operator for the given left and right column types and comparison.

func GetVecComparator Uses

func GetVecComparator(t *types.T, numVecs int) vecComparator

func MakeAggregateFuncsOutputTypes Uses

func MakeAggregateFuncsOutputTypes(
    aggTyps [][]*types.T, aggFns []execinfrapb.AggregatorSpec_Func,
) ([]*types.T, error)

MakeAggregateFuncsOutputTypes produces the output types for a given set of aggregates.

func NewAndProjOp Uses

func NewAndProjOp(
    allocator *colmem.Allocator,
    input, leftProjOpChain, rightProjOpChain colexecbase.Operator,
    leftFeedOp, rightFeedOp *FeedOperator,
    leftInputType, rightInputType *types.T,
    leftIdx, rightIdx, outputIdx int,
) (colexecbase.Operator, error)

func NewBufferOp Uses

func NewBufferOp(input colexecbase.Operator) colexecbase.Operator

NewBufferOp returns a new bufferOp, initialized to buffer batches from the supplied input.

func NewBuiltinFunctionOperator Uses

func NewBuiltinFunctionOperator(
    allocator *colmem.Allocator,
    evalCtx *tree.EvalContext,
    funcExpr *tree.FuncExpr,
    columnTypes []*types.T,
    argumentCols []int,
    outputIdx int,
    input colexecbase.Operator,
) (colexecbase.Operator, error)

NewBuiltinFunctionOperator returns an operator that applies builtin functions.

func NewCaseOp Uses

func NewCaseOp(
    allocator *colmem.Allocator,
    buffer colexecbase.Operator,
    caseOps []colexecbase.Operator,
    elseOp colexecbase.Operator,
    thenIdxs []int,
    outputIdx int,
    typ *types.T,
) colexecbase.Operator

NewCaseOp returns an operator that runs a case statement. buffer is a bufferOp that will return the input batch repeatedly. caseOps is a list of operator chains, one per branch in the case statement. Each caseOp is connected to the input buffer op; it filters the input based on the case arm's WHEN condition and then projects the remaining selected tuples based on the case arm's THEN condition.

elseOp is the ELSE condition. whenCol is the index into the input batch to read from. thenCol is the index into the output batch to write to. typ is the type of the CASE expression.

func NewConstNullOp Uses

func NewConstNullOp(
    allocator *colmem.Allocator, input colexecbase.Operator, outputIdx int,
) colexecbase.Operator

NewConstNullOp creates a new operator that produces a constant (untyped) NULL value at index outputIdx.

func NewConstOp Uses

func NewConstOp(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    t *types.T,
    constVal interface{},
    outputIdx int,
) (colexecbase.Operator, error)

NewConstOp creates a new operator that produces a constant value constVal of type t at index outputIdx.
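
For example, a minimal sketch of appending an INT8 column whose every row equals 42 (the helper name is hypothetical; allocator and input come from the surrounding plan, and the int64 physical representation of INT8 is an assumption stated here):

func appendConstColumn(
    allocator *colmem.Allocator, input colexecbase.Operator, outputIdx int,
) (colexecbase.Operator, error) {
    // The physical Go representation of an INT8 constant is assumed to be int64.
    return colexec.NewConstOp(allocator, input, types.Int, int64(42), outputIdx)
}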

func NewCountOp Uses

func NewCountOp(allocator *colmem.Allocator, input colexecbase.Operator) colexecbase.Operator

NewCountOp returns a new count operator that counts the rows in its input.

func NewDeselectorOp Uses

func NewDeselectorOp(
    allocator *colmem.Allocator, input colexecbase.Operator, typs []*types.T,
) colexecbase.Operator

NewDeselectorOp creates a new deselector operator on the given input operator with the given column types.

func NewExternalHashJoiner Uses

func NewExternalHashJoiner(
    unlimitedAllocator *colmem.Allocator,
    spec HashJoinerSpec,
    leftInput, rightInput colexecbase.Operator,
    memoryLimit int64,
    diskQueueCfg colcontainer.DiskQueueCfg,
    fdSemaphore semaphore.Semaphore,
    createReusableDiskBackedSorter func(input colexecbase.Operator, inputTypes []*types.T, orderingCols []execinfrapb.Ordering_Column, maxNumberPartitions int) (colexecbase.Operator, error),
    numForcedRepartitions int,
    delegateFDAcquisitions bool,
    diskAcc *mon.BoundAccount,
) colexecbase.Operator

NewExternalHashJoiner returns a disk-backed hash joiner.

- unlimitedAllocator must have been created with a memory account derived from an unlimited memory monitor. It will be used by several internal components of the external hash joiner, which is responsible for making sure that the components stay within the memory limit.
- numForcedRepartitions is the number of times that the external hash joiner is forced to recursively repartition (even if it is otherwise not needed). It should be non-zero only in tests.
- delegateFDAcquisitions specifies whether the external hash joiner should let the partitioned disk queues acquire file descriptors instead of acquiring them up front in Next. It should be true only in tests.

func NewExternalSorter Uses

func NewExternalSorter(
    ctx context.Context,
    unlimitedAllocator *colmem.Allocator,
    standaloneMemAccount *mon.BoundAccount,
    input colexecbase.Operator,
    inputTypes []*types.T,
    ordering execinfrapb.Ordering,
    memoryLimit int64,
    maxNumberPartitions int,
    delegateFDAcquisitions bool,
    diskQueueCfg colcontainer.DiskQueueCfg,
    fdSemaphore semaphore.Semaphore,
    diskAcc *mon.BoundAccount,
) colexecbase.Operator

NewExternalSorter returns a disk-backed general sort operator.

- ctx is the same context that standaloneMemAccount was created with.
- unlimitedAllocator must have been created with a memory account derived from an unlimited memory monitor. It will be used by several internal components of the external sort, which is responsible for making sure that the components stay within the memory limit.
- standaloneMemAccount must be a memory account derived from an unlimited memory monitor with a standalone budget. It will be used by inputPartitioningOperator to "partition" the input according to the memory limit. The budget *must* be standalone because we don't want to double count the memory (the memory under the batches will be accounted for with the unlimitedAllocator).
- maxNumberPartitions (when non-zero) overrides the semi-dynamically computed maximum number of partitions to have at once.
- delegateFDAcquisitions specifies whether the external sorter should let the partitioned disk queue acquire file descriptors instead of acquiring them up front in Next. This should only be true in tests.

func NewHashAggregator Uses

func NewHashAggregator(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    typs []*types.T,
    aggFns []execinfrapb.AggregatorSpec_Func,
    groupCols []uint32,
    aggCols [][]uint32,
) (colexecbase.Operator, error)

NewHashAggregator creates a hash aggregator on the given grouping columns. The input specifications to this function are the same as that of the NewOrderedAggregator function.

func NewHashJoiner Uses

func NewHashJoiner(
    allocator *colmem.Allocator, spec HashJoinerSpec, leftSource, rightSource colexecbase.Operator,
) colexecbase.Operator

NewHashJoiner creates a new equality hash join operator on the left and right input tables.

func NewInvariantsChecker Uses

func NewInvariantsChecker(input colexecbase.Operator) colexecbase.Operator

NewInvariantsChecker creates a new invariantsChecker.

func NewIsNullProjOp Uses

func NewIsNullProjOp(
    allocator *colmem.Allocator, input colexecbase.Operator, colIdx, outputIdx int, negate bool,
) colexecbase.Operator

NewIsNullProjOp returns a new isNullProjOp.

func NewIsNullSelOp Uses

func NewIsNullSelOp(input colexecbase.Operator, colIdx int, negate bool) colexecbase.Operator

NewIsNullSelOp returns a new isNullSelOp.
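
A minimal sketch of an IS NOT NULL filter on column 0 (the helper name is hypothetical; negate is assumed to flip the IS NULL check into IS NOT NULL):

func filterNotNull(input colexecbase.Operator) colexecbase.Operator {
    // negate=true keeps the tuples whose 0th column is not NULL (assumed semantics).
    return colexec.NewIsNullSelOp(input, 0 /* colIdx */, true /* negate */)
}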

func NewLimitOp Uses

func NewLimitOp(input colexecbase.Operator, limit int) colexecbase.Operator

NewLimitOp returns a new limit operator with the given limit.

func NewNoop Uses

func NewNoop(input colexecbase.Operator) colexecbase.Operator

NewNoop returns a new noop Operator.

func NewOffsetOp Uses

func NewOffsetOp(input colexecbase.Operator, offset int) colexecbase.Operator

NewOffsetOp returns a new offset operator with the given offset.
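
Since both operators simply wrap their input, they compose directly. A sketch of the vectorized equivalent of `... OFFSET 10 LIMIT 5` (the helper name is hypothetical):

func applyOffsetAndLimit(input colexecbase.Operator) colexecbase.Operator {
    // Drop the first 10 tuples, then emit at most 5 of the remainder.
    withOffset := colexec.NewOffsetOp(input, 10)
    return colexec.NewLimitOp(withOffset, 5)
}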

func NewOneInputDiskSpiller Uses

func NewOneInputDiskSpiller(
    input colexecbase.Operator,
    inMemoryOp colexecbase.BufferingInMemoryOperator,
    inMemoryMemMonitorName string,
    diskBackedOpConstructor func(input colexecbase.Operator) colexecbase.Operator,
    spillingCallbackFn func(),
) colexecbase.Operator

NewOneInputDiskSpiller returns a new oneInputDiskSpiller. It takes the following arguments:

- inMemoryOp - the in-memory operator that will be consuming input and doing computations until it either successfully processes the whole input or reaches its memory limit.
- inMemoryMemMonitorName - the name of the memory monitor of the in-memory operator. diskSpiller will catch an OOM error only if this name is contained within the error message.
- diskBackedOpConstructor - the function to construct the disk-backed operator when given an input operator. We take in a constructor rather than an already created operator in order to hide the complexity of the buffer exporting operator that serves as the input to the disk-backed operator.
- spillingCallbackFn - will be called when the spilling from the in-memory to the disk-backed operator occurs. It should only be set in tests.

func NewOrProjOp Uses

func NewOrProjOp(
    allocator *colmem.Allocator,
    input, leftProjOpChain, rightProjOpChain colexecbase.Operator,
    leftFeedOp, rightFeedOp *FeedOperator,
    leftInputType, rightInputType *types.T,
    leftIdx, rightIdx, outputIdx int,
) (colexecbase.Operator, error)

func NewOrderedAggregator Uses

func NewOrderedAggregator(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    typs []*types.T,
    aggFns []execinfrapb.AggregatorSpec_Func,
    groupCols []uint32,
    aggCols [][]uint32,
    isScalar bool,
) (colexecbase.Operator, error)

NewOrderedAggregator creates an ordered aggregator on the given grouping columns. aggCols is a slice where each index represents a new aggregation function. The slice at that index specifies the columns of the input batch that the aggregate function should work on.
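
To make the aggCols convention concrete, a sketch of planning COUNT(*) and SUM of column 1, grouped by column 0 (the helper name is hypothetical; the input is assumed to already be ordered on column 0):

func planCountAndSum(
    allocator *colmem.Allocator, input colexecbase.Operator, typs []*types.T,
) (colexecbase.Operator, error) {
    aggFns := []execinfrapb.AggregatorSpec_Func{
        execinfrapb.AggregatorSpec_COUNT_ROWS, // COUNT(*): no argument columns
        execinfrapb.AggregatorSpec_SUM,        // SUM of input column 1
    }
    aggCols := [][]uint32{{}, {1}}
    groupCols := []uint32{0}
    return colexec.NewOrderedAggregator(
        allocator, input, typs, aggFns, groupCols, aggCols, false, /* isScalar */
    )
}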

func NewOrderedDistinct Uses

func NewOrderedDistinct(
    input colexecbase.Operator, distinctCols []uint32, typs []*types.T,
) (colexecbase.Operator, error)

NewOrderedDistinct creates a new ordered distinct operator on the given input columns with the given types.

func NewOrdinalityOp Uses

func NewOrdinalityOp(
    allocator *colmem.Allocator, input colexecbase.Operator, outputIdx int,
) colexecbase.Operator

NewOrdinalityOp returns a new WITH ORDINALITY operator.

func NewRankOperator Uses

func NewRankOperator(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    windowFn execinfrapb.WindowerSpec_WindowFunc,
    orderingCols []execinfrapb.Ordering_Column,
    outputColIdx int,
    partitionColIdx int,
    peersColIdx int,
) (colexecbase.Operator, error)

NewRankOperator creates a new Operator that computes window functions RANK or DENSE_RANK (depending on the passed in windowFn). outputColIdx specifies in which coldata.Vec the operator should put its output (if there is no such column, a new column is appended).

func NewRelativeRankOperator Uses

func NewRelativeRankOperator(
    unlimitedAllocator *colmem.Allocator,
    memoryLimit int64,
    diskQueueCfg colcontainer.DiskQueueCfg,
    fdSemaphore semaphore.Semaphore,
    input colexecbase.Operator,
    inputTypes []*types.T,
    windowFn execinfrapb.WindowerSpec_WindowFunc,
    orderingCols []execinfrapb.Ordering_Column,
    outputColIdx int,
    partitionColIdx int,
    peersColIdx int,
    diskAcc *mon.BoundAccount,
) (colexecbase.Operator, error)

NewRelativeRankOperator creates a new Operator that computes window functions PERCENT_RANK or CUME_DIST (depending on the passed in windowFn). outputColIdx specifies in which coldata.Vec the operator should put its output (if there is no such column, a new column is appended).

func NewRowNumberOperator Uses

func NewRowNumberOperator(
    allocator *colmem.Allocator, input colexecbase.Operator, outputColIdx int, partitionColIdx int,
) colexecbase.Operator

NewRowNumberOperator creates a new Operator that computes window function ROW_NUMBER. outputColIdx specifies in which coldata.Vec the operator should put its output (if there is no such column, a new column is appended).

func NewSimpleProjectOp Uses

func NewSimpleProjectOp(
    input colexecbase.Operator, numInputCols int, projection []uint32,
) colexecbase.Operator

NewSimpleProjectOp returns a new simpleProjectOp that applies a simple projection on the columns in its input batch, returning a new batch with only the columns in the projection slice, in order. In a degenerate case when input already outputs batches that satisfy the projection, a simpleProjectOp is not planned and input is returned.
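
A minimal sketch, keeping only columns 2 and 0 (in that order) from a 4-column input, roughly `SELECT c2, c0 FROM ...` (the helper name is hypothetical):

func keepColumns(input colexecbase.Operator) colexecbase.Operator {
    return colexec.NewSimpleProjectOp(input, 4 /* numInputCols */, []uint32{2, 0})
}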

func NewSingleTupleNoInputOp Uses

func NewSingleTupleNoInputOp(allocator *colmem.Allocator) colexecbase.Operator

NewSingleTupleNoInputOp creates a new Operator which returns a batch of length 1 with no actual columns on the first call to Next() and zero-length batches on all consecutive calls.

func NewSortChunks Uses

func NewSortChunks(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    inputTypes []*types.T,
    orderingCols []execinfrapb.Ordering_Column,
    matchLen int,
) (colexecbase.Operator, error)

NewSortChunks returns a new sort chunks operator, which sorts its input on the columns given in orderingCols. The inputTypes must correspond 1-1 with the columns in the input operator. The input tuples must be sorted on first matchLen columns.

func NewSorter Uses

func NewSorter(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    inputTypes []*types.T,
    orderingCols []execinfrapb.Ordering_Column,
) (colexecbase.Operator, error)

NewSorter returns a new sort operator, which sorts its input on the columns given in orderingCols. The inputTypes must correspond 1-1 with the columns in the input operator.

func NewTopKSorter Uses

func NewTopKSorter(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    inputTypes []*types.T,
    orderingCols []execinfrapb.Ordering_Column,
    k int,
) colexecbase.Operator

NewTopKSorter returns a new sort operator, which sorts its input on the columns given in orderingCols and returns the first K rows. The inputTypes must correspond 1-1 with the columns in the input operator.
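
A sketch of the vectorized equivalent of `ORDER BY c0 ASC LIMIT 10` (the helper name is hypothetical; the Ordering_Column literal follows the execinfrapb proto):

func planTop10(
    allocator *colmem.Allocator, input colexecbase.Operator, inputTypes []*types.T,
) colexecbase.Operator {
    ordering := []execinfrapb.Ordering_Column{
        {ColIdx: 0, Direction: execinfrapb.Ordering_Column_ASC},
    }
    return colexec.NewTopKSorter(allocator, input, inputTypes, ordering, 10 /* k */)
}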

func NewTwoInputDiskSpiller Uses

func NewTwoInputDiskSpiller(
    inputOne, inputTwo colexecbase.Operator,
    inMemoryOp colexecbase.BufferingInMemoryOperator,
    inMemoryMemMonitorName string,
    diskBackedOpConstructor func(inputOne, inputTwo colexecbase.Operator) colexecbase.Operator,
    spillingCallbackFn func(),
) colexecbase.Operator

NewTwoInputDiskSpiller returns a new twoInputDiskSpiller. It takes the following arguments:

- inMemoryOp - the in-memory operator that will be consuming inputs and doing computations until it either successfully processes the whole inputs or reaches its memory limit.
- inMemoryMemMonitorName - the name of the memory monitor of the in-memory operator. diskSpiller will catch an OOM error only if this name is contained within the error message.
- diskBackedOpConstructor - the function to construct the disk-backed operator when given two input operators. We take in a constructor rather than an already created operator in order to hide the complexity of the buffer exporting operators that serve as inputs to the disk-backed operator.
- spillingCallbackFn - will be called when the spilling from the in-memory to the disk-backed operator occurs. It should only be set in tests.

func NewUnorderedDistinct Uses

func NewUnorderedDistinct(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    distinctCols []uint32,
    typs []*types.T,
    numHashBuckets uint64,
) colexecbase.Operator

NewUnorderedDistinct creates an unordered distinct on the given distinct columns. numHashBuckets determines the number of buckets that the hash table is created with.
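
A minimal sketch of deduplicating on columns 0 and 1 with the package's default hash table size (the helper name is hypothetical):

func planDistinctOnTwoCols(
    allocator *colmem.Allocator, input colexecbase.Operator, typs []*types.T,
) colexecbase.Operator {
    return colexec.NewUnorderedDistinct(
        allocator, input, []uint32{0, 1}, typs, colexec.HashTableNumBuckets,
    )
}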

func NewWindowPeerGrouper Uses

func NewWindowPeerGrouper(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    typs []*types.T,
    orderingCols []execinfrapb.Ordering_Column,
    partitionColIdx int,
    outputColIdx int,
) (op colexecbase.Operator, err error)

NewWindowPeerGrouper creates a new Operator that puts 'true' in the outputColIdx'th column (which is appended if needed) for every tuple that is the first within its peer group. Peers are tuples that belong to the same partition and are equal on the ordering columns. If orderingCols is empty, then all tuples within the partition are peers.

- partitionColIdx, if not columnOmitted, *must* specify the column in which 'true' indicates the start of a new partition.

NOTE: the input *must* already be ordered on ordCols.

func NewWindowSortingPartitioner Uses

func NewWindowSortingPartitioner(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    inputTyps []*types.T,
    partitionIdxs []uint32,
    ordCols []execinfrapb.Ordering_Column,
    partitionColIdx int,
    createDiskBackedSorter func(input colexecbase.Operator, inputTypes []*types.T, orderingCols []execinfrapb.Ordering_Column) (colexecbase.Operator, error),
) (op colexecbase.Operator, err error)

NewWindowSortingPartitioner creates a new colexec.Operator that orders input first based on the partitionIdxs columns and second on ordCols (i.e. it handles both PARTITION BY and ORDER BY clauses of a window function) and puts true in partitionColIdx'th column (which is appended if needed) for every tuple that is the first within its partition.

func NewZeroOp Uses

func NewZeroOp(input colexecbase.Operator) colexecbase.Operator

NewZeroOp creates a new operator which just returns an empty batch.

func NewZeroOpNoInput Uses

func NewZeroOpNoInput() colexecbase.Operator

NewZeroOpNoInput creates a new operator which just returns an empty batch and doesn't have an input.

func OrderedDistinctColsToOperators Uses

func OrderedDistinctColsToOperators(
    input colexecbase.Operator, distinctCols []uint32, typs []*types.T,
) (colexecbase.Operator, []bool, error)

OrderedDistinctColsToOperators is a utility function that given an input and a slice of columns, creates a chain of distinct operators and returns the last distinct operator in that chain as well as its output column.

func PhysicalTypeColVecToDatum Uses

func PhysicalTypeColVecToDatum(
    converted []tree.Datum, col coldata.Vec, length int, sel []int, da *sqlbase.DatumAlloc,
)

type BatchSchemaSubsetEnforcer Uses

type BatchSchemaSubsetEnforcer struct {
    NonExplainable
    // contains filtered or unexported fields
}

BatchSchemaSubsetEnforcer is similar to vectorTypeEnforcer in its purpose, but it enforces that the subset of the columns of the non-zero length batch satisfies the desired schema. It needs to wrap the input to a "projecting" operator that internally uses other "projecting" operators (for example, caseOp and logical projection operators). This operator supports type schemas with unsupported types, in which case an "unknown" vector can be appended in the corresponding position.

The word "subset" is actually more like a "range", but we chose the former since the latter is overloaded.

NOTE: the type schema passed into BatchSchemaSubsetEnforcer *must* include the output type of the Operator that the enforcer will be the input to.

func NewBatchSchemaSubsetEnforcer Uses

func NewBatchSchemaSubsetEnforcer(
    allocator *colmem.Allocator,
    input colexecbase.Operator,
    typs []*types.T,
    subsetStartIdx, subsetEndIdx int,
) *BatchSchemaSubsetEnforcer

NewBatchSchemaSubsetEnforcer creates a new BatchSchemaSubsetEnforcer.

- subsetStartIdx and subsetEndIdx define the boundaries of the range of columns that the projecting operator and its internal projecting operators own.

func (*BatchSchemaSubsetEnforcer) Close Uses

func (c *BatchSchemaSubsetEnforcer) Close(ctx context.Context) error

func (*BatchSchemaSubsetEnforcer) Init Uses

func (e *BatchSchemaSubsetEnforcer) Init()

Init implements the colexecbase.Operator interface.

func (*BatchSchemaSubsetEnforcer) Next Uses

func (e *BatchSchemaSubsetEnforcer) Next(ctx context.Context) coldata.Batch

Next implements the colexecbase.Operator interface.

func (*BatchSchemaSubsetEnforcer) SetTypes Uses

func (e *BatchSchemaSubsetEnforcer) SetTypes(typs []*types.T)

SetTypes sets the types of this schema subset enforcer, and sets the end of the range of enforced columns to the length of the input types.

type BoolVecComparator Uses

type BoolVecComparator struct {
    // contains filtered or unexported fields
}

type BytesVecComparator Uses

type BytesVecComparator struct {
    // contains filtered or unexported fields
}

type CallbackCloser Uses

type CallbackCloser struct {
    CloseCb func(context.Context) error
}

CallbackCloser is a utility struct that implements the Closer interface by calling a provided callback.

func (*CallbackCloser) Close Uses

func (c *CallbackCloser) Close(ctx context.Context) error

Close implements the Closer interface.

type CancelChecker Uses

type CancelChecker struct {
    OneInputNode
    NonExplainable
    // contains filtered or unexported fields
}

CancelChecker is an Operator that checks whether query cancellation has occurred. The check happens on every batch.

func NewCancelChecker Uses

func NewCancelChecker(op colexecbase.Operator) *CancelChecker

NewCancelChecker creates a new CancelChecker.

func (*CancelChecker) Init Uses

func (c *CancelChecker) Init()

Init is part of the Operator interface.

func (*CancelChecker) Next Uses

func (c *CancelChecker) Next(ctx context.Context) coldata.Batch

Next is part of the Operator interface.

type Closer Uses

type Closer interface {
    Close(ctx context.Context) error
}

Closer is an object that releases resources when Close is called. Note that this interface must be implemented by all operators that could be planned on top of other operators that do actually need to release the resources (e.g. if we have a simple project on top of a disk-backed operator, that simple project needs to implement this interface so that the Close() call can be propagated correctly).

type Closers Uses

type Closers []Closer

Closers is a slice of Closers.

func (Closers) CloseAndLogOnErr Uses

func (c Closers) CloseAndLogOnErr(ctx context.Context, prefix string)

CloseAndLogOnErr closes all Closers and logs the error if the log verbosity is 1 or higher. The given prefix is prepended to the log message.
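
A sketch of how CallbackCloser and Closers combine when tearing down a flow (the helper name and prefix are hypothetical; the context package is assumed to be imported):

func releaseAll(ctx context.Context, cleanup func(context.Context) error, rest ...colexec.Closer) {
    closers := colexec.Closers{&colexec.CallbackCloser{CloseCb: cleanup}}
    closers = append(closers, rest...)
    // Errors are logged (at verbosity 1 or higher) rather than returned.
    closers.CloseAndLogOnErr(ctx, "example flow")
}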

type Columnarizer Uses

type Columnarizer struct {
    execinfra.ProcessorBase
    NonExplainable
    // contains filtered or unexported fields
}

Columnarizer turns an execinfra.RowSource input into an Operator output, by reading the input in chunks of size coldata.BatchSize() and converting each chunk into a coldata.Batch column by column.

func NewColumnarizer Uses

func NewColumnarizer(
    ctx context.Context,
    allocator *colmem.Allocator,
    flowCtx *execinfra.FlowCtx,
    processorID int32,
    input execinfra.RowSource,
) (*Columnarizer, error)

NewColumnarizer returns a new Columnarizer.

func (*Columnarizer) Child Uses

func (c *Columnarizer) Child(nth int, verbose bool) execinfra.OpNode

Child is part of the Operator interface.

func (*Columnarizer) ChildCount Uses

func (c *Columnarizer) ChildCount(verbose bool) int

ChildCount is part of the Operator interface.

func (*Columnarizer) Close Uses

func (c *Columnarizer) Close(ctx context.Context) error

Close is part of the Operator interface.

func (*Columnarizer) DrainMeta Uses

func (c *Columnarizer) DrainMeta(ctx context.Context) []execinfrapb.ProducerMetadata

DrainMeta is part of the MetadataSource interface.

func (*Columnarizer) Init Uses

func (c *Columnarizer) Init()

Init is part of the Operator interface.

func (*Columnarizer) Input Uses

func (c *Columnarizer) Input() execinfra.RowSource

Input returns the input of this columnarizer.

func (*Columnarizer) Next Uses

func (c *Columnarizer) Next(context.Context) coldata.Batch

Next is part of the Operator interface.

func (*Columnarizer) Run Uses

func (c *Columnarizer) Run(context.Context)

Run is part of the execinfra.Processor interface.

Columnarizers are not expected to be Run, so we prohibit calling this method on them.

type DatumVecComparator Uses

type DatumVecComparator struct {
    // contains filtered or unexported fields
}

type DecimalVecComparator Uses

type DecimalVecComparator struct {
    // contains filtered or unexported fields
}

type ExprDeserialization Uses

type ExprDeserialization int

ExprDeserialization describes how expression deserialization should be handled in the vectorized engine.

const (
    // DefaultExprDeserialization is the default way of handling expression
    // deserialization in which case LocalExpr field is used if set.
    DefaultExprDeserialization ExprDeserialization = iota
    // ForcedExprDeserialization is the way of handling expression
    // deserialization in which case LocalExpr field is completely ignored and
    // the serialized representation is always deserialized.
    ForcedExprDeserialization
)

type ExprHelper Uses

type ExprHelper interface {
    // ProcessExpr processes the given expression and returns a well-typed
    // expression.
    ProcessExpr(execinfrapb.Expression, *tree.EvalContext, []*types.T) (tree.TypedExpr, error)
}

ExprHelper is a utility interface that helps with expression handling in the vectorized engine.

func NewDefaultExprHelper Uses

func NewDefaultExprHelper() ExprHelper

NewDefaultExprHelper returns an ExprHelper that takes advantage of an already well-typed expression in LocalExpr when set.

func NewExprHelper Uses

func NewExprHelper(exprDeserialization ExprDeserialization) ExprHelper

NewExprHelper returns a new ExprHelper. exprDeserialization determines whether the LocalExpr field is ignored by the helper.
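
A minimal sketch contrasting the two constructors (the helper name is hypothetical):

func newExprHelpers() (regular, forced colexec.ExprHelper) {
    // The default helper reuses LocalExpr when it is set ...
    regular = colexec.NewDefaultExprHelper()
    // ... while the forced variant always deserializes the serialized
    // representation (typically only useful in tests).
    forced = colexec.NewExprHelper(colexec.ForcedExprDeserialization)
    return regular, forced
}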

type FeedOperator Uses

type FeedOperator struct {
    colexecbase.ZeroInputNode
    NonExplainable
    // contains filtered or unexported fields
}

FeedOperator is used to feed an Operator chain with input by manually setting the next batch.

func NewFeedOperator Uses

func NewFeedOperator() *FeedOperator

NewFeedOperator returns a new feed operator.

func (FeedOperator) Init Uses

func (FeedOperator) Init()

Init implements the colexecbase.Operator interface.

func (*FeedOperator) Next Uses

func (o *FeedOperator) Next(context.Context) coldata.Batch

Next implements the colexecbase.Operator interface.

type Float64VecComparator Uses

type Float64VecComparator struct {
    // contains filtered or unexported fields
}

type HashJoinerSpec Uses

type HashJoinerSpec struct {
    // contains filtered or unexported fields
}

HashJoinerSpec is the specification for a hash join operator. The hash joiner performs a join on the left and right's equal columns and returns combined left and right output columns.

func MakeHashJoinerSpec Uses

func MakeHashJoinerSpec(
    joinType sqlbase.JoinType,
    leftEqCols []uint32,
    rightEqCols []uint32,
    leftTypes []*types.T,
    rightTypes []*types.T,
    rightDistinct bool,
) (HashJoinerSpec, error)

MakeHashJoinerSpec creates a specification for a columnar hash join operator. leftEqCols and rightEqCols specify the equality columns. leftTypes and rightTypes specify the input column types of the two sources. rightDistinct indicates whether the equality columns of the right source form a key.
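
A sketch of wiring the spec into NewHashJoiner for a join on the 0th column of each side (the helper name is hypothetical; the join type and the two input operators come from the surrounding plan):

func planHashJoinOnFirstCols(
    allocator *colmem.Allocator,
    joinType sqlbase.JoinType,
    left, right colexecbase.Operator,
    leftTypes, rightTypes []*types.T,
) (colexecbase.Operator, error) {
    spec, err := colexec.MakeHashJoinerSpec(
        joinType,
        []uint32{0}, []uint32{0}, // equality columns of the left and right sides
        leftTypes, rightTypes,
        false, /* rightDistinct */
    )
    if err != nil {
        return nil, err
    }
    return colexec.NewHashJoiner(allocator, spec, left, right), nil
}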

type HashRouter Uses

type HashRouter struct {
    OneInputNode
    // contains filtered or unexported fields
}

HashRouter hashes values according to provided hash columns and computes a destination for each row. These destinations are exposed as Operators returned by the constructor.

func NewHashRouter Uses

func NewHashRouter(
    unlimitedAllocators []*colmem.Allocator,
    input colexecbase.Operator,
    types []*types.T,
    hashCols []uint32,
    memoryLimit int64,
    diskQueueCfg colcontainer.DiskQueueCfg,
    fdSemaphore semaphore.Semaphore,
    diskAccounts []*mon.BoundAccount,
    toDrain []execinfrapb.MetadataSource,
    toClose []Closer,
) (*HashRouter, []colexecbase.DrainableOperator)

NewHashRouter creates a new hash router that consumes coldata.Batches from input and hashes each row according to hashCols to one of the outputs returned as Operators. The number of allocators provided will determine the number of outputs returned. Note that each allocator must be unlimited; memory will be limited by comparing memory use in the allocator with the memoryLimit argument. Each Operator must have an independent allocator (this means that each allocator should be linked to an independent mem account) as Operator.Next will usually be called concurrently between different outputs. Similarly, each output needs to have a separate disk account.

func (*HashRouter) Run Uses

func (r *HashRouter) Run(ctx context.Context)

Run runs the HashRouter. Batches are read from the input and pushed to an output calculated by hashing columns. Cancel the given context to terminate early.

type Int16VecComparator Uses

type Int16VecComparator struct {
    // contains filtered or unexported fields
}

type Int32VecComparator Uses

type Int32VecComparator struct {
    // contains filtered or unexported fields
}

type Int64VecComparator Uses

type Int64VecComparator struct {
    // contains filtered or unexported fields
}

type InternalMemoryOperator Uses

type InternalMemoryOperator interface {
    colexecbase.Operator
    // InternalMemoryUsage reports the internal memory usage (in bytes) of an
    // operator.
    InternalMemoryUsage() int
}

InternalMemoryOperator is an interface that operators which use internal memory need to implement. "Internal memory" is defined as memory that is "private" to the operator and is not exposed to the outside; notably, it does *not* include any coldata.Batch'es and coldata.Vec's.

type IntervalVecComparator Uses

type IntervalVecComparator struct {
    // contains filtered or unexported fields
}

type Materializer Uses

type Materializer struct {
    execinfra.ProcessorBase
    NonExplainable
    // contains filtered or unexported fields
}

Materializer converts an Operator input into a execinfra.RowSource.

func NewMaterializer Uses

func NewMaterializer(
    flowCtx *execinfra.FlowCtx,
    processorID int32,
    input colexecbase.Operator,
    typs []*types.T,
    output execinfra.RowReceiver,
    metadataSourcesQueue []execinfrapb.MetadataSource,
    toClose []Closer,
    outputStatsToTrace func(),
    cancelFlow func() context.CancelFunc,
) (*Materializer, error)

NewMaterializer creates a new Materializer processor which processes the columnar data coming from input to return it as rows. Arguments:

- typs is the output types schema.
- metadataSourcesQueue are all of the metadata sources that are planned on the same node as the Materializer and that need to be drained.
- outputStatsToTrace (when tracing is enabled) finishes the stats.
- cancelFlow should return the context cancellation function that cancels the context of the flow (i.e. it is Flow.ctxCancel). It should only be non-nil in case of a root Materializer (i.e. not when we're wrapping a row source).

NOTE: the constructor does *not* take in an execinfrapb.PostProcessSpec because we expect input to handle that for us.

func (*Materializer) Child Uses

func (m *Materializer) Child(nth int, verbose bool) execinfra.OpNode

Child is part of the execinfra.OpNode interface.

func (*Materializer) ChildCount Uses

func (m *Materializer) ChildCount(verbose bool) int

ChildCount is part of the execinfra.OpNode interface.

func (*Materializer) ConsumerClosed Uses

func (m *Materializer) ConsumerClosed()

ConsumerClosed is part of the execinfra.RowSource interface.

func (*Materializer) ConsumerDone Uses

func (m *Materializer) ConsumerDone()

ConsumerDone is part of the execinfra.RowSource interface.

func (*Materializer) InternalClose Uses

func (m *Materializer) InternalClose() bool

InternalClose helps implement the execinfra.RowSource interface.

func (*Materializer) Next Uses

func (m *Materializer) Next() (sqlbase.EncDatumRow, *execinfrapb.ProducerMetadata)

Next is part of the execinfra.RowSource interface.

func (*Materializer) Start Uses

func (m *Materializer) Start(ctx context.Context) context.Context

Start is part of the execinfra.RowSource interface.

type NewColOperatorArgs Uses

type NewColOperatorArgs struct {
    Spec                 *execinfrapb.ProcessorSpec
    Inputs               []colexecbase.Operator
    StreamingMemAccount  *mon.BoundAccount
    ProcessorConstructor execinfra.ProcessorConstructor
    DiskQueueCfg         colcontainer.DiskQueueCfg
    FDSemaphore          semaphore.Semaphore
    ExprHelper           ExprHelper
    TestingKnobs         struct {
        // UseStreamingMemAccountForBuffering specifies whether to use
        // StreamingMemAccount when creating buffering operators and should only be
        // set to 'true' in tests. The idea behind this flag is reducing the number
        // of memory accounts and monitors we need to close, so we plumbed it into
        // the planning code so that it doesn't create extra memory monitoring
        // infrastructure (and so that we could use testMemAccount defined in
        // main_test.go).
        UseStreamingMemAccountForBuffering bool
        // SpillingCallbackFn will be called when the spilling from an in-memory to
        // disk-backed operator occurs. It should only be set in tests.
        SpillingCallbackFn func()
        // DiskSpillingDisabled specifies whether only in-memory operators should
        // be created.
        DiskSpillingDisabled bool
        // NumForcedRepartitions specifies a number of "repartitions" that a
        // disk-backed operator should be forced to perform. "Repartition" can mean
        // different things depending on the operator (for example, for hash joiner
        // it is dividing original partition into multiple new partitions; for
        // sorter it is merging already created partitions into new one before
        // proceeding to the next partition from the input).
        NumForcedRepartitions int
        // DelegateFDAcquisitions should be observed by users of a
        // PartitionedDiskQueue. During normal operations, these should acquire the
        // maximum number of file descriptors they will use from FDSemaphore up
        // front. Setting this testing knob to true disables that behavior and
        // lets the PartitionedDiskQueue interact with the semaphore as partitions
        // are opened/closed, which ensures that the number of open files never
        // exceeds what is expected.
        DelegateFDAcquisitions bool
    }
}

NewColOperatorArgs is a helper struct that encompasses all of the input arguments to NewColOperator call.

type NewColOperatorResult Uses

type NewColOperatorResult struct {
    Op               colexecbase.Operator
    ColumnTypes      []*types.T
    InternalMemUsage int
    MetadataSources  []execinfrapb.MetadataSource
    // ToClose is a slice of components that need to be Closed.
    ToClose     []Closer
    IsStreaming bool
    OpMonitors  []*mon.BytesMonitor
    OpAccounts  []*mon.BoundAccount
}

NewColOperatorResult is a helper struct that encompasses all of the return values of NewColOperator call.

type NonExplainable Uses

type NonExplainable interface {
    // contains filtered or unexported methods
}

NonExplainable is a marker interface which identifies an Operator that should be omitted from the output of EXPLAIN (VEC). Note that the VERBOSE explain option will override the omitting behavior.

type OneInputNode Uses

type OneInputNode struct {
    // contains filtered or unexported fields
}

OneInputNode is an execinfra.OpNode with a single Operator input.

func NewOneInputNode Uses

func NewOneInputNode(input colexecbase.Operator) OneInputNode

NewOneInputNode returns an execinfra.OpNode with a single Operator input.

func (OneInputNode) Child Uses

func (n OneInputNode) Child(nth int, verbose bool) execinfra.OpNode

Child implements the execinfra.OpNode interface.

func (OneInputNode) ChildCount Uses

func (OneInputNode) ChildCount(verbose bool) int

ChildCount implements the execinfra.OpNode interface.

func (OneInputNode) Input Uses

func (n OneInputNode) Input() colexecbase.Operator

Input returns the single input of this OneInputNode as an Operator.

type OperatorInitStatus Uses

type OperatorInitStatus int

OperatorInitStatus indicates whether Init method has already been called on an Operator.

const (
    // OperatorNotInitialized indicates that Init has not been called yet.
    OperatorNotInitialized OperatorInitStatus = iota
    // OperatorInitialized indicates that Init has already been called.
    OperatorInitialized
)

type OrderedSynchronizer Uses

type OrderedSynchronizer struct {
    // contains filtered or unexported fields
}

OrderedSynchronizer receives rows from multiple inputs and produces a single stream of rows, ordered according to a set of columns. The rows in each input stream are assumed to be ordered according to the same set of columns.

func NewOrderedSynchronizer Uses

func NewOrderedSynchronizer(
    allocator *colmem.Allocator,
    inputs []SynchronizerInput,
    typs []*types.T,
    ordering sqlbase.ColumnOrdering,
) (*OrderedSynchronizer, error)

NewOrderedSynchronizer creates a new OrderedSynchronizer.

func (*OrderedSynchronizer) Child Uses

func (o *OrderedSynchronizer) Child(nth int, verbose bool) execinfra.OpNode

Child implements the execinfra.OpNode interface.

func (*OrderedSynchronizer) ChildCount Uses

func (o *OrderedSynchronizer) ChildCount(verbose bool) int

ChildCount implements the execinfra.OpNode interface.

func (*OrderedSynchronizer) Close Uses

func (o *OrderedSynchronizer) Close(ctx context.Context) error

func (*OrderedSynchronizer) DrainMeta Uses

func (o *OrderedSynchronizer) DrainMeta(ctx context.Context) []execinfrapb.ProducerMetadata

func (*OrderedSynchronizer) Init Uses

func (o *OrderedSynchronizer) Init()

Init is part of the Operator interface.

func (*OrderedSynchronizer) Len Uses

func (o *OrderedSynchronizer) Len() int

Len is part of heap.Interface and is only meant to be used internally.

func (*OrderedSynchronizer) Less Uses

func (o *OrderedSynchronizer) Less(i, j int) bool

Less is part of heap.Interface and is only meant to be used internally.

func (*OrderedSynchronizer) Next Uses

func (o *OrderedSynchronizer) Next(ctx context.Context) coldata.Batch

Next is part of the Operator interface.

func (*OrderedSynchronizer) Pop Uses

func (o *OrderedSynchronizer) Pop() interface{}

Pop is part of heap.Interface and is only meant to be used internally.

func (*OrderedSynchronizer) Push Uses

func (o *OrderedSynchronizer) Push(x interface{})

Push is part of heap.Interface and is only meant to be used internally.

func (*OrderedSynchronizer) Swap Uses

func (o *OrderedSynchronizer) Swap(i, j int)

Swap is part of heap.Interface and is only meant to be used internally.

type ParallelUnorderedSynchronizer Uses

type ParallelUnorderedSynchronizer struct {
    // contains filtered or unexported fields
}

ParallelUnorderedSynchronizer is an Operator that combines multiple Operator streams into one.

func NewParallelUnorderedSynchronizer Uses

func NewParallelUnorderedSynchronizer(
    inputs []SynchronizerInput, wg *sync.WaitGroup,
) *ParallelUnorderedSynchronizer

NewParallelUnorderedSynchronizer creates a new ParallelUnorderedSynchronizer. On the first call to Next, len(inputs) goroutines are spawned to read each input asynchronously (to not be limited by a slow input). These will increment the passed-in WaitGroup and decrement when done. It is also guaranteed that these spawned goroutines will have completed on any error or zero-length batch received from Next.

func (*ParallelUnorderedSynchronizer) Child Uses

func (s *ParallelUnorderedSynchronizer) Child(nth int, verbose bool) execinfra.OpNode

Child implements the execinfra.OpNode interface.

func (*ParallelUnorderedSynchronizer) ChildCount Uses

func (s *ParallelUnorderedSynchronizer) ChildCount(verbose bool) int

ChildCount implements the execinfra.OpNode interface.

func (*ParallelUnorderedSynchronizer) DrainMeta Uses

func (s *ParallelUnorderedSynchronizer) DrainMeta(
    ctx context.Context,
) []execinfrapb.ProducerMetadata

DrainMeta is part of the MetadataSource interface.

func (*ParallelUnorderedSynchronizer) Init Uses

func (s *ParallelUnorderedSynchronizer) Init()

Init is part of the Operator interface.

func (*ParallelUnorderedSynchronizer) Next Uses

func (s *ParallelUnorderedSynchronizer) Next(ctx context.Context) coldata.Batch

Next is part of the Operator interface.

type ResettableOperator Uses

type ResettableOperator interface {
    colexecbase.Operator
    // contains filtered or unexported methods
}

ResettableOperator is an Operator that can be reset.

func NewMergeJoinOp Uses

func NewMergeJoinOp(
    unlimitedAllocator *colmem.Allocator,
    memoryLimit int64,
    diskQueueCfg colcontainer.DiskQueueCfg,
    fdSemaphore semaphore.Semaphore,
    joinType sqlbase.JoinType,
    left colexecbase.Operator,
    right colexecbase.Operator,
    leftTypes []*types.T,
    rightTypes []*types.T,
    leftOrdering []execinfrapb.Ordering_Column,
    rightOrdering []execinfrapb.Ordering_Column,
    diskAcc *mon.BoundAccount,
) (ResettableOperator, error)

NewMergeJoinOp returns a new merge join operator with the given spec that implements sort-merge join. It performs a merge on the left and right input sources, based on the equality columns, assuming both inputs are in sorted order.

type SerialUnorderedSynchronizer Uses

type SerialUnorderedSynchronizer struct {
    // contains filtered or unexported fields
}

SerialUnorderedSynchronizer is an Operator that combines multiple Operator streams into one. It reads its inputs one by one until each one is exhausted, at which point it moves to the next input. See ParallelUnorderedSynchronizer for a parallel implementation. The serial one is used when concurrency is undesirable - for example when the whole query is planned on the gateway and we want to run it in the RootTxn.

func NewSerialUnorderedSynchronizer Uses

func NewSerialUnorderedSynchronizer(inputs []SynchronizerInput) *SerialUnorderedSynchronizer

NewSerialUnorderedSynchronizer creates a new SerialUnorderedSynchronizer.

func (*SerialUnorderedSynchronizer) Child Uses

func (s *SerialUnorderedSynchronizer) Child(nth int, verbose bool) execinfra.OpNode

Child implements the execinfra.OpNode interface.

func (*SerialUnorderedSynchronizer) ChildCount Uses

func (s *SerialUnorderedSynchronizer) ChildCount(verbose bool) int

ChildCount implements the execinfra.OpNode interface.

func (*SerialUnorderedSynchronizer) Close Uses

func (s *SerialUnorderedSynchronizer) Close(ctx context.Context) error

Close is part of the Closer interface.

func (*SerialUnorderedSynchronizer) DrainMeta Uses

func (s *SerialUnorderedSynchronizer) DrainMeta(
    ctx context.Context,
) []execinfrapb.ProducerMetadata

DrainMeta is part of the MetadataSource interface.

func (*SerialUnorderedSynchronizer) Init Uses

func (s *SerialUnorderedSynchronizer) Init()

Init is part of the Operator interface.

func (*SerialUnorderedSynchronizer) Next Uses

func (s *SerialUnorderedSynchronizer) Next(ctx context.Context) coldata.Batch

Next is part of the Operator interface.

type SynchronizerInput Uses

type SynchronizerInput struct {
    // Op is the input Operator.
    Op  colexecbase.Operator
    // MetadataSources are metadata sources in the input tree that should be
    // drained in the same goroutine as Op.
    MetadataSources execinfrapb.MetadataSources
    // ToClose are Closers in the input tree that should be closed in the same
    // goroutine as Op.
    ToClose Closers
}

SynchronizerInput is a wrapper over a colexecbase.Operator that a synchronizer goroutine will be calling Next on. An accompanying []execinfrapb.MetadataSource may also be specified, in which case DrainMeta will be called from the same goroutine.
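
A minimal sketch of wrapping a set of operators for a SerialUnorderedSynchronizer (the helper name is hypothetical; metadata sources and closers are left empty for brevity):

func combineSerially(ops []colexecbase.Operator) *colexec.SerialUnorderedSynchronizer {
    inputs := make([]colexec.SynchronizerInput, len(ops))
    for i, op := range ops {
        inputs[i] = colexec.SynchronizerInput{Op: op}
    }
    return colexec.NewSerialUnorderedSynchronizer(inputs)
}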

type TimestampVecComparator Uses

type TimestampVecComparator struct {
    // contains filtered or unexported fields
}

type VectorizedStatsCollector Uses

type VectorizedStatsCollector struct {
    colexecbase.Operator
    NonExplainable
    execpb.VectorizedStats
    // contains filtered or unexported fields
}

VectorizedStatsCollector collects VectorizedStats on Operators.

If two Operators are connected (i.e. one is an input to another), the corresponding VectorizedStatsCollectors are also "connected" by sharing a StopWatch.

func NewVectorizedStatsCollector Uses

func NewVectorizedStatsCollector(
    op colexecbase.Operator,
    id int32,
    idTagKey string,
    isStall bool,
    inputWatch *timeutil.StopWatch,
    memMonitors []*mon.BytesMonitor,
    diskMonitors []*mon.BytesMonitor,
    inputStatsCollectors []*VectorizedStatsCollector,
) *VectorizedStatsCollector

NewVectorizedStatsCollector creates a new VectorizedStatsCollector which wraps 'op' that corresponds to a component with either ProcessorID or StreamID 'id' (with 'idTagKey' distinguishing between the two). 'isStall' indicates whether stall or execution time is being measured. 'inputWatch' must be non-nil.

func (*VectorizedStatsCollector) Next Uses

func (vsc *VectorizedStatsCollector) Next(ctx context.Context) coldata.Batch

Next is part of the Operator interface.

func (*VectorizedStatsCollector) OutputStats Uses

func (vsc *VectorizedStatsCollector) OutputStats(
    ctx context.Context, flowID string, deterministicStats bool,
)

OutputStats outputs the vectorized stats collected by vsc into ctx.

Directories

Path    Synopsis
colbuilder
execgen
execpb

Package colexec imports 45 packages and is imported by 11 packages. Updated 2020-07-30.