
package opt

import "github.com/cockroachdb/cockroach/pkg/sql/opt"

Package opt contains the Cockroach SQL optimizer. The optimizer transforms the AST of a SQL query into a physical query plan for execution. Naive execution of a SQL query can be prohibitively expensive, because SQL specifies the desired results and not how to achieve them. A given SQL query can have thousands of equivalent query plans with vastly different execution times. The Cockroach optimizer is cost-based, meaning that it enumerates some or all of these alternate plans and chooses the one with the lowest estimated cost.

Overview

SQL query planning is often described in terms of 8 modules:

1. Properties

2. Stats

3. Cost Model

4. Memo

5. Transforms

6. Prep

7. Rewrite

8. Search

Note: Prep, Rewrite and Search could be considered phases, though this document will refer to all 8 uniformly as modules. Memo is a technique for compactly representing the forest of trees generated during Search. Stats, Properties, Cost Model and Transforms are modules that power the Prep, Rewrite and Search phases.

                   SQL query text
                         |
                   +-----v-----+  - parse SQL text according to grammar
                   |   Parse   |  - report syntax errors
                   +-----+-----+
                         |
                       (ast)
                         |
                   +-----v-----+  - fold constants, check types, resolve
                   |  Analyze  |    names
                   +-----+-----+  - annotate tree with semantic info
                         |        - report semantic errors
                       (ast+)
     +-------+           |
     | Stats +----->-----v-----+  - normalize tree with cost-agnostic
     +-------+     |   Prep    |    transforms (placeholders present)
                +-->-----+-----+  - compute initial properties
                |        |        - retrieve and attach stats
                |     (expr)      - done once per PREPARE
                |        |
                |  +-----v-----+  - capture placeholder values / timestamps
+------------+  |  |  Rewrite  |  - normalize tree with cost-agnostic
| Transforms |--+-->-----+-----+    transforms (placeholders not present)
                |        |        - done once per EXECUTE
                |     (expr)
                |        |
                +-->-----v-----+  - generate equivalent expression trees
+------------+     |  Search   |  - find lowest cost physical plan
| Cost Model +----->-----+-----+  - includes DistSQL physical planning
+------------+           |
                  (physical plan)
                         |
                   +-----v-----+
                   | Execution |
                   +-----------+

The opt-related packages implement portions of these modules, while other parts are implemented elsewhere. For example, other sql packages are used to perform name resolution and type checking which are part of the Analyze phase.

Parse

The parse phase is not discussed in this document. It transforms the SQL query text into an abstract syntax tree (AST).

Analyze

The analyze phase ensures that the AST obeys all SQL semantic rules, and annotates the AST with information that will be used by later phases. In addition, some simple transforms are applied to the AST in order to simplify handling in later phases. Semantic rules are many and varied; this document describes a few major categories of semantic checks and rewrites.

"Name resolution" binds table, column, and other references. Each name must be matched to the appropriate schema object, and an error reported if no matching object can be found. Name binding can result in AST annotations that make it easy for other components to find the target object, or rewrites that replace unbound name nodes with new nodes that are easier to handle (e.g. IndexedVar).

"Constant folding" rewrites expressions that have constant inputs. For example, 1+1 would be folded to 2. Cockroach's typing rules assume that constants have been folded, as there are some expressions that would otherwise produce a semantic error if they are not first folded.

"Type inference" automatically determines the return data type of various SQL expressions, based on the types of inputs, as well as the context in which the expression is used. The AST is annotated with the resolved types for later use.

"Type checking" ensures that all inputs to SQL expressions and statements have legal static types. For example, the CONCAT function only accepts zero or more arguments that are statically typed as strings. Violation of the typing rules produces a semantic error.

Properties

Properties are meta-information that are computed (and sometimes stored) for each node in an expression. Properties power transformations and optimization.

"Logical properties" describe the structure and content of data returned by an expression, such as whether relational output columns can contain nulls, or the data type of a scalar expression. Two expressions which are logically equivalent according to the rules of the relational algebra will return the same set of rows and columns, and will have the same set of logical properties. However, the order of the rows, naming of the columns, and other presentational aspects of the result are not governed by the logical properties.

"Physical properties" are interesting characteristics of an expression that impact its layout, presentation, or location, but not its logical content. Examples include row order, column naming, and data distribution (physical location of data ranges). Physical properties exist outside of the relational algebra, and arise from both the SQL query itself (e.g. the non-relational ORDER BY operator) and by the selection of specific implementations during optimization (e.g. a merge join requires the inputs to be sorted in a particular order).

Properties can be "required" or "derived". A required property is one specified by the SQL query text. For example, a DISTINCT clause is a required property on the set of columns of the corresponding projection -- that the tuple of columns forms a key (unique values) in the results. A derived property is one derived by the optimizer for an expression based on the properties of the child expressions. For example:

SELECT k+1 FROM kv

Once the ordering of "k" is known from kv's descriptor, the same ordering property can be derived for k+1. During optimization, for each expression with required properties, the optimizer will look at child expressions to check whether their actual properties (which can be derived) match the requirement. If they don't, the optimizer must introduce an "enforcer" operator in the plan that provides the required property. For example, an ORDER BY clause creates a required ordering property that can cause the optimizer to add a Sort operator as an enforcer of that property.
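
To make the enforcer decision concrete, here is a minimal sketch in Go. Ordering.Provides is part of this package's API (see below), while constructSort is a hypothetical stand-in for the optimizer's Sort constructor:

// enforceOrdering sketches the enforcer decision: if the ordering derived
// for a child expression does not satisfy the ordering required by its
// parent, wrap the child in a Sort enforcer.
func enforceOrdering(child Expr, derived, required Ordering) Expr {
  if derived.Provides(required) {
    // The child already returns rows in the required order.
    return child
  }
  // constructSort is hypothetical; it builds the Sort enforcer operator.
  return constructSort(child, required)
}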

Stats

Table statistics power both the cost model and the search of alternate query plans. A simple example of where statistics guide the search of alternate query plans is in join ordering:

SELECT * FROM a JOIN b

In the absence of other opportunities, this might be implemented as a hash join. With a hash join, we want to load the smaller set of rows (either from a or b) into the hash table and then query that table while looping through the larger set of rows. How do we know whether a or b is larger? We keep statistics about the cardinality of a and b, i.e. the (approximate) number of different values.

Simple table cardinality is sufficient for the above query but fails in other queries. Consider:

SELECT * FROM a JOIN b ON a.x = b.x WHERE a.y > 10

Table statistics might indicate that a contains 10x more data than b, but the predicate a.y > 10 is filtering a chunk of the table. What we care about is whether the result of the scan *after* filtering returns more rows than the scan of b. This can be accomplished by making a determination of the selectivity of the predicate a.y > 10 (the % of rows it will filter) and then multiplying that selectivity by the cardinality of a. The common technique for estimating selectivity is to collect a histogram on a.y.
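
As a rough sketch of that computation (the histogram representation here is simplified and hypothetical, not CockroachDB's actual statistics code):

// bucket is a simplified histogram bucket for a single column.
type bucket struct {
  upperBound float64 // largest column value that falls in this bucket
  numRows    float64 // number of rows whose value falls in this bucket
}

// selectivityGreaterThan estimates the selectivity of "col > bound" by
// summing the buckets above the bound (partial buckets are ignored for
// brevity).
func selectivityGreaterThan(hist []bucket, bound, totalRows float64) float64 {
  var rows float64
  for _, b := range hist {
    if b.upperBound > bound {
      rows += b.numRows
    }
  }
  return rows / totalRows
}

Multiplying the estimated selectivity by the cardinality of a yields the expected size of the filtered scan, which can then be compared against the cardinality of b to pick the hash join's build side.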

The collection of table statistics occurs prior to receiving the query. As such, the statistics are necessarily out of date and may be inaccurate. The system may bound the inaccuracy by recomputing the stats based on how fast a table is being modified. Or the system may notice when stat estimations are inaccurate during query execution.

Cost Model

The cost model takes an expression as input and computes an estimated "cost" to execute that expression. The unit of "cost" can be arbitrary, though it is desirable if it has some real world meaning such as expected execution time. What is required is for the costs of different query plans to be comparable. The optimizer seeks to find the shortest expected execution time for a query and uses cost as a proxy for execution time.

Cost is roughly calculated by estimating how much time each node in the expression tree will use to process all results and modeling how data flows through the expression tree. Table statistics are used to power cardinality estimates of base relations, which in turn power cardinality estimates of intermediate relations. This is accomplished by propagating histograms of column values from base relations up through intermediate nodes (e.g. combining histograms from the two join inputs into a single histogram). Operator-specific computations model the network, disk and CPU costs. The cost model should also account for data layout and the specific operating environment. For example, network RTT in one cluster might be vastly different than in another.

Because the cost for a query plan is an estimate, there is an associated error. This error might be implicit in the cost, or could be explicitly tracked. One advantage to explicitly tracking the expected error is that it can allow selecting a higher cost but lower expected error plan over a lower cost but higher expected error plan. Where does the error come from? One source is the innate inaccuracy of stats: selectivity estimation might be wildly off due to an outlier value. Another source is the accumulation of estimation errors higher up in the query tree. Lastly, the cost model estimates the execution time of operations such as a network RTT, and those estimates can also be wildly inaccurate due to bursts of activity.

Search finds the lowest cost plan using dynamic programming. That imposes a restriction on the cost model: it must exhibit optimal substructure, meaning that an optimal solution can be constructed from optimal solutions to its sub-problems.
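
A minimal sketch of that substructure, using hypothetical group and expr types (and ignoring physical properties):

// group holds logically equivalent expressions; an expr references child
// groups rather than child expressions.
type group struct {
  exprs []*expr
}

type expr struct {
  children  []*group
  localCost float64 // operator-specific CPU/disk/network estimate
}

// bestCost illustrates optimal substructure: the lowest cost for a group is
// built from the lowest costs of its child groups, never from non-optimal
// child plans.
func bestCost(g *group) float64 {
  best := -1.0 // best < 0 means no expression has been costed yet
  for _, e := range g.exprs {
    cost := e.localCost
    for _, child := range e.children {
      cost += bestCost(child)
    }
    if best < 0 || cost < best {
      best = cost
    }
  }
  return best
}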

Memo

Memo is a data structure for efficiently storing a forest of query plans. Conceptually, the memo is composed of a numbered set of equivalency classes called groups where each group contains a set of logically equivalent expressions. The different expressions in a single group are called memo expressions (memo-ized expressions). A memo expression has a list of child groups as its children rather than a list of individual expressions. The forest is composed of every possible combination of parent expression with its children, recursively applied.

Memo expressions can be relational (e.g. join) or scalar (e.g. <). Operators are always both logical (specify results) and physical (specify results and a particular implementation). This means that even a "raw" unoptimized expression tree can be executed (naively). Both relational and scalar operators are uniformly represented as nodes in memo expression trees, which facilitates tree pattern matching and replacement.

Because memo groups contain logically equivalent expressions, all the memo expressions in a group share the same logical properties. However, it's possible for two logically equivalent expressions to be placed in different memo groups. This occurs because determining logical equivalency of two relational expressions is too complex to perform 100% correctly. A correctness failure (i.e. considering two expressions logically equivalent when they are not) results in invalid transformations and invalid plans. But placing two logically equivalent expressions in different groups has a much gentler failure mode: the memo and transformations are less efficient. Expressions within the memo may have different physical properties. For example, a memo group might contain both hash join and merge join expressions which produce the same set of output rows, but produce them in different orders.

Expressions are inserted into the memo by the factory, which ensures that expressions have been fully normalized before insertion (see the Prep section for more details). A new group is created only when a unique normalized expression is created by the factory during construction or rewrite of the tree. Uniqueness is determined by computing the fingerprint for a memo expression, which is simply the expression operator and its list of child groups. For example, consider this query:

SELECT * FROM a, b WHERE a.x = b.x

After insertion into the memo, the memo would contain these six groups:

6: [inner-join [1 2 5]]
5: [eq [3 4]]
4: [variable b.x]
3: [variable a.x]
2: [scan b]
1: [scan a]

The fingerprint for the inner-join expression is [inner-join [1 2 5]]. The memo maintains a map from expression fingerprint to memo group which allows quick determination of whether the normalized form of an expression already exists in the memo.
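
A sketch of that interning scheme, with simplified types (the real fingerprint also covers operator-specific private data):

type groupID int

// fingerprint stands in for an operator plus its list of child groups,
// e.g. "inner-join [1 2 5]".
type fingerprint string

type memo struct {
  groups []fingerprint           // the normalized expression of each group
  index  map[fingerprint]groupID // fingerprint -> existing group
}

// intern returns the group that already holds the normalized expression, or
// creates a new group if the fingerprint has not been seen before.
func (m *memo) intern(fp fingerprint) groupID {
  if id, ok := m.index[fp]; ok {
    return id
  }
  id := groupID(len(m.groups))
  m.groups = append(m.groups, fp)
  m.index[fp] = id
  return id
}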

The normalizing factory will never add more than one expression to a memo group. But the explorer (see the Search section for more details) does add denormalized expressions to existing memo groups, since oftentimes one of these equivalent but denormalized expressions will have a lower cost than the initial normalized expression added by the factory. For example, the join commutativity transformation expands the memo like this:

6: [inner-join [1 2 5]] [inner-join [2 1 5]]
5: [eq [3 4]]
4: [variable b.x]
3: [variable a.x]
2: [scan b]
1: [scan a]

Notice that there are now two expressions in memo group 6. The coster (see Cost Model section for more details) will estimate the execution cost of each expression, and the optimizer will select the lowest cost alternative.

Transforms

Transforms convert an input expression tree into zero or more logically equivalent trees. Transforms consist of two parts: a "match pattern" and a "replace pattern". Together, the match pattern and replace pattern are called a "rule". Transform rules are categorized as "normalization" or "exploration" rules.

If an expression in the tree matches the match pattern, then a new expression will be constructed according to the replace pattern. Note that "replace" means the new expression is a logical replacement for the existing expression, not that the existing expression needs to physically be replaced. Depending on the context, the existing expression may be discarded, or it may be retained side-by-side with the new expression in the memo group.

Normalization rules are cost-agnostic, as they are always considered to be beneficial. All normalization rules are implemented by the normalizing factory, which does its best to map all logically equivalent expression trees to a single canonical form from which searches can branch out. See the Prep section for more details.

Exploration rules generate equivalent expression trees that must be costed in order to determine the lowest cost alternative. All exploration rules are implemented by the explorer, which is optimized to efficiently enumerate all possible expression tree combinations in the memo in order to look for rule matches. When it finds a match, the explorer applies the rule and adds an equivalent expression to the existing memo group. See the Search section for more details.

Some examples of transforms:

Join commutativity
Swaps the order of the inputs to an inner join.
	SELECT * FROM a, b => SELECT * FROM b, a

Join associativity
Reorders the children of a parent and child join
	SELECT * FROM (SELECT * FROM a, b), c
	=>
	SELECT * FROM (SELECT * FROM a, c), b

Predicate pushdown
Moves predicates below joins
	SELECT * FROM a, b USING (x) WHERE a.x < 10
	=>
	SELECT * FROM (SELECT * FROM a WHERE a.x < 10), b USING (x)

Join elimination
Removes unnecessary joins based on projected columns and foreign keys.
	SELECT a.x FROM a, b USING (x)
	=>
	SELECT a.x FROM a

Distinct/group-by elimination
Removes unnecessary distinct/group-by operations based on keys.
	SELECT DISTINCT a.x FROM a
	=>
	SELECT a.x FROM a

Predicate inference
Adds predicates based on filter conditions.
	SELECT * FROM a, b USING (x)
	=>
	SELECT * FROM a, b USING (x) WHERE a.x IS NOT NULL AND b.x IS NOT NULL

Decorrelation
Replaces correlated subqueries with semi-join, anti-join and apply ops.

Scan to index scan
Transforms scan operator into one or more index scans on covering indexes.

Inner join to merge join
Generates alternate merge-join operator from default inner-join operator.

Much of the optimizer's rule matching and application code is generated by a tool called Optgen, short for "optimizer generator". Optgen is a domain-specific language (DSL) that provides an intuitive syntax for defining transform rules. Here is an example:

[NormalizeEq]
(Eq
  $left:^(Variable)
  $right:(Variable)
)
=>
(Eq $right $left)

The expression above the arrow is the match pattern and the expression below the arrow is the replace pattern. This example rule will match Eq expressions whose left input is not a Variable operator and whose right input is a Variable operator. The replace pattern will trigger a replacement that reverses the two inputs. In addition, custom match and replace functions can be defined in order to run arbitrary Go code.
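
In hand-written form, the effect of that rule corresponds roughly to the following sketch; constructEq is a hypothetical stand-in for the factory's generated constructor:

// normalizeEq mirrors the NormalizeEq rule above: when the left input of an
// Eq is not a Variable but the right input is, construct the Eq with its
// inputs swapped so the Variable ends up on the left.
func normalizeEq(left, right ScalarExpr) ScalarExpr {
  if left.Op() != VariableOp && right.Op() == VariableOp {
    return constructEq(right, left)
  }
  return constructEq(left, right)
}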

Prep

Prep (short for "prepare") is the first phase of query optimization, in which the annotated AST is transformed into a single normalized "expression tree". The optimizer directly creates the expression tree in the memo data structure rather than first constructing an intermediate data structure. A forest of equivalent trees will be generated in later phases, but at the end of the prep phase, the memo contains just one normalized tree that is logically equivalent to the SQL query.

During the prep phase, placeholder values are not yet known, so normalization cannot go as far as it can during later phases. However, this also means that the resulting expression tree can be cached in response to a PREPARE statement, and then be reused as a starting point each time an EXECUTE statement provides new placeholder values.

The memo expression tree is constructed by the normalizing factory, which does its best to map all logically equivalent expression trees to a single canonical form from which searches can branch out. The factory has an interface similar to this:

ConstructConst(value PrivateID) GroupID
ConstructAnd(conditions ListID) GroupID
ConstructInnerJoin(left GroupID, right GroupID, on GroupID) GroupID

The factory methods construct a memo expression tree bottom-up, with each memo group becoming an input to operators higher in the tree.
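
For example, the normalized tree for the join query from the Memo section might be assembled along these lines; f, the table/column references, and the ConstructScan/ConstructVariable/ConstructEq methods are illustrative analogues of the interface above:

// Build SELECT * FROM a, b WHERE a.x = b.x bottom-up. The group numbers in
// the comments refer to the memo listing in the Memo section.
left := f.ConstructScan(aTable)               // group 1
right := f.ConstructScan(bTable)              // group 2
ax := f.ConstructVariable(axColumn)           // group 3
bx := f.ConstructVariable(bxColumn)           // group 4
on := f.ConstructEq(ax, bx)                   // group 5
join := f.ConstructInnerJoin(left, right, on) // group 6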

As each expression is constructed by the factory, it transitively applies normalization rules defined for that expression type. This may result in the construction of a different type of expression than what was requested. If, after normalization, the expression is already part of the memo, then construction is a no-op. Otherwise, a new memo group is created, with the normalized expression as its first and only expression.

By applying normalization rules as the expression tree is constructed, the factory can avoid creating intermediate expressions; often, "replacement" of an existing expression means it's never created to begin with.

During Prep, all columns used by the SQL query are given a numeric index that is unique across the query. Column numbering involves assigning every base column and non-trivial projection in a query a unique, query-specific index. Giving each column a unique index allows the expression nodes mentioned above to track input and output columns, or really any set of columns during Prep and later phases, using a bitmap (FastIntSet). The bitmap representation allows fast determination of compatibility between expression nodes and is utilized by transforms to determine the legality of such operations.
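
As a small illustration using this package's ColSet (documented below), a typical legality check reduces to a cheap bitmap operation:

// canPushFilterInto is a sketch: a filter can be pushed below a join input
// only if every column the filter references is produced by that input.
func canPushFilterInto(filterCols, inputCols ColSet) bool {
  return filterCols.SubsetOf(inputCols)
}

For example, a filter on columns {1} can be pushed into an input producing columns {1, 2}, while a filter on columns {1, 3} cannot.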

The Prep phase also computes logical properties, such as the input and output columns of each (sub-)expression, equivalent columns, not-null columns and functional dependencies. These properties are computed bottom-up as part of constructing the expression tree.

Rewrite

Rewrite is the second phase of query optimization. Placeholder values are available starting at this phase, so new normalization rules will typically match once constant values are substituted for placeholders. As mentioned in the previous section, the expression tree produced by the Prep phase can be cached and serve as the starting point for the Rewrite phase. In addition, the Rewrite phase takes a set of physical properties that are required from the result, such as row ordering and column naming.

The Rewrite and Search phases have significant overlap. Both phases perform transformations on the expression tree. However, Search preserves the matched expression side-by-side with the new expression, while Rewrite simply discards the matched expression, since the new expression is assumed to always be better. In addition, the application of exploration rules may trigger additional normalization rules, which may in turn trigger additional exploration rules.

Together, the Rewrite and Search phases are responsible for finding the expression that can provide the required set of physical properties at the lowest possible execution cost. That mandate is recursively applied; in other words, each subtree is also optimized with respect to a set of physical properties required by its parent, and the goal is to find the lowest cost equivalent expression. An example of an "interior" optimization goal is a merge join that requires its inner child to return its rows in a specific order. The same group can be (and sometimes is) optimized multiple times, but with different required properties each time.

Search

Search is the final phase of optimization. Search begins with a single normalized tree that was created by the earlier phases. For each group, the "explorer" component generates alternative expressions that are logically equivalent to the normalized expression, but which may have very different execution plans. The "coster" component computes the estimated cost for each alternate expression. The optimizer remembers the "best expression" for each group, for each set of physical properties required of that group.

Optimization of a group proceeds in two phases:

1. Compute the cost of any previously generated expressions. That set initially contains only the group's normalized expression, but exploration may yield additional expressions. Costing a parent expression requires that the children first be costed, so costing triggers a recursive traversal of the memo groups.

2. Invoke the explorer to generate new equivalent expressions for the group. Those new expressions are costed once the optimizer loops back to the first phase.

In order to avoid a combinatorial explosion in the number of expression trees, the optimizer utilizes the memo structure. Due to the large number of possible plans for some queries, the optimizer cannot always explore all of them. Therefore, it proceeds in multiple iterative "passes", until either it hits some configured time or resource limit, or until an exhaustive search is complete. As long as the search is allowed to complete, the best plan will be found, just as in Volcano and Cascades.

The optimizer uses several techniques to maximize the chance that it finds the best plan early on:

- As with Cascades, the search is highly directed, interleaving exploration with costing in order to prune parts of the tree that cannot yield a better plan. This contrasts with Volcano, which first generates all possible plans in one global phase (exploration), and then determines the lowest cost plan in another global phase (costing).

- The optimizer uses a simple hill climbing heuristic to make greedy progress towards the best plan. During a given pass, the optimizer visits each group and performs costing and exploration for that group. As long as doing that yields a lower cost expression for the group, the optimizer will repeat those steps. This finds a local optimum for each group during the current pass.

In order to avoid costing or exploring parts of the search space that cannot yield a better plan, the optimizer performs aggressive "branch and bound pruning". Each group expression is optimized with respect to a "budget" parameter. As soon as this budget is exceeded, optimization of that expression terminates. It's not uncommon for large sections of the search space to never be costed or explored due to this pruning. Example:

innerJoin
	left:  cost = 50
	right: cost = 75
	on:    cost = 25

If the current best expression for the group has a cost of 100, then the optimizer does not need to cost or explore the "on" child of the join, and does not need to cost the join itself. This is because the combined cost of the left and right children already exceeds 100.
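
Reusing the hypothetical group and expr types from the Cost Model section, the pruning logic might be sketched like this:

// costWithBudget computes the cost of e, giving up as soon as the
// accumulated cost exceeds budget; pruned children are never examined.
func costWithBudget(e *expr, budget float64) (float64, bool) {
  cost := e.localCost
  for _, child := range e.children {
    if cost > budget {
      return 0, false // prune: the "on" child above is never costed
    }
    childCost, ok := bestCostWithBudget(child, budget-cost)
    if !ok {
      return 0, false
    }
    cost += childCost
  }
  return cost, cost <= budget
}

// bestCostWithBudget returns the lowest expression cost in g that does not
// exceed budget, tightening the budget as better plans are found.
func bestCostWithBudget(g *group, budget float64) (float64, bool) {
  found := false
  for _, e := range g.exprs {
    if cost, ok := costWithBudget(e, budget); ok {
      budget, found = cost, true
    }
  }
  return budget, found
}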

Index

Package Files

colset.go column_meta.go constants.go doc.go metadata.go operator.go ordering.go rule_name.go table_meta.go telemetry.go view_dependencies.go

Constants

const DefaultJoinOrderLimit = 4

DefaultJoinOrderLimit denotes the default limit on the number of joins to reorder.

const SaveTablesDatabase = "savetables"

SaveTablesDatabase is the name of the database where tables created by the saveTableNode are stored.

Variables

var AggregateOpReverseMap = map[Operator]string{
    ArrayAggOp:        "array_agg",
    AvgOp:             "avg",
    BoolAndOp:         "bool_and",
    BoolOrOp:          "bool_or",
    ConcatAggOp:       "concat_agg",
    CountOp:           "count",
    CountRowsOp:       "count_rows",
    MaxOp:             "max",
    MinOp:             "min",
    SumIntOp:          "sum_int",
    SumOp:             "sum",
    SqrDiffOp:         "sqrdiff",
    VarianceOp:        "variance",
    StdDevOp:          "stddev",
    XorAggOp:          "xor_agg",
    JsonAggOp:         "json_agg",
    JsonbAggOp:        "jsonb_agg",
    StringAggOp:       "string_agg",
    ConstAggOp:        "any_not_null",
    ConstNotNullAggOp: "any_not_null",
    AnyNotNullAggOp:   "any_not_null",
}

AggregateOpReverseMap maps from an optimizer operator type to the name of an aggregation function.

var BinaryOpReverseMap = map[Operator]tree.BinaryOperator{
    BitandOp:        tree.Bitand,
    BitorOp:         tree.Bitor,
    BitxorOp:        tree.Bitxor,
    PlusOp:          tree.Plus,
    MinusOp:         tree.Minus,
    MultOp:          tree.Mult,
    DivOp:           tree.Div,
    FloorDivOp:      tree.FloorDiv,
    ModOp:           tree.Mod,
    PowOp:           tree.Pow,
    ConcatOp:        tree.Concat,
    LShiftOp:        tree.LShift,
    RShiftOp:        tree.RShift,
    FetchValOp:      tree.JSONFetchVal,
    FetchTextOp:     tree.JSONFetchText,
    FetchValPathOp:  tree.JSONFetchValPath,
    FetchTextPathOp: tree.JSONFetchTextPath,
}

BinaryOpReverseMap maps from an optimizer operator type to a semantic tree binary operator type.

var ComparisonOpMap [tree.NumComparisonOperators]Operator

ComparisonOpMap maps from a semantic tree comparison operator type to an optimizer operator type.

var ComparisonOpReverseMap = map[Operator]tree.ComparisonOperator{
    EqOp:             tree.EQ,
    LtOp:             tree.LT,
    GtOp:             tree.GT,
    LeOp:             tree.LE,
    GeOp:             tree.GE,
    NeOp:             tree.NE,
    InOp:             tree.In,
    NotInOp:          tree.NotIn,
    LikeOp:           tree.Like,
    NotLikeOp:        tree.NotLike,
    ILikeOp:          tree.ILike,
    NotILikeOp:       tree.NotILike,
    SimilarToOp:      tree.SimilarTo,
    NotSimilarToOp:   tree.NotSimilarTo,
    RegMatchOp:       tree.RegMatch,
    NotRegMatchOp:    tree.NotRegMatch,
    RegIMatchOp:      tree.RegIMatch,
    NotRegIMatchOp:   tree.NotRegIMatch,
    IsOp:             tree.IsNotDistinctFrom,
    IsNotOp:          tree.IsDistinctFrom,
    ContainsOp:       tree.Contains,
    JsonExistsOp:     tree.JSONExists,
    JsonSomeExistsOp: tree.JSONSomeExists,
    JsonAllExistsOp:  tree.JSONAllExists,
    OverlapsOp:       tree.Overlaps,
}

ComparisonOpReverseMap maps from an optimizer operator type to a semantic tree comparison operator type.

var NegateOpMap = map[Operator]Operator{
    EqOp:           NeOp,
    LtOp:           GeOp,
    GtOp:           LeOp,
    LeOp:           GtOp,
    GeOp:           LtOp,
    NeOp:           EqOp,
    InOp:           NotInOp,
    NotInOp:        InOp,
    LikeOp:         NotLikeOp,
    NotLikeOp:      LikeOp,
    ILikeOp:        NotILikeOp,
    NotILikeOp:     ILikeOp,
    SimilarToOp:    NotSimilarToOp,
    NotSimilarToOp: SimilarToOp,
    RegMatchOp:     NotRegMatchOp,
    NotRegMatchOp:  RegMatchOp,
    RegIMatchOp:    NotRegIMatchOp,
    NotRegIMatchOp: RegIMatchOp,
    IsOp:           IsNotOp,
    IsNotOp:        IsOp,
}

NegateOpMap maps from a comparison operator type to its negated operator type, as if the Not operator was applied to it. Some comparison operators, like Contains and JsonExists, do not have negated versions.
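
For example, the negated operator for NOT (a < b):

negated := NegateOpMap[LtOp] // GeOp, since NOT (a < b) == (a >= b)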

var OpTelemetryCounters [NumOperators]telemetry.Counter

OpTelemetryCounters stores telemetry counters for operators marked with the "Telemetry" tag. All other operators have nil values.

var UnaryOpReverseMap = map[Operator]tree.UnaryOperator{
    UnaryMinusOp:      tree.UnaryMinus,
    UnaryComplementOp: tree.UnaryComplement,
}

UnaryOpReverseMap maps from an optimizer operator type to a semantic tree unary operator type.

var WindowOpReverseMap = map[Operator]string{
    RankOp:        "rank",
    RowNumberOp:   "row_number",
    DenseRankOp:   "dense_rank",
    PercentRankOp: "percent_rank",
    CumeDistOp:    "cume_dist",
    NtileOp:       "ntile",
    LagOp:         "lag",
    LeadOp:        "lead",
    FirstValueOp:  "first_value",
    LastValueOp:   "last_value",
    NthValueOp:    "nth_value",
}

WindowOpReverseMap maps from an optimizer operator type to the name of a window function.

func AggregateIgnoresNulls

func AggregateIgnoresNulls(op Operator) bool

AggregateIgnoresNulls returns true if the given aggregate operator has a single input, and if it always evaluates to the same result regardless of how many NULL values are included in that input, in any order.

func AggregateIsNullOnEmpty

func AggregateIsNullOnEmpty(op Operator) bool

AggregateIsNullOnEmpty returns true if the given aggregate operator has a single input, and if it returns NULL when the input set contains no values. This group of aggregates overlaps considerably with the AggregateIgnoresNulls group, with the notable exception of COUNT, which returns zero instead of NULL when its input is empty.

func BoolOperatorRequiresNotNullArgs

func BoolOperatorRequiresNotNullArgs(op Operator) bool

BoolOperatorRequiresNotNullArgs returns true if the operator can never evaluate to true if one of its children is NULL.

type AliasedColumn

type AliasedColumn struct {
    Alias string
    ID    ColumnID
}

AliasedColumn specifies the label and id of a column.

type ColList

type ColList []ColumnID

ColList is a list of column ids.

TODO(radu): perhaps implement a FastIntList with the same "small" representation as FastIntMap but with a slice for large cases.

func ColSetToList

func ColSetToList(set ColSet) ColList

ColSetToList converts a column id set to a list, in column id order.

func (ColList) Equals

func (cl ColList) Equals(other ColList) bool

Equals returns true if this column list has the same columns as the given column list, in the same order.

func (ColList) Find

func (cl ColList) Find(col ColumnID) (idx int, ok bool)

Find searches for a column in the list and returns its index in the list (if successful).

func (ColList) ToSet

func (cl ColList) ToSet() ColSet

ToSet converts a column id list to a column id set.

type ColMap

type ColMap = util.FastIntMap

ColMap provides a 1:1 mapping from one column id to another. It is used by operators that need to match columns from their inputs.

type ColSet

type ColSet struct {
    // contains filtered or unexported fields
}

ColSet efficiently stores an unordered set of column ids.

func MakeColSet

func MakeColSet(vals ...ColumnID) ColSet

MakeColSet returns a set initialized with the given values.

func (*ColSet) Add

func (s *ColSet) Add(col ColumnID)

Add adds a column to the set. No-op if the column is already in the set.

func (ColSet) Contains

func (s ColSet) Contains(col ColumnID) bool

Contains returns true if the set contains the column.

func (ColSet) Copy

func (s ColSet) Copy() ColSet

Copy returns a copy of s which can be modified independently.

func (ColSet) Difference

func (s ColSet) Difference(rhs ColSet) ColSet

Difference returns the elements of s that are not in rhs as a new set.

func (*ColSet) DifferenceWith

func (s *ColSet) DifferenceWith(rhs ColSet)

DifferenceWith removes any elements in rhs from this set.

func (ColSet) Empty

func (s ColSet) Empty() bool

Empty returns true if the set is empty.

func (ColSet) Equals

func (s ColSet) Equals(rhs ColSet) bool

Equals returns true if the two sets are identical.

func (ColSet) ForEach

func (s ColSet) ForEach(f func(col ColumnID))

ForEach calls a function for each column in the set (in increasing order).

func (ColSet) Intersection

func (s ColSet) Intersection(rhs ColSet) ColSet

Intersection returns the intersection of s and rhs as a new set.

func (*ColSet) IntersectionWith

func (s *ColSet) IntersectionWith(rhs ColSet)

IntersectionWith removes any columns not in rhs from this set.

func (ColSet) Intersects

func (s ColSet) Intersects(rhs ColSet) bool

Intersects returns true if s has any elements in common with rhs.

func (ColSet) Len

func (s ColSet) Len() int

Len returns the number of the columns in the set.

func (ColSet) Next

func (s ColSet) Next(startVal ColumnID) (ColumnID, bool)

Next returns the first value in the set which is >= startVal. If there is no such column, the second return value is false.

func (*ColSet) Remove

func (s *ColSet) Remove(col ColumnID)

Remove removes a column from the set. No-op if the column is not in the set.

func (ColSet) SingleColumn

func (s ColSet) SingleColumn() ColumnID

SingleColumn returns the single column in s. Panics if s does not contain exactly one column.

func (ColSet) String

func (s ColSet) String() string

String returns a list representation of elements. Sequential runs of positive numbers are shown as ranges. For example, for the set {1, 2, 3, 5, 6, 10}, the output is "(1-3,5,6,10)".
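
A short usage sketch combining the ColSet methods documented in this section:

s := MakeColSet(1, 2, 3)
s.Add(5)
s.UnionWith(MakeColSet(6, 10))
s.String()                                // "(1-3,5,6,10)"
s.Contains(4)                             // false
s.SubsetOf(MakeColSet(1, 2, 3, 5, 6, 10)) // true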

func (ColSet) SubsetOf

func (s ColSet) SubsetOf(rhs ColSet) bool

SubsetOf returns true if rhs contains all the elements in s.

func (ColSet) Union

func (s ColSet) Union(rhs ColSet) ColSet

Union returns the union of s and rhs as a new set.

func (*ColSet) UnionWith

func (s *ColSet) UnionWith(rhs ColSet)

UnionWith adds all the columns from rhs to this set.

type ColumnID

type ColumnID int32

ColumnID uniquely identifies the usage of a column within the scope of a query. ColumnID 0 is reserved to mean "unknown column". See the comment for Metadata for more details.

type ColumnMeta

type ColumnMeta struct {
    // MetaID is the identifier for this column that is unique within the query
    // metadata.
    MetaID ColumnID

    // Alias is the best-effort name of this column. Since the same column in a
    // query can have multiple names (using aliasing), one of those is chosen to
    // be used for pretty-printing and debugging. This might be different than
    // what is stored in the physical properties and is presented to end users.
    Alias string

    // Type is the scalar SQL type of this column.
    Type *types.T

    // Table is the base table to which this column belongs.
    // If the column was synthesized (i.e. no base table), then it is 0.
    Table TableID
}

ColumnMeta stores information about one of the columns stored in the metadata.

type Expr

type Expr interface {
    // Op returns the operator type of the expression.
    Op() Operator

    // ChildCount returns the number of children of the expression.
    ChildCount() int

    // Child returns the nth child of the expression.
    Child(nth int) Expr

    // Private returns operator-specific data. Callers are expected to know the
    // type and format of the data, which will differ from operator to operator.
    // For example, an operator may choose to return one of its fields, or perhaps
    // a pointer to itself, or nil if there is nothing useful to return.
    Private() interface{}

    // String returns a human-readable string representation for the expression
    // that can be used for debugging and testing.
    String() string
}

Expr is a node in an expression tree. It offers methods to traverse and inspect the tree. Each node in the tree has an enumerated operator type, zero or more children, and an optional private value. The entire tree can be easily visited using a pattern like this:

var visit func(e Expr)
visit = func(e Expr) {
  for i, n := 0, e.ChildCount(); i < n; i++ {
    visit(e.Child(i))
  }
}

type MDDepName

type MDDepName struct {
    // contains filtered or unexported fields
}

MDDepName stores either the unresolved DataSourceName or the StableID from the query that was used to resolve a data source.

func DepByID

func DepByID(id cat.StableID) MDDepName

DepByID is used with AddDependency when the data source was looked up by ID.

func DepByName

func DepByName(name *cat.DataSourceName) MDDepName

DepByName is used with AddDependency when the data source was looked up using a data source name.

type Metadata

type Metadata struct {
    // contains filtered or unexported fields
}

Metadata assigns unique ids to the columns, tables, and other metadata used within the scope of a particular query. Because it is specific to one query, the ids tend to be small integers that can be efficiently stored and manipulated.

Within a query, every unique column and every projection (that is more than just a pass through of a column) is assigned a unique column id. Additionally, every separate reference to a table in the query gets a new set of output column ids. Consider the query:

SELECT * FROM a AS l JOIN a AS r ON (l.x = r.y)

In this query, `l.x` is not equivalent to `r.x` and `l.y` is not equivalent to `r.y`. In order to achieve this, we need to give these columns different ids.

In all cases, the column ids are global to the query. For example, consider the query:

SELECT x FROM a WHERE y > 0

There are 2 columns in the above query: x and y. During name resolution, the above query becomes:

SELECT [0] FROM a WHERE [1] > 0
-- [0] -> x
-- [1] -> y

func (*Metadata) AddColumn

func (md *Metadata) AddColumn(alias string, typ *types.T) ColumnID

AddColumn assigns a new unique id to a column within the query and records its alias and type. If the alias is empty, a "column<ID>" alias is created.

func (*Metadata) AddDependency

func (md *Metadata) AddDependency(name MDDepName, ds cat.DataSource, priv privilege.Kind)

AddDependency tracks one of the catalog data sources on which the query depends, as well as the privilege required to access that data source. If the Memo using this metadata is cached, then a call to CheckDependencies can detect if the name resolves to a different data source now, or if changes to schema or permissions on the data source have invalidated the cached metadata.

func (*Metadata) AddSchema

func (md *Metadata) AddSchema(sch cat.Schema) SchemaID

AddSchema indexes a new reference to a schema used by the query.

func (*Metadata) AddSequence

func (md *Metadata) AddSequence(seq cat.Sequence) SequenceID

AddSequence adds the sequence to the metadata, returning a SequenceID that can be used to retrieve it.

func (*Metadata) AddTable

func (md *Metadata) AddTable(tab cat.Table, alias *tree.TableName) TableID

AddTable indexes a new reference to a table within the query. Separate references to the same table are assigned different table ids (e.g. in a self-join query). All columns are added to the metadata. If mutation columns are present, they are added after active columns.

The ExplicitCatalog/ExplicitSchema fields of the table's alias are honored so that its original formatting is preserved for error messages, pretty-printing, etc.

func (*Metadata) AddView

func (md *Metadata) AddView(v cat.View)

AddView adds a new reference to a view used by the query.

func (*Metadata) AllSequences

func (md *Metadata) AllSequences() []cat.Sequence

AllSequences returns the metadata for all sequences. The result must not be modified.

func (*Metadata) AllTables

func (md *Metadata) AllTables() []TableMeta

AllTables returns the metadata for all tables. The result must not be modified.

func (*Metadata) AllViews

func (md *Metadata) AllViews() []cat.View

AllViews returns the metadata for all views. The result must not be modified.

func (*Metadata) CheckDependencies

func (md *Metadata) CheckDependencies(
    ctx context.Context, catalog cat.Catalog,
) (upToDate bool, err error)

CheckDependencies resolves (again) each data source on which this metadata depends, in order to check that all data source names resolve to the same objects, and that the user still has sufficient privileges to access the objects. If the dependencies are no longer up-to-date, then CheckDependencies returns false.

This function cannot swallow errors and return only a boolean, as it may perform KV operations on behalf of the transaction associated with the provided catalog, and those errors are required to be propagated.

func (*Metadata) ColumnMeta

func (md *Metadata) ColumnMeta(colID ColumnID) *ColumnMeta

ColumnMeta looks up the metadata for the column associated with the given column id. The same column can be added multiple times to the query metadata and associated with multiple column ids.

func (*Metadata) CopyFrom

func (md *Metadata) CopyFrom(from *Metadata)

CopyFrom initializes the metadata with a copy of the provided metadata. This metadata can then be modified independent of the copied metadata.

Table annotations are not transferred over; all annotations are unset on the copy.

func (*Metadata) Init

func (md *Metadata) Init()

Init prepares the metadata for use (or reuse).

func (*Metadata) NextValuesID

func (md *Metadata) NextValuesID() ValuesID

NextValuesID returns a fresh ValuesID which is guaranteed to never have been allocated prior in this memo.

func (*Metadata) NumColumns

func (md *Metadata) NumColumns() int

NumColumns returns the count of columns tracked by this Metadata instance.

func (*Metadata) QualifiedAlias

func (md *Metadata) QualifiedAlias(colID ColumnID, fullyQualify bool, catalog cat.Catalog) string

QualifiedAlias returns the column alias, possibly qualified with the table, schema, or database name:

1. If fullyQualify is true, then the returned alias is prefixed by the
   original, fully qualified name of the table: tab.Name().FQString().

2. If there's another column in the metadata with the same column alias but
   a different table name, then prefix the column alias with the table
   name: "tabName.columnAlias".

func (*Metadata) Schema

func (md *Metadata) Schema(schID SchemaID) cat.Schema

Schema looks up the metadata for the schema associated with the given schema id.

func (*Metadata) Sequence

func (md *Metadata) Sequence(seqID SequenceID) cat.Sequence

Sequence looks up the catalog sequence associated with the given metadata id. The same sequence can be associated with multiple metadata ids.

func (*Metadata) SetTableAnnotation

func (md *Metadata) SetTableAnnotation(tabID TableID, tabAnnID TableAnnID, ann interface{})

SetTableAnnotation associates the given annotation with the given table. The annotation is associated by the given ID, which was allocated by calling NewTableAnnID. If an annotation with the ID already exists on the table, then it is overwritten.

See the TableAnnID comment for more details and a usage example.

func (*Metadata) Table

func (md *Metadata) Table(tabID TableID) cat.Table

Table looks up the catalog table associated with the given metadata id. The same table can be associated with multiple metadata ids.

func (*Metadata) TableAnnotation

func (md *Metadata) TableAnnotation(tabID TableID, annID TableAnnID) interface{}

TableAnnotation returns the given annotation that is associated with the given table. If the table has no such annotation, TableAnnotation returns nil.

func (*Metadata) TableByStableID

func (md *Metadata) TableByStableID(id cat.StableID) cat.Table

TableByStableID looks up the catalog table associated with the given StableID (unique across all tables and stable across queries).

func (*Metadata) TableMeta

func (md *Metadata) TableMeta(tabID TableID) *TableMeta

TableMeta looks up the metadata for the table associated with the given table id. The same table can be added multiple times to the query metadata and associated with multiple table ids.

type MutableExpr

type MutableExpr interface {
    // SetChild updates the nth child of the expression to instead be the given
    // child expression.
    SetChild(nth int, child Expr)
}

MutableExpr is implemented by expressions that allow their children to be updated.

type OpaqueMetadata

type OpaqueMetadata interface {
    ImplementsOpaqueMetadata()

    // String is a short description used when printing optimizer trees and when
    // forming error messages; it should be the SQL statement tag.
    String() string
}

OpaqueMetadata is an object stored in OpaqueRelExpr and passed through to the exec factory.

type Operator

type Operator uint16

Operator describes the type of operation that a memo expression performs. Some operators are relational (join, select, project) and others are scalar (and, or, plus, variable).

func (Operator) String

func (op Operator) String() string

String returns the name of the operator as a string.

func (Operator) SyntaxTag

func (op Operator) SyntaxTag() string

SyntaxTag returns the name of the operator using the SQL syntax that most closely matches it.

type Ordering

type Ordering []OrderingColumn

Ordering defines the order of rows provided or required by an operator. A negative value indicates descending order on the column id "-(value)".
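
For example, an ordering that sorts ascending on column 1 and then descending on column 5 can be written either way:

ord := Ordering{1, -5}
// Equivalently, using the constructor documented below:
ord = Ordering{MakeOrderingColumn(1, false), MakeOrderingColumn(5, true)}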

func (Ordering) ColSet

func (o Ordering) ColSet() ColSet

ColSet returns the set of column IDs used in the ordering.

func (Ordering) CommonPrefix

func (o Ordering) CommonPrefix(other Ordering) Ordering

CommonPrefix returns the longest ordering that is a prefix of both orderings.

func (Ordering) Empty

func (o Ordering) Empty() bool

Empty returns true if the ordering is empty or unset.

func (Ordering) Equals

func (o Ordering) Equals(rhs Ordering) bool

Equals returns true if the two orderings are identical.

func (Ordering) Format

func (o Ordering) Format(buf *bytes.Buffer)

Format prints a string representation to the buffer.

func (Ordering) Provides

func (o Ordering) Provides(required Ordering) bool

Provides returns true if the required ordering is a prefix of this ordering.

func (Ordering) String

func (o Ordering) String() string

type OrderingColumn

type OrderingColumn int32

OrderingColumn is the ColumnID for a column that is part of an ordering, except that it can be negated to indicate a descending ordering on that column.

func MakeOrderingColumn

func MakeOrderingColumn(id ColumnID, descending bool) OrderingColumn

MakeOrderingColumn initializes an ordering column with a ColumnID and a flag indicating whether the direction is descending.

func (OrderingColumn) Ascending

func (c OrderingColumn) Ascending() bool

Ascending returns true if the ordering on this column is ascending.

func (OrderingColumn) Descending

func (c OrderingColumn) Descending() bool

Descending returns true if the ordering on this column is descending.

func (OrderingColumn) Format

func (c OrderingColumn) Format(buf *bytes.Buffer)

Format prints a string representation to the buffer.

func (OrderingColumn) ID

func (c OrderingColumn) ID() ColumnID

ID returns the ColumnID for this OrderingColumn.

func (OrderingColumn) String

func (c OrderingColumn) String() string

type OrderingSet

type OrderingSet []Ordering

OrderingSet is a set of orderings, with the restriction that no ordering is a prefix of another ordering in the set.

func (*OrderingSet) Add

func (os *OrderingSet) Add(o Ordering)

Add an ordering to the list, checking whether it is a prefix of another ordering (or vice-versa).

func (OrderingSet) Copy

func (os OrderingSet) Copy() OrderingSet

Copy returns a copy of the set which can be independently modified.

func (*OrderingSet) RestrictToCols

func (os *OrderingSet) RestrictToCols(cols ColSet)

RestrictToCols keeps only the orderings (or prefixes of them) that refer to columns in the given set.

func (*OrderingSet) RestrictToPrefix

func (os *OrderingSet) RestrictToPrefix(required Ordering)

RestrictToPrefix keeps only the orderings that have the required ordering as a prefix.

func (OrderingSet) String

func (os OrderingSet) String() string

type RuleName

type RuleName uint32

RuleName enumerates the names of all the optimizer rules. Manual rule names are defined in this file and rule names generated by Optgen are defined in rule_name.og.go.

const (
    InvalidRuleName RuleName = iota

    SimplifyRootOrdering
    PruneRootCols
    SimplifyZeroCardinalityGroup

    // NumManualRuleNames tracks the number of manually-defined rules.
    NumManualRuleNames
)

Enumeration of all manual rule names.

func (RuleName) IsExplore

func (r RuleName) IsExplore() bool

IsExplore returns true if r is an exploration rule.

func (RuleName) IsNormalize

func (r RuleName) IsNormalize() bool

IsNormalize returns true if r is a normalization rule.

type ScalarExpr

type ScalarExpr interface {
    Expr

    // ID is a unique (within the context of a memo) ID that can be
    // used to define a total order over ScalarExprs.
    ID() ScalarID

    // DataType is the SQL type of the expression.
    DataType() *types.T
}

ScalarExpr is a scalar expression, which is an expression that returns a primitive-typed value like boolean or string rather than rows and columns.

type ScalarID

type ScalarID int

ScalarID is the type of the memo-unique identifier given to every scalar expression.

type SchemaID

type SchemaID int32

SchemaID uniquely identifies the usage of a schema within the scope of a query. SchemaID 0 is reserved to mean "unknown schema". Internally, the SchemaID consists of an index into the Metadata.schemas slice.

See the comment for Metadata for more details on identifiers.

type SequenceID

type SequenceID uint64

SequenceID uniquely identifies the usage of a sequence within the scope of a query. SequenceID 0 is reserved to mean "unknown sequence".

type TableAnnID

type TableAnnID int

TableAnnID uniquely identifies an annotation on an instance of table metadata. A table annotation allows arbitrary values to be cached with table metadata, which can be used to avoid recalculating base table properties or other information each time it's needed.

WARNING! When copying memo metadata (which happens when we use a cached memo), the annotations are cleared. Any code using an annotation must treat this as a best-effort cache and be prepared to repopulate the annotation as necessary.

To create a TableAnnID, call NewTableAnnID during Go's program initialization phase. The returned TableAnnID never clashes with other annotations on the same table. Here is a usage example:

var myAnnID = NewTableAnnID()

md.SetTableAnnotation(TableID(1), myAnnID, "foo")
ann := md.TableAnnotation(TableID(1), myAnnID)

Currently, the following annotations are in use:

- WeakKeys: weak keys derived from the base table
- Stats: statistics derived from the base table

To add an additional annotation, increase the value of maxTableAnnIDCount and add a call to NewTableAnnID.

func NewTableAnnID

func NewTableAnnID() TableAnnID

NewTableAnnID allocates a unique annotation identifier that is used to associate arbitrary data with table metadata. Only maxTableAnnIDCount total annotation IDs can exist in the system. Attempting to exceed the maximum results in a panic.

This method is not thread-safe, and therefore should only be called during Go's program initialization phase (which uses a single goroutine to init variables).

See the TableAnnID comment for more details and a usage example.

type TableID

type TableID uint64

TableID uniquely identifies the usage of a table within the scope of a query. TableID 0 is reserved to mean "unknown table".

Internally, the TableID consists of an index into the Metadata.tables slice, as well as the ColumnID of the first column in the table. Subsequent columns have sequential ids, relative to their ordinal position in the table.
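
For example (a sketch of the numbering described above, assuming the table's first column was assigned ColumnID 7):

c0 := tabID.ColumnID(0) // ColumnID 7
c2 := tabID.ColumnID(2) // ColumnID 9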

See the comment for Metadata for more details on identifiers.

func (TableID) ColumnID

func (t TableID) ColumnID(ord int) ColumnID

ColumnID returns the metadata id of the column at the given ordinal position in the table. This is equivalent to calling:

md.TableMeta(t).ColumnMeta(ord).MetaID

NOTE: This method cannot do bounds checking, so it's up to the caller to ensure that a column really does exist at this ordinal position.

func (TableID) ColumnOrdinal

func (t TableID) ColumnOrdinal(id ColumnID) int

ColumnOrdinal returns the ordinal position of the given column in its base table. This is equivalent to calling:

md.ColumnMeta(id).Ordinal()

NOTE: This method cannot do complete bounds checking, so it's up to the caller to ensure that this column is really in the given base table.

type TableMeta

type TableMeta struct {
    // MetaID is the identifier for this table that is unique within the query
    // metadata.
    MetaID TableID

    // Table is a reference to the table in the catalog.
    Table cat.Table

    // Alias stores the identifier used in the query to identify the table. This
    // might be explicitly qualified (e.g. <catalog>.<schema>.<table>), or not
    // (e.g. <table>). Or, it may be an alias used in the query, in which case it
    // is always an unqualified name.
    Alias tree.TableName

    // IgnoreForeignKeys is true if we should disable any rules that depend on the
    // consistency of outgoing foreign key references. Set by the
    // IGNORE_FOREIGN_KEYS table hint; useful for scrub queries meant to verify
    // the consistency of foreign keys.
    IgnoreForeignKeys bool
    // contains filtered or unexported fields
}

TableMeta stores information about one of the tables stored in the metadata.

func (*TableMeta) AddConstraint

func (tm *TableMeta) AddConstraint(constraint ScalarExpr)

AddConstraint adds a valid table constraint to the table's metadata.

func (*TableMeta) Constraint

func (tm *TableMeta) Constraint(i int) ScalarExpr

Constraint looks up the ith valid table constraint on the table where i < ConstraintCount().

func (*TableMeta) ConstraintCount

func (tm *TableMeta) ConstraintCount() int

ConstraintCount returns the number of validated check constraints that are applied to the table.

func (*TableMeta) IndexColumns

func (tm *TableMeta) IndexColumns(indexOrd int) ColSet

IndexColumns returns the metadata IDs for the set of columns in the given index.

TODO(justin): cache this value in the table metadata.

func (*TableMeta) IndexKeyColumns

func (tm *TableMeta) IndexKeyColumns(indexOrd int) ColSet

IndexKeyColumns returns the metadata IDs for the set of strict key columns in the given index.

type ValuesID

type ValuesID uint64

ValuesID uniquely identifies the usage of a values clause within the scope of a query.

See the comment for Metadata for more details on identifiers.

type ViewDep

type ViewDep struct {
    DataSource cat.DataSource

    // ColumnOrdinals is the set of column ordinals that are referenced by the
    // view for this table. In most cases, this consists of all "public" columns
    // of the table; the only exception is when a table is referenced by table ID
    // with a specific list of column IDs.
    ColumnOrdinals util.FastIntSet

    // If an index is referenced specifically (via an index hint), SpecificIndex
    // is true and Index is the ordinal of that index.
    SpecificIndex bool
    Index         cat.IndexOrdinal
}

ViewDep contains information about a view dependency.

type ViewDeps

type ViewDeps []ViewDep

ViewDeps contains information about the dependencies of a view.

type WithID

type WithID uint64

WithID uniquely identifies a With expression within the scope of a query. WithID=0 is reserved to mean "unknown expression". See the comment for Metadata for more details on identifiers.

Directories

Path                 Synopsis
bench                Package bench houses benchmarks for the SQL optimizer.
cat                  Package cat contains interfaces that are used by the query optimizer to avoid including specifics of sqlbase structures in the opt code.
constraint
exec
exec/execbuilder
idxconstraint
memo
norm
opbench
optbuilder
optgen/exprgen
optgen/lang          Package lang implements a language called Optgen, short for "optimizer generator".
ordering             Package ordering contains operator-specific logic related to orderings - whether ops can provide Required orderings, what orderings do they need to require from their children, etc.
props
props/physical
testutils
testutils/opttester
testutils/testcat
testutils/testexpr
xform

Package opt imports 13 packages and is imported by 31 packages. Updated 2019-09-19.