import "github.com/pingcap/tidb/store/tikv"
Package tikv provides a TCP connection to kvserver.
2pc.go backoff.go batch_coprocessor.go cleanup.go client.go client_batch.go client_collapse.go commit.go coprocessor.go coprocessor_cache.go delete_range.go error.go failpoint.go interface.go kv.go lock_resolver.go mpp.go pd_codec.go pessimistic.go prewrite.go range_task.go rawkv.go region_cache.go region_request.go safepoint.go scan.go snapshot.go split_region.go test_util.go txn.go
const (
	// NoJitter makes the backoff sequence strict exponential.
	NoJitter = 1 + iota
	// FullJitter applies random factors to strict exponential.
	FullJitter
	// EqualJitter is also randomized, but prevents very short sleeps.
	EqualJitter
	// DecorrJitter increases the maximum jitter based on the last random value.
	DecorrJitter
)
const (
	BoTxnLock backoffType
	BoPDRPC
	BoRegionMiss
)
Backoff types.
const (
	GetAllMembersBackoff    = 5000
	GcOneRegionMaxBackoff   = 20000
	GcResolveLockMaxBackoff = 100000
)
Maximum total sleep time (in ms) for kv/cop commands.
const (
	// This is almost the same as 'tikv_gc_safe_point' in the table 'mysql.tidb',
	// save this to pd instead of tikv, because we can't use interface of table
	// if the safepoint on tidb is expired.
	GcSavedSafePoint = "/tidb/store/gcworker/saved_safe_point"

	GcSafePointCacheInterval = time.Second * 100
)
Safe point constants.
const MockResponseSizeForTest = 100 * 1024 * 1024
MockResponseSizeForTest mocks the response size.
const ResolvedCacheSize = 2048
ResolvedCacheSize is the max number of cached txn statuses.
var (
	// CommitMaxBackoff is max sleep time of the 'commit' command
	CommitMaxBackoff = uint64(41000)

	// PrewriteMaxBackoff is max sleep time of the `pre-write` command.
	PrewriteMaxBackoff = 20000
)
var (
	ReadTimeoutMedium    = 60 * time.Second   // For requests that may need scan region.
	ReadTimeoutLong      = 150 * time.Second  // For requests that may need scan region multiple times.
	ReadTimeoutUltraLong = 3600 * time.Second // For requests that may scan many regions for tiflash.

	GCTimeout                 = 5 * time.Minute
	UnsafeDestroyRangeTimeout = 5 * time.Minute
	AccessLockObserverTimeout = 10 * time.Second
)
Timeout durations.
var (
	ErrTiKVServerTimeout           = dbterror.ClassTiKV.NewStd(mysql.ErrTiKVServerTimeout)
	ErrTiFlashServerTimeout        = dbterror.ClassTiKV.NewStd(mysql.ErrTiFlashServerTimeout)
	ErrResolveLockTimeout          = dbterror.ClassTiKV.NewStd(mysql.ErrResolveLockTimeout)
	ErrPDServerTimeout             = dbterror.ClassTiKV.NewStd(mysql.ErrPDServerTimeout)
	ErrRegionUnavailable           = dbterror.ClassTiKV.NewStd(mysql.ErrRegionUnavailable)
	ErrTiKVServerBusy              = dbterror.ClassTiKV.NewStd(mysql.ErrTiKVServerBusy)
	ErrTiFlashServerBusy           = dbterror.ClassTiKV.NewStd(mysql.ErrTiFlashServerBusy)
	ErrTiKVStaleCommand            = dbterror.ClassTiKV.NewStd(mysql.ErrTiKVStaleCommand)
	ErrTiKVMaxTimestampNotSynced   = dbterror.ClassTiKV.NewStd(mysql.ErrTiKVMaxTimestampNotSynced)
	ErrGCTooEarly                  = dbterror.ClassTiKV.NewStd(mysql.ErrGCTooEarly)
	ErrQueryInterrupted            = dbterror.ClassTiKV.NewStd(mysql.ErrQueryInterrupted)
	ErrLockAcquireFailAndNoWaitSet = dbterror.ClassTiKV.NewStd(mysql.ErrLockAcquireFailAndNoWaitSet)
	ErrLockWaitTimeout             = dbterror.ClassTiKV.NewStd(mysql.ErrLockWaitTimeout)
	ErrTokenLimit                  = dbterror.ClassTiKV.NewStd(mysql.ErrTiKVStoreLimit)
	ErrLockExpire                  = dbterror.ClassTiKV.NewStd(mysql.ErrLockExpire)
	ErrUnknown                     = dbterror.ClassTiKV.NewStd(mysql.ErrUnknown)
)
MySQL error instances.
var (
	// MockRetryableErrorResp mocks a retryable error while processing the response
	MockRetryableErrorResp failpoint.Failpoint
	// MockScatterRegionTimeout mocks a timeout when trying to scatter a region
	MockScatterRegionTimeout failpoint.Failpoint
	// MockSplitRegionTimeout mocks a timeout when trying to split a region
	MockSplitRegionTimeout failpoint.Failpoint
)
var (
	// MaxRawKVScanLimit is the maximum scan limit for rawkv Scan.
	MaxRawKVScanLimit = 10240
	// ErrMaxScanLimitExceeded is returned when the limit for rawkv Scan is too large.
	ErrMaxScanLimitExceeded = errors.New("limit should be less than MaxRawKVScanLimit")
)
var (
	// ErrBodyMissing is returned when the response body is missing
	ErrBodyMissing = errors.New("response body is missing")
)
Global variable set by the config file.
MaxRecvMsgSize sets the max gRPC message size that can be received from the server. If any message is larger than the current value, an error will be reported by gRPC.
NewGCHandlerFunc creates a new GCHandler. To enable real GC, we should assign the function to `gcworker.NewGCWorker`.
RegionCacheTTLSec is the max idle time for regions in the region cache.
SetSuccess is used to probe if kv variables are set or not. It is ONLY used in test cases.
ShuttingDown is a flag to indicate that tidb-server is exiting (for example, a Ctrl+C signal was received). If this flag is set, the tikv client should not retry on network errors, because tidb-server expects the tikv client to exit as soon as possible.
var (
	// StoreLivenessTimeout is the max duration of resolving liveness of a TiKV instance.
	StoreLivenessTimeout time.Duration
)
GetStoreTypeByMeta gets store type by store meta pb.
NewBackoffFn creates a backoff func which implements exponential backoff with optional jitters. See http://www.awsarchitectureblog.com/2015/03/backoff.html
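For illustration, here is a minimal, self-contained sketch of the FullJitter strategy described in the linked article. It is not the package's internal implementation; `fullJitterSleep` and its parameters are hypothetical:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
	"time"
)

// fullJitterSleep returns a random duration in [0, min(capMs, baseMs*2^attempt)) ms:
// the exponential curve bounds the sleep, the random factor spreads retries out.
func fullJitterSleep(attempt, baseMs, capMs int) time.Duration {
	expo := float64(baseMs) * math.Pow(2, float64(attempt))
	upper := int64(math.Min(expo, float64(capMs)))
	return time.Duration(rand.Int63n(upper)) * time.Millisecond
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		fmt.Println(fullJitterSleep(attempt, 100, 10000))
	}
}
```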
func NewTestTiKVStore(client Client, pdClient pd.Client, clientHijack func(Client) Client, pdClientHijack func(pd.Client) pd.Client, txnLocalLatches uint) (kv.Storage, error)
NewTestTiKVStore creates a test store with the given options.
SnapCacheHitCount gets the snapshot cache hit count.
SnapCacheSize gets the snapshot cache size.
func SplitRegionRanges(bo *Backoffer, cache *RegionCache, keyRanges []kv.KeyRange) ([]kv.KeyRange, error)
SplitRegionRanges gets the split ranges from the pd region.
AccessIndex represents the index for the accessIndex array.
AccessMode is used to index stores for different region cache access requirements.
const (
	// TiKvOnly indicates the stores list that is used for TiKv access (including both leader requests and follower reads).
	TiKvOnly AccessMode = iota
	// TiFlashOnly indicates the stores list that is used for TiFlash requests.
	TiFlashOnly
	// NumAccessMode is reserved to keep the max access mode value.
	NumAccessMode
)
func (a AccessMode) String() string
type Backoffer struct {
// contains filtered or unexported fields
}
Backoffer is a utility for retrying queries.
NewBackoffer (Deprecated) creates a Backoffer with maximum sleep time (in ms).
NewBackofferWithVars creates a Backoffer with maximum sleep time (in ms) and kv.Variables.
NewNoopBackoff creates a Backoffer that does nothing and just returns the error directly.
Backoff sleeps for a while based on the backoffType and records the error message. It returns a retryable error if the total sleep time exceeds maxSleep.
BackoffWithMaxSleep sleeps for a while based on the backoffType and records the error message, and never sleeps more than maxSleepMs for each sleep.
Clone creates a new Backoffer which keeps current Backoffer's sleep time and errors, and shares current Backoffer's context.
func (b *Backoffer) Fork() (*Backoffer, context.CancelFunc)
Fork creates a new Backoffer which keeps current Backoffer's sleep time and errors, and holds a child context of current Backoffer's context.
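As a hedged usage sketch of the Backoffer in a retry loop (it relies only on the signatures shown in this listing; `store` and `key` are hypothetical inputs, and passing nil vars is assumed to fall back to defaults):

```go
import (
	"context"

	"github.com/pingcap/tidb/store/tikv"
)

// locateWithRetry retries a region-cache lookup with a 5000ms total sleep budget.
func locateWithRetry(store tikv.Storage, key []byte) (*tikv.KeyLocation, error) {
	bo := tikv.NewBackofferWithVars(context.Background(), 5000, nil)
	for {
		loc, err := store.GetRegionCache().LocateKey(bo, key)
		if err == nil {
			return loc, nil
		}
		// Backoff sleeps, records err, and returns a non-nil error once
		// the total sleep time exceeds the 5000ms budget above.
		if berr := bo.Backoff(tikv.BoRegionMiss, err); berr != nil {
			return nil, berr
		}
	}
}
```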
type Client interface {
	// Close should release all data.
	Close() error
	// SendRequest sends Request.
	SendRequest(ctx context.Context, addr string, req *tikvrpc.Request, timeout time.Duration) (*tikvrpc.Response, error)
}
Client is a client that sends RPC. It should not be used after calling Close().
NewTestRPCClient is for some external tests.
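Because Client is a small interface, it is easy to stub out for tests. A minimal sketch (the stub type and its behavior are hypothetical, not the package's test client):

```go
import (
	"context"
	"errors"
	"time"

	"github.com/pingcap/tidb/store/tikv/tikvrpc"
)

// stubClient satisfies the Client interface and fails every request.
type stubClient struct{}

func (c *stubClient) Close() error { return nil }

func (c *stubClient) SendRequest(ctx context.Context, addr string, req *tikvrpc.Request, timeout time.Duration) (*tikvrpc.Response, error) {
	// A real stub would switch on req.Type and return canned responses.
	return nil, errors.New("stub client: " + addr + " unreachable")
}
```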
type CommitterMutations interface {
	Len() int
	GetKey(i int) []byte
	GetKeys() [][]byte
	GetOp(i int) pb.Op
	GetValue(i int) []byte
	IsPessimisticLock(i int) bool
	Slice(from, to int) CommitterMutations
}
CommitterMutations contains the mutations to be submitted.
type CopClient struct {
	kv.RequestTypeSupportedChecker
	// contains filtered or unexported fields
}
CopClient is the coprocessor client.
func (c *CopClient) Send(ctx context.Context, req *kv.Request, vars *kv.Variables, sessionMemTracker *memory.Tracker, enabledRateLimitAction bool) kv.Response
Send builds the request and gets the coprocessor iterator response.
type CopRuntimeStats struct {
	execdetails.ExecDetails
	RegionRequestRuntimeStats

	CoprCacheHit bool
}
CopRuntimeStats contains execution detail information.
type DeleteRangeTask struct {
// contains filtered or unexported fields
}
DeleteRangeTask is used to delete all keys in a range. After performing DeleteRange, it records how many ranges it affected and whether the task was canceled.
func NewDeleteRangeTask(store Storage, startKey []byte, endKey []byte, concurrency int) *DeleteRangeTask
NewDeleteRangeTask creates a DeleteRangeTask. Deleting will be performed when the `Execute` method is invoked. Be careful while using this API. This API doesn't keep recent MVCC versions, but will delete all versions of all keys in the range immediately. Also notice that frequent invocation of this API may cause performance problems for TiKV.
func NewNotifyDeleteRangeTask(store Storage, startKey []byte, endKey []byte, concurrency int) *DeleteRangeTask
NewNotifyDeleteRangeTask creates a task that sends delete range requests to all regions in the range, but with the flag `notifyOnly` set. TiKV will not actually delete the range after receiving request, but it will be replicated via raft. This is used to notify the involved regions before sending UnsafeDestroyRange requests.
func (t *DeleteRangeTask) CompletedRegions() int
CompletedRegions returns the number of regions that are affected by this delete range task
func (t *DeleteRangeTask) Execute(ctx context.Context) error
Execute performs the delete range operation.
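A hedged usage sketch (assuming `store` implements the Storage interface documented below; the concurrency value and the helper's name are arbitrary):

```go
import (
	"context"
	"fmt"

	"github.com/pingcap/tidb/store/tikv"
)

// deleteAllVersions removes every MVCC version of every key in [startKey, endKey).
func deleteAllVersions(ctx context.Context, store tikv.Storage, startKey, endKey []byte) error {
	task := tikv.NewDeleteRangeTask(store, startKey, endKey, 4 /* concurrency */)
	if err := task.Execute(ctx); err != nil {
		return err
	}
	fmt.Printf("delete range touched %d regions\n", task.CompletedRegions())
	return nil
}
```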
type Driver struct{}
Driver implements engine Driver.
Open opens or creates a TiKV storage with the given path. Path example: tikv://etcd-node1:port,etcd-node2:port?cluster=1&disableGC=false
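A usage sketch of the DSN format above (the etcd addresses are placeholders):

```go
import (
	"log"

	"github.com/pingcap/tidb/store/tikv"
)

func openStore() {
	var d tikv.Driver
	store, err := d.Open("tikv://etcd-node1:2379,etcd-node2:2379?cluster=1&disableGC=false")
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()
	// store is a kv.Storage backed by the TiKV cluster.
}
```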
ErrDeadlock wraps *kvrpcpb.Deadlock to implement the error interface. It also marks if the deadlock is retryable.
func (d *ErrDeadlock) Error() string
type EtcdBackend interface {
	EtcdAddrs() ([]string, error)
	TLSConfig() *tls.Config
	StartGCWorker() error
}
EtcdBackend is used for judging whether a storage is a real TiKV.
type EtcdSafePointKV struct {
// contains filtered or unexported fields
}
EtcdSafePointKV implements SafePointKV at runtime
NewEtcdSafePointKV creates an instance of EtcdSafePointKV
func (w *EtcdSafePointKV) Close() error
Close implements the Close method for SafePointKV
func (w *EtcdSafePointKV) Get(k string) (string, error)
Get implements the Get method for SafePointKV
GetWithPrefix implements the GetWithPrefix method for SafePointKV
func (w *EtcdSafePointKV) Put(k string, v string) error
Put implements the Put method for SafePointKV
type GCHandler interface {
	// Start starts the GCHandler.
	Start()

	// Close closes the GCHandler.
	Close()
}
GCHandler runs garbage collection job.
KeyLocation is the region and range in which a key is located.
func (l *KeyLocation) Contains(key []byte) bool
Contains checks if key is in [StartKey, EndKey).
type Lock struct {
	Key             []byte
	Primary         []byte
	TxnID           uint64
	TTL             uint64
	TxnSize         uint64
	LockType        kvrpcpb.Op
	UseAsyncCommit  bool
	LockForUpdateTS uint64
	MinCommitTS     uint64
}
Lock represents a lock from tikv server.
NewLock creates a new *Lock.
type LockResolver struct {
// contains filtered or unexported fields
}
LockResolver resolves locks and also caches resolved txn status.
func NewLockResolver(etcdAddrs []string, security config.Security, opts ...pd.ClientOption) (*LockResolver, error)
NewLockResolver creates a LockResolver. It is exported for other pkg to use. For instance, binlog service needs to determine a transaction's commit state.
func (lr *LockResolver) BatchResolveLocks(bo *Backoffer, locks []*Lock, loc RegionVerID) (bool, error)
BatchResolveLocks resolves locks in a batch. Use it in gcworker only!
func (lr *LockResolver) GetTxnStatus(txnID uint64, callerStartTS uint64, primary []byte) (TxnStatus, error)
GetTxnStatus queries tikv-server for a txn's status (commit/rollback). If the primary key is still locked, it will launch a Rollback to abort it. To avoid unnecessarily aborting too many txns, it is wiser to wait a few seconds before calling it after Prewrite.
func (lr *LockResolver) ResolveLocks(bo *Backoffer, callerStartTS uint64, locks []*Lock) (int64, []uint64, error)
ResolveLocks tries to resolve Locks. The resolving process is in 3 steps:

1) Use the `lockTTL` to pick up all expired locks. Only locks that are too old are considered orphan locks and will be handled later. If all locks are expired, then all locks will be resolved, so the returned `ok` will be true; otherwise the caller should sleep a while before retrying.

2) For each lock, query the primary key to get the commit status of the txn that left the lock.

3) Send a `ResolveLock` cmd to the lock's region to resolve all locks that belong to the same transaction.
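A hedged sketch of standalone use, as the binlog service mentioned above might do (the PD address is a placeholder, and the caller supplies the txn identifiers):

```go
import (
	"fmt"

	"github.com/pingcap/tidb/config"
	"github.com/pingcap/tidb/store/tikv"
)

// checkTxn asks TiKV whether the transaction holding `primary` committed.
func checkTxn(txnID, callerStartTS uint64, primary []byte) error {
	lr, err := tikv.NewLockResolver([]string{"127.0.0.1:2379"}, config.Security{})
	if err != nil {
		return err
	}
	status, err := lr.GetTxnStatus(txnID, callerStartTS, primary)
	if err != nil {
		return err
	}
	if status.IsCommitted() {
		fmt.Println("committed at ts", status.CommitTS())
	}
	return nil
}
```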
type MPPClient struct {
// contains filtered or unexported fields
}
MPPClient serves MPP requests.
func (c *MPPClient) ConstructMPPTasks(ctx context.Context, req *kv.MPPBuildTasksRequest) ([]kv.MPPTaskMeta, error)
ConstructMPPTasks receives a ScheduleRequest, which is actually a collection of kv ranges. It allocates MPPTaskMetas for them and returns.
func (c *MPPClient) DispatchMPPTasks(ctx context.Context, dispatchReqs []*kv.MPPDispatchRequest) kv.Response
DispatchMPPTasks dispatches all the mpp tasks and waits for the responses.
type MockSafePointKV struct {
// contains filtered or unexported fields
}
MockSafePointKV implements SafePointKV for mock tests
func NewMockSafePointKV() *MockSafePointKV
NewMockSafePointKV creates an instance of MockSafePointKV
func (w *MockSafePointKV) Close() error
Close implements the Close method for SafePointKV
func (w *MockSafePointKV) Get(k string) (string, error)
Get implements the Get method for SafePointKV
GetWithPrefix implements the GetWithPrefix method for SafePointKV
func (w *MockSafePointKV) Put(k string, v string) error
Put implements the Put method for SafePointKV
PDError wraps *pdpb.Error to implement the error interface.
PlainMutation represents a single transaction operation.
type PlainMutations struct {
// contains filtered or unexported fields
}
PlainMutations contains transaction operations.
func NewPlainMutations(sizeHint int) PlainMutations
NewPlainMutations creates a PlainMutations object with sizeHint reserved.
func (c *PlainMutations) AppendMutation(mutation PlainMutation)
AppendMutation merges a single Mutation into the current mutations.
func (c *PlainMutations) GetKey(i int) []byte
GetKey returns the key at index.
func (c *PlainMutations) GetKeys() [][]byte
GetKeys returns the keys.
func (c *PlainMutations) GetOp(i int) pb.Op
GetOp returns the key op at index.
func (c *PlainMutations) GetOps() []pb.Op
GetOps returns the key ops.
func (c *PlainMutations) GetPessimisticFlags() []bool
GetPessimisticFlags returns the key pessimistic flags.
func (c *PlainMutations) GetValue(i int) []byte
GetValue returns the key value at index.
func (c *PlainMutations) GetValues() [][]byte
GetValues returns the key values.
func (c *PlainMutations) IsPessimisticLock(i int) bool
IsPessimisticLock returns the key pessimistic flag at index.
func (c *PlainMutations) Len() int
Len returns the count of mutations.
func (c *PlainMutations) MergeMutations(mutations PlainMutations)
MergeMutations appends the input mutations to the current mutations.
Push pushes another mutation into the mutations.
func (c *PlainMutations) Slice(from, to int) CommitterMutations
Slice returns the sub-mutations in range [from, to).
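A sketch of building and reading a mutation set. The PlainMutation field names are elided from this listing; the ones used here (`KeyOp`, `Key`, `Value`) are assumptions taken from the TiDB source of this era:

```go
import (
	"fmt"

	pb "github.com/pingcap/kvproto/pkg/kvrpcpb"
	"github.com/pingcap/tidb/store/tikv"
)

func buildMutations() {
	muts := tikv.NewPlainMutations(2) // reserve capacity for two entries
	// Field names below are assumed, not confirmed by this listing.
	muts.AppendMutation(tikv.PlainMutation{KeyOp: pb.Op_Put, Key: []byte("k1"), Value: []byte("v1")})
	muts.AppendMutation(tikv.PlainMutation{KeyOp: pb.Op_Del, Key: []byte("k2")})
	for i := 0; i < muts.Len(); i++ {
		fmt.Printf("%v %q\n", muts.GetOp(i), muts.GetKey(i))
	}
}
```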
RPCCanceller is a collector of rpc send cancel functions.
func NewRPCanceller() *RPCCanceller
NewRPCanceller creates an RPCCanceller with an initial state.
func (h *RPCCanceller) CancelAll()
CancelAll cancels all inflight rpc context.
WithCancel generates a new context with a cancel func.
type RPCCancellerCtxKey struct{}
RPCCancellerCtxKey is the context key used to attach the rpc send cancelFunc collector to the ctx.
type RPCContext struct {
	Region     RegionVerID
	Meta       *metapb.Region
	Peer       *metapb.Peer
	AccessIdx  AccessIndex
	Store      *Store
	Addr       string
	AccessMode AccessMode
}
RPCContext contains data that is needed to send RPC to a region.
func (c *RPCContext) String() string
RPCRuntimeStats indicates the RPC request count and consume time.
RangeTaskHandler is the type of functions that process a task of a key range. The function should calculate the Regions that succeeded or failed to perform the task. Returning an error from the handler means the error caused the whole task to be stopped.
type RangeTaskRunner struct {
// contains filtered or unexported fields
}
RangeTaskRunner splits a range into many ranges to process concurrently, making it convenient to send requests to all regions in the range. Because of merging and splitting, it's possible that multiple requests for disjoint ranges are sent to the same region.
func NewRangeTaskRunner(
	name string,
	store Storage,
	concurrency int,
	handler RangeTaskHandler,
) *RangeTaskRunner
NewRangeTaskRunner creates a RangeTaskRunner.
`requestCreator` is the function used to create an RPC request according to the given range. `responseHandler` is the function to process the responses or errors. If `responseHandler` returns an error, the whole job will be canceled.
func (s *RangeTaskRunner) CompletedRegions() int
CompletedRegions returns how many regions have been sent requests.
func (s *RangeTaskRunner) FailedRegions() int
FailedRegions returns how many regions have failed to do the task.
RunOnRange runs the task on the given range. Empty startKey or endKey means unbounded.
func (s *RangeTaskRunner) SetRegionsPerTask(regionsPerTask int)
SetRegionsPerTask sets how many regions are in a divided task. Since regions may split and merge, it's possible that a sub task does not contain exactly the specified number of regions.
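A hedged usage sketch. The RangeTaskHandler signature is elided from this listing, so the `func(context.Context, kv.KeyRange) (tikv.RangeTaskStat, error)` shape, the RangeTaskStat fields, and the RunOnRange parameters used below are assumptions:

```go
import (
	"context"
	"fmt"

	"github.com/pingcap/tidb/kv"
	"github.com/pingcap/tidb/store/tikv"
)

func runRangeTask(ctx context.Context, store tikv.Storage) error {
	handler := func(ctx context.Context, r kv.KeyRange) (tikv.RangeTaskStat, error) {
		// Process all regions overlapping [r.StartKey, r.EndKey) here.
		return tikv.RangeTaskStat{CompletedRegions: 1}, nil
	}
	runner := tikv.NewRangeTaskRunner("demo-task", store, 8, handler)
	if err := runner.RunOnRange(ctx, nil, nil); err != nil { // empty keys = unbounded
		return err
	}
	fmt.Println("completed regions:", runner.CompletedRegions())
	return nil
}
```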
RangeTaskStat is used to count Regions that completed or failed to do the task.
type RawKVClient struct {
// contains filtered or unexported fields
}
RawKVClient is a client of TiKV server which is used as a key-value storage, only GET/PUT/DELETE commands are supported.
func NewRawKVClient(pdAddrs []string, security config.Security, opts ...pd.ClientOption) (*RawKVClient, error)
NewRawKVClient creates a client with PD cluster addrs.
func (c *RawKVClient) BatchDelete(keys [][]byte) error
BatchDelete deletes key-value pairs from TiKV
func (c *RawKVClient) BatchGet(keys [][]byte) ([][]byte, error)
BatchGet queries values with the keys.
func (c *RawKVClient) BatchPut(keys, values [][]byte) error
BatchPut stores key-value pairs to TiKV.
func (c *RawKVClient) Close() error
Close closes the client.
func (c *RawKVClient) ClusterID() uint64
ClusterID returns the TiKV cluster ID.
func (c *RawKVClient) Delete(key []byte) error
Delete deletes a key-value pair from TiKV.
func (c *RawKVClient) DeleteRange(startKey []byte, endKey []byte) error
DeleteRange deletes all key-value pairs in a range from TiKV
func (c *RawKVClient) Get(key []byte) ([]byte, error)
Get queries value with the key. When the key does not exist, it returns `nil, nil`.
func (c *RawKVClient) Put(key, value []byte) error
Put stores a key-value pair to TiKV.
func (c *RawKVClient) ReverseScan(startKey, endKey []byte, limit int) (keys [][]byte, values [][]byte, err error)
ReverseScan queries continuous kv pairs in range [endKey, startKey), up to limit pairs. The direction is the opposite of Scan, upper to lower. If endKey is empty, it means unbounded. If you want to include the startKey or exclude the endKey, push a '\0' to the key. For example, to scan (endKey, startKey], you can write: `ReverseScan(push(startKey, '\0'), push(endKey, '\0'), limit)`. It doesn't support scanning from "", because locating the last Region is not yet implemented.
func (c *RawKVClient) Scan(startKey, endKey []byte, limit int) (keys [][]byte, values [][]byte, err error)
Scan queries continuous kv pairs in range [startKey, endKey), up to limit pairs. If endKey is empty, it means unbounded. If you want to exclude the startKey or include the endKey, push a '\0' to the key. For example, to scan (startKey, endKey], you can write: `Scan(push(startKey, '\0'), push(endKey, '\0'), limit)`.
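A usage sketch tying these together (the PD address is a placeholder):

```go
import (
	"fmt"
	"log"

	"github.com/pingcap/tidb/config"
	"github.com/pingcap/tidb/store/tikv"
)

func rawKVDemo() {
	cli, err := tikv.NewRawKVClient([]string{"127.0.0.1:2379"}, config.Security{})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	if err := cli.Put([]byte("company"), []byte("PingCAP")); err != nil {
		log.Fatal(err)
	}
	val, err := cli.Get([]byte("company")) // returns nil, nil if the key is absent
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("company=%s\n", val)

	// Scan ["a", "b"]: append '\x00' to the end key to make it inclusive,
	// as described in the Scan documentation above.
	keys, values, err := cli.Scan([]byte("a"), append([]byte("b"), '\x00'), 64)
	if err != nil {
		log.Fatal(err)
	}
	for i := range keys {
		fmt.Printf("%s=%s\n", keys[i], values[i])
	}
}
```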
type Region struct {
// contains filtered or unexported fields
}
Region represents a kv region
func (r *Region) AnyStorePeer(rs *RegionStore, followerStoreSeed uint32, op *storeSelectorOp) (store *Store, peer *metapb.Peer, accessIdx AccessIndex, storeIdx int)
AnyStorePeer returns a leader or follower store with the associated peer.
Contains checks whether the key is in the region; for the maximum region, the endKey is empty. startKey <= key < endKey.
ContainsByEnd checks whether the region contains the greatest key that is less than the given key; for the maximum region, the endKey is empty. startKey < key <= endKey.
EndKey returns EndKey.
func (r *Region) FollowerStorePeer(rs *RegionStore, followerStoreSeed uint32, op *storeSelectorOp) (store *Store, peer *metapb.Peer, accessIdx AccessIndex, storeIdx int)
FollowerStorePeer returns a follower store with follower peer.
GetID returns id.
GetLeaderPeerID returns leader peer ID.
GetLeaderStoreID returns the store ID of the leader region.
GetMeta returns region meta.
StartKey returns StartKey.
func (r *Region) VerID() RegionVerID
VerID returns the Region's RegionVerID.
func (r *Region) WorkStorePeer(rs *RegionStore) (store *Store, peer *metapb.Peer, accessIdx AccessIndex, storeIdx int)
WorkStorePeer returns current work store with work peer.
type RegionBatchRequestSender struct { RegionRequestSender }
RegionBatchRequestSender sends BatchCop requests to the TiFlash server in a streaming manner.
func NewRegionBatchRequestSender(cache *RegionCache, client Client) *RegionBatchRequestSender
NewRegionBatchRequestSender creates a RegionBatchRequestSender object.
type RegionCache struct {
// contains filtered or unexported fields
}
RegionCache caches Regions loaded from PD.
func NewRegionCache(pdClient pd.Client) *RegionCache
NewRegionCache creates a RegionCache.
func (c *RegionCache) BatchLoadRegionsFromKey(bo *Backoffer, startKey []byte, count int) ([]byte, error)
BatchLoadRegionsFromKey loads at most the given number of regions into the RegionCache, starting from the given startKey. It returns the endKey of the last loaded region. If some of the regions have no leader, their entries in the RegionCache will not be updated.
func (c *RegionCache) BatchLoadRegionsWithKeyRange(bo *Backoffer, startKey []byte, endKey []byte, count int) (regions []*Region, err error)
BatchLoadRegionsWithKeyRange loads at most the given number of regions into the RegionCache, within the given key range from startKey to endKey. It returns the loaded regions.
func (c *RegionCache) Close()
Close releases region cache's resource.
func (c *RegionCache) GetTiFlashRPCContext(bo *Backoffer, id RegionVerID) (*RPCContext, error)
GetTiFlashRPCContext returns the RPCContext for a region that must access a TiFlash store. If it returns nil, the region must be out of date and already dropped from the cache, or no TiFlash store was found.
func (c *RegionCache) GetTiKVRPCContext(bo *Backoffer, id RegionVerID, replicaRead kv.ReplicaReadType, followerStoreSeed uint32, opts ...StoreSelectorOption) (*RPCContext, error)
GetTiKVRPCContext returns RPCContext for a region. If it returns nil, the region must be out of date and already dropped from cache.
func (c *RegionCache) GroupKeysByRegion(bo *Backoffer, keys [][]byte, filter func(key, regionStartKey []byte) bool) (map[RegionVerID][][]byte, RegionVerID, error)
GroupKeysByRegion separates keys into groups by the Regions they belong to. Specially, it also returns the first key's region, which may be used as the 'PrimaryLockKey' and should be committed ahead of others. filter is used to filter out unwanted keys.
func (c *RegionCache) InvalidateCachedRegion(id RegionVerID)
InvalidateCachedRegion removes a cached Region.
func (c *RegionCache) ListRegionIDsInKeyRange(bo *Backoffer, startKey, endKey []byte) (regionIDs []uint64, err error)
ListRegionIDsInKeyRange lists ids of regions in [start_key,end_key].
func (c *RegionCache) LoadRegionsInKeyRange(bo *Backoffer, startKey, endKey []byte) (regions []*Region, err error)
LoadRegionsInKeyRange lists regions in [start_key,end_key].
func (c *RegionCache) LocateEndKey(bo *Backoffer, key []byte) (*KeyLocation, error)
LocateEndKey searches for the region and range that the key is located. Unlike LocateKey, start key of a region is exclusive and end key is inclusive.
func (c *RegionCache) LocateKey(bo *Backoffer, key []byte) (*KeyLocation, error)
LocateKey searches for the region and range that the key is located.
func (c *RegionCache) LocateRegionByID(bo *Backoffer, regionID uint64) (*KeyLocation, error)
LocateRegionByID searches for the region with ID.
func (c *RegionCache) OnRegionEpochNotMatch(bo *Backoffer, ctx *RPCContext, currentRegions []*metapb.Region) error
OnRegionEpochNotMatch removes the old region and inserts new regions into the cache.
func (c *RegionCache) OnSendFail(bo *Backoffer, ctx *RPCContext, scheduleReload bool, err error)
OnSendFail handles send request fail logic.
func (c *RegionCache) PDClient() pd.Client
PDClient returns the pd.Client in RegionCache.
func (c *RegionCache) UpdateLeader(regionID RegionVerID, leaderStoreID uint64, currentPeerIdx AccessIndex)
UpdateLeader updates the region cache with newer leader info.
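A hedged sketch of direct cache use. It assumes KeyLocation exposes a `Region RegionVerID` field (elided from this listing) and that the caller supplies a pd client, whose import path varies by TiDB version:

```go
import (
	"context"
	"fmt"

	pd "github.com/tikv/pd/client" // import path varies by TiDB version
	"github.com/pingcap/tidb/store/tikv"
)

func locateDemo(pdClient pd.Client, keys [][]byte) error {
	cache := tikv.NewRegionCache(pdClient)
	defer cache.Close()

	bo := tikv.NewBackofferWithVars(context.Background(), 5000, nil)
	loc, err := cache.LocateKey(bo, keys[0])
	if err != nil {
		return err
	}
	fmt.Println("key in region:", loc.Contains(keys[0]))

	// Group a batch of keys by the region they belong to; `first` is the
	// first key's region, a candidate for the primary lock key.
	groups, first, err := cache.GroupKeysByRegion(bo, keys, nil)
	if err != nil {
		return err
	}
	fmt.Printf("%d regions, primary candidate in region %d\n", len(groups), first.GetID())
	return nil
}
```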
type RegionRequestRuntimeStats struct {
	Stats map[tikvrpc.CmdType]*RPCRuntimeStats
}
RegionRequestRuntimeStats records the runtime stats of sending region requests.
func NewRegionRequestRuntimeStats() RegionRequestRuntimeStats
NewRegionRequestRuntimeStats returns a new RegionRequestRuntimeStats.
func (r *RegionRequestRuntimeStats) Clone() RegionRequestRuntimeStats
Clone returns a copy of itself.
func (r *RegionRequestRuntimeStats) Merge(rs RegionRequestRuntimeStats)
Merge merges other RegionRequestRuntimeStats.
func (r *RegionRequestRuntimeStats) String() string
String implements fmt.Stringer interface.
type RegionRequestSender struct {
	RegionRequestRuntimeStats
	// contains filtered or unexported fields
}
RegionRequestSender sends KV/Cop requests to tikv server. It handles network errors and some region errors internally.
Typically, a KV/Cop request is bound to a region: all keys that are involved in the request should be located in that region. The sending process begins by looking up the address of the target region's leader store from the cache, and the request is then sent to the destination tikv server over a TCP connection. If the region is updated, which can be caused by leader transfer, region split, region merge, or region balance, the tikv server may not be able to process the request and will send back a RegionError. RegionRequestSender takes care of errors that are not relevant to the region range, such as 'I/O timeout', 'NotLeader', and 'ServerIsBusy'. For other errors, since the region range has changed, the request may need to be split, so we simply return the error to the caller.
func NewRegionRequestSender(regionCache *RegionCache, client Client) *RegionRequestSender
NewRegionRequestSender creates a new sender.
func (s *RegionRequestSender) SendReq(bo *Backoffer, req *tikvrpc.Request, regionID RegionVerID, timeout time.Duration) (*tikvrpc.Response, error)
SendReq sends a request to tikv server.
func (s *RegionRequestSender) SendReqCtx(
	bo *Backoffer,
	req *tikvrpc.Request,
	regionID RegionVerID,
	timeout time.Duration,
	sType kv.StoreType,
) (
	resp *tikvrpc.Response,
	rpcCtx *RPCContext,
	err error,
)
SendReqCtx sends a request to tikv server and return response and RPCCtx of this RPC.
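A hedged sketch of a manual send. It assumes `tikvrpc.NewRequest` and `CmdRawGet` from the tikvrpc subpackage, and that KeyLocation exposes a `Region RegionVerID` field as noted earlier:

```go
import (
	"context"

	"github.com/pingcap/kvproto/pkg/kvrpcpb"
	"github.com/pingcap/tidb/store/tikv"
	"github.com/pingcap/tidb/store/tikv/tikvrpc"
)

func sendRawGet(cache *tikv.RegionCache, client tikv.Client, key []byte) (*tikvrpc.Response, error) {
	bo := tikv.NewBackofferWithVars(context.Background(), 5000, nil)
	loc, err := cache.LocateKey(bo, key)
	if err != nil {
		return nil, err
	}
	sender := tikv.NewRegionRequestSender(cache, client)
	req := tikvrpc.NewRequest(tikvrpc.CmdRawGet, &kvrpcpb.RawGetRequest{Key: key})
	// SendReq handles network errors and some region errors internally.
	return sender.SendReq(bo, req, loc.Region, tikv.ReadTimeoutMedium)
}
```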
type RegionStore struct {
// contains filtered or unexported fields
}
RegionStore represents region stores info. It is stored as an unsafe.Pointer and loaded atomically.
type RegionVerID struct {
// contains filtered or unexported fields
}
RegionVerID is a unique ID that can identify a Region at a specific version.
func (r *RegionVerID) GetConfVer() uint64
GetConfVer returns the conf ver of the region's epoch
func (r *RegionVerID) GetID() uint64
GetID returns the id of the region
func (r *RegionVerID) GetVer() uint64
GetVer returns the version of the region's epoch
type RelatedSchemaChange struct {
	PhyTblIDS        []int64
	ActionTypes      []uint64
	LatestInfoSchema SchemaVer
	Amendable        bool
}
RelatedSchemaChange contains information about schema diff between two schema versions.
type SafePointKV interface {
	Put(k string, v string) error
	Get(k string) (string, error)
	GetWithPrefix(k string) ([]*mvccpb.KeyValue, error)
	Close() error
}
SafePointKV is used for seamless integration of mock tests and runtime.
type Scanner struct {
// contains filtered or unexported fields
}
Scanner supports tikv scan
Close closes the iterator.
Key returns the key.
Next moves to the next element.
Valid returns whether the iterator is valid.
Value returns the value.
type SchemaAmender interface {
	// AmendTxn is the amend entry; new mutations will be generated based on the input mutations using the schema change info.
	// The returned results are the mutations that need to be prewritten and the mutations that need to be cleaned up.
	AmendTxn(ctx context.Context, startInfoSchema SchemaVer, change *RelatedSchemaChange, mutations CommitterMutations) (CommitterMutations, error)
}
SchemaAmender is used by pessimistic transactions to amend commit mutations for schema change during 2pc.
type SchemaVer interface {
	// SchemaMetaVersion returns the meta schema version.
	SchemaMetaVersion() int64
}
SchemaVer is the infoSchema which will return the schema version.
type SnapshotRuntimeStats struct {
// contains filtered or unexported fields
}
SnapshotRuntimeStats records the runtime stats of snapshot.
func (rs *SnapshotRuntimeStats) Clone() execdetails.RuntimeStats
Clone implements the RuntimeStats interface.
func (rs *SnapshotRuntimeStats) Merge(other execdetails.RuntimeStats)
Merge implements the RuntimeStats interface.
func (rs *SnapshotRuntimeStats) String() string
String implements fmt.Stringer interface.
func (rs *SnapshotRuntimeStats) Tp() int
Tp implements the RuntimeStats interface.
type Storage interface {
	kv.Storage

	// GetRegionCache gets the RegionCache.
	GetRegionCache() *RegionCache

	// SendReq sends a request to TiKV.
	SendReq(bo *Backoffer, req *tikvrpc.Request, regionID RegionVerID, timeout time.Duration) (*tikvrpc.Response, error)

	// GetLockResolver gets the LockResolver.
	GetLockResolver() *LockResolver

	// GetSafePointKV gets the SafePointKV.
	GetSafePointKV() SafePointKV

	// UpdateSPCache updates the cache of safe point.
	UpdateSPCache(cachedSP uint64, cachedTime time.Time)

	// GetGCHandler gets the GCHandler.
	GetGCHandler() GCHandler

	// SetOracle sets the Oracle.
	SetOracle(oracle oracle.Oracle)

	// SetTiKVClient sets the TiKV client.
	SetTiKVClient(client Client)

	// GetTiKVClient gets the TiKV client.
	GetTiKVClient() Client

	// Closed returns the closed channel.
	Closed() <-chan struct{}
}
Storage represents the kv.Storage that runs on TiKV.
type Store struct {
// contains filtered or unexported fields
}
Store contains a kv process's address.
func (s *Store) IsLabelsMatch(labels []*metapb.StoreLabel) bool
IsLabelsMatch returns whether the store's labels match the target labels.
func (s *Store) IsSameLabels(labels []*metapb.StoreLabel) bool
IsSameLabels returns whether the store has the same labels as the target labels.
type StoreSelectorOption func(*storeSelectorOp)
StoreSelectorOption configures storeSelectorOp.
func WithMatchLabels(labels []*metapb.StoreLabel) StoreSelectorOption
WithMatchLabels indicates that stores with matching labels should be selected.
type TxnStatus struct {
// contains filtered or unexported fields
}
TxnStatus represents a txn's final status. It should be Lock or Commit or Rollback.
Action returns what the CheckTxnStatus request has done to the transaction.
CommitTS returns the txn's commitTS. It is valid iff `IsCommitted` is true.
IsCommitted returns true if the txn's final status is Commit.
StatusCacheable checks whether the transaction status is certain. True will be returned if its status is certain:

If the transaction is already committed, the result can be cached. Otherwise:
- If l.LockType is a pessimistic lock type:
  - if its primary lock is pessimistic too, the check txn status result should not be cached;
  - if its primary lock is of prewrite lock type, the check txn status result can be cached.
- If l.LockType is a prewrite lock type:
  - always cache the check txn status result.

For prewrite locks, their primary keys should ALWAYS be the correct one and will NOT change.
TTL returns the TTL of the transaction if the transaction is still alive.
Path | Synopsis
---|---
gcworker |
latch |
oracle |
oracle/oracles |
storeutil |
tikvrpc |
Package tikv imports 86 packages and is imported by 173 packages. Updated 2021-01-27.