dtstruct

package
v0.0.0-...-6b32d26
Warning

This package is not in the latest version of its module.

Published: Nov 15, 2021 License: Apache-2.0 Imports: 26 Imported by: 96

Documentation

Overview

Copyright 2021 SANGFOR TECHNOLOGIES

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Package collection holds routines for collecting "high frequency" metrics and handling their auto-expiry based on a configured retention time. This becomes more interesting as the number of MySQL servers monitored by ham4db increases.

Most monitoring systems look at metrics over a period such as 1, 10, 30 or 60 seconds, but even at one-second resolution ham4db may have polled a number of servers.

It can be helpful to collect the raw values and then let external monitoring pull, via an HTTP API call, either pre-cooked aggregate data or the raw data for custom analysis over the requested period.

This is expected to be used for the following types of metrics:

  • discovery metrics (time to poll a MySQL server and collect status)
  • queue metrics (statistics on the discovery queue itself)
  • query metrics (statistics on the number of queries made to the backend MySQL database)

This code can add a new metric without worrying about removing it later, and the code that serves API requests can pull out the data for the requested time period when needed.

For the current metrics two API URLs are provided: one returns the raw data and the other returns a single set of aggregate data suitable for easy collection by monitoring systems.

Expiry is triggered by default if the collection is created via CreateOrReturnCollection() and uses an expiry period of DiscoveryCollectionRetentionSeconds. It can also be enabled by calling StartAutoExpiration() after setting the required expire period with SetExpirePeriod().

This triggers periodic checks (every second) that remove metrics older than the specified time. If expiry is not enabled, data is collected but never freed, and ham4db will eventually run out of memory.

Current code uses DiscoveryCollectionRetentionSeconds as the time to keep metric data.
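
The snippet below is a hedged sketch of that flow using the Collection type documented further down (imports omitted). The function name collectAndRead and the metric value m are assumptions; m is any value satisfying MetricInterface, whose method set is not reproduced in this listing, and collections would normally be obtained via CreateOrReturnCollection().

	// collectAndRead configures retention, records one metric, and reads back
	// the raw metrics from the last minute, e.g. to serve an API request.
	func collectAndRead(c *dtstruct.Collection, m dtstruct.MetricInterface) ([]dtstruct.MetricInterface, error) {
		c.SetExpirePeriod(120 * time.Second) // retention window for collected metrics
		c.StartAutoExpiration()              // expiry check runs every second
		// Call c.StopAutoExpiration() when the collection is no longer needed.

		if err := c.Append(m); err != nil {
			return nil, err
		}
		return c.Since(time.Now().Add(-time.Minute))
	}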

Index

Constants

View Source
const (
	NoProblem                                               AnalysisCode = "NoProblem"
	DeadMasterWithoutReplicas                                            = "DeadMasterWithoutReplicas"
	DeadMaster                                                           = "DeadMaster"
	DeadMasterAndReplicas                                                = "DeadMasterAndReplicas"
	DeadMasterAndSomeReplicas                                            = "DeadMasterAndSomeReplicas"
	UnreachableMasterWithLaggingReplicas                                 = "UnreachableMasterWithLaggingReplicas"
	UnreachableMaster                                                    = "UnreachableMaster"
	MasterSingleReplicaNotReplicating                                    = "MasterSingleReplicaNotReplicating"
	MasterSingleReplicaDead                                              = "MasterSingleReplicaDead"
	MasterSomeReplicaDead                                                = "MasterSomeReplicaDead"
	AllMasterReplicasNotReplicating                                      = "AllMasterReplicasNotReplicating"
	AllMasterReplicasNotReplicatingOrDead                                = "AllMasterReplicasNotReplicatingOrDead"
	LockedSemiSyncMasterHypothesis                                       = "LockedSemiSyncMasterHypothesis"
	LockedSemiSyncMaster                                                 = "LockedSemiSyncMaster"
	MasterWithoutReplicas                                                = "MasterWithoutReplicas"
	DeadCoMaster                                                         = "DeadCoMaster"
	DeadCoMasterAndSomeReplicas                                          = "DeadCoMasterAndSomeReplicas"
	UnreachableCoMaster                                                  = "UnreachableCoMaster"
	AllCoMasterReplicasNotReplicating                                    = "AllCoMasterReplicasNotReplicating"
	DeadIntermediateMaster                                               = "DeadIntermediateMaster"
	DeadIntermediateMasterWithSingleReplica                              = "DeadIntermediateMasterWithSingleReplica"
	DeadIntermediateMasterWithSingleReplicaFailingToConnect              = "DeadIntermediateMasterWithSingleReplicaFailingToConnect"
	DeadIntermediateMasterAndSomeReplicas                                = "DeadIntermediateMasterAndSomeReplicas"
	DeadIntermediateMasterAndReplicas                                    = "DeadIntermediateMasterAndReplicas"
	UnreachableIntermediateMasterWithLaggingReplicas                     = "UnreachableIntermediateMasterWithLaggingReplicas"
	UnreachableIntermediateMaster                                        = "UnreachableIntermediateMaster"
	AllIntermediateMasterReplicasFailingToConnectOrDead                  = "AllIntermediateMasterReplicasFailingToConnectOrDead"
	AllIntermediateMasterReplicasNotReplicating                          = "AllIntermediateMasterReplicasNotReplicating"
	FirstTierReplicaFailingToConnectToMaster                             = "FirstTierReplicaFailingToConnectToMaster"
	BinlogServerFailingToConnectToMaster                                 = "BinlogServerFailingToConnectToMaster"
	// Group replication problems
	DeadReplicationGroupMemberWithReplicas = "DeadReplicationGroupMemberWithReplicas"

	//for opengauss
	DownInstance       = "DownInstance"
	DeadStandby        = "DeadStandby"
	DeadCascadeStandby = "DeadCascadeStandby"
	NeedRepair         = "NeedRepair"
	DuplicatePrimary   = "DuplicatePrimary"
)
View Source
const (
	StatementAndMixedLoggingReplicasStructureWarning     StructureAnalysisCode = "StatementAndMixedLoggingReplicasStructureWarning"
	StatementAndRowLoggingReplicasStructureWarning                             = "StatementAndRowLoggingReplicasStructureWarning"
	MixedAndRowLoggingReplicasStructureWarning                                 = "MixedAndRowLoggingReplicasStructureWarning"
	MultipleMajorVersionsLoggingReplicasStructureWarning                       = "MultipleMajorVersionsLoggingReplicasStructureWarning"
	NoLoggingReplicasStructureWarning                                          = "NoLoggingReplicasStructureWarning"
	DifferentGTIDModesStructureWarning                                         = "DifferentGTIDModesStructureWarning"
	ErrantGTIDStructureWarning                                                 = "ErrantGTIDStructureWarning"
	NoFailoverSupportStructureWarning                                          = "NoFailoverSupportStructureWarning"
	NoWriteableMasterStructureWarning                                          = "NoWriteableMasterStructureWarning"
	NotEnoughValidSemiSyncReplicasStructureWarning                             = "NotEnoughValidSemiSyncReplicasStructureWarning"
)
View Source
const (
	ForceMasterFailoverCommandHint    string = "force-master-failover"
	ForceMasterTakeoverCommandHint    string = "force-master-takeover"
	GracefulMasterTakeoverCommandHint string = "graceful-master-takeover"
)
View Source
const (
	NotMasterRecovery              RecoveryType = "NotMasterRecovery"
	MasterRecovery                              = "MasterRecovery"
	CoMasterRecovery                            = "CoMasterRecovery"
	IntermediateMasterRecovery                  = "IntermediateMasterRecovery"
	ReplicationGroupMemberRecovery              = "ReplicationGroupMemberRecovery"
)
View Source
const ReasonableDiscoveryLatency = 500 * time.Millisecond

Variables

View Source
var DBTypeDefaultPortMap = make(map[string]int)
View Source
var DBTypeMap = make(map[string]bool)
View Source
var HamHandlerMap = make(map[string]HamHandler)
View Source
var InstanceAdaptorMap = make(map[string]InstanceAdaptor)
View Source
var ProcessToken = NewToken()
View Source
var SnapshotHandlerMap = make(map[string]SnapshotHandler)
View Source
var (
	TagEqualsRegexp = regexp.MustCompile("^([^=]+)=(.*)$")
)

Functions

func FormatEventCleanly

func FormatEventCleanly(event BinlogEvent, length *int) string

FormatEventCleanly formats the event information for debug output; it is used by GetNextBinlogCoordinatesToMatch to format debug information consistently.

func GetDatabaseType

func GetDatabaseType(dbt string) string

GetDatabaseType checks that the database type exists and is enabled.

func GetDefaultPort

func GetDefaultPort(dbt string) int

GetDefaultPort returns the default port for the given database type.
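
As a hedged illustration of how these helpers relate to the package-level maps above (the "mysql" entries are made up for the example; real registration may go through other project code):

	// Illustrative registration of a database type and its default port.
	dtstruct.DBTypeMap["mysql"] = true
	dtstruct.DBTypeDefaultPortMap["mysql"] = 3306

	if dtstruct.IsTypeValid("mysql") {
		dbt := dtstruct.GetDatabaseType("mysql") // validated/enabled type name
		port := dtstruct.GetDefaultPort(dbt)     // 3306 in this sketch
		fmt.Println(dbt, port)
	}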

func GetMaintenanceOwner

func GetMaintenanceOwner() string

func IsBannedFromBeingCandidateReplica

func IsBannedFromBeingCandidateReplica(replica InstanceAdaptor) bool

IsBannedFromBeingCandidateReplica checks whether the replica is banned from being a candidate.

func IsSibling

func IsSibling(instance0, instance1 InstanceAdaptor) bool

IsSibling checks whether both instances are replicating from the same master.

func IsSmallerMajorVersion

func IsSmallerMajorVersion(version string, otherVersion string) bool

IsSmallerMajorVersion tests two versions against one another and returns true if the former is a smaller "major" version than the latter, e.g. 5.5.36 is NOT a smaller major version compared to 5.5.40, but IS compared to 5.6.9.

func IsTypeValid

func IsTypeValid(dbt string) (ok bool)

IsTypeValid checks whether the database type is valid.

func IsUpstreamOf

func IsUpstreamOf(upstream, downstream InstanceAdaptor) bool

IsUpstreamOf checks whether an instance is the upstream of another

func MajorVersion

func MajorVersion(version string) []string

MajorVersion returns a MySQL major version number (e.g. given "5.5.36" it returns "5.5")
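
For example (a small sketch of the documented behaviour; the printed values are assumptions based on the descriptions above):

	fmt.Println(dtstruct.MajorVersion("5.6.9"))                     // major version components, e.g. [5 6]
	fmt.Println(dtstruct.IsSmallerMajorVersion("5.5.36", "5.5.40")) // false: same major version
	fmt.Println(dtstruct.IsSmallerMajorVersion("5.5.36", "5.6.9"))  // true: 5.5 < 5.6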

func MappedClusterNameToAlias

func MappedClusterNameToAlias(clusterName string) string

MappedClusterNameToAlias attempts to match a cluster with an alias based on the configured ClusterNameToAlias map.

func NewMajorVersionsSortedByCount

func NewMajorVersionsSortedByCount(versionsCount map[string]int) *majorVersionsSortedByCount

func Rpad

func Rpad(coordinates LogCoordinates, length *int) string

Rpad pads the binlog coordinates to a given length. If the length increases, the stored value is updated so it can be reused later; this ensures consistent formatting in debug output.

func SetMaintenanceOwner

func SetMaintenanceOwner(owner string)

Types

type APIResponse

type APIResponse struct {
	HttpStatus int
	Code       APIResponseCode
	Message    string
	Detail     interface{}
}

APIResponse is a response returned as JSON to various requests.

func NewApiResponse

func NewApiResponse(code APIResponseCode, message string, detail interface{}) *APIResponse

NewApiResponse generates a new API response.

type APIResponseCode

type APIResponseCode int

APIResponseCode is an OK/ERROR response code

const (
	ERROR APIResponseCode = iota
	OK
)

func (*APIResponseCode) HttpStatus

func (this *APIResponseCode) HttpStatus() int

HttpStatus returns the respective HTTP status for this response

func (*APIResponseCode) MarshalJSON

func (this *APIResponseCode) MarshalJSON() ([]byte, error)

func (*APIResponseCode) String

func (this *APIResponseCode) String() string
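
A brief sketch of how these pieces combine when answering an API request (the json handling is assumed client code, not part of this package):

	resp := dtstruct.NewApiResponse(dtstruct.OK, "instance discovered", nil)
	body, _ := json.Marshal(resp)    // Code is marshaled via its MarshalJSON method
	status := resp.Code.HttpStatus() // HTTP status matching the OK/ERROR code
	fmt.Println(status, string(body))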

type Agent

type Agent struct {
	IP         string
	Port       int
	Hostname   string
	Interval   int
	LastUpdate time.Time
}

func (*Agent) GetAddr

func (a *Agent) GetAddr() string

type AggregatedWriteBufferMetric

type AggregatedWriteBufferMetric struct {
	InstanceWriteBufferSize           int // config setting
	InstanceFlushIntervalMilliseconds int // config setting
	CountInstances                    int
	MaxInstances                      float64
	MeanInstances                     float64
	MedianInstances                   float64
	P95Instances                      float64
	MaxWaitSeconds                    float64
	MeanWaitSeconds                   float64
	MedianWaitSeconds                 float64
	P95WaitSeconds                    float64
	MaxWriteSeconds                   float64
	MeanWriteSeconds                  float64
	MedianWriteSeconds                float64
	P95WriteSeconds                   float64
}

func AggregatedSince

func AggregatedSince(c *Collection, t time.Time) AggregatedWriteBufferMetric

AggregatedSince returns the aggregated write buffer metrics for the given period from the values provided.

type AnalysisCode

type AnalysisCode string

type AnalysisInstanceType

type AnalysisInstanceType string
const (
	AnalysisInstanceTypeMaster             AnalysisInstanceType = "master"
	AnalysisInstanceTypeCoMaster           AnalysisInstanceType = "co-master"
	AnalysisInstanceTypeIntermediateMaster AnalysisInstanceType = "intermediate-master"
	AnalysisInstanceTypeGroupMember        AnalysisInstanceType = "group-member"
)

type AnalysisMap

type AnalysisMap map[string](*ReplicationAnalysis)

type Audit

type Audit struct {
	AuditId          int64
	AuditTimestamp   string
	AuditType        string
	AuditInstanceKey InstanceKey
	Message          string
}

Audit presents a single audit entry (namely in the database)

type BinlogEvent

type BinlogEvent struct {
	Coordinates  LogCoordinates
	NextEventPos int64
	EventType    string
	Info         string
}

func (*BinlogEvent) Equals

func (this *BinlogEvent) Equals(other *BinlogEvent) bool

func (*BinlogEvent) EqualsIgnoreCoordinates

func (this *BinlogEvent) EqualsIgnoreCoordinates(other *BinlogEvent) bool

func (*BinlogEvent) NextBinlogCoordinates

func (this *BinlogEvent) NextBinlogCoordinates() LogCoordinates

func (*BinlogEvent) NormalizeInfo

func (this *BinlogEvent) NormalizeInfo()

type BinlogEventCursor

type BinlogEventCursor struct {
	CachedEvents      []BinlogEvent
	CurrentEventIndex int
	// contains filtered or unexported fields
}

func NewBinlogEventCursor

func NewBinlogEventCursor(startCoordinates LogCoordinates, fetchNextEventsFunc func(LogCoordinates) ([]BinlogEvent, error)) BinlogEventCursor

NewBinlogEventCursor creates a cursor over binlog events. fetchNextEventsFunc is expected to return events starting at a given position, automatically fetching from the next binary log when no more rows are found in the current one. It should return an empty array with no error at the end of the binlogs, and an error upon error.
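
A hedged sketch of constructing such a cursor; readEventsFrom is a hypothetical helper that fetches events from the server starting at the given coordinates, and the recursion level passed to NextRealEvent is an assumption:

	start := dtstruct.LogCoordinates{LogFile: "mysql-bin.000123", LogPos: 4}
	cursor := dtstruct.NewBinlogEventCursor(start, func(c dtstruct.LogCoordinates) ([]dtstruct.BinlogEvent, error) {
		// Return events at/after c; an empty slice with nil error signals end of binlogs.
		return readEventsFrom(c)
	})
	event, err := cursor.NextRealEvent(0) // 0 as the initial recursion level (assumption)
	next, _ := cursor.GetNextCoordinates()
	fmt.Println(event, next, err)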

func (*BinlogEventCursor) GetNextCoordinates

func (this *BinlogEventCursor) GetNextCoordinates() (LogCoordinates, error)

GetNextCoordinates returns the binlog coordinates of the next entry not yet processed by the cursor. When the cursor terminates (consumes the last entry), these coordinates indicate what the coordinates of the next binlog entry will be. The value of this function is used by match-below to move a replica behind another, after exhausting the shared binlog entries of both.

func (*BinlogEventCursor) NextRealEvent

func (this *BinlogEventCursor) NextRealEvent(recursionLevel int) (*BinlogEvent, error)

NextRealEvent returns the next event from the binlog that is not a meta/control event (start-of-binary-log, rotate-binary-log, etc.).

type BinlogFormatSortedByCount

type BinlogFormatSortedByCount struct {
	// contains filtered or unexported fields
}

BinlogFormatSortedByCount sorts binlog formats: primary sort by count of appearances, secondary sort by format.

func NewBinlogFormatSortedByCount

func NewBinlogFormatSortedByCount(dbt string, formatsCount map[string]int) *BinlogFormatSortedByCount

func (*BinlogFormatSortedByCount) First

func (this *BinlogFormatSortedByCount) First() string

func (*BinlogFormatSortedByCount) Len

func (this *BinlogFormatSortedByCount) Len() int

func (*BinlogFormatSortedByCount) Less

func (this *BinlogFormatSortedByCount) Less(i, j int) bool

func (*BinlogFormatSortedByCount) Swap

func (this *BinlogFormatSortedByCount) Swap(i, j int)

type BlockedTopologyRecovery

type BlockedTopologyRecovery struct {
	FailedInstanceKey    InstanceKey
	ClusterName          string
	Analysis             AnalysisCode
	LastBlockedTimestamp string
	BlockingRecoveryId   int64
}

BlockedTopologyRecovery represents an entry in the blocked_topology_recovery table

type ByNamePort

type ByNamePort []*InstanceKey

func (ByNamePort) Len

func (bnp ByNamePort) Len() int

func (ByNamePort) Less

func (bnp ByNamePort) Less(i, j int) bool

func (ByNamePort) Swap

func (bnp ByNamePort) Swap(i, j int)

type CLIFlag

type CLIFlag struct {
	Noop                       *bool
	SkipUnresolved             *bool
	SkipUnresolvedCheck        *bool
	BinlogFile                 *string
	GrabElection               *bool
	Version                    *bool
	Statement                  *string
	PromotionRule              *string
	ConfiguredVersion          string
	SkipBinlogSearch           *bool
	SkipContinuousRegistration *bool
	EnableDatabaseUpdate       *bool
	IgnoreRaftSetup            *bool
	Tag                        *string
}

CLIFlag stores some command line flags that are globally available in the process' lifetime

var RuntimeCLIFlags CLIFlag

type Cache

type Cache struct {
	sync.Mutex
	Name string
	*cache.Cache
	// contains filtered or unexported fields
}

func NewCache

func NewCache(name string, expiration time.Duration, cleanupInterval time.Duration) *Cache

NewCache creates a new cache with the given expiration and cleanup interval.

func (*Cache) GetVal

func (c *Cache) GetVal(key string, checkFunc func(interface{}) bool, valFunc func() (interface{}, error)) (interface{}, error)

GetVal returns the value from the cache; if it is absent (or fails checkFunc), the value is fetched from `valFunc` and added to the cache with the default expiration.

func (*Cache) GetWithExpire

func (c *Cache) GetWithExpire(expire time.Duration, key string, checkFunc func(interface{}) bool, valFunc func() (interface{}, error)) (interface{}, error)

GetWithExpire returns the value from the cache; if it is absent, the value is fetched from `valFunc` and added to the cache with the given expiration.

func (*Cache) GetWithFunc

func (c *Cache) GetWithFunc(expire time.Duration, key string, checkFunc func(interface{}) bool, valFunc func() (interface{}, error), blockFunc func(chan struct{})) (val interface{}, err error)

GetWithFunc returns the value from the cache; if it is absent, the value is fetched from `valFunc` and added to the cache. 1. checkFunc: checks whether the cached value is valid (true means valid). 2. valFunc: gets the value for the key. 3. blockFunc: lets a reading goroutine decide what to do when blocked by the atomic channel.
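
A small sketch of the cache helpers; loadInstance is a hypothetical loader invoked on a cache miss:

	c := dtstruct.NewCache("instance", time.Minute, 10*time.Second)
	val, err := c.GetVal("host1:3306",
		func(v interface{}) bool { return v != nil },                       // is the cached value still valid?
		func() (interface{}, error) { return loadInstance("host1:3306") }, // fetch on miss
	)
	fmt.Println(val, err)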

func (*Cache) HitRate

func (c *Cache) HitRate() float64

HitRate returns the cache hit rate.

func (*Cache) IsExist

func (c *Cache) IsExist(key string) (exist bool)

IsExist checks whether the key exists in the cache.

func (*Cache) SetVal

func (c *Cache) SetVal(key string, value interface{}, d time.Duration)

SetVal sets the key/value in the cache and resets the atomic key's expiration time.

func (*Cache) SetValDefault

func (c *Cache) SetValDefault(key string, value interface{})

SetValDefault sets the key/value in the cache with the default expiration time.

type CandidateDatabaseInstance

type CandidateDatabaseInstance struct {
	InstanceKey
	PromotionRule       CandidatePromotionRule
	LastSuggestedString string
	PromotionRuleExpiry string // generated when retrieved from database for consistency reasons
}

CandidateDatabaseInstance contains information about explicit promotion rules for an instance

func NewCandidateDatabaseInstance

func NewCandidateDatabaseInstance(instanceKey InstanceKey, promotionRule CandidatePromotionRule) *CandidateDatabaseInstance

func (*CandidateDatabaseInstance) Key

Key returns an instance key representing this candidate

func (*CandidateDatabaseInstance) String

func (cdi *CandidateDatabaseInstance) String() string

String returns a string representation of the CandidateDatabaseInstance struct

type CandidatePromotionRule

type CandidatePromotionRule string

CandidatePromotionRule describes the promotion preference/rule for an instance. It maps to the promotion_rule column in candidate_database_instance.

const (
	MustPromoteRule      CandidatePromotionRule = "must"
	PreferPromoteRule    CandidatePromotionRule = "prefer"
	NeutralPromoteRule   CandidatePromotionRule = "neutral"
	PreferNotPromoteRule CandidatePromotionRule = "prefer_not"
	MustNotPromoteRule   CandidatePromotionRule = "must_not"
)

func ParseCandidatePromotionRule

func ParseCandidatePromotionRule(ruleName string) (CandidatePromotionRule, error)

ParseCandidatePromotionRule returns a CandidatePromotionRule by name. It returns an error if there is no known rule by the given name.

func (*CandidatePromotionRule) BetterThan

func (this *CandidatePromotionRule) BetterThan(other CandidatePromotionRule) bool
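
For example (a hedged sketch; the assumed ordering of preference follows the constant list above):

	rule, err := dtstruct.ParseCandidatePromotionRule("prefer")
	if err == nil && rule.BetterThan(dtstruct.NeutralPromoteRule) {
		// "prefer" presumably outranks "neutral" when picking a promotion candidate
	}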

type CliParam

type CliParam struct {
	Command                     string
	Strict                      bool
	DatabaseType                string
	ClusterId                   string
	Instance                    string
	Destination                 string
	Owner                       string
	Reason                      string
	Duration                    string
	Pattern                     string
	ClusterAlias                string
	Pool                        string
	HostnameFlag                string
	RawInstanceKey              *InstanceKey
	InstanceKey                 *InstanceKey
	DestinationKey              *InstanceKey
	PostponedFunctionsContainer *PostponedFunctionsContainer
}

CliParam holds all CLI command parameters.

type ClusterInfo

type ClusterInfo struct {
	DatabaseType                           string
	ClusterName                            string
	ClusterAlias                           string // Human friendly alias
	ClusterDomain                          string // CNAME/VIP/A-record/whatever of the master of this cluster
	CountInstances                         uint
	HeuristicLag                           int64
	HasAutomatedMasterRecovery             bool
	HasAutomatedIntermediateMasterRecovery bool
}

ClusterInfo makes for a cluster status/info summary

func (*ClusterInfo) ApplyClusterAlias

func (ci *ClusterInfo) ApplyClusterAlias()

ApplyClusterAlias updates the given clusterInfo's ClusterAlias property

func (*ClusterInfo) ReadRecoveryInfo

func (ci *ClusterInfo) ReadRecoveryInfo()

ReadRecoveryInfo checks whether the cluster info matches the filters in the configuration.

type ClusterPoolInstance

type ClusterPoolInstance struct {
	ClusterName  string
	ClusterAlias string
	Pool         string
	Hostname     string
	Port         int
}

ClusterPoolInstance is an instance mapping a cluster, pool & instance

type Collection

type Collection struct {
	sync.Mutex                     // for locking the structure
	MetricList   []MetricInterface // metrics of this collection
	Done         chan struct{}     // to indicate that we are finishing expiry processing
	ExpirePeriod time.Duration     // time to keep the MetricList information for
	// contains filtered or unexported fields
}

Collection contains a collection of Metrics

func (*Collection) Append

func (c *Collection) Append(m MetricInterface) error

Append adds a new MetricInterface to the existing collection.

func (*Collection) GetExpirePeriod

func (c *Collection) GetExpirePeriod() time.Duration

GetExpirePeriod returns the currently configured expiration period.

func (*Collection) Metrics

func (c *Collection) Metrics() []MetricInterface

Metrics returns a slice containing all the metric values

func (*Collection) SetExpirePeriod

func (c *Collection) SetExpirePeriod(duration time.Duration)

SetExpirePeriod determines after how long the collected data should be removed

func (*Collection) Since

func (c *Collection) Since(t time.Time) ([]MetricInterface, error)

Since returns the metrics on or after the given time. We assume the metrics are stored in ascending time order, and iterate backwards until we reach the first value before the given time or the end of the array.

func (*Collection) StartAutoExpiration

func (c *Collection) StartAutoExpiration()

StartAutoExpiration initiates the auto-expiry procedure, which periodically checks for metrics in the collection that need to be expired according to c.ExpirePeriod.

func (*Collection) StopAutoExpiration

func (c *Collection) StopAutoExpiration()

StopAutoExpiration prepares to stop by terminating the auto-expiration process

type CommandDesc

type CommandDesc struct {
	Command     string
	Category    string
	Section     string
	Description string
	Func        func(cliPrm *CliParam)
}

CommandDesc describes a command's details: the command name, the section it belongs to and its description. It also holds the func that will be called when the command is executed.

type CommandList

type CommandList []CommandDesc

CommandList is used to sort commands when printing output or executing the help command.

func (CommandList) Len

func (cl CommandList) Len() int

func (CommandList) Less

func (cl CommandList) Less(i, j int) bool

func (CommandList) Swap

func (cl CommandList) Swap(i, j int)

type CommandSlice

type CommandSlice []string

func (CommandSlice) Len

func (cs CommandSlice) Len() int

func (CommandSlice) Less

func (cs CommandSlice) Less(i, j int) bool

func (CommandSlice) Swap

func (cs CommandSlice) Swap(i, j int)

type Downtime

type Downtime struct {
	Key            *InstanceKey
	Owner          string
	Reason         string
	Duration       time.Duration
	BeginsAt       time.Time
	EndsAt         time.Time
	BeginsAtString string
	EndsAtString   string
}

func NewDowntime

func NewDowntime(instanceKey *InstanceKey, owner string, reason string, duration time.Duration) *Downtime

func (*Downtime) Ended

func (downtime *Downtime) Ended() bool

func (*Downtime) EndsIn

func (downtime *Downtime) EndsIn() time.Duration

type DummySqlResult

type DummySqlResult struct {
}

func (DummySqlResult) LastInsertId

func (this DummySqlResult) LastInsertId() (int64, error)

func (DummySqlResult) RowsAffected

func (this DummySqlResult) RowsAffected() (int64, error)

type HamHandler

type HamHandler interface {
	SchemaBase() []string
	SchemaPatch() []string

	DriveName() string
	GetDBURI(host string, port int, args ...interface{}) string
	TLSCheck(string) bool
	TLSConfig(*tls.Config) error
	GetDBConnPool(uri string) (*sql.DB, error)

	// register cli command
	CliCmd(commandMap map[string]CommandDesc, cliParam *CliParam)

	// http api
	RegisterAPIRequest() map[string][]interface{}
	RegisterWebRequest() map[string][]interface{}

	// interface for sql in different case
	SQLProblemQuery() string
	SQLProblemCondition() string
	SQLProblemArgs(args ...interface{}) []interface{}

	// exec sql
	OpenTopology(host string, port int, args ...interface{}) (*sql.DB, error)
	ExecSQLOnInstance(instanceKey *InstanceKey, query string, args ...interface{}) (sql.Result, error)
	ScanInstanceRow(instanceKey *InstanceKey, query string, dest ...interface{}) error
	EmptyCommitInstance(instanceKey *InstanceKey) error

	// instance
	ReadFromBackendDB(instanceKey *InstanceKey) (InstanceAdaptor, bool, error)
	ReadInstanceByCondition(query string, condition string, args []interface{}, sort string) ([]InstanceAdaptor, error)
	ReadClusterInstances(clusterName string) ([]InstanceAdaptor, error)
	ReadReplicaInstances(masterKey *InstanceKey) ([]InstanceAdaptor, error)
	GetInfoFromInstance(ctx context.Context, instanceKey *InstanceKey, checkOnly, bufferWrites bool, latency *stopwatch.NamedStopwatch, agent string) (inst InstanceAdaptor, err error)
	GetSyncInfo(instanceKey *InstanceKey, bufferWrites bool, agent string) (interface{}, error)
	RowToInstance(rowMap sqlutil.RowMap) InstanceAdaptor
	WriteToBackendDB(context.Context, []InstanceAdaptor, bool, bool) error
	ForgetInstance(instanceKey *InstanceKey) error

	// replication
	StartReplication(ctx context.Context, instanceKey *InstanceKey) (interface{}, error)
	RestartReplication(instanceKey *InstanceKey) (interface{}, error)
	ResetReplication(instanceKey *InstanceKey) (InstanceAdaptor, error)
	CanReplicateFrom(first InstanceAdaptor, other InstanceAdaptor) (bool, error)
	StopReplication(*InstanceKey) (interface{}, error)
	StopReplicationNicely(instanceKey *InstanceKey, timeout time.Duration) (InstanceAdaptor, error)
	DelayReplication(instanceKey *InstanceKey, seconds int) error
	SetReadOnly(instanceKey *InstanceKey, readOnly bool) (InstanceAdaptor, error)
	SetSemiSyncOnDownstream(instanceKey *InstanceKey, enable bool) (InstanceAdaptor, error)
	SetSemiSyncOnUpstream(instanceKey *InstanceKey, enable bool) (InstanceAdaptor, error)
	SkipQuery(instanceKey *InstanceKey) (InstanceAdaptor, error)
	KillQuery(instanceKey *InstanceKey, process int64) (InstanceAdaptor, error)

	// master
	ChangeMasterTo(*InstanceKey, *InstanceKey, *LogCoordinates, bool, string) (InstanceAdaptor, error)
	DetachMaster(*InstanceKey) (InstanceAdaptor, error)
	ReattachMaster(*InstanceKey) (InstanceAdaptor, error)
	MakeCoMaster(instanceKey *InstanceKey) (InstanceAdaptor, error)
	MakeMaster(instanceKey *InstanceKey) (InstanceAdaptor, error)
	TakeMaster(instanceKey *InstanceKey, allowTakingCoMaster bool) (InstanceAdaptor, error)
	MakeLocalMaster(instanceKey *InstanceKey) (InstanceAdaptor, error)

	// topology
	MoveUp(instanceKey *InstanceKey) (InstanceAdaptor, error)
	MoveEquivalent(instanceKey, otherKey *InstanceKey) (InstanceAdaptor, error)
	MoveBelow(instanceKey, siblingKey *InstanceKey) (InstanceAdaptor, error)
	MoveUpReplicas(instanceKey *InstanceKey, pattern string) ([]InstanceAdaptor, InstanceAdaptor, error, []error)

	MatchUp(instanceKey *InstanceKey, requireInstanceMaintenance bool) (InstanceAdaptor, *LogCoordinates, error)
	MatchBelow(instanceKey, otherKey *InstanceKey, requireInstanceMaintenance bool) (InstanceAdaptor, *LogCoordinates, error)
	RelocateBelow(downstreamInstKey, upstreamInstKey *InstanceKey) (interface{}, error)
	RelocateReplicas(instanceKey, otherKey *InstanceKey, pattern string) (replicas []InstanceAdaptor, other InstanceAdaptor, err error, errs []error)

	MultiMatchReplicas(masterKey *InstanceKey, belowKey *InstanceKey, pattern string) ([]InstanceAdaptor, InstanceAdaptor, error, []error)
	MatchUpReplicas(masterKey *InstanceKey, pattern string) ([]InstanceAdaptor, InstanceAdaptor, error, []error)
	MultiMatchBelow(replicas []InstanceAdaptor, belowKey *InstanceKey, postponedFunctionsContainer *PostponedFunctionsContainer) (matchedReplicas []InstanceAdaptor, belowInstance InstanceAdaptor, err error, errs []error)

	RematchReplica(instanceKey *InstanceKey, requireInstanceMaintenance bool) (InstanceAdaptor, *LogCoordinates, error)
	TakeSiblings(instanceKey *InstanceKey) (instance InstanceAdaptor, takenSiblings int, err error)

	Repoint(*InstanceKey, *InstanceKey, string) (InstanceAdaptor, error)
	RegroupReplicas(masterKey *InstanceKey, returnReplicaEvenOnFailureToRegroup bool, onCandidateReplicaChosen func(InstanceAdaptor), postponedFunctionsContainer *PostponedFunctionsContainer) (aheadReplicas []InstanceAdaptor, equalReplicas []InstanceAdaptor, laterReplicas []InstanceAdaptor, cannotReplicateReplicas []InstanceAdaptor, instance InstanceAdaptor, err error)
	RepointReplicasTo(instanceKey *InstanceKey, pattern string, belowKey *InstanceKey) ([]InstanceAdaptor, error, []error)

	Topology(request *Request, historyTimestampPattern string, tabulated bool, printTags bool) (result interface{}, err error)

	ReplicationConfirm(failedKey *InstanceKey, streamKey *InstanceKey, upstream bool) bool

	// recovery
	GetReplicationAnalysis(clusterName, clusterId string, hints *ReplicationAnalysisHints) ([]ReplicationAnalysis, error)
	GetCheckAndRecoverFunction(analysisCode AnalysisCode, analyzedInstanceKey *InstanceKey) (
		checkAndRecoverFunction func(analysisEntry ReplicationAnalysis, candidateInstanceKey *InstanceKey, forceInstanceRecovery bool, skipProcesses bool) (recoveryAttempted bool, topologyRecovery *TopologyRecovery, err error),
		isActionableRecovery bool,
	)
	GetMasterRecoveryType(*ReplicationAnalysis) RecoveryType
	GetCandidateReplica(*InstanceKey, bool) (InstanceAdaptor, []InstanceAdaptor, []InstanceAdaptor, []InstanceAdaptor, []InstanceAdaptor, error)
	CategorizeReplication(*TopologyRecovery, *InstanceKey, func(InstanceAdaptor, bool) bool) (aheadReplicas, equalReplicas, laterReplicas, cannotReplicateReplicas []InstanceAdaptor, promotedReplica InstanceAdaptor, err error)
	RunEmergentOperations(analysisEntry *ReplicationAnalysis)
	CheckIfWouldBeMaster(InstanceAdaptor) bool
	//CheckAndRecoverDeadMaster(analysisEntry dtstruct.ReplicationAnalysis, candidateInstanceKey *dtstruct.InstanceKey, forceInstanceRecovery bool, skipProcesses bool) (recoveryAttempted bool, topologyRecovery *dtstruct.TopologyRecovery, err error)
	GracefulMasterTakeover(clusterName string, designatedKey *InstanceKey, auto bool) (topologyRecovery *TopologyRecovery, promotedMasterCoordinates *LogCoordinates, err error)

	// interface for agent
	BaseInfo() string
	MasterInfo() string
	SlaveInfo() string
	CascadeInfo() string
}

func GetHamHandler

func GetHamHandler(dbt string) HamHandler

GetHamHandler returns the HamHandler for the given database type.
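
A minimal sketch of the lookup pattern suggested by the HamHandlerMap variable above; myHandler stands for an assumed HamHandler implementation, and real registration may go through other project code:

	dtstruct.HamHandlerMap["mysql"] = myHandler // assumed implementation of HamHandler
	handler := dtstruct.GetHamHandler("mysql")
	uri := handler.GetDBURI("127.0.0.1", 3306)
	fmt.Println(uri)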

type HostAttributes

type HostAttributes struct {
	Hostname        string
	AttributeName   string
	AttributeValue  string
	SubmitTimestamp string
	ExpireTimestamp string
}

HostAttributes presents attributes submitted by a host.

type HostnameRegistration

type HostnameRegistration struct {
	CreatedAt time.Time
	Key       InstanceKey
	Hostname  string
}

func NewHostnameDeregistration

func NewHostnameDeregistration(instanceKey *InstanceKey) *HostnameRegistration

func NewHostnameRegistration

func NewHostnameRegistration(instanceKey *InstanceKey, hostname string) *HostnameRegistration

type HostnameResolve

type HostnameResolve struct {
	Hostname         string
	ResolvedHostname string
}

func (HostnameResolve) String

func (this HostnameResolve) String() string

type HostnameUnresolve

type HostnameUnresolve struct {
	Hostname           string
	UnresolvedHostname string
}

func (HostnameUnresolve) String

func (this HostnameUnresolve) String() string

type HttpAPI

type HttpAPI struct {
	URLPrefix string
}

type Instance

type Instance struct {
	Uptime                    uint
	Key                       InstanceKey
	InstanceId                string
	InstanceAlias             string
	ClusterId                 string
	Version                   string
	VersionComment            string
	UpstreamKey               InstanceKey
	DownstreamKeyMap          InstanceKeyMap
	Role                      string
	DBState                   string
	IsCoUpstream              bool
	ReadOnly                  bool
	LastSeenTimestamp         string
	IsLastCheckValid          bool
	IsUpToDate                bool
	IsRecentlyChecked         bool
	ClusterName               string
	FlavorName                string
	DataCenter                string
	Region                    string
	Environment               string
	SuggestedClusterAlias     string
	ReplicationState          string
	ReplicationDepth          uint
	HasReplicationFilters     bool
	AllowTLS                  bool
	HasReplicationCredentials bool

	IsDetachedMaster bool

	SlaveLagSeconds       sql.NullInt64 // for API backwards compatibility. Equals `ReplicationLagSeconds`
	ReplicationLagSeconds sql.NullInt64

	ReplicationCredentialsAvailable bool

	SecondsSinceLastSeen sql.NullInt64

	// Careful. IsCandidate and PromotionRule are used together
	// and probably need to be merged. IsCandidate's value may
	// be picked up from candidate_database_instance's value when
	// reading an instance from the db.
	IsCandidate          bool
	PromotionRule        CandidatePromotionRule
	IsDowntimed          bool
	DowntimeReason       string
	DowntimeOwner        string
	DowntimeEndTimestamp string
	ElapsedDowntime      time.Duration
	UnresolvedHostname   string

	Problems []string

	LastDiscoveryLatency time.Duration

	Seed bool // Means we force this instance to be written to backend, even if it's invalid, empty or forgotten

}

Instance represents a database instance, including its current configuration & status. It presents important replication configuration and detailed replication status.

func NewInstance

func NewInstance() *Instance

NewInstance creates a new, empty instance

func (*Instance) AddDownstreamKey

func (this *Instance) AddDownstreamKey(replicaKey *InstanceKey)

AddDownstreamKey adds a replica to the list of this instance's replicas.

func (*Instance) GetAssociateInstance

func (this *Instance) GetAssociateInstance() (instanceList []InstanceKey)

func (*Instance) GetHostname

func (this *Instance) GetHostname() string

func (*Instance) GetInstance

func (this *Instance) GetInstance() *Instance

func (*Instance) GetPort

func (this *Instance) GetPort() int

func (*Instance) GetReplicas

func (this *Instance) GetReplicas() InstanceKeyMap

func (*Instance) IsReplica

func (this *Instance) IsReplica() bool

func (*Instance) IsReplicaServer

func (this *Instance) IsReplicaServer() bool

func (*Instance) IsSeed

func (this *Instance) IsSeed() bool

func (*Instance) IsSmallerMajorVersion

func (this *Instance) IsSmallerMajorVersion(other *Instance) bool

IsSmallerMajorVersion tests this instance against another and returns true if this instance is of a smaller "major" version, e.g. 5.5.36 is NOT a smaller major version compared to 5.5.40, but IS compared to 5.6.9.

func (*Instance) IsSmallerMajorVersionByString

func (this *Instance) IsSmallerMajorVersionByString(otherVersion string) bool

IsSmallerMajorVersionByString checks if this instance has a smaller major version number than given one

func (*Instance) Less

func (this *Instance) Less(handler InstanceAdaptor, dataCenter string) bool

func (*Instance) MajorVersion

func (this *Instance) MajorVersion() []string

MajorVersion returns this instance's major version number (e.g. for 5.5.36 it returns "5.5")

func (*Instance) MajorVersionString

func (this *Instance) MajorVersionString() string

MajorVersionString returns this instance's major version number as a string (e.g. for 5.5.36 it returns "5.5").

func (*Instance) ReplicaRunning

func (this *Instance) ReplicaRunning() bool

func (*Instance) SetHstPrtAndClusterName

func (this *Instance) SetHstPrtAndClusterName(hostname string, port int, upstreamHostname string, upstreamPort int, clusterName string)

func (*Instance) SetPromotionRule

func (this *Instance) SetPromotionRule(CandidatePromotionRule)

func (*Instance) SetSeed

func (this *Instance) SetSeed()

type InstanceAdaptor

type InstanceAdaptor interface {

	// get instance database type
	GetDatabaseType() string

	// get instance hostname
	GetHostname() string

	// get instance port
	GetPort() int

	// get handler for this instance
	GetHandler() InstanceAdaptor

	// get instance all replicas
	GetReplicas() InstanceKeyMap

	// get upstream and downstream instance
	GetAssociateInstance() []InstanceKey

	// get instance info
	GetInstance() *Instance

	// get instance description
	HumanReadableDescription() string

	// set promotion rule for candidate instance
	SetPromotionRule(CandidatePromotionRule)

	// set hostname/port/cluster name
	SetHstPrtAndClusterName(hostname string, port int, upstreamHostname string, upstreamPort int, clusterName string)

	// set instance
	SetInstance(*Instance)

	// set instance flavor name
	SetFlavorName()

	// check if instance's replication is running
	ReplicaRunning() bool

	// check if instance is a binlog server
	IsReplicaServer() bool

	// check if instance is replica
	IsReplica() bool

	// for instance sort
	Less(handler InstanceAdaptor, dataCenter string) bool
}

InstanceAdaptor is the interface that an instance needs to implement.

func GetInstanceAdaptor

func GetInstanceAdaptor(dbt string) InstanceAdaptor

GetInstanceAdaptor gets the instance handler for the given database type.

func RemoveInstance

func RemoveInstance(instanceList []InstanceAdaptor, instanceKey *InstanceKey) []InstanceAdaptor

RemoveInstance will remove an instance from a list of instances

type InstanceAnalysis

type InstanceAnalysis struct {
	// contains filtered or unexported fields
}

func NewInstanceAnalysis

func NewInstanceAnalysis(instanceKey *InstanceKey, analysis AnalysisCode) *InstanceAnalysis

func (*InstanceAnalysis) String

func (instanceAnalysis *InstanceAnalysis) String() string

type InstanceBinlogCoordinates

type InstanceBinlogCoordinates struct {
	Key         InstanceKey
	Coordinates LogCoordinates
}

InstanceBinlogCoordinates is a convenience wrapper for instance key + binlog coordinates.

type InstanceKey

type InstanceKey struct {
	DBType    string
	Hostname  string
	Port      int
	ClusterId string
}

InstanceKey is an instance indicator, identified by hostname and port

func (*InstanceKey) DetachedKey

func (ik *InstanceKey) DetachedKey() *InstanceKey

DetachedKey returns an instance key whose hostname is detached: invalid, but recoverable.

func (*InstanceKey) DisplayString

func (ik *InstanceKey) DisplayString() string

DisplayString returns a user-friendly string representation of this key

func (*InstanceKey) Equals

func (ik *InstanceKey) Equals(other *InstanceKey) bool

Equals tests equality between this key and another key

func (*InstanceKey) IsDetached

func (ik *InstanceKey) IsDetached() bool

IsDetached returns 'true' when this hostname is logically "detached"

func (*InstanceKey) IsIPv4

func (ik *InstanceKey) IsIPv4() bool

IsIPv4 returns true if this key's hostname is an IPv4 address.

func (*InstanceKey) IsValid

func (ik *InstanceKey) IsValid() bool

IsValid uses simple heuristics to see whether this key represents an actual instance

func (*InstanceKey) ReattachedKey

func (ik *InstanceKey) ReattachedKey() *InstanceKey

ReattachedKey returns an instance key whose hostname is reattached: recovered from its detached form.

func (*InstanceKey) SmallerThan

func (ik *InstanceKey) SmallerThan(other *InstanceKey) bool

SmallerThan returns true if this key is dictionary-smaller than another. This is used for consistent sorting/ordering; there's nothing magical about it.

func (InstanceKey) String

func (ik InstanceKey) String() string

String returns a user-friendly string representation of this key

func (*InstanceKey) StringCode

func (ik *InstanceKey) StringCode() string

StringCode returns an official string representation of this key

type InstanceKeyMap

type InstanceKeyMap map[InstanceKey]bool

InstanceKeyMap is a convenience struct for listing InstanceKey-s

func NewInstanceKeyMap

func NewInstanceKeyMap() *InstanceKeyMap

func (*InstanceKeyMap) AddKey

func (ikm *InstanceKeyMap) AddKey(key InstanceKey)

AddKey adds a single key to this map

func (*InstanceKeyMap) AddKeys

func (ikm *InstanceKeyMap) AddKeys(keys []InstanceKey)

AddKeys adds all given keys to this map

func (*InstanceKeyMap) GetInstanceKeys

func (ikm *InstanceKeyMap) GetInstanceKeys() []InstanceKey

GetInstanceKeys returns keys in this map in the form of an array

func (*InstanceKeyMap) HasKey

func (ikm *InstanceKeyMap) HasKey(key InstanceKey) bool

HasKey checks if given key is within the map

func (*InstanceKeyMap) Intersect

func (ikm *InstanceKeyMap) Intersect(other *InstanceKeyMap) *InstanceKeyMap

Intersect returns a keymap which is the intersection of this and another map

func (InstanceKeyMap) MarshalJSON

func (ikm InstanceKeyMap) MarshalJSON() ([]byte, error)

MarshalJSON will marshal this map as JSON

func (*InstanceKeyMap) ReadJson

func (ikm *InstanceKeyMap) ReadJson(jsonStr string) error

ReadJson unmarshals a JSON string into this map.

func (*InstanceKeyMap) ToCommaDelimitedList

func (ikm *InstanceKeyMap) ToCommaDelimitedList() string

ToCommaDelimitedList will export this map in comma delimited format

func (*InstanceKeyMap) ToJSON

func (ikm *InstanceKeyMap) ToJSON() (string, error)

ToJSON will marshal this map as JSON

func (*InstanceKeyMap) ToJSONString

func (ikm *InstanceKeyMap) ToJSONString() string

ToJSONString will marshal this map as a JSON string.

func (*InstanceKeyMap) UnmarshalJSON

func (ikm *InstanceKeyMap) UnmarshalJSON(b []byte) error

UnmarshalJSON reads this object from JSON.
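
A short usage sketch of InstanceKeyMap (host names and ports are made up for the example):

	ikm := dtstruct.NewInstanceKeyMap()
	ikm.AddKey(dtstruct.InstanceKey{DBType: "mysql", Hostname: "db1", Port: 3306})
	ikm.AddKey(dtstruct.InstanceKey{DBType: "mysql", Hostname: "db2", Port: 3306})

	fmt.Println(ikm.HasKey(dtstruct.InstanceKey{DBType: "mysql", Hostname: "db1", Port: 3306})) // true
	fmt.Println(ikm.ToCommaDelimitedList())
	jsonStr, _ := ikm.ToJSON()
	fmt.Println(jsonStr)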

type InstanceTag

type InstanceTag struct {
	Key InstanceKey
	T   Tag
}

type InstancesByCountReplicas

type InstancesByCountReplicas []InstanceAdaptor

InstancesByCountReplicas sorts instances by number of replicas, descending

func (InstancesByCountReplicas) Len

func (this InstancesByCountReplicas) Len() int

func (InstancesByCountReplicas) Less

func (this InstancesByCountReplicas) Less(i, j int) bool

func (InstancesByCountReplicas) Swap

func (this InstancesByCountReplicas) Swap(i, j int)

type KVPair

type KVPair struct {
	Key   string
	Value string
}

func NewKVPair

func NewKVPair(key string, value string) *KVPair

func (*KVPair) String

func (this *KVPair) String() string

type KVStore

type KVStore interface {
	PutKeyValue(key string, value string) (err error)
	PutKVPairs(kvPairs []*KVPair) (err error)
	GetKeyValue(key string) (value string, found bool, err error)
	DistributePairs(kvPairs [](*KVPair)) (err error)
}
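
The interface is small enough that a toy implementation shows the contract. The following in-memory store is a hedged sketch, assuming only the method signatures above; the type name and behavior are made up:

package dtstruct

import "sync"

// memoryKVStore is a hypothetical, illustration-only KVStore backed by a map; the
// real implementations live elsewhere in ham4db.
type memoryKVStore struct {
	mu   sync.RWMutex
	data map[string]string
}

func (s *memoryKVStore) PutKeyValue(key string, value string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.data == nil {
		s.data = map[string]string{}
	}
	s.data[key] = value
	return nil
}

func (s *memoryKVStore) PutKVPairs(kvPairs []*KVPair) error {
	for _, kv := range kvPairs {
		if err := s.PutKeyValue(kv.Key, kv.Value); err != nil {
			return err
		}
	}
	return nil
}

func (s *memoryKVStore) GetKeyValue(key string) (value string, found bool, err error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	value, found = s.data[key]
	return value, found, nil
}

// DistributePairs is a no-op here; a real store would propagate the pairs to peers.
func (s *memoryKVStore) DistributePairs(kvPairs []*KVPair) error {
	return nil
}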

type LogCoordinates

type LogCoordinates struct {
	LogFile string
	LogPos  int64
	Type    constant.LogType
}

LogCoordinates describes binary log coordinates in the form of log file & log position.
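
A brief sketch comparing two made-up coordinate values, using the comparison methods documented below:

package dtstruct

import "fmt"

// exampleLogCoordinatesCompare is illustration only; the file names are made up.
func exampleLogCoordinatesCompare() {
	a := LogCoordinates{LogFile: "mysql-bin.000100", LogPos: 4}
	b := LogCoordinates{LogFile: "mysql-bin.000101", LogPos: 120}

	fmt.Println(a.FileSmallerThan(&b)) // true: a's file number is lower
	fmt.Println(a.SmallerThan(&b))     // true: a precedes b
	fmt.Println(a.Equals(&b))          // false
	fmt.Println(a.String())            // user-friendly representation
}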

func ParseLogCoordinates

func ParseLogCoordinates(logFileLogPos string) (*LogCoordinates, error)

ParseLogCoordinates parses LogCoordinates from a string representation combining log file and log position

func (*LogCoordinates) Detach

func (this *LogCoordinates) Detach() (detachedCoordinates LogCoordinates)

Detach returns a detached form of these coordinates

func (*LogCoordinates) DisplayString

func (this *LogCoordinates) DisplayString() string

DisplayString returns a user-friendly string representation of these coordinates

func (*LogCoordinates) Equals

func (this *LogCoordinates) Equals(other *LogCoordinates) bool

Equals tests equality of this coordinate and another one.

func (*LogCoordinates) ExtractDetachedCoordinates

func (this *LogCoordinates) ExtractDetachedCoordinates() (isDetached bool, detachedCoordinates LogCoordinates)

ExtractDetachedCoordinates checks whether these coordinates are in detached form and, if so, extracts the original coordinates embedded within them.

func (*LogCoordinates) FileNumber

func (this *LogCoordinates) FileNumber() (int, int)

FileNumber returns the numeric value of the file, and the length in characters representing the number in the filename. Example: FileNumber() of mysqld.log.000789 is (789, 6)

func (*LogCoordinates) FileNumberDistance

func (this *LogCoordinates) FileNumberDistance(other *LogCoordinates) int

FileNumberDistance returns the numeric distance between this coordinate's file number and the other's. Effectively it means "how many rotations/FLUSHes would make these coordinates' file reach the other's"
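
A short illustration of FileNumber and FileNumberDistance; the file names are made up, and the distance sign follows the "reach the other's" wording above:

package dtstruct

import "fmt"

// exampleFileNumberDistance is illustration only.
func exampleFileNumberDistance() {
	c := LogCoordinates{LogFile: "mysqld.log.000789", LogPos: 4}
	later := LogCoordinates{LogFile: "mysqld.log.000795", LogPos: 4}

	num, digits := c.FileNumber() // 789, 6 (per the example in the docs)
	fmt.Println(num, digits)
	fmt.Println(c.FileNumberDistance(&later)) // 6 rotations separate the two files
}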

func (*LogCoordinates) FileSmallerThan

func (this *LogCoordinates) FileSmallerThan(other *LogCoordinates) bool

FileSmallerThan returns true if this coordinate's file is strictly smaller than the other's.

func (*LogCoordinates) IsEmpty

func (this *LogCoordinates) IsEmpty() bool

IsEmpty returns true if the log file is empty (unnamed)

func (*LogCoordinates) NextFileCoordinates

func (this *LogCoordinates) NextFileCoordinates() (LogCoordinates, error)

NextFileCoordinates guesses the filename of the next binlog/relaylog

func (*LogCoordinates) PreviousFileCoordinates

func (this *LogCoordinates) PreviousFileCoordinates() (LogCoordinates, error)

PreviousFileCoordinates guesses the filename of the previous binlog/relaylog

func (*LogCoordinates) PreviousFileCoordinatesBy

func (this *LogCoordinates) PreviousFileCoordinatesBy(offset int) (LogCoordinates, error)

PreviousFileCoordinatesBy guesses the filename of the previous binlog/relaylog, by given offset (number of files back)

func (*LogCoordinates) SmallerThan

func (this *LogCoordinates) SmallerThan(other *LogCoordinates) bool

SmallerThan returns true if this coordinate is strictly smaller than the other.

func (*LogCoordinates) SmallerThanOrEquals

func (this *LogCoordinates) SmallerThanOrEquals(other *LogCoordinates) bool

SmallerThanOrEquals returns true if this coordinate is smaller than or equal to the other one. We do NOT compare the type so we cannot use this.Equals()

func (LogCoordinates) String

func (this LogCoordinates) String() string

String returns a user-friendly string representation of these coordinates

type Maintenance

type Maintenance struct {
	MaintenanceId  uint
	Key            InstanceKey
	BeginTimestamp string
	SecondsElapsed uint
	IsActive       bool
	Owner          string
	Reason         string
}

Maintenance indicates a maintenance entry (also in the database)

type Metric

type Metric struct {
	Timestamp      time.Time     // time the metric was started
	WaitLatency    time.Duration // time that we had to wait before starting query execution
	ExecuteLatency time.Duration // time the query took to execute
	Err            error         // any error resulting from the query execution
}

Metric records the query metrics of backend writes that go through a sized channel. It lets us compare the time spent waiting to execute a query against the time the query itself took; with a sized channel the wait time may be significant and is worth measuring.

func NewMetric

func NewMetric() *Metric

NewMetric returns a new metric with timestamp starting from now

func (Metric) When

func (m Metric) When() time.Time

When returns the timestamp at which the metric recording started
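
A hedged sketch of how a Metric might be populated around a buffered write; the field names come from the struct above, while the helper and its parameters are hypothetical:

package dtstruct

import "time"

// recordWriteMetric is a hypothetical helper (not part of the package) showing how
// the Metric fields map onto a write that first waits on a sized channel and then
// executes against the backend.
func recordWriteMetric(waitStart, execStart, execEnd time.Time, execErr error) *Metric {
	m := NewMetric()                          // Timestamp is set to "now" by the constructor
	m.WaitLatency = execStart.Sub(waitStart)  // time spent queued before execution
	m.ExecuteLatency = execEnd.Sub(execStart) // time the query itself took
	m.Err = execErr                           // any error from the execution
	return m
}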

type MetricAggregate

type MetricAggregate struct {
	stats.Float64Data
}

MetricAggregate is used to aggregate metric values

func (MetricAggregate) Max

func (m MetricAggregate) Max(values stats.Float64Data) float64

internal routine to return the maximum value or 0

func (MetricAggregate) Mean

func (m MetricAggregate) Mean(values stats.Float64Data) float64

internal routine to return the average value or 0

func (MetricAggregate) Median

func (m MetricAggregate) Median(values stats.Float64Data) float64

internal routine to return the Median or 0

func (MetricAggregate) Min

func (m MetricAggregate) Min(values stats.Float64Data) float64

internal routine to return the minimum value or 9e9

func (MetricAggregate) Percentile

func (m MetricAggregate) Percentile(values stats.Float64Data, percent float64) float64

internal routine to return the requested Percentile value or 0
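
A sketch of aggregating a handful of made-up latency samples with the methods above; the stats import path (the package exposing Float64Data) is an assumption:

package dtstruct

import (
	"fmt"

	"github.com/montanaflynn/stats"
)

// exampleMetricAggregateUsage is illustration only.
func exampleMetricAggregateUsage() {
	samples := stats.Float64Data{0.12, 0.34, 0.56, 0.78, 1.9}

	var agg MetricAggregate
	fmt.Println(agg.Min(samples))            // minimum value, or 9e9 when empty
	fmt.Println(agg.Mean(samples))           // average value, or 0 when empty
	fmt.Println(agg.Median(samples))         // median value, or 0 when empty
	fmt.Println(agg.Max(samples))            // maximum value, or 0 when empty
	fmt.Println(agg.Percentile(samples, 95)) // requested percentile, or 0 when empty
}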

type MetricAggregatedDiscovery

type MetricAggregatedDiscovery struct {
	FirstSeen                       time.Time // timestamp of the first data seen
	LastSeen                        time.Time // timestamp of the last data seen
	CountDistinctInstanceKeys       int       // number of distinct Instances seen (note: this may not be true: distinct = succeeded + failed)
	CountDistinctOkInstanceKeys     int       // number of distinct Instances which succeeded
	CountDistinctFailedInstanceKeys int       // number of distinct Instances which failed
	FailedDiscoveries               uint64    // number of failed discoveries
	SuccessfulDiscoveries           uint64    // number of successful discoveries
	MeanTotalSeconds                float64
	MeanBackendSeconds              float64
	MeanInstanceSeconds             float64
	FailedMeanTotalSeconds          float64
	FailedMeanBackendSeconds        float64
	FailedMeanInstanceSeconds       float64
	MaxTotalSeconds                 float64
	MaxBackendSeconds               float64
	MaxInstanceSeconds              float64
	FailedMaxTotalSeconds           float64
	FailedMaxBackendSeconds         float64
	FailedMaxInstanceSeconds        float64
	MedianTotalSeconds              float64
	MedianBackendSeconds            float64
	MedianInstanceSeconds           float64
	FailedMedianTotalSeconds        float64
	FailedMedianBackendSeconds      float64
	FailedMedianInstanceSeconds     float64
	P95TotalSeconds                 float64
	P95BackendSeconds               float64
	P95InstanceSeconds              float64
	FailedP95TotalSeconds           float64
	FailedP95BackendSeconds         float64
	FailedP95InstanceSeconds        float64
}

MetricAggregatedDiscovery contains aggregated metric for instance discovery. Called from api/discovery-metric-aggregated/:seconds

type MetricAggregatedQuery

type MetricAggregatedQuery struct {
	Count             int
	MaxExecSeconds    float64
	MeanExecSeconds   float64
	MedianExecSeconds float64
	P95ExecSeconds    float64
	MaxWaitSeconds    float64
	MeanWaitSeconds   float64
	MedianWaitSeconds float64
	P95WaitSeconds    float64
}

MetricAggregatedQuery holds all aggregated metrics for queries

type MetricAggregatedQueue

type MetricAggregatedQueue struct {
	ActiveMinEntries    float64
	ActiveMeanEntries   float64
	ActiveMedianEntries float64
	ActiveP95Entries    float64
	ActiveMaxEntries    float64
	QueuedMinEntries    float64
	QueuedMeanEntries   float64
	QueuedMedianEntries float64
	QueuedP95Entries    float64
	QueuedMaxEntries    float64
}

MetricAggregatedQueue contains aggregate information for the queue metrics (active and queued entry counts)

type MetricDiscover

type MetricDiscover struct {
	Timestamp       time.Time     // time the collection was taken
	InstanceKey     InstanceKey   // instance being monitored
	BackendLatency  time.Duration // time taken talking to the backend
	InstanceLatency time.Duration // time taken talking to the instance
	TotalLatency    time.Duration // total time taken doing the discovery
	Err             error         // error (if applicable) doing the discovery process
}

MetricDiscover holds the information for a single instance discovery metric

func (MetricDiscover) When

func (m MetricDiscover) When() time.Time

When returns the time the discovery metric was taken

type MetricInterface

type MetricInterface interface {
	When() time.Time // when the metric was taken
}

MetricInterface is an interface containing a metric

type MetricQueue

type MetricQueue struct {
	Active int
	Queued int
}

MetricQueue contains the queue's active and queued sizes

type MinimalInstance

type MinimalInstance struct {
	Key         InstanceKey
	MasterKey   InstanceKey
	ClusterName string
}

func (*MinimalInstance) ToInstance

func (mi *MinimalInstance) ToInstance() *Instance

type Monitor

type Monitor interface {
	Init() error
}

Monitor is the interface for a third-party metric framework

type MultiSQL

type MultiSQL struct {
	Query []string
	Args  [][]interface{}
}

type PeerAnalysisMap

type PeerAnalysisMap map[string]int

PeerAnalysisMap indicates the number of peers agreeing on an analysis. The key of this map is an InstanceAnalysis.String()

type PoolInstancesMap

type PoolInstancesMap map[string][]*InstanceKey

PoolInstancesMap lists instance keys per pool name

type PoolInstancesSubmission

type PoolInstancesSubmission struct {
	CreatedAt          time.Time
	DatabaseType       string
	Pool               string
	DelimitedInstances string
	RegisteredAt       string
}

func NewPoolInstancesSubmission

func NewPoolInstancesSubmission(databaseType string, pool string, instances string) *PoolInstancesSubmission

type PostponedFunctionsContainer

type PostponedFunctionsContainer struct {
	// contains filtered or unexported fields
}

func NewPostponedFunctionsContainer

func NewPostponedFunctionsContainer() *PostponedFunctionsContainer

func (*PostponedFunctionsContainer) AddPostponedFunction

func (this *PostponedFunctionsContainer) AddPostponedFunction(postponedFunction func() error, description string)

func (*PostponedFunctionsContainer) Descriptions

func (this *PostponedFunctionsContainer) Descriptions() []string

func (*PostponedFunctionsContainer) Len

func (this *PostponedFunctionsContainer) Len() int

func (*PostponedFunctionsContainer) Wait

func (this *PostponedFunctionsContainer) Wait()

type Process

type Process struct {
	InstanceHostname string
	InstancePort     int
	Id               int64
	User             string
	Host             string
	Db               string
	Command          string
	Time             int64
	State            string
	Info             string
	StartedAt        string
}

Process represents a MySQL executing thread (as observed in PROCESSLIST)

type Queue

type Queue struct {
	sync.Mutex
	// contains filtered or unexported fields
}

Queue contains information for managing discovery requests

func CreateOrReturnQueue

func CreateOrReturnQueue(name string, capacity uint, maxMetricSize int, timeToWarn uint) *Queue

CreateOrReturnQueue allows for creation of a new discovery queue or returning a pointer to an existing one given the name.

func (*Queue) AggregatedDiscoveryQueueMetrics

func (q *Queue) AggregatedDiscoveryQueueMetrics(period int) *MetricAggregatedQueue

AggregatedDiscoveryQueueMetrics returns some aggregate statistics based on the period (last N entries) requested. We store up to config.Config.DiscoveryQueueMaxStatisticsSize values and collect once a second so we expect period to be a smaller value.

func (*Queue) Consume

func (q *Queue) Consume() InstanceKey

Consume fetches a key to process; blocks if queue is empty. Release must be called once after Consume.

func (*Queue) DiscoveryQueueMetrics

func (q *Queue) DiscoveryQueueMetrics(period int) []MetricQueue

DiscoveryQueueMetrics returns some raw queue metric based on the period (last N entries) requested.

func (*Queue) Push

func (q *Queue) Push(key InstanceKey)

Push enqueues a key if it is not on a queue and is not being processed; silently returns otherwise.

func (*Queue) QueueLen

func (q *Queue) QueueLen() int

QueueLen returns the length of the queue (channel size + queued size)

func (*Queue) Release

func (q *Queue) Release(key InstanceKey)

Release removes a key from a list of being processed keys which allows that key to be pushed into the queue again.
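
Putting the queue methods together, a hedged sketch of the produce/consume cycle; the queue name, capacity, metric size and warning threshold passed to CreateOrReturnQueue are made up:

package dtstruct

// exampleQueueLifecycle is illustration only.
func exampleQueueLifecycle(keys []InstanceKey, discover func(InstanceKey)) {
	q := CreateOrReturnQueue("example-discovery", 1000, 120, 5)

	// Producer: Push silently ignores keys that are already queued or being processed.
	for _, key := range keys {
		q.Push(key)
	}

	// Consumer: Consume blocks until a key is available; Release must follow each
	// Consume so the key can be pushed again later.
	for i := 0; i < len(keys); i++ {
		key := q.Consume()
		discover(key)
		q.Release(key)
	}
}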

type RecoveryAcknowledgement

type RecoveryAcknowledgement struct {
	CreatedAt time.Time
	Owner     string
	Comment   string

	Key           InstanceKey
	ClusterName   string
	Id            int64
	UID           string
	AllRecoveries bool
}

func NewInternalAcknowledgement

func NewInternalAcknowledgement() *RecoveryAcknowledgement

func NewRecoveryAcknowledgement

func NewRecoveryAcknowledgement(owner string, comment string) *RecoveryAcknowledgement

type RecoveryType

type RecoveryType string

type ReplicationAnalysis

type ReplicationAnalysis struct {
	AnalyzedInstanceKey                       InstanceKey
	AnalyzedInstanceUpstreamKey               InstanceKey
	ClusterDetails                            ClusterInfo
	AnalyzedInstanceDataCenter                string
	AnalyzedInstanceRegion                    string
	AnalyzedInstanceBinlogCoordinates         LogCoordinates
	IsMaster                                  bool
	IsReplicationGroupMember                  bool
	IsCoMaster                                bool
	LastCheckValid                            bool
	LastCheckPartialSuccess                   bool
	CountReplicas                             uint
	CountValidReplicas                        uint
	CountValidReplicatingReplicas             uint
	CountReplicasFailingToConnectToMaster     uint
	CountDowntimedReplicas                    uint
	ReplicationDepth                          uint
	Downstreams                               InstanceKeyMap
	SlaveHosts                                InstanceKeyMap // for backwards compatibility. Equals `DownstreamKeyMap`
	IsFailingToConnectToMaster                bool
	Analysis                                  AnalysisCode
	Description                               string
	StructureAnalysis                         []StructureAnalysisCode
	IsDowntimed                               bool
	IsReplicasDowntimed                       bool // as good as downtimed because all replicas are downtimed AND analysis is all about the replicas (e.g. AllMasterReplicasNotReplicating)
	DowntimeEndTimestamp                      string
	DowntimeRemainingSeconds                  int
	IsReplicaServer                           bool
	PseudoGTIDImmediateTopology               bool
	OracleGTIDImmediateTopology               bool
	MariaDBGTIDImmediateTopology              bool
	BinlogServerImmediateTopology             bool
	SemiSyncMasterEnabled                     bool
	SemiSyncMasterStatus                      bool
	SemiSyncMasterWaitForReplicaCount         uint
	SemiSyncMasterClients                     uint
	CountSemiSyncReplicasEnabled              uint
	CountLoggingReplicas                      uint
	CountStatementBasedLoggingReplicas        uint
	CountMixedBasedLoggingReplicas            uint
	CountRowBasedLoggingReplicas              uint
	CountDistinctMajorVersionsLoggingReplicas uint
	CountDelayedReplicas                      uint
	CountLaggingReplicas                      uint
	IsActionableRecovery                      bool
	ProcessingNodeHostname                    string
	ProcessingNodeToken                       string
	CountAdditionalAgreeingNodes              int
	StartActivePeriod                         string
	SkippableDueToDowntime                    bool
	GTIDMode                                  string
	MinReplicaGTIDMode                        string
	MaxReplicaGTIDMode                        string
	MaxReplicaGTIDErrant                      string
	CommandHint                               string
	IsReadOnly                                bool

	//for opengauss
	CountPrimary int
}

ReplicationAnalysis notes analysis on replication chain status, per instance

func (*ReplicationAnalysis) AnalysisString

func (this *ReplicationAnalysis) AnalysisString() string

AnalysisString returns a human friendly description of all analysis issues

func (*ReplicationAnalysis) GetAnalysisInstanceType

func (this *ReplicationAnalysis) GetAnalysisInstanceType() AnalysisInstanceType

GetAnalysisInstanceType returns a string description of the analyzed instance type (master? co-master? intermediate-master?)

func (*ReplicationAnalysis) MarshalJSON

func (this *ReplicationAnalysis) MarshalJSON() ([]byte, error)

type ReplicationAnalysisChangelog

type ReplicationAnalysisChangelog struct {
	AnalyzedInstanceKey InstanceKey
	Changelog           []string
}

type ReplicationAnalysisHints

type ReplicationAnalysisHints struct {
	IncludeDowntimed bool
	IncludeNoProblem bool
	AuditAnalysis    bool
}

type ReplicationCredentials

type ReplicationCredentials struct {
	User      string
	Password  string
	SSLCert   string
	SSLKey    string
	SSLCaCert string
}

type Request

type Request struct {
	*InstanceKey
	Hint string
}

func (*Request) String

func (r *Request) String() string

String returns a user-friendly string representation of this request

type SnapshotData

type SnapshotData struct {
	Keys             []InstanceKey // Kept for backwards compatibility
	MinimalInstances []MinimalInstance
	RecoveryDisabled bool

	ClusterAlias,
	ClusterAliasOverride,
	ClusterDomainName,
	HostAttributes,
	InstanceTags,
	AccessToken,
	PoolInstances,
	InjectedPseudoGTIDClusters,
	HostnameResolves,
	HostnameUnresolves,
	DowntimedInstances,
	Candidates,
	Detections,
	KVStore,
	Recovery,
	RecoverySteps sqlutil.NamedResultData

	LeaderURI string
}

func NewSnapshotData

func NewSnapshotData() *SnapshotData

type SnapshotHandler

type SnapshotHandler interface {
	Snapshot() (data []byte, err error)
	Restore(rc io.ReadCloser) error
}

type StructureAnalysisCode

type StructureAnalysisCode string

type Tag

type Tag struct {
	TagName  string
	TagValue string
	HasValue bool
	Negate   bool
}

func NewTag

func NewTag(tagName string, tagValue string) (*Tag, error)

func ParseIntersectTags

func ParseIntersectTags(tagsString string) (tags []*Tag, err error)

func ParseTag

func ParseTag(tagString string) (*Tag, error)

func (*Tag) Display

func (tag *Tag) Display() string

func (*Tag) String

func (tag *Tag) String() string

type Token

type Token struct {
	Hash string
}

Token is used to identify and validate requests to this service

func NewToken

func NewToken() *Token

func (*Token) Short

func (this *Token) Short() string

type TopologyRecovery

type TopologyRecovery struct {
	PostponedFunctionsContainer

	Id                        int64
	UID                       string
	AnalysisEntry             ReplicationAnalysis
	SuccessorKey              *InstanceKey
	SuccessorAlias            string
	IsActive                  bool
	IsSuccessful              bool
	LostReplicas              InstanceKeyMap
	ParticipatingInstanceKeys InstanceKeyMap
	AllErrors                 []string
	RecoveryStartTimestamp    string
	RecoveryEndTimestamp      string
	ProcessingNodeHostname    string
	ProcessingNodeToken       string
	Acknowledged              bool
	AcknowledgedAt            string
	AcknowledgedBy            string
	AcknowledgedComment       string
	LastDetectionId           int64
	RelatedRecoveryId         int64
	Type                      RecoveryType
	RecoveryType              RecoveryType
}

TopologyRecovery represents an entry in the topology_recovery table

func NewTopologyRecovery

func NewTopologyRecovery(replicationAnalysis ReplicationAnalysis) *TopologyRecovery

func (*TopologyRecovery) AddError

func (this *TopologyRecovery) AddError(err error) error

func (*TopologyRecovery) AddErrors

func (this *TopologyRecovery) AddErrors(errs []error)

type TopologyRecoveryStep

type TopologyRecoveryStep struct {
	Id          int64
	RecoveryUID string
	AuditAt     string
	Message     string
}

func NewTopologyRecoveryStep

func NewTopologyRecoveryStep(uid string, message string) *TopologyRecoveryStep

type Worker

type Worker struct {
	Name string // worker name; preferably defined as a constant with the "WorkerName" prefix
	// contains filtered or unexported fields
}

Worker is used to execute a function asynchronously

func (*Worker) AsyncRun

func (w *Worker) AsyncRun() error

AsyncRun runs the worker asynchronously

func (*Worker) Exit

func (w *Worker) Exit()

Exit triggers the worker to exit

type WorkerPool

type WorkerPool struct {
	// contains filtered or unexported fields
}

WorkerPool is used to manage all tasks running asynchronously

func NewWorkerPool

func NewWorkerPool() *WorkerPool

NewWorkerPool creates a new worker pool

func (*WorkerPool) AsyncRun

func (w *WorkerPool) AsyncRun(name string, f func(workerExit chan struct{})) error

AsyncRun creates a new worker and runs it asynchronously

func (*WorkerPool) Exit

func (w *WorkerPool) Exit(signal os.Signal)

Exit quits the worker pool with the given signal

func (*WorkerPool) GetExitChannel

func (w *WorkerPool) GetExitChannel() chan os.Signal

GetExitChannel returns the worker pool's exit channel

func (*WorkerPool) GetWorker

func (w *WorkerPool) GetWorker(name string) *Worker

GetWorker gets the named worker from the worker pool

func (*WorkerPool) NewWorker

func (w *WorkerPool) NewWorker(name string, f func(workerExit chan struct{})) (*Worker, error)

NewWorker creates a new worker and adds it to the worker pool

func (*WorkerPool) RemoveWorker

func (w *WorkerPool) RemoveWorker(name string) bool

RemoveWorker removes a worker that has no running instance from the worker pool

func (*WorkerPool) WaitStop

func (w *WorkerPool) WaitStop()

WaitStop stops all workers and quits the worker pool gracefully (executed only once)
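
A hedged sketch of the worker pool lifecycle; the worker name and task body are made up, and the shutdown wiring is an assumption about how GetExitChannel and WaitStop are meant to be combined:

package dtstruct

import (
	"log"
	"os/signal"
	"syscall"
	"time"
)

// exampleWorkerPoolLifecycle is illustration only.
func exampleWorkerPoolLifecycle() {
	pool := NewWorkerPool()

	// Run a periodic task until the pool signals the worker to exit.
	if err := pool.AsyncRun("example-ticker", func(workerExit chan struct{}) {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				log.Println("tick")
			case <-workerExit:
				return
			}
		}
	}); err != nil {
		log.Println(err)
		return
	}

	// Assumption: OS signals delivered to the pool's exit channel trigger shutdown,
	// and WaitStop blocks until all workers have stopped.
	signal.Notify(pool.GetExitChannel(), syscall.SIGINT, syscall.SIGTERM)
	pool.WaitStop()
}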

type WriteBufferMetric

type WriteBufferMetric struct {
	Timestamp    time.Time     // time the metric was started
	Instances    int           // number of flushed instances
	WaitLatency  time.Duration // waiting before flush
	WriteLatency time.Duration // time writing to backend
}

WriteBufferMetric records the metrics of buffered backend writes: the number of instances flushed, the time spent waiting before the flush, and the time spent writing to the backend.

func (WriteBufferMetric) When

func (m WriteBufferMetric) When() time.Time

When returns the timestamp at which the metric recording started
