import "istio.io/istio/pilot/pkg/status"
ledger.go report.go reporter.go resourcelock.go state.go
func CreateOrUpdateConfigMap(ctx context.Context, cm *corev1.ConfigMap, client v1.ConfigMapInterface) (res *corev1.ConfigMap, err error)
CreateOrUpdateConfigMap creates the given ConfigMap, updating it in place if it already exists. This is lifted with a few modifications from kubeadm's apiclient.
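A minimal usage sketch, not from the package docs: the in-cluster config, namespace, and ConfigMap contents below are assumptions for illustration.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"

	"istio.io/istio/pilot/pkg/status"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the process runs in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg).CoreV1().ConfigMaps("istio-system")

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "distribution-report", Namespace: "istio-system"},
		Data:       map[string]string{"report": "{}"},
	}
	// Creates the ConfigMap, or updates it if one already exists.
	if _, err := status.CreateOrUpdateConfigMap(context.Background(), cm, client); err != nil {
		log.Fatal(err)
	}
}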
func GVKtoGVR(in config.GroupVersionKind) *schema.GroupVersionResource
func GetTypedStatus(in interface{}) (out *v1alpha1.IstioStatus, err error)
func ReconcileStatuses(current *config.Config, desired Progress, generation int64) (bool, *v1alpha1.IstioStatus)
type DistributionController struct {
	CurrentState    map[Resource]map[string]Progress
	ObservationTime map[string]time.Time
	UpdateInterval  time.Duration
	StaleInterval   time.Duration
	// contains filtered or unexported fields
}
func NewController(restConfig rest.Config, namespace string, cs model.ConfigStore) *DistributionController
func (c *DistributionController) Start(stop <-chan struct{})
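A hedged wiring sketch: restCfg and configStore are assumed to be supplied by the surrounding Istiod bootstrap, and startDistribution is a hypothetical helper; only NewController and Start come from this package.

import (
	"k8s.io/client-go/rest"

	"istio.io/istio/pilot/pkg/model"
	"istio.io/istio/pilot/pkg/status"
)

// startDistribution wires up the controller; closing the returned channel
// signals it to stop.
func startDistribution(restCfg rest.Config, configStore model.ConfigStore) chan struct{} {
	c := status.NewController(restCfg, "istio-system", configStore)
	stop := make(chan struct{})
	c.Start(stop)
	return stop
}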
type DistributionReport struct {
	Reporter            string         `json:"reporter"`
	DataPlaneCount      int            `json:"dataPlaneCount"`
	InProgressResources map[string]int `json:"inProgressResources"`
}
func ReportFromYaml(content []byte) (DistributionReport, error)
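A hedged parsing sketch. The YAML document below is invented; its field names follow the struct's json tags, and the key format of inProgressResources is an assumption, not documented here.

import (
	"fmt"
	"log"

	"istio.io/istio/pilot/pkg/status"
)

func parseReport() {
	raw := []byte(`
reporter: istiod-6c5f9d4b7-xk2vp
dataPlaneCount: 10
inProgressResources:
  default/my-virtualservice/1: 3
`)
	report, err := status.ReportFromYaml(raw)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("reporter=%s dataPlanes=%d inProgress=%v\n",
		report.Reporter, report.DataPlaneCount, report.InProgressResources)
}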
type DistroReportHandler struct {
// contains filtered or unexported fields
}
func (drh *DistroReportHandler) HandleNew(obj interface{})
func (drh *DistroReportHandler) OnAdd(obj interface{})
func (drh *DistroReportHandler) OnDelete(obj interface{})
func (drh *DistroReportHandler) OnUpdate(oldObj, newObj interface{})
type Reporter struct {
	UpdateInterval time.Duration
	PodName        string
	// contains filtered or unexported fields
}
This function must be called every time pilot detects a resource change. This allows us to look up only the resources we expect to be in flight, not the ones that have already been distributed.
Init starts all the read-only features of the reporter, used for nonce generation and for responding to istioctl wait.
func (r *Reporter) QueryLastNonce(conID string, distributionType xds.EventType) (noncePrefix string)
When a dataplane disconnects, we should no longer count it, nor expect it to ack config.
Registers that a dataplane has acknowledged a new version of the config. In theory we could harvest this data from the ADS connections themselves, but the mutex there is pretty hot, and it seems best to trade memory for time.
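A sketch of the memory-for-time trade described above, with invented names (the real Reporter keeps richer state): acks are copied into a map behind a dedicated lock, so reads never contend with the hot ADS connection mutex.

import "sync"

type ackTracker struct {
	mu     sync.RWMutex
	nonces map[string]string // connection ID -> latest acked nonce prefix
}

func (a *ackTracker) registerEvent(conID, nonce string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.nonces[conID] = nonce // extra memory, but no contention with ADS
}

func (a *ackTracker) queryLastNonce(conID string) string {
	a.mu.RLock()
	defer a.mu.RUnlock()
	return a.nonces[conID]
}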
func (r *Reporter) SetController(controller *DistributionController)
func (r *Reporter) Start(clientSet kubernetes.Interface, namespace string, podname string, stop <-chan struct{})
Start starts the reporter, which watches dataplane acks and resource changes so that it can update the status leader with distribution information.
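A hedged wiring sketch using the signatures above; clientset, controller, podName, and the runReporter helper itself are assumptions, provided here only for illustration.

import (
	"time"

	"k8s.io/client-go/kubernetes"

	"istio.io/istio/pilot/pkg/status"
)

func runReporter(clientset kubernetes.Interface, controller *status.DistributionController, podName string, stop <-chan struct{}) {
	r := &status.Reporter{UpdateInterval: 5 * time.Second, PodName: podName}
	r.SetController(controller)
	// Watches dataplane acks and resource changes until stop is closed.
	r.Start(clientset, "istio-system", podName, stop)
}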
type Resource struct {
	schema.GroupVersionResource
	Namespace  string
	Name       string
	Generation string
}
TODO: maybe replace with a Kubernetes resource identifier, if such a thing exists
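For illustration, a Resource naming generation 1 of a hypothetical VirtualService (all values invented; assumes the k8s.io/apimachinery/pkg/runtime/schema import):

var res = status.Resource{
	GroupVersionResource: schema.GroupVersionResource{
		Group:    "networking.istio.io",
		Version:  "v1alpha3",
		Resource: "virtualservices",
	},
	Namespace:  "default",
	Name:       "my-vs",
	Generation: "1",
}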
type Task func(entry cacheEntry)
Task to be performed.
type WorkQueue struct {
	OnPush func()
	// contains filtered or unexported fields
}
func (wq *WorkQueue) Pop(exclusion map[lockResource]struct{}) (target *Resource, progress *Progress)
Pop returns the first item in the queue that is not in exclusion, along with its latest progress.
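A generic illustration of the pop-with-exclusion pattern (all types invented; the real WorkQueue keys on an unexported lockResource type, so the exclusion map cannot be built outside this package):

// popExcluding returns the first queued key that is not currently being
// worked on, plus the queue with that key removed.
func popExcluding(queue []string, exclusion map[string]struct{}) (key string, rest []string, ok bool) {
	for i, k := range queue {
		if _, busy := exclusion[k]; busy {
			continue // k is already being worked on; skip it
		}
		return k, append(queue[:i:i], queue[i+1:]...), true
	}
	return "", queue, false
}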
type WorkerPool struct {
// contains filtered or unexported fields
}
func (wp *WorkerPool) Delete(target Resource)
func (wp *WorkerPool) Push(target Resource, progress Progress)
func (wp *WorkerPool) Run(ctx context.Context)
type WorkerQueue interface {
	// Push a task.
	Push(target Resource, progress Progress)
	// Run the loop until a signal on the context
	Run(ctx context.Context)
	// Delete a task
	Delete(target Resource)
}
WorkerQueue implements an expandable goroutine pool that executes at most one concurrent routine per target resource. Multiple calls to Push() will not schedule multiple executions per target resource, but will ensure that the single execution uses the latest value.
func NewWorkerPool(work func(*Resource, *Progress), maxWorkers uint) WorkerQueue
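A hedged usage sketch of the coalescing semantics described above; res is an assumed Resource value and the work function body is illustrative only.

import (
	"context"

	"istio.io/istio/pilot/pkg/status"
)

func runPool(ctx context.Context, res status.Resource) {
	wq := status.NewWorkerPool(func(r *status.Resource, p *status.Progress) {
		// At most one invocation runs per Resource at any moment.
	}, 5)
	go wq.Run(ctx) // Run loops until ctx is signaled

	var p status.Progress // zero value; real callers pass measured progress
	wq.Push(res, p)
	wq.Push(res, p) // coalesces: the single execution sees the latest value
}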
Package status imports 29 packages and is imported by 2 packages. Updated 2021-01-27.