trireme-lib: go.aporeto.io/trireme-lib/monitor/internal/pod

package podmonitor

import "go.aporeto.io/trireme-lib/monitor/internal/pod"

Index

Package Files

config.go controller.go delete_controller.go monitor.go resync.go watcher.go

Variables

var (
    // ErrHandlePUStartEventFailed is the error sent back if a start event fails
    ErrHandlePUStartEventFailed = errs.New("Aporeto Enforcer start event failed")

    // ErrNetnsExtractionMissing is the error when we are missing a PID or netns path after successful metadata extraction
    ErrNetnsExtractionMissing = errs.New("Aporeto Enforcer missed to extract PID or netns path")

    // ErrHandlePUStopEventFailed is the error sent back if a stop event fails
    ErrHandlePUStopEventFailed = errs.New("Aporeto Enforcer stop event failed")

    // ErrHandlePUDestroyEventFailed is the error sent back if a destroy event fails
    ErrHandlePUDestroyEventFailed = errs.New("Aporeto Enforcer destroy event failed")
)
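
Since these are exported sentinel values, a caller can compare a returned error against them directly, assuming they are returned unwrapped. A minimal, illustrative sketch (the classifyEventError helper is not part of this package):

// classifyEventError maps the package's sentinel errors to a short label.
// How these errors are surfaced by the monitor is not shown on this page.
func classifyEventError(err error) string {
    switch err {
    case ErrHandlePUStartEventFailed:
        return "start failed"
    case ErrHandlePUStopEventFailed:
        return "stop failed"
    case ErrHandlePUDestroyEventFailed:
        return "destroy failed"
    case ErrNetnsExtractionMissing:
        return "missing PID or netns path"
    default:
        return "unknown"
    }
}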

func ResyncWithAllPods Uses

func ResyncWithAllPods(ctx context.Context, c client.Client, evCh chan<- event.GenericEvent) error

ResyncWithAllPods is called from the resync implementation; it lists all pods and fires them down the event source (the generic event channel).
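
A minimal usage sketch, assuming an already constructed controller-runtime client and the monitor's generic event channel, both of which come from the surrounding wiring:

import (
    "context"
    "fmt"

    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/event"
)

// resyncOnce lists all pods via the given client and fires them down the
// generic event channel so that each one is reconciled again.
func resyncOnce(ctx context.Context, c client.Client, evCh chan<- event.GenericEvent) error {
    if err := ResyncWithAllPods(ctx, c, evCh); err != nil {
        return fmt.Errorf("resync with all pods failed: %s", err)
    }
    return nil
}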

type Config Uses

type Config struct {
    Kubeconfig     string
    Nodename       string
    EnableHostPods bool
    Workers        int

    MetadataExtractor         extractors.PodMetadataExtractor
    NetclsProgrammer          extractors.PodNetclsProgrammer
    PidsSetMaxProcsProgrammer extractors.PodPidsSetMaxProcsProgrammer
    ResetNetcls               extractors.ResetNetclsKubepods
    SandboxExtractor          extractors.PodSandboxExtractor
}

Config is the config for the Kubernetes monitor

func DefaultConfig Uses

func DefaultConfig() *Config

DefaultConfig provides a default configuration

func SetupDefaultConfig Uses

func SetupDefaultConfig(kubernetesConfig *Config) *Config

SetupDefaultConfig adds defaults to a partial configuration
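
A small sketch of both entry points; the overridden field values are illustrative only:

func exampleConfig() *Config {
    // Start from the defaults and override only what differs on this node.
    cfg := DefaultConfig()
    cfg.Nodename = "worker-1" // illustrative value
    cfg.EnableHostPods = true
    _ = cfg

    // Alternatively, hand a partial configuration to SetupDefaultConfig and
    // let it fill in the remaining defaults.
    partial := &Config{Workers: 4}
    return SetupDefaultConfig(partial)
}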

type DeleteController Uses

type DeleteController struct {
    // contains filtered or unexported fields
}

DeleteController is responsible for cleaning up after Kubernetes, because our native ID is no longer available on the last reconcile event, at which point the pod has already been deleted. It is also more reliable because we feed this controller with events from the moment we first see a deletion timestamp on a pod. It effectively does the work of a finalizer without needing one, and only kicks in once a pod has *really* been deleted.

func NewDeleteController Uses

func NewDeleteController(c client.Client, pc *config.ProcessorConfig, sandboxExtractor extractors.PodSandboxExtractor, eventsCh chan event.GenericEvent) *DeleteController

NewDeleteController creates a new DeleteController.

func (*DeleteController) GetDeleteCh Uses

func (c *DeleteController) GetDeleteCh() chan<- DeleteEvent

GetDeleteCh returns the delete channel on which to queue delete events

func (*DeleteController) GetReconcileCh Uses

func (c *DeleteController) GetReconcileCh() chan<- struct{}

GetReconcileCh returns the channel on which to notify the controller about an immediate reconcile event

func (*DeleteController) Start Uses

func (c *DeleteController) Start(z <-chan struct{}) error

Start implements the Runnable interface
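
Since Start matches the controller-runtime Runnable signature of that era, the delete controller can be handed to a manager, which then drives it. A sketch, assuming the manager, processor config, sandbox extractor, and events channel come from the rest of the monitor wiring (the config and extractors import paths are assumed):

import (
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/manager"

    "go.aporeto.io/trireme-lib/monitor/config"
    "go.aporeto.io/trireme-lib/monitor/extractors"
)

// addDeleteController registers the delete controller with the manager so the
// manager calls Start with its stop channel.
func addDeleteController(mgr manager.Manager, pc *config.ProcessorConfig, ex extractors.PodSandboxExtractor, eventsCh chan event.GenericEvent) error {
    dc := NewDeleteController(mgr.GetClient(), pc, ex, eventsCh)
    return mgr.Add(dc)
}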

type DeleteEvent Uses

type DeleteEvent struct {
    PodUID        string
    SandboxID     string
    NamespaceName client.ObjectKey
}

DeleteEvent is used to send delete events to our event loop, which watches for their real deletion in the Kubernetes API. Once an object is gone, we send destroy events down to trireme.
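
A sketch of queuing such an event once a pod is first seen with a deletion timestamp; the identifiers are assumed to come from the reconciler:

import "sigs.k8s.io/controller-runtime/pkg/client"

// queueDelete hands a pod's identifiers to the delete controller, which will
// watch for its real disappearance from the Kubernetes API before sending the
// destroy event to trireme.
func queueDelete(dc *DeleteController, podUID, sandboxID string, key client.ObjectKey) {
    dc.GetDeleteCh() <- DeleteEvent{
        PodUID:        podUID,
        SandboxID:     sandboxID,
        NamespaceName: key,
    }

    // Optionally ask the controller for an immediate reconcile of its event map.
    dc.GetReconcileCh() <- struct{}{}
}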

type DeleteObject Uses

type DeleteObject struct {
    // contains filtered or unexported fields
}

DeleteObject is the obj used to store in the event map.

type PodMonitor Uses

type PodMonitor struct {
    // contains filtered or unexported fields
}

PodMonitor implements a monitor that sends pod events upstream. It is implemented as a filter on the standard DockerMonitor: it gets all the PU events from the DockerMonitor and, if the container is the POD container from Kubernetes, it connects to the Kubernetes API and adds the tags coming from Kubernetes that cannot otherwise be found.

func New Uses

func New() *PodMonitor

New returns a new Kubernetes monitor.

func (*PodMonitor) Resync Uses

func (m *PodMonitor) Resync(ctx context.Context) error

Resync requests the monitor to do a resync.

func (*PodMonitor) Run Uses

func (m *PodMonitor) Run(ctx context.Context) error

Run starts the monitor.

func (*PodMonitor) SetupConfig Uses

func (m *PodMonitor) SetupConfig(registerer registerer.Registerer, cfg interface{}) error

SetupConfig provides a configuration to implementations. Every implementation can have its own config type.

func (*PodMonitor) SetupHandlers Uses

func (m *PodMonitor) SetupHandlers(c *config.ProcessorConfig)

SetupHandlers sets up handlers for monitors to invoke for various events such as processing unit events and synchronization events. This will be called before Start() by the consumer of the monitor.
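
Putting the pieces together, a rough lifecycle sketch, assuming the registerer and processor config are supplied by the consumer of the monitor (the registerer and config import paths are assumed):

import (
    "context"

    "go.aporeto.io/trireme-lib/monitor/config"
    "go.aporeto.io/trireme-lib/monitor/registerer"
)

// runPodMonitor shows the expected call order: configure, wire handlers, run.
func runPodMonitor(ctx context.Context, reg registerer.Registerer, pc *config.ProcessorConfig) error {
    m := New()
    if err := m.SetupConfig(reg, DefaultConfig()); err != nil {
        return err
    }
    m.SetupHandlers(pc)
    return m.Run(ctx)
}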

type ReconcilePod Uses

type ReconcilePod struct {
    // contains filtered or unexported fields
}

ReconcilePod reconciles a Pod object

func (*ReconcilePod) Reconcile Uses

func (r *ReconcilePod) Reconcile(request reconcile.Request) (reconcile.Result, error)

Reconcile reads the state of the cluster for a pod object.

type WatchPodMapper Uses

type WatchPodMapper struct {
    // contains filtered or unexported fields
}

WatchPodMapper determines if we want to reconcile on a pod event. There are two limitations:

- the pod must be scheduled on a matching nodeName
- if the pod requests host networking, only reconcile if we want to enable host pods

func (*WatchPodMapper) Map Uses

func (w *WatchPodMapper) Map(obj handler.MapObject) []reconcile.Request

Map implements the handler.Mapper interface to emit reconcile requests for corev1.Pods. It effectively filters the pods by looking for a matching nodeName, and filters them out if host networking is requested but we don't want to enable host pods.
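
How such a mapper is typically wired into a controller's pod watch (using the EnqueueRequestsFromMapFunc handler of 2019-era controller-runtime); constructing the WatchPodMapper itself is internal to this package and not shown on this page:

import (
    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/controller"
    "sigs.k8s.io/controller-runtime/pkg/handler"
    "sigs.k8s.io/controller-runtime/pkg/source"
)

// watchPods wires the mapper into a controller's pod watch so that every pod
// event is filtered through Map before a reconcile request is enqueued.
func watchPods(c controller.Controller, w *WatchPodMapper) error {
    return c.Watch(
        &source.Kind{Type: &corev1.Pod{}},
        &handler.EnqueueRequestsFromMapFunc{ToRequests: w},
    )
}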

Package podmonitor imports 24 packages and is imported by 2 packages. Updated 2019-11-08.