Kubernetes Audit Events Plugin for GKE

Introduction

Audit logs created by Google Kubernetes Engine (GKE) are part of Google's Cloud Audit Logs, which in turn are part of Google's Cloud Logging.

This means we have no control over the audit policy and, more importantly, we have no access to the original audit event emitted by the Kubernetes API server.

This plugin tries to reconstruct an audit event object as defined in the audit.k8s.io API group from the available information in the Google LogEntry. Unfortunately this might mean that certain information is simply unavailable to Falco.

How it works

GKE Admin Activity audit logs and GKE Data Access audit logs can be sent to a Pub/Sub topic using a logs routing sink. The Falco k8saudit-gke plugin uses a subscription on this Pub/Sub topic to pull the audit log entries.

Optionally, the k8saudit-gke plugin can use the Google Container API to fetch cluster resource metadata labels. These cluster labels are appended to the resource labels of the log entry.

Finally, the Google audit log entries are converted to a Kubernetes audit event object and handed off to the Falco rule pipeline. This means the field extraction methods and rules of the k8saudit plugin can be used (an example of installing those rules follows the diagram below).

[!WARNING] As the Kubernetes audit event is reconstructed from a Google audit log entry, some Falco rules might not work as expected due to missing information.

flowchart LR
  sink(logs routing sink) -- publish --> topic(Pub/Sub topic)
  topic -- subscription --> plugin(k8saudit-gke plugin)
  metadata(Google Container API) -. fetch cluster labels .-> plugin
  plugin -- transform --> audit(K8S Audit Event)
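
Since the k8saudit ruleset applies, one way to get started is to pull the upstream k8saudit rules with falcoctl, mirroring the falcoctl invocation shown under Running locally. This is only a rough sketch: it assumes the rules are published as the k8saudit-rules artifact in the default falcoctl index, and the local rules.d directory is just an example.

# install the upstream k8saudit rules into a local rules directory
docker run -ti --rm -v "${PWD}"/rules.d:/rules falcosecurity/falcoctl artifact install k8saudit-rules --rules-dir=/rules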

Usage

Configuration

Here's an example configuration in falco.yaml:

plugins:
- init_config:
    project_id: "your-gcp-project-id"
    max_event_size: 8000000
    set_cluster_labels: true
  library_path: libk8saudit-gke.so
  name: k8saudit-gke
  open_params: "your-gcp-subscription-id"

load_plugins: [k8saudit-gke, json]

Initialization Config:

  • project_id: The Google project ID containing your Pub/Sub topic/subscription.
  • credentials_file: If non-empty, overrides the default GCP credentials file (default: empty)
  • num_goroutines: The number of goroutines that each data structure along the Pub/Sub receive path will spawn (default: 10)
  • maxout_stand_messages: The maximum number of unprocessed Pub/Sub messages (default: 1000)
  • fetch_cluster_metadata: If true then use the Google Container API to fetch cluster metadata labels (default: false)
  • cache_expiration: Cluster metadata cache expiration duration in minutes (default: 10)
  • use_async: If true then async extraction optimization is enabled (default: true)
  • max_event_size: Maximum size of a single audit event (default: 262144)

Note: as described in issue #2475, it might be better to turn off the async extraction optimization.

Open Parameters:

A string containing the subscription name for your Google Pub/Sub topic (required).
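
If unsure of the value, the subscription can be looked up with gcloud, for example (using the example names from the setup section below):

# list subscriptions in the project, then inspect the one used by the plugin
gcloud pubsub subscriptions list --project=<your-gcp-project-id>
gcloud pubsub subscriptions describe falco-gke-audit-sub --project=<your-gcp-project-id>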

Setting up a Google Pub/Sub topic and subscription

A Pub/Sub topic and subscription can be created in the same Google project as your GKE cluster(s). If you run GKE clusters in different Google projects, this would mean deploying multiple Falco k8saudit-gke plugin instances, as the Falco k8saudit-gke plugin only supports a single subscription.

Fortunately, Google supports publishing messages to a Pub/Sub topic in a different Google project. Hence it is possible to create a single Pub/Sub topic and subscription with log sinks from different projects routing log entries to it.

In case a single Falco k8saudit-gke plugin instance is not able to handle your audit log volume, you can use the following Pub/Sub publish and subscribe patterns (or a combination of them):

  1. create multiple topics (e.g. one per Google project or GKE cluster) and corresponding subscriptions
  2. create a single topic and multiple subscriptions with different message filters
  3. use multiple Falco k8saudit-gke plugin instances with a single subscription by enabling exactly-once delivery. Only supported within a single cloud region.

Pub/Sub setup:

  • create a Pub/Sub topic (e.g. falco-gke-audit-topic), this can be in the same or in a different Google project as your GKE cluster(s)
  • create a subscription (e.g. falco-gke-audit-sub) to the Pub/Sub topic created above
  • create a service account and bind the IAM role roles/pubsub.subscriber to it
  • provide the credentials file of this service account to the k8saudit-gke plugin (example gcloud commands for these steps follow this list)
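
A rough gcloud sketch of the steps above; the service account name falco-pubsub-sa is only an example, and the project ID placeholders must be adjusted to your environment:

# create the topic and a subscription on it
gcloud pubsub topics create falco-gke-audit-topic --project=<my-google-pubsub-project-id>
gcloud pubsub subscriptions create falco-gke-audit-sub --topic=falco-gke-audit-topic --project=<my-google-pubsub-project-id>

# create a service account for the plugin and allow it to pull from the subscription
gcloud iam service-accounts create falco-pubsub-sa --project=<my-google-pubsub-project-id>
gcloud pubsub subscriptions add-iam-policy-binding falco-gke-audit-sub \
  --project=<my-google-pubsub-project-id> \
  --member="serviceAccount:falco-pubsub-sa@<my-google-pubsub-project-id>.iam.gserviceaccount.com" \
  --role="roles/pubsub.subscriber"

# export a credentials file to pass to the plugin via credentials_file
gcloud iam service-accounts keys create falco-pubsub-sa.json \
  --iam-account="falco-pubsub-sa@<my-google-pubsub-project-id>.iam.gserviceaccount.com"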

Log Router Sinks setup (for each Google project containing GKE clusters):

  • create a logs routing sink (e.g. falco-gke-audit-sink) with the following options:
    • destination: pubsub.googleapis.com/<your Pub/Sub topic> (e.g. pubsub.googleapis.com/projects/<my-google-pubsub-project-id>/topics/falco-gke-audit-topic)
    • filter: logName=~"projects/.+/logs/cloudaudit.googleapis.com%2F(activity|data_access)" AND protoPayload.serviceName="k8s.io"
    • exclusion filters (optional): e.g. protoPayload.methodName="io.k8s.coordination.v1.leases.update" (exclusion filters reduce the number of log entries sent to Falco)
  • bind the IAM role roles/pubsub.publisher to the log sink writer identity (see the example commands below)
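
As an illustration, creating the sink and granting the publisher role could look roughly as follows with gcloud (run per GKE project; the writer identity placeholder must be replaced with the output of the describe command):

# create the sink routing GKE audit logs to the Pub/Sub topic
gcloud logging sinks create falco-gke-audit-sink \
  pubsub.googleapis.com/projects/<my-google-pubsub-project-id>/topics/falco-gke-audit-topic \
  --project=<my-gke-project-id> \
  --log-filter='logName=~"projects/.+/logs/cloudaudit.googleapis.com%2F(activity|data_access)" AND protoPayload.serviceName="k8s.io"'

# look up the sink's writer identity and allow it to publish to the topic
gcloud logging sinks describe falco-gke-audit-sink --project=<my-gke-project-id> --format='value(writerIdentity)'
gcloud pubsub topics add-iam-policy-binding falco-gke-audit-topic \
  --project=<my-google-pubsub-project-id> \
  --member="<writer identity from the previous command>" \
  --role="roles/pubsub.publisher"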

See the official Google Pub/Sub documentation for additional information on how to set up Pub/Sub.

Cluster resource labels and Google Container API permissions

To fetch cluster metadata from the Google Container API (enabled with the fetch_cluster_metadata flag), the service account used by the k8saudit-gke plugin requires the IAM role roles/container.clusterViewer in each Google project sending GKE audit logs to the Pub/Sub topic.
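
Granting that role per project could, for example, be done with gcloud (the service account is the one created for the Pub/Sub subscription above; the names are only examples):

# allow the plugin's service account to read cluster metadata in this project
gcloud projects add-iam-policy-binding <my-gke-project-id> \
  --member="serviceAccount:falco-pubsub-sa@<my-google-pubsub-project-id>.iam.gserviceaccount.com" \
  --role="roles/container.clusterViewer"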

The cluster resource labels are internally added to the labels field of the Google LogEntry.resource object. These labels in turn are added to the annotations field of the reconstructed Kubernetes audit.k8s.io/v1/Event object.

For example, the default 'resource' labels of a LogEntry are available in a Falco rule as follows; the same applies to your own cluster metadata labels:

  • %jevt.value[/annotations/cluster_name]
  • %jevt.value[/annotations/location]
  • %jevt.value[/annotations/project_id]

Running locally

To build and run the k8saudit-gke plugin locally on macOS, follow these steps:

  • download the json plugin with falcoctl:
docker run -ti --rm -v "${PWD}"/hack:/plugins falcosecurity/falcoctl artifact install json --plugins-dir=/plugins
  • build the k8saudit-gke plugin from its source directory:
docker run -ti --rm -v "${PWD}"/:/go/src --workdir=/go/src golang:1.21 make
  • run the Falco container from the k8saudit-gke plugin source directory:
docker run -ti --rm \
  -v "${PWD}"/hack/falco.yaml:/etc/falco/falco.yaml:ro \
  -v "${PWD}"/hack/rules.yaml:/etc/falco/rules.d/rules.yaml:ro \
  -v "${HOME}"/.config/gcloud/application_default_credentials.json:/root/.config/gcloud/application_default_credentials.json:ro \
  -v "${PWD}"/hack/libjson.so:/usr/share/falco/plugins/libjson.so:ro \
  -v "${PWD}"/libk8saudit-gke.so:/usr/share/falco/plugins/libk8saudit-gke.so:ro \
  -v "${PWD}"/test:/test/:ro \
  falcosecurity/falco-no-driver:0.37.1 falco --disable-source syscall

See hack/falco.yaml for open_params. To test specific Google audit log events, the file:// option can be used to point to local JSON files; otherwise point to a Google Pub/Sub subscription:

  # open_params: "file://test/pods_create.json"
  open_params: "falco-gke-audit-sub"
