Published: Jun 2, 2020 · License: Apache-2.0


Helper scripts

This directory contains helper scripts used by Prow test jobs, as well as local development scripts.

Using the presubmit-tests.sh helper script

This is a helper script to run the presubmit tests. To use it:

  1. Source this script.

  2. [optional] Define the function build_tests(). If you don't define this function, the default action for running the build tests is to:

    • check markdown files
    • run go build on the entire repo
    • run ./hack/verify-codegen.sh (if it exists)
    • check licenses in all go packages

    The markdown link checker tool doesn't check localhost links by default. Its configuration file, markdown-link-check-config.json, lives in the test-infra/scripts directory. To override it, create a file with the same name, containing the custom config in the /test directory.

    The markdown lint tool ignores long lines by default. Its configuration file, markdown-lint-config.rc, lives in the test-infra/scripts directory. To override it, create a file with the same name, containing the custom config in the /test directory.

  3. [optional] Customize the default build test runner, if you're using it. Set the following environment variables if the default values don't fit your needs:

    • DISABLE_MD_LINTING: Disable linting markdown files, defaults to 0 (false).
    • DISABLE_MD_LINK_CHECK: Disable checking links in markdown files, defaults to 0 (false).
    • PRESUBMIT_TEST_FAIL_FAST: Fail the presubmit test immediately if a test fails, defaults to 0 (false).
  4. [optional] Define the functions pre_build_tests() and/or post_build_tests(). These functions will be called before or after the build tests (either your custom one or the default action) and will cause the test to fail if they don't return success.

  5. [optional] Define the function unit_tests(). If you don't define this function, the default action for running the unit tests is to run all go tests in the repo.

  6. [optional] Define the functions pre_unit_tests() and/or post_unit_tests(). These functions will be called before or after the unit tests (either your custom one or the default action) and will cause the test to fail if they don't return success.

  7. [optional] Define the function integration_tests(). If you don't define this function, the default action for running the integration tests is to run all ./test/e2e-*tests.sh scripts, in sequence.

  8. [optional] Define the functions pre_integration_tests() and/or post_integration_tests(). These functions will be called before or after the integration tests (either your custom one or the default action) and will cause the test to fail if they don't return success.

  9. Call the main() function passing "$@" (with quotes).

Running the script without parameters, or with the --all-tests flag causes all tests to be executed, in the right order (i.e., build, then unit, then integration tests).

Use the flags --build-tests, --unit-tests and --integration-tests to run a specific set of tests.

To run specific programs as a test, use the --run-test flag, and provide the program as the argument. If arguments are required for the program, pass everything as a single quoted argument. For example, ./presubmit-tests.sh --run-test "test/my/test data". This flag can be used repeatedly, and each test will be run in sequence.
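Conceptually, repeated --run-test flags build up a list of commands that are then run in order. A minimal, hypothetical sketch of that collection pattern (the function name and logic here are illustrative, not the helper's actual implementation):

```shell
# Hypothetical sketch: collect repeated --run-test arguments into an array.
collect_run_tests() {
  RUN_TESTS=()
  while (( $# )); do
    case "$1" in
      --run-test)
        RUN_TESTS+=("$2")   # keep the quoted program + its args as one item
        shift 2
        ;;
      *)
        shift
        ;;
    esac
  done
}

collect_run_tests --run-test "test/my/test data" --run-test ./test/other.sh
# Each collected entry would then be executed in sequence.
printf '%s\n' "${RUN_TESTS[@]}"
```

Quoting "test/my/test data" keeps the program and its arguments together as a single array entry, which is why the README asks for a single quoted argument.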

The script will automatically skip all presubmit tests for PRs where all changed files are exempt from tests (e.g., a PR changing only the OWNERS file).

Also, for PRs touching only markdown files, the unit and integration tests are skipped.

Sample presubmit test script
source vendor/github.com/chizhg/test-infra/scripts/presubmit-tests.sh

function post_build_tests() {
  echo "Cleaning up after build tests"
  rm -fr ./build-cache
}

function unit_tests() {
  make -C tests test
}

function pre_integration_tests() {
  echo "Cleaning up before integration tests"
  rm -fr ./staging-area
}

# We use the default integration test runner.

main "$@"

Using the e2e-tests.sh helper script

This is a helper script for Knative E2E test scripts. To use it:

  1. [optional] Customize the test cluster. Set the following environment variables if the default values don't fit your needs:

    • E2E_CLUSTER_REGION: Cluster region, defaults to us-central1.
    • E2E_CLUSTER_BACKUP_REGIONS: Space-separated list of regions to retry test cluster creation in case of stockout. Defaults to us-west1 us-east1.
    • E2E_CLUSTER_ZONE: Cluster zone (e.g., a), defaults to none (i.e. use a regional cluster).
    • E2E_CLUSTER_BACKUP_ZONES: Space-separated list of zones to retry test cluster creation in case of stockout. If defined, E2E_CLUSTER_BACKUP_REGIONS will be ignored. Defaults to none.
    • E2E_CLUSTER_MACHINE: Cluster node machine type, defaults to e2-standard-4.
    • E2E_MIN_CLUSTER_NODES: Minimum number of nodes in the cluster when autoscaling, defaults to 1.
    • E2E_MAX_CLUSTER_NODES: Maximum number of nodes in the cluster when autoscaling, defaults to 3.
  2. Source the script.

  3. [optional] Write the knative_setup() function, which will set up your system under test (e.g., Knative Serving). This function won't be called if you use the --skip-knative-setup flag.

  4. [optional] Write the knative_teardown() function, which will tear down your system under test (e.g., Knative Serving). This function won't be called if you use the --skip-knative-setup flag.

  5. [optional] Write the test_setup() function, which will set up the test resources.

  6. [optional] Write the test_teardown() function, which will tear down the test resources.

  7. [optional] Write the cluster_setup() function, which will set up any resources before the test cluster is created.

  8. [optional] Write the cluster_teardown() function, which will tear down any resources after the test cluster is destroyed.

  9. [optional] Write the dump_extra_cluster_state() function. It will be called when a test fails, and can dump extra information about the current state of the cluster (typically using kubectl).

  10. [optional] Write the parse_flags() function. It will be called whenever an unrecognized flag is passed to the script, allowing you to define your own flags. The function must return 0 if the flag is unrecognized, or the number of items to skip in the command line if the flag was parsed successfully. For example, return 1 for a simple flag, and 2 for a flag with a parameter.

  11. Call the initialize() function passing $@ (without quotes).

  12. Write logic for the end-to-end tests. Run all go tests using go_test_e2e() (or report_go_test() if you need more fine-grained control) and call fail_test() if any of them fail, or success() otherwise. The environment variables KO_DOCKER_REPO and E2E_PROJECT_ID will be set according to the test cluster.
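The parse_flags() contract from step 10 implies a driver loop that skips as many command-line items as the function reports it consumed. A hypothetical sketch of that contract (consume_flags and the flag names are illustrative, not part of the helper):

```shell
# Hypothetical sketch: the driver calls parse_flags on the remaining
# arguments and shifts by the returned count (0 means unrecognized).
parse_flags() {
  case "$1" in
    --no-knative-wait) WAIT_FOR_KNATIVE=0; return 1 ;;   # simple flag
    --cluster-name)    CLUSTER_NAME="$2";  return 2 ;;   # flag with a parameter
  esac
  return 0
}

consume_flags() {
  while (( $# )); do
    parse_flags "$@"
    local skip=$?
    if (( skip == 0 )); then
      echo "unrecognized flag: $1" >&2
      shift
    else
      shift "${skip}"
    fi
  done
}

WAIT_FOR_KNATIVE=1
consume_flags --no-knative-wait --cluster-name my-cluster
```

Returning the number of consumed items (1 for a bare flag, 2 for a flag plus its value) is what lets the driver resume parsing at the right position.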

Notes:

  1. Calling your script without arguments will create a new cluster in the GCP project $PROJECT_ID and run the tests against it.

  2. Calling your script with --run-tests and the variable KO_DOCKER_REPO set will immediately start the tests against the cluster currently configured for kubectl.

  3. By default, knative_teardown() and test_teardown() will be called after the tests finish; use --skip-teardowns if you don't want them to be called.

  4. By default, Istio is installed on the cluster via the Istio add-on; use --skip-istio-addon if you don't want it preinstalled.

  5. You can force running the tests against a specific GKE cluster version by using the --cluster-version flag and passing a full version as the flag value.
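The test-running pattern from step 12 might look like the following sketch; go_test_e2e and fail_test are stubbed here so the snippet stands alone (the stubs do not reflect the helpers' real implementations):

```shell
# Stubs standing in for the helpers provided by e2e-tests.sh.
go_test_e2e() { echo "would run: go test ${*}"; }
fail_test() { echo "TEST FAILED: ${*}" >&2; RESULT="fail"; }

RESULT="pass"
go_test_e2e -timeout=30m ./test/e2e || fail_test "e2e tests failed"
echo "e2e result: ${RESULT}"
```

In a real script, go_test_e2e runs the go tests against the cluster that initialize() set up, and fail_test aborts the script with diagnostics rather than just setting a variable.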

Sample end-to-end test script

This script will test that the latest Knative Serving nightly release works. It defines a special flag (--no-knative-wait) that causes the script not to wait for Knative Serving to be up before running the tests. It also requires that the test cluster is created in a specific region, us-west2.


# This test requires a cluster in LA
E2E_CLUSTER_REGION=us-west2

source vendor/github.com/chizhg/test-infra/scripts/e2e-tests.sh

function knative_setup() {
  start_latest_knative_serving
  if (( WAIT_FOR_KNATIVE )); then
    wait_until_pods_running knative-serving || fail_test "Knative Serving not up"
  fi
}

function parse_flags() {
  if [[ "$1" == "--no-knative-wait" ]]; then
    WAIT_FOR_KNATIVE=0
    return 1
  fi
  return 0
}

WAIT_FOR_KNATIVE=1

initialize $@

# TODO: use go_test_e2e to run the tests.
kubectl get pods || fail_test

success

Using the performance-tests.sh helper script

This is a helper script for Knative performance test scripts. In combination with specific Prow jobs, it can automatically manage the environment for running benchmarking jobs for each repo. To use it:

  1. Source the script.

  2. [optional] Customize GCP project settings for the benchmarks. Set the following environment variables if the default value doesn't fit your needs:

    • PROJECT_NAME: GCP project name for keeping the clusters that run the benchmarks. Defaults to knative-performance.
    • SERVICE_ACCOUNT_NAME: Service account name for controlling GKE clusters and interacting with the Mako server. It MUST have the Kubernetes Engine Admin and Storage Admin roles, and be whitelisted by the Mako admin. Defaults to mako-job.
  3. [optional] Customize the root path of the benchmarks. This root folder should contain all, and only, the benchmarks you want to run continuously. Set the following environment variable if the default value doesn't fit your needs:

    • BENCHMARK_ROOT_PATH: Benchmark root path, defaults to test/performance/benchmarks. Each repo can decide which folder to put its benchmarks in, and override this environment variable to be the path of that folder.
  4. [optional] Write the update_knative function, which will update your system under test (e.g. Knative Serving).

  5. [optional] Write the update_benchmark function, which will update the underlying resources for the benchmark (usually Knative resources and Kubernetes cronjobs for benchmarking). This function accepts a parameter, which is the benchmark name in the current repo.

  6. Call the main() function with all parameters (e.g. $@).
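Overriding the defaults from steps 2 and 3 is just a matter of exporting the variables before sourcing the helper. A minimal sketch, with illustrative values (the source line is commented out so the snippet stands alone):

```shell
# Illustrative values; replace with your own project and benchmark folder.
export PROJECT_NAME="my-perf-project"
export SERVICE_ACCOUNT_NAME="my-mako-job"
export BENCHMARK_ROOT_PATH="test/performance/my-benchmarks"

# source vendor/github.com/chizhg/test-infra/scripts/performance-tests.sh
echo "benchmarks live under ${BENCHMARK_ROOT_PATH}"
```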

Sample performance test script

This script will update Knative serving and the given benchmark.

source vendor/github.com/chizhg/test-infra/scripts/performance-tests.sh

function update_knative() {
  echo ">> Updating serving"
  ko apply -f config/ || abort "failed to apply serving"
}

function update_benchmark() {
  echo ">> Updating benchmark $1"
  ko apply -f "${BENCHMARK_ROOT_PATH}/$1" || abort "failed to apply benchmark $1"
}

main $@

Using the release.sh helper script

This is a helper script for Knative release scripts. To use it:

  1. Source the script.

  2. [optional] By default, the release script will run ./test/presubmit-tests.sh as the release validation tests. If you need to run something else, set the environment variable VALIDATION_TESTS to the executable to run.

  3. Write logic for building the release in a function named build_release(). Set the environment variable ARTIFACTS_TO_PUBLISH to the list of files created, space separated. Use the following boolean (0 is false, 1 is true) and string environment variables for the logic:

    • RELEASE_VERSION: contains the release version if --version was passed. This also overrides the value of the TAG variable as v<version>.
    • RELEASE_BRANCH: contains the release branch if --branch was passed. Otherwise it's empty and master HEAD will be considered the release branch.
    • RELEASE_NOTES: contains the filename with the release notes if --release-notes was passed. The release notes is a simple markdown file.
    • RELEASE_GCS_BUCKET: contains the GCS bucket name to store the manifests if --release-gcs was passed, otherwise the default value knative-nightly/<repo> will be used. It is empty if --publish was not passed.
    • RELEASE_DIR: contains the directory to store the manifests if --release-dir was passed. Defaults to empty, but if --nopublish was passed it points to the repository root directory.
    • BUILD_COMMIT_HASH: the commit short hash for the current repo. If the current git tree is dirty, it will have -dirty appended to it.
    • BUILD_YYYYMMDD: current UTC date in YYYYMMDD format.
    • BUILD_TIMESTAMP: human-readable UTC timestamp in YYYY-MM-DD HH:MM:SS format.
    • BUILD_TAG: a tag in the form v$BUILD_YYYYMMDD-$BUILD_COMMIT_HASH.
    • KO_DOCKER_REPO: contains the GCR repository to store the images if --release-gcr was passed, otherwise the default value gcr.io/knative-nightly will be used. It is set to ko.local if --publish was not passed.
    • SKIP_TESTS: true if --skip-tests was passed. This is handled automatically.
    • TAG_RELEASE: true if --tag-release was passed. In this case, the environment variable TAG will contain the release tag in the form $BUILD_TAG.
    • PUBLISH_RELEASE: true if --publish was passed. In this case, the environment variable KO_FLAGS will be updated with the -L option and TAG will contain the release tag in the form v$RELEASE_VERSION.
    • PUBLISH_TO_GITHUB: true if --version, --branch and --publish-release were passed.

    All boolean environment variables default to false for safety.

    All environment variables above, except KO_FLAGS, are marked read-only once main() is called (see below).

  4. Call the main() function passing "$@" (with quotes).
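A build_release() that reacts to the tagging variables from step 3 might look like this sketch; the ko invocation is echoed rather than executed so the snippet stands alone, and the variable values are stubs standing in for what release.sh would set after parsing flags:

```shell
# Stub values standing in for what release.sh sets after parsing flags.
TAG_RELEASE=1
TAG="v20200602-abcdef1"
KO_FLAGS=""

function build_release() {
  local flags="${KO_FLAGS}"
  if (( TAG_RELEASE )); then
    flags="${flags} -t ${TAG}"   # assumption: tagging images via ko's -t flag
  fi
  # Echoed instead of executed; a real script would run ko here.
  echo "would run: ko resolve${flags} -f config/ > release.yaml"
  ARTIFACTS_TO_PUBLISH="release.yaml"
}

build_release
```

Setting ARTIFACTS_TO_PUBLISH inside build_release is the contract: the helper reads it after the function returns to know which files to publish.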

Sample release script
source vendor/github.com/chizhg/test-infra/scripts/release.sh

function build_release() {
  # config/ contains the manifests
  ko resolve ${KO_FLAGS} -f config/ > release.yaml
  ARTIFACTS_TO_PUBLISH="release.yaml"
}

main "$@"
