test161

package module
v0.0.0-...-ebe072c Latest
Published: Jan 4, 2024 License: MIT Imports: 38 Imported by: 0

README

= `test161`: A Testing Tool for OS/161

`test161` is a command line tool for automated testing of
http://os161.eecs.harvard.edu[OS/161] instructional operating system kernels
that run inside the `sys161` (System/161)
https://en.wikipedia.org/wiki/R3000[MIPS R3000] simulator. You are probably
not interested in `test161` unless you are a student taking or an instructor
teaching a course that uses OS/161.

== `test161`: Library, Client, and Server

The `test161` source tree consists of a library along with client and server
utilities, which are in large part wrappers around the library. <<Configuration>>
and <<Usage>> examples below that mention `test161` commands are referring to
the `test161` utility, which is what students generally interact with. There is
also a <<server,test161-server>> utility that students indirectly interact with
when submitting assignments. Much of the documentation simply says
`test161`, referring to the system as a whole.

== Installation and Environment

`test161` is written in https://golang.org/[Go], and the instructions below
assume you are installing from source and/or setting up your development
environment to work on `test161`. Alternatively, the current stable binary
version of `test161` is included in the https://www.ops-class.org/asst/toolchain/#ppa[ops-class.org PPA].

=== Installing Go

Many Linux distributions package fairly out-of-date versions of Go. Instead, we
encourage you to install the https://github.com/moovweb/gvm[Go Version Manager (GVM)]:

[source,bash]
----
sudo apt-get install -y curl bison # Install requirements
bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
source $HOME/.gvm/scripts/gvm
----

At this point you are ready to start using GVM. We are currently building and
testing `test161` with Go version `go1.5.3`. However, because the Go compiler
is now written in Go, installing versions of Go past 1.5 requires installing
Go version 1.4 first.

[source,bash]
----
gvm install go1.4
gvm use go1.4
gvm install go1.5.3
gvm use go1.5.3 --default

==== `GOPATH`

Note that `gvm` will set your `GOPATH` and `PATH` variables properly to allow
you to run Go binaries that you install. However, if you are interested in
writing Go code you should set a more accessible `GOPATH`, as
https://golang.org/doc/code.html#GOPATH[described here].

=== Installing `test161`

Once you have Go installed, upgrading or installing `test161` is simple:

[source,bash]
----
go get -u github.com/jay1999ke/test161/test161
----

=== Configuration

Out of the box, `test161` can't do much without a set of test scripts and an
OS/161 `root` directory which contains your kernel and user binaries. If you
are starting from the https://github.com/ops-class/os161[ops-class OS/161 sources],
as soon as you configure, compile, and install your userland binaries and kernel,
`test161` will work in either your root or source trees. If you are starting from
other OS/161 sources, see <<Custom Configuration>>.

==== Custom Configuration

The ops-class sources create symlinks footnote:[`.root` is in your source
directory and points to your root directory, `.src` is in your root directory
and points to your source directory] between your OS/161 source and root
directories in order to infer your environment, which may not always be what
you want. To support partial environments where either source or root cannot be
inferred from the other, or you want to use a specific set of `test161`
configuration files, you can use `test161 config` to set the test161 directory
footnote:[The directory containing the tests, targets, and commands subdirectories.
For the ops-class sources, the test161 directory is named `test161` and is a
subdirectory of the OS/161 source directory.].

[source,bash]
----
test161 config test161dir <path>
----

This allows you to run `test161` tests from your root directory using the test
files in `test161dir`. If you do not have symlinks created for environment
inference, submitting will need to be done from your source directory.

===== Environment Variables

In addition to configuring the `test161` directory, `test161` supports
environment variables that may be useful during development, testing, and
other advanced use cases.

* *`TEST161_SERVER`*: This variable allows you to set a custom back-end server,
for example:

[source,bash]
----
export TEST161_SERVER=http://localhost:4000
----

* *`TEST161_OVERLAY`*: The `test161` server uses an overlay directory
containing trusted files for each assignment. As a security measure, these
trusted files -- make files, user and kernel test code -- replace students'
versions when testing their submissions (see <<Security>>). Setting this
environment variable lets you test overlay changes using the `test161` client,
without submitting to a `test161` server.

==== Submission Configuration

In order to submit to the https://test161.ops-class.org[test161 server], you
need to configure your username/token footnote:[The task of creating tokens
belongs to the front-end, which students need to log in for.], which can be done with:

[source,bash]
----
test161 config add-user <username> <token>
----

Removing users and changing configured tokens can be done with:

[source,bash]
----
test161 config del-user <username>              # Delete user information
test161 config change-token <username> <token>  # Change token
----

==== Printing Configuration

To view the current test161 configuration, simply run:

[source,bash]
----
test161 config
----

== Usage

`test161` is designed around two main tasks: running tests and submitting
targets. Additionally, sub-commands exist for configuring test161 and
listing existing tests. Running `test161` with no arguments will print usage
information, and for a more detailed description of `test161` sub-commands,
use `test161 help`.

=== Tests, Targets, and Tags

The `test161` sub-commands often take one or more tests, targets, and tags as
arguments. A _test_ consists of one or more OS/161 commands along with
metadata, `sys161` configuration, and possibly some additional `test161` runtime
configuration. Tests can optionally include _tags_, which allow related tests
to be grouped and run together. _Targets_ consist of a set of tests that are run
together, and allow point values to be assigned to each test.

==== Listing Tests, Targets, and Tags

Available tests, targets, and tags can easily be listed with the `test161 list`
sub-command:

[source,bash]
----
test161 list tests        # List all tests with descriptions
test161 list tags         # List all tags and which tests share each tag
test161 list targets      # List all targets
test161 list targets -r   # List all targets available for submission on the server
----

=== Running Tests

To run a single test, group of tests, or single target, use the `test161 run
<names>` sub-command. Here, `<names>` can be a single target, one or more tests,
or one or more tags.footnote:[In the case that tag and target names conflict,
specify `-tag` if you mean tag.] For test files, `<names>` is a list of
http://www.linuxjournal.com/content/globstar-new-bash-globbing-option[globstar]
style file names, with paths specified relative to the root of the test
directory.  The following are all valid commands:

[source,bash]
----
test161 run synch/*.t         # Run all tests in tests/synch/
test161 run **/l*.t           # Run all tests in all sub-directories beginning with 'l'
test161 run synchprobs/sp*.t  # Run the synchprobs
test161 run synch/lt1.t       # Run lock test 1
test161 run locks             # Run all lock tests (tests tagged with 'locks')
test161 run asst1             # Run the asst1 target
----

==== Test Concurrency

By default, `test161` runs all tests in parallel to speed up processing. As a
result, the output produced by each test will be interleaved, which can be
difficult to debug. It is possible to run tests sequentially using the
`-sequential (-s)` flag.

==== Test Dependencies

Each test specifies a list of dependencies, tests that must pass in order for
that test to run. For example, our condition variable tests depend on our lock
tests since locks must work for CVs to work. Internally, `test161` creates a
dependency graph for all the tests it needs to run and will short-circuit any
children in the dependency graph in case of failure. By default, all
dependencies are run when running any group of tests. For targets, this is
unavoidable. For other groups of tests, this behavior can be suppressed with
the `-no-dependencies (-n)` flag. This can save a lot of time when debugging a
particular test that has a lot of dependencies.
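
To make the short-circuiting concrete, here is a minimal Go sketch of the
idea (not `test161`'s actual implementation; the test names here are
illustrative):

[source,go]
----
package main

import "fmt"

func main() {
	// Each test lists the tests it depends on.
	deps := map[string][]string{
		"boot":     {},
		"locks":    {"boot"},
		"cvs":      {"locks"}, // condition variables depend on locks
		"forktest": {"boot"},
	}

	// Invert the edges: test -> tests that depend on it.
	dependents := map[string][]string{}
	for test, ds := range deps {
		for _, d := range ds {
			dependents[d] = append(dependents[d], test)
		}
	}

	// Suppose the lock tests fail; mark everything that transitively
	// depends on them as skipped.
	skipped := map[string]bool{}
	queue := []string{"locks"}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		for _, child := range dependents[cur] {
			if !skipped[child] {
				skipped[child] = true
				queue = append(queue, child)
			}
		}
	}
	fmt.Println(skipped) // map[cvs:true]
}
----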

==== Command Line Flags

There are several command line flags that can be specified to customize how
`test161` runs tests.

* `-dry-run` (`-r`): Show the tests that would be run, but don't run them.

* `-explain` (`-x`): Show test detail, such as name, descriptions, `sys161`
configuration, commands, and expected output.

* `-sequential` (`-s`): By default the output of all tests is interleaved,
which can be hard to debug. Specify this option to run tests one at a time.

* `-no-dependencies` (`-n`): Run the given tests without also running their
dependencies. 

* `-verbose` (`-v`): There are three levels of output: `loud` (default), `quiet`
(no test output), and `whisper` (only final summary, no per-test status).

=== Submitting

Solutions are submitted with the `test161 submit` sub-command. In the most
common case, you will use the following command from your source or root
directory, where <target> is the target you wish to submit:

[source,bash]
----
test161 submit <target>
----

By default, `test161 submit` will use the commit associated with the tip of
your current Git branch. This behavior can be overridden by specifying a
tree-ish argument after the target argument. For example, all of the following
are valid commands:

[source,bash]
----
test161 submit asst1            # Submit the current branch to the asst1 target
test161 submit asst2 working    # Submit the working branch/tag to the asst2 target
test161 submit asst3 3df3dd59a  # Submit the commit 3df3dd59a to the asst3 target
----

==== Command Line Flags

`test161 submit` has a few useful command line flags:

* `-debug`: Print debug output when submitting, namely all Git commands used to
determine repository details.

* `-verify`: Check for local and remote issues without submitting, i.e. verify
that the submission would be accepted. This option is useful for verifying that
your configuration -- users, tokens, keys, etc. -- is correct.

* `-no-cache`: As an optimization, `test161` caches a cloned copy of your repo
in the same way the server does in order to improve the performance of
subsequent submissions. In some cases, it is useful to override this behavior.

== Requirements

* `sys161` and `disk161` in the path.
* Git version >= 2.3.0.


== Commands, Tests, and Targets

`test161` uses a http://yaml.org[YAML]-based configuration system, with
configuration files located across subdirectories of your `test161`
directory. The anatomy of this configuration directory is as follows:

* *commands/*: *.tc files containing OS/161 command specifications. Each .tc
file usually contains multiple related commands.

* *targets/*: *.tt files containing target definitions, one per target.

* *tests/*: *.t files containing test specifications, one per test. This
directory may contain subdirectories used to organize related tests.

=== Commands

The basic unit in `test161` is a command, such as `lt1` for running Lock Test 1,
or `sp1` to run the whalemating test. Information about these commands, such as
the input they take and the output they are expected to produce, is specified
in the `commands` directory in your test161 root directory. All .tc files in
this directory will be loaded, and each command must only be specified once.

The following is the full syntax for a commands file:

[source,yaml]
----
# Each commands file consists of a collection of templates. An * indicates the
# default option.
templates:
    # Command name/id. For userland tests, include the path and binary name.
  - name: /testbin/forktest

    # An array of input arguments (optional). This should be included if the
    # command needs default arguments.
    input:
      - arg1
      - arg2

    # An array of expected output lines (optional). This should be specified
    # if the output differs from the default <name>: SUCCESS.
    output:
        # The expected output text
      - text: "You should see this message if the test passed!"

        # true* if the output is secured, false if not
        trusted:  true

        # true if <text> references an external command, false* if not
        external: false

    # Whether or not the command panics - yes, no*, or maybe
    panics: no

    # Whether or not the command is expected to time out - yes, no*, maybe
    timesout: no

    # Time (s) after which the test is terminated - 0.0* indicates no timeout
    timeout: 0.0
----

Minimally, any command that is to be evaluated for correctness needs to be
present in exactly one commands (.tc) file with the `name` property specified.
If no output is specified, the default expected output is
`<command name>: SUCCESS`.

==== Examples

In the following example, several commands are specified, all of which expect
the default output.

[source,yaml]
----
templates:
  - name: lt1
  - name: lt2
  - name: sem1
  ...
----

Some commands might be designed to cause a kernel panic.

[source,yaml]
----
templates:
  ...
  - name: lt2
    panics: yes
    output:
      - text: "lt2: Should panic..."
  ...
----

Some OS/161 tests are composed of other tests, in which case the command output
will reference an external command. In the following example, the `triplehuge`
command expects three instances of the `huge` command's output:

[source,yaml]
----
templates:
  ...
  - name: /testbin/triplehuge
    output:
      - {text: /testbin/huge, external: true, trusted: true}
      - {text: /testbin/huge, external: true, trusted: true}
      - {text: /testbin/huge, external: true, trusted: true}
  ...
----

Input and output can use https://golang.org/pkg/text/template/[Go's text templates]
to specify more complex text. The arguments and argument length are available in
the text templates as `.Args` and `.ArgLen`, respectively. Custom functions are
also provided; see the `funcMap` in
https://github.com/jay1999ke/test161/blob/master/commands.go[commands.go]
for details.

The following example illustrates how the `add` test's output can be determined
from random integer inputs:

[source,yaml,options="nowrap"]
----
templates:
  ...
  - name: /testbin/add
    input:
      - "{{randInt 2 1000}}"
      - "{{randInt 2 4000}}"
    output:
      - text: "/testbin/add: {{$x:= index .Args 0 | atoi}}{{$y := index .Args 1 | atoi}}{{add $x $y}}"
...
----
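
To see how such a template might be expanded, here is a small, self-contained
Go sketch using the standard `text/template` package. The `atoi`, `add`, and
`randInt` entries here are illustrative stand-ins for the functions used in
the example above; the real set lives in commands.go.

[source,go]
----
package main

import (
	"fmt"
	"math/rand"
	"strconv"
	"strings"
	"text/template"
)

func main() {
	// Illustrative stand-ins for test161's funcMap entries.
	funcMap := template.FuncMap{
		"atoi":    func(s string) int { n, _ := strconv.Atoi(s); return n },
		"add":     func(x, y int) int { return x + y },
		"randInt": func(min, max int) int { return min + rand.Intn(max-min) },
	}

	// The command's arguments, made available as .Args and .ArgLen.
	args := []string{"40", "2"}
	data := map[string]interface{}{"Args": args, "ArgLen": len(args)}

	const out = "/testbin/add: {{$x := index .Args 0 | atoi}}" +
		"{{$y := index .Args 1 | atoi}}{{add $x $y}}"
	tmpl := template.Must(template.New("output").Funcs(funcMap).Parse(out))

	var sb strings.Builder
	if err := tmpl.Execute(&sb, data); err != nil {
		panic(err)
	}
	fmt.Println(sb.String()) // /testbin/add: 42
}
----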

=== Tests

Test files (`*.t`) are located in the `tests/` directory in your test161 root
directory. This directory can contain subdirectories to help organize tests.
Each test consists of one or more commands, and each test can have its own
`sys161` configuration.  Tests are run in their own sandboxed environment, 
but commands within the test are executed within the same `sys161` session.

The following is an example of a `test161` test file:

[source,yaml]
----
---
name: "Semaphore Test"
description:
  Tests core semaphore logic through cyclic signaling.
tags: [synch, semaphores, kleaks]
depends: [boot]
sys161:
  cpus: 32
---
sem1
----

==== Front Matter

The test consists of two parts. The header between the first and second
`---` is http://yaml.org[YAML] front matter that provides test metadata and
configuration. The following describes the syntax and semantics of the test
metadata:

[source, yaml] 
----
---
name: "Test Name"            # The name this is displayed in test161 commands
description: "Description"   # Longer test description, used in test161 list tests
tags: [tag1, tag2]           # All tests with the same tag can be run with test161 run <tag>
depends: [dep1, dep2]        # Specify dependencies. If these fail, the test is skipped
...
---
----

===== Configuration Options

In addition to metadata, the test file syntax supports various configuration
options for both `test161` and the underlying `sys161` instance. The following
provides both the syntax and semantics, as well as the default values for all
configuration options.

[source, yaml]
----
# sys161 configuration
conf:
  # 1-32 are valid
  cpus: 8

  # Number of bytes of memory, with optional K or M prefix
  ram: 1M

  # Random number generated at runtime. This can be overridden by specifying an
  # unsigned 32 bit integer to use as the random seed.
  random: seed=random

  # Disabled by default, but should be enabled when you want a swap disk
  disk1:
    enabled: false
    rpm: 7200
    bytes: 32M
    nodoom: true

  # Disabled by default, but uses these defaults if configured
  disk2: 
    enabled: false
    rpm: 7200
    bytes: 32M
    nodoom: false

# stat161 configuration. The window specifies the number of stat objects we
# keep around, while the resolution specifies the interval (s) at which we
# request stats from stat161.
stat:
  resolution: 0.01
  window: 1

# Monitor configuration
monitor:
  enabled: true

  # Number of samples to include in kernel and user calculations
  window: 400

  # Monitor configuration for tracking kernel cycles. The ratio of kernel
  # cycles to total cycles must be >= min (if enabled) and <= max.
  kernel:
    enablemin: false
    min: 0.001
    max: 1.0

  # Monitor configuration for tracking user cycles. The ratio of user cycles
  # to total cycles must be >= min (if enabled) and <= max.
  user:
    enablemin: false
    min: 0.0001
    max: 1.0

  # Sim time (s) without character output before the test is stopped
  progresstimeout: 10.0

  # Sim time (s) a command is allowed to execute before it is stopped
  commandtimeout: 60.0

# Miscellaneous configuration
misc:
  # The next three configuration parameters deal with sys161 occasionally
  # dropping input characters.

  # Time (ms) to wait for a character to appear in the output stream after
  # sending it.
  charactertimeout: 1000

  # Whether or not to retry sending characters if the character timeout is
  # exceeded.
  retrycharacters: true

  # Number of times a command is retried if the number of character retries is
  # exceeded.
  commandretries: 5

  # Time (s) before halting a test if the current prompt is not seen in the
  # output stream.
  prompttimeout: 1800.0   

  # If true, send the kill signal to sys161. This should not generally be
  # needed.
  killonexit: false
----

===== Command Override

In addition to the configuration options, command behavior can be overridden
in the YAML front matter. A partial <<Commands, command template>> can be
specified using the `commandoverrides` property, which will be merged with
the command definition found in the commands files.

For example, the following changes the command timeout for a particular test:

....
...
commandoverrides:
  - name: /testbin/forkbomb
    timeout: 20.0
...
....

Note that the name must be specified in order to distinguish between commands
in the test.

==== Test Commands

The second part of the test file is a listing of the commands that make up the
test. In the example at the top of this section, the test file specifies that a
single command should be run, namely `sem1`. It is important to note that *the
command name provided here must match what is specified in the commands files*.

===== Test File Syntactic Sugar

A line starting with `$` will be run in the shell, starting the shell first if
needed. Lines not starting with `$` are run from the kernel prompt, exiting the
shell first if necessary. `test161` also exits the shell and quits the kernel
as needed, so tests do not need to shut down `sys161` manually.

So this test:
....
$ /bin/true
....

Expands to:
....
s
/bin/true
exit
q
....

*Note that commands run in the shell _must_ be prefixed with `$`.* Otherwise
`test161` will consider them kernel commands and exit the shell before
running them. For example:

This test is probably not what you want:
....
s
/bin/true
....

Because it will expand to:
....
s
exit
/bin/true # not a kernel command
....

But this is so much simpler, right?
....
$ /bin/true
....

`test161` also supports syntactic sugar for <<leaks, memory leak detection>>.
....
| p /testbin/forktest
....

expands to:
....
khu
p /testbin/forktest
khu
....
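
The expansion rules above can be summarized in a rough Go sketch (this is not
`test161`'s actual implementation; see the `SHELL_COMMAND_CONF` and
`KERNEL_COMMAND_CONF` values in the package documentation below for the real
prefixes and prompts):

[source,go]
----
package main

import (
	"fmt"
	"strings"
)

// expand applies the sugar rules: enter the shell ("s") before shell
// commands, leave it ("exit") before kernel commands, wrap "|"-prefixed
// commands with khu, and quit the kernel ("q") at the end.
func expand(lines []string) []string {
	var out []string
	inShell := false
	enterKernel := func() {
		if inShell {
			out = append(out, "exit")
			inShell = false
		}
	}
	for _, line := range lines {
		switch {
		case strings.HasPrefix(line, "$ "):
			if !inShell {
				out = append(out, "s")
				inShell = true
			}
			out = append(out, strings.TrimPrefix(line, "$ "))
		case strings.HasPrefix(line, "| "):
			enterKernel()
			out = append(out, "khu", strings.TrimPrefix(line, "| "), "khu")
		default:
			enterKernel()
			out = append(out, line)
		}
	}
	enterKernel()
	return append(out, "q")
}

func main() {
	fmt.Println(expand([]string{"$ /bin/true"})) // [s /bin/true exit q]
}
----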

=== Targets

Target files (`*.tt`) are located in the `targets/` directory in your test161 root
directory. Targets specify which tests are run for each assignment, and
how the scoring is distributed. When you `test161 submit` your assignments, you will
specify which target to submit to.

The following example provides the full syntax of a target file:

[source, yaml]
----
# The name must be unique across all targets
name: example_target

# The print name, description, and leaderboard are used by the test161 front-end website
print_name: EXAMPLE
description: An example to illustrate target syntax and semantics
leaderboard: true

# Only active targets can be submitted to
active: true

# The test161 server uses the target version internally. The version number
# must be incremented when any test or points details change.
version: 1

# Target types can be asst or perf (assignment and performance).
type: asst

# The total number of points for the target. The sum of the individual test
# points must equal this number.
points: 10

# The associated kernel configuration file. test161 uses this value to
# configure and compile your kernel.
kconfig: ASST1

# true if the userland binaries should be compiled, false (default) if not.
userland: false

# Specify a commit hash id that must be included in the Git history of the
# submitted repo.
required_commit:

# The list of tests that are to be run and evaluated as part of this target.
tests:
    # ID is the path relative to the tests directory
  - id: path/to/test.t
    # entire or partial. With the "entire" (default) method, all commands in
    # the test must pass to earn any points. With partial, each command in the
    # test can earn points.
    scoring: entire

    # The points for this test
    points: 10

    # The number of points to deduct if a memory leak was detected.
    mem_leak_points: 2  # default is 0

    # A list of commands whose behavior needs to be individually specified.
    # This is only necessary when argument overrides need to be provided, or
    # when partial command credit is given.
    commands:
      - id: sem1
        # Particular instance of the command in the test. Only useful if the
        # command is listed multiple times in the test.
        index: 0
        points: 0
        # Override default command arguments.
        args: [arg1, arg2,...]
----

== [[server]]test161-server

`test161-server` is a command line utility that implements the `test161`
back-end functionality. Its main responsibilities include accepting submission
requests, evaluating these requests, and persisting test output and results.

Our `test161-server` uses https://www.mongodb.com/[mongoDB] as its storage
layer, which is also how it communicates with the
https://github.com/jay1999ke/test161-web-ui[front-end]. The interface of the
back-end utility is less mature than that of the `test161` command line
utility, mostly due to its smaller audience.

=== `test161-server` Configuration

The `test161-server` configuration is stored in a YAML configuration file,
`~/.test161-server.conf`. The following example shows the syntax of this
configuration file:

[source,yaml]
----
overlaydir: /path/to/overlay/directory
test161dir: /path/to/test161/root/directory
cachedir: /path/to/student/repo/cache
keydir: /path/to/student/deploy/keys

# The maximum concurrency for executing test161 tests. This can also be changed
# dynamically from the command line with test161-server set-capacity N.
max_tests: 20

# The mongoDB database name
dbname: "test161"

# The mongoDB database server and port
dbserver: "localhost:27017"

# Database credentials
dbuser: user
dbpw: password

# The port that test161-server is configured to listen on for API requests
api_port: 4000

# The minimum test161 client version that the server will accept submissions from
min_client:
  major: 1
  minor: 2
  revision: 5
----

==== Key Directory

As part of its API, `test161-server` can generate public/private RSA keypairs.
The front-end issues these requests from a student's settings page. The keypairs
are stored in the `keydir` specified in `test161-server.conf`. The student is
required to add the public key to their private Git repository as a deploy key
so `test161` can clone their OS/161 sources.

==== Cache Directory

`test161-server` caches students' source code so that it can fetch updates
rather than re-clone on subsequent submissions.

=== `test161-server` Usage

`test161-server` should be launched as a daemon during boot, but occasionally
you may need to communicate with the running instance.

[source,bash]
----
test161-server status          # Get the status of the running instance
test161-server pause           # Stop the server from accepting new submissions, but
                               # finish processing pending submissions
test161-server resume          # Resume accepting submissions
test161-server set-capacity N  # Set the max number of concurrent tests
test161-server get-capacity    # Get the max number of concurrent tests
----

== Features

=== Progress Tracking Using `stat161` Output

`test161` uses the collected `stat161` output produced by the running kernel to
detect deadlocks, livelocks, and other forms of stalls. We do this using
several different strategies:

. *Progress and prompt timeouts.* Test files can configure both progress
(`monitorconf.timeouts.progress`) and prompt (`monitorconf.timeouts.prompt`)
timeouts. The former is used to kill the test if no output has appeared, while
the latter is passed to `expect` and used to kill the test if the prompt is
delayed. Ideally, OS/161 tests should produce some output while they run to
keep the progress timeout from firing, but the other progress tracking
strategies described below should also help.
. *User and kernel maximum and minimum cycles.* `test161` maintains a buffer
of statistics over a configurable number of `stat161` intervals. Limits on the
minimum and maximum number of kernel and user cycles (expressed as fractions)
over this buffer can help detect deadlocks (minimum) and livelocks (maximum).
User limits are only applied when running in userspace.
. Note that `test161` also checks to ensure that no user cycles are generated
while we are running in kernel mode, which could indicate a hung process. A
sketch of the cycle-fraction checks appears below.
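
A simplified Go sketch of the cycle-fraction heuristic (not `test161`'s
actual monitor; the sample values are made up):

[source,go]
----
package main

import "fmt"

// checkKernelCycles sums kernel and total cycles over a window of
// stat161 samples and flags suspicious fractions.
func checkKernelCycles(kernel, total []uint64, min, max float64) error {
	var k, t uint64
	for i := range total {
		k += kernel[i]
		t += total[i]
	}
	if t == 0 {
		return nil
	}
	frac := float64(k) / float64(t)
	switch {
	case frac < min:
		return fmt.Errorf("possible deadlock: kernel fraction %.5f < %.5f", frac, min)
	case frac > max:
		return fmt.Errorf("possible livelock: kernel fraction %.5f > %.5f", frac, max)
	}
	return nil
}

func main() {
	kernel := []uint64{10, 12, 9, 11}
	total := []uint64{10000, 11000, 9000, 10500}
	fmt.Println(checkKernelCycles(kernel, total, 0.001, 1.0))
}
----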

=== [[leaks]] Memory Leak Detection

In addition to checking for test correctness, `test161` can also check for
memory leaks. To implement this feature, a few changes were made to our
https://github.com/ops-class/os161[ops-class OS/161 sources], which means this
feature will be unavailable, or source modifications will be required, if you
are starting from other OS/161 sources.

A new command, `khu`, has been added to our OS/161 kernel menu. When run, this
command prints the number of bytes of memory currently allocated by both
the `kmalloc` subpage allocator and the VM subsystem. `test161` parses this
output and calculates the difference between successive invocations to
determine memory leaks. `test161` <<Targets, targets>> can optionally deduct
points for memory leaks.
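
A minimal sketch of the leak computation, assuming each `khu` invocation can
be reduced to a single byte count (`test161`'s actual parsing is more
involved):

[source,go]
----
package main

import "fmt"

// leakedBytes compares the allocated-bytes figure reported by khu
// before and after a command runs.
func leakedBytes(before, after uint64) uint64 {
	if after > before {
		return after - before
	}
	return 0
}

func main() {
	fmt.Println(leakedBytes(81920, 86016)) // 4096 bytes leaked
}
----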

=== Correctness vs. Grading

The concepts of _correctness_ and _grading_ are purposely separated in
`test161`. Correctness is first established at the command granularity -- each
command has specific criteria that defines correct execution. Since tests are
composed of commands, it follows that test correctness can be determined from
command correctness. Grading, however, is handled independently by targets.
The _partial_ grading method allows for points to be awarded when only some of
the commands in a single test are correct. In the _entire_ scoring method,
points are only awarded if all of the commands in the test are correct.

=== Partial Credit

`test161` allows for partial credit at the command level. This is different
from the partial scoring method for tests. With partial credit, a command can
earn a fraction of the points it is assigned in the target. This is implemented
by looking for the (secured) string, `PARTIAL CREDIT X OF N`. If X == N, full
credit is awarded and the test is marked correct. Otherwise, a fraction of the
points (X/N) are awarded and the test is marked as incorrect.
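
A short Go sketch of this scoring rule (not `test161`'s actual
implementation):

[source,go]
----
package main

import "fmt"

// award parses the secured partial credit line and returns the points
// earned plus whether the command counts as fully correct.
func award(line string, points uint) (uint, bool) {
	var x, n uint
	if _, err := fmt.Sscanf(line, "PARTIAL CREDIT %d OF %d", &x, &n); err != nil || n == 0 {
		return 0, false
	}
	if x >= n {
		return points, true // full credit, command marked correct
	}
	return points * x / n, false // fractional points, marked incorrect
}

func main() {
	fmt.Println(award("PARTIAL CREDIT 3 OF 4", 10)) // 7 false
}
----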

=== Security

Given that students are modifying the operating system itself, the attack
surface for gaming the system is quite large. Since we have modified
user test programs to output `<name>: SUCCESS` when they succeed, it would
be particularly easy for students to fake this output if security measures
were not put in place. Therefore, security has been built into `test161` to
create a _secure testing environment_. This was accomplished through both
`test161` features and additions to our
https://github.com/ops-class/os161[ops-class OS/161 sources]. In particular,
we ensure that our trusted tests are running and that, to a very high degree,
we can trust the output.

==== `libtest161` and `secprintf`

Our OS/161 sources add a `test161` library, `libtest161`, with the important
function `secprintf`:

[source,c]
----
int secprintf(const char * secret, const char * msg, const char * name);
----

When `SECRET_TESTING` is disabled, which it normally is, this function simply
outputs the message, such as `<name>: SUCCESS`. Even though the `test161` command
is expecting trusted output, it knows that `SECRET_TESTING` has been disabled
and will ignore this requirement. This allows students to test their code using
`test161` in an unsecured environment.

When `SECRET_TESTING` is enabled, this function _secures_ the output string by
computing the SHA256 HMAC of the message using the provided _secret key_ and a
random salt value. It outputs a 4-tuple of (name, hash, salt, name: message).
`test161` uses this information to verify the authenticity of the message.
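
For illustration, the following Go sketch shows what the verification side of
such a scheme could look like. The exact tuple encoding and the salt/message
layout below are assumptions for the sketch, not `test161`'s actual wire
format:

[source,go]
----
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verify recomputes the HMAC-SHA256 of the salted message with the
// per-command secret and compares it to the received hash.
func verify(secret, salt, msg, hexHash string) bool {
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(salt + msg)) // assumed salt/message layout
	expected, err := hex.DecodeString(hexHash)
	if err != nil {
		return false
	}
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	// Produce a tag the way a test program might, then verify it.
	mac := hmac.New(sha256.New, []byte("per-command-key"))
	mac.Write([]byte("salt123" + "forktest: SUCCESS"))
	tag := hex.EncodeToString(mac.Sum(nil))
	fmt.Println(verify("per-command-key", "salt123", "forktest: SUCCESS", tag)) // true
}
----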

==== Source Overlay

`test161` allows the specification of an _overlay directory_ that contains
trusted source files. These trusted files, such as make files and anything that
prints `SUCCESS`, overwrite students' untrusted source files on the `test161`
server during compilation. During testing, an overlay can be specified using
the `TEST161_OVERLAY` environment variable (see <<Environment Variables>>).

==== Key Injection

When an <<Source Overlay,overlay>> is specified, the process of _key injection_
is triggered. In our OS/161 source code, a placeholder (`SECRET`) for the secret
key was added anywhere a key was required for the `secprintf` function. Key
injection replaces instances of `SECRET` in the source code with a randomly
generated key, _one per command_. During compile time, a map of command to key
is created so `test161` can authenticate messages output by test code.
Importantly, this process repeats itself each time a student submits to a
target, which means no information from previous submissions can be replayed.
Additionally, unique salt values are required during testing, preventing replay
attacks using previously seen command output, as in the case of `triplehuge`.

=== Multiple Output Strategies

`test161` supports different output strategies through its PersistenceManager
interface. Each TestEnvironment has a PersistenceManager, which receives
callbacks when events happen, such as when scores change, statuses change, or
output lines are added. This allows multiple implementations to handle output
as they wish. The `test161` client utility implements the interface through
its ConsolePersistence type, which writes all output to stdout. The server uses
a MongoPersistence type, which writes JSON data to our mongo back-end server.
This feature allows `test161-server` forks to easily use whatever back-end
storage system they desire.

== TODOs

=== Nits

* sys161 version checks

* Check for repository problems:
** Check and fail if it has inappropriate files (`.o`), or is too large. 
(Prevent back-end storage DOS attacks.)

* Use URL associated with the tree-ish id provided to `test161 submit`

* Fix directory bash completion for test161 config test161dir. It's
unfortunately adding a space instead of /.

* Populate man pages

=== Performance Tracking

Most of the infrastructure is in place to handle performance targets, but we
still need to finish this and test it. Specifically, we need to set the
performance number in the Test object and use it properly in the Submission.

=== Parallel Testing Output

It would be cool to be able to print serial output from one test while queuing
the output from other tests. Maybe using curses to maintain a line at the end
of the screen showing the tests that are being run.

=== Output Frequency

For long running tests, OS/161 tests generate periodic output, usually in the
form of a string of '.' characters. This output is used as a keep-alive
mechanism, resetting test161's progress timeout. Because this output is in a
single line, and it would create more unnecessary DB output and server load to
break these into multiple lines, it would be nice to refactor things in such a
way that the current output line is periodically persisted. This would give
students a better indication of progress, as opposed to tests looking "stuck".

=== Server Nits

* Environment inference with environment variable overrides, similar to the test161 client
** `test161-server config` to both show the configuration and modify it
* Log the configuration on startup
* Usability cleanup
** Usage
** Help
** Bash completion
* Moving window for stats API
* Periodically persist server stats, either in mongoDB or through the logger. We currently lose these on restart.
* Move collaboration messages into their own files instead of hard-coding

Documentation

Overview

Package test161 implements a library for testing OS/161 kernels. We use expect to drive the sys161 system simulator and collect useful output using the stat socket.

Index

Constants

const (
	CMD_OPT_NO    = "no"
	CMD_OPT_MAYBE = "maybe"
	CMD_OPT_YES   = "yes"
)

Command options for panics and timesout

const (
	SM_ACCEPTING = iota
	SM_NOT_ACCEPTING
	SM_STAFF_ONLY
)
const (
	COLLECTION_SUBMISSIONS = "submissions"
	COLLECTION_TESTS       = "tests"
	COLLECTION_STUDENTS    = "students"
	COLLECTION_TARGETS     = "targets"
	COLLECTION_USERS       = "users"
	COLLECTION_USAGE       = "usage"
)
const (
	MSG_PERSIST_CREATE   = iota // The object has been created
	MSG_PERSIST_UPDATE          // Generic update message.
	MSG_PERSIST_OUTPUT          // Added an output line (command types only)
	MSG_PERSIST_COMPLETE        // We won't update the object any more
	MSG_TARGET_LOAD             // When a target is loaded
)
const (
	MSG_FIELD_SCORE = 1 << iota
	MSG_FIELD_STATUS
	MSG_FIELD_TESTS
	MSG_FIELD_OUTPUT
	MSG_FIELD_STATUSES
)

Individual field updates

const (
	PERSIST_TYPE_STUDENTS = 1 << iota
	PERSIST_TYPE_USERS
)
const (
	UpdateReasonOutput = iota
	UpdateReasonScore
	UpdateReasonCommandDone
)
const (
	COMMAND_STATUS_NONE      = "none"      // The command has not yet run
	COMMAND_STATUS_RUNNING   = "running"   // The command is running
	COMMAND_STATUS_CORRECT   = "correct"   // The command produced the expected output and did not crash
	COMMAND_STATUS_INCORRECT = "incorrect" // The command received some partial credit
)

Statuses for commands

const (
	SUBMISSION_SUBMITTED = "submitted" // Submitted and queued
	SUBMISSION_BUILDING  = "building"  // Building the kernel
	SUBMISSION_RUNNING   = "running"   // The tests started running
	SUBMISSION_ABORTED   = "aborted"   // Aborted because one or more tests failed due to an error
	SUBMISSION_COMPLETED = "completed" // Completed
)
const (
	TARGET_ASST = "asst"
	TARGET_PERF = "perf"
)
const (
	TEST_SCORING_ENTIRE  = "entire"
	TEST_SCORING_PARTIAL = "partial"
)
const CUR_USAGE_VERSION = 1
const CollabMsgAsst1 = `` /* 723-byte string literal not displayed */
const CollabMsgAsst2 = `` /* 1149-byte string literal not displayed */
const CollabMsgAsst3 = `` /* 881-byte string literal not displayed */
const DEFAULT_MGR_CAPACITY uint = 0
const KEYBYTES = 32
const MAX_EXPANSION_LOOPS = 1024
const MAX_RETRY_LOOPS = 16
const SYS161_TEMPLATE = `` /* 406-byte string literal not displayed */
const (
	UPLOAD_TYPE_USAGE = iota
)

Variables

var CONF_DEFAULTS = Test{
	Sys161: Sys161Conf{
		Path: "sys161",
		CPUs: 8,
		RAM:  "1M",
		Disk1: DiskConf{
			Enabled: "false",
			Bytes:   "32M",
			RPM:     7200,
			NoDoom:  "true",
		},
		Disk2: DiskConf{
			Enabled: "false",
			Bytes:   "32M",
			RPM:     7200,
			NoDoom:  "false",
		},
	},
	Stat: StatConf{
		Resolution: 0.01,
		Window:     1,
	},
	Monitor: MonitorConf{
		Enabled: "true",
		Window:  400,
		Kernel: Limits{
			EnableMin: "false",
			Min:       0.001,
			Max:       1.0,
		},
		User: Limits{
			EnableMin: "false",
			Min:       0.0001,
			Max:       1.0,
		},
		ProgressTimeout: 10.0,
		CommandTimeout:  60.0,
	},
	Misc: MiscConf{
		CommandRetries:   5,
		PromptTimeout:    1800.0,
		CharacterTimeout: 1000,
		RetryCharacters:  "true",
		KillOnExit:       "false",
	},
}
var KERNEL_COMMAND_CONF = &CommandConf{
	Prompt: `OS/161 kernel [? for menu]: `,
	End:    "q",
}
var SHELL_COMMAND_CONF = &CommandConf{
	Prefix: "$",
	Prompt: `OS/161$ `,
	Start:  "s",
	End:    "exit",
}
var Version = ProgramVersion{
	Major:    1,
	Minor:    3,
	Revision: 2,
}

Functions

func GetDeployKeySSHCmd

func GetDeployKeySSHCmd(users []string, keyDir string) string

func KeyGen

func KeyGen(email, token string, env *TestEnvironment) (string, error)

On success, KeyGen returns the public key of the newly generated public/private key pair

func ManagerCapacity

func ManagerCapacity() uint

func SetManagerCapacity

func SetManagerCapacity(capacity uint)

func StartManager

func StartManager()

Start the shared test manager

func StopManager

func StopManager()

Stop the shared test manager

Types

type BuildCommand

type BuildCommand struct {
	ID string `yaml:"-" json:"id" bson:"_id,omitempty"`

	Type  string    `json:"type"`
	Input InputLine `json:"input"`

	// Set during target init
	PointsAvailable uint `json:"points_avail" bson:"points_avail"`
	PointsEarned    uint `json:"points_earned" bson:"points_earned"`

	// Set during testing
	Output []*OutputLine `json:"output"`

	// Set during evaluation
	Status string `json:"status"`
	// contains filtered or unexported fields
}

A variant of a Test Command for builds

func (*BuildCommand) Run

func (cmd *BuildCommand) Run(env *TestEnvironment) error

Execute an individual BuildTest command

type BuildConf

type BuildConf struct {
	Repo             string   // The git repository to clone
	CommitID         string   // The git commit id (HEAD, hash, etc.) to check out
	KConfig          string   // The os161 kernel config file for the build
	RequiredCommit   string   // A commit required to be in git log
	CacheDir         string   // Cache for previous builds
	RequiresUserland bool     // Does userland need to be built?
	Overlay          string   // The overlay to use (append to overlay dir in env)
	Users            []string // The users who own the repo. Needed for finding the key.
}

BuildConf specifies the configuration for building os161.

func (*BuildConf) ToBuildTest

func (b *BuildConf) ToBuildTest(env *TestEnvironment) (*BuildTest, error)

Use the BuildConf to create a sequence of commands that will build an os161 kernel and userspace binaries (ASST2+).

type BuildResults

type BuildResults struct {
	RootDir string
	TempDir string
}

type BuildTest

type BuildTest struct {

	// Mongo ID
	ID string `yaml:"-" json:"id" bson:"_id,omitempty"`

	// ID of the submission this test belongs to.
	SubmissionID string `yaml:"-" json:"-" bson:"submission_id"`

	// Metadata
	Name        string `yaml:"name" json:"name"`
	Description string `yaml:"description" json:"description"`

	Commands []*BuildCommand `json:"commands"` // Protected by L
	Status   []Status        `json:"status"`   // Protected by L
	Result   TestResult      `json:"result"`   // Protected by L

	// Dependency data
	DependencyID string `json:"depid"`
	IsDependency bool   `json:"isdependency"`

	// Grading.  These are set when the test is being run as part of a Target.
	PointsAvailable uint   `json:"points_avail" bson:"points_avail"`
	PointsEarned    uint   `json:"points_earned" bson:"points_earned"`
	ScoringMethod   string `json:"scoring_method" bson:"scoring_method"`
	// contains filtered or unexported fields
}

BuildTest is a variant of a Test, and specifies how the build process should work. We obey the same schema so the front end tools can treat this like any other test.

func (*BuildTest) OutputJSON

func (t *BuildTest) OutputJSON() (string, error)

func (*BuildTest) RootDir

func (t *BuildTest) RootDir() string

Get the root directory of the build output

func (*BuildTest) Run

func (t *BuildTest) Run(env *TestEnvironment) (*BuildResults, error)

Run builds the OS/161 kernel and userspace binaries

type Command

type Command struct {
	// Mongo ID
	ID string `yaml:"-" json:"id" bson:"_id,omitempty"`

	// Set during init
	Type          string          `json:"type"`
	PromptPattern *regexp.Regexp  `json:"-" bson:"-"`
	Input         InputLine       `json:"input"`
	Config        CommandTemplate `json:"config"`

	// Set during target init
	PointsAvailable uint `json:"points_avail" bson:"points_avail"`
	PointsEarned    uint `json:"points_earned" bson:"points_earned"`

	// Set during run init
	Panic          string  `json:"panic"`
	Timeout        float32 `json:"timeout"`
	TimesOut       string  `json:"timesout"`
	ExpectedOutput []*ExpectedOutputLine

	// Set during testing
	Output       []*OutputLine `json:"output"`
	SummaryStats Stat          `json:"summarystats"`
	AllStats     []Stat        `json:"stats"`

	StartTime TimeFixedPoint `json:"starttime"`
	EndTime   TimeFixedPoint `json:"endtime"`
	TimedOut  bool           `json:"timedout"`

	// Set during evaluation
	Status string `json:"status"`

	// Backwards pointer to the Test. This needs to be public for printing
	Test *Test `json:"-" bson:"-"`
}

func (*Command) Id

func (c *Command) Id() string

func (*Command) Instantiate

func (c *Command) Instantiate(env *TestEnvironment) error

Instantiate the command (input, expected output) using the command template. This must be done prior to executing the command.

type CommandConf

type CommandConf struct {
	Prefix string `yaml:"prefix" json:"prefix"`
	Prompt string `yaml:"prompt" json:"prompt"`
	Start  string `yaml:"start" json:"start"`
	End    string `yaml:"end" json:"end"`
}

type CommandTemplate

type CommandTemplate struct {
	Name     string             `yaml:"name"`
	Output   []*TemplOutputLine `yaml:"output"`
	Input    []string           `yaml:"input"`
	Panic    string             `yaml:"panics"`   // CMD_OPT
	TimesOut string             `yaml:"timesout"` // CMD_OPT
	Timeout  float32            `yaml:"timeout"`  // Timeout in sec. A timeout of 0.0 uses the test default.
}

Template for command instances. These get expanded depending on the command environment.

func (*CommandTemplate) Clone

func (ct *CommandTemplate) Clone() *CommandTemplate

type CommandTemplates

type CommandTemplates struct {
	Templates []*CommandTemplate `yaml:"templates"`
}

CommandTemplate Collection. We just use this for loading and move the references into a map in the global environment.

func CommandTemplatesFromFile

func CommandTemplatesFromFile(file string) (*CommandTemplates, error)

func CommandTemplatesFromString

func CommandTemplatesFromString(text string) (*CommandTemplates, error)
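
A hypothetical usage sketch (the YAML shape follows the commands-file syntax described in the README above; error handling abbreviated):

	yamlText := "templates:\n  - name: lt1\n  - name: sem1"
	tmpls, err := CommandTemplatesFromString(yamlText)
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range tmpls.Templates {
		fmt.Println(t.Name)
	}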

type DependencyRunner

type DependencyRunner struct {
	// contains filtered or unexported fields
}

This runner has mad respect for dependencies.

func (*DependencyRunner) Group

func (r *DependencyRunner) Group() *TestGroup

func (*DependencyRunner) Run

func (r *DependencyRunner) Run() <-chan *Test161JobResult

type DiskConf

type DiskConf struct {
	Enabled string `yaml:"enabled" json:"enabled"`
	RPM     uint   `yaml:"rpm" json:"rpm"`
	Bytes   string `yaml:"bytes" json:"bytes"`
	NoDoom  string `yaml:"nodoom" json:"nodoom"`
}

type DoNothingPersistence

type DoNothingPersistence struct {
}

func (*DoNothingPersistence) CanRetrieve

func (d *DoNothingPersistence) CanRetrieve() bool

func (*DoNothingPersistence) Close

func (d *DoNothingPersistence) Close()

func (*DoNothingPersistence) Notify

func (d *DoNothingPersistence) Notify(entity interface{}, msg, what int) error

func (*DoNothingPersistence) Retrieve

func (d *DoNothingPersistence) Retrieve(what int, who map[string]interface{},
	filter map[string]interface{}, res interface{}) error

type ExpectedOutputLine

type ExpectedOutputLine struct {
	Text    string
	Trusted bool
	KeyName string
}

Command instance expected output line. The difference here is that we store the name of the key that we need to verify the output.

type GroupConfig

type GroupConfig struct {
	Name    string           `json:"name"`
	UseDeps bool             `json:"usedeps"`
	Tests   []string         `json:"tests"`
	Env     *TestEnvironment `json:"-" bson:"-"`
}

GroupConfig specifies how a group of tests should be created and run.

type GroupStat

type GroupStat struct {
	TargetTagName   string      `bson:"target_tag_name" json:"target_tag_name"`
	PointsAvailable uint        `bson:"max_score" json:"max_score"`
	Status          string      `bson:"status" json:"status"`
	Score           uint        `bson:"score" json:"score"`
	Tests           []*TestStat `bson:"tests" json:"tests"`
	Errors          []string    `bson:"errors" json:"errors"`
	SubmissionTime  time.Time   `bson:"submission_time" json:"submission_time"`
	CompletionTime  time.Time   `bson:"completion_time" json:"completion_time"`
}

type InputLine

type InputLine struct {
	WallTime TimeFixedPoint `json:"walltime"`
	SimTime  TimeFixedPoint `json:"simtime"`
	Line     string         `json:"line"`
}

type Limits

type Limits struct {
	EnableMin string  `yaml:"enablemin" json:"enablemin"`
	Min       float64 `yaml:"min" json:"min"`
	Max       float64 `yaml:"max" json:"max"`
}

type ManagerStats

type ManagerStats struct {
	// protected by manager.L
	Running     uint  `json:"running"`
	HighRunning uint  `json:"high_running"`
	Queued      uint  `json:"queued"`
	HighQueued  uint  `json:"high_queued"`
	Finished    uint  `json:"finished"`
	MaxWait     int64 `json:"max_wait_ms"`
	AvgWait     int64 `json:"avg_wait_ms"`
	StartTime   time.Time
	// contains filtered or unexported fields
}

func GetManagerStats

func GetManagerStats() *ManagerStats

Return a copy of the current shared test manager stats

type MiscConf

type MiscConf struct {
	CommandRetries   uint    `yaml:"commandretries" json:"commandretries"`
	PromptTimeout    float32 `yaml:"prompttimeout" json:"prompttimeout"`
	CharacterTimeout uint    `yaml:"charactertimeout" json:"charactertimeout"`
	TempDir          string  `yaml:"tempdir" json:"-" bson:"-"`
	RetryCharacters  string  `yaml:"retrycharacters" json:"retrycharacters"`
	KillOnExit       string  `yaml:"killonexit" json:"killonexit"`
}

type MongoPersistence

type MongoPersistence struct {
	// contains filtered or unexported fields
}

func (*MongoPersistence) CanRetrieve

func (m *MongoPersistence) CanRetrieve() bool

func (*MongoPersistence) Close

func (m *MongoPersistence) Close()

func (*MongoPersistence) Notify

func (m *MongoPersistence) Notify(t interface{}, msg, what int) (err error)

func (*MongoPersistence) Retrieve

func (m *MongoPersistence) Retrieve(what int, who map[string]interface{}, filter map[string]interface{}, res interface{}) error

type MonitorConf

type MonitorConf struct {
	Enabled         string  `yaml:"enabled" json:"enabled"`
	Window          uint    `yaml:"window" json:"window"`
	Kernel          Limits  `yaml:"kernel" json:"kernel"`
	User            Limits  `yaml:"user" json:"user"`
	ProgressTimeout float32 `yaml:"progresstimeout" json:"progresstimeout"`
	CommandTimeout  float32 `yaml:"commandtimeout" json:"commandtimeout"`
}

type OutputLine

type OutputLine struct {
	WallTime TimeFixedPoint `json:"walltime"`
	SimTime  TimeFixedPoint `json:"simtime"`
	Buffer   bytes.Buffer   `json:"-" bson:"-"`
	Line     string         `json:"line"`
	Trusted  bool           `json:"trusted"`
	KeyName  string         `json:"keyname"`
}

type PersistenceManager

type PersistenceManager interface {
	Close()
	Notify(entity interface{}, msg, what int) error
	CanRetrieve() bool

	// what should be PERSIST_TYPE_*
	// who is a map of field:value
	// res is where to deserialize the data
	Retrieve(what int, who map[string]interface{}, filter map[string]interface{}, res interface{}) error
}

Each Submission has at most one PersistenceManager, and it is pinged when a variety of events occur. These callbacks are invoked synchronously, so it's up to the PersistenceManager to not slow down the tests. We do this because the PersistenceManager can create goroutines if applicable, but we can't make an asynchronous call synchronous when it might be needed. So, be kind ye PersistenceManagers.
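
A minimal sketch of a custom implementation, assuming the interface above (the type name here is illustrative, not part of the package):

	type LogPersistence struct{}

	func (p *LogPersistence) Close() {}

	func (p *LogPersistence) Notify(entity interface{}, msg, what int) error {
		// React to events as they arrive, e.g. newly added output lines.
		if msg == MSG_PERSIST_OUTPUT {
			fmt.Printf("output event for %T\n", entity)
		}
		return nil
	}

	func (p *LogPersistence) CanRetrieve() bool { return false }

	func (p *LogPersistence) Retrieve(what int, who map[string]interface{},
		filter map[string]interface{}, res interface{}) error {
		return errors.New("retrieval not supported")
	}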

func NewMongoPersistence

func NewMongoPersistence(dial *mgo.DialInfo) (PersistenceManager, error)

type ProgramVersion

type ProgramVersion struct {
	Major    uint `yaml:"major"`
	Minor    uint `yaml:"minor"`
	Revision uint `yaml:"revision"`
}

func (ProgramVersion) CompareTo

func (this ProgramVersion) CompareTo(other ProgramVersion) int

Returns 1 if this > other, 0 if this == other, and -1 if this < other
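
A hypothetical usage sketch, mirroring the server's min_client check (clientVersion stands in for the version reported by the client):

	min := ProgramVersion{Major: 1, Minor: 2, Revision: 5}
	if clientVersion.CompareTo(min) < 0 {
		// Reject the submission: the client is too old.
	}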

func (ProgramVersion) String

func (v ProgramVersion) String() string

type RequestKeyResonse

type RequestKeyResonse struct {
	User string
	Key  string
}

RequestKeyResonse is the response we send back during validation if the keys aren't up-to-date.

type SimpleRunner

type SimpleRunner struct {
	// contains filtered or unexported fields
}

A simple runner that tries to run everything as fast as it's allowed to, i.e. it doesn't care about dependencies.

func (*SimpleRunner) Group

func (r *SimpleRunner) Group() *TestGroup

func (*SimpleRunner) Run

func (r *SimpleRunner) Run() <-chan *Test161JobResult

type Stat

type Stat struct {
	Start  TimeFixedPoint `json:"start"`
	End    TimeFixedPoint `json:"end"`
	Length TimeFixedPoint `json:"length"`
	Count  uint           `json:"count"`

	WallStart  TimeFixedPoint `json:"wallstart"`
	WallEnd    TimeFixedPoint `json:"wallend"`
	WallLength TimeFixedPoint `json:"walllength"`

	// Read from stat line
	Nsec   uint64 `json:"-" bson:"-"`
	Kinsns uint32 `json:"kinsns"`
	Uinsns uint32 `json:"uinsns"`
	Udud   uint32 `json:"udud"`
	Idle   uint32 `json:"idle"`
	IRQs   uint32 `json:"irqs"`
	Exns   uint32 `json:"exns"`
	Disk   uint32 `json:"disk"`
	Con    uint32 `json:"con"`
	Emu    uint32 `json:"emu"`
	Net    uint32 `json:"net"`

	// Derived
	Insns uint32 `json:"insns"`
	// contains filtered or unexported fields
}

func (*Stat) Add

func (i *Stat) Add(j Stat)

Add adds two stat objects.

func (*Stat) Append

func (i *Stat) Append(j Stat)

Append appends the stat object to an existing stat object.

func (*Stat) Shift

func (i *Stat) Shift(j Stat)

Shift shifts the stat object from an existing stat object.

func (*Stat) Sub

func (i *Stat) Sub(j Stat)

Sub subtracts two stat objects.

type StatConf

type StatConf struct {
	Resolution float32 `yaml:"resolution" json:"resolution"`
	Window     uint    `yaml:"window" json:"window"`
}

type StatsByName

type StatsByName []*TargetStats

Target stats sorting

func (StatsByName) Len

func (a StatsByName) Len() int

func (StatsByName) Less

func (a StatsByName) Less(i, j int) bool

func (StatsByName) Swap

func (a StatsByName) Swap(i, j int)

type Status

type Status struct {
	WallTime TimeFixedPoint `json:"walltime"`
	SimTime  TimeFixedPoint `json:"simtime"`
	Status   string         `json:"status"`
	Message  string         `json:"message"`
}

type Student

type Student struct {
	ID        string `bson:"_id"`
	Email     string `bson:"email"`
	Token     string `bson:"token"`
	PublicKey string `bson:"key"`

	// Stats
	TotalSubmissions uint           `bson:"total_submissions"`
	Stats            []*TargetStats `bson:"target_stats"`
	// contains filtered or unexported fields
}

func (*Student) IsStaff

func (student *Student) IsStaff(env *TestEnvironment) (bool, error)

type Submission

type Submission struct {

	// Configuration
	ID            string   `bson:"_id,omitempty"`
	Users         []string `bson:"users"`
	Repository    string   `bson:"repository"`
	CommitID      string   `bson:"commit_id"`
	CommitRef     string   `bson:"commit_ref"`     // Just informational
	ClientVersion string   `bson:"client_version"` // Just informational

	// From the environment
	OverlayCommitID string `bson:"overlay_commit_id"` // Just informational
	IsStaff         bool   `bson:"is_staff"`

	// Target details
	TargetID      string `bson:"target_id"`
	TargetName    string `bson:"target_name"`
	TargetVersion uint   `bson:"target_version"`
	IsMetaTarget  bool   `bson:"is_meta_target"`

	// Submitted target, which is different from target details if submitting
	// to a subtarget of a metatarget.
	SubmittedTargetID      string `bson:"submitted_target_id"`
	SubmittedTargetName    string `bson:"submitted_target_name"`
	SubmittedTargetVersion uint   `bson:"submitted_target_version"`

	PointsAvailable uint   `bson:"max_score"`
	TargetType      string `bson:"target_type"`

	// Results
	Status         string   `bson:"status"`
	Score          uint     `bson:"score"`
	Performance    float64  `bson:"performance"`
	TestIDs        []string `bson:"tests"`
	Errors         []string `bson:"errors"`
	EstimatedScore uint     `bson:"estimated_score"`

	SubmissionTime time.Time `bson:"submission_time"`
	CompletionTime time.Time `bson:"completion_time"`

	Env *TestEnvironment `bson:"-" json:"-"`

	BuildTest *BuildTest `bson:"-" json:"-"`
	Tests     *TestGroup `bson:"-" json:"-"`

	// Split information for meta/sub-targets. We store IDs for
	// mongo/persistence, and keep references around in case we need them,
	// and for testing.
	OrigSubmissionID string `bson:"orig_submission_id"`

	SubSubmissionIDs []string `bson:"sub_submission_ids"`
	// contains filtered or unexported fields
}

func NewSubmission

func NewSubmission(request *SubmissionRequest, origenv *TestEnvironment) (*Submission, []error)

Create a new Submission that can be evaluated by the test161 server or client.

This submission has a copy of the test environment, so it's safe to pass the same environment for multiple submissions. Local fields will be set accordingly.

func (*Submission) Run

func (s *Submission) Run() error

Synchronous submission runner

func (*Submission) TargetStats

func (s *Submission) TargetStats() (result *TargetStats)

type SubmissionManager

type SubmissionManager struct {
	// contains filtered or unexported fields
}

func NewSubmissionManager

func NewSubmissionManager(env *TestEnvironment) *SubmissionManager

func (*SubmissionManager) CombinedStats

func (sm *SubmissionManager) CombinedStats() *Test161Stats

func (*SubmissionManager) Pause

func (sm *SubmissionManager) Pause()

func (*SubmissionManager) Resume

func (sm *SubmissionManager) Resume()

func (*SubmissionManager) Run

func (sm *SubmissionManager) Run(s *Submission) error

func (*SubmissionManager) SetStaffOnly

func (sm *SubmissionManager) SetStaffOnly()

func (*SubmissionManager) Stats

func (sm *SubmissionManager) Stats() *ManagerStats

func (*SubmissionManager) Status

func (sm *SubmissionManager) Status() int

type SubmissionRequest

type SubmissionRequest struct {
	Target          string                // Name of the target
	Users           []*SubmissionUserInfo // Email addresses of users
	Repository      string                // Git repository to clone
	CommitID        string                // Git commit id to checkout after cloning
	CommitRef       string                // The ref they're submitting with, if one is set
	ClientVersion   ProgramVersion        // The version of test161 the client is running
	EstimatedScores map[string]uint       // The local score test161 computed
}

SubmissionRequests are created by clients and used to generate Submissions. A SubmissionRequest represents the data required to run a test161 target for evaluation by the test161 server.
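
As an illustrative sketch (not part of the package API), the request-to-submission flow looks roughly like this; the target name, repository URL, token, and commit ID are placeholders, and env is assumed to be a *TestEnvironment with persistence configured:

	req := &test161.SubmissionRequest{
		Target:     "asst1",
		Users:      []*test161.SubmissionUserInfo{{Email: "student@example.com", Token: "secret-token"}},
		Repository: "https://github.com/example/os161.git",
		CommitID:   "0123456789abcdef0123456789abcdef01234567",
	}
	if _, err := req.Validate(env); err != nil {
		// reject the request
	}
	submission, errs := test161.NewSubmission(req, env)
	if len(errs) > 0 {
		// report validation/build errors
	}
	if err := submission.Run(); err != nil {
		// report run failure
	}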

func (*SubmissionRequest) CheckUserKeys

func (req *SubmissionRequest) CheckUserKeys(env *TestEnvironment) []*RequestKeyResonse

Check if the local copy of the key is up-to-date. Return an empty key if the user's key has not been created, or the new key if the hash is different.

func (*SubmissionRequest) Validate

func (req *SubmissionRequest) Validate(env *TestEnvironment) ([]*Student, error)

type SubmissionUserInfo

type SubmissionUserInfo struct {
	Email   string `yaml:"email"`
	Token   string `yaml:"token"`
	KeyHash string `yaml:"-"`
}

type Sys161Conf

type Sys161Conf struct {
	Path   string   `yaml:"path" json:"path"`
	CPUs   uint     `yaml:"cpus" json:"cpus"`
	RAM    string   `yaml:"ram" json:"ram"`
	Disk1  DiskConf `yaml:"disk1" json:"disk1"`
	Disk2  DiskConf `yaml:"disk2" json:"disk2"`
	Random uint32   `yaml:"-" json:"randomseed" bson:"randomseed"`
}

type TagDescription

type TagDescription struct {
	Name        string `yaml:"name"`
	Description string `yaml:"desc"`
}

type TagDescriptions

type TagDescriptions struct {
	Tags []*TagDescription `yaml:"tags"`
}

A collection of TagDescriptions. This is only used for loading; the references are then moved into a map in the global environment.
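
For illustration, a small sketch of loading tag descriptions from a YAML string using TagDescriptionsFromString (the tag names and descriptions are made up):

	yamlText := "tags:\n" +
		"  - name: boot\n" +
		"    desc: Tests that only require a booting kernel\n" +
		"  - name: sync\n" +
		"    desc: Synchronization primitive tests\n"
	tagDescs, err := test161.TagDescriptionsFromString(yamlText)
	if err != nil {
		// handle YAML error
	}
	for _, td := range tagDescs.Tags {
		fmt.Printf("%s: %s\n", td.Name, td.Description)
	}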

func TagDescriptionsFromFile

func TagDescriptionsFromFile(file string) (*TagDescriptions, error)

func TagDescriptionsFromString

func TagDescriptionsFromString(text string) (*TagDescriptions, error)

type TagMap

type TagMap map[string][]*Test

TagMap maintains the map of tag -> tests for the test set.

type Target

type Target struct {
	// Make sure to update isChangeAllowed with any new fields that need to be versioned.
	ID               string        `yaml:"-" bson:"_id"`
	Name             string        `yaml:"name"`
	Active           string        `yaml:"active"`
	Version          uint          `yaml:"version"`
	Type             string        `yaml:"type"`
	Points           uint          `yaml:"points"`
	KConfig          string        `yaml:"kconfig"`
	RequiredCommit   string        `yaml:"required_commit" bson:"required_commit"`
	RequiresUserland bool          `yaml:"userland" bson:"userland"`
	Tests            []*TargetTest `yaml:"tests"`
	FileHash         string        `yaml:"-" bson:"file_hash"`
	FileName         string        `yaml:"-" bson:"file_name"`

	// MetaTarget info
	IsMetaTarget   bool     `yaml:"is_meta_target" bson:"is_meta_target"`
	SubTargetNames []string `yaml:"sub_target_names" bson:"sub_target_names"`
	MetaName       string   `yaml:"meta_name"`

	// Front-end only
	PrintName   string `yaml:"print_name" bson:"print_name"`
	Description string `yaml:"description"`
	Link        string `yaml:"link" bson:"link"`
	Leaderboard string `yaml:"leaderboard" bson:"leaderboard"`
	// contains filtered or unexported fields
}

A test161 Target is the specification for a group of related tests. Currently, we support two types of Targets with special meaning: asst and perf. The main difference between Targets and TestGroups is that Targets can have a scoring component, either points or performance. The test161 submission system operates in terms of Targets.

02/2017 - Targets can now be MetaTargets, which have a list of subtargets. Subtargets are runnable, whereas metatargets are not.

func NewTarget

func NewTarget() *Target

NewTarget creates a new, empty Target with the default type of "asst"

func TargetFromFile

func TargetFromFile(file string) (*Target, error)

TargetFromFile creates a Target object from a yaml file

func TargetFromString

func TargetFromString(text string) (*Target, error)

TargetFromString creates a Target object from a yaml string

func (*Target) Instance

func (t *Target) Instance(env *TestEnvironment) (*TestGroup, []error)

Instance creates a runnable TestGroup from this Target
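
As a sketch, instantiating a target loaded from a file (the file path is a placeholder, and env is assumed to be a *TestEnvironment created with NewEnvironment, documented below):

	target, err := test161.TargetFromFile("targets/asst1.tgt")
	if err != nil {
		// handle load error
	}
	group, errs := target.Instance(env)
	if len(errs) > 0 {
		// handle instantiation errors
	}
	fmt.Println("tests:", len(group.Tests), "points:", group.TotalPoints())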

type TargetCommand

type TargetCommand struct {
	Id     string   `yaml:"id" bson:"cmd_id"` // ID, must match ID in test file
	Index  int      `yaml:"index"`            // Index > 0 => match to index in test
	Points uint     `yaml:"points"`           // Points for this command
	Args   []string `yaml:"args"`             // Argument overrides
}

TargetCommands (optionally) specify information about the commands contained in TargetTests. TargetCommands allow you to assign the points for an individual command or override the input arguments.
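
To make the overrides concrete, here is a hedged sketch of a target specification built from the yaml tags shown above and loaded with TargetFromString; the names, IDs, and point values are illustrative only:

	targetYaml := "name: asst1\n" +
		"version: 1\n" +
		"type: asst\n" +
		"points: 50\n" +
		"kconfig: ASST1\n" +
		"tests:\n" +
		"  - id: sync/sy2.t\n" +
		"    points: 50\n" +
		"    commands:\n" +
		"      - id: sy2\n" +
		"        points: 50\n" +
		"        args: [\"8\"]\n" // override the command's input arguments
	target, err := test161.TargetFromString(targetYaml)
	if err != nil {
		// handle parse/validation error
	}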

type TargetList

type TargetList struct {
	Targets []*TargetListItem
}

TargetList is the JSON blob sent to clients

type TargetListItem

type TargetListItem struct {
	Name        string
	PrintName   string
	Description string
	Active      string
	Type        string
	Version     uint
	Points      uint
	FileName    string
	FileHash    string
	CollabMsg   string
}

TargetListItem is the target detail we send to remote clients about a target

type TargetStats

type TargetStats struct {
	TargetName    string `bson:"target_name"`
	TargetVersion uint   `bson:"target_version"`
	TargetType    string `bson:"target_type"`
	MaxScore      uint   `bson:"max_score"`

	TotalSubmissions uint `bson:"total_submissions"`
	TotalComplete    uint `bson:"total_complete"`

	HighScore uint    `bson:"high_score"`
	LowScore  uint    `bson:"low_score"`
	AvgScore  float64 `bson:"avg_score"`

	BestPerf  float64 `bson:"best_perf"`
	WorstPerf float64 `bson:"worst_perf"`
	AvgPerf   float64 `bson:"avg_perf"`

	BestSubmission string `bson:"best_submission_id"`
}

type TargetTest

type TargetTest struct {
	Id            string           `yaml:"id" bson:"test_id"`
	Scoring       string           `yaml:"scoring"`
	Points        uint             `yaml:"points"`
	MemLeakPoints uint             `yaml:"mem_leak_points"`
	Commands      []*TargetCommand `yaml:"commands"`
}

A TargetTest is the specification for a single Test contained in the Target. Currently, the Test can only appear in the Target once.

type TemplOutputLine

type TemplOutputLine struct {
	Text     string `yaml:"text"`
	Trusted  string `yaml:"trusted"`
	External string `yaml:"external"`
}

An expected line of output, which may either be expanded or not.

type Test

type Test struct {

	// Mongo ID
	ID string `yaml:"-" json:"id" bson:"_id,omitempty"`

	// ID of the submission this test belongs to.
	SubmissionID string `yaml:"-" json:"-" bson:"submission_id"`

	// Metadata
	Name        string   `yaml:"name" json:"name"`
	Description string   `yaml:"description" json:"description"`
	Tags        []string `yaml:"tags" json:"tags"`
	Depends     []string `yaml:"depends" json:"depends"`

	// Configuration chunks
	Sys161           Sys161Conf         `yaml:"sys161" json:"sys161"`
	Stat             StatConf           `yaml:"stat" json:"stat"`
	Monitor          MonitorConf        `yaml:"monitor" json:"monitor"`
	CommandConf      []CommandConf      `yaml:"commandconf" json:"commandconf"`
	Misc             MiscConf           `yaml:"misc" json:"misc"`
	CommandOverrides []*CommandTemplate `yaml:"commandoverrides" json:"-"`

	// Actual test commands to run
	Content string `fm:"content" yaml:"-" json:"-" bson:"-"`

	// Big lock that protects most fields shared between Run and getStats
	L *sync.Mutex `json:"-" bson:"-"`

	ConfString string         `json:"confstring"` // Only set once
	WallTime   TimeFixedPoint `json:"walltime"`   // Protected by L
	SimTime    TimeFixedPoint `json:"simtime"`    // Protected by L
	Commands   []*Command     `json:"commands"`   // Protected by L
	Status     []Status       `json:"status"`     // Protected by L
	Result     TestResult     `json:"result"`     // Protected by L

	// Dependency data
	DependencyID string           `json:"depid"`
	ExpandedDeps map[string]*Test `json:"-" bson:"-"`
	IsDependency bool             `json:"isdependency"`

	// Grading.  These are set when the test is being run as part of a Target.
	PointsAvailable uint   `json:"points_avail" bson:"points_avail"`
	PointsEarned    uint   `json:"points_earned" bson:"points_earned"`
	ScoringMethod   string `json:"scoring_method" bson:"scoring_method"`
	TargetName      string `json:"target_name" bson:"target_name"`

	// Memory leak detection
	MemLeakBytes    int  `json:"mem_leak_bytes" bson:"mem_leak_bytes"`       // How much are they leaking?
	MemLeakChecked  bool `json:"mem_leak_checked" bson:"mem_leak_checked"`   // Did we even check?
	MemLeakPoints   uint `json:"mem_leak_points" bson:"mem_leak_points"`     // potential point hit
	MemLeakDeducted uint `json:"mem_leak_deducted" bson:"mem_leak_deducted"` // actual point hit
	// contains filtered or unexported fields
}

func TestFromFile

func TestFromFile(filename string) (*Test, error)

TestFromFile parses the test file and sets configuration defaults.

func TestFromString

func TestFromString(data string) (*Test, error)

TestFromString parses the test string and sets configuration defaults.

func (*Test) Close

func (t *Test) Close(time.Time)

func (*Test) ExpectCall

func (t *Test) ExpectCall(time.Time, *regexp.Regexp)

func (*Test) ExpectReturn

func (t *Test) ExpectReturn(time.Time, expect.Match, error)

func (*Test) Key

func (t *Test) Key() string

Key implements the Keyer interface for the dependency graph.

func (*Test) MergeAllDefaults

func (t *Test) MergeAllDefaults() error

func (*Test) MergeConf

func (t *Test) MergeConf(defaults Test) error

func (*Test) OutputJSON

func (t *Test) OutputJSON() (string, error)

OutputJSON serializes the test object and all related output.

func (*Test) OutputString

func (t *Test) OutputString() string

OutputString formats test output in a human-readable form.

func (*Test) PrintConf

func (t *Test) PrintConf() (string, error)

PrintConf formats the test configuration for use by sys161 via the sys161.conf file.

func (*Test) Recv

func (t *Test) Recv(receivedTime time.Time, received []byte)

Recv processes new sys161 output and restarts the progress timer

func (*Test) RecvEOF

func (t *Test) RecvEOF(time.Time)

func (*Test) RecvNet

func (t *Test) RecvNet(time.Time, []byte)

func (*Test) Run

func (t *Test) Run(env *TestEnvironment) (err error)

Run a test161 test.
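
A minimal sketch of loading and running a single test (the file name is a placeholder; env is assumed to come from NewEnvironment, documented below):

	t, err := test161.TestFromFile("tests/boot.t")
	if err != nil {
		// handle parse error
	}
	if err := t.Run(env); err != nil {
		// handle abort/internal error
	}
	fmt.Println("result:", t.Result) // e.g. "correct" or "incorrect"
	fmt.Print(t.OutputString())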

func (*Test) Send

func (t *Test) Send(time.Time, []byte)

Unused parts of the expect.Logger interface

func (*Test) SendMasked

func (t *Test) SendMasked(time.Time, []byte)

func (*Test) SetEnv

func (t *Test) SetEnv(env *TestEnvironment)

type Test161JobResult

type Test161JobResult struct {
	Test *Test
	Err  error
}

A Test161JobResult consists of the completed test and any error that occurred while running the test.

type Test161Stats

type Test161Stats struct {
	Status          string       `json:"status"`
	SubmissionStats ManagerStats `json:"submission_stats"`
	TestStats       ManagerStats `json:"test_stats"`
}

Combined submission and test statistics since the service started

type TestEnvironment

type TestEnvironment struct {
	// These do not depend on the TestGroup/Target
	TestDir  string
	Commands map[string]*CommandTemplate
	Targets  map[string]*Target

	// Optional - added in version 1.2.6
	Tags map[string]*TagDescription

	CacheDir    string
	OverlayRoot string
	KeyDir      string
	Persistence PersistenceManager

	Log *log.Logger

	RootDir string
	// contains filtered or unexported fields
}

TestEnvironment encapsulates the environment tests run in. Much of the environment is global - commands, targets, etc. However, some state is local, such as the secure keyMap and OS/161 root directory.

func NewEnvironment

func NewEnvironment(test161Dir string, pm PersistenceManager) (*TestEnvironment, error)

Create a new TestEnvironment from the given test161 directory, which must contain the subdirectories commands/, targets/, and tests/. In addition to loading tests, commands, and targets, this sets up a logger that writes to os.Stderr; assign to env.Log to change this.
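
Putting this together, a small self-contained sketch; the directory path is a placeholder, and passing a nil PersistenceManager is an assumption that may not suit every deployment:

	package main

	import (
		"log"
		"os"

		"github.com/jay1999ke/test161"
	)

	func main() {
		// The directory must contain commands/, targets/, and tests/.
		env, err := test161.NewEnvironment("./test161", nil)
		if err != nil {
			log.Fatal(err)
		}

		// By default env.Log writes to os.Stderr; redirect it to a file.
		f, err := os.Create("test161.log")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		env.Log = log.New(f, "test161: ", log.LstdFlags)
	}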

func (*TestEnvironment) CopyEnvironment

func (env *TestEnvironment) CopyEnvironment() *TestEnvironment

Create a new TestEnvironment by copying the global state from an existing environment. Local test state will be initialized to default values.

func (*TestEnvironment) SetNullLogger

func (env *TestEnvironment) SetNullLogger()

func (*TestEnvironment) TargetList

func (env *TestEnvironment) TargetList() *TargetList

type TestGroup

type TestGroup struct {
	Tests  map[string]*Test
	Config *GroupConfig
}

A group of tests to be run, which is the result of expanding a GroupConfig.

func EmptyGroup

func EmptyGroup() *TestGroup

EmptyGroup creates an empty TestGroup that can be used to add groups from strings.

func GroupFromConfig

func GroupFromConfig(config *GroupConfig) (*TestGroup, []error)

Create a TestGroup from a GroupConfig. All test expressions are expanded and dependencies are added if UseDeps is set to true in the configuration.
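
A hedged sketch follows; GroupConfig is documented earlier in this package, and the Name, UseDeps, Tests, and Env fields used here are assumptions based on that documentation:

	config := &test161.GroupConfig{
		Name:    "sync",                // assumed field: group name
		UseDeps: true,                  // assumed field: expand dependencies too
		Tests:   []string{"sync/*.t"},  // assumed field: test expressions to expand
		Env:     env,                   // assumed field: the test environment
	}
	group, errs := test161.GroupFromConfig(config)
	if len(errs) > 0 {
		// handle expansion errors
	}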

func (*TestGroup) DependencyGraph

func (tg *TestGroup) DependencyGraph() (*graph.Graph, error)

func (*TestGroup) EarnedPoints

func (t *TestGroup) EarnedPoints() (points uint)

func (*TestGroup) OutputJSON

func (tg *TestGroup) OutputJSON() (string, error)

func (*TestGroup) OutputString

func (tg *TestGroup) OutputString() string

func (*TestGroup) TotalPoints

func (t *TestGroup) TotalPoints() (points uint)

type TestResult

type TestResult string
const (
	TEST_RESULT_NONE      TestResult = "none"      // Hasn't run (initial status)
	TEST_RESULT_RUNNING   TestResult = "running"   // Running
	TEST_RESULT_CORRECT   TestResult = "correct"   // Met the output criteria
	TEST_RESULT_INCORRECT TestResult = "incorrect" // Possibly some partial points, but didn't complete everything successfully
	TEST_RESULT_ABORT     TestResult = "abort"     // Aborted - internal error
	TEST_RESULT_SKIP      TestResult = "skip"      // Skipped (dependency not met)
)

type TestRunner

type TestRunner interface {
	Group() *TestGroup
	Run() <-chan *Test161JobResult
}

A TestRunner is responsible for running a TestGroup and sending the results back on a read-only channel. test161 runners close the results channel when finished so clients can range over it. test161 runners also return as soon as they are able to and let tests run asynchronously.
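
As a sketch, consuming a runner's results channel (tg is assumed to be a *TestGroup, e.g. from Target.Instance above; NewSimpleRunner is documented below):

	runner := test161.NewSimpleRunner(tg)
	for res := range runner.Run() { // channel is closed once all tests complete
		if res.Err != nil {
			fmt.Println("test error:", res.Err)
			continue
		}
		fmt.Println(res.Test.Name, "->", res.Test.Result)
	}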

func NewDependencyRunner

func NewDependencyRunner(group *TestGroup) TestRunner

Factory function to create a new DependencyRunner.

func NewSimpleRunner

func NewSimpleRunner(group *TestGroup) TestRunner

Factory function to create a new SimpleRunner.

func TestRunnerFromConfig

func TestRunnerFromConfig(config *GroupConfig) (TestRunner, []error)

Create a TestRunner from a GroupConfig. config.UseDeps determines the type of runner created.

type TestStat

type TestStat struct {
	Name            string     `json:"name" bson:"name"`
	Result          TestResult `json:"result" bson:"result"`
	PointsAvailable uint       `json:"points_avail" bson:"points_avail"`
	PointsEarned    uint       `json:"points_earned" bson:"points_earned"`
	MemLeakBytes    int        `json:"mem_leak_bytes" bson:"mem_leak_bytes"`
	MemLeakPoints   uint       `json:"mem_leak_points" bson:"mem_leak_points"`
	MemLeakDeducted uint       `json:"mem_leak_deducted" bson:"mem_leak_deducted"`
}

type TestUpdateMsg

type TestUpdateMsg struct {
	Test   *Test
	Reason int
	Data   interface{}
}

type TestingPersistence

type TestingPersistence struct {
	Verbose bool
}

func (*TestingPersistence) CanRetrieve

func (d *TestingPersistence) CanRetrieve() bool

func (*TestingPersistence) Close

func (p *TestingPersistence) Close()

func (*TestingPersistence) Notify

func (p *TestingPersistence) Notify(entity interface{}, msg, what int) error

func (*TestingPersistence) Retrieve

func (d *TestingPersistence) Retrieve(what int, who map[string]interface{},
	filter map[string]interface{}, res interface{}) error

type TimeFixedPoint

type TimeFixedPoint float64

func (TimeFixedPoint) MarshalJSON

func (t TimeFixedPoint) MarshalJSON() ([]byte, error)

MarshalJSON prints our TimeFixedPoint type as a fixed point float for JSON.

type UploadRequest

type UploadRequest struct {
	UploadType int
	Users      []*SubmissionUserInfo
}

UploadRequests are created by clients and provide the form fields for file uploads. Currently, we only support stats file uploads, but this could change.

func (*UploadRequest) Validate

func (req *UploadRequest) Validate(env *TestEnvironment) ([]*Student, error)

type UsageStat

type UsageStat struct {
	ID             string     `bson:"_id"`
	Users          []string   `bson:"users" json:"users"`
	Timestamp      time.Time  `bson:"timestamp" json:"timestamp"`
	Version        int        `bson:"version" json:"version"`
	Test161Version string     `bson:"test161_version" json:"test161_version"`
	IsStaff        bool       `bson:"is_staff" json:"-"`
	GroupInfo      *GroupStat `bson:"group_info" json:"group_info"`
}

func NewTestGroupUsageStat

func NewTestGroupUsageStat(users []string, targetOrTag string, tg *TestGroup,
	startTime, endTime time.Time) *UsageStat

func (*UsageStat) JSON

func (stat *UsageStat) JSON() (string, error)

func (*UsageStat) Persist

func (stat *UsageStat) Persist(env *TestEnvironment) error
