$ instruqt track test --slug instruqt/getting-started-with-instruqt --skip-fail-check
==> Testing track 'instruqt/getting-started-with-instruqt' (ID: b5wj5h80rk0y)
Creating environment ...... OK
==> Testing challenge [1/3] 'your-first-challenge' (ID: dkkgekwurtqu)
Setting up challenge ... OK
Starting challenge OK
Running check, expecting failure SKIPPED
Running solve OK
Running check, expecting success OK
==> Testing challenge [2/3] 'navigate-between-tabs' (ID: kb7ww4qxf7or)
Setting up challenge . OK
Starting challenge OK
Running check, expecting failure SKIPPED
Running solve OK
Running check, expecting success OK
==> Testing challenge [3/3] 'solving-a-real-challenge' (ID: vfp4xg0ffxpd)
Setting up challenge OK
Starting challenge OK
Running check, expecting failure SKIPPED
Running solve OK
Running check, expecting success FAIL
[ERROR] Error verifying check: Expected challenge status 'completed', but got 'started'
Check `instruqt track logs` for details
Instruqt offers you the option to automatically test your track. For this, the CLI includes the instruqt track test command. When you run this command, we start a new instance of your track, and for every challenge we execute the following steps:
Start the challenge
Check the challenge, and expect it to fail
Run the solve scripts for the challenge
Check the challenge again, but this time expect it to succeed
The test will stop running either when one of these verification steps fails, or when all of them have completed successfully.
By running these steps we achieve the following:
Mimic the student's behaviour
Validate that the track starts properly
Validate that the challenge life cycle scripts (check and solve) have been implemented correctly
If you have not implemented check scripts for your track, the step that expects the check to fail will fail. In this case, you can add the --skip-fail-check flag to the instruqt track test command.
When the test has finished, it will automatically stop the track and mark it for cleanup. If you would like to keep it running afterwards, add the --keep-running flag. This can be useful if you are debugging an issue with your scripts and want to inspect the environment after the test has finished. If you run the test with your personal credentials, you can then go to play.instruqt.com and continue the track from where the test finished.
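For example, to test a track and keep its environment around for inspection afterwards, combine the --slug and --keep-running flags (the slug value is taken from the example output above):

```shell
# Test the track, but keep the environment running afterwards
# so it can be inspected at play.instruqt.com.
instruqt track test \
  --slug instruqt/getting-started-with-instruqt \
  --keep-running
```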
To run a test for a specific track, you can either:
pass the --id <track-id> flag;
pass the --slug <org-slug>/<track-slug> flag; or
run it from the folder where the track's track.yml file is located.
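The three ways of selecting a track can be sketched as follows (the ID and slug values are taken from the example output above; the directory name is illustrative):

```shell
# 1. Select the track by its ID
instruqt track test --id b5wj5h80rk0y

# 2. Select the track by its slug
instruqt track test --slug instruqt/getting-started-with-instruqt

# 3. Run from the directory that contains the track's track.yml
cd my-track && instruqt track test
```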
When running tests locally, the CLI will use your personal credentials. When running tests from an automated system (e.g. a CI server), you can authenticate using an API token instead. To run a test with a token, set the environment variable INSTRUQT_TOKEN to the value of your API token.
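In a CI job this could look as follows (the token value is a placeholder for your own API token):

```shell
# Authenticate the CLI with an API token instead of personal credentials,
# then run the track test non-interactively.
export INSTRUQT_TOKEN="<your-api-token>"
instruqt track test --slug instruqt/getting-started-with-instruqt
```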
$ instruqt track logs
==> Tailing logs for track my-track
2018/09/14 11:13:17 INFO: h84attob7rnw-8c64aa11957d79c2c40f3fb1b9d1096a: - module.core
2018/09/14 11:13:17 INFO: h84attob7rnw-8c64aa11957d79c2c40f3fb1b9d1096a: Initializing the backend...
...
When developing your track, you might run into situations where you need debug logs.
The CLI includes an instruqt track logs command that you can use to get the logs of the instances of your track. All output from spinning up your track's environments, as well as the output of the setup scripts, is available through this command.
This command will tail the logs until you cancel it (ctrl-c).
You can run this command from the folder where your track.yml is, or you can pass the --slug flag to specify which track's logs you want to see.
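For example, to tail the logs of the track from the test output above without changing into its directory:

```shell
# Tail the logs for a specific track by slug; press ctrl-c to stop.
instruqt track logs --slug instruqt/getting-started-with-instruqt
```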