Instruqt offers you the option to automatically test your track. For this, the CLI includes the instruqt track test command. When running this command, we start a new instance of your track, and for every challenge we execute the following steps:
1. Setup challenge
2. Start challenge
3. Check challenge, and expect it to fail
4. Solve challenge
5. Check challenge again, but this time expect it to succeed
The test will stop running either when one of these verification steps fails, or when all of them have completed successfully.
$ instruqt track test --slug instruqt/getting-started-with-instruqt --skip-fail-check
==> Testing track 'instruqt/getting-started-with-instruqt' (ID: b5wj5h80rk0y)
Creating environment ...... OK

==> Testing challenge [1/3] 'your-first-challenge' (ID: dkkgekwurtqu)
Setting up challenge ... OK
Starting challenge OK
Running check, expecting failure SKIPPED
Running solve OK
Running check, expecting success OK

==> Testing challenge [2/3] 'navigate-between-tabs' (ID: kb7ww4qxf7or)
Setting up challenge . OK
Starting challenge OK
Running check, expecting failure SKIPPED
Running solve OK
Running check, expecting success OK

==> Testing challenge [3/3] 'solving-a-real-challenge' (ID: vfp4xg0ffxpd)
Setting up challenge OK
Starting challenge OK
Running check, expecting failure SKIPPED
Running solve OK
Running check, expecting success FAIL
[ERROR] Error verifying check: Expected challenge status 'completed', but got 'started'

Check `instruqt track logs` for details
By running these steps we achieve the following:
Mimic the student's behaviour
Validate that the track starts properly
Validate that the challenge life cycle scripts (setup, check and solve) have been implemented correctly (see the sketch after this list)
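For reference, a check script is simply an executable that exits with a non-zero status while the challenge is unsolved and with zero once it is solved; the matching solve script performs the actions that make the check pass. A minimal sketch of a check script (the script name, host name and the file it looks for are illustrative assumptions, not part of any specific track):

#!/bin/bash
# Hypothetical check script (e.g. check-shell for a host named "shell").
# Exit non-zero to fail the check; exit zero to mark the challenge as completed.
if [ ! -f /root/hello-world.txt ]; then
    echo "Please create the file /root/hello-world.txt" >&2
    exit 1
fi
exit 0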
If you have not implemented check scripts for your track, the third step (checking the challenge and expecting it to fail) will fail. In this case you can add the --skip-fail-check flag to the instruqt track test command.
When the test has finished, it will automatically stop the track and mark it for cleanup. If you would like to keep it running afterwards, add the --keep-running flag. This might be useful if you are trying to debug an issue with your scripts and want to inspect the environment after the test has finished. If you are running the test with your personal credentials, you can then go to play.instruqt.com and continue the track from where the test finished.
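For example, reusing the slug from the test run above:

$ instruqt track test --slug instruqt/getting-started-with-instruqt --keep-running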
To run a test for a specific track, you can either:
pass the --id <track-id> flag;
pass the --slug <org-slug>/<track-slug> flag; or
run it from the folder where the track's track.yml is located.
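For example, using the ID and slug from the test run above (the local directory name is just an illustration):

$ instruqt track test --id b5wj5h80rk0y
$ instruqt track test --slug instruqt/getting-started-with-instruqt
$ cd getting-started-with-instruqt && instruqt track test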
When running tests locally, the CLI will use your personal credentials.
When running tests from an automated system (e.g. a CI server), you can authenticate using an API token. To run a test with a token, set the environment variable INSTRUQT_TOKEN to the value of your API token.
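For example, in a CI job you could export the token before invoking the test (how the secret reaches the job depends on your CI system; the placeholder value below is illustrative):

$ export INSTRUQT_TOKEN="<your-api-token>"
$ instruqt track test --slug instruqt/getting-started-with-instruqt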