Easy-to-use, easy-to-build report evaluations in a framework built on Python's `unittest` classes
- Easy to use for students:
  - Run tests as a single Python command
  - See your score immediately
  - Upload your results as a single file on campusnet with no risk of accidental tampering
  - All tests are simple classes, so they integrate well with a debugger and any IDE
- Easy to use for teachers: new tests can be built in 100% Python, with no need to specify expected output
## What it looks like to a student
Homework is broken down into **reports**. A report is a collection of questions which are individually scored, and each question may in turn involve multiple tests. Each report is therefore given an overall score based on a weighted average of how many tests are passed.
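The weighted-average scoring described above can be sketched as follows (a simplified illustration; the framework's actual scoring code may differ):

```python
def report_score(questions):
    """Compute a report's overall score as a weighted average.

    Each question is a tuple (weight, tests_passed, tests_total).
    Returns the weighted percentage score for the report.
    """
    total_weight = sum(w for w, _, _ in questions)
    weighted = sum(w * passed / total for w, passed, total in questions)
    return 100 * weighted / total_weight

# Two questions: the first weighs twice as much as the second.
# Question 1 passes 3 of 4 tests, question 2 passes 1 of 1.
score = report_score([(2, 3, 4), (1, 1, 1)])  # approximately 83.3
```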
In practice, a report consists of an ordinary Python file which the students simply run. The file `report1.py` is an ordinary, non-obfuscated file which they can navigate and debug using a debugger. It may contain the homework itself, or it may call functions the students have written elsewhere. Running the file produces console output which tells the students their current score for each test:
```
(example console output listing each question and the current score)
```
Once students are happy with the result, they run an alternative, tamper-resistant script called `report1_grade.py`. This runs the same tests and generates a file `report1.token`, which the students upload to campusnet. This file contains the results of the report evaluation, the script output, and so on.
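One simple way such a results file can be made tamper-evident (a sketch of the general idea, not necessarily how `report1_grade.py` implements it) is to store the results together with a checksum of their canonical serialization:

```python
import hashlib
import json

def write_token(path, results):
    """Write results plus a SHA-256 checksum of their canonical JSON form."""
    payload = json.dumps(results, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    with open(path, "w") as f:
        json.dump({"payload": results, "sha256": digest}, f)

def verify_token(path):
    """Return True if the stored checksum matches the stored results."""
    with open(path) as f:
        token = json.load(f)
    payload = json.dumps(token["payload"], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest() == token["sha256"]
```

Editing the payload by hand invalidates the checksum, so casual tampering is detected when the file is verified.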
The framework is built around the built-in `unittest` module in Python. Using the framework therefore also familiarizes students with automatic testing.
A unit test consists of three things:
- The result of the user's code,
- The expected result,
- A comparison operation of the two which may either fail or succeed.
The comparisons are built on top of Python's `unittest` framework, which provides a wide variety of well-documented comparison methods, and it is easy to write your own.
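The three parts can be seen in a plain `unittest` test case (the function `reverse_list` below is a made-up stand-in for student code):

```python
import unittest

def reverse_list(xs):
    """Stand-in for a function the student would implement."""
    return list(reversed(xs))

class TestReverse(unittest.TestCase):
    def test_reverse(self):
        result = reverse_list([1, 2, 3])    # the result of the user's code
        expected = [3, 2, 1]                # the expected result
        self.assertEqual(result, expected)  # the comparison: fails or succeeds
```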
To get the expected result, one option is to specify it yourself; however, the recommended (and much easier) option is to maintain a working branch of the code in which all the functionality the students must implement is complete, and then use the output of that branch as the *expected output* in the tests.
To see how this works, consider the following minimal example (a sketch of the idea in plain Python; the framework automates the recording of the reference output):

```python
# On the working (reference) branch, the function is implemented correctly:
def add(a, b):
    return a + b

# Its output is recorded, e.g. saved to a file shipped with the tests:
expected = add(2, 3)

# In the student's test, the recorded value is used as the expected result:
# self.assertEqual(student_add(2, 3), expected)
```