# Using unitgrade
## A simple example
Unitgrade makes the following assumptions:
- Your code is in python
- Whatever you want to do can be specified as a `unittest`
The tests are then collected into a `Report` class in a small report file, `report1.py`, which ends by evaluating the report:
```python
if __name__ == "__main__":
    evaluate_report_student(Report1())
```
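For reference, a minimal self-contained report file along these lines might look like the following sketch; the `cs101.homework1` module and its `add` function are assumed names for illustration (compare with Example 3 further down, which shows the complete pattern):
```python
import unittest
from unitgrade2.unitgrade2 import Report
from unitgrade2.unitgrade_helpers2 import evaluate_report_student

class Week1(unittest.TestCase):
    """ The first question for week 1. """
    def test_add(self):
        # cs101.homework1 and add(a, b) are assumed names for this sketch.
        from cs101.homework1 import add
        self.assertEqual(add(2, 2), 4)

import cs101
class Report1(Report):
    title = "CS 101 Report 1"
    questions = [(Week1, 10)]  # one question worth 10 credits
    pack_imports = [cs101]     # packages whose source is packed into the .token file

if __name__ == "__main__":
    evaluate_report_student(Report1())
```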
### Deployment
The above is all you need if you simply want to use the framework as a self-check: Students can run the code and see how well they did.
In order to begin using the framework for evaluation we need to create a bit more structure. We do that by deploying the report class with a small `deploy.py` script (only the skeleton is shown here; see the sketch after the list below):
```python
# deploy.py (body elided)
if __name__ == "__main__":
    ...
```
- The first line creates the `report1_grade.py` script and any additional data files needed by the tests (none in this case)
- The second line sets up the students directory (remember, we have included the solutions!) and removes the students' solutions. You can check the result in the students folder.
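Under these assumptions, `deploy.py` might look like the following sketch; the helpers (`setup_grade_file_report` from the companion `unitgrade_private2` package and `snip_dir` from the `snipper` tool) and their signatures are assumptions for illustration:
```python
# deploy.py -- a sketch only; the helper names and signatures below are assumptions.
from report1 import Report1
from unitgrade_private2.hidden_create_files import setup_grade_file_report
from snipper import snip_dir

if __name__ == "__main__":
    # First line: create report1_grade.py together with any data files the tests need.
    setup_grade_file_report(Report1)
    # Second line: copy the package to the students directory, stripping the solutions.
    snip_dir(source_dir="./", dest_dir="../students/cs101", exclude=['*.token', 'deploy.py'])
```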
### Using the framework as a student
You can now upload the `students` directory to the students. The students can run their tests either by running `cs101.report1` in their IDE or by typing:
```
python -m cs101.report1
```
in the command line. This produces detailed output of the tests, and the program is 100% compatible with a debugger. When the students are happy with their output they can run (using the command line or IDE):
```
python -m cs101.report1_grade
```
This runs an identical set of tests, but produces a `.token` file the students can upload to get credit.
- The reason to have a separate `report1_grade.py` script is to avoid accidental removal of tests.
- The `report1_grade.py` script includes all tests and the main parts of the framework and is obfuscated by default. You can apply a much stronger level of protection by using e.g. `pyarmor`.
- The `report1_token.token` file includes the outcome of the tests, the time taken, and all python source code in the package. In other words, the file can be used for manual grading, for plagiarism detection, and for detecting tampering.
- You can easily use the framework to include the output of functions in the `.token` file.
- See below for how to validate the students' results
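Since the `.token` file is a pickled dictionary (the validation example at the end of this document reads it the same way), an instructor can inspect it directly for manual grading. A minimal sketch, assuming the default file name from above:
```python
import pickle

# 'report1_token.token' is the file name used above; adjust to your report.
with open('report1_token.token', 'rb') as f:
    results = pickle.load(f)

# The dictionary contains the test outcomes, timings, and the packed source code.
print("Total score:", results['total'])
```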
### How safe is this?
Cheating within the framework is probably best accomplished by manually editing the `.token` file or by creating a broken set of tests. This carries a risk of being trivially detected, for instance because tests have the wrong runtime, but more importantly the framework automatically packs all the used source code, so a cheating student has no way to hide it from an instructor who looks at the results. If the program is used in conjunction with automatic plagiarism software, cheating therefore involves both breaking the framework and creating 'false' solutions which statistically match other students' solutions, and then hoping nobody bothers to check the output.

The bottom line is that I think plain old plagiarism is a much more significant risk, and one the framework reduces relative to other project work by demanding that the source code is included.
If this is not enough, you have two options: You can either use `pyarmor` to create a **very** difficult challenge for a prospective hacker, or you can simply validate the students' results as shown below.
## Example 2: The framework
One of the main advantages of `unitgrade` over web-based autograders is that tests are really easy to develop and maintain. To take advantage of this, we simply change the class the questions inherit from to `UTestCase` (this is still a `unittest.TestCase`) and we can make use of the cache system. As an example:
```python
class Week1(UTestCase):
    """ The first question for week 1. """

    def test_add(self):
        from cs102.homework1 import add
        self.assertEqualC(add(2, 2))
        self.assertEqualC(add(-100, 5))

    def test_reverse(self):
        """ Reverse a list """  # Add a title to the test.
        from cs102.homework1 import reverse_list
        self.assertEqualC(reverse_list([1, 2, 3]))
```
Note we have changed the test function to `self.assertEqualC` (the `C` is for cache) and dropped the expected result. What `unitgrade` will do
is evaluate the test *on the working version of the code*, compute the results of the test, and make them available to the student. All this happens in the `deploy.py` script from before.
There are other ways to send the output to the user. For instance:
```python
class Question2(UTestCase):
    """ Second problem """

    @cache
    def my_reversal(self, ls):
        # The '@cache' decorator ensures the function is not run on the *student's* computer.
        # Instead the code is run on the teacher's computer and the result is passed on with
        # the other pre-computed results -- i.e. this function will run correctly regardless
        # of how the student happens to have implemented reverse_list.
        from cs102.homework1 import reverse_list
        return reverse_list(ls)

    def test_reverse_tricky(self):
        ls = ("butterfly", 4, 1)
        ls2 = self.my_reversal(tuple(ls))         # This will always produce the right result.
        ls3 = self.my_reversal(tuple([1, 2, 3]))  # Also works; the cache respects input arguments.
        self.assertEqualC(self.my_reversal(tuple(ls2)))  # This will actually test the student's code.
        return ls
```
This code showcases the `@cache` decorator. It computes the output of the function on your computer and makes that
result available to students (the input arguments must be immutable). This may seem odd, but it is very helpful:
- If you have exercises that depend on each other, and you want students to have access to the expected result of earlier methods which they may not have implemented correctly.
- If you want to use functions the students write to set up appropriate tests without giving away the solution.

Furthermore, one of the tests now has a return value, which will be automatically included in the `.token` file.
## Example 3: Hidden and secure tests
To use `unitgrade` as a true autograder, you want both assurance that nobody tampered with your tests (or the `.token` files) and
assurance that the students' implementations didn't simply detect which input was being used and
return the correct answer. To achieve that you need hidden tests and external validation.
Our new test class looks like this:
```python
from unitgrade2.unitgrade2 import UTestCase, Report, hide
from unitgrade2.unitgrade_helpers2 import evaluate_report_student

class Week1(UTestCase):
    """ The first question for week 1. """

    def test_add(self):
        from cs103.homework1 import add
        self.assertEqualC(add(2, 2))
        self.assertEqualC(add(-100, 5))

    @hide
    def test_add_hidden(self):
        # This is a hidden test. The @hide-decorator will allow unitgrade to remove the test.
        # See the output in the student directory for more information.
        from cs103.homework1 import add
        self.assertEqualC(add(2, 2))

import cs103
class Report3(Report):
    title = "CS 101 Report 3"
    questions = [(Week1, 20)]  # Include a single question for 20 credits.
    pack_imports = [cs103]

if __name__ == "__main__":
    evaluate_report_student(Report3())
```
This test is stored as `report3_complete.py`. Note the `@hide` decorator, which tells the framework that the test (and all its code) should be hidden from the user.

In order to use the hidden tests, we first need a version for the students without them. This can be done by changing the `deploy.py` script; only the final validation step of the changed script is shown below:
```python
import pickle

# student_token_file is the path to the student's uploaded .token file, and checked_token
# holds our own independent re-evaluation; both are produced by the steps above.
# Let's quickly compare the student's score to what we got (the dictionary contains all relevant information, including code).
with open(student_token_file, 'rb') as f:
    results = pickle.load(f)
print("Student's score was:", results['total'])
print("My independent evaluation of the student's score was", checked_token['total'])
```
These steps compile a Docker image (you can easily add whatever packages you need) and run **our** `report3_complete_grade.py` script on the **student's** source code (as taken from the token file).
The last lines load the result and compare the scores -- in this case both will return 0 points.
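For orientation, the elided Docker steps could look roughly like the following sketch; the module and helper names (`unitgrade_private2.docker_helpers`, `compile_docker_image`, `docker_run_token_file`) are assumptions for illustration rather than a confirmed API:
```python
# A sketch of the external validation flow described above. All names are assumptions.
from unitgrade_private2.docker_helpers import compile_docker_image, docker_run_token_file

student_token_file = "students/cs103/report3_token.token"  # assumed location of the student's upload

# Build the grading image; extend the Dockerfile with whatever packages your course needs.
docker_image = compile_docker_image(Dockerfile="Dockerfile")

# Run *our* report3_complete_grade.py against the source code packed in the student's
# token file, producing the independently checked result compared against above.
checked_token = docker_run_token_file(student_token_file, docker_image)
```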