tuhe / Unitgrade · Commits

Commit 85cc6192, authored 10 months ago by tuhe

    Minor change not on pypi

Parent: 5d01c8d9
Branch: master
No related tags or merge requests found.

Showing 3 changed files with 61 additions and 30 deletions:

- src/unitgrade.egg-info/PKG-INFO (1 addition, 1 deletion)
- src/unitgrade/framework.py (59 additions, 28 deletions)
- src/unitgrade/version.py (1 addition, 1 deletion)
src/unitgrade.egg-info/PKG-INFO (+1, −1)

 Metadata-Version: 2.1
 Name: unitgrade
-Version: 1.0.0.7
+Version: 1.0.0.11
 Summary: A student homework/exam evaluation framework build on pythons unittest framework.
 Home-page: https://lab.compute.dtu.dk/tuhe/unitgrade
 Author: Tue Herlau
 ...
src/unitgrade/framework.py (+59, −28)

...
@@ -143,35 +143,55 @@ class Report:
         modules = os.path.normpath(relative_path[:-3]).split(os.sep)
         relative_path = relative_path.replace("\\", "/")
         if relative_path.startswith(".."):
-            error = """
+            while True:
+                error = """
 --------------------------------------------------------------------------------------
 Oh no, you got an installation problem!
+Please read this carefully:
-You have accidentally downloaded (and installed) the course software in
-two locations on your computer.
-The first place is:
+You are running the grade script from a different location than the folder you
+installed the course software in.
+The location of the course software has been determined to be:
-> %s
+a> %s
-And the second place is the location that contains this file, namely:
+And the location of the grade file is:
-> %s
+b> %s
-I can't be certain which of these two contains your actual homework, so therefore I have to give you an error :-(.
-You are seeing this warning to ensure that the grade-script does not accidentally evaluate a different version
-of the your work. No worries, you can still be evaluated and hand in.
-But it is easy to fix! Determine which of the two folders contain your homework and simply delete the other one. That
-should take care of the problem!
-(The terminal in VS Code will tell you the location on your computer you have open right now -- most likely, that is the
-location of the right 02002student folder!).
+You have two options:
-In the future, try to avoid downloading and installing the course software many times -- most issues
-can be solved in a much simpler way.
-If this problem persists, please contact us on piazza, discord, or directly on tuhe@dtu.dk (or come by my office, building 321, room 127).
-Include a copy of this error and a screenshot of VS Code.
-""" % (root_dir_0, self_file_0)
-            print(error)
-            sys.exit(1)
-            raise Exception(error)
+1) Determine which one of the two locations (a) or (b) contain your homework and delete (or move) the other
+folder to a backup location on your computer.
+Then restart your IDE for the change to take effect, and run the grade-script from the correct location.
+To select this option, type 1 in the terminal followed by enter.
+2) You can choose to evaluate and include the source code found at location (a):
+> %s
+Select this option if this folder indeed contain your solutions. Then you should ensure that the number of points in the .token file name,
+and the result of the tests as printed to the terminal, agrees with your own assessment.
+You should also include a screenshot of this error as well as your python-files.
+To select this option, type 2 in the terminal followed by enter.
+Select either of the two options by typing the number '1' or '2' in the terminal followed by enter. Only input a single number.
+> """ % (root_dir_0, self_file_0, root_dir_0)
+                num = input(error)
+                if num == '1':
+                    print("""you selected option 1. The script will now exit.
+Remember that you can always hand in your .py-files and a screenshot of this problem and we will evaluate your homework manually.""")
+                    sys.exit(1)
+                elif num == '2':
+                    print("You selected option 2. The script will attempt to continue and include homework from the folder")
+                    print("root_dir")
+                    break
+                else:
+                    print("-" * 50)
+                    print("Please input a single number '1' or '2' followed by enter. Your input was: ", num)
+                    print("-" * 50)
         return root_dir, relative_path, modules
...
...
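The hunk above replaces a fail-fast error with an input-validation loop: re-prompt until the user types '1' or '2'. The same pattern can be sketched standalone (the `prompt_choice` helper and its injectable `read` parameter are mine, not part of unitgrade; `read` exists so the loop can be exercised without a terminal):

```python
def prompt_choice(message, valid=('1', '2'), read=input):
    """Re-prompt until the user enters one of the valid options, then return it."""
    while True:
        num = read(message)
        if num in valid:
            return num
        # Invalid input: explain and loop around to ask again.
        print("-" * 50)
        print("Please input a single number '1' or '2' followed by enter. Your input was:", num)
        print("-" * 50)
```

With `read=input` this blocks on the terminal just like the committed `while True` loop; in a test, a canned sequence of answers can be supplied instead.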
@@ -440,8 +460,7 @@ class UTestCase(unittest.TestCase):
             mute = not result.show_errors_in_grade_mode
         else:
             pass
             # print("this' not a text result.")
             # print(result.show_errors_in_grade_mode)
         from unitgrade.artifacts import StdCapturing
         from unitgrade.utils import DKPupDB
         self._error_fed_during_run = []  # Initialize this to be empty.
...
...
@@ -670,9 +689,13 @@ class UTestCase(unittest.TestCase):
         return key in self.__class__._cache

     def get_expected_test_value(self):
         key = (self.cache_id(), 'assert')
         id = self._assert_cache_index
         cache = self._cache_get(key)
         if cache is None:
             return "The cache is not set for this test. You may have deleted the unitgrade_data-directory or files therein, or the test is not deployed correctly."
         _expected = cache.get(id, f"Key {id} not found in cache; framework files missing. Please run deploy()")
         return _expected
...
...
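`get_expected_test_value` above avoids raising on a missing cache entry by using `dict.get` with a fallback string, so the message surfaces in the test output instead of a `KeyError`. A miniature of that lookup (the toy `cache` dict is mine, for illustration only):

```python
# Hypothetical miniature of the cache lookup: the stored expected value is
# returned when the key is present, and a human-readable message otherwise.
cache = {3: 42}
present = cache.get(3, "missing")                    # -> 42
absent = cache.get(7, "Key 7 not found in cache")    # -> the fallback string
```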
@@ -688,6 +711,7 @@ class UTestCase(unittest.TestCase):
             print("Warning, framework missing cache index", key, "id =", id, "- The test will be skipped for now.")
         if self._setup_answers_mode:
             _expected = first  # Bypass by setting equal to first. This is in case multiple self.assertEqualC's are run in a row and have to be set.
+        from numpy.testing import assert_allclose
         # The order of these calls is important. If the method assert fails, we should still store the correct result in cache.
         cache[id] = first
...
...
@@ -708,6 +732,14 @@ class UTestCase(unittest.TestCase):
     def assertEqualC(self, first, msg=None):
         self.wrap_assert(self.assertEqual, first, msg)

+    def assertAlmostEqualC(self, first, places=None, msg=None, delta=None):
+        if isinstance(first, np.ndarray):
+            assert False, "This is not correct. Use assertL1(first, places) instead."
+        import functools
+        fn = functools.partial(self.assertAlmostEqual, places=places, delta=delta)
+        self.wrap_assert(fn, first, msg=msg)
+
     def _shape_equal(self, first, second):
         a1 = np.asarray(first).squeeze()
         a2 = np.asarray(second).squeeze()
...
...
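The new `assertAlmostEqualC` pre-binds `places` and `delta` with `functools.partial`, so `wrap_assert` can later invoke the comparison with just the values. A self-contained illustration of that binding (the demo class is mine, not unitgrade's):

```python
import functools
import unittest

class PartialBindDemo(unittest.TestCase):
    def runTest(self):
        # Pre-bind places=2; the partial now compares to two decimal places
        # without the caller having to repeat the keyword argument.
        approx = functools.partial(self.assertAlmostEqual, places=2)
        approx(3.14159, 3.14158)          # passes: equal when rounded to 2 places
        with self.assertRaises(AssertionError):
            approx(3.14, 3.16)            # fails: differs in the 2nd decimal
```

The same trick works for any bound method whose trailing keywords should be fixed once and reused.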
@@ -734,8 +766,7 @@ class UTestCase(unittest.TestCase):
             return self.wrap_assert(self.assertLinf, first, tol=tol, msg=msg)
         else:
             diff = self._shape_equal(first, second)
-            np.testing.assert_allclose(first, second, atol=tol)
+            np.testing.assert_allclose(first, second, atol=tol, err_msg=msg)
             max_diff = max(diff.flat)
             if max_diff >= tol:
                 from unittest.util import safe_repr
...
...
@@ -744,7 +775,7 @@ class UTestCase(unittest.TestCase):
             # np.testing.assert_almost_equal
             # import numpy as np
             print(f"|first - second|_max = {max_diff} > {tol}")
-            np.testing.assert_almost_equal(first, second)
+            np.testing.assert_almost_equal(first, second, err_msg=msg)
             # If the above fail, make sure to throw an error:
             self.assertFalse(max_diff >= tol, msg=f'Input arrays are not equal within tolerance {tol}')
             # self.assertEqual(first, second, msg=f'Not equal within tolerance {tol}')
...
...
@@ -756,7 +787,7 @@ class UTestCase(unittest.TestCase):
         # We first test using numpys build-in testing method to see if one coordinate deviates a great deal.
         # This gives us better output, and we know that the coordinate wise difference is lower than the norm difference.
         if not relative:
-            np.testing.assert_allclose(first, second, atol=tol)
+            np.testing.assert_allclose(first, second, atol=tol, err_msg=msg)
         diff = self._shape_equal(first, second)
         diff = ((np.asarray(diff.flatten()) ** 2).sum()) ** .5
...
...
@@ -766,7 +797,7 @@ class UTestCase(unittest.TestCase):
         msg = "" if msg is None else msg
         print(f"|first - second|_2 = {max_diff} > {tol}")
         # Delegate to numpy. Let numpy make nicer messages.
-        np.testing.assert_almost_equal(first, second)  # This function does not take a msg parameter.
+        np.testing.assert_almost_equal(first, second, err_msg=msg)  # This function does not take a msg parameter.
         # Make sure to throw an error no matter what.
         self.assertFalse(max_diff >= tol, msg=f'Input arrays are not equal within tolerance {tol}')
         # self.assertEqual(first, second, msg=msg + f"Not equal within tolerance {tol}")
...
...
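The recurring change across the assertL1/assertL2/assertLinf hunks is threading the caller's `msg` into numpy via `err_msg`, since numpy's testing helpers do not accept unittest's `msg` keyword. The effect, assuming numpy is installed, is that the custom text is embedded in the mismatch report:

```python
import numpy as np

first = np.array([1.0, 2.0])
second = np.array([1.0, 2.5])
try:
    # atol=0.1 is violated by the 0.5 difference in the second coordinate.
    np.testing.assert_allclose(first, second, atol=0.1, err_msg="context shown to the student")
except AssertionError as e:
    # err_msg text is appended to numpy's standard mismatch report.
    assert "context shown to the student" in str(e)
else:
    raise RuntimeError("expected a mismatch")
```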
src/unitgrade/version.py (+1, −1)

-__version__ = "1.0.0.8"
+__version__ = "1.0.0.11"