Commit 238745ee authored by tuhe
Removed pupdb as a dependency, using diskcache

parent 81f30ddc
# Unitgrade
Unitgrade is an automatic software testing framework that enables instructors to offer automatically evaluated programming assignments with a minimal overhead for students.
Unitgrade is built on Python's `unittest` framework; i.e., you can directly use your existing unittests without any changes. It will therefore integrate well with any modern IDE. What it offers beyond `unittest` is the ability to collect tests in reports (for automatic evaluation)
and an easy and safe mechanism for verifying results. A minimal test sketch is shown after the feature list below.
- 100% Python `unittest` compatible
- Integrates with any modern IDE (VSCode, Pycharm, Eclipse)
- No external configuration files or setup required
- Tests are quick to run and will tell you where your mistake is
- Hint-system collects hints from code and displays them with failed unittests
- A dashboard gives the students an overview of their progress
- Safe and convenient to administer
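As a concrete point of reference, the tests involved are ordinary `unittest` test cases. Below is a minimal sketch (the module, class, and function names are made up for illustration); unitgrade collects classes like this into a report for automatic evaluation:

```python
import unittest

class Week1(unittest.TestCase):
    """A plain unittest test case; unitgrade can collect classes like this into a report."""
    def test_add(self):
        from homework1 import add  # hypothetical student module and function
        self.assertEqual(add(2, 2), 4)

if __name__ == "__main__":
    unittest.main()
```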
### Why this instead of an online autograder?
Online autograding services will often say that they have adapted their particular model in order to make students better or happier. I did a small thought-experiment and asked myself what I would ideally want out of an autograder if I was a student. I quickly realized the only thing I really cared about was how easily it allowed me to fix bugs in my homework assignments. In other words, I think students prioritize the same thing as we all do when we write software tests -- to quickly and easily fix problems.
However, I don't think an online autograder is a test-system I would like to use for any of my software projects.
- Why would I want my tests to be executed in another environment than my development-environment?
- Why would I want to copy-paste code online (or rely on a sub-standard web-IDE without autocomplete)?
- The lack of a debugger would drive me nuts
- Why even have an external tool when my IDE has excellent test plugins?
Simply put, I'd never want to use an online autograder as a way to fix issues in my own software projects, so why should students prefer it?
The alternative is obvious -- simply give students a suite of unittests. This raises some potential issues such as safety and administrative convenience, but they turned out to be easy to solve.
## Installation
Unitgrade requires Python 3.8 or higher and can be installed using `pip`:
```terminal
pip install unitgrade
```
After the command completes you should be all set. If you want to upgrade an old version of unitgrade, run:
```terminal
pip install unitgrade --upgrade --no-cache-dir
```
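To check which version you ended up with, a standard `pip` query works (this is plain `pip`, not a unitgrade-specific command):

```terminal
pip show unitgrade
```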
If you are using Anaconda with a virtual environment, you can also install it as you would any other package:
```terminal
source activate myenv
conda install git pip
@@ -29,7 +41,7 @@ pip install unitgrade
```
When you are done, you should be able to import unitgrade. Type `python` in the terminal and try:
```pycon
>>> import unitgrade
```
## Using Unitgrade
@@ -56,9 +68,9 @@ The file will run and show an output where the score of each question is compute
| | | |_ __ _| |_| | \/_ __ __ _ __| | ___
| | | | '_ \| | __| | __| '__/ _` |/ _` |/ _ \
| |_| | | | | | |_| |_\ \ | | (_| | (_| | __/
\___/|_| |_|_|\__|\____/_| \__,_|\__,_|\___| v0.1.29.0, started: 16/09/2022 12:49:50
02531 week 5: Looping (use --help for options)
Question 1: Cluster analysis
* q1.1) clusterAnalysis([0.8, 0.0, 0.6]) = [1, 2, 1] ?.............................................................PASS
* q1.2) clusterAnalysis([0.5, 0.6, 0.3, 0.3]) = [2, 2, 1, 1] ?.....................................................PASS
@@ -89,7 +101,7 @@ Question 4: Fermentation rate
* q4.4) fermentationRate([20.1, 19.3, 1.1, 18.2, 19.7, ...], 18.2, 20) = 19.500 ?..................................PASS
* q4) Total.................................................................................................... 10/10
Total points at 12:49:54 (0 minutes, 4 seconds)....................................................................40/40
Provisional evaluation
--------- -----
q1) Total 10/10
@@ -101,7 +113,7 @@ Total 40/40
Note your results have not yet been registered.
To register your results, please run the file:
>>> looping_tests_grade.py
In the same manner as you ran this file.
```
@@ -112,6 +124,24 @@ python cs101report1_grade.py
```
This script will run *the same tests as before* and generate a file named `Report0_handin_18_of_18.token` (this is called the `token`-file because of the extension). The token-file contains all your results, and it is the token-file you should upload (and no other). Since you cannot (and most definitely should not!) edit it, the number of points is shown directly in the file name.
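If you want to inspect a token-file programmatically, the package ships a `load_token` helper (visible in the `unitgrade.utils` diff in this commit). The sketch below only assumes it takes the path to a `.token` file; the exact structure of what it returns is not shown here, so we simply print it:

```python
from unitgrade.utils import load_token

# Hypothetical file name; point it at the .token file generated next to your grade script.
token = load_token("Report0_handin_18_of_18.token")
print(token)
```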
### The dashboard
I recommend watching and running the tests from your IDE, as this allows you to use the debugger in conjunction with your tests. However, I have put together a dashboard that allows you to see the outcome of individual tests and what is currently recorded in your `token`-file. To start the dashboard, simply run the command
```
unitgrade
```
in a terminal from a directory that contains a test (the directory will be searched recursively for tests). This will start a small background service and open this page:
![The dashboard](https://gitlab.compute.dtu.dk/tuhe/unitgrade/-/raw/master/docs/dashboard.png)
What currently works:
- Shows you which files need to be edited to solve the problem
- Collects hints given in the homework files
- Fully responsive -- the terminal will update while the test is running, regardless of where you launch the test
- Allows you to re-run tests
- Shows current test status and results captured in `.token`-file
- Tested on Windows/Linux
Note that the run feature currently assumes that your system-wide `python` command can run the tests. This may not be the case if you are using virtual environments -- I expect to fix this soon.
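In the meantime, a simple workaround is to start the dashboard from inside the environment that can actually run your tests (a sketch; `myenv` is just an example name):

```terminal
source activate myenv
unitgrade
```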
### Why are there two scripts?
The reason there are two scripts (one with the `_grade.py` extension and one without) is that the tests should be easy to debug, but at the same time we have to avoid accidental changes to the test scripts. The tests themselves are the same, so if one script works, so should the other.
@@ -165,9 +195,9 @@ Please contact me and we can discuss your specific concerns.
# Citing
```bibtex
@online{unitgrade,
title={Unitgrade (0.1.29.0): \texttt{pip install unitgrade}},
url={https://lab.compute.dtu.dk/tuhe/unitgrade},
urldate = {2022-09-16},
month={9},
publisher={Technical University of Denmark (DTU)},
author={Tue Herlau},
......
# Unitgrade
Unitgrade is an automatic software testing framework that enables instructors to offer automatically evaluated programming assignments with a minimal overhead for students.
Unitgrade is built on Python's `unittest` framework; i.e., you can directly use your existing unittests without any changes. It will therefore integrate well with any modern IDE. What it offers beyond `unittest` is the ability to collect tests in reports (for automatic evaluation)
and an easy and safe mechanism for verifying results.
- 100% Python `unittest` compatible
- Integrates with any modern IDE (VSCode, Pycharm, Eclipse)
- No external configuration files or setup required
- Tests are quick to run and will tell you where your mistake is
- Hint-system collects hints from code and displays them with failed unittests
- A dashboard gives the students an overview of their progress
- Safe and convenient to administer
### Why this instead of an online autograder?
Online autograding services will often say that they have adapted their particular model in order to make students better or happier. I did a small thought-experiment and asked myself what I would ideally want out of an autograder if I was a student. I quickly realized the only thing I really cared about was how easily it allowed me to fix bugs in my homework assignments. In other words, I think students prioritize the same thing as we all do when we write software tests -- to quickly and easily fix problems.
However, I don't think an online autograder is a test-system I would like to use for any of my software projects.
- Why would I want my tests to be executed in another environment than my development-environment?
- Why would I want to copy-paste code online (or rely on a sub-standard web-IDE without autocomplete)?
- The lack of a debugger would drive me nuts
- Why even have an external tool when my IDE has excellent test plugins?
Simply put, I'd never want to use an online autograder as a way to fix issues in my own software projects, so why should students prefer it?
The alternative is obvious -- simply give students a suite of unittests. This raises some potential issues such as safety and administrative convenience, but they turned out to be easy to solve.
## Installation
Unitgrade requires Python 3.8 or higher and can be installed using `pip`:
```terminal
pip install unitgrade
```
After the command completes you should be all set. If you want to upgrade an old version of unitgrade, run:
```terminal
pip install unitgrade --upgrade --no-cache-dir
```
If you are using Anaconda with a virtual environment, you can also install it as you would any other package:
```terminal
source activate myenv
conda install git pip
@@ -29,7 +41,7 @@ pip install unitgrade
```
When you are done, you should be able to import unitgrade. Type `python` in the terminal and try:
```pycon
>>> import unitgrade
```
## Using Unitgrade
@@ -60,6 +72,24 @@ python cs101report1_grade.py
```
This script will run *the same tests as before* and generate a file named `Report0_handin_18_of_18.token` (this is called the `token`-file because of the extension). The token-file contains all your results, and it is the token-file you should upload (and no other). Since you cannot (and most definitely should not!) edit it, the number of points is shown directly in the file name.
### The dashboard
I recommend watching and running the tests from your IDE, as this allows you to use the debugger in conjunction with your tests. However, I have put together a dashboard that allows you to see the outcome of individual tests and what is currently recorded in your `token`-file. To start the dashboard, simply run the command
```
unitgrade
```
in a terminal from a directory that contains a test (the directory will be searched recursively for tests). This will start a small background service and open this page:
![The dashboard](https://gitlab.compute.dtu.dk/tuhe/unitgrade/-/raw/master/docs/dashboard.png)
What currently works:
- Shows you which files need to be edited to solve the problem
- Collects hints given in the homework files
- Fully responsive -- the terminal will update while the test is running, regardless of where you launch the test
- Allows you to re-run tests
- Shows current test status and results captured in `.token`-file
- Tested on Windows/Linux
Note that the run feature currently assumes that your system-wide `python` command can run the tests. This may not be the case if you are using virtual environments -- I expect to fix this soon.
### Why are there two scripts?
The reason there are two scripts (one with the `_grade.py` extension and one without) is that the tests should be easy to debug, but at the same time we have to avoid accidental changes to the test scripts. The tests themselves are the same, so if one script works, so should the other.
......
docs/dashboard.png (310 KiB)
@@ -7,7 +7,7 @@ if __name__ == "__main__":
from jinjafy.bibliography_maker import make_bibliography
bibtex = make_bibliography("../setup.py", "./")
out = subprocess.check_output("python --version").decode("utf-8")
out = subprocess.check_output("python --version".split()).decode("utf-8")
fn = unitgrade_private.__path__[0] + "/../../examples/02631/instructor/week5/looping_tests.py"
out = subprocess.check_output(f"cd {os.path.dirname(fn)} && python {os.path.basename(fn)} --noprogress", shell=True, encoding='utf8', errors='strict')
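# Editor's illustration (not part of the repository): the one-line change above reflects
# that subprocess.check_output expects an argument list when shell=True is not used,
# hence the .split(). With shell=True a single command string is fine, as in the second call.
import subprocess
version = subprocess.check_output(["python", "--version"]).decode("utf-8")             # list form
version2 = subprocess.check_output("python --version", shell=True, encoding="utf-8")   # string + shell
print(version.strip(), version2.strip())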
......
@online{unitgrade,
title={Unitgrade (0.1.29.0): \texttt{pip install unitgrade}},
url={https://lab.compute.dtu.dk/tuhe/unitgrade},
urldate = {2022-09-16},
month={9},
publisher={Technical University of Denmark (DTU)},
author={Tue Herlau},
......
@@ -6,14 +6,13 @@ import os
import logging
import sys
import glob
from pathlib import Path
from flask import Flask, render_template
from flask_socketio import SocketIO
from unitgrade.utils import picklestring2dict, load_token
from unitgrade.utils import load_token
from unitgrade.dashboard.app_helpers import get_available_reports, _run_test_cmd
from unitgrade.dashboard.watcher import Watcher
from unitgrade.dashboard.file_change_handler import FileChangeHandler
from unitgrade.framework import DKPupDB
from unitgrade.utils import DKPupDB
from unitgrade.dashboard.dbwatcher import DBWatcher
from diskcache import Cache
logging.getLogger('werkzeug').setLevel("WARNING")
@@ -42,55 +41,27 @@ def mkapp(base_dir="./", use_command_line=True):
if state == "fail":
pass
print("Emitting update of", key, "to", state)
def callb(*args, **kwargs):
print("Hi I am the callback function")
socketio.emit('testupdate', {"id": key, 'state': state, 'stacktrace': wz, 'stdout': db.get('stdout'),
'run_id': db.get('run_id'),
'coverage_files_changed': coverage_files_changed}, namespace="/status", to=x.get("client_id", None), callback=callb)
z = 234
'coverage_files_changed': coverage_files_changed}, namespace="/status")
def do_something(file_pattern):
"""
Oh crap, `file` has changed on disk. We need to open it, look at it, and then do stuff based on what is in it.
That is, we push all chnages in the file to clients.
We don't know what are on the clients, so perhaps push everything and let the browser resolve it.
`file` has changed on disk. We need to open it, look at it, and then do stuff based on what is in it.
Then push all changes to clients.
"""
with watched_files_lock:
file = watched_files_dictionary[file_pattern]['file']
type = watched_files_dictionary[file_pattern]['type']
lrc = watched_files_dictionary[file_pattern]['last_recorded_change']
if type == 'question_json': # file.endswith(".json"):
if type == 'question_json': # file.endswith(".json"); these no longer exists so this should never be triggered.
if file is None:
return # There is nothing to do, the file does not exist.
# try:
# db = DKPupDB(file)
# if "state" not in db.keys(): # Test has not really been run yet. There is no reason to submit this change to the UI.
# return
# except Exception as e:
# print(e)
# os.remove(file) # Delete the file. This is a bad database file, so we trash it and restart.
# return
#
# state = db.get('state')
# key = os.path.basename(file)[:-5]
# # print("updating", file, key)
# wz = db.get('wz_stacktrace') if 'wz_stacktrace' in db.keys() else None
# if wz is not None:
# wz = wz.replace('<div class="traceback">', f'<div class="traceback"><div class="{key}-traceback">')
# wz += "</div>"
# coverage_files_changed = db.get('coverage_files_changed') if 'coverage_files_changed' in db.keys() else None
# if state == "fail":
# pass
# print("State is fail, I am performing update", state, key)
# socketio.emit('testupdate', {"id": key, 'state': state, 'stacktrace': wz, 'stdout': db.get('stdout'), 'run_id': db.get('run_id'),
# 'coverage_files_changed': coverage_files_changed}, namespace="/status")
elif type =='coverage':
if lrc is None: # Program startup. We don't care about this.
return
# db = get_report_database()
for q in current_report['questions']:
for i in current_report['questions'][q]['tests']:
test_invalidated = False
@@ -116,41 +87,11 @@ def mkapp(base_dir="./", use_command_line=True):
else:
raise Exception("Bad type: " + type)
# def get_report_database():
# assert False
# import pickle
# dbjson = current_report['json']
# with open(current_report['json'], 'rb') as f:
# rs = pickle.load(f)
#
# # db = DKPupDB(dbjson)
# # from unitgrade_private.hidden_gather_upload import picklestring2dict
# # rs = {}
# # for k in db.keys():
# # if k == 'questions':
# # qenc, _ = picklestring2dict(db.get("questions"))
# # rs['questions'] = qenc # This feels like a good place to find the test-file stuff.
# # else:
# # rs[k] = db.get(k)
#
# lpath_full = Path(os.path.normpath(os.path.dirname(dbjson) + "/../" + os.path.basename(dbjson)[12:].split(".")[0] + ".py"))
# rpath = Path(rs['relative_path'])
# base = lpath_full.parts[:-len(rpath.parts)]
#
# rs['local_base_dir_for_test_module'] = str(Path(*base))
# rs['test_module'] = ".".join(rs['modules'])
#
# del rs['root_dir'] # Don't overwrite this one.
# return rs
def select_report_file(json):
current_report.clear()
for k, v in available_reports[json].items():
current_report[k] = v
# for k, v in get_report_database().items():
# current_report[k] = v
def mkempty(pattern, type):
fls = glob.glob(current_report['root_dir'] + pattern)
fls.sort(key=os.path.getmtime)
@@ -160,13 +101,11 @@ def mkapp(base_dir="./", use_command_line=True):
watched_blocks = []
with watched_files_lock:
watched_files_dictionary.clear()
# db = PupDB(json)
dct = current_report['questions'] # picklestring2dict(db.get('questions'))[0]
for q in dct.values():
for q in current_report['questions'].values():
for i in q['tests'].values():
file = "*/"+i['artifact_file']
watched_blocks.append(os.path.basename( i['artifact_file'])[:-5])
watched_files_dictionary[file] = mkempty(file, 'question_json') # when the file was last changed and when that change was last handled.
watched_files_dictionary[file] = mkempty(file, 'question_json') # when the file was last changed and when that change was last handled. Superflous.
for c in i['coverage_files']:
file = "*/"+c
watched_files_dictionary[file] = mkempty(file, "coverage")
@@ -195,13 +134,8 @@ def mkapp(base_dir="./", use_command_line=True):
print("But this directory does not contain any reports. Please run unitgrade from a directory which contains report files.")
sys.exit()
# x['current_report'] =
select_report_file(list(available_reports.keys()).pop())
# @app.route("/app.js") # Unclear if used
# def appjs():
# return render_template("app.js")
@socketio.on("ping", namespace="/status") # Unclear if used.
def ping():
json = current_report['json']
@@ -209,16 +143,12 @@ def mkapp(base_dir="./", use_command_line=True):
@app.route("/info")
def info_page():
# Print an info page.
# db = Cache(self)
db = Cache( os.path.dirname( current_report['json'] ) )
info = {k: db[k] for k in db}
return render_template("info.html", **current_report, available_reports=available_reports, db=info)
@app.route("/")
def index_bare():
# select_report_file()
return index(list(available_reports.values()).pop()['menu_name'])
@app.route("/report/<report>")
@@ -243,9 +173,7 @@ def mkapp(base_dir="./", use_command_line=True):
it_key_js = "-".join(it_key)
# do a quick formatting of the hints. Split into list by breaking at *.
hints = it_value['hints']
hints = [] if hints is None else hints.copy()
for k in range(len(hints)):
ahints = []
for h in hints[k][0].split("\n"):
@@ -266,25 +194,13 @@ def mkapp(base_dir="./", use_command_line=True):
@socketio.on("rerun", namespace="/status")
def rerun(data):
t0 = time.time()
"""write to the child pty. The pty sees this as if you are typing in a real
terminal.
"""
# db = get_report_database()
targs = ".".join( data['test'].split("-") )
m = '.'.join(current_report['modules'])
# cmd = f"python -m {m} {targs}"
# cmd = f"python -m unittest {m}.{targs}"
# import unittest
_run_test_cmd(dir=current_report['root_dir'], module_name=m, test_spec=targs, use_command_line=use_command_line)
# try:
# pass
# # out = subprocess.run(cmd, cwd=db['local_base_dir_for_test_module'], shell=True, check=True, capture_output=True, text=True)
# except Exception as e: # I think this is related to simple exceptions being treated as errors.
# print(e)
# pass
# print("oh dear.")
for q in current_report['questions']:
for i in current_report['questions'][q]['tests']:
if "-".join(i) == data['test']:
@@ -298,8 +214,6 @@ def mkapp(base_dir="./", use_command_line=True):
"""write to the child pty. The pty sees this as if you are typing in a real
terminal.
"""
# db = get_report_database()
# db = current_report
m = '.'.join(current_report['modules'])
_run_test_cmd(dir=current_report['root_dir'], module_name=m, test_spec="", use_command_line=use_command_line)
@@ -311,12 +225,10 @@ def mkapp(base_dir="./", use_command_line=True):
def wz():
return render_template('wz.html')
@socketio.event
def connect(sid, environ):
print(environ)
print(sid)
# username = authenticate_user(environ)
# socketio.save_session(sid, {'username': 'bobthebob'})
# @socketio.event
# def connect(sid, environ):
# print(environ)
# print(sid)
@socketio.on("reconnected", namespace="/status")
@@ -325,8 +237,8 @@ def mkapp(base_dir="./", use_command_line=True):
terminal.
"""
print("--------Client has reconnected----------")
sid = 45;
print(f"{sid=}, {data=}")
# sid = 45;
# print(f"{sid=}, {data=}")
with watched_files_lock:
for k in watched_files_dictionary:
if watched_files_dictionary[k]['type'] in ['token', 'question_json']:
@@ -348,20 +260,12 @@ def main():
args_host = "127.0.0.1"
# Deploy local files for debug.
deploy.main(with_coverage=False)
deploy.main(with_coverage=True)
mk_bad()
bdir = os.path.dirname(deploy.__file__)
app, socketio, closeables = mkapp(base_dir=bdir)
green = "\033[92m"
end = "\033[0m"
log_format = green + "pyxtermjs > " + end + "%(levelname)s (%(funcName)s:%(lineno)s) %(message)s"
debug = False
# logging.basicConfig(
# format=log_format,
# stream=sys.stdout,
# level=logging.DEBUG if True else logging.INFO,
# )
logging.info(f"serving on http://{args_host}:{args_port}")
os.environ["WERKZEUG_DEBUG_PIN"] = "off"
socketio.run(app, debug=debug, port=args_port, host=args_host, allow_unsafe_werkzeug=True)
......
@@ -5,7 +5,6 @@ import unittest
import os
import glob
import pickle
# from pupdb.core import PupDB
from pathlib import Path
def get_available_reports(jobfolder):
......
@@ -12,10 +12,11 @@ import urllib.parse
import requests
import ast
import numpy
from unittest.case import TestCase
from unitgrade.runners import UTextResult
from unitgrade.utils import gprint, Capturing2, Capturing
from unitgrade.artifacts import StdCapturing
from diskcache import Cache
colorama.init(autoreset=True) # auto resets your settings after every output
numpy.seterr(all='raise')
@@ -25,50 +26,6 @@ def setup_dir_by_class(C, base_dir):
return base_dir, name
class DKPupDB:
def __init__(self, artifact_file, use_pupdb=True):
# Make a double-headed disk cache thingy.
self.dk = Cache(os.path.dirname(artifact_file)) # Start in this directory.
self.name_ = os.path.basename(artifact_file[:-5])
if self.name_ not in self.dk:
self.dk[self.name_] = dict()
self.use_pupdb = use_pupdb
if self.use_pupdb:
from pupdb.core import PupDB
self.db_ = PupDB(artifact_file)
def __setitem__(self, key, value):
if self.use_pupdb:
self.db_.set(key, value)
with self.dk.transact():
d = self.dk[self.name_]
d[key] = value
self.dk[self.name_] = d
self.dk[self.name_ + "-updated"] = True
def __getitem__(self, item):
v = self.dk[self.name_][item]
if self.use_pupdb:
v2 = self.db_.get(item)
if v != v2:
print("Mismatch v1, v2 for ", item)
return v
def keys(self): # This one is also deprecated.
return tuple(self.dk[self.name_].keys()) #.iterkeys())
# return self.db_.keys()
def set(self, item, value): # This one is deprecated.
self[item] = value
def get(self, item, default=None):
return self[item] if item in self else default
def __contains__(self, item):
return item in self.dk[self.name_] #keys()
# return item in self.dk
_DASHBOARD_COMPLETED_MESSAGE = "Dashboard> Evaluation completed."
# Consolidate this code.
@@ -78,7 +35,7 @@ class classmethod_dashboard(classmethod):
if not cls._generate_artifacts:
f(cls)
return
from unitgrade.utils import DKPupDB
db = DKPupDB(cls._artifact_file_for_setUpClass())
r = np.random.randint(1000 * 1000)
db.set('run_id', r)
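# (editor's note, not in the original diff) a fresh random run_id is written to the artifact store;
# the dashboard reads it back and forwards it in the socketio 'testupdate' emit above,
# presumably so the client can tell separate runs apart.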
@@ -374,8 +331,7 @@ class UTestCase(unittest.TestCase):
if not self._generate_artifacts:
return super().run(result)
from unitgrade.artifacts import StdCapturing
from unittest.case import TestCase
from unitgrade.utils import DKPupDB
db = DKPupDB(self._artifact_file())
db.set("state", "running")
......
@@ -6,25 +6,24 @@ import lzma
import hashlib
import pickle
import base64
import os
from collections import namedtuple
from io import StringIO
import numpy as np
import tqdm
from colorama import Fore
from functools import _make_key
from diskcache import Cache
_CacheInfo = namedtuple("CacheInfo", ["hits", "misses", "maxsize", "currsize"])
def gprint(s):
print(f"{Fore.LIGHTGREEN_EX}{s}")
myround = lambda x: np.round(x) # required for obfuscation.
msum = lambda x: sum(x)
mfloor = lambda x: np.floor(x)
"""
Clean up the various output-related helper classes.
"""
@@ -304,3 +303,46 @@ def load_token(file_in):
## Key/value store related.
class DKPupDB:
""" This key/value store store artifacts (associated with a specific question) in a dictionary. """
def __init__(self, artifact_file, use_pupdb=False):
# Make a double-headed disk cache thingy.
self.dk = Cache(os.path.dirname(artifact_file)) # Start in this directory.
self.name_ = os.path.basename(artifact_file[:-5])
if self.name_ not in self.dk:
self.dk[self.name_] = dict()
self.use_pupdb = use_pupdb
if self.use_pupdb:
from pupdb.core import PupDB
self.db_ = PupDB(artifact_file)
def __setitem__(self, key, value):
if self.use_pupdb:
self.db_.set(key, value)
with self.dk.transact():
d = self.dk[self.name_]
d[key] = value
self.dk[self.name_] = d
self.dk[self.name_ + "-updated"] = True
def __getitem__(self, item):
v = self.dk[self.name_][item]
if self.use_pupdb:
v2 = self.db_.get(item)
if v != v2:
print("Mismatch v1, v2 for ", item)
return v
def keys(self): # This one is also deprecated.
return tuple(self.dk[self.name_].keys()) #.iterkeys())
# return self.db_.keys()
def set(self, item, value): # This one is deprecated.
self[item] = value
def get(self, item, default=None):
return self[item] if item in self else default
def __contains__(self, item):
return item in self.dk[self.name_] #keys()
# return item in self.dk
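# Editor's usage sketch of the class above (not part of the repository; the artifact
# path is made up). DKPupDB keeps one dict per artifact file inside a diskcache.Cache
# located in the artifact file's directory.
db = DKPupDB("artifacts/Question1-test_something.json")
db.set("state", "running")         # deprecated alias for db["state"] = "running"
print(db.get("state"))             # -> "running"
print("state" in db, db.keys())    # membership test and the stored keys
print(db.get("missing", "n/a"))    # the default is returned for absent keys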