plom.db module

Plom database stuff.

class plom.db.PlomDB(dbfile_name='plom.db', *, db_name, db_host, db_port, db_username, db_password)[source]

The main Plom database.

ID_delete_predictions(*, predictor=None)

Remove the predictions for IDs, either from a particular predictor or all of them.

Keyword Arguments:

predictor (str/None) – which predictor. If not specified, defaults to None which means all predictors.

ID_get_donotmark_images(test_number)

Return the DoNotMark page images of a paper.

Parameters:

test_number (int) –

Returns:

(True, file_list) where file_list is a possibly-empty list of file names. Otherwise, (False, “NoTest”) or (False, “NoScanAndNotIDd”).

Return type:

2-tuple

ID_get_predictions(*, predictor=None)
Return a dict of predicted test to student_ids.

If all predictions are returned, each dict value contains a list of prediction dicts. If predictions for a specified predictor are returned, each dict value contains a single prediction dict.

Keyword Arguments:

predictor (str/None) – which predictor. If not specified, defaults to None which means all predictors, so multiple predictions may be returned for each paper number.

ID_id_paper(paper_num, user_name, sid, sname, checks=True)

Associate student name and id with a paper in the database.

Parameters:
  • paper_num (int) –

  • user_name (str) – User who did the IDing.

  • sid (str, None) – student ID. None if the ID page was blank: typically sname will then contain some short explanation.

  • sname (str) – student name.

  • checks (bool) – by default (True), the paper must be scanned and the username must match the current owner of the paper (typically because the paper was assigned to them). Pass False to ID the paper without being its owner (e.g., during automated IDing of prenamed papers).

Returns:

(True, None, None) if successful, or (False, int, msg) on errors, where msg gives details about the error. Some of these should not occur and indicate possible bugs. The int hints at a suggested HTTP status code; currently it can be 404, 403, or 409. (False, 403, msg) means the ID task belongs to a different user (only tested when checks=True). (False, 404, msg) means the paper was not found or not scanned yet. (False, 409, msg) means sid is already in use elsewhere.

Return type:

tuple
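
The 3-tuple convention above lends itself to a simple dispatch. A minimal sketch with stubbed return values shaped as documented; the helper handle_id_result is illustrative, not part of the plom API:

```python
def handle_id_result(result):
    """Translate the documented (ok, code, msg) 3-tuple into a short report.

    Purely illustrative: mimics how a caller might map the hint codes
    (403 wrong owner, 404 not found/unscanned, 409 sid in use) to messages.
    """
    ok, code, msg = result
    if ok:
        return "identified"
    return f"error {code}: {msg}"

# Stubbed results matching the documented shapes:
print(handle_id_result((True, None, None)))          # identified
print(handle_id_result((False, 409, "sid in use")))  # error 409: sid in use
```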

IDcountAll()

Count all tests in which the ID page is scanned.

IDcountIdentified()

Count all tests in which the ID page is scanned and student has been identified.

IDgetDoneTasks(user_name)

When an ID client logs on, it requests a list of papers it has already IDd. Send back the list.

IDgetIdentifiedTests()

All tests in which the ID page is scanned and student has been identified.

IDgetImage(user_name, test_number)

Return ID page image of a paper.

Parameters:
  • user_name (str) –

  • test_number (int) –

Returns:

(True, file) or (True, None). Otherwise, (False, “NoTest”), (False, “NoScanAndNotIDd”), or (False, “NotOwner”).

Return type:

2-tuple

IDgetImageFromATest()

Returns ID image from a randomly selected unid’d test.

IDgetImagesOfUnidentified()

For every used but unidentified test, find the filename of its ID page. Returns a dictionary of testNumber -> filename.

TODO: add an optional flag to drop those with high (prenamed) level of prediction confidence?

IDgetNextTask()

Find an unid’d test and send its test_number to the client.

IDgetUnidentifiedTests()

All tests in which the ID page is scanned but the student is not yet identified.

IDgiveTaskToClient(user_name, test_number)

Assign test #test_number as a task to the given user if available.

Returns:

(True, image_file) if available else (False, msg) where msg is a short string: “NoTest”, “NotScanned”, “NotOwner”.

Return type:

2-tuple

IDreviewID(test_number)

Replace the owner of the ID task for test test_number with the reviewer.

MaddExistingTag(username, task, tag_text)

Add an existing tag to the task.

Returns:

ok, errcode, msg.

Return type:

tuple

McheckTagKeyExists(tag_key)

Check that the given tag_key is in the database.

McheckTagTextExists(tag_text)

Check that the given tag_text is in the database.

McountAll(q, v)

Count all the scanned q/v groups.

McountMarked(q, v)

Count all the q/v groups that have been marked.

McreateNewTag(user_name, tag_text)

Create a new tag entry in the DB.

Parameters:
  • user_name (str) – name of user creating the tag

  • tag_text (str) – the text of the tag - already validated by system

Returns:

(True, key) or (False, err_msg) where key is the key for the new tag. Can fail if the tag text is not alphanumeric, or if the tag already exists.

Return type:

tuple

McreateRubric(user_name, rubric)

Create a new rubric entry in the DB

Parameters:
  • user_name (str) – name of user creating the rubric element

  • rubric (dict) – dict containing the rubric details. Must contain these fields: {kind: “relative”, display_delta: “-1”, value: -1, out_of: 0, text: “blah”, question: 2} or {kind: “absolute”, display_delta: “1 / 5”, value: 1, out_of: 5, text: “blah”, question: 2}. (TODO: make out_of optional for relative rubrics?) The following fields are optional and empty strings will be substituted: {tags: “blah”, meta: “blah”, versions: [1, 2], parameters: []}. Currently it is OK if the dict contains other fields: they are ignored. versions should be a list of integers, or the empty list, which means “all versions”. parameters is a list of per-version substitutions.

Returns:

(True, key) or (False, err_msg) where key is the key for the new rubric. Can fail if missing fields.

Return type:

tuple
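
The two rubric kinds described above can be sketched as plain dicts. Field names follow the docstring; the text values are made up, and the is_complete check is illustrative (it mirrors the documented "can fail if missing fields" behaviour, not the actual server-side validation):

```python
# Hypothetical rubric payloads matching the documented required fields.
relative_rubric = {
    "kind": "relative",
    "display_delta": "-1",
    "value": -1,
    "out_of": 0,
    "text": "missing units",
    "question": 2,
}
absolute_rubric = {
    "kind": "absolute",
    "display_delta": "1 / 5",
    "value": 1,
    "out_of": 5,
    "text": "partial credit for setup",
    "question": 2,
}

REQUIRED_FIELDS = {"kind", "display_delta", "value", "out_of", "text", "question"}

def is_complete(rubric):
    # Illustrative: McreateRubric is documented to fail on missing fields.
    return REQUIRED_FIELDS.issubset(rubric)

print(is_complete(relative_rubric), is_complete(absolute_rubric))
```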

MgetAllTags()

Return a list of all tags; each tag is a pair (key, text).

MgetDoneTasks(user_name, q, v)

When a marker client logs on, it requests a list of papers it has already marked. Send back the list of [group-id, mark, marking_time, [list_of_tag_texts]] for each paper.

MgetNextTask(q, v, *, tag, above)

Find unmarked (but scanned) q/v-group and send the group-id back to client.

MgetOneImageFilename(image_id, md5)

Get the filename of one image.

Parameters:
  • image_id – internal db ref number to image

  • md5 – the md5sum of that image (as sanity check)

Returns:

[True, file_name] or [False, error_msg] where error_msg is the string "no such image" or "wrong md5sum", and file_name is a string.

Return type:

list

MgetOneImageRotation(image_id, md5)

Get the rotation of one image.

Parameters:
  • image_id – internal db ref number to image

  • md5 – the md5sum of that image (as sanity check)

Returns:

[True, rotation] or [False, error_msg] where error_msg is the string "no such image" or "wrong md5sum", and rotation is a float.

Return type:

list

MgetRubrics(question=None)

Get list of rubrics sorted by kind, then delta, then text.

MgetTagsOfTask(task)

Get tags on given task.

Returns:

If no such task, return None.

Return type:

str/None

MgetWholePaper(test_number, question)

All non-ID pages of a paper, highlighting which belong to a question.

Returns:

(True, rval) on success or (False, msg) on failure. Here msg is an error message and rval is a list of dict with keys pagename, md5, id, orientation, server_path, order and included.

Return type:

tuple

Raises:

RuntimeError – some unexpected thing that we think cannot happen.

Mget_annotations(number, question, edition=None, integrity=None)

Retrieve the latest annotations, or a particular set of annotations.

Parameters:
  • number (int) – paper number.

  • question (int) – question number.

  • edition (None/int) – None means get the latest annotation, otherwise this controls which annotation set. Larger number is newer.

  • integrity (None/str) – an optional checksum system the details of which I have forgotten.

Returns:

[True, plom_json_data , annotation_image] on success or on error [False, error_msg]. If the task is not yet annotated, the error will be "no_such_task".

Return type:

list

MgiveTaskToClient(user_name, group_id, version)

Assign a marking task to a certain user, and give them back needed data.

Parameters:
  • user_name (str) – the user name who is claiming the task.

  • group_id (str) – a “task code” like "q0020g3"

  • version (int) – version requested - must match that in db.

Returns:

On error, [False, code, errmsg] where code is a string: "other_claimed", "not_known", "not_scanned", "unexpected", "mismatch" and errmsg is a human-readable error message.

On success, the list is [True, metadata, [list of tag texts], integrity_check] where each row of metadata consists of dicts with keys id, md5, included, order, server_path, orientation.

Note: server_path is implementation-dependent, could change without notice, etc. Clients could use this to get hints for what to use for a local file name for example.

Return type:

list

Assigns the question/version given by group_id as a task to the given user, unless it has already been taken by another user.

Creates a new annotation by copying the last one for that qdata; pages are created when it is returned.
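
A marking client might unpack the documented success/error shapes like this; the stubbed lists below stand in for an actual call, and describe_task_claim is illustrative only:

```python
def describe_task_claim(result):
    """Illustrative unpacking of the documented MgiveTaskToClient return list."""
    if result[0]:
        _, metadata, tag_texts, integrity_check = result
        return f"claimed: {len(metadata)} page(s), tags={tag_texts}"
    _, code, errmsg = result
    return f"refused ({code}): {errmsg}"

# Stubs shaped as the docstring describes:
success = [
    True,
    [{"id": 7, "md5": "abc123", "included": True,
      "order": 1, "server_path": "pages/x.png", "orientation": 0}],
    ["needs_review"],
    "some-integrity-token",
]
failure = [False, "other_claimed", "task taken by another user"]
print(describe_task_claim(success))
print(describe_task_claim(failure))
```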

MmodifyRubric(user_name, key, change)

Modify or create a rubric based on an existing rubric in the DB.

Currently this modifies the existing rubric, increasing its revision number. However, this is subject to change and should be considered an implementation detail. It's very likely we will move to an immutable model. At any rate, the returned new_key should be considered as replacing the original, and the old key should not be used to place new annotations. It might, however, be used to find outdated ones to tag or otherwise update papers.

Parameters:
  • user_name (str) – name of user creating the rubric element

  • key (str) – key for the rubric

  • change (dict) – dict containing the changes to make to the rubric. Must contain these fields: {kind: “relative”, delta: “-1”, text: “blah”, tags: “blah”, meta: “blah”}. Other fields will be ignored. Note this means you can think you are changing, e.g., the question, but this will silently not happen. TODO: in the future we might prevent changing the “kind” or the sign of the delta.

Returns:

(True, new_key) containing the newly generated key (which might be the old key but this is not promised), or (False, “incomplete”), or (False, “noSuchRubric”).

Return type:

tuple

MremoveExistingTag(task, tag_text)

Remove an existing tag from the task.

Parameters:
  • task (str) – Code string for the task (paper number and question).

  • tag_text (str) – Text of tag to remove.

Returns:

None

Raises:
  • ValueError – no such task.

  • KeyError – no such tag.

MrevertTask(task)

Reset task, removing all annotations.

Returns:

[bool, error_msg] where bool is True on success and False on failure. On failure, error_msg is string explanation appropriate for showing to users.

Return type:

list

MreviewQuestion(test_number, question)

Give ownership of the given marking task to the reviewer.

Returns:

None

Raises:
  • ValueError – could not find paper or question.

  • RuntimeError – no “reviewer” account.

MtakeTaskFromClient(task, user_name, mark, annot_fname, plom_json, rubrics, marking_time, md5, integrity_check, images_used)

Get marked image back from client and update the record in the database. Update the annotation. Check to see if all questions for that test are marked and if so update the test’s ‘marked’ flag.

RgetCompleteHW()

Get a list of [test_number, sid] that have complete hw-uploads - ie all questions present.

RgetCompletionStatus()

Return a dict of the completion status of every test (ie whether completely scanned or not). Each dict entry is of the form dict[test_number] = [scanned_or_not, identified_or_not, number_of_questions_marked, time_of_last_update]

RgetCoverPageInfo(test_number)

For the given test, return information to build the coverpage for the test. We return a list of the form [[student_id, student_name], [question, version, mark] for each question].

RgetDanglingPages()

Find all pages that belong to groups that are not scanned.

RgetFilesInAllTests()

Return an audit of the files used in all the tests.

RgetFilesInTest(test_number)

Return a list of images and their bundle info for all pages of the given test.

Parameters:

test_number (int) – which test.

Returns:

with keys "id", "dnm", "q1", "q2", etc. Each value is a list of dicts, one for each page. Each of those dicts has keys original_name, bundle_name, bundle_order. Additional keys likely to be added.

Return type:

dict

Note: only scanned pages are included.
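
The returned dict can be flattened into a simple audit listing. A sketch using stub data shaped as documented above; audit_rows is illustrative, not part of the API:

```python
def audit_rows(files_in_test):
    """Flatten an RgetFilesInTest-style dict into (group, bundle, order) rows."""
    rows = []
    for group, pages in files_in_test.items():  # keys like "id", "dnm", "q1", ...
        for page in pages:
            rows.append((group, page["bundle_name"], page["bundle_order"]))
    return rows

# Stub shaped as the docstring describes (only scanned pages included):
stub = {
    "id": [{"original_name": "s1.pdf", "bundle_name": "b1", "bundle_order": 1}],
    "q1": [{"original_name": "s1.pdf", "bundle_name": "b1", "bundle_order": 2}],
}
print(audit_rows(stub))  # [('id', 'b1', 1), ('q1', 'b1', 2)]
```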

RgetIDReview()

Return information about every identified paper. For each paper return a tuple of [test_number, who did the IDing, the time, the student ID, and the student name].

RgetIdentified()

Return dict of identified tests - ie ones for which student ID/name are known. Indexed by test-number, lists pairs (student_id/student_name). Note that this includes papers which are not completely scanned.

RgetIncompleteTests()

Get dict of incomplete tests - ie some test pages scanned but not all.

Indexed by test_number. Each test lists triples [page-code, version, scanned_or_not]. page-code is t{page}, h{question}{order}, or l{order}. Note: if no tpages are scanned, then it will not return tpages. Similarly, if no hwpages/expages are scanned, then it will not return hwpages/expages.

RgetMarkHistogram(q, v)

Return a dict of dicts containing histogram of marks for the given q/v as hist[user][question][mark]=count.

RgetMarkReview(*, filterPaperNumber, filterQ, filterV, filterUser, filterMarked)

Return a list of all marked qgroups satisfying the filter conditions.

Filter on paper-number, question-number, version, user-name and whether it is marked. The string "*" is a wildcard to match all papers. TODO: how does type work here? I guess they are either int/str, would it be better to use None/int with None as the wildcard?

Returns:

for each matching qgroup we return a list of the form: [testnumber, question, version, mark of latest annotation, username, marking_time, time finished].

Return type:

list-of-lists

RgetMissingHWQ()

Get dict of tests with missing HW pages - ie some pages scanned but not all. Indexed by test_number. Each test gives [sid, missing hwq’s]. The question-group of each hw-q is checked to see if any tpages are present - if there are some, then it is not included, as it is likely partially scanned.

RgetNotAutoIdentified()

Return list of test numbers of scanned but unidentified tests. See also IDgetImagesOfUnidentified.

RgetOriginalFiles(test_number)

Return list of the filenames for the original (unannotated) page images for the given test.

Lightly deprecated: but still used by reassembly of only-IDed (offline graded) papers.

RgetOutToDo()

Return a list of tasks that are currently out with clients. These have status “todo”. For each task we return a triple [code, user, time], where code is id-t{testnumber} or mrk-t{testnumber}-q{question}-v{version}. Note that the datetime object is not directly jsonable, so it is converted to a string via datetime_to_json, which uses arrow.

RgetProgress(spec, q, v)

For the given question/version, return a simple progress summary: a dict with keys [numberScanned, numberMarked, numberRecent, avgMark, avgTimetaken, medianMark, minMark, modeMark, maxMark]. numberRecent is the number done in the last hour.

RgetQuestionUserProgress(q, v)

For the given q/v, return the number of questions marked by each user (only users who marked something in this q/v - so no zeros). Return a dict of the form [number_scanned, [user, nmarked, avgtime], [user, nmarked, avgtime], etc].

RgetScannedTests()

Get a dict of all scanned tests indexed by test_number. Each test lists pairs [page-code, page-version]. page-code is t.{page}, h.{question}.{order}, or e.{question}.{order}.

RgetSpreadsheet()

Return a dict that contains all the information needed to build the spreadsheet.

RgetStatus(test_number)

For the given test_number return detailed status information.

Returns:

keys and values:

  • number = test_number

  • identified = id’d or not (boolean)

  • marked = marked or not (boolean)

Then if id’d we also add keys/values:

  • sid = student id

  • sname = student name

  • iwho = who did the id-ing

For each question then add a sub-dict with key = that question number, and key/values:

  • marked = marked or not

  • version = the version of that question

if marked also add:

  • mark = the score

  • who = who did the marking.

Return type:

dict
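
A status dict as described might look like the stub below; summarize_status is an illustrative consumer, not part of the API:

```python
def summarize_status(status):
    """Illustrative one-line summary of an RgetStatus-style dict."""
    parts = [f"paper {status['number']}"]
    if status["identified"]:
        parts.append(f"id'd as {status['sid']} by {status['iwho']}")
    parts.append("marked" if status["marked"] else "unmarked")
    return ", ".join(parts)

# Stub shaped as documented: top-level keys, id info, plus a per-question sub-dict.
stub = {
    "number": 12,
    "identified": True,
    "marked": False,
    "sid": "10234567",
    "sname": "A. Student",
    "iwho": "HAL",
    1: {"marked": True, "version": 2, "mark": 4, "who": "marker1"},
}
print(summarize_status(stub))  # paper 12, id'd as 10234567 by HAL, unmarked
```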

RgetUnusedTests()

Return list of tests (by testnumber) that have not been used - ie no test-pages scanned, no hw pages scanned.

RgetUserFullProgress(user_name)

Return the number of completed tasks of each type for the given user. Return [number_id’d, number_marked], where number_marked is the number marked across all questions.

Rget_rubric_counts()

Return dict of rubrics indexed by key, containing minimal details and counts.

Rget_rubric_details(key)

Get a given rubric by its key, return its details and all the tests using that rubric.

Rget_test_rubric_count_matrix()

Return count matrix of rubric vs test_number.

addSingleTestToDB(spec, t, vmap_for_test)

Build a single test in the database from the spec and version map.

Parameters:
  • spec (dict) – exam specification, see plom.SpecVerifier().

  • t (int) – the test number to build

  • vmap_for_test (dict) – version map indexed by question number for the given test. It is a slice of the global version_map

Returns:

(ok, status), where ok is True on success, and status is a status string with newlines: one line per test, ending with an error message on failure (ok False).

Return type:

2-tuple

Raises:
  • KeyError – problems with version map or spec

  • ValueError – attempt to create test n without test n-1. or attempts to create a test that already exists.

  • RuntimeError – unexpected error, for example we were able to create the test but not the question groups associated with it.

addTPages(tref, gref, t, pages, v)

For initial construction of test-pages for a test. We use these so we know what structured pages we should have.

add_or_change_predicted_id(paper_number, sid, *, certainty=0.9, predictor='prename')

Pre-id a paper with a given student id. If that test already has a prediction of that sid, then do nothing.

Parameters:
  • paper_number (int) –

  • sid (str) – a student id.

Keyword Arguments:
  • certainty (float) – TODO: meaning of this is still evolving.

  • predictor (str) – what sort of prediction this is, meaning is still evolving but “prename” is a rather special case. Others include “MLLAP” and “MLGreedy” and may change in future.

Returns:

(True, None, None) if successful, (False, 404, msg) on error.

Return type:

tuple

buildUpToDateAnnotation(qref)

The pages under the given qgroup have changed, so the old annotations need to be flagged as outdated, and a new up-to-date annotation needs to be instantiated. This also sets the parent qgroup and test as unmarked, and the qgroup status is set to an empty string, “”, ie not ready to go.

If only the zeroth annotation is present, then the question is untouched. In that case, recycle the zeroth annotation rather than replacing it. We do this so that the initial upload doesn’t create new annotations on each uploaded page.

Parameters:

qref (QGroup) – reference to the QGroup being updated.

Returns:

nothing.

checkTPage(test_number, page_number)

Check whether or not the test/page has been scanned. If so, return [collision message, version, image filename]; else return [unscanned message, version].

checkTestScanned(tref)

Check if all groups scanned.

Parameters:

tref (Test) – A reference to the test being checked.

Returns:

True - all groups scanned (and so ready), False otherwise.

Return type:

bool

createNewBundle(bundle_name, md5)

Checks to see if bundle exists.

Parameters:
  • bundle_name (str) –

  • md5 (str) –

Returns:

If a bundle exists that matches by name xor by md5sum, return (False, “name”) or (False, “md5sum”). If a bundle matches both name and md5sum, return (True, skip_list) where skip_list is a list of the page-orders from that bundle that are already in the system; the scan scripts will then skip those uploads. If no such bundle exists, we create it and return (True, []): an empty skip-list.

Return type:

2-tuple
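
The skip-list convention could be used by an upload script as follows; the stubbed 2-tuples stand in for the DB call, and pages_to_upload is an illustrative helper, not part of the API:

```python
def pages_to_upload(create_result, page_orders):
    """Given a createNewBundle-style 2-tuple, decide which pages still need uploading.

    Illustrative only: page_orders is the list of page positions in the bundle.
    """
    ok, info = create_result
    if not ok:
        # info is "name" or "md5sum": the bundle clashes with an existing one.
        raise ValueError(f"bundle clash on {info}")
    skip_list = info  # on success, info lists page-orders already in the system
    return [p for p in page_orders if p not in skip_list]

print(pages_to_upload((True, []), [1, 2, 3]))      # fresh bundle: upload all
print(pages_to_upload((True, [1, 2]), [1, 2, 3]))  # partial retry: upload the rest
```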

createNewImage(original_name, file_name, md5, bundle_ref, bundle_order)

Create an image and return the reference.

Parameters:
  • original_name (pathlib.Path/str) – just the filename please: we will not strip the path for you.

  • file_name (pathlib.Path/str) – the path and filename where the file is stored on the server.

  • md5 (str) –

  • bundle_ref (TODO) – TODO

  • bundle_order (int) – TODO

doesBundleExist(bundle_name, md5)

Check if a bundle with the given name and md5sum exists.

Parameters:
  • bundle_name (str) –

  • md5 (str) –

Returns:

there are 4 possibilities:

  • neither match: no matching bundle, return (False, None)

  • name but not md5: return (True, “name”) - user is trying to upload different bundles with same name.

  • md5 but not name: return (True, “md5sum”) - user is trying to upload same bundle with different name.

  • both match: return (True, “both”) - user could be retrying after network failure (for example) or uploading unknown or colliding pages. That is, they previously uploaded some from the bundle but now are uploading more (Issue #1008).

Return type:

2-tuple
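
The four documented cases map naturally onto a small dispatch; classify_bundle_check is illustrative, not part of the API:

```python
def classify_bundle_check(result):
    """Map a doesBundleExist-style 2-tuple onto a short explanation."""
    exists, how = result
    if not exists:
        return "new bundle"
    # The three documented match cases:
    return {
        "name": "different bundle with the same name",
        "md5sum": "same bundle under a different name",
        "both": "exact match: likely a retry or follow-up upload",
    }[how]

print(classify_bundle_check((False, None)))
print(classify_bundle_check((True, "both")))
```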

getAllTestImages(test_number)

All pages in this paper, including ID pages.

Returns:

(True, rval) on success or (False, msg) on failure. Here msg is an error message and rval is a list of lists where each inner “row” consists of: [name, md5sum, id, orientation, server_path].

Return type:

tuple

getBundleFromImage(file_name)

From the given filename, get the name of the bundle the image is in. Returns [False, message] or [True, bundle_name].

getDiscardedPages()

Get information about the discarded pages.

Returns:

each entry is a dict of information about a discarded page. Keys include server_path, orientation, bundle_name, bundle_position, md5sum, id, and reason.

Return type:

list

getImagesInBundle(bundle_name)

Get list of images in the given bundle. Returns [False, message] or [True, imagelist] where imagelist is a list of triples (filename, md5sum, bundle_order), ordered by bundle_order.

getPageFromBundle(bundle_name, bundle_order)

Get the image at position bundle_order from the bundle of the given name.

getPageVersions(t)

Get the mapping between page numbers and versions for a test.

Parameters:

t (int) – a paper number.

Returns:

keys are page numbers (int) and value is the page version (int), or empty dict if there was no such paper.

Return type:

dict

getQuestionImages(test_number, question)

All pages in this paper and this question.

Returns:

(True, rval) on success or (False, msg) on failure. Here msg is an error message and rval is a list of lists where each inner “row” consists of: name, md5sum, id, orientation, server_path

Return type:

tuple

getUnknownPages()

Get information about the unknown pages.

Returns:

each entry is a dict of information about an unknown page. Keys include server_path, orientation, bundle_name, bundle_position, md5sum, id.

Return type:

list

getUserToken(uname)

Return user’s saved token or None if logged out.

Parameters:

uname (str) – username.

Returns:

user’s token, or None if the user is not logged in.

Return type:

str/None

Raises:

ValueError – no such user.

get_all_question_versions()

Get the mapping between question numbers and versions for all tests.

Returns:

a dict of dicts, where the outer keys are test number (int), the inner keys are question numbers (int), and values are the question version (int). If there are no papers yet, return an empty dict.

Return type:

dict
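
A consumer of the documented dict-of-dicts might slice it per question like this; versions_for_question and the stub data are illustrative:

```python
def versions_for_question(qvmap, question):
    """From a get_all_question_versions-style dict-of-dicts, collect each
    paper's version of one question."""
    return {paper: versions[question] for paper, versions in qvmap.items()}

# Stub: outer keys are test numbers, inner keys are question numbers,
# values are question versions, as documented.
stub = {1: {1: 1, 2: 2}, 2: {1: 2, 2: 1}}
print(versions_for_question(stub, 1))  # {1: 1, 2: 2}
```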

get_groups_using_image(img_ref)

Get all groups that use the given image in a not-outdated annotation. Note that the image may still be attached to a tpage/hwpage/expage, but if that page has been removed then it will no longer be attached to one of these, and so not directly attached to a group. Hence this function searches for annotations that use the image (via an apage) and then finds the associated parent qgroup and grandparent group.

Parameters:

img_ref (Image) – a reference to the image

Returns:

the set of groups that make use of that image in an annotation.

Return type:

set(Group)

get_question_versions(t)

Get the mapping between question numbers and versions for a test.

Parameters:

t (int) – a paper number.

Returns:

keys are question numbers (int) and value is the question version (int), or empty dict if there was no such paper.

Return type:

dict

hasAutoGenRubrics()

Do we have the manager auto-generated “no answer” rubrics.

Returns:

True if we have such a thing, else False.

Return type:

bool

how_many_papers_in_database()

How many papers have been created in the database.

is_paper_database_initialised()

True if it’s too late to change the structure of your papers.

You can change the spec up until the paper database is initialised.

is_paper_database_populated()

True if any papers have been created in the DB.

The database is initially created with empty tables. Users get added. This function still returns False. A spec is added; still False. The paper database is initialised but has no papers; this function still returns False (so perhaps you are looking for our cousin is_paper_database_initialised()). Rows are added to the paper table; finally this function returns True.

listBundles()

Returns a list of bundles in the database

Args: None

Returns:

One dict for each bundle. Each dict contains three key-value pairs: “name”, “md5sum” and “numberOfPages”. If no bundles in the system, then it returns an empty list.

Return type:

list-of-dict

moveCollidingToTPage(file_name, test_number, page_number, version)

Move the collision into a TPage and move the original TPage to discards.

Returns:

(True, None, None), or (status, code, error_msg) where the last field is human-readable.

Return type:

3-tuple

moveUnknownToExtraPage(file_name, test_number, questions)

Map an unknown page onto an extra page.

Parameters:
  • file_name (str) – a path and filename to an image, e.g., “pages/unknownPages/unk.16d85240.jpg”

  • test_number (int) –

  • questions (list) – list of ints to map this page onto.

Returns:

a 3-tuple, either (True, None, None) if the action worked or (False, code, msg) where code is a short string, which currently can be “notfound”, or “unscanned” and msg is a human-readable string suitable for an error message.

Return type:

tuple

moveUnknownToHWPage(file_name, test_number, questions)

Map an unknown page onto a homework page.

Parameters:
  • file_name (str) – a path and filename to an image, e.g., “pages/unknownPages/unk.16d85240.jpg”

  • test_number (int) –

  • questions (list) – a list of ints.

Returns:

a 3-tuple, either (True, None, None) if the action worked or (False, code, msg) where code is a short string, which currently can be “notfound”, and msg is a human-readable string suitable for an error message.

Return type:

tuple

removeScannedEXPage(test_number, question, order)

Remove a single scanned extra-page.

Returns:

(ok, code, errmsg), where ok is a boolean; code is the short string “unknown”, or None when ok is True.

Return type:

tuple

removeScannedHWPage(test_number, question, order)

Remove a single scanned hw-page.

Returns:

(ok, code, errmsg), where ok is a boolean; code is the short string “unknown”, or None when ok is True.

Return type:

tuple

removeScannedTestPage(test_number, page_number)

Remove a single scanned test-page.

Returns:

(ok, code, errmsg), where ok is a boolean; code is a short string, “unknown” or “unscanned”, or None when ok is True.

Return type:

tuple

remove_id_from_paper(paper_num)

Remove association between student name and id and a paper.

This returns the paper to the ones that need to be ID’d.

Parameters:

paper_num (int) –

Returns:

bool

remove_predicted_id(paper_number, *, predictor=None)

Remove any id predictions associated with a particular paper.

Parameters:

paper_number (int) –

Keyword Arguments:

predictor (str) – what sort of prediction this is, meaning is still evolving but “prename” is a rather special case. Others include “MLLAP” and “MLGreedy” and may change in future. TODO: if missing are we going to erase them all?

Returns:

(True, None, None) if successful, or (False, 404, msg) if paper_number does not exist.

Return type:

tuple

replaceMissingTestPage(test_number, page_number, version, original_name, file_name, md5)

Add an image, often a template placeholder, to replace a missing page.

Returns:

(bool, reason, message_or_tuple), bool is true on success, false on failure, reason is a short code. These are documented in uploadTestPage().

Return type:

tuple

sidToTest(student_id)

Find the test number associated with a student ID.

Parameters:

student_id (int) –

Returns:

(True, int) on success with an integer test number. Or (False, str) with an error message.

Return type:

tuple

testOwnersLoggedIn(tref)

Returns list of logged-in users who own tasks in the given test.

Note - ‘manager’ and ‘HAL’ are not included in this list - else manager could block manager.

updateDNMGroup(dref)

Recreate the DNM pages of the dnm-group, and check if all are present; set the scanned flag accordingly. Since homework does not upload DNM pages, only check testpages. Will fail if there is an unscanned tpage. Note: a DNM group can be empty, in which case this succeeds. Also note: hwscan upload creates and uploads tpages for DNM groups if needed.

Parameters:

dref (DNMGroup) – a reference to the DNM group to be updated

Returns:

True means DNM group is ready (i.e., all tpages scanned), False otherwise (i.e., missing some tpages).

Return type:

bool

updateGroupAfterChange(gref)

Check the type of the group and update accordingly. Return success/failure of that update.

Parameters:

gref (Group) – A reference to the group to be updated.

Returns:

True if the group is ready (ie required pages present), otherwise False.

Return type:

bool

updateIDGroup(idref)

Update the ID task when new pages are uploaded to the IDGroup. Recreate the ID pages and check if all are scanned; set the scanned flag accordingly. If the group is fully scanned then the associated ID task is set to “todo”. Note: be careful when the group was auto-IDd (which happens when the associated user is HAL) - then we don’t change anything. Note: this should only be triggered by a tpage upload. Also note: hwscan creates the required tpage for the IDGroup on upload of pages.

Parameters:

idref (IDGroup) – A reference to the IDGroup of the test.

Returns:

True if the IDGroup (which is a single page) is scanned, False otherwise.

Return type:

bool

updateImageRotation(file_name, rotation)

Update the rotation in the metadata of the image with the given name.

updateQGroup(qref)

A new page has been uploaded to the test, so we have to update the question-group and its annotations. Checks whether the group has sufficient pages present and sets the scanned flag accordingly (strictly speaking, set in the parent ‘group’, not in the qgroup itself).

The updates to the annotations are done by an auxiliary function. Older annotations are now out-of-date and get flagged as such by that aux function.

Parameters:

qref (QGroup) – a reference to the QGroup to be updated.

Returns:

True means that the qgroup is ready (i.e., all tpages present, or hwpages present). False means either that the group is missing some (but not all) tpages, or has no tpages and no hwpages.

Return type:

bool

updateTestAfterChange(tref, group_refs=None)

The given test has changed (page upload/delete) and so its groups need to be updated. When a list or set of group references are passed, just those groups are updated, otherwise all groups updated. When a group is updated, it is checked to see if it is ready (ie sufficient pages present) and any existing work is reset (ie any existing annotations are marked as outdated). After group updates done, the test’s scanned flag set accordingly (ie true when all groups scanned and false otherwise).

Parameters:
  • tref (Test) – reference to the test that needs to be updated after one of its pages has been changed.

  • group_refs (list or set of Group) – If absent, all the groups of the test (and so the corresponding tasks) are updated and reset; otherwise just those groups are updated.

uploadCollidingPage(test_number, page_number, version, original_name, file_name, md5, bundle_name, bundle_order)

Upload the given file as a collision of the tpage given by tpv.

Check that the test and tpage exist - fail if they don’t. Check against other collisions of that tpage - fail if one already exists. Create the image (with a careful check against the bundle) and create the collision linked to the tpage.

uploadTestPage(test_number, page_number, version, original_name, file_name, md5, bundle_name, bundle_order)

Upload an image of a Test Page and link it to the right places.

Returns:

(bool, reason, message_or_tuple), bool is true on success, false on failure. reason is a short code string including “success” (when bool is true). Error codes are “testError”, “pageError”, “duplicate”, “collision”, “bundleError” and “bundleErrorDupe”. message_or_tuple is either human-readable message or a list or tuple of information (in the case of “collision”).

Return type:

tuple
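
A scan script might dispatch on the documented reason codes like this; report_upload is illustrative, and the stubbed triples stand in for actual return values:

```python
def report_upload(result):
    """Illustrative dispatch on an uploadTestPage-style return triple."""
    ok, reason, extra = result
    if ok:
        return "uploaded"
    if reason == "collision":
        # extra is documented to be a list/tuple of information in this case
        return f"collision with existing page: {extra}"
    # Other documented codes: testError, pageError, duplicate,
    # bundleError, bundleErrorDupe; extra is a human-readable message.
    return f"failed ({reason}): {extra}"

print(report_upload((True, "success", "page saved")))
print(report_upload((False, "duplicate", "exact duplicate of an uploaded page")))
```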

userHasToken(uname)

Check whether the user has a saved token.

Parameters:

uname (str) – username.

Returns:

True if the user has a token.

Return type:

bool

Raises:

ValueError – no such user.

plom.db.initialiseExamDatabaseFromSpec(spec, db, version_map=None)[source]

Build metadata for exams from spec but do not build tests in DB.

Parameters:
  • spec (dict) – exam specification, see plom.SpecVerifier().

  • db (database) – the database to populate.

  • version_map (dict/None) – optional predetermined version map keyed by test number and question number. If None, we will build our own random version mapping. For the map format see plom.finish.make_random_version_map().

Returns:

the question-version map.

Return type:

dict

Raises:
  • ValueError – if database already populated, or attempt to build paper n without paper n-1.

  • KeyError – invalid question selection scheme in spec.