April 16, 2015

The Flaws of Online Course Testing

Categories: Critical Perspectives, Digital Learning
[Illustration: character drawing of a student undergoing an intensive background check, with a police officer interrogating]

A recent New York Times article titled “Online Test-Takers Feel Anti-Cheating Software’s Uneasy Glare” features an interview with a student taking an online course. This part struck me:

“a red warning band appeared on the computer screen indicating that Proctortrack was monitoring her computer and recording video of her. To constantly remind her that she was being watched, the program also showed a live image of her in miniature on her screen.”

Proctortrack is the software used to proctor the exams. The article also looks at a pilot program in Texas that uses the software for an online degree program. The justification given for the monitoring is that online learning is “high stakes,” which translates to “students must take exams to verify that they have learned, and we must do everything we can to verify it is them, even if it means intrusively entering their homes and recording them.”

My first thought when I read about the warning band and miniature image on the screen was: what about students who have things going on in their homes that are best not shared, or worse, illegal to record? (I am especially thinking of students who take online courses because they have families with young children.) The impetus, per the article, is not only the universities protecting their brand, but also their desire to be leaders in the multi-billion-dollar online learning industry. Like many verification platforms, Proctortrack uses algorithms to analyze various biometric inputs from test-takers to verify identity. In order to be algorithmically compliant, students must sit still, look straight ahead, and have consistent lighting during the examination period. If a student fails, he or she is flagged so that the person administering the course can review the video to see what was going on.

There are many implications and questions this raises. Who benefits from making this form of assessment the norm? Who is punished by it? And, what is being normalized?

Testing is already stressful for Black and Latino students in culturally dictated ways. Having to endure this level of surveillance, for bodies that are already surveilled differently, makes an already stressful situation that much more stressful. While Proctortrack and programs like it theoretically surveil everyone equally, as a socio-cultural practice surveillance can never be universal. Further, we know that webcams have a harder time discerning things like facial expressions on darker skin, especially when that process is controlled by an algorithm.

The requirement to sit in one position, not move, and look straight ahead may also pose problems for people who enroll in online courses because physical limitations make attending class in person difficult or impossible; staying upright and centered without moving for the duration of an exam can be just as difficult.

I am having a hard time figuring out who benefits from this other than the algorithm company, which ends up with the data that allows it to make its algorithm better at sorting the “bad” from the “good.” Like most algorithms, though, this sorting has limits and flaws. I worry that these flaws will continue to fall disproportionately on bodies that are already marginalized in education.

The bigger question I have is this: why is the ultimate verification an attempt to port in-person exams to online environments?

We know that testing is flawed. Some people test better than others for a plethora of reasons. The formats we have for timed exams are limited, even in person. While the algorithm takes over part of the labor involved in assessing and standardizing online learning to make the credential meaningful, assessment itself is a system that needs to be remade in our digitally augmented world. One of the biggest shortcomings of depending on tests is that they tend to measure retention instead of progress. If algorithms need to be deployed for verification and assessment, why not do away with the exam format altogether and instead track progress? And if an assessment must be done, why limit it to exams? There have been many experiments in peer grading and digital projects at scale; some of these have been written up on DML Central. Additionally, initiatives like Connected Courses and the current course, DMLCommons, serve as spaces to experiment with what alternative digital media and learning might look like.

The last big question I have is about FERPA (I tweeted that one). One of the people interviewed for the article, Jeffrey Alan Johnson, was kind enough to respond to one of my tweets:

[Embedded tweet exchange]