November 6, 2017

The Importance of Student Privacy in Big Data

Category: Digital Citizenship

I’ve written in the past about understanding the terms of service of sites you ask students to use, and I’ve been interviewed about the implications of social media in classes. This year, one of the things I want to focus on is bringing those two threads together. It is not enough to know the terms we ask our students to work under when we require the use of social media or other proprietary digital platforms for course work; we also need to understand the implications, and the potentially devastating effects, these tools and platforms might have for our students’ futures.

Recently, I read “Privacy, Poverty, and Big Data: A Matrix of Vulnerabilities for Poor Americans.” The paper wonderfully outlines some of the decisions being made based on the data streams individuals and their networks create. This includes data gleaned from social media streams run through sentiment analysis algorithms, mobile location information, and more. These streams have direct consequences for people, and they disproportionately affect people from poorer communities (and those who associate with them, since these algorithms look not just at the “health” of the individual but at their entire network). These algorithms determine things like employability, the ability to rent an apartment, and admission to schools. Because they take in so much data, it is nearly impossible to pinpoint which input most affected a decision, which leaves almost no legal protection: it is almost impossible to prove the algorithms are engaging in discriminatory practices, even as the evidence increasingly shows their decisions follow earlier, codified forms of structural discrimination.
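The paper itself contains no code, but to make the mechanics concrete, here is a minimal sketch in Python of the kind of lexicon-based sentiment scoring with network averaging it describes. Every name, word list, and weight here is hypothetical; real systems use far richer models, but the structure is the same: an individual’s score is blended with the scores of the people they are connected to.

```python
# Toy illustration of sentiment scoring over a social network.
# Word lists, data, and the blending weight are all hypothetical.

POSITIVE = {"promoted", "hired", "graduated", "saving", "stable"}
NEGATIVE = {"evicted", "fired", "broke", "overdue", "struggling"}

def post_sentiment(post: str) -> float:
    """Score one post from -1 (negative) to 1 (positive)."""
    words = post.lower().split()
    raw = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 10))

def user_score(posts: list[str]) -> float:
    """Average sentiment across one user's posts."""
    return sum(post_sentiment(p) for p in posts) / max(len(posts), 1)

def network_score(user_posts, friends_posts, friend_weight=0.4):
    """Blend a user's own score with the mean score of their network.

    This is the step with discriminatory reach: a person is penalized
    for the circumstances of the people they are connected to.
    """
    own = user_score(user_posts)
    friend_scores = [user_score(p) for p in friends_posts]
    network = sum(friend_scores) / max(len(friend_scores), 1)
    return (1 - friend_weight) * own + friend_weight * network

# Hypothetical applicant: their own posts score well, but their
# network drags the composite score down.
applicant = ["just graduated and got hired"]
friends = [["rent overdue again, totally broke"],
           ["fired last week, struggling"]]
print(network_score(applicant, friends))  # 0.2 instead of 1.0
```

Note how opaque the result is from the outside: the applicant never posted anything negative, yet their composite score drops, and no single input can be identified as the deciding factor.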

While the article addresses education, it focuses only on how admissions offices use social media feeds. Given that social media use varies by income level, and that lower-income users tend to have weaker privacy protections and less digital literacy, this is an issue for multiple reasons. The article notes, for instance, that more affluent applicants can hire a social media manager to clean up their digital footprint before applying.


While this is troublesome, we do have some fail-safes in place to ensure access to higher education across income levels. Community colleges with rolling admissions and state schools that offer on-the-spot acceptance mean that, while choices may be limited, access to higher education does exist. Community colleges matter here because they give students two years’ worth of coursework in which to build a different digital version of themselves through data literacy, meaningful digital engagement, and high-impact practices such as ePortfolio creation.

The biggest risk for students at all levels is the privacy implications of our corporate digital culture, especially given that the time an individual spends as a student is, by design, temporary and transient. That isn’t to say tracking is inherently bad: the business model on which much of the digital spine is built exploits data in various ways, oftentimes helpful ones, as the success of devices and services such as Siri, Google Home, and Amazon Echo shows. But the temporariness of the student’s position means this data was never meant to be permanent. One of the things I would always tell my students is to experiment in class, with thoughts, with expression, and with projects, because the classroom is one of the few places where they can try something and fail with limited consequences for their actual lives. With digital projects hosted by corporate entities, that may no longer be true.

Data literacy is the easiest piece to disregard because it is the most hidden, and the one part of the digital world that doesn’t map easily onto an earlier practice. Data has always existed: IBM’s punch cards allowed large amounts of it to be collected, and computational sorting at scale, along with its implications, has been imagined since the 1960s. But the reality of Big Data is different from what was imagined. The original fear was that increased computational sorting power would cost jobs, and that this should be our biggest concern. As the article points out, though, the implications are far more dire and fall along preexisting lines of structural inequality.

What does this mean? What steps can we take to help students protect themselves? I always tell the students and faculty I work with that what gives me the greatest trepidation is the combination of digital tools with mobile technologies: once location triangulation is possible, they leave no space outside of tracking. That said, we know the students most at risk of adverse effects from their data are both more likely to use a smartphone as their primary means of connecting to the internet and less likely to have privacy controls set so that they can’t be tracked through their device or on social media. We can walk them through how to adjust those settings. We can also design with their data safety in mind.

As I’ve spoken with a few people since my last post on frugal innovation, I’ve been thinking about this more and more. One of the other things moving me toward frugal innovation in digital learning is that when we design learning experiences frugally, we are better able to control the data trails students leave and to imagine the implications. If we keep our plans small and limit the number of platforms, we have more time to sit with the terms, something I’ve advocated for many years, both here and in any professional development I do with faculty. Additionally, as the global political climate shifts, and we see how detrimental sharing on social media can be for vulnerable students, we have to do our due diligence in considering the implications of the choices we impose on our students as we integrate digital media into our learning spaces.

* Mary Madden, Michele Gilman, Karen Levy, and Alice Marwick, Privacy, Poverty, and Big Data: A Matrix of Vulnerabilities for Poor Americans, 95 Wash. U. L. Rev. 53 (2017). Available at: http://openscholarship.wustl.edu/law_lawreview/vol95/iss1/6