August 4, 2014

Organizing Algorithms, Calculated Publics in Digitally-Mediated Education

Category: Edtech

Recent news reports have begun to reveal how various analytics companies are now data mining millions of children. The learning analytics company Knewton, for example, claims that 4.1 million students now use its proficiency-based adaptive learning platform, which served 3.5 billion recommendations between May 2013 and May 2014 alone. The role of these predictive analytics platforms and recommender systems in education is causing growing political and parental concern, largely related to privacy. Less acknowledged, however, is the increasingly autonomous and automated capacity of the software algorithms working in the background of these platforms.

Algorithms have become powerful devices in digitally-mediated education. They are present not only in the predictive analytics and recommender systems of adaptive learning platforms, but in the social networking sites where ‘networked publics’ hang out, in the information practices deployed in inquiry learning, in techniques of digital making, and in the ed-tech software promoted in classrooms. To put it bluntly, algorithms are now deeply embedded in the governance of education and learning—where governance means the techniques by which people’s actions, thoughts and ways of conducting themselves are evaluated, shaped and sculpted.

So what do algorithms do, how much power do they have in the social ordering and governance of education, and how might they be influencing the lives of learners? Some recent publications can help us to begin addressing these questions.

What Algorithms Do

In the recent book “Nine Algorithms That Changed the Future,” the computer scientist John MacCormick defines an algorithm simply as “a precise recipe that specifies the exact sequence of steps required to solve a problem.” In computer science specifically, he explains, algorithms are the fundamental entities that computer scientists work with to accomplish a task; without them, there would be no computing.
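To make that definition concrete, here is a minimal sketch in Python (a toy example of my own, not one of MacCormick’s nine): a binary search whose “recipe” spells out the exact sequence of steps for finding an item in a sorted list.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or None if it is absent.

    The 'recipe' is exact: repeatedly halve the search interval until the
    target is found or the interval is empty.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # step 1: pick the midpoint
        if sorted_items[mid] == target:   # step 2: compare with the target
            return mid
        elif sorted_items[mid] < target:  # step 3: discard the half that
            low = mid + 1                 #         cannot contain the target
        else:
            high = mid - 1
    return None

print(binary_search([2, 5, 8, 13, 21], 13))  # -> 3
```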

In addition, all computer algorithms are more or less meaningless without sources of data. Algorithms require some form of input, such as an unsorted list of data, to transform into an output. A search algorithm, for example, works upon a vast database of information that must be indexed, parsed and stored in advance to facilitate fast and accurate information retrieval. This involves companies such as Google crawling the web, collecting and indexing information, logging search queries and links clicked, in order to generate the data required to allow the search algorithms to function autonomously.
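As a toy illustration of that dependence on prepared data (nothing like Google’s production systems, which are not public), the sketch below builds a crude inverted index ahead of time so that queries can be answered quickly afterwards. The documents and words are invented.

```python
from collections import defaultdict

# A handful of 'crawled' documents (invented stand-ins for web pages).
documents = {
    "doc1": "adaptive learning platforms mine student data",
    "doc2": "search algorithms depend on indexed data",
    "doc3": "students interact with learning algorithms daily",
}

# Indexing happens in advance: map each word to the documents containing it.
inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        inverted_index[word].add(doc_id)

def search(query):
    """Return the ids of documents containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(inverted_index[words[0]])
    for word in words[1:]:
        results &= inverted_index[word]  # intersect the postings lists
    return results

print(search("learning algorithms"))  # -> {'doc3'}
```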

However, algorithms are more than simply computer science abstractions and routines for sorting and structuring data. The “algorithms that changed the future” of MacCormick’s title include search engine algorithms for matching and ranking search results, pattern recognition algorithms for targeted online advertising, database algorithms for online shopping and social networking sites, and public key cryptography algorithms for secure online transactions. These algorithms, MacCormick argues, “have a profound effect on our lives.” Social science has now begun to acknowledge these profound effects.

Ordering Algorithms

A 2013 conference, entitled Governing Algorithms, set out to explore some of the profound effects of algorithms. The conference organisers noted how algorithms have become the subject of popular, political and social scientific attention. These interests have only grown amid the hype around Big Data and what Kate Crawford calls the “anxieties of Big Data,” which have together amplified the significance of algorithms.

Andrew Goffey argues in “Software Studies” that algorithms can be understood as things that can “do things,” enabled by the “command structure” of their programming, and that can therefore exert material effects “on themselves, on machines and on humans.” The “data revolution,” “social physics,” and cyberbolic claims that the entire “universe is programmable” reflect the extent to which algorithms are now understood to be “doing things” in a variety of ways.

Although the role of algorithms in the social world remains contested among social science researchers, there is broad agreement that algorithms are now increasingly involved in various forms of social ordering, governance and control. A brief survey of the academic field reveals how algorithms have emerged as an important object of analysis in studies of surveillance, identity formation, popular culture, digital governance, and algorithmic research methods, as well as in wider debates about the apparent power and control that algorithms command.

According to Adrian Mackenzie in the book “Cutting Code,” for example, the central point about algorithms is the way they establish certain forms of “order,” “pattern” and “coordination” through processes of sorting, matching, swapping, structuring and grouping data. In this sense, algorithms appear as new kinds of “social rules” that can then shape everyday life. As David Beer argues, “algorithms are an integrated and irretractable part of everyday social processes,” with the potential “to reinforce, maintain or even reshape visions of the social world, knowledge and encounters with information.”

Likewise, in the book “Code/Space,” Rob Kitchin and Martin Dodge suggest that “algorithms are products of the world” which can also “produce knowledge that then is applied, altering the world in a recursive fashion.” As such, they argue that algorithms provide “grammars of action” for new forms of social ordering and governance, and are endowed with the power to “actively reshape behavior.”

These accounts produce an image of algorithms as powerfully automated, autonomous and recursive technologies — socially produced and, yet, increasingly capable of producing new social formations, encounters and knowledge.

Modelling Algorithms

In order to reshape behaviour in such ways, algorithms require data to work on. In a recent article on “organizing algorithms,” Daniel Neyland argues that the “politics of algorithms” are materialized in different forms of organization, including direct attempts “to model human action through mathematical logics of order.” In order for an algorithmic system to function, he claims, the world outside of the system has to be mathematically modelled in such a way that it can be built into “the social world of the algorithmic system.” Google’s driverless car, for example, relies on built-in ultra-precise digitized maps to navigate the physical world — a compelling case of what Neyland might call building “a world out there into a world in here, in the algorithmic machine.”

To give another example, research on the politics of search engine algorithms by Astrid Mager has shown that algorithms are not merely neutral mathematical devices but are designed to function according to particular, powerful ways of perceiving the world, political assumptions, and the codes of conduct to which their designers and promoters have subscribed. Search engine algorithms reinforce existing ways of ordering the world, or models of social order and organization, which they then project and reproduce as they interact with it.

Moreover, many recent innovations in algorithm design mean that algorithms are increasingly built to adapt as they interact with the world. Processes such as “machine learning” rely on adaptive algorithms and statistical models that can be “fed training data”; these are, crudely speaking, algorithms that learn from being taught with example data. Their capacity rests on innovations in optimizing the predictive power of statistical models. The importance of machine learning algorithms is that they not only have the power to reproduce the world in the image in which they were programmed, but also exhibit tendencies of emergence, adaptivity, anticipation and prediction. In this sense, algorithms are not only social inventions capable of reinforcing existing forms of social order and organization, but have a powerfully productive part to play in predicting and even pre-empting future events, actions, and realities.
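To give a sense of what “being taught with example data” means in practice, here is a deliberately bare-bones sketch of an adaptive algorithm: a perceptron whose weights adjust each time its current rule misclassifies a training example. The data (study hours and prior scores mapped to pass or fail) is invented for illustration and makes no claim about how any commercial platform’s models actually work.

```python
# A minimal 'machine learning' sketch: a perceptron fed training data.
# The weights adapt whenever the current rule gets an example wrong.
# The (hours studied, prior score) -> passed/failed data is invented;
# features are scaled to the 0-1 range for convenience.

training_data = [
    ((0.8, 0.75), 1),
    ((0.1, 0.40), 0),
    ((0.6, 0.60), 1),
    ((0.2, 0.30), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    """Apply the current linear rule and return a 0/1 prediction."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# 'Teaching' the algorithm: repeatedly adjust the weights whenever the
# rule disagrees with one of the example answers.
for _ in range(100):
    for features, label in training_data:
        error = label - predict(features)  # -1, 0 or +1
        if error != 0:
            weights = [w + error * x for w, x in zip(weights, features)]
            bias += error

print(predict((0.7, 0.70)))  # classify a learner the model has not seen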

Socioalgorithmic Governance

Given such accounts of the complex relational interweaving and interaction between algorithms and social worlds, it is perhaps more accurate to write of “socioalgorithmic” processes and practices than of “algorithmic power.” It is important to acknowledge that algorithms are socially produced through mixtures of human and machine activities, as well as socially productive, rather than to imply that they act deterministically as mechanical or objective technologies.

As Tarleton Gillespie notes in a recent publication, algorithms, from search engines to social networking sites, organize information according to human but increasingly automated evaluative criteria, and on that basis then tangle with users’ information practices and patterns of political and cultural engagement. Along these lines, Gillespie suggests that algorithms are an important object of study for several reasons, not least to interrogate how “the introduction of algorithms into human knowledge practices may have political ramifications.”

He argues that there are questions about how certain forms of data are chosen, selected, included and prepared for processing by algorithms; about the evaluative criteria written into the command structure of algorithms to determine what is relevant, legitimate and appropriate; and about the way the technical nature of algorithms is presented as a guarantee of impartiality and objectivity.

Moreover, he argues, as algorithms are increasingly being designed to anticipate users and make predictions about their future behaviours, users are now reshaping their practices to suit the algorithms they depend on. This constructs “calculated publics,” the algorithmic presentation of publics back to themselves that shapes a public’s sense of itself.

Educational Calculated Publics

Gillespie raises a series of questions about the production of calculated publics that have immediate resonance in discussions about digitally-mediated learning:

Algorithms … engage in a calculated approximation of a public through their traceable activity, then report back to them …. But behind this, we can ask, what is the gain for providers in making such characterizations, and how does that shape what they’re looking for? Who is being chosen to be measured in order to produce this representation, and who is left out of the calculation? And, perhaps most importantly, how do these technologies, now not just technologies of evaluation but of representation, help to constitute and codify the publics they claim to measure, publics that would not otherwise exist except that the algorithm called them into existence?

These are important issues to address as we consider the role of digital technologies and algorithms in education systems and learning, particularly with the growth of algorithm-dependent adaptive learning platforms and predictive, personal analytics.

Personal and learning analytics utilize adaptive machine learning algorithms and statistical models to analyze users’ data in order to anticipate or even predict individuals’ actions, behaviors and attitudes. Drawing on socioalgorithmic techniques of machine learning, psychometric learner profiling, and predictive modelling, learning analytics aims to create “smart” pedagogic systems, or new kinds of “database pedagogies,” that are increasingly automated and autonomous of human oversight.
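As one concrete (and heavily simplified) illustration of the kind of statistical model such systems rely on, the sketch below implements Bayesian Knowledge Tracing, a model widely used in learning analytics research to estimate a learner’s “mastery” of a skill from their responses. The parameter values are invented and are not drawn from Knewton or any other vendor’s actual system.

```python
# A minimal Bayesian Knowledge Tracing (BKT) sketch. The parameters below
# are illustrative assumptions, not any platform's real values.

P_INIT  = 0.2   # prior probability the learner already knows the skill
P_LEARN = 0.15  # probability of learning the skill after each attempt
P_GUESS = 0.25  # probability of a correct answer without knowing the skill
P_SLIP  = 0.10  # probability of a wrong answer despite knowing the skill

def update_mastery(p_known, answered_correctly):
    """Revise the mastery estimate after observing one response."""
    if answered_correctly:
        evidence = p_known * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_known) * P_GUESS)
    else:
        evidence = p_known * P_SLIP
        posterior = evidence / (evidence + (1 - p_known) * (1 - P_GUESS))
    # Allow for learning between attempts.
    return posterior + (1 - posterior) * P_LEARN

def predict_next_correct(p_known):
    """Predicted probability that the next answer will be correct."""
    return p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS

# A hypothetical response history on one skill: wrong, then right, then right.
p = P_INIT
for response in [False, True, True]:
    p = update_mastery(p, response)

print(round(p, 2), round(predict_next_correct(p), 2))
```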

Utilizing these forms of algorithmic calculation, platforms such as Knewton make prediction and anticipatory knowledge very powerful in education. The algorithms behind these platforms are programmed (or taught) in such a way that they can make predictions on the basis of which decisions can be made, or even automatic recommendations produced. The algorithm becomes a means of anticipating or foreseeing probable future events, capable of producing what the Knewton website calls “actionable insights.” These platforms have very real and significant implications for how individuals are evaluated, assessed, and treated — in other words, how they are governed.
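A hedged sketch of how such predictions can be turned into automatic recommendations: given per-skill mastery estimates (for instance, from a model like the one above), pick the skill whose estimated mastery sits closest to a target difficulty. The skills, numbers and the 0.7 target are illustrative assumptions, not any platform’s actual policy.

```python
# Turning predictions into an automatic recommendation (illustrative only).

mastery_estimates = {      # hypothetical per-skill mastery probabilities
    "fractions": 0.92,
    "ratios": 0.55,
    "percentages": 0.31,
}

TARGET_SUCCESS = 0.7       # assumed target: practice that is neither too easy nor too hard

def recommend_next_skill(estimates):
    """Return the skill whose estimated mastery is closest to the target."""
    return min(estimates, key=lambda skill: abs(estimates[skill] - TARGET_SUCCESS))

print(recommend_next_skill(mastery_estimates))  # -> 'ratios'
```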

Algorithms, as parts of human-machine mixes and socioalgorithmic processes, are contributing to the production of new calculated publics in education. The growing role of analytics platforms, data mining processes, and other forms of machine learning in education therefore demands close attention from researchers and educators to the work that algorithms do in this space.

Ultimately, we need to consider how certain forms of data are chosen for collection by these platforms and what models of “learning” are built into them; to inquire into the evaluative criteria, social norms, and promises of objectivity embodied by their algorithms; and to consider how these algorithms are constructed to anticipate and predict learners’ likely futures. Calculated educational publics such as those embodied in the millions of users of Knewton are not just algorithmically approximated from traces of their activities, but actively called into existence by data analytics that can anticipate learners’ probabilities for action, construct predictive models, and serve up recommendations to pre-empt and govern their future lives.

Banner image credit: opensource.com