October 23, 2017

Where is the Humanity in the Computer Science Curriculum?

Categories: Critical Perspectives, Digital Learning

“Let us move from human-centered design to humanity-centered design.”

— From the Copenhagen Letter

I’ve been struggling to write this post for a long, long time. Every time I see calls for teaching coding to young people or to girls or to minorities, I get frustrated. First off, the need for everyone to learn to code may be inflated, as Audrey Watters has written. As someone with a bachelor’s degree in computer science, I can assure you that no coding bootcamp is going to produce a person as qualified as someone who has studied computer science at the undergraduate level. That would be ridiculous, right? Or else, why would anyone study computer science? Even though I’ve never been to a coding bootcamp, I’ve worked with people who know how to code but have not studied computer science, and it’s torture to watch. Seriously. I doubt they would be hired instead of well-trained programmers (with some exceptions for the seriously talented or interested).

But I’m more interested in something else. Why is all the focus on teaching lay people how to code, and not on teaching computer scientists and people who work in tech companies to center empathy and humanity in their work? I came across this tweet over the summer:

That tweet came from the #ICA17 conference and reminded me that I needed to write about this (if you overlook the unnecessarily derogatory use of the term “geek”). As a former computer scientist (I left the code behind many years ago), it bothers me that there is such a push for humanists to learn to code, and very little in the way of working with software engineers and programmers to help them think in more humane and ethical ways about what they’re designing, to be more critical and aware of the underlying politics of what they do. Yes, non-programmers can become more critical citizens when they understand how their (digital) lives are influenced by algorithms, but more importantly, shouldn’t we care about the critical citizenship of the programmers? After all, it is highly unlikely that an amateur coder will be asked to design the next big neural network, just as it is unlikely that someone with a casual interest in medicine, or who has studied holistic medicine, would be called on to perform life-threatening surgery. Walter Vannini reminds us: “As anyone with even minimal exposure to making software knows, behind a minute of typing lies an hour of study.” Right here on DML Central, Ben Williamson wrote:

“There has been far too little focus on enabling young people to appreciate the social consequences of code and algorithms…”

And:

“We need to get beyond the rather naïve and utopian ideal that if you can understand what an algorithm is and how to make one, then you can program the computer rather than it programming you. A different sort of knowledge is required to understand the social power of algorithms.”

Let me step back a minute and talk about the non-neutrality of data and algorithms, and why it seems necessary to work on consciousness-raising for computer professionals. (For the record, I went to a liberal arts institution, and all the direct teaching about ethics in computer science focused solely on copyright; no one even talked to us about open source, even as we used Linux and Unix.)


I am not the first to argue this, of course. Cathy O’Neil has an excellent book, “Weapons of Math Destruction,” where she gives detailed examples of how algorithms are not neutral and reproduce human biases and agendas. As danah boyd argues: “we cannot build a neutral platform or punt the political implications of data down the line. Every decision matters, including the decision to… collect certain types of data and not others.” Paul Prinsloo reminds us that the learning analytics we use need to be decolonized: “our collection, analysis and use of student data are informed by particular ideological and political agendas.” I agree with Walter Vannini:

“Programming is not a detail that can be left to ‘technicians’ under the false pretense that their choices will be ‘scientifically neutral.’ Societies are too complex: the algorithmic is political.”
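To make that non-neutrality concrete, here is a small, hypothetical sketch in Python (my own illustration, not drawn from O’Neil, boyd, Prinsloo, or Vannini). The scoring rule below contains no explicit prejudice; it simply scores job candidates by similarity to people hired in the past. Because the invented historical data favored elite-school graduates, the supposedly “neutral” rule reproduces exactly that preference.

```python
# Hypothetical example: a scoring rule "trained" on biased historical
# hiring data reproduces the bias, even though the rule itself never
# names school background as a criterion.

# Historical records: (years_of_experience, attended_elite_school, was_hired).
# In this invented history, elite-school graduates were favored.
history = [
    (2, True, True), (1, True, True), (3, True, True),
    (5, False, False), (6, False, False), (4, False, True),
]

def score(candidate, history):
    """Score a candidate by average similarity to previously hired people."""
    hired = [(exp, elite) for exp, elite, was_hired in history if was_hired]
    total = 0.0
    for exp, elite in hired:
        similarity = 1.0 / (1 + abs(candidate[0] - exp))  # closer experience
        if candidate[1] == elite:
            similarity += 1.0  # same school background as a past hire
        total += similarity
    return total / len(hired)

# Two candidates with identical experience, differing only in school background:
print(score((4, True), history))   # ~1.27: the elite-school candidate scores higher
print(score((4, False), history))  # ~0.77: despite identical experience
```

The point is not the toy numbers but the mechanism: the choice of which historical data to learn from carries the politics of that data straight into the output, which is exactly what boyd and O’Neil describe.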

So how much time do those with technological power spend exploring the ethics of what they do? I did an informal poll on Twitter asking whether computer science programs require coursework on neural networks or teach the ethics behind algorithms, and found that 58% of respondents’ programs did neither (though I only had 12 responses, so further research is needed).

Cathy O’Neil suggests tech giants are solving problems that are not relevant to the everyday person, and that sensitivity training might help. I feel that something like this would not be enough. I think discussions of ethics, humanity and social consequences should be infused into computer science curricula, and I believe that even human-centered design does not go far enough; I suggest that designers of tech consider more “empathetic and participatory design,” where people who are not in the tech company are involved in product design decisions as autonomous persons, not just used as research/testing subjects. So, I love that statement in the Copenhagen Letter about emphasizing humanity and social justice in design, not just human-centeredness, and we need to imagine ways of making this happen in practice. Perhaps we should design, fund and celebrate more programs that promote humanity and social justice rather than technical abilities.

This powerful tweet from Chris Gilliard highlights the power differential here:

“We should not be targeting creating more factory workers. We should be working on the values of factory owners and managers.”

Banner image: “Till Dead Batteries Us Do Part” flickr photo by Peter Kurdulija shared under a Creative Commons (BY-NC-ND) license