How might a philosopher, literary critic, and computer programmer articulate the pros and cons of an increasingly digital world?
Aden Evens, an associate professor in the Department of English and Creative Writing, brings all three perspectives to his scholarship on digital culture. Trained as a philosopher, the former computer programmer warns of the pervasive influence of quantitative values in his new book, The Digital and Its Discontents.
The book follows his 2015 volume, The Logic of the Digital, which details a unique digital ontology—and its paradoxical relationship with human creativity.
With precise prose and deep understanding of the logic of technology, Evens illuminates just what is lost when calculative reasoning and optimization strategies have an outsized influence on human thinking. (This interview has been edited for clarity and length.)
You write about being an "early adopter" and loving technology from a young age. What led you to shift your focus to the negative implications of the digital?
I grew up next door to a famous computer scientist, which gave me a chance to learn some programming languages when I was quite young, at the start of the personal computer era. What I loved then about programming was closely related to what I loved about math: everything had a specific order, everything followed a very narrow and rigid logic, and if you could master that logic you could get the computer to do all sorts of remarkable things. My work experience as a coder in high school and college reinforced that lesson in a variety of contexts: no matter what project or program you are working on, to program a computer well means thinking like a computer.
Some years later, after a PhD in philosophy (and with the internet coming into its own), I was looking for a way to distinguish myself on the academic job market. Not a lot of humanist thinkers had the kind of technical chops that I did, so it made sense to apply my philosophical intuitions to this increasingly important aspect of our lives, leveraging my distinctive knowledge about the digital.
My book looks at both sides, reflecting my great enthusiasm for computing machines but also my sense, born of critical inquiry, of their limitations. The same narrowness, the same rigid logic that makes computers so capable and appealing also makes them inflexible and limited. Because computers follow a strict logic, they never have to navigate ambiguity: they do exactly what they are programmed to do, and they do it with unparalleled efficiency. But this also means they can't do anything that can't be represented in that narrow command logic; all computers can do is follow unambiguous, logical commands.
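To make that command logic concrete, here is a minimal sketch (my illustration, not an example from the book) of how a program can act only on the cases its author anticipated in advance:

```python
# A toy illustration (not from the book): a program acts only on
# conditions that have been made fully explicit ahead of time.

def respond_to_greeting(message: str) -> str:
    # The machine cannot weigh tone, context, or irony; it can only
    # test exact, predefined conditions, one branch at a time.
    text = message.strip().lower()
    if text in ("hi", "hello", "hey"):
        return "Hello!"
    if text.endswith("?"):
        return "I can only answer questions I was programmed to expect."
    # Anything the programmer did not anticipate falls through to a
    # single rigid default; there is no middle ground.
    return "Command not recognized."

print(respond_to_greeting("Hey"))           # Hello!
print(respond_to_greeting("How are you?"))  # the canned reply
print(respond_to_greeting("lovely day"))    # Command not recognized.
```

Every input that was not foreseen lands in the same default branch: the efficiency and the inflexibility come from the same place.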
If the book places a greater emphasis on the digital's inherent limits than on its astonishing power, this is an attempt to counterbalance the excessive hype that has accompanied digital technologies for 50 years and blinded us to their shortcomings and inadequacies.
You describe how we interpret the world according to digital values—"even when we step away from our computers and put our phones in our pockets." How do most people experience this "digital hegemony"?
The book argues that digital technologies intensify a certain set of values, a way of looking at and acting in the world. It's a worldview that was already in place long before those technologies were invented and disseminated. So it's not that computers introduced a new way for us to behave or a new set of values; rather, they amplified specific ways of thinking and acting that were already prevalent, and the change in us over the digital era is marked mostly by a steady erosion of ways of being human that lie beyond the digital's influence.
The most obvious symptom of this change is the way that people reach for their phones to do almost anything these days, often even when they have nothing in particular to accomplish. But as digital technologies have become the "default" option, generally unquestioned, people also think like computers in other parts of their lives. We approach important life questions, about friends, lovers, work, family, politics, leisure, etc., through calculative reasoning, making decisions based on optimization strategies that reflect an underlying quantitative outlook on ourselves and the world around us. We understand self-determination and identity as a set of choices within categories rather than an open-ended field of potential yet to be fully defined; the kind of person you are is given by a set of labels for ethnicity, gender, political alignment, personality type, etc., which leaves no room for a recognition of the singular, uncategorizable self that each of us is.
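That calculative outlook can be caricatured in a few lines of code. The sketch below is a hypothetical illustration of mine, not an example from the book; the weights and options are invented:

```python
# A toy caricature of "calculative reasoning": a life decision reduced
# to a weighted score and a maximization step. All numbers are invented.

weights = {"salary": 0.5, "commute": 0.3, "meaning": 0.2}

job_offers = {
    "Job A": {"salary": 0.9, "commute": 0.4, "meaning": 0.3},
    "Job B": {"salary": 0.6, "commute": 0.8, "meaning": 0.7},
}

def score(features: dict) -> float:
    # Every consideration must first be forced into a number; whatever
    # resists quantification simply drops out of the decision.
    return sum(weights[k] * features[k] for k in weights)

best = max(job_offers, key=lambda name: score(job_offers[name]))
print(best, round(score(job_offers[best]), 2))  # Job B 0.68
```

The point of the caricature is what the algorithm cannot see: anything about the decision that resists being scored never enters the computation at all.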
Similarly, we see the world as made of individuated objects (including people), each of which has various properties and (sometimes metaphorical) "handles" by which it can be manipulated to achieve a desired end: when you think like a computer, every object in the world, including people, is just a means to an end, an input into an algorithm. But the thing about ideology is that it does not present itself as one way of being among others; instead it colors our way of seeing the world such that the ideology becomes naturalized, making it difficult to conceive of other ways of being.
Thus, the examples I just offered—about how we make decisions, how we control our own futures and define ourselves as individuals, and how we regard the world in which we live—probably seem strangely anodyne to most people, who can't really imagine any other way that we might do those things. What we lose when we "go digital" is precisely the unpredictable, the indeterminate, the ambiguous that digital machines simply cannot accommodate. But that also means that what is missing from those machines is not predictable, not yet determined, and largely ineffable, making it a bit tough to articulate or even point to what is excluded from the digital ideology.
Your book's title references Freud's influential 1930 book, Civilization and Its Discontents. To a certain extent, both books document a significant loss to humankind at a specific moment in history, with Freud illuminating the clash between individual desire and the expectations of modern society. How do you envision the parallel with Freud's book?
I'm married to a psychoanalyst, and she tells me that the key term of my book, contingency, my name for what the digital inevitably leaves out, is similar to the idea of the unconscious made famous in Freud's work. But if contingency is a version of the unconscious, it would be not a personal unconscious but the unconscious of the universe—forces of spontaneity and creativity that operate outside of our direct observation, underneath the rule-bound order outlined by physics and chemistry, disrupting that order to ensure that there is always something new, something unexpected to be discovered. As I put it in the book, contingency is the refusal to hold any rule inviolable, the ever-present possibility that anything might happen. (By excluding contingency, I argue, the digital achieves its remarkable potency but also meets its ultimate limitation.) So I can see how my concept of contingency invites some rough comparisons to the idea of the unconscious.
But in choosing that title, I was thinking of a different, equally rough, similarity between my book and Freud's great work: as you suggest, both books are diagnosing a problem in their particular historical-cultural moment (though Freud's context of interwar Europe is never mentioned explicitly in his book). And further, both books use that particular moment to illustrate a claim that is ultimately universal. In Freud's case, he argues that not just his current civilization but every society, every way of living together in groups will bring about a measure of unhappiness, as people are forced to deny many of their individual desires in order to preserve a relative harmony in the whole society.
In my case, I argue that digital technologies always encourage digital thought, the kind of thinking patterned after the strict binarity and rigid logic of digital technologies, but that influence has been especially keen in our current moment (and over the last half century) because the underlying values of digital thought were already widespread in our society. There are many fine critiques of social media and other digital phenomena, and my book does not go over that same territory again. Instead, my research looks at how "digitalness" itself, and not any particular implementation of it, already has certain limitations, such that some problems of digital culture can never be adequately solved.
Woven throughout the book are short vignettes about a range of topics, from playing a video game to squishing a lemon, that illuminate differences between the human and the digital. One vignette considers the discipline of the digital humanities, where you urge scholars to restore interpretation as its central practice, pushing "a confrontation with contingency." What's a compelling example of this sort of work?
I am not an expert in digital humanities, as I have kept my distance from it, harboring misgivings even when I first started to hear about it in the mid-2000s. But I have attended a few conferences and read a number of articles. I have no doubt that there is really valuable work being done using the methods of digital humanities, and that brief section of my book attempts to offer, in broad terms, a way of thinking about how digital humanities can more consistently maintain its excellence by generating new ideas.
The book includes one example of digital humanities that I really like, the writing of Stephen Ramsay. He encourages scholars to use the techniques of digital humanities to, well, play around with the texts they are studying, in order to discover unexpected patterns and make new, surprising texts out of the old. One of the challenges of digital humanities is that, like a lot of work in the natural sciences, its methods often require that you know what you are looking for, and then you either find it or you don't. That makes it harder to discover radically new ideas, since the results of your analyses are effectively preconceived. Ramsay proposes ways of using those methods more experimentally, without knowing in advance what you are trying to accomplish, and that feels to me more productive and more revelatory than a lot of other digital humanities approaches.
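For readers who want a concrete image of that experimental spirit, here is a minimal, hypothetical sketch (mine, not Ramsay's actual method) of deforming a text without knowing in advance what you are looking for:

```python
# A toy gesture at open-ended, experimental text play: deform a text
# and read what the deformation reveals. This is my illustration, not
# code from Ramsay's work; the "poem" is a placeholder.
import random

lines = [
    "first line of some poem",
    "second line of some poem",
    "third line of some poem",
    "fourth line of some poem",
]

# Reverse the text, then shuffle it: two simple deformations whose aim
# is not to confirm a hypothesis but to produce an unexpected object
# of interpretation.
print("\n".join(reversed(lines)))
print()
random.shuffle(lines)
print("\n".join(lines))
```

Unlike a search for a predefined pattern, each run hands the reader something unplanned to interpret.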
Looking ahead, you encourage readers to "employ the digital as a challenge, take it where it does not fit, and never take its procedures for granted." How can we stave off thinking like a machine?
One key technique, easy to say but evidently hard to do, is simply to engage less with our digital devices. By doing more things "the old-fashioned way," by leaning into the risks and spontaneity of in-person relationships, by exposing our bodies, our voices, and our ideas to the precarious and sometimes threatening space of the real world, by engaging the world with our whole selves and not just our fingertips and eyes, by putting those full selves on the line and not hiding behind an alias and an avatar, by approaching tasks in their complicated and sometimes awkward materiality instead of confining our actions to the idealized space of the virtual, we will start to recognize what is lost when we conduct our lives through digital interfaces.
It's not always fun and it's not necessarily easy to face the plenitude of reality; indeed, it's usually easier and more comfortable to retreat behind a screen and press a few keys. But my argument is that, as we continue to immerse ourselves deeper in the digital world, we not only lose the essential spontaneity of the real world, but we come to forget that there is anything there to lose.
To be clear, I am not at all "against" digital technologies, which I find incredibly useful and often really rewarding. I don't think the solution is just to stop using the digital or to get rid of it. But whether online or off, I believe that it is vital that we stay in touch with contingency, recognize the irreplaceable value of our fleshy selves, of thinking and acting in a field of potential where the options haven't already been determined in advance and it may never be clear exactly which way to go or how to get there. Computers are appealing because they make everything tidy, because their world is well defined and fully instrumentalizable. But we need to remember the value of messiness, of uncertainty, of strife, as those are the things that make life valuable and move the world forward.