Climate change and the paradox of inaction

One of the things I most often hear when talking to people about climate change is “but what to do?” This, in and of itself, is good news. Perhaps owing to evidently extreme weather patterns¹, perhaps owing to the concentrated efforts of primary/secondary school teachers², perhaps owing to the unceasing (though increasingly brutally repressed, even in the UK and the rest of Europe) efforts of activists, it seems the question of whether climate change is ‘real’ has finally taken a back seat to “and what shall we do about it?”.

While climate denialism may have had its day, challenges now come from its cousins (or descendants) in the form of climate optimism, technosolutionism, or – as Linsey McGoey and I have recently argued – the specific kind of ignorance associated with liberal fatalism: using indeterminacy to delay action until certain actions are foreclosed. In the latter context in particular, the sometimes overwhelming question “what to do” can compound and justify, even if unintentionally, the absence of action. The problem is that whilst we are deliberating what to do, certain kinds of action become less possible or more costly, thus limiting the likelihood we will be able to implement them in the future. This is the paradox of inaction.

My interest in this question came from researching the complex relationship between knowledge (and ignorance) and (collective or individual) action. Most commonsense theories assume a relatively linear link between the two: knowing about something will lead you to act on it, especially in contexts of future risk or harm. This kind of approach shaped information campaigns, and struggles to get people to listen to ‘the science’, from early conversations around climate change to Covid-19. Another kind of approach overrides these information- or education-based incentives in favour of behavioural ‘nudges’: awareness of cognitive processing biases (well documented, and plentiful) suggested that slightly altering decisional infrastructure would be more efficient than trying to, effectively, persuade people to do the right thing. While I can see sense in both approaches, I became interested instead in the ambiguous role of knowledge. In other words, under what conditions would knowing (about the future) prevent us from acting (on the future)?

There are plenty of examples to choose from: from the critique of neoliberalism to Covid-19 (see also the above) to, indeed, climate change (free version here). In the context of teaching, this question often comes up when students begin to realize the complexity of the global economy, and the inextricability of questions of personal agency from what we perceive as systemic change. In other words, they begin to realize that the state of the world can be reduced neither to individual responsibility nor to the supposedly impersonal forces of “economy”, “politics”, “power” etc. But this rightly leaves them at an impasse: if change is neither about individual agency alone nor about large-scale system change alone, how can we make anything happen?

It is true that awareness of complexity can often lead to bewilderment or, at worst, inaction. After all, in view of such an extraordinary entanglement of factors – individual, cultural, economic, social, political, geological, physical, biological – it can be difficult to even know how to tackle one without unpicking all the others. Higher education doesn’t help with this: most people (not all, but most) are, sadly, trained to see the world from the perspective of one discipline or field of study³, which can rightly make processes that span those fields appear impossible to grasp. Global heating is one such process; it is, at the same time, geological, meteorological, ecological, social, political, medical, economic, etc. As Timothy Morton has argued, climate change is a ‘hyperobject’: it exceeds the regular boundaries of human conceptualization.

Luckily, social theory, and in particular social ontology, is particularly good at analysing objects. Gender – e.g. the notion of ‘woman’ – is an example of such an object. This does not mean, by the way, that ‘deconstructing’ objects, concepts, or notions needs to detract from the complexity of their interrelation; in some approaches to social ontology, a whole is always more than the sum (or any deducible interrelation) of its parts. In other words, to ‘deconstruct’ climate change is not in any way to deny its effects or the usefulness of the concept; it is to understand how different elements – which we conventionally, and historically, but not-at-all necessarily, associate with disciplines or ‘domains’ – interact and interrelate, and what that means. Differently put, the way disciplines construct climate change as an object (or assemblage) tells us something about the way we are likely to perceive solutions (or, more broadly, ways of addressing it). It does not determine what is going to happen, but it points to the avenues (and limitations) humans are likely to see in doing something about it.

Why does this matter? Our horizon of agency is limited by what we perceive as subjects, objects, and forms of agency. In less weighty parlance: what (and whom) we perceive as being able to do stuff, and the kind of stuff it (they) can do. This also includes what we perceive as limitations on doing stuff, real or not. Two limitations apply to all human beings: time and energy. In other words, doing stuff takes time. It also consumes energy. This has implications for what we perceive as the stuff we can do. So what can we do?

As with so many other things, there are two answers. One is obvious: do anything and everything you can, and do it urgently. Anything other than nothing. (Yes, even recycling, in the sense in which it’s better than not recycling, though obviously less useful than not buying packaging in the first place).

The second answer is also obvious, but perhaps less frequent. Simply, what you aim to do depends on what you aim to achieve. Aiming to feel a bit better? Recycle, put a poster up, maybe plant a tree (or just some bee-friendly plants). Make a bit of a difference to your carbon emissions? Leave the car at home (at least some of the time!), stop buying stuff in packaging, cut down on flying, eliminate food waste (yes, this is in fact very easy to do). Make a real change? Vote on climate policy; pressure your MP; insulate your home (if you have one); talk to others. Join a group, or participate in any kind of collective action. The list goes on; there are other forms of action that go beyond this. They should not be ranked, neither in terms of moral rectitude nor in terms of efficiency (if you’re thinking of the old ‘limitations of individual agency’ argument, do consider what would happen if everyone *did* stop driving – and no, that does not include ambulances).

The problem with agency is that our ideas of what we can do are often shaped by what we have been trained, raised, and expected to do. Social spaces, in this sense, also become training grounds for action. You can learn to do something by being in a space where you are expected to do (that) something; equally, you learn not to do things by being told, explicitly or implicitly, that it is not the done thing. Institutions of higher education are really bad at fostering certain kinds of action, while rewarding others. What is rewarded is (usually) individual performance. This performance is frequently framed, explicitly or implicitly, as competition: against your peers (in relation to whom you are graded) or colleagues (with whom you are compared when it comes to pay, or promotion); against other institutions (for REF scores, or numbers of international students); against everyone in your field (for grants, or permanent jobs). Even instances of team spirit or collaboration are more likely to be rewarded or recognized when they lead to such outcomes (getting a grant, or supporting someone in achieving individual success).

This poses significant limitations for how most people think about agency, whether in the context of professional identities or beyond (I’ve written before about limits to, and my own reluctance towards, affiliation with any kind of professional, let alone disciplinary, identity). Agency fostered in most contemporary capitalist contexts is either consumption- or competition-oriented (or both, of course, as in conspicuous consumption). Alternatively, it can also be expressive, in the sense in which it can stimulate feelings of identity or belonging, but it bears remembering that these do not in and of themselves translate into action. Absent from these is the kind of agency I, for want of a better term, call world-building: the ability to imagine, create, organize and sustain environments that do more than just support the well-being and survival of oneself and one’s immediate in-group, regardless of how narrowly or broadly we may define it, from nuclear family to humanity itself.

The lack of this capacity is starkly evident in classrooms. Not long ago, I asked one of the groups I teach for an example of a social or political issue they were interested in or would support despite the fact it had no direct or personal bearing on their lives. None could (yes, the war on Gaza was already happening). This is not to say that students do not care about issues beyond their immediate scope of interest, or that they are politically disenchanted: there are plenty of examples to the contrary. But it is to suggest that (1) we are really bad at connecting their concerns to broader social and political processes, especially when it comes to issues on which everyone in the global North is relatively privileged (and climate change is one such issue, compared to the effects it is likely to have on places with less resilient infrastructure); and (2) institutions are persistently and systematically (and, one might add, intentionally) failing at teaching how to turn this into action. In other words: many people are fully capable of imagining that another world is possible. They just don’t know how to build it.

As I was writing this, I found a quote in Leanne Betasamosake Simpson’s (excellent) As We Have Always Done: Indigenous Freedom Through Radical Resistance that I think captures this brilliantly:

Western education does not produce in us the kinds of effects we like to think it does when we say things like ‘education is the new buffalo’. We learn how to type and how to write. We learn how to think within the confines of Western thought. We learn how to pass tests and get jobs within the city of capitalism. If we’re lucky and we fall into the right programs, we might learn to think critically about colonialism. But postsecondary education provides few useful skill sets to those of us who want to fundamentally change the relationship between the Canadian state and Indigenous peoples, because that requires a sustained, collective, strategic long-term movement, a movement the Canadian state has a vested interest in preventing, destroying, and dividing.

(loc 273/795)

It may be evident that generations that have observed us do little but destroy the world will exhibit an absence of capacity (or will) to build one. Here, too, change starts ‘at home’, by which I mean in the classroom. Are we – deliberately or not – reinforcing the message that performance matters? That to ‘do well’ means to fit, even exceed, the demands of capitalist productivity? That this is how the world is, and the best we can do is ‘just get on with it’?

The main challenge for those of us (still) working in higher education, I think, is how to foster and stimulate world-building capacities in every element of our practice. This, make no mistake, is much more difficult than what usually passes for ‘decolonizing’ (though even that is apparently sometimes too much for white colonial institutions), or inserting sessions, talks, or workshops about the climate crisis. It requires resistance to reproducing the relationship to the world that created and sustains the climate crisis – competition-oriented, extractive, and expropriative. It calls for a refusal to conform to the idea that knowledge should, in the end, serve the needs of (a) labour market, ‘economy’, or the state. It requires us to imagine a world beyond such terms. And then teach students how to build it.

  1. Hi, philosophy of science/general philosophy/general bro! Are you looking to mansplain stochastic phenomena to me? Please bear in mind that this is a blog post, and thus oriented towards a general audience, and that I have engaged with this problem on a slightly different level of complexity elsewhere (and yes, I am well aware of the literature). Here, read up. ↩︎
  2. One of the recent classes I taught that engaged with the question of denialism/strategic ignorance (in addition to a session on the sociology of ignorance in Social Theory and Politics of Knowledge, an undergraduate module I taught at Durham in 2021–23, and sessions on public engagement, expertise and authority, and environmental sociology in Public Sociology: Theory and Practice, a core MSc module at Durham, I teach a number of guest lectures on the relationship between knowledge and ignorance, scientific advice, etc.) was a pleasant surprise insofar as most students were well aware of the scale, scope, and reality of climate change. This is a pronounced change from some of my experiences in the preceding decade, when the likelihood of encountering at least the occasional climate skeptic, if not an outright denialist (even if by virtue of qualifying as the addressee of fn 1 above), was high(er). When asked, most of the students told me they had learned about climate change in geography at school. Geography teachers, I salute you. ↩︎
  3. The separation of sociology and politics in most UK degree programmes, for instance, continues to baffle me. ↩︎

In dreams begin responsibilities

Dreams are dangerous places. The control and awareness we tend to ascribe to what is usually referred to as ‘dreams’ in the waking state (ambitions; aspirations) is the exact opposite of the absence of control we tend to assume of dreams in the unconscious (sleeping) state. Neither is, strictly speaking, true: we do not choose our ambitions or orientations with full awareness, just as it is ridiculous to fully outsource authorship when we sleep.

Psychoanalysis, of course, knows this. But, much like other disciplines and traditions that take dreams seriously, it all too often treats dreams as epistemology; that is, it uses dream logic to deduce something about the person who dreams, as if exiting from the forces generating the unconscious (in Freud’s formulation, following Ariadne’s thread) were ever truly possible. Sociology, needless to say, hardly does a better job, instead placing dreams at the uncomfortable (all boundaries, for sociology, are uncomfortable) boundary between collective and individual, as if the collective (unconscious) somehow permeates the individual, but always imperfectly (everything, in sociology, is imperfect, except its own imperfections).

Bion describes pathology as the inability to dream and the inability to wake up; but is this not another (even if relaxed) call for discreteness, ushering in Freud’s reality principle through the back door? This seems relevant given the importance of the ability to dream (and dream differently) for any progressive movement or politics. What if elements of reality become so impoverished that there is nothing to dream about? This is one of the things I remember most clearly from reading Cormac McCarthy’s The Road – good, happy, and peaceful dreams usually mean you are dying. Reality, in other words, has become so unbearable that there is nothing but retreat into personal, individualized fantasy as a bulwark against it (this is also, though in a more complicated tone, a motif in one of my favourite films, Wenders’ Until the End of the World).

There are several possible ways out of this. One is to see dreams as shared; that is, to conceptualize dreaming as a collective, rather than solitary, activity, and dreams as the possession of more than a single individual. Yet I fear this too easily slips into platitudes: as much as dreams (and beliefs, and feelings, and thoughts) can be similar and communicated, it is unlikely they can literally be co-created; individual mental states remain individual (and are, in some cases, indistinguishable from one another).

(I’m aware that the Australian Aboriginal concept of Dreamtime may challenge this, but I’m reserving that for a different argument).

Instead of imagining some originary dream-state in which we are connected to other minds as if via an umbilical cord, I’m increasingly thinking it makes sense to conceptualize dreams as places; that is, instances of timespace with laws, sequences, and sets of actions and relations. In this sense, we can be in others’ dream(s), as much as they can be in our(s); but within this place, we are probably still responsible to ourselves. Or are we?

How free are you to act in someone else’s dream?

Tár, or the (im)possibility of female genius

“One is not born a genius, one becomes a genius,” wrote Simone de Beauvoir in The Second Sex; “and the feminine situation has up to the present rendered this becoming practically impossible.”

Of course, the fact that the book, and its author, are much better known for the other quote on processual/relational ontology – “one is not born a woman, one becomes a woman” – is a self-fulfilling prophecy of the first. A statement about geniuses cannot be a statement about women. A woman writing about geniuses must, in fact, be writing about women. And because women cannot be geniuses, she cannot be writing about geniuses. Nor can she be one herself.

I saw Tár, Todd Field’s lauded drama about the (fictional) first woman conductor of the Berlin Philharmonic Orchestra, earlier this year (most of this blog post was written before the Oscars and reviews). There were many reasons why I was poised to love it: the plot/premise, the screenplay, the music (obviously), the visuals (and let’s be honest, Cate Blanchett could probably play a Christmas tree and be brilliant). All the same, it ended up riling me for its unabashed exploitation of most stereotypes in the women x ambition box. Of course the lead character (Lydia Tár, played by Blanchett) is cold, narcissistic, and calculating; of course she is a lesbian; of course she is ruthless towards long-term collaborators and exploitative of junior assistants; of course she is dismissive of identity politics; and of course she is, also, a sexual predator. The message of this equation is that a woman who desires – and attains – power will inevitably end up reproducing exactly the behaviours that define men in those roles, down to the very stereotype of a Weinstein-like ogre. What is it that makes directors unable to imagine a woman with a modicum of talent, determination, or (shhh) ambition as anything other than a monster – or, alternatively, as a man, and thus by definition a ‘monster’?

To be fair, this move only repeats what institutions tend to do with women geniuses: they typecast them; make sure that their contributions are strictly domained; and penalize those who depart from the boundaries of prescribed stereotypical ‘feminine’ behaviour (fickle, insecure, borderline ‘hysterical’; or soft, motherly, caring; or ‘girlbossing’ in a way that combines the volume of the first with the protective urges of the second). Often, as in Tár, by literally dragging them off the stage.

The sad thing is that it does not have to be this way. The opening scene of Tár is a stark contrast with the closing one in this regard. In the opening scene, a (staged) interview with Adam Gopnik, Lydia Tár takes the stage in a way that resists, refuses, and downplays gendered stereotypes. Her demeanor is neither masculine nor feminine; her authority is neither negotiated, nor forced to prove itself, nor endlessly demonstrated. She handles the interview with an equanimity that does not try to impress, convince, cajole, or amuse; nor to charm, outwit, or patronize. In fact, she does not try at all. She approaches the interviewer from a position of intellectual equality, a position that, in my experience, relatively few men can comfortably handle. But of course, this has to turn out to be a pretense. There is no way to exist as a woman in the competitive world of classical music – or, for that matter, anywhere else – without paying heed to gendered stereotypes.

A particularly poignant (and, I thought, very successful) depiction of this is the audition scene, in which Olga – the cellist whose career Tár will help and who will eventually become the object of her predation – plays behind a screen. Screening off performers during auditions (‘blind auditions’) was, by the way, initially introduced to challenge gender bias in hiring musicians for major orchestras – to resounding (sorry) success, making it 50% more likely that women would be hired. But Tár recognizes the cellist by her shoes (quite stereotypically feminine shoes, by the way). The implication is that even ‘blind’ auditions are not really blind. You can be either a ‘woman’ (like Olga: young, bold, straight, and feminine) or a ‘man’ (like Lydia: masculine, lesbian, and without scruples). There is no outside, and there is no without.

As entertaining as it is to engage in cultural criticism of stereotypical gendered depiction in cinema, one question from Tár remains. Is there a way to perform authority and expertise in a gender-neutral way? If so, what would it be?

People often tell me I perform authority in a distinctly non-(stereotypically)-feminine way; this both is and is not a surprise. It is a surprise because I am still occasionally shocked by the degree to which intellectual environments in the UK, and in particular those that are traditionally academic, are structurally, relationally, and casually misogynist, even in contexts supposedly explicitly designed to counter it. It is not a surprise, on the other hand, as I was raised by women who did not desire to please and men who were more than comfortable with women’s intellects, but also, I think, because the education system I grew up in had no problems accepting and integrating these intellects. I attribute this to the competitive streak of Communist education – after all, the Soviets sent the first woman into space. But being (at the point of conception, not reception, sadly) bereft of gendered constraints when it comes to intellect does not solve the other part of the equation. If power is also, always, violence, is there a way to perform power that does not ultimately involve hurting others?

This, I think, is the challenge that any woman – or, for that matter, anyone in a position of power who does not automatically benefit from male privilege – must consider. As Dr Autumn Asher BlackDeer brilliantly summarized it recently, decolonization (or any other kind of diversification) is not about replacing one set of oppressors with another, more diverse set. Yet, all too frequently, this kind of work – willingly or not – becomes appropriated and used in exactly these ways.

Working in institutions of knowledge production, and especially working both on and within multiple intersecting structures of oppression – gender, ethnicity/race, ability, nationality, class, you name it – makes these challenges, for me, present on a daily basis in both theoretical and practical work. One of the things I try to teach my students is that, in situations of injustice, it is all too appealing to react to a perceived slight or offence by turning it inside out, by perpetuating violence in turn. If we are wronged, it becomes easy to attribute blame and mete out punishment. But real intellectual fortitude lies in resisting this impulse. Not in some meek turning-the-other-cheek kind of way, but in realizing that handing down violence will only, ever, perpetuate the cycle of violence. It is breaking – or, failing that, breaking out of – this cycle that we must work towards.

As we do, however, we are faced with another kind of problem. This is something Lauren Berlant explicitly addressed in one of their best texts ever, Feminism and the Institutions of Intimacy: most people in and around institutions of knowledge production find authority appealing. This, of course, does not mean that all intellectual authority lends itself automatically to objectification (on either side), but it does and will happen. Some of this, I think, is very comprehensively addressed in Amia Srinivasan’s The Right to Sex; some of it is usefully dispensed with by Berlant, who argues against seeing pedagogical relations as indexical for transference (or the other way around?). But, as important as these insights are, questions of knowledge – and thus questions of authority – are not limited to questions of pedagogy. Rather, they relate to the very relational nature of knowledge production itself.

For any woman who is an intellectual, then, the challenge rests in walking the very thin line between seduction and reduction – that is, the degree to which intellectual work (an argument, a book, a work of art) has to seduce, but in turn risks being reduced to an act of seduction (the more successful it is, the more likely this will happen). Virginie Despentes’ King Kong Theory, which I’m reading at the moment (shout out to Phlox Books in London, where I bought it), is a case in point. Despentes argues against reducing women’s voices to ‘experience’, or women to epistemic objects (well, OK, the latter formulation is mine). Yet, in the reception of the book, it is often Despentes herself – her clothes, her mannerisms, her history, her sexuality – that takes centre stage.

Come to think of it, this version of ‘damned if you do, damned if you don’t’ applies to all women’s performances: how many times have I heard people say they find, for instance, Judith Butler’s or Lauren Berlant’s arguments or language “too complex” or “too difficult”, only to reduce them, on the occasions when they do make the effort to engage, to being “about gender” or “about sexuality” (it hardly warrants mentioning that the same people are likely to diligently plod through Heidegger, Sartre or Foucault without batting an eyelid and, speaking of sexuality, without reducing Foucault’s work on power to it). The implication, of course, is that writers or thinkers who are not men have the obligation to persuade, to enchant readers/consumers into thinking their argument is worth giving time to.

This is something I’ve often observed in how people relate to the arguments of women and nonbinary intellectuals: “They did not manage to convince me”, or “Well, let’s see if she can get away with it”. The problem is not just the casual use of pronouns (note how men thinkers retain their proper names – Sartre, Foucault – while women slip into being a “she”). It’s the expectation that it is their (her) job to convince you, to lure you. Because, of course, your time is more valuable than hers, and of course, there are all these other men you would/should be reading instead, so why bother? It is not the slightest bit surprising that this kind of intellectual habit lends itself all too easily to epistemic positioning that leads to epistemic erasure, but also that it becomes all too easily perpetuated by everyone, including those who claim to care about such things.

One of the things I hope I managed to convey in the Ethics of Ambiguity reading group I ran at the end of 2022 and the beginning of 2023 is not to read intellectuals who are not white men in this way. Not to sit back with your arms folded and let “her” convince you. Simone Weil, another genius – and a woman – wrote that attention is the primary quality of love we can give to each other. The quality of intellectual attention we give to the pieces we read has to be of the same kind to count as anything but a narrow, self-aggrandizing gesture. In other words, a commitment to equality means nothing without a commitment to equality of intellectual attention, and the constant practice and reflection required to sustain and improve it.

Enjoyed this? Try https://journals.sagepub.com/doi/10.1177/00113921211057609

and https://www.thephilosopher1923.org/post/philosophy-herself

You can’t ever go back home again

At the start of December, I took the boat from Newcastle to Amsterdam. I was in Amsterdam for a conference, but it is also true that I used to spend a lot of time in Amsterdam – Holland in general – both for private reasons and for work, between 2010 and 2016. Then, after a while, I took a train to Berlin. Then another, a sleeper train, to Budapest. Then a bus to Belgrade.

To wake up in Eastern Europe is to wake up in a context in which history has always already happened. To state this, of course, is a cliché; thinking, and writing, about Eastern Europe is always already infused with clichés. Those of us who come from this part of the world – what Maria Tumarkin marks so aptly as “Eastern European elsewheres” – know. In England, we exist only as shadow projections of a self, not even important enough to be former victims/subjects of the Empire. We are born into the world where we are the Other, so we learn to think, talk, and write of ourselves as the Other. Simone de Beauvoir wrote about this; Frantz Fanon wrote about this too.

To wake up in Berlin is to already wake up in Eastern Europe. This is where it used to begin. To wake up in Berlin is to know that we are always already living in the aftermath of a separation. In Eastern Europe, you know the world was never whole.

I was eight when the Berlin Wall fell. I remember watching it on TV. Not long after, I remember watching a very long session of the Yugoslav League of Communists (perhaps this is where my obsession with watching Parliament TV comes from?). It seemed to go on forever. My grandfather seemed agitated. My dad – whom I only saw rarely – said “Don’t worry, Slovenia will never secede from Yugoslavia”. “Oh, I think it will”, I said*.

When you ask “Are you going home for Christmas?”, you mean Belgrade. To you, Belgrade is a place of clubs and pubs, of cheap beer and abundant grilled meat**. To me, Belgrade is a long dreadful winter, smells of car fumes and something polluting (coal?) used for fuel. Belgrade is waves of refugees and endless war I felt powerless to stop, despite joining the first anti-regime protest in 1992 (at the age of 11), organizing my class to join one in 1996 (which almost got me kicked out of school, not for the last time), and inhaling oceans of tear gas when the regime actually fell, in 2000.

Belgrade is briefly hoping things would get better, then seeing your Prime Minister assassinated in 2003; seeing looting in the streets of Belgrade after Kosovo declared independence in 2008 and – while watching the latter on YouTube, from England – deciding that maybe there was nowhere to return to. Nowadays, Belgrade is a haven of crony capitalism equally indebted to Russian money, Gulf real estate, and Chinese fossil fuel exploitation that makes its air one of the most polluted in the world. So no, Belgrade never felt like home.

Budapest did, though.

It may seem weird that the place I felt most at home is a place where I barely spent three years. My CV will testify that I lived in Budapest between 2010 and 2013, first as a visiting fellow, then as an adjunct professor at the Central European University (CEU). I don’t have a drop of Hungarian blood (not that I know of, at least, though with the Balkans you can never tell). My command of the language was, at best, perfunctory; CEU is an American university and its official language is English. Among my friends – most of whom were East-Central European – we spoke English (some of us have other languages in common), and we still do. And while this group of friends did include some people who would be described as ‘locals’ – that is, Budapest-born and raised – we were, all of us, outsiders, brought together by something that was more than chance and a shared understanding of what it meant to be part of the city***.

Of course, the CV will say that what brought us together was the fact that we were all affiliated with CEU. But CEU is no longer in Budapest; since 2020, it has relocated to Vienna, forced out by the Hungarian regime’s increasingly relentless campaign against anything that smacks of ‘progressivism’ (are you listening, fellow UK academics?). Almost all of my friends had left before that, just like I did. In 2012, increasingly skeptical about my chances of acquiring a permanent position in Western academia with a PhD that said ‘University of Belgrade’ (imagine, it’s not about merit), I applied to do a second PhD at Cambridge. I was on the verge of accepting the offer when I also landed that most coveted of academic premia: a Marie Curie postdoc position attached to an offer of a permanent – tenured – position, in Denmark****.

Other friends also left. For jobs. For partners’ jobs. For parenthood. For politics. In academia, this is what you did. You swallowed and moved on. Your CV was your life, not its reflection.

So no, there is no longer a home I can return to.

And yet, once there, it comes back. First as a few casually squeezed-out words to the Hungarian conductors on the night train from Berlin; then as a vocabulary of 200+ items that, though rarely used, enabled me to navigate the city, its subways, markets, and occasionally even public services (the high point of my Hungarian fluency was being able to follow – and even part-translate – the Semmelweis Museum curator’s talk! :)). Massolit, the bookshop that also exists in Krakow, which I visited on a goodbye-to-Eastern-Europe trip from Budapest via Prague and Krakow to Ukraine (in 2013, right before the annexation). Gerlóczy utca, home to the French restaurant in which I once left a massive tip for a pianist who played so beautifully that I was happy to be squeezed in at the smallest table, right next to the coat stand. Most, which means ‘bridge’ in Serbian (and Bosnian, and Croatian) and ‘now’ in Hungarian. In Belgrade, I now sometimes rely on Google Maps to get around; in Budapest, the map of the city is buried so deep in my mental compass that I end up wherever I am supposed to be going.

This is what makes the city your own. Flow, like the Danube, massive as it meanders between the city’s two halves, which do not exactly make a whole. Like that book by the psychologist Mihaly Csikszentmihalyi (a Hungarian name, btw). Like my academic writing, which, uncoupled from the strictures of the British university term, flows.

Budapest has changed, but the old and the new overlap in ways that make it impossible not to remember. Like the ‘twin’ cities of Besźel and Ul Qoma in the fictional universe of China Miéville’s The City and the City (the universe was, of course, modelled on Berlin, but Besźel is Budapest out and out, save for the sea), the memory and its present overlap in distinct patterns that we are trained not to see. Being in one precludes being in the other. But there are rumours of a third city, Orciny, one that predates both. Believing in Orciny is considered a crime, though. There cannot be a place where the past and the future are equally within touching distance. Right?

CEU, granted, is no longer there as an institution; though the building (and the library) remains, most of its services, students, and staff are now in Vienna. I don’t even dare go onto the campus; the last time I was there, in 2017, I gave a keynote about how universities mediate disagreement. The green coffee shop with the perennially grim-faced person behind the counter – the one where we went to get good coffee before Espresso Embassy opened – is no longer there. But Espresso Embassy still stands, bigger. Now, of course, there are places to get good coffee everywhere: Budapest is literally overrun by them. The best I pick up is from the Australian coffee shop, which predates my move. Their shop front celebrates their 10th anniversary. Soon, it will be 10 years since I left Budapest.

Home: the word used to fill me with dread. “When are you going home?”, they would ask in Denmark, perhaps to signify the expectation that I would be going to Belgrade for the winter break, perhaps to reflect the idea that all immigrants are, fundamentally, guests. “I live here”, I used to respond. “This is my home”. On bad days, I’d add some combo of the information I used to point out just how far from assumed identities I was: I don’t celebrate Christmas (I’m atheist, for census purposes); if I did, it would be on a different date (Orthodox Christian holidays in Serbia observe the Julian calendar, which currently runs 13 days behind the Gregorian); thanks, I’ll be going to India (I did, in fact, go to India over the Christmas holidays the first year I lived in Denmark, though not exactly in order to spite everyone). But above and beyond all this, there was a simpler, flatter line: home is not where you return, it’s the place you never left.

In Always Coming Home, another SF novel about finding the places we (n)ever left, Ursula Le Guin retraces a past from the point of view of a speculative future. This future is one in which the world – in fact, multiple worlds – have failed. Like Eastern Europe, it is a sequence of apocalypses whose relationship can only be discovered through a combination of anthropology and archaeology, but one that knows that space and its materiality exist only as we have already left them behind; we cannot dig forwards, as it were.

Am I doing the same, now? Am I coming home to find out why I have left? Or did I return from the future to find out I have, in fact, never left?

Towards the end of The City and the City, the main character, Tyador Borlú, gets apprehended by the secret police monitoring – and punishing – instances of trespass (Breach) between the two cities, the two worlds. But then he is taken out by one of the Breach – Ashil – and led through the city in a way that allows him to finally see them not as distinct, but as parts of a whole.

Everything I had been unseeing now jostled into sudden close-up. Sound and smell came in: the calls of Besźel; the ringing of its clocktowers; the clattering and old metal percussion of the trams; the chimney smell; the old smells; they came in a tide with the spice and Illitan yells of Ul Qoma, the clatter of a militsya copter, the gunning of German cars. The colours of Ul Qoma light and plastic window displays no longer effaced the ochres and stone of its neighbour, my home.

‘Where are you?’ Ashil said. He spoke so only I could hear. ‘I . . .’

‘Are you in Besźel or Ul Qoma?’

‘. . . Neither. I’m in Breach.’

‘You’re with me here.’

We moved through a crosshatched morning crowd. ‘In Breach. No one knows if they’re seeing you or unseeing you. Don’t creep. You’re not in neither: you’re in both.’

He tapped my chest. ‘Breathe.’

(Loc. 3944)

Breathe.

*Maybe this is where the tendency not to be overly impressed by the authority of men comes from (or authority in general, given that my father was a professor of sociology and I was, at that point, nine years old, and also right).

** Which I also do not benefit from, as I do not eat meat.

*** Some years later, I would understand that this is why the opening lines of the Alexandria Quartet always resonated so much.

**** How I ended up doing a second PhD at Cambridge after all and relocating to England permanently is a different story, one that I part-told here.

When it ends

In the summer of 2018, I came back to Cambridge from one of my travels to a yellowed, dusty patch of land. The grass – the only thing that grew in the too-shady back garden of the house my partner and I were renting – had not only wilted; it had literally burnt to the ground.

I burst into tears. As I sat in the garden crying, to (I think) the dismay of my increasingly bewildered partner, I pondered what a scene of death so close to home was doing – what it was doing in my back yard, and what it was doing to me. For it was neither the surprise nor the scale that shook me: I had witnessed both human and non-human destruction much vaster than a patch of grass in Cambridge, and I had spent most of the preceding year and some reading on the politics, economics, and – as the famed expression goes – ‘the science’ of climate change (starting with the excellent Anthropocene reading group I attended while living in London), so I was well versed, by then, in precisely what was likely to happen, how and when. It wasn’t, either, the proximity, otherwise assumed to be a strong motivator: I certainly did not need climate change to happen in my literal ‘back yard’ in order to become concerned about it. If nothing else, I had come back to Cambridge from a prolonged stay in Serbia, where I had been observing the very same things, detailed here (including preparations for the mineral extraction that would become the main point of contention in the protests against Rio Tinto in 2022). As for anyone who has lived outside the protected enclaves of the Global North, climate change has felt very real for quite some time.

What made me break down at the sight of that scorched patch of grass was its ordinariness – the fact that, in front of, beside, and around what for me was quite bluntly an extinction event, life seemed to go on as usual. No-one warned me my back garden was a cemetery. Several months before that, at the very start of the first round of UCU strikes in 2018, I had raised the question of pension funds invested in fossil fuels, only to be casually told that one of the biggest USS shares was in Royal Dutch Shell (USS, and the University of Cambridge, have reluctantly committed to divestment since, but this is yet to yield any results in the case of USS). While universities make pompous statements about sustainability, a substantial chunk of their funding and operating revenue goes to activities that are at best one step removed from directly contributing to the climate crisis, from international (air) travel to building and construction. At Cambridge, I ran a reading group called Ontopolitics of the Future, whose explicit question was: what survives in the Anthropocene? In my current experience, raising climate change tends to provoke uncomfortable silences, as if everyone had already accepted the inevitability of 1.5+ degrees of warming and the suffering it will bring.

This acceptance of death is a key feature of the concept of ‘slow death’ that Lauren Berlant introduced in Cruel Optimism:

“Slow death prospers not in traumatic events, as discrete time-framed phenomena like military encounters and genocides can appear to do, but in temporally labile environments whose qualities and whose contours in time and space are often identified with the presentness of ordinariness itself” (Berlant, 2011: 100).

Berlant’s emphasis on the ordinariness of death is a welcome addition to theoretical frameworks (like Foucault’s bio-, Mbembe’s necro- or Povinelli’s onto-politics) that see the administration of life and death as effects of sovereign power:

“Since catastrophe means change, crisis rhetoric belies the constitutive point that slow death—or the structurally induced attrition of persons keyed to their membership in certain populations—is neither a state of exception nor the opposite, mere banality, but a domain where an upsetting scene of living is revealed to be interwoven with ordinary life after all” (Berlant, 2011: 102).

Over the past year and some, I’ve spent a lot of time thinking about the concept of ‘slow death’ in relation to the Covid-19 pandemic (my contribution to the edited special issue on Encountering Berlant should be coming out in The Geographical Journal sometime this year). However, what brought back the scorched grass in Cambridge as I sat at home during the UK’s hottest day on record in 2022 was not the (inevitable) human, non-human, or infrastructural cost of climate change; it was, rather, the observation that for most academics life seemed to go on as usual, if a little hotter. From research concerns to driving to moaning over (the absence of) AC, there seemed to be little reflection on how our own modes of knowledge production – not to mention lifestyles – were directly contributing to heating the planet.

Of course, the paradox of knowledge and (in)action – or knowing and (not) doing – has long been at the crux of my own work, from performativity and the critique of neoliberalism to the use of scientific evidence in the management of the Covid-19 pandemic. But with climate change, surely it has to be obvious to everyone that there is no way to just continue business as usual, that – while effects are, of course, differentially distributed according to privilege and other kinds of entitlement – no-one is really exempt from it?

Or so I thought, as I took an evening walk and passed a dead magpie on the pavement, which made me think of the birds dying from heat exhaustion in India earlier in May (luckily, no other signs of mass bird extinction were in sight, so I returned home, already a bit light-headed from the heat). But as I absent-mindedly scrolled through Twitter (and attended part of a research meeting), what seemed obvious was the clear disconnect between modes of knowing and modes of being in the world. On the one hand, everyone was too hot, commenting on the unsustainability of housing, or the inability of transport networks to sustain temperatures over 40 degrees Celsius. On the other, academic knowledge production seemed to go on as if things such as ‘universities’, ‘promotions’, or ‘reviews’ had the span of geological time, rather than being – for the most part – a very recent blip in precisely the thing that led to this degree of warming: capitalism, and the drive to (over)produce, (over)compete, and expand.

It is true that these kinds of challenges – like existential crises – can really make people double down on whatever positions and identities they already have. This is quite obvious in the case of some political divisions – with, for instance, the death spirals of Covid-denialism, misogyny, and transphobia – but it happens in less explicitly polarizing ways too. In the context of knowledge production, this is something I have referred to as the combination of epistemic attachment and ontological bias. Epistemic attachment refers to being attached to our objects of knowledge; these can be as abstract as ‘class’ or ‘social structure’ or as concrete as specific people, problems, or situations. The relationship between us (as knowers) and what we know (our objects of knowledge) is the relationship between epistemic subjects and epistemic objects. Ontological bias, on the other hand, refers to the fact that our ways of knowing the world become so constitutive of who we are that we can fail to register when the conditions that rendered this mode of knowledge possible (or reliable) no longer obtain. (This, it is important to note, is different from having a ‘wrong’ or somehow ‘distorted’ image of epistemic objects; it is entirely conceivable to have an accurate representation on the wrong ontology, and vice versa.)

This is what happens when we carry on with academic research (or, as I’ve recently noted, the circuit of academic rituals) in a climate crisis. It is not that our analyses and publications stop being more or less accurate, more or less cited, more or less inspiring. Nor do the racism, classism, ableism, and misogyny of academia stop. It’s just that, technically speaking, the world in which all of these things happen is no longer the same world. The 1.5-degree warmer world (let alone 2 or 2.5 degrees, more or less certain now) is no longer the same world that gave rise to the interpretative networks and theoretical frameworks we overwhelmingly use.

In this sense, to me, continuing with academia as business as usual (only with AC) isn’t even akin to the proverbial polishing of brass on the Titanic, not least because the iceberg has likely already melted, or at least calved, several times over. What it brings to mind, instead, is Jeff VanderMeer’s Area X trilogy, and the way in which professional identities play out in it.

I’ve already written about Area X, in part because the analogy with climate change presents itself, and in part because I think that – in addition to Margaret Atwood’s MaddAddam and Octavia Butler’s Parables – it is the best literary (sometimes almost literal) depiction of the present moment. Area X (or Southern Reach, if you’re in the US) is about an ‘event’ – that is at the same time a space – advancing on the edge of the known, ‘civilized’ world. The event/space – ‘Area’ – is, in a clear parallel to the Strugatskys’ Zone, something akin to a parallel dimension: a world like our own, within our own, and accessible from our own, but not exactly hospitable to us. In VanderMeer’s trilogy, Area X is a lush green, indeed overgrown, space; as in the Zone, ‘nature is healing’ has a more ominous sound to it, for in Area X people, objects, and things disappear. Or reappear. Like bunnies. And husbands.

The three books of Area X are called Annihilation, Authority, and Acceptance. In the first book, the protagonist – whom we know only as the Biologist – goes on a mission to Area X, the area that has already swallowed (or maybe not) her husband. The other members of the expedition, whom we also know only by profession – the Anthropologist, the Psychologist – are also women. The second book, Authority, follows the chief administrator of Area X – whom we know as Control – as the area keeps expanding. Control eventually follows the Biologist into Area X. The third book – well, I’ll stop with the plot spoilers here, but let’s just say that the Biologist is no longer called the Biologist.

This, if anything, is the source of the slight reservation I have towards the use of professional identities, authority, and expertise in contexts like the climate crisis. Scientists for XR and related initiatives are both incredibly brave (especially those risking arrest, something I, as an immigrant, cannot do) and – needless to say – morally right; but the underlying emphasis on ‘the science’ too often relies on the assumption that right knowledge will lead to right action, which tends not to hold even for many ‘professional’ academics. In other words, it is not exactly that people do not act on climate change because they do not know or do not believe the science (some do, at least). It is that systems and institutions – and, in many cases, this includes systems and institutions of knowledge production, such as universities – are organized in ways that make any kind of action that would refuse to reproduce (let alone actually disrupt) the logic of extractive capitalism increasingly difficult.

What to do? It is clear that we are now living on the boundary of Area X, and it is fast expanding. Area X is what was in my back garden in Cambridge. Area X is outside when you open the windows in the north of England and what drifts inside has the temperature of the jet-engine exhaust of a plane that has just landed. The magpie that was left to die in the middle of the road in Jesmond crossed Area X.

For my part, I know it is no longer sufficient to approach Area X as the Sociologist (or the Theorist, or the Anthropologist, or whatever other professional identity I have – reluctantly, as with all identities – assumed); I tried doing that for Covid-19, and it did not get very far. Instead, I’d urge my academic colleagues to seriously start thinking about what we are and what we do when these labels – Sociologist, Biologist, Anthropologist, Scientist – no longer have a meaning. For this moment may come earlier than many of us can imagine; by then, we had better have worked out the relationship between annihilation, authority, and acceptance.

They’ll come for you next

I saw ‘A Night of Knowing Nothing’ tonight, probably the best film I’ve seen this year (alongside The Wheel of Fortune and Fantasy, but they’re completely different genres – I could say ‘A Night of Knowing Nothing’ is the best political film I’ve seen this year, but that would take us down the annoying path of ‘what is political’). There was only one other person in the cinema; this may be a depressing reflection of the local audiences’ autofocus (though this autofocus, at least in my experience, did tend to encompass corners of the former Empire), but given my standard response to the lovely people at Tyneside’s ‘Where would you like to sit?’ – ‘Close to the aisle, as far away from other people’ – I couldn’t complain.

The film is part documentary, part fiction, told from the angle of an anonymous woman student (who goes by ‘L.’) whose letters document the period of student strikes at the Film and Television Institute of India (FTII), but also, more broadly, the relationship between the ascendance of Modi’s regime and student protests at Jawaharlal Nehru University (JNU) in New Delhi in 2016, as well as related events – including the violent attacks by masked mobs on JNU and the arrests at Aligarh Muslim University in 2020*.

Where the (scant) reviews are right is that the film is also about religion, caste, and the (both ‘slow’ and rapid) violence unleashed by supporters of the nationalist (‘Hindutva’) project in the Bharatiya Janata Party (BJP) and its student wing, the Akhil Bharatiya Vidyarthi Parishad (ABVP).

What they don’t mention, however, is that it is also about student (and campus) politics, solidarity, and what to do when your right to protest is literally being crushed (one particularly harrowing scene – at least to anyone who has experienced police violence – consists of CCTV footage of what seem to be uniformed men breaking into the premises of one of the universities and then randomly beating students trying to escape through a small door; according to reports, policemen were on site but did nothing). Many of the names mentioned in the film – both through documentary footage and L.’s letters – will end up in prison, some possibly tortured (one of L.’s interlocutors says he does not want to talk about it for fear of dissuading other students from protest); one will commit suicide. Throughout all this, what the footage shows are nights of dancing; impassioned speeches; banners and placards that call out the neo-nationalist government and its complicity not only with violence but also with perpetuating poverty, casteism, and Islamophobia. And solidarity, solidarity, solidarity.

This is the message that transpires most clearly throughout the film. The students have managed to connect two things: the role of education in perpetuating class/caste divisions – dismissiveness and abuse towards Dalit students, the increase in tuition meant to exclude those whose student bursaries support their families too – and the strengthening of nationalism, or neo-nationalism. That the right-wing rearguard rules through stoking envy and resentment towards the ‘undeserving’ poor (e.g. ‘welfare scroungers’) is not new; that it can use higher education, including initiatives aimed at widening participation, to do this, is. In this sense, Modi’s supporters’ strategy seems to be to co-opt the contempt for ‘lazy’ and ‘privileged’ students (particularly those with state bursaries) and turn it into an accusation of ‘anti-nationalism’, which is equated with being critical of any governmental policy that deepens existing social inequalities.

It wouldn’t be very anthropological to draw easy parallels with the UK government’s war on Critical Race Theory, which equally tends to locate racism in attempts to call it out, rather than in the institutions – and policies – that perpetuate it; but the analogy almost presents itself. Where it fails, more obviously, is that students – and academics – in the UK still (but just about) have a broader scope for protest than their Indian counterparts. Of course, the new Bill on Freedom of Speech (Academic Freedom) proposes to eliminate some of that, too. But until it does, it makes sense to remember that rights that are not exercised tend to get lost.

Finally, what struck me about A Night of Knowing Nothing is the remarkable show of solidarity not only from workers, actors, and just (‘normal’) people, but also from students across campuses (it bears remembering that in India these are often universities in different states, thousands of miles away from each other). This was particularly salient in relation to the increasingly localized nature of the fights for both pensions and the ‘Four Fights’ of union members in UK higher education. Of course, union laws make it mandatory that there is both a local and a national mandate for strike action, and it is true that we express solidarity when cuts are threatened to colleagues in the sector (e.g. Goldsmiths, or Leicester a bit before that). But what I think we do not realize is that this is, eventually, going to happen everywhere – there is no university, no job, and no senior position safe enough. The night of knowing nothing has lasted for too long; it is, perhaps, time to stop pretending.

Btw, if you happen to live in Toon, the film is showing tomorrow (4 May) and on a few other days. Or catch it at your local cinema – you won’t regret it.

*If you’re wondering why you haven’t heard of these, my guess is they were obscured by the pandemic; I say this as someone who has friends from India and has been following Indian HE quite closely between 2013 and 2016 (though somewhat less since), and I still *barely* recall reading/hearing about any of these.

On doing it badly

I’m reading Christine Korsgaard’s ‘Self-Constitution: Agency, Identity, and Integrity’ (2009) – I’ve found myself increasingly drawn recently to questions of normative political philosophy or ‘ideal theory’, which I’ve previously tended to eschew analytically, I presume as part-pluralist, part-anthropological reflex.

In chapter 2 (‘The Metaphysics of Normativity’), Korsgaard engages with Aristotle’s analysis of objects as an outcome of organizing principles. For instance, what makes a house a house rather than just a ‘heap of stones and mortar and bricks’ is its function of keeping out the weather, and this is also how we should judge the house – a ‘good’ house is one that fulfils this function; a bad house is one that does not, or at least not as well.

This argument, of course, is a well-known one, endlessly discussed in social ontology (at least among the Cambridge Social Ontology crowd, which I still visit). But Korsgaard emphasizes something that had previously completely escaped my attention, which is the implicit argument about the relationship between normativity and knowledge:

Now, it is entirely true that ‘seeing what things do’ is a pretty neat description of my work as a theorist. But there is an equally important one, which is seeing what things can or could do. This means looking at ‘things’ – in my case, usually concepts – and understanding what using them can do; that is, looking at them relationally. (I’m parking the discussion about privileging the visual/observer approach to theory for the time being, as it’s both a well-known criticism in e.g. feminist & Indigenous philosophy *and* one other people have written about much better than I ever could.) You are not the same person when looking at one kind of social object and at another; nor, importantly, is the social object itself the same, ‘unproblematically’ (meaning that yes, it is possible to reach consensus about social objects – e.g. what is a university, or a man, or a woman, or fascism – but it is not possible to reach it without disagreement; the only difference is whether the disagreement is open or suppressed). I’m also parking the discussion about observer effects, indefinitely: if you’re interested in how that theoretical argument looks without butchering theoretical physics, I’ve written about it here.

This also makes the normative element of the argument more difficult, as it requires delving not only into the ‘satisficing’ or ‘fitness’ analysis (a good house is a house that does the job of being a house), but also into the analysis of performative effects (is a good house a house that does its job in a way that eventually turns ‘houseness’ into something bad?). To note, this is distinct from other issues Korsgaard recognizes – e.g. that a house constructed in a place that obscures the neighbours’ view is bad, but not a bad house, as its ‘badness’ is not derived from its being a house, but from its position in space (the ‘where’, not the ‘what’). This analysis may – and I emphasize may – be sufficient for discrete (and Western) ontologies, where it is entirely conceivable that the same house could be positioned somewhere else and thus remain a good house, while no longer being ‘bad’ for the neighbourhood as a whole. But it clearly encounters problems with any kind of relational, environment-based, or contextual ontology (a house is not a house only by virtue of being sufficient to keep out the elements for its inhabitants, but also – and, possibly, more importantly – by being positioned in a community; and a community that is ‘poisoned’ by a house that blocks everyone’s view is not a good community for houses).

In this sense, it makes sense to ask when what an object does turns into badness for the object itself. That is, what would it mean for a ‘good’ house to be, at the same time, a bad house? Plot spoiler: I believe this is likely true of all social objects. (I’ve written about ambiguity here and also here). The task of the (social) theorist – what, I think, makes my work social, both in the sense of applying to the domain of interaction between multiple human beings and in the sense of having relevance to someone beyond me – is to figure out what kinds of contexts make one more likely than the other. Under what conditions do mostly good things (like, for instance, academic freedom) become mostly bad things (like, for instance, a form of exclusion)?

I’ve been thinking about this a lot in relation to what constitutes ‘bad’ scholarship (and, I guess, by extension, a bad scholar). Having had the dubious pleasure of encountering people who teach different combinations of neocolonial, right-wing, and anti-feminist ‘scholarship’ over the past couple of years (England, and especially the place where I work, is a trove of surprises in this sense), it strikes me that the key question is under what conditions this kind of work – which universities tend to ignore because it ‘passes’ as scholarship and gives them the veneer of presenting ‘both sides’ – turns the whole idea of scholarship into little more than a competition for followers on either of the ‘sides’. This brings me to the question that, I think, should be the source of normativity for academic speech, if anything: when is ‘two-sideism’ destructive to knowledge production as a whole?

This is what Korsgaard says:


Is bad scholarship just bad scholarship, or is it something else? When does the choice to not know about the effects of ‘platforming’ certain kinds of speakers turn from the principle of liberal neutrality to wilful ignorance? Most importantly, how would we know the difference?

Does academic freedom extend to social media?

There is a longer discussion about this that has been going on in the US, continental Europe, and many other parts of the academic/policy/legal/media complexes and their intersections. Useful points of reference are the Magna Charta Universitatum (1988), in part developed to stimulate the ‘transition’ of Central/Eastern European universities away from communism, and the European University Association’s Autonomy Scorecard, which represents an interesting case study for thinking through tensions between publicly (state) funded higher education and principles of freedom and autonomy (Terhi Nokkala and I have analyzed it here). Discussions in the UK, however, predictably (though hardly always justifiably) transpose most of the elements, political/ideological categories, and dynamics from the US; in this sense, I thought an article I wrote a few years back – mostly about theorising complex objects and their transformation, but with extensive analysis of two (and a half) case studies of ‘controversies’ involving academics’ use of social media – could offer a good reference point. The article is available (Open Access!) here; the subheadings that engage with social media in particular are pasted below. If citing, please refer to the following:

Bacevic, J. (2018). With or without U? Assemblage theory and (de)territorialising the university, Globalisation, Societies and Education, 17:1, 78-91, DOI: 10.1080/14767724.2018.1498323

——————————————————————————————————–

Boundary disputes: intellectuals and social media

In an analogy for the Cartesian philosophy of mind, Gilbert Ryle famously described a hypothetical visitor to Oxford (Ryle 1949). This astonished visitor, Ryle argued, would go around asking whether the University was in the Bodleian Library, the Sheldonian Theatre, the colleges, and so forth, all the while failing to understand that the University was not in any of these buildings per se. Rather, it was all of these combined, but also the visible and invisible threads between them: people, relations, books, ideas, feelings, grass; colleges and Formal Halls; sub fusc and port. It also makes sense to acknowledge that these components can be parts of other assemblages: someone can, for instance, equally be an Oxford student and a member of the Communist Party. ‘The University’ assembles these and agentifies them in specific contexts, but they exist beyond those contexts: port is produced and shipped before it becomes College port served at a Formal Hall. And while it is possible to conceive of boundary disputes revolving around port, more often they involve people.

The cases analysed below involve ‘boundary disputes’ that applied to intellectuals using social media. In both cases, the intellectuals were employed at universities; and, in both, their employment ceased because of their activity online. While in the press these disputes were usually framed around issues of academic freedom, they can rather be seen as instances of reterritorialization: redrawing of the boundaries of the university, and reassertion of its agency, in relation to digital technologies. This challenges the assumption that digital technologies serve uniquely to deterritorialise, or ‘unbundle’, the university as traditionally conceived.

The public engagement of those who authoritatively produce knowledge – in sociological theory traditionally referred to as ‘intellectuals’ – has an interesting history (e.g. Small 2002). It was only in the second half of the twentieth century that intellectuals became employed en masse by universities: with the massification of higher education and the rise of the ‘campus university’, in particular in the US, came what some saw as the ‘decline’ of the traditional, bohemian ‘public intellectual’ reflected in Mannheim’s (1936) concept of the ‘free-floating’ intelligentsia. Russell Jacoby’s The Last Intellectuals (1987) argues that this process of ‘universitisation’ led to the disappearance of the intellectual ferment that once characterised the American public sphere. With tenure, he claimed, came the loss of critical edge; intellectuals became tame and complacent, too used to the comfort of a regular salary and an office job. Today, however, the source of the decline is no longer the employment of intellectuals at universities, but its absence: precarity, that is, the insecurity and impermanence of employment, is seen as the major threat not only to public intellectualism, but to universities – or at least the notion of knowledge as a public good – as a whole.

This suggests that there has been a shift in the coding of the relationship between intellectuals, critique, and universities. In the first part of the twentieth century, the function of social critique was predominantly framed as independent of universities; in this sense, ‘public intellectuals’ were at least as likely – if not more likely – to be writers, journalists, and other men (since they were predominantly men) of ‘independent means’ as they were to be academic workers. This changed in the second half of the twentieth century, with both the massification of higher education and the diversification of the social strata intellectuals were likely to come from. The desirability of university employment increased with the decreasing availability of permanent positions. In part because of this, precarity was framed as one of the main elements of the neoliberal transformation of higher education and research: insecurity of employment, in this sense, became the ‘new normal’ for people entering the academic profession in the twenty-first century.

Some elements of precarity can be directly correlated with processes of ‘unbundling’ (see Gehrke and Kezar 2015; Macfarlane 2011). In the UK, for instance, certain universities rely on platforms such as Teach Higher to provide the service of employing teaching staff, who deliver an increasing portion of courses. In this case, teaching associates and lecturers are no longer employees of the university; they are employed by the platform. Yet even when this is not the case, we can talk about processes of deterritorialization, in the sense that the practice is part of a broader weakening of the link between teaching staff and the university (cf. Hall 2016). It is not only the security of employment that changes in the process; universities, in this case, also own the products of teaching as practice – for instance, course materials – so that when staff depart, the university can continue to use this material for teaching, with someone else in charge of ‘delivery’.

A similar process is observable when it comes to ownership of the products of research. In the context of periodic research assessment and competitive funding, some universities have resorted to ‘buying’, that is, offering highly competitive packages to staff with a high volume of publications, in order to boost their REF scores. The UK research councils and particularly the Stern Review (2016) include measures explicitly aimed at countering this practice, but these, in turn, harm early career researchers, who fear that institutional ‘ownership’ of their research output would create a problem for their employability in other institutions. What we can observe, then, is a disassembling of knowledge production, in which the relationship between universities, academics, and the products of their labour – whether teaching or research – is increasingly weakened, challenged, and reconstructed.

Possibly the most tenuous link, however, applies to neither teaching nor research, but to what is referred to as universities’ ‘Third mission’: public engagement (e.g. Bacevic 2017). While academics have to some degree always been engaged with the public – most visibly those who have earned the label of ‘public intellectual’ – the beginning of the twenty-first century has, among other things, seen a rise in the demand for the formalisation of universities’ contribution to society. In the UK, this contribution is measured as ‘impact’, which includes any application of academic knowledge outside academia. While appearances in the media constitute only one of the possible ‘pathways to impact’, they have remained a relatively frequent form of engaging with the public. They offer the opportunity for universities to promote and strengthen their ‘brand’, but they also help academics gain reputation and recognition. In this sense, they can be seen as a form of extension; they position universities in the public arena and forge links with communities outside their ‘traditional’ boundaries. Yet this form of engagement can also provoke rather bitter boundary disputes when things go wrong.

In recent years, the case of Steven Salaita, professor of Native American studies and American literature, became one of the most widely publicised disputes between academics and universities. In 2013, Salaita was offered a tenured position at the University of Illinois. The appointment was made public and was awaiting formal approval by the Board of Trustees – usually a matter of pure technicality once an appointment has been recommended by academic committees. At the time, Israel was conducting one of its campaigns of daily shelling in the Gaza Strip. Salaita tweeted: ‘Zionists, take responsibility: if your dream of an ethnocratic Israel is worth the murder of children, just fucking own it already. #Gaza’ (Steven Salaita on Twitter, 19 July 2014). In August 2014, however, Salaita was informed by the Chancellor that the University was withdrawing the offer, citing his ‘incendiary’ posts on Twitter (Dorf 2014; Flaherty 2015).

Scandal erupted in the media shortly afterwards. It turned out that several of the university’s wealthy donors, as well as a few students, had contacted members of the Board demanding that Salaita’s offer be revoked. The Chancellor justified her decision by saying that the objection to Salaita’s tweets concerned standards of ‘civility’, not the political opinion they expressed, but the discussions inevitably revolved around questions of identity, campus politics, and the degree to which the two can be kept separate. This was exacerbated by a split within the American Association of University Professors, the closest thing the professoriate in the US has to a union: while the AAUP issued a statement of support for Salaita as soon as the news broke, Cary Nelson, the association’s former president and a prolific writer on issues of university autonomy and academic freedom, defended the Board’s decision. The reason? The protection awarded by the principle of academic freedom, Nelson claimed, extends only to tenured professors.

Very few people agreed with Nelson’s definition: eventually, the courts upheld Salaita’s case that the University of Illinois Board’s decision constituted a breach of contract. He was awarded a hefty settlement (ten times the annual salary he would have been earning at Illinois), but was not reinstated. This points to serious limitations of using ‘academic freedom’ as an analytical concept. While university autonomy and academic freedom are principles invoked by academics in order to protect their activity, their application in academic and legal practice is, at best, open to interpretation. A detailed report by Karran and Mallinson (2017), for instance, shows that both the understanding and the legal level of protection of academic freedom vary widely across European countries. In the US, the principle is often framed as part of freedom of speech and thus protected under the First Amendment (Karran 2009); but, as we have seen, this does not in any way insulate it against widely differing interpretations of how it should be applied in practice.

While the Salaita case can be considered foundational in terms of making these questions central to a prolonged public controversy as well as a legal dispute, navigating the terrain in which these controversies arise has become progressively more complicated. Carrigan (2016) and Lupton (2014) note that almost everyone, to some degree, is already a ‘digital scholar’. While most human resources departments, as well as graduate programmes, increasingly offer workshops or courses on ‘using social media’ or ‘managing your identity online’, the issue is clearly not just one of the right tool or skill. Inevitably, it comes down to the question of boundaries: that is, what ‘counts as’ public engagement in the ‘digital university’, and why? How is academic work seen, evaluated, and recognised? Last, but not least, who decides?

These controversies, rather than turning on questions of accountability or definitions of academic freedom, cannot be seen separately from questions of ontology, that is, questions about what entities are composed of, as well as how they act. This brings us back to assemblages: what counts as being a part of the university – and to what degree – and what does not? Does an academic’s activity on social media count as part of their ‘public’ engagement? Does it count as academic work, and should it be valued – or, alternatively, judged – as such? Do the rights (and protections) of academic freedom extend beyond the walls of the university, and in what cases? Last, but not least, which elements of the university exercise these rights, and which parts can refuse to extend them?

The case of George Ciccariello-Maher, until recently a professor of Politics and Global Studies at Drexel University, offers an illustration of how these questions play out in practice. On Christmas Day 2016, Ciccariello-Maher tweeted ‘All I want for Christmas is white genocide’, an ironic take on certain forms of right-wing critique of racial equality. Drexel University, which had been closed over the Christmas vacation, belatedly caught up with the ire the tweet had provoked among conservative users of Twitter, and issued a statement saying that ‘While the university recognises the right of its faculty to freely express their thoughts and opinions in public debate, Professor Ciccariello-Maher’s comments are utterly reprehensible, deeply disturbing and do not in any way reflect the values of the university’. After the ironic nature of the concept of ‘white genocide’ was repeatedly pointed out, both by Ciccariello-Maher himself and by some of his colleagues, the university apologised, but did not withdraw its statement.

In October 2017, the University placed Ciccariello-Maher on administrative leave, after his tweets about white supremacy as the cause of the Las Vegas shooting provoked a similar outcry among right-wing users of Twitter.1 Drexel cited safety concerns as the main reason for the decision – Ciccariello-Maher had been receiving racist abuse, including death threats – but it was obvious that his public profile was becoming too much to handle. Ciccariello-Maher resigned on 31 December 2017. His statement read: ‘After nearly a year of harassment by right-wing, white supremacist media and internet trolls, after threats of violence against me and my family, my situation has become unsustainable’.2 It also contained an indirect criticism of the university’s failure to protect him: in an earlier opinion piece, published right after the Las Vegas controversy, Ciccariello-Maher had written that ‘[b]y bowing to pressure from racist internet trolls, Drexel has sent the wrong signal: That you can control a university’s curriculum with anonymous threats of violence. Such cowardice notwithstanding, I am prepared to take all necessary legal action to protect my academic freedom, tenure rights and most importantly, the rights of my students to learn in a safe environment where threats don’t hold sway over intellectual debate.’3 The fact that, three months later, he no longer deemed it safe to continue doing so from within the university suggests that something had changed in the positioning of the university – in this case, Drexel – as a ‘bulwark’ against attacks on academic freedom.

Forms of capital and lines of flight

What do these cases suggest? In a deterritorialised university, the link between academics, their actions, and the institution becomes weaker. In the US, tenure is supposed to codify a stronger version of this link: hence Nelson’s attempt to justify Salaita’s dismissal as a consequence of the fact that he did not have tenure at the University of Illinois, and thus the institutional protection of academic freedom did not extend to his actions. Yet there is a clear sense of the ‘stretching’ of universities’ responsibilities or jurisdiction. Before the widespread use of social media, it was easier to distinguish between utterances made in the context of teaching or research, and others made, often quite literally, off-campus. This doesn’t mean that there were no controversies; however, the concept of academic freedom could be applied as a ‘rule of thumb’ to discriminate between forms of engagement that counted as ‘academic work’ and those that did not. In a fragmented and pluralised public sphere, and with the growing insecurity of academic employment, this concept is clearly no longer sufficient, if it ever was.

Of course, one might claim that in this particular case it would suffice to define the boundaries of academic freedom by conclusively limiting it to tenured academics. But that would not answer questions about the form or method of those engagements. Do academics tweet in a personal, or in a professional, capacity? Is it easy to distinguish between the two? While some academics have taken to disclaimers specifying the capacity in which they are engaging (e.g. ‘tweeting in a personal capacity’ or ‘personal views/do not express the views of the employer’), this only obscures the complex entanglement of individual, institution, and forms of engagement. It means that, in thinking about the relationship between individuals, institutions, and their activities, we have to take into account the direction in which capital travels. This brings us back to lines of flight.

The most obvious form of capital in motion here is symbolic. Intellectuals such as Salaita and Ciccariello-Maher gain large numbers of followers and visibility on social media in part because of their institutional position; in turn, universities encourage (and may even require) staff to list their public engagement activities and media appearances on their profile pages, as this increases the visibility of the institution. Salaita had been a respected and vocal critic of Israel’s policy and politics in the Middle East for almost a decade before being offered a job at the University of Illinois. Ciccariello-Maher’s Drexel profile page listed his involvement as

 … a media commentator for such outlets as The New York Times, Al Jazeera, CNN Español, NPR, the Wall Street Journal, the Washington Post, the Los Angeles Times and the Christian Science Monitor, and his opinion pieces have run in the New York Times’ Room for Debate, The Nation, The Philadelphia Inquirer and Fox News Latino.4

One would be forgiven for thinking that, until the unfortunate Tweet, the university supported and even actively promoted Ciccariello-Maher’s public profile.

The ambiguous nature of symbolic capital is illustrated by the case of another controversial public intellectual, Slavoj Žižek. The renowned ‘Elvis of philosophy’ is not readily associated with an institution; however, he in fact holds three institutional positions. Žižek is a fellow of the Institute of Philosophy and Social Theory of the University of Ljubljana, teaches at the European Graduate School, and, most recently, has been appointed International Director of the Birkbeck Institute for the Humanities. The Institute’s web page describes his appointment:

Although courted by many universities in the US, he resisted offers until the International Directorship of Birkbeck’s Centre came up. Believing that ‘Political issues are too serious to be left only to politicians’, Žižek aims to promote the role of the public intellectual, to be intellectually active and to address the larger public.5

Yet Žižek quite openly flaunts what comes across as a principled anti-institutional stance. Not long ago, a YouTube video in which he dismisses having to read students’ essays as ‘stupid’ attracted quite a degree of opprobrium.6 On the one hand, of course, what Žižek says in the video can be seen as yet another form of attention-seeking, or a testimony to the capacity of new social media to make anything and everything go ‘viral’. Yet what makes it exceptional is exactly its unexceptionality: Žižek is known for voicing opinions that are bound to prove controversial, or at least tread on the boundary of political correctness, and it is not a big secret that most academics do not find the work of essay-reading and marking particularly rewarding. But, unlike Žižek, they are not in a position to say so. Trumpeting disregard for one’s job on social media would probably seriously endanger it for most academics. As we saw in the examples of Salaita and Ciccariello-Maher, universities were quick to sanction opinions far less directly linked to teaching. The fact that Birkbeck was not bothered by this – in fact, it could be argued that this attitude contributed to the appeal of having Žižek, who had previously resisted ‘courting’ by universities in the US – serves as a reminder that symbolic capital has to be seen alongside other possible ‘lines of flight’.

These processes cannot be seen as simply arising from tensions between individual freedom on the one side, and institutional regulation on the other. The tenuous boundaries of the university become more visible in relation to lines of flight that combine persons and different forms of capital: economic, political, and symbolic. The Salaita controversy, for instance, is a good illustration of the ‘entanglement’ of the three. Within the political context – that is, the longer Israeli-Palestinian conflict, and especially the role of the US within it – and within a specific set of economic relationships – that is, the fact that US universities are to a great degree reliant on funds from their donors – Salaita’s statement became coded as a symbolic liability, rather than an asset. This runs counter to the way his previous statements were coded: instead of channelling symbolic capital towards the university, it resulted in the threat of economic capital ‘fleeing’ in the opposite direction, in the sense of donors withholding it from the university. When it comes to Ciccariello-Maher, from the standpoint of the university the individual literally acts as a nodal point of intersection between different ‘lines of flight’: on the one hand, the channelling of symbolic capital generated through his involvement as an influential political commentator towards the institution; on the other, the possible ‘breach’ of the integrity (and physical safety) of staff and students as its constituent parts via threats of physical violence against Ciccariello-Maher.

All of this suggests that deterritorialization can be seen as positive, and even actively supported – until, of course, the boundaries of the institution become too porous, in which case the university swiftly reterritorialises. In the case of the University of Illinois, the threat of withdrawn support from donors was sufficient to trigger the reterritorialization process by redrawing the boundaries of the university, symbolically leaving Salaita outside them. In the case of Ciccariello-Maher, it would be possible to claim that agency was distributed, in the sense that it was his decision to leave; yet a second look suggests that this was also a case of reterritorialization, inasmuch as the university refused to guarantee his safety, or that of his students, in the face of threats of white supremacist violence or disruption.

This also serves to illustrate why ‘unbundling’ as a concept is not sufficient to theorise the processes of assembling and disassembling that take place in (or on the same plane as) the contemporary university. Public engagement sits on a boundary: it is neither fully inside the university, nor is it ‘outside’ by virtue of taking place in the environment of traditional or social media. The impossibility of conclusively situating it ‘within’ or ‘without’ is precisely what hints at the arbitrary nature of boundaries. The contours of an assemblage thus become visible in such ‘boundary disputes’ as the controversies surrounding Salaita and Ciccariello-Maher or, alternatively, in their relative absence in the case of Žižek. While unbundling starts from the assumption that these boundaries are relatively fixed, and that it is only the components that change (more specifically, are included or excluded), assemblage theory allows us to reframe entities as instantiated through processes of territorialisation and deterritorialization, thus challenging the degree to which specific elements are framed (or coded) as elements of an assemblage.

Conclusion: towards a new political economy of assemblages

Reframing universities (and, by extension, other organisations) as assemblages thus allows us to shift attention to the relational nature of the processes of knowledge production. Contrary to narratives of the university’s ‘decline’, we can instead talk about a more variegated ecology of knowledge and expertise, in which the identity of particular agents (or actors) is not exhausted by their position with(in) or without the university, but rather performed through a process of generating, framing, and converting capitals. This calls for a longer and more elaborate study of the contemporary political economy (and ecology) of knowledge production, which would need to take into account multiple other actors and networks – from the more obvious, such as Twitter, to the less ‘tangible’ ones these afford, such as differently imagined audiences for intellectual products.

This also brings attention back to the question of economies of scale. Certainly, not all assemblages exist on the same plane. The university is a product of multiple forces, political and economic, global and local, but they do not necessarily operate on the same scale. For instance, we can talk about the relative importance of geopolitics in a changing financial landscape, but not about the impact of, say, digital technologies on ‘The University’ in absolute terms. Similarly, talking about effects of ‘neoliberalism’ makes sense only insofar as we recognise that ‘neoliberalism’ itself stands for a confluence of different and frequently contradictory forces. Some of these ‘lines of flight’ may operate in ways that run counter to the prior states of the object in question – for instance, by channelling funds, prestige, or ideas away from the institution. The question of (re)territorialisation, thus, inevitably becomes the question of the imaginable as well as actualised boundaries of the object; in other words, when is an object no longer an object? How can we make boundary-work integral to the study of the social world, and of the ways we go about knowing it?

This line of inquiry connects with a broader sociological tradition of the study of boundaries, as the social process of delineation between fields, disciplines, and their objects (e.g. Abbott 2001; Lamont 2009; Lamont and Molnár 2002). But it also brings in another philosophical, or, more precisely, ontological, question: how do we know when a thing is no longer the same thing? This applies not only to universities, but also to other social entities – states, regimes, companies, relationships, political parties, and social movements. The social definition of entities is always community-specific and thus in a sense arbitrary; similarly, how the boundaries of entities are conceived and negotiated has to draw on a socially-defined vocabulary that conceptualises certain forms of (dis-)assembling as potentially destructive to the entity as a whole. From this perspective, understanding how entities come to be drawn together (assembled), how their components gain significance (coding), and how their relations are strengthened or weakened (territorialisation) is a useful tool in thinking about beginnings, endings, and resilience – all of which become increasingly important in the current political and historical moment.

The transformation of processes of knowledge production intensifies all of these dynamics, and the ways in which they play out in universities. While these processes certainly contribute to the unbundling of the university’s different functions, the analysis presented in this article shows that the university remains a potent agent in the social world – though what the university is composed of can certainly differ. In this sense, while pronouncements of the ‘death’ of universities should be seen as premature, this serves as a potent reminder that understanding change depends, to a great extent, not only on how we conceptualise the mechanisms that drive it, but also on how we view the elements that make up the social world. The tendency to posit fixed and durable boundaries of objects – which I have elsewhere referred to as ‘ontological bias’7 – has, therefore, important implications for both scholarship and practice. This article hopes to have made a contribution towards questioning the boundaries of the university as one among these objects.

——————–

If you’re interested in reading more about these tensions, I also recommend Mark Carrigan’s ‘Social Media for Academics’ (Sage).

Night(mare) in Michaelmas*: or, an academic Halloween tale

Halloween, as the tradition goes, is the time when the curtain between the two worlds opens. Of course, in anthropology you learn that this is not really a tradition at all – all traditions are invented; it just depends how long ago. This Halloween, however, I would like to tell you a story about boundaries between worlds, and about those who stand, simultaneously, on both sides.

1. Straw (wo)men

Scarecrow, effigy, straw man: they are remarkably similar. Made of dried grass, leaves, and branches, sometimes dressed in rags, but rarely with recognizable personal characteristics. Personalizing is the province of Voodoo dolls, or of those who use them, of dark magic, and of violence, which can sometimes be serious and political. Yet they are all unmistakeably human: in this sense, they serve to attune us to the ordinariness – the unremarkability – of everyday violence.

Scarecrows stand on ‘our’ side, and guard our world – that is, the world that relies on agricultural production – against ‘theirs’ (that of crows, other birds, and non-human animals: they are, we are told, enemies). The sympathy and even pity we feel for scarecrows (witness The Wizard of Oz) shields us from the knowledge that scarecrows bear the disproportionate brunt of the violence we do to Others, and to other worlds. We made them the object of crows’ fear and hatred, so that they protect us from what we do not want to acknowledge: that our well-being, and our food, come only at the cost of destroying others’.

Effigies are less unambiguously ‘ours’. Regardless of whether they are remnants of *actual* human sacrifice (the evidence for this is somewhat thin), they belong both to ‘their’ world and to ‘ours’. ‘Theirs’ is the non-human world of fire, ash, and whatever remains once human artifices burn down. ‘Ours’ is the world of ritual, of collectivity, of the safe reinstatement of order. Effigies are thus simultaneously dead and alive. We construct them, but not to keep the violence – of Others, and towards Others, as with scarecrows – at bay; we construct them in order to restrain and absorb the violence directed towards our own kind. When we burn effigies, we aim to destroy what is evil, rotten, and polluting amongst ourselves. This is why effigies are such a threatening political symbol: they always herald violence in our midst.

Straw men, by contrast, are neither scarecrows nor effigies: we construct them so that we may – selfishly – live. A ‘straw man’ argument is one we use in order to make an argument easier to win. We do not engage with actual critique, or possible shortcomings, of our own reasoning: instead, we construct an imaginary opponent to make ourselves appear stronger. This is why it makes no sense to fear straw men, though there are good reasons to be suspicious of those who fashion them all too often. They do not cross boundaries between worlds: they belong fully, and exclusively, to this one.

Straw men are not the stuff of horror. Similarly, there is no reason to fear the scarecrow, unless you are a crow. Effigies, however, are different.

2. Face(mask) to face(mask)

Universities in the UK insist on face-to-face teaching, despite the legal challenge from the University and College Union, protests from individual academics, and the by now overwhelming evidence that there is no way to make classrooms fully ‘Covid-secure’. The justification usually takes the form ‘students expect *some* face-to-face teaching’. This, I believe, means university leadership fears that students (or, more likely, their parents, possibly encouraged by the OfS and/or The Daily Mail) would request tuition fee reimbursements if all teaching were to shift online. A more coherent interpretation of the stubborn insistence on f2f teaching is that shifting teaching online would mean many students would elect not to live in student accommodation. Student accommodation, in turn, is a major source of profit (and employment) not only for universities, but also for private landlords, businesses, and different kinds of services in cities that happen to have a significant student population.

In essence, then, f2f teaching serves to secure two sources of income, both disproportionately benefitting the propertied class. In this sense, it remains completely irrelevant who teaches face-to-face or, indeed, what is taught. This is obvious from the logic of guaranteeing face-to-face provision in all disciplines, not only those that might have a demonstrable need for some degree of physical co-presence (I’m thinking of those that use laboratories, or work with physical materials). The content, the delivery, and, above all, the rationale for maintaining face-to-face teaching remain unjustified. “They” (students?) expect to see “us” (teachers?) in flesh, blood, and, of course, facemask – which we hope will prevent the airborne particles of Coronavirus from infecting us, and thus keep us from getting ill, suffering the consequences, and potentially dying.

That this kind of risk would be an acceptable price for perfunctorily parading behind Perspex screens can only seem odd if we believe that what is involved in face-to-face teaching is us as human beings and individuals. But it is not: when we walk into the classroom, we are not individual academics, teachers, thinkers, writers, or whatever else we may be. We are the ‘face’ of ‘face-to-face’ teaching. We are the effigies.

3. On institutional violence

On Monday, I am teaching a seminar in social theory. Under ‘normal’ circumstances, this would mean leading small group discussions on activities, and readings, that students have engaged with. Under these circumstances, it will mean groups of socially distanced students trying to have a discussion about the readings while struggling to hear each other through face masks. Given that I struggle to communicate ‘oat milk flat white’ from behind a mask, I have serious doubts that I will manage to convey particularly sophisticated insights into social theory.

But this does not matter: I am not there as a lecturer, as a human being, or as a theorist. I am there to sublimate the violence that we are all complicit in. This violence concerns not only the systematic exposure to harm created by the refusal to acknowledge the risks of cramming human beings unnecessarily into closed spaces during a pandemic of an airborne disease, but also forms of violence specific to higher education. It includes the sporadic violence of the curriculum, still overwhelmingly white, male, and colonial (incidentally, I am teaching exactly such a session). More importantly, it includes the violence that we tacitly accept when we overlook the fact that ‘our’ universities subsist on student fees, and that fees are themselves products of violence. The capital that fees depend on is either a product of exploitation in the past, or of student debt, and thus of exploitation in the future.

When I walk into the classroom on Monday, I will want my students to remember that every lecturer stands on the boundary between two worlds, simultaneously dead and alive. Sure, we all hope everyone makes it out of there alive, but that’s not the point: the point is how close to the boundary we get. When I walk into the classroom on Monday, I will remind my students that what they see is not me, but the effigy constructed to obscure the violence of the intersection between academic and financial capital. When I walk into the classroom on Monday, I will want my students to know that the boundary between two worlds is very, very thin, and not only on Halloween.

*Michaelmas, for those who do not know, is the name of the autumn (first) term of the academic year at Oxford, Cambridge, and, incidentally, Durham.

For an Online University of the Left

This proposal started from an observation I made on Twitter this morning about the A-level results ‘scandal’: the fact that many working-class and underprivileged students are finding themselves turned away from institutions that were their first choice because their grades – as they come from state schools – were algorithmically predicted in ways that made it less likely they would have sufficient scores for elite universities. For many students (and their parents), this is an obvious disappointment – among other reasons, because inferring actual scores from previous grades is both imprecise and unfair. For many of my colleagues in higher education, it was yet another sign of the classism of British HE, which, predictably and consistently, privileges those from more advantaged socio-economic backgrounds. Both are true, but both avoid engaging with the potentially bigger problem that awaits.

The bigger problem, in this case, is that a large number of angry, disappointed young people, stuck either at home behind their screens or in shit jobs, during a pandemic and a recession, is a recipe for breeding hate and resentment. My guess is that alt-right recruiters are on it as we speak – indeed, have been on it for a while now. The Left has been atrociously slow and on the back foot ever since the December election; we need to step up. Thus, this proposal is brief and necessarily rather general; I do try to address the two biggest pitfalls (credentials and funding), but other than that, if people want to try this, details can be worked out as we go along.

The point is not to create a perfect institution that would at the same time solve the problem of social inequality in Britain, neoliberalism and precarity in higher education, and the rise of the far right; no policy can do that anyway. The idea is to move, start doing something now, and then adjust if necessary – or just give up.

Ignore the typos.


What is Online University of the Left?

Simply, it would be a platform offering enrolment/attendance in a number of courses, which could take the form of a ‘foundation year’ degree that some students enrol in before starting ‘official’ uni. My proposal on Twitter was some combination of liberal arts and practical skills, but that’s mostly because my own background is in the social sciences and humanities, and I’ve found that most students, if left to their own devices, tend to choose something along these lines (this is much more evident in the US, where students are encouraged and often required to acquire at least some ‘credits’ from a field other than their own, so it’s not uncommon to encounter e.g. biologists taking social science credits in sociology, or poetry students taking science credits in astronomy). In all cases, it could be something that many of us know and would like to teach, and that we also believe would be useful for young people’s future lives, employment, and study: how about a course in British history, but the kind that *actually* engages with colonialism and slavery? How about a course in social and political thought that includes thinkers who are not White men? How about a course in basic statistics, so that even students who are very far from A-level maths could understand figures like R (the reproduction number)? How about introductory economics, so that next time students go to the voting booth they know what ‘GDP’ actually stands for?

In addition to this, we could offer talks (or online chat sessions!) on really practical skills: for instance, how to write a CV? How to search for literature? How to conduct interviews? Etc.

As I mentioned, Tom Sperlinger, Richard Pettigrew, and Josie McLellan of the University of Bristol ran something not too far off from this (obviously, before the pandemic), and wrote about it in their book ‘Who Are Universities For?‘. There are many other places and experiences we can learn from.

So who teaches at this university?

The best part is, no-one needs to do much. All you need to do is think up a topic/course, propose a lecture, and coordinate with a few people who’d like to do something similar. Students could literally pick & choose topics, creating their own courses. If you want to run a series of lectures by yourself, even better, but odds are that we’ll all find out there are many people out there we’d love to develop courses with, given the opportunity. One way in which universities monopolize their staff’s labour is by making sure we cannot collaborate in this way across institutional boundaries; here’s a way to change that.

But who does all the work of preparation and delivery?

Odds are, we are all teaching online at least this term, right? Even if you’re super-strapped for time and cognitive space, nothing prevents you from making one of your lectures available outside your uni’s platform. Clearly I can’t get into more details, but let’s say that even if your recording software is proprietary, and the platform is as well, you can always do a slightly different version of the slides and record yourself on your mobile phone. As far as literature/reading lists are concerned, while it is true that students with access to libraries are enormously privileged in this respect, there are plenty of websites that offer this kind of literature for free. Most of us are not able to use them or direct our students to them in our uni teaching, but nothing prevents students from discovering them, um, anyway.

Assuming you do have some time and extra cognitive space, you could use this chance to develop your dream introductory course to…anything, and make it available online, for free, for ever. For instance, I always say I wish my students had a better understanding of the basic philosophy of science (as in, what is a hypothesis, what is proof, what is an observation, what is the difference between causation and correlation, etc.). Their background, in most cases, provides none; this lack makes most arguments in social theory more difficult to get across, and turns methods teaching into a nightmare. So, I’d be thrilled to have an opportunity to develop a series of talks on this.


But who puts this online?

There are a number of free platforms that can be used for this type of content; Pat Lockley, who is one of the most talented (and experienced!) developers I know, has already offered to help, and I am sure many other learning technologists would too. This isn’t about running a super-complicated, multi-sited, real-time collaborative simulation of the secrets of the universe; it’s basically a series of TED talks with some links to further reading.


Where does the money come in?

Here’s another part of the proposal. Imagine parents were saving, taking out loans, etc. for their children to go to college. Odds are, they are saving some of this money anyway, because, due to the pandemic, their children are probably staying at home. So if they were willing to pay some of that – really, a tiny portion of what they would be giving towards tuition and living costs anyway – it could pay precarious colleagues, who would act as teachers, teaching assistants, or supervisors, offering one-to-one or small-group online tuition to students. Private tuition, I hear, is a massive business anyway, and also one of the reasons why students with rich parents tend to get better grades and score better at admissions. So this would be an opportunity for more students to access this kind of supervision, thus also – and this is an argument for parents, primarily – increasing their future academic success and employment skills. This isn’t, clearly, to condone this system – it’s awful – nor is it to ignore the fact that graduate students and other precarious colleagues are getting massively shafted in the current pandemic. This isn’t a way to solve that problem; it is a way to offer a stopgap/livelihood to those who were counting on income from supervisions and are not getting it this year.

It goes without saying that those of us who are permanently employed would need to be teaching for free; if you have a problem with volunteering your labour (I don’t), think of it as an opportunity for public engagement and impact. For those who would be getting paid, there would be fiscal/economic elements to figure out, obviously (tax? Insurance?), but I’m guessing something like a flat-rate per hour + centralized payment platform (Patreon?) could work. Anyway, I’m sure people will have ideas about this.


So this is basically a mini-MOOC?

Nope. See above for supervisions – this isn’t just a series of YouTube clips you can watch in your spare time. Also, it’s not run by a single institution (like Harvard or Stanford), which means that prestige does not accrue to any one university.


But what about credits? Why would anyone want to attend this?

This is the tricky bit, of course – in the current situation most students wouldn’t want to waste their time (and parents their money) on something that is not recognized as a degree or at least as counting towards a degree. The first is a long and formal process, so there’s certainly no way to do the accreditation now; the second, however, is not impossible. A little policy excursion below.

Most education systems have something along the lines of ‘recognition of prior experience’. (Imagine you were a self-taught violinist who’s played in a local band for 20 years, but never had any formal qualification, and that you wanted, for whatever reason, to get a degree in composition: this is the route you would take.) You would submit evidence of experience relevant to the topic of your studies and, odds are, it would get recognized. So while it would be overly optimistic to claim that the Online University of the Left would count as a formal degree, it could certainly be recognized as another qualification or training.


Wouldn’t students rather go to any university – even not their first, second, or third choice – that offers them a formal degree, rather than watch online lectures?

On the one hand, probably, and that’s a great thing – it might save many non-elite institutions from premature closure (and their employees from redundancy!), not to mention (hopefully) disrupt the perception that not getting into Oxford, Cambridge, or the LSE means you (or your future degree) are worthless (this, in itself, is a terrible perception, but unfortunately one very resistant to change). On the other, regardless of where students choose to enrol ‘formally’, post-clearing, nothing prevents them from attending a few ‘extra’ courses online – taught by academics from all institutions, including ‘elite’ ones, for free! Imagine that. So again, sadly, while this would not in itself ‘disrupt’ the hallowed place Oxford and Cambridge hold in the national imagination, it would (1) give students access to (hopefully) high-quality teaching (for free) and high-quality supervision (for a small fee); (2) create income for precariously employed colleagues; (3) teach us to collaborate across institutional boundaries; and (4) get us thinking about how to organize and own our labour in ways that do something other than generate profit for our employers.

Oh, also, Online University of the Left is a bit lame; let’s call it the National Higher Education Service. 🙂


Edit, 17/08/2020:

Just going through a host of lovely responses this proposal’s had since posting yesterday (570 views since last night, which is pretty good), but one thing that worries me is the number of people who said they’d be quite happy to participate as long as I did all the organizing labour. But no single person can do that (even if she weren’t a recently employed, immigrant academic). As they say in the policy world, I’ve given you the ideas; I’ve also pointed to some of the existing experience and additional expertise out there (thank you to the people in the thread who mentioned precursors I didn’t have time to include in this hastily written post, like AntiUniversity, Social Science Centres etc., and some I didn’t even know about!). Only you can put them into practice.

Or, in more labour-specific lingo, it took about two hours of labour to produce that post (I am counting mostly writing, plus possibly another hour or so of thinking); if everyone could match that, we’d be very far ahead already.


Two more edits (17/08, evening):

Got reminded that there has been a very similar initiative in place since 2012 with the Free University of Brighton

and there is also the Online University of the Left, mostly US-based

— Which means that all we need to do is bring these initiatives together/expand further! Easy 🙂


The King’s Two(ish) Bodies

Contemporary societies, as we know, rest on calculation. From the establishment of statistics, which was essential to the construction of the modern state, to double-entry bookkeeping as the key accounting technique for ‘rationalizing’ capitalism and colonial trade, the capacity to express quality (or qualities, to be more precise) through numbers is at the core of the modern world.

From a sociological perspective, this capacity involves a set of connected operations. One is valuation, the social process through which entities (things, beings) come to (be) count(ed); the other is commensuration, or the establishment of equivalence: what counts as or for what, and under what circumstances. Marion Fourcade specifies three steps in this process: nominalization, the establishment of ‘essence’ (properties); cardinalization, the establishment of quantity (magnitude); and ordinalization, the establishment of relative position (e.g. position on a scale defined by distance from other values). While, as Mauss has demonstrated, none of these processes are unique to contemporary capitalism – barter, for instance, involves both cardinalization and commensuration – they are both amplified by and central to the operation of global economies.
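For readers who find a toy formalism helpful, here is a minimal sketch of how the three steps might be told apart – my own gloss for illustration, not Fourcade’s formalism; the Entity class, the scores, and the ‘applicants’ below are all invented:

```python
# A toy gloss (mine, not Fourcade's) of the three steps of valuation:
# nominalization fixes what an entity *is*, cardinalization attaches a
# magnitude to it, and ordinalization positions it relative to others.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str           # nominalization: the entity's fixed 'essence'
    score: float = 0.0  # cardinalization: a magnitude attached to it

def ordinalize(entities: list[Entity]) -> dict[str, int]:
    """Ordinalization: rank each entity on a scale relative to the others."""
    ranked = sorted(entities, key=lambda e: e.score, reverse=True)
    return {e.name: rank for rank, e in enumerate(ranked, start=1)}

# Hypothetical usage: three applicants, named, scored, then ranked.
applicants = [Entity("A", 3.2), Entity("B", 4.7), Entity("C", 4.1)]
print(ordinalize(applicants))  # -> {'B': 1, 'C': 2, 'A': 3}
```

What the sketch makes visible is the dependency between the steps: you cannot rank what you have not first counted, and you cannot count what you have not first named.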

Given how central the establishment of equivalence is to contemporary capitalism, it is not a little surprising that we seem so palpably bad at it. How else to explain the fact that, on the day when 980 people died from Coronavirus, the majority of UK media focused on the fact that Boris Johnson was recovering in hospital, reporting in excruciating detail the films he would be watching? While some joked about excessive concern for the health of the (secular) leader as reminiscent of the doctrine of ‘The King’s Two Bodies’, others seized the metaphor and ran with it – unironically.

Briefly (and somewhat reductively – please go somewhere else if you want to quibble, political theory bros), ‘King’s Two Bodies’ is a concept in political theology according to which the state is composed of two ‘corporeal’ entities – the ‘body politic’ (the population) and the ‘body natural’ (the ruler)*. This principle allows the succession of political power even after the death of the ruler, reflected in the pronouncement ‘The King is Dead, Long Live the King’. From this perspective, the claim that 980 < 1 may seem justified. Yet there is something troubling about this, even beyond basic principles of decency. Is there a number large enough to disturb this balance? Is it irrelevant whose lives those are?

Formally, most liberal democratic societies forbid the operation of a principle of equivalence that values some human beings as lesser than others. This is most clearly expressed in universal suffrage, where one person (or, more specifically, one political subject) equals one vote; on the global level, it is reflected in the principle of human rights, which asserts that all humans have a certain set of fundamental and unalienable rights simply as a consequence of being human. All members of the set ‘human’ have equal value, just by being members of that set: in Badiou’s terms, they ‘count for one’.

Yet, liberal democratic societies also regularly violate these principles. Sometimes, unproblematically so: for instance, we limit the political and some other rights of children and young people until they become of ‘legal age’, which is usually the age at which they can vote; until that point, they count as ‘less than one’. Sometimes, however, the consequences of differential valuation of human beings are much darker. Take, for instance, the migrants who are regularly left to drown in the Mediterranean or treated as less-than-human in detention centres; or the NHS doctors and nurses – especially BAME doctors and nurses – whose exposure to Coronavirus gets less coverage than that of politicians, celebrities, or royalty. In the political ontology of contemporary Britain, some lives are clearly worth less than others.

The most troubling implication of the principle by which the body of the ruler is worth more than a thousand (ten thousand? forty thousand?) of ‘his’ subjects, then, is not its ‘throwback’ to mediaeval political theology: it is its meaning for politics here and now. The King’s Two Bodies, after all, is a doctrine of equivalence: the totality of the body politic (state) is worth as much as the body of the ruler. The underlying operation is 1 = 1. This is horribly disproportionate, but it is an equivalence nonetheless: both the ruler and the population, in this sense, ‘count for one’. From this perspective, the death of a sizeable portion of that population cannot be irrelevant: if the body politic is somewhat diminished, the doctrine of the King’s Two Bodies suggests that the power of the ‘ruler’ is somewhat diminished too. By implication, the political ontology of the British state currently rests not on the principle of equivalence, but on a zero-sum game: losses in population do not diminish the power of the ruler, but rather enlarge it. And that is a dangerous, dangerous form of political ontology.

*Hobbes’ Leviathan is often seen as the perfect depiction of this principle; it is possible to quibble with this reading, but the cover image for this post – here’s the credit to its creator on Twitter – is certainly the best possible reflection on the shift in contemporary forms of political power in the aftermath of the Covid-19 pandemic.

Never let a serious virus go to waste: solidarity in times of the Corona

[Please note that nothing in this post is a replacement for public health advice: if in doubt, refer to official guidelines].

I’m not going to bang on about the neoliberal origins of the current crisis. To anyone remotely observant, it is obvious that pandemics are more likely to spread quickly in a globalized world, and that decades of underfunding public health services create systems unable to cope when one, like the current Covid-19, hits. I’ll leave such conclusions to sufficiently white, British, and hyphenated writers in The Guardian; I’ve written about neoliberalism elsewhere, and a whole host of other people have too. But there is another reason why crises like these are almost a godsend for the kind of authoritarian neoliberalism that seems to be dominant today.

Self-isolation is a useful health strategy, especially in the first phases of trying to stem the spread of the disease, but a nation of people boarded up in their homes, staring suspiciously at anyone who seems ‘foreign’ or ‘an outsider’, with contact with the ‘outside world’ reduced to television (hello, BBC!) or social media, is a perfect breeding ground for fear, hate, and control. In other words, the neoliberal dream of ‘no such thing as society – only individual men, women, and their families’, made flesh. In this sort of environment, not only do paranoia, misinformation, and mistrust abound; it also becomes very difficult to maintain progressive movements or ideas. This post, therefore, is intended as a sort of checklist on how to keep some sort of social solidarity going under a possibly prolonged period of social isolation*.

It is a work in progress, and I didn’t have time to edit and proofread it, which means it is probably going to change. Feel free to adapt and share as necessary.

  1. Maintain social networks: build new ones, and reinforce the old.

Maintain networks and links with people whenever safe. You can spend time with people while keeping a decent distance, and obviously staying at home if you do develop symptoms. If mobility or public transport are limited, try to connect with people from the neighbourhood. Ask your neighbours if they need something. Use technology and social media to reach out to people. Text your friends. Call them on Skype: face-to-face contact, even if you are not physically in the same space, is really important.

Set up mutual aid networks (current link for Cambridge here). You can help distribute food (see more below), skills, and care – from childminding to helping those who are less able to provide for themselves. If you are worried, wear a mask and keep a safe distance. Meet in open spaces. Spring is coming, at least in the Northern hemisphere. This is what parks and community gardens are for. Remember public spaces? Those.

  2. Develop alternative networks of provision and supply chains. SHARE THEM.

I know this doesn’t come naturally to people in highly consumerist societies, but think very carefully about your actual needs, and about possible replacements. Most shortages are outcomes of the combination of inadequate planning and the (surprise!) failure of ‘markets’ to ‘self-regulate’. Not having enough to eat is not the same scale of crisis as not being able to get exactly the brand of beer you prefer. Think about those who may need help with provisions: from simple things like helping the elderly or disabled people reach something on the upper shelf of a supermarket, to those who will inevitably be too ill to go out. Ask them if they need anything. Offer to make a meal and share it with them.

If possible, develop alternative means of providing food and other necessities. Grow vegetables or herbs; borrow and repair items (which is what you should be doing anyway). Many products that are bought ready-made can be assembled from common household items. Vinegar (white, 5%), for instance, is a relatively reliable disinfectant (this doesn’t mean you should use it on an operating table, but you can use it in the kitchen – listen to doctors, not to Tesco ads; remember your Chemistry lessons). So is vodka, but I didn’t tell you that. And FFS, stop hoarding toilet paper.

  3. Keep busy.

In the first stages, you may be thinking: ‘Lovely! I’ll get to watch all of those Netflix documentaries!’. However, as the experience of people in self-isolation with relatively little to do – think long-distance sailors, monks and nuns – shows, you will get bored and listless. A limited range of mobility and/or actual illness will make it worse. It is actually very, very important to maintain at least a minimal amount of daily discipline. Don’t just think ‘oh, I’ll just read and maybe go out for a walk’. Make a schedule for yourself, and for your loved ones. Stick to it.

It is very likely that schools and universities are going to shut down, or at least shift most instruction online. This may sound like the least of your worries, but it is incredibly important to keep some form of education going – for yourself, and for others. The immediate reason is that it keeps people occupied; the more distant one is that educational contexts are also opportunities for discussing thoughts and feelings, which may otherwise be scarce. It is also an opportunity to think about education outside of the institutional framework. When my school closed early in the spring of 1999, our literature teacher kept up weekly seminars, which were completely voluntary. Best of all, they allowed us to read and discuss books that were not on the syllabus.

Obviously, we will have to think about ways to create meaningful discussions and forms of interaction in a mixture of online and offline environments, but it should not stop at technical innovation. That narrative about developing education that is not about the needs of the market? This is your opportunity to build it.

  4. Keep active.

Self-isolation does not mean you have to turn into a couch potato (trust me, there is rarely such a wealth of solitude as experienced on a long walk in nature). Keep moving – it’s fine to go out if you’re feeling healthy, just avoid enclosed and crowded spaces. Apparently, swimming pools are still OK, but even if you don’t swim there are many forms of physical activity you can enjoy outdoors – from running and cycling to, for instance, doing Tai Chi or yoga out in the open, weather permitting. And walk, walk, walk. If you are unsure of your health or level of fitness, take shorter walks first. Go with a friend or in small groups. Take water and a snack. Stay safe.

The museums, galleries, cinemas, or shopping malls may be closed, but that doesn’t mean there’s nothing to see. Look around. Take a map and explore your local area. Learn the names of plants, birds, or local places. There is a multitude of lovely books on how to do this – from Solnit’s A Field Guide to Getting Lost to Odell’s How to Do Nothing, not to mention endless resources on- and offline on local history, wildlife, or geology.

  5. Do not give up politics.

In this sort of moment, politics can rightly feel like a luxury. When you are increasingly reliant on the Government for medical care or emergency rations, criticizing it may seem ill-advised. This is one of the reasons dictators love crises. Crises stifle dissent. Sometimes, this is aided by the designation of a powerful external (or internal) enemy; sometimes, the enemy is invisible – like a virus, or the economic crisis. Unlike wars, however, which tend to – at least in the long run – provoke resistance, invisible sources of the crisis, especially when connected to health, can make it much more difficult to sustain any sort of political challenge.

This is why it is incredibly important to keep connecting, discussing, and supporting each other in small and big ways. Make sure that you include those who are most vulnerable, who are most likely to be excluded from state care (that includes migrants, rough sleepers, and some people with long-term mental health problems or other illnesses). Remember, building solidarity and alternative networks is not only vital for the community to survive, but will also help you organize more efficiently in the future. Trust me, these skills will come in handy.

Stay safe.

 

*You are probably wondering what makes me qualified to write about these things. I have grown a bit tired of the fact that, as an Eastern European woman, I constantly have to justify my epistemic authority, but this time it does actually have to do with the famous Where (Do) I Come From. During the 1999 NATO bombing of Serbia, most public services were closed, and there were shortages and a curfew. I was part of the opposition to the Serbian regime, which put me (and many other people) in a slightly odd situation: opposed to what the regime had been doing (meaning waging war, for close to a decade at that point, against different parts of former Yugoslavia, which was also the ostensible cause of the NATO intervention) but, obviously, also not very happy about being bombed. Obviously, many things from that period are not scalable: I was 18. It was socialism. A lot of today’s technology wasn’t there (for instance, I remember listening to the long sound of the dial-up modem whenever the air raid sirens went off – it was easier to connect as most people went offline and into bomb shelters). But some are. So use as necessary.

Why you’re never working to contract

During the last #USSstrike, on non-picketing days, I practiced working to contract. Working to contract is part of the broader strategy known as ASOS – action short of a strike – and it means fulfilling your contractual obligations, but not more than that. Together with many other UCU members, I will be moving to ASOS from Thursday. But how does one actually practice ASOS in the neoliberal academia?


I am currently paid to work 2.5 days a week. Normally, I am in the office on Thursdays and Fridays, and sometimes half a Monday or Tuesday. The rest of the time, I write and plan my own research, supervise (that’s Cambridgish for ‘teaching’), or attend seminars and reading groups. Last year, I was mostly writing my dissertation; this year, I am mostly panickedly filling out research grant and job applications, for fear of being without a position when my contract ends in August.

Yet I am also, obviously, not ‘working’ only when I do these things. Books that I read are, more often than not, related to what I am writing, teaching, or just thinking about. Often, I will read ‘theory’ books at all times of day (a former partner once raised the issue of the excess of Marx on the bedside table), but the same can apply to science fiction (or any fiction, for that matter). Films I watch will make it into courses. Even time spent on Twitter occasionally yields important insights, including links to articles, events, or just the generic mood of a certain category of people.

I am hardly exceptional in this sense. Most academics work much more than their contracted hours. Estimates vary from 45 to as much as 100 hours/week; regardless of what is a ‘realistic’ assessment, the majority of academics report not being able to finish their expected workload within a 37.5-40hr working week. Working on weekends is ‘industry standard’; there is even a dangerous overwork ethic. Yet academics have increasingly begun to unite around the unsustainability of a system in which we feel overwhelmed and underpaid, with mental and other health issues on the rise. This is why rising workloads are one of the key elements of the current wave of UCU strikes. It has also led to the coining of a parallel hashtag: #ExhaustionRebellion. It seems like the culture is slowly beginning to shift.

From Thursday onwards, I will be on ASOS. I look forward to it: being precarious makes not working sometimes almost as exhausting as working. Yet the problem with the ethic of overwork is not only that it is unsustainable, or that it is directly harmful to the health and well-being of individuals, institutions, and the environment. It is also that it is remarkably resilient: and it is resilient precisely because it relies on some of the things academics value the most.

Marx’s theory of value* tells us that the origins of exploitation in industrial capitalism lie in the fact that workers do not own the means of production; thus, they are forced to sell their labour. Those who own the means of production, on the other hand, are driven by the need to keep capital flowing, for which they need profit. Thus, they are naturally inclined to pay their workers as little as possible, as long as that is sufficient to actually keep them working. For most universities, a steady supply of newly minted graduate students, coupled with seemingly unpalatable working conditions in most other branches of employment, means they are well positioned to drive wages further down (in the UK, 17.5% in real terms since 2009).

This, however, is where the usefulness of classical Marxist theory stops. It is immediately obvious that many of the conditions of late 19th-century industrial capitalism no longer apply. To begin with, most academics own the most important means of production: their minds. Of course, many academics use and require relatively expensive equipment, or work in teams where skills are relatively distributed. Yet, even in the most collective of research teams and the most collaborative of labs, the one ingredient that is absolutely necessary is precisely human thought. In social sciences and humanities, this is even more the case: while a lot of the work we do is in libraries, or in seminars, or through conversations, ultimately – what we know and do rests within us**.

Neither, for that matter, can academics simply be written off as unwitting victims of ‘false consciousness’. Even if the majority could conceivably have been unaware of the direction or speed of the transformation of the sector in the 1990s or the early 2000s, after last year’s industrial action this is certainly no longer the case. Nor is this true only of those who are disproportionately affected by its dual face of exploitation and precarity: even academics on secure contracts and in senior positions are increasingly viewing changes to the sector as harmful not only to their younger colleagues, but to themselves. If nothing else, what the USS strikes achieved was to help the critique of neoliberalism, marketization and precarity migrate from the pages of left-leaning political periodicals and critical theory seminars into mainstream media discourse. Knowing that current conditions of knowledge production are exploitative, however, does not necessarily translate into knowing what to do about them.

This is why contemporary academic knowledge production is better characterized as extractive or rentier capitalism. Employers, in most cases, do not own – certainly not exclusively – the means of production of knowledge. What they do instead is provide the setting or platform through which knowledge can be valorized, certified, and exchanged; and charge a hefty rent in the process (this is one part of what tuition fees are about). This ‘platform’ can include anything from degrees to learning spaces; from labs and equipment to email servers and libraries. It can also be adjusted, improved, fitted to suit the interests of users (or consumers – in this case, students); this is what endless investment in buildings is about.

The cunning of extractive capitalism lies in the fact that it does not, in fact, require workers to do very much. You are a resource: in industrial capitalism, your body is a resource; in cognitive capitalism, your mind is a resource too. In extractive capitalism, it gets even better: there is almost nothing you do – not a single aspect of your thoughts, feelings, or actions – that the university cannot turn into profit. Reading Marxist theory on the side? It will make it into your courses. Interested in politics? Your awareness of social inequalities will be reflected in your teaching philosophy. Involved in community action? It will be listed in your online profile under ‘public engagement and impact’. It gets better still: even your critique of extractive, neoliberal conditions of knowledge production can be used to generate value for your employer – just make sure it is published in the appropriate journals, and before the REF deadline.

This is the secret to the remarkable resilience of extractive capitalism. It feeds on exactly what academics love most: on the desire to know more, to explore, to learn. This is, possibly, one of the most basic human needs past the point of food, shelter, and warmth. The fact that the system is designed to make access to all of the latter dependent on being exploited for the former speaks, I think, volumes (it also makes The Matrix look like less of a metaphor and more of an early blueprint, with technology just waiting to catch up). This makes ‘working to contract’ quite tricky: even if you pack up and leave your office at 16.38 on the dot, Monday to Friday, your employer will still be monetizing your labour. You are probably, even if unwittingly, helping them do so.

What, then, are we to do? It would obviously be easy to end with a vague call a las barricadas, conveniently positioned so as to boost one’s political cred. Not infrequently, my own work’s been read in this way: as if it ‘reminds academics of the necessity of activism’ or (worse) ‘invites to concrete action’ (bleurgh). Nothing could be farther from the truth: I absolutely disagree with the idea that critical analysis somehow magically transmigrates into political action. (In fact, why we are prone to mistaking one for the other is one of the key topics of my work, but this is an ASOS post, so I will not be writing about it). In other words, what you will do – tomorrow, on (or off?) the picket line, in a bit over a week, in the polling booth, in the next few months, when you are asked to join this or that committee or to review a junior colleague’s tenure/promotion folder – is your problem and yours alone. What this post is about, however, is what to do when you’re on ASOS.

Therefore, I want to propose a collective reclaiming of the life of the mind. Too much of our collective capacity – for thinking, for listening, for learning, for teaching – is currently absorbed by institutions that turn it, willy-nilly, into capital. We need to re-learn to draw boundaries. We need thinking, learning, and caring to become independent of the process that turns them into profit. There are many ways to do it – and many have been tried before: workers’ and cooperative universities; social science centres; summer schools; and, last but not least, our own teach-outs and picket line pedagogy. But even when these are not happening, we need to seriously rethink how we use the one resource that universities cannot replace: our own thoughts.

So from Thursday next week, I am going to be reclaiming my own. I will do the things I usually do – read; research; write; teach and supervise students; plan and attend meetings; analyse data; attend seminars; and so on – until 4.40. After that, however, my mind is mine – and mine alone.

 

*Rest assured that the students I teach get treated to a much more sophisticated version of the labour theory of value (Soc1), together with variations and critiques of Marxism (Soc2), as well as ontological assumptions of heterodox vs. ‘neoclassical’ economics (Econ8). If you are an academic bro, please resist the urge to try to ‘explain’ any of these as you will both waste my time and not like the result. Meanwhile, I strongly encourage you to read the *academic* work I have published on these questions over the past decade, which you can find under Publications.

**This is one of the reasons why some of the most interesting debates about knowledge production today concern ownership, copyright, or legal access. I do not have time to enter into these debates in this post; for a relatively recent take, see here.

Knowing neoliberalism

(This is a companion/’explainer’ piece to my article, ‘Knowing Neoliberalism‘, published in July 2019 in Social Epistemology. While it does include a few excerpts from the article, if using it, please cite and refer to the original publication. The very end of this post explains why).

What does it mean to ‘know’ neoliberalism?

What does it mean to know something from within that something? This question formed the starting point of my (recently defended) PhD thesis. ‘Knowing neoliberalism’ summarizes some of its key points. In this sense, the main argument of the article is epistemological — that is, it is concerned with the conditions (and possibilities, and limitations) of (human) knowledge — in particular when produced and mediated through (social) institutions and networks (which, as some of us would argue, is always). More specifically, it is interested in a special case of that knowledge — that is, what happens when we produce knowledge about the conditions of the production of our own knowledge (in this sense, it’s not ‘about universities’ any more than, say, Bourdieu’s work was ‘about universities’ and it’s not ‘on education’ any more than Latour’s was on geology or mining. Sorry to disappoint).

The question itself, of course, is not new – it appears, in various guises, throughout the history of Western philosophy, particularly in the second half of the 20th century with the rise (and institutionalisation) of different forms of theory that earned the epithet ‘critical’ (including the eponymous work of philosophers associated with the Frankfurt School, but also other branches of Marxism, feminism, postcolonial studies, and so on). My own theoretical ‘entry points’ came from a longer engagement with Bourdieu’s work on sociological reflexivity and Boltanski’s work on critique, mediated through Arendt’s analysis of the dichotomy between thinking and acting and De Beauvoir’s ethics of ambiguity; a bit more about that here. However, the critique of neoliberalism that originated in universities in the UK and the US in the last two decades – including the intellectual interventions I analysed in the thesis – offers a particularly interesting case through which to explore this question.

Why study the critique of neoliberalism?

  • Critique of neoliberalism in the academia is an enormously productive genre. The number of books, journal articles, special issues, not to mention ‘grey’ academic literature such as reviews or blogs (in the ‘Anglosphere’ alone), has grown exponentially since the mid-2000s. Originating in anthropological studies of ‘audit culture’, the genre now includes at least one dedicated book series (Palgrave’s ‘Critical University Studies’, which I’ve mentioned in this book review), as well as people dedicated to establishing ‘critical university studies‘ as a field of its own (for the avoidance of doubt, I do not associate my work with this strand, and while I find the delineation of academic ‘fields’ interesting as a sociological phenomenon, I have serious doubts about the value and validity of field proliferation — which I’ve shared in many amicable discussions with colleagues in the network). At the start of my research, I referred to this as the paradox of the proliferation of critique and the relative absence of resistance; the article, in part, tries to explain this paradox through the examination of what happens if and when we frame neoliberalism as an object of knowledge — or, in formal terms, an epistemic object.
  • This genre of critique is, and has been, highly influential: the tropes of the ‘death’ of the university or the ‘assault’ on the academia are regularly reproduced in and through intellectual interventions (both within and outside of the university ‘proper’), including far beyond academic neoliberalism’s ‘native’ context (Australia, UK, US, New Zealand). Authors who present this kind of critique, while most frequently coming from (or being employed at) Anglophone universities in the ‘Global North’, are often invited to speak to audiences in the ‘Global South’. Some of this, obviously, has to do with the lasting influence of colonial networks and hierarchies of ‘global’ knowledge production, and, in particular, with the durability of ‘White’ theory. But it illustrates the broader point that the production of critique needs to be studied from the same perspective as the production of any sort of knowledge – rather than as, somehow, exempt from it. My work takes Boltanski’s critique of ‘critical sociology’ as a starting point, but extends it towards a different epistemic position:

Boltanski primarily took issue with what he believed was the unjustified reduction of critical properties of ‘lay actors’ in Bourdieu’s critical sociology. However, I start from the assumption that professional producers of knowledge are not immune to the epistemic biases to which they suspect their research subjects to be susceptible…what happens when we take forms and techniques of sociological knowledge – including those we label ‘critical’ and ‘reflexive’ – to be part and parcel of, rather than opposed to or in any way separate from, the same social factors that we assume are shaping epistemic dispositions of our research subjects? In this sense, recognising that forms of knowledge produced in and through academic structures, even if and when they address issues of exploitation and social (in)justice, are not necessarily devoid of power relations and epistemic biases, seems a necessary step in situating epistemology in present-day debates about neoliberalism. (KN, p. 4)

  • This, at the same time, is what most of the sources I analysed in my thesis have in common: by and large, they locate sources of power – including neoliberal power – always outside of their own scope of influence. As I’ve pointed out in my earlier work, this means ‘universities’ – which, in practice, often means ‘us’, academics – are almost always portrayed as being on the receiving end of these changes. Not only is this profoundly unsociological – literally every single take on human agency in the past 50-odd years, from Foucault through to Latour and from Giddens through to Archer, recognizes that ‘we’ (including as epistemic agents) have some degree of influence over what happens – it is also profoundly unpolitical, as it outsources agency to variously conceived ‘others’ (as I’ve argued here) while avoiding the tricky elements of our own participation in the process. This is not to repeat the tired dichotomy of complicity vs. resistance, which is another not particularly innovative reading of the problem. What the article asks, instead, is: what kind of ‘purpose’ does the systematic avoidance of questions of ambiguity and ambivalence serve?

What does it aim to achieve?

The objective of the article is not, by the way, to say that the existing forms of critique (including other contributions to the special issue) are ‘bad’ or that they can somehow be ‘improved’. Least of all is it to say that if we just ‘corrected’ our theoretical (epistemological, conceptual) lens we would finally be able to ‘defeat neoliberalism’. The article, in fact, argues the very opposite: that as long as we assume that ‘knowing’ neoliberalism will somehow translate into ‘doing away’ with neoliberalism we remain committed to the (epistemologically and sociologically very limited) assumption that knowledge automatically translates into action.

(…) [the] politically soothing, yet epistemically limited assumption that knowledge automatically translates into action…not only omit(s) to engage with precisely the political, economic, and social elements of the production of knowledge elaborated above, [but] eschews questions of ambiguity and ambivalence generated by these contradictions…examples such as doctors who smoke, environmentalists who fly around the world, and critics of academic capitalism who nonetheless participate in the ‘academic rat race’ (Berliner 2016) remind us that knowledge of the negative effects of specific forms of behaviour is not sufficient to make them go away (KN, p. 10)

(If it did, there would be no critics of neoliberalism who exploit their junior colleagues, critics of sexism who nonetheless reproduce gendered stereotypes and dichotomies, or critics of academic hierarchy who evaluate other people on the basis of their future ‘networking’ potential. And yet, here we are).

What is it about?

The article approaches ‘neoliberalism’ from several angles:

Ontological: What is neoliberalism? It is quite common to see neoliberalism as an epistemic project. Yet, does the fact that neoliberalism changes the nature of the production of knowledge and even what counts as knowledge – and, eventually, becomes itself a subject of knowledge – give us grounds to infer that the way to ‘deal’ with neoliberalism is to frame it as an object (of knowledge)? Is the way to ‘destroy’ neoliberalism to ‘know it’ better? Does treating neoliberalism as an ideology – that is, as something that the masses can be ‘enlightened’ about – translate into the possibility of wielding political power against it?

(Plot spoiler: my answer to the above questions is no).

Epistemological: What does this mean for ways we can go about knowing neoliberalism (or, for that matter, any element of ‘the social’)? My work, which is predominantly in social theory and sociology of knowledge (no, I don’t work ‘on education’ and my research is not ‘about universities’), in many ways overlaps substantially with social epistemology – the study of the way social factors (regardless of how we conceive of them) shape the capacity to make knowledge claims. In this context, I am particularly interested in how they influence reflexivity, as the capacity to make knowledge claims about our own knowledge – including knowledge of ‘the social’. Enter neoliberalism.

What kind of epistemic position are we occupying when we produce an account of the neoliberal conditions of knowledge production in academia? Is one acting more like the ‘epistemic exemplar’ (Cruickshank 2010) of a ‘sociologist’, or a ‘lay subject’ engaged in practice? What does this tell us about the way in which we are able to conceive of the conditions of the production of our own knowledge about those conditions? (KN, p. 4)

(Yes, I know this is a bit ‘meta’, but that’s how I like it).

Sociological: How do specific conditions of our own production of knowledge about neoliberalism influence this? As a sociologist of knowledge, I am particularly interested in relations of power and privilege reproduced through institutions of knowledge production. As my work on the ‘moral economy’ of Open Access with Chris Muellerleile argued, the production of any type of knowledge cannot be analysed as external to its conditions, including when the knowledge aims to be about those conditions.

‘Knowing neoliberalism’ extends this line of argument by claiming we need to engage seriously with the political economy of critique. It points to some of the places we could look for clues: for instance, the political economy of publishing. The same goes for networks of power and privilege: whose knowledge is seen as ‘translatable’ and ‘citeable’, and whose can be treated as an empirical illustration:

Neoliberalism offers an overarching diagnostic that can be applied to a variety of geographical and political contexts, on different scales. Whose knowledge is seen as central and ‘translatable’ in these networks is not independent from inequalities rooted in colonial exploitation, maintaining a ‘knowledge hierarchy’ between the Global North and the Global South…these forms of interaction reproduce what Connell (2007, 2014) has dubbed ‘metropolitan science’: sites and knowledge producers in the ‘periphery’ are framed as sources of ‘empirical’, ‘embodied’, and ‘lived’ resistance, while the production of theory, by and large, remains the work of intellectuals (still predominantly White and male) situated in prestigious universities in the UK and the US. (KN, p. 9)

This, incidentally, is the only part of the article that deals with ‘higher education’. It is very short.

Political: What does this mean for different sorts of political agency (and actorhood) that can (and do) take place in neoliberalism? What happens when we assume that (more) knowledge leads to (more) action (apart from a slew of often well-intended but misconceived policies, some of which I’ve analysed in my book, ‘From Class to Identity’)? The article argues that effecting a cognitive slippage between the two parts of Marx’s Eleventh Thesis – that is, assuming that interpreting the world will itself lead to changing it – is what contributes to the ‘paradox’ of the overproduction of critique. In other words, we become more and more invested in ‘knowing’ neoliberalism – e.g. producing books and articles – and less invested in doing something about it. This, obviously, is neither a zero-sum game (and it shouldn’t be) nor an old-fashioned call on academics to drop laptops and start mounting barricades; rather, it is a reminder that acting as if there were an automatic link between knowledge of neoliberalism and resistance to neoliberalism tends to leave the latter in its place.

(Actually, maybe it is a call to start mounting barricades, just in case).

Moral: Is there an ethically correct or more just way of ‘knowing’ neoliberalism? Does answering these questions enable us to generate better knowledge? My work – especially the part that engages with the pragmatic sociology of critique – is particularly interested in the moral framing and justification of specific types of knowledge claims. Rather than aiming to provide the ‘true’ way forward, the article asks what kinds of ideas of ‘good’ and ‘just’ are invoked/assumed through critique. What kind of moral stance does ‘gnossification’ entail? To steal the title of this conference, when does explaining become ‘explaining away’ – and, in particular, what is the relationship between ‘knowing’ something and framing our own moral responsibility in relation to it?

The full answer to the last question, unfortunately, will take more than one publication. The partial answer the article hints at is that, while having a ‘correct’ way of ‘knowing’ neoliberalism will not ‘do away’ with neoliberalism, we can and should invest in more just and ethical ways of ‘knowing’ altogether. It shouldn’t need repeating that the evidence of widespread sexual harassment in the academia – not to mention deeply entrenched casual sexism, racism, ableism, ethnocentrism, and xenophobia – suggests ‘we’ (as academics) are not as morally impeccable as we like to think we are. Thing is, no-one is. The article hopes to have made a small contribution towards giving us the tools to understand why, and how, this is the case.

I hope you enjoy the article!

——————————————————-

P.S. One of the rather straightforward implications of the article is that we need to come to terms with multiple reasons for why we do the work we do. Correspondingly, I thought I’d share a few that inspired me to do this ‘companion’ post. When I first started writing/blogging/Tweeting about the ‘paradox’ of neoliberalism and critique in 2015, this line of inquiry wasn’t very popular: most accounts smoothly reproduced the ‘evil neoliberalism vs. poor us little academics’ narrative. This has also been the case with most people I’ve met in workshops, conferences, and other contexts I have participated in (I went to quite a few as part of my fieldwork).

In the past few years, however, more analyses seem to converge with mine on quite a few analytical and theoretical points. My initial surprise at the fact that they seem not to engage directly with any of these arguments — indeed, that their authors were occasionally very happy to recite them back at me, without acknowledgement, attribution or citation — was somewhat clarified through reading the work on gendered citation practices. At the same time, it provided a very handy illustration of exactly the type of paradox described here: namely, while most academics are quick to decry the precarity and ‘awful’ culture of exploitation in the academia, almost as many are equally quick to ‘cite up’ or act strategically in ways that reproduce precisely these inequalities.

The other ‘handy’ way of appropriating the work of other people is to reduce the scope of their arguments, ideally representing it as an empirical illustration that has limited purchase in a specific domain (‘higher education’, ‘gender’, ‘religion’), while hijacking the broader theoretical point for yourself (I have heard a number of other people — most often, obviously, women and people of colour — describe a very similar thing happening to them).

This post is thus a way of clarifying exactly what the argument of the article is, in, I hope, language that is simple enough even if you’re not keen on social ontology, social epistemology, social theory, or, actually, anything social (couldn’t blame you).

PPS. In the meantime, I’ve also started writing an article on how precisely these forms of ‘epistemic positioning’ are used to limit and constrain the knowledge claims of ‘others’ (women, minorities, etc.) in the academia: if you have any examples you would like to share, I’m keen to hear them!

Existing while female

Space

The most threatening spectacle to the patriarchy is a woman staring into space.

I do not mean in the metaphorical sense, as in a woman doing astronomy or astrophysics (or maths or philosophy), though all of these help, too. Just plainly sitting, looking into some vague mid-point of the horizon, for stretches of time.

I perform this little ‘experiment’ at least once per week (more often, if possible; I like staring into space). I wholly recommend it. There are a few simple rules:

  • You can look at the passers-by (a.k.a. ‘people-watching’), but try to avoid eye contact longer than a few seconds: people should not feel that they are particular objects of attention.
  • If you are sitting in a café, or a restaurant, you can have a drink, ideally a tea or coffee. That’s not saying you shouldn’t enjoy your Martini cocktails or glasses of Chardonnay, but images of women cradling tall glasses of the alcoholic drink of choice have been very successfully appropriated by both capitalism and patriarchy, for distinct though compatible purposes.
  • Don’t look at your phone. If you must check the time or messages it’s fine, but don’t start staring at it, texting, or browsing.
  • Don’t read (a book, a magazine, a newspaper). If you have a particularly interesting or important thought feel free to scribble it down, but don’t bury your gaze behind a notebook, book, or a laptop.

Try doing this for an hour.

What this ‘experiment’ achieves is that it renders visible the simple fact of existing. As a woman. Even worse, it renders visible the process of thinking. Simultaneously inhabiting an inner space (thinking) and public space (sitting), while doing little else to justify your existence.

NOT thinking-while-minding-children, as in ‘oh isn’t it admirrrrable that she manages being both an academic and a mom’.

NOT any other form of ‘thinking on our feet’ that, as Isabelle Stengers and Vinciane Despret (and Virginia Woolf) noted, was the constitutive condition for most thinking done by women throughout history.

The important thing is to claim space to think, unapologetically and in public.

Depending on place and context, this usually produces at least one of the following reactions:

  • Waiting staff, especially if male, will become increasingly attentive, repeatedly inquiring whether (a) I am alright (b) everything was alright (c) I would like anything else (yes, even if they are not trying to get you to leave, and yes, I have sat in the same place with friends, and this didn’t happen)
  • Men will try to catch my eye
  • Random strangers will start repeatedly glancing and sometimes staring in my direction.

I don’t think my experience in this regard is particularly exceptional. Yes, there are many places where women couldn’t even dream of sitting alone in public without risking things much worse than uncomfortable stares (I don’t advise attempting this experiment in such places). Yes, there are places where staring into a book/laptop/phone, ideally with headphones on, is the only way to avoid being approached, chatted up, or harassed by men. Yet, even in wealthy, white, urban, middle-class, ‘liberal’ contexts, women who display signs of being afflicted by ‘the life of the mind’ are still somehow suspect. For what this signals is that it is, actually, possible for women to have an inner life not defined by relation to men – if not to particular men, then at least to men in the abstract.

Relations

‘Is it possible to not be in relation to white men?’, asks Sara Ahmed, in a brilliant essay on intellectual genealogies and institutional racism. The short answer is yes, of course, but not as long as men are in charge of drawing the family tree. Philosophy is a clear example. Two of my favourite philosophers, De Beauvoir and Arendt, are routinely positioned in relation to, respectively, Sartre and Heidegger (and, in Arendt’s case, to a lesser degree, Jaspers). While, in the case of De Beauvoir, this could be, to a degree, justified – after all, they were intellectual and writing partners for most of Sartre’s life – the narrative is hardly balanced: it is always Simone who is seen in relation to Jean-Paul, not the other way round*.

In a bit of an ironic twist, De Beauvoir’s argument in The Second Sex that a woman exists only in relation to a man seems to have been adopted as a stylistic prescription for narrating intellectual history (I recently downloaded an episode of In Our Time on De Beauvoir only to discover, in frustration, that it repeats exactly this pattern). Another example is the philosopher G. E. M. Anscombe, whose work is almost exclusively described in terms of her interpretation of Wittgenstein (she was also married to the philosopher Peter Geach, which doesn’t help). A great deal of Anscombe’s writing does not deal with Wittgenstein, but that is, somehow, passed over, at least in non-specialist circles. What also gets passed over is that, in any intellectual partnership or friendship, ideas flow in both directions. In this case, the honesty and generosity of women’s acknowledgments (and occasional overstatements) of intellectual debt tends to be taken for evidence of the incompleteness of female thinking; as if there couldn’t, possibly, be a thought in their ‘pretty heads’ that had not been placed there by a man.

Anscombe, incidentally, had a predilection for staring at things in public. Here’s an excerpt from the Introduction to Vol. 2 of her collected philosophical papers, Metaphysics and the Philosophy of Mind:

“The other central philosophical topic which I got hooked on without realising it was philosophy, was perception (…) For years I would spend time in cafés, for instance, staring at objects saying to myself: ‘I see a packet. But what do I really see? How can I say that I see here anything more than a yellow expanse?’” (1981: viii).

But Wittgenstein, sure.

Nature

Nature abhors a vacuum, if by ‘nature’ we mean the rationalisation of patriarchy, and if by ‘vacuum’ we mean the horrifying prospect of women occupied by their own interiority, irrespective of how mundane or elevated its contents. In Jane Austen’s novels, young women are regularly reminded that they should seem usefully occupied – embroidering, reading (but not too much, and ideally out loud, for everyone’s enjoyment), playing an instrument, singing – whenever young gentlemen come for a visit. The underlying message is that, of course, young gentlemen are not going to want to marry ‘idle’ women. The only justification for women’s existence, of course, is their value as (future) wives, and thus their reproductive capital: everything else – including forms of internal life that do not serve this purpose – is worthless.

Clearly, one should expect things to improve once women are no longer reduced to men’s property, or the function of wives and mothers. Clearly, they haven’t. In Motherhood, Sheila Heti offers a brilliant diagnosis of how the very question of having children bears down differently on women:

It suddenly seemed like a huge conspiracy to keep women in their thirties—when you finally have some brains and some skills and experience—from doing anything useful with them at all. It is hard to when such a large portion of your mind, at any given time, is preoccupied with the possibility—a question that didn’t seem to preoccupy the drunken men at all (2018: 98).

Rebecca Solnit points out the same problem in The Mother of All Questions: no matter what a woman does, she is still evaluated in relation to her performance as a reproductive engine. One of the messages of the insidious ‘lean-in’ kind of feminism is that it’s OK to not be a wife and a mother, as long as you are remarkably successful as a businesswoman, a political leader, or an author. Obviously, ‘ideally’, both. This keeps women stressed, overworked, and thus predictably willing to tolerate absolutely horrendous working conditions (hello, academia) and partnerships. Men can be mediocre and still successful (again, hello, academia); women, in order to succeed, have to be outstanding. Worse, they have to keep proving their outstandingness; ‘pure’ existence is never enough.

To refuse this – to refuse to justify one’s existence through a retrospective or prospective contribution to either particular men (wife of, mother of, daughter of), their institutions (corporation, family, country), or the vaguely defined ‘humankind’ (which, more often than not, is an extrapolation of these categories) – is thus to challenge the washed-out but seemingly undying assumption that a woman is somehow a less-worthy version of a man. It is to subvert the myth that shaped and constrained so many, from Austen’s characters to Woolf’s Shakespeare’s sister: that to exist, a woman has to be useful; that inhabiting an interiority is to be performed in secret (which meant away from the eyes of the patriarchy); that, ultimately, women’s existence needs to be justified. If not by providing sex, childbearing, and domestic labour, then at least indirectly, by consuming stuff and services that rely on the underpaid (including domestic) labour of other women, from fashion to iPhones and from babysitting to nail salons. Sometimes, if necessary, also by writing Big Books: but only so they could be used by men who see in them the reflection of their own (imagined) glory.

Death

Heti recounts another story, about her maternal grandmother, Magda, imprisoned in a concentration camp during WWII. One day, Nazi soldiers came to the women’s barracks and asked for volunteers to help with cooking, cleaning and scrubbing in the officers’ kitchen. Magda stepped forward; as Heti writes, ‘they all did’. Magda was not selected; she was lucky, as it soon transpired that those women were not taken to the kitchen, but rather raped by the officers and then killed.

I lingered over the sentence ‘they all did’ for a long time. What would it mean for more women to not volunteer? To not accept endlessly proving one’s own usefulness, in cover letters, job interviews, student feedback forms? To simply exist, in space?

I think I’ll just sit and think about it for a while.


(The photo is by the British photographer Hannah Starkey, who has a particular penchant for capturing women inhabiting their own interiority. Thank you to my partner who first introduced me to her work, the slight irony being that he interrupted me in precisely one such moment of contemplation to tell me this).

*I used to make a point of asking the students taking Social Theory to change ‘Sartre’s partner Simone de Beauvoir’ in their essays to ‘de Beauvoir’s partner Jean-Paul Sartre’ and see if it begins to read differently.

Life or business as usual? Lessons of the USS strike

[A shortened version of this blog post was published on the Times Higher Education blog on 14 March under the title ‘USS strike: picket line debates will reenergise scholarship’.]


Until recently, Professor Marenbon writes, university strikes in Cambridge were a hardly noticeable affair. Life, he says, went on as usual. The ongoing industrial action that UCU members are engaging in at UK universities has changed all that. Dons, rarely concerned with the affairs of lesser mortals, seem to be up in arms. They are picketing, almost every day, in the wind and the snow; marching; shouting slogans. For Heaven’s sake, some are even dancing. Cambridge, as pointed out on Twitter, has not seen such upheaval since we considered awarding Derrida an honorary degree.

This is possibly the best thing that has happened to UK higher education, at least since the end of the 1990s. Not that there’s much competition: this period, after all, brought us the introduction, then removal, of tuition fee caps; the abolition of maintenance grants; the REF and TEF; and, as crowning (though short-lived) glory, the appointment of Toby Young to the Office for Students. Yet, for most of this period, academics’ opposition to these reforms conformed to ‘civilised’ ways of protest: writing a book, giving a lecture, publishing a blog post or an article in Times Higher Education, or, at best, complaining on Twitter. While most would agree that British universities have been under threat for decades, concerted effort to counter these reforms – with a few notable exceptions – remained the province of the people Professor Marenbon calls ‘amiable but over-ideological eccentrics’.

This is how we have truly let down our students. Resistance was left to student protests and occupations. Longer-lasting, transgenerational solidarity was all but absent: at the end of the day, professors retreated to their ivory towers, while precarious academics engaged in activism on the side, amid ever-increasing competition and pressure to land a permanent job. Students picked up the tab: not only when it came to tuition fees, used to finance expensive accommodation blocks designed to attract more (tuition-paying) students, but also when it came to the quality of teaching and learning, increasingly delivered by an underpaid, overworked, and precarious labour force.

This is why the charge that teach-outs of dubious quality are replacing lectures comes across as particularly disingenuous. We are told that ‘although students are denied lectures on philosophy, history or mathematics, the union wants them to show up to “teach-outs” on vital topics such as “How UK policy fuels war and repression in the Middle East” and “Neoliberal Capitalism versus Collective Imaginaries”’. Although this is but one snippet of Cambridge UCU’s programme of teach-outs, the choice is illustrative.

The link between history and the UK’s foreign policy in the Middle East strikes me as obvious. Students in philosophy, politics or economics could do worse than a seminar on the development of neoliberal ideology (the event was initially scheduled as part of the Cambridge seminar in political thought). As for mathematics – anybody who, over the past weeks, has had to engage with the details of actuarial calculations and projections tied to the USS pension scheme has had more than a crash refresher course: I dare say they learned more than they ever hoped they would.

Teach-outs, in this sense, are not a replacement for education “as usual”. They are a way to begin bridging the infamous divide between “town and gown”, both by being held in more open spaces, and by, for instance, discussing how the university’s lucrative development projects are impacting on the regional economy. They are not meant to make up for the shortcomings of higher education: if anything, they render them more visible.

What the strikes have made clear is that academics’ ‘life as usual’ is vice-chancellors’ business as usual. In other words, it is precisely the attitude of studied depoliticisation that allowed the marketization of higher education to continue. Markets, after all, are presumably ‘apolitical’. Other scholars have expended considerable effort in showing how this assumption had been used to further policies whose results we are now seeing, among other places, in the reform of the pensions system. Rather than repeat their arguments, I would like to end with the words of another philosopher, Hannah Arendt, who understood well the ambiguous relationship between the academia and politics:

 

‘Very unwelcome truths have emerged from the universities, and very unwelcome judgments have been handed down from the bench time and again; and these institutions, like other refuges of truth, have remained exposed to all the dangers arising from social and political power. Yet the chances for truth to prevail in public are, of course, greatly improved by the mere existence of such places and by the organization of independent, supposedly disinterested scholars associated with them.

This authentically political significance of the Academe is today easily overlooked because of the prominence of its professional schools and the evolution of its natural science divisions, where, unexpectedly, pure research has yielded so many decisive results that have proved vital to the country at large. No one can possibly gainsay the social and technical usefulness of the universities, but this importance is not political. The historical sciences and the humanities, which are supposed to find out, stand guard over, and interpret factual truth and human documents, are politically of greater relevance.’

In this sense, teach-outs, and industrial action in general, are a way for us to recognise our responsibility to protect the university from the undue incursion of political power, while acknowledging that such responsibility is in itself political. At this moment in history, I can think of no service to scholarship greater than that.

I am a precarious, foreign, early career researcher. Why should I be striking?

OK, I’ll admit the title is a bit of clickbait. I’ve never had a moment of doubt about striking. However, in the past few weeks, as the UCU strike over pensions has drawn nearer, I’ve had a series of conversations in which colleagues, friends, or just acquaintances raised some of the concerns reflected in, though not exhausted by, this title. So I’ve decided to write up a short post answering some of these questions, mostly so I could get out of people’s Facebook and Twitter timelines. This isn’t meant to convince you, still less is it any form of official or legal advice: at the end of the day, exercising your rights is your choice. Here are some of mine.

I am precariously employed: I can’t really afford to lose the pay.

This is a very serious concern, especially for those who have no other source of income or savings (and that’s quite a few of us). The UCU has set up a solidarity fund to help in such cases; quite a few local organisations have done so as well, and from what I understand, early career/precarious researchers should have priority in applying to these. Even so, this is by no means a small sacrifice to make, but the current pension reform means that, in the long run, you stand to lose much more than the pay that could be docked.

But I am not even a member of the Union!

Your right to strike is not dependent on your membership in a(ny) union. That being said, if you would like the Union to represent/help you, it makes sense to join the Union. Actually, it makes sense to join the Union anyway. Why are you not a member of the Union? Join the Union. Here, have a uni(c)o(r)n.

[Image: unicorn toys. Yes, I know it’s the worst pun ever.]

I am afraid of pissing off my supervisor/boss, and I rely on their good will/recommendation letters/support for future jobs.

There’s a high chance your supervisor is striking – after all, their pensions are on the line as well. Even if they are not, it is possible that if you calmly explain why you feel this is important, and why you think you should show solidarity with your colleagues, they will see your point (and maybe even join you). Should this not be the case, they have no legal way of preventing you from exercising your basic employment right, one that is part of your contract (which, presumably, they will have read!).

In terms of future recommendations, if you really think your supervisor is evaluating your research on the basis of whether you show up at the office, rather than on the basis of your commitment, results, or potential, perhaps it’s time to have a chat with them. Remember, exercising the right to strike is not meant to harm your project, your colleagues, or your supervisor: it is meant to show disagreement with a decision that affects you, that was taken in your name, but over which you most likely had little or no say. Few supervisors would dispute your right to do that.

I’ll be able to strike when I’m more senior/securely employed.

The UK abolished ‘tenure’ about thirty years ago, so no one’s job is completely safe. Of course, this doesn’t mean there are no differences in status; unfortunately, though, experience suggests that job security does not directly correlate with willingness to be critical of the institution you work in. Anyway, look at the senior academics around you. Either they are striking – in which case they will certainly support your right to do the same – or they are not, in which case there is little reason to believe you will either if, and when, you get to their career stage.

Remember, this is why precarity exists: employers benefit from insecure/casual contracts precisely because they provide a reserve army of (cheap) labour in case the permanently employed decide to strike. Which is exactly what is happening now. Don’t let them get away with it.

I don’t want to let my students down.

This obviously primarily applies to those of us who are teaching and/or supervising, but I think there is a broader point to be made: students are not children. Universities dispensed with in loco parentis in the 1970s. It’s fine to feel a duty of care for your students, but it also makes sense to recognize that they are capable of making decisions for themselves – for instance, whom they will invite to give a public lecture, how they will vote, or how they will interpret the fact their lecturers are on strike (here’s a good example from Goldsmiths). Which is not to say you shouldn’t explain to them exactly why you are striking. Even better, invite them to help you organize or come to one of the teach-outs.

Think about it this way: next week, you can teach them one of the following: (a) how to stand up for their rights and show solidarity, or (b) how to read Shakespeare (sorry, English lit scholars, this came to mind first). You’ve got (according to employers’ calculations) 351 days in a year to do the latter. Will you use your chance to do the former?

I won’t even get to a pension; why should I fight for the benefits of entitled, securely employed academics?

If you are an employee of a pre-1992 university in the UK, chances are you are enrolled in the USS. This means you are accruing some pension through the system, so the proposed changes affect you. The less time you’ve been in the system – that is, the shorter the period you’ve been employed – the bigger the difference it makes. Remember, the entitled academics you are talking about have accrued most of their pension under the old system; paradoxically, you are set to lose much more than they are.

I feel this struggle is really about the privilege of white male dons, and does not address the deeper structural inequalities I experience.

It’s true that the struggle is primarily about pensions, and it’s true that the majority of people who have benefited from the system so far are traditionally privileged. This reflects the deeper inequalities of UK higher education and, in particular, its employment structure. My experience is a bit of a mixed bag: I am a woman and an ethnic minority, but I am also white and middle-class, so I clearly can’t speak for everyone; yet this is precisely why it’s important to be present in the strike. We need to make sure it doesn’t remain about white men only, and to make it obvious that higher education in England rests not on the traditional idea of a ‘professor’, but on the work of many, often precariously employed, early career researchers, women, minorities, non-binary people, and, yes, foreigners.

Speaking of that – I’m a foreigner, why should I care?

This is the most difficult one for me to relate to, not only because my work has been in and on the UK for quite a while, but because, frankly, I’ve never felt like anything but a foreigner, no matter where I lived, and I have always thought solidarity is international or it is nothing. But here’s my attempt at a more pragmatic argument: this is where you work, so this is where you exercise your rights as a worker. You may obviously have a lot of other, non-local concerns – family and friends in different countries, causes (or fieldwork sites) on other continents, and so on – but none of that should preclude you from being actively involved in something that concerns your rights, here and now. After all, if you can show solidarity with Palestinian children or Yemeni refugees, you can show solidarity with people working in the same industry, who share many of your concerns.

There is a related serious issue concerning those on Tier 2 visas – UCU offers some guidance here; in a nutshell, you are most likely safe as long as you don’t intend to be absent without leave (i.e. consent from your employer) for many more consecutive days during the rest of the year.

There are so many problems with higher education that this seems like a very minor fight!

True. Fighting for pensions is not going to stop the neoliberalisation of HE or the precarisation of the academic workforce per se.

Yet, imagine the longer-term potential of an action like this. You will have met other (precarious) colleagues, especially outside your discipline or field, on picket lines and at teach-outs; you will have learnt how to effectively organize actions that bring together different groups and different concerns; and, not least importantly, you will have shown your employer how crucial people like you really are to teaching and research. Now, that’s something that could come in handy in future struggles, don’t you think?

The paradox of resistance: critique, neoliberalism, and the limits of performativity

The critique of neoliberalism in academia is almost as old as its object. Paradoxically, it is the only element of the ‘old’ academia that seems to be thriving amid steadily worsening conditions: as I’ve argued in this book review, hardly a week goes by without a new book, volume, or collection of articles denouncing the neoliberal onslaught or ‘war’ on universities and, no less frequently, announcing their (untimely) death.

What makes the proliferation of critique of the transformation of universities particularly striking is the relative absence – at least until recently – of sustained modes of resistance to the changes it describes. While the UCU strike in reaction to the changes to the universities’ pension scheme offers some hope, by and large, resistance has much more often taken the form of a book or blog post than of a strike, demo, or occupation. Relatedly, given the level of agreement among academics about the general direction of these changes, engagement in developing long-term, sustainable alternatives to exploitative modes of knowledge production has been surprisingly scattered.

It was this relationship between the abundance of critique and the paucity of political action that initially got me interested in arguments and forms of intellectual positioning in what is increasingly referred to as the ‘[culture] war on universities’. Of course, the question of the relationship between critique and resistance – or knowledge and political action – concerns much more than the future of English higher education, and reaches into the constitutive categories of Western political and social thought (I’ve addressed some of this in this talk). In this post, however, my intention is to focus on its implications for how we can conceive of critique in and of the neoliberal academia.

Varieties of neoliberalism, varieties of critique?

While critique of neoliberalism in academia tends to converge around the causes as well as the consequences of this transformation, this doesn’t mean there is no theoretical variation. Marxist critique, for instance, tends to emphasise changes in the working conditions of academic staff, increased exploitation, and the growing commodification of knowledge. It usually identifies precarity as the problem that prevents academics from exercising the form of political agency – labour organizing – that is seen as the primary source of potential resistance to these changes.

Poststructuralist critique, most of it drawing on Foucault, tends to focus on the changing status of knowledge, which is increasingly portrayed as a private rather than a public good. The reframing of knowledge in terms of economic growth is further tied to measurement – reduction to a single, unitary, comparable standard – and competition, which is meant to ensure maximum productivity. This also gives rise to mechanisms of constant assessment, such as the TEF and the REF, captured in the phrase ‘audit culture’. Academics, in this view, become undifferentiated objects of assessment, which is used not only to instill fear but also to keep them in constant competition with each other, in the hope of the eventual conferral of ‘tenure’ or permanent employment, through which they can be constituted as full subjects with political agency.

Last, but not least, the type of critique that can broadly be referred to as ‘new materialist’ shifts the source of political power directly to instruments of measurement and sorting, such as algorithms, metrics, and Big Data. In the neoliberal university, the argument goes, there is no need for anyone to even ‘push the button’: metrics run on their own, with the social world already so imbricated with them that it becomes difficult, if not entirely impossible, to resist. The source of political agency, in this sense, becomes the ‘humanity’ of academics – what Arendt called ‘mere’ and Agamben ‘bare’ life. A significant portion of new materialist critique, in this vein, focuses on emotions and affect in the neoliberal university, as if to underscore the contrast between the lived and felt experiences of academics, on the one hand, and the inhumanity of algorithms or their ‘human executioners’, on the other.

Despite their possibly divergent theoretical genealogies, these forms of critique seem to move in the same direction. Namely, the object or target of critique becomes increasingly elusive, murky, and de-differentiated; but, strangely enough, so does the subject. As power grows opaque (or, in Foucault’s terms, ‘capillary’), the source of resistance shifts from a relatively defined position or identity (workers, or members of the academic profession) to a relatively amorphous concept of humanity – or precarious humanity – as a whole.

Of course, there is nothing particularly original in the observation that neoliberalism has eroded traditional grounds for solidarity, such as union membership. Wendy Brown’s Undoing the Demos and Judith Butler’s Notes towards a performative theory of assembly, for instance, address the possibilities for political agency – including cross-sectional approaches such as that of the Occupy movement – in view of this broader transformation of the ‘public’. Here, however, I would like to engage with the implications of this shift in the specific context of academic resistance.

Nerdish subject? The absent centre of [academic] political ontology

The academic political subject (hence the pun on Žižek) is profoundly haunted by its Cartesian legacy: the distinction between thinking and being, and, by extension, between subject and object. This is hardly surprising: critique is predicated on thinking about the world, which proceeds through ‘apprehending’ the world as distinct from the self; but the self is also predicated on thinking about that world. Though they may have disagreed on many other things, Boltanski and Bourdieu – both of whom feature prominently in my work – converge on the importance of this element for understanding the academic predicament: Bourdieu calls it the scholastic fallacy, and Boltanski complex exteriority.

Nowhere is the Cartesian legacy of critique more evident than in its approach to neoliberalism. From Foucault onwards, academic critique has approached neoliberalism as an intellectual project: the product of a ‘thought collective’, or a small group of intellectuals, initially concentrated in the Mont Pelerin Society, from which they went on to ‘conquer’ not only economics departments but also, more importantly, centres of political power. Critique, in other words, projects back onto neoliberalism its own way of coming to terms with the world: knowledge. From here, the Weberian assumption that ideas precede political action is transposed onto forms of resistance: the more we know about how neoliberalism operates, the better we will be able to resist it. This is why, as neoliberalism proliferates, the books, journal articles, etc. that seek to ‘denounce’ it multiply as well.

Speech acts: the lost hyphen

The fundamental notion of critique, in this sense, is (J.L. Austin’s and Searle’s) notion of speech acts: the assumption that words can have effects. What gets lost in dropping the hyphen in speech(-)acts is a very important bit of the theory of performativity: namely, the conditions under which speech does constitute effective action. This is why Butler, in Performative agency, draws attention to Austin’s emphasis on perlocution: speech-acts that are effective only under certain circumstances. In other words, it’s not enough to exclaim “Universities are not for sale! Education is not a commodity! Students are not consumers!” for this to become the case. For this raises the question: who is going to bring it about? What are the conditions under which it can be realized? In other words: who has the power to act in ways that can make this claim true?

What critique comes up against, then, is thinking its own agency within these conditions, rather than painting them as if they were somehow ‘outside’ of critique itself. Butler recognizes this:

“If this sort of world, what we might be compelled to call ‘the bad life’, fails to reflect back my value as a living being, then I must become critical of those categories and structures that produce that form of effacement and inequality. In other words, I cannot affirm my own life without critically evaluating those structures that differentially value life itself [my emphasis]. This practice of critique is one in which my own life is bound up with the objects that I think about” (2015: 199).

In simpler terms: my position as a political subject is predicated on the practice of critique, which entails reflecting on the conditions that make my life difficult (or unbearable). Yet those conditions are in part what constitutes my capacity to engage in critique in the first place, as the practice of thinking (critically) is, especially in the case of academic critique, inextricably bound up with the practices, institutions, and – not least importantly – economies of academic knowledge production. In formal terms, critique is a form of Russell’s paradox: a set that at the same time both is and is not a member of itself.
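
For those who like their paradoxes formal, here is a rough sketch of the analogy (a gloss, nothing the argument strictly depends on). Define the ‘Russell set’ as the set of all sets that are not members of themselves:

R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R

Whichever answer one gives contradicts itself; likewise, the moment critique places itself outside the set of practices it criticises, it demonstrates its membership of that very set.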

Living with (Russell) paradoxes

This is why academic critique of neoliberalism has no problem thinking about governing rationalities, the exploitation of workers in Chinese factories, or VCs’ salaries: practices that it perceives as outside of itself, or in which it can conceive of itself as an object. But it faces serious problems when it comes to thinking itself as a subject – and, even more, to acting in this context – as this, at least according to its own standards, means reflecting on all the practices that make it ‘complicit’ in exactly what it aims to expunge, or criticize.

This means coming to terms with the fact that neoliberalism is the Research Excellence Framework, but neoliberalism is also when you discuss ideas for a super-cool collaborative project. Neoliberalism is the requirement to submit all your research outputs to the faculty website, but neoliberalism is also the pride you feel when your most recent article is Tweeted about. Neoliberalism is the incessant corporate emails about ‘wellbeing’, but it is also the craft beer you have with your friends in the pub. This is why, in the seemingly interminable debates about the ‘validity’ of neoliberalism as an analytical term, both sides are right: yes, on the one hand, the term is vague and can seemingly be applied to any manifestation of power, but, on the other, it does cover everything, which means it cannot be avoided either.

This is exactly the sort of ambiguity – the fact that things can be two different things at the same time – that critique in neoliberalism needs to come to terms with. This could possibly help us move beyond the futile iconoclastic gesture of revealing the ‘true nature’ of things, expecting that action will naturally follow from this (Martijn Konings’ Capital and Time has a really good take on the limits of ‘ontological’ critique of neoliberalism). In this sense, if there is something critique can learn from neoliberalism, it is the art of speculation. If economic discourses are performative, then, by definition, critique can be performative too. This means that futures can be created – but the assumption that ‘voice’ is sufficient to create the conditions under which this can be the case needs to be dispensed with.


Is there such a thing as ‘centrist’ higher education policy?

[Image: object-oriented representation of my research, Cambridge, December 2017]

This Thursday, I was at the Institute of Education in London, at the launch of David Willetts’ new book, A University Education. The book is another contribution to what I have argued constitutes a veritable ‘boom’ in writing on the fate and future of higher education; my research is concerned, among other things, with the theoretical and political question of the relationship between this genre of critique and the social conditions of its production. This, however, is not the only reason I found it interesting: the book also sets out what may become the Conservatives’ future policy for higher education. In broader terms, it’s an attempt to carve out a political middle ground between Labour’s (supposedly ‘radical’) proposal for the abolition of fees, and the clear PR/political disaster that the unmitigated marketisation of higher education has turned out to be. Differently put: it’s the higher education manifesto for what should presumably be the ‘middle’ of the UK’s political spectrum.

The book

Critics of the transformation of UK higher education would probably be inclined to dismiss the book with a simple “Ah, Willetts: fees”. On the other hand, it has received a series of predominantly laudatory reviews – some of them, admittedly, from people who know the author or have worked in the same sector. Among the things the reviewers commend are the book’s impressive historical scope and the added value of its ‘peppering’ of anecdotes from Willetts’ time as Minister for Universities and Science. There is substance to both: the anecdotes are sometimes straightforwardly funny, and the historical bits are well researched, duly referencing notable predecessors from Kingsley Amis, through C.P. Snow and F.R. Leavis, to Halsey’s “Decline of Donnish Dominion” (though, as James Wilsdon remarked at the event, less so the more recent critics, such as Andrew McGettigan). Yet what clearly stood out to me, on first reading, is that both the historical and the personal parts of the narrative are there to support the main argument: that market competition is, and was, the way to ‘solve’ the problems of higher education (and, to some degree, of society in general); and that the government is uniquely capable of instituting such a market.

The development of higher education in Britain, in this sense, is told as the story of a slow movement against the monopoly (or duopoly) of Oxford and Cambridge, and their selective, elitist model. Willetts recounts the struggle to establish what he (in a not particularly oblique invocation) refers to as ‘challenger’ institutions, from the colleges that would become part of the University of London in the 19th century, all the way to Robbins and his own time in government. Fees, loans, and income-contingent repayment are, in this sense, presented as a way of solving the problem of expansion: in other words, their purpose was to make university education both more accessible (as admission is no longer dependent on inherited privilege) and fairer (as the cost is defrayed not by all taxpayers but only by those who benefit directly from university education, and whose earnings reflect it).

Competition, competition, competition

Those familiar with the political economy of higher education will probably have no trouble locating these ideas in the neoliberal playbook: competition is necessary to prevent the forming of monopolies, but the government needs to ensure competition actually happens, and this is why it needs to regulate the sector – albeit from a distance. I unfortunately have no time to get into this argument here; other authors, over the course of the last two decades, have engaged with the various assumptions that underpin it. What I would like to turn to instead is the role that the presumably monopolistic ‘nature’ of universities plays in the argument.

Now, engaging with the critique of Oxford and Cambridge is tricky, as it risks being interpreted (often, rightly) as a thinly veiled apology for their elitism. As a sociologist of higher education with first-hand experience of both, I have always been – and vocally so – far from an uncritical endorsement of either. Yet, as Priyamvada Gopal noted not long ago, Oxbridge-bashing in itself constitutes an empty ritual that cannot replace serious engagement with social inequalities. In this sense, one of the reasons why English universities are hierarchical, elitist, and prone to reproducing accumulated privilege is that they are a reflection of their society: unequal, elitist, and fascinated with accumulated privilege (witness the obsession with the Royal Family). Of course, no one is blind to the role which institutions of higher education, and in particular elite universities, play in this. But thinking that ‘solving’ the problem of elite universities is going to solve society’s ills is, at best, an overestimation of their power, and at worst a category error.

Framing competition as a way to solve problems of inequality is, unfortunately, one of the cases where the treatment may be worse than the disease. British universities have shown a stubborn tendency to reproduce existing hierarchies no matter what attempts were made to challenge them – the abolition of differences between universities and polytechnics in 1992; the introduction of rankings and league tables; competitive research funding. The market, in this sense, acts not as “the great leveler” but rather as yet another way of instituting hierarchical relationships, except that mechanisms of reproduction are channeled away from professional (or professorial, in this case) control and towards the government, or, better still, towards supposedly independent and impartial regulatory bodies.

Of course, in comparison with Toby Young’s ‘progressive’ eugenics and rape jokes, Willetts’ take on higher education really does sound rather sensible. His critique of early specialisation is well placed; he addresses head-on the problem of equitable distribution; and, as reviews never tire of mentioning, he really knows universities. In other words: he sounds like one of us. Much like Andrew Adonis, on the (presumably) other side of the political spectrum, who took issue with vice-chancellors’ pay – one of the rare issues on which the opinion of academics is virtually undivided. But what makes these ideas “centrist” is not so much their actual content – as in the case of stopping Brexit, there is hardly anything wrong with the ideas themselves – as the fact that they seek to frame everything else as ‘radical’ or unacceptable.

What ‘everything else’ stands for in the case of higher education, however, is rather interesting. On the right-hand side, we have the elitism and high selectivity associated with Oxford and Cambridge. OK, one might say, good riddance! On the left, however – we have abolishing tuition fees. Not quite the same, one may be inclined to note.

There ain’t gonna be any middle anymore

Unfortunately, the only thing that makes the idea of abolishing tuition so ‘radical’ in England is its highly stratified social structure. It is worth remembering that, among OECD countries, the UK has among the lowest public, and highest private, expenditure on higher education as a percentage of GDP. This means that the cost of higher education is disproportionately underwritten by individuals and their families. In lay terms, it means that public money that could be supporting higher education is spent elsewhere. But it also means something much more problematic, at least judging from the interpretation of this graph recently published by Branko Milanovic.

Let’s assume that the ‘private’ cost of higher education in the UK is currently mostly underwritten by the middle classes (this makes sense both in terms of who goes to university and who pays for it). If the trends Milanovic analyses continue, not only is the income of the middle classes likely to stagnate; it is – especially in the UK, given the economic effects of Brexit – likely to decline. This has serious consequences for the private financing of higher education. In one scenario, this means more loans, more student debt, and the creation of a growing army of indebted precarious workers. In another, to borrow from Pearl Jam, there ain’t gonna be any middle anymore: the middle-class families who could afford to pay for their children’s higher education will become a minority.

This is why there is no ‘centrist’ higher education policy. Any approach to higher education that does not first address longer-term social inequalities is unlikely to work; in periods of economic contraction, such as the one Britain is facing, it is even prone to backfire. Education policies, fundamentally, can do two things: one is to change how things are; the other is to make sure they stay the same. Arguing for a ‘sensible’ solution usually ends up doing the latter.


The poverty of student experience

[Image: “Be young and shut up”, poster from the demos in France in May 1968, Museum of Students, Bologna, Italy, November 2012]

One of my favourite texts from the time when I was writing my Master’s thesis is the Situationist International’s On the Poverty of Student Life (De la misère en milieu étudiant). Written in 1966 and distributed in 10,000 copies at the official ceremony marking the start of the new academic year at the University of Strasbourg, it provoked an outcry and a swift reaction by the university authorities, who closed down UNEF, the student union that printed it. Today, it is recognized as one of the texts that both diagnosed and helped polarize the conditions that eventually led to the famous 1968 student rebellions in France. This is how it begins:

“We might very well say, and no one would disagree with us, that the student is the most universally despised creature in France, apart from the priest and the policeman. The licensed and impotent opponents of capitalism repress the obvious–that what is wrong with the students is also what is wrong with them. They convert their unconscious contempt into a blind enthusiasm. The radical intelligentsia prostrates itself before the so-called ‘rise of the student’ and the declining bureaucracies of the Left bid noisily for his moral and material support.

There are reasons for this sudden enthusiasm, but they are all provided by the present form of capitalism, in its overdeveloped state. We shall use this pamphlet for denunciation. We shall expose these reasons one by one, on the principle that the end of alienation is only reached by the straight and narrow path of alienation itself.

Up to now, studies of student life have ignored the essential issue. The surveys and analyses have all been psychological or sociological or economic: in other words, academic exercises, content with the false categories of one specialization or another. None of them can achieve what is most needed–a view of modern society as a whole.”


This diagnosis remains largely relevant today: most discussions of tuition fees avoid tackling the bigger question – the purpose of education and its role in society – beyond invoking the standard slogans of economic development or social justice and fairness. However, neither the clarity of its analysis nor its resonance with contemporary issues is the main reason I believe the Situationist pamphlet is worth reading. Instead, I would like to draw attention to one of its underlying assumptions, reflected in the broader cultural imaginary of the ‘misery’ of student existence, life and social position, and then contrast it with current trends in the provision of the student ‘experience’. Last, I want to bring this conversation to the question of tuition fees, which recently regained prominence in England but has been at the back of higher education policy discussions – both in the UK and globally – for at least the last 30 years, and then use it to reflect on the changing role of higher education more generally.

The misery of student life?

There was a time when being a student was really an exercise in misery. Stories of dank rooms, odd jobs, and scraping by on half a baguette and half a pack of cigarettes used to be the staple of ‘the student experience’. Nor were such stories limited to France: I often hear colleagues in the UK complain about not being able to stand cider, having drunk way too much of the cheap stuff as undergrads. All of this, as the adage went, was in preparation for a better life to come: stories of nights spent drinking cheap cider only make sense if they are told from a position in which one can afford, if not exactly Dom Perignon, then at least decent craft beer.

In fact, these stories are most often told in senior common rooms, at alumni gala dinners, or cheerful reunions of former uni classmates, appropriately decked out in suits. In them, poverty is framed as a rite of passage, serving to justify one’s privileged social and professional position: instituting a myth of meritocracy (look how much I suffered in order to get to where I am now!) as well as the myth of disinterestedness in the material, creature-comforts side of life (I cared about perfecting my intellect so much I was prepared to lead a life of [relative] material deprivation!).

These stories do more than establish the privilege and shared social identity of those who tell them, however. They also support the figure of ‘the student’ as healthy, able-bodied, and – most of all – with little to focus on besides learning. After all, in order to endure between three and eight years on packets of noodle soup, cheap booze, and no sleep, you need to be young, relatively fit, and without caring duties: staying up all night drinking Strongbow and discussing Schopenhauer is kind-of-less-likely if you’ve got to take kids to school or go to work in the morning. This automatically excludes most mature and part-time students; not to mention that negotiating campus sociality is still more difficult if (for cultural, religious, health or other reasons) you do not drink or do drugs. But, most importantly, it reinforces the idea that scarcity is a choice; the ‘student experience’, in this myth, is a form of poverty tourism or bootcamp from which you emerge strengthened and ready to assume your (obviously advantageous) position in life. This, clearly, excludes everyone without a guaranteed position in the social and economic elite. Poverty is not a rite de passage for those who stay poor throughout their life, and there is no glory in recalling the days of drinking cheap cider if, ten years down the line, you doubt you’ll be able to afford much better. Increasingly, however, that is all of us.

The Situationists recognized the connection between the ‘poverty of student life’ and generalised poverty back in 1966:


“At least in consciousness, the student can exist apart from the official truths of ‘economic life’. But for very simple reasons: looked at economically, student life is a hard one. In our ‘society of abundance’, he is still a pauper. 80% of students come from income groups well above the working class, yet 90% have less money than the meanest laborer. Student poverty is an anachronism, a throw-back from an earlier age of capitalism; it does not share in the new poverties of the spectacular societies; it has yet to attain the new poverty of the new proletariat.”


This brings us to the misery of student experience here and now. For the romanticisation of the poverty of student life makes sense only if that poverty is chosen, and temporary. Just like the graduate premium, it is predicated on the idea that you are ‘suffering’ now, in order to benefit later. And, of course, in the era of precarity, unemployment, and what David Graeber famously dubbed ‘bullshit jobs’, it no longer holds.


The gilded cage of student experience


Of course, a university degree, in principle, still means your chances on the job market are better than those of someone who hasn’t got one. But this obscures the bigger picture, which is that the proportion of bullshit jobs is increasing: it’s not that a university degree guarantees fantastic employment opportunities; it’s that not having one means falling out of the competition for anything but the bottom of the job ladder. Most importantly, talk of the graduate premium often fails to take into account the degree to which higher education is still a proxy for something else entirely: class. The effect of a university degree on employment and quality of life is thus a compound of education, social background, cultural capital, race, gender, age, etc., rather than an automatic effect of enduring three to eight years of exam-taking, excessive drinking, and excruciating anxiety.

Perhaps surprisingly, one of the most visible reflections of the changing socio-economic structure of student existence is the growth of high-end or luxury student housing, and the associated focus on ‘student experience’. Of course, in most cases universities and property developers do this in order to cater to foreign, ‘overseas’ fee-paying students, who are often quite openly framed as the institution’s main source of income (it is particularly interesting to observe otherwise staunch critics of ‘marketization’ and defenders of the ‘public’ status of the university unashamedly treat such students, or their parents, as cash cows or, at the very least, consumers). But, to a not much lesser degree, it is also a reflection of the (still implicit) recognition that studying no longer guarantees a good and well-paid job. In other words, if you’re not necessarily going to have a better life after university, you may as well live in decent conditions while you’re in it.

The replacement of dank bedsits and instant noodles with ensuite rooms and gluten-free granola, then, is not ‘selling out’ the ideals of education in order to pander to the ‘Snowflake’ generation, as some conservative authors have argued. It is a reflection of a broader socio-economic shift related to the quality of life and life chances, as well as the breaking of the assumption of a direct (if not necessarily causal) link between education, employment, and status. In this sense, Labour’s plan to abolish tuition fees is a good start, but it does not solve the greater question of poverty and precarity, both of which will increasingly impact even those who have previously been relatively shielded from the effects of the crumbling economy – graduates.


Beyond fees


Even with no tuition, students will either need loans to cover living costs or – unless they rely on their parents (and here we are stuck in the vicious cycle of class reproduction) – engage in bullshit work (at least until there is an actual effort to integrate part-time study with decent jobs, something the Open University used to do well). In the same vein, a graduate tax only makes sense if the highly educated on the whole actually earn much more than the rest of the population (see an interesting discussion here) – which, if current trends continue, is hardly going to be the case. In the meantime, the graduate premium reflects less the actual ‘earning power’ a degree brings and more the further slide into poverty of those without degrees, coupled with the increasing wealth of those in top-tier jobs, who are hardly representative of graduates as a whole (in fact, they usually come from a small number of institutions and, again, from relatively privileged social backgrounds).


Addressing tuition fees in isolation, then, does little to counter the compound effects of deindustrialization, financialization, and growing public debt. This is not to say that it isn’t a solution – it is certainly preferable to accruing a lifetime of debt – but it speaks to the need to integrate education policy into broader questions of economic and social justice, rather than treat it as a stopgap for rapid social, technological and demographic change. Meanwhile, we could do something really radical, like, I dunno, tax the rich? Just a thought.


Critters, Critics, and Californian Theory – review of Haraway’s Staying with the Trouble

[Image: Coproduction]

[This review was originally published on the blog of the Political Economy Research Centre, as part of its Anthropocene Reading Group, as well as on the blog of the Centre for Understanding Sustainable Prosperity]


Donna Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Duke University Press, 2016)

From the opening, Donna Haraway’s recent book reads like a nice hybrid of theoretical conversation and science fiction. Crescendoing in the closing Camille Stories – the outcome of a writing experiment of imagining five future generations – “Staying with the Trouble” weaves together, like the cat’s cradle, one of the recurrent metaphors in the book, staple Harawayian themes: the fluidity of boundaries between human and variously defined ‘Others’, metamorphoses of gender, the role of technology in modifying biology, and the related transformation of the biosphere – ‘Gaia’ – in interaction with the human species. Eschewing the term ‘Anthropocene’, which she (somewhat predictably) associates with Enlightenment-centric, tool-obsessed rationality, Haraway births the ‘Chthulucene’ – which, to be specific, has nothing to do with the famous monster of H.P. Lovecraft’s imagination, being named instead after a species of spider, Pimoa cthulhu, native to Haraway’s corner of western California.

This attempt to avoid dealing with human(-made) Others – like Lovecraft’s “misogynist racial-nightmare monster” – is the key unresolved issue in the book. While the tone is rightfully respectful – even celebratory – of nonhuman critters, it remains curiously underdefined in relation to human ones. This is evident in the treatment of Eichmann and the problem of evil. Following Arendt, Haraway sees Eichmann’s refusal to think about the consequences of his actions as the epitome of the banality of evil – the same kind of unthinking that leads to the existing ecological crisis. That more thinking seems a natural antidote, and a solution to the long-term destruction of the biosphere, is only logical (if slightly self-serving) from the standpoint of a critical theory whose aim is to save the world from ultimate extinction. The question, however, is what to do if thoughts and stories are not enough.

The problem with a political philosophy founded on belief in the power of discourse is that it remains dogmatically committed to the idea that one can change the world only if one can change the story. The power of stories as “worlding” practices fundamentally rests on the assumption that joint stories can be developed with Others – or, alternatively, that the Earth is big enough to accommodate those with whom no such thing is possible. This leads Haraway to present a vision of a post-apocalyptic future Earth, in which the population has been decimated to levels that allow human groups to exist at a sufficient distance from each other. What this doesn’t take into account is that differently defined Others may have different stories, some of which may be fundamentally incompatible with ours – as recently reflected in debates over ‘alternative facts’ or ‘post-truth’, but present in different versions of the science and culture wars, not to mention actual violent conflicts. In this sense, there is no suggestion of sympoiesis with the Eichmanns of this world; the question of how to go about dealing with human Others – especially if they are, in Kristeva’s terms, profoundly abject – is the kind of trouble “Staying with the Trouble” is quite content to stay out of.

Sympoiesis seems reserved for non-humans, who appear happy to go along with human attempts to ‘become-with’ them. But this is easier when ‘Others’ do not, technically speaking, have a voice: whether we like it or not, few non-human critters have effective means of communicating their preferences concerning political organisation, the speaking order at seminars, or participation in elections. The critical practice of com-menting, to which Haraway attributes much of the writing in the book, is possible only to the extent to which the Other has equal means and capacities to contribute to the discussion. As in the figure of the Speaker for the Dead, the Other is always spoken-for, the tragedy of its extinction obscuring the potential conflict or irreconcilability between species.

The idea of a com-pliant Other can, of course, be seen as an integral element of the mythopoetic scaffolding of West Coast academia, where the idea of fluidity of lifestyle choices probably has near-orthodox status. It’s difficult not to read parts of the book, such as the following passage, as not-too-fictional accounts of lived experiences of the Californian intellectual elite (including Haraway herself):

“In the infectious new settlements, every new child must have at least three parents, who may or may not practice new or old genders. Corporeal differences, along with their fraught histories, are cherished. Throughout life, the human person may adopt further bodily modifications for pleasure and aesthetics or for work, as long as the modifications tend to both symbionts’ well-being in the humus of sympoiesis” (p. 133-5)

The problem with this type of theorizing is not so much that it universalises a concept of humanity that resembles an extended Comic-Con with militant recycling; reducing ideas to their political-cultural-economic background is not a particularly innovative critical move. It is that it fails to account for the challenges and dangers posed by the friction of multiple human lives in constrained spaces, and for the ways in which personal histories and trajectories interact with configurations of place, class, and ownership – ways that can lead to tragedies like the Grenfell Tower fire in London.

In other words, what “Staying with the Trouble” lacks is a more profound sense of political economy, and of the ways in which social relations influence how different organisms interact with their environment – including how they compete for its scarce resources, often to the point of mutual extinction. Even if the absolution of human woes by merging one’s DNA with that of fellow creatures works well as an SF metaphor, as a tool for critical analysis it tends to avoid the (often literally) rough edges of their bodies. It is not uncommon even for human bodies to reject human organs; more importantly, the political history of humankind is, to a great degree, the story of various groups of humans excluding other humans from the category of humans (colonized ‘Others’, slaves), citizens (women, foreigners), or persons with full economic and political rights (immigrants, and again women). This theme is evident in the contemporary treatment of refugees, but it is also preserved in the apparently more stable boundaries between human groups in the Camille Stories. In this context, the transplantation of insect parts in order to acquire consciousness of what it means to inhabit the body of another species has more of a whiff of transhumanist enhancement than of an attempt to confront head-on (antennae-first?) the manifold problems of human coexistence on a rapidly warming planet.

At the end of the day, solutions to climate change may be less glamorous than the fantasy of escaping global warming by taking a dip in the primordial soup. In other words, they may require some good ol’ politics, which fundamentally means learning to deal with Others even if they are not as friendly as those in Haraway’s story; even if, as the Eichmanns and Trumps of this world seem to suggest, their stories may have nothing to do with ours. In this sense, it is the old question of living with human Others, including abject ones, that we may have to engage with in the AnthropoCapitaloCthulucene: the monsters that we created, and the monsters that are us.

Jana Bacevic is a PhD candidate at the Department of Sociology at the University of Cambridge, and has a PhD in social anthropology from the University of Belgrade. Her interests lie at the intersection of social theory, sociology of knowledge, and political sociology; her current work deals with the theory and practice of critique in the transformation of higher education and research in the UK.


Theory as practice: for a politics of social theory, or how to get out of the theory zoo


[These are my thoughts/notes for the “Practice of Social Theory” summer school, which Mark Carrigan and I are running at the Department of Sociology of the University of Cambridge from 4 to 6 September 2017].


Revival of theory?


It seems we are witnessing something akin to a revival of theory, or at least of an interest in it. In 2016, the British Journal of Sociology published Swedberg’s “Before theory comes theorizing, or how to make social sciences more interesting”, a longer version of its 2015 Annual Public Lecture, followed by responses from – among others – Krause, Schneiderhan, Tavory, and Karleheden. A string of recent books – including Matt Dawson’s Social Theory for Alternative Societies, Alex Law’s Social Theory for Today, and Craig Browne’s Critical Social Theory, to name but a few – sets out to consider the relevance or contribution of social theory to understanding contemporary social problems. This is in addition to a renewal of interest in the biography or contemporary relevance of social-philosophical schools such as Existentialism (1, 2) and the Frankfurt School (1, 2).

To a degree, this revival happens on the back of the challenges posed to the status of theory by the rise of data science, leading Lizardo and Hay to engage in defense of the value and contributions of theory to sociology and international relations, respectively. In broader terms, however, it addresses the question of the status of social sciences – and, by extension, academic knowledge – more generally; and, as such, it brings us back to the justification of expertise, a question of particular relevance in the current political context.

The meaning of theory

Sure enough, theory has many meanings (Abend, 2008), and consequently many forms in which it is practiced. One characteristic that seems shared across the board, however, is that theory is part of (under)graduate training, after which it gets bracketed off in the form of “the theory chapter” of dissertations/theses. In this sense, theory is framed as foundational to socialization into a particular discipline but, at the same time, rarely revisited – at least not explicitly – after the initial demonstration of aptitude. In other words, rather than something one does, theory becomes something one is ‘done with’. The exception, of course, are those who decide to make theory the centre of their intellectual pursuits; however, “doing theory” in this sense all too often becomes limited to the exegesis of existing texts (what Krause refers to as ‘theory a’ and Abend as ‘theory 4’), which leads to competition among theorists for the best interpretation of “what theorist x really wanted to say”, or, alternatively, to the application of existing concepts to new observations or ‘problems’ (‘theory b and c’, in Krause’s terms). Either way, the field of social theory resembles less the groves of Plato’s Academy than a zoo in which different species (‘Marxists’, ‘critical realists’, ‘Bourdieusians’, ‘rational-choice theorists’) dwell in their respective enclosures or fight with members of the same species for dominance over a circumscribed domain.


[Image: competitive behaviour among social theorists]

This summer school started from the ambition to change that: to go beyond rivalries or allegiances to specific schools of thought, and to think about what doing theory really means. I often told people that wanting to do social theory was a major reason why I decided to do a second PhD; but what was this about? I did not say I wanted to ‘learn more’ about social theory (my previous education provided a good foundation), to ‘teach’ social theory (though supervising students at Cambridge is really good practice for this), to read, or even to write social theory (though, obviously, this was going to be a major component). While all of these are essential elements of becoming a theorist, the practice of social theory certainly isn’t reducible to them. Here are some of the other aspects I think we need to bear in mind when we discuss the return, importance, or practice of theory.

Theory is performance

This may appear self-evident once the focus shifts to ‘doing’, but we rarely talk about what practicing theory is meant to convey – that is, about theorising as a performative act. Some elements of this are not difficult to establish: doing theory usually means identification with a specific group, or form of professional or disciplinary association. Most professional societies have committees, groups, and specific conference sessions devoted to theory – but that does not mean theory is practiced exclusively within them. In addition to belonging, theory also signifies status. In many disciplines, theoretical work has for years been held in high esteem; the flipside, of course, is that ‘theoretical’ is often taken to mean too abstract or divorced from everyday life, something that became a more pressing problem with the decline of funding for the social sciences and the concomitant expectation to make them socially relevant. While the status of theory is a longer (and separate) topic, one that has been discussed at length in the history of sociology and other social sciences, it bears repeating that asserting one’s work as theoretical is always a form of positioning: it serves to define the standing of both the speaker and (sometimes implicitly) other contributors. This brings to mind that…

Theory is power

Not everyone gets to be treated as a theorist: it is also a question of recognition, and thus, a question of political (and other) forms of power. ‘Theoretical’ discussions are usually held between men (mostly, though not exclusively, white men); interventions from women, people of colour, and persons outside centres of epistemic power are often interpreted as empirical illustrations, or, at best, contributions to ‘feminist’ or ‘race’ theory*. Raewyn Connell wrote about this in Southern Theory, and initiatives such as Why is my curriculum white? and Decolonizing curriculum in theory and practice have brought it to the forefront of university struggles, but it speaks to the larger point made by Spivak: that the majority of mainstream theory treats the ‘subaltern’ as only empirical or ethnographic illustration of the theories developed in the metropolis.

The problem here is not only (or primarily) that of representation, in the sense in which theory thus generated fails to accurately depict the full scope of social reality, or experiences and ideas of different people who participate in it. The problem is in a fundamentally extractive approach to people and their problems: they exist primarily, if not exclusively, in order to be explained. This leads me to the next point, which is that…

Theory is predictive

A good illustration of this is offered by pundits’ and political commentators’ surprise at events of the last year: the outcome of the Brexit referendum (Leave!), the US elections (Donald Trump!), and, last but not least, the UK General Election (a surge in votes for Corbyn!). Despite differences in how these events are interpreted, most accounts convey that, as one pundit recently confessed, nobody has a clue about what is going on. Does this mean the rule of experts really is over, and, with it, the need for general theories that explain human action? Two things are worth taking into account.

To begin with, social-scientific theories enter the public sphere in a form that’s not only simplified, but also distilled into ‘soundbites’ or clickbait adapted to the presumed needs and preferences of the audience, usually omitting all the methodological or technical caveats they normally come with. For instance, the results of opinion polls or surveys are taken to present clear predictions, rather than reflections of general statistical tendencies; reliability is rarely discussed. Nor are social scientists always innocent victims of this media spin: some actively work on increasing their visibility or impact, and thus – perhaps unwittingly – contribute to the sensationalisation of social-scientific discourse. Second, and this can’t be put delicately, some of these theories are just not very good. ‘Nudgery’ and ‘wonkery’ often rest on not particularly sophisticated models of human behaviour; this is not to say that they do not work – they can – but rather that the theoretical assumptions underlying these models are rarely accessible to scrutiny.
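To make the point about statistical tendencies concrete, here is a minimal sketch – with purely hypothetical numbers, not drawn from any actual poll – of the sampling margin of error that the soundbite version of a survey usually drops:

```python
# Illustrative only: hypothetical figures, not data from any real survey.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

share, sample_size = 0.52, 1000  # a hypothetical 52% 'lead' in a sample of 1,000
moe = margin_of_error(share, sample_size)
print(f"52% ± {moe * 100:.1f} points")  # prints '52% ± 3.1 points'
```

A headline reporting “52% support” on such a sample is, statistically, describing something closer to a toss-up than a prediction; the ‘± 3 points’ is precisely the caveat that gets lost in translation.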

Of course, it doesn’t take a lot of imagination to figure out why this is the case: it is easier to believe that selling vegetables in attractive packaging can solve the problem of obesity than to invest in long-term policy planning and research on decision-making that has consequences for public health. It is also easier to believe that removing caps on tuition fees will result in universities charging fees normally distributed from lowest to highest than to bother reading theories of organizational behaviour in different economic and political environments and try to understand how this maps onto the social structure and demographics of a rapidly changing society. In other words: theories are used to inform or predict human behaviour, but often in ways that reinforce existing divisions of power. So, just in case you didn’t see this coming…

Theory is political

All social theories are about constraints, including those that are self-imposed. From Marx to Freud and from Durkheim to Weber (and the many non-white, non-male theorists who never made it into ‘the canon’), theories are about what humans can and cannot do; they are about how relatively durable relations (structures) limit and enable how people act (agency). Politics is, fundamentally, about the same thing: things we can and things we cannot change. We may denounce Bismarck’s definition of politics as the art of the possible as insufficiently progressive, but – at the risk of sounding obvious – understanding how (and why) things stay the same is fundamental to understanding how to go about changing them. The history of social theory, among other things, can be read as a story about shifting the boundary between what was considered fixed and immutable, on the one hand, and what was considered constructed – and thus subject to change – on the other.

In this sense, all social theory is fundamentally political. This isn’t to license bickering over different historical materialisms, or to stimulate fantasies – so dear to intellectuals – of ‘speaking truth to power’. Nor should theories be understood as weapons in the ‘war of time’, despite Debord’s poetic formulation: this is but the flipside of intellectuals’ dream of domination, in which their thoughts (i.e. themselves) inspire the masses to revolt, usually culminating in their own ascendance to a position of power (thus conveniently cutting out the middleman in ‘speaking truth to power’, as they become the prime bearers of both).

Theory is political in a much simpler sense, in which it is about society and the elements that constitute it. As such, it has to be about understanding what it is that those we think of as society think, want, and do, even – and possibly especially – when we do not agree with them. Rather than aiming to ‘explain away’ people, or fit their behaviour into pre-defined social models, social theory needs to learn to listen to – to borrow a term from politics – its constituents. This isn’t to argue for a (not particularly innovative) return to grounded theory, or to ethnography (despite the fact that both are relevant and useful). At the risk of sounding pathetic, perhaps the next step in the development of social theory is to really make it a form of social practice – that is, to make it be with the people, rather than about the people. I am not sure what this would entail, or what it would look like; but I am pretty certain it would be a welcome element of building a progressive politics. In this sense, doing social theory could become less the practice of endlessly revising a blueprint for a social theory zoo, and more a project of getting out from behind its bars.


*The tendency to interpret women’s interventions as if they are inevitably about ‘feminist theory’ (or, more frequently, as if they always refer to empirical examples) is a trend I have been increasingly noticing since moving into sociology, and definitely want to spend more time studying. This is obviously not to say there aren’t women in the field of social theory, but rather that gender (and race, ethnicity, and age) influence the level of generality at which one’s claims are read, thus reflecting the broader tendency to see universality and Truth as coextensive with the figure of the male and white academic.


Solving the democratic problem: intellectuals and reconciling epistemic and liberal democracy

…but where? Bristol, October 2014


[This review of “Democratic problem-solving” (Cruickshank and Sassower eds., 2017) was first published in Social Epistemology Review and Reply Collective, 26 May 2017].

It is a testament to the lasting influence of Karl Popper and Richard Rorty that their work continues to provide inspiration for debates concerning the role and purpose of knowledge, democracy, and intellectuals in society. Alternatively, it is a testament to the recurrence of the problem that continues to lurk under the glossy analytical surface or occasional normative consensus of these debates: the impossibility of reconciling the concepts of liberal and epistemic democracy. Essays collected under the title Democratic Problem-Solving (Cruickshank and Sassower 2017) offer grounds for both assumptions, so this is what my review will focus on.

Boundaries of Rational Discussion

Democratic Problem-Solving is a thorough and comprehensive (if at times seemingly meandering) meditation on the implications of Popper’s and Rorty’s ideas for the social nature of knowledge and truth in the contemporary Anglo-American context. This context is characterised by the combined forces of neoliberalism and populism, growing social inequalities, and what has for a while now been dubbed, perhaps euphemistically, the crisis of democracy. Cruickshank’s (in other contexts almost certainly heretical) opening, which questions the tenability of distinctions between Popper and Rorty, then serves to remind us that both were devoted to defining the criteria for, and setting the boundaries of, rational discussion, seen as the road to problem-solving. Jürgen Habermas, whose name also resonates throughout this volume, elevated communicative rationality to the foundational principle of Western democracies, as the unifying/normalizing ground from which to ensure the participation of the greatest number of members in the public sphere.

Intellectuals were, in this view, positioned as guardians—epistemic police, of sorts—of this discursive space. Popper’s take on epistemic ‘policing’ (see DPS, 42) was to use the standards of scientific inquiry as exemplars for maintaining a high level and, more importantly, the neutrality of public debates. Rorty saw it as the minimal instrument that ensured civility without questioning, or at least without implicitly dismissing, others’ cultural premises, or even ontological assumptions. The assumption they and the authors in this volume have in common is that rational dialogue is, indeed, both possible and necessary: possible because standards of rationality are shared across humanity, and necessary because it is the best way to ensure consensus around the basic functioning principles of democracy. This also ensured the pairing of knowledge and politics: by rendering visible the normative (or political) commitments of knowledge claims, sociology of knowledge (as Reed shows) contributed to affirming the link between the epistemic and the political. As Agassi’s syllogism succinctly demonstrates, this link quickly morphed from signifying correlation (knowledge and power are related) to causation (the more knowledge, the more power), suggesting that epistemic democracy was, if not a precursor, then certainly a correlate of liberal democracy.

This is why Democratic Problem-Solving cannot avoid running up against the issue of public intellectuals (qua epistemic police), and, obviously, their relationship to ‘Other minds’ (communities being policed). In the current political context, however, to the well-exercised questions Sassower raises such as—

should public intellectuals retain their Socratic gadfly motto and remain on the sidelines, or must they become more organically engaged (Gramsci 2011) in the political affairs of their local communities? Can some academics translate their intellectual capital into a socio-political one? Must they be outrageous or only witty when they do so? Do they see themselves as leaders or rather as critics of the leaders they find around them (149)?

—we might need to add the following: “And what if none of this matters?”

After all, differences in vocabularies of debate matter only if access to the debate depends on their convergence on a minimal common denominator. The problem for the guardians of the public sphere today is not whom to include in these debates and how, but rather what to do when those ‘others’ refuse, metaphorically speaking, to share the same table. Populist right-wing politicians have at their disposal a wealth of ‘alternative’ outlets (Breitbart, Fox News, and increasingly, it seems, even the BBC), not to mention ‘fake news’ or the ubiquitous social media. The public sphere, in this sense, resembles less a (however cacophonous) town hall meeting than a series of disparate village tribunals. Of course, as Fraser (1990) noted, fragmentation of the public sphere has been inherent since its inception within the Western bourgeois liberal order.

The problem, however, is less what happens when other modes of arguing emerge and demand to be recognized, and more what happens when they aspire to a redistribution of political power that threatens to overturn the very principles that gave rise to them in the first place. We are used to these terms – recognition, redistribution – denoting progressive politics, but there is little that prevents them from being appropriated for more problematic ideologies: after all, a substantial portion of the current conservative critique of the ‘culture of political correctness’, especially on campuses in the US, rests on the argument that ‘alternative’ political ideologies have been ‘repressed’, sometimes justifying this through appeals to the freedom of speech.

Dialogic Knowledge

In assuming a relatively benevolent reception of scientific knowledge, then, appeals such as Chis and Cruickshank’s to engage with different publics—whether as academics, intellectuals, workers, or activists—remain faithful to Popper’s normative ideal concerning the relationship between reasoning and decision-making: ‘the people’ would see the truth, if only we were allowed to explain it a bit better. Obviously, in arguing for dialogical, co-produced modes of knowledge, we are disavowing the assumption of a privileged position from which to do so; but, all too often, we let in through the back door the implicit assumption of the normative force of our arguments. It rarely, if ever, occurs to us that those we wish to persuade may have nothing to say to us, may be immune or impervious to our logic, or, worse, that we might not want to argue with them.

For if social studies of science have taught us anything, it is that scientific knowledge is, among other things, a culture. An epistemic democracy of the Rortian type would mean that it is a culture like any other, and thus not automatically entitled to a privileged status among other epistemic cultures, particularly not if its political correlates are weakened—or missing (cf. Hart 2016). Populist politics certainly has no use for critical slow dialogue, but it is increasingly questionable whether it has use for dialogue at all (at the time of writing, in the period leading up to the 2017 UK General Election, the Prime Minister is refusing to debate the Leader of the Opposition). Sassower’s suggestion that neoliberalism exhibits a penchant for justification may hold a promise, but, as Cruickshank and Chis (among others) show with the example of UK higher education, ‘evidence’ can be adjusted to suit a number of policies, and political actors are all too happy to do so.

Does this mean that we should, as Steve Fuller suggested in another SERRC article, see in ‘post-truth’ the STS symmetry principle? I am skeptical. After all, judgments of validity are the privilege of those who can still exert a degree of control over access to the debate. In this context, I believe that questions of epistemic democracy, such as who has the right to make authoritative knowledge claims, in what context, and how, need, at least temporarily, to come second to questions of liberal democracy. This is not to be teary-eyed about liberal democracy: if anything, my political positions lie closer to Cruickshank and Chis’ anarchism. But it is the only system that can—hopefully—be preserved without a massive cost in human lives, and perhaps repurposed so as to make those lives more bearable.

In this sense, I wish the essays in the volume had confronted head-on questions such as whether we should defend epistemic democracy (and which versions of it) if its principles are mutually exclusive with those of liberal democracy, or, conversely, whether we would uphold liberal democracy if it threatened to suppress epistemic democracy. For the question of standards of public discourse is going to keep coming up, but it may decreasingly have the character of an academic debate, and increasingly concern the possibility of having one at all. This may turn out to be, so to speak, a problem that precedes all other problems. The essays in this volume have opened up important avenues for thinking about it, and I look forward to seeing them discussed in the future.

References

Cruickshank, Justin and Raphael Sassower. Democratic Problem-Solving: Dialogues in Social Epistemology. London: Rowman & Littlefield, 2017.

Fraser, Nancy. “Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy.” Social Text 25/26 (1990): 56-80.

Fuller, Steve. “Embrace the Inner Fox: Post-Truth as the STS Symmetry Principle Universalized.” Social Epistemology Review and Reply Collective, December 25, 2016. http://wp.me/p1Bfg0-3nx

Hart, Randle J. “Is a Rortian Sociology Desirable? Will It Help Us Use Words Like ‘Cruelty’?” Humanity and Society, 40, no. 3 (2016): 229-241.

Universities, neoliberalisation, and the (im)possibility of critique

On the last Friday in April, I was at a conference entitled Universities, neoliberalisation and (in)equality at Goldsmiths, University of London. It was a one-day event featuring presentations and interventions from academics who work on understanding, and criticising, the transformation of working conditions in neoliberal academia. Besides sharing these concerns, attending such events is part of my research: I, in fact, study the critique of neoliberalism in UK higher education.

Why study critique, you may ask? At the present moment, it may appear all the more urgent to study the processes of transformation themselves, especially so that we can figure out what can be done about them. This, however, is precisely the reason: critique is essential to how we understand social processes, in part because it entails a social diagnostic – it tells us what is wrong – and in part because it allows us to conceptualise our own agency – what is to be done – about this. However, the link between the two is not necessarily straightforward: it is not as if you first read some Marx, and then go and start a revolution. Some would argue that the reading of Marx (what we usually think of as consciousness-raising) is an essential part of the process, but there are many variables that intervene between awareness of the unfairness of certain conditions – say, knowing that part-time, low-paid teaching work is exploitative – and actually doing something about those conditions, such as organising an occupation. In addition, as virtually everyone from the Frankfurt School onwards has noted, linking these two aspects is complicated by the context of mass consumerism, mass media, and – I would add – mass education. Still, the assumption of an almost direct (what Archer dubbed a ‘hydraulic’) link between knowledge and action haunts the concept of critique, both as theory and as practice.

In her opening remarks to the conference, Vik Loveday zeroed in on precisely this, asking: why is it that there seems to be a burgeoning of critique, but very little resistance? And a burgeoning it is: despite it being my job, even I have trouble keeping up with the veritable explosion of writing that seeks to analyse, explain, or simply mourn the seemingly inevitable capitulation of universities in the face of neoliberalism. By way of illustration, the Palgrave series in “Critical University Studies” boasts eleven new titles, all published in 2016-7; and this is but one publisher, in the English language only.

What can explain the relationship between the relative proliferation of critique and the relative paucity of resistance? This question forms the crux of my thesis: less, however, as an invocation of the need to resist, and more as a querying of the relationship between knowledge – especially in the form of critique, including academic critique – and political agency (I do see political agency on a broader spectrum than the seemingly inexhaustible dichotomy between ‘compliance’ and ‘resistance’, but that is another story).

So here’s a preliminary hypothesis (H, if you wish): the link between critique and resistance is mediated by the existence of, and one’s position in, the academic hierarchy. Two presentations I had the opportunity to hear at the conference were very informative in this regard: the first was Loveday’s analysis of academics’ experience of anxiety, the other was Neyland and Milyaeva’s research on the experiences of REF panelists. While there is a shared concern among academics about the neoliberalisation of higher education, what struck me was the pronounced difference in the degree to which the two groups express doubts about their own worth, future, and relevance as academics (in colloquial parlance, ‘impostor syndrome’). While junior* and relatively precarious academics seem to experience high levels of anxiety in relation to their value as academics, senior* academics who sit on REF panels experience it far less. The difference? Level of seniority and position in decision-making.

Well, you may say, this is obvious – the more established academics are, the more confident they are going to be. However, what varies with levels of seniority is not just confidence and trust in one’s own judgements: it is the sense of entitlement, the degree to which you feel you deserve to be there (Loveday writes about the classed aspects of the sense of entitlement here). I once overheard someone call it the Business Class Test: the moment you start justifying to yourself flying business class on work trips (unless you’re very old, ill, or incapacitated) is the moment you will have convinced yourself you deserve this. The issue, however, is not how this impacts travel practices: it’s the effect that the differential sense of entitlement has on the relationship between critique and resistance.

So here’s another hypothesis (h1, if you wish). The more precarious your position, the more likely you are to perceive working conditions as unfair – and, thus, to be critical of the structure of academic hierarchy that enables them. Yet, at the same time, the more junior you are, the more risk voicing that critique – that is, translating it into action – entails. Junior academics often point out that they have to shut up and go on ‘playing the game’: churning out publications (because REF), applying for external funding (because grant capture), and teaching ever-growing numbers of students (because students generate income for the institution). Thus, junior academics may well know everything that is wrong with academia, but will go on conforming to it in ways that reproduce exactly the conditions they are critical of.

What happens once one ascends to the coveted castle of permanent employment/tenure and membership in research evaluation panels and appointment committees? Well, I’ve only ever been on a tenure track for a relatively short period of time (having left the job before I found myself justifying flying business class), but here’s an assumption based on anecdotal evidence and other people’s data (h2): you still grin and bear it. You do not, under any circumstances, stop participating in the academic ‘game’ – with the added catch that now you actually believe you deserved your position in it. I’m not saying senior academics are blind to the biases and social inequalities reflected in the academic hierarchy: what I am saying is that it is difficult, if not altogether impossible, to simultaneously be aware of it and continue participating in it (there’s a nod to Sartre’s notion of ‘bad faith’ here, but I unfortunately do not have the time to get into that now). Have you ever seen a professor stand up at a public lecture or committee meeting and say “I recognize that I owe my being here to the combined fortunes of inherited social capital, [white] male privilege, and the fact that English is my native language”? Neither have I. If anything, there are disavowals of social privilege (“I come from a working class background”), which, admirable as they may be, unfortunately only serve to justify the hierarchical nature of academia and its selection procedures (“I definitely deserve to be here, because look at all the odds I had to beat in order to get here in the first place”).

In practice, this leads to the following. Senior academics stay inside the system and, if they are critical, believe they are working against the system – for instance, by fighting for their discipline, or protecting junior colleagues, or aiming to make academia that little bit more diverse. In the longer run, however, their participation keeps the system going – the equivalent of carbon offsetting your business class flight: sure, it may help plant trees in Guinea-Bissau, but it does not change the fact that you are flying in the first place. Junior academics, on the other hand, contribute through their competition for positions inside the system – believing that if only they teach enough (perform low-paid work), publish enough (contribute to abundance), or are visible enough (perform the unpaid labour of networking on social media, through conferences, etc.), they will escape precarity, and then they can really be critical (there’s a nod to Berlant’s cruel optimism here that I also, unfortunately, cannot expand on). Except that, of course, they end up in the position of senior academics, with an added layer of entitlement (because they fought so hard) and an added layer of fear (because no job is really safe under neoliberalism). Thus, while everyone knows everything is wrong, everyone still plays along. This ‘gamification’ of research, which seems to be the new mot du jour in academia, becomes a stand-in term for the moral economy of justifying one’s own position while participating in the reproduction of the conditions that contribute to its instability.

Cui bono critique, in this regard? It depends. If critique is divorced from its capacity to incite political action, there is no reason why it cannot be appropriated – and, correspondingly, commodified – within the broader framework of neoliberal capitalism. It has already been pointed out that critique sells – and, perhaps less obviously, the critique of neoliberal academia does too. Even if the ever-expanding number of publications on the crisis of the university do not ‘sell’ in the narrow sense of the term, they still contribute to the symbolic economy by accruing prestige (and citation counts!) for their authors. In other words: the critique of neoliberalism in academia can become part and parcel of the very processes it sets out to criticise. There is nothing, absolutely nothing, in the content, act, or performance of critique itself that renders it automatically subversive or dangerous to ‘the system’. Sorry. (If you want to call me a killjoy, note that Boltanski and Chiapello observed long ago, in “The New Spirit of Capitalism”, that contemporary capitalism grew through the appropriation of the 1968 artistic critique).

Does this mean critique has, as Latour famously suggested, ‘run out of steam’? If we take the steam engine as a metaphor for the industrial revolution, then the answer may well be yes, and good riddance. Along with other Messianic visions, this may speed up the departure of the Enlightenment’s legacy of pastoral power, reflected – imperfectly, yet unmistakably – in the figure of the (organic or avant-garde) ‘public’ intellectual, destined, as he is (for it is always a he), to lead the ‘masses’ to their ultimate salvation. What we may want to do instead is examine what promise critique (with a small c) holds – especially in the age of post-truth, post-facts, Donald Trump, and so on. In this, I am fully in agreement with Latour that it is important to keep tabs on the difference between matters of fact and matters of concern; and, perhaps most disturbingly, to think about whether we want to stake the claim to defining the latter on the monopoly on producing the former.

For getting rid of the veneer of entitlement to critique does not in any way mean abandoning the project of critical examination altogether – but it does, very much so, mean reexamining the positions and perspectives from which it is made. This is the reason why I believe it is so important to focus on the foundations of epistemic authority, including that predicated on the assumption of difference between ‘lay’ and academic forms of reflexivity (I’m writing up a paper on this – meanwhile, my presentation on the topic from this year’s BSA conference is here). In other words, in addition to the analysis of threats to critical scholarship that are unequivocally positioned as coming from ‘the outside’, we need to examine what it is about ‘the inside’ – and, particularly, about the boundaries between ‘out’ and ‘in’ – that helps perpetuate the status quo. Often, this is the most difficult task of all.

Here’s a comic for the end. In case you don’t know it already, it’s Pearls Before Swine, by the brilliant Stephan Pastis. This should at least brighten your day.

P.S. People often ask me what my recommendations would be. I’m reluctant to give any – academia is broken, and I am not sure whether fixing it in its current form makes any sense. But here are a few preliminary thoughts:

(a) Stop fetishising the difference between ‘inside’ and ‘outside’. ‘Leaving’ academia is still framed as some sort of epic failure, which amplifies both the readiness of the precarious workforce to sustain truly abominable working conditions just in order to stay “in”, and the anxiety and other mental health issues arising from the possibility of falling “out”. Most people with higher education should be able to do well and thrive in all sorts of jobs; if we didn’t frame tenure as a life-or-death achievement, perhaps fewer people would agree to suffer for years in the hope of attaining it.

(b) Fight for decent working conditions for contingent faculty. Not everyone needs to have tenure if working part-time (or moving in and out of academia) is an acceptable career choice that offers a liveable income and a level of social support. This would also help those who want to have children or, god forbid, engage in activities other than the rat race for academic positions.

(c) This doesn’t get emphasised enough, but one of the reasons people vie for positions in academia is that it at least offers a degree of intellectual satisfaction, in opposition to what Graeber has termed the ever-growing number of ‘bullshit jobs’. So, one way of making working conditions in academia more decent is to make working conditions outside of academia more decent – and, perhaps, to decentralise somewhat the monopoly on knowledge work that academia holds. Not, however, in the neoliberal outsourcing/‘creative hubs’ model, which unfortunately mostly serves to generate value for existing centres while further depleting the peripheries.

* By “junior” and “senior” I obviously do not mean biological age, but rather status – I am intentionally avoiding labels such as ‘ECRs’ etc., since I think someone can be in a precarious position whilst not being exactly at the start of their career, and, conversely, someone can be a very early career researcher but have a type of social capital, security, and recognition that are normally associated with ‘later’ career stages.

Boundaries and barbarians: ontological (in)security and the [cyber?] war on universities

Prologue

One Saturday in late January, I go to the PhD office at the Department of Sociology at the University of Cambridge’s New Museums site (yes, PhD students shouldn’t work on Saturdays, and yes, we do). I swipe my card at the main gate of the building. Nothing happens.

I try again, and again, and still nothing. The sensor stays red. An interaction with a security guard, who seems to appear from nowhere, conveys there is nothing wrong with my card; apparently, there has been a power outage and the whole system has been reset. A rather distraught-looking man from the Department of History and Philosophy of Science appears around the corner, insisting on being let back inside the building, where he had left a computer on with, he claims, sensitive data. The very amicable security guard apologises. There’s nothing he can do to let us in. His card doesn’t work, either, and the system has to be manually reset from the computers inside each departmental building.

You mean the building no one can currently access, I ask.

I walk away (after being assured the issue would be resolved on Monday) plotting sci-fi campus novels in which Skynet is not part of a Ministry of Defence, but of a university; rogue algorithms claim GCSE test results; and classes are rescheduled in a way that sends engineering undergrads to colloquia in feminist theory, and vice versa (the distances one’s mind will go to avoid thinking about impending deadlines)*. Regretfully pushing prospective pitches to fiction publishers aside (temporarily)**, I find the incident particularly interesting for the perspective it offers on how we think about the university as an institution: its spatiality, its materiality, its boundaries, and the way its existence relates to these categories – in other words, its social ontology.

War on universities?

Critiques of the current transformation of higher education and research in the UK often frame it as an attack, or ‘war’, on universities (this is where the first part of the title of my thesis comes from). Exaggeration for rhetorical purposes notwithstanding, being ‘under attack’ suggests that it is possible to distinguish the University (and the intellectual world more broadly) from its environment, in this case at least in part populated by forces that threaten its very existence. Notably, this distinction remains almost untouched even in policy narratives (including those that seek to promote public engagement and/or impact) that stress the need for universities to engage with the (‘surrounding’) society, which tend to frame this imperative as ‘going beyond the walls of the Ivory Tower’.

The distinction between universities and society has a long history in the UK: the university’s built environment (buildings, campuses, gates) and rituals (dress, residence requirements/‘keeping term’, conventions of language) were developed to reflect the separateness of education from ordinary experience, enshrined in the dichotomies of intellectual vs. manual labour, active life vs. ‘life of the mind’ and, not least, Town vs. Gown. Of course, with the rise of ‘redbrick’, and, later, ‘plateglass’ universities, this distinction became somewhat less pronounced. Rather than in terms of blurring, however, I would like to suggest we need to think of this as a shift in scale: the relationship between ‘Town’ and ‘Gown’, after all, is embedded in the broader framework of distinctions between urban and suburban, urban and rural, regional and national, national and global, and the myriad possible forms of hybridisation between these (recent work by Addie, Keil and Olds, as well as Robertson et al., offers very good insights into issues related to theorising scale in the context of higher education).

Policing the boundaries: relational ontology and ontological (in)security

What I find most interesting, in this setting, is the way in which boundaries between these categories are maintained and negotiated. In sociology, the negotiation of boundaries in academia has been studied in detail by, among others, Michèle Lamont (in How Professors Think, as well as in an overview by Lamont and Molnár), Thomas Gieryn (in Cultural Boundaries of Science and a few other texts), and Andrew Abbott in The Chaos of Disciplines (and, before that, of course, in sociologically-inclined philosophy of science, including Feyerabend’s Against Method, Lakatos’ work on research programmes, and Kuhn’s on scientific revolutions). Social anthropology has an even longer-standing obsession with boundaries, symbolic as well as material – Mary Douglas’ work, in particular, as well as Augé’s Non-Places, offers a good entry point, converging with sociology on the ground of a neo-Durkheimian reading of the distinction between the sacred and the profane.

My interest in the cultural framing of boundaries goes back to my first PhD, which explored the construal of the category of the (romantic) relationship through the delineation of its difference from other types of interpersonal relations. The concept resurfaced in research on public engagement in UK higher education: here, the negotiation of boundaries between ‘inside’ (academics) and ‘outside’ (different audiences), as well as between different groups within the university (e.g. administrators vs. academics), becomes evident through practices of engaging in the dissemination and, sometimes, coproduction of knowledge (some of this is in my contribution to this volume). The thread that runs through these cases is the importance of positioning in relation to a (relatively) specified Other; in other words, a relational ontology.

It is not difficult to see the role of negotiating boundaries between ‘inside’ and ‘outside’ in the concept of ontological security (e.g. Giddens, 1991). Recent work in IR (e.g. Ejdus, 2017) has shifted the focus from Giddens’ emphasis on social relations to the importance of the stability of material forms, including buildings. I think we can extend this to universities: in this case, however, it is not (only) the building itself that is ‘at risk’ (this can be observed in the intensified securitisation of campuses, both through material structures such as gates and cards-only entrances, and through modes of surveillance such as Prevent – see e.g. Gearon, 2017), but also the materiality of the institution itself. While the MOOC hype may have (thankfully) subsided (though not disappeared), there are the ubiquitous social media, which, as quite a few people have argued, test the salience of the distinction between ‘inside’ and ‘outside’ (I’ve written a bit about digital technologies as mediating the boundary between universities and the ‘outside world’ here, as well as in an upcoming article in a Globalisation, Societies and Education special issue that deals with reassembling knowledge production with/out the university).

Barbarians at the gates

In this context, it should not be surprising that many academics fear digital technologies: anything that tests the material/symbolic boundaries of our own existence is bound to be seen as troubling/dirty/dangerous. This brings to mind Cavafy’s poem (and J.M. Coetzee’s novel) Waiting for the Barbarians, in which an outpost of the Empire prepares for an attack by ‘the barbarians’ – an attack that, in fact, never arrives. The trope of the university as a bulwark against and/or in danger of descending into barbarism has been explored by a number of writers, including Thorstein Veblen and, more recently, Roy Coleman. Regardless of the accuracy or historical stretchability of the trope, what I am most interested in is its use as a simultaneously diagnostic and normative narrative that frames and situates the current transformation of higher education and research.

As the last line of Cavafy’s poem suggests, the barbarians represent ‘a kind of solution’: a solution to the otherwise unanswered question of the role and purpose of universities in the 21st century, which began to be asked ever more urgently with the post-war expansion of higher education, only to be shut down by the integration/normalization of the soixante-huitards in what Boltanski and Chiapello have recognised as contemporary capitalism’s almost infinite capacity to appropriate critique. Disentangling this dynamic is key to understanding contemporary clashes and conflicts over the nature of knowledge production. Rather than locating dangers to the university firmly beyond the gates, then, perhaps we could use the current crisis to think about how we perceive, negotiate, and preserve the boundaries between ‘in’ and ‘out’. Until we have a space to do that, I believe we will continue building walls only to realise we have been left on the wrong side.

(*) I have a strong interest in campus novels, both for PhD-related and unrelated reasons, as well as a long-standing interest in sci-fi, but, with the exception of DeLillo’s White Noise, I can think of very few works that straddle both genres; I would very much appreciate suggestions in this domain!

(**) I have been thinking for a while about a book that would be a spin-off from my current PhD, combining social theory, literature, and critical cultural political economy, and drawing on similarities and differences between critical and magical realism to look at universities. This can be taken as a sketch for one of the chapters, so all thoughts and comments are welcome.

On ‘Denial’: or, the uncanny similarity between the Holocaust and mansplaining


Last week, I finally got around to seeing Denial. It has many qualities and a few disadvantages – its attempt at hyperrealism contributing to both – but I would like to focus on an aspect most reviews I’ve read so far seem to have missed. In other words: mansplaining.

Brief contextualization. Lest I be accused of equating the Holocaust and mansplaining (I am not – similarity does not denote equivalence), my work deals with issues of expertise, fact, and public intellectualism; I have always found the Irving case interesting, for a variety of reasons (incidentally, I was also at Oxford during the famous event at the Oxford Union). At the same time, like, I suppose, every woman in academia and beyond with more agency than a doormat, I have, over the past year, become embroiled in countless arguments about what mansplaining is, whether it is really so widespread, whether it is done only by men (and what to call it when it is perpetrated by those who are not men?), and, of course, that pseudo-liberal attempt at outmanoeuvring the issue: the claim that using the term ‘mansplaining’ blames men as a group and is as such essentialising and oppressive, just like the discourses ‘we’ (feminists conveniently grouped under one umbrella) seek to condemn (otherwise known as a tu quoque argument).

Besides their logical flaws, what many of these attacks seem to have in common with the one David Irving launched on Deborah Lipstadt (and Holocaust deniers routinely use) is the focus on evidence: how do we know that mansplaining occurs, and is not just some fabrication of a bunch of conceited females looking to get ahead despite their obvious lack of qualifications? Other uncanny similarities between the arguments of Holocaust deniers and those who question the existence of mansplaining temporarily aside, one of the indisputable qualities of Denial is that it provides multiple examples of what mansplaining looks like. It is, of course, a film, despite being based on a true story. Rather than presenting a downside, this allows for a concentrated portrayal of the practice – for those doubting its verisimilitude, I strongly recommend watching the film and deciding for yourself whether it resembles real-life situations. For those who do not, voilà: a handy cinematic case to present to those who prefer to plead ignorance as to what mansplaining ‘actually’ entails.

To begin with, the case portrayed in the film is an instance of mansplaining par excellence: after all, it is about a self-educated (male) historian who sues an academic historian (a woman) because she does not accept his ‘interpretation’ of World War II (namely, that the Holocaust did not happen) and, furthermore, dares to call him out on it. In the case (and the film), he sets out to explain to the (of course, male) judge and the public that Lipstadt (played by Rachel Weisz) is wrong and, furthermore, that her critique has seriously damaged his career (the underlying assumption being that he is entitled to lucrative publishing deals, while she, clearly, has to earn hers – exacerbated by his mockery of the fact that she sells books, whereas his, by contrast, are free). This ‘talking over’ and attempt to make it all about him (remember, he sues her) are brilliantly captured in the opening, when Irving (played by Timothy Spall) visits Lipstadt’s public talk and openly challenges her in the Q&A, ignoring her repeated refusal to engage with his arguments. Yet it would be a mistake to locate the trope of mansplaining only in the Irving–Lipstadt relation. On the contrary – just like the real thing – it is at its most insidious when it comes from those who are, as it were, ‘on our side’.

A good example is the first meeting of the defence team, where Lipstadt is introduced to the people working with her legal counsel, the famous Anthony Julius (Andrew Scott). There is a single woman on Julius’ team: Laura (Caren Pistorius), who, we are told, is a paralegal. Despite it being her first case, it seems she has developed a viable strategy: or at least so we are told by her boss, who, after announcing Laura’s brilliant contribution to the case, continues to talk over her – that is, to explain her thoughts without giving her an opportunity to explain them herself. In this sense, what at first seems like an act of mentoring support – passing the baton and crediting a junior staff member – becomes a classic act in which a man takes it upon himself to interpret the professional intervention of a female colleague, appropriating it in the process.

Cases of professional mansplaining abound throughout the film: in multiple scenes, lawyers explain the Holocaust, as well as the concept of denial, to Lipstadt, despite her meek protests that she “has actually written a book about it”. Obvious irony aside, this serves as a potent reminder that women have to invoke professional credentials not to be recognized as experts, but merely to be recognized as equally valid participants in debate. By contrast, when it comes to the only difference in qualifications in the film that plays against Lipstadt – knowledge of the British legal system – Weisz’s character conveniently remains a mixture of ignorance and naïveté couched in Americanism. One would be forgiven for assuming that long-term involvement in a libel case, especially one that carries so much emotional and professional weight, would have prompted a university professor to get acquainted with at least the basic rules of the legal system in which the case was being processed; but then, of course, that would have stripped the male characters of the opportunity to shine the light of their knowledge in contrast to her supposed ignorance.

Of course, emotional involvement is, in the film, presented as a clear disadvantage when it comes to the case. While Lipstadt first assumes she will testify, and then repeatedly asks to be allowed to, her legal team insists she would be too emotional a witness. The assumption that having an emotional reaction (even one that is quite expected – it is, after all, the Holocaust we are talking about) and a cold, hard approach to ‘facts’ are mutually exclusive is played out succinctly in the scenes that take place at Auschwitz. While Lipstadt, clearly shaken (as anyone, Jewish or not, is bound to be when standing at the site of such a potent example of mass slaughter), asks the party to show respect for the victims, the head barrister, Richard Rampton (Tom Wilkinson), is focused on calmly gathering evidence. The value of this only becomes obvious in the courtroom, where he delivers his coup de grâce, revealing that his calm pacing around the perimeter of Auschwitz II-Birkenau (which makes him arrive late and upsets everyone, Lipstadt in particular) was in fact a way of measuring the distance between the SS barracks and the gas chambers, allowing him to disprove Irving’s assertion that the gas chambers were built as air raid shelters, and thus tilt the whole case in favour of the defence.

The mansplaining triumph, however, happens even before this Sherlockian turn, in the scene in which Rampton visits Lipstadt in her hotel room (uninvited, unannounced) in order to, yet again, convince her that she should not testify or engage with Irving in any form. After he gently (patronisingly) persuades her that “what feels best isn’t necessarily what works best” (!), she, emotionally moved, agrees to “pass her conscience” to him – that is, to a man. By doing this, she abandons not only her own voice, but also the possibility of speaking for Holocaust survivors – the one survivor who appears as a character in the film also, poignantly, being female. In Lipstadt’s concession that silence is better because it “leads to victory”, it is not difficult to read the paradoxical (pseudo)pragmatic assertion that openly challenging male privilege works, in fact, against gender equality, because it provokes a counterreaction. Initially protesting her own silencing, Lipstadt comes to accept what her character in the script dubs “self-denial” as the only way to beat those who deny the Holocaust.

Self-denial: for instance, denying yourself food for fear of getting ‘fat’ (and thus unattractive for the male gaze); denying yourself fun for fear of being labeled easy or promiscuous (and thus undesirable as a long-term partner); denying yourself time alone for fear of being seen as selfish or uncaring (and thus, clearly, unfit for a relationship). Silence: for instance, letting men speak first for fear of being seen as pushy (and thus too challenging); for instance, not speaking up when other women are oppressed, for fear of being seen as too confrontational (and thus, of course, difficult); for instance, not reporting sexual harassment, for fear of retribution, shame, isolation (self-explanatory). In celebrating ‘self-denial’, the film, then, patently reinscribes the stereotype of the patient, silent female.

Obviously, there is value in refusing to engage with outrageous liars; equally, there are issues that should remain beyond discussion – whether the Holocaust happened being one of them. Yet selective silencing masquerading as strategy – note that Lipstadt is not allowed to speak (not even to the media), while Rampton communicates his contempt for Irving by not looking at him (thus denying him the ‘honour’ of the male gaze) – too often serves to reproduce the structural inequalities that can persist even under a legal system that purports to be egalitarian.

Most interestingly, the fact that a film that is manifestly about mansplaining manages to reproduce quite a few mansplaining tropes (and, I would argue, not always in a self-referential or ironic manner) serves as a poignant reminder of how deeply the ‘splaining complex is embedded not only in politics or academia, but also in cultural representations. This is something we need to remain acutely aware of in the age of ‘post-truth’ or ‘post-facts’. If resistance to lying politicians and the media is going to take the form of the (re)assertion of one, indisputable truth, and the concomitant legitimation of those who claim to know it – strangely enough, most often white, privileged men – then we’d better think of alternatives, and quickly.

@Grand_Hotel_Abyss: digital university and the future of critique

[This post was originally published on 03/01/2017 in the Discover Society Special Issue on Digital Futures. I am also working on a longer (article) version of it, which will be uploaded soon].

It is by now commonplace to claim that digital technologies have fundamentally transformed knowledge production. This applies not only to how we create, disseminate, and consume knowledge, but also to who, in this case, counts as ‘we’. Science and technology studies (STS) scholars argue that knowledge is an outcome of coproduction between (human) scientists and the objects of their inquiry; object-oriented ontology and speculative realism go further, rejecting the ontological primacy of humans in the process. For many, it would not be an overstretch to say that machines do not only process knowledge, but are actively involved in its creation.

What remains somewhat underexplored in this context is the production of critique. Scholars in the social sciences and humanities fear that the changing funding and political landscape of knowledge production will diminish the capacity of their disciplines to engage critically with society, leading to what some have dubbed the ‘crisis’ of the university. Digital technologies are often framed as contributing to this process: speeding up the rate of production, simultaneously multiplying and obfuscating the labour of academics, perhaps even, as Lyotard predicted, displacing it entirely. Tensions between more traditional views of the academic role and new digital technologies are reflected in often heated debates over academics’ use of social media (see, for instance, #seriousacademic on Twitter). Yet, despite polarized opinions, there is little systematic research into the links between the transformation of the conditions of knowledge production and critique.

My work is concerned with the possibility – that is, the epistemological and ontological foundations – of critique, and, more precisely, how academics negotiate it in contemporary (‘neoliberal’) universities. Rather than trying to figure out whether digital technologies are ‘good’ or ‘bad’, I think we need to consider what it is about the way they are framed and used that makes them either. From this perspective, which could be termed the social ontology of critique, we can ask: what is it about ‘the social’ that makes critique possible, and how does it relate to ‘the digital’? How is this relationship constituted, historically and institutionally? Lastly, what does this mean for the future of knowledge production?

Between pre-digital and post-critical 

There are a number of ways one can go about studying the relationship between digital technologies and critique in the contemporary context of knowledge production. David Berry and Christian Fuchs, for instance, both use critical theory to think about the digital. Scholars in political science, STS, and sociology of intellectuals have written on the multiplication of platforms from which scholars can engage with the public, such as Twitter and blogs. In “Uberfication of the University”, Gary Hall discusses how digital platforms transform the structure of academic labour. This joins the longer thread of discussions about precarity, new publishing landscapes, and what this means for the concept of ‘public intellectual’.

One of the challenges of theorising this relationship is that it has to be developed out of the very conditions it sets out to criticise. This points to limitations of viewing ‘critique’ as a defined and bounded practice, or the ‘public intellectual’ as a fixed and separate figure, and trying to observe how either has changed with the introduction of the digital. While the use of social media may be a more recent phenomenon, it is worth recalling that the bourgeois public sphere that gave rise to the practice of critique in its contemporary form was already profoundly mediatised. Whether one thinks of petitions and pamphlets in the Dreyfus affair, or discussions on Twitter and Facebook – there is no critique without an audience, and digital technologies are essential in how we imagine them. In this sense, grounding an analysis of the contemporary relationship between the conditions of knowledge production and critique in the ‘pre-digital’ is similar to grounding it in the post-critical: both are a technique of ‘ejecting’ oneself from the confines of the present situation.

The dismissiveness Adorno and other members of the Frankfurt school could exercise towards mass media, however, is more difficult to parallel in a world in which it is virtually impossible to remain isolated from digital technologies. Today’s critics may, for instance, avoid having a professional profile on Twitter or Facebook, but they are probably still using at least some type of social media in their private lives, not to mention responding to emails, reading articles, and searching and gathering information through online platforms. To this end, one could say that academics publicly criticising social media engage, in fact, in a performative contradiction: their critical stance is predicated on the existence of digital technologies both as objects of critique and as the main vehicles of its dissemination.

This, I believe, is an important source of perceived tensions between the concept of critique and digital technologies. Traditionally, critique implies a form of distancing from one’s social environment. This distancing is seen as both spatial and temporal: spatial, in the sense of providing a vantage point from which the critic can observe and (choose to) engage with society; temporal, in the sense of affording shelter from the ‘hustle and bustle’ of everyday life, necessary to stimulate critical reflection. Universities, at least for a good part of the 20th century, were tasked with providing both. Lukács, in his account of the Frankfurt school, satirized this as “taking up residence in the ‘Grand Hotel Abyss’”: engaging in critique from a position of relative comfort, from which one can stare ‘into nothingness’. Yet, what if the Grand Hotel Abyss has a wifi connection?

Changing temporal frames: beyond the Twitter intellectual?

Some potential perils of the ‘always-on’ culture and contracting temporal frames for critique are reflected in the widely publicized case of Steven Salaita, an internationally recognized scholar in the field of Native American studies and American literature. In 2013, Salaita was offered a tenured position at the University of Illinois. However, in 2014, the Board of Trustees withdrew the offer, citing Salaita’s “incendiary” posts on Twitter as the reason. Salaita is a vocal critic of Israel, and his tweets at the time concerned the Israeli military offensive in the Gaza Strip; some of the University’s donors found this problematic and pressured the Board to withdraw the offer. Salaita has in the meantime appealed the decision and received a settlement from the University of Illinois, but the case – though by no means unique – drew attention to the (im)possibility of separating the personal, the political, and the professional on social media.

At the same time, social media can provide venues for practicing critique in ways not confined by the conventions or temporal cycles of academia. The example of Eric Jarosinski, “the rock star philosopher of Twitter”, shows this clearly. Jarosinski is a Germanist whose Tweets contain clever puns on the Frankfurt school, as well as on Hegel and Nietzsche, among others. In 2013, he took himself out of consideration for tenure at the University of Pennsylvania, but continued to compose philosophically-inspired Tweets, eventually earning a huge following, as well as columns in two of the largest newspapers in Germany and the Netherlands. Jarosinski’s moniker, #failedintellectual, is a self-ironic reminder that it is possible to succeed whilst deviating from the established routes of intellectual critique.

The different ways in which critique can be performed on Twitter should not, however, distract from the fact that it operates in fundamentally politicized and stratified spaces; digital technologies can render these spaces more accessible, but that does not mean they are more democratic, or that they offer a better view of ‘the public’. This is particularly worth remembering in the light of recent political events in the UK and the US. Once the initial shock following the US election and the British EU referendum had subsided, many academics (and intellectuals more broadly) took to social media to comment, evaluate, or explain what had happened. Yet, for the most part, these interventions ended exactly where they began – on social media. This amounts to live Tweeting from the balcony of the Grand Hotel Abyss: the view is good, but the abyss no less gaping for it.

By sticking to critique on social media, intellectuals are, essentially, doing what they have always been good at – engaging with the audiences, and in the ways, they feel comfortable with. To this end, criticizing the ‘alt-right’ on Twitter is not altogether different from criticising it in lecture halls. Of course, no intellectual critique can aspire to address all possible publics, let alone equally. However, it makes sense to ask how the ways in which we imagine our publics influence our capacity to understand the society we live in and, perhaps more importantly, our ability to predict – or imagine – its future. In its present form, critique seems far better suited to an idealized Habermasian public sphere than to the political landscape taking shape in the 21st century. Digital technologies can offer an approximation, perhaps even a good simulation, of the former; but that, in and of itself, does not mean that they can solve the problems of the latter.

Jana Bacevic is a PhD researcher at the Department of Sociology at the University of Cambridge. She works on social theory and the politics of knowledge production; her thesis deals with the social, epistemological and ontological foundations of the critique of neoliberalism in higher education and research in the UK. Previously, she was a Marie Curie fellow at the University of Aarhus in Denmark, on the Universities in Knowledge Economies (UNIKE) project. She tweets at @jana_bacevic.

Against academic labour: foraging in the wildlands of digital capitalism

Central Park, NYC, November 2013

I am reading a book called “The Slow Professor: Challenging the Culture of Speed in the Academy”, by two Canadian professors, Maggie Berg and Barbara Seeber. Published earlier in 2016 to (mostly) positive critical acclaim, it critiques the changing conditions of knowledge production in academia, in particular those associated with the expectation to produce more, and at faster rates (also known as ‘acceleration’). As an antidote, the Slow Professor Manifesto appended to the Preface suggests that faculty should resist the corporatisation of the university by adopting the principles of the Slow Movement (as in Slow Food, etc.) in their professional practices.

While the book is interesting, the argument is not particularly exceptional in the context of the expanding genre of diagnoses of the ‘end’ or ‘crisis’ of the Western university. The origins of the genre can be traced to Bill Readings’ ‘The University in Ruins’ (1996) – though, of course, one could always stretch the lineage back to 1918 and Veblen’s ‘The Higher Learning in America’; predecessors in Britain include E.P. Thompson’s ‘Warwick University Ltd’ (1970) and Halsey’s ‘Decline of Donnish Dominion’ (1992). Among contemporary representatives of the genre are Nussbaum’s ‘Not for Profit: Why Democracy Needs the Humanities’ (2010), Collini’s ‘What Are Universities For?’ (2012), and Giroux’s ‘Neoliberal Attack on Higher Education’ (2013), to name but a few. In other words, there is no shortage of works documenting how the transformation of the conditions of academic labour fundamentally threatens the role and function of universities in Western societies – and, by extension, the survival of these societies themselves.

I would like to say straight away that I do not, for a single moment, dispute or doubt the toll that the transformation of the conditions of academic labour is taking on those employed at universities. Having spent the past twelve years researching the politics of academic knowledge, most of that time while working in higher education in a number of different countries, I have encountered hardly a single academic or student who was not pressured, threatened, or at the very least insecure about their future employment. What I want to argue, instead, is that a critique of the transformation of knowledge production that focuses on academic labour is no longer sufficient. Concomitantly, neither is the critique of time – as in labour time.

In lieu of labour, I suggest we think of what academics do as foraging. By this I do not in any way mean to trivialize union struggles that focus on the working conditions of faculty or the position of students; these are and will continue to be very important, and I have always been proud to support them. Unfortunately, however, they cannot capture the way the production of knowledge has already changed. This is not only due to the growing academic ‘precariat’ (or ‘cognitariat’): while the absence of stable or full-time employment has informed both analyses and specific forms of political action on both sides of the Atlantic, these still frame the problem as fundamentally dependent on academic labour. While this may for the time being represent a good strategy in the political sense, it creates a set of potential contradictions in the conceptual one.

For one, labour implies the concept of use: Marx’s labour theory of value postulates that this is what allows it to be exchanged for something else (money, favours). Yet we academics are often the first to point out that a lot of knowledge is not directly useful: for every paradigmatic scientist in a white lab coat who cures cancer, there is an equally paradigmatic bookworm reading 18th-century poetry (bear with me, it’s that time of the year when clichés abound). Trying to measure their value by the same or even a similar standard risks slipping into the pathologies of impact or, worse, vague statements about the necessity of the social sciences and humanities for democracy, freedom, and human rights (despite my personal sympathy for the latter argument, it warrants mentioning that the link between democratic regimes and academic freedom is historically contingent, rather than causal).

Second, framing what academics do as labour makes it very difficult to avoid embracing some form of measurement of output. This isn’t always related to quantity: one can also measure the quality of publications (e.g., by rating them in relation to the impact factors of journals they were published in). Often, however, the ideas of productivity and excellence go hand in hand. This contributes to the proliferation of academic writing – not all of which is exceptional, to say the very least – and, in turn, creates incentives to produce both more and better (‘slow’ academia is underpinned by the argument that taking more time creates better writing).

This also points to why the critique of the conditions of knowledge production is so focused on the notion of time. As long as creating knowledge is primarily defined as a form of labour, it depends on socially and culturally defined cycles of production and consumption. Advocating ‘slowness’, thus, does not amount to a critique of the centrality of time to capitalist production: it just asks for more of it.

The concept of foraging, by contrast, is embedded in a different temporal cycle: seasonal, rather than annual or REF-able. This isn’t some sort of neo-primitivist glorification of the supposed forms of sustenance of humanity’s forebears before the (inevitable) fall from grace; it is, rather, a more precise description of how knowledge works. To this end, we could say most academics forage anyway: they collect bits and scraps of ideas and information, and turn them into something that can be consumed (if only by other academics). Some academics will discover new ‘edible’ things, either by trial and error or by learning from (surveying) the population that lives in the area, and introduce them to other academics. Often, however, this amounts not so much to creating something entirely new or original as to recombining existing flavours. This is why it is diversity, rather than abundance as such, that determines how interesting an environment a university, city, or region will become.

However, unlike labour, foraging is not ‘naturally’ given to the creation of surplus: while foraged food can be stored, most of it is collected and prepared more or less in relation to the needs of those who eat it. Similarly, it is also by default somewhat undisciplined: foragers must roam, keeping an eye out for plants and other foodstuffs that may be useful to them. This does not mean that foraging does not rely on tradition, or that it is not susceptible to prejudice – often, people will ignore or attribute negative properties to foods they are unfamiliar with, much like academics ignore or fear disciplines or approaches that do not form part of their ‘tribe’ or school of thought.

As appealing as it may sound, foraging is not a romanticized or, worse, sterile vision of what academics do. Some academics, indeed, labour. Some, perhaps, even invent. But increasing numbers are actually foraging: hunting for bits and pieces, some of which can be exchanged for other stuff – money, prestige – thus allowing them to survive another winter. This isn’t easy: in the vast digital landscape, knowing how to spot ideas and thoughts that will have traction – and especially those that can be exchanged – requires continued focus and perseverance, as well as a lot of previously accumulated knowledge. Making a mistake can be deadly – perhaps not in the literal sense, but certainly as far as reputation is concerned.

So, workers of all lands, happy New Year, and spare a thought for the foragers in the wildlands of digital capitalism.

We are all postliberals now: teaching Popper in the era of post-truth politics

Adelaide, South Australia, December 2014

Late in the morning after the US election, I am sitting down to read student essays for the course on social theory I’m supervising. This part of the course covers the work of Popper, Kuhn, Lakatos, and Feyerabend, and its application in the social sciences. The essay question is: do theories need to be falsifiable, and how can we choose between competing theories if they aren’t? The first part is a standard essay question; I added the second a little over a week ago, interested to see how students would think about criteria of verification in the absence of an overarching regime of truth.

This is one of my favourite topics in the philosophy of science. When I was a student at the University of Belgrade, feeling increasingly out of place in its post-truth, intensely ethnographic though anti-representationalist anthropology, the Popper–Kuhn debate in Criticism and the Growth of Knowledge held the promise that, beyond the classification of elements of the material culture of the Western Balkans, lurked bigger questions of the politics and sociology of knowledge (paradoxically, this may be why it took me so long to realize I actually wanted to do sociology).

I was Popper-primed well before that, though: the principle of falsification is integral to the practice of parliamentary-style academic debating, in which the task of the opposing team(s) is to ‘disprove’ the motion. In the UK, this practice is usually associated with debating societies such as the Oxford and Cambridge Unions, but it is widespread in the US as well as in the rest of the world; during my undergraduate studies, I was an active member of the Yugoslav (now Serbian) Universities Debating Network, known as Open Communication. Furthermore, Popper’s political ideas – especially those in The Open Society and Its Enemies – formed the ideological core of the Open Society Foundation, founded by the billionaire George Soros to promote democracy and civil society in Central and Eastern Europe.

In addition to debate societies, the Open Society Foundation supported and funded a great part of civil society activism in Serbia. At the time, most of it was conceived as opposition to the regime of Slobodan Milošević, a one-time banker turned politician who ascended to power in the wake of the dissolution of the Socialist Federal Republic of Yugoslavia. Milošević played a major role in the conflicts in its former republics, simultaneously plunging Serbia deeper into an economic and political crisis exacerbated by international isolation and sanctions, culminating in the NATO intervention in 1999. Milošević’s rule ended in a coup following a disputed election in 2000.

I had been part of the opposition from the earliest moment conceivable, skipping classes in secondary school to go to anti-government demos in 1996 and 1997. The day of the coup – 5 October 2000 – should have been my first day at university, but, together with most students and staff, I was at what would turn out to be the final public protest, which ended in the storming of the Parliament. I swallowed quite a bit of tear gas, twice in situations I expected not to get out of alive (or at the very least unharmed), but somehow made it to a friend’s house, where, together with her mom and grandma, we sat in the living room and watched one of Serbia’s hitherto banned TV and radio stations – the then-oppositional B92 – come back on air. This is when we knew it was over.

Sixteen years and a little more than a month later, I am reading students’ essays on truth and falsehood in science. This, by comparison, is a breeze, and it is always exciting to read different takes on the issue. Of course, in the course of my own undergraduate studies, my appreciation of Popper was replaced by excitement at the discovery of Kuhn – and the concomitant realization of the inertia of social structures, which, just like normal science, are incredibly slow to change – and succeeded by mild perplexity at Lakatos (research programmes seemed equal parts reassuring and inherently volatile – not unlike political coalitions). At the end, obviously, came infatuation with Feyerabend: like every self-respecting former liberal, I reckoned myself a methodological (and not only methodological) anarchist.

Unsurprisingly, most of the essays I read exhibit the same trajectory. Popper is, quite obviously, passé: his critique of Marxism (and other forms of historicism) is deemed not particularly useful, his falsificationism too strict a criterion for demarcation; his association with the ideologues of neoliberalism probably did not help much either.

Except that… this is what Popper has to say:

It is undoubtedly true that we have a more direct knowledge of the ‘inside of the human atom’ than we have of physical atoms; but this knowledge is intuitive. In other words, we certainly use our knowledge of ourselves in order to frame hypotheses about some other people, or about all people. But these hypotheses must be tested, they must be submitted to the method of selection by elimination.

(The Poverty of Historicism, 127)

Our knowledge of ourselves: for instance, our knowledge that we could never, ever, elect a racist, misogynist reality TV star as president of one of the world’s superpowers. That we would never vote to leave the European Union, which, like all supranational entities, has its flaws – but look at how much it invests in our infrastructure. Surely – as Popper would argue – we are rational animals: and rational animals would not do anything that puts them in unnecessary danger.

Of course, we are correct. The problem, however, is that we have forgotten the second part of Popper’s claim: we use knowledge of ourselves to frame hypotheses about other people. For instance: since we understand that a rich businessman is not likely to introduce economic policies that harm the elite, the poor would never vote for him. For instance: since we remember the victims of Nazism and fascism, everyone must understand how frail the liberal consensus in Europe is.

This is why academia came to be “shocked” by Trump’s victory, just as it was shocked by the outcome of the Brexit referendum. This is also the key to the question of why polls “failed” to predict either of these outcomes. Perhaps we were too focused on extrapolating our assumptions to other people, and not enough on checking whether they hold.

By failing to understand that the world is not composed of left-leaning liberals with a predilection for social justice, we commit, time and again, what Bourdieu termed the scholastic fallacy – the propensity to attribute the categories of our own thinking to those we study. Alternatively, and much worse, we deny them common standards of rationality: voters whose political choices differ from ours are then cast as uneducated, deluded, suffering from false consciousness. And even if they’re not, they must be a small minority, right?

Well, as far as hypotheses are concerned, that one has definitely failed. Maybe it’s time we started considering alternatives.

What after Brexit? We don’t know, and if we did, we wouldn’t dare say

[This post originally appeared on the Sociological Review blog, Sunday 3rd July, 2016]

In dark times
Will there also be singing?
Yes, there will be singing
About the dark times.

– Bertolt Brecht

Sociologists are notoriously bad at prediction. The collapse of the Soviet Union is a good example – not only did no one (or almost no one) predict it would happen, it also challenged social theory’s dearly-held assumptions about the world order and the ‘nature’ of both socialism and capitalism. When the next big ‘extraneous’ shocks to the Western world – 9/11 and the 2008 economic crisis – hit, we were almost as unprepared: save for a few isolated voices, no one foresaw either the events or the full scale of their consequences.

The victory of the Leave campaign and Britain’s likely exit from the European Union present a similar challenge. Of course, in this case everyone knew it might happen, but there are surprisingly few ideas of what the consequences will be – not on the short-term political level, where the scenarios seem pretty clear, but in terms of longer-term societal impact, on either the macro- or the micro-sociological level.

Of course, anyone but the direst of positivists will be quick to point out that sociology does not predict events – it can, at best, aim to explain them retroactively (for example). Public intellectuals have already offered explanations for the referendum result, ranging from the exacerbation of xenophobia due to austerity, to the lack of awareness of what the EU does. However, as Will Davies’ more in-depth analysis suggests, how these come together is far from obvious. And while it is important to work on understanding them, the fact that we are at a point of intensified morphogenesis, or of multiple critical junctures, means we cannot stand on the sidelines and wait until they unfold.

Methodological debates temporarily aside, I want to argue that one of the things that prevent us from making (informed) predictions is that we’re afraid of what the future might hold. The progressive ethos that permeates the discipline can make it difficult to think through scenarios predicated on a different worldview. A similar bias kept social scientists from realizing that countries seen as examples of real socialism – like the Soviet Union, and particularly the former Yugoslavia – could ever fall apart, especially in a violent manner. The starry-eyed assumption that exit from the European Union could be a portent of a new era of progressive politics in the UK is a case in point. As much as I would like to see that happen, we need to seriously consider other possibilities – or, perhaps, the possibility that what the future has in store is beyond our darkest dreams. In recent years, there has been a resurgence of thinking about utopias as critical alternatives to neoliberalism. Alongside this, we need to actively start thinking about dystopias – not as a way of succumbing to despair, but as a way of using the sociological imagination to understand both the societal causes of the trends we are observing – nationalism, racism, xenophobia, and so on – and our own fear of them.

Clearly, a strong argument against making long-term predictions is the reputational risk – to ourselves and to the discipline – that this involves. If the failure of Marx’s prediction of the inevitability of capitalism’s collapse is still occasionally brought up as a critique of Marxism, offering longer-term forecasts in a context where the social sciences are increasingly held accountable to the public (i.e. policymakers) understandably seems risky. But this is where the sociological community has a role to play. Instead of bemoaning the glory of bygone days, we can create spaces from which to consider possible scenarios – even if some of them are bleak. In the final instance, to borrow from Henshel: the future cannot be predicted, but futures can be invented.

Jana Bacevic is a PhD researcher in the Department of Sociology at the University of Cambridge. She tweets at @jana_bacevic.