Thursday, 11 February 2016

The Plot Against America

The American system of government, and in particular its constitutional furniture (Congress, the Supreme Court, the Presidency), is constitutive of national politics as much as it is representative. In other words, the national polity is periodically recreated through the democratic process. This is why US politics is highly prescriptive, obsessed with procedural manoeuvres, and treated as a spectator sport by the media. The spectacle of the US Presidential campaign is particularly didactic in that it takes candidates and voters on a journey from ward-level caucuses through state primaries and inter-state conventions to a nationwide election and the abstractions of national interest. The trajectory serves to marginalise interests that don't easily fit into this structure. For example, race tends to enjoy a brief salience in the South Carolina primary but is usually kept to the undercard elsewhere.

It also produces the paradox of a nation famed for its parochialism obsessing over foreign policy issues, even if they are addressed in cartoonish terms. A gradual process of refinement and moderation, heavily mediated in the latter stages by Washington elites and the traditional sages of foreign affairs (Henry Kissinger will shortly pop up), produces a final consensus that is pretty much "more of the same" with the promise of better behaviour. It also tends to bias towards rhetorical moralism and the global responsibilities of "the indispensable nation", which diverts attention from more obvious economic self-interest, so Russia will usually figure more than South America. This is why American Presidential elections tend to have more foreign policy substance than you'd expect, and why domestic debate often dances around proxies for race: crime, welfare, the "middle class" (i.e. white working class).

There has been plenty of commentary since Obama's election in 2008 about the demographic tilt in the USA and how this is likely to influence future elections. The conventional wisdom holds that the Democrats are a shoo-in because of high levels of support among growing minority populations, as well as the socially liberal but economically-stressed young; while the Republicans will struggle to forge a viable electoral alliance between Hispanic voters across the sun-belt and mad-as-hell seniors across the Midwest. This assessment assumes that blacks will always favour the Democrats and that there is a latent conservative majority among Hispanics (and not just Cuban exiles). This is a bit like the idea that the Scots will always vote Labour and that Asians have yet to wake up to the attractions of the Tories. It's true until it's clearly not.

American conservatism has three pillars: nationalism, capitalism and religious morality. Until the 1950s, nationalism was heavily conditioned by the US's origins as a racist and genocidal state and its belief in the "manifest destiny" of whites. The Civil War may have led to the emancipation of slaves, but it also standardised racial discrimination as a principle of social organisation, as blacks were transformed from chattels to second-class citizens. Race-inflected nationalism in the late 19th century served to advance capitalist interests in the near-abroad, e.g. in the war against Spain, but it also served a domestic purpose in dividing the working class between natives and immigrants, with the latter being defined by culture (e.g. Catholic Irish and Italians) as much as ethnicity (e.g. Chinese).

The value of race as an organising principle weakened as immigrant groups assimilated, blacks moved to the northern industrial cities, and the Great Depression crossed racial and cultural lines. With uncertainty over what constituted a popular nationalism - no longer WASP, let alone centred on small farmers and craftsmen - American identity started to define itself through populist and isolationist opposition: against trusts and robber barons; foreign entanglements and European imperialism; and the red menace of the Soviet Union and China. Anticommunism became central to US politics for two reasons. First, it provided a form of ideological nationalism that marginalised race and class (though this ironically created an environment in which civil rights could be advocated as pro-American). Second, it provided support for the elite pivot from isolationism to internationalism in the 1940s.


After 1989 and the sudden redundancy of anticommunism, US nationalism was reworked by neo-conservatives as the proactive assertion of US economic interests abroad. For all the guff about exporting freedom, this was clearly a return to "realist" geopolitics. This not only maintained America's internationalist focus, but self-consciously connected with the "imperial" lineage of the late 19th century before the turn to isolationism. The problem was that this policy's triumphalism and delusion ("we make our own reality") demanded permanent success. When it hit the buffers in Iraq, the policy framework fell apart. Some conservatives sought to fill the vacuum this created with Islamophobia, but that hasn't provided anywhere near as compelling a threat to America as international communism once did, and the general population remain sceptical that terrorism is sufficient justification for sending American troops (as opposed to drones) overseas.

While white resentment at minority advances remained a strong undercurrent during the conservative revival of the 70s and 80s, it was never strong enough to turn nationalism back towards a nativist or isolationist stance. This was partly because of the Reagan era's aggressive internationalism, but also because conservative strategists deliberately channelled the resentment into an attack on a welfare state that was seen as a proxy for "those people", i.e. blacks and other minorities. The recent shift towards greater support for welfare in the US reflects demography as much as the economy. While the increase in income inequality and the poor prospects for "millennials" grab the headlines, the sea-change is that whites are due to become a minority of the US population by 2043, so "those people" will increasingly be funding the welfare state of tomorrow. The growing intolerance of police violence and guns are signs of this social change: both the growing confidence of minorities and the growing rejection of institutional racism in their name by whites.

The trope of "war" in domestic American politics is usually a sign of elite interests being pursued through divide-and-rule. Cold War anticommunism helped hobble the American labour movement at a time when it was economically powerful, and also provided a means of hampering support for the extension of civil rights. The "War on Drugs", and the subsequent growth of the carceral state, like the rhetorical attack on "welfare queens", was a cynical manoeuvre to divert the resentment of whites, faced by destabilising social and economic changes, towards blacks; a manoeuvre embraced by the Democrats as much as the Republicans. The "War on Terror" has served to create a new "other" (i.e. Muslims) at a time when black and Hispanic voters have become too significant aa constituencies to either ignore or alienate. But America is war-weary, and the paranoid style promoted latterly by Fox News is facing diminishing political returns as it becomes ever more internecine and absurd.

Barack Obama's presidency may have been marked at the beginning by the antics of the Tea Party, but it has been marked at the end by Black Lives Matter. The significance of Beyoncé delivering a Black Power tribute at the Super Bowl is that a TV network dependent on advertisers seeking a broad audience did not bat an eyelid. The problem faced by the political establishment is that America is running out of enemies. This is not because it is drawing in its horns internationally, but because it cannot identify a convincing, common threat that unites an increasingly diverse domestic audience. Global jihad doesn't measure up as an existential danger, old dichotomies like white versus black no longer work for enough people, while the self-identifying middle class is increasingly open to single-payer healthcare, the legalisation of cannabis and more progressive taxation.

Donald Trump is the logical result of the "plot against America" turn of Republican politics. Where other candidates are reduced to threatening to carpet-bomb far away countries where evil villains may be hiding, Trump happily singles out Mexicans and Muslims on US soil. His popularity reflects the rejection by the Republican base of the conventional wisdom that the party must reach out, but there is little evidence that circling the wagons will be electorally successful. In contrast, there is growing evidence that the demographic dividend promised for the Democrats is more likely to be realised by a social democrat like Bernie Sanders than by a neoliberal like Hillary Clinton for whom Wall Street funding, race and gender are all instrumental. As the field thins, it is the "1%" who are increasingly being cast as public enemy number one, almost by default.

Friday, 5 February 2016

The Symbolic State

In 1982, the economic historian Sidney Pollard suggested that the weakness of British economic policy was the result of "concentrating first and foremost on symbolic figures and quantities, like prices, exchange rates and balances of payment, to the neglect of real quantities, like goods and services produced and traded". In his book, The Wasting of the British Economy, he claimed that rational planning and investment in the postwar years were "repeatedly sacrificed for the sake of symbols". This argument can be expanded. In the 1950s and 60s, both Conservative and Labour governments pursued foreign and defence policies whose cost exceeded the UK's financial power, largely to keep the symbolic "seat at the top table". They also maintained Sterling as a semi-global reserve currency for symbolic political rather than practical economic reasons.

Pollard's analysis was reinforced at the time that he wrote by the Thatcher government's narrow focus on the money supply as part of its monetarist experiment. The subsequent commitment to the ERM, like the investment in Trident and the need for the Union Jack to fly over Port Stanley, similarly elevated the symbolic over the pragmatic. In retrospect, British political history after 1945 looks like a teenager flitting from one pop-star infatuation to another. Though neoliberalism introduced a managerialist focus on "process" (e.g. supply-side reform), its British incarnation quickly reverted to an obsession with metrics, notably the emblematic targets of the Blair years in health and education (and a tolerance for the massaging of process to meet those targets). Though Pollard's is an analysis that assumes the economy is heavily determined by decisions made in Whitehall, rather than changes in the material base, it remains insightful because macroeconomic management continues to be dominated by the symbolic norms of the Treasury (e.g. the deficit).


You might say that all nations are invested in symbols, so what's so different about the UK? The Force de dissuasion projects French power, and the euro has clearly inherited the cultural significance of the Deutschmark for Germany (if to the frustration of other Eurozone members less wedded to the "black zero"). The point is that these are substantial: the world's third largest nuclear arsenal and an unprecedented multinational currency. What is noticeable about British symbols is that we cling to increasingly empty forms. The point about Jeremy Corbyn's suggestion that we keep our nuclear weapons "in the cupboard under the stairs", like the fact that the value of the pound reflects the cost of London property rather than the health of the economy as a whole, is that it pulls back the curtain to "let daylight in upon the magic" (to borrow Walter Bagehot's phrase about the need to preserve the "mystery" of the monarchy).

In other words, the "signifier" has become so important that we preserve it even when what it represents, the "signified", is redundant. We have become wedded to the idea that national identity is symbolic, hence the constitutional antiquarianism of the monarchy and the House of Lords. The current EU negotiations are symbolic (and thus baffling to other EU members) in the sense that particular metrics, such as net migration, are taken to be indicators of broader social and economic health, despite being wholly inadequate to the task. While defenders of the symbolic realm become ever more absurd (now proposing a bill to enshrine Parliamentary sovereignty), the suspicion grows that not only are these symbols hollow, but that they serve as vectors of corruption and anti-democratic collusion: the defence industry recycles taxes to privileged corporations, the management of the pound is biased towards the interests of the City, and the Lords have become a means by which corporate interests infest Whitehall.

The vacancy of these symbols chimes with the wider (and, it should be said, contested) notion of British decline. In a review of Pollard's book, Arthur Marwick noted that the postwar search for the causes of relative economic decline ranged over a century, from the failure to invest in technical education and technological innovation in the 1880s to "the conservative reaction against austerity in the 1950s". Others traced the malaise to the anti-industrial ethos of the British upper class in the Victorian era or the self-indulgence of the postwar welfare state. What all these theses had in common was a belief in internal decay masked by outward propriety - a "whited sepulchre" - hence the resonance of hypocrisy, woodworm and moral turpitude in postwar culture. Part of the attraction of neoliberalism for UK elites was the promise that these outmoded forms could be superseded through a commitment to the international norms of modern management practice.

Will Davies has defined neoliberalism as the "pursuit of the disenchantment of politics by economics". As Stephen Dunne puts it in his review of Davies' The Limits of Neoliberalism, "it opposed the enigmatic authority of politics, ... proposing the world as depicted by the Austrian School of economics as the less mysterious, more legitimate alternative". This rationality, whether in the form of homo economicus or Coase's theory of corporate efficiency, was undermined not just by the events of 2008 but by the state of emergency that arose from it, specifically the intervention by the government to reset the game through the bailout of the banks, "simply by force of decision" as Davies puts it. However, I think the rot set in much earlier, arguably within months of Tony Blair coming to power in 1997 when the death of Princess Diana showed the residual power of the symbolic, and was certainly confirmed by the decision to preserve the "enchantment" of the House of Lords.


2008 was when the curtain collapsed. It marked a return to arbitrary power after decades in which we were assured that the executive was subject to the same market constraints as all neoliberal actors, thereby ensuring the preservation of democracy and accountability. The consequence has been both the rise of hitherto impermissible political attitudes (both Sanders and Trump are beneficiaries of this emperor's new clothes moment in the US) and the anthropological turn towards behavioural economics and big data. The former has revived the optimistic idea of the state as the agent of democratic will, rather than just another actor subject to the market, while the latter has sought to replace the metaphysical claims of neoliberalism (the panacea of markets) with a return to the pessimistic anthropological management theorised in the early 20th century by the likes of Thorstein Veblen, Wilfred Trotter and Edward Bernays. The one activist, the other atavistic.

Though neoliberalism continues to be hegemonic, its symbols are increasingly viewed as empty forms. This ranges from suspicion over the motives and practices of "superbrands" like Google, to the identification of the "1%" as rentiers rather than wealth-creators. "Technocrat" has become a term of abuse, there is growing cynicism over the beneficial claims of competition, while meritocracy has given way to "generation rent". The divide between ideology and reality leads to the intellectual redundancy of the political centre, prompting politicians to revive moth-eaten symbols centred on sovereignty and security, to which we respond ambivalently. These are superficially revolutionary times. In other words, we may not see the overthrow of capitalism, but we can now envisage the end of the outmoded forms that we neglected to dismantle during the neoliberal years. If we choose the activist over the atavistic, the process that began in 1989 with the retirement of the symbols of communist power may finally arrive in London and Washington.

Monday, 1 February 2016

Star Wars as Counterfactual History

Now that the fever around Star Wars: The Force Awakens has abated, I thought it might be fun to look at the series through the prism of counterfactual history. The first thing to note is that George Lucas's creation is set in the past - "A long time ago in a galaxy far, far away" - so it is a suitable subject for a historian. It is also entirely fictional, a characteristic it shares with the counterfactual. The over-arching Star Wars story has the framework of a classical political history, notably the transition from republic to empire (and back again), while the historical parallels with the American Revolution and the Vietnam War (and even the post-9/11 era in Episode VII) are clear. It also has strong mythic elements, employing a narrative arc built on Greek and Roman tales and archetypal, if somewhat two-dimensional, characters (Lucas's Freudian obsessions are now giving way to more Jungian tropes under the Disney influence).

With its antique social forms (from taverns to princesses), and daft technology (from light-sabres to death-stars), Star Wars is unimaginative and implausible as Sci-Fi, but this simply highlights that it owes more to counterfactual history than speculative fiction. If The Lord of the Rings is a reactionary fantasy, and Star Trek is a thought-experiment about the liberating potential of technology, Star Wars occupies a parallel universe of the historically familiar in which social development (as opposed to chronology) appears to have stopped. This is why the messed-up production history (the 22 years between the shooting of episodes IV and I) does not really matter. It's not like fashions alter: everyone still dresses as if they had just wandered in from a WW2 film or a Western. Despite its pretensions to rationalism, this is a universe in which change is the product of personal ambition, economics has barely advanced beyond mercantilism, and the galaxy is under the sway of secret societies and the soupy metaphysics of the Force. If I had to put a date on its intellectual vintage, I'd say around 1770.

Counterfactuals are categorically different to speculative fiction. While a counterfactual may be employed in the creation myth of a utopia or a dystopia - the South wins the American Civil War, the Axis Powers win World War Two - the purpose of speculative fiction is to construct a social or political model in which particular relations can be tested under changing conditions: could chattel slavery survive in an industrial society, would a totalitarian regime implode under the impact of modern technologies? Speculative fiction tends towards the dialectic: contending forces, constant stress, disruptions in the social fabric. The subject is change. In contrast, the subject of a counterfactual is likely to be persistence. When change does feature, it usually takes the form of new technologies seamlessly integrated into a traditional setting (those daft light-sabres again), which is an ideological plea for the independence of social relations from the material base: we can acquire high-tech and maintain an aristocratic hierarchy.


Because they are conservative, counterfactuals are paradoxically often optimistic. They seek to wish away actual changes (no fall of Constantinople, no Bolshevik Revolution), but in so doing they annihilate history and imagine an eternal present in which social relations are unchanging. Even those counterfactuals that imagine pessimistic scenarios tend to do so in order to highlight current virtues or make satirical contrasts with the present day. For example, most fictionalised alternate histories in which the Nazis successfully invaded Britain (or the USSR invaded the USA) feature a heroic resistance and complicit state apparatchiks. Just as British pre-1914 "invasion literature" reflected anxiety over empire and the social question, so a Nazi Great Britain was an extreme example of the imaginative response to the welfare state, while the WW3 strand in American culture (e.g. Red Dawn) was more about resisting gun control and Washington than a genuine expectation of a Russian revanche in Alaska. In a similar vein, Star Wars is not just about the eventual triumph of good over evil, but about resilience: the Jedi order cannot be destroyed.

Counterfactuals that extrapolate developments - i.e. "if X didn't happen" - usually reflect the belief that social continuity is to be preferred, even when they allow for technological change. Tory historians who wonder what would have happened if the UK hadn't been involved in the two world wars are usually mourning the loss of empire. Their defence of this approach invariably privileges the opinions of contemporary elites. As Niall Ferguson says, "Virtual history -- and this is a very, very important point, which isn't understood by many people who dabble in 'what if' questions -- is only legitimate if one can show that the alternative that you're discussing, the 'what if' scenario you're discussing, was one that contemporaries seriously contemplated". This distinction is nonsense. The plausibility of an option to a political elite is irrelevant. The UK declaring neutrality in 1914 is no more "realistic" than the Battle of the Somme being stopped by the intervention of Martians. Neither happened: a miss is as good as a mile.

Rightwing alternate histories tend to emphasise the pivotal role of individuals, which is both a reflection of their non-materialist ideology and their emotional origin in the realms of fantasy fiction. This can be inadvertently entertaining. Consider this from the economist Bryan Caplan: "Suppose Karl Marx had never been born.  How would the modern world be different? ...Without Marx, there would have been no prominent intellectual promoter of violent revolution for socialist dictatorship. There would still have been a big socialist movement, including many socialists dreaming of bloodbaths and tyranny. But the movement as a whole would have rapidly evolved into something like social democracy. Third World dictators would still have killed in the name of socialism. But there would have been no Soviet Union without Marx. And without the Soviet Union, there would be no fascist Italy and no Nazi Germany" (the Fascist party was founded in 1915, two years before the Bolsheviks seized power). Killing Luke Skywalker at birth might well have preserved the Galactic Empire, but that's because it's a fictional construct.


Counterfactual history is the louche cousin of comparative history, whose methodology it freely borrows to lend itself some credibility. The latter seeks to contrast developments between different groups or territories, usually in the same historical period. This is a perfectly respectable undertaking that can provide valuable insights, but it requires caution. It tends towards the study of nation states, as units of measure that are more easily compared, and the treatment of economic development as the product of competitive advantage rather than internal social relations, which has an obvious ideological purpose. Its methods are also easily twisted to support non-contemporary and often absurd equivalences, for example Niall Ferguson's recent claim that Muslim immigration to Europe parallels the fall of the Roman Empire. This goes beyond the idea that history repeats itself (or rhymes) to an older, reactionary idea of recurrence as the working of fate. This is a key feature in Star Wars, particularly evident in The Force Awakens.

Conservatives who defend alternate history as a method of enquiry tend to be selective in their interpretations not only of what is plausible but of what is likely. For example, Ferguson believes that had the UK remained neutral in 1914, Germany would have won a short war whose consequence would have been a more liberal German state and lasting peace in Europe (and incidentally an EU that the UK never joined). However, this requires the denial not only of actual history, but of any alternative outside the preferred one: "there's simply no way to imagine a Nazi regime emerging, or, indeed a Weimar Republic emerging, if the Kaiser Reich, the Imperial Reich, is victorious in the war that it begins in 1914". For this to be true, we must accept that Weimar and the Nazis had no causes outside of Germany's defeat, which is dangerously close to the Nazis' own interpretation.

If Star Wars has the form of a counterfactual history, what is the factual history to which it runs counter? One theory is that Lucas's films (including the Indiana Jones series, which he wrote for Steven Spielberg to direct) are an attempt to imagine an alternative American cinema in which the studio system of the 40s and 50s survived the impact of television unscathed. Instead of the "golden age" of the 70s auteurs that the upended industry produced, distinguished by films as diverse as The Godfather, The Exorcist and Taxi Driver, we would have had Star Wars episodes I to III, in strict chronological order and hard on the heels of American Graffiti (which influenced the iconic TV series, Happy Days). What this suggests is that Lucas's films remain stuck in the 1970s. Though he is no longer the driving creative force, it is hard to see the Star Wars series escaping that decade any time soon.

Friday, 29 January 2016

Perfect is the Enemy of Good

Though the subject of basic income has been introduced to mainstream media debate in recent years, the political dimensions have largely been ignored, with most discussions on the subject adopting a technocratic and utilitarian approach. Among other things, this means prominence is given to the dubious potential for shrinking the state (the illusion of "less bureaucracy"), the institutionalisation of the "precariat" (justifying further labour market deregulation), and overdue recognition of "homemakers" and carers (diverting the issue of inequality from class to gender and age). The debate is already ideological, and this will only intensify once the political dimensions come into focus.

To get a sense of how this may develop, it is worth considering the treatment of basic income by the "radical left", not because its proposals might gain traction in mainstream debate, but because its approach to framing the discussion might well be hijacked. A good example is Shannon Ikebe's "The wrong kind of UBI", published in Jacobin, in which he highlights that the core political issue of the basic income is that there are potentially "good" and "bad" versions. To this end he constructs a dichotomy between a "livable ... and a non-livable basic income". The former is emancipatory, in the sense of allowing workers to continually refuse shit jobs or to invest their labour in non-waged work. The latter is parsimonious but politically achievable, not least because it chimes with rightwing advocates of negative income tax. In any contest between maximising and satisficing, it is the latter that will win simply because that is the utilitarian premise of the dichotomy. The question is whether such a dichotomy exists in the case of basic income.

In adopting this approach, Ikebe is trying to undermine the notion that there is a "good enough" UBI, which is necessary because most centrist-friendly schemes (such as that proposed by the Greens last year) are parsimonious: "The fundamental dilemma of a basic income is that the more achievable version — in which basic needs go unmet without supplementary paid employment — leaves out what makes it potentially emancipatory in the first place. Indeed, many commentaries cite basic income experiments to argue it does not significantly reduce work incentives". Ikebe's point is that when basic income supporters claim there would be no substantial drop in work hours, usually citing the Canadian Mincome experiment of the 1970s as evidence, they are implicitly advocating a non-livable model. A drop in hours is actually what we should want and expect.


This is true, but it ignores two features of a basic income: time preference and wage bargaining. The first is the flexibility to temporarily stop working or to defer taking an unattractive job until a better one is available. The second is the availability of an "unconditional and inexhaustible strike fund". While these may not be emancipatory, they potentially increase labour's leverage with capital. The result may not be a reduction in aggregate hours worked, but a better distribution of those hours and wages across the population. Assuming the removal of any welfare trap, so marginal hours are not undervalued, one paradoxical result of a non-livable basic income may be a reduction in the percentage of the population who do no work at all. Once the penny drops, you can expect this to be a key selling point across the political spectrum.
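
The welfare-trap point can be made concrete with a rough calculation. The sketch below uses purely illustrative figures (a hypothetical £100 benefit and a 65% taper, not drawn from any actual scheme) to compare the marginal reward from a little paid work under a means-tested benefit with the reward under an unconditional payment that is never withdrawn.

```python
# Illustrative comparison of marginal work incentives under a means-tested
# benefit versus an unconditional basic income. All figures are hypothetical.

def net_income_means_tested(earnings, benefit=100.0, taper=0.65):
    """The benefit is withdrawn at `taper` pence for every pound earned."""
    residual_benefit = max(benefit - taper * earnings, 0.0)
    return earnings + residual_benefit

def net_income_basic(earnings, basic_income=100.0):
    """Unconditional payment: nothing is withdrawn as earnings rise."""
    return earnings + basic_income

for earnings in (0.0, 50.0, 100.0):
    mt = net_income_means_tested(earnings)
    bi = net_income_basic(earnings)
    print(f"earnings £{earnings:5.0f}  means-tested £{mt:6.1f}  basic income £{bi:6.1f}")

# Under the means-tested benefit, £50 of earnings raises net income by only
# £17.50 (an effective marginal deduction of 65%); under the unconditional
# payment the full £50 is kept, so marginal hours are no longer undervalued.
```

On these (invented) numbers, the first few hours of work are worth nearly three times as much to the recipient of an unconditional payment, which is the mechanism behind the suggestion that even a non-livable basic income could draw more people into some work.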

What this shows is that Ikebe's dichotomy is false: he is merely flipping the usual dynamic to argue for a maximising outcome rather than a satisficing one. In doing so he is essentially rejecting the social democratic or ameliorative features of basic income, which he associates with the non-livable version. For him, the livable version (the LBI) is attractive not because it is emancipatory but because it is revolutionary: "The dramatic strengthening of working-class power under a robust LBI would sooner or later lead to capital disinvestment and flight, since capital can only make profits through exploitation and won’t invest unless it can make a profit". In other words, an LBI would prompt a crisis of capital that would necessitate the socialisation of the means of production. In fact, this doesn't necessarily follow.

Strengthening working-class power can lead to capital flight, particularly if it is seen as the precursor to expropriation, but normally it leads to greater investment in an effort to increase capital composition at the expense of labour. That was the story of the 60s and 70s in most developed economies, e.g. the USA, Japan, Germany, France etc. In the UK, "decline" was the result of inadequate investment (relative to our peers) in the face of increasing labour costs. Rising wages were the product of a global trend, rather than local militancy, that was transmitted via trade - hence the regular balance of payments crises. This under-investment, which was heavily-influenced by a City that historically preferred foreign to domestic opportunities and speculation to patience, manifested itself in low productivity growth and declining profits. Despite the "retooling" of industry in the 1980s, the underlying trend continues.


Over and above the desire to increase profit through capital investment, the bidding-up of wages by an LBI would cause the relative price of capital to fall, stimulating further investment. We can already see this in action. The offshoring of labour in the 80s and 90s to increase profit rates gave way to capital investment in emerging markets in the 90s and 00s as developing nation labour costs rose. The current fears of a "hard landing" in China reflect a falling off in the rate of capital investment, not a reduction in consumer demand (consumption is growing vigorously). Similarly, the reshoring of some production in developed countries in recent years shows that distribution costs are becoming a more significant element in profit margins as global labour costs equalise.

Ikebe concludes: "Supporting any plan that seems politically attainable and bears the name 'basic income' isn’t a strategy for winning radical change. In the end, there is no feasible way to achieve a free society, or even one close to it, without challenging the power of private capital." This is undeniable. A basic income can be progressive, if it effects income redistribution and locks-in a future social dividend (i.e. progressive uprating of the income level), but it does not in itself change social relations because it does not address the ownership of capital. However, that doesn't mean that we should reject a basic income scheme that is less than maximal. The danger is that a simplistic dichotomy of the sort that Ikebe employs - the LBI versus the NLBI - will frame future discussion as utopian/generous versus achievable/parsimonious, and it should be obvious whose interests that will serve.

This framing is already apparent in the current US debate over Bernie Sanders' single-payer healthcare proposal (he wants to upgrade the US system to something closer to the Canadian model, if not the NHS). Centrist Democrats like Ezra Klein and Paul Krugman are criticising this as politically unfeasible, preferring the more modest (to the point of evanescence) proposals of Hillary Clinton, and even dismissing Sanders' supporters in a manner all too familiar to Corbynites. This is because Sanders, as an orthodox politician, has produced a costed plan rather than a campaign based on the single-payer principle and a commitment to work towards it. It is better to be criticised for a lack of detail if you have a persuasive objective than to have the principle drowned by charges of impracticality. Likewise, the political discussion of basic income needs to expand from a focus on the level of income, which I agree should be generous, to the principles of distributive justice and the social dividend, which are truly transformative.

Monday, 25 January 2016

Words Fail Me

Why does free-speech feature so prominently in modern debate, from Charlie Hebdo to "safe spaces" at universities? Though the global spread of democracy and the decline of formal censorship are imperfect measures, I suspect most people would consider there to be fewer restrictions on free expression today than 30 years ago, if only because of the proliferation of modern media, yet we are assured that free speech is under threat everywhere. The murders at the French magazine were not "an attack on free speech", which is a universal principle, but a highly-specific attack on perceived "enemies of Islam" by self-appointed guardians of the faith. Terrorists rarely attack principles: they attack people and property. Similarly, the campus debate over "no platform" concerns competing privileges, not great principles, hence the lack of interest by most people. Some of this prominence is down to the structural bias of traditional print and TV media, but that can't explain it all.

The persistence of the topic looks like a sign of liberal decadence in an era of growing state and commercial surveillance, so it should come as little surprise that the tropes of criticism have a musty air about them. For example, accusing the academic left of being anti-liberal and supportive of religious obscurantism, in the form of campus Islamic societies, is an obvious re-run of the liberal critique of academia in the 19th century in which Islam has substituted for the Church of England and Rome. This is reinforced by nostalgia for a "traditional liberalism" that supposedly never compromised its principles, unlike the weaselly progressive sort of today, and the characterisation of social media as an arena both risky (those horrid trolls) and at risk (the PC brigade). The solution appears to be traditional liberal propriety, which in practice means demanding that corporations act as social referees and individuals cultivate self-restraint.

Why do we associate colleges in particular with free speech? Traditional universities started out as Medieval madrasas: places of religious indoctrination. Their reinvention as a site of free expression is a product of the Enlightenment, but it is important to remember that historically this meant "free enquiry" more than "free speech", i.e. the extension of the curriculum to the new technical and social subjects required by an emerging industrial society. This instrumentalism meant that many subjects excluded topics and expressions antithetical to national and bourgeois interests. For example, the statue of Cecil Rhodes at Oxford is a legacy of an era when the teaching of history was euro-centric, geography was a catalogue of resources for imperial exploitation, and moral philosophy struggled to escape the conceit expressed by Rhodes himself: "Remember that you are an Englishman, and have consequently won first prize in the lottery of life".


The "right to say anything you please" on campus was a product of the social democratic era, and more specifically the expansion of further education that started in the 1960s. It coincided with the arrival of relativism and cultural theory, i.e. the right to think anything you please. In other words, free-speech on campus is relatively recent and inseparable from a questioning of canonical authority. This would prompt a conservative backlash in the 1980s, exemplified by Roger Scruton's Thinkers of the New Left and Allan Bloom's The Closing of the American Mind, that would define leftist thought not merely as wrong or misguided but as fraudulent and anti-intellectual, echoing from a rightist perspective Julien Benda's 1927 criticism of the nationalist infection of early 20th century French thought in La Trahison des Clercs. This repurposing of an anti-establishment trope (the free-thinker who sells out) was a feature of the anticommunist era, from George Orwell to Alain Finkielkraut.

Political correctness originates outside academia in the US conservative backlash to civil rights. Though the term had been employed ironically on the left in the 60s and 70s (harking back to the "correct party line" cliché of the 30s and 40s), essentially as a defence of the awkward squad ("I'm not being politically correct"), it was the right that would insist on the existence of an abstract "political correctness" from the 1960s onwards, using it as a way of attacking minorities and anti-establishment groups while ostentatiously claiming victimhood for itself. You couldn't accuse blacks of being "uppity", but you could accuse them of being overly-sensitive or paranoid, and thus "the problem" in a new form. The subsequent spread of unironic PC to the left reflects its success in providing a grammar for neoliberal identity politics that marginalised the older grammar of class.

The conservative academic backlash of the 80s popularised the phrase, but it also did two other things. First, it provided a link via the hate-object of cultural theory to the "cultural Marxism" of the Frankfurt School, suggesting that the communist threat lived on after the fall of the Soviet Union among the deluded left. Second, the impression of sides being taken by academics allowed the right to claim that PC was a "movement", both pervasive and covert, which revived old McCarthyite tropes (ironic, given that the "PC brigade" are also the "new McCarthyites"). By the early-90s the use of the phrase had left the academy and become widespread on the political right and in the media, in part filling the void left by the redundancy of anticommunism. In populist right rhetoric today, political correctness is a "scourge".

The rise of political correctness has paralleled the evolution of "workplace correctness" exemplified by the proliferation of corporate HR policies. While some will insist the latter is a social consequence of the former, the historical evidence points in the other direction. Businesses have long been insistent on managing worker behaviour, both inside and outside the workplace, and have always demanded that the education system prepares labour accordingly. The turn of HR policies towards diversity and sensitivity, like the focus on talent management, is the consequence of changes in the economy: the need to extend markets to previously marginalised groups, the demand for greater "choice" and personalisation in commodities, and the rising cost of higher skills. The disciplinary turn on campus, like the emergence of identity politics, is a wider social phenomenon, not an academic fashion.


Where the two forms of correctness intersect is the modern tech campus, the emblematic workplace of the new economy: "Such offices symbolise not just the future of work in the public mind, but also a new, utopian age with aspirations beyond the workplace. The dream is a place at once comfortable and entrepreneurial, where personal growth aligns with profit growth, and where work looks like play". But despite its utopian and Sci-Fi styling, the tech campus has obvious echoes of universities and company towns, and even of the scientific institutes of the Soviet Union, which points to its essential nostalgia. What is particularly retrograde is its concentration of labour, like an updated New Lanark, which reveals the desire to be isolated from the wider community (and which finds an analog in a reluctance to pay tax), but also reflects a shift in the power-balance from employees to employers.

In the traditional factory setting, the struggle was over time and thus the surplus value of labour. In the knowledge economy, staff are increasingly seen as a natural resource, like land: "The resources that managers and businesses are trying to extract from workers are in some ways very personal to the worker. Their imagination, their dynamism, their levels of energy – all these sorts of things". The Matrix, in which people are milked of their essence (a variant on our old friend the vampire trope), is the key metaphor. In a knowledge economy, it makes sense to try and capture a eureka moment of inspiration at work, where it can be promptly and securely IP-stamped and absorbed by the corporation, particularly in an age when new business ideas often require minimal capital to start up and the threat of your workers going solo is ever-present.

This explains the stunted growth of teleworking. While mobility and constant contact remain characteristics of the professional and executive classes, working at home (or precariously from a coffee-shop or shared office space) is increasingly a sign of economic marginality rather than a perk. This is not to say that a dispersed workforce isn't coming, but that it will probably do so via the medium of virtual reality. VR could make a company campus infinitely scalable, circumventing physical costs, accessing cheaper digital peons in developing nations, and hindering independent labour organisation. Gamification may be the harbinger of a more profound shift in what we mean by the workplace and a working life. The 24-hour office, and workers willing to commit hours previously lost to commuting and recreation to further labour, is already a reality.

The tech campus is therefore not just a particular architectural form, it also mimics the intense form of labour familiar from college, where education and socialisation are blurred into one. In the neoliberal era, the college has come to occupy a similar cultural role to the gym: one a means of improving the value of the body, the other the value of the mind. It is competitive, but increasingly the competition takes place within the individual rather than within a class or cohort. This creates a sense of atomised identity in which the boundary of a still-forming personality is vulnerable to "micro-aggressions". Being "safe" from offence on campus is like wearing earphones in the gym. This combination of the utilitarian and the sensitive encourages an attitude that is both transactional and solipsistic, so students expect their higher fees to deliver both better teaching and a comfortable environment. Likewise, in the safe space of the tech campus, superior workplace conditions demand superior labour commitment.


The claim of student unions is not "You can't say that" but "You can't say that here", which is the same claim of privilege that you'll hear at the Garrick Club. For all the insistence that they are protecting minority interests, student unions are demanding property rights. They are also treating words as commodities. This is a consequence of the 20th century linguistic turn in philosophy. No longer labels, words were now things in their own right, having their own histories and being subject to competing forces in the definition of their meanings. This relativism allowed neoliberalism to reconcile two conflicting beliefs. Orwell's critique of totalitarian language, and Hayek's elevation of "dispersed knowledge" above the wisdom of central planning, made us suspicious of political rhetoric and the claims of the state: words were dangerous. At the same time, rational preference required us to deny the ability of language to influence choice: words weren't dangerous. We recoiled from the horror of Newspeak while simultaneously dismissing the power of advertising.

The paradoxical consequence of the Orwellian tradition has not been a search for clarity and truth in political language but a knowing, postmodern separation of words and deeds. This was exemplified in the 1980s both by the fantastic nature of political rhetoric ("Evil empire" etc) and by the vogue for revisionist histories of the French Revolution that blamed rhetorical excess, rather than any material forces, for the eruption of violence. Where classical liberal history saw words and deeds as tightly-coupled, from republican proclamations to parliamentary debates, and liberal society placed a social value on sincere language ("my word is my bond"), neoliberal thinkers have treated language as contingent and distinct from action, revealing them to be influenced by post-structuralism as much as classical liberalism. The aim in this was not to find common ground with cultural theory but to marginalise language. When "choice" (i.e. action) is modelled, it is done using maths.

When you can say anything, you largely end up saying nothing, hence our modern "free speech battles" centre on insults and offence, while words intended to prompt action ("Workers of the World unite!") are neutralised as slogans on commodities. This doesn't mean that meaningful and influential statements are impossible, but that they are drowned out by the cacophony of the banal: the profusion of language as a commodity. When we regret our words ("I mis-spoke", "I was misinterpreted") we accuse them of being inadequate to the task, as if we bought the wrong items, made the wrong choice. Just as words have become commoditised, so the discourse of free-speech has become a commodity. In the West, this has given rise to a heritage sector centred on 19th century tropes, from secularism and religion locking horns in self-important debate to commercially-driven universities being promoted as arenas of challenging thought. It's a growth market.