Search

Tuesday 30 July 2013

What's Behind the Button?

This week's "Twitter storm" over misogynistic trolling and rape threats has been quickly hijacked by the pro-surveillance camp, not to mention self-publicists like Claire Perry (herself guilty of trolling recently). Despite the best efforts of feminists to focus on misogyny and question why the police are reluctant to apply existing laws on threatening behaviour, the bulk of the media debate has fixated on the technology ("we need a button") and the responsibility of social media providers for policing content. Meanwhile, the background mood music emphasises the lawlessness of the Internet and the need for control, despite the growing concerns over the erosion of privacy.

For the avoidance of doubt, I'm not about to defend the right of anyone to make threats, nor do I believe that free speech is unconditional (knowing it was technically criminal damage didn't stop me doing the odd bit of graffiti in my youth, mind). My point is that the media emphasis given to the Criado-Perez case, like the recent emphasis on online porn, serves the purpose of creating a broad consensus that the Internet now needs more "governance". The coincidence of a fuss that would energise the right (porn) and one that would energise the left (misogyny) looks almost too good to be true, but this was always likely to happen eventually in a process that has been running for a couple of decades.

As usual, much of the debate has been led by people who either don't use online services much or don't understand how the technology works. But leaving aside eejits like Perry, the more worrying development is that some people who should know better are misrepresenting the capabilities of the technology. John Carr, a "government adviser on Internet safety", was prominent on Newsnight last night insisting that naughty words could be picked up by software, which might be a worry if you manage the Twitter account of a rape crisis centre (a variant on the "Scunthorpe problem"). The contextual nature of communication, not to mention the human love of ambiguity, means that fully automated control isn't going to happen until we all become cyborgs, by which point it will be academic. He was on firmer ground when he noted that the issue with a report abuse button (perhaps labelled "Twat") was what lay behind it - i.e. how would action be taken and by whom?
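To make the Scunthorpe problem concrete, here is a minimal sketch of the sort of naive keyword filter Carr seems to have in mind (the word list and messages are invented for illustration, not anyone's actual filter):

    # Naive substring filtering: word list and messages invented for illustration.
    BLOCKED_WORDS = ["rape", "kill"]

    def naive_filter(message):
        """Return True if the message would be blocked."""
        text = message.lower()
        return any(word in text for word in BLOCKED_WORDS)

    messages = [
        "I'm going to kill you",                        # genuine threat: caught
        "Donations needed for the rape crisis centre",  # support service: blocked
        "The grape harvest was a triumph",              # substring match: blocked
        "ur dead m8 watch ur back",                     # real menace, no keyword: allowed
    ]

    for m in messages:
        print("BLOCKED" if naive_filter(m) else "allowed", "-", m)

The filter catches the explicit threat, but it also blocks the rape crisis centre and the grape harvest while waving through the genuinely menacing message that avoids the keywords. That, in miniature, is the contextual problem.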


Carr is an ex-employee of News Corp and Fox (MySpace) who has carved out a niche as a pro-control advocate. He also, coincidentally, pushes an online ID product (he's thinking way beyond national ID cards). Though he gets a media profile largely by crusading against age-inappropriate content, he is also a regular critic of Internet governance bodies outside of state control, such as ICANN, and routinely lines up to castigate the Internet companies in the business-government standoff. He is a government stooge, with an OBE. I particularly liked this snippet from his HuffPo bio: "He is one of the world’s leading authorities on children's and young people’s use of the internet and associated new technologies". You'd think there must be a few hundred million teenagers ahead of him in that particular queue, but they don't count as "authorities" (though many would know to capitalise "Internet").

Twitter, like other Internet companies, has a case in claiming that it just provides the medium and cannot be held responsible for the content, just as BT isn't held responsible if someone makes a threatening phone call. However, this is also disingenuous as BT have always provided the facility for phone-tapping and tracing by the powers that be. The real significance of Twitter is that it didn't feature in the list of companies cooperating with the NSA Prism programme (and by extension GCHQ), which might explain its unsympathetic treatment today. Some optimistically attribute this to the company's ethics, but the more likely explanation is a combination of relative immaturity (i.e. they may not have been approached yet), flaky technology (i.e. they may not be able to meet the NSA's needs), and the easy anonymity of the service (i.e. excess noise over signal).

The wider tussle between the state (government and police) and the online companies concerns how the cost of surveillance will be apportioned. There is no fundamental disagreement over the need for control, which would cement the current companies' position as "preferred suppliers" (licensing and regulation are a quid pro quo). We should also remember that Twitter, Google, Facebook et al care more about profit than free speech. If they didn't, then they would never have gone through (or be planning) an IPO and thus signed up to the "primacy of shareholder value".

The control infrastructure requires an archive of all activity and a real-world identity, but it also requires human inspection. At present, government is having to pick up most of the tab on this, either through its own agencies or via outsourcers like Booz Allen Hamilton, the company Edward Snowden worked for. Though the neoliberal state and the Internet companies have congruent interests, they squabble over the division of costs and profits. The demand that they take action over porn and hate speech should be seen in the same light as their avoidance of tax.

Sunday 28 July 2013

The Price is Right

What links the iPad and the future of democracy? The answer is commodification, which is affecting both education and political support. An example of the former is the news that schools now expect pupils to buy tablet computers in the same way as they do compasses and set squares. The headteacher of Hove Park school says this is necessary so that pupils can "engage with future employers as fully independent learners confident in their use of modern technologies". Such high-tech kit schemes often collapse due to loss, theft and the inability of technology to mix with chips and fizzy drinks. A more profound reason for failure is that the delay between use and graduation means that the skills gained are largely outmoded unless you commit to constant upgrades - "modern" has a quicker turnover for technology than French grammar or Shakespeare. Were the kids given Windows Vista laptops in 2007/8 the lucky generation?

It should also be borne in mind that tablets are sold on their ease-of-use, i.e. they can be mastered by a functioning idiot within a few hours, so it's hard to imagine that having "experienced Nexus 7 user" on your CV is going to make all the difference for that job application. Tablets can't even be considered as educational "tools" as such, as you can't easily access the OS or write and run programs in the way that you can with a PC. This focus on hardware rather than software (Wikipedia is a genuine tool and cut+paste is a valuable technique) is typical of the ideological stranglehold of the education technology industry, which has long pushed capital equipment and restrictive licences (such as MS-Office). Individual pupil tablets are no more necessary to learning than pencil gonks.

If you think the headteacher was reading from the neoliberal hymn-sheet, Brighton and Hove city council take it to a whole new level: "Hove Park school has been able to negotiate discounts with suppliers. We welcome the fact that the business plan ensures that no child is excluded from the project through inability to pay for the equipment." The school aren't providing the tablets; they are expecting the parents to pay for them outright or rent them from the school. The negotiated discounts will be marginal (the quoted price of £200 is no bargain), as the suppliers want a captive market and guaranteed profits, not the opportunity to make a donation.


The use of the phrase "business plan" tells you how schools are increasingly seen as sites for commercial services. The suspicion is that the tablets (as e-readers) will eventually be used in place of course books, with the bulk of the delivery cost thereby transferred to parents. Once you're paying for uniforms, course e-books, trips and all the other extras that have crept in over the years, it will be difficult to resist paying a fee for a qualified teacher or rent for your child's desk, even if the classroom is only being used as a means of control.

The commodification of democracy is a central tenet of neoliberalism. Oona King trots out the party line in a New Statesman article praising Ed Miliband's "trade union reforms": Miliband is being radical; we must move away from "stitch-ups"; "machine politics are the death throes of the old order"; One Nation = participative democracy; "unions themselves are not as working class as they used to be". It's not obvious what her point is in that last observation, beyond self-justification for being a member of the Labour party's middle-class nomenklatura who secured a union sinecure before election as an MP. Unions are sectional and unrepresentative of the "nation" because they are meant to be. Their job is to represent their members' interests, not those of "Worcester woman" or some other mythical embodiment of Middle England.

This is an example of the modern tendency to eschew representative politics - i.e. the idea that parties or factions should represent sections of society - and replace it with the politics of the homogeneous market. We are all assumed to be equal consumers with equal access, despite the real inequalities in resources. This appears both pro-democratic and empowering - everyone has rights and we can all exercise choices - but it leaves us atomised and effectively powerless because we lack any collective voice. King's own website is full of the tepid terminology that distinguishes this pro-market attitude, such as "Managing diversity and building social cohesion are key challenges of our time" and "there can be no real democracy without effective engagement". I was particularly amused by "Modern democracy was founded on the principle of no taxation without representation". It wasn't, not even in the USA (see the Civil War, civil rights etc). In the UK, where we continue to shower money on a monarchy and indulge the anti-democratic House of Lords (in which sits The Baroness King of Bow), modern democracy remains more theory than practice.

The problem is that such calls for "participative democracy" are unworldly. If you allow people to genuinely participate (and I'd be the first to agree that the Labour party has historically done its best to discourage this), then you should not be surprised when they coalesce into blocs with common interests. What the neoliberal hegemony of the Labour party seeks to do is ostracise any oppositional blocs as anti-democratic. The historical irony is that the "hard left" insurgency of the 1980s was beaten off with the votes of soft-left and right-wing union blocs. Since then, Labour has sought to concentrate its funding on an ever-smaller group of rich individual and corporate donors. The very antithesis of "participative democracy".


Following King, the normally sensible John Naughton suggests that Labour should emulate the US Democrats' successful model of crowd-funding via the Internet. A cynic would observe that if securing lots of small donations from ordinary folk led to changes in policy, then Guantanamo would have closed by now. In reality, while online donations increased hugely in 2012 during the Presidential election, the bedrock of financial support remained large donors (individuals and committees) with a distinct pro-business bias. The Democrats' shift towards large-volume small donations online is partly the consequence of the technology, but largely the result of the decline of US unions. Small donations make up some of the shortfall, but they also obscure the degree to which the party is now dependent on corporate donors.

The attraction for the Labour party of a funding model based on a mix of individual small donors and a few large donors is that it has the appearance of democracy (lots of donors and a low average contribution) without ceding significant "voice" to blocs outside the neoliberal core. Len McCluskey is presumably hoping that Unite and the other unions can maintain their influence in the party through discretionary donations from the surplus of their political levy funds. As such, they are adopting a neoliberal tactic - viewing power as a commodity that can be bought at a price.

Friday 26 July 2013

She's Lost Control

There has been much amusement this week arising from the government's initiative on porn filtering. David Cameron's proposed "hackathon for child safety" was the sort of idiocy that even the scriptwriters of The Thick of It would have rejected as preposterous. As if determined to draw the hounds of derision off her leader, Claire Perry topped this by proving that she has absolutely no understanding of how the Internet works after her own website was hacked, leading to her clumsily libelling a blogger who reported it. The further revelation that TalkTalk's site blocker is the work of Huawei, the technology company with deep links to the Chinese army, was almost anti-climactic.

Perry has an extensive track record of assertive stupidity, despite only being an MP for 3 years. Her background is typical of the modern managerial political class: Oxford, a Harvard MBA, stints at Bank of America, McKinsey and Credit Suisse. She is a corporate animal without empathy or talent, possessing a planet-sized sense of entitlement. Before selection as a candidate, she was a banker-friendly adviser to George Osborne. Since 2010, she has carved out a niche as an attack-dog for the privatisation of the NHS (the McKinsey link), and has relentlessly used porn as a means of self-promotion, becoming Cameron's self-styled "Adviser on the Commercialisation and Sexualisation of Childhood" as well as PPS to Philip Hammond, the Defence minister. The salient fact is that she has risen so far and so fast despite being gaffe-prone and having a toxic personality. Pointing out that she is technologically illiterate, and therefore unqualified to drive policy on Internet censorship, is as redundant as pointing out that her expertise in the area of childhood does not extend beyond having kids herself.

If you tend to believe what you read, see or hear via mainstream media, you might think any one or all of the following are true: half of the Internet is porn; porn is the biggest driver of traffic to Google; and the most popular search term is porn. In fact, none of them are true. You might also think that software can filter out porn - that the debate concerns simply whether we should do this. Again, this isn't the case. It's easy enough to block specific URLs or IP addresses, but it's also easy enough to bypass such blocks. This rerouting capability is built into the foundations of the Internet. Interfering with search indexes, so naughty words produce zero results, also fails due to false positives (e.g. Pussy Riot) and evolving euphemisms and acronyms (e.g. MILF). Even analysing images for suspicious amounts of flesh tends to fail as you cannot accurately gauge context - is it porn, art or just a lingerie advert?
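To illustrate how shallow blocking is, here is a toy blocklist check (the hostnames and addresses are made up for the example). The block applies to an address, never to content, so moving the same material to a mirror or proxy defeats it instantly:

    # A toy URL/IP blocklist: it matches addresses, not content.
    BLOCKLIST = {"badsite.example", "203.0.113.7"}

    def is_blocked(host):
        return host in BLOCKLIST

    print(is_blocked("badsite.example"))  # True: the listed host is stopped

    # The identical content rehosted elsewhere sails straight through.
    for mirror in ("badsite-mirror.example", "198.51.100.9"):
        print(is_blocked(mirror))         # False, False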

The proponents of porn filters often insist that the software will be used only for extreme images that are (in effect) records of a crime scene; however, this caveat does not make defining the boundary any easier. In practice, a judgement must be made as to whether something is or is not porn of an illegal kind. This means after-the-fact assessment, which in turn means that the infrastructure of porn control depends on two things: the recording of all online activity (so you can produce a smoking gun on demand) and the association of that activity with a real-world identity (the digital fingerprint). Of course, that is the infrastructure of control full-stop, and you can be sure the state will not limit itself to just pursuing illegal porn (or just post-crime investigation - the promise of Big Data is an upgrade from profiling to precrime). The high media profile accorded to filters is just a diversion.

In this light, Claire Perry's ignorance is no hindrance. Her job is to provide media-friendly outrage as cover for the evolving relationship between the government, the ISPs, social media and search providers. This relationship is all about power. The campaign for porn filters has little to do with protecting fragile young minds, but a lot to do with setting a template for controlling the "ungoverned" Internet. The ISPs want a commercial oligopoly. The government wants a monopoly on security. These interests are congruent. The one thing you can be sure of is that "parental control" will deliver little power to the people.

Monday 22 July 2013

Time for a Blue Pill

One perhaps unintended consequence of the NSA/Prism/GCHQ revelations is the grudging acceptance that the Internet is no longer "free, as in speech", even if parts of it remain "free, as in beer". This has emboldened governments. It is only a few short weeks from Barack Obama's comment that "You can't have 100% security, and also then have 100% privacy and zero inconvenience" to David Cameron opining "What has changed … is that for too long we have taken the view that you can't do much about the Internet, that it is ungoverned", as he announces plans "to 'drain the market' of child sexual abuse images online". It obviously does not require much imagination to see how the Internet might also be "governed" for other purposes.

This initiative is presented as a tussle between the state and the service providers, even though we should by now have realised that theirs is a symbiotic relationship. What is noticeable is the appeal to the interests of that collective entity, society. In demanding Google's cooperation, Cameron insists: "If there are technical obstacles to acting on this, don't just stand by and say nothing can be done; use your great brains to help overcome them … you are part of our society and must play a responsible role in it". Given the tendency of Google to float free of society for tax purposes, with government connivance, the suggestion that they are part of it is pretty rich. The schoolboy flattery of "great brains" is just plain annoying.

This marks a shift from the earlier rhetoric about "The Big Society", with the quickly-dashed promise of autonomy and variety, towards a model of the state as a superior relationship manager, mediating between society and the market. This "relational state" is a pure neoliberal construct, emphasising the cooperation of government and business. As Will Davies says, "Neoliberalism was launched as an attack on socialism, as a state-centric project; it is now being subtly reinvented, in ways that take account of the social nature of the individual ... The ‘social’ is brought back in as a way of providing support, such that individuals can continue to live the self-reliant, risk-aware, healthy lifestyles that neoliberalism requires of them."

The continuity between the "high neoliberalism" of the millennium and this "neocommunitarian" style of Big Data and "nudging" is the assumption of a collective intelligence - a determinable consensus about what matters and what works. Where this was once thought to reside in the abstract market, i.e. the aggregate of utility-maximising individuals' decisions, it is now sought in the network of social relationships and personal preferences more concretely located online.

But there is a danger that we misinterpret the nature and value of online relationships: "It is in correlations and patterns where value lies in a 21st century Big Data society, and not in the properties or preference of individuals, as was the case in a 20th century statistical and market society. And it is in the identification of hitherto invisible relationships that networked digital media holds out promise for security agencies" [Davies ibid]. But these very relationships (who we like, follow or communicate with) are expressions of preference, and often self-consciously aspirational. They do not necessarily represent who we are so much as who we would like to be (or who we want others to think we are). Have we taken the blue pill, or have the security agencies?

 
Those who fear that the hounding of Edward Snowden is symptomatic of a wider corruption of liberty do themselves no favours by deploying the Stasi trope - i.e. the myth of state omniscience and a concern about the opinions of every individual. Apart from implying that the security services may actually know what they're doing, this perpetuates the confusion between content and meta-data, between privacy and association. The emerging security apparatus does not care what you think, but they do care what you do, and your associations are a good indicator of your possible intentions (this is the lesson of marketing at the heart of modern surveillance).
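A toy example makes the point about meta-data. Given nothing but who-contacted-whom records (the names here are invented), the shape of a network and its hubs fall out immediately, without reading a single message:

    # Association analysis from bare connection records; no content required.
    from collections import Counter

    metadata = [  # (caller, callee) pairs only - content never seen
        ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
        ("dave", "carol"), ("eve", "frank"),
    ]

    degree = Counter()
    for caller, callee in metadata:
        degree[caller] += 1
        degree[callee] += 1

    # "carol" emerges as the hub of the cluster.
    print(degree.most_common(3))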

The US commentator Frank Rich pinpointed the start of the "devaluation of privacy" with the growth of reality TV and over-sharing celebrity around the millennium. This has conditioned us to accept the truth promulgated by Silicon Valley since the late 90s that "You have zero privacy anyway", and explains the underwhelming nature of the NSA/Prism revelations. Rich has been criticised for his "techno-determinist rhetoric of inevitability", which holds that this is the price we pay for free online services. The fear is that the acceptance of the social media quid pro quo has lulled us into a belief that privacy is conditional, in the same way that other rights have been eroded through the compromises demanded in the permanent wars on drugs and terror.

The relativism of privacy has led us to see surveillance as an intrinsic property of social media. "Most social media users are less concerned with governments or corporations watching their online activities than key members of their extended social network, such as bosses or parents. As a result, people self-monitor their online actions to maintain a desired balance between publicity and seclusion, while readily consuming the profiles and status updates of others". The Stasi has been replaced by cyber-stalking.

Popular reporting on social media usually focuses on "bad behaviour": the goofs, the over-sharing, the flames, the stalking, bullying and trolling. We pay less attention to "good behaviour", that is the construction of a social identity online and the extent to which this incorporates self-repression. We pay even less attention to the degree to which this self-repression is guided and encouraged by the service itself. Social media are normative and performative (consider the tyranny of the "like"). There is a general perception that you must master the netiquette or risk either social exclusion or outright derision, so it is hardly surprising that anxiety is heightened.

Alice E. Marwick has identified three "status-seeking techniques enabled by social media: micro-celebrity, self-branding, and life-streaming ... These status-seeking techniques constitute technologies of subjectivity which encourage people to apply free-market principles to the organization of social life. This means constructing a persona conditioned by the values of a network dominated by commercial interest". But these are also the techniques of the successful, the elite of the online world, beta programme participants for Google Glass. The great unstated truth of social media is its dependency on class, and its ability to create ever finer sub-divisions to mask this. The vast majority of us remain followers, without status, offering up our private lives to the void like propitiatory offerings to an unanswering deity.

Thursday 18 July 2013

A Face in the Crowd

Anna Chen (aka Madame Miaow) has criticised Ken Loach for the absence of non-white faces in his film on the foundation of the welfare state, The Spirit of '45. I've no dispute with Anna's central claim, that "people of colour like me have been painted out of working-class history", but I was struck by her characterisation of Loach's defence of his choice of material ("That's the record of the time") as an "airy dismissal".

It should hardly need saying that an edited film is necessarily selective, but there are two levels of selection at work when you employ archive footage. There is the selection bias of the film-maker, such as Loach, who chooses images that best support and convey his argument; but there is also the bias of the original film-maker, which, in this context, might involve ignoring non-whites or focusing on them only as exotica. There is also a third level, the structural bias of film-making itself. A good example of this can be found in the Mitchell and Kenyon archive of urban scenes from the early 1900s. These films were shot to capture as many people as possible, often exiting factory gates, in order to drum up business for a showing at a subsequent fair. This approach, and the unselfconscious nature of the crowds, produced images closer to real life than would be the case with later newsreels, which is why the archive is now of such value to historians.


One striking example of this is a 45-second reel of miners leaving Pendlebury Colliery, on the outskirts of Manchester, around 1900. At 35 seconds in, a young black man can be seen joshing with a young white colleague. Neither they, nor another young white man who appears slightly ahead of them and seems to be part of the same group, look like miners knocking off from a shift (their faces are clean and they wear white shirts). They look like factory workers who've strolled into shot as part of a dare. What's noticeable is that no one seems to consider the black guy as unusual. He was presumably a familiar face in the area. This reinforces Anna's point that non-white workers were far more prevalent than most films (and many histories) imply, though it thereby also indicates the degree of selectivity at work in most films and thus the problems faced by later users of the material like Loach. Mitchell and Kenyon weren't trying to record a slice of Mancunian life, and so weren't tempted to film only what matched their prejudices. They were simply letting the camera run to capture as many faces as possible.

To what degree should a film-maker adjust for prior selection bias, "to suit our present sensitivities" as Loach put it, when using archive material? In his selection of footage, Loach may well be guilty of subconscious bias, seeking out images from the 1940s when we were "at our best", and this may result in too many happy, shiny, white Labour voters. His use of rare colour stock to make the 1940s feel closer to the present, along with his use of montage (the abrupt jump from 1945 to 1979), indicates that this is a polemic, not an accurate record of the times. The Spirit of '45 is not a forensic depiction of the working class, but an attempt to poetically recapture the collectivist and generous spirit of the age, which, though popular, was not shared by all (David Kynaston's Austerity Britain 1945-51 is excellent on this). Bonnie Greer praised the film on Late Review back in March precisely for trying to recapture the feeling of the times. It might be more accurate to say that Loach's purpose is to make us identify with those who were true believers then. In that sense, the film is exemplary as well as didactic.

In recalling the spirit of 1945, Loach is attempting to provide a historical basis for the modern defence of the welfare state. The target is not merely the stealth-privatisation of the NHS, but the ideological war of attrition against collective action since the 1970s. But the slow decline of collective action is also the product of other socio-economic forces, part of which has been the increase in cultural diversity and the growth of single-issue (or commodity) politics. Anna's critique is thus of its time in the same way as Loach's film seeks to be of an earlier time.

Monday 15 July 2013

Scoubidou and the Protestant Work Ethic

The debate around the presumed inevitability of a basic income is beginning to ramp up. Expect a Horizon or Newsnight special sometime soon. The fundamental premise is that late capitalism cannot provide full employment in an advanced economy, largely because technology substitutes capital for labour (automation) and simultaneously leads to commodity deflation ("the coming abundance"). This pincer movement makes more and more people surplus to requirements as labour, but maintains their usefulness as consumers, assuming basic commodities remain within their reach.

We have now reached a point in history where capital, the inventory of surplus value, is so large it struggles to find opportunities for further productive investment. Simultaneously, the number of people needed to keep growing that inventory is declining due to continuing productivity gains. Some argue that the growth in value over the last 30 years is mainly due to a massive expansion in productive (i.e. non-subsistence) labour through the process of globalisation and trade liberalisation, but I'm of the school that thinks the main driver has been technology. Double-digit Chinese growth rates were less the product of farmers becoming industrial workers and more the result of the improved productivity of Chinese industry. The growing surplus of labour we see in advanced economies will eventually appear in the developing economies too.


In such a world, where an increasing minority are denied the opportunity of a job, and thus access to wealth, we put democracy in jeopardy (the Chinese may be playing a long game by constraining democracy now). If we are to preserve a society based on the ideals of merit and equality of opportunity, then we must either more equitably share work or we must pay people not to work. In reality, a subsidy is a better solution for the rich, i.e. those who own capital now, than ceding their relative monopoly over the shrinking pool of future jobs. Despite the hurdles of intern programmes and professional closed shops, sharing work would mean sharing access to wealth.

The main subsidy options being considered are the job guarantee and the basic income. The former means providing work when the market cannot, not unlike the old idea of outdoor relief. Pro-social work (digging ditches, tidying-up parks) is provided by the state until such time as the private sector can deliver full employment again. The latter means providing everyone (i.e. all citizens of the state) with an unconditional income, independent of employment. For people who work, tax would be applied only on their additional income. A job guarantee attempts to address a surplus of labour. A citizen's basic income attempts to address a surplus of wealth.

The job guarantee is popular among Modern Monetary Theory (MMT) and post-Keynesian economists, who argue that governments have the means to achieve full employment without high inflation. However, the popularity of the job guarantee concept among social democrats and neoliberals is more to do with traditional notions of the disciplining of labour: not leaving "hands idle" and government as the employer of last resort. The basic income has historically been more popular among the libertarian left, as it assumes that individuals should be allowed to decide on their level of labour contribution. Despite evidence of its practicality and hidden benefits (e.g. the spur to innovation and entrepreneurship), the basic income tends to be dismissed as hippy madness that would produce a nation of couch potatoes, rather than fit workers trained for trench warfare.


A fundamental difference between the two is that the job guarantee is paid at a sufficiently low wage to encourage migration to private-sector jobs once the economy improves. In other words, slightly less than the minimum wage. It does not necessarily require coercion, in the sense of obliging everyone to work, but there is an obvious tendency towards the labour battalion given the poverty wages and the manual bias of much of the work. The chief modern argument against the job guarantee is that it is based on traditional assumptions about cyclicality: the periodic move from full employment to unemployment and back again. It does not address secular trends in respect of automation and commodity deflation, and is thus guilty of "fighting the last war", being more appropriate as a response to the temporary depressions of the 20th century than the structural unemployment of the 21st.

In contrast, the basic income provides a mechanism to transition to a world where most labour is surplus to requirements, either in terms of specific individuals or a gradual reduction in the working week. A basic income also has the potential to be redistributive. Whereas a job guarantee wage will always gravitate to the lowest level, a basic income can be gradually increased to reflect two "social dividends": the gradual reduction in average working time, and the growth of GDP. In other words, the growth in wealth due to productivity could be more equitably distributed, rather than being disproportionately captured by owners of capital. A basic income thus creates a positive tension with the distribution of work, and thus wealth, whereas a job guarantee is concerned with temporary alleviation only and is deliberately parsimonious.
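To see that divergence in crude numbers (every figure here is hypothetical, chosen purely to illustrate the mechanism), compare a job guarantee wage pegged just below a static minimum with a basic income indexed to the two dividends:

    # Back-of-envelope sketch: all figures are hypothetical.
    JG_WAGE = 11_000        # annual job guarantee wage, just under the minimum
    basic_income = 8_000    # starting annual basic income
    gdp_growth = 0.02       # assumed 2% annual real GDP growth, passed on in full
    hours_dividend = 0.005  # assumed 0.5% a year from falling average working time

    for year in range(0, 21, 5):
        bi = basic_income * (1 + gdp_growth + hours_dividend) ** year
        print(f"year {year:2d}: job guarantee {JG_WAGE:,} / basic income {bi:,.0f}")

On these invented numbers the basic income starts well below the job guarantee wage but overtakes it within a decade and a half - the point being that one track is flat by design while the other compounds.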

The current obsession with "skivers" may prove to be the last hurrah before the introduction of some form of income subsidy (the "universal benefit" is obviously suggestive). The suspicion must be that workfare will gradually evolve into a job guarantee - i.e. the left will pitch it as "the right to work" while the right will revert to the Biblical "he who does not work, neither shall he eat", and both will bang on about the need to cultivate a "work ethic". The problem is that such moralistic coercion will become increasingly pointless as jobs disappear and more and more "strivers" are sucked into its scope. Indeed, there is a strong argument that we'd do better to encourage a "workshyness ethic". The likelihood is that the job guarantee will mutate in the medium term into permanent boondoggles [*] and mere gestures, not unlike the ritual of signing-on.


That said, the job guarantee (and the work ethic) makes perfect sense if your goal is to defend current wealth inequalities, though it can only be a delaying tactic. Eventually, it will evolve into voluntary work and an unconditional basic income. The real prize will be to ensure that the latter is kept sufficiently low to ensure that productivity gains disproportionately accrue to the owners of capital. Over the coming years, we can expect neoliberal ideology to frame the basic income as "Utopian" and the job guarantee as "pragmatic".

[* - In the US, the term "boondoggle" is used both for pointless projects (it originated in the use of handcraft courses as temporary job creation during the New Deal) and for Scoubidou, the pastime of plaiting and knotting key-rings and other knick-knacks using leather strips or colourful plastic tubes.]

Wednesday 10 July 2013

The Return of the Bubble Car

The driverless car meme has started to evolve in some interesting ways. After the simple joys of the robot chauffeur, the predictions of the future are now starting to focus on the real estate potential and the remodelling of the urban landscape. Much of the speculation is nonsense, but the claims made are indirectly revealing. A good example is an article in the New York Times this week entitled Disruptions: How Driverless Cars Could Reshape Cities.

Amusing claim number one: "Inner-city parking lots could become parks". The theory behind this is that autonomous vehicles, probably shared on-demand ones (i.e. Zipcar with added robot chauffeur), would deliver us to the doors of our city-centre offices and then whizz off to some remote corral, returning promptly at 5 to pick us up. It's a bit like the London Bike Scheme, but without the need to find a docking station that hasn't already been emptied. The implicit extra travel burden is easily dealt with: "Though this would increase miles driven, and thus conceivably increase gas used and congestion, driverless cars will be so efficient there may not be an increase in congestion or gas consumption". I wouldn't bother asking to see the data that backs that claim up.

Bonkers claim number two: "[The] city of the future could have narrower streets because parking spots would no longer be necessary". Where roads were laid out before mass car-ownership, parking often makes the street too narrow. Remove the parking and you would allow cars to move freely in both directions. That's a clear benefit, but one that would be promptly lost by narrowing the street. As cars stop to make deliveries, or allow passengers to alight, the flow of traffic would be halted, much as it is today. There is an argument to be made that high land values in the centre of cities would encourage the conversion of street parking - i.e. offices and shops could expand a bit - but this assumes there are lots of parking bays in situ. The higher the value of the land today, the less likely this is to be true (e.g. New Bond Street or Fifth Avenue). The big development opportunity in city centres would be to demolish and replace multi-storey car parks. In other words, they won't become parks, they'll become office blocks or flats.

Dubious claim number three: "If parking on city streets is reduced and other vehicles on roadways become smaller, homes and offices will take up that space. Today’s big-box stores and shopping malls require immense areas for parking, but without those needs, they could move further into cities". Suburban malls and retail parks work on the principle that you, the customer, will make the deliveries. With autonomous vehicles, you could schlep over to the mall and back in an on-demand car, but you could just as easily order online and have the car deliver the goods to you. The suggestion that driverless cars might arrest the decline of the high street is just an attempt to dream up pro-social benefits for what is an anti-social development. The really transformative potential of driverless cars is to turn suburban malls into "dark stores".

Worrying claim number four: "Traffic lights could be less common because hidden sensors in cars and streets coordinate traffic. And, yes, parking tickets could become a rarity since cars would be smart enough to know where they are not supposed to be". Given the pioneering work in this field by Google, and bearing in mind the recent Prism revelations, it should be obvious that driverless cars are an invasion of privacy on wheels. Not only will the car know where it is "not supposed to be", it will know precisely where you have been and where you are entitled to go. Strangely, none of the cheerleaders for driverless cars have mentioned the impact on crime, e.g. the obsolescence of the getaway driver and the joyrider. Autonomous cars mean a reduction in personal autonomy and an increase in system control. While there are genuine benefits to this, such as efficiency of travel and reduced accidents, the potential for the state to overstep the mark should be obvious.

If that sounds a touch paranoid, consider Tyler Cowen's views (common among right-wing economists) on the need for parking spaces to be charged at a realistic (i.e. expensive) market rate. Though his argument is couched in anti-subsidy and pro-environmental terms, what he's essentially proposing is that city-centre parking be the preserve of the rich. When Westminster Council tried to introduce charges for hitherto free evening and weekend parking, this was widely interpreted as an attempt to raise revenue to offset central government cuts. In fact, the gradual restriction of parking and the increase in charges has been a long-term trend under governments of both left and right, variously sold as pro-public transport and pro-environment. Parking is increasingly framed as a privilege, not as a right (a well-worn false dichotomy of neoliberalism).

Revealing claim number five: "driverless cars will allow people to live farther from their offices and that the car could become an extension of home. I could sleep in my driverless car, or have an exercise bike in the back of the car to work out on the way to work". This gets to the nub of the matter. The "car" that is envisaged here is clearly closer to a Winnebago, with bathroom and diner, despite the previous claims that pool cars will typically be smaller (as most trips involve only one or two passengers). As an "extension of home", it should be seen as just another property - a mobile pied-a-terre. There is no suggestion that this sort of convenience will be extended to those who don't work in city-centre offices, or who do manual jobs that don't necessitate recourse to an exercise bike.

As I've previously noted, the economics of driverless cars require that they be mandated by law if the major benefits are to be realised. This would be difficult to enforce nationally at a stroke, so I suspect the most likely scenario is that they first become mandatory within a city's limits. This will create a de facto border zone, opening up possibilities for the control of movement into and out of the heart of the metropolis, as well as commercial opportunities for transhipments and tolls. I imagine Boris Johnson will see some upside to this. Congestion charging and the bike scheme have familiarised us with the concept of transport zoning in London at precisely the same time that the centre of the city has morphed into an enclave for the wealthy. The domain of the robot chauffeur is already taking shape between Fulham and Shoreditch.

Many city-dwellers will find they can no longer afford their own car and will instead be obliged to use pool cars. This will be sold as a positive lifestyle choice, with the environmental benefits to the fore. With the potential for car platoons during commutes, it's possible that autonomous vehicles might in time substitute for trams and light-rail trains, and perhaps even the underground (without the need for large gaps between trains, the same passenger numbers could be transported in greater comfort and privacy). Society will be further atomised - the bubble of ear-buds and book replaced by bubble cars.


Friday 5 July 2013

Contingent Democracy

Despite their proximity, I've yet to see anyone draw parallels between the coup d'etat in Egypt and the shenanigans in the Falkirk Constituency Labour Party, so I will. Once upon a time, the Forth and Clyde Canal, which runs through Falkirk, allowed ships to transfer between the Firth of Forth and the Firth of Clyde, thus avoiding the need to sail around the North of Scotland. It's not quite on a par with the Suez Canal, but no mean engineering feat either. But that's enough about canals. The more interesting contrast relates to the practice of democracy.

The embarrassed silence of Western governments over the military intervention in Cairo (beyond peace-n-love anodynes) has been widely interpreted as sympathy for "our side", the secular liberal middle class. The implication is that the West cannot help itself in displaying this prejudice - we're only human after all - and that liberals everywhere should simply hope for the best as the alternative (the mad mullahs) remains too objectionable.

Apart from the obvious hypocrisy, this looks strategically foolish as the Muslim Brotherhood clearly enjoy majority support outside of the metropolis, much as the AKP does in Turkey outside of Istanbul. What happens if the electorate fails to deliver the "right" result at the promised (but unscheduled) election? To quote Bertolt Brecht: "would it not be simpler if the government simply dissolved the people and elected another?"

In Cairo, the protestors' charges against Morsi centred on broken promises, economic incompetence and favouritism. These could be levelled at many democratically-elected governments, such as the UK coalition (student fees, stagnation, creeping privatisation etc). As the idea that representative democracies should have the power of recall remains alien in the UK, the dominant narrative in the British media that explains (and justifies) the current rebellion is that Morsi pursued a majoritarian approach that verged on a "constitutional coup". In plain terms, democracy is more than one person, one vote.

This hinterland of democracy is described using flexible terms such as pluralism, tolerance and inclusivity. Of course, the failure of pluralism was as much the fault of the opposition as the Muslim Brotherhood, which has in turn fed the casually racist debate in some parts of the media as to whether Arabs are capable of "doing democracy" (imagine if the Daily Telegraph wrote an editorial suggesting the same crippling deficiency in the Scots). The demands for pluralism now are partly defensive appeals for sectarian tolerance (for the Copts and the small Shia community) but largely a demand that politics be conducted within a narrow spectrum acceptable to liberals. This is consistent with the more subtle coup of neoliberalism in Western democracies, where large areas of political debate have been ruled "beyond dispute" (the free market) or "inconceivable" (workplace democracy).

The army's promise that it will appoint an interim technocratic government pending elections is clearly intended to reassure the West that normal business will shortly be resumed and due proprieties observed. The deployment of the adjective "technocratic" deliberately echoes the Monti interregnum in Italy. The Labour Party's decision to refer l'affaire Falkirk to the Scottish police has much the same purpose: a show of propriety and an appeal to an independent arbiter (this would have been more problematic if the constituency was in London and under the jurisdiction of the Met).

The fuss over Falkirk has predictably produced mutterings about Militant entryism in the 80s and the ridiculous claim that the soft left Unite union is conspiring to dominate the Labour party. In the UK, one-person-one-vote has long been the preferred stick with which to beat the unions. Bloc representation, which is a perfectly respectable democratic practice (after all, MPs represent blocs of voters with widely differing views), is routinely denigrated as anti-democratic. In plain terms, democracy is no more than one person, one vote.

Regardless of the specific abuses in the Falkirk case (which we don't really know about as the "report" remains embargoed), what is noticeable is the ready recourse to the trope of union members as mindless ballot-fodder. You'd expect this lurid depiction from the Tories, but much of it has come from the Blairites who presumably remain "relaxed" about New Labour being primarily dependent on rich business donors. Falkirk is interpreted as a culture clash, with Unite determined to advance working-class, pro-union candidates at the expense of middle-class, neoliberal cuckoos. This is as misleadingly simplistic as the representation of the Egyptian crisis as a clash between Islamists and secularists, but if it were true, then encouraging more working class MPs would surely be in the interests of pluralism.

What this highlights is a well-worn truth. Pluralism is always advanced from a liberal perspective because it is simply an aspect of liberal practice: the informal division of the spoils. But it rests on the assumption that plural society does not include everyone, only those who ultimately subscribe to liberal values and abide by the rules of the game. This means that there are always some who are beyond the pale, such as Islamists and trade unionists, for whom "democratic rights" are contingent.

Wednesday 3 July 2013

Aerotropolis Now

I spent last weekend in Berlin, or, to be more precise, I spent a slow Friday night in Gatwick airport and then spent the next two days in the German capital trying to catch up on lost sleep. A five-hour delay, due to a crocked Easyjet plane, meant that we didn't get our heads down till 4am on Saturday. This would have been fine if we'd been clubbing, but our itinerary was geared to early breakfasts and tramping the city streets. The delay meant an opportunity to experience the emerging airport city of Greater Crawley, courtesy of two £6 a head food vouchers from Easyjet. I did toy with the idea of blowing it all on a couple of oysters in the Caviar House & Prunier seafood bar, where I noticed John Moulton ensconced, but opted instead for steak and chips and a bottle of Cahors in Café Rouge. Naturally, the £6 barely covered the chips.

The aerotropolis trope - the city built around an airport - has been around for a while now, though outside of artificially sustained oases like Dubai there is scant evidence of the successful evolution of airports into destinations in their own right; all this despite the attempts to recreate the nineteenth century arcades experience (famously delineated by Berlin-born Walter Benjamin) as a step-up from the pile-em-high duty-free of the early jet age. Airports are necessary evils whose atmosphere blends befuddled anxiety and soul-sapping ennui. The logistics of air travel - i.e. the need for check-in, security clearance and the likelihood of long delays - mean that the airport terminal is a form of purgatory far worse than a seaport or railway station, as Edward Snowden could probably attest.


The airport city is less about the physical centrality of the airport and more about the city's dependence on international flows of goods and people. An aerotropolis is not the result of the insertion of an airport into an existing city, otherwise London's gravitational centre might have shifted to Silvertown some years ago, but the creation of a brand-new urbs around a runway or four. As such, the idea is a combination of clean-slate futurism (with echoes of Futurism) and the yearning for a homogeneous, global environment suitable for the executive class, hence the emblematic importance of seafood bars and luxury brand outlets. There is a palpable nostalgia both for the bourgeois cosmopolitanism of La Belle Époque (the Art Nouveau styling of Café Rouge) and the hopeful glamour of the 1920s (the Art Deco styling of Caviar House & Prunier). The use of aeroplane iconography in the commercial areas of airport terminals is rare, though this may be partly to avoid travellers dwelling on the improbability of flight. You're more likely to see representations of the Orient Express or the SS Normandie than an Airbus A380.

These purpose-built hubs are best seen not as isolated initiatives but as a single "global network whose fast-moving packets are people and goods instead of data". The usually unstated assumption is that some travellers matter more than others: "Floating above it all, meanwhile, are the globe-trotting executives chasing emerging markets". This explains both the comforting evocation of earlier golden ages of travel (i.e. the reassurance of class boundaries and status) and the homogenised corporate advertising that attempts to simultaneously assure us that the world is both hugely various and fundamentally the same everywhere.

One thing I've always found slightly off-putting about airline advertising - specifically advertising by non-budget airlines and national carriers - is the hint of heaven: the stewardesses as angels or houris, the soft focus and sense of antiseptic calm, the (worrying) suggestion that a place in the clouds is your actual destination. It always makes me think of Michael Powell and Emeric Pressburger's A Matter of Life and Death.


We ignore the obvious dissonance between this heavenly atmosphere and the noisy, cramped, fart-laden reality of coach class because we appreciate being transferred quickly from A to B (and because after 5 hours stuck in a "lounge" we'd happily stand knee-deep in pig-shit to get on the move); but surely nobody has ever voluntarily switched their bank account to HSBC because of one of their annoyingly self-satisfied posters? I wonder if the same-but-different schizophrenia of corporate advertising is solely intended to soothe the nagging anxiety of the ungrounded executive.

Apart from force of circumstances, I was musing on airports due to the recent reports that a second runway for Gatwick is looking more likely, which would put it on a par with Heathrow, and because Berlin has been heavily defined by its airports, real and imagined, over the last 100 years. We flew in to Schönefeld, which was the main airport of East Germany and is now the focus for budget airlines. It is due to be subsumed into the new two-runway Berlin-Brandenburg airport, currently being constructed alongside it. Once complete, this will lead to the closure of Tegel airport, which was originally built in 1948 for the Berlin Airlift. That had proved necessary because the runways at Tempelhof, the main airport during the Weimar and Nazi years, were too short for the larger transport planes required during the airlift and the commercial jets that came after (though they were fine for the rocket ships in Philip K. Dick's The Man in the High Castle).

An earlier Berlin airport at Staaken was used in the manufacture of zeppelins, and was re-purposed after WW1 as a film studio where part of Fritz Lang's Metropolis was shot. That film famously envisaged planes flying between skyscrapers in the city of the future, without the aid of traffic lights (and no obvious landing strips).


As a city, Berlin today has few really tall skyscrapers, despite the ongoing post-1989 building boom, with the 1969 Fernsehturm TV tower at Alexanderplatz still dominating the skyline. This shows that Lang's vision of the future was no more accurate than Hitler and Speer's bonkers Germania. Despite the weight of the past, Berlin does not feel like a city of ghosts. As you watch the kids playing hide-and-seek in the Holocaust Memorial, or wander through the typical living room in the DDR Museum (which looks little different to a British living room of the 1970s), you notice how lightly history is worn. Some of this effect is the result of the destruction of the city's fabric in WW2, and some the conscious efforts of both East and West to build a positive image during the Cold War years in what was, in West Berlin at least, an aerotropolis avant la lettre.

What I take away from this trip, apart from a gut-full of pork products, is a renewed respect for London's fragmented airport system. It obviously has its problems, but the solutions are incremental: an extra runway, better rail links, onsite cinemas etc. Folies de grandeur like Boris Island may appeal to the Hitler in us all, but the result is likely to disappoint. For the record, Berlin-Brandenburg airport is expected to be delivered at least 4 years late and vastly over budget.