Tuesday, 25 June 2013

Defending the Halfway Line

Back in the 1970s, due to the popularity of a certain musical, it was common for football crowds to chant at long-haired players thus: "Charlie George, superstar, walks like a woman and he wears a bra". (These were the days when The Liver Birds was regarded as cutting-edge gender politics.) This random memory popped up while reading a new paper by Greg Mankiw, an economic adviser to George W. Bush and Mitt Romney, entitled Defending the One Percent. This suggests that rising inequality is simply the product of "just deserts" - that the one percent are getting richer due to the impact of technology and the increasing returns to skill that this gives rise to. The paper is a standard conservative polemic against tax-based redistribution, which has been extensively rubbished by others for its dubious claims and rhetorical method (lots of begging the question). I'm less interested in the philosophical defence of privilege, which is unremarkable, or the dubious speculation on the heritability of talent, which doesn't rise above saloon-bar reasoning, and more in the relationship of technology and skill as an explanation of the one percent's good fortune.

Mankiw starts with the orthodox position that skill-biased technological change increases the rewards for skilled labour relative to the unskilled, leading to increased income inequality. Implicit in this model is the assumption that it raises the value of education, leading to more people becoming skilled and thus inequality declining (as occurred during les trente glorieuses). Recent rises in inequality thus reflect a surge of new technologies driving up the returns to skill, with education lagging behind. According to Mankiw, if this position is right "that the broad changes in inequality have [been] driven by the interaction between technology and education, rather than changes in rent-seeking through the political process, then it would seem an unlikely coincidence that the parallel changes at the top have been driven by something entirely different". In other words, this reward for skill must apply to the top one percent as much as it does to the top 10 or 20 percent, so if they're now richer, it must be because their skills have become even more valuable relative to those of the median worker.

Of course, there is no reason to believe that skill-biased change and rent-seeking are mutually exclusive, though you can see the point of maintaining that belief if your basic premise centres on "just deserts" (you see what I mean about begging the question). To take a real-world example, consider professional football. Though you can fairly argue that modern players are more skilled than those of a generation ago, it's clear that the main reason for the disproportionate growth in their incomes since the 1990s is technology in the form of TV and the Internet. This has generated larger revenues for the game, with the lion's share going to players. A case can be made that this is "just deserts", but that would not explain the huge income that now accrues to FIFA and other administrators who control tournaments (and incidentally provide opportunities for large-scale corruption, as in Brazil today). That is pure rent. An example of these two tendencies co-existing is Lionel Messi, a sublimely skilled talent and alleged tax dodger (tax evasion is rent extracted from the public purse).

Football is a useful study for a wider phenomenon first outlined in The Economics of Superstars by Sherwin Rosen in 1981, namely the impact of technology on scale economies. The point is not just that TV or recording media expand the market, but that they also produce a concentration of output on the best talent, hence overkill like the Confederations Cup and close-season club tours, not to mention Neymar's burgeoning commercial rights income. 45 years before Rosen, Walter Benjamin noted how as the work of art in the age of mechanical reproduction loses its "aura", this is compensated for "with an artificial build-up of the 'personality' outside the studio", where that personality is "the phony spell of a commodity". The artist, the superstar, becomes a commodity and object of veneration, like a medieval saint. Old Walt did not live to see the emergence of pop culture and fan merchandise, let alone Elvis impersonators or Glastonbury, but I'm sure he'd have been unsurprised by it.

Rosen presciently asked "What changes in the future will be wrought by cable, video cassettes, and home computers?" What he could not predict was the network effects of Internet-based services - i.e. the more people who use services such as Google, Facebook or Twitter, the more valuable those services become to each user. A consequence of this is the tendency towards monopoly: one search engine, one social platform, one broadcast medium. But the financial benefits of concentration accrue to the service provider, not to the content providers, i.e. the users. This explains the need for the creators of these services to be simultaneously humanised and sanctified as benefactors of mankind, playing up their lovable nerdiness and rebellious spirit while playing down or excusing their actual strengths as ruthless business people (the trailer for the upcoming Steve Jobs biopic is a good example of these multiple tropes crammed together). Thus the superstar aura of the modern artist is easily transferred to the creators of commodities and commercial services, and reaches a sort of apotheosis in the living brand of Richard Branson (clearly a lifelong aura addict).

Since the 1990s and the emergence of the "new economy", it has become common to apply the adjective "superstar" to jobs outside of entertainment and sport, so much so that it now appears to be casually assigned to the entire top one percent. "Superstar tax accountant" or "superstar lawyer" are terms that can now be used without irony. But to what extent is this a reflection of the impact of technology or skill in harnessing it? Mankiw quotes Brynjolfsson and McAfee (Race Against the Machine, 2011): "Aided by digital technologies, entrepreneurs, CEOs, entertainment stars, and financial executives have been able to leverage their talents across global markets and capture reward that would have been unimaginable in earlier times".

What is obscured by use of the word "aided" is that these people have precisely zero skills when it comes to the digital technologies themselves (beyond operating a BlackBerry or iPhone), and even make a fetish of techno-incompetence (the modern equivalent of spotless, white gloves indicating unfamiliarity with manual labour). Skill-biased technological change, if it means anything, should mean that increased rewards accrue to those who acquire the skills to exploit the new technology. But while it is true that many techies have profited handsomely, the skill of an entrepreneur or a CEO rarely has anything to do with mastery of the technology as opposed to mastery of people and process, which fundamentally means deploying skills that are as old as civilisation, such as selling, analysis, motivation and organisation, not to mention coercion, evasion and swindling. Steve Jobs was not much cop as a technologist himself, but he was a top-drawer manipulator of other technologists.

A characteristic of the IT revolution is the scale of the market that it creates both through physical extent and commodity deflation (it is estimated that the number of mobile devices will exceed the global population by the end of this year). Steam locomotives and steam ships shrank the world, but did not penetrate every nook and cranny, hence the emblematic use of the riverboat in Fitzcarraldo. The internal combustion engine spread further but remains constrained by the availability of roads. The physical extent of datacoms has already far outstripped these earlier technologies, so new markets in Amazonia and the Congo can expect Twitter before Tarmac. This amplified network effect has created a much larger and steeper pyramid of wealth, hence the increase in inequality and the power of the one percent who have risen upon it.

Papers like Mankiw's are just attempts to rationalise this tendency as predominantly the product of individual talent and hard work, rather than the result of changes in the material base, dumb luck or incumbent privilege. Pointing at technology as the driver is unintentionally revealing. The one percent are benefiting not because they are better at harnessing new technology but because technological change is shifting wealth from production (capital + labour) to rent. If this were a matter of skill, we would expect technology to drive healthy wage growth at least down to median earners, i.e. the halfway line in the income distribution, but all the evidence is that median wages are stagnating while incomes at the very top continue to race away. Current skill-biased technological change appears lopsided in its impact on incomes. An explanation based on "changes in rent-seeking through the political process" looks more credible.

Lionel Messi may have been out of order in attempting to defraud the Spanish exchequer, but I don't begrudge him earning top wages for his performance on the pitch, even if the scale of his rewards (compared to those of Charlie George) has been hugely amplified by technology. I do however begrudge Sepp Blatter, the "superstar" of football administration, filling his own and his cronies' bank accounts by licensing the game. Greg Mankiw seems unable to appreciate that the one percent contains far more Blatters than Messis.

Friday, 21 June 2013

And Now, a Message from our Sponsors

As I routinely share my blog posts on Google+, I recently turned on the feature to replicate Google+ comments back to Blogger. I've now turned it off, because the standard Blogger comment form, which accepts anonymous comments, disappears and is replaced by a Google+ form that requires commenters to have a Google+ profile. Google's not-so-covert purpose, to get everyone to sign up for Google+ by integrating it into all of its other products, becomes clear.

There is still a widespread belief that Google+ was an attempt to challenge Facebook, and that it has now failed and will eventually go the way of Buzz and Wave. I am of the school that believes Google remains wedded to being the king of search and that it has no desire to go head-to-head with either Facebook or Twitter as a social medium. The role of Google+ appears to be to provide a global profile, spanning all of its products, and thus a way of aggregating your preferences in order to refine search results. The fact that most people's Google+ "stream" is devoid of chatter is actually the point. Thoughtful and deliberate preferences are much more valuable to an advertiser than a casual like or a soon-to-be-meaningless hashtag.

This mundane house-keeping (and otherwise dull opening paragraph) has a certain piquancy in light of the NSA/Prism/GCHQ revelations, which I think should be read more in terms of the ambitions of the Internet companies than the ambitions of the US and UK governments. The state's involvement in surveillance is now parasitical and dependent in a far more profound way than in the era of wiretaps. It has outsourced primary responsibility to the Internet companies, and there are real doubts that the agencies have the technical competence to fully exploit the results. Despite the carefully placed role model of Ben Whishaw as a youthful and nerdish Q in Skyfall, the truth is that the best techies want to work for Google and stock options, not work for the NSA or GCHQ and a civil service pension.

Most political scandals quickly arrange themselves into a them-and-us dynamic. Dreyfusards and anti-Dreyfusards. Scandals, as political theatre, tend to conform to stock plots: venality, sexual opportunism, abuse of office, covering up incompetence, the deep state, executive hubris etc. What tends to be common is the revelation that they (the state, the powerful) have been taking advantage. What determines a scandal's toxicity is the degree of contempt for the rest of us that it reveals. For that reason, I suspect the current flap (which hasn't even earned a "gate" suffix yet) will soon die down.

Evidence for this includes the rapid fragmentation of the debate. This is a well-established crisis-management strategy - metaphorically opening up multiple sluice-gates - but it also arises naturally if the scandal lacks an obvious focal point (contrast the NoW phone-hacking affair, which was long fuelled by speculation that old Rupe himself might be implicated). This fragmentation occurs more rapidly in an online world, where there is both an appetite for news and an inexhaustible supply of opinion. Thus we now have distinct strands focusing on the legality of state surveillance, the adequacy of executive oversight, the motivations of Edward Snowden (traitor vs hero), the ethics of leaking / whistleblowing, and even Snowden's girlfriend's career (pole-dancer vs performance artist). There has been less discussion of how the technology actually works and what this means in practice.

The dominant impression is a general lack of surprise. With the exception of full-time civil libertarians and conspiracy theorists, we're actually quite blasé about it. In fact, I don't think it would be going too far to say that most people who thought about it probably assumed this degree of surveillance was already routine. We've had 12 years of propaganda on the "trade-off between security and privacy", and plenty of evidence that online business models depend on exploiting the surplus labour of our freely-provided data. It looks like this undertow of ennui, buttressed by some token inquiries and proposals on tighter oversight, will let both the authorities and the Internet companies ride out the storm.

Much of the debate has employed the traditional dichotomy of the individual and the state. For Snowden's admirers, it's a case of we, the freedom-loving people, against the over-mighty "guvmint". For Snowden's detractors, like David Brooks in the New York Times, it's about selfish irresponsibility: "But Big Brother is not the only danger facing the country. Another is the rising tide of distrust, the corrosive spread of cynicism, the fraying of the social fabric and the rise of people who are so individualistic in their outlook that they have no real understanding of how to knit others together and look after the common good". This false dichotomy, between Big Brother and the individual leaker, lets business off the hook. The disruptive factor over the last 20 years has been "the rise of people who are so individualistic in their outlook that they have no real understanding of how to knit others together". In other words, the iCapitalists for whom "the common good" is meaningless beyond the imposition of their own solipsistic worldview.

Outrage over the attitude of Google, Facebook and the others has now been channelled into the fulminating of various European data regulators, who are just a species of consumer watchdog, and chuntering about the dubious confidentiality of cloud services that may or may not push the discerning consumer towards more specialised providers (i.e. where you pay a fee for "peace of mind"). Consumer rights are not civil liberties (though some would like the reverse to be true). Curiously, only a few have thought to question the monopoly basis of the current service landscape and the consequent rise of rent in the modern economy. It's almost as if we don't want to think evil of those "crazy individualists" out in the Bay Area.

If history teaches us a salutary lesson, it isn't that governments tend to overstep the mark - that's a given - it is rather that monopolies (and cartels) are the true conspiracies against the public. This applies both to commercial monopolies and to monopolies on the exercise of state power. It is the intersection of state and private monopolies that creates the gravest danger as this is bi-directional. I can't be the only person to have wondered if Google's de facto tax exemption is a quid pro quo for services rendered.

Monday, 17 June 2013

The Significance of Triviality

It has been fashionable in recent years to claim that modern technological innovation isn't a patch on the past - that the current IT revolution is little more than trivial consumption, such as iPods and cat videos, which compares poorly with steam power and the internal combustion engine. Robert Gordon and Tyler Cowen have been widely quoted in this regard. Some of this is probably generational - i.e. middle-aged men who never got over the non-appearance of jet-packs being underwhelmed by Angry Birds (I imagine Thomas Malthus didn't think much of the potential of steam engines). Some of it is just conservative misanthropy - the assumption that we live in decadent times and everything is a bit shit. And some of it is, I believe, a misunderstanding of the significance of triviality.

It was interesting then to see a report by neoliberal cheerleaders McKinsey outlining the "12 technologies that could drive truly massive economic transformations and disruptions in the coming years". McKinsey, given how they earn their money, focus on near-term technologies - i.e. what they consider to be racing certainties, rather than speculative futurology. They think mobile Internet might be big. There's the usual consensus guff, so 3D printing, "autonomous vehicles" and "advanced materials" (e.g. graphene) make the first division. In the second division ("on the radar") we find fusion power and quantum computing, which would probably have made the top division in years gone by, before we realised how bloody difficult they are, while the third division of the "interesting and often hyped" includes 3D and volumetric displays, which have little potential use outside of Sci-Fi films.

What is most significant about the top 12 is the extent to which they rely upon information technology, which the report notes is now "pervasive". This is obvious in the case of mobile Internet, cloud technology, the "Internet of things" (i.e. smart devices and RFID), and automation of knowledge work, but it is also true for the other disruptive technologies. Advanced robotics is now more software than hardware, autonomous vehicles depend on realtime processing, genomics depends on massive data-crunching, while advanced oil and gas exploration and recovery has been more about IT than wrenches for decades. 3D printing is emblematic of this. Though it has only broken into public consciousness in recent years, basic 3D printers were built in the early 1980s. The slow march to wider use has been partly due to refinements in the mechanics (additive manufacturing) but mainly due to advances in computing power and the software.

The disruptive change that has triggered the most comment has been the automation of knowledge work. McKinsey note that knowledge workers comprise 9% of the global workforce and account for 27% of total labour costs. As that ratio indicates, we're talking about the better-paid, middle-class jobs. This is leading to further fretting about the returns to education: is it worth getting a degree if skilled, white-collar jobs are going to start disappearing? Personally, I suspect that this automation will work its way from the bottom up - i.e. attack the least powerful first. Just as telephone support was quickly offshored, we should expect it to be first in line for intelligent automation (something better than "Press 1 for ..."). The higher management life-forms, like McKinsey consultants, will be at the back of the queue, so degrees from "top" universities will continue to translate into economic power, there'll just be fewer of them. You can also expect a continuation of the tendency for human behaviours that cannot be automated to be formalised as business "values", and for those behaviours to exhibit a class (or "educated") bias. Once empathy and customer-focus are synthesised, a GSOH and charmingly amateurish cake-making skills will become more important. Clubbability has already proved a more long-lived skill than the ability to use a slide-rule.

We can also expect rapid inroads by technology in the burgeoning "care industry", which is clearly at the vulnerable end of the labour scale. The example of the grandson who video-cammed his infirm granny in order to check up on her care workers was a microcosm of many current trends: the poor quality care (often down to crap wages and insufficient time rather than human wickedness), the upside of surveillance, and the growing expectation that technology and "telecare" may have a much larger role to play. A similar revolution is in the offing for healthcare, which explains the relentless claims that the NHS is inadequate and/or insupportable. While you can make money by privatising a labour-intensive public service and cutting staff (e.g. binmen who no longer have time to walk up the drive), the really big money is to be made through automation that takes out swathes of the workforce. The trick is to secure a long-term contract just before massive capital investment, and ideally get the government to part-fund this investment as the rail and water companies did.

Part of the reason why the current technological revolution is under-appreciated is the massive deflation in costs and associated commodification. This leads to the supposition that the technology must be trivial, because we can afford to put it to trivial uses. McKinsey note that the fastest supercomputer in 1975 was the CDC 7600, which cost a princely $5m then, equivalent to $32m in today's prices. An iPhone 4 has the same processing power and costs about $400. If steam engines had experienced comparable deflation, they would literally have cost buttons by 1900. This tendency towards faster commodification and steeper deflation is nothing new. If you follow the Robert Gordon model, the technologies of the 1870-1900 era, such as electricity, the internal combustion engine, central heating, air-con and indoor plumbing, were all more commodified and affordable than those of the 1750-1830 era, such as steam engines, cotton spinning and railways. Ordinary people got the benefit of cheaper clothes and railway travel, but they couldn't afford their own power looms or railway engines. They could (eventually) afford their own indoor toilets and cars.
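As a quick back-of-envelope check of that deflation claim (treating the figures quoted above as given, not independently verified):

```python
# Rough check of the compute-deflation figures quoted above; both prices
# are assumptions taken from the text.
cdc_7600_cost = 32_000_000   # 1975's $5m supercomputer, in 2013 dollars
iphone_4_cost = 400          # roughly comparable processing power

deflation_factor = cdc_7600_cost / iphone_4_cost
print(f"Equivalent compute is roughly {deflation_factor:,.0f}x cheaper")
```

On those figures, equivalent processing power has become roughly 80,000 times cheaper in under four decades.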

This massive drop in the cost of information technology has further ramifications because of the transmission of deflation to secondary technologies, i.e. the many and various applications of IT. A "manufacturing startup" in the 1930s meant a significant investment in machine tools, plant and raw materials. A tech startup today can require little more than a couple of laptops, broadband and a spare bedroom. The point is not that these startups will all produce viable businesses - the failure rate is very high - but that the capital at risk is tiny. If you have ever wondered why venture capitalists threw silly money at barking ideas during the dotcom boom, bear in mind that low entry costs were as much the driver as a desire not to miss the next big thing. The problem for VCs was that they were sitting on a lot of capital, faced with a lot of projects each requiring a relatively small amount. Imagine a 100-horse race, in which every mount has odds of 1,000 to 1, and you have £100. You'd just put a quid on every horse and rake in the winnings.
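The betting analogy can be made concrete. Assuming exactly one horse must win and every horse is priced at 1000/1, backing the whole field is a guaranteed profit:

```python
# Sketch of the 100-horse race analogy above: back every horse at 1000/1.
horses = 100
stake_per_horse = 1   # £1 on each horse, £100 staked in total
odds = 1000           # 1000-to-1: a winning £1 bet pays £1000 plus the stake back

total_staked = horses * stake_per_horse
winning_return = stake_per_horse * odds + stake_per_horse  # £1001 on the winner
net_profit = winning_return - total_staked                 # £901, whichever horse wins

print(f"Staked £{total_staked}, returned £{winning_return}, profit £{net_profit}")
```

The VC parallel: when each bet is tiny relative to the fund, covering the whole field beats trying to pick the winner.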

The impact of IT has (I think) been central to the growing dearth of capital investment opportunities, which has in turn led to more and more capital being pumped into property and resource speculation, incidentally expanding financial markets faster than the fixed capital base. This dearth is less a smaller quantum of opportunities (as some have assumed, feeding the "we've stopped innovating" meme) and more a massive fall in the price of opportunities as discrete investments. This is throwing off huge amounts of wealth, but that wealth is sticking with capital rather than being shared with labour. The consequence is increasing wage inequality, increasing asset inequality, and a polarisation of jobs. As Marc Andreessen said rather dramatically: “The spread of computers and the Internet will put jobs in two categories: People who tell computers what to do, and people who are told by computers what to do.” It is important though to note that the first group will be dominated by owners of capital and their "lackeys", to use a once-popular term, rather than software engineers or other techies. The nerd will not inherit the earth - it will be that lovely young intern with the nice manners. That is the real significance of triviality in the modern economy.

Thursday, 13 June 2013

Happy Days are Here Again

Due to the briefly clement weather the other week, I found myself on the Tube without a book (lack of pockets), so I decided to read the Evening Standard, which these days is little more than a fat property free-sheet wrapped in an outer layer of Mayoral stunts and football non-news. I've also noticed a surge in the amount of estate agent bumf dropping through the letterbox of late, which includes prospectus-style "reports" full of charts showing property price and rental trends. Pictures of fitted kitchens appear to be going out of fashion as we morph from wannabe interior designers to property speculators. These vignettes reflect life in the metropolitan bubble, but that in turn influences public discourse on the wider economy.

The last few weeks have been marked by a rising murmur about the "green shoots" of recovery. The timing is partly down to the season (sentiment tends to improve in Spring), partly the departure of Mervyn King as Governor of the Bank of England (he wishes to leave the house in good order, so is inclined to be more optimistic than usual), and partly the improvement in the housing market (i.e. an increase in new mortgages). The largest factor may be simple boredom - we want to change the tune - hence the sound of barrels being cheerfully scraped, such as the suggestion that successful PPI claimants spunking their wad on new cars is fuelling growth.

The macroeconomic reality is that GDP remains below its 2007 level and is not likely to fully recover lost ground before 2015. With few exceptions, the predictions for future medium-term growth are modest, which means that recouping trend growth (i.e. getting back to where we would have been were it not for the 2008 crash), is so far out as to be meaningless. It could take decades. This doesn't mean that life won't materially improve for many, because advances in technology will continue to drive productivity gains and commodity deflation, but for others the recovery will be as illusory as it was in the late 1980s. Outside of the Evening Standard's target demographic (people who can afford mortgages in London, rather than football fans), the teenage years of the century promise crap wages, precarious employment and an ever more frayed safety net.

The worrying aspect of this is the assumed dependence of economic recovery on property. In times gone by, this would be a positive sign as it would indicate large-scale investment in housing stock, as in the 1930s and 1950/60s, with its multiplier effect on manufacturing, transport and services. But that was when house prices were a relatively low multiple of salaries and deposits could be saved over a year or two. People were earning more and choosing to spend it on better property. In the modern UK economy, a pick-up in house sales can only mean an increase in mortgages at the limit of affordability, and that in turn can only mean an increase in onerous and risky debt.

This is happening not because people are striving to afford mortgages, but because the government is lowering the bar, just as the US did with sub-prime loans. The Help to Buy scheme increases the number who can afford a mortgage, while Funding for Lending aims to increase the availability of mortgage funds. In a perverse Keynesian way, the government is stepping in to substitute for a shortfall in private demand for debt. In more specific terms, these schemes are intended to have the same turbo impact that endowment-backed mortgages had in the 1980s.

The UK housing market is very different to the rest of Europe. The core (notably Germany) has largely maintained sufficient supply to avoid excessive price rises, while the periphery has over-supplied, leading to speculative bubbles. In the UK, we have had persistent under-supply, which produces high prices. For a property crash to happen, as in the US, Ireland and Spain, you simultaneously need buyers exiting the market en masse and significant over-supply. What distinguishes the UK market is the lack of the latter - we have not had a speculative building boom. What we have had is decades of capital being injected into existing stock, land being banked (as developers focus on margin over volume), and the unbalancing of the economy - which drives up prices in London and other hot-spots. Our boom was driven by not building houses.

The government estimated in 2007 that we'd need to build 240k houses a year for a decade just to meet growing demand (due to increased life expectancy and falling household density). The impact of the recession has recalibrated that to 300k. If the next government promised to build 1 million new houses during the life of the parliament, we'd still be behind the curve. As the chart below shows, hitting this peak even in one year is improbable, while an average of 300k a year for 5 years would be enough to match our previous best in the late 1960s.

The sobering truth is that our housing problem predates Thatcher and the moratorium on council house starts. The gradual constriction of supply was a product of the 1970s, as buying gradually outpaced renting, due to rising incomes, and the post-war building boom ran out of steam. Right-to-buy was a product of the increasing attraction of home-owning, rather than calculated speculation, which in turn reflected secular trends such as slum clearance, new towns, DIY, the turn against high-rise council flats etc. Since the 80s, new builds have fallen progressively further behind the rate needed to meet demographic change. Mini-booms, such as in the 80s and the early 00s, have allowed builders to increase volumes while simultaneously increasing prices, but have not checked the overall shortfall of supply. Looked at over the longer term, we've not built enough houses in 90 years out of the last 100. Why should we think this is about to change?

Though previous peaks can be attributed to a number of factors, from post-war recovery to more available mortgage finance, the key driver was greater affordability due to strong wage growth. With the prop of debt taken away, below-inflation wage rises and the increasing prevalence of low pay mean that a soft landing, where greatly increased supply is met by pent-up demand, isn't an option. We can only restore a high-volume market (and incidentally free up more household income for other expenditure) if prices drop significantly, and there is understandably little support for this among home-owning voters fearful of negative equity and evaporating pensions. The only way to push prices down significantly would be to build roughly 500k new homes a year for a decade (increasing stock from 27 to 32 million), of which 70-80% would have to be social housing funded by government (or local government) borrowing. Don't hold your breath.
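The arithmetic behind that stock figure, using the numbers in the paragraph above:

```python
# Check of the housing arithmetic above (all figures from the text).
current_stock = 27_000_000   # existing homes
build_rate = 500_000         # new homes per year
years = 10

new_homes = build_rate * years           # 5 million new homes
final_stock = current_stock + new_homes  # 32 million in total
print(f"{final_stock / 1_000_000:.0f} million homes after {years} years")
```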

Despite populist gestures like Help to Buy, the future is therefore less about mortgages and more about outright ownership for both occupation and renting out, with homes becoming cherished legacies and mobility from renting to buying grinding to a near halt. The musical chairs of the last 30 years has now stopped, which is why the government is trying to jump-start the market with cheap loans. Another way of looking at this is that we're seeing a gradual unwinding of historic property debt, which is likely to last for at least another decade and possibly two. A large scale house building programme would jeopardise this orderly liquidation, so it is likely that supply will continue to be constrained in order to keep asset holders and developers happy (and we'll blame restrictive planning regulations instead). That does not look like much of a recovery.

Sunday, 9 June 2013

Prism, Big Data and the Bloody Tudors

I am easily irritated by most history programmes on TV, mainly because they contain little history, and occasionally because they contain nuts (Starkey, Ferguson). Most are little more than a collection of tired televisual tropes, preening presenter tics and thundering platitudes. Dan Snow staring with furrowed brow into the middle distance on location is obviously not on a par with AJP Taylor talking to camera in a darkened studio. The problem is not dumbing down, or inflated budgets, but our desire for the subject to be presented in the neatly-bound and personality-driven form of a historical novel. This explains the current popularity of the Tudor era, both in the bestseller lists and on TV. When Hilary Mantel and Philippa Gregory are being cited as experts on Renaissance England you realise why Michael Gove's ambitions for patriotic and heroic history enjoy popular support.

One partial exception to this has been The Time Traveller's Guide to Elizabethan England, which is currently running on BBC 2. Though this was popular history of the National Trust variety (much on fashions in ruffs and dental hygiene), and the decision to put Ian Mortimer in front of the camera added nothing to a voice-over, its focus on the material basis of life was a refreshing change, and it did produce more insights in a few throwaway remarks than were managed by the wall-to-wall speculation on Anne Boleyn's sex life. One particular point that Mortimer noted was how the growth of the secret state under Francis Walsingham went hand-in-hand with an increase in the use of torture and condign punishment (notably hanging, drawing and quartering), though he explained both in the context of understandable fear, i.e. the repeated attempts on Elizabeth's life and the Spanish Armada.

The obvious modern echo is the way that increasingly intrusive surveillance has marched in lock-step with the normalisation of torture and the abrogation of the rights of the powerless. The story of Abu Ghraib, Guantanamo Bay and drone strikes is as much about intelligence as institutional abuse and extra-judicial killing. We like to think that intelligence and coercion are mutually exclusive, hence the belief that "intelligence-led policing" is an alternative to stop-and-search. We don't like to think they are complementary. This gives rise to the trade-off in Obama's initial comments on the NSA/Verizon revelations: "You can't have 100% security, and also then have 100% privacy and zero inconvenience". In fact, you can't have 100% security full-stop, not because there is an inescapable trade-off between security and privacy, but because there is a trade-off between the rights of different individuals (cf. the US tolerance of unregulated gun ownership). The aggregate trade-off is between personal utility and collective mayhem - the justification for the necessary state outlined by the one-time Tudor schoolboy, Thomas Hobbes.

No Elizabethan would have questioned the authority of Francis Walsingham to spy on them, because it was being done (literally) in defence of the realm, even if they might dispute the Protestant Elizabeth's claim to the throne. The notion of privacy develops over the course of the 17th century and is primarily an artifact of the political compromise between the gentry and the state, recorded in the evolution of political thought from Hobbes to Locke. The core of that compromise is the right of the wealthy to be secure from property seizure by the state. The Anglican compromise under Elizabeth marks the initial steps towards the state's acceptance of freedom of conscience, the ideological expression of that material independence. The institution of systematic spying coincides with the emergence of the private realm.

In the twentieth century era of mass democracy, we retained a belief in the sanctity of private property (e.g. not steaming open other people's letters), but we also accepted that popular opinion (the will of the people) was a collective entity that could and should be manipulated through persuasion and propaganda. The tension between these two, the private and the public, was at the centre of Orwell's 1984, but the book assumed that only the totalitarian state had the capability to invade the private sphere. In late 1940s Britain, the private interests of business might affect opinion through the popular press, but it was assumed this could be stopped at the front-door ("I'll not have that in the house!"). This idea, that it's the state we need to worry about, remains central to the popular critique of the infringement of civil liberties.

The modern idea of Big Data (so much more reassuring than Big Brother), and the related "hive-mind" meme, can be seen as the privatisation of the twentieth century idea of collective intelligence (we should always remember that privatisation means a joint enterprise between the state and large corporations). A fundamental premise of Big Data is that the hitherto hidden pattern is only visible with the maximum amount of data: it is therefore omnivorous. The claim that it is "only metadata", or that data-mining is less intrusive than routine airport security, misses the point. It has to be all metadata to be most effective.

This inevitably leads to the demand for more and more data, for omniscience, and thus for the boundary of what constitutes metadata to blur, just as News International's phone-hacking blurred the boundary between legitimate public interest and intrusion. A good example of this is advertising in Gmail. In order to know that you're interested in buying a smartphone, the software needs to spot the repeated use of related keywords in your email body text. Google don't want to read your message, in the sense of discovering your opinion about a friend's utter incompetence in using an iPhone. They want to plug into (and help mould) your hopes and desires at a much more fundamental level.

That's why the leaked material about Prism and the targeting of individuals had a strangely amateur air about it, like the dork who claims to hang out with the cool kids. This impression was reinforced by the naff PowerPoint and dodgy logo, which looked like relics from the 1980s. The reported budget of $20 million is tiny, if we assume the programme is actually gathering and analysing data in volume (this wouldn't give you blanket coverage of more than a single city). Perhaps the leak is part of a political campaign to secure a bigger budget.

The inevitable Orwellian trope that is wheeled out on these occasions is misleading ("They quite literally can watch your ideas form as you type", according to one source, which is clearly ridiculous). In 1984, control is exercised through propaganda (the daily Two Minutes Hate) and self-repression, more than through surveillance technology. Hysteria and paranoia are carefully cultivated. You can see this at work in North Korea today, with its old-school leader-mania and fear of imminent invasion, but you can also see a (slightly) more sophisticated version of it in the anti-paedophile campaigns and routine denigration of "scroungers" by the British press. We need our daily hate. A mild form of this paranoia informs the media clamour around the failure of the state to pre-empt the Boston bombings and the Woolwich murder. We oscillate between attraction and repulsion for the all-seeing state.

The revelations about "snooping" by the NSA/Prism/GCHQ should not come as a surprise, nor should the willingness of technology companies to accommodate government access. They haven't had their arms twisted. Rather they have offered the state a quid pro quo to prevent government restrictions on their commercial practices. The NSA apparently considers Silicon Valley "home advantage". The technology companies are possibly more embarrassed by these revelations than the NSA, as it won't help them expand globally if they are seen as stooges of the US government, though the de facto interconnection of big business and the state is common everywhere.

The NSA's boast is ironic given the USA's frequent complaints about Chinese government-backed hacking, though the real irony is how similar both states appear to be in their approach (the "Great Firewall of China" is just more in-yer-face). There are capabilities to zero in on individuals, but equally important is the ability to track aggregate trends and thereby potentially manage the flux of popular opinion. That is what both large corporations and states are most interested in. Rupert Murdoch must be looking on enviously. Francis Walsingham, were he propelled through time, would be boggled by it all.

Friday, 7 June 2013

From Huddled Masses to Property Investors

Both Paul Krugman and Frances Coppola are worried about emigration in respect of the Eurozone crisis. Coppola believes that the impact of Eurozone austerity on youth unemployment may lead to mass migration, which in turn will reduce future tax revenues required to fund pensions, resulting in the gradual desertification of the periphery. Krugman asks if labour mobility is making the Eurozone crisis worse.

I am dubious about this for two reasons. First, modern migration flows have only a marginal effect on population and take decades to become substantial. For example, the foreign-born share of the UK population increased from 4.2% in 1951 to 11.9% in 2010. That's a change of about 1.3 percentage points per decade. We are seduced by folk memories of "poor, huddled masses" in the late 19th and early 20th century into overestimating the scale of such movements. This is compounded by modern anti-immigration scares that talk up the numbers. We also easily forget that immigration and emigration offset each other. The net impact on the working population is much smaller than the raw count in any one direction.
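For anyone who wants to check my arithmetic, the per-decade figure falls out of the two percentages quoted above in a couple of lines of Python:

```python
# Per-decade change in the UK's foreign-born population share,
# using the figures quoted in the text: 4.2% in 1951, 11.9% in 2010.
share_1951 = 4.2   # per cent of population
share_2010 = 11.9  # per cent of population
decades = (2010 - 1951) / 10  # 5.9 decades elapsed

change_per_decade = (share_2010 - share_1951) / decades
print(f"{change_per_decade:.2f} percentage points per decade")  # 1.31
```

Hardly a flood, in other words, when spread over sixty years.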

Of course there have been times when a significant number, particularly of young adults, have emigrated, but it would be wrong to conclude that a persistent outward flow means an inexorable slide to national doom. The Celtic Tiger years in Ireland came after over a century of relatively high emigration, so losing lots of bright young things does not necessarily condemn a country to an elderly population and low growth for ever more. It isn't a one-way ticket for a nation, even if it is for the individuals involved.

In fact, since WW2 we have seen a growing tendency for migrants to return "home" periodically and at the end of their working career (this is common among Irish and West Indian emigrants in the UK, for example). This is due to a number of factors: better and cheaper transport, greater wealth, and (particularly in the EU) easier labour and capital mobility. The last of these has also fed emigration by the non-working population, such as Northern European retirees transplanting to Spain, Portugal and Greece (a consequence of this is that a large part of current emigration from the periphery to the core is actually made up of returning migrants - i.e. repatriation - rather than being predominantly a native brain drain).

The second reason is that the immediate impact of the 2008 crisis was a slowdown in migration, not an acceleration. In that light, evidence of increased flows in the last 2 years may actually be a healthy sign - i.e. the market for migrants picking up again because of increased demand for their labour. People don't emigrate for work (and aren't allowed entry by target states) unless there is a reasonable prospect of employment. If millions of Greek and Spanish youth are quitting Europe, that would be a bad sign, but if thousands are moving to London, Amsterdam and Munich, then that looks like business as usual.

Frances Coppola notes that the Euro, in preventing currency devaluation, obliges the periphery to pursue internal devaluation, i.e. pushing down nominal wages, and fears that this will lead to Greek youth taking better paid jobs in Germany. This is true, but the countervailing tendency is to make it attractive for employers to transfer jobs from the high-wage core to the low-wage periphery, much as those states attracted inward investment in the 80s and 90s through devaluation. The key change brought about by the Euro is that it protects cash and capital (Euro-denominated assets) at the expense of labour (wages). This is a feature, not a bug. Pre-Euro, devaluation would erode the value of capital and cash in the periphery more than real wages.

Austerity advances the neoliberal agenda: it privileges multinationals (you can expect more Siemens and VW plants to open in the periphery as wages drop); it advances labour market deregulation (it's worth remembering that Germany has no minimum wage) and "prices" workers into jobs by reducing benefits; it pushes "structural reforms" (i.e. public sector cuts and privatisation); and it seeks to extirpate the remnants of old-style, Southern European corporatism (e.g. generous public sector pensions). This is why it is wrong to assume the ECB and EU Commission's strategy is the product of sheer dunderheadedness, as Krugman is wont to do: "it’s hard to think of any previous episode in the history of economic thought in which we had as thorough a showdown between opposing views, and as thorough a collapse, practical and intellectual, of one side of the argument. And yet nothing changes". Perhaps that's because the results of austerity are actually the point of the exercise.

Frances Coppola is concerned that emigrants may be less likely to remit cash to support aged parents than was the case in the past, and that coupled with the absence of youth's output from a shrunken domestic economy, this will make state pensions unaffordable. Traditionally, emigrants did send cash remittances back home for this purpose, but from the 80s onwards, as state pensions took up the slack, this flow of income was increasingly diverted into property, famously the Southfork-style bungalows that now dot the Irish countryside and the "rusty rod" villas of Greece. We're familiar with the way that the Euro recycled cheap bank credit from the core to the periphery to support domestic property demand, but a contributory factor was emigrant demand for a home in the old country with an eye to retirement. The problem is that much of this recycled wealth evaporated with the crash. It would have been better to stuff Euros under the mattress. If pensions are in peril in the Eurozone periphery, this is more the result of the property bubble than emigration.

Though some current emigration is out of the EU (e.g. Portuguese youth going to Brazil and Angola), much of it will remain within Europe (e.g. Portuguese youth going to Switzerland and Germany). Emigration out of the EU can also be offset by immigration from poorer countries in Eastern Europe, the Middle East and Africa (though this presumes the political will to allow this). The real demographic challenge is not emigration but low birth rates in the near-term coupled with the longer-term likelihood that immigration from Asia and Africa will dwindle as those continents increase their domestic demand for skilled workers. Emigration is the least of our worries.

That said, it also needs to be emphasised that the greying of the population is manageable. Though the proportion of the population producing tax revenues to fund pensions may decline, long-run increased productivity (due to technology) will offset this. The result may be negligible net growth (as seen in Japan), but this isn't the end of the world. The same applies in respect of the health care burden of the elderly. While demographic pressure increases this as a proportion of GDP, technology can offset this too, despite the inherent challenges of Baumol's cost disease. At a certain point, the elderly "bulge" will also start to reduce (i.e. as baby-boomers die), which will boost GDP growth as the working population increases relative to the elderly cohort. We are not doomed, though we will be faced with a novel challenge: managing a shrinking population.

Adjusting for a changed demographic is ultimately a question of redistribution and therefore a political choice. The claim that this is a factor beyond our control is ideological. As I've mentioned before, the regular predictions of doom in respect of both pensions and health care costs can usually be traced back to a belief that we cannot afford the welfare state at any price. Though the fear of the deleterious impact of emigration seems more prevalent on the political centre-left, I think it is just as misplaced as the belief that an ageing population is insupportable.

Sunday, 2 June 2013

The Many Faces of Liam Byrne

Liam Byrne is probably best known for his "There's no money left" note on leaving the Treasury in 2010, though the "Working with Liam Byrne" memo from 2008, which specified in excruciating detail how civil servants should treat the great man, runs it pretty close. The former was stupid, but highlighted his assumption that in addressing fellow neoliberal and ex-investment banker David Laws, the then new Chief Secretary, he was addressing a member of the same club (Laws' decision to release the note to the press, in support of the coalition's austerity strategy, was one of many LibDem betrayals). The latter characterised Byrne as a self-idolising "mover and shaker", applying the micro-managing norms of the corporate world to government. The common thread in this, and Byrne's career to date (Harvard MBA, Accenture, Rothschilds, Parliament), is the neoliberal assumption that the business of government is business.

This shines through in the extracts of his new book, Turning to Face the East: How Britain Can Prosper in the Asian Century, which is published this week. What also shines through is his egomania, which is consistent with his previous oeuvre. I defy you to read the first two paragraphs of this extract in the Observer and not piss yourself laughing. To judge from this taster, the work is of no political or economic (let alone literary) merit, other than as further evidence of the tenacity of neoliberal thought and Liam Byrne's tin ear. There is the usual threadbare rhetoric about a "global race", though this time the prize is a larger slice of the Chinese market rather than nebulous world domination. He refers to "the great economist Jim O'Neill" on the scale of China's growth. O'Neill, the author of the BRIC acronym, has always been an in-house economist at global banks, most recently Goldman Sachs. His contribution to economic thought, as opposed to currency speculation, has been zilch. He is a perfect example of the hegemonic control of the discipline, which sees market interests dictate both academic and public discourse, and which was exposed in the documentary film Inside Job.

Byrne is particularly concerned about the Germans: "In Europe, Germany is outstripping us. In thinktank-land they talk of a German-Chinese 'special relationship'. Fully 47% of European exports to China are from Germany, far more than a decade ago. German trade targets for 2015 are three times the size of ours". The reference to "thinktank-land", not to mention pushing the Germanophobia button, is presumably Byrne's attempt to talk like an ordinary bloke (let's be grateful he isn't shouting "Cum on, Ingerlund!"). What he seems incapable of pointing out is that healthy German exports to China are largely driven by machine tools and specialised engineering services, i.e. feeding the manufacturing boom. Given that we trashed our manufacturing sector back in the 80s, and did little to revive it during the 90s and 00s, preferring to privilege the finance sector that Byrne worked for instead, bleating about Germany's market share today is just dim.

His remedy is free trade, the free movement of capital, and the free movement of skilled labour: "To succeed, more of our children must study in China, so must more of our teachers and academics. Chinese firms must do more business here – and more British firms must work in China. Managers and employees must pass back and forth. Brits should own great Chinese brands – and vice versa". His self-delusion is obvious when he decides to give us the benefit of his historical knowledge: "For centuries we were connected only by long and dangerous caravan routes along which we traded spices, silk – and myths. We don't trade so much spice and silk any more". The obvious myth is that Britain ever traded spices and silk with China, with "caravans" slowly toiling through the lanes of Kent. During the heyday of the Silk Road, we bought these luxury goods from middlemen, notably the Venetians, who controlled access to the Eastern Mediterranean entrepots.

When direct trade between Britain and China finally started in the 17th century (following the lead of the Dutch), it was via ocean-going ships. This was luxury trade, centred on tea, silk and porcelain. As the Chinese had little appetite for British goods, these had to be purchased with silver, leading to a balance of payments problem similar to that which the US has today with China. The solution was a combination of intellectual property theft (transplanting tea production to Assam in India), which is ironic given the way this charge is routinely levelled at the Chinese today, the importation of cheap textiles (notably Indian cottons), which undercut domestic production, and the illicit importation of opium (grown in India). The last of these was not a wicked plot to undermine Chinese moral fibre, but a rational attempt to achieve a balance of payments. The human cost was incidental. Had it not been for middle class demand in Britain for fine porcelain and Earl Grey tea, we wouldn't have bothered.

What is significant is that Britain's trade with China was dependent on its monopoly control of Indian produce and its ability to use the Royal Navy to prevent the Chinese from blocking imports during two "opium wars". In other words, this was the result of imperial exploitation and military strength. Free trade is not the same as fair trade. Byrne is having none of this: "If there's one lesson we should learn from ... our economic success over three centuries, it is that we thrive on competition. Let's not lose that spirit. Let's be confident enough to throw in our lot with the changing world, to become full-blooded globalisers". There is not a smidgen of doubt in Byrne's mind that globalisation is best, not only for Britain but for the rest of the world. He highlights three win-wins for the UK and China.

First, he believes that they require expertise in the high-value services that we are so good at, such as "advertising, aerospace and automotive, branded consumer products, civil engineering, education (especially higher education), energy, financial services, and life sciences". It should be obvious that these divide into two camps: high-end engineering and science skills that the Chinese are currently developing themselves, and parasitical services sold as positional goods. I'm surprised he didn't mention Scotch whisky and Shakespeare. Second, he subscribes to the Cool Britannia bollix that we are a uniquely inventive nation and can therefore partner with Chinese manufacturing: "the real opportunity is to go global with Chinese firms, harnessing British invention and innovation with Chinese production technology and scale in a search for Chinese – and then global – markets". This is nonsense. China will no more forgo developing its own R&D capabilities than Japan did. It has a generation of PhD students to employ, many of them educated in the UK. Third, and most pertinently, he believes we can help China invest its huge dollar surplus. You can take the boy out of investment banking, but ...

One of the choice snippets from the Working with Liam Byrne memo was the claim that: "Money is the root of all progress. Finance are a vital part of the initiation conversations". The use of "are" rather than "is" in the second sentence tells you that for Byrne "Finance" is an organisational entity (despite the common plural confusion) rather than an abstract concept. It's a group of players that includes not only himself but ideological chums like David Laws and Jim O'Neill. The Asian century is an enormous opportunity ... for investment banks.