Monday 26 January 2015

The Rise of the Machines

A dog can be thought of as a biological robot - an artificial product of human design. We haven't built it out of inanimate matter, but we have selectively bred its forbears for certain characteristics. The idea that this might eventually lead to humanity being supplanted by another species is limited to satire, such as Mikhail Bulgakov's Heart of a Dog, or to the anthropomorphism of our evolutionary nearest and dearest, such as Planet of the Apes. In contrast, we never cease to worry about megalomaniacal computers and murderous robots knocking us off our perch, despite the negligible progress in artificial intelligence (AI). Edge.org is the latest to tackle the issue, rounding up "experts" (some are just journos and boosters) to answer the question: "What do you think about machines that think?" Despite the wincingly cute formulation, this is a fascinating subject, precisely because it combines science (how), philosophy (why) and politics (who).

Though not one of the Edge contributors, movie heart-throb Stephen Hawking is the latest big name to warn that the technological singularity, when computers become self-aware and start to design faster and smarter versions of themselves, is approaching: "The risk is that computers develop intelligence and take over. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded". In suggesting that once machines reach a certain level of intelligence they will push us aside, Hawking is yoking intelligence to autonomy. The singularity is a modern concern, but its form and assumptions originate in the traditional fear that once the lower orders acquire knowledge they will become unruly (or, to put it another way, that class-consciousness will lead to revolution). After a week in which the Davos set eulogised King Abdullah of Saudi Arabia as a moderniser and reformer, the political dimension of the singularity is worth dwelling on.

The singularity: opinion is divided


Those who forecast the rise of the machines can be divided into optimists and pessimists. The optimists look forward to the singularity, assuming that our AI overlords will be benign and that the consequences in terms of technological progress will usher in an earthly paradise. This is obviously a religious impulse: "the rapture of the nerds". Jaron Lanier described it as "a certain kind of superstitious idea about divinity, that there's this entity that will run the world, that maybe you can pray to, maybe you can influence, but it runs the world, and you should be in terrified awe of it". That "terrified awe" hints at the origin of this style of thinking in the Enlightenment notion of "the sublime": a secular reverence for nature and the vertiginous wonders of the universe that lives on in the rhapsodies of Brian Cox.


The pessimists don't just fear the singularity, and a future somewhere between Terminator and The Matrix; they also worry about the course of human development leading up to it, which they usually describe in declinist terms - "the Internet is making us stupid" being a current example. This is a variation on the theme of decadence, the belief that we are doomed by our self-indulgence and lack of moral fibre. Traditionally, "we" is not mankind as a whole, but the better sort - hence the focus on literacy, cognitive skills and other class identifiers, all set within a first world paradigm. Before it becomes a totalitarian dystopia, Skynet is a slave revolt. One emotional reaction to this fear of decadence is the search for authenticity and "the joy of using your hands", but artisan craft-brewers are less like Cincinnatus and his plough and more like Marie Antoinette milking cows - another kind of decadence. The information overload trope is a variant on this theme of creeping incapacity - there was probably someone in Ur of Sumer complaining about the information overload of urbanisation and how we had lost our ability to concentrate since leaving the fields behind.

Both clever and funny


The singularity presumes that a computer is capable of imagination: envisaging another computer that is superior to itself. This is not the same as being original or creative, which Alan Turing simplified to whether a machine could "take us by surprise" (Amazon's warped logic has often surprised me), all of which can be programmed and simulated. Rather it presumes a capability for purposeful thought on subjects for which there is no prior data. Imagination is a characteristic of consciousness, the self-aware mind. The "strong AI" premise holds that consciousness is computational and emergent, so a machine can become self-aware (the singularity) given progressively more powerful technology.
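As an aside, Turing's brand of "surprise" needs no sophistication at all. A minimal sketch (the word lists are invented purely for illustration) shows a program whose output its author cannot predict, with nothing resembling imagination involved:

```python
import random

# A trivially "surprising" program: it recombines familiar elements at random,
# so its output can genuinely surprise its author, yet no imagination is
# involved. The word lists below are purely illustrative.
subjects = ["the dog", "the factory", "the committee", "the machine"]
verbs = ["reinvents", "abolishes", "automates", "eulogises"]
objects = ["the weaving-frame", "the servant class", "the singularity", "original sin"]

def surprise_me() -> str:
    # Pick one element from each list at random and join them into a sentence.
    return f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}"

if __name__ == "__main__":
    print(surprise_me())
```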

In contrast, the essentialist premise is that the intelligence referred to by the phrase "artificial intelligence" is by definition a human attribute, so an intelligent machine is no more credible than Brian in Family Guy. Both humans and dogs are sentient and conscious, but we do not mistake canine intelligence for human intelligence - they are categorically different - and we therefore have no reason to believe that "machine intelligence" could ever develop the consciousness of organic intelligence, let alone the attributes of human intelligence such as sapience or self-reflection. While the human mind can imagine mental abilities that don't exist, such as telekinesis or the Vulcan mind meld, we have no grounds to believe that machine consciousness could do likewise.


It is worth remembering that the purpose of the Turing Test is not to show that a computer can think like a human, but to broaden the definition of "thinking" by showing that a machine could imitate the appearance of human thought ("the imitation game"). By this method, Turing sidesteps the essentialist challenge and avoids the need to answer such questions as could a computer be a philosopher? But if we restrict the meaning of AI to "machine intelligence", we prompt the question: how would this differ categorically from human intelligence? One obvious answer is that it would be incapable of autonomy, which puts the kybosh on the singularity.

In cinema, there is a fascination with "artificial emotion", i.e. the ability of a machine to simulate empathy through mimicry, such as in Her or Ex Machina. This is a recurrent artistic motif, from Pygmalion via Metropolis to Blade Runner, but it is enjoying a vogue because we extrapolate the rapid advance from Tamagotchi to Siri and wonder what's round the next corner. Politically, this is a case of lowering the bar. It doesn't show that androids can be self-aware and autonomous, but that humans can be brought down to "their" level. This reflects current anxieties about downward social mobility, and overlays the traditional belief that the lower orders are emotionally incontinent. The logical extrapolation of Siri is not C-3PO but Howard's disembodied mother in The Big Bang Theory.

Outside of science fiction, robots do not understand sarcasm, let alone irony. To be fair, this goes over the heads of many humans too. The point is not that sarcasm is a more complex form of intelligence, but that it is an example of ambiguity. Though this quality does not necessarily rule out a computational base - consider the ambiguity inherent in quantum computing - it suggests that simulated ambiguity may be a long way off, not least because we don't fully understand how or why it works in humans. One thing we do know is that a lot of ambiguity is social. The significant point here is not the technical challenge for robotics, but the variety of human intelligence and its interdependence. To put it metaphorically, no two humans are running the same program, and if we equate the mind with an operating system, no two humans are fully compatible. Despite this, human intelligence tends to atrophy in isolation, suggesting that it requires social stimuli.

Fear and loathing


The fear of machines is not just a fear of the lower orders, but a fear of the absence of God as a social capstone. The traditional hierarchy saw man below God but above the animals, with the conclusion that man himself was naturally sub-divided into what Edmund Burke called "a chain of subordination", with WASPs above the lesser breeds and monarchs above the multitude. Darwin's theory of evolution replaced this settled order with a linear progression, suggesting that man himself could progress further. That "blasphemy" has since been transferred to machines, both the reality of WMD ("the destroyer of worlds") and the potential of AI. The fear that they could prove malign is a reflection of our own human failings and thus a reformulation of the concept of original sin: we are bound to bugger things up somehow or other.

A weaker form of this is the fear of being superseded and made redundant. As George Dyson puts it, in typically decadent fashion, "What if the cost of machines that think is people who don’t?". The danger is not that HAL 9000 might decide to kill us, but that it might decide we are irrelevant to its purpose and thus ignore us altogether (Asimov's Three Laws of Robotics are thus overridden by "whatever"), denying us the benefit of its intelligence either by quitting Earth or devoting itself to hedonism (i.e. sitting there apparently doing nothing, as it can simulate pleasure internally). Of course, this returns us to the problem of explaining how a machine might develop a purpose that had not been (and perhaps could not be) imagined by a human mind.


A modern (and distinctively neoliberal) twist on the fear-of-machines trope is the idea that "we" are in competition with them, a conflict that conveniently elides the role of capital. Some, such as tech-booster Kevin Kelly, are prepared to concede the contest now: "This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots". What this ignores is who is doing the paying, and it achieves this misdirection by investing technology with a personality and thus a mind of its own (Kelly is the author of What Technology Wants, which rather begs the question).

One of Steve Jobs's more famous quotes is: "It's really hard to design products by focus groups. A lot of times, people don't know what they want until you show it to them". What is less well known is the next sentence: "That's why a lot of people at Apple get paid a lot of money, because they're supposed to be on top of these things". While most people wheel the quote out in support of not relying on focus groups, or of following your dream, the real message is that the rewards go to those who imagine (and fulfil) new needs. This is not a statement that praises human imagination in general (after all, the focus groups were found wanting), but one that praises the vision of a handsomely-rewarded elite. AI can already do a good job of analysing customer preferences, because that is technically quite easy, but it cannot imagine hitherto unmet desires.
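To see how low that bar is, the kind of preference analysis meant here can be as simple as counting co-purchases. A minimal sketch (the basket data is invented for illustration, and real recommender systems are of course more elaborate):

```python
from collections import Counter
from itertools import combinations

# A minimal sketch of "analysing customer preferences": count which items are
# bought together and recommend the most frequent co-purchases. The basket
# data below is invented for illustration.
purchases = [
    {"kettle", "teapot", "mug"},
    {"teapot", "mug"},
    {"kettle", "toaster"},
    {"mug", "toaster", "teapot"},
]

co_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item: str, n: int = 3) -> list[str]:
    # Score every other item by how often it appears alongside the given one.
    scores = Counter()
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(n)]

print(recommend("teapot"))  # items most often bought alongside a teapot
```

Extrapolating from recorded behaviour is mechanical; positing a want that no basket yet contains is not.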

The social relations of automation


Karl Marx is often accused of simple technological determinism ("The handmill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist") but as his work on alienation shows, he also recognised that the social relations of capital and labour influenced technological design - i.e. automation was pursued in a manner that disadvantaged labour to capital's profit. The foundation of the industrial revolution was not steam power but the specialisation by function and production lines of early manufactories. The weaving-frames that the Luddites smashed were as likely to be human-powered as steam-powered. Their chief objection was not that the new frames made workers redundant, but that they lessened the need for skill and thus lowered wages. Male artisans were displaced by unskilled youths and women, not by autonomous machines.


The factory system was hated because it reduced the autonomy that the worker enjoyed under the older putting-out system. The factory itself was a machine for regulating labour. What powered machines enabled was the scaling up of the organisation of the factory system. Most modern robots do not look like androids, because we organised industry as early as the 18th century to decompose production and thereby "dehumanise" workers. The unit of production was simplified with the intention of minimising the bargaining power of labour, but a byproduct of this was the easier automation of those units of production. However, this meant that machines were cognitively simple, because they inherited the deskilled organisation of industry (Guild Socialism can be read as a humanist reimagining of industry that sought to reintegrate the unity of the worker, which lives on today in the nostalgia of Steampunk).

The assumption is that the "second machine age" will now automate higher-level cognitive tasks in much the same way, and this is plausible when you consider that the jobs already disrupted by software are those that were organised to be highly procedural, with minimal autonomy and little scope for error (or sabotage) by humans. But this implies that automation will continue to be pretty dumb. If we don't value intelligence in humans, why would we build it into computers? It should come as no surprise that the cultural response to automation in white-collar jobs has been an increase in "performative humanity", i.e. emphasising skills and behaviours that are distinct from machine intelligence (the anti-Turing test, if you will). Examples include the valorisation of emotional intelligence and the vogue for mindfulness, which fetishise social skills and the human brain respectively.

Rights for all


The role of machines as a metaphor for our Promethean hubris means that ethical discussions about AI usually ignore the limitations of technology, assuming instead an equivalence between the biological and the mechanical or even indulging in the metaphysical cop-out of "panpsychism". Our speculation about the rights of self-aware AIs is less about forward-planning and more about contemporary anxieties. The idea of universal human rights is a recent one. For most of history, we have believed that there is as great a gap between the elite and the mass as between the mass and the beasts of the field. Like the philosophy of the Greens, which imagines a world saved from humanity (i.e. "them"), the rapture of the nerds assumes that a worthy elite will survive the transition to live as demigods. The unstated anxiety is that this may turn out to be the owners of intellectual property rights, not software programmers.

The core argument as to why an animal such as a dog should have rights is based on consciousness: because an animal can feel pain, it should not be subject to cruelty. Assuming the "strong AI" premise, before we develop a machine equivalent to an average human intelligence, we must logically create a machine equivalent to a human with low intelligence. This machine would be deserving of rights equivalent to a human by virtue of its ability to feel mental pain. Therefore we would have to grant human-equivalent rights to machines that were, in the language of an earlier age, morons. Given that we struggle in practice to extend full human rights to many humans, I suspect the more likely outcome would be cognitive dissonance, accepting that consciousness is computational but denying the machines rights. In effect, creating a new helot class.


You can see this expectation in the tendency of researchers to stereotype robots as slaves and drudges, building prototypes that serve drinks or fold towels. In parallel we see the development of the servile ego, exemplified in the artificial emotions of Siri and its ilk. The promise of robotics, from automated factories to driverless cars and household droids, is the simultaneous disappearance of the manual working class and the reappearance of the servant class, but without the human interest of Downton Abbey. In the case of the (usually female) sex-bot, a lack of intelligence is a feature, to the point where autonomous behaviour is presented as a bug: "female robots rarely stick to their programs, leading to chaos and destruction". The promise of AI is the disappearance of the clerical classes and the perennially frustrating "human factor" in bureaucracy.

The stars my destination


I don't worry about Skynet. Contrary to the creation myth of the singularity, I think artificial intelligence will remain bounded by human intelligence. It may simulate human personality, and even the artefacts of personality such as works of art, and it may be capable of feats of analysis beyond human comprehension, but it will not achieve consciousness and therefore autonomy. The more realistic danger is not the rise of the machines but technology-augmented humans. Despite the failure of Google Glass (at least in its current form), it is pretty clear that the drive to "improve humanity" is not going to let up, and equally clear that the full benefits will only accrue to a privileged elite, and I don't just mean Tony Stark.

And if this new order is threatened from below, then the elite has its traditional plan B: become an émigré. According to Hawking: "We face a number of threats to our survival ... We need to expand our horizons beyond planet Earth if we are to have a long-term future . . . spreading out into space, and to other stars, so a disaster on Earth would not mean the end of the human race. Establishing self-sustaining colonies will take time and effort, but it will become easier as our technology improves". You can safely bet that the World Economic Forum will be an early adopter of the off-world opportunities, bidding a fond farewell to Davos as it cruises above the common herd in the Starship King Abdullah.

2 comments:

  1. Herbie Destroys the Environment, 26 January 2015 at 18:22

    I find the debate around AI misses a more important and currently more relevant issue, namely the consequences of human mastery over nature. As things stand AI is simply a tool that humanity uses to have mastery over nature, and the interesting debate is how that mastery can be abused, and what the philosophical, moral and ethical implications are.

    I find the whole AI obsession a bit of a nerd and geek fest.

    1. One part of the role AI fulfills is to act as a proxy for human concerns about our impact on the planet. The rise of the machines is a story of the revenge of the material world (robots are made of metals and other materials pulled from the ground).

      In other words, we fetishise our tools, which avoids the need to address the social relations they embody but also leads us to doubt their reliability. Much of the AI discourse is, as you say, a nerd-fest, but that doesn't explain its prominence in wider cultural debate.
