Friday, April 6, 2012

Transgressive Man


There’s a scene near the opening of Transcendent Man, the 2009 documentary on futurist Ray Kurzweil, showing archive footage of the then-17-year-old’s appearance on panel show I’ve Got a Secret. Suited and smiling, exuding the awkward confidence of someone becoming slowly aware of a great gift, Kurzweil sits at a piano and rattles off an unusual piece of music. The panel is surprisingly quick to guess his secret: the composition was written by a computer – a computer, it transpires, that Kurzweil also built and programmed. The host, Steve Allen, congratulates young Raymond and predicts a bright future for him.

It’s an auspicious introduction to a man for whom computers are arguably as valuable as human life itself, a man for whom predicting the future is very much part of the present. Kurzweil made his name as an inventor in the ’70s and ’80s, patenting everything from the flatbed scanner and text-to-speech synthesizer (both pre-emptively created to enable the completion of the Kurzweil Reading Machine for the blind), to the Kurzweil K250, a piano synthesizer constructed following a conversation with Stevie Wonder.

Yet it wasn’t until 1990 that Kurzweil’s first book, The Age of Intelligent Machines, put his decades of research and development into a broader context. His seemingly disparate inventions now appeared to be part of a wider effort to nudge humanity towards the age of electronic enlightenment described in those pages: an age in which man and machine coexisted, but in which machines were the superior beings, blessed with artificial intelligence that allowed them to take on many of the tasks formerly falling to human hands.

Then in 1998, the year that Kurzweil had predicted would see a computer defeat a human at chess (he was 12 months out – IBM’s Deep Blue beat Garry Kasparov in 1997), he released his follow-up, The Age of Spiritual Machines. Kurzweil used the opportunity to extend his earlier predictions of a future in which man and machines coexisted to a point at which they would become, essentially, one and the same.

By 2029, he wrote, man would be able to prolong his lifespan indefinitely through advances in biogenetics and nanotechnology, and would ultimately become all but indistinguishable from the robots he had created. Computers would no longer be rectangular objects sitting in our offices, palms or pockets, but integrated in our very beings; virtual reality worlds and internet applications would be accessed via implants, and robots would be petitioning for recognition of their rights as conscious beings. This dawn of a new age became known as the ‘Singularity’ – a sort of Rapture for technophiles – and it turned Ray Kurzweil from an eccentric modern-day Edison into a different sort of figure altogether: one feared by some, revered by others, ridiculed by many.

The subsequent decade has seen a rise in the number of Kurzweil’s critics and the volume of their complaints. He is regularly attacked for what some see as his pseudo-religious reverence for robotics, and the cult status he holds among more fanatical followers. His promise of technology-enhanced immortality has riled the religious right, while traditional scientists take issue with everything from his understanding of human biology to his timeline for the Singularity.

In Transcendent Man, Wired magazine co-founder Kevin Kelly notes how ‘convenient’ it is that the Singularity will come to pass just in time for Kurzweil himself to benefit. It’s a criticism he answers with obvious frustration, but without breaking his stride, his voice never losing the monotone timbre that suggests he may already have begun merging with his software.

“Kevin is thinking linearly,” he argues. “He assumes that the necessary precursors just aren’t there, and I agree with that: the precursors aren’t there, but that would only be a problem if progress were linear, and it’s not. Halfway through the Genome Project, people started panicking because it had taken seven years to complete one per cent of the genome, and they believed that therefore it would take 700 years to complete the whole thing. But they were ignoring the fact that progress was exponential, not linear. The whole project was finished seven years later.”
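The arithmetic behind that anecdote is easy to check. Below is a minimal sketch, taking the story at face value (one per cent sequenced after seven years, with the completed fraction then doubling annually – illustrative figures only), contrasting the linear extrapolation with the exponential one:

```python
# Illustrative only: contrasting linear and exponential extrapolations
# of the Human Genome Project anecdote (1% done after seven years).

completed = 1.0          # per cent of the genome sequenced at the halfway panic
years_remaining = 0

while completed < 100.0:
    completed *= 2       # exponential assumption: output doubles each year
    years_remaining += 1

linear_years = 100 / (1.0 / 7)   # linear assumption: 1% per seven years

print(f"Exponential: done in about {years_remaining} more years")   # ~7
print(f"Linear:      done in about {linear_years:.0f} more years")  # ~700
```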

The law of accelerating returns underpins many of Kurzweil’s beliefs – the exponential rate of development, he claims, means that we’ll see the equivalent of 20,000 years of advancement, at today’s rate, in the twenty-first century alone. “When I came to MIT [in 1967] it had one computer; you needed influence to get inside the building and you had to be an engineer to use it,” he recalls. “Now computers are everywhere, including the poorest nations of the world, and the law of accelerating returns means they get cheaper as they become more ubiquitous. The computer you just called me on is a billion times more powerful per unit currency than the one I used when I was a student, and it will be a billion times more powerful again in another 25 years.”
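A quick back-of-the-envelope check, using only the round figures Kurzweil gives, shows what that claim implies: a billion-fold gain over 25 years amounts to price-performance doubling roughly every ten months.

```python
import math

# Back-of-the-envelope check on the "a billion times more powerful
# per unit currency in 25 years" claim; figures are Kurzweil's, not data.

improvement_factor = 1e9
years = 25

doublings = math.log2(improvement_factor)          # ~29.9 doublings
months_per_doubling = years * 12 / doublings

print(f"{doublings:.1f} doublings in {years} years "
      f"= one doubling every {months_per_doubling:.1f} months")   # ~10 months
```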

There is, however, a great deal of mystery surrounding the exact nature of life after the Singularity. Kurzweil notes that this is unavoidable: that beyond a certain number of exponential increases in technology, and the associated effects on our lives, we can know nothing for certain except that humanity will be very different to how we understand it now. Therein lies a problem he has faced for decades: asking people to open their minds to ideas that set every fibre of conventional wisdom ringing with alarm never gets easier.

There’s a scene in Transcendent Man, for example, in which Kurzweil explains his theories on solar energy to Colin Powell. Solar energy technology, says Kurzweil, is doubling in efficiency every two years, and is only eight doublings away from being capable of filling 100 per cent of America’s energy needs. Powell regards him with a look of cautious optimism steadily subsumed by scepticism, but Kurzweil persists, knowing that deep-rooted notions are the hardest to displace.
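Taken at face value, the claim is simple compounding: eight doublings is a 256-fold increase, which would imply that solar currently meets well under one per cent of demand and, at two years per doubling, catches up in roughly sixteen years. A minimal sketch of that arithmetic, using only the figures given in the film:

```python
# Illustrative arithmetic for the solar claim in Transcendent Man:
# eight doublings away from 100% of demand, one doubling every two years.

doublings = 8
years_per_doubling = 2

growth_factor = 2 ** doublings              # 256-fold increase
implied_share_today = 100 / growth_factor   # ~0.4% of demand
years_to_target = doublings * years_per_doubling

print(f"{growth_factor}x growth implies solar meets ~{implied_share_today:.1f}% "
      f"of demand today and reaches 100% in ~{years_to_target} years")
```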

“This is another of these myopic views: that we’re running out of energy, that we’re running out of food and water. That’s nonsense: we could have 10,000 times more energy than we need from the sun, all of it free, if only we could convert it, and our ability to do that is increasing as we approach the point where we can apply nanotechnology to solar panels. Same with water: 98 per cent of the world’s water is saline water or dirty, but we’re increasingly capable of cleaning it thanks to emerging technologies.”

Kurzweil blames the prevailing notion of a world going to hell in a handbasket on increased visibility. If there’s a battle in Fallujah or Tripoli, we’re there, he says, on our laptops or PalmPads, facing the human tragedy of the situation in ways we never could before. In reality, the number of deaths in wartime has plummeted since the mechanized wars of the twentieth century; he also cites strong evidence to support the idea that democratic nations don’t go to war with each other, and has watched the recent revolutions in the Middle East with great optimism, not least because of the role played by social networking technology in destabilizing former dictatorships.

“In The Age of Intelligent Machines, I predicted that the Soviet Union would be swept away by the then-emerging decentralized communication network. People didn’t believe that a superpower could be overcome by a few Teletype machines. The battle was won by a clandestine network of hackers that kept everyone in the know. The old paradigm of the authorities grabbing a central TV or radio station and plunging everyone into the dark just didn’t work anymore. And now, with the rise of social networking and young people being able to compare their own ways and standards of living with others, everybody wants the same thing. It’s a powerful democratizing force, and it’s bringing the nations of the world closer together all the time.”

Set against these faintly utopian scenarios are some increasingly audible voices of warning. There’s the philosophical question of how much man can merge with machine before the essence of humanity itself is lost (Kurzweil counters that transgressing limitations is what defines humanity), but there are more pressing concerns from critics who offer convincing reasons why the Singularity could well bring about the end of human life altogether.

Kevin Warwick, a cybernetics professor at the UK’s Reading University, made famous by Project Cyborg (in which a series of electrodes inserted under the skin allowed him to remotely control everything from lights and heaters to a robotic hand that mimicked his own), envisages a ‘Terminator scenario’: intelligent machines calling the shots, humans reduced to the role of slaves or exterminated altogether. Hugo de Garis, former head of Xiamen University’s Artificial Brain Laboratory, has written at great length on what he calls the ‘Artilect War’: a worldwide conflict between those resisting and those submitting to the new AI. It’s a war that Kurzweil quips would resemble the American military fighting the Amish, yet some are already spearheading pockets of resistance – including Bill McKibben, author of Enough: Staying Human in an Engineered Age, and advocate of the anti-technology ‘relinquishment’ movement.

“I think relinquishment is a bad idea for three reasons,” says Kurzweil. “Firstly, it would deprive us of profound benefits. I think we have a moral imperative to try to cure cancer, for example, and overcome the suffering that still exists in the world. Secondly, it would require a totalitarian government to implement a ban on technology. And thirdly, it would force these technologies underground, where they would actually be more dangerous.”

Kurzweil is remarkably sure of himself when it comes to accelerating humanity’s race toward the Singularity. He simply has no interest in the suggestion that what should be done ought to be given the same consideration as what could be done. His boundless belief has brought him under fire from those, including Wired’s Kelly, who liken his single-mindedness to that of a modern-day prophet, and raise the possibility that it may be Kurzweil’s own certainty that the Singularity is inevitable that causes it to become so.

Instead, Kurzweil advocates ethical standards modelled on the 1975 Asilomar guidelines for biotechnology, alongside technical defences of the kind already deployed against software viruses, which he argues have an excellent track record against those looking to turn technology against its users.

“You have to be an optimist to be an inventor or entrepreneur,” he concludes. “I’m not oblivious to the dangers, but I’m optimistic that we’ll make it through without destroying civilization.”

Source: Transgressive Man
