Imagine traveling back in time to witness a critical moment in the invention of science, or the creation of writing, or the evolution of Homo sapiens, or the beginning of life on Earth. No human judgement could possibly encompass all the future consequences of that event – and yet there would be the feeling of being present at the dawn of something worthwhile. The most critical moments of history are not the closed stories, like the start and finish of wars, or the rise and fall of governments. The story of intelligent life on Earth is made up of beginnings.
The beginnings of human intelligence, or the invention of writing, probably went unappreciated by the individuals who were present at the time. But such developments do not always take their creators unawares. Francis Bacon, one of the critical figures in the invention of the scientific method, made astounding claims about the power and universality of his new mode of reasoning and its ability to improve the human condition – claims which, from the perspective of a 21st-century human, turned out to be exactly right. Not all good deeds are unintentional. It does occasionally happen that humanity's victories are won not by accident but by people making the right choices for the right reasons.
Why is the Singularity important? The Singularity Institute for Artificial Intelligence can't possibly speak for everyone who cares about the Singularity. But it seems like a good guess that many of these people share a sense of anticipation of a critical moment in history; of having the chance to win a victory for humanity.
Like a spectator at the dawn of human intelligence, anyone trying to answer directly why superintelligence matters chokes on a dozen different simultaneous replies; what matters is the entire future growing out of that beginning. But we can begin that answer by considering what matters most: the possible benefits to humanity, and the possible risks.
It may not be possible to set an upper bound on what a superintelligence could achieve. Any given upper bound could turn out to have a simple workaround that we are too young as a civilization, or insufficiently intelligent as a species, to see in advance. The most intractable problems facing humanity now may seem mundane and trivial to a superintelligence.
But we can try to describe lower bounds. If we can see how a problem could be solved using technological intelligence of the kind humans use, then at least that problem is probably solvable in a shorter time by a genuinely smarter-than-human intelligence. The problem may not be solved using the particular method we were thinking of, or the problem may be solved as a special case of a more general challenge; but we can still point to the problem and say: "This is part of what's at stake in the Singularity."
If humanity ever discovers a cure for cancer, that discovery will ultimately be traceable to the rise of human intelligence, so it is not absurd to ask whether a superintelligence could deliver a cancer cure in short order. It is probably unreasonable to visualize a significantly smarter-than-human intelligence as wearing a white lab coat and doing the same kind of research that a human would. It may instead view cancer as a special case of the more general problem "The cells in the human body are not externally programmable." A solution to this problem might require full-scale nanotechnology, or some technology we cannot yet imagine. But such a solution would provide a cure not just for cancer but for many other diseases besides.
The creation of superintelligence could therefore be seen as a part of the research effort towards curing cancer. But it could also be seen as part of the effort to cure Alzheimer's disease, end world hunger, make aging and illness voluntary... It would be a hugely significant general-purpose technology which would be of great value to many areas of research. Superintelligence could be applied not just to the huge visible problems but the huge silent problems – if modern-day society tends to drain the life force from its inhabitants, that's a problem that could be solved. At the very least, the citizens of a post-Singularity civilization should have an enormously higher standard of living and enormously longer lifespans than we see today.
How long will it take to solve these kinds of problems after the Singularity? A conservative version of the Singularity would start with the rise of smarter-than-human intelligence in the form of humans with brains that have been enhanced by purely biological means. This scenario is more "conservative" than a Singularity which takes place as a result of brain-computer interfaces or Artificial Intelligence, because all thinking is still taking place on neurons with a characteristic limiting speed of 200 operations per second; progress would still take place at a humanly comprehensible speed. In this case, the first benefits of the Singularity probably would resemble the benefits of ordinary human technological thinking, only more so. Any given scientific problem could benefit from having a few Einsteins or Edisons dumped into it, but it would still require time for research, manufacturing, commercialization and distribution.
But the conservative scenario would not last long. Some of the areas most likely to receive early attention would be technologies involved in more advanced forms of superintelligence: broadband brain-computer interfaces or full-fledged Artificial Intelligence. The positive feedback dynamic of the Singularity – smarter minds creating still smarter minds – doesn't need to wait for an AI that can rewrite its own source code; it would also apply to enhanced humans creating the next generation of Singularity technologies.
One idea that is often discussed along with the Singularity is the proposal that, up until now, it has taken less and less time for major changes to occur. Life first arose around three and a half billion years ago; it was only eight hundred and fifty million years ago that multi-celled life arose; only five million years since the hominid family split off within the primate order; and less than a hundred thousand years since the rise of Homo sapiens sapiens in its modern form. Agriculture was invented ten thousand years ago; the printing press was invented five hundred years ago; the computer was invented around sixty years ago. You can't set a speed limit on the future by looking at the pace of past changes, even if it sounds reasonable at the time; history shows that this method produces very poor predictions. From an evolutionary perspective it is absurd to expect major changes to happen in a handful of centuries, but today's changes occur on a cultural timescale, which bypasses evolution's speed limits. We should be wary of confident predictions that transhumanity will still be limited by the need to seek venture capital from humans or that Artificial Intelligences will be slowed to the rate of their human assistants.
We can't see in advance the technological pathway the Singularity will follow. But it's possible to toss out broad scenarios, such as "A smarter-than-human AI absorbs all unused computing power on the then-existent Internet in a matter of hours; uses this computing power and smarter-than-human design ability to crack the protein folding problem for artificial proteins in a few more hours; emails separate rush orders to a dozen online peptide synthesis labs, and in two days receives via FedEx a set of proteins which, mixed together, self-assemble into an acoustically controlled nanodevice which can build more advanced nanotechnology." This is not a smarter-than-human solution; it is a human imagining how to throw a magnified, sped-up version of human design abilities at the problem. There are admittedly initial difficulties facing a superfast mind in a world of slow human technology, but a smarter-than-human mind would patiently find ways to work around them.
The transformation of civilization into a genuinely nice place to live could occur, not in some unthinkably distant million-year future, but within our own lifetimes. The next leap forward for civilization will happen not because of the slow accumulation of ordinary human technological ingenuity over centuries, but because at some point in the next few decades we will gain the technology to build smarter minds that build still smarter minds. We can create that future and we can be part of it.
There are also dangers for the human species if we can't make the breakthrough to superintelligence reasonably soon. Albert Einstein once said: "The problems that exist in the world today cannot be solved by the level of thinking that created them." We agree with the sentiment, although Einstein may not have had this particular solution in mind. In pointing out that dangers exist it is not our intent to predict a dystopian future; so far, the doomsayers have repeatedly been proven wrong. But if we have to face challenges like basement laboratories creating lethal viruses or nanotechnological arms races with just our human intelligence, we may be in trouble.
Finally, there is the integrity of the Singularity itself to safeguard. Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. Unless this goal is specifically aligned to humanity's interests, the AI will see us merely as competitors for the resources it needs to achieve this goal. If a superintelligent AI causes the extinction of humanity, it will likely be due to indifference, not malice.
This danger is another reason to face the challenge of the Singularity squarely and deliberately. A carefully engineered "Friendly AI" need not present a material or philosophical danger to humanity. It could instead guard humanity against the emergence of other dangerous superintelligent minds. The Singularity Institute currently believes this is the only pragmatic solution to the danger of unfriendly AI. If there is a worldwide ban on artificial intelligence research or human intelligence enhancement, it will just drive the projects underground. We can best safeguard the integrity of the Singularity by confronting the Singularity intentionally and with full awareness of the responsibilities involved.
What does it mean to confront the Singularity? For all its enormous implications, sparking the Singularity – creating the first smarter-than-human intelligence and ensuring its safety – is a problem of science and technology. This is not a philosophical way of describing something that inevitably happens to humanity; it is something we can actually go out and do. Sparking the Singularity is no different from any other grand challenge – someone has to do it.
At this moment, only a tiny handful of people realize what's going on and are trying to do something about it. If you're fortunate enough to be one of the few people who currently know what the Singularity is and would like to see it happen – even if you learned about the Singularity just now – we need your help because there aren't many people like you. This is the one place where your efforts can make the greatest possible difference – not just because of the tremendous stakes, but because so few people are currently involved.
The Singularity Institute exists to carry out the mission of the Singularity-aware and to protect the interests of humanity. Our goal is to ensure the arrival of a positive Singularity in order to realize its human benefits; to close the window of vulnerability that exists while humanity cannot increase its intelligence along with its technology; and to protect the integrity of the Singularity by ensuring that those projects which finally implement the Singularity are carried out in full awareness of the implications and without distraction from the responsibilities involved. That's our dream. Whether it actually happens depends on whether enough people take the Singularity seriously enough to do something about it – whether humanity can scrape up the tiny fraction of its resources needed to face the future deliberately and firmly.
We can do better. The future doesn't have to be the dystopia promised by doomsayers. The future doesn't even have to be the flashy yet unimaginative chrome-and-computer world of traditional futurism. We can become smarter. We can step beyond the millennia-old messes created by human-level intelligence. Humanity can solve its problems – both the huge visible problems everyone talks about and the huge silent problems we've learned to take for granted. If the nature of the world we live in bothers you, there is something rational you can do about it. We can do better with your support.
Don't be a bystander at the Singularity. You can direct your effort at the point of greatest impact – the beginning.
Source: Why Work Toward the Singularity?