Discomfort And Joy - Bill Joy and vision of the future
Zac Goldsmith

Bill Joy, co-founder of Sun Microsystems, is one of the world's leading computer gurus. But now he is warning that, if the pace of technological change is not slowed, we could be inventing the species that will replace us. Is he a prophet or a madman?
What do you get if you cross Bill Gates with Theodore Kaczynski, the man better known as the Unabomber? A maniac technophobe willing to kill to save us all? A reclusive computer nerd with nothing interesting to say? System error type 201?
Surprisingly, there is someone who represents the hypothetical love child of these apparently polar extremes -- but let's explore the 'parents' first.
Consider for example a few lines from Gates's book, The Road Ahead: 'I used to date a woman who lived in a different city. We spent a lot of time together on email. And we figured out a way we could sort of go to the movies together. We would find a film that was playing about the same time in both our cities. We would drive to our respective theatres, chatting on our cellular phones. We would watch the movie and on the way home we would use our cellular phones again to discuss the show. In the future this sort of virtual dating will be better because the movie watching will be combined with videoconference.' (I can't help but imagine how Gates might have ended his virtual evening.)
This simplistic techno-enthusiasm could not be further removed from the apocalyptic prophecies of Kaczynski, the man whose terror of technology led him to send home-made bombs to computer scientists and university professors, some of whom he killed, others of whom he maimed for life. 'If trends continue,' he wrote in the public manifesto which ultimately led to his capture, 'and scientists succeed in developing intelligent machines that can do all things better than human beings can do them, the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage, the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide. The fate of the human race would be at the mercy of the machines. They will have been reduced to the status of domestic animals.'
Maybe Gates's vision of the future seems more likely to you than Kaczynski's. But then, you may not have spent the majority of your adult life studying technological matters. Bill Joy has, and he is beginning to wonder.
Bill Joy's credentials as one of the world's leading computer gurus are impeccable. Chief scientist and co-founder of Sun Microsystems, one of America's leading computer firms, he was appointed three years ago as co-chairman of Clinton's Presidential Information Technology Advisory Committee, set up to provide 'guidance and advice on all areas of high-performance computing; to accelerate development and adoption of information technologies that will be vital for American prosperity in the twenty-first century'. In other words, Bill Joy sits at the top of the American technological pecking order, and as such is partly responsible for the major social experiment that is technotopia.
But then, last year, Joy changed his tune. He published a lengthy article in the technophile's bible, Wired magazine, in which he warned, in almost apocalyptic tones, of the dangers of going too far with computer technology. 'Its potential to destroy humanity,' he wrote -- 'even to supplant us as the planet's dominant species -- is far greater than that of nuclear weapons; yet we are blindly moving towards a world in which such a possibility becomes a reality.'
Kaczynski said much the same; but he was an eccentric, a loner and a killer, and no one wanted to listen. Joy, though, one of America's technological royal family, is another matter. Bill Joy is our Gates-Kaczynski hybrid, and his vision of the future is worth listening to, because he knows, better than almost anyone else, exactly what he is talking about.
TECHNOTOPIA
Talking to him now, he says that his principal fear is nanotechnology. 'The ability,' he explained to me, when I confessed ignorance, 'to manipulate structures at the atomic scale. The ultimate dream is to be able to build any structure you can design by assembling it atom by atom. This is not yet possible, but the field is advancing rapidly.'
'I think it is no exaggeration to say,' he wrote in Wired, 'that we are on the cusp of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation states, on to a surprising and terrible empowerment of extreme individuals. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today -- sufficient to implement the dreams of Kurzweil.' Kurzweil is the author of The Age of Spiritual Machines, which details a utopian future in which humans achieve near-immortality by becoming one with robotic technology.
How soon could such an intelligent robot be built? 'The coming advances in computer power,' he wrote in the Wired article, 'seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species -- to an intelligent robot that can make evolved copies of itself. We are opening Pandora's most terrifying box, yet people have barely begun to take notice. We are designing technologies that might literally consume ecosystems.'
FANTASY OR REALITY?
My own reaction to his predictions, reading as they do almost like far-out science fiction, was at first disbelief. I asked him how serious these predictions are. Had he not allowed himself to get carried away? 'I tried to write the article in a way that wasn't extremist,' he says. 'I now believe that certain of the situations are, in fact, more dangerous than I portrayed. For instance, I probably understated the danger from the biological sciences. While the dangers of industrial chemicals and the like are generally understood, I don't think this is the case for the more extreme dangers I am describing.'
He is in no doubt as to the likelihood of what he is predicting. 'There is no question these technologies are powerful enough to do extreme harm,' he says. 'There's also no question we're on a course to give them to everybody, and this will ultimately lead to disaster.' As to his role as prophet of doom, he simply says, 'Look at Rachel Carson. She painted a pretty bleak picture. While we heeded her warnings about DDT, we are still a long way from dealing with the consequences of industrial chemicals -- consider the continuing issues with chlorine, as outlined in Joe Thornton's Pandora's Poison'.
WHAT FUTURE ARE WE FACING?
In his Wired article, Joy describes the elation he experienced in his youth on reading a book by Eric Drexler, Engines of Creation, on the potential wonders of nanotechnology and manipulation of matter at the atomic level. He goes on to describe how, 10 years later, on rereading Drexler's book, he was 'dismayed to realise how little I had remembered of its lengthy section called "Dangers and hopes" and dismayed too at the naivete of Drexler's safeguard proposals. One such danger was that "plants" with "leaves" no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop -- at least if we make no preparation. We have trouble enough controlling viruses and fruit flies'.
Joy maintains that, ever since he first began developing computer software, he has been concerned, albeit less so than today, about the consequences of his actions. But on rereading Drexler's and other similar books, and closely following developments in nanotechnology, he began to realise that 'I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable'.
Bill Joy is certain that technological 'progress' will land us with forces capable of destroying life on earth -- forces that once unleashed we may never again be able to control. And what really bothers him is that nobody is talking about it. 'We are making these huge changes, introducing hugely disruptive technologies which are arguably more powerful than anyone can imagine,' he insists, 'and there is essentially no public discussion.' Of utmost concern in his mind is the need to avoid handing control of those technologies to 'crazies' -- presumably the Kaczynskis of the world. 'Within 20 years, and quite possibly much sooner, we run the risk of irretrievably giving too much power to crazy people. It's time to talk about these things now.'
I point out to him, though, that it is legitimate corporations, of the sort that he has headed, that are responsible for developing these technologies, and that they are doing so for purely commercial reasons. I point out that his own predictions suggest we could even be heading towards extinction, perhaps simply as a result of a laboratory error. Is it not possible, I ask him, that these technologies are already in the hands of 'crazies'?
He sidesteps the question. Instead, he says that, unlike the period of the cold war, when foreign enemies took part in an arms race, today the enemy is within us all -- 'our habits, our desires, our economic system, and our competitive need to know'. What we are in danger of, he says matter-of-factly, 'is a self-inflicted wound, a self-inflicted extinction'.
On the subject of science and technology itself, Joy made himself very clear in his Wired article. 'Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists,' he wrote. We are charging towards a murky future, 'hardly evaluating what it might be like to try to live in a world that is the realistic outcome of what we are creating and imagining'. What's more, he wrote, 'There is no profit in publicising the dangers'.
ME VERSUS THEM
Is there not a danger that he might be vilified by his peers for his outspokenness -- excommunicated from the club? After all, his message is not one which the leaders of business and science are likely to want to hear. 'I've heard almost nothing from what I would call the high-tech leadership,' he explains. 'Perhaps I'm playing a tune with notes they can't hear, or perhaps they have filters up, blocking out what they perceive as a stridency they associate with the environmental movement.'
'This can't become "me" versus "them",' he insists. 'They'll put dirt on me, right? -- "His software was a commercial failure, he's just bitter," etc.' Alarming words from someone in his position.
Joy stresses, again and again, how different tomorrow's computer technologies will be, in almost every way, from the technologies we have experienced before. 'Used to living with almost routine scientific breakthroughs,' he wrote in Wired, 'we have yet to come to terms with the fact that the most compelling 21st century technologies -- robotics, genetic engineering and nanotechnology -- pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: they can self-replicate. A bomb is blown up only once -- but one bot can become many, and quickly get out of control.'
But interestingly, and maybe surprisingly, his critique of technology does not extend to the society that pursues technotopia. I bring to his attention a number of recent studies on the effects of the Internet on users. The Stanford University Institute for the Quantitative Study of Society, for instance, recently found that the Internet 'was creating a brand-new wave of social isolation in the US, raising the spectre of an atomised world,' where people spend more time 'home, alone, anonymous'. I ask him for his views on Clinton's recent pledge to spend $150 billion of taxpayers' money on equipping every school classroom with up-to-date computers. 'Computers in schools,' he says simply, 'are overrated.'
It is as though his computer world, indeed his life, has suddenly been thrown into question by his discovery of the problems surrounding technology. 'There is no doubt we are interfering in systems that we don't understand, and we will surely be responsible for large parts of the environment breaking down,' he acknowledges, casually, 'but I'm attempting to focus my energy on a more narrow problem -- one which I understand.'
WHAT TO DO?
If the problem is as dire as he says it is, and the timescale as short, what action does he propose we take? I suggest that multinational corporations are out of control, and that government is both unwilling and unable to take responsible action.
'I don't believe we will do the right thing unless we are honest about the problem,' he says, 'and it's not in many people's interests to be honest about the problem. It's not even in government's interest to face the problem because of how the political system works. The problem with politicians, and politics in general, using the example of genetic engineering, is that there is always a very powerful small interest that would be seriously affected economically if a ban were implemented, whereas lots of people stand to lose in a smaller way if biotech products were allowed to go through. Interest groups have much more influence and can easily balance out an enormous numerical disadvantage.'
And the sheer speed of modern society, he says, makes positive change more difficult. 'I don't think that government regulation can in any case keep up with the current pace of technological change. Why are we in such a hurry? Why do we have to get there in one generation instead of three?'
Yet 'companies can be monitored,' he assures me. 'What makes it difficult is that there is not one single big offender. We could send strong signals through economic mechanisms, taxation, regulations, insurance requirements and other means to direct business towards the right questions with an emphasis towards a common-sense answer. We need some new, gracefully applied, limits.'
First though, 'we desperately need honest discussion within the province of people in the universities who have academic freedom or who aren't entangled with business interests. We need objective reporting by people willing to take the time to write it up in a way that everyone can understand. We need scientists with strong credentials to examine these problems, and we need to turn to the nonprofit sector for guidance.
'One of the problems has been that industry has been able to profit enormously from consuming irreplaceable things, generating environmental problems, and without internalising the costs. At the moment, society, in other words the taxpayer, has to foot the bill when business messes up. In a sense, the fact that there is no requirement for business to take these things into account amounts to a massive indirect subsidy. These unpleasant consequences have to be internalised within the corporate economic model.'
A possible mechanism, he suggests, for ensuring corporations avoid taking unnecessary risks for quick profit is insurance. 'Companies should have to take catastrophic insurance when they are dealing with dangerous technologies. If the Canadian company which shipped over the GM seed had gone through that process and taken an insurance policy, they would have set up procedures to make sure that the stuff didn't get mixed in order to get a decent rate from the insurance company.'
INTERNATIONAL COUNCIL OF CAUTION
But Joy is talking about relatively short-term measures, which require full co-operation with the very businesses that are driving through the techno-revolution that he fears. In the long term, as he himself accepts, the process of technological change and the consequent risks need to be democratised. In any normal terms, and by any normal standards, a corporation should not be permitted to toy with fundamentals like genes, nanotechnology and the like without the full endorsement of those whose lives will be affected should anything misfire. 'If someone is doing biological, chemical, nuclear, genetic, robotic or nanotech weapons of mass destruction [for example], then that is of global interest. One thing is for sure -- we urgently need to set up a world regulatory body if only to protect us from ourselves.'
Joy likes the idea of international regulation of new technologies, and it features heavily in his thinking, such as it is, about tackling the problems of potential runaway technology. 'The situation in ancient Greece,' he says, 'was that the community was governed by people drawn by lot. Perhaps we could set up an international council on the same basis, continuously selecting new faces to avoid possible co-option. Obviously if it is business dominated and consequently biased, it won't accomplish the task.'
THE ROAD AHEAD
Perhaps, in the long term, we could. But today, in any case, most of the solutions presented are borderline lunatic -- abandoning planet earth, for instance, in search of further stars to 'develop'. And even the marginally more serious proposals are far from convincing. First, they rest on the assumption that there can be technical solutions to what are essentially systemic problems. Second, they assume that technocrats are willing and even able to provide protection; and third, they assume an understanding of the problem itself, which in many ways even Joy himself seems not to have grasped. Had he done so, he would surely accept that the economic path currently being pursued by most governments of the world, coupled with an insatiable demand by an increasingly dissatisfied people for novelty, may well lead straight to the 'realistic outcome' that unnerves him. As Luis Alvarez, a leading physicist whom Joy cites in his Wired article, has said, those responsible for coming up with such techno-fixes are 'very bright guys with no common sense'.
The only sane alternative, he concedes, is to limit our pursuit of certain kinds of knowledge, even though, as he puts it, such limits 'fly in the face of the human experience'. Common sense, he argues, demands that we re-examine basic, long-held beliefs. 'The American-style economic system encourages an infatuation with the "new". No one seems prepared to slow down and be cautious. You only have to look at our experiment with antibiotics. We clearly jumped into bed with a new technology with our eyes shut, and we are paying for it now, with multi-drug-resistant diseases. DDT and thalidomide fall into the same category. And where is the evidence that cellphones are safe?
'Let's have scientists understand that they have an ethical responsibility; that the distinction between pure and applied science is largely gone. You can't create a scientific breakthrough and not think about what the consequences of the technological use of it are. The Dalai Lama has pointed out what Western research has clearly shown: having more things doesn't necessarily make people happy. Beyond some point it is just a nuisance. The simplicity movement, fractional ownership -- all point towards smarter paths.'
SEEKING CHANGE
In his Wired article, Joy expresses irritation at the reaction of some of his colleagues to the problems he sees. 'Many other people who know about the dangers still seem strangely silent,' he writes. 'When pressed, they trot out the "this is nothing new" riposte -- as if awareness of what could happen is response enough.'
In his own life, he says he feels a 'deepened sense of personal responsibility -- not' he quickly adds, 'for the work I have already done, but for the work that I might yet do, at the confluence of the sciences'. Yet none of what he now believes seems to have made him do what many might think would be the obvious first step -- stop developing the very technology that he is warning about. Ask him about this and you might get a curious answer: 'I have always believed,' he wrote in Wired, 'that making software more reliable, given its many uses, will make the world a safer, better place; if I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine such a day may come.'
But he says he is actively trying to raise the dangers at the top level. 'I want to go next January to the World Economic Forum and hopefully we'll have some discussion on this there. I plan to talk about these issues at the OECD meeting in Paris. Obviously economics will play a major part in the discussions. I consider that to be a very positive thing to do. I'm trying to work with numerous science organisations, I've already met with half a dozen, and I'm writing a book with more emphasis on what we can do about these problems. My article triggered an enormous response.'
He is, I remind him, in the very unusual position of straddling two quite different camps. With one foot in big business and one in the technological world, will he seek to use his position to influence America's likely leaders of tomorrow? 'I've written to all the major candidates. So far, I've only received a response from George Bush.' Positive? Evading the question, he answers, 'I'm personally a supporter of Gore, but if Bush is elected I'm happy to work with him. After all, even if Gore is elected we could easily find that he isn't very active on this issue'.
I return to the question of his involvement. Government has shown a total lack of interest in regulating business activity. We only need look at the farcical manner with which biotechnology is currently regulated to see this. Business has generally shown zero self-restraint when dealing with potentially explosive technologies. Global institutions, too, have historically been prone to domination by big business. What hope is there that such a new institution, as recommended by Joy, could avoid such compromises? What hope is there that such an institution would even be set up? Will he lobby for such a body? 'At this point, I'm just lobbying for a discussion. Perhaps G7 or GATT might be able to accommodate these kinds of things,' he says, not realising the contradictions in his own analysis.
Whatever Bill Joy decides to do, there is no doubt he will play a vital role in the coming debate. Though perhaps he has not fully thought out the true implications or the logical conclusions to his 'tune,' his intentions are clear, and unlike others in his field, he is willing to rethink some very basic assumptions.
Our conversation has ranged from the hysterical to the solemn, and the tape recorder attached to my phone crackles to an abrupt end. I comment, a stiff joke, that my own technological shortcomings mean an end to our discussion. No response, just a shard of consolation: 'In the end, we do tend to overestimate our design capabilities'.
Zac Goldsmith is editor of The Ecologist.
'WHY THE FUTURE DOESN'T NEED US'
Bill Joy's article in the April 2000 issue of Wired magazine laid out in detail his fears for the future of technology. The following quotes from that article highlight his key arguments and concerns.
CREATING INTELLIGENT MACHINES:
By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today. As this enormous computing power is combined with the manipulative advances of the physical sciences and the new, deep understandings in genetics, enormous transformative power is being unleashed. These combinations open up the opportunity to completely redesign the world, for better or worse: the replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavour.
NANOTECHNOLOGY:
Manipulation of matter at the atomic level could create a utopian future of abundance, where just about everything could be made cheaply, and almost any imaginable disease or physical problem could be solved using nanotechnology and artificial intelligences. A subsequent book imagines some of the changes that might take place in a world where we had molecular-level 'assemblers.'
Assemblers could make possible incredibly low-cost solar power, cures for cancer and the common cold by augmentation of the human immune system, essentially complete cleanup of the environment, incredibly inexpensive pocket supercomputers -- in fact, any product would be manufacturable by assemblers at a cost no greater than that of wood -- spaceflight more accessible than transoceanic travel today, and restoration of extinct species.
Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones. Nanotechnology has clear military and terrorist uses, and you need not be suicidal to release a massively destructive nanotechnological device -- such devices can be built to be selectively destructive, affecting, for example, only a certain geographical area or a group of people who are genetically distinct.
MERGING MAN WITH MACHINE:
A second dream of robotics is that we will gradually replace ourselves with our robotic technology, achieving near immortality by downloading our consciousnesses... But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.
THE PURSUIT OF KNOWLEDGE:
Aristotle opened his Metaphysics with the simple statement, 'All men by nature desire to know'. We have, as a bedrock value in our society, long agreed on the value of open access to information, and recognise the problems that arise with attempts to restrict access to and development of knowledge. In recent times, we have come to revere scientific knowledge. But despite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we re-examine even these basic, long-held beliefs.
TOLD YOU SO: New developments give weight to Joy's worries
On August 31, Nature magazine published the most recent developments in robotics: computers can now design and build their own robots with no intervention from humans. The development is being hailed as a crucial step towards the 'artificial evolution' of intelligent robots. Hod Lipson and Jordan Pollack of Brandeis University, Massachusetts, USA, connected a robot-designing computer to a machine capable of automatically building robots to the computer's specification. Given the task of creating a robot capable of moving across the floor, the computer produced designs which it 'evolved' by introducing random mutations -- much as in biological evolution -- until the required result was achieved. Once the computer had perfected the design, it instructed the machine to build the robot.
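The evolve-by-mutation loop described above can be illustrated in miniature. The sketch below is purely hypothetical and is not the Brandeis researchers' actual system: it stands in for a robot 'design' with a simple bitstring and for locomotion ability with an invented fitness function, but the loop -- score the candidates, keep the fitter ones, introduce random mutations, repeat until the required result is achieved -- is the same basic idea.

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=200, seed=0):
    """Toy evolutionary loop: a 'design' is a bitstring, and fitness
    scores how well it performs the task. Fitter designs survive and
    are copied with random mutations, generation after generation."""
    rng = random.Random(seed)
    # Start from a population of random candidate designs.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for gen in range(generations):
        # Rank the designs by how well they perform.
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == genome_len:  # required result achieved
            return pop[0], gen
        # The fitter half survives; each survivor spawns a mutated copy.
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # one random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness), generations

# Stand-in fitness: count of 1-bits (imagine 'distance travelled').
best, gens = evolve(fitness=sum)
```

With a fitness function this simple the loop converges quickly; the real research task -- evolving a physical mechanism that crawls -- used the same principle with a vastly harder fitness landscape.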
In a separate development, from the same issue of Nature, three computer scientists from the University of Lausanne, Switzerland, report success in teaching 'swarms' of small robots to interact with each other and work 'in a self-organised manner, similar to workers in an ant colony' -- foraging for 'food' which is then taken back to their 'nest.'
COPYRIGHT 2000 MIT Press Journals
COPYRIGHT 2001 Gale Group