Tuesday, January 27, 2015


Some people argue that we will one day reach a point when our machines, having become smarter than us, will themselves be able to make machines that are smarter than them. Superintelligence — an intelligence far outstripping anything we are in a position even to imagine — will come on the scene. We will have attained what is known, in futurist circles, as the "singularity." The singularity is coming. So some people say.

There are singularity optimists and singularity pessimists. The optimists — I think we can rank Ray Kurzweil in this camp — envision a future in which real artificial intelligence helps rid the world of disease and extends our own lives beyond frail biological limitations.

It is the pessimists who are in the news lately. A group of leading thinkers and executives has recently signed a statement urging us to slow down and think through the safeguards necessary to protect us from a race of machines that will know more than us, think faster and farther and less fallibly than us, and that will no longer need us. Such artificial superiority would be able to control us the way we, as a species, have been able to dominate planet earth and her many species.
