Should Science End Humankind?

"I want you to hold off on your intellectual gag response," the speaker told us. "I want you to stay with me through this 'til we get to the end."

The speaker was Paul Horn, former executive director of research at IBM. He's the man behind Watson, the machine that beat humans at Jeopardy. Horn is a highly informed, deep thinker on future technology. His talk was called "The Co-Evolution of Humans and Machines." His purpose was to get us thinking more deeply about a revolution that, if it comes, would be unlike anything humanity has experienced so far in its long history.

Horn's main argument was that, in the near future, we will build machines surpassing us in intelligence. What those machines then build will, in turn, surpass their creators' intelligence. The process will continue rapidly until, very soon, it yields a new force on the planet: superintelligence. This runaway process is often called the "singularity," and Horn's main job was to argue that, given current trends in technology, something more or less like it is coming.

What happens next (not the subject of Horn's talk) depends on your level of optimism. If you think things will turn out badly, well, then, you know the story. Skynet. The Matrix. Robot overlords.

But if you're an optimist, then you think something wonderful is going to happen. With the help of our super-intelligent machines, we become more.

"More what?" you ask. Well, more than human. We become the next step in evolution — and that will mean humanity, as we know it, will come to an end. What comes next will be a new post-human era (transhumanism, the step in between, is an idea we've covered before in this blog).

But now comes the real question. Even under the most optimistic scenario, where a post-human transformation is available to everyone regardless of race, creed or (the more likely stumbling block) economic status, is it still a good idea? More to the point, is it ethical, just and proper to actively develop technologies that would leave us at the intellectual level of a schnauzer relative to future post-human beings?

Nick Bostrom, a philosopher at Oxford, identifies the core value of transhumanism in the ideal of human potential. Thus, for a transhumanist, raising future generations to the heights that our current potential makes possible is all that matters. As Bostrom puts it:

"This affirmation of human potential is offered as an alternative to customary injunctions against playing God, messing with nature, tampering with our human essence, or displaying punishable hubris."

Bostrom runs through the limits that can be overcome when we transcend the current version of humanity: lifespan, intelligence, bodily functionality, sensory modalities, special faculties and sensibilities. Thus, in a post-human world, our children's children may live for centuries, see in all wavelengths of the spectrum and think trillions of times faster and more deeply than we can even imagine.

It all sounds pretty hard to argue with. But that is just where the gag response steps in.

Whatever post-humans become, it's hard to imagine they will be much like us anymore. Human ideals of beauty and grace may seem offensive or even horrific to them. Imagine the loveliest face you have ever seen. Now replace it with a thin fish-head topped with a high, ribbed fin. Perhaps that configuration, better for shedding heat from super-charged brains, will be the post-human ideal of beauty. And what about post-human values and ethics? Post-human morality might seem wildly wrong to us on everything from simple social etiquette to questions of life and death. Does the fact that those ideals and ethics emerge from a post-human higher intelligence change the importance of the ideals and values we hold?

From physical form (there will be many possibilities) to culture and behavior, it's hard to even imagine how alien our post-human progeny might seem to us, or we to them. Given the likely completeness of the post-human transformation, how ready are we to be so fully replaced? It's a question that has to be on the table because we are, as a culture, rapidly pushing the enabling technologies forward right now.

So even in the most optimistic scenario, does the end of human suffering have to mean the end of humankind (at least this version of it)?

Adam Frank was a contributor to the NPR blog 13.7: Cosmos & Culture. A professor at the University of Rochester, Frank is a theoretical/computational astrophysicist and currently heads a research group developing supercomputer code to study the formation and death of stars. Frank's research has also explored the evolution of newly born planets and the structure of clouds in the interstellar medium. Recently, he has begun work in the fields of astrobiology and network theory/data science. Frank also holds a joint appointment at the Laboratory for Laser Energetics, a Department of Energy fusion lab.
