Institute for Ethics and Emerging Technologies
"Uncertainty About Alien Threats"
May 26th, 2010
In a recently concluded reader poll, more than 40% of respondents said we don’t have enough information to know whether space aliens could pose a threat to Earth.
With a nod to the Fermi Paradox, 20% said if anyone was going to molest us, they should have been here by now.
In contrast, almost one in five of our poll respondents apparently believe that aliens are already here. And another 8% are concerned that there could be real danger in announcing our presence to the Universe.
Are space aliens a threat to Earth?
This poll was conducted in response to an episode of Stephen Hawking’s Discovery Channel program in which he warned that advanced extraterrestrial species likely would pose a serious threat to human civilization.
Two articles we posted on our site, one from IEET Board Member George Dvorsky, and the other from IEET Fellow David Brin, generated a large amount of discussion on the topic.
"The Other Kind of Aliens"
by
David Brin
April 30th, 2010
Institute for Ethics and Emerging Technologies
In response to a flurry of interest that’s been stirred by Stephen Hawking’s new Discovery Channel show—specifically, his lead-in episode about extraterrestrials, wherein he recommended against our calling attention to ourselves—I’ll offer a hurried little riff here, about Hawking and aliens, with added contributions by and about Paul Davies, Robin Hanson, and others.
On his show, Professor Hawking said that aliens are almost certainly out there and that Earthlings had better beware. Instead of seeking them out, humanity should be doing all that it can to avoid any contact. His simple reasoning? All living creatures inherently use resources to the limits of their ability, inventing new aims, desires and ambitions to suit their next level of power. If they wanted to use our solar system, for some super project, our complaints would be like an ant colony protesting the laying of a parking lot.
Want an irony? I am actually a moderate on this issue (as I am regarding Transparency). My top aim, in these recent arguments, has been pretty basic: I want more discussion, and for arrogant fools to stop blaring into space “on our behalf” without at least offering the rest of us the courtesy of first openly consulting top people in history, biology, anthropology—and guys like Hawking—in an honest and eclectic way. Their refusal to do this constitutes just about the most conceited and indefensible behavior by scientists that I have ever seen.
Now, everybody and his cousin appears to have an opinion about aliens. In fact, I know almost nobody who seems willing to wait and entertain a wide variety of hypotheses, in this “field without a subject matter.” It seems that the very lack of data makes people more sure of their imagined scenario, rather than less. And more convinced that those who disagree are dunderheads.
Renowned science philosopher Paul Davies has weighed in with a new book, The Eerie Silence, which seems a bit of a take-off of my own classic “The Great Silence” paper (still the only overall review-survey that has ever attempted to cover the 100+ hypotheses out there to explain our loneliness in the universe). Alas, Paul seems never to have heard of that paper, or of most of the hypotheses in question—he cites me only as a grouch toward METI (“message to ET”). And, while I have long admired Paul’s work and consider him to be quite amazing, I feel he got a bit lazy with this one.
Space Law scholar Nicholas Szabo is much harsher on him than I am, I’m afraid:
Paul Davies’s arguments are pretty lame, and possibly quite disturbing; for example saying: “Just because we go around wiping out our competitors doesn’t mean aliens would do the same.” But that doesn’t mean they wouldn’t, either. The example of life on earth is all we have to go on, and life on earth is Darwinian.
Szabo continues:
Davies also says: “A civilization that has endured for millions of years would have overcome any aggressive tendencies.” But I (Szabo) find that utopian nonsense. By the same reasoning humans should have “overcome any aggressive tendencies” that chimpanzees have. Davies adds: “By comparison, humans would quite likely be considered dangerous warmongers, posing a possible menace to our galactic neighbors in centuries to come. If so, then ET may act to eliminate the threat…”
Um, so much for their peacefulness. George Mason University economist and philosopher Robin Hanson responds:
Many species here on Earth have endured for millions of years while retaining “aggressive” tendencies, and even very “mildly” bellicose aliens, ones who would only exterminate us if they could make a plausible case that we might pose a future menace, should still be of great concern to us. I sure don’t want to be exterminated “just in case.” Wouldn’t it make more sense to shut up until either we don’t look so menacing, or until we are strong enough to defend ourselves?
Another quotation from Szabo:
Davies continues: “...if we didn’t mend our violent ways. Ironically, the greatest danger from an alien encounter may be ourselves.” In other words, ETI really does pose a threat after all, but it’s our own fault, so we shouldn’t (we are presumably left to conclude) try to protect ourselves from this threat beyond taking a profound moral lesson from this flight of imagination and mending our own ways. This “reasoning” from splendidly fashionable PC attitudes, combined with his own imputation of human psychology to imaginary entities, leads to a rather grotesquely self-loathing conclusion: Davies puts humans on trial against aliens he has conjured up from his imagination and finds the humans guilty and deserving of genocide. Fortunately, we have much better reasons to try to be more peaceful than the conjectured attitudes of hypothetical ETI. A good start to achieving human peace would be to withdraw moral support from people who hate their fellow human beings.
While I react less pungently than Szabo… and in fact see a bit of merit in Paul’s point… it remains rather tiresome for the reflex to always be to assume that aliens will automatically be more elevated than us. (Yet, willing to judge and crush us, rather than help us get better.)
In fact, out of sheer ornery contrariness and a habitual wish to avoid limits on thinking, I’m tempted to wonder if humanity may be among the MOST pleasant sapient races in the galaxy!
Just imagine a high tech species descended from solitary stalking carnivores, like tigers, or loner infanticides, like bears, or pack carnivores, or paranoid herd herbivores, or mammoth harem-keepers like elephant seals. We come from tribes of long-lived, relatively patient and contemplative, reciprocal-grooming, gregarious apes, whose male-female differences are relatively small…
...all traits that militate toward some degree of otherness-empathy, which may not happen very often across the stars. And STILL we are violent MoFos!
Furthermore, suppose we concede the common SETI talking point that aliens “would have to have learned to avoid much war, given the destructive power of advanced weaponry.” Hm, well, maybe. But is the only way to avoid Armageddon massive racial reprogramming to pacifism? A far more likely way for aliens to stop war and save themselves from self-destruction is the method implicitly commended by Jared Diamond, in his book COLLAPSE.
Hegemony.
The creation of a perfectly stable and perfectly repressive oligarchy that protects itself by maintaining a rigid status quo.
And yes, that kind of stable hegemony can become internally “peaceful,” as in Ming China… and, more briefly, in many other human cultures. And yet, a perfect, control-freak autarchy ain’t exactly utopian by our terms, or altruistic. Moreover, it remains capable of violence, especially when it sees something outside of itself that it may not like.
Oh, but the most frustrating thing is this. When people leap to their own “pat” explanations for the Great Silence, sighing that “of course” the answer is this and such, and then dismissing all contrary views as foolish, they are cheating themselves, and the rest of us, out of what could be the most fascinating and wondrously open-ended argument/discussion of all time!
A marvelous set-to that juggles every science, every bit of history and biology and astronomy and… well everything! It is the great puzzle of who we are, how we may be different, or the same as those mysterious others, out there.
THAT is what makes me sad, when nearly everybody in this field leaps so quickly—on almost zero evidence—to say “of course the answer is….”
I am, above all, a lover of the greatest enlightenment invention—argument—and its accompanying virtues, curiosity, experimentation, reciprocal accountability, and even the aching joy of being forced, now and then, to admit “Okay, you got me, that time. I may have been wrong.”
[David Brin is a scientist and best-selling author whose future-oriented novels include Earth, The Postman, and Hugo Award winners Startide Rising and The Uplift War.]
"Why Stephen Hawking—and everyone else—is wrong about alien threats"
by
George Dvorsky
May 2nd, 2010
Institute for Ethics and Emerging Technologies
Sentient Developments
Stephen Hawking is arguing that humanity may be putting itself in mortal peril by actively trying to contact aliens (an approach that is referred to as Active SETI). I’ve got five reasons why he is wrong.
Hawking has said that, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”
He’s basically arguing that extraterrestrial intelligences (ETIs), once alerted to our presence, may swoop in and indiscriminately take what they need from us—and possibly destroy us in the process; David Brin paraphrased Hawking’s argument by saying, “All living creatures inherently use resources to the limits of their ability, inventing new aims, desires and ambitions to suit their next level of power. If they wanted to use our solar system, for some super project, our complaints would be like an ant colony protesting the laying of a parking lot.”
It’s best to keep quiet, goes the thinking, lest we attract any undesirable alien elements.
A number of others have since chimed in and offered their two cents, writers like Robin Hanson, Julian Savulescu, and Paul Davies, along with Brin and many more. But what amazes me is that everyone is getting it wrong.
Here’s the deal, people:
1. If aliens wanted to find us, they would have done so already
First, the Fermi Paradox reminds us that the Galaxy could have been colonized many times over by now. We’re late for the show.
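Here is a minimal back-of-the-envelope sketch of that claim, in Python; the probe speed, refit time, and galaxy figures are illustrative assumptions of mine, not numbers from the article:

    # Rough galactic colonization timescale, under illustrative assumptions
    GALAXY_DIAMETER_LY = 100_000    # approximate diameter of the Milky Way, in light years
    PROBE_SPEED_C = 0.01            # assume self-replicating probes travel at 1% of light speed
    HOPS = 1_000                    # assumed system-to-system hops needed to span the galaxy
    REFIT_YEARS_PER_HOP = 1_000     # assumed pause at each system to build the next probes
    GALAXY_AGE_YEARS = 10e9         # order-of-magnitude age of the galactic disk

    travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C   # time spent in transit
    refit_years = HOPS * REFIT_YEARS_PER_HOP            # time spent replicating along the way
    total_years = travel_years + refit_years

    print(f"Colonization wave crosses the galaxy in ~{total_years:,.0f} years")
    print(f"That fits into the galaxy's history ~{GALAXY_AGE_YEARS / total_years:,.0f} times over")

Even with these deliberately sluggish assumptions the wave finishes in roughly eleven million years, hundreds of times over within the age of the Galaxy, which is the arithmetic behind “we’re late for the show.”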
Second, let’s stop for a moment and think about the nature of a civilization that has the capacity for interstellar travel. We’re talking about a civ that (1) has survived a technological Singularity event, (2) possesses molecular-assembling nanotechnology and radically advanced artificial intelligence, and (3) has made the transition from a biological to a digital substrate (space-faring civs will not be biological—and spare me your antiquated Ringworld scenarios).
Now that I’ve painted this picture for you, and under the assumption that ETIs are proactively searching for potentially dangerous or exploitable civilizations, what could possibly prevent them from finding us? Assuming this is important to them, their communications and telescopic technologies would likely be off the scale. Bracewell probes would likely pepper the Galaxy. And Hubble bubble limitations aside, they could use various spectroscopic and other techniques to identify not just life-bearing planets, but civilization-bearing planets (e.g., looking for specific post-industrial chemical compounds in the atmosphere, such as elevated levels of carbon dioxide).
Moreover, whether we like it or not, we have been ‘shouting out to the cosmos’ for quite some time now. Ever since the first radio signal beamed its way out into space we have made our presence known to anyone caring to listen to us within a radius of about 80 light years.
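The “about 80 light years” figure is simply light travel time; a one-line sketch (the 1930 start date for strong broadcasts is an assumption for illustration, not a figure from the article):

    # Rough radius of Earth's expanding "radio bubble"
    FIRST_STRONG_BROADCASTS = 1930   # assumed onset of strong broadcast signals
    ARTICLE_YEAR = 2010              # when this piece was written
    radius_ly = ARTICLE_YEAR - FIRST_STRONG_BROADCASTS   # signals travel one light year per year
    print(f"Earliest strong broadcasts have reached stars within ~{radius_ly} light years")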
The cat’s out of the bag, folks.
2. If ETIs wanted to destroy us, they would have done so by now
I’ve already written about this and I suggest you read my article, “If aliens wanted to they would have destroyed us by now.”
But I’ll give you one example. Keeping the extreme age of the Galaxy in mind, and knowing that every single solar system in the Galaxy could have been seeded many times over by now with various types of self-replicating probes, it’s not unreasonable to suggest that a civilization hell-bent on looking out for threats could have planted a dormant berserker probe in our solar system. Such a probe would be waiting to be activated by a radio signal, an indication that a potentially dangerous pre-Singularity intelligence now resides in the ‘hood.
In other words, we should have been destroyed the moment our first radio signal made its way through the solar system.
But because we’re still here, and because we’re on the verge of graduating to post-Singularity status, it’s highly unlikely that we’ll be destroyed by an ETI. Either that or they’re waiting to see what kind of post-Singularity type emerges from human civilization. They may still choose to snuff us out the moment they’re not satisfied with whatever it is they see.
Regardless, our communication efforts, whether active or passive, will have no bearing on the outcome.
3. If aliens wanted our solar system’s resources, they would have taken them by now
Again, given that we’re talking about a space-faring post-Singularity intelligence, it’s ridiculous to suggest that we have anything of material value for a civilization of this type. The only thing I can think of is the entire planet itself, which they could convert into computronium (a Jupiter brain)—but even that’s a stretch; we’re just a speck of dust.
If anything, they may want to tap into our sun’s energy output (e.g., they could build a Dyson Sphere or Matrioshka brain) or convert our gas giants into massive supercomputers.
It’s important to keep in mind that the only resource a post-Singularity machine intelligence could possibly want is one that furthers its ability to perform megascale levels of computation.
And it’s worth noting that, once again, our efforts to make contact will have no influence on this scenario. If they want our stuff they’ll just take it.
4. Human civilization has absolutely nothing to offer a post-Singularity intelligence
But what if it’s not our resources they want? Perhaps we have something of a technological or cultural nature that’s appealing.
Well, what could that possibly be? Hmm, think, think, think….
What would a civilization that can crunch 10^42 operations per second want from us wily and resourceful humans….
Hmm, I’m thinking it’s iPads? Yeah, iPads. That must be it. Or possibly yogurt.
5. Extrapolating biological tendencies to a post-Singularity intelligence is asinine
There’s another argument out there that suggests we can’t know the behavior or motivational tendencies of ETIs, and that we therefore need to tread very carefully. Fair enough. But where this argument goes too far is in the suggestion that advanced civs act in accordance with their biological ancestry.
For example, humans may actually be nice relative to other civs who, instead of evolving from benign apes, evolved from nasty insects or predatory lizards.
I’m astounded by this argument. Developmental trends in human history have not been driven by atavistic psychological tendencies, but rather by such things as technological advancements, resource scarcity, economics, politics and many other factors. Yes, human psychology has undeniably played a role in our transition from jungle-dweller to civilizational species (traits like inquisitiveness and empathy), but those are low-level factors that ultimately take a back seat to the emergent realities of technological, demographic, economic and politico-societal development.
Moreover, advanced civilizations likely converge around specific survivalist fitness peaks that result in the homogenization of intelligence; there won’t be a lot of wiggle room in the space of all possible survivable post-Singularity modes. In other words, an insectoid post-Singularity SAI or singleton will almost certainly be identical to one derived from ape lineage.
Therefore, attempts to extrapolate ‘human nature’ or ‘ETI nature’ to the minds of their respective post-Singularity descendants are equally problematic. The psychology or goal structure of an SAI will be of a profoundly different quality than that of a biological mind that evolved through the processes of natural selection. While we may wish to impose certain values and tendencies onto an SAI, there’s no guarantee that a ‘mind’ of that capacity will retain even a semblance of its biological nature.
So there you have it.
Transmit messages into the cosmos. Or don’t. It doesn’t really matter because in all likelihood no one’s listening and no one really cares. And if I’m wrong, it still doesn’t matter—ETIs will find us and treat us according to their will.
George Dvorsky serves on the Board of Directors for the Institute for Ethics and Emerging Technologies. George is the Director of Operations for Commune Media, an advertising and marketing firm that specializes in marketing science. George produces the Sentient Developments blog and podcast.