Friday, March 28, 2008

Ethics: Robots, androids, and cyborgs

There may come a time when robots, androids, and cyborgs are more than science fiction and develop "intelligence"--and with intelligence comes decision-making, freedom, responsibility--ETHICS.

One of the local television stations last night dug into its vaults and aired "Westworld" [MGM-1973], written and directed by Michael Crichton and starring Yul Brynner, Richard Benjamin, and James Brolin. An inexpensive film shot on studio back lots, desert locations, and Harold Lloyd's estate, it exploits dreams of a perfect fantasy vacation [at $1,000 a day] at an amusement facility called Delos, where the paying adventurer can choose from Roman World, Medieval World, and Western World. Sophisticated androids are the counterparts of the human visitors and bend to the will of human interaction with NO harm to the humans. Well, maybe. Minor glitches happen, which are expected in the complicated computer setup...normal malfunction parameters, as expressed by a review board. It isn't much longer before the "glitches" become more complicated and numerous until finally there ensues android revolt--utter chaos. Humans are dying. Not a good thing for the investors of Delos...paid realism with deathly results. Yul Brynner [the gunslinger from Western World] runs amok, the scientists/programmers are sealed in their control room behind locked doors and perish from asphyxiation, James Brolin dies for real in a shootout with Yul Brynner, and the rest of the film is a quest by the gunslinger to get Richard Benjamin at all costs. Human ingenuity and reason finally foil the gunslinger, but the whole film, beyond its entertainment value, raises the question of the rise of mechanical machines driven by computer programs and the establishment of ethical values. The film never revealed why the androids changed [substandard, untested components?] or why they took an "evil" and "destructive" stance. Why not a stance of superior intelligence? That would have produced a film of little interest, for sure. But the question remains as to the nature of the relationship between androids and the fostering of ethical principles. Is the first stage of quality societal norms a function of a pool of negativity and antisocial behavior--such that, given time, the androids would have evolved into positive, functioning members of their own "species" and interacted well with other species? Where do ethical norms originate?

How about the ultimate android with an attitude problem and an unshakeable pessimistic disposition--"Marvin, the Paranoid Android," equipped with "GPP" [Genuine People Personalities], from the very popular British TV series and the film "The Hitchhiker's Guide to the Galaxy". The story line is somewhat complex in this episodic tale, but here is a good summation by Joseph DeMarco:

"Narrowly escaping the destruction of the earth to make way for an intergalatic freeway, hitchhikers Arthur Dent (Earthling Idiot) and Ford Prefect (Writer for the Guide) go on a crazy journey across time and space. They are read bad poetry which is considered terrible torture, and they are almost sucked out an air lock into space. After almost being killed many times, and narrowly escaping at the end of each chapter, they join forces with Zaphod Beeblebrox (A two-headed cocky alien), Trillian (another worthless earthling) and Marvin (the depressed robot) to search for the answer to the meaning of life, which may have been hidden on the recently demolished earth."

When you are contemplating this topic consider the character "Data" from "Star Trek: The Next Generation", and recall the 1990 episode [#64] called "The Offspring", in which Data has created a daughter called "Lal". Lal is capable of perception and feeling and is given Data's "software" of ethics by "neural transfers". But Lal has some problems with the citizens of the starship. Befriended by Guinan, she is introduced to the inhabitants of "Ten Forward" to broaden her social intercourse. Data and Captain Picard are embroiled in a discussion regarding Lal's removal from the starship when they are interrupted by an emergency message from Counselor Troi. Lal is dying...her functions broke down after experiencing an extraordinary gamut of feelings in the counselor's presence. All attempts to save Lal fail and she succumbs to what we humans all must face--DEATH. Curiously, Lal's demise may be attributable to a more advanced stage of sensitivity: she was unable to interface the new feelings with the supplied software. Consider Data's inability to experience the grief and emotion the crew feels at Lal's loss; he must be content to have only memories of Lal. Data may well be equipped with a sense of ethics when dealing with human issues of loyalty, responsibility, self-sacrifice, etc., but he, and all androids of his caliber, may never fully integrate the full range of human emotions--well beyond the ethics.

Remember Isaac Asimov's "Three Laws of Robotics", which I assume would apply to androids too? All is fine until something breaks down or a truly unique circumstance arises that confounds even the best minds of mankind. (A toy sketch of the laws as a priority ordering follows the list below.)

1. A robot may not injure humans nor, through inaction, allow them to come to harm.

2. A robot must obey human orders except where such orders conflict with the First Law.

3. A robot must protect its own existence insofar as such protection does not conflict with the First or Second Law.
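Written out this way, the three laws amount to a strict priority ordering: the First Law trumps the Second, and the Second trumps the Third. Purely as an illustration--nothing Asimov prescribed, and every class, field, and function name in it is invented for this example--here is a minimal Python sketch that treats the laws as a lexicographic ranking over candidate actions.

# Illustrative sketch only: Asimov's Three Laws as a lexicographic priority
# over candidate actions. All names here are hypothetical; the hard problem
# is evaluating the predicates, not ranking them.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    harms_human: bool      # First Law: the action would injure a human
    allows_harm: bool      # First Law: inaction would let a human come to harm
    disobeys_order: bool   # Second Law: the action conflicts with a human order
    endangers_self: bool   # Third Law: the action risks the robot's own existence

def choose(candidates):
    # False sorts before True, so the tuple ordering enforces
    # First Law > Second Law > Third Law.
    return min(
        candidates,
        key=lambda c: (c.harms_human, c.allows_harm,
                       c.disobeys_order, c.endangers_self),
    )

if __name__ == "__main__":
    options = [
        Candidate("draw_on_guest", True,  False, False, False),
        Candidate("stand_down",    False, False, True,  False),
    ]
    print(choose(options).name)   # prints "stand_down"

The ranking itself is trivial; deciding when a predicate such as "harms a human" is actually true is where the fiction has things break down. A gunslinger whose harms_human flag is mis-set by a faulty sensor or an untested component sails straight through a check like this--essentially the kind of weakness catalogued in the essay discussed next.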

Roger Clarke has written this detailed essay on Asimov's "Laws of Robotics".

As I suspect...there will BE those unique events where Asimov's robot imperatives or any additional instructions will fail: "The freedom of fiction enabled Asimov to project the laws into many future scenarios; in so doing, he uncovered issues that will probably arise someday in real-world situations. Many aspects of the laws discussed in this article are likely to be weaknesses in any robotic code of conduct."

I suppose some wonder about definitions here. For example, is there a clear-cut distinction between robots and androids and another version--cyborgs? Maybe not, and it is all a matter of semantics. It may well be a futile effort to make such a distinction beyond what is common sense. Cyborgs and androids clearly take on the mantle of sentient beings, whereas not all robots are merely task-oriented drones--consider Robby the Robot ["Forbidden Planet", "Lost In Space"] or "Marvin" ["The Hitchhiker's Guide to the Galaxy"]. And consider too that the discussion here really exists within the realm of science fiction and is certainly not correlated to any real-life antecedents [yet], but it is still worthy of discussion and analysis. The advent of sophisticated computers, biotechnology, genetics, etc. forces us to become aware of the possibility of artificial devices becoming human-like and subject to the same issues that humans face--those pesky ethical dilemmas. The development and integration of these new forms may well be part of the whole picture of evolution, as one writer suggested. Seeing the forest is impossible for us, and thus humans may not realize that "humans" aren't the only form of life in a complex evolutionary scheme; that a carbon-based sentient being is neither the end product of evolution nor the only species to embrace ethical issues.

Steve Mizrach offers this essay on cyborg ethics.

Now consider the notion that species ethics are non-transferable; that a species' ethics is a category of one and implicitly forbids overlapping with another category. In such a case the attribution [transfer] of ethical principles [involving the servitude and safety of the primary or transferring species, i.e. Asimov's "Three Laws"] would be impossible for an android. This prompts the following comment: each species is unique in its own ethics, and only chance would afford similarity. Earth residents have one set of ethics while residents of some very distant planet would have theirs--a unique set for each species that, except by chance, could well be diametrically opposed. A learning bridge for the sharing of ethics just may not exist--or the simple transfer of ethics that ensures a species' safety is impossible. Divergent species just may not have common ground for mutual acceptance of ethics.

The notion of sound ethics stemming from religion/theology is not new and does carry some significance. [Unfortunately, on the whole, the implementation of such sound ethics has historically fallen short of worldwide demonstration.] Now, whether an android community would adopt ethical norms [be they their own constructs, or implanted, or borrowed from other beings] to ensure the safety and perpetuation of their species is another matter. It would be arrogant, despite what may appear beneficial, to assume that mankind's ethical resolutions are the best for all species of intelligence, or even the only set of ethics in the universe. Androids may discover that the "self" is the most beneficial status, and one wonders just how long such a stance would last. Androids may have no community sense of ethics as we would understand it. Ethics could become twisted and inverted in meaning compared to what we experience. It is quite correct that survival of the individual and perpetuation of the species is what establishes a set of ethics. Most humans are guided by good ethics and do have a conscience. But you have to wonder just how far those great ethics are really understood and believed. There is no particular pleasure in killing another, but faced with a situation where food for one's survival is at issue, one would consider killing the intruder to sustain one's own existence. Maybe androids would have similar compunctions, or maybe they would have a different set of ethics that enables them to defuse the life-and-death ethical situation.

"From the far horizons of the unknown come transcribed tales of new dimensions in time and space. These are stories of the future, adventures in which you'll live in a million could-be years on a thousand may-be worlds. The National Broadcasting Company in cooperation with Street & Smith, publishers of Astounding Science Fiction Magazine present: "X Minus One".

For you old-time radio fans of yesteryear, X Minus One offered many episodes about robots, androids, humanoids, and the like, but one of the most delightful was an episode called "How To" [Episode #45, which aired April 3rd, 1956]. The story was by Clifford D. Simak, the radio transcription was by William Welch, and it starred Alan Bunce, Ann Seymour, Les Damon, Joe Bell, Jane Bruce, Santos Ortega, and Ben Grauer. As the plot indicates: "A man orders a do-it-yourself robotic dog kit and is accidentally sent a kit for an experimental robot humanoid. The mechanical man is both a blessing and a curse." [Troy Holaday]. This has just about everything: benevolent robots, counterfeiters, tax men, lawyers.

Let's suppose for the sake of the following argument that androids exist and that they have a set of ethics akin to man's: a right to life [murder prohibited], mercy, altruism, etc.--including jurisprudence. Jurisprudence for androids? Yes. If they mirror human ethics of conduct, then they must also abide by the laws of human society and be subject to all of the ramifications.

Robert A. Freitas Jr. offers this essay on jurisprudence.

And finally..."Jennifer, an emotionally troubled whiz kid with obsessive-compulsive disorder, is desperate to find her birth mother in China. But she's also petrified to leave her house. So she uses her technological genius to build Jenny Chow, a surrogate devoid of dysfunction, to take the journey in her place."--New York Academy of Sciences.

A new play has opened called "The Intelligent Design of Jenny Chow" by Rolin Jones; it concerns a lonely young woman [Jennifer Marcus], a genius with acute agoraphobia who builds a companion--an android called Jenny Chow. While reviewer Charles Isherwood is dismayed at the overall tone of the play, especially the whimsical demeanor of the android, it nevertheless illustrates the value of, shall we say, an alternate personality--far more complex and interactive than the standard doll or teddy bear of childhood. For those individuals who find it difficult or impossible to relate to "real" people or the "real" world, such an android is not without merit, for it will offer comfort, lessen loneliness, provide interaction at that person's level of communication, and may possess therapeutic value. Chemical remedies may not be needed--just someone to talk to could be far more beneficial.


2 comments:

  1. CatBar said... (not really Anonymous - just can never get that lousy Google Blogger thingy to work for me!) I really enjoyed reading your very readable article on 'Asimov's Laws of Robotics' (13 April 08) and have saved this to My Favourites.

    I'm a huge Asimov Robots fan and read some brief blog on (the very 'brainy') R.Giskard having 'metacognition'.

    Might you consider elaborating on this subject in a further blog, as I'd be really interested in understanding more about this knowledge as it would apply to a very superior robot such as R. Giskard.

  2. I am not familiar enough with Isaac Asimov's R.Giskard to make a comment. Perhaps you could provide some history and comments.
