International Journal of Intelligence Science, 2013, 3, 170-175
http://dx.doi.org/10.4236/ijis.2013.34018 Published Online October 2013 (http://www.scirp.org/journal/ijis)
On the Limit of Machine Intelligence
Jinchang Wang
School of Business, Richard Stockton College of New Jersey, Galloway Township, USA
Email: jinchang.wang@stockton.edu
Received September 23, 2013; revised October 20, 2013; accepted October 26, 2013
Copyright © 2013 Jinchang Wang. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT
Whether digital computers can eventually be as intelligent as humans has been a topic of controversy for decades. Neither side of the debate has provided solid arguments proving the case either way. After reviewing the contentions, we show in this article that machine intelligence is not unlimited. There exists an insurmountable barrier for digital computers to achieve the full range of human intelligence. In particular, if a robot had a human's sentience of life and death, then a logical contradiction would arise. Therefore, a digital computer will never have the full range of human consciousness. This thesis substantiates a limit of computer intelligence and draws a line between biological humans and digital robots. It makes us rethink such issues as whether robots will remain forever one of the tools for us to use or will someday become a species competing with us, and whether robots can eventually dominate humans intellectually.
Keywords: Artificial Intelligence; Nature of Consciousness; Intelligent Computers
1. Introduction
Computer intelligence has achieved tremendous progress in the past decades. Machines are now more capable of doing jobs that require intelligence, and this trend is continuing and accelerating. Before the first industrial revolution, driven by mechanical machines and electricity, it was difficult for people to imagine what a machine could do. Nowadays, amid the second industrial revolution, driven by electronic computers, it is difficult for people to imagine what a computer cannot do. People are increasingly concerned about the fate of human beings: What if a computer is as intelligent as a human? Will a computer have a "mind"? Is there a limit to machine intelligence? Are we creating a species against ourselves? These questions are closely related to fundamental philosophical curiosities such as who we are, where we come from, what intelligence and consciousness are, what the fundamental difference is between a machine and a human, and whether immortality is possible. How far intelligent machines can go has been a long-standing debate among scientists. Neither side of the contention has so far provided solid arguments proving the case either way.
We investigate in this article whether there is a limit to the capability of computer intelligence, and show logically that machine intelligence is not unlimited. Electronic computers will never have the full range of human consciousness and mental experience. We prove it by a counter-example: anxiety of death, a piece of human consciousness, cannot be possessed or emulated by an electronic robot.
2. Debate on Machine Intelligence
How intelligent a computer can eventually be has been debated for almost sixty years. Many scholars take it for granted that digital computers can achieve the full range of human intelligence and mentality and will become more intelligent than humans. The vanguards of artificial intelligence (AI) cherished an optimistic prospect of machine intelligence. Alan Turing predicted in 1950 that computers would pass the Turing Test by the year 2000 [1].
Marvin Minsky, a founder of the AI laboratory at MIT, never cast any doubt on the possibility of computers with full human intelligence and consciousness: "Most people still believe that no machine could ever be conscious, or feel ambition, jealousy, humor, or have any other mental life-experience. To be sure, we are still far from being able to create machines that do all the things people do. But this only means that we need better theories about how thinking works." [2]
Allen Newell and Herbert Simon, at the time when AI was just being established as a subject, depicted the future of computer intelligence: "There are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can
handle will be co-extensive with the range to which the
human mind has been applied.” [3] They thereafter ex-
tended that idea into their Physical Symbol System Hy-
pothesis, “A physical symbol system has the necessary
and sufficient means for general intelligent action.” [4]
They defined a physical symbol system (PSS) as a physical device that contains a set of interpretable and combinable symbols and a set of processes that can operate on the symbols. A human brain and a computer are both examples of a PSS. Their hypothesis states that a PSS possesses everything needed for thought and intelligence, and that something is intelligent if and only if it is a PSS.
This idea was later known as “strong AI” [5].
An argument for strong AI is that “there is no reason
to believe that biological mechanisms are inherently im-
possible to replicate using nonbiological materials and
mechanisms." [6] Human intelligence has remained almost unchanged for thousands of years, while computers' capability has been doubling every two years [7]. So, it is inevitable and inexorable that computers will intellectually catch up with humans sooner or later.
Minsky once described the human brain as a “meat
machine, no more no less”. “How could a device made of
silicon be conscious? How could it feel pain, joy, fear,
pleasure, and foreboding? It certainly seems unlikely that
such exotic capacities should flourish in such an unusual
silicon setting. But a moment’s reflection should con-
vince you that it is equally amazing that such capacities
should show up in carbon-based meat.” [8] “If we’re a
carbon-based complex, computational, collocation of
atoms, and we’re conscious, then why wouldn’t the same
be true for a sufficiently complex silicon-based com-
puter?” [6]
Stephen Wolfram put forward his "thesis of software of everything" in 2002, a claim even stronger than strong AI: "Beneath all the complex phenomena we see
in physics there lies some simple program which, if run
for long enough, would reproduce our universe in every
detail.” [9]
Ray Kurzweil, a computer scientist and futurist, believes that a silicon computer can be as conscious and spiritual as a human. He is optimistic about the prospective humanoid era, taking it as a blessing for humans. In his informative and enlightening books, "The Age of Intelligent Machines," "The Age of Spiritual Machines," and "The Singularity Is Near," and in many of his articles, he argued, with seemingly irrefutable reasons, that the new era of humanoids is inexorable and near. "The human
brain presumably follows the laws of physics, so it must
be a machine, albeit a very complex one. Is there an in-
herent difference between human thinking and machine
thinking? To pose the questions another way, once com-
puters are as complex as the human brain, and can match
the human brain in subtlety and complexity of thought,
are we to consider them conscious? … They (computers)
will appear to have their own free will. They will claim
to have spiritual experiences. And people—those still
using carbon-based neurons—will believe them.” [10]
Gilder and Richards commented on Kurzweil's utopia: "Kurzweil's record as a technology prophet spurred interest in this more provocative prediction that within a few decades, computers will attain a level of intelligence and consciousness both qualitatively and quantitatively beyond human capacity." [6]
Storrs Hall, a nano-scientist and computer system architect, had no doubt that computers would soon achieve human intelligence and consciousness, and was optimistic about "moral machines": "AI is coming. It is clear we should give conscience to our machines when we can. It also seems quite clear that we will be able to create machines that exceed us in moral as well as intellectual dimensions." [11]
Hans Moravec, a leading expert in robotics, called for humans to give way to the new species of intelligent machines: "We should keep researching, and should proudly work to create robots that will supplant humans as Earth's superior species. Humans should just get out of the way of this self-imposed evolution." [12]
William S. Bainbridge, as the deputy director of the Division of Information and Intelligent Systems at the National Science Foundation (NSF), predicted the possible impact of robots on human longevity: "in principle, and perhaps in actuality three or four decades from now, it should be possible to transfer a human personality into a robot, thereby extending the person's lifetime by the durability of the machine." [13]
Stephen Hawking, the well-known theoretical physicist at the University of Cambridge, joined the chorus in 2013, predicting that computers could have human intelligence by "copying the brain" and "so provide a form of life after death." [14]
Some people do not believe that computers can be human-like. The strongest among them are dualists, who take it for granted that the mind is something separate from, and fundamentally different from, physical things. However, they have not provided convincing arguments showing why mind is not physical, nor have they said what mind actually is if it is not physical. AI researchers seem not particularly interested in refuting dualism. "The only refutation worth doing is simply to build the AI, and then we will see who is right." [11]
As a successful and highly regarded computer architect and entrepreneur in Silicon Valley, Jeff Hawkins firmly denied the possibility of human-like computers: "Can computers be intelligent? For decades, scientists in the field of artificial intelligence have claimed that computers will be intelligent when they are powerful enough. I don't think so. … Brains and computers do fundamentally different things." [15]
Some scientists and philosophers have been more circumspect about the future of artificial intelligence. The subtleties of the human mind are so delicate that science seems incapable of interpreting them, and a "mechanically" programmed computer seems unlikely to emulate them. John Searle, a philosopher at the University of California, Berkeley, challenged the concept of machine intelligence with his "Chinese room argument," arguing that "they (computers) are immensely useful devices for simulating brain processes. But the simulation of mental states is no more a mental state than the simulation of an explosion is itself an explosion." He rejected strong AI's claim that "the mind is just a computer program" [5].
Physicist and mathematician Roger Penrose of the University of Oxford enumerated in his book "The Emperor's New Mind" mysterious phenomena and processes of the human mind, and said: "According to this perception, all aspects of mentality are merely features of the computational activity of brain; consequently, electronic computers should also be capable of consciousness, … I do my best to express, in a dispassionate way, my scientific reasons for disbelieving this perception, and arguing that the conscious minds can find no home within our present-day scientific world-view." He hypothesized that the thorough explanation of the human mind would be found somewhere in the "quantum world" [16].
Mathematician and psychologist Douglas Hofstadter at Indiana University, Bloomington, believed that the human mind was unlikely to be programmed directly; instead, it would be an emergent phenomenon, a by-product of sufficiently complex computer programs. He wrote in his Pulitzer-Prize-winning book "Gödel, Escher, Bach: An Eternal Golden Braid": "Will emotions be explicitly programmed into a machine? No. That is ridiculous. Any direct simulation of emotions cannot approach the complexity of human emotions, which arise indirectly from the organization of our minds. Programs or machines will acquire emotions in the same way: as by-products of their structure, of the way in which they are organized—not by direct programming." [17] But he did not provide necessary or sufficient conditions for such an imagined "emergent phenomenon" to occur.
The arguments on both sides of the debate are more or less assertive, lacking strict proofs. The subtleties and mysteries of human mentality suggest how difficult and unlikely it is for silicon mechanisms to realize the biological mind. But unlikeliness does not amount to impossibility. On the other hand, the opponents of strong AI have failed to point out any insurmountable barrier preventing computers from catching up with human intelligence, even as computers become smarter and smarter. The limit of computer intelligence remains an open issue.
Ray Kurzweil once challenged the opponents of strong AI to show proofs for their assertions that non-biological things cannot be capable of what biological things are: "If one is searching for barriers to replicating brain function, it is an ingenious theory, but it fails to introduce any genuine barriers." [18]
In the next section, we tackle this open issue by showing a logically "genuine barrier" that is insurmountable for computer intelligence, so that a robot controlled by an electronic computer cannot possess full human mentality.
3. Computer Intelligence Is Not Omnipotent
As reviewed in the last section, whether computer programs are capable of emulating all human consciousnesses has been debated for decades among computer scientists, philosophers, physicists, mathematicians, psychologists, and other scientists. The issue is even viewed by some scholars as one that can be neither proved nor disproved. In this section we reason that a digital robot is not omnipotent: a robot is not able to have the full range of human mental experience.
3.1. Definitions of Terms
We first define the terms to be used in this section. The meanings of these everyday words need to be specified exactly before we use them in reasoning.
Consciousness in this article refers to all mental phenomena of a person, such as thinking, calculating, reasoning, feelings, emotions, intuitions, and faith. Andy Clark categorized the mental phenomena of "consciousness" into three levels [8]:
1) The feelings that characterize daily experience (hunger, sadness, desire, …);
2) The flow of thoughts and reasons;
3) The meta-flow of thoughts about thoughts, thoughts about feelings, and reflection on reasons.
Although lower-level consciousness is observed in all animals, higher-level phenomena such as awareness of self and thoughts-about-thoughts are associated only with human beings.
A computer program, or simply a program, refers to a set of instructions to a computer in a computer language. By robot we refer to a machine under the control of an internal digital computer, able to move and act like a human.
A program is copiable, or duplicatable, if the instructions in the program can be duplicated so that the original and the copy are literally identical and the result of running the copy is indistinguishable from the result of running the original. With this definition, once a computer is able to do square-root calculation, for example, its program can be copied to other computers so that they all can calculate square roots, and one cannot tell whether the result of the square root of 3.76, for example, comes from running the original program or its copy. Once a computer is able to do spell-checking, other computers, by copying, can do exactly the same. With this definition, if a computer is someday programmed to have the consciousnesses of "happiness," "self-awareness," and "anxiety of death," then other computers, by copying, will also have the same consciousnesses of "happiness," "self-awareness," and "anxiety of death."
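To make the definition concrete, here is a minimal sketch in Python (our own illustration, not part of the paper's argument): a "program" is represented as a string of instructions, its copy is literally identical, and the results of running the two are indistinguishable.

    # Minimal sketch of the "copiable" definition (illustrative only).
    original_program = "result = x ** 0.5"     # square-root calculation
    copied_program = str(original_program)     # a literal duplicate of the instructions

    def run(program: str, x: float) -> float:
        scope = {"x": x}
        exec(program, {}, scope)               # run the instructions
        return scope["result"]

    assert copied_program == original_program                        # literally identical
    assert run(copied_program, 3.76) == run(original_program, 3.76)  # indistinguishable results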
3.2. A Digital Computer Is Copiable
A program in a digital computer is copiable. That is because: 1) a program for a digital computer is a step-by-step procedure, an algorithm; 2) any algorithmic procedure, according to the Church-Turing Thesis [16,17,19], can be converted to a set of equivalent 0-1 codes for a Turing Machine; and 3) the 0-1 codes on the tape of a Turing Machine are obviously copiable, as the sketch below illustrates.
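To make step 3) concrete, the following Python sketch (our own toy illustration) treats a machine's program as a finite table of symbols, in the spirit of a Turing Machine's tape codes; duplicating the table is trivial, and the duplicate behaves identically.

    # Illustrative toy machine: its "program" is a finite transition table
    # (state, symbol) -> (new state, symbol to write, head move).
    # This one flips every bit on the tape and halts at the right end.
    program = {("s", 0): ("s", 1, 1), ("s", 1): ("s", 0, 1)}

    def run(prog, tape):
        state, head, tape = "s", 0, list(tape)
        while 0 <= head < len(tape):
            state, tape[head], move = prog[(state, tape[head])]
            head += move
        return tape

    program_copy = dict(program)    # duplicating the encoded symbols
    assert program_copy == program  # original and copy are literally identical
    assert run(program, [0, 1, 1]) == run(program_copy, [0, 1, 1])  # identical behavior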
If all the programs in a computer are copiable, then we
say that the computer is copiable. All computers we have
had so far are copiable since all programs in a digital
computer are copiable.
3.3. Examples of Human Consciousnesses
Self-awareness is a conscious trait "associated with the tendency to reflect on or think about one-self" [20]. Self-awareness is the part of intelligence that differentiates subjective selfhood from other beings. It belongs to the third level of consciousness in Clark's classification (see Section 3.1). A human is capable of reflecting on his own mental experience and recognizing self-identity, while other animals are not.
Death is the destination of life. Death comes to everyone without exception. No matter how hard one tries to forget it and avoid it, it will come anyway, and once one has it, one has it forever. No one knows exactly what it is like after death. Anxiety is "an emotion of feeling dominated by apprehensions" [21]. Anxiety of death is apprehension and dread of the mystery and obscurity of death. The feeling of anxiety of death arises from a human's intelligence, from realizing that "I live only once" and "if I died, then the world currently around me would disappear forever."
People may disagree on the exact definition of "consciousness." But they would agree that self-awareness and anxiety of death are two examples of human consciousness, which is sufficient for the purpose of the thesis of this article.
3.4. Anxiety of Death Defies Copying
Let AD denote "anxiety of death," SA denote "self-awareness," and R denote a digital robot. Suppose that R is programmed to have all human consciousnesses. A human has the consciousnesses of AD and SA, and so does robot R. Suppose all the programmed consciousnesses in robot R, including SA and AD, are copied to another robot R'. According to the definition of "copying" in Section 3.1, R and R' have identical consciousnesses after copying, including self-consciousness and self-identity. That is, the self-identities of R and R' are the same: R and R' are the same "self," which can be written as R-self = R'-self. Now, each of R and R' has multiple "selves." Realizing this, R would not fear death, since its own "death" would not result in the disappearance of the world around itself, owing to the existence of R'-self, which is another R-self. Therefore, R would not have anxiety of death. By the same token, robot R' would not have anxiety of death either.
Hence, the outcome of copying R to R' is: the copy R' does not have AD as supposed, and the original R loses AD. Such "copying" is not the "copying" we defined in Section 3.1, since the original is not completely duplicated and this "copying" changes the original. In other words, if AD is copied, then AD is lost. Therefore, we say that AD defies copying.
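This step can be stated as a small propositional lemma. The following Lean sketch uses our own illustrative encoding (the proposition names Copied, SameSelf, and AD are ours, not formal objects from the paper): if copying yields an identical self, and an identical duplicated self removes the dread of death, then a copied robot cannot retain AD.

    -- Illustrative propositional encoding of Section 3.4 (a sketch, not a formal model).
    theorem ad_defies_copying
        (Copied SameSelf AD : Prop)
        (h1 : Copied → SameSelf)    -- copying yields an identical self-identity
        (h2 : SameSelf → ¬AD)       -- a duplicated self removes anxiety of death
        : Copied → ¬AD :=           -- hence a copied robot has no AD
      fun hc => h2 (h1 hc)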
3.5. A Robot Will Never Have the Full Range of Human Consciousness
Suppose that a digital robot R has been programmed to have all the intelligence and consciousness that a biological human has. So, R has SA and AD, the same as a human. Robot R is as free to move and act as a human is, so as to maintain those human consciousnesses related to movement and action.
As discussed in Section 3.4, the programmed consciousness AD in R is copy-defiant: if AD is copied together with SA to another robot, then AD is lost from both the original and the copy. So, the programs for AD and SA in robot R must have no copy in other robots if AD is to keep existing. Since R is as intelligent as humans, R would be able to figure out that "a copy of the programs in me would relieve my anxiety of death." So, R would have a motive to make a copy of the programs of its consciousnesses.
If R managed to get itself copied, then AD would no longer exist in R, since AD defies copying. But humans would still have AD. So, robot R would have consciousness different from humans', at least in its feelings toward death. Note that R could easily get itself copied, because R is free in action and copying computer programs is a simple routine of computer operation: duplicating all the programs inside R would be as easy as making a backup of all the files in a computer.
If R did not get itself copied for some reason, then R would still have AD as it initially had. But robot R would have a motive to make a copy of itself to relieve AD, and R would know that this could be done as easily as doing a "copy-paste" on a computer. On the other hand, although a biological human also has a motive to relieve AD by getting himself copied, he realizes that copying himself is very difficult, almost impossible. Now, R has AD but knows that its "death" can easily be avoided by getting itself copied, while a human has no idea how to avoid death. For robot R, death can be avoided with a simple process of program-copying; for the human, death is the inexorable destination. So, robot R and a human would inherently have different feelings toward death. The difference is analogous to the feeling of a man who has caught a slight cold that can be cured easily versus that of a man who has a terminal cancer that is beyond cure.
Thus, robot R will not have the same consciousnesses as a human does, whether or not R has a copy of its consciousnesses: if R got itself copied, R would not have AD while the human still would; if R did not get itself copied, R would have a feeling toward death different from the human's. This contradicts the assumption that R has been programmed to have all the consciousnesses that a biological human has. In other words, the assumption that digital robot R has all human consciousnesses is not logically valid. Therefore, a digital robot controlled by an electronic computer cannot be programmed to have the full range of human consciousness, since a robot would not have, at the least, a human's sentience toward "death."
The above arguments rest on an implicit assumption: we are not able to duplicate an existing person's self by any technology, such as programming or cloning. So, robot R does not have the self-consciousness of anyone existing in the world. This assumption ensures that anxiety of death (AD) remains with the human, since a human's self is not programmed or copied to a robot R or to any other object. If a person knew how to copy himself, then he would lose the sentience of AD owing to the existence of his "copies." Copying the "self" has been a dream of human beings, but it has not yet come true. "Cloning" is a step toward realizing the dream, but it is, at least for now, not "copying the self." Cloning a sheep duplicates the sheep so that the copy looks identical to the original. But the "self" of the cloned sheep, if it could sense it, is not the original one. Imagine that one day we could clone a human. The original person would not agree to be destroyed after being cloned, because he would know that the cloned copies were not "himself"!
Therefore, a robot cannot have a human's sentience of death and living, which will hold true at least until the time when humans come to know how to duplicate themselves. The thesis below summarizes what we have derived:
Thesis-1:
A robot controlled by an electronic computer will not have all human consciousnesses, and so will not have the same mental experience as a human, as long as we do not know how to duplicate an existing person's self-consciousness.
3.6. Summary of the Arguments
The arguments establishing Thesis-1 in Sections 3.2 through 3.5 are summarized as follows:
a) Anxiety of death is a part of human consciousness.
b) Anxiety of death defies copying.
c) Electronic computers are copiable.
d) Robots controlled by electronic computers cannot have anxiety of death. (By b) and c))
e) Robots controlled by electronic computers cannot have the full range of human consciousness. (By a) and d))
Therefore, machine intelligence is not unlimited. "Anxiety of death" is a piece of human consciousness that is unachievable for computer intelligence.
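The chain a) through e) can likewise be compressed into a one-step propositional derivation. The Lean sketch below (again our own illustrative encoding, not the paper's formalism) takes a) and d) as premises and derives e):

    -- Illustrative encoding of the chain a)-e) (a sketch only).
    theorem no_full_range
        (FullRange AD Copiable : Prop)
        (ha : FullRange → AD)    -- a) full human consciousness includes anxiety of death
        (hd : Copiable → ¬AD)    -- d) from b) and c): a copiable robot cannot have AD
        (hc : Copiable)          -- c) electronic computers are copiable
        : ¬FullRange :=          -- e) no full range of human consciousness
      fun hf => hd hc (ha hf)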
4. Implications and Discussions
Kurzweil made a prediction in his 2005 book: "By the late 2020s, we will have completed the reverse engineering of the human brain, which will enable us to create nonbiological systems that match and exceed the complexity and subtlety of humans, including our emotional intelligence." [22] Stephen Hawking predicted in September 2013 that "It's theoretically possible to copy the brain on to a computer and so provide a form of life after death." [14] Thesis-1 in this article shows that these predictions will not come true. It is logically impossible for electronic computers and robots to have a human's full experience of consciousness and mentality through either programming or copying, even though programming or copying some human consciousnesses is not impossible.
"Nothing is impossible unless it causes a logical contradiction" (Gottfried Leibniz). It is the feature of "duplicatability" of computer programs that would cause a logical contradiction if a computer had the consciousness of anxiety of death. All digital computers are copiable, as reasoned in Section 3.2, and all man-made machines we have had so far are copiable. By Thesis-1, machines will not be as conscious as humans, no matter how complex they are, as long as they are copiable. In other words, machines will not be as conscious as humans unless they are uncopiable. Up to now, humans have not made an uncopiable machine, and how to construct one is still beyond our knowledge.
Emulating human intelligence on computers has been an ultimate quest of artificial intelligence (AI). However, "today's AI bears little resemblance to its initial conception" [23]. Rather than pursuing the re-creation of human intelligence, today's AI tries to use the accomplishments of AI so far, combined with the Internet, to master discrete tasks such as trading in the stock market, automatic driving, Internet search, and fraud detection [24]. This redirection of AI does not signal the abandonment of AI's original quest of emulating human intelligence; it signals the difficulty of that quest. Emulating human intelligence on a machine seems a more distant goal than people initially thought, though it remains the goal for AI researchers.
We humbly admit that we are very ignorant about our own consciousness, emotions, mind, mentality, spirit, and soul. Is there any piece of our consciousness, other than anxiety of death, that may not be emulated by an electronic computer? Can an existing person's mind be programmed? Can a human's "self" be copied? What would an uncopiable machine be like, and how would it work? Is consciousness a "by-product" emerging from sufficiently sophisticated programs, as proposed by Hofstadter [17]? If so, how does such an "emerging" process occur? Is the emergent consciousness copiable? These are examples of the issues on which we should keep reflecting.
REFERENCES
[1] A. Turing, “Computing Machinery and Intelligence,”
Mind, Vol. 59, 1950, pp. 433-466.
http://dx.doi.org/10.1093/mind/LIX.236.433
[2] M. Minsky, “The Society of Mind,” Touchstone, Simon
& Schuster, New York, 1986, p. 19.
[3] A. Newell and H. Simon, "Heuristic Problem-Solving: The Next Advance in Operations Research," Operations Research, Vol. 6, No. 6, 1958.
[4] A. Newell and H. Simon, “Computer Science as Empiri-
cal Inquiry: Symbols and Search,” Communications of the
Association for Computing Machinery, Vol. 19, No. 3,
1976, pp. 113-126.
http://dx.doi.org/10.1145/360018.360022
[5] J. Searle, “The Mystery of Consciousness,” The New
York Review of Books, New York, 1997, p. 14.
[6] G. Gilder and J. Richards, "Are We Spiritual Machines? The Beginning of a Debate," In: J. Richards, Ed., Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI, Discovery Institute Press, Seattle, 2002, p. 11.
[7] G. E. Moore, "Cramming More Components onto Integrated Circuits," Electronics, Vol. 38, No. 8, 1965.
[8] A. Clark, "Mindware: An Introduction to the Philosophy of Cognitive Science," Oxford University Press, New York, 2001, p. 2.
[9] S. Wolfram, “A New Kind of Science,” Wolfram Media,
Inc., Champaign, 2002, p. 35.
[10] R. Kurzweil, “The Age of Spiritual Machines—When
Computers Exceed Human Intelligence,” Penguin Books,
Middlesex, New York, 1999, pp. 5-6.
[11] J. Storrs Hall, “Beyond AI: Creating the Conscience of
the Machine,” Prometheus Books, New York, 2007, p.
367.
[12] H. Moravec, "Bill Joy's Hi-Tech Warning," 2001. http://www.gigablast.com/get?q=&c=dmoz3&d=118034453864&cnsp=0
[13] W. S. Bainbridge, “Progress toward Cyberimmortality,”
In: I. Basset, Ed., The Scientific Conquest of Death: Es-
says on Infinite Lifespans, Immortality Institute, Wausau,
2004, p. 117.
[14] M. Bennett-Smith, "Stephen Hawking: Brains Could Be Copied To Computers To Allow Life After Death," The Huffington Post, Science, 2013. http://www.huffingtonpost.com/2013/09/24/stephen-hawking-brains-copied-life-after-death_n_3977682.html
[15] J. Hawkins and S. Blakeslee, “On Intelligence,” Holt
Paperback, Times Books, Henry Holt and Company, New
York, 2004, p. 5.
[16] R. Penrose, “The Emperor’s New Mind,” Oxford Univer-
sity Press, Oxford, 1999, pp. 61-64.
[17] D. R. Hofstadter, "Gödel, Escher, Bach: An Eternal Golden Braid," Basic Books, Inc., New York, 1999, pp. 676-677.
[18] R. Kurzweil, "The Evolution of Mind in the Twenty-First Century," In: J. Richards, Ed., Are We Spiritual Machines? Discovery Institute Press, Seattle, 2002, p. 48.
[19] S. Russell and P. Norvig, “Artificial Intelligence—A
Modern Approach,” 3rd Edition, Prentice Hall, New Jer-
sey, 2010, p. 8.
[20] “Encyclopedia of Psychology,” Oxford University Press,
Oxford, Vol. 7, 2000, p. 209.
[21] J. A. Popplestone and M. W. McPherson, “Dictionary of
Concepts in General Psychology,” Greenwood Press,
New York, 1988, p. 21.
[22] R. Kurzweil, “The Singularity Is Near—When Humans
Transcend Biology,” Penguin Books, New York, 2005, p.
377.
[23] S. Levy, "The A.I. Revolution," Wired, Vol. 19, No. 1, 2011, pp. 86-89.
[24] F. Salmon and J. Stokes, “Bull vs. Bear vs. Bot,” Wired,
Vol. 19, No. 1, 2011, pp. 90-93.