
Hritik Panchasara

Professor Stout

November 2017

Are the misconceptions surrounding
Artificial Intelligence hampering its own progress?


Are careers like financial analysis or telemarketing necessary for us humans to labour at? Could the Greek mythological figures be simulated and brought to life using a form of superintelligence? Our technology is on a path of such magnitude that it could shape our future for the better. The program JARVIS from the movie Iron Man is a highly advanced artificial intelligence that managed everything related to technology for the protagonist. The idea that something inorganic could be of such high value hints at the future of our own technological race. Artificial intelligence is defined as a subfield of computer science wherein computers perform tasks for humans that we would normally think of as intelligent or challenging. Envision a future where computers and machines carry out our daily human tasks with ease and solve complex issues without any human input. The ability to invent intelligent machines has fascinated humans since ancient times. Researchers are creating systems and programs that mimic human thought and attempt things that humans can do, but is it here that they got it wrong? Humans have always been good at defining problems but not at solving them. Machines are the polar opposite: their computational power helps them solve almost any problem, but not define one. This shows how the two aspects are interdependent and why we look forward to the invention of superintelligence. But issues like creationism and negative typecasting raise the question of whether the misconceptions surrounding superintelligence are hampering its own progress. Scholars like Pei Wang focus on the dynamics of a working model and the inaccuracies in it, while scholars like Yoav Yigael question the emulation of human-like characteristics and abilities in machines. This research paper will focus on the various incorrect approaches towards harnessing this technology, the consequences being derived from them, and the solutions that could be pursued.

One of the main issues surrounding artificial intelligence is that global leaders have an illusion of what it is supposed to be. They constantly try to emulate human beings in machines, which has not been the goal of the technology since its inception. Take the wheel as an example. The wheel was meant to augment the human capacity for transportation, and it successfully paved the way for countless other inventions. In the same way, artificial intelligence was meant to augment our cognition and help us function better; to solve the problems that we could only define. The most common trend is the creation of humanoids like Hanson Robotics’ Sophia, an amalgamation of artificial intelligence and cutting-edge analytical software tuned for peak performance as a “question answering” machine, though it is more of an “andro-humanoid” robot than a true intelligence. Elsewhere, IBM’s drive to replicate human nature has not only been unsuccessful but has also become a financial burden on the company: IBM simply tried too hard to push Watson into everything from recipes to health care, and its revenue has now declined for five consecutive years. This points to a misappropriation of resources, with research fed into pointless products and avenues for what is otherwise a largely versatile technology.

Artificial intelligence used to be a problem-solving machine where commands were entered as explicit parameters. Human programmers would painstakingly handcraft knowledge items that were then compiled into expert systems. These systems were brittle and could not be scaled. Since then, a quantum leap has changed the field of artificial intelligence and pioneered the idea of superintelligence, but somewhere along the way it has been grossly misunderstood. Machine learning is what has revolutionised how we make and train AI. Where knowledge items and structures were once pre-defined by manual programming, machine learning enables us to produce algorithms that learn from unprocessed perceptual data, a process that can be likened to how human infants learn. Is it possible for us to take a system of interconnected, co-dependent devices and process their data in meaningful ways, to pre-empt their shortcomings and avoid errors? Yes. Is it possible for us to build a machine so adept in our languages that we can converse with it as we do with each other? Yes. Can we build into our computers a sense of vision that enables them to recognise objects? Yes. Can we build a system that can learn from its errors? Yes. Can we build systems that have a theory of mind? This may be possible using neural nets. Can we build systems that have a moral and ethical foundation? This we are still learning. AI is still miles from having the same potent, pan-domain ability to learn and plan that a human being has. Humans hold a neurological advantage here, the power of which we do not yet know how to replicate in machines.
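
The shift described above, from hand-coded knowledge items to algorithms that learn from data, can be illustrated with a small sketch. The rule, the data, and the thresholds below are invented for illustration; the learner is a classic perceptron, one of the simplest trainable models, and not any specific system discussed in this essay.

```python
# Era of expert systems: a brittle, hand-coded "knowledge item".
def expert_rule(temp, humidity):
    """Hand-tuned rule: predict rain (1) when it is cool and humid."""
    return 1 if temp < 0.5 and humidity > 0.6 else 0

# Era of machine learning: a perceptron learns a comparable decision
# boundary from labelled observations instead of having it programmed in.
def train_perceptron(samples, labels, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # 0 when correct; +1 or -1 when wrong
            w[0] += lr * err * x1   # nudge the boundary toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# (temperature, humidity) pairs, both scaled to [0, 1]; label 1 = rain.
samples = [(0.2, 0.9), (0.3, 0.8), (0.25, 0.95),
           (0.8, 0.3), (0.7, 0.2), (0.9, 0.4)]
labels = [1, 1, 1, 0, 0, 0]

w, b = train_perceptron(samples, labels)
predict = lambda t, h: 1 if w[0] * t + w[1] * h + b > 0 else 0
assert all(predict(t, h) == y for (t, h), y in zip(samples, labels))
```

The point of the contrast is that the expert rule's thresholds are fixed by a programmer and break when conditions change, while the perceptron derives an equivalent boundary from the labelled examples alone.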

Ever since AI’s inception one question has been asked: is it something to fear? Every advancement in technology draws some apprehension upon itself. The invention of the television was criticised because people complained that it would make the working class procrastinate and grow dull. On the creation of e-mail, society grieved that the personal touch and formality of the letter would be lost. When the Internet became pervasive, it was argued that we would lose our ability to memorise. There is truth in all these claims, but it is also these very technologies that define modern life: we have come to take information exchange for granted no matter the medium, and this in turn has expanded the human experience in substantial ways. Stanley Kubrick’s film “2001: A Space Odyssey” personifies all the stimuli that we have come to associate with AI, as one of its central characters is HAL 9000, a sentient computer programmed to assist the Discovery spacecraft on its voyage from the Earth to Jupiter. HAL was a flawed character in that it chose to value its mission objective over human life. Even though HAL is rooted in fiction, it voices mankind’s fear of being subdued by a being of superior intelligence that is apathetic to our humanity. The AI that researchers and scientists are trying to make today is very much along the lines of HAL, but without its nuance-free single-mindedness in achieving an objective. This is a hard engineering problem; to quote Alan Turing, “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

To build a safe AI, we need to emulate in machines how humans think. This is a task that seems beyond impossible, but it can be broken down into three simple axioms. The first axiom is altruism: the AI’s only goal is to maximise the realisation of our objectives and of our values. Values here do not mean values that are distinctly intrinsic, extrinsic, or purely moral and emotional, but a complex mixture of all of the above, as we humans are not binary when it comes to our moral compasses. This actually violates Asimov’s law that a robot must protect its own existence; under this axiom, self-preservation is no longer the machine’s priority at all. The second axiom is humility: the AI does not know what our human values are, and it must maximise them while remaining uncertain of what they are. This ambiguity about our values works to our advantage here, as it helps us avoid the single-minded pursuit of an objective, as with HAL. In order to be of use, the AI has to have a rough impression of what we want, and it acquires this information predominantly by observing our choices. The question then is what happens if the AI is unsure of its objective. It reasons differently. It considers the scenario where we could turn it off, but only if it is doing something wrong. The AI does not know what “wrong” is, but it reasons that it does not want to do it; here we can see the first two axioms in action. Hence it should let the human turn it off. Statistically, one can estimate the motivation the AI has to permit us to turn it off, and it is directly proportional to the degree of uncertainty about the objective set for the AI. When the AI is turned off, the third axiom comes into play: it infers something about the objectives it should be pursuing, because it infers that what it did was not right. We are factually better off with an AI designed in this way than with an AI built any other way. The scenario above depicts what humans endeavour to accomplish with human-compatible AI. The third axiom draws apprehension from the scientific community because humans behave badly. A lot of human behaviour is not only displeasing but also wrong, which suggests that any AI based on human values will corrupt itself the same way humans have. What one must remember is that just because the maker behaves poorly does not mean the creation will mimic that behaviour. The fundamental goal of these axioms is to provide nuance for why humans do what they do and make the choices that they make. The final goal is to allow AI to predict, for any person, the outcome of all their actions and choices in as accurate a manner as possible.
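
The switch-off reasoning above can be sketched as a toy expected-utility calculation. The payoffs and probabilities below are invented for illustration; the sketch only mirrors the essay's claim that the machine's willingness to be turned off grows in direct proportion to its uncertainty about its objective.

```python
def expected_value_act(p_good):
    """Act immediately: +1 if the action matches human values, -1 if not.
    p_good is the machine's own probability that its action is right."""
    return p_good * 1 + (1 - p_good) * (-1)

def expected_value_defer(p_good):
    """Defer to the human, who permits good actions (+1) and switches the
    machine off before bad ones (0 instead of -1)."""
    return p_good * 1 + (1 - p_good) * 0

# Deferring is never worse, and the advantage of allowing the off switch
# equals (1 - p_good): directly proportional to the machine's uncertainty.
for p_good in (0.9, 0.6, 0.5, 0.1):
    gap = expected_value_defer(p_good) - expected_value_act(p_good)
    assert gap >= 0
    assert abs(gap - (1 - p_good)) < 1e-9
```

In this simplified setting, a machine that is uncertain about its objective has nothing to lose by permitting shutdown, which is exactly the behaviour the first two axioms are meant to produce; a fully certain machine (p_good = 1) is merely indifferent.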

The bigger problem now is how we feed all our values and morals, and the nuances associated with them, into an AI which is essentially an inference engine at this point. Doing this the old-school way, by manually defining every knowledge item, would be impractical. Instead we could leverage the power of AI here: we know that it is already capable of processing raw perceptual data at blinding speed, so we essentially use its intelligence to help us in helping it learn what we value, and its incentive system can be fashioned so that it is incentivised to pursue our ethics, or to perform actions that it calculates we would approve of, using the three axioms stated above. In this way we tackle the difficult problem of value-loading an AI with the resources of an AI. It is possible to build such an artificial intelligence, and since it will embody some of our morals, the fear that people hold for an AI of this capacity is baffling. In the real world, constructing a cognitive system is fundamentally different from programming an outdated software-intensive system. We do not need to program them; programmers instead teach them. To teach an AI how to play chess, we have it play the same game of chess a thousand times, but in the process we also teach it how to discern a good game from a bad game. If we want to create an AI medical assistant, we will teach it endocrinology while simultaneously infusing it with all the complications in a person that could lead to the underlying symptoms. In technical terms, this is called ground truth. In programming these AIs, we are therefore teaching them a sense of our morals. In such cases, humanity must trust an AI as much as, if not more than, a human who is just as well trained.

In “Superintelligence”, a book written by the academic Nick Bostrom, he reasons that AI could not only be dangerous to humanity but that its very existence might one day spell an existential crisis for all of humanity. Dr Bostrom’s primary dispute with AI is that such cognitive systems learn on digital time scales, which means that soon after their inception they will have inferred and deciphered all of human literature; this alludes to their ravenous hunger for information, and there might come a day when such a system ascertains that the objectives set for it by humanity no longer align with its own objectives and goals. Dr Bostrom is held in high regard by people of immense stature such as Elon Musk and Stephen Hawking. With all due respect to these academics and philosophers, I feel that their assumptions about AI are erroneous to an extent. Consider the example of HAL stated above: it was only a hazard to the Discovery crew insofar as it was in command of all features of the Discovery spacecraft. This is where Dr Bostrom falters, in assuming that the AI would have to have control over all of our world. The popular stereotype of Skynet from “The Terminator” is a prime example of such a scenario: in the movie, a superintelligence eventually took command of mankind by turning all machines against humanity. However, we must remember that our goal with AI was never to build AIs that could control and harness the weather, direct and manipulate the tides, or command us whimsical and disordered humans. Furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us. And if the three axioms stated above are used as guidelines in the formulation of this omnipotent AI, then not only do we not fear this AI but we cherish it, for it is built in our image, with our values and morals. We cannot protect ourselves from all random acts of violence; humans are unpredictable, and the truth is that some of us are extremists. But I do not think that an AI could ever be a weapon that a non-governmental tertiary party could get its hands on, and for such parties to manufacture an AI is even more far-fetched, as the mobilising of resources and brainpower alone would raise multiple red flags for the authorities of the world to stop whatever devious ploy to overthrow world order in its tracks.

Artificial intelligence is heading in multiple directions, and there is a lack of a centralised effort to develop and advance this science towards a neutral goal. Moreover, humans anthropomorphise machines, and this leads them to believe that the flaws of the maker will be heightened in the flaws of the creation. There are also obscure open problems: what should the neural architecture of an AI look like, how do we make it conscious, and what is consciousness in the first place? Questions like these need answering before we march onwards on our quest for an omnipotent AI. Furthermore, the intricacies of decision theory for an AI are still in their infancy, so there is a long way to go before that is figured out. These problems seem far too advanced and complex to tackle now, but the truth is that the research is already underway, and sooner rather than later we will witness the ushering in of the era of machine intelligence.



Works Cited:

Wang, Pei. “Three Fundamental
Misconceptions of Artificial Intelligence.” Taylor
and Francis Online, 13 August 2007,
Accessed 13 November 2017.

Yigael, Yoav. “Fundamental Issues in
Artificial Intelligence.” Taylor and
Francis Online, 7 November 2011,
Accessed 13 November 2017.

Yudkowsky, Eliezer. “Artificial
Intelligence as a Positive and Negative Factor in Global Risk.” New York: Oxford University Press, 2008. Accessed 14 November 2017.

Hammond, Kristian. Practical Artificial Intelligence for
Dummies. John Wiley & Sons, Inc, 2015. Accessed 14 November 2017.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University
Press, 3 September 2014. Accessed 9 December 2017.