The term “robotics” was first coined by the legendary science fiction writer Isaac Asimov in his 1941 short story “Liar!”. Asimov was one of the first to see the vast potential of technologies that had yet to win public approval or interest in his time. Since then, robotics has been on a startling upward trajectory that has placed it at the forefront of cutting-edge technology. While robotics has brought many benefits to modern humanity, it is also the subject of endless heated debate. Humanity is on the verge of a robot revolution, and while many see it as a gateway to progress not seen since the Renaissance, it could just as easily result in the end of the human race. With the ever-present threat of accidentally creating humanity’s unfeeling successors, it is only natural to question how much, if at all, we should allow ourselves to become reliant on our technologies.

“As machines get smarter and smarter, it becomes more important that their goals, what they are trying to achieve with their decisions, are closely aligned with human values,” said Stuart Russell, a professor of computer science at UC Berkeley and co-author of the university’s standard textbook on artificial intelligence. Russell strongly believes that the survival of humanity may well depend on instilling morals in our AIs, and that doing so could be the first step toward ensuring a peaceful and safe relationship between people and robots, even in simple settings. “A domestic robot, for example, will have to know that you value your cat,” he says, “and that the cat is not something that can be put in the oven for dinner just because the fridge is empty.” This raises the obvious question: how on Earth do we convince these potentially godlike beings to conform to a system of values that benefits us?

While experts from several fields around the world grapple with the challenges of creating more obedient robots, others caution that obedience could be a double-edged sword. It may lead to machines that are safer and ultimately better, but it may also introduce an avalanche of problems regarding the rights of the intelligences we have created, and perhaps even provoke a crisis over what it means to be human, or a sentient being at all.


The notion that human/robot relations might prove tricky is far from a new one. In 1942, Asimov introduced his Three Laws of Robotics in the short story “Runaround,” later collected in I, Robot (1950). The laws were designed to be a basic code that all robots must follow to ensure the safety of humans: 1) a robot may not harm a human being; 2) a robot must obey orders given to it unless they conflict with the first law; and 3) a robot must protect its own existence unless doing so conflicts with either of the first two laws. Asimov’s robots adhere strictly to the laws and yet, limited by their rigid robot brains, become trapped in unresolvable moral dilemmas. In one story, a robot lies to a woman, falsely telling her that a certain man loves her, because the robot’s logic interprets the hurt the truth would cause as a violation of the first law. To avoid breaking her heart, the robot breaks her trust, traumatizing her and violating the first law anyway. The conundrum ultimately drives the robot insane. Although fictional, Asimov’s laws have remained a central entry point for serious discussions about the nature of robot morality, and a reminder that even clear, straightforward rules may fail when interpreted by individual robots on a case-by-case basis.

Accelerating advances in AI technology have recently spurred increased interest in the question of how newly intelligent robots might navigate our world. With a future of highly intelligent AI seemingly close at hand, robot morality has emerged as a growing field of discussion, attracting scholars from ethics, philosophy, human rights, law, psychology, and theology. Research institutes focused on the topic have sprung up. The public conversation took on a new urgency when Stephen Hawking warned that the development of super-intelligent AI “could spell the end of the human race.” He joined an ever-growing list of experts who warn that robots might threaten our existence.
