Intelligent machines or “machines with self-acquired intelligence” (the term denotes all types of AI, including robots, drones, automated vehicles and bots) are becoming increasingly popular. Earlier “expert systems” were based on sets of rules pre-programmed into them by human beings, but today’s intelligent machines can “acquire” more and more intelligence without any human intervention. This “self-learning” ability allows these machines to take actions based on such “intelligence”.

Our society will benefit at large from these technological breakthroughs, which will enhance efficiency, improve process reliability, optimise resource usage, and more. However, these intelligent machines are not error-free and may cause accidents that damage private or public property and/or injure persons. The question of how to fix liability for such damages, and who is culpable, does not yet have a clear answer. Can we use the principles of strict liability, or the principles of agency, and attribute liability to the owner of the machine or to its manufacturer? Or is the person who programmed it liable? In order to prescribe a law on this vital topic, we could consider applying the concepts of strict liability, autonomy, agency, intention (mens rea), responsibility and culpability. Through this paper the author intends to provoke discussion on this complex and emerging legal issue.

The concept of product liability for the actions and failures of machines presupposes human control over those machines. Hence, for any accident caused by a machine, liability is fixed after determining the cause of the accident. If it is found to have been caused by the negligence or failure of the user/operator, then that individual is liable. If machine failure is clearly established as the cause of the accident, then the manufacturer of the machine is made liable. If there is a manufacturing defect, a design defect, or even a failure on the part of the manufacturer to warn users about non-obvious risk factors, the manufacturer is invariably made liable for accidents resulting therefrom. In addition, the concept of strict liability may be imposed on the manufacturer.

In the case of intelligent machines, the concept of the manufacturer’s strict liability cannot be applied in entirely the same form as it is applied to other machines. Intelligent machines are a combination of many systems, such as hardware (electronic and mechanical), software and data, and they have the ability to function outside human control and to make decisions based on the data they gather. These decisions can go wrong due to inaccurate raw data gathered by the machine or due to inaccurate processing of that data. Hence, in a way these machines are imbued with human attributes such as the ability to process data, arrive at conclusions, make decisions based on those conclusions and implement actions. In many instances, human beings have no control over them. Thus, the increasing autonomy of intelligent machines challenges the above concept of liability.

This autonomy endows machines with the ability to decide and arguably grants “personality” to them. Can we therefore apply the principles of control to fix liability, assuming such personality in these machines? In the case of an error committed by a human employee in an employer-employee relationship, the employer is generally held liable for the actions of the employee under the principles of vicarious liability. In such a relationship, the employer bears a strict, secondary liability for the acts of its employees. Under the common-law doctrine of agency, the “superior” or, in a broader sense, a third party that has the “right, ability or duty to control” the activities of the doer whose action has resulted in an injury is made liable. Applying a similar analogy, can we consider intelligent machines to be agents of their principals? If yes, then a complete redefinition of the concept of agency is required. Can we call these intelligent machines “legal persons” so as to conclude that they are agents of their masters? If we say yes, then a question arises: is it the whole machine, or only the “intelligence” that makes it function, that can be called a “legal person”? Since intelligent machines are created using many components from different sources, are we to consider the whole machine as a legal person, or can we attribute personality to each of its components? Jurisprudence says that a person who is liable should be able to understand the scope and extent of the risks and consequences of his/her actions and, therefore, the associated liability. However, in the case of intelligent machines, the creator is not always in a position to assess this. In such instances, what kind of insurance schemes can we create to mitigate risks? Can we introduce the principle of joint and several liability in such cases? Joint and several liability is more burdensome to the contributor who has the deepest pockets.

Another analogy is treating these machines like domesticated animals: can the principles of strict liability be applied to intelligent machines in the same way as they are applied to the wrongdoings of domestic animals? To mitigate risks from machines that are prone to accidents, regulators have introduced standards and specified safe operating limits. What kind of standards can we introduce in the context of intelligent machines so that a fair predictability of adverse risks is achieved?
