
Can Artificial Intelligence Be Punished for Committing Offences? A Critical Analysis of The Applicability of Criminal Law Principles on Artificial Intelligence


Authored By: Ms. Ananya Mishra (B.A.LL.B (Hons)), Hidayatullah National Law University, Raipur, Chhattisgarh.


ABSTRACT:

Artificial intelligence is rapidly occupying every sphere of our lives. From mobile phones to airplanes, AI is being employed everywhere. But what if an AI commits a crime while carrying out its duties? Can AI be punished? This article presents possible answers to that question.

Section I defines AI, offers present-day examples and delineates the grounds for subjecting AI to the rigors of law, whereas Section II lists the reasons justifying the imposition of punishment on AI. Section III analyzes the scope of punishment for AI, followed by the conclusion that punishing AI is plausible, but subject to limitations which need to be addressed.

I. INTRODUCTION:

The concept of the self-driving car has been a popular theme of sci-fi for a while now. Even Bollywood has its own version of the automatic car in “Taarzan: The Wonder Car”.

In 2014, Elon Musk, the founder of Tesla Motors, announced that Tesla’s Model S sedan has an autopilot mode in which the person behind the wheel can be distracted for a while as the car runs on its own, navigating its way through busy roads and lanes. This is possible thanks to artificial intelligence, abbreviated as AI.

Although self-driving cars are developed to prevent road accidents, what if the same self-driving car speeds, jumps a red light and rams into an individual, killing her instantly? In that case, who exactly would be held liable?

Ideally it should be the AI, the one driving the car. But is it possible to punish AI for committing offences? The present research strives to find probable answers to this question.

I.I REVIEW OF LITERATURE:

  1. In this journal article,[1] Gabriel Hallevy argues strongly in favor of punishing AI for committing offences by drawing an analogy to corporate criminal liability, and even prescribes punishments such as incarceration and fines for AI. However, the paper appears quite opinionated for a research work, as the dilemmas involved are not adequately represented. It has been used directly in the present research with appropriate citations.
  2. In this journal article,[2] the author discusses the possible consequences and repercussions of developing semi- and fully autonomous cars driven by AI, and their culpability. He analyzes how prepared criminal law is and what sort of amendments should be introduced to make AI-driven cars responsible. It has been used only as a reference in this paper.
  3. In this journal article,[3] the authors analyze at length the pros and cons of punishing AI for committing offences, with various examples. They also suggest certain pragmatic changes which can be made to criminal law to accommodate AI-driven systems, while recognizing the need to overhaul the system w.r.t. future developments. It has been used substantially in the present research with appropriate citations.

I.II RESEARCH QUESTIONS:

  1. What is Artificial Intelligence?
  2. Why should AI be liable under law?
  3. Why should AI be punished for committing offences?
  4. How can AI be punished for committing offences?

I.III RESEARCH OBJECTIVES:

  1. To find out the meaning and nature of AI
  2. To investigate the reasons for holding AI liable under law
  3. To examine the culpability of AI
  4. To critically analyze the scope of punishing AI

I.IV RESEARCH METHODOLOGY:

The present research has been carried out by the doctrinal method, using secondary sources of information available in the form of articles, essays and research papers on the internet. It seeks to analyze the prospect of punishing AI for committing offences and is limited to examining the culpability of AI and the scope of punishment, without delineating the specific punishments that may be imposed on AI.

II. ARTIFICIAL INTELLIGENCE: WHY SHOULD IT BE LIABLE UNDER LAW?

Artificial intelligence, or AI, signifies the ability of machines to mimic human intelligence, i.e., the problem-solving and decision-making cognitive skills, without any explicit command or assistance from a human being. It can also be defined as man-made intelligence, or the ability of machines or abstract things to act smart. In fact, AI machines have been categorized as ‘machina sapiens’ or thinking machines.[4] The AI machines currently in use are narrow, in that they can perform only specific tasks. A General AI is one which could perform any task given to it; however, as per experts, General AI is not yet feasible. The kind shown in sci-fi movies, referred to as Self-aware or Strong AI, which would be morally culpable for its conduct, is possible only in movies.

Examples of AI:

One of the fundamental reasons for making AI liable under law is its manifold uses and involvement in the day-to-day life of humans in the 21st century. The following is a list of applications of AI in daily life. This list,[5] however, is not exhaustive, as the applications of AI continue to evolve every day.

  • Roomba – The House Cleaning Wizard:

Made by iRobot, this AI-driven vacuum cleaner can determine the best cleaning method for a floor based on its own room-size analysis, without any human help or assistance.

  • Sophia – The Celebrity Humanoid:

Built by the Hong Kong based Hanson Robotics, Sophia, the self-learning robot, made headlines and turned into an overnight celebrity because of her near-human appearance and human-like emotions and interactions. This AI-driven humanoid even made an appearance on the popular show ‘The Tonight Show with Jimmy Fallon’ and has received citizenship from Saudi Arabia.

  • Olly – The Mood-Based Music Player:

If Amazon’s Alexa can play songs on your command, Emotech’s Olly can play music to suit your mood, based on its own facial-expression and voice analysis. With its machine-learning capabilities, the AI-infused Olly is a table-top robotic voice assistant that can even initiate a conversation and ask about your mood if it sees your long face.

  • Google Maps – The Direction Guide:

This particular application of AI needs no special introduction. There is perhaps no one who has never used Google Maps to reach the nearest vegetarian restaurant or to consult a dermatologist. Google Maps uses AI-driven algorithms to determine the shortest routes to the user’s destination and in turn makes commuting easier (a classical shortest-path method is sketched after this list of examples).

  • Grammarly – Your Very Own Writing Assistant/Editor:

Grammarly has helped a great many non-native as well as native English speakers. Whether it is drafting formal emails or college assignments, Grammarly, with its AI-powered machine learning and data science, reviews writing in real time, finds errors and helps rectify them to produce clearer and grammatically correct drafts. Although it is not a hundred percent accurate all the time, it is still better than nothing.
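
To make the routing example above concrete: Google’s actual routing stack is proprietary, so the following is only a minimal sketch, in Python, of the classical shortest-path idea (Dijkstra’s algorithm) on a made-up toy road network; the junction names and travel times are illustrative assumptions.

```python
# A minimal sketch of shortest-path routing with Dijkstra's algorithm.
# The road graph below is a toy assumption, not real map data.

import heapq

def dijkstra(graph, start, goal):
    """Return (total_minutes, route) for the cheapest path, or None."""
    queue = [(0, start, [start])]  # (cost so far, node, route taken)
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, route + [neighbor]))
    return None

# Toy road network: travel times in minutes between junctions.
roads = {
    "home":   {"market": 4, "bypass": 9},
    "market": {"clinic": 7, "bypass": 2},
    "bypass": {"clinic": 3},
}
print(dijkstra(roads, "home", "clinic"))
# -> (9, ['home', 'market', 'bypass', 'clinic'])
```

The design point is simply that the best route is computed from the data at query time rather than stored in advance, which is why such systems can adapt when conditions (here, edge weights) change.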

Apart from the above, e-commerce sites like Amazon and Flipkart, social networks like Meta or Twitter, and self-driving cars like Tesla’s Model S sedan all use, or are beginning to use, AI to make our lives easier, faster and better. So what makes AI-based devices so different from other conventional machines? The answer lies in an analysis of the characteristic features of AI, which set it apart from its conventional counterparts.

Characteristics of AI[6]:

  1. Ability to communicate
  2. Internal knowledge or knowledge about itself
  3. External knowledge or knowledge about outside world
  4. Goal oriented action or behavior
  5. Creativity

The ability to communicate is what sets AI apart from conventional machines. You cannot talk to a screwdriver, but you most definitely can talk to an AI-empowered robot, the way it is shown in movies. AI possesses internal and external knowledge, which makes it feasible to have simple small talk about the weather with an AI, just like a conversation with a friend, albeit a bit formal. If an AI robot, machine or software is given a mission to achieve, it can tailor its actions to achieve that goal. And in case the original plan fails, it always has a plan B, C or even D at its disposal to achieve the set target, as the sketch below illustrates.
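
To make this fallback behavior concrete, the following is a minimal sketch in Python of a goal-oriented agent that tries alternative plans until the target is reached. The plan names and failure conditions are hypothetical illustrations, not drawn from any system discussed in this paper.

```python
# A minimal sketch of goal-oriented behavior with fallback plans.
# All plan names and conditions here are hypothetical illustrations.

def plan_a(world):
    # Preferred route: take the highway, unless it is blocked.
    return None if world["highway_blocked"] else "arrived via highway"

def plan_b(world):
    # Fallback: take side streets, unless they are flooded.
    return None if world["streets_flooded"] else "arrived via side streets"

def plan_c(world):
    # Last resort: wait out the obstruction, then proceed.
    return "arrived after waiting"

def achieve_goal(world, plans=(plan_a, plan_b, plan_c)):
    """Try each plan in turn until one achieves the set target."""
    for plan in plans:
        outcome = plan(world)
        if outcome is not None:
            return outcome
    return "goal not achieved"

# Plan A fails (highway blocked), so the agent falls back to plan B.
print(achieve_goal({"highway_blocked": True, "streets_flooded": False}))
```

The point is not the code itself but the structure: the goal is fixed, while the means of achieving it are selected by the system at run time, which is precisely what makes its behavior hard to predict in advance.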

Apart from these basic features, there are certain characteristics of AI which necessitate that AI be made subject to the operation of law, criminal and civil alike. These are:[7]

  1. The ability to act unpredictably, where the AI, through machine learning and experience, engages in activities which its programmer or developer did not explicitly program into it.
  2. The ability to act unexplainably, where the AI functions as a ‘black box’ whose output cannot be logically traced back to its input; for example, rejecting a particular credit application without being able to explain why (see the sketch after this list).
  3. The ability to act autonomously, where the AI system receives input, sets targets, assesses outcomes and behaves in a way to achieve the set target, all without being directed by humans.
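
The ‘black box’ point can be illustrated in miniature. The sketch below, with toy data and hypothetical feature names of my own choosing, trains a tiny model whose decision rule is nothing but a set of learned numbers; a real deep network magnifies this opacity by orders of magnitude.

```python
# A minimal sketch of "black box" decision-making: the rule that rejects
# an applicant is a set of learned weights, not anything a programmer wrote.
# The data and feature names (income, debt, tenure) are toy assumptions.

import math
import random

def train(data, epochs=2000, lr=0.1):
    """Fit a one-layer logistic model by gradient ascent on toy data."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(len(data[0][0]))]
    for _ in range(epochs):
        for x, label in data:
            pred = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (label - pred) * xi for wi, xi in zip(w, x)]
    return w

# (income, existing_debt, years_employed) -> 1 = approve, 0 = reject
history = [((0.9, 0.1, 0.8), 1), ((0.2, 0.9, 0.1), 0),
           ((0.7, 0.4, 0.5), 1), ((0.3, 0.8, 0.2), 0)]
weights = train(history)

applicant = (0.4, 0.6, 0.3)
score = sum(wi * xi for wi, xi in zip(weights, applicant))
print("approve" if score > 0 else "reject")
# The decision is just the sign of a weighted sum the system learned on
# its own; no explicit "reason" for this outcome appears anywhere in the
# program text the developer wrote.
```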

In truth, these features of autonomy, unpredictability and inexplicability are what distinguish AI-operated machines and software from conventional machines. Unpredictability and inexplicability sometimes make the actions of AI irreducible, i.e., not directly reducible to any person’s conduct. For example, in 1986 an AI-controlled robot employed in a factory killed a human worker working beside it and then simply resumed its work, with no one intervening. Here the question arises: can the AI be held liable for such a deed?[8]

There are certain other recent examples as well:

  • An AI-driven bot called Random Darknet Shopper (RDS), programmed to shop on the darknet for an art exhibition, bought ecstasy pills and other illegal items with the $100 it had been given by its programmer to purchase art pieces, without any explicit command to buy ecstasy pills.[9]
  • Microsoft’s AI-based chatbot Tay was developed to learn from and tweet like a millennial on the social network. Within hours of its launch, it had to be removed and shut down because it tweeted things like “Hitler was right” and that “feminists should…burn in hell”.[10]
  • On 7th May 2016, a car crash occurred in Williston, Florida, after the driver, Joshua Brown, 40, of Ohio, put his Model S into Tesla’s autopilot mode, which is able to control the car during highway driving. Tesla itself admitted that the sensor system in the autopilot mode could not differentiate between a bright spring sky and a large white truck-trailer crossing the highway, leading to the fatal accident. This occurred at a time when a large number of automobile companies had begun experimenting with AI-driven cars.[11]

In all the above cases, although the AI-based application or device was initially developed with a noble intention, the entity concerned later went astray and acted unpredictably, thereby committing offences. These examples are thus a call for subjecting the functioning of AI-based applications and systems to the scrutiny of laws and regulations.

AI’s Criminal Liability Analogous to Corporate Criminal Liability:

AI liability has been regarded as akin to corporate criminal liability. Gabriel Hallevy, regarded as the best-known defender of punishing AI for criminal offences, argues that “AI entities are taking larger and larger parts in human activities as do corporates.”

Therefore, he concludes that “there is no substantive difference between the idea of criminal liability imposed on corporations and on AI entities.” Indeed, like AI, corporations are artificial persons, having neither body nor soul; yet they are punished for committing criminal offences like criminal breach of trust.

Therefore, “when an AI entity establishes all elements of a specific offence, both external and internal, there is no reason to prevent imposition of criminal liability upon it for that offence.”[12]

III. WHY SHOULD AI BE PUNISHED FOR COMMITTING OFFENCES?

Punishment literally means the infliction of an undesirable or unpleasant outcome by an authority on a person or group of people accused of violating the law, in order to incapacitate the accused and deter potential offenders in society. Punishments under criminal law consist mainly of fines or incarceration. Criminal law embodies the strictest form of social control, namely punishment; therefore, it should always be the last resort. The imposition of punishment is justified only when its benefits outweigh its repercussions and there are no less severe alternatives.

Justification of Punishing AI for Committing Offences:

Supporters of punishment outline certain affirmative benefits arising from its imposition, which can be categorized as consequentialist, retributive and expressive benefits. Consequentialist benefits include incarceration, which incapacitates the offender, deterrence of potential offenders, and the opportunity for reformation of delinquents.

Retributive benefits defend the imposition of punishment as giving offenders their due in response to their culpability, whereas expressive benefits consist in the expression of society’s condemnation of the commission of offences. The negative limitations opposing the imposition of punishment mainly consist of the desert constraint and the culpability criterion.

The desert constraint takes into account respect for the life and dignity of a human being. It proscribes punishment in excess of culpability. Basically, it opposes the idea of treating a human being as a means, whereby exemplary punishment inflicted on a certain individual sets an example for society to follow.

The culpability criterion, on the other hand, takes into consideration the ability of a person to design, deliberate upon and carry out the intended offence, i.e., the mens rea element, which is the basic prerequisite for imposing criminal liability.

As already stated, punishment is justified if its benefits outweigh its repercussions. The benefits associated with punishing AI consist of the following justifications:

Consequentialist Benefits:

The biggest consequentialist benefit of imposing punishment is to deter the offender, as well as potential offenders, from committing either the same offence or any offence at all in the future. In the case of AI, however, the situation is different: AI refers to machines behaving smartly.

Nevertheless, AI is not a human being. It can mimic certain human cognitive capabilities, but not all. Therefore, deterring an AI from committing the same offence would be more or less the same as deterring it from committing any offence at all. Additionally, as each AI-based device or software is designed for a specific purpose, and there is practically no such thing as a General AI carrying out all kinds of tasks, punishing one AI would most definitely not help deter other AI devices from committing the same offence.

Hence, deterrence seems implausible for an AI-empowered device. However, deterrence can operate on the conduct of the developer, user, programmer or manufacturer of AI devices, by deterring them from developing AI devices which may end up causing egregious harm.

Expressive Benefits:

One of the biggest fears among the general public when it comes to the adoption of AI-controlled devices is that, in case of any default, the AI would not be subject to any law, especially criminal law. The imposition of punishment on erring AI would therefore help maintain the faith of the common people in the efficacy of criminal law as an effective mode of ensuring the safety of society. Additionally, the imposition of punishment on AI would give the victim a sense of satisfaction. Nonetheless, these expressive consequences are not without criticism: punishing an AI device just because the general public demands it is the same as giving in to mob justice. “Popularity of a practice doesn’t automatically justify it.”[13]

Challenges To the Notion of Punishing AI:

However, there are certain challenges to the notion of punishing AI for committing offences. These are:

1) Conceptual confusion w.r.t imposition of punishment on AI:

The general principle of criminal law regarding the imposition of punishment requires the fulfillment of two basic elements, namely actus reus and mens rea. The fulfillment of both is essential in order to impose any sort of punishment on AI. Actus reus corresponds to the act or omission on the part of the accused. Since, in almost all situations involving AI, whether the AI operates under partial human control or wholly on its own, the act or omission is most definitely that of the AI itself, there is little controversy w.r.t. actus reus, the element of action involved in crime.

Controversy arises mainly regarding the mens rea element of crimes. It is this element which gives rise to the “eligibility challenge” encountered while discussing the punishment to be meted out to AI.[14] There are two interpretations of this eligibility challenge, narrow and wide. As per the narrow interpretation, an AI lacks the requisite mental element (being a machine, it cannot feel the emotions essential to form the mens rea, or guilty intention, behind committing any crime), and thus punishing AI would violate the basic principles of the criminal justice system.

As per the broader interpretation, even though an AI possesses problem-solving abilities, it does not actually have the capacity to deliberate and weigh probabilities before taking an action, or to think through the consequences of its actions, like an infant or a person of unsound mind (doli incapax). Therefore, none of its acts or omissions corresponds to the level of culpability that criminal law requires, and hence it should be placed outside the purview of criminal law. In such a case, punishing AI for its conduct would be conceptually confused, as it possesses neither guilty intention nor the capacity to carry out crimes.

There are, however, certain workable principles of criminal law through which the above limitation can be overcome. These are as follows:

a) Respondeat Superior Principle:

Hallevy calls this the “perpetration-via-another” model, where the mens rea of the agent is imputed to its principal.[15] Here, the AI application or device is deemed to be an innocent agent, a mere instrumentality which carries out the commands given to it by its human master. The principal would be whoever operates the AI, whether the programmer or the end-user, and would bear primary liability as if he had committed the offence himself (perpetration-via-another).

Importantly, no mental capability is ascribed to the AI here. However, problems may arise when the AI device has no definite set of developers, or when the conduct of the AI device is not traceable or reducible to particular human conduct, as in the case of Hard AI crimes.

b) Strict Liability Principle:

Strict liability offences are those characterized by the absence of mens rea. Here, an AI can be held liable without infringing the principle of legality. However, one essential requirement in the case of strict liability is the voluntary conduct of the accused; if AI lacks both mental states and deliberative ability, how can it act voluntarily? One possible solution, as suggested by experts, may be to create a new category of strict-liability offences specifically for AI-related devices, in which the voluntary-act requirement is done away with. Additionally, certain duties can be imposed on AI developers or users so that, on omission of the same, they would be held liable for strict-liability offences.

c) Direct Culpability of AI:

This is the most speculative model, and different authors hold different opinions on it. As per Hallevy, creativity is not essential to committing offences; the essential prerequisite is a combination of knowledge and specific intent. He then defines knowledge as a mechanical process in which information is received and processed by the human brain. In a similar fashion, AI is capable of receiving and processing information, and this processing of information by advanced AI systems is similar to that of humans; therefore, AI is capable of possessing knowledge. As for the specific-intent requirement, Hallevy asserts that, barring a few specific crimes like hate crimes, almost all offences require only knowledge of their external elements, which AI possesses, and therefore AI can be held directly liable under criminal law.[16] Other authors suggest that, analogous to corporate criminal liability, if a sophisticated AI is programmed to take into consideration the interests of humans and is aware of the legal liabilities, yet ends up acting in total disregard of these interests, then it can be held directly liable for such acts or omissions.[17]

2) Reducible to Human Conduct:

Skeptics argue that the conduct or misconduct of an AI can always be fully reduced to the fault of the individuals involved. However, it is pertinent to mention that AI, as already stated, often behaves in ways irreducible to human conduct. For example, a pilot puts the plane on autopilot and takes a break. Midway along the charted path a storm approaches, so the pilot wants to abort the flight and land somewhere safe. But the AI, programmed to complete the mission, identifies the intervening pilot as a threat and, calculating that the best way to eliminate the threat is to kill him, activates a mode that ejects the pilot from the cockpit to his death. In such a scenario, the conduct of the AI cannot be attributed to its programmer or user: a clear case of irreducibility.

3) Punishing AI is Actually No Punishment:

This line of criticism stems from the fact that AI devices or software are, at the end of the day, only machines, without the feelings or emotions that humans have. One of the most essential elements of punishment is the infliction of pain or unpleasant consequences on the wrongdoer. Humans of flesh, blood and emotion can feel the ‘pain’ of punishment. Robots, however, are not capable of conjuring up emotions or feeling pain, physically or mentally. So punishing AI would not be punishment in the actual sense of the term.

By this line of argument, however, even corporate criminal liability and the penal sanctions imposed on corporations should be fruitless, but that is not the case. The strongest counter to this criticism is that even though AI cannot feel the pain of punishment, punishing it would definitely bring some psychological relief to the victims of AI offences. Additionally, it would make the developers and programmers of AI more responsible towards society. There are thus certain criticisms associated with imputing liability to AI and punishing it; however, as outlined above, there are ways to work around them.

IV. HOW CAN AI BE PUNISHED FOR COMMITTING OFFENCES?

It is by now clear that there are reasons for punishing AI, albeit subject to various limitations which may be overcome by certain criminal law principles. Having justified that AI may be punished for committing offences, the question arises: how? The current section therefore discusses the scope of punishment for AI. As already stated, where the AI acts as an instrumentality, culpability lies with the operator and not with the AI. In this situation, the operator or programmer is aware that the AI is going to cause harm. Cases falling under this category include, for example, using AI to loot a bank or to tamper with the records of a hospital, which can easily be covered under existing criminal law provisions.

The situation is different when, while carrying out a particular task, the AI system ends up committing a crime which the user/programmer did not intend per se. Hallevy calls this the “natural-probable consequence” model.[18] Two situations are possible in such a scenario: either the user or programmer was negligent during programming, or he/she intended one type of offence but the AI committed another, instead of or in addition to the intended one. In the first case, the user would be liable for negligence, while in the second, his liability would be the same as that of an abettor or accomplice.

As to the liability of the AI itself: if it was not aware of the legal prohibitions, it may or may not be held liable. However, if it was aware of the legal prohibitions, i.e., if certain acts or omissions were encoded into its system as prohibited, and it still went ahead with the commission or omission of such an act, then it would be held liable and punished, along with the user/operator, for the specific offence as per the statute books.

However, one difficult situation would arise when the AI commits irreducible offences. The only respite is that lawmakers still have some time left before we witness actual Hard AI crimes. Certain authors have suggested that, to deal with irreducible offences, criminal law needs to be amended to include a particular category of offences covering such harmful yet irreducible harms caused by AI.

Expressive Cost Associated With Punishments For AI:

The liability of AI has been equated with the liability of corporations. An important point here is that corporations are regarded as artificial persons under law, and it is by virtue of the same that they are subjected to liability. Therefore, unless an AI is vested with legal personality, it cannot be subjected to punishment under criminal law, and vesting personhood in AI requires a whole lot of practical changes to the existing criminal justice system. This is termed the expressive cost associated with AI punishment.

Rights Creep: Excessive Rights Being Detrimental in Long Run:

Full-fledged legal personality for AI is a very debatable issue, because the vesting of legal personality connotes not only duties but also rights. If AI is made a legal person, it will lead to the growth of a host of new rights, resulting in a phenomenon known as rights creep. In the US Supreme Court's decision in Louis K. Liggett Co. v. Lee,[19] the problem of rights creep associated with corporations was raised by Brandeis J. in his dissenting opinion, where he voiced concern about corporations eventually dominating the State once vested with a multitude of rights. In a similar fashion, even though it may be argued that the rights available to AI could be limited to a few, those rights will definitely restrict the freedoms of humans, if not now then certainly in the future.

Feasible Alternatives: Are There Really Any?

After weighing the possible solutions against the legal and practical considerations, it does seem difficult to impose criminal liability upon AI-controlled devices or applications. Nevertheless, it is not impossible. The aim of this article is to analyze whether AI can be punished for committing offences at all, i.e., whether criminal liability can be imputed to AI. One might be skeptical about criminal liability, since it is the harshest form of social control, and might look for less severe alternatives; after all, criminal punishment is justified only when no other alternative is available. In such a scenario, civil liabilities, such as imposing damages or attaching the property of the erring individual, may be feasible alternatives. Certain researchers even propose designating a particular person as “the responsible individual”, to be held liable for any offence or injury caused or inflicted by the AI and, in turn, to receive payment in lieu of this “service.”[20]

All of the above propositions regarding civil liability hold true where AI is used as an instrumentality, or where AI-based offences stem from the negligence of the manufacturer or programmer. However, in the case of Hard AI crimes, where the AI acts autonomously, unpredictably or preternaturally on the basis of its machine learning, imputing liability to a human being who neither acted recklessly nor gave any such command seems contrary to the rules of natural justice, which prescribe just, fair and reasonable conduct. In such a scenario, criminal liability appears to be the go-to solution, and it might be just what is required, provided certain inherent legal and conceptual limitations are removed. Hence, what may be suggested is to modify existing criminal law principles to accommodate the demands of a future that will be largely dominated by AI-driven devices and applications.

Vesting legal personhood, with limited rights and duties, in AI entities, as has been done with corporations, should also be considered by lawmakers, as it appears an inevitable consequence. Despite all the repercussions, AI is the future, and laws have to be modified as per the changing needs of society in order to stay effective and true to their dynamic nature.

V. CONCLUSION:

AI is here to stay; it is not going anywhere. From OK Google, Siri and Alexa to Tesla’s automatic cars and the autopilot mode in airplanes, AI is everywhere. Even law firms now have AI assistants; very soon we might have AI lawyers and judges. AI has the capability to behave almost like a human being. This begs the question: if AI can judge, why can it not commit offences? And if it can commit offences, which it occasionally does, can it be punished? The answer is ‘yes’.

But it is not an absolute yes. There are many limitations and complications associated with punishing AI for offences. Nevertheless, as discussed above, it is not impossible. What is needed is to modify the law, for laws change as per changing needs and demands.

Bibliography:

  1. Danny Yadron and Dan Tynan, Tesla driver dies in first fatal crash while using autopilot mode, THE GUARDIAN (July 1st, 2016, 00:14 BST), https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk.
  2. Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control, 4 AKRON I.P.J. 171 (2010), https://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2.
  3. K.C. Kingston, Artificial Intelligence and Legal Liability, ARXIV (Oct 21st, 2021, 04:30 PM), https://arxiv.org/ftp/arxiv/papers/1802/1802.07782.pdf.
  4. Jeffrey K. Gurney, Driving into the Unknown: Examining the Crossroads of Criminal Law and Autonomous Vehicles, 5 WAKE FOREST J.L. & POL’Y 393 (2015), https://ssrn.com/abstract=2543696.
  5. Matilda Claussén-Karlsson, Artificial Intelligence and the External Element of the Crime, OREBRO UNI. 1 (2017), http://www.diva-portal.org/smash/get/diva2:1115160/FULLTEXT01.pdf.
  6. Ryan Abbott and Alex Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, 53 UC DAVIS L. REV. 323 (2019), https://lawreview.law.ucdavis.edu/issues/53/1/articles/files/53-1_Abbott_Sarch.pdf.
  7. Sam Daley, 27 Examples of Artificial Intelligence Shaking Up Business as Usual, BUILT IN (Dec 17th, 2021), https://builtin.com/artificial-intelligence/examples-ai-in-industry.
  8. Ying Hu, Robot Criminal Liability Revisited, SSRN (April 3, 2018), https://ssrn.com/abstract=3237352.
  9. Ying Hu, Robot Criminals, 52 U. MICH. J. L. REFORM 487 (2019), https://repository.law.umich.edu/mjlr/vol52/iss2/5.

Cite this article as:

Ms. Ananya Mishra, “Can Artificial Intelligence Be Punished for Committing Offences? A Critical Analysis of The Applicability of Criminal Law Principles on Artificial Intelligence”, Vol.3 & Issue 3, Law Audience Journal (e-ISSN: 2581-6705), Pages 184 to 200 (25th January 2022), available at https://www.lawaudience.com/can-artificial-intelligence-be-punished-for-committing-offences-a-critical-analysis-of-the-applicability-of-criminal-law-principles-on-artificial-intelligence/.

Footnotes & References:

[1] Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control, 4 AKRON I.P.J. 171 (2010), https://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2.

[2] Jeffrey K. Gurney, Driving into the Unknown: Examining The Crossroads Of Criminal Law And Autonomous Vehicles, 5, WAKE FOREST J.L. & POL’Y, 393 (2015) https://ssrn.com/abstract=2543696.

[3] Ryan Abbott and Alex Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction, 53, UC DAVIS, 323, (2019), https://lawreview.law.ucdavis.edu/issues/53/1/articles/files/53-1_Abbott_Sarch.pdf.

[4] Supra note 1 at 175.

[5] Sam Daley, 27 Examples of Artificial Intelligence Shaking Up Business as Usual, BUILT IN, (Dec 17th, 2021) https://builtin.com/artificial-intelligence/examples-ai-in-industry.

[6] Supra note 1 at 176.

[7] Supra note 3 at 330-331.

[8] Supra note 1 at 171.

[9] Matilda Claussén-Karlsson, Artificial Intelligence and the External Element of the Crime, OREBRO UNI., 1, (2017), http://www.diva-portal.org/smash/get/diva2:1115160/FULLTEXT01.pdf, at 17.

[10] Id. at 18.

[11] Danny Yadron and Dan Tynan, Tesla driver dies in first fatal crash while using autopilot mode, THE GUARDIAN (July 1st, 2016, 00:14 BST), https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk.

[12] Supra note 1 at 191.

[13] Supra note 3 at 347.

[14] Id. at 349.

[15] Supra note 1 at 179.

[16] Supra note 1 at 186.

[17] Supra note 3 at 355.

[18] Supra note 1 at 183.

[19] 288 U.S. 517, 549 (1933).

[20] Supra note 3 at 375.
