Alex Knight's photo on Pexels


It is an inevitable fact that artificial intelligence technologies are becoming increasingly involved in our lives. They are already used in health, education, trade, industry, the military and many other fields, and will no doubt be used even more in the future. This transformation has numerous effects in areas such as employment, daily life and art, and one of the fields in which it manifests itself is law. A wide range of artificial intelligence technologies have a direct or indirect effect on many sub-areas of law. This article examines the relationship between artificial intelligence technologies and two of these sub-areas: legal personality and legal responsibility.


According to the High-Level Expert Group on AI, set up by the European Commission, “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.

AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications)1.”

In addition, according to the European Parliament Committee on Legal Affairs, a smart robot must have the following features2:

  • It acquires autonomy through sensors and/or by exchanging and analysing data with its environment (interconnectivity);
  • It has self-learning ability (optional criterion);
  • It has physical support;
  • It adapts its behaviour and actions to its environment.

Given the speed of technological development and the diversity of the field, any restrictive definition will quickly prove difficult to maintain and counterproductive.


The legal personality and legal responsibility of a system powered by artificial intelligence constitute a field of discussion that is as novel and futuristic as the technology itself; this will become clearer when examined through examples.

a. Personality

First of all, it should be noted that the concept of legal responsibility cannot be considered on its own. It is preceded by the concepts of legal personality and the capacity to have rights and obligations. In other words, for legal responsibility to arise, the entity that is potentially responsible must first be a legal person. Legally, the concept of the person expresses the ability to have rights and obligations; thus, the concept of the person corresponds to the capacity to have rights and obligations. The concept of the person is not a natural concept but a legal one, which means that the legal order decides who is accepted as a person. As a matter of fact, legal orders accept not only real persons, consisting of human beings, but also entities such as associations and foundations as juridical persons3. To sum up, entities on which the law confers the capacity to have rights and obligations have legal personality4.

Here we encounter the question: can a system that works with artificial intelligence technology be considered a legal person5? One of the views within the area of discussion created by this problem advocates granting smart software a status similar to that of a juridical person. On this view, smart software, just like companies, should be recorded in a dedicated register, and the damage it causes should even be covered by assets of its own6. Similarly, the euRobotics working group within the EU expressed the idea of creating an “electronic personhood”7. Accordingly, a framework of responsibility would be drawn up that includes users, vendors, manufacturers, etc., and each piece of smart software would be recorded in the registry and thereby acquire legal personality. Another proposal of euRobotics is that of “artificial humans”8. It can be said that this model, which has led to philosophical discussions over concepts that are ambiguous even for human beings, such as consciousness, reason and mental competence, is full of uncertainties and challenges the concept of the person in the classical sense. According to another view, intelligent software should be regarded as the agent of a person, and the problems that arise should be solved according to the law of agency9. Furthermore, there is also the opinion that intelligent software could be given a status akin to slavery in Roman law, under which the slave performed simple legal acts in the name of the master, but not in his own name. However, it is said that this practice would not be useful and could even be harmful, given the danger of reviving the idea and practices of slavery, which carry a negative meaning in world history10. As a result, in a world where smart software is becoming increasingly autonomous, we can say that one day it will be impossible to see robots as things or slaves. In that case, it will be necessary to create the most suitable model by drawing on the accumulated experience of world history.

b. Responsibility

Having addressed the question of personality, it is necessary to mention the problems that a system working with artificial intelligence will create in the context of responsibility. As is well known, responsibility is of two kinds, civil liability and criminal liability, and the evaluation should be made in accordance with this distinction.

  1. Civil Liability

In this context, a twofold distinction can be made11. Contractual liability comes onto the agenda first. Accordingly, the manufacturer will undoubtedly be liable for any problems arising in the relationship between the manufacturer and the developer12. An example would be a software developer who promised software capable of speaking ten world languages but delivered software that can speak only five. In the relationship between the seller and the user, the user, as a consumer, will of course be able to exercise the rights arising from that status13. An example would be a robot bought as a babysitter that does not properly care for the child. Turning to tortious liability, we can say that the manufacturer, the software developer, the seller, the user or any other person will each be liable for the damage caused by their own wrongful act14. It must be added that omissions, as well as intentional acts, can give rise to liability15. Moreover, as with software created by the collaboration of many people, it will be difficult to identify the responsible person in cases where the error itself is difficult to locate. Since the behaviour of an artificial intelligence capable of self-learning will be hard to predict, determining responsibility will likewise be a problem16. As an example of the debate on responsibility, consider an incident involving the surgical robot da Vinci in 2005. The robot gave an error warning during an operation and the doctors tried to fix the problem; however, the robot did not allow this and shut down completely after 45 minutes. The injured patient later filed a lawsuit17. This is a concrete case worth considering.
As can be seen, however, existing legal regulations do not pose much of a problem in the fields of contractual and tortious liability. The forms of absolute liability that some consider could be applied mutatis mutandis are more open to discussion. Although there are forms of absolute liability such as employer liability, the liability of animal keepers and the liability of landlords, none of these appears suitable for artificial intelligence applications18. It should be noted that, in one opinion, in a system in which the owner of the robot is strictly liable, exclusion clauses in favour of that person will also come onto the agenda: a person who regularly follows the updates and maintenance of his robot and takes the necessary measures would be released from liability19. In addition, situations such as a mass cyber attack are counted as force majeure, and it is stated that the person would again be released from liability20. Of course, in scenarios where robots themselves are not held responsible, creating a zone of irresponsibility in this way should be avoided. Apart from these, the necessity of adopting the principle of hazard-based liability is also among the opinions defended21. In fact, the da Vinci surgical robot is an example that opens absolute liability to discussion, since the liability of the hospital operator comes onto the agenda even if the surgical team is without fault22. In Turkey, however, the cases of absolute liability are arranged according to the numerus clausus principle, so it is not possible to extend them. Here, the idea of establishing a hierarchy of values in order to determine responsibility suggests embedding such a structure in the software as a code of conduct; yet it is controversial whether this is even possible today23.
Nevertheless, developments show that there will be a transformation in the future, as with most of the concepts that artificial intelligence affects, and perhaps a new type of responsibility will be required.

  2. Criminal Liability

The first example embodying the responsibility of the manufacturer or the user of autonomous robots appeared in 1979. Robert Williams, who worked as an assembly worker at Ford’s Flat Rock factory in Michigan, was killed by a robotic arm used in production24. In another incident two years later, Kenji Urada, who worked at a motorcycle factory, went down in history as the first Japanese person killed by a robot, after the robot perceived him as a threat25. Debates in the area of criminal responsibility can be illuminated by multiplying such scenarios. When criminal responsibility comes onto the agenda, it can be said that the liability of the manufacturer, the seller, the user, etc., acting intentionally or by omission, will arise in these contexts. For example, a person who causes the death of people by producing an autonomous weapon will be liable for an intentional act, while a person who causes a death in an accident involving an autonomous vehicle by failing to exercise due attention and care will be liable for negligence. In terms of negligence, rules explaining the duty of attention and care, that is, due diligence, must be provided26. The problematic area in the context of criminal liability is the question of whether robots themselves can bear criminal responsibility. Although it has no practical value today, this matter, which is important for designing the future, is highly controversial. In criminal law, criminal capacity is a prerequisite for punishability27, and this requires an artificial intelligence system with a high level of autonomy. On that assumption, it is necessary to consider on what principles artificial intelligence could be held criminally responsible28. One opinion put forward in this context says that we should rely on the model of legal personality.
This is because juridical persons are not human beings, yet there are legal systems in which they are criminally liable. For example, Article 5 of the Belgian Criminal Code accepts the direct criminal liability of legal persons29. On the other hand, according to the opinion that legal entities cannot be offenders because they lack criminal capacity, robots cannot be held criminally responsible either30. Still, if we assume that increasingly autonomous systems will someday reach a level of consciousness at which they can be responsible, we must also consider the consequences. Here we encounter the problem of sanctions. Criminal law and its sanctions, as we know them, are designed for human beings. It is among the opinions expressed that an attempt to imprison a robot, which does not have to live within society, could not go beyond being a pointless effort31. It should be noted that the preventive function of punishment becomes important here: the aim is to prevent the occurrence or repetition of the undesired situation32. Looking at the punishments thought to be specific to robots, we see that practices such as destruction, memory wiping, expulsion for a certain period, community service, and fines paid out of assets allocated in advance are being discussed33. Any arrangements to be made should be tailored to the structure of robots and suited to providing a solution. It is also controversial whether a robot can be the victim of a crime. Indeed, one study observed that cruelty to a human and cruelty to a robot affect people in the same way34. It is conceivable, for example, that a robot could be the victim of sexual assault; however, for this to be accepted, robots would have to be recognised as having bodily integrity and sexual freedom35.
Moreover, for attacks against robots produced in animal form, a policy could be determined by following developments in the area of animal rights36. Of course, all of these possibilities, like the discussions above, open up to debate concepts thought to be specific to humans, such as consciousness, free will and consent, and it seems unlikely that any of them can be answered clearly today.


Although it is not accepted today that systems working with artificial intelligence can be persons or be responsible for their behaviour, technological developments show that such discussions will be held ever more passionately in the near future, and that these systems will reach a level of intelligence that today is mere fiction. As stated, artificial intelligence technologies affect many areas beyond legal personality and legal responsibility: marriage law, with the thought of marrying a robot; intellectual property law, if a robot creates a work of art; and election law, with software-supported election campaigns. In this process, the concepts humanity has accumulated up to the present will also be discussed and, if necessary, redefined. For this reason, an activity that goes beyond the interpretation of existing legal regulations and enters the field of law-making will be inevitable. So that law does not obstruct technology, this activity should be carried out not through hard law but through soft law rules, a method accepted within the EU37. Finally, it should be noted that the process is so complex that it cannot be carried out with legal knowledge alone; it requires interdisciplinary work.


  1. AKINTURK, Turgut, KARAMAN, Derya Ateş, Medenî Hukuk, 24. Baskı, Beta Basım, İstanbul
  2. ALTUNC, Sinan, “Robotlar, Yapay Zeka ve Ceza Hukuku”, (Date accessed : 01.04.20)
  3. BAYRAMLIOGLU, Emre, “Akıllı Yazılım ve Hukuki Statüsü”, (Date accessed : 01.04.20)
  3. CHOW, Denise, Boston Dynamics’ New Atlas Robot Can’t Be Pushed Around, 24.02.16, (Date accessed : 01.04.20)
  5. DEMIR, Esra, “Robot Hukuku”, Istanbul Bilgi University Institute of Social Sciences IT Law Master’s Program, 2017
  6. DULGER, Murat Volkan, “Yapay Zekalı Varlıklar ve İnsanlar Arasında Duygusal/Cinsel Yakınlaşmalar: İnsanların Yerini Seks Robotları Mı Alıyor?”, (Date accessed : 01.04.20)
  7. ERSOY, Çağlar, Robotlar, Yapay Zekâ ve Hukuk, 2. Baskı, On İki Levha Yayıncılık, İstanbul, 2017
  8. euRobotics, “Suggestion For A Green Paper On Legal Issues In Robotics”, ed. Christophe Leroux, Roberto Labruto, (Date accessed : 01.04.20)
  9. European Parliament Committee on Legal Affairs, “European Civil Law Rules in Robotics: Study for the JURI Committee”, 2016, (Date accessed : 01.04.20)
  10. Independent High-level Expert Group On Artificial Intelligence Set Up By The European Commission, “A Definition of AI: Main Capabilities and Disciplines”, 2019, (Date accessed : 01.04.20)
  11. INAN, Ali Naim, Türk Medeni Hukuku, 3. Baskı, Seçkin Yayıncılık, Ankara
  12. Istanbul, Ankara and Izmir Bar Associations Workshop Report, “Yapay Zekâ Çağında Hukuk”, 2019
  13. UNSAL, Burçak, “Yapay Zeka, Robotlar, Hukuki Düzenlemeler”, Istanbul Barosu Dergisi, C. 93, S. 2019/4, s.64-73
  14. YUKSEL, Armağan Ebru Bozkurt, “Robot Hukuku”, Türkiye Adalet Akademisi Dergisi, 2017, s.85-112
  15. (Date accessed : 01.04.20)
  16. (Date accessed : 01.04.20)
  17. (Date accessed : 01.04.20)
  18. (Date accessed : 01.04.20)
  19. (Date accessed : 01.04.20)
Kadircan Berkay Çakıralp

Studying Law at Ankara University

  1. Independent High-level Expert Group On Artificial Intelligence Set Up By The European Commission, “A Definition of AI: Main Capabilities and Disciplines”, 2019, at.1, (Date accessed : 01.04.20)
  2. European Parliament Committee on Legal Affairs, “European Civil Law Rules in Robotics: Study for the JURI Committee”, 2016, at.8, (Date accessed : 01.04.20)
  3. AKINTURK, Turgut, KARAMAN, Derya Ateş, Medenî Hukuk, 24. Baskı, Beta Basım, İstanbul, at.107
  4. INAN, Ali Naim, Türk Medeni Hukuku, 3. Baskı, Seçkin Yayıncılık, Ankara, at.111
  5. For a discussion in the context of Robot Sophia: (Date accessed : 01.04.20), UNSAL, Burçak, “Yapay Zeka, Robotlar, Hukuki Düzenlemeler”, İstanbul Barosu Dergisi, C. 93, S. 2019/4, at.67-68; For a TV series example: Star Trek: The Next Generation, Season 2, Episode 9: “The Measure of a Man”
  6. BAYRAMLIOGLU, Emre, “Akıllı Yazılım ve Hukuki Statüsü”, at.8 (Date accessed : 01.04.20)
  7. euRobotics, “Suggestion For A Green Paper On Legal Issues In Robotics”, ed. Christophe Leroux, Roberto Labruto, at.60, (Date accessed : 01.04.20)
  8. Ibid., at.62
  9. ERSOY, Çağlar, Robotlar, Yapay Zekâ ve Hukuk, 2. Baskı, On İki Levha Yayıncılık, İstanbul, 2017, at.92
  10. Ibid., at.94
  11. Istanbul, Ankara and Izmir Bar Associations Workshop Report, “Yapay Zekâ Çağında Hukuk”, 2019, at.59
  12. Ibid., at.59
  13. Ibid., at.60
  14. Ibid., at.62
  15. ERSOY, at.74
  16. YUKSEL, Armağan Ebru Bozkurt, “Robot Hukuku”, Türkiye Adalet Akademisi Dergisi, 2017, at.99
  17. Ibid., at.97
  18. “Yapay Zekâ Çağında Hukuk”, at.63; BAYRAMLIOGLU, at.4
  19. YUKSEL, at.96
  20. BAYRAMLIOGLU, at.5
  21. DEMIR, Esra, “Robot Hukuku”, Istanbul Bilgi University Institute of Social Sciences IT Law Master’s Program, 2017, at.35
  22. YUKSEL, at.97
  23. ERSOY, at.76
  24. (Date accessed : 01.04.20)
  25. / (Date accessed : 01.04.20)
  26. ERSOY, at.74
  27. ALTUNC, Sinan, “Robotlar, Yapay Zeka ve Ceza Hukuku”, at.7, (Date accessed : 01.04.20)
  28. Ibid., at.13
  29. ALTUNC, at.15
  30. ALTUNC, at.17
  31. From the speech of Selin Çetin at the Istanbul Bar Association IT and IT Law Center "Artificial Intelligence, Robots and Law" conference, January 2018, (Date accessed : 01.04.20) For an example of an arrested robot: (Date accessed : 01.04.20)
  32. ALTUNC, at.20
  33. ALTUNC, at.21
  34. CHOW, Denise, Boston Dynamics’ New Atlas Robot Can’t Be Pushed Around, 24.02.16, (Date accessed : 01.04.20)
  35. DULGER, Murat Volkan, “Yapay Zekalı Varlıklar ve İnsanlar Arasında Duygusal/Cinsel Yakınlaşmalar: İnsanların Yerini Seks Robotları Mı Alıyor?”, at.12, YGUSAL_C%C4%B0NSEL_YAKINLA%C5%9EMALAR_%C4%B0NSANLARIN_YER%C4%B0N%C4%B0_SEKS_ ROBOTLARI_MI_ALIYOR (Date accessed : 01.04.20)
  36. Ibid., at.14
  37. “Yapay Zekâ Çağında Hukuk”, at.105