Related Articles
1.
There are several legal and ethical problems associated with the far-reaching integration of man with Artificial Intelligence (AI) within the framework of algorithmic management. One of these problems is the question of the legal subjectivity of the parties to a contractual obligation within the framework of crowdworking, which includes the service provider, the Internet platform with AI, and the applicant's client. Crowdworking is an excellent example of a laboratory of interdependence and collaboration between humans and artificial intelligence as part of the algorithmic management process. Referring to the example of crowdworking platforms, we should ask whether, in the face of the rapid development of AI and algorithmic management, AI can be an employer equipped with electronic personhood. What characterizes a work environment in which AI and algorithmic governance mechanisms play a dominant role? What ethical implications are associated with the practical application of the concept of electronic subjectivity of AI in employment relations? This paper analyses the legal and ethical implications of electronic AI subjectivity in the work environment. The legal construction of electronic personhood is examined. The legal entity that uses AI, which manages the work process through algorithmic subordination, bears the risks resulting from such use (economic, personal, technical, and social) and full material responsibility (an individual liability regime applying a presumption of guilt) in case of damage to an employee. Liability provisions can be complemented by a mandatory insurance scheme for AI users and a compensation fund that can offer support when none of the insurance policies covers the risk. A compensation fund can be paid for by the manufacturer, owner, user, or trainer of the AI and can compensate all those who suffer damage as a result of its operations. This is the direction proposed by the European Parliament, which has progressively called for robots to be given an electronic personality. The personalistic concept of work excludes the possibility of AI becoming a legal entity. Alongside legal arguments, ethical questions are of fundamental importance. The final part of the article presents the ethical implications of AI as an employer endowed with legal personality (electronic personhood).

2.
朱艺浩 《法学杂志》2020,(3):132-140
Advances in artificial intelligence have given rise to a "theory of AI legal personhood" in legal scholarship. However, will can only originate from natural persons, not machines, and AI cannot independently assert rights or assume obligations; it therefore cannot enjoy legal personhood. Granting AI legal personhood could severely disrupt the existing legal system, reduce it to a tool by which natural persons evade legal sanctions, and even aggravate enslavement and oppression. In the face of AI technology, we should uphold the status of human beings as the sole legal subjects, resolve questions of liability within the existing legal framework, and refrain from creating legal personhood by fiction at will. Legislation should follow a logic of early-stage guidance, mid-stage constraint, and late-stage intervention according to the technology involved, responding rationally to the challenges posed by AI.

3.
Precision and effectiveness of Artificial Intelligence (AI) models are highly dependent on the availability of genuine, relevant, and representative training data. AI systems tested and validated on poor-quality datasets can produce inaccurate, erroneous, skewed, or harmful outcomes (actions, behaviors, or decisions), with far-reaching effects on individuals' rights and freedoms. Appropriate data governance for AI development poses manifold regulatory challenges, especially regarding personal data protection. An area of concern is compliance with rules for lawful collection and processing of personal data, which implies, inter alia, that using databases for AI design and development should be based on a clear and precise legal ground: the prior consent of the data subject or another specific valid legal basis. Faced with this challenge, the European Union's personal data protection legal framework does not provide a preferred, one-size-fits-all answer, and the best option will depend on the circumstances of each case. Although there is no hierarchy among the different legal bases for data processing, in doubtful cases, consent is generally understood by data controllers as a preferred or default choice for lawful data processing. Notwithstanding this perception, obtaining data subjects' consent is not without drawbacks for AI developers or AI-data controllers, as they must meet (and demonstrate) various requirements for the validity of consent. As a result, data subjects' consent may not be a suitable and realistic option to serve AI development purposes. In view of this, it is necessary to explore the possibility of basing this type of personal data processing on lawful grounds other than the data subject's consent, specifically, the legitimate interest of the data controller or third parties. Given its features, legitimate interest could help to meet the challenge of quality, quantity, and relevance of data curation for AI training. The aim of this article is to provide an initial conceptual approach to support the debate about data governance for AI development in the European Union (EU), as well as in non-EU jurisdictions with European-like data protection laws. Based on the rules set by the EU General Data Protection Regulation (GDPR), this paper starts by referring to the relevance of adequate data curation and processing for designing trustworthy AI systems, followed by a legal analysis and conceptualization of some difficulties data controllers face for lawful processing of personal data. After reflecting on the legal standards for obtaining the data subject's valid consent, the paper argues that legitimate interest (if certain criteria are met) may better match the purpose of building AI training datasets.

4.
Free will is the foundation of the determination of responsibility. Genetic engineering represented by gene-editing technologies, artificial medical devices, and AI have fundamentally challenged the concept of free will and have thus significantly influenced the determination of legal responsibility. These challenges are fundamental, not instrumental, and can be divided into two aspects in legal philosophy. First, the direct challenge: the emerging technologies represented by genetic engineering and artificial narrow intelligence (ANI) have challenged the concept of free will. Second, the would-be ultimate challenge: an artificial general intelligence (AGI) agent considered to have reached human-level free will could be a legal subject and thus take full legal responsibility. The direct challenge constitutes a new "forgiveness" condition for taking responsibility. The would-be ultimate challenge deserves significant attention, because the concept of free will concerns not only human responsibility but also human dignity.

5.
Tort damage in AI-assisted medical imaging diagnosis refers to harm caused to patients by medical acts that clinicians perform on the basis of diagnostic conclusions from AI medical imaging systems. The integration of AI with medical imaging technology has greatly improved the efficiency and quality of diagnosis, but the compensation problems arising from diagnostic errors cannot be ignored. AI medical imaging diagnosis involves diverse diagnostic modes, multiple potentially liable parties, varied causes of harm, and differing shares of responsibility, so the resulting tort liability is highly complex. Such torts should be characterized as separate tortious acts within the category of torts committed by multiple persons. At the present stage, the determination of damages should take as its logical starting point the positioning of AI medical imaging diagnostic equipment as an object, rather than a subject, of civil legal relations; liable parties should be identified according to the diagnostic mode, the tort scenario, and the cause of the error; shares of compensation should be determined by weighing causal contribution and degree of fault; and the scope of liability should be appropriately limited from the perspective of balancing interests, so as to promote the wide application of AI in the medical imaging industry and the sustained development of the broader health industry.

6.
魏斌 《政法论丛》2021,(1):138-147
The jurisprudence of legal artificial intelligence is a "jurisprudence of practice": the grounds that justify the legitimacy of legal AI practice. It reflects the fusion of AI's technical rationality with the practical rationality of law so as to reveal the patterns and characteristics of legal operation, and it offers a further elaboration of "reason beyond the law". The jurisprudential logic of legal AI lies in defense and justification; its value is not only to provide jurisprudential explanation and doctrinal support for legal AI, but also to regulate and guide its development. Law has natural conditions for integrating AI, and exploring the jurisprudence implicit in that integration is a new proposition within the jurisprudence of integrating law and technology; formal rationality is the essential feature of the jurisprudence that defends legal AI. The core of this jurisprudence is to guide AI to understand and follow the laws of legislation and adjudication, to fit the characteristics of legal tasks, to meet the needs of legal practice, to position and play the role of an "auxiliary hand", and to bring AI's technical advantages into full play.

7.
Artificial intelligence (AI), as human-like intelligence, must have its behavior explained before its legal liability can be resolved, regardless of whether we grant it subject status. The question of AI's legal liability should therefore be approached through the new lens of the explainability of AI behavior, rather than through the various subject-status and liability theories currently debated in legal scholarship. The explainability of AI means explaining how AI makes algorithmic decisions on the basis of big data. Although AI technologies represented by deep learning have achieved remarkable results, how to ensure that algorithmic decisions, and the data driving them, can be explained to end users and other stakeholders in non-technical terms remains an unsolved problem; the difficulty of opening the AI "black box" renders AI behavior unexplainable. The essence of legal liability is answerability: an AI that is not explainable cannot answer for itself and thus cannot bear legal liability. The purpose of legal liability is prevention, and an unexplainable AI cannot fulfill that preventive purpose. The next frontier of AI legal scholarship is the problem of AI explainability.

8.
Privacy by Design (PbD) is a kind of precautionary legal technology design. It takes opportunities for fundamental rights without creating risks for them. The EU Commission "promised" to implement PbD through Art. 23(4) of its proposal for a General Data Protection Regulation, which suggests setting up a committee that can define technical standards for PbD. However, the Commission did not keep its promise. Should it be left to the IT security experts who sit on the committee, but who lack legal expertise, to decide on our privacy or, through overly detailed specifications, to prevent businesses from marketing innovative products? This paper asserts that the Commission's implementation of PbD is not acceptable as it stands and makes positive contributions to the work of a future PbD committee so that the Commission can keep its promise to introduce precautionary legal technology design.

9.
On the Construction of a Technical Appraisal System for Medical Damage in China
刘鑫  梁俊超 《证据科学》2011,19(3):261-274
The Tort Liability Law still has not resolved the dualism of China's medical appraisal system. The two existing models, technical appraisal of medical malpractice by medical associations and forensic appraisal of medical damage, each have advantages and disadvantages, and the forensic model is not superior to the medical-association model. In judging specialized technical questions, the medical damage appraisal models of Japan, Germany, the Netherlands, and the United States all follow the principle of peer review. Constructing China's system of technical appraisal of medical damage should adhere to the overall ideas of making full use of existing appraisal resources, combining the strengths of both current models as far as possible, and separating legal questions from technical questions, as well as to the basic principles of openness, remedy, debate, semi-professionalization of appraisal experts, scientific appraisal methods, and legal guidance. As to the specific design of the system, the appraisal should be named "medical damage appraisal" or "technical appraisal of medical damage"; the new appraisal institutions should be built on the existing medical-association bodies for technical appraisal of medical malpractice, with forensic experts required to participate. The sources of appraisal experts and the composition of expert panels and the expert pool should be adjusted, appraisal procedures improved, appraisal theories and methods established, appraisal principles clarified, and the content of technical appraisal of medical damage expanded. Alternatively, China could draw on the Japanese model, in which the medical association maintains an expert pool and the court initiates and organizes the appraisal.

10.
Artificial intelligence (AI), at the level of development reached today, has become a scientific reality that is studied in law, political science, and other social sciences in addition to computer and software engineering. AI systems, which performed relatively simple tasks in the early stages of development, are expected to become fully or largely autonomous in the near future. As a result, AI, which encompasses machine learning, deep learning, and autonomy, has begun to play an important role in the production and use of smart weapons. However, questions about AI-Based Lethal Weapon Systems (AILWS) and the attacks such systems can carry out have not been fully answered from a legal perspective. In particular, it remains controversial who will be responsible for the actions an AILWS has committed. In this article, we discuss whether AILWS can commit an offense in the context of the Rome Statute, examine the law applicable to the responsibility of AILWS, and assess whether these systems can be held responsible in the context of international law, the crime of aggression, and individual responsibility. Our finding is that international legal rules, including the Rome Statute, can be applied to responsibility for an act/crime of aggression caused by AILWS. However, no matter how advanced the cognitive capacity of AI software, it will not be possible to resort to the personal responsibility of such a system, since it has no legal personality at all. In such a case, responsibility remains with the actors who design, produce, and use the system. Last but not least, since no AILWS software today has built-in codes of conduct capable of legal and ethical reasoning, the study recommends that states and non-governmental organizations, together with manufacturers, establish the necessary ethical rules in software programs to prevent these systems from committing unlawful acts and develop mechanisms that keep AI from operating outside human control.

11.
From the standpoint of logic, legal reasoning is non-monotonic. The age of artificial intelligence makes the need for a corresponding defeasible reasoning model all the more apparent. Defeasible reasoning need not be formalized in a defeasible logic, but doing so is better suited to an AI environment. The defeasibility of legal reasoning derives from the defeasibility of legal rules: formalizing a legal rule requires representing its conditions as two kinds of elements, "elements to be proven" (P-elements) and "elements not yet rebutted" (NR-elements); introducing the latter properly handles the relationship between rules and exceptions. On this basis, a basic model of defeasible legal reasoning can be constructed by introducing three kinds of defeaters: rebutting defeaters, undercutting defeaters, and undermining defeaters. This also reveals the limits of automating defeasible legal reasoning: at its core, such a model cannot perform the value judgments that are indispensable to adjudication.
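The rule structure sketched in this abstract lends itself to a small worked example. The following is a minimal, purely illustrative sketch (not taken from the paper) of defeasible rules with "to-be-proven" (P) elements, "not-rebutted" (NR) elements, and undercutting defeat; all rule names, facts, and data structures are hypothetical.

```python
# Minimal sketch of defeasible legal rules: P-elements must be proven,
# NR-elements must remain unestablished, and a rule can be undercut.
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    p_elements: set       # facts that must be positively proven
    nr_elements: set      # exception facts that must NOT be established
    conclusion: str

@dataclass
class KnowledgeBase:
    proven: set = field(default_factory=set)      # established facts
    exceptions: set = field(default_factory=set)  # established exceptions (rebutting defeat)
    undercut: set = field(default_factory=set)    # rules whose applicability is attacked

def applies(rule: Rule, kb: KnowledgeBase) -> bool:
    """A rule yields its conclusion only if it is not undercut, every
    P-element is proven, and no NR-element has been established."""
    if rule.name in kb.undercut:
        return False
    if not rule.p_elements <= kb.proven:
        return False
    return not (rule.nr_elements & kb.exceptions)

# Hypothetical rule: a contract is valid if offer and acceptance are proven,
# unless duress is established.
contract_rule = Rule("contract_valid",
                     p_elements={"offer", "acceptance"},
                     nr_elements={"duress"},
                     conclusion="valid_contract")

kb = KnowledgeBase(proven={"offer", "acceptance"})
print(applies(contract_rule, kb))   # True: conclusion holds defeasibly
kb.exceptions.add("duress")         # new information defeats the conclusion
print(applies(contract_rule, kb))   # False: non-monotonic retraction
```

The deliberately crude Boolean test also illustrates the abstract's closing point: nothing in such a model performs the value judgments that adjudication requires.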

12.
This article examines the problem of AI memory and the Right to Be Forgotten. First, this article analyzes the legal background behind the Right to Be Forgotten, in order to understand its potential applicability to AI, including a discussion on the antagonism between the values of privacy and transparency under current E.U. privacy law. Next, the authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine learning context, in order to understand whether and how the law should address the Right to Be Forgotten in a post-AI world. The authors discuss the technical problems faced when adhering to strict interpretation of data deletion requirements under the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.

13.
The development of information and communication technology in health care, also called eHealth, is expected to improve patient safety and facilitate more efficient use of limited resources. The introduction of electronic health records (EHRs) can make possible immediate, even automatic transfer of patient data, for health care as well as other purposes, across any kind of institutional, regional or national border. Data can thus be shared and used more effectively for quality assurance, disease surveillance, public health monitoring and research. eHealth may also facilitate patient access to health information and medical treatment, and is seen as an effective tool for patient empowerment. At the same time, eHealth solutions may jeopardize both patient safety and patients' rights, unless carefully designed and used with discretion. The success of EHR systems will depend on public trust in their compatibility with fundamental rights, such as privacy and confidentiality. Shared European EHR systems require interoperability not only with regard to technological and semantic standards, but also concerning legal, social and cultural aspects. Since the area of privacy and medical confidentiality is far from harmonized across Europe, we are faced with a diversity that will make fully shared EHR systems a considerable challenge.

14.
Anonymization is viewed as an instrument by which personal data can be rendered anonymous so that it can be processed further, without harming data subjects' private lives, for purposes that are beneficial to the public good. The anonymization is fair if the possibility of re-identification can be practically excluded and the data processor does all that he or she can to ensure this. For fair anonymization, simply removing the primary personal identifiers, such as the name, residential address, phone number and email address, is not enough, as many papers have warned. Therefore, new guidance documents, and even legal rules such as the HIPAA Privacy Rule on de-identification, may improve the security of anonymization. Researchers are continuously testing the efficiency of anonymization methods and simulating re-identification attacks. Since the US and Canada do not have a population registry, re-identification experiments there were carried out with the help of other publicly available databases, such as census data or voter databases. Unfortunately, neither of these is complete and sufficiently detailed, so the computed risk was only an estimate. The author obtained zip code, gender, and date-of-birth distribution data from the Hungarian population registry and computed re-identification risks in several simulated cases. This paper also gives an insight into the legal environment of Hungarian personal medical data protection legislation.
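As a rough illustration of the kind of computation described here, the sketch below estimates re-identification risk from quasi-identifier combinations (postal code, gender, date of birth) via equivalence-class sizes, in the spirit of k-anonymity; the records and numbers are invented for the example and are not the author's data.

```python
# Illustrative estimate of re-identification risk from quasi-identifiers.
from collections import Counter

# Hypothetical population records: (postal_code, gender, birth_date)
records = [
    ("1011", "F", "1980-03-02"),
    ("1011", "F", "1980-03-02"),
    ("1011", "M", "1975-07-19"),
    ("9400", "F", "1992-11-30"),
]

def reidentification_risk(rows):
    """Per-record risk = 1 / size of its equivalence class; a class of
    size 1 means the quasi-identifiers single that person out."""
    class_sizes = Counter(rows)
    avg_risk = sum(1 / class_sizes[r] for r in rows) / len(rows)
    unique_share = sum(1 for r in rows if class_sizes[r] == 1) / len(rows)
    return avg_risk, unique_share

avg_risk, unique_share = reidentification_risk(records)
print(f"average risk: {avg_risk:.2f}, uniquely identifiable: {unique_share:.0%}")
```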

15.
Epidemiologic research often relies on existing data, collected for nonepidemiologic reasons, to support studies. Data are obtained from hospital records, police reports, labor reports, death certificates, or other sources. Medical examiner/coroner records are, however, not often used in epidemiologic studies. The National Institute for Occupational Safety and Health's Division of Safety Research has begun using these records in its research program on work-related trauma. Because medical examiners and coroners have the legal authority and responsibility to investigate all externally caused deaths, these records can be used in surveillance of these deaths. Another use of these records is to validate cases identified by other case ascertainment methods, such as death certificates. Using medical examiner/coroner records also allows rapid identification of work-related deaths without waiting several years for mortality data from state offices of vital statistics. Finally, the records are an invaluable data source since they contain detailed information on the nature of the injury, external cause of death, and results of toxicologic testing, which is often not available from other sources. This paper illustrates some of the ways that medical examiner/coroner records are a valuable source of information for epidemiologic studies and makes recommendations to improve their usefulness.

16.
In the age of artificial intelligence (AI), robots have profoundly impacted our life and work and have challenged our civil legal system. In the course of AI development, robots need to be designed with attention to the protection of personal privacy, data privacy, and intellectual property rights, as well as to the identification and determination of tort liability. In addition, China needs an updated Civil Code in line with the growth of AI. All measures should aim to address AI challenges while also providing the needed institutional space for the development of AI and other emerging technologies.

17.
This article explores existing data protection law provisions in the EU and in six other jurisdictions from around the world - with a focus on Latin America - that apply to at least some forms of the processing of data typically part of an Artificial Intelligence (AI) system. In particular, the article analyzes how data protection law applies to “automated decision-making” (ADM), starting from the relevant provisions of the EU's General Data Protection Regulation (GDPR). Rather than being a conceptual exploration of what constitutes ADM and how “AI systems” are defined by current legislative initiatives, the article proposes a targeted approach that focuses strictly on ADM and how data protection law already applies to it in real-life cases. First, the article shows how GDPR provisions have been enforced by courts and Data Protection Authorities (DPAs) in the EU in numerous cases where ADM is at the core of the facts of the case. After showing that the safeguards in the GDPR already apply to ADM in real-life cases, even where ADM does not meet the high threshold in its specialized provision in Article 22 (“solely” ADM which results in “legal or similarly significant effects” on individuals), the article includes a brief comparative law analysis of six jurisdictions that have adopted general data protection laws (Brazil, Mexico, Argentina, Colombia, China and South Africa) and that are visibly inspired by GDPR provisions or its predecessor, Directive 95/46/EC, including those provisions relevant to ADM. The ultimate goal of this study is to support researchers, policymakers and lawmakers in understanding how existing data protection law applies to ADM and profiling.

18.
The use of artificial intelligence (AI) in law has again become of great interest to lawyers and government. Legal Information Institutes (LIIs) have played a significant role in the provision of legal information via the web. The concept of ‘free access to law’ is not static, and the evolution of its principles now requires a response from providers of free access to legal information (‘a LII response’) to this renewed prominence of AI. This should include improving and expanding free access to legal advice. This paper proposes, and proposes to test, one approach that LIIs might take in the use of AI (specifically, ‘decision support’ or ‘intelligent assistance’ (IA) technologies), an approach that leverages the very large legal information assets that some LIIs have built over the past two decades. This approach focuses on how LIIs can assist providers of free legal advice (the ‘legal assistance sector’) to serve their clients. We consider the constraints that the requirement of ‘free’ imposes (on both the legal assistance sector and on LIIs), including on what types of free legal advice systems are sustainable, and what roles LIIs may realistically play in the development of such a ‘commons of free legal advice’. We suggest guidelines for development of such systems. The AI-related services and tools that the Australasian Legal Information Institute (AustLII) is providing (the ‘DataLex’ platform), and how they could be used to achieve these goals, are outlined.

19.
Organisations can use artificial intelligence to make decisions about people for a variety of reasons, for instance, to select the best candidates from many job applications. However, AI systems can have discriminatory effects when used for decision-making. To illustrate, an AI system could reject applications from people of a certain ethnicity, even though the organisation did not intend such ethnicity discrimination. But in Europe, an organisation runs into a problem when it wants to assess whether its AI system accidentally discriminates based on ethnicity: the organisation may not know the applicants' ethnicity. In principle, the GDPR bans the use of certain ‘special categories of data’ (sometimes called ‘sensitive data’), which include data on ethnicity, religion, and sexual preference. The proposal for an AI Act of the European Commission includes a provision that would enable organisations to use special categories of data for auditing their AI systems. This paper asks whether the GDPR's rules on special categories of personal data hinder the prevention of AI-driven discrimination. We argue that the GDPR does prohibit such use of special category data in many circumstances. We also map out the arguments for and against creating an exception to the GDPR's ban on using special categories of personal data, to enable preventing discrimination by AI systems. The paper discusses European law, but it can be relevant outside Europe too, as many policymakers around the world grapple with the tension between privacy and non-discrimination policy.

20.
In recent years a substantial literature has emerged concerning bias, discrimination, and fairness in artificial intelligence (AI) and machine learning. Connecting this work to existing legal non-discrimination frameworks is essential to create tools and methods that are practically useful across divergent legal regimes. While much work has been undertaken from an American legal perspective, comparatively little has mapped the effects and requirements of EU law. This Article addresses this critical gap between legal, technical, and organisational notions of algorithmic fairness. Through analysis of EU non-discrimination law and jurisprudence of the European Court of Justice (ECJ) and national courts, we identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. A clear gap exists between statistical measures of fairness as embedded in myriad fairness toolkits and governance mechanisms and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements used by the ECJ; we refer to this approach as “contextual equality.”

This Article makes three contributions. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Many of the concepts fundamental to bringing a claim, such as the composition of the disadvantaged and advantaged group, the severity and type of harm suffered, and requirements for the relevance and admissibility of evidence, require normative or political choices to be made by the judiciary on a case-by-case basis. We show that automating fairness or non-discrimination in Europe may be impossible because the law, by design, does not provide a static or homogenous framework suited to testing for discrimination in AI systems.

Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Equivalent signalling mechanisms and agency do not exist in algorithmic systems. Compared to traditional forms of discrimination, automated discrimination is more abstract and unintuitive, subtle, intangible, and difficult to detect. The increasing use of algorithms disrupts traditional legal remedies and procedures for detection, investigation, prevention, and correction of discrimination which have predominantly relied upon intuition. Consistent assessment procedures that define a common standard for statistical evidence to detect and assess prima facie automated discrimination are urgently needed to support judges, regulators, system controllers and developers, and claimants.

Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. A ‘gold standard’ for assessment of prima facie discrimination has been advanced by the European Court of Justice but not yet translated into standard assessment procedures for automated discrimination. We propose ‘conditional demographic disparity’ (CDD) as a standard baseline statistical measurement that aligns with the Court's ‘gold standard’. Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law.
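To give a concrete sense of the proposed baseline measurement, the sketch below implements one common reading of conditional demographic disparity (per-stratum demographic disparity, averaged with stratum-size weights); the data, group labels, and exact formulation are assumptions for illustration, not the authors' implementation.

```python
# Illustrative conditional demographic disparity (CDD) computation.
from collections import defaultdict

# Hypothetical decision records: (stratum, in_protected_group, accepted)
records = [
    ("dept_A", True,  False), ("dept_A", True,  True),
    ("dept_A", False, True),  ("dept_A", False, True),
    ("dept_B", True,  False), ("dept_B", False, False),
    ("dept_B", False, True),  ("dept_B", True,  True),
]

def demographic_disparity(rows):
    """Share of rejections hitting the protected group minus the share of
    acceptances going to it (positive values indicate disadvantage)."""
    rejected = [r for r in rows if not r[2]]
    accepted = [r for r in rows if r[2]]
    if not rejected or not accepted:
        return 0.0
    return (sum(r[1] for r in rejected) / len(rejected)
            - sum(r[1] for r in accepted) / len(accepted))

def conditional_demographic_disparity(rows):
    """Average the per-stratum disparities, weighted by stratum size."""
    strata = defaultdict(list)
    for r in rows:
        strata[r[0]].append(r)
    n = len(rows)
    return sum(len(v) / n * demographic_disparity(v) for v in strata.values())

print(f"CDD = {conditional_demographic_disparity(records):+.3f}")
```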
