Similar Documents
20 similar documents found (search time: 93 ms)
1.
The paper examines how the EU General Data Protection Regulation (GDPR) applies to the development of AI products and services, drawing attention to the differences between academic and commercial research. The GDPR aims to encourage innovation by providing several exemptions from its strict rules for scientific research. Still, the GDPR defines scientific research broadly, encompassing both academic and commercial research. However, corporations conducting commercial research might not have in place the same level of ethical and institutional safeguards as academic researchers. Furthermore, corporate secrecy and opaque algorithms in AI research might pose barriers to oversight. The aim of this paper is to stress the limits of the GDPR research exemption and to find the proper balance between privacy and innovation. The paper argues that commercial AI research should not benefit from the GDPR research exemption unless it serves a public interest and is subject to safeguards similar to those of academic research, such as review by research ethics committees. Since the GDPR provides this broad exemption, it is crucial to clarify the limits and requirements of scientific research before the application of AI drastically transforms this field.

2.
This article uses the example of the cryptocurrency Bitcoin and the General Data Protection Regulation (GDPR) to show how distributed networks challenge existing legal mechanisms of allocating responsibility. The Bitcoin network stores personal data by automated means. Furthermore, full nodes qualify as establishments and the network offers a service to citizens in the EU. The data processing within the Bitcoin network therefore falls into the material and territorial scope of the GDPR. To protect data subjects, the GDPR allocates responsibility to the controller, who determines the ‘how’ and the ‘why’ of the data processing. However, the distributed structure of the Bitcoin network blurs the lines between actors who are responsible and actors who are worth protecting. Neither the Bitcoin users running lightweight nodes or full nodes nor the miners determine the ‘how’ and the ‘why’ of the data processing. They carry out their network activities according to the Bitcoin protocol, which can only be adopted and enforced by a collective of full nodes and miners. Members of this collective are joint controllers under Article 26 GDPR, which obliges them to clearly and transparently determine their respective responsibilities for compliance with the GDPR. However, this mechanism fails because of the very structure it aims to eliminate. Therefore, a solution to allocating responsibility for data protection in distributed networks lies outside the GDPR.

3.
The paper illustrates the obligations emerging under Articles 9 and 89 of EU Regulation 2016/679 (the General Data Protection Regulation, hereinafter “GDPR”) for the processing of health-related data for research purposes. Furthermore, through a comparative analysis of the national implementations of the GDPR on the topic, the paper highlights a few practical issues that researchers may face while complying with the GDPR obligations and with other ethical requirements. The results of the analysis make it possible to build a model for achieving an acceptable standard of accountability in health-related data research. The legal remarks are framed within the myth of Ulysses.

4.
This article examines the two major international data transfer schemes in existence today – the European Union (EU) model, which at present is effectively the General Data Protection Regulation (GDPR), and the Asia-Pacific Economic Cooperation (APEC) Cross Border Privacy Rules system (CBPR) – in the context of the Internet of Things (IoT).

While IoT data ostensibly relates to things, i.e. products and services, it impacts individuals and their data protection and privacy rights, and raises compliance issues for corporations, especially in relation to international data flows. The GDPR regulates the processing of personal data of individuals who are EU data subjects, including cross-border data transfers. As an EU Regulation, the GDPR applies directly as law to EU member nations. The GDPR also has extensive extraterritorial provisions that apply to the processing of personal data outside the EU, regardless of the place of incorporation and geographical area of operation of the data controller/processor. There are a number of ways in which the GDPR enables lawful international transfer of personal data, including schemes that are broadly similar to APEC CBPR.

APEC CBPR is the other major regional framework regulating the transfer of personal data between APEC member nations. It is essentially a voluntary accountability scheme that initially requires acceptance at country level, followed by independent certification by an accountability agent of the organization wishing to join the scheme. APEC CBPR is viewed by many in the United States of America (US) as preferable to the EU approach because CBPR is considered more conducive to business than its counterpart schemes under the GDPR, and is therefore regarded as the scheme most likely to prevail.

While there are broad areas of similarity between the EU and APEC approaches to data protection in the context of cross-border data transfer, there are also substantial differences. This paper considers the similarities and major differences, and the overall suitability of the two models for the era of the IoT, in which large amounts of personal data are processed on an ongoing basis from connected devices around the world. This is the first time the APEC and GDPR cross-border data schemes have been compared in this way. The paper concludes with the author expressing a view as to which scheme is likely to set the global standard.

5.
The aim of this paper is to analyse the recently approved national laws of the Member States that have implemented the GDPR in the field of automated decision-making (prohibition, exceptions, safeguards). All national legislation has been analysed, and in particular nine Member States' laws address the case of automated decision-making, providing specific exemptions and relevant safeguards as requested by Article 22(2)(b) of the GDPR (Belgium, the Netherlands, France, Germany, Hungary, Slovenia, Austria, the United Kingdom, Ireland).

The approaches are very diverse: the scope of the provision can be narrow (just automated decisions producing legal or similarly detrimental effects) or wide (any decision with a significant impact), and even the specific safeguards proposed vary considerably.

After this overview, this article also addresses the following questions: are Member States free to broaden the scope of automated decision-making regulation? Are ‘positive decisions’ allowed under Article 22 GDPR, as some Member States seem to affirm? Which safeguards can best guarantee the rights and freedoms of the data subject?

In particular, while most Member States refer only to the three safeguards mentioned in Article 22(3) (i.e. the data subject's right to express his or her point of view, the right to obtain human intervention, and the right to contest the decision), three approaches seem very innovative: a) some States guarantee a right to legibility/explanation of algorithmic decisions (France and Hungary); b) other States (Ireland and the United Kingdom) regulate human intervention in algorithmic decisions through an effective accountability mechanism (e.g. notification, explanation of why a contestation has not been accepted, etc.); c) another State (Slovenia) requires an innovative form of human rights impact assessment for automated decision-making.

6.
The EU faces substantive legislative reform in data protection, specifically in the form of the General Data Protection Regulation (GDPR). One of the new elements in the GDPR is its call to establish data protection certification mechanisms, data protection seals and marks to help enhance transparency and compliance with the Regulation and allow data subjects to quickly assess the level of data protection of relevant products and services. To this effect, it is necessary to review privacy and data protection seals afresh and determine how data protection certification mechanisms, seals or marks might work given the role they will be called to play, particularly in Europe, in facilitating data protection. This article reviews the current state of play of privacy seals, the EU policy and regulatory thrusts for privacy and data protection certification, and the GDPR provisions on certification of the processing of personal data. The GDPR leaves substantial room for various options on data protection certification, which might play out in various ways, some of which are explored in this article.

7.
In this paper we introduce the concept of ‘reviewability’ as an alternative approach to improving the accountability of automated decision-making that involves machine learning systems. In doing so, we draw on an understanding of automated decision-making as a socio-technical process, involving both human (organisational) and technical components, beginning before a decision is made and extending beyond the decision itself. Although explanations for automated decisions may be useful in some contexts, they focus more narrowly on the model and therefore do not provide the information about that process as a whole that is necessary for many aspects of accountability, regulatory oversight, and assessments for legal compliance. Drawing on previous work on the application of administrative law and judicial review mechanisms to automated decision-making in the public sector, we argue that breaking down the automated decision-making process into its technical and organisational components allows us to consider how appropriate record-keeping and logging mechanisms implemented at each stage of that process would allow for the process as a whole to be reviewed. Although significant research is needed to explore how it can be implemented, we argue that a reviewability framework potentially offers a more useful and more holistic form of accountability for automated decision-making than approaches focused more narrowly on explanations.

8.
Whilst the legal debate concerning automated decision-making has been focused mainly on whether a ‘right to explanation’ exists in the GDPR, the emergence of ‘explainable Artificial Intelligence’ (XAI) has produced taxonomies for the explanation of Artificial Intelligence (AI) systems. However, various researchers have warned that transparency of the algorithmic processes in itself is not enough. Better and easier tools for the assessment and review of the socio-technical systems that incorporate automated decision-making are needed. The PLEAD project suggests that, aside from fulfilling the obligations set forth by Article 22 of the GDPR, explanations can also assist towards a holistic compliance strategy if used as detective controls. PLEAD aims to show that computable explanations can facilitate monitoring and auditing, and make compliance more systematic. Automated computable explanations can be key controls in fulfilling accountability and data-protection-by-design obligations, able to empower both controllers and data subjects. This opinion piece presents the work undertaken by the PLEAD project towards facilitating the generation of computable explanations. PLEAD leverages provenance-based technology to compute explanations as external detective controls to the benefit of data subjects and as internal detective controls to the benefit of the data controller.

9.
This article explores existing data protection law provisions in the EU and in six other jurisdictions from around the world - with a focus on Latin America - that apply to at least some forms of the processing of data typically part of an Artificial Intelligence (AI) system. In particular, the article analyzes how data protection law applies to “automated decision-making” (ADM), starting from the relevant provisions of the EU's General Data Protection Regulation (GDPR). Rather than being a conceptual exploration of what constitutes ADM and how “AI systems” are defined by current legislative initiatives, the article proposes a targeted approach that focuses strictly on ADM and how data protection law already applies to it in real-life cases. First, the article shows how GDPR provisions have been enforced by Courts and by Data Protection Authorities (DPAs) in the EU, in numerous cases where ADM is at the core of the facts of the case considered. After showing that the safeguards in the GDPR already apply to ADM in real-life cases, even where ADM does not meet the high threshold of its specialized provision in Article 22 (“solely” ADM which results in “legal or similarly significant effects” on individuals), the article includes a brief comparative law analysis of six jurisdictions that have adopted general data protection laws (Brazil, Mexico, Argentina, Colombia, China and South Africa) and that are visibly inspired by GDPR provisions or its predecessor, Directive 95/46/EC, including those provisions that are relevant for ADM. The ultimate goal of this study is to support researchers, policymakers and lawmakers in understanding how existing data protection law applies to ADM and profiling.

10.
The EU lawmaker has introduced several certification models in the GDPR. A first model entitles accredited private certification bodies to design and manage certification schemes under the close monitoring of the supervisory authorities. Another model gives the supervisory authorities the opportunity to design and manage their own schemes. The EU lawmaker has also left the door open to the establishment of schemes at the margin of the data protection framework: nothing in the GDPR prohibits the creation of certification schemes outside the Articles 42/43 regime. This diversity of arrangements shows that certification is a flexible system capable of adapting to many different situations and environments. It is also a free market that proves difficult, if not impossible, to monitor entirely. These basic features challenge the EU lawmaker's attempt to monitor the design and management of certification schemes in the GDPR. The GDPR also shows that the definition of certification suggested by the European Data Protection Board does not fully map onto this notion as designed in the GDPR: the data protection regulation offers a much more detailed picture of certification than the one proposed by the European Data Protection Board. The GDPR shows that the nature of certification is driven by the context in which the instrument is used. The analysis of the monitoring process for the codes of conduct set out in Article 41 GDPR contributes, by contrast, to clarifying the very nature of certification. It shows that it is neither the attestation of conformity nor the conformity assessment that best defines certification.

11.
There has naturally been a good deal of discussion of the forthcoming General Data Protection Regulation. One issue of interest to all data controllers, and of particular concern for researchers, is whether the GDPR expands the scope of personal data through the introduction of the term ‘pseudonymisation’ in Article 4(5). If all data which have been ‘pseudonymised’ in the conventional sense of the word (e.g. key-coded) are to be treated as personal data, this would have serious implications for research. Administrative data research, which is carried out on data routinely collected and held by public authorities, would be particularly affected, as the sharing of de-identified data could constitute the unconsented disclosure of identifiable information.

Instead, however, we argue that the definition of pseudonymisation in Article 4(5) GDPR will not expand the category of personal data, and that there is no intention that it should do so. The definition of pseudonymisation under the GDPR is not intended to determine whether data are personal data; indeed it is clear that all data falling within this definition are personal data. Rather, it is Recital 26 and its requirement of a ‘means reasonably likely to be used’ which remains the relevant test as to whether data are personal. This leaves open the possibility that data which have been ‘pseudonymised’ in the conventional sense of key-coding can still be rendered anonymous. There may also be circumstances in which data which have undergone pseudonymisation within one organisation could be anonymous for a third party. We explain how, with reference to the data environment factors as set out in the UK Anonymisation Network's Anonymisation Decision-Making Framework.

12.
The GDPR mandates humans to intervene in different ways in automated decision-making (ADM). Similar human intervention mechanisms can be found amongst the human oversight requirements in the future regulation of AI in the EU. However, Article 22 GDPR has become an unenforceable second-class right, following the fate of its direct precedent (Article 15 of the 1995 Data Protection Directive). Why, then, should European policymakers rely on mandatory human intervention as a governance mechanism for ADM systems? Our approach aims to move away from a view of human intervention as an individual right towards a procedural right that is part of the culture of accountability in the GDPR. The core idea of making humans meaningfully intervene in ADM is to help controllers comply with regulation and to demonstrate compliance. Yet, human intervention alone is not sufficient to achieve appropriate human oversight for these systems. Human intervention will not work without human governance. This is why DPIAs should play a key role before human intervention is introduced and throughout the life-cycle of the system. This approach fits better with the governance model proposed in the Artificial Intelligence Act. Human intervention is not a panacea, but we claim that it should be better understood and integrated into the regulatory ecosystem to achieve appropriate oversight over ADM systems.

13.
Article 35 of the GDPR introduces the legal obligation to perform DPIAs in cases where the processing operations are likely to present high risks to the rights and freedoms of natural persons. This obligation is part of a change of approach in the GDPR towards a modified compliance scheme in terms of a reinforced principle of accountability. The DPIA is a prominent example of this approach, given that it has an inclusive, comprehensive and proactive nature. Its importance lies in the fact that it forces data controllers to identify, assess and ultimately manage the high risks to the rights and freedoms of natural persons. However, what is first and foremost important for a meaningful performance of DPIAs is to have a common and objective understanding of what constitutes a risk in the field of data protection and of how to assess its likelihood and severity. The legislature has approached these concepts via the method of denotation, that is, by giving examples of (highly) risky processing operations. This article suggests a complementary approach, the connotation of these concepts, and explains the added value of such a method. By way of a case study, the article also demonstrates the importance of performing complete and accurate DPIAs, in terms of contributing to improving the protection of personal data.

14.
This article reports on preliminary findings and recommendations of a cross-discipline project to accelerate international business-to-business automated sharing of cyber threat intelligence, particularly IP addresses. The article outlines the project and its objectives and the importance of determining whether IP addresses can be lawfully shared as cyber threat intelligence.

The goal of the project is to enhance cyber threat intelligence sharing throughout the cyber ecosystem. The findings and recommendations from this project enable businesses to navigate the international legal environment and develop their policies and procedures to enable timely, effective and legal sharing of cyber threat information. The project is the first of its kind in the world. It is unique in both focus and scope. Unlike the cyber threat information sharing reviews and initiatives being developed at country and regional levels, the focus of this project and this article is on business-to-business sharing. The scope of this project, in terms of the 34 jurisdictions reviewed as to their data protection requirements, is more comprehensive than any similar study to date.

This article focuses on the sharing of IP addresses as cyber threat intelligence in the context of the new European Union (EU) data protection initiatives agreed in December 2015 and formally adopted by the European Council and Parliament in April 2016. The new EU General Data Protection Regulation (GDPR) applies to EU member countries, a major focus of the international cyber threat sharing project. The research also reveals that EU data protection requirements, particularly the currently applicable law of the Data Protection Directive 95/46/EC (1995 Directive) (the rules of which the GDPR will replace in practice in 2018), generally form the basis of current data protection requirements in countries outside Europe. It is expected that this influence will continue and that the GDPR will shape the development of data protection internationally.

In this article, the authors examine whether static and dynamic IP addresses are “personal data” as defined in the GDPR and its predecessor, the 1995 Directive, which is currently the model for data protection in many jurisdictions outside Europe. The authors then consider whether sharing of that data by a business without the consent of the data subject can be justified in the public interest so as to override individual rights under Articles 7 and 8(1) of the Charter of Fundamental Rights of the European Union, which underpin EU data protection. The analysis shows that the sharing of cyber threat intelligence is in the public interest so as to override the rights of a data subject, as long as it is carried out in ways that are strictly necessary in order to achieve security objectives. The article concludes by summarizing the project findings to date, and how they inform international sharing of cyber threat intelligence within the private sector.

15.
This article examines the intersection of the GDPR and selected due process requirements in the context of automated administrative decision-making. It finds that the safeguards for decisions based solely on automated data processing provided by the GDPR coincide with or serve a comparable function to traditional administrative due process elements such as the duty to give reasons, the duty of care principle, and the right to a hearing. The automation of decision-making by public authorities across the EU will therefore be regulated by an overlap of national administrative procedures and the GDPR. This overlap, however, leads to a paradoxical problem: on the one hand, the GDPR is an inflexible legal instrument aimed at setting out in detail the rights of data subjects and the obligations of data controllers, and it does not offer national legislators much room to align its terms with national administrative procedures. On the other, the GDPR's broad language makes it susceptible to interpretations embedded in the elaborated practices of the national administrations. The unclear relationship between national administrative procedures and the GDPR may undermine its main purpose – to establish an equal level of protection in all EU Member States through its ‘consistent and homogenous application’. After outlining the main challenges in this regard, the article concludes with a call for further research and regulatory framework adjustments aimed at developing a better governance regime for automated administrative decision-making that would allow for embracing technological progress while keeping threats to individual rights in check.

16.
Liberalization of key network industries is often said to reduce accountability by undermining its traditional mechanisms. Liberalization, others say, promotes accountability by creating new channels and mechanisms. This article suggests that neither view is sufficiently nuanced. Accountability comes in many forms, and the question is less "how much" accountability there is, but what form it takes. And accountability will take different forms in relation to different issues, even within the same organization. Examining accountability in relation to the provision of universal service in electricity and telecommunications, this article demonstrates that in the regimes studied, agencies were generally accountable for providing universal service by deferring, to the maximum possible extent, to political actors or stakeholders. However, when faced with an expert technical question—in this case, determining the costs of the universal service—agencies stressed their professional judgment and transparency. This observation supports a wider hypothesis concerning the conditions under which a variety of agency accountability strategies may be adopted.

17.
The goal of this contribution is to understand the notion of risk as it is enshrined in the General Data Protection Regulation (GDPR), with a particular focus on Art. 35, which provides for the obligation to carry out data protection impact assessments (DPIAs), the first risk management tool to be enshrined in EU data protection law, and which therefore contains a number of key elements for grasping the notion. The adoption of this risk-based approach has not come without a number of debates and controversies, notably on the scope and meaning of the risk-based approach. Yet, what has so far remained outside the debate is the very notion of risk itself, which underpins the whole risk-based approach. The contribution uses the notions of risk and risk analysis as tools for describing and understanding risk in the GDPR. One of the main findings is that GDPR risk is about “compliance risk” (i.e., the lower the compliance, the higher the consequences for the data subjects' rights). This stance is in direct contradiction with a number of positions arguing for a strict separation between compliance and risk issues. This contribution instead sees issues of compliance and of risk to the data subjects' rights and freedoms as deeply interconnected. The conclusion uses these discussions as a basis to address the long-standing debate on the differences between privacy impact assessments (PIAs) and DPIAs. It also warns that, ultimately, the way risk is defined in the GDPR is somewhat irrelevant: what matters most is the methodology used and the type of risk at work therein.

18.
This paper aims to critically assess the information duties set out in the General Data Protection Regulation (GDPR) and national adaptations when the purpose of processing is scientific research. Due to the peculiarities of the legal regime applicable to the research context, information about the processing plays a crucial role for data subjects. However, the analysis points out that the information obligations, or mandated disclosures, introduced in the GDPR are not entirely satisfactory and present some flaws.

In addition, the GDPR information duties risk suffering from the same shortcomings usually addressed in the literature about mandated disclosures. The paper argues that the principle of transparency, developed as a “user-centric” concept, can support the adoption of solutions that embed behavioural insights to better support the rationale of the information provision.

19.
The endorsement of certification in Articles 42 and 43 of the General Data Protection Regulation (hereinafter GDPR) extends the scope of this procedure to the enforcement of fundamental rights. The GDPR also leverages the high flexibility of this procedure to make certification something more than a voluntary process attesting conformity with technical standards. This paper argues that the GDPR has turned certification into a new regulatory instrument in data protection, which I suggest calling monitored self-regulation, seeking to fill the gap between self-regulation and traditional regulation in order to build a regulation continuum.

20.
The General Data Protection Regulation (GDPR) will come into force in the European Union (EU) in May 2018 to meet current challenges related to personal data protection and to harmonise data protection across the EU. Although the GDPR is anticipated to benefit companies by offering consistency in data protection activities and liabilities across the EU countries and by enabling more integrated EU-wide data protection policies, it poses new challenges to companies. They are not necessarily prepared for the changes and may lack awareness of the upcoming requirements and the GDPR's coercive measures. The implementation of the GDPR requirements demands substantial financial and human resources, as well as training of employees; hence, companies need guidance to support them in this transition. The purposes of this study were to compare the current Data Protection Directive 95/46/EC with the GDPR by systematically analysing their differences and to identify the GDPR's practical implications, specifically for companies that provide services based on personal data. This study aimed to identify and discuss the changes introduced by the GDPR that would have the most practical relevance to these companies and possibly affect their data management and usage practices. Therefore, a review and a thematic analysis and synthesis of the article-level changes were carried out. Through the analysis, the key practical implications of the changes were identified and classified. As a synthesis of the results, a framework was developed, presenting 12 aspects of these implications and the corresponding guidance on how to prepare for the new requirements. These aspects cover business strategies and practices, as well as organisational and technical measures.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号