Similar Literature

20 similar articles found.
1.
This article examines the two major international data transfer schemes in existence today – the European Union (EU) model, which at present is effectively the General Data Protection Regulation (GDPR), and the Asia-Pacific Economic Cooperation (APEC) Cross Border Privacy Rules system (CBPR) – in the context of the Internet of Things (IoT). While IoT data ostensibly relates to things, i.e. products and services, it impacts individuals and their data protection and privacy rights, and raises compliance issues for corporations, especially in relation to international data flows. The GDPR regulates the processing of personal data of individuals who are EU data subjects, including cross-border data transfers. As an EU Regulation, the GDPR applies directly as law to EU member nations. The GDPR also has extensive extraterritorial provisions that apply to processing of personal data outside the EU, regardless of the place of incorporation and geographical area of operation of the data controller/processor. There are a number of ways in which the GDPR enables lawful international transfer of personal data, including schemes that are broadly similar to APEC CBPR. APEC CBPR is the other major regional framework regulating transfer of personal data between APEC member nations. It is essentially a voluntary accountability scheme that initially requires acceptance at country level, followed by independent certification, by an accountability agent, of the organization wishing to join the scheme. APEC CBPR is viewed by many in the United States (US) as preferable to the EU approach because CBPR is considered more conducive to business than its counterpart schemes under the GDPR, and is therefore regarded as the scheme most likely to prevail. While there are broad areas of similarity between the EU and APEC approaches to data protection in the context of cross-border data transfer, there are also substantial differences.
This paper considers the similarities and major differences, and the overall suitability of the two models for the era of the IoT, in which large amounts of personal data are processed on an ongoing basis from connected devices around the world. This is the first time the APEC and GDPR cross-border data schemes have been compared in this way. The paper concludes with the author expressing a view as to which scheme is likely to set the global standard.

2.
A series of recent developments highlight the increasingly important role of online platforms in shaping data privacy in today's digital economy. Revelations and parliamentary hearings about privacy violations in Facebook's app and service partner ecosystem, EU Court of Justice judgments on the joint responsibility of platforms and platform users, and the rise of smartphone app ecosystems where app behaviour is governed by app distribution platforms and operating systems all show that platform policies can make or break the enjoyment of privacy by users. In this article, we examine these developments and explore the question of what can and should be the role of platforms in protecting the data privacy of their users. The article first distinguishes the different roles that platforms can have in ensuring respect for data privacy in relevant ecosystems. These roles include governing access to data, designing relevant interfaces and privacy mechanisms, setting legal and technical standards, policing the behaviour of the platform's (business) users, coordinating responsibility for privacy issues between platform users and the platform, and direct and indirect enforcement of a platform's data privacy standards on relevant players. At a higher level, platforms can also perform a role by translating different international regulatory requirements into platform policies, thereby facilitating compliance of apps in different regulatory environments. And in all of this, platforms are striking a balance between ensuring respect for data privacy in data-driven environments on the one hand and optimizing the value and business opportunities connected to the platform and underlying data for users of the platform on the other. After this analysis of platforms' roles in protecting privacy, the article turns to the question of what this role should be and how to better integrate platforms into the current legal frameworks for data privacy in Europe and the US.
The article argues for a compromise between direct regulation of platforms and mere self-regulation: platforms should be required to make official disclosures about their privacy-related policies and practices for their respective ecosystems. These disclosures should include statements about relevant conditions for access to data and the platform, the platform's standards with respect to privacy and the way in which these standards ensure or facilitate compliance with existing legal frameworks by platform users, and statements with respect to the risks of abuse of different data sources and platform tools and the actions taken to prevent or police such abuses. We argue that such integration of platforms into current regulatory frameworks is both feasible and desirable. It would make the role that platforms already have in practice more explicit. This would help to highlight best practices, create more accountability and could save significant regulatory and compliance resources by bringing relevant information together in one place. In addition, it could provide clarity for business users of platforms, who are now sometimes confronted with restrictive decisions by platforms in ways that lack transparency and oversight.

3.
Big data and machine learning algorithms have paved the way towards the bulk accumulation of tax and financial data, which is exploited either to provide novel financial services to consumers or to augment authorities with automated conformance checks. In this regard, international and EU policies toward collecting and exchanging large amounts of personal tax and financial data, in order to facilitate innovation and promote transparency in the financial and tax domain, have expanded substantially over recent years. However, this vast collection and utilization of "big" tax and financial data also raises concerns around privacy and data protection, especially when these data are fed to clever algorithms to build detailed personal profiles or to take automated decisions that may significantly affect people's lives. Ultimately, these practices of profiling tax and financial behaviour provide fertile ground for discriminatory processing of individuals and groups. In light of the above, this paper aims to shed light on the following four interdependent and highly disputed areas: first, to review the most well-known profiling and automated-decision risks emerging from big data technology and machine learning algorithmic processing, and to analyse their impact on tax and financial privacy rights through their immense profiling practices; second, to document the current EU initiatives toward financial and tax transparency, namely the AEOI, PSD2, MiFID2, and data retention policies, along with their implications for personal data protection when used for profiling and automated-decision purposes; third, to highlight the way forward for mitigating the risks of profiling and automated decisions in the big data era and to investigate the protection of individuals against these practices in light of the new technical and legal frameworks; in this respect, we finally delve into the regulatory EU efforts towards fairer and accountable profiling and automated-decision processes, and in particular we examine the extent to which the GDPR provisions establish a protection regime for individuals against advanced profiling techniques, thus enabling accountability and transparency.

4.
The paper examines how the EU General Data Protection Regulation (GDPR) is applied to the development of AI products and services, drawing attention to the differences between academic and commercial research. The GDPR aims to encourage innovation by providing several exemptions from its strict rules for scientific research. Still, the GDPR defines scientific research broadly, encompassing both academic and commercial research. However, corporations conducting commercial research may not have in place the same level of ethical and institutional safeguards as academic researchers. Furthermore, corporate secrecy and opaque algorithms in AI research may pose barriers to oversight. The aim of this paper is to stress the limits of the GDPR research exemption and to find the proper balance between privacy and innovation. The paper argues that commercial AI research should not benefit from the GDPR research exemption unless it serves a public interest and provides safeguards similar to those of academic research, such as review by research ethics committees. Since the GDPR provides this broad exemption, it is crucial to clarify the limits and requirements of scientific research before the application of AI drastically transforms this field.

5.
China's Personal Information Protection Law (PIPL), grounded in the consent of the information subject, establishes a regime of individually controlled direct use of personal information, but whether it provides a channel for circulation and secondary use remains doubtful. Depending on their identifying capacity, data can be divided into direct identifiers, indirect identifiers, and quasi-identifiers, which pose different levels of risk to individual rights and interests. Anonymization and de-identification, as provided in the PIPL, are in essence institutional arrangements addressing identification risks within a specific dataset: they can eliminate risks arising from the identifiability of the information itself, but can hardly eliminate risks arising from identifiability based on identification analysis. Therefore, lacking measures against "risks arising from identifiability based on identification analysis," the current rules on anonymization and de-identification cannot support the circulation and use of personal information. De-identification should be reconstructed into a controlled de-identification regime of "removal of direct identifiers plus identification control," which, on the premise of preventing and controlling identification risks, provides institutional safeguards for the circulation and use of personal information so as to maximize its social value.

6.
The endorsement of certification in Articles 42 and 43 of the General Data Protection Regulation (hereinafter GDPR) extends the scope of this procedure to the enforcement of fundamental rights. The GDPR also leverages the high flexibility of this procedure to make certification something more than a voluntary process attesting conformity with technical standards. This paper argues that the GDPR has turned certification into a new regulatory instrument in data protection, which I suggest calling monitored self-regulation, one that seeks to fill the gap between self-regulation and traditional regulation in order to build a regulatory continuum.

7.
The right to data portability is one of the most important novelties within the EU General Data Protection Regulation, both in terms of granting control rights to data subjects and in terms of sitting at the intersection between data protection and other fields of law (competition law, intellectual property, consumer protection, etc.). It thus constitutes a valuable case of development and diffusion of effective user-centric privacy-enhancing technologies, and a first tool allowing individuals to enjoy the immaterial wealth of their personal data in the data economy. Indeed, free portability of personal data from one controller to another can be a strong tool for data subjects to foster competition among digital services and interoperability of platforms, and to enhance individuals' control over their own data. However, the adopted formulation of the right to data portability in the GDPR could benefit from further clarification: several interpretations are possible, particularly with regard to the object of the right and its interrelation with other rights, potentially leading to additional challenges in its technical implementation. The aim of this article is to propose a first systematic interpretation of this new right, by suggesting a pragmatic and extensive approach, taking advantage as much as possible of the interrelationship that this new legal provision can have with the Digital Single Market and the fundamental rights of digital users. In sum, the right to data portability can be approached from two different perspectives: the minimalist approach (the adieu scenario) and the empowering approach (the fusing scenario), which the authors consider highly preferable.

8.
The goal of this contribution is to understand the notion of risk as it is enshrined in the General Data Protection Regulation (GDPR), with a particular focus on Art. 35, which provides for the obligation to carry out data protection impact assessments (DPIAs), the first risk management tool to be enshrined in EU data protection law, and which therefore contains a number of key elements needed to grasp the notion. The adoption of this risk-based approach has not come without debates and controversies, notably on the scope and meaning of the risk-based approach. Yet, what has so far remained outside the debate is the very notion of risk itself, which underpins the whole risk-based approach. The contribution uses the notions of risk and risk analysis as tools for describing and understanding risk in the GDPR. One of the main findings is that GDPR risk is about "compliance risk" (i.e., the lower the compliance, the higher the consequences for the data subjects' rights). This stance is in direct contradiction with a number of positions arguing for a strict separation between compliance and risk issues. This contribution instead sees issues of compliance and of risk to the data subjects' rights and freedoms as deeply interconnected. The conclusion will use these discussions as a basis to address the long-standing debate on the differences between privacy impact assessments (PIAs) and DPIAs. It will also warn that ultimately the way risk is defined in the GDPR is somewhat irrelevant: what matters most is the methodology used and the type of risk at work therein.

9.
The EU General Data Protection Regulation (GDPR) devotes particular attention to the protection of personal data of children. The rationale is that children are less aware of the risks and the potential consequences of the processing of their personal data on their rights. Yet, the text of the GDPR offers little clarity as to the actual implementation and impact of a number of provisions that may significantly affect children and their rights, leading to legal uncertainty for data controllers, parents and children. This uncertainty relates for instance to the age of consent for processing children's data in relation to information society services, the technical requirements regarding parental consent in that regard, the interpretation of the extent to which profiling of children is allowed and the level of transparency that is required vis-à-vis children. This article aims to identify a number of key issues and questions – both theoretical and practical – that raise concerns from a multi-dimensional children's rights perspective, and to clarify remaining ambiguities in the run-up to the actual application of the GDPR from 25 May 2018 onwards.

10.
The advent of DNA databanks: implications for information privacy
Genetic identification tests -- better known as DNA profiling -- currently allow criminal investigators to connect suspects to physical samples retrieved from a victim or the scene of a crime. A controversial yet acclaimed expansion of DNA analysis is the creation of a massive databank of genetic codes. This Note explores the privacy concerns arising out of the collection and retention of extremely personal information in a central database. The potential for unauthorized access by those not investigating a particular crime compels the implementation of national standards and stringent security measures.

11.
The Internet of Things (IoT) is changing the way data is collected and processed. The scale and variety of devices, communication networks, and protocols involved in data collection present critical challenges for data processing and analysis. Newer and more sophisticated methods for data integration and aggregation are required to enhance the value of real-time and historical IoT data. Moreover, the pervasive nature of IoT data presents a number of privacy threats because of intermediate data processing steps, including data acquisition, data aggregation, fusion and integration. User profiling and record linkage are well-studied topics in online social networks (OSNs); however, these have become more critical in IoT applications, where different systems share and integrate data and information. The proposed study aims to discuss the privacy threat of information linkage, and the technical and legal approaches to addressing it in a heterogeneous IoT ecosystem. The paper illustrates and explains information linkage during the process of data integration in a smart neighbourhood scenario. Through this work, the authors aim to enable a technical and legal framework to ensure stakeholder awareness and the protection of data subjects against privacy breaches due to information linkage.

12.
The EU faces substantive legislative reform in data protection, specifically in the form of the General Data Protection Regulation (GDPR). One of the new elements in the GDPR is its call to establish data protection certification mechanisms, data protection seals and marks to help enhance transparency and compliance with the Regulation and allow data subjects to quickly assess the level of data protection of relevant products and services. To this effect, it is necessary to review privacy and data protection seals afresh and determine how data protection certification mechanisms, seals or marks might work given the role they will be called to play, particularly in Europe, in facilitating data protection. This article reviews the current state of play of privacy seals, the EU policy and regulatory thrusts for privacy and data protection certification, and the GDPR provisions on certification of the processing of personal data. The GDPR leaves substantial room for various options on data protection certification, which might play out in various ways, some of which are explored in this article.

13.
We study variability in General Data Protection Regulation (GDPR) awareness in relation to digital experience in the 28 European countries of the EU-27 plus the UK, through secondary analysis of the Eurobarometer 91.2 survey conducted in March 2019 (N = 27,524). Education, occupation, and age are the strongest sociodemographic predictors of GDPR awareness, with little influence of gender, subjective economic well-being, or locality size. Digital experience is significantly and positively correlated with GDPR awareness in a linear model, but this relationship proves to be more complex when we examine it through a typological analysis. Using an exploratory k-means cluster analysis, we identify four clusters of digital citizenship across the two dimensions of digital experience and GDPR awareness: the off-line citizens (22%), the social netizens (32%), the web citizens (17%), and the data citizens (29%). The off-line citizens rank lowest in internet use and GDPR awareness; the web citizens rank at about average values, while the data citizens rank highest in both digital experience and GDPR knowledge and use. The fourth identified cluster, the social netizens, have a discordant profile, with remarkably high social network use, below-average online shopping experience, and low GDPR awareness. Digitalization in human capital and general internet use is a strong country-level correlate of the national frequency of the data citizen type. Our results confirm previous studies of the low privacy awareness and skills associated with intense social media consumption, but we find that young generations are evenly divided between the rather carefree social netizens and the strongly invested data citizens. In order to achieve the full potential of the GDPR in changing surveillance practices while fostering consumer trust and responsible use of Big Data, policymakers should more effectively engage the digitally connected social netizens in the public debate over data use and protection.
Moreover, they should enable all types of digital citizens to exercise their GDPR rights and to support the creation of value from data, while defending the right to protection of personal data.

14.
In the age of artificial intelligence (AI), robots have profoundly impacted our life and work, and have challenged our civil legal system. In the course of AI development, robots need to be designed to protect our personal privacy, data privacy, intellectual property rights, and tort liability identification and determination. In addition, China needs an updated Civil Code in line with the growth of AI. All measures should aim to address AI challenges and also to provide the needed institutional space for the development of AI and other emerging technologies.

15.
We are in the middle of a global identity crisis. New notions of identity are made possible in the online world, where people eagerly share their personal data and leave 'digital footprints'. Multiple, partial identities emerge, distributed across cyberspace, divorced from the physical person. The representation of personal characteristics in data sets, together with developing technologies and systems for identity management, in turn changes how we are identified. Trustworthy electronic identification is now a key issue for business, governments and individuals in the fight against online identity crime. Yet, along with the increasing economic value of digital identity, there are also risks of identity misuse by organisations that mine large data sets for commercial purposes and, in some cases, by governments. Data proliferation and the non-transparency of processing practices make it impossible for the individual to track and police their use. Potential risks encompass not only threats to our privacy, but also knowledge-engineering that can falsify digital profiles attributed to us, with harmful consequences. This panel session will address some of the big challenges around identity in the digital age and what they mean for policy and law (its regulation and protection). Questions for discussion include: What does identity mean today? What types of legal solutions are fit for purpose to protect modern identity interests? What rights, obligations and responsibilities should be associated with our digital identities? Should identity management be regulated, and who should be held liable, and for what? What should be the role of the private and public sectors in identity assurance schemes? What are the global drivers of identity policies? How can due process be ensured where automated technologies affect the rights and concerns of citizens? How can individuals be more empowered to control their identity data and give informed consent to its use?
How are biometrics and location-tracking devices used in body surveillance changing the identity landscape?

16.
The revised Payment Services Directive ('PSD2') has been adopted to stimulate the development of an integrated internal market for payment services. In particular, it facilitates payment initiation services and account information services by granting the providers of these services access to the accounts of the payment service users. At the same time, the recitals state that the PSD2 guarantees a high level of consumer protection, security of payment transactions and protection against fraud. This paper answers the following question: To what extent does the access to accounts of the payment initiation service providers and account information service providers balance the development of the market for payment services with the security of the payment account and the privacy of the user? An analysis of the PSD2 shows that the development of the market for payment services has a higher priority. Security and privacy are ultimately subordinate. First, the PSD2 does not adequately protect the personal data of the users. The definition of 'account information service' is broad and covers a wide range of services. This allows the payment service providers to circumvent the limitations of the access to accounts. Next, the payment service providers have a 'fall-back option' that allows 'screen scraping' if the dedicated interface is not functioning properly. Although this access is constrained by several safeguards, the fall-back option gives the payment service provider unlimited access to the account of the user. Finally, the payment service providers have considerable freedom to arrange their authentication process as they see fit. The banks seem to be required to trust this process. The PSD2 and regulatory technical standards do not demand that a bank be able to verify the authentication or the integrity of the payment order.

17.
Mobile customers are being tracked and profiled by behavioural advertisers so that they can be sent personalized advertising. This process involves data mining of consumer databases containing personally identifying or anonymous data, and it raises a host of important privacy concerns. This article, the first in a two-part series on consumer information privacy issues in profiling the mobile customer, addresses the questions: "What is profiling in the context of behavioural advertising?" and "How will consumer profiling impact the privacy of mobile customers?" The article examines the EU and U.S. regulatory frameworks for protecting privacy and personal data with regard to profiling by behavioural advertisers that targets mobile customers. It identifies potential harms to privacy and personal data related to profiling for behavioural advertising. It evaluates the extent to which the existing regulatory frameworks in the EU and the U.S. provide an adequate level of privacy protection, and identifies key privacy gaps that the behavioural advertising industry and regulators will need to address to adequately protect mobile consumers from profiling by marketers. The second article in this series will discuss whether industry self-regulation or privacy-enhancing technologies will be adequate to address these privacy gaps, and makes suggestions for principles to guide this process.

18.
The rise of biometric data use in personal consumer objects and governmental (surveillance) applications is irreversible. This article analyses the latest attempt by the General Data Protection Regulation (EU) 2016/679 and Directive (EU) 2016/680 to regulate biometric data use in the European Union. We argue that the new Regulation fails to provide the clear rules and protection that are much needed out of respect for fundamental rights and freedoms, by making an artificial distinction between various categories of biometric data. This distinction neglects the case law of the European Court of Human Rights and serves the interests of large (governmental) databases. While we support regulating the use of biometric data and the general prohibition in the GDPR on using biometric data for identification, we regret this limited, subjective and use-based approach. We argue that the collection, storage and retention of biometric images in databases should be tackled (objective approach). We further argue that, based on the distinctions made in the GDPR, several categories of personal data relating to physical, physiological or behavioural characteristics are created, to which different regimes apply. Member States are left to adopt or modify their more specific national rules, which are eagerly awaited. We contend that the complex legal framework risks posing headaches to bona fide companies deploying biometric data for multifactor authentication, and that the new legal regime is not reaching its goal of finding a balance between the free movement of such data and protecting citizens. Law enforcement authorities also need clear guidance. It is questionable whether Directive (EU) 2016/680 provides this.

19.
Mobile customers are increasingly being tracked and profiled by behavioural advertisers to enhance the delivery of personalized advertising. This type of profiling relies on automated processes that mine databases containing personally identifying or anonymous consumer data, and it raises a host of significant concerns about privacy and data protection. This second article in a two-part series on "Profiling the Mobile Customer" explores how best to protect consumers' privacy and personal data through available mechanisms that include industry self-regulation, privacy-enhancing technologies and legislative reform. It discusses how well privacy and personal data concerns related to consumer profiling are addressed by two leading industry self-regulatory codes from the UK and the U.S. that aim to establish fair information practices for behavioural advertising by their member companies. It also discusses the current limitations of using technology to protect consumers from privacy abuses related to profiling. Concluding that industry self-regulation and available privacy-enhancing technologies will not be adequate to close important privacy gaps related to consumer profiling without legislative reform, it offers suggestions to EU and U.S. regulators about how to do this.
