Similar literature
1.
There has naturally been a good deal of discussion of the forthcoming General Data Protection Regulation. One issue of interest to all data controllers, and of particular concern for researchers, is whether the GDPR expands the scope of personal data through the introduction of the term ‘pseudonymisation’ in Article 4(5). If all data which have been ‘pseudonymised’ in the conventional sense of the word (e.g. key-coded) are to be treated as personal data, this would have serious implications for research. Administrative data research, which is carried out on data routinely collected and held by public authorities, would be particularly affected, as the sharing of de-identified data could constitute the unconsented disclosure of identifiable information. We argue, however, that the definition of pseudonymisation in Article 4(5) GDPR will not expand the category of personal data, and that there is no intention that it should do so. The definition of pseudonymisation under the GDPR is not intended to determine whether data are personal data; indeed, it is clear that all data falling within this definition are personal data. Rather, it is Recital 26 and its requirement of a ‘means reasonably likely to be used’ which remains the relevant test as to whether data are personal. This leaves open the possibility that data which have been ‘pseudonymised’ in the conventional sense of key-coding can still be rendered anonymous. There may also be circumstances in which data which have undergone pseudonymisation within one organisation could be anonymous for a third party. We explain how, with reference to the data environment factors set out in the UK Anonymisation Network's Anonymisation Decision-Making Framework.

2.
Drawing on theories of European integration and governance and on sociological studies of the influence of elite law firms on rule-setting, this paper shows that law firms (a) operate in the area of data protection, which is extremely complex and requires expert knowledge; and (b) display characteristics similar to other actors who have succeeded in influencing agenda-setting and the results of policy-making despite having no formal competence to do so. The article proposes a hypothesis about the influence of elite law firms on EU data protection rule-setting. It argues that the EU data protection sector is prone to such influence because it is by definition transnational and, on some technical and some core points, inadequate to reflect real data processing practices, and is therefore riddled with uncertainty. Research into the politics of data protection in Europe therefore cannot disregard the role of these actors in shaping the European data protection regime.

3.
This article seeks to determine the economic costs and consequences of implementing the Data Retention Directive (Directive 2006/24/EC), an extraordinary counter-terrorism measure that mandates the a priori retention of communications data on every European citizen, by drawing on the insights of economic analysis. It also explores the monetary costs the Directive imposes on subscribers and communications service providers in EU Member States. Furthermore, it examines the Directive's implications for the economic sector of the European Union, focusing on its impact on EU competitiveness and on other EU policies such as the Lisbon Strategy. This analysis is motivated by the following questions: what are the monetary costs of creating and maintaining the proposed database for data retention? What are the effects of these measures on individuals? What obstacles arise for the global competitiveness of EU telecommunications and electronic communications service providers as a result of these measures? Are other policies in the European Union affected by this measure? If so, which ones?

4.
Never has a text been received with so many requests for amendments; never has the debate around it been so great. Some see it as a simple duplicate of Directive 95/46; others present the GDPR as a monster. In the context of this birthday, it cannot be a question of analysing this text or of launching new ideas, but simply of raising two questions. I state the first as follows: "In the end, what are the major features that run through and justify this regulation?" And the second: "Is the regulation adequate to today's digital challenges to our societies and freedoms?" The answers given in the following lines express the opinion of their author. It is simply an invitation to a dialogue to be carried forward in this journal, where so many excellent reflections on Digital Law have been published, thanks to our common friend: Steve.

5.

The question of whether algorithms dream of ‘data’ without bodies is asked with the intention of highlighting the material conditions created by wearables for fitness and health, revealing the underlying assumptions of the platform economy regarding individuals’ autonomy, identities and preferences, and reflecting on the justifications for intervention under the General Data Protection Regulation. The article begins by highlighting key features of platform infrastructures and wearables in the health and fitness landscape, explains the implications of algorithms automating what can be described as ‘rituals of public and private life’ in the health and fitness domain, and proceeds to consider the strains they place on data protection law. It is argued that technological innovation and data protection rules both played a part in setting the conditions for the mediated construction of meaning from bodies of information in the platform economy.

6.
The Anti-Money Laundering (AML) regime has been important in harmonizing laws and institutions, and has received global political support. Yet there has been minimal effort to evaluate how well any AML intervention achieves its goals. There are no credible estimates either of the total amount laundered (globally or nationally) or of most of the specific serious harms that AML aims to avert. Consequently, reduction of these is not a plausible outcome measure. There have been few efforts by country evaluators in the FATF Mutual Evaluation Reports (MERs) to acquire qualitative data or to seriously analyze either quantitative or qualitative data. We find that data are relatively unimportant in policy development and implementation. Moreover, the long gaps of about eight years between evaluations mean that widely used ‘country risk’ models for AML are still forced to rely largely on the 3rd Round evaluations, whose use of data was minimal and inconsistent. While the 4th Round MERs (2014–2022) have made an effort to be more systematic in the collection and analysis of data, FATF has still not established procedures that provide sufficiently informative evaluations. Our analysis of five recent National Risk Assessments (a major component of the new evaluations) in major countries shows little use of data, though the UK is notably better than the others. In the absence of more consistent and systematic data analysis, claims that countries have more or less effective systems will be open to allegations of ad hoc, impressionistic or politicized judgment. This reduces their perceived legitimacy, though it does not mean that AML efforts and the evaluation processes themselves have no effects.

7.
8.
The forensic value of Y-STR markers in Guiné-Bissau was assessed by typing 215 males. Allele and haplotype frequencies, determined for loci DYS19, DYS389-I, DYS389-II, DYS390, DYS391, DYS392, DYS393, DYS437, DYS438, DYS439 and the duplicated locus DYS385, are within the limits of variation found in other populations south of the Sahara. The level of discrimination achieved in Guineans is higher than for European or other African populations with comparable data. The haplotype diversity of 0.9995 is reduced to 0.9981 when only the minimal haplotype is considered, revealing the importance of increasing the number of typed loci.
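The haplotype diversity figures quoted above are conventionally computed with Nei's unbiased estimator, h = n/(n−1) · (1 − Σ pᵢ²), where pᵢ is the frequency of the i-th haplotype among n sampled males. The sketch below applies the formula to a small hypothetical sample (toy haplotype labels, not the Guinean data) to make it concrete:

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's unbiased haplotype diversity: h = n/(n-1) * (1 - sum(p_i^2))."""
    n = len(haplotypes)
    freqs = Counter(haplotypes)
    sum_p2 = sum((count / n) ** 2 for count in freqs.values())
    return n / (n - 1) * (1 - sum_p2)

# Toy sample of 5 males carrying 4 distinct haplotypes (hypothetical labels)
sample = ["H1", "H2", "H3", "H4", "H1"]
print(round(haplotype_diversity(sample), 6))  # -> 0.9
```

Values approach 1 as haplotypes become unique to individuals, which is why the drop from 0.9995 to 0.9981 for the minimal haplotype reflects discriminating power lost when fewer loci are typed.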

9.
Against the common perception of data protection as a roadblock, we demonstrate that the GDPR can work as a research enabler. This study shows that European data protection law's two regulatory pillars, the first relating to the protection of the fundamental right to data protection and the second to the promotion of the free flow of personal data, result in an architecture of layered data protection regimes, which tighten or relax data subjects’ rights and data protection safeguards vis-à-vis processing activities grounded in public or merely economic interests. Each of the identified data protection regimes shapes different “enabling regulatory spots” for the processing of sensitive personal data for research purposes.

10.
In 1746, Antoine Deparcieux (1703–1768) published Essai sur les probabilités de la durée de la vie humaine [An Essay on the Probabilities of the Duration of Human Life], in which he analyzed empirical observations in detail. As a mathematician and physicist, he can be considered, after Halley and Struyck, one of the founders of the estimation of longevity and all the issues surrounding that concept. The article analyzes the statistical data Deparcieux presented in his book and examines the way he dealt with them. He criticized the methods of his predecessors and showed what, in his view, constituted “good” data. Although he had only lists of annuitants and ecclesiastical registers at his disposal, and no data from the government or state, Deparcieux constructed his calculations with careful regard for the value and quality of the figures used. He also envisaged a specific project to collect data about infant mortality. His work holds an important place in the history of French demographic statistics.

11.
A set of 87 reference samples collected from the population of Saudi Arabia was sequenced using the ForenSeq™ DNA Signature Prep Kit on a MiSeq FGx™. The FASTQ files contain the sequences of the SE33 STR, but these are not reported by the ForenSeq™ Universal Analysis Software (UAS). The STRait Razor software was used to recover and report SE33 sequence-based data for the Saudi population. Ninety-six sequence-based alleles were recovered, most of which had previously reported motif patterns. Two previously unreported motif patterns, found in three alleles, and seven novel allele sequences were reported. We also report a single discordance between the sequence-based data and the CE data, due to the presence of a common TTTT deletion. SE33 had 130% more sequence-based alleles than length-based alleles; the highest numbers of observed sequence variants were in alleles 27.2 and 30.2, each with seven sequence variants. The statistical parameters emphasize the usefulness of the sequence-based data.
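The "130% more sequence-based alleles" figure arises from counting distinct (CE allele, sequence) pairs rather than length-based CE calls alone. A minimal sketch with hypothetical observations (invented sequence labels, not the Saudi data) shows the computation:

```python
def allele_gain(observed):
    """Percent increase in distinct alleles when sequence variation
    is counted on top of the CE (length-based) allele call."""
    ce_alleles = {ce for ce, _ in observed}   # length-based calls only
    seq_alleles = set(observed)               # distinct (CE call, sequence) pairs
    return (len(seq_alleles) - len(ce_alleles)) / len(ce_alleles) * 100

# Hypothetical (CE call, sequence) observations: two sequence variants
# hide behind the single length-based call "27.2"
obs = [("27.2", "seqA"), ("27.2", "seqB"),
       ("30.2", "seqC"), ("30.2", "seqC"), ("17", "seqD")]
print(f"{allele_gain(obs):.0f}% more sequence-based alleles")  # -> 33% ...
```

Here three CE calls expand to four sequence-based alleles, a 33% gain; in the study the same effect across the whole SE33 allele set yields the reported 130%.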

12.
As early as the 1970s, privacy studies recognised that ‘anonymisation’ needed to be approached with caution. This caution has since been vindicated by the increasing sophistication of re-identification techniques. Yet the courts in the UK have so far only hesitatingly grappled with the issues involved, while European courts have produced no guidance.

13.
An archive of five years of cases involving the identification of human remains was curated, collecting information on the sample type submitted, the number of STR loci yielding interpretable results, the kinship challenge posed, and the outcome of the case. A total of 129 remains-identification cases were investigated using manual DNA extraction and recovery methods, with amplification of STR markers using the PowerPlex 21 multiplex STR kit from Promega Corp. In 52 cases, blood spots collected by the ME were provided as the sample, and in 100% of those cases the probabilities of relatedness to the reference samples were ≥99%. In 77 cases, tissue other than blood was provided as the source of DNA. These samples were grouped into long bones (femur and tibia; 40 cases), skull bones/teeth (11 cases), other bones (16 cases), and tissue normally adherent to bone (10 cases). Reference samples provided for cases included alleged parents or children of the victim (86 cases), alleged full siblings of the victim (38 cases), or alleged second-order relatives (five cases). The overall success rate in confirming the identity of the source of the remains was 89.2%. Our results demonstrate that a laboratory can often succeed in identifying human remains using methods easily implemented in any DNA typing laboratory.

14.
Edwin Sutherland published his famous White Collar Crime in 1949, in which he excoriated the leaders of American firms for their war crimes. The names of all corporations were, however, deleted from the book under threat of legal action. The unabridged version was published in 1983, after the Sutherland files at Indiana University were unsealed. These files can now be compared with both the 1949 and 1983 editions, as well as with other evidence of corporate war crimes during World War II.

15.
Internet Protocol addresses (IP addresses) are central to Internet electronic communications. They individualize computers and their users to make the delivery of data packets possible. IP addresses are also often used to identify websurfers for litigation purposes. In particular, they constitute a key tool in the fight against online copyright infringement, serving to identify infringers. However, whether IP addresses are personal data is a matter of dispute. Through a review of relevant case law, the present paper seeks to identify when IP addresses are, or should be, considered personal data. It suggests a contextual approach to the concept of personal data.

16.
The commodification of digital identities is an emerging reality in the data-driven economy. Personal data of individuals represent monetary value in the data-driven economy and are often considered a counter-performance for “free” digital services or for discounts on online products and services. Furthermore, customer data and profiling algorithms are already considered a business asset and protected through trade secrets. At the same time, individuals do not seem to be fully aware of the monetary value of their personal data; they tend to underestimate their economic power within the data-driven economy and to passively succumb to the propertization of their digital identity. One way to increase consumers'/users' awareness of their own personal information could be to make them aware of the monetary value of their personal data. In other words, if individuals are shown the “price” of their personal data, they can acquire a higher awareness of their power in the digital market and thus be effectively empowered to protect their information privacy. This paper analyzes whether consumers/users should have a right to know the value of their personal data. After analyzing how EU legislation is already developing in the direction of propertization and monetization of personal data, different models for quantifying the value of personal data are investigated. These models are discussed not to determine the actual prices of personal data, but to show that the monetary value of personal data can be quantified, a conditio sine qua non for the right to know the value of your personal data. Next, active choice models, in which users are offered the option to pay for online services either with their personal data or with money, are discussed. It is concluded, however, that these models are incompatible with EU data protection law. Finally, practical, moral and cognitive problems of pricing privacy are discussed as an introduction to further research. We conclude that such research is needed to see to what extent these problems can be solved or mitigated. Only then can it be determined whether the benefits of introducing a right to know the value of your personal data outweigh the problems and hurdles related to it.

17.
Away from the hubbub about HFT (High Frequency Trading), a quiet storm is blowing into the EU that will radically change securities trading in bonds, OTC derivatives and other asset classes. The rules, called MiFID II, top off the alphabet soup of an extensive new rule book that, after the European Parliament's ‘Super Tuesday’ on 15 April 2014, is finally set to become law. Radical changes are afoot!

18.
This article analyses government deployment of information security sensor systems, primarily from a European human rights perspective. Sensor systems are designed to detect attacks against information networks by analysing network traffic and comparing it to known attack vectors and suspicious traffic profiles or content, while also recording attacks and providing information for the prevention of future attacks. The article examines how these sensor systems may be one way of ensuring the necessary protection of personal data stored in government IT systems, helping governments fulfil positive obligations with regard to data protection under the European Convention on Human Rights (ECHR), the EU Charter of Fundamental Rights (the Charter), and the data protection and IT-security requirements established in EU secondary law. It concludes that the implementation of sensor systems illustrates the need to balance data protection against the state's negative privacy obligations under the ECHR and the Charter, and the accompanying need to ensure that surveillance of communications and associated metadata meets established principles of legality and proportionality. The article highlights the difficulty of balancing these positive and negative obligations, makes recommendations on the scope of such sensor systems and the legal safeguards surrounding them to ensure compliance with European human rights law, and concludes that there is a risk of privatised policymaking in this field barring further guidance in EU secondary law or case law.

19.
Samples collected for forensic casework may be of varying quality and quantity. The sample DNA is often quantified prior to short tandem repeat (STR) profile analysis with methods such as Quantifiler™ Trio (QFT). The QFT measures the quantity of DNA as well as an internal PCR control (IPC) and a degradation index (DI). The aim of this study was to use IPC and DI measurements to identify samples which would benefit from a modified PCR amplification set-up when generating the STR profiles. The sample quality of 6287 single-source casework samples was categorized as ‘Good’, ‘Partly degraded’, ‘Highly degraded’, ‘Inhibited’ or ‘Degraded and Inhibited’ based on the peak height ratios in the electropherogram data. The DI and IPC were correlated with the assigned quality categories of the samples. Samples in the degraded and/or inhibited categories were found to have statistically significantly different DI and IPC values compared to samples categorized as ‘Good’. This indicates that the additional information gained from the QFT can be useful for identifying degraded and/or inhibited samples prior to STR-profile analysis. Future work will re-evaluate the criteria for inclusion in the sample quality groups and extend the approach to multi-source samples.
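A triage rule of the kind the study points toward can be sketched as follows. The thresholds here are illustrative placeholders, not values from the paper: a DI well above 1 suggests degradation, and a delayed IPC Ct suggests inhibition.

```python
def classify_sample(di, ipc_ct_shift, di_threshold=2.0, ipc_threshold=1.0):
    """Illustrative QFT-based triage: flag degradation when the degradation
    index (DI) exceeds di_threshold, and inhibition when the internal PCR
    control (IPC) Ct is delayed by more than ipc_threshold cycles.
    Thresholds are hypothetical, chosen only for demonstration."""
    degraded = di > di_threshold
    inhibited = ipc_ct_shift > ipc_threshold
    if degraded and inhibited:
        return "Degraded and Inhibited"
    if degraded:
        return "Degraded"
    if inhibited:
        return "Inhibited"
    return "Good"

print(classify_sample(di=5.2, ipc_ct_shift=0.3))  # -> Degraded
print(classify_sample(di=1.1, ipc_ct_shift=2.4))  # -> Inhibited
```

In practice the thresholds would be calibrated against the electropherogram-based quality categories, which is precisely the correlation the study reports.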

20.
This article examines the two major international data transfer schemes in existence today: the European Union (EU) model, which at present is effectively the General Data Protection Regulation (GDPR), and the Asia-Pacific Economic Cooperation (APEC) Cross Border Privacy Rules system (CBPR), in the context of the Internet of Things (IoT). While IoT data ostensibly relate to things, i.e. products and services, they affect individuals and their data protection and privacy rights, and raise compliance issues for corporations, especially in relation to international data flows. The GDPR regulates the processing of personal data of individuals who are EU data subjects, including cross-border data transfers. As an EU Regulation, the GDPR applies directly as law to EU member nations. The GDPR also has extensive extraterritorial provisions that apply to the processing of personal data outside the EU regardless of the place of incorporation and geographical area of operation of the data controller/processor. There are a number of ways in which the GDPR enables the lawful international transfer of personal data, including schemes that are broadly similar to APEC CBPR. APEC CBPR is the other major regional framework regulating the transfer of personal data between APEC member nations. It is essentially a voluntary accountability scheme that initially requires acceptance at country level, followed by independent certification by an accountability agent of the organization wishing to join the scheme. APEC CBPR is viewed by many in the United States of America (US) as preferable to the EU approach because CBPR is considered more conducive to business than its counterpart schemes under the GDPR, and is therefore regarded as the scheme most likely to prevail. While there are broad areas of similarity between the EU and APEC approaches to data protection in the context of cross-border data transfer, there are also substantial differences. This paper considers the similarities and major differences, and the overall suitability of the two models for the era of the IoT, in which large amounts of personal data are processed on an ongoing basis from connected devices around the world. This is the first time the APEC and GDPR cross-border data schemes have been compared in this way. The paper concludes with the author expressing a view as to which scheme is likely to set the global standard.
