Similar Literature
20 similar documents found (search time: 15 ms).
1.
Likelihood ratios are necessary to properly interpret mixed-stain DNA evidence. They can flexibly consider alternative hypotheses and can account for population substructure. The likelihood ratio should be seen as an estimate rather than a fixed value, because the calculations are functions of allele frequencies estimated from a small sample of the population. Current methods do not account for uncertainty in likelihood ratio estimates and therefore give an incomplete picture of the strength of the evidence. We propose the use of a confidence interval to report the consequent variation of likelihood ratios. The confidence interval is calculated using the standard forensic likelihood ratio formulae and a variance estimate derived using the Taylor expansion. The formula is explained, and a computer program has been made available. Numerical work shows that the evidential strength of DNA profiles decreases as the variation among populations increases.
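The Taylor-expansion (delta-method) variance described in this abstract can be illustrated with a minimal sketch. The single-locus homozygote-match LR and the binomial variance below are simplifying assumptions for illustration, not the paper's full multi-locus formulae:

```python
import math

def lr_confidence_interval(p_hat, n_alleles, z=1.96):
    """Approximate 95% CI for a single-locus homozygote-match LR = 1/p^2.

    Sampling uncertainty in the allele-frequency estimate p_hat (based on
    n_alleles sampled alleles) is propagated by a first-order Taylor
    (delta-method) expansion on log LR. Illustrative only; the paper's
    formulae cover full profiles and population substructure.
    """
    lr = 1.0 / p_hat ** 2
    var_p = p_hat * (1.0 - p_hat) / n_alleles   # binomial variance of p_hat
    var_log_lr = (2.0 / p_hat) ** 2 * var_p     # since d(log LR)/dp = -2/p
    se = math.sqrt(var_log_lr)
    return lr, lr * math.exp(-z * se), lr * math.exp(z * se)

lr, lo, hi = lr_confidence_interval(0.10, 200)
```

A rarer allele or a smaller reference sample widens the interval, mirroring the abstract's point that the LR is an estimate rather than a fixed value.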

2.
Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, generating the denominator database presents ambiguities, in particular how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure.
The study was performed using a data set composed of cursive writing samples from over 400 writers. We found that, when provided with the same two items of evidence, the three methods often led to differing conclusions (with rates of disagreement ranging from 0.005 to 0.48). Rates of misleading evidence and Tippett plots are both used to characterize the range of behavior of the methods over questioned documents of varying size. The appendix shows that the three score-based likelihood ratios are theoretically very different not only from each other but also from the likelihood ratio, and as a consequence each displays drastically different behavior.
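The three denominator constructions can be sketched with a toy score function. The one-dimensional feature model, the score definition, and the normal density fits below are illustrative assumptions, not the study's handwriting features:

```python
import math
import random
import statistics

def score(a, b):
    # toy similarity score between two one-dimensional "items"
    return -abs(a - b)

def fitted_normal_pdf(scores):
    """Return the density of a normal distribution fitted to a score database."""
    mu, sd = statistics.mean(scores), statistics.stdev(scores)
    return lambda s: math.exp(-(s - mu) ** 2 / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

random.seed(1)
database = [random.gauss(0.0, 3.0) for _ in range(500)]   # relevant population
known_item = 5.0                                          # item of known source
questioned_item = 5.2                                     # item of unknown source

# numerator database: same-source scores (known item vs. noisy copies of itself)
numerator = [score(known_item, known_item + random.gauss(0.0, 0.3)) for _ in range(500)]

# the three different-source denominator databases described in the abstract
den_known_vs_db = [score(known_item, x) for x in database]                      # (1)
den_unknown_vs_db = [score(questioned_item, x) for x in database]               # (2)
den_db_vs_db = [score(random.choice(database), random.choice(database))
                for _ in range(500)]                                            # (3)

observed = score(known_item, questioned_item)
num_pdf = fitted_normal_pdf(numerator)
slrs = [num_pdf(observed) / fitted_normal_pdf(den)(observed)
        for den in (den_known_vs_db, den_unknown_vs_db, den_db_vs_db)]
```

Even in this toy setting the three score-based LRs generally differ, which is the paper's central observation.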

3.
Fingerprints with similar morphological characteristics but from different individuals can lead to errors in identification, especially in large databases containing millions of fingerprints. To address this issue and improve the accuracy of similar-fingerprint identification, the likelihood ratio (LR) model has emerged as an effective method for the quantitative evaluation of fingerprint evidence. In this study, an LR fingerprint-evidence evaluation model was established using mathematical-statistical methods such as parameter estimation and hypothesis testing, through steps including database construction, scoring, fitting, calculation, and visual evaluation. Under same-source conditions, the best-fitting distributions for different numbers of minutiae were the gamma and Weibull distributions, while the normal, Weibull, and lognormal distributions were selected for different minutiae configurations. Under different-source conditions, the lognormal distribution fitted best for all numbers of minutiae, and the Weibull, gamma, and lognormal distributions were selected for different minutiae configurations. The accuracy of the LR model increased with the number of minutiae, indicating strong discriminative and corrective power, whereas LR evaluation based on minutiae configurations was comparatively less accurate; models based on the number of minutiae outperformed those based on minutiae configurations. Our study shows that LR models based on parametric methods can reduce the risk of fingerprint evidence misidentification, improve the quantitative assessment of fingerprint evidence, and help move fingerprint identification from experience toward science.
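The parametric workflow, score, fit a distribution under each proposition, then form the ratio, can be sketched as follows. The simulated scores and the choice of lognormal fits for both databases are assumptions for illustration; the study selects among gamma, Weibull, normal, and lognormal fits depending on the condition:

```python
import math
import random
import statistics

def lognorm_pdf(x, mu, sigma):
    """Density of a lognormal distribution with log-mean mu and log-sd sigma."""
    return (math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

def fit_lognorm(scores):
    """Maximum-likelihood lognormal fit via the log-transformed scores."""
    logs = [math.log(s) for s in scores]
    return statistics.mean(logs), statistics.stdev(logs)

random.seed(0)
# simulated comparison scores (stand-ins for minutiae-based similarity scores)
same_source = [random.lognormvariate(2.0, 0.2) for _ in range(1000)]
diff_source = [random.lognormvariate(0.0, 0.3) for _ in range(1000)]

mu_s, sd_s = fit_lognorm(same_source)
mu_d, sd_d = fit_lognorm(diff_source)

def likelihood_ratio(score):
    return lognorm_pdf(score, mu_s, sd_s) / lognorm_pdf(score, mu_d, sd_d)
```

A high similarity score then supports the same-source proposition (LR > 1) and a low score the different-source one.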

4.
5.
6.
7.
8.
9.
《Science & justice》2014,54(4):316-318
This letter to the Editor comments on the article "When 'neutral' evidence still has probative value (with implications from the Barry George Case)" by N. Fenton et al. [1].

10.
Shoemark evidence remains a cornerstone of forensic crime investigation. Shoemarks can be used at a crime scene to reconstruct the course of events; they can serve as a forensic intelligence tool to establish links between crime scenes; and, when control material is available, they can help infer the participation of given individuals in the commission of a crime. Nevertheless, as for most other impression evidence, the current process used to evaluate and report the weight of shoemark evidence is under intense scrutiny. Building on previous research, this paper proposes a model for evaluating shoemark evidence in a more transparent manner. The model is currently limited to sole pattern and wear characteristics; it does not formally account for cuts and other accidental damage. Furthermore, it requires the acquisition of relevant shoemark datasets and the development of automated comparison algorithms to deploy its full benefits, neither of which is currently available. Instead, we demonstrate, using casework examples, that a pragmatic consideration of the model's variables already allows shoemark evidence to be evaluated in a more transparent way, and therefore begins to address the current scientific and legal concerns.

11.
A new computational method using a Monte Carlo technique is described for calculating the plausibility of paternity in blood group systems. In this study, the gene frequencies of a blood group system are mapped onto ranges of seven-digit random numbers. Using the Monte Carlo method, four random numbers are generated and converted into paternal and maternal genotypes. The genotype of the child is then determined according to the law of inheritance, and finally the genotypes of father, mother and child are converted into phenotypes. Repeating this process more than one hundred thousand times yields the phenotypic frequencies of child-mother-father combinations (trios) and the likelihood ratio of paternity in any blood group system, for all phenotypic combinations of the trios. This method is much easier than previously reported methods and is sufficiently accurate.
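A minimal Monte Carlo sketch of the trio simulation described above, using illustrative ABO gene frequencies (invented values, not the paper's data):

```python
import random
from collections import Counter

random.seed(42)
FREQ = [("A", 0.28), ("B", 0.06), ("O", 0.66)]   # illustrative gene frequencies

def random_allele():
    # convert a uniform random number into an allele via cumulative frequencies
    r, cum = random.random(), 0.0
    for allele, f in FREQ:
        cum += f
        if r < cum:
            return allele
    return FREQ[-1][0]

def phenotype(genotype):
    alleles = set(genotype)
    if alleles == {"A", "B"}:
        return "AB"
    if "A" in alleles:
        return "A"
    if "B" in alleles:
        return "B"
    return "O"

N = 100_000
trio_counts = Counter()
for _ in range(N):
    father = (random_allele(), random_allele())
    mother = (random_allele(), random_allele())
    # the child receives one allele from each parent (law of inheritance)
    child = (random.choice(father), random.choice(mother))
    trio_counts[(phenotype(child), phenotype(mother), phenotype(father))] += 1

# estimated trio phenotype frequencies, from which paternity likelihood
# ratios can then be computed for any phenotype combination
trio_freqs = {trio: n / N for trio, n in trio_counts.items()}
```
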

12.
This paper focuses on likelihood ratio based evaluations of fibre evidence in cases in which there is uncertainty about whether the reference item available for analysis (that is, an item typically taken from the suspect or seized at his home) is the item actually worn at the time of the offence. A likelihood ratio approach is proposed that, for situations in which certain categorical assumptions can be made about the additionally introduced parameters, converges to formulae described in the existing literature. The properties of the proposed likelihood ratio approach are analysed through sensitivity analyses and discussed with respect to possible argumentative implications that arise in practice.

13.
Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal cases involving firearms. The analysis of volatile gunshot residue remaining after shooting, using solid-phase microextraction (SPME) followed by gas chromatography (GC), has been proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks that render them inadequate for assessing the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was developed and applied to a hypothetical scenario in which alternative hypotheses about the discharge time of a spent cartridge found at a crime scene were put forward. To estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
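The two ingredients, a decay regression and a likelihood ratio over competing discharge times, can be sketched as follows. The calibration data, the exponential decay form (log-linearised here for a stdlib-only fit), and the Gaussian error sigma are all illustrative assumptions, not the paper's model:

```python
import math

# hypothetical calibration data: (hours since discharge, volatile-residue signal)
data = [(1, 9.1), (2, 8.0), (4, 6.4), (8, 4.1), (16, 1.7), (24, 0.8)]

# fit y = a * exp(-b * t) by ordinary least squares on log y
n = len(data)
sx = sum(t for t, _ in data)
sy = sum(math.log(y) for _, y in data)
sxx = sum(t * t for t, _ in data)
sxy = sum(t * math.log(y) for t, y in data)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = -slope
a = math.exp((sy + b * sx) / n)

def lr_discharge(measured, t1, t2, sigma=0.5):
    """LR for 'discharged t1 hours ago' vs. 't2 hours ago', assuming
    Gaussian measurement error with an assumed sigma."""
    def likelihood(t):
        mu = a * math.exp(-b * t)
        return math.exp(-(measured - mu) ** 2 / (2 * sigma ** 2))
    return likelihood(t1) / likelihood(t2)
```

A measurement close to the fitted curve at t1 but far from it at t2 yields an LR well above 1, quantifying support for the first discharge-time hypothesis.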

14.
This paper extends previous research and discussion on the use of multivariate continuous data, which are about to become more prevalent in forensic science. As an illustrative example, attention is drawn to comparative handwriting examinations. Multivariate continuous data can be obtained in this field by analysing the contour shape of loop characters through Fourier analysis. This methodology, based on existing research in the area, allows one to describe the morphology of character contours in detail through a set of variables. This paper uses data collected from female and male writers to conduct a comparative analysis of likelihood ratio based evidence assessment procedures in both evaluative and investigative proceedings. While the use of likelihood ratios in the former situation is now rather well established (typically to discriminate between propositions of authorship by a given individual versus another, unknown individual), the investigative setting has so far received little consideration in practice. This paper seeks to highlight that investigative settings, too, can represent an area of application in which the likelihood ratio offers logical support. As an example, the inference of the gender of the writer of an incriminated handwritten text is put forward, analysed and discussed. This more general viewpoint, according to which likelihood ratio analyses can be helpful in investigative proceedings, is supported through various simulations, which characterise the robustness of the proposed likelihood ratio methodology.
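Contour shape can be reduced to a set of variables via a discrete Fourier transform of the contour points, a minimal sketch of the kind of Fourier analysis this abstract refers to; the paper's exact parameterisation may differ:

```python
import cmath
import math

def fourier_descriptors(contour, k=5):
    """First k complex Fourier coefficients of a closed contour given
    as (x, y) points, treating each point as a complex number."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    return [sum(z[t] * cmath.exp(-2j * math.pi * m * t / n) for t in range(n)) / n
            for m in range(k)]

# a loop character idealised as a circle: energy concentrates at m = 1
loop = [(math.cos(2 * math.pi * t / 64), math.sin(2 * math.pi * t / 64))
        for t in range(64)]
fd = fourier_descriptors(loop)
```

The coefficient magnitudes (or ratios of them) then serve as the multivariate continuous variables entering a likelihood ratio computation.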

15.
ITO kinship analysis using a self-developed likelihood ratio calculator   Total citations: 1 (self-citations: 1, others: 0)
Objective: To explore the feasibility of using a self-developed likelihood ratio calculator for ITO-method kinship testing. Methods: Based on genotyping data for the 19 autosomal STR loci of the Goldeneye™ DNA ID System 20A, the self-developed likelihood ratio calculator was used to simulate 100,000 pairs of unrelated individuals and 100,000 family pairs of father-child, full-sibling, half-sibling, and first- and second-degree cousin relationships, applying the ITO method to automatically compute the paternity index (PI) and the relationship probability (RCP). Results: Father-child pairs and unrelated individuals showed no overlap at the probability threshold and differed significantly. When the full-sibling relationship probability exceeded 99.99%, siblings accounted for 66.48% and unrelated individuals for 0%, a significant difference; below 99.99%, siblings and unrelated individuals partly overlapped, showing some difference. When the half-sibling relationship probability exceeded 99.99%, half-siblings accounted for 1.52% and unrelated individuals for 0%, a significant difference; below 99.99%, the two groups partly overlapped, showing some difference. For first- and second-degree cousins, with relationship probabilities in the 10%-90% range, unrelated individuals accounted for 90% and 99.98% respectively, so no inference could be made. Conclusion: The proposed method can be used to infer father-child, full-sibling, and half-sibling relationships.
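The simulation of related versus unrelated pairs can be sketched at a single illustrative locus repeated over 19 loci. The allele frequencies below are invented, not those of the 20A kit, and the ITO probability computation itself is omitted; the sketch only shows why father-child pairs separate cleanly from unrelated pairs:

```python
import random

random.seed(7)
# invented allele frequencies for one illustrative STR locus
ALLELES = [("8", 0.15), ("9", 0.25), ("10", 0.35), ("11", 0.25)]
N_LOCI = 19   # autosomal loci, as in the kit described above

def draw_allele():
    r, cum = random.random(), 0.0
    for allele, f in ALLELES:
        cum += f
        if r < cum:
            return allele
    return ALLELES[-1][0]

def genotype():
    return (draw_allele(), draw_allele())

def shared_locus_count(related):
    """Number of loci (out of N_LOCI) at which a pair shares an allele."""
    count = 0
    for _ in range(N_LOCI):
        parent = genotype()
        if related:
            # a true child inherits one parental allele at every locus
            child = (random.choice(parent), draw_allele())
        else:
            child = genotype()
        count += bool(set(parent) & set(child))
    return count

parent_child = [shared_locus_count(True) for _ in range(200)]
unrelated = [shared_locus_count(False) for _ in range(200)]
```

Father-child pairs share an allele at every locus by construction, while unrelated pairs fall short on average, which is what makes the father-child inference reliable.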

16.
17.
Fiber evidence found on a suspect vehicle was the only useful trace for reconstructing the dynamics of the transportation of two corpses. Optical microscopy, UV-Vis microspectrophotometry and infrared analysis were employed to compare fibers recovered from the trunk of a car with those of the blankets composing the wrapping in which the victims had been hidden. A "pseudo-1:1" taping allowed the spatial distribution of the traces to be reconstructed and further strengthened the support for one of the hypotheses. The likelihood ratio (LR) was calculated in order to quantify the support given by the forensic evidence to the proposed explanations, and a generalization of the likelihood ratio equation to analogous cases has been derived. Fibers were the only traces that helped corroborate the crime scenario, in the absence of any DNA, fingerprint or ballistic evidence.

18.
The likelihood ratio paradigm has been studied as a means of quantifying the strength of evidence for a variety of forensic evidence types. Although the concept of a likelihood ratio as a comparison of the plausibility of the evidence under two propositions (or hypotheses) is straightforward, a number of issues arise when one considers how to estimate a likelihood ratio. In this paper, we illustrate one possible approach to estimating a likelihood ratio in comparative handwriting analysis. The novelty of our proposed approach lies in generating simulated writing samples from a collection of writing samples from a known source, to form a database for estimating the distribution associated with the numerator of the likelihood ratio. We illustrate this approach using documents collected from 432 writers under controlled conditions.
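The numerator-database idea, enlarging a small set of known-source samples by simulation, can be sketched as resampling with added noise. The one-dimensional feature model and the noise scale are assumptions for illustration, since the paper's generation scheme is not detailed here:

```python
import random
import statistics

random.seed(3)
# hypothetical one-dimensional feature measurements from one known writer
known_samples = [random.gauss(10.0, 1.5) for _ in range(20)]

def simulate_samples(samples, n):
    """Simulated writing samples via resampling with added noise -- a
    stand-in for the paper's generation scheme."""
    sd = statistics.stdev(samples)
    return [random.choice(samples) + random.gauss(0.0, sd / 2) for _ in range(n)]

# enlarged database for estimating the numerator (same-source) distribution
numerator_db = simulate_samples(known_samples, 500)
```

The enlarged database preserves the location and spread of the writer's own samples while providing enough points to estimate the numerator distribution.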

19.
Extension education approaches have generally been visualized from isolated perspectives, ranging from philosophical foundations to practical applications. This paper provides a broad analytical framework to describe and examine the different scopes, elements, knowledge fields, and change areas relevant to extension education for agricultural and rural development. For the purpose of a multi-model analysis, a sample of 46 ideographic representations (schematic drawings) of agricultural and rural development-related processes was reviewed using concept-mapping techniques to prepare a computerized database management system. The discussion is based on the commonality findings among models and groups of models.

20.
The processing of personal data across national borders by both governments and the private sector has increased exponentially in recent years, as has the need for legal protections for personal data. This article examines calls for a global legal framework for data protection, and in particular suggestions that have been made in this regard by the International Law Commission and various national data protection authorities. It first examines the scope of a potential legal framework, and proceeds to analyze the status of data protection in international law. The article then considers the various options through which an international framework could be enacted, before drawing some conclusions about the form and scope such a framework could take, the institutions that could coordinate the work on it, and whether the time is ripe for a multinational convention on data protection.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号