Similar Documents
20 similar documents found.
1.
Computer methods have been developed for mathematically interpreting mixed and low-template DNA. The genotype modeling approach computationally separates out the contributors to a mixture, with uncertainty represented through probability. Comparing inferred genotypes yields a likelihood ratio (LR), which measures identification information. This study statistically examined the genotype modeling performance of the Cybergenetics TrueAllele® computer system. High- and low-template DNA mixtures of known, randomized composition containing 2, 3, 4, and 5 contributors were tested. Sensitivity, specificity, and reproducibility were established through LR quantification in each of these eight groups (two DNA amounts × four contributor numbers). Covariance analysis found LR behavior to be relatively invariant to DNA amount and contributor number. Analysis of variance found that consistent solutions were produced once a sufficient number of contributors was considered. This study demonstrates the reliability of TrueAllele interpretation on complex DNA mixtures of representative casework composition. The results can help predict an information outcome for a DNA mixture analysis.
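As a concrete illustration of the LR concept in the abstract above, the following minimal Python sketch computes a single-locus likelihood ratio for a simple single-source comparison. It is not the TrueAllele model; the genotypes and allele frequencies are hypothetical.

# Minimal single-locus likelihood-ratio sketch (illustrative only).
# Hp: the suspect is the source; Hd: an unrelated person is the source.
def hardy_weinberg_prob(genotype, freqs):
    """Probability of a genotype (a tuple of two alleles) under Hardy-Weinberg."""
    a, b = genotype
    if a == b:
        return freqs[a] ** 2
    return 2 * freqs[a] * freqs[b]

def single_locus_lr(evidence_genotype, suspect_genotype, freqs):
    """LR = P(evidence | Hp) / P(evidence | Hd) for a clean single-source stain."""
    p_hp = 1.0 if evidence_genotype == suspect_genotype else 0.0
    p_hd = hardy_weinberg_prob(evidence_genotype, freqs)
    return p_hp / p_hd

freqs = {"16": 0.25, "18": 0.10}                       # hypothetical allele frequencies
print(single_locus_lr(("16", "18"), ("16", "18"), freqs))  # 1 / (2 * 0.25 * 0.10) = 20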

2.
Performance of likelihood ratio (LR) methods for evidence evaluation has in the past been represented using, for example, Tippett plots. We propose empirical cross-entropy (ECE) plots as a metric of accuracy based on the statistical theory of proper scoring rules; interpretable in information-theoretic terms as the information given by the evidence, they quantify the calibration of LR values. We present results for a case example using a glass database from real casework, comparing performance with both Tippett and ECE plots. We conclude that ECE plots allow clearer comparisons of LR methods than previous metrics and provide a theoretical criterion for deciding whether a given method should be used for evidence evaluation, which is an improvement over Tippett plots. A set of recommendations for practitioners on the use of the proposed methodology is also given.
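A minimal sketch of how an ECE value can be computed from a set of LRs at one prior, following a commonly used formulation (an assumption here, not taken verbatim from the paper); the LR values and prior are hypothetical. An ECE plot repeats this over a range of prior log-odds and compares the result against a calibrated reference.

# Empirical cross-entropy at a given prior:
#   ECE = P(Hp)/Np * sum log2(1 + 1/(LR*odds)) over Hp-true comparisons
#       + P(Hd)/Nd * sum log2(1 + LR*odds)     over Hd-true comparisons,
# where odds = P(Hp)/P(Hd).
import math

def ece(lrs_hp_true, lrs_hd_true, prior_hp):
    odds = prior_hp / (1.0 - prior_hp)
    term_hp = sum(math.log2(1.0 + 1.0 / (lr * odds)) for lr in lrs_hp_true)
    term_hd = sum(math.log2(1.0 + lr * odds) for lr in lrs_hd_true)
    return (prior_hp * term_hp / len(lrs_hp_true)
            + (1.0 - prior_hp) * term_hd / len(lrs_hd_true))

print(ece([120.0, 35.0, 8.0], [0.02, 0.6, 0.001], prior_hp=0.5))  # hypothetical LRs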

3.
Most DNA evidence is a mixture of two or more people. The Cybergenetics TrueAllele® system uses Bayesian computing to separate genotypes from mixture data and compares genotypes to calculate likelihood ratio (LR) match statistics. This validation study examined the reliability of TrueAllele computing on laboratory-generated DNA mixtures containing up to ten unknown contributors. Using log(LR) match information, the study measured sensitivity, specificity, and reproducibility. These reliability metrics were assessed under different conditions, including varying the number of assumed contributors, the statistical sampling duration, and the setting of known genotypes. The main determinant of match information and its variability was how much DNA a person contributed to the mixture. A contributor number observed from the data peaks gave better results than the number known from the experimental design. The study found that TrueAllele is a reliable method for analyzing DNA mixtures containing up to ten unknown contributors.

4.
DNA analyses can be used for both investigative (crime scene-focused) and evaluative (suspect-focused) reporting. Investigative, DNA-led exploration of serious crimes often involves the comparison of hundreds of biological samples submitted by the authorities for analysis. Crime stain comparisons include both evidence-to-evidence and reference-to-evidence profile comparisons. When many complex DNA results (mixtures, low-template LT-DNA samples) are involved in the investigation of a crime, the manual comparison of DNA profiles is very time-consuming and prone to human error. In addition, if the person of interest is a minor contributor, the classical approach of searching national DNA databases is problematic: such searches are realistically restricted to clear major contributors, and the occurrence of masking and drop-out means that there is no definitive DNA profile with which to perform the search. CaseSolver is an open-source expert system that automates the analysis of complex cases. It does this in three sequential steps: (a) simple allele comparison; (b) a likelihood ratio (LR) based on a qualitative model (forensim); (c) an LR based on a quantitative model (EuroForMix). The software generates a list of potential match candidates, ranked according to the LRs, which can be exported as a report. The software can also identify contributors from small or large databases (e.g., a staff database or a database of a million individuals), and an informative graphical network plot is generated that readily identifies contributors common to multiple stains; a sketch of the first screening step follows below. Here we describe recent improvements to the software in version v1.5.0, made in response to user requirements during intensive casework use.
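A minimal sketch of the idea behind step (a), the simple allele comparison used as a first screen; this is illustrative only (CaseSolver itself is an R application), and the marker names, profiles, and mismatch tolerance are hypothetical.

# Flag a reference as a candidate contributor if its alleles are contained in the
# stain's alleles at (nearly) all markers, tolerating a few drop-outs.
def allele_screen(reference, stain, max_mismatches=1):
    mismatches = 0
    for marker, ref_alleles in reference.items():
        observed = stain.get(marker, set())
        mismatches += len(set(ref_alleles) - observed)   # reference alleles missing from the stain
    return mismatches <= max_mismatches

stain = {"D3S1358": {"15", "16", "17"}, "vWA": {"14", "17", "18"}}
ref   = {"D3S1358": {"15", "17"},       "vWA": {"17", "18"}}
print(allele_screen(ref, stain))   # True: all reference alleles are present in the stain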

5.
Likelihood ratios used for the analysis of complex DNA mixtures depend on a number of modeling assumptions and parameter estimates. In particular, for hypotheses conditioned on several contributors, the LR gives no information about the relative weight of the separate contributors. An alternative is to evaluate the observed LR against the likelihood ratios expected under the defense hypothesis. Further, a p-value corresponding to the LR can be calculated: the probability of observing an LR at least as large as the one observed, if the defense hypothesis is true. In this paper we investigate the distribution of likelihood ratios for mixtures with drop-in, drop-out, and related contributors. Disregarding a plausible close relative of the suspect as an alternative contributor may lead to an overstated LR against the suspect.
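A minimal Monte Carlo sketch of the p-value described above: the probability, under the defense hypothesis, that a random non-contributor would produce an LR at least as large as the observed one. The LR function and allele frequencies here are hypothetical placeholders for a real mixture model.

import random

def sample_genotype(freqs):
    """Draw a random genotype under Hardy-Weinberg from hypothetical allele frequencies."""
    alleles, probs = zip(*freqs.items())
    return tuple(sorted(random.choices(alleles, weights=probs, k=2)))

def lr_p_value(observed_lr, lr_for_genotype, freqs, n_sim=100_000):
    """Estimate P(LR >= observed | defense hypothesis) by simulating non-contributors."""
    hits = sum(lr_for_genotype(sample_genotype(freqs)) >= observed_lr
               for _ in range(n_sim))
    return hits / n_sim

# Toy example: the LR is large only for the "matching" genotype ("16", "18").
freqs = {"14": 0.3, "16": 0.4, "18": 0.3}
toy_lr = lambda g: 50.0 if g == ("16", "18") else 0.5
print(lr_p_value(50.0, toy_lr, freqs))   # expected near 2 * 0.4 * 0.3 = 0.24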

6.
DNA mixture interpretation is undertaken either by calculating an LR or an exclusion probability (RMNE or its complement CPI). Debate exists as to which approach has the greater claim. The merits and drawbacks of the two approaches are discussed. We conclude that the two arguments that appear to have real force are: (1) LRs are more difficult to present in court, and (2) the RMNE statistic wastes information that should be utilised.
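A minimal sketch of the inclusion/exclusion statistic contrasted with the LR above, in its common textbook form (the abstract's own definitions may differ slightly; allele frequencies here are hypothetical): at a locus where the mixture shows alleles with frequencies p_i, the probability that a random person would not be excluded is (sum of p_i) squared; multiplying across loci gives a combined inclusion probability, whose complement is the exclusion probability.

def inclusion_prob(observed_freqs):
    """Probability a random person carries only alleles seen in the mixture at this locus."""
    s = sum(observed_freqs)
    return s * s

loci = [[0.10, 0.25, 0.05], [0.20, 0.15]]   # hypothetical observed-allele frequencies per locus
cpi = 1.0
for freqs in loci:
    cpi *= inclusion_prob(freqs)

print(cpi)          # combined probability of inclusion
print(1.0 - cpi)    # combined probability of exclusion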

7.
It is often challenging to ascribe an objective measure of confidence to identifications based on surveillance imagery from a crime scene. The present work seeks to address this deficiency for garment comparison evidence by developing a quantitative method for establishing a conservative lower bound on the likelihood ratio (LR) for identifications involving patterned garments. The method is based on statistical analysis of pattern offset measurements taken from a sample of garments of the same type (manufacturer, style, and size) as the seized evidence. The analysis framework was demonstrated on different types of garments over a range of modeled surveillance imaging scenarios with variable image quality; the lower bounds on the LRs ranged from approximately 10:1 to over 400:1. The statistical model was tested and validated through a large-scale empirical study involving both simulated and human-observer garment comparisons.

8.
The frontal sinuses are known to be unique to each individual; however, no one has tested the independence of frontal sinus traits to determine whether probability analysis through trait combination is a viable method of identifying an individual from the frontal sinuses. This research examines the feasibility of probabilistic trait combination, based on criteria recommended in the literature, and examines two other methods of identification using the frontal sinuses: discrete trait combinations and superimposition pattern matching. The research finds that most sinus traits are dependent upon one another and thus cannot be used in probability combinations. Among the traits that are independent, metric methods prove too error-prone to be useful, and discrete trait combinations do not have sufficient discriminating power. Only superimposition pattern matching is an effective method of identifying an individual using the frontal sinuses.

9.
Recent court challenges have highlighted the need for statistical research on fingerprint identification. This paper proposes a model for computing likelihood ratios (LRs) to assess the evidential value of comparisons involving any number of minutiae. The model considers minutiae type, direction, and relative spatial relationships. It expands on previous work on three minutiae by adopting spatial modeling based on radial triangulation and a probabilistic distortion model for assessing the numerator of the LR. The model has been tested on a sample of 686 ulnar loops and 204 arches. The feature vectors used for statistical analysis were obtained following a preprocessing step based on Gabor filtering and image processing to extract minutiae data. The metric used to assess similarity between two feature vectors is based on a Euclidean distance measure. Tippett plots and rates of misleading evidence were used as performance indicators for the model. The model showed encouraging behavior, with low rates of misleading evidence and an LR power that increases significantly with the number of minutiae. The LRs it provides are highly indicative of identity of source in a significant proportion of cases, even for configurations with few minutiae. In contrast with previous research, the model incorporates, in addition to minutia type and direction, the spatial relationships of minutiae without introducing probabilistic independence assumptions. The model also accounts for finger distortion.

10.
The calculation of likelihood ratios (LRs) for DNA mixture analysis requires establishing appropriate hypotheses based on the estimated number of contributors and the known contributor genotypes. In this paper, we recommend an analytical method for the 15 short tandem repeat typing system used as a standard in Japanese forensic practice (the Identifiler multiplex), incorporating a flowchart that facilitates hypothesis formulation. We postulate that: (1) all detected alleles must be above the analytical threshold (e.g., 150 relative fluorescence units (RFU)); (2) the alleles of all known contributors should be detected in the mixture profile; and (3) there should be no contribution from close relatives. Furthermore, we conclude that mixtures of four or more persons should not be interpreted with Identifiler, because in 100,000 simulated cases the LR values had a low expectation of exceeding our provisional LR threshold (10,000), the level taken to strongly support the prosecution hypothesis. We validated the method using various computer-based simulations. The estimated number of contributors is most likely equal to the actual number if all alleles detected in the mixture can be assigned to the known contributors. By contrast, if one or more unknown contributors need to be designated, LRs should be calculated for both two-person and three-person contributions. We also consider cases in which the unknown contributor(s) are genetically related to the known contributor(s).

11.
In forensic DNA casework, the interpretation of an evidentiary profile may depend on the assumed number of individuals from whom the evidence arose. Three methods of inferring the number of contributors (NOCIt, a maximum likelihood estimator, and maximum allele count) were evaluated using 100 test samples consisting of one to five contributors and 0.5–0.016 ng template DNA amplified with Identifiler® Plus and PowerPlex® 16 HS. Results indicate that NOCIt was the most accurate of the three methods, requiring 0.07 ng template DNA from any one contributor to consistently estimate the true number of contributors. Additionally, NOCIt returned repeatable results for 91% of samples analyzed in quintuplicate, and 50 single-source standards proved sufficient to calibrate the software. The data indicate that computational methods that employ a quantitative, probabilistic approach provide improved accuracy and additional pertinent information, such as the uncertainty associated with the inferred number of contributors.
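A minimal sketch of the simplest of the three estimators named above, maximum allele count (MAC): the minimum number of contributors consistent with the largest allele count observed at any locus, since each person contributes at most two alleles per locus. The profile data are hypothetical.

import math

def mac_contributors(profile):
    """Minimum contributor number implied by the maximum allele count over loci."""
    max_alleles = max(len(alleles) for alleles in profile.values())
    return math.ceil(max_alleles / 2)

profile = {"D8S1179": {"10", "12", "13", "14", "15"},
           "FGA":     {"20", "22", "24"}}
print(mac_contributors(profile))   # 5 alleles at D8S1179 imply at least 3 contributors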

12.
In the forensic context, teeth are often recovered in mass disasters, armed conflicts, and mass graves associated with human rights violations. For victim identification, therefore, techniques that use the dentition to estimate the first parameters of identity (e.g., age) can be critical. This analysis applies a Bayesian statistical method, transition analysis, based on the Gompertz-Makeham (GM) hazard model, to estimate individual ages-at-death for Balkan populations using dental wear. Dental wear phases were scored following Smith's eight-phase ordinal scoring method and chart. To estimate age, probability density functions for the posterior distributions of age for each tooth phase are calculated. Transition analysis was used to generate a mean age-of-transition from one dental wear phase to the next. The age estimates are based on the age distribution calculated from the GM hazard analysis and on the ages-of-transition. To estimate the age-at-death for an individual, the highest posterior density region for each phase is calculated. Because a Bayesian statistical approach is used to estimate age, the population's age distribution is taken into account, so the age estimates are reliable for the Balkan populations regardless of population or sex differences. The results showed considerable interpersonal variation in dental wear within the current sample, and this method may be most useful for classifying unknown individuals into broad age cohorts rather than narrow age ranges.

13.
There is an apparent paradox: the likelihood ratio (LR) approach is an appropriate measure of the weight of evidence when forensic findings have to be evaluated in court, yet it is typically not used by bloodstain pattern analysis (BPA) experts. This commentary evaluates how the scope and methods of BPA relate to several types of evaluative propositions and to the methods to which LRs are applicable. As a result of this evaluation, we show how specificities in scope (BPA being about activities rather than source identification), gaps in the underlying science base, and reliance on a wide range of methods make the use of LRs in BPA more complex than in some other forensic disciplines. Three directions are identified for BPA research and training that would facilitate and widen the use of LRs: research into the underlying physics; the development of a culture of data sharing; and the development of training material on the required statistical background. An example of how recent fluid-dynamics research in BPA can lead to the use of LRs is provided. We conclude that an LR framework is fully applicable to BPA, provided methodical efforts and significant developments occur along the three outlined directions.

14.
15.
Likelihood ratios (LRs) provide a natural way of computing the value of evidence under competing propositions. We propose LR models for classification and comparison that extend the ideas of Aitken, Zadora, and Lucy and of Aitken and Lucy to include the consideration of zeros. Instead of substituting zeros with a small value, we view the presence of zeros as informative and model it using Bernoulli distributions. The proposed models are used for the evaluation of forensic glass (comparison and classification problems) and paint data (comparison problem). Two hundred and sixty-four glass samples were analyzed by scanning electron microscopy coupled with energy-dispersive X-ray spectrometry, and 36 acrylic topcoat paint samples by pyrolysis gas chromatography coupled with mass spectrometry. The proposed LR model gave very satisfactory results for the glass comparison problem and for most of the glass classification tasks. Results for the comparison of paints were also highly satisfactory, with only 3.0% false-positive and 2.8% false-negative answers.

16.
A great deal has previously been written about the use of skeletal morphological changes in estimating ages-at-death. This article looks in particular at the pubic symphysis, as it was historically one of the first regions described in the age-estimation literature. Despite this lengthy history, the value of the pubic symphysis in estimating ages and in providing evidence for putative identifications remains unclear, primarily because rather ad hoc statistical methods have been applied in previous studies. This article presents a statistical analysis of a large data set (n = 1766) of pubic symphyseal scores from multiple contexts, including anatomical collections, war dead, and victims of genocide. The emphasis is on finding statistical methods that have the correct "coverage". "Coverage" means that if a method has a stated coverage of 50%, then approximately 50% of the individuals in a particular pubic symphyseal stage should have ages between the stated age limits, with approximately 25% below the bottom limit and 25% above the top limit. In a number of applications it is shown that if an appropriate prior age-at-death distribution is used, then transition analysis provides accurate coverages, while percentile methods, range methods, and means (± standard deviations) do not. Even where there are significant differences between populations in the mean ages-to-transition, the effects on the stated age limits for particular coverages are minimal. As a consequence, more emphasis should be placed on collecting data on age changes in large samples than on the possibility of inter-population variation in rates of aging.
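A minimal sketch of the "coverage" check described above: for individuals observed in a given pubic symphyseal stage, what fraction of known ages fall inside the stated interval, and what fractions fall below and above it. The ages and interval limits are hypothetical.

def coverage(known_ages, lower, upper):
    """Return (inside, below, above) fractions for a stated age interval."""
    n = len(known_ages)
    inside = sum(lower <= a <= upper for a in known_ages) / n
    below = sum(a < lower for a in known_ages) / n
    above = sum(a > upper for a in known_ages) / n
    return inside, below, above

ages_in_stage = [24, 27, 29, 31, 33, 36, 38, 41, 45, 52]   # hypothetical known ages
print(coverage(ages_in_stage, lower=28, upper=40))
# A well-calibrated 50% interval should give roughly (0.5, 0.25, 0.25).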

17.
The high complexity of the genetic analysis of crime scene samples is mainly related to the unknown number of contributors, low DNA quantity and quality, and the associated stochastic effects. The difficulty and subjectivity of interpreting casework samples motivated the development of software to mitigate these conditions and allow the quantification of the genetic evidence. Currently, there are several tools for the statistical analysis of mixture samples based on either qualitative or quantitative models; the former consider the electropherograms' qualitative information, while the latter also consider the associated quantitative information. The main goal of this work was to evaluate the effect that varying parameter settings, specifically the drop-in frequency, may have on the LR computation. One qualitative tool (LRmix Studio) and two quantitative tools (STRmix™ and EuroForMix) were considered, and an intra-software analysis was performed using real casework samples as input. Varying the drop-in frequency had an impact, leading to differences of more than four log10 units for some pairs of samples. In addition, in some cases no comparison could be performed, either because the tool computed a null LR value or because it displayed an error message. This work therefore reinforces the importance of proper parameter modeling and estimation in forensic casework evaluation.

18.
DNA evidence in criminal cases may be challenging to interpret if several individuals have contributed to a DNA mixture. The genetic markers conventionally used for forensic applications may be insufficient to resolve cases where there is a small fraction of DNA (say, less than 10%) from some contributors or where there are several (say, more than 4) contributors. Recently, methods have been proposed that claim to improve substantially on existing approaches [1]. The basic idea is to use high-density single nucleotide polymorphism (SNP) genotyping arrays including as many as 500,000 markers or more and to explicitly exploit raw allele intensity measures. It is claimed that trace fractions of less than 0.1% can be reliably detected in mixtures with a large number of contributors. Specific forensic issues pertaining to the amount and quality of DNA are not discussed in that paper and will not be addressed here. Rather, our paper critically examines the statistical methods and the validity of the conclusions drawn in Homer et al. (2008) [1]. We provide a mathematical argument showing that the suggested statistical approach will give misleading results in important cases. For instance, for a two-person mixture, an individual contributing less than 33% is expected to be declared a non-contributor. The quoted threshold of 33% applies when all relative allele frequencies are 0.5. Simulations confirmed the mathematical findings and also provided results for more complex cases: we specified several scenarios for the number of contributors, the mixing proportions, and the allele frequencies, and simulated as many as 500,000 SNPs. A controlled, blinded experiment was also performed using the Illumina GoldenGate® 360 SNP test panel: twenty-five mixtures were created from 2 to 5 contributors, with proportions ranging from 0.01 to 0.99. The findings were consistent with the mathematical result and the simulations. We conclude that it is not possible to reliably infer the presence of minor contributors to mixtures following the approach suggested in Homer et al. (2008) [1]. The basic problem is that the method fails to account for mixing proportions.
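A minimal Monte Carlo sketch of the mathematical argument summarized above, using a reconstruction of the Homer-style statistic D_j = |Y_j - Pop_j| - |Y_j - M_j| averaged over SNPs for a true contributor whose mixture proportion varies; with all allele frequencies at 0.5, the mean D crosses zero near a proportion of 1/3, so a true contributor below about 33% looks like a non-contributor. This is an illustrative reconstruction under those assumptions, not the original code.

import random

def mean_d(phi, n_snps=100_000, p=0.5):
    """Average D statistic for a true contributor at mixture proportion phi."""
    total = 0.0
    for _ in range(n_snps):
        y = sum(random.random() < p for _ in range(2)) / 2   # person of interest: 0, 0.5, or 1
        z = sum(random.random() < p for _ in range(2)) / 2   # other contributor
        m = phi * y + (1 - phi) * z                          # mixture allele frequency
        total += abs(y - p) - abs(y - m)
    return total / n_snps

for phi in (0.20, 0.33, 0.50):
    print(phi, round(mean_d(phi), 4))   # negative, near zero, and positive, respectively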

19.
DNA mixtures with two or more contributors are a prevalent form of biological evidence. Mixture interpretation is complicated by the possibility of different genotype combinations that can explain the short tandem repeat (STR) data. Current human review simplifies this interpretation by applying thresholds to treat STR data peaks qualitatively as all-or-none events and by assigning allele pairs equal likelihood. Computer review, however, can instead work with all the quantitative data and thereby preserve more identification information. The present study examined the extent to which quantitative computer interpretation could elicit more identification information than human review from the same adjudicated two-person mixture data. The base-10 logarithm of a DNA match statistic is a standard information measure that permits such a comparison. On eight mixtures having two unknown contributors, we found that quantitative computer interpretation gave an average information increase of 6.24 log units (min = 2.32, max = 10.49) over qualitative human review. On eight other mixtures with a known victim reference and one unknown contributor, quantitative interpretation averaged a 4.67 log-unit increase (min = 1.00, max = 11.31) over qualitative review. This study provides a general treatment of DNA interpretation methods (including mixtures) that encompasses both quantitative and qualitative review. Validation methods are introduced that can assess the efficacy and reproducibility of any DNA interpretation method. An in-depth case example highlights 10 reasons (at 10 different loci) why quantitative probability modeling preserves more identification information than qualitative threshold methods. The results validate TrueAllele® DNA mixture interpretation and establish a significant information improvement over human review.

20.
Depending on the number of items of evidence involved, probabilistic analysis in fact-finding can be divided into a single-evidence dimension and a combined-evidence dimension. In the single-evidence dimension, the field of forensic science will continue to produce statistical-probability evidence that, like DNA evidence, combines empirical statistical data with a high degree of scientific validation. In the combined-evidence dimension, fact-finding cannot be achieved through mathematical reasoning, mainly because: the fact-finding process is complex and difficult for mathematical reasoning to model; mathematical reasoning is not the "native language" of judicial proof but a "foreign language" requiring translation; and mathematical reasoning, through its operations on "quantity", blurs, confuses, and even masks differences in "quality".

