Similar Literature
20 similar documents found.
1.
There is a general lack of awareness that a high LR based on complex propositions (e.g., three contributors) does not necessarily translate into probative evidence against a suspect. In some cases there is an increased chance of falsely including a person of interest. This is an issue for all LR-based models. One way to address it is to further evaluate or qualify the estimated LR with a performance test. Based on simulations, this was achieved by non-contributor testing: replacing the reference profile of interest (typically the suspect's profile) with the profile of a simulated random man. An exact p-value can also be calculated, giving the chance of observing an LR exceeding the estimated value if the defense hypothesis is true.
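Below is a minimal sketch of the non-contributor test described above, under assumptions that are not taken from the paper: a toy single-locus, two-person, no-dropout/no-drop-in mixture model, illustrative allele frequencies, and a toy `lr_two_person` helper standing in for the laboratory's LR engine. Random genotypes replace the person of interest and the LR is recomputed; the p-value is estimated as the fraction of non-contributor LRs reaching the observed LR (the paper also describes an exact calculation, which this simulation only approximates).

```python
import random
from itertools import combinations_with_replacement

# Illustrative allele frequencies at one locus (assumed, not from the paper).
FREQS = {"11": 0.30, "12": 0.25, "13": 0.20, "14": 0.15, "15": 0.10}
VICTIM = ("11", "12")                 # assumed known contributor
MIXTURE = {"11", "12", "13"}          # alleles observed in the two-person mixture

def lr_two_person(poi, victim=VICTIM, mixture=MIXTURE, freqs=FREQS):
    """Toy LR for Hp: victim + POI vs Hd: victim + unknown, assuming the
    mixture alleles are exactly the union of the contributors' alleles."""
    p_e_hp = 1.0 if set(poi) | set(victim) == mixture else 0.0
    p_e_hd = 0.0
    for a, b in combinations_with_replacement(freqs, 2):
        if {a, b} | set(victim) == mixture:
            p_e_hd += freqs[a] * freqs[b] * (1 if a == b else 2)  # HWE genotype probability
    return p_e_hp / p_e_hd

def random_genotype(freqs=FREQS):
    alleles, probs = zip(*freqs.items())
    return tuple(random.choices(alleles, probs, k=2))

observed_lr = lr_two_person(("12", "13"))          # LR for the actual person of interest
non_contrib_lrs = [lr_two_person(random_genotype()) for _ in range(100_000)]
p_value = sum(lr >= observed_lr for lr in non_contrib_lrs) / len(non_contrib_lrs)
print(f"observed LR = {observed_lr:.1f}, Hd-true p-value ≈ {p_value:.4f}")
```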

2.
The calculation of likelihood ratios (LRs) for DNA mixture analysis requires an appropriate pair of hypotheses based on the estimated number of contributors and the known contributor genotypes. In this paper, we recommend an analytical method for the 15-locus short tandem repeat typing system (the Identifiler multiplex), which is used as a standard in Japanese forensic practice, incorporating a flowchart that facilitates hypothesis formulation. We postulate that: (1) all detected alleles need to be above the analytical threshold (e.g., 150 relative fluorescence units (RFU)); (2) alleles of all known contributors should be detected in the mixture profile; (3) there should be no contribution from close relatives. Furthermore, we deduce that mixtures of four or more persons should not be interpreted with Identifiler, because across 100,000 simulated cases the LR values were unlikely to exceed our provisional LR threshold (10,000), the level taken to strongly support the prosecution hypothesis. We validated the method using various computer-based simulations. The estimated number of contributors is most likely equal to the actual number if all alleles detected in the mixture can be assigned to the known contributors. By contrast, if one or more unknown contributors need to be designated, LRs should be calculated under both two-person and three-person hypotheses. We also consider cases in which the unknown contributor(s) is genetically related to the known contributor(s).

3.
Computer methods have been developed for mathematically interpreting mixed and low‐template DNA. The genotype modeling approach computationally separates out the contributors to a mixture, with uncertainty represented through probability. Comparison of inferred genotypes calculates a likelihood ratio (LR), which measures identification information. This study statistically examined the genotype modeling performance of Cybergenetics TrueAllele® computer system. High‐ and low‐template DNA mixtures of known randomized composition containing 2, 3, 4, and 5 contributors were tested. Sensitivity, specificity, and reproducibility were established through LR quantification in each of these eight groups. Covariance analysis found LR behavior to be relatively invariant to DNA amount or contributor number. Analysis of variance found that consistent solutions were produced, once a sufficient number of contributors were considered. This study demonstrates the reliability of TrueAllele interpretation on complex DNA mixtures of representative casework composition. The results can help predict an information outcome for a DNA mixture analysis.
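As a rough illustration of how the three reliability metrics named above can be summarized from pools of log(LR) values, here is a minimal sketch with invented numbers (the arrays and the threshold of 0 are illustrative and are not TrueAllele output):

```python
import statistics

# Illustrative log10(LR) values (assumed, not from the study).
true_contributor_log_lrs = [12.4, 9.8, 15.1, 7.6, 11.0, 10.2]
non_contributor_log_lrs  = [-6.3, -9.1, -4.8, -7.7, -5.5, -8.0]
threshold = 0.0   # log10(LR) > 0 taken as support for inclusion

# Sensitivity: fraction of true contributors whose LR supports inclusion.
sensitivity = sum(x > threshold for x in true_contributor_log_lrs) / len(true_contributor_log_lrs)
# Specificity: fraction of non-contributors whose LR does not support inclusion.
specificity = sum(x <= threshold for x in non_contributor_log_lrs) / len(non_contributor_log_lrs)
# Reproducibility: spread of log(LR) across repeated runs of the same comparison.
replicate_log_lrs = [11.9, 12.1, 12.4, 12.2]
reproducibility_sd = statistics.stdev(replicate_log_lrs)

print(sensitivity, specificity, round(reproducibility_sd, 2))
```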

4.
Two-person DNA admixtures are frequently encountered in criminal cases and their interpretation can be challenging, particularly if the amount of DNA contributed by both individuals is approximately equal. Due to an inevitable degree of uncertainty in the constituent genotypes, reduced statistical weight is given to the mixture evidence compared to that expected from the constituent single-source contributors. The ultimate goal of mixture analysis, then, is to precisely discern the constituent genotypes, and here we posit a novel strategy to accomplish this. We hypothesised that LCM-mediated isolation of multiple groups of cells (‘binomial sampling’) from the admixture would create separate cell sub-populations with differing constituent weight ratios. Furthermore, we predicted that interpreting the resulting DNA profiling data by the quantitative computer-based TrueAllele® interpretation system would result in an efficient recovery of the constituent genotypes due to newfound abilities to compute a maximum LR from sub-samples with skewed weight ratios, and to jointly interpret all possible pairings of sub-samples using a joint likelihood function. As a proof of concept, 10 separate cell samplings of size 20 recovered by LCM from each of two 1:1 buccal cell mixtures were DNA-STR profiled using a specifically developed LCN methodology, with the data analyzed by the TrueAllele® Casework system. In accordance with the binomial sampling hypothesis, the sub-samples exhibited weight ratios that were well dispersed from the 50% center value (50 ± 35% at the 95% level). The maximum log(LR) information for a genotype inferred from a single 20-cell sample was 18.5 ban, with an average log(LR) information of 11.7 ban. Co-inferring genotypes using a joint likelihood function with two sub-samples essentially recovered the full genotype information. We demonstrate that a similar gain in genotype information can be obtained with standard (28-cycle) PCR conditions using the same joint interpretation methods. Finally, we discuss the implications of this work for routine forensic practice.
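The joint interpretation of two sub-samples can be written out in a general form; the notation below is mine, not TrueAllele's, and it assumes the two sub-sample data sets E_1 and E_2 are conditionally independent given the contributor genotype G, which is summed out under each proposition.

```latex
\[
\mathrm{LR}
  = \frac{\Pr(E_1, E_2 \mid H_p)}{\Pr(E_1, E_2 \mid H_d)}
  = \frac{\sum_{G} \Pr(E_1 \mid G)\,\Pr(E_2 \mid G)\,\Pr(G \mid H_p)}
         {\sum_{G} \Pr(E_1 \mid G)\,\Pr(E_2 \mid G)\,\Pr(G \mid H_d)}
\]
```

This is the sense in which two sub-samples with skewed weight ratios can jointly recover close to the full genotype information: each sub-sample constrains the summed-out genotype more sharply than the balanced bulk mixture does.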

5.
Most DNA evidence is a mixture of two or more people. Cybergenetics TrueAllele® system uses Bayesian computing to separate genotypes from mixture data and compare genotypes to calculate likelihood ratio (LR) match statistics. This validation study examined the reliability of TrueAllele computing on laboratory-generated DNA mixtures containing up to ten unknown contributors. Using log(LR) match information, the study measured sensitivity, specificity, and reproducibility. These reliability metrics were assessed under different conditions, including varying the number of assumed contributors, statistical sampling duration, and setting known genotypes. The main determiner of match information and variability was how much DNA a person contributed to a mixture. Observed contributor number based on data peaks gave better results than the number known from experimental design. The study found that TrueAllele is a reliable method for analyzing DNA mixtures containing up to ten unknown contributors.

6.
Science & Justice, 2022, 62(2): 156-163
DNA mixtures are a common source of crime scene evidence and are often one of the more difficult sources of biological evidence to interpret. With the implementation of probabilistic genotyping (PG), mixture analysis has been revolutionized, allowing previously unresolvable mixed profiles to be analyzed and probative genotype information from contributors to be recovered. However, due to allele overlap, artifacts, or low-level minor contributors, genotype information loss inevitably occurs. In order to reduce the potential loss of significant DNA information from donors in complex mixtures, an alternative approach is to physically separate individual cells from mixtures prior to performing DNA typing, thus obtaining single-source profiles from contributors. In the present work, a simplified micro-manipulation technique combined with enhanced single-cell DNA typing was used to collect one or a few cells, referred to as direct single-cell subsampling (DSCS). Using this approach, single and 2-cell subsamples were collected from 2- to 6-person mixtures. Single-cell subsamples resulted in single-source DNA profiles, while the 2-cell subsamples returned either single-source DNA profiles or new mini-mixtures that are less complex than the original mixture due to the presence of fewer contributors. PG (STRmix™) was implemented, after appropriate validation, to analyze the original bulk mixtures, the single-source cell subsamples, and the 2-cell mini-mixture subsamples from the original 2- to 6-person mixtures. PG further allowed replicate analysis to be employed which, in many instances, resulted in a significant gain of genotype information such that the returned donor likelihood ratios (LRs) were comparable to those seen in their single-source reference profiles (i.e., the reciprocal of their random match probabilities). In every mixture, the DSCS approach gave improved results for each donor compared to standard bulk mixture analysis. With the 5- and 6-person complex mixtures, DSCS recovered highly probative LRs (≥10^20) from donors that had returned non-probative LRs (<10^3) by standard methods.

7.
PENDULUM--a guideline-based approach to the interpretation of STR mixtures
Several years ago, a theory for interpreting mixed DNA profiles was proposed that included a consideration of peak area using the method of least squares. This method of mixture interpretation has not been widely adopted because of the complexity of the associated calculations. Most reporting officers (ROs) employ an experience- and judgement-based approach to the interpretation of mixed DNA profiles. Here we present an approach that formalises the thinking behind this experience and judgement. It has been written into a computer program package called PENDULUM. The program uses a least-squares method to estimate the pre-amplification mixture proportion for two potential contributors. It then calculates the heterozygote balance for all of the potential sets of genotypes. A list of "possible" genotypes is generated using a set of heuristic rules. External to the program, the candidate genotypes may then be used to formulate likelihood ratios (LRs) based on alternative casework propositions. The system does not represent a black-box approach; rather, it has been integrated into the method currently used by the reporting officers at the Forensic Science Service (FSS). The time saved by automating routine calculations associated with mixture analysis is significant. In addition, the computer program assists in unifying reporting processes, thereby improving the consistency of reporting.
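For orientation, a minimal sketch of the least-squares step described above, under assumptions that are not PENDULUM's internals: each contributor's expected peak-area share at a locus is proportional to that contributor's mixture proportion and is split equally between their two alleles, and the proportion is found by a simple grid search. The peak areas and genotypes are illustrative.

```python
import numpy as np

# Observed peak areas at one locus (illustrative numbers, not casework data).
peak_areas = {"11": 1800.0, "12": 2100.0, "13": 620.0, "14": 540.0}

def fit_mixture_proportion(genotype_1, genotype_2, areas):
    """Least-squares estimate of the proportion m contributed by genotype_1,
    assuming each contributor's area splits equally between their two alleles."""
    alleles = sorted(areas)
    obs = np.array([areas[a] for a in alleles])
    obs = obs / obs.sum()                                   # observed area shares
    c1 = np.array([genotype_1.count(a) / 2 for a in alleles])
    c2 = np.array([genotype_2.count(a) / 2 for a in alleles])
    grid = np.linspace(0.0, 1.0, 1001)
    residuals = [np.sum((obs - (m * c1 + (1 - m) * c2)) ** 2) for m in grid]
    best = int(np.argmin(residuals))
    return grid[best], residuals[best]

m, rss = fit_mixture_proportion(("11", "12"), ("13", "14"), peak_areas)
print(f"estimated mixture proportion: {m:.2f} (residual sum of squares {rss:.4f})")
```

PENDULUM additionally screens candidate genotype sets with heterozygote-balance checks and heuristic guideline rules; those steps are omitted from this sketch.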

8.
Sleep sex may be a defense for alleged sexual assault. The International Classification of Sleep Disorders (ICSD3) states: “Disorders of arousal should not be diagnosed in the presence of alcohol intoxication… The former [alcohol blackouts] are exponentially more prevalent.” A panel member of ICSD3, quoting ICSD3, asserts: “alcohol intoxication should rule out a sleep-walking defense”. This implies extremely strong support for a prosecution hypothesis (Hp) over a defense hypothesis (Hd). I use Bayesian methodology to evaluate the probative value of alcohol intoxication. The likelihood ratio, LR = P(E | Hp)/P(E | Hd), measures the amplification of the prior odds of guilt. By Bayes' theorem, posterior odds = LR × prior odds. I use data from cross-sectional studies of sexual assault and of the prevalence of alcohol use in college students, together with data from longitudinal studies and from the epidemiology of parasomnias, to evaluate LR(alcohol). LR(alcohol) is ~1.5 or 5, depending on whether alcohol does, or does not, increase the risk of parasomnias. The proposition of extremely strong support for Hp implies an LR of ~1,000,000, so the proposition in ICSD3 is not supported by formal analysis. The statistical reasoning in ICSD3 is unclear. There appears to be an inversion of the Bayesian conditional (confusing intoxication given assault with assault given intoxication) and a failure to evaluate alcohol intoxication under Hd. Similar statistical errors in R. v Sally Clark are discussed. The American Academy of Sleep Medicine should review the statistical methodology in ICSD3.
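A sketch of the underlying arithmetic with purely illustrative figures (these are not the survey data used in the paper), showing why LR(alcohol) stays close to 1 when intoxication is common under both hypotheses:

```latex
% Illustrative numbers only (not the paper's data): suppose intoxication is
% reported in about 60% of assaults under Hp and about 40% of episodes under Hd.
\[
\mathrm{LR}(\text{alcohol})
  = \frac{\Pr(\text{intoxication} \mid H_p)}{\Pr(\text{intoxication} \mid H_d)}
  = \frac{0.6}{0.4} = 1.5,
\qquad
\text{posterior odds} = \mathrm{LR} \times \text{prior odds}.
\]
% By contrast, "extremely strong support" on a verbal scale corresponds to an LR
% of order 10^{6}.
```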

9.
The Bayesian approach provides a unified and logical framework for the analysis of evidence and for reporting results from the forensic laboratory to the court in the form of likelihood ratios (LRs). In this contribution we want to clarify how the biometric scientist or laboratory can adapt their conventional biometric systems or technologies to work according to this Bayesian approach. Forensic systems providing their results in the form of LRs will be assessed through Tippett plots, which give a clear representation of the LR-based performance both for targets (the suspect is the author/source of the test pattern) and non-targets. However, the procedures for computing the LR values, especially with biometric evidence, are still an open issue. Reliable estimation techniques with good generalization properties are required for estimating the between- and within-source variabilities of the test pattern, as are variance-restriction techniques in the within-source density estimation to account for the variability of the source over time. Fingerprint, face, and on-line signature recognition systems are adapted to work according to this Bayesian approach, showing both the range of likelihood ratios in each application and the adequacy of these biometric techniques for daily forensic work.
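As a rough illustration of the Tippett plots mentioned above, here is a minimal sketch with invented scores (the log(LR) arrays are not output from any of the systems discussed): for each LR value it plots the proportion of target and of non-target comparisons whose LR meets or exceeds that value.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative log10(LR) values (assumed, not real system output).
target_llrs     = np.array([2.1, 3.4, 1.2, 4.0, 2.8, 0.6, 3.1])
non_target_llrs = np.array([-1.4, -2.2, -0.3, -3.1, -0.9, -1.8, -2.6])

def tippett_curve(llrs, grid):
    """Proportion of comparisons with log10(LR) >= each grid value."""
    return np.array([(llrs >= x).mean() for x in grid])

grid = np.linspace(-4, 5, 200)
plt.plot(grid, tippett_curve(target_llrs, grid), label="targets (same source)")
plt.plot(grid, tippett_curve(non_target_llrs, grid), label="non-targets (different source)")
plt.axvline(0.0, linestyle=":")          # LR = 1
plt.xlabel("log10(LR)")
plt.ylabel("proportion of comparisons with LR ≥ value")
plt.legend()
plt.show()
```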

10.
A novel Bayesian methodology has been developed to quantitatively assess handwriting evidence by means of a likelihood ratio (LR) designed for multivariate data. This methodology is presented and its applicability is shown through a simulated case of a threatening anonymous text where a suspect is apprehended. The shape of the handwritten characters a, d, o, and q in the threatening text was compared with the characters of the true writer, and then with those of two other writers, one with character shapes similar to and one with character shapes dissimilar from those of the true writer. In each of these three situations, 100 draws of characters were made and the resulting distributions of LRs were established to account for natural handwriting variation. The LR values supported the correct hypothesis in every case. This original Bayesian methodology provides a coherent and rigorous tool for the assessment of handwriting evidence, undoubtedly helping to integrate the field of handwriting examination into science.

11.
Assessment of forensic findings with likelihood ratios is straightforward in many cases, but there are situations where the alternative explanation of the evidence needs careful consideration, particularly when it comes to reporting the evidentiary strength. The likelihood ratio approach cannot be directly applied to cases where the alternative to the forwarded proposition is a set of multiple propositions with different likelihoods and different prior probabilities. Here we present a general framework based on the Bayes factor as the quantitative measure of evidentiary strength, from which it can be deduced whether the direct application of a likelihood ratio is reasonable or not. The framework is applied to DNA evidence as an extension of previously published work. With the help of a scale of conclusions, we provide a solution to the problem of communicating to the court the evidentiary strength of a DNA match when a close relative of the suspect has a non-negligible prior probability of being the source of the DNA.
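A sketch of the general form such a Bayes factor takes (my notation, not the paper's): when the alternative is a set of sub-propositions H_{d,i} with prior probabilities, the denominator is the prior-weighted average of their likelihoods, and the Bayes factor coincides with an ordinary likelihood ratio only when a single sub-proposition carries essentially all of the prior weight or when the sub-proposition likelihoods are equal.

```latex
% Composite alternative H_d = {H_{d,1}, ..., H_{d,k}} with prior probabilities Pr(H_{d,i}):
\[
\mathrm{BF}
  = \frac{\Pr(E \mid H_p)}
         {\displaystyle\sum_{i=1}^{k} \Pr(E \mid H_{d,i})\,
          \frac{\Pr(H_{d,i})}{\sum_{j=1}^{k}\Pr(H_{d,j})}}
\]
```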

12.
Determining the number of contributors to a forensic DNA mixture using maximum allele count is a common practice in many forensic laboratories. In this paper, we compare this method to a maximum likelihood estimator, previously proposed by Egeland et al., that we extend to the cases of multiallelic loci and population subdivision. We compared both methods’ efficiency for identifying mixtures of two to five individuals in the case of uncertainty about the population allele frequencies and partial profiles. The proportion of correctly resolved mixtures was >90% for both estimators for two‐ and three‐person mixtures, while likelihood maximization yielded success rates 2‐ to 15‐fold higher for four‐ and five‐person mixtures. Comparable results were obtained in the cases of uncertain allele frequencies and partial profiles. Our results support the use of the maximum likelihood estimator to report the number of contributors when dealing with complex DNA mixtures.
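For orientation, a minimal sketch of the two estimators compared above, under simplifying assumptions that are not the paper's full model (independent loci, Hardy-Weinberg proportions, no dropout, no population subdivision, and illustrative allele frequencies); the per-locus likelihood uses a standard inclusion-exclusion expression for the probability that 2n alleles drawn from the population show exactly the observed allele set.

```python
import math
from itertools import combinations

# Illustrative allele frequencies and an observed profile (assumed data).
FREQS = {
    "D3": {"14": 0.12, "15": 0.28, "16": 0.25, "17": 0.20, "18": 0.15},
    "vWA": {"16": 0.22, "17": 0.28, "18": 0.22, "19": 0.18, "20": 0.10},
}
OBSERVED = {"D3": {"14", "15", "16", "17"}, "vWA": {"16", "17", "18"}}

def mac_estimate(observed):
    """Maximum allele count: at least ceil(max alleles per locus / 2) contributors."""
    return max(math.ceil(len(alleles) / 2) for alleles in observed.values())

def locus_likelihood(alleles, freqs, n):
    """P(exactly this allele set | n contributors), by inclusion-exclusion
    over 2n independent allele draws."""
    total = 0.0
    for k in range(len(alleles) + 1):
        for subset in combinations(alleles, k):
            total += (-1) ** (len(alleles) - k) * sum(freqs[a] for a in subset) ** (2 * n)
    return total

def mle_estimate(observed, freqs, n_max=5):
    log_liks = {
        n: sum(math.log(locus_likelihood(obs, freqs[locus], n))
               for locus, obs in observed.items())
        for n in range(1, n_max + 1)
        if all(locus_likelihood(obs, freqs[locus], n) > 0 for locus, obs in observed.items())
    }
    return max(log_liks, key=log_liks.get)

print("MAC estimate:", mac_estimate(OBSERVED))
print("MLE estimate:", mle_estimate(OBSERVED, FREQS))
```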

13.
Mixed DNA profiles are being encountered more frequently as laboratories analyze increasing amounts of touch evidence. If it is determined that an individual could be a possible contributor to the mixture, it is necessary to perform a statistical analysis to allow an assignment of weight to the evidence. Currently, the combined probability of inclusion (CPI) and the likelihood ratio (LR) are the most commonly used methods for this statistical analysis. A third method, random match probability (RMP), is available. This article compares the advantages and disadvantages of the CPI and LR methods with those of the RMP method. We demonstrate that, although the LR method is still considered the most powerful of the binary methods, the RMP and LR methods make similar use of the observed data, such as peak height, the assumed number of contributors, and known contributors, whereas the CPI calculation tends to waste information and be less informative.
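For orientation, a minimal sketch of how a CPI is assembled from a mixture profile (illustrative alleles and frequencies; the RMP and LR calculations, which additionally use peak heights, an assumed number of contributors, and known contributors, are not sketched here): the per-locus probability of inclusion is the squared sum of the frequencies of all alleles observed at that locus, and the CPI is their product across loci.

```python
from math import prod

# Illustrative mixture alleles and population frequencies (assumed data).
MIXTURE = {"D8": {"12", "13", "14"}, "D21": {"28", "30"}}
FREQS   = {"D8": {"12": 0.15, "13": 0.30, "14": 0.20},
           "D21": {"28": 0.16, "30": 0.25}}

def combined_probability_of_inclusion(mixture, freqs):
    """CPI: product over loci of (sum of included-allele frequencies) squared."""
    return prod(sum(freqs[locus][a] for a in alleles) ** 2
                for locus, alleles in mixture.items())

cpi = combined_probability_of_inclusion(MIXTURE, FREQS)
print(f"CPI = {cpi:.4f}  (probability a random person would be 'included')")
```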

14.
15.
Gunshot residues (GSR), cartridge projectiles, and casings are frequently encountered evidence in gun-related forensic investigations. However, in circumstances where the investigation of striation marks is impossible, such as unrecovered or deformed projectiles and cartridge casings, GSR deposited on the hands or clothes of the shooter and on victim-related items can provide information to establish a link between the suspect, the firearms used, and the victim. Since the primer formulation used by all cartridge manufacturers in China is identical, links based on conventional morphological and compositional analysis of GSR are difficult to establish. However, the abundance of lead isotopes in the lead styphnate primer components varies significantly, and a fundamental understanding of these differences may facilitate the validation of primer (p)GSR evidence in forensic investigations. Here, 44 pGSR samples were characterized by the Pb isotope ratios ²⁰⁶Pb/²⁰⁴Pb, ²⁰⁷Pb/²⁰⁴Pb, and ²⁰⁸Pb/²⁰⁴Pb using laser ablation multicollector inductively coupled plasma mass spectrometry. There was no obvious mass fractionation of the lead isotope ratios of the primers from individual cartridges analyzed before and after the shooting process, thereby establishing a basis for the comparison of pGSR and unfired cartridges. Evaluation of the results using univariate likelihood ratio (LR) computations revealed low rates of misleading evidence (<0.53%). The results demonstrate that lead isotope ratio analysis of pGSR and LR-based evaluation can provide a practicable method for forensic cartridge discrimination and individualization.

16.
DNA analyses can be used for both investigative (crime scene-focused) and evaluative (suspect-focused) reporting. Investigative, DNA-led exploration of serious crimes always involves the comparison of hundreds of biological samples submitted by the authorities for analysis. Crime stain comparisons include both evidence-to-evidence and reference-to-evidence profile comparisons. When many complex DNA results (mixtures, low-template (LT) DNA samples) are involved in the investigation of a crime, the manual comparison of DNA profiles is very time-consuming and prone to manual errors. In addition, if the person of interest is a minor contributor, the classical approach of searching national DNA databases is problematic, because it is realistically restricted to clear major contributors, and the occurrence of masking and drop-out means that there will not be a definitive DNA profile to perform the search with. CaseSolver is an open-source expert system that automates the analysis of complex cases. It does this in three sequential steps: (a) simple allele comparison; (b) likelihood ratios (LRs) based on a qualitative model (forensim); (c) LRs based on a quantitative model (EuroForMix). The software generates a list of potential match candidates, ranked according to the LRs, which can be exported as a report. The software can also identify contributors from small or large databases (e.g., a staff database or a database of 1 million individuals). In addition, an informative graphical network plot is generated that easily identifies contributors common to multiple stains. Here we describe recent improvements made to the software in version 1.5.0, in response to user requirements during intensive casework usage.
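A minimal sketch of the first of the three steps, the simple allele comparison (the profiles and the scoring rule are illustrative; CaseSolver's actual comparison logic, and its LR steps via forensim and EuroForMix, are not reproduced here): each reference is scored by how many of its alleles appear in the stain, and candidates are ranked for follow-up with the LR models.

```python
# Illustrative stain and reference profiles (assumed data, two loci shown).
STAIN = {"D3": {"14", "15", "16"}, "vWA": {"17", "18", "19"}}
REFERENCES = {
    "ref_A": {"D3": {"14", "16"}, "vWA": {"17", "19"}},
    "ref_B": {"D3": {"12", "13"}, "vWA": {"18", "20"}},
}

def allele_match_score(reference, stain):
    """Fraction of the reference's alleles that appear in the stain."""
    total = sum(len(alleles) for alleles in reference.values())
    shared = sum(len(reference[locus] & stain.get(locus, set())) for locus in reference)
    return shared / total

candidates = sorted(((allele_match_score(ref, STAIN), name)
                     for name, ref in REFERENCES.items()), reverse=True)
for score, name in candidates:
    print(f"{name}: {score:.2f} of alleles present in stain")
```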

17.
The reporting of likelihood ratios (LRs) calculated by probabilistic genotyping software has become more common since 2015 and has allowed more complex mixtures to be used in court. The meaning of “inconclusive” LRs and how to communicate the significance of low LRs at court are therefore now important. We present a method here that uses the distribution of LRs obtained from nondonors. The nondonor distribution is useful for examining calibration and discrimination for profiles that have produced LRs of less than about 10^4. In this paper, a range of mixed DNA profiles of varying quantity was constructed, and the LR distribution for the minor contributor position obtained from a number of nondonors was compared with the expectation given a calibrated system. It is demonstrated that conditioning genotypes should be used where reasonable given the background information, to decrease the rate of nondonor LRs above 1. In all 17 cases examined, the LR for the minor donor was higher than the nondonor LRs, and in 12 of the 17 cases the 99.9 percentile of the nondonor distribution was lower when appropriate conditioning information was used. The output of the tool is a graph that shows the position of the LR for the person of interest set against the nondonor LR distribution. This may assist communication between scientists and the court.

18.
Science & Justice, 2014, 54(4): 292-299
Across forensic speech science, the likelihood ratio (LR) is increasingly becoming accepted as the logically and legally correct framework for the expression of expert conclusions. However, there remain a number of theoretical and practical shortcomings in the procedures applied for computing LRs based on speech evidence. In this paper we review how the LR is currently applied to speaker comparison evidence and outline three specific areas which deserve further investigation: namely statistical modelling, issues relating to the relevant population and the combination of LRs from correlated parameters. We then consider future directions for confronting these issues and discuss the implications for forensic comparison evidence more generally.

19.
Assigning the number of contributors (NoC) to a mixed STR profile is an important preliminary step in computing a likelihood ratio (LR). A common metric is the maximum allele count (MAC), whereby the locus exhibiting the largest number of alleles is used to set the NoC. This metric can be supplemented by considering the total allele count (TAC) and the locus allele count (LAC). TAC is the total number of alleles across all loci and is compared with probability distributions generated in silico. LAC works similarly, save that the probability distributions are generated at the locus level. Herein, we present a comparative analysis of these three metrics using datasets of 10,000 simulated ground-truth mixtures for each of two to seven contributors. These datasets were used to generate parameter distributions for each NoC. This analysis showed LAC to be the most accurate single metric in all circumstances tested. We have developmentally validated an Excel-based tool to automate the calculations for use by operational caseworkers.
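A minimal sketch of how the MAC and TAC metrics are computed and how a TAC value would be compared against in-silico distributions (the profile and the probability tables are placeholders, not the paper's simulated distributions; LAC works the same way but locus by locus):

```python
# Observed profile: allele sets per locus (illustrative).
PROFILE = {"D3": {"14", "15", "16", "17"}, "vWA": {"16", "17", "18"}, "FGA": {"20", "22"}}

mac = max(len(alleles) for alleles in PROFILE.values())   # maximum allele count
tac = sum(len(alleles) for alleles in PROFILE.values())   # total allele count
noc_from_mac = -(-mac // 2)                               # ceil(mac / 2)

# Placeholder in-silico distributions of TAC for each candidate NoC
# (these probabilities are invented, not real simulation output).
TAC_DISTRIBUTIONS = {
    2: {7: 0.10, 8: 0.25, 9: 0.30, 10: 0.20, 11: 0.10, 12: 0.05},
    3: {9: 0.05, 10: 0.15, 11: 0.25, 12: 0.25, 13: 0.20, 14: 0.10},
}
noc_from_tac = max(TAC_DISTRIBUTIONS, key=lambda n: TAC_DISTRIBUTIONS[n].get(tac, 0.0))

print("MAC:", mac, "-> NoC >=", noc_from_mac)
print("TAC:", tac, "-> most supported NoC:", noc_from_tac)
```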

20.
Forensic scientists face increasingly complex inference problems for evaluating likelihood ratios (LRs) for an appropriate pair of propositions. Up to now, scientists and statisticians have derived LR formulae using an algebraic approach. However, this approach reaches its limits when addressing cases with an increasing number of variables and dependence relationships between these variables. In this study, we suggest using a graphical approach, based on the construction of Bayesian networks (BNs). We first construct a BN that captures the problem, and then deduce the expression for calculating the LR from this model to compare it with existing LR formulae. We illustrate this idea by applying it to the evaluation of an activity level LR in the context of the two-trace transfer problem. Our approach allows us to relax assumptions made in previous LR developments, produce a new LR formula for the two-trace transfer problem and generalize this scenario to n traces.
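A minimal sketch of the general idea of deriving an LR from a BN by enumeration, using a deliberately tiny, hypothetical activity-level network (the nodes, structure, and all probabilities are invented for illustration and do not reproduce the two-trace model in the paper): the LR is the ratio of the marginal probabilities of the findings under the two propositions, obtained by summing out the unobserved transfer node.

```python
# Hypothetical chain-structured BN:  H -> T -> E
# H: proposition (Hp/Hd), T: transfer occurred, E: trace found on the suspect's clothing.
P_T_GIVEN_H = {"Hp": {True: 0.70, False: 0.30},   # invented CPTs, for illustration only
               "Hd": {True: 0.05, False: 0.95}}
P_E_GIVEN_T = {True: {True: 0.90, False: 0.10},
               False: {True: 0.02, False: 0.98}}   # background (innocent) transfer

def prob_findings(h, e_observed=True):
    """P(E = e_observed | H = h), summing out the unobserved transfer node T."""
    return sum(P_T_GIVEN_H[h][t] * P_E_GIVEN_T[t][e_observed] for t in (True, False))

lr = prob_findings("Hp") / prob_findings("Hd")
print(f"LR = {lr:.1f}")
```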
