Similar Articles
20 similar articles found.
1.
A reported likelihood ratio for the value of evidence is very often a point estimate based on various types of reference data. When presented in court, such a frequentist likelihood ratio carries greater scientific weight if it is accompanied by an error bound. This becomes particularly important when the magnitude of the likelihood ratio is modest and thus gives less support to the proposition put forward. Here, we investigate methods for error-bound estimation for the specific case of digital camera identification. The underlying probability distributions are continuous and previously proposed models for them are used, but the derived methodology is otherwise general. Both asymptotic and resampling distributions are applied in combination with different types of point estimators. The results show that resampling is preferable to assessment based on asymptotic distributions. Further, assessment of parametric estimators is superior to evaluation of kernel estimators when background data are limited.
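The resampling idea in this abstract can be sketched in a few lines. The following is a toy illustration only: it estimates an LR as a ratio of exceedance proportions in two reference samples and attaches a percentile-bootstrap interval. The paper's actual camera-identification model works with continuous densities; the function name, data model, and threshold here are invented for illustration.

```python
import random

def bootstrap_lr_interval(same_source, diff_source, threshold,
                          n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap interval for an LR estimated as the ratio of
    exceedance proportions in two reference samples (toy model, not the
    paper's camera-identification densities)."""
    rng = random.Random(seed)

    def lr(s, d):
        p1 = sum(v >= threshold for v in s) / len(s)   # P(score | same source)
        p2 = sum(v >= threshold for v in d) / len(d)   # P(score | different source)
        return p1 / p2 if p2 > 0 else float("inf")

    point = lr(same_source, diff_source)
    boots = []
    for _ in range(n_boot):
        s = [rng.choice(same_source) for _ in same_source]   # resample with replacement
        d = [rng.choice(diff_source) for _ in diff_source]
        boots.append(lr(s, d))
    boots.sort()
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return point, lo, hi
```

Reporting the interval alongside the point estimate is exactly the kind of error bound the abstract argues for.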

2.
Likelihood ratios are necessary to properly interpret mixed-stain DNA evidence. They can flexibly accommodate alternative hypotheses and can account for population substructure. The likelihood ratio should be seen as an estimate and not a fixed value, because the calculations are functions of allele frequencies estimated from a small portion of the population. Current methods do not account for uncertainty in the likelihood ratio estimates and therefore give an incomplete picture of the strength of the evidence. We propose the use of a confidence interval to report the consequent variation of likelihood ratios. The confidence interval is calculated using the standard forensic likelihood ratio formulae and a variance estimate derived using the Taylor expansion. The formula is explained, and a computer program has been made available. Numerical work shows that the evidential strength of DNA profiles decreases as the variation among populations increases.
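The Taylor-expansion (delta-method) construction of a confidence interval for an LR can be illustrated in its simplest setting, a heterozygous single-locus match with LR = 1/(2·p_a·p_b). This is a textbook sketch under standard binomial sampling assumptions, not the paper's exact formulae, and the numbers in the usage example are invented.

```python
import math

def lr_confidence_interval(p_a, p_b, n_alleles, z=1.96):
    """Approximate CI for LR = 1 / (2 * p_a * p_b) at a heterozygous locus,
    propagating allele-frequency sampling variance with a first-order
    Taylor (delta-method) expansion on the log scale. Illustrative only."""
    lr = 1.0 / (2.0 * p_a * p_b)
    # Var(log p) ~= Var(p) / p^2 = (1 - p) / (p * n), with Var(p) ~= p(1-p)/n
    var_log = (1 - p_a) / (p_a * n_alleles) + (1 - p_b) / (p_b * n_alleles)
    half = z * math.sqrt(var_log)
    return lr, lr * math.exp(-half), lr * math.exp(half)
```

For example, `lr_confidence_interval(0.1, 0.2, 200)` returns a point estimate of 25 with an interval that widens as the reference sample (here 200 alleles) shrinks.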

3.
In this paper, we investigate whether DNA databases collected by different convenience sampling methods differ significantly in statistical terms. Of interest is testing the null hypothesis that the population probability or frequency distributions of DNA profiles are the same under different sampling methods. Statistical analyses are conducted on single-locus VNTR databases collected from different sources by the Hong Kong Government Laboratory. The bootstrap, Monte Carlo simulation and significance tests, including Pearson's chi-squared, likelihood ratio and Kolmogorov-Smirnov two-sample statistics, are employed to test the hypothesis. Encouragingly, none of the tests yielded probability values smaller than 5%. In other words, there is not enough evidence to reject the null hypothesis at the 5% level, which provides more confidence in using VNTR reference databases commonly collected by convenience sampling.
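The two-sample statistics named above are simple to compute directly. Below is a minimal plain-Python sketch of the Kolmogorov-Smirnov and Pearson chi-squared statistics for comparing two databases; in practice the p-values would still come from the appropriate reference distributions or from the bootstrap/Monte Carlo procedures the paper uses.

```python
def ks_two_sample_stat(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    distance between the two empirical CDFs."""
    d = 0.0
    for t in sorted(set(x) | set(y)):
        fx = sum(v <= t for v in x) / len(x)
        fy = sum(v <= t for v in y) / len(y)
        d = max(d, abs(fx - fy))
    return d

def pearson_chi_squared_stat(counts_a, counts_b):
    """Pearson chi-squared statistic for homogeneity of two count vectors
    (e.g. allele counts from two convenience-sampled databases).
    Assumes every category is observed at least once overall."""
    n_a, n_b = sum(counts_a), sum(counts_b)
    grand = n_a + n_b
    stat = 0.0
    for a, b in zip(counts_a, counts_b):
        col = a + b
        ea = n_a * col / grand    # expected count under homogeneity
        eb = n_b * col / grand
        stat += (a - ea) ** 2 / ea + (b - eb) ** 2 / eb
    return stat
```

Identical samples give a statistic of zero in both cases; larger values indicate divergent distributions.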

4.
Score-based procedures for the calculation of forensic likelihood ratios are popular across different branches of forensic science. They have two stages: first, a function or model that takes measured features from known-source and questioned-source pairs as input and outputs scores; then a subsequent model that converts scores to likelihood ratios. We demonstrate that scores which are purely measures of similarity are not appropriate for calculating forensically interpretable likelihood ratios. In addition to taking account of the similarity between the questioned-origin specimen and the known-origin sample, scores must also take account of the typicality of the questioned-origin specimen with respect to a sample of the relevant population specified by the defence hypothesis. We use Monte Carlo simulations to compare the output of three score-based procedures with reference likelihood ratio values calculated directly from the fully specified Monte Carlo distributions. The three types of scores compared are: 1. non-anchored similarity-only scores; 2. non-anchored similarity-and-typicality scores; and 3. known-source anchored same-origin scores and questioned-source anchored different-origin scores. We also compare with the performance of a procedure using a dichotomous “match”/“non-match” similarity score, and compare the performance of types 1 and 2 on real data.

5.
6.
This article introduces the use of regression models based on the Poisson distribution as a tool for resolving common problems in analyzing aggregate crime rates. When the population size of an aggregate unit is small relative to the offense rate, crime rates must be computed from a small number of offenses. Such data are ill-suited to least-squares analysis. Poisson-based regression models of counts of offenses are preferable because they are built on assumptions about error distributions that are consistent with the nature of event counts. A simple elaboration transforms the Poisson model of offense counts to a model of per capita offense rates. To demonstrate the use and advantages of this method, this article presents analyses of juvenile arrest rates for robbery in 264 nonmetropolitan counties in four states. The negative binomial variant of Poisson regression effectively resolved difficulties that arise in ordinary least-squares analyses.
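The "simple elaboration" that turns a Poisson model of counts into a model of per capita rates is an exposure offset: log E[count] = b0 + b1·x + log(population). A compact Newton-Raphson sketch of that model follows; it is illustrative only, a real analysis would use a GLM package, and the negative binomial variant adds an overdispersion parameter not shown here.

```python
import math

def poisson_rate_fit(x, counts, pop, iters=50):
    """Fit log E[count_i] = b0 + b1*x_i + log(pop_i), i.e. a per-capita
    rate model with a population offset, by Newton-Raphson on the Poisson
    log-likelihood. Minimal sketch; no convergence diagnostics."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi, ni in zip(x, counts, pop):
            mu = math.exp(b0 + b1 * xi + math.log(ni))  # expected count
            g0 += yi - mu                # gradient wrt b0
            g1 += (yi - mu) * xi         # gradient wrt b1
            h00 += mu                    # Fisher information terms
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step (2x2 solve)
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

With noiseless counts generated from the model itself, the fit recovers the generating coefficients, which is a quick sanity check on the offset formulation.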

7.
Application of subpopulation theory to evaluation of DNA evidence
The strength of any evidence can be assessed using a likelihood ratio (from a Bayesian point of view). This is the ratio of the probabilities that the evidence would have been obtained given that the suspect is guilty and innocent, respectively. This, in turn, depends upon the probability that a match will be produced if the suspect is innocent. An essential population genetics parameter is the 'coancestry coefficient', θ (also written F(ST)), which is the correlation between two genes sampled from distinct individuals within a subpopulation. In this paper, θ coefficients for the southern Polish population were calculated for three loci of forensic interest: TH01, TPOX and CSF1PO. Three small southern Polish subpopulations of different ethnic origin were analysed. The results suggest that values of θ appropriate to forensic applications are quite small in the southern Polish population (they vary in the range of 0.002 to 0.013), and that the value of θ = 0.03 suggested by the National Research Council is overly conservative in the defendant's favour.
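How θ enters the match probability can be shown with the widely used Balding-Nichols correction for a heterozygous genotype (the form given in NRC II Recommendation 4.2); the likelihood ratio is then the reciprocal of this probability. A sketch, assuming that standard formula:

```python
def match_probability_het(p_a, p_b, theta):
    """Subpopulation-corrected probability that an innocent person shares
    a heterozygous genotype (alleles with frequencies p_a, p_b), given the
    suspect has it. theta = 0 reduces to the product rule 2*p_a*p_b.
    LR = 1 / match probability."""
    num = 2 * (theta + (1 - theta) * p_a) * (theta + (1 - theta) * p_b)
    return num / ((1 + theta) * (1 + 2 * theta))
```

For small allele frequencies, a larger θ gives a larger match probability and hence a smaller LR, which is why the paper's estimates of θ around 0.002 to 0.013 make the NRC value of 0.03 conservative.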

8.
Objective: To examine the effects of sample size and sampling method on population genetic surveys of STR loci. Methods: Blood-card samples were directly amplified with the DNATyper™ 19 kit and detected by electrophoresis on an ABI 3730 genetic analyzer; alleles were typed with GeneMapper ID v3.2, and population genetic parameters were calculated from the standard formulae. Genetic polymorphism levels were computed at ten sample sizes (50, 100, 150, 200, 250, 300, 350, 400, 450 and 500) and compared against the full population. Polymorphism levels were also computed for samples drawn by four common sampling methods (cluster, simple random, systematic and stratified sampling), and the sampling errors were assessed. Results: Across 16,058 random samples from the Hebei region, 317 alleles were detected at the 18 STR loci, with allele frequencies ranging from 3.114e-5 to 0.515. At the population level, the 18 loci gave PM = 6.04e-14, TDP = 0.99999999999994, and CEP = 0.999999987514828. The median PM and CEP values were essentially stable once the sample size exceeded 200. The sampling errors of the four methods ranked: cluster ≥ simple random ≥ systematic ≥ stratified. Conclusion: The 18 STR loci of the DNATyper™ 19 kit show a high level of genetic polymorphism in the Hebei region. For small-scale population frequency surveys, 200 randomly selected samples can represent the local level of polymorphism; for larger regions or areas with complex genetic backgrounds, stratified sampling can reduce sampling error.

9.
Science & Justice, 2021, 61(6): 743-754
Facial comparison is an important yet understudied discipline in forensics. The recommended method for facial comparison in a forensic setting involves morphological analysis (MA) with the use of a facial feature list. The performance of this approach has not been tested across various closed-circuit television (CCTV) conditions. This is of particular concern as video and image data available to law enforcement are often varied and of subpar quality. The present study aimed to test MA across two types of CCTV data, representing ideal and less-than-ideal settings, and to assess which particular shortcomings arose from the less-than-ideal settings. The study was conducted on a subset of the Wits Face Database arranged in a total of 225 face pools. Each face pool consisted of a target image obtained from either a high-definition digital CCTV camera or a low-definition analogue CCTV camera in monochrome, contrasted to 10 possible matches. The face pools were analysed and scored using MA, and confusion matrices were used to analyse the outcomes. A notably high chance-corrected accuracy (CCA) (97.3%) and reliability (0.969) were identified across the digital CCTV sample, while in the analogue CCTV sample MA underperformed in both accuracy (CCA: 33.1%) and reliability (0.529). The majority of the scoring errors in the analogue sample were false negatives (75.2%), while across both CCTV conditions false positives were low (digital: 0.3%; analogue: 1.2%). Even though hit rates appeared deceptively high in the analogue sample, the various measures of performance used, and particularly the chance-corrected accuracy, highlighted its shortfalls. Overall, CCTV recording quality appears closely associated with MA performance, despite the favourable error rates when using the Facial Identification Scientific Working Group feature list.

10.
This study examines spheno-occipital synchondrosis fusion in the modern American population and presents age ranges for forensic use. The sample includes 162 modern individuals aged 5-25 years. The basilar synchondrosis was scored as open, closing, or closed via direct inspection of the ectocranial site of the suture. Transition analysis was used to determine the average ages at which an individual transitions from unfused to fusing and from fusing to fused. The maximum likelihood estimates from the transition analysis indicate that females are most likely to transition from open to closing at 11.4 years and males at 16.5 years. Females transition from closing to closed at 13.7 years and males at 17.4 years. The probability distributions associated with these maximum likelihood estimates were used to derive age ranges for age estimation purposes. These results reflect sexual dimorphism in basilar synchondrosis fusion and agree approximately with the average age at pubertal onset.

11.
The utilization of 3D computerized systems has allowed more effective procedures for forensic facial reconstruction. Three 3D computerized facial reconstructions were produced using skull models from live adult Korean subjects to assess facial morphology prediction accuracy. The 3D skeletal and facial data were recorded from the subjects in an upright position using a cone-beam CT scanner. Shell-to-shell deviation maps were created using 3D surface comparison software, and the deviation errors between the reconstructed and target faces were measured. Results showed that 54%, 65%, and 77% of the three facial reconstruction surfaces had <2.5 mm of error when compared to the relevant target face. The average error for each reconstruction was -0.46 mm (SD = 2.81) for A, -0.31 mm (SD = 2.40) for B, and -0.49 mm (SD = 2.16) for C. The facial features of the reconstructions demonstrated good levels of accuracy compared to the target faces.

12.
Tests that infer the ancestral origin of a DNA sample have considerable potential in the development of forensic tools that can help to guide crime investigation. We have developed a single-tube 34-plex SNP assay for the assignment of ancestral origin by choosing ancestry-informative markers (AIMs) exhibiting highly contrasting allele frequency distributions between the three major population-groups. To predict ancestral origin from the profiles obtained, a classification algorithm was developed based on maximum likelihood. Sampling of two populations each from African, European and East Asian groups provided training sets for the algorithm and this was tested using the CEPH Human Genome Diversity Panel. We detected negligible theoretical and practical error for assignments to one of the three groups analyzed with consistently high classification probabilities, even when using reduced subsets of SNPs. This study shows that by choosing SNPs exhibiting marked allele frequency differences between population-groups a practical forensic test for assigning the most likely ancestry can be achieved from a single multiplexed assay.
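The maximum-likelihood assignment step can be sketched as follows: under Hardy-Weinberg equilibrium within each population, a genotype's likelihood is a product over SNPs of per-locus genotype probabilities, and the sample is assigned to the population with the highest log-likelihood. The population labels and frequencies below are invented for illustration; the published assay uses 34 specially chosen AIMs and empirically trained frequencies.

```python
import math

def classify_ancestry(genotype, freqs):
    """Assign the most likely population for a SNP genotype.
    genotype: list of 0/1/2 copies of the reference allele per SNP.
    freqs: {population: [reference-allele frequency per SNP]}.
    Assumes Hardy-Weinberg proportions within each population."""
    lls = {}
    for pop, ps in freqs.items():
        ll = 0.0
        for g, p in zip(genotype, ps):
            # P(genotype | p) for 0, 1, or 2 reference alleles
            geno_prob = [(1 - p) ** 2, 2 * p * (1 - p), p * p][g]
            ll += math.log(geno_prob)
        lls[pop] = ll
    best = max(lls, key=lls.get)
    return best, lls
```

The log-likelihood gap between the best and second-best population also gives a natural classification-probability measure of the kind the abstract reports.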

13.
The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. Typically, there is a paucity of research on the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key-points or manual annotations. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where it is known that the mark was left by the donor who gave the model and, conversely, in cases where it is established that the mark originates from a different source). To assess the system's performance, a first dataset containing 1229 donors compiled during the FearID research project was used. Based on these data, for mark-to-print comparisons the system performed with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of a hitlist. For print-to-print transactions, results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces.

14.
Subclass characteristics can be found in the breech face marks left on spent cartridge cases. Although they are assumed to be rare and their reported number is small, they can potentially lead to false associations. Subclass characteristics have been studied empirically, allowing examiners to recognize them and to understand the conditions in which they are produced. Until now, however, their influence on the identification process has not been studied from a probabilistic point of view. In this study, we aim to measure the effect of these features on the strength of association derived from examinations involving subclass characteristics. The study takes advantage of a 3D automatic comparison system allowing the calculation of likelihood ratios (LRs). The similarities between cartridge case specimens fired by thirteen S&W .40S&W Sigma pistols are quantified, and their respective LRs are computed. The results show that the influence of subclass characteristics on the LRs is limited, even when these features are prevalent among the potential sources considered in a case. We show that the proportion of firearms sharing subclass characteristics would have to exceed 40% of the pool of potential firearms for the effect to be significant.

15.
Forensic hair examiners using traditional microscopic comparison techniques cannot state with certainty, except in extremely rare cases, that a found hair originated from a particular individual. Nor can they provide a statistical likelihood that a hair came from a certain individual and not another. No data are available on the frequency of a specific microscopic hair characteristic (i.e., microtype) or trait in a particular population. Microtype is a term we use to describe certain internal characteristics and features expressed when observing hairs with unpolarized transmitted light. Courts seem to be sympathetic to lawyers' concerns that there are no accepted probability standards for human hair identification. Under Daubert, microscopic hair analysis testimony (or other scientific testimony) is allowed if the technique can be shown to have testability, peer review, general acceptance, and a known error rate. As with other forensic disciplines, laboratory error rate determination for a specific hair comparison case is not possible. Polymerase chain reaction (PCR)-based typing of hair roots offers hair examiners an opportunity to begin cataloging data on microscopic hair association error rates. This is certainly a realistic manner in which to ascertain which hair microtypes and case circumstances repeatedly cause difficulty in association. Two cases are presented in which PCR typing revealed an incorrect inclusion in one and an incorrect exclusion in the other. This paper does not suggest that such limited observations define a rate of occurrence. These cases illustrate evidentiary conditions or case circumstances which may potentially contribute to microscopic hair association errors. The issues discussed in this review address the questions an expert witness may expect in a Daubert hair analysis admissibility hearing.

16.
When the strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small and sampling variability is therefore high; the concern is essentially one about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments.
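The ELUB idea, never report an LR more extreme than the validation data can support, can be caricatured very simply: after N different-source validation comparisons with no misleading outcomes, an upper bound on the order of N is the most that can be claimed, and symmetrically for the lower bound. The published derivation is considerably more refined; the bounds below are a crude stand-in to illustrate the shrinkage-toward-one behaviour only.

```python
def elub_bounds(n_same_source, n_diff_source):
    """Crude stand-in for empirical lower and upper bounds: with only N
    validation comparisons, cap the reportable LR at roughly N. Not the
    published ELUB derivation, which controls misleading-evidence rates."""
    return 1.0 / (n_same_source + 1), float(n_diff_source + 1)

def shrink_lr(lr, n_same_source, n_diff_source):
    """Clamp an LR into the empirically supportable range (shrinks extreme
    values toward the neutral value of one; moderate values pass through)."""
    lo, hi = elub_bounds(n_same_source, n_diff_source)
    return min(max(lr, lo), hi)
```

A model output of a million is thus capped at the scale of the validation set, while an unremarkable LR of 5 is reported unchanged.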

17.
A random effects model with two levels of hierarchical nesting has been applied to the calculation of a likelihood ratio for comparing two sets of replicated multivariate continuous observations where it is unknown whether the sets of measurements share a common origin. Replicate measurements from a population of such measurements allow the calculation of both within-group and between-group variances/covariances. The within-group distribution is modelled as Normal, and the between-group distribution is modelled using a kernel density estimation procedure. A graphical method of estimating the dependency structure among the variables is used to reduce this highly multivariate problem to several problems of lower dimension. The approach was tested using a database comprising measurements of eight major elements from each of four fragments from each of 200 glass objects, and it performed well compared with previous approaches, achieving a 15.2% false-positive rate and a 5.5% false-negative rate. The modelling was then applied to two casework examples in which glass found at the scene of the criminal activity was compared with glass found in association with a suspect.

18.
The overlay of a skull and a face image for identification purposes requires similar subject-to-camera distances (SCD) to be used at both photographic sessions so that differences in perspective do not compromise the anatomical comparisons. As the facial photograph is the reference standard, it is crucial to determine its SCD first and apply this value to photography of the skull. So far, such a method for estimating the SCD has been elusive (some say impossible), compromising the technical validity of the superimposition procedure. This paper tests the feasibility of using the palpebral fissure length and a well-established photographic algorithm to accurately estimate the SCD from the facial photograph. Recordings at known SCD across a 1-10 m range (repeated under two test conditions) demonstrate that the newly formulated method works: a mean SCD estimation error of 7% that translates into <1% perspective distortion error between estimated and actual conditions.
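The core geometric relation behind estimating the SCD from a facial measurement such as the palpebral fissure length (PFL) is simple lens magnification: for a distant subject, distance is approximately focal length times (real size / image size). The function below sketches only that relation, not the paper's full photographic algorithm, and the numbers in the test are invented.

```python
def estimate_scd(focal_length_mm, real_size_mm, image_size_mm):
    """Thin-lens magnification sketch: subject-to-camera distance from a
    facial measurement of known real size (e.g. the palpebral fissure
    length) and its size on the sensor. Assumes distance >> focal length;
    the published method adds lens- and condition-specific corrections."""
    return focal_length_mm * real_size_mm / image_size_mm
```

For example, a 30 mm feature imaged at 0.5 mm through a 50 mm lens implies an SCD of about 3 m, which can then be reproduced when photographing the skull.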

19.
As a result of a financial and demographic crisis, the Israeli kibbutz is experiencing a period of transformation. Many kibbutzim (the Hebrew plural of “kibbutz”) have abandoned the classic egalitarian way of life and adopted a new paradigm in which each member receives a different income. This transformation makes the kibbutz a unique test case for the preferences of people who face a choice between equality, capitalism, or an in-between combination. This study uses data on a small sample of kibbutzim that have recently adopted a safety-net model to derive some implications of this fundamental transformation for the income distribution within and between kibbutzim. The results show that there is no longer equality between kibbutz members. However, the new kibbutz manages to minimize poverty. The new structure also encourages female kibbutz members to study and work, promoting greater equality in income and jobs.

20.
Determining the number of contributors to a forensic DNA mixture using the maximum allele count is common practice in many forensic laboratories. In this paper, we compare this method to a maximum likelihood estimator, previously proposed by Egeland et al., which we extend to multiallelic loci and population subdivision. We compared the two methods' efficiency in identifying mixtures of two to five individuals under uncertainty about the population allele frequencies and with partial profiles. The proportion of correctly resolved mixtures was >90% for both estimators for two- and three-person mixtures, while likelihood maximization yielded success rates 2- to 15-fold higher for four- and five-person mixtures. Comparable results were obtained in the cases of uncertain allele frequencies and partial profiles. Our results support the use of the maximum likelihood estimator to report the number of contributors when dealing with complex DNA mixtures.
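For a single locus, the likelihood that n unrelated contributors show exactly the observed allele set can be written by inclusion-exclusion over subsets of the observed alleles, and the ML estimate of n maximizes this over candidate values. This is a single-locus sketch of the core of the Egeland et al. estimator; the paper combines loci and adds a subdivision correction not shown here.

```python
from itertools import combinations

def locus_likelihood(freqs_observed, n):
    """P(the 2n alleles from n contributors show exactly this allele set),
    by inclusion-exclusion: sum over subsets B of (-1)^(|A|-|B|) * S_B^(2n),
    where S_B is the total frequency of the alleles in B."""
    k = len(freqs_observed)
    total = 0.0
    for r in range(k + 1):
        for sub in combinations(freqs_observed, r):
            total += (-1) ** (k - r) * sum(sub) ** (2 * n)
    return total

def ml_contributors(freqs_observed, n_max=5):
    """Maximum likelihood number of contributors at a single locus."""
    return max(range(1, n_max + 1),
               key=lambda n: locus_likelihood(freqs_observed, n))
```

With two observed alleles and one contributor this reduces to the familiar heterozygote probability 2*p1*p2, and a four-allele locus with rare alleles is most likely a two-person mixture.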
