Similar Articles
1.
Science & Justice, 2014, 54(4): 258–261
The Bayesian paradigm is the preferred approach to evidence interpretation. It requires evaluating the probability of the evidence under at least two propositions. The value of the findings (i.e., our LR) depends on these propositions and on the case information, so it is crucial to identify which propositions are useful for the case at hand. Previously, a number of principles have been advanced, and largely accepted, for the evaluation of evidence. In the evaluation of traces involving DNA mixtures, more than two propositions may be possible. We apply these principles to some exemplar situations. We also show that in some cases, when there are no clear propositions or no defendant, a forensic scientist may be able to generate explanations to account for the observations. In that case, the scientist plays the role of investigator rather than evaluator. We believe it is helpful for the scientist to distinguish these two roles.
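For readers outside the field, the likelihood ratio (LR) referred to here is the standard one; a minimal statement in the usual notation, given for orientation rather than quoted from the paper:

```latex
LR = \frac{\Pr(E \mid H_1, I)}{\Pr(E \mid H_2, I)}
```

Here E denotes the findings, H_1 and H_2 the two competing propositions, and I the case information. Which propositions are chosen, and what I contains, is exactly what the paper argues must be made explicit.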

2.
Forensic scientists working in 12 state or private laboratories participated in collaborative tests intended to improve the reliability of the presentation of DNA data at trial. These tests were motivated by growing criticism of the power of DNA evidence. The experts' conclusions in the tests are presented and discussed in the context of the Bayesian approach to interpretation. The use of a Bayesian approach and subjective probabilities in trace evaluation permits, in an easy and intuitive manner, the integration into the decision procedure of any revision of the measure of uncertainty in the light of new information. Such integration is especially useful with forensic evidence. Furthermore, we believe that this probabilistic model is a useful tool (a) to assist scientists in assessing the value of scientific evidence, (b) to help jurists interpret judicial facts, and (c) to clarify the respective roles of scientists and of members of the court. Respondents to the survey were nevertheless reluctant to apply this methodology in the assessment of DNA evidence.
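The "revision of the measure of uncertainty in the light of new information" described here is the odds form of Bayes' theorem; stated in standard notation for orientation, not quoted from the paper:

```latex
\underbrace{\frac{\Pr(H_1 \mid E)}{\Pr(H_2 \mid E)}}_{\text{posterior odds}}
=
\underbrace{\frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{\Pr(H_1)}{\Pr(H_2)}}_{\text{prior odds}}
```

The scientist reports the likelihood ratio; the prior and posterior odds belong to the court, which is the division of roles the abstract describes.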

3.
DNA analysis has become an essential intelligence tool in the criminal justice system for the identification of possible offenders. However, about half of the processed DNA samples contain too little DNA for analysis. This study looks at DNA success rates within 28 categories of trace exhibits and relates the DNA concentration to the characteristics of the resulting DNA profile. Data from 2260 analyzed crime samples show that cigarettes, bloodstains, and headwear have relatively high success rates; cartridge cases, crowbars, and tie-wraps are at the other end of the spectrum. These objective data can assist forensic investigators in their selection process. The DNA success probability shows a positive relation with the DNA concentration. This finding enables the laboratory to set an evidence-based threshold value in the DNA analysis process. For instance, 958 DNA extracts had a concentration of 6 pg/μL or less; only 46 of these low-level extracts provided meaningful DNA profiling data.
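The arithmetic behind the threshold argument can be made explicit. The sketch below uses only the two figures quoted in the abstract; the triage function and its default threshold are hypothetical illustrations, not the laboratory's actual procedure.

```python
# Figures reported in the abstract; the triage rule below is a
# hypothetical sketch, not the laboratory's actual procedure.

LOW_LEVEL_EXTRACTS = 958    # extracts at 6 pg/uL or less
LOW_LEVEL_SUCCESSES = 46    # of those, extracts yielding meaningful profiles

success_rate = LOW_LEVEL_SUCCESSES / LOW_LEVEL_EXTRACTS
print(f"Success rate at or below 6 pg/uL: {success_rate:.1%}")  # ~4.8%

def worth_analysing(concentration_pg_per_ul: float,
                    threshold_pg_per_ul: float = 6.0) -> bool:
    """Hypothetical evidence-based triage rule: spend analysis effort
    only on extracts above the concentration threshold."""
    return concentration_pg_per_ul > threshold_pg_per_ul

print(worth_analysing(4.2))   # False: below threshold, low expected yield
print(worth_analysing(25.0))  # True
```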

4.
In this paper we analyse Stegdetect, one of the well-known image steganalysis tools, to study its false positive rate. To do so, we process more than 40,000 images randomly downloaded from the Internet using Google Images, together with 25,000 images from the ASIRRA (Animal Species Image Recognition for Restricting Access) public corpus. The aim of this study is to help digital forensic analysts who must examine large numbers of image files during an investigation to better understand the capabilities and limitations of steganalysis tools like Stegdetect. The results show that the false positive rate generated by Stegdetect depends strongly on the chosen sensitivity value and is generally quite high. This should help forensic experts interpret their results more soundly, taking the false positive rate into consideration. Additionally, we provide a detailed statistical analysis of the results to study the differences in detection between selected, close, and different groups of images. The method can be applied to any steganalysis tool, giving the analyst a better understanding of the detection results, especially when no prior information about the tool's false positive rate is available.
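A sketch of the underlying measurement: run a detector over a corpus known to contain no hidden payloads, so that every positive is a false positive, and sweep the sensitivity setting. The detect function below is a simulated stand-in, not Stegdetect's actual interface (Stegdetect is a command-line tool); it exists only to make the sketch runnable.

```python
import random

def detect(image_id: int, sensitivity: float) -> bool:
    """Simulated stand-in for a steganalysis call. In practice you would
    invoke the stegdetect binary (e.g., via subprocess) and parse its
    verdict; here the false-alarm propensity simply grows with
    sensitivity, mimicking the behaviour the study reports."""
    return random.random() < 0.02 * sensitivity

def false_positive_rate(clean_images, sensitivity: float) -> float:
    """All inputs are known-clean, so every positive is a false positive."""
    positives = sum(detect(img, sensitivity) for img in clean_images)
    return positives / len(clean_images)

corpus = range(25_000)  # stand-in for the ASIRRA clean corpus
for s in (0.5, 1.0, 2.0, 5.0):  # sweep the sensitivity knob
    print(f"sensitivity={s}: FPR={false_positive_rate(corpus, s):.2%}")
```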

5.
6.
This study examines the conditions under which an intervening lineup affects identification accuracy on a subsequent lineup. One hundred and sixty adults observed a photograph of one target individual for 60 s. One week later, they viewed an intervening target-absent lineup and were asked to identify the target individual. Two days later, participants were shown one of three 6-person lineups that included a different photograph of the target face (present or absent), a foil face from the intervening lineup (present or absent), plus additional foil faces. The hit rate was higher when the foil face from the intervening lineup was absent from the test lineup, and the false alarm rate was greater when the target face was absent from the test lineup. The results suggest that simply being exposed to an innocent suspect in an intervening lineup, whether or not that innocent suspect is identified by the witness, increases the probability of misidentifying the innocent suspect and decreases the probability of correctly identifying the true perpetrator in a subsequent test lineup. The implications of these findings both for police lineup procedures and for the interpretation of lineup results in the courtroom are discussed.

7.
The growing prominence of forensic DNA evidence, and of the probability models it relies on, has exposed the limitations of traditional forensic science and has raised increasing doubts about decision making in the field, centred on how conclusions are interpreted and used in practice. Analysis shows that scientific evidence is by nature probabilistic rather than absolute or certain, while the trier of fact must reach definite decisions about the facts on the basis of this probabilistic evidence. Decision making in forensic science should therefore consist of experts properly reporting the probability of their findings under a set of specific, inductively derived hypotheses, with the trier of fact bearing the task of deciding upon those probabilities.

8.
The objectivity of forensic science decision making has received increased attention and scrutiny. However, only a few published studies experimentally address the potential for contextual bias. Given the esteem in which DNA evidence is held, it is important to assess the impact of subjectivity and bias on DNA mixture interpretation. The study reported here presents empirical data suggesting that DNA mixture interpretation is subjective. When 17 North American expert DNA examiners were asked for their interpretation of data from a criminal case adjudicated in their jurisdiction, they produced inconsistent interpretations. Furthermore, the majority of 'context-free' experts disagreed with the laboratory's pre-trial conclusions, suggesting that the extraneous context of the criminal case may have influenced the interpretation of the DNA evidence, thereby demonstrating a biasing effect of contextual information in DNA mixture interpretation.

9.
A review of the scientific papers published on inorganic gunshot residue (GSR) analysis permits study of how particle analysis has demonstrated its capability in the detection and identification of gunshot residue. The scanning electron microscope may be the most powerful tool available to forensic scientists for determining proximity to a discharging firearm and/or contact with a surface exposed to GSR. Particle analysis can identify individual gunshot residue particles through both morphological and elemental characteristics. When particles are detected on the collected sample, the analytical results can be interpreted following the rules of a formal general interpretative system to determine whether they come from the explosion of a primer or from other possible sources. The particles on the sample are compared with an abstract idea of a "unique" GSR particle produced solely by the explosion of a primer. "Uniqueness" is not the only problem related to GSR detection and identification for a forensic scientist; with "non-unique" particles, the interpretation of results is extremely important. The evidential strength of "non-unique" particles can increase within a more fruitful interpretative framework based on Bayes' rule. For the assessment of the value of GSR in linking a suspect to a crime, it is important to compare two hypotheses: the probability of the evidence if the suspect fired a weapon in a specific situation, and the probability of the evidence if the suspect was not involved in the shooting. This case-specific, or case-by-case, approach is closer to what the court is interested in. The authors consider that a case-by-case approach should be followed whenever possible. Research into models and data, such as those developed for other trace evidence materials (fibres, glass, etc.) using a Bayesian approach, is suggested for the interpretation of GSR.

10.
A "realistic" prior probability is always based on case experience (Akten-a-priori). In serological opinions pertaining to parentage, the realistic prior probability is only one piece of information in the whole body of evidence before the judge and does not have any special significance per se. There is no such thing as a "neutral" prior probability. It either implies "ignorance," in which case it cannot be "information," or it must be taken in connection with the utility principle, in which case it is not a "probability." The utility principle is defined in law and cannot be expressed in figures. The utility principle takes effect only when the judge reaches a decision (on the basis of all the evidence before him). It determines the relative importance of the participant's objects of legal protection which are at issue in the case. The expert is bound to apply a neutral utility component, i.e., in a two-hypothesis case (the normal situation) the significance of both the null and the counter hypothesis must carry the same weight. A null and/or a counter hypothesis can combine several single hypotheses; the mean value of their frequencies is taken. As a rule, one should avoid using a "prior case probability" ("Akten-a-priori") when calculating a W value. An "expectation of error" should be as realistic as possible and hence be obtained using a "prior case probability."  相似文献   

11.
This paper sets out to construct a logical framework for the correct application of DNA evidence, in order to prevent DNA evidence from being misinterpreted and misused, to delineate the respective powers and responsibilities of forensic scientists and triers of fact in applying DNA evidence, and to safeguard the accuracy of fact-finding and the fairness of adjudication. It argues for the necessity and reasonableness of using probability and statistics as the tool for the logical transition from "match" to "source", offers a preliminary design for doing so, and classifies and evaluates the main current methods of interpreting DNA evidence.

12.
王国龙, 《法学论坛》, 2012(3): 126–134
The debate between Chen Jinzhao and Fan Jinxue over legal interpretation has spread to many related questions of legal theory. Holding their respective positions of "against interpretation" and "how to interpret", the two sides have escalated from disputing the truth or falsity of the thesis that "the rule of law is against interpretation" to disputes over the disciplinary character of legal hermeneutics, and over competing conceptions of adjudication, of law, and of the rule of law. Whether the legal ideology in question advocates legalism or judicial activism, an age of the rule of law in fact needs both of these voices.

13.
Bayesian networks (BNs) are a kind of graphical model that formally combines elements of graph and probability theory. BNs are a mathematically and statistically rigorous technique allowing their user to define a pictorial representation of assumed dependencies and influences among a set of variables deemed to be relevant for a particular inferential problem. The formalism allows one to process newly acquired evidence according to the rules of probability calculus. Applications of BNs have been reported in various forensic disciplines. However, there seems to be some reluctance to consider BNs as a more general framework for representing and evaluating sources of uncertainties associated with scientific evidence. Notably, BNs are widely thought of as an essentially numerical method, requiring "exact" numbers with a high "accuracy". The present paper aims to draw the reader's attention to the point that the availability of hard numerical data is not a necessary requirement for using BNs in forensic science. An abstraction of quantitative BNs, known as qualitative probabilistic networks (QPNs), and sensitivity analyses are presented and their potential applications discussed. As a main difference to their quantitative counterpart, QPNs contain qualitative probabilistic relationships instead of numerical relations. Sensitivity analyses consist of varying the probabilities assigned to one or more variables and evaluating the effect on one or more other variables of interest. Both QPNs and sensitivity analyses appear to be useful concepts that permit one to work in contexts with acute lack of numerical data and where reasoning consistent with the laws of probability should nevertheless be performed.
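As a concrete illustration of the sensitivity analyses described above, here is a minimal sketch in plain Python: a two-node network H → E with exact inference, in which the prior on H is varied and the effect on the posterior is observed. Real forensic BNs have many more nodes, and all numbers here are arbitrary placeholders chosen only to make the output readable.

```python
def posterior(prior_h: float, p_e_given_h: float,
              p_e_given_not_h: float) -> float:
    """Exact inference in a minimal two-node network H -> E:
    returns Pr(H | E = true) by Bayes' theorem."""
    joint_h = prior_h * p_e_given_h
    joint_not_h = (1 - prior_h) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Sensitivity analysis in the sense described above: vary the probability
# assigned to one variable (the prior on H) and evaluate the effect on the
# variable of interest (the posterior).
P_E_GIVEN_H, P_E_GIVEN_NOT_H = 0.95, 0.10  # placeholder values
for prior in (0.01, 0.1, 0.3, 0.5):
    post = posterior(prior, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
    print(f"prior={prior:.2f} -> posterior={post:.3f}")
```

If the conclusion of interest is stable across the swept range, hard numbers for the prior are not needed, which is the paper's point.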

14.
A computation of false positive and false negative rates concerning the probability that directly communicated written or oral threats predict subsequent violent behavior yields a striking difference between "public" and "private" targets. Among private targets, communicated threats appear to increase risk, but they are so common that they have little predictive value. Public targets, on the other hand, are unlikely to receive a direct threat from those who approach to attack. The author suggests that the most parsimonious explanation for this difference is the type, or mode, of violence involved. Private targets appear most likely to be victimized by affective violence, wherein the emotionally reactive subject will immediately shove, push, punch, slap, choke, fondle, or pull the hair of the victim without the use of a weapon, usually in response to a perceived rejection or humiliation. Public targets are most likely to be victimized by predatory violence, which is planned, purposeful, cognitively motivated, opportunistic rather than impulsive, and often involves a firearm. Implications for risk assessment are discussed.
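The base-rate logic behind the private-target finding can be written out explicitly. This is simply Bayes' theorem, stated here for orientation rather than taken from the paper:

```latex
\Pr(\text{violence} \mid \text{threat})
  = \frac{\Pr(\text{threat} \mid \text{violence}) \, \Pr(\text{violence})}
         {\Pr(\text{threat})}
```

When threats are common in the relevant population (a large denominator), the posterior probability of violence given a threat stays small even if most subjects who turn violent did threaten first, which is why a common risk marker can still have little predictive value.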

15.
The paper follows on from earlier work [Taroni F, Aitken CGG. Probabilistic reasoning in the law, Part 1: assessment of probabilities and explanation of the value of DNA evidence. Science & Justice 1998; 38: 165-177]. Different explanations of the value of DNA evidence were presented to students from two schools of forensic science and to members of fifteen laboratories around the world. The responses were divided into two groups: those from a school or laboratory identified as Bayesian and those from a school or laboratory identified as non-Bayesian. The paper analyses these responses using a likelihood approach. This approach is more consistent with a Bayesian analysis than the frequentist approach used in the earlier work.

16.
In police interrogation, an explicit false claim to have evidence raises important legal and constitutional questions. Therefore, some interrogation manuals recommend implicit false-evidence ploys (FEP) that ask suspects about potential evidence without making a direct claim to possess the evidence. Similar to the hypotheses in a recent study of implicit FEP and confession rates, we hypothesized that individuals would perceive implicit FEP as less coercive and deceptive when compared to explicit FEP that involve direct claims of false evidence. Although mock jurors rated all FEP as highly deceptive and coercive and as more deceptive than controls, we found that participants did not view implicit and explicit FEP differently and that ploy specificity (implicit or explicit) failed to affect verdicts or recommended sentences. These findings suggest that although interrogation trainers and scholars in law and psychology discriminate between the methods, jurors do not.

17.
The standard of proof beyond a reasonable doubt is based on the law's primary motivation to avoid false conviction, even at the expense of increasing the probability of false acquittal. Individual jurors, however, have common-sense motivations to make factually correct decisions by avoiding both types of error. As a result, jurors may interpret the standard of reasonable doubt correctly but deviate from that interpretation in predictable ways when they apply the standard in court. This study advances three hypotheses: (1) jurors are less confident when deciding on acquittal than when deciding on conviction; (2) conviction is associated with a downward adjustment of the interpreted stringency of the standard at the time of application; and (3) a highly stringent interpretation of the standard is associated with a severe downward adjustment of that stringency at the time of application. The study asked 260 juror-eligible participants to examine a trial scenario. The participants first interpreted the stringency of the legal standard on a probability scale. They then judged the probability of the defendant's guilt, decided on a verdict, and rated their confidence in that verdict. The findings strongly supported all three hypotheses. Applications and implications of the study are discussed.

18.
Previously, the interpretation of low copy number (LCN) STR profiles has been carried out using the biological or 'consensus' method: essentially, alleles are not reported unless duplicated in separate PCR analyses [P. Gill, J. Whitaker, C. Flaxman, N. Brown, J. Buckleton, An investigation of the rigor of interpretation rules for STRs derived from less than 100 pg of DNA, Forens. Sci. Int. 112 (2000) 17-40]. The method is now widely used throughout Europe. Although a probabilistic theory was introduced at the same time, its time-consuming complexity meant that it could not easily be applied in practice. The 'consensus' method is not as efficient as the probabilistic approach, as the former wastes information in DNA profiles. The theory was subsequently extended to allow for DNA mixtures and population substructure in a programmed solution by Curran et al. [J.M. Curran, P. Gill, M.R. Bill, Interpretation of repeat measurement DNA evidence allowing for multiple contributors and population substructure, Forens. Sci. Int. 148 (2005) 47-53]. In this paper, we describe an expert interpretation system (LoComatioN) that removes this computational burden and enables application of the full probabilistic method. It is the first expert system that can rapidly evaluate numerous alternative explanations in a likelihood ratio approach, greatly facilitating court evaluation of the evidence; this would not be possible with manual calculation. Finally, the Gill et al. and Curran et al. papers both rely on the ability of the user to specify two quantities: the probability of allelic drop-out and the probability of allelic contamination ("drop-in"). In this paper, we offer some guidelines on how these quantities may be specified.
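To make the role of the two user-specified quantities concrete, here is a deliberately simplified single-contributor, single-locus sketch of how drop-out and drop-in probabilities enter a likelihood ratio. This is not LoComatioN's model (which handles mixtures, replicate analyses, and population substructure); the model choices, names, and numbers below are illustrative assumptions only.

```python
from itertools import combinations_with_replacement

def locus_likelihood(observed, genotype, drop_out, drop_in, allele_freqs):
    """Simplified single-contributor likelihood for one locus.
    Each distinct genotype allele is seen with probability (1 - drop_out)
    (homozygotes contribute a single factor in this toy model), and at
    most one extraneous allele may appear as a drop-in."""
    p = 1.0
    for allele in set(genotype):
        p *= (1 - drop_out) if allele in observed else drop_out
    extraneous = [a for a in observed if a not in genotype]
    if len(extraneous) == 0:
        p *= (1 - drop_in)
    elif len(extraneous) == 1:
        p *= drop_in * allele_freqs[extraneous[0]]
    else:
        p = 0.0  # this toy model allows at most one drop-in
    return p

def random_man_likelihood(observed, drop_out, drop_in, allele_freqs):
    """Denominator: average the likelihood over random genotypes drawn
    from the population allele frequencies (Hardy-Weinberg)."""
    total = 0.0
    for a, b in combinations_with_replacement(allele_freqs, 2):
        geno_prob = (allele_freqs[a] ** 2 if a == b
                     else 2 * allele_freqs[a] * allele_freqs[b])
        total += geno_prob * locus_likelihood(observed, (a, b),
                                              drop_out, drop_in,
                                              allele_freqs)
    return total

freqs = {"10": 0.2, "11": 0.3, "12": 0.5}  # toy allele frequencies
observed = {"10"}                           # only one allele detected
suspect = ("10", "11")                      # suspect is heterozygous 10,11

num = locus_likelihood(observed, suspect, 0.3, 0.05, freqs)
den = random_man_likelihood(observed, 0.3, 0.05, freqs)
print(f"LR = {num / den:.2f}")  # sensitive to the chosen drop-out/drop-in
```

Rerunning with different drop-out and drop-in values shows how strongly the LR depends on them, which is why guidance on specifying these quantities matters.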

19.
The way in which statistical DNA evidence is presented to legal decision makers can have a profound impact on the persuasiveness of that evidence. Evidence presented one way may convince most people that the suspect is almost certainly the source of DNA recovered from a crime scene; presented another way, it may leave a sizable minority equally convinced that the suspect is almost certainly not the source. Three experiments are presented within the context of a theory (exemplar cueing theory) of when people will find statistical match evidence more or less persuasive. The theory holds that the perceived probative value of statistical match evidence depends on the cognitive availability of coincidental-match exemplars. When legal decision makers find it hard to imagine others who might match by chance, the evidence will seem compelling; when match exemplars are readily available, the evidence will seem less compelling. Experiments 1 and 2 show that DNA match statistics that target the individual suspect and are framed as probabilities (i.e., "The probability that the suspect would match the blood drops if he were not their source is 0.1%") are more persuasive than mathematically equivalent presentations that target a broader reference group and are framed as frequencies ("One in 1,000 people in Houston would also match the blood drops"). Experiment 3 shows that the observed effects are less likely to occur at extremely small incidence rates. Implications for the strategic use of presentation effects at trial are considered.

20.
Science & Justice, 2019, 59(4): 367–379
Examples of reasoning problems such as the twins problem and the poison paradox have been proposed by legal scholars to demonstrate the limitations of probability theory in legal reasoning. Specifically, such problems are intended to show that the use of probability theory results in legal paradoxes. As such, these problems have been a powerful deterrent to the use of probability theory, and particularly Bayes' theorem, in the law. However, the examples only lead to 'paradoxes' under an artificially constrained view of probability theory and of the so-called likelihood ratio, in which multiple related hypotheses and pieces of evidence are squeezed into a single hypothesis variable and a single evidence variable. When the distinct relevant hypotheses and evidence are described properly in a causal model (a Bayesian network), the paradoxes vanish. In addition to the twins problem and the poison paradox, we demonstrate this for the food tray example, the abuse paradox, and the small town murder problem. Moreover, the resulting Bayesian networks provide a powerful framework for legal reasoning.
