Similar documents (20 results)
1.
Previous papers in Science & Justice have described the work of the Case Assessment and Interpretation (CAI) project that has been running for several years within the Forensic Science Service (FSS). The principles of the CAI model, which have developed through casework, are the foundation of a balanced, robust and logical approach to interpretation. The question frequently arises as to which database is most appropriate for assigning a value to a given probability. In this paper we present a set of guidelines in the form of flowcharts and explore them within the context of a range of case examples.
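One common way to turn database counts into an assigned probability is a Bayesian estimate under a Beta prior. The sketch below illustrates that idea only; the counts, the Beta(1, 1) prior, and the assumption that the evidence is certain under the prosecution proposition are all hypothetical, not part of the CAI guidelines.

```python
# Hedged sketch: assigning a probability from database counts via a
# Beta(alpha, beta) prior. All numbers here are hypothetical.

def posterior_mean_frequency(x, n, alpha=1.0, beta=1.0):
    """Posterior mean of a feature frequency after observing x occurrences
    in a database of n items (the Laplace rule when alpha = beta = 1)."""
    return (x + alpha) / (n + alpha + beta)

# Hypothetical example: a fibre type seen 3 times in a database of 98 garments.
p = posterior_mean_frequency(3, 98)

# Likelihood ratio for a match, assuming (hypothetically) that the evidence
# has probability 1 under the prosecution proposition:
lr = 1.0 / p
print(round(p, 4), round(lr, 1))  # 0.04 25.0
```

The Laplace rule keeps the assigned probability away from zero even when the feature has never been seen, which matters when the reciprocal is reported as a likelihood ratio.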

2.
The new emphasis on quantification of evidence has led to perplexing courtroom decisions, and it has been difficult for forensic scientists to pursue logical arguments. In particular, when evaluating DNA evidence, both the genetic relationship between the two compared persons and the locus system examined should be considered, yet this point has so far drawn little attention. In this paper, we suggest calculating the match probability using the coancestry coefficient when a family relationship is considered, and we compare the performance of the resulting identification values under various situations.
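One standard way to fold a coancestry coefficient θ into a single-locus match probability is the Balding–Nichols (NRC II) formulae. The sketch below uses those published formulae with hypothetical allele frequencies; it is not necessarily the exact formulation used by the authors.

```python
# Hedged sketch: Balding-Nichols single-locus match probabilities with a
# coancestry coefficient theta. Allele frequencies are hypothetical.

def match_prob_het(p_i, p_j, theta):
    """Probability that a person with coancestry theta to the profile donor
    shares heterozygous genotype A_i A_j (allele frequencies p_i, p_j)."""
    num = 2 * (theta + (1 - theta) * p_i) * (theta + (1 - theta) * p_j)
    return num / ((1 + theta) * (1 + 2 * theta))

def match_prob_hom(p_i, theta):
    """The same for a homozygous genotype A_i A_i."""
    num = (2 * theta + (1 - theta) * p_i) * (3 * theta + (1 - theta) * p_i)
    return num / ((1 + theta) * (1 + 2 * theta))

# With theta = 0 these reduce to the product-rule values 2*p_i*p_j and p_i**2;
# raising theta (closer relatedness) raises the match probability.
p_unrelated = match_prob_het(0.1, 0.2, 0.0)   # ~0.04
p_related = match_prob_het(0.1, 0.2, 0.25)    # larger
print(p_unrelated, p_related)
```

This makes the paper's point concrete: the same observed match carries a weaker identification value when the alternative source may be a relative, because the match probability under θ > 0 exceeds the product-rule value.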

3.
We all agree on the justification of defending ourselves or others in some situations, but we do not often agree on why. Two main views compete: subjectivism and objectivism. The discussion has mainly been held in normative terms. But every theory must pass a previous test: logical consistency. It has recently been held that, at least in the case of defending others from aggression, objective theories lead, in some situations, to normative contradiction. My aim is to challenge the idea that only objective theories have this uncomfortable feature. In fact, any plausible theory justifying the defense of others, whether subjectively or objectively, can lead to situations of normative inconsistency. Therefore, the logical test is not the most fitting one for choosing between different theories of private defense.

4.
The field of firearms and toolmark analysis has encountered deep scrutiny of late, stemming from a handful of voices, primarily in the law and statistical communities. While strong scrutiny is a healthy and necessary part of any scientific endeavor, much of the current criticism leveled at firearm and toolmark analysis is, at best, misinformed and, at worst, punditry. One of the most persistent criticisms stems from the view that as the field lacks quantified random match probability data (or at least a firm statistical model) with which to calculate the probability of a false match, all expert testimony concerning firearm and toolmark identification or source attribution is unreliable and should be ruled inadmissible. However, this critique does not stem from the hard work of actually obtaining data and performing the scientific research required to support or reject current findings in the literature. Although there are sound reasons (described herein) why there is currently no unifying probabilistic model for the comparison of striated and impressed toolmarks as there is in the field of forensic DNA profiling, much statistical research has been, and continues to be, done to aid the criminal justice system. This research has thus far shown that error rate estimates for the field are very low, especially when compared to other forms of judicial error. The first purpose of this paper is to point out the logical fallacies in the arguments of a small group of pundits, who advocate a particular viewpoint but cloak it as fact and research. The second purpose is to give a balanced review of the literature regarding random match probability models and statistical applications that have been carried out in forensic firearm and toolmark analysis.

5.
In R v T [2010] EWCA Crim 2439, [2011] 1 Cr App Rep 85, the Court of Appeal indicated that ‘mathematical formulae’, such as likelihood ratios, should not be used by forensic scientists to analyse data where firm statistical evidence did not exist. Unfortunately, when considering the forensic scientist's evidence, the judgment consistently commits a basic logical error, the ‘transposition of the conditional’ which indicates that the Bayesian argument has not been understood and extends the confusion surrounding it. The judgment also fails to distinguish between the validity of the relationships in a formula and the precision of the data. We explain why the Bayesian method is the correct logical method for analysing forensic scientific evidence, how it works and why ‘mathematical formulae’ can be useful even where firm statistical data is lacking.
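The transposition of the conditional can be shown numerically: P(evidence | innocence) is not P(innocence | evidence), and the gap depends on the prior. The numbers below are purely illustrative, not drawn from the R v T case.

```python
# Hedged numerical sketch of the 'transposed conditional'. All figures
# are hypothetical.

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x LR."""
    return prior_odds * likelihood_ratio

# Suppose the match probability among unrelated people is 1 in 1000, so
# P(E | Hd) = 0.001, and assume P(E | Hp) = 1, giving LR = 1000.
lr = 1 / 0.001

# Transposing the conditional would read the 1-in-1000 figure as
# P(Hd | E) = 0.001, i.e. 99.9% certainty of guilt. Correctly, the answer
# depends on the prior: with 10,000 plausible sources,
prior = 1 / 10_000
post = posterior_odds(prior / (1 - prior), lr)
p_hp = post / (1 + post)
print(round(p_hp, 3))  # 0.091 - far below the 0.999 the transposition suggests
```

The same LR of 1000 supports very different posteriors under different priors, which is exactly why the validity of the Bayesian relationship must be kept separate from the precision of the input data.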

6.
Science & Justice, 2019, 59(4): 367–379
Examples of reasoning problems such as the twins problem and the poison paradox have been proposed by legal scholars to demonstrate the limitations of probability theory in legal reasoning. Specifically, such problems are intended to show that use of probability theory results in legal paradoxes. As such, these problems have been a powerful deterrent to the use of probability theory – and particularly Bayes' theorem – in the law. However, the examples only lead to ‘paradoxes’ under an artificially constrained view of probability theory and the use of the so-called likelihood ratio, in which multiple related hypotheses and pieces of evidence are squeezed into a single hypothesis variable and a single evidence variable. When the distinct relevant hypotheses and evidence are described properly in a causal model (a Bayesian network), the paradoxes vanish. In addition to the twins problem and the poison paradox, we demonstrate this for the food tray example, the abuse paradox and the small town murder problem. Moreover, the resulting Bayesian networks provide a powerful framework for legal reasoning.

7.
Psychological research on eyewitness testimony has made important contributions to the measurement of lineup fairness. The mock witness task, and measures of functional size, effective size, and diagnosticity have proved useful both in application to real-world problems and to ongoing research aimed at the optimization of criminal investigation techniques. However, these measures are typically used in the absence of any inferential statistical considerations. This is unfortunate, since the mock witness task relies on an implicit probability model. An attempt is made in this paper to identify a suitable formal probability model for the mock witness task, and suggestions are made with respect to how to reason inferentially about many of the lineup measures developed in psycholegal research. It is important to reason inferentially about these measures, and a failure to do so may mislead those to whom measures of lineup fairness are presented.
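The two lineup measures named above can be computed directly from mock-witness choice counts. The sketch below uses the formulae usually attributed in the psycholegal literature to Wells and colleagues (functional size) and Malpass (effective size); the lineup and the counts are hypothetical.

```python
# Hedged sketch of two lineup-fairness measures from the mock witness task.
# Choice counts are hypothetical.

def functional_size(choices, suspect):
    """Total mock witnesses divided by the number who picked the suspect;
    a fair k-person lineup should yield a value near k."""
    n = sum(choices.values())
    return n / choices[suspect]

def effective_size(choices):
    """Number of lineup members, discounted by how far under-chosen members
    fall below the expected (uniform) choice count."""
    n = sum(choices.values())
    k = len(choices)
    expected = n / k
    deficit = sum((expected - c) / expected
                  for c in choices.values() if c < expected)
    return k - deficit

# Hypothetical 6-person lineup judged by 60 mock witnesses:
choices = {"suspect": 30, "f1": 10, "f2": 8, "f3": 6, "f4": 4, "f5": 2}
print(functional_size(choices, "suspect"))   # 2.0 - well below the fair value 6
print(effective_size(choices))               # 4.0 - two fillers are implausible
```

Because the counts are a multinomial sample, both statistics have sampling variability, which is precisely the inferential gap the paper addresses.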

8.
In this article, we present a model of individual dismissals based on the worker's right to file a suit against the employer arguing that the dismissal is unjustified or unfair. The model is a standard pre-trial bargaining game between a firm and a worker. We study two cases: when the law states the severance pay for unfair dismissal (the European case), and when judges can decide freely on the compensation to be paid to the worker (the American case). The model provides some guidelines for labour law reforms. In the European case, a decrease in the legally fixed severance pay for unfair dismissal will decrease the severance pay offered by the firm, and only under some assumptions will it decrease the expected firing cost and increase the settlement probability. In addition, the transition from the European to the American case is likely to increase the probability of settlement (and to decrease it in the opposite case), with ambiguous effects on agreed severance pay and expected firing costs.

9.
Much has been said about the logical difference between rules and principles, yet few authors have focused on the distinct logical connectives linking the normative conditions of both norms. I intend to demonstrate that principles, unlike rules, are norms whose antecedents are linguistically formulated in a generic fashion, and thus logically described as inclusive disjunctions. This core feature incorporates the relevance criteria of normative antecedents into the world of principles and also explains their aptitude to conflict with opposing norms, namely that their consequents are fulfilled to varying extents more frequently than those of rules. I conclude that the property of genericity should be predicated to the norm antecedent of principles, more precisely to the hypothetical action. This is of paramount importance to explain, in terms of logical implication and exclusion, the expansibility of competing principles, in contrast with the exclusive character of conflicting rules.

10.
Conclusion. There is certainly a paradox in all this, but it is not that of the constitution and the counterfactual beliefs it demands of us. It is rather that in the absence of Hegel's inspiration and of the actual closure on the dialectic of freedom that justifies his claims, his accomplishment, the logical reconstruction of modern Western ideas of right, would not have been possible. Some will cry irony in the face of this paradox, and well they might. My argument, vis-à-vis Hegel, is ad hominem. But it is not made for the purpose of irony. It is made in the hope of recovering Hegel's sense of the ambiguity of tautology from the straitjacket of our legal and ethical thought. I would hope then to use this paradox to subvert an idée fixe — that right is right — and develop, from the lives of those whom the constitution has stripped bare, our understanding of its wrong.

11.
Drunk driving is a serious threat to public safety. All available and appropriate tools for curbing this threat should be employed to their full extent. The handheld pre-arrest breath test instrument (PBT) is one tool for identifying the alcohol-impaired driver and enforcing drunk driving legislation. A set of data was evaluated (n = 1779) where the PBT instrument was employed in drunk driving arrests to develop a multivariate predictive model. When maintained and operated by trained personnel, the PBT provides a reasonable estimate of the evidential test result within the relevant forensic range (95% prediction interval: ± 0.003 g/210 L). ROC analysis shows that a multivariate model for PBT prediction of the evidentiary alcohol concentration above versus below the legal limit of 0.08 g/210 L has excellent performance with an AUC of 0.96. These results would be of value in evidential hearings seeking to admit the PBT results in drunk driving trials.
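An ROC AUC like the 0.96 reported above has a concrete interpretation: it is the probability that a randomly chosen over-limit case receives a higher predicted value than a randomly chosen under-limit case (the Mann–Whitney interpretation). The sketch below computes AUC that way from hypothetical scores, not from the study's data.

```python
# Hedged sketch: AUC via the Mann-Whitney pairwise-ranking definition.
# The scores below are hypothetical.

def auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical PBT-predicted alcohol concentrations (g/210 L):
over_limit = [0.09, 0.11, 0.10, 0.08, 0.12]   # evidential result >= 0.08
under_limit = [0.05, 0.07, 0.08, 0.04]        # evidential result < 0.08
print(auc(over_limit, under_limit))  # 0.975
```

The pairwise definition makes clear why AUC is threshold-free: it depends only on the ordering of the scores, not on where the 0.08 g/210 L cut is placed.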

12.
Prediction of visible traits from genetic data in certain forensic cases may provide important information that can speed up the process of investigation. Research that has been conducted on the genetics of pigmentation has revealed polymorphisms that explain a significant proportion of the variation observed in human iris color. Here, on the basis of genetic data for the six most relevant eye color predictors, two alternative Bayesian network model variants were developed and evaluated for their accuracy in prediction of eye color. The first model assumed eye color to be categorized into blue, brown, green, and hazel, while the second variant assumed a simplified classification with two states: light and dark. It was found that particularly high accuracy was obtained for the second model, and this proved that reliable differentiation between light and dark irises is possible based on analysis of six single nucleotide polymorphisms and a Bayesian procedure of evidence interpretation.

13.
The premise that progress in document examination will depend on employing techniques useful in the more formal branches of science is not exactly logical. The correlation between the work of the document examiner and the behavioral sciences has been discussed by presenting some random thoughts which have occurred to the author over a period of years. The suggestion is made, by illustration and implication, that the unfortunate connotation of the word "behavior" with the word "graphology" has tended to direct the attention of document examiners away from a study of the behavioral sciences, a branch of science from which much can be learned. The fact that the subjective concepts of probability formed by the mature document examiner will approach mathematical expectation has been noted.

14.
The use of third molars in predicting juvenile/adult status (<18/≥18 years) has important legal ramifications. Third molar development was assessed using Köhler's grading on 268 orthopantomograms of Indian subjects. Logistic regression analysis was applied to determine allocation accuracy of juvenile/adult status and the level of probability that is “reliable” in predicting juvenile/adult status. Allocation accuracies ranged between 75.8% and 78.2% for the sexes combined, with minimal male-female differences. Adults were categorized more accurately than juveniles, suggesting that Köhler's grading puts Indian juveniles at greater risk of unwarranted punishment. In both sexes, juvenile/adult status was “reliably” predicted when the probability was >80% using individual third molars (excepting the lower right third molar in males); combining upper and lower third molars on the left/right sides, “reliable” predictions were possible when the probability was >80% and >90% for females and males, respectively. Overall, “reliable” juvenile/adult status prediction was achieved in c. 36% of subjects.
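The logic of the study's "reliable" prediction rule can be sketched as a fitted logistic model plus a two-sided probability threshold. The coefficients and the grade scale below are hypothetical placeholders, not the values estimated from the 268 orthopantomograms.

```python
# Hedged sketch: logistic prediction of adult status from a third-molar
# development grade, with a >80% 'reliable' threshold. Coefficients are
# hypothetical, not the study's estimates.
import math

def adult_probability(grade, intercept=-6.0, slope=1.0):
    """Logistic model: P(adult | grade) = 1 / (1 + exp(-(b0 + b1*grade)))."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * grade)))

def reliable_call(p, threshold=0.80):
    """Return 'adult' or 'juvenile' only when the probability clears the
    threshold in either direction; otherwise None (no reliable prediction)."""
    if p > threshold:
        return "adult"
    if p < 1 - threshold:
        return "juvenile"
    return None

for grade in (4, 6, 8, 10):  # Köhler grades on a hypothetical numeric scale
    p = adult_probability(grade)
    print(grade, round(p, 3), reliable_call(p))
```

Cases whose probability falls between the two cut-offs get no call at all, which is why only about a third of subjects received a "reliable" prediction in the study.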

15.
Some recent articles have proposed that the confidence interval for the predicted outcome of a single case can be used to describe the predictive accuracy of risk assessments (Hart et al. Br J Psychiat 190:60–65, 2007b; Cooke and Michie, Law Hum Behav 2009). Given that the confidence intervals for an individual prediction are very large, Cooke and colleagues have questioned the wisdom of applying recidivism rates estimated from group data to single cases. In this article, we argue that the confidence interval for the recidivism outcome predicted for a single case will range from zero to one (i.e., be uninformative) when the outcome is dichotomous and the predicted probability is between .05 and .95. This is true by definition and limits the utility of using individual confidence intervals to measure predictive accuracy. Consequently, other quality indicators (many of which are non-quantitative) are needed to determine the accuracy and error of risk evaluations.
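The "true by definition" claim can be demonstrated in a few lines. For a single dichotomous outcome with predicted probability p, a 95% set must retain every outcome whose removal would leave less than 95% of the probability mass; when .05 < p < .95, removing either outcome does exactly that, so the set is {0, 1}.

```python
# Hedged sketch of the article's central point: the smallest 95% prediction
# set for a single Bernoulli outcome with probability p.

def prediction_set(p, level=0.95):
    """Smallest set of outcomes from {0, 1} whose total probability
    reaches the coverage level."""
    outcomes = sorted([(p, 1), (1 - p, 0)], reverse=True)
    chosen, mass = [], 0.0
    for prob, outcome in outcomes:
        chosen.append(outcome)
        mass += prob
        if mass >= level:
            break
    return sorted(chosen)

print(prediction_set(0.40))  # [0, 1] - uninformative
print(prediction_set(0.97))  # [1]    - informative only for extreme p
```

Only when p leaves the (.05, .95) band can one outcome alone carry 95% of the mass, so for the bulk of realistic recidivism probabilities the individual interval says nothing.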

16.
In most jurisdictions, there is a statutory preference for releasing on bail an accused in custody who has not yet been convicted, unless the accused is charged with a very serious offence such as homicide. Nonetheless, the courts are vested with the power to decide on the quantum of bail or even to refuse bail outright. Lim and Quah [Lim, B.-T., & Quah, E. (1998). Economics of bail setting. Bulletin of Economic Research, 257–264] demonstrate that, to induce the defendant to surrender for trial, the bail quantum should be based on the expected cost of punishment and the probability of re-arrest if the defendant jumps bail. However, there are costs to society if the defendant absconds, which include, inter alia, the cost of re-arresting the defendant. In this paper, we derive the optimal bail quantum on the assumption that the probability of re-arrest and the penalty for absconding are chosen by the courts, whose objective is to minimize the sum of the expected harm to society and the net costs to law enforcement if the defendant jumps bail. The cost and benefit of being released on bail are examined. A model is proposed which may be useful to court officials in bail setting as an effective means to secure the defendant's attendance at trial as well as to achieve social equity.

17.
Forensic DNA interpretation is transitioning from manual interpretation, based usually on binary decision-making, toward computer-based systems that model the probability of the profile given different explanations for it, termed probabilistic genotyping (PG). Decision-making by laboratories to implement probability-based interpretation should be based on scientific principles for validity and information that supports its utility, such as criteria to support admissibility. The principles behind STRmix™ are outlined in this study and include standard mathematics and modeling of peak heights and variability in those heights. All PG methods generate a likelihood ratio (LR) and require the formulation of propositions. Principles underpinning formulations of propositions include the identification of reasonably assumed contributors. Substantial data have been produced that support the precision, error rate, and reliability of PG, and in particular, STRmix™. A current issue is access to the code and the quality processes used while coding. There are substantial data that describe the performance, strengths, and limitations of STRmix™, one of the available PG software packages.

18.
Research on delinquency involvement has often employed structural or control theories to account for such behavior. Structural models typically have been applied to lower-class delinquency, control models to explanations of middle-class juvenile misconduct. Much of the inconclusiveness and many of the contradictions in the delinquency literature are arguably the result of this focus on either lower-class or middle-class adolescents employing a single conceptual orientation. Such a restrictive focus has produced narrow, class-specific explanations of delinquency involvement which obscure probable similarities in etiological processes at varying socioeconomic locations in the society. The intent of this research is to test the predictive utility of central aspects of both the structural and control models across a wide range of social status positions. Self-report data obtained from a representative sample of 412 male high school students in a midwestern SMSA indicate that (1) both structural and control theory serve to explain significant, though small, proportions of the variance in delinquency; (2) the control model variables account for the most unique variation in delinquency involvement; and (3) the combined effects of the two models account for more variance than either of the models taken separately.

19.
This literature review summarizes the existing research examining how a potential juror's attitude toward the death penalty affects the probability of favoring conviction. The summary of 14 investigations indicates that a favorable attitude toward the death penalty is associated with an increased willingness to convict (average r = .174). Using the binomial effect size display, this favorable attitude toward the death penalty translates into a 44% increase in the probability of a juror favoring conviction.
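The binomial effect size display (BESD) used above has a very simple mechanics: a correlation r is re-expressed as two "success" rates centred on 50% whose difference equals r. The sketch below shows the transformation for the review's r = .174.

```python
# Hedged sketch of the binomial effect size display (BESD).

def besd_rates(r):
    """BESD: re-express a correlation r as the rates 0.5 + r/2 and
    0.5 - r/2, whose difference equals r."""
    return 0.5 + r / 2, 0.5 - r / 2

high, low = besd_rates(0.174)
print(round(high, 3), round(low, 3))  # 0.587 0.413
```

In BESD terms, jurors favorable toward the death penalty favor conviction at a rate of about 58.7% versus 41.3% for unfavorable jurors, which is the basis of the relative increase the review reports.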

20.