Similar Documents
20 similar documents retrieved (search time: 303 ms)
1.
Digital Investigation, 2014, 11(2): 81-89
Bytewise approximate matching is a relatively new area within digital forensics, but its importance is growing quickly as practitioners look for fast methods to analyze the increasing amounts of data in forensic investigations. The essential idea is to complement cryptographic hash functions, which detect data objects with bytewise identical representations, with the capability to find objects with bytewise similar representations. Unlike cryptographic hash functions, which have been studied and tested for a long time, approximate matching algorithms are still in their early development stages and have so far been evaluated in a somewhat ad hoc manner. Recently, the FRASH testing framework was proposed as a vehicle for developing a set of standardized tests for approximate matching algorithms; the aim is to provide a useful guide for understanding and comparing the absolute and relative performance of different algorithms. The contribution of this work is twofold: a) it expands FRASH with automated tests for quantifying approximate matching algorithm behavior with respect to precision and recall; and b) it presents a case study of two algorithms already in use, sdhash and ssdeep.
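Precision and recall, the two measures this evaluation quantifies, are simple to compute once a ground-truth set of truly similar pairs is available. The sketch below is purely illustrative and is not part of FRASH; the function name and sample pairs are hypothetical.

```python
def precision_recall(reported, relevant):
    """Precision and recall for a set of reported match pairs.

    reported -- pairs the matching algorithm flagged as similar
    relevant -- pairs that are truly similar (ground truth)
    """
    reported, relevant = set(reported), set(relevant)
    true_positives = len(reported & relevant)
    precision = true_positives / len(reported) if reported else 1.0
    recall = true_positives / len(relevant) if relevant else 1.0
    return precision, recall

# One of two reported pairs is correct, and one of two true pairs is found:
p, r = precision_recall({("a", "b"), ("a", "c")}, {("a", "b"), ("b", "c")})
```

Here both precision and recall come out to 0.5, which is the kind of per-scenario figure such automated tests would report.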

2.
Investigating seized devices within digital forensics becomes more and more difficult due to the increasing amount of data. Hence, a common procedure uses automated file identification, which reduces the amount of data an investigator has to inspect by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is also helpful to detect similar data by applying approximate matching. Let x denote the number of digests in a database; the lookup of a single similarity digest then has complexity O(x), i.e., the digest has to be compared against all digests in the database. In contrast, cryptographic hash values are stored in binary trees or hash tables, so the lookup of a single digest has complexity O(log2(x)) or O(1), respectively. In this paper we present and evaluate a concept to extend existing approximate matching algorithms that reduces the lookup complexity from O(x) to O(1). Instead of using multiple small Bloom filters (the common procedure), we demonstrate that a single, huge Bloom filter has far better performance. Our evaluation shows that current approximate matching algorithms are too slow (e.g., over 21 min to compare 4457 digests of a common file corpus against each other), while the improved version solves this challenge within seconds. A study of the precision and recall rates shows that our approach works as reliably as the original implementations. We obtain this benefit at the cost of accuracy: the comparison is now a file-against-set comparison, and thus it is not possible to see which file in the database is matched.
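The O(1) lookup rests on the basic Bloom-filter property: membership is decided by probing a fixed number k of bit positions, independent of how many items the filter holds. The following is a minimal sketch of that data structure, not the authors' implementation; the sizes and the double-hashing scheme are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """A single large Bloom filter: O(1) set-membership queries,
    at the price of a small false-positive rate and no way to tell
    which inserted item caused a hit (a set-level answer only)."""

    def __init__(self, size_bits=2**20, num_hashes=5):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions from SHA-256 of a salted copy of the item.
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"feature-1")   # an inserted feature is always found again
# b"feature-2" is almost certainly reported absent (false positives are possible)
```

Inserting every feature of every database file into one such filter is what turns the per-digest comparison into a single file-against-set query.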

3.
Automated input identification is a very challenging but important task. Within computer forensics, it reduces the amount of data an investigator has to inspect by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is also necessary to cope with similar inputs (e.g., different versions of a file), embedded objects (e.g., a JPG within a Word document), and fragments (e.g., network packets). Over recent years a number of similarity hashing algorithms have been published. However, in the absence of a definition and a test framework, it is hardly possible to evaluate and compare these approaches and establish them in the community. This paper aims at providing an assessment methodology and a sample implementation called FRASH: a framework to test algorithms of similarity hashing. First, we describe common use cases of a similarity hashing algorithm to motivate our two test classes, efficiency and sensitivity & robustness. Next, our open and freely available framework is briefly described. Finally, we apply FRASH to the well-known similarity hashing approaches ssdeep and sdhash to show their strengths and weaknesses.

4.
Large-scale digital forensic investigations present at least two fundamental challenges. The first is accommodating the computational needs of a large amount of data to be processed. The second is extracting useful information from the raw data in an automated fashion. Both problems can result in long processing times that seriously hamper an investigation. In this paper, we discuss a new approach to one of the basic operations invariably applied to raw data: hashing. The essential idea is to produce an efficient and scalable hashing scheme that can supplement traditional cryptographic hashing during the initial pass over the raw evidence. The goal is to retain enough information to allow binary data to be queried for similarity at various levels of granularity without any further pre-processing or indexing. The specific solution we propose, called a multi-resolution similarity hash (or MRS hash), is a generalization of recent work in the area. Its main advantages are robust performance (raw speed comparable to a high-grade block-level crypto hash), scalability (the ability to compare targets that vary in size by orders of magnitude), and space efficiency (typically below 0.5% of the size of the target).

5.
The fast growth of the average size of digital forensic targets demands new automated means to quickly, accurately and reliably correlate digital artifacts. Such tools need to offer more flexibility than the routine known-file filtering based on crypto hashes. Currently, there are two tools for which NIST has produced reference hash sets: ssdeep and sdhash. The former provides a fixed-size fuzzy hash based on random polynomials, whereas the latter produces a variable-length similarity digest based on statistically identified features packed into Bloom filters. This study provides a baseline evaluation of the capabilities of these tools, both in a controlled environment and on real-world data. The results show that the similarity digest approach significantly outperforms ssdeep in terms of recall and precision in all tested scenarios and demonstrates robust and scalable behavior.

6.
Over the past decade, a substantial effort has been put into developing methods to classify file fragments. Throughout, it has been an article of faith that data fragments, such as disk blocks, can be attributed to different file types. This work is an attempt to critically examine the underlying assumptions and compare them to empirically collected data. Specifically, we focus most of our effort on surveying several common compressed data formats, and show that the simplistic conceptual framework of prior work is at odds with the realities of actual data. We introduce a new tool, zsniff, which allows us to analyze deflate-encoded data, and we use it to perform an empirical survey of deflate-coded text, images, and executables. The results offer a conceptually new type of classification capabilities that cannot be achieved by other means.

7.
We describe the design, implementation, and evaluation of FROST: three new forensic tools for the OpenStack cloud platform. Our implementation for the OpenStack cloud platform supports an Infrastructure-as-a-Service (IaaS) cloud and provides trustworthy forensic acquisition of virtual disks, API logs, and guest firewall logs. Unlike traditional acquisition tools, FROST works at the cloud management plane rather than interacting with the operating system inside the guest virtual machines, thereby requiring no trust in the guest machine. We assume trust in the cloud provider, but FROST overcomes non-trivial challenges of remote evidence integrity by storing log data in hash trees and returning evidence with cryptographic hashes. Our tools are user-driven, allowing customers, forensic examiners, and law enforcement to conduct investigations without necessitating interaction with the cloud provider. We demonstrate how FROST's new features enable forensic investigators to obtain forensically sound data from OpenStack clouds independent of provider interaction. Our preliminary evaluation indicates the ability of our approach to scale in a dynamic cloud environment. The design supports an extensible set of forensic objectives, including the future addition of other data preservation, discovery, real-time monitoring, metrics, auditing, and acquisition capabilities.

8.
9.
The increasing popularity of cryptography poses a great challenge in the field of digital forensics. Digital evidence protected by strong encryption may be impossible to decrypt without the correct key. We propose novel methods for cryptographic key identification and present a new proof-of-concept tool named Interrogate that searches through volatile memory and recovers cryptographic keys used by the ciphers AES, Serpent and Twofish. By using the tool in a virtual digital crime scene, we simulate and examine the different states of systems where well-known and popular cryptosystems are installed. Our experiments show that the chances of uncovering cryptographic keys are high when the digital crime scene is in certain well-defined states. Finally, we argue that the consequences of this and other recent results regarding memory acquisition require that current digital forensics practice be guided towards a more forensically sound way of handling live analysis at a digital crime scene.

10.
The real interest rate is a very important variable in the transmission of monetary policy and features in the vast majority of financial and macroeconomic models. Although the theoretical importance of the real interest rate has generated a sizable literature examining its long-run properties, surprisingly, no study delves into this issue for South Africa. Given this, using quarterly data (1960:Q2-2010:Q4) for South Africa, our paper analyzes the long-run properties of the ex post real rate using tests of unit root, cointegration, fractional integration and structural breaks. In addition, we analyze whether monetary shocks contribute to fluctuations in the real interest rate, based on tests of structural breaks in the rate of inflation as well as Bayesian change point analysis. Based on the tests conducted, we conclude that the South African ex post real rate can best be viewed as a very persistent but ultimately mean-reverting process. The persistence in the real interest rate can also be tentatively considered a monetary phenomenon.

11.
Identity-based cryptography has attracted attention in the cryptographic research community in recent years. Despite the importance of cryptographic schemes for applications in business and law, the legal implications of identity-based cryptography have not yet been discussed. We investigate how identity-based signatures fit into the legal framework. We focus on the European Signature Directive, but also take the UNCITRAL Model Law on Electronic Signatures into account. In contrast to previous assumptions, identity-based signature schemes can, in principle, be used even for qualified electronic signatures, which can replace handwritten signatures in the member states of the European Union. We derive requirements to be taken into account in the development of future identity-based signature schemes.

12.
Although the recent development of a measure for perceived coercion has led to great progress in research on coercion in psychiatric settings, there still exists no consensus on how to measure the existence of real coercive events or pressures. This article reports the development of a system for integrating chart review data and data from interviews with multiple participants in the decision for an individual to be admitted to a psychiatric hospital. The method generates a most plausible factual account (MPFA). We then compare this account with that of patients, admitting clinicians and other collateral informants in 171 cases. Patient accounts most closely approximate the MPFA on all but one of nine dimensions related to coercion. This may be due to wider knowledge of the events surrounding the admission.

13.
This paper explores the use of purpose-built functions and cryptographic hashes of small data blocks for identifying data in sectors, file fragments, and entire files. It introduces and defines the concept of a "distinct" disk sector: a sector that is unlikely to exist elsewhere except as a copy of the original. Techniques are presented for improved detection of JPEG, MPEG and compressed data; for rapidly classifying the forensic contents of a drive using random sampling; and for carving data based on sector hashes.
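The core mechanism, hashing a drive sector by sector so that sampled sectors can be looked up in a database of known file-block hashes, can be sketched in a few lines. This is an illustrative sketch only, not the paper's tooling; the function name and sample data are hypothetical.

```python
import hashlib

SECTOR_SIZE = 512  # bytes; the classic disk sector size

def sector_hashes(data, sector_size=SECTOR_SIZE):
    """Return {offset: sha1_hex} for every full sector of `data`."""
    usable = len(data) - len(data) % sector_size  # drop any trailing partial sector
    return {
        offset: hashlib.sha1(data[offset:offset + sector_size]).hexdigest()
        for offset in range(0, usable, sector_size)
    }

# Three sectors, two of which are byte-identical. The repeated sector
# (think of an all-zero block) is not "distinct": its hash alone cannot
# tie a drive sector back to one specific source file.
drive = b"A" * 512 + b"B" * 512 + b"A" * 512
hashes = sector_hashes(drive)
```

Random sampling then needs only a fraction of the drive's sectors: each sampled sector's hash either misses the database or, if the sector is distinct, identifies known file content with high confidence.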

14.
The public faith of the real estate register and the bona fide acquisition system are two structurally very different mechanisms for protecting reliance in property transactions. The bona fide acquisition system is premised on the fact that possession is insufficient to fully signal ownership of movables, and it centers on carefully balancing the interests of the owner against those of the bona fide third party. The public-faith doctrine of the real estate register is premised on the register serving as the outward appearance of rights, and it rests on a complete real estate registration system. The effect of bona fide acquisition can only be that the third party acquires a real right from a non-entitled person; the effects of the public faith of the register, by contrast, divide into positive and negative reliance protection, and its positive reliance protection covers not only acquisition of a real right from a non-entitled person but also acquisition from an entitled person, receipt of performance, and obtaining priority of rights. The limits of using the bona fide acquisition system to protect the convenience and security of real estate transactions are therefore evident. Article 106 of the Property Law should be narrowly construed as applying mainly to movables; reliance protection in real estate transactions can instead be achieved by interpreting Article 16 of the Property Law.

15.
In later Yogācāra, the path to enlightenment is the course of learning the Four Noble Truths, investigating their meaning, and realizing them directly and experientially through meditative practice (bhāvanā). The object of the yogi's enlightenment-realization is dharma and dharmin: the dharma is the true nature of real things, e.g., momentariness, while the dharmin is the real things themselves, i.e., momentary things. During the practice of meditation, dharma is directly grasped in the process of clear manifestation (viśadābhā) and the particular dharmin is indirectly ascertained in the process of determination (adhyavasāya). So, even though a yogi does not directly perceive any actual thing, s/he is still nonetheless able to undertake practical activity directed toward it. The realization of the Four Noble Truths consists of two aspects: firstly, the manifestation of momentariness, etc., in the stream of the yogi's consciousness; secondly, the ascertainment of momentariness, etc., in whatever s/he happens to encounter.

16.

Objectives

While many criminological theories posit causal hypotheses, many studies fail to use methods that adequately address the three criteria of causality. This is particularly important when assessing the impact of criminal justice involvement on later outcomes. Due to practical and ethical concerns, it is challenging to randomize criminal sanctions, so quasi-experimental methods such as propensity score matching are often used to approximate a randomized design. Based on longitudinal data from the Cambridge Study in Delinquent Development, the current study used propensity score matching to investigate the extent to which convictions and/or incarcerations in the first two decades of life were related to adverse mental health during middle adulthood.

Methods

Propensity scores were utilized to match those with and without criminal justice involvement on a wide range of risk factors for offending.

Results

The results indicated that there were no significant differences in mental health between those involved in the criminal justice system and those without such involvement.

Conclusions

The results did not detect a relationship between justice system involvement and later mental health, suggesting that the consequences of criminal justice involvement may be limited to certain domains.
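The matching step at the heart of this design, pairing each treated unit with the untreated unit whose propensity score is closest, can be sketched compactly. This is an illustrative sketch, not the study's actual procedure: it assumes propensity scores have already been estimated (e.g., by logistic regression on the risk factors), and all names and numbers are hypothetical.

```python
def nearest_neighbor_match(treated, controls, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    treated, controls -- dicts mapping unit id -> propensity score
    caliper -- maximum allowed score distance for a valid match
    """
    available = dict(controls)          # controls not yet used
    pairs = {}
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Closest remaining control by absolute score distance.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs[t_id] = c_id
            del available[c_id]         # match without replacement
    return pairs

treated = {"t1": 0.61, "t2": 0.35}
controls = {"c1": 0.60, "c2": 0.37, "c3": 0.90}
pairs = nearest_neighbor_match(treated, controls)
# t2 pairs with c2 and t1 with c1; c3 falls outside every caliper
```

Outcome comparisons are then made only within the matched pairs, which is what approximates the balance a randomized design would provide.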

17.
叶良芳 《法律科学》2014,(1):98-108
Concurrence of legal provisions refers to the situation in which, during the dynamic application of statutes, several provisions appear capable of evaluating the facts of the act at issue, yet in reality only one provision is qualified to evaluate them. Concurrence of provisions is not a genuine concurrence; it is in fact a subjective "misidentification" by the evaluating subject, caused by the difficulty of locating the applicable law amid intricate statutory provisions. The provisions of the special part of the criminal law are by nature mutually exclusive; the so-called logical types of concurring provisions are not determinate, and the question ultimately reduces to the interpretation of the elements of the offense. Concurrence of provisions is therefore largely an empty concept: there is no verifiable standard for it, and its theoretical value is quite limited. Even abolishing the concept would have no substantive effect on the determination of cases. Nevertheless, the concept retains a certain contrastive function, particularly in delimiting imaginative concurrence of offenses and clarifying the principles governing the application of punishment.

18.
Empirical Reflections on Improving China's Real Estate Registration System   (cited 3 times: 0 self-citations, 3 by others)
China's real estate registration should continue to follow the rights-registration model, while absorbing some of the advantages of the deeds-registration system. Registration authorities should generally be established at the municipal level. In legislation, the general registration and the initial registration of land rights should be clearly distinguished; in practice, it is necessary to introduce a professional real estate registration agent system, together with corresponding liability and insurance regimes for registration authorities and professional agents.

19.
The scientization of public decision-making depends on scientific evaluation of public policy. As policy evaluation research abroad increasingly relies on causal-inference methods designed to estimate the causal effects of policies, Chinese public policy scholars have begun preliminary attempts of their own, but how to select and apply causal-inference methods effectively in policy evaluation still awaits systematic review. This paper introduces the counterfactual framework for policy evaluation and precisely defines the causal effect of a public policy. On this basis, it summarizes matching methods as two steps, distance calculation and pairing, elaborates the principles of covariate matching, coarsened exact matching, Mahalanobis distance matching, propensity score matching, and entropy balancing, and compares the strengths and weaknesses of these methods. Drawing on frontier empirical research in policy evaluation, the paper shows how to apply matching methods concretely. It also discusses caveats in applying matching, including the applicability of matching methods, the relationship between matching and regression, the sample-size requirements of matching, and whether matching with replacement should be allowed.

20.
Inconsistency between the way in which the law is structured and the way in which technologies actually operate is always an interesting and useful topic to explore. When a law conflicts with a business model, the solution will often be changing the business model. However, when the law comes into conflict with the architecture of hardware and software, it is less clear how the problem will be managed. In this paper, we analyze the contradictions between blockchain technology and the requirements of the GDPR. The three contradictions we examine are (i) the right to be forgotten versus the irreversibility/immutability of records, (ii) data protection by design versus the tamper-proofness and transparency of the blockchain, and (iii) the data controller versus decentralized nodes. We highlight that these conflicts can be handled by focusing on the commonalities of the GDPR and the blockchain, developing new approaches and interpretations, and tailoring blockchain technology to the needs of data protection law.
