Similar Literature
20 similar documents retrieved (search time: 47 ms)
1.
This paper investigates whether computer forensic tools (CFTs) can extract complete and credible digital evidence from digital crime scenes in the presence of file system anti-forensic (AF) attacks. The study uses a well-established six-stage forensic tool testing methodology based on black-box testing principles to carry out experiments that evaluate four leading CFTs for their potential to combat eleven different file system AF attacks. Results suggest that only a few AF attacks are identified by all the evaluated CFTs, whereas most of the attacks considered by the study go unnoticed. These AF attacks exploit basic file system features, can be executed using simple tools, and even attack CFTs to accomplish their task. These results imply that evidence collected by CFTs in digital investigations is not complete and credible in the presence of AF attacks. The study suggests that practitioners and academicians should not rely absolutely on CFTs for evidence extraction from a digital crime scene, highlights the implications of doing so, and makes several recommendations in this regard. The study also points towards the immediate and aggressive research effort required in the area of computer forensics to address the pitfalls of CFTs.

2.
The role of live forensics in digital forensic investigations has become vital due to the importance of volatile data such as encryption keys, network activity, currently running processes, in-memory-only malware, and other key pieces of data that are lost when a device is powered down. While the technology to perform the first steps of a live investigation, physical memory collection and preservation, is available, the tools for completing the remaining steps remain incomplete. First-generation memory analyzers performed simple string and regular expression operations on the memory dump to locate data such as passwords, credit card numbers, fragments of chat conversations, and social security numbers. A more in-depth analysis can reveal information such as running processes, networking information, open file data, loaded kernel modules, and other critical information that can be used to gain insight into activity occurring on the machine when the memory acquisition occurred. To be useful, tools for performing this in-depth analysis must support a wide range of operating system versions with minimal configuration. Current live forensics tools are generally limited to a single kernel version, a very restricted set of closely related versions, or require substantial manual intervention. This paper describes techniques developed to allow automatic adaptation of memory analysis tools to a wide range of kernel versions. Dynamic reconstruction of kernel data structures is obtained by analyzing the memory dump for the instructions that reference the needed kernel structure members. The ability to dynamically recreate C structures used within the kernel allows a large amount of information to be obtained and processed. Currently, this capability is used within a tool called RAMPARSER that is able to simulate commands such as ps and netstat as if an investigator were sitting at the machine at the time of the memory acquisition. Other applications of the developed capabilities include kernel-level malware detection, recovery of process memory and file mappings, and other areas of forensic interest.
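To make the core idea concrete, the following is a minimal sketch (not the RAMPARSER implementation) of how a structure-member offset can be recovered by pattern-matching an instruction that dereferences it, and then used to read the field out of a raw dump. The instruction pattern, function names and the 0x2e8 offset are illustrative assumptions only.

```python
# Minimal sketch (not RAMPARSER itself): recover a structure-member offset by
# pattern-matching an instruction that dereferences it, then use that offset
# to read the field out of a flat memory dump. Names/offsets are hypothetical.
import re
import struct

# REX.W mov rax, [rdi + disp32]  ->  bytes 48 8b 87 <disp32, little-endian>
MOV_RAX_RDI_DISP32 = re.compile(rb"\x48\x8b\x87(....)", re.DOTALL)

def find_member_offset(kernel_text: bytes) -> int:
    """Return the first disp32 found, i.e. the offset of the referenced member."""
    m = MOV_RAX_RDI_DISP32.search(kernel_text)
    if m is None:
        raise ValueError("reference instruction not found")
    return struct.unpack("<I", m.group(1))[0]

def read_member(dump: bytes, struct_base: int, member_offset: int, size: int = 8) -> bytes:
    """Read `size` bytes of the member from an already address-translated dump."""
    return dump[struct_base + member_offset : struct_base + member_offset + size]

if __name__ == "__main__":
    # Fake 'kernel code' containing one matching instruction with offset 0x2e8.
    code = b"\x90\x90" + b"\x48\x8b\x87" + struct.pack("<I", 0x2E8) + b"\xc3"
    print(hex(find_member_offset(code)))   # 0x2e8
```

Once such offsets are known for a given kernel build, per-version configuration files become unnecessary, which is what makes the adaptation automatic.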

3.
This paper discusses the use of communication technology to commit crimes, including crime facts and crime techniques. The analysis focuses on the security of voice over Internet protocol (VoIP), a prevention method against VoIP call attacks, and the points to note when setting up an Internet phone. The importance of digital evidence and digital forensics is emphasised. This paper provides the VoIP digital evidence forensics standard operating procedures (DEFSOP) to help police organisations, and establishes an experimental platform to simulate phone calls, hacker attacks and forensic data. Finally, this paper provides a general discussion of a digital evidence strategy that includes VoIP for crime investigators who are interested in digital evidence forensics.

4.
Owing to the current shortage of specialist computer forensic personnel, judicial practice today mostly relies on a "two-step" approach to collect digital evidence from computers that are still running at the scene: investigators first power down, detach and preserve the computers involved in the case, and only then hand them over to a specialist agency for forensic examination of the digital evidence. Although this approach safeguards the originality and probative value of the digital evidence, it inadvertently causes the loss of "volatile" data stored in RAM as well as other forms of potential digital evidence. Yet this volatile data in a computer information system can provide important leads and potential digital evidence for solving a case. Therefore, training investigators to dynamically acquire and properly preserve volatile data at the scene is of great significance for the investigation and evidence collection of digital cases.

5.
Investigating seized devices within digital forensics gets more and more difficult due to the increasing amount of data. Hence, a common procedure uses automated file identification, which reduces the amount of data an investigator has to look at by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is also helpful to detect similar data by applying approximate matching. Let x denote the number of digests in a database; then the lookup for a single similarity digest has a complexity of O(x). In other words, the digest has to be compared against all digests in the database. In contrast, cryptographic hash values are stored within binary trees or hash tables, and hence the lookup complexity of a single digest is O(log2(x)) or O(1), respectively. In this paper we present and evaluate a concept to extend existing approximate matching algorithms, which reduces the lookup complexity from O(x) to O(1). Instead of using multiple small Bloom filters (which is the common procedure), we demonstrate that a single, huge Bloom filter has far better performance. Our evaluation demonstrates that current approximate matching algorithms are too slow (e.g., over 21 min to compare 4457 digests of a common file corpus against each other) while the improved version solves this challenge within seconds. Studying the precision and recall rates shows that our approach works as reliably as the original implementations. We obtain this benefit at the cost of accuracy: the comparison is now a file-against-set comparison, and thus it is not possible to see which file in the database is matched.
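The O(x)-to-O(1) reduction comes from the constant-time membership test of a Bloom filter: features from all x digests go into one large bit array, and a query probes only k bit positions regardless of how many digests the set holds. The sketch below is a minimal, generic Bloom filter in Python, not the authors' implementation; sizes and hash construction are illustrative assumptions.

```python
# Minimal generic Bloom filter: insert feature hashes from the whole database
# once, then answer "is this feature in the set?" in O(1), independent of x.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 20, k: int = 5):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
database_features = [b"feature-a", b"feature-b", b"feature-c"]  # from all x digests
for f in database_features:
    bf.add(f)
print(b"feature-b" in bf, b"feature-z" in bf)  # True False (false positives possible)
```

The trade-off the abstract notes is visible here: the filter answers only set membership, so a hit cannot be traced back to the individual file that contributed the feature.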

6.
《Science & justice》2023,63(4):537-541
Environmental context reinstatement has particular value for the recall of information in forensic interviews. Since odors are valuable memory cues and can act as memory triggers, in our preliminary study we explored whether odor exposure can help people recall details of a crime scene. The study comprised 58 women and 15 men aged 22–35 who were immersed in a carefully controlled environment closely resembling an actual crime setting, i.e., a virtual reality crime. Participants were exposed to an odor at encoding, at recall, at both, or at neither of these instances, yielding a total of 4 experimental groups that then completed a memory recall task. Recall of the crime scene content was tested in a free recall and a forced-response test immediately after seeing the crime scene and one month later. We found no significant effects of odor exposure on the free or the cued recall of the crime scene. The memory scores correlated neither with the self-assessed olfactory/visual sensitivity of the subjects, nor with the perceived odor pleasantness. These preliminary findings suggest that introducing a vanilla odor while encoding and recalling a crime scene does not aid witness recall accuracy.

7.
Bytewise approximate matching is a relatively new area within digital forensics, but its importance is growing quickly as practitioners are looking for fast methods to screen and analyze the increasing amounts of data in forensic investigations. The essential idea is to complement the use of cryptographic hash functions, which detect data objects with bytewise identical representations, with the capability to find objects with bytewise similar representations. Unlike cryptographic hash functions, which have been studied and tested for a long time, approximate matching functions are still in their early development stages and their evaluation methodology is still evolving. Broadly, prior approaches have used either a human in the loop to manually evaluate the goodness of similarity matches on real-world data, or controlled (pseudo-random) data to perform automated evaluation. This work's contribution is to introduce automated approximate matching evaluation on real data by relating approximate matching results to the longest common substring (LCS). Specifically, we introduce a computationally efficient LCS approximation and use it to obtain ground truth on the t5 set. Using the results, we evaluate three existing approximate matching schemes relative to LCS and analyze their performance.
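For context, the exact longest common substring between two byte strings is a standard dynamic-programming computation; the sketch below shows that exact O(n·m) version, which is the ground-truth quantity the paper's efficient approximation stands in for (the approximation itself is not reproduced here).

```python
# Exact longest-common-substring length by dynamic programming, O(len(a)*len(b)).
# This is the ground-truth quantity relative to which matches are evaluated.
def lcs_length(a: bytes, b: bytes) -> int:
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

print(lcs_length(b"forensic image", b"forensic every image"))  # 9 ("forensic ")
```

The quadratic cost of the exact computation on large files is precisely why a cheaper approximation is needed to obtain ground truth over an entire corpus such as t5.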

8.
《Digital Investigation》2014,11(2):90-101
This paper defines a model of a special type of digital forensics tool, known as data acquisition tools, using the formal refinement language Event-B. The complexity and criticality of many types of computer and cyber crime nowadays, combined with improper or incorrect use of digital forensic tools, call for more robust and reliable specifications of the functionality of digital forensics applications. As a minimum, the evidence produced by such tools must meet the minimum admissibility standards the legal system requires, in general implying that it must be generated by reliable and robust tools. Despite the fact that some research and effort has been spent on the validation of digital forensics tools by means of testing, the verification of such tools and the formal specification of their expected behaviour remain largely under-researched. The goal of this work is to provide a formal specification against which implementations of data acquisition procedures can be analysed.

9.
In the early 1990s, unmanned aerial vehicles (UAVs) were used exclusively in military applications by various developed countries. Now, thanks to their availability and affordability on the electronic device market, this aerial vehicle technology has become familiar to the public and its use has spread to countries all over the world. However, the expanded use of UAVs, colloquially known as drones, is raising understandable security concerns. With the increasing possibility of drones being misused and their ability to get close to critical targets, drones can be used to commit crimes, and the investigation of such activities is therefore much needed. This motivated us to devise a comprehensive drone forensic framework that includes hardware/physical and digital forensics and is suited to the post-flight investigation of a drone's activity. For hardware/physical forensics, we propose a model for investigating drone components at the crime scene. Additionally, we propose a robust digital drone forensic application with a primary focus on analyzing the essential log parameters of drones through a graphical user interface (GUI) developed using JavaFX 8.0. This application interface allows users to extract and examine onboard flight information. It also includes a file converter created for easy and effective 3D flight trajectory visualization. We used two popular drones for conducting this research, namely the DJI Phantom 4 and the Yuneec Typhoon H. The interface also provides a visual representation of the sensor recordings from which pieces of evidence can be acquired. Our research is intended to offer the forensic science community a powerful approach for investigating drone-related crimes effectively.
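As an illustration of the kind of file conversion the abstract mentions, the sketch below turns a decoded flight log into a KML track that 3D viewers such as Google Earth can display. The CSV column names (latitude, longitude, altitude) are hypothetical placeholders, not the vendor-specific DJI or Yuneec log fields, and this is not the paper's converter.

```python
# Hypothetical example: convert a decoded flight-log CSV into a simple KML
# LineString for 3D trajectory viewing. Column names are placeholders.
import csv

def csv_to_kml(log_csv: str, kml_out: str) -> None:
    with open(log_csv, newline="") as f:
        rows = list(csv.DictReader(f))
    coords = " ".join(f"{r['longitude']},{r['latitude']},{r['altitude']}" for r in rows)
    kml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark>\n'
        "  <LineString><altitudeMode>absolute</altitudeMode>\n"
        f"  <coordinates>{coords}</coordinates></LineString>\n"
        "</Placemark></kml>\n"
    )
    with open(kml_out, "w") as f:
        f.write(kml)
```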

10.
11.
Over the last few decades, the use in forensic investigations of non-intrusive geophysical techniques, which allow an entire crime scene to be covered in a reasonable amount of time, has increased. In this study, we analyze the effectiveness of ground-penetrating radar (GPR) in forensics. Experimental scenes were simulated and some of the items most commonly buried at actual crime scenes were introduced, such as bone remains, guns and drug caches. A GPR survey was then conducted on the experimental grids with a 500 MHz antenna. The final purpose was to characterize the radar wave response expected for each set of remains to assist with its identification in later actual investigations. The results provided promising information that can be used when surveying real cases. Nevertheless, there were some interpretational difficulties regarding the sizes of the items and the electromagnetic properties of the materials. For these cases, finite-difference time-domain modeling was employed to achieve an advanced interpretation of the field data. The simulated models were built from accurate geometric data provided by photogrammetric methods, which replicate the experimental scenes in fine detail. Furthermore, this approach allowed more realistic models to be simulated, and the synthetic data obtained provided valuable information for assisting in the interpretation of the field data. As a result of this work, it was concluded that GPR can be an effective tool when searching for a variety of materials during a crime scene investigation.

12.
Current figures on the efficiency of DNA as an investigative tool in criminal investigations only tell part of the story. To get the DNA success story in the right perspective, we examined all forensic reports from serious (N = 116) and high-volume crime cases (N = 2791) over the year 2011 from one police region in the Netherlands. These data show that 38% of analyzed serious crime traces (N = 384) and 17% of analyzed high-volume crime traces (N = 386) did not result in a DNA profile. Turnaround times (from crime scene to DNA report) were 66 days for traces from serious crimes and 44 days for traces from high-volume crimes. Suspects were truly identified through a match with the Offender DNA database of the Netherlands in 3% of the serious crime cases and in 1% of the high-volume crime cases. These data are important for both the forensic laboratory and the professionals in the criminal justice system to further optimize forensic DNA testing as an investigative tool.

13.
Event reconstruction plays a critical role in solving physical crimes by explaining why a piece of physical evidence has certain characteristics. With digital crimes, the current focus has been on the recognition and identification of digital evidence using an object's characteristics, but not on the identification of the events that caused the characteristics. This paper examines digital event reconstruction and proposes a process model and procedure that can be used for a digital crime scene. The model has been designed so that it can apply to physical crime scenes, can support the unique aspects of a digital crime scene, and can be implemented in software to automate part of the process. We also examine the differences between physical event reconstruction and digital event reconstruction.

14.
《Digital Investigation》2014,11(2):81-89
Bytewise approximate matching is a relatively new area within digital forensics, but its importance is growing quickly as practitioners are looking for fast methods to analyze the increasing amounts of data in forensic investigations. The essential idea is to complement the use of cryptographic hash functions, which detect data objects with bytewise identical representations, with the capability to find objects with bytewise similar representations. Unlike cryptographic hash functions, which have been studied and tested for a long time, approximate matching functions are still in their early development stages and have been evaluated in a somewhat ad-hoc manner. Recently, the FRASH testing framework has been proposed as a vehicle for developing a set of standardized tests for approximate matching algorithms; the aim is to provide a useful guide for understanding and comparing the absolute and relative performance of different algorithms. The contribution of this work is twofold: a) expand FRASH with automated tests for quantifying approximate matching algorithm behavior with respect to precision and recall; and b) present a case study of two algorithms already in use, sdhash and ssdeep.
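For readers unfamiliar with the two metrics named here, the snippet below computes precision and recall from true-positive, false-positive and false-negative counts. It is a generic illustration of the quantities a FRASH-style test reports, not code from the framework itself.

```python
# Generic precision/recall calculation of the kind a FRASH-style test reports;
# an illustration of the metrics, not code from the framework itself.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many reported matches are real
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many real matches are reported
    return precision, recall

# e.g. 90 correct matches, 10 spurious matches, 30 missed pairs
print(precision_recall(90, 10, 30))  # (0.9, 0.75)
```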

15.
When digital forensics started in the mid-1980s, most of the software used for analysis came from tools for writing and debugging software. Amongst these tools was the UNIX utility 'dd', which was used to create an image of an entire storage device. In the next decade the practice of creating and using 'an image' became established as a fundamental part of what we call 'sound forensic practice'. By virtue of its structure, every file within the media was an integrated part of the image, and so we were assured that it was a wholesome representation of the digital crime scene. In an age of terabyte media, 'the image' is becoming increasingly cumbersome to process, simply because of its size. One solution to this lies in the use of distributed systems. However, the data assurance inherent in a single media image file is lost when data is stored in separate files distributed across a system. In this paper we assess current assurance practices and provide some solutions to the need to have assurance within a distributed system.
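One common way to restore the assurance that a single monolithic image provides is to hash each distributed piece and then bind the piece hashes into one top-level digest, so that any altered, missing or reordered piece invalidates the whole. The sketch below illustrates that general idea only; it is not the assurance mechanism proposed by the paper.

```python
# Generic illustration (not the paper's scheme): hash every distributed chunk,
# then hash the concatenated chunk digests so one top-level value vouches for
# the whole evidence set, much as a single image hash would.
import hashlib
from typing import Iterable

def chunk_digests(chunks: Iterable[bytes]) -> list[str]:
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def evidence_digest(digests: list[str]) -> str:
    return hashlib.sha256("".join(digests).encode()).hexdigest()

chunks = [b"chunk-0 of the acquired media", b"chunk-1", b"chunk-2"]
per_chunk = chunk_digests(chunks)
print(evidence_digest(per_chunk))  # changes if any chunk is modified or reordered
```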

16.
《Science & justice》2021,61(6):761-770
Many criminal investigations involve an element of digital evidence, and in many cases it is the role of the first responder both to identify its presence at the crime scene and to assess its worth. Whilst in some instances the existence and role of a digital device at-scene may be obvious, in others the first responder will be required to evaluate whether any 'digital opportunities' exist which could support their inquiry, and if so, where these are. This work discusses the potential presence of digital evidence at crime scenes, approaches to identifying it and the contexts in which it may exist, focusing on the investigative opportunities that devices may offer. The concept of digital devices acting as 'digital witnesses' is proposed, followed by an examination of potential 'digital crime scene' scenarios and strategies for processing them.

17.
In this work, we describe our experiences in developing cloud forensics tools and use them to support three main points. First, we make the argument that cloud forensics is a qualitatively different problem. In the context of SaaS, it is incompatible with long-established acquisition and analysis techniques, and requires a new approach and forensic toolset. We show that client-side techniques, which are an extension of methods used over the last three decades, have inherent limitations that can only be overcome by working directly with the interfaces provided by cloud service providers. Second, we present our results in building forensic tools in the form of three case studies: kumodd, a tool for cloud drive acquisition; kumodocs, a tool for Google Docs acquisition and analysis; and kumofs, a tool for remote preview and screening of cloud drive data. We show that these tools, which work with the public and private APIs of the respective services, provide new capabilities that cannot be achieved by examining client-side artifacts. Finally, we use current IT trends, and our lessons learned, to outline the emerging new forensic landscape and the most likely course of tool development over the next five years.
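A very rough sketch of the API-driven acquisition pattern follows: enumerate objects through the provider's interface, preserve the returned metadata verbatim, and download content. The endpoint, token and JSON field names below are hypothetical placeholders; this is neither a real provider's API nor the kumodd implementation.

```python
# Hypothetical sketch of API-driven cloud-drive acquisition. The endpoint,
# token and JSON fields are placeholders, not a real provider's API and not
# the kumodd implementation.
import json
import requests

API = "https://cloud.example.com/v1"           # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def acquire_drive(case_dir: str) -> None:
    files = requests.get(f"{API}/files", headers=HEADERS, timeout=30).json()["items"]
    for f in files:
        # Preserve provider-side metadata exactly as returned.
        with open(f"{case_dir}/{f['id']}.meta.json", "w") as m:
            json.dump(f, m, indent=2)
        content = requests.get(f"{API}/files/{f['id']}/content", headers=HEADERS, timeout=30)
        with open(f"{case_dir}/{f['id']}.bin", "wb") as out:
            out.write(content.content)
```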

18.
Patch-Match is an efficient algorithm used for structural image editing and available as a tool in popular commercial photo-editing software. The tool allows users to insert or remove objects from photos using information from similar scene content. Recently, a modified version of this algorithm was proposed as a counter-measure against Photo-Response Non-Uniformity (PRNU) based Source Camera Identification (SCI). The algorithm can provide anonymity at a high rate (97%) and impede PRNU-based SCI without the need for any other information, hence leaving no known recourse for PRNU-based SCI. In this paper, we propose a method to identify the sources of Patch-Match-applied images by using randomized subsets of images and the traditional PRNU-based SCI methods. We evaluate the proposed method on two forensic scenarios in which an adversary makes use of the Patch-Match algorithm and distorts the PRNU noise pattern in the incriminating images she took with her camera. Our results show that it is possible to link sets of Patch-Match-applied images back to their source camera even in the presence of images that come from unknown cameras. To the best of our knowledge, the proposed method represents the very first counter-measure against the usage of Patch-Match in the digital forensics literature.
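For background, the snippet below sketches simplified textbook PRNU matching: a camera fingerprint is estimated by averaging noise residuals of known images, and a query residual is compared to it by normalized correlation. It is not the paper's randomized-subset method, and a Gaussian filter stands in for the wavelet denoiser usually used in practice.

```python
# Simplified textbook PRNU matching (not the paper's randomized-subset method).
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img: np.ndarray) -> np.ndarray:
    """Noise residual = image minus its denoised version (grayscale float array)."""
    return img - gaussian_filter(img, sigma=2)

def fingerprint(images: list) -> np.ndarray:
    """Average the residuals of images known to come from the camera."""
    return np.mean([residual(i) for i in images], axis=0)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic demo: images sharing the same sensor noise correlate with the fingerprint.
rng = np.random.default_rng(0)
cam_noise = rng.normal(0, 1, (64, 64))
shots = [gaussian_filter(rng.normal(0, 5, (64, 64)), 1) + cam_noise for _ in range(8)]
print(ncc(residual(shots[0]), fingerprint(shots[1:])))  # noticeably above zero
```

Attribution then amounts to thresholding this correlation against values obtained from images of other cameras.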

19.
Due to the democratisation of new technologies, computer forensics investigators have to deal with volumes of data which are becoming increasingly large and heterogeneous. Indeed, on a single machine, hundreds of events occur per minute, produced and logged by the operating system and various software. Therefore, the identification of evidence, and more generally the reconstruction of past events, is a tedious and time-consuming task for investigators. Our work aims at automatically reconstructing and analysing the events related to a digital incident, while respecting legal requirements. To tackle these three main problems (volume, heterogeneity and legal requirements), we identify seven criteria that an efficient reconstruction tool must meet. This paper introduces an approach based on a three-layered ontology, called ORD2I, to represent any digital event. ORD2I is associated with a set of operators to analyse the resulting timeline and to ensure the reproducibility of the investigation.
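To illustrate the general idea of recording provenance-tagged events as ontology instances and reading them back in time order, the sketch below uses rdflib with a placeholder vocabulary; it does not reproduce the ORD2I ontology or its operators.

```python
# Hypothetical sketch: store provenance-tagged events as RDF triples and query
# them back as a timeline. The vocabulary is a placeholder, not ORD2I itself.
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/evt#")  # placeholder namespace
g = Graph()

def add_event(evt_id: str, timestamp: str, description: str, source: str) -> None:
    e = URIRef(EX[evt_id])
    g.add((e, RDF.type, EX.Event))
    g.add((e, EX.occurredAt, Literal(timestamp, datatype=XSD.dateTime)))
    g.add((e, EX.description, Literal(description)))
    g.add((e, EX.extractedFrom, Literal(source)))  # provenance supports reproducibility

add_event("e1", "2015-03-01T10:15:00", "USB device connected", "setupapi.dev.log")
add_event("e2", "2015-03-01T10:14:20", "User logon", "Security.evtx")

# Timeline: events ordered by timestamp.
q = """SELECT ?e ?t ?d WHERE { ?e <http://example.org/evt#occurredAt> ?t ;
                                  <http://example.org/evt#description> ?d . } ORDER BY ?t"""
for row in g.query(q):
    print(row.t, row.d)
```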

20.
Currently, a series of promising new tools is under development that will enable crime scene investigators (CSIs) to analyze traces in situ during the crime scene investigation, or to detect blood and provide information on the age of blood. An experiment is conducted with thirty CSIs investigating a violent robbery at a mock crime scene to study the influence of such technologies on the perception and interpretation of traces during the first phase of the investigation. Results show that in their search for traces, CSIs are not directed by the availability of technologies, which is a reassuring finding. Qualitative findings suggest that CSIs are generally more focused on analyzing perpetrator traces than on reconstructing the event. A focus on perpetrator traces might become a risk when other crime-related traces are overlooked, and when analyzed traces are in fact not crime-related and consequently lead to the identification of innocent suspects.
