Similar documents
20 similar documents found (search time: 31 ms)
1.
This paper describes research and analysis that were performed to identify a robust and accurate method for identifying and extracting the residual contents of deleted files stored within an HFS+ file system. A survey performed during 2005 of existing tools and techniques for HFS+ deleted file recovery reinforced the need for newer, more accurate techniques. Our research and analysis were based on the premise that a transactional history of file I/O operations is maintained in a Journal on HFS+ file systems, and that this history could be used to reconstruct recent deletions of active files from the file system. Such an approach offered a distinct advantage over other current techniques, including recovery of free/unallocated blocks and “file carving”. If the journal entries contained or referenced file attributes, such as the extents that specify which file system blocks were occupied by each file, then a much more accurate identification and recovery of deleted file data would be possible.
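The journal-scanning idea above can be sketched in a few lines. This is a toy illustration, not the paper's method: a real parser must first walk the journal's block-list headers and interpret the catalog key preceding each record, whereas this sketch only signature-scans an already-extracted buffer for the big-endian catalog file record type (kHFSPlusFileRecord, 0x0002); the 2-byte alignment is also a simplifying assumption.

```python
import struct

# Toy sketch: scan a raw HFS+ journal buffer for candidate catalog
# file records. HFS+ on-disk structures are big-endian; a catalog file
# record begins with the 16-bit type kHFSPlusFileRecord (0x0002).
KHFS_PLUS_FILE_RECORD = 0x0002

def find_candidate_file_records(journal: bytes):
    """Return offsets of plausible catalog file records (toy heuristic)."""
    hits = []
    for off in range(0, len(journal) - 1, 2):  # assume 2-byte alignment
        (rec_type,) = struct.unpack_from(">H", journal, off)
        if rec_type == KHFS_PLUS_FILE_RECORD:
            hits.append(off)
    return hits
```

In practice each hit would then be validated by checking that the surrounding bytes form a sensible record (plausible CNID, dates, and extent descriptors) before any recovery is attempted.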

2.
File system forensics is an important part of digital forensics. Investigators of storage media have traditionally focused on the most commonly used file systems, such as NTFS, FAT, ExFAT, Ext2-4, HFS+, APFS, etc. NTFS is the current file system used by Windows for the system volume, but this may change in the future. In this paper we show the structure of the Resilient File System (ReFS), which has been available since Windows Server 2012 and Windows 8. The main purpose of ReFS is to be used on storage spaces in server systems, but it can also be used in Windows 8 or newer. Although ReFS is not the current standard file system in Windows, users have the option to create ReFS file systems, so digital forensic investigators need to be able to investigate any file system identified on seized media. Further, we focus on remnants of non-allocated metadata structures or attributes. This enables metadata carving, which means searching for specific attributes that are not allocated; attributes found can then be used for file recovery. ReFS uses superblocks and checkpoints in addition to a VBR, which is different from other Windows file systems. If the partition is reformatted with another file system, the backup superblocks can be used for partition recovery, and it is possible to search for checkpoints in order to recover both metadata and content. Another concept not previously seen in Windows file systems is block sharing: when a file is copied, both the original and the new file share the same content blocks. If the user changes the copy, new data runs are created for the modified content, but unchanged blocks remain shared. This may impact file carving, because part of the blocks previously used by a deleted file might still be in use by another file. The large default cluster size in ReFS v1.2, 64 KiB, is an advantage when carving for deleted files, since most deleted files are smaller than 64 KiB and therefore occupy only a single cluster.
For ReFS v3.2 this advantage has decreased, because the standard cluster size is 4 KiB. Preliminary support for ReFS v1.2 has been available in EnCase 7 and 8, but the implementation has not been documented or peer-reviewed. The same is true for Paragon Software, which recently added ReFS support to their forensic product; at the time of writing, Paragon Software is the only digital forensic tool that supports ReFS v3.x. Our work documents how ReFS v1.2 and ReFS v3.2 are structured, at an abstraction level that allows digital forensic investigation of this new file system. The most recent version of ReFS is the most relevant for digital forensics, because Windows automatically updates the file system to the latest version on mount (although a registry value can be changed to prevent this update); this is why we have included information about ReFS v3.2. The latest ReFS version observed is 3.4, but the information presented about 3.2 is still valid. In any criminal case, the investigator needs to examine the file system version actually found.
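The superblock/checkpoint search described above can be sketched as a signature scan at cluster granularity. The four-byte page signatures ("SUPB" for superblocks, "CHKP" for checkpoints, "MSB+" for B+-tree pages) follow the ReFS v3 structures discussed in the text, but treat the exact values, offsets, and the 4 KiB cluster size as assumptions to verify against the version at hand.

```python
# Sketch: scan a raw image at cluster boundaries for ReFS v3 metadata
# page signatures, as a first step toward partition/checkpoint recovery.
SIGNATURES = (b"SUPB", b"CHKP", b"MSB+")

def scan_refs_pages(image: bytes, cluster_size: int = 4096):
    """Return (offset, signature) pairs for clusters starting with a signature."""
    found = []
    for off in range(0, len(image) - 3, cluster_size):
        sig = image[off:off + 4]
        if sig in SIGNATURES:
            found.append((off, sig))
    return found
```

A real tool would next parse each candidate page header (checksums, volume signature, virtual cluster numbers) to discard false positives before using a superblock or checkpoint for recovery.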

3.
《Digital Investigation》2007,4(3-4):119-128
Carving is the term most often used to indicate the act of recovering a file from unstructured digital forensic images. The term unstructured indicates that the original digital image does not contain useful filesystem information which may be used to assist in this recovery. Typically, forensic analysts resort to carving techniques as an avenue of last resort due to the difficulty of current techniques. Most current techniques rely on manual inspection of the file to be recovered, reconstructing it by trial and error. Manual processing is typically impractical for modern disk images, which might contain hundreds of thousands of files. At the same time, the traditional process of recovering deleted files using filesystem information is becoming less practical because most modern filesystems purge critical information for deleted files. As such, the need for automated carving techniques is growing quickly, even when a filesystem does exist on the forensic image. This paper explores the theory of carving in a formal way. We then proceed to apply this formal analysis to the carving of PDF and ZIP files based on the internal structure inherent within the file formats themselves. Specifically, this paper deals with carving from the Digital Forensic Research Workshop's (DFRWS) 2007 carving challenge.
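The idea of carving a ZIP file from its internal structure can be sketched minimally: locate a local-file-header signature (`PK\x03\x04`) and the end-of-central-directory record (`PK\x05\x06`), and cut the bytes in between. This is only the headers-only baseline; the paper's approach validates much more (central directory offsets, CRCs), and the sketch assumes an empty archive comment.

```python
import io
import zipfile

LFH = b"PK\x03\x04"   # local file header signature
EOCD = b"PK\x05\x06"  # end of central directory signature

def carve_zip(image: bytes):
    """Return the first plausible ZIP byte range carved from a raw image."""
    start = image.find(LFH)
    if start < 0:
        return None
    end = image.find(EOCD, start)
    if end < 0:
        return None
    # The EOCD record is 22 bytes when the archive comment is empty.
    return image[start:end + 22]
```

A quick self-check: build an archive in memory, embed it in junk bytes, carve it back out, and reopen it with `zipfile`.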

4.
File carving is the process of reassembling files from disk fragments based on the file content in the absence of file system metadata. By leveraging file header and footer pairs, traditional file carving mainly focuses on document and image files such as PDF and JPEG. With the vast amount of malware code appearing in the wild daily, recovery of binary executable files becomes an important problem, especially when malware deletes itself after compromising a computer. However, unlike image files, which usually have both a header and a footer, executable files only have header information, which makes the carving much harder. In this paper, we present Bin-Carver, a first-of-its-kind system to automatically recover executable files with deleted or corrupted metadata. The key idea is to exploit the road map information defined in executable file headers and the explicit control flow paths present in the binary code. Our experiments with thousands of binary code files show Bin-Carver to be highly accurate, with an identification rate of 96.3% and a recovery rate of 93.1% on average when handling file systems ranging from pristine to chaotic and highly fragmented.
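The "road map" idea can be illustrated with the ELF header: `e_shoff` gives the offset of the section header table and `e_shnum * e_shentsize` its size, and since the section header table normally sits at the end of the file, these fields bound the file size even though ELF has no footer. This sketch handles only 64-bit little-endian ELF and is an illustration of the principle, not Bin-Carver itself.

```python
import struct

def elf64_size_hint(header: bytes):
    """Estimate total file size from a 64-bit little-endian ELF header."""
    if len(header) < 64 or header[:4] != b"\x7fELF":
        return None
    e_shoff, = struct.unpack_from("<Q", header, 0x28)      # section header table offset
    e_shentsize, = struct.unpack_from("<H", header, 0x3A)  # size of one entry
    e_shnum, = struct.unpack_from("<H", header, 0x3C)      # number of entries
    return e_shoff + e_shnum * e_shentsize
```

A carver can use this hint to decide how many clusters after the header candidate belong to the executable before falling back to control-flow-based checks for fragmented files.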

5.
Global Positioning System (GPS) devices are an increasingly important source of evidence, as more of our devices have built-in GPS capabilities. In this paper, we propose a novel framework to efficiently recover National Marine Electronics Association (NMEA) logs and reconstruct GPS trajectories. Unlike existing approaches that require file system metadata, our proposed algorithm is based on the file carving technique and does not rely on system metadata. By understanding the characteristics and intrinsic structure of trajectory data in NMEA logs, we demonstrate how to pinpoint all data blocks belonging to the NMEA logs in the acquired forensic image of a GPS device. A discriminator is then presented to determine whether two data blocks can be merged, and based on this discriminator we design a reassembly algorithm to re-order and merge the obtained data blocks into new logs. In this context, deleted trajectories can be reconstructed by analyzing the recovered logs. Empirical experiments demonstrate that our proposed algorithm performs well whether or not the system metadata is available, when log files are heavily fragmented, when one or more parts of the log files are overwritten, and for different file systems with variable cluster sizes.
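A cheap way to recognize blocks belonging to an NMEA log, in the spirit of the pinpointing step above, is to validate the NMEA 0183 checksum: a sentence runs from `$` to `*`, and the two hex digits after `*` are the XOR of every character in between. This is a generic sketch of that recognition step, not the paper's full discriminator.

```python
def nmea_checksum(body: str) -> str:
    """XOR of all characters between '$' and '*', as two hex digits."""
    c = 0
    for ch in body:
        c ^= ord(ch)
    return format(c, "02X")

def is_valid_sentence(sentence: str) -> bool:
    """Check that a candidate line is a checksum-valid NMEA 0183 sentence."""
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, _, given = sentence[1:].partition("*")
    return given.strip().upper() == nmea_checksum(body)
```

Blocks whose lines overwhelmingly pass this check can be treated as NMEA log blocks; timestamps inside the sentences then drive the reassembly ordering.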

6.
File carving is a technique whereby data files are extracted from a digital device without the assistance of file tables or other disk meta-data. One of the primary challenges in file carving can be found in attempting to recover files that are fragmented. In this paper, we show how detecting the point of fragmentation of a file can benefit fragmented file recovery. We then present a sequential hypothesis testing procedure to identify the fragmentation point of a file by sequentially comparing adjacent pairs of blocks from the starting block of a file until the fragmentation point is reached. By utilizing serial analysis we are able to minimize the errors in detecting the fragmentation points. The performance results obtained from the fragmented test-sets of DFRWS 2006 and 2007 show that the method can be effectively used in recovery of fragmented files.
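The block-by-block scan can be sketched with a much simpler statistic than the paper uses: as a stand-in for the sequential hypothesis test, this toy compares the Shannon entropy of each block with its predecessor and reports the first block where the statistic jumps past a threshold. The threshold and the entropy statistic are illustrative assumptions, not the paper's test.

```python
import math
from collections import Counter

def entropy(block: bytes) -> float:
    """Shannon entropy of a non-empty byte block, in bits per byte."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def fragmentation_point(blocks, threshold=2.0):
    """Index of the first block that no longer matches its predecessor."""
    for i in range(1, len(blocks)):
        if abs(entropy(blocks[i]) - entropy(blocks[i - 1])) > threshold:
            return i
    return None
```

The paper's sequential procedure is stronger because it accumulates evidence over several comparisons instead of thresholding a single pairwise difference, which reduces false positives on noisy blocks.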

8.
Electronic documents often contain personal or confidential information, which can be used as valuable evidence in criminal investigations. In a digital investigation, special techniques are required for grouping and screening electronic documents, because it is challenging to manually analyze the relationships between the numerous documents on a storage device. To this end, techniques such as keyword search, similarity search, topic modeling, metadata analysis, and document clustering are continually being studied, but they still have limitations for revealing the relevance of documents. Specifically, the metadata used in previous research are not always present in the documents, and clustering methods based on specific keywords may be incomplete because text-based contents (including metadata) can easily be modified or deleted by users. In this work, we propose a novel method to efficiently group Microsoft Office Word 2007+ (MS Word) files by using the revision identifier (RSID). Through a thorough understanding of the RSID, examiners can predict the organizations to which a specific user belongs and may even discover unexpected interpersonal relationships. An experiment with a public dataset (GovDocs) shows that it is possible to categorize documents more effectively by combining our proposal with previously studied methods. Furthermore, we introduce a new document tracking method for understanding the editing history and movement of a file, and demonstrate its usefulness through an experiment with documents from a real case.
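RSID extraction is straightforward to sketch because a `.docx` file is a ZIP container whose `word/document.xml` carries `w:rsid*` attributes (eight hex digits) identifying editing sessions. The sketch below pulls those values with a regular expression; a real implementation would also read the RSID table in `word/settings.xml` and the other document parts.

```python
import io
import re
import zipfile

# Match attributes like w:rsidR="00AB12CD" or w:rsidRDefault="00AB12CD".
RSID_RE = re.compile(rb'w:rsid[A-Za-z]*="([0-9A-Fa-f]{8})"')

def extract_rsids(docx_bytes: bytes) -> set:
    """Return the set of RSID values found in word/document.xml."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        xml = zf.read("word/document.xml")
    return {m.group(1).decode() for m in RSID_RE.finditer(xml)}
```

Documents sharing RSID values can then be grouped, since a shared RSID indicates content originating from the same editing session.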

9.
“File carving” reconstructs files based on their content, rather than using metadata that points to the content. Carving is widely used for forensics and data recovery, but no file carvers can automatically reassemble fragmented files. We survey files from more than 300 hard drives acquired on the secondary market and show that the ability to reassemble fragmented files is an important requirement for forensic work. Next we analyze the file carving problem, arguing that rapid, accurate carving is best performed by a multi-tier decision problem that seeks to quickly validate or discard candidate byte strings – “objects” – from the media to be carved. Validators for the JPEG, Microsoft OLE (MSOLE) and ZIP file formats are discussed. Finally, we show how high speed validators can be used to reassemble fragmented files.  相似文献   
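A fast validator of the kind described above can be sketched for JPEG: after the SOI marker (FFD8), each marker segment declares its own length until the scan data begins, and the stream must end with EOI (FFD9). This is a deliberately weak illustration, far simpler than the paper's validators, but it shows the "quickly discard candidate byte strings" idea.

```python
import struct

def quick_validate_jpeg(data: bytes) -> bool:
    """Cheaply accept/reject a candidate byte string as a JPEG object."""
    if len(data) < 4 or data[:2] != b"\xff\xd8":   # must start with SOI
        return False
    i = 2
    # Walk marker segments until the SOS marker (FFDA) or a non-marker byte.
    while i + 4 <= len(data) and data[i] == 0xFF and data[i + 1] != 0xDA:
        seg_len, = struct.unpack_from(">H", data, i + 2)
        if seg_len < 2:
            return False
        i += 2 + seg_len
    return data.endswith(b"\xff\xd9")              # must end with EOI
```

In a multi-tier carver, a cheap check like this runs first, and only survivors are passed to an expensive validator (e.g., a full decode).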

10.
File systems have always played a vital role in digital forensics and during the past 30–40 years many of these have been developed to suit different needs. Some file systems are more tightly connected to a specific Operating System (OS). For instance HFS and HFS+ have been the file systems of choice in Apple devices for over 30 years.Much has happened in the evolution of storage technologies, the capacity and speed of devices has increased and Solid State Drives (SSD) are replacing traditional drives. All of these present challenges for file systems. APFS is a file system developed from first principles and will, in 2017, become the new file system for Apple devices.To date there is no available technical information about APFS and this is the motivation for this article.  相似文献   

11.
The National Software Reference Library (NSRL) is an essential data source for forensic investigators, providing in its Reference Data Set (RDS) a set of hash values of known software. However, the NSRL RDS has not previously been tested against a broad spectrum of real-world data. The current work did this using a corpus of 36 million files on 2337 drives from 21 countries. These experiments answered a number of important questions about the NSRL RDS, including what fraction of files it recognizes of different types. NSRL coverage by vendor/product was also tested, finding that 51% of the vendor/product names in our corpus had no hash values at all in NSRL. It is shown that coverage or “recall” of the NSRL can be improved with additions from our corpus, such as frequently occurring files and files whose paths were found previously in NSRL with a different hash value. This provided 937,570 new hash values which should be uncontroversial additions to NSRL. Several additional tests investigated the accuracy of the NSRL data. Experiments testing the hash values saw no evidence of errors. Tests of file sizes showed them to be consistent except for a few cases. On the other hand, the product types assigned by NSRL can be disputed, and it failed to recognize any of a sample of virus-infected files. The file names provided by NSRL had numerous discrepancies with the file names found in the corpus, so the discrepancies were categorized; among other things, there were apparent spelling and punctuation errors. Some file names suggest that NSRL hash values were computed on deleted files, which is not a safe practice. The tests had the secondary benefit of helping identify occasional errors in the metadata obtained when imaging deleted files in our corpus. This research has provided much data useful in improving NSRL and the forensic tools that depend upon it. It also provides a general methodology and software for testing hash sets against corpora.
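The basic workflow being evaluated can be sketched simply: hash every file in a corpus and test membership in the RDS hash set. The sketch assumes SHA-1 digests and represents the RDS as an in-memory set (`known_hashes` stands in for data loaded from the published NSRL files); real deployments use indexed databases for the hundreds of millions of entries.

```python
import hashlib

def sha1_of(data: bytes) -> str:
    """Uppercase hex SHA-1, matching the NSRL RDS convention."""
    return hashlib.sha1(data).hexdigest().upper()

def partition_known(files: dict, known_hashes: set):
    """Split {name: bytes} into (known, unknown) name lists by hash lookup."""
    known, unknown = [], []
    for name, data in files.items():
        (known if sha1_of(data) in known_hashes else unknown).append(name)
    return known, unknown
```

Files on the "known" list can be excluded from manual review, which is exactly why the coverage and accuracy questions studied in this paper matter.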

12.
A problem that arises in computer forensics is determining the type of a file fragment. An extension to the file name indicating the type is stored in the disk directory, but when a file is deleted, the directory entry for the file may be overwritten. This problem is easily solved when the fragment includes the initial header, which contains explicit type-identifying information, but it is more difficult to determine the type of a fragment from the middle of a file. We investigate two algorithms for predicting the type of a fragment: one based on Fisher's linear discriminant and the other based on longest common subsequences of the fragment with various sets of test files. We test the ability of the algorithms to predict a variety of common file types. Algorithms of this kind may be useful in designing the next generation of file carvers – programs that reconstruct files when directory information is lost or deleted. These methods may also be useful in designing virus scanners, firewalls and search engines that find files similar to a given file.
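The first algorithm builds statistics over byte frequencies. As a simplified stand-in for Fisher's linear discriminant, the sketch below represents each fragment as a 256-bin byte histogram and classifies by nearest centroid; the tiny training "corpora" in the test are toy data, and the choice of Euclidean distance is an assumption for illustration.

```python
import math
from collections import Counter

def histogram(data: bytes):
    """Normalized 256-bin byte-frequency histogram."""
    counts = Counter(data)
    n = len(data)
    return [counts.get(b, 0) / n for b in range(256)]

def train(samples_by_type: dict):
    """Average the histograms of each type's sample fragments."""
    centroids = {}
    for ftype, samples in samples_by_type.items():
        hists = [histogram(s) for s in samples]
        centroids[ftype] = [sum(col) / len(hists) for col in zip(*hists)]
    return centroids

def classify(fragment: bytes, centroids: dict) -> str:
    """Assign the type whose centroid is nearest in Euclidean distance."""
    h = histogram(fragment)
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(h, c)))
    return min(centroids, key=lambda t: dist(centroids[t]))
```

Fisher's discriminant improves on this by weighting the byte dimensions that best separate the classes rather than treating all 256 equally.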

13.
SQLite is a lightweight SQL database engine. Because it is simple to deploy and powerful, it is widely used in applications such as mobile phone text messaging and browser history. In forensic practice it is often necessary to search SQLite database files for specific content, so understanding the structure of SQLite database files is very important. This paper analyzes the structure of SQLite database files and the data structure of the SMS message file on Android phones, and uses a real case to illustrate a method for searching for and analyzing deleted text messages. This approach to examination and analysis can be applied to any application that uses a SQLite database.
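A first step in examining a raw SQLite file is reading the 100-byte database header: the file begins with the magic string "SQLite format 3\0", and the page size is stored big-endian at offset 16 (the special value 1 means 65536). This sketch covers only that header step; carving deleted records additionally requires walking pages for freeblocks and unallocated space.

```python
import os
import sqlite3
import struct
import tempfile

MAGIC = b"SQLite format 3\x00"

def sqlite_header_info(raw: bytes):
    """Parse the database header of a raw SQLite file; None if not SQLite."""
    if len(raw) < 100 or raw[:16] != MAGIC:
        return None
    page_size, = struct.unpack_from(">H", raw, 16)
    if page_size == 1:          # the value 1 encodes a 65536-byte page
        page_size = 65536
    return {"page_size": page_size}
```

Knowing the page size lets an examiner iterate over pages and inspect each page's free space for remnants of deleted rows, which is where deleted SMS content typically survives.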

14.
Document forensics remains an important field of digital forensics. To date, existing methods have focused on the last saved version of a document file stored on the PC; the drawback of this approach is that it provides no indication of how the contents were modified. This paper provides a novel method for document forensics based on tracking the revision history of a Microsoft Word file. The proposed method concentrates on the TMP file created when the author saves the file and the ASD file created periodically by Microsoft Word during editing. A process is presented whereby revision history lists are generated based on the metadata of the Word, TMP, and ASD files. Furthermore, we describe a technique developed to link the revision history lists based on similarity. These outcomes can provide considerable assistance to a forensic investigator trying to establish the extent to which the contents of a document file have been changed, and when the file was created, modified, deleted, and copied.

15.
《Digital Investigation》2014,11(2):102-110
Anti-forensic techniques have been developed to hinder digital forensic investigations, and methods of countering anti-forensic behavior have therefore been studied in various areas. In the area of user activity analysis, “IconCache.db” files contain icon cache information related to applications, which can yield meaningful information for digital forensic investigations, such as traces of deleted files. A previous study investigated the general artifacts found in the IconCache.db file. In the present study, further features and structures of the IconCache.db file are described. We also propose methods for analyzing anti-forensic behaviors (e.g., time information related to the deletion of files). Finally, we introduce an analytical tool that was developed based on the file structure of IconCache.db. The tool parses strings out of IconCache.db so that an analyst can examine the file more easily.
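The string-parsing step such a tool performs can be sketched generically: pull runs of printable characters out of an arbitrary binary blob, in both ASCII and UTF-16LE (which Windows uses for paths). This is a generic string extractor, not the tool's actual parser of the IconCache.db record structure.

```python
import re

# Runs of 4+ printable ASCII bytes, and 4+ printable chars in UTF-16LE.
ASCII_RUN = re.compile(rb"[\x20-\x7e]{4,}")
UTF16_RUN = re.compile(rb"(?:[\x20-\x7e]\x00){4,}")

def extract_strings(blob: bytes):
    """Return printable ASCII and UTF-16LE strings found in a binary blob."""
    out = [m.group().decode("ascii") for m in ASCII_RUN.finditer(blob)]
    out += [m.group().decode("utf-16-le") for m in UTF16_RUN.finditer(blob)]
    return out
```

Extracted executable names and paths can then be compared against the live file system: an icon cache entry with no matching file is a candidate trace of a deleted application.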

16.
In this paper, we propose two methods to recover damaged audio files using deep neural networks. The presented audio file recovery methods differ from the conventional file-carving-based recovery method in that they restore lost data that are difficult to recover with the conventional approach. This research suggests that recovery tasks which are essential yet very difficult or time-consuming can be automated with the proposed recovery methods using deep neural networks. We apply feed-forward and Long Short-Term Memory neural networks to the tasks. The experimental results show that deep neural networks can distinguish speech signals from non-speech signals, and can also identify the encoding methods of audio files at the bit level. This leads to successful recovery of damaged audio files that are otherwise difficult to recover using conventional file-carving-based methods.

17.
《Digital Investigation》2007,4(3-4):138-145
Pidgin, formerly known as Gaim, is a multi-protocol instant messaging (IM) client that supports communication on most of the popular IM networks. Pidgin is chiefly popular under Linux, and is also available for Windows, BSD and other UNIX versions. This article presents a number of traces that are left behind after the use of Pidgin on Linux, enabling digital investigators to search for and interpret instant messaging activities, including online conversations and file transfers. Specifically, the contents and structures of user settings, log files, contact files and the swap partition are discussed. In addition to looking for such information in active files on a computer, forensic examiners can recover deleted items by searching a hard drive for the file signatures and known file structures detailed in this article.

18.
徐炼 《刑事技术》2006,(4):24-26
Objective: To recover deleted data from a Microsoft SQL Server database. Methods: The structures of SQL Server data files and log files were analyzed, and the Lumigent Log Explorer software was used to examine the records in the extracted logs. Results: More than 700,000 valid records were extracted from the database log files obtained in one case. Conclusion: This method can analyze and extract the records in Microsoft SQL Server log files and recover deleted data, and is highly practical.

19.
Minnaard proposed a novel method that constructs a creation time bound for files recovered without time information. The method exploits a relationship between the creation order of files and their locations on a storage device managed with the Linux FAT32 file system. This creation order reconstruction method is valid only in non-wraparound situations, where the file creation time at a former position is earlier than that at a latter position. In this article, we show that if the Linux FAT32 file allocator traverses the storage space more than once, the creation time of a recovered file can be earlier than that of a file at a former position and later than that of a file at a latter position. It is also analytically verified that there are at most n candidates for the creation time bound of each recovered file, where n is the number of traversals by the file allocator. Our analysis is evaluated by examining the file allocation patterns of two commercial in-car dashboard cameras.

20.
Several operating systems provide a central logging service which collects event messages from the kernel and applications, filters them and writes them into log files. Such a system service has existed in Microsoft Windows NT for more than a decade; its file format is well understood and supported by forensic software. Microsoft Vista introduces an event logging service that was entirely redesigned, confronting forensic examiners and software authors with unfamiliar system behavior and a new, largely undocumented file format. This article describes the history of Windows system loggers, what has been changed over time, and for what reason. It compares Vista log files in their native binary form and in a textual form. Based on the results, this paper for the first time publicly describes the key elements of the new log file format and the proprietary binary encoding of XML. It discusses the problems that may arise during daily work. Finally, it proposes a procedure for recovering information from log fragments. During a criminal investigation, this procedure was successfully applied to recover information from a corrupted event log.
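The fragment-recovery idea can be sketched against the Vista event log (.evtx) layout: the file header starts with the ASCII signature "ElfFile\0" and each 64 KiB chunk starts with "ElfChnk\0", so chunks can be located by signature even when the file header is corrupted or missing. This sketch only locates chunk candidates; the paper's procedure goes on to parse the binary-XML records inside each chunk.

```python
CHUNK_SIG = b"ElfChnk\x00"  # evtx chunk signature; file header uses "ElfFile\x00"

def find_chunks(raw: bytes):
    """Return start offsets of candidate evtx chunks in a raw buffer.

    A carver would slice 64 KiB (0x10000 bytes) from each hit and then
    validate the chunk's internal checksums before parsing records.
    """
    offsets, pos = [], raw.find(CHUNK_SIG)
    while pos != -1:
        offsets.append(pos)
        pos = raw.find(CHUNK_SIG, pos + 1)
    return offsets
```

Because each chunk is self-contained, even an isolated chunk recovered this way can yield complete event records from an otherwise corrupted log.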

