Similar Documents
 20 similar documents found (search time: 968 ms)
1.
2.
Memory analysis has gained popularity in recent years, proving to be an effective technique for uncovering malware in compromised computer systems. The process of memory acquisition presents unique evidentiary challenges, since many acquisition techniques require code to be run on a potentially compromised system, presenting an avenue for anti-forensic subversion. In this paper, we examine a number of simple anti-forensic techniques and test a representative sample of current commercial and free memory acquisition tools. We find that current tools are not resilient to very simple anti-forensic measures. We present a novel memory acquisition technique based on direct page table manipulation and PCI hardware introspection that does not rely on operating system facilities, making it more difficult to subvert. We then evaluate this technique's further vulnerability to subversion by considering more advanced anti-forensic attacks.

3.
Traditional, persistent-data-oriented approaches in computer forensics face limitations arising from a number of technological developments, e.g., the rapidly increasing storage capacity of hard drives, memory-resident malicious software, and the growing use of encryption routines, that make a timely investigation more and more difficult. To cope with these issues, security professionals have more recently started to examine alternative data sources and to emphasize the value of volatile system information in RAM. In this paper, we give an overview of the prevailing techniques and methods to collect and analyze a computer's memory. We describe the characteristics, benefits, and drawbacks of the individual solutions and outline opportunities for future research in this evolving field of IT security.

4.
We present a novel approach for the construction and application of cryptographic hashes to user space memory for the purposes of verifying the provenance of code in memory images. Several key aspects of Windows behaviour which influence this process are examined in-depth. Our approach is implemented and evaluated on a selection of malware samples with user space components as well as a collection of common Windows applications. The results demonstrate that our approach is highly effective at reducing the amount of memory requiring manual analysis, highlighting the presence of malicious code in all the malware sampled.
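The page-level hashing idea above can be sketched in a few lines: hash each fixed-size page of a memory region and flag pages whose hashes are absent from a known-good set. This is a minimal illustration, not the paper's actual Windows-aware implementation; the page size, function names, and whitelist format are assumptions.

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size; an assumption for this sketch

def page_hashes(data: bytes, page_size: int = PAGE_SIZE):
    """Split a memory region into fixed-size pages and hash each one."""
    return [hashlib.sha256(data[i:i + page_size]).hexdigest()
            for i in range(0, len(data), page_size)]

def unverified_pages(region: bytes, whitelist: set):
    """Return indices of pages whose hashes are not in the known-good set."""
    return [i for i, h in enumerate(page_hashes(region)) if h not in whitelist]
```

In use, the whitelist would be built from the on-disk copy of each mapped module, so only pages that diverge from known code (e.g., injected or patched code) remain for manual analysis.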

5.
Software-based memory acquisition on modern systems typically requires the insertion of a kernel module into the running kernel. On Linux, kernel modules must be compiled against the exact version of the kernel headers and the exact kernel configuration used to build the currently executing kernel. This makes Linux memory acquisition significantly more complex in practice than on other platforms, due to the number of variations of kernel versions and configurations, especially when responding to incidents. The Linux kernel maintains a checksum of the kernel version and will generally refuse to load a module that was compiled against a different kernel version. Although there are techniques to override this check, they carry an inherent risk of an unstable kernel and possible kernel crashes. This paper presents a novel technique to safely load a pre-compiled kernel module for acquisition on a wide range of Linux kernel versions and configurations. Our technique injects a minimal acquisition module (the parasite) into another valid kernel module (the host) already found on the target system. The resulting combined module is then relinked in such a way as to grant code execution and control over vital data structures to the acquisition code, whilst the host module remains dormant at runtime.

6.
Over the past decade, a substantial effort has been put into developing methods to classify file fragments. Throughout, it has been an article of faith that data fragments, such as disk blocks, can be attributed to different file types. This work critically examines the underlying assumptions and compares them to empirically collected data. Specifically, we focus most of our effort on surveying several common compressed data formats, and show that the simplistic conceptual framework of prior work is at odds with the realities of actual data. We introduce a new tool, zsniff, which allows us to analyze deflate-encoded data, and we use it to perform an empirical survey of deflate-encoded text, images, and executables. The results offer a conceptually new type of classification capability that cannot be achieved by other means.

7.
Existing comparison studies of random access memory (RAM) acquisition tools are either limited in their metrics or cover tools designed for older operating systems. This study therefore evaluates seven widely used shareware or freeware/open-source RAM acquisition forensic tools that are compatible with the latest 64-bit Windows operating systems. The tools' user interface capabilities, platform limitations, reporting capabilities, total execution time, shared and proprietary DLLs, modified registry keys, and files invoked during processing were compared. We observed that Windows Memory Reader and Belkasoft's Live RAM Capturer leave the fewest fingerprints in memory when loaded. On the other hand, ProDiscover and FTK Imager perform poorly in memory usage, processing time, DLL usage, and unwanted artifacts introduced to the system. While Belkasoft's Live RAM Capturer is the fastest to obtain an image of the memory, ProDiscover takes the longest time to do the same job.

8.
Communication apps can be an important source of evidence in a forensic investigation (e.g., in a drug trafficking or terrorism case where the apps were used by the accused during transactions or planning activities). This study presents the first evidence-based forensic taxonomy of Windows Phone communication apps, using an existing two-dimensional Android forensic taxonomy as a baseline. Specifically, 30 Windows Phone communication apps, including Instant Messaging (IM) and Voice over IP (VoIP) apps, are examined. Artifacts extracted using physical acquisition are analyzed, and seven digital evidence objects of forensic interest are identified, namely: Call Log, Chats, Contacts, Locations, Installed Applications, SMSs, and User Accounts. The findings of this study should help facilitate timely and effective forensic investigations involving Windows Phone communication apps.

9.
In this paper we present a methodology for the forensic analysis of the artifacts generated on Android smartphones by Telegram Messenger, the official client for the Telegram instant messaging platform, which provides various forms of secure individual and group communication through which both textual and non-textual messages, as well as voice calls, can be exchanged among users. Our methodology is based on the design of a set of experiments suitable to elicit the generation of artifacts and their retention on the device storage, and on the use of virtualized smartphones to ensure the generality of the results and the full repeatability of the experiments, so that our findings can be reproduced and validated by a third party. We show that, using the proposed methodology, we are able (a) to identify all the artifacts generated by Telegram Messenger, (b) to decode and interpret each of them, and (c) to correlate them in order to infer various types of information that cannot be obtained by considering each in isolation. As a result, we show how to reconstruct the list of contacts, the chronology and contents of the messages that have been exchanged by users, and the contents of files that have been sent or received. Furthermore, we show how to determine significant properties of the various chats, groups, and channels in which the user has been involved (e.g., the identifier of the creator, the date of creation, the date of joining, etc.). Finally, we show how to reconstruct the log of the voice calls made or received by the user. Although we focus on Telegram Messenger, our methodology can be applied to the forensic analysis of any application running on the Android platform.

10.
The Windows Common Controls is a library that facilitates the construction of GUI controls commonly used by Windows applications. Each control is an extension of the basic 'window' class; the difference in the extension distinguishes one control from another, for example an Edit control as opposed to a Button control. The basic window class is documented by Microsoft, and the generic information about a window can be extracted, but this is of very limited use. There is no documentation and very little research into how these extensions are laid out in memory. This paper demonstrates how the extension bytes for the Edit control can be parsed, leading to the identification of previously unobtainable data which reveals information about the state of the control at runtime. Most notably, the undo buffer, that is, text that was previously present in the control, can be recovered, an aspect which traditional disk forensics simply would not provide. The paper explains why previous attempts to achieve similar goals have failed, and how the technique could be applied to any control from the Windows Common Controls library.
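To illustrate what parsing such extension bytes looks like in practice, the sketch below decodes a purely hypothetical layout with Python's struct module. The field offsets, field sizes, and the placement of a UTF-16LE undo buffer are all invented for illustration; as the abstract notes, the real Edit-control layout is undocumented and version-dependent.

```python
import struct

# HYPOTHETICAL layout, for illustration only -- not the real, undocumented
# Edit-control extension structure:
#   offset 0x00  uint32  current text length (characters)
#   offset 0x04  uint32  undo-buffer length (characters)
#   offset 0x08  ...     undo-buffer contents, UTF-16LE (Windows wide chars)
def parse_edit_extension(blob: bytes) -> dict:
    """Decode the hypothetical extension bytes of an Edit control."""
    text_len, undo_len = struct.unpack_from("<II", blob, 0)
    undo_raw = blob[8:8 + undo_len * 2]  # 2 bytes per UTF-16LE character
    return {"text_len": text_len,
            "undo_text": undo_raw.decode("utf-16-le")}
```

The real work in the paper lies in discovering the actual offsets per Windows version; once known, the extraction itself reduces to fixed-offset reads like these.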

11.
With an increase in the creation and maintenance of personal websites, web content management systems are now frequently utilized. Such systems offer a low-cost and simple solution for those seeking to develop an online presence and, subsequently, a platform from which reported defamatory content, abuse, and copyright infringement have been witnessed. This article provides an introductory forensic analysis of the three currently most popular web content management systems: WordPress, Drupal, and Joomla!. Test platforms have been created, and their site structures have been examined to provide guidance for forensic practitioners facing investigations of this type. Results document the metadata available for establishing site ownership, user interactions, and stored content, following analysis of artifacts including WordPress's wp_users and wp_comments tables, Drupal's "watchdog" records, and Joomla!'s _users and _content tables. Finally, investigatory limitations documenting the difficulties of investigating WCMS usage are noted, and analysis recommendations are offered.

12.
In this work, we describe our experiences in developing cloud forensics tools and use them to support three main points. First, we argue that cloud forensics is a qualitatively different problem: in the context of SaaS, it is incompatible with long-established acquisition and analysis techniques and requires a new approach and forensic toolset. We show that client-side techniques, which are an extension of methods used over the last three decades, have inherent limitations that can only be overcome by working directly with the interfaces provided by cloud service providers. Second, we present our results in building forensic tools in the form of three case studies: kumodd, a tool for cloud drive acquisition; kumodocs, a tool for Google Docs acquisition and analysis; and kumofs, a tool for remote preview and screening of cloud drive data. We show that these tools, which work with the public and private APIs of the respective services, provide new capabilities that cannot be achieved by examining client-side artifacts. Finally, we use current IT trends, and our lessons learned, to outline the emerging new forensic landscape and the most likely course of tool development over the next five years.

13.
Due to the democratisation of new technologies, computer forensics investigators have to deal with volumes of data that are becoming increasingly large and heterogeneous. Indeed, on a single machine, hundreds of events occur per minute, produced and logged by the operating system and various pieces of software. The identification of evidence, and more generally the reconstruction of past events, is therefore a tedious and time-consuming task for investigators. Our work aims at automatically reconstructing and analysing the events related to a digital incident, while respecting legal requirements. To tackle these three main problems (volume, heterogeneity, and legal requirements), we identify seven criteria that an efficient reconstruction tool must meet. This paper introduces an approach based on a three-layered ontology, called ORD2I, to represent any digital event. ORD2I is associated with a set of operators to analyse the resulting timeline and to ensure the reproducibility of the investigation.
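As a toy illustration of the event-reconstruction step (not ORD2I itself), the sketch below normalizes records from heterogeneous sources into a common (timestamp, source, message) form and merges them into a single time-ordered timeline; the record layout and function names are assumptions.

```python
from datetime import datetime
from heapq import merge

def normalize(source: str, ts: str, msg: str) -> tuple:
    """Map one raw log record onto a common (timestamp, source, message) form."""
    return (datetime.fromisoformat(ts), source, msg)

def build_timeline(*sources):
    """Merge per-source event lists (each already time-ordered) into one timeline."""
    return list(merge(*sources, key=lambda event: event[0]))
```

Because each source is already sorted by time, `heapq.merge` produces the unified timeline in a single linear pass rather than re-sorting everything.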

14.
Investigating seized devices within digital forensics becomes more and more difficult due to the increasing amount of data. Hence, a common procedure uses automated file identification, which reduces the amount of data an investigator has to look at by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is also helpful to detect similar data by applying approximate matching. Let x denote the number of digests in a database; then the lookup of a single similarity digest has complexity O(x), i.e., the digest has to be compared against all digests in the database. In contrast, cryptographic hash values are stored within binary trees or hash tables, and hence the lookup complexity of a single digest is O(log2(x)) or O(1), respectively. In this paper we present and evaluate a concept to extend existing approximate matching algorithms that reduces the lookup complexity from O(x) to O(1). Instead of using multiple small Bloom filters (the common procedure), we demonstrate that a single, huge Bloom filter has far better performance. Our evaluation shows that current approximate matching algorithms are too slow (e.g., over 21 min to compare 4457 digests of a common file corpus against each other), while the improved version solves this challenge within seconds. Studying the precision and recall rates shows that our approach works as reliably as the original implementations. This benefit comes at the cost of accuracy: the comparison is now a file-against-set comparison, and thus it is not possible to see which file in the database was matched.
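A minimal sketch of the single-large-Bloom-filter idea, assuming SHA-256-derived bit positions (the paper's actual parameters and hash choices may differ): membership tests cost O(1) in the number of stored digests, and, as noted above, the filter can only answer whether a digest is in the set, not which file it came from.

```python
import hashlib

class BloomFilter:
    """One large Bloom filter for O(1) set-membership tests on digests."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 5):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        # May return a false positive, never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

The lookup cost depends only on k and the filter size, not on how many digests have been inserted, which is exactly the O(x) to O(1) improvement described above.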

15.
Science & Justice, 2021, 61(5), 627-634
The importance of ensuring that the results of any digital forensic (DF) examination are effectively communicated cannot be overstated. In most cases, this communication will be done via a written report, yet despite this there is arguably limited best-practice guidance available which is specific to this field with regard to report construction. Poor reporting practices in DF are likely to undermine the reliability of evidence provided across the field. There is a need for formalised guidance regarding the requirements for effective DF report construction; this should not be a task left solely to each individual practitioner to determine without instruction. Here, the field of DF should look to the wider forensic community and the existing work in this area for support. In line with many other 'traditional' forensic science disciplines, a DF practitioner can be commissioned to report in one of three ways: 'technical', 'investigative', or 'evaluative', where each reporting type maintains a specific purpose and interpretative context, determined by the examination workflow undertaken by the practitioner following client instruction. This work draws upon guidance set out in fundamental forensic science reporting literature to describe each reporting type in turn, outlining its scope, content, and construction requirements in an attempt to provide support for the DF field.

16.
Forensic psychologists are sometimes faced with the task of educating triers of fact about the evidential weight of dissociative experiences reported by claimants in litigation procedures. In their two-part essay, Brand et al. (Psychological Injury and Law, 10, 283-297, 2017a; Psychological Injury and Law, 10, 298-312, 2017b) provide advice to experts who find themselves in such a situation. We argue that the Brand et al. approach is problematic and might induce confirmation bias in experts. Their approach is not well connected to the extant literature on recovered memories, dissociative amnesia, memory distortions, and symptom validity testing. In some instances, Brand et al. simplify the current body of knowledge about dissociation; in other instances, they ignore relevant empirical studies to an extent that is worrisome.

17.
This article proposes an analytical framework for exploring policy responses to common challenges of environmental governance. Observing that governance involves multiple processes, I begin by identifying a conceptual platform for studying unilateral learning and adaptation as well as international cooperation as integral and interacting components of a complex governance system. I propose the concept of co-evolution as the cornerstone of this platform and distinguish between two modes of co-evolution: diffusion and cooperation. The article draws findings and propositions from recent literature to identify the mechanisms at work and the conditions under which they foster mutually beneficial solutions. Indicating how important governance challenges differ with respect to these conditions, I build the case for a diagnostic and differential approach that matches capacity-building and policy strategies with the challenge in focus.

18.
Science & Justice, 2022, 62(3), 349-357
Shahtoosh, the most expensive and sought-after wool in the illegal wildlife trade, is obtained from the underfur of a critically endangered species, the Tibetan antelope (Pantholops hodgsonii). It is often adulterated or mixed with the wool of the Pashmina goat (Capra aegagrus hircus) when making shawls, scarves, and other woollen articles, in order to maximize profit. Their comparable fineness, color, and texture make differentiating them a challenging task in wildlife forensics. In this study, an attempt has been made to differentiate 50 reference unprocessed underfur hairs from five individuals of each species using ATR FT-IR spectroscopy in combination with chemometric tools such as PCA and PLS-DA. The PCA model showed slight overlap and thus failed to distinguish the hairs of the two species. Subsequently, a PLS-DA model was employed, and validation tests (external and blind testing) were carried out to ensure the predictive ability of the model, which resulted in 100% accuracy. The PLS-DA model completely differentiated Shahtoosh, Pashmina, and Angora (Oryctolagus cuniculus domesticus) wool used in the external validation study, with highly significant predictive ability (R-squared value 0.99). This proof-of-concept study illustrates the potential of ATR FT-IR spectroscopy to complement current forensic microscopic and DNA-based techniques for analysing hair evidence in wildlife investigations, owing to its rapid and non-destructive nature, high degree of confidence, and ease of use with minimal to no sample preparation.

19.
This article reviews maltreatment-related pediatric posttraumatic stress disorder (PTSD) neuroimaging and neuropsychology research. Existing interventions that target brain networks associated with PTSD and cognitive impairment are highlighted. Furthermore, the benefits of combining neuroimaging and neuropsychology research in treatment outcomes are discussed. To conduct this review, a literature search was performed using the terms MRI, structural, functional, neuropsychological testing, children, maltreatment, treatment, and PTSD, supplemented with a direct search of developmental trauma experts. The neuroimaging studies found differences in total cerebral volume, the prefrontal cortex, hippocampus, cerebellum, superior temporal gyrus, corpus callosum, and other regions in maltreatment-related childhood PTSD. Neuropsychological findings demonstrate deficits in memory, attention, learning, and executive function that correspond to these brain regions. Existing and novel psychotherapeutic interventions address these deficits and may be directed at key networks associated with cognitive processing. Future directions include the implementation of treatment-outcome research integrating psychotherapy with putative biological and psychological markers.

20.
Automated input identification is a very challenging but also important task. Within computer forensics it reduces the amount of data an investigator has to look at by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is also necessary to cope with similar inputs (e.g., different versions of a file), embedded objects (e.g., a JPG within a Word document), and fragments (e.g., network packets). In recent years a number of different similarity hashing algorithms have been published. However, due to the absence of a definition and a test framework, it has hardly been possible to evaluate and compare these approaches and establish them in the community. This paper aims at providing an assessment methodology and a sample implementation called FRASH: a framework to test algorithms of similarity hashing. First, we describe common use cases of a similarity hashing algorithm to motivate our two test classes, efficiency and sensitivity & robustness. Next, our open and freely available framework is briefly described. Finally, we apply FRASH to the well-known similarity hashing approaches ssdeep and sdhash to show their strengths and weaknesses.
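Real similarity hashes such as ssdeep's context-triggered piecewise hashing are considerably more sophisticated, but a toy stand-in conveys what a similarity score is: the sketch below compares byte-frequency profiles with cosine similarity, scaled to the 0-100 range such tools typically report. The helper names are assumptions.

```python
import math
from collections import Counter

def similarity(a: bytes, b: bytes) -> float:
    """Cosine similarity of byte-frequency profiles, scaled to 0..100.

    A crude stand-in for a real similarity hash: identical inputs score
    100, unrelated byte distributions score near 0.
    """
    pa, pb = Counter(a), Counter(b)
    dot = sum(pa[x] * pb[x] for x in pa)
    na = math.sqrt(sum(v * v for v in pa.values()))
    nb = math.sqrt(sum(v * v for v in pb.values()))
    return 100.0 * dot / (na * nb) if na and nb else 0.0
```

A framework like FRASH then probes such a function systematically, e.g., checking that appending a few bytes barely lowers the score (robustness) while unrelated inputs score low (sensitivity).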

