Similar Documents
20 similar records retrieved.
1.
Virtual Machine Introspection (VMI) has emerged as a fine-grained, out-of-VM security solution that detects malware by introspecting and reconstructing the volatile memory state of the live guest Operating System (OS). Specifically, it operates from the Virtual Machine Monitor (VMM), or hypervisor. The semantic details reconstructed by VMI at the hypervisor contain a mixture of benign and malicious states, and distinguishing between the two requires extensive manual analysis in existing out-of-VM security solutions. In this paper, we propose an advanced VMM-based, guest-assisted Automated Internal-and-External (A-IntExt) introspection system that leverages VMI, Memory Forensics Analysis (MFA), and machine learning techniques at the hypervisor. We use the VMI-based technique to introspect digital artifacts of the live guest OS and obtain a semantic view of process details. We implemented an Intelligent Cross View Analyzer (ICVA) and incorporated it into the proposed A-IntExt system; it examines the data supplied by the VMI to detect hidden, dead, and dubious processes, while also predicting early symptoms of malware execution on the introspected guest OS in a timely manner. Machine learning techniques are used to analyze the executables mined and extracted with MFA-based techniques and to identify the malicious ones. The practicality of the A-IntExt system is evaluated by executing a large set of real-world malware and benign executables on live guest OSs. The system achieved 99.55% accuracy and a 0.004 False Positive Rate (FPR) under 10-fold cross-validation for detecting unknown malware on the generated dataset. Additionally, the proposed system was validated against other benchmark malware datasets, where A-IntExt outperformed comparable approaches in detecting real-world malware at the VMM, with an improvement exceeding 6.3%.
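A minimal sketch of the final classification stage described in this abstract: feature vectors extracted from introspected executables are scored with a supervised classifier under 10-fold cross-validation. The feature layout, the synthetic data, and the choice of a random-forest model are assumptions for illustration; the paper's actual feature set and learner are not specified here.

```python
# Hypothetical illustration: 10-fold cross-validation over feature vectors
# extracted from introspected executables (features and labels are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Assume each executable was reduced to a fixed-length numeric feature vector
# by the MFA stage (e.g., section entropy, imported-API counts, string stats).
n_samples, n_features = 1000, 32
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)          # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {acc.mean():.4f} +/- {acc.std():.4f}")
```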

2.
《Digital Investigation》2014,11(4):323-335
The battle between malware developers and security analysts continues, and the number of malware samples and variants keeps increasing every year. Automated malware generation tools and various detection-evasion techniques are also developed every year. To keep pace with advances in malware development, malware analysis techniques need to advance as well to help security analysts. In this paper, we propose a malware analysis method that categorizes malware using dynamic mnemonic frequencies. We also propose a redundancy filtering technique to alleviate the drawbacks of dynamic analysis. Experimental results show that the proposed method can categorize malware and reduce the storage overhead of dynamic analysis.
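A minimal sketch of the core idea, under my own assumptions about input format: each sample's dynamic trace is reduced to a normalized mnemonic-frequency profile, and samples are grouped by profile similarity. The trace format and the cosine measure are illustrative choices, not the paper's exact method.

```python
# Hypothetical sketch: categorize samples by the frequency of executed mnemonics.
from collections import Counter
import math

def mnemonic_profile(trace):
    """Normalized frequency profile (dict) from a list of executed mnemonics."""
    counts = Counter(trace)
    total = sum(counts.values())
    return {m: c / total for m, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[m] * q.get(m, 0.0) for m in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Toy traces standing in for dynamically recorded instruction mnemonics.
trace_a = ["mov", "push", "call", "mov", "xor", "jmp", "mov", "call"]
trace_b = ["mov", "push", "call", "mov", "xor", "jnz", "mov", "call"]
trace_c = ["rdtsc", "cpuid", "rdtsc", "cmp", "jne", "int3"]

pa, pb, pc = (mnemonic_profile(t) for t in (trace_a, trace_b, trace_c))
print("A vs B:", round(cosine(pa, pb), 3))   # similar -> likely same category
print("A vs C:", round(cosine(pa, pc), 3))   # dissimilar
```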

3.
The Android operating system had the highest market share in 2014, making it the most widely used mobile operating system in the world. This fact makes Android users the biggest target group for malware developers. Trend analyses show a large increase in mobile malware targeting the Android platform. Android's security mechanism is based on an instrument that informs users which permissions an application must be granted before installation. This permission system provides an overview of the application and may help raise awareness of the risks. However, there is not enough evidence to conclude that standard users read these permissions or that digital investigators understand their implications. Digital investigators need to be alert to the presence of malware when examining Android devices, and can benefit from supporting tools that help them understand the capabilities of such malicious code. This paper presents APK Auditor, a permission-based Android malware detection system that uses static analysis to characterize and classify Android applications as benign or malicious. APK Auditor consists of three components: (1) a signature database that stores extracted information about applications and analysis results, (2) an Android client used by end-users to submit application analysis requests, and (3) a central server responsible for communicating with both the signature database and the smartphone client and for managing the whole analysis process. To test system performance, 8762 applications in total (1853 benign applications from Google's Play Store and 6909 malicious applications from different sources) were collected and analyzed by the developed system. The results show that APK Auditor is able to detect most well-known malware and highlights applications with malicious potential with approximately 88% accuracy and 0.925 specificity.
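A minimal sketch of permission-based classification, assuming the requested permissions have already been extracted from each APK (for example with a static-analysis tool); the permission lists, training data, and logistic-regression model here are illustrative stand-ins, not APK Auditor's actual pipeline.

```python
# Hypothetical sketch: binary permission vectors fed to a linear classifier.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Requested permissions per app (assumed already extracted from AndroidManifest.xml).
apps = [
    {"INTERNET": 1, "SEND_SMS": 1, "READ_CONTACTS": 1},     # malicious-looking
    {"INTERNET": 1, "RECEIVE_BOOT_COMPLETED": 1, "SEND_SMS": 1},
    {"INTERNET": 1, "ACCESS_FINE_LOCATION": 1},             # benign-looking
    {"INTERNET": 1, "CAMERA": 1},
]
labels = [1, 1, 0, 0]   # 1 = malicious, 0 = benign

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(apps)

clf = LogisticRegression().fit(X, labels)

unknown = vec.transform([{"INTERNET": 1, "SEND_SMS": 1, "READ_SMS": 1}])
print("P(malicious) =", round(clf.predict_proba(unknown)[0, 1], 3))
```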

4.
The assessment of signature disguise, where an individual attempts to disguise their own signature on a document with the intent of later disclaiming it (so-called 'view to deny' signatures), is a problem faced by many document examiners. This study evaluates a method known as the angle value test and another experimental method involving angle measurements to determine if either of them can reliably establish whether a questioned signature is disguised or has been written by another person. By using 29 sets of normal and disguised signatures, both methods of analysis were shown to be unreliable techniques for identifying the author of a particular signature.

5.
Memory analysis has gained popularity in recent years, proving to be an effective technique for uncovering malware in compromised computer systems. The process of memory acquisition presents unique evidentiary challenges since many acquisition techniques require code to be run on a potentially compromised system, presenting an avenue for anti-forensic subversion. In this paper, we examine a number of simple anti-forensic techniques and test a representative sample of current commercial and free memory acquisition tools. We find that current tools are not resilient to very simple anti-forensic measures. We present a novel memory acquisition technique, based on direct page table manipulation and PCI hardware introspection, that does not rely on operating system facilities, making it more difficult to subvert. We then evaluate this technique's further vulnerability to subversion by considering more advanced anti-forensic attacks.

6.
Reverse engineering is the primary step in analyzing a piece of malware. After disassembling a malware binary, a reverse engineer needs to spend extensive effort analyzing the resulting assembly code and then documenting it through comments for future reference. In this paper, we develop an assembly code clone search system called ScalClone based on our previous work on assembly code clone detection systems. The objective of the system is to identify code clones of a target malware sample from a collection of previously analyzed malware binaries. Our new contributions are summarized as follows: first, we introduce two assembly code clone search methods for malware analysis with a high recall rate; second, our methods allow malware analysts to discover both exact and inexact clones at different token normalization levels; third, we present a scalable system with a database model to support large-scale assembly code search. Finally, experimental results on real-life malware binaries suggest that the proposed methods can effectively identify assembly code clones under different scenarios of code mutation.
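A minimal sketch of inexact clone matching via token normalization, under my own simplifications: operands are abstracted to coarse classes (register, immediate, memory) and instruction n-grams are compared with Jaccard similarity. ScalClone's own normalization levels and scoring are more elaborate; this only illustrates the general idea.

```python
# Hypothetical sketch: normalize assembly operands, then compare n-gram sets.
import re

def normalize(line):
    """Abstract operands: registers -> REG, immediates -> IMM, memory refs -> MEM."""
    mnemonic, _, operands = line.strip().partition(" ")
    out = []
    for op in filter(None, (o.strip() for o in operands.split(","))):
        if op.startswith("["):
            out.append("MEM")
        elif re.fullmatch(r"(0x[0-9a-fA-F]+|\d+)", op):
            out.append("IMM")
        else:
            out.append("REG")
    return " ".join([mnemonic] + out)

def ngrams(tokens, n=3):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

target = ["mov eax, [ebp+8]", "xor eax, 0x5A", "add eax, ecx", "ret"]
candidate = ["mov ebx, [esp+4]", "xor ebx, 0x7F", "add ebx, edx", "ret"]

t = [normalize(i) for i in target]
c = [normalize(i) for i in candidate]
print("clone score:", round(jaccard(ngrams(t), ngrams(c)), 2))  # 1.0 after normalization
```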

7.
The role of live forensics in digital forensic investigations has become vital due to the importance of volatile data such as encryption keys, network activity, currently running processes, in-memory-only malware, and other key pieces of data that are lost when a device is powered down. While the technology to perform the first steps of a live investigation, physical memory collection and preservation, is available, the tools for completing the remaining steps remain incomplete. First-generation memory analyzers performed simple string and regular-expression operations on the memory dump to locate data such as passwords, credit card numbers, fragments of chat conversations, and social security numbers. A more in-depth analysis can reveal information such as running processes, networking information, open file data, loaded kernel modules, and other critical information that can be used to gain insight into activity occurring on the machine when the memory acquisition occurred. To be useful, tools for performing this in-depth analysis must support a wide range of operating system versions with minimal configuration. Current live forensics tools are generally limited to a single kernel version, a very restricted set of closely related versions, or require substantial manual intervention. This paper describes techniques developed to allow automatic adaptation of memory analysis tools to a wide range of kernel versions. Dynamic reconstruction of kernel data structures is achieved by analyzing the memory dump for the instructions that reference the needed kernel structure members. The ability to dynamically recreate C structures used within the kernel allows a large amount of information to be obtained and processed. Currently, this capability is used within a tool called RAMPARSER that is able to simulate commands such as ps and netstat as if an investigator were sitting at the machine at the time of the memory acquisition. Other applications of the developed capabilities include kernel-level malware detection, recovery of process memory and file mappings, and other areas of forensic interest.
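A heavily simplified sketch of what a ps-style reconstruction over a raw Linux memory image can look like once the relevant structure offsets are known. Every offset and address below is a hypothetical placeholder, the address translation is reduced to an identity mapping, and RAMPARSER derives such offsets automatically rather than hard-coding them as done here.

```python
# Hypothetical sketch: enumerate processes by walking the (circular) kernel task
# list in a raw memory image. All offsets are placeholders; a real tool must also
# translate kernel virtual addresses to file offsets instead of the identity map.
import struct

INIT_TASK_OFF = 0x01A23480   # assumed file offset of the init_task structure
TASKS_OFF     = 0x3A0        # assumed offset of task_struct.tasks (list_head)
PID_OFF       = 0x4E8        # assumed offset of task_struct.pid (int)
COMM_OFF      = 0x738        # assumed offset of task_struct.comm (char[16])

def list_processes(image_path, to_file_offset=lambda va: va):
    """to_file_offset: caller-supplied virtual-address -> file-offset translator."""
    with open(image_path, "rb") as f:
        img = f.read()
    seen, cur = set(), INIT_TASK_OFF
    while cur not in seen and 0 <= cur < len(img) - COMM_OFF - 16:
        seen.add(cur)
        pid = struct.unpack_from("<i", img, cur + PID_OFF)[0]
        comm = img[cur + COMM_OFF:cur + COMM_OFF + 16].split(b"\0")[0].decode(errors="replace")
        print(f"{pid:6d}  {comm}")
        next_va = struct.unpack_from("<Q", img, cur + TASKS_OFF)[0]   # tasks.next
        cur = to_file_offset(next_va) - TASKS_OFF                     # back to struct base
```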

8.
We present a novel approach for the construction and application of cryptographic hashes to user space memory for the purpose of verifying the provenance of code in memory images. Several key aspects of Windows behaviour which influence this process are examined in depth. Our approach is implemented and evaluated on a selection of malware samples with user space components as well as a collection of common Windows applications. The results demonstrate that our approach is highly effective at reducing the amount of memory requiring manual analysis, highlighting the presence of malicious code in all the malware sampled.
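A minimal sketch of the hash-verification idea, assuming the executable region of a process has already been extracted from a memory image: it is hashed in page-sized chunks and each hash is checked against a set of hashes precomputed from the known-good on-disk binary. The page size, hash choice, and whitelist format are assumptions, and the paper additionally handles Windows-specific effects (e.g. relocations) that this ignores.

```python
# Hypothetical sketch: per-page SHA-256 hashes of an extracted code region
# compared against a whitelist built from the known-good binary.
import hashlib

PAGE = 0x1000  # 4 KiB pages

def page_hashes(data):
    return [hashlib.sha256(data[i:i + PAGE]).hexdigest()
            for i in range(0, len(data), PAGE)]

def verify(region, whitelist):
    """Return indices of pages whose hash is not whitelisted (need manual review)."""
    return [i for i, h in enumerate(page_hashes(region)) if h not in whitelist]

known_good = b"\x90" * PAGE * 3                      # stand-in for on-disk code section
whitelist = set(page_hashes(known_good))

in_memory = bytearray(known_good)
in_memory[PAGE + 0x10:PAGE + 0x20] = b"\xCC" * 16    # simulated in-memory patch

print("suspicious pages:", verify(bytes(in_memory), whitelist))   # -> [1]
```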

9.
Today many investigations involve TomTom devices due to the widespread use of these navigation systems. Acquiring a memory dump from the first generation of TomTom devices was relatively easy using the USB connection and standard forensic tools. Newer devices, however, do not provide this or any other readily available data connection, making the task much more complex. In addition to existing and relatively complex chip-extraction procedures, an easier data acquisition method was developed that does not require de-soldering the flash memory chips. The presence of new files and differences in the data formats found in these devices meant that new methods of data analysis and decoding also needed to be developed.

10.
Video content stored in Video Event Data Recorders (VEDRs) is used as important evidence when events such as vehicle collisions occur. However, with sophisticated video editing software, assailants can easily manipulate video records to their advantage without leaving visible clues. Therefore, the integrity of video content recorded through VEDRs cannot be guaranteed, and the number of related forensic issues is increasing. Existing video integrity detection methods use the statistical properties of the pixels within each frame of the video. However, these methods require considerable time because they check frames individually; moreover, frames can easily be replaced and forged using publicly available software. To solve this problem, we propose an integrity checking mechanism that uses the structure of ordered fields in a video file, because existing video editing software does not allow users to access or modify field structures. In addition, because our proposed method checks the header information of the video content only once, much less detection time is required compared with existing methods that examine all frames. We store the ordered file structure of video content as a signature in a database using a customized automated tool; the signature depends on the video editing software used. The suspected video content is then compared against the set of signatures. If the file structure matches a signature, we recognize the manipulated video file and its corresponding editing software. We tested five types of video editing software that cover 99% of the video editing software market share, and covered 305,981 saving-option combinations across all five suites. As a result, we obtained 100% detection accuracy using the stored signatures, without false positives, on a collection of 305,981 video files. The principle of this method can be applied to other video formats.
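A minimal sketch of the ordered-structure idea for an MP4/MOV container: the top-level box (atom) types are read in order and compared against stored signatures associated with specific editing tools. The signature table and filename are invented for illustration, and real files need extra handling (nested boxes, damaged headers) that is omitted here.

```python
# Hypothetical sketch: top-level MP4 box order as an integrity signature.
import struct

def box_order(path):
    """Return the ordered tuple of top-level box types in an MP4/MOV file."""
    types = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, btype = struct.unpack(">I4s", header)
            types.append(btype.decode("latin-1"))
            if size == 1:                       # 64-bit size follows the type
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:                     # box extends to end of file
                break
            else:
                f.seek(size - 8, 1)
    return tuple(types)

# Invented signature table: box order expected from each recorder/editor.
SIGNATURES = {
    ("ftyp", "mdat", "moov"): "original VEDR recording",
    ("ftyp", "moov", "mdat", "free"): "re-saved by editing suite X",
}

order = box_order("questioned.mp4")   # hypothetical questioned file
print(order, "->", SIGNATURES.get(order, "unknown structure (manual review)"))
```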

11.
Body fluid traces recovered at crime scenes are among the most common and important types of forensic evidence. However, the ability to characterize a biological stain at a crime scene nondestructively has not yet been demonstrated. Here, we expand the Raman spectroscopic approach for the identification of dry traces of pure body fluids to address the problem of heterogeneous contamination, which can impair the performance of conventional methods. The concept of multidimensional Raman signatures was utilized for the identification of blood in dry traces contaminated with sand, dust, and soil. Multiple Raman spectra were acquired from the samples via automatic scanning, and the contribution of blood was evaluated through the fitting quality using spectroscopic signature components. The spatial mapping technique allowed for detection of "hot spots" dominated by blood contribution. The proposed method has great potential for blood identification in highly contaminated samples.
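A minimal sketch of evaluating the blood contribution at each mapping point by least-squares fitting of the measured spectrum with signature components; the spectra here are synthetic stand-ins, and the real method uses experimentally derived multidimensional Raman signatures and a more rigorous fit-quality criterion.

```python
# Hypothetical sketch: fit each measured Raman spectrum as a combination of
# signature components and use the blood weight to flag "hot spots" in a scan.
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 1800, 500)

def peak(center, width):
    return np.exp(-((wavenumbers - center) ** 2) / (2 * width ** 2))

# Synthetic signature components (stand-ins for real reference spectra).
blood = peak(1545, 12) + 0.8 * peak(1620, 10)       # haemoglobin-like bands
substrate = peak(1085, 20) + 0.5 * peak(460, 25)    # mineral/soil-like bands
components = np.column_stack([blood, substrate])

def blood_fraction(measured):
    coeffs, *_ = np.linalg.lstsq(components, measured, rcond=None)
    coeffs = np.clip(coeffs, 0, None)
    return coeffs[0] / coeffs.sum() if coeffs.sum() > 0 else 0.0

# Two simulated mapping points: one dominated by substrate, one by blood.
point_clean = 0.05 * blood + 1.0 * substrate + rng.normal(0, 0.01, wavenumbers.size)
point_stain = 0.90 * blood + 0.3 * substrate + rng.normal(0, 0.01, wavenumbers.size)

for name, spec in [("background", point_clean), ("hot spot", point_stain)]:
    print(f"{name}: blood fraction ~ {blood_fraction(spec):.2f}")
```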

12.
Scientific content analysis (SCAN) is a technique that claims to enable the detection of deception in written statements. The underlying assumption is that statements of self-experienced events differ in several ways, such as liveliness and concreteness, from imaginary statements. It is used in many countries as an investigative tool. Nevertheless, little research on the reliability and validity of the SCAN technique is available. In this paper, two studies are presented. The first study focuses on the accuracy of SCAN in detecting deception by three groups of raters with different levels of experience, and it shows a lack of validity of SCAN. Study 2 investigated inter-rater reliability as a possible explanation for the poor validity results, and found little agreement between raters in identifying SCAN criteria. Overall, the results indicate that the psychometric qualities of SCAN as an investigative tool are insufficient for use in police practice.
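For readers unfamiliar with how inter-rater agreement is quantified, a minimal sketch of Cohen's kappa for two raters scoring the presence of a SCAN criterion is given below; the ratings are invented, and the study itself may have used a different agreement statistic.

```python
# Hypothetical sketch: Cohen's kappa for two raters scoring a binary SCAN criterion.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n          # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Invented ratings: 1 = criterion present, 0 = absent, for 12 statements.
rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1]
rater_b = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1]

print("kappa =", round(cohens_kappa(rater_a, rater_b), 2))   # low kappa = poor agreement
```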

13.
A prototype based on a simple mathematical treatment of the pen pressure data recorded by a digital pen movement recording device was developed. In this study, a total of 48 sets of signature and initial specimens were collected. Pearson's correlation coefficient was used to compare the pen pressure patterns. From the 820 pairwise comparisons within the 48 sets of genuine signatures, a high degree of matching was found: 95.4% (782 pairs) and 80% (656 pairs) of pairs had rPA > 0.7 and rPA > 0.8, respectively. In the comparison of the 23 forged signatures with their corresponding control signatures, 20 of them (89.2% of pairs) had rPA values < 0.6, showing a lower degree of matching than the genuine signatures. The prototype could be used as a complementary technique to improve the objectivity of signature examination and also has good potential to be developed into a tool for automated signature identification.
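A minimal sketch of the comparison step, with invented pressure data: two pressure profiles are resampled to a common length and their Pearson correlation coefficient (the rPA value above) is computed and thresholded. Resampling by linear interpolation is my own simplification; how the prototype aligns profiles of different length is not specified here.

```python
# Hypothetical sketch: Pearson correlation between two pen-pressure profiles.
import numpy as np

def r_pa(pressure_a, pressure_b, n=256):
    """Resample both pressure traces to n points and return Pearson's r."""
    a = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(pressure_a)), pressure_a)
    b = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(pressure_b)), pressure_b)
    return float(np.corrcoef(a, b)[0, 1])

# Invented pressure traces (arbitrary units from a digitizing pen).
t = np.linspace(0, 1, 300)
genuine_1 = 0.6 + 0.30 * np.sin(6 * np.pi * t) * np.exp(-t)
genuine_2 = 0.6 + 0.28 * np.sin(6 * np.pi * t + 0.1) * np.exp(-t)   # same writer
forgery   = 0.5 + 0.30 * np.sin(3 * np.pi * t)                      # different dynamics

for label, trace in [("genuine vs genuine", genuine_2), ("genuine vs forgery", forgery)]:
    r = r_pa(genuine_1, trace)
    verdict = "match" if r > 0.7 else "non-match"
    print(f"{label}: rPA = {r:.2f} ({verdict})")
```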

14.
A spectral analysis method for signature handwriting is presented: dynamic features of the signature, such as pen pressure and velocity, are extracted, and their frequency spectra are computed using the Fourier transform. Experiments show that the spectra of signatures written by the same person are highly similar and clearly distinguishable from the signatures of other writers. The spectral analysis method thus provides an intuitive and effective way to examine signature handwriting.
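A minimal sketch of the Fourier-based idea, with synthetic data: the pressure (or velocity) trace of a signature is transformed with an FFT and the magnitude spectra of two signatures are compared by correlation. The sampling rate, the use of pressure only, and the correlation-based comparison are assumptions for illustration.

```python
# Hypothetical sketch: compare magnitude spectra of pen-pressure traces via FFT.
import numpy as np

FS = 200  # assumed tablet sampling rate in Hz

def magnitude_spectrum(signal):
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove DC offset
    return np.abs(np.fft.rfft(sig * np.hanning(sig.size)))

def spectrum_similarity(sig_a, sig_b):
    a, b = magnitude_spectrum(sig_a), magnitude_spectrum(sig_b)
    n = min(a.size, b.size)
    return float(np.corrcoef(a[:n], b[:n])[0, 1])

t = np.arange(0, 2.0, 1 / FS)                   # two-second signatures
writer1_a = 0.5 + 0.30 * np.sin(2 * np.pi * 4 * t) + 0.10 * np.sin(2 * np.pi * 9 * t)
writer1_b = 0.5 + 0.28 * np.sin(2 * np.pi * 4 * t + 0.4) + 0.12 * np.sin(2 * np.pi * 9 * t)
writer2   = 0.5 + 0.30 * np.sin(2 * np.pi * 2 * t) + 0.20 * np.sin(2 * np.pi * 13 * t)

print("same writer:     ", round(spectrum_similarity(writer1_a, writer1_b), 2))
print("different writer:", round(spectrum_similarity(writer1_a, writer2), 2))
```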

15.
This paper aims to evaluate possible threats posed by unofficial Android marketplaces and to geo-localize malware distribution over three main regions: China, Europe, and Russia. It provides a comprehensive review of the existing academic literature on Android security, focusing especially on malware detection systems and existing malware databases. Through the implementation of a methodology for identifying malicious applications, data was collected revealing that 5% of the applications were malicious in an overall analysis. Furthermore, the analysis showed that Russia and Europe have a preponderance of generic detections and adware, while China is targeted mainly by riskware and malware.

16.
Dynamic malware analysis aims at revealing malware's runtime behavior. To evade analysis, advanced malware is able to detect the underlying analysis tool (e.g., one based on emulation). On the other hand, existing malware-transparent analysis tools incur significant performance overhead, making them unsuitable for live malware monitoring and forensics. In this paper, we present IntroLib, a practical tool that traces user-level library calls made by malware with low overhead and high transparency. IntroLib is based on hardware virtualization and resides outside of the guest virtual machine where the malware runs. Our evaluation of an IntroLib prototype with 93 real-world malware samples shows that IntroLib is immune to emulation and API-hooking detection by malware, uncovers more semantic information about malware behavior than system call tracing, and incurs low overhead (<15% in all but one test case) in performance benchmark testing.

17.
The aims of this study were to determine if computer-measured dynamic features (duration, size, velocity, jerk, and pen pressure) differ between genuine and simulated signatures. Sixty subjects (3 equal groups of 3 signature styles) each provided 10 naturally written (genuine) signatures. Each of these subjects then provided 15 simulations of each of three model signatures. The genuine (N = 600) and simulated (N = 2700) signatures were collected using a digitizing tablet. MovAlyzeR® software was used to estimate kinematic parameters for each pen stroke. Stroke duration, velocity, and pen pressure were found to discriminate between genuine and simulated signatures regardless of the simulator's own style of signature or the style of signature being simulated. However, there was a significant interaction between style and condition for size and jerk (a measure of smoothness). The results of this study, based on quantitative analysis and dynamic handwriting features, indicate that the style of the simulator's own signature and the style of signature being simulated can impact the characteristics of handwriting movements for simulations. Writer style characteristics might therefore need to be taken into consideration as potentially significant when evaluating signature features with a view to forming opinions regarding authenticity.
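A minimal sketch of extracting the dynamic features named above (duration, size, velocity, jerk, pen pressure) from one digitized pen stroke; the sampling rate and trajectory are invented, and MovAlyzeR's actual smoothing and segmentation steps are not reproduced.

```python
# Hypothetical sketch: per-stroke kinematic features from digitizer samples.
import numpy as np

FS = 200.0                      # assumed sampling rate in Hz
t = np.arange(0, 0.6, 1 / FS)   # one 0.6 s pen stroke

# Invented stroke trajectory (mm) and pressure (arbitrary units).
x = 10 * t + 2 * np.sin(2 * np.pi * 3 * t)
y = 5 * np.sin(2 * np.pi * 1.5 * t)
pressure = 0.6 + 0.2 * np.sin(2 * np.pi * 2 * t)

vx, vy = np.gradient(x, 1 / FS), np.gradient(y, 1 / FS)
speed = np.hypot(vx, vy)                             # tangential velocity (mm/s)
ax, ay = np.gradient(vx, 1 / FS), np.gradient(vy, 1 / FS)
jx, jy = np.gradient(ax, 1 / FS), np.gradient(ay, 1 / FS)
jerk = np.hypot(jx, jy)                              # smoothness indicator (mm/s^3)

features = {
    "duration_s": float(t[-1] - t[0]),
    "size_mm": float(np.sum(speed) / FS),            # approximate path length
    "mean_velocity_mm_s": float(speed.mean()),
    "mean_jerk_mm_s3": float(jerk.mean()),
    "mean_pressure": float(pressure.mean()),
}
print(features)
```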

18.
File carving is the process of reassembling files from disk fragments based on the file content in the absence of file system metadata. By leveraging both file header and footer pairs, traditional file carving mainly focuses on document and image files such as PDF and JPEG. With the vast amount of malware code appearing in the wild daily, recovery of binary executable files becomes an important problem, especially for the case in which malware deletes itself after compromising a computer. However, unlike image files that usually have both a header and footer pair, executable files only have header information, which makes the carving much harder. In this paper, we present Bin-Carver, a first-of-its-kind system to automatically recover executable files with deleted or corrupted metadata. The key idea is to explore the road map information defined in executable file headers and the explicit control flow paths present in the binary code. Our experiment with thousands of binary code files has shown our Bin-Carver to be incredibly accurate, with an identification rate of 96.3% and recovery rate of 93.1% on average when handling file systems ranging from pristine to chaotic and highly fragmented.
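A minimal sketch of the header-driven part of executable carving: scan a raw image for the ELF magic, read the section-header fields from the header, and use them to estimate the file's extent. The field offsets below are those of a little-endian 64-bit ELF header; Bin-Carver's handling of fragmentation via control-flow "road maps" is not reproduced here.

```python
# Hypothetical sketch: carve contiguous 64-bit ELF files from a raw image by
# locating the header magic and estimating file size from section-header fields.
import struct

ELF_MAGIC = b"\x7fELF"

def carve_elf(image_path, out_prefix="carved"):
    with open(image_path, "rb") as f:
        img = f.read()
    pos, count = 0, 0
    while (hit := img.find(ELF_MAGIC, pos)) != -1:
        hdr = img[hit:hit + 64]
        if len(hdr) == 64 and hdr[4] == 2 and hdr[5] == 1:   # ELFCLASS64, little-endian
            e_shoff = struct.unpack_from("<Q", hdr, 0x28)[0]
            e_shentsize = struct.unpack_from("<H", hdr, 0x3A)[0]
            e_shnum = struct.unpack_from("<H", hdr, 0x3C)[0]
            size = e_shoff + e_shentsize * e_shnum           # section headers end the file
            if 64 < size <= len(img) - hit:
                with open(f"{out_prefix}_{count}.bin", "wb") as out:
                    out.write(img[hit:hit + size])
                print(f"carved {size} bytes at offset {hit:#x}")
                count += 1
        pos = hit + 4
    return count
```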

19.
《Science & justice》2020,60(3):273-283
Transferring theoretical knowledge into practical skills remains a big challenge in forensic science, especially in questioned documents. The examination of handwriting and signatures requires years of practice to develop the necessary skills. While students (and to some extent the general population) often have the impression that it is easy to differentiate handwriting from different persons, in practice, particularly when dealing with simulated signatures, there is a high risk of reaching a wrong conclusion when questioned document experts do not use a systematic approach and/or are not sufficiently experienced (see for example the famous French Dreyfus case). Thus, a novel teaching approach, based on collaborative learning, was introduced in a theoretical handwriting class to improve the students' theoretical knowledge, make them aware of the limitations of their practical skills, and give them tools to improve those skills in their future practice. Through five activities, the students took the roles of victims, forgers, teachers and experts and created their own learning materials (i.e. signatures and mock casework). During these interactive activities, they learned to describe their signature's characteristics, intra-variability and complexity, and thus evaluate their own signature's vulnerability (as potential victims). They learned techniques to simulate signatures and to detect the resulting forgeries' characteristics (in the role of forgers). In the role of teachers, they prepared mock casework scenarios and gave feedback on their colleagues' examination of the produced material. As experts, they carried out signature examinations as they would in a proficiency test and were exposed to the difficulties an actual expert may encounter in practice. The evaluation of this novel teaching scenario was very positive, as students learned the possibilities and limitations of signature comparison more thoroughly and were more active and motivated in their learning. The teaching team also reported an improved experience. Some students complained of an increased workload and imprecise instructions; improvements were tested and are discussed in this paper.

20.
A bullet signature measurement system based on a stylus instrument was developed at the National Institute of Standards and Technology (NIST) for signature measurements of NIST RM (Reference Material) 8240 standard bullets. The standard bullets were developed as a reference standard for bullet signature measurements and are aimed at supporting the recently established National Integrated Ballistics Information Network (NIBIN) of the Bureau of Alcohol, Tobacco and Firearms (ATF) and the Federal Bureau of Investigation (FBI). The RM bullets are designed as both a virtual and a physical bullet signature standard. The virtual standard is a set of six digitized bullet signatures originally profiled from six master bullets fired at ATF and FBI using six different guns. By using the virtual signature standard to control the tool path on a numerically controlled diamond turning machine at NIST, 40 RM bullets were produced. In this paper, a comparison parameter and an algorithm using auto- and cross-correlation functions are described for quantifying the bullet signature differences between the RM bullets and the virtual bullet signature standard. When two compared signatures are exactly the same (point by point), their cross-correlation function (CCF) value equals 100%. The measurement system setup, measurement program, and initial measurement results are discussed. Initial measurement results for the 40 standard bullets, each measured at six land impressions, show that the CCF values for the 240 signature measurements are higher than 95%, with most of them higher than 99%. These results demonstrate the high reproducibility of both the manufacturing process and the measurement system for the NIST RM 8240 standard bullets.
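A minimal sketch of the cross-correlation comparison: two bullet signature profiles are zero-meaned and normalized, the cross-correlation is evaluated over lag offsets, and the maximum value is reported as a CCF percentage. The profiles are synthetic, and the actual NIST parameter involves profile registration and filtering not shown here.

```python
# Hypothetical sketch: maximum normalized cross-correlation (CCF, in %) between
# two bullet signature profiles, searched over relative lag.
import numpy as np

def max_ccf(profile_a, profile_b):
    a = np.asarray(profile_a, float) - np.mean(profile_a)
    b = np.asarray(profile_b, float) - np.mean(profile_b)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    ccf = np.correlate(a, b, mode="full")        # correlation at every lag
    return 100.0 * float(ccf.max())

# Synthetic land-impression profiles: same underlying signature, shifted and noisy.
rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 1200)
master = np.sin(3 * x) + 0.4 * np.sin(11 * x)
replica = np.roll(master, 25) + rng.normal(0, 0.05, x.size)   # same tool marks
unrelated = rng.normal(0, 1, x.size)                          # different barrel

print(f"replica   vs master: CCF = {max_ccf(replica, master):.1f}%")
print(f"unrelated vs master: CCF = {max_ccf(unrelated, master):.1f}%")
```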
