Similar Documents
10 similar documents found.
1.
Due to the democratisation of new technologies, computer forensics investigators have to deal with volumes of data that are increasingly large and heterogeneous. In a single machine, hundreds of events occur per minute, produced and logged by the operating system and various software. The identification of evidence, and more generally the reconstruction of past events, is therefore a tedious and time-consuming task for investigators. Our work aims at automatically reconstructing and analysing the events related to a digital incident while respecting legal requirements. To tackle the three main problems (volume, heterogeneity and legal requirements), we identify seven criteria that an efficient reconstruction tool must meet. This paper introduces an approach based on a three-layered ontology, called ORD2I, to represent any digital event. ORD2I is associated with a set of operators to analyse the resulting timeline and to ensure the reproducibility of the investigation.
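A minimal sketch of the idea, assuming a hypothetical vocabulary (ORD2I's actual classes and properties are not given in the abstract): each digital event becomes an individual in an RDF graph, and the timeline is recovered with a SPARQL query ordered by timestamp.

```python
# Illustrative only: the namespace, class and property names below are
# hypothetical stand-ins, not ORD2I's real vocabulary.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/forensic-ontology#")  # hypothetical IRI

g = Graph()
g.bind("ex", EX)

# One reconstructed event: a USB device connection logged by the OS.
event = EX["event-0001"]
g.add((event, RDF.type, EX.DigitalEvent))
g.add((event, EX.occurredAt,
       Literal("2024-01-15T09:30:00Z", datatype=XSD.dateTime)))
g.add((event, EX.source, Literal("syslog")))
g.add((event, EX.description, Literal("USB device connected")))

# Timeline operator: list all events ordered by timestamp.
timeline = g.query(
    """SELECT ?e ?t WHERE { ?e a ex:DigitalEvent ; ex:occurredAt ?t . }
       ORDER BY ?t""",
    initNs={"ex": EX},
)
for row in timeline:
    print(row.t, row.e)
```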

2.
This paper discusses the challenges of performing a forensic investigation against a multi-node Hadoop cluster and proposes a methodology for examiners to use in such situations. The procedure's aim of minimising disruption to the data centre during the acquisition process is achieved through the use of RAM forensics. This affords initial cluster reconnaissance, which in turn facilitates targeted data acquisition on the identified DataNodes. To evaluate the methodology's feasibility, a small Hadoop Distributed File System (HDFS) was configured and forensic artefacts were simulated upon it by deleting data originally stored in the cluster. RAM acquisition and analysis were then performed on the NameNode in order to test the validity of the suggested methodology. The results are cautiously positive, establishing that RAM analysis of the NameNode can be used to pinpoint the data blocks affected by the attack, allowing a targeted approach to the acquisition of data from the DataNodes, provided that the physical locations can be determined. A full forensic analysis of the DataNodes was beyond the scope of this project.
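As an illustration of the block-to-DataNode mapping the methodology relies on, the standard `hdfs fsck` tool reports, for a given file, its block IDs and the DataNodes holding them. The sketch below simply wraps that command; the paper instead recovers this mapping from NameNode RAM to minimise disruption to the live cluster, and the path is hypothetical.

```python
# Illustrative wrapper around the real 'hdfs fsck' command; the HDFS path
# is hypothetical. Requires the Hadoop client on PATH and cluster access.
import subprocess

def locate_blocks(hdfs_path: str) -> str:
    """Return fsck output listing the file's block IDs and DataNode locations."""
    result = subprocess.run(
        ["hdfs", "fsck", hdfs_path, "-files", "-blocks", "-locations"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(locate_blocks("/user/suspect/dataset.csv"))  # hypothetical path
```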

3.
Science & Justice, 2022, 62(1): 86-93
The prominence of technology usage in society has inevitably led to increasing numbers of digital devices being seized, as digital evidence now features in many criminal investigations. Such demand has led to well-documented backlogs that place pressure on digital forensic labs; in an effort to combat this issue, 'at-scene triage' of devices has been touted as a solution. Yet such triage approaches are not straightforward to implement, with multiple technical and procedural issues, including determining when it is actually appropriate to triage the contents of a device at-scene. This work focuses on that point because of its complexity, and to support first responders it offers a nine-stage triage decision model designed to promote consistent and transparent practice when determining whether a device should be triaged.
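The abstract does not enumerate the nine stages, so the sketch below is purely illustrative: it shows how a sequential go/no-go decision model of this kind can be encoded, with made-up stage names standing in for the real ones.

```python
# Hypothetical stages only: the paper's nine stages are not given in the
# abstract. This shows the shape of a sequential go/no-go decision model.
from typing import Callable, Dict, List, Tuple

STAGES: List[Tuple[str, Callable[[Dict[str, bool]], bool]]] = [
    ("legal authority covers the device", lambda c: c["warrant_covers_device"]),
    ("device is technically accessible", lambda c: not c["encrypted_or_locked"]),
    ("trained responder is present", lambda c: c["responder_trained"]),
    # ...further illustrative stages, up to nine in the real model...
]

def should_triage(context: Dict[str, bool]) -> bool:
    """Walk the stages in order; any failing stage means no at-scene triage."""
    for name, check in STAGES:
        if not check(context):
            print(f"Stop at stage '{name}': submit the device to the lab.")
            return False
    return True

print(should_triage({"warrant_covers_device": True,
                     "encrypted_or_locked": False,
                     "responder_trained": True}))
```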

4.
The big data era has a strong impact on forensic data analysis. Work is being done to speed up the processing of large amounts of data and to enrich this processing with new techniques. Forensics calls for specific design considerations, since the processed data is highly sensitive. In this paper we explore the impact of forensic drivers and major design principles such as security, privacy and transparency on the design and implementation of a centralized digital forensics service.
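One hedged illustration of the transparency principle (not taken from the paper): a centralized service can keep a tamper-evident audit trail by hash-chaining each processing step to the previous one.

```python
# Not from the paper: a minimal tamper-evident audit log, hash-chaining
# each processing step to the previous entry so the trail is verifiable.
import hashlib
import json
import time

def append_entry(log: list, actor: str, action: str) -> None:
    prev = log[-1]["digest"] if log else "0" * 64  # genesis entry
    entry = {"ts": time.time(), "actor": actor, "action": action, "prev": prev}
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

audit_log: list = []
append_entry(audit_log, "analyst-1", "ingest evidence image (sha256 recorded)")
append_entry(audit_log, "analyst-2", "run keyword search over image")
print(json.dumps(audit_log, indent=2))
```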

5.
This paper discusses the use of communication technology to commit crimes, including crime facts and crime techniques. The analysis focuses on the security of voice over Internet protocol (VoIP), prevention methods against VoIP call attacks, and points to consider when setting up an Internet phone. The importance of digital evidence and digital forensics is emphasised. This paper provides VoIP digital evidence forensics standard operating procedures (DEFSOP) to help police organisations, and establishes an experimental platform to simulate phone calls, hacker attacks and forensic data. Finally, this paper provides a general discussion of a digital evidence strategy, including VoIP, for crime investigators who are interested in digital evidence forensics.
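As a sketch of the kind of experimental platform described, SIP signalling (conventionally UDP port 5060) can be captured and preserved for later analysis; the snippet below uses Scapy and is illustrative rather than part of the DEFSOP itself.

```python
# Illustrative capture of SIP signalling with Scapy; needs root privileges
# and is not the DEFSOP itself. In practice the capture file would be
# hashed and preserved under chain-of-custody rules.
from scapy.all import sniff, wrpcap

packets = sniff(filter="udp port 5060", count=100)  # SIP's conventional port
wrpcap("sip_evidence.pcap", packets)                # preserve for analysis
for pkt in packets[:5]:
    print(pkt.summary())
```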

6.
Digital Investigation, 2014, 11(4): 273-294
A major challenge to digital forensic analysis is the ongoing growth in the volume of data seized and presented for analysis. This is a result of the continuing development of storage technology, including increased storage capacity in consumer devices and cloud storage services, and an increase in the number of devices seized per case. Consequently, this has led to increasing backlogs of evidence awaiting analysis, often for many months to years, affecting even the largest digital forensic laboratories. Over the preceding years, a variety of research has been undertaken in relation to the volume challenge. Proposed solutions include data mining, data reduction, increased processing power, distributed processing, artificial intelligence and other innovative methods. This paper surveys the published research and the proposed solutions. It concludes that there remains a need for further research focused on the real-world applicability of a method or methods to address the digital forensic data volume challenge.
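A brief sketch of one surveyed technique, hash-based data reduction: files matching a reference set of known-good hashes (in the style of the NSRL) are filtered out before deeper analysis. The reference set and paths below are hypothetical.

```python
# Illustrative hash-based data reduction: skip files whose SHA-256 appears
# in a known-good reference set. The set and mount path are hypothetical.
import hashlib
from pathlib import Path

KNOWN_GOOD = {
    # SHA-256 of the empty file, as a placeholder reference entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def needs_review(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest not in KNOWN_GOOD

candidates = [p for p in Path("/evidence/mount").rglob("*")
              if p.is_file() and needs_review(p)]
print(f"{len(candidates)} files remain after known-file filtering")
```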

7.
Digital Investigation, 2014, 11(4): 295-313
Distributed filesystems provide a cost-effective means of storing high-volume, high-velocity and high-variety information in cloud computing, big data and other contemporary systems. These technologies have the potential to be exploited for illegal purposes, which highlights the need for digital forensic investigations. However, few papers have been published in the area of distributed filesystem forensics. In this paper, we aim to address this gap in knowledge. Using our previously published cloud forensic framework as the underlying basis, we conduct an in-depth forensic experiment on XtreemFS, an EU-funded Contrail project, as a case study for distributed filesystem forensics. We discuss the technical and process issues regarding the collection of evidential data from distributed filesystems, particularly when used in cloud computing environments. A number of digital forensic artefacts are also discussed. We then propose a process for the collection of evidential data from distributed filesystems.
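One illustrative collection step, under the assumption that the filesystem is mounted at a client: enumerating file metadata (size, MAC times, ownership) into a CSV for later correlation. The mount point is hypothetical, and XtreemFS-specific server-side artefacts (e.g. MRC and OSD logs) would be handled separately in the paper's process.

```python
# Illustrative metadata sweep over a mounted distributed filesystem;
# the mount point is hypothetical.
import csv
from pathlib import Path

MOUNT = Path("/mnt/xtreemfs")  # hypothetical client mount point

with open("fs_metadata.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["path", "size", "mtime", "atime", "ctime", "uid"])
    for p in MOUNT.rglob("*"):
        if p.is_file():
            st = p.stat()
            writer.writerow([str(p), st.st_size, st.st_mtime,
                             st.st_atime, st.st_ctime, st.st_uid])
```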

8.
Any investigation can have a digital dimension, often involving information from multiple data sources, organizations and jurisdictions. Existing approaches to representing and exchanging cyber-investigation information are inadequate, particularly when combining data sources from numerous organizations or dealing with large amounts of data from various tools. To conduct investigations effectively, there is a pressing need to harmonize how this information is represented and exchanged. This paper addresses this need for information exchange and tool interoperability with an open community-developed specification language called Cyber-investigation Analysis Standard Expression (CASE). To further promote a common structure, CASE aligns with and extends the Unified Cyber Ontology (UCO) construct, which provides a format for representing information in all cyber domains. This ontology abstracts objects and concepts that are not CASE-specific, so that they can be used across other cyber disciplines that may extend UCO. This work is a rational evolution of the Digital Forensic Analysis eXpression (DFAX) for representing digital forensic information and provenance. CASE is more flexible than DFAX and can be utilized in any context, including criminal, corporate and intelligence. CASE also builds on the Hansken data model developed and implemented by the Netherlands Forensic Institute (NFI). CASE enables the fusion of information from different organizations, data sources, and forensic tools to foster more comprehensive and cohesive analysis. This paper includes illustrative examples of how CASE can be implemented and used to capture information in a structured form to advance sharing, interoperability and analysis in cyber-investigations. In addition to capturing technical details and relationships between objects, CASE provides structure for representing and sharing details about how cyber-information was handled, transferred, processed, analyzed, and interpreted. CASE also supports data marking for sharing information at different levels of trust and classification, and for protecting sensitive and private information. Furthermore, CASE supports the sharing of knowledge related to cyber-investigations, including distinctive patterns of activity/behavior that are common across cases. This paper features a proof-of-concept Application Programming Interface (API) to facilitate implementation of CASE in tools. Community members are encouraged to participate in the development and implementation of CASE and UCO.
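A rough sketch of what a CASE/UCO-style object looks like when serialized as JSON-LD; the context IRIs and property names below approximate the published examples and should be checked against the actual CASE ontology.

```python
# Approximate CASE/UCO-style JSON-LD; IRIs and property names are
# illustrative and should be checked against the published CASE ontology.
import json
import uuid

trace = {
    "@context": {
        "kb": "http://example.org/kb/",  # hypothetical knowledge-base prefix
        "uco-observable":
            "https://ontology.unifiedcyberontology.org/uco/observable/",
    },
    "@id": f"kb:file-{uuid.uuid4()}",
    "@type": "uco-observable:File",
    "uco-observable:fileName": "invoice.pdf",
    "uco-observable:sizeInBytes": 31337,
}
print(json.dumps(trace, indent=2))
```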

9.
The year 2017 has seen many EU and UK legislative initiatives and proposals to consider and address the impact of artificial intelligence on society, covering questions of liability, legal personality and other ethical and legal issues, including in the context of data processing. In March 2017, the Information Commissioner's Office (UK) updated its big data guidance to address the development of artificial intelligence and machine learning, and to reflect the General Data Protection Regulation (GDPR), which will apply from 25 May 2018. This paper situates the ICO's guidance in the context of wider legal and ethical considerations and provides a critique of the position adopted by the ICO. On the ICO's analysis, the key challenge for artificial intelligence processing personal data is establishing that such processing is fair. This shift reflects the potential for artificial intelligence to have negative social consequences (whether intended or unintended) that are not otherwise addressed by the GDPR. The question of 'fairness' is an important one in addressing the imbalance between big data organisations and individual data subjects, with a number of ethical and social impacts that need to be evaluated.

10.
Against the background of the growing discriminatory challenges facing artificial intelligence applications, this paper examines the requirements needed to comply with European non-discrimination law and prevent discrimination in the automated online job advertising business. It explains under which circumstances the automated targeting of job advertisements can amount to direct or indirect discrimination, and concludes with technical recommendations to mitigate the dangers of automated job advertising. Various options, such as influencing the pre-processing of big data and altering the algorithmic models, are evaluated, and the paper also examines the possibility of using techniques such as data mining and machine learning to actively combat direct and indirect discrimination. The European non-discrimination directives 2000/43/EC, 2000/78/EC and 2006/54/EC, which prohibit direct and indirect discrimination in the field of employment on the grounds of race or ethnic origin, sex, sexual orientation, religious belief, age and disability, are used as the legal framework.
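As a hedged illustration of detecting possible indirect discrimination in ad targeting, one common statistical screen (the 'four-fifths' rule, a heuristic rather than the legal test under the EU directives) compares the rates at which different groups are shown an advert. The counts below are invented.

```python
# Invented counts; the four-fifths ratio is a statistical heuristic,
# not the legal test under directives 2000/43/EC, 2000/78/EC or 2006/54/EC.
def selection_rate(shown: int, audience: int) -> float:
    return shown / audience

rate_a = selection_rate(shown=480, audience=1000)  # group A sees the advert
rate_b = selection_rate(shown=300, audience=1000)  # group B sees the advert

impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
if impact_ratio < 0.8:
    print(f"Flag for review: impact ratio {impact_ratio:.2f} < 0.80")
```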

