221.
Investigating seized devices in digital forensics is becoming increasingly difficult due to the growing amount of data. A common countermeasure is automated file identification, which reduces the amount of data an investigator has to inspect by hand. Besides identifying exact duplicates, which is largely solved using cryptographic hash functions, it is also helpful to detect similar data by applying approximate matching. Let x denote the number of digests in a database; then the lookup of a single similarity digest has complexity O(x), i.e., the digest has to be compared against all digests in the database. In contrast, cryptographic hash values are stored in binary trees or hash tables, so the lookup complexity of a single digest is O(log₂(x)) or O(1), respectively. In this paper we present and evaluate a concept for extending existing approximate matching algorithms that reduces the lookup complexity from O(x) to O(1): instead of using multiple small Bloom filters, which is the common procedure, we demonstrate that a single, huge Bloom filter performs far better. Our evaluation shows that current approximate matching algorithms are too slow (e.g., over 21 min to compare the 4457 digests of a common file corpus against each other), while the improved version solves this challenge within seconds. Studying the precision and recall rates shows that our approach works as reliably as the original implementations. The speed-up is bought with accuracy: the comparison is now a file-against-set comparison, so it is no longer possible to see which file in the database was matched.
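A minimal sketch of the lookup idea follows; it is not the authors' implementation. All features extracted from the database files go into one large Bloom filter, so each query feature is answered by a constant number of bit probes. The filter size, the number of hash functions, and the double-hashing scheme are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Single large Bloom filter: O(1) membership test per feature."""

    def __init__(self, m_bits=1 << 24, k=5):
        self.m = m_bits                      # filter size in bits (illustrative)
        self.k = k                           # number of hash functions (illustrative)
        self.bits = bytearray(m_bits // 8)

    def _positions(self, feature: bytes):
        # Derive k bit positions from one SHA-256 digest (double-hashing style).
        d = hashlib.sha256(feature).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, feature: bytes):
        for p in self._positions(feature):
            self.bits[p // 8] |= 1 << (p % 8)

    def contains(self, feature: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(feature))

# Build the filter once from all database features, then each query is O(1):
db = BloomFilter()
for feature in [b"feature-a", b"feature-b"]:     # features extracted from DB files
    db.add(feature)
print(db.contains(b"feature-a"), db.contains(b"feature-x"))  # True False
```

As the abstract stresses, such a filter can only answer whether a similar file exists in the set; it cannot name which database file matched.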
222.
A critical aspect of malware forensics is authorship analysis. The success of such analysis is usually determined by the reverse engineer's skills and by the volume and complexity of the code under analysis. To assist reverse engineers in this tedious and error-prone task, it is desirable to develop reliable and automated tools for supporting the practice of malware authorship attribution. In a recent work, machine learning was used to rank and select syntax-based features such as n-grams and flow graphs. The experimental results showed that the top-ranked features were unique to each author, which was regarded as evidence that those features capture the authors' programming styles. In this paper, however, we show that the uniqueness of features does not necessarily correspond to authorship. Specifically, our analysis demonstrates that many "unique" features selected by this method are clearly unrelated to the authors' programming styles, for example unique IDs or random but unique function names generated by the compiler; furthermore, the overall accuracy is generally unsatisfactory. Motivated by this discovery, we propose OBA2, a layered Onion Approach for Binary Authorship Attribution. The novelty of our approach lies in its three complementary layers: preprocessing, syntax-based attribution, and semantic-based attribution. Experiments show that our method produces results that are not only more accurate but also meaningfully connected to the authors' styles.
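To make "syntax-based features" concrete, here is a hedged sketch of opcode n-gram counting over a disassembly listing; the input format and all names are hypothetical, and this is not the OBA2 pipeline itself.

```python
from collections import Counter

def opcode_ngrams(disassembly: list[str], n: int = 3) -> Counter:
    """Count opcode n-grams from a list of disassembled instructions.

    `disassembly` is assumed to hold lines like "mov ebp, esp"; only the
    mnemonic (first token) feeds the n-gram, a common syntax-based feature
    in authorship studies.
    """
    ops = [line.split()[0] for line in disassembly if line.strip()]
    return Counter(tuple(ops[i:i + n]) for i in range(len(ops) - n + 1))

listing = ["push ebp", "mov ebp, esp", "sub esp, 0x10",
           "call printf", "leave", "ret"]
print(opcode_ngrams(listing, n=2).most_common(3))
```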
223.
Digital Investigation, 2014, 11(1): 30–42
The pervasive availability of cheap cloud storage services, whether as a persistence layer for applications or as a mere object store for end users, markedly increases the chance that cloud platforms host evidence of criminal activity. Presented with a proper court order, cloud providers would be in the best position to extract relevant data from their platforms in the most reliable and complete way. However, such services are not widespread to date, so the need for a structured and forensically sound approach calls for innovative tooling that leverages the data-harvesting capabilities offered by the low-level programming interfaces exposed by providers. This paper describes the concepts and internals of the Cloud Data Imager Library, a mediation layer that offers read-only access to the files and metadata of selected remote folders and currently supports the Dropbox, Google Drive, and Microsoft SkyDrive storage facilities. A demo application has been built on top of the library, allowing directory browsing, file content view, and imaging of folder trees with export to widespread forensic formats.
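The abstract does not reproduce the library's API, so the following is only a hedged illustration of a read-only mediation layer: a minimal provider-agnostic interface (all names hypothetical, including `ReadOnlyCloudStore` and `image_tree`) that an imaging front end could drive, plus a toy in-memory store to show usage.

```python
import hashlib
from abc import ABC, abstractmethod
from typing import Iterator

class ReadOnlyCloudStore(ABC):
    """Hypothetical mediation-layer interface: list and read, never write."""

    @abstractmethod
    def list_folder(self, path: str) -> Iterator[str]:
        """Yield entry paths below `path`."""

    @abstractmethod
    def read_file(self, path: str) -> bytes:
        """Return file content without modifying remote state."""

def image_tree(store: ReadOnlyCloudStore, root: str) -> dict[str, str]:
    """Acquire every file under `root` and record a SHA-256 per file,
    mimicking the hash manifest a forensic image format would carry."""
    manifest = {}
    for path in store.list_folder(root):
        data = store.read_file(path)
        manifest[path] = hashlib.sha256(data).hexdigest()
        # a real imager would also serialize `data` into the evidence container
    return manifest

class DictStore(ReadOnlyCloudStore):
    """Toy in-memory stand-in for a real provider backend."""
    def __init__(self, files):
        self.files = files
    def list_folder(self, path):
        return iter(self.files)
    def read_file(self, path):
        return self.files[path]

print(image_tree(DictStore({"/a.txt": b"hello"}), "/"))
```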
224.
Technology acceptance in policing is under-researched, yet mobile devices are widely deployed across UK police forces. This paper validates a mobile technology acceptance model (M-TAM) developed in a single police force and shows that the M-TAM is transferable to other UK police forces, and potentially worldwide. The influence of local supervision and the fit of technology to roles and tasks are shown to be the most influential factors. Factors beyond the technology itself, such as the influence of peers and the involvement of operational officers in technology investment decisions, must be considered in order to accommodate the strong cultural barriers in policing.
225.
A bead-based liquid hybridization assay, Luminex® 100™, was used to identify four pathogenic bacteria, Bacillus anthracis, Clostridium botulinum, Francisella tularensis subsp. tularensis, and Yersinia pestis, and several close relatives. Hybridization between PCR-amplified target sequences and probe sequences (located within the 23S ribosomal RNA gene rrl and the genes related to the toxicity of each bacterium) was detected in single-probe or multiple-probe assays, depending on the organism. The lower limits of detection (LLDs) for the probes ranged from 0.1 to 10 ng. Sensitivity was improved using lambda exonuclease to digest the noncomplementary target strand. All contributors in 33 binary, ternary, and quaternary mixtures in which all components were present in a 1:1 ratio were identified with an 80% success rate. Twenty-eight binary mixtures in which the two components were combined in various ratios were further studied. All target sequences were detected, even when the minor component was overshadowed by a tenfold excess of the major component.
226.
International regulations on the safety of ships at sea require every modern vessel to be equipped with a Voyage Data Recorder (VDR) to assist investigations in the event of an accident. These devices are therefore the primary means of acquiring reliable data about an accident involving a ship, and they must be among the first targets in an investigation. Although the regulations describe the sources and amount of data to be recorded, they say nothing about the recording format. As a consequence, investigators are today forced to rely solely on the help of the system's builder, which provides proprietary software to "replay" the voyage recordings. This paper examines the data found in the VDR from the actual Costa Concordia accident in 2012 and describes the recovery of information useful to the investigation, both by deduction and by reverse engineering of the data, some of which was not even shown by the official replay software.
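Reverse engineering a proprietary recording usually starts by hypothesizing a record layout and testing it against the raw bytes. The sketch below parses hypothetical fixed-size records (epoch timestamp, heading, speed) with Python's struct module; the field layout is invented for illustration and is not the Costa Concordia VDR format.

```python
import struct

# Hypothesized record: uint32 epoch seconds, float heading (deg), float speed (kn)
RECORD = struct.Struct("<Iff")

def carve_records(blob: bytes):
    """Yield (timestamp, heading, speed) tuples from a raw dump,
    ignoring trailing bytes that do not fill a whole record."""
    for off in range(0, len(blob) - RECORD.size + 1, RECORD.size):
        yield RECORD.unpack_from(blob, off)

# Self-contained demo: pack two sample records, then carve them back out.
dump = RECORD.pack(1326497400, 278.5, 15.3) + RECORD.pack(1326497401, 279.0, 15.1)
for rec in carve_records(dump):
    print(rec)
```

If the decoded headings and speeds stay within plausible ranges across the whole dump, the hypothesized layout is probably close to the real one; implausible values send the analyst back to adjust field sizes or offsets.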
227.
We describe the design, implementation, and evaluation of FROST, three new forensic tools for the OpenStack cloud platform. Our implementation supports an Infrastructure-as-a-Service (IaaS) cloud and provides trustworthy forensic acquisition of virtual disks, API logs, and guest firewall logs. Unlike traditional acquisition tools, FROST operates at the cloud management plane rather than interacting with the operating system inside the guest virtual machines, and therefore requires no trust in the guest machine. We assume trust in the cloud provider, but FROST overcomes non-trivial challenges of remote evidence integrity by storing log data in hash trees and returning evidence with cryptographic hashes. Our tools are user-driven, allowing customers, forensic examiners, and law enforcement to conduct investigations without requiring interaction with the cloud provider. We demonstrate how FROST's new features enable forensic investigators to obtain forensically sound data from OpenStack clouds independent of provider interaction. Our preliminary evaluation indicates that the approach scales in a dynamic cloud environment. The design supports an extensible set of forensic objectives, including the future addition of other data-preservation, discovery, real-time monitoring, metrics, auditing, and acquisition capabilities.
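The abstract does not spell out FROST's hash-tree layout, so the following is only a minimal sketch of the idea, assuming a plain Merkle construction: an examiner records the root at acquisition time, and any later tampering with a log entry changes the recomputed root.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [_h(leaf) for leaf in leaves] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

logs = [b"api: create-instance", b"fw: ACCEPT 10.0.0.5", b"api: attach-volume"]
root_at_acquisition = merkle_root(logs)
logs[1] = b"fw: DROP 10.0.0.5"             # any tampering...
assert merkle_root(logs) != root_at_acquisition  # ...changes the root
print(root_at_acquisition.hex())
```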
228.
Automated input identification is a challenging but important task. Within computer forensics it reduces the amount of data an investigator has to inspect by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is necessary to cope with similar inputs (e.g., different versions of a file), embedded objects (e.g., a JPG within a Word document), and fragments (e.g., network packets). Over recent years a number of similarity hashing algorithms have been published. However, in the absence of a definition and a test framework, it is hardly possible to evaluate and compare these approaches and establish them in the community. This paper provides an assessment methodology and a sample implementation called FRASH: a framework to test algorithms of similarity hashing. First, we describe common use cases of a similarity hashing algorithm to motivate our two test classes, efficiency and sensitivity & robustness. Next, we briefly describe our open and freely available framework. Finally, we apply FRASH to the well-known similarity hashing approaches ssdeep and sdhash to show their strengths and weaknesses.
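As a small-scale taste of the kind of all-pairs comparison the efficiency tests time, the sketch below scores file pairs with ssdeep via the third-party `ssdeep` Python binding (assumed installed via `pip install ssdeep`; FRASH itself is a separate framework and its API is not shown here).

```python
import itertools
import ssdeep  # third-party binding (assumed available)

files = {"a.doc": b"lorem ipsum dolor sit amet " * 200,
         "b.doc": b"lorem ipsum dolor sit amet " * 199 + b"edited tail",
         "c.doc": b"completely different content " * 200}

digests = {name: ssdeep.hash(data) for name, data in files.items()}

# All-pairs comparison: the O(x^2) pattern whose runtime grows quickly
# with corpus size, as the evaluations in this line of work measure.
for (n1, d1), (n2, d2) in itertools.combinations(digests.items(), 2):
    print(n1, n2, ssdeep.compare(d1, d2))  # similarity score 0..100
```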
229.
Performing a digital forensic investigation (DFI) requires a standardized and formalized process, yet neither an international standard nor a globally harmonized DFI process (DFIP) currently exists. The authors studied existing state-of-the-art DFIP models and concluded that there are significant disparities in the number of processes, their scope, their hierarchical levels, and the concepts applied. This paper proposes a comprehensive model that harmonizes the existing models. An effort was made to incorporate all types of processes proposed by the existing models, including those aimed at achieving digital forensic readiness. The authors also introduce a novel class of processes called concurrent processes, a contribution that should, together with the rest of the model, enable more efficient and effective DFIs while ensuring the admissibility of digital evidence. Ultimately, the proposed model is intended to be used for different types of DFI and should lead to standardization.
230.
In June 2013, Texas Senate Bill 344 (SB 344) was signed into law after strong Innocence Project support, and it has since transformed the Texan judicial landscape. Known as the 'Junk Science Writ', SB 344 enables a court to grant habeas corpus relief based on scientific evidence that '(1) was not available to be offered by a convicted person at the convicted person's trial; or (2) contradicts scientific evidence relied on by the state at trial'. Inmates such as the 'San Antonio Four', whose convictions were based on what is now considered 'faulty' medical and forensic testimony, have been released under SB 344. Yet science, as a field dependent on innovation, is inherently prone to debunking the scientific and forensic methods the law has relied upon to convict individuals. This commentary identifies the policy behind SB 344, how SB 344 may influence the perception of science in the courtroom, and how 'junk science' is defined and/or limited. It concludes that to achieve justice in the legal system through habeas relief based on 'junk science', it is necessary to revitalize and standardize forensic science.