相似文献 (Similar Documents)
20 similar documents found.
1.
Identifying the source camera of an image is becoming increasingly important. A popular approach uses a type of sensor pattern noise called photo-response non-uniformity (PRNU): the noise extracted from an image contains a pattern that can serve as a camera fingerprint. The PRNU-based approach, however, is sensitive to scene content and image intensity; identification is poor in regions of low or saturated intensity and in regions with complicated texture. The reliability of different regions is difficult to model because it depends on the interaction between scene content and the characteristics of the denoising filter used to extract the noise. In this paper, we show that the local variance of the noise residual can measure the reliability of a pixel for PRNU-based source camera identification, and we therefore propose using local variance to characterize the severity of scene-content artifacts. The local variance is incorporated into the generalized matched filter and the peak-to-correlation-energy (PCE) detector to provide an optimal framework for signal detection. The proposed method is tested against several state-of-the-art methods, and the experimental results show that the local-variance-based approach outperforms them in terms of identification accuracy.
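A minimal sketch of the PCE detector described above, with optional per-pixel weights such as the inverse local variance of the noise residual. The window size, the weighting scheme, and the peak-exclusion radius are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def local_variance(residual, k=5):
    """Local variance of the noise residual over a k-by-k window
    (window size is an assumption, not taken from the paper)."""
    r = np.pad(residual, k // 2, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(r, (k, k))
    return win.var(axis=(-1, -2)) + 1e-8  # avoid division by zero

def pce(residual, fingerprint, weights=None, exclude=5):
    """Peak-to-correlation-energy (PCE) between a noise residual and a PRNU
    fingerprint; optional weights (e.g. inverse local variance) down-weight
    unreliable, textured pixels."""
    if weights is not None:
        residual = residual * weights
        fingerprint = fingerprint * weights
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    # circular cross-correlation via the FFT
    cc = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    peak = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
    # exclude a small neighbourhood around the peak from the energy estimate
    mask = np.ones(cc.shape, dtype=bool)
    ys = np.arange(peak[0] - exclude, peak[0] + exclude + 1) % cc.shape[0]
    xs = np.arange(peak[1] - exclude, peak[1] + exclude + 1) % cc.shape[1]
    mask[np.ix_(ys, xs)] = False
    return cc[peak] ** 2 / np.mean(cc[mask] ** 2)
```

A matching residual yields a PCE orders of magnitude larger than a non-matching one, which is what the detector thresholds on.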

2.
Source camera identification is an emerging field in digital image forensics that aims to identify the camera used to capture a given image. The technique uses photo-response non-uniformity (PRNU) noise as a camera fingerprint, since it is a unique characteristic capable of distinguishing images even when they are captured by similar cameras. Most existing PRNU-based approaches are very sensitive to the random noise components in the estimated PRNU and are not robust when simple manipulations are performed on the images. Hence, a new feature-based PRNU approach is proposed for source camera identification, using features that are robust to image manipulations. The PRNU noise is extracted from the images using a wavelet-based denoising method and is represented by higher-order wavelet statistics (HOWS), which are invariant to image manipulations and geometric variations. The features are fed to support vector machine classifiers to identify the source camera of a given image, and the results are verified using ten-fold cross-validation. Experiments carried out on images captured by various cell-phone cameras demonstrate that the proposed algorithm identifies the source camera with good accuracy. The technique can differentiate images even when they are captured by similar cameras of the same make and model, and the analysis shows that it remains robust when the images are subjected to simple manipulations or geometric variations.
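The PRNU fingerprint underlying approaches like this one is commonly estimated by averaging noise residuals over many images from the same camera, K = Σ W_i·I_i / Σ I_i², where W_i = I_i − denoise(I_i). The sketch below uses a simple box-filter denoiser as a stand-in for the paper's wavelet-based denoising; the HOWS features and SVM stage are omitted:

```python
import numpy as np

def box_denoise(img, k=3):
    """Box-filter denoiser -- a simple stand-in for wavelet denoising."""
    p = np.pad(img, k // 2, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.mean(axis=(-1, -2))

def estimate_fingerprint(images):
    """Maximum-likelihood-style PRNU estimate K = sum(W_i * I_i) / sum(I_i^2),
    where W_i = I_i - denoise(I_i) is the noise residual of image i."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img in images:
        img = img.astype(float)
        w = img - box_denoise(img)   # noise residual
        num += w * img
        den += img ** 2
    return num / (den + 1e-8)
```

With a few dozen images the estimate correlates strongly with the true multiplicative sensor pattern.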

3.
Under certain conditions, the photo-response non-uniformity (PRNU) of an imaging sensor can serve as a device-specific characteristic for identifying the source of video footage. In this paper, the wavelet filter defined by Lukáš et al. [1] is used to extract the PRNU pattern from video files re-encoded with AVC, and the effects of adjusting the resolution and encoding parameters on the device's PRNU pattern are studied. The results show that for some re-encoded videos, the source device can still be identified through its PRNU characteristics.

4.
This paper presents a novel digital watermarking technique that uses face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of the fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and preserve the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint; the degree of similarity is computed using pixel-based metrics and human-visual-system metrics. The results also show that the watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
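A toy illustration of wavelet-domain watermarking of the kind described above: a ±1 watermark is embedded additively in the diagonal-detail (HH) band of a one-level Haar transform and detected by correlation. This is a generic sketch, not the paper's scheme; the embedding strength and band choice are arbitrary assumptions (the strength is set high here purely so the demonstration is unambiguous):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def embed(img, watermark, alpha=32.0):
    """Additively embed a +/-1 watermark in the HH band (alpha is a
    deliberately strong, illustrative embedding factor)."""
    ll, lh, hl, hh = haar2(img.astype(float))
    return ihaar2(ll, lh, hl, hh + alpha * watermark)

def detect(img, watermark, alpha=32.0):
    """Correlation detector: response near 1 when `watermark` is present."""
    hh = haar2(img.astype(float))[3]
    return float(np.mean(hh * watermark)) / alpha
```

In a real system the embedding would be confined to selected texture regions and the strength tuned for imperceptibility, as the paper describes.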

5.
Patch-Match is an efficient algorithm for structural image editing, available as a tool in popular commercial photo-editing software. The tool allows users to insert or remove objects in photos using information from similar scene content. Recently, a modified version of this algorithm was proposed as a countermeasure against photo-response non-uniformity (PRNU) based source camera identification (SCI). The algorithm provides anonymity at a high rate (97%) and impedes PRNU-based SCI without requiring any other information, leaving no known recourse for PRNU-based SCI. In this paper, we propose a method to identify the sources of Patch-Match-applied images using randomized subsets of images together with traditional PRNU-based SCI methods. We evaluate the proposed method in two forensic scenarios in which an adversary uses the Patch-Match algorithm to distort the PRNU noise pattern in incriminating images she took with her camera. Our results show that it is possible to link sets of Patch-Match-applied images back to their source camera even in the presence of images that come from unknown cameras. To the best of our knowledge, the proposed method is the first countermeasure against the use of Patch-Match in the digital forensics literature.
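One way to read "randomized subsets of images" is to average noise residuals over random subsets of the questioned set and correlate each average with the candidate fingerprint; a consistently positive score links the set to the camera even if individual images are distorted. The subset size, subset count, and scoring are assumptions of this sketch, not the paper's exact procedure:

```python
import numpy as np

def subset_link_scores(residuals, fingerprint, n_subsets=5, seed=0):
    """Average noise residuals over random subsets of the questioned images
    and correlate each subset average with the candidate fingerprint
    (subset size/count are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(residuals))
    scores = []
    for _ in range(n_subsets):
        pick = rng.choice(idx, size=max(2, len(idx) // 2), replace=False)
        avg = np.mean([residuals[i] for i in pick], axis=0)
        a = avg - avg.mean()
        b = fingerprint - fingerprint.mean()
        scores.append(float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())))
    return scores
```

Scores near zero across all subsets suggest the set does not come from the candidate camera.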

6.
Source camera identification (SCI) is an important topic in image forensics. One of the most effective fingerprints for linking an image to its source camera is the sensor pattern noise, which is estimated as the difference between the image content and its denoised version. It is widely believed that the performance of sensor-based SCI relies heavily on the denoising filter used. This study proposes a novel sensor-based SCI method using a content-adaptive guided image filter (CAGIF). Thanks to the low complexity of the CAGIF, the proposed method is much faster than the state-of-the-art methods, a big advantage considering the potential real-time applications of SCI. Despite the advantage in speed, experimental results also show that the proposed method achieves comparable or better accuracy than the state-of-the-art methods.
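For context, the plain guided image filter (He et al.) on which CAGIF builds can be written entirely with box filters, which is where its low complexity comes from. This sketch is the standard, non-adaptive filter with the guide equal to the input; the content-adaptive regularization that defines CAGIF is omitted:

```python
import numpy as np

def box(x, k=3):
    """Box-filter mean over a k-by-k window (reflect-padded)."""
    p = np.pad(x, k // 2, mode="reflect")
    w = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return w.mean(axis=(-1, -2))

def guided_filter(guide, src, k=3, eps=0.5):
    """Plain guided image filter: locally linear model q = a*guide + b,
    with a, b solved per window and then box-averaged."""
    mean_i, mean_p = box(guide, k), box(src, k)
    cov_ip = box(guide * src, k) - mean_i * mean_p
    var_i = box(guide * guide, k) - mean_i ** 2
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return box(a, k) * guide + box(b, k)
```

The sensor pattern noise residual would then be `img - guided_filter(img, img)`, analogous to the denoising step described above.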

7.
A video can be manipulated using synthetic zooming without resorting to state-of-the-art video forgery techniques. Synthetic zooming is performed by upscaling individual frames of a video with varying scale factors and then cropping them to the original frame size. These manipulated frames resemble genuine natural (optical) camera-zoomed frames and hence may be misclassified as pristine by video forgery detection algorithms. Even if such a video is classified as forged, forensic investigators may ignore the result, believing it to be part of an optical camera zooming activity. Hence, synthetic zooming can be used as an anti-forensic method that eliminates digital evidence. In this paper, we propose a method for differentiating optical camera zooming from synthetic zooming for video tampering detection, using pixel variance correlation and sensor pattern noise as features. Experimental results on a dataset containing 3200 videos show the effectiveness of the proposed method.
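The manipulation itself, upscale then center-crop back to the original frame size, can be sketched in a few lines. Real synthetic zooming would use interpolated, possibly non-integer scale factors; nearest-neighbour upscaling with an integer factor is a simplifying assumption here:

```python
import numpy as np

def synthetic_zoom(frame, factor=2):
    """Synthetically zoom a frame: nearest-neighbour upscale by an integer
    factor (a simplification), then centre-crop to the original size."""
    up = np.kron(frame, np.ones((factor, factor), dtype=frame.dtype))
    h, w = frame.shape
    y0 = (up.shape[0] - h) // 2
    x0 = (up.shape[1] - w) // 2
    return up[y0:y0 + h, x0:x0 + w]
```

Because the output has the same dimensions as the input, a frame-size check alone cannot reveal the manipulation, which is why statistical features are needed.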

8.
Digital Investigation, 2014, 11(2): 111–119
To discriminate natural images from computer-generated graphics, a novel identification method is proposed based on the impact of color filter array (CFA) interpolation on the local correlation of photo-response non-uniformity (PRNU) noise. Since CFA interpolation is generally present in the generation of natural images and influences the local correlation of the PRNU, the differences between the PRNU correlations of natural images and those of computer-generated graphics are investigated. Nine histogram features are extracted from the local variance histograms of the PRNU to serve as identification features, and discrimination is performed with a support vector machine (SVM) classifier. Experimental results and analysis show that the method achieves an average identification accuracy of 99.43% and is robust against scaling, JPEG compression, rotation, and additive noise. It therefore has great potential for use in image source pipeline forensics.
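The nine-dimensional feature vector described above can be sketched as a normalized nine-bin histogram of the local variance of the PRNU map; the window size and bin edges below are assumptions, since the paper does not fix them here:

```python
import numpy as np

def prnu_variance_histogram(prnu, k=3, bins=9):
    """Nine-bin normalized histogram of the local variance of a PRNU map,
    a sketch of histogram-style identification features (window size and
    bin range are illustrative assumptions)."""
    p = np.pad(prnu, k // 2, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    lv = win.var(axis=(-1, -2)).ravel()
    hist, _ = np.histogram(lv, bins=bins, range=(0.0, lv.max() + 1e-12))
    return hist / hist.sum()  # normalized so features are image-size invariant
```

The resulting vectors would be fed to an SVM trained on labeled natural and computer-generated examples.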

9.
In this research, we examined whether fixed pattern noise, or more specifically photo-response non-uniformity (PRNU), can be used to identify the source camera of heavily JPEG-compressed digital photographs of resolution 640 × 480 pixels. We extracted PRNU patterns from both reference and questioned images using a two-dimensional Gaussian filter and compared the patterns by calculating the correlation coefficient between them. Both the closed-set and open-set problems were addressed. In the closed set, high accuracies were achieved: 83% for single images and 100% when around 20 questioned images were identified simultaneously. The correct source camera was chosen from a set of 38 cameras of four different types. For the open-set problem, decision thresholds were obtained for several numbers of simultaneously identified questioned images; the corresponding false rejection rates were unsatisfactory for single images but improved for simultaneous identification of multiple images.
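The comparison step above reduces to a correlation coefficient plus a threshold; averaging the residuals of several questioned images before correlating is what makes simultaneous identification so much more reliable. A minimal sketch (the threshold value is illustrative, and the PRNU extraction itself is assumed done elsewhere):

```python
import numpy as np

def corr(a, b):
    """Correlation coefficient between two extracted PRNU patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def identify(questioned, references, threshold=0.1):
    """Open-set decision: average the questioned residuals (simultaneous
    identification), pick the best-matching reference pattern, and reject
    (return None) if the correlation falls below a threshold."""
    avg = np.mean(questioned, axis=0)
    scores = [corr(avg, ref) for ref in references]
    best = int(np.argmax(scores))
    return (best if scores[best] >= threshold else None), scores
```

Averaging n residuals suppresses the image-dependent noise by roughly a factor of sqrt(n), which is consistent with the jump from 83% to 100% accuracy reported for ~20 images.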

10.
In forensic science, bullet identification is based on the fact that firing a cartridge from a barrel leaves distinctive microscopic striations on the fired bullet, which serve as the fingerprint of the firearm. Bullet identification methods are categorized as 2-D or 3-D according to their image-acquisition techniques. In this study, we focus on 2-D optical images using a multimodal technique and propose several distinct methods as its modalities. The proposed method uses a rule-based linear weighted fusion approach that combines the semantic-level decisions from the different modalities, with the modality weights optimized by a genetic algorithm. The approach was applied to a dataset of 180 2-D bullet images fired from 90 different AK-47 barrels, and the experiments showed that it attained better results than common methods in the field of bullet identification.
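The fusion step itself is a weighted sum of per-modality class scores followed by an argmax. A minimal sketch, with the weights simply given (in the paper they are found by a genetic algorithm):

```python
import numpy as np

def linear_weighted_fusion(modality_scores, weights):
    """Combine per-modality class-score vectors with a linear weighted sum
    and return the winning class index. `weights` would be the
    genetically-optimized modality weights in the paper's setting."""
    fused = sum(w * np.asarray(s) for w, s in zip(weights, modality_scores))
    return int(np.argmax(fused))
```

Each entry of `modality_scores` is one modality's score per candidate barrel, so fusion lets a strong modality outvote weaker ones in proportion to its learned weight.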

11.
Fingerprint pattern restoration by digital image processing techniques
Fingerprint evidence plays an important role in solving crimes. However, defective fingerprint patterns (lacking information needed for completeness) or contaminated patterns (containing undesirable information) make the identifying and recognizing processes difficult; unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms", so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science; for example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. The method shows potential for fingerprint pattern enhancement in the recognizing process (but not the identifying process). Synthetic and real images are used to demonstrate the capability of the proposed method, and the results of enhancing fingerprint patterns manually and with our method are evaluated and compared.

12.
Digital devices now play an important role in the lives of many in society. While they are used predominantly for legitimate purposes, digital crime does occur, and determining who has used a device can be important to a criminal investigation. Typically, digital forensic analysis is used for this purpose; however, biological trace evidence or fingerprints residing on a device's surfaces may also be of value. This work provides a preliminary study examining the potential for fingerprint recovery from computer peripherals, namely keyboards and mice. Our methodology is outlined, and the results, which indicate that print recovery is possible, are discussed. The findings are intended to support those operating at-scene in an evidence-collection capacity.

13.
An imaging technique capable of reducing glare, reflection, and shadows can greatly assist the process of toolmark comparison. In this work, a camera with near-infrared (near-IR) photographic capability was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. In visual comparisons, near-IR images and visible-light images were comparable: near-IR photography did not reveal more detail and could not effectively eliminate the reflections and glare associated with visible-light photography. Near-IR photography therefore showed little advantage over regular visible-light photography in the manual examination of fired evidence.

14.
Over the past decade, a substantial effort has been put into developing methods to classify file fragments. Throughout, it has been an article of faith that data fragments, such as disk blocks, can be attributed to different file types. This work critically examines the underlying assumptions and compares them to empirically collected data. Specifically, we focus most of our effort on surveying several common compressed data formats, and show that the simplistic conceptual framework of prior work is at odds with the realities of actual data. We introduce a new tool, zsniff, which allows us to analyze deflate-encoded data, and we use it to perform an empirical survey of deflate-coded text, images, and executables. The results offer a conceptually new type of classification capability that cannot be achieved by other means.
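Part of why naive fragment classification fails on compressed formats is that deflate drives the byte distribution toward uniform, so byte statistics look alike regardless of the original content type. A toy illustration using Shannon entropy of the byte histogram (this is background intuition, not the zsniff approach, which analyzes the deflate stream itself):

```python
import collections
import math
import zlib

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte-value distribution, in bits per byte
    (8.0 would be a perfectly uniform distribution)."""
    counts = collections.Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Low-entropy text becomes high-entropy after `zlib.compress`, collapsing the statistical differences that simple classifiers rely on.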

15.
It is now extremely easy to recapture high-resolution, high-quality images from LCD (liquid crystal display) screens. Recaptured image detection is an important digital forensic problem, as image recapture is often involved in the creation of a fake image in an attempt to increase its visual plausibility. State-of-the-art image recapture forensic methods rely on strong prior knowledge about the recapturing process and are based either on a combination of ad hoc features or on a specific and somewhat complicated dictionary-learning procedure. By contrast, we propose a conceptually simple yet effective method for recaptured image detection built upon simple image statistics and a very loose assumption about the recapturing process. The adopted features are pixel-wise correlation coefficients in image differential domains. Experimental results on two large databases of high-resolution, high-quality recaptured images, together with comparisons to existing methods, demonstrate the forensic accuracy and computational efficiency of the proposed method.
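The features named above can be sketched as correlation coefficients between neighbouring values in the horizontal and vertical difference images; the exact neighbourhood set used in the paper is not given here, so the four directions below are an assumption:

```python
import numpy as np

def differential_correlation_features(img):
    """Correlation coefficients between neighbouring values in the image
    difference (differential) domains -- a sketch of simple pixel-wise
    statistics (the exact feature set is an assumption)."""
    img = img.astype(float)
    feats = []
    for d in (np.diff(img, axis=0), np.diff(img, axis=1)):
        # correlation of horizontally and vertically adjacent differences
        feats.append(np.corrcoef(d[:, :-1].ravel(), d[:, 1:].ravel())[0, 1])
        feats.append(np.corrcoef(d[:-1, :].ravel(), d[1:, :].ravel())[0, 1])
    return feats
```

Recaptured images tend to show altered neighbour correlations (e.g. from the screen grid and the second camera pipeline), which is what a downstream classifier would exploit.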

16.
Investigating seized devices in digital forensics is becoming more and more difficult due to the increasing amount of data. Hence, a common procedure uses automated file identification, which reduces the amount of data an investigator has to examine by hand. Besides identifying exact duplicates, which is mostly solved using cryptographic hash functions, it is also helpful to detect similar data by applying approximate matching. Let x denote the number of digests in a database; then the lookup for a single similarity digest has complexity O(x). In other words, the digest has to be compared against all digests in the database. In contrast, cryptographic hash values are stored in binary trees or hash tables, and hence the lookup complexity for a single digest is O(log2(x)) or O(1), respectively. In this paper, we present and evaluate a concept that extends existing approximate matching algorithms, reducing the lookup complexity from O(x) to O(1). Instead of using multiple small Bloom filters (the common procedure), we demonstrate that a single, huge Bloom filter has far better performance. Our evaluation shows that current approximate matching algorithms are too slow (e.g., over 21 min to compare 4457 digests of a common file corpus against each other), while the improved version solves this challenge within seconds. A study of the precision and recall rates shows that our approach works as reliably as the original implementations. The benefit comes at a cost in accuracy: the comparison is now a file-against-set comparison, so it is no longer possible to see which file in the database was matched.
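The core data structure is a single large Bloom filter: inserting every chunk digest from the database into one bit array makes membership testing O(1), at the cost of knowing only *that* something in the set matched, not *which* file. A minimal sketch (the filter size and hash count below are arbitrary choices, not the paper's parameters):

```python
import hashlib

class BloomFilter:
    """A single large Bloom filter for constant-time membership tests,
    illustrating the file-against-set lookup described above."""

    def __init__(self, m_bits=1 << 20, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # derive k bit positions from salted SHA-256 digests of the item
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

A lookup touches only k bit positions regardless of how many digests were inserted, which is exactly the O(x) → O(1) reduction the paper evaluates.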

17.
The emergence of webOS on Palm devices has created new challenges and opportunities for digital investigators. With the purchase of Palm by Hewlett Packard, there are plans to use webOS on an increasing number and variety of computer systems. These devices can store substantial amounts of information relevant to an investigation, including digital photographs, videos, call logs, SMS/MMS messages, e-mail, remnants of Web browsing and much more. Although some files can be obtained from such devices with relative ease, the majority of information of forensic interest is stored in databases on a system partition that many mobile forensic tools do not acquire. This paper provides a methodology for acquiring and examining forensic duplicates of user and system partitions from a device running webOS. The primary sources of digital evidence on these devices are covered with illustrative examples. In addition, the recovery of deleted items from various areas on webOS devices is discussed.

18.
Visualization of latent fingerprints on metallic surfaces by applying electrostatic charging and adsorption is considered a promising chemical-free method: it is nondestructive and is effective in difficult situations such as aged fingerprint deposits or deposits exposed to environmental extremes. A portable electrostatic generator is readily accessible in a local forensic technology laboratory, as such generators are already widely used in the visualization of footwear impressions. In this study, a modified version of this electrostatic apparatus is proposed for latent fingerprint development and shows great potential for visualizing fingerprints on metallic surfaces such as cartridge cases. Results indicate that the experimental arrangement can successfully develop aged latent fingerprints on metal surfaces, and we demonstrate its effectiveness compared with existing conventional fingerprint recovery methods.

19.
Visible absorption spectra were recorded for single textile fibers using a microspectrophotometer based on a liquid crystal tunable filter. The spectra compared well with results from a conventional instrument. Advantages include very fast and simple sample preparation and easy comparison of multiple fibers at the same time. Advantages over extraction-dependent methods include applicability to extremely small samples, insensitivity to artifacts induced by variable extraction efficiencies, non-destructiveness, and much greater ease of use. Because an immense amount of information is collected in one experiment, good signal averaging is possible, along with multiple comparisons for each data set. The addition of a camera, a computer, and a liquid crystal tunable filter can transform a standard microscope into a microspectrophotometer capable of performing similar work.

20.
Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks, converted to proportionality indices (PIs), and compared to the PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g., orientation of the head, camera angle, and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs, but there has been limited empirical research quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken at different camera angles. The study examined facial measurements of 25 individuals from high-resolution photographs taken at different horizontal and vertical camera angles in a controlled environment. Results show that the variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals, suggesting that photoanthropometric facial comparison, as currently practiced, is unsuitable for elimination purposes. Preliminary investigation into the effects of camera distance and image resolution in poor-quality images suggests that such images are not an accurate representation of an individual's face; however, further work is required.
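The PI computation itself is simple: every inter-landmark distance is divided by a chosen baseline distance, making the indices invariant to uniform scale (i.e., camera distance) but, as the study shows, not to camera angle. A sketch with hypothetical landmark names:

```python
import math

def proportionality_indices(landmarks, baseline=("en_l", "en_r")):
    """Convert 2-D facial landmark coordinates into proportionality indices:
    each pairwise distance divided by a baseline distance. Landmark names
    ("en_l", "en_r", etc.) are illustrative, not a standard from the paper."""
    def dist(a, b):
        return math.dist(landmarks[a], landmarks[b])
    base = dist(*baseline)
    names = sorted(landmarks)
    return {(a, b): dist(a, b) / base
            for i, a in enumerate(names) for b in names[i + 1:]}
```

Because PIs are ratios, rescaling all coordinates leaves them unchanged; an out-of-plane head or camera rotation, however, changes the projected distances non-uniformly, which is the source of the variability the study measures.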
