Similar Literature
20 similar articles found.
1.
Patch-Match is an efficient algorithm for structural image editing and is available as a tool in popular commercial photo-editing software. The tool allows users to insert or remove objects in photos using information from similar scene content. Recently, a modified version of this algorithm was proposed as a counter-measure against Photo-Response Non-Uniformity (PRNU) based Source Camera Identification (SCI). The algorithm provides anonymity at a high rate (97%) and impedes PRNU-based SCI without requiring any additional information, leaving no known recourse for PRNU-based SCI. In this paper, we propose a method to identify the sources of Patch-Match-applied images by using randomized subsets of images together with traditional PRNU-based SCI methods. We evaluate the proposed method in two forensic scenarios in which an adversary applies the Patch-Match algorithm to distort the PRNU noise pattern in incriminating images taken with their own camera. Our results show that it is possible to link sets of Patch-Match-applied images back to their source camera even in the presence of images from unknown cameras. To the best of our knowledge, the proposed method represents the first counter-measure against the use of Patch-Match in the digital forensics literature.
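The classical PRNU-based SCI pipeline that Patch-Match attacks can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' method: the fingerprint, residuals, and noise levels are all simulated, and a real system would extract residuals from images with a denoising filter rather than generating them.

```python
import numpy as np

def estimate_fingerprint(residuals):
    # Average the noise residuals of many reference images; the shared
    # PRNU term survives the averaging while per-image noise cancels.
    return residuals.mean(axis=0)

def normalized_correlation(a, b):
    # Similarity score between a questioned residual and the fingerprint.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
prnu = rng.normal(size=(64, 64))  # simulated sensor fingerprint
refs = np.stack([prnu + rng.normal(scale=2.0, size=(64, 64)) for _ in range(20)])
fingerprint = estimate_fingerprint(refs)

# A residual from the same camera correlates with the fingerprint;
# one from a different camera does not.
same_cam = normalized_correlation(fingerprint, prnu + rng.normal(scale=2.0, size=(64, 64)))
other_cam = normalized_correlation(fingerprint, rng.normal(size=(64, 64)))
```

Averaging over randomized subsets of the questioned images, as the paper proposes, boosts the residual PRNU signal the same way the reference averaging does here.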

2.
《Digital Investigation》2014,11(2):111-119
To discriminate natural images from computer-generated graphics, a novel identification method is proposed based on features capturing the impact of color filter array (CFA) interpolation on the local correlation of photo response non-uniformity (PRNU) noise. Because CFA interpolation is generally present in the generation of natural images and influences the local correlation of PRNU, the differences between the PRNU correlations of natural images and those of computer-generated graphics are investigated. Nine histogram features are extracted from the local variance histograms of the PRNU to serve as identification features, and discrimination is performed by a support vector machine (SVM) classifier. Experimental results and analysis show that the method achieves an average identification accuracy of 99.43% and is robust against scaling, JPEG compression, rotation, and additive noise. Thus, it has great potential for use in image source pipeline forensics.
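The local-variance histogram features can be sketched as follows. The SVM stage is omitted, and the block size and binning are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def local_variance_map(noise, block=8):
    # Variance of the PRNU noise residual over non-overlapping blocks.
    h, w = noise.shape
    h, w = h - h % block, w - w % block
    tiles = noise[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.var(axis=(1, 3))

def histogram_features(var_map, bins=9):
    # Nine-dimensional normalized local-variance histogram, mirroring the
    # nine histogram features described in the abstract.
    hist, _ = np.histogram(var_map, bins=bins)
    return hist / hist.sum()

rng = np.random.default_rng(1)
residual = rng.normal(size=(64, 64))
features = histogram_features(local_variance_map(residual))
```

These nine values would then be fed, per image, to the SVM classifier.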

3.
Abstract: In this research, we examined whether fixed pattern noise, or more specifically Photo Response Non-Uniformity (PRNU), can be used to identify the source camera of heavily JPEG-compressed digital photographs of resolution 640 × 480 pixels. We extracted PRNU patterns from both reference and questioned images using a two-dimensional Gaussian filter and compared these patterns by calculating the correlation coefficient between them. Both the closed-set and open-set problems were addressed. In the closed-set setting, accuracy reached 83% for single images and 100% when around 20 questioned images were identified simultaneously; the correct source camera was chosen from a set of 38 cameras of four different types. For the open-set problem, decision thresholds were obtained for several numbers of simultaneously identified questioned images. The corresponding false rejection rates were unsatisfactory for single images but improved for simultaneous identification of multiple images.
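A minimal sketch of the extraction-and-correlation step, with a simple separable box blur as a pure-numpy stand-in for the paper's two-dimensional Gaussian filter; the PRNU pattern and scene content are simulated:

```python
import numpy as np

def smooth(img, k=5):
    # Separable box blur (a stand-in here for the 2-D Gaussian filter).
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

def prnu_residual(img):
    # Noise residual: image minus its smoothed version.
    return img - smooth(img)

def corr(a, b):
    # Correlation coefficient between two residual patterns.
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

rng = np.random.default_rng(2)
prnu = 0.5 * rng.normal(size=(64, 64))         # simulated sensor pattern
scene1 = smooth(rng.normal(size=(64, 64)), 9)  # smooth "scene content"
scene2 = smooth(rng.normal(size=(64, 64)), 9)

r_same = prnu_residual(scene1 + prnu)  # questioned image, same camera
r_ref = prnu_residual(scene2 + prnu)   # reference image, same camera
r_none = prnu_residual(scene2)         # image without the pattern
```

Residuals sharing the same sensor pattern correlate strongly even though their scene content differs.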

4.
Under certain conditions, the photo-response non-uniformity (PRNU) of an imaging sensor can serve as a device-specific characteristic for identifying the source of video footage. In this paper, the wavelet filter defined by Lukáš et al. [1] is used to extract the PRNU pattern from video files re-encoded in AVC format, and the effects of changing the resolution and encoding parameters on the device's PRNU pattern are studied. We find that for some re-encoded videos, the recording device can still be identified from its PRNU pattern.

5.
Each digital camera has an intrinsic fingerprint that is unique to that camera. This device fingerprint can be extracted from an image and compared with a reference device fingerprint to determine the image's origin. The filters proposed to accomplish this are growing increasingly complex. In this note, we use a relatively simple algorithm to extract the sensor noise from images. It has the advantages of being easy to implement and parallelize, and it runs faster than the wavelet filter that is common for this application. In addition, we compare the performance with a simple median filter and assess whether a previously proposed fingerprint enhancement technique improves results. Experiments are performed on approximately 7500 images originating from 69 cameras, and the results are compared with the often-used wavelet filter. Despite the simplicity of the proposed method, its performance exceeds that of the common wavelet filter while reducing the time needed for extraction.
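The abstract does not spell out the proposed filter in detail, but the median-filter baseline it is compared against is easy to sketch in numpy:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_residual(img, k=3):
    # Sensor-noise estimate: image minus its k x k median-filtered version.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = sliding_window_view(padded, (k, k))  # (H, W, k, k) view
    return img - np.median(windows, axis=(2, 3))

rng = np.random.default_rng(3)
img = rng.normal(size=(32, 32))
residual = median_residual(img)
```

The vectorized sliding-window formulation is trivially parallelizable, which is the property the note emphasizes.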

6.
Identifying the source camera of images is becoming increasingly important. A popular approach uses a type of pattern noise called photo-response non-uniformity (PRNU): the noise of an image contains a pattern that can serve as a fingerprint. However, the PRNU-based approach is sensitive to scene content and image intensity. Identification is poor in areas of low or saturated intensity, or in areas with complicated texture. The reliability of different regions is difficult to model because it depends on the interaction between the scene content and the characteristics of the denoising filter used to extract the noise. In this paper, we show that the local variance of the noise residual can measure the reliability of a pixel for PRNU-based source camera identification. We therefore propose to use local variance to characterize the severity of scene-content artifacts. The local variance is then incorporated into the generalized matched filter and the peak-to-correlation-energy (PCE) detector to provide an optimal framework for signal detection. The proposed method is tested against several state-of-the-art methods, and the experimental results show that the local-variance-based approach outperforms them in identification accuracy.
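The peak-to-correlation-energy (PCE) detector into which the paper incorporates local variance can be sketched as follows. This is the standard unweighted form on simulated data, not the paper's variance-weighted version:

```python
import numpy as np

def pce(residual, fingerprint, exclude=5):
    # Peak-to-correlation energy: squared correlation peak divided by the
    # mean squared correlation outside a small neighbourhood of the peak.
    c = np.real(np.fft.ifft2(np.fft.fft2(residual) * np.conj(np.fft.fft2(fingerprint))))
    peak = c.max()
    py, px = np.unravel_index(c.argmax(), c.shape)
    mask = np.ones(c.shape, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    return float(peak ** 2 / np.mean(c[mask] ** 2))

rng = np.random.default_rng(4)
fp = rng.normal(size=(64, 64))                     # simulated fingerprint
matched = pce(fp + rng.normal(size=(64, 64)), fp)  # same camera
unmatched = pce(rng.normal(size=(64, 64)), fp)     # different camera
```

The paper's contribution is, in effect, to down-weight each pixel's contribution to this correlation by the local variance of the residual before the detector is applied.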

7.
8.
Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines their application. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. First, a color image is converted to grayscale, and a wavelet-transform-based denoising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the denoised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding-window operations divide the image into sub-blocks. Finally, tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, achieves a good detection rate, and is robust against JPEG compression, noise, rotation, scaling, and blurring.
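The per-block feature vector can be sketched as follows. The bin count and histogram range (intensities assumed in [0, 1]) are illustrative, and "average energy gradient" is interpreted here as the mean squared gradient magnitude, which is an assumption:

```python
import numpy as np

def block_features(block, bins=32):
    # Per-block features: variance, Shannon entropy of the intensity
    # histogram, and average energy gradient (mean squared gradient).
    variance = float(block.var())
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    gy, gx = np.gradient(block.astype(float))
    aeg = float(np.mean(gx ** 2 + gy ** 2))
    return np.array([variance, entropy, aeg])

rng = np.random.default_rng(5)
textured = block_features(rng.random((16, 16)))  # busy block
flat = block_features(np.ones((16, 16)))         # uniform block
```

In the scheme, these features are computed for every sub-block and correlated against the whole-image features to flag tampered regions.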

9.
Owing to the abundance of free image and video editing software on the Internet, tampering with digital images and videos has become very easy. Validating the integrity of images or videos and detecting any attempt at forgery without active forensic techniques such as digital signatures or digital watermarks is a major challenge for researchers. Passive forensic techniques, unlike active ones, do not need any pre-embedded information about the image or video. This paper presents a comprehensive review of recent developments in digital image and video forensics using noise features. Existing methods of image and video forensics have demonstrated the importance of noise and encouraged extensive research in this field. The forensic tasks covered here are mainly source identification and forgery detection in images and video using noise features; the various methods for both tasks are reviewed and compared. The overall objective is to give researchers a broad perspective on the many aspects of image and video forensics based on noise features. The conclusion discusses the importance of noise features and the challenges encountered by the different noise-based image and video forensic methods.

10.
The combination of photographs taken at two or three wavelengths at and bracketing an absorbance peak indicative of a particular compound can lead to an image with enhanced visualization of the compound. This procedure works best for compounds with absorbance bands that are narrow compared with "average" chromophores. If necessary, the photographs can be taken with different exposure times to ensure that sufficient light from the substrate is detected at all three wavelengths. The combination of images is readily performed if the images are obtained with a digital camera and are then processed using an image processing program. Best results are obtained if linear images at the peak maximum, at a slightly shorter wavelength, and at a slightly longer wavelength are used. However, acceptable results can also be obtained under many conditions if non-linear photographs are used or if only two wavelengths (one of which is at the peak maximum) are combined. These latter conditions are more achievable by many "mid-range" digital cameras. Wavelength selection can either be by controlling the illumination (e.g., by using an alternate light source) or by use of narrow bandpass filters. The technique is illustrated using blood as the target analyte, using bands of light centered at 395, 415, and 435 nm. The extension of the method to detection of blood by fluorescence quenching is also described.
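The combination step can be illustrated with a toy example. The scheme below (mean of the two flanking images minus the on-peak image) is one simple way to realize the described enhancement, not necessarily the exact arithmetic used in the paper, and the pixel values are invented:

```python
import numpy as np

def enhance_narrowband(img_short, img_peak, img_long):
    # A compound with a narrow absorbance band darkens only the on-peak
    # image; averaging the two flanking images and subtracting the on-peak
    # image therefore highlights that compound against the substrate.
    return 0.5 * (img_short + img_long) - img_peak

# Toy 1x2 "scene": left pixel is plain substrate, right pixel carries
# blood, which absorbs strongly at 415 nm but much less at 395 and 435 nm.
i395 = np.array([[0.8, 0.7]])
i415 = np.array([[0.8, 0.2]])  # strong on-peak absorption at the blood pixel
i435 = np.array([[0.8, 0.7]])
enhanced = enhance_narrowband(i395, i415, i435)
```

The substrate, reflecting similarly at all three wavelengths, cancels to zero, while the narrowband absorber stands out.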

11.
Reflected ultraviolet imaging techniques allow for the visualization of evidence normally outside the human visible spectrum. Specialized digital cameras possessing extended sensitivity can be used for recording reflected ultraviolet radiation. Currently, there is a lack of standardized methods for ultraviolet image recording and processing using digital cameras, potentially limiting their implementation and interpretation. A methodology is presented for processing ultraviolet images based on linear responses and the sensitivity of the respective color channels. The methodology is applied to a FujiS3 UVIR camera and a modified Nikon D70s camera to reconstruct their respective spectral sensitivity curves between 320 and 400 nm. This method results in images with low noise and high contrast, suitable for qualitative and/or quantitative analysis. The application of this methodology is demonstrated in the recording of latent fingerprints.

12.
A video surveillance camera (VSC) is an important source of information during investigations, especially when used as a tool for extracting verified and reliable forensic measurements. In this study, some aspects of extracting human height from VSC video frames are analyzed with the aim of identifying and mitigating error sources that can strongly affect the measurement, specifically those introduced by the lens distortion present in wide-field-of-view lenses such as those of VSCs. A weak model that cannot properly describe and correct the lens distortion can introduce systematic errors. This study focuses on camera calibration to verify human height extraction by the Amped FIVE software, which is adopted by the forensic science laboratories of the Carabinieri Force (RaCIS), Italy. A stable and reliable approach to camera calibration is needed, since investigators must deal with different cameras while inspecting a crime scene. The performance of the software in correcting distorted images is compared with a single-view self-calibration technique. Both approaches were applied to several frames acquired by a fish-eye camera, and the height of five different people was then measured. Moreover, two actual cases, both characterized by low-resolution and distorted images, were also analyzed; the heights of four known persons were measured and used as reference values for validation. Results show no significant difference between the two calibration approaches for the fish-eye camera in the test field, while differences were found in the measurements on the actual cases.
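A lens-distortion model of the kind the study discusses is typically a truncated radial polynomial. A one-parameter version, with an invented coefficient, can be inverted by fixed-point iteration:

```python
def undistort_point(xd, yd, k1):
    # Invert the one-parameter radial model x_d = x_u * (1 + k1 * r_u^2)
    # by fixed-point iteration (image centre taken as the origin).
    xu, yu = xd, yd
    for _ in range(30):
        r2 = xu * xu + yu * yu
        xu, yu = xd / (1 + k1 * r2), yd / (1 + k1 * r2)
    return xu, yu

# Forward-distort a known point with an invented coefficient, then recover it.
k1 = 1e-5
x_true, y_true = 30.0, 40.0
factor = 1 + k1 * (x_true ** 2 + y_true ** 2)
x_est, y_est = undistort_point(x_true * factor, y_true * factor, k1)
```

A model with too few radial terms for a fish-eye lens leaves exactly the kind of residual systematic error that biases height measurements at the image periphery.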

13.
《Digital Investigation》2014,11(1):67-77
The detection of stego images, used as carriers for secret messages in nefarious activities, forms the basis of blind image steganalysis. The main issue in blind steganalysis is the lack of knowledge about the steganographic technique applied to the image. Feature extraction approaches suited to blind steganalysis have either dealt with only a few features or with a single domain of the image, and they yield low detection percentages. The main objective of this paper is to improve the detection percentage. The focus is on blind steganalysis of JPEG images through a process of dilation that splits the given image into RGB components and then transforms each component into three domains: frequency, spatial, and wavelet. Features extracted from each domain are given to a Support Vector Machine (SVM) classifier that labels the image as stego or clean. The proposed dilation process was tested in experiments with varying embedded text sizes and varying numbers of extracted features on the trained SVM classifier. Overall Success Rate (OSR) was chosen as the performance metric, and the proposed solution was found to be effective compared with existing solutions, detecting a higher percentage of stego images.
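The multi-domain feature extraction can be sketched as follows. Only the spatial and frequency domains are shown (the wavelet domain is omitted), and the chosen statistics are illustrative stand-ins for the paper's feature set:

```python
import numpy as np

def multidomain_features(rgb):
    # Per-channel statistics from the spatial and frequency domains - a
    # reduced stand-in for the paper's spatial/frequency/wavelet features.
    feats = []
    for c in range(3):
        ch = rgb[..., c].astype(float)
        feats += [float(ch.mean()), float(ch.var())]  # spatial domain
        mag = np.abs(np.fft.fft2(ch))
        feats += [float(mag.mean()), float(mag.var())]  # frequency domain
    return np.array(feats)

rng = np.random.default_rng(6)
img = rng.random((32, 32, 3))  # stand-in for a decoded JPEG image
features = multidomain_features(img)
```

The resulting vector (here 12-dimensional) would be the per-image input to the SVM classifier.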

14.
Closed-circuit television (CCTV) security systems are widely used in banks, convenience stores, and other facilities. They help deter crime and document criminal activity. However, while CCTV cameras that provide an overview of a monitored region are useful for criminal investigation, they are often also needed for object identification (e.g., vehicle numbers, persons), which demands higher image quality. In this paper, we propose a framework for improving the image quality of CCTV security systems based on motion detection technology. There are two cameras in the framework: one (camera A) is fixed-focus with a zoom lens for moving-object detection, and the other (camera B) is variable-focus with an auto-zoom lens to capture higher-resolution images of objects of interest. When camera A detects a moving object in the monitored area, camera B, driven by an auto-zoom focus control algorithm, takes a higher-resolution image of the object. Experimental results show that the proposed framework improves the likelihood that images obtained from stationary unattended CCTV cameras are sufficient to enable law enforcement officials to identify suspects and other objects of interest.
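The trigger logic of camera A can be sketched with plain frame differencing. The thresholds are invented, and camera B's auto-zoom focus control is not modeled:

```python
import numpy as np

def motion_detected(prev, curr, thresh=25, min_pixels=50):
    # Frame differencing: flag motion when enough pixels change by more
    # than `thresh` grey levels between consecutive frames.
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return int((diff > thresh).sum()) >= min_pixels

rng = np.random.default_rng(7)
frame = rng.integers(0, 256, size=(48, 48))
moved = frame.copy()
moved[10:30, 10:30] = 255  # a bright object enters the scene
```

In the framework, a positive result from this check on camera A's feed would cue camera B to zoom in on the changed region.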

15.
As surveillance cameras are increasingly installed, their footage is often submitted as evidence of crime, but only scant detail, such as facial features and clothing, can usually be obtained because of limited camera performance. Height, however, is relatively insensitive to camera performance. This paper studies a height measurement method using images from CCTV. Height information was obtained via photogrammetry, using reference points in the photographed area and calculating the relationship between 3D space and the 2D image through linear and nonlinear calibration. Using this correlation, the paper proposes a height measurement method that projects a 3D virtual ruler onto the image. The method has been shown to produce more stable values, within the range of data convergence, than existing methods.
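The 3D-virtual-ruler idea can be sketched with a toy calibrated pinhole camera. The camera matrix, foot position, and candidate heights below are all invented for illustration:

```python
import numpy as np

def project(P, X):
    # Pinhole projection of a 3-D point with a 3x4 camera matrix.
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]

def estimate_height(P, foot, head_px, candidates):
    # "Virtual ruler": project candidate heights above the foot point and
    # keep the one whose image lands closest to the observed head pixel.
    errors = [np.linalg.norm(project(P, foot + np.array([0.0, h, 0.0])) - head_px)
              for h in candidates]
    return candidates[int(np.argmin(errors))]

# Toy calibration: focal length 100, principal point (50, 50), world Y up,
# person standing 10 units in front of the camera.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
foot = np.array([0.0, 0.0, 10.0])
head_px = project(P, foot + np.array([0.0, 1.7, 0.0]))  # observed head pixel
candidates = [round(1.5 + 0.05 * i, 2) for i in range(9)]  # 1.50 ... 1.90 m
height = estimate_height(P, foot, head_px, candidates)
```

In practice P comes from the linear/nonlinear calibration step, and the ruler is rendered onto the frame rather than searched numerically.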

16.
A symmetry perceiving adaptive neural network and facial image recognition
The paper deals with the forensic problem of comparing nearly front-view facial images for personal identification. The human recognition process for such problems is based primarily on holistic as well as feature-wise symmetry perception, aided by subjective analysis for detecting ill-defined features. We attempt to model this process by designing a robust symmetry-perceiving adaptive neural network. The pair of images to be compared is presented to the proposed neural network (NN) as source (input) and target images. The NN learns the symmetry between the pair of images by analysing examples of associated feature pairs belonging to the source and target images. When preparing a paired example of associated features for training, selecting a particular feature on the source image as a unique pixel requires associating it with the corresponding feature on the target image; in practice, however, it is not always possible to fix the latter feature as a unique pixel because of pictorial ambiguity. The robust, fault-tolerant NN handles this situation by allowing the associated target feature to be fixed as a rectangular array of pixels rather than as a unique pixel, which is difficult to do with certainty. From such a pair of sets of associated features, the NN searches out the proper locations of the target features from the sets of ambiguous target features by fuzzy analysis during learning. If any of the target features found by the NN lies outside the prespecified zone, the training is unsuccessful; this amounts to non-existence of symmetry between the pair of images and confirms non-identity.
In the case of successful training, the NN adapts to the appropriate symmetry relation between the pair of images. When the source image is then input to the trained NN, it outputs a processed source image that is superimposable on the target image; identity may subsequently be established by examining the detailed match in machine-made superimposed/composite images, which are also suitable for presentation in court. The performance of the proposed NN has been tested on various cases, including simulated ones, and it is hoped that it will serve as a working tool for forensic anthropologists.

17.
This paper demonstrates the feasibility of automating forensic hair analysis and comparison using neural network explanation systems (NNESs). Our system takes as input microscopic images of two hairs and produces a classification decision as to whether or not the hairs came from the same person. Hair images were captured using a NEXTDimension video board in a NEXTDimension color turbo computer connected to a video camera; image processing was done on an SGI Indigo workstation. Each image is segmented into a number of pieces appropriate for classifying different features, and a variety of image processing techniques are used to enhance this information. Wavelet analysis and the Haralick texture algorithm are used to compress large amounts of data into smaller yet representative feature sets. Neural networks then perform feature classification, and statistical tests determine the degree of match between the resulting collections of hair feature vectors. An important issue in automating any task in criminal investigations is the reliability and understandability of the resulting system; to address this concern, we have developed methods that explain the neural network's behavior using a decision tree. The system achieved 83% hair-match accuracy using 5 of the 21 morphological characteristics used by experts, which shows promise for a fuller-scale system. While an automated system would not replace the expert, it would ease the task by pre-processing the large amount of data with which the expert must contend.
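The Haralick texture step can be illustrated with the "contrast" statistic computed from a grey-level co-occurrence matrix; the quantization level and the horizontal pixel offset are illustrative choices, and the inputs are synthetic stand-ins for hair micrographs:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    # Haralick "contrast": quantize to `levels` grey levels, accumulate a
    # horizontal co-occurrence matrix, then weight by squared level distance.
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    return float((glcm * (i - j) ** 2).sum())

rng = np.random.default_rng(8)
smooth_patch = glcm_contrast(np.full((16, 16), 0.4))  # uniform patch
rough_patch = glcm_contrast(rng.random((16, 16)))     # highly textured patch
```

Statistics like this, alongside wavelet coefficients, compress each segmented hair region into a compact feature vector for the neural network.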

18.
This paper presents a novel digital watermarking technique that uses face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of the fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and preserve the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint; the degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
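A minimal non-blind DWT watermarking round trip, using a hand-rolled one-level Haar transform. The embedding strength, the choice of the HH sub-band, and the bit payload are illustrative assumptions, not the paper's face/text watermark scheme:

```python
import numpy as np

def haar2(img):
    # One-level 2-D Haar transform: returns the LL, LH, HL, HH sub-bands.
    a = (img[0::2] + img[1::2]) / 2
    d = (img[0::2] - img[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2(LL, LH, HL, HH):
    # Exact inverse of haar2.
    h, w = LL.shape
    out = np.zeros((2 * h, 2 * w))
    a, b = LL + LH, LL - LH
    c, e = HL + HH, HL - HH
    out[0::2, 0::2], out[0::2, 1::2] = a + c, b + e
    out[1::2, 0::2], out[1::2, 1::2] = a - c, b - e
    return out

def embed_bits(img, bits, alpha=0.1):
    # Additively embed +/- alpha into the first HH coefficients.
    LL, LH, HL, HH = haar2(img)
    flat = HH.ravel().copy()
    flat[:len(bits)] += alpha * (2 * np.asarray(bits) - 1)
    return ihaar2(LL, LH, HL, flat.reshape(HH.shape))

def extract_bits(marked, original, n):
    # Non-blind extraction: compare HH sub-bands of marked and original.
    HHm, HHo = haar2(marked)[3], haar2(original)[3]
    return ((HHm - HHo).ravel()[:n] > 0).astype(int).tolist()

rng = np.random.default_rng(9)
image = rng.random((16, 16))  # stand-in for a fingerprint texture region
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(image, bits)
recovered = extract_bits(marked, image, len(bits))
```

Embedding in the high-frequency HH sub-band keeps the perturbation in textured regions, which is the same rationale the paper gives for choosing texture regions of the fingerprint.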

19.
20.