Similar Articles
1.
Source camera identification (SCI) is an important topic in image forensics. One of the most effective fingerprints for linking an image to its source camera is the sensor pattern noise, which is estimated as the difference between an image and its denoised version. It is widely believed that the performance of sensor-based SCI relies heavily on the denoising filter used. This study proposes a novel sensor-based SCI method using a content adaptive guided image filter (CAGIF). Thanks to the low complexity of the CAGIF, the proposed method is much faster than the state-of-the-art methods, a significant advantage considering the potential real-time applications of SCI. Beyond the speed advantage, experimental results also show that the proposed method achieves comparable or better accuracy than the state-of-the-art methods.
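The general sensor-fingerprint pipeline this abstract builds on can be summarized in a short sketch. The following is a minimal illustration under stated assumptions, not the paper's method: a plain Gaussian filter stands in for the content adaptive guided image filter, and all function names are hypothetical.

```python
# Minimal sketch of a PRNU-style sensor fingerprint pipeline.
# A Gaussian filter stands in for the paper's CAGIF denoiser (assumption).
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Estimate sensor noise as the image minus its denoised version."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Average the residuals of many images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(residual, fingerprint):
    """Normalized correlation used to link a query image to a camera."""
    r = residual - residual.mean()
    f = fingerprint - fingerprint.mean()
    return float((r * f).sum() / (np.linalg.norm(r) * np.linalg.norm(f)))
```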

2.
3.
Copy-move is one of the most common image tampering operations: a part of the image content is copied and then pasted to another part of the same image. To make the forgery visually convincing and conceal its traces, the copied part may be subjected to post-processing operations such as rotation and blur. In this paper, we propose a copy-move forgery detection algorithm based on the polar cosine transform and approximate nearest neighbor searching. The algorithm starts by dividing the image into overlapping patches. Robust and compact features are extracted from the patches by exploiting the rotational invariance and orthogonality of the polar cosine transform. Potential copy-move pairs are then detected by identifying patches with similar features, which is formulated as approximate nearest neighbor searching and accomplished by means of locality-sensitive hashing (LSH). Finally, post-verification is performed on the potential pairs to filter out false matches and improve detection accuracy. The LSH-based similar-patch identification and the post-verification methods are the two major novelties of the proposed work. Experimental results show that the proposed method produces accurate detection results and is highly robust to various post-processing operations. In addition, the LSH-based similar-patch detection scheme is much more effective than the widely used lexicographical sorting.
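A minimal sketch of the LSH stage described above may help. Random-projection hashing stands in for the paper's (unspecified) LSH family, and generic feature rows stand in for polar cosine transform features; all names and parameters are illustrative.

```python
# Sketch of LSH-based candidate pair detection for copy-move forensics.
# Random hyperplane hashing is an assumption; the paper's exact LSH
# scheme and feature extraction are not reproduced here.
import numpy as np
from collections import defaultdict

def find_candidate_pairs(features, n_bits=16, seed=0):
    """Group patch features by random-projection hash; pairs sharing a
    bucket are candidate copy-move matches to be post-verified."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(features.shape[1], n_bits))
    bits = (features @ planes) > 0            # sign of random projections
    buckets = defaultdict(list)
    for idx, row in enumerate(bits):
        buckets[row.tobytes()].append(idx)    # same hash -> same bucket
    pairs = []
    for members in buckets.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.append((members[i], members[j]))
    return pairs
```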

4.
Owing to the abundance of free image and video editing software on the Internet, tampering with digital images and videos has become very easy. Validating the integrity of images or videos and detecting any attempt at forgery without active forensic techniques such as digital signatures or digital watermarks is a major challenge for researchers. Passive forensic techniques, unlike active ones, do not need any pre-embedded information about the image or video. This paper presents a comprehensive review of recent developments in digital image and video forensics using noise features. Previously existing methods of image and video forensics have proved the importance of noise and motivate extensive research in this field. The forensic tasks covered here are mainly source identification and forgery detection in images and video using noise features; the various source identification and forgery detection methods based on noise features are reviewed and compared for both images and video. The overall objective of this paper is to give researchers a broad perspective on the various aspects of image and video forensics using noise features. The conclusion discusses the importance of noise features and the challenges encountered by the different noise-based image and video forensic methods.

5.
Digital Investigation, 2014, 11(2): 111-119
To discriminate natural images from computer-generated graphics, a novel identification method is proposed based on how color filter array (CFA) interpolation affects the local correlation of photo response non-uniformity (PRNU) noise. Since CFA interpolation generally occurs in the generation of natural images and influences the local correlation of PRNU, the differences between the PRNU correlations of natural images and those of computer-generated graphics are investigated. Nine-dimensional histogram features are extracted from the local variance histograms of the PRNU to serve as identification features. Discrimination is performed with a support vector machine (SVM) classifier. Experimental results and analysis show that the method achieves an average identification accuracy of 99.43% and is robust against scaling, JPEG compression, rotation, and additive noise. It therefore has great potential for use in image source pipeline forensics.
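As a rough illustration of the feature described above, the following sketch computes a local variance histogram of a PRNU residual and feeds it to an SVM. The denoiser, window size, bin count, and histogram range are assumptions, not the paper's values.

```python
# Sketch of local-variance-histogram features over a PRNU residual.
# Denoiser, window, and bin range below are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter
from sklearn.svm import SVC

def local_variance(residual, size=5):
    """Local variance in a sliding window: E[x^2] - E[x]^2."""
    mean = uniform_filter(residual, size)
    mean_sq = uniform_filter(residual ** 2, size)
    return np.clip(mean_sq - mean ** 2, 0, None)

def variance_histogram(img, bins=9, vmax=25.0):
    img = img.astype(np.float64)
    residual = img - gaussian_filter(img, 1.0)   # PRNU-style residual
    var = local_variance(residual)
    hist, _ = np.histogram(var, bins=bins, range=(0.0, vmax), density=True)
    return hist

# Usage sketch (hypothetical data):
# clf = SVC(kernel='rbf').fit([variance_histogram(im) for im in train], labels)
```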

6.
Source camera identification is an emerging field in digital image forensics that aims to identify the camera used to capture a given image. The technique uses photo response non-uniformity (PRNU) noise as a camera fingerprint, since it is a unique characteristic capable of distinguishing images even when they are captured by similar cameras. Most existing PRNU-based approaches are very sensitive to the random noise components in the estimated PRNU, and they are not robust when simple manipulations are performed on the images. Hence, a new feature-based PRNU approach is proposed for source camera identification, choosing features that are robust to image manipulations. The PRNU noise is extracted from the images using a wavelet-based denoising method and is represented by higher order wavelet statistics (HOWS), which are invariant to image manipulations and geometric variations. The features are fed to support vector machine classifiers to identify the source camera of a given image, and the results are verified with ten-fold cross-validation. Experiments carried out on images captured by various cell phone cameras demonstrate that the proposed algorithm identifies the source camera with good accuracy. The technique can differentiate images even when they are captured by similar cameras of the same make and model. The analysis also shows that the proposed technique remains robust when the images are subjected to simple manipulations or geometric variations.
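The HOWS representation mentioned above can be sketched as simple statistics over wavelet subbands of the noise residual. The wavelet, decomposition depth, and statistic set below are assumptions; `pywt` supplies the decomposition.

```python
# Sketch of higher order wavelet statistics (HOWS) over a noise residual.
# Wavelet ('db4'), level count, and the chosen statistics are assumptions.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def hows_features(residual, wavelet='db4', levels=3):
    """Mean/variance/skewness/kurtosis of each detail subband."""
    coeffs = pywt.wavedec2(residual, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:          # each entry is (cH, cV, cD)
        for band in detail:
            v = band.ravel()
            feats += [v.mean(), v.var(), skew(v), kurtosis(v)]
    return np.array(feats)
```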

7.
In our society, digital images are a powerful and widely used communication medium, with an important impact on our lives. In recent years, the advent of high-performance commodity hardware and improved human-computer interfaces has made it relatively easy to create fake images. Modern, easy-to-use image processing software enables forgeries that are undetectable by the naked eye. In this work we propose a method to automatically detect and localize duplicated regions in digital images. The presence of duplicated regions in an image may signify a common type of forgery called copy-move forgery. The method is based on blur moment invariants, which allow successful detection of copy-move forgery even when blur degradation, additional noise, or arbitrary contrast changes are present in the duplicated regions. These modifications are commonly used to conceal the traces of copy-move forgery. Our method works equally well for lossy formats such as JPEG. We demonstrate the method on several images affected by copy-move forgery.

8.
Patch-Match is an efficient algorithm for structural image editing, available as a tool in popular commercial photo-editing software. The tool allows users to insert or remove objects from photos using information from similar scene content. Recently, a modified version of this algorithm was proposed as a counter-measure against Photo-Response Non-Uniformity (PRNU) based source camera identification (SCI). That algorithm can provide anonymity at a high rate (97%) and impede PRNU-based SCI without requiring any other information, leaving no known recourse for PRNU-based SCI. In this paper, we propose a method to identify the sources of Patch-Match-applied images by using randomized subsets of images together with traditional PRNU-based SCI methods. We evaluate the proposed method on two forensic scenarios in which an adversary uses the Patch-Match algorithm to distort the PRNU noise pattern in incriminating images taken with her camera. Our results show that it is possible to link sets of Patch-Match-applied images back to their source camera even in the presence of images from unknown cameras. To the best of our knowledge, the proposed method is the first counter-measure against the use of Patch-Match in the digital forensics literature.

9.
With the availability of powerful editing software and sophisticated digital cameras, region duplication is becoming more and more popular in image manipulation, where part of an image is pasted to another location to conceal undesirable objects. Most existing techniques for detecting such tampering come at the cost of high computational complexity. In this paper, we present an efficient and robust approach to detect this specific artifact. First, the original image is divided into fixed-size blocks and the discrete cosine transform (DCT) is applied to each block, so that the DCT coefficients represent each block. Second, each transformed block is represented by a circle block, and four features are extracted to reduce its dimensionality. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched using a preset threshold. To make the algorithm more robust, additional parameters are introduced to remove falsely matched blocks. Experimental results show that the proposed scheme is robust not only to multiple copy-move forgeries but also to blurring and noise addition, all with low computational complexity.
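The block-DCT and lexicographic-sorting steps described above can be sketched directly. The four circle-block features are replaced here by a low-frequency DCT prefix, and the block size, threshold, and minimum-offset check are illustrative assumptions.

```python
# Sketch of block-DCT copy-move detection with lexicographic sorting.
# Feature choice (DCT prefix) and all thresholds are assumptions.
import numpy as np
from scipy.fftpack import dct

def block_dct_features(img, block=8, keep=4):
    feats, positions = [], []
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            b = img[y:y + block, x:x + block].astype(np.float64)
            c = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            feats.append(c.flatten()[:keep])     # low-frequency prefix
            positions.append((y, x))
    return np.array(feats), positions

def matched_pairs(feats, positions, thresh=0.5, min_dist=16):
    """Sort features lexicographically; compare each row to its neighbor."""
    order = np.lexsort(feats.T[::-1])            # first column is primary key
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[a] - feats[b]) < thresh:
            (y1, x1), (y2, x2) = positions[a], positions[b]
            if abs(y1 - y2) + abs(x1 - x2) >= min_dist:  # skip trivial neighbors
                pairs.append((positions[a], positions[b]))
    return pairs
```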

10.
11.
The feasibility of using 2D-intensity and 3D-topography images from a non-invasive Chromatic White Light (CWL) sensor for the age determination of latent fingerprints is investigated. The proposed method might provide the means to solve the so far unresolved issue of determining a fingerprint's age in forensics. In numerous experiments for an indoor crime scene using selected surfaces, different influences on the aging of fingerprints are investigated and the resulting aging variability is determined in terms of inter-person, intra-person, inter-finger and intra-finger variation. The main influence factors are shown to be sweat composition, temperature, humidity, wind, UV radiation, surface type, contamination of the finger with water-containing substances, resolution and measured area size, whereas contact time, contact pressure and smearing of the print seem to be of minor importance. These influences lead to a certain experimental variability in inter-person and intra-person variation, which is higher than the inter-finger and intra-finger variation. Comparing the aging behavior of 17 different features using 1490 time series with a total of 41,520 fingerprint images, the great potential of the CWL technique in combination with the binary pixel feature from prior work is shown. In three different experiments classifying fingerprints into the two time classes [0, 5 h] and [5, 24 h], a maximum classification performance of 79.29% (kappa = 0.46) is achieved for the general case and is further improved for special cases. The statistical significance of the two best-performing features (both binary pixel versions based on 2D-intensity images) is shown, and a feature fusion is performed, highlighting the strong dependency of the features on each other. It is concluded that the method might be combined with additional capturing devices, such as microscopes or spectroscopes, into a very promising age estimation scheme.

12.
A video can be manipulated using synthetic zooming without resorting to state-of-the-art video forgery techniques. Synthetic zooming is performed by upscaling the individual frames of a video with varying scale factors and then cropping them to the original frame size. The manipulated frames resemble genuine natural (optical) camera-zoomed frames and hence may be misclassified as pristine by video forgery detection algorithms. Even if such a video is classified as forged, forensic investigators may ignore the result, believing it to be part of an optical camera zooming activity. Hence, synthetic zooming can serve as an anti-forensic method that eliminates digital evidence. In this paper, we propose a method for differentiating optical camera zooming from synthetic zooming for video tampering detection, using pixel variance correlation and sensor pattern noise as features. Experimental results on a dataset of 3200 videos show the effectiveness of the proposed method.
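The synthetic zooming manipulation itself, upscaling each frame and center-cropping back to the original size, is straightforward to sketch; the scale factor and interpolation choice below are illustrative, not taken from the paper.

```python
# Sketch of the synthetic zooming manipulation the paper detects:
# upscale a frame, then center-crop it back to its original size.
import cv2

def synthetic_zoom(frame, scale=1.2):
    h, w = frame.shape[:2]
    up = cv2.resize(frame, None, fx=scale, fy=scale,
                    interpolation=cv2.INTER_LINEAR)
    uh, uw = up.shape[:2]
    y0, x0 = (uh - h) // 2, (uw - w) // 2
    return up[y0:y0 + h, x0:x0 + w]   # same size as input, zoomed content
```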

13.
Science & Justice, 2019, 59(4): 390-404
When a bullet is fired from a barrel, random imperfections in the interior surface of the barrel imprint 3-D micro-structures on the bullet surface that are seen as striations. Despite being random and non-stationary in nature, these striations are known to be consistently reproduced in a unique pattern on every bullet, which is the key idea in bullet identification. Common procedures in automatic bullet identification include extracting a feature profile from the bullet image, smoothing the profile, and comparing profiles using normalized cross-correlation. Since cross-correlation-based comparison is susceptible to high-frequency noise and nonlinear baseline drift, profile smoothing is a critical step. In previous work, we treated bullet images as nonlinear, non-stationary processes and applied ensemble empirical mode decomposition (EEMD) as a preprocessing algorithm for smoothing and feature extraction. Using EEMD, each averaged bullet profile was decomposed into several scales known as intrinsic mode functions (IMFs); by choosing an appropriate range of scales, the resulting smoothed profile contained less high-frequency noise and no nonlinear baseline drift. However, choosing the proper number of IMFs to suppress high-frequency noise was a manual step. This is problematic when comparing bullets whose images contain more or less noise than others, because their useful information may reside in the discarded IMFs, and manual inspection is needed again whenever the bullet type changes to determine which range of IMFs contains less high-frequency noise for that type. In this paper, we propose a novel combination of EEMD and a Bayesian Kalman filter to solve these problems. First, the bullet images are rotated using the Radon transform. The rotated images are averaged column-wise to obtain averaged 1-D profiles, whose nonlinear baseline drift is removed using the EEMD algorithm. The profiles are then processed by a Kalman filter designed to automatically and optimally reduce the effect of high-frequency noise; using the Expectation Maximization (EM) technique, the Kalman filter parameters are reconfigured for each averaged profile to optimally suppress its high-frequency noise. This work is the first to practically apply the Kalman filter to optimal denoising of firearm image profiles. In addition, since a Euclidean distance metric can complement normalized cross-correlation, we propose a comparison metric that is invariant to the start and end points of firearm image profiles and combines the desirable properties of both the Euclidean and normalized cross-correlation metrics to improve identification results. The proposed algorithm was evaluated on a database of 180 2-D gray-scale images of bullets fired from different AK-47 assault rifles. Although the proposed method requires more computation than conventional methods, experiments showed that it attains better results than both the conventional methods and the previous EMD-based method in automatic bullet identification.
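The shift-invariant comparison metric is only outlined in the abstract; the sketch below shows one plausible reading of it, combining a normalized cross-correlation peak over all lags with a Euclidean distance at the best alignment. This is an interpretation under stated assumptions, not the paper's exact metric.

```python
# Sketch of a shift-tolerant profile comparison: peak normalized
# cross-correlation over all lags plus Euclidean distance at that lag.
# The combination rule is an assumption, not the paper's formula.
import numpy as np

def profile_similarity(p, q):
    p = (p - p.mean()) / (p.std() + 1e-12)
    q = (q - q.mean()) / (q.std() + 1e-12)
    xcorr = np.correlate(p, q, mode='full') / min(len(p), len(q))
    lag = int(np.argmax(xcorr)) - (len(q) - 1)   # displacement of p vs q
    # Align at the best lag and measure Euclidean distance on the overlap.
    if lag >= 0:
        a, b = p[lag:], q
    else:
        a, b = p, q[-lag:]
    n = min(len(a), len(b))
    dist = np.linalg.norm(a[:n] - b[:n]) / max(n, 1)
    return float(xcorr.max()), float(dist)
```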

14.
Digital Investigation, 2007, 4(3-4): 129-137
In this paper we discuss how operating system design and implementation influence the methodology of computer forensics investigations, focusing on the forensic acquisition of memory. In theory, the operating system could support such investigations both with tools for analyzing data and by making system data readily accessible for analysis. Conventional operating systems such as Windows and UNIX derivatives offer some memory-related tools, but these are geared towards the analysis of system crashes rather than forensic investigations. We demonstrate how techniques developed for persistent operating systems, where the lifetime of data is independent of the method of its creation and storage, could support computer forensics investigations with higher efficiency and accuracy. We propose that some of the features offered by persistent systems could be built into conventional operating systems to make illicit activities easier to identify and analyse. We further propose a new technique for the forensically sound acquisition of memory based on the persistence paradigm.

15.
Identifying the source camera of an image is becoming increasingly important. A popular approach uses a type of pattern noise called photo-response non-uniformity (PRNU): the noise of an image contains patterns that can serve as a fingerprint. However, the PRNU-based approach is sensitive to scene content and image intensity; identification is poor in areas of low or saturated intensity, or in areas with complicated texture. The reliability of different regions is difficult to model because it depends on the interaction between the scene content and the characteristics of the denoising filter used to extract the noise. In this paper, we show that the local variance of the noise residual can measure pixel reliability for PRNU-based source camera identification, and we therefore propose using local variance to characterize the severity of scene-content artifacts. The local variance is then incorporated into the general matched filter and the peak-to-correlation-energy (PCE) detector to provide an optimal framework for signal detection. The proposed method is tested against several state-of-the-art methods, and the experimental results show that the local-variance-based approach outperforms them in identification accuracy.
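A standard PCE detector, into which a local-variance weighting like the one proposed above could be incorporated, can be sketched as follows; the specific down-weighting rule shown is an assumption, not the paper's formulation.

```python
# Sketch of a peak-to-correlation-energy (PCE) detector, with a simple
# local-variance down-weighting of the residual (the rule is an assumption).
import numpy as np
from scipy.ndimage import uniform_filter

def pce(residual, fingerprint, exclude=5):
    """PCE: squared correlation peak over the energy of off-peak values."""
    f = np.fft.fft2(residual)
    g = np.fft.fft2(fingerprint)
    xcorr = np.real(np.fft.ifft2(f * np.conj(g)))   # circular cross-correlation
    py, px = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    mask = np.ones_like(xcorr, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    return xcorr[py, px] ** 2 / np.mean(xcorr[mask] ** 2)

def weight_by_local_variance(residual, size=5):
    """Suppress pixels whose neighborhood variance suggests scene leakage."""
    mean = uniform_filter(residual, size)
    var = np.clip(uniform_filter(residual ** 2, size) - mean ** 2, 0, None)
    return residual / (1.0 + var)
```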

16.
We describe a procedure for reconstructing documents that have been shredded by hand, a problem that often arises in forensics. The proposed method first applies a polygonal approximation to reduce the complexity of the fragment boundaries and then extracts relevant features of the polygon to carry out a local reconstruction. In this way, the overall complexity can be dramatically reduced, because few features are used to perform the matching. The ambiguities resulting from the local reconstruction are resolved and the pieces are merged together as we search for a global solution. The preliminary results reported in this paper, which consider a limited number of shredded pieces (10-15), demonstrate that the feature-matching-based procedure produces promising results for the problem of document reconstruction.
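The polygonal approximation step can be sketched with OpenCV's Douglas-Peucker implementation; the epsilon ratio and the assumption of a binary fragment mask are illustrative, not the paper's settings.

```python
# Sketch of boundary simplification for a scanned fragment: the ragged
# contour is reduced to a short polygon whose vertices can be matched.
# Assumes OpenCV 4; epsilon_ratio is an illustrative parameter.
import cv2
import numpy as np

def fragment_polygon(mask, epsilon_ratio=0.01):
    """mask: binary image of one shredded piece (nonzero = paper)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)       # largest region
    eps = epsilon_ratio * cv2.arcLength(contour, True)
    return cv2.approxPolyDP(contour, eps, True)        # simplified polygon
```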

17.

18.
Shoeprints left at a crime scene provide valuable information in criminal investigations because of the distinctive patterns in the sole, but they are often incomplete and noisy. In this study, the scale-invariant feature transform (SIFT) is proposed and evaluated for the recognition and retrieval of partial and noisy shoeprint images. The proposed method first constructs different scale spaces to detect local extrema in the underlying shoeprint images; these local extrema are treated as useful key points. Next, features are extracted at those key points to represent the local patterns around them. The system then computes the cross-correlation between the query image and each shoeprint image in the database. Experimental results show that full-size prints and prints from the toe area perform best among all shoeprints. The system also demonstrates robustness against noise, with only a very slight difference between the results for original and noisy shoeprints.
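A sketch of SIFT key-point matching on shoeprint images is given below, using OpenCV's detector and a ratio test. The paper's cross-correlation ranking stage is approximated here by a simple good-match count, which is an assumption.

```python
# Sketch of SIFT-based shoeprint matching with a ratio test.
# The good-match count stands in for the paper's ranking stage.
import cv2

def match_score(query_img, db_img, ratio=0.75):
    sift = cv2.SIFT_create()
    kq, dq = sift.detectAndCompute(query_img, None)
    kd, dd = sift.detectAndCompute(db_img, None)
    if dq is None or dd is None:
        return 0                       # one image yielded no key points
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(dq, dd, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])       # unambiguous match survives ratio test
    return len(good)
```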

19.
Electronic data forensic examination has become one of the research hotspots in forensic science and is a major type of computer forensic examination. According to the nature of the examination, electronic data forensic examination can be divided into examinations aimed at "discovering evidence" and examinations aimed at "evaluating evidence". The former includes data retrieval and preservation, data recovery, data source analysis, data content analysis, and comprehensive data analysis; the latter includes examination items of different natures such as identity examination, authenticity examination, similarity examination, functionality examination, and composite examination. These two categories differ significantly in their goals, procedures, risks, the subjectivity of the resulting opinions, and the review of evidence.

20.
To prevent image forgeries, a number of forensic techniques for digital images have been developed that can detect an image's origin, trace its processing history, and locate the position of tampering. In particular, the statistical footprint left by JPEG compression can be a valuable source of information for the forensic analyst, and several image forensic algorithms based on image statistics in the DCT domain have been proposed. Recently, it has been shown that these footprints can be removed by adding a suitable anti-forensic dithering signal to the image in the DCT domain, which invalidates some image forensic algorithms. In this paper, a novel anti-forensic algorithm is proposed that is capable of concealing the quantization artifacts left in a singly JPEG-compressed image. In the scheme, a chaos-based dither is added to the image's DCT coefficients to remove such artifacts. The effectiveness of the scheme and the resulting loss of image quality are evaluated through experiments. The simulation results show that the proposed anti-forensic scheme can defeat existing detectors and thus provides a means of testing the reliability of JPEG forensic tools.
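The chaos-based dithering idea can be sketched with a logistic map: a chaotic sequence scaled to the quantization step is added to the dequantized DCT coefficients to smear their comb-shaped histogram. The map parameters and scaling below are assumptions, not the paper's values.

```python
# Sketch of chaos-based dithering of JPEG DCT coefficients.
# Logistic-map parameters and the dither scaling are assumptions.
import numpy as np

def logistic_dither(n, r=3.99, x0=0.3141):
    """Generate a zero-centered chaotic sequence via the logistic map."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])   # chaotic iteration
    return x - 0.5                              # center around zero

def dither_dct_coefficients(coeffs, q_step):
    """coeffs: dequantized DCT coefficients (multiples of q_step).
    Adding sub-quantization-step dither smears the comb histogram."""
    d = logistic_dither(coeffs.size).reshape(coeffs.shape) * q_step
    return coeffs + d
```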

