Similar Documents
Found 20 similar documents.
1.
2.
In our society, digital images are a powerful and widely used communication medium, and they have an important impact on our lives. In recent years, the advent of high-performance commodity hardware and improved human-computer interfaces has made it relatively easy to create fake images: modern, easy-to-use image processing software enables forgeries that are undetectable by the naked eye. In this work we propose a method to automatically detect and localize duplicated regions in digital images. The presence of duplicated regions in an image may signify a common type of forgery called copy-move forgery. The method is based on blur moment invariants, which allow successful detection of copy-move forgery even when blur degradation, additional noise, or arbitrary contrast changes are present in the duplicated regions; these modifications are commonly used to conceal traces of copy-move forgery. Our method works equally well for lossy formats such as JPEG. We demonstrate the method on several images affected by copy-move forgery.
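As a concrete illustration, the following Python sketch shows the block-matching pipeline that such a detector is built on. Plain central moments stand in for the paper's blur moment invariants, and the block size, step, and matching tolerance are illustrative assumptions, not values from the paper:

```python
import numpy as np

def block_features(img, block=16, step=4):
    """One small moment-based feature vector per overlapping block.

    Plain central moments stand in for the paper's blur moment
    invariants; the matching pipeline is the same either way.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:block, 0:block]
    feats, coords = [], []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            b = img[y:y+block, x:x+block].astype(np.float64)
            m00 = b.sum() + 1e-9
            cy, cx = (ys * b).sum() / m00, (xs * b).sum() / m00
            mu = [((ys - cy)**p * (xs - cx)**q * b).sum() / m00
                  for p, q in [(2, 0), (0, 2), (1, 1), (2, 2)]]
            feats.append(mu)
            coords.append((y, x))
    return np.array(feats), np.array(coords)

def find_duplicates(img, min_shift=24, tol=1e-2):
    """Flag block pairs with near-identical features but distant positions.

    tol is an illustrative threshold; a real detector would normalize the
    features first.
    """
    feats, coords = block_features(img)
    order = np.lexsort(feats.T)               # lexicographic sort of feature rows
    pairs = []
    for i, j in zip(order[:-1], order[1:]):   # compare neighbours in sorted order
        if (np.abs(feats[i] - feats[j]).max() < tol and
                np.abs(coords[i] - coords[j]).max() >= min_shift):
            pairs.append((tuple(coords[i]), tuple(coords[j])))
    return pairs
```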

3.
Due to the abundance of free image and video editing software available on the Internet, tampering with digital images and videos has become very easy. Validating the integrity of images or videos, and detecting any attempt at forgery without active forensic techniques such as digital signatures or digital watermarks, is a big challenge for researchers. Passive forensic techniques, unlike active ones, do not need any pre-embedded information about the image or video. This paper presents a comprehensive review of recent developments in digital image and video forensics using noise features. Previously existing methods of image and video forensics proved the importance of noise and encouraged extensive research in this field. The forensic tasks covered here are mainly source identification and forgery detection in images and video using noise features; the various source identification and forgery detection methods based on noise features are reviewed and compared for both images and video. The overall objective of this paper is to give researchers a broad perspective on the various aspects of image and video forensics using noise features. The conclusion discusses the importance of noise features and the challenges encountered by the different noise-based image and video forensic methods.

4.
Because of the rapidly increasing use of digital composite images, recent studies have focused on identifying forged and filtered regions. This research has shown that interpolation, which is used when editing digital images, is an effective cue for locating composite regions: interpolation is widely used to adjust the size of the composite target and to make the composite seem natural through rotation or deformation. As a result, many algorithms have been developed to identify composite regions by detecting traces of interpolation. However, the detection maps developed to identify composite regions have shown many limitations. In this study, we analyze the pixel patterns of interpolated and non-interpolated regions and propose a detection-map algorithm to separate the two. To identify composite regions, we develop an improved algorithm using a minimum filter, a Laplacian operation, and a maximum filter. Finally, filtered regions produced by interpolation are analyzed using the proposed algorithm.
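A hedged sketch of such a detection map is shown below. The abstract names the three operators but not their exact ordering, window sizes, or threshold, so those are assumptions here:

```python
import numpy as np
from scipy import ndimage

def interpolation_detection_map(gray, size=3, thresh=None):
    """Assumed reading of the min-filter / Laplacian / max-filter chain:
    suppress bright local outliers, amplify the periodic second-order
    differences interpolation leaves behind, then consolidate the response.
    """
    g = gray.astype(np.float64)
    m = ndimage.minimum_filter(g, size=size)       # suppress bright outliers
    lap = np.abs(ndimage.laplace(m))               # second-derivative response
    dmap = ndimage.maximum_filter(lap, size=size)  # dilate / consolidate
    if thresh is None:
        thresh = dmap.mean() + 2 * dmap.std()      # illustrative threshold
    return dmap > thresh                           # candidate interpolated pixels
```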

5.
Copy-move is one of the most commonly used image tampering operations, in which a part of the image content is copied and then pasted into another part of the same image. To make the forgery visually convincing and conceal its traces, the copied part may be subjected to post-processing operations such as rotation and blurring. In this paper, we propose a copy-move forgery detection algorithm based on the polar cosine transform and approximate nearest neighbor searching. The algorithm starts by dividing the image into overlapping patches. Robust and compact features are extracted from the patches by exploiting the rotation-invariance and orthogonality of the polar cosine transform. Potential copy-move pairs are then detected by identifying patches with similar features, which is formulated as approximate nearest neighbor searching and accomplished by means of locality-sensitive hashing (LSH). Finally, post-verification is performed on the potential pairs to filter out false matches and improve the accuracy of forgery detection. The LSH-based similar-patch identification and the post-verification methods are the two major novelties of the proposed work. Experimental results reveal that the proposed method produces accurate detection results and exhibits high robustness to various post-processing operations; in addition, the LSH-based similar-patch detection scheme is much more effective than the widely used lexicographical sorting.
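The two stages can be sketched as follows. Concentric-ring intensity means stand in for the polar cosine transform coefficients (both are rotation invariant), and sign-of-random-projection hashing is one common LSH family; the ring count and plane count are assumptions:

```python
import numpy as np
from collections import defaultdict

def ring_features(patch, n_rings=8):
    """Rotation-invariant stand-in for polar cosine transform coefficients:
    mean intensity over concentric rings around the patch centre."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    return np.array([patch[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def lsh_candidate_pairs(feats, n_planes=12, seed=0):
    """Random-projection LSH: patches falling into the same hash bucket
    become candidate copy-move pairs, to be post-verified afterwards."""
    feats = feats - feats.mean(axis=0)         # center so sign bits split the data
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, feats.shape[1]))
    keys = (feats @ planes.T > 0)              # one sign bit per hyperplane
    buckets = defaultdict(list)
    for idx, key in enumerate(keys):
        buckets[key.tobytes()].append(idx)
    return [(i, j) for ids in buckets.values() if len(ids) > 1
            for a, i in enumerate(ids) for j in ids[a + 1:]]
```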

6.
Region duplication is one of the most frequently used tampering techniques, in which a part of an image is copied and pasted into another part of the same image. In this paper, a phase correlation method based on polar expansion and adaptive band limitation is proposed for region duplication forgery detection. Our method starts by computing the Fourier transform of the polar expansion of each pair of overlapping windows; an adaptive band-limitation procedure is then applied to obtain a correlation matrix in which the peak is effectively enhanced. After estimating the rotation angle of the forged region, a seed-filling search algorithm is executed to reveal the whole duplicated region. Experimental results show that the proposed approach detects duplicated regions with high accuracy and is robust to rotation, illumination adjustment, blurring, and JPEG compression, while the rotation angle is estimated precisely for further calculation.
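The core primitive here is standard phase correlation; a minimal version is sketched below. The polar expansion and adaptive band limitation that the paper layers on top are omitted:

```python
import numpy as np

def phase_correlation(win_a, win_b, eps=1e-9):
    """Phase correlation between two equal-sized windows.

    The peak location in the returned surface gives the translation
    between the windows, and a sharp, high peak indicates a likely
    duplicated pair.
    """
    fa, fb = np.fft.fft2(win_a), np.fft.fft2(win_b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real  # normalized spectrum
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak, corr[peak]
```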

7.
Document forgery is a significant issue in Korea, with around ten thousand cases reported every year. Analyzing paper plays a crucial role in examining questionable documents such as marketable securities and contracts, which can aid in solving criminal cases of document forgery. Paper analysis can also provide essential insights in other types of criminal cases, serving as an important clue for solving cases such as the source of a blackmail letter. The papermaking process generates distinct forming fabric marks and formations, which are critical features for paper classification. These characteristics are observable under transmitted light and are created by the forming fabric pattern and the distribution of pulp fibers, respectively. In this study, we propose a novel approach for paper identification based on hybrid features. This method combines texture features extracted from images converted using the gray-level co-occurrence matrix (GLCM) approach and a convolutional neural network (CNN), with another set of features extracted by the CNN using the same images as input. We applied the proposed method to classification tasks for seven major paper brands available in the Korean market, achieving an accuracy of 97.66%. The results confirm the applicability of this method for visually inspecting paper products and demonstrate its potential for assisting in solving criminal cases involving document forgery.
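The GLCM half of the hybrid feature can be sketched with scikit-image as follows (the CNN branch is omitted); the distances, angles, and property set are assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_texture_features(gray_u8):
    """GLCM texture descriptors of a transmitted-light paper image.

    In the paper these are fused with CNN features; only the GLCM side
    is sketched here. Input must be a uint8 grayscale image.
    """
    glcm = graycomatrix(gray_u8,
                        distances=[1, 2],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # 4 properties x 2 distances x 4 angles = 32-dimensional vector
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```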

8.
Digital Investigation, 2014, 11(2): 120–140
In this paper, we present a passive approach for effective detection and localization of region-level forgery in video sequences, possibly with camera motion. As most digital image/video capture devices have no module for embedding a watermark or signature, passive forgery detection, which aims to detect traces of tampering without embedded information, has become the major focus of recent research. However, most current passive approaches either work only at the frame level and cannot localize region-level forgery, or suffer from high false detection rates when localizing tampered regions. We investigate two common region-level inpainting methods for object removal, temporal copy-and-paste and exemplar-based texture synthesis, and propose a new approach based on spatio-temporal coherence analysis for detecting and localizing tampered regions. Our approach can handle camera motion and multiple object removals. Experiments show that our approach outperforms previous approaches and can effectively detect and localize regions tampered by temporal copy-and-paste and texture synthesis.
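A minimal sketch of the coherence measurement follows. It computes per-block normalized correlation between consecutive frames; the block size and the idea of flagging persistently high coherence are assumptions of this sketch, and the paper's full model additionally compensates for camera motion:

```python
import numpy as np

def temporal_coherence(frames, block=32):
    """Per-block normalized correlation between consecutive grayscale frames.

    Regions filled by temporal copy-and-paste tend to show abnormally high
    coherence (near-exact repeats across frames).
    """
    coh = []
    for f0, f1 in zip(frames[:-1], frames[1:]):
        h, w = f0.shape
        m = np.zeros((h // block, w // block))
        for by in range(h // block):
            for bx in range(w // block):
                a = f0[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float).ravel()
                b = f1[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float).ravel()
                a, b = a - a.mean(), b - b.mean()
                denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
                m[by, bx] = float(a @ b) / denom
        coh.append(m)
    return np.stack(coh)   # blocks persistently near 1.0 are suspicious
```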

9.
10.
With recent advances in image processing and printing technology, home printers have improved in performance and grown more widespread; as such, they have been increasingly used in counterfeiting and forgery. Most counterfeit bills in Korea have been created using home scanners and printers, so identifying the printer model is necessary to rapidly track down criminals and solve crimes. Household printers can be broadly divided into inkjet and laser printers; both types print halftone textures instead of continuous-tone images. This study proposes a printer classification technique based on the halftone textures observable in printed documents. Since halftone textures form periodic lattices, the images were transformed via the FFT, which is highly effective at exposing periodicity. ResNet, known for its good gradient flow, was used for training. The experiment was conducted on 12 color laser printers and 2 inkjet printers, using scans of bills printed by each printer, and halftone texture analysis was performed on these images for printer model classification. Each image was cropped into several parts, and one cropped part was analyzed at a time. The analysis showed that laser printers could be distinguished from inkjet printers with 100% accuracy, and an accuracy of 98.44% was achieved in make classification. When 50 cropped images were used instead of a single image, the technique achieved 100% accuracy in model classification. The proposed technique is non-destructive and offers high accessibility and efficiency, as it can be performed with a scanner alone, without additional optical equipment.
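The FFT preprocessing step can be sketched as follows; the normalization choices are assumptions, and the ResNet training itself is omitted:

```python
import numpy as np

def halftone_spectrum(patch):
    """Log-magnitude FFT of a scanned patch.

    Halftone lattices appear as sharp periodic peaks in this spectrum,
    which is the representation the CNN is trained on.
    """
    g = patch.astype(np.float64)
    g = (g - g.mean()) / (g.std() + 1e-9)           # remove DC, normalize contrast
    spec = np.abs(np.fft.fftshift(np.fft.fft2(g)))  # centred magnitude spectrum
    spec = np.log1p(spec)                           # compress dynamic range
    return spec / spec.max()                        # scaled for CNN input
```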

11.
In this paper we present a novel approach to the problem of steganography detection in JPEG images by applying a statistical attack. The method is based on the empirical Benford's Law and, more specifically, on its generalized form. We prove and extend the validity of the logarithmic rule in colour images and introduce a blind steganalytic method which can flag a file as a suspicious stego-carrier. The proposed method achieves very high accuracy and speed and is based on the distributions of the first digits of the quantized Discrete Cosine Transform coefficients present in JPEGs. In order to validate and evaluate our algorithm, we developed steganographic tools able to analyse image files and subsequently applied them to the popular Uncompressed Colour Image Database. Furthermore, we demonstrate that not only can our method detect steganography but, if certain criteria are met, it can also reveal which steganographic algorithm was used to embed data in a JPEG file.
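The first-digit statistic can be sketched as follows. A block DCT of pixel values stands in for the quantized coefficients a real JPEG parser would read, and the generalized-Benford parameters shown are illustrative, not the fitted values from the paper:

```python
import numpy as np
from scipy.fft import dctn

def first_digit_histogram(gray, block=8):
    """Distribution of first digits of block-DCT coefficients (a stand-in
    for the quantized coefficients stored in an actual JPEG file)."""
    h, w = gray.shape
    digits = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = dctn(gray[y:y+block, x:x+block].astype(np.float64), norm="ortho")
            c = np.abs(np.round(c)).ravel()[1:]     # drop the DC term
            c = c[c >= 1]                           # digits defined for |coef| >= 1
            digits.extend(int(str(int(v))[0]) for v in c)
    hist = np.bincount(digits, minlength=10)[1:10]
    return hist / hist.sum()

def generalized_benford(d, N=1.456, s=0.0372, q=1.34):
    """Generalized Benford law p(d) = N * log10(1 + 1/(s + d**q)).

    Parameter values here are illustrative only; in the literature they
    are fitted per JPEG quality factor.
    """
    return N * np.log10(1.0 + 1.0 / (s + d ** q))

# A large chi-square distance between the observed histogram and the
# generalized-Benford curve flags the file as a suspicious stego-carrier.
```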

12.
Nowadays, surveillance systems are widely used to deter and investigate crime, so verifying the authenticity of digital video is essential when deciding whether to admit it as legal evidence. Inter-frame duplication is one of the most common types of video forgery. Many existing methods have been proposed for detecting this type of forgery, but they require high computation time and are impractical. In this study, we propose an efficient inter-frame duplication detection algorithm based on the standard deviation of residual frames. The standard deviation of each residual frame is used to select some frames and ignore others that represent a static scene. The entropy of the discrete cosine transform coefficients is then calculated for each selected residual frame as its discriminating feature. Duplicated frames are finally detected exactly using subsequence feature analysis. The experimental results demonstrate that the proposed method identifies inter-frame duplication forgery effectively, with localization and acceptable running time.
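A sketch of the per-frame feature extraction follows; the static-scene floor value is an assumption:

```python
import numpy as np
from scipy.fft import dctn

def residual_features(frames, std_floor=1.0):
    """Per-frame features for inter-frame duplication detection.

    Frames whose residual (difference from the previous frame) has
    near-zero standard deviation are skipped as static scenes; the rest
    are summarized by the entropy of their residual's DCT coefficients.
    """
    feats = {}
    for i in range(1, len(frames)):
        res = frames[i].astype(np.float64) - frames[i-1].astype(np.float64)
        if res.std() < std_floor:
            continue                                   # static scene, ignore
        c = np.abs(dctn(res, norm="ortho")).ravel()
        p = c / (c.sum() + 1e-12)                      # coefficient distribution
        feats[i] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return feats   # duplicated runs appear as repeated feature subsequences
```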

13.
A video can be manipulated using synthetic zooming without resorting to state-of-the-art video forgery techniques. Synthetic zooming is performed by upscaling individual frames of a video with varying scale factors and then cropping them to the original frame size. The manipulated frames resemble genuine natural (optical) camera-zoomed frames and hence may be misclassified as pristine by video forgery detection algorithms. Even if such a video is classified as forged, forensic investigators may dismiss the result, believing it to be part of an optical camera zooming activity; synthetic zooming can therefore serve as an anti-forensic method that eliminates digital evidence. In this paper, we propose a method for differentiating optical camera zooming from synthetic zooming for video tampering detection, using pixel variance correlation and sensor pattern noise as features. Experimental results on a dataset containing 3,200 videos show the effectiveness of the proposed method.

14.
Most sensor pattern noise based image copy-move forensic methods require a known reference sensor pattern noise, which makes them non-blind passive forensics and significantly limits their applicability. In view of this, a novel passive-blind image copy-move forensic scheme is proposed in this paper. First, a color image is converted to grayscale, and a wavelet-transform-based denoising filter is used to extract the sensor pattern noise. Four features are then chosen: the variance of the pattern noise, the signal-to-noise ratio between the denoised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image. Non-overlapping sliding-window operations divide the image into sub-blocks, and the tampered areas are finally detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling, and blurring.
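The four per-block features named above can be sketched as follows, given a grayscale sub-block and its extracted pattern noise (the wavelet denoising step is omitted); the SNR and average-energy-gradient formulas below are assumed definitions:

```python
import numpy as np

def block_feature_vector(gray_block, noise_block):
    """Four per-block features: pattern-noise variance, SNR between the
    denoised image and the noise, intensity entropy, and average energy
    gradient of the grayscale block."""
    g = gray_block.astype(np.float64)
    n = noise_block.astype(np.float64)
    var_noise = n.var()
    # SNR of denoised content vs. extracted noise, in dB (assumed form)
    snr = 10.0 * np.log10(((g - n).var() + 1e-12) / (n.var() + 1e-12))
    hist, _ = np.histogram(g, bins=256, range=(0, 256))
    p = hist / (hist.sum() + 1e-12)
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    gy, gx = np.gradient(g)
    aeg = float(np.sqrt(gx**2 + gy**2).sum() / g.size)  # average energy gradient
    return np.array([var_noise, snr, entropy, aeg])
```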

15.
It is now extremely easy to recapture high-resolution, high-quality images from LCD (liquid crystal display) screens. Recaptured image detection is an important digital forensic problem, as image recapture is often involved in the creation of a fake image in an attempt to increase its visual plausibility. State-of-the-art image recapture forensic methods make use of strong prior knowledge about the recapturing process and are based either on a combination of ad-hoc features or on a specific and somewhat complicated dictionary learning procedure. By contrast, we propose a conceptually simple yet effective method for recaptured image detection built upon simple image statistics and a very loose assumption about the recapturing process. The adopted features are pixel-wise correlation coefficients in image differential domains. Experimental results on two large databases of high-resolution, high-quality recaptured images, and comparisons with existing methods, demonstrate the forensic accuracy and computational efficiency of the proposed method.
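The feature family can be sketched as follows; how the paper aggregates the coefficients into its final feature vector is simplified here:

```python
import numpy as np

def differential_correlations(gray):
    """Correlation coefficients between neighbouring values of the
    horizontal and vertical difference images of a grayscale input."""
    g = gray.astype(np.float64)
    dh = np.diff(g, axis=1)            # horizontal differential domain
    dv = np.diff(g, axis=0)            # vertical differential domain
    feats = []
    for d in (dh, dv):
        for a, b in ((d[:, :-1], d[:, 1:]),    # horizontal neighbours
                     (d[:-1, :], d[1:, :])):   # vertical neighbours
            feats.append(float(np.corrcoef(a.ravel(), b.ravel())[0, 1]))
    return np.array(feats)             # fed to a simple classifier
```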

16.
This paper presents a novel digital watermarking technique that uses face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of the fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and preserve the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint; the degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
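A much-simplified sketch of DWT-domain embedding with PyWavelets follows. The paper embeds face and text data into selected texture regions; here a generic binary payload goes into keyed random positions of the diagonal detail band, which is an assumption:

```python
import numpy as np
import pywt  # PyWavelets

def embed_in_detail_band(cover, watermark_bits, alpha=2.0, seed=0):
    """Additive embedding of a binary watermark into the diagonal detail
    band of a one-level Haar DWT, at randomly chosen (keyed) positions."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(np.float64), "haar")
    rng = np.random.default_rng(seed)                # the embedding key
    idx = rng.choice(cD.size, size=len(watermark_bits), replace=False)
    flat = cD.ravel().copy()
    bits = np.asarray(watermark_bits, dtype=np.float64)
    flat[idx] += alpha * (2.0 * bits - 1.0)          # +/- alpha per bit
    cD2 = flat.reshape(cD.shape)
    return pywt.idwt2((cA, (cH, cV, cD2)), "haar")   # watermarked image
```

Extraction would regenerate the same keyed positions, take the sign of the coefficient differences against a reference, and compare the recovered images with the embedded ones.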

17.
Pinpoint authentication watermarking based on a chaotic system
Watermarking has been an active research field over the past ten years, with applications in copyright management, content authentication, and so on. For authentication watermarking, tamper localization and detection accuracy are two important performance criteria; however, most methods in the literature cannot achieve precise localization, and few researchers pay attention to detection accuracy. In this paper, a pinpoint authentication watermarking scheme is proposed based on a chaotic system, which is sensitive to its initial value. The approach can not only exactly localize malicious manipulations but also reveal block substitutions when the Holliman-Memon (VQ) attack occurs. An image is partitioned into non-overlapping regions according to the required precision. In each region, a chaotic model is iterated to produce chaotic sequences from initial values determined by combining the prominent luminance values of pixels, position information, and an image key. An authentication watermark is then constructed from the binary chaotic sequences and embedded in the embedding space. At the receiver, a detector extracts the watermark and localizes the tampered regions without access to the host image or the original watermark. The spatial localization precision can reach one pixel, which is valuable for images examined at close range, such as medical and military images. The detection accuracy rate is defined and analyzed to express the probability of the detector making the right decision. Experimental results demonstrate the effectiveness and advantages of our algorithm.
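The abstract does not name the chaotic model; the logistic map is a common choice and is assumed in the sketch below. It illustrates the key property used here, sensitivity to the initial value: a tampered block yields a different initial value and hence a completely different regenerated watermark:

```python
import numpy as np

def chaotic_watermark_bits(x0, n_bits, mu=3.99, burn_in=100):
    """Binary watermark from a logistic-map chaotic sequence.

    x0 would be derived, as in the paper, from pixel luminance, position
    information, and an image key, so any tampering changes x0 and, by
    sensitivity to initial conditions, the whole sequence.
    """
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = mu * x * (1.0 - x)
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = mu * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0    # threshold the orbit to bits
    return bits
```

At the detector, the bits regenerated from the received region are compared with the extracted watermark; mismatching regions are flagged as tampered.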

18.
Digital Investigation, 2014, 11(1): 67–77
The detection of stego images, which are used as carriers for secret messages in nefarious activities, is the goal of blind image steganalysis. The main difficulty in blind steganalysis is the lack of knowledge about which steganographic technique was applied to the image. Existing feature extraction approaches for blind steganalysis deal with only a few features or with a single domain of the image, and they lead to low detection percentages; the main objective of this paper is to improve the detection percentage. The focus here is on blind steganalysis of JPEG images through a process of dilation: the given image is split into its RGB components, each component is transformed into three domains (frequency, spatial, and wavelet), and the features extracted from each domain are given to a Support Vector Machine (SVM) classifier that labels the image as stego or clean. The proposed process of dilation was tested in experiments with varying embedded text sizes and varying numbers of extracted features on the trained SVM classifier. The Overall Success Rate (OSR) was chosen as the performance metric, and the proposed solution was found to be effective, compared with existing solutions, at detecting a higher percentage of stego images.

19.
Identifying the source camera of an image is becoming increasingly important. A popular approach uses a type of pattern noise called photo-response non-uniformity (PRNU): the noise of an image contains a pattern that can be used as a fingerprint. However, the PRNU-based approach is sensitive to scene content and image intensity; identification is poor in areas of low or saturated intensity, or in areas with complicated texture. The reliability of different regions is difficult to model, since it depends on the interaction of scene content with the characteristics of the denoising filter used to extract the noise. In this paper, we show that the local variance of the noise residual can measure the reliability of a pixel for PRNU-based source camera identification, and we propose using local variance to characterize the severity of scene-content artifacts. The local variance is then incorporated into the general matched filter and the peak-to-correlation-energy (PCE) detector to provide an optimal framework for signal detection. The proposed method is tested against several state-of-the-art methods, and the experimental results show that the local-variance-based approach outperforms them in terms of identification accuracy.
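The local-variance measurement itself is straightforward; a sketch follows, with the window size as an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(noise_residual, size=9):
    """Sliding-window variance of the noise residual, var = E[x^2] - E[x]^2.

    High values mark scene-content artifacts where the PRNU estimate is
    unreliable and should be down-weighted in the detector.
    """
    x = noise_residual.astype(np.float64)
    mean = uniform_filter(x, size=size)
    mean_sq = uniform_filter(x * x, size=size)
    return np.maximum(mean_sq - mean * mean, 0.0)  # clamp numerical negatives
```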

20.
Digital Investigation, 2014, 11(2): 111–119
To discriminate natural images from computer-generated graphics, a novel identification method is proposed based on the impact of color filter array (CFA) interpolation on the local correlation of photo-response non-uniformity noise (PRNU). As CFA interpolation generally occurs in the generation of natural images and influences the local correlation of the PRNU, the differences between the PRNU correlations of natural images and those of computer-generated graphics are investigated. Nine-dimensional histogram features are extracted from the local variance histograms of the PRNU as the identification features, and discrimination is performed with a support vector machine (SVM) classifier. Experimental results and analysis show that the method achieves an average identification accuracy of 99.43% and is robust against scaling, JPEG compression, rotation, and additive noise; it thus has great potential for use in image source pipeline forensics.
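Assuming a precomputed local-variance map of the PRNU (as in the previous entry's sketch), the nine-dimensional feature can be sketched as follows; the log-spaced binning is an assumption:

```python
import numpy as np

def variance_histogram_features(local_var_map, n_bins=9):
    """Nine-dimensional normalized histogram of the PRNU local-variance map,
    matching the feature count in the abstract; fed to an SVM classifier."""
    v = local_var_map.ravel()
    edges = np.logspace(np.log10(v.min() + 1e-9),
                        np.log10(v.max() + 1e-9), n_bins + 1)
    hist, _ = np.histogram(v, bins=edges)
    return hist / (hist.sum() + 1e-12)
```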
