Similar Documents
20 similar documents found.
1.
《Digital Investigation》2014,11(2):111-119
To discriminate natural images from computer-generated graphics, a novel identification method is proposed based on how color filter array (CFA) interpolation affects the local correlation of photo response non-uniformity (PRNU) noise. Because CFA interpolation is generally present in the generation of natural images and influences the local correlation of PRNU, the differences between the PRNU correlations of natural images and those of computer-generated graphics are investigated. A nine-dimensional histogram feature vector is extracted from the local variance histograms of PRNU to represent the identification features, and discrimination is performed with a support vector machine (SVM) classifier. Experimental results and analysis show that the method achieves an average identification accuracy of 99.43% and is robust against scaling, JPEG compression, rotation, and additive noise. Thus, it has great potential for use in image source pipeline forensics.
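A minimal Python sketch of this kind of feature pipeline, as an illustration rather than the authors' implementation: the PRNU-bearing residual is approximated with a simple Gaussian denoiser instead of a dedicated extraction filter, and the window size and bin count are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter
from sklearn.svm import SVC

def noise_residual(gray):
    """Rough PRNU-bearing residual: image minus a denoised version
    (a Gaussian filter stands in for a dedicated denoising filter)."""
    return gray - gaussian_filter(gray, sigma=1.0)

def local_variance(residual, win=5):
    """Local variance of the residual over win x win neighbourhoods."""
    mean = uniform_filter(residual, win)
    mean_sq = uniform_filter(residual ** 2, win)
    return np.clip(mean_sq - mean ** 2, 0, None)

def variance_histogram_features(gray, bins=9):
    """9-D feature: normalised histogram of local residual variances."""
    var_map = local_variance(noise_residual(gray.astype(np.float64)))
    hist, _ = np.histogram(var_map, bins=bins, range=(0, var_map.max() + 1e-9))
    return hist / hist.sum()

# Hypothetical usage: rows of X are 9-D features, y is 1 for natural images
# and 0 for computer-generated graphics.
# clf = SVC(kernel="rbf").fit(X_train, y_train); clf.predict(X_test)
```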

2.
Source camera identification is one of the emerging fields in digital image forensics; it aims to identify the source camera used to capture a given image. The technique uses photo response non-uniformity (PRNU) noise as a camera fingerprint, since it is one of the few characteristics capable of distinguishing images even when they are captured by similar cameras. Most existing PRNU-based approaches are very sensitive to the random noise components present in the estimated PRNU, and they are not robust when simple manipulations are performed on the images. Hence, a new PRNU feature-based approach is proposed for source camera identification, choosing features that are robust to image manipulations. The PRNU noise is extracted from the images using a wavelet-based denoising method and is represented by higher-order wavelet statistics (HOWS), which are invariant to image manipulations and geometric variations. The features are fed to support vector machine classifiers to identify the originating source camera of a given image, and the results are verified by ten-fold cross-validation. Experiments carried out on images captured with various cell phone cameras demonstrated that the proposed algorithm identifies the source camera with good accuracy. The technique can differentiate images even when they are captured by similar cameras of the same make and model. The analysis also showed that the proposed technique remains robust when the images are subjected to simple manipulations or geometric variations.
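A hedged sketch of a HOWS-style feature extractor and the ten-fold evaluation protocol, assuming the PyWavelets package; the wavelet family, decomposition level, threshold rule and statistic set are illustrative choices, not the paper's exact ones.

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def wavelet_noise_residual(gray, wavelet="db8", level=1):
    """Crude wavelet denoising: soft-threshold the detail coefficients and
    take image-minus-denoised as the PRNU-bearing residual."""
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    thr = np.median(np.abs(coeffs[-1][-1])) / 0.6745 * 3
    denoised_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    denoised = pywt.waverec2(denoised_coeffs, wavelet)[: gray.shape[0], : gray.shape[1]]
    return gray - denoised

def hows_features(gray, wavelet="db8", level=2):
    """Higher-order wavelet statistics (mean, variance, skewness, kurtosis)
    of each detail subband of the noise residual."""
    residual = wavelet_noise_residual(gray.astype(np.float64))
    coeffs = pywt.wavedec2(residual, wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:
        for band in detail:
            band = band.ravel()
            feats += [band.mean(), band.var(), skew(band), kurtosis(band)]
    return np.array(feats)

# Hypothetical usage mirroring the paper's ten-fold cross-validation
# (X: HOWS feature rows, y: camera labels):
# scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
```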

3.
《Science & justice》2022,62(5):624-631
Counterfeiting of banknotes is still a severe crime problem in many countries. One of the most significant issues in solving such crimes is classifying the counterfeit types and identifying their sources. Most current methods for classifying counterfeit banknotes rely on manual examination, which is time-consuming and labor-intensive. Moreover, these methods only detect surface features, which can easily be imitated with advanced printing technology. In this study, an automated method based on optical coherence tomography (OCT) and machine-learning algorithms is proposed to classify different types of banknotes from their internal features. A spectral-domain OCT (SD-OCT) system was employed for sub-surface imaging and quantitative assessment of banknotes. A total of 29 Chinese 100-Yuan banknotes were collected, of which 4 were genuine and 25 were counterfeits produced by three different printing processes. Each banknote was imaged 10 times in 3 distinct regions, resulting in a dataset of 290 samples. Each sample was characterized by 2 features derived from A-scans (OCT signal intensity along depth) and 14 features derived from B-scans (cross-sectional OCT images). Several machine-learning models, including logistic regression (LR), support vector machines (SVM), K-nearest neighbors (KNN), and random forest (RF), were built and optimized as classifiers, trained on 203 samples and applied to predict 87 testing samples. The best performance was achieved by the SVM classifier, with a sensitivity of 96.55% and specificity of 98.85% in discriminating authentic from counterfeit banknotes, and a sensitivity of 94.67% and specificity of 98.22% in predicting the type of counterfeit. The classifiers were also evaluated using receiver operating characteristic (ROC) curves. To the best of our knowledge, this is the first study in which A-scan and B-scan derived features from OCT images have been used for the detection and classification of different types of counterfeit banknotes.
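A minimal sketch of the classifier comparison described above, using scikit-learn and a 203/87 train/test split; the feature matrix here is random placeholder data standing in for the 2 A-scan and 14 B-scan derived features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Placeholder data: one row per OCT sample with 16 features,
# y = 1 for counterfeit, 0 for authentic.
rng = np.random.default_rng(0)
X = rng.normal(size=(290, 16))
y = rng.integers(0, 2, size=290)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=87, stratify=y, random_state=0)

classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")
```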

4.
It is now extremely easy to recapture high-resolution and high-quality images from LCD (Liquid Crystal Display) screens. Recaptured image detection is an important digital forensic problem, as image recapture is often involved in the creation of a fake image in an attempt to increase its visual plausibility. State-of-the-art image recapture forensic methods make use of strong prior knowledge about the recapturing process and are based on either a combination of ad-hoc features or a specific and somewhat complicated dictionary learning procedure. By contrast, we propose a conceptually simple yet effective method for recaptured image detection which is built upon simple image statistics and a very loose assumption about the recapturing process. The adopted features are pixel-wise correlation coefficients in image differential domains. Experimental results on two large databases of high-resolution, high-quality recaptured images, and comparisons with existing methods, demonstrate the forensic accuracy and the computational efficiency of the proposed method.
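A small sketch of correlation features in image differential domains; the particular set of domains and neighbour offsets used below is an assumption, not necessarily the paper's configuration.

```python
import numpy as np

def diff_domain_correlations(gray):
    """Pearson correlation between horizontally / vertically adjacent pixels,
    computed in first-order differential domains of a grayscale image."""
    g = gray.astype(np.float64)
    domains = {
        "dx": np.diff(g, axis=1),   # horizontal first differences
        "dy": np.diff(g, axis=0),   # vertical first differences
    }
    feats = []
    for d in domains.values():
        # correlation of each differential image with its shifted copies
        feats.append(np.corrcoef(d[:, :-1].ravel(), d[:, 1:].ravel())[0, 1])
        feats.append(np.corrcoef(d[:-1, :].ravel(), d[1:, :].ravel())[0, 1])
    return np.array(feats)

# These few coefficients would then feed a standard classifier (e.g. an SVM)
# trained on single-capture versus recaptured images.
```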

5.
Owing to the abundance of free image and video editing software on the Internet, tampering with digital images and videos has become very easy. Validating the integrity of images or videos and detecting any attempt at forgery without active forensic techniques such as digital signatures or digital watermarks is a major challenge for researchers. Passive forensic techniques, unlike active ones, do not need any pre-embedded information about the image or video. This paper presents a comprehensive review of recent developments in digital image and video forensics using noise features. Previously existing methods of image and video forensics have demonstrated the importance of noise and encouraged extensive research in this field. The forensic tasks covered here are mainly source identification and forgery detection in images and videos using noise features; the various methods for both tasks are reviewed and compared. The overall objective of this paper is to give researchers a broad perspective on the many aspects of image and video forensics based on noise features. The conclusion discusses the importance of noise features and the challenges encountered by the different noise-based image and video forensic methods.

6.
Digital image forgery detection is important because of the wide use of images in applications such as medical diagnosis, legal investigations, and entertainment. Copy–move forgery, in which part of an image is duplicated elsewhere in the same image, is one of the most common techniques. Many existing copy–move detection algorithms cannot effectively detect, in a blind manner, duplicated regions produced with powerful image manipulation software such as Photoshop. In this study, a new method is proposed for blind detection of manipulations in digital images based on modified fractal coding and feature vector matching. The proposed method not only detects typical copy–move forgery, but also finds multiple copied regions in images that have been subjected to rotation, scaling, reflection, and mixtures of these post-processing operations. The method is also robust against tampered images undergoing attacks such as Gaussian blurring, contrast scaling, and brightness adjustment. The experimental results demonstrated the validity and efficiency of the method.
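For orientation, here is a classic block-matching copy–move baseline in Python; it is explicitly not the modified fractal coding and feature-vector matching of this paper, only the generic scheme such methods improve upon, with block size, stride and thresholds as assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def copy_move_candidates(gray, block=16, keep=9, dist_thresh=0.5, min_offset=32):
    """Generic baseline: low-frequency 2-D DCT coefficients per overlapping
    block, lexicographic sort, and a near-duplicate test between sort-adjacent
    blocks that lie far apart spatially."""
    g = gray.astype(np.float64)
    h, w = g.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, 4):
        for x in range(0, w - block + 1, 4):
            b = g[y:y + block, x:x + block]
            c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            feats.append(c[:3, :3].ravel()[:keep])   # low-frequency descriptor
            coords.append((y, x))
    feats = np.array(feats)
    coords = np.array(coords)
    order = np.lexsort(feats.T[::-1])                # lexicographic sort of descriptors
    matches = []
    for i, j in zip(order[:-1], order[1:]):
        if (np.linalg.norm(feats[i] - feats[j]) < dist_thresh
                and np.linalg.norm(coords[i] - coords[j]) > min_offset):
            matches.append((tuple(coords[i]), tuple(coords[j])))
    return matches
```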

7.
Since most sensor pattern noise based image copy-move forensic methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly limits their applicability. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. First, a color image is converted to grayscale and a wavelet-transform-based denoising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the denoised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are then chosen as features, and non-overlapping sliding-window operations divide the image into sub-blocks. Finally, tampered areas are detected by analyzing the correlation of the features between each sub-block and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling, and blurring.
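A hedged sketch of the block-feature idea: four per-block statistics approximating the features named above, compared against the whole-image feature vector by correlation. A Gaussian filter stands in for the wavelet-based denoiser, and the exact feature formulas and block size are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def block_features(gray_block):
    """Variance of the extracted pattern noise, SNR between the denoised block
    and the noise, information entropy (8-bit input assumed), and average
    energy gradient."""
    g = gray_block.astype(np.float64)
    denoised = gaussian_filter(g, sigma=1.0)
    noise = g - denoised
    var_noise = noise.var()
    snr = 10 * np.log10(denoised.var() / (var_noise + 1e-12) + 1e-12)
    hist, _ = np.histogram(g, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))
    gy, gx = np.gradient(g)
    avg_energy_gradient = np.mean(np.sqrt(gx ** 2 + gy ** 2))
    return np.array([var_noise, snr, entropy, avg_energy_gradient])

def blockwise_correlation(gray, block=64):
    """Correlation of each non-overlapping block's feature vector with the
    whole-image feature vector; low correlation flags candidate tampering."""
    global_feat = block_features(gray)
    scores = {}
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            f = block_features(gray[y:y + block, x:x + block])
            scores[(y, x)] = np.corrcoef(f, global_feat)[0, 1]
    return scores
```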

8.
9.
《Science & justice》2021,61(6):789-796
Based on the metric and non-metric skeletal features of various bones, forensic experts have proposed diverse sex identification methods. The main focus of the present study is to quantify sexual dimorphism in the unfused, disarticulated human hyoid bone and to compare it with studies conducted by other researchers. For this study, 293 unfused hyoid bones were collected and examined from 173 male and 120 female cadavers of the northwest Indian population, aged 15 to 80 years. Initially, discriminant analysis was performed on the dataset to predict sex and to identify the variables most important for sexual dimorphism. The significant variables identified by the discriminant analysis were then used in machine-learning approaches to improve the accuracy of sex determination. The data were pre-processed with a standard scaler before machine-learning analysis, and, to guard against overfitting and underfitting, 70% of the dataset was used for training the model and the remainder for testing. According to the discriminant analysis, body length (BL) and body height (BH) were highly significant for sex determination and predicted sex with 75.1% accuracy. Implementation of machine-learning approaches such as the XGBoost classifier increased the accuracy to 83%, with sensitivity and specificity of 0.81 and 0.84, respectively. Moreover, the ROC-AUC score achieved by the XGBoost classifier was 0.89, indicating that machine-learning analysis can raise sex-determination accuracy to an acceptable standard.
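A minimal sketch of the scaling / 70-30 split / XGBoost / ROC-AUC pipeline described above, using the xgboost package; the measurements below are random placeholders standing in for the BL and BH variables, so the printed numbers are meaningless.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix, roc_auc_score
from xgboost import XGBClassifier   # pip install xgboost

# Placeholder data: rows are hyoid bones, columns are the two significant
# measurements (e.g. BL, BH); y = 1 for male, 0 for female.
rng = np.random.default_rng(1)
X = np.column_stack([rng.normal(25, 3, 293), rng.normal(10, 2, 293)])
y = rng.integers(0, 2, size=293)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, stratify=y, random_state=1)

scaler = StandardScaler().fit(X_tr)                       # fit scaler on training data only
clf = XGBClassifier(eval_metric="logloss").fit(scaler.transform(X_tr), y_tr)

proba = clf.predict_proba(scaler.transform(X_te))[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, proba > 0.5).ravel()
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
print("ROC-AUC", roc_auc_score(y_te, proba))
```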

10.
Objective: To study manual feature selection and annotation methods based on iris images, to explore research directions for manual iris examination, and to discuss the feasibility of iris technology as a new forensic technique in litigation. Methods: First, drawing on ocular anatomy and basic iris theory, iris features were coarsely classified into five types: radial furrows, concentric furrows, the collarette, crypts, and pigment spots. Second, existing iris algorithms and image processing methods were used to study the extraction and analysis of iris image features. Finally, with the aid of dedicated software, the iris images underwent region segmentation, normalization, feature localization and marking, and feature information extraction. Results: Manual feature selection and annotation of two iris images at the same scale was preliminarily achieved. Conclusion: The manual iris feature selection and annotation method studied here is a preliminary exploration of applying iris recognition technology to forensic examination and identification, and lays a foundation for further research on manual iris comparison.
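To illustrate the normalization step that allows two irises to be annotated "at the same scale", here is a standard Daugman-style rubber-sheet unwrapping in Python; it is a generic technique, not necessarily the one implemented in the authors' software, and the circle parameters are assumed to come from a prior segmentation step.

```python
import numpy as np

def rubber_sheet_normalize(gray, pupil_xy_r, iris_xy_r, radial=64, angular=512):
    """Unwrap the annular iris region between the (already segmented) pupil
    and iris circles into a fixed radial x angular grid, so features from two
    irises can be localised and compared on a common coordinate system."""
    (px, py, pr), (ix, iy, ir) = pupil_xy_r, iris_xy_r
    thetas = np.linspace(0, 2 * np.pi, angular, endpoint=False)
    radii = np.linspace(0, 1, radial)
    out = np.zeros((radial, angular), dtype=np.float64)
    h, w = gray.shape
    for j, t in enumerate(thetas):
        # boundary points on the pupil and iris circles along direction t
        x0, y0 = px + pr * np.cos(t), py + pr * np.sin(t)
        x1, y1 = ix + ir * np.cos(t), iy + ir * np.sin(t)
        xs = np.clip((x0 + radii * (x1 - x0)).round().astype(int), 0, w - 1)
        ys = np.clip((y0 + radii * (y1 - y0)).round().astype(int), 0, h - 1)
        out[:, j] = gray[ys, xs]
    return out

# Hypothetical usage, with (centre x, centre y, radius) from segmentation:
# strip = rubber_sheet_normalize(gray, (240, 250, 40), (245, 252, 110))
```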

11.
A symmetry perceiving adaptive neural network and facial image recognition
The paper deals with the forensic problem of comparing nearly front-view facial images for personal identification. The human recognition process for such problems is primarily based on holistic as well as feature-wise symmetry perception, aided by subjective analysis for detecting ill-defined features. We attempt to model this process by designing a robust symmetry-perceiving adaptive neural network. The pair of images to be compared is presented to the proposed neural network (NN) as source (input) and target images. The NN learns the symmetry between the pair by analysing examples of associated feature pairs belonging to the source and target images. To prepare a paired example of associated features for training, when one particular feature on the source image is selected as a unique pixel, it must also be associated with the corresponding feature on the target image. In practice, however, it is not always possible to fix the latter feature as a unique pixel, owing to pictorial ambiguity. The robust, fault-tolerant NN handles this situation by allowing the associated target feature to be fixed as a rectangular array of pixels rather than a unique pixel, which would be difficult to do with certainty. From such pairs of associated feature sets, the NN searches out the proper locations of the target features from among the ambiguous candidates by a fuzzy analysis during learning. If any of the target features found by the NN lies outside the prespecified zone, training is unsuccessful; this amounts to non-existence of symmetry between the pair of images and confirms non-identity. In the case of successful training, the NN adapts to the appropriate symmetry relation between the pair of images; when the source image is then input to the trained NN, it outputs a processed source image that is superimposable over the target image, and identity may subsequently be established by examining detailed matching in the machine-made superimposed/composite images, which are also suitable for presentation in court. The performance of the proposed NN has been tested on various cases, including simulated ones, and it is hoped that it will serve as a working tool for forensic anthropologists.

12.
13.
Shoeprints left at a crime scene provide valuable information in criminal investigations owing to the distinctive patterns in the sole, but they are often incomplete and noisy. In this study, the scale-invariant feature transform (SIFT) is proposed and evaluated for recognition and retrieval of partial and noisy shoeprint images. The proposed method first constructs scale spaces to detect local extrema in the underlying shoeprint image; these local extrema are treated as useful key points. Next, features are extracted at those key points to represent the local patterns around them. The system then computes the cross-correlation between the query image and each shoeprint image in the database. Experimental results show that full-size prints and prints from the toe area perform best among all shoeprints. The system also demonstrates robustness against noise, since there is only a very slight difference in performance between original and noisy shoeprints.
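A minimal retrieval sketch using OpenCV's SIFT implementation; note that the paper scores candidates with a cross-correlation step, whereas this sketch simply ranks database prints by the number of ratio-test matches, and the ratio threshold is an assumption.

```python
import cv2

def shoeprint_similarity(query_path, db_path, ratio=0.75):
    """Count SIFT matches (Lowe ratio test) between a query shoeprint and one
    database shoeprint; a simple proxy for a retrieval score."""
    sift = cv2.SIFT_create()
    img_q = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img_d = cv2.imread(db_path, cv2.IMREAD_GRAYSCALE)
    _, des_q = sift.detectAndCompute(img_q, None)
    _, des_d = sift.detectAndCompute(img_d, None)
    if des_q is None or des_d is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_q, des_d, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# Hypothetical usage: score every database print and rank the candidates.
# scores = {path: shoeprint_similarity("query.png", path) for path in db_paths}
# ranked = sorted(scores, key=scores.get, reverse=True)
```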

14.
Because of the rapidly increasing use of digital composite images, recent studies have aimed at identifying forged and filtered regions in digital images. This research has shown that interpolation, which is used when editing digital images, is an effective cue for locating composite regions. Interpolation is widely used to adjust the size of a composite target and to make the composite image look natural when it is rotated or deformed. As a result, many algorithms have been developed to identify composite regions by detecting traces of interpolation; however, the detection maps they produce have many limitations. In this study, we analyze the pixel patterns of non-interpolated and interpolated regions and propose a detection-map algorithm to separate the two. To identify composite regions, we develop an improved algorithm using a minimum filter, a Laplacian operation, and a maximum filter. Finally, filtered regions that involve interpolation are analyzed using the proposed algorithm.
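A loose sketch of how a min-filter / Laplacian / max-filter detection map might be assembled in Python; the ordering, the low-second-difference heuristic, and the threshold are assumptions rather than the paper's specification.

```python
import numpy as np
from scipy.ndimage import laplace, minimum_filter, maximum_filter

def interpolation_detection_map(gray, win=5, thresh=None):
    """Flag pixels whose second-order differences are unusually smooth
    (a common symptom of resampling/interpolation), then clean the map
    with min/max filtering so contiguous candidate regions remain."""
    g = gray.astype(np.float64)
    response = np.abs(laplace(g))            # second-difference energy
    smooth = minimum_filter(response, size=win)
    cleaned = maximum_filter(smooth, size=win)   # close small gaps
    if thresh is None:
        thresh = np.median(cleaned)
    return cleaned < thresh                  # True where interpolation is suspected
```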

15.
Copy-move is one of the most commonly used image tampering operations, in which a part of the image content is copied and then pasted elsewhere in the same image. To make the forgery visually convincing and conceal its trace, the copied part may be subjected to post-processing operations such as rotation and blur. In this paper, we propose a copy-move forgery detection algorithm based on the polar cosine transform and approximate nearest neighbor searching. The algorithm starts by dividing the image into overlapping patches. Robust and compact features are extracted from the patches by exploiting the rotational invariance and orthogonality of the polar cosine transform. Potential copy-move pairs are then detected by identifying patches with similar features, which is formulated as approximate nearest neighbor searching and accomplished by means of locality-sensitive hashing (LSH). Finally, post-verification is performed on the potential pairs to filter out false matches and improve detection accuracy. The LSH-based similar-patch identification and the post-verification methods are the two major novelties of the proposed work. Experimental results show that the method produces accurate detection results and is highly robust to various post-processing operations. In addition, the LSH-based similar-patch detection scheme is much more effective than the widely used lexicographical sorting.
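A hedged sketch of the two building blocks, rotation-invariant polar cosine transform magnitudes per patch and random-hyperplane LSH for approximate nearest-neighbour grouping; the set of (n, l) orders, the hash size, and the distance threshold are assumptions, and the paper's post-verification stage is omitted.

```python
import numpy as np

def pct_magnitudes(patch, orders=((0, 0), (1, 0), (1, 1), (2, 1), (2, 2))):
    """Rotation-invariant polar cosine transform magnitudes of a square patch."""
    p = patch.astype(np.float64)
    size = p.shape[0]
    ys, xs = np.mgrid[0:size, 0:size]
    x = (2 * xs - size + 1) / (size - 1)
    y = (2 * ys - size + 1) / (size - 1)
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    disk = r <= 1.0                              # unit-disk support
    feats = []
    for n, l in orders:
        basis = np.cos(np.pi * n * r ** 2) * np.exp(-1j * l * theta)
        feats.append(np.abs(np.sum(p[disk] * basis[disk])) / disk.sum())
    return np.array(feats)

def lsh_candidate_pairs(features, coords, n_bits=12, min_offset=32, seed=0):
    """Random-hyperplane LSH: patches whose feature vectors hash to the same
    bucket and lie far apart spatially become copy-move candidates."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(features.shape[1], n_bits))
    codes = (features @ planes > 0).astype(np.int64) @ (1 << np.arange(n_bits))
    buckets = {}
    for idx, code in enumerate(codes):
        buckets.setdefault(int(code), []).append(idx)
    pairs = []
    for members in buckets.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                i, j = members[a], members[b]
                if np.linalg.norm(np.subtract(coords[i], coords[j])) > min_offset:
                    pairs.append((coords[i], coords[j]))
    return pairs
```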

16.
王桂强 《刑事技术》2003,(5):30-35,57
Objective: To describe the current state and development of imaging technology and its applications in the field of forensic imagery. Methods: Domestic and international literature on forensic imaging technology was reviewed from a theoretical perspective. Results: A new framework for forensic imaging technology is proposed. Conclusion: The three main components of forensic imaging technology are imaging examination, image analysis examination, and image compositing and demonstration.

17.
Making changes or additions to written entries in a document can be profitable and illegal at the same time. A simple univariate approach is first used in this paper to quantify the evidential value of color measurements for deciding whether inks on a document come from different sources or the same source. Graphic, qualitative discrimination is then obtained independently by applying color deconvolution image processing to document images, with parameters optionally optimized by support vector machines (SVM), a machine-learning method. Discrimination based on the qualitative results of image processing is finally compared with the quantitative results of the statistical approach. As color differences increase, optimized color deconvolution achieves qualitative discrimination when the statistical approach indicates evidence for the different-source hypothesis.
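A minimal sketch of Ruifrok–Johnston-style color deconvolution in Python, as an illustration of the image-processing step only; the ink color vectors in the usage comment are illustrative assumptions (in practice they would be measured from the document or tuned, for example by the SVM-based optimization the paper describes).

```python
import numpy as np

def color_deconvolution(rgb, stain_rgb):
    """Convert an RGB image to optical density and project onto the inverse
    of a matrix whose rows are unit OD vectors of the colors to separate
    (e.g. two inks plus a residual channel)."""
    od = -np.log10((rgb.astype(np.float64) + 1.0) / 256.0)
    stains = np.array(stain_rgb, dtype=np.float64)
    stains /= np.linalg.norm(stains, axis=1, keepdims=True)   # unit-length rows
    concentrations = od.reshape(-1, 3) @ np.linalg.inv(stains)
    return concentrations.reshape(rgb.shape)

# Hypothetical OD color vectors for two inks plus a residual channel:
# channels = color_deconvolution(img, [[0.65, 0.70, 0.29],
#                                      [0.27, 0.57, 0.78],
#                                      [0.70, 0.10, 0.70]])
# Viewing one concentration channel suppresses one ink and highlights
# entries written with the other.
```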

18.
19.
Pinpoint authentication watermarking based on a chaotic system
Watermarking has been an active research field over the past ten years, with applications in copyright management, content authentication, and so on. For authentication watermarking, tamper localization and detection accuracy are two important performance criteria, yet most methods in the literature cannot achieve precise localization, and few researchers address detection accuracy. In this paper, a pinpoint authentication watermarking scheme is proposed based on a chaotic system, which is sensitive to its initial value. The approach can not only exactly localize malicious manipulations but also reveal block substitutions when the Holliman-Memon (VQ) attack occurs. An image is partitioned into non-overlapping regions according to the required precision. In each region, a chaotic model is iterated to produce chaotic sequences from initial values determined by combining the prominent luminance values of pixels, position information, and an image key. An authentication watermark is then constructed from the binary chaotic sequences and embedded in the embedding space. At the receiver, a detector extracts the watermark and localizes the tampered regions without access to the host image or the original watermark. The precision of spatial localization can reach one pixel, which is valuable for images examined at other-than-ordinary viewing distances, such as medical and military images. A detection accuracy rate is defined and analyzed to quantify the probability of the detector making correct decisions. Experimental results demonstrate the effectiveness and advantages of the algorithm.
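A small sketch of the chaotic-sequence idea using the logistic map; the seeding formula, the LSB embedding, and the block/bit sizes are assumptions chosen for illustration, not the paper's construction (which embeds in a dedicated embedding space).

```python
import numpy as np

def logistic_sequence(x0, length, mu=3.9999):
    """Iterate the logistic map x -> mu*x*(1-x); with mu near 4 the sequence
    is chaotic and extremely sensitive to the initial value x0."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def block_watermark_bits(block, position, image_key, n_bits=64):
    """Binary authentication watermark for one region, seeded from the block's
    luminance (with the LSB masked out so embedding does not change the seed),
    its position, and an image key."""
    luminance = float(np.mean(np.asarray(block, dtype=np.uint8) & 0xFE))
    raw = luminance * 131 + position[0] * 31 + position[1] * 17 + image_key
    seed = (raw % 9973) / 9973.0
    seed = min(max(seed, 1e-6), 1.0 - 1e-6)     # keep strictly inside (0, 1)
    return (logistic_sequence(seed, n_bits) > 0.5).astype(np.uint8)

def embed(block, bits):
    """Write the watermark bits into the LSBs of the block's first pixels."""
    flat = np.asarray(block, dtype=np.uint8).ravel().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(block.shape)

# A detector recomputes block_watermark_bits from the received block (same
# position and key) and compares it with the extracted LSBs; any mismatch
# localises tampering to that region.
```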

20.
Improved detection of forensic evidence by combining narrow band photographic images taken at a range of wavelengths is dependent on the substance of interest having a significantly different spectrum from the underlying substrate. While some natural substances such as blood have distinctive spectral features which are readily distinguished from common colorants, this is not true for visualization agents commonly used in forensic science. We now show that it is possible to select reagents with narrow spectral features that lead to increased visibility using digital cameras and computer image enhancement programs even if their coloration is much less intense to the unaided eye than traditional reagents. The concept is illustrated by visualising latent fingermarks on paper with the zinc complex of Ruhemann's Purple, cyanoacrylate-fumed fingerprints with Eu(tta)(3)(phen), and soil prints with 2,6-bis(benzimidazol-2-yl)-4-[4'-(dimethylamino)phenyl]pyridine [BBIDMAPP]. In each case background correction is performed at one or two wavelengths bracketing the narrow absorption or emission band of these compounds. However, compounds with sharp spectral features would also lead to improved detection using more advanced algorithms such as principal component analysis.
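A minimal sketch of the bracketing background correction described above: the off-band images approximate the background at the on-band wavelength, and subtracting it leaves mainly the material with the sharp spectral feature. The interpolation weight and the 8-bit rescaling are assumptions for illustration.

```python
import numpy as np

def bracket_corrected(on_band, off_low, off_high, w=0.5):
    """Subtract a background estimated by linearly interpolating two images
    taken at wavelengths bracketing a narrow absorption/emission band.
    With a single bracketing image, pass it as both off_low and off_high."""
    background = w * off_low.astype(np.float64) + (1 - w) * off_high.astype(np.float64)
    corrected = on_band.astype(np.float64) - background
    # rescale to 8-bit for display
    corrected -= corrected.min()
    if corrected.max() > 0:
        corrected *= 255.0 / corrected.max()
    return corrected.astype(np.uint8)
```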

