Similar articles
1.
Statistical feature selection is a key issue affecting the performance of steganalytic methods. In this paper, a method for comparing the performance of different types of image steganalytic features is first proposed, based on their changing rates. Then, for two typical types of steganalytic features, the co-occurrence matrix and the Markov transition probability matrix, their performance is discussed and theoretically compared for detecting two well-known classes of JPEG steganography: those that preserve the DCT coefficient histogram and those that cause it to shrink. Finally, a conclusion on the relative sensitivity of the components of these two feature types is derived: for steganography that preserves the histogram, their sensitivities are comparable; for the other class (such as steganography that subtracts 1 from the absolute value of a coefficient), different feature components have different sensitivities, so a new steganalytic feature can be obtained by fusing the better components. Experimental results on the detection of three typical JPEG steganographic schemes (F5, OutGuess and MB1) verify the theoretical comparison and show that the detection accuracy of the fused feature outperforms that of existing typical features.
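To make the two feature types concrete, here is a minimal numpy sketch of how a horizontal co-occurrence matrix and a Markov transition probability matrix can be formed from an array of quantized DCT coefficients clipped to [-T, T]. The function names, the horizontal-only scan direction, and the threshold T=3 are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def cooccurrence(coeffs, T=3):
    """Normalized horizontal co-occurrence matrix of coefficients clipped to [-T, T]."""
    c = np.clip(coeffs, -T, T)
    # Pairs of horizontally adjacent coefficients, shifted into index range [0, 2T].
    pairs = np.stack([c[:, :-1].ravel(), c[:, 1:].ravel()], axis=1) + T
    M = np.zeros((2 * T + 1, 2 * T + 1))
    for i, j in pairs:
        M[i, j] += 1
    return M / M.sum()

def markov_transition(coeffs, T=3):
    """Row-normalize the co-occurrence matrix into transition probabilities."""
    M = cooccurrence(coeffs, T)
    rows = M.sum(axis=1, keepdims=True)
    return np.divide(M, rows, out=np.zeros_like(M), where=rows > 0)
```

The only structural difference between the two features is the row normalization: the co-occurrence matrix is a joint distribution over adjacent coefficient pairs, while the Markov matrix conditions each row on the first coefficient of the pair, which is what changes the sensitivity of individual components.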

2.
Source camera identification is an emerging field in digital image forensics that aims to identify the source camera used to capture a given image. The technique uses photo response non-uniformity (PRNU) noise as a camera fingerprint, since it is a unique characteristic capable of distinguishing images even when they are captured by similar cameras. Most existing PRNU-based approaches are very sensitive to the random noise components present in the estimated PRNU, and they are not robust when simple manipulations are performed on the images. Hence, a new feature-based PRNU approach is proposed for source camera identification, choosing features that are robust to image manipulations. The PRNU noise is extracted from the images using a wavelet-based denoising method and is represented by higher order wavelet statistics (HOWS), which are invariant to image manipulations and geometric variations. The features are fed to support vector machine classifiers to identify the originating source camera of a given image, and the results were verified using ten-fold cross validation. Experiments carried out on images captured by various cell phone cameras demonstrated that the proposed algorithm can identify the source camera of a given image with good accuracy, and can differentiate images even when they are captured by similar cameras of the same make and model. The analysis also showed that the proposed technique remains robust when the images are subjected to simple manipulations or geometric variations.
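As a rough illustration of the PRNU pipeline described above, the sketch below extracts a noise residual, averages residuals into a per-camera fingerprint, and identifies a camera by normalized correlation. It substitutes a simple local-mean filter for the paper's wavelet-based denoiser and omits the HOWS/SVM stage entirely, so all function names and the filter size are assumptions for illustration only.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def noise_residual(img, k=3):
    """Residual = image minus a k-by-k local mean (stand-in for wavelet denoising)."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode='edge')
    smooth = sliding_window_view(p, (k, k)).mean(axis=(2, 3))
    return img - smooth

def camera_fingerprint(images, k=3):
    """Average the residuals of several images from one camera."""
    return np.mean([noise_residual(im, k) for im in images], axis=0)

def identify(img, fingerprints, k=3):
    """Index of the fingerprint with the highest normalized correlation."""
    r = noise_residual(img, k).ravel()
    r = (r - r.mean()) / (r.std() + 1e-12)
    best, best_c = -1, -np.inf
    for i, f in enumerate(fingerprints):
        g = f.ravel()
        g = (g - g.mean()) / (g.std() + 1e-12)
        c = (r @ g) / len(r)
        if c > best_c:
            best, best_c = i, c
    return best
```

Averaging residuals suppresses the scene content, which differs from image to image, while the fixed PRNU pattern reinforces itself; that is why even this crude residual works as a fingerprint on synthetic data.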

3.
Eye tracking was used to measure the visual attention of nine forensic document examiners (FDEs) and 12 control subjects in a blind signature comparison trial. Subjects evaluated 32 questioned signatures (16 genuine, eight disguised, and eight forged), which were compared on screen with four known signatures of the specimen provider while their eye movements, response times, and opinions were recorded. FDEs' opinions were significantly more accurate than controls', providing further evidence of FDE expertise. Control and FDE subjects looked at signature features in a very similar way, and the difference in the accuracy of their opinions can be accounted for by different cognitive processing of the visual information they extract from the images. In a separate experiment, the FDEs re-examined a reordered set of the same 32 questioned signatures. In this phase each signature was presented for only 100 msec to test whether eye movements are relevant in forming opinions; performance dropped significantly, but not to chance levels, indicating that the examination process comprises a combination of global and local feature extraction strategies.

4.
To discriminate the acquisition pipelines of digital images, a novel scheme for identifying natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between the two classes are investigated from the viewpoints of statistics and texture, and a 31-dimensional feature vector is constructed for identification. Then LIBSVM is used for classification. The experimental results show that the method achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with existing methods based only on statistical features or other features, and has great potential for the identification of natural images and computer-generated graphics.

5.
With the availability of powerful editing software and sophisticated digital cameras, region duplication is becoming more and more popular in image manipulation, where part of an image is pasted to another location to conceal undesirable objects. Most existing techniques detect such tampering only at the cost of high computational complexity. In this paper, we present an efficient and robust approach to detect this specific artifact. First, the original image is divided into fixed-size blocks and the discrete cosine transform (DCT) is applied to each block, so that the DCT coefficients represent it. Second, each transformed block is represented by a circle block, and four features are extracted to reduce its dimension. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched using a preset threshold. To make the algorithm more robust, additional parameters are proposed to remove wrongly matched blocks. Experimental results show that the proposed scheme is robust not only to multiple copy-move forgeries but also to blurring and noise addition, while maintaining low computational complexity.
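The block-matching core of such a scheme can be sketched in a few lines: extract a short feature per overlapping block, sort the features lexicographically, compare sort-neighbours, and let matched pairs vote on a shift vector. The four quadrant means used below stand in for the paper's four circle-block features, and the tolerance and vote threshold are assumptions.

```python
import numpy as np

def detect_copy_move(img, bs=8, tol=1e-6, min_votes=16):
    """Return (dominant shift, vote count) between matched block pairs, or None."""
    h, w = img.shape
    half = bs // 2
    feats, locs = [], []
    for y in range(h - bs + 1):
        for x in range(w - bs + 1):
            b = img[y:y + bs, x:x + bs].astype(float)
            # Quadrant means: a 4-value stand-in for the paper's circle features.
            feats.append((b[:half, :half].mean(), b[:half, half:].mean(),
                          b[half:, :half].mean(), b[half:, half:].mean()))
            locs.append((y, x))
    feats, locs = np.array(feats), np.array(locs)
    order = np.lexsort(feats.T[::-1])            # lexicographic sort of features
    votes = {}
    for ia, ib in zip(order[:-1], order[1:]):    # compare sort-neighbours only
        if np.all(np.abs(feats[ia] - feats[ib]) <= tol):
            dy, dx = (locs[ib] - locs[ia]).tolist()
            if dy < 0 or (dy == 0 and dx < 0):   # normalize the shift's sign
                dy, dx = -dy, -dx
            votes[(dy, dx)] = votes.get((dy, dx), 0) + 1
    if not votes:
        return None
    shift = max(votes, key=votes.get)
    return (shift, votes[shift]) if votes[shift] >= min_votes else None
```

The vote threshold plays the role of the paper's extra parameters for removing wrongly matched blocks: isolated coincidental matches scatter across many shift vectors, whereas a genuine duplicated region produces many pairs sharing one shift.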

6.
One of the significant problems encountered in criminology studies is the successful automated matching of fired cartridge cases on the basis of the characteristic marks left on them by firearms. An intermediate step in the solution of this problem is the segmentation of certain regions defined on the cartridge case base. This paper describes a model-based method that segments the cartridge case using a surface height image of a center-fire cartridge case base. The proposed method detects the location of the cartridge case base center and specific circular contours around it iteratively, by projecting the problem onto a one-dimensional feature space. In addition, the firing pin impression region is determined using an adaptive threshold that differentiates impression marks from the primer region surface. Letters on the cartridge case base are also detected using surface modeling and adaptive thresholding, in order to render the surface comparison operation robust against irrelevant surface features. Promising experimental results indicate that the proposed method is suitable for the automated segmentation of cartridge case base regions.
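The one-dimensional projection idea can be illustrated as follows: estimate the base center from the intensity centroid, average the image over circles of integer radius, and take the sharpest drop in that radial profile as a circular contour. This is a simplified sketch of the projection step only, with hypothetical names; the paper's iterative model fitting and adaptive thresholds are not reproduced.

```python
import numpy as np

def radial_profile(img):
    """Centroid-centered average intensity as a function of integer radius."""
    h, w = img.shape
    ys, xs = np.indices((h, w))
    total = img.sum()
    cy = (ys * img).sum() / total            # intensity centroid as center guess
    cx = (xs * img).sum() / total
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=img.ravel())
    prof = sums / np.maximum(counts, 1)      # mean intensity per radius bin
    return (cy, cx), prof

def edge_radius(prof):
    """Radius of the sharpest intensity drop in the 1-D profile."""
    return int(np.argmin(np.diff(prof)))
```

Collapsing the 2-D segmentation into a 1-D profile is what makes the circular-contour search cheap: a contour becomes a single step edge in the profile.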

7.
Science & Justice, 2019, 59(4): 390-404
When a bullet is fired from a barrel, random imperfections in the interior surface of the barrel imprint 3-D micro-structures on the bullet surface that are seen as striations. Despite being random and non-stationary in nature, these striations are known to be consistently reproduced in a unique pattern on every bullet, which is a key idea in bullet identification. Common procedures in the field of automatic bullet identification include extraction of a feature profile from the bullet image, profile smoothing, and comparison of profiles using normalized cross-correlation. Since cross-correlation-based comparison is susceptible to high-frequency noise and nonlinear baseline drift, profile smoothing is a critical step in bullet identification. In previous work, we considered bullet images as nonlinear non-stationary processes and applied ensemble empirical mode decomposition (EEMD) as a preprocessing algorithm for smoothing and feature extraction. Using EEMD, each bullet average profile was decomposed into several scales known as intrinsic mode functions (IMFs). By choosing an appropriate range of scales, the resultant smoothed profile contained less high-frequency noise and no nonlinear baseline drift. However, the procedure of choosing the proper number of IMFs to reduce the high-frequency noise was manual. This poses a problem when comparing bullets whose images contain more or less noise than others, because their useful information may be present in the discarded IMFs. A further problem arises when the bullet type changes: manual inspection is needed once more to determine which range of IMFs contains less high-frequency noise for that particular type of bullet. In this paper, we propose a novel combination of EEMD and a Bayesian Kalman filter to solve these problems. First, the bullet images are rotated using the Radon transform, and the rotated images are averaged column-wise to acquire averaged 1-D profiles. The nonlinear baseline drift of each averaged profile is removed using the EEMD algorithm. The profiles are then processed by a Kalman filter designed to automatically and optimally reduce the effect of high-frequency noise. Using the Expectation-Maximization (EM) technique, the parameters of the Kalman filter are reconfigured to optimally suppress the high-frequency noise in each averaged profile. This work is the first effort that practically implements the Kalman filter for optimal denoising of firearm image profiles. In addition, we believe that the Euclidean distance metric can complement normalized cross-correlation-based comparison; therefore, we propose a comparison metric that is invariant to the start and end points of firearm image profiles and combines the desirable properties of both the Euclidean and normalized cross-correlation metrics in order to improve identification results. The proposed algorithm was evaluated on a database containing 180 2-D gray-scale images acquired from bullets fired from different AK-47 assault rifles. Although the proposed method requires more computation than conventional methods, the experiments showed that it attains better results than both the conventional methods and the previous EMD-based method in the field of automatic bullet identification.
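For readers unfamiliar with the denoising step, a scalar Kalman filter with a random-walk state model is the simplest instance of the idea: each profile sample is treated as a slowly varying state observed through noise. The paper re-tunes the filter's noise parameters per profile via EM; the fixed q and r below, and the random-walk model itself, are simplifying assumptions rather than the authors' exact design.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.25):
    """Scalar Kalman filter with a random-walk state model:
    x_k = x_{k-1} + w_k (var q),  z_k = x_k + v_k (var r)."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    x, p = z[0], 1.0                 # initial state estimate and variance
    out[0] = x
    for k in range(1, len(z)):
        p = p + q                    # predict: process noise inflates uncertainty
        gain = p / (p + r)           # Kalman gain balances model vs measurement
        x = x + gain * (z[k] - x)    # measurement update
        p = (1 - gain) * p
        out[k] = x
    return out
```

The ratio q/r sets the bandwidth: small q relative to r trusts the model and smooths aggressively, which is exactly the knob the EM step of the paper tunes automatically per profile.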

8.
Objective: To study methods for the manual selection and annotation of features in iris images, to explore directions for research on manual iris examination, and to discuss the feasibility of applying iris technology as a new forensic technique in litigation. Methods: First, combining ocular anatomy with basic iris theory, the authors coarsely classified iris features into five types: radial furrows, concentric furrows, the collarette, crypts, and pigment spots. Second, methods for extracting and analyzing iris image features were studied using existing iris algorithms and image processing methods. Finally, with the aid of dedicated software, a series of operations was performed on the iris images, including region segmentation, normalization, feature localization and annotation, and feature information extraction. Results: Manual feature selection and annotation of two iris images at the same scale was preliminarily achieved. Conclusion: The manual iris feature selection and annotation method studied in this paper is a preliminary exploration of applying iris recognition technology to forensic examination and identification, and lays a foundation for further research on manual iris comparison.

9.
A support vector machine parameter selection method based on a chaos optimization algorithm. Yuan Xiaofang, Wang Yaonan (College of Electrical and Information Engineering, Hunan University, Changsha 410082, China). Keywords: machine learning; support vector machine; chaos optimization; parameter selection. The parameter values of a support vector machine (SVM) determine its learning performance and generalization ability. To address this, the selection of SVM parameters is treated as a combinatorial optimization problem, and an objective function for this combinatorial optimization is established …

10.
The feasibility of 2D-intensity and 3D-topography images from a non-invasive Chromatic White Light (CWL) sensor for the age determination of latent fingerprints is investigated. The proposed method might provide the means to solve the so far unresolved issue of determining a fingerprint's age in forensics. Conducting numerous experiments for an indoor crime scene using selected surfaces, different influences on the aging of fingerprints are investigated and the resulting aging variability is determined in terms of inter-person, intra-person, inter-finger and intra-finger variation. The main influence factors are shown to be sweat composition, temperature, humidity, wind, UV radiation, surface type, contamination of the finger with water-containing substances, resolution and measured area size, whereas contact time, contact pressure and smearing of the print seem to be of minor importance. These influences lead to a certain experimental variability in inter-person and intra-person variation, which is higher than the inter-finger and intra-finger variation. Comparing the aging behavior of 17 different features using 1490 time series with a total of 41,520 fingerprint images, the great potential of the CWL technique in combination with the binary pixel feature from prior work is shown. Performing three different experiments for the classification of fingerprints into the two time classes [0, 5 h] and [5, 24 h], a maximum classification performance of 79.29% (kappa = 0.46) is achieved for the general case, which is further improved for special cases. The statistical significance of the two best-performing features (both binary pixel versions based on 2D-intensity images) is demonstrated, and a feature fusion is performed, highlighting the strong dependency of the features on each other. It is concluded that the method might be combined with additional capturing devices, such as microscopes or spectroscopes, into a very promising age estimation scheme.

11.
Reconstruction of 2D objects is a problem concerning many different fields, such as forensic science, archiving, and banking. In the literature it is treated as a one-sided puzzle problem, but this study handles torn banknotes as a double-sided puzzle problem for the first time. In addition, a new dataset (ToB) is created for solving this problem. A selection approach based on the Borda count method is adopted in order to decide which keypoint-based method should be used in the proposed reconstruction system; this approach determined Accelerated-KAZE (AKAZE) to be the most successful keypoint-based method. The study also proposes new measures for determining the success ratio of reconstructed banknotes and calculating their loss ratio. When the torn banknotes were reconstructed with the AKAZE-based reconstruction system, the average success rate calculated by the proposed metric was 95.55%.
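The Borda count used to pick the keypoint method is simple to state in code: each method earns points according to its position in every ranking, and the highest total wins. The method names in the test are illustrative; the paper's actual candidate list and ranking criteria are not reproduced here.

```python
def borda_count(rankings):
    """Aggregate rankings with the Borda count.

    rankings: list of lists, each ordering candidates best-to-worst.
    A candidate in position p of an n-long ranking earns n - 1 - p points.
    Returns (winner, score dict)."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, method in enumerate(ranking):
            scores[method] = scores.get(method, 0) + (n - 1 - pos)
    winner = max(scores, key=scores.get)
    return winner, scores
```

Because the Borda count rewards consistently high placement rather than first-place votes alone, it is a reasonable way to aggregate per-criterion rankings of keypoint detectors into a single choice.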

12.
Using the index of distances and the Optimal Classification method, party positions and the dimensionality of the Danish Folketing are calculated from 1920 to 2005. The empirical results suggest that a one-dimensional model explains around 85 per cent of the legislative dimensionality in Denmark over time, with only a modest presence of two or more dimensions in the 1970s. The rank order of party positions derived through the index of distances has higher face validity than the rankings derived through the Optimal Classification method. The Optimal Classification method produces a dimensionality measure which, when analysed, can be used to explain most of the events in Danish politics over the last 80 years. It is further argued that shifts in dimensionality can be explained by the number of parties in parliament and by historical incidents.

13.
Bite mark identification is based on the individuality of a dentition, which is used to match a bite mark to a suspected perpetrator. This matching is based on a tooth-by-tooth and arch-to-arch comparison utilising parameters of size, shape and alignment. The most common methods used to analyse bite marks are carried out in 2D space, meaning that the 3D information is preserved only two-dimensionally, with distortions. This paper presents a new 3D documentation, analysis and visualisation approach based on forensic 3D/CAD supported photogrammetry (FPHG) and the use of a 3D surface scanner. Our photogrammetric approach and the visualisation method used are, to the best of our knowledge, the first 3D approach for bite mark analysis in an actual case. The documentation has no distortion artifacts, as can occur with standard photography. All data are documented with metric 3D measurement, orientation and subsequent analysis in 3D space. Besides the metrical analysis between bite mark and cast, our method makes it possible to utilise the topographical 3D features of each individual tooth. This means that the 3D features of the biting surfaces and edges of each tooth are respected, which is, as shown in our case, very important, especially for the front teeth, which make the first contact with the skin. Based upon the detailed 3D representation of the cast with the 3D topographic characteristics of the teeth, the interaction with the 3D-documented skin can be visualised and analysed on the computer screen.

14.
A novel method for the non-destructive age determination of a blood stain is described. It is based on measurement of the visible reflectance spectrum of the haemoglobin component using a microspectrophotometer (MSP), spectral pre-processing and the application of supervised statistical classification techniques. The reflectance spectra of sample equine blood stains deposited on a glazed white tile were recorded between 1 and 37 days, using an MSP at wavelengths between 442 nm and 585 nm, under controlled conditions. The determination of age was based on the progressive change of the spectra with the aging of the blood stain. The spectra were pre-processed to reduce the effects of baseline variation and sample scattering. Two feature selection methods, based on the calculation of Fisher's weights and on the Fourier transform (FT) of the spectra, were used to create inputs to a statistical model based on linear discriminant analysis (LDA). This model was used to predict the age of the blood stain and was tested using leave-one-out cross validation. When the same blood stain was used to create the training and test datasets, an excellent correct classification rate (CCR) of 91.5% was obtained for 20 input frequencies, improving to 99.2% for 66 input frequencies. A more realistic scenario, in which separate blood stains were used for the training and test datasets, led to poorer classification success due to problems with the choice of substrate; nevertheless, up to 19 days a CCR of 54.7% with an average error of 0.71 days was obtained.
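As an illustration of the classification stage, here is a minimal two-class Fisher linear discriminant in numpy, of the kind that could sit behind the LDA model described above. It is a generic textbook implementation, not the authors' code; the small regularization term and the midpoint threshold are assumptions.

```python
import numpy as np

def fit_lda(X0, X1):
    """Fisher discriminant direction and midpoint threshold for two classes."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = (X0 - mu0).T @ (X0 - mu0)          # within-class scatter, class 0
    S1 = (X1 - mu1).T @ (X1 - mu1)          # within-class scatter, class 1
    # Small ridge term keeps the scatter matrix invertible.
    w = np.linalg.solve(S0 + S1 + 1e-6 * np.eye(X0.shape[1]), mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2         # midpoint of projected class means
    return w, threshold

def predict_lda(w, threshold, X):
    """Label 1 when the projection exceeds the midpoint threshold."""
    return (X @ w > threshold).astype(int)
```

In the paper's setting, the rows of X0 and X1 would be pre-processed spectra reduced to the selected input frequencies, and the two classes would be two age groups of blood stains.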

15.
Objective: To explore new methods for extracting and identifying the main gait feature parameters of human walking and for their quantitative analysis. Methods: Based on gait analysis techniques and traditional footprint examination theory and methods, the Simi Motion video motion-capture system was used to identify and analyze the gait feature parameters of 50 healthy males walking normally. Results: A gait feature extraction method based on joint points is proposed, and the basic parameters of walking gait features were obtained, providing technical support for the quantitative examination of gait features. Conclusion: The study shows that the Simi Motion video motion-capture system can obtain fairly accurate gait feature data and can meet the needs of human gait analysis, thereby serving the purpose of individual identification.

16.
Crimes such as robbery and murder often involve firearms. To assist with the investigation of a crime, firearm examiners are asked to determine whether cartridge cases found at a crime scene were fired from a suspect's firearm. This examination is based on a comparison of the marks left on the surfaces of cartridge cases, among which firing pin impressions are some of the most commonly used. In this study, a total of nine Ruger model 10/22 semiautomatic rifles were used, and fifty cartridges were fired from each rifle. The cartridge cases were collected, and each firing pin impression was cast and photographed using a comparison microscope. In this paper, we describe how a computer vision algorithm, the Histogram of Oriented Gradients (HOG), and a machine learning method, Support Vector Machines (SVMs), can be used to classify images of firing pin impressions. Our method achieved a reasonably high accuracy of 93%, which can be used to associate a firearm with a cartridge case recovered from a scene. We also compared our method with other feature extraction algorithms; the comparison showed that the HOG-SVM method had the highest performance on this classification task.
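To show what the HOG descriptor computes, here is a deliberately simplified version: per-cell histograms of gradient orientation weighted by gradient magnitude, without the block normalization of the full descriptor. It is a sketch of the idea only, not the descriptor the authors used (they would more plausibly use an existing library implementation); all parameter values are assumptions.

```python
import numpy as np

def hog_features(img, cell=8, nbins=9):
    """Simplified HOG: concatenated per-cell orientation histograms."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            bins = np.minimum((a / np.pi * nbins).astype(int), nbins - 1)
            hist = np.bincount(bins, weights=m, minlength=nbins)
            feats.append(hist / (np.linalg.norm(hist) + 1e-12))
    return np.concatenate(feats)
```

The resulting fixed-length vector, one histogram per cell, is what gets fed to the SVM: impressions from the same firing pin produce similar orientation statistics cell by cell.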

17.
We describe a procedure for reconstructing documents that have been shredded by hand, a problem that often arises in forensics. The proposed method first applies a polygonal approximation to reduce the complexity of the boundaries and then extracts relevant features of the polygon to carry out the local reconstruction. In this way, the overall complexity can be dramatically reduced, because few features are used to perform the matching. The ambiguities resulting from the local reconstruction are resolved and the pieces are merged together as we search for a global solution. The preliminary results reported in this paper, which take into account a limited number of shredded pieces (10-15), demonstrate that the feature-matching-based procedure produces promising results for the problem of document reconstruction.
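The abstract does not name its polygonal approximation algorithm; Ramer-Douglas-Peucker is the standard choice for simplifying a fragment boundary and is used below as an assumption. The routine recursively keeps the point farthest from the current chord whenever it deviates by more than a tolerance eps.

```python
def rdp(points, eps):
    """Ramer-Douglas-Peucker polygonal approximation of a polyline.

    points: list of (x, y) tuples; returns the simplified list of points."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        # Perpendicular distance of points[i] from the chord (x1,y1)-(x2,y2).
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5 or 1.0
        d = num / den
        if d > dmax:
            dmax, idx = d, i
    if dmax > eps:
        left = rdp(points[:idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right        # drop the duplicated split point
    return [points[0], points[-1]]
```

The tolerance eps controls the trade-off the abstract describes: a coarser polygon means fewer features to match, at the cost of losing fine boundary detail.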

18.
Nowadays, surveillance systems are widely used to help control crime, so establishing the authenticity of digital video is essential when deciding whether to admit it as legal evidence. Inter-frame duplication is the most common type of video forgery. Many methods have been proposed for detecting this type of forgery, but they require high computational time and are impractical. In this study, we propose an efficient inter-frame duplication detection algorithm based on the standard deviation of residual frames. The standard deviation of each residual frame is used to select some frames and ignore others that represent a static scene. Then, the entropy of the discrete cosine transform coefficients is calculated for each selected residual frame to represent its discriminating feature. Duplicated frames are then detected exactly using subsequence feature analysis. The experimental results demonstrate that the proposed method identifies inter-frame duplication forgery effectively, with localization and acceptable running time.

19.
The widespread use of mobile devices, in contrast to personal computers, has led to a new era of information exchange. Purchase trends for personal computers have started to decrease, whereas shipments of mobile devices are increasing. In addition, the increasing power of mobile devices, along with their portability, has attracted the attention of users. Not only are such devices popular among users, they are also favorite targets of attackers. The amount of mobile malware is rapidly on the rise, with malicious activities such as stealing user data, sending premium messages, and making phone calls to premium numbers without the user's knowledge. Numerous studies have developed methods to thwart such attacks. In order to develop an effective detection system, a subset of features must be selected from the hundreds available. In this paper, we study 100 research works published between 2010 and 2014 from the perspective of feature selection in mobile malware detection. We categorize available features into four groups: static features, dynamic features, hybrid features, and application metadata. Additionally, we discuss the datasets used in recent research studies and analyze the evaluation measures utilized.

20.
We describe a procedure for reconstructing documents that have been shredded by hand, a problem that often arises in forensics. The proposed method first applies a polygonal approximation to reduce the complexity of the boundaries and then extracts relevant features of the polygon to carry out the local reconstruction. In this way, the overall complexity can be dramatically reduced, because few features are used to perform the matching. The ambiguities resulting from the local reconstruction are resolved and the pieces are merged together as we search for a global solution. The preliminary results reported in this paper, which take into account a limited number of shredded pieces (10-15), demonstrate that the feature-matching-based procedure produces promising results for the problem of document reconstruction.

