Similar Documents
20 similar documents found (search time: 31 ms)
1.

A novel robust image hashing scheme based on quaternion Zernike moments (QZMs) and the scale-invariant feature transform (SIFT) is proposed for image authentication. The proposed method can locate tampered regions and identify the nature of the modification, including object insertion, removal, replacement, copy-move, and cut-to-paste operations. The QZMs serve as global features for image authentication, while SIFT key-point features provide forgery localization and classification. The performance of the proposed approach was evaluated on the UCID color image database and compared with several recent, efficient methods. These experiments show that the proposed scheme yields a short hash that is robust to the most common content-preserving manipulations, such as large-angle rotations, and that it correctly locates forged image regions as well as detecting the type of forgery.
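The authentication step of such hashing schemes reduces to comparing binary hashes under a distance threshold. A minimal sketch of that step only, assuming the global QZM-style features have already been extracted (`quantize_features`, the thresholds, and `tau` are illustrative, not the paper's actual construction):

```python
import numpy as np

def quantize_features(features, thresholds):
    """Binarize a global feature vector into a hash (1 bit per feature)."""
    return (np.asarray(features) > np.asarray(thresholds)).astype(np.uint8)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.sum(h1 != h2))

def authenticate(hash_ref, hash_test, tau):
    """Accept the image as authentic when the normalized Hamming
    distance stays below the threshold tau."""
    d = hamming_distance(hash_ref, hash_test) / len(hash_ref)
    return d < tau
```

A content-preserving manipulation should flip few bits (accepted), while tampering or a different image flips many (rejected).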


2.
Zhao  Chen  Shuai  Renjun  Ma  Li  Liu  Wenjia  Wu  Menglin 《Multimedia Tools and Applications》2022,81(17):24265-24300

Cervical cell classification has important clinical significance for early-stage cervical cancer screening. However, public cervical smear cell datasets are scarce, their class weights are unbalanced, image quality is uneven, and CNN-based classification studies tend to overfit. To address these problems, we propose a cervical cell image generation model based on taming transformers (CCG-taming transformers) to provide high-quality cervical cancer datasets with sufficient samples and balanced class weights. We improve the encoder structure by introducing SE-blocks and MultiRes-blocks to strengthen feature extraction from cervical cell images; we introduce Layer Normalization to standardize the data, which eases the subsequent non-linear processing by the ReLU activation in the feed-forward layers; and we introduce SMOTE-Tomek Links to balance the number of samples and the weights in the source dataset. We then use Tokens-to-Token Vision Transformers (T2T-ViT) combined with transfer learning to classify the cervical smear cell images and improve classification performance. Classification experiments with the proposed model on three public cervical cancer datasets achieve accuracies of 98.79% on the liquid-based cytology Pap smear dataset (4 classes), 99.58% on SIPAKMeD (5 classes), and 99.88% on Herlev (7 classes). The quality of the images we generate on these three datasets is very close to the source data: the final averaged inception score (IS), Fréchet inception distance (FID), Recall, and Precision are 3.75, 0.71, 0.32, and 0.65, respectively.
Our method improves the accuracy of cervical smear cell classification, provides more cervical cell sample images for cervical cancer research, and assists gynecologists in judging and diagnosing different types of cervical cancer cells and in analyzing cells at different stages, which are difficult to distinguish. This paper applies the transformer to the generation and recognition of cervical cancer cell images for the first time.
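The SMOTE half of the SMOTE-Tomek balancing step synthesizes minority-class samples by interpolating between a sample and a near neighbour. A sketch of that core idea (the full SMOTE-Tomek pipeline additionally removes Tomek links afterwards; this is not the authors' implementation):

```python
import numpy as np

def smote_like_oversample(minority, n_new, rng=None):
    """Generate n_new synthetic minority samples, each placed at a
    random point on the segment between a minority sample and its
    nearest minority neighbour."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = int(rng.integers(len(minority)))
        # nearest neighbour of sample i (excluding itself)
        d = np.linalg.norm(minority - minority[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)
```

Because every synthetic point lies between two real minority samples, the augmented class stays inside the original feature region.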


3.
Bone marrow is the body's main hematopoietic organ, and changes in the morphology, type, and number of bone marrow cells often reflect important disease information. Classifying, recognizing, and counting the cells in bone marrow smear images is therefore of great value for assisting clinical diagnosis. Building on feature selection with an improved genetic algorithm, this paper proposes a bone marrow cell recognition algorithm that combines entropy-based dynamic selection of member classifiers with adaptive fuzzy-integral classifier fusion, and demonstrates the effectiveness of the method on clinical cases.
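The entropy-based dynamic selection step can be illustrated generically: a member classifier whose posterior distribution has low Shannon entropy is confident, so only the confident members are kept for fusion. A sketch under that assumption (the paper's exact selection rule and fusion weights are not reproduced here):

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a classifier's posterior distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def select_classifiers(posteriors, max_entropy):
    """Dynamically keep the indices of member classifiers whose output
    distribution is confident enough (entropy below the cutoff)."""
    return [i for i, p in enumerate(posteriors) if entropy(p) <= max_entropy]
```

The surviving members' outputs would then be fused, e.g. by a fuzzy integral as in the paper.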

4.
Goyal  Neha  Kumar  Nitin  Kapil 《Multimedia Tools and Applications》2022,81(22):32243-32264

Automated plant recognition from leaf images is a challenging task for researchers in several fields. It requires distinguishing features derived from leaf images in order to assign a class label to each leaf image, and several methods for extracting such features exist in the literature. In this paper, we propose a novel automated framework for leaf identification. The proposed framework works in multiple phases, i.e., pre-processing, feature extraction, and classification using a bagging approach. Initially, leaf images are pre-processed using image-processing operations such as boundary extraction and cropping. In the feature extraction phase, popular nature-inspired optimization algorithms, viz. Spider Monkey Optimization (SMO), Particle Swarm Optimization (PSO), and Gray Wolf Optimization (GWO), are exploited to reduce the dimensionality of the features. In the last phase, a leaf image is classified by multiple classifiers whose outputs are combined by majority voting. The effectiveness of the proposed framework is established by experimental results on three datasets, i.e., Flavia, Swedish, and self-collected leaf images. On all datasets the classification accuracy of the proposed method is better than that of the individual classifiers, and on the Flavia dataset it is comparable to a deep-learning-based method.
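The final phase, combining several classifiers by majority voting, can be sketched as follows (a generic illustration, not the authors' exact implementation):

```python
from collections import Counter

def majority_vote(labels):
    """Label with the most votes; ties resolve to the label that first
    reached the top count (Counter preserves insertion order)."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_predict(per_classifier_preds):
    """Combine the label lists of several classifiers sample by sample."""
    return [majority_vote(votes) for votes in zip(*per_classifier_preds)]
```

Each inner list holds one classifier's predictions over the test set; `zip` regroups them per sample before voting.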


5.
This paper proposes a heuristic dynamic programming (HDP) scheme to simultaneously control the dissolved oxygen concentration and the nitrate level in wastewater treatment processes (WWTP). Unlike traditional HDP schemes, the proposed HDP controller computes the optimal control values analytically, which greatly reduces its learning burden. The system model and the evaluation index J are approximated by two echo state networks (ESNs). Gradient-based learning algorithms are employed to train both ESNs online, and the convergence of the training algorithm is investigated based on Lyapunov theory. The performance of the proposed ESN-based HDP (E-HDP) controller is tested and evaluated on a WWTP benchmark. Experimental results demonstrate that the proposed approach achieves effective control performance.

6.

Underwater images suffer from low contrast and color distortion due to the variable attenuation of light and the nonuniform absorption of the red, green, and blue components. In this paper, we propose a Retinex-based underwater image enhancement approach. First, we apply contrast-limited adaptive histogram equalization (CLAHE), which limits noise and enhances the contrast of the dark components of the underwater image, at the cost of blurring some visual information. Then, to restore the distorted colors, we perform Retinex-based enhancement of the CLAHE-processed image. Next, to restore distorted edges and smooth the blurred parts of the image, we apply bilateral filtering to the Retinex-processed image. To combine the individual strengths of the CLAHE, Retinex, and bilateral filtering algorithms in a single framework, we determine suitable parameter values. Qualitative and quantitative comparisons with several existing approaches show that the proposed approach achieves better enhancement of underwater images.
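CLAHE is a tiled, clip-limited variant of plain histogram equalization. A sketch of the core global mapping through the cumulative histogram (the real CLAHE additionally clips the histogram and interpolates between local tiles; this simplification is ours, not the paper's):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-empty bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```

The mapping stretches the occupied part of the intensity range to the full 0–255 scale, which is what brightens the dark components.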


7.
Zhou  Wen  Liang  Yiwen 《Applied Intelligence》2022,52(2):1461-1476

Anomaly detection is an important problem that has been studied in depth across research domains and application fields. The dendritic cell algorithm (DCA) is one of the most popular artificial-immune-system-inspired approaches to anomaly detection. Its performance depends significantly on the parameters used to compute the relationship between input instances and detectors; although the DCA performs well in practical applications, it is difficult to analyze because these parameters are set empirically, and it lacks adaptability. This paper studies how to learn appropriate parameters for the deterministic DCA (dDCA) in anomaly detection tasks. In particular, we propose a novel immune-optimization-based dDCA (IO-dDCA) for anomaly detection, consisting of dDCA classification, T cell (TC) classification, gradient descent optimization, and immune nonlinear dynamic optimization. First, the dDCA is regarded as a binary classifier, and instances labeled as normal are re-classified by a T-cell-inspired method to improve the classification performance of the dDCA. Then, to improve the dDCA's adaptability, gradient descent is adopted to optimize its parameters. Finally, an immune nonlinear model is introduced to adjust the learning rate of the gradient descent and find the optimal parameters. Theoretical and experimental analyses of IO-dDCA through simulations show the effectiveness of the approach, and the experimental results show that the proposed IO-dDCA achieves good classification accuracy.
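To my understanding, the dDCA combines a "safe" and a "danger" signal per input into two running sums: csm (which drives detector migration) and k (which accumulates toward an anomaly verdict). A sketch using the weighting commonly quoted for the deterministic DCA (csm = S + D, k = D − 2S); the thresholds and weights are illustrative, and the paper optimizes such parameters rather than fixing them:

```python
def ddca_signals(safe, danger):
    """Per-iteration dDCA signals from one (safe, danger) input pair."""
    csm = safe + danger
    k = danger - 2.0 * safe
    return csm, k

def classify(signal_stream, threshold=0.0):
    """Accumulate k over a stream of (safe, danger) pairs; a positive
    total k flags the stream as anomalous."""
    k_total = sum(ddca_signals(s, d)[1] for s, d in signal_stream)
    return "anomalous" if k_total > threshold else "normal"
```

Safe-dominated streams drive k negative (normal); danger-dominated streams drive it positive (anomalous).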


8.

For almost four decades, image classification has attracted attention in the field of pattern recognition because of its applications in various fields, and many approaches have been proposed. In this paper, we present a dyadic multi-resolution deep convolutional neural wavelet network approach to image classification. The approach classifies one class against all other classes of the dataset by constructing a Deep Convolutional Neural Wavelet Network (DCNWN). This network is based on the Neural Network (NN) architecture, the Fast Wavelet Transform (FWT), and the Adaboost algorithm. It consists, first, of extracting features using the FWT based on Multi-Resolution Analysis (MRA); these features are used to calculate the inputs of the hidden layer. Second, those inputs are filtered with the Adaboost algorithm to select the best ones for each image. Third, we create an AutoEncoder (AE) using wavelet networks of all images. Finally, we apply pooling to each hidden layer of the wavelet network to obtain a DCNWN that accepts one class and rejects all others. Classification rates given by our approach show a clear improvement compared to those cited in this article.
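The FWT feature-extraction step splits a signal into approximation and detail coefficients at each level. A minimal sketch of one level with the Haar filter, the simplest wavelet (the paper does not state which wavelet family is used, so Haar is an assumption for illustration; the input length must be even):

```python
import numpy as np

def haar_step(signal):
    """One level of the fast wavelet transform with the Haar filter:
    returns (approximation, detail) coefficients over pixel pairs."""
    x = np.asarray(signal, dtype=float).reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)   # low-pass half-band
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)   # high-pass half-band
    return approx, detail
```

Applying `haar_step` recursively to the approximation coefficients yields the multi-resolution analysis; the transform is invertible, so no information is lost.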


9.

A new classification technique was proposed from the viewpoint of saving memory and substantially reducing processing time, to meet the strong demand for easier operation on a personal computer. To carry out this process efficiently, some neighbouring pixels are lumped into a cell. This idea is based on the fact that changes of the CCT counts in the sea area are monotonic, i.e., the histograms are symmetrical, and that cell-by-cell classification is therefore sufficient.

First, a cell distance was defined by extending the concept of the Mahalanobis distance, measuring the statistical difference between a cell and a cluster. The classification results agree well with those of the conventional Maximum Likelihood Method. We define this method as the CDM (Cell Distance Method).

Secondly, an alternative concept indicating the degree of similarity between two cells was proposed. This concept, defined as the HOM (Histogram Overlay Method), not only speeds up the processing of image data but is also closely related to the cell distance; in fact, it corresponds fairly well to the cell distance under certain conditions.

Thirdly, these two methods were extended to unsupervised classification and applied to the investigation of turbidity in the sea around Hiroshima and Kure, West Japan.
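The extended cell distance can be illustrated by summarizing each cell (a block of lumped pixels) by its mean vector and measuring its Mahalanobis distance to a cluster. This is one plausible reading of the extension, assumed here for illustration; the paper's exact definition may differ:

```python
import numpy as np

def cell_distance(cell_pixels, cluster_mean, cluster_cov):
    """Mahalanobis-style distance between a cell, summarized by the
    mean of its pixel vectors, and a cluster with the given mean and
    covariance."""
    diff = np.mean(np.asarray(cell_pixels, dtype=float), axis=0) - cluster_mean
    return float(np.sqrt(diff @ np.linalg.inv(cluster_cov) @ diff))
```

With an identity covariance this reduces to the ordinary Euclidean distance from the cell mean to the cluster centre.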

10.

In recent years, image scene classification based on low- and high-level features has been considered one of the most important and challenging problems in image processing research. High-level features based on semantic concepts give a more accurate model, closer to human perception of scene content. This paper presents a new multi-stage approach to image scene classification based on high-level semantic features extracted from the image content. In the first stage, the object boundaries and the labels that represent the content are extracted, using a combination of a fully convolutional deep network and a combined two-class SVM-fuzzy and SVR network. Topic modeling is used to represent the latent relationships between the objects: in the second stage, a combination of the bag of visual words and a supervised document neural autoregressive distribution estimator extracts the latent topics in the image. Finally, classification is performed with a Bayesian method using the features extracted by the deep network, the object labels, and the latent topics. The proposed method has been evaluated on three datasets: Scene15, UIUC Sports, and MIT-67 Indoor. Compared to previous state-of-the-art methods on these datasets, the experimental results show average improvements of 12%, 11%, and 14% in object detection accuracy, and of 0.5%, 0.6%, and 1.8% in the mean average precision of image scene classification, respectively.
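The bag-of-visual-words step used in the second stage assigns each local descriptor to its nearest codebook centre and histograms the assignments. A generic sketch assuming the k-means centres have already been learned (not the authors' exact pipeline):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Represent an image as a normalized histogram over visual words:
    each local descriptor votes for its nearest codebook centre."""
    descriptors = np.asarray(descriptors, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    # pairwise distances, shape (n_descriptors, n_words)
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = np.argmin(dists, axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what downstream topic models or classifiers consume, regardless of how many descriptors the image produced.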


11.
To better classify and recognize bone marrow cell images, this paper segments the cell images found in clinical hematology and cytology atlases and, on that basis, proposes an algorithm using fractal dimension and the image centroid to extract fractal features from each cell image. Because image texture is reflected not only in a statistically self-similar structure but also in the color distribution, a new color recognition component image C is extracted from the true-color bone marrow cell image and used for fractal feature extraction together with the saturation component image S of the HSI color space. Experimental data show that the fractal parameters of different types of bone marrow cell images differ to varying degrees, so these parameters discriminate well between certain classes of bone marrow cells.
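A standard way to estimate the fractal dimension of a binarized cell image is box counting: cover the image with boxes of decreasing size and fit the slope of log(count) against log(size). A sketch of this general technique (the paper's specific fractal estimator is not detailed in the abstract):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting dimension of a binary image: the
    negated slope of log(box count) versus log(box size)."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # number of s-by-s boxes containing at least one foreground pixel
        boxes = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled region yields dimension about 2, a line about 1; cell boundaries and textures fall in between, which is what makes the value discriminative.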

12.

Imaging spectroscopy records the solar reflected spectrum at fine spectral resolution in a large number of bands, producing a spectral profile for each pixel in an image. Such data tend to be highly correlated, and we harness this spectral dependence by introducing the S-space concept. Together with measures of spatial dependence, this concept lets one treat the spectral profile as a regionalized variable in which distance is measured in wavelengths; unlike image space, S-space is one-dimensional. We illustrate the concept using a CASI image of a forest scene and an AVIRIS image of an urban scene. The technique provides spectral correlation information for each individual spectral profile on a per-pixel basis, rather than the spectral variability across the entire image as is traditional in remote sensing investigations. As an example of the possibilities, spectral dependence was quantified using the semivariogram in S-space; a model of spatial dependence was then fitted to each semivariogram, and the model parameters were used as input to a classification algorithm to extract land-cover information. To compare our approach with standard techniques, we used the first three principal components to produce a land-cover classification. The classification derived from the semivariogram model parameters displayed better spatial contiguity and greatly reduced the dimensionality of the dataset. We also discuss future directions for the use of the S-space concept.
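The empirical semivariogram in S-space is computed per pixel along the band axis. A minimal sketch, with lag measured in band index as a stand-in for wavelength (band spacing and the model-fitting step are omitted):

```python
import numpy as np

def semivariogram(profile, max_lag):
    """Empirical semivariogram of a single spectral profile:
    gamma(h) = 0.5 * mean of squared differences at band lag h."""
    z = np.asarray(profile, dtype=float)
    gamma = []
    for h in range(1, max_lag + 1):
        diffs = z[h:] - z[:-h]
        gamma.append(0.5 * np.mean(diffs ** 2))
    return np.array(gamma)
```

A parametric model (e.g. spherical or exponential) would then be fitted to this curve, and its parameters used as the per-pixel classification features described above.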

13.
Wang  Sheng  Lv  Lin-Tao  Yang  Hong-Cai  Lu  Di 《Multimedia Tools and Applications》2021,80(21-23):32409-32421

For register detection in the printing field, a new approach based on Zernike-CNNs is proposed. The edge features of the image are extracted with Zernike moments (ZMs), and a recursive algorithm for the ZMs, the Kintner method, is derived. An improved convolutional neural network (CNN) is investigated to improve classification accuracy: based on the classic CNN, the improved CNNs adopt a parallel CNN to enhance local features and an auxiliary classification part to modify the classification-layer weights. A printed image is trained with 7 × 400 samples and tested with 7 × 100 samples, and the method in this paper is compared with other methods. In image processing, Zernike is compared with the Sobel, Laplacian of Gaussian (LoG), Smallest Univalue Segment Assimilating Nucleus (SUSAN), Finite Impulse Response (FIR), and Multi-scale Morphological Gradient (MMG) methods. In image classification, the improved CNNs are compared with the classical CNN. The experimental results show that Zernike-CNNs have the best performance: the mean square error (MSE) on the training samples reaches 0.0143, and the detection accuracy on the training and test samples reaches 91.43% and 94.85%, respectively. The experiments reveal that Zernike-CNNs are a feasible approach for register detection.


14.

Reversible data hiding (RDH) can recover the original image from the marked image without any distortion. RDH in encrypted images hides extra information in the ciphertext while still allowing lossless recovery of the actual data. To guarantee reversibility and address the information-redundancy drawback, the cover image pixels are copied into two images. This paper presents a high-capacity RDH scheme for encrypted images using fuzzy-based encryption. Initially, texture classification is performed by a convolutional neural network (CNN) to separate dense from transparent regions; the network identifies the significant features automatically, without individual supervision. Then, plain-text encryption is performed by the fuzzy group teaching with infinite elliptic curve (FGTIE) method: to overcome the shortcomings of FCM, the GTA is hybridized with the FCM approach, and encryption is carried out by the IE method. Next, a new embedding approach, quotient multi-pixel value differencing (QMPVD), is used to enhance the embedding capacity: to obtain higher PSNR and payload, multi-pixel differencing is combined with quotient value differencing. Finally, the original data are extracted and recovered with good quality and high capacity. Performance is evaluated with several metrics, including PSNR, SSIM, BER, MSE, embedding capacity/payload, sensitivity, specificity, tampering ratio, correlation coefficient, number of pixel change rate, and unified average changing intensity. PSNR and capacity are compared with existing approaches, namely encrypted-image-based RDH with the Paillier cryptosystem (EIRDH-PC), EIRDH with redundancy transfer (EIRDH-RT), and EIRDH with pixel value ordering (EIRDH-PVO), on three groups of images (brain, lungs, and abdomen).
The implementation results show that the introduced model attains better performance than the existing approaches in terms of PSNR and capacity. The proposed approach also achieves no pixel expansion, losslessness, and alternative order recovery.


15.

Generative Adversarial Networks (GANs) are among the most popular generative frameworks and have achieved compelling performance. They follow an adversarial approach in which two deep models, a generator and a discriminator, compete with each other. They have been used for many applications, especially image synthesis, because of their ability to generate high-quality images, and in the past few years many GAN variants have been proposed that produce high-quality image-generation results. This paper analyzes the working and architecture of the GAN and its popular variants for image generation in detail. In addition, we summarize and compare these models according to parameters such as architecture, training method, learning type, benefits, and performance metrics. Finally, we apply all of these methods to the benchmark MNIST dataset of handwritten digits and compare qualitative and quantitative results; the evaluation is based on the quality of the generated images, classification accuracy, discriminator loss, generator loss, and the computational time of the models. The aim of this study is to provide comprehensive information about the GAN and its various models in the field of image synthesis. Our main contribution is a critical comparison of popular GAN variants for image generation on the MNIST dataset. The paper also gives insights into the existing limitations and challenges faced by GANs and discusses associated future research directions.


16.
Transductive cost-sensitive lung cancer image classification (cited 3 times: 3 self-citations, 0 other citations)
Previous computer-aided lung cancer image classification methods are all cost-blind: they assume that the two kinds of misdiagnosis (categorizing a cancerous image as normal, or a normal image as cancerous) cost the same. In addition, previous methods usually require experienced pathologists to label a large number of images as training samples. To this end, a novel transductive cost-sensitive method is proposed for lung cancer image classification on needle-biopsy specimens that requires the pathologist to label only a small number of images. The proposed method analyzes lung cancer images through the following procedures: (i) an image-capturing procedure to acquire images from the needle-biopsy specimens; (ii) a preprocessing procedure to segment the individual cells from the captured images; (iii) a feature-extraction procedure to extract features (i.e. shape, color, texture, and statistical information) from the obtained individual cells; (iv) a codebook-learning procedure that learns a codebook on the extracted features by k-means clustering, so that each image can be represented as a histogram over codewords; (v) an image-classification procedure that predicts labels for test images using the proposed multi-class cost-sensitive Laplacian regularized least squares (mCLRLS). We evaluate the proposed method on a real image set provided by Bayi Hospital, which contains 271 images including normal ones and four cancerous types (squamous carcinoma, adenocarcinoma, small cell cancer, and nuclear atypia). The experimental results demonstrate that the proposed method achieves both a lower cancer-misdiagnosis rate and lower total misdiagnosis costs than previous methods, including supervised approaches (kNN, mcSVM, and MCMI-AdaBoost), a semi-supervised approach (LapRLS), and a cost-sensitive approach (CS-SVM).
The experiments also show that both the transductive and the cost-sensitive settings are useful when only a small number of training images are available.
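The "total misdiagnosis cost" criterion can be made concrete with an asymmetric cost matrix: each off-diagonal entry prices one kind of error, and missing a cancer is priced far higher than a false alarm. A generic sketch (the labels and cost values below are illustrative, not the paper's):

```python
def total_misdiagnosis_cost(y_true, y_pred, cost):
    """Total cost of a classifier's decisions under an asymmetric cost
    matrix given as a dict cost[(true_label, predicted_label)];
    correct decisions cost nothing."""
    return sum(cost.get((t, p), 0) for t, p in zip(y_true, y_pred) if t != p)
```

A cost-sensitive learner minimizes this quantity rather than the plain error count, so it prefers a few extra false alarms over a single missed cancer.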

17.
Zhang  Yong  Liu  Bo  Cai  Jing  Zhang  Suhua 《Neural computing & applications》2016,28(1):259-267

The extreme learning machine for single-hidden-layer feedforward neural networks has been widely applied to imbalanced data learning because of its fast learning capability. An ensemble can effectively improve classification performance by combining several weak learners according to some rule. In this paper, a novel ensemble of weighted extreme learning machines is proposed for imbalanced data classification, in which the weight of each base learner in the ensemble is optimized by a differential evolution algorithm. Experimental results on 12 datasets show that the proposed method achieves better classification performance than a simple vote-based ensemble and a non-ensemble method.
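The combination rule, a vote in which each base learner carries its own weight, can be sketched as follows (in the paper the weights are tuned by differential evolution; here they are simply given, and the class labels are assumed to be integers):

```python
import numpy as np

def weighted_vote(predictions, weights, n_classes):
    """Combine base-learner labels for one sample: each learner adds
    its weight to the score of the class it predicts, and the class
    with the highest total score wins."""
    scores = np.zeros(n_classes)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return int(np.argmax(scores))
```

Note how a single strong learner can outvote two weak ones, which is exactly what unweighted majority voting cannot do.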


18.
Li  Yannuan  Wan  Lin  Fu  Ting  Hu  Weijun 《Multimedia Tools and Applications》2019,78(17):24431-24451

In this paper, we propose a novel hash-code generation method based on a convolutional neural network (CNN), called piecewise supervised deep hashing (PSDH). It directly uses a latent-layer representation and the output-layer result of the classification network to generate a two-segment hash code for every input image: the first segment encodes class information and the second encodes a feature message. The proposed method is point-wise, easy to implement, and works very well for image retrieval; it performs particularly well when searching for pictures with similar features, and the more similar two images are in color, geometric information, and so on, the higher they rank in the search results. In contrast to the hashing methods proposed so far, we retain whole-hash-code search and additionally put forward a piecewise hash-code search method. Experiments on three public datasets demonstrate the superior performance of PSDH over several state-of-the-art methods.


19.

Texture analysis of remote sensing images based on classifying area units represented as image segments is usually more accurate than operating on individual pixels. In this paper we suggest a two-step procedure to segment texture patterns in remotely sensed data. An image is first classified by texture analysis using a multi-parameter, multi-scale technique. The intermediate results are then treated as initial segments for subsequent segmentation based on the Gaussian Markov random field (GMRF) model; the segmentation procedure seeks to merge the pair of segments with the minimum variance difference. Experiments on real data show that the two-step procedure improves both the computational efficiency and the accuracy of texture classification.
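The merge criterion, choosing the segment pair whose variances differ the least, can be sketched as follows. This is one plausible reading of the criterion; the paper's procedure would also restrict candidates to spatially adjacent segments and apply the GMRF model, both omitted here for brevity:

```python
import numpy as np
from itertools import combinations

def best_merge_pair(segments):
    """Return the index pair of segments whose pixel-value variances
    differ the least, i.e. the next candidates for merging."""
    variances = [np.var(np.asarray(s, dtype=float)) for s in segments]
    return min(combinations(range(len(segments)), 2),
               key=lambda ij: abs(variances[ij[0]] - variances[ij[1]]))
```

Repeating this selection and merging until a stopping rule fires yields the coarse-to-fine segmentation described above.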

20.
Computational methods for microscopy cell image analysis have greatly augmented the impact of imaging techniques and become fundamental for biological research. Understanding cell regulation processes is very important in biology, and confocal fluorescence imaging in particular plays a relevant role in the in vivo observation of cells. However, most biology researchers still analyze cells by visual inspection alone, which is time consuming and prone to subjective bias; automatic cell image analysis is therefore essential for large-scale, objective studies of cells. While the classic approach to automatic cell analysis is image segmentation, for in vivo confocal fluorescence microscopy images of plants such an approach is neither trivial nor robust to variations in image quality. To analyze plant cells in these images with robustness and increased performance, we propose the use of local convergence filters (LCF). Being based on gradient convergence, these filters can handle illumination variations, noise, and low contrast. We apply a range of existing convergence filters to cell nuclei analysis of the Arabidopsis thaliana plant root tip. To further increase contrast invariance, we present an augmentation of local convergence approaches based on image phase information. Using convergence index filters, we improved cell nuclei detection and shape estimation compared with baseline approaches, and with phase congruency information we further increased performance by 11% in nuclei detection accuracy and 4% in shape adaptation. Shape regularization was also applied, but with no significant gain, which indicates that shape estimation was already good for the applied filters.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号