Similar Literature
20 similar documents found (search time: 31 ms)
1.
Multimedia Tools and Applications - This article proposes a new image encryption approach based on bitplane decomposition methods and chaotic maps. This approach does not depend on any additional...

2.
Automatic speech recognition (ASR) systems follow a well-established pattern-recognition approach: signal-processing-based feature extraction at the front end and likelihood evaluation of feature vectors at the back end. Mel-frequency cepstral coefficients (MFCCs), the features most widely used in state-of-the-art ASR systems, are derived from the logarithmic spectral energies of the speech signal using a Mel-scale filterbank. In the filterbank analysis underlying MFCCs, there is no consensus on the spacing and number of filters to use across noise conditions and applications. In this paper, we propose a novel approach that uses particle swarm optimization (PSO) and a genetic algorithm (GA) to optimize the parameters of the MFCC filterbank, such as the center and side frequencies. The experimental results show that the new front end outperforms the conventional MFCC technique. All investigations are conducted using two separate classifiers, an HMM and an MLP, for Hindi vowel recognition in typical field conditions as well as in noisy environments.
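The filterbank-tuning idea above can be sketched with a minimal particle swarm optimizer. The fitness function below (distance of the sorted candidate frequencies to a fixed target layout) is a hypothetical stand-in for the recognition-accuracy fitness the paper optimizes; all parameter names and values are illustrative, not the authors' code.

```python
import numpy as np

def pso(cost, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5,
        lo=0.0, hi=1.0, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pcost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()               # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)].copy()
    return g, pcost.min()

# Toy fitness: how close the (sorted) candidate center frequencies are
# to a hypothetical target filter layout on a normalized frequency axis.
target = np.linspace(0.1, 0.9, 8)
best, best_cost = pso(lambda f: np.sum((np.sort(f) - target) ** 2), dim=8)
```

In the paper's setting, the cost function would instead run the recognizer on a development set and return an error rate for the candidate filterbank.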

3.
Robust high-dimensional data processing has seen exciting development in recent years. Theoretical results have shown that convex programming can be used to fit data to a low-rank component plus a sparse outlier component. This problem, also known as robust PCA, has found application in many areas of computer vision. In image and video processing and face recognition, the opportunity to process massive image databases is emerging as people upload photo and video data online in unprecedented volumes. However, data quality and consistency are not controlled in any way, and the sheer size of the data poses a serious computational challenge. In this paper we present t-GRASTA, or “Transformed GRASTA (Grassmannian robust adaptive subspace tracking algorithm)”. t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate three components of a decomposition of a collection of images: a low-rank subspace, a sparse part of occlusions and foreground objects, and a transformation such as rotation or translation of the image. We show that t-GRASTA is 4× faster than state-of-the-art algorithms, has half the memory requirement, and can achieve alignment for face images as well as jittered camera surveillance images.
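The low-rank-plus-sparse decomposition that robust PCA solves can be sketched with a batch inexact-ALM-style iteration. This is a generic textbook sketch, not t-GRASTA itself (which works incrementally on the Grassmann manifold and also estimates transformations); the penalty weights are common defaults, not the authors' settings.

```python
import numpy as np

def robust_pca(M, lam=None, mu=None, n_iter=300):
    """Decompose M ≈ L + S with L low-rank and S sparse, via an
    augmented-Lagrangian iteration (principal component pursuit sketch)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.sum(np.abs(M))
    S = np.zeros_like(M)
    Y = np.zeros_like(M)          # Lagrange multiplier for M = L + S
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(n_iter):
        # L-update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # S-update: entrywise soft thresholding
        S = shrink(M - L + Y / mu, lam / mu)
        # dual ascent on the constraint residual
        Y += mu * (M - L - S)
    return L, S
```

On a small synthetic example (a rank-2 matrix plus a few large sparse corruptions), this sketch recovers the low-rank part to within a small relative error.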

4.
Multimedia Tools and Applications - The integrity of an image is the premise for various applications. Existing image encryption algorithms rarely have the function of verifying the integrity for...

5.
In this paper, a robust hybrid image encryption algorithm with a permutation-diffusion structure is proposed, based on chaotic control parameters and a hyper-chaotic system. In the proposed method, a chaotic logistic map is employed to generate the control parameters for the permutation stage, which shuffles the image rows and columns to break the high correlation among pixels. Next, in the diffusion stage, another chaotic logistic map with different initial conditions and parameters is employed to generate the initial conditions for a hyper-chaotic Hopfield neural network, which in turn generates a keystream for homogenizing the shuffled image. The new hybrid method has been compared with several existing methods and shows comparable or superior robustness against blind decryption.
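The logistic-map-driven row/column permutation stage can be sketched as follows. The key values, burn-in length, and function names are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def logistic_stream(x0, r, n, burn=500):
    """Iterate the logistic map x <- r*x*(1-x), discarding a transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def permute_rows_cols(img, key=(0.3456, 3.99)):
    """Shuffle rows then columns using orderings induced by the chaotic sequence."""
    h, w = img.shape
    s = logistic_stream(key[0], key[1], h + w)
    rows, cols = np.argsort(s[:h]), np.argsort(s[h:])
    return img[rows][:, cols], (rows, cols)

def unpermute(img, order):
    """Invert permute_rows_cols given the row/column orderings."""
    rows, cols = order
    return img[np.argsort(rows)][:, np.argsort(cols)]
```

Because the orderings are fully determined by the key, a receiver with the same key regenerates the same permutation and inverts it exactly.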

6.
7.
We designed a stream-cipher algorithm based on one-time keys and robust chaotic maps in order to achieve high security and mitigate dynamical degradation. We used a piecewise linear chaotic map as the generator of the pseudo-random keystream sequence. The initial conditions were generated by a true random number generator: the MD5 hash of mouse positions. We applied the algorithm to color image encryption and obtained a satisfactory security level according to two measures, NPCR and UACI. Since collisions have been found for MD5, we combined the algorithm with traditional cyclic encryption to ensure higher security. The ciphered image is robust against noise and makes known-plaintext attacks infeasible, so the algorithm is suitable for color image encryption.
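The two measures mentioned, NPCR (number of pixel change rate) and UACI (unified average changing intensity), can be computed as below for 8-bit images. This is the standard definition of the metrics, not code from the paper.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of pixels that differ between two cipher images.
    UACI: mean absolute pixel difference normalized by 255, as a percentage."""
    c1 = c1.astype(np.int64)
    c2 = c2.astype(np.int64)
    npcr = np.mean(c1 != c2) * 100.0
    uaci = np.mean(np.abs(c1 - c2)) / 255.0 * 100.0
    return npcr, uaci
```

For two independent uniformly random 8-bit images the expected values are about 99.61% (NPCR) and 33.46% (UACI), which are the benchmarks a good cipher is compared against.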

8.
Most construction repetitive scheduling methods developed so far have been based on the premise that a repetitive project comprises many identical production units. Recently Huang and Sun [Huang RY, Sun KS. Non-unit based planning and scheduling of repetitive construction project. J Constr Eng Manage ASCE 2006;132(6):585–97] developed a workgroup-based repetitive scheduling method that takes the view that a repetitive construction project consists of repetitive activities of workgroups. Instead of repetitive production units, workgroups with repetitive or similar activities are identified and employed in planning and scheduling. The workgroup-based approach adds flexibility to the planning and scheduling of repetitive construction projects and enhances the effectiveness of repetitive scheduling. This work builds on that research and develops an optimization model for workgroup-based repetitive scheduling. A genetic algorithm (GA) is employed in the model formulation for finding the optimal solution. A chromosome representation, as well as the specification of other parameters for the GA analysis, is described in the paper. A sample case study is used for model validation and demonstration, and results and findings are reported.
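A permutation chromosome with order crossover, as commonly used for sequencing problems, can be sketched as follows. The fitness here (sum of workgroup completion times under hypothetical durations) is a toy stand-in for the model's project objective, and every parameter is illustrative rather than taken from the paper.

```python
import random

def ga_schedule(durations, pop_size=40, n_gen=150, pmut=0.2, seed=1):
    """Toy GA: chromosome = permutation giving the order workgroups are
    dispatched; fitness = sum of completion times (smaller is better)."""
    rng = random.Random(seed)
    n = len(durations)

    def fitness(perm):
        t, total = 0, 0
        for j in perm:
            t += durations[j]     # cumulative finish time of workgroup j
            total += t
        return total

    def ox(p1, p2):               # order crossover (OX)
        a, b = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[a:b] = p1[a:b]
        rest = [g for g in p2 if g not in child]
        for i in range(n):
            if child[i] is None:
                child[i] = rest.pop(0)
        return child

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        next_pop = pop[:2]                        # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            c = ox(p1, p2)
            if rng.random() < pmut:               # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            next_pop.append(c)
        pop = next_pop
    best = min(pop, key=fitness)
    return best, fitness(best)
```

For this toy fitness, the known optimum is shortest-duration-first order, which the GA should recover on small instances.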

9.
To address the high pixel correlation left by existing image encryption algorithms, an image encryption algorithm based on DNA coding and statistical-information optimization is proposed. First, DNA coding converts the R, G, and B channels of the image into DNA sequences, and DNA operations distort the pixels in the spatial domain. Next, the image is diffused, with a harmony search algorithm used to maximize the entropy of the diffusion stage. Finally, the image undergoes chaotic scrambling, with harmony search used to minimize the correlation coefficient of this stage. Simulation results show that the entropy and correlation coefficient achieved by the algorithm approach the ideal values of 8 and 0, respectively, and that the algorithm attains the desired level of security.
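The adjacent-pixel correlation coefficient that the scrambling stage drives toward 0 can be measured as below. The sampling scheme (random horizontal pairs) is one common convention, not necessarily the paper's exact procedure.

```python
import numpy as np

def adjacent_correlation(img, n_pairs=2000, seed=0):
    """Pearson correlation of horizontally adjacent pixel pairs,
    sampled at random positions of a 2D grayscale image."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    r = rng.integers(0, h, n_pairs)
    c = rng.integers(0, w - 1, n_pairs)
    x = img[r, c].astype(float)
    y = img[r, c + 1].astype(float)
    return np.corrcoef(x, y)[0, 1]
```

A smooth natural-image-like gradient scores near 1, while a well-scrambled (noise-like) image scores near 0, which is the ideal value the abstract cites.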

10.
A separable reversible data-hiding and encryption scheme for High Efficiency Video Coding (HEVC) video is proposed in this paper. In the encoding phase, the content owner encrypts the signs and amplitudes of motion vector differences and the signs of residual coefficients with an encryption key using RC4, while a data hider embeds data into nonzero AC residual coefficients with a hiding key. In the decoding phase, a user who has only the encryption key can decrypt the video and obtain an HEVC video similar to the original one; a user who has only the hiding key can extract the hidden data but learns nothing about the video content; and a user who has both keys can extract the hidden data and recover the original HEVC video. Experimental results and analysis show that the proposed scheme achieves good security performance while keeping the HEVC video stream format compliant.

11.
We report on the automated determination of the minimal required area of a MEMS accelerometer conforming to given specifications. For a realistic nonlinear sensor model this is only possible through numerical optimization, which typically struggles to find the global minimum or is time-consuming. A miniaturized sensor chip reduces manufacturing cost and leads to more competitive package sizes and new, unforeseen applications. Size reduction is especially important for consumer applications such as mobile phones and navigation devices, where an increasing demand for accelerometers is expected in the near future. With further miniaturization it becomes increasingly important to find the optimal design so that chip area is used as efficiently as possible. To achieve robust and flexible automated area reduction without loss of functionality, we combine available genetic and gradient-based optimization algorithms in a novel way. Furthermore, we reduce the model complexity, apply different scaling techniques, and adapt the optimization algorithm settings. Application to a capacitive and a piezoresistive MEMS accelerometer shows a significant improvement in efficiency compared with currently available optimization algorithms.

12.
An image encryption technique using DNA (deoxyribonucleic acid) operations and chaotic maps is proposed in this paper. First, the input image is DNA-encoded and a mask is generated using a 1D chaotic map. This mask is added to the DNA-encoded image using DNA addition. The intermediate result is DNA-complemented with the help of a complement matrix produced by two 1D chaotic maps. Finally, the resulting matrix is permuted using a 2D chaotic map and then DNA-decoded to obtain the cipher image. The proposed technique is fully invertible and can resist known-plaintext, statistical, and differential attacks.
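The DNA encoding, base-wise addition, and complement steps can be sketched as follows, using one of the eight valid 2-bit-to-base mappings. The specific mapping and rule tables are illustrative assumptions, since such schemes usually key the choice of rules to their chaotic maps.

```python
# One of the eight valid 2-bit encodings: 00->A, 01->C, 10->G, 11->T
B2DNA = {0b00: 'A', 0b01: 'C', 0b10: 'G', 0b11: 'T'}
DNA2B = {v: k for k, v in B2DNA.items()}
PAIR = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}   # Watson-Crick pairing

def dna_encode(byte):
    """Split an 8-bit pixel value into four 2-bit groups, MSB first."""
    return ''.join(B2DNA[(byte >> s) & 0b11] for s in (6, 4, 2, 0))

def dna_decode(seq):
    """Inverse of dna_encode: reassemble the byte from four bases."""
    value = 0
    for base in seq:
        value = (value << 2) | DNA2B[base]
    return value

def dna_add(s1, s2):
    """Base-wise DNA addition: add the underlying 2-bit values modulo 4."""
    return ''.join(B2DNA[(DNA2B[a] + DNA2B[b]) % 4] for a, b in zip(s1, s2))

def dna_complement(seq):
    """Base-wise complement, as used with the complement matrix."""
    return ''.join(PAIR[b] for b in seq)
```

Because every step is a bijection on the four bases, the whole pipeline is invertible, which is what makes the cipher decryptable.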

13.
In recent years, AES has been regarded as an excellent choice among existing symmetric cryptosystems for encrypting binary or text data. However, this standard encryption algorithm is unsuitable for images because of their particular features, such as high correlation, high redundancy, and large volume. Based on this fact, we have designed a new cryptosystem that takes these characteristics into account and provides a high level of performance and security. The cryptosystem adopts a new structure named the Outer-Inner structure, which consists of two phases: an Outer phase and an Inner phase. In the former, the image is treated as a single block; the purpose of this phase is to address the correlation and redundancy of images. To remain efficient and avoid the problems that affect several image encryption algorithms that treat an image as a single block, this phase involves only lightweight operations. In the latter phase the image is treated as a set of fixed-size blocks; the aim here is to address the volume issue. During this phase each block can be encrypted independently through an iterative function. The experimental results show that 7 rounds are sufficient to reach an acceptable security level.

14.
The RST approach (robust satellite technique) is a multi-temporal scheme of satellite data analysis already used successfully to monitor volcanoes at different geographic locations. In this work, the results of a long-term validation analysis of RST-based hot-spot products are presented. The study was performed by processing fourteen years of NOAA-AVHRR (National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer) records acquired over the Mt. Etna area between 1 January 1995 and 31 December 2008 at different overpass times (day/night), and by analyzing hundreds of volcano bulletins reporting information on Mt. Etna's eruptive activity, provided by visual observations and ground-based measurements. An optimized configuration of the RST approach, named RSTVOLC, is also presented and discussed here for the first time. Thanks to a better tradeoff between reliability and sensitivity, this method may be particularly suitable for supporting operational systems for volcano monitoring and hazard mitigation.

15.
Multimedia Tools and Applications - The present era is marked by exponential growth in the transfer of multimedia data through the internet. Most Internet-of-Things (IoT) applications send images to...

16.
Multimedia Tools and Applications - The paper proposes a robust image encryption scheme based on a chaotic system and an elliptic curve over a finite field. The sender and receiver agree on an elliptic...

17.
In this paper, we develop an easy-to-implement approximate method for taking uncertainties into account during multidisciplinary optimization. Multidisciplinary robust design usually involves setting up full uncertainty propagation within the system, requiring major modifications in every discipline and on the shared variables. Uncertainty propagation is an expensive process, but robust solutions can be obtained more easily when the disciplines affected by uncertainties have a significant effect on the objectives of the problem. A heuristic method based on local uncertainty processing (LOUP) is presented here, allowing approximate solution of specific robust optimization problems with minor changes to the initial multidisciplinary system. Uncertainty is processed within the disciplines it impacts directly, without propagation to the other disciplines. A criterion to verify a posteriori the applicability of the method to a given multidisciplinary system is provided. The LOUP method is applied to an aircraft preliminary-design industrial test case, in which it yielded robust designs whose performance is more stable, with respect to uncertain parameter variations, than that of deterministic solutions.

18.
Many real-life problems can be formulated as numerical optimization of certain objective functions. However, an objective function often possesses numerous local optima, which can trap an algorithm and keep it from moving toward the desired global solution. Evolutionary algorithms (EAs) have emerged to enable global optimization; at present, however, EAs are largely limited to solving small-scale problems because of their computational cost. To improve search efficiency, this paper presents a stochastic genetic algorithm (StGA). A novel stochastic coding strategy is employed so that the search space is dynamically divided into regions using a stochastic method and explored region by region. In each region, a number of children are produced through random sampling, and the best child is chosen to represent the region. The variance values are decreased if at least one of five generated children yields improved fitness; otherwise, they are increased. Experiments on 20 test functions of diverse complexity show that the StGA finds a near-optimal solution in all cases. Compared with several other algorithms, the StGA achieves not only improved accuracy but also a considerable reduction in computational effort; on average, its computational cost is about one order of magnitude lower. The StGA is also shown to be able to solve large-scale problems.
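The variance-adaptation rule described above can be sketched for a single region as follows: sample children around the region's representative, keep the best, and contract the variance when a child improves the fitness (widening it otherwise). The contraction/expansion factors, the variance cap, and the sphere test function are illustrative assumptions, not the StGA's actual settings.

```python
import numpy as np

def stga_region_search(cost, x0, sigma0=1.0, n_children=5, n_iter=300, seed=0):
    """Search one region: the representative x moves to the best improving
    child, while the region's spread sigma adapts to the success signal."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    sigma = sigma0
    for _ in range(n_iter):
        children = x + sigma * rng.standard_normal((n_children, x.size))
        fc = np.array([cost(c) for c in children])
        i = int(np.argmin(fc))
        if fc[i] < fx:                      # at least one child improved
            x, fx = children[i], fc[i]
            sigma *= 0.95                   # exploit: contract the region
        else:
            sigma = min(sigma * 1.5, 10.0)  # explore: widen the region
    return x, fx
```

The full StGA additionally partitions the search space into many such regions and evolves their representatives with genetic operators; this sketch shows only the within-region sampling and variance update.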

19.
This paper presents the splitting cubes, a fast and robust technique for performing interactive virtual cutting on deformable objects. The technique relies on two ideas. The first is to embed the deformable object in a regular grid, apply the deformation function to the grid nodes, and interpolate the deformation inside each cell from its 8 nodes. The second is to produce a tessellation of the object's boundary based on the intersections of that boundary with the edges of the grid. Note that the boundary can be expressed in any form, for example as a triangle mesh or as an implicit or parametric surface; the only requirement is that its intersections with the grid edges can be computed. The paper shows how the interpolation of the deformation inside the cells can be used to produce discontinuities in the deformation function, and how the intersections with the cut surface can be used to visually show the cuts on the object. The splitting cubes is essentially a tessellation algorithm for growing, deformable surfaces, and it can be combined with any method for animating deformable objects. Here the case of mesh-free methods (MMs) is considered: in this context, we describe a practical GPU-friendly method, which we name the extended visibility criterion, to introduce discontinuities in the deformation. Electronic supplementary material is available in the online version of this article and is accessible to authorized users.

20.
This paper concerns X-ray tomography image reconstruction of an object function from few projections in computed tomography (CT). The problem is so ill-posed that no classical method can give satisfactory results. We have investigated a new combined method for penalized-likelihood image reconstruction that couples a fuzzy penalty function (FP) with genetic algorithm (GA) optimization. The proposed algorithm does not suffer from the drawbacks of the ML-EM (maximum-likelihood expectation-maximization) algorithm: it converges rapidly to a low-noise solution even when the iteration number is high, and it gives a global estimate of the object parameters rather than a local one, as classical algorithms such as gradient methods do. The method was tested and validated on synthetic and real image datasets.
