Found 18 similar documents; search took 15 ms.
1.
The chosen-message method is commonly employed in Simple Power Analysis (SPA) attacks by selecting special input messages. However, in practical IoT hardware environments it is difficult to distinguish operations by visual observation, i.e., by SPA alone. In this paper, we propose a practical chosen-message correlation power analysis (CPA) attack that combines the chosen-message method with CPA for side-channel attacks. We then adopt two other practical chosen messages, 1 and n + 1, to attack Boscher's right-to-left binary exponentiation algorithm, which is widely considered an efficient side-channel-resistant algorithm. Finally, this paper presents a countermeasure that resists the chosen-message CPA attack on Boscher's algorithm without nullifying its resistance to Differential Power Analysis (DPA) and Differential Fault Analysis (DFA). To validate the proposed attack method and countermeasure, a 1024-bit RSA coprocessor was constructed on a Xilinx Virtex-5 with the Side-channel Attack Standard Evaluation Board (SASEBO), implementing both Boscher's algorithm and our proposed algorithm, and the proposed attack was launched on each separately. The experimental results show that the proposed attack and countermeasure are feasible and efficient.
2.
3.
J.M. González-Linares, N. Guil, E.L. Zapata 《Pattern recognition》2003,36(11):2543-2556
This paper presents a complete method for the automatic detection and location of two-dimensional objects, even in the presence of noise, occlusion, cluttering, and/or deformations. The method is based on shape information extracted from edge gradients and only needs a template of the object to be located. A new Generalized Hough Transform is proposed to automatically locate rigid objects in the presence of noise, occlusion, and/or cluttering. A Bayesian scheme uses this rigid-object location algorithm to obtain the deformation of the object. The whole method is invariant to rotation, scale, displacement, and minor deformations. Several examples with real images are presented to show the validity of the method.
4.
Peter Verkerk 《Computer Physics Communications》1982,25(4):325-345
A new procedure to correct experimental spectra for instrumental resolution is described. A priori knowledge is not used, in contrast to other deconvolution methods; consequently, the range of applications is very wide, while the necessary computations are simple. Use is made of the Fast Fourier Transform technique, which makes the method suitable for processing large amounts of data. Not only the propagation of random statistical errors but also the contribution from systematic errors, which arise in any method, is estimated. The validity of the method is demonstrated by means of numerical tests taken from different sources in the literature.
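The division-in-Fourier-space idea behind such deconvolution can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's procedure: the Wiener-style damping term `eps` is an added assumption to keep noise amplification bounded where the kernel's transform is small.

```python
import numpy as np

def fft_deconvolve(measured, kernel, eps=1e-3):
    """Correct a measured spectrum for instrumental resolution by
    division in Fourier space. `eps` damps frequencies where the
    kernel's transform is small (Wiener-style regularization)."""
    M = np.fft.rfft(measured)
    K = np.fft.rfft(kernel)
    # Damped division instead of a bare M / K
    return np.fft.irfft(M * np.conj(K) / (np.abs(K) ** 2 + eps),
                        n=len(measured))

# Demo: blur a sharp Gaussian peak with a wider Gaussian kernel,
# then partially recover it by deconvolution.
n = 128
x = np.arange(n)
true = np.exp(-0.5 * ((x - 64) / 2.0) ** 2)
kernel = np.exp(-0.5 * (np.minimum(x, n - x) / 5.0) ** 2)  # wrapped, centered at 0
kernel /= kernel.sum()
measured = np.fft.irfft(np.fft.rfft(true) * np.fft.rfft(kernel), n=n)
recovered = fft_deconvolve(measured, kernel)
```

The recovered peak is sharper and taller than the blurred measurement, while the damping keeps the high-frequency noise floor from blowing up.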
5.
A widely used way of parameterizing a hidden semi-Markov model is to use a Gaussian distribution for the output probability and a Gamma distribution for the state duration probability. Most of these models are based on the classical Newton's method with second-order convergence, whose iterative convergence is too slow for large-scale real-time applications. An improved parameter re-estimation algorithm for such parametric hidden semi-Markov models is introduced in this paper. The proposed approach is based on the forward-backward algorithm. It applies an iterative method with eighth-order convergence to improve the performance of the model. Numerical examples validate the proposed method.
6.
In this paper, we present an accurate and general interconnect model for planar transmission line interconnects with arbitrary boundary conditions. Based on this unified approach, we develop a SPICE-compatible parameter extraction algorithm that can be used in high-performance computer-aided-design applications. A range of multilayered interconnect geometries with arbitrary boundaries is analyzed. Different typical configurations of ground placement are considered to verify the applicability of the method. For all such cases, results are compared for admittance, line parameters, and delay, giving physical insight into the effect of boundary conditions on them. Compared with existing industry-standard numerical field solvers such as HFSS, the proposed model demonstrates more than 10× speedup within 2% accuracy.
7.
Shih-Yu Huang Yeuan-Kuen Lee Graeme Bell Zhan-he Ou 《Multimedia Tools and Applications》2010,48(2):267-289
A CAPTCHA is a test designed to distinguish computer programs from human beings, in order to prevent the abuse of networked resources. Academic research into CAPTCHAs includes designing friendly and secure CAPTCHA systems and defeating existing CAPTCHA systems. Traditionally, defeating a CAPTCHA test requires two procedures: segmentation and recognition. Recent research shows that the problem of segmentation is much harder than recognition. In this paper, two new segmentation techniques called projection and middle-axis point separation are proposed for CAPTCHAs with line cluttering and character warping. Experimental results show the proposed techniques can achieve segmentation rates of about 75%.
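The projection technique named above can be illustrated with a toy sketch. This is an assumption-laden simplification (it presumes a cleaned binary image with line clutter already removed; the `threshold` parameter and data layout are illustrative choices, not the paper's):

```python
def projection_segment(image, threshold=0):
    """Split a binary image into character spans by vertical projection:
    sum each pixel column, then cut where the projection falls to the
    threshold. `image` is a list of rows of 0/1 pixels; returns a list
    of (start, end) column spans."""
    width = len(image[0])
    projection = [sum(row[c] for row in image) for c in range(width)]
    segments, start = [], None
    for c, count in enumerate(projection):
        if count > threshold and start is None:
            start = c                    # entering a character region
        elif count <= threshold and start is not None:
            segments.append((start, c))  # leaving a character region
            start = None
    if start is not None:
        segments.append((start, width))
    return segments

# Two "characters" separated by one blank column
img = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
]
print(projection_segment(img))  # → [(0, 2), (3, 5)]
```

Projection fails on overlapping or warped characters, which is why the paper pairs it with middle-axis point separation.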
8.
Miguel Ángel González, Inés González-Rodríguez, Camino R. Vela, Ramiro Varela 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2012,16(12):2097-2113
We confront the job shop scheduling problem with sequence-dependent setup times and weighted tardiness minimization. To solve this problem, we propose a hybrid metaheuristic that combines the intensification capability of tabu search with the diversification capability of a genetic algorithm, which plays the role of long-term memory for tabu search in the combined approach. We define and analyze a new neighborhood structure for this problem, which is embedded in the tabu search algorithm. The efficiency of the proposed algorithm relies on elements such as neighbor filtering and a proper balance between intensification and diversification of the search. We report results from an experimental study across conventional benchmarks, where we analyze our approach and demonstrate that it compares favorably to state-of-the-art methods.
9.
When large, complex products (containing many assembly tasks, most of which have long task times) are produced on simple or two-sided assembly lines, hundreds of stations are typically required. These kinds of products also entail long product flow times, a large area for establishing the line, a high investment budget for equipment and tools in stations, and a large amount of work-in-process. To avoid these disadvantages, assembly lines with parallel multi-manned workstations can be utilized. In this paper, these lines and one of their balancing problems are addressed, and a branch-and-bound algorithm is proposed. The algorithm is composed of a branching scheme and efficient dominance and feasibility criteria based on problem-specific knowledge. A heuristic-based guidance for the enumeration process is included as an efficient component of the algorithm as well. The VWSolver algorithm, proposed in the literature for a special version of the problem, has been modified and compared with the proposed algorithm. Results show that the proposed algorithm outperforms VWSolver in terms of both CPU time and the quality of the feasible solutions found.
10.
11.
Cell formation is the first step in the design of cellular manufacturing systems. In this study, an efficient tabu search algorithm based on a similarity coefficient is proposed to solve the cell formation problem with alternative process routings and machine reliability considerations. In the proposed algorithm, good initial solutions are first generated and then improved by a tabu search algorithm that combines a mutation operator with an effective neighborhood solution searching mechanism. Computational experience on test problems shows that the proposed approach is extremely effective and efficient. Compared with a mathematical programming approach that took three hours to solve the problems, the proposed algorithm is able to produce optimal solutions in less than 2 s.
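The tabu search framework underlying such approaches can be sketched generically. This is only a skeleton under stated assumptions: the similarity-coefficient initialization, mutation operator, and neighborhood mechanism are problem-specific and omitted, and all names here are illustrative.

```python
from collections import deque

def tabu_search(start, neighbours, cost, tenure=7, iters=200):
    """Generic tabu-search skeleton. `neighbours(s)` yields candidate
    moves as (move_key, state) pairs; recently used move keys are
    forbidden unless they beat the best cost seen (aspiration)."""
    current = best = start
    best_cost = cost(start)
    tabu = deque(maxlen=tenure)  # short-term memory of recent moves
    for _ in range(iters):
        candidates = [(k, s) for k, s in neighbours(current)
                      if k not in tabu or cost(s) < best_cost]
        if not candidates:
            break
        key, current = min(candidates, key=lambda ks: cost(ks[1]))
        tabu.append(key)  # forbid reversing/repeating this move for a while
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy demo: minimize (x - 3)^2 over the integers with +/-1 moves;
# the move key encodes the directed step so reversals become tabu.
def nbrs(x):
    return [((x, x - 1), x - 1), ((x, x + 1), x + 1)]

sol, val = tabu_search(10, nbrs, lambda x: (x - 3) ** 2)
```

The tabu list lets the search walk downhill past plateaus without immediately undoing its own moves, which is what distinguishes it from plain local search.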
12.
Optical interconnections attract the attention of many engineers and scientists due to their potential for gigahertz transfer rates and concurrent access to the bus in a pipelined fashion. These unique characteristics of optical interconnections give us the opportunity to reconsider traditional algorithms designed for ideal parallel computing models, such as PRAMs. Since the PRAM model is far from practical, not all algorithms designed on this model can be implemented on a realistic parallel computing system. From this point of view, we study Cole's pipelined merge sort [Cole R. Parallel merge sort. SIAM J Comput 1988;14:770–85] on the CREW PRAM and extend it in an innovative way to an optical interconnection model, the LARPBS (Linear Array with Reconfigurable Pipelined Bus System) model [Pan Y, Li K. Linear array with a reconfigurable pipelined bus system—concepts and applications. J Inform Sci 1998;106:237–58]. Although Cole's algorithm is optimal, communication details were not provided because it was designed for a PRAM. We close this gap in our sorting algorithm on the LARPBS model and obtain an O(log N)-time optimal sorting algorithm using O(N) processors. This is a substantial improvement over the previous best sorting algorithm on the LARPBS model, which runs in O(log N log log N) worst-case time using N processors [Datta A, Soundaralakshmi S, Owens R. Fast sorting algorithms on a linear array with a reconfigurable pipelined bus system. IEEE Trans Parallel Distribut Syst 2002;13(3):212–22]. Our solution allows processors to be assigned and reused efficiently. We also discover two new properties of Cole's sorting algorithm, which are presented as lemmas in this paper.
13.
Wilhelmiina Hämäläinen 《Knowledge and Information Systems》2012,32(2):383-414
Statistical dependency analysis is the basis of all empirical science. A commonly occurring problem is to find the most significant dependency rules, which describe either positive or negative dependencies between categorical attributes. In medical science, for example, one is interested in genetic factors, which can either predispose to or prevent diseases. The requirement of statistical significance is essential, because the discoveries should also hold in future data. Typically, the significance is estimated either by Fisher's exact test or the χ²-measure. The problem is computationally very difficult, because the number of all possible dependency rules increases exponentially with the number of attributes. As a solution, different kinds of restrictions and heuristics have been applied, but a general, scalable search method has been missing. In this paper, we introduce an efficient algorithm, called Kingfisher, for searching for the best non-redundant dependency rules with statistical significance measures. The rules can express either positive or negative dependencies between a set of positive attributes and a single consequent attribute. The algorithm itself is independent of the goodness measure used, but we concentrate on Fisher's exact test and the χ²-measure. The algorithm is based on an application of the branch-and-bound search strategy, supplemented by several pruning properties. In particular, we prove a new lower bound for Fisher's p and introduce a new effective pruning principle. According to our experiments on classical benchmark data, the algorithm scales well and can efficiently handle even dense and high-dimensional data sets. An interesting observation was that Fisher's exact test not only produced more reliable rules than the χ²-measure, but also performed the search much faster.
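As a rough illustration of the significance measure involved, a one-sided Fisher's exact test for a 2×2 contingency table can be computed directly from the hypergeometric distribution. This is a sketch for intuition only, not the Kingfisher implementation, and the variable names are illustrative.

```python
from math import comb

def fisher_p(n11, n10, n01, n00):
    """One-sided Fisher's exact test p-value for a positive dependency
    in a 2x2 table: sum the hypergeometric probabilities of all tables
    at least as extreme as the observed one (co-occurrence count >= n11)."""
    row1, col1 = n11 + n10, n11 + n01
    n = n11 + n10 + n01 + n00
    p = 0.0
    for k in range(n11, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Strong positive dependency: X and Y co-occur far more than chance
p_dep = fisher_p(20, 5, 5, 20)
# Perfectly balanced table: no dependency, p-value stays large
p_indep = fisher_p(10, 10, 10, 10)
```

A small p-value (as for `p_dep`) marks a rule likely to hold in future data; the combinatorial explosion the abstract describes comes from having to rank such p-values over exponentially many candidate rules.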
14.
A parallel workload-balanced and memory-efficient lattice-Boltzmann algorithm for laminar Newtonian fluid flow through large porous media is investigated. It relies on a simplified LBM scheme using a single unit BGK relaxation time, which is implemented by means of a shift algorithm and comprises an even fluid-node-partitioning domain decomposition strategy based on a vector data structure. It provides perfect parallel workload balance, and its two-nearest-neighbour communication pattern combined with a simple data transfer layout results in 20-55% lower communication cost, 25-60% higher computational parallel performance, and 40-90% lower memory usage than previously reported LBM algorithms. Performance tests carried out using scale-up and speed-up case studies of laminar Newtonian fluid flow through hexagonal packings of cylinders and a random packing of polydisperse spheres on two different computer architectures reveal parallel efficiencies with 128 processors as high as 75% for domain sizes comprising more than 5 billion fluid nodes.
15.
This paper proposes a modified harmony search (MHS) algorithm with an intersect mutation operator and cellular local search for continuous function optimization problems. Instead of focusing on intelligent tuning of the parameters during the search process, the MHS algorithm divides all harmonies in harmony memory into a better part and a worse part according to their fitness. The novel intersect mutation operation has been developed to generate new harmony vectors. Furthermore, a cellular local search has also been developed in MHS, which helps to improve the optimization performance by exploring a large search space in the early phase of a run to avoid premature convergence, and exploiting a small region in the later phase to refine the final solutions. To obtain better parameter settings for the proposed MHS algorithm, the impacts of the parameters are analyzed by an orthogonal test and a range analysis method. Finally, two sets of well-known benchmark functions have been used to test and evaluate the performance of the proposed MHS algorithm. The functions in these benchmark sets have different characteristics, so they can give a comprehensive evaluation of the performance of MHS. The experimental results show that the proposed algorithm not only performs better than state-of-the-art HS variants but is also competitive with other well-known meta-heuristic algorithms in terms of solution accuracy and efficiency.
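For readers unfamiliar with harmony search, the baseline algorithm that MHS modifies can be sketched as follows. The intersect mutation and cellular local search steps are not reproduced here; all parameter values and the pitch-adjustment bandwidth are illustrative assumptions.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimize f over a box with basic harmony search.
    hms: harmony memory size, hmcr: memory considering rate,
    par: pitch adjusting rate."""
    rng = random.Random(seed)
    # Harmony memory: hms random vectors within the bounds
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:        # take a value from memory...
                x = rng.choice(memory)[d]
                if rng.random() < par:     # ...and maybe pitch-adjust it
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                          # or improvise a fresh value
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        s = f(new)
        if s < scores[worst]:              # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Sphere function: global minimum 0 at the origin
sol, val = harmony_search(lambda v: sum(x * x for x in v),
                          [(-5.0, 5.0)] * 3)
```

MHS replaces the uniform treatment of harmony memory with the better/worse split and intersect mutation described in the abstract, aiming at exactly the explore-then-exploit balance this plain version lacks.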
16.
Aida Khajavirad Jeremy J. Michalek Timothy W. Simpson 《Structural and Multidisciplinary Optimization》2009,39(2):187-201
Product family optimization involves not only specifying the platform from which the individual product variants will be derived, but also optimizing the platform design and the individual variants. Typically these steps are performed separately, but we propose an efficient decomposed multiobjective genetic algorithm to jointly determine optimal (1) platform selection, (2) platform design, and (3) variant design in product family optimization. The approach addresses limitations of prior restrictive component sharing definitions by introducing a generalized two-dimensional commonality chromosome to enable sharing components among subsets of variants. To solve the resulting high dimensional problem in a single stage efficiently, we exploit the problem structure by decomposing it into a two-level genetic algorithm, where the upper level determines the optimal platform configuration while each lower level optimizes one of the individual variants. The decomposed approach improves scalability of the all-in-one problem dramatically, providing a practical tool for optimizing families with more variants. The proposed approach is demonstrated by optimizing a family of electric motors. Results indicate that (1) decomposition results in improved solutions under comparable computational cost and (2) generalized commonality produces families with increased component sharing under the same level of performance.
A preliminary version of this paper was presented at the 2007 AIAA Multidisciplinary Design Optimization Specialists Conference.
17.
This paper proposes a computationally efficient method for estimating the angle-of-arrival and polarization parameters of multiple far-field narrowband diversely polarized electromagnetic sources, using arbitrarily spaced electromagnetic vector sensors at unknown locations. The electromagnetic vector sensor has six components, consisting of three orthogonal electric dipoles plus three orthogonal magnetic loops collocated in space. The presented method is based on an estimation method named the propagator, which requires only linear operations and no eigenvalue decomposition or singular value decomposition into the signal and noise subspaces, to estimate the scaled electromagnetic vector sensors' steering vectors and then to estimate the azimuth arrival angle, the elevation arrival angle, and the polarization parameters. Compared with its ESPRIT counterpart [K.T. Wong, M.D. Zoltowski, Closed-form direction finding and polarization estimation with arbitrarily spaced electromagnetic vector-sensors at unknown locations, IEEE Trans. Antennas Propagat. 48 (5) (2000) 671–681], the propagator method has its computational complexity reduced by the ratio of the number of sources to six times the number of vector sensors. Simulation results show that at high and medium signal-to-noise ratios, the proposed propagator method's estimation accuracy is similar to that of its ESPRIT counterpart.