Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
This paper presents a novel adaptive cuckoo search (ACS) algorithm for optimization. The step size is made adaptive from the knowledge of its fitness function value and its current position in the search space. The other important feature of the ACS algorithm is its speed, which is faster than the CS algorithm. Here, an attempt is made to make the cuckoo search (CS) algorithm parameter free, without a Lévy step. The proposed algorithm is validated using twenty-three standard benchmark test functions. The second part of the paper proposes an efficient face recognition algorithm using ACS, principal component analysis (PCA) and intrinsic discriminant analysis (IDA). The proposed algorithms are named PCA + IDA and ACS–IDA. Interestingly, PCA + IDA offers us a perturbation-free algorithm for dimension reduction, while ACS–IDA is used to find the optimal feature vectors for classification of the face images based on the IDA. For the performance analysis, we use three standard face databases—YALE, ORL, and FERET. A comparison of the proposed method with the state-of-the-art methods reveals the effectiveness of our algorithm.
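The abstract does not give the exact adaptation rule, so the Python sketch below only illustrates the idea of a parameter-free, Lévy-free cuckoo search whose step size is driven by each nest's fitness and the iteration count; the function name `adaptive_cuckoo_search`, the adaptation rule and all parameter values are illustrative assumptions, not the paper's ACS.

```python
import numpy as np

def adaptive_cuckoo_search(obj, bounds, n_nests=25, n_iter=200, seed=0):
    """Minimise obj over a box; the step size shrinks as fitness improves."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    nests = rng.uniform(lo, hi, size=(n_nests, lo.size))
    fit = np.apply_along_axis(obj, 1, nests)
    for t in range(1, n_iter + 1):
        best, worst = fit.min(), fit.max() + 1e-12
        # Hypothetical adaptation rule: nests with poor fitness take larger
        # steps, and all steps decay with the iteration count (no Levy flight).
        step = (1.0 / t) * ((fit - best) / (worst - best))[:, None]
        cand = np.clip(nests + step * rng.uniform(-1, 1, nests.shape) * (hi - lo), lo, hi)
        cand_fit = np.apply_along_axis(obj, 1, cand)
        improved = cand_fit < fit
        nests[improved], fit[improved] = cand[improved], cand_fit[improved]
        # Abandon a fraction of the worst nests, as in the standard CS.
        n_drop = max(1, n_nests // 4)
        worst_idx = np.argsort(fit)[-n_drop:]
        nests[worst_idx] = rng.uniform(lo, hi, size=(n_drop, lo.size))
        fit[worst_idx] = np.apply_along_axis(obj, 1, nests[worst_idx])
    i = fit.argmin()
    return nests[i], fit[i]

# Example: minimise the sphere function in 5 dimensions.
x_best, f_best = adaptive_cuckoo_search(lambda x: float(np.sum(x * x)),
                                        bounds=[(-5, 5)] * 5)
```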

2.
Analysis of networks of queues under the repetitive service blocking mechanism has been presented in this paper. Nodes are connected according to an arbitrary configuration and each node in the networks employs an active queue management (AQM) based queueing policy to guarantee certain quality of service for multiple class external traffic. This buffer management scheme has been implemented using queue thresholds. The use of queue thresholds is a well known technique for network traffic congestion control. The analysis is based on a queue-by-queue decomposition technique where each queue is modelled as a GE/GE/1/N queue with a single server, R (R ≥ 2) distinct traffic classes and N = {N1, N2, …, NR} buffer threshold values per class under the first-come-first-serve (FCFS) service rule. The external traffic is modelled using the generalised exponential (GE) distribution, which can capture the bursty property of network traffic. The analytical solution is obtained using the maximum entropy (ME) principle. The forms of the state and blocking probabilities are analytically established at equilibrium via appropriate mean value constraints. The initial numerical results demonstrate the credibility of the proposed analytical solution.
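As a small illustration of the traffic model, the sketch below samples interarrival times from a generalised exponential (GE) stream using the common two-moment (mean rate, squared coefficient of variation) parameterisation; this convention and the helper name are assumptions and are not quoted from the paper.

```python
import numpy as np

def ge_interarrival_times(rate, scv, n, seed=0):
    """Sample n interarrival times from a generalised exponential (GE) stream.

    Assumed two-moment parameterisation: with tau = 2 / (scv + 1), an arrival
    belongs to a batch with probability 1 - tau (zero gap), otherwise the gap
    is exponential with rate tau * rate.  Mean = 1/rate, SCV = scv.
    """
    rng = np.random.default_rng(seed)
    tau = 2.0 / (scv + 1.0)
    gaps = rng.exponential(1.0 / (tau * rate), size=n)
    batch = rng.random(n) >= tau          # with prob 1 - tau the gap is zero
    gaps[batch] = 0.0
    return gaps

# Bursty traffic: mean rate 10 packets/s, SCV = 9 gives frequent batches.
t = ge_interarrival_times(rate=10.0, scv=9.0, n=100_000)
print(t.mean())   # ~0.1, the mean interarrival time
```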

3.
A systematic algorithm for building integrating factors of the form μ(x, y′), μ(x, y) or μ(y, y′) for second-order ODEs is presented. The algorithm can determine the existence and explicit form of the integrating factors themselves without solving any differential equations, except for a linear ODE in one subcase of the μ(x, y) problem. Examples of ODEs not having point symmetries are shown to be solvable using this algorithm. The scheme was implemented in Maple, in the framework of the ODEtools package and its ODE-solver. A comparison between this implementation and other computer algebra ODE-solvers in tackling non-linear examples from Kamke's book is shown.

4.
Multidimensional analysis and online analytical processing (OLAP) operations require summary information on multidimensional data sets. Most common are aggregate operations along one or more dimensions of numerical data values. Simultaneous calculation of multidimensional aggregates is provided by the Data Cube operator, used to calculate and store summary information on a number of dimensions. This is computed only partially if the number of dimensions is large. Query processing for these applications requires different views of data to gain insight and for effective decision support. Queries may either be answered from a materialized cube in the data cube or calculated on the fly. The multidimensionality of the underlying problem can be represented both in relational and in multidimensional databases, the latter being a better fit when query performance is the criterion for judgment. Relational databases are scalable in size for OLAP and multidimensional analysis, and efforts are ongoing to make their performance acceptable. On the other hand, multidimensional databases have proven to provide good performance for such queries, although they are not very scalable. In this article we address (1) scalability in multidimensional systems for OLAP and multidimensional analysis and (2) integration of data mining with the OLAP framework. We describe our system PARSIMONY, a parallel and scalable infrastructure for multidimensional online analytical processing, used for both OLAP and data mining. Sparsity of data sets is handled by using chunks to store data either as a dense block using multidimensional arrays or as a sparse representation using a bit-encoded sparse structure. Chunks provide a multidimensional index structure for efficient dimension-oriented data accesses, much the same as multidimensional arrays do. Operations within chunks and between chunks are a combination of relational and multidimensional operations, depending on whether the chunk is sparse or dense. Further, we develop parallel algorithms for data mining on the multidimensional cube structure for attribute-oriented association rules and decision-tree-based classification. These take advantage of the data organization provided by the multidimensional data model. Performance results for high dimensional data sets on a distributed memory parallel machine (IBM SP-2) show good speedup and scalability.
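The chunk idea can be illustrated with a minimal sketch that switches between a dense multidimensional array and a bit-encoded sparse layout depending on how full the chunk is; the `Chunk` class, its density threshold and its exact sparse encoding are assumptions for illustration, not PARSIMONY's implementation.

```python
import numpy as np

class Chunk:
    """Store one cube chunk densely or as a bit-encoded sparse structure."""
    def __init__(self, cells, shape, density_threshold=0.4):
        cells = dict(cells)                      # {multi-index: value}
        self.shape = shape
        n_total = int(np.prod(shape))
        if len(cells) / n_total >= density_threshold:
            self.kind = "dense"
            self.data = np.zeros(shape)
            for idx, v in cells.items():
                self.data[idx] = v
        else:
            self.kind = "sparse"
            # One bit per cell marks occupancy; values are kept in index order.
            flat = sorted(np.ravel_multi_index(i, shape) for i in cells)
            self.bitmap = np.zeros(n_total, dtype=bool)
            self.bitmap[flat] = True
            self.values = np.array([cells[np.unravel_index(f, shape)] for f in flat])

    def aggregate(self):
        """Sum over all cells of the chunk, whatever the representation."""
        return self.data.sum() if self.kind == "dense" else self.values.sum()

c = Chunk({(0, 1): 3.0, (2, 2): 5.0}, shape=(4, 4))
print(c.kind, c.aggregate())   # sparse 8.0
```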

5.
In this study, we propose a set of new algorithms to enhance the effectiveness of classification for 5-year survivability of breast cancer patients from a massive data set with imbalanced property. The proposed classifier algorithms are a combination of synthetic minority oversampling technique (SMOTE) and particle swarm optimization (PSO), while integrating some well known classifiers, such as logistic regression, the C5 decision tree (C5) model, and 1-nearest neighbor search. To justify the effectiveness of this new set of classifiers, the g-mean and accuracy indices are used as performance measures; moreover, the proposed classifiers are compared with those from the previous literature. Experimental results show that the hybrid algorithm of SMOTE + PSO + C5 is the best one for 5-year survivability of breast cancer patient classification among all algorithm combinations. We conclude that implementing SMOTE in appropriate searching algorithms such as PSO and classifiers such as C5 can significantly improve the effectiveness of classification for massive imbalanced data sets.
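A minimal sketch of the SMOTE-plus-classifier pipeline and the g-mean measure is given below, assuming the scikit-learn and imbalanced-learn libraries, a decision tree as a stand-in for C5, synthetic data in place of the survivability records, and no PSO parameter search.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic imbalanced data standing in for the survivability records.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = ((X[:, 0] + rng.normal(scale=0.5, size=2000)) > 1.5).astype(int)  # ~10% minority

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minority

clf = DecisionTreeClassifier(random_state=0).fit(X_bal, y_bal)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print("g-mean:", np.sqrt(sensitivity * specificity))
print("accuracy:", (tp + tn) / (tp + tn + fp + fn))
```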

6.
Uneven energy consumption is an inherent problem in wireless sensor networks characterized by multi-hop routing and a many-to-one traffic pattern. Such unbalanced energy dissipation can significantly reduce network lifetime. In this paper, we study the problem of prolonging network lifetime in large-scale wireless sensor networks where a mobile sink gathers data periodically along a predefined path and each sensor node uploads its data to the mobile sink over a multi-hop communication path. By using a greedy policy and dynamic programming, we propose a heuristic topology control algorithm with time complexity O(n(m + n log n)), where n and m are the number of nodes and edges in the network, respectively, and further discuss how to refine our algorithm to satisfy practical requirements such as distributed computing and transmission timeliness. Theoretical analysis and experimental results show that our algorithm is superior to several earlier algorithms for extending network lifetime.

7.
Data partitioning and scheduling is one of the important issues in minimizing the processing time for parallel and distributed computing systems. We consider a single-level tree architecture of the system and the case of the affine communication model, for a general m processor system with n rounds of load distribution. For this case, there exists an optimal activation order, an optimal number of processors m* (m* ≤ m), and an optimal number of rounds of load distribution n* (n* ≤ n), such that the processing time of the entire processing load is a minimum. This is a difficult optimization problem because for a given activation order, we have to first identify the processors that are participating (in the computation process) in every round of load distribution and then obtain the load fractions assigned to them, and the processing time. Hence, in this paper, we propose a real-coded genetic algorithm (RCGA) to find the optimal activation order, the optimal number of processors m* (m* ≤ m), and the optimal number of rounds of load distribution n* (n* ≤ n), such that the processing time of the entire processing load is a minimum. The RCGA employs modified crossover and mutation operators such that the operators always produce a valid solution. Also, we propose different population initialization schemes to improve the convergence. Finally, we present a comparative study with a simple real-coded genetic algorithm and particle swarm optimization to highlight the advantage of the proposed algorithm. The results clearly indicate the effectiveness of the proposed real-coded genetic algorithm.
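For illustration only, the sketch below shows a generic real-coded GA whose crossover and mutation always return feasible (clipped) solutions; the paper's specific encoding of activation orders, m* and n* is not reproduced, and `rcga_minimise` and all its parameters are assumptions.

```python
import numpy as np

def rcga_minimise(cost, bounds, pop_size=40, gens=100, seed=0):
    """Generic real-coded GA sketch: blend (BLX) crossover and Gaussian
    mutation, with every offspring clipped back into the feasible box so
    the operators always yield a valid solution."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(gens):
        fit = np.apply_along_axis(cost, 1, pop)
        parents = pop[np.argsort(fit)[: pop_size // 2]]   # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(-0.5, 1.5, size=a.size)        # blend crossover
            child = w * a + (1 - w) * b
            child += rng.normal(0, 0.05 * (hi - lo))       # Gaussian mutation
            kids.append(np.clip(child, lo, hi))            # repair -> always valid
        pop = np.vstack([parents] + kids)
    fit = np.apply_along_axis(cost, 1, pop)
    return pop[fit.argmin()], fit.min()

best_x, best_f = rcga_minimise(lambda x: float(np.sum((x - 0.3) ** 2)),
                               bounds=[(0, 1)] * 4)
```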

8.
An up to date and accurate aviation emission inventory is a prerequisite for any detailed analysis of aviation emission impact on greenhouse gases and local air quality around airports. In this paper we present an aviation emission inventory using real time air traffic trajectory data. The reported inventory is in the form of a 4D database which provides a resolution of 1° × 1° × 1000 ft for temporal and spatial emission analysis. The inventory is for an ongoing period of six months starting from October 2008 for Australian airspace. In this study we show 6 months of data, with 492,936 flights (inbound, outbound and over-flying). These flights used about 2515.83 kt of fuel and emitted 114.59 kt of HC, 200.95 kt of CO, 45.92 kt of NOx, 7929.89 kt of CO2, and 2.11 kt of SOx. From the spatial analysis of emissions data, we found that the CO2 concentration in some parts of Australia is much higher than in other parts, especially in some major cities. The emission results also show that NOx emission of aviation may have a significant impact on the ozone layer in the upper troposphere, but not in the stratosphere. It is expected that with the availability of this real time aviation emission database, environmental analysts and aviation experts will have an indispensable source of information for making timely decisions regarding expansion of runways, building new airports, applying route charges based on environmentally congested airways, and restructuring air traffic flow to achieve sustainable air traffic growth.
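A minimal sketch of how trajectory samples can be accumulated into such a 4D grid is shown below; the cell key (1° latitude, 1° longitude, 1000-ft altitude band, hour) matches the stated resolution, but the function name, the choice of hour as the time index and the example numbers are illustrative assumptions.

```python
from collections import defaultdict

def add_to_inventory(inventory, lat, lon, alt_ft, time_h, co2_kg):
    """Accumulate emissions of one trajectory sample into a 4D cell keyed by
    (1-degree latitude band, 1-degree longitude band, 1000-ft altitude band,
    hour).  Cell keys and units are illustrative assumptions."""
    cell = (int(lat // 1), int(lon // 1), int(alt_ft // 1000), int(time_h))
    inventory[cell] += co2_kg
    return cell

inventory = defaultdict(float)
# Two samples of one climbing flight near Sydney (fictional numbers).
add_to_inventory(inventory, -33.9, 151.2, 4500, 12, 180.0)
add_to_inventory(inventory, -33.4, 151.6, 12500, 12, 210.0)
print(dict(inventory))
# {(-34, 151, 4, 12): 180.0, (-34, 151, 12, 12): 210.0}
```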

9.
《Computers & Fluids》2006,35(8-9):863-871
Following the work of Lallemand and Luo [Lallemand P, Luo L-S. Theory of the lattice Boltzmann method: acoustic and thermal properties in two and three dimensions. Phys Rev E 2003;68:036706] we validate, apply and extend the hybrid thermal lattice Boltzmann scheme (HTLBE) by a large-eddy approach to simulate turbulent convective flows. For the mass and momentum equations, a multiple-relaxation-time LBE scheme is used while the heat equation is solved numerically by a finite difference scheme. We extend the hybrid model by a Smagorinsky subgrid scale model for both the fluid flow and the heat flux. Validation studies are presented for laminar and turbulent natural convection in a cavity at various Rayleigh numbers up to 5 × 10^10 for Pr = 0.71 using a serial code in 2D and a parallel code in 3D, respectively. Correlations of the Nusselt number are discussed and compared to benchmark data. As an application we simulated forced convection in a building with an inner courtyard at Re = 50 000.
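The Smagorinsky contribution can be sketched as an eddy viscosity computed from the resolved strain rate; the finite-difference helper below is only an illustration, assuming a uniform 2D grid and a constant cs = 0.1, and does not reproduce how the model is embedded in the MRT lattice Boltzmann scheme.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, cs=0.1):
    """Eddy viscosity nu_t = (cs*dx)^2 * |S| on a uniform 2D grid, where
    |S| = sqrt(2 S_ij S_ij) is the strain-rate magnitude."""
    dudx, dudy = np.gradient(u, dx, axis=1), np.gradient(u, dx, axis=0)
    dvdx, dvdy = np.gradient(v, dx, axis=1), np.gradient(v, dx, axis=0)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# Shear flow u = y, v = 0 on a unit square: |S| = 1 everywhere.
n, dx = 64, 1.0 / 63
y = np.linspace(0.0, 1.0, n)
u = np.tile(y[:, None], (1, n))
nu_t = smagorinsky_viscosity(u, np.zeros_like(u), dx)
print(nu_t.mean())        # ~ (0.1 * dx)**2
```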

10.
We develop a theory of Gröbner bases over Galois rings, following the usual formulation for Gröbner bases over finite fields. Our treatment includes a division algorithm, a characterization of Gröbner bases, and an extension of Buchberger's algorithm. One application is towards the problem of decoding alternant codes over Galois rings. To this end we consider the module M = {(a, b) : aS ≡ b mod x^r} of all solutions to the so-called key equation for alternant codes, where S is a syndrome polynomial. In decoding, a particular solution (Σ, Ω) ∈ M is sought satisfying certain conditions, and such a solution can be found in a Gröbner basis of M. Applying techniques introduced in the first part of this paper, we give an algorithm which returns the required solution.

11.
In this paper we consider the following problems: we are given a set of n items {u1, …, un} and a number of unit-capacity bins. Each item ui has a size wi ∈ (0, 1] and a penalty pi ≥ 0. An item can be either rejected, in which case we pay its penalty, or put into one bin under the constraint that the total size of the items in the bin is no greater than 1. No item can be spread into more than one bin. The objective is to minimize the total number of used bins plus the total penalty paid for the rejected items. We call the problem bin packing with rejection penalties, and denote it as BPR. For the on-line BPR problem, we present an algorithm with an absolute competitive ratio of 2.618 while the lower bound is 2.343, and an algorithm with an asymptotic competitive ratio arbitrarily close to 1.75 while the lower bound is 1.540. For the off-line BPR problem, we present an algorithm with an absolute worst-case ratio of 2 while the lower bound is 1.5, and an algorithm with an asymptotic worst-case ratio of 1.5. We also study a closely related bin covering version of the problem. In this case pi means some amount of profit. If an item is rejected, we get its profit, or it can be put into a bin in such a way that the total size of the items in the bin is no smaller than 1. The objective is to maximize the number of covered bins plus the total profit of all rejected items. We call this problem bin covering with rejection (BCR). For the on-line BCR problem, we show that no algorithm can have absolute competitive ratio greater than 0, and present an algorithm with asymptotic competitive ratio 1/2, which is the best possible. For the off-line BCR problem, we also present an algorithm with an absolute worst-case ratio of 1/2 which matches the lower bound.
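To make the problem concrete, the sketch below packs items first-fit and rejects an item whenever its penalty is smaller than its size; this simple heuristic is an assumption for illustration and is not one of the paper's algorithms, so it does not carry the stated worst-case ratios.

```python
def first_fit_with_rejection(items):
    """Simple heuristic sketch for bin packing with rejection penalties:
    reject an item when its penalty is cheaper than the space it would use
    (penalty < size), otherwise place it first-fit."""
    bins, cost = [], 0.0
    for size, penalty in items:
        if penalty < size:
            cost += penalty                     # pay the rejection penalty
            continue
        for b in bins:
            if b["load"] + size <= 1.0:         # first bin with enough room
                b["load"] += size
                break
        else:
            bins.append({"load": size})         # open a new bin
    return len(bins) + cost, bins

total, bins = first_fit_with_rejection([(0.6, 0.2), (0.5, 0.9), (0.5, 0.8), (0.3, 0.1)])
print(total)   # penalties 0.2 + 0.1 paid, two items packed into one bin -> 1.3
```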

12.
Traditional strategies, such as fingerprinting and face recognition, are becoming more and more fraud susceptible. As a consequence, new and more fraud-proof biometric modalities have been considered, one of them being the heartbeat pattern acquired by an electrocardiogram (ECG). While methods for subject identification based on the ECG signal work with signals sampled at high frequencies (>100 Hz), the main goal of this work is to evaluate the use of the ECG signal at low frequencies for such an aim. In this work, the ECG signal is sampled at low frequencies (30 Hz and 60 Hz) and represented by four feature extraction methods available in the literature, which are then fed to a Support Vector Machines (SVM) classifier to perform the identification. In addition, a classification approach based on majority voting using multiple samples per subject is employed and compared to the traditional classification based on the presentation of single samples per subject each time. Considering a database composed of 193 subjects, results show identification accuracies higher than 95% and near to optimality (i.e., 100%) when the ECG signal is sampled at 30 Hz and 60 Hz, respectively, the latter being very close to those obtained when the signal is sampled at 360 Hz (the maximum frequency existing in our database). We also evaluate the impact of: (1) the number of training and testing samples for learning and identification, respectively; (2) the scalability of the biometry (i.e., increment on the number of subjects); and (3) the use of multiple samples for person identification.
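The majority-voting step can be sketched as follows, assuming feature vectors have already been extracted from the ECG windows and using scikit-learn's SVC; the toy data and the helper name are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def identify_by_majority_vote(clf, sample_windows):
    """Predict a subject identity for each of several feature vectors taken
    from the same recording, then return the most frequent label."""
    votes = clf.predict(np.asarray(sample_windows))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[counts.argmax()]

# Toy stand-in for extracted ECG features: 3 subjects, 20 windows each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(20, 8)) for i in range(3)])
y = np.repeat([0, 1, 2], 20)
clf = SVC(kernel="rbf").fit(X, y)

test_windows = rng.normal(loc=1, scale=0.5, size=(5, 8))   # windows from subject 1
print(identify_by_majority_vote(clf, test_windows))        # -> 1 (majority vote)
```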

13.
In general, to achieve high compression efficiency, a 2D image or a 2D block is used as the compression unit. However, 2D compression requires a large memory size and long latency when input data are received in a raster scan order, which is common in existing TV systems. To address this problem, a 1D compression algorithm that uses a 1D block as the compression unit is proposed. 1D set partitioning in hierarchical trees (SPIHT) is an effective compression algorithm that fits the encoded bit length to the target bit length precisely. However, 1D SPIHT can have low compression efficiency because the 1D discrete wavelet transform (DWT) cannot make use of the redundancy in the vertical direction. This paper proposes two schemes for improving compression efficiency in 1D SPIHT. First, a hybrid coding scheme that uses different coding algorithms for the low and high frequency bands is proposed. For the low-pass band, differential pulse code modulation–variable length coding (DPCM–VLC) is adopted, whereas 1D SPIHT is used for the high-pass band. Second, a scheme that determines the target bit length of each block by using spatial correlation with a minimal increase in complexity is proposed. Experimental results show that the proposed algorithm improves the average peak signal-to-noise ratio (PSNR) by 2.97 dB compared with the conventional 1D SPIHT algorithm. With the hardware implementation, the throughputs of both the encoder and decoder designs are 6.15 Gbps, and the gate counts of the encoder and decoder designs are 42.8 K and 57.7 K, respectively.
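The DPCM part of the hybrid scheme can be sketched in a few lines: the low-pass band is stored as a first value plus neighbour differences, leaving only small residuals for the (omitted) variable-length coder. The helper names and the example band are assumptions for illustration.

```python
import numpy as np

def dpcm_encode(lowpass_band):
    """DPCM of low-pass wavelet coefficients: keep the first value and the
    differences between neighbours, which are small and cheap to entropy-code."""
    coeffs = np.asarray(lowpass_band, dtype=int)
    return coeffs[0], np.diff(coeffs)

def dpcm_decode(first, residuals):
    """Invert the DPCM by a cumulative sum over the residuals."""
    return np.concatenate(([first], first + np.cumsum(residuals)))

band = [118, 120, 121, 119, 119, 122]
first, res = dpcm_encode(band)
print(res)                       # [ 2  1 -2  0  3]  -- small residuals
print(dpcm_decode(first, res))   # [118 120 121 119 119 122]
```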

14.
15.
Network-on-Chip (NoC) architecture has been widely used in many multi-core system designs. To improve the communication efficiency and the bandwidth utilization of NoC for various applications, we first propose a table-based algorithm for identifying the dominant flows at runtime. Then a two-layer NoC architecture with an application-driven bandwidth allocation scheme is presented, which is capable of identifying heavy-load dataflows and dynamically reconfiguring point-to-point (P2P) connections to optimize the heavy-load traffic. Experimental results reveal that our design (8 × 8 mesh NoC) achieves a 28.5% performance improvement and a 25.9% power consumption saving compared to the baseline NoC.
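A minimal sketch of table-based dominant-flow identification is given below: a counter per (source, destination) pair and a traffic-share threshold. The table size, threshold and eviction policy are assumptions, not the paper's design.

```python
from collections import Counter

class FlowTable:
    """Count packets per (source, destination) pair and report the pairs
    whose share of traffic exceeds a threshold."""
    def __init__(self, threshold=0.10, capacity=64):
        self.counts = Counter()
        self.threshold = threshold
        self.capacity = capacity

    def record(self, src, dst):
        self.counts[(src, dst)] += 1
        if len(self.counts) > self.capacity:          # drop the rarest entry
            self.counts.pop(min(self.counts, key=self.counts.get))

    def dominant_flows(self):
        total = sum(self.counts.values())
        return [f for f, c in self.counts.items() if c / total >= self.threshold]

table = FlowTable()
for _ in range(90):
    table.record((0, 0), (7, 7))     # heavy flow across the 8x8 mesh
for node in range(10):
    table.record((1, 1), (node, 0))  # light background traffic
print(table.dominant_flows())        # [((0, 0), (7, 7))]
```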

16.
We present a parallel algorithm for solving the next element search problem on a set of line segments, using a BSP-like model referred to as the coarse grained multicomputer (CGM). The algorithm requires O(1) communication rounds (h-relations with h = O(n/p)), O((n/p) log n) local computation, and O((n/p) log p) memory per processor, assuming n/p ≥ p. Our result implies solutions to the point location, trapezoidal decomposition, and polygon triangulation problems. A simplified version for axis-parallel segments requires only O(n/p) memory per processor, and we discuss an implementation of this version. As in a previous paper by Devillers and Fabri (Int. J. Comput. Geom. Appl. 6 (1996), 487–506), our algorithm is based on a distributed implementation of segment trees, which are of size O(n log n). This paper improves on op. cit. in several ways: (1) It studies the more general next element search problem which also solves, e.g., planar point location. (2) The algorithms require only O((n/p) log n) local computation instead of O(log p · (n/p) log n). (3) The algorithms require only O((n/p) log p) local memory instead of O((n/p) log n).

17.
《Parallel Computing》2013,39(10):615-637
A key point for the efficient use of large grid systems is the discovery of resources, and this task becomes more complicated as the size of the system grows. In this case, large amounts of information on the available resources must be stored and kept up-to-date along the system so that it can be queried by users to find resources meeting specific requirements (e.g. a given operating system or available memory). Thus, three tasks must be performed: (1) information on resources must be gathered and processed, (2) such processed information has to be disseminated over the system, and (3) upon users' requests, the system must be able to discover resources meeting some requirements using the processed information. This paper presents a new technique for the discovery of resources in grids which can be used in the case of multi-attribute (e.g. {OS = Linux & memory = 4 GB}) and range queries (e.g. {50 GB < disk-space < 100 GB}). This technique relies on the use of content summarisation techniques to perform the first task mentioned before and addresses the main drawback found in proposals from the literature that use summarisation. This drawback is related to scalability, and is tackled by means of using Peer-to-Peer (P2P) techniques, namely Routing Indices (RIs), to perform the second and third tasks. Another contribution of this work is a performance evaluation conducted by means of simulations of the EU DataGRID Testbed which shows the usefulness of this approach compared to other proposals from the literature. More specifically, the technique presented in this paper improves on scalability and produces good performance. Besides, the parameters involved in the summary creation have been tuned and the most suitable values for the presented test case have been found.
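A minimal sketch of the routing-index idea is shown below, assuming per-neighbour min/max summaries of a numeric attribute in place of the paper's content summaries; the class, method and attribute names are illustrative.

```python
class RoutingIndex:
    """For each neighbour, keep a summary of the resources reachable through
    it (here just min/max of numeric attributes) and forward a range query
    only to neighbours whose summary could possibly satisfy it."""
    def __init__(self):
        self.summaries = {}          # neighbour -> {attr: (min, max)}

    def update(self, neighbour, attr, value):
        lo, hi = self.summaries.setdefault(neighbour, {}).get(attr, (value, value))
        self.summaries[neighbour][attr] = (min(lo, value), max(hi, value))

    def candidates(self, attr, lo, hi):
        """Neighbours whose summarised range overlaps the query [lo, hi]."""
        return [n for n, s in self.summaries.items()
                if attr in s and s[attr][0] <= hi and s[attr][1] >= lo]

ri = RoutingIndex()
ri.update("peerA", "disk_gb", 20)
ri.update("peerA", "disk_gb", 40)
ri.update("peerB", "disk_gb", 80)
ri.update("peerB", "disk_gb", 500)
print(ri.candidates("disk_gb", 50, 100))   # ['peerB'] -- only its range [80, 500] overlaps
```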

18.
We study the Weyl closure Cl(L) = K(x)⟨∂⟩L ∩ D for an operator L of the first Weyl algebra D = K⟨x, ∂⟩. We give an algorithm to compute Cl(L) and we describe its initial ideal under the order filtration. Our main application is an algorithm for constructing a Jordan–Hölder series for a holonomic D-module and a formula for its length. Using the closure, we also reproduce a result of Strömbeck (1978), who described the initial ideals of left ideals of D under the order filtration, and a result of Cannings and Holland (1994), who described the isomorphism classes of right ideals of D.

19.
Carboxylesterases are ubiquitous enzymes with important physiological, industrial and medical applications such as the synthesis and hydrolysis of stereospecific compounds, including the metabolic processing of drugs and antimicrobial agents. Here, we have performed molecular dynamics simulations of carboxylesterase from the hyperthermophilic bacterium Geobacillus stearothermophilus (GsEst) for 10 ns each at five different temperatures, namely 300 K, 343 K, 373 K, 473 K and 500 K. Profiles of the root mean square fluctuation (RMSF) identify thermostable and thermosensitive regions of GsEst. Unfolding of GsEst initiates at the thermosensitive α-helices and proceeds to the thermostable β-sheets. Five ion-pairs have been identified as critical ion-pairs for thermostability and are maintained stably throughout the higher temperature simulations. A detailed investigation of the active site residues of this enzyme suggests that the geometry of this site is well preserved up to 373 K. Furthermore, the hydrogen bonds between Asp188 and His218 of the active site are stably maintained at higher temperatures, imparting stability to this site. Radial distribution functions (RDFs) show a similar pattern of solvent ordering and water penetration around active site residues up to 373 K. Principal component analysis suggests that the motion of the entire protein as well as the active site is similar at 300 K, 343 K and 373 K. Our study may help to identify the factors responsible for the thermostability of GsEst and may thereby aid efforts to design enzymes with enhanced thermostability.
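The RMSF profile mentioned above is a standard per-residue quantity; a minimal NumPy sketch is given below, assuming the trajectory has already been aligned to a reference structure and reduced to one coordinate per residue.

```python
import numpy as np

def rmsf_per_residue(traj):
    """Root mean square fluctuation per residue from an aligned trajectory of
    shape (n_frames, n_residues, 3): RMSF_i = sqrt(mean_t |x_i(t) - <x_i>|^2)."""
    mean_pos = traj.mean(axis=0)                         # average structure
    disp2 = np.sum((traj - mean_pos) ** 2, axis=2)       # squared displacement
    return np.sqrt(disp2.mean(axis=0))                   # one value per residue

# Toy trajectory: residue 0 is rigid, residue 1 fluctuates strongly.
rng = np.random.default_rng(0)
traj = np.zeros((100, 2, 3))
traj[:, 1, :] = rng.normal(scale=2.0, size=(100, 3))
print(rmsf_per_residue(traj))   # ~[0.0, 3.5]  (about sqrt(3)*2 for residue 1)
```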

20.
Let L = K(α) be an Abelian extension of degree n of a number field K, given by the minimal polynomial of α over K. We describe an algorithm for computing the local Artin map associated with the extension L / K at a finite or infinite prime v of K. We apply this algorithm to decide if a nonzero a ∈ K is a norm from L, assuming that L / K is cyclic.
