Similar references (20 results)
1.
《Computer Networks》2003,41(1):73-88
To provide real-time service or engineer constraint-based paths, networks require the underlying routing algorithm to find low-cost paths that satisfy given quality-of-service constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, and several heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem and present a fast algorithm for finding a near-optimal solution. This algorithm, called delay-cost-constrained routing (DCCR), is a variant of the k-shortest-path algorithm. DCCR uses a new adaptive path weight function, together with an additional constraint imposed on the path cost, to restrict the search space, so it can return a near-optimal solution in a very short time. Furthermore, we use a variant of the Lagrangian relaxation method proposed by Handler and Zang [Networks 10 (1980) 293] to further reduce the search space via a tighter bound on the path cost, making the algorithm more accurate and even faster. We call this improved algorithm search space reduction + DCCR (SSR + DCCR). Extensive simulations confirm that SSR + DCCR performs very well compared with the optimal but very expensive solution.
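The core idea of Lagrangian relaxation for this problem can be sketched as follows: fold the delay constraint into the edge weight as cost + λ·delay, run an ordinary shortest-path search for several multipliers, and keep the cheapest path that meets the delay bound. This is an illustrative toy, not the paper's SSR + DCCR algorithm; the multiplier sweep, function names, and graph representation are assumptions.

```python
import heapq

def dijkstra(adj, src, dst, weight):
    """Shortest path under an aggregated edge weight; adj: {u: {v: (cost, delay)}}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, (c, dl) in adj[u].items():
            nd = d + weight(c, dl)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in prev and dst != src:
        return None
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def path_metrics(adj, path):
    """Total (cost, delay) along a path."""
    cost = delay = 0.0
    for u, v in zip(path, path[1:]):
        c, d = adj[u][v]
        cost, delay = cost + c, delay + d
    return cost, delay

def delay_constrained_cheapest(adj, src, dst, delay_bound,
                               lambdas=(0.0, 0.5, 1.0, 2.0, 5.0)):
    """Sweep the multiplier; keep the cheapest path whose delay meets the bound."""
    best_path, best_cost = None, float("inf")
    for lam in lambdas:
        path = dijkstra(adj, src, dst, lambda c, d: c + lam * d)
        if path is None:
            continue
        cost, delay = path_metrics(adj, path)
        if delay <= delay_bound and cost < best_cost:
            best_path, best_cost = path, cost
    return best_path, best_cost
```

On a toy graph with a cheap-but-slow route and an expensive-but-fast one, a small λ selects the former and a larger λ the latter; the sweep recovers whichever is feasible.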

2.
3.
《Computer Networks》2007,51(11):3172-3196
A search-based heuristic is presented for the optimisation of communication networks where traffic forecasts are uncertain and the problem is NP-complete. While algorithms such as genetic algorithms (GA) and simulated annealing (SA) are often used for this class of problem, this work applies a combination of newer optimisation techniques, specifically fast local search (FLS), an improved hill-climbing method, and guided local search (GLS), which allows escape from local minima. The GLS + FLS combination is compared with optimised GA and SA approaches. In terms of implementation, the parameterisation of the GLS + FLS technique is significantly simpler than that for GA or SA. The self-regularisation feature of the GLS + FLS approach also provides a distinctive advantage over the other techniques, which require manual parameterisation. To compare numerical performance, the three techniques were tested over a number of network sets varying in size, number of switch-circuit demands (network bandwidth demands), and level of uncertainty on those demands. The results show that GLS + FLS outperforms the GA and SA techniques in both solution quality and optimisation speed and, even more importantly, requires significantly less parameterisation time.
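The mechanism that lets GLS escape local minima can be sketched generically: when local search stalls, penalise the solution features with maximal utility u_i = c_i / (1 + p_i), and search on an augmented cost that adds λ·Σp_i. This is a minimal sketch of the standard GLS penalty step under assumed interfaces, not the paper's network-specific implementation.

```python
def update_penalties(active_features, feature_cost, penalties):
    """GLS step: at a local optimum, penalise the active feature(s) with
    maximal utility u_i = c_i / (1 + p_i); returns the penalised indices."""
    utils = {i: feature_cost[i] / (1.0 + penalties[i]) for i in active_features}
    top = max(utils.values())
    chosen = sorted(i for i, u in utils.items() if u == top)
    for i in chosen:
        penalties[i] += 1
    return chosen

def augmented_cost(raw_cost, active_features, penalties, lam):
    """Augmented objective g(s) = f(s) + lam * sum of penalties of active features."""
    return raw_cost + lam * sum(penalties[i] for i in active_features)
```

Repeated calls gradually flatten the attraction basin of a local optimum, which is why GLS needs little manual parameterisation beyond λ.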

4.
3-D Networks-on-Chip (NoCs) have been proposed as a potent solution to both the interconnection and design-complexity problems facing future System-on-Chip (SoC) designs. In this paper, two topology-aware multicast routing algorithms for 3-D NoCs are proposed: Multicasting XYZ (MXYZ) and Alternative XYZ (AL + XYZ). In essence, MXYZ is a simple dimension-order multicast routing algorithm that targets 3-D NoC systems built upon regular topologies. To support multicast routing in irregular regions, AL + XYZ can be applied: an alternative output channel is sought to forward/replicate packets whenever the output channel determined by MXYZ is not available. To evaluate the performance of MXYZ and AL + XYZ, extensive experiments have been conducted comparing them against a path-based multicast routing algorithm and an irregular-region-oriented multiple-unicast routing algorithm, respectively. The experimental results confirm that the proposed MXYZ and AL + XYZ schemes have lower latency and power consumption than the other two routing algorithms, making them more suitable for supporting multicasting in 3-D NoC systems. In addition, the hardware implementation cost of AL + XYZ is shown to be quite modest.
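The dimension-order (XYZ) base of MXYZ-style routing can be sketched simply: correct the X coordinate first, then Y, then Z, one hop at a time. The coordinate-tuple representation and function names below are illustrative assumptions; the multicast replication logic of MXYZ is not shown.

```python
def xyz_next_hop(cur, dst):
    """One dimension-order routing step on a 3-D mesh: fix X, then Y, then Z."""
    for axis in range(3):
        if cur[axis] != dst[axis]:
            step = 1 if dst[axis] > cur[axis] else -1
            nxt = list(cur)
            nxt[axis] += step
            return tuple(nxt)
    return cur  # already at the destination

def xyz_route(src, dst):
    """Full deterministic hop sequence from src to dst (excluding src)."""
    hops, cur = [], src
    while cur != dst:
        cur = xyz_next_hop(cur, dst)
        hops.append(cur)
    return hops
```

Because the order of corrected dimensions is fixed, the route is deterministic and deadlock-free on a regular mesh, which is what makes the algorithm cheap in hardware.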

5.
The grouping strategy exactly specifies the form of the covariance matrix and is therefore essential. Most 2DPCA methods use the original 2D image matrices to form the covariance matrix, which means the strategy is to group the random variables by row or column of the input image. Because of their grouping strategies, these methods have two main drawbacks. First, 2DPCA and some of its variants, such as A2DPCA, DiaPCA and MatPCA, preserve only the covariance information between the elements of these groups; that is, they discard some covariance information that PCA preserves and that can be useful for recognition. Second, all the existing methods suffer from relatively high intra-group correlation, since the random variables in a row, column, or block are closely located and highly correlated. To overcome these drawbacks, we propose a novel grouping strategy named the cross grouping strategy, which focuses on reducing the redundancy among the row and column vectors of the image matrix. In doing so, the algorithm completely preserves the covariance information of PCA between local geometric structures in the image matrix, which is only partially maintained in 2DPCA and its variants. Moreover, in the proposed method the intra-group correlation is weaker than in 2DPCA and its variants because the random variables spread over the whole face image. These properties make the proposed algorithm superior to 2DPCA and its variants. To achieve this, the image cross-covariance matrix is calculated as the summation of the outer products of the column and row vectors of all images. The singular value decomposition (SVD) is then applied to the image cross-covariance matrix, and its right and left singular vectors are used as the optimal projective vectors.
To further reduce the dimension, LDA is applied to the feature space of the proposed method (proposed method + LDA). Exhaustive experimental results demonstrate that the proposed grouping strategy for 2DPCA is superior to 2DPCA, its aforementioned variants and PCA, and that the proposed method outperforms bi-directional PCA + LDA.
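The cross-covariance + SVD pipeline can be sketched as follows. The construction below, summing the outer product of each image's column-sum and row-sum vectors after mean removal, is one plausible reading of the abstract's "summation of the outer products of the column and the row vectors", not the paper's exact formula; all names are illustrative.

```python
import numpy as np

def cross_covariance(images):
    """Assumed sketch: accumulate outer products of the (mean-removed)
    column-sum and row-sum vectors of each m-by-n image."""
    m, n = images[0].shape
    C = np.zeros((m, n))
    for A in images:
        A = A - A.mean()
        col = A.sum(axis=1)            # length m
        row = A.sum(axis=0)            # length n
        C += np.outer(col, row)        # m-by-n contribution
    return C

def projectors(C, k):
    """Left and right singular vectors of C as the projection bases."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k], Vt[:k].T

def project(A, U, V):
    """Bilinear feature extraction: U^T A V gives a k-by-k feature matrix."""
    return U.T @ A @ V
```

The bilinear projection U^T A V is the usual way both singular-vector bases are applied to reduce an image to a small feature matrix.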

6.
In this study, we propose a set of new algorithms to enhance the effectiveness of classification for 5-year survivability of breast cancer patients on a massive, imbalanced data set. The proposed classifiers combine the synthetic minority oversampling technique (SMOTE) and particle swarm optimization (PSO) with well-known classifiers such as logistic regression, the C5 decision tree (C5) model, and 1-nearest-neighbor search. To assess the effectiveness of this new set of classifiers, the g-mean and accuracy indices are used as performance indexes, and the proposed classifiers are compared with results from previous studies. Experimental results show that the hybrid SMOTE + PSO + C5 algorithm is the best of all combinations for 5-year survivability classification of breast cancer patients. We conclude that embedding SMOTE in appropriate search algorithms such as PSO and classifiers such as C5 can significantly improve the effectiveness of classification for massive imbalanced data sets.
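The SMOTE component can be sketched in a few lines: each synthetic minority sample is a random interpolation between an existing minority point and one of its k nearest minority neighbours. This is a minimal sketch of standard SMOTE, not the paper's SMOTE + PSO + C5 hybrid; names and defaults are assumptions.

```python
import random

def smote(minority, n_new, k=3, rng=None):
    """Generate n_new synthetic points by interpolating each chosen minority
    sample toward one of its k nearest minority neighbours."""
    rng = rng or random.Random(0)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(x, p))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic
```

Because every synthetic point lies on a segment between two minority samples, oversampling stays inside the minority region instead of duplicating points.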

7.
This paper presents the results of a comparative study whose objective is to identify the most effective and efficient way of applying a local search method embedded in a hybrid algorithm. The hybrid metaheuristic employed in this study is called “DE–HS–HJ” because it comprises two cooperative metaheuristic algorithms, differential evolution (DE) and harmony search (HS), and one local search (LS) method, Hooke–Jeeves (HJ) direct search. Eighteen different ways of using HJ local search were implemented, and all of them were evaluated on 19 problems in terms of six performance indices covering both accuracy and efficiency. Statistical analyses were conducted to determine the significance of the performance differences. The test results show that the best three LS application strategies are: applying local search to every generated solution with a specified probability and also to each newly updated solution (NUS + ESP); applying local search to every generated solution with a specified probability (ESP); and applying local search to every generated solution with a specified probability and also to the updated current global best solution (EUGbest + ESP). ESP is the best local search application strategy in terms of success rate, and integrating it with NUS further improves overall performance. EUGbest + ESP is the most efficient and also achieves a high level of accuracy (fourth place in terms of success rate, with an average above 0.9).
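An NUS + ESP-style application rule can be sketched generically: refine every generated solution with a fixed probability, and always refine a solution that improves on the incumbent best. The interface below (float solutions, a fitness callable, a local-search callable) is an illustrative assumption, not the DE–HS–HJ implementation.

```python
import random

def nus_esp_step(population, fitness, local_search, prob, best_val, rng=None):
    """Apply local search to each solution with probability `prob` (ESP) and
    unconditionally to any solution beating the current best (NUS)."""
    rng = rng or random.Random(42)
    out = []
    for s in population:
        if rng.random() < prob or fitness(s) < best_val:
            s = local_search(s)
        out.append(s)
        best_val = min(best_val, fitness(s))
    return out, best_val
```

With `prob = 0` only the newly-updated-solution rule fires, which makes the behaviour easy to trace by hand.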

8.
A long positioning range and a high first natural frequency are the two most important quality responses of a compliant focus positioning platform (CFPP). This paper develops a hybrid Taguchi-cuckoo search (HTCS) algorithm to optimize both quality responses simultaneously. The CFPP is designed using flexure hinges, whose length, width, and thickness are the design variables. Taguchi's L16 orthogonal array is used to establish the experimental layout, and the S/N ratio of each response is computed. Analysis of variance (ANOVA) is performed to investigate the effect of the design parameters on the quality responses, and its results are then used to limit the search space of the design parameters, which serves as the initial population for the cuckoo search meta-heuristic. The results showed that the HTCS algorithm is more effective than DE, GA, PSO, AEDE, and PSOGSA. The resulting CFPP achieves a long positioning range of 188.36 μm and a high frequency response of 284.06 Hz. The proposed HTCS approach can effectively optimize the multiple objectives of the CFPP and would be a useful technique for related optimization problems.
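Since both responses here are larger-the-better quantities, the Taguchi S/N ratio takes the standard form S/N = −10·log10((1/n)·Σ 1/y_i²). A one-function sketch (the function name is illustrative):

```python
import math

def sn_larger_better(ys):
    """Taguchi signal-to-noise ratio, larger-the-better form:
    S/N = -10 * log10((1/n) * sum(1 / y_i^2))."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))
```

For example, n replicate measurements of 10.0 give S/N = 20 dB; larger, more consistent responses raise the ratio.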

9.
10.
A new architecture for intelligent audio emotion recognition is proposed in this paper. It fully utilizes both prosodic and spectral features, has two main parallel paths, and can recognize six emotions. Path 1 is designed from intensive analysis of different prosodic features; significant prosodic features are identified to differentiate emotions. Path 2 is designed from analysis of spectral features: extraction of Mel-Frequency Cepstral Coefficient (MFCC) features is followed by Bi-directional Principal Component Analysis (BDPCA), Linear Discriminant Analysis (LDA) and Radial Basis Function (RBF) neural classification. This path has three parallel BDPCA + LDA + RBF sub-paths, each handling two emotions. Fusion modules are also proposed for weight assignment and decision making. The performance of the proposed architecture is evaluated on the eNTERFACE’05 and RML databases. Simulation results and comparisons reveal the good performance of the proposed recognizer.

11.
This paper presents an automatic diagnosis system for detecting breast cancer based on association rules (AR) and a neural network (NN). AR is used to reduce the dimension of the breast cancer database, and the NN is used for intelligent classification. The performance of the proposed AR + NN system is compared with that of an NN-only model. The dimension of the input feature space is reduced from nine to four by using AR. In the test stage, 3-fold cross-validation was applied to the Wisconsin breast cancer database to evaluate the proposed system's performance. The correct classification rate of the proposed system is 95.6%. This research demonstrates that AR can be used to reduce the dimension of the feature space, and that the proposed AR + NN model can be used to obtain fast automatic diagnostic systems for other diseases.

12.
Protein thermostability information is closely linked to the commercial production of many biomaterials. Recent work has shown that amino acid composition, special sequence patterns, hydrogen bonds, disulfide bonds, salt bridges and so on are of considerable importance to thermostability. In this study, we present a system that integrates these various factors to predict protein thermostability, analyzing the features of proteins in the PGTdb. We consider both structure and sequence features, and correlation coefficients are incorporated into the feature selection algorithm. Machine learning algorithms are then used to develop identification systems, and the performances of the different algorithms are compared. Two features, (E + F + M + R)/residue and charged/non-charged, are found to be critical to the thermostability of proteins. Although the combined sequence-and-structure models achieve higher accuracy, sequence-only models provide sufficient accuracy for sequence-only thermostability prediction.
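The two highlighted features are straightforward to compute from a sequence. The sketch below assumes the usual one-letter amino acid codes and takes D, E, K, R, H as the charged residues; the exact charged set used in the study is not specified in the abstract, so that choice is an assumption.

```python
def thermostability_features(seq):
    """Return ((E+F+M+R)/residue, charged/non-charged) for a protein sequence.
    Charged residues are assumed here to be D, E, K, R, H."""
    seq = seq.upper()
    efmr = sum(seq.count(a) for a in "EFMR") / len(seq)
    charged = sum(seq.count(a) for a in "DEKRH")
    return efmr, charged / (len(seq) - charged)
```

For instance, the toy sequence "EEKKAAAA" has two E residues out of eight and an equal split of charged and non-charged residues.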

13.
We investigate a statistical model for integrating narrowband cues in speech. The model is inspired by two ideas in human speech perception: (i) Fletcher’s hypothesis (1953) that independent detectors, working in narrow frequency bands, account for the robustness of auditory strategies, and (ii) Miller and Nicely’s analysis (1955) that perceptual confusions in noisy bandlimited speech are correlated with phonetic features. We apply the model to detecting the phonetic feature [±sonorant], which distinguishes vowels, approximants, and nasals (sonorants) from stops, fricatives, and affricates (obstruents). The model is represented by a multilayer probabilistic network whose binary hidden variables indicate sonorant cues from different parts of the frequency spectrum. We derive the Expectation-Maximization algorithm for estimating the model’s parameters and evaluate its performance on clean and corrupted speech.

14.
Various sensory and control signals in a Heating, Ventilation and Air Conditioning (HVAC) system are closely interrelated, which gives rise to severe redundancies between the original signals. These redundancies may cripple the generalization capability of an automatic fault detection and diagnosis (AFDD) algorithm. This paper proposes an unsupervised feature selection approach and its application to AFDD in an HVAC system. Using Ensemble Rapid Centroid Estimation (ERCE), the important features are automatically selected from the original measurements based on the relative entropy between the low- and high-frequency features. The material used is the experimental HVAC fault data from the ASHRAE-1312-RP datasets, containing a total of 49 days of faults of various types and severities. The features selected using ERCE (median normalized mutual information (NMI) = 0.019) achieved the least redundancy compared with those selected using manual selection (median NMI = 0.0199), Complete Linkage (median NMI = 0.1305), Evidence Accumulation K-means (median NMI = 0.04) and Weighted Evidence Accumulation K-means (median NMI = 0.048). The effectiveness of the feature selection method is further investigated using two well-established time-sequence classification algorithms: (a) a Nonlinear Auto-Regressive Neural Network with eXogenous inputs and distributed time delays (NARX-TDNN) and (b) Hidden Markov Models (HMM). Weighted average sensitivity and specificity higher than 99% and 96% for NARX-TDNN, and higher than 98% and 86% for HMM, are observed. The proposed feature selection algorithm could potentially be applied to other model-based systems to improve fault detection performance.

15.
Dynamic time-linkage optimization problems (DTPs) are a special class of dynamic optimization problems (DOPs) with the feature of time-linkage: decisions taken now can influence the problem's future states. Although DTPs are common in practice, they have received little attention from the field of evolutionary optimization, where prediction is the major approach to solving them. However, existing studies have not addressed the situation where the prediction is unreliable in the complete Black-Box Optimization (BBO) case. In this paper, the prediction approach EA + predictor, proposed by Bosman, is improved to handle such situations. A stochastic-ranking selection scheme based on prediction accuracy is designed to improve EA + predictor under unreliable prediction, where the prediction accuracy is based on the rank of the individuals rather than on their fitness. Experimental results show that, compared with the original prediction approach, the performance of the improved algorithm is competitive.

16.
In this paper, we propose a method for solving constrained optimization problems using interval analysis combined with particle swarm optimization. A set-inversion-via-interval-analysis algorithm is used to handle the constraints, reducing constrained optimization to a quasi-unconstrained problem. The algorithm can detect empty search spaces, preventing useless executions of the optimization process. To improve computational efficiency, a space-cleaning algorithm removes solutions that are certainly not optimal, so the search space becomes smaller at each step of the optimization procedure. After this pre-processing, a modified particle swarm optimization algorithm is applied to the reduced search space to find the global optimum. The efficiency of the proposed approach is demonstrated through comprehensive experimentation involving 100 000 runs on a set of well-known benchmark constrained engineering design problems. The computational efficiency of the new method is quantified by comparing its results with those of other PSO variants from the literature.

17.
18.
The cost of testing activities is a major portion of the total cost of software. In testing, generating test data is very important because the efficiency of testing is highly dependent on the data used in this phase. In search-based software testing, soft computing algorithms explore test data in order to maximize a coverage metric, which can be treated as an optimization problem. In this paper, we employed several meta-heuristics (Artificial Bee Colony, Particle Swarm Optimization, Differential Evolution and Firefly algorithms) and the Random Search algorithm to solve this optimization problem. First, the dependency of the algorithms on the values of their control parameters was analyzed and suitable values were recommended. The algorithms were then compared on various fitness functions (path-based, dissimilarity-based and approximation level + branch distance), because the fitness function affects the behaviour of the algorithms in the search space. Results showed that meta-heuristics can be used effectively for hard problems and large search spaces, and that the approximation level + branch distance fitness function generally guides the algorithms accurately.
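The approximation level + branch distance fitness can be sketched with the classic Tracey-style distance rules and the usual normalisation norm(d) = d / (d + 1). Only a few relational operators are covered, and the constant k = 1 is the conventional default; this is an illustration of the general scheme, not the paper's implementation.

```python
def branch_distance(op, lhs, rhs, k=1.0):
    """Tracey-style branch distance for a condition `lhs op rhs` (sketch)."""
    diff = lhs - rhs
    if op == "==":
        return 0.0 if diff == 0 else abs(diff) + k
    if op == "<":
        return 0.0 if diff < 0 else diff + k
    if op == "<=":
        return 0.0 if diff <= 0 else diff + k
    raise ValueError("unsupported operator: " + op)

def fitness(approach_level, op, lhs, rhs):
    """Approximation level + normalised branch distance, norm(d) = d / (d + 1)."""
    d = branch_distance(op, lhs, rhs)
    return approach_level + d / (d + 1.0)
```

Normalising the distance into [0, 1) guarantees that getting one control-dependent branch closer (lower approach level) always dominates any change in branch distance.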

19.
Quantification of pavement crack data is one of the most important criteria in determining optimum pavement maintenance strategies. Multi-resolution techniques such as wavelet decompositions provide very good analytical tools for different scales of pavement analysis and distress classification. This paper presents an automatic diagnosis system for detecting and classifying pavement crack distress based on the Wavelet–Radon transform (WR) and Dynamic Neural Network (DNN) threshold selection. The proposed system combines feature extraction using WR with classification using a neural network, and its performance is compared with that of a static neural network (SNN). In the test stage, the proposed method was applied to a database of pavement images to evaluate system performance. The correct classification rate (CCR) of the proposed system is over 99%. This research demonstrates that the WR + DNN method can be used efficiently for fast automatic pavement distress detection and classification. The details of the image processing technique and the characteristics of the system are also described.

20.
In this paper, a new mathematical geometric model of spiral triangular wire strands with constructions of (3 + 9) and (3 + 9 + 15) wires is proposed, and an accurate 3D solid model of the two-layered triangular strand, used for finite element analysis, is presented. The geometric model fully considers the spatial configuration of the individual wires in the strand. The three-dimensional curve geometry of the wire axes in the individual layers of the triangular strand consists of straight linear and helical segments. The derived mathematical representation of this curve takes the form of parametric equations with variable input parameters, which facilitates determination of the centreline of an arbitrary circular wire of right- and left-hand-lay triangular one- and two-layered strands. The derived geometric equations were used to generate accurate 3D geometric and computational strand models, and the correctness of the parametric equations and the generated strand models was checked by visualization. The 3D computational model was used for a finite element analysis of the behaviour of the two-layered triangular strand subjected to tension loading. Illustrative examples highlight the benefits of the proposed geometric parametric equations and computational modelling procedures using the finite element method.
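For the helical segments, a generic wire-centreline parameterisation can be sketched as a circular helix of radius r and lay length p, with left-hand lay obtained by mirroring one transverse coordinate. This is the textbook helix form, not the paper's full straight-plus-helical segment equations; the function name and sign convention are assumptions.

```python
import math

def wire_centreline(r, p, theta, right_hand=True):
    """Point on a helical wire centreline: radius r, lay length p (axial
    advance per full turn), parameter theta in radians. Left-hand lay
    mirrors the y-coordinate."""
    s = 1.0 if right_hand else -1.0
    return (r * math.cos(theta),
            s * r * math.sin(theta),
            p * theta / (2.0 * math.pi))
```

Sweeping theta traces the wire axis; after theta = 2π the point has advanced exactly one lay length p along the strand axis.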
