Similar Documents
20 similar documents found (search time: 15 ms)
1.
Incomplete data are often encountered in data sets used in clustering problems, and inappropriate treatment of incomplete data can significantly degrade clustering performance. In view of the uncertainty of missing attributes, we put forward an interval representation of missing attributes based on nearest-neighbor information, named the nearest-neighbor interval, and present a hybrid approach combining a genetic algorithm and fuzzy c-means for incomplete data clustering. The overall algorithm operates within the genetic algorithm framework, which searches for appropriate imputations of missing attributes in the corresponding nearest-neighbor intervals to recover the incomplete data set, and hybridizes fuzzy c-means to perform the clustering analysis and simultaneously provide the fitness metric for genetic optimization. Experimental results on several real-life data sets demonstrate the better clustering performance of our hybrid approach over the compared methods.
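As a rough illustration of the nearest-neighbor interval idea, the sketch below builds, for each missing attribute, an interval spanning that attribute's values among the q nearest complete instances. This is a minimal interpretation, not the paper's exact construction; the partial-distance measure and the choice of q are assumptions.

```python
import math

def nn_intervals(data, q=3):
    """For each missing attribute (None), build a nearest-neighbor interval
    [min, max] over the q nearest complete instances, using a partial
    distance on the attributes observed in the incomplete instance."""
    complete = [x for x in data if all(v is not None for v in x)]
    intervals = {}
    for i, x in enumerate(data):
        missing = [j for j, v in enumerate(x) if v is None]
        if not missing:
            continue
        obs = [j for j, v in enumerate(x) if v is not None]

        def pdist(y):
            # distance on commonly observed attributes only
            return math.sqrt(sum((x[j] - y[j]) ** 2 for j in obs) / len(obs))

        nbrs = sorted(complete, key=pdist)[:q]
        for j in missing:
            vals = [y[j] for y in nbrs]
            intervals[(i, j)] = (min(vals), max(vals))
    return intervals

data = [[1.0, 2.0], [1.1, 2.2], [0.9, None], [5.0, 6.0]]
print(nn_intervals(data, q=2))  # {(2, 1): (2.0, 2.2)}
```

A GA individual would then pick one imputed value inside each such interval, with the fuzzy c-means objective on the recovered data set serving as fitness.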

2.
Particle swarm optimization (PSO) is a population-based stochastic search procedure that seeks the best solution by moving a set of particles around the search space, and it is efficient for global search. The Grey Wolf Optimizer (GWO) is a recently developed meta-heuristic search algorithm inspired by the grey wolf (Canis lupus). This paper presents solutions to the single-area unit commitment problem for a 14-bus system, a 30-bus system and a 10-generating-unit model using the swarm-intelligence-based PSO algorithm and a hybrid PSO–GWO algorithm. The effectiveness of the proposed algorithms is compared with classical PSO, PSOLR, HPSO, hybrid PSOSQP, MPSO, IBPSO, LCA–PSO and various other evolutionary algorithms, and the novel PSO (NPSO) is found to be faster than classical PSO. The generation cost of the hybrid PSO–GWO is better than that of both classical and novel PSO, but its convergence is much slower than NPSO's due to the sequential computation of PSO and GWO.
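For readers unfamiliar with the baseline, a minimal canonical global-best PSO looks like the sketch below. This is the textbook scheme, not the paper's NPSO or PSO–GWO variants; the inertia weight and acceleration coefficients are typical default values, not the authors'.

```python
import random

def pso_minimize(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical global-best PSO: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval

random.seed(0)
best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=3)
```

A PSO–GWO hybrid in the spirit of the paper would interleave this update with GWO's leader-following position update, which is why the hybrid's per-iteration cost (and hence convergence time) grows.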

3.
Time-dependent multi-item problems arise frequently in management applications, communication systems, and production–distribution systems. Our problem belongs to the last category: we address the feasibility of such systems when all network parameters change over both time and product. The objective is to determine whether a dynamic production–shipment circuit is possible within a finite planning horizon. If no such flow exists, the goal is to determine where and when the infeasibility occurs and its approximate magnitude; this information may help the decision maker resolve the infeasibility of the system. The problem is investigated in a discrete-time setting, and a hybrid of a scaling approach and a penalty-function method, together with network optimality conditions, is used to develop a network-based algorithm. The algorithm is analysed from theoretical and practical perspectives on instances corresponding to electricity transmission–distribution networks and on many random instances. Computational results illustrate the performance of the algorithm.

4.
Fitting data points to curves (usually referred to as curve reconstruction) is a major issue in computer-aided design/manufacturing (CAD/CAM). The problem appears recurrently in reverse engineering, where a set of (possibly massive and noisy) data points obtained by 3D laser scanning has to be fitted to a free-form parametric curve (typically a B-spline). Despite the large number of methods available to tackle this issue, the problem is still challenging and elusive; in fact, no satisfactory solution to the general problem has been achieved so far. In this paper we present a novel hybrid evolutionary approach (called IMCH-GAPSO) for B-spline curve reconstruction comprising two classical bio-inspired techniques: genetic algorithms (GA) and particle swarm optimization (PSO), which account for data parameterization and knot placement, respectively. In our setting, GA and PSO are mutually coupled: the output of one system is used as the input of the other, and vice versa. This coupling is repeated iteratively until a termination criterion (such as a prescribed error threshold or a fixed number of iterations) is met. To evaluate the performance of our approach, we applied it to several illustrative sets of data points from real-world manufacturing applications. Our experimental results show that the approach performs very well, reconstructing with very high accuracy extremely complicated shapes that are unfeasible for reconstruction with current methods.
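The iterative GA↔PSO coupling can be sketched abstractly as an alternating loop. Here `ga_step` and `pso_step` are hypothetical stand-ins for the paper's two optimizers, and the quadratic toy error is purely illustrative; the point is only the structure, in which each stage consumes the other's output until the error stabilizes.

```python
def coupled_fit(params, knots, ga_step, pso_step, error, tol=1e-6, max_iter=50):
    """Alternate the two optimizers until the fitting error stabilizes:
    the output of one stage feeds the other, as in the GA/PSO coupling
    described above. Only the coupling loop is shown."""
    prev = float('inf')
    for _ in range(max_iter):
        params = ga_step(params, knots)   # "GA" refines the data parameterization
        knots = pso_step(params, knots)   # "PSO" refines the knot placement
        err = error(params, knots)
        if abs(prev - err) < tol:         # termination: error threshold
            break
        prev = err
    return params, knots

# Hypothetical stand-in steps on a quadratic toy error (illustration only):
error = lambda p, k: (p - 1.0) ** 2 + (k - 2.0) ** 2
ga = lambda p, k: p - 0.4 * (p - 1.0)
pso = lambda p, k: k - 0.4 * (k - 2.0)
p, k = coupled_fit(5.0, -3.0, ga, pso, error)
```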

5.

Differential evolution (DE) is a population-based stochastic search algorithm whose simple yet powerful features make it very attractive for numerical optimization. DE uses a greedier, less stochastic approach to problem solving than other evolutionary algorithms, combining simple arithmetic operators with the classical operators of recombination, mutation and selection to evolve a randomly generated starting population to a final solution. Although the global exploration ability of DE is adequate, its local exploitation ability is weak and its convergence speed is low; it suffers from premature convergence on multimodal objective functions, where the search may become trapped in local optima and lose diversity. It also suffers from stagnation, where the search may occasionally stop proceeding toward the global optimum even though the population has converged neither to a local optimum nor to any other point. To improve the exploitation ability and global performance of DE, this paper presents a novel hybrid version of the DE algorithm combined with random search for the solution of the single-area unit commitment problem. The hybrid DE–random-search algorithm is tested on IEEE benchmark systems consisting of 4, 10, 20 and 40 generating units. Its effectiveness is compared with other well-known evolutionary, heuristic and meta-heuristic search algorithms, and experimental analysis shows that the proposed algorithm yields global results for the unit commitment problem.
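A minimal DE/rand/1/bin baseline, the classical scheme described above (not the proposed DE–random-search hybrid), can be sketched as follows; population size, F and CR are common textbook settings, not the paper's.

```python
import random

def de_minimize(f, dim, np_=20, F=0.5, CR=0.9, gens=150, bounds=(-5.0, 5.0)):
    """Classic DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, then select greedily."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jr = random.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (random.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            trial = [min(hi, max(lo, t)) for t in trial]
            ft = f(trial)
            if ft <= fit[i]:            # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

random.seed(1)
x, v = de_minimize(lambda x: sum(t * t for t in x), dim=3)
```

The hybrid in the paper augments such a loop with a random-search phase to strengthen local exploitation; the sketch shows only the DE core.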


6.
This paper discusses the rollon–rolloff vehicle routing problem, a sanitation routing problem in which large containers are left at customer locations such as construction sites and shopping centers. Customers dump their garbage into the containers and request waste-treatment services; tractors then transport one container at a time between customer locations, the disposal facility, and the depot. The objective is to determine routes that minimize the number of required tractors and their deadhead time while serving all given customer demands. We propose a hybrid metaheuristic approach that combines a large neighborhood search with various improvement methods. The effectiveness of the proposed approach is demonstrated by computational experiments on benchmark data: new best-known solutions are found for 17 of the 20 benchmark instances.
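The destroy-and-repair skeleton of a large neighborhood search can be illustrated on a toy TSP: remove a few random cities, reinsert them by cheapest insertion, and keep the tour if it improved. The actual rollon–rolloff neighborhoods and improvement methods are considerably more involved; this sketch only shows the LNS loop structure.

```python
import math
import random

def tour_len(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def lns_tsp(pts, iters=300, k=3):
    """LNS sketch: destroy (remove k random cities) and repair (greedy
    cheapest insertion); accept only improving tours."""
    n = len(pts)
    best = list(range(n))
    best_len = tour_len(best, pts)
    for _ in range(iters):
        partial = best[:]
        removed = random.sample(partial, k)
        for c in removed:                      # destroy
            partial.remove(c)
        for c in removed:                      # repair: cheapest insertion
            pos = min(range(len(partial) + 1),
                      key=lambda p: tour_len(partial[:p] + [c] + partial[p:], pts))
            partial.insert(pos, c)
        cand_len = tour_len(partial, pts)
        if cand_len < best_len:
            best, best_len = partial, cand_len
    return best, best_len

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(12)]
tour, length = lns_tsp(pts)
```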

7.
This paper describes the development of an intelligent technique, based on artificial intelligence, for automatically detecting incidents on power distribution networks. A hybrid combination of fuzzy logic and genetic algorithms (GAs) is applied to detect faults in these networks: the robust nature of a fuzzy controller allows it to model functions of arbitrary complexity, while the optimising capabilities of GAs allow the fuzzy design parameters to be tuned for optimal performance. The hybrid approach builds on these individual strengths, blending fuzzy-set and GA techniques to compensate for each other's inadequacies. The fault-detection technique is described and verified with experiments on a 33 kV test system containing 12 busbars, eight transformers and eight line sections. On a test file of 500 cases, only one case (0.2%) went undetected, 458 actual faults (91.6%) were correctly detected, and in 41 cases (8.2%) protection system components that had failed to operate or had malfunctioned were correctly identified by the incident detection system.

8.
Data are considered important organizational assets because of their assumed value, including their potential to improve organizational decision-making processes. Such potential value, however, comes with various costs, including those of acquiring, storing, securing and maintaining the assets at appropriate quality levels. Clearly, if these costs outweigh the value that results from using the data, it would be counterproductive to acquire, store, secure and maintain them. Cost–benefit assessment is therefore particularly important in data warehouse (DW) development, yet very few techniques are available for determining the value the organization will derive from storing a particular data table, and hence for determining which data sets should be loaded into the DW. This research addresses the problem of identifying the set of data with the greatest potential net value for the organization, by offering a model for cost–benefit analysis of the decision support views the warehouse can support and by providing techniques for estimating the parameters of this model.
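The core screening step such a model enables can be sketched very simply: keep a candidate view only when its estimated benefit exceeds the sum of its costs, and rank the survivors by net value. The field names and the additive cost model below are illustrative assumptions, not the paper's estimation techniques.

```python
def select_views(views):
    """Toy net-value screen over candidate DW views: benefit must exceed
    the total of acquisition, storage, security and maintenance costs.
    Returns (name, net value) pairs sorted by decreasing net value."""
    chosen = []
    for v in views:
        cost = v["acquire"] + v["store"] + v["secure"] + v["maintain"]
        if v["benefit"] > cost:
            chosen.append((v["name"], v["benefit"] - cost))
    return sorted(chosen, key=lambda t: -t[1])

views = [
    {"name": "sales", "benefit": 100, "acquire": 10, "store": 5,
     "secure": 5, "maintain": 10},
    {"name": "logs", "benefit": 20, "acquire": 10, "store": 10,
     "secure": 5, "maintain": 5},
]
print(select_views(views))  # [('sales', 70)]
```

The hard part, which the paper targets, is estimating `benefit` and the cost terms for each view in the first place.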

9.
The Naïve Bayes Classifier (NBC) is widely used for classification in machine learning. It is considered a first choice for many classification problems because of its simplicity and its classification accuracy compared with other supervised learning methods. However, for high-dimensional data such as gene expression data, it does not perform well, owing to two major limitations: underflow and overfitting. To address underflow, the existing approach is to add the logarithms of probabilities rather than multiplying the probabilities, and an estimation approach is used to mitigate overfitting. In practice, however, these approaches do not perform well on gene expression data. In this paper, a novel approach is proposed that overcomes these limitations by using a robust function for estimating probabilities in the Naïve Bayes Classifier. The proposed method not only resolves the limitations of the NBC but also improves its classification accuracy for gene expression data. The method has been tested on several high-dimensional benchmark gene expression datasets. Comparative results for the proposed Robust Naïve Bayes Classifier (R-NBC) and the existing NBC are presented to highlight the effectiveness of the R-NBC, and a simulation study demonstrates its robustness over the existing approaches.
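The standard log-space remedy for underflow that the paper builds on can be sketched with a small Gaussian Naïve Bayes: summing log-likelihoods avoids the vanishing products that multiplying thousands of tiny per-gene likelihoods would cause. The variance floor `eps` is a simple smoothing stand-in, not the proposed robust estimator.

```python
import math
from collections import defaultdict

def nb_train(X, y, eps=1e-9):
    """Fit per-class priors and per-feature Gaussian (mean, variance);
    eps keeps variances strictly positive."""
    by_cls = defaultdict(list)
    for x, c in zip(X, y):
        by_cls[c].append(x)
    model = {}
    for c, rows in by_cls.items():
        n = len(rows)
        prior = math.log(n / len(X))
        stats = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mu = sum(col) / n
            var = sum((v - mu) ** 2 for v in col) / n + eps
            stats.append((mu, var))
        model[c] = (prior, stats)
    return model

def nb_predict(model, x):
    """Classify by the largest sum of log prior and log likelihoods --
    no raw probability products, hence no underflow."""
    def loglik(c):
        prior, stats = model[c]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * var) - (x[j] - mu) ** 2 / (2 * var)
            for j, (mu, var) in enumerate(stats))
    return max(model, key=loglik)

X = [[0.0], [0.2], [5.0], [5.2]]
y = [0, 0, 1, 1]
model = nb_train(X, y)
```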

10.
With the development of multimedia group applications and growing multicast demand, constructing multicast routing trees that satisfy Quality of Service (QoS) requirements has become increasingly important. A multicast tree constructed by existing algorithms suffers from three major weaknesses: (1) it cannot be built by multichannel routing, which transmits a message over all available links, so data traffic cannot be distributed favourably; (2) it does not model duplication capacity, so the duplication capacity of each node cannot be optimally allocated; (3) it cannot optimally adjust the number of links and nodes used; in effect, it cannot optimally employ unused backup multichannel paths. To overcome these weaknesses, this paper presents a polynomial-time algorithm for distributed optimal multicast routing with QoS guarantees in networks with multichannel paths, called the Distributed Optimal Multicast Multichannel Routing Algorithm (DOMMR). The aims of this algorithm are: (1) to minimize end-to-end delay across the multichannel paths, (2) to minimize bandwidth consumption by using all available links, and (3) to maximize the data rate by formulating network resources. DOMMR is based on a Linear Programming Formulation (LPF) and provides an iterative optimal solution that obtains the best distributed routes for traffic demands between all edge nodes. Computational experiments and numerical simulations show that the proposed algorithm is more efficient than existing methods; simulation results are obtained by applying network simulation tools such as QSB, OPNET and MATLAB to sample networks. We then introduce a generalized problem, the delay-constrained multicast multichannel routing problem, and show that it too can be solved in polynomial time.

11.
The estimation of crack location and depth in a cantilever beam is formulated as an optimization problem, in which the optimal location and depth minimize a cost function based on the differences between the first four measured and calculated natural frequencies. The calculated natural frequencies are obtained from a rotational-spring model of the crack, and the measured ones from the cracked beam's frequency response and modal analysis. A hybrid particle swarm–Nelder–Mead (PS–NM) algorithm is used to estimate the crack location and depth: a modified particle swarm optimization (PSO) stage identifies the most promising areas, and a Nelder–Mead simplex (NM) stage performs local search within these areas. The PS–NM results are compared with those of the PSO, a hybrid genetic–Nelder–Mead algorithm (GA–NM) and a neural network (NN); the proposed PS–NM method outperforms the other methods in both speed and accuracy. The average estimation errors for crack location and depth are (0.06%, 0%) for the PS–NM, versus (0.09%, 0%), (0.46%, 0.54%) and (0.39%, 1.66%) for the GA–NM, PSO and NN methods, respectively. To validate the proposed method and investigate modeling and measurement errors, some experimental results are also included: the average experimental location and depth estimation errors are (9.24%, 8.56%) for the PS–NM, versus (9.64%, 9.50%), (10.89%, 10.89%) and (11.53%, 11.64%) for the GA–NM, PSO and NN methods, respectively.
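The global-then-local structure of such hybrids can be sketched as follows. Random sampling stands in for the PSO stage and a compass (pattern) search stands in for Nelder–Mead, so this is an illustration of the two-stage architecture under stated substitutions, not the paper's algorithm.

```python
import random

def pattern_search(f, x0, step=0.5, tol=1e-6):
    """Local refinement stage: a simple compass search standing in for
    the Nelder-Mead simplex step."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5   # shrink when no axis move helps
    return x, fx

def global_then_local(f, dim, n=15, iters=30, bounds=(-2.0, 2.0)):
    """Hybrid sketch: a crude random-sampling global stage (stand-in for
    PSO) finds a promising point; the local search polishes it."""
    lo, hi = bounds
    best, bval = None, float('inf')
    for _ in range(iters * n):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        v = f(x)
        if v < bval:
            best, bval = x, v
    return pattern_search(f, best)

random.seed(3)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2   # toy "cost function"
x, v = global_then_local(f, dim=2)
```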

12.
Neural Computing and Applications - Phishing is an attack that imitates the official websites of corporations such as banks, e-commerce sites, financial institutions, and governmental...

13.
The present paper is a theoretical contribution to the field of iterative methods for solving the inconsistent linear least-squares problems that arise in image reconstruction from projections in computerized tomography. It consists of a hybrid algorithm that includes, in each iteration, a CG-like step for modifying the right-hand side and a Kaczmarz-like step for producing the approximate solution. We prove convergence of the hybrid algorithm for general inconsistent and rank-deficient least-squares problems. Although the new algorithm has potential for further applied experiments and comparisons, we restrict them in this paper to a regularized image reconstruction problem involving a 2D medical data set.
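The Kaczmarz-like step, which projects the current iterate onto one equation's hyperplane at a time, can be sketched as below; the CG-like right-hand-side correction of the hybrid is omitted, so this is only the row-action half of the method, shown on a small consistent system.

```python
def kaczmarz(A, b, iters=200):
    """Cyclic Kaczmarz sweeps: for each row a_i, project x onto the
    hyperplane a_i . x = b_i."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for i in range(m):
            ai = A[i]
            dot = sum(ai[j] * x[j] for j in range(n))
            norm2 = sum(v * v for v in ai)
            if norm2 == 0.0:
                continue
            lam = (b[i] - dot) / norm2
            x = [x[j] + lam * ai[j] for j in range(n)]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
print(kaczmarz(A, b))  # ≈ [1.0, 3.0]
```

For inconsistent systems the plain cycle does not converge to the least-squares solution, which is precisely why the hybrid keeps adjusting the right-hand side.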

14.
Engineering with Computers - In this study, we propose a new hybrid algorithm fusing the exploitation ability of the particle swarm optimization (PSO) with the exploration ability of the grey wolf...

15.
16.
The rise of the Semantic Web has provided cultural heritage researchers and practitioners with several tools for semantically rich representation and interoperability of cultural heritage collections. Although these tools, which come mostly in the form of ontologies and related vocabularies, offer many advantages, they do not provide a conceptual model for capturing contextual and environmental dependencies that contribute to long-term digital preservation. This paper presents one of the key outcomes of the PERICLES FP7 project, the Linked Resource Model, for modelling dependencies as a set of evolving linked resources. The adoption of the proposed model and the consistency of its representation are evaluated via a specific instantiation in the domain of digital video art.

17.
Matrix–matrix multiplication (MMM) is a highly important kernel in linear algebra algorithms, and the performance of its implementations depends on memory utilization and data locality. There are MMM algorithms, such as the standard algorithm and the Strassen–Winograd variant, and many recursive array layouts, such as Z-Morton or U-Morton; however, their data locality is lower than that of the proposed methodology. Moreover, several state-of-the-art self-tuning libraries exist, such as ATLAS for the MMM algorithm, which tests many MMM implementations. During the installation of ATLAS, on the one hand an extremely complex empirical tuning step is required, and on the other hand a large number of compiler options are used, both of which are outside the scope of this paper. In this paper, a new methodology using the standard MMM algorithm is presented that achieves improved performance by focusing on data locality (both temporal and spatial); it finds the schedule that conforms to optimum memory management. Compared with (Chatterjee et al. in IEEE Trans. Parallel Distrib. Syst. 13:1105, 2002; Li and Garzaran in Proc. of Lang. Compil. Parallel Comput., 2005; Bilmes et al. in Proc. of the 11th ACM Int. Conf. Supercomput., 1997; Aberdeen and Baxter in Concurr. Comput. Pract. Exp. 13:103, 2001), the proposed methodology has two major advantages. First, the scheduling used at the tile level differs from that at the element level, giving better data locality suited to the sizes of the memory hierarchy. Second, its exploration time is short, because it searches only for the number of tiling levels used and, between (1, 2) (Sect. 4), for the best tile size for each cache level. A software tool (C code) implementing the above methodology was developed, taking the hardware model and the matrix sizes as input. The methodology outperforms others across a wide range of architectures: compared with the best existing related work, which we implemented, performance gains of up to 55% over the standard MMM algorithm and up to 35% over Strassen's are observed, both under recursive data array layouts.
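A single-level tiled MMM illustrates the locality idea: operating on T×T blocks keeps each block resident in cache, improving temporal and spatial locality over the naive triple loop. The paper tunes multiple tiling levels and tile sizes to the memory hierarchy; the tile size T here is arbitrary.

```python
def matmul_tiled(A, B, T=32):
    """Blocked (tiled) matrix multiply C = A @ B over TxT tiles.
    Same arithmetic as the naive triple loop, different traversal order."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, m, T):
            for jj in range(0, p, T):
                # multiply one tile of A by one tile of B
                for i in range(ii, min(ii + T, n)):
                    Ai, Ci = A[i], C[i]
                    for k in range(kk, min(kk + T, m)):
                        a, Bk = Ai[k], B[k]
                        for j in range(jj, min(jj + T, p)):
                            Ci[j] += a * Bk[j]
    return C

A = [[(i + k) * 0.5 for k in range(4)] for i in range(3)]
B = [[(k - j) * 0.25 for j in range(5)] for k in range(4)]
C = matmul_tiled(A, B, T=2)
```

In C with hardware-sized tiles this ordering is what lets each cache level be reused before a tile is evicted; in pure Python the benefit is not observable, so the sketch only demonstrates correctness of the traversal.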

18.
Vendor-managed inventory (VMI) is one of the emerging solutions for improving supply chain efficiency: it gives the supplier the responsibility to monitor and decide the inventory replenishments of its customers. In this paper, an integrated location–inventory distribution network problem, which combines the effects of facility location, distribution, and inventory decisions, is formulated under a VMI setup. We present a Multi-Objective Location–Inventory Problem (MOLIP) model and investigate a multi-objective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA-II) for solving MOLIP. To assess the performance of our approach, we conduct computational experiments against defined criteria, and its potential is demonstrated by comparison with a well-known multi-objective evolutionary algorithm. Computational results yield promising solutions for problems of different sizes and show the approach to be an innovative and efficient option for many difficult-to-solve problems.
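The fast non-dominated sorting at the heart of NSGA-II can be sketched as follows; minimization of every objective is assumed, and the crowding-distance step that NSGA-II uses for tie-breaking within a front is omitted.

```python
def non_dominated_sort(points):
    """NSGA-II fast non-dominated sorting: partition objective vectors
    into Pareto fronts (front 0 = non-dominated)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(points)
    S = [[] for _ in range(n)]   # indices each solution dominates
    cnt = [0] * n                # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                S[i].append(j)
            elif dominates(points[j], points[i]):
                cnt[i] += 1
        if cnt[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                cnt[j] -= 1
                if cnt[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

objs = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(non_dominated_sort(objs))  # [[0, 1, 2], [3], [4]]
```

In a MOLIP setting each tuple would hold the problem's objective values (e.g. cost and service-level measures) for one candidate network design.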

19.
A novel supervised Actor–Critic (SAC) approach to the adaptive cruise control (ACC) problem is proposed in this paper. The key elements required by the SAC algorithm, namely the Actor and the Critic, are each approximated by feed-forward neural networks; the output of the Actor and the state are input to the Critic to approximate the performance index function. A Lyapunov stability analysis is presented to prove the uniformly ultimately bounded property of the neural networks' estimation errors. Moreover, we use a supervisory controller to pre-train the Actor to a basic control policy, which improves training convergence and the success rate. We apply this method to learn an approximately optimal control policy for the ACC problem. Experimental results in several driving scenarios demonstrate that the SAC algorithm performs well, so it is feasible and effective for the ACC problem.

20.
Hybrid manufacturing combines additive manufacturing's advantage of building complex geometries with subtractive manufacturing's benefits of dimensional precision and surface quality. This technology shows great potential to support repair and remanufacturing processes: hybrid manufacturing is used to repair end-of-life parts or remanufacture them with new features and functionalities. However, process planning for hybrid remanufacturing is still a challenging research topic, because current methods require extensive human intervention for feature recognition and knowledge interpretation, and the quality of the derived process plans is hard to quantify. To fill this gap, a cost-driven process planning method for hybrid additive–subtractive remanufacturing is proposed in this paper. An automated additive–subtractive feature extraction method is developed, and the process planning task is formulated as a cost-minimization optimization problem to guarantee a high-quality solution. Specifically, an implicit level-set-function-based feature extraction method is proposed, and precedence constraints and cost models are formulated to cast the hybrid process planning task as a mixed-integer programming model. Numerical examples demonstrate the efficacy of the proposed method.
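A toy 2-D illustration of level-set-based additive/subtractive feature splitting: with the convention that material is present where φ < 0, points inside the target shape but outside the current (worn) part need deposition, and the reverse need machining. The sampling-based classification below is an assumption for illustration, not the paper's formulation.

```python
def classify_features(phi_target, phi_current, pts):
    """Split sample points into deposition (additive) and machining
    (subtractive) sets using two implicit level-set functions.
    Material is present where phi < 0."""
    additive, subtractive = [], []
    for p in pts:
        t_in, c_in = phi_target(p) < 0, phi_current(p) < 0
        if t_in and not c_in:
            additive.append(p)      # deposit material here
        elif c_in and not t_in:
            subtractive.append(p)   # machine material away here
    return additive, subtractive

phi_t = lambda p: p[0] ** 2 + p[1] ** 2 - 1.0    # target: unit disk
phi_c = lambda p: p[0] ** 2 + p[1] ** 2 - 0.64   # worn part: radius 0.8
pts = [(0.0, 0.0), (0.9, 0.0), (0.0, 1.5)]
print(classify_features(phi_t, phi_c, pts))  # ([(0.9, 0.0)], [])
```

The extracted additive and subtractive regions would then become the candidate operations sequenced by the mixed-integer cost-minimization model.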


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号