Similar Documents
A total of 20 similar documents were found.
1.
Fuzzy time series (FTS) forecasting methods are generally divided into two categories: those based on intervals of the universe of discourse and those based on clustering algorithms. Since interval-based algorithms face challenging problems such as choosing the ideal interval length, clustering-based FTS algorithms are preferred. Fuzzy Logical Relationships (FLRs) are usually used to establish relationships between input and output data in both interval-based and clustering-based FTS algorithms. Modeling complicated systems demands a large number of FLRs, which incurs a long runtime to train FTS algorithms. In this study, a fast and efficient clustering-based fuzzy time series algorithm (FEFTS) is introduced to handle regression and classification problems. The superiority of the FEFTS algorithm over other FTS algorithms in terms of runtime and training and testing errors is confirmed by applying it to various benchmark datasets available on the web. It is shown that FEFTS reduces testing RMSE for regression data by up to 40% with the least runtime. Also, with the same accuracy as the Fuzzy-Firefly classification method, FEFTS cuts runtime dramatically, from 324.33 s to 0.0055 s.
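The FLR-based forecasting pipeline shared by interval-based and clustering-based FTS methods can be sketched as follows. This is a generic interval-based illustration, not the FEFTS algorithm itself; the series, interval count, and fallback rule are assumptions:

```python
from collections import defaultdict

def fts_forecast(series, n_intervals=5):
    """Sketch of interval-based fuzzy time series forecasting:
    partition the universe of discourse, fuzzify each observation,
    build fuzzy logical relationship (FLR) groups, and forecast the
    next value as the mean midpoint of the consequent fuzzy sets."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals or 1.0
    mids = [lo + (i + 0.5) * width for i in range(n_intervals)]

    def fuzzify(x):  # index of the interval containing x
        return min(int((x - lo) / width), n_intervals - 1)

    labels = [fuzzify(x) for x in series]
    # FLR groups: antecedent fuzzy set -> list of consequent fuzzy sets
    groups = defaultdict(list)
    for a, b in zip(labels, labels[1:]):
        groups[a].append(b)
    last = labels[-1]
    nxt = groups.get(last, [last])  # fall back to the last set itself
    return sum(mids[j] for j in nxt) / len(nxt)

series = [13, 14, 14, 15, 16, 16, 17, 18, 17, 16]
print(round(fts_forecast(series), 2))  # → 17.0
```

Clustering-based variants such as FEFTS replace the fixed intervals with cluster-derived fuzzy sets, but the FLR bookkeeping is the same.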

2.
This paper presents a novel adaptive cuckoo search (ACS) algorithm for optimization. The step size is made adaptive using knowledge of the fitness function value and the current position in the search space. Another important feature of the ACS algorithm is its speed, which exceeds that of the CS algorithm. Here, an attempt is made to make the cuckoo search (CS) algorithm parameter-free, without a Lévy step. The proposed algorithm is validated using twenty-three standard benchmark test functions. The second part of the paper proposes an efficient face recognition algorithm using ACS, principal component analysis (PCA), and intrinsic discriminant analysis (IDA). The proposed algorithms are named PCA + IDA and ACS + IDA. Interestingly, PCA + IDA offers a perturbation-free algorithm for dimension reduction, while ACS + IDA is used to find the optimal feature vectors for classifying the face images based on IDA. For the performance analysis, we use three standard face databases: YALE, ORL, and FERET. A comparison of the proposed method with state-of-the-art methods reveals the effectiveness of our algorithm.
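The core idea of adapting the step size from fitness and iteration, instead of drawing a Lévy flight, can be sketched as below. The exact ACS update rule is in the paper; the rank-and-iteration scaling used here is an assumption for illustration:

```python
import random

def adaptive_cuckoo_search(f, dim, bounds, n_nests=15, iters=200, pa=0.25, seed=1):
    """Illustrative adaptive cuckoo search: the step shrinks for nests
    with good (low) fitness and as iterations progress, replacing the
    Levy flight of standard CS. The ACS paper's exact rule may differ."""
    rng = random.Random(seed)
    lo, hi = bounds
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for t in range(1, iters + 1):
        best, worst = min(fit), max(fit)
        for i in range(n_nests):
            # normalized fitness in [0, 1]: 0 = best nest, 1 = worst
            rank = (fit[i] - best) / (worst - best + 1e-12)
            step = (hi - lo) * rank / t  # assumed adaptive step size
            cand = [min(hi, max(lo, v + step * rng.uniform(-1, 1))) for v in nests[i]]
            fc = f(cand)
            if fc < fit[i]:  # greedy replacement
                nests[i], fit[i] = cand, fc
        # abandon a fraction pa of the worst nests, as in standard CS
        order = sorted(range(n_nests), key=lambda i: fit[i], reverse=True)
        for i in order[: int(pa * n_nests)]:
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
    i = min(range(n_nests), key=lambda i: fit[i])
    return nests[i], fit[i]

sphere = lambda x: sum(v * v for v in x)
x, fx = adaptive_cuckoo_search(sphere, dim=2, bounds=(-5, 5))
```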

3.
3-D Networks-on-Chip (NoCs) have been proposed as a potent solution to address both the interconnection and design complexity problems facing future System-on-Chip (SoC) designs. In this paper, two topology-aware multicast routing algorithms supporting 3-D NoCs are proposed: Multicasting XYZ (MXYZ) and Alternative XYZ (AL + XYZ). In essence, MXYZ is a simple dimension-order multicast routing algorithm that targets 3-D NoC systems built upon regular topologies. To support multicast routing in irregular regions, AL + XYZ can be applied, where an alternative output channel is sought to forward/replicate packets whenever the output channel determined by MXYZ is not available. To evaluate the performance of MXYZ and AL + XYZ, extensive experiments have been conducted comparing them against a path-based multicast routing algorithm and an irregular-region-oriented multiple unicast routing algorithm, respectively. The experimental results confirm that the proposed MXYZ and AL + XYZ schemes have lower latency and power consumption than the other two routing algorithms, making them more suitable for supporting multicasting in 3-D NoC systems. In addition, the hardware implementation cost of AL + XYZ is shown to be quite modest.
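The dimension-order (XYZ) routing that MXYZ builds on can be sketched for a single unicast path; multicast replication at branch points and channel availability checks are omitted:

```python
def xyz_route(src, dst):
    """Dimension-order (XYZ) routing in a 3-D mesh NoC: resolve the
    offset fully along X, then Y, then Z. This deadlock-free order is
    the basis that multicast variants such as MXYZ follow while
    replicating packets where destination paths diverge."""
    path = [src]
    cur = list(src)
    for axis in range(3):  # 0 = X, 1 = Y, 2 = Z
        while cur[axis] != dst[axis]:
            cur[axis] += 1 if dst[axis] > cur[axis] else -1
            path.append(tuple(cur))
    return path

route = xyz_route((0, 0, 0), (2, 1, 1))
print(route)  # → [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0), (2, 1, 1)]
```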

4.
In this paper, we present an efficient and simplified algorithm for converting the Residue Number System (RNS) to a weighted number system, which in turn simplifies the implementation of RNS sign detection, magnitude comparison, and overflow detection. The algorithm is based on the Mixed Radix Conversion (MRC). The new algorithm simplifies the hardware implementation and improves the speed of conversion by replacing a number of multiplication operations with small look-up tables, and it requires a smaller ROM than existing algorithms. For a moduli set consisting of eight moduli, the new algorithm requires seven tables to do the conversion, with a total table size of 519 bits, while the Szabo and Tanaka MRC algorithm [N.S. Szabo, R.I. Tanaka, Residue Arithmetic and its Application to Computer Technology, McGraw-Hill, New York, 1967] requires 28 tables with a total table size of 8960 bits, and Huang's MRC algorithm [C.H. Huang, A fully parallel mixed-radix conversion algorithm for residue number applications, IEEE Transactions on Computers c-32 (4) (1983)] requires 36 tables with a total table size of 5760 bits.
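The classical Szabo and Tanaka style MRC recurrence can be sketched as below, with the modular inverses computed directly rather than read from the look-up tables that the hardware variants use:

```python
def mrc_to_weighted(residues, moduli):
    """Mixed Radix Conversion: turn an RNS representation back into a
    weighted (positional) number. The mixed-radix digits a_i satisfy
    X = a1 + a2*m1 + a3*m1*m2 + ... for pairwise coprime moduli."""
    k = len(moduli)
    digits = []
    for i in range(k):
        a = residues[i]
        for j in range(i):
            # subtract digit j, then divide by m_j modulo m_i
            a = (a - digits[j]) * pow(moduli[j], -1, moduli[i]) % moduli[i]
        digits.append(a)
    # weighted reconstruction from the mixed-radix digits
    x, weight = 0, 1
    for a, m in zip(digits, moduli):
        x += a * weight
        weight *= m
    return x

# e.g. X = 29 with moduli (3, 5, 7): residues (29%3, 29%5, 29%7) = (2, 4, 1)
print(mrc_to_weighted((2, 4, 1), (3, 5, 7)))  # → 29
```

Note that `pow(m, -1, p)` (Python 3.8+) computes the modular inverse that hardware implementations would fetch from a table.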

5.
This study employs the growing self-organizing map (GSOM) algorithm and a continuous genetic algorithm (CGA)-based SOM (CGASOM) to improve the performance of the SOM neural network (SOMnn). The proposed GSOM + CGASOM approach consists of two stages: the first determines the SOMnn topology using the GSOM algorithm, while the second fine-tunes the weights using the CGASOM algorithm. The proposed CGASOM algorithm is compared with two other clustering algorithms on four benchmark data sets: Iris, Wine, Vowel, and Glass. The simulation results indicate that the CGASOM algorithm finds better solutions. Additionally, the proposed approach has also been employed to grade Lithium-ion cells and characterize quality inspection rules. The results can assist battery manufacturers in improving quality and decreasing the costs of battery design and manufacturing.

6.
Stock index forecasting is a hot issue in the financial arena. As the movements of stock indices are non-linear and subject to many internal and external factors, they pose a great challenge to researchers who try to predict them. In this paper, we select a radial basis function neural network (RBFNN) to train on the data and forecast the stock indices of the Shanghai Stock Exchange. We introduce the artificial fish swarm algorithm (AFSA) to optimize the RBFNN. To increase forecasting efficiency, the K-means clustering algorithm used in the learning process of the RBFNN is optimized by AFSA. To verify the usefulness of our algorithm, we compared the forecasting results of RBF optimized by AFSA, genetic algorithms (GA), and particle swarm optimization (PSO), as well as the forecasting results of ARIMA, BP, and support vector machines (SVM). Our experiments indicate that RBF optimized by AFSA is an easy-to-use algorithm with considerable accuracy. Of all the combinations we tried in this paper, BIAS6 + MA5 + ASY4 was the optimal group, with the fewest errors.

7.
The k-means algorithm is well known for its efficiency in clustering large data sets. However, working only on numeric values prohibits it from being used to cluster real-world data containing categorical values. In this paper we present two algorithms that extend the k-means algorithm to categorical domains and to domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes in the clustering process to minimise the clustering cost function. With these extensions the k-modes algorithm enables the clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow clustering of objects described by mixed numeric and categorical attributes. We use the well-known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real-world data sets, each with half a million objects, show that the two algorithms are efficient when clustering large data sets, which is critical to data mining applications.
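The two ingredients of k-modes, the simple matching dissimilarity and the frequency-based mode update, can be sketched as follows (the toy categorical records are illustrative):

```python
from collections import Counter

def matching_dissimilarity(a, b):
    """k-modes simple matching dissimilarity: the number of attributes
    on which two categorical objects take different values."""
    return sum(x != y for x, y in zip(a, b))

def cluster_mode(objects):
    """Mode of a cluster: per attribute, the most frequent category.
    Replacing cluster means with modes is what lets the k-means
    iteration scheme work on categorical data."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*objects))

cluster = [("red", "small", "round"),
           ("red", "large", "round"),
           ("blue", "small", "round")]
mode = cluster_mode(cluster)
print(mode)                                              # → ('red', 'small', 'round')
print(matching_dissimilarity(("blue", "large", "oval"), mode))  # → 3
```

k-prototypes combines this measure with squared Euclidean distance on the numeric attributes, weighted by a user-chosen factor.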

8.
This paper presents the use of the simulated annealing metaheuristic for tuning Mamdani-type fuzzy models. The structure of the Mamdani fuzzy model is learned from input–output data pairs using Wang and Mendel's method and the fuzzy c-means clustering algorithm. Then, the parameters of the fuzzy system are tuned through simulated annealing. In this paper, we perform experiments to examine the effects of (a) the initial solution generated by Wang and Mendel's method and the fuzzy c-means clustering method, (b) the membership function update procedure, (c) the probability parameter for the calculation of the initial temperature, (d) the temperature update coefficient used in the cooling schedule, and (e) the randomness level in the disturbance mechanism used in the simulated annealing algorithm on the tuning of Mamdani-type fuzzy models. Experiments are performed with the Mackey–Glass chaotic time series. The results indicate that Wang and Mendel's method provides a better starting configuration for simulated annealing than the fuzzy c-means clustering method, and that for the membership function update parameter, MFChangeRate ∈ (0, 1], and the probability parameter for the calculation of the initial temperature, P0 ∈ (0, 1), values close to zero produce better results.
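A generic simulated annealing loop with a geometric cooling schedule (one common choice among the temperature-update and disturbance options the paper compares) can be sketched as below; the test objective and parameter values are assumptions:

```python
import math, random

def simulated_annealing(f, x0, t0=1.0, alpha=0.95, steps=2000, scale=0.5, seed=7):
    """Generic SA skeleton: perturb the current solution, always accept
    improvements, accept worse moves with Boltzmann probability, and
    cool geometrically (T <- alpha * T). When tuning a fuzzy model, x
    would hold the membership function parameters."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        cand = [v + scale * rng.uniform(-1, 1) for v in x]  # disturbance
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= alpha  # cooling schedule
    return best, fbest

best, fbest = simulated_annealing(lambda v: (v[0] - 1) ** 2 + v[1] ** 2, [4.0, -3.0])
```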

9.
Accurate contour estimation plays a significant role in classifying and estimating the shape, size, and position of a thyroid nodule. It helps to reduce the number of false positives and improves the accurate detection and efficient diagnosis of thyroid nodules. This paper introduces an automated delineation method that integrates spatial information with neutrosophic clustering and level sets for accurate and effective segmentation of thyroid nodules in ultrasound images. The proposed delineation method, named Spatial Neutrosophic Distance Regularized Level Set (SNDRLS), is based on Neutrosophic L-Means (NLM) clustering, which incorporates spatial information for level set evolution. The SNDRLS takes as input a rough estimation of the region of interest (ROI) provided by Spatial NLM (SNLM) clustering for precise delineation of one or more nodules. The performance of the proposed method is compared with level set, NLM clustering, Active Contour Without Edges (ACWE), Fuzzy C-Means (FCM) clustering, and Neutrosophic-based Watershed segmentation methods using the same image dataset. To validate the SNDRLS method, manual demarcations from three expert radiologists are employed as ground truth. The SNDRLS yields the closest boundaries to the ground truth compared to the other methods, as revealed by six assessment measures (true positive rate 95.45 ± 3.5%, false positive rate 7.32 ± 5.3%, overlap 93.15 ± 5.2%, mean absolute distance 1.8 ± 1.4 pixels, Hausdorff distance 0.7 ± 0.4 pixels, and Dice metric 94.25 ± 4.6%). The experimental results show that the SNDRLS is able to delineate multiple nodules in thyroid ultrasound images accurately and effectively. The proposed method finds the nodule boundary automatically even in low-contrast, blurred, and noisy thyroid ultrasound images, without any human intervention. Additionally, the SNDRLS is able to determine its controlling parameters adaptively from SNLM clustering.

10.
Solar cells that convert sunlight into electrical energy are the main component of a solar power system. Quality inspection of solar cells ensures high energy conversion efficiency of the product. The surface of a multi-crystal solar wafer shows multiple crystal grains of random shapes and sizes. This creates an inhomogeneous surface texture and makes the defect inspection task extremely difficult. This paper proposes an automatic defect detection scheme based on Haar-like feature extraction and a new clustering technique. Only defect-free images are used as training samples. In the training process, a binary-tree clustering method is proposed to partition the defect-free samples, which may involve tens of groups. A uniformity measure based on principal component analysis is evaluated for each cluster. At each partition level, the cluster with the worst uniformity of inter-sample distances is separated into two new clusters using Fuzzy C-means. In the inspection process, the distance from a test data point to each cluster centroid is computed to measure the evidence of a defect. Experimental results have shown that the proposed method is effective and efficient at detecting various defects in solar cells. It achieves a very good detection rate, and the computation time is only 0.1 s for a 550 × 550 image.

11.
The success rates of expert or intelligent systems depend on the selection of correct data clusters. The k-means algorithm is a well-known method for solving data clustering problems, but it suffers not only from a high dependency on the initial solution but also from sensitivity to the distance function used. A number of algorithms have been proposed to address the centroid initialization problem, but the resulting solutions do not always yield optimal clusters. This paper proposes three algorithms: (i) the search algorithm C-LCA, an improved League Championship Algorithm (LCA); (ii) a search clustering algorithm using C-LCA (SC-LCA); and (iii) a hybrid clustering algorithm, the hybrid of k-means and the Chaotic League Championship Algorithm (KSC-LCA), which has two computation stages. The C-LCA employs chaotic adaptation for the retreat and approach parameters, rather than constants, which enhances the search capability. Furthermore, to overcome the limitation of the original k-means algorithm, whose Euclidean distance cannot handle categorical attributes properly, we adopt the Gower distance and a mechanism for handling the discrete values of categorical attributes. The proposed algorithms can handle not only pure numeric data but also mixed-type data, and can find the best centroids containing categorical values. Experiments were conducted on 14 datasets from the UCI repository. The SC-LCA and KSC-LCA competed with 16 established algorithms, including the k-means, k-means++, and global k-means algorithms, four search clustering algorithms, and nine hybrids of the k-means algorithm with several state-of-the-art evolutionary algorithms. The experimental results show that the SC-LCA produces the clusters with the highest F-Measure on the pure categorical dataset, and the KSC-LCA produces the clusters with the highest F-Measure on the pure numeric and mixed-type datasets tested.
Out of 14 datasets, the SC-LCA produced centroids with better F-Measures than the k-means algorithm on 13. On the Tic-Tac-Toe dataset, which contains only categorical attributes, the SC-LCA achieves an F-Measure of 66.61, which is 21.74 points above that of the k-means algorithm (44.87). The KSC-LCA produced better centroids than the k-means algorithm on all 14 datasets, with a maximum F-Measure improvement of 11.59 points. In terms of computational cost, however, the SC-LCA and KSC-LCA took more function evaluations (NFEs) than the k-means algorithm and its variants, although the KSC-LCA ranks first and the SC-LCA fourth among the hybrid clustering and search clustering algorithms that we tested. Therefore, the SC-LCA and KSC-LCA are general and effective clustering algorithms that could be used when an expert or intelligent system requires accurate, high-speed cluster selection.
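The Gower distance adopted for mixed-type data can be sketched as follows; the attribute layout, per-attribute ranges, and example records are illustrative assumptions:

```python
def gower_distance(a, b, numeric_ranges):
    """Gower distance for mixed-type records: each numeric attribute
    contributes |a - b| / range (so it lies in [0, 1]), each categorical
    attribute contributes 0 if equal and 1 otherwise, and the per-attribute
    scores are averaged. `numeric_ranges` maps the index of each numeric
    attribute to its observed range over the data set."""
    total = 0.0
    for i, (x, y) in enumerate(zip(a, b)):
        if i in numeric_ranges:
            r = numeric_ranges[i]
            total += abs(x - y) / r if r else 0.0
        else:
            total += 0.0 if x == y else 1.0
    return total / len(a)

# two mixed-type records: (age, colour, weight class), age range assumed 50
d = gower_distance((25, "red", "light"), (35, "red", "heavy"), numeric_ranges={0: 50})
print(round(d, 4))  # → 0.4
```

This is why it can replace Euclidean distance inside a k-means-style loop when some attributes are categorical.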

12.
The traditional k-means algorithm selects its initial cluster centers randomly, which makes the clustering results unstable. To address this problem, an algorithm is proposed that improves the selection of initial k-means cluster centers based on a dispersion measure. The algorithm first treats all objects as one large cluster, then repeatedly selects, from the cluster containing the most objects, the two objects with the largest and smallest dispersion as initial cluster centers, and assigns the remaining objects of that cluster to the nearest initial cluster by minimum distance, until the number of clusters equals the specified value k. Finally, these k clusters are used as the initial clusters for the k-means algorithm. The proposed algorithm, the traditional k-means algorithm, and the max-min distance clustering algorithm were applied to several datasets in experiments. The results show that the improved k-means algorithm selects unique initial cluster centers, reduces the number of iterations in the clustering process, and produces stable clustering results with higher accuracy.
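A minimal sketch of this dispersion-based initial-center selection, interpreting "dispersion" as squared distance from the cluster mean (an assumed reading) and using illustrative 2-D points:

```python
def dispersion_init(points, k):
    """Deterministic initial-center selection: repeatedly split the
    largest cluster by taking its most- and least-dispersed objects
    (farthest from / closest to the cluster mean) as two new centers,
    assigning the rest by nearest center, until k clusters exist."""
    def mean(pts):
        return [sum(c) / len(pts) for c in zip(*pts)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    clusters = [list(points)]
    while len(clusters) < k:
        big = max(clusters, key=len)            # cluster with most objects
        clusters.remove(big)
        m = mean(big)
        far = max(big, key=lambda p: dist2(p, m))   # largest dispersion
        near = min(big, key=lambda p: dist2(p, m))  # smallest dispersion
        c1, c2 = [far], [near]
        for p in big:
            if p is far or p is near:
                continue
            (c1 if dist2(p, far) <= dist2(p, near) else c2).append(p)
        clusters += [c1, c2]
    return [mean(c) for c in clusters]  # feed these to k-means as seeds

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centres = dispersion_init(pts, 2)
```

Because no random draw is involved, the seeds (and hence the k-means result) are reproducible.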

13.
In this study, we propose a set of new algorithms to enhance the effectiveness of classifying the 5-year survivability of breast cancer patients from a massive, imbalanced data set. The proposed classifier algorithms combine the synthetic minority oversampling technique (SMOTE) and particle swarm optimization (PSO) while integrating well-known classifiers such as logistic regression, the C5 decision tree (C5) model, and 1-nearest neighbor search. To justify the effectiveness of this new set of classifiers, the g-mean and accuracy indices are used as performance indexes, and the proposed classifiers are compared with results from the previous literature. Experimental results show that the hybrid algorithm SMOTE + PSO + C5 is the best of all the algorithm combinations for classifying the 5-year survivability of breast cancer patients. We conclude that combining SMOTE with appropriate search algorithms such as PSO and classifiers such as C5 can significantly improve the effectiveness of classification for massive imbalanced data sets.
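The SMOTE oversampling step can be sketched as follows; the toy minority samples and neighbour count are assumptions:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: each synthetic sample is placed at a random
    point on the line segment between a minority sample and one of its
    k nearest minority neighbours, so new samples stay inside the
    minority region instead of being exact duplicates."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(p, base))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new = smote(minority, n_new=5)
```

The balanced set (original majority + minority + synthetic samples) is then handed to the classifier, with PSO tuning its hyperparameters.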

14.
Vehicle routing problems are at the heart of most decision support systems for real-life distribution problems. In the vehicle routing problem, a set of routes must be determined at lowest total cost for a number of resources (i.e., a fleet of vehicles) located at one or several points (e.g., depots, warehouses) in order to efficiently service a number of demand or supply points. In this paper an efficient evolution strategies algorithm is developed for both the capacitated vehicle routing problem and the vehicle routing problem with time window constraints. The algorithm is based on a new multi-parametric mutation procedure applied within the (1+1) evolution strategy. Computational testing on six real-life problems and 195 benchmark problems demonstrates that the suggested algorithm is efficient and highly competitive, improving or matching the current best-known solution in 42% of the test cases.
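The underlying (1+1) evolution strategy is shown below on a continuous toy objective with Gaussian mutation and the classic 1/5 success rule; the paper's variant replaces this mutation with a multi-parametric operator on routes:

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=400, seed=3):
    """(1+1) evolution strategy: one parent, one mutated offspring per
    generation, keep the better of the two. The mutation strength sigma
    adapts via the 1/5 success rule: grow it when more than one fifth of
    recent offspring succeeded, shrink it otherwise."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for t in range(1, iters + 1):
        child = [v + rng.gauss(0, sigma) for v in x]
        fc = f(child)
        if fc <= fx:  # offspring replaces parent only if not worse
            x, fx = child, fc
            successes += 1
        if t % 20 == 0:  # apply the 1/5 rule every 20 generations
            sigma *= 1.5 if successes / 20 > 0.2 else 0.6
            successes = 0
    return x, fx

x, fx = one_plus_one_es(lambda v: sum(u * u for u in v), [3.0, -4.0])
```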

15.
Erasable itemset (EI) mining, a branch of pattern mining, helps managers establish new plans for the development of new products. Since the problem of mining EIs was first proposed in 2009, many efficient algorithms have been developed for it. However, these algorithms usually require a great deal of time and memory. In reality, users only need a small number of EIs that satisfy a particular condition. With this observation in mind, in this study we develop an efficient algorithm for mining EIs with subset and superset itemset constraints (C0 ⊆ X ⊆ C1). First, based on the MEI (Mining Erasable Itemsets) algorithm, we present the MEIC (Mining Erasable Itemsets with subset and superset itemset Constraints) algorithm, in which each EI is checked against the constraints before being added to the results. Next, two propositions supporting quick pruning of nodes that do not satisfy the constraints are established. Based on these, we propose an efficient algorithm for mining EIs with subset and superset itemset constraints, called pMEIC (p: pruning). The experimental results show that pMEIC outperforms MEIC in terms of mining time and memory usage.
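The constraint check, and the flavour of pruning it enables, can be sketched as below. This is a simplification: the paper's two propositions are stated over the EI search tree, and the itemsets here are illustrative:

```python
def satisfies(X, C0, C1):
    """Keep itemset X only if it meets both constraints: C0 ⊆ X ⊆ C1."""
    return C0 <= X <= C1

def can_prune(X, C1):
    """Pruning idea in the spirit of pMEIC, simplified: if X is not
    contained in C1, no superset of X can be either, so the entire
    branch of the search tree rooted at X can be skipped."""
    return not X <= C1

C0, C1 = frozenset("a"), frozenset("abc")
print(satisfies(frozenset("ab"), C0, C1))  # → True
print(can_prune(frozenset("ad"), C1))      # → True: extend no further
```

Checking `can_prune` before expanding a node is what lets pMEIC avoid generating itemsets that MEIC would generate and then discard.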

16.
This paper presents the results of a comparative study whose objective is to identify the most effective and efficient way of applying a local search method embedded in a hybrid algorithm. The hybrid metaheuristic employed in this study is called "DE–HS–HJ" because it comprises two cooperative metaheuristic algorithms, differential evolution (DE) and harmony search (HS), and one local search (LS) method, Hooke and Jeeves (HJ) direct search. Eighteen different ways of using HJ local search were implemented, and all of them were evaluated on 19 problems in terms of six performance indices covering both accuracy and efficiency. Statistical analyses were conducted to determine the significance of the performance differences. The test results show that overall the best three LS application strategies are: applying local search to every generated solution with a specified probability and also to each newly updated solution (NUS + ESP); applying local search to every generated solution with a specified probability (ESP); and applying local search to every generated solution with a specified probability and also to the updated current global best solution (EUGbest + ESP). ESP is found to be the best local search application strategy in terms of success rate, and integrating it with NUS further improves the overall performance. EUGbest + ESP is the most efficient, and it also achieves a high level of accuracy (fourth place in terms of success rate, with an average above 0.9).
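The Hooke and Jeeves direct search used as the LS component can be sketched as follows, on an assumed quadratic test function:

```python
def hooke_jeeves(f, x0, step=1.0, eps=1e-6, shrink=0.5):
    """Hooke and Jeeves direct search: exploratory moves probe each
    coordinate direction; a successful exploration triggers a pattern
    move that jumps further along the improving direction; the step
    size is halved whenever no exploratory move helps."""
    def explore(base, fb, s):
        x, fx = list(base), fb
        for i in range(len(x)):
            for d in (s, -s):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x, fx = list(x0), f(x0)
    while step > eps:
        nx, nfx = explore(x, fx, step)
        if nfx < fx:
            # pattern move: extrapolate through the improvement, then re-explore
            pat = [2 * a - b for a, b in zip(nx, x)]
            px, pfx = explore(pat, f(pat), step)
            x, fx = (px, pfx) if pfx < nfx else (nx, nfx)
        else:
            step *= shrink
    return x, fx

x, fx = hooke_jeeves(lambda v: (v[0] - 2) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

In DE–HS–HJ this routine would be invoked on solutions chosen by the application strategy (ESP, NUS, EUGbest, etc.) rather than run to convergence on its own.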

17.
In this work we present a new thinning scheme for reducing the noise sensitivity of 3D thinning algorithms. It uses iteration-by-iteration smoothing that removes some border points considered to be extremities. The proposed smoothing algorithm is composed of two parallel topology-preserving reduction operators. An efficient implementation of our algorithm is sketched, and its topological correctness for (26, 6) pictures is proved.

18.
The human liver is one of the major organs in the body, and liver disease can cause many problems in human life. Fast and accurate prediction of liver disease allows early and effective treatment. In this regard, various data mining techniques help in better prediction of this disease. Because of the importance of liver disease and the increasing number of people who suffer from it, we studied liver disease using two well-known methods in the data mining area.
In this paper, novel decision tree based algorithms are used, which leads to considering more factors in general and to predictions with higher accuracy than other studies of liver disease. In this application, 583 instances of the liver disease dataset from the UCI repository are considered. This dataset consists of 416 records of liver disease and 167 records of healthy livers. The dataset is analyzed by two algorithms, named Boosted C5.0 and CHAID. Until now there has been no work in the literature that uses Boosted C5.0 and CHAID for creating rules for liver disease. Our results show that in both algorithms the DB, ALB, SGPT, TB, and A/G factors have a significant impact on predicting liver disease; according to the rules generated by both algorithms, the important ranges are DB = [10.900–1.200], ALB = [4.00–4.300], SGPT = [34–37], TB = [0.600–1.200] (by Boosted C5.0), and A/G = [1.180–1.390]. In addition, in the Boosted C5.0 algorithm, Alkphos, SGOT, and Age have a significant impact on the prediction of liver disease. Comparing the performance of these algorithms, it becomes clear that the C5.0 algorithm with the boosting technique has an accuracy of 93.75%, a better performance than the CHAID algorithm at 65.00%. Another important achievement of this paper concerns the ability of both algorithms to produce rules in one class for liver disease.
The results of our assessment show that the Boosted C5.0 and CHAID algorithms are capable of producing rules for liver disease. Our results also show that Boosted C5.0 considers gender in liver disease, a factor which is missing in many other studies. Meanwhile, using the rules generated by the Boosted C5.0 algorithm, we obtained the important result that females are less susceptible to liver disease than males, a factor missing in other studies of liver disease. Therefore, our proposed computer-aided diagnostic methods, as an expert and intelligent system, have an impressive impact on liver disease detection. Based on the obtained results, we observed that our model performed better than existing methods in the literature.

19.
We address the problem of determining a complete set of extreme supported efficient solutions of biobjective minimum cost flow (BMCF) problems. A novel method improving the classical parametric approach for this biobjective problem is proposed. The algorithm runs in O(Nn(m + n log n)) time, determining all extreme supported non-dominated points in the outcome space and one extreme supported efficient solution associated with each of them. Here n is the number of nodes, m is the number of arcs, and N is the number of extreme supported non-dominated points in the outcome space of the BMCF problem. The memory space required by the algorithm is O(n + m) when the extreme supported efficient solutions are not required to be stored in RAM; otherwise, the algorithm requires O(N + m) space. Extensive computational experiments comparing the performance of the proposed method and a standard parametric network simplex method are presented.

20.
Psychiatric patients often require continuous monitoring to keep them out of dangerous situations. Accordingly, hospitals hire additional staff to monitor patients' vital signs, maintain patient safety, and ensure that patients do not leave the hospital without notice. However, ward staff have difficulty knowing when a psychiatric patient is stepping into a potential danger zone or encountering a safety threat. This paper reports the development of a wireless monitoring system to improve patient safety in psychiatric wards and reduce avoidable risks. The proposed system can ease the workload of nurses, help locate patients, and monitor patients' heartbeats. A two-step clustering localization algorithm is proposed for tracking patients' locations. This study marks the first time that heartbeat detection on a ZigBee-based platform with a localization function has been proposed. A proof-of-concept system is developed to understand the current hardware challenges and to enable functional analysis of the proposed ZigBee-based patient localization system. The error distance of the proposed localization algorithm is approximately 1 m, and its location accuracy is 90% with an error distance of up to 3 m. The proposed system is expected to improve patient safety significantly in psychiatric wards at low cost.
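A fingerprint-style nearest-centroid localization step, a simplified stand-in for the paper's two-step clustering algorithm, might look like the following; the zone names and RSSI centroids are invented for illustration:

```python
def locate(rssi, fingerprints):
    """Sketch of fingerprint localization: offline, reference RSSI
    vectors are clustered per zone (roughly the first step of a
    clustering localization scheme); online, a live reading is matched
    to the zone with the nearest centroid. The refinement step of the
    paper's two-step algorithm is omitted here."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(fingerprints, key=lambda zone: dist2(rssi, fingerprints[zone]))

# assumed centroids of RSSI readings (dBm) from three fixed ZigBee nodes
fingerprints = {"ward A": (-40, -70, -80),
                "corridor": (-60, -55, -75),
                "ward B": (-80, -72, -45)}
print(locate((-42, -68, -79), fingerprints))  # → ward A
```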
