Similar Literature
20 similar documents found.
1.
This study presents a novel version of the Visualization Induced Self-Organizing Map (ViSOM) based on a new fusion algorithm for summarizing the results of an ensemble of topology-preserving mapping models. The algorithm, referred to as Weighted Voting Superposition (WeVoS), is designed above all to preserve the topology of the map, so as to obtain the most accurate possible visualization of the data sets under study. To do so, a weighted voting process takes place between the units of the maps in the ensemble to determine the characteristics of the units of the resulting map. Several quality measures are applied to this novel neural architecture, WeVoS-ViSOM, and the results are analyzed to give a thorough study of its capabilities. The model is also compared with the well-known SOM and its fused counterpart WeVoS-SOM, as well as with two previously devised fusion algorithms, Fusion by Euclidean Distance and Fusion by Voronoi Polygon Similarity, using the same quality measures. All summarization methods were applied to three widely used data sets from the UCI Repository. A rigorous performance analysis clearly demonstrates that the novel fusion algorithm outperforms the other single and summarization methods in terms of data set visualization.
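The abstract describes WeVoS only at a high level. As an illustrative sketch (the function name and the vote-weighting scheme are assumptions, not the paper's exact procedure), homologous units of the ensemble maps could be fused by a vote-weighted average, which preserves the grid topology by construction:

```python
import numpy as np

def wevos_fuse(maps, votes):
    """Fuse homologous units of an ensemble of topology-preserving maps.

    maps  : list of (rows, cols, dim) arrays -- unit weight vectors
    votes : list of (rows, cols) arrays      -- per-unit vote counts
    Each fused unit is the vote-weighted average of the units at the
    same grid position, so the map topology is preserved by construction.
    """
    m = np.stack(maps)                    # (n_maps, rows, cols, dim)
    v = np.stack(votes).astype(float)     # (n_maps, rows, cols)
    w = v / v.sum(axis=0, keepdims=True)  # normalise votes per unit
    return (m * w[..., None]).sum(axis=0)

# toy ensemble: two 2x2 maps of 3-D units with unequal vote counts
m1, m2 = np.zeros((2, 2, 3)), np.ones((2, 2, 3))
v1, v2 = np.full((2, 2), 1.0), np.full((2, 2), 3.0)
fused = wevos_fuse([m1, m2], [v1, v2])    # every entry 0.75
```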

2.
The dissipative Lozi chaotic map is embedded in the discrete self-organising migrating algorithm (DSOMA) as a pseudorandom generator. This novel chaos-based algorithm is applied to the constraint-based lot-streaming flowshop scheduling problem. Two new data sets, generated using the Lozi and Delayed Logistic maps, are used to compare the chaos-embedded DSOMA with the generic DSOMA that uses the venerable Mersenne Twister. In total, 100 data sets were tested by the two algorithms, for both the idling and non-idling cases. The results show that the chaos variant significantly improves on the performance of the generic DSOMA.
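The abstract does not give the generator's details; a minimal sketch of using the Lozi map as a pseudorandom source (standard chaotic parameters a = 1.7, b = 0.5; the folding of the state into [0, 1) is an assumption):

```python
def lozi_prng(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Draw n pseudorandom numbers in [0, 1) from the Lozi map
    x' = 1 - a*|x| + y,  y' = b*x  (chaotic for a=1.7, b=0.5)."""
    out = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        out.append(abs(x) % 1.0)   # fold the chaotic state into [0, 1)
    return out

sample = lozi_prng(5)
```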

3.
Convolutional Neural Network (CNN) based multi-focus image fusion methods have recently attracted enormous attention. They greatly improve the constructed decision map compared with previous state-of-the-art methods operating in the spatial and transform domains. Nevertheless, these methods do not produce a satisfactory initial decision map and must undergo extensive post-processing to reach an acceptable one. In this paper, a novel CNN-based method built on ensemble learning is proposed. It is more reasonable to use several models and datasets than just one: ensemble methods pursue diversity among models and datasets in order to reduce overfitting on the training data, and an ensemble of CNNs generally outperforms a single CNN. The proposed method also introduces a new, simple type of multi-focus image dataset: it merely changes the arrangement of the patches of existing multi-focus datasets, which proves very useful for improving accuracy. With this arrangement, three datasets (the original patches and their gradients in the vertical and horizontal directions) are generated from the COCO dataset. The proposed network therefore trains three CNN models on the three created datasets and combines them to construct the initial segmented decision map. These ideas greatly improve the initial decision map, making it similar to, or even better than, the final decision maps that other CNN-based methods obtain only after applying many post-processing algorithms. Many real multi-focus test images are used in the experiments, and the results are compared by quantitative and qualitative criteria. The experimental results indicate that the proposed network is more accurate and produces a better decision map, without post-processing, than existing state-of-the-art multi-focus fusion methods that rely on extensive post-processing.
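As a hedged illustration of the ensemble idea (not the paper's actual network or combination rule), binary decision maps produced by several models can be combined by a pixel-wise majority vote:

```python
import numpy as np

def majority_decision(maps):
    """Pixel-wise majority vote over binary focus decision maps
    (1 = in focus, 0 = out of focus)."""
    return (np.stack(maps).mean(axis=0) >= 0.5).astype(np.uint8)

# decision maps from three hypothetical models
d1 = np.array([[1, 0], [1, 1]])
d2 = np.array([[1, 0], [0, 1]])
d3 = np.array([[0, 0], [1, 0]])
combined = majority_decision([d1, d2, d3])   # [[1, 0], [1, 1]]
```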

4.
In this paper, a new algorithm named the polar self-organizing map (PolSOM) is proposed. PolSOM is constructed on a 2-D polar map with two variables, radius and angle, which represent data weight and feature, respectively. Compared with traditional algorithms that project data onto a Cartesian map using Euclidean distance as the only variable, PolSOM not only preserves the data topology and the inter-neuron distances but also visualizes the differences among clusters in terms of weight and feature. In PolSOM, the visualization map is divided into tori and circular sectors by radial and angular coordinates, and neurons are set on the boundary intersections of circular sectors and tori as benchmarks to attract data with similar attributes. Each datum is projected onto the map with polar coordinates that are trained towards the winning neuron. As a result, similar data group together, and data characteristics are reflected by their positions on the map. Simulations and comparisons with Sammon's mapping, SOM and ViSOM are provided on four data sets. The results demonstrate the effectiveness of the PolSOM algorithm for multidimensional data visualization.
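A minimal sketch of the benchmark layout described above (the neuron counts and maximum radius are illustrative assumptions): neurons sit at the intersections of the tori and the circular-sector boundaries:

```python
import math

def polsom_benchmarks(n_tori=3, n_sectors=8, r_max=1.0):
    """Place benchmark neurons at the intersections of tori (radial
    rings) and circular-sector boundaries; returns (radius, angle)
    pairs in polar coordinates."""
    return [(r_max * i / n_tori, 2 * math.pi * j / n_sectors)
            for i in range(1, n_tori + 1)
            for j in range(n_sectors)]

grid = polsom_benchmarks()   # 3 tori x 8 sectors = 24 neurons
```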

5.
Shadow mapping has been the subject of extensive investigation, but previous shadow map algorithms usually cannot generate high-quality shadows with a small memory footprint. In this paper, we present compressed shadow maps as a solution to this problem. A compressed shadow map reduces memory consumption by representing lit surfaces with the endpoints of intermediate line segments instead of conventional array-based pixel structures. Compressed shadow maps are discretized only in the vertical direction, while the horizontal direction is represented with floating-point accuracy. The compression also helps with shadow-map self-shadowing problems. We compare our algorithm against the most popular shadow map algorithms and show, on average, an order-of-magnitude improvement in storage requirements in our test scenes. The algorithm is simple to implement, can be added easily to existing software renderers, and lets us use graphics hardware for shadow visualization.
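The paper stores lit surfaces as line segments with floating-point endpoints; the following deliberately simplified constant-depth run-length sketch (not the paper's piecewise-linear segments) only illustrates why storing endpoints beats a per-texel array:

```python
def compress_row(depths, tol=1e-3):
    """Compress one shadow-map row into (start_index, depth) runs,
    starting a new run whenever the depth jumps by more than tol.
    Seven texels collapse to two run records here."""
    runs = []
    for i, d in enumerate(depths):
        if not runs or abs(d - runs[-1][1]) > tol:
            runs.append((i, d))
    return runs

row = [0.50, 0.50, 0.50, 0.80, 0.80, 0.80, 0.80]
compressed = compress_row(row)   # [(0, 0.5), (3, 0.8)]
```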

6.
For monocular-vision object detection, this paper proposes H_SFPN, a single-stage deep-learning algorithm. Compared with the existing YOLOv3 and CenterNet algorithms, it effectively improves small-object detection accuracy while preserving real-time performance. First, a new backbone is designed that extracts feature maps through an improved Hourglass network, so as to fully exploit both the high resolution of low-level features and the rich semantics of high-level features. Then, an SFPN-based weighted feature-map fusion method is proposed for the feature-fusion stage. Finally, H_SFPN improves the loss functions for object position and size, which effectively reduces training error and speeds up convergence. Experimental results on the MS COCO dataset show that H_SFPN clearly outperforms mainstream deep-learning detectors such as Faster R-CNN, YOLOv3 and EfficientDet; its small-object metric APs is the highest, reaching 32.7.
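The SFPN-style weighted fusion mentioned above can be sketched as a normalised weighted sum of feature maps (the weights here are fixed for illustration; in the paper they would be learned, and the maps resized to a common shape beforehand):

```python
import numpy as np

def weighted_feature_fusion(features, weights):
    """Fuse feature maps (already resized to one shape) by a
    normalised weighted sum."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # weights sum to 1
    return sum(wi * f for wi, f in zip(w, features))

f_low  = np.full((2, 2), 2.0)   # stand-in: high-resolution, low-level features
f_high = np.full((2, 2), 4.0)   # stand-in: low-resolution, high-level semantics
fused = weighted_feature_fusion([f_low, f_high], [1.0, 3.0])  # all entries 3.5
```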

7.
High-dimensional data visualization is one of the main tasks in data mining and pattern recognition. The self-organizing map (SOM) is a topology-visualizing tool containing a set of neurons that gradually adapt to the input data space by competitive learning and form clusters. The topology preservation of the SOM depends strongly on the learning process; because of this limitation, convergence cannot be guaranteed on data sets with clusters of arbitrary shape. In this paper, we introduce the Constrained SOM (CSOM), a new version of the SOM with a modified learning algorithm. The idea is to introduce an adaptive constraint parameter into the learning process to improve the topology preservation and mapping quality of the basic SOM. The computational complexity of the CSOM is lower than that of the SOM. The proposed algorithm is compared with similar topology-preserving algorithms, and numerical results on eight small to large real-world data sets demonstrate its efficiency.

8.
Ke, Minlong, Fernanda L., Xin. Neurocomputing, 2009, 72(13-15): 2796.
Negative correlation learning (NCL) is a successful approach to constructing neural network ensembles. In batch learning mode, NCL outperforms many other ensemble learning approaches. Recently, NCL has also been shown to be a potentially powerful approach to incremental learning, although its advantages there have not yet been fully exploited. In this paper, we propose a selective NCL (SNCL) algorithm for incremental learning. Concretely, every time a new training data set is presented, the previously trained neural network ensemble is cloned and the clone is trained on the new data. The new ensemble is then combined with the previous one, and a selection process prunes the whole ensemble to a fixed size. This paper is an extended version of our preliminary paper on SNCL; compared to that work, it presents a deeper investigation of SNCL, considering different objective functions for the selection process and comparing SNCL with other NCL-based incremental learning algorithms on two further real-world bioinformatics data sets. Experimental results demonstrate the advantage of SNCL. Comparisons between SNCL and other existing incremental learning algorithms, such as Learn++ and ARTMAP, are also presented.
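The selection step is described only abstractly; one simple objective (an assumption for illustration, not necessarily one of the paper's objective functions) is greedy forward selection of the members that minimise the ensemble's squared error:

```python
import numpy as np

def prune_ensemble(preds, y, k):
    """Greedy forward selection: keep the k members whose averaged
    prediction has the lowest squared error on targets y.
    preds : (n_members, n_samples) array of member outputs."""
    chosen, remaining = [], list(range(len(preds)))
    while len(chosen) < k:
        errs = {i: np.mean((np.mean(preds[chosen + [i]], axis=0) - y) ** 2)
                for i in remaining}
        best = min(errs, key=errs.get)
        chosen.append(best)
        remaining.remove(best)
    return chosen

preds = np.array([[1.0, 1.0],    # member 0: perfect
                  [0.0, 0.0],    # member 1: always wrong
                  [1.0, 0.0]])   # member 2: half right
kept = prune_ensemble(preds, np.array([1.0, 1.0]), k=1)   # [0]
```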

9.
The mapping quality of self-organising maps (SOMs) is sensitive to the map topology and to the initialisation of the neurons. In this article, in order to improve the convergence of the SOM, we introduce a neuron initialisation algorithm based on splitting and merging clusters. The initialisation speeds up the learning process on large high-dimensional data sets. We also develop a topology based on this initialisation to optimise the vector quantisation error and topology preservation of the SOM. Such an approach yields more accurate data visualisation and, consequently, clustering. Numerical results on eight small-to-large real-world data sets demonstrate the performance of the proposed algorithm in terms of vector quantisation, topology preservation and CPU time.

10.
High-throughput experiments have become more and more prevalent in biomedical research, and the resulting high-dimensional data bring new challenges. Effective data reduction, summarization and visualization are key to initial exploration in data mining. In this paper, we introduce a visualization tool, the quantile map, to present the information contained in a probabilistic distribution, and demonstrate its use as an effective visual analysis tool on a tandem mass spectrometry data set. Quantile information is presented in gradient colors by concentric doughnuts, whose width is proportional to the Fisher information of the distribution so as to give an unbiased visual impression. A parametric empirical Bayes (PEB) approach is shown to improve on the simple maximum likelihood estimate (MLE) when estimating the Fisher information. In the motivating tandem mass spectrometry example, multiple probabilistic distributions are displayed on a two-dimensional grid. Hierarchical clustering to reorder rows and columns, and a gradient color selection from a Hue-Chroma-Luminance model similar to that commonly used in microarray heatmaps, are adopted to improve the visualization. Both simulations and the motivating example show the superior performance of the quantile map in summarizing and visualizing such high-throughput data sets.

11.
Navigation in a GPS-denied environment is an essential requirement for increased robot autonomy. While this is in some sense solved for a single robot, the next challenge is to design algorithms that let a team of robots map and navigate efficiently. The key requirement for this team autonomy is a collaborative ability to accurately map an environment, a problem referred to as cooperative simultaneous localization and mapping (SLAM). In this research, the mapping process is extended to multiple robots with a novel occupancy grid map fusion algorithm. Map fusion is achieved by transforming the individual maps into the Hough space, where they are represented in an abstract form. Properties of the Hough transform are used to find the common regions in the maps, which are then used to calculate the unknown transformation between them. Results are shown from tests on benchmark datasets and from real-world experiments with multiple robotic platforms.
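A minimal sketch of one ingredient of such Hough-space alignment (an assumption about the mechanism, not the paper's full algorithm): the relative rotation between two maps can be estimated by circularly cross-correlating their angle spectra:

```python
import numpy as np

def estimate_rotation_bins(spec_a, spec_b):
    """Return the circular shift (in angle-histogram bins) that best
    aligns spectrum b with spectrum a, via cross-correlation."""
    scores = [float(np.dot(spec_a, np.roll(spec_b, s)))
              for s in range(len(spec_a))]
    return int(np.argmax(scores))

a = np.array([0.0, 0.0, 1.0, 0.0])    # dominant wall direction in bin 2
b = np.array([1.0, 0.0, 0.0, 0.0])    # same map, rotated to bin 0
shift = estimate_rotation_bins(a, b)  # 2 bins of rotation
```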

12.
This paper introduces a self-organizing map dedicated to clustering, analysis and visualization of categorical data. Usually, when dealing with categorical data, topological maps use an encoding stage: categorical data are converted into numerical vectors and traditional numerical algorithms (SOM) are run. In this paper, we propose a novel probabilistic formalism of the Kohonen map dedicated to categorical data, in which neurons are represented by probability tables; no encoding of the variables is needed. We evaluate the effectiveness of the model in four examples using real data. Our experiments show that the model gives good-quality results when dealing with categorical data.
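With neurons represented by probability tables, the best matching unit can be chosen by categorical log-likelihood rather than Euclidean distance; a minimal sketch (the data structure is assumed for illustration, not the paper's exact formalism):

```python
import math

def best_matching_unit(neurons, datum):
    """Each neuron is a list of per-variable probability tables (dicts
    mapping category -> probability); the BMU maximises the datum's
    categorical log-likelihood."""
    def loglik(neuron):
        return sum(math.log(table[value])
                   for table, value in zip(neuron, datum))
    return max(range(len(neurons)), key=lambda i: loglik(neurons[i]))

neurons = [[{'red': 0.9, 'blue': 0.1}],   # neuron 0 favours 'red'
           [{'red': 0.1, 'blue': 0.9}]]   # neuron 1 favours 'blue'
bmu = best_matching_unit(neurons, ['red'])   # 0
```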

13.
As a branch of hybrid intelligent systems, a multiple classifier system integrates a diverse set of classifiers so that the whole achieves better classification performance. Result fusion is an important problem in this field: with the same member classifiers, a good fusion strategy can effectively raise the overall classification accuracy. As model security receives growing attention, the poor interpretability of traditional fusion strategies has become a prominent problem. Based on the knowledge-line memory theory from psychology and modelled on the human decision process, this paper proposes a ...

14.
Chaotic maps play a vital role in modern cryptography, and algorithms constructed from them are considered effective and secure. In this work, a novel technique for image encryption using a 3D mixed chaotic map is presented. In the proposed scheme, the 3D mixed chaotic map transforms pixel positions by permuting them row-wise and column-wise and by mixing, with the aim of meeting the requirements of secure image transfer. We subject the proposed scheme to existing cryptanalysis and compare its results with those of other algorithms from the literature. The proposed scheme is more efficacious than the other algorithms, and the outcomes of the security, statistical and differential analyses validate that it is well suited to secure image transmission.
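The 3D mixed map itself is not specified in the abstract; the standard permutation trick such schemes rely on can be sketched with a logistic map (substituted here purely for illustration): sort a chaotic trajectory and use the argsort as an invertible pixel permutation:

```python
def chaotic_permutation(n, r=3.99, x0=0.3):
    """Derive an index permutation from a logistic-map trajectory;
    the argsort of the chaotic sequence scrambles pixel positions
    and is invertible for decryption (given the same key r, x0)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return sorted(range(n), key=lambda i: seq[i])

perm = chaotic_permutation(8)
```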

15.
Li Bo, Wang Juan, Qin Zheng, Li Aiguo. Computer Engineering, 2006, 32(24): 269-271.
Multi-resolution image fusion is a research hotspot in image fusion technology with broad application prospects. To ease the development of multi-resolution image fusion simulation systems and the comparative validation of new fusion algorithms, this paper presents an open application and development platform for multi-resolution image fusion. The platform adopts a new, general simulation model for multi-resolution image fusion, gives a detailed modular design, and implements 12 typical multi-resolution fusion algorithms. Engineering simulation systems can be built on top of the platform, and new algorithms can easily be integrated for comparative fusion experiments. Finally, to validate the platform's functionality, an engineering application example and a comparative validation of a new fusion algorithm are presented.

16.
An ensemble is a group of learners that work together as a committee to solve a problem. Existing ensemble learning algorithms often generate unnecessarily large ensembles, which consume extra computational resources and may degrade generalization performance. Ensemble pruning algorithms aim to find a good subset of ensemble members to constitute a small ensemble that saves computational resources while performing as well as, or better than, the unpruned ensemble. This paper introduces a probabilistic ensemble pruning algorithm that chooses a set of "sparse" combination weights, most of which are zero. To obtain sparse weights and satisfy the nonnegativity constraint on the combination weights, a left-truncated nonnegative Gaussian prior is adopted over every weight, and the expectation propagation (EP) algorithm is employed to approximate the posterior of the weight vector. The leave-one-out (LOO) error is obtained as a by-product of EP training without extra computation and is a good indicator of generalization error, so it is used together with the Bayesian evidence for model selection. An empirical study on several regression and classification benchmark data sets shows that our algorithm uses far fewer component learners while performing as well as, or better than, the unpruned ensemble; the results are very competitive with other ensemble pruning algorithms.
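A hedged sketch of the end product of such pruning (the EP inference itself is omitted; the threshold is an illustrative assumption): nonnegative combination weights with near-zero entries removed, then renormalised:

```python
import numpy as np

def sparsify_weights(w, eps=0.05):
    """Enforce nonnegativity, zero out near-zero combination weights,
    and renormalise; members with zero weight are pruned from the
    ensemble."""
    w = np.clip(np.asarray(w, dtype=float), 0.0, None)
    w[w < eps] = 0.0
    return w / w.sum()

weights = sparsify_weights([0.50, 0.01, 0.49])
kept = [i for i, wi in enumerate(weights) if wi > 0]   # members 0 and 2
```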

17.
Process mining enables organizations to analyze data about their (business) processes. Visualization is key to gaining insight into these processes and the associated data. Process visualization requires a high-quality graph layout that intuitively represents the semantics of the process; process analysis additionally requires interactive filtering to explore the process data and process graph. The ideal process visualization therefore provides a high-quality, intuitive layout and preserves the user's mental map during visual exploration. The current industry standard for process visualization satisfies neither requirement. In this paper, we propose a novel layout algorithm for processes based on the Sugiyama framework. Our approach consists of novel ranking and order-constraint algorithms and a novel crossing minimization algorithm, all of which use the process data to compute stable, high-quality layouts. In addition, we use phased animation to further improve mental map preservation. Quantitative and qualitative evaluations show that our approach computes layouts of higher quality and preserves the mental map better than the industry standard, and is substantially faster, especially for graphs with more than 250 edges.
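The paper's crossing minimiser is novel, but the classic Sugiyama-framework heuristic such minimisers build on is the barycentre sweep; a minimal sketch (names and data layout are illustrative assumptions):

```python
def barycenter_sweep(layer, neighbors, pos):
    """Reorder one layer by the mean position of each node's
    neighbours in the adjacent, fixed layer, which tends to reduce
    edge crossings.

    layer     : node ids to reorder
    neighbors : node id -> list of neighbour ids in the fixed layer
    pos       : neighbour id -> horizontal position in the fixed layer
    """
    def bary(v):
        ns = neighbors[v]
        return sum(pos[u] for u in ns) / len(ns) if ns else 0.0
    return sorted(layer, key=bary)

# 'a' connects far right, 'b' far left: swapping them removes a crossing
order = barycenter_sweep(['a', 'b'],
                         {'a': ['x2'], 'b': ['x0']},
                         {'x0': 0, 'x2': 2})   # ['b', 'a']
```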

18.
The naive Bayes model has proven to be simple yet effective and is very popular for pattern recognition applications such as data classification and clustering. This paper explores the possibility of using this model for multidimensional data visualization. To achieve this, a new learning algorithm called the naive Bayes self-organizing map (NBSOM) is proposed to enable the naive Bayes model to perform topographic mappings. Training is carried out by an online expectation-maximization algorithm with a self-organizing principle. The proposed method is compared with principal component analysis, self-organizing maps, and generative topographic mapping on two benchmark data sets and a real-world image processing application. Overall, the results show the effectiveness of NBSOM for multidimensional data visualization.

19.
Sketching is a natural and easy way for humans to express visual information in everyday life. Despite a number of approaches to understanding online sketch maps, the automatic understanding of offline, hand-drawn sketch maps still poses a problem. This paper presents a new approach to sketch map understanding; to our knowledge, it is the first comprehensive work dealing with this task in an offline way. The paper presents a system for automatic understanding of sketch maps and the underlying algorithms for all steps: a region-growing segmentation for sketch map objects, a classification for isolated objects, and a context-aware classification that uses probabilistic relaxation labeling to integrate dependencies between objects into the recognition. We show how these algorithms deal with the major problems of sketch map understanding, such as vagueness of interpretation. Our experiments demonstrate the importance of context-aware classification for sketch map understanding. In addition, a new database of annotated sketch maps was developed and is made publicly available; it can be used for training and evaluating sketch map understanding algorithms.
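A minimal sketch of the region-growing segmentation step (4-connectivity and an equal-value criterion are assumptions; the paper's growing criterion for sketch strokes will differ):

```python
from collections import deque

def region_grow(grid, seed):
    """Grow a 4-connected region from seed over cells that share the
    seed's value; returns the set of member coordinates."""
    rows, cols = len(grid), len(grid[0])
    target = grid[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region and grid[nr][nc] == target):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

grid = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]          # the 1 at (2, 2) is a separate component
blob = region_grow(grid, (0, 0))   # {(0, 0), (0, 1), (1, 1)}
```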

20.
A New SOM-Based Data Visualization Algorithm
The topology-preserving property of the SOM (self-organizing map) allows it to give low-dimensional presentations of high-dimensional data, but because the distance information between data points is lost when they are mapped onto the fixed, ordered neurons of the low-dimensional space, the structure of the data is usually distorted. To present the data structure more faithfully, a new SOM-based data visualization algorithm, DPSOM (distance-preserving SOM), is proposed. It adaptively adjusts the positions of the neurons according to the corresponding distance information, thereby visualizing the distances between data points directly. In particular, the algorithm automatically avoids excessive contraction of the neurons, which greatly improves its controllability and the quality of the data visualization.
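A hedged sketch of a single distance-preserving adjustment (the update rule is an illustration in the spirit of DPSOM, not the paper's exact formula): two neuron positions are nudged so that their map distance approaches the corresponding input-space distance:

```python
import math

def dp_adjust(p, q, target, lr=0.1):
    """Move 2-D positions p and q apart or together so that their
    distance moves toward target; returns the new positions."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(dx, dy) or 1e-9
    f = lr * (target - d) / d       # positive -> push apart
    return ((p[0] - f * dx, p[1] - f * dy),
            (q[0] + f * dx, q[1] + f * dy))

p, q = (0.0, 0.0), (1.0, 0.0)
for _ in range(100):                # iterate the step until convergence
    p, q = dp_adjust(p, q, target=2.0)
final_d = math.hypot(q[0] - p[0], q[1] - p[1])   # converges to ~2.0
```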

