Similar Literature
20 similar records found.
1.
Artificial surfaces are one of the key land cover types, and validation is an indispensable component of land cover mapping that ensures data quality. Traditionally, validation has been carried out by comparing the produced land cover map with reference data collected through field surveys or image interpretation. However, this approach has limitations, including high monetary and time costs. Recently, geo-tagged photos from social media have been used as reference data. This procedure has lower costs, but interpreting geo-tagged photos is still time-consuming. In fact, social media point of interest (POI) data, including geo-tagged photos, may contain textual information useful for land cover validation; however, such textual data have seldom been analysed or used to support it. This paper examines the potential of textual information from social media POIs as a new reference source for validating artificial surfaces without photo recognition, and proposes a validation framework using modified decision trees. First, POI datasets are classified semantically to map POIs onto the standard taxonomy of land cover maps. Then, a decision tree model is built and trained to classify POIs automatically. To eliminate the effects of spatial heterogeneity on POI classification, the shortest distances between each POI and both roads and villages serve as two factors in the modified decision tree model. Finally, a data transformation based on a majority vote algorithm converts the classified points into raster form so that confusion matrix methods can be applied to the land cover map. Using Beijing as a study area, social media POIs from Sina Weibo were collected to validate artificial surfaces in GlobeLand30 in 2010. A classification accuracy of 80.68% was achieved with our modified decision tree method, 10 percentage points higher than a classification method that ignores spatial heterogeneity. This result indicates that our modified decision tree method classifies POIs with high spatial heterogeneity well. In addition, a high validation accuracy of 92.76% was achieved, relatively close to the official result of 86.7%. These preliminary results indicate that social media POI datasets are valuable ancillary data for land cover validation, and our proposed validation framework enables land cover validation at low monetary and time cost.
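A minimal, self-contained sketch of the two-stage pipeline described above, with synthetic data in place of Weibo POIs; the feature columns, 30 m cell size, and toy labels are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
# Per-POI features: text-derived scores plus the two distance factors
# (shortest distance to a road and to a village) used against spatial
# heterogeneity. All values here are synthetic stand-ins.
text_scores = rng.random((n, 3))
dist_road = rng.exponential(200.0, n)      # metres, synthetic
dist_village = rng.exponential(500.0, n)   # metres, synthetic
X = np.column_stack([text_scores, dist_road, dist_village])
y = (text_scores[:, 0] + (dist_road < 150) * 0.5 > 0.8).astype(int)  # toy labels

clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X[:800], y[:800])
pred = clf.predict(X)

# Majority vote per raster cell: each cell takes the most common predicted
# class among the POIs falling inside it, yielding a raster that can be
# compared with the land cover map via a confusion matrix.
xy = rng.uniform(0, 3000, (n, 2))  # synthetic coordinates in metres
def rasterize(points, classes, cell=30.0):
    grid = {}
    for (px, py), c in zip(points, classes):
        grid.setdefault((int(px // cell), int(py // cell)), []).append(c)
    return {k: Counter(v).most_common(1)[0][0] for k, v in grid.items()}

raster = rasterize(xy, pred)
```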

2.

Saliency prediction models provide a probabilistic map of how likely each image or video region is to attract the attention of the human visual system. Over the past decade, many computational saliency prediction models have been proposed for 2D images and videos. Considering that the human visual system evolved in a natural 3D environment, it is only natural to design visual attention models for 3D content. Existing monocular saliency models cannot accurately predict the attentive regions when applied to 3D image/video content, as they do not incorporate depth information. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes, such as brightness, color, texture, orientation, motion, and depth, and high-level cues, such as face, person, vehicle, animal, text, and horizon. Our model starts with a rough segmentation and quantifies several intuitive observations, such as the effects of visual discomfort level, depth abruptness, motion acceleration, elements of surprise, the size and compactness of the salient regions, and the tendency to emphasize only a few salient objects in a scene. A new fovea-based model of spatial distance between image regions is adopted for local and global feature calculations. To efficiently fuse the conspicuity maps generated by our method into a single saliency map that is highly correlated with the eye-fixation data, a random forest based algorithm is utilized. The performance of the proposed saliency model is evaluated against the results of an eye-tracking experiment involving 24 subjects and an in-house database of 61 captured stereoscopic videos. Our stereo video database and the eye-tracking data are publicly available along with this paper. Experimental results show that the proposed saliency prediction method achieves competitive performance compared to state-of-the-art approaches.
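The fusion step lends itself to a compact sketch: a random forest regressor maps per-pixel conspicuity values (low- and high-level channels) to a single saliency value, trained against a stand-in for smoothed eye-fixation density. The channel count and all data below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
h, w, n_channels = 60, 80, 6           # e.g. brightness, color, motion, depth, face, text
conspicuity = rng.random((h, w, n_channels))
fixation_density = rng.random((h, w))  # stand-in for smoothed eye-tracking data

# One training sample per pixel: the vector of channel responses.
X = conspicuity.reshape(-1, n_channels)
y = fixation_density.ravel()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
saliency = model.predict(X).reshape(h, w)  # fused single saliency map
```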


3.
Clustering aims to partition a data set into homogeneous groups that gather similar objects. Object similarity, or more often object dissimilarity, is usually expressed in terms of some distance function. This approach, however, is not viable when dissimilarity is conceptual rather than metric. In this paper, we propose to extract the dissimilarity relation directly from the available data. To this aim, we train a feedforward neural network on pairs of points with known dissimilarity. Then, we use the dissimilarity measure generated by the network to guide a new unsupervised fuzzy relational clustering algorithm. An artificial data set and a real data set are used to show how the clustering algorithm based on the neural dissimilarity outperforms some widely used (possibly partially supervised) clustering algorithms based on spatial dissimilarity.
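A minimal sketch of the first stage on synthetic data: a feedforward network (here scikit-learn's MLPRegressor, an assumption about the architecture) is trained on concatenated pairs with a toy "known" dissimilarity, and the learned measure is then available to guide a relational clustering algorithm:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
a = rng.random((500, 4))
b = rng.random((500, 4))
target = np.abs(a - b).sum(axis=1)  # toy "known" dissimilarity for each pair

pairs = np.hstack([a, b])           # each input is the concatenated pair
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(pairs, target)

def dissimilarity(x, y):
    """Learned dissimilarity between two points, as used to guide clustering."""
    return float(net.predict(np.hstack([x, y]).reshape(1, -1))[0])
```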

4.
Swathi T., Kasiviswanath N., Rao A. Ananda. Applied Intelligence, 2022, 52(12): 13675-13688
Stock price prediction is one of the hot research topics in financial engineering, influenced by economic, social, and political factors. In the present stock market, the...

5.
Multimedia Tools and Applications - Multi-target regression (MTR) is a challenging research problem which aims to predict more than one continuous variable as output in a pattern. In recent time, a...

6.
User Modeling and User-Adapted Interaction - This paper describes an exploratory investigation into the feasibility of predictive analytics of user behavioral data as a possible aid in developing...

7.
Present and future high-speed networks are expected to support a wide variety of real-time applications. However, the current Internet architecture mainly offers best-effort service: the network does its best to deliver data to the destination but gives no guarantee. Future integrated services networks, by contrast, must guarantee the transfer of heterogeneous data. Quality of Service (QoS) is a set of service requirements to be met by the network while transporting a flow, and many parameters are involved in improving it. In this paper, we consider four primary parameters, reliability, delay, jitter, and bandwidth, which together determine QoS. The requirements on these parameters vary from application to application: file transfer, remote login, and similar applications require high reliability, whereas audio and video applications can tolerate errors and thus accept lower reliability. The objectives of this paper are to propose a novel technique to predict the reason(s) for deterioration in QoS and to identify the algorithm(s) or mechanism(s) responsible for the deterioration. We expect this approach to help improve both QoS and overall network performance.
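As a hedged illustration of how per-application requirements drive such a diagnosis, the sketch below checks measured values against application-specific thresholds; all threshold values are invented for illustration and are not from the paper:

```python
# Per-application QoS requirements (all numbers are illustrative assumptions).
REQUIREMENTS = {
    "file_transfer": {"loss": 1e-4, "delay_ms": 5000, "jitter_ms": None, "min_bw_mbps": 0.5},
    "remote_login":  {"loss": 1e-4, "delay_ms": 200,  "jitter_ms": None, "min_bw_mbps": 0.1},
    "audio":         {"loss": 0.05, "delay_ms": 150,  "jitter_ms": 30,   "min_bw_mbps": 0.1},
    "video":         {"loss": 0.02, "delay_ms": 200,  "jitter_ms": 30,   "min_bw_mbps": 4.0},
}

def diagnose(app, measured):
    """Return the QoS parameters whose measured values violate the application's
    requirements (higher is worse for loss/delay/jitter, lower for bandwidth)."""
    req = REQUIREMENTS[app]
    violations = [p for p in ("loss", "delay_ms", "jitter_ms")
                  if req[p] is not None and measured[p] > req[p]]
    if measured["bw_mbps"] < req["min_bw_mbps"]:
        violations.append("bandwidth")
    return violations

print(diagnose("audio", {"loss": 0.08, "delay_ms": 120, "jitter_ms": 10, "bw_mbps": 0.2}))
# -> ['loss']
```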

8.
This paper presents a data mining algorithm based on supervised clustering to learn data patterns and use these patterns for data classification. The algorithm enables scalable incremental learning of patterns from data with both numeric and nominal variables. Two different methods of combining numeric and nominal variables in calculating the distance between clusters are investigated. In one method, separate distance measures are calculated for numeric and nominal variables, respectively, and then combined into an overall distance measure. In the other, nominal variables are converted into numeric variables, and a distance measure is calculated using all variables. We analyze the computational complexity, and thus the scalability, of the algorithm, and test its performance on a number of data sets from various application domains. The prediction accuracy and reliability of the algorithm are analyzed, tested, and compared with those of several other data mining algorithms.
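The two combination methods can be sketched as follows; the weighting scheme and encodings are assumptions for illustration, not the paper's exact formulas:

```python
import numpy as np

def combined_distance(x_num, x_nom, y_num, y_nom, w=0.5):
    """Method 1: Euclidean distance on the numeric variables plus a simple
    matching dissimilarity on the nominal ones, merged with weight w."""
    d_num = np.linalg.norm(np.asarray(x_num, float) - np.asarray(y_num, float))
    d_nom = np.mean([a != b for a, b in zip(x_nom, y_nom)])
    return w * d_num + (1 - w) * d_nom

def encode_nominal(x_nom, categories):
    """Method 2: convert nominal values to one-hot indicator vectors so a
    single numeric distance can be used over all variables."""
    return np.concatenate([(np.asarray(cats) == v).astype(float)
                           for v, cats in zip(x_nom, categories)])

cats = [["red", "green", "blue"], ["A", "B"]]
d1 = combined_distance([1.0, 2.0], ["red", "A"], [1.5, 1.0], ["blue", "A"])
v = np.concatenate([[1.0, 2.0], encode_nominal(["red", "A"], cats)])  # Method 2 vector
```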

9.
In recent years, considerable attention has been devoted to the link prediction (LP) problem in complex networks: predicting the likelihood that an association between two currently unconnected nodes will appear in the future. One of the most important approaches to the LP problem is based on supervised machine learning (ML) techniques for classification. Although many works have presented promising results with this approach, choosing the set of features (variables) used to train the classifiers remains a major challenge. In this article, we report on the effects of three different automatic variable selection strategies (forward, backward, and evolutionary) applied to the feature-based supervised learning approach in LP applications. The experiments show that these strategies do lead to better classification models than classifiers built with the complete set of variables. The experiments were performed on three datasets (Microsoft Academic Network, Amazon, and Flickr), each containing more than twenty different features, including topological and domain-specific ones. We also describe the specification and implementation of the process used to support the experiments, which combines the feature selection strategies, six different classification algorithms (SVM, K-NN, naïve Bayes, CART, random forest, and multilayer perceptron), and three evaluation metrics (precision, F-measure, and area under the curve). Moreover, this process includes a novel voting-committee-inspired ML approach that suggests sets of features for representing data in LP applications: it mines the log of the experiments to identify sets of features frequently selected to produce high-performance classification models. The experiments revealed interesting correlations between frequently selected features and datasets.
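For one of the three strategies (forward selection), a minimal sketch with scikit-learn on synthetic stand-in features might look as follows; the classifier choice, feature counts, and data are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Synthetic stand-in for twenty topological/domain-specific LP features.
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)
selector = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=100, random_state=0),
    n_features_to_select=6, direction="forward", scoring="f1", cv=3)
selector.fit(X, y)
print(np.flatnonzero(selector.get_support()))  # indices of the selected features
```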

10.
Partial least squares is a data-driven modeling technique that has been utilized for process monitoring in a variety of industrial processes. This paper develops a novel online partial least squares approach (evolving PLS) and compares it with an existing online PLS technique (global PLS). Both methods are applied to an industrial fed-batch mammalian cell culture process, where process variables are used to predict a key quality variable, product titer. Fault detection and diagnosis are performed using PLS models and statistical metrics. This new detection approach was able to recognize a variety of faults during online monitoring.
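A minimal sketch of PLS-based fault detection on synthetic data, using a squared-prediction-error (SPE) limit; this illustrates the general monitoring scheme, not the paper's evolving-PLS update itself:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X_train = rng.normal(size=(200, 10))                               # process variables
y_train = X_train[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)  # titer proxy

pls = PLSRegression(n_components=3).fit(X_train, y_train)
X_rec = pls.inverse_transform(pls.transform(X_train))  # reconstruct X from scores
spe_ref = np.percentile(((X_train - X_rec) ** 2).sum(axis=1), 99)  # reference limit

def is_faulty(x_new):
    """Flag new samples whose SPE exceeds the reference level."""
    x_new = np.atleast_2d(x_new)
    rec = pls.inverse_transform(pls.transform(x_new))
    return ((x_new - rec) ** 2).sum(axis=1) > spe_ref
```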

11.
A semi-physical auto-regressive moving average model of the bending stiffness of the board produced at Assi Domän-Frövifors Bruk AB, as a function of measured control and disturbance variables, was identified. Based on this model, an adaptive on-line bending stiffness index predictor was implemented and found to have an RMS error within the laboratory measurement accuracy. The predictor has been running for several months and consistently keeps more than 75% of its predictions within ± one standard deviation, for all product grades. The main difficulty in developing the predictor was that bending stiffness is seldom measured. The predictor will form the basis of a model predictive regulator of the bending stiffness.
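A hedged sketch of the model class, an ARMAX structure with exogenous control and disturbance inputs, fitted here with statsmodels on toy data as a stand-in for the identified mill model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
exog = rng.normal(size=(n, 2))  # measured control/disturbance variables (synthetic)
y = 0.8 * np.roll(exog[:, 0], 1) + 0.3 * exog[:, 1] + rng.normal(scale=0.1, size=n)

# ARMA(1,1) dynamics with exogenous regressors, i.e. an ARMAX model.
res = sm.tsa.SARIMAX(y, exog=exog, order=(1, 0, 1)).fit(disp=False)

next_inputs = rng.normal(size=(1, 2))  # the next sample's measured variables
stiffness_pred = res.forecast(steps=1, exog=next_inputs)  # one-step prediction
```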

12.
Automatic land cover classification from satellite images is an important topic in many remote sensing applications. In this paper, we consider three different statistical approaches to tackle this problem: two of them, the well-known maximum likelihood classification (ML) and the support vector machine (SVM), are noncontextual methods; the third, iterated conditional modes (ICM), exploits spatial context by using a Markov random field. We apply these methods to Landsat 5 Thematic Mapper (TM) data from Tenerife, the largest of the Canary Islands. Due to the size and strong relief of the island, ground truth data could be collected only sparsely, by examining test areas for previously defined land cover classes. We show that after applying an unsupervised clustering method to identify subclasses, all classification algorithms give satisfactory results (with a statistical overall accuracy of about 90%) if the model parameters are selected appropriately. Although theoretically superior to ML, both SVM and ICM must be used carefully: ICM is able to improve ML, but when run for too many iterations, spatially small sample areas are smoothed away, leading to statistically slightly worse classification results. SVM yields better statistical results than ML, but on visual inspection the classification result is not completely satisfying. This is because no a priori information on the frequency of occurrence of each class was used, information that helps ML suppress unlikely classes.
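The ICM step can be sketched compactly: starting from the per-pixel maximum likelihood labelling, each pixel is reassigned the class maximising its log-likelihood plus a Potts-style agreement bonus with its 4-neighbours (an illustrative formulation, not the study's exact model):

```python
import numpy as np

def icm(log_lik, beta=1.0, n_iter=5):
    """log_lik: (H, W, K) per-pixel class log-likelihoods; returns (H, W) labels.
    Note: more iterations smooth more, which can erase small areas."""
    labels = log_lik.argmax(axis=2)  # ML initialisation
    H, W, K = log_lik.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                nb = [labels[a, b] for a, b in
                      ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= a < H and 0 <= b < W]
                score = log_lik[i, j] + beta * np.array(
                    [sum(n == k for n in nb) for k in range(K)])
                labels[i, j] = score.argmax()
    return labels

rng = np.random.default_rng(5)
labels = icm(rng.normal(size=(20, 20, 4)), beta=1.5)  # toy 4-class example
```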

13.
14.
Generating prediction rules for liquefaction through data mining
Prediction of liquefaction is an important subject in geotechnical engineering. It is also a complex problem, as it depends on many different physical factors, and the relations between these factors are highly non-linear and complex. Several approaches have been proposed in the literature for modeling and predicting liquefaction, most of them based on classical statistical methods and neural networks. In this paper, a new approach based on classification data mining is proposed for the first time in the literature for liquefaction prediction. The approach extracts accurate classification rules from neural networks via ant colony optimization. The extracted rules take the form of IF–THEN rules, which are easily understood by humans. The proposed algorithm is compared with several other data mining algorithms and shown to be very effective and accurate in predicting liquefaction.
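As a hedged illustration of the output format only, the sketch below evaluates IF–THEN rules over geotechnical attributes; the attribute names, thresholds, and rules are invented, not extracted rules from the paper:

```python
# Hypothetical rule base; each rule is (antecedent conditions, consequent class).
RULES = [
    ({"spt_n": ("<", 15), "fines_pct": ("<", 35)}, "liquefaction"),
    ({"spt_n": (">=", 15)}, "no_liquefaction"),
]

OPS = {"<": lambda a, b: a < b, ">=": lambda a, b: a >= b}

def classify(sample):
    """Return the consequent of the first rule whose antecedent matches."""
    for antecedent, consequent in RULES:
        if all(OPS[op](sample[attr], v) for attr, (op, v) in antecedent.items()):
            return consequent
    return "no_liquefaction"  # default class when no rule fires

print(classify({"spt_n": 9, "fines_pct": 20}))  # -> 'liquefaction'
```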

15.
A novel multi-class logistic supervised classification model based on multi-fractal spectrum parameters is proposed for hyperspectral data. It avoids the error caused by the difference between the real data distribution and a hypothetical Gaussian distribution, as well as the computational burden of applying logistic regression classification directly to hyperspectral data. First, multi-fractal spectra and parameters are calculated from training samples along the spectral dimension of the hyperspectral data. Second, because logistic regression is a distribution-free nonlinear model based on conditional probability, requiring no Gaussian assumption on the random variables, the obtained multi-fractal parameters are used to establish the multi-class logistic regression classification model. Finally, the Newton–Raphson method is applied to estimate the model parameters via maximum likelihood. The classification results of the proposed model are compared with those of a logistic regression classification model based on an adaptive band selection method, using Airborne Visible/Infrared Imaging Spectrometer and airborne Push Hyperspectral Imager data. The results show that the proposed approach achieves better accuracy at lower computational cost.
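The estimation step is standard enough to sketch: binary logistic regression fitted by Newton–Raphson on synthetic features standing in for the multi-fractal parameters (the paper uses the multi-class extension):

```python
import numpy as np

rng = np.random.default_rng(6)
X = np.hstack([np.ones((300, 1)), rng.normal(size=(300, 3))])  # intercept + features
true_beta = np.array([0.5, 1.0, -2.0, 0.7])
y = (rng.random(300) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

beta = np.zeros(X.shape[1])
for _ in range(25):                   # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))   # predicted probabilities
    W = p * (1 - p)                   # weights from the Hessian
    grad = X.T @ (y - p)              # score (gradient of the log-likelihood)
    hess = X.T @ (X * W[:, None])     # observed information matrix
    beta += np.linalg.solve(hess, grad)
```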

16.
Innovation is vital for finding new solutions to problems, increasing quality, and improving profitability. Big open linked data (BOLD) is a fledgling and rapidly evolving field that creates new opportunities for innovation. However, the existing literature has not yet considered the interrelationships between antecedents of innovation through BOLD. This research contributes to knowledge building by using interpretive structural modelling to organise nineteen factors, identified by experts in the field, that are linked to innovation through BOLD. The findings show that almost all the variables fall within the linkage cluster, having both high driving and high dependence powers, which demonstrates the volatility of the process. It was also found that technical infrastructure, data quality, and external pressure form the fundamental foundations for innovation through BOLD. Deriving a framework to encourage and manage innovation through BOLD offers important theoretical and practical contributions.
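The driving/dependence analysis mentioned above reduces to row and column sums of the reachability matrix; a toy sketch (the 4-factor matrix is invented, the study uses nineteen factors):

```python
import numpy as np

R = np.array([[1, 1, 1, 0],   # toy 4-factor reachability matrix
              [1, 1, 1, 1],
              [1, 1, 1, 1],
              [0, 0, 1, 1]])

driving = R.sum(axis=1)      # how many factors each factor influences
dependence = R.sum(axis=0)   # by how many factors each factor is influenced
# Factors high on both powers fall into the "linkage" cluster.
linkage = (driving > R.shape[0] / 2) & (dependence > R.shape[0] / 2)
```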

17.
During the IBIS project, a high-quality data library of continuous and intermittent physiological signals and variables from patients during intensive care and surgery was collected. To facilitate exploration of the full content of this data library, a data browser was developed that offers a flexible graphical display of the collection of multivariate data. To supplement the display of the 'raw' data, a set of screening and pre-processing tools was developed. A separate trend analysis tool offers a convenient overview of an entire recording, focusing on slow changes in the general state of the patient and on the interaction between different physiological subsystems seen from a long-term perspective. A frequency analysis tool for processing the electroencephalography (EEG) signals has been integrated in the data browser to facilitate quick screening of cerebral function. The data library is the foundation of the development and validation of biosignal interpretation methods; this process can be made more productive with the described tool for algorithm prototyping, which is based on a graphical network specifying the interactions between data-processing primitives.
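As an indication of what such a frequency analysis tool computes, a small sketch of relative EEG band power via Welch's method on a synthetic signal (the sampling rate and band edges are assumptions, not IBIS specifics):

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                                  # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(7)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # synthetic EEG

f, psd = welch(eeg, fs=fs, nperseg=1024)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return np.trapz(psd[m], f[m])

total = band_power(0.5, 30)
rel_power = {name: band_power(lo, hi) / total for name, (lo, hi) in bands.items()}
```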

18.
19.
Issues concerning the overall system performance of industrial control computer systems and its evaluation are discussed on the basis of the IEC 61069 standard.

20.
Computers in Industry, 2014, 65(9): 1242-1252
Ontologies are structural components of modern information systems. The taxonomy, the core of an ontology, is a delicate balance between adequacy considerations, minimal commitments, and implementation concerns. However, ontological taxonomies can be quite restrictive, and entities commonly used in production and services might not find room in an official or de facto standard or ontological system. This mismatch between the company's view and the ontological constraints can limit or even jeopardize the adoption of modern formal ontologies in industry. We study the roots of this problem and identify a general set of principles for relating the ontology to those non-ontological entities that are nonetheless important for the core business of the company. We then introduce a theoretically sound and formally robust approach to expand a given ontology with new dependency relations, which make information about the non-ontological entities available without affecting the consistency of the overall information system.
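A hedged sketch of the general idea with rdflib: a company-specific entity stays outside the taxonomy but is linked to it through an explicit dependency property; all names and the namespace are invented for illustration:

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/onto#")  # hypothetical namespace
g = Graph()

g.add((EX.Product, RDF.type, RDFS.Class))      # ontological class in the taxonomy
g.add((EX.dependsOn, RDF.type, RDF.Property))  # new dependency relation

# 'VirtualBundle' is a non-ontological, company-specific entity: it is not
# placed in the class hierarchy, but declaring what it depends on makes its
# information available without touching the taxonomy's consistency.
g.add((EX.VirtualBundle, EX.dependsOn, EX.Product))
g.add((EX.VirtualBundle, RDFS.label, Literal("company-specific bundle")))
```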
