Similar Documents
20 similar documents found (search time: 31 ms)
1.
It is argued that data collection is the most crucial stage in the model building process, primarily because of the influence the data has on the accuracy of simulation results. Data collection is extremely time-consuming, predominantly because the task is manually oriented; automating it would therefore be extremely advantageous. This paper presents how simulation could use corporate business systems as the simulation data source. A dedicated interface could then be implemented to provide these data directly to the simulation tool. Such an interface would prove to be an invaluable tool for users of simulation.

2.
This paper examines attribute dependencies in data that involve grades, such as a grade to which an object is red or a grade to which two objects are similar. We thus extend the classical agenda by allowing graded, or “fuzzy”, attributes instead of Boolean, yes-or-no attributes in the case of attribute implications, and by allowing approximate match based on degrees of similarity instead of exact match based on equality in the case of functional dependencies. In a sense, we move from bivalence, inherently present in the now-available theories of dependencies, to a more flexible setting that involves grades. Such a shift has far-reaching consequences. We argue that a reasonable theory of dependencies may be developed by making use of mathematical fuzzy logic, a recently developed many-valued logic. Namely, the theory of dependencies is then based on a solid logical calculus the same way classical dependencies are based on classical logic. For instance, rather than handling degrees of similarity in an ad hoc manner, we consistently treat them as truth values, the same way as true (match) and false (mismatch) are treated in classical theories. In addition, several notions intuitively embraced in the presence of grades, such as a degree of validity of a particular dependency or a degree of entailment, naturally emerge and receive a conceptually clean treatment in the presented approach. In the first part of this two-part paper, we discuss motivations, provide basic notions of syntax and semantics and develop basic results which include entailment of dependencies, associated closure structures and a logic of dependencies with two versions of completeness theorem.
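The idea of treating degrees as truth values can be illustrated with a minimal sketch. The code below is not the paper's formalism; it assumes Gödel semantics (one of several residuated-logic choices) and computes the degree to which a graded attribute implication A ⇒ B holds in a small fuzzy data table.

```python
def godel_imp(a, b):
    """Goedel residuum: degree to which truth value a implies b."""
    return 1.0 if a <= b else b

def subsethood(attrs, row):
    """Degree to which the fuzzy attribute set `attrs` is contained in `row`."""
    return min(godel_imp(g, row.get(attr, 0.0)) for attr, g in attrs.items())

def validity(A, B, table):
    """Degree to which the implication A => B holds in every row of the table."""
    return min(godel_imp(subsethood(A, row), subsethood(B, row)) for row in table)

table = [
    {"red": 0.9, "ripe": 0.8},
    {"red": 0.4, "ripe": 0.4},
    {"red": 0.9, "ripe": 0.2},  # red to a high degree but barely ripe
]
deg = validity({"red": 0.8}, {"ripe": 0.7}, table)
```

The third row satisfies the antecedent fully but the consequent only to degree 0.2, so the implication holds to degree 0.2 rather than simply failing, which is the point of the graded setting.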

3.
Classical dynamic Bayesian networks (DBNs) are based on the homogeneous Markov assumption and cannot deal with non-homogeneous temporal processes. Various approaches to relaxing the homogeneity assumption have recently been proposed. This paper combines a Bayesian network with conditional probabilities in the linear Gaussian family with a Bayesian multiple changepoint process, where the number and locations of the changepoints are sampled from the posterior distribution with MCMC. Our work improves on an earlier conference paper in four respects: it contains a comprehensive and self-contained exposition of the methodology; it discusses the problem of spurious feedback loops in network reconstruction; it contains a comprehensive comparative evaluation of the network reconstruction accuracy on a set of synthetic and real-world benchmark problems, based on a novel discrete changepoint process; and it suggests new and improved MCMC schemes for sampling both the network structures and the changepoint configurations from the posterior distribution. The latter study compares RJMCMC, based on changepoint birth and death moves, with two dynamic programming schemes that were originally devised for Bayesian mixture models. We demonstrate the modifications that have to be made to allow for changing network structures, and the critical impact that the prior distribution on changepoint configurations has on the overall computational complexity.

4.
This paper proposes an activity recognition method that models an end user’s activities without using any labeled/unlabeled acceleration sensor data obtained from the user. Our method employs information about the end user’s physical characteristics such as height and gender to find and select appropriate training data obtained from other users in advance. Then, we model the end user’s activities by using the selected labeled sensor data. Therefore, our method does not require the end user to collect and label her training sensor data. In this paper, we propose and test two methods for finding appropriate training data by using information about the end user’s physical characteristics. Moreover, our recognition method improves the recognition performance without the need for any effort by the end user because the method automatically adapts the activity models to the end user when it recognizes her unlabeled sensor data. We confirmed the effectiveness of our method by using 100 h of sensor data obtained from 40 participants.

5.
Surveys and opinion polls are extremely popular in the media, especially in the months preceding a general election. However, the available tools for analyzing poll results often require specialized training. Hence, data analysis remains out of reach for many casual computer users. Moreover, the visualizations used to communicate the results of surveys are typically limited to traditional statistical graphics like bar graphs and pie charts, both of which are fundamentally noninteractive. We present a simple interactive visualization that allows users to construct queries on large tabular data sets, and view the results in real time. The results of two separate user studies suggest that our interface lowers the learning curve for naive users, while still providing enough analytical power to discover interesting correlations in the data.

6.
The fuzzy min–max neural network classifier is a supervised learning method that takes a hybrid neural network and fuzzy systems approach. All input variables in the network are required to be continuously valued, which can be a significant constraint in many real-world situations where the data are not only quantitative but also categorical. The usual way of dealing with such variables is to replace the categories with numerical values and treat them as if they were continuously valued, but this implicitly defines a possibly unsuitable metric on the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method that extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture, providing greater flexibility and wider applicability. The proposed method is then applied to missing data imputation in voting intention polls. The micro data (the set of respondents' individual answers to the questions) of this type of poll are especially suited for evaluating the method since they include a large number of numerical and categorical attributes.

7.
Fitting curves and surfaces by least-squares regression is common in many scientific fields. It cannot, however, properly handle noisy data with impulsive noise and outliers. In this article, we study ℓ1-regression and its associated reweighted least squares for data restoration. Unlike most existing work, we propose ℓ1-regression based subdivision schemes to handle this problem. In addition, we propose a fast numerical optimization method, dynamic iterative reweighted least squares, which has a closed-form solution at each iteration. The main advantage of the proposed method is that it removes noise and outliers without any prior information about the input data. It also extends least-squares regression based subdivision schemes from fitting a curve to a set of observations in 2-dimensional space to fitting a p-dimensional hyperplane to a set of point observations in (p+1)-dimensional space. Wide-ranging experiments have been carried out to check the usability and practicality of this new framework.
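To see why reweighted least squares approximates an ℓ1 fit, consider the classic scheme the abstract builds on (this is the textbook IRLS weighting w_i = 1/max(|r_i|, eps), not the authors' dynamic variant): each iteration solves a weighted least-squares problem in closed form, and points with large residuals, such as outliers, are progressively down-weighted.

```python
def irls_l1_line(xs, ys, iters=100, eps=1e-8):
    """Fit y ~ a*x + b in the l1 sense via iteratively reweighted least squares."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        # l1 surrogate weights: small residual -> large weight
        w = [1.0 / max(abs(a * x + b - y), eps) for x, y in zip(xs, ys)]
        # closed-form weighted least-squares normal equations for a line
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        if abs(det) < 1e-12:
            break
        a = (sw * sxy - sx * sy) / det
        b = (sxx * sy - sx * sxy) / det
    return a, b

# four points on y = 2x + 1 plus one gross outlier
a, b = irls_l1_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 100])
```

A plain least-squares fit would be dragged far off by the outlier at x = 4; the ℓ1-style fit recovers a slope near 2 and intercept near 1 without any prior information about which point is bad.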

8.
International Journal of Computer Mathematics, 2012, 89(8): 1683-1712
Subdivision schemes are multi-resolution methods used in computer-aided geometric design to generate smooth curves or surfaces. We propose two new models for data analysis and compression based on subdivision schemes: (a) the ‘subdivision regression’ model, which can be viewed as a special multi-resolution decomposition, and (b) the ‘tree regression’ model, which allows the identification of certain patterns within the data. The paper focuses on analysis and mentions compression as a byproduct. We suggest applying certain criteria to the output of these models as features for data analysis. Unlike existing multi-resolution analysis methods, these new models and criteria provide data features related to the schemes (the filters) themselves, based on a decomposition of the data into different resolution levels, and they also allow analysing data from non-smooth functions and working with varying-resolution subdivision rules. Finally, applications of these methods to music analysis and other potential uses are mentioned.

9.
A gate size estimation algorithm for data association filters
The problem of forming validation regions, or gates, for new sensor measurements obtained when tracking targets in clutter is considered. Since the gate size is an integral part of the data association filter, this paper describes a way of estimating the gate size via the performance of the data association filter: the gate size is chosen by seeking the filter's optimal performance. Simulations show that this estimation method offers advantages over the common classical methods, especially in heavy clutter and/or false alarm environments.
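For context, the gate the paper sizes adaptively is usually the standard chi-square validation gate from the tracking literature: a measurement is associated only if its normalized innovation falls below a threshold gamma. The one-dimensional sketch below shows the fixed-threshold baseline (not the paper's performance-driven estimator); `s_var` is the innovation variance and gamma = 9.0 corresponds to roughly a 3-sigma gate.

```python
def in_gate(z, z_pred, s_var, gamma):
    """1-D validation gate: accept z if (z - z_pred)^2 / s_var <= gamma."""
    d2 = (z - z_pred) ** 2 / s_var  # normalized innovation squared
    return d2 <= gamma

# predicted measurement 10.0, unit innovation variance, ~3-sigma gate
accepted = [z for z in [9.5, 10.2, 14.0] if in_gate(z, 10.0, 1.0, 9.0)]
```

The measurement at 14.0 lies outside the gate and is discarded before association; the paper's contribution is choosing gamma (and hence the gate size) from the filter's own performance instead of fixing it a priori.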

10.
The increasing use of computers for transactions and communication has created mountains of data that contain potentially valuable knowledge. To search for this knowledge we have to develop a new generation of tools capable of flexible querying and intelligent searching. In this paper we introduce an extension of a fuzzy query language called Summary SQL which can be used for knowledge discovery and data mining, and show how it can be used to search for fuzzy rules.
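The flavour of fuzzy querying can be conveyed with a toy example (this is an illustration of the general idea, not Summary SQL's actual syntax or semantics): rows match a linguistic term such as "young" to a degree in [0, 1] rather than true/false, and the query keeps rows whose degree clears a threshold.

```python
def young(age):
    """Membership of the linguistic term 'young': 1 below 25, 0 above 40."""
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15.0  # linear ramp between 25 and 40

rows = [
    {"name": "ann", "age": 22},
    {"name": "bob", "age": 31},
    {"name": "eve", "age": 55},
]
# roughly: SELECT name FROM rows WHERE age IS young WITH degree >= 0.5
result = [(r["name"], round(young(r["age"]), 2)) for r in rows if young(r["age"]) >= 0.5]
```

A crisp predicate `age < 30` would exclude bob outright; the fuzzy query returns him with degree 0.6, which is exactly the kind of graded answer a summary query language exploits.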

11.
During the August 2002 Elbe river flood, data from different satellite sensors were acquired, in particular Envisat Advanced Synthetic Aperture Radar (ASAR) data. The ASAR instrument was operated in Alternating Polarization (AP) and Image (IM) modes, providing high-resolution datasets. Comparison with a quasi-simultaneous ERS-2 scene thus enables evaluation of the contribution of polarization configurations to flood boundary delineation. This study highlights the increased capability of the Envisat ASAR instrument for flood mapping, especially the benefit of combining like- and cross-polarizations for rapid mapping in a crisis context.

12.
13.
This paper introduces the FMet software package for the scientific processing of Meteosat SEVIRI data. A number of individual modules handle the processing steps from image format conversion to calibration and product generation and presentation. The package is designed for operational applications. It can be freely extended and configured using a dynamic graphical user interface.

14.
A novel ν-twin support vector machine with Universum data (\(\mathfrak {U}_{\nu }\)-TSVM) is proposed in this paper. \(\mathfrak {U}_{\nu }\)-TSVM incorporates the prior knowledge embedded in unlabeled samples into supervised learning, aiming to use this prior knowledge to improve generalization performance. Unlike the conventional \(\mathfrak {U}\)-SVM, \(\mathfrak {U}_{\nu }\)-TSVM employs two hinge loss functions to make the Universum data lie in a nonparallel insensitive loss tube, which lets it exploit this prior knowledge more flexibly. In addition, the newly introduced parameters ν1 and ν2 in the \(\mathfrak {U}_{\nu }\)-TSVM have a better theoretical interpretation than the penalty factor c in the \(\mathfrak {U}\)-TSVM. Numerical experiments on seventeen benchmark datasets, handwritten digit recognition, and gender classification indicate that the Universum indeed contributes to improving prediction accuracy. Moreover, our \(\mathfrak {U}_{\nu }\)-TSVM is far superior to the other three algorithms (\(\mathfrak {U}\)-SVM, ν-TSVM and \(\mathfrak {U}\)-TSVM) in terms of prediction accuracy.

15.
As more and more real time spatio-temporal datasets become available at increasing spatial and temporal resolutions, the provision of high quality, predictive information about spatio-temporal processes becomes an increasingly feasible goal. However, many sensor networks that collect spatio-temporal information are prone to failure, resulting in missing data. To complicate matters, the missing data is often not missing at random, and is characterised by long periods where no data is observed. The performance of traditional univariate forecasting methods such as ARIMA models decreases with the length of the missing data period because they do not have access to local temporal information. However, if spatio-temporal autocorrelation is present in a space–time series then spatio-temporal approaches have the potential to offer better forecasts. In this paper, a non-parametric spatio-temporal kernel regression model is developed to forecast the future unit journey time values of road links in central London, UK, under the assumption of sensor malfunction. Only the current traffic patterns of the upstream and downstream neighbouring links are used to inform the forecasts. The model performance is compared with another form of non-parametric regression, K-nearest neighbours, which is also effective in forecasting under missing data. The methods show promising forecasting performance, particularly in periods of high congestion.
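The K-nearest-neighbours comparison method lends itself to a compact sketch. The code below is a generic KNN forecast over neighbouring-link features (an illustration of the idea, not the authors' kernel regression model): the journey time of a failed link is predicted from the k most similar historical upstream/downstream traffic patterns, which works even when the link's own recent history is missing.

```python
import math

def knn_forecast(history, query, k=2):
    """history: list of (feature_vector, target) pairs, where the feature
    vector holds e.g. current upstream/downstream link journey times;
    query: the current feature vector for the link to forecast."""
    nearest = sorted(history, key=lambda pair: math.dist(pair[0], query))[:k]
    return sum(target for _, target in nearest) / k

# hypothetical history: (neighbour journey times, this link's journey time)
history = [([1.0, 1.0], 10.0), ([1.1, 0.9], 12.0), ([5.0, 5.0], 40.0)]
forecast = knn_forecast(history, [1.0, 1.0], k=2)
```

Because the forecast depends only on the neighbours' current pattern, it degrades gracefully during long sensor outages, which is precisely the missing-not-at-random scenario the paper targets.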

16.
Graph processing has received significant attention for its ability to cope with large-scale and complex unstructured data in the real world. However, most graph processing applications exhibit an irregular memory access pattern, which leads to poor locality in the memory access stream [1].

17.
In dense-target and false-detection scenarios of a multi-passive-sensor location system using four time difference of arrival (TDOA) measurements, a globally optimal data association algorithm has to be adopted. In view of the heavy computational burden of the traditional optimal assignment algorithm, this paper proposes a new globally optimal assignment algorithm and a two-stage association algorithm based on a statistical test. Compared with the traditional optimal algorithm, the new optimal algorithm avoids the complicated operations for finding the target position before calculating the association cost; hence, much of the processing time is saved. In the two-stage association algorithm, a large number of false location points are eliminated from candidate associations in advance, so the computation is further decreased and the correct data association probability is improved to varying degrees. Both the complexity analyses and simulation results verify the effectiveness of the new algorithms.

18.
In network data analysis, research into how accurately an estimation model represents the universe is unavoidable. As network speeds increase, so will the attack methods on future-generation communication networks. To counter this wide variety of attacks, intrusion detection systems and intrusion prevention systems need a correspondingly wide variety of countermeasures, and hence an effective method to compare and analyze network data: only when such a method is effective can the verification of intrusion detection and intrusion prevention systems be trusted. In this paper, we use extractable standard protocol information from network data to compare and analyze the MIT Lincoln Lab data with the KDD CUP 99 data (modeled on the Lincoln Lab data). Correspondence analysis and statistical methods are used for the comparison.

19.
We propose an integrated learning and planning framework that leverages knowledge from a human user along with prior information about the environment to generate trajectories for scientific data collection in marine environments. The proposed framework combines principles from probabilistic planning with nonparametric uncertainty modeling to refine trajectories for execution by autonomous vehicles. These trajectories are informed by a utility function learned from the human operator’s implicit preferences using a modified coactive learning algorithm. The resulting techniques allow for user-specified trajectories to be modified for reduced risk of collision and increased reliability. We test our approach in two marine monitoring domains and show that the proposed framework mimics human-planned trajectories while also reducing the risk of operation. This work provides insight into the tools necessary for combining human input with vehicle navigation to provide persistent autonomy.

20.
Logic for improving integrity checking in relational data bases
When an updating operation occurs on the current state of a database, one has to ensure that the new state obeys the integrity constraints, so some of them have to be evaluated in this new state. Evaluating an integrity constraint can be time consuming, but the evaluation can be improved by taking advantage of the fact that the constraint is satisfied in the current state. It is then possible to derive a simplified form of the integrity constraint which is sufficient to evaluate in the new state in order to determine whether the initial constraint is still satisfied. The purpose of this paper is to present a simplification method yielding such simplified forms for integrity constraints. These simplified forms depend on the nature of the updating operation causing the state change. The operations of inserting, deleting, and updating a tuple in a relation, as well as transactions of such operations, are considered. The proposed method is based on syntactic criteria and is validated through first-order logic. Examples are treated and some aspects of applying the method are discussed. The work reported in this paper was supported by the D.R.E.T.
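The simplification idea can be made concrete with a toy example (a deliberately simplified sketch, not the paper's logical method): for a universally quantified constraint such as "every salary is below a cap", an insertion only requires checking the new tuple, because the constraint is assumed to hold in the current state; re-evaluating the whole table would be wasted work. `CAP`, `full_check`, and `insert` below are illustrative names.

```python
CAP = 100_000  # hypothetical integrity constraint: every salary < CAP

def full_check(table):
    """Evaluate the full constraint over the whole table (the expensive path)."""
    return all(row["salary"] < CAP for row in table)

def insert(table, row):
    """Insert with the simplified constraint: since the constraint held
    before the update, it suffices to test only the inserted tuple."""
    if row["salary"] < CAP:
        table.append(row)
        return True
    return False  # reject the update; the table state is unchanged

employees = [{"salary": 50_000}]
ok = insert(employees, {"salary": 60_000})
rejected = not insert(employees, {"salary": 200_000})
```

The simplified check runs in O(1) per insertion instead of O(n), and because rejected updates never reach the table, `full_check` remains an invariant of every reachable state. For deletions this particular constraint needs no check at all, which mirrors the paper's point that the simplified form depends on the kind of updating operation.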


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号