Similar Articles
20 similar articles found.
1.
This article explains the main role that space windowing plays in preliminary knowledge extraction from multifactor, multivariate databases arising from empirical studies of complex systems. The explanation is based on the general case of a database with a hyperparallelepipedic structure in which the directions correspond to the factors and the measurement variables may be quantitative or qualitative, temporal or nontemporal, and objective or subjective. First, the data in each cell of the hyperparallelepiped are transformed into membership values that can be averaged over factors such as time or individual. Then, several graphic techniques can be exploited to investigate the membership values. This article focuses mainly on the use of multiple correspondence analysis (MCA). A didactic example with several factors and several kinds of variables (nontemporal vs. temporal, each of which may be quantitative or qualitative) is used to illustrate the broad applicability of the pair "space windowing/MCA." The discussion presents the advantages and disadvantages of using space windowing to perform a preliminary analysis of a multifactor multivariate system study.
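To make the "space windowing/MCA" pair concrete, here is a minimal sketch with entirely hypothetical data and window definitions: raw cell values from an individuals x time x variables hyperparallelepiped are turned into membership values in overlapping windows, averaged over the time factor, and then summarized with a basic correspondence analysis via the SVD (full MCA would use an indicator coding of the qualitative variables).

```python
import numpy as np

def triangular(x, a, b, c):
    # membership of x in the triangular fuzzy window (a, b, c)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, None)

rng = np.random.default_rng(0)
# hypothetical hyperparallelepiped: 8 individuals x 10 time points x 3 variables
data = rng.normal(20.0, 5.0, size=(8, 10, 3))

# three overlapping "space windows" per variable (low / medium / high)
windows = [(5, 12, 19), (14, 20, 26), (21, 28, 35)]

# membership values, averaged over the time factor -> individuals x (3 vars x 3 windows)
mem = np.stack([triangular(data, *w) for w in windows], axis=-1)   # 8 x 10 x 3 x 3
table = mem.mean(axis=1).reshape(8, -1)

# basic correspondence analysis of the membership table via the SVD
N = table / table.sum()
r, c = N.sum(axis=1), N.sum(axis=0)
S = (N - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * s) / np.sqrt(r)[:, None]   # individuals on the first CA axes
print(row_coords[:, :2])
```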

2.
There has been an ongoing trend toward collaborative software development using open and shared source code published in large software repositories on the Internet. While traditional source code analysis techniques perform well in single-project contexts, new types of source code analysis techniques are emerging that focus on global source code analysis challenges. In this article, we discuss how the Semantic Web can become an enabling technology providing standardized, formal, and semantically rich representations for modeling and analyzing large global source code corpora. Furthermore, inference services and other services provided by Semantic Web technologies can be used to support a variety of core source code analysis techniques, such as semantic code search, call graph construction, and clone detection. In this paper, we introduce SeCold, the first publicly available online linked-data source code dataset for software engineering researchers and practitioners. Along with the dataset, SeCold provides Semantic Web enabled core services to support the analysis of Internet-scale source code repositories. We illustrate through several examples how this linked data, combined with Semantic Web technologies, can be harvested for different source code analysis tasks to support software trustworthiness. For the case studies, we combine our linked dataset and Semantic Web enabled source code analysis services with knowledge extracted from StackOverflow, a crowdsourcing website. Through these case studies, we demonstrate that our approach is not only capable of crawling, processing, and scaling to traditional types of structured data (e.g., source code), but also supports emerging unstructured data sources, such as crowdsourced information (e.g., StackOverflow.com), in a global source code analysis context.
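As an illustration of the kind of analysis the abstract describes, the toy sketch below builds a small RDF graph of code facts and runs a SPARQL query to recover call-graph edges. The code:invokes / code:definedIn vocabulary is invented for the example and is not SeCold's actual ontology.

```python
from rdflib import Graph, Namespace, Literal

CODE = Namespace("http://example.org/code#")   # hypothetical vocabulary
g = Graph()
g.add((CODE.parseConfig, CODE.invokes, CODE.readFile))
g.add((CODE.readFile, CODE.invokes, CODE.openStream))
g.add((CODE.parseConfig, CODE.definedIn, Literal("ConfigLoader.java")))

# call-graph construction expressed as a SPARQL query over linked code facts
q = """
PREFIX code: <http://example.org/code#>
SELECT ?caller ?callee WHERE { ?caller code:invokes ?callee . }
"""
for caller, callee in g.query(q):
    print(caller.n3(), "->", callee.n3())
```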

3.
The aim of this paper is to raise, and in part to answer, some questions connected with two important problem groups of fuzzy mathematics: n-fuzzy objects and the sigma-properties of different interactive fuzzy structures. These questions are suggested by the analysis of natural languages and of common-sense thinking (typical fields where the most adequate mathematical model is a fuzzy one), in particular by complex adjectival structures and subjective "verifying" processes, respectively. They also have real practical significance in engineering, for example in learning-machine problems. In the first part we point out the practical importance of the concept of fuzzy objects of type n (or n-fuzzy objects) for modeling natural languages. A useful way to define n-fuzzy algebras, i.e., to generalize ordinary fuzzy algebras to n-fuzzy objects, is also given by introducing an isomorphism from the fuzzy object space to the n-fuzzy object space. As an example, an R-n-fuzzy algebra is defined. Because this mapping is an isomorphism, the later studies can be restricted to ordinary fuzzy objects.

In the second part, some basic concepts connected with the sigma-properties of fuzzy algebras are given and some simple theorems are proved. These are quite important for fuzzy learning processes, as their probability-theoretic interpretation leads to several convergence theorems (not dealt with here, however).

In this part we also introduce the quantified form of a fuzzy algebra, and by means of this concept a close relation between interactive fuzzy algebras and Boolean algebras is proved, a relation very different from that between Zadeh's original, noninteractive system and Boolean algebra.

Although this paper does not attempt to present complete application examples, some aspects of applying the above results, especially in learning control algorithms, are finally given; the statements are backed up by experience from a simulation experiment currently in progress.

4.
Treating fuzziness in subjective evaluation data (cited 1 time: 0 self-citations, 1 by others)
This paper proposes a technique for dealing with fuzziness in subjective evaluation data and applies it to principal component analysis and correspondence analysis. In the existing method, and in techniques developed directly from it, fuzzy sets are defined from some standpoint on a data space, and the fuzzy parameters of the statistical model are identified with linear programming or the method of least squares. In this paper, we instead map the variation in evaluation data into the parameter space while preserving as much information as possible, and thereby define fuzzy sets in the parameter space. Clearly, the obtained fuzzy model can be used to derive quantities such as principal component scores from the extension principle. However, with a fuzzy model that uses the extension principle, the possibility distribution spreads out as the explanatory variable values increase, which does not necessarily make sense for subjective evaluations such as a 5-level rating. Instead, we propose a method for explicitly expressing the vagueness of evaluation, using certain quantities related to the eigenvalues of a matrix that specifies the fuzzy parameter spread. As a numerical example, we present an analysis of subjective evaluation data on local environments.
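A rough numeric illustration of the idea of expressing evaluation vagueness in the parameter space: ordinary PCA is run on hypothetical 5-level rating data, and each component score is given a spread driven by the eigenvalues excluded from the model. This is only one plausible reading of the construction, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical 5-level subjective ratings: 30 respondents x 6 items
X = rng.integers(1, 6, size=(30, 6)).astype(float)
Xc = X - X.mean(axis=0)

# ordinary PCA of the evaluation data
C = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
scores = Xc @ evecs[:, :2]

# one way to express evaluation vagueness in the parameter space:
# attach to each score a spread driven by the eigenvalues left out
# of the 2-component model (an illustration, not the paper's construction)
spread = np.sqrt(evals[2:].sum() / len(evals))
for s in scores[:3]:
    print([f"{v:.2f} +/- {spread:.2f}" for v in s])
```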

5.
Fuzzy statistics provides useful techniques for handling real situations affected by vagueness and imprecision. Several fuzzy statistical techniques (e.g., fuzzy regression, fuzzy principal component analysis, fuzzy clustering) have been developed over the years. Among these, fuzzy regression is an important tool for modeling the relation between a dependent variable and a set of independent variables, in order to evaluate how the independent variables explain the empirical data modeled through the regression system. In general, the standard fuzzy least squares method has been used in these situations. However, several applicative contexts, such as analyses with small samples and short-and-fat matrices, violations of distributional assumptions, and matrices affected by multicollinearity (ill-posed problems), present more complex situations that fuzzy least squares cannot solve successfully. In all these cases, different estimation methods should be preferred. In this paper we address the problem of estimating fuzzy regression models characterized by ill-posed features. We introduce a novel fuzzy regression framework based on the Generalized Maximum Entropy (GME) estimation method. Finally, to better highlight some characteristics of the proposed method, we perform two Monte Carlo experiments and analyze a real case study.
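For readers unfamiliar with GME, the sketch below estimates a crisp linear regression by Generalized Maximum Entropy with scipy: coefficients and errors are reparameterized over support points, and the entropy of the support probabilities is maximized subject to the data constraints. The supports and data are illustrative, and the paper's fuzzy extension is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, K = 20, 3
X = rng.normal(size=(T, K))
beta_true = np.array([1.0, -0.5, 2.0])
y = X @ beta_true + rng.normal(scale=0.3, size=T)

M, J = 5, 3
z = np.linspace(-5, 5, M)          # support points for every coefficient
v = np.array([-1.0, 0.0, 1.0])     # support points for every error term

def unpack(theta):
    p = theta[:K * M].reshape(K, M)    # coefficient probabilities
    w = theta[K * M:].reshape(T, J)    # error probabilities
    return p, w

def neg_entropy(theta):
    t = np.clip(theta, 1e-12, 1.0)
    return np.sum(t * np.log(t))       # minimizing this maximizes entropy

def data_constraint(theta):
    p, w = unpack(theta)
    return y - X @ (p @ z) - w @ v     # y = X beta + e must hold

cons = [
    {"type": "eq", "fun": data_constraint},
    {"type": "eq", "fun": lambda t: unpack(t)[0].sum(axis=1) - 1.0},
    {"type": "eq", "fun": lambda t: unpack(t)[1].sum(axis=1) - 1.0},
]
theta0 = np.concatenate([np.full(K * M, 1 / M), np.full(T * J, 1 / J)])
res = minimize(neg_entropy, theta0, constraints=cons,
               bounds=[(1e-9, 1.0)] * theta0.size, method="SLSQP")
print("GME estimate:", unpack(res.x)[0] @ z)
```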

6.
7.
In this paper we focus on the joint problem of tracking humans and recognizing human actions in scenarios such as a kitchen or a setting where a robot cooperates with a human, e.g., in a manufacturing task. In these scenarios, the human interacts with objects directly, physically using or manipulating them, or, e.g., pointing at them as in "Give me that…". Recognizing these types of human actions is difficult because (a) they ought to be recognized independently of scene parameters such as viewing direction, and (b) the actions are parametric, where the parameters are either object-dependent or, as in the case of a pointing direction, convey important information. One common way to achieve recognition is 3D human body tracking followed by action recognition based on the captured tracking data. For the kind of scenarios considered here, we argue that 3D body tracking and action recognition should be seen as an intertwined problem that is primed by the objects on which the actions are applied. In this paper, we look at human body tracking and action recognition from an object-driven perspective. Instead of the space of human body poses, we consider the space of object affordances, i.e., the space of possible actions applied to a given object. This way, 3D body tracking reduces to action tracking in the object (and context) primed parameter space of the object affordances, which reduces the high-dimensional joint space to a low-dimensional action space. In our approach, we use parametric hidden Markov models to represent parametric movements; particle filtering is used to track in the space of action parameters. We demonstrate the approach's effectiveness on synthetic and real image sequences using upper-body single-arm actions that involve objects.
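The following toy sketch shows the core mechanism of tracking in a low-dimensional action-parameter space with a particle filter; a single "pointing angle" parameter and a Gaussian observation model stand in for the paper's parametric hidden Markov models and image likelihoods.

```python
import numpy as np

rng = np.random.default_rng(3)
true_angle = 0.8
def observe(angle):
    return angle + rng.normal(scale=0.1)   # stand-in for image evidence

N = 500
particles = rng.uniform(-np.pi, np.pi, size=N)   # action-parameter space
weights = np.full(N, 1.0 / N)

for _ in range(30):
    z = observe(true_angle)
    particles += rng.normal(scale=0.05, size=N)             # diffusion step
    weights *= np.exp(-0.5 * ((z - particles) / 0.1) ** 2)  # observation likelihood
    weights /= weights.sum()
    # systematic resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        pos = (rng.random() + np.arange(N)) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), pos), N - 1)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

print("estimated action parameter:", np.sum(weights * particles))
```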

8.
In this paper, we propose a method for using particle swarm optimization (PSO) to compute optimal guidance paths for various crowd densities in an agent-based crowd simulation. The inputs of our system are guidance paths that provide hints for the movement directions of agents. Input guidance paths may not be located well (e.g., they may lead to congestion or high traveling cost); therefore, our method adjusts the guidance paths using PSO. We consider several factors for evaluating the quality of a guidance path, including the average traveling time and the interaction distance between agents. We apply our method in several examples. Experimental results show that our method can compute adaptive guidance paths for various crowd densities, and that our system can simulate organized crowds that move in the directions specified by the guidance paths.
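A minimal PSO sketch in the spirit of the paper: guidance paths are encoded as free waypoints between a start and a goal, and a stand-in cost (path length plus a congestion penalty around a hypothetical hotspot) replaces the paper's traveling-time and interaction-distance criteria.

```python
import numpy as np

rng = np.random.default_rng(4)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
hotspot = np.array([5.0, 0.0])   # stand-in for a congested region

def cost(waypoints):
    # proxy for the paper's criteria: travel distance plus a congestion penalty
    pts = np.vstack([start, waypoints.reshape(-1, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = np.sum(np.exp(-np.linalg.norm(pts - hotspot, axis=1)))
    return length + 5.0 * penalty

D = 2 * 3                                   # three free 2D waypoints per path
pos = rng.uniform(-2, 12, size=(30, D))
vel = np.zeros((30, D))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(200):
    r1, r2 = rng.random((30, D)), rng.random((30, D))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("adjusted guidance waypoints:\n", gbest.reshape(-1, 2))
```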

9.
ExPosition is a new comprehensive R package that provides crisp graphics and implements multivariate analysis methods based on the singular value decomposition (SVD). The core techniques implemented in ExPosition are principal components analysis, (metric) multidimensional scaling, correspondence analysis, and several of their recent extensions, such as barycentric discriminant analyses (e.g., discriminant correspondence analysis), multi-table analyses (e.g., multiple factor analysis, Statis, and distatis), and non-parametric resampling techniques (e.g., permutation and bootstrap). Several examples highlight the major differences between ExPosition and similar packages. Finally, future directions for ExPosition are discussed.

10.
Spatially aware handheld displays are a promising approach to interact with complex information spaces in a more natural way by extending the interaction space from the 2D surface to the 3D physical space around them. This is achieved by utilizing their spatial position and orientation for interaction purposes. Technical solutions for spatially tracked displays already exist in research laboratories, e.g., embedded in a tabletop environment. Along with a large stationary screen, such multi-display systems provide a rich design space with a variety of benefits to users, e.g., the explicit support of co-located parallel work and collaboration. As we see a great future in the underlying interaction principles, the question is how the technology can be made accessible to the public. With our work, we want to address this issue. In the long term, we envision a low-cost tangible display ecosystem that is suitable for everyday usage and supports both active displays (e.g., the iPad) and passive projection media (e.g., paper screens and everyday objects such as a mug). The two major contributions of this article are a presentation of an exciting design space and a requirement analysis regarding its technical realization with special focus on a broad adoption by the public. In addition, we present a proof of concept system that addresses one technical aspect of this ecosystem: the spatial tracking of tangible displays with a consumer depth camera (Kinect).

11.
Analysis of variance (ANOVA) is an important method in exploratory and confirmatory data analysis. The simplest type of ANOVA is one-way ANOVA for comparison among means of several populations. In this article, we extend one-way ANOVA to a case where observed data are fuzzy observations rather than real numbers. Two real-data examples are given to show the performance of this method.
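One simple way to see the ingredients of a fuzzy one-way ANOVA (not the article's exact procedure): with triangular fuzzy observations, the classical F statistic can be computed on the modes and on the lower/upper envelopes, giving an illustrative band for the test statistic.

```python
import numpy as np
from scipy import stats

# three groups of triangular fuzzy observations (left, mode, right);
# the values are hypothetical
groups = [
    np.array([(3.0, 4.0, 5.0), (4.5, 5.0, 5.5), (3.5, 4.5, 5.5)]),
    np.array([(5.0, 6.0, 7.0), (6.0, 6.5, 7.0), (5.5, 6.0, 6.5)]),
    np.array([(4.0, 4.5, 5.0), (4.0, 5.0, 6.0), (4.5, 5.5, 6.5)]),
]

# classical one-way ANOVA on the modes and on the two envelopes
for k, name in [(1, "modes"), (0, "left ends"), (2, "right ends")]:
    f, p = stats.f_oneway(*[g[:, k] for g in groups])
    print(f"{name}: F = {f:.2f}, p = {p:.3f}")
```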

12.
13.
Data stream values are often associated with multiple aspects. For example, each value observed at a given time-stamp from environmental sensors may have an associated type (e.g., temperature, humidity) as well as a location. Time-stamp, type, and location are three aspects that can be modeled using a tensor (high-order array). However, the time aspect is special: it has a natural ordering, and successive time-ticks usually have correlated values. Standard multiway analysis ignores this structure. To capture it, we propose 2 Heads Tensor Analysis (2-heads), which provides a qualitatively different treatment of time. Unlike most existing approaches, which use a PCA-like summarization scheme for all aspects, 2-heads treats the time aspect carefully, combining the power of classic multilinear analysis with wavelets and leading to a powerful mining tool. Furthermore, 2-heads has several other advantages: (a) it can be computed incrementally in a streaming fashion, (b) it has a provable error guarantee, and (c) it achieves significant compression ratios compared with competitors. Finally, we show experiments on real datasets and illustrate how 2-heads reveals interesting trends in the data. This is an extended abstract of an article published in the Data Mining and Knowledge Discovery journal.
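The sketch below imitates the two "heads" on a toy time x type x location tensor: one Haar wavelet step along the time mode, followed by PCA-like (truncated SVD) bases on the non-time modes. It is a batch simplification; 2-heads itself is incremental and comes with error guarantees.

```python
import numpy as np

rng = np.random.default_rng(5)
# toy stream tensor: time x sensor-type x location
X = rng.normal(size=(64, 4, 6)) + np.sin(np.arange(64))[:, None, None]

# treat the time mode specially: one level of a Haar wavelet transform
approx = (X[0::2] + X[1::2]) / np.sqrt(2)
detail = (X[0::2] - X[1::2]) / np.sqrt(2)

# PCA-like summarization (truncated SVD) on each non-time mode of the
# approximation coefficients, in the spirit of a Tucker decomposition
def mode_basis(T, mode, rank):
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :rank]

U_type = mode_basis(approx, 1, 2)
U_loc = mode_basis(approx, 2, 3)
core = np.einsum("tij,ia,jb->tab", approx, U_type, U_loc)
print("compressed core shape:", core.shape)   # 32 x 2 x 3
```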

14.
Successful planning and control of robots strongly depends on the quality of kinematic models, which define mappings between configuration space (e.g., joint angles) and task space (e.g., Cartesian coordinates of the end effector). Often these models are predefined, in which case, for example, unforeseen bodily changes may result in unpredictable behavior. We are interested in a learning approach that can adapt to such changes, be they due to motor or sensory failures or to flexible extensions of the robot body, for example through the use of tools. We focus on learning locally linear forward velocity kinematics models by means of the neuro-evolution approach XCSF. The algorithm learns in a self-supervised manner, executing movements autonomously by means of goal babbling. It preserves actuator redundancies, which can be exploited during movement execution to fulfill current task constraints. For detailed evaluation, we study the performance of XCSF when learning to control an anthropomorphic seven-degrees-of-freedom arm in simulation. We show that XCSF can learn large forward velocity kinematic mappings autonomously and rather independently of the task space representation provided. The resulting mapping is highly suitable for resolving redundancies on the fly during inverse, goal-directed control.
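To illustrate the idea of locally linear forward velocity kinematics, the sketch below fits a single local model (an estimated Jacobian) from babbled joint/task velocity pairs on a toy planar arm, then uses the pseudoinverse plus a null-space projection for redundancy-preserving inverse control. XCSF would evolve a whole population of such local models; that machinery is omitted here.

```python
import numpy as np

rng = np.random.default_rng(6)

def fk(q):
    # planar 3-link arm with unit link lengths (toy stand-in for the 7-DOF arm)
    a = np.cumsum(q)
    return np.array([np.cos(a).sum(), np.sin(a).sum()])

q0 = np.array([0.3, 0.5, -0.2])
# babbling around q0: sample small joint velocities, record task velocities
dQ = rng.normal(scale=1e-3, size=(200, 3))
dX = np.array([fk(q0 + dq) - fk(q0) for dq in dQ])

# one locally linear forward velocity model (a local Jacobian estimate)
B, *_ = np.linalg.lstsq(dQ, dX, rcond=None)
J = B.T                                        # 2 x 3 local Jacobian

# redundancy-preserving inverse control: pseudoinverse plus null-space motion
target_v = np.array([0.1, 0.0])
J_pinv = np.linalg.pinv(J)
null_proj = np.eye(3) - J_pinv @ J
dq = J_pinv @ target_v + null_proj @ np.array([0.0, 0.0, 0.05])
print("joint command:", dq, " task-space check:", J @ dq)
```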

15.
In recent years, the availability of complex data repositories (e.g., multimedia, genomic, and semistructured databases) has paved the way to new potentials in data querying. In this scenario, similarity and fuzzy techniques have proven to be successful principles for effective data retrieval. However, most proposals are domain specific and lack a general and integrated approach for dealing with generalized complex queries, i.e., queries combining multiple conditions, possibly on complex as well as traditional data. To overcome such limitations, much work has been devoted to developing middleware systems that support query processing over multiple repositories. Along a similar line, we present a formal framework to permeate complex similarity and fuzzy queries within a relational database system. As an example, we focus on multimedia data, which is represented in an integrated view together with common database data. We have designed an application layer that relies on an algebraic query language extended with MM-tailored operators, and that maps complex similarity and fuzzy queries to standard SQL statements that can be processed by a relational database system, exploiting standard facilities of modern extensible RDBMSs. To show the applicability of our proposal, we implemented a prototype that provides the user with rich query capabilities, ranging from traditional database queries to complex queries combining Boolean, similarity, and fuzzy predicates on the data.
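A toy version of mapping a fuzzy predicate onto standard SQL, using sqlite3 and a user-defined membership function; the hotel table and the "cheap" membership are invented for the example and do not reflect the paper's algebra or its MM-tailored operators.

```python
import sqlite3

def cheap(price):
    # decreasing linear membership: 1 below 50, 0 above 150
    return max(0.0, min(1.0, (150.0 - price) / 100.0))

db = sqlite3.connect(":memory:")
db.create_function("cheap", 1, cheap)
db.execute("CREATE TABLE hotel (name TEXT, price REAL)")
db.executemany("INSERT INTO hotel VALUES (?, ?)",
               [("A", 40), ("B", 90), ("C", 160)])

# a fuzzy query like 'price IS cheap' mapped to ranked standard SQL
rows = db.execute("""
    SELECT name, price, cheap(price) AS mu
    FROM hotel WHERE cheap(price) > 0 ORDER BY mu DESC
""").fetchall()
for r in rows:
    print(r)
```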

16.
Design of fuzzy systems using neurofuzzy networks (cited 5 times: 0 self-citations, 5 by others)
This paper introduces a systematic approach to fuzzy system design based on a class of neural fuzzy networks built upon a general neuron model. The network structure encodes the knowledge learned in the form of if-then fuzzy rules and processes data following fuzzy reasoning principles. The technique provides a mechanism for obtaining rules covering the whole input/output space as well as the membership functions (including their shapes) for each input variable. Such characteristics are of utmost importance in fuzzy systems design and application. In addition, after learning, it is very simple to extract fuzzy rules in linguistic form. The network has universal approximation capability, a property very useful in, e.g., modeling and control applications. Here we focus on function approximation problems as a vehicle to illustrate the network's usefulness and to evaluate its performance; comparisons with alternative approaches are also included. Both noise-free and noisy data were considered in the computational experiments. The neural fuzzy network developed here, and consequently the underlying approach, has been shown to provide good results from the accuracy, complexity, and system design points of view.
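The miniature below conveys the flavor of such a network: Gaussian membership functions cover the input space, normalized rule firing strengths weight zero-order consequents fitted by least squares, and the learned rules can be read off in linguistic form. It is an ANFIS-style simplification, not the paper's specific architecture.

```python
import numpy as np

rng = np.random.default_rng(7)
# function approximation target (illustrative, not the paper's benchmark)
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=300)

centers = np.linspace(-2, 2, 7)   # membership functions covering the input space
sigma = 0.5

def firing(X):
    # Gaussian membership of each sample in each fuzzy set, normalized
    w = np.exp(-0.5 * ((X - centers) / sigma) ** 2)
    return w / w.sum(axis=1, keepdims=True)

# zero-order Takagi-Sugeno consequents fitted by least squares
W = firing(X)
c, *_ = np.linalg.lstsq(W, y, rcond=None)

# rule extraction in linguistic form
for ctr, out in zip(centers, c):
    print(f"IF x is about {ctr:+.1f} THEN y is about {out:+.2f}")

print("mean absolute error:", np.abs(firing(X) @ c - y).mean())
```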

17.
The bootstrap methodology for functional data and functional estimation targets is considered. A Monte Carlo study analyzes the performance of bootstrap confidence bands (obtained with different resampling methods) for several functional estimators. Some of these estimators (e.g., the trimmed functional mean) rely on depth notions for functional data and have not yet received much attention in the literature. A real data example from cardiology research is also analyzed. On a more theoretical note, a brief discussion provides some insights into the asymptotic validity of the bootstrap methodology when functional data, as well as a functional parameter, are involved.
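A minimal sketch of a bootstrap band for the functional mean: whole curves are resampled and a pointwise percentile band is formed. The paper's study also covers other resampling schemes and depth-based estimators such as the trimmed functional mean, which are not shown here.

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 50)
# hypothetical functional sample: 40 noisy curves on a common grid
curves = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=(40, 50))

B = 2000
boot_means = np.empty((B, 50))
for b in range(B):
    idx = rng.integers(0, 40, size=40)     # resample whole curves
    boot_means[b] = curves[idx].mean(axis=0)

# pointwise 95% percentile band for the functional mean
lo, hi = np.percentile(boot_means, [2.5, 97.5], axis=0)
print("band width at t = 0.5:", hi[25] - lo[25])
```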

18.
Previous work in natural language generation has exploited discourse focus to guide the selection of propositional content and the generation of referring expressions (e.g., pronominalization, definite noun phrase generation). However, there are many other sources of contextual information that can be used to constrain linguistic realization. The realization of certain classes of temporal and spatial referents, for example, can be guided by more detailed models of time and space. Therefore, this article first identifies a number of contextual coordinates or points of reference (known as indexicals in linguistics). Next, we formalize three of these contextual coordinates (topic, time, and space) in a computational model. In doing so, the article describes the use of a Reichenbachian temporal model, which is exploited to guide the realization of verb tense and aspect as well as the realization of temporal referents (e.g., temporal connectives and adverbials such as "meanwhile" and "ten minutes later"). The article then describes a spatial model used to guide the linguistic realization of spatial referents (e.g., "here," "there") and spatial adverbials (e.g., "two miles away"). We discuss the relation between these temporal and spatial models and illustrate their use in generating text from two different application systems. We find that the combined use of topical, temporal, and spatial contextual coordinates enhances the fluency, connectivity, and conciseness of the resulting text.
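A toy rendering of the Reichenbachian idea used for tense realization: the tense follows from the relative ordering of event time E, reference time R, and speech time S. The mapping below is deliberately simplified and ignores aspect and adverbials.

```python
def tense(E, R, S):
    # choose an English tense from the ordering of event, reference,
    # and speech time (simplified Reichenbach scheme)
    if R < S:
        return "past perfect" if E < R else "simple past"
    if R == S:
        return "present perfect" if E < R else "simple present"
    return "future perfect" if E < R else "simple future"

print(tense(E=1, R=2, S=3))   # "she had left"      -> past perfect
print(tense(E=2, R=2, S=2))   # "she leaves"        -> simple present
print(tense(E=4, R=5, S=3))   # "she will have left" -> future perfect
```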

19.
The implementation of quality function deployment based on linguistic data (cited 5 times: 0 self-citations, 5 by others)
Quality function deployment (QFD) is a customer-driven quality management and product development system for achieving higher customer satisfaction. The QFD process involves various inputs in the form of linguistic data, e.g., human perception, judgment, and evaluation of importance or relationship strength. Such data are usually ambiguous and uncertain. The aim of this paper is to examine the implementation of QFD in a fuzzy environment and to develop corresponding procedures for dealing with the fuzzy data. It presents a process model using linguistic variables, fuzzy arithmetic, and defuzzification techniques. Based on an example, the paper further examines the sensitivity of the ranking of technical characteristics to the defuzzification strategy and to the degree of fuzziness of the fuzzy numbers. Results indicate that the selection of the defuzzification strategy and of the membership function is important. The proposed fuzzy approach allows QFD users to avoid subjective and arbitrary quantification of linguistic data. The paper also presents a scheme to represent and interpret the results.
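A small sketch of the fuzzy-QFD computation: linguistic importance and relationship ratings become triangular fuzzy numbers, each technical characteristic gets a fuzzy weighted rating via (approximate) fuzzy arithmetic, and a simple centroid defuzzification produces the ranking whose sensitivity the paper studies. The scale and the needs/characteristics are hypothetical.

```python
# triangular fuzzy numbers as (left, mode, right); the linguistic scale is illustrative
scale = {"weak": (0, 1, 3), "moderate": (3, 5, 7), "strong": (7, 9, 10)}

def tfn_add(a, b): return tuple(x + y for x, y in zip(a, b))
def tfn_mul(a, b): return tuple(x * y for x, y in zip(a, b))  # standard positive-TFN approximation
def centroid(a):   return sum(a) / 3.0                        # one simple defuzzification strategy

importance = {"easy to use": scale["strong"], "durable": scale["moderate"]}
relationship = {   # customer need -> relationship strength per technical characteristic
    "easy to use": {"button size": "strong", "casing alloy": "weak"},
    "durable":     {"button size": "weak",   "casing alloy": "strong"},
}

ratings = {}
for tc in ["button size", "casing alloy"]:
    total = (0.0, 0.0, 0.0)
    for need, imp in importance.items():
        total = tfn_add(total, tfn_mul(imp, scale[relationship[need][tc]]))
    ratings[tc] = total

# ranking of technical characteristics after defuzzification
for tc, tfn in sorted(ratings.items(), key=lambda kv: -centroid(kv[1])):
    print(f"{tc}: {tfn}, crisp rating = {centroid(tfn):.1f}")
```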

20.
In many applications, it is useful to extract structured data from sections of unstructured text. A common approach is to use pattern matching (e.g., regular expressions) or more general grammar-based techniques. In cases where exact templates or grammar fragments are not known, it is possible to use machine learning approaches, based on words or n-grams, to identify the structured data. This is generally a two-stage (train/use) process that cannot easily cope with incremental extensions of the training set. In this paper, we combine a fuzzy grammar-based approach with incremental learning. This enables a set of grammar fragments to evolve incrementally, each time a new example is given, while guaranteeing that it can parse previously seen examples. We propose a novel measure of overlap between fuzzy grammar fragments that can also be used to determine the degree to which a string is parsed by a grammar fragment. This measure of overlap allows us to compare the range of two fuzzy grammar fragments (i.e., to estimate and compare the sets of strings that fuzzily conform to each grammar) without explicitly parsing any strings. A simple application shows the method's validity.
