Similar Literature
20 similar documents retrieved.
1.
Fuzzy data treated as functional data: A one-way ANOVA test approach
The use of the fuzzy scale of measurement to describe a substantial number of observations of real-life attributes or variables is first explored. In contrast to other well-known scales (like nominal or ordinal), a wide class of statistical measures and techniques can be properly applied to analyze fuzzy data. This is possible because the scale can be identified with a special subset of a functional Hilbert space. The identification can be used to develop methods for the statistical analysis of fuzzy data by drawing on techniques from functional data analysis, and vice versa. In this respect, an approach to the FANOVA test is presented and analyzed, and it is then particularized to deal with fuzzy data. The proposed approaches are illustrated by means of a real-life case study.
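To make the functional-data view concrete, here is a minimal sketch of a generic one-way FANOVA-style permutation test for curves sampled on a common grid (for fuzzy data, the rows could be level-wise or support-function representations of the fuzzy observations). This illustrates the general idea only, not the authors' exact test statistic; the function names and the permutation calibration are our own assumptions.

```python
import numpy as np

def fanova_stat(groups):
    """One-way functional ANOVA statistic for curves on a common grid:
    between-group vs. within-group squared L2 norms.
    groups: list of arrays, each of shape (n_i, T)."""
    grand = np.vstack(groups).mean(axis=0)
    between = sum(len(g) * np.sum((g.mean(axis=0) - grand) ** 2) for g in groups)
    within = sum(np.sum((g - g.mean(axis=0)) ** 2) for g in groups)
    return between / within

def permutation_fanova(groups, n_perm=2000, seed=0):
    """Calibrate the statistic by permuting whole curves across groups."""
    rng = np.random.default_rng(seed)
    stat0 = fanova_stat(groups)
    pooled = np.vstack(groups)
    sizes = [len(g) for g in groups]
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)            # shuffles the rows (curves)
        split, start = [], 0
        for s in sizes:
            split.append(perm[start:start + s])
            start += s
        exceed += fanova_stat(split) >= stat0
    return stat0, (exceed + 1) / (n_perm + 1)     # statistic and p-value
```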

2.
This paper systematically studies the minimum input sensitivity analysis problem. The lowest level of sensitivity of system outputs to system inputs is defined as an H- index. A full characterization of the H- index is given, first, in terms of matrix equalities and inequalities, and then in terms of linear matrix inequalities (LMIs), as a dual of the Bounded Real Lemma. A related problem of input observability is also studied, with new necessary and sufficient conditions given, which are necessary for a fault detection system to have a nonzero worst-case fault sensitivity. The above results are applied to the problem of fault detection filter analysis, with numerical examples given to show the effectiveness of the proposed approaches.  相似文献   
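As a rough illustration of what the H₋ index measures, the sketch below estimates it by a brute-force frequency sweep: the smallest singular value of the transfer matrix G(jω) = C(jωI − A)⁻¹B + D, minimized over a user-supplied frequency grid. The paper's contribution is an exact LMI characterization; this grid-based approximation is only meant to convey the definition, and the example system is ours.

```python
import numpy as np

def h_minus_index(A, B, C, D, omegas):
    """Grid estimate of the H-minus index: min over omega of the
    smallest singular value of G(jw) = C (jwI - A)^{-1} B + D."""
    n = A.shape[0]
    smallest = []
    for w in omegas:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        smallest.append(np.linalg.svd(G, compute_uv=False)[-1])
    return min(smallest)

# example: a stable 2-state system, swept over a log-spaced grid
A = np.array([[-1.0, 0.0], [1.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])
print(h_minus_index(A, B, C, D, np.logspace(-2, 2, 400)))
```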

3.
The goal of cluster analysis is to assign observations into clusters so that observations in the same cluster are similar in some sense. Many clustering methods have been developed in the statistical literature, but these methods are inappropriate for clustering family data, which possess intrinsic familial structure. To incorporate the familial structure, we propose a form of penalized cluster analysis with a tuning parameter controlling the tradeoff between the observation dissimilarity and the familial structure. The tuning parameter is selected based on the concept of clustering stability. The effectiveness of the method is illustrated via simulations and an application to a family study of asthma.
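A toy version of the penalized idea, assuming a k-means-style distortion plus a penalty proportional to the number of same-family members placed in other clusters; the paper's actual dissimilarity and penalty may differ, and the names (including lam for the tuning parameter) are ours. families is an integer array giving the family label of each observation.

```python
import numpy as np

def penalized_kmeans(X, families, k, lam, n_iter=50, seed=0):
    """Lloyd-style loop for a penalized criterion: squared distance to
    the cluster centre plus lam times the number of same-family
    members currently assigned to a different cluster."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        for i in range(n):
            relatives = (families == families[i]) & (np.arange(n) != i)
            cost = [np.sum((X[i] - centers[c]) ** 2)
                    + lam * np.sum(labels[relatives] != c)
                    for c in range(k)]
            labels[i] = int(np.argmin(cost))
        for c in range(k):
            if np.any(labels == c):               # leave empty clusters untouched
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers
```

With lam = 0 this reduces to ordinary k-means; increasing lam pulls family members into the same cluster, which is exactly the tradeoff the tuning parameter controls.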

4.
A studentized range test, using a two-stage and a one-stage sampling procedure respectively, is proposed for testing the null hypothesis that the average deviation of the normal means falls within a practical indifference zone. Both the level and the power of the proposed test are controllable, and they are completely independent of the unknown variances. The two-stage procedure is a design-oriented procedure that satisfies certain probability requirements and simultaneously determines the sample sizes required for an experiment; the one-stage procedure is a data-analysis procedure applied after the data have been collected, which can supplement the two-stage procedure when the latter has to end its experiment before the required experimental process is completed. Tables needed for implementing these procedures are given.

5.
For the design of networked control systems, we employ a subband coding technique to use the available data rate efficiently. Such coding schemes have been widely used in signal processing for data compression. We take an approach suited to the control perspective in that the frequency characteristics of a controller are used directly. In particular, we propose a three-step design procedure for the overall controller with the following features: on the coder side, the controller consists of a filter bank whose outputs are subband signals of the control input; on the decoder side, another filter bank reconstructs the subband signals. The decoder can account for random message losses occurring in communication and is designed via an H∞-type method. For the quantizers in the coder, an efficient bit allocation scheme is also developed.

6.
The Fuzzy k-Means clustering model (FkM) is a powerful tool for classifying objects into a set of k homogeneous clusters by means of the membership degrees of each object in each cluster. In FkM, for each object the membership degrees across the clusters must sum to one. Such a constraint may cause meaningless results, especially when noise is present. To avoid this drawback, the constraint can be relaxed, leading to the so-called Possibilistic k-Means clustering model (PkM). Particular attention is paid to the case in which the empirical information is affected by imprecision or vagueness, handled by means of LR fuzzy numbers. An FkM model for LR fuzzy data is first developed, and a PkM model for the same type of data is then proposed. The results of a simulation experiment and of two applications to real-world fuzzy data confirm the validity of both models and indicate some advantages of the possibilistic approach.
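The contrast between the two models is easiest to see in their membership formulas. Below is a minimal sketch for crisp numeric data; the paper works with LR fuzzy data, where the squared Euclidean distance would be replaced by a distance between LR fuzzy numbers. m is the usual fuzzifier and eta the possibilistic scale parameters (one per cluster).

```python
import numpy as np

def fkm_memberships(X, centers, m=2.0):
    """FkM memberships: for each object the degrees sum to one."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
    ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))
    return 1.0 / ratio.sum(-1)

def pkm_memberships(X, centers, eta, m=2.0):
    """PkM memberships: the sum-to-one constraint is dropped, so a
    noise point far from every centre gets low membership everywhere."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))
```

In pkm_memberships each cluster's degree depends only on the distance to that cluster's centre, which is why outliers are no longer forced to claim full membership somewhere.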

7.
A novel method for the robust identification of interpretable fuzzy models is suggested, based on the criterion that the identification errors be least sensitive to data uncertainties and modelling errors. Robustness of the identification errors to unknown disturbances (data uncertainties, modelling errors, etc.) is achieved by bounding, i.e. minimizing, the maximum possible energy-gain from the disturbances to the identification errors. The solution of the energy-gain bounding problem, being robust, improves the performance of the identification method. The flexibility of the proposed framework is shown by designing variable-learning-rate identification algorithms in both deterministic and stochastic settings.

8.
A controller change from a current controller that stabilises the plant to a new controller, designed on the basis of an approximate model of the plant and with guaranteed bounds on the stability properties of the true closed loop, is called a safe controller change. In this paper, we present a model reference approach to determining safe controller changes on the basis of approximate closed-loop models of the plant and robust stability results in the ν-gap metric.

9.
In this paper, we consider the coefficient-based regularized least-squares regression problem with an ℓq regularizer (1 ≤ q ≤ 2) and data-dependent hypothesis spaces. Algorithms in data-dependent hypothesis spaces perform well owing to their flexibility. We conduct a unified error analysis via a stepping-stone technique, and an empirical covering number technique is also employed to improve the sample error. Compared with existing results, we make several improvements: first, we obtain a significantly sharper learning rate, arbitrarily close to O(m⁻¹) under reasonable conditions, which is regarded as the best learning rate in learning theory; second, our results cover the case q = 1, which is novel; finally, our results hold under very general conditions.
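For intuition, a minimal sketch of the q = 1 case: coefficient-based regression in the span of the kernel sections, solved by plain ISTA (proximal gradient with soft-thresholding). This is a generic solver for that objective, not the paper's analysis; K is assumed to be the kernel Gram matrix on the sample, and the names are ours.

```python
import numpy as np

def ista_kernel_l1(K, y, lam, n_iter=500):
    """Proximal-gradient (ISTA) solver for
        min_a (1/m) * ||K a - y||^2 + lam * ||a||_1,
    i.e. the coefficient-based problem with q = 1, where the learned
    function is f(x) = sum_i a_i K(x, x_i)."""
    m = len(y)
    a = np.zeros(m)
    step = 1.0 / (2.0 / m * np.linalg.norm(K, 2) ** 2)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = 2.0 / m * K.T @ (K @ a - y)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return a
```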

10.
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of the one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests: the Welch test, the James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best of the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples, while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
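A compact sketch of the PB idea under the usual normal model: simulate group means and variances under the null using the observed variances, and recompute a standardized between-group statistic. The exact statistic in the paper may differ in detail; this version uses the familiar weighted sum of squared deviations, and the function names are ours.

```python
import numpy as np

def pb_anova(samples, n_boot=5000, seed=0):
    """Parametric bootstrap test of equal normal means with unequal
    variances. samples: list of 1-D arrays, one per group."""
    rng = np.random.default_rng(seed)
    n = np.array([len(s) for s in samples], float)
    xbar = np.array([np.mean(s) for s in samples])
    s2 = np.array([np.var(s, ddof=1) for s in samples])

    def stat(xb, v):
        w = n / v                                     # precision weights
        return np.sum(w * (xb - np.sum(w * xb) / np.sum(w)) ** 2)

    t_obs = stat(xbar, s2)
    exceed = 0
    for _ in range(n_boot):
        xb = rng.normal(0.0, np.sqrt(s2 / n))         # group means under H0
        v = s2 * rng.chisquare(n - 1) / (n - 1)       # resampled variances
        exceed += stat(xb, v) >= t_obs
    return t_obs, (exceed + 1) / (n_boot + 1)         # statistic and p-value
```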

11.
The guaranteed cost control problem for multimodeling systems with norm-bounded uncertainty is investigated. The main contribution of this paper is a new ε-independent controller obtained by solving reduced-order slow and fast algebraic Riccati equations (AREs) whose dimensions are smaller than that of the full-order multiparameter algebraic Riccati equation (MARE). It is shown that if these AREs have a positive definite stabilizing solution, then the closed-loop system is quadratically stable and attains the cost bound.
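The computational building block here is the algebraic Riccati equation. A minimal SciPy sketch (toy matrices of our own, not the paper's multiparameter setup) shows how a single reduced-order ARE of the standard form A'X + XA − XBR⁻¹B'X + Q = 0 is solved and checked for a stabilizing solution:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = solve_continuous_are(A, B, Q, R)    # positive definite stabilizing solution
K = np.linalg.solve(R, B.T @ X)         # associated state feedback
print(np.linalg.eigvals(A - B @ K))     # closed-loop poles in the left half-plane
```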

12.
In this paper, the optimal strategies for discrete-time linear quadratic zero-sum games related to the H-infinity optimal control problem are solved forward in time without knowing the system dynamics matrices. The idea is to solve for an action-dependent value function Q(x,u,w) of the zero-sum game instead of the state-dependent value function V(x) that satisfies the corresponding game algebraic Riccati equation (GARE). Since the state and action spaces are continuous, two action networks and one critic network are used and adaptively tuned forward in time using adaptive critic methods. The result is a Q-learning approximate dynamic programming (ADP) model-free approach that solves the zero-sum game forward in time. It is shown that the critic converges to the game value function and the action networks converge to the Nash equilibrium of the game. Proofs of convergence of the algorithm are given, and it is proven that the algorithm is, in effect, a model-free iterative algorithm for solving the GARE of the linear quadratic discrete-time zero-sum game. The effectiveness of the method is shown by designing an H-infinity autopilot for an F-16 aircraft.
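A schematic of the Q-learning idea, assuming a quadratic Q-function Q(x,u,w) = z'Hz with z = [x; u; w], fitted by batch least squares from observed transitions; the game value matrix P is the Schur complement of the action block of H. This is a simplified batch sketch of the model-free principle (no system matrices appear anywhere), whereas the paper tunes the networks adaptively in forward time; all names are ours.

```python
import numpy as np

def quad_basis(z):
    # quadratic feature vector vec(z z^T); least squares then fits H
    return np.outer(z, z).ravel()

def value_matrix(H, n_x):
    # game value x' P x implied by H via a Schur complement
    return H[:n_x, :n_x] - H[:n_x, n_x:] @ np.linalg.solve(
        H[n_x:, n_x:], H[:n_x, n_x:].T)

def zero_sum_q_iteration(data, n_x, n_sweeps=30):
    """data: list of (x, u, w, r, x_next) transitions gathered with
    exploratory inputs. Returns the Q-kernel H and value matrix P."""
    n_z = sum(len(v) for v in data[0][:3])
    H = np.eye(n_z)
    for _ in range(n_sweeps):
        P = value_matrix(H, n_x)
        Phi = np.array([quad_basis(np.concatenate([x, u, w]))
                        for x, u, w, _, _ in data])
        tgt = np.array([r + xn @ P @ xn for _, _, _, r, xn in data])
        h, *_ = np.linalg.lstsq(Phi, tgt, rcond=None)
        H = h.reshape(n_z, n_z)
        H = 0.5 * (H + H.T)                     # symmetrize the estimate
    return H, value_matrix(H, n_x)
```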

13.
Approximating clusters in very large (VL = unloadable) data sets has been considered from many angles. The proposed approach has three basic steps: (i) progressive sampling of the VL data, terminated when a sample passes a statistical goodness of fit test; (ii) clustering the sample with a literal (or exact) algorithm; and (iii) non-iterative extension of the literal clusters to the remainder of the data set. Extension accelerates clustering on all (loadable) data sets. More importantly, extension provides feasibility—a way to find (approximate) clusters—for data sets that are too large to be loaded into the primary memory of a single computer. A good generalized sampling and extension scheme should be effective for acceleration and feasibility using any extensible clustering algorithm. A general method for progressive sampling in VL sets of feature vectors is developed, and examples are given that show how to extend the literal fuzzy (c-means) and probabilistic (expectation-maximization) clustering algorithms onto VL data. The fuzzy extension is called the generalized extensible fast fuzzy c-means (geFFCM) algorithm and is illustrated using several experiments with mixtures of five-dimensional normal distributions.
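Step (iii) is what makes VL data feasible: once the sample has been clustered, the remainder never needs to be loaded at once. Below is a minimal sketch of the fuzzy extension step, applying the standard fuzzy c-means membership formula chunk by chunk with the sample's centres held fixed. The chunking generator and names are ours; geFFCM itself adds the progressive-sampling and goodness-of-fit machinery around this step.

```python
import numpy as np

def extend_fcm(centers, chunks, m=2.0):
    """Non-iterative extension: label the unloadable remainder chunk
    by chunk using centres fitted on the accepted sample.
    chunks: iterator of arrays, each small enough to fit in memory."""
    for X in chunks:
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))
        yield 1.0 / ratio.sum(-1)             # fuzzy memberships for this chunk
```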

14.
15.
A goodness-of-fit testing procedure for Archimedean copula (AC) models is developed based on right-censored data. The proposed approach extends an existing method, which is suitable for the Clayton model, to general AC models. Asymptotic properties of the proposed test statistics under the true model assumption are derived. Simulation analysis shows that the proposed test has reasonable performance. Finally, two real data examples are analyzed for illustrative purposes.

16.
17.
A model-based fault detection filter is developed for structural health monitoring of a simply supported beam. The structural damage represented in the plant model is shown to decompose into a known fault direction vector, fixed by the damage location, and an arbitrary fault magnitude representing the extent of the damage. According to detection filter theory, if damage occurs, under certain circumstances the fault will be uniquely detected and identified through an associated invariance in the direction imposed on the fault detection filter residuals. The spectral algorithm used to design the detection filter is based on left eigenstructure assignment, which accommodates system sensitivities that appear as ill-conditioned eigenvector matrices in the construction of the detection filter gains. The detection filter is applied to data from an aluminum simply supported beam with four piezoelectric sensors and one piezoelectric actuator. When the structure is excited at its first natural frequency, damage in the form of a 5 mm saw cut on one side of the beam is detected and localized.

18.
Recently, Lin and Tsai, and Yang et al., proposed secret image sharing schemes with steganography and authentication. These schemes divide a secret image into shadows and embed the shadows in cover images to form stego images that can be transmitted to authorized recipients securely. They also include authentication mechanisms to verify the integrity of the stego images so that the secret image can be restored correctly. Unfortunately, these schemes have two shortcomings. First, the weak authentication cannot adequately protect the integrity of the stego images, so the secret image may not be recovered completely. Second, the visual quality of the stego images is not good enough. To overcome these drawbacks, we propose in this paper a novel secret image sharing scheme combining steganography and authentication based on the Chinese remainder theorem (CRT). The proposed scheme not only improves the authentication ability but also enhances the visual quality of the stego images. The experimental results show that the proposed scheme is superior to the previously existing methods.
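The CRT ingredient can be illustrated with a generic Mignotte-style threshold sharing of an integer secret: shares are residues modulo pairwise-coprime moduli, and any sufficiently large subset reconstructs the secret by the Chinese remainder theorem. This is a minimal sketch of CRT sharing only, with moduli and secret chosen by us; the paper's scheme additionally embeds the shares steganographically and adds authentication.

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)          # modular inverse, Python 3.8+
    return x % M

# (2, 3)-threshold sharing: the secret must lie strictly between the
# largest modulus and the product of the two smallest ones
moduli = [251, 253, 255]                      # pairwise coprime
secret = 12345                                # 255 < 12345 < 251 * 253
shares = [(secret % m, m) for m in moduli]

r, m = zip(*shares[:2])                       # any two shares suffice
print(crt(r, m) == secret)                    # True
```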

19.
The selection of a subset of input variables is often based on first constructing a ranking that orders the variables according to a given criterion of relevancy. The objective is then to linearize the search by estimating the quality of subsets containing the topmost ranked variables. An algorithm devised to rank input variables according to their usefulness in the context of a learning task is presented. The algorithm combines simple, classical techniques, such as correlation and orthogonalization, into a fast ranker that also deals explicitly with redundancy. Additionally, the proposed ranker is endowed with a simple polynomial expansion of the input variables to cope with nonlinear problems. Comparison with state-of-the-art rankers shows that this combination of simple components yields high-quality rankings of input variables. The experimental validation is performed on a wide range of artificial data sets, and the quality of the rankings is assessed in a ROC-inspired setting, to avoid biased estimates due to any particular learning algorithm.
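A minimal sketch of the correlation-plus-orthogonalization loop: pick the feature most correlated with the target, project it out of the remaining candidates so that redundant copies lose their score, and repeat. The polynomial expansion for nonlinear problems is omitted, and all names are our own assumptions.

```python
import numpy as np

def rank_features(X, y):
    """Greedy correlation ranking with Gram-Schmidt deflation to
    handle redundancy. Returns feature indices, most relevant first."""
    X = (X - X.mean(0)) / (X.std(0) + 1e-12)
    y = y - y.mean()
    remaining = list(range(X.shape[1]))
    ranking = []
    R = X.copy()
    while remaining:
        score = [abs(R[:, j] @ y) / (np.linalg.norm(R[:, j]) + 1e-12)
                 for j in remaining]
        best = remaining[int(np.argmax(score))]
        ranking.append(best)
        remaining.remove(best)
        q = R[:, best] / (np.linalg.norm(R[:, best]) + 1e-12)
        for j in remaining:        # deflate: redundant features lose their score
            R[:, j] -= (q @ R[:, j]) * q
    return ranking
```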

20.
In most pattern recognition (PR) applications, it is advantageous if the accuracy (or error rate) of the classifier can be evaluated or bounded prior to testing it in a real-life setting. It is also well known that if the two class-conditional distributions have a large overlapping volume, the classification accuracy is poor; almost all the available work on class overlap deals with the two-class case. When classification accuracy is used as a criterion for evaluating a PR system, the points within the overlapping volume tend to lead to maximal misclassification. Unfortunately, the computation of the indices that quantify the overlapping volume is expensive. In this vein, we propose a strategy of using a prototype reduction scheme (PRS) to compute them approximately but quickly. We first demonstrate that this is an extremely expedient proposition: by completely discarding the points not retained by the PRS, we obtain a reduced set of sample points from which the measures of the overlapping volume can be computed. We are not aware of any reported scheme that discards "irrelevant" sample (training) points while attaining almost comparable accuracy. The resulting values are comparable to those obtained with the original training set (the one that considers all the data points), even though the computations required to obtain the prototypes and the corresponding measures are significantly cheaper. The proposed method has been rigorously tested on artificial and real-life datasets, and the results obtained are, in our opinion, quite impressive; the reduced computation is sometimes faster by two orders of magnitude.
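As one concrete example of a PRS, the sketch below implements condensed nearest neighbour (CNN), a classical reduction scheme: it keeps a subset of prototypes that still classifies every training point correctly with 1-NN. The overlap indices discussed above would then be computed on the reduced set X[keep], y[keep] instead of the full data. The paper does not prescribe this particular PRS; it is used here purely for illustration.

```python
import numpy as np

def condensed_nn(X, y, seed=0):
    """Condensed nearest-neighbour prototype selection.
    Returns indices of the retained prototypes."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = [order[0]]
    changed = True
    while changed:                       # sweep until no point is misclassified
        changed = False
        for i in order:
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:
                keep.append(i)           # absorb misclassified point
                changed = True
    return np.array(keep)                # overlap measures computed on X[keep]
```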

