Similar documents
 20 similar documents found (search time: 15 ms)
1.
We develop a Bayesian approach for the selection of skew in multivariate skew t distributions constructed through hidden conditioning in the manners suggested by either Azzalini and Capitanio (2003) or Sahu et al. (2003). We show that the skew coefficients for each margin are the same for the standardized versions of both distributions. We introduce binary indicators to denote whether there is symmetry, or skew, in each dimension. We adopt a proper beta prior on each non-zero skew coefficient, and derive the corresponding prior on the skew parameters. In both distributions we show that as the degrees of freedom increases, the prior smoothly bounds the non-zero skew parameters away from zero and identifies the posterior. We estimate the model using Markov chain Monte Carlo (MCMC) methods by exploiting the conditionally Gaussian representation of the skew t distributions. This allows for the search through the posterior space of all possible combinations of skew and symmetry in each dimension. We show that the proposed method works well in a simulation setting, and employ it in two multivariate econometric examples. The first involves the modeling of foreign exchange rates and the second is a vector autoregression for intra-day electricity spot prices. The approach selects skew along the original coordinates of the data, which proves insightful in both examples.

2.
This paper is concerned with the problems of robust H∞ and H2 filtering for 2-dimensional (2-D) discrete-time linear systems described by a Fornasini-Marchesini second model with matrices that depend affinely on convex-bounded uncertain parameters. By a suitable transformation, the system is represented by an equivalent difference-algebraic representation. A parameter-dependent Lyapunov function approach is then proposed for the design of 2-D stationary discrete-time linear filters that ensure either a prescribed H∞ performance or H2 performance for all admissible uncertain parameters. The filter designs are given in terms of linear matrix inequalities. Numerical examples illustrate the effectiveness of the proposed filter design methods.

3.
This paper addresses the problem of designing observers for a class of uncertain neutral systems. The uncertainties are parametric and norm-bounded. Both robust observation and robust H∞ observation methods are developed by using linear state-delayed observers. In the case of robust observation, sufficient conditions are established for asymptotic stability of the system, independent of the time delay. The results are then extended to robust H∞ observation, which renders the augmented system asymptotically stable independent of delay with a guaranteed performance measure. Furthermore, a memoryless state-estimate feedback is designed to stabilize the closed-loop neutral system. In all cases, the gain matrices are determined by a linear matrix inequality approach. Two numerical examples are presented to illustrate the validity of the theoretical results.

4.
Various design and model selection methods are available for supersaturated designs having more factors than runs, but little research is available on their comparison and evaluation. Simulated experiments are used to evaluate the use of E(s2)-optimal and Bayesian D-optimal designs and to compare three analysis strategies representing regression, shrinkage and a novel model-averaging procedure. Suggestions are made for choosing the values of the tuning constants for each approach. Findings include that (i) the preferred analysis is via shrinkage; (ii) designs with similar numbers of runs and factors can be effective for a considerable number of active effects of only moderate size; and (iii) unbalanced designs can perform well. Some comments are made on the performance of the design and analysis methods when effect sparsity does not hold.
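The preferred shrinkage analysis can be illustrated with a generic coordinate-descent lasso. This is a common shrinkage method for screening designs, not necessarily the specific procedure or tuning-constant choices of the paper; the function name is illustrative.

```python
# Generic coordinate-descent lasso sketch: a shrinkage analysis that sets
# small coefficients exactly to zero, declaring those factors inactive.
def lasso_cd(X, y, lam, n_iter=100):
    """X: list of rows (columns assumed standardized); returns coefficients."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))  # column-residual inner product
            z = sum(X[i][j] ** 2 for i in range(n))
            # soft-thresholding update shrinks small effects to exactly zero
            sign = 1.0 if rho > 0 else -1.0
            beta[j] = sign * max(abs(rho) - lam, 0.0) / z if z else 0.0
    return beta
```

With the penalty lam chosen by some criterion, factors whose coefficients are shrunk to zero are declared inactive.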

5.
A model-based fault detection filter is developed for structural health monitoring of a simply supported beam. The structural damage represented in the plant model is shown to decompose into a known fault direction vector maintaining a fixed direction, dependent on the damage location, and an arbitrary fault magnitude representing the extent of the damage. According to detection filter theory, if damage occurs, under certain circumstances the fault will be uniquely detected and identified through an associated invariance in the direction imposed on the fault detection filter residuals. The spectral algorithm used to design the detection filter is based on a left eigenstructure assignment approach which accommodates system sensitivities that are revealed as ill-conditioned matrices formed from the eigenvectors in the construction of the detection filter gains. The detection filter is applied to data from an aluminum simply supported beam with four piezoelectric sensors and one piezoelectric actuator. By exciting the structure at the first natural frequency, damage in the form of a 5 mm saw cut made to one side of the beam is detected and localized.

6.
A string-based negative selection algorithm is an immune-inspired classifier that infers a partitioning of a string space Σ^ℓ into “normal” and “anomalous” partitions from a training set S containing only samples from the “normal” partition. The algorithm generates a set of patterns, called “detectors”, to cover regions of the string space containing none of the training samples. Strings that match at least one of these detectors are then classified as “anomalous”. A major problem with existing implementations of this approach is that the detector generating step needs exponential time in the worst case. Here we show that for the two most widely used kinds of detectors, the r-chunk and r-contiguous detectors based on partial matching to substrings of length r, negative selection can be implemented more efficiently by avoiding generating detectors altogether: for each detector type, training set S ⊆ Σ^ℓ and parameter r ≤ ℓ one can construct an automaton whose acceptance behaviour is equivalent to the algorithm’s classification outcome. The resulting runtime is O(|S|·ℓ·r·|Σ|) for constructing the automaton in the training phase and O(ℓ) for classifying a string.
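For r-chunk detectors, the classification outcome can be sketched directly from the training set, with no detector or automaton construction: a string is anomalous iff, at some position, its length-r window never occurs at that position in any training string. This sketch reproduces only the outcome, not the automaton construction that gives the stated runtime; function names are mine.

```python
# Direct sketch of r-chunk negative-selection classification: a string is
# "anomalous" iff some length-r window at position i occurs in no training
# string at position i (equivalent outcome; not the automaton construction).
def train_chunks(samples, r):
    """Map each position i to the set of length-r chunks observed there."""
    ell = len(samples[0])
    return {i: {s[i:i + r] for s in samples} for i in range(ell - r + 1)}

def is_anomalous(chunks, x, r):
    """Anomalous iff some window of x would be matched by an r-chunk detector,
    i.e. it is absent from the training chunks at that position."""
    return any(x[i:i + r] not in seen for i, seen in chunks.items())
```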

7.
R package flexmix provides flexible modelling of finite mixtures of regression models using the EM algorithm. Several new features of the software are introduced, such as fixed and nested varying effects for mixtures of generalized linear models, and multinomial regression for a priori probabilities given concomitant variables. The use of the software, including model selection, is demonstrated on a logistic regression example.

8.
This paper describes a robust centroid method, which is a variant of principal component analysis. A genetic local search algorithm is presented to perform the calculations. Simulations are carried out to appraise the performance of the genetic local search algorithm. A real data set with missing data and multiple outliers is analyzed.

9.
A bootstrap approach to the multi-sample test of means for imprecisely valued sample data is introduced. For this purpose imprecise data are modelled in terms of fuzzy values. Populations are identified with fuzzy-valued random elements, often referred to in the literature as fuzzy random variables. An example illustrates the use of the suggested method. Finally, the adequacy of the bootstrap approach to test the multi-sample hypothesis of means is discussed through a simulation comparative study.

10.
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of the one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests: the Welch test, the James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples, while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
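The parametric bootstrap idea can be sketched as follows: compute a weighted between-group statistic from the observed data, then repeatedly redraw group means and variances from their estimated null distributions and count how often the simulated statistic exceeds the observed one. This is a simplified sketch; the paper's exact test statistic may differ, and the function name is mine.

```python
import random
from statistics import mean, variance

def pb_anova(groups, n_boot=2000, seed=0):
    """Parametric-bootstrap p-value for equality of several normal means
    under unequal variances (a sketch in the spirit of the PB test)."""
    rng = random.Random(seed)
    ns = [len(g) for g in groups]
    xbars = [mean(g) for g in groups]
    s2s = [variance(g) for g in groups]          # unbiased sample variances

    def stat(xb, s2):
        # weighted between-group sum of squares, weights n_i / s_i^2
        w = [n / v for n, v in zip(ns, s2)]
        gm = sum(wi * xi for wi, xi in zip(w, xb)) / sum(w)
        return sum(wi * (xi - gm) ** 2 for wi, xi in zip(w, xb))

    t_obs = stat(xbars, s2s)
    count = 0
    for _ in range(n_boot):
        # under H0: group means ~ N(0, s_i^2/n_i); variances ~ scaled chi-square
        xb = [rng.gauss(0.0, (v / n) ** 0.5) for n, v in zip(ns, s2s)]
        s2 = [v * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n - 1)) / (n - 1)
              for n, v in zip(ns, s2s)]
        if stat(xb, s2) >= t_obs:
            count += 1
    return count / n_boot
```

Small p-values indicate that at least one group mean differs.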

11.
We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and b = ⌈log|A|⌉, where A is the algorithm’s action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^{O(1/b)} for any choice of Θ(1) ≤ b ≤ log k.

12.
Conjoint choice experiments elicit individuals’ preferences for the attributes of a good by asking respondents to indicate repeatedly their most preferred alternative in a number of choice sets. However, conjoint choice experiments can be used to obtain more information than that revealed by the individuals’ single best choices. A way to obtain extra information is by means of best-worst choice experiments in which respondents are asked to indicate not only their most preferred alternative but also their least preferred one in each choice set. To create D-optimal designs for these experiments, an expression for the Fisher information matrix for the maximum-difference model is developed. Semi-Bayesian D-optimal best-worst choice designs are derived and compared with commonly used design strategies in marketing in terms of the D-optimality criterion and prediction accuracy. Finally, it is shown that best-worst choice experiments yield considerably more information than choice experiments.

13.
In conjoint experiments, each respondent receives a set of profiles to rate. Sometimes, the profiles are expensive prototypes that respondents have to test before rating them. Designing these experiments involves determining how many and which profiles each respondent has to rate and how many respondents are needed. To that end, the set of profiles offered to a respondent is treated as a separate block in the design and a random respondent effect is used in the model because profile ratings from the same respondent are correlated. Optimal conjoint designs are then obtained by means of an adapted version of an algorithm for finding D-optimal split-plot designs. A key feature of the design construction algorithm is that it returns the optimal number of respondents and the optimal number of profiles each respondent has to evaluate for a given number of profiles. The properties of the optimal designs are described in detail and some practical recommendations are given.

14.
The goal of cluster analysis is to assign observations into clusters so that observations in the same cluster are similar in some sense. Many clustering methods have been developed in the statistical literature, but these methods are inappropriate for clustering family data, which possess intrinsic familial structure. To incorporate the familial structure, we propose a form of penalized cluster analysis with a tuning parameter controlling the tradeoff between the observation dissimilarity and the familial structure. The tuning parameter is selected based on the concept of clustering stability. The effectiveness of the method is illustrated via simulations and an application to a family study of asthma.
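A toy version of such a penalized criterion, for one-dimensional observations, trades off squared distance to cluster centres against a penalty lam for each same-family pair split across clusters. This greedy sketch is for intuition only; the paper's criterion, algorithm, and stability-based tuning are not reproduced, and all names are illustrative.

```python
# Toy penalized clustering: k-means-style cost plus a familial penalty.
# lam = 0 recovers plain distance-based clustering; large lam forces
# family members into the same cluster.
def penalized_cluster(points, families, k, lam, n_iter=50):
    labels = [i % k for i in range(len(points))]
    for _ in range(n_iter):
        centres = []
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            centres.append(sum(members) / len(members) if members else 0.0)
        changed = False
        for i, x in enumerate(points):
            def cost(c):
                # penalty: same-family observations currently outside cluster c
                split = sum(1 for j in range(len(points))
                            if j != i and families[j] == families[i]
                            and labels[j] != c)
                return (x - centres[c]) ** 2 + lam * split
            best = min(range(k), key=cost)
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break
    return labels
```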

15.
The selection of a subset of input variables is often based on the previous construction of a ranking to order the variables according to a given criterion of relevancy. The objective is then to linearize the search, estimating the quality of subsets containing the topmost ranked variables. An algorithm devised to rank input variables according to their usefulness in the context of a learning task is presented. This algorithm is the result of a combination of simple and classical techniques, like correlation and orthogonalization, which allow the construction of a fast algorithm that also deals explicitly with redundancy. Additionally, the proposed ranker is endowed with a simple polynomial expansion of the input variables to cope with nonlinear problems. The comparison with some state-of-the-art rankers showed that this combination of simple components is able to yield high-quality rankings of input variables. The experimental validation is made on a wide range of artificial data sets and the quality of the rankings is assessed using a ROC-inspired setting, to avoid biased estimations due to any particular learning algorithm.
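The correlation-plus-orthogonalization idea can be sketched as follows: repeatedly select the variable most correlated with the target, then project it out of the remaining variables, so that redundant (collinear) variables lose their apparent relevance. This is a simplified sketch without the polynomial expansion; function names are mine.

```python
# Greedy correlation ranker with orthogonalization to handle redundancy.
def _center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rank_variables(cols, y):
    """Rank the columns in `cols` by usefulness for predicting y."""
    cols = [_center(c) for c in cols]
    y = _center(y)
    remaining = list(range(len(cols)))
    order = []
    while remaining:
        # pick the remaining column with the largest |correlation| with y
        def score(j):
            denom = (_dot(cols[j], cols[j]) * _dot(y, y)) ** 0.5
            return abs(_dot(cols[j], y)) / denom if denom > 1e-12 else 0.0
        best = max(remaining, key=score)
        order.append(best)
        remaining.remove(best)
        # project the chosen column out of the remaining ones; an exact
        # duplicate of it becomes the zero vector (score 0 afterwards)
        b = cols[best]
        nb = _dot(b, b)
        if nb > 1e-12:
            for j in remaining:
                coef = _dot(cols[j], b) / nb
                cols[j] = [cj - coef * bi for cj, bi in zip(cols[j], b)]
    return order
```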

16.
Recently, Lin and Tsai, and Yang et al., proposed secret image sharing schemes with steganography and authentication, which divide a secret image into shadows and embed the produced shadows in cover images to form stego images that can be transmitted to authorized recipients securely. In addition, these schemes also involve authentication mechanisms to verify the integrity of the stego images so that the secret image can be restored correctly. Unfortunately, these schemes still have two shortcomings. One is that the weak authentication cannot well protect the integrity of the stego images, so the secret image cannot be recovered completely. The other is that the visual quality of the stego images is not good enough. To overcome these drawbacks, in this paper we propose a novel secret image sharing scheme combining steganography and authentication based on the Chinese remainder theorem (CRT). The proposed scheme not only improves the authentication ability but also enhances the visual quality of the stego images. The experimental results show that the proposed scheme is superior to the previously existing methods.
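The CRT machinery at the heart of such schemes can be sketched with a Mignotte-style (t, n) threshold scheme: each shadow is the secret reduced modulo a distinct modulus, and any t shadows reconstruct the secret by Chinese remaindering, provided the secret is smaller than the product of any t moduli. This sketches only the underlying number theory; the paper's steganographic embedding and authentication bits are not shown.

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder reconstruction for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

def make_shadows(secret, moduli):
    """Mignotte-style shadows: shadow_i = secret mod m_i.  The secret must be
    below the product of the t smallest moduli for t-out-of-n recovery."""
    return [secret % m for m in moduli]
```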

17.
This paper deals with the class of continuous-time singular linear systems with multiple time-varying delays in a range. The global exponential stability problem of this class of systems is addressed. Delay-range-dependent sufficient conditions such that the system is regular, impulse-free and α-stable are developed in the linear matrix inequality (LMI) setting. Moreover, an estimate of the convergence rate of such stable systems is presented. A numerical example is employed to show the usefulness of the proposed results.

18.
Consider the situation where the Structuration des Tableaux à Trois Indices de la Statistique (STATIS) methodology is applied to a series of studies, each study being represented by data and weight matrices. Relations between studies may be captured by the Hilbert-Schmidt product of these matrices. Specifically, the eigenvalues and eigenvectors of the Hilbert-Schmidt matrix S may be used to obtain a geometrical representation of the studies. The studies in a series may further be considered to have a common structure whenever their corresponding points lie along the first axis. The matrix S can be expressed as the sum of a rank 1 matrix λuuᵀ and an error matrix E. Therefore, the components of the vector u are sufficient to locate the points associated with the studies. Former models for S, formulated in terms of vec(E), are mathematically tractable and yet do not take into account the symmetry of the matrix S. Thus a new symmetric model is proposed, as well as the corresponding tests for a common structure. It is further shown how to assess the goodness of fit of such models. An application to the human immunodeficiency virus (HIV) infection is used for assessing the proposed model.
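The rank-1 decomposition S ≈ λuuᵀ can be illustrated with power iteration: extract the leading eigenpair and measure fit as the share of the squared Frobenius norm captured by λ². This is a generic numerical sketch, not the paper's symmetric model or its tests, and it assumes the leading eigenvalue is positive and dominant.

```python
# Power iteration and a simple goodness-of-fit measure for the
# rank-1 model S = lam * u u^T + E on a symmetric matrix S.
def leading_eig(S, n_iter=200):
    """Leading eigenpair of a symmetric matrix S (given as a list of rows).
    Assumes a dominant positive eigenvalue."""
    n = len(S)
    u = [1.0 / n ** 0.5] * n
    for _ in range(n_iter):
        v = [sum(S[i][j] * u[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        u = [x / norm for x in v]
    lam = sum(u[i] * S[i][j] * u[j] for i in range(n) for j in range(n))
    return lam, u

def rank1_fit(S):
    """Share of ||S||_F^2 explained by the rank-1 term (since the squared
    Frobenius norm of a symmetric matrix is the sum of squared eigenvalues)."""
    lam, u = leading_eig(S)
    total = sum(x * x for row in S for x in row)
    return lam * lam / total, lam, u
```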

19.
Statistical models are often based on normal distributions, and procedures for testing such a distributional assumption are needed. Many goodness-of-fit tests are available. However, most of them are quite insensitive in detecting non-normality when the alternative distribution is symmetric. On the other hand, all the procedures are quite powerful against skewed alternatives. A new test for normality based on a polynomial regression is presented. It is very effective in detecting non-normality when the alternative distribution is symmetric. A comparison between well-known tests and this new procedure is performed by a simulation study. Other properties are also investigated.
