Similar Documents
 20 similar documents found (search time: 62 ms)
1.
Rapid throughput of assays for assessing the biological activity of compounds, known as high-throughput screening (HTS), has created a need for statistical analysis of the resulting data. Conventional methods for separating active from inactive compounds, which use only a portion of the screening data to determine the cutoff threshold value, are ad hoc and unsatisfactory. Taking full advantage of the entire set of screening data, we assume that the responses can be sorted into two classes: measurements associated with inactive compounds and measurements associated with active compounds. Both theoretical and practical considerations lead us to model the distribution of measurements of inactive compounds with a Gaussian distribution; this choice is consistent with the data and our analytical experience. In our examples, active compounds inhibit an enzyme, and the distribution of measurements from those compounds can be characterized by a gamma or a similar one-sided, long-tailed distribution. This mixture of Gaussian and gamma distributions describes the activity in our screening data very well. Using this model, we present its rationale and derivation and describe how to set the cutoff threshold value optimally, in a statistically well-defined manner. This modeling approach provides a new and useful statistical tool for HTS.
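A minimal sketch of how a cutoff could be chosen under such a mixture model, assuming the Gaussian (inactive) and gamma (active) component parameters have already been fitted: the threshold is placed where the weighted component densities cross, i.e., where the posterior probability of activity reaches 0.5. All parameter values below are hypothetical.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

# Hypothetical fitted mixture parameters (percent-inhibition scale).
w_inactive, mu, sigma = 0.95, 0.0, 10.0        # Gaussian component (inactive compounds)
w_active, shape, scale = 0.05, 2.0, 25.0       # gamma component (active compounds)

def weighted_diff(x):
    """Difference of weighted component densities; zero at the optimal cutoff."""
    f_inactive = w_inactive * stats.norm.pdf(x, mu, sigma)
    f_active = w_active * stats.gamma.pdf(x, shape, scale=scale)
    return f_inactive - f_active

# Search for the crossing point to the right of the inactive mean.
cutoff = brentq(weighted_diff, mu + 1e-6, mu + 10 * sigma)
print(f"cutoff = {cutoff:.2f}  (posterior P(active) = 0.5 at this response)")
```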

2.
We introduce a strategic "Kinetic Diffusion Multiple" (KDM) that undergoes interdiffusion annealing followed by a realistic thermal treatment. The blended spectrum of phases and microstructures obtained by treating the deeply grooved composition gradients enables the microstructure and micromechanical properties of structural materials to be surveyed by high-spatial-resolution micro-analysis along the composition arrays. The KDM is demonstrated to be a robust high-throughput methodology for rapidly screening composition-microstructure-micromechanical-property relationships in different metallic materials. It also proves successful at elucidating the lasting effects of alloying elements and diffusion flux on microstructure, micromechanical properties, phase transformation, and their interrelationships as a whole.

3.
Computer Networks, 2001, 35(1): 77-95
Service providers typically define quality-of-service problems using threshold tests, such as “Are HTTP operations greater than 12 per second on server XYZ?” Herein, we estimate the probability of threshold violations at specific times in the future. We model the threshold metric (e.g., HTTP operations per second) at two levels: (1) non-stationary behavior (as is done in workload forecasting for capacity planning) and (2) stationary, time-serial dependencies. Our approach is assessed using simulation experiments and measurements of a production Web server. For both assessments, the probabilities of threshold violations produced by our approach lie well within two standard deviations of the measured fraction of threshold violations.
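A minimal sketch of the two-level idea, under simplifying assumptions not taken from the paper: the metric is decomposed into a deterministic (non-stationary) forecast plus an AR(1) residual with Gaussian noise, and the violation probability at a future time is the Gaussian tail probability above the threshold. The series and forecast function below are hypothetical.

```python
import numpy as np
from scipy import stats

def violation_probability(history, forecast_fn, t_future, threshold):
    """P(metric(t_future) > threshold) under a forecast plus AR(1) Gaussian residuals."""
    t_hist = np.arange(len(history))
    resid = history - forecast_fn(t_hist)           # stationary residual series
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation (AR(1) coefficient)
    var_resid = resid.var()
    h = t_future - (len(history) - 1)               # forecast horizon in steps
    mean = forecast_fn(np.array([t_future]))[0] + (phi ** h) * resid[-1]
    var = var_resid * (1.0 - phi ** (2 * h))        # AR(1) h-step prediction variance
    return 1.0 - stats.norm.cdf(threshold, loc=mean, scale=np.sqrt(max(var, 1e-12)))

# Hypothetical example: hourly HTTP operation rates with a daily cycle plus AR(1) noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 14)
trend = 8 + 3 * np.sin(2 * np.pi * t / 24)
noise = np.zeros_like(trend)
for i in range(1, len(t)):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.8)
series = trend + noise
forecast = lambda tt: 8 + 3 * np.sin(2 * np.pi * tt / 24)
print(violation_probability(series, forecast, t_future=len(t) + 5, threshold=12.0))
```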

4.
In this study, a multi-scale, phase-based sparse disparity algorithm and a probabilistic model for matching uncertain phase are proposed. The features used are oriented edges extracted using steerable filters. Feature correspondences are estimated using phase similarity at multiple scales with a magnitude weighting scheme. To achieve sub-pixel accuracy in disparity, we use a fine-tuning procedure that employs the phase difference between corresponding feature points. We also derive a probabilistic model in which phase uncertainty is trained using data from a single image pair; the model is used to provide stable matches. The disparity algorithm and the probabilistic phase uncertainty model are verified on various stereo image pairs.
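A minimal 1-D sketch, not taken from the paper, of the sub-pixel refinement step it describes: an integer correspondence is refined by the phase difference of complex Gabor responses divided by the filter's centre frequency. The signals, filter parameters, and the 3.3-sample shift are all hypothetical.

```python
import numpy as np

def gabor_response(signal, omega=0.6, sigma=6.0):
    """Complex Gabor filtering of a 1-D signal (centre frequency omega, rad/sample)."""
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)
    return np.convolve(signal, kernel, mode="same")

def refine_disparity(left, right, idx_left, idx_right, omega=0.6):
    """Refine an integer match (idx_left, idx_right) to sub-pixel disparity via phase difference."""
    rl = gabor_response(left, omega)
    rr = gabor_response(right, omega)
    dphi = np.angle(rl[idx_left] * np.conj(rr[idx_right]))   # wrapped phase difference
    # Disparity defined as the right-image position minus the left-image position.
    return (idx_right - idx_left) + dphi / omega

# Hypothetical 1-D example: the right signal is the left signal shifted by 3.3 samples.
t = np.arange(200, dtype=float)
shift = 3.3
left = np.cos(0.6 * t) + 0.3 * np.cos(1.7 * t + 1.0)
right = np.cos(0.6 * (t - shift)) + 0.3 * np.cos(1.7 * (t - shift) + 1.0)
print(refine_disparity(left, right, idx_left=100, idx_right=103))   # expect ~3.3
```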

5.
Quality assessment plays a crucial role in data analysis. In this paper, we present a reduced-reference approach to volume data quality assessment. Our algorithm extracts important statistical information from the original data in the wavelet domain. Using the extracted information as a feature, together with predefined distance functions, we are able to identify and quantify the quality loss in a reduced or distorted version of the data, eliminating the need to access the original data. Our feature representation is naturally organized into multiple scales, which facilitates quality evaluation of data at different resolutions, and the feature can be compressed effectively. We have experimented with our algorithm on scientific and medical data sets of various sizes and characteristics. Our results show that the size of the feature does not grow in proportion to the size of the original data, which ensures the scalability of our algorithm and makes it well suited to quality assessment of large-scale data sets. Additionally, the feature can be used to repair the reduced or distorted data for quality improvement. Finally, our approach can be viewed as a new way to evaluate the uncertainty introduced by different versions of the data.
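A minimal sketch of the reduced-reference idea under simplifying assumptions: plain per-subband statistics and a Euclidean distance stand in for the paper's specific features and distance functions, and the PyWavelets package is assumed for the wavelet transform. The synthetic volume and distortion are hypothetical.

```python
import numpy as np
import pywt

def rr_feature(volume, wavelet="haar", levels=2):
    """Reduced-reference feature: simple statistics of each wavelet subband."""
    coeffs = pywt.wavedecn(np.asarray(volume, dtype=float), wavelet, level=levels)
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail.values()]
    feature = []
    for band in subbands:
        b = band.ravel()
        feature.extend([b.mean(), b.std(), np.abs(b).mean()])
    return np.array(feature)

def quality_distance(reference_feature, test_feature):
    """Larger distance = larger estimated quality loss, without access to the original data."""
    return float(np.linalg.norm(reference_feature - test_feature))

# Hypothetical example: a synthetic volume versus a coarsely quantized (distorted) version.
rng = np.random.default_rng(4)
vol = rng.normal(size=(32, 32, 32)).cumsum(axis=0)   # smooth-ish test volume
distorted = np.round(vol * 2) / 2                    # coarse quantization
f_ref = rr_feature(vol)                              # extracted once, shipped with the data
print("feature length:", f_ref.size)
print("quality distance:", quality_distance(f_ref, rr_feature(distorted)))
```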

6.
This is a reply to the Comment by O. Arslan [Arslan, O., Comment on "Information matrices for Laplace and Pareto mixtures" by S. Nadarajah, Comput. Stat. Data Anal. (2006), doi:10.1016/j.csda.2004.11.017].

7.
Classical expert systems are rule based, depending on predicates expressed over attributes and their values. In the process of building expert systems, the attributes, and the constants used to interpret their values, need to be specified. Standard techniques for doing this are drawn from psychology, for instance interviewing and protocol analysis. This paper describes a statistical approach to deriving interpreting constants for given attributes; it can also suggest the need for attributes beyond those given. The approach to selecting an interpreting constant is demonstrated by an example. The data to be fitted are first generated by selecting a representative collection of instances of the narrow decision addressed by a rule, making a judgement for each instance, and defining an initial set of potentially explanatory attributes. A decision rule graph plots the judgements made against pairs of attributes. It reveals rules and key instances directly, and it shows when no rule is possible, thus suggesting the need for additional attributes. A study of a collection of seven rule-based models shows that the attributes defined during the fitting process improved the fit of the final models to the judgements by twenty percent over models built with only the initial attributes.
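A minimal sketch, with hypothetical data, of the consistency check underlying such a decision rule graph: for each attribute pair, instances are grouped by their value pair, and a rule over that pair is possible only if each group carries a single judgement; a conflicting group signals that additional attributes are needed.

```python
from itertools import combinations
from collections import defaultdict

def rule_feasibility(instances, judgements, attributes):
    """For each attribute pair, report whether the judgements are a function of that pair."""
    report = {}
    for a, b in combinations(attributes, 2):
        groups = defaultdict(set)
        for inst, judgement in zip(instances, judgements):
            groups[(inst[a], inst[b])].add(judgement)
        conflicts = sorted(k for k, v in groups.items() if len(v) > 1)
        report[(a, b)] = "rule possible" if not conflicts else f"conflicts at {conflicts}"
    return report

# Hypothetical instances of one narrow decision, each with an expert judgement.
instances = [
    {"size": "small", "load": "low",  "age": 2},
    {"size": "small", "load": "high", "age": 7},
    {"size": "large", "load": "low",  "age": 3},
    {"size": "large", "load": "high", "age": 9},
    {"size": "small", "load": "low",  "age": 5},   # conflicts with the first instance
]
judgements = ["accept", "reject", "accept", "reject", "reject"]
print(rule_feasibility(instances, judgements, ["size", "load", "age"]))
```

On the toy data, the (size, load) pair cannot explain the judgements, while pairs that include age still can, which is exactly the signal that an additional attribute is needed.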

8.
This paper proposes a statistical parametric approach to a video-realistic, text-driven talking avatar. We follow the trajectory HMM approach, in which audio and visual speech are jointly modeled by HMMs and continuous audiovisual speech parameter trajectories are synthesized based on the maximum likelihood criterion. Previous trajectory HMM approaches focus only on mouth animation, synthesizing either simple geometric mouth shapes or video-realistic lip motion. Our approach uses the trajectory HMM to generate visual parameters of the lower face and realizes video-realistic animation of the whole face. Specifically, we use an active appearance model (AAM) to model the visual speech, which offers a convenient and compact statistical model of both the shape and the appearance variations of the face. To achieve video-realistic effects with high fidelity, we use the Poisson image editing technique to stitch the synthesized lower-face image seamlessly onto a whole-face image. Objective and subjective experiments show that the proposed approach can produce natural facial animation.

9.
Generalized mean-squared error (GMSE) objective functions are proposed that can be used in neural networks to yield a Bayes-optimal solution to a statistical decision problem characterized by a generic loss function.
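A minimal illustration, not taken from the paper, of the decision rule such an objective targets: once a network's outputs approximate the class posteriors, the Bayes-optimal action under an arbitrary loss matrix is the one minimizing the expected loss. The loss matrix and posteriors below are hypothetical.

```python
import numpy as np

# Hypothetical loss matrix L[true_class, action]: misclassifying class 1 is costly.
L = np.array([[0.0, 1.0],
              [5.0, 0.0]])

def bayes_decision(posteriors, loss):
    """Choose the action minimizing expected loss given class posteriors."""
    expected_loss = posteriors @ loss          # shape: (n_samples, n_actions)
    return expected_loss.argmin(axis=1)

# Posteriors as a network trained with a suitable objective might output them (hypothetical).
p = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.5, 0.5]])
print(bayes_decision(p, L))   # the asymmetric loss pushes decisions toward class 1
```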

10.
In multiple-attribute decision-making, the overall values of alternatives are often interval numbers, owing to the inherent uncertainty of problems in ambiguous decision domains. Bryson and Mobolurin proposed linear programming models to compute attribute weights and the overall values of the alternatives in the form of interval numbers. The intervals of the overall values are then transformed into points, or crisp values, for comparisons among the alternatives. However, transforming the overall values of the alternatives from intervals to points may result in information loss. In this paper, statistical distributions, namely the normal and uniform distributions, are placed on the intervals of the attribute weights and attribute values of the alternatives. Using a simulation method, the means, standard deviations, and correlations of the overall values of the alternatives are used to conduct the comparisons. The proposed statistical approach simplifies and enriches Bryson and Mobolurin's approach by providing superiority possibilities between alternatives, as well as means, standard deviations, and correlations for alternative comparisons. The simulation results show that, under a uniform distribution, the intervals of the overall values of the alternatives coincide with the ‘level 3 composite’ intervals of Bryson and Mobolurin. Comparisons between the alternatives and their evaluated intervals are also discussed for the case of a normal distribution on the intervals of attribute weights and crisp values for the attribute values of the alternatives.
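A minimal sketch of the simulation idea, with hypothetical intervals rather than values from the paper: attribute weights and attribute values are sampled uniformly from their intervals, the weights are renormalized, and the draws yield means, standard deviations, and pairwise superiority probabilities for the alternatives.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 100_000

# Hypothetical intervals: 2 attributes, 3 alternatives (A, B, C).
weight_intervals = np.array([[0.3, 0.5], [0.5, 0.7]])           # per attribute
value_intervals = np.array([[[0.6, 0.8], [0.4, 0.6]],           # alternative A
                            [[0.5, 0.9], [0.5, 0.7]],           # alternative B
                            [[0.3, 0.5], [0.7, 0.9]]])          # alternative C

w = rng.uniform(weight_intervals[:, 0], weight_intervals[:, 1], (n_draws, 2))
w /= w.sum(axis=1, keepdims=True)                               # renormalize the weights
v = rng.uniform(value_intervals[..., 0], value_intervals[..., 1], (n_draws, 3, 2))
overall = (v * w[:, None, :]).sum(axis=2)                       # overall value per draw

print("means:", overall.mean(axis=0))
print("std devs:", overall.std(axis=0))
print("P(A > B):", (overall[:, 0] > overall[:, 1]).mean())
print("P(B > C):", (overall[:, 1] > overall[:, 2]).mean())
```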

11.
Subject-specific modeling is increasingly important in biomechanics simulation. However, automatically creating a high-quality finite element (FE) mesh and automatically imposing boundary conditions remain challenging. This paper presents a statistical-atlas-based approach for automatic meshing of subject-specific shapes. In our approach, shape variations within a shape population are explicitly modeled, and the correspondence between a given subject-specific shape and the statistical atlas is sought within the "legal" shape variations. The approach involves three parts: (1) constructing a statistical atlas from a shape population, including the statistical shape model and the FE model of the mean shape; (2) establishing the correspondence between a given subject shape and the atlas; and (3) deforming the atlas to the subject shape based on the shape correspondence. Numerical results on 2D hands, 3D femur bones, and 3D aortas demonstrate the effectiveness of the proposed approach.
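A minimal sketch of the statistical shape model and the "legal variation" idea, assuming corresponding landmarks are already available (this is not the paper's full meshing pipeline): a PCA model is built from a training population, and a new subject shape is projected onto the leading modes with the coefficients clipped to a plausible range. All data below are hypothetical.

```python
import numpy as np

def build_shape_model(shapes, n_modes=3):
    """shapes: (n_subjects, n_landmarks*dim) array of corresponding landmark coordinates."""
    mean = shapes.mean(axis=0)
    u, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = vt[:n_modes]                              # principal variation modes
    stddev = s[:n_modes] / np.sqrt(len(shapes) - 1)   # std dev of each mode coefficient
    return mean, modes, stddev

def project_to_legal_shape(shape, mean, modes, stddev, n_sigma=3.0):
    """Express a subject shape within +/- n_sigma of the population's variation modes."""
    b = modes @ (shape - mean)                        # mode coefficients
    b = np.clip(b, -n_sigma * stddev, n_sigma * stddev)
    return mean + modes.T @ b

# Hypothetical 2-D landmark data: 20 training shapes, 5 landmarks each (10 coordinates).
rng = np.random.default_rng(2)
base = rng.normal(size=10)
training = base + 0.1 * rng.normal(size=(20, 10))
mean, modes, stddev = build_shape_model(training)
subject = base + 0.5 * rng.normal(size=10)            # an atypical subject shape
print(project_to_legal_shape(subject, mean, modes, stddev))
```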

12.
The fuzzy approach to statistical analysis
Over the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established for different purposes, namely: (i) to introduce new data analysis problems whose objective involves either fuzzy relationships or fuzzy terms; (ii) to establish well-formalized models for elements combining randomness and fuzziness; (iii) to develop uni- and multivariate statistical methodologies for handling fuzzy-valued data; and (iv) to incorporate fuzzy sets to help solve traditional statistical problems with non-fuzzy data. In spite of a growing literature on the development and application of fuzzy techniques in statistical analysis, the need is felt for a more systematic insight into the potential for cross-fertilization between Statistics and Fuzzy Logic. In line with the synergistic spirit of Soft Computing, some instances of the existing research activities on the topic are recalled. Particular attention is paid to summarizing the papers gathered in this Special Issue, ranging from a position paper on the theoretical management of uncertainty by the “father” of Fuzzy Logic to a wide diversity of topics concerning foundational, methodological, and applied aspects of the integration of Fuzzy Sets and Statistics.

13.
Proposes a statistical approach to the formal synthesis and improvement of inspection checklists. The approach is based on defect causal analysis and defect modeling; the defect model is developed using IBM's Orthogonal Defect Classification. A case study describes the steps required and a tool for the implementation. The advantages and disadvantages of the empirical and statistical methods are discussed and compared, and it is suggested that the statistical approach be used in conjunction with the empirical one. The main advantage of the proposed technique is that it allows a checklist to be tuned according to the most recent project experience and optimal checklist items to be identified even when a source document does not exist.

14.
15.
Lexical stress is of primary importance for generating the correct pronunciation of words in many languages; hence its correct placement is a major task in prosody prediction and generation for high-quality TTS (text-to-speech) synthesis systems. This paper proposes a statistical approach to lexical stress assignment for TTS synthesis in Romanian. The method is essentially based on n-gram language models at the character level and uses a modified Katz backoff smoothing technique to address data sparseness during training. Monosyllabic words are considered as not carrying stress and are separated out by an automatic syllabification algorithm. A maximum accuracy of 99.11% was obtained on a test corpus of about 47,000 words.
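A minimal sketch of the character-level n-gram idea, with add-one smoothing standing in for the paper's modified Katz backoff and an invented toy training list rather than a real Romanian lexicon: stress is marked by an apostrophe before the stressed vowel, and the candidate placement with the highest trigram log-probability wins.

```python
import math
from collections import Counter

def trigrams(word):
    padded = "^^" + word + "$"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def train(stressed_words):
    tri, bi = Counter(), Counter()
    for word in stressed_words:
        for g in trigrams(word):
            tri[g] += 1
            bi[g[:2]] += 1
    return tri, bi

def log_prob(word, tri, bi, alphabet_size=40):
    # Add-one smoothing as a simple stand-in for the paper's modified Katz backoff.
    return sum(math.log((tri[g] + 1) / (bi[g[:2]] + alphabet_size)) for g in trigrams(word))

def assign_stress(word, tri, bi, vowels="aeiou"):
    # Candidate placements mark stress with an apostrophe before a vowel.
    candidates = [word[:i] + "'" + word[i:] for i, c in enumerate(word) if c in vowels]
    return max(candidates, key=lambda cand: log_prob(cand, tri, bi)) if candidates else word

# Toy training list with invented stress marks (not real Romanian annotations).
training = ["cas'a", "mas'a", "c'arte", "p'arte", "copil'ul", "pamant'ul"]
tri, bi = train(training)
print(assign_stress("lada", tri, bi))   # picks the placement best supported by the toy counts
```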

16.
Considering latent heterogeneity is especially important in nonlinear models in order to gauge correctly the effect of explanatory variables on the dependent variable. A stratified model-based clustering approach is adapted for modeling latent heterogeneity in binary panel probit models. Within a Bayesian framework, an estimation algorithm that deals with the inherent label-switching problem is provided. The number of clusters is determined using the marginal likelihood and a cross-validation approach. A simulation study assessing the ability of both approaches to identify the correct number of clusters indicates high accuracy for the marginal likelihood criterion, with the cross-validation approach performing similarly well in most circumstances. Different concepts of marginal effects, incorporating latent heterogeneity to different degrees, arise within the considered model setup and are directly available from Bayesian estimation via MCMC. An empirical illustration of the methodology indicates that accounting for latent heterogeneity via latent clusters provides the preferred model specification over both a pooled and a random coefficient specification.
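A minimal illustration of choosing the number of latent clusters by cross-validated held-out likelihood, using a plain Gaussian mixture on synthetic data as a stand-in for the paper's Bayesian panel probit model (the data, model, and split scheme are all hypothetical).

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

# Synthetic data with two latent clusters (hypothetical stand-in for panel data).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, (150, 2)), rng.normal(2, 1, (150, 2))])

def cv_loglik(X, k, n_splits=5):
    """Average held-out log-likelihood per sample for a k-component mixture."""
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        gm = GaussianMixture(n_components=k, random_state=0).fit(X[train_idx])
        scores.append(gm.score(X[test_idx]))
    return np.mean(scores)

for k in range(1, 5):
    print(k, round(cv_loglik(X, k), 3))   # k = 2 should score best (or near-best)
```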

17.
High-throughput computational thermodynamic approaches are becoming an increasingly popular tool for uncovering novel compounds. However, traditional methods tend to be limited to stability predictions of stoichiometric phases at absolute zero. Such methods thus carry the risk of identifying an excess of possible phases that do not survive to temperatures of practical relevance. We demonstrate how the Calphad formalism, informed by simple first-principles input, can be used to overcome this problem at low computational cost and to deliver quantitatively useful phase diagram predictions at all temperatures. We illustrate the method by re-assessing prior compound-formation predictions and reconciling these findings with long-standing experimental evidence to the contrary.
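A toy numerical illustration, with hypothetical parameters not drawn from the paper, of why finite-temperature free energies matter: a compound that is stable at 0 K loses out to a two-phase mixture of the end-members once the entropy term is included, above a crossover temperature.

```python
from scipy.optimize import brentq

# Hypothetical linear free-energy models G(T) = H - T*S (J/mol) for a compound AB
# and for the competing two-phase mixture of the end-members A and B.
def g_compound(T):
    return -12_000.0 - 8.0 * T       # lower enthalpy, lower entropy

def g_mixture(T):
    return -10_000.0 - 15.0 * T      # higher enthalpy, higher entropy

# The compound is stable while g_compound < g_mixture; find the crossover temperature.
T_cross = brentq(lambda T: g_compound(T) - g_mixture(T), 1.0, 2000.0)
print(f"compound AB predicted stable only below ~{T_cross:.0f} K")
```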

18.
19.
In previous work it was shown how a max-plus algebraic model can be derived for cyclically operated high-throughput screening systems and how such a model can be used to design a controller that handles unexpected deviations from the predetermined cyclic operation at run-time. In this paper, this approach is extended by modeling the system in a general dioid algebraic setting; a feedback controller can then be computed using residuation theory. The resulting control strategy is optimal with respect to the just-in-time criterion, which is very common in scheduling practice.
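A minimal sketch of the max-plus machinery such models rest on (a generic illustration, not the paper's controller): in the max-plus semiring, matrix-vector "multiplication" replaces addition with max and multiplication with +, and iterating x(k+1) = A ⊗ x(k) propagates event times through a cyclic schedule. The system matrix below is hypothetical.

```python
import numpy as np

NEG_INF = -np.inf   # the max-plus "zero" (absorbing element)

def maxplus_matvec(A, x):
    """Max-plus product: (A ⊗ x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

# Hypothetical max-plus system matrix for a 3-resource cyclic screening plant:
# A[i, j] = processing/transfer time from event j to event i (NEG_INF if no path).
A = np.array([[ 5.0,    NEG_INF, 3.0],
              [ 2.0,    4.0,     NEG_INF],
              [NEG_INF, 6.0,     1.0]])

x = np.zeros(3)                      # start times of the first cycle's events
for k in range(4):                   # event times over successive cycles
    print(f"cycle {k}: {x}")
    x = maxplus_matvec(A, x)
```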

20.
The problem of clustering probability density functions is emerging in different scientific domains. Existing methods for clustering probability density functions focus mainly on univariate settings and are based on heuristic clustering solutions. Here, new aspects of the problem associated with the multivariate setting and a model-based perspective are investigated. The novel approach relies on a hierarchical mixture model of the data. The method is introduced in the univariate context and then extended to multivariate densities by means of a factorial model performing dimension reduction. Model fitting is carried out using an EM algorithm. The proposed method is illustrated through simulated experiments and applied to two real data sets in order to compare its performance with alternative clustering strategies.
