Similar Documents
1.
In this paper we consider a decision maker who expresses his/her preferences for different alternatives through a finite set of ordinal values. We analyze the problem of consistency within this framework, taking into account transitivity properties based on the very general class of conjunctors on the set of ordinal values. Each reciprocal preference relation on a finite ordinal scale has both a crisp preference and a crisp indifference relation naturally associated with it. Taking this into account, we first analyze the problem of propagating transitivity from the preference relation on the finite ordinal scale to the crisp preference and indifference relations, and then carry out the analysis in the opposite direction. We provide necessary and sufficient conditions for that propagation and thereby characterize the consistent class of conjunctors in each direction.
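As a rough illustration of the kind of transitivity property involved (not the authors' formal framework), the sketch below encodes a reciprocal preference relation on a 5-point ordinal scale as integers and checks C-transitivity for a given conjunctor, here the minimum; the relation and all names are invented for the example.

```python
from itertools import product

def is_c_transitive(R, scale_max, conjunctor):
    """Check C-transitivity of a reciprocal relation R on {0, ..., scale_max}.

    R maps ordered pairs of alternatives to ordinal values; it is reciprocal
    when R(a, b) + R(b, a) == scale_max.  It is C-transitive when, for all
    a, b, c:  R(a, c) >= conjunctor(R(a, b), R(b, c)).
    """
    alternatives = {a for pair in R for a in pair}
    for a, b in product(alternatives, repeat=2):
        assert R[(a, b)] + R[(b, a)] == scale_max, "relation is not reciprocal"
    return all(R[(a, c)] >= conjunctor(R[(a, b)], R[(b, c)])
               for a, b, c in product(alternatives, repeat=3))

# Toy reciprocal relation on a 5-point scale {0,...,4}; min plays the conjunctor.
R = {("x", "x"): 2, ("y", "y"): 2, ("z", "z"): 2,
     ("x", "y"): 3, ("y", "x"): 1,
     ("y", "z"): 3, ("z", "y"): 1,
     ("x", "z"): 3, ("z", "x"): 1}
print(is_c_transitive(R, scale_max=4, conjunctor=min))  # True
```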

2.
Longitudinal studies involving categorical responses are widely used in many fields of research and are often fitted by the generalized estimating equations (GEE) approach or by generalized linear mixed models (GLMMs). Assessing model fit is an important issue for model inference. The purpose of this article is to extend Pan's (2002a) goodness-of-fit tests for GEE models with longitudinal binary data to tests for logistic proportional odds models with longitudinal ordinal data. Two methods, based on the Pearson chi-squared test and on the unweighted sum of squared residuals, are developed, and the approximate expectations and variances of the test statistics are easily computed. Four major variants of working correlation structure (independent, AR(1), exchangeable, and unspecified) are considered to estimate the variances of the proposed test statistics. Simulation studies of the type I error rate and power of the proposed tests are presented for various sample sizes, and the approaches are demonstrated on two real data sets.

3.
4.
Simple decision rules are set out for constructing preference relations when quantitative estimates of the relative importance of the criteria are available, together with the information that preferences grow at an accelerating rate along the criteria's common scale.

5.
Population models are widely applied in biomedical data analysis since they characterize both the average and the individual responses of a population of subjects. In the absence of a reliable mechanistic model, one can resort to the Bayesian nonparametric approach, which models the individual curves as Gaussian processes. This paper develops an efficient computational scheme for estimating the average and individual curves from large data sets collected in standardized experiments, i.e. with a fixed sampling schedule. It is shown that the overall scheme exhibits a “client-server” architecture. The server is in charge of handling and processing the collective database of past experiments. The clients ask the server for the information needed to reconstruct the individual curve in a single new experiment. This architecture allows the clients to take advantage of the overall data set without violating possible privacy and confidentiality constraints and with negligible computational effort.
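To make the client-server idea concrete, here is a minimal Gaussian-process sketch under simplifying assumptions of my own (a known population mean curve, a squared-exponential kernel, i.i.d. noise): because every experiment uses the same sampling grid, the server can factorize the kernel matrix once, and each client reconstructs its individual curve with a cheap solve against that cached factor. This illustrates the architecture only, not the paper's estimator.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf_kernel(t, s, amp=1.0, length=2.0):
    """Squared-exponential covariance between time points t and s."""
    d = t[:, None] - s[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

# ---- Server side: done once for the fixed sampling schedule ----
grid = np.linspace(0.0, 10.0, 25)          # shared sampling times
noise_var = 0.1
K = rbf_kernel(grid, grid)
chol = cho_factor(K + noise_var * np.eye(len(grid)))  # cached factorization
pop_mean = np.sin(grid)                    # assumed population average curve

# ---- Client side: one new experiment, negligible extra cost ----
rng = np.random.default_rng(0)
y_new = np.sin(grid) + 0.3 * np.cos(grid) + rng.normal(0, np.sqrt(noise_var), len(grid))
alpha = cho_solve(chol, y_new - pop_mean)  # reuses the server's factorization
individual_curve = pop_mean + K @ alpha    # posterior mean of the subject's curve
```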

6.
Exploratory data analysis requires the ability to issue ad hoc queries to filter and summarise data sets. As the sizes of health data sets grow, traditional methods of processing data have difficulty in providing acceptable response times for such queries. An alternative method is described which combines complete vertical partitioning of data with set operations on ordinal mappings (SOOM). An initial implementation of the technique provides significantly better performance than a conventional SQL database on typical exploratory data analysis queries. The use of parallel, distributed computation to further increase the performance of the technique appears to be feasible.
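A toy sketch of the general idea, under my own assumptions about the data layout (the paper's SOOM implementation will differ): each column is stored separately as a mapping from value to the set of matching row ids, so an ad hoc filter becomes a set intersection and a summary becomes a count per value.

```python
from collections import defaultdict

class VerticalStore:
    """Toy column store: one mapping per column from value to the set of row ids."""

    def __init__(self, rows):
        self.columns = defaultdict(lambda: defaultdict(set))
        for rid, row in enumerate(rows):
            for col, value in row.items():
                self.columns[col][value].add(rid)

    def filter_eq(self, **criteria):
        """Row ids matching every column == value criterion, via set intersection."""
        sets = [self.columns[col].get(value, set()) for col, value in criteria.items()]
        return set.intersection(*sets) if sets else set()

    def count_by(self, col, row_ids):
        """Summarise: frequency of each value of `col` within `row_ids`."""
        return {value: len(ids & row_ids) for value, ids in self.columns[col].items()}

records = [
    {"sex": "F", "age_band": "60-69", "diagnosis": "asthma"},
    {"sex": "M", "age_band": "60-69", "diagnosis": "copd"},
    {"sex": "F", "age_band": "70-79", "diagnosis": "asthma"},
]
store = VerticalStore(records)
hits = store.filter_eq(sex="F")
print(store.count_by("diagnosis", hits))   # {'asthma': 2, 'copd': 0}
```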

7.
8.
In this paper, we propose a new shape-modeling paradigm based on the concept of Lagrangian surface flow. Given an input polygonal model, the user interactively defines a distance field around regions of interest; the locally or globally affected regions then automatically deform according to the user-defined distance field. During the deformation process, the model always maintains its regularity and can properly modify its topology through topology merging when collisions between two different parts of the model occur. Compared with level-set based methods, our algorithm allows the user to work directly on existing polygonal models without any intermediate model conversion. Besides closed polygonal models, our algorithm also works for mesh models with open boundaries. Within our framework, we developed a number of shape-modeling operators including blending, cutting, drilling, free-hand sketching, and mesh warping. We applied our algorithm to a variety of examples that demonstrate the usefulness and efficacy of the new technique in interactive shape design and surface deformation.

9.
How to compare small multivariate samples using nonparametric tests
In the life sciences and other research fields, experiments are often conducted to determine the responses of subjects to various treatments. Typically, such data are multivariate, where different variables may be measured on different scales that can be quantitative, ordinal, or mixed. To analyze these data, we present different nonparametric (rank-based) tests for multivariate observations in balanced and unbalanced one-way layouts. Previous work has led to tests based on asymptotic theory, for either large numbers of samples or large numbers of groups; however, most experiments comprise only small or moderate numbers of experimental units in each individual group or sample. Here, we investigate several tests based on small-sample approximations and compare their performance in terms of α levels and power for different simulated situations with continuous and discrete observations. For positively correlated responses, an approximation based on the ANOVA-type statistic of Brunner, Dette, and Munk (1997, Box-type approximations in nonparametric factorial designs, J. Amer. Statist. Assoc. 92, 1494–1502) performed best; for responses with negative correlations, an approximation based on the Lawley–Hotelling type test generally performed best. We demonstrate the use of the tests based on these approximations for a plant pathology experiment.
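As a point of reference only (not one of the small-sample approximations studied here), a classical rank-based one-way test such as Kruskal–Wallis can be applied per variable with SciPy; the three-group, two-variable data below are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Three treatment groups, two response variables, small samples per group
groups = [rng.normal(loc=mu, scale=1.0, size=(8, 2)) for mu in (0.0, 0.3, 1.0)]

for j, name in enumerate(["variable_1", "variable_2"]):
    h, p = stats.kruskal(*(g[:, j] for g in groups))   # rank-based one-way test
    print(f"{name}: H = {h:.2f}, asymptotic p = {p:.3f}")
```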

10.
The remittance market represents a great business opportunity for financial institutions given the increasing volume of these capital flows throughout the world. However, the corresponding business strategy can be costly and time consuming because immigrants do not respond to general media campaigns. In this paper, the remitting behavior of immigrants is addressed with a classification approach that predicts the remittance levels sent by immigrants according to their individual characteristics, thereby identifying the most profitable customers within this group. To do so, five nominal and two ordinal classifiers were applied to an immigrant sample and their performances were compared. The ordinal classifiers achieved the best results; the Support Vector Machine with Ordered Partitions (SVMOP) yielded the best model, providing the information needed to draw remitting profiles that are useful for financial institutions. The Support Vector Machine with Explicit Constraints (SVOREX) achieved the second-best results, and these results are presented graphically to study misclassified patterns in a natural and simple way. Thus, financial institutions can use this ordinal SVM-based approach as a tool to generate valuable information for developing their remittance business strategy.
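For readers unfamiliar with the ordered-partitions idea behind classifiers such as SVMOP, the sketch below shows a generic version: an ordinal target with K levels is decomposed into K−1 binary "greater than level k" SVMs, and the predicted level is the number of thresholds a sample exceeds. The scikit-learn LinearSVC, the synthetic features, and the class name are my own assumptions, not the study's actual models or data.

```python
import numpy as np
from sklearn.svm import LinearSVC

class OrderedPartitionsSVM:
    """Ordinal classifier: one binary SVM per threshold 'y > k'.

    The predicted level is the number of thresholds a sample is judged to
    exceed, which respects the ordering of the remittance levels.
    """

    def fit(self, X, y):
        self.levels_ = np.sort(np.unique(y))
        self.models_ = [LinearSVC(C=1.0).fit(X, (y > k).astype(int))
                        for k in self.levels_[:-1]]
        return self

    def predict(self, X):
        votes = sum(m.predict(X) for m in self.models_)
        return self.levels_[votes]

# Hypothetical immigrant features (e.g. age, years in country, income) and
# ordinal remittance levels 0 < 1 < 2, simulated for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = np.clip((X[:, 2] * 1.5 + rng.normal(0, 0.5, 300)).round(), -1, 1).astype(int) + 1
clf = OrderedPartitionsSVM().fit(X, y)
print(clf.predict(X[:5]))
```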

11.
Computers & Geosciences, 1987, 13(5): 463–494
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. Together these allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability-plot correlation coefficient, and a more robust test based on the biweight scale estimator. Collectively, these statistics are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated, and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box–Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are included to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in the data as well as to assess the data distributions themselves.
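A few of the classical and robust estimators listed above are easy to reproduce with NumPy/SciPy; the sketch below (which is not the ROBUST program itself) computes the three means, a trimmed mean, the median absolute deviation, and the moment-based skewness and kurtosis used for normality screening, on a small sample containing one outlier.

```python
import numpy as np
from scipy import stats

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 14.2])  # one outlier

# Location: classical and robust alternatives
arithmetic = x.mean()
geometric = stats.gmean(x)
harmonic = stats.hmean(x)
trimmed = stats.trim_mean(x, proportiontocut=0.1)   # 10% trimmed mean
median = np.median(x)

# Scale: standard deviation vs. a robust estimate
sd = x.std(ddof=1)
mad = stats.median_abs_deviation(x)                 # median absolute deviation from the median

# Normality screening via sample skewness and kurtosis
skew, kurt = stats.skew(x), stats.kurtosis(x, fisher=False)

print(f"mean={arithmetic:.2f} trimmed={trimmed:.2f} median={median:.2f}")
print(f"sd={sd:.2f} MAD={mad:.2f} skew={skew:.2f} kurtosis={kurt:.2f}")
```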

12.
In this paper, we are interested in taking preferences into account for a family of queries inspired by the relational division. A division query aims at retrieving the elements associated with a specified set of values, and usually the results are left undiscriminated. We therefore suggest introducing preferences inside such queries, with the following specificities: (i) the user gives his/her preferences in an ordinal way, and (ii) the preferences apply to the divisor, which is defined as a hierarchy of sets. Different uses of the hierarchy are investigated, leading to queries that convey different semantics, and the property of the result in terms of a quotient is studied. Special attention is paid to the implementation of such extended division queries on a regular database management system, along with some experiments that support the feasibility of the approach. Moreover, the issue of empty or overabundant answers is dealt with.
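To recall what a division query computes, and how an ordered divisor can discriminate the answers, here is a small Python sketch; the layered-divisor ranking is a simplified stand-in of my own, not the hierarchy semantics defined in the paper.

```python
from collections import defaultdict

def division(dividend, divisor):
    """Classical relational division: elements associated with every divisor value."""
    values_by_elem = defaultdict(set)
    for elem, value in dividend:
        values_by_elem[elem].add(value)
    return {e for e, vals in values_by_elem.items() if divisor <= vals}

def stratified_division(dividend, divisor_layers):
    """Toy graded variant: the divisor is given as preference layers, from most
    to least important; an element's rank is the number of leading layers it
    fully satisfies, so the results are discriminated instead of flat."""
    values_by_elem = defaultdict(set)
    for elem, value in dividend:
        values_by_elem[elem].add(value)
    ranks = {}
    for elem, vals in values_by_elem.items():
        rank = 0
        for layer in divisor_layers:
            if layer <= vals:
                rank += 1
            else:
                break
        ranks[elem] = rank
    return ranks

orders = [("c1", "bread"), ("c1", "milk"), ("c1", "tea"),
          ("c2", "bread"), ("c2", "milk"),
          ("c3", "bread")]
print(division(orders, {"bread", "milk"}))                           # {'c1', 'c2'}
print(stratified_division(orders, [{"bread"}, {"milk"}, {"tea"}]))   # {'c1': 3, 'c2': 2, 'c3': 1}
```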

13.
Contrast-enhanced ultrasound (CEUS) has recently become an important technology for lesion detection and characterization in cancer diagnosis. CEUS is used to investigate the perfusion kinetics in tissue over time, which relates to tissue vascularization. In this paper we present a pipeline that enables interactive visual exploration and semi-automatic segmentation and classification of CEUS data. For the visual analysis of this challenging data, with characteristic noise patterns and residual movements, we propose a robust method to derive expressive enhancement measures from small spatio-temporal neighborhoods. We use this information in a staged visual analysis pipeline that leads from a local investigation to global results such as the delineation of anatomic regions according to their perfusion properties. To make the visual exploration interactive, we have developed an accelerated framework based on the OpenCL library that exploits modern many-core hardware. Using our application, we were able to analyze datasets from CEUS liver examinations, identifying several focal liver lesions, segmenting and analyzing them quickly and precisely, and eventually characterizing them.

14.
Accurate quantification and a clear understanding of regional-scale cropland carbon (C) cycling are critical for designing effective policies and management practices that can contribute toward stabilizing atmospheric CO2 concentrations. However, extrapolating site-scale observations to regional scales represents a major challenge confronting the agricultural modeling community. This study introduces a novel geospatial agricultural modeling system (GAMS) that integrates the mechanistic Environmental Policy Integrated Climate model, spatially resolved data, surveyed management data, and supercomputing functions to estimate cropland C budgets. This modeling system creates spatially explicit modeling units at a spatial resolution consistent with remotely sensed crop identification and assigns cropping systems to each of them by geo-referencing surveyed crop management information at the county or state level. A parallel computing algorithm was also developed to facilitate the computationally intensive model runs and output post-processing and visualization. We evaluated GAMS against National Agricultural Statistics Service (NASS) reported crop yields and inventory-estimated county-scale cropland C budgets averaged over 2000–2008. We observed good overall agreement, with spatial correlations of 0.89, 0.90, 0.41, and 0.87 for crop yields, Net Primary Production (NPP), Soil Organic C (SOC) change, and Net Ecosystem Exchange (NEE), respectively. However, we also detected notable differences in the magnitude of NPP and NEE, as well as in the spatial pattern of SOC change. By performing crop-specific annual comparisons, we discuss possible explanations for the discrepancies between GAMS and the inventory method, such as data requirements, representation of agroecosystem processes, completeness and accuracy of crop management data, and accuracy of crop area representation. Based on these analyses, we further discuss strategies to improve GAMS by updating input data and by designing more efficient parallel computing capability to quantitatively assess errors associated with the simulation of C budget components. The modular design of GAMS makes it flexible to update and adapt for different agricultural models, so long as they require similar input data, and to link with socio-economic models to understand the effectiveness and implications of diverse C management practices and policies.

15.
Simple, precise methods are proposed for the preference comparison of variants in multicriteria problems with importance-ordered criteria that share a common scale along which the growth in preferences either decelerates or accelerates.

16.
In multilevel modeling, researchers often encounter data with a relatively small number of units at the higher levels. As a result of this and/or non-normality of the residuals, model parameter estimates, particularly the variance components and the standard errors of parameter estimates at the group level, may be biased, and thus the corresponding statistical inferences may not be trustworthy. This problem can be addressed by using bootstrap methods to estimate the standard errors of the parameter estimates for significance testing. This study illustrates how to use the statistical analysis system (SAS) to conduct nonparametric residual bootstrap multilevel modeling, and specific SAS programs for such modeling are provided.
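The paper provides SAS programs; purely as an illustration of the residual-bootstrap idea for a random-intercept model, the Python/statsmodels sketch below resamples the estimated group effects and level-1 residuals, rebuilds the response, refits, and uses the spread of the refitted slopes as a bootstrap standard error. Details such as recentring or reflating the shrunken residuals are omitted, and the simulated data are my own.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated two-level data: 15 groups (a small higher level), 10 observations each
n_groups, n_per = 15, 10
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0, 0.8, n_groups)                      # true random intercepts
x = rng.normal(size=n_groups * n_per)
y = 1.0 + 0.5 * x + u[g] + rng.normal(0, 1.0, g.size)
data = pd.DataFrame({"y": y, "x": x, "g": g})

# Fit the random-intercept model once
fit = smf.mixedlm("y ~ x", data, groups=data["g"]).fit()
xb = fit.fe_params["Intercept"] + fit.fe_params["x"] * data["x"].to_numpy()
u_hat = np.array([fit.random_effects[k].iloc[0] for k in range(n_groups)])
e_hat = np.asarray(fit.resid)

# Nonparametric residual bootstrap: resample level-2 and level-1 residuals
boot = []
for _ in range(200):
    u_star = rng.choice(u_hat, size=n_groups, replace=True)
    e_star = rng.choice(e_hat, size=g.size, replace=True)
    data["y_star"] = xb + u_star[g] + e_star
    bfit = smf.mixedlm("y_star ~ x", data, groups=data["g"]).fit()
    boot.append(bfit.fe_params["x"])

print("bootstrap SE of the slope:", np.std(boot, ddof=1))
```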

17.
The common methods of spatial data modeling and management are introduced, with an emphasis on the object-relational modeling of Oracle Spatial, and the characteristics of organizing and managing spatial data with Oracle Spatial are analyzed. Spatial data modeling based on Oracle Spatial lowers the degree of specialized expertise required and improves the capability for standardized sharing of spatial data.

18.
This paper introduces and formally defines a fuzzy rough object-oriented database (OODB) model based on a formal framework using an algebraic type system and formally defined constraints. This generalized model incorporates both rough set and fuzzy set uncertainty, while remaining compliant with object-oriented database standards set forth by the Object Database Management Group. Rough and fuzzy set uncertainty enhance the OODB model so that it can more accurately model real world applications. Spatial databases have a particular need for uncertainty management that can be achieved through rough and fuzzy techniques.
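As a reminder of the rough-set half of the uncertainty being modeled, the sketch below computes the lower and upper approximations of a query concept under an indiscernibility partition of objects; the attribute names and data are invented, and the fuzzy side and the algebraic type system of the proposed OODB model are not represented.

```python
from collections import defaultdict

def approximations(objects, concept, key):
    """Rough-set lower/upper approximation of `concept` (a set of object ids)
    under the indiscernibility relation induced by the attribute function `key`."""
    classes = defaultdict(set)
    for obj_id, attrs in objects.items():
        classes[key(attrs)].add(obj_id)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls & concept:
            upper |= cls           # possibly in the concept
            if cls <= concept:
                lower |= cls       # certainly in the concept
    return lower, upper

# Toy spatial objects, indiscernible by (region, land_use); the query concept
# is "parcels classified as wetland by a survey".
objects = {
    1: {"region": "N", "land_use": "marsh"},
    2: {"region": "N", "land_use": "marsh"},
    3: {"region": "S", "land_use": "field"},
    4: {"region": "S", "land_use": "field"},
}
wetland = {1, 2, 3}
lower, upper = approximations(objects, wetland, key=lambda a: (a["region"], a["land_use"]))
print(lower, upper)   # {1, 2} and {1, 2, 3, 4}
```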

19.
Interactive modeling of plants

20.