Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
A novel pairwise decision tree (PDT) framework is proposed for hyperspectral classification, in which no partitioning or clustering is needed and the original C-class problem is divided into a set of two-class problems. The top of the tree includes all original classes. Each internal node consists of either a set of class pairs or a set of class pairs plus a single class. The pairs are selected by the proposed sequential forward selection (SFS) or sequential backward selection (SBS) algorithms. The current node is divided into next-stage nodes by excluding either class of each selected pair. During classification, an unlabelled pixel is recursively passed to the next node by excluding the less similar class of each node pair until the classification result is obtained. Experiments on an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data set for a nine-class problem demonstrated the effectiveness of the proposed framework compared with the single-stage classifier approach, the pairwise classifier framework, and the binary hierarchical classifier (BHC).
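The elimination idea can be illustrated with a heavily simplified sketch (made-up class names and means, nearest-mean similarity instead of the paper's SFS/SBS pair selection): a pixel is compared against pairs of surviving classes, and the less similar class of each pair is excluded until one class remains.

```python
from math import dist

def pairwise_eliminate(x, class_means):
    # Toy pairwise elimination (hypothetical, not the paper's exact PDT):
    # repeatedly compare the first two surviving classes and discard the
    # one whose mean is farther from the sample x.
    survivors = list(class_means)
    while len(survivors) > 1:
        a, b = survivors[0], survivors[1]
        # Exclude the less similar class of the pair.
        survivors.remove(a if dist(x, class_means[a]) > dist(x, class_means[b]) else b)
    return survivors[0]

# Illustrative two-band "spectra" for three classes.
means = {"water": (0.1, 0.2), "soil": (0.8, 0.3), "crop": (0.4, 0.9)}
print(pairwise_eliminate((0.15, 0.25), means))  # → water
```

Each comparison removes exactly one class, so a C-class problem is decided after C − 1 pairwise exclusions.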

2.
Koyama S  Kass RE 《Neural computation》2008,20(7):1776-1795
Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not easy to apply directly to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons. With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will describe the LIF spike train variability accurately. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit.
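The constant-stimulus case can be checked with a minimal LIF simulation (illustrative parameters, not those of Koyama and Kass): with a fixed input current the inter-spike intervals are all equal, i.e. the spike train is a renewal process.

```python
def lif_spike_times(I, T=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    # Euler integration of the leaky integrate-and-fire dynamics
    # tau * dV/dt = -V + I, with reset to v_reset on crossing v_th.
    v, t, spikes = v_reset, 0.0, []
    while t < T:
        v += dt * (-v + I) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

spikes = lif_spike_times(I=1.5)
isis = [b - a for a, b in zip(spikes, spikes[1:])]
# With a constant stimulus the inter-spike intervals are identical
# (up to the integration step), as expected for a renewal process.
print(len(spikes), max(isis) - min(isis) < 1e-3)
```

A time-varying stimulus would replace the constant `I` with `I(t)`, at which point spiking depends on both clock time and time since the last spike, as the abstract describes.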

3.
A framework for evaluating software technology (total citations: 1; self-citations: 0; citations by others: 1)
Many software development organizations struggle to make informed decisions when investing in new software technologies. The authors' experimental framework can help companies evaluate a new software technology by examining its features in relation to its peers and competitors through a systematic approach that includes modeling experiments.

4.
In a multicriteria decision making context, a pairwise comparison matrix A = (aij) is a helpful tool to determine the weighted ranking on a set X of alternatives or criteria. The entry aij of the matrix can assume different meanings: aij can be a preference ratio (multiplicative case) or a preference difference (additive case), or aij belongs to [0, 1] and measures the distance from indifference, which is expressed by 0.5 (fuzzy case). For the multiplicative case, a consistency index for the matrix A has been provided by T.L. Saaty in terms of the maximum eigenvalue. We consider pairwise comparison matrices over an abelian linearly ordered group and, in this way, provide a general framework including the mentioned cases. By introducing a more general notion of metric, we provide a consistency index that has a natural meaning and is easy to compute in the additive and multiplicative cases; in the other cases, it can be computed easily starting from a suitable additive or multiplicative matrix. © 2009 Wiley Periodicals, Inc.
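Saaty's eigenvalue-based index for the multiplicative case is easy to compute directly. A small sketch (illustrative weights; NumPy used for the eigenvalues): CI = (λ_max − n)/(n − 1), which is zero exactly when the matrix is consistent, i.e. aij = wi/wj.

```python
import numpy as np

def saaty_ci(A):
    # Saaty's consistency index for a multiplicative pairwise
    # comparison matrix: CI = (lambda_max - n) / (n - 1).
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()
    return (lam_max - n) / (n - 1)

# A perfectly consistent matrix built from true weights w: a_ij = w_i / w_j.
w = np.array([1.0, 2.0, 4.0])
A = np.outer(w, 1.0 / w)
print(saaty_ci(A))  # ≈ 0 for a consistent matrix
```

For an inconsistent reciprocal matrix λ_max exceeds n, so the index becomes strictly positive.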

5.
In simulation software selection problems, packages are evaluated either on their own merits or in comparison with other packages. In either case, a comprehensive list of criteria for evaluating simulation software is essential for proper selection. Although various simulation software evaluation checklists do exist, there are differences in the lists provided and the terminologies used. This paper presents a hierarchical framework for simulation software evaluation consisting of seven main groups and several subgroups. An explanation for each criterion is provided, and an analysis of the usability of the proposed framework is further discussed.

6.
We address the question of how one evaluates the usefulness of a heuristic program on a particular input. If theoretical tools do not allow us to decide for every instance whether a particular heuristic is fast enough, might we at least write a simple, fast companion program that makes this decision on some inputs of interest? We call such a companion program a timer for the heuristic. Timers are related to program checkers, as defined by Blum (1993), in the following sense: checkers are companion programs that check the correctness of the output produced by (unproven but bounded-time) programs on particular instances; timers, on the other hand, are companion programs that attempt to bound the running time on particular instances of correct programs whose running times have not been fully analyzed. This paper provides a family of definitions that formalize the notion of a timer and some preliminary results that demonstrate the utility of these definitions.
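A toy sketch of the timer idea (hypothetical heuristic and certificate, not an example from the paper): a companion that certifies trial-division factoring will terminate quickly whenever the input has a very small prime factor, and otherwise declines to answer. Crucially, the timer only inspects the input; it never runs the heuristic.

```python
def timer_for_trial_division(n):
    # A "timer" companion (illustrative): certify fast running time of a
    # trial-division heuristic on inputs with a small prime factor, by
    # checking a constant number of candidate divisors.
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return "fast"
    # No cheap certificate found; the timer stays silent on this input.
    return "don't know"

print(timer_for_trial_division(1_000_000))  # → fast
print(timer_for_trial_division(10**9 + 7))  # → don't know
```

The asymmetry mirrors the definition: a timer may answer only on some inputs of interest, but when it does answer, the bound it asserts must hold.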

7.
To deal with the problem of insufficient labeled data in video object classification, one solution is to utilize additional pairwise constraints that indicate the relationship between two examples, i.e., whether these examples belong to the same class or not. In this paper, we propose a discriminative learning approach which can incorporate pairwise constraints into a conventional margin-based learning framework. Different from previous work that usually attempts to learn better distance metrics or estimate the underlying data distribution, the proposed approach can directly model the decision boundary and, thus, require fewer model assumptions. Moreover, the proposed approach can handle both labeled data and pairwise constraints in a unified framework. In this work, we investigate two families of pairwise loss functions, namely, convex and nonconvex pairwise loss functions, and then derive three pairwise learning algorithms by plugging in the hinge loss and the logistic loss functions. The proposed learning algorithms were evaluated using a people identification task on two surveillance video data sets. The experiments demonstrated that the proposed pairwise learning algorithms considerably outperform the baseline classifiers using only labeled data and two other pairwise learning algorithms with the same amount of pairwise constraints.
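One simple pairwise loss of the kind discussed, sketched with a hinge function (a hedged illustration; the paper's exact convex and nonconvex formulations may differ): a must-link pair is penalized when the two predictions disagree in sign or fall inside the margin, a cannot-link pair when they agree.

```python
def hinge(z):
    return max(0.0, 1.0 - z)

def pairwise_hinge_loss(f_xi, f_xj, same_class):
    # Illustrative pairwise loss on a pair of real-valued classifier
    # outputs f(x_i), f(x_j): for a must-link pair (same_class=True),
    # penalize when the product f(x_i)*f(x_j) is below the margin; for a
    # cannot-link pair, penalize when the product is above the negative
    # margin. Note the product term makes this nonconvex in f in general.
    y = 1.0 if same_class else -1.0
    return hinge(y * f_xi * f_xj)

print(pairwise_hinge_loss(0.9, 1.2, same_class=True))   # → 0.0 (predictions agree)
print(pairwise_hinge_loss(0.9, 1.2, same_class=False))  # positive loss
```

Summing such terms over all constrained pairs, alongside the usual hinge loss on labeled examples, yields a single margin-based objective over both kinds of supervision.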

8.
Pairwise comparison is commonly used to estimate preference values of finite alternatives with respect to a given criterion. We discuss 18 estimating methods for deriving preference values from pairwise judgment matrices under a common framework of effectiveness: distance minimization and correctness in error-free cases. We point out the importance of commensurate scales when aggregating all the columns of a judgment matrix and the desirability of weighting the columns according to the preference values. The common framework is useful in differentiating the strengths and weaknesses of the estimating methods. Some comparison results of these 18 methods on two sets of judgment matrices with small and large errors are presented. We also give insight regarding the underlying mathematical structure of some of the methods.

Scope and purpose
Pairwise comparison is commonly used to estimate preference values of finite alternatives with respect to a given criterion. This is part of the model structure of the analytic hierarchy process, a widely used multicriteria decision-making methodology. The main difficulty is to reconcile the inevitable inconsistency of the pairwise comparison matrix elicited from decision makers in real-world applications. We discuss 18 estimating methods for deriving preference values from pairwise judgment matrices under a common framework of effectiveness: the common concepts of minimizing aggregated deviation and correctness in error-free cases. The common framework is useful in differentiating the strengths and weaknesses of these methods. For each of these methods, we point out their individual strength in decisional effectiveness. Some comparison results of these 18 methods on two sets of judgment matrices with small and large errors are presented. We also give insight regarding the underlying mathematical structure of some of the methods.
We recommend the simple geometric mean method, with the stronger feature of distance minimization, and the simple normalized column sum method, which is based on the simple ideas of commensurate units and column sums. These two methods have closed-form formulas for easy calculation and perform well on both sets of judgment matrices with small and large errors.
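Both recommended methods can be written in a few lines each (a sketch with a made-up consistent judgment matrix; on such a matrix every sound method recovers the true weights).

```python
from math import prod

def geometric_mean_weights(A):
    # Row geometric mean method: w_i proportional to (prod_j a_ij)^(1/n),
    # then normalized to sum to 1.
    n = len(A)
    g = [prod(row) ** (1.0 / n) for row in A]
    s = sum(g)
    return [x / s for x in g]

def normalized_column_sum_weights(A):
    # Normalize each column to sum to 1 (commensurate units), then
    # average the normalized entries across columns for each row.
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

# Consistent judgment matrix built from true weights in ratio 2 : 1 : 1.
A = [[1.0, 2.0, 2.0],
     [0.5, 1.0, 1.0],
     [0.5, 1.0, 1.0]]
print([round(w, 6) for w in geometric_mean_weights(A)])         # → [0.5, 0.25, 0.25]
print([round(w, 6) for w in normalized_column_sum_weights(A)])  # → [0.5, 0.25, 0.25]
```

On an error-free matrix the two methods coincide; they differ only in how they aggregate the inconsistency of real elicited judgments.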

9.
A framework for modeling and evaluating automatic semantic reconciliation (total citations: 4; self-citations: 0; citations by others: 4)
The introduction of the Semantic Web vision and the shift toward machine-understandable Web resources have highlighted the importance of automatic semantic reconciliation. Consequently, new tools for automating the process were proposed. In this work we present a formal model of semantic reconciliation and analyze in a systematic manner the properties of the process outcome, primarily the inherent uncertainty of the matching process and how it is reflected in the resulting mappings. An important feature of this research is the identification and analysis of factors that impact the effectiveness of algorithms for automatic semantic reconciliation, leading, it is hoped, to the design of better algorithms by reducing their uncertainty. Against this background we empirically study the aptitude of two algorithms to correctly match concepts. This research is both timely and practical in light of recent attempts to develop and utilize methods for automatic semantic reconciliation. Received: 6 December 2002; Accepted: 15 September 2003; Published online: 19 December 2003. Edited by: V. Atluri.

10.
This paper discusses the issues involved in evaluating a software bidding model. We found it difficult to assess the appropriateness of any model evaluation activities without a baseline or standard against which to assess them. This paper describes our attempt to construct such a baseline. We reviewed evaluation criteria used to assess cost models and an evaluation framework that was intended to assess the quality of requirements models. We developed an extended evaluation framework and an associated evaluation process that will be used to evaluate our bidding model. Furthermore, we suggest the evaluation framework might be suitable for evaluating other models derived from expert-opinion based influence diagrams.

11.
Numerous formal specification methods for reactive systems have been proposed in the literature. Because the significant differences between the methods are hard to determine, choosing the best method for a particular application can be difficult. We have applied several different methods, including Modechart, VFSM, ESTEREL, Basic LOTOS, Z, SDL, and C, to an application problem encountered in the design of software for AT&T's 5ESS telephone switching system. We have developed a set of criteria for evaluating and comparing the different specification methods. We argue that the evaluation of a method must take into account not only academic concerns, but also the maturity of the method, its compatibility with the existing software development process and system execution environment, and its suitability for the chosen application domain.

12.
Nowadays, a great number of both specific and general data mining tools are available for association rule mining. However, several of these tools must typically be combined to obtain only the most interesting and useful rules for a given problem and dataset. To address this drawback, this paper describes a fully integrated framework to help in the discovery and evaluation of association rules. Using this tool, any data mining user can easily discover, filter, visualize, evaluate and compare rules by following a helpful and practical guided process described in this paper. The paper also explains the results obtained using a sample public dataset.
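The filtering such a framework performs rests on standard interestingness measures for association rules; a minimal sketch with a made-up transaction set:

```python
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    # Fraction of transactions containing every item of the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # conf(A -> B) = support(A ∪ B) / support(A): how often the rule
    # holds among transactions that contain its antecedent.
    return support(antecedent | consequent) / support(antecedent)

print(support({"bread", "milk"}))       # → 0.5
print(confidence({"bread"}, {"milk"}))  # ≈ 0.667
```

Keeping only rules above user-chosen support and confidence thresholds is the simplest form of the rule filtering the paper's guided process automates.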

13.
Component-based software development is being identified as the emerging method of developing complex applications consisting of heterogeneous systems. Although more research attention has been given to Commercial Off The Shelf (COTS) components, original software components are also widely used in the software industry. Original components are smaller in size, have a narrower functional scope, and are typically used for specific, dedicated functions. Therefore, their need for interoperability is equal to or greater than that of COTS components. A quality framework for developing and evaluating original components is proposed in this paper, along with an application methodology that facilitates their evaluation. The framework is based on the ISO 9126 quality model, which is modified and refined to better reflect the notion of original components. The quality model introduced can be tailored according to the organization-reuser and the domain needs of the targeted component. The proposed framework is demonstrated and validated through real case examples, while its applicability is assessed and discussed.

14.
《Information & Management》2004,42(1):179-196
Organizations are implementing knowledge management (KM) systems with the assumption that the result will be an increase in organizational effectiveness, efficiency, and competitiveness. Implementing KM systems, however, may pose a problem for organizations: too much or too little effort might lead to unwanted outcomes. This paper shows how the introduction of KM systems, which leads to knowledge sharing, can have negative as well as positive effects. Important variables from economic perspectives are identified and presented as an integrated framework to illustrate their interrelationships. This paper also explains the implications of an integrated framework for knowledge flow in organizations.

15.
《Information & Management》2005,42(1):179-196
Organizations are implementing knowledge management (KM) systems with the assumption that the result will be an increase in organizational effectiveness, efficiency, and competitiveness. Implementing KM systems, however, may pose a problem for organizations: too much or too little effort might lead to unwanted outcomes. This paper shows how the introduction of KM systems, which leads to knowledge sharing, can have negative as well as positive effects. Important variables from economic perspectives are identified and presented as an integrated framework to illustrate their interrelationships. This paper also explains the implications of an integrated framework for knowledge flow in organizations.

16.
The evaluation of environmentally conscious manufacturing programs is similar to many strategic initiatives and their justification methodologies. This similarity arises from the fact that multiple factors need to be considered, many of which have long-term and broad implications for an organization. The types of programs that could be evaluated range from the appropriate selection of product designs and materials to major disassembly programs that may be implemented in parallel with standard assembly programs. The methodology involves the synthesis of the analytic network process (ANP) and data envelopment analysis (DEA). We consider some of the more recent modeling innovations in each of these areas to help address a critical and important decision that many managers and organizations are beginning to face. An illustrative example provides some insights into the application of this methodology. Additional issues and research questions are also identified.

17.
18.
Learning from observation (LfO), also known as learning from demonstration, studies how computers can learn to perform complex tasks by observing and thereafter imitating the performance of a human actor. Although there has been a significant amount of research in this area, there is no agreement on a unified terminology or evaluation procedure. In this paper, we present a theoretical framework based on dynamic Bayesian networks (DBNs) for the quantitative modeling and evaluation of LfO tasks. Additionally, we provide evidence showing that: (1) the information captured through the observation of agent behaviors occurs as the realization of a stochastic process (and often not just as a sample of a state-to-action map); and (2) learning can be simplified by introducing dynamic Bayesian models with hidden states, for which the learning and model evaluation tasks can be reduced to minimization and estimation of stochastic similarity measures such as cross-entropy.
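The cross-entropy measure mentioned in point (2) can be illustrated on toy action distributions (hypothetical data, a single-state simplification of the DBN setting): a learned model that assigns high probability to the actor's observed behavior scores a lower cross-entropy.

```python
from math import log

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -sum_a p(a) * log q(a): the expected surprise of the
    # actor's behavior distribution p under the learned model q.
    # Lower values indicate a closer imitation; eps guards log(0).
    return -sum(p[a] * log(max(q.get(a, 0.0), eps)) for a in p if p[a] > 0)

actor   = {"left": 0.7, "right": 0.3}  # observed action frequencies
model_a = {"left": 0.6, "right": 0.4}  # close imitation
model_b = {"left": 0.1, "right": 0.9}  # poor imitation
print(cross_entropy(actor, model_a) < cross_entropy(actor, model_b))  # → True
```

In the full framework the same comparison would be made over trajectories of a hidden-state model rather than a single marginal distribution, but the ranking principle is identical.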

19.
The design of a real-time system needs to incorporate methods specifically developed to represent the temporal properties of the system under consideration. Real-time systems contain time-driven and event-driven actions. Structured design methods provided a reasonable set of abstractions for designing time- and event-driven factors in real-time designs. As program complexity, size, and time-to-market pressure have grown, the real-time community has migrated toward object-oriented technology. Evidence suggests that object-oriented technology in non-real-time systems is most effective in the abstraction, modeling, implementation, and reuse of software systems. Many design models and methods exist for object-oriented real-time design. However, selecting a model for a particular application remains a tedious task. This paper introduces an analysis framework that can be applied to a design model to evaluate its effectiveness against desired performance specifications. To illustrate our approach, we present a case study using the popular automotive cruise-control example on two real-time object-oriented models.

20.
User Modeling and User-Adapted Interaction - One common characteristic of research works focused on fairness evaluation (in machine learning) is that they call for some form of parity (equality)...  相似文献   

