Similar Documents
20 similar documents retrieved.
1.
We present a reasoning system for inferring dimension information in spreadsheets. This system can be used to check the consistency of spreadsheet formulas and thus is able to detect errors in spreadsheets. Our approach is based on three static analysis components. First, the spatial structure of the spreadsheet is analyzed to infer a labeling relationship among cells. Second, cells that are used as labels are lexically analyzed and mapped to potential dimensions. Finally, dimension information is propagated through spreadsheet formulas. An important aspect of the rule system defining dimension inference is that it works bi-directionally, that is, not only “downstream” from referenced arguments to the current cell, but also “upstream” in the reverse direction. This flexibility makes the system robust and turns out to be particularly useful in cases when the initial dimension information that can be inferred from headers is incomplete or ambiguous. We have implemented a prototype system as an add-in to Excel. In an evaluation of this implementation we were able to detect dimension errors in almost 50% of the investigated spreadsheets, which shows (i) that the system works reliably in practice and (ii) that dimension information can be well exploited to uncover errors in spreadsheets.
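To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's rule system) of propagating dimension labels through toy formulas and flagging inconsistencies; the cell names, formula encoding, and dimension vocabulary are all assumptions for illustration.

```python
# Hypothetical sketch of dimension propagation through spreadsheet formulas.
# The paper's actual system is richer (header analysis, lexical mapping of
# labels, bidirectional rule system, Excel integration).

# Dimensions inferred from labels (e.g. column headers "kg", "m").
dims = {"A1": "kg", "A2": "kg", "B1": "m"}

# Formulas as (target, operator, operands); '+' requires equal dimensions.
formulas = [("A3", "+", ["A1", "A2"]),   # consistent: kg + kg
            ("A4", "+", ["A1", "B1"])]   # inconsistent: kg + m

def propagate(dims, formulas):
    errors = []
    for target, op, args in formulas:
        arg_dims = {dims.get(a) for a in args if dims.get(a)}
        if op == "+" and len(arg_dims) > 1:
            errors.append((target, arg_dims))   # dimension clash detected
        elif len(arg_dims) == 1:
            d = arg_dims.pop()
            dims[target] = d                    # "downstream" propagation
            for a in args:                      # "upstream": fill in unknowns
                dims.setdefault(a, d)
    return errors

print(propagate(dims, formulas))   # -> [('A4', {'kg', 'm'})]
```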

2.
Whereas to most logicians, the word “theorem” refers to any statement which has been shown to be true, to mathematicians, the word “Theorem” is, relatively speaking, rarely applied, and denotes something far more special. In this paper, we examine some of the underlying reasons behind this difference in terminology, and we show how this discrepancy might be exploited, in order to build a computer system which automatically selects the latter type of “Theorems” from amongst the former. Indeed, we have begun building the automated discovery system MATHsAiD, the design of which is based upon our research. We provide some preliminary results produced by this system, and compare these results to Theorems appearing in various mathematics textbooks.

3.
Network flow control under capacity constraints: A case study
In this paper, we demonstrate how tools from nonlinear system theory can play an important role in tackling “hard nonlinearities” and “unknown disturbances” in network flow control problems. Specifically, a nonlinear control law is presented for a communication network buffer management model under physical constraints. Explicit conditions are identified under which the problem of asymptotic regulation of a class of networks against unknown inter-node traffic is solvable, in the presence of control input and state saturation. The conditions include a Lipschitz-type condition and a “PE” condition. Under these conditions, we achieve either asymptotic or practical regulation for a single-node system. We also propose a decentralized, discontinuous control law to achieve (global) asymptotic regulation of large-scale networks. Our main result on controlling large-scale networks is based on an interesting extension of the well-known Young's inequality for the case with saturation nonlinearities. We present computer simulations to illustrate the effectiveness of the proposed flow control schemes.
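As a rough illustration of the problem setting only (not the paper's control law), the following sketch simulates a single-node buffer with input and state saturation under a simple proportional service-rate law and an unknown inflow; all numerical values are assumed, and this naive law only achieves regulation with a residual offset rather than the asymptotic regulation established in the paper.

```python
import numpy as np

# Single-node buffer x regulated toward a setpoint x_ref by a saturated
# service-rate control, with an unknown (here, slowly varying) inflow d(t).
def saturate(u, lo, hi):
    return min(max(u, lo), hi)

x, x_ref, dt = 80.0, 50.0, 0.01           # queue length, target, time step (assumed units)
k, u_max, x_max = 2.0, 40.0, 200.0        # gain and saturation limits (assumed)
for i in range(5000):
    t = i * dt
    d = 20.0 + 5.0 * np.sin(0.3 * t)      # unknown disturbance (inter-node inflow)
    u = saturate(k * (x - x_ref), 0.0, u_max)    # service rate, input saturation
    x = saturate(x + dt * (d - u), 0.0, x_max)   # state saturation: 0 <= x <= buffer size
print(round(x, 2))   # settles near x_ref + d/k under this simple proportional law
```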

4.
Technology benefits typically last years beyond the horizon of a standard ROI valuation analysis, but these longer-lasting gains are rarely enumerated. In this paper, we utilize a nonconstant dividend growth model to “capture” lasting marginal productivity gained through the “reinvestment” of labor capital, rather than the standard one-time gain of reducing the labor force to realize labor productivity gains. This innovative methodology for capturing the productivity value of retained employees enables the valuation of continuing marginal productivity gains and the management of workload for the affected employees at Intel. The methodology is applied to the valuation of a standard operating system and hardware upgrade.
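A hedged worked example of a generic two-stage (nonconstant) dividend-growth valuation applied to an annual productivity gain; all figures are hypothetical placeholders and not drawn from the Intel case.

```python
# Two-stage dividend-growth valuation: a high-growth phase followed by a
# stable-growth terminal value (Gordon growth). All inputs are hypothetical.
gain0, g_high, g_stable, horizon, r = 100_000.0, 0.10, 0.02, 5, 0.12

pv = 0.0
gain = gain0
for year in range(1, horizon + 1):          # high-growth phase
    gain *= (1 + g_high)
    pv += gain / (1 + r) ** year

terminal = gain * (1 + g_stable) / (r - g_stable)   # terminal value at end of horizon
pv += terminal / (1 + r) ** horizon
print(round(pv, 2))   # present value of the continuing productivity gain
```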

5.
The paper presents an approach to characterizing a “stop–flow” mode of sensor array operation. The considered operation mode involves three successive phases of sensor exposure: flow (in a stream of the measured gas), stop (in zero-flow conditions) and recovery (in a stream of pure air). The mode was characterized by describing how the information relevant for classifying the measured gases is distributed within the sensor array response. The input data for the classifier were sets of sensor output values acquired at discrete time points of the measurement. Discriminant Function Analysis was used for data analysis. Organic vapours of ethanol, acetic acid and ethyl acetate in air were measured and classified. Our attention was focused on data sets which allowed for 100% correct recognition of the analytes. The number, size and composition of those data sets were examined versus the time of the sensor array response. This methodology allowed us to observe the distribution of classification-relevant information in the sensor array response obtained in “stop–flow” mode, and hence it provided a characterization of this mode.
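The classification step can be sketched as follows on synthetic data, using scikit-learn's linear discriminant analysis as a stand-in for Discriminant Function Analysis; the feature layout (sensors times time points) and the data themselves are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each sample is a sensor-array response sampled at discrete time points of the
# flow/stop/recovery cycle; here the responses are synthetic, well-separated blobs.
rng = np.random.default_rng(0)
n_per_class, n_features = 30, 12            # 12 = sensors x time points (assumed)
classes = ["ethanol", "acetic acid", "ethyl acetate"]
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(n_per_class, n_features))
               for i, _ in enumerate(classes)])
y = np.repeat(classes, n_per_class)

lda = LinearDiscriminantAnalysis()
accuracy = lda.fit(X, y).score(X, y)        # training accuracy on the toy data
print(accuracy)                              # near 1.0 for this well-separated toy set
```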

6.
At a recent conference on games in education, we made a radical decision to transform our standard presentation of PowerPoint slides and computer game demonstrations into a unified whole, inserting the PowerPoint presentation into the computer game. This opened up various questions relating to learning and teaching theories, which were debated by the conference delegates. In this paper, we reflect on these discussions, present our initial experiment, and relate this to various theories of learning and teaching. In particular, we consider the applicability of “concept maps” to inform the construction of educational materials, especially their topological, geometrical and pedagogical significance. We supplement this “spatial” dimension with a theory of the dynamic, temporal dimension, grounded in a context of learning processes, such as Kolb’s learning cycle. Finally, we address the multi-player aspects of computer games, and relate this to theories of social and collaborative learning. This paper attempts to explore various theoretical bases, and so support the development of a new learning and teaching virtual reality approach.

7.
Agile software development (ASD) is an emerging approach in software engineering, initially advocated by a group of 17 software professionals who practice a set of “lightweight” methods and share a common set of values of software development. In this paper, we advance the state of the art of research in this area by conducting a survey-based ex-post-facto study to identify factors that, from the perspective of ASD practitioners, influence the success of projects adopting ASD practices. We describe the hypothetical success factors framework we developed to address our research question, the hypotheses we conjectured, the research methodology, the data analysis techniques we used to validate the hypotheses, and the results we obtained from the data analysis. The study was conducted using an unprecedentedly large-scale survey-based methodology, consisting of respondents who practice ASD and who had prior experience practicing plan-driven software development. The study indicates that nine of the 14 hypothesized factors have a statistically significant relationship with “Success”. The important success factors found are: customer satisfaction, customer collaboration, customer commitment, decision time, corporate culture, control, personal characteristics, societal culture, and training and learning.
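A minimal sketch of the kind of significance test involved, on synthetic data rather than the survey responses; the factor name, rating scale, and effect size are assumed for illustration.

```python
import numpy as np
from scipy import stats

# Test whether one hypothesized factor (e.g. "customer collaboration" rated on
# a Likert scale) has a statistically significant relationship with perceived
# project success. Data below are synthetic with a built-in association.
rng = np.random.default_rng(1)
n = 100
collaboration = rng.integers(1, 6, size=n)                  # 1..5 Likert ratings
success = collaboration * 0.8 + rng.normal(0, 1.5, size=n)  # synthetic success score

r, p_value = stats.pearsonr(collaboration, success)
print(f"r = {r:.2f}, p = {p_value:.4f}")   # p < 0.05 would support the hypothesis
```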

8.
Some computationally hard problems, e.g., deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others compilability would imply consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, “completeness” meaning that they are the least likely to be compilable. We also investigate preprocessing that does not yield polynomial-time on-line algorithms, but generically “decreases” complexity. This leads us to define “hierarchies of compilability,” which are the analog of the polynomial hierarchy. A detailed comparison of our framework to the idea of “parameterized tractability” shows the differences between the two approaches.

9.
We establish that, under appropriate conditions, the solutions of a time-varying system with disturbances converge uniformly on compact time intervals to the solutions of the system's average as the rate of change of time increases to infinity. The notions of “average” used for systems with disturbances are the “strong” and “weak” averages introduced by Nešić and Teel (D. Nešić, A.R. Teel, Input-to-state stability for time-varying nonlinear systems via averaging, 1999, submitted for publication. See also: On averaging and the ISS property, Proceedings of the 38th Conference on Decision and Control, Phoenix, AZ, December 1999, pp. 3346–3351).
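For orientation, a sketch of the classical averaging setup this result generalizes; the precise “strong” and “weak” average definitions for systems with disturbances are those of Nešić and Teel and are not reproduced here.

```latex
% Classical averaging setup (disturbance-free average shown for illustration):
\[
  \dot{x} = f\!\left(\tfrac{t}{\varepsilon},\, x,\, w\right),
  \qquad
  f_{\mathrm{av}}(x) \;=\; \lim_{T\to\infty} \frac{1}{T}\int_{0}^{T} f(\tau, x, 0)\,\mathrm{d}\tau .
\]
% As the rate of change of time 1/epsilon grows without bound, solutions of the
% time-varying system converge, uniformly on compact time intervals, to the
% solutions of the average system \dot{x} = f_av(x).
```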

10.
Defining operational semantics for a process algebra is often based either on labeled transition systems that account for interaction with a context or on the so-called reduction semantics: we assume that we have a representation of the whole system and we compute unlabeled reduction transitions (leading to a distribution over states in the probabilistic case). In this paper we consider mixed models with states where the system is still open (towards interaction with a context) and states where the system is already closed. The idea is that (open) parts of a system “P” can be closed via an operator “PG” that turns already synchronized actions whose “handle” is specified inside “G” into prioritized reduction transitions (and, therefore, states performing them into closed states). We show that we can use the operator “PG” to express multi-level priorities and external probabilistic choices (by assigning weights to handles inside G), and that, by considering reduction transitions as the only unobservable τ transitions, the proposed technique is compatible, for process algebra with general recursion, with both standard (probabilistic) observational congruence and a notion of equivalence which aggregates reduction transitions in a (much more aggregating) trace-based manner. We also observe that the trace-based aggregated transition system can be obtained directly in operational semantics, and we present the “aggregating” semantics. Finally, we discuss how the open/closed approach can also be used to express discrete and continuous (exponential probabilistic) time, and we show that, in such timed contexts, the trace-based equivalence can aggregate more than traditional lumping-based equivalences over Markov Chains.

11.
Efficient constrained local model fitting for non-rigid face alignment
Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The “simultaneous” algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2–3 fps). The “project-out” algorithm for fitting an AAM achieves faster than real-time performance (>200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the “simultaneous” AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the “exhaustive local search” (ELS) algorithm. Experiments were conducted on the CMU MultiPIE database.
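The combination step can be sketched as a weighted least-squares solve; the landmark count, parameter dimension, Jacobians, and weights below are synthetic placeholders, not the paper's model.

```python
import numpy as np

# Given N simple per-landmark displacement estimates d_i (from local patch
# experts) with confidence weights w_i, estimate a low-dimensional warp
# parameter update p by weighted least squares: min_p sum_i w_i ||J_i p - d_i||^2.
rng = np.random.default_rng(0)
N, n_params = 68, 10                       # landmarks and warp parameters (assumed)
J = rng.normal(size=(2 * N, n_params))     # stacked 2 x n_params Jacobian rows per landmark
d = rng.normal(size=2 * N)                 # stacked per-landmark (x, y) displacements
w = np.repeat(rng.uniform(0.5, 1.0, N), 2) # per-landmark confidence weights

W = np.diag(w)
p = np.linalg.solve(J.T @ W @ J, J.T @ W @ d)   # normal equations of the WLS problem
print(p.shape)                                   # (10,) single complex warp update
```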

12.
Many artificial intelligence tasks, such as automated question answering, reasoning, or heterogeneous database integration, involve verification of a semantic category (e.g. “coffee” is a drink, “red” is a color, while “steak” is not a drink and “big” is not a color). In this research, we explore completely automated on-the-fly verification of membership in any arbitrary category which has not been anticipated a priori. Our approach does not rely on any manually codified knowledge (such as WordNet or Wikipedia) but instead capitalizes on the diversity of topics and word usage on the World Wide Web, and thus can be considered “knowledge-light” and complementary to “knowledge-intensive” approaches. We have created a quantitative verification model and established (1) what specific variables are important and (2) what ranges and upper limits of accuracy are attainable. While our semantic verification algorithm is entirely self-contained (not involving any previously reported components that are beyond the scope of this paper), we have tested it empirically within our fact-seeking engine on the well-known TREC conference test questions. With our implementation of semantic verification, answer accuracy improved by up to 16%, depending on the specific models and metrics used.
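A toy sketch of a co-occurrence score for category membership; the hit counts are made-up placeholders standing in for web counts, and the paper's verification model uses its own variables and thresholds rather than this PMI-style score.

```python
import math

# Pointwise mutual information between a term and a candidate category, computed
# from (fabricated) occurrence counts normalized by an assumed corpus size.
hits = {("coffee", "drink"): 90_000, "coffee": 2_000_000, "drink": 5_000_000,
        ("steak", "drink"): 1_200,  "steak": 1_500_000}
total = 1e10   # assumed size of the indexed corpus, for normalization

def pmi(term, category):
    p_joint = hits[(term, category)] / total
    p_term, p_cat = hits[term] / total, hits[category] / total
    return math.log(p_joint / (p_term * p_cat))

print(pmi("coffee", "drink") > pmi("steak", "drink"))   # True: "coffee" looks like a drink
```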

13.
The literature suggests the existence of critical success factors (CSFs) for the development of information systems that support senior executives. Our study of six organizations gives evidence for this notion of CSFs. The study further shows an interesting pattern, namely that companies either “get it right”, and essentially succeed on all CSFs, or “get it completely wrong”, that is, fall short on each of the CSFs. Among the six cases for which data were collected through in-depth interviews with company executives, three organizations seemed to manage all the CSFs properly, while two others managed all CSFs poorly. Only one organization showed a mixed scorecard, managing some factors well and some not so well. At the completion of the study, this organization could neither be judged as a success, nor as a failure. This dichotomy between success and failure cases suggests the existence of an even smaller set of “meta-success” factors. Based on our findings, we speculate that these “meta-success” factors are “championship”, “availability of resources”, and “link to organization objectives”.

14.
A crucial step in the modeling of a system is to determine the values of the parameters to use in the model. In this paper we assume that we have a set of measurements collected from an operational system, and that an appropriate model of the system (e.g., based on queueing theory) has been developed. Not infrequently, proper values for certain parameters of this model may be difficult to estimate from available data (because the corresponding parameters have unclear physical meaning or because they cannot be directly obtained from available measurements, etc.). Hence, we need a technique to determine the missing parameter values, i.e., to calibrate the model. As an alternative to an unscalable “brute force” technique, we propose to view model calibration as a non-linear optimization problem with constraints. The resulting method is conceptually simple and easy to implement. Our contribution is twofold. First, we propose improved definitions of the “objective function” to quantify the “distance” between performance indices produced by the model and the values obtained from measurements. Second, we develop a customized derivative-free optimization (DFO) technique whose original feature is the ability to allow temporary constraint violations. This technique allows us to solve this optimization problem accurately, thereby providing the “right” parameter values. We illustrate our method using two simple real-life case studies.
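As a generic stand-in (the paper develops a customized derivative-free optimizer that tolerates temporary constraint violations), the following sketch calibrates two hypothetical parameters of a toy model by minimizing one possible "distance" objective with an off-the-shelf derivative-free method.

```python
import numpy as np
from scipy.optimize import minimize

# Calibrate missing parameters so that the model's performance indices match
# the measured ones; the model, parameters, and measurements are toy values.
measured = np.array([12.0, 0.85, 3.4])        # e.g. response time, utilization, queue length

def model(params):
    service_rate, think_time = params          # hypothetical unknown parameters
    return np.array([10.0 / service_rate + think_time,
                     0.9 / service_rate,
                     3.0 / service_rate])

def objective(params):
    predicted = model(params)
    return np.sum(((predicted - measured) / measured) ** 2)   # sum of squared relative errors

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")   # derivative-free
print(result.x, objective(result.x))
```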

15.
Decisions involving large-scale, complex systems, particularly those in which “society” serves as an ultimate judge of their outcome and effectiveness, also involve multiple, conflicting and noncommensurable goals. Traditional models for the representation and solution of such problems have generally been forced to ignore the multiobjective nature of such problems. As a result, we obtain “optimal” solutions to the simplified models but, since the models do not reflect the actual situation, these solutions can sometimes cause more harm than good. Since large-scale, complex and multiobjective systems are so predominant in urban systems, it is vital that any improved methodology for modeling and solution be at least considered. In this paper we direct our attention to just one of the several new approaches to multiobjective decision analysis: the tool known as goal programming. Considerable interest seems to have been generated in this area recently, but the perceptions of goal programming are varied and conflicting and, all too often, erroneous. In this paper an attempt is made to present the reader with a logical structuring of multiobjective optimization and, in particular, to identify goal programming's place and role within this framework. In doing this we hope to dispel a number of myths and misconceptions that have arisen while, at the same time, presenting an accurate view of the scope and limitations of the methodology. While the paper is primarily tutorial, we will, however, also consider the actual and potential implementation of goal programming in problems encountered in the study of urban systems.
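A small, self-contained goal program illustrates the idea: conflicting goals are softened with deviation variables whose weighted sum is minimized; the decision variables, targets, and weights below are hypothetical and not taken from the paper.

```python
from scipy.optimize import linprog

# Hypothetical urban-planning flavoured goal program with two decision
# variables x1, x2 and three conflicting goals:
#   Goal 1: 3*x1 + 2*x2 <= 12  (budget; penalize over-achievement d1p)
#   Goal 2: x1 >= 4            (housing target; penalize shortfall d2m, weight 2)
#   Goal 3: x2 >= 3            (park target;    penalize shortfall d3m, weight 1)
# Variable order: [x1, x2, d1m, d1p, d2m, d2p, d3m, d3p], all >= 0.
c = [0, 0, 0, 1, 2, 0, 1, 0]                     # weighted deviations to minimize
A_eq = [[3, 2, 1, -1, 0, 0, 0, 0],               # 3*x1 + 2*x2 + d1m - d1p = 12
        [1, 0, 0, 0, 1, -1, 0, 0],               # x1 + d2m - d2p = 4
        [0, 1, 0, 0, 0, 0, 1, -1]]               # x2 + d3m - d3p = 3
b_eq = [12, 4, 3]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8, method="highs")
print(res.x[:2], res.fun)   # x = [4, 0]: the weightier housing goal is met first
```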

16.
17.
An integrated multi-unit chemical plant presents a challenging control design problem due to the existence of recycling streams. In this paper, we develop a framework for analyzing the effects of recycling dynamics on closed-loop performance, from which a systematic design of a decentralized control system for a recycled, multi-unit plant is established. In the proposed approach, the recycled streams are treated as unmodelled dynamics of the “unit” model so that their effects on closed-loop stability and performance can be analyzed using robust control theory. As a result, two measures are produced: (1) the ν-gap metric, which quantifies the strength of recycling effects, and (2) the maximum stability margin of the “unit” controller, which represents the ability of the “unit” controller to compensate for such effects. A simple rule for the “unit” control design is then established using the two measures combined, in order to guarantee good overall closed-loop performance. As illustrated by several design examples, the controllability of a recycled, multi-unit process under a decentralized “unit” controller can be determined without any detailed design of the “unit” controller, because the simple rule is calculated from open-loop information only.
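One standard result from the ν-gap framework that a rule of this kind can build on is sketched below; the paper's specific design rule is not reproduced, and the notation b_{P,C} for the generalized stability margin is the usual one from robust control.

```latex
% Standard nu-gap robust stability result (Vinnicombe-style): if the "unit"
% controller C stabilizes the unit model P with generalized stability margin
% b_{P,C}, and the recycle-perturbed plant \tilde{P} satisfies
\[
  \delta_{\nu}\bigl(P, \tilde{P}\bigr) \;<\; b_{P,C},
\]
% then C also stabilizes \tilde{P}. A larger margin b_{P,C} therefore tolerates
% stronger recycling effects as measured by the nu-gap metric \delta_{\nu}.
```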

18.
“Walkthrough” and “Jogthrough” techniques are well-known expert-based methodologies for the evaluation of user interface design. In this paper we describe the use of the “Graphical” Jogthrough method for evaluating the interface design of the Network Simulator, an educational simulation program that enables users to virtually build a computer network, install hardware and software components, make the necessary settings and test the functionality of the network. Graphical Jogthrough is a further modification of the typical Jogthrough method, where evaluators' ratings produce evidence in the form of a graph presenting the estimated proportion of users who use the interface effectively versus the time they had to work with it in order to reach that level of effectiveness. We comment on the question: “What are the possible benefits and limitations of the Graphical Jogthrough method when applied in the case of educational software interface design?” We present the results of the evaluation session, and concluding from our experience we argue that the method could offer designers quantitative and qualitative data for formulating a useful (though rough in some aspects) estimation of the novice–becoming–expert pace that end users might follow when working with the evaluated interface.

19.
The threat of cyber attacks motivates the need to monitor Internet traffic data for potentially abnormal behavior. Due to the enormous volumes of such data, statistical process monitoring tools, such as those traditionally used on data in the product manufacturing arena, are inadequate. “Exotic” data may indicate a potential attack; detecting such data requires a characterization of “typical” data. We devise some new graphical displays, including a “skyline plot,” that permit ready visual identification of unusual Internet traffic patterns in “streaming” data, and use appropriate statistical measures to help identify potential cyber attacks. These methods are illustrated on a moderate-sized data set (135,605 records) collected at George Mason University.

20.
Time series data mining (TSDM) techniques permit exploring large amounts of time series data in search of consistent patterns and/or interesting relationships between variables. TSDM is becoming increasingly important as a knowledge management tool where it is expected to reveal knowledge structures that can guide decision making under conditions of limited certainty. Human decision making in problems related to the analysis of time series databases is usually based on perceptions like “end of the day”, “high temperature”, “quickly increasing”, “possible”, etc. Though many effective TSDM algorithms have been developed, the integration of TSDM algorithms with human decision-making procedures is still an open problem. In this paper, we consider an architecture for a perception-based decision-making system in time series database domains, integrating perception-based TSDM, computing with words and perceptions, and expert knowledge. The new tasks which should be solved by perception-based TSDM methods to enable their integration in such systems are discussed. These tasks include: precisiation of perceptions, shape pattern identification, and pattern retranslation. We show how different methods developed so far in TSDM for the manipulation of perception-based information can be used to develop a fuzzy perception-based TSDM approach. This approach is grounded in computing with words and perceptions, which permits the formalization of human perception-based inference mechanisms. The discussion is illustrated by examples from economics, finance, meteorology, medicine, etc.
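A toy sketch of the "precisiation of perceptions" task: linguistic terms such as "high temperature" and "quickly increasing" are given hand-picked membership functions; the thresholds and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Fuzzy membership functions for two perceptions over a temperature series.
def high_temperature(t_celsius):
    # membership grows linearly from 0 at 25 C to 1 at 35 C (assumed thresholds)
    return float(np.clip((t_celsius - 25.0) / 10.0, 0.0, 1.0))

def quickly_increasing(series):
    # membership based on the mean slope of the series (per time step)
    slope = np.mean(np.diff(series))
    return float(np.clip(slope / 2.0, 0.0, 1.0))   # slope of 2 units/step counts as fully "quick"

temps = np.array([24.0, 26.5, 29.0, 31.5, 34.0])   # synthetic afternoon readings
print(high_temperature(temps[-1]))                  # 0.9
print(quickly_increasing(temps))                    # 1.0 (mean slope 2.5, clipped)
```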
