Similar Documents
20 similar documents found (search time: 0 ms)
1.
Correcting design decay in source code is not a trivial task. Diagnosing and subsequently correcting inconsistencies between a software system’s code and its design rules (e.g., database queries are only allowed in the persistence layer) and coding conventions can be complex, time-consuming and error-prone. Providing support for this process is therefore highly desirable, but of a far greater complexity than suggesting basic corrective actions for simplistic implementation problems (like the “declare a local variable for non-declared variable” suggested by Eclipse).

We present an abductive reasoning approach to inconsistency correction that consists of (1) a means for developers to document and verify a system’s design and coding rules, (2) an abductive logic reasoner that hypothesizes possible causes of inconsistencies between the system’s code and the documented rules and (3) a library of corrective actions for each hypothesized cause. This work builds on our previous work, where we expressed design rules as equality relationships between sets of source code artifacts (e.g., the set of methods in the persistence layer is the same as the set of methods that query the database). In this paper, we generalize our approach to design rules expressed as user-defined binary relationships between two sets of source code artifacts (e.g., every state changing method should invoke a persistence method).

We illustrate our approach on the design of IntensiVE, a tool suite that enables defining sets of source code artifacts intensionally (by means of logic queries) and verifying relationships between such sets.
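To make the shape of such a rule concrete, below is a minimal Python sketch of checking a binary design rule and hypothesizing causes for each violation. This is only an illustration of the idea, not the actual IntensiVE machinery: the method names, the `invokes` relation and the candidate causes are all invented.

```python
# Minimal sketch: check a binary design rule over sets of source-code
# artifacts, then enumerate abductive hypotheses for each violation.
# All names below are hypothetical illustrations.

def check_rule(domain, codomain, relation):
    """Return the artifacts in `domain` that violate the rule
    'every element of domain must be related to some element of codomain'."""
    return {a for a in domain if not any(relation(a, b) for b in codomain)}

def hypothesize_causes(violator, relation_name):
    """Abduction lite: candidate explanations for one inconsistency."""
    return [
        f"{violator} is wrongly classified as belonging to the rule's domain",
        f"{violator} is missing a '{relation_name}' link to the codomain",
        f"the codomain set is under-approximated and omits {violator}'s target",
    ]

# Hypothetical rule: every state-changing method invokes a persistence method.
state_changing = {"Account.deposit", "Account.withdraw", "Account.rename"}
persistence_methods = {"AccountDAO.save", "AccountDAO.update"}
invocations = {("Account.deposit", "AccountDAO.save"),
               ("Account.withdraw", "AccountDAO.update")}

violators = check_rule(state_changing, persistence_methods,
                       lambda a, b: (a, b) in invocations)
for v in sorted(violators):                     # -> Account.rename
    for cause in hypothesize_causes(v, "invokes"):
        print(f"{v}: {cause}")
```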

2.
This paper proposes a strategic production–distribution model for supply chain design with consideration of bills of materials (BOM). Logical constraints are used to represent BOM and the associated relationships among the main entities of a supply chain, such as suppliers, producers, and distribution centers. We show how these relationships are formulated as logical constraints in a mixed integer programming (MIP) model, thus capturing the role of BOM in the selection of suppliers in the strategic design of a supply chain. A test problem is presented to illustrate the effectiveness of the formulation and solution strategy. The results and their managerial implications are discussed.

Scope and purpose: Supply chain design provides an optimal platform for efficient and effective supply chain management, and is an important strategic problem in operations management. This paper shows how mixed integer programming modeling techniques can be applied to the supply chain design problem when complicated relations, such as bills of materials, are involved, and discusses how to solve such a complicated model efficiently.
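As a rough illustration of how a BOM becomes a logical constraint in a MIP, here is a hedged sketch using the open-source PuLP solver. The single product, its two components and the candidate suppliers are invented; the paper's actual model is considerably richer.

```python
# Hedged sketch: a logical BOM constraint in a tiny MIP (PuLP).
# Data is invented for illustration only.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

bom = {"product": ["componentA", "componentB"]}           # hypothetical BOM
suppliers = {"componentA": ["s1", "s2"], "componentB": ["s3"]}
cost = {"s1": 5, "s2": 4, "s3": 7}                         # hypothetical costs

prob = LpProblem("supply_chain_design", LpMinimize)
produce = LpVariable("produce_product", cat=LpBinary)
select = {s: LpVariable(f"select_{s}", cat=LpBinary)
          for comp in suppliers for s in suppliers[comp]}

prob += lpSum(cost[s] * select[s] for s in select)         # minimise supplier cost

# Logical constraint from the BOM: if the product is made, every component
# on its bill of materials must have at least one selected supplier.
for comp in bom["product"]:
    prob += lpSum(select[s] for s in suppliers[comp]) >= produce

prob += produce == 1                                        # demand forces production
prob.solve()
print({s: int(select[s].value()) for s in select})          # e.g. s2 and s3 selected
```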

3.
Learning complex action models with quantifiers and logical implications
Automated planning requires action models described in languages such as the Planning Domain Definition Language (PDDL) as input, but building action models from scratch is a very difficult and time-consuming task, even for experts, because it is hard to formally describe all the conditions and changes that must be reflected in the preconditions and effects of action models. In the past, algorithms have been developed that automatically learn simple action models from plan traces. However, there are many cases in the real world where we need more complicated expressions based on universal and existential quantifiers, as well as logical implications, to precisely describe the underlying mechanisms of the actions; such complex action models cannot be learned by most previous algorithms. In this article, we present a novel algorithm called LAMP (Learning Action Models from Plan traces) that learns action models with quantifiers and logical implications from a set of observed plan traces with only partially observed intermediate state information. The LAMP algorithm generates candidate formulas that are passed to a Markov Logic Network (MLN) for selecting the most likely subsets of candidate formulas. The selected subset of formulas is then transformed into learned action models, which can be tweaked by domain experts to arrive at the final models. We evaluate our approach in four planning domains to demonstrate that LAMP is effective in learning complex action models, and we analyze the human effort saved by using LAMP to help create action models through a user study. Finally, we apply LAMP to a real-world application domain for software requirements engineering to help engineers acquire software requirements, and show that LAMP can indeed help experts a great deal in real-world knowledge-engineering applications.
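The following toy sketch mimics only LAMP's selection step: candidate formulas are scored against observed traces and the likeliest ones are kept. A real MLN learns weights jointly over all candidates, so this greedy frequency count is merely a stand-in, and every predicate below is invented.

```python
# Toy stand-in for LAMP's formula selection: score candidate precondition
# and effect formulas against partially observed traces. Hypothetical data.

traces = [  # observed states around a hypothetical action
    {"pre": {"at(robot,roomA)", "holding(nothing)"}, "action": "pickup(box)",
     "post": {"holding(box)"}},
    {"pre": {"at(robot,roomB)", "holding(nothing)"}, "action": "pickup(box)",
     "post": {"holding(box)"}},
]

candidates = [          # candidate formulas for pickup(box)
    ("pre", "holding(nothing)"),
    ("pre", "at(robot,roomA)"),
    ("eff", "holding(box)"),
]

def score(kind, literal):
    """Fraction of traces consistent with the candidate formula."""
    key = "pre" if kind == "pre" else "post"
    return sum(literal in t[key] for t in traces) / len(traces)

learned = [(k, f, score(k, f)) for k, f in candidates]
model = [(k, f) for k, f, s in learned if s >= 0.9]   # keep near-universal formulas
print(model)  # -> [('pre', 'holding(nothing)'), ('eff', 'holding(box)')]
```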

4.
Strategy and organizational theorists have emphasized the importance of balancing exploitation and exploration for organizations’ sustainable success in organizational learning and adaptation. Although researchers agree that both activities matter and that a balance between them must be struck to sustain competitiveness, few have addressed the mechanisms or criteria by which an organization allocates resources to exploitation and exploration in an ambidextrous balance. The main purpose of this research is to argue how team creativity evolves from creativity revelation processes through knowledge creation by balancing exploitation and exploration. Specifically, this research presents a new logical mechanism for allocating a team’s limited resources to exploitation and exploration while keeping a balance between the two activities. We validate the proposed mechanism through longitudinal simulations.
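As a hedged illustration of resource allocation with a feedback-driven balance, the simulation below shifts a fixed budget between exploitation and exploration while keeping both above a floor. This is a generic heuristic written for illustration, not the paper's specific mechanism; all numeric values are invented.

```python
# Generic sketch of an ambidextrous allocation rule (not the paper's model).
import random

def simulate(periods=50, budget=10.0, seed=1):
    random.seed(seed)
    knowledge, share_explore = 0.0, 0.5
    for _ in range(periods):
        explore = budget * share_explore
        exploit = budget - explore
        # Exploration yields variable new knowledge; exploitation refines it.
        gain_explore = explore * random.uniform(0.0, 0.4)
        gain_exploit = exploit * 0.1 * (1 + knowledge / 100)
        knowledge += gain_explore + gain_exploit
        # Feedback: shift resources toward whichever activity paid off more,
        # but keep both above a floor so the balance stays ambidextrous.
        if gain_explore > gain_exploit:
            share_explore = min(0.8, share_explore + 0.05)
        else:
            share_explore = max(0.2, share_explore - 0.05)
    return knowledge, share_explore

print(simulate())  # cumulative knowledge and final exploration share
```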

5.
Storing and retrieving time-related information are important, or even critical, tasks in many areas of computer science (CS), and in particular for artificial intelligence (AI). The expressive power of temporal databases/query languages has been studied from different perspectives, but the kind of temporal information they are able to store and retrieve is not always conveniently addressed. Here we assess a number of temporal query languages with respect to the modelling of time intervals, interval relationships and states, which can be thought of as the building blocks for representing and reasoning about a large and important class of historic information. Surveying the facilities and issues particular to certain temporal query languages not only gives an idea of how useful they can be in particular contexts, but also gives interesting insight into how these issues are, in many cases, ultimately inherent to the database paradigm. While in the area of AI declarative languages are usually the preferred choice, other areas of CS rely heavily on the extended relational paradigm. This paper, then, is concerned with the representation of historic information in two well-known temporal query languages: Templog, in the context of temporal deductive databases, and TSQL2, in the context of temporal relational databases. We hope the results highlighted here will increase cross-fertilisation between different communities. This article can be related to recent publications drawing attention to the different approaches followed by the Databases and AI communities when using time-related concepts.
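The interval relationships mentioned above are classically formalized in Allen's interval algebra; a few of its thirteen relations are easy to state over intervals given as (start, end) pairs, as in this small sketch (the employment/project facts are invented):

```python
# A few of Allen's thirteen interval relations over (start, end) pairs.
def before(a, b):   return a[1] < b[0]          # a ends strictly before b starts
def meets(a, b):    return a[1] == b[0]         # a ends exactly where b starts
def overlaps(a, b): return a[0] < b[0] < a[1] < b[1]
def during(a, b):   return b[0] < a[0] and a[1] < b[1]
def equal(a, b):    return a == b

employment = (1995, 2003)   # hypothetical historic facts
project    = (1999, 2001)
print(during(project, employment))   # True: the project ran during employment
```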

6.
Increasingly complex designs have made rule compliance checking for the Mechanical, Electrical and Plumbing (MEP) system in the design phase difficult, and useful rule-checking systems could contribute to quicker project delivery; however, an efficient method for checking logical relationships is still lacking. This study proposes an MEP rule checking framework based on subgraph matching. First, the MEP components in the BIM model are extracted through the application programming interface (API), and a graph database is established with point-based and curve-based instances as nodes and relationships, respectively. Second, the graph database is simplified to increase the speed of graph matching. Third, the rules, which regulate how the MEP components should be connected, are represented as a knowledge graph. Finally, rule checking is achieved by comparing the graph database against the knowledge graph, and the critical path in a sub-system is detected by calculating betweenness centrality. A case study with a rail station is used to evaluate the approach, where the overall model checking and rule checking are conducted on the original and simplified graph databases sequentially. The results show that the proposed approach achieves rule compliance checking at high speed: 6 unconnected instances along with 155 problematic pipe fittings were found, and the critical path for the selected ACS system runs from the water-cooled chiller to the condenser water pump. The proposed framework can assist in the overall model checking and rule checking process, improving the efficiency of BIM engineers. This research demonstrates that converting a BIM model into a graph database can benefit conventional BIM analysis methods by incorporating advanced technologies (e.g., artificial intelligence) to enable a more flexible and accurate MEP design process.
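A hedged sketch of the graph idea using the networkx library follows: the model becomes a directed graph, a connectivity rule becomes a small pattern graph checked by subgraph matching, and critical components emerge from betweenness centrality. The component names are invented; a real pipeline would extract them from the BIM model via the authoring tool's API.

```python
# Hedged sketch of graph-based MEP rule checking with networkx.
import networkx as nx
from networkx.algorithms import isomorphism

# Model graph: point-based instances as nodes, curve-based ones as edges.
mep = nx.DiGraph()
mep.add_node("chiller", type="water_cooled_chiller")
mep.add_node("valve", type="valve")
mep.add_node("pump", type="condenser_water_pump")
mep.add_edges_from([("chiller", "valve"), ("valve", "pump")])

# Rule as a knowledge graph: a chiller must reach a pump through a valve.
rule = nx.DiGraph()
rule.add_node("r1", type="water_cooled_chiller")
rule.add_node("r2", type="valve")
rule.add_node("r3", type="condenser_water_pump")
rule.add_edges_from([("r1", "r2"), ("r2", "r3")])

matcher = isomorphism.DiGraphMatcher(
    mep, rule, node_match=isomorphism.categorical_node_match("type", None))
print(matcher.subgraph_is_isomorphic())        # True: the rule pattern is present

# Critical components fall out of betweenness centrality on the system graph.
print(nx.betweenness_centrality(mep))          # the valve scores highest here
```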

7.
We present the RFuzzy framework, a Prolog-based tool for representing and reasoning with fuzzy information. The advantages of our framework in comparison to previous tools along this line of research are its easy, user-friendly syntax, and its expressivity through the availability of default values and types. We describe the formal syntax, the operational semantics and the declarative semantics of RFuzzy (based on a lattice). A least model semantics, a least fixpoint semantics and an operational semantics are introduced and their equivalence is proven. We provide a real implementation that is free and available (it can be downloaded from http://babel.ls.fi.upm.es/software/rfuzzy/). Besides implementation details, we also discuss some actual applications using RFuzzy.
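To convey the two highlighted features, truth values over a lattice and default values, here is a conceptual sketch in Python rather than RFuzzy's actual Prolog syntax; the predicates and truth values are invented.

```python
# Conceptual sketch of fuzzy facts, a default truth value, and a rule
# evaluated on the [0,1] lattice with min/max (not RFuzzy's real syntax).
facts = {("good_restaurant", "il_tempietto"): 0.8,
         ("good_restaurant", "burger_hut"): 0.3}
DEFAULT = 0.5   # default truth value for individuals without an explicit fact

def truth(pred, ind):
    return facts.get((pred, ind), DEFAULT)

def t_and(*vs):  return min(vs)   # t-norm on the lattice
def t_or(*vs):   return max(vs)   # t-conorm

# Rule: nice_evening(X) <- good_restaurant(X) AND cheap(X)
def nice_evening(x):
    return t_and(truth("good_restaurant", x), truth("cheap", x))

print(nice_evening("il_tempietto"))  # min(0.8, default 0.5) = 0.5
```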

8.
Knowledge acquisition and knowledge representation are the fundamental building blocks of knowledge-based systems (KBSs). How to efficiently elicit knowledge from experts and transform this elicited knowledge into a machine-usable format is a significant, time-consuming problem for KBS developers. Object-orientation provides several solutions to persistent knowledge acquisition and knowledge representation problems, including transportability, knowledge reuse, and knowledge growth. An automated graphical knowledge acquisition tool based upon object-oriented principles is presented. Its object-oriented graphical interface provides a modeling platform that is easily understood by experts and knowledge engineers, and its object-oriented base provides a representation-independent methodology that can easily be mapped into any other object-oriented expert system or object-oriented intelligent tool.
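A minimal sketch of the object-oriented representation idea: elicited knowledge as frame-like classes whose attribute lookup follows the inheritance chain, which is one way reuse and growth can be supported. The domain concepts are invented, and this is not the paper's tool.

```python
# Frame-like concepts with inheritance, as a stand-in for OO knowledge
# representation. Domain objects are hypothetical.
class Concept:
    """A frame-like node such as a graphical acquisition step might produce."""
    def __init__(self, name, parent=None, **attributes):
        self.name, self.parent, self.attributes = name, parent, attributes

    def lookup(self, attr):
        # Inheritance supports knowledge reuse and growth: missing
        # attributes are resolved through the parent chain.
        if attr in self.attributes:
            return self.attributes[attr]
        return self.parent.lookup(attr) if self.parent else None

machine = Concept("Machine", max_temp_c=90)
pump = Concept("Pump", parent=machine, flow_lpm=120)
print(pump.lookup("max_temp_c"))   # 90, inherited from Machine
```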

9.
This paper deals with the implementation of logic queries that manipulate array structures. Both top-down and bottom-up implementations of the presented logic language, called Datalog_A, are considered. SLD-resolution is generalized to realize top-down Datalog_A query answering. Further, a fixpoint-based evaluation of Datalog_A queries is introduced, which forms the basis for an efficient bottom-up implementation of queries, obtained by generalizing rewriting techniques such as the magic set method to the case of Datalog_A programs.
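As a hedged illustration of bottom-up fixpoint evaluation (without the array structures or magic-set rewriting specific to Datalog_A), the sketch below computes the least fixpoint of the standard edge/path program:

```python
# Naive bottom-up fixpoint evaluation of:
#   path(X,Y) <- edge(X,Y).    path(X,Z) <- path(X,Y), edge(Y,Z).
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def step(path):
    """Apply both rules once to the current set of derived facts."""
    new = set(edges)
    new |= {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
    return new

path = set()
while True:                      # iterate to the least fixpoint
    nxt = step(path)
    if nxt == path:
        break
    path = nxt
print(sorted(path))              # transitive closure of the edge relation
```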

10.
The theory of parameterized computation and complexity is a recently developed subarea of theoretical computer science, aimed at practically solving a large number of computational problems that are theoretically intractable. The theory is based on the observation that many intractable computational problems in practice are associated with a parameter that varies within a small or moderate range; by taking advantage of such small parameters, many theoretically intractable problems can be solved effectively and practically. On the other hand, the theory of parameterized computation and complexity has also offered powerful techniques for deriving strong computational lower bounds for many computational problems, thus explaining why certain theoretically tractable problems cannot be solved effectively and practically. The theory has found wide applications in areas such as database systems, programming languages, networks, VLSI design, parallel and distributed computing, computational biology, and robotics. This survey gives an overview of the fundamentals, algorithms, techniques, and applications developed in the research of parameterized computation and complexity. We also report the most recent advances and exciting developments, and discuss further research directions in the area.
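The textbook illustration of the idea is Vertex Cover: NP-hard in general, yet a bounded search tree decides whether a cover of size at most k exists in roughly O(2^k · n) time, which is practical whenever the parameter k is small. A sketch:

```python
# Bounded search tree for parameterized Vertex Cover: does a cover of
# size <= k exist? Each edge forces a branch on its two endpoints.
def vertex_cover(edges, k):
    edges = [e for e in edges if e[0] != e[1]]   # drop self-loops
    if not edges:
        return set()                 # nothing left to cover
    if k == 0:
        return None                  # budget exhausted but edges remain
    u, v = edges[0]
    # Any cover must contain u or v; the parameter drops by 1 per branch.
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

g = [("a", "b"), ("b", "c"), ("c", "d")]
print(vertex_cover(g, 2))            # a cover of size 2, e.g. {'a', 'c'}
```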

11.
The SPELL (Spoken Electronic Language Learning) system is a self-access computer-assisted language learning (CALL) package that integrates speaker-independent continuous speech recognition technology with virtual worlds and embodied virtual agents to create an environment in which learners can converse in the target language within meaningful contextualized scenarios. In this paper we provide an overview of the functionality, architecture, and implementation of the SPELL system. We also describe four phases of usability evaluation conducted with the system and summarize the main results of these user assessments. Finally, we discuss the most significant lessons learned in the development and evaluation of the system. The paper focuses on the technological aspects of the system and its evaluation for usability and robustness, rather than its pedagogical methodology.

12.
Business process models have become an effective way of examining business practices to identify areas for improvement. While common information gathering approaches are generally efficacious, they can be quite time-consuming and risk developing inaccuracies when information is forgotten or incorrectly interpreted by analysts. In this study, the potential of a role-playing approach to process elicitation and specification has been examined. This method allows stakeholders to enter a virtual world and role-play actions similarly to how they would in reality. As actions are completed, a model is automatically developed, removing the need for stakeholders to learn and understand a modelling grammar. An empirical investigation comparing both the modelling outputs and participant behaviour of this virtual world role-play elicitor with an S-BPM process modelling tool found that while the modelling approaches of the two groups varied greatly, the virtual world elicitor may not only improve both the number of individual process task steps remembered and the correctness of task ordering, but also reduce the time required for stakeholders to model a process view.

13.
Effective information systems require the existence of explicit process models: a completely specified process design needs to be developed in order to enact a given business process, and this development is time-consuming and often subjective and incomplete. We propose a method that constructs the process model from process log data by determining the relations between process tasks. To predict these relations, we employ machine learning techniques to induce rule sets. These rule sets are induced from simulated process log data generated by varying process characteristics such as noise and log size. Tests reveal that the induced rule sets have a high predictive accuracy on new data. The effects of noise and of imbalance of execution priorities during the discovery of the relations between process tasks are also discussed. Knowing the causal, exclusive, and parallel relations, a process model expressed in the Petri net formalism can be built. We illustrate our approach with real-world data in a case study.
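A sketch of the relational core of such discovery: derive the directly-follows relation from a log and classify task pairs as causal, parallel or exclusive (these are the relations popularized by the alpha algorithm; the paper induces them with learned rule sets and handles noise, which this sketch omits). The traces are invented.

```python
# Derive directly-follows pairs from a toy event log, then classify
# task pairs into causal / parallel / exclusive relations.
from itertools import product

log = [["register", "check", "pay"],        # hypothetical traces
       ["register", "pay", "check"],
       ["register", "reject"]]

follows = {(t[i], t[i + 1]) for t in log for i in range(len(t) - 1)}
tasks = {task for trace in log for task in trace}

def relation(a, b):
    if (a, b) in follows and (b, a) not in follows: return "causal"
    if (a, b) in follows and (b, a) in follows:     return "parallel"
    return "exclusive"

for a, b in product(sorted(tasks), repeat=2):
    if a < b:
        print(a, b, relation(a, b))
# check/pay come out parallel; register->check causal; pay/reject exclusive.
```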

14.
The shift from a product-based to a knowledge-based economy has resulted in an increased demand for knowledge workers who are capable of higher-order thinking and reasoning to solve intricate problems in the workplace. This requires organizations to introduce knowledge management systems (KMS) for employees and has fueled predictions and speculations about what makes KMS effective. Unfortunately, there are very few empirical studies that examine this issue. Thus, this paper develops a validated instrument to measure user satisfaction as a surrogate measure of KMS effectiveness. A survey of 147 respondents, practicing mostly in four international semiconductor manufacturing companies in the Hsin-Chu Science-based Industrial Park in Taiwan, suggests that a 16-item instrument measuring four dimensions of user satisfaction with knowledge management systems (USKMS) is well-validated. The instrument and comprehensive model proposed in this paper would be valuable to researchers and practitioners interested in designing, implementing, and managing knowledge management systems.

15.
The article reflects on the experience of action research in the context of regional development, where there has been pressure to produce practical results. The epistemological status of action research is explored, in contrast to conventional social science research. The article concludes that an ongoing relationship with conventional social research is necessary.

16.
The explosive growth of social media has prompted many scholars to inquire into why people willingly share information with others. However, relatively little attention has been devoted to how people determine which information they share in the networked environment. In this study, a 2 (network density: dense vs. sparse) × 2 (knowledge: expert vs. novice) × 3 (information valence: negative vs. neutral vs. positive) online experiment was performed to examine how the three factors interact and cross over in shaping individuals’ perceptions of the value of information for themselves and for others in the network. Results show that individuals’ perceptions of information value are influenced not just by their level of knowledge, but also by how the network environment is structured. Implications of the findings are discussed.

17.
18.
The Visual Semantic Web (ViSWeb) is a new paradigm for enhancing the current Semantic Web technology. Based on Object-Process Methodology (OPM), which enables modeling of systems in a single graphic and textual model, ViSWeb provides for representation of knowledge over the Web in a unified way that caters to human perceptions while also being machine processable. The advantages of the ViSWeb approach include equivalent graphic-text knowledge representation, visual navigability, semantic sentence interpretation, specification of system dynamics, and complexity management. Arguing against the claim that humans and machines need to look at different knowledge representation formats, the principles and basics of various graphic and textual knowledge representations are presented and examined as candidates for ViSWeb foundation. Since OPM is shown to be most adequate for the task, ViSWeb is developed as an OPM-based layer on top of XML/RDF/OWL to express knowledge visually and in natural language. Both the graphic and the textual representations are strictly equivalent. Being intuitive yet formal, they are not only understandable to humans but are also amenable to computer processing. The ability to use such bimodal knowledge representation is potentially a major step forward in the evolution of the Semantic Web.
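To illustrate the machine-processable layer such an approach builds on, here is a hedged rdflib sketch expressing one OPM-style statement as RDF triples; the namespace and property names are invented stand-ins, not ViSWeb's actual vocabulary.

```python
# An OPM-style object-process pair as RDF triples, via rdflib.
# The opm# namespace and its terms are hypothetical stand-ins.
from rdflib import Graph, Namespace, Literal

OPM = Namespace("http://example.org/opm#")   # hypothetical namespace
g = Graph()
g.bind("opm", OPM)

# "The process Approving affects the object Order", once as a graph.
g.add((OPM.Approving, OPM.isA, OPM.Process))
g.add((OPM.Order, OPM.isA, OPM.Object))
g.add((OPM.Approving, OPM.affects, OPM.Order))
g.add((OPM.Order, OPM.hasState, Literal("approved")))

print(g.serialize(format="turtle"))   # and again as an equivalent text rendering
```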

19.
Increasingly, customer companies hire external information technology (IT) consultants, often on a special project basis. These consultants are employees of professional service firms, although they receive their assignments from the hiring companies, report to them, and are supervised by them. Using semistructured interviews with 12 IT consultants in Sweden, we examine the factors that influence their work motivation, including the effect of this dual allegiance to the service firm and to the customer company. The data indicate that the primary motivators are the variety in tasks and the opportunity to influence and/or manage an entire project; neither monetary incentives nor the consultancy firm's norms are strong motivators. A factor that affects work behavior and motivation is the subordinate identity that IT consultants must assume with their powerful clients. The article concludes with practical suggestions for managers who seek to understand what motivates employees who work at a distance, under external control.

20.
In this paper, we analyze the finite-horizon fault estimation problem for a class of time-varying nonlinear systems with imperfect measurement signals under the stochastic communication protocol (SCP). The imperfect measurements result from randomly occurring sensor nonlinearities obeying sensor-wise Bernoulli distributions. The Markov-chain-driven SCP is introduced to regulate signal transmission and alleviate communication congestion. The aim is to design a group of time-varying fault estimators such that the estimation error dynamics satisfies both the H∞ and the finite-time boundedness (FTB) performance requirements. First, sufficient conditions are established to guarantee the existence of satisfactory H∞ FTB fault estimators through stochastic analysis and matrix operations. Then, the gains of such fault estimators are explicitly parameterized by solving recursive linear matrix inequalities. Finally, the correctness of the devised fault estimation approach is demonstrated by a numerical example.
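For intuition, the sketch below simulates the two stochastic ingredients described above: a sensor-wise Bernoulli switch into a saturation-type nonlinearity, and a Markov chain deciding which single sensor may transmit at each instant (the SCP). All numeric values are invented; the paper's estimator design itself is not reproduced here.

```python
# Simulation sketch of Bernoulli-switched sensor nonlinearity plus a
# Markov-chain-driven SCP. Values are hypothetical illustrations.
import random
random.seed(0)

def measure(x, beta_prob=0.3):
    """Sensor output: nominal reading, or a saturated nonlinearity w.p. beta."""
    nonlinear = random.random() < beta_prob        # Bernoulli switch
    return max(-1.0, min(1.0, x)) if nonlinear else x

P = [[0.7, 0.3], [0.4, 0.6]]    # Markov transition matrix over 2 sensors

def scp_schedule(steps, state=0):
    """Markov-chain-driven SCP: which sensor transmits at each instant."""
    for _ in range(steps):
        yield state
        state = random.choices([0, 1], weights=P[state])[0]

x = [2.0, -0.5]                  # hypothetical true sensor inputs
for k, sensor in enumerate(scp_schedule(5)):
    print(k, sensor, measure(x[sensor]))   # only the scheduled sensor's
                                           # measurement reaches the estimator
```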
