Found 20 similar documents; search took 15 ms.
1.
Taskmaster is an interactive environment that employs a unique blend of graphic technologies and iconic images to support user task specification. In this environment, problem solving is based on the selection, specification, and composition of tools that correspond to natural sets of ordered operations. The Taskmaster environment is novel in that it
(1) provides an interactive, visual-based approach to user task specification;
(2) encourages and supports task specification and refinement processes from both top-down and bottom-up perspectives; and
(3) enables one to specify parallel tasks in a natural and convenient manner.
To “program” a given task within the Taskmaster environment, one decomposes it into an ordered set of conceptually simple, high-level operations, and then combines (composes) a corresponding network of software tools that implements these operations. Execution of the specified network provides a task solution. Major system components supporting user task specification include a network editor, a tools database and a network execution monitor.
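The composition idea above can be sketched as executing a directed acyclic graph of tools in dependency order. This is a minimal illustration, not Taskmaster's actual API; the tool names and wiring scheme are invented:

```python
from graphlib import TopologicalSorter

def run_network(tools, edges, inputs):
    """tools: name -> callable taking a dict of upstream results.
    edges: name -> set of upstream names it depends on.
    inputs: externally supplied values keyed by name."""
    order = TopologicalSorter(edges).static_order()
    results = dict(inputs)
    for name in order:
        if name in results:          # externally supplied input, nothing to run
            continue
        upstream = {d: results[d] for d in edges.get(name, ())}
        results[name] = tools[name](upstream)
    return results

# Example network: source -> double -> add_one
tools = {"double": lambda up: up["src"] * 2,
         "add_one": lambda up: up["double"] + 1}
edges = {"double": {"src"}, "add_one": {"double"}}
print(run_network(tools, edges, {"src": 5})["add_one"])  # 11
```

Executing the network is then a single topological pass; a real environment would add a graphical editor and an execution monitor on top of this core.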
2.
The Total Productivity Model (TPM), developed by Sumanth in 1979, has been applied to a number of situations involving manufacturing as well as service-oriented operations. The diversity of applications on the one hand, and the flexibility of microcomputers on the other, have highlighted the need for a microcomputer-based Decision Support System (DSS). This paper presents such a system for the Macintosh computer. The DSS is a highly interactive, menu-driven program that can display on screen the productivity of individual operational units as well as of the firm that comprises them. The system has several convenient features for assessing the “PRODUCTIVITY-ORIENTED PROFITABILITY” of any type of company or organization. The system logic and flow chart, data input and output formats, and sensitivity analysis are shown and discussed, and a balanced critique is offered to place the system in proper perspective with respect to the structural elements of the DSS.
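The core ratio behind a Sumanth-style total productivity computation can be sketched as total tangible output divided by the sum of tangible input factors. The formula shape and all figures below are illustrative assumptions, not data from the paper:

```python
def total_productivity(output, inputs):
    """Assumed TPM-style ratio: tangible output over summed input factors.
    inputs: dict of factor -> cost (human, material, capital, energy, ...)."""
    return output / sum(inputs.values())

# Two hypothetical operational units of one firm
unit_a = total_productivity(1200.0, {"human": 300, "material": 250,
                                     "capital": 200, "energy": 50})
unit_b = total_productivity(900.0, {"human": 250, "material": 200,
                                    "capital": 150, "energy": 100})
print(round(unit_a, 3), round(unit_b, 3))  # 1.5 1.286
```

A DSS of the kind described would compute such ratios per unit and aggregate them for the firm, then run sensitivity analysis by perturbing individual input factors.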
3.
In a previous paper, a hypergraph model for the satisfiability of Datalog formulas was proposed. Here, we extend that approach in order to deal with a class of constraint logic programming (CLP) formulas, that is, Datalog formulas in the presence of constraints. A CLP formula is represented by means of a weighted hypergraph and the problem of evaluating this formula is reduced to a sequence of shortest path computations on hypergraphs. To evaluate the performance of this approach, the bus drivers' scheduling problem is formulated as the problem of checking the satisfiability of a CLP formula and it is solved by means of the hypergraph-based algorithm embedded within a local search procedure. Preliminary experimental results are quite encouraging and suggest that the proposed approach may provide an efficient way to tackle hard real-life combinatorial problems. This research was partially supported by the Progetto Finalizzato Trasporti 2 of the Italian National Research Council, under Contract No. 91.02479.PF74.
5.
Answer set programming (ASP) is a knowledge representation and reasoning paradigm with high-level expressive logic-based formalism, and efficient solvers; it is applied to solve hard problems in various domains, such as systems biology, wire routing, and space shuttle control. In this paper, we present an application of ASP to housekeeping robotics. We show how the following problems are addressed using computational methods/tools of ASP: (1) embedding commonsense knowledge automatically extracted from the commonsense knowledge base ConceptNet, into high-level representation, and (2) embedding (continuous) geometric reasoning and temporal reasoning about durations of actions, into (discrete) high-level reasoning. We introduce a planning and monitoring algorithm for safe execution of plans, so that robots can recover from plan failures due to collision with movable objects whose presence and location are not known in advance or due to heavy objects that cannot be lifted alone. Some of the recoveries require collaboration of robots. We illustrate the applicability of ASP on several housekeeping robotics problems, and report on the computational efficiency in terms of CPU time and memory.
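The execute-monitor-replan idea can be illustrated with a toy loop: run the plan step by step and, when a step fails (e.g. an unexpected movable object blocks the robot), replan from the current point. This is a simplified stand-in for the paper's algorithm, with invented step names and a trivial "replanner":

```python
def execute_with_monitoring(plan, apply_step, replan, max_replans=3):
    """Execute plan steps; on failure, splice in a recovery plan and retry."""
    replans = 0
    i = 0
    while i < len(plan):
        if apply_step(plan[i]):
            i += 1                               # step succeeded, advance
        else:
            if replans == max_replans:
                return False                     # give up after repeated failures
            plan = plan[:i] + replan(plan[i:])   # recover from the failed step
            replans += 1
    return True

# "pick" fails once; the replanner prepends a clearing action before retrying.
failed = {"count": 0}
def apply_step(step):
    if step == "pick" and failed["count"] == 0:
        failed["count"] += 1
        return False
    return True

print(execute_with_monitoring(["goto", "pick", "place"], apply_step,
                              lambda rest: ["clear"] + rest))  # True
```

In the paper, replanning is done by an ASP solver over the robot's updated world model; here the recovery plan is hard-coded purely to show the control flow.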
6.
A long outstanding problem for abduction in logic programming has been how minimality might be defined. Without minimality, an abductive procedure is often required to generate exponentially many subsumed explanations for a given observation. In this paper, we propose a new definition of abduction in logic programming where the set of minimal explanations can be viewed as a succinct representation of the set of all explanations. We then propose an abductive procedure where the problem of generating explanations is formalized as rewriting with confluent and terminating rewrite systems. We show that these rewrite systems are sound and complete under the partial stable model semantics, and sound and complete under the answer set semantics when the underlying program is so-called odd-loop free. We discuss an application of abduction in logic programming to a problem in reasoning about actions and provide some experimental results.
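The notion of minimal explanations as a succinct representation can be illustrated directly: treat each explanation as a set of abducibles and keep only the subset-minimal ones, since every other explanation is a superset of some minimal one. A minimal sketch (the example explanations are invented):

```python
def minimal_explanations(explanations):
    """Keep only explanations with no proper subset among the candidates."""
    exps = [frozenset(e) for e in explanations]
    return {e for e in exps if not any(other < e for other in exps)}

# {"a","b"} and {"b","c"} are subsumed by {"a"} and {"c"} respectively
all_exps = [{"a"}, {"a", "b"}, {"c"}, {"b", "c"}]
print(minimal_explanations(all_exps))
```

The paper's contribution is generating this minimal set directly by rewriting, rather than enumerating all explanations and filtering afterwards as this naive sketch does.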
7.
The parallel preconditioned conjugate gradient method (CGM) is used in many applications of scientific computing and often has a critical impact on their performance and energy consumption. This article investigates the energy-aware execution of the CGM on multi-core CPUs and GPUs used in an adaptive FEM. Based on experiments, an application-specific execution time and energy model is developed. The model considers the execution speed of the CPU and the GPU, their electrical power, voltage and frequency scaling, the energy consumption of the memory, as well as the time and energy needed for transferring the data between main memory and GPU memory. The model makes it possible to predict how to distribute the data to the processing units for the most energy-efficient execution: the execution might deploy the CPU only, the GPU only, or both simultaneously using a dynamic and adaptive collaboration scheme. The dynamic collaboration enables execution that minimises the execution time. By measuring execution times for every FEM iteration, the data distribution is adapted automatically to changing properties, e.g. the data sizes.
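The model's core decision can be sketched as follows: given measured throughputs, power draws, and transfer costs, predict time and energy for each candidate CPU/GPU data split and pick the split minimising energy. All numbers and the simple linear cost model below are illustrative assumptions, not the paper's calibrated model:

```python
def predict(split, n, cpu_rate, gpu_rate, p_cpu, p_gpu, t_xfer_per_item):
    """Predict (time, energy) when a fraction `split` of n items goes to the GPU."""
    t_cpu = (1 - split) * n / cpu_rate
    t_gpu = split * n / gpu_rate + split * n * t_xfer_per_item  # compute + transfer
    time = max(t_cpu, t_gpu)              # CPU and GPU run concurrently
    energy = t_cpu * p_cpu + t_gpu * p_gpu
    return time, energy

def best_split(n, **params):
    """Scan candidate splits and return the most energy-efficient one."""
    candidates = [i / 100 for i in range(101)]
    return min(candidates, key=lambda frac: predict(frac, n, **params)[1])

s = best_split(10_000, cpu_rate=1e4, gpu_rate=5e4, p_cpu=65.0, p_gpu=150.0,
               t_xfer_per_item=1e-5)
print(s)  # 1.0: with these made-up numbers the GPU is cheaper per item
```

Swapping the objective from energy to time in `best_split` gives the time-minimising dynamic collaboration the abstract mentions.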
9.
A software development system based upon integrated skeleton technology (ASSIST) is a proposal of a new programming environment oriented to the development of parallel and distributed high-performance applications according to a unified approach. The main goals are: high-level programmability and software productivity for complex multidisciplinary applications, including data-intensive and interactive software; performance portability across different platforms, in particular large-scale platforms and grids; effective reuse of parallel software; efficient evolution of applications through versions that scale according to the underlying technologies. The purpose of this paper is to show the principles of the proposed approach in terms of the programming model (successive papers will deal with the environment implementation and with performance evaluation). The features and the characteristics of the ASSIST programming model are described according to an operational semantics style and using examples to drive the presentation, to show the expressive power and to discuss the research issues. According to our previous experience in structured parallel programming, in ASSIST we wish to overcome some limitations of the classical skeletons approach to improve generality and flexibility, expressive power and efficiency for irregular, dynamic and interactive applications, as well as for complex combinations of task and data parallelism. A new paradigm, called “parallel module” (parmod), is defined which, in addition to expressing the semantics of several skeletons as particular cases, is able to express more general parallel and distributed program structures, including both data-flow and nondeterministic reactive computations. ASSIST allows the programmer to design the applications in the form of generic graphs of parallel components. 
Another distinguishing feature is that ASSIST modules are able to utilize external objects, including shared data structures and abstract objects (e.g. CORBA), with standard interfacing mechanisms. In turn, an ASSIST application can be reused and exported as a component for other applications, possibly expressed in different formalisms.
10.
This contribution presents an integrated constraint programming (CP) model to tackle the problems of tool allocation, machine loading, part routing, and scheduling in a flexible manufacturing system (FMS). The formulation, which is able to take into account a variety of constraints found in industrial environments, as well as several objective functions, has been successfully applied to the solution of various case studies of different sizes. Though some of the problem instances are larger than the examples reported to date in the literature, very good-quality solutions were reached in quite reasonable CPU times. This good computational performance is due to two essential characteristics of the proposed model. The most significant one is the use of two sets of two-index variables to capture manufacturing activities instead of a single set of four-index variables. Thus, dimensionality is greatly reduced. The other relevant feature is the fact that the model relies on an indirect representation of tool needs by means of tool types, thus avoiding the consideration of tool copies.
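The dimensionality argument is easy to make concrete: two sets of two-index variables grow additively in the index sizes, while one four-index set grows multiplicatively. The sizes below are invented purely for illustration:

```python
# Back-of-the-envelope variable counts for a hypothetical FMS instance
parts, machines, tool_types, positions = 20, 8, 15, 30

four_index = parts * machines * tool_types * positions   # one 4-index set
two_index = parts * machines + tool_types * positions    # two 2-index sets

print(four_index, two_index)  # 72000 610
```

A reduction of this magnitude in the number of decision variables is the kind of effect that lets a CP solver handle instances larger than those previously reported.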
11.
Cooperation is considered an essential attribute of intelligent multi-machine systems. It enhances their flexibility and reliability. Cooperation Requirement Planning (CRP) is the process of generating a consistent and coordinated global execution plan for a set of tasks to be completed by a multi-machine system, based on the task cooperation requirements and interactions. CRP is divided into two steps: CRP-I matches the task requirements to machine and system capabilities to generate cooperation requirements; it also generates task precedence, machine operation, and system resource constraints. CRP-II uses the cooperation requirements and the various constraints to generate a task assignment and a coordinated, consistent global execution plan. The global execution plan specifies an ordered sequence of actions and the machine sets that execute them, such that the assigned tasks are successfully completed, all the constraints are resolved, and the desired performance measure is optimized. In this paper, we describe the CRP-II methodology based on the concepts of planning for multiple goals with interactions. Each task is considered to be a goal, and the CRP-I process is viewed as generating alternate plans and associated costs to accomplish each goal. Five different interactions are specified between the various plans: action combination, precedence relation, resource sharing, cooperative action, and independent action. The CRP-II process is viewed as selecting a plan to satisfy each goal and resolving the interactions between them. A planning strategy is proposed which performs plan selection and interaction resolution simultaneously, using a best-first search process to generate the optimal global plan.
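The simultaneous plan-selection-and-interaction-resolution idea can be sketched as a best-first search over partial plan choices, where a pairwise interaction table adjusts the combined cost as plans are added. This is a simplified, invented example (and it assumes nonnegative interaction costs, so the first complete state popped is optimal); the real CRP-II handles five interaction types:

```python
import heapq

def best_global_plan(plan_costs, interaction):
    """plan_costs: goal -> {plan: cost}; interaction: (planA, planB) -> extra cost."""
    goals = list(plan_costs)
    heap = [(0.0, ())]                           # (cost so far, chosen plans)
    while heap:
        cost, chosen = heapq.heappop(heap)
        if len(chosen) == len(goals):
            return cost, chosen                  # first complete state is optimal
        goal = goals[len(chosen)]
        for plan, c in plan_costs[goal].items():
            extra = sum(interaction.get((p, plan), 0.0) for p in chosen)
            heapq.heappush(heap, (cost + c + extra, chosen + (plan,)))

costs = {"g1": {"p1": 3.0, "p2": 2.0}, "g2": {"q1": 4.0, "q2": 5.0}}
inter = {("p2", "q1"): 2.0}                      # e.g. a resource-sharing penalty
print(best_global_plan(costs, inter))
```

The cheap plan p2 for g1 loses here because its interaction penalty with q1 makes the combination p1+q1 globally better, which is exactly why selection and interaction resolution must be done together.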
13.
This paper examines the use of speculations, a form of distributed transactions, for improving the reliability and fault tolerance of distributed systems. A speculation is defined as a computation that is based on an assumption that is not validated before the computation is started. If the assumption is later found to be false, the computation is aborted and the state of the program is rolled back; if the assumption is found to be true, the results of the computation are committed. The primary difference between a speculation and a transaction is that a speculation is not isolated: for example, a speculative computation may send and receive messages, and it may modify shared objects. As a result, processes that share those objects may be absorbed into a speculation. We present a syntax, and an operational semantics in two forms. The first is a speculative model, which takes full advantage of the speculative features. The second is a nonspeculative, nondeterministic model, where aborts are treated as failures. We prove the equivalence of the two models, demonstrating that speculative execution is equivalent to failure-free computation.
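The commit/rollback mechanics can be shown with a toy single-process sketch: snapshot the state, run the computation before the assumption is validated, then commit or roll back. This deliberately omits the distributed aspects (message passing, absorbing other processes) that distinguish real speculations from transactions:

```python
import copy

def speculate(state, assumption_holds, computation):
    """Run computation on an unvalidated assumption; commit or roll back."""
    snapshot = copy.deepcopy(state)    # checkpoint taken before speculating
    computation(state)                 # executes before validation completes
    if assumption_holds():
        return state                   # assumption true: commit results
    return snapshot                    # assumption false: abort and roll back

state = {"balance": 100}
result = speculate(state, lambda: False,
                   lambda s: s.update(balance=s["balance"] - 40))
print(result["balance"])  # 100: the speculative debit was rolled back
```

In the paper's nonspeculative model, the aborted branch would instead be treated as a failure, which is the basis of the equivalence result.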
15.
Encountering a nonlinear second-order differential equation involving the spatial distributions of ϵ_r and μ_r while computing the fields inside inhomogeneous media persuaded us to find known distributions that admit exact solutions. Similarities between random distributions of electric properties and known functions led us to estimate them using three mathematical tools: artificial neural networks (ANNs), support vector machines (SVMs), and fuzzy logic (FL). Assigning known functions, after fitting with minimum error, to arbitrary inputs using the results of the machine-learning networks yields an approximate solution for the field inside the materials, subject to boundary conditions. A comparative study of the methods is presented, considering the complexity of the structures as well as the accuracy and computation time for testing unforeseen inputs, covering classification, prediction, and regression. We examined the extracted pairs of ϵ_r and μ_r with the ANN and SVM networks and with FL, and obtained satisfactory outputs with detailed results. An application of the presented method to zero-reflection problems is given as an example.
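The matching step, assigning the known analytic distribution that fits sampled material properties with minimum error, can be shown with a plain mean-squared-error stand-in for the paper's ML tools. The candidate functions and sample values below are invented:

```python
import math

# Hypothetical candidate "known distributions" with exact field solutions
candidates = {
    "linear":      lambda x: 1 + 2 * x,
    "exponential": lambda x: math.exp(x),
    "quadratic":   lambda x: 1 + x * x,
}

def best_fit(xs, ys):
    """Pick the candidate distribution with minimum mean-squared error."""
    def mse(f):
        return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return min(candidates, key=lambda name: mse(candidates[name]))

xs = [0.0, 0.5, 1.0]
ys = [1.0, 2.0, 3.0]          # sampled eps_r values along the medium
print(best_fit(xs, ys))       # linear
```

Once a known distribution is assigned, its exact field solution can be used as the approximate solution for the sampled medium, subject to the boundary conditions.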
16.
Building on an Internet-based virtual computing environment, this paper studies development methods for new Internet applications that require autonomy and high concurrency. It introduces an entity-modeling mechanism into an existing process-based, concurrency-oriented programming model, extending it into ConEntity, a programming model that combines processes with autonomous concurrent entities, and gives its formal definition and description. The ConEntity model is expressive, concurrent, and scalable, and supports efficient, transparent access to the resources of the virtual computing environment. It is implemented as the Erlang language facility UniAgent by extending Erlang/OTP. The model provides model- and language-level support for rapidly building new Internet applications with autonomous, highly concurrent entities on top of virtual computing environments.
17.
Medical residency is three to seven years of challenging graduate medical training that places a heavy mental and physiological burden on the residents. Like other surgical branches, anesthesia and reanimation departments provide continuous 24-hour service, and the residents are the main providers of this service. The residents are assigned on-call shifts during their training, in addition to working the regular day shifts. These schedules must address several considerations, such as the preferences of the residents and the coverage requirements of two different locations: the intensive care unit (ICU) and the surgery room (SR). In this study we develop a goal programming (GP) model for scheduling the shifts of the residents in the Anesthesia and Reanimation Department of Bezmialem Vakif University Medical School (BUMS). Rules that must be strictly met, such as the number of on-duty shifts or the prevention of block shifts, are formulated as hard constraints. Preferences of the residents, such as increasing the number of weekends without shifts and assigning duties on the same night to the same social groups, are formulated as soft constraints. The penalties for deviation from the soft constraints are determined by the analytic hierarchy process (AHP). We are able to solve problems of realistic size to optimality in a few seconds. The proposed formulation, which the department currently uses, has yielded substantial improvements: much better schedules are created with less effort.
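The hard/soft constraint split can be illustrated with a toy evaluation function: hard constraints must hold outright, while soft constraints contribute weighted deviation penalties. The weights and schedule below are invented (the paper derives its weights with AHP):

```python
def schedule_penalty(schedule, weights):
    """schedule: resident -> sorted on-call day indices (0=Mon ... 6=Sun)."""
    penalty = 0.0
    for resident, days in schedule.items():
        ordered = sorted(days)
        # hard constraint: no back-to-back (block) shifts
        assert all(b - a > 1 for a, b in zip(ordered, ordered[1:])), \
            f"block shift for {resident}"
        # soft constraint: penalise weekend on-call shifts
        weekend_shifts = sum(1 for d in days if d >= 5)
        penalty += weights["weekend"] * weekend_shifts
    return penalty

sched = {"res1": [0, 3, 5], "res2": [1, 4]}
print(schedule_penalty(sched, {"weekend": 2.0}))  # 2.0: one weekend shift
```

A GP solver minimises exactly this kind of weighted deviation total over all feasible schedules, rather than scoring one fixed schedule as this sketch does.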
18.
Regular performance assessment is an integral part of (high-) risk industries. Past research shows, however, that in many fields, inter-rater reliabilities tend to be moderate to low. This study was designed to investigate the variability of performance assessment in a naturalistic setting in aviation. A modified think-aloud protocol was used as the research design to investigate the reasoning that pairs of pilots use to assess the performance of an airline captain in a high-risk situation. Standard protocol analysis and interaction analysis methods were employed in the analysis of transcribed verbal protocols. The analyses confirm high variability in performance assessment and reveal the good, albeit fuzzy, justifications that assessor pairs use to ground their assessments. A fuzzy logic model exhibits a good approximation between predicted and actual ratings. Implications for the practice of performance assessment are provided.
19.
The fast development of China's economy has caused the rapid expansion of its domestic market. Since many economists do not hold optimistic views regarding the bubble economy of China, it is necessary for Taiwanese businessmen to understand in depth the business operational performance and financial situation of enterprises in China, so as to reduce the risk of a potential investment. In this article, data from the China Economic Research Database (CCER), a financial database of financial corporations, are collected and analyzed to investigate the business operation and management performance and the financial characteristics of enterprises in China. Grey relational analysis is applied first to investigate the business operational performance of 600 enterprises in China. Afterwards, a recent clustering technique is used to divide the enterprises into two groups based on their financial characteristics. Finally, three models, namely genetic programming, a back-propagation neural network, and logistic regression, are adopted to construct an Enterprise Operational Performance model and an Enterprise Finance Characteristic model, respectively. Based on the results, it can be concluded that genetic programming yielded the best classification and forecasting performance, compared to the other two techniques.
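Grey relational analysis ranks alternatives by the closeness of their indicator series to a reference (ideal) series. A minimal sketch using Deng's grey relational coefficient with the customary distinguishing coefficient rho = 0.5; the firm data are invented for illustration:

```python
def grades(reference, alternatives, rho=0.5):
    """Grey relational grade of each alternative against the reference series."""
    deltas = {name: [abs(r - v) for r, v in zip(reference, vals)]
              for name, vals in alternatives.items()}
    pooled = [d for ds in deltas.values() for d in ds]   # global min/max deltas
    d_min, d_max = min(pooled), max(pooled)
    return {name: sum((d_min + rho * d_max) / (d + rho * d_max) for d in ds) / len(ds)
            for name, ds in deltas.items()}

ref = [1.0, 1.0, 1.0]                       # normalised ideal performance
alts = {"firm_a": [0.9, 0.8, 1.0], "firm_b": [0.5, 0.6, 0.7]}
g = grades(ref, alts)
print(max(g, key=g.get))  # firm_a: closest to the ideal series
```

With 600 enterprises, the resulting grades give an operational-performance ranking, which the article then feeds into clustering and the three classification models.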