Found 20 similar documents. Search time: 46 ms.
1.
This paper presents a knowledge-based system, ‘EFDEX’, the Engineering Functional Design Expert, which was developed using
an expert system shell, CLIPS 6.1, to perform intelligent functional design of engineering systems. On the basis of a flexible,
causal and hierarchical functional modeling framework, we propose a knowledge-based functional reasoning methodology. By using
this intelligent functional reasoning strategy, physical behavior can be reasoned out from a desired function or desired behavior,
and interconnection of these behaviors is possible when there is compatibility between the functional output of one and the
corresponding functional requirement (e.g. driving input) of the next one. In addition, a complicated, desired function which
cannot be matched with the functional output of any behavior after searching the object-oriented behavior base, will be automatically
decomposed into less complex sub-functions by means of relevant function decomposition rules. An intelligent system for the
functional design of an automatic assembly system provides an application of this intelligent design environment, and a demonstration
of its methodology. In this paper, a knowledge-based functional representation scheme which integrates two popular AI representation
techniques (object-oriented representation and rule-based representation) is also proposed as a prelude to a knowledge-based
functional design system.
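The match-or-decompose loop described above can be sketched in a few lines. The function names, behavior-base entries and decomposition rules below are invented for illustration and are not EFDEX's actual knowledge base (which is rule-based, in CLIPS):

```python
# Sketch of the EFDEX-style reasoning cycle: match a desired function against
# the behavior base; if no behavior provides it, decompose it into
# sub-functions via decomposition rules and recurse. All names hypothetical.

BEHAVIOR_BASE = {              # functional output -> behavior producing it
    "grip part": "gripper-close",
    "move arm": "servo-drive",
    "sense position": "encoder-read",
}

DECOMPOSITION_RULES = {        # complex function -> simpler sub-functions
    "pick up part": ["move arm", "grip part"],
    "place part": ["move arm", "release part"],
}

def plan(function):
    """Return a list of behaviors realizing `function`, or None if the
    desired function cannot be realized from the behavior base."""
    if function in BEHAVIOR_BASE:
        return [BEHAVIOR_BASE[function]]
    if function in DECOMPOSITION_RULES:
        behaviors = []
        for sub in DECOMPOSITION_RULES[function]:
            sub_plan = plan(sub)
            if sub_plan is None:       # a sub-function is unrealizable
                return None
            behaviors.extend(sub_plan)
        return behaviors
    return None                        # no behavior and no rule: design failure

assert plan("pick up part") == ["servo-drive", "gripper-close"]
assert plan("place part") is None      # "release part" is not realizable here
```

The chaining of behaviors by output/input compatibility would add a check between consecutive behaviors; it is omitted here for brevity.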
2.
3.
Miklós Erdélyi-Szabó László Kálmán Agi Kurucz 《Journal of Logic, Language and Information》2008,17(1):1-17
The paper sets out to offer an alternative to the function/argument approach to the most essential aspects of natural language
meanings. That is, we question the assumption that semantic completeness (of, e.g., propositions) or incompleteness (of, e.g.,
predicates) exactly replicate the corresponding grammatical concepts (of, e.g., sentences and verbs, respectively). We argue
that even if one gives up this assumption, it is still possible to keep the compositionality of the semantic interpretation
of simple predicate/argument structures. In our opinion, compositionality presupposes that we are able to compare arbitrary
meanings in terms of information content. This is why our proposal relies on an ‘intrinsically’ type-free algebraic semantic
theory. The basic entities in our models are neither individuals, nor eventualities, nor their properties, but ‘pieces of
evidence’ for believing in the ‘truth’ or ‘existence’ or ‘identity’ of any kind of phenomenon. Our formal language contains
a single binary non-associative constructor used for creating structured complex terms representing arbitrary phenomena. We
give a finite Hilbert-style axiomatisation and a decision algorithm for the entailment problem of the suggested system.
4.
The problem of ‘information content’ of an information system appears elusive. In the field of databases, the information
content of a database has been taken as the instance of a database. We argue that this view misses two fundamental points.
One is a convincing conception of the phenomenon concerning information in databases, especially a properly defined notion
of ‘information content’. The other is a framework for reasoning about information content. In this paper, we suggest a modification
of the well-known definition of ‘information content’ given by Dretske (Knowledge and the Flow of Information, 1981). We then
define what we call the ‘information content inclusion’ relation (IIR for short) between two random events. We present a set
of inference rules for reasoning about information content, which we call the IIR Rules. Then we explore how these ideas and
the rules may be used in a database setting to look at databases and to derive otherwise hidden information by deriving new
relations from a given set of IIR. A prototype is presented, which shows how the idea of IIR-Reasoning might be exploited
in a database setting including the relationship between real world events and database values.
Malcolm Crowe
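As a rough illustration of the starting point (Dretske's original definition, not the paper's modification of it, nor its IIR rule set), one can read "A carries the information that B" as P(B | A) = 1 over a finite sample space. The events and probabilities below are invented:

```python
from fractions import Fraction

def cond_prob(b, a, prob):
    """P(B | A) over a finite sample space, with exact arithmetic."""
    pa = sum(prob[w] for w in a)
    pab = sum(prob[w] for w in a & b)
    return pab / pa

def iir(a, b, prob):
    """Toy information-content inclusion: A carries the information that B
    iff P(B | A) = 1 (Dretske-style; hypothetical simplification)."""
    return cond_prob(b, a, prob) == 1

space = {"w1", "w2", "w3", "w4"}
prob = {w: Fraction(1, 4) for w in space}       # uniform, for illustration
A, B, C = {"w1"}, {"w1", "w2"}, {"w1", "w2", "w3"}

# A transitivity-style inference: IIR(A, B) and IIR(B, C) entail IIR(A, C).
assert iir(A, B, prob) and iir(B, C, prob) and iir(A, C, prob)
assert not iir(B, A, prob)                      # inclusion is directional
```

Inference rules such as the transitivity shown in the assertion are the kind of reasoning the paper's IIR Rules systematize.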
5.
A WSDL-based type system for asynchronous WS-BPEL processes. Total citations: 1 (self-citations: 0, citations by others: 1)
We tackle the problem of providing rigorous formal foundations to current software engineering technologies for web services,
and especially to WSDL and WS-BPEL, two of the most used XML-based standard languages for web services. We focus on a simplified fragment of WS-BPEL sufficiently expressive to model asynchronous interactions among web services in a network context. We present this language
as a process calculus-like formalism, which we call ws-calculus, for which we define an operational semantics and a type system. The semantics provides a precise operational model of programs,
while the type system forces a clean programming discipline for integrating collaborating services. We prove that the operational
semantics of ws-calculus and the type system are ‘sound’ and apply our approach to some illustrative examples. We expect that our formal development
can be used to make the relationship between WS-BPEL programs and the associated WSDL documents precise and to support verification of their conformance.
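A loose analogy for what such a type system enforces (this is not ws-calculus itself; the operation names and message types below are invented): an asynchronous invocation is accepted only when its payload matches the WSDL-declared message type of the operation.

```python
# Hypothetical WSDL-style port type: each one-way operation declares the
# field names and types of its input message.
INTERFACE = {
    "submitOrder": {"orderId": int, "amount": float},
    "cancelOrder": {"orderId": int},
}

def well_typed(operation, payload):
    """Accept an asynchronous invocation only if its payload matches the
    declared message type of the operation."""
    sig = INTERFACE.get(operation)
    if sig is None:
        return False                     # unknown operation
    return (payload.keys() == sig.keys()
            and all(isinstance(payload[f], t) for f, t in sig.items()))

assert well_typed("submitOrder", {"orderId": 7, "amount": 19.5})
assert not well_typed("submitOrder", {"orderId": "7", "amount": 19.5})
assert not well_typed("shipOrder", {"orderId": 7})
```

The actual type system works on processes rather than single messages, checking that collaborating services compose into a clean interaction discipline.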
6.
In this paper, we demonstrate how craft practice in contemporary jewellery opens up conceptions of ‘digital jewellery’ to
possibilities beyond merely embedding pre-existing behaviours of digital systems in objects, which follow shallow interpretations
of jewellery. We argue that a design approach that understands jewellery only in terms of location on the body is likely to
lead to a world of ‘gadgets’, rather than anything that deserves the moniker ‘jewellery’. In contrast, by adopting a craft
approach, we demonstrate that the space of digital jewellery can include objects where the digital functionality is integrated
as one facet of an object that can be personally meaningful for the holder or wearer.
7.
8.
In this article we describe two core ontologies of law that specify knowledge that is common to all domains of law. The first
one, FOLaw, describes and explains dependencies between types of knowledge in legal reasoning; the second one, the LRI-Core ontology, captures the main concepts in legal information processing. Although FOLaw has shown to be of high practical value in various applied European ICT projects, its reuse is rather limited, as it is concerned with the structure of legal reasoning rather than with legal knowledge itself: like many other “legal core ontologies”, FOLaw is therefore an epistemological framework rather than an ontology. Therefore, we also developed LRI-Core. As we argue here that legal knowledge is based to a large extent on common-sense knowledge, LRI-Core is particularly inspired by research on abstract common-sense concepts. The main categories of LRI-Core are physical, mental and abstract concepts. Roles cover in particular social worlds. Another special category is occurrences: terms that denote events and situations. We illustrate the use of LRI-Core with an ontology for Dutch criminal law, developed in the e-Court European project.
9.
Marian Counihan 《Journal of Logic, Language and Information》2008,17(4):391-415
In this paper we explore differences in use of the so-called ‘logical’ elements of language such as quantifiers and conditionals,
and use this to explain differences in performance in reasoning tasks across subject groups with different educational backgrounds.
It is argued that quantified sentences are difficult natural bases for reasoning, and hence more prone to elicit variation
in reasoning behaviour, because they are chiefly used with a pre-determined domain in everyday speech. By contrast, it is
argued that conditional sentences form natural premises because of the function they serve in everyday speech. Implications
of this for the role of logic in modelling human reasoning behaviour are briefly considered.
10.
11.
Answer set programming (ASP) emerged in the late 1990s as a new logic programming paradigm that has been successfully applied
in various application domains. Also motivated by the availability of efficient solvers for propositional satisfiability (SAT),
various reductions from logic programs to SAT were introduced. All these reductions, however, are limited to a subclass of logic programs, introduce new variables, or may produce exponentially larger propositional formulas. In this paper, we present
a SAT-based procedure, called ASPSAT, that (1) deals with any (nondisjunctive) logic program, (2) works on a propositional
formula without additional variables (except for those possibly introduced by the clause form transformation), and (3) is
guaranteed to work in polynomial space. From a theoretical perspective, we prove soundness and completeness of ASPSAT. From
a practical perspective, we have (1) implemented ASPSAT in Cmodels, (2) extended the basic procedures in order to incorporate the most popular SAT reasoning strategies, and (3) conducted an
extensive comparative analysis involving other state-of-the-art answer set solvers. The experimental analysis shows that our
solver is competitive with the other solvers we considered and that the reasoning strategies that work best on ‘small but
hard’ problems are ineffective on ‘big but easy’ problems and vice versa.
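The stable-model semantics that ASPSAT computes can be illustrated with a naive, exponential checker. This is not ASPSAT's SAT-based procedure, only the semantics it targets, shown for a tiny nondisjunctive program:

```python
from itertools import combinations

def reduct(program, model):
    """Gelfond–Lifschitz reduct: drop rules whose negative body meets the
    candidate model; strip negative literals from the remaining rules."""
    return [(h, pos) for h, pos, neg in program if not (neg & model)]

def least_model(positive_program):
    """Least model of a negation-free program by fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_program:
            if pos <= m and h not in m:
                m.add(h)
                changed = True
    return m

def answer_sets(program, atoms):
    """Enumerate all answer sets by testing every candidate interpretation:
    a set is stable iff it equals the least model of its own reduct."""
    result = []
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            cand = set(cand)
            if least_model(reduct(program, cand)) == cand:
                result.append(cand)
    return result

# p :- not q.   q :- not p.   Rules are (head, positive_body, negative_body).
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
assert answer_sets(prog, {"p", "q"}) == [{"p"}, {"q"}]
```

ASPSAT avoids exactly this exponential enumeration by testing candidate models of the program's propositional translation with a SAT solver, in polynomial space and without extra variables.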
12.
Euripidis N. Loukis 《Artificial Intelligence and Law》2007,15(1):19-48
This paper concerns the development and use of ontologies for electronically supporting and structuring the highest-level
function of government: the design, implementation and evaluation of public policies for the big and complex problems that
modern societies face. This critical government function usually necessitates extensive interaction and collaboration among
many heterogeneous government organizations (G2G collaboration) with different backgrounds, mentalities, values, interests
and expectations, so it can greatly benefit from the use of ontologies. In this direction, an ontology of public policy making, implementation and evaluation is first described, which has been developed as part of the ICTE-PAN project of the
Information Society Technologies (IST) Programme of the European Commission, based on sound theoretical foundations mainly
from the public policy analysis domain and contributions of experts from the public administrations of four European Union
countries (Denmark, Germany, Greece and Italy). It is a ‘horizontal’ ontology that can be used for electronically supporting
and structuring the whole lifecycle of a public policy in any vertical (thematic) area of government activity; it can also
be combined with ‘vertical’ ontologies of the specific vertical (thematic) area of government activity we are dealing with.
The paper also describes the use of this ontology for electronically supporting and structuring collaborative public
policy making, implementation and evaluation through ‘structured electronic forums’, ‘extended workflows’, ‘public policy
stages with specific sub-ontologies’, etc., and also for the semantic annotation, organization, indexing and integration of
the contributions of the participants of these forums, which enable the development of advanced semantic web capabilities
in this area.
13.
Philip H. P. Nguyen Ken Kaneiwa Dan R. Corbett Minh-Quang Nguyen 《Artificial Intelligence and Law》2009,17(4):291-320
This paper presents an enhanced ontology formalization, combining previous work in Conceptual Structure Theory and Order-Sorted
Logic. Most existing ontology formalisms place greater importance on concept types, but in this paper we focus on relation
types, which are in essence predicates on concept types. We formalize the notion of ‘predicate of predicates’ as meta-relation
type and introduce the new hierarchy of meta-relation types as part of the ontology definition. The new notion of closure
of a relation or meta-relation type is presented as a means to complete that relation or meta-relation type by transferring
extra arguments and properties from other related types. The end result is an expanded ontology, called the closure of the
original ontology, on which automated inference could be more easily performed. Our proposal could be viewed as a novel and
improved ontology formalization within Conceptual Structure Theory and a contribution to knowledge representation and formal
reasoning (e.g., to build a query-answering system for legal knowledge).
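The layering the abstract describes, concept types, relation types (predicates on concept types) and meta-relation types (predicates on relation types), each ordered by subsumption, can be sketched minimally. The hierarchy entries below are invented, not taken from the paper's legal ontology:

```python
# Toy ordered hierarchy in the spirit of Conceptual Structure Theory /
# Order-Sorted Logic. Each type points to its more general parent type.
PARENT = {
    # concept types
    "Person": "Entity", "Judge": "Person", "Document": "Entity",
    # relation types (predicates on concept types)
    "agentOf": "Relation", "authorOf": "agentOf",
    # meta-relation types (predicates on relation types)
    "legalRole": "MetaRelation",
}

def subsumes(general, specific):
    """True iff `general` is `specific` or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = PARENT.get(specific)
    return False

assert subsumes("agentOf", "authorOf")   # relation-type subsumption
assert subsumes("Entity", "Judge")       # concept-type subsumption
assert not subsumes("authorOf", "agentOf")
```

The paper's closure operation would additionally transfer arguments and properties between related types; that expansion is beyond this sketch.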
14.
Multimodal identification and tracking in smart environments. Total citations: 1 (self-citations: 0, citations by others: 1)
We present a model for unconstrained and unobtrusive identification and tracking of people in smart environments and answering
queries about their whereabouts. Our model supports biometric recognition based upon multiple modalities such as face, gait,
and voice in a uniform manner. The key technical idea underlying our approach is to abstract a smart environment by a state transition system in which each state records a set of individuals who are present in various zones of the environment. Since biometric recognition
is inexact, state information is inherently probabilistic in nature. An event abstracts a biometric recognition step, and
the transition function abstracts the reasoning necessary to effect state transitions. In this manner, we are able to integrate
different biometric modalities uniformly and also different criteria for state transitions. Fusion of biometric modalities
is also supported by our model. We define performance metrics for a smart environment in terms of the concepts of ‘precision’
and ‘recall’. We have developed a prototype implementation of our proposed concepts and provide experimental results in this
paper. Our conclusion is that the state transition model is an effective abstraction of a smart environment and serves as
a good basis for developing practical systems.
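The ‘precision’ and ‘recall’ metrics have their usual reading when a whereabouts query is answered with a set of people; the zone and names below are invented for illustration:

```python
def precision_recall(reported, actual):
    """Precision and recall of a reported set against ground truth."""
    tp = len(reported & actual)                      # true positives
    precision = tp / len(reported) if reported else 1.0
    recall = tp / len(actual) if actual else 1.0
    return precision, recall

reported = {"alice", "bob", "carol"}   # system's answer: who is in zone A
actual = {"alice", "bob", "dave"}      # ground truth for zone A

p, r = precision_recall(reported, actual)
assert abs(p - 2 / 3) < 1e-9 and abs(r - 2 / 3) < 1e-9
```

In the paper's probabilistic setting, the reported set would itself come from thresholding the state's presence probabilities, so precision and recall trade off against that threshold.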
15.
Based on algorithmic and computational studies, we present the Feast Indices, which are new indicators of the realistic performance of many recent processors, as typically used in ‘low cost’ PCs/workstations
up to (parallel) supercomputers. Since these tests are very specifically designed for the evaluation of modern numerical simulation
techniques, with a special emphasis on ‘large scale’ FEM computations and iterative solvers of Krylov-space/multigrid type,
they examine new aspects in comparison to the standard benchmark tests (LINPACK, SPEC FP95, NAS, STREAM) and allow new qualitative
and particularly quantitative ratings of the various hardware platforms. We explore the computational efficiency of certain
Linear Algebra components, applying the typical sparse approaches and additionally the sparse banded style from Feast, which enables us to exploit a significant percentage of the available peak performance. The tests include ‘simple’ matrix-vector
applications as well as complete multigrid solvers with very robust smoothers which are necessary for the efficient treatment
of highly adapted meshes.
Received: February 10, 1999; revised: June 14, 1999
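A minimal sketch of the sparse banded style referred to above (our illustration, not Feast's implementation): storing a band matrix by diagonals yields unit-stride inner loops instead of the indirect addressing of general sparse formats.

```python
def banded_matvec(diagonals, offsets, x):
    """y = A @ x for a band matrix stored by diagonals.
    diagonals[k][i] holds A[i, i + offsets[k]] where that entry exists."""
    n = len(x)
    y = [0.0] * n
    for diag, off in zip(diagonals, offsets):
        # unit-stride loop over one diagonal: no index indirection
        for i in range(max(0, -off), min(n, n - off)):
            y[i] += diag[i] * x[i + off]
    return y

# Tridiagonal example: 2 on the diagonal, -1 on the off-diagonals (n = 4),
# the classic 1-D Laplacian stencil from FEM/multigrid settings.
n = 4
main = [2.0] * n
lower = [-1.0] * n          # entries A[i, i-1] for i >= 1
upper = [-1.0] * n          # entries A[i, i+1] for i <= n-2
y = banded_matvec([lower, main, upper], [-1, 0, 1], [1.0] * n)
assert y == [1.0, 0.0, 0.0, 1.0]
```

The performance point is that the inner loop streams through contiguous memory, which is exactly what such benchmarks are designed to measure against a processor's peak rate.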
16.
The way in which humans perceive and react to visual complexity is an important issue in many areas of research and application,
particularly because simplification of complex matter can lead to better understanding of both human behaviour in visual control
tasks as well as the visual environment itself. One area of interest is how people perceive their world in terms of complexity
and how this can be modelled mathematically and/or computationally. A prototype model of complexity has been derived using
subcomponents called ‘SymGeons’ (Symmetrical Geometric Icons) based on Biederman’s original Geon Model for human perception.
The SymGeons are primitive shapes which constitute foreground objects. This paper outlines the derivation and ongoing development
of the ‘SymGeon’ model and how it compares to human perception of visual complexity. The application of the model to understanding
complex human-in-the-loop problems associated with visual remote control operations, e.g. control of remotely operated vehicles,
is discussed.
17.
Antony Bryant 《Annals of Software Engineering》2000,10(1-4):273-292
The term software engineering has had a problematic history since its appearance in the 1960s. At first seen as a euphemism
for programming, it has now come to encompass a wide range of activities. At its core lies the desire of software developers
to mimic ‘real’ engineers, and claim the status of an engineering discipline. Attempts to establish such a discipline, however,
confront pressing commercial demands for cheap and timely software products. This paper briefly examines some of the claims
for the engineering nature of software development, before moving to argue that the term ‘engineering’ itself carries with
it some unwanted baggage. This contributes to the intellectual quandary in which software development finds itself, and this
is exacerbated by many writers who rely upon and propagate a mythical view of ‘engineering.’ To complicate matters further,
our understanding of software development is grounded in a series of metaphors that highlight some key aspects of the field,
but push other important issues into the shadows. A re‐reading of Brooks' “No Silver Bullet” paper indicates that the metaphorical
bases of software development have been recognized for some time. They cannot simply be jettisoned, but perhaps they need
widening to incorporate others such as Brooks' concepts of growth and nurture of software. Two examples illustrate the role
played by metaphor in software development, and the paper concludes with the idea that perhaps we need to adopt a more critical
stance to the ‘engineering’ roots of our endeavours*.
*I should like to express my thanks to the anonymous reviewers of the first draft of this paper. Two of them offered useful
advice to enhance the finished version; the third gave vent to a perfectly valid concern, that the argument as stated could
have grave side effects if it was used as a point of leverage in arguments over ownership of the term ‘engineering.’ I understand
this concern and the potential financial implications that prompt its expression; but in the longer term I see this exercise
in clarification as a contribution to such discussions, inasmuch as it helps defuse the potency of terms such as ‘engineering.’
This revised version was published online in June 2006 with corrections to the Cover Date.
18.
Richard T. Mills Chuan Yue Andreas Stathopoulos Dimitrios S. Nikolopoulos 《Journal of Grid Computing》2007,5(2):213-234
The ever increasing memory demands of many scientific applications and the complexity of today’s shared computational resources
still require the occasional use of virtual memory, network memory, or even out-of-core implementations, with well known drawbacks
in performance and usability. In Mills et al. (Adapting to memory pressure from within scientific applications on multiprogrammed
COWS. In: International Parallel and Distributed Processing Symposium, IPDPS, Santa Fe, NM, 2004), we introduced a basic framework for a runtime, user-level library, MMlib, in which DRAM is treated as a dynamic size cache for large memory objects residing on local disk. Application developers
can specify and access these objects through MMlib, enabling their application to execute optimally under variable memory availability, using as much DRAM as fluctuating memory
levels will allow. In this paper, we first extend our earlier MMlib prototype from a proof of concept to a usable, robust, and flexible library. We present a general framework that enables
fully customizable memory malleability in a wide variety of scientific applications. We provide several necessary enhancements
to the environment sensing capabilities of MMlib, and introduce a remote memory capability, based on MPI communication of cached memory blocks between ‘compute nodes’ and
designated memory servers. The increasing speed of interconnection networks makes a remote memory approach attractive, especially
at the large granularity present in large scientific applications. We show experimental results from three important scientific
applications that require the general MMlib framework. The memory-adaptive versions perform nearly optimally under constant memory pressure and execute harmoniously
with other applications competing for memory, without thrashing the memory system. Under constant memory pressure, we observe
execution time improvements of factors between three and five over relying solely on the virtual memory system. With remote
memory employed, these factors are even larger and significantly better than other, system-level remote memory implementations.
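The core idea, DRAM as a dynamic-size cache for large objects backed by local disk, can be sketched as a small LRU cache that spills evicted objects to files. The class name and policy below are our illustration, not MMlib's API:

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class MemCache:
    """Toy MMlib-style cache: keep at most `capacity` objects in memory,
    spill the least-recently-used ones to disk, fault them back on access."""

    def __init__(self, capacity, spill_dir):
        self.capacity = capacity          # max objects resident in memory
        self.mem = OrderedDict()          # key -> object, in LRU order
        self.spill_dir = spill_dir

    def _path(self, key):
        return os.path.join(self.spill_dir, f"{key}.pkl")

    def put(self, key, obj):
        self.mem[key] = obj
        self.mem.move_to_end(key)
        while len(self.mem) > self.capacity:       # evict LRU object to disk
            old_key, old_obj = self.mem.popitem(last=False)
            with open(self._path(old_key), "wb") as f:
                pickle.dump(old_obj, f)

    def get(self, key):
        if key not in self.mem:                    # fault it back from disk
            with open(self._path(key), "rb") as f:
                self.put(key, pickle.load(f))
        self.mem.move_to_end(key)
        return self.mem[key]

with tempfile.TemporaryDirectory() as d:
    cache = MemCache(capacity=2, spill_dir=d)
    for k in ("a", "b", "c"):
        cache.put(k, [k] * 3)                      # "a" spills to disk
    assert "a" not in cache.mem
    assert cache.get("a") == ["a", "a", "a"]       # transparently reloaded
```

MMlib's contribution beyond this sketch is sensing fluctuating memory availability at runtime and resizing the in-memory portion accordingly, plus the remote-memory backend over MPI.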
19.
Jean-Charles Pomerol 《Requirements Engineering》1998,3(3-4):174-181
In this paper, we address the question of how flesh and blood decision makers manage the combinatorial explosion in scenario
development for decision making under uncertainty. The first assumption is that the decision makers try to undertake ‘robust’
actions. For the decision maker a robust action is an action that has sufficiently good results whatever the events are. We
examine the psychological as well as the theoretical problems raised by the notion of robustness. Finally, we address the false sense of ‘risk control’ felt by decision makers. We argue that this feeling results from the belief that one can postpone action until after nature moves. This ‘action postponement’ amounts to changing look-ahead reasoning into diagnosis.
We illustrate these ideas in the framework of software development and examine some possible implications for requirements
analysis.
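The notion of a ‘robust’ action, one with sufficiently good results whatever the events are, admits a simple maximin reading (our illustration; the actions, scenarios and payoffs below are invented):

```python
# A robust action, read as maximin: pick the action whose worst-case payoff
# over all scenarios ("states of nature") is best.

PAYOFF = {   # action -> payoff per scenario (hypothetical numbers)
    "big-bang rewrite":      {"stable reqs": 10, "shifting reqs": -5},
    "incremental delivery":  {"stable reqs": 6,  "shifting reqs": 4},
}

def robust_action(payoff):
    """Return the action maximizing the minimum payoff across scenarios."""
    return max(payoff, key=lambda a: min(payoff[a].values()))

# The rewrite wins in the best case but is worst-case -5; the incremental
# route is 'sufficiently good' (>= 4) whatever the events are.
assert robust_action(PAYOFF) == "incremental delivery"
```

This captures only the formal side of robustness; the paper's point is precisely that real decision makers approximate such reasoning without enumerating the combinatorial scenario tree.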
20.
J. G. Keller S. K. Rogers M. Kabrisky M. E. Oxley 《Pattern Analysis & Applications》1999,2(3):251-263
The automated recognition of targets in complex backgrounds is a difficult problem, yet humans perform such tasks with ease.
We therefore propose a recognition model based on behavioural and physiological aspects of the human visual system. Emulating
saccadic behaviour, an object is first memorised as a sequence of fixations. At each fixation an artificial visual field is
constructed using a multi-resolution/orientation Gabor filterbank, edge features are extracted, and a new saccadic location
is automatically selected. When a new image is scanned and a ‘familiar’ field of view encountered, the memorised saccadic
sequence is executed over the new image. If the expected visual field is found around each fixation point, the memorised object
is recognised. Results are presented from trials in which individual objects were first memorised and then searched for in
collages of similar objects acting as distractors. In the different collages, entries of the memorised objects were subjected
to various combinations of rotation, translation and noise corruption. The model successfully detected the memorised object
in over 93% of the ‘object present’ trials, and correctly rejected collages in over 98% of the trials in which the object
was not present in the collage. These results are compared with those obtained using a correlation-based recogniser, and the
behavioural model is found to provide superior performance.
Received: 15 July 1998 / Received in revised form: 24 December 1998 / Accepted: 9 February 1999