Retrieved 20 similar records (search time: 156 ms)
1.
Euripidis N. Loukis 《Artificial Intelligence and Law》2007,15(1):19-48
This paper concerns the development and use of ontologies for electronically supporting and structuring the highest-level
function of government: the design, implementation and evaluation of public policies for the big and complex problems that
modern societies face. This critical government function usually necessitates extensive interaction and collaboration among
many heterogeneous government organizations (G2G collaboration) with different backgrounds, mentalities, values, interests
and expectations, so it can greatly benefit from the use of ontologies. To this end, the paper first describes an ontology of public
policy making, implementation and evaluation, developed as part of the project ICTE-PAN of the Information Society Technologies
(IST) Programme of the European Commission. The ontology is based on sound theoretical foundations, drawn mainly from the public
policy analysis domain, and on contributions of experts from the public administrations of four European Union
countries (Denmark, Germany, Greece and Italy). It is a ‘horizontal’ ontology that can be used for electronically supporting
and structuring the whole lifecycle of a public policy in any vertical (thematic) area of government activity; it can also
be combined with ‘vertical’ ontologies of the specific vertical (thematic) area of government activity we are dealing with.
The paper also describes the use of this ontology to electronically support and structure collaborative public
policy making, implementation and evaluation through ‘structured electronic forums’, ‘extended workflows’, ‘public policy
stages with specific sub-ontologies’, etc., and to semantically annotate, organize, index and integrate
the contributions of the forum participants, which enables the development of advanced semantic web capabilities
in this area.
2.
Muhammad Younas Irfan Awan Kuo-Ming Chao Jen-Yao Chung 《Information Systems and E-Business Management》2008,6(1):69-82
Service scheduling is a crucial issue in E-commerce environments. E-commerce web servers often become overloaded, as
they must handle a large number of customer requests, such as browse, search and pay, issued in order to make purchases
or to get product information from E-commerce web sites. In this paper, we propose a new approach in order to effectively
handle high traffic load and to improve web server’s performance. Our solution is to exploit networking techniques and to
classify customers’ requests into different classes such that some requests are prioritised over others. We contend that such
classification is financially beneficial to E-commerce services as in these services some requests are more valuable than
others. For instance, the processing of a “browse” request should receive lower priority than that of a “payment” request, as the latter is
considered to be more valuable to the service provider. Our approach analyses the arrival process of distinct requests and
employs a priority scheduling service at the network nodes that gives preferential treatment to high priority requests. The
proposed approach is tested through various experiments, which show a significant decrease in the response time of high priority
requests. The approach also reduces the probability that a web server drops high priority requests, thus enabling service providers
to generate more revenue.
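The class-based prioritisation the abstract describes can be sketched with a small priority queue; the three request classes and their priority values below are illustrative assumptions, not taken from the paper:

```python
import heapq

# Hypothetical priority values: lower number means served first.
PRIORITY = {"payment": 0, "search": 1, "browse": 2}

class RequestScheduler:
    """Serve incoming requests in class-priority order (sketch)."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within one class

    def submit(self, request_class, payload):
        heapq.heappush(self._heap, (PRIORITY[request_class], self._seq, payload))
        self._seq += 1

    def next_request(self):
        # Pop the highest-priority (lowest-numbered) pending request.
        _, _, payload = heapq.heappop(self._heap)
        return payload

sched = RequestScheduler()
sched.submit("browse", "view catalogue")
sched.submit("payment", "checkout order")
sched.submit("browse", "view reviews")
```

With this ordering, the "checkout order" request is served before either "browse" request even though it arrived second, which is the preferential treatment the abstract argues is financially beneficial.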
3.
Miklós Erdélyi-Szabó László Kálmán Agi Kurucz 《Journal of Logic, Language and Information》2008,17(1):1-17
The paper sets out to offer an alternative to the function/argument approach to the most essential aspects of natural language
meanings. That is, we question the assumption that semantic completeness (of, e.g., propositions) or incompleteness (of, e.g.,
predicates) exactly replicate the corresponding grammatical concepts (of, e.g., sentences and verbs, respectively). We argue
that even if one gives up this assumption, it is still possible to keep the compositionality of the semantic interpretation
of simple predicate/argument structures. In our opinion, compositionality presupposes that we are able to compare arbitrary
meanings in terms of information content. This is why our proposal relies on an ‘intrinsically’ type-free algebraic semantic
theory. The basic entities in our models are neither individuals, nor eventualities, nor their properties, but ‘pieces of
evidence’ for believing in the ‘truth’ or ‘existence’ or ‘identity’ of any kind of phenomenon. Our formal language contains
a single binary non-associative constructor used for creating structured complex terms representing arbitrary phenomena. We
give a finite Hilbert-style axiomatisation and a decision algorithm for the entailment problem of the suggested system.
4.
In this paper, we demonstrate how craft practice in contemporary jewellery opens up conceptions of ‘digital jewellery’ to
possibilities beyond merely embedding pre-existing behaviours of digital systems in objects, which follow shallow interpretations
of jewellery. We argue that a design approach that understands jewellery only in terms of location on the body is likely to
lead to a world of ‘gadgets’, rather than anything that deserves the moniker ‘jewellery’. In contrast, by adopting a craft
approach, we demonstrate that the space of digital jewellery can include objects where the digital functionality is integrated
as one facet of an object that can be personally meaningful for the holder or wearer.
5.
The informational divergence between stochastic matrices is not a metric. In this paper we show that consistent
definitions can nevertheless be given of ‘spheres’, ‘segments’ and ‘straight lines’ using the divergence as a sort of ‘distance’ between
stochastic matrices. The geometric nature of many ‘reliability functions’ of Information Theory and Mathematical Statistics
is thus clarified.
This work has been done within the GNIM-CNR research activity.
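As a concrete illustration of why such a divergence cannot be a metric, the sketch below uses the ordinary Kullback–Leibler divergence summed over the rows of two stochastic matrices (the paper's exact definition may differ) and shows that it is not symmetric:

```python
from math import log

def kl_divergence(P, Q):
    """Row-wise informational divergence D(P||Q) between two
    stochastic matrices, summed over all rows (sketch)."""
    return sum(p * log(p / q)
               for row_p, row_q in zip(P, Q)
               for p, q in zip(row_p, row_q)
               if p > 0)

# Two 2x2 stochastic matrices (each row sums to 1).
P = [[0.9, 0.1], [0.5, 0.5]]
Q = [[0.6, 0.4], [0.5, 0.5]]
```

Here `kl_divergence(P, Q)` and `kl_divergence(Q, P)` differ, so symmetry fails and the divergence is only a 'distance' in the loose sense the abstract describes.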
6.
In this paper, we present some of the results from our ongoing research work in the area of ‘agent support’ for electronic
commerce, particularly at the user interface level. Our goal is to provide intelligent agents to assist both the consumers
and the vendors in an electronic shopping environment. Users with a wide variety of different needs are expected to use the
electronic shopping application, and their expectations about the interface can vary widely. Traditional studies of user interface
technology have shown the existence of a ‘gap’ between what the user interface actually lets users do and what users expect.
Agent technology, in the form of personalized user interface agents, can help to narrow this gap. Such agents can be used
to give a personalized service to the user by knowing the user’s preferences. By doing so, they can assist in the various
stages of the users’ shopping process, provide tailored product recommendations by filtering information on behalf of their
users and reduce the information overload. From a vendor’s perspective, a software sales agent could be used for price negotiation
with the consumer. Such agents would give the flexibility offered by negotiation without the burden of having to provide human
presence to an online store to handle such negotiations.
Published online: 25 July 2001
7.
Jean-Charles Pomerol 《Requirements Engineering》1998,3(3-4):174-181
In this paper, we address the question of how flesh and blood decision makers manage the combinatorial explosion in scenario
development for decision making under uncertainty. The first assumption is that the decision makers try to undertake ‘robust’
actions. For the decision maker a robust action is an action that has sufficiently good results whatever the events are. We
examine the psychological as well as the theoretical problems raised by the notion of robustness. Finally, we address the
false sense of ‘risk control’ felt by decision makers. We argue that ‘risk control’ results from the belief that one
can postpone action until after nature moves. This ‘action postponement’ amounts to changing look-ahead reasoning into diagnosis.
We illustrate these ideas in the framework of software development and examine some possible implications for requirements
analysis.
8.
Particular cases of nonlinear systems of delay Volterra integro-differential equations (denoted by DVIDEs) with constant delay
τ > 0, arise in mathematical modelling of ‘predator–prey’ dynamics in Ecology. In this paper, we give an analysis of the global
convergence and local superconvergence properties of piecewise polynomial collocation for systems of this type. Then, from
the perspective of applied mathematics, we consider Volterra’s integro-differential system of ‘predator–prey’ dynamics
arising in Ecology. We analyze the numerical issues of the introduced collocation method applied to the ‘predator–prey’ system
and confirm that we can achieve the expected theoretical orders of convergence.
9.
Ashok Jain 《AI & Society》2002,16(1-2):4-20
The paper investigates the structure and functioning of the science and technology (S&T) system in India as it has evolved
in the post-independence period (1947 onwards). The networks of entities involved in S&T actions, the paper argues, can be
categorised, in terms of adopted approaches to agenda and priority setting and accounting for actions, into two streams. The
origins and expansion of the two streams are traced. One, the ‘Elite’ stream (high profile and visibility linked to big industry),
adopting what the paper has generically termed the ‘Nehruvian’ model of development, is shown to have emerged as a dominant
network. The other socially powerful ‘Subaltern’ stream (less visible, closer to ground realities and linked to village and
cottage industry), adopting the ‘Gandhian’ model of development, still remains dispersed and outside the consideration of
high-level decision-making bodies. The paper stresses the importance of moving the support and attention from the dominant
stream to efforts that attempt a synthesis between the dominant and the subaltern.
10.
Chris J. K. Williams 《Nexus Network Journal》2011,13(2):281-295
The theory of heat flow on a surface shows that any curvilinear quadrilateral can be ‘tiled’ with curvilinear squares of varying
size. This paper demonstrates a simple numerical technique for doing this that can also be applied to shapes other than quadrilaterals.
In particular, any curvilinear triangle can be tiled with curvilinear equilateral triangles.
11.
We present DiPerF, a DIstributed PERformance evaluation Framework, aimed at simplifying and automating performance evaluation of networked services. DiPerF coordinates a pool of machines that access a target service and collect performance measurements, aggregates these measurements, and generates performance statistics. The aggregate data collected provide information on service throughput, service response time, service ‘fairness’ when serving multiple clients concurrently, and on the impact of network connectivity on service performance. We have used DiPerF in various environments (PlanetLab, Grid3, TeraGrid, and a cluster) and with a large number of services. This paper provides data demonstrating that DiPerF is accurate (the aggregate client view matches the tested service view within a few percent) and scalable (DiPerF can handle more than 10,000 clients and 100,000 transactions per second). Moreover, rapid adoption and extensive use demonstrate that the ability to automate the extraction of performance characteristics makes DiPerF a valuable tool.
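The measurement-aggregation step can be sketched as follows; the record format and the `aggregate` function name are illustrative assumptions, not DiPerF's actual API:

```python
def aggregate(measurements):
    """Merge per-client (timestamp_s, response_time_s) records into
    aggregate throughput and response-time statistics (sketch)."""
    timestamps = [t for t, _ in measurements]
    latencies = [rt for _, rt in measurements]
    # Observation window; fall back to 1 s when all records coincide.
    span = (max(timestamps) - min(timestamps)) or 1.0
    return {
        "transactions": len(measurements),
        "throughput_tps": len(measurements) / span,
        "mean_response_s": sum(latencies) / len(latencies),
        "max_response_s": max(latencies),
    }

# Interleaved measurements collected from two hypothetical clients.
records = [(0.0, 0.10), (1.0, 0.20), (2.0, 0.30), (4.0, 0.40)]
stats = aggregate(records)
```

Real aggregation must also reconcile clock skew between client machines, which this sketch ignores.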
12.
In this paper we present a system to enhance the performance of feature correspondence based alignment algorithms for laser
scan data. We show how this system can be utilized as a new approach to the evaluation of mapping algorithms. Assuming certain
a priori knowledge, our system augments the sensor data with hypotheses (‘Virtual Scans’) about ideal models of objects in
the robot’s environment. These hypotheses are generated by analysis of the current aligned map estimated by an underlying
iterative alignment algorithm. The augmented data is used to improve the alignment process. Feedback between data alignment
and data analysis confirms, modifies, or discards the Virtual Scans in each iteration. Experiments with a simulated scenario
and real world data from a rescue robot scenario show the applicability and advantages of the approach. By replacing the estimated
‘Virtual Scans’ with ground truth maps our system can provide a flexible way for evaluating different mapping algorithms in
different settings.
13.
Romain Laufer 《AI & Society》1992,6(3):197-220
The expression, ‘the culture of the artificial’ results from the confusion between nature and culture, when nature mingles
with culture to produce the ‘artificial’ and science becomes ‘the science of the artificial’. Artificial intelligence can
thus be defined as the ultimate expression of the crisis affecting the very foundation of the system of legitimacy in Western
society, i.e. Reason, and more precisely, Scientific Reason. The discussion focuses on the emergence of the culture of the
artificial and the radical forms of pragmatism, sophism and marketing from a French philosophical perspective. The paper suggests
that in the postmodern age of the ‘the crisis of the systems of legitimacy’, the question of social acceptability of any action,
especially actions arising out of the application of AI, cannot be avoided.
14.
David Martin Jacki O’neill Dave Randall Mark Rouncefield 《Computer Supported Cooperative Work (CSCW)》2007,16(3):231-264
As a comparatively novel but increasingly pervasive organizational arrangement, call centres have been a focus for much recent
research. This paper identifies lessons for organizational and technological design through an examination of call centres
and ‘classification work’ – explicating what Star [1992, Systems/Practice vol. 5, pp. 395–410] terms the ‘open black box’. Classification is a central means by which organizations standardize procedure,
assess productivity, develop services and re-organize their business. Nevertheless, as Bowker and Star [1999, Sorting Things Out: Classification and Its Consequences. Cambridge MA: MIT Press] have pointed out, we know relatively little about the work that goes into making classification
schema what they are. We will suggest that a focus on classification ‘work’ in this context is a useful exemplar of the need
for some kind of ‘meta-analysis’ in ethnographic work also. If standardization is a major ambition for organizations under
late capitalism, then comparison might be seen as a related but as-yet unrealized one for ethnographers. In this paper, we
attempt an initial cut at a comparative approach, focusing on classification because it seemed to be the primary issue that
emerged when we compared studies. Moreover, if technology is the principal means through which procedure and practice is implemented
and if, as we believe, classifications are becoming ever more explicitly embedded within it (for instance with the development
of so-called ‘semantic web’ and associated approaches to ontology-based design), then there is clearly a case for identifying
some themes which might underpin classification work in a given domain.
15.
For a variety of reasons, the relative impact of each input on the output of a neural network’s computation is valuable information
to obtain. In particular, it is desirable to identify the significant features, or inputs, of a data-defined problem before
the data is sufficiently preprocessed to enable high performance neural-net training. We have defined and tested a technique
for assessing such input impacts, which will be compared with a method described in a paper published earlier in this journal.
The new approach, known as the ‘clamping’ technique, offers efficient impact assessment of the input features of the problem.
Results of the clamping technique prove to be robust under a variety of different network configurations. Differences in architecture,
training parameter values and subsets of the data all deliver much the same impact rankings, which supports the notion that
the technique ranks an inherent property of the available data rather than a property of any particular feedforward neural
network. The success, stability and efficiency of the clamping technique are shown to hold for a number of different real-world
problems. In addition, we subject the previously published technique, which we will call the ‘weight product’ technique, to
the same tests in order to provide directly comparable information.
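A generic form of input clamping can be sketched as follows, assuming (since the abstract does not give the exact procedure) that each feature is clamped to its dataset mean and the resulting change in model output is averaged over the data:

```python
def clamp_impacts(model, data):
    """Rank input features by how much clamping each one to its mean
    changes the model's output, averaged over the dataset (sketch)."""
    n_features = len(data[0])
    means = [sum(row[i] for row in data) / len(data) for i in range(n_features)]
    impacts = []
    for i in range(n_features):
        total = 0.0
        for row in data:
            clamped = list(row)
            clamped[i] = means[i]  # clamp feature i to its dataset mean
            total += abs(model(row) - model(clamped))
        impacts.append(total / len(data))
    return impacts

# Toy stand-in for a trained network: feature 0 dominates the output.
model = lambda x: 5.0 * x[0] + 0.5 * x[1]
data = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [3.0, 1.0]]
impacts = clamp_impacts(model, data)
```

Because the ranking depends only on the trained input–output mapping, it is plausible that (as the abstract reports) it stays stable across architectures and training runs that fit the same data.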
16.
Institution Morphisms
Institutions formalise the intuitive notion of logical system, including syntax, semantics, and the relation of satisfaction
between them. Our exposition emphasises the natural way that institutions can support deduction on sentences, and inclusions
of signatures, theories, etc.; it also introduces terminology to clearly distinguish several levels of generality of the institution
concept. A surprising number of different notions of morphism have been suggested for forming categories with institutions
as objects, and an amazing variety of names have been proposed for them. One goal of this paper is to suggest a terminology
that is uniform and informative to replace the current chaotic nomenclature; another goal is to investigate the properties
and interrelations of these notions in a systematic way. Following brief expositions of indexed categories, diagram categories,
twisted relations and Kan extensions, we demonstrate and then exploit the duality between institution morphisms in the original
sense of Goguen and Burstall, and the ‘plain maps’ of Meseguer, obtaining simple uniform proofs of completeness and cocompleteness
for both resulting categories. Because of this duality, we prefer the name ‘comorphism’ over ‘plain map’; moreover, we argue
that morphisms are more natural than comorphisms in many cases. We also consider ‘theoroidal’ morphisms and comorphisms, which
generalise signatures to theories, based on a theoroidal institution construction, finding that the ‘maps’ of Meseguer are
theoroidal comorphisms, while theoroidal morphisms are a new concept. We introduce ‘forward’ and ‘semi-natural’ morphisms,
and develop some of their properties. Appendices discuss institutions for partial algebra, a variant of order sorted algebra,
two versions of hidden algebra, and a generalisation of universal algebra; these illustrate various points in the main text.
A final appendix makes explicit a greater generality for the institution concept, clarifies certain details and proves some
results that lift institution theory to this level.
Received December 2000 / Accepted in revised form January 2002
17.
Sara Bury Johnathan Ishmael Nicholas J. P. Race Paul Smith 《Personal and Ubiquitous Computing》2010,14(3):227-236
This paper documents some of the socio-technical issues involved in developing security measures for wireless mesh networks
(WMNs) that are deployed as part of a community network. We are interested in discovering whether (and exactly how) everyday
social interaction over the network is affected by security issues, and any consequent design implications. We adopt an interdisciplinary
methodological approach to requirements, treating a community as an ‘organization’ and implementing an approach, OCTAVE, originally
designed to uncover security elements for organizations. Using a focus group technique we chart some of the assets and security
concerns of the community, concerns that need to be addressed in order for WMNs, or indeed any network, to become a truly
‘mundane technology’.
18.
DSM as a knowledge capture tool in CODE environment
A design structure matrix (DSM) provides a simple, compact, and visual representation of a complex system/process. This paper
shows how DSM, a systems engineering tool, is applied as a knowledge capture (acquisition) tool in a generic new product development (NPD) process. The
acquired knowledge (identified in the DSM) is provided in the form of Questionnaires, which are organized into five performance
indicators of the organization, namely ‘Marketing’, ‘Technical’, ‘Financial’, ‘Resource Management’, and ‘Project Management’.
An industrial application is carried out for knowledge validation. It is found from the application that the acquired knowledge
helps NPD teams, managers and stakeholders to benchmark their NPD endeavour and select areas on which to focus their improvement
efforts (up to 80% valid).
19.
Bernd Löchner 《Journal of Automated Reasoning》2006,36(4):289-310
The Knuth–Bendix ordering (KBO) is one of the term orderings in widespread use. We present a new algorithm to compute KBO,
which is (to our knowledge) the first asymptotically optimal one. Starting with an ‘obviously correct’ version, we use program
transformation to stepwise develop an efficient version, making clear the essential ideas, while retaining correctness. By
theoretical analysis we show that the worst-case behavior is thereby changed from quadratic to linear. Measurements show the
practical improvements of the different variants.
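The 'obviously correct' but quadratic starting point can be sketched for ground terms; unit symbol weights and the omission of variables are simplifying assumptions not made by the paper:

```python
def weight(term):
    """Total symbol weight of a ground term, with unit weight per symbol.
    Recomputed at every comparison step, which is the source of the
    naive algorithm's quadratic worst case."""
    head, args = term
    return 1 + sum(weight(a) for a in args)

def kbo_greater(s, t, precedence):
    """Naive KBO on ground terms: compare total weight first, then
    head-symbol precedence, then the arguments lexicographically."""
    ws, wt = weight(s), weight(t)
    if ws != wt:
        return ws > wt
    if s[0] != t[0]:
        return precedence[s[0]] > precedence[t[0]]
    for a, b in zip(s[1], t[1]):
        if a != b:
            return kbo_greater(a, b, precedence)
    return False  # terms are equal

# Terms as (symbol, (args...)) tuples.
a = ("a", ())
b = ("b", ())
f_a = ("f", (a,))
prec = {"a": 0, "b": 1, "f": 2}
```

The paper's transformation avoids recomputing `weight` at each recursive call, which is what brings the worst case from quadratic down to linear.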
20.
Atsuyoshi Nakamura Jun-ichi Takeuchi Naoki Abe 《Annals of Mathematics and Artificial Intelligence》1998,23(1-2):53-82
We consider a variant of the ‘population learning model’ proposed by Kearns and Seung [8], in which the learner is required
to be ‘distribution-free’ as well as computationally efficient. A population learner receives as input hypotheses from a large
population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain a labeled sample
for the target concept and output a hypothesis. A polynomial-time population learner is said to PAC-learn a concept class,
if its hypothesis is probably approximately correct whenever the population size exceeds a certain bound which is polynomial,
even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies,
and some simple concept classes that can be learned by them. These strategies include the ‘supremum hypothesis finder’, the
‘minimum superset finder’ (a special case of the ‘supremum hypothesis finder’), and various voting schemes. When coupled with
appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the ‘high–low game’,
conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these
cases, and show that these systems can be used to obtain a speed up from the ordinary PAC-learning model [11], with appropriate
choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively
a model of ‘population prediction’, in which the learner is to predict the value of the target concept at an arbitrarily drawn
point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning
model is strictly more powerful than the population prediction model. Finally, we consider a variant of this model with classification
noise, and exhibit a population learner for the class of conjunctions in this model.
This revised version was published online in June 2006 with corrections to the Cover Date.
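A minimal voting-scheme population learner for a 'high–low game'-style threshold concept might be sketched as follows; the agent rule, sample sizes, and function names are illustrative assumptions, not the paper's constructions:

```python
import random

def agent_hypothesis(sample):
    """Each agent sees a tiny labeled sample of (x, x >= theta) pairs
    and guesses the smallest positive example as its threshold
    (or 1.0 if it saw no positive example)."""
    positives = [x for x, label in sample if label]
    return min(positives) if positives else 1.0

def population_predict(thresholds, x):
    """Voting-scheme population learner: predict positive at point x
    iff a majority of agents' hypotheses classify x as positive."""
    votes = sum(1 for t in thresholds if x >= t)
    return votes * 2 > len(thresholds)

random.seed(0)
theta = 0.5  # hidden target threshold

def draw_sample(k=3):
    xs = [random.random() for _ in range(k)]
    return [(x, x >= theta) for x in xs]

# Each agent's sample size is fixed (k = 3); accuracy comes from
# the population size, as in the abstract.
thresholds = [agent_hypothesis(draw_sample()) for _ in range(101)]
```

Each individual agent is a poor learner, but the majority vote over 101 agents classifies points far from the threshold reliably, which is the population-size/sample-size trade-off the abstract describes.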