20 similar documents found (search time: 218 ms)
1.
Mark Bishop, Minds and Machines, 2009, 19(4): 507-516
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers,
especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates
genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented
herein is a simple development of that originally presented in Putnam’s 1988 monograph Representation and Reality (Bradford Books, Cambridge, MA), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks, etc.) must instantiate conscious experience, and hence that disembodied minds lurk everywhere.
2.
Remote sensing imaging techniques make use of data derived from high resolution satellite sensors. Image classification identifies
and organises pixels of similar spatial distribution or similar statistical characteristics into the same spectral class (theme).
Contextual data can be incorporated, or ‘fused’, with spectral data to improve the accuracy of classification algorithms.
In this paper we use Dempster–Shafer’s theory of evidence to achieve this data fusion. Incorporating a Knowledge Base of evidence
within the classification process represents a new direction for the development of reliable systems for image classification
and the interpretation of remotely sensed data.
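The abstract above names Dempster–Shafer theory as the fusion mechanism. As an illustration only (the paper’s actual evidence structures are not given here, and the classes and masses below are invented), Dempster’s rule of combination for two mass functions over a small frame of discernment can be sketched as:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, given as
    dicts mapping frozenset hypotheses to basic belief masses."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical fusion of spectral and contextual evidence over two classes.
spectral = {frozenset({"water"}): 0.6, frozenset({"water", "forest"}): 0.4}
context = {frozenset({"forest"}): 0.3, frozenset({"water", "forest"}): 0.7}
fused = dempster_combine(spectral, context)
```

Masses that agree reinforce each other; conflicting mass is discarded and the remainder renormalised, which is how the contextual ‘Knowledge Base’ evidence can sharpen a purely spectral classification.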
3.
Retrieval of relevant unstructured information from the ever-increasing textual communications of individuals and businesses
has become a major barrier to effective litigation/defense, mergers/acquisitions, and regulatory compliance. Such e-discovery
requires simultaneously high precision with high recall (high-P/R) and is therefore a prototype for many legal reasoning tasks.
The requisite exhaustive information retrieval (IR) system must employ very different techniques than those applicable in
the hyper-precise consumer search task, where insignificant recall is the accepted norm. We apply Russell et al.’s cognitive task analysis of sensemaking by intelligence analysts to develop a semi-autonomous system that achieves high IR accuracy of F1 ≥ 0.8, compared to the F1 < 0.4 typical of computer-assisted human-assessment (CAHA) or of alternative approaches such as Roitblat et al.’s. By understanding the ‘Learning Loop Complexes’ of lawyers engaged in successful small-scale document review, we have used socio-technical design principles to create roles, processes, and technologies for scalable human-assisted computer-assessment (HACA). Results from the NIST-TREC Legal Track’s interactive task from both 2008 and 2009 validate the efficacy of this sensemaking approach to the high-P/R IR task.
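For readers unfamiliar with the F1 figures quoted above: F1 is the harmonic mean of precision and recall, so it punishes the precision/recall imbalance typical of consumer search. A minimal illustration (the two profiles below are invented for the example, not taken from the study):

```python
def f1(precision, recall):
    """F1 score: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A hyper-precise, low-recall profile vs. a balanced high-P/R profile.
consumer_search = f1(0.95, 0.25)
e_discovery = f1(0.80, 0.80)
```

Even near-perfect precision cannot lift F1 above 0.4 when recall stays low, which is why exhaustive e-discovery retrieval demands both measures be high simultaneously.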
4.
Large databases are becoming increasingly common in civil infrastructure applications. Although it is relatively simple to
specifically query these databases at a low level, more abstract questions like ‘How does the environment affect pavement
cracking?’ are difficult to answer with traditional methods. Data mining techniques can provide a solution for learning abstract
knowledge from civil infrastructure databases. However, data mining needs to be performed within a systematic process to
ensure correct and reproducible results. Many decisions must be made during this process, making it difficult for novice analysts
to apply data mining techniques thoroughly. This paper presents an application of a knowledge discovery process to data collected
for an ‘intelligent’ building. The knowledge discovery process is illustrated and explained through this case study. Additionally,
we discuss the importance of this case study in the context of a research effort to develop an interactive guide for the knowledge
discovery process.
5.
This article offers a research update on a 3-year programme initiated by the Kamloops Art Gallery and the University College
of the Cariboo in Kamloops, British Columbia. The programme is supported by a ‘Community–University Research Alliance’ grant
from the Social Sciences and Humanities Research Council of Canada, and the collaboration focuses on the cultural future of
small cities – on how cultural and arts organisations work together (or fail to work together) in a small city setting. If
not by definition, then certainly by default, ‘culture’ is associated with big city life: big cities are equated commonly
with ‘big culture’; small cities with something less. The Cultural Future of Small Cities research group seeks to provide
a more nuanced view of what constitutes culture in a small Canadian city. In particular, the researchers are exploring notions
of social capital and community asset building: in this context, ‘visual and verbal representation’, ‘home’, ‘community’ and
the need to define a local ‘sense of place’ have emerged as important themes. As the Small Cities programme begins its second
year, a unique but key aspect has become the artist-as-researcher.
Correspondence and offprint requests to: L. Dubinsky, Kamloops Art Gallery, 101–465 Victoria Street, Kamloops, BC V2C 2A9 Canada. Tel.: 250-828-3543; Email: ldubinsky@museums.ca
6.
Istvan S. N. Berkeley, Minds and Machines, 2008, 18(1): 93-105
The notion of a ‘symbol’ plays an important role in the disciplines of Philosophy, Psychology, Computer Science, and Cognitive
Science. However, there is comparatively little agreement on how this notion is to be understood, either between disciplines,
or even within particular disciplines. This paper does not attempt to defend some putatively ‘correct’ version of the concept
of a ‘symbol.’ Rather, some terminological conventions are suggested, some constraints are proposed and a taxonomy of the
kinds of issue that give rise to disagreement is articulated. The goal here is to provide something like a ‘geography’ of
the various notions of ‘symbol’ that have appeared in the various literatures, so as to highlight the key issues and to permit
the focusing of attention upon the important dimensions. In particular, the relationship between ‘tokens’ and ‘symbols’ is
addressed. The issue of designation is discussed in some detail. The distinction between simple and complex symbols is clarified
and an apparently necessary condition for a system to be potentially symbol- or token-bearing is introduced.
7.
David Martin, Jacki O’Neill, Dave Randall, Mark Rouncefield, Computer Supported Cooperative Work (CSCW), 2007, 16(3): 231-264
As a comparatively novel but increasingly pervasive organizational arrangement, call centres have been a focus for much recent
research. This paper identifies lessons for organizational and technological design through an examination of call centres
and ‘classification work’ – explicating what Star [1992, Systems/Practice vol. 5, pp. 395–410] terms the ‘open black box’. Classification is a central means by which organizations standardize procedure,
assess productivity, develop services and re-organize their business. Nevertheless, as Bowker and Star [1999, Sorting Things Out: Classification and Its Consequences. Cambridge MA: MIT Press] have pointed out, we know relatively little about the work that goes into making classification
schema what they are. We will suggest that a focus on classification ‘work’ in this context is a useful exemplar of the need
for some kind of ‘meta-analysis’ in ethnographic work also. If standardization is a major ambition for organizations under
late capitalism, then comparison might be seen as a related but as-yet unrealized one for ethnographers. In this paper, we
attempt an initial cut at a comparative approach, focusing on classification because it seemed to be the primary issue that
emerged when we compared studies. Moreover, if technology is the principal means through which procedure and practice is implemented
and if, as we believe, classifications are becoming ever more explicitly embedded within it (for instance with the development
of so-called ‘semantic web’ and associated approaches to ontology-based design), then there is clearly a case for identifying
some themes which might underpin classification work in a given domain.
8.
Claudia Loebbecke, Information Systems and E-Business Management, 2003, 1(1): 55-72
Consumers have embraced the concept of e-commerce, although less enthusiastically than expected. Major concerns still exist
regarding the use of the Internet for private purchasing. Trust is seen as a factor that is becoming increasingly important
for both consumers and content managers alike. Various trust-related support features for online transactions are available,
but most lack any form of guarantee or insurance for the parties involved. In this context, the paper seeks to explore the
concepts and potential contributions of contract-based guarantees and insurance services with regard to business-to-consumer
online transactions. After an inventory-taking of available seals of approval and insurance solutions for B2C online transactions,
the paper drafts a research framework for investigating different insurance models. The case of ‘Trusted Shops’, backed by
a German insurance provider, illustrates the concept of insurance solutions and analyzes benefits and risks for all parties
involved. The potential advantages and limitations of extending the concept along the dimensions ‘scale’ and ‘scope’ are presented.
The paper concludes by providing some suggestions for further research.
9.
A designerly critique on enchantment
Philip R. Ross, C. J. Overbeeke, Stephan A. G. Wensveen, Caroline M. Hummels, Personal and Ubiquitous Computing, 2008, 12(5): 359-371
To develop the concept of user experience in HCI, McCarthy et al. introduce the notion of enchantment in interaction design.
They describe five sensibilities that support exploration and evaluation in design for enchantment. In this paper, we discuss
design for enchantment in light of our approach to design for interaction, called design for meaningful mediation. Based on
our experiences from case studies, we argue that ‘considering the whole person with feelings, desires and anxieties’, one
of the sensibilities McCarthy et al. formulate, influences the desirability and realisation of the other four sensibilities.
By way of case studies, we show how we explored the link between ‘the whole person’ and desired interaction experience in
a designerly way. We place enchantment in a context of other interaction experiences and demonstrate possible design techniques
relevant to design for interaction experiences, including enchantment.
10.
J. Rogalski, Cognition, Technology & Work, 1999, 1(4): 247-256
Managing dynamic environments often requires decision making under uncertainty and risk. Two types of uncertainty are involved:
uncertainty about the state and the evolution of the situation, and ‘openness’ of the possible actions to face possible consequences.
In an experimental study on risk management in dynamic situations, two contrasted ‘ecological’ scenarios – transposed from
effective situations of emergency management – were compared in order to identify the impact of their ‘openness’ in the subjects’
strategies for decision making. The ‘Lost Child’ scenario presented qualitative and irreversible consequences (child’s death)
and high uncertainty; it exerted high demands both in risk assessment (risk representation) and action elaboration and choice.
A less open situation (‘Hydrocarbon Fire’) required a main choice between two contrasted actions, with quantitative computable
consequences. The strategies of ‘experimental subjects’ (university students) and ‘operative subjects’ (professional fire-fighter
officers) were compared in order to evaluate the ecological validity of experimental research in this field, from the point
of view of the subjects themselves. The two scenarios appeared to be independent, so that quite different models of decision
making have to be hypothesised, differing by the importance of assessing risk and defining possible actions on the one hand,
and by the process of choice on the other. ‘Experimental’ subjects dramatically differed from ‘operative’ subjects when confronted
with the same scenario, particularly for the less technical but more demanding scenario. It is hypothesised that three components
might account for the effect of the situations and for the differences between and within groups of subjects: importance of
situation assessment, spatial abilities, and global orientation of activity in managing dynamic risk.
11.
Atsuyoshi Nakamura, Jun-ichi Takeuchi, Naoki Abe, Annals of Mathematics and Artificial Intelligence, 1998, 23(1-2): 53-82
We consider a variant of the ‘population learning model’ proposed by Kearns and Seung [8], in which the learner is required
to be ‘distribution-free’ as well as computationally efficient. A population learner receives as input hypotheses from a large
population of agents and produces as output its final hypothesis. Each agent is assumed to independently obtain labeled sample
for the target concept and output a hypothesis. A polynomial time population learner is said to PAC-learn a concept class,
if its hypothesis is probably approximately correct whenever the population size exceeds a certain bound which is polynomial,
even if the sample size for each agent is fixed at some constant. We exhibit some general population learning strategies,
and some simple concept classes that can be learned by them. These strategies include the ‘supremum hypothesis finder’, the
‘minimum superset finder’ (a special case of the ‘supremum hypothesis finder’), and various voting schemes. When coupled with
appropriate agent algorithms, these strategies can learn a variety of simple concept classes, such as the ‘high–low game’,
conjunctions, axis-parallel rectangles and others. We give upper bounds on the required population size for each of these
cases, and show that these systems can be used to obtain a speed up from the ordinary PAC-learning model [11], with appropriate
choices of sample and population sizes. With the population learner restricted to be a voting scheme, what we have is effectively
a model of ‘population prediction’, in which the learner is to predict the value of the target concept at an arbitrarily drawn
point, as a threshold function of the predictions made by its agents on the same point. We show that the population learning
model is strictly more powerful than the population prediction model. Finally, we consider a variant of this model with classification
noise, and exhibit a population learner for the class of conjunctions in this model.
This revised version was published online in June 2006 with corrections to the Cover Date.
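The voting schemes described above can be illustrated with a toy sketch. This is not Kearns and Seung’s formal model; the agent learner, the sample sizes, and the ‘high–low game’ setup below are simplified inventions. Each agent learns a threshold from a constant-size sample, and the population predicts by majority vote:

```python
import random

def agent_hypothesis(samples):
    """Toy agent for a 'high-low game' style target: from a constant-size
    labeled sample (x, x >= c_true), pick a threshold halfway between the
    largest negative and the smallest positive example seen."""
    highs = [x for x, label in samples if label]
    lows = [x for x, label in samples if not label]
    lo = max(lows, default=0.0)   # fall back to the domain bounds [0, 1]
    hi = min(highs, default=1.0)
    threshold = (lo + hi) / 2
    return lambda x: x >= threshold

def population_predict(agents, x):
    """Population prediction: majority vote of the agents' hypotheses at x."""
    votes = sum(hypothesis(x) for hypothesis in agents)
    return 2 * votes >= len(agents)

# Each agent sees only 3 labeled points, yet a population of 101 such weak
# hypotheses predicts reliably at points away from the true threshold 0.5.
random.seed(0)
def sample(n, c_true=0.5):
    return [(x, x >= c_true) for x in (random.random() for _ in range(n))]

agents = [agent_hypothesis(sample(3)) for _ in range(101)]
```

The point of the population model is visible here: the per-agent sample size stays constant, and accuracy comes from growing the population instead.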
12.
Jean-Charles Pomerol, Requirements Engineering, 1998, 3(3-4): 174-181
In this paper, we address the question of how flesh and blood decision makers manage the combinatorial explosion in scenario
development for decision making under uncertainty. The first assumption is that the decision makers try to undertake ‘robust’
actions. For the decision maker a robust action is an action that has sufficiently good results whatever the events are. We
examine the psychological as well as the theoretical problems raised by the notion of robustness. Finally, we address the false sense of ‘risk control’ held by some decision makers. We argue that this feeling of ‘risk control’ results from the belief that one can postpone action until after nature moves. This ‘action postponement’ amounts to changing look-ahead reasoning into diagnosis.
We illustrate these ideas in the framework of software development and examine some possible implications for requirements
analysis.
13.
The problem of ‘information content’ of an information system appears elusive. In the field of databases, the information
content of a database has been taken as the instance of a database. We argue that this view misses two fundamental points.
One is a convincing conception of the phenomenon concerning information in databases, especially a properly defined notion
of ‘information content’. The other is a framework for reasoning about information content. In this paper, we suggest a modification
of the well-known definition of ‘information content’ given by Dretske (Knowledge and the Flow of Information, 1981). We then
define what we call the ‘information content inclusion’ relation (IIR for short) between two random events. We present a set
of inference rules for reasoning about information content, which we call the IIR Rules. Then we explore how these ideas and
the rules may be used in a database setting to look at databases and to derive otherwise hidden information by deriving new
relations from a given set of IIR. A prototype is presented, which shows how the idea of IIR-Reasoning might be exploited
in a database setting including the relationship between real world events and database values.
Correspondence: Malcolm Crowe
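The inference-rule idea above can be illustrated with a deliberately simplified sketch. The paper’s actual IIR rules operate on random events and are richer than this; here we only show how closing a binary ‘information content inclusion’ relation under transitivity derives otherwise hidden relations (all relation names are hypothetical):

```python
def iir_closure(pairs):
    """Close a binary relation under transitivity: if the information
    content of A includes that of B, and B's includes C's, derive that
    A's includes C's. A simplified stand-in for deriving new IIR facts
    from a given set by repeated rule application."""
    rel = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(rel):
            for c, d in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

# Hypothetical database events: an 'orders' row carries information about
# its customer, and a customer row about its region.
derived = iir_closure({("orders", "customers"), ("customers", "regions")})
```

Here the fact that an order carries information about a region never appears in the input set; it is derived, which is the sense in which IIR reasoning can surface hidden information in a database.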
14.
Institution Morphisms
Institutions formalise the intuitive notion of logical system, including syntax, semantics, and the relation of satisfaction
between them. Our exposition emphasises the natural way that institutions can support deduction on sentences, and inclusions
of signatures, theories, etc.; it also introduces terminology to clearly distinguish several levels of generality of the institution
concept. A surprising number of different notions of morphism have been suggested for forming categories with institutions
as objects, and an amazing variety of names have been proposed for them. One goal of this paper is to suggest a terminology
that is uniform and informative to replace the current chaotic nomenclature; another goal is to investigate the properties
and interrelations of these notions in a systematic way. Following brief expositions of indexed categories, diagram categories,
twisted relations and Kan extensions, we demonstrate and then exploit the duality between institution morphisms in the original
sense of Goguen and Burstall, and the ‘plain maps’ of Meseguer, obtaining simple uniform proofs of completeness and cocompleteness
for both resulting categories. Because of this duality, we prefer the name ‘comorphism’ over ‘plain map’; moreover, we argue
that morphisms are more natural than comorphisms in many cases. We also consider ‘theoroidal’ morphisms and comorphisms, which
generalise signatures to theories, based on a theoroidal institution construction, finding that the ‘maps’ of Meseguer are
theoroidal comorphisms, while theoroidal morphisms are a new concept. We introduce ‘forward’ and ‘semi-natural’ morphisms,
and develop some of their properties. Appendices discuss institutions for partial algebra, a variant of order sorted algebra,
two versions of hidden algebra, and a generalisation of universal algebra; these illustrate various points in the main text.
A final appendix makes explicit a greater generality for the institution concept, clarifies certain details and proves some
results that lift institution theory to this level.
Received December 2000 / Accepted in revised form January 2002
15.
Summary: The informational divergence between stochastic matrices is not a metric. In this paper we show, however, that consistent
definitions can be given of ‘spheres’, ‘segments’ and ‘straight lines’ using the divergence as a sort of ‘distance’ between
stochastic matrices. The geometric nature of many ‘reliability functions’ of Information Theory and Mathematical Statistics
is thus clarified.
This work has been done within the GNIM-CNR research activity.
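The divergence in question behaves like a ‘distance’ but fails the symmetry axiom of a metric. A small numeric sketch (the row-weighting scheme and the matrices below are assumptions for illustration, not the paper’s construction):

```python
import math

def divergence(P, Q, w):
    """Informational (Kullback-Leibler) divergence between row-stochastic
    matrices P and Q, with rows weighted by an input distribution w:
    D = sum_i w_i * sum_j P[i][j] * log(P[i][j] / Q[i][j])."""
    return sum(
        wi * sum(p * math.log(p / q) for p, q in zip(p_row, q_row) if p > 0)
        for wi, p_row, q_row in zip(w, P, Q)
    )

P = [[0.9, 0.1], [0.2, 0.8]]
Q = [[0.6, 0.4], [0.5, 0.5]]
w = [0.5, 0.5]

d_pq = divergence(P, Q, w)
d_qp = divergence(Q, P, w)
# d_pq != d_qp: the divergence is asymmetric, hence not a metric,
# which is why 'spheres' and 'segments' need separate definitions.
```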
16.
John Underkoffler, Personal and Ubiquitous Computing, 1997, 1(1): 28-40
Conclusion: Four decades of sporadic invention of, and experimentation with, non-traditional human-computer interface schemes have congealed (somewhat abruptly, though not without a few clear-sighted antecedents) into a new field of information system design, here called Antisedentary Beigeless Computing, that consciously rejects the traditional conception of an isolated tête-à-tête between the human and the box-CRT-keyboard-mouse.
ABC systems instead favour the complementary directions away from this notion of an immobile info-shrine: more personal, intimate,
and portable information access; and more diffuse, environmentally-integrated information access. Consideration of ABC projects
to date seems to suggest that no single instance can alone express the full generality required of a ‘working’ information
system, so that (on the one hand) system design must acknowledge that a complex set of trade-offs involving capabilities,
universality, specificity, personalization, and generality is inescapable; while (on the other hand) an ideal, eventual ‘information
environment’ will inevitably comprise the careful interweaving of some number of individual ABC systems.
Taxonomies and classification schemas can rarely hope to be found complete or flawless before the collection of items that they purport to describe has itself reached the evolutionary stasis of ‘adulthood’ — that is, there is typically some
threshold of development or growth beyond which few enough surprises lurk that an encompassing taxonomy can be constructed
and observed to reliably encompass, in the longer term. The domain of ABC thought is still quite nascent, and so we would
be foolish to assume that all its extremities of form and connotation are now visible, but to the extent that we can already
see the outlines of a ‘field’ it is reasonable to make a first run at an analytic taxonomy. The ‘independent character axes’
approach presented here seems broad and loose enough to accommodate any number of additions to the basic stable of ABC systems.
It is, further, a taxonomy amenable to significant revision as may be found necessary: axes can be added, deleted, reconstrued,
etc. as time and consideration clarify our understanding of ABC. However, it should also be anticipated that the field will
eventually coalesce around a much smaller number of better-defined ‘axes’ and thus permit taxonomic reversion to the more
hierarchical (and finally more satisfying) ‘Linnean’ scheme we'd originally imagined establishing.
17.
DSM as a knowledge capture tool in CODE environment
A design structure matrix (DSM) provides a simple, compact, and visual representation of a complex system/process. This paper shows how DSM, a systems engineering tool, is applied as a knowledge capture (acquisition) tool in a generic new product development (NPD) process. The acquired knowledge (identified in the DSM) is provided in the form of questionnaires, which are organized into five performance indicators of the organization, namely ‘Marketing’, ‘Technical’, ‘Financial’, ‘Resource Management’, and ‘Project Management’. An industrial application is carried out for knowledge validation. It is found from the application that the acquired knowledge helps NPD teams, managers and stakeholders to benchmark their NPD endeavour and select areas to focus their improvement efforts (up to 80% valid).
18.
Stuart Jackson, Nuala Brady, Fred Cummins, Kenneth Monaghan, Artificial Intelligence Review, 2006, 26(1-2): 141-154
Recent findings in neuroscience suggest an overlap between those brain regions involved in the control and execution of movement
and those activated during the perception of another’s movement. This so-called ‘mirror neuron’ system is thought to underlie
our ability to automatically infer the goals and intentions of others by observing their actions. Kilner et al. (Curr Biol
13(6):522–525, 2003) provide evidence for a human ‘mirror neuron’ system by showing that the execution of simple arm movements
is affected by the simultaneous perception of another’s movement. Specifically, observation of ‘incongruent’ movements made
by another human, but not by a robotic arm, leads to greater variability in the movement trajectory than observation of movements
in the same direction. In this study we ask which aspects of the observed motion are crucial to this interference effect by
comparing the efficacy of real human movement to that of sparse ‘point-light displays’. Eight participants performed whole
arm movements in both horizontal and vertical directions while observing either the experimenter or a virtual ‘point-light’
figure making arm movements in the same or in a different direction. Our results, however, failed to show an effect of ‘congruency’
of the observed movement on movement variability, regardless of whether a human actor or point-light figure was observed.
The findings are discussed, and future directions for studies of perception-action coupling are considered.
19.
Antony Bryant, Annals of Software Engineering, 2000, 10(1-4): 273-292
The term software engineering has had a problematic history since its appearance in the 1960s. At first seen as a euphemism
for programming, it has now come to encompass a wide range of activities. At its core lies the desire of software developers
to mimic ‘real’ engineers, and claim the status of an engineering discipline. Attempts to establish such a discipline, however,
confront pressing commercial demands for cheap and timely software products. This paper briefly examines some of the claims
for the engineering nature of software development, before moving to argue that the term ‘engineering’ itself carries with
it some unwanted baggage. This contributes to the intellectual quandary in which software development finds itself, and this
is exacerbated by many writers who rely upon and propagate a mythical view of ‘engineering.’ To complicate matters further,
our understanding of software development is grounded in a series of metaphors that highlight some key aspects of the field,
but push other important issues into the shadows. A re‐reading of Brooks' “No Silver Bullet” paper indicates that the metaphorical
bases of software development have been recognized for some time. They cannot simply be jettisoned, but perhaps they need
widening to incorporate others such as Brooks' concepts of growth and nurture of software. Two examples illustrate the role
played by metaphor in software development, and the paper concludes with the idea that perhaps we need to adopt a more critical
stance to the ‘engineering’ roots of our endeavours*.
*I should like to express my thanks to the anonymous reviewers of the first draft of this paper. Two of them offered useful
advice to enhance the finished version; the third gave vent to a perfectly valid concern, that the argument as stated could
have grave side effects if it was used as a point of leverage in arguments over ownership of the term ‘engineering.’ I understand
this concern and the potential financial implications that prompt its expression; but in the longer term I see this exercise
in clarification as a contribution to such discussions, inasmuch as it helps defuse the potency of terms such as ‘engineering.’
This revised version was published online in June 2006 with corrections to the Cover Date.
20.
A fast algorithm for computing moments of gray images based on NAM and extended shading approach
Computing moments on images is very important in the fields of image processing and pattern recognition. The non-symmetry
and anti-packing model (NAM) is a general pattern representation model that has been developed to help design some efficient
image representation methods. In this paper, inspired by the idea of computing moments based on the S-Tree coding (STC) representation
and by using the NAM and extended shading (NAMES) approach, we propose a fast algorithm for computing lower order moments
based on the NAMES representation, which takes O(N) time, where N is the number of NAM blocks. By taking the three standard gray test images ‘Lena’, ‘F16’, and ‘Peppers’ widely used in image processing as typical test objects, and by comparing our proposed
algorithm with the conventional algorithm and the popular STC representation algorithm for computing the lower order moments,
the theoretical and experimental results presented in this paper show that the average execution time improvement ratios of
the proposed NAMES approach over the STC approach, and also the conventional approach are 26.63%, and 82.57% respectively
while maintaining the image quality.
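The NAMES details are not reproduced here, but the reason a block representation yields O(N)-time moments can be sketched: for an axis-aligned block of constant gray value, the pixel sums inside each lower-order moment have closed forms, so every block contributes in O(1) regardless of its area. The block format below is a hypothetical simplification, not the paper’s actual NAM encoding:

```python
def block_moments(blocks):
    """Geometric moments m00, m10, m01 of an image represented as
    axis-aligned homogeneous blocks (x0, y0, width, height, gray).
    Uses the closed form sum_{x=x0}^{x0+n-1} x = n*x0 + n*(n-1)/2,
    so the cost is O(number of blocks), independent of block area."""
    def s1(a, n):                 # sum of coordinates a .. a+n-1
        return n * a + n * (n - 1) // 2

    m00 = m10 = m01 = 0
    for x0, y0, w, h, g in blocks:
        m00 += g * w * h          # sum of gray values
        m10 += g * s1(x0, w) * h  # sum of x * gray
        m01 += g * w * s1(y0, h)  # sum of y * gray
    return m00, m10, m01

# Two blocks: a 2x2 patch of gray 3 at the origin and a 1x2 strip of gray 5.
m00, m10, m01 = block_moments([(0, 0, 2, 2, 3), (2, 0, 1, 2, 5)])
```

A brute-force pixel loop gives the same (m00, m10, m01) but costs O(image area); the closed-form per-block sums are what buy the speedup claimed for block-based representations.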