Similar Documents
20 similar documents found (search time: 15 ms)
1.
Structured Reactive Controllers
Service robots, such as autonomous office couriers or robot tour guides, must be both reliable and efficient. This requires them to flexibly interleave their tasks, exploit opportunities, quickly plan their course of action, and, if necessary, revise their intended activities. In this paper, we show how structured reactive controllers (SRCs) satisfy these requirements. The novel feature of SRCs is that they employ and reason about plans that specify and synchronize concurrent percept-driven behavior. Powerful control abstractions enable SRCs to integrate physical action, perception, planning, and communication in a uniform framework and to apply fast but imperfect computational methods without sacrificing reliability and flexibility. Concurrent plans are represented in a transparent and modular form so that automatic planning processes can reason about the plans and revise them. We present experiments in which SRCs are used to control two autonomous mobile robots. In one of these experiments, an SRC controlled the course of action of a museum tour-guide robot that operated for thirteen days and more than ninety-four hours, completed 620 tours, and presented 2668 exhibits.

2.
In the robotics community, there exist implicit assumptions concerning the computational capabilities of robots. Two computational classes of robots emerge as focal points of recent research: robot ants and robot elephants. Ants have poor memory and communication capabilities, but are able to communicate using pheromones, in effect turning their work area into a shared memory. By comparison, elephants are computationally stronger, have large memory, and are equipped with strong sensing and communication capabilities. Unfortunately, not much is known about the relation between the capabilities of these models in terms of the tasks they can address. In this paper, we present formal models of both ants and elephants, and investigate whether one dominates the other. We present two algorithms: AntEater, which allows elephant robots to execute ant algorithms, and ElephantGun, which converts elephant algorithms (specified as Turing machines) into ant algorithms. By exploring the computational capabilities of these algorithms, we reach interesting conclusions regarding the computational power of both models.
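The abstract's key idea is that pheromone marks turn the ants' work area into a shared memory any robot can read or write. A minimal sketch of that idea (names and methods here are illustrative, not the paper's formal model):

```python
class PheromoneGrid:
    """Toy model of a work area used as shared memory: each cell holds a
    pheromone level that any ant robot can sense or add to, and levels
    decay over time via evaporation."""

    def __init__(self, width, height):
        # All cells start with no pheromone.
        self.level = {(x, y): 0.0 for x in range(width) for y in range(height)}

    def deposit(self, cell, amount=1.0):
        # An ant writes to the shared memory by dropping pheromone.
        self.level[cell] += amount

    def sense(self, cell):
        # Any ant can read the current level at a cell.
        return self.level[cell]

    def evaporate(self, rate=0.1):
        # Pheromone decays, so stale information fades automatically.
        for cell in self.level:
            self.level[cell] *= (1.0 - rate)
```

One ant's deposit is immediately visible to every other ant that later senses the same cell, which is what lets memoryless robots coordinate.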

3.
As a comparatively novel but increasingly pervasive organizational arrangement, call centres have been a focus for much recent research. This paper identifies lessons for organizational and technological design through an examination of call centres and ‘classification work’ – explicating what Star [1992, Systems/Practice vol. 5, pp. 395–410] terms the ‘open black box’. Classification is a central means by which organizations standardize procedure, assess productivity, develop services and re-organize their business. Nevertheless, as Bowker and Star [1999, Sorting Things Out: Classification and Its Consequences. Cambridge MA: MIT Press] have pointed out, we know relatively little about the work that goes into making classification schema what they are. We will suggest that a focus on classification ‘work’ in this context is a useful exemplar of the need for some kind of ‘meta-analysis’ in ethnographic work also. If standardization is a major ambition for organizations under late capitalism, then comparison might be seen as a related but as-yet unrealized one for ethnographers. In this paper, we attempt an initial cut at a comparative approach, focusing on classification because it seemed to be the primary issue that emerged when we compared studies. Moreover, if technology is the principal means through which procedure and practice is implemented and if, as we believe, classifications are becoming ever more explicitly embedded within it (for instance with the development of so-called ‘semantic web’ and associated approaches to ontology-based design), then there is clearly a case for identifying some themes which might underpin classification work in a given domain.

4.
The aim of this article is to investigate whether choosing the appropriate referring expression requires taking into account the hearer’s perspective, as is predicted under some versions of bidirectional Optimality Theory but is unexpected under other versions. We did this by comparing the results of 25 young and 25 elderly adults on an elicitation task based on eight different picture stories, and a comprehension task based on eight similar written stories. With respect to the elicitation task, we found that elderly adults produce pronouns significantly more often than young adults when referring to the old topic in the presence of a new topic. With respect to the comprehension task, no significant differences were found between elderly and young adults. These results support the hypothesis that speakers optimize bidirectionally and take into account hearers when selecting a referring expression. If the use of a pronoun will lead to an unintended interpretation by the hearer, the speaker will use an unambiguous definite noun phrase instead. Because elderly adults are more limited in their processing capacities, as is indicated by their smaller working memory capacity, as speakers they will not always be able to reason about the hearer’s choices. As a result, they frequently produce non-recoverable pronouns.

5.
It is remarkable how much robotics research is promoted by appealing to the idea that the only way to deal with a looming demographic crisis is to develop robots to look after older persons. This paper surveys and assesses the claims made on behalf of robots in relation to their capacity to meet the needs of older persons. We consider each of the roles that has been suggested for robots in aged care and attempt to evaluate how successful robots might be in these roles. We do so from the perspective of writers concerned primarily with the quality of aged care, paying particular attention to the social and ethical implications of the introduction of robots, rather than from the perspective of robotics, engineering, or computer science. We emphasize the importance of the social and emotional needs of older persons—which, we argue, robots are incapable of meeting—in almost any task involved in their care. Even if robots were to become capable of filling some service roles in the aged-care sector, economic pressures on the sector would most likely ensure that the result was a decrease in the amount of human contact experienced by older persons being cared for, which itself would be detrimental to their well-being. This means that the prospects for the ethical use of robots in the aged-care sector are far more limited than they first appear. More controversially, we believe that it is not only misguided, but actually unethical, to attempt to substitute robot simulacra for genuine social interaction. A subsidiary goal of this paper is to draw attention to the discourse about aged care and robotics and locate it in the context of broader social attitudes towards older persons. We conclude by proposing a deliberative process involving older persons as a test for the ethics of the use of robots in aged care. We dedicate this paper to the memory of Jean Woodroffe, whose strength and courage at the end of her life journey inspired the authors’ interest in aged-care issues.

6.
Many have bowed before the recently acquired powers of ‘new technologies’. However, in the shift from tekhnē to tekhnologia, it seems we have lost human values. These values are communicative in nature, as technological progress has placed barriers like distance, web pages and ‘miscellaneous extras’ between individuals. Certain values, like the interpersonal pleasures of rendering service, have been lost as their domain of predilection has for many become fully commercially oriented, dominated by the cadence of profitability. Though the popular cultures of the artificial have surged forth to deliver us from the twentieth century, they have enabled some very superfluous dreaming—Man has succumbed to the Godly role of simulating himself and creating other beings. Communication is replaced by machines, services are rendered via many automated devices, procreation has entered the public sphere, robots and entertainment agents educate our youth, and mesmerising screen-integrating ‘forms of intelligence’ even think for us. As such, this so-called culture threatens the very values Man constructed in the nineteenth and twentieth centuries to guide himself into the future. But what if the phenomena mentioned simply reflect our new values? The author presents an investigation into this cultural shift and its impact on human practices with regard to the mind and the body, and evokes some pros and cons of generally accepting the ‘Culture of the Artificial’.

7.
In Darwin’s Dangerous Idea, Daniel Dennett claims that evolution is algorithmic. On Dennett’s analysis, evolutionary processes are trivially algorithmic because he assumes that all natural processes are algorithmic. I will argue that there are more robust ways to understand algorithmic processes that make the claim that evolution is algorithmic empirical and not conceptual. While laws of nature can be seen as compression algorithms of information about the world, it does not follow logically that they are implemented as algorithms by physical processes. For that to be true, the processes have to be part of computational systems. The basic difference between mere simulation and real computing is having proper causal structure. I will show what kind of requirements this poses for natural evolutionary processes if they are to be computational.

8.
Cognitive capabilities such as perception, reasoning, learning, and planning turn technical systems into systems that “know what they are doing.” Starting from the human brain, the Cluster of Excellence “CoTeSys” investigates cognition for technical systems such as vehicles, robots, and factories. Technical systems that are cognitive will be much easier to interact and cooperate with, and will be more robust, flexible, and efficient. For understanding their environment and interacting with humans, the cognitive system’s most important sense is the visual one. The talk presents recent results using the visual sensor in building environment models, self-localization of autonomous moving robots, navigation, simultaneous tracking of groups of humans and robots, face detection and evaluation of gaze direction and facial expression, and emotional communication between humans and robots.

9.
Much of the behaviour of an interactive system is determined by its user population. This paper describes how assumptions about the user can be brought into system models in order to reason about their behaviour. We describe a system model containing reasonable assumptions about the user as being ‘cognitively plausible’. Before asserting the plausibility of a model, however, we must first be able to make the assumptions made in that model inspectable. There is a tension between the inspectability of user assumptions and the tractability of models; inspectable models tend not to be very tractable and vice versa. We describe how we can get round this tension by deriving tractable models from explicit user assumptions. The resulting models may not of themselves be very inspectable to human-factors workers, but the process by which they are derived is inspectable. Hence we claim that we can have both tractability and inspectability. We exemplify our claims using a simple cognitive model and ‘Meeting Maker’, an interactive electronic diary system. Received March 2000 / Accepted in revised form July 2000

10.
Efficient and inefficient ant coverage methods
Ant robots are simple creatures with limited sensing and computational capabilities. They have the advantage that they are easy to program and cheap to build. This makes it feasible to deploy groups of ant robots and take advantage of the resulting fault tolerance and parallelism. We study, both theoretically and in simulation, the behavior of ant robots for one-time or repeated coverage of terrain, as required for lawn mowing, mine sweeping, and surveillance. Ant robots cannot use conventional planning methods due to their limited sensing and computational capabilities. To overcome these limitations, we study navigation methods that are based on real-time (heuristic) search and leave markings in the terrain, similar to what real ants do. These markings can be sensed by all ant robots and allow them to cover terrain even if they do not communicate with each other except via the markings, do not have any kind of memory, do not know the terrain, cannot maintain maps of the terrain, nor plan complete paths. The ant robots do not even need to be localized, which completely eliminates solving difficult and time-consuming localization problems. We study two simple real-time search methods that differ only in how the markings are updated. We show experimentally that both real-time search methods robustly cover terrain even if the ant robots are moved without realizing this (say, by people running into them), some ant robots fail, and some markings get destroyed. Both real-time search methods are algorithmically similar, and our experimental results indicate that their cover time is similar in some terrains. Our analysis is therefore surprising. 
We show that the cover time of ant robots that use one of the real-time search methods is guaranteed to be polynomial in the number of locations, whereas the cover time of ant robots that use the other real-time search method can be exponential in (the square root of) the number of locations even in simple terrains that correspond to (planar) undirected trees. This revised version was published online in August 2006 with corrections to the Cover Date.
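Marking-based coverage of the kind this abstract describes can be sketched with a simple rule resembling "node counting": move to the neighbouring cell with the smallest marking, then increment the marking of the cell just left. This is an illustrative toy for a single robot on a grid, not the paper's actual method or terrains:

```python
import random

def cover_time(width, height, seed=0):
    """Simulate one ant robot covering a grid with a node-counting-style
    rule: read neighbours' markings, move to a least-marked neighbour,
    and bump the marking of the departed cell. The robot keeps no map
    and no memory beyond the markings in the terrain."""
    random.seed(seed)
    marks = {(x, y): 0 for x in range(width) for y in range(height)}
    visited = set()
    pos = (0, 0)
    steps = 0
    while len(visited) < width * height:
        visited.add(pos)
        x, y = pos
        # Neighbours inside the grid (4-connected).
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) in marks]
        lowest = min(marks[n] for n in nbrs)
        marks[pos] += 1  # leave a marking in the cell being vacated
        pos = random.choice([n for n in nbrs if marks[n] == lowest])
        steps += 1
    return steps
```

Because the markings live in the terrain rather than in the robot, moving the robot or restarting it does not invalidate the information gathered so far.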

11.
This paper presents a methodology for the coordination of multiple robotic agents moving from one location to another in an environment embedded with a network of agents placed at strategic locations such as intersections. These intersection agents communicate with robotic agents and also with each other to route robots in such a way as to minimize congestion, resulting in a continuous flow of robot traffic. A robot’s path to its destination is computed by the network (in this paper, ‘Network’ refers to the collection of ‘Network agents’ operating at the intersections) in terms of the next waypoints to reach. The intersection agents are capable of identifying robots in their proximity based on signal strength. An intersection agent controls the flow of agent traffic around it with the help of the data it collects from the messages received from the robots and other surrounding intersection agents. The congestion of traffic is reduced using a two-layered hierarchical strategy. The primary layer operates at the intersection to reduce the time delay of robots crossing it. The secondary layer maintains coordination between intersection agents and routes traffic such that delay is reduced through effective load balancing. The objective at the primary level, to reduce congestion at the intersection, is achieved by assigning priorities to pathways leading to the intersection based on robot traffic density. At the secondary level, load balancing of robots over multiple intersections is achieved through coordination between intersection agents by communicating robot densities in different pathways. Extensive comparisons show the performance gain of the current method over existing ones. Theoretical analysis, in addition to simulation, shows the advantages of load-balanced traffic flow over uncoordinated allotment of robotic agents to pathways. Transferring the burden of coordination to the network releases more computational power for the robots to engage in critical assistive activities.

12.
Ugo Pagallo. AI & Society, 2011, 26(4): 347–354
This paper adopts a legal perspective to counter some exaggerations in today’s debate on the social understanding of robotics. According to a long and well-established tradition, there is in fact a relatively strong consensus among lawyers about key notions such as agency and liability in the current use of robots. However, dealing with a field in rapid evolution, we need to rethink some basic tenets of the contemporary legal framework. In particular, the time has come for lawyers to acknowledge that some acts of robots should be considered a new source of legal responsibility for others’ behaviour.

13.
New methodologies are needed for modeling physically cooperating mobile robots in order to systematically design and analyze such systems. In this context, we present a method called the ‘P-robot Method’, under which we introduce entities called p-robots at the environmental contact points and treat the linked mobile robots as a multiple degree-of-freedom object, comprising an articulated open kinematic chain, which is manipulated by the p-robots. The method is suitable to address three critical aspects of physical cooperation: a) analysis of environmental contacts, b) utilization of redundancy, and c) exploitation of system dynamics. Dynamics of the open chain are computed independent of the constraints, thus allowing the same set of equations to be used as the constraint conditions change, and simplifying the addition of multiple robots to the chain. The decoupling achieved through constraining the p-robots facilitates the analysis of kinematic as well as force constraints. We introduce the idea of a ‘tipping cone’, similar to a standard friction cone, to test whether forces on the robots cause undesired tipping. We have employed the P-robot Method for the static as well as dynamic analysis of a cooperative behavior involving two robots. The method is generalizable to analyze cooperative behaviors with any number of robots. We demonstrate that redundant actuation, achieved by adding a third robot to the cooperation, can help in satisfying the contact constraints. The P-robot Method can be useful to analyze other interesting multi-body robotic systems as well.

14.
Intelligent vehicle systems have introduced the need for designers to consider user preferences so as to make several kinds of driving features as driver-friendly as possible. This requirement raises the problem of how to suitably analyse human performance so that it can be incorporated into automatic driving tasks. The framework of the present work is an adaptive cruise control with stop-and-go features for use in an urban setting. In such a context, one of the main requirements is to be able to tune the control strategy to the driver’s style. In order to do this, a number of different drivers were studied through the statistical analysis of their behaviour while driving. The aim of this analysis is to decide whether it is possible to characterize a driver’s behaviour, which signals are suitable for this task, and which parameters can be used to describe a driver’s style. An assignment procedure is then introduced in order to classify a driver’s behaviour within the stop-and-go task being considered. Finally, the findings were analysed subjectively and compared with a statistically objective analysis.

15.
Information systems are the glue between people and computers. Both the social and business environments are in a continual, some might say chaotic, state of change while computer hardware continues to double its performance about every 18 months. This presents a major challenge for information system developers. The term user-friendly is an old one, but one which has come to take on a multitude of meanings. However, in today’s context we might well take a user-friendly system to be one where the technology fits the user’s cognitive models of the activity in hand. This article looks at the relationship between information systems and the changing demands of their users as the underlying theme for the current issue of Cognition, Technology and Work. People, both as individuals and organisations, change. The functionalist viewpoint, which attempts to freeze and inhibit such change, has failed systems developers on numerous occasions. Responding to, and building on, change in the social environment is still a significant research issue for information systems specialists who need to be able to create living information systems.

16.
Service robot for the elderly
In recent years, service robots have received a lot of attention from both industry and academia. They are individually designed to perform tasks in an unstructured environment, working with or assisting humans [1], [2]. Such robots thus have to be able to actively interact with potential users in their surroundings and to appropriately offer their services. To date, a number of service robots have been introduced, such as vacuum cleaning robots, home security robots, entertainment robots, and guide robots [2]–[4]. In particular, robots that are able to assist the elderly are becoming very important with a dramatic increase in the aging population and the costs of elderly care [1], [2], [5]. Several projects have been initiated in some developed countries to develop service robots, especially robotic aid systems [2], [5].

17.
In order to satisfy the need for enhanced user affinity for robots, we are attempting to give robots a “consciousness” such as that identified in humans and animals. We developed software to control a robot’s actions, including emotion, by introducing an evaluation function for action choice into a hierarchical structure model, thereby connecting the robot’s consciousness with its actions. We named this process Consciousness-Based Architecture (CBA). However, it is difficult to change the robot’s consciousness using the CBA model alone; some motivation is required to autonomously induce and change consciousness and action. Therefore, a motivation model was developed for this purpose and combined with CBA. To evaluate CBA together with the motivation model, a robot arm (Conbe-I) was developed with a small web camera built into its fingers. CBA was installed on the Conbe-I, and the autonomous actions it performed to catch an object were examined. The motivation model was devised to describe the robot’s interest in a target object and its desires. To build this model, we studied the action of dopamine, which adds activity to the robot in conjunction with the incentive to act. This paper describes the expression of emotion by a robot incorporating this motivation model into the Conbe-I. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.

18.
The current trends in the robotics field have led to the development of large-scale multiple robot systems, which are deployed for complex missions. The robots in the system can communicate and interact with each other for resource sharing and task processing. Many such systems fail despite the availability of necessary resources. The major reason for this is their poor coordination mechanism. Task planning, which involves task decomposition and task allocation, is paramount in the design of coordination and cooperation strategies of multiple robot systems. The task allocation mechanism allocates the tasks in a mission to the robots by maximizing the overall expected performance, thereby reducing the total allocation cost for the team. In this paper, we formulate a heuristic search-based task allocation algorithm for task processing in a heterogeneous multiple robot system, maximizing efficiency in terms of both communication and processing cost. We assume a set of decomposed tasks of a mission, which needs to be allocated to the robots. The near-optimal allocation schemes are found using the proposed peer structure algorithm for the given problem, where the number of tasks exceeds the number of robots in the system. The cost function is the sum of the static overhead cost of robots, the assignment cost, and the communication cost between dependent tasks if they are assigned to different robots. Experiments are performed to verify the effectiveness of the algorithm by comparing it with existing methods in terms of computational time and quality of solution. The experimental results show that the proposed algorithm performs best under different problem scales. This indicates that the algorithm can be scaled to larger systems and can work for dynamic multiple robot systems.
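The cost function described in the abstract (static robot overhead + assignment cost + communication cost for dependent tasks placed on different robots) is easy to make concrete. The following is a hedged sketch with made-up names and data, not the paper's algorithm or notation:

```python
def allocation_cost(assignment, overhead, assign_cost, comm_cost, deps):
    """Total cost of a task-to-robot assignment, summing:
    - static overhead of each robot that is actually used,
    - per-task assignment cost, and
    - communication cost for each dependent task pair split
      across different robots."""
    used_robots = set(assignment.values())
    total = sum(overhead[r] for r in used_robots)
    total += sum(assign_cost[(task, robot)]
                 for task, robot in assignment.items())
    total += sum(comm_cost[(a, b)]
                 for a, b in deps
                 if assignment[a] != assignment[b])
    return total

# Illustrative example: three tasks, two robots.
overhead = {'r1': 2, 'r2': 3}
assign_cost = {('t1', 'r1'): 1, ('t2', 'r2'): 1, ('t3', 'r1'): 2}
comm_cost = {('t1', 't2'): 4, ('t1', 't3'): 5}
deps = [('t1', 't2'), ('t1', 't3')]
assignment = {'t1': 'r1', 't2': 'r2', 't3': 'r1'}
total = allocation_cost(assignment, overhead, assign_cost, comm_cost, deps)
```

A search-based allocator would evaluate candidate assignments with such a function and keep the cheapest; here t1 and t3 share a robot, so only the t1–t2 dependency incurs communication cost.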

19.
Computational models of emotions have been thriving and increasingly popular since the 1990s. Such models used to be concerned with the emotions of individual agents when they interact with other agents. Out of the array of models for the emotions, we are going to devote special attention to the approach in Adamatzky’s Dynamics of Crowd-Minds. The reason it stands out is that it considers the crowd, rather than the individual agent. It fits within computational intelligence. It works by mathematical simulation on a crowd of simple artificial agents: by letting the computer program run, the agents evolve, and crowd behaviour emerges. Adamatzky’s purpose is to give an account of the emergence of allegedly “irrational” behaviour. This is not without problems, as the irrational to one person may seem entirely rational to another, and this in turn is an insight that, in the history of crowd psychology, has indeed affected the competition among theories of crowd dynamics. Quite importantly, Adamatzky’s book argues for the transition from individual agencies to a crowd’s or a mob’s coalesced mind as such, and at any rate for the coalesced crowd’s agency.

20.
One reason that researchers may wish to demonstrate that an external software quality attribute can be measured consistently is so that they can validate a prediction system for the attribute. However, attempts at validating prediction systems for external subjective quality attributes have tended to rely on experts indicating that the values provided by the prediction systems informally agree with the experts’ intuition about the attribute. These attempts are undertaken without a pre-defined scale on which it is known that the attribute can be measured consistently. Consequently, a valid unbiased estimate of the predictive capability of the prediction system cannot be given because the experts’ measurement process is not independent of the prediction system’s values. Usually, no justification is given for not checking whether the experts can measure the attribute consistently. It seems to be assumed that subjective measurement isn’t proper measurement, that subjective measurement cannot be quantified, or that no one knows the true values of the attributes anyway and they cannot be estimated. However, even though the classification of software systems’ or software artefacts’ quality attributes is subjective, it is possible to quantify experts’ measurements in terms of conditional probabilities. It is then possible, using a statistical approach, to assess formally whether the experts’ measurements can be considered consistent. If the measurements are consistent, it is also possible to identify estimates of the true values, which are independent of the prediction system. These values can then be used to assess the predictive capability of the prediction system. In this paper we use Bayesian inference, Markov chain Monte Carlo simulation and missing data imputation to develop statistical tests for consistent measurement of subjective ordinal scale attributes.
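The abstract's central move is quantifying experts' subjective ratings as conditional probabilities. As a minimal illustration of that first step (the paper's actual method goes much further, using Bayesian inference and MCMC), one can estimate the empirical distribution of one expert's ordinal ratings conditional on another's:

```python
from collections import Counter

def conditional_probs(ratings_a, ratings_b):
    """Empirical conditional probabilities P(expert B rates j | expert A
    rates i), computed from paired ratings of the same items. A sharply
    diagonal table suggests the two experts measure consistently."""
    pairs = Counter(zip(ratings_a, ratings_b))   # joint counts (i, j)
    totals = Counter(ratings_a)                  # marginal counts for A
    return {(i, j): count / totals[i] for (i, j), count in pairs.items()}

# Hypothetical ordinal ratings (1 = low quality, 2 = high) of four artefacts.
expert_a = [1, 1, 2, 2]
expert_b = [1, 2, 2, 2]
probs = conditional_probs(expert_a, expert_b)
```

Here the experts agree fully on items A rates 2 but split on items A rates 1, which is exactly the kind of pattern a formal consistency test would then evaluate.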
