20 similar articles found; search took 31 ms.
1.
A multiagent framework for coordinated parallel problem solving (Total citations: 1; self-citations: 1; others: 0)
Today’s organizations, under increasing pressure to be effective and facing a growing need to handle complex tasks beyond a single individual’s capabilities, need technological support in managing tasks that involve highly distributed and heterogeneous information sources and several actors. This paper describes CoPSF, a multiagent middleware system that simplifies the development of coordinated problem-solving applications while ensuring standards compliance through a set of system services and agents. CoPSF hosts and serves multiple concurrent problem-solving teams, contributing both to limiting communication overhead and to reducing redundant work across teams and organizations. The framework employs (i) an interleaved task decomposition and allocation approach, (ii) a mechanism for coordinating agents’ work, and (iii) a mechanism that enables synergy between parallel teams.
2.
RoboCup Rescue as multiagent task allocation among teams: experiments with task interdependencies (Total citations: 1; self-citations: 1; others: 0)
Paulo Roberto Ferreira Jr., Fernando dos Santos, Ana L. C. Bazzan, Daniel Epstein, Samuel J. Waskow 《Autonomous Agents and Multi-Agent Systems》2010,20(3):421-443
This paper addresses distributed task allocation among teams of agents in a RoboCup Rescue scenario. We are primarily concerned
with testing different mechanisms that formalize issues underlying implicit coordination among teams of agents. These mechanisms
are developed, implemented, and evaluated using two algorithms: Swarm-GAP and LA-DCOP. The latter bases task allocation on
a comparison between an agent’s capability to perform a task and the capability demanded by this task. Swarm-GAP is a probabilistic
approach in which an agent selects a task using a model inspired by task allocation among social insects. Both algorithms
were also compared to another one that allocates tasks in a greedy way. Departing from previous works that tackle task allocation
in the rescue scenario only among fire brigades, here we consider the various actors in the RoboCup Rescue, a step forward
in the direction of realizing the concept of extreme teams. Tasks are allocated to teams of agents without explicit negotiation
and using only local information. Our results show that the performance of Swarm-GAP and LA-DCOP are similar and that they
outperform a greedy strategy. Also, it is possible to see that using more sophisticated mechanisms for task selection does
pay off in terms of score.
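The insect-inspired selection rule that Swarm-GAP builds on can be sketched with the classic response-threshold model from the social-insect literature (a minimal illustration; the function and parameter names are ours, not the paper's implementation):

```python
import random

def response_probability(stimulus: float, threshold: float) -> float:
    """Classic response-threshold rule: an agent with a low threshold for a
    task type is increasingly likely to engage as the stimulus grows."""
    s2 = stimulus ** 2
    return s2 / (s2 + threshold ** 2)

def pick_task(tasks, thresholds, rng=random.random):
    """Probabilistically select a task for an agent using only local
    information; returns None if the agent engages with nothing.
    tasks: dict task -> stimulus; thresholds: dict task -> threshold."""
    for task, stimulus in tasks.items():
        if rng() < response_probability(stimulus, thresholds[task]):
            return task
    return None
```

With this rule, no explicit negotiation is needed: each agent decides independently, and heterogeneous thresholds spread the team across tasks.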
3.
Liesbeth Flobbe, Rineke Verbrugge, Petra Hendriks, Irene Krämer 《Journal of Logic, Language and Information》2008,17(4):417-442
Many social situations require a mental model of the knowledge, beliefs, goals, and intentions of others: a Theory of Mind
(ToM). If a person can reason about other people’s beliefs about his own beliefs or intentions, he is demonstrating second-order
ToM reasoning. A standard task to test second-order ToM reasoning is the second-order false belief task. A different approach
to investigating ToM reasoning is through its application in a strategic game. Another task that is believed to involve the
application of second-order ToM is the comprehension of sentences that the hearer can only understand by considering the speaker’s
alternatives. In this study we tested 40 children between 8 and 10 years old and 27 adult controls on (adaptations of) the
three tasks mentioned above: the false belief task, a strategic game, and a sentence comprehension task. The results show
interesting differences between adults and children, between the three tasks, and between this study and previous research.
4.
5.
Distributed Cognition in an Emergency Co-ordination Center (Total citations: 1; self-citations: 1; others: 0)
*Formerly at the Department of Communication Studies, Linköping University, Sweden. Most of this work was conducted during
the author’s employment at the Department of Communication Studies. Recent research concerning the control of complex systems
stresses the systemic character of the work of the controlling system, including the number of people and artefacts as well
as the environment. This study adds to the growing body of knowledge by focusing on the internal working of such a system.
Our vantage point is the theoretical framework of distributed cognition. Through a field study of an emergency co-ordination
centre we try to demonstrate how the team’s cognitive tasks, to assess an event and to dispatch adequate resources, are achieved
by mutual awareness, joint situation assessment, and the co-ordinated use of the technology and the physical arrangement of
the co-ordination room.
6.
Towards understanding the relationship between team climate and software quality—a quasi-experimental study (Total citations: 1; self-citations: 1; others: 0)
This paper describes an empirical study that examined the work climate within software development teams. The question was
whether the team climate in software developer teams has any relation to software product quality. We define team climate
as the shared perceptions of the team’s work procedures and practices. The team climate factors examined were West and Anderson’s
participative safety, support for innovation, team vision and task orientation. These four factors were measured before the
project using the Team Selection Inventory (TSI) test to establish subject climate preferences, as well as during and after
the project using the Team Climate Inventory (TCI) test, which establishes the subject’s perceptions of the climate. In this
quasi-experimental study, data were collected from a sample of 35 three-member developer teams in an academic setting. These
teams were formed at random and their members were blind to the quasi-experimental conditions and hypotheses. All teams used
an adaptation of extreme programming (XP) to the students’ environment to develop the same software system. We found that
high team vision preferences and high participative safety perceptions of the team were significantly related to better software.
Additionally, the results show a positive relationship between software quality and how the perceived climate compares with preferences (better than preferred, as preferred, or worse than preferred) for two of the teamwork climate factors: participative safety and team vision. It therefore seems important to track team climate, at both the organization and team level, as one (of many) indicators of the quality of the software to be delivered.
Natalia Juristo
7.
Mark A. Neerincx 《Personal and Ubiquitous Computing》2011,15(5):445-456
Space crews need excellent cognitive support to perform nominal and off-nominal actions. This paper presents a coherent cognitive engineering methodology for the design of such support, which may be used to establish adequate usability, context-specific support integrated into astronauts’ task performance, and/or electronic partners that enhance the resilience of human–machine teams. It comprises (a) usability guidelines, measures and methods, (b) a general process guide that integrates
task procedure design into user interface design and a software framework to implement such support and (c) theories, methods
and tools to analyse, model and test future human–machine collaborations in space. In empirical studies, the knowledge base
and tools for crew support are continuously being extended, refined and maintained.
8.
Eva Jensen 《Cognition, Technology & Work》2009,11(2):103-118
Sensemaking, understanding how to deal with the situation at hand, has a central role in military command. This paper presents
a method for measuring sensemaking performance in command teams during military planning. The method was tested in two experiments
with Army captains serving as participants. The task was to produce parts of a brigade order within 6 h. The participants
worked in teams of 5–7 individuals, 16 teams in the first experiment and 8 teams in the second experiment, with one team member
acting as brigade commander. The independent variables were amount of information and type of communication, respectively.
The characteristics of each team’s sensemaking process were assessed from video recordings of their planning sessions. The
quality of their plans was judged by military experts. Although plan quality was unaffected by the experimental manipulations,
the quality of the sensemaking process was related to the quality of the plans.
9.
Pamela J. Hopp-Levine, C. A. P. Smith, Benjamin A. Clegg, Eric D. Heggestad 《Cognition, Technology & Work》2006,8(2):137-145
Tactile cuing has been suggested as a method of interruption management for busy visual environments. This study examined the effectiveness of tactile cues as an interruption management strategy in a multi-tasking environment. Sixty-four participants completed a continuous aircraft monitoring task with periodic interruptions of a discrete gauge memory task. Participants were randomly assigned to two groups; one group had to remember to monitor for interruptions while the other group received tactile cues indicating an interruption’s arrival and location. As expected, the cued participants evidenced superior performance on both tasks. The results are consistent with the notion that tactile cues transform the resource-intensive, time-based task of remembering to check for interruptions into a simpler, event-based task, where cues assume a portion of the workload, permitting the application of valuable resources to other task demands. This study is discussed in the context of multiple resource theory and has practical implications for systems design in environments consisting of multiple, visual tasks and time-sensitive information.
10.
Robotic technology is quickly evolving allowing robots to perform more complex tasks in less structured environments with
more flexibility and autonomy. Heterogeneous multi-robot teams are more common as the specialized abilities of individual
robots are used in concert to achieve tasks more effectively and efficiently. An important area of research is the use of
robot teams to perform modular assemblies. To this end, this paper analyzed the relative performance of two robots with different
morphologies and attributes in performing an assembly task autonomously under different coordination schemes using force sensing
through a control basis approach. A rigid, point-to-point manipulator and a dual-armed pneumatically actuated humanoid robot
performed the assembly of parts under a traditional “push-hold” coordination scheme and a human-mimicked “push-push” scheme.
The study revealed that the scheme with the higher level of cooperation—the “push-push” scheme—performed assemblies faster and more reliably, lowering the likelihood of stiction phenomena, jamming, and wedging. The study also revealed that in “push-hold” schemes, industrial robots are better pushers and compliant robots are better holders. The results of our study affirm the use of heterogeneous robots to perform hard-to-do assemblies and also support having humans act as holders when working in concert with a robot assistant for insertion tasks.
11.
Intelligent vehicle systems have introduced the need for designers to consider user preferences so as to make several kinds
of driving features as driver friendly as possible. This requirement raises the problem of how to suitably analyse human performance so that it can be reproduced in automatic driving tasks. The framework of the present work is an adaptive cruise control with
stop and go features for use in an urban setting. In such a context, one of the main requirements is to be able to tune the
control strategy to the driver’s style. In order to do this, a number of different drivers were studied through the statistical
analysis of their behaviour while driving. The aim of this analysis is to decide whether it is possible to determine a driver’s
behaviour, what signals are suitable for this task and which parameters can be used to describe a driver’s style. An assignment
procedure is then introduced in order to classify a driver’s behaviour within the stop and go task being considered. Finally, the findings were assessed subjectively and compared with a statistically objective analysis.
12.
Gary Klein 《Cognition, Technology & Work》2006,8(4):227-236
Problem detection in operational settings requires expertise and vigilance. It is a difficult task for individuals. If a problem is not detected early enough, the opportunity to avoid or reduce its consequences may be lost. Teams have many strengths that individuals lack. The team can attend to a wider range of cues than any of the individuals can. They can offer a wider range of expertise, represent different perspectives, reorganize their efforts to adapt to situational demands, and work in parallel. These should improve problem detection. However, teams can also fall victim to a wide range of barriers that may reduce their alertness, mask early problem indicators, confound attempts to make sense of initial data, and restrict their range of actions. Therefore, teams may not necessarily be superior to individuals at problem detection. The capability of a team to detect problems may be a useful measure of the team’s maturity and competence.
13.
We first define the basic notions of local and non-local tasks for distributed systems. Intuitively, a task is local if, in a system with no failures, each process can compute its
output value locally by applying some local function on its own input value (so the output value of each process depends only
on the process’s own input value, not on the input values of the other processes); a task is non-local otherwise. All the interesting
distributed tasks, including all those that have been investigated in the literature (e.g., consensus, set agreement, renaming,
atomic commit, etc.) are non-local. In this paper we consider non-local tasks and determine the minimum information about
failures that is necessary to solve such tasks in message-passing distributed systems. As part of this work, we also introduce weak set agreement—a natural weakening of set agreement—and show that, in some precise sense, it is the weakest non-local task in message-passing systems.
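The local/non-local distinction can be illustrated with a toy sketch (ours, not the paper's formal model): a local task computes each output from the process's own input alone, whereas consensus cannot be written that way because every output must agree on a single input value:

```python
def local_parity_task(my_input):
    """Local task: each process's output is a function of its own input
    only, so it needs no communication in a failure-free run."""
    return my_input % 2

def consensus_outputs(inputs):
    """Non-local task: all processes must decide the same value, and the
    decision must be some process's input (here: the minimum). No per-process
    local function can achieve this without knowing the other inputs."""
    decision = min(inputs)
    return [decision] * len(inputs)
```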
14.
15.
This study assesses the Shodan survey as an instrument for measuring an individual’s or a team’s adherence to the extreme
programming (XP) methodology. Specifically, we hypothesize that the adherence to the XP methodology is not a uni-dimensional
construct as presented by the Shodan survey but a multidimensional one reflecting dimensions that are theoretically grounded
in the XP literature. Using data from software engineers in the University of Sheffield’s Software Engineering Observatory,
two different models were thus tested and compared using confirmatory factor analysis: a uni-dimensional model and a four-dimensional
model. We also present an exploratory analysis of how these four dimensions affect students’ grades. The results indicate
that the four-dimensional model fits the data better than the uni-dimensional one. Nevertheless, the analysis also uncovered
flaws with the Shodan survey in terms of the reliability of the different dimensions. The exploratory analysis revealed that some of the XP dimensions had a linear or curvilinear relationship with grades. By validating the four-dimensional model of the Shodan survey, this study highlights how psychometric techniques can be used to develop software engineering metrics of fidelity to agile or other software engineering methods.
16.
Gideon Juve, Ewa Deelman, G. Bruce Berriman, Benjamin P. Berman, Philip Maechling 《Journal of Grid Computing》2012,10(1):5-21
Workflows are used to orchestrate data-intensive applications in many different scientific domains. Workflow applications
typically communicate data between processing steps using intermediate files. When tasks are distributed, these files are
either transferred from one computational node to another, or accessed through a shared storage system. As a result, the efficient
management of data is a key factor in achieving good performance for workflow applications in distributed environments. In
this paper we investigate some of the ways in which data can be managed for workflows in the cloud. We ran experiments using
three typical workflow applications on Amazon’s EC2 cloud computing platform. We discuss the various storage and file systems
we used, describe the issues and problems we encountered deploying them on EC2, and analyze the resulting performance and
cost of the workflows.
17.
James McDonald 《Empirical Software Engineering》2005,10(2):219-234
Data from 135 teams that have participated in a software project planning exercise are analyzed to determine the relationship between team experience and each team’s estimate of total project cost. The analysis shows that cost estimates are dependent upon two kinds of team experience: (1) the average experience of the members of each team and (2) whether or not any members of the team have similar project experience. It is shown that if no members of a planning team have had similar project experience, then the estimate of cost is correlated with average team experience, with teams having greater average team experience producing higher total cost estimates. If at least one member of the planning team has had similar project experience, then there is a weaker relationship between average team experience and cost, and cost estimates produced by those teams with similar project experience are close to those produced by teams with the greatest average team experience. A qualitative examination of the project plans produced by all teams indicates that the primary reasons that teams with less experience of either type produce lower cost estimates are that they have failed to include some tasks that are included by more experienced teams, and that they have estimated shorter task durations than have the more experienced teams.
18.
Many of today’s complex computer applications are being modeled and constructed using the principles inherent to real-time
distributed object systems. In response to this demand, the Object Management Group’s (OMG) Real-Time Special Interest Group
(RT SIG) has worked to extend the Common Object Request Broker Architecture (CORBA) standard to include real-time specifications.
This group’s most recent efforts focus on the requirements of dynamic distributed real-time systems. One open problem in this
area is resource access synchronization for tasks employing dynamic priority scheduling.
This paper presents two resource synchronization protocols that meet the requirements of dynamic distributed real-time systems
as specified by Dynamic Scheduling Real-Time CORBA 2.0 (DSRT CORBA). The proposed protocols can be applied to both Earliest
Deadline First (EDF) and Least Laxity First (LLF) dynamic scheduling algorithms, allow distributed nested critical sections,
and avoid unnecessary runtime overhead. These protocols are based on (i) distributed resource preclaiming that allocates resources
in the message-based distributed system for deadlock prevention, (ii) distributed priority inheritance that bounds local and
remote priority inversion, and (iii) distributed preemption ceilings that delimit the priority inversion time further.
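For readers unfamiliar with the two schedulers named above, here is a minimal sketch of the EDF and LLF selection rules (illustrative only; it omits the synchronization protocols the paper actually contributes):

```python
def edf_pick(tasks, now=0.0):
    """Earliest Deadline First: run the ready task with the nearest deadline.
    tasks maps name -> (absolute_deadline, remaining_exec_time)."""
    return min(tasks, key=lambda t: tasks[t][0])

def llf_pick(tasks, now=0.0):
    """Least Laxity First: laxity = deadline - now - remaining execution
    time; run the ready task with the smallest laxity."""
    return min(tasks, key=lambda t: tasks[t][0] - now - tasks[t][1])
```

Because both rules assign priorities dynamically at run time, static resource protocols such as the classic priority ceiling protocol do not apply directly, which motivates the distributed preclaiming, inheritance, and ceiling mechanisms described above.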
Chen Zhang is an Assistant Professor of Computer Information Systems at Bryant University. He received his M.S. and Ph.D. in Computer Science from the University of Alabama in 2000 and 2002, respectively, and a B.S. from Tsinghua University, Beijing, China. Dr. Zhang’s primary
research interests fall into the areas of distributed systems and telecommunications. He is a member of ACM, IEEE and DSI.
David Cordes is a Professor of Computer Science at the University of Alabama; he has also served as Department Head since 1997. He received
his Ph.D. in Computer Science from Louisiana State University in 1988, an M.S. in Computer Science from Purdue University
in 1984, and a B.S. in Computer Science from the University of Arkansas in 1982. Dr. Cordes’s primary research interests fall
into the areas of software engineering and systems. He is a member of ACM and a Senior Member of IEEE.
19.
Dynamic distributed real-time applications run on clusters with varying execution times, so re-allocation of resources is critical to meeting application deadlines. In this paper we present two adaptive resource management techniques for
dynamic real-time applications by employing the prediction of responses of real-time tasks that operate in time sharing environment
and run-time analysis of scheduling policies. Prediction of response time for resource reallocation is accomplished by historical
profiling of applications’ resource usage to estimate resource requirements on the target machine and a probabilistic approach
is applied for calculating the queuing delay that a process will experience on distributed hosts. Results show that, compared to statistical and worst-case approaches, our technique uses system resources more efficiently.
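A highly simplified sketch of the prediction idea described above: scale a historical runtime profile to the target host, then add an expected queuing delay derived from the current load. The scaling and queuing formulas here are illustrative assumptions, not the paper's model:

```python
def estimate_runtime(history, target_speed_factor):
    """Historical profiling: average past runtimes, scaled by the relative
    speed of the target machine (factor > 1 means a slower host)."""
    return sum(history) / len(history) * target_speed_factor

def expected_queueing_delay(history, load):
    """Toy probabilistic queuing estimate: with `load` competing processes,
    we expect on average to wait behind half of them."""
    avg = sum(history) / len(history)
    return load / 2 * avg

def predicted_response(history, target_speed_factor, load):
    """Predicted response time = scaled runtime + expected queuing delay."""
    return estimate_runtime(history, target_speed_factor) + expected_queueing_delay(history, load)
```

A scheduler could compare `predicted_response` across candidate hosts and re-allocate a task when its predicted completion would miss the deadline.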
20.
Air Quality Forecasting (AQF) is a new discipline that attempts to reliably predict atmospheric pollution. An AQF application has complex workflows, and in order to produce timely and reliable forecast results, each execution requires access to diverse and distributed computational and storage resources. Deploying AQF on Grids is one option to satisfy such needs, but it requires the related Grid middleware to support automated workflow scheduling and execution on Grid resources. In this paper, we analyze the challenges in deploying an AQF application in a campus Grid environment and present our current efforts to develop a general solution for Grid-enabling scientific workflow applications in the GRACCE project. In GRACCE, an application’s workflow is described using GAMDL, a powerful dataflow language for describing application logic. The GRACCE metascheduling architecture provides the functionalities required for co-allocating Grid resources for workflow tasks, scheduling the workflows and monitoring their execution. By providing an integrated framework for modeling and metascheduling scientific workflow applications on Grid resources, we make it easy to build a customized environment with end-to-end support for application Grid deployment, from the management of an application and its dataset to the automatic execution and analysis of its results. The work has been performed as part of the University of Houston’s Sun Microsystems Center of Excellence in Geosciences [38].