Similar Documents
20 similar documents found
1.
A recent and dramatic increase in the use of automation has not yielded comparable improvements in performance. Researchers have found human operators often underutilize (disuse) and overly rely on (misuse) automated aids (Parasuraman and Riley, 1997). Three studies were performed with Cameron University students to explore the relationship among automation reliability, trust, and reliance. With the assistance of an automated decision aid, participants viewed slides of Fort Sill terrain and indicated the presence or absence of a camouflaged soldier. Results from the three studies indicate that trust is an important factor in understanding automation reliance decisions. Participants initially considered the automated decision aid trustworthy and reliable. After observing the automated aid make errors, participants distrusted even reliable aids, unless an explanation was provided regarding why the aid might err. Knowing why the aid might err increased trust in the decision aid and increased automation reliance, even when the trust was unwarranted. Our studies suggest a need for future research focused on understanding automation use, examining individual differences in automation reliance, and developing valid and reliable self-report measures of trust in automation.

2.
OBJECTIVE: We tested the hypothesis that automation errors on tasks easily performed by humans undermine trust in automation. BACKGROUND: Research has revealed that the reliability of imperfect automation is frequently misperceived. We examined the manner in which the easiness and type of imperfect automation errors affect trust and dependence. METHOD: Participants performed a target detection task utilizing an automated aid. In Study 1, the aid missed targets either on easy trials (easy miss group) or on difficult trials (difficult miss group). In Study 2, we manipulated both easiness and type of error (miss vs. false alarm). The aid erred on either difficult trials alone (difficult errors group) or on difficult and easy trials (easy miss group; easy false alarm group). RESULTS: In both experiments, easy errors led to participants mistrusting and disagreeing more with the aid on difficult trials, as compared with those using aids that generated only difficult errors. This resulted in a downward shift in decision criterion for the former, leading to poorer overall performance. Misses and false alarms led to similar effects. CONCLUSION: Automation errors on tasks that appear "easy" to the operator severely degrade trust and reliance. APPLICATION: Potential applications include the implementation of system design solutions that circumvent the negative effects of easy automation errors.

3.
4.
The present study examined age differences in trust in and reliance on an automated decision aid. In Experiment 1, older and younger participants performed a simple mathematical task concurrent with a simulated medication management task. The decision aid was designed to facilitate medication management, but with varying reliability. Trust, self-confidence and usage of the aid were measured. The results indicated that older adults had greater trust in the aid and were less confident in their performance, but they did not calibrate trust differently than younger adults. In Experiment 2, a variant of the same task was used to investigate whether older adults are subject to over-reliance on the automation. Differences in omission and commission errors were examined. The results indicated that older adults were more reliant on the decision aid and committed more automation-related errors. A signal detection analysis indicated that older adults were less sensitive to automation failures. Results are discussed with respect to the perceptual and cognitive factors that influence age differences in the use of fallible automation.
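The signal detection analysis mentioned above can be sketched in a few lines. The hit and false-alarm counts below are hypothetical, not data from the study; the formulas for sensitivity (d') and criterion (c), with a log-linear correction for extreme rates, are standard signal detection theory.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from raw response counts.

    A log-linear correction (add 0.5 to every cell) avoids infinite
    z-scores when a hit or false-alarm rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for detecting automation failures in two age groups
d_young, c_young = dprime_and_criterion(hits=40, misses=10,
                                        false_alarms=5, correct_rejections=45)
d_older, c_older = dprime_and_criterion(hits=30, misses=20,
                                        false_alarms=10, correct_rejections=40)
print(f"younger d' = {d_young:.2f}, older d' = {d_older:.2f}")
```

A lower d' for the older group, as in these invented counts, would correspond to the abstract's finding of reduced sensitivity to automation failures.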

5.
Automation users often disagree with diagnostic aids that are imperfectly reliable. The extent to which users' agreements with an aid are anchored to their personal, self-generated diagnoses was explored. Participants (N = 75) performed 200 trials in which they diagnosed pump failures using an imperfectly reliable automated aid. One group (nonforced anchor, n = 50) provided diagnoses only after consulting the aid. Another group (forced anchor, n = 25) provided diagnoses both before and after receiving feedback from the aid. Within the nonforced anchor group, participants' self-reported tendency to prediagnose system failures significantly predicted their tendency to disagree with the aid, revealing a cognitive anchoring effect. Agreement rates of participants in the forced anchor group indicated that public commitment to a diagnosis did not strengthen this effect. Potential applications include the development of methods for reducing cognitive anchoring effects and improving automation utilization in high-risk domains.

6.
Trust in automation: designing for appropriate reliance
Lee JD, See KA. Human Factors. 2004;46(1):50-80
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.

7.
Humans: still vital after all these years of automation
OBJECTIVE: The authors discuss empirical studies of human-automation interaction and their implications for automation design. BACKGROUND: Automation is prevalent in safety-critical systems and increasingly in everyday life. Many studies of human performance in automated systems have been conducted over the past 30 years. METHODS: Developments in three areas are examined: levels and stages of automation, reliance on and compliance with automation, and adaptive automation. RESULTS: Automation applied to information analysis or decision-making functions leads to differential system performance benefits and costs that must be considered in choosing appropriate levels and stages of automation. Human user dependence on automated alerts and advisories reflects two components of operator trust, reliance and compliance, which are in turn determined by the threshold designers use to balance automation misses and false alarms. Finally, adaptive automation can provide additional benefits in balancing workload and maintaining the user's situation awareness, although more research is required to identify when adaptation should be user controlled or system driven. CONCLUSIONS: The past three decades of empirical research on humans and automation has provided a strong science base that can be used to guide the design of automated systems. APPLICATION: This research can be applied to most current and future automated systems.

8.
Future air traffic management concepts envisage shared decision-making responsibilities between controllers and pilots, necessitating that controllers be supported by automated decision aids. Even as automation tools are being introduced, however, their impact on the air traffic controller is not well understood. The present experiments examined the effects of an aircraft-to-aircraft conflict decision aid on performance and mental workload of experienced, full-performance level controllers in a simulated Free Flight environment. Performance was examined with both reliable (Experiment 1) and inaccurate automation (Experiment 2). The aid improved controller performance and reduced mental workload when it functioned reliably. However, detection of a particular conflict was better under manual conditions than under automated conditions when the automation was imperfect. Potential or actual applications of the results include the design of automation and procedures for future air traffic control systems.

9.
10.
Universal usability is an important component of HCI, particularly as companies promote their products in increasingly global markets to users with diverse cultural backgrounds. Successful anthropomorphic agents must have appropriate computer etiquette and nonverbal communication patterns. Because there are differences in etiquette, tone, formality, and colloquialisms across different user populations, it is unlikely that a generic anthropomorphic agent would be universally appealing. Additionally, because anthropomorphic characters are depicted as capable of human reasoning and possessing human motivations, users may place undue trust in these agents. Trust is a complex construct that exerts an important role in a user's interactions with an interface or system. Feelings and perceptions about an anthropomorphic agent may impact the construction of a mental model about a system, which may lead to inappropriate calibrations of automation trust that is based on an emotional connection with the anthropomorphic agent rather than on actual system performance.

11.
Previous research has shown that gender stereotypes, elicited by the appearance of the anthropomorphic technology, can alter perceptions of system reliability. The current study examined whether stereotypes about the perceived age and gender of anthropomorphic technology interacted with reliability to affect trust in such technology. Participants included a cross-section of younger and older adults. Through a factorial survey, participants responded to health-related vignettes containing anthropomorphic technology with a specific age, gender, and level of past reliability by rating their trust in the system. Trust in the technology was affected by the age and gender of the user as well as its appearance and reliability. Perceptions of anthropomorphic technology can be affected by pre-existing stereotypes about the capability of a specific age or gender.

12.
We examined the effect of distractor characteristics (modality and processing code) on visual search performance and interaction with an automated decision aid. Multiple Resource Theory suggests that concurrent tasks that are processed similarly (e.g., two visual tasks) will cause greater interference than tasks that are not (e.g., a visual and an auditory task). However, the impact of tasks that share processing and perceptual demands on human-automation interaction has not been established. To examine this, participants completed two blocks of a luggage screening simulation with or without the assistance of an automated aid. For one block, participants performed a concurrent distractor task drawn from one of four combinations of modality and processing code: auditory-verbal, auditory-spatial, visual-verbal, or visual-spatial. We measured sensitivity, criterion setting, perceived workload, system trust, perceived system reliability, compliance, reliance, and confidence. Participants demonstrated the highest sensitivity when performing with an auditory-spatial secondary task. Automation compliance was higher when the auditory-spatial distraction was present versus absent; however, system trust was highest in the auditory-verbal condition. Confidence (when disagreeing with the aid) was also highest when the distractor was auditory. This study indicates that some forms of auditory 'distractors' may actually help performance; these results further contribute to understanding how distractions influence performance when operators interact with automation and have implications for improved work environment and system design.

13.
Merritt SM, Ilgen DR. Human Factors. 2008;50(2):194-210
OBJECTIVE: We provide an empirical demonstration of the importance of attending to human user individual differences in examinations of trust and automation use. BACKGROUND: Past research has generally supported the notions that machine reliability predicts trust in automation, and trust in turn predicts automation use. However, links between user personality and perceptions of the machine with trust in automation have not been empirically established. METHOD: On our X-ray screening task, 255 students rated trust and made automation use decisions while visually searching for weapons in X-ray images of luggage. RESULTS: We demonstrate that individual differences affect perceptions of machine characteristics when actual machine characteristics are constant, that perceptions account for 52% of trust variance above the effects of actual characteristics, and that perceptions mediate the effects of actual characteristics on trust. Importantly, we also demonstrate that when administered at different times, the same six trust items reflect two types of trust (dispositional trust and history-based trust) and that these two trust constructs are differentially related to other variables. Interactions were found among user characteristics, machine characteristics, and automation use. CONCLUSION: Our results suggest that increased specificity in the conceptualization and measurement of trust is required, future researchers should assess user perceptions of machine characteristics in addition to actual machine characteristics, and incorporation of user extraversion and propensity to trust machines can increase prediction of automation use decisions. APPLICATION: Potential applications include the design of flexible automation training programs tailored to individuals who differ in systematic ways.
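The hierarchical-regression logic behind a finding like "perceptions account for 52% of trust variance above actual characteristics" can be illustrated on simulated data. Everything below is a sketch under invented parameters, not a reanalysis of the study; the ΔR² it prints will not match the reported value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 255  # sample size matching the abstract

# Actual machine characteristic: each participant sees a 60%- or 90%-reliable aid
reliability = rng.choice([0.6, 0.9], size=n)
# Perceived reliability: a noisy reading of the actual characteristic
perception = reliability + rng.normal(0.0, 0.15, n)
# For this sketch, trust is assumed to be driven by perception, not reliability itself
trust = 2.0 * perception + rng.normal(0.0, 0.1, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: trust regressed on the actual characteristic alone
r2_actual = r_squared(reliability[:, None], trust)
# Step 2: add perceptions; the increment is the variance perceptions explain beyond it
r2_with_perception = r_squared(np.column_stack([reliability, perception]), trust)
print(f"R^2 (actual characteristics only): {r2_actual:.2f}")
print(f"Delta R^2 from adding perceptions: {r2_with_perception - r2_actual:.2f}")
```

The sizeable increment in R² at step 2 is the pattern the abstract describes: perceptions carry predictive information about trust that the actual machine characteristics alone do not.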

14.
Although increases in the use of automation have occurred across society, research has found that human operators often underutilize (disuse) and overly rely on (misuse) automated aids (R. Parasuraman & V. Riley, 1997). Nearly 275 Cameron University students participated in 1 of 3 experiments performed to examine the effects of perceived utility (M. T. Dzindolet, H. P. Beck, L. G. Pierce, & L. A. Dawe, 2001) on automation use in a visual detection task and to compare reliance on automated aids with reliance on humans. Results revealed a bias for human operators to rely on themselves. Although self-report data indicate a bias toward automated aids over human aids, performance data revealed that participants were more likely to disuse automated aids than to disuse human aids. This discrepancy was accounted for by assuming human operators have a "perfect automation" schema. Actual or potential applications of this research include the design of future automated decision aids and training procedures for operators relying on such aids.

15.
When automated vehicles (AVs) err, drivers' trust can easily be destroyed, reducing their use of AVs. This study examines how AV errors degrade drivers' trust by affecting their subjective perceptions. A driving simulator experiment was conducted in which 104 participants (58 male, 46 female) experienced automated driving with automation errors and rated their trust. The results indicate that automation errors affect drivers' perceived predictability, perceived reliability, and perceived safety, leading to a decline in trust and abandonment of automated driving. When AV automation errors occur, perceived safety plays the most critical role in drivers' trust. In addition, when automation errors occur in specific low-risk tasks, drivers' trust drops faster than in high-risk tasks. This paper explores the internal mechanisms behind the decline of drivers' trust after AV automation errors, and further considers the influence of different external risks on these perception factors and on trust. The findings can help AV manufacturers formulate trust repair strategies graded by driving task and accident severity.

16.
Ma R, Kaber DB. Ergonomics. 2007;50(8):1351-1364
The objective of this study was to identify task and vehicle factors that may affect driver situation awareness (SA) and its relationship to performance, particularly in strategic (navigation) tasks. An experiment was conducted to assess the effects of in-vehicle navigation aids and their reliability on driver SA and performance in a simulated navigation task. A total of 20 participants drove a virtual car and navigated a large virtual suburb. They were required to follow traffic signs and navigation directions from either a human aid via a mobile phone or an automated aid presented on a laptop. The navigation aids operated under three different levels of information reliability (100%, 80% and 60%). A control condition was used in which each aid presented a telemarketing survey and participants navigated using a map. Results revealed that perfectly reliable navigation information generally improved driver SA and performance compared to unreliable navigation information and the control condition (task-irrelevant information). In-vehicle automation appears to mediate the relationship of driver SA to performance in terms of operational and strategic (navigation) behaviours. The findings of this work support consideration of driver SA in the design of future vehicle automation for navigation tasks.

17.
This paper describes how DDE data exchange and ActiveX Automation software technologies, used with the VB development tool and the DDE data-exchange capabilities of applications such as InTouch and Excel, can be applied to develop a mimic-panel display driver, and analyzes the advantages and disadvantages of this driving approach compared with the traditional one. Field use shows that the driver runs stably and reliably and is easy to operate.

18.
Ergonomics. 2012;55(6):897-908
Though it has been reported that air traffic controllers' (ATCos') performance improves with the aid of a conflict resolution aid (CRA), the effects of imperfect automation on CRA are so far unknown. The main objective of this study was to examine the effects of imperfect automation on conflict resolution. Twelve students with ATC knowledge were instructed to complete ATC tasks in four CRA conditions including reliable, unreliable and high time pressure, unreliable and low time pressure, and manual conditions. Participants were able to resolve the designated conflicts more accurately and faster in the reliable versus unreliable CRA conditions. When comparing the unreliable CRA and manual conditions, unreliable CRA led to better conflict resolution performance and higher situation awareness. Surprisingly, high time pressure triggered better conflict resolution performance as compared to the low time pressure condition. The findings from the present study highlight the importance of CRA in future ATC operations.

Practitioner Summary: A conflict resolution aid (CRA) is a proposed automated decision aid in air traffic control (ATC). The present study found that a CRA was able to improve air traffic controllers' performance even when it was not perfectly reliable. These findings highlight the importance of CRA in future ATC operations.

19.
The International Society of Automation recently released ISA100.11a as an open standard for reliable wireless networks for industrial automation. ISA100.11a uses a TDMA scheme in the medium access layer to provide deterministic services. However, ISA100.11a adopts a prioritized CSMA-CA mechanism for retransmissions after failures on dedicated links, for sporadic data, and for network configuration. This paper evaluates ISA100.11a CSMA-CA by simulation, considering the effects of backoff procedures and priority settings on the probability of collision and the successful use of slots. The simulations demonstrate that a larger number of priority classes enables better network utilization, resulting in fewer packets exceeding their lifetime.
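The intuition that more priority classes reduce collisions can be illustrated with a deliberately simplified slotted-contention model. This is not the ISA100.11a CSMA-CA algorithm: the disjoint per-class windows, round-robin class assignment, and node counts below are invented purely for illustration.

```python
import random

def success_rate(num_nodes, num_priorities, window, trials=2000, seed=1):
    """Fraction of transmission attempts that succeed (no collision).

    Each node draws one backoff slot; nodes in different priority
    classes draw from disjoint windows, so more classes spread the
    attempts over more slots and collisions become rarer.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        picks = []
        for node in range(num_nodes):
            prio = node % num_priorities      # round-robin class assignment
            start = prio * window             # each class gets its own window
            picks.append(rng.randrange(start, start + window))
        # an attempt succeeds if its slot was chosen by exactly one node
        successes += sum(1 for p in picks if picks.count(p) == 1)
    return successes / (trials * num_nodes)

one_class = success_rate(num_nodes=8, num_priorities=1, window=8)
four_classes = success_rate(num_nodes=8, num_priorities=4, window=8)
print(f"1 class: {one_class:.2f}, 4 classes: {four_classes:.2f}")
```

With all eight nodes contending in one window, collisions are frequent; splitting them into four classes contending in separate windows raises the per-attempt success rate, mirroring the paper's finding that more priority classes improve network utilization.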

20.
OBJECTIVE: To examine whether continually updated information about a system's confidence in its ability to perform assigned tasks improves operators' trust calibration in, and use of, an automated decision support system (DSS). BACKGROUND: The introduction of decision aids often leads to performance breakdowns that are related to automation bias and trust miscalibration. This can be explained, in part, by the fact that operators are informed about overall system reliability only, which makes it impossible for them to decide on a case-by-case basis whether to follow the system's advice. METHOD: The application for this research was a neural net-based decision aid that assists pilots with detecting and handling in-flight icing encounters. A multifactorial experiment was carried out with two groups of 15 instructor pilots each flying a series of 28 approaches in a motion-base simulator. One group was informed about the system's overall reliability only, whereas the other group received updated system confidence information. RESULTS: Pilots in the updated group experienced significantly fewer icing-related stalls and were more likely to reverse their initial response to an icing condition when it did not produce desired results. Their estimate of the system's accuracy was more accurate than that of the fixed group. CONCLUSION: The presentation of continually updated system confidence information can improve trust calibration and thus lead to better performance of the human-machine team. APPLICATION: The findings from this research can inform the design of decision support systems in a variety of event-driven high-tempo domains.

