Similar Documents
20 similar documents found, search time 0 ms
1.
Background: Future-proof EHR systems must be capable of interpreting information structures for medical concepts that were not available at the build-time of the system. The two-model approach of CEN 13606/openEHR using archetypes achieves this by separating generic clinical knowledge from domain-related knowledge. The presentation of this information can either itself be generic, or require design-time awareness of the domain knowledge being employed.
Objective: To develop a Graphical User Interface (GUI) capable of displaying previously unencountered clinical data structures in a meaningful way.
Methods: Through “reasoning by analogy” we defined an approach for the representation and implementation of “presentational knowledge”. A proof-of-concept implementation was built to validate its implementability and to test for unanticipated issues.
Results: A two-model approach to specifying and generating a screen representation for archetype-based information, inspired by the two-model approach of archetypes, was developed. There is a separation between software-related display knowledge and domain-related display knowledge, and the toolkit is designed with the reuse of components in mind.
Conclusions: The approach leads to a flexible GUI that can adapt not only to information structures that had not been predefined within the receiving system, but also to novel ways of displaying the information. We also found that, ideally, the openEHR Archetype Definition Language should receive minor adjustments to allow for generic binding.

2.
We have investigated the influence of post-filtering virtual screening results, with pharmacophoric features generated from an X-ray structure, on enrichment rates. This was performed using three docking programs, zdock+, Surflex and FRED, as virtual screening tools, and pharmacophores generated in UNITY from co-crystallized complexes. Sets of known actives along with 9997 pharmaceutically relevant decoy compounds were docked against six chemically diverse protein targets, namely CDK2, COX2, ER, fXa, MMP3, and NA. To try to overcome the inherent limitations of the well-known docking problem, we generated multiple poses for each compound. The compounds were first ranked according to their scores alone and enrichment rates were calculated using only the top-scoring pose of each compound. Subsequently, all poses for each compound were passed through the different pharmacophores generated from co-crystallized complexes and the enrichment factors were re-calculated based on the top-scoring passing pose of each compound. Post-filtering with a pharmacophore generated from only one X-ray complex was shown to increase enrichment rates for all investigated targets compared to docking alone. This indicates that this is a general method, which works for diverse targets and different docking programs.
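The enrichment-factor comparison described above can be sketched as follows. This is a minimal illustration on hypothetical compound names, scores, and pharmacophore-filter flags (none of which come from the study): compounds are first ranked by docking score alone, then re-ranked with compounds lacking a pharmacophore-passing pose demoted to the bottom of the list.

```python
def enrichment_factor(ranked, actives, top_frac):
    """EF = (fraction of actives in the top of the list)
            / (fraction of actives in the whole list)."""
    n_top = max(1, int(len(ranked) * top_frac))
    hits = sum(1 for c in ranked[:n_top] if c in actives)
    return (hits / n_top) / (len(actives) / len(ranked))

# (compound id, best docking score, has a pharmacophore-passing pose) -- illustrative
poses = [("act1", -9.1, True), ("dec1", -8.9, False), ("dec2", -8.7, False),
         ("act2", -8.5, True), ("act3", -8.0, True), ("dec3", -7.0, True)]
actives = {"act1", "act2", "act3"}

# Ranking by docking score alone (more negative = better)
by_score = [c for c, _, _ in sorted(poses, key=lambda p: p[1])]

# Post-filtered ranking: pharmacophore-passing compounds first (by score),
# failing compounds demoted to the end
passing = sorted([p for p in poses if p[2]], key=lambda p: p[1])
failing = sorted([p for p in poses if not p[2]], key=lambda p: p[1])
post_filtered = [c for c, _, _ in passing + failing]

ef_plain = enrichment_factor(by_score, actives, top_frac=0.5)      # ~0.67
ef_post = enrichment_factor(post_filtered, actives, top_frac=0.5)  # 2.0
```

With these made-up numbers the two well-scoring decoys crowd out actives from the top of the score-only ranking, and the pharmacophore post-filter restores the enrichment, mirroring the effect the abstract reports.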

3.
Wordnets are large-scale lexical databases of related words and concepts, useful for language-aware software applications. They have recently been built for many languages using various approaches. The Finnish wordnet, FinnWordNet (FiWN), was created by translating the more than 200,000 word senses in the English Princeton WordNet (PWN) 3.0 in 100 days. To ensure quality, they were translated by professional translators. The direct translation approach was based on the assumption that most synsets in PWN represent language-independent real-world concepts. The semantic relations between synsets were thus also assumed to be mostly language-independent, so the structure of PWN could be reused as well. This approach allowed the creation of an extensive Finnish wordnet directly aligned with PWN and also provided us with a translation relation, and thus a bilingual wordnet usable as a dictionary. In this paper, we address several concerns raised with regard to our approach, many of them for the first time. We evaluate the craftsmanship of the translators by checking the spelling and translation quality, the viability of the approach by assessing the synonym quality on both the lexeme and concept level, as well as the usefulness of the resulting lexical resource both for humans and in a language-technological task. We discovered no new problems compared with those already known in PWN. As a whole, the paper contributes to the scientific discourse on what it takes to create a very large wordnet. As a side-effect of the evaluation, we extended FiWN to contain 208,645 word senses in 120,449 synsets, effectively making version 2.0 of FiWN currently the largest wordnet in the world by these statistics.

4.
Simulation coupling (or cosimulation) techniques provide a framework for the analysis of decomposed dynamical systems with the use of independent numerical procedures for the decomposed subsystems. These methods are often seen as very promising because they enable the utilization of existing software for subsystem analysis and are usually easy to parallelize and run in a distributed environment. For example, in the domain of multibody systems dynamics, a general setup for “Gluing Algorithms” was proposed by Wang et al. It was intended to provide a basis for multilevel distributed simulation environments. The authors presented an example where Newton’s method was used to synchronize the responses of subsystem simulators.
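The synchronization idea can be illustrated with a toy example. The sketch below is not the algorithm of Wang et al.; it just shows a Newton iteration on the interface residual of two hypothetical scalar subsystems, each represented by a stand-in function for a full subsystem solve over one macro time step.

```python
def solve_sub1(u):
    # Stand-in for subsystem simulator 1: maps the interface input u to its output
    return 0.5 * u + 1.0

def solve_sub2(y):
    # Stand-in for subsystem simulator 2: maps subsystem 1's output back to the interface
    return 0.3 * y + 0.2

def residual(u):
    # Coupling (synchronization) condition: the round trip must reproduce u
    return solve_sub2(solve_sub1(u)) - u

def newton_sync(g, u0, tol=1e-12, h=1e-7, max_iter=50):
    """Newton iteration on the interface variable, using a
    finite-difference approximation of the Jacobian."""
    u = u0
    for _ in range(max_iter):
        r = g(u)
        if abs(r) < tol:
            break
        dg = (g(u + h) - r) / h   # finite-difference derivative of the residual
        u -= r / dg
    return u

u_star = newton_sync(residual, u0=0.0)   # converges to 0.5/0.85
```

In a real cosimulation, each stand-in function would be a black-box subsystem simulator advanced over one macro step, and the interface variable would be a vector of coupling forces or states; the structure of the iteration is the same.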

5.
The formal concept of logical equivalence in fuzzy logic, while theoretically sound, seems impractical. The misinterpretation of this concept has led to some pessimistic conclusions. Motivated by the practical interpretation of truth values for fuzzy propositions, we take the class (lattice) of all subintervals of the unit interval [0, 1] as the truth value space for fuzzy logic, subsuming the traditional class of numerical truth values from [0, 1]. The associated concept of logical equivalence is stronger than the traditional one. Technically, we are dealing with a much smaller set of pairs of equivalent formulas, so that we are able to check equivalence algorithmically. The checking is done by showing that our strong equivalence notion coincides with equivalence in logic programming. © 1996 John Wiley & Sons, Inc.
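The interval truth values can be illustrated with a small sketch (this is not the paper's formal apparatus; the elementwise min/max operations are one standard choice of connectives). Truth values are subintervals (a, b) of [0, 1]; under these operations De Morgan's law holds exactly, while the classical law of excluded middle does not.

```python
def i_and(p, q):   # conjunction: elementwise min of the interval endpoints
    return (min(p[0], q[0]), min(p[1], q[1]))

def i_or(p, q):    # disjunction: elementwise max of the interval endpoints
    return (max(p[0], q[0]), max(p[1], q[1]))

def i_not(p):      # negation complements and swaps the endpoints
    return (1 - p[1], 1 - p[0])

p, q = (0.3, 0.6), (0.2, 0.9)

demorgan_lhs = i_not(i_and(p, q))        # not(p and q) -> (0.4, 0.8)
demorgan_rhs = i_or(i_not(p), i_not(q))  # (not p) or (not q) -> (0.4, 0.8)

excluded_middle = i_or(p, i_not(p))      # (0.4, 0.7), not the top element (1, 1)
```

Checking an equivalence candidate then amounts to comparing the interval truth values the two formulas receive under all assignments, which is the kind of algorithmic check the abstract refers to.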

6.
In analyzing the human tendency to treat computers as social actors (CASA), researchers tend to rule out the anthropomorphism explanation because anthropomorphism is understood to be “a sincere, conscious belief” that computers are human and/or deserving of human attributions. But does anthropomorphism necessarily have to be mindful? Could it not also be a mindless tendency, especially given that most of us have somewhat long associations with our computers and have built human-like bonds with them? We examined these questions empirically by investigating whether the user tendency to treat computers as human beings is conscious (mindful) or non-conscious (mindless). We manipulated two variables (presence/absence of a human-like agent and low/high interactivity) on a health website and experimentally investigated whether they serve as anthropomorphic cues that trigger mindful attributions of human-ness to the website or mindless evaluations of the site in human terms. We found evidence for mindless anthropomorphism, with implications for user judgments of the credibility of information on the site.

7.
Much computer science literature addresses the mechanics of the Unified Modelling Language (UML) and requirements modelling, but little research has addressed the role of UML in the broader organizational and project development context. This study uses a socio-technical approach to consider UML as a technology embedded in a social environment. In this study, project developers were interviewed in detail about their use of UML, along with influences on their decisions to use this approach and the results of using it. Data were analyzed using causal mapping. Major findings included: (1) that definitions of success may differ by unit of analysis (e.g., developer, project, organization) and that the relationships among these definitions are complex; (2) a very large number of variables impacting project success were identified; (3) a number of important variables exist in complex (non-linear) relationships with project success; and (4) the majority of interviewees linked the use of UML to project success.

8.
《Ergonomics》2012,55(6):770-779
Abstract

Questions have been raised regarding the impact that providing concurrent verbal protocols has on task performance in various settings; however, there has been little empirical testing of this in road transport. The aim of this study was to examine the impact of providing concurrent verbal protocols on driving performance. Participants drove an instrumented vehicle around a set route, twice whilst providing a concurrent verbal protocol, and twice without. A comparison revealed no differences in behaviour related to speed, braking and steering wheel angle when driving mid-block, but a significant difference in aspects of braking and acceleration at roundabouts. When not providing a verbal protocol, participants were found to brake harder on approach to a roundabout and accelerate more heavily coming out of roundabouts. It is concluded that providing verbal protocols may have a positive effect on braking and accelerating. Practical implications related to driver training and future research are discussed.

Practitioner Summary: Verbal protocol analysis is used by ergonomists to understand aspects of cognition and decision-making during complex tasks such as driving and control room operation. This study examines the impact that it has on driving performance, providing evidence to support its continued use in ergonomics applications.

9.
Lidar technology has become an important data source in 3D terrain modelling. In Spain, the National Plan for Aerial Orthophotography will soon release public low-density lidar data (0.5–1 pulses/m2) for most of the country's territory. Taking advantage of this fact, this article experimentally assesses the possibility of classifying a rural landscape into eight classes using multitemporal and multidensity lidar data, and analyses the effect of point density on classification accuracy. Two statistical methods (transformed divergence and the Jeffries–Matusita distance) were used to assess the possibility of discriminating the eight classes and to determine which data layers were best suited for classification purposes. The results showed that ‘dirt road’ cannot be discriminated from ‘bare earth’ and that the possibility of discriminating ‘bare earth’, ‘pavement’, and ‘low vegetation’ decreases when using densities below 4 pulses/m2. Two non-parametric tests, the Kruskal–Wallis test and the Friedman test, were used to strengthen the results by assessing their statistical significance. According to the results of the Kruskal–Wallis test, lidar point density does not significantly affect the classification, whereas the results of the Friedman test show that bands could be considered the only parameter affecting the possibility of discriminating some of the classes, such as ‘high vegetation’. Finally, the J48 algorithm was used to perform cross-validation in order to obtain the quantitative measures most familiar from the international literature (e.g. overall accuracy). Mean overall accuracy was around 85% when all eight classes were considered and increased up to 95% when ‘dirt road’ was disregarded.
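For reference, the Jeffries–Matusita distance mentioned above is a saturating transform of the Bhattacharyya distance between two class distributions; for univariate Gaussian classes it can be sketched as below (the means and standard deviations are illustrative, not values from the study).

```python
import math

def jm_distance(mu1, sd1, mu2, sd2):
    """Jeffries-Matusita distance between two univariate Gaussian class
    distributions; ranges from 0 (identical) to 2 (fully separable)."""
    v1, v2 = sd1 ** 2, sd2 ** 2
    # Bhattacharyya distance between the two Gaussians
    b = ((mu1 - mu2) ** 2 / (4 * (v1 + v2))
         + 0.5 * math.log((v1 + v2) / (2 * sd1 * sd2)))
    return 2 * (1 - math.exp(-b))

jm_distance(0.0, 1.0, 0.0, 1.0)    # 0.0: identical classes
jm_distance(0.0, 1.0, 10.0, 1.0)   # ~2.0: well-separated classes
```

The saturation at 2 is what makes the measure convenient for deciding whether two land-cover classes are separable at all, rather than merely ranking pairwise distances.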

10.
We treat the problem of evaluating the statistical characteristics of the output signal of automatic control systems containing non-linear elements with zero memory, excited by a random input with a normal distribution.

In this paper, in order to analyse the statistical characteristics, we propose a statistical linearization technique determined by the condition that the output variances of the true nonlinear control system and of an equivalent linearized one are equal. Several examples are presented to illustrate the proposed method.
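The variance-matching criterion can be sketched numerically: for a zero-mean Gaussian input, the equivalent gain k is chosen so that k²σ² equals E[f(x)²] at the output of the memoryless nonlinearity. The code below is an illustration (the cubic and linear nonlinearities are made-up examples, not ones from the paper).

```python
import math

def equivalent_gain(f, sigma, n=20001, span=8.0):
    """Gain k satisfying k**2 * sigma**2 == E[f(x)**2] for x ~ N(0, sigma**2),
    estimated by trapezoidal integration against the Gaussian density."""
    lo = -span * sigma
    dx = 2 * span * sigma / (n - 1)
    ef2 = 0.0
    for i in range(n):
        x = lo + i * dx
        pdf = math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        weight = 0.5 if i in (0, n - 1) else 1.0   # trapezoidal end weights
        ef2 += weight * f(x) ** 2 * pdf * dx
    return math.sqrt(ef2) / sigma

# A linear element recovers its own gain; for f(x) = x**3 with sigma = 1,
# E[x**6] = 15 for a standard Gaussian, so k = sqrt(15)
k_lin = equivalent_gain(lambda x: 2 * x, 1.0)    # ~2.0
k_cub = equivalent_gain(lambda x: x ** 3, 1.0)   # ~3.873
```

Replacing the nonlinearity by the gain k then allows ordinary linear-systems analysis of the loop, which is the point of the technique.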

11.
Canada is unique among the countries included in this volume with regard to the immigration status of care workers; they are much more likely to be immigrants or permanent residents than temporary workers (migrants). One program specific to Canada that enables care workers to migrate to Canada is the Live-in Caregiver Program (LCP). Through this program, workers are able to migrate without having to meet the qualifications of the immigration points system or family sponsorship. One of the key requirements is that they work for at least 24 out of 36 months as a care worker in the home of their client, who in turn is their immigration sponsor. Though this has typically been a means to bring in care workers to work with children, increasingly care workers are attending to elderly clients. Interviews we conducted with 19 immigrant care workers in the home and long-term care sector who came to Canada through the LCP contributed to a broader understanding of the way in which this recent shift in focus can help to address the growing need for care of older persons in their homes. However, it has been implemented with little of the additional resources needed for this increasingly complex clientele. This program holds clear potential, but only if it is better customized to meet the needs of older persons and their care workers.

12.
It is no secret that Microsoft Windows NT is the hacker’s favourite Operating System (OS). According to defacement-tracking site Attrition.org (www.Attrition.org), Windows NT received 54.41% of all recorded OS attacks between August 1999 and April 2001. In stark contrast, some of the lesser-known operating systems accounted for as little as 0.1% of all OS attacks. However, despite the huge difference in the quantity of attacks, companies that employ the lesser-known systems may be at greater risk…

13.
A consulting firm has interviewed nursing executives at 24 hospitals throughout the country assessing nursing automation needs and comparing two of the top 10 patient care systems vendors on a wide range of variables. Nursing involvement in system selection is vital.

14.
The utilization of the results of control theory in the process control field has been lagging behind other application fields such as aerospace by many years. It is argued that the availability of high capacity computing at low price will change this situation and that new powerful control techniques can now be implemented in process control.

15.
16.
《Information & Management》2002,40(2):115-131
A study was conducted to examine the effect of implementing a new system on its users, specifically, the relationship between pre-implementation expectations and their perceived benefits based on post-implementation experience. Disconfirmation theory was used as the theoretical basis; this predicts that unrealistically high expectations will result in lower levels of perceived benefit than those associated with realistic expectations (i.e. where expectations match experience). Support was found for this prediction, refuting the predictions of dissonance theory. In addition to examining expectations of system use generally, six expectation categories were examined to identify the critical categories where managers should keep expectations from becoming unrealistically high. Significant relationships were found for three expectation categories: system usefulness, ease of use, and information quality. The results indicate that creating and maintaining realistic expectations of future system benefits really does matter.

17.
18.
19.
This study addresses the question of how transmission delay affects user perception during speech communication over telephone systems. It aims to show that the occurrence of pure delay should not be neglected when planning a telephone or conferencing system, even if no impact on the perceived quality of the call can be found. It is, for instance, known that the communication surface structure changes dramatically when transmission delay is inserted by the communication system. Furthermore, studies suggest a change in the perception of the interlocutor at the far end. This paper describes two experiments that assess the misattribution of the technical impairment delay to personality and behavior-related attributes of the conversation partners. The first experiment shows that interlocutors are perceived as being less attentive when conversing in a three-party setting with symmetrical and asymmetrical delay conditions. In the second experiment, the misattribution is considered in more detail, looking at ascribed personality attributes in two-party interaction under transmission delay. For both experiments, comparing the conversation surface structure of delayed and non-delayed calls helped to explain the observed outcomes.

20.
It is customary to use open-system trace-driven simulations to evaluate the performance of parallel-system schedulers. As a consequence, all schedulers have evolved to optimize the packing of jobs in the schedule, as a means to improve a number of performance metrics that are conjectured to be correlated with user satisfaction, with the premise that this will result in higher productivity in practice. We argue that these simulations suffer from severe limitations that lead to suboptimal scheduler designs and to the dismissal of potentially good design alternatives. We propose an alternative simulation methodology called site-level simulation, in which the workload for the evaluation is generated dynamically by user models that interact with the system. We present a novel scheduler called CREASY that exploits knowledge of user behavior to directly improve user satisfaction, and compare its performance to the original packing-based EASY scheduler. We show that user productivity improves by up to 50 percent under the user-aware design, while according to the conventional metrics, performance may actually degrade.
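One conventional packing-oriented metric of the kind alluded to above is bounded slowdown. A minimal sketch, with made-up job records and the commonly used 10-second bound on the run time, is:

```python
def bounded_slowdown(wait, run, tau=10.0):
    """Response time relative to run time, bounded below by 1 and guarded
    against very short jobs by the threshold tau (seconds)."""
    return max(1.0, (wait + run) / max(run, tau))

# (wait seconds, run seconds) for a few hypothetical jobs
jobs = [(0.0, 3600.0), (90.0, 10.0), (0.0, 5.0)]
mean_bsld = sum(bounded_slowdown(w, r) for w, r in jobs) / len(jobs)  # 4.0
```

A packing-oriented scheduler minimizes aggregates of such per-job numbers; the abstract's point is that optimizing them in an open-system trace replay need not improve what users actually experience.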

