Found 20 similar records.
1.
W W Stead. M.D. Computing: Computers in Medical Practice, 1989, 6(2):74-81
Early developments and subsequent progress in the area of computer-based medical records are reviewed. The current state of the art, and the level of penetration throughout the healthcare system, fall well short of what the first decade's advances pointed toward. Possible explanations are explored.
2.
Drug effects can mimic a wide variety of diseases. Experts note that adverse drug reactions (ADRs) have become the 'greatest imitator' of disease in clinical medicine. Quick Medical Reference (QMR) is a decision support system providing diagnostic data about more than 600 medical diseases. Currently, QMR contains only limited drug information. Just as physicians have difficulty diagnosing ADRs, QMR has similar problems in differentiating natural disease manifestations from drug toxicity syndromes. To remedy this problem, two prototype Drug Syndromes (DS), Carbamazepine Toxicity and Penicillin Toxicity, were incorporated into the QMR Knowledge Base (KB). Using detailed case reports, we demonstrated that a DS-augmented version of QMR was successful in discriminating these DS from the other diseases in QMR's KB. The addition of DS significantly improves QMR's diagnostic performance in cases in which some of the pathologic features are the consequence of drugs.
3.
4.
Heffernan H. Computers in Healthcare, 1992, 13(7):51-52
Congressional and Health and Human Services leaders are spearheading an effort to institute a nationwide electronic claims network. The ultimate result, however, could be something healthcare information system professionals have dreamed of for years--a national computer-based patient record system.
5.
Carlos A. Acosta Calderon, Rajesh E. Mohan, Lingyun Hu, Changjiu Zhou, Huosheng Hu. Robotics and Autonomous Systems, 2009, 57(8):860-869
Recently, interest in the analysis and generation of human and human-like motion has increased in various areas. In robotics, operating a humanoid robot requires generating motions that have strict dynamic consistency. Furthermore, human-like motion for robots brings advantages such as energy optimization. This paper presents a mechanism to generate two human-like motions, walking and kicking, for a biped robot using a simple model based on observation and analysis of human motion. Our ultimate goal is to establish a design principle for a controller that achieves natural human-like motions. The approach presented here rests on the principle that in most biological motor-learning scenarios some form of optimization with respect to a physical criterion is taking place. In a similar way, the equations of motion for the humanoid robot are formulated so that the resulting optimization problems can be solved reliably and efficiently. The simulation results show that faster and more accurate searching can be achieved to generate an efficient human-like gait. Comparison is made with methods that do not include observation of human gait. The gait has been successfully used to control Robo-Erectus, a soccer-playing humanoid robot that is among the leading robots in the RoboCup Humanoid League.
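To make the optimization framing concrete, here is a minimal sketch of my own, not the authors' formulation: a single joint trajectory is parameterized by a cubic polynomial, and the integrated squared angular acceleration (a common stand-in for actuator effort) is minimized subject to illustrative boundary angles.

```python
# Minimal sketch: motion generation as constrained optimization.
# Assumptions (mine): one joint, cubic trajectory, effort = integrated
# squared acceleration, boundary angles 0.2 and -0.2 rad are illustrative.
import numpy as np
from scipy.optimize import minimize

T = 1.0                      # step duration in seconds (assumed)
t = np.linspace(0.0, T, 50)  # time grid

def trajectory(c):
    """Joint angle as a cubic polynomial in time."""
    a0, a1, a2, a3 = c
    return a0 + a1 * t + a2 * t**2 + a3 * t**3

def effort(c):
    """Integrated squared angular acceleration (effort proxy)."""
    _, _, a2, a3 = c
    acc = 2 * a2 + 6 * a3 * t
    return float(np.sum(acc**2) * (t[1] - t[0]))

# Boundary constraints on the start and end joint angles.
cons = [
    {"type": "eq", "fun": lambda c: trajectory(c)[0] - 0.2},
    {"type": "eq", "fun": lambda c: trajectory(c)[-1] + 0.2},
]
res = minimize(effort, x0=np.zeros(4), constraints=cons)
print("optimized coefficients:", res.x)
```

The full-body problem in the paper couples many joints through the robot's dynamics, but the structure, a physical criterion minimized under motion constraints, is the same.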
6.
The problems associated with testing a dynamical model using a data record of finite length are: insufficiency of the data for statistically meaningful decisions; coupling of mean-, covariance-, and correlation-related errors; difficulty of detecting midcourse model departures; and inadequacy of traditional techniques for computing test power for given model alternatives. This paper attempts to provide a comprehensive analysis of nonstationary models via significance tests, specifically addressing these problems. Data records from single and from multiple system operations are analyzed, and the models considered may vary both with respect to time and with respect to operations. Quadratic form distributions prove effective in the statistical analysis.
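A minimal sketch of the flavor of such a test (my own illustration; the paper's statistics are far more general): the model's residuals are combined into a quadratic form that is chi-square distributed when the model's assumed covariance is correct.

```python
# Minimal sketch: significance test on a dynamical model via a quadratic
# form. Assumptions (mine): residuals are i.i.d. Gaussian with the
# model-predicted covariance S under the null hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, dim = 200, 2
S = np.array([[1.0, 0.3], [0.3, 0.5]])   # model-predicted covariance (assumed)
residuals = rng.multivariate_normal(np.zeros(dim), S, size=n)

S_inv = np.linalg.inv(S)
q = np.einsum("ij,jk,ik->i", residuals, S_inv, residuals)  # r_t^T S^{-1} r_t
stat = q.sum()                                             # quadratic-form statistic
p_value = stats.chi2.sf(stat, df=n * dim)                  # chi-square under the null
print(f"statistic={stat:.1f}, p-value={p_value:.3f}")
```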
7.
Van Sint Jan S.L., Clapworthy G.J., Rooze M. IEEE Computer Graphics and Applications, 2000, 20(2):46-52
Modern medical imaging lets us create accurate computer models of anatomical structures. Some of these models can be animated to visualize joint kinematics. A model's size and complexity can significantly affect the efficiency of any desired animation or interactive manipulation. The model normally takes the form of a polygonal mesh; the more facets in the mesh, the slower the rendering process. Beyond a certain limit, real-time interaction becomes impractical because the frame rate (image regeneration) is too slow. The many methods proposed for reducing the number of polygons in computer models normally entail a loss of detail in the final model. In some applications, retaining detail may be important. Joint kinematics, which we were investigating, falls into this category, so we sought a way to reduce the input data volume without a corresponding decrease in isosurface resolution. Our application requires only the bone's external surface, which is found by segmenting radiological data obtained from computerized tomodensitometry (a CT scan). By analyzing local bone morphology, we were able to identify and eliminate nearly 50 percent of the polygons generated by standard segmentation techniques, while retaining the full resolution of the required isosurface. The article discusses the relationships between bone morphology and bone intensity in a medical imaging data set and describes how these relationships can help reduce the polygon count in the generated surface models.
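The idea of discarding polygons that do not belong to the required external surface can be sketched as follows. This is a toy example of mine: a synthetic hollow sphere stands in for cortical bone, and a centroid-radius test stands in for the article's morphology analysis; skimage's marching cubes stands in for the segmentation step.

```python
# Minimal sketch: keep only the external isosurface of a hollow structure.
# Assumptions (mine): synthetic volume, radius threshold instead of the
# article's bone-morphology/intensity criterion.
import numpy as np
from skimage import measure

# Synthetic CT-like volume: a hollow sphere (outer radius 25, inner 15).
x, y, z = np.mgrid[0:64, 0:64, 0:64]
r = np.sqrt((x - 31.5)**2 + (y - 31.5)**2 + (z - 31.5)**2)
volume = ((r > 15) & (r < 25)).astype(float)

verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# Marching cubes yields both the outer and the inner (internal) surface.
# Keep only faces on the external shell, identified here by centroid radius.
centroids = verts[faces].mean(axis=1)
radii = np.linalg.norm(centroids - 31.5, axis=1)
outer_faces = faces[radii > 20.0]
print(f"polygons before: {len(faces)}, after: {len(outer_faces)}")
```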
8.
9.
Previous research has emphasized the virtues of customer insights as a key source of competitive advantage. The rise of customers’ social media use allows firms to collect customer data in an ever-increasing volume and variety. However, to date, little is known about the capabilities required of firms to turn social media data into valuable customer insights and exploit these insights to create added value for customers. Based on the dynamic capabilities perspective, in particular the concept of absorptive capacity (ACAP), the authors conducted multiple case studies of seven mid-sized and large B2C firms in Switzerland and Germany. The results provide an in-depth analysis of the underlying processes of ACAP as well as contingent factors – that is, physical, human and organizational resources that underpin the firms’ ACAP.
10.
C. Palmer, J. A. Harding, R. Swarnkar, B. P. Das, R. I. M. Young. Journal of Intelligent Manufacturing, 2013, 24(2):313-330
A Moderator is a knowledge-based system that supports collaborative working by raising awareness of the priorities and requirements of other team members. However, the amount of advice a Moderator can provide is limited by the knowledge it contains about team members. Data mining techniques can help automate knowledge acquisition for a Moderator and enable hidden data patterns and relationships to be discovered to facilitate the moderation process. A novel approach is presented, consisting of a knowledge discovery framework that provides a semi-automatic methodology for generating rules by inserting relationships discovered through data mining into a generic template. To demonstrate the methodology, an application case is described in which knowledge is acquired for a Moderator to make project partners aware of how best to formulate a proposal for a European research project, by data mining summaries of successful past projects. Findings from the application case are presented.
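A minimal sketch of the template idea (the rules and wording below are hypothetical, not from the authors' framework): a relationship mined from past projects is inserted into a generic IF-THEN template to yield a Moderator rule.

```python
# Minimal sketch: turn mined associations into Moderator rules via a
# generic template. The mined associations below are made up for
# illustration; a real system would produce them by data mining.
RULE_TEMPLATE = ("IF proposal has {antecedent} "
                 "THEN raise awareness: {consequent} (confidence {conf:.0%})")

mined_rules = [
    {"antecedent": "an SME partner",
     "consequent": "highlight exploitation plans", "conf": 0.82},
    {"antecedent": "more than eight partners",
     "consequent": "stress the management structure", "conf": 0.74},
]

moderator_rules = [RULE_TEMPLATE.format(**r) for r in mined_rules]
for rule in moderator_rules:
    print(rule)
```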
11.
Generating an interpretable family of fuzzy partitions from data
In this paper, we propose a new method to construct fuzzy partitions from data. The procedure generates a hierarchy including the best partitions of all sizes from n down to two fuzzy sets. The maximum size n is determined by the data distribution and corresponds to the finest resolution level. We use an ascending method, for which a merging criterion is needed. This criterion is based on the definition of a special metric distance suitable for fuzzy partitioning, and the merging is done under semantic constraints. The distance we define does not operate on the point coordinates but directly on their membership degrees in the fuzzy sets of the partition. This leads to the notions of internal and external distances. The hierarchical fuzzy partitioning is carried out independently over each dimension and, to demonstrate the partitions' potential, they are used to build fuzzy inference systems using a simple selection mechanism. Thanks to the merging technique, all the fuzzy sets in the various partitions are interpretable as linguistic labels. The trade-off between accuracy and interpretability is the most promising aspect of our approach. Well-known data sets are investigated and the results are compared with those obtained by other authors using different techniques. The method is also applied to real-world agricultural data; the results are analyzed and weighed against those achieved by other methods, such as fuzzy clustering or discriminant analysis.
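A minimal sketch of the ascending scheme (my simplification: a plain Euclidean distance on membership degrees replaces the paper's internal/external distances and semantic constraints). At each step the closest adjacent pair of fuzzy sets is merged, producing the hierarchy of partitions from n down to two sets.

```python
# Minimal sketch: hierarchical merging of a strong triangular fuzzy
# partition, with distances computed on membership degrees (not on the
# point coordinates). Merging criterion simplified vs. the paper.
import numpy as np

def triangular_partition(data, n):
    """Membership matrix of a strong triangular partition with n fuzzy sets."""
    centers = np.linspace(data.min(), data.max(), n)
    mu = np.zeros((len(data), n))
    for j, c in enumerate(centers):
        left = centers[j - 1] if j > 0 else c
        right = centers[j + 1] if j < n - 1 else c
        rising = (data - left) / (c - left) if c > left else np.ones_like(data)
        falling = (right - data) / (right - c) if right > c else np.ones_like(data)
        mu[:, j] = np.clip(np.minimum(rising, falling), 0.0, 1.0)
    return mu

data = np.sort(np.random.default_rng(1).uniform(0, 10, 100))
mu = triangular_partition(data, 7)

while mu.shape[1] > 2:
    # Distance between adjacent sets, measured on membership degrees.
    d = [np.linalg.norm(mu[:, j] - mu[:, j + 1]) for j in range(mu.shape[1] - 1)]
    j = int(np.argmin(d))
    merged = mu[:, j] + mu[:, j + 1]   # merging preserves the sum-to-one property
    mu = np.column_stack([mu[:, :j], merged, mu[:, j + 2:]])
    print(f"merged sets {j} and {j + 1}; partition size now {mu.shape[1]}")
```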
12.
13.
This paper describes the processing and transformation of medical data from a clinical database into a statistical data matrix. Precise extraction and linking tools must be available for the desired data to be processed for statistical purposes. We show that flexible mechanisms are required for the different types of users, such as physicians and statisticians. In our retrieval tools we use logical queries based on operands and operators. The paper describes the method and application of the operators with which the desired matrix is created through a process of selection and linking. Examples with a Kaplan-Meier function and time-dependent covariates demonstrate how our model is useful for different user groups.
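A minimal sketch of the selection-and-linking step (illustrative schema of my own, not the paper's system): records are selected with a logical query whose operands are fields and whose operators combine conditions, then linked into a patient-by-variable matrix suitable for statistics.

```python
# Minimal sketch: from clinical records to a statistical data matrix.
# The table schema and test names ("hb", "crp") are hypothetical.
import pandas as pd

labs = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3],
    "test":    ["hb", "crp", "hb", "crp", "hb"],
    "value":   [13.5, 2.0, 10.1, 8.4, 12.2],
})

# Selection: a logical query built from operands (fields) and operators.
selected = labs[(labs["test"] == "hb") | (labs["test"] == "crp")]

# Linking: one row per patient, one column per variable.
matrix = selected.pivot_table(index="patient", columns="test", values="value")
print(matrix)
```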
14.
The Quantization Theorem I (QT I) implies that the likelihood function can be reconstructed from quantized sensor observations, provided that appropriate dithering noise is added before quantization. We present constructive algorithms to generate such dithering noise. The application to maximum likelihood estimation (MLE) is studied in particular. In short, dithering plays the same role for amplitude quantization as an anti-alias filter does for sampling, in that it enables perfect reconstruction of the likelihood function of the dithered but unquantized signal. Without dithering, the likelihood function suffers from a kind of aliasing, expressed as a counterpart to Poisson's summation formula, which makes the exact MLE intractable to compute. With dithering, it is demonstrated that standard MLE algorithms can be reused on a smoothed likelihood function of the original signal, and statistical efficiency is obtained. The implications of dithering for the Cramér–Rao Lower Bound (CRLB) are studied, and illustrative examples are provided.
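A small numerical illustration of why dithering helps (my own demo, not the paper's algorithm): with a coarse mid-tread quantizer, the sample mean of the quantized signal is biased when the signal sits deep inside one quantization bin, whereas adding uniform dither of one quantization step before quantizing removes that bias.

```python
# Minimal sketch: bias removal by dithering before quantization.
# Assumptions (mine): mid-tread rounding quantizer, uniform dither of one
# quantization step, constant-parameter estimation by the sample mean.
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0                      # quantization step
theta = 0.3                      # true parameter, deep inside one bin
x = theta + 0.05 * rng.standard_normal(100_000)   # low-noise sensor signal

def quantize(s):
    return delta * np.round(s / delta)

plain = quantize(x).mean()       # biased: almost every sample rounds to 0

dither = rng.uniform(-delta / 2, delta / 2, size=x.shape)
dithered = quantize(x + dither).mean()            # unbiased on average

print(f"true {theta:.3f} | undithered {plain:.3f} | dithered {dithered:.3f}")
```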
15.
Generating finite-state transducers for semi-structured data extraction from the Web
Integrating a large number of Web information sources may significantly increase the utility of the World-Wide Web. A promising approach to integration is a Web information mediator that provides seamless, transparent access for clients. Information mediators need wrappers to access a Web source as a structured database, but building wrappers by hand is impractical. Previous work on wrapper induction is too restrictive to handle the many Web pages that contain tuples with missing attributes, multiple values, variant attribute permutations, exceptions, and typos. This paper presents SoftMealy, a novel wrapper representation formalism based on a finite-state transducer (FST) and contextual rules. This approach can wrap a wide range of semi-structured Web pages because FSTs can encode each different attribute permutation as a path. A SoftMealy wrapper can be induced from a handful of labeled examples using our generalization algorithm. We have implemented this approach in a prototype system and tested it on real Web pages. The performance statistics show that the sizes of the induced wrappers, as well as the required training effort, are linear with respect to the structural variance of the test pages. Our experiments also show that the induced wrappers generalize over unseen pages.
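The FST idea can be sketched in a few lines (toy contextual rules of my own, not an induced SoftMealy wrapper): each state emits the text up to its contextual separator, and a tuple with a missing attribute simply leaves that slot empty.

```python
# Minimal sketch: a tiny transducer-style extractor driven by contextual
# rules. The record format (name - phone;) and the rules are made up.
import re

# Contextual rules: (state, separator regex that ends the state's output).
rules = [
    ("name",  re.compile(r"\s*-\s*")),
    ("phone", re.compile(r"\s*;\s*|$")),
]

def extract(record):
    tuple_, pos = {}, 0
    for state, sep in rules:
        m = sep.search(record, pos)
        if m is None:                     # attribute missing: skip this state
            tuple_[state] = None
            continue
        tuple_[state] = record[pos:m.start()] or None
        pos = m.end()
    return tuple_

page = ["Alice Smith - 555-0101;", "Bob Jones - ;"]
for line in page:
    print(extract(line))   # the second record has a missing phone attribute
```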
16.
The wide adoption of Electronic Medical Records (EMRs) has caused explosive growth of medical and clinical data, making medical search technology critical for finding useful patient information in large medical datasets. However, high-quality medical search is a challenging task, in particular because of the inherent complexity and ambiguity of medical terminology. In this paper, by exploiting the uncertainty in ambiguous medical queries, we propose a novel semantic-based approach to diversity-aware retrieval of EMRs, i.e., both relevance and novelty are considered in EMR ranking. With the support of medical domain ontologies, we first mine all the potential semantics (concepts and the relations between them) from a user query and use them to model the query's multiple aspects. Then we propose a novel diversification strategy, which considers not only aspect importance but also aspect similarity, to perform diversity-aware EMR ranking. A real-world pilot study, which applies the proposed medical search approach to improve the secondary use of EMRs, is reported. We believe that our experience can serve as an important reference for the development of similar applications in a medical data utilization and sharing environment.
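A minimal sketch of the diversification strategy (an MMR-style simplification of my own with made-up numbers; the paper's aspect mining is ontology-driven): candidates are greedily scored by relevance plus weighted aspect coverage, minus their aspect similarity to results already selected.

```python
# Minimal sketch: greedy diversity-aware ranking over query aspects.
# All data (relevance scores, aspect coverage, weights) is hypothetical.
import numpy as np

relevance = np.array([0.9, 0.85, 0.8, 0.7, 0.6])   # query relevance of 5 EMRs
aspects = np.array([[1, 0, 0],                      # which of 3 query aspects
                    [1, 0, 0],                      # each record covers
                    [0, 1, 0],
                    [0, 0, 1],
                    [1, 1, 0]], dtype=float)
importance = np.array([0.5, 0.3, 0.2])              # aspect importance (assumed)

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

lam = 0.5
selected, remaining = [], list(range(len(relevance)))
while remaining:
    def score(i):
        coverage = aspects[i] @ importance          # important aspects covered
        redundancy = max((cosine(aspects[i], aspects[j]) for j in selected),
                         default=0.0)               # aspect similarity to shown results
        return lam * relevance[i] + (1 - lam) * coverage - lam * redundancy
    best = max(remaining, key=score)
    selected.append(best)
    remaining.remove(best)
print("diversified ranking:", selected)
```

Note how the second-ranked record is no longer the second-most relevant one: once an aspect is covered, near-duplicate records are pushed down in favor of novel aspects.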
17.
18.
Park J.S., Jung Y.W., Lee J.W., Shin D.S., Chung M.S., Riemer M., Handels H. Computer Methods and Programs in Biomedicine, 2008, 92(3):257-266
For the Visible Korean Human (VKH), a male cadaver was serially ground off to acquire serially sectioned images (SSIs) of a whole human body. Thereafter, more than 700 structures in the SSIs were outlined to produce detailed segmented images; the SSIs and segmented images were volume- and surface-reconstructed to create three-dimensional models. For outlining and reconstruction, popular software (Photoshop, MRIcro, Maya, AutoCAD, 3ds Max, and Rhino) was mainly used; the technique can be reproduced by other investigators to create their own images. For refining the segmentation and volume reconstruction, the VOXEL-MAN system was used. The continuously upgraded technique was applied to a female cadaver's pelvis to produce SSIs with 0.1 mm intervals and 0.1 mm × 0.1 mm pixels. The VKH data, distributed worldwide, have encouraged researchers to develop virtual dissection, virtual endoscopy, and virtual lumbar puncture, contributing to medical education and clinical practice. In the future, a virtual image library including all the Visible Human Project data, Chinese Visible Human data, and VKH data will hopefully be established, from which users will be able to download a data set for medical applications.
19.
J R Campbell, N Givner, C B Seelig, A L Greer, K Patil, R S Wigton, T Tape. M.D. Computing: Computers in Medical Practice, 1989, 6(5):282-287
Formal studies of computerized information systems for ambulatory patients are rare. As part of an evaluation of the effects of such a system on clinic function, we divided the residents in our teaching clinic into a study group with access to COSTAR and a control group with access to conventional medical records alone. Nurses and clerical personnel in the clinic were allowed to use the computerized records only for patients of residents in the study group. We sampled the attitudes of nurses and clerical personnel toward use of the computer and performed detailed time studies of patient flow in the clinic. Responses to questionnaires reflected acceptance of computerization by the personnel sampled, who favored COSTAR records over conventional records, primarily because of the increased availability of information for telephone management and demand care. The residents never became facile users of COSTAR--a problem that we attribute to the infrequency of their clinic sessions. As a result, and because the workloads of residents using COSTAR were larger, waiting times were longer in clinics attended by these residents. Overall, the most intensive users of the computerized medical records were not the physicians. Improved productivity and better use of time among the nurses and clerical personnel were thought to outweigh the residents' perceptions.
20.
Generating software test data by evolution
Michael C.C., McGraw G., Schatz M.A. IEEE Transactions on Software Engineering, 2001, 27(12):1085-1110
This paper discusses the use of genetic algorithms (GAs) for automatic software test data generation. This research extends previous work on dynamic test data generation, in which the problem of test data generation is reduced to one of minimizing a function. In our work, the function is minimized by using one of two genetic algorithms in place of the local minimization techniques used in earlier research. We describe the implementation of our GA-based system and examine the effectiveness of this approach on a number of programs, one of which is significantly larger than those for which results have previously been reported in the literature. We also examine the effect of program complexity on the test data generation problem by executing our system on a number of synthetic programs of varying complexity.
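A minimal sketch of the idea (a toy GA of my own, not the authors' system): test data generation is cast as minimizing a "branch distance" that reaches zero exactly when the target branch in the program under test is taken.

```python
# Minimal sketch: GA-style search for test inputs that cover a target
# branch. The program under test and the GA parameters are illustrative.
import random

def program_under_test(a, b):
    if a * 2 == b + 10:            # target branch to cover
        return "target"
    return "other"

def branch_distance(a, b):
    return abs(a * 2 - (b + 10))   # 0 iff the target branch is taken

random.seed(0)
pop = [(random.randint(-100, 100), random.randint(-100, 100)) for _ in range(30)]

for gen in range(100):
    pop.sort(key=lambda ind: branch_distance(*ind))   # fitness = branch distance
    if branch_distance(*pop[0]) == 0:
        break
    parents = pop[:10]                                # selection: keep the best
    pop = parents + [
        (random.choice(parents)[0] + random.randint(-5, 5),   # mutate a
         random.choice(parents)[1] + random.randint(-5, 5))   # mutate b
        for _ in range(20)
    ]
print(f"generation {gen}: input {pop[0]} -> {program_under_test(*pop[0])}")
```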