Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Handle vibration from equipment or machines influences musculoskeletal activity as well as comfort in handling. New technology can be worse than no technology if it is not developed correctly, as ergonomic research has clearly demonstrated the relationship between injury risk and poorly designed hand tools. Clinical and epidemiological studies have shown that operators of handheld power tools are prone to developing various vibration‐induced disorders of the hand and arm, collectively referred to as “hand–arm vibration syndrome.” The direction of vibration has a great influence on the transmitted vibration. The present study focuses on the effects of low‐frequency vertical vibration transmitted from hand to shoulder through handles of different sizes. An electrodynamic exciter is used to apply vibration to vertical handles of four different diameters. PULSE LabShop software is used to evaluate the magnitude of vibration in different frequency bands. Vibration data were acquired along the yh axis at the wrist, elbow, and shoulder for bent‐arm and extended‐arm postures with a vibration excitation of 4.5 m/s2. Transmissibility characteristics are computed to determine the influence of handle diameter on yh‐axis vibration transmitted to the hand–arm system. The magnitude of vibration transmitted to the hand, elbow, and shoulder was found to depend on handle size: larger handles cause higher vibration transmissibility. The results also show that the extended‐arm posture amplifies the transmitted vibration slightly more than the bent‐arm posture. © 2011 Wiley Periodicals, Inc.
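The transmissibility computation the abstract describes is a per-frequency-band ratio of response acceleration at a body location to the excitation acceleration at the handle. A minimal sketch follows; the band values are hypothetical illustrations, not the study's measurements:

```python
# Vibration transmissibility: ratio of the response acceleration at a body
# location to the input acceleration at the handle, per frequency band.
# Values > 1 indicate the posture/handle combination amplifies vibration.

def transmissibility(response, excitation):
    """Per-band transmissibility T(f) = a_response(f) / a_excitation(f)."""
    return [r / e for r, e in zip(response, excitation)]

# Hypothetical band magnitudes in m/s^2 (the study excites at 4.5 m/s^2).
handle = [4.5, 4.5, 4.5]   # excitation at the handle, three bands
wrist  = [5.2, 3.9, 2.1]   # illustrative measured response at the wrist

print(transmissibility(wrist, handle))
```

In the study's terms, a larger handle diameter corresponds to higher ratios in the response lists.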

2.
Natural language processing (NLP) has been used to process text pertaining to patient records and narratives. However, most of the methods used were developed for specific systems, so new research is necessary to assess whether such methods can be easily retargeted to new applications and goals with the same performance. In this paper, open‐source tools are reused as building blocks on which a new system is built. The aim of our work is to evaluate the applicability of current NLP technology to a new domain: automatic knowledge acquisition of diagnostic and therapeutic procedures from clinical practice guideline free‐text documents. To do this, two publicly available syntactic parsers, several terminology resources, and a tool designed to identify semantic predications were each tailored to increase their individual performance. We apply this new approach to 171 sentences selected by experts from a clinical guideline, and compare the results with those of the tools applied without tailoring. The results show that, with some adaptation, open‐source NLP tools can be retargeted to new tasks, providing accuracy equivalent to methods designed for specific tasks.

3.
The extensible markup language XML can be used to support the integration of several component programming environments to create a flexible physical simulation system. Data exchange via open‐standard‐based plain text files allows system components to be loosely coupled, rather than combined into an integrated development environment, so that the most appropriate tools can be used for each component and the system can be extended with minimal disruption. This paper details an example application using this technology to configure a simulation of robotic manipulation. Those parts of the system that require real‐time data exchange use simple UNIX socket‐based interactions, which are configured using shared XML setup files. The approach provides a reusable template for other, similar projects. Copyright © 2004 John Wiley & Sons, Ltd.
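The loose-coupling idea above can be sketched in a few lines: each component reads its connection parameters from a shared XML setup file instead of hard-coding them. The element and attribute names below are illustrative, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical shared setup file: components discover each other's socket
# endpoints here, so any component can be swapped without code changes.
SETUP = """
<simulation>
  <component name="manipulator" host="localhost" port="5600"/>
  <component name="visualiser"  host="localhost" port="5601"/>
</simulation>
"""

def socket_config(xml_text, component):
    """Return (host, port) for a named component from the XML setup."""
    root = ET.fromstring(xml_text)
    node = root.find(f"component[@name='{component}']")
    return node.get("host"), int(node.get("port"))

print(socket_config(SETUP, "manipulator"))  # ('localhost', 5600)
```

A component would then pass this tuple to a standard socket `connect()` call; only the XML file changes when the deployment does.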

4.
Calibration accuracy is one of the most important factors affecting the user experience in mixed reality applications. For a typical mixed reality system built on an optical see‐through head‐mounted display, a key problem is how to guarantee the accuracy of hand–eye coordination by decreasing the instability of the eye and the head‐mounted display in long‐term use. In this paper, we propose a real‐time latent active correction algorithm to decrease hand–eye calibration errors accumulated over time. Experimental results show that the proposed algorithm guarantees an effective calibration result and improves the user experience. Based on the proposed system, experiments with virtual buttons are also designed, and the interactive performance for different scales of virtual buttons is presented. Finally, a direct physics‐inspired input method is constructed, which offers performance similar to the gesture‐based input method but with a lower learning cost due to its naturalness.

5.
This research addresses integration problems and describes a novel model‐driven approach that aims to achieve a higher degree of interoperability among software development tools from different technological spaces (TSs) by representing tool data as models. The proposed concept introduces a way to integrate various software‐related tools and aims to provide a modular syntax for tool integration that supports collaboration among different tools. Because model‐driven tool integration has a wide scope and it is difficult to cover all of its aspects in one study, the proposed approach has been tested through a case study demonstrating a single aspect of it. It is shown that model‐driven tool integration between different TSs is possible based on the proposed concept, and a formulation of the approach is provided. As the results indicate, the proposed system integrates selected software‐related tools from different TSs and enables them to use each other's capabilities. This work contributes to standardization efforts for model‐driven tool integration. Finally, further research opportunities are outlined. Copyright © 2016 John Wiley & Sons, Ltd.

6.
Heterogeneous performance prediction models are valuable tools for accurately predicting application runtime, allowing efficient design space exploration and application mapping. Existing performance models require intricate system architecture knowledge, making the modeling task difficult. In this research, we propose a regression‐based performance prediction framework for general‐purpose graphics processing unit (GPGPU) clusters that statistically abstracts the system architecture characteristics, enabling performance prediction without detailed system architecture knowledge. The regression‐based framework targets deterministic synchronous iterative algorithms using our synchronous iterative GPGPU execution model and is broken into two components: a computation component that models the GPGPU device and host computations, and a communication component that models the network‐level communications. The computation component regression models use algorithm characteristics such as the number of floating‐point operations and total bytes as predictor variables and are trained using several small, instrumented executions of synchronous iterative algorithms that span a range of floating‐point-operations‐to‐byte requirements. The regression models for network‐level communications are developed using micro‐benchmarks and employ data transfer size and processor count as predictor variables. Our performance prediction framework achieves prediction accuracy over 90% compared with actual implementations for several tested GPGPU cluster configurations. The end goal of this research is to offer the scientific computing community an accurate and easy‐to‐use performance prediction framework that empowers users to utilize heterogeneous resources optimally. Copyright © 2013 John Wiley & Sons, Ltd.
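The core regression idea can be sketched with a single predictor: fit runtime as a linear function of floating-point operation count from a few small instrumented runs, then extrapolate to a larger configuration. The numbers below are synthetic, and the paper's actual models use multiple predictors per component:

```python
# Least-squares fit of runtime vs. FLOP count from small training runs,
# used to predict the runtime of a larger (untrained) problem size.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

flops   = [1e9, 2e9, 4e9, 8e9]       # instrumented training executions
runtime = [0.52, 1.01, 2.05, 4.02]   # measured seconds (synthetic)

slope, intercept = fit_line(flops, runtime)
predicted = slope * 16e9 + intercept  # extrapolate to a larger problem
```

The paper's reported >90% accuracy corresponds to such predictions landing within 10% of measured runtimes across cluster configurations.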

7.
To support self-regulated learning (SRL), computer-based learning environments (CBLEs) are often designed to be open-ended and multidimensional. These systems incorporate diverse features that allow students to enact and reveal their SRL strategies via the choices they make. However, research shows that students' use of such features is limited; students often neglect SRL-supportive tools in CBLEs. In this study, we examined middle school students' feature use and strategy development over time using a teachable agent system called Betty's Brain. Students learned about climate change and thermoregulation in two units spanning several weeks. Learning was assessed using a pretest–posttest design, and students' interactions with the system were logged. Results indicated that use of SRL-supportive tools was positively correlated with learning outcomes. However, promising strategy patterns weakened over time due to shallow strategy development, which also negatively impacted the efficacy of the system. Although students seemed to acquire one beneficial strategy, they did so at the cost of other beneficial strategies. Understanding this phenomenon may be a key avenue for future research on SRL-supportive CBLEs. We consider two hypotheses for explaining and perhaps reducing shallow strategy development: a student-centered hypothesis related to “gaming the system,” and a design-centered hypothesis regarding how students are scaffolded via the system.

8.
Recent trends in manufacturing and health care move these two work systems closer together from a system ergonomics point of view. Individual treatment of products, especially patients, by specialists in a distributed environment demands information technology (IT)‐based support suitable for complex systems. IT‐based support of processes in complex systems is difficult due to the lack of standard processes. IT support also means rethinking processes to exploit efficiency potentials. Close cooperation between users and software developers is needed to increase the ergonomic quality of the system. Therefore, suitable tools are needed: UML is available as the standard industry modeling language, Zope/Plone as the quasi‐standard for content management systems, SimPy as an object‐oriented simulation tool for event‐triggered processes, and ACT‐R as a powerful cognitive architecture for simulating human information processing. The integration of these tools enables system‐ergonomic support of processes in the complex work system as well as of the development and deployment process. It is the basis of an integral system‐ergonomic approach to IT‐based process management. Knowledge gained during process analysis either enters the models or leads to the extension and adaptation of the tool chain. The models serve as a basis for discussion among system ergonomists, programmers, and specialists from the work system. Further, they are understood by simulation and process support tools. Transcoding efforts between humans with different professional backgrounds and machines are reduced, and the flexibility demanded by complex systems is met. © 2008 Wiley Periodicals, Inc.

9.
This paper addresses the specified‐time control problem for control‐affine systems and rigid bodies, wherein the specified time duration can be designed in advance according to the task requirements. Using the time‐rescaling approach, a novel framework for solving the specified‐time control problem is proposed: the original systems are converted into transformation systems, based on which specified‐time control laws for both control‐affine systems and rigid bodies are studied. Compared with existing approaches, our proposed specified‐time control laws can be derived from known stabilization control laws. To the best of our knowledge, this is the first time a transformation‐system–based specified‐time control framework for control‐affine systems and rigid‐body dynamics has been proposed. To further improve the convergence performance of specified‐time control, a finite‐time attitude synchronization control law for rigid bodies on rotation matrices is proposed, and thereby a finite‐time–based specified‐time control law is designed. Finally, numerical simulations and SimMechanics experiments are provided to illustrate the effectiveness of the theoretical results.
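The time-rescaling approach mentioned above typically works by mapping the finite design interval onto an infinite horizon, so that an ordinary asymptotically stabilizing law in the rescaled time becomes a specified-time law in real time. One common form from the prescribed-time control literature (the paper's specific transformation may differ) is

```latex
\tau(t) = \ln\frac{T}{T - t}, \qquad t \in [0, T),
```

which maps $[0, T)$ onto $[0, \infty)$: convergence as $\tau \to \infty$ in the transformation system corresponds to convergence by the specified time $t = T$ in the original system.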

10.
We present a non‐trivial case study designed to highlight some of the practical issues that arise when using mixed‐µ or complex‐µ robust synthesis methodologies. By considering a multi‐input multi‐output three‐cart mass–spring–dashpot (MSD) with uncertain parameters and dynamics, it is demonstrated that optimized performance (disturbance‐rejection) is reduced as the level of uncertainty in one or two real parameters is increased. Comparisons are made (a) in the frequency domain, (b) by RMS values of key signals and (c) in time‐domain simulations. The mixed‐µ controllers designed are shown to yield superior performance as compared with the classical complex‐µ design. The singular value decomposition analysis shows the directionality changes resulting from different uncertainty levels and from the use of different frequency weights. The nominal and marginal stability regions of the closed‐loop system are studied and discussed, illustrating how stability margins can be extended at the cost of reducing performance. Copyright © 2008 John Wiley & Sons, Ltd.

11.
We have developed a machine learning framework to accurately extract complex genetic interactions from text. Employing type‐specific classifiers, this framework processes research articles to extract various biological events. Subsequently, the algorithm identifies regulation events that take other events as arguments, allowing a nested structure of predictions. All predictions are merged into an integrated network, useful for visualization and for deducing new biological knowledge. In this paper, we discuss several design choices for an event‐based extraction framework. These detailed studies help improve existing systems, as illustrated by our system's relative performance gain of 10% compared with the official results of the recent BioNLP'09 Shared Task. Our framework now achieves state‐of‐the‐art performance with 37.43% recall, 54.81% precision, and 44.48% F‐score. We further present the first study of feature selection for bio‐molecular event extraction from text. While producing more cost‐effective models, feature selection can also lead to better insight into the complexity of the challenge. Finally, this paper tries to bridge the gap between theoretical relation extraction from text and experimental work on bio‐molecular interactions by discussing interesting opportunities to employ event‐based text mining tools for real‐life tasks such as hypothesis generation, database curation, and knowledge discovery.
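The reported scores are internally consistent: the F-score is the harmonic mean of precision and recall, and plugging in the paper's numbers recovers the reported value.

```python
# F-score as the harmonic mean of precision and recall.

def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f_score(54.81, 37.43), 2))  # 44.48, matching the paper
```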

12.
In recent years, extensive research has been conducted in the area of simulation to model large complex systems and understand their behavior, especially in parallel and distributed systems. At the same time, a variety of design principles and approaches for computer‐based simulation have evolved. As a result, an increasing number of simulation tools have been designed and developed. Therefore, the aim of this paper is to develop a comprehensive taxonomy for design of computer‐based simulations, and apply this taxonomy to categorize and analyze various simulation tools for parallel and distributed systems. Copyright © 2004 John Wiley & Sons, Ltd.

13.
A method was developed to accurately predict the risk of injuries in industrial jobs based on datasets not meeting the assumptions of parametric statistical tools, or being incomplete. Previous research used a backward‐elimination process for feedforward neural network (FNN) input variable selection. Simulated annealing (SA) was used as a local search method in conjunction with a conjugate‐gradient algorithm to develop an FNN. This article presents an incremental step in the use of FNNs for ergonomics analyses, specifically the use of forward selection of input variables. Advantages to this approach include enhancing the effectiveness of the use of neural networks when observations are missing from ergonomics datasets, and preventing overspecification or overfitting of an FNN to training data. Classification performance across two methods involving the use of SA combined with either forward selection or backward elimination of input variables was comparable for complete datasets, and the forward‐selection approach produced results superior to previously used methods of FNN development, including the error back‐propagation algorithm, when dealing with incomplete data. © 2004 Wiley Periodicals, Inc. Hum Factors Man 14: 31–49, 2004.
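Greedy forward selection of input variables, as contrasted above with backward elimination, can be sketched as follows. The scorer here is a toy stand-in for training and validating an FNN on a candidate variable subset; the variable names are purely illustrative:

```python
# Forward selection: start with no inputs, repeatedly add the variable that
# most improves the validation score, and stop when no addition helps.

def forward_select(variables, score):
    """score(subset) -> higher is better; returns the selected subset."""
    selected, best = [], score([])
    while True:
        gains = [(score(selected + [v]), v)
                 for v in variables if v not in selected]
        if not gains:
            break
        top_score, top_var = max(gains)
        if top_score <= best:          # no candidate improves the score
            break
        selected.append(top_var)
        best = top_score
    return selected

# Toy scorer: only 'force' and 'posture' carry signal in this example.
useful = {"force": 0.3, "posture": 0.2}
toy_score = lambda subset: sum(useful.get(v, 0.0) for v in subset)
print(forward_select(["force", "posture", "noise"], toy_score))
```

In the article's setting, `score` would wrap an SA/conjugate-gradient FNN training run, which is why limiting the number of selected inputs also limits overfitting.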

14.
This work is designed to control the movement of hand structural agents under external action, using the implicit-animation-driven-by-explicit-animation technique (AI‐CAE technique). Starting from the configuration of a hand at rest obtained by a 3D scanner, and after meshing the structural agents, we seek the configuration of the rigid agents under the external action of an orthopaedic surgeon and the interactions between deformable and rigid agents. We have developed a model and software tools to support this interactive application with adaptive execution. The first contribution comes from the notation and definition of a versatile multi‐body system dedicated to explicit and implicit animation. The second contribution comes from the implicit animation driven by explicit animation itself, and from its ability to mimic the role of cartilages and ligaments. The resulting technique is applied to the bone structure consistency of a specific human hand in the context of virtual hand orthopaedic surgery. The versatile specific multi‐body is made up of hierarchical interacting agents, conceivable as a construction set of rigid bones with cartilages–ligaments and underlying links. The explicit animation produces a desired configuration from geometric command parameters of torsion, flexion, pivot, and axis shifting, given in a scenario subdivided into temporal sequences. The implicit animation controls the movement by implementing a physics‐based model and fuzzy constraints on position and orientation. It gives a better configuration than the explicit animation because it takes into account the interactions between agents, and it gives a neat solution without the complexity problems of geometric modelling. A methodology based on the AI‐CAE technique is discussed, and medical expertise and validation tests are presented. Copyright © 2011 John Wiley & Sons, Ltd.

15.
We present a real‐time approach for acquiring 3D objects with high fidelity using hand‐held consumer‐level RGB‐D scanning devices. Existing real‐time reconstruction methods typically do not take the point of interest into account, and thus might fail to produce clean reconstruction results of desired objects due to distracting objects or backgrounds. In addition, any changes in background during scanning, which can often occur in real scenarios, can easily break up the whole reconstruction process. To address these issues, we incorporate visual saliency into a traditional real‐time volumetric fusion pipeline. Salient regions detected from RGB‐D frames suggest user‐intended objects, and by understanding user intentions our approach can put more emphasis on important targets, and meanwhile, eliminate disturbance of non‐important objects. Experimental results on real‐world scans demonstrate that our system is capable of effectively acquiring geometric information of salient objects in cluttered real‐world scenes, even if the backgrounds are changing.

16.
The research presented in this paper aims to support the macroergonomics adoption improvement process by developing a broader understanding of relationships between key macroergonomics factors and management styles. The methodology involves knowledge acquisition, identifying and categorizing a holistic set of key criteria for the macroergonomics adoption process. The Analytic Hierarchy Process is suggested as a multi‐attribute decision‐making methodology to effectively enhance adoption of macroergonomics and to improve management decision performance in measuring and comparing the overall performance of different management styles based on macroergonomic criteria. The study found that in terms of company culture, participation, human capability, and attitudes, the best management style for improving macroergonomics adoption is Management by Values. © 2004 Wiley Periodicals, Inc. Hum Factors Man 14: 353–377, 2004.
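The Analytic Hierarchy Process step described above derives priority weights for the alternatives from a pairwise comparison matrix, conventionally via its principal eigenvector. A minimal sketch using power iteration follows; the comparison matrix is illustrative, not the study's elicited judgements:

```python
# AHP priority weights: power iteration toward the principal eigenvector
# of a reciprocal pairwise comparison matrix, normalized to sum to 1.

def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]          # renormalize each iteration
    return w

# Saaty's scale: 1 = equal importance, 3 = moderate, 5 = strong preference.
pairwise = [[1,   3,   5],
            [1/3, 1,   3],
            [1/5, 1/3, 1]]
weights = ahp_weights(pairwise)  # first alternative dominates
```

In the study's setting, the rows/columns would be management styles compared against each macroergonomic criterion, and the resulting weights aggregated up the hierarchy.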

17.
R.W. Ehrich, Automatica, 1983, 19(6): 655–662
As the complexity of human-computer interfaces increases, those who use such interfaces as well as those responsible for their design have recognized an urgent need for substantive research in the human factors of software development. Because of the magnitude of the task of producing software for human-computer interfaces, appropriate tools are needed for defining and improving such interfaces, both in research and in production environments. DMS (dialogue management system) is a complete system for defining, modifying, simulating, executing, and monitoring human-computer dialogues. It is based upon the hypotheses that: (1) dialogue software should be designed separately from the code that implements the computational parts of an application, and (2) different roles are defined for the dialogue author and the programmer to achieve that goal. This paper discusses several of the technical aspects underlying the design of DMS.

18.
Recent educational computer‐based technologies have offered promising lines of research that promote social constructivist learning goals, develop skills required to operate in a knowledge‐based economy (Roschelle et al. 2000), and enable more authentic science‐like problem‐solving. In our research programme, we have been interested in combining these aims for curricular reform in school science by developing innovative and progressive hand‐held and wearable computational learning tools. This paper reports on one such line of research in which the learning outcomes of two distinct technological platforms (wearable computers and Palm hand‐helds) are compared using the same pedagogical strategy of Participatory Simulations. Participatory Simulations use small wearable or hand‐held computers to engage participants in simulations that enable inquiry and experimentation (Colella 2000), allowing students to act out the simulation themselves. The study showed that the newer and more easily distributable version of Participatory Simulations on Palms was equally as capable as the original Tag‐based simulations in engaging students collaboratively in a complex problem‐solving task. We feel that this robust and inexpensive technology holds great promise for promoting collaborative learning as teachers struggle to find authentic ways to integrate technology into the classroom in addition to engaging and motivating students to learn science.

19.
Most visual diagramming tools provide point‐and‐click construction of computer‐drawn diagram elements using a conventional desktop computer and mouse. SUMLOW is a unified modelling language (UML) diagramming tool that uses an electronic whiteboard (E‐whiteboard) and sketching‐based user interface to support collaborative software design. SUMLOW allows designers to sketch UML constructs, mixing different UML diagram elements, diagram annotations, and hand‐drawn text. A key novelty of the tool is the preservation of hand‐drawn diagrams and support for manipulation of these sketches using pen‐based actions. Sketched diagrams can be automatically ‘formalized’ into computer‐recognized and ‐drawn UML diagrams and then exported to a third party CASE tool for further extension and use. We describe the motivation for SUMLOW, illustrate the use of the tool to sketch various UML diagram types, describe its key architecture abstractions and implementation approaches, and report on two evaluations of the toolset. We hope that our experiences will be useful for others developing sketching‐based design tools or those looking to leverage pen‐based interfaces in software applications. Copyright © 2007 John Wiley & Sons, Ltd.

20.
Web-based learning is widespread in educational settings. The popularity of Web-based learning is in great measure due to its flexibility, some of which is provided by multiple navigation tools. Different navigation tools offer different functions. Therefore, it is important to understand how navigation tools are used by learners with different backgrounds, knowledge, and skills. This article presents two empirical studies in which data-mining approaches were used to analyze learners' navigation behavior. The results indicate that prior knowledge and subject content are two potential factors influencing the use of navigation tools. In addition, the lack of appropriate use of navigation tools may adversely influence learning performance. The results have been integrated into a model that can help designers develop Web-based learning programs and other Web-based applications tailored to learners' needs.
