Similar Literature
20 similar documents found.
1.
A model for predicting the incidence rate of carpal tunnel syndrome (CTS) for a given job was developed using known biomechanical data, mechanical properties of human tendons, and reliability engineering techniques to simplify the problem. In addition, time-dependent stress-strength interference theory was used to quantify the stress on the tendons during a job cycle, based on wrist position and grip strength, and to estimate the tendon failure rate (or CTS incidence) for a given job. Higher failure probabilities were predicted for greater wrist deviations, for higher grasp forces, for females as compared to males, for wrist extension as compared to wrist flexion, and for two-fingered pinches as compared to four-fingered grasps. The predictions closely matched previously reported CTS incidence rates for a poultry thigh boning task.
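As a hedged illustration of the stress-strength interference idea underlying this model (not the authors' time-dependent formulation), the sketch below estimates a failure probability by assuming the tendon stress imposed by a job cycle and the tendon strength are independent and normally distributed; all distribution parameters are hypothetical.

```python
import math

def interference_failure_probability(stress_mean, stress_sd, strength_mean, strength_sd):
    """Stress-strength interference: P(stress > strength) for two
    independent normal random variables (a textbook simplification,
    not the paper's time-dependent model)."""
    # The difference D = strength - stress is normal; failure occurs when D < 0.
    diff_mean = strength_mean - stress_mean
    diff_sd = math.sqrt(stress_sd**2 + strength_sd**2)
    z = diff_mean / diff_sd
    # Standard normal CDF evaluated at -z, via the error function.
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Hypothetical numbers: a higher grasp force shifts the stress distribution up
# and raises the predicted failure (CTS incidence) probability.
print(interference_failure_probability(stress_mean=30, stress_sd=8,
                                        strength_mean=60, strength_sd=10))
print(interference_failure_probability(stress_mean=45, stress_sd=8,
                                        strength_mean=60, strength_sd=10))
```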

2.
Current Web technologies for publishing and reusing intellectual resources provide no support for clipping out portions of Web pages, combining them for local reuse, or publishing the newly composed object as a new Web page for reuse by other people. This paper shows how the meme-media architecture can be applied to the Web to provide such support. This makes the Web work as a shared repository not only for publishing intellectual resources, but also for their collaborative reediting. We propose a general framework for clipping arbitrary Web contents as live objects, for defining IO ports on such clips, and for recombining and linking clips based on both the original and user-defined relationships among them. In our previous work, we proposed two separate frameworks for these three purposes: one covering the first two, and the other the last. Here we propose a unified framework for all three purposes, together with its detailed internal mechanisms. We then show how it can easily be applied to various legacy Web applications to develop innovative services.

3.
Scientific workflows have become a valuable tool for large-scale data processing and analysis. This has led to the creation of specialized online repositories to facilitate workflow sharing and reuse. Over time, these repositories have grown to sizes that call for advanced methods to support workflow discovery, in particular similarity search. Effective similarity search requires both high-quality algorithms for the comparison of scientific workflows and efficient strategies for indexing, searching, and ranking of search results. Yet, the graph structure of scientific workflows poses severe challenges to each of these steps. Here, we present a complete system for effective and efficient similarity search in scientific workflow repositories, based on the Layer Decomposition approach to scientific workflow comparison. Layer Decomposition specifically accounts for the directed dataflow underlying scientific workflows and, compared to other state-of-the-art methods, delivers the best results for similarity search at comparably low runtimes. Stacking Layer Decomposition with even faster, structure-agnostic approaches allows us to use proven, off-the-shelf tools for workflow indexing to further reduce runtimes and scale similarity search to the sizes of current repositories.
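As a hedged sketch only (not the paper's implementation), one common way to decompose a dataflow graph into layers is to assign each task to a layer equal to its longest path from a source node, as below; the example workflow is hypothetical.

```python
from collections import defaultdict

def layer_decomposition(edges):
    """Assign each node of a directed acyclic dataflow graph to a layer
    equal to its longest distance from any source node.
    `edges` is an iterable of (upstream, downstream) pairs."""
    succ = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        indegree[v] += 1
        nodes.update((u, v))

    layer = {n: 0 for n in nodes}
    ready = [n for n in nodes if indegree[n] == 0]      # source nodes
    while ready:
        n = ready.pop()
        for m in succ[n]:
            layer[m] = max(layer[m], layer[n] + 1)      # longest-path layering
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    # Group nodes by layer; the ordered sequence of groups is the decomposition.
    groups = defaultdict(list)
    for n, l in layer.items():
        groups[l].append(n)
    return [sorted(groups[l]) for l in sorted(groups)]

# Hypothetical workflow: load -> clean -> {analyze, plot} -> report
print(layer_decomposition([("load", "clean"), ("clean", "analyze"),
                           ("clean", "plot"), ("analyze", "report"),
                           ("plot", "report")]))
# -> [['load'], ['clean'], ['analyze', 'plot'], ['report']]
```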

4.
Given the recent trends in Data Science (DS) and Sports Analytics, an opportunity has arisen for applying Machine Learning (ML) and Data Mining (DM) techniques in sports. This paper reviews background and advanced basketball metrics used in National Basketball Association (NBA) and Euroleague games. The purpose of this paper is to benchmark existing performance analytics used in the literature for evaluating teams and players. Basketball is a sport that requires a full enumeration of parameters in order to understand the game in depth and analyze strategy and decisions by minimizing unpredictability. This research provides valuable information on team and player performance analytics for a better understanding of the game. Furthermore, these analytics can be used for team composition, athlete career improvement, and assessing how they could be applied to future predictions. Hence, critical analysis of these metrics is a valuable tool for domain experts and decision makers to understand the strengths and weaknesses of the game, to better evaluate opponent teams, to see how to optimize performance indicators, to use them for team and player forecasting, and finally to make better choices for team composition.

5.
Estimates of temperate mangrove forest cover are required for management of estuarine ecosystems, particularly in areas experiencing rapid change in mangrove distribution. However, it remains challenging to obtain accurate estimates of temperate mangrove cover using remote sensing because of the unique physical features and environmental conditions of temperate mangroves. The objectives of this study were (1) to develop an improved image analysis approach for estimating temperate mangrove forest cover using remote sensing and (2) to test the new approach by comparing its accuracy and uncertainty with those of traditional image analysis. The study area (around 1500 ha) is located in the southern part of the Waitemata Harbour, Auckland, New Zealand. Landsat images and field surveys were used for mapping, and uncertainty was quantified using a Monte Carlo approach. This study showed that, using a traditional approach of mapping, misclassification was the highest source of uncertainty (up to 19% for dwarf mangroves and 16% for tall mangroves), followed by water column effects (up to 7% for dwarf mangroves and 5% for tall mangroves) and positional errors (up to 4% for dwarf mangroves and 5% for tall mangroves). Improved image analysis enhanced accuracy from 72% to 95% for tall mangroves and from 69% to 90% for dwarf mangroves. The improved approach minimized the overall uncertainty by up to 68% for tall mangroves and 57% for dwarf mangroves. Adopting this innovative approach to image analysis can improve accuracy of estimates of long-term trends in temperate mangrove forest cover.
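As a purely illustrative, hedged sketch of Monte Carlo uncertainty propagation for a mapped area (the study's actual error model for misclassification, water column effects, and positional error is not reproduced here), the example below repeatedly perturbs a hypothetical classification accuracy and recomputes an area estimate to obtain an uncertainty interval.

```python
import random

def monte_carlo_area_uncertainty(mapped_area_ha, accuracy_mean, accuracy_sd,
                                 n_draws=10000, seed=42):
    """Propagate classification-accuracy uncertainty into an area estimate.
    Each draw samples an accuracy value and scales the mapped area by it.
    Purely illustrative: each real error source would need its own model."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        acc = min(1.0, max(0.0, rng.gauss(accuracy_mean, accuracy_sd)))
        draws.append(mapped_area_ha * acc)
    draws.sort()
    low = draws[int(0.025 * n_draws)]
    high = draws[int(0.975 * n_draws)]
    return sum(draws) / n_draws, (low, high)

# Hypothetical: 400 ha of tall mangroves mapped at 95% +/- 3% accuracy.
mean_area, ci = monte_carlo_area_uncertainty(400, 0.95, 0.03)
print(f"mean {mean_area:.1f} ha, 95% interval {ci[0]:.1f}-{ci[1]:.1f} ha")
```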

6.
Achieving high performance for parallel or distributed programs often requires substantial amounts of information about the programs themselves, about the systems on which they are executing, and about specific program runs. A monitoring system is presented that collects, analyzes, and makes application-dependent monitoring information available to the programmer and to the executing program. The system may be used for off-line program analysis, for on-line debugging, and for making on-line, dynamic changes to parallel or distributed programs to enhance their performance. The authors use a high-level, uniform data model for the representation of program information and monitoring data. They show how this model may be used for the specification of program views and attributes for monitoring, and demonstrate how such specifications can be translated into efficient, program-specific monitoring code that uses alternative mechanisms for the distributed analysis and collection to be performed for the specified views. The model's utility has been demonstrated on a wide variety of parallel machines.

7.
Athletes engaged in competition, particularly those involved in international competitions such as the Olympics, are increasingly being tested for a greater variety of banned substances; it is not unusual for tests to be conducted for 100 drugs and another 400 as metabolites. Previous studies related to the accuracy of drug testing processes have failed to properly consider the effects of testing for more than one drug. In order to identify appropriate indicators for the multiple-drug case, probability theory and accuracy concepts applicable to testing for multiple drugs are developed and applied to illustrative data. The probability that a drug-free individual will test positive for drug use is shown to be much higher than indicated by previous studies, and it is shown that an increase in the number of drugs tested for yields an approximately proportionate increase in the probability that a positive test result is erroneous. Therefore, while testing for one drug may result in a comfortably low rate of false accusations of drug use, testing for multiple drugs may well result in an unacceptably high rate. Finally, a set of empirical measures is suggested for use in cases of tests for multiple drugs; the measures will provide for comparability among laboratory proficiency studies.
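As a hedged worked example of the proportionality claim (the paper's own probability model is not reproduced here), suppose each of n independently assayed drugs has a small per-assay false-positive rate alpha for a drug-free athlete; the chance of at least one positive result is then 1 - (1 - alpha)^n, which is approximately n*alpha while n*alpha remains small.

```python
def prob_any_false_positive(alpha, n_drugs):
    """Probability that a drug-free individual tests positive for at least one
    of n independently assayed drugs, each with false-positive rate alpha."""
    return 1.0 - (1.0 - alpha) ** n_drugs

# Hypothetical per-assay false-positive rate of 0.1%.
for n in (1, 10, 100, 500):
    p = prob_any_false_positive(0.001, n)
    print(f"{n:>3} drugs tested: P(at least one false positive) = {p:.3f}")
# The probability rises roughly in proportion to n, echoing the
# approximately proportionate increase noted in the abstract.
```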

8.
Experimental field data are used at different levels of complexity to calibrate, validate, and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regarding their suitability for different modelling purposes. Weighting of inputs and variables for testing was set from the perspective of crop modelling. The software allows users to adjust weights according to their specific requirements. Background information is given for the variables with respect to their relevance for modelling and possible uncertainties. Examples are given for data sets of the different classes. The framework helps to assemble high-quality databases, to select data from databases according to modellers' requirements, and gives experimentalists guidelines for experimental design and for deciding on the most effective measurements to improve the usefulness of their data for modelling, statistical analysis, and data assimilation.
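A minimal sketch of the kind of weighted scoring such a framework might use (the paper's actual variables, weights, and class thresholds are not reproduced; every name and number below is hypothetical): each data set is scored by the weighted fraction of required variables it provides and mapped to one of four suitability classes.

```python
def classify_dataset(available_vars, weights, thresholds=(0.9, 0.7, 0.5)):
    """Score a data set by the weighted share of required variables it contains
    and map the score to suitability classes 1 (best) to 4 (least suitable).
    `weights` maps variable name -> importance for crop modelling."""
    total = sum(weights.values())
    score = sum(w for v, w in weights.items() if v in available_vars) / total
    for cls, t in enumerate(thresholds, start=1):
        if score >= t:
            return cls, score
    return 4, score

# Hypothetical variable weights and an example field data set.
weights = {"weather": 3.0, "soil": 2.0, "phenology": 2.0,
           "yield": 1.5, "biomass": 1.0, "leaf_area": 0.5}
print(classify_dataset({"weather", "soil", "yield", "phenology"}, weights))
# -> (2, 0.85): a class-2 data set under these hypothetical thresholds.
```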

9.
A study of the reasons for delay in software development is described. The aim of the study was to gain insight into the reasons for differences between plans and reality in development activities, in order to be able to take actions for improvement. A classification was used to determine the reasons. In total, 160 activities, comprising over 15,000 hours of work, were analyzed. The results and their interpretation are presented. Insight into the predominant reasons for delay enabled actions for improvement to be taken in the department concerned. Because the distribution of reasons for delay varied widely from one department to another, it is recommended that every department gain insight into its own reasons for delay in order to be able to take adequate actions for improvement.

10.
Recent advances in computers, networking, and telecommunications offer new opportunities for using simulation and gaming as methodological tools for improving crisis management. It has become easy to develop virtual environments to support games, to have players at distributed workstations interacting with each other, to have automated controllers supply exogenous events to the players, to enable players to query online data files during the game, and to prepare presentation graphics for use during the game and for post-game debriefings. Videos can be used to present scenario updates to players in “newscast” format and to present pre-taped briefings by experts to players. Organizations responsible for crisis management are already using such technologies in constructing crisis management systems (CMSs) to coordinate response to a crisis, provide decision support during a crisis, and support activities prior to the crisis and after the crisis. If designed with gaming in mind, those same CMSs could be easily used in a simulation mode to play a crisis management game. Such a use of the system would also provide personnel with opportunities to rehearse for real crises using the same tools they would have available to them in a real crisis. In this paper, we provide some background for the use of simulation and gaming in crisis management training, describe an architecture for simulation and gaming, and present a case study to illustrate how virtual environments can be used for crisis management training.

11.
A critical component of landscape dynamics is the recovery of vegetation following disturbance. The objective of this research was to characterize the forest recovery trends associated with a range of spectral indicators and to report their observed performance and identified limitations. Forest disturbances were mapped for a random sample of three major bioclimate zones of North American boreal forests. The mean number of years for forest to recover, defined as the time required for a pixel to attain 80% of the mean spectral value of the 2 years prior to disturbance, was estimated for each disturbed pixel. The majority of disturbed pixels recovered within the first 5 years regardless of the index, ranging from approximately 78% with the normalized burn ratio (NBR) to 95% with tasselled cap greenness (TCG), and after 10 years more than 93% of disturbed pixels had recovered. Recovery rates suggest that the normalized difference vegetation index (NDVI) and TCG saturate earlier than indices that emphasize longer wavelengths. Thus, indices such as NBR and the mid-infrared spectral band offer increased capacity to characterize different levels of forest recovery. The mean length of time for spectral indices to recover to 80% of the pre-disturbance value for pixels disturbed 10 or more years ago was highest for NBR, 5.6 years, and lowest for TCG, 1.7 years. The mid-infrared spectral band had the greatest difference in recovered pixels among bioclimate zones 1 year after disturbance, ranging from approximately 42% of disturbed pixels for the cold and mesic bioclimate zone to 60% for the extremely cold and mesic bioclimate zone. The cold and mesic bioclimate zone had the longest mean years to recover, ranging from 1.9 years for TCG to 4.2 years for NBR, while the cool temperate and dry bioclimate zone had the shortest, ranging from 1.6 years for TCG to 2.9 years for NBR, suggesting differences in pre-disturbance conditions or successional processes. The results highlight the need for caution when selecting and interpreting a spectral index for recovery characterization, as spectral indices, based upon their constituent wavelengths, are sensitive to different vegetation conditions and will provide a variable representation of the structural conditions of forests.
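A hedged sketch of the recovery definition stated above (the first year in which a pixel's index reaches 80% of its mean value over the two years before disturbance); the index trajectory and values below are hypothetical, not data from the study.

```python
def years_to_recovery(index_by_year, disturbance_year, threshold=0.8):
    """Return the number of years after `disturbance_year` until the spectral
    index first reaches `threshold` (e.g. 80%) of the mean value of the two
    years preceding the disturbance, or None if it never recovers."""
    pre = [index_by_year[disturbance_year - 2], index_by_year[disturbance_year - 1]]
    target = threshold * (sum(pre) / 2.0)
    for year in sorted(y for y in index_by_year if y > disturbance_year):
        if index_by_year[year] >= target:
            return year - disturbance_year
    return None

# Hypothetical NBR trajectory for one pixel disturbed in 2000.
nbr = {1998: 0.62, 1999: 0.58, 2000: 0.10, 2001: 0.22,
       2002: 0.35, 2003: 0.44, 2004: 0.50, 2005: 0.55}
print(years_to_recovery(nbr, 2000))  # -> 4 (the pixel recovers by 2004)
```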

12.
Quantitative proteomics can be used for the identification of cancer biomarkers that could be used for early detection, serve as therapeutic targets, or monitor response to treatment. Several quantitative proteomics tools are currently available to study differential expression of proteins in samples ranging from cancer cell lines to tissues to body fluids. 2-DE, which was classically used for proteomic profiling, has been coupled to fluorescence labeling for differential proteomics. Isotope labeling methods such as stable isotope labeling with amino acids in cell culture (SILAC), isotope-coded affinity tagging (ICAT), isobaric tags for relative and absolute quantitation (iTRAQ), and ¹⁸O labeling have all been used in quantitative approaches for identification of cancer biomarkers. In addition, heavy isotope labeled peptides can be used to obtain absolute quantitative data. Most recently, label-free methods for quantitative proteomics, which have the potential of replacing isotope-labeling strategies, are becoming popular. Other emerging technologies such as protein microarrays have the potential for providing additional opportunities for biomarker identification. This review highlights commonly used methods for quantitative proteomic analysis and their advantages and limitations for cancer biomarker analysis.

13.
Research performance assessment is one of the main components of performance management in a research institute. Its main purposes are: through periodic review and evaluation of each team's scientific work, to give teams a basis for analyzing shortcomings and clarifying direction, and to support the institute in setting and adjusting its development goals, thereby improving research performance and promoting the sustained development of the institute's scientific work. At the same time, performance assessment provides a reference and basis for human resource management tasks such as position appointment, performance-based distribution, and salary adjustment. How to assess the performance of research teams fairly, impartially, and efficiently is therefore one of the key issues in a research institute's human resource management, and also one of the key targets of management informatization in this area. To address this problem, this paper first introduces, from a management perspective, the application of the Balanced ScoreCard (BSC) theory to performance assessment in research institutions, and then, combining the MVC design pattern, designs and implements a performance assessment system for research teams on the 普元EOS development platform.

14.
To address users' differing preferences for semantic Web services and the accuracy and efficiency of service discovery, a semantic Web service discovery method supporting QoS prediction is proposed. First, historical users are clustered according to their user context as a preprocessing step; then, within each cluster, a BP neural network is used to predict the QoS required by the current user. After the functional requirements of the service have been matched, services are recommended to the user from among the candidate services according to the predicted QoS. Finally, experiments verify the feasibility and effectiveness of the method.
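A minimal sketch of the cluster-then-predict step described above, under the assumption that scikit-learn's KMeans (for context clustering) and MLPRegressor (a feed-forward network trained by backpropagation, standing in for the BP neural network) are acceptable substitutes; the context features and QoS values are hypothetical, not the paper's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Hypothetical data: user context features (e.g. location, bandwidth, device)
# and the QoS (e.g. response time) each historical user observed for a service.
rng = np.random.default_rng(0)
contexts = rng.random((200, 3))
qos = 100 * contexts[:, 1] + 20 * contexts[:, 0] + rng.normal(0, 5, 200)

# 1) Cluster historical users by context.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(contexts)

# 2) Train one regressor (a BP-style feed-forward network) per cluster.
models = {}
for c in range(4):
    mask = kmeans.labels_ == c
    models[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(contexts[mask], qos[mask])

# 3) For the current user, predict QoS with the model of the nearest cluster;
#    functionally matching candidate services would then be ranked by it.
current_user = np.array([[0.4, 0.7, 0.1]])
cluster = kmeans.predict(current_user)[0]
print("predicted QoS:", models[cluster].predict(current_user)[0])
```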

15.
《Advanced Robotics》2013,27(4):461-482
In hand-eye systems for advanced robotic applications such as assembly, the degrees of freedom of the vision sensor should be increased and actively made use of to cope with unstable scene conditions. Particularly, in the case of using a simple vision sensor, an intelligent adaptation of the sensor is essential to compensate for its inability to adapt to a changing environment. This paper proposes a vision sensor setup planning system which operates based on environmental models and generates plans for using the sensor and its illumination assuming freedom of positioning for both. A typical vision task in which the edges of an object are measured to determine its position and orientation is assumed for the sensor setup planning. In this context, the system is able to generate plans for the camera and illumination position, and to select a set of edges best suited for determining the object's position. The system operates for stationary or moving objects by evaluating scene conditions such as edge length, contrast, and relative angles based on a model of the object and the task environment. Automatic vision sensor setup planning functions, as shown in this paper, will play an important role not only for autonomous robotic systems, but also for teleoperation systems in assisting advanced tasks.

16.
Uforia is a simple, flexible and extensible framework for analysis and parsing of file metadata. It is written in Python and is available under the GPLv2. Uforia traverses a file system and triggers a configurable set of modules for every file it encounters. Out of the box, Uforia conforms to the NIST standard for forensic hashing by storing the three currently most common cryptographic hashes for each file: the MD5, SHA-1 and SHA-256 hashes. Uforia strives for optimal scaling of the metadata analysis by offering an easily configurable threading model for both its Producers and Consumers. Additionally, the interface is written and intended to be as loosely coupled as possible, so that the Producer's and Consumer's functionality can easily be reduced, replaced or extended to match the specific needs of the user. Uforia also attempts to keep database redundancy to a minimum in the same way, by only loosely coupling database tables and delegating the relevant parts of the data model to the individual modules. Each of these modules performs its tasks asynchronously from Uforia and is automatically detected, registered and called to handle its specific file types. Uforia does not yet come with a front-end interface for viewing the information stored in the database, but the stored database contents could already be applied to a wide variety of situations, such as searching for specific metadata or information during a forensic investigation, filesystem-level deduplication, or even creating custom known-file hash tables. The interface for creating new database handlers and modules has been simplified as much as possible, allowing easy extensibility and tailoring to each use case's specific requirements.
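As a hedged illustration of the producer/consumer hashing pattern the abstract describes (this is not Uforia's actual code or API), the sketch below walks a directory tree in a producer and computes the three NIST-standard hashes in consumer threads.

```python
import hashlib
import os
import queue
import threading

def producer(root, work_queue, n_consumers):
    """Traverse the file system and enqueue every file path found."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            work_queue.put(os.path.join(dirpath, name))
    for _ in range(n_consumers):            # poison pills to stop the consumers
        work_queue.put(None)

def consumer(work_queue, results):
    """Hash each file with MD5, SHA-1 and SHA-256."""
    while True:
        path = work_queue.get()
        if path is None:
            break
        digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
        try:
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    for d in digests.values():
                        d.update(chunk)
        except OSError:
            continue                        # skip unreadable files
        results.append((path, {n: d.hexdigest() for n, d in digests.items()}))

work_queue, results, n_consumers = queue.Queue(), [], 4
threads = [threading.Thread(target=consumer, args=(work_queue, results))
           for _ in range(n_consumers)]
for t in threads:
    t.start()
producer(".", work_queue, n_consumers)
for t in threads:
    t.join()
print(f"hashed {len(results)} files")
```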

17.
Surveillance for security requires communication between systems and humans, involves behavioural and multimedia research, and demands an objective benchmarking for the performance of system components. Metadata representation schemes are extremely important to facilitate (system) interoperability and to define ground truth annotations for surveillance research and benchmarks. Surveillance places specific requirements on these metadata representation schemes. This paper offers a clear and coherent terminology, and uses this to present these requirements and to evaluate them in three ways: their fitness in breadth for surveillance design patterns, their fitness in depth for a specific surveillance scenario, and their realism on the basis of existing schemes. It is also validated that no existing metadata representation scheme fulfils all requirements. Guidelines are offered to those who wish to select or create a metadata scheme for surveillance for security.

18.
The implementation of design for assembly and design for manufacture (DFM) has led to enormous benefits, including simplification of products, reduction of assembly and manufacturing costs, improvement of quality, and reduction of time to market. More recently, environmental concerns have required that disassembly and recycling issues be considered during the design stages. The effort to reduce total life-cycle costs for a product through design innovation is becoming an essential part of the current manufacturing industry. Therefore, researchers have begun to focus their attention on design for environment, design for recyclability, design for life-cycle (DFLC), etc. These studies are sometimes referred to collectively as Design for X (DFX). Since the late 1990s, hundreds of papers have been published on DFX applications in manufacturing. Most of them are widely distributed over many different disciplines and publications. This makes it very difficult to locate all the information necessary for the application of DFX in manufacturing. A paper that can help researchers and practitioners apply this emerging technology is highly desirable. The objective of this paper is to present the concepts, applications, and perspectives of DFX in manufacturing, thus providing guidelines and references for future research and implementation.

19.
Most existing 3D craniofacial reconstruction techniques rely on statistical soft-tissue thickness values at skull feature points. Because the attribute ranges covered by the existing statistics (age, body build, and so on) are broad, the reconstructed face tends to lack individuality; an improved method is proposed to address this shortcoming. First, craniofacial sample data are acquired with a CT scanner, and 3D skull and face models are obtained by image reconstruction. Then a semi-automatic feature point annotation method is used to mark feature points on the 3D skull samples, and the soft-tissue thickness at each feature point is computed. Next, support vector regression is used to build the functional relationship between feature-point soft-tissue thickness and the attributes. Finally, the soft-tissue thickness at the feature points of the skull to be reconstructed is computed from its attributes and the regression function, and on this basis a thin plate spline function is used to deform a reference face model to obtain the reconstructed face. Experimental results show that, compared with existing methods, this method obtains more accurate soft-tissue thickness and improves the accuracy of craniofacial reconstruction.
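A minimal sketch of the regression step described above, assuming scikit-learn's SVR as the support vector regression implementation; the attribute encoding and tissue-thickness values are hypothetical, and the landmarking and thin plate spline deformation steps are not shown.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training data: per-sample attributes (age, BMI, sex as 0/1)
# and the measured soft-tissue thickness (mm) at one skull feature point.
rng = np.random.default_rng(1)
attributes = np.column_stack([rng.uniform(20, 70, 150),       # age
                              rng.uniform(17, 32, 150),       # BMI
                              rng.integers(0, 2, 150)])       # sex
thickness = (2.0 + 0.15 * attributes[:, 1] + 0.5 * attributes[:, 2]
             + rng.normal(0, 0.2, 150))

# One SVR model per feature point; only a single feature point is illustrated.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(attributes, thickness)

# Predict the thickness at this feature point for the skull to be reconstructed,
# given its known attributes; such predictions would then drive the TPS warp.
query = np.array([[35.0, 24.0, 1.0]])
print("predicted soft-tissue thickness (mm):", model.predict(query)[0])
```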

20.
The current textual and graphical interfaces to computing, including the Web, are a dream come true for the hearing impaired. However, improved technology for voice and audio interfaces threatens to end this dream. Requirements are identified for continued access to computing for the hearing impaired. Consideration is also given to improving access for the sight impaired.
