Similar Documents
20 similar documents found.
1.
《Ergonomics》2012,55(2):81-88
Indirect psychological or physiological measures of driving performance are often used without supporting evidence, or even comment, on their validity. In this validation the performance of ten subjects on a subsidiary reaction time (RT) task and a visual detection task was correlated. On the RT task, 93?dB auditory signals were presented with an average intersignal interval of SO s. On the detection task, the subjects had to brake as fast as possible when they perceived a 40 × 40?cm obstacle at the side of the road. Over the test of three hours, in night driving conditions on a closed 5?km track, the correlation between group averages was —0.78 and the average within-subject correlation was —0.47. From these results, and a discussion of the predictive and the construct validity of the RT-task, it is concluded that subsidiary RT may be used as a valid indicator of changes in efficiency of driving performance.  相似文献   

2.
The mystery surrounding emotions, how they work and how they affect our lives, has not yet been unravelled. Scientists still debate the real nature of emotions; evolutionary, physiological and cognitive accounts are just a few of the approaches used to explain affective states. Regardless of the paradigm, neurologists have made progress in demonstrating that emotion is as important as, or more important than, reason in the process of making decisions and deciding actions. The significance of these findings should not be overlooked in a world that is increasingly reliant on computers to accommodate user needs. In this paper, a novel approach for recognizing and classifying positive and negative emotional changes in real time using physiological signals is presented. Based on sequential analysis and autoassociative networks, the emotion detection system outlined here is potentially capable of operating on any individual, regardless of physical state and emotional intensity, without requiring an arduous adaptation or pre-analysis phase. Applying this methodology to real-time data collected from a single subject yielded a recognition level of 71.4%, which is comparable to the best results achieved by others through off-line analysis. It is suggested that the detection mechanism outlined in this paper has all the characteristics needed to perform emotion recognition in pervasive computing.

3.
With the rapid growth of the biomedical literature, the vast body of publications contains a wealth of information about diseases, symptoms, and therapeutic substances, which is of great value for treating diseases and developing drugs. To extract the relations between diseases and therapeutic substances, two models are trained: a disease-symptom model, which judges whether a disease involves or causes a physiological phenomenon, and a symptom-substance model, which judges whether a substance alters a physiological phenomenon or process. A semi-supervised Tri-training approach is used, in which a large amount of unlabelled data assists a small amount of labelled data during training to improve classification performance. Experimental results show that exploiting unlabelled data through Tri-training improves the results, and that applying ensemble learning to combine the three classifiers during training further improves learning performance.
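As a rough illustration of the Tri-training idea (a generic sketch, not the paper's implementation), the following code trains three classifiers on bootstrap samples of the labelled data; whenever two of them agree on an unlabelled example, that example is pseudo-labelled and added to the third classifier's training set. The base learner, round count, and integer class labels are assumptions.

```python
# Generic Tri-training sketch: peer-labelled data augments each classifier.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def tri_train(X_lab, y_lab, X_unlab, base=None, rounds=5, seed=0):
    base = base or DecisionTreeClassifier(random_state=seed)
    rng = np.random.default_rng(seed)
    # Three classifiers, each initialized on a bootstrap sample.
    clfs = []
    for _ in range(3):
        idx = rng.integers(0, len(X_lab), len(X_lab))
        clfs.append(clone(base).fit(X_lab[idx], y_lab[idx]))
    for _ in range(rounds):
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            pj, pk = clfs[j].predict(X_unlab), clfs[k].predict(X_unlab)
            agree = pj == pk                   # the two peers agree
            if not agree.any():
                continue
            # Retrain classifier i on labelled plus peer-labelled examples.
            X_aug = np.vstack([X_lab, X_unlab[agree]])
            y_aug = np.concatenate([y_lab, pj[agree]])
            clfs[i] = clone(base).fit(X_aug, y_aug)
    return clfs

def predict_vote(clfs, X):
    # Ensemble the three classifiers by majority vote (integer labels assumed).
    votes = np.stack([c.predict(X) for c in clfs])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

The full Tri-training algorithm also bounds how much pseudo-labelled data is added using the peers' estimated error rates; that safeguard is omitted here for brevity.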

4.
The Facial Action Coding System defines a set of facial action units (AUs) from the perspective of facial anatomy to precisely describe changes in facial expression. Each AU describes the appearance change produced by a group of facial muscle movements, and combinations of AUs can express any facial expression. AU detection is a multi-label classification problem whose challenges include scarce annotated data, head-pose interference, individual differences, and class imbalance across AUs. To summarize recent progress in AU detection, this paper systematically reviews representative methods published since 2016. According to the modality of the input data, the methods are divided into those based on static images, on dynamic videos, and on other modalities, and weakly supervised AU detection methods introduced to reduce data dependence under the different modalities are also discussed. For static images, methods based on local feature learning, AU relation modeling, multi-task learning, and weak supervision are further introduced. For dynamic videos, methods based on temporal features and on self-supervised AU feature learning are described. Finally, the strengths and weaknesses of the representative methods are compared and summarized, and on that basis the challenges and future directions of facial AU detection are discussed.

5.
Improving the control of batch processes is not an easy task because of modeling errors and time delays. In this work, novel iterative learning control (ILC) strategies are proposed for SISO processes with modeling uncertainties and time delays; they fully use control information from previous batches and can be attached to existing control systems to improve tracking performance through repetition. The process dynamics are represented by a transfer function plus a pure time delay. The stability properties of the proposed strategies in the presence of uncertainties in modeling and/or time delays are analyzed in the frequency domain. Sufficient conditions guaranteeing convergence of the tracking error are stated and proven. Simulation and experimental examples demonstrating these methods are presented.
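The repetition-based mechanism can be illustrated with a toy P-type ILC law, u_{k+1}(t) = u_k(t) + gamma * e_k(t + 1 + d). This is a sketch under assumed dynamics; the first-order plant, gain, delay, and horizon below are invented, not the processes studied in the paper.

```python
# Toy P-type ILC on an assumed first-order plant with a known input delay.
import numpy as np

T, delay, gamma, batches = 50, 2, 0.8, 30
y_ref = np.sin(np.linspace(0, np.pi, T))         # desired batch trajectory
u = np.zeros(T)                                  # input profile, refined per batch

def run_batch(u):
    # Assumed plant: y(t+1) = 0.9 y(t) + 0.5 u(t - delay); not from the paper.
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = 0.9 * y[t] + 0.5 * (u[t - delay] if t >= delay else 0.0)
    return y

for k in range(batches):
    e = y_ref - run_batch(u)
    # P-type update: u_{k+1}(t) = u_k(t) + gamma * e_k(t + 1 + delay)
    u[: T - 1 - delay] += gamma * e[1 + delay:]
    if k % 10 == 0 or k == batches - 1:
        print(f"batch {k:2d}: tracking RMSE = {np.sqrt(np.mean(e**2)):.4f}")
```

Because the update uses only the error recorded in the previous batch, it layers on top of whatever feedback controller already runs within each batch, which is the attachment property the abstract highlights.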

6.
Processing keyword search on XML: a survey   (total citations: 1; self-citations: 0; citations by others: 1)
Ziyang Liu  Yi Chen 《World Wide Web》2011,14(5-6):671-707
Keyword search is a user-friendly way for users to retrieve information from XML data. Since an XML document can be large and contain a great deal of information, an XML keyword search result should be a fragment of the document constructed dynamically at query time, which is achievable thanks to the structuredness of XML. Processing keyword searches on XML raises several challenges: which elements in the XML document are relevant to the query? How can the results be generated efficiently and ranked meaningfully? How should the results be presented so that the user can quickly find the desired information? In this survey, we review the papers in the literature that attempted to address these problems. We divide the existing approaches into several classes based on the problem they tackled and perform a comprehensive analysis of these works.
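To make the first challenge concrete, here is a small sketch (not taken from the survey) in the spirit of SLCA-style result semantics: it returns the deepest elements whose subtrees contain all query keywords. The sample document and keywords are invented.

```python
# Minimal SLCA-style XML keyword search: find the deepest elements whose
# subtrees contain every query keyword (illustrative, not from the survey).
import xml.etree.ElementTree as ET

doc = """<bib>
  <book><title>XML keyword search</title><author>Liu</author></book>
  <book><title>Databases</title><author>Chen</author></book>
</bib>"""

def slca(elem, keywords, results):
    # Returns the set of keywords found in elem's subtree.
    found = {k for k in keywords
             if k in ((elem.text or "") + (elem.tail or ""))}
    child_covers = False
    for child in elem:
        sub = slca(child, keywords, results)
        if sub == keywords:
            child_covers = True    # a descendant already covers all keywords
        found |= sub
    if found == keywords and not child_covers:
        results.append(elem)       # deepest element covering all keywords
    return found

root = ET.fromstring(doc)
results = []
slca(root, {"Liu", "search"}, results)
print([e.tag for e in results])    # -> ['book'] (the first book element)
```

Returning the deepest covering elements keeps answers specific: the whole bib element also contains all keywords but would be a far less informative result.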

7.
For hyperspectral target detection, it is usually the case that only some of the target pixels can be used as target signatures; can these be used to construct the most appropriate background subspace for detecting all probable targets? In this paper, a dynamic subspace detection (DSD) method that establishes a multiple-detection framework is proposed. In each detection procedure, blocks of pixels are chosen by random selection followed by an analysis of the resulting detection-performance distribution. Manifold analysis is further used to eliminate probable anomalous pixels and purify the subspace datasets, and the remaining pixels construct the subspace for each detection procedure. The final detection results are then enhanced by fusing the target occurrence frequencies across all detection procedures. Experiments with both synthetic and real hyperspectral images (HSI) validate the proposed DSD method using several state-of-the-art methods as the basic detectors. Compared with several single detectors and multiple-detection methods, the DSD methods show improved receiver operating characteristic curves and better separability between targets and backgrounds. The DSD methods also perform well with covariance-based detectors, showing their efficiency in selecting covariance information for detection.
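The role a background subspace plays in such detectors can be sketched in a few lines. This shows only the generic subspace-projection step, not the DSD pipeline with random selection, manifold purification, and fusion; all sizes and the threshold are invented.

```python
# Sketch of background-subspace detection via projection residuals.
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_bg, n_pix, k = 50, 200, 1000, 5    # assumed sizes

B = rng.normal(size=(n_bands, n_bg))          # selected background pixels
X = rng.normal(size=(n_bands, n_pix))         # scene pixels to score

# k-dimensional background subspace from the left singular vectors of B.
U, _, _ = np.linalg.svd(B, full_matrices=False)
Uk = U[:, :k]

# Residual energy after projecting out the background subspace:
# a large residual means the pixel is poorly explained by background.
P_perp = np.eye(n_bands) - Uk @ Uk.T
scores = np.sum((P_perp @ X) ** 2, axis=0)
detections = scores > np.percentile(scores, 99)   # illustrative threshold
print(detections.sum(), "pixels flagged")
```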

8.
By focusing on two dimensions of the digital divide, computer use and computer knowledge, this study explores four research questions: (1) What are undergraduates doing with the computers they use at college? (2) How do undergraduates perform in regard to computer knowledge and skills? (3) With what is the digital divide among college students correlated? (4) What consequences does the digital divide have for student academic performance? To answer these questions, a national survey was conducted of 3083 first-year college students at 12 four-year universities in Taiwan; 2719 of them completed the questionnaire, a response rate of 88.2%. In this study, the digital divide is measured in terms of computer use, which covers the variety of purposes for using computers and academic-related work as a proportion of total computer hours, and computer knowledge. Multiple regressions and a generalized ordered logit (a partial proportional odds model) are employed. The main findings are as follows: (1) Undergraduates use computers not only to fulfil academic requirements and search for information but also for entertainment; on average, they spend about 19 h per week using computers, of which 5 h are academic-related. (2) Most undergraduates perform at an average level in terms of computer knowledge. (3) No significant demographic or socioeconomic family-background differences were found in predicting the various purposes of computer use. (4) Students who are female, whose fathers and/or mothers are from minorities, whose fathers are blue-collar workers or unemployed, who study in the humanities and social sciences, and who attend private universities are at a disadvantage in terms of computer skills and knowledge. However, female students, students whose mothers were less educated, and students enrolled at private universities are more focused computer users in terms of allocating time to academic-related work. (5) Computer knowledge and devotion to using computers for academic-related work have a moderate effect on college student learning, while the various other uses of computers do not. Among the different kinds of computer knowledge, it is knowledge of software that helps students learn the most.

9.
The problem of adaptive segmentation of time series with abrupt changes in their spectral characteristics is addressed. Such time series are encountered in various fields of time series analysis, such as speech processing, biomedical signal processing, image analysis and failure detection. Mathematically, these time series can often be modeled by zero-mean, Gaussian-distributed autoregressive (AR) processes whose parameters, including the gain factor, remain constant for certain time intervals and then jump abruptly to new values. Identification of such processes requires adaptive segmentation: the times of the parameter jumps must be estimated accurately, since they constitute the boundaries of "homogeneous" segments that can be described by stationary AR processes. In this paper, a new, effective method for sequential adaptive segmentation is proposed, based on the parallel application of two sequential parameter estimation procedures. The detection of a parameter change, as well as the estimation of the accurate position of a segment boundary, is performed by a sequence of suitable generalized likelihood ratio (GLR) tests. Flow charts and a block diagram of the algorithm are presented. The adjustment of the three control parameters of the procedure (the AR model order, a threshold for the GLR test and the length of a "test window") is discussed with respect to various performance features. Simulation results demonstrate the good detection properties of the algorithm and, in particular, an excellent ability to locate segment boundaries even within a sequence of short segments. As an application to biomedical signals, the analysis of human electroencephalograms (EEG) is considered and an example is shown.
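The core GLR comparison can be sketched as follows: fit one AR model to a whole window and separate AR models to its two halves, then compare residual variances. This is a simplified, non-sequential version of the procedure described above; the model order, window split, and test signals are invented.

```python
# Simplified GLR test for an AR parameter change at the middle of a window.
import numpy as np

def ar_residual_var(x, p):
    # Least-squares AR(p) fit; returns the residual variance.
    Y = x[p:]
    Z = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return np.mean((Y - Z @ a) ** 2)

def glr_statistic(x, p=4):
    n, m = len(x), len(x) // 2
    v0 = ar_residual_var(x, p)            # one AR model for the whole window
    v1 = ar_residual_var(x[:m], p)        # separate models for each half
    v2 = ar_residual_var(x[m:], p)
    # 2 log-likelihood ratio for Gaussian AR models (edge terms neglected)
    return n * np.log(v0) - m * np.log(v1) - (n - m) * np.log(v2)

rng = np.random.default_rng(0)
seg1 = rng.normal(size=500)
seg2 = np.convolve(rng.normal(size=500), [1, 0.9], mode="same")  # new spectrum
x = np.concatenate([seg1, seg2])
print("GLR across a change:", glr_statistic(x))       # large value
print("GLR on stationary data:", glr_statistic(seg1)) # small value
```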

10.
For real-time knowledge based systems (RTKBS) to become viable complements to traditional information systems, the application of proven methodologies for system analysis and design is of utmost importance. However, these methodologies do not provide knowledge engineers with any support on issues related to the design of an RTKBS, such as: What knowledge does the RTKBS need to perform a certain task? How can this knowledge be used by inference strategies? How should the knowledge model and the inference strategies be implemented so that the resulting model is maintainable and meets all timing requirements? Answers to these questions are also not provided by tools suitable for implementing an RTKBS, such as COGSYS or G2. In this paper we show that PERFECT (Programming EnviRonment For Expertsystems Constrained in reasoning Time) does support knowledge engineers in answering these questions, and hence that it bridges the gap between traditional analysis and design methodologies and implementation tools for RTKBS.

11.
Computerized processes support the new age of medical treatment. Biomedical signals collected from the human body supply important and useful data related to the biological activity of the body's organs. However, these signals may also contain noise. Heart waves are commonly classified as biomedical signals and are non-stationary in their statistical characteristics. Because the probability distributions of the noise vary widely, there is no universal method for removing it. In this study, adaptive filters are used for noise elimination and the transcranial Doppler signal is analyzed. The artificial bee colony algorithm was employed to design adaptive IIR filters for noise elimination on the transcranial Doppler signal, and the results were compared with those obtained by methods based on popular and recently introduced evolutionary algorithms and by conventional methods.
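A condensed sketch of how the artificial bee colony (ABC) algorithm can tune IIR coefficients in a generic system-identification setting follows. The population size, scout limit, and reference filter are invented, and this is not the paper's transcranial Doppler experiment; the run should approach the reference coefficients [0.3, 0.2, -0.5].

```python
# Condensed ABC search for IIR coefficients [b0, b1, a1] minimizing MSE.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x = rng.normal(size=2000)                       # excitation signal
d = lfilter([0.3, 0.2], [1.0, -0.5], x)         # "unknown" target filter

def cost(theta):
    y = lfilter(theta[:2], [1.0, theta[2]], x)
    return np.mean((d - y) ** 2) if np.all(np.isfinite(y)) else np.inf

SN, dim, limit, iters = 20, 3, 30, 200          # food sources, scout limit
foods = rng.uniform(-1, 1, (SN, dim))
fits = np.array([cost(f) for f in foods])
trials = np.zeros(SN, dtype=int)

def try_neighbor(i):
    k = int(rng.integers(SN - 1)); k += k >= i  # partner source != i
    j = rng.integers(dim)                       # perturb one dimension only
    cand = foods[i].copy()
    cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    c = cost(cand)
    if c < fits[i]:                             # greedy selection
        foods[i], fits[i], trials[i] = cand, c, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(SN):                         # employed-bee phase
        try_neighbor(i)
    p = 1.0 / (1.0 + fits); p /= p.sum()        # fitness-proportional choice
    for i in rng.choice(SN, size=SN, p=p):      # onlooker-bee phase
        try_neighbor(i)
    worn = trials > limit                       # scout phase: restart sources
    foods[worn] = rng.uniform(-1, 1, (int(worn.sum()), dim))
    fits[worn] = [cost(f) for f in foods[worn]]
    trials[worn] = 0

print("best [b0, b1, a1]:", np.round(foods[fits.argmin()], 3))
```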

12.
Physiological signals typically carry key information such as the body's bioelectrical activity, temperature and pressure, and monitoring their fluctuations helps provide early warning of clinical events. Deep models are hierarchical machine learning models containing multiple levels of nonlinear transformation; they have marked advantages in feature extraction and modeling and great application potential in computer-aided diagnosis. With the progress of continuous physiological monitoring technology, deep models have become increasingly effective for anomaly detection in physiological electrical signals, and research has shifted toward clinical applications. This paper reports progress on deep models for anomaly detection in physiological electrical signals. Starting from clinical applications, it analyzes the strengths and weaknesses of classical anomaly detection methods and outlines current deep modeling approaches. It summarizes the modeling principles and latest applications of classical models from the perspectives of discriminative and generative models, and discusses training architectures and training strategies for deep models. Finally, it summarizes and discusses the field from three angles, namely the clinical applications of anomaly detection, progress in deep models, and the availability of physiological datasets, and looks ahead to future research.
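As a minimal illustration of the generative route mentioned above (a toy sketch, not a method surveyed in the paper), an autoencoder trained only on normal windows flags windows whose reconstruction error is unusually large; the architecture, data, and threshold are all assumptions.

```python
# Minimal autoencoder anomaly detector for 1-D signal windows (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
win = 128
model = nn.Sequential(                       # assumed small architecture
    nn.Linear(win, 32), nn.ReLU(),
    nn.Linear(32, 8), nn.ReLU(),             # bottleneck
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, win),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

t = torch.linspace(0, 100, 50 * win)
normal = torch.sin(2 * torch.pi * t).reshape(50, win)   # "normal" windows

for _ in range(500):                         # train on normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

def score(x):                                # reconstruction error per window
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

anomalous = normal.clone()
anomalous[:, 40:60] += 2.0                   # injected artifact
thr = score(normal).max() * 1.5              # illustrative threshold
print("flagged fraction:", (score(anomalous) > thr).float().mean().item())
```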

13.
In recent years, arrhythmia classification has become a research focus in physiological signal analysis. Arrhythmia is very common in clinical practice; when it occurs, the heartbeats in the ECG signal exhibit waveforms with abnormal morphology and rhythm. Detecting arrhythmia correctly and promptly, and accurately providing early warning of cardiovascular disease, is of great significance in early clinical diagnosis. However, remote systems in which abnormal ECGs are judged manually offer poor real-time performance and may delay a patient's optimal treatment window. Deploying arrhythmia classification algorithms on edge devices such as wearables allows ECG signals to be analyzed and processed in real time while also improving the flexibility and security of the device. Field-programmable gate arrays (FPGAs), one implementation of edge computing, have been widely applied in physiological signal processing. Although FPGAs support real-time pipelined operation, development based on the Verilog or VHDL hardware description languages suffers from long development cycles, high cost, high difficulty and hard debugging. To address this problem, this paper uses Xilinx's recently released high-level synthesis tool Vivado HLS to implement a five-class arrhythmia classification algorithm based on the MIT-BIH dataset, with a Xilinx Zynq FPGA as the hardware platform, and tests it on an ECG test set. The test results show that the system achieves an average classification accuracy of 99.12% and an average classification time of 3.185 ms per heartbeat; compared with a single ARM core on the pure PS side, the system achieves a speedup of more than 5.64×.

14.
Extraction of time-varying EEG rhythms based on wavelet packet decomposition   (total citations: 1; self-citations: 0; citations by others: 1)
A new method for extracting dynamic EEG rhythms from time-varying, non-stationary EEG signals is studied. Wavelet packet decomposition is first used to construct time-varying filters with different frequency characteristics so as to extract the various time-varying EEG rhythms and study instantaneous changes in clinical EEG signals. On this basis, EEG signals recorded under two different functional states are tested and analyzed, and time-varying EEG topographic maps of the various rhythms are constructed. Experimental results show that wavelet packet decomposition can effectively extract the dynamic characteristics of the different EEG rhythms, and the method is also applicable to the analysis of other biomedical signals.
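A short sketch of the band-extraction step using PyWavelets, which postdates the paper: decompose to a fixed level and reconstruct only the leaf nodes whose approximate frequency range overlaps the target rhythm. The sampling rate, wavelet, level, and band edges are assumptions.

```python
# Extract an approximate EEG rhythm by wavelet packet decomposition (pywt).
import numpy as np
import pywt

fs = 128                                      # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)  # alpha + gamma

level = 4                                     # 16 leaves of fs/2/16 = 4 Hz each
wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=level)

# Rebuild the signal from only the leaves covering ~8-12 Hz (alpha rhythm).
keep = pywt.WaveletPacket(data=None, wavelet="db4", mode="symmetric")
band = fs / 2 / 2 ** level                    # nominal width of each leaf band
for i, node in enumerate(wp.get_level(level, order="freq")):
    if i * band >= 8 and (i + 1) * band <= 12:
        keep[node.path] = node.data           # copy matching coefficients

alpha = keep.reconstruct(update=False)[: len(x)]
print("alpha-band energy ratio:", np.sum(alpha**2) / np.sum(x**2))
```

Because wavelet packet filters leak across band edges, the reconstructed rhythm is an approximation of an ideal band-pass output, which is usually acceptable for rhythm tracking.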

15.
姚杰  程春玲  韩静  刘峥 《计算机应用》2021,41(6):1701-1708
The large volume of logs produced during the routine deployment and operation of cloud computing data centers can help operations staff analyze anomalies. Path anomalies and latency anomalies are common in cloud workflows. Traditional anomaly detection methods train a separate learning model for each of the two detection tasks, ignoring the correlation between them and thereby reducing detection accuracy. To address this, a log anomaly detection method based on a multi-task temporal convolutional network (TCN) is proposed. First, event sequences and time sequences are generated from the event templates of the log stream. Then, a deep multi-task TCN model is trained; by sharing the shallow layers of the TCN, it learns event features and time features in parallel from normally executed system flows. Finally, anomalies in the cloud workflow are analyzed and the corresponding anomaly detection logic is designed. Experimental results on an OpenStack dataset show that, compared with DeepLog, a leading log anomaly detection algorithm, and with a method based on principal component analysis (PCA), the proposed method improves detection accuracy by at least 7.7 percentage points.
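A skeletal version of the shared-trunk idea follows; the embedding size, vocabulary, head design, and loss weighting are assumptions, not the paper's architecture. A causal dilated Conv1d trunk is shared by an event-prediction head and a time-prediction head, trained jointly.

```python
# Skeletal multi-task TCN: shared causal dilated conv trunk, two heads.
import torch
import torch.nn as nn

class CausalConv(nn.Module):
    def __init__(self, c_in, c_out, k, d):
        super().__init__()
        self.pad = (k - 1) * d                     # left-pad to stay causal
        self.conv = nn.Conv1d(c_in, c_out, k, dilation=d)
    def forward(self, x):
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class MultiTaskTCN(nn.Module):
    def __init__(self, n_events=30, emb=16, hid=32):
        super().__init__()
        self.emb = nn.Embedding(n_events, emb)
        self.trunk = nn.Sequential(                # shared shallow layers
            CausalConv(emb + 1, hid, k=3, d=1), nn.ReLU(),
            CausalConv(hid, hid, k=3, d=2), nn.ReLU(),
        )
        self.event_head = nn.Linear(hid, n_events) # next-event distribution
        self.time_head = nn.Linear(hid, 1)         # next inter-arrival time
    def forward(self, events, dts):
        # events: (B, T) int64 template ids; dts: (B, T) inter-arrival times
        x = torch.cat([self.emb(events), dts.unsqueeze(-1)], dim=-1)
        h = self.trunk(x.transpose(1, 2))[:, :, -1]  # features at last step
        return self.event_head(h), self.time_head(h).squeeze(-1)

model = MultiTaskTCN()
events, dts = torch.randint(0, 30, (8, 20)), torch.rand(8, 20)
logits, t_pred = model(events, dts)
# Joint loss: cross-entropy on the next event plus MSE on the next gap.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 30, (8,))) \
     + nn.functional.mse_loss(t_pred, torch.rand(8))
loss.backward()
```

At detection time, a window would be flagged when the observed next event falls outside the model's top-k predictions or the observed gap deviates strongly from the predicted one.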

16.
The construction of ultra-high-rise and long-span structures imposes stricter requirements on the integrity testing of piles. Acoustic signal detection has been verified to be an efficient and accurate nondestructive testing method. The integrity of piles is closely related to the onset time of the signals, and the accuracy of the onset time directly affects the integrity evaluation of a pile. To achieve high-precision onset detection, continuous wavelet transform (CWT) preprocessing and machine learning algorithms were integrated into the software of high-sampling-rate testing equipment. Waveform distortion, which could interfere with detection accuracy, was eliminated by CWT preprocessing. To make full use of the collected waveform data, three types of machine learning algorithms were used to classify whether data points belong to ambient noise or to ultrasonic signals. The models involve a commonly used classifier (ELM), an individual classification tree model (DTC), an ensemble tree model (RFC) and a deep learning model (DBN). The classification accuracy of these models on ambient and ultrasonic signals was compared by 5-fold validation. Results indicate that RFC performs better than DBN and DTC after training and is more suitable for classifying points in waveforms. A method for detecting onset time based on the classification results was therefore proposed to minimize the interference of classification errors. In addition to the three data mining methods, the autocorrelation function method was selected as a control to compare the proposed data-mining-based methods with a traditional one. Accuracy and error analysis of 300 waveforms proved the feasibility and stability of the proposed method. The RFC-based detection method is recommended because it has the highest accuracy, the lowest errors, and the most favorable error distribution among the four onset detection methods. Successful applications demonstrate that it could provide a new way to ensure accurate testing of pile foundation integrity.
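An illustrative reduction of this pipeline follows (the wavelet, scales, synthetic waveform, and labels are invented): per-sample CWT magnitudes serve as features, a random forest classifies each sample as ambient or ultrasonic, and the onset is taken as the first sample classified as signal. In practice the classifier would be trained on separately labelled waveforms rather than on the waveform under test.

```python
# Sketch: per-sample CWT features + random forest -> onset estimate.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, onset = 2000, 1200
x = rng.normal(scale=0.1, size=n)                 # ambient noise
t = np.arange(n - onset)
x[onset:] += np.sin(0.3 * t) * np.exp(-t / 400)   # arriving wave packet

scales = np.arange(1, 33)
coef, _ = pywt.cwt(x, scales, "morl")             # (n_scales, n_samples)
feats = np.abs(coef).T                            # one feature row per sample
labels = (np.arange(n) >= onset).astype(int)      # 0 = ambient, 1 = signal

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(feats, labels)                            # train on labelled waveforms
pred = clf.predict(feats)
est_onset = int(np.argmax(pred == 1))             # first "signal" sample
print("true onset:", onset, "estimated:", est_onset)
```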

17.
Biomedical time series clustering, which automatically groups a collection of time series according to their internal similarity, is important for medical record management and inspection, such as bio-signal archiving and retrieval. In this paper, a novel framework is proposed that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSAs are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters, including the length of the local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, with many applications in biomedical time series management.
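A rough single-channel analogue of the bag-of-words pipeline is sketched below. Scikit-learn has no pLSA, so its LDA is used as a stand-in, and the two-layer hierarchy is collapsed to one layer; all sizes and the toy data are invented. Segments become words via a k-means codebook, each series becomes a document, and topic mixtures feed a final clustering.

```python
# Single-channel analogue of "segments as words": codebook + topic model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
series = [rng.normal(size=1024) + (i % 2) * np.sin(np.arange(1024) / 8)
          for i in range(20)]                   # two toy structural classes

seg_len, n_words = 32, 16
def segments(s):                                # non-overlapping local segments
    return np.stack([s[i:i + seg_len]
                     for i in range(0, len(s) - seg_len + 1, seg_len)])

codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
codebook.fit(np.vstack([segments(s) for s in series]))

# Bag-of-words histogram per series: counts of each codeword.
docs = np.stack([np.bincount(codebook.predict(segments(s)), minlength=n_words)
                 for s in series])

lda = LatentDirichletAllocation(n_components=4, random_state=0)
theta = lda.fit_transform(docs)                 # topic mixture per series
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(theta)
print(groups)                                   # even/odd series should separate
```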

18.
In this paper, we present expert systems for the classification of time-varying biomedical signals and determine their accuracies. The combined neural network (CNN), mixture of experts (ME), and modified mixture of experts (MME) were tested and benchmarked on the classification of the studied time-varying biomedical signals (ophthalmic arterial Doppler signals, internal carotid arterial Doppler signals and electroencephalogram signals). Decision making was performed in two stages: feature extraction by eigenvector methods, and classification using classifiers trained on the extracted features. The inputs of these expert systems, composed of diverse or composite features, were chosen according to the network structures. The study was conducted to answer the question of whether an expert system with diverse features (MME) or with a composite feature (CNN, ME) improves the classification of time-varying biomedical signals, to determine an optimum classification scheme for the problem, and to infer clues about the extracted features. Our research demonstrated that the power levels of the power spectral density (PSD) estimates obtained by the eigenvector methods are valuable features for representing the time-varying biomedical signals, and that the CNN, ME, and MME trained on these features achieved high classification accuracies.
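A brief sketch of the feature-extraction stage with one eigenvector method, MUSIC, follows (the snapshot length, model order, and test signal are invented): the pseudospectrum power levels computed this way are the kind of features that would be fed to the classifiers.

```python
# MUSIC pseudospectrum estimation via the noise-subspace eigenvectors.
import numpy as np
from scipy.signal import find_peaks

def music_psd(x, p=4, m=20, n_freqs=512):
    # Correlation matrix from overlapping length-m snapshots.
    snaps = np.stack([x[i:i + m] for i in range(len(x) - m)])
    R = snaps.T @ snaps / len(snaps)
    w, V = np.linalg.eigh(R)                  # eigenvalues ascending
    En = V[:, : m - p]                        # noise subspace (m - p smallest)
    freqs = np.linspace(0, 0.5, n_freqs)      # cycles per sample
    psd = np.empty(n_freqs)
    for i, f in enumerate(freqs):
        a = np.exp(-2j * np.pi * f * np.arange(m))    # steering vector
        psd[i] = 1.0 / np.linalg.norm(En.T @ a) ** 2  # En is real-valued
    return freqs, psd

rng = np.random.default_rng(0)
n = np.arange(1024)
x = (np.sin(2 * np.pi * 0.1 * n) + 0.5 * np.sin(2 * np.pi * 0.3 * n)
     + 0.1 * rng.normal(size=n.size))
freqs, psd = music_psd(x)
peaks, _ = find_peaks(psd)
print("dominant peaks near:", freqs[peaks[np.argsort(psd[peaks])[-2:]]])
```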

19.
Existing host-based intrusion detection methods are mainly based on recording and analyzing the system calls of invading processes (such as exploring the sequences of system calls and their occurrence probabilities). However, these methods are not precise enough, as they do not reveal the underlying intrusion events in detail (e.g., where the system vulnerabilities are and what caused the invasion). On the other hand, although log-based forensic analysis can enhance the understanding of how invading processes break into the system and which files they affect, manually acquiring information from logs, which mix the users' normal behavior with the intruders' illegal behavior, is a very cumbersome process. This paper proposes to use provenance, the history or lineage of an object that explicitly represents the dependency relationships between damaged files and intrusion processes, rather than the underlying system calls, to detect and analyze intrusions. Provenance more accurately reveals and records the data and control flow between files and processes, reducing the potential false alarms caused by system call sequences. Moreover, the warning report produced during an intrusion can explicitly identify system vulnerabilities and intrusion sources, and provides detection points for further provenance-graph-based forensic analysis. Experimental results show that this framework can identify intrusions with a high detection rate, a lower false alarm rate, and smaller detection time overhead than the traditional system-call-based method. In addition, it can analyze system vulnerabilities and attack sources quickly and accurately.
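A toy demonstration of the provenance idea follows (all events are invented): record read, write, and spawn dependencies as a directed graph and trace backwards from a damaged file to candidate intrusion sources.

```python
# Toy provenance graph: backward-trace a damaged file to candidate sources.
import networkx as nx

G = nx.DiGraph()
# Edges point from an information source to what it influenced
# (file -> process for reads, process -> file/process for writes/spawns).
G.add_edge("proc:httpd#77", "proc:bash#412")    # exploited httpd spawns a shell
G.add_edge("/tmp/evil.sh", "proc:bash#412")     # the shell reads a dropped script
G.add_edge("proc:bash#412", "/etc/passwd")      # the shell overwrites the file
G.add_edge("/var/log/syslog", "proc:rsyslogd#9")   # unrelated normal activity
G.add_edge("proc:rsyslogd#9", "/var/log/mail.log")

damaged = "/etc/passwd"
# Everything the damaged file transitively depends on: the intrusion trail.
print("provenance of", damaged, "->", sorted(nx.ancestors(G, damaged)))
# Unlike a flat system-call log, the trace excludes rsyslogd's activity.
```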

20.
Fault detection and isolation in water distribution networks is an active topic due to the nonlinearities of flow propagation and to recent increases in data availability from sensor deployment. Here, we propose an efficient two-step data-driven alternative: first, we perform sensor placement taking the network topology into account; second, we use the incoming sensor data to build a network model through online dictionary learning. Online learning is fast and can handle large networks, as it processes small batches of signals at a time. This brings the benefit of continuously integrating new data into the existing network model, both at the beginning for training and in production as new data samples are gathered. The proposed algorithms show good performance in our simulations on both small- and large-scale networks.
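A small sketch of the second step follows (sensor placement is omitted; the data, sizes, and threshold are invented): learn a dictionary from streaming batches of normal sensor snapshots with scikit-learn's MiniBatchDictionaryLearning, then flag snapshots with large reconstruction residuals.

```python
# Online dictionary learning on sensor snapshots; faults = high residual.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_sensors, n_atoms = 30, 10
basis = rng.normal(size=(n_atoms, n_sensors))   # latent demand patterns

def batch(n):                                   # simulated normal snapshots
    codes = rng.normal(size=(n, n_atoms))
    return codes @ basis + 0.01 * rng.normal(size=(n, n_sensors))

dl = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=32,
                                 transform_algorithm="lasso_lars",
                                 transform_alpha=0.01, random_state=0)
for _ in range(20):                             # continuous integration of data
    dl.partial_fit(batch(32))

def residual(X):
    code = dl.transform(X)                      # sparse code per snapshot
    return np.linalg.norm(X - code @ dl.components_, axis=1)

normal = batch(100)
faulty = batch(100)
faulty[:, 5] += 3.0                             # leak-like offset on one sensor
thr = residual(normal).mean() + 3 * residual(normal).std()
print("flagged fraction:", (residual(faulty) > thr).mean())
```

The `partial_fit` loop is what makes the approach online: each new batch refines the existing dictionary instead of retraining from scratch, matching the continuous-integration property the abstract describes.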
