Similar Literature
20 similar documents found.
1.
In this paper we propose a novel method for brain SPECT image feature extraction based on the empirical mode decomposition (EMD). The proposed method, applied to assist the diagnosis of Alzheimer's disease (AD), selects the most discriminative voxels for support vector machine (SVM) classification from the transformed EMD feature space. In particular, the combination of frequency components of the EMD transformation is found to retain the regional differences in functional activity that are characteristic of AD. In general, the EMD is a fully data-driven, unsupervised and additive signal decomposition that does not need any a priori defined basis system. Several experiments were carried out on a balanced SPECT database collected from the “Virgen de las Nieves” Hospital in Granada (Spain), containing 96 recordings; the method yields a maximum accuracy of 100% and an average of 93.52 ± 4.92%, with an acceptably small bias in the cross-validation (CV) estimate of the true error, when separating AD patients from normal controls. In this way, we achieve the “gold standard” labeling, outperforming recently proposed computer-aided diagnosis (CAD) systems.
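A minimal sketch of the pipeline the abstract describes, assuming the PyEMD package (pip install EMD-signal) and scikit-learn; the voxel profiles, energy features, and placeholder data are illustrative stand-ins, not the authors' implementation:

```python
# Hedged sketch: EMD features + SVM, assuming PyEMD and scikit-learn.
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def emd_features(profile, n_imfs=3):
    """Decompose a 1-D voxel-intensity profile into IMFs and keep the
    energy of the first few components as features (simplified stand-in)."""
    imfs = EMD().emd(profile)
    energies = [np.sum(imf ** 2) for imf in imfs[:n_imfs]]
    energies += [0.0] * (n_imfs - len(energies))   # pad if fewer IMFs found
    return energies

# X_raw: (n_subjects, n_voxels) flattened SPECT intensities (placeholder data)
rng = np.random.default_rng(0)
X_raw = rng.random((96, 256))
y = rng.integers(0, 2, 96)                 # 0 = control, 1 = AD (placeholder)

X = np.array([emd_features(row) for row in X_raw])
print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```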

2.
To detect cerebral microbleed (CMB) voxels within the brain, we scanned subjects with susceptibility-weighted imaging. We then used undersampling to resolve the accuracy paradox caused by the imbalance between CMB and non-CMB voxels, and developed a seven-layer deep neural network (DNN) comprising one input layer, four sparse autoencoder layers, one softmax layer, and one output layer. Our simulation showed that this method achieved a sensitivity of 95.13%, a specificity of 93.33%, and an accuracy of 94.23%, outperforming three state-of-the-art approaches.
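A hedged sketch of the undersampling step alone, using only NumPy; the seven-layer DNN itself and the real voxel features are omitted:

```python
# Random undersampling of the majority (non-CMB) class to balance the data.
import numpy as np

def undersample(X, y, seed=0):
    """Match the majority-class count to the minority-class count."""
    rng = np.random.default_rng(seed)
    minority, majority = (1, 0) if (y == 1).sum() < (y == 0).sum() else (0, 1)
    min_idx = np.flatnonzero(y == minority)
    maj_idx = rng.choice(np.flatnonzero(y == majority),
                         size=min_idx.size, replace=False)
    keep = np.concatenate([min_idx, maj_idx])
    rng.shuffle(keep)
    return X[keep], y[keep]

X = np.random.rand(10000, 61)                    # placeholder voxel features
y = (np.random.rand(10000) < 0.02).astype(int)   # rare CMB voxels (placeholder)
Xb, yb = undersample(X, y)
print(np.bincount(yb))                           # balanced class counts
```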

3.
This paper presents a novel method for intensity normalization of DaTSCAN SPECT brain images. The proposed methodology is based on Gaussian mixture models (GMMs) and considers not only the intensity levels but also the coordinates of voxels inside the so-defined spatial Gaussian functions. The model parameters are obtained according to a maximum-likelihood criterion using the expectation-maximization (EM) algorithm. First, an averaged control-subject image is computed to obtain a threshold-based mask that selects only the voxels inside the skull. Then, the GMM is fitted to the DaTSCAN SPECT database, quantizing the image space with Gaussian kernels whose linear combination approximates the image intensity. According to a probability threshold that measures the weight of each kernel, or “cluster”, in the striatum area, the voxels in the non-specific region are intensity-normalized by removing clusters whose likelihood is negligible.
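A hedged sketch of the spatial-intensity GMM idea using scikit-learn's EM-based GaussianMixture; the volume, mask rule, and cluster-suppression threshold are simplified placeholders, not the paper's pipeline:

```python
# Spatial-intensity GMM fitted with EM; each sample couples coordinates
# with the intensity level, as the abstract describes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
vol = rng.random((32, 32, 32))             # placeholder volume
mask = vol > 0.2                           # threshold-based mask (stand-in)

# Each sample = (x, y, z, intensity): the model sees coordinates AND levels.
coords = np.argwhere(mask)
samples = np.column_stack([coords, vol[mask]])

gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(samples)
resp = gmm.predict_proba(samples)          # per-voxel cluster responsibilities

# Suppress voxels dominated by negligible-weight clusters (illustrative rule).
negligible = gmm.weights_ < 0.05
vol_norm = vol.copy()
vol_norm[tuple(coords[resp[:, negligible].sum(1) > 0.5].T)] = 0.0
```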

4.
We propose a 3D symmetric homotopic thinning method based on the critical-kernels framework. It can produce either curvilinear or surface skeletons, depending on the criterion used to prevent salient features of the object from deletion. Rather than detecting curve or surface extremities, our new method detects isthmuses, that is, parts of an object that are “locally like a curve or a surface”. This allows us to propose a natural extension of the method that copes with robustness to noise; the extension is based on a notion of “isthmus persistence”. To the best of our knowledge, this is the first method that makes it possible to obtain 3D symmetric and noise-robust curvilinear/surface skeletons of objects made of voxels.
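For orientation only, a baseline 3-D thinning with scikit-image; this is not the critical-kernels/isthmus-persistence method, merely a comparable off-the-shelf skeletonization:

```python
# Baseline 3-D thinning via scikit-image (NOT the paper's method).
import numpy as np
from skimage.morphology import skeletonize

# Placeholder object: a solid tube of voxels.
vol = np.zeros((40, 40, 40), dtype=bool)
vol[5:35, 18:23, 18:23] = True

skeleton = skeletonize(vol)                # thin curvilinear skeleton
print(skeleton.sum(), "skeleton voxels")
```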

5.
To improve the classification accuracy of the kernel extreme learning machine (KELM), this paper proposes a KELM parameter-optimization method (GA-KELM) that combines K-fold cross-validation (K-CV) with a genetic algorithm (GA). The average accuracy of the models trained during CV is used as the GA fitness function, providing an evaluation criterion for KELM parameter optimization; the KELM with the GA-optimized parameters is then used for data classification. Simulations on UCI datasets show that the proposed method outperforms GA combined with support vector machines (GA-SVM) and GA combined with back-propagation (GA-BP) in overall performance and achieves higher classification accuracy.
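A hedged sketch of the GA-KELM idea: the K-fold CV average accuracy serves as the GA fitness for tuning a minimal kernel ELM's (C, gamma). The toy GA loop and random data are illustrative assumptions, not the paper's implementation:

```python
# K-fold CV accuracy as GA fitness for a minimal KELM (illustrative only).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics.pairwise import rbf_kernel

def kelm_fit_predict(Xtr, ytr, Xte, C, gamma):
    """Minimal KELM: beta = (K + I/C)^-1 T, RBF kernel, one-hot targets."""
    K = rbf_kernel(Xtr, Xtr, gamma=gamma)
    T = np.eye(ytr.max() + 1)[ytr]
    beta = np.linalg.solve(K + np.eye(len(Xtr)) / C, T)
    return (rbf_kernel(Xte, Xtr, gamma=gamma) @ beta).argmax(1)

def fitness(params, X, y, k=5):
    """Average K-fold CV accuracy — the GA's evaluation criterion."""
    C, gamma = params
    accs = []
    for tr, te in StratifiedKFold(k, shuffle=True, random_state=0).split(X, y):
        pred = kelm_fit_predict(X[tr], y[tr], X[te], C, gamma)
        accs.append((pred == y[te]).mean())
    return np.mean(accs)

# Toy GA: keep the best half of the population, mutate multiplicatively.
rng = np.random.default_rng(0)
X = rng.random((150, 4)); y = rng.integers(0, 3, 150)   # placeholder data
pop = 10 ** rng.uniform(-2, 2, size=(20, 2))            # log-uniform (C, gamma)
for _ in range(10):
    scores = np.array([fitness(p, X, y) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]
    pop = np.vstack([elite, elite * 10 ** rng.normal(0, 0.2, elite.shape)])
best = pop[np.argmax([fitness(p, X, y) for p in pop])]
```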

6.
We present an accurate and fast approach to MR-image segmentation of brain tissues that is robust to anatomical variations and completes in less than one minute, on average, on modern PCs. The method first corrects voxel values in the brain based on local estimates of the white-matter intensities. This strategy is inspired by other works, but it is simple, fast, and very effective. Tissue classification exploits a recent clustering approach based on the optimum-path forest (OPF), which can find natural groups such that the absolute majority of voxels in each group belongs to the same class. First, a small random set of brain voxels is used for OPF clustering. Cluster labels are propagated to the remaining voxels, and class labels are then assigned to each group. The experiments used several datasets from three protocols (involving normal subjects, phantoms, and patients), two state-of-the-art approaches, and a novel methodology that finds the best choice of parameters for each method within the parameters' operational range using a training dataset. The proposed method outperformed the compared approaches in speed, accuracy, and robustness.
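A hedged sketch of the subsample-then-propagate pipeline; KMeans stands in for OPF clustering (which scikit-learn does not provide), and the per-voxel features are placeholders:

```python
# Cluster a small random subset of voxels, then propagate cluster labels
# to all remaining voxels via nearest neighbour (OPF replaced by KMeans).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
features = rng.random((200_000, 3))        # placeholder per-voxel features

subset = rng.choice(len(features), 2000, replace=False)
labels = KMeans(n_clusters=3, n_init=10,
                random_state=0).fit_predict(features[subset])

propagator = KNeighborsClassifier(n_neighbors=1).fit(features[subset], labels)
all_labels = propagator.predict(features)  # labels for every voxel
```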

7.
Research on Optimizing the ReLU Activation Function
The gated recurrent unit (GRU) is an improved long short-term memory (LSTM) architecture that effectively alleviates the long training time of the LSTM. Building on the GRU, this paper compares and studies the performance of the sigmoid, tanh, and ReLU activation functions, analyzes the strengths and weaknesses of each in detail, and proposes a new activation function, the hyperbolic tangent linear unit (TLU). Experiments show that the new activation function significantly accelerates deep-network training while effectively reducing the training error.
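A hedged sketch of one plausible TLU form, linear for positive inputs and tanh for negative ones; the paper's exact definition may differ:

```python
# Assumed TLU form: identity for x > 0, tanh otherwise.
import numpy as np

def tlu(x):
    """TLU(x) = x for x > 0, tanh(x) otherwise (assumed definition)."""
    return np.where(x > 0, x, np.tanh(x))

print(tlu(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
```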

8.
A 3D Face Model Reconstruction Algorithm Using Stereo Image Pairs
A 3D face model is reconstructed from a frontal stereo image pair, without a 3D laser scanner or a generic face model. After the stereo pair is acquired and rectified, image matching is performed with a seed-pixel growing algorithm. The seed-selection algorithm ensures that a sufficient number of seed pixels have reliable disparities; a growing algorithm based on disparity confidence is also proposed, reducing the likelihood of large mismatched regions in the disparity map. Finally, the 3D face model is reconstructed using a disk-particle representation and Delaunay triangulation. Experimental results show that the algorithm produces smooth, realistic 3D face models.
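The paper's seed-growing matcher is not publicly available; as a hedged baseline, the rectified-pair-to-disparity step can be illustrated with OpenCV's semi-global matcher (the image file names below are hypothetical):

```python
# Baseline dense disparity from a rectified stereo pair (NOT seed-growing).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files,
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # already rectified

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
```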

9.
Current improvements in the performance of deep neural networks are partly due to the introduction of rectified linear units. A ReLU activation function outputs zero for negative inputs, which can cause some neurons to die and shifts the mean of the outputs away from zero; this bias shift causes oscillations and impedes learning. Following the principle that zero-mean activations improve learning ability, a softplus linear unit (SLU) is proposed as an adaptive activation function that can speed up learning and improve performance in deep convolutional neural networks. First, to reduce the bias shift, negative inputs are processed with the softplus function, and a general form of the SLU function is proposed. Second, the parameters of the positive component are fixed to control vanishing gradients. Third, update rules for the parameters of the negative component are established to meet back-propagation requirements. Finally, we designed deep auto-encoder networks and ran unsupervised-learning experiments with them on the MNIST dataset; for supervised learning, we designed deep convolutional neural networks and ran experiments on the CIFAR-10 dataset. The experiments show faster convergence and better image-classification performance for SLU-based networks than for networks with rectified activation functions.
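A hedged sketch of one plausible SLU form: identity for positive inputs and a shifted softplus for negative ones, so the function is continuous at zero. The paper's parameterized general form may differ:

```python
# Assumed SLU form; softplus(0) = log 2, so the shift keeps continuity at 0.
import numpy as np

def slu(x, alpha=1.0):
    """SLU(x) = x for x >= 0; alpha * (softplus(x) - log 2) otherwise."""
    neg = alpha * (np.log1p(np.exp(x)) - np.log(2.0))
    return np.where(x >= 0, x, neg)

print(slu(np.array([-3.0, -1.0, 0.0, 1.0, 3.0])))
```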

10.
Looking back on the development of computer technology, particularly in the context of manufacturing, we can distinguish three big waves of technological exuberance with a wavelength of roughly 30 years. In the first wave, during the 1950s, the mainframe computers of that time were conceptualized as “electronic brains” and envisaged as the central control unit of an “automatic factory” (Wiener). Thirty years later, during the 1980s, knowledge-based systems in computer-integrated manufacturing (CIM) were adored as the computational core of the “unmanned factory”. Both waves dismally stranded on the contumacies of reality. Nevertheless, another thirty years later, we now witness the launch of the “smart factory” based on networks of “artificially intelligent” multi-agent or “cyber-physical systems” (often also addressed as the “internet of things”). From the very beginning, these technological exuberances were rooted in mistaken metaphors describing the artifacts (e.g. “electronic brain”, “knowledge-based” or “intelligent systems”) and, hence, in delusions about the true nature of computer systems. The behaviour of computers is, as computing science teaches us, strictly restricted to executing computable functions by means of algorithms; it thus neither resembles the performance of a brain as part of a complex sensitive living body, nor is it in any meaningful sense “knowledgeable” or “intelligent” (these predicates remain reserved for the programmer designing the algorithms). As the delusion of being able to implement “smart factories” gains momentum anew, despite the countless failed attempts before, it appears essential to reflect on the underlying misconceptions.

11.
We propose to detect brain activation from the fMRI time series of a group study by modeling fuzzy features. Five discriminating features are automatically extracted from the fMRI data by a sequence of temporal sliding windows. A fuzzy model based on these features is first derived by a gradient method on a set of initial training data and then incrementally refined. The resulting fuzzy activation maps of all subjects are combined to provide a measure of the strength of activation of each voxel across the group. A two-way thresholding scheme is introduced to determine truly activated voxels. The method is tested on both synthetic and real fMRI datasets; it is less vulnerable to correlated noise and is able to capture the key activation of a group of subjects by adapting to hemodynamic variability across subjects.

12.
Confidence Transformation for Combining Classifiers
This paper investigates a number of confidence-transformation methods for measurement-level combination of classifiers. Each confidence-transformation method is the combination of a scaling function and an activation function. The activation functions correspond to different types of confidences: likelihood (exponential), log-likelihood, sigmoid, and the evidence combination of sigmoid measures. The sigmoid and evidence measures serve as approximations to class probabilities. The scaling functions are derived by Gaussian density modeling, logistic regression with variable inputs, etc. We test the confidence-transformation methods in handwritten digit recognition by combining variable sets of classifiers: neural classifiers only, distance classifiers only, strong classifiers, and mixed strong/weak classifiers. The results show that confidence transformation is effective in improving the combination performance in all settings. Normalizing class probabilities to sum to one is shown to be detrimental to the combination performance. Among the scaling functions, the Gaussian method and logistic regression perform well in most cases. Regarding the confidence types, the sigmoid and evidence measures perform well in most cases, and the evidence measure generally outperforms the sigmoid measure. We also show that the confidence-transformation methods are highly robust to the validation sample size used for parameter estimation.
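A hedged sketch of one scaling-plus-activation combination from the abstract: an affine scaling composed with a sigmoid activation, followed by simple measurement-level fusion; the scaling parameters and scores are placeholders (in the paper they would be fitted, e.g. by logistic regression):

```python
# Scaling (affine) + sigmoid activation, then average-fusion of confidences.
import numpy as np

def sigmoid_confidence(scores, a=1.0, b=0.0):
    """Map raw classifier scores to [0, 1] confidences; a, b assumed fitted."""
    return 1.0 / (1.0 + np.exp(-(a * scores + b)))

rng = np.random.default_rng(0)
scores_clf1 = rng.normal(size=100)         # raw outputs of classifier 1
scores_clf2 = rng.normal(size=100)         # raw outputs of classifier 2

conf = np.stack([sigmoid_confidence(scores_clf1),
                 sigmoid_confidence(scores_clf2)])
combined = conf.mean(axis=0)               # fused per-sample confidence
```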

13.
This study aimed to investigate the association between inappropriate workstation components and neck pain intensity. The cross-sectional study was conducted on 309 Japanese office workers. Workstation questionnaires were developed based on previous studies and consisted of 11 items assessing the armrest, monitor screen, desk, and keyboard. For each of the 11 items, we defined whether the item was “inadequate” based on previous studies. Neck pain was measured using a numerical rating scale (NRS) of 0–10, and the NRS scores were categorized into three levels (0: “no pain”, 1–3: “low pain intensity”, and 4–10: “high pain intensity”). Crude and adjusted ordered logistic regressions were used to determine the association between inadequate workstation components and neck pain intensity. In the first analysis model, each item of the workstation questionnaire was used as an explanatory variable; in the second, the number of inadequate workstation components was used as the explanatory variable. The ordered logistic regression analyses showed that the eye-to-monitor distance was significantly associated with neck pain intensity (crude OR: 2.21, 95% CI: 1.41–3.19; adjusted OR: 1.79, 95% CI: 1.08–2.50). The number of inadequate workstation components also had a significant positive association with neck pain intensity (crude OR: 1.19, 95% CI: 1.04–1.34; adjusted OR: 1.15, 95% CI: 1.00–1.32). These results suggest that the distance between the eye and the monitor may be the most important workstation factor contributing to neck pain, and that workstation components should be assessed collectively.
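A hedged sketch of the second analysis model (inadequate-item count as the explanatory variable), assuming statsmodels' OrderedModel; the data below are random placeholders, not the study's data:

```python
# Ordered logistic regression of categorized pain scores on item counts.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 309
df = pd.DataFrame({
    "inadequate_items": rng.integers(0, 12, n),   # count of inadequate items
    "pain": pd.Categorical(rng.integers(0, 3, n), ordered=True),  # 3 levels
})

model = OrderedModel(df["pain"], df[["inadequate_items"]], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params["inadequate_items"]))     # odds ratio per extra item
```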

14.
Zhang Lili, Hu Jiexiang, Meng Xiangzheng, Jin Peng. Engineering with Computers, 2021, 38(2): 1095–1109.

The design optimization of a periodic lattice cellular structure relying exclusively on a computational simulation model is time-consuming, even computationally prohibitive. To relieve the computational burden, this paper proposes an efficient optimization method for periodic lattice cellular structure design based on a K-fold support vector regression model (K-SVR). First, based on loading experiments, the most promising unit cell of the periodic lattice cellular structure is selected from five typical unit cells. Second, an initial SVR model is constructed to replace the simulation model, and the K-fold cross-validation approach is used to extract error information from the SVR model at the sample points. According to this error information, the sample points are sorted and classified into several subsets, and a global K-SVR model is then re-constructed by aggregating the SVR model of each subset. Third, since prediction errors between the K-SVR model and the simulation model may lead to infeasible optimal solutions, an uncertainty quantification approach is developed to ensure the feasibility of the optimal solution for the periodic lattice cellular structure design. Finally, the effectiveness and merits of the proposed approach are demonstrated on the design optimization of an A-pillar and a seat-bottom frame.
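A hedged sketch in the spirit of K-SVR: one SVR per CV fold, with held-out errors used to weight the ensemble members. The weighting rule and synthetic data are illustrative assumptions, not the paper's sorting/aggregation scheme:

```python
# K-fold SVR ensemble weighted by held-out error (simplified stand-in).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((120, 5))                   # design variables (placeholder)
y = X @ rng.random(5) + 0.05 * rng.normal(size=120)   # simulated responses

models, weights = [], []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m = SVR(kernel="rbf", C=10.0).fit(X[tr], y[tr])
    err = np.mean((m.predict(X[te]) - y[te]) ** 2)     # held-out error
    models.append(m)
    weights.append(1.0 / (err + 1e-12))    # lower error -> higher weight

weights = np.array(weights) / np.sum(weights)

def k_svr_predict(Xq):
    """Weighted aggregation of the fold-wise SVR predictions."""
    return sum(w * m.predict(Xq) for w, m in zip(weights, models))

print(k_svr_predict(X[:3]), y[:3])
```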


15.
The curriculum vitae (CV, also referred to as a “résumé”) is an established representation of a person's academic and professional history. A typical CV comprises multiple sections associated with spatio-temporal, nominal, hierarchical, and ordinal data. The main task of a recruiter, given a job application with specific requirements, is to compare and assess CVs in order to build a shortlist of promising candidates to interview. Commonly, this is done by viewing CVs side by side, which becomes challenging when comparing more than two CVs because the reader must switch attention between them. Furthermore, there is no guarantee that the CVs are structured similarly, which makes the overview cluttered and significantly slows down the comparison. To address these challenges, we propose “CV3”, an interactive exploration environment offering users a new way to explore, assess, and compare multiple CVs and to suggest suitable candidates for specific job requirements. We validate our system by means of domain-expert feedback, which highlights both the efficacy of our approach and its limitations. We learned that CV3 eases the overall burden on recruiters, thereby assisting them in the selection process.

16.
A copula density is the joint probability density function (PDF) of a random vector with uniform marginals. We introduce an approach to bivariate copula density estimation based on maximum penalized likelihood estimation (MPLE) with a total-variation (TV) penalty term. The marginal-unity and symmetry constraints on the copula density are enforced by linear equality constraints. The TV-MPLE problem subject to linear equality constraints is solved by an augmented-Lagrangian, operator-splitting algorithm; it offers an order-of-magnitude improvement in computational efficiency over an unconstrained TV-MPLE method solved by the log-barrier method for a second-order cone program. The regularization parameter is selected in a data-driven way through K-fold cross-validation (CV). Simulations and a real-data application show the effectiveness of the proposed approach. MATLAB code implementing the methodology is available online.
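The constrained objective implied by the abstract can be sketched as follows; the notation here is assumed from the abstract, not taken from the paper:

```latex
% TV-penalized MPLE for a copula density c on [0,1]^2, given samples (u_i, v_i):
\max_{c \ge 0} \; \sum_{i=1}^{n} \log c(u_i, v_i) \;-\; \lambda \, \mathrm{TV}(c)
\quad \text{s.t.} \quad
\int_0^1 c(u, v)\, dv = 1 \;\; \forall u, \qquad
\int_0^1 c(u, v)\, du = 1 \;\; \forall v, \qquad
c(u, v) = c(v, u).
```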

17.
In this article, we present an electromagnetic study of an electrically programmable graphene-based metasurface with individual scattering control. Our investigation is based on the method of moments combined with the generalized equivalent circuit (MoM-GEC) approach. We show that tuning a unit cell's conductivity changes its input impedance and scattering matrix, so each unit cell of the metasurface exhibits a dynamic phase response that can be switched between 0° and −180° by toggling between high-transmission and total-reflection states. Based on this feature, a 1-bit coding metasurface consisting of the discrete codes “0” and “1” is used to synthesize 3D beams, and tailorable anomalous reflection and diffusion are studied under normal incidence at a fixed frequency of 3.9 THz. This study opens new opportunities in terahertz beam engineering and security-scanner applications.
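For intuition, a hedged sketch of the far-field array factor of a 1-bit coding sequence (0° / 180° element phases); this is a generic textbook computation with an assumed half-wavelength pitch, not the paper's MoM-GEC model:

```python
# Array factor of a 1-D "0101..." coding metasurface at 3.9 THz.
import numpy as np

c = 3e8
f = 3.9e12                                 # 3.9 THz, from the abstract
lam = c / f
d = lam / 2                                # element pitch (assumed)
k = 2 * np.pi / lam

code = np.array([0, 1] * 8)                # 1-bit coding sequence "0101..."
phase = np.pi * code                       # "0" -> 0 rad, "1" -> pi rad

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
n = np.arange(len(code))
af = np.abs(np.exp(1j * (np.outer(np.sin(theta), k * d * n) + phase)).sum(1))
print(np.degrees(theta[np.argmax(af)]))    # direction of a main lobe
```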

18.
Background: We propose a new automatic and rapid computer-aided diagnosis system to detect pathological brain images in magnetic resonance imaging (MRI) scans. Methods: For simplicity, we cast the problem as a binary classification task (pathological vs. normal) with two steps: first, Hu moment invariants (HMI) are extracted from a given MR brain image; then, the seven HMI features are fed into two classifiers, the twin support vector machine (TSVM) and the generalised eigenvalue proximal SVM (GEPSVM). Results: A 5 × 5-fold cross-validation on a dataset of 90 MR brain images demonstrated that the proposed “HMI + GEPSVM” and “HMI + TSVM” methods achieved a classification accuracy of 98.89%, higher than eight state-of-the-art methods: “DWT + PCA + BP-NN”, “DWT + PCA + RBF-NN”, “DWT + PCA + PSO-KSVM”, “WE + BP-NN”, “WE + KSVM”, “DWT + PCA + GA-KSVM”, “WE + PSO-KSVM” and “WE + BBO-KSVM”. Conclusion: The proposed methods are superior to the other methods for pathological brain detection (p < 0.05).
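A hedged sketch of the feature-extraction step, assuming OpenCV and scikit-learn; a standard SVC replaces TSVM/GEPSVM (neither ships with scikit-learn), and the images are random placeholders:

```python
# Seven Hu moment invariants per image, fed to an SVM classifier.
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hmi_features(img):
    """Seven Hu moment invariants, log-scaled for numeric stability."""
    hu = cv2.HuMoments(cv2.moments(img)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

rng = np.random.default_rng(0)
images = rng.random((90, 64, 64)).astype(np.float32)   # placeholder slices
labels = rng.integers(0, 2, 90)            # 0 = normal, 1 = pathological

X = np.array([hmi_features(im) for im in images])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```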

19.
Natural sensory stimuli elicit complex brain responses that manifest in fMRI as widely distributed and overlapping clusters of hemodynamic responses. We propose a statistical signal-processing method for finding synchronous hemodynamic activity that directly or transiently reflects information about the experimental condition. Applied to fMRI data, the method searches for voxels whose activation patterns exhibit both high coherence and high variance across brain scans. The crux of the method is functional principal component analysis (fPCA) of activation patterns stored in a two-dimensional data matrix, with rows and columns representing voxels and scans, respectively. Without external information, fPCA is performed directly on this data matrix; otherwise, the data matrix is first transformed to highlight a specific source of variation, enabling fully or partially supervised fPCA with a single parameter determining the degree of supervision. We evaluated the method on a public benchmark of fMRI scans of subjects viewing natural movies, and found it very suitable for flexibly uncovering distributed, overlapping hemodynamic patterns that distinguish well between experimental conditions or cognitive states.
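A hedged sketch of the unsupervised case: PCA via SVD on a voxels-by-scans matrix, keeping voxels that strongly express the dominant pattern; the supervision transform, thresholds, and data are simplified assumptions:

```python
# PCA via SVD on a voxels-by-scans data matrix (simplified fPCA stand-in).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 120))        # rows: voxels, cols: scans

data -= data.mean(axis=1, keepdims=True)   # center each voxel's pattern
U, S, Vt = np.linalg.svd(data, full_matrices=False)

# Vt[0] is the dominant pattern across scans; U[:, 0] * S[0] measures how
# strongly (coherently, with high variance) each voxel expresses it.
strength = np.abs(U[:, 0]) * S[0]
candidates = np.flatnonzero(strength > np.percentile(strength, 99))
print(len(candidates), "candidate voxels")
```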

20.
Traditional activation functions such as the hyperbolic tangent and the logistic sigmoid historically saw frequent use in artificial neural networks. Nowadays, however, they have fallen out of favor in practice, largely due to the performance gap observed in recognition and classification tasks compared with their well-known counterparts such as the rectified linear unit or maxout. In this paper, we introduce a simple new type of activation function for multilayer feed-forward architectures. Unlike approaches that design new activation functions by discarding many of the mainstays of traditional activation-function design, our proposed function relies on them and therefore shares most of the properties of traditional activation functions. Nevertheless, it differs from them on two major points: its asymptote and its global extrema. Defining a function that has both a global maximum and a global minimum turned out to be critical during our design process, since we believe this is one of the main reasons behind the performance gap between traditional activation functions and their recently introduced counterparts. We evaluate the effectiveness of the proposed activation function on four commonly used datasets: MNIST, CIFAR-10, CIFAR-100, and Pang and Lee's movie-review dataset. Experimental results demonstrate that the proposed function can be applied effectively across these datasets, with accuracy, given the same network topology, competitive with the state of the art; in particular, it outperforms the state-of-the-art methods on the MNIST dataset.
