Similar Articles
20 similar articles found.
1.
Inclusive design has unique challenges because it aims to improve usability for a wide range of users. This typically includes people with lower levels of ability, as well as mainstream users. This paper examines the effectiveness of two methods that are used in inclusive design: user trials and exclusion calculations (an inclusive design inspection method). A study examined three autoinjectors using both methods (n = 30 for the user trials). The usability issues identified by each method are compared and the effectiveness of the methods is discussed. The study found that each method identified different kinds of issues, all of which are important for inclusive design. We therefore conclude that a combination of methods should be used in inclusive design rather than relying on a single method. Recommendations are also given for how the individual methods can be used more effectively in this context.

2.
The potential of using National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) images for large areas is often limited by cloud cover. This potential can be increased when small clouds are replaced by estimated reflection and emission values. In this study seven replacement methods are compared, ranging from simple replacement to stratified co-kriging. Images of subsequent days serve as co-variable, enabling the use of spatial and temporal information. For validation, cloud-free pixels were replaced with four patterns of artificially clouded pixels. Co-kriging as a combination of both temporal and spatial information resulted in the best estimates, reducing the mean squared errors by 20–70%. Stratification of the image did not result in better cloud replacement. Once kriging options have been implemented in existing image processing packages, co-kriging will be an easy-to-use solution to missing values, provided that images of subsequent days with low cloud coverage are available.
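The simplest strategies compared in such a study can be sketched with plain NumPy: a cloudy pixel is either copied directly from the co-image of the adjacent day, or predicted from a linear regression fitted on the cloud-free pixels shared by both images. The co-kriging variant that performed best requires a geostatistics package and is not reproduced here; the array names and synthetic data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for two co-registered AVHRR scenes of adjacent days.
target = rng.normal(300.0, 5.0, size=(100, 100))              # day t (to be repaired)
co_image = target + rng.normal(0.0, 1.0, size=target.shape)   # day t+1 (co-variable)
cloud = rng.random(target.shape) < 0.05                       # mask of "cloudy" pixels in day t

# Method 1: direct temporal replacement -- copy the co-image value.
repaired_copy = target.copy()
repaired_copy[cloud] = co_image[cloud]

# Method 2: temporal regression -- fit target ~ co_image on cloud-free pixels,
# then predict the missing values (uses the temporal correlation explicitly).
slope, intercept = np.polyfit(co_image[~cloud], target[~cloud], deg=1)
repaired_reg = target.copy()
repaired_reg[cloud] = slope * co_image[cloud] + intercept

# Validation in the style of the paper: mean squared error on the artificially
# clouded pixels, whose true values are known here.
truth = target[cloud]
for name, est in [("copy", repaired_copy[cloud]), ("regression", repaired_reg[cloud])]:
    print(name, "MSE:", np.mean((est - truth) ** 2))
```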

3.
A segmentation approach based on a Markov random field (MRF) model is an iterative algorithm; it needs many iteration steps to approximate a near-optimal solution, or it settles on an unsuitable solution after only a few steps. In this paper, we use a genetic algorithm (GA) to improve an unsupervised MRF-based segmentation approach for multi-spectral textured images. The proposed hybrid approach has the advantage of combining the fast convergence of the MRF-based iterative algorithm with the powerful global exploration of the GA. In experiments, synthesized color textured images and multi-spectral remote-sensing images were processed by the proposed approach to evaluate the segmentation performance. The experimental results show that the proposed approach improves MRF-based segmentation of multi-spectral textured images.
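A minimal sketch of this hybrid idea: the fitness is the negative MRF energy (a Gaussian data term plus a Potts smoothness term), each chromosome is a label image, and each GA generation is locally refined by an ICM sweep. The class count, energy weights, GA settings and synthetic image below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mrf_energy(labels, image, means, beta=1.0):
    """Potts-model MRF energy: squared data term + smoothness on 4-neighbours."""
    data = np.sum((image - means[labels]) ** 2)
    smooth = np.sum(labels[:, 1:] != labels[:, :-1]) + np.sum(labels[1:, :] != labels[:-1, :])
    return data + beta * smooth

def icm_sweep(labels, image, means, beta=1.0):
    """One Iterated Conditional Modes pass: greedy per-pixel label update."""
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            best, best_e = labels[i, j], np.inf
            for k in range(len(means)):
                e = (image[i, j] - means[k]) ** 2
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != k:
                        e += beta
                if e < best_e:
                    best, best_e = k, e
            labels[i, j] = best
    return labels

# Tiny synthetic two-region image (illustrative data).
image = np.where(np.arange(32)[:, None] < 16, 0.2, 0.8) + rng.normal(0, 0.15, (32, 32))
means = np.array([0.2, 0.8])        # assumed class means (unsupervised estimation omitted)
K, pop_size, generations = 2, 8, 10

# GA over label maps: keep the fitter half, random-mask crossover, small mutation,
# with one ICM sweep per child as the MRF-based local refinement.
population = [rng.integers(0, K, image.shape) for _ in range(pop_size)]
for _ in range(generations):
    population.sort(key=lambda lab: mrf_energy(lab, image, means))
    parents = population[: pop_size // 2]
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        mask = rng.random(image.shape) < 0.5
        child = np.where(mask, parents[a], parents[b])
        mutate = rng.random(image.shape) < 0.01
        child[mutate] = rng.integers(0, K, mutate.sum())
        children.append(icm_sweep(child, image, means))
    population = parents + children

best = min(population, key=lambda lab: mrf_energy(lab, image, means))
print("final energy:", mrf_energy(best, image, means))
```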

4.
In this study, we compared classical principal components analysis (PCA), generalized principal components analysis (GPCA), linear principal components analysis using neural networks (PCA-NN), and non-linear principal components analysis using neural networks (NLPCA-NN). Data were extracted from a patient satisfaction questionnaire concerning patients' satisfaction with hospital staff, administered in 2005 at the outpatient clinics of Trakya University Medical Faculty. We found that the percentages of explained variance of the principal components from PCA-NN and NLPCA-NN were highest for doctors, nurses, radiology technicians, laboratory technicians, and other staff on this patient satisfaction data set. The results show that the NN-based methods, which explain a higher percentage of variance than the classical methods, can be used for dimension reduction.
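This kind of comparison can be mimicked with scikit-learn: classical PCA reports its explained-variance ratio directly, while a small autoencoder (an MLPRegressor trained to reconstruct its input through a narrow bottleneck) gives a rough non-linear counterpart whose "explained variance" is taken as one minus the normalized reconstruction error. The stand-in data, network size and this proxy measure are assumptions for illustration, not the study's questionnaire or models.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Stand-in data (the study used a 2005 patient-satisfaction questionnaire).
X = StandardScaler().fit_transform(load_iris().data)
n_components = 2

# Classical PCA: explained variance of the first components.
pca = PCA(n_components=n_components).fit(X)
print("PCA explained variance:", pca.explained_variance_ratio_.sum())

# Non-linear PCA as an autoencoder: input -> 8 -> 2 -> 8 -> input.
ae = MLPRegressor(hidden_layer_sizes=(8, n_components, 8), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)
X_rec = ae.predict(X)
explained = 1.0 - np.sum((X - X_rec) ** 2) / np.sum((X - X.mean(axis=0)) ** 2)
print("Autoencoder 'explained variance':", explained)
```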

5.
Structural and Multidisciplinary Optimization - Methods of uncertainty analysis based on statistical moments are more convenient than methods that use a Taylor series expansion because the moments...

6.
Zhang Xiuwei, Yue Yuanzeng, Han Lin, Li Fei, Yuan Xiuzhong, Fan Minhao, Zhang Yanning. Multimedia Tools and Applications, 2021, 80(19): 28989-29004
Multimedia Tools and Applications - Spatially detailed characterization of the distribution, amount and timing of river ice is important for identifying and predicting potential ice hazards. In...

7.
International Journal of Computer Mathematics, 2012, 89(3-4): 307-320
In this paper the 'Stride of 3' reduction method is compared with the well-known cyclic reduction method for solving tridiagonal systems derived from the discretized steady-state convection-diffusion equation. The Stride of 3 algorithm is shown to be superior for moderate to large linear systems (e.g., of order > 20).
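Both reduction methods target tridiagonal systems of the kind sketched below. As a hedged reference point (not the Stride-of-3 or cyclic reduction algorithms themselves), this builds the central-difference discretization of a steady convection-diffusion problem and solves it with SciPy's banded solver; grid size, convection parameter and boundary values are illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

# Steady convection-diffusion -u'' + p u' = f on (0, 1), u(0) = 0, u(1) = 1,
# discretized with central differences on n interior points.
n, p = 50, 10.0
h = 1.0 / (n + 1)
lower = np.full(n - 1, -1.0 / h**2 - p / (2 * h))   # coefficient of u_{i-1}
diag  = np.full(n,      2.0 / h**2)                 # coefficient of u_i
upper = np.full(n - 1, -1.0 / h**2 + p / (2 * h))   # coefficient of u_{i+1}

f = np.zeros(n)
f[-1] -= (-1.0 / h**2 + p / (2 * h)) * 1.0          # fold u(1) = 1 into the right-hand side

# Banded storage expected by solve_banded: row 0 = superdiagonal, 1 = diagonal, 2 = subdiagonal.
ab = np.zeros((3, n))
ab[0, 1:] = upper
ab[1, :] = diag
ab[2, :-1] = lower

u = solve_banded((1, 1), ab, f)
print("u at midpoint:", u[n // 2])
```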

8.
The DBC and MDBC estimators lose a considerable amount of information when estimating the fractal dimension, which lowers the estimation accuracy and narrows the usable range. To address this limitation, a GDBC fractal dimension algorithm is proposed that uses the image gray-level mean to capture global characteristics. In the context of target recognition in real SAR ship images, GDBC is compared with and analyzed against other computation methods in terms of fractal dimension estimation, fitting error, and dynamic range. The experimental results show that the proposed algorithm is more effective and covers a wider range.
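GDBC is a modification of differential box counting (DBC); a hedged sketch of the baseline DBC estimator, which GDBC refines with gray-level means, is shown below. The synthetic image and scale choices are assumptions.

```python
import numpy as np

def dbc_fractal_dimension(img, scales=(2, 4, 8, 16, 32)):
    """Baseline differential box counting estimate of an image's fractal dimension."""
    M = img.shape[0]                 # assumes a square gray-level image
    G = int(img.max()) + 1           # number of gray levels
    log_n, log_inv_r = [], []
    for s in scales:
        h = max(1, s * G // M)       # box height for blocks of size s x s
        n_boxes = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                n_boxes += int(block.max() // h) - int(block.min() // h) + 1
        log_n.append(np.log(n_boxes))
        log_inv_r.append(np.log(M / s))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)   # FD = slope of log N(r) vs log(1/r)
    return slope

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(128, 128))      # stand-in for a SAR ship image chip
print("estimated fractal dimension:", dbc_fractal_dimension(img))
```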

9.
A unified framework for detecting both linear and planar structures in three-dimensional (3D) images is developed. The method uses an iterative detection and removal strategy. The dimension reduction scheme reduces the search space for lines by first finding 2D planes and then searching for lines in the selected planes only. Thus the computational time of the method is lower than that of the 3D Hough Transform (HT) for lines. The proposed method is tested using experimental Ground Penetrating Radar (GPR) data taken over buried pipes; however, the method is general enough to be applied to any situation where linear or planar structures need to be identified in 3D data.

10.
The fractal dimension is widely used in analyzing the fractal properties of one-dimensional time series, and many methods exist for computing it, but comprehensive comparisons of these methods are rarely reported. Eight commonly used algorithms for the fractal dimension of one-dimensional time series were compared on WCF synthetic time series with respect to accuracy, efficiency, and dependence on data length. The results show that the three most accurate algorithms are FA, DFA, and Higuchi, while the most efficient are Sevcik, Katz, and Castiglioni, whose accuracy is however relatively low; FA and Higuchi take slightly longer to compute but are more accurate. At a data length of 4096 points the computed values of all algorithms are essentially stable, and for FA, Higuchi, and DFA in particular they agree well with the theoretical values. It can therefore be concluded that Higuchi and DFA perform best for computing the fractal dimension of one-dimensional time series and should be preferred in related computations.
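Of the eight algorithms compared, Higuchi's method is one of the two recommended; a minimal NumPy sketch is given below. The test signal (Gaussian white noise, whose fractal dimension is close to 2) and kmax are illustrative choices.

```python
import numpy as np

def higuchi_fd(x, kmax=16):
    """Higuchi's estimator of the fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_k, log_L = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                       # k curves, offsets m = 0..k-1
            idx = np.arange(m, N, k)
            n_int = (N - 1 - m) // k             # number of increments in this curve
            if n_int < 1:
                continue
            length = np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_int * k) / k
            Lk.append(length)
        log_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(Lk)))
    slope, _ = np.polyfit(log_k, log_L, 1)       # FD = slope of log L(k) vs log(1/k)
    return slope

rng = np.random.default_rng(3)
signal = rng.normal(size=4096)                   # the study found ~4096 points sufficient
print("Higuchi FD of white noise:", higuchi_fd(signal))   # expected to be close to 2
```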

11.
In this paper, we propose a novel supervised dimension reduction algorithm based on the K-nearest neighbor (KNN) classifier. The proposed algorithm reduces the dimension of the data in order to improve the accuracy of KNN classification. The heuristic finds independent dimensions that decrease the Euclidean distance between a sample and its K nearest within-class neighbors while increasing the Euclidean distance between that sample and its M nearest between-class neighbors. It is a linear dimension reduction algorithm that produces a mapping matrix for projecting data into a low-dimensional space. The dimension reduction step is followed by a KNN classifier, so the method is applicable to high-dimensional multiclass classification. Experiments with artificial data such as Helix and Twin-peaks show the algorithm's ability for data visualization. The algorithm is compared with state-of-the-art algorithms on the classification of eight different multiclass data sets from the UCI collection, and simulation results show that it outperforms the existing algorithms. Visual place classification is an important problem for intelligent mobile robots; it not only involves high-dimensional data but also requires solving a multiclass classification problem, and a proper dimension reduction method is usually needed to decrease the computation and memory complexity of algorithms in large environments, so our method is well suited to it. We extract color histograms of omnidirectional camera images as primary features, reduce the features to a low-dimensional space, and apply a KNN classifier. Results of experiments on five real data sets show the superiority of the proposed algorithm over the others.
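The paper's specific heuristic is not reproduced here, but the same pipeline - learn a linear map tuned for KNN classification, project, then classify - can be sketched with scikit-learn's NeighborhoodComponentsAnalysis, a related supervised linear dimension-reduction method. The data set and parameters are illustrative.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Learn a linear projection to 2-D that is optimized for nearest-neighbour
# classification, then classify in the reduced space with KNN.
model = Pipeline([
    ("nca", NeighborhoodComponentsAnalysis(n_components=2, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
model.fit(X_tr, y_tr)
print("accuracy in the 2-D reduced space:", model.score(X_te, y_te))
```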

12.
To address the poor real-time performance and low recognition rates of human action recognition on big data, a supervised dimension reduction algorithm based on optimized projection for linear approximation sparse representation classification (OP-LASRC) is proposed. OP-LASRC projects the high-dimensional action data into a low-dimensional space through an optimized projection and is combined with the fast linear approximation sparse representation classification (LASRC) algorithm for human action recognition on big data. First, the OP-LASRC algorithm is designed around the residual behavior of LASRC to achieve supervised dimension reduction: a linear orthogonal projection reduces the dimensionality of the high-dimensional data while minimizing the within-class reconstruction residual of the training samples and maximizing the between-class reconstruction residual, thereby preserving the class information of the training samples. The reduced action data are then classified with LASRC: the sparse coefficients are estimated with the L2 norm, the training samples corresponding to the k largest coefficients are selected to shrink the training set, and the recognition result is obtained by L1-norm minimization and minimum reconstruction residual. The method was evaluated on the KTH action database in terms of recognition rate, robustness, and execution time. The experiments show that, after OP-LASRC supervised dimension reduction, LASRC not only achieves a recognition rate of 96.5% with a shorter execution time than comparable algorithms but also retains strong robustness, demonstrating that OP-LASRC is well matched to the LASRC algorithm for action recognition and offering a new approach to action recognition on big data.
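The classification stage described above - score training samples cheaply (here by correlation, standing in for the L2-norm coefficient estimate), keep the k best, then solve an L1-regularized reconstruction and pick the class with the smallest residual - can be sketched as follows. This is a generic sparse-representation classifier on stand-in data, not the OP-LASRC projection or the KTH features.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Dictionary: L2-normalized training samples as columns.
A = X_tr.T / (np.linalg.norm(X_tr, axis=1) + 1e-12)

def src_predict(sample, k=200, alpha=0.01):
    sample = sample / (np.linalg.norm(sample) + 1e-12)
    # Cheap screening: keep the k training samples most correlated with the query.
    scores = A.T @ sample
    keep = np.argsort(scores)[-k:]
    # Sparse coding over the reduced dictionary (L1-regularized least squares).
    coef = Lasso(alpha=alpha, max_iter=5000).fit(A[:, keep], sample).coef_
    # Class decision: smallest reconstruction residual using that class's atoms only.
    residuals = {}
    for c in np.unique(y_tr[keep]):
        mask = y_tr[keep] == c
        residuals[c] = np.linalg.norm(sample - A[:, keep][:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

pred = [src_predict(x) for x in X_te[:100]]
print("accuracy on 100 test samples:", np.mean(pred == y_te[:100]))
```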

13.
Recently, a moment-based sufficient dimension reduction methodology for multivariate regression, focusing on the first two moments, was introduced. In this article we present a novel alternative to the earlier method in roughly the same context. The novel method possesses several desirable properties that the earlier method lacked, such as dimension tests with chi-squared distributions and predictor-effect tests that do not assume any model. Simulated and real data examples are presented to study various properties of the proposed method and to compare it numerically with the earlier method.

14.
In the context of process industries, online monitoring of quality variables is often restricted by the inadequacy of measurement techniques or the low reliability of measuring devices. There has therefore been growing interest in the development of inferential sensors that provide frequent online estimates of key process variables on the basis of their correlation with real-time process measurements. Representation of multi-modal processes is one of the challenging issues that may arise in the design of inferential sensors. In this paper, Bayesian procedures for the development and implementation of adaptive multi-model inferential sensors are presented. It is shown that the application of a Bayesian scheme allows for accommodating overlapping operating modes and facilitates the inclusion of prior knowledge. The effectiveness of the proposed procedures is first demonstrated through a simulation case study. The efficacy of the method is further highlighted by a successful industrial application of an adaptive multi-model inferential sensor designed for real-time monitoring of a key quality variable in an oil sands processing unit.
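The core of a multi-model inferential sensor can be sketched as a Bayesian mixture of local linear models: each operating mode has its own regression model and an input-space Gaussian describing where the mode operates, and at run time the posterior mode probabilities weight the local predictions, which naturally accommodates overlapping modes. The two synthetic modes and all parameters below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)

# Two synthetic operating modes: different input regions and different local models.
x1 = rng.normal([0, 0], 1.0, size=(200, 2));  y1 = x1 @ [1.0, 2.0] + rng.normal(0, 0.1, 200)
x2 = rng.normal([3, 3], 1.0, size=(200, 2));  y2 = x2 @ [-1.0, 0.5] + 4 + rng.normal(0, 0.1, 200)

modes = []
for X, y in [(x1, y1), (x2, y2)]:
    Xb = np.column_stack([X, np.ones(len(X))])
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)       # local linear soft-sensor model
    modes.append({"theta": theta, "mean": X.mean(axis=0), "cov": np.cov(X.T)})

prior = np.array([0.5, 0.5])                             # prior mode probabilities

def predict(x):
    """Posterior-weighted prediction over the local models (Bayes' rule on the input)."""
    lik = np.array([multivariate_normal.pdf(x, m["mean"], m["cov"]) for m in modes])
    post = prior * lik
    post /= post.sum()
    local = np.array([np.append(x, 1.0) @ m["theta"] for m in modes])
    return post @ local, post

x_new = np.array([1.5, 1.5])                             # lies between the two modes
y_hat, post = predict(x_new)
print("posterior mode probabilities:", post, "estimate:", y_hat)
```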

15.
Ligand-based virtual screening approaches were applied to the CRF1 receptor. We compared ECFP6 fingerprints, FTrees, Topomers, Cresset FieldScreen, ROCS OpenEye shape Tanimoto, OpenEye combo score and OpenEye electrostatics. The 3D methods OpenEye shape Tanimoto, combo score and Topomers performed best at separating actives from inactives in retrospective experiments. By virtue of their higher enrichment, the same methods identified more active scaffolds. However, among a given number of active compounds, the Cresset and OpenEye electrostatic methods contained more scaffolds and returned ranked compounds with greater diversity. A selection of the methods was employed to recommend compounds for screening in a prospective experiment, and new active CRF1 antagonists were found. The new actives contained different underlying chemical architectures from the query molecules, a result indicative of successful scaffold hopping.
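The 2D fingerprint baseline in this comparison (ECFP-like circular fingerprints ranked by Tanimoto similarity) can be sketched with RDKit; radius 3 corresponds to ECFP6. The query and library SMILES below are arbitrary small molecules used for illustration, not CRF1 ligands.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query_smiles = "CC(=O)Oc1ccccc1C(=O)O"                     # placeholder query (aspirin)
library_smiles = ["c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCN(CC)CC", "O=C(O)c1ccccc1O"]

def ecfp6(smiles):
    """Morgan bit-vector fingerprint with radius 3 (ECFP6-like)."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=2048)

query_fp = ecfp6(query_smiles)
ranked = sorted(library_smiles,
                key=lambda s: DataStructs.TanimotoSimilarity(query_fp, ecfp6(s)),
                reverse=True)
for s in ranked:
    print(f"{DataStructs.TanimotoSimilarity(query_fp, ecfp6(s)):.3f}  {s}")
```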

16.
Ligand-based virtual screening approaches were applied to the CRF1 receptor. We compared ECFP6 fingerprints, FTrees, Topomers, Cresset FieldScreen, ROCS OpenEye shape Tanimoto, OpenEye combo score and OpenEye electrostatics. The 3D methods OpenEye shape Tanimoto, combo score and Topomers performed best at separating actives from inactives in retrospective experiments. By virtue of their higher enrichment, the same methods identified more active scaffolds. However, among a given number of active compounds, the Cresset and OpenEye electrostatic methods contained more scaffolds and returned ranked compounds with greater diversity. A selection of the methods was employed to recommend compounds for screening in a prospective experiment, and new active CRF1 antagonists were found. The new actives contained different underlying chemical architectures from the query molecules, a result indicative of successful scaffold hopping.

17.
Engineering design problems are often multi-objective in nature, which means trade-offs are required between conflicting objectives. In this study, we examine multi-objective algorithms for the optimal design of reinforced concrete structures. We begin with a review of multi-objective optimization approaches in general and then present a more focused review of multi-objective optimization of reinforced concrete structures. We note that the existing literature uses metaheuristic algorithms as the most common approach to solving multi-objective optimization problems, while other efficient approaches, such as derivative-free optimization and gradient-based methods, are often ignored in the structural engineering discipline. This paper presents a multi-objective model for the optimal design of reinforced concrete beams in which the optimal solution trades off cost against deflection. We then examine the efficiency of six established multi-objective optimization algorithms, including one method based on purely random point selection, on the design problem. The ranking and consistency of the results reveal a derivative-free optimization algorithm as the most efficient.
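The simplest of the approaches examined - purely random sampling of designs followed by extraction of the non-dominated set - is easy to sketch. The cost and deflection models below (a rectangular simply supported beam with a crude cost formula) are invented for illustration and are not the paper's design formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

L, w, E = 6.0, 20e3, 30e9                   # span [m], load [N/m], concrete modulus [Pa]
COST_PER_M3 = 150.0                         # assumed concrete cost [$/m^3]

def objectives(b, h):
    """Return (cost, midspan deflection) for a rectangular simply supported beam."""
    I = b * h**3 / 12.0
    deflection = 5 * w * L**4 / (384 * E * I)
    cost = COST_PER_M3 * b * h * L
    return cost, deflection

# Purely random design-point selection within assumed bounds.
b = rng.uniform(0.2, 0.5, 5000)             # width  [m]
h = rng.uniform(0.3, 0.9, 5000)             # height [m]
points = np.array([objectives(bi, hi) for bi, hi in zip(b, h)])

# Extract the non-dominated (Pareto) set: no other design is at least as good in
# both objectives and strictly better in one.
pareto = []
for i, p in enumerate(points):
    if not np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1)):
        pareto.append(i)
print(f"{len(pareto)} non-dominated designs out of {len(points)}")
```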

18.
This paper presents a combined reliability analysis approach composed of the Dimension Reduction Method (DRM) and the Maximum Entropy Method (MEM). DRM has emerged as a new approach in this field, with the advantages of being sensitivity-free and efficient instead of searching for the most probable point (MPP). However, in some recent implementations the Moment Based Quadrature Rule (MBQR) used in DRM was found to be numerically unstable when solving a system of linear equations for the integration points. In this study, a normalized Moment Based Quadrature Rule (NMBQR) is proposed to solve this problem; it considerably reduces the condition number of the coefficient matrix of the linear equations and improves robustness and stability. Based on the statistical moments obtained by DRM+NMBQR, the MEM is applied to construct the probability density function (PDF) of the response. A number of numerical examples are calculated and compared to Monte Carlo simulation (MCS), the First Order Reliability Method (FORM), the Extended Generalized Lambda Distribution (EGLD) and the Saddlepoint Approximation (SA). The results show the accuracy and efficiency of the proposed method, especially for multimodal PDF problems and multiple-design-point problems.
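The univariate dimension reduction step can be sketched directly: the response is approximated as a sum of one-dimensional cuts through the mean point, and each one-dimensional expectation is evaluated with Gauss-Hermite quadrature, giving the statistical moments that the maximum entropy step would then fit. The example response function, the standard-normal inputs and the number of quadrature points are illustrative assumptions; the normalized quadrature rule proposed in the paper is not reproduced.

```python
import numpy as np

# Illustrative response function with three independent N(0,1) inputs.
def g(x):
    return 3.0 + x[0]**2 + 2.0 * np.sin(x[1]) - 0.5 * x[2]

n_dim, n_quad = 3, 7
mu = np.zeros(n_dim)

# Gauss-Hermite quadrature mapped to the standard normal: E[f(X)] ~ sum(w_q * f(x_q)).
nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
x_q = np.sqrt(2.0) * nodes
w_q = weights / np.sqrt(np.pi)

g0 = g(mu)
means, c2, c3, c4 = [], [], [], []
for i in range(n_dim):
    vals = []
    for xq in x_q:                    # univariate cut: all inputs at their mean except X_i
        x = mu.copy(); x[i] = xq
        vals.append(g(x))
    vals = np.array(vals)
    ei = w_q @ vals                   # mean of the i-th univariate term
    d = vals - ei
    means.append(ei)
    c2.append(w_q @ d**2); c3.append(w_q @ d**3); c4.append(w_q @ d**4)

# Moments of the additive DRM approximation Y ~ g0 + sum_i (g_i(X_i) - g0),
# using the independence of the univariate terms.
mean = g0 + sum(m - g0 for m in means)
var = sum(c2)
skew = sum(c3) / var**1.5
kurt = (sum(c4) + 3.0 * (sum(c2)**2 - sum(v**2 for v in c2))) / var**2
print("mean, variance, skewness, kurtosis:", mean, var, skew, kurt)
```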

19.
An approach to transform continuous data to finite dimensional data is briefly outlined. A model to reduce the dimension of the finite dimensional data is developed for the case when the covariance matrices are not necessarily equal. Necessary and sufficient conditions with respect to the spatial properties of the means and covariance matrices are given so that the linear transformation of data of higher dimensions to lower dimensions does not increase the probabilities of misclassification.

20.
In this paper, we study dimension reduction of the three-dimensional (3D) Gross-Pitaevskii equation (GPE) modeling Bose-Einstein condensation under different limiting interaction and trapping frequency parameter regimes. Convergence rates for the dimension reduction of 3D ground state and dynamics of the GPE in the case of disk-shaped condensation and cigar-shaped condensation are reported based on our asymptotic and numerical results. In addition, the parameter regimes in which the 3D GPE cannot be reduced to lower dimensions are identified.
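For reference, the 3D GPE in its conventional dimensionless form with a harmonic trap, which is reduced to 2D for disk-shaped condensates (tight confinement along z) or to 1D for cigar-shaped condensates (tight transverse confinement), reads roughly as follows; the scaling and notation are the standard ones and may differ in detail from the paper's.

```latex
i\,\partial_t \psi(\mathbf{x},t)
  = \Big[-\tfrac{1}{2}\nabla^2 + V(\mathbf{x}) + \beta\,|\psi(\mathbf{x},t)|^2\Big]\psi(\mathbf{x},t),
\qquad
V(\mathbf{x}) = \tfrac{1}{2}\big(\gamma_x^2 x^2 + \gamma_y^2 y^2 + \gamma_z^2 z^2\big),
```

with the disk-shaped regime corresponding to \gamma_z \gg \gamma_x, \gamma_y and the cigar-shaped regime to \gamma_x, \gamma_y \gg \gamma_z.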
