Similar Articles
 A total of 20 similar articles were retrieved; the search took 15 milliseconds.
1.
A computational framework for scale‐bridging in multi‐scale simulations is presented. The framework enables seamless combination of at‐scale models into highly dynamic hierarchies to build a multi‐scale model. Its centerpiece is formulated as a standalone module capable of fully asynchronous operation. We assess its feasibility and performance for a two‐scale model applied to two challenging test problems from impact physics. We find that the computational cost associated with using the framework may, as expected, become substantial. However, the framework has the ability to effortlessly combine at‐scale models into complex multi‐scale models. The main source of the computational inefficiency of the framework is the poor load balancing of the lower‐scale model evaluation. We demonstrate that the load balancing can be efficiently addressed by recourse to conventional load‐balancing strategies. Copyright © 2016 John Wiley & Sons, Ltd.

2.
This work focuses on providing accurate low‐cost approximations of stochastic finite element simulations in the framework of linear elasticity. In a previous work, an adaptive strategy was introduced as an improved Monte‐Carlo method for multi‐dimensional large stochastic problems. We provide here a complete analysis of the method, including a new enhanced goal‐oriented error estimator and estimates of the gain in CPU (central processing unit) cost. Technical insights into these two topics are presented in detail, and numerical examples show the interest of these new developments. Copyright © 2016 John Wiley & Sons, Ltd.

3.
The feature extraction from electroencephalogram (EEG) signals is widely used for computer‐aided epileptic seizure detection. However, multiple channels of EEG signals and their correlations have not been completely harnessed. In this article, a novel automatic seizure detection approach is proposed by analyzing the spatiotemporal correlation of multi‐channel EEG signals. This approach combines the maximum cross‐correlation, robust principal component analysis, and least‐squares support vector machine to detect the events. Our proposed method delivers higher detection sensitivity, specificity, and accuracy than the state‐of‐the‐art approaches based on the 19‐channel EEG signals of 37 absence epilepsy patients experiencing 57 seizure events.
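As an illustration of the first stage described above, the following Python sketch computes a maximum cross‐correlation feature for every pair of EEG channels; the synthetic epoch, channel count, and normalization are assumptions, and the robust‐PCA and least‐squares SVM stages of the paper are not reproduced here.

```python
import numpy as np

def max_cross_correlation(x, y):
    """Peak of the normalized cross-correlation between two equal-length channels."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    c = np.correlate(x, y, mode="full") / len(x)
    return c.max()

def channel_pair_features(eeg):
    """eeg: array of shape (n_channels, n_samples); returns one feature per channel pair."""
    n = eeg.shape[0]
    return np.array([max_cross_correlation(eeg[i], eeg[j])
                     for i in range(n) for j in range(i + 1, n)])

# Example: features for a hypothetical 2-second, 19-channel epoch sampled at 256 Hz
epoch = np.random.randn(19, 512)
features = channel_pair_features(epoch)
print(features.shape)   # (171,) = 19*18/2 channel pairs
```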

4.
We present an iterative scheme for adaptive smoothing of functional magnetic resonance images. We propose a novel similarity measure to estimate the weights of the smoothing filter, based on the functional similarity of the voxels under the smoothing kernel with the voxel under consideration as well as their similarity with a reference time‐course representing the expected BOLD response. We demonstrate the performance of the proposed method by applying it to preprocess both simulated and real fMRI data. The method improves the functional SNR of the data while preserving the shape of the functionally active region, and its performance is not compromised when structured noise is the dominant noise source. © 2011 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 21, 260–270, 2011
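A minimal sketch of one way such similarity‐weighted smoothing could be realized for a single voxel is given below; the correlation‐product weighting, kernel shape, and clipping of negative correlations are assumptions, not the paper's exact similarity measure.

```python
import numpy as np

def adaptive_smooth_voxel(ts_block, ref):
    """
    ts_block: (k, k, k, T) time courses of the voxels under the smoothing kernel,
              centre voxel at index (k//2, k//2, k//2).
    ref:      (T,) reference time course representing the expected BOLD response.
    Returns the smoothed time course of the centre voxel.
    """
    k = ts_block.shape[0]
    centre = ts_block[k // 2, k // 2, k // 2]

    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    flat = ts_block.reshape(-1, ts_block.shape[-1])
    # weight = similarity to the centre voxel times similarity to the reference
    w = np.array([max(corr(v, centre), 0.0) * max(corr(v, ref), 0.0) for v in flat])
    w = w / (w.sum() + 1e-12)
    return (w[:, None] * flat).sum(axis=0)
```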

5.
An adaptive mesh refinement (AMR) technique is proposed for level set simulations of incompressible multiphase flows. The present AMR technique is implemented for two‐dimensional/three‐dimensional unstructured meshes and extended to multi‐level refinement. Smooth variation of the element size is guaranteed near the interface region with the use of multi‐level refinement. A Courant–Friedrichs–Lewy condition for the zone adaption frequency is newly introduced to obtain a mass‐conservative solution of incompressible multiphase flows. Finite elements around the interface are dynamically refined using the classical element subdivision method. Accordingly, the finite element method is employed to solve the problems governed by the incompressible Navier–Stokes equations, using the level set method for dynamically updated meshes. The accuracy of the adaptive solutions is found to be comparable with that of non‐adaptive solutions only if a similar mesh resolution near the interface is provided. Because of the substantial reduction in the total number of nodes, the adaptive simulations with two‐level refinement used to solve the incompressible Navier–Stokes equations with a free surface are about four times faster than the non‐adaptive ones. Further, the overhead of the present AMR procedure is found to be very small compared with the total CPU time for an adaptive simulation. Copyright © 2016 John Wiley & Sons, Ltd.

6.
7.
The global trend towards performance‐based maintenance contracting has presented new challenges to maintenance service providers, as they are compensated or penalized based on performance outcomes instead of the time and materials consumed during maintenance service. The problem becomes more complex when uncertainties exist in the reliability performance and maintenance activities of technical systems. In this paper, a general framework for managing performance‐based maintenance contracts under risk is proposed. We illustrate our approach with an application to a multi‐echelon, multi‐system spare parts control problem. Several different performance measures are considered, and a probabilistic constrained optimization problem is formulated from the perspective of the service provider. Hybrid simulation/analytic heuristics are proposed to solve the problem based on the monotonic properties of the performance measures. This approach is flexible and can be applied to a wide range of problems with similar properties. A numerical example shows that the probability of violating the performance requirements is high if this risk is overlooked. We also provide guidelines on how to apply this approach in practice. Copyright © 2016 John Wiley & Sons, Ltd.

8.
In this paper, we present a solution framework for high‐order discretizations of conjugate heat transfer problems on non‐body‐conforming meshes. The framework consists of, and leverages, recent developments in discontinuous Galerkin discretization, simplex cut‐cell techniques, and anisotropic output‐based adaptation. With the cut‐cell technique, the mesh generation process is completely decoupled from the interface definitions. In addition, the adaptive scheme combined with the discontinuous Galerkin discretization automatically adjusts the mesh in each sub‐domain and achieves high‐order accuracy in outputs of interest. We demonstrate the solution framework through several multi‐domain conjugate heat transfer problems involving laminar and turbulent flows, curved geometry, and highly coupled heat transfer regions. The combination of these attributes yields nonintuitive coupled interactions between fluid and solid domains, which can be difficult to capture with user‐generated meshes. Copyright © 2016 John Wiley & Sons, Ltd.

9.
To address the removal of ocular (EOG) artifacts from electroencephalogram (EEG) signals, an EOG artifact removal method based on geometric subspace decomposition is proposed. Maximum noise fraction analysis is used to construct the geometric subspace and to decompose the multi-channel EEG signal into a series of components; exploiting the high correlation among EOG-related components, the Spearman rank correlation criterion is used to quantify the degree of correlation and to extract the EOG artifact components from the detail components. The processed components are then projected back into the signal space and the signal is reconstructed, yielding an EEG signal free of ocular artifacts without requiring a recorded EOG channel. To verify the effectiveness of the method, both EEG signals with artificially superimposed EOG artifacts and actually measured EEG signals were studied; aided by the visualization of the energy distribution in brain topographic maps, the results show that the method can effectively denoise EEG signals.
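The following hedged sketch illustrates the component-screening idea, with a plain SVD standing in for the maximum noise fraction decomposition; the frontal-channel proxy for ocular activity and the correlation threshold are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def remove_eog_components(eeg, proxy, rho_thresh=0.7):
    """
    eeg:   (n_channels, n_samples) EEG block.
    proxy: (n_samples,) surrogate for ocular activity (e.g. a frontal channel).
    Components whose Spearman rank correlation with the proxy exceeds
    rho_thresh are zeroed before projecting back to the signal space.
    """
    mean = eeg.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(eeg - mean, full_matrices=False)  # components in rows of Vt
    keep = np.ones(len(s), dtype=bool)
    for i, comp in enumerate(Vt):
        rho, _ = spearmanr(comp, proxy)
        if abs(rho) > rho_thresh:
            keep[i] = False                      # flag as an EOG-like component
    # reconstruct using only the retained components
    return (U[:, keep] * s[keep]) @ Vt[keep] + mean
```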

10.
This article presents a detailed study on the potential and limitations of performing higher‐order multi‐resolution topology optimization with the finite cell method. To circumvent stiffness overestimation in high‐contrast topologies, a length‐scale is applied to the solution using filter methods. The relations between stiffness overestimation, the analysis system, and the applied length‐scale are examined, while a high‐resolution topology is maintained. The computational cost associated with nested topology optimization is reduced significantly compared with the use of first‐order finite elements. This reduction is achieved by exploiting the decoupling of the density and analysis meshes, and by condensing the higher‐order modes out of the stiffness matrix. Copyright © 2016 John Wiley & Sons, Ltd.
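The condensation of higher‐order modes mentioned above can be illustrated with generic static condensation; the sketch below is not the paper's cell‐by‐cell procedure, only the standard linear‐algebra step it relies on, and the partitioning mask is a hypothetical input.

```python
import numpy as np

def condense_higher_order(K, f, keep):
    """
    Static condensation: eliminate the higher-order (internal) DOFs from K u = f.
    K:    (n, n) stiffness matrix.
    f:    (n,)   load vector.
    keep: boolean mask of the retained (e.g. lower-order) DOFs.
    Returns the condensed stiffness matrix and load vector on the retained DOFs.
    """
    drop = ~keep
    Kkk = K[np.ix_(keep, keep)]
    Kkd = K[np.ix_(keep, drop)]
    Kdk = K[np.ix_(drop, keep)]
    Kdd_inv = np.linalg.inv(K[np.ix_(drop, drop)])
    K_c = Kkk - Kkd @ Kdd_inv @ Kdk          # Schur complement on the kept DOFs
    f_c = f[keep] - Kkd @ Kdd_inv @ f[drop]
    return K_c, f_c
```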

11.
Highly efficient human skin systems transmit fast adaptive (FA) and slow adaptive (SA) pulses, selectively or in combination, to the brain for a variety of external stimuli. The integrated analysis of these signals determines how humans perceive external physical stimuli. Here, a self‐powered mechanoreceptor sensor based on an artificial ion‐channel system combined with a piezoelectric film is presented, which can simultaneously implement FA and SA pulses like human skin. This device detects stimuli with high sensitivity and over a broad frequency band without external power. In a feasibility study, various stimuli are measured or detected. Vital signs such as the heart rate and ballistocardiogram can be measured simultaneously in real time. Also, a variety of stimuli such as mechanical stress, surface roughness, and contact by a moving object can be distinguished and detected. This opens new scientific avenues toward realizing a somatic cutaneous sensor comparable to real skin. Moreover, this new sensing scheme, inspired by natural sensing structures, is able to mimic the five senses of living creatures.

12.
Reliability evaluation based on degradation data has received significant attention in recent years. However, existing works often assume that the degradation evolution over time is governed by a single stochastic process, which may not be realistic if change points exist. For cases of degradation with change points, this paper attempts to capture the degradation process with a multi‐phase degradation model and to evaluate the real‐time reliability of the product being monitored. Once new degradation information becomes available, the evaluation results are adaptively updated through the Bayesian method. In particular, for a two‐stage degradation process of liquid coupling devices (LCDs), a model termed the change‐point gamma–Wiener process is developed, after which the issues of real‐time reliability evaluation and parameter estimation are addressed in detail. Finally, the proposed method is illustrated by a case study of LCDs, and the corresponding results indicate that trustworthy evaluation results depend on the fitting accuracy in cases of a multi‐phase degradation process. Copyright © 2013 John Wiley & Sons, Ltd.
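A minimal Monte-Carlo sketch of a two-phase (gamma-then-Wiener) degradation model and the resulting reliability curve is shown below; all parameter values and the single fixed change point are illustrative assumptions, and the paper's Bayesian updating step is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(n_paths, t_grid, tau, gamma_rate=1.0, gamma_scale=0.05,
                   wiener_drift=0.08, wiener_sigma=0.03):
    """
    Two-phase degradation paths: gamma-process increments before the change
    point tau, Wiener (drifted Brownian) increments after it.
    Returns an array of shape (n_paths, len(t_grid)).
    """
    dt = np.diff(t_grid, prepend=t_grid[0])
    X = np.zeros((n_paths, len(t_grid)))
    for k in range(1, len(t_grid)):
        if t_grid[k] <= tau:
            inc = rng.gamma(gamma_rate * dt[k], gamma_scale, size=n_paths)
        else:
            inc = wiener_drift * dt[k] + wiener_sigma * np.sqrt(dt[k]) * rng.standard_normal(n_paths)
        X[:, k] = X[:, k - 1] + inc
    return X

t = np.linspace(0.0, 100.0, 201)
paths = simulate_paths(5000, t, tau=40.0)
threshold = 5.0
reliability = (paths < threshold).mean(axis=0)   # R(t) = P(degradation below threshold)
print(reliability[-1])
```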

13.
It is a significant challenge to accurately reconstruct medical computed tomography (CT) images with important details and features. Reconstructed images always suffer from noise and artifact pollution because the acquired projection data may be insufficient or undersampled. In practice, some “isolated noise points” (similar to impulse noise) always exist in low‐dose CT projection measurements. Statistical iterative reconstruction (SIR) methods have shown greater potential than the conventional filtered back‐projection (FBP) reconstruction algorithm to significantly reduce quantum noise while maintaining the image quality of the reconstructions. Although typical total variation‐based SIR algorithms can obtain reconstructed images of relatively good quality, noticeable patchy artifacts are still unavoidable. To address impulse‐noise and patchy‐artifact pollution, this work proposes, for the first time, a joint regularization constrained SIR algorithm for sparse‐view CT image reconstruction, named “SIR‐JR” for simplicity. The new joint regularization consists of two components: total generalized variation, which can process images with many directional features and yields high‐order smoothness, and the neighborhood median prior, which is a powerful filtering tool for impulse noise. Subsequently, a new alternating iterative algorithm is utilized to solve the objective function. Experiments on different head phantoms show that the obtained reconstruction images are of superior quality and that the presented method is feasible and effective.

14.
Quantitative parameter mapping in MRI is typically performed as a two‐step procedure in which serial imaging is followed by pixelwise model fitting. In contrast, model‐based reconstructions directly reconstruct parameter maps from raw data without explicit image reconstruction. Here, we propose a method that determines T1 maps directly from multi‐channel raw data obtained by a single‐shot inversion‐recovery radial FLASH acquisition with a Golden Angle view order. Joint reconstruction of T1, spin‐density, and flip‐angle maps is formulated as a nonlinear inverse problem and solved by the iteratively regularized Gauss–Newton method. Coil sensitivity profiles are determined from the same data in a preparatory step of the reconstruction. Validations included numerical simulations, in vitro MRI studies of an experimental T1 phantom, and in vivo studies of the brain and abdomen of healthy subjects at a field strength of 3 T. The results obtained for a numerical and an experimental phantom demonstrate excellent accuracy and precision of model‐based T1 mapping. In vivo studies allowed for high‐resolution T1 mapping of the human brain (0.5–0.75 mm in‐plane, 4 mm section thickness) and liver (1.0 mm in‐plane, 5 mm section thickness) within 3.6–5 s. In conclusion, the proposed method for model‐based T1 mapping may become an alternative to two‐step techniques, which rely on model fitting after serial image reconstruction. More extensive clinical trials now require accelerated computation and an online implementation of the algorithm. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 254–263, 2016
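For contrast with the model-based approach, the conventional two-step baseline mentioned in the first sentence can be sketched as a pixelwise three-parameter inversion-recovery fit followed by the standard Look-Locker correction; the model form, initial guess, and synthetic data below are illustrative assumptions and are not the authors' reconstruction.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_model(ti, A, B, T1_star):
    """Three-parameter inversion-recovery signal model."""
    return A - B * np.exp(-ti / T1_star)

def fit_t1_pixel(ti, signal):
    """
    Conventional two-step approach: fit the IR model to one pixel's signal
    across inversion times, then apply the Look-Locker correction.
    """
    p0 = (signal.max(), 2 * signal.max(), 1.0)
    (A, B, T1_star), _ = curve_fit(ir_model, ti, signal, p0=p0, maxfev=5000)
    return T1_star * (B / A - 1.0)           # T1 = T1* (B/A - 1)

# Synthetic example: signed signal at 20 inversion times (seconds, arbitrary units)
ti = np.linspace(0.05, 3.0, 20)
true_A, true_B, true_T1 = 1.0, 1.9, 1.2
t1_star = true_T1 / (true_B / true_A - 1.0)
signal = ir_model(ti, true_A, true_B, t1_star) + 0.01 * np.random.randn(ti.size)
print(fit_t1_pixel(ti, signal))              # ≈ 1.2
```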

15.
The time‐parallel framework for constructing parallel implicit time‐integration algorithms (PITA) is revisited in the specific context of linear structural dynamics and near‐real‐time computing. The concepts of decomposing the time domain into time slices whose boundaries define a coarse time grid, iteratively generating seed values of the solution on this coarse time grid, and using them to time‐advance the solution in each time slice with embarrassingly parallel time integrations are maintained. However, the Newton‐based corrections of the seed values, which so far have been computed in PITA and related approaches on the coarse time grid, are eliminated to avoid artificial resonance and numerical instability. Instead, the jumps of the solution on the coarse time grid are addressed by a projector which makes their propagation on the fine time grid computationally feasible while avoiding artificial resonance and numerical instability. The new PITA framework is demonstrated for a complex structural dynamics problem from the aircraft industry. Its potential for near‐real‐time computing is also highlighted with the solution of a relatively small‐scale problem on a Linux cluster system. Copyright © 2006 John Wiley & Sons, Ltd.
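A generic parareal-type skeleton conveys the coarse-grid seed / fine-propagation structure described above; the sketch below uses an implicit-Euler propagator on a small linear system and the plain parareal correction, not the projector-based jump treatment proposed in the paper, and all model parameters are assumptions.

```python
import numpy as np

def propagate(u0, A, dt, n_steps):
    """Implicit-Euler propagator for du/dt = A u over n_steps of size dt."""
    M = np.linalg.inv(np.eye(len(u0)) - dt * A)
    u = u0.copy()
    for _ in range(n_steps):
        u = M @ u
    return u

def parareal(u0, A, T, n_slices, n_fine, n_iter):
    """Generic parareal/PITA-style iteration on a coarse time grid."""
    dT = T / n_slices
    U = np.tile(u0, (n_slices + 1, 1))
    for n in range(n_slices):                 # initial seeds from the coarse propagator
        U[n + 1] = propagate(U[n], A, dT, 1)
    for _ in range(n_iter):
        G_old = np.array([propagate(U[n], A, dT, 1) for n in range(n_slices)])
        F_old = np.array([propagate(U[n], A, dT / n_fine, n_fine) for n in range(n_slices)])
        for n in range(n_slices):             # sequential coarse correction of the seeds
            G_new = propagate(U[n], A, dT, 1)
            U[n + 1] = G_new + F_old[n] - G_old[n]
    return U

# Lightly damped oscillator written as a first-order system
A = np.array([[0.0, 1.0], [-4.0, -0.1]])
U = parareal(np.array([1.0, 0.0]), A, T=10.0, n_slices=20, n_fine=50, n_iter=5)
print(U[-1])
```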

16.
The objective of the present work is to propose a new adaptive wavelet‐Galerkin method based on the lowest‐order hat interpolation wavelets. The specific application of the present method is the one‐dimensional analysis of thin‐walled box beam problems exhibiting rapidly varying local end effects. Higher‐order interpolation wavelets have been used in the wavelet‐collocation setting, but the lowest‐order hat interpolation is applied here for the first time, and a hat interpolation wavelet‐based Galerkin method is newly formulated. Unlike existing orthogonal or biorthogonal wavelet‐based Galerkin methods, the present method does not require special treatment in dealing with general boundary conditions. Furthermore, the present method works directly with nodal values and does not require special formulas for the evaluation of system matrices. Though interpolation wavelets do not have any vanishing moments, an adaptive scheme based on multi‐resolution approximations is possible, and a preconditioned conjugate gradient method can be used to enhance numerical efficiency. Copyright © 2001 John Wiley & Sons, Ltd.
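Because the method works directly with nodal values, one level of the lowest-order (hat) interpolation wavelet transform can be sketched as below; this is only the standard interpolatory decomposition on a dyadic grid, not the paper's Galerkin formulation, and the sample function is hypothetical.

```python
import numpy as np

def hat_interpolation_decompose(values):
    """
    One level of the lowest-order (hat/linear) interpolation wavelet transform
    acting directly on nodal values on a dyadic grid of odd length 2**J + 1.
    Returns the coarse nodal values and the detail (interpolation-error) coefficients.
    """
    coarse = values[::2]
    # detail = value at an odd node minus its linear interpolation from even neighbours
    detail = values[1::2] - 0.5 * (values[:-1:2] + values[2::2])
    return coarse, detail

# Example on 2**4 + 1 = 17 nodal values of a smooth function
x = np.linspace(0.0, 1.0, 17)
coarse, detail = hat_interpolation_decompose(np.sin(2 * np.pi * x))
print(coarse.shape, detail.shape)   # (9,) (8,); small details could be dropped adaptively
```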

17.
This paper discusses the contribution of mesh adaptation to high‐order convergence of unsteady multi‐fluid flow simulations on complex geometries. The mesh adaptation relies on a metric‐based method controlling the Lp norm of the interpolation error and on a mesh generation algorithm based on an anisotropic Delaunay kernel. The mesh‐adaptive time advancing is achieved thanks to a transient fixed‐point algorithm that predicts the solution evolution, coupled with a metric intersection in the time procedure. In the time direction, we enforce the equidistribution of the error, i.e. its minimization in the L∞ norm. This adaptive approach is applied to an incompressible Navier–Stokes model combined with a level set formulation discretized on triangular and tetrahedral meshes. Applications to interface flows under gravity are performed to evaluate the performance of this method for this class of discontinuous flows. Copyright © 2010 John Wiley & Sons, Ltd.

18.
This study presents a gradient‐based shape optimization over a fixed mesh using a non‐uniform rational B‐spline (NURBS)‐based interface‐enriched generalized finite element method, applicable to multi‐material structures. In the proposed method, NURBS are used to parameterize the design geometry precisely and compactly with a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of the shape functions and their spatial derivatives. Verification and illustrative problems are solved to demonstrate the precision and capability of the method. Copyright © 2016 John Wiley & Sons, Ltd.

19.
In this paper we propose an efficient methodology for eliminating physiological artifacts from brain waves (BW), commonly known as the electroencephalogram (EEG) signal. In a clinical environment, several artifacts contaminate the actual BW component during acquisition, leading to inaccurate and ambiguous diagnosis. Because the statistical nature of the EEG signal is highly non-stationary, adaptive filtering is the most promising method for artifact elimination. In clinical conditions, conventional adaptive techniques require a large number of computational operations and lead to overlapping of data samples and instability of the algorithm used, which delays diagnosis and decision making. To overcome this problem, we propose to set a threshold value to diminish the problem of round-off error. The resulting adaptive algorithm based on this strategy is the non-linear least mean square (NL2MS) algorithm. To further improve its filtering capability, we perform data normalization; using this algorithm, several hybrid versions are developed to improve filtering and reduce computational operations. With this method, a new signal enhancement unit (SEU) is realized, and the performance of the various hybrid versions of the algorithms is examined using real EEG signals recorded from subjects. The ability of the proposed schemes is measured in terms of convergence, enhancement, and the number of multiplications required. Among the various SEUs, the MCN2L2MS algorithm achieves 14.6734, 12.8732, 10.9257, and 15.7790 dB during the removal of RA, EMG, CSA, and EBA artifact components, respectively, with only two multiplications. Hence, this algorithm seems to be the better candidate for artifact elimination.
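As a point of reference for the adaptive-filtering setup described above, a standard normalized LMS artifact canceller with a small threshold in the normalization term is sketched below; it is not the paper's NL2MS algorithm or its hybrid variants, and the filter order, step size, and threshold value are assumptions.

```python
import numpy as np

def nlms_cancel(primary, reference, order=8, mu=0.5, eps=1e-3):
    """
    Normalized LMS artifact canceller.
    primary:   contaminated EEG samples (1-D array).
    reference: artifact reference samples (same length).
    eps is a small threshold added to the input power so the step-size
    normalization never divides by a vanishingly small number.
    Returns the cleaned signal (the error of the adaptive filter).
    """
    w = np.zeros(order)
    cleaned = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]          # most recent samples first
        y = w @ x                                 # estimated artifact
        e = primary[n] - y                        # cleaned sample
        w = w + (mu / (eps + x @ x)) * e * x      # normalized weight update
        cleaned[n] = e
    return cleaned
```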

20.
Image thresholding is critical to computer vision systems designed to detect very small numbers of contaminant particles from analysis of images acquired by in‐line process monitoring. The objective of this work was to obtain a thresholding method that would permit in‐line, “real‐time” determination of both the number of particles in an image and their size. An additional requirement was that it automatically adapt to inevitable variations in image quality. A new global image thresholding method, the MaxMin method (“MaxMin”), was developed. MaxMin notes the size of the smallest detected particle in an image as the threshold value is progressively changed from black to white. The selected threshold value is the one providing the largest such size. MaxMin was tested on thousands of images; it readily adapted to images with different background noise levels and provided particle counts as accurate as those of a human observer in less than three seconds per image. The error in particle size measurement was a function of the particle size and the image resolution. It was about 3% for 50 μm particles, using a CCD camera with a 2× lens calibrated for each pixel to represent ~5 μm². The error was significantly higher for smaller particles when the same system resolution was used. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 9–14, 2006
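The MaxMin criterion as described lends itself to a direct sketch: sweep candidate thresholds, measure the smallest detected particle at each, and keep the threshold that maximizes that smallest size. The dark-particles-on-bright-background convention and the 8-bit grey-level sweep below are assumptions.

```python
import numpy as np
from scipy import ndimage

def maxmin_threshold(img, candidates=range(1, 255)):
    """
    Sweep candidate grey-level thresholds and, for each, record the size of the
    smallest detected particle; return the threshold for which that smallest
    size is largest (the MaxMin criterion as described in the abstract).
    Assumes dark particles on a bright background in an 8-bit image.
    """
    best_t, best_min_size = None, -1.0
    for t in candidates:
        mask = img < t                      # pixels darker than t count as "particle"
        labels, n = ndimage.label(mask)
        if n == 0:
            continue
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        if sizes.min() > best_min_size:
            best_min_size, best_t = sizes.min(), t
    return best_t
```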
