Similar Documents
20 similar documents found (search time: 62 ms)
1.
The reproducibility of functional magnetic resonance imaging (fMRI) is important for fMRI‐based neuroscience research and clinical applications. Previous studies show considerable variation in the amplitude and spatial extent of fMRI activation across repeated sessions on individual subjects, even under identical experimental paradigms and imaging conditions. Most existing fMRI reproducibility studies have been limited by their duration and data analysis techniques. In particular, the assessment of reproducibility is complicated by the fact that fMRI results may depend on the data analysis techniques used. In this work, long‐term fMRI reproducibility was investigated with a focus on the data analysis methods. Two spatial smoothing techniques, a wavelet‐domain Bayesian method and Gaussian smoothing, were evaluated in terms of their effects on long‐term reproducibility. A multivariate support vector machine (SVM)‐based method was used to identify active voxels and compared to a widely used general linear model (GLM)‐based method at the group level. The reproducibility study was performed using multisession fMRI data acquired from eight healthy adults over a period of 1.5 years. Three regions‐of‐interest (ROIs) related to a motor task were defined, within which the long‐term reproducibility was examined. Experimental results indicate that different spatial smoothing techniques may lead to different reproducibility measures, and that wavelet‐based spatial smoothing combined with SVM‐based activation detection is a good choice for reproducibility studies. On the basis of the ROIs and multiple numerical criteria, we observed moderate to substantial within‐subject long‐term reproducibility. Reasonable long‐term reproducibility was also observed in the inter‐subject study. Short‐term reproducibility was found to be generally higher than long‐term reproducibility. Furthermore, the results indicate that brain regions with a high contrast‐to‐noise ratio do not necessarily exhibit high reproducibility. These findings may provide supportive information for the optimal design and implementation of fMRI studies and for data interpretation.
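Reproducibility across sessions is often quantified by the spatial overlap of suprathreshold activation maps. As an illustrative sketch, the Dice coefficient is one common overlap criterion; it is an assumption here, not necessarily among the specific numerical criteria used in this study:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary
    activation maps from different sessions."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both maps empty: treat as perfectly reproducible
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A Dice value of 1 indicates identical activation maps across sessions; values near 0 indicate little session‐to‐session overlap.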

2.
In this work, the extended finite element method (XFEM) is for the first time coupled with a face‐based strain‐smoothing technique to solve three‐dimensional fracture problems. The proposed method, called the face‐based smoothed XFEM here, is expected to combine the advantages of both XFEM and the strain‐smoothing technique. In XFEM, arbitrary crack geometry can be modeled and crack advance can be simulated without remeshing. The strain‐smoothing technique can eliminate the integration of the singular term over the volume around the crack front, thanks to the transformation of volume integration into area integration. A special smoothing scheme is implemented in the crack‐front smoothing domain. Three examples are presented to test the accuracy, efficiency, and convergence rate of the face‐based smoothed XFEM. The results make clear that the smoothing technique can improve the performance of XFEM for three‐dimensional fracture problems. Copyright © 2015 John Wiley & Sons, Ltd.

3.
In this paper, a new 4‐node hybrid stress element is proposed using a node‐based smoothing technique on a tetrahedral mesh. The conditions required for the hybrid stress field are summarized; in particular, the field should be continuous for better performance than the constant‐strain tetrahedral element. Nodal stress is approximated by the node‐based smoothing technique, and the stress field is interpolated with standard shape functions. This stress field is linear within each element, continuous across elements, and expressed in terms of nodal displacements with no additional variables. The element stiffness matrix is calculated using the Hellinger‐Reissner functional, which guarantees that the strain field derived from the displacement field equals that derived from the stress field in a weak sense. The performance of the proposed element is verified through several numerical examples.

4.
A three‐dimensional microstructure‐based finite element framework is presented for modeling the mechanical response of rubber composites at the microscopic level. This framework introduces a novel finite element formulation, the meshfree‐enriched FEM, to overcome the volumetric locking and pressure oscillation problems that normally arise in the numerical simulation of rubber composites using conventional displacement‐based FEM. The three‐dimensional meshfree‐enriched FEM is composed of five‐noded tetrahedral elements with a volume‐weighted smoothing of the deformation gradient between neighboring elements. The L2‐orthogonality property of the smoothing operator enables the employed Hu–Washizu–de Veubeke functional to be degenerated to an assumed strain method, which leads to a displacement‐based formulation that is easily incorporated with the periodic boundary conditions imposed on the unit cell. Two numerical examples are analyzed to demonstrate the effectiveness of the proposed approach. Copyright © 2012 John Wiley & Sons, Ltd.

5.
A stabilized conforming (SC) nodal integration, which meets the integration constraint in the Galerkin mesh‐free approximation, is generalized for non‐linear problems. Using a Lagrangian discretization, the integration constraints for SC nodal integration are imposed in the undeformed configuration. This is accomplished by introducing a Lagrangian strain smoothing to the deformation gradient, and by performing a nodal integration in the undeformed configuration. The proposed method is independent of the path dependency of the materials. An assumed strain method is employed to formulate the discrete equilibrium equations, and the smoothed deformation gradient serves as the stabilization mechanism in the nodally integrated variational equation. Eigenvalue analysis demonstrated that the proposed strain smoothing provides a stabilization of the nodally integrated discrete equations. By employing Lagrangian shape functions, the computation of the smoothed gradient matrix for the deformation gradient is necessary only in the initial stage, and it can be stored and reused in the subsequent load steps. A significant gain in computational efficiency is achieved, as well as enhanced accuracy, in comparison with the mesh‐free solution using Gauss integration. The performance of the proposed method is shown to be quite robust in dealing with non‐uniform discretization. Copyright © 2002 John Wiley & Sons, Ltd.

6.
Virtual view synthesis is one of the most important techniques for realizing free viewpoint television and three‐dimensional (3D) video. In this article, we propose a view synthesis method to generate high‐quality intermediate views in such applications, along with new evaluation metrics, named spatial peak signal‐to‐noise ratio and temporal peak signal‐to‐noise ratio, to measure spatial and temporal consistency, respectively. The proposed view synthesis method consists of five major steps: depth preprocessing, depth‐based 3D warping, depth‐based histogram matching, base plus assistant view blending, and depth‐based hole‐filling. The efficiency of the proposed view synthesis method has been verified by evaluating the quality of synthesized images with various metrics such as peak signal‐to‐noise ratio, structural similarity, a discrete cosine transform (DCT)‐based video quality metric, and the newly proposed metrics. We have also confirmed that the synthesized images are objectively and subjectively natural. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 378–390, 2010
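The conventional PSNR metric on which the proposed spatial and temporal variants build can be sketched as follows; the function name and the 8‐bit peak default are illustrative assumptions, not taken from the article:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    synthesized image; `peak` is the maximum possible pixel value."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

The spatial and temporal variants proposed in the article would apply the same formula to spatial neighborhoods and to frame‐to‐frame differences, respectively.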

7.
The simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) recording technique has recently received considerable attention and has been used in many studies on cognition and neurological disease. EEG‐fMRI simultaneous recording has the advantage of enabling the monitoring of brain activity with both high temporal resolution and high spatial resolution in real time. The successful removal of the ballistocardiographic (BCG) artifact from the EEG signal recorded during an MRI is an important prerequisite for real‐time EEG‐fMRI joint analysis. We have developed a new framework dedicated to BCG artifact removal in real time. This framework includes a new real‐time R‐peak detection method combining a k‐Teager energy operator, a thresholding detector, and a correlation detector, as well as a real‐time BCG artifact reduction procedure combining average artifact template subtraction and a new multi‐channel referenced adaptive noise cancelling method. Our results demonstrate that this new framework is efficient in the real‐time removal of the BCG artifact. The multi‐channel adaptive noise cancellation (mANC) method performs better than the traditional ANC method in eliminating the BCG residual artifact. In addition, the computational speed of the mANC method fulfills the requirements of real‐time EEG‐fMRI analysis. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 209–215, 2016
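The Teager energy operator at the heart of the R‐peak detector sharpens impulsive events such as QRS complexes. A sketch of the lag‐k form ψ_k[x](n) = x(n)² − x(n−k)·x(n+k), under the assumption that this generalization is what "k‐Teager" refers to (the paper's exact formulation may differ):

```python
import numpy as np

def k_teager_energy(x, k=1):
    """Lag-k Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-k]*x[n+k].
    Impulsive events (e.g., R-peaks) produce sharp maxima in psi;
    the k boundary samples on each side are left at zero."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[k:-k] = x[k:-k] ** 2 - x[:-2 * k] * x[2 * k:]
    return psi
```

A thresholding detector of the kind described would then flag samples where this energy exceeds an adaptive threshold.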

8.
The quality of finite element meshes is one of the key factors affecting the accuracy and reliability of numerical simulation results for many science and engineering problems. To address the poor quality of the surface elements of meshes generated by the grid‐based method, this paper studied mesh quality improvement methods, including node position smoothing and topological optimization. A curvature‐based Laplacian scheme was used for smoothing nodes on the C‐edges, combining the normal component with the tangential component of the Laplacian operator at the curved boundary. A projection‐based Laplacian algorithm for smoothing the remaining boundary nodes was established, and the deviation of newly smoothed nodes from the actual surface of the solid model was corrected. A node‐ and area‐weighted combination method was proposed for smoothing interior nodes. Five element‐inserting modes, three element‐collapsing modes and three mixed modes for topological optimization were newly established, and rules for the harmonious application and conformity of each mode, especially the mixed modes, were provided. Finally, several examples were given to demonstrate the practicability and validity of the mesh quality improvement methods presented in this paper. Copyright © 2011 John Wiley & Sons, Ltd.
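Basic Laplacian node smoothing, which the curvature‐ and projection‐based schemes above refine, moves each free node toward the centroid of its neighbors. A minimal sketch (the data layout and relaxation factor `lam` are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def laplacian_smooth(points, neighbors, lam=0.5, iters=10):
    """Move each node listed in `neighbors` a fraction `lam` of the way
    toward the centroid of its neighboring nodes; all other nodes
    (e.g., boundary nodes) stay fixed."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iters):
        new = pts.copy()
        for i, nbrs in neighbors.items():
            centroid = pts[list(nbrs)].mean(axis=0)
            new[i] = pts[i] + lam * (centroid - pts[i])
        pts = new  # update all free nodes simultaneously (Jacobi-style)
    return pts
```

The curvature‐based and projection‐based variants in the paper additionally split this displacement into normal and tangential components, or project the smoothed node back onto the model surface.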

9.
Described herein are the advantages of using sub‐sinograms for single photon emission computed tomography image reconstruction. A sub‐sinogram is a sinogram acquired with an entire data acquisition protocol, but in a fraction of the total acquisition time. A total‐sinogram is the summation of all sub‐sinograms. Images can be reconstructed from the total‐sinogram, or reconstructed from sub‐sinograms and then summed to produce the final image. For a linear reconstruction method such as the filtered backprojection algorithm, there is no advantage to using sub‐sinograms. However, for nonlinear methods such as the maximum likelihood (ML) expectation maximization algorithm, the use of sub‐sinograms can produce better results. The ML estimator is a random variable, and one ML reconstruction is one realization of that random variable; the ML solution is better approximated by the mean value of the estimator, and sub‐sinograms provide many realizations of it. We show that the use of sub‐sinograms can produce better estimates of the ML solution than the total‐sinogram and can also reduce the statistical noise within iteratively reconstructed images. © 2011 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 21, 247–252, 2011.

10.
In the edge‐based smoothed finite element method (ES‐FEM), one needs only the assumed displacement values (not the derivatives) on the boundary of the edge‐based smoothing domains to compute the stiffness matrix of the system. Adopting this important feature, a five‐node crack‐tip element is employed in this paper to produce a proper stress singularity near the crack tip based on a basic mesh of linear triangular elements that can be generated automatically for problems with complicated geometries. The singular ES‐FEM is then formulated and used to simulate the crack propagation in various settings, using a largely coarse mesh with a few layers of fine mesh near the crack tip. The results demonstrate that the singular ES‐FEM is much more accurate than X‐FEM and the existing FEM. Moreover, the excellent agreement between numerical results and the reference observations shows that the singular ES‐FEM offers an efficient and high‐quality solution for crack propagation problems. Copyright © 2011 John Wiley & Sons, Ltd.

11.
To economically and efficiently lower the venting noise, the development of a high‐quality muffler with compact volume has become crucial in the modern industrial field. The research work of shape optimization of straight silencers in conjunction with plug/non‐plug perforated ducts which may noticeably increase the acoustical performance is rarely addressed; therefore, the main purpose of this paper is not only to analyze the sound transmission loss (STL) of a one‐chamber plug/non‐plug perforated muffler but also to optimize the best design shape under a limited space. In this paper, on the basis of plane wave theory, the four‐pole system matrix in evaluating the acoustic performance is derived by using the decoupled numerical method. Moreover, a simulated annealing (SA) algorithm searching for the global optimum by imitating the softening process of metal has been adopted during the muffler's optimization. To assure SA's correctness, the STL's maximization of one‐chamber perforated plug mufflers at a targeted frequency of 500 Hz is exemplified first. Furthermore, a numerical case in dealing with a broadband noise emitted from a fan by using one‐chamber plug/non‐plug mufflers has been introduced and fully discussed. To achieve a better optimization in SA, various SA parameter sets of cooling rate and iteration parameter values were used. Before the SA operation can be carried out, the accuracy check of the mathematical models with respect to plug/non‐plug perforated mufflers has to be supported by experimental data. The optimal result in eliminating broadband noise reveals that the muffler with a plug acoustical mechanism has a better noise reduction than that of a non‐plug muffler. Consequently, the approach used for the optimal design of the noise elimination proposed in this study is certainly easy, economical, and quite effective. Copyright © 2007 John Wiley & Sons, Ltd.
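A generic simulated annealing loop of the kind adapted here can be sketched as follows; maximizing STL corresponds to minimizing its negative. The objective, step size, and geometric cooling schedule below are illustrative placeholders, not the muffler model or the parameter sets studied in the paper:

```python
import math
import random

def simulated_anneal(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Minimize f by simulated annealing: always accept improving moves,
    accept worse moves with probability exp(-delta/t), and cool the
    temperature t geometrically (imitating the softening of metal)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random perturbation
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx           # track best-so-far
        t *= cooling
    return best, fbest
```

In the paper the design variable would be a vector of muffler shape parameters and the objective the (negated) STL from the four‐pole matrix model; the cooling rate and iteration count are exactly the SA parameters the authors vary.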

12.
We present a robust method for generating high‐order nodal tetrahedral curved meshes. The approach consists of modifying an initial linear mesh by first, introducing high‐order nodes, second, displacing the boundary nodes to ensure that they are on the computer‐aided design surface, and third, smoothing and untangling the mesh obtained after the displacement of the boundary nodes to produce a valid curved high‐order mesh. The smoothing algorithm is based on the optimization of a regularized measure of the mesh distortion relative to the original linear mesh. This means that whenever possible, the resulting mesh preserves the geometrical features of the initial linear mesh such as shape, stretching, and size. We present several examples to illustrate the performance of the proposed algorithm. Furthermore, the examples show that the implementation of the optimization problem is robust and capable of handling situations in which the mesh before optimization contains a large number of invalid elements. We consider cases with polynomial approximations up to degree ten, large deformations of the curved boundaries, concave boundaries, and highly stretched boundary layer elements. The meshes obtained are suitable for high‐order finite element analyses. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Voxel‐based micro‐finite‐element (μFE) models are used extensively in bone mechanics research. A major disadvantage of voxel‐based μFE models is that voxel surface jaggedness causes distortion of contact‐induced stresses. Past efforts to resolve this problem have been only partially successful, i.e., mesh smoothing failed to preserve uniformity of the stiffness matrix, resulting in (excessively) large solution times, whereas reducing contact to a bonded interface introduced spurious tensile stresses at the contact surface. This paper introduces a novel "smooth" contact formulation that defines gap distances based on an artificial smooth surface representation while using the conventional penalty contact framework. Detailed analyses of a sphere under compression demonstrated that the smooth formulation predicts contact‐induced stresses more accurately than the bonded contact formulation. When applied to a realistic bone contact problem, errors in the smooth contact result were under 2%, whereas errors in the bonded contact result were up to 42.2%. We conclude that the novel smooth contact formulation presents a memory‐efficient method for contact problems in voxel‐based μFE models. It presents the first method that allows modeling finite slip in large‐scale voxel meshes common to high‐resolution image‐based models of bone while keeping the benefits of a fast and efficient voxel‐based solution scheme.

14.
Ensemble methods are proposed as a means to extend Adaptive One‐Factor‐at‐a‐Time (aOFAT) experimentation. The proposed method executes multiple aOFAT experiments on the same system with minor differences in experimental setup, such as 'starting points'. Experimental conclusions are arrived at by aggregating the multiple, individual aOFATs. A comparison is made to test the performance of the new method against that of a traditional form of experimentation, namely a single fractional factorial design which is equally resource intensive. The comparisons between the two experimental algorithms are conducted using a hierarchical probability meta‐model and an illustrative case study. The case is a wet clutch system with the goal of minimizing drag torque. In this study, the proposed procedure was consistently superior in performance to fractional factorial arrays across various experimental settings. At best, the proposed algorithm provides an expected improvement that is 15% higher than the traditional approach; at worst, the two methods are equally effective; and on average the improvement is about 10% higher with the new method. These findings suggest that running multiple adaptive experiments in parallel can be an effective way to improve the quality and performance of engineering systems, and the method also provides a reasonable aggregation procedure by which to bring together the results of the many separate experiments. Copyright © 2011 John Wiley & Sons, Ltd.
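The aOFAT procedure and its ensemble aggregation can be sketched for binary factors as follows; the majority‐vote aggregation and the toy objective are illustrative assumptions, not the paper's hierarchical meta‐model or wet clutch case:

```python
def aofat(f, start, levels=(0, 1)):
    """Adaptive one-factor-at-a-time: visit each factor once, keeping a
    level change only when it improves the (maximized) response f."""
    x = list(start)
    best = f(x)
    for i in range(len(x)):
        for lv in levels:
            if lv == x[i]:
                continue
            trial = x.copy()
            trial[i] = lv
            if f(trial) > best:
                x, best = trial, f(trial)  # keep the improving change
    return x, best

def ensemble_aofat(f, starts):
    """Run aOFAT from several starting points and aggregate the
    recommended settings by per-factor majority vote."""
    results = [aofat(f, s)[0] for s in starts]
    n = len(results)
    return [1 if 2 * sum(r[i] for r in results) > n else 0
            for i in range(len(starts[0]))]
```

Varying the starting point across runs is exactly the kind of "minor difference in experimental setup" the abstract describes; the vote then pools the separate experiments into one recommended setting.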

15.
A new implementation of the conjugate gradient method is presented that economically overcomes the problem of severe numerical noise superimposed on an otherwise smooth underlying objective function of a constrained optimization problem. This is done by the use of a novel gradient‐only line search technique, which requires only two gradient vector evaluations per search direction and no explicit function evaluations. The use of this line search technique is not restricted to the conjugate gradient method but may be applied to any line search descent method. This method, in which the gradients may be computed by central finite differences with relatively large perturbations, allows for the effective smoothing out of any numerical noise present in the objective function. This new implementation of the conjugate gradient method, referred to as the ETOPC algorithm, is tested using a large number of well‐known test problems. For initial tests with no noise introduced in the objective functions, and with high accuracy requirements set, it is found that the proposed new conjugate gradient implementation is as robust and reliable as traditional first‐order penalty function methods. With the introduction of severe relative random noise in the objective function, the results are surprisingly good, with accuracies obtained that are more than sufficient compared to that required for engineering design optimization problems with similar noise. Copyright © 2004 John Wiley & Sons, Ltd.
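The core idea of a gradient‐only line search, locating the step at which the directional derivative changes sign without ever evaluating the objective, can be sketched with a two‐point secant estimate. This is a simplification with illustrative defaults, not the ETOPC line search itself:

```python
import numpy as np

def gradient_only_line_search(grad, x, d, a1=0.0, a2=1.0):
    """Estimate the step length a at which the directional derivative
    grad(x + a*d) . d vanishes, using two gradient evaluations and a
    secant (linear) interpolation -- no function values are needed."""
    s1 = np.dot(grad(x + a1 * d), d)  # slope along d at trial step a1
    s2 = np.dot(grad(x + a2 * d), d)  # slope along d at trial step a2
    if s2 == s1:
        return a2  # flat slope estimate: fall back to the second trial step
    return a1 - s1 * (a2 - a1) / (s2 - s1)
```

Because only slopes are interpolated, additive noise on the objective that leaves finite‐difference gradients usable does not corrupt the search, which is the property the abstract exploits.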

16.
In responding to the needs of the material characterization community, the recently developed mesh‐free random grid method (MFRGM) has been exhibiting very promising characteristics of accuracy, adaptability, implementation flexibility and efficiency. To address the design specification of the method according to an intended application, we present a sensitivity analysis that aids in determining the effects of the experimental and computational parameters characterizing the MFRGM on its performance. The performance characteristics of the MFRGM are mainly its accuracy, sensitivity, smoothing properties and efficiency. In this paper, we present a classification of a set of parameters associated with the characteristics of the experimental set‐up and the random grid applied to the specimen under measurement. The applied sensitivity analysis is based on synthetic images produced from analytic solutions of specific isotropic and orthotropic elasticity boundary value problems. This analysis establishes the trends in the performance characteristics of the MFRGM that will enable the selection of the user‐controlled variables for a desired performance specification.

17.
The virtual fields method (VFM) is a powerful technique for the calculation of spatial distributions of material properties from experimentally determined displacement fields. A Fourier‐series‐based extension to the VFM (the F‐VFM) is presented here, in which the unknown stiffness distribution is parameterised in the spatial frequency domain rather than in the spatial domain as used in the classical VFM. We present in this paper the theory of the F‐VFM for the case of elastic isotropic thin structures with known boundary conditions. An efficient numerical algorithm based on the two‐dimensional Fast Fourier Transform (FFT) is presented, which reduces the computation time by three to four orders of magnitude compared with a direct implementation of the F‐VFM for typical experimental dataset sizes. Artefacts specific to the F‐VFM (ringing at the highest spatial frequency near to modulus discontinuities) can be largely removed through the use of appropriate filtering strategies. Reconstruction of stiffness distributions with the F‐VFM has been validated on three stiffness distribution scenarios under varying levels of noise in the input displacement fields. Robust reconstructions are achieved even when the displacement noise is higher than in typical experimental fields. Copyright © 2014 John Wiley & Sons, Ltd.

18.
Organic field‐effect transistors and near‐infrared (NIR) organic phototransistors (OPTs) have attracted worldwide attention in many fields over the past decades. In general, the sensitivity, distinguishing the signal from noise, is the key parameter for evaluating the performance of NIR OPTs, and it is decided by responsivity and dark current. 2D single crystal films of organic semiconductors (2DCOS) are promising functional materials due to their long‐range order despite being only a few molecular layers thick. Herein, for the first time, air‐stable 2DCOS of an n‐type organic semiconductor (a furan‐thiophene quinoidal compound, TFT‐CN) with strong absorbance around 830 nm are successfully prepared by a facile drop‐casting method on the surface of water. Almost millimeter‐sized TFT‐CN 2DCOS are obtained, with thickness below 5 nm. A competitive field‐effect electron mobility (1.36 cm² V⁻¹ s⁻¹) and a high on/off ratio (up to 10⁸) are obtained in air. Impressively, the ultrasensitive NIR phototransistors operating at the off‐state exhibit a very low dark current of ≈0.3 pA and an ultrahigh detectivity (D*) exceeding 6 × 10¹⁴ Jones, because the devices can operate in full depletion at the off‐state, superior to the majority of reported organic‐based NIR phototransistors.

19.
An octree‐based mesh generation method is proposed to create reasonable‐quality, geometry‐adapted unstructured hexahedral meshes automatically from triangulated surface models without any sharp geometrical features. A new, easy‐to‐implement, easy‐to‐understand set of refinement templates is developed to perform local mesh refinement efficiently even for concave refinement domains without creating hanging nodes. A buffer layer is inserted on an octree core mesh to improve the mesh quality significantly. Laplacian‐like smoothing, angle‐based smoothing and local optimization‐based untangling methods are used with certain restrictions to further improve the mesh quality. Several examples are shown to demonstrate the capability of our hexahedral mesh generation method for complex geometries. Copyright © 2008 John Wiley & Sons, Ltd.

20.
A cluster‐based method has been used successfully to analyze parametric profiles in Phase I of the profile monitoring process. Performance advantages have been demonstrated when using a cluster‐based method of analyzing parametric profiles over a non‐cluster‐based method, with respect to more accurate estimates of the parameters and improved classification performance criteria. However, it is known that, in many cases, profiles can be better represented using a nonparametric method. In this study, we use the cluster‐based method to analyze profiles that cannot be easily represented by a parametric function. The similarity matrix used during the clustering phase is based on the fits of the individual profiles with p‐spline regression. The clustering phase determines an initial main cluster set that contains greater than half of the total profiles in the historical data set. The profiles with in‐control T² statistics are sequentially added to the initial main cluster set; upon completion of the algorithm, the profiles in the main cluster set are classified as in‐control profiles and the profiles not in the main cluster set are classified as out‐of‐control profiles. A Monte Carlo study demonstrates that the cluster‐based method results in superior performance over a non‐cluster‐based method, with respect to better classification and higher power in detecting out‐of‐control profiles. Our Monte Carlo study also shows that the cluster‐based method performs better than a non‐cluster‐based method whether or not the model is correctly specified. We illustrate the use of our method with data from the automotive industry. Copyright © 2014 John Wiley & Sons, Ltd.
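The in‐control screening step relies on Hotelling's T² statistic. A sketch for generic multivariate observations (applied here to raw feature vectors rather than to p‐spline fit coefficients, which is a simplifying assumption):

```python
import numpy as np

def hotelling_t2(X):
    """Hotelling T^2 statistic for each row of X (observations x features),
    measured relative to the sample mean and sample covariance of X:
    T2_i = (x_i - xbar)^T S^{-1} (x_i - xbar)."""
    X = np.asarray(X, dtype=float)
    diffs = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.atleast_2d(np.cov(X, rowvar=False)))
    # quadratic form per row, computed without forming intermediate matrices
    return np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
```

Profiles whose T² falls below a control limit would be deemed in control and added to the main cluster set, as in the sequential step the abstract describes.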


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号