In this perfusion magnetic resonance imaging study, the performance of different pseudo-continuous arterial spin labeling (PCASL) sequences was compared: two-dimensional (2D) single-shot readout with simultaneous multislice (SMS), 2D single-shot echo-planar imaging (EPI), and multishot three-dimensional (3D) gradient and spin echo (GRASE) sequences, each combined with a background-suppression (BS) module.
Materials and methods
Whole-brain PCASL images were acquired from seven healthy volunteers. The performance of each protocol was evaluated by extracting regional cerebral blood flow (rCBF) measures using an inline morphometric segmentation prototype. Image data postprocessing and subsequent statistical analyses enabled comparisons at the regional and sub-regional levels.
Results
The main findings were as follows: (i) Mean global CBF values obtained across methods were highly correlated, and these correlations were significantly higher among the same readout sequences. (ii) Temporal signal-to-noise ratio and gray-matter-to-white-matter CBF ratio were found to be equivalent for all 2D variants but lower than those of 3D-GRASE.
Discussion
Our study demonstrates that the accelerated SMS readout can provide increased acquisition efficiency and/or a higher temporal resolution than conventional 2D and 3D readout sequences. Among all of the methods, 3D-GRASE showed the lowest variability in CBF measurements and thus the highest robustness against noise.
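As a rough illustration of the two quality metrics compared above (temporal signal-to-noise ratio and the gray-to-white-matter CBF ratio), the following minimal Python sketch shows how they are conventionally computed from a 4D perfusion series. The array shapes and names here are our own assumptions, not the study's processing pipeline.

```python
import numpy as np

def temporal_snr(series):
    """Voxelwise tSNR: temporal mean divided by temporal standard deviation."""
    mean_t = series.mean(axis=-1)
    std_t = series.std(axis=-1)
    return np.where(std_t > 0, mean_t / std_t, 0.0)

def gm_wm_cbf_ratio(cbf_map, gm_mask, wm_mask):
    """Ratio of mean gray-matter CBF to mean white-matter CBF."""
    return cbf_map[gm_mask].mean() / cbf_map[wm_mask].mean()

# Synthetic ASL-like series (x, y, z, t): baseline 100, temporal noise sigma 5
rng = np.random.default_rng(0)
series = 100 + rng.normal(0, 5, size=(8, 8, 4, 40))
tsnr = temporal_snr(series)
```

With a true signal of 100 and noise standard deviation of 5, the voxelwise tSNR clusters around 20, which is the kind of summary statistic compared across readouts above.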
This work presents new stabilised finite element methods for a bending moments formulation of the Reissner-Mindlin plate model. The introduction of the bending moment as an extra unknown leads to a new weak formulation, where the symmetry of this variable is imposed strongly in the space. This weak problem is proved to be well-posed, and stabilised Galerkin schemes for its discretisation are presented and analysed. The finite element methods are such that the bending moment tensor is sought in a finite element space constituted of piecewise linear, continuous, and symmetric tensors. Optimal error estimates are proved, and these findings are illustrated by representative numerical experiments.
Determinism is very useful for debugging and testing multithreaded programs. Many deterministic approaches have been proposed, such as deterministic multithreading (DMT) and deterministic replay. ...
Dimensional scaling approaches are widely used to develop multi-body human models in injury biomechanics research. Given the limited experimental data for any particular anthropometry, a validated model can be scaled to different sizes to reflect the biological variance of the population and used to characterize the human response. This paper compares two scaling approaches at the whole-body level: one is the conventional mass-based scaling approach, which assumes geometric similarity; the other is the structure-based approach, which assumes additional structural similarity by using idealized mechanical models to account for the specific anatomy and expected loading conditions. Given the use of exterior body dimensions and a uniform Young's modulus, the two approaches showed close values of the scaling factors for most body regions, with 1.5 % difference on force scaling factors and 13.5 % difference on moment scaling factors, on average. One exception was the thoracic modeling, with 19.3 % difference on the scaling factor of the deflection. Two 6-year-old child models were generated from a baseline adult model as an application example and were evaluated using recent biomechanical data from cadaveric pediatric experiments. The scaled models predicted similar impact responses of the thorax and lower extremity, which were within the experimental corridors, and suggested further consideration of age-specific structural change of the pelvis. Towards improved scaling methods to develop biofidelic human models, this comparative analysis suggests further investigation of interior anatomical geometry and detailed biological material properties associated with the demographic range of the population.
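The conventional mass-based approach described above can be sketched with standard dimensional-analysis relations: under geometric similarity with a uniform Young's modulus, a single length factor derived from the mass ratio drives the response factors. The factor definitions below are textbook results of this kind of analysis, not values taken from the paper.

```python
def geometric_scaling_factors(mass_subject, mass_reference):
    """Mass-based scaling factors assuming geometric similarity and equal modulus."""
    lam = (mass_subject / mass_reference) ** (1.0 / 3.0)  # length factor
    return {
        "length": lam,
        "force": lam ** 2,       # equal stress over an area scaling as lam^2
        "moment": lam ** 3,      # force times lever arm
        "deflection": lam,       # deformation proportional to size
        "time": lam,             # equal stress-wave speed
    }

# Illustrative only: scaling a 76 kg adult model toward a 23 kg six-year-old
factors = geometric_scaling_factors(23.0, 76.0)
```

Note that the moment factor equals the mass ratio exactly, since lam cubed recovers the mass ratio by construction.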
We study assignment games in which jobs select machines, and in which certain pairs of jobs may conflict, which is to say they may incur an additional cost when they are both assigned to the same machine, beyond that associated with the increase in load. Questions regarding such interactions apply beyond allocating jobs to machines: when people in a social network choose to align themselves with a group or party, they typically do so based upon not only the inherent quality of that group, but also who amongst their friends (or enemies) chooses that group as well. We show how semi-smoothness, a recently introduced generalization of smoothness, is necessary to find tight bounds on the robust price of anarchy, and thus on the quality of correlated and Nash equilibria, for several natural job-assignment games with interacting jobs. For most cases, our bounds on the robust price of anarchy are either exactly 2 or approach 2. We also prove new convergence results implied by semi-smoothness for our games. Finally we consider coalitional deviations, and prove results about the existence and quality of strong equilibrium.
Testing of reactive systems is challenging because long input sequences are often needed to drive them into a state to test a desired feature. This is particularly problematic in on-target testing, where a system is tested in its real-life application environment and the amount of time required for resetting is high. This article presents an approach to discovering a test case chain—a single software execution that covers a group of test goals and minimizes overall test execution time. Our technique targets the scenario in which test goals for the requirements are given as safety properties. We give conditions for the existence and minimality of a single test case chain and minimize the number of test case chains if a single test case chain is infeasible. We report experimental results with our ChainCover tool for C code generated from Simulink models and compare it to state-of-the-art test suite generators. 
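The test-case-chain idea above can be illustrated on a toy state machine: one input sequence that visits several test goals in turn, rather than one test per goal. The greedy nearest-goal strategy and the example system here are our own simplification for illustration, not the ChainCover algorithm, which operates on safety properties via model checking.

```python
from collections import deque

def shortest_input(transitions, start, goal_states):
    """BFS over (input, next-state) transitions; returns (inputs, reached goal)."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state in goal_states:
            return path, state
        for inp, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    return None, None

def chain_goals(transitions, start, goals):
    """Greedily chain goals into a single input sequence."""
    remaining, chain, state = set(goals), [], start
    while remaining:
        path, reached = shortest_input(transitions, state, remaining)
        if path is None:
            break  # some goal is unreachable from the current state
        chain += path
        remaining.discard(reached)
        state = reached
    return chain

# Toy reactive system: states A-D, inputs drive the transitions
fsm = {"A": [("x", "B")], "B": [("y", "C"), ("z", "A")], "C": [("w", "D")]}
chain = chain_goals(fsm, "A", {"C", "D"})
```

Here a single run with inputs x, y, w covers both goals C and D, avoiding a costly reset between tests, which is the scenario on-target testing motivates.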
The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored. This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver. The article discusses two modular implementation approaches and shows, by empirical evaluation, that search combinators can be implemented without overhead compared to a native, direct implementation in a constraint solver.
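The combinator idea above, building rich strategies compositionally from a few primitives, can be sketched in miniature as follows. The combinator names (base, limit, portfolio) are illustrative, loosely echoing the style of such a DSL rather than reproducing the article's exact primitives or its solver integration.

```python
def base(expand):
    """Primitive strategy: exhaustively explore via `expand`, yielding leaves."""
    def run(node):
        children = expand(node)
        if not children:
            yield node
        for child in children:
            yield from run(child)
    return run

def limit(strategy, n):
    """Combinator: cut a strategy off after it has produced n solutions."""
    def run(node):
        count = 0
        for sol in strategy(node):
            if count >= n:
                return
            count += 1
            yield sol
    return run

def portfolio(*strategies):
    """Combinator: run strategies in order, moving on when one is exhausted."""
    def run(node):
        for strategy in strategies:
            yield from strategy(node)
    return run

# Toy search tree: node n expands to 2n and 2n+1 until leaves are reached
expand = lambda n: [2 * n, 2 * n + 1] if n < 4 else []
first_two = list(limit(base(expand), 2)(1))
```

Because strategies are ordinary first-class values, user-level compositions such as `portfolio(limit(s1, 5), s2)` need no changes to the underlying primitives, which is the modularity the article's DSL is designed to deliver at low cost to the solver developer.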