This work investigates emulsion templating to synthesize hexadecane oil/geopolymer composites. In a system with hexadecane as the internal (dispersed) phase and an alkali-activated continuous phase without added surfactant, adding aluminosilicate clay particles does not increase resistance against creaming or coalescence, whereas adding a surfactant (L35 or CTAB) stabilizes the solid-liquid interface. Infrared and rheological studies of the associated geopolymerization, followed through the time evolution of the Si-O-T IR stretching frequency and of the rheological moduli, show that neither the organic phase nor the surfactant has a significant effect on the geopolymerization kinetics. The stabilization of the organic template is reminiscent of a Pickering emulsion, even though a much greater amount of inorganic material is employed for geopolymer formation. Although the addition of surfactant significantly affects the behavior of the paste, the percolation of the network remains unmodified, indicating that the phenomenon is not viscosity-dependent. Finally, rheological measurements were used to obtain the mass fractal dimension of the as-made gel network, which differentiates the interfacial effect of the surfactant molecules, with a slightly denser interphase when a cationic surfactant is used.
In this perfusion magnetic resonance imaging study, the performance of different pseudo-continuous arterial spin labeling (PCASL) sequences was compared: two-dimensional (2D) single-shot readout with simultaneous multislice (SMS), 2D single-shot echo-planar imaging (EPI), and multishot three-dimensional (3D) gradient and spin echo (GRASE) sequences, combined with a background-suppression (BS) module.
Materials and methods
Whole-brain PCASL images were acquired from seven healthy volunteers. The performance of each protocol was evaluated by extracting regional cerebral blood flow (rCBF) measures using an inline morphometric segmentation prototype. Image data postprocessing and subsequent statistical analyses enabled comparisons at the regional and sub-regional levels.
Results
The main findings were as follows: (i) Mean global CBF values obtained across methods were highly correlated, and these correlations were significantly higher among sequences sharing the same readout. (ii) The temporal signal-to-noise ratio and the gray-matter-to-white-matter CBF ratio were equivalent for all 2D variants but lower than those of 3D-GRASE.
Discussion
Our study demonstrates that the accelerated SMS readout can provide increased acquisition efficiency and/or higher temporal resolution than conventional 2D and 3D readout sequences. Among all of the methods, 3D-GRASE showed the lowest variability in CBF measurements and thus the highest robustness against noise.
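One of the comparison metrics above, the temporal signal-to-noise ratio, is simply the mean of a voxel's time series divided by its temporal standard deviation. A minimal sketch in Python (the time series below is made-up illustrative data, not from this study):

```python
from statistics import mean, stdev

def temporal_snr(timeseries):
    """Temporal SNR of a voxel: mean signal over time divided by
    its temporal standard deviation."""
    return mean(timeseries) / stdev(timeseries)

# Made-up perfusion-weighted time series for a single voxel.
voxel = [101.2, 99.8, 100.5, 98.9, 101.0, 100.3, 99.5, 100.7]
print(round(temporal_snr(voxel), 2))
```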
This work aims to demonstrate the value of a new methodology for the design and optimization of composite materials and structures. Coupling reliability methods with homogenization techniques allows probabilistic design variables to be considered at different scales. The main advantage of this micromechanics-based approach is that it extends the scope of engineering solutions for composite materials so as to reach or respect a given reliability level. The approach is illustrated on a civil engineering case involving fiber-reinforced composites. Modifications of microstructural component properties, manufacturing process, and geometry are investigated to provide new design alternatives and guidelines for quality control.
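The coupling of homogenization with a reliability analysis can be illustrated in miniature (all numbers below are hypothetical, not taken from the study): a rule-of-mixtures homogenized modulus for a unidirectional fiber composite, with random fiber/matrix properties, fed into a crude Monte Carlo estimate of the probability that the homogenized modulus falls below a design threshold.

```python
import random

def rule_of_mixtures(E_f, E_m, v_f):
    """Homogenized longitudinal modulus of a unidirectional composite."""
    return v_f * E_f + (1.0 - v_f) * E_m

def failure_probability(n_samples=100_000, threshold=40.0, seed=0):
    """Crude Monte Carlo estimate of P(E_hom < threshold), in GPa.
    Fiber/matrix moduli and fiber fraction are hypothetical Gaussians."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        E_f = rng.gauss(72.0, 3.0)   # fiber modulus (GPa), assumed
        E_m = rng.gauss(3.5, 0.3)    # matrix modulus (GPa), assumed
        v_f = rng.gauss(0.55, 0.02)  # fiber volume fraction, assumed
        if rule_of_mixtures(E_f, E_m, v_f) < threshold:
            failures += 1
    return failures / n_samples

print(failure_probability())
```

A real analysis would replace the rule of mixtures with the homogenization scheme and use a dedicated reliability method (e.g. FORM) instead of brute-force sampling.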
We present the multi-period orienteering problem with multiple time windows (MuPOPTW), a new routing problem that combines the objective and constraints of the orienteering problem (OP) and the team orienteering problem (TOP), constraints from standard vehicle routing problems, and original constraints from a real industrial application. Specific route duration constraints give rise to a route feasibility subproblem. We propose an exact algorithm for this subproblem and embed it in a variable neighborhood search method to solve the whole routing problem. We then provide experimental results for this method and compare them with those of a commercial solver. We also adapt our method to standard OP and TOP benchmark instances and provide comparative tables against state-of-the-art algorithms.
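The variable neighborhood search scheme mentioned above can be sketched generically: shake the incumbent with the k-th neighborhood, apply local descent, and either accept the improvement (resetting k) or move to a larger neighborhood. The toy objective and neighborhoods below are illustrative only, not the authors' algorithm, and the route-feasibility subproblem is abstracted away entirely.

```python
import random

def local_search(cost, sol, move, rng, tries=50):
    """First-improvement descent using a single move operator."""
    for _ in range(tries):
        cand = move(sol, rng)
        if cost(cand) < cost(sol):
            sol = cand
    return sol

def vns(cost, initial, neighborhoods, max_iters=200, seed=0):
    """Generic VNS: shake with neighborhood k, descend, then either
    move (and reset k) or enlarge k."""
    rng = random.Random(seed)
    best, k = initial, 0
    for _ in range(max_iters):
        shaken = neighborhoods[k](best, rng)
        candidate = local_search(cost, shaken, neighborhoods[0], rng)
        if cost(candidate) < cost(best):
            best, k = candidate, 0
        else:
            k = min(k + 1, len(neighborhoods) - 1)
    return best

def swap(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    perm = list(perm)
    perm[i], perm[j] = perm[j], perm[i]
    return perm

def reverse_segment(perm, rng):
    i, j = sorted(rng.sample(range(len(perm)), 2))
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

# Toy objective: sort a permutation (cost = number of misplaced items).
cost = lambda p: sum(1 for i, v in enumerate(p) if i != v)
start = list(range(10))
random.Random(1).shuffle(start)
result = vns(cost, start, [swap, reverse_segment])
print(cost(result) <= cost(start))  # → True
```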
We provide an algorithm for the exact computation of the lattice width of a set of points K in Z^2 in linear time with respect to the size of K. The method consists of computing a particular surrounding polygon, from which we deduce a set of candidate vectors that allows the lattice width to be computed. Moreover, we describe how this new algorithm can be extended to arbitrary dimension thanks to a greedy and practical approach for computing a surrounding polytope. This last computation is very efficient in practice, as it performs only a few linear-time iterations regardless of the size of the point set, thereby avoiding complex geometric processing.
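For reference, the lattice width of K along an integer direction u is the extent of the dot products u·p over p in K, minimized over primitive nonzero u. The brute-force baseline below (not the linear-time algorithm of the abstract) is only useful for checking results on small examples, since it bounds the candidate directions arbitrarily:

```python
from math import gcd

def width_along(points, u):
    """Extent of the point set along the integer direction u."""
    dots = [u[0] * x + u[1] * y for x, y in points]
    return max(dots) - min(dots)

def lattice_width_bruteforce(points, bound=10):
    """Lattice width of a finite set in Z^2 by brute force: minimize
    the extent over primitive integer directions with coordinates
    bounded by `bound`. Correct only if an optimal direction lies
    within the bound (fine for small examples)."""
    best = None
    for a in range(0, bound + 1):
        for b in range(-bound, bound + 1):
            if (a, b) == (0, 0) or gcd(a, abs(b)) != 1:
                continue
            w = width_along(points, (a, b))
            best = w if best is None else min(best, w)
    return best

# Unit square {0,1}^2: width 1 (e.g. along direction (1, 0)).
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(lattice_width_bruteforce(square))  # → 1
```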
Inspired by the Multiplicative Exponential fragment of Linear Logic, we define a framework called the prismoid of resources, in which each vertex is a language refining the λ-calculus by a different choice of which operations (contraction, weakening, and substitution) are explicit and which are implicit (meta-level). For all the calculi in the prismoid we show simulation of β-reduction, confluence, preservation of β-strong normalisation, and strong normalisation for typed terms. Full composition also holds for all the calculi of the prismoid that handle explicit substitutions. The whole development is carried out by making the set of resources a parameter of the formalism, so that all the properties for each vertex are obtained as particular cases of general abstract proofs.
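The flavor of an explicit-substitution calculus can be shown with a minimal term language (a generic λx-style sketch, not any specific vertex of the prismoid): β-reduction creates an explicit closure [x:=u], and separate rules propagate it through the term instead of substituting at the meta-level.

```python
# Terms: ("var", x) | ("lam", x, t) | ("app", t, u) | ("sub", t, x, u)
# A generic λx-style sketch; names and rules are illustrative only.

def step(t):
    """One root-level reduction step, or None if no rule applies here."""
    if t[0] == "app" and t[1][0] == "lam":   # beta: (λx.t) u → t[x:=u]
        _, (_, x, body), u = t
        return ("sub", body, x, u)
    if t[0] == "sub":
        _, body, x, u = t
        if body[0] == "var":                 # variable rules
            return u if body[1] == x else body
        if body[0] == "app":                 # push under application
            return ("app", ("sub", body[1], x, u), ("sub", body[2], x, u))
        if body[0] == "lam":
            if body[1] == x:                 # binder shadows x
                return body
            # NB: real calculi need alpha-renaming here; omitted for brevity.
            return ("lam", body[1], ("sub", body[2], x, u))
    return None

def normalize(t, fuel=100):
    """Repeatedly apply `step`, recursing into subterms, with a budget."""
    while fuel > 0:
        r = step(t)
        if r is not None:
            t, fuel = r, fuel - 1
            continue
        if t[0] == "app":
            f, a = normalize(t[1], fuel), normalize(t[2], fuel)
            if (f, a) != (t[1], t[2]):
                t, fuel = ("app", f, a), fuel - 1
                continue
        if t[0] == "lam":
            b = normalize(t[2], fuel)
            if b != t[2]:
                t, fuel = ("lam", t[1], b), fuel - 1
                continue
        return t
    return t

identity = ("lam", "x", ("var", "x"))
print(normalize(("app", identity, ("var", "y"))))  # → ('var', 'y')
```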
Automatic parallelization in the polyhedral model is based on affine transformations from an original computation domain (iteration space) to a target space-time domain, often with a different transformation for each variable. Code generation is an often-ignored step in this process that has a significant impact on the quality of the final code; it involves a trade-off between code size and control code simplification/optimization. Previous code generation methods are based on loop splitting; however, they behave suboptimally on parameterized programs. We present a general parameterized method for code generation based on the dual representation of polyhedra. Our algorithm uses a simple recursion on the dimensions of the domains and enables fine control over the trade-off between code size and control overhead.
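To make the code generation problem concrete, consider scanning the parameterized triangle {(i, j) : 0 ≤ i ≤ N, i ≤ j ≤ N} after the affine transformation (i, j) → (j, i): the generated loop nest must re-derive the bounds, with j outermost and i bounded by min(j, N). This is a hand-worked toy instance, not the dual-representation algorithm of the abstract:

```python
def scan_original(N):
    """Original domain: 0 <= i <= N, i <= j <= N."""
    return [(i, j) for i in range(N + 1) for j in range(i, N + 1)]

def scan_transformed(N):
    """Same set of points visited under the affine schedule
    (i, j) -> (j, i): the generated code makes j the outer loop
    and re-derives the inner bound i <= min(j, N)."""
    points = []
    for j in range(N + 1):                 # new outer loop
        for i in range(min(j, N) + 1):     # generated inner bound
            points.append((i, j))
    return points

# Both loop nests enumerate the same integer points.
N = 3
assert sorted(scan_original(N)) == sorted(scan_transformed(N))
print(len(scan_transformed(N)))  # → 10 for N = 3
```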
Computational tools for normal mode analysis, widely used in physics and materials science, are gathered here into a single package called NMscatt (Normal Modes & scattering) that allows arbitrarily large systems to be handled. The package computes inelastic neutron and X-ray scattering observables, enabling comparison with experimental data produced at large-scale facilities. Various simplification schemes are presented for analyzing displacement vectors, which are otherwise too complicated to interpret in very large systems.
Program summary
Title of program: NMscatt
Catalogue identifier: ADZA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: no
No. of lines in distributed program, including test data, etc.: 573 535
No. of bytes in distributed program, including test data, etc.: 4 516 496
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: x86 PC
Operating system: GNU/Linux, UNIX
RAM: depends on the size of the system to be simulated
Word size: 32 or 64 bits
Classification: 16.3
External routines: LAPACK
Nature of problem: normal mode analysis, phonon calculation, derivation of incoherent and coherent inelastic scattering spectra.
Solution method: full diagonalization (producing eigenvectors and eigenvalues) of the dynamical matrix, obtained from the potential energy function by a finite-difference method.
Running time: about 7 hours per k-point evaluation when sampling all mode dispersion curves for a system of 3550 atoms in the unit cell, on an AMD Athlon 64 X2 Dual Core Processor 4200+.
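The solution method above (finite-difference derivation of the dynamical matrix, then diagonalization) can be illustrated in miniature for a single 1D degree of freedom, where the "dynamical matrix" reduces to the scalar V''(x0)/m and the normal mode frequency is its square root. A toy sketch in Python rather than FORTRAN 77:

```python
import math

def second_derivative(V, x0, h=1e-4):
    """Central finite difference for V''(x0), the 1D analogue of
    building the dynamical matrix from a potential energy function."""
    return (V(x0 + h) - 2.0 * V(x0) + V(x0 - h)) / h**2

def mode_frequency(V, x0, mass):
    """For one degree of freedom the dynamical matrix is the scalar
    V''(x0)/mass; its square root is the angular frequency."""
    return math.sqrt(second_derivative(V, x0) / mass)

# Harmonic test potential V(x) = 0.5 * k * x^2 with k = 4, mass = 1:
# exact answer omega = sqrt(k/m) = 2.
V = lambda x: 0.5 * 4.0 * x * x
print(round(mode_frequency(V, 0.0, 1.0), 6))  # → 2.0
```

In NMscatt the same idea applies to a 3N x 3N matrix, whose full diagonalization (via LAPACK) yields the eigenvalues and eigenvectors used for the scattering observables.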