In this study, epoxy-based nanocomposites containing multi-wall carbon nanotubes (CNTs) were produced by a calendering approach.
The electrical conductivities of these composites were investigated as a function of CNT content. The conductivity was found
to obey a percolation-like power law with a percolation threshold below 0.05 vol.%. The electrical conductivity of the neat
epoxy resin could be enhanced by nine orders of magnitude with the addition of only 0.6 vol.% CNTs, suggesting the formation of a well-conducting CNT network throughout the insulating polymer matrix. To characterize the dispersion and morphology of the CNTs in the epoxy matrix, several microscopic techniques were applied: atomic force microscopy, transmission electron microscopy, and scanning electron microscopy
(SEM). In particular, charge contrast imaging in SEM allows visualization of the overall distribution of CNTs at the micro-scale, as well as identification of CNT bundles at the nano-scale. On the basis of the microscopic investigation, the electrical conduction
mechanism of CNT/epoxy composites is discussed.
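The percolation-like power law referred to above has the standard form sigma = sigma0 * (phi - phi_c)^t for phi > phi_c. As a minimal illustration of how such a fit could be performed, the sketch below fits the law in log space so the orders-of-magnitude spread of the data is weighted evenly; the data values, initial guesses, and use of scipy.optimize.curve_fit are assumptions for illustration, not the authors' procedure.

    # Hedged sketch: fit sigma = sigma0 * (phi - phi_c)^t to hypothetical data.
    import numpy as np
    from scipy.optimize import curve_fit

    phi = np.array([0.10, 0.20, 0.30, 0.40, 0.60]) / 100.0   # CNT volume fraction (hypothetical)
    sigma = np.array([1e-7, 1e-5, 1e-4, 5e-4, 1e-3])         # conductivity in S/m (hypothetical)

    def log_law(phi, log_sigma0, phi_c, t):
        # log10 of the percolation law; clipping guards against phi <= phi_c during the fit.
        return log_sigma0 + t * np.log10(np.clip(phi - phi_c, 1e-12, None))

    popt, _ = curve_fit(log_law, phi, np.log10(sigma), p0=[0.0, 5e-4, 2.0], maxfev=20000)
    log_sigma0, phi_c, t = popt
    print(f"fitted threshold phi_c = {100 * phi_c:.3f} vol.%, exponent t = {t:.2f}")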
One of the most challenging problems in enterprise information integration is dealing with heterogeneous information sources on the Web. The reason is that they usually provide information in human-readable form only, which makes it difficult for a software agent to understand. Current solutions build on the idea of annotating the information with semantics. If the information is unstructured, proposals such as S-CREAM, MnM, or Armadillo may be effective enough, since they rely on natural language processing techniques; furthermore, their accuracy can be improved by exploiting redundant information on the Web, as C-PANKOW has recently demonstrated. If the information is structured and closely related to a back-end database, deep annotation ranks among the most effective proposals, but it requires the information providers to modify their applications; if deep annotation is not applicable, the easiest solution consists of using a wrapper and transforming its output into annotations. In this paper, we prove that this transformation can be automated by means of an efficient, domain-independent algorithm. To the best of our knowledge, this is the first attempt to devise and formalize such a systematic, general solution.
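The paper's algorithm itself is not reproduced here, but the following sketch conveys the kind of transformation involved: a wrapper's structured record is mapped to subject-predicate-object annotations via a field-to-ontology mapping. The record layout, the ontology URIs, and the function names are hypothetical.

    # Hedged sketch: turn a wrapper's structured output into RDF-like annotations.
    from typing import Dict, List, Tuple

    def annotate(record: Dict[str, str],
                 ontology_map: Dict[str, str],
                 subject_uri: str) -> List[Tuple[str, str, str]]:
        """Map each wrapper field to an ontology property, yielding RDF-like triples."""
        triples = []
        for field, value in record.items():
            prop = ontology_map.get(field)
            if prop is not None:              # skip fields the ontology does not cover
                triples.append((subject_uri, prop, value))
        return triples

    # Hypothetical usage: a wrapper extracted a book record from a web page.
    record = {"title": "Dune", "author": "Frank Herbert", "price": "9.99"}
    mapping = {"title": "http://example.org/onto#title",
               "author": "http://example.org/onto#author"}
    print(annotate(record, mapping, "http://example.org/book/1"))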
Clinical interventional hemodynamic studies quantify ventricular function from two-dimensional (2-D) X-ray projection images, without direct knowledge of the actual three-dimensional (3-D) shape of the cardiac cavity. Motivated by this limitation, this paper reports a method for 3-D reconstruction of the left ventricle from two orthogonal angiographic projections. The proposed algorithm works in 3-D space and accounts for the oblique projection geometry of the biplane image acquisition equipment. The reconstruction process starts with an approximate reconstruction based on the Cylindrical Closure Operation and Dempster-Shafer theory. This approximate reconstruction is then deformed to match the given projections. The deformation is carried out by an iterative process that, by means of Dempster-Shafer theory and the fuzzy integral, combines the information provided by the projection error and the connectivity between voxels. The performance of the proposed reconstruction method is first evaluated on the reconstruction of two 3-D binary databases from two orthogonal synthesized projections, yielding errors as low as 6.48%. The method is then tested on real data, where two orthogonal preprocessed angiographic images are used for reconstruction; in this case performance is assessed by the projection error, whose average over both views is 7.5%. The reconstruction method is also tested by reconstructing a ventriculographic sequence throughout an entire cardiac cycle.
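As a minimal illustration of the Dempster-Shafer machinery used above, the sketch below implements Dempster's rule of combination for two mass functions. The frame of discernment and the mass values are hypothetical; this is not the paper's reconstruction algorithm.

    # Hedged sketch: Dempster's rule of combination over frozenset-keyed focal elements.
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions; conflicting mass is discarded and renormalized."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: sources cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical evidence about whether a voxel belongs to the ventricle (V) or not (N).
    V, N, VN = frozenset("V"), frozenset("N"), frozenset("VN")
    m_projection = {V: 0.6, N: 0.1, VN: 0.3}    # evidence from the projection error
    m_connectivity = {V: 0.5, VN: 0.5}          # evidence from voxel connectivity
    print(dempster_combine(m_projection, m_connectivity))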
This article presents the analysis, comparison, and application of two alternative models for the optimal long-term operation planning of a hydro-thermal power system under conditions of uncertainty. The electrical system considered comprises one large reservoir with interannual regulation capacity and several smaller ones. The analyzed models employ stochastic dynamic programming as the solution methodology. The fundamental problem is to decide, at every temporal stage, how much water to use for generation and how much to store, in order to minimize the total thermal and shortage costs. The original version of the studied model, created to forecast fuel consumption, assumes that the decision on the water release from the main reservoir is made with knowledge of the future hydrologic conditions; this criterion is known as wait-and-see. In contrast, the new versions of the model proposed in this article adopt a here-and-now criterion: it is assumed that the future hydrologic conditions are not known at the time the operational decisions are made. The difference between the optimal cost of the proposed models and that of the original model defines the value of having information about future hydrologic conditions before making any decision, generally known as the expected value of perfect information.
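The here-and-now recursion can be sketched as a backward dynamic program in which the release decision is fixed before the inflow is revealed, so the cost-to-go is an expectation over inflow scenarios. Everything below (the discretization, the quadratic thermal cost, the inflow distribution) is an illustrative assumption, not the studied model.

    # Hedged sketch: here-and-now stochastic dynamic programming for one reservoir.
    import numpy as np

    STORAGE = np.linspace(0.0, 100.0, 21)               # discretized storage levels
    RELEASES = np.linspace(0.0, 50.0, 11)               # candidate water releases
    INFLOWS = [(10.0, 0.3), (25.0, 0.4), (40.0, 0.3)]   # (inflow, probability), hypothetical
    DEMAND, STAGES = 30.0, 12

    def thermal_cost(release):
        # Hypothetical cost of covering the residual demand with thermal generation.
        return max(DEMAND - release, 0.0) ** 2

    def solve():
        value = np.zeros(len(STORAGE))                  # terminal value function
        for _ in range(STAGES):
            new_value = np.empty_like(value)
            for i, s in enumerate(STORAGE):
                best = np.inf
                for r in RELEASES:                      # release fixed before inflow is known
                    if r > s:                           # cannot release more than is stored
                        continue
                    expected = 0.0
                    for q, p in INFLOWS:
                        s_next = min(max(s - r + q, STORAGE[0]), STORAGE[-1])
                        expected += p * np.interp(s_next, STORAGE, value)
                    best = min(best, thermal_cost(r) + expected)
                new_value[i] = best
            value = new_value
        return value

    print(solve()[::5])     # optimal expected cost at a few storage levels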
The accurate and efficient discretization of singularly perturbed advection–diffusion equations on arbitrary 2D and 3D domains remains an open problem. An interesting approach to this problem is the complete flux scheme (CFS) proposed by G. D. Thiart and further investigated by J. ten Thije Boonkkamp. For the CFS, uniform second order convergence has been proven on structured grids. We extend a version of the CFS to unstructured grids for a steady singularly perturbed advection–diffusion equation. By construction, the novel finite volume scheme is nodally exact in 1D for piecewise constant source terms. This property makes it possible to use elegant continuous arguments to prove uniform second order convergence on unstructured one-dimensional grids. Numerical results verify the predicted bounds and suggest that, by aligning the finite volume grid with the velocity field, uniform second order convergence can be obtained in higher space dimensions as well.
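In one dimension with constant coefficients, the homogeneous part of the complete flux reduces to the classical exponentially fitted (Scharfetter-Gummel-type) interface flux. The sketch below solves (u*phi - eps*phi')' = s on a uniform 1-D grid with that flux; the grid size, coefficients, and boundary values are assumptions for illustration, and the paper's unstructured-grid construction is not reproduced.

    # Hedged sketch: exponentially fitted finite volume scheme for 1-D advection-diffusion.
    import numpy as np

    def bernoulli(z):
        """B(z) = z / (exp(z) - 1), with the removable singularity at z = 0 handled."""
        return 1.0 - z / 2.0 if abs(z) < 1e-8 else z / np.expm1(z)

    def solve(n=40, u=1.0, eps=1e-2, s=1.0, phi_left=0.0, phi_right=0.0):
        h = 1.0 / n
        pe = u * h / eps                          # mesh Peclet number
        c = eps / h
        bm, bp = bernoulli(-pe), bernoulli(pe)    # fitted flux coefficients
        # Interface flux: F_{i+1/2} = c * (bm * phi_i - bp * phi_{i+1});
        # flux balance F_{i+1/2} - F_{i-1/2} = s*h yields a tridiagonal system.
        A = np.zeros((n - 1, n - 1))
        rhs = np.full(n - 1, s * h)               # piecewise constant source term
        for i in range(n - 1):
            A[i, i] = c * (bm + bp)
            if i > 0:
                A[i, i - 1] = -c * bm
            if i < n - 2:
                A[i, i + 1] = -c * bp
        rhs[0] += c * bm * phi_left               # Dirichlet boundary contributions
        rhs[-1] += c * bp * phi_right
        return np.linalg.solve(A, rhs)

    print(solve()[:5])    # interior values near the left boundary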
This paper presents four centrality measures applied to an alternating current (AC) microgrid (MG) modeled as a multiplex network. The MG secondary control is separated into a frequency layer and a power-sharing layer, each with a different adjacency matrix. A physical layer is also considered, with an admittance matrix representing the impedances among the inverters. Centrality measures are used to determine the importance of nodes in the separate layers; adjacency and Laplacian matrices are then redefined to quantify the role of nodes in the multiplex system. First, a global adjacency matrix is computed as the sum of the individual layer adjacency matrices. Second, the system is represented by a supra-Laplacian matrix. The first eigenvalue of the perturbed matrix is used to determine the diffusivity of the network when the node sets obtained by the centrality measures act as leaders. The role of the nodes in the system is verified in a simulated MG model of 37 nodes. Degree centrality and Laplacian energy measures yield similar sets of nodes; however, the fastest set of nodes is found using the eigenvector measure for both the uniform and supra-Laplacian approaches.
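As a minimal illustration of the supra-Laplacian construction mentioned above, the sketch below builds the matrix for a hypothetical two-layer multiplex (frequency and power-sharing layers) and reads a diffusion rate off its spectrum, here taken as the second-smallest eigenvalue, a standard choice. The layer topologies and the interlayer coupling strength are assumptions, not the 37-node MG model.

    # Hedged sketch: supra-Laplacian of a two-layer multiplex network.
    import numpy as np

    def laplacian(adj):
        return np.diag(adj.sum(axis=1)) - adj

    def supra_laplacian(adj_layers, coupling=1.0):
        """Block-diagonal intralayer Laplacians plus interlayer coupling terms."""
        n, m = adj_layers[0].shape[0], len(adj_layers)
        intra = np.zeros((m * n, m * n))
        for k, adj in enumerate(adj_layers):               # intralayer blocks
            intra[k*n:(k+1)*n, k*n:(k+1)*n] = laplacian(adj)
        # Interlayer part: couple each node to its replica in every other layer.
        inter_adj = np.kron(np.ones((m, m)) - np.eye(m), np.eye(n))
        return intra + coupling * laplacian(inter_adj)

    # Hypothetical 4-node frequency and power-sharing layers.
    freq = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
    power = np.array([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]], dtype=float)
    L = supra_laplacian([freq, power], coupling=0.5)
    eigs = np.sort(np.linalg.eigvalsh(L))
    print("algebraic connectivity (diffusion rate):", eigs[1])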
Current software process models (CMM, SPICE, etc.) strongly recommend the application of statistical control and measurement guides to define, implement, and evaluate the effects of different process improvements. However, whilst quantitative modeling has been widely used in other fields, it has not been sufficiently exploited in software process improvement. During the last decade, software process simulation has been used to address a wide variety of management problems, including strategic management, technology adoption, understanding, training and learning, and risk management. In this work, a dynamic integrated framework for software process improvement is presented. This framework combines traditional estimation models with intensive use of dynamic simulation models of the software process. Its aim is to support qualitative and quantitative assessment for software process improvement and decision making, so as to achieve a higher software development process capability according to the Capability Maturity Model. The concepts underlying this framework have been implemented in a software process improvement tool that has been used in a local software organization. The results obtained and the lessons learned are also presented in this paper.
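Dynamic simulation models of the software process are typically system-dynamics models: stocks of work connected by flow rates. The toy sketch below integrates one such loop, where finished work feeds an error-driven rework flow; all stocks, rates, and parameter values are hypothetical and are not taken from the framework described above.

    # Hedged sketch: a toy system-dynamics loop for a software project.
    def simulate(total_tasks=400.0, staff=5.0, productivity=1.0,
                 error_rate=0.15, dt=1.0, horizon=200):
        pending, completed, rework = total_tasks, 0.0, 0.0
        for week in range(horizon):
            dev_rate = min(staff * productivity, pending / dt)   # tasks finished per week
            new_errors = dev_rate * error_rate                   # fraction that must be redone
            pending += (new_errors - dev_rate) * dt              # errors flow back to pending
            completed += (dev_rate - new_errors) * dt
            rework += new_errors * dt
            if completed >= total_tasks - 1e-6:
                return week + 1, rework
        return horizon, rework

    weeks, rework = simulate()
    print(f"project completes in ~{weeks} weeks with {rework:.0f} reworked tasks")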
This paper addresses the optimization of noninvasive diagnostic schemes using evolutionary algorithms in medical applications based on the interpretation of biosignals. A general diagnostic methodology is presented that uses a set of definable characteristics extracted from the biosignal source, followed by the specific diagnostic scheme. In this framework, multiobjective evolutionary algorithms are used to optimize not only classification accuracy but also other objectives of medical interest, which may be conflicting. Furthermore, the use of both multimodal and multiobjective evolutionary optimization algorithms provides the medical specialist with different alternatives for configuring the diagnostic scheme. Application examples of this methodology are described for the diagnosis of a specific cardiac disorder: paroxysmal atrial fibrillation.
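At the core of any multiobjective evolutionary algorithm is the Pareto-dominance relation, which determines the set of non-dominated configurations offered to the specialist as alternatives. The sketch below shows that test; the two objectives (classification error and number of features) are hypothetical stand-ins for the medical objectives discussed above.

    # Hedged sketch: Pareto dominance and non-dominated filtering (minimization).
    from typing import Sequence, List

    def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
        """True if a is no worse than b in all objectives and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population: List[Sequence[float]]) -> List[Sequence[float]]:
        """Keep the non-dominated candidates: the alternatives offered to the specialist."""
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

    # Hypothetical diagnostic configurations: (classification error, number of features).
    candidates = [(0.08, 12), (0.10, 5), (0.12, 3), (0.09, 12), (0.15, 2)]
    print(pareto_front(candidates))   # trade-off curve between accuracy and simplicity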
We develop a new family of well-balanced path-conservative quadrature-free one-step ADER finite volume and discontinuous Galerkin finite element schemes on unstructured meshes for the solution of hyperbolic partial differential equations with non-conservative products and stiff source terms. The fully discrete formulation is derived using the recently developed framework of explicit one-step PNPM schemes of arbitrary high order of accuracy in space and time for conservative hyperbolic systems [Dumbser M, Balsara D, Toro EF, Munz CD. A unified framework for the construction of one-step finite-volume and discontinuous Galerkin schemes. J Comput Phys 2008;227:8209–53]. The two key ingredients of our high order approach are: first, the high order accurate PNPM reconstruction operator on unstructured meshes, using the WENO strategy presented in [Dumbser M, Käser M, Titarev VA, Toro EF. Quadrature-free non-oscillatory finite volume schemes on unstructured meshes for nonlinear hyperbolic systems. J Comput Phys 2007;226:204–43] to ensure monotonicity at discontinuities, and second, a local space–time Galerkin scheme to predict the evolution of the reconstructed polynomial data inside each element during one time step to obtain a high order accurate one-step time discretization. This approach is also able to deal with stiff source terms, as shown in [Dumbser M, Enaux C, Toro EF. Finite volume schemes of very high order of accuracy for stiff hyperbolic balance laws. J Comput Phys 2008;227:3971–4001]. These two key ingredients are combined with the recently developed path-conservative methods of Parés [Parés C. Numerical methods for nonconservative hyperbolic systems: a theoretical framework. SIAM J Numer Anal 2006;44:300–21] and Castro et al. [Castro MJ, Gallardo JM, Parés C. High-order finite volume schemes based on reconstruction of states for solving hyperbolic systems with nonconservative products. Applications to shallow-water systems. Math Comput 2006;75:1103–34] to treat the non-conservative products properly. We show applications of our method to the two-layer shallow water equations as well as to the recently published depth-averaged two-fluid flow model of Pitman and Le [Pitman EB, Le L. A two-fluid model for avalanche and debris flows. Philos Trans Roy Soc A 2005;363:1573–601].
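A first order 1-D sketch of the path-conservative idea may help: for q_t + A(q) q_x = 0, the jump across each cell interface is split into left- and right-going fluctuations D^- and D^+ through a Roe-type matrix evaluated along a straight-line segment path. The constant-coefficient 2x2 system and grid below are illustrative assumptions; the paper's high order PNPM/WENO machinery is not reproduced.

    # Hedged sketch: first order path-conservative fluctuation splitting in 1-D.
    import numpy as np

    A = np.array([[0.0, 1.0], [1.0, 0.0]])                # hypothetical constant system matrix
    lam, R = np.linalg.eig(A)
    ABS_A = R @ np.diag(np.abs(lam)) @ np.linalg.inv(R)   # |A| via eigendecomposition

    def step(q, dx, dt):
        """Update q_i^{n+1} = q_i - dt/dx * (D^+_{i-1/2} + D^-_{i+1/2})."""
        jump = q[1:] - q[:-1]                             # interface jumps
        d_minus = 0.5 * (jump @ A.T - jump @ ABS_A.T)     # left-going fluctuation D^-
        d_plus = 0.5 * (jump @ A.T + jump @ ABS_A.T)      # right-going fluctuation D^+
        qn = q.copy()
        qn[1:-1] -= dt / dx * (d_plus[:-1] + d_minus[1:])
        return qn

    n = 200
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    q = np.zeros((n, 2))
    q[:, 0] = np.exp(-200 * (x - 0.5) ** 2)               # initial pulse
    dt = 0.4 * dx / np.max(np.abs(lam))                   # CFL-limited time step
    for _ in range(100):
        q = step(q, dx, dt)
    print("max |q1| after 100 steps:", np.abs(q[:, 0]).max())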