The present work reports a facile and cost-effective co-precipitation method to prepare undoped CuO and doped (Co and Fe) CuO nanostructures without the use of any surfactant or capping agent. Structural analysis reveals a monoclinic crystal structure for both the pure and doped CuO nanostructures. The effect of the different morphologies on supercapacitor performance was examined by cyclic voltammetry (CV) and galvanostatic charge-discharge (GCD) measurements. Specific capacitances of 156 (±5), 168 (±5) and 186 (±5) F g⁻¹ were obtained for the CuO, Co-doped CuO and Fe-doped CuO electrodes, respectively, at a scan rate of 5 mV s⁻¹, while values of 114 (±5), 136 (±5) and 170 (±5) F g⁻¹ were calculated from GCD at 0.5 A g⁻¹ for CuO, Co–CuO and Fe–CuO, respectively. The supercapacitive performance of the Fe–CuO nanorods is mainly attributed to the synergy between CuO and the Fe ions. Fe-doped CuO, with its nanorod-like morphology, provides the highest specific capacitance and excellent cycling stability among all the nanostructured electrodes studied. These results motivate the use of Fe-doped CuO nanostructures as electrode materials for next-generation energy storage devices.
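The GCD-derived capacitances above follow from the standard relation C_s = I·Δt/(m·ΔV). A minimal sketch, using hypothetical electrode parameters (mass, discharge time and voltage window are assumed, not from the abstract):

```python
def specific_capacitance_gcd(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance from a GCD discharge curve: C_s = I * Δt / (m * ΔV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Hypothetical numbers: a 2 mg electrode discharged at 1 mA (i.e. 0.5 A/g)
# over a 1.0 V window in 340 s yields roughly 170 F/g, the order of
# magnitude reported for the Fe-doped CuO electrode.
print(specific_capacitance_gcd(1e-3, 340.0, 2e-3, 1.0))  # ≈ 170 F/g
```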
A forming model based on a viscoplastic flow formulation is derived which includes the effects of small elastic strains. A significant feature of the formulation is its reliance on the dominant inelastic material characteristics in the formation of the stiffness matrix for large strain problems. The resultant non-linear system of equations is solved by an adaptive descent method which combines the rapid convergence of Newton's method near the solution with the robustness of a method of successive approximations. The use of the adaptive descent method effectively extends the viscoplastic flow formulations into the nearly rate-insensitive range of behaviours exhibited, for example, by metals at low temperature, where slow convergence of the non-linear solution algorithm has traditionally hampered their use.
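The adaptive descent idea, trying the fast Newton step and falling back to a robust successive-approximation step when the residual does not decrease, can be sketched in general terms. This is a toy illustration of the hybrid strategy, not the paper's finite-element solver:

```python
import numpy as np

def adaptive_descent(f, jac, x0, tol=1e-10, max_iter=100):
    """Toy adaptive descent for F(x) = 0: attempt the Newton step; if it
    fails to reduce the residual norm, take a damped descent step instead
    (playing the role of the robust successive-approximation fallback)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x
        newton_step = np.linalg.solve(jac(x), -r)
        x_trial = x + newton_step
        if np.linalg.norm(f(x_trial)) < np.linalg.norm(r):
            x = x_trial                     # Newton step accepted
        else:
            x = x - 0.1 * jac(x).T @ r      # robust fallback step
    return x

# Scalar example written as a 1-vector system: solve x**3 - 2 = 0.
root = adaptive_descent(lambda x: x**3 - 2,
                        lambda x: np.array([[3.0 * x[0]**2]]),
                        np.array([1.0]))
print(root)  # ≈ [1.2599], the cube root of 2
```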
Al-SBA-15 with Si/Al ratios varying in the range 11.4–78.4 was synthesized using the tri-block copolymer P123. The calcined materials were characterized by XRD, pore size distribution and surface area measurements, and 27Al NMR spectroscopy. The acidity and acid strength distribution were studied using microcalorimetric adsorption of NH3. The acidic properties were also probed by the cumene cracking reaction as a function of Si/Al ratio. A systematic variation of acidity and activity with Si/Al ratio was observed. The initial heats of NH3 adsorption correlated well with activity, indicating that acid sites with ΔH > 100 kJ/mol are responsible for the cumene cracking activity. Linear correlations were obtained between total acidity and cumene cracking activity. Tetrahedral aluminum was found to be responsible for the observed acidity and catalytic activity.
Cost-effectiveness ratios usually appear as point estimates without confidence intervals, since the numerator and denominator are both stochastic and one cannot estimate the variance of the estimator exactly. The recent literature, however, stresses the importance of presenting confidence intervals for cost-effectiveness ratios in the analysis of health care programmes. This paper compares several methods for obtaining confidence intervals for the cost-effectiveness of a randomized intervention to increase the use of Medicaid's Early and Periodic Screening, Diagnosis and Treatment (EPSDT) programme. Comparisons of the intervals show that methods that account for skewness in the distribution of the ratio estimator may be substantially preferable in practice to methods that assume the cost-effectiveness ratio estimator is normally distributed. We show that non-parametric bootstrap methods that are mathematically less complex but computationally more intensive yield confidence intervals similar to those from a parametric method that adjusts for skewness in the distribution of the ratio. The analyses also show that the modest sample sizes needed to detect statistically significant effects in a randomized trial may result in confidence intervals for estimates of cost-effectiveness that are much wider than the boundaries obtained from deterministic sensitivity analyses.
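A non-parametric percentile bootstrap for a ratio of means can be sketched as follows. The per-subject cost and effect data are simulated placeholders, not the EPSDT trial data, and the percentile method is only one of several interval constructions the paper compares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject incremental costs and effects (simulated,
# not from the EPSDT trial).
costs = rng.normal(100.0, 30.0, size=200)
effects = rng.normal(0.05, 0.02, size=200)

def bootstrap_ratio_ci(costs, effects, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile bootstrap CI for the cost-effectiveness ratio
    mean(costs) / mean(effects): resample subjects with replacement,
    recompute the ratio, and take the empirical quantiles."""
    n = len(costs)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        ratios[b] = costs[idx].mean() / effects[idx].mean()
    lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_ratio_ci(costs, effects)
print(f"95% CI for cost per unit of effect: ({lo:.1f}, {hi:.1f})")
```

The skewness the paper emphasizes shows up here directly: the bootstrap distribution of the ratio is asymmetric, so the percentile interval need not be centred on the point estimate.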
A generalized mapping strategy that uses a combination of graph theory, mathematical programming, and heuristics is proposed. The authors use knowledge of the given algorithm and the architecture to guide the mapping. The approach begins with a graphical representation of the parallel algorithm (problem graph) and the parallel computer (host graph). Using these representations, the authors generate a new graphical representation (extended host graph) onto which the problem graph is mapped. An accurate characterization of the communication overhead is used in the objective functions to evaluate the optimality of the mapping. An efficient mapping scheme is developed which uses two levels of optimization procedures. The objective functions include minimizing the communication overhead and minimizing the total execution time, which comprises both computation and communication times. The mapping scheme is tested by simulation and further confirmed by mapping a real-world application onto actual distributed environments.
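A common way to characterize the communication overhead of a mapping, sketched here as an illustration rather than the authors' exact formulation, is to weight each problem-graph edge's traffic by the hop distance between the host processors its endpoints are assigned to:

```python
from collections import deque

def hop_distance(host_adj, src, dst):
    """BFS hop count between two processors in the host graph."""
    seen = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return seen[u]
        for v in host_adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return float("inf")

def comm_overhead(problem_edges, mapping, host_adj):
    """Communication overhead of a mapping: sum over problem-graph edges
    of (traffic weight) * (hop distance between the mapped processors)."""
    return sum(w * hop_distance(host_adj, mapping[a], mapping[b])
               for a, b, w in problem_edges)

# Toy instance: 4 tasks mapped onto a 4-node ring of processors.
host = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
edges = [("t0", "t1", 5), ("t1", "t2", 3), ("t2", "t3", 2)]
print(comm_overhead(edges, {"t0": 0, "t1": 1, "t2": 2, "t3": 3}, host))  # → 10
```

An optimizer would search over `mapping` assignments to minimize this objective, possibly combined with a computation-time term as in the total-execution-time objective described above.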
26 clinician trainees' recollections of experiences in a diagnostic preschool program were analyzed in terms of the strengths and weaknesses of the program.
Multimedia Tools and Applications - Automated bank cheque verification using image processing is an attempt to complement the present cheque truncation system, as well as to provide an alternate... 相似文献
An image captured by a low dynamic range (LDR) camera fails to capture the entire exposure range of a scene and instead covers only a limited range of exposures. To cover the full exposure range in a single image, bracketed-exposure LDR images are combined. The differing exposures across images result in information loss in certain regions. These regions need to be addressed, and with this motivation a novel layer-based fusion methodology is proposed to generate a high dynamic range image. High- and low-frequency layers are formed by dividing each image based on pixel intensity variations. Regions are identified from the information-loss sections created in the differently exposed images. The high-frequency layers are combined using region-based fusion with Dense SIFT as the activity-level measure. The low-frequency layers are combined using a weighted sum. Finally, the combined high- and low-frequency layers are merged pixel by pixel to synthesize the fused image. Objective analysis is performed to compare the quality of the proposed method with state-of-the-art methods, and the measures indicate the superiority of the proposed method.
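The layer-splitting and merging pipeline can be sketched minimally. This is a simplified stand-in, not the paper's method: the frequency split uses a box blur, the low-frequency weights are equal, and a max-absolute-detail rule replaces the Dense SIFT activity measure:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur used to split an exposure into a low-frequency
    base layer (the blur) and a high-frequency detail layer (the residual)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def fuse_exposures(images):
    """Toy layer-based fusion: average the low-frequency layers with equal
    weights; for the high-frequency layers, the pixel with the largest
    absolute detail (a crude proxy for a Dense SIFT activity measure) wins.
    The fused layers are then recombined pixel by pixel."""
    lows = [box_blur(im) for im in images]
    highs = [im - lo for im, lo in zip(images, lows)]
    low_fused = np.mean(lows, axis=0)
    stack = np.stack(highs)
    winner = np.abs(stack).argmax(axis=0)
    high_fused = np.take_along_axis(stack, winner[None], axis=0)[0]
    return low_fused + high_fused

# Sanity check: two flat "exposures" fuse to their average brightness.
imgs = [np.full((8, 8), 0.2), np.full((8, 8), 0.8)]
fused = fuse_exposures(imgs)
print(fused.mean())
```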
We study the entanglement dynamics of a qubit–qutrit pair under the Dzyaloshinskii–Moriya (DM) interaction. The qubit–qutrit pair acts as a closed system, and one external qubit serves as the environment for the pair. The external qubit interacts with the qubit of the closed system via the DM interaction. This interaction repeatedly kills the entanglement between the qubit and the qutrit, which is also periodically recovered. The two-parameter class of qubit–qutrit states is likewise affected by the DM interaction, while the one-parameter class of states remains unaffected. The frequency of occurrence of entanglement sudden death and entanglement sudden birth in the two-parameter class of states is half that of the qubit–qutrit pure state. We use negativity as the entanglement measure.
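The negativity used above is a standard computable measure for a 2×3 system, N(ρ) = (‖ρ^{T_A}‖₁ − 1)/2, where T_A is the partial transpose over the qubit. A minimal NumPy sketch (the example state is illustrative, not one of the paper's parameter classes):

```python
import numpy as np

def negativity(rho, dims=(2, 3)):
    """Negativity N(ρ) = (||ρ^{T_A}||_1 - 1) / 2 for a dA x dB bipartite
    density matrix, with the partial transpose taken over subsystem A."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    rho_ta = r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)  # swap the A indices
    eigs = np.linalg.eigvalsh(rho_ta)
    return (np.abs(eigs).sum() - 1) / 2

# Maximally entangled two-level overlap state (|0,0> + |1,1>)/sqrt(2)
# embedded in the 2x3 space; basis index = a*3 + b.
psi = np.zeros(6)
psi[0] = psi[4] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
print(negativity(rho))  # → 0.5 for this Bell-like state
```

Entanglement sudden death corresponds to the negativity hitting exactly zero at finite time, and sudden birth to its later revival, which is why a computable measure like this is convenient for tracking the dynamics.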