We studied the catalytic effects of titanium, iron, and the FeTi intermetallic on the desorption kinetics of magnesium hydride. To separate the catalytic effect of each element from additional synergistic and alloying effects, Mg-Ti and Mg-Fe mixtures were studied as a baseline for Mg-Fe-Ti elemental and Mg-(FeTi) intermetallic composites. Sub-micron MgH2 particle dimensions and excellent nanoscale catalyst dispersion were achieved by high-energy ball-milling, as confirmed by analytical electron microscopy. The composites containing Fe show a desorption temperature 170 K lower than that of as-received MgH2 powder, which makes them suitable for cycling at a relatively low temperature of 523 K. Furthermore, the low cycling temperature prevents the formation of Mg2FeH6. In sorption cycling tests, Mg-10% Ti and Mg-10% (FeTi) show fast desorption kinetics after about 5 activation cycles, but their kinetics also degrade faster than those of all other composites, eventually slowing down by factors of 7 and 4, respectively. The ternary Mg-Fe-Ti composite shows the best performance. With the highest BET surface area of 40 m2/g, it also shows much less degradation during cycling. This is attributed to titanium hydride acting as a size-control agent that prevents agglomeration of particles, while Fe works as a very strong catalyst with uniform, nanoscale dispersion on the surface of the MgH2 particles.
This paper presents a new algorithm for allocating energy and determining both the optimum amount of network active power reserve capacity and the shares of generating units and demand-side contribution in providing the reserve capacity requirements for the day-ahead market. In the proposed method, the optimum amount of reserve requirement is determined based on the level of network security set by the operator. In this regard, Expected Load Not Supplied (ELNS) is used to evaluate system security in each hour. The proposed method has been implemented on the IEEE 24-bus test system, and the results are compared with a deterministic security approach that considers a fixed amount of reserve capacity in each hour. The comparison is made from both economic and technical points of view. The promising results show the effectiveness of the proposed model, which is formulated as a mixed-integer linear program (MILP) and solved with the GAMS software.
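To make the ELNS criterion concrete, the sketch below evaluates the expected load not supplied for a single hour by enumerating only the all-units-available state and each single-unit outage. This is a simplification of a full contingency model, not the paper's MILP formulation; the `elns_hour` helper and the unit data in the assertions are hypothetical.

```python
def elns_hour(units, demand):
    """Expected Load Not Supplied (MW) for one hour.

    units  -- list of (capacity_MW, forced_outage_rate) tuples (illustrative)
    demand -- hourly demand in MW
    Only the all-available state and single-unit outages are enumerated.
    """
    total = sum(c for c, _ in units)

    # Probability that every unit is available, times its shortfall (if any).
    p_all = 1.0
    for _, q in units:
        p_all *= 1.0 - q
    expected = p_all * max(0.0, demand - total)

    # Each single-unit outage: that unit out, all others in service.
    for i, (cap_i, q_i) in enumerate(units):
        p = q_i
        for j, (_, q_j) in enumerate(units):
            if j != i:
                p *= 1.0 - q_j
        expected += p * max(0.0, demand - (total - cap_i))
    return expected
```

A 100 MW unit with a 10% forced outage rate serving a 50 MW load, for example, gives an ELNS of 0.1 × 50 = 5 MW.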
The assessment of climate change and its impacts on hydropower generation is a complex issue. This paper evaluates the application of representative concentration pathways (RCPs 2.6, 4.5, and 8.5) with the change factor (CF) method and the statistical downscaling model (SDSM) to generate six climatic scenarios of monthly temperature and rainfall over the period 2020–2049 in the Karkheh basin, Iran. The Identification of unit Hydrographs And Component flows from Rainfall, Evaporation and Streamflow data (IHACRES) model was employed to simulate runoff for the purpose of designing a run-of-river hydropower plant in the Karkheh basin. The non-dominated sorting genetic algorithm II (NSGA-II) was employed to simultaneously maximize yearly energy generation and the plant factor. Results indicate that the runoff scenarios associated with the SDSM lead to higher run-of-river hydropower generation in 2020–2049 than the CF results.
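The core of NSGA-II is non-dominated sorting of candidate designs. As a minimal sketch for the two maximization objectives named above (yearly energy and plant factor), the function below extracts the Pareto front from a set of candidate points; the point values used in the assertions are purely illustrative, not results from the paper.

```python
def pareto_front(points):
    """Return the non-dominated points, both objectives maximized.

    points -- list of (yearly_energy, plant_factor) tuples (illustrative units)
    A point p is dominated if some other point q is at least as good in
    every objective and strictly better in at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and
            any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front
```

NSGA-II repeats this sorting on successive generations (with crowding-distance tie-breaking) to converge toward the energy/plant-factor trade-off curve.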
The emergence of the robustness concept has pushed decision-makers toward designing alternatives that are resistant to the potential fluctuations fueled by an uncertain surrounding environment. This study proposes an objective-based multi-attribute decision-making framework that takes into account the uncertainties associated with the impacts of climate change on water resources systems. To capture these uncertainties, a Monte Carlo approach is used to generate a series of ensembles, which represent the stochastic behavior of the hydro-climatic variables under climate change and thereby the inherent uncertainties of hydro-climatic simulations. Next, a coupled TOPSIS/entropy multi-attribute decision-making framework is formed to prioritize the feasible alternatives using system performance measures. The main objective of this framework is to minimize the risk of deceptive and subjective assessments during the decision-making process. The Karkheh River basin was selected as a case study to demonstrate the application of the framework. Using a set of system performance attributes, the performance of two hydropower systems was estimated for the baseline period and under future climate change conditions. According to the frequency analysis, the alternative in which both hydropower projects go under construction emerged as the robust solution (i.e., there was a 99.9% chance that it outperforms the other solutions). The results indicate that the construction of these hydropower systems increases the robustness of the Karkheh River basin in the future.
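The coupled TOPSIS/entropy ranking described above can be sketched in a few lines: entropy weighting derives objective criterion weights from the dispersion of the attribute scores, and TOPSIS ranks alternatives by closeness to the ideal solution. This is a generic textbook sketch under the assumption of positive scores, not the paper's implementation; the decision matrix in the assertions is hypothetical.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based criterion weights; X is an m x n matrix of positive scores."""
    P = X / X.sum(axis=0)                                  # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))      # normalized entropy per criterion
    d = 1.0 - E                                            # degree of divergence
    return d / d.sum()

def topsis(X, w, benefit):
    """Closeness of each alternative to the ideal solution (higher is better)."""
    R = X / np.sqrt((X ** 2).sum(axis=0))                  # vector normalization
    V = R * w                                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))        # distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates the others in every benefit criterion receives a closeness of exactly 1, and one dominated in every criterion receives 0.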
A continuum approach is presented for predicting the constitutive response of HCP polycrystals using a simple non-hardening constitutive model incorporating both slip and twinning. This is achieved by considering a physically based methodology for restricting the amount of twinning activity. A continuum approach to modeling the texture evolution eliminates the need to increase the number of discrete crystal orientations to account for the new orientations created by twinning during deformation. The polycrystal is represented by an orientation distribution function using the Rodrigues parameterization. A total Lagrangian framework is used to model the evolution of the microstructure. Numerical examples show the application of the methodology to modeling deformation processes.
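The Rodrigues parameterization mentioned above maps a rotation with unit axis n and angle θ to the vector r = n tan(θ/2), so small rotations lie near the origin of Rodrigues space. A minimal sketch of the conversion (the helper name is ours, not from the paper):

```python
import numpy as np

def rodrigues_vector(axis, angle_rad):
    """Map an axis-angle rotation to its Rodrigues vector r = n * tan(theta / 2)."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)        # normalize the rotation axis
    return n * np.tan(angle_rad / 2.0)
```

A 90° rotation about the z-axis, for instance, maps to the vector (0, 0, 1), since tan(45°) = 1.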
In this paper, we present a spatial perturbation method to control the optical patterns in semiconductor microresonators in the far-field configuration. We propose a fast all-optical switch that operates at a low light level: the switching beam controls the behavior of output beams with strong intensities. The method has been applied successfully to different optical patterns such as rolls, squares, and hexagons.
Judging by the increasing impact of machine learning on large-scale data analysis in the last decade, one can anticipate a substantial growth in the diversity of machine learning applications for "big data" over the next decade. This exciting new opportunity, however, also raises many challenges. One of them is scaling inference within and training of graphical models. Typical ways to address this scaling issue are inference by approximate message passing, stochastic gradients, and MapReduce, among others. Often, we encounter inference and training problems with symmetries and redundancies in the graph structure. A prominent example is relational models that capture complexity. Exploiting these symmetries, however, has not yet been considered for scaling. In this paper, we show that inference and training can indeed benefit from exploiting symmetries. Specifically, we show that (loopy) belief propagation (BP) can be lifted. That is, a model is compressed by grouping together nodes that send and receive identical messages, so that a modified BP running on the lifted graph yields the same marginals as BP on the original one, but often in a fraction of the time. By establishing a link between lifting and radix sort, we show that lifting is MapReduce-able. Still, in many if not most situations, training relational models will not benefit from this (scalable) lifting: symmetries within models easily break, since variables become correlated by virtue of depending asymmetrically on evidence. An appealing idea for such situations is to train and recombine local models. This breaks long-range dependencies and allows one to exploit lifting within and across the local training tasks. Moreover, it naturally paves the way for the first scalable lifted training approaches based on stochastic gradients, in both an online and a MapReduce fashion.
On several datasets, for instance, the online training converges to a solution of the same quality over an order of magnitude faster, simply because it starts optimizing long before having seen the entire mega-example even once.
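The node-grouping step behind lifting can be sketched with color passing (color refinement): nodes are repeatedly relabeled by their own color plus the multiset of their neighbors' colors, and nodes that stabilize to the same color are exactly those that would send and receive identical BP messages. The sketch below is a minimal illustration of that grouping step on a plain graph, not the paper's lifted-BP implementation; `adj` and the graphs in the assertions are hypothetical.

```python
from collections import Counter

def color_refinement(adj, max_iters=100):
    """Group nodes by iterated neighborhood signatures (color passing).

    adj maps each node to a list of its neighbors. Nodes that end up with
    the same color are structurally indistinguishable and can share one
    node in the lifted graph.
    """
    colors = {v: 0 for v in adj}                 # start with a uniform coloring
    for _ in range(max_iters):
        # Signature: own color plus the multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
            for v in adj
        }
        # Canonicalize signatures to small integer colors.
        relabel = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new = {v: relabel[signatures[v]] for v in adj}
        if new == colors:                        # fixed point: partition is stable
            break
        colors = new
    return colors
```

On a three-node path a-b-c, the two endpoints receive the same color and the middle node a different one, so the lifted graph has two nodes instead of three.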