Monte Carlo simulations were used to investigate the compatibilizing effects of diblock copolymers in A/B/A-B diblock copolymer ternary blends and of triblock copolymers in A/B/triblock copolymer ternary blends. Homopolymer A, at a volume fraction of 19%, formed the dispersed phase. The simulation results show that diblock copolymers with longer A-blocks are more efficient compatibilizers, and that symmetric triblock copolymers with a shorter middle block readily bridge one another through association of their end blocks. Such triblock copolymers are relatively effective at retarding phase separation as compatibilizers.
Neural networks (NNs) are extensively used in the modelling, optimization, and control of nonlinear plants. NN-based inverse point prediction models are commonly used for nonlinear process control. However, prediction errors (root mean square error (RMSE), mean absolute percentage error (MAPE), etc.) increase significantly in the presence of disturbances and uncertainties. In contrast to a point forecast, a prediction interval (PI)-based forecast carries extra information, such as the prediction accuracy. For a given confidence level, the PI provides tight upper and lower bounds that account for uncertainties due to model mismatch and time-dependent or time-independent noise. Using PIs as additional inputs to an NN controller (NNC) can improve controller performance. In the present work, PIs are utilized in control applications; in particular, they are integrated into the NN internal model-based control framework. A PI-based model developed using the lower upper bound estimation (LUBE) method serves as an online estimator of PIs for the proposed PI-based controller (PIC). The PIs, along with the other inputs of a traditional NN, are used to train the PIC to predict the control signal. The proposed controller is tested on two case studies: a chemical reactor, specifically a continuous stirred tank reactor (case 1), and a numerical nonlinear plant model (case 2). Simulation results reveal that the tracking performance of the proposed controller is superior to the traditional NNC in terms of setpoint tracking and disturbance rejection. More precisely, for setpoint tracking with step changes, the proposed PIC achieves 36% and 15% improvements over the NNC in terms of the integral of absolute error (IAE) for case 1 and case 2, respectively.
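The quality of LUBE-estimated intervals is conventionally judged by coverage and width metrics. A minimal sketch of two standard PI metrics, PI coverage probability (PICP) and mean PI width (MPIW), is given below; the data values are purely illustrative, not results from the paper:

```python
import numpy as np

def picp(y, lo, hi):
    """PI coverage probability: fraction of targets falling inside [lo, hi]."""
    y, lo, hi = map(np.asarray, (y, lo, hi))
    return float(np.mean((y >= lo) & (y <= hi)))

def mpiw(lo, hi):
    """Mean prediction-interval width."""
    return float(np.mean(np.asarray(hi) - np.asarray(lo)))

# Hypothetical targets and PI bounds (illustrative numbers only).
y  = np.array([1.0, 2.0, 3.0, 4.0])
lo = np.array([0.5, 1.8, 2.5, 4.2])
hi = np.array([1.5, 2.5, 3.5, 4.8])

# y[3] = 4.0 lies below its lower bound, so coverage is 3/4 = 0.75.
coverage = picp(y, lo, hi)
width = mpiw(lo, hi)
```

LUBE-style training trades these two quantities off: the interval should be as narrow as possible while still covering the target at the nominal confidence level.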
Modern ergonomic chairs typically have several dimensions that can be adjusted independently of one another. Finding a desirable setting for any one dimension can depend on how the other dimensions are set, thereby confronting users with a significant control problem. One design strategy for dealing with this problem has been to link changes in seatpan and backrest angles in a fixed ratio, such that a one-degree change in seatpan angle is associated with a two- or three-degree change in backrest angle. However, there is no evidence to justify the choice of a particular ratio. This article presents data that address this issue. Subjects, performing either an entry or a verification task, could adjust the chair to any position. Backrest and seatpan angles were plotted over time and analyzed using both graphical and statistical methods. The resulting scatter plots do not support the industry-standard 1:2 or 1:3 ratio of changes in seatpan to backrest angle. The possibility of a variable linkage is discussed; however, problems associated with such a solution raise the possibility that control issues might be best addressed through training and exploration.
It is known that nonadditive quantum codes can have higher code dimensions than stabilizer codes of the same length and minimum distance. The class of codeword stabilized (CWS) codes provides tools to obtain new nonadditive quantum codes by reducing the problem to finding nonlinear classical codes. In this work, we establish some results on the kinds of non-Pauli operators that can be used as observables in the decoding scheme of CWS codes and propose a procedure for obtaining those observables.
In this study we implemented a comprehensive analysis to validate the MODIS and GOES satellite active fire detection products (MOD14 and WFABBA, respectively) and characterize their major sources of omission and commission errors which have important implications for a large community of fire data users. Our analyses were primarily based on the use of 30 m resolution ASTER and ETM+ imagery as our validation data. We found that at the 50% true positive detection probability mark, WFABBA requires four times more active fire area than is necessary for MOD14 to achieve the same probability of detection, despite the 16× factor separating the nominal spatial resolutions of the two products. Approximately 75% and 95% of all fires sampled were omitted by the MOD14 and WFABBA instantaneous products, respectively; whereas an omission error of 38% was obtained for WFABBA when considering the 30-minute interval of the GOES data. Commission errors for MOD14 and WFABBA were found to be similar and highly dependent on the vegetation conditions of the areas imaged, with the larger commission errors (approximately 35%) estimated over regions of active deforestation. Nonetheless, the vast majority (> 80%) of the commission errors were indeed associated with recent burning activity where scars could be visually confirmed in the higher resolution data. Differences in thermal dynamics of vegetated and non-vegetated areas were found to produce a reduction of approximately 50% in the commission errors estimated towards the hours of maximum fire activity (i.e., early-afternoon hours) which coincided with the MODIS/Aqua overpass. Lastly, we demonstrate the potential use of temporal metrics applied to the mid-infrared bands of MODIS and GOES data to reduce the commission errors found with the validation analyses.
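The omission and commission rates reported in such validation studies are derived from per-fire counts of detections against the higher-resolution reference imagery. A minimal sketch of these standard definitions follows; the function name and counts are hypothetical, not the paper's actual tallies:

```python
def detection_metrics(true_pos, false_neg, false_pos):
    """Omission error, commission error, and detection probability
    from validation counts against a reference fire map."""
    omission = false_neg / (true_pos + false_neg)      # real fires missed
    commission = false_pos / (true_pos + false_pos)    # detections with no fire
    detection_prob = true_pos / (true_pos + false_neg)
    return omission, commission, detection_prob

# Hypothetical counts from comparing satellite detections to 30 m reference fires.
om, com, p = detection_metrics(true_pos=250, false_neg=750, false_pos=135)
# om = 0.75 (75% of reference fires missed); p = 1 - om by construction.
```

Note that omission error and detection probability are complementary, which is why the abstract can quote either form interchangeably.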
Real-time and embedded systems have traditionally been designed for closed environments where operating conditions, input workloads, and resource availability are known a priori, and are subject to little or no change at runtime. There is increasing demand, however, for adaptive capabilities in distributed real-time and embedded (DRE) systems that execute in open environments where system operational conditions, input workload, and resource availability cannot be characterized accurately a priori. A challenging problem faced by researchers and developers of such systems is devising effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements of applications. To address key resource management challenges of open DRE systems, this paper presents the Hierarchical Distributed Resource-management Architecture (HiDRA), which provides adaptive resource management using control techniques that adapt to workload fluctuations and resource availability for both bandwidth and processor utilization simultaneously. This paper presents three contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of HiDRA. Second, we present an analytical model of HiDRA that formalizes its control-theoretic behavior and presents analytical assurance of system performance. Third, we evaluate the performance of HiDRA via experiments on a representative DRE system that performs real-time distributed target tracking. Our analytical and empirical results indicate that HiDRA yields predictable, stable, and efficient system performance, even in the face of changing workload and resource availability.
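The control-theoretic style of adaptation described above can be illustrated, in highly simplified form, by a proportional feedback loop that throttles an application's invocation rate toward a utilization setpoint. This is an illustrative sketch only, not HiDRA's actual algorithm; all names, gains, and limits are assumptions:

```python
def utilization_controller(u_measured, u_setpoint, rate, k_p=0.5,
                           rate_min=1.0, rate_max=100.0):
    """One step of a proportional feedback controller: adjust an
    application's invocation rate (Hz) to drive measured processor
    utilization toward the setpoint, clamped to admissible rates."""
    error = u_setpoint - u_measured          # negative when overloaded
    new_rate = rate * (1.0 + k_p * error)    # scale rate with the error
    return max(rate_min, min(rate_max, new_rate))

# Overloaded processor: 90% measured vs. 70% setpoint, so the rate is cut.
r = utilization_controller(u_measured=0.9, u_setpoint=0.7, rate=50.0)
```

Run periodically per node, and composed hierarchically across bandwidth and processor resources, loops of this general shape are what make the adaptation amenable to the stability analysis the paper describes.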