Novel hexagonal two-dimensional ZnO nanosheets were successfully and economically synthesized from zinc acetate and urea by a facile microwave hydrothermal method. The structure, morphology, and size of the ZnO nanosheets were investigated by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy-dispersive X-ray spectroscopy (EDS), transmission electron microscopy (TEM), selected area electron diffraction (SAED), and Fourier transform infrared spectroscopy (FTIR). XRD analysis showed that the obtained ZnO nanosheets are crystalline, corresponding to the pure ZnO phase with an average particle size of 12 nm. Optical properties of the ZnO nanosheets were investigated by UV-Vis absorption and photoluminescence (PL) techniques. The band gap energy of the ZnO nanosheets was found to be 3.29 eV. The PL measurements show strong UV, blue, and blue-green emission bands. The ZnO nanosheets possess high photocatalytic activity, leading to efficient degradation of methylene blue (MB). The ZnO nanosheets are expected to open new opportunities in broad research areas and for applications in catalysis and optoelectronic devices.
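The 12 nm average particle size quoted above is the kind of figure typically extracted from XRD peak broadening via the Scherrer equation, D = Kλ/(β cos θ). The abstract does not report the peak parameters, so the sketch below uses illustrative values (Cu Kα wavelength, a hypothetical (101) peak position and FWHM) purely to show the arithmetic:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Estimate crystallite size D (nm) from XRD peak broadening
    using the Scherrer equation D = K * lambda / (beta * cos(theta))."""
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers (not from the paper): Cu K-alpha radiation and a
# ZnO (101) reflection near 36.3 deg with ~0.7 deg broadening give a
# crystallite size on the order of 12 nm.
print(f"D = {scherrer_size(0.15406, 0.70, 36.3):.1f} nm")
```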
The present work deals with the synthesis of nanostructured Co–MgO mixed oxides with different weight ratios of cobalt by a facile co-precipitation method as catalysts for low-temperature CO oxidation. The prepared samples were characterized by X-ray diffraction (XRD), N2 adsorption/desorption (BET), Fourier transform infrared spectroscopy (FTIR), and transmission and scanning electron microscopy (TEM and SEM). The results revealed that inexpensive cobalt–magnesium mixed metal oxide nanoparticles have high potential as catalysts for low-temperature CO oxidation. The Co–MgO mixed oxide with 30 wt.% cobalt had the highest activity. The results also showed that catalysts pretreated under an O2-containing atmosphere possessed higher activity than the catalyst pretreated under an H2 atmosphere. The Co–MgO catalyst showed good repeatability under reaction conditions. A stability test showed that the Co–MgO mixed oxides remained highly stable for CO oxidation over 30 h on stream in a feed gas containing a high amount of moisture and CO2.
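Low-temperature CO oxidation activity of the kind compared here is conventionally reported as fractional CO conversion computed from inlet and outlet concentrations, X_CO = (C_in - C_out)/C_in. The abstract gives no raw data, so the gas-analyzer readings below are hypothetical and serve only to illustrate the calculation:

```python
def co_conversion(c_in_ppm, c_out_ppm):
    """Fractional CO conversion from inlet/outlet CO concentrations."""
    return (c_in_ppm - c_out_ppm) / c_in_ppm

# Hypothetical light-off data: outlet CO (ppm) at several temperatures (C)
inlet_ppm = 10000.0  # assumed 1 vol.% CO in the feed
outlet_ppm = {80: 9500.0, 120: 6200.0, 160: 900.0, 200: 50.0}

for temp_c, c_out in outlet_ppm.items():
    print(f"{temp_c:3d} C: X_CO = {co_conversion(inlet_ppm, c_out):.1%}")
```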
The Journal of Supercomputing - Power consumption is likely to remain a significant concern for exascale performance in the foreseeable future. In addition, graphics processing units (GPUs) have...
Compacts made from chemical-grade Fe2O3 were fired at 1473 K for 6 h. The fired compacts were isothermally reduced by either hydrogen or carbon monoxide at 1073–1373 K. The oxygen weight loss resulting from the reduction process was continuously recorded as a function of time using a thermogravimetric analysis (TGA) technique, whereas the volume change at different reduction conditions was measured by the displacement method. Porosity measurements, microscopic examination, and X-ray diffraction analysis were used to characterize the fired and reduced products. The rate of reduction at both the initial and final stages increased with temperature. The reduction mechanism was deduced from correlations between apparent activation energy values, the structure of partially reduced compacts, and the application of gas–solid reaction models to the reduction rate (dr/dt) at both the initial and final stages. At the early stages, the reduction was controlled by a combined effect of gaseous diffusion and interfacial chemical reaction, while at the final stages the interfacial chemical reaction was the rate-determining step. In H2 reduction, maximum swelling (80%) was obtained at 1373 K and was attributed to the formation of metallic iron plates. In CO reduction, catastrophic swelling (255%) was obtained at 1198 K due to the formation of metallic iron plates and whiskers.
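The apparent activation energies that underpin the mechanism analysis are usually obtained from an Arrhenius plot, where the slope of ln k versus 1/T equals -Ea/R. The rate constants below are placeholders, not the paper's data; the sketch only illustrates the fitting step:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def apparent_activation_energy(temps_K, rate_constants):
    """Least-squares slope of ln(k) vs 1/T; slope = -Ea/R (Arrhenius)."""
    x = [1.0 / T for T in temps_K]
    y = [math.log(k) for k in rate_constants]
    n = len(x)
    x_mean, y_mean = sum(x) / n, sum(y) / n
    slope = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
             / sum((xi - x_mean) ** 2 for xi in x))
    return -slope * R  # J/mol

# Hypothetical rate constants over the 1073-1373 K reduction range
temps = [1073, 1173, 1273, 1373]
ks = [2.1e-4, 6.5e-4, 1.7e-3, 3.9e-3]
print(f"Ea ~ {apparent_activation_energy(temps, ks) / 1000:.0f} kJ/mol")
```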
Extensions of latent state-trait models for continuous observed variables to mixture latent state-trait models, with and without covariates of change, are presented that can separate individuals who differ in their occasion-specific variability. An empirical application to the repeated measurement of mood states (N = 501) revealed that a model with 2 latent classes fits the data well. The larger class (76%) consists of individuals whose mood is highly variable, whose general well-being is comparatively lower, and whose mood variability is influenced by daily hassles and uplifts. The smaller class (24%) represents individuals who are rather stable and happier and whose mood is influenced only by daily uplifts, not by daily hassles. A simulation study on the model without covariates, varying sample size and number of measurement occasions over 5 settings each, revealed that the quality of the parameter estimates depends on both the number of observations and the number of occasions (the more, the better in each case). Another simulation study estimated the Type I and Type II error rates of the Lo-Mendell-Rubin test.
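Deciding between 1- and 2-class solutions in mixture models of this kind is typically guided by information criteria alongside likelihood-ratio tests such as Lo-Mendell-Rubin. The sketch below is only a loose analogy: a Gaussian mixture fitted to simulated per-person mood variability scores (not the authors' latent state-trait model), showing how BIC favors the 2-class solution when two variability classes are truly present:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated per-person mood SDs: a large variable class (~76% of N = 501)
# and a smaller stable class (~24%), loosely mirroring the empirical result.
variable = rng.normal(loc=1.5, scale=0.30, size=380)
stable = rng.normal(loc=0.5, scale=0.15, size=121)
sd_scores = np.concatenate([variable, stable]).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(sd_scores)
    print(f"{k} classes: BIC = {gm.bic(sd_scores):.1f}")
# Expect the 2-class solution to achieve the lowest (best) BIC here.
```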
We present explanation-based learning (EBL) methods aimed at improving the performance of diagnosis systems that integrate associational and model-based components. We consider multiple-fault model-based diagnosis (MBD) systems and describe two learning architectures. One, EBLIA, is a method for learning in advance. The other, EBL(p), is a method for learning while doing. EBLIA precompiles models into associations and relies only on the associations during diagnosis. EBL(p) performs compilation during diagnosis whenever reliance on previously learned associational rules results in unsatisfactory performance, as defined by a given performance threshold p. We present results of empirical studies comparing MBD without learning against EBLIA and EBL(p). The main conclusions are as follows. EBLIA is superior when it is feasible, but it is not feasible for large devices. EBL(p) can speed up MBD and scale up to larger devices in situations where perfect accuracy is not required.
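The abstract describes EBL(p) only at the architecture level; the sketch below is our own illustration of the control loop it implies, with all names (e.g. `model_based_diagnose`, `performance`) as hypothetical stand-ins. The idea: answer from learned associational rules while they perform above the threshold p, and otherwise fall back to model-based diagnosis and compile its result into a new rule (learning while doing):

```python
class EBLpDiagnoser:
    """Illustrative sketch of an EBL(p)-style control loop (names invented)."""

    def __init__(self, model_based_diagnose, performance, p):
        self.model_based_diagnose = model_based_diagnose  # slow, complete MBD
        self.performance = performance  # scores a rule-based answer in [0, 1]
        self.p = p                      # performance threshold
        self.rules = {}                 # learned symptoms -> diagnosis rules

    def diagnose(self, symptoms):
        key = frozenset(symptoms)
        if key in self.rules:
            candidate = self.rules[key]
            # Keep the cheap associational answer while it performs well enough.
            if self.performance(candidate, symptoms) >= self.p:
                return candidate
        # Otherwise run full model-based diagnosis and compile the result
        # into a new associational rule for future use.
        diagnosis = self.model_based_diagnose(symptoms)
        self.rules[key] = diagnosis
        return diagnosis
```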
Automated techniques for Arabic text recognition are at an early stage compared with their counterparts for Latin and Chinese scripts. A large volume of handwritten Arabic documents is held in libraries, data centers, museums, and offices. Digitizing these documents makes it possible (1) to preserve the country's history and transfer it electronically, (2) to save physical storage space, (3) to handle the documents properly, and (4) to enhance information retrieval through the Internet and other media. Arabic handwritten character recognition (AHCR) systems face several challenges, including the unlimited variation in human handwriting and the lack of large public databases. The current study addresses the segmentation and recognition phases. The text segmentation challenges and a set of solutions for each challenge are presented. A convolutional neural network (CNN), a deep learning approach, is used in the recognition phase; CNNs yield significant improvements over other machine learning classification algorithms and automate the extraction of features from images. Fourteen native CNN architectures are proposed after a set of trial-and-error experiments. They are trained and tested on the HMBD database, which contains 54,115 handwritten Arabic characters. Experiments on the native CNN architectures yield a best testing accuracy of 91.96%. A transfer learning (TL) and genetic algorithm (GA) approach named "HMB-AHCR-DLGA" is then proposed to optimize the training parameters and hyperparameters of the recognition phase, using the pre-trained CNN models VGG16, VGG19, and MobileNetV2. Five optimization experiments are performed, and the best combinations are reported. The highest testing accuracy achieved is 92.88%.
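None of the 14 architectures is specified in the abstract, so the Keras model below is a generic small CNN for character classification, with the input shape (32x32 grayscale), layer sizes, and class count all assumed for illustration:

```python
from tensorflow.keras import layers, models

num_classes = 28  # assumed; the HMBD class count may differ

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),  # assumed 32x32 grayscale input
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be model.fit(x_train, y_train, ...), with images
# normalized to [0, 1] and labeled by character class.
```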