Ba(Zn1/3Nb2/3)O3 nanoparticles were synthesized via a polymerised complex method using barium nitrate, zinc acetate, niobium oxide, hydrofluoric acid, and citric acid as precursors. The thermal decomposition characteristics and crystallization behavior of the powders were investigated by thermogravimetric and differential thermal analysis, X-ray diffraction, and Fourier transform infrared spectroscopy. The Ba(Zn1/3Nb2/3)O3 phase began to form at a temperature as low as 400 °C, and a single-phase Ba(Zn1/3Nb2/3)O3 perovskite structure was obtained at 1000 °C. Microstructural investigation revealed that most Ba(Zn1/3Nb2/3)O3 particles were 80–110 nm in size, with spherical morphology and a homogeneous size distribution, although the powders also contained some agglomeration.
Isotope labeling has revolutionized NMR studies of small nucleic acids, but to extend this technology to larger RNAs, site‐specific labeling tools to expedite NMR structural and dynamics studies are required. Using enzymes from the pentose phosphate pathway, we coupled chemically synthesized uracil nucleobase with specifically 13C‐labeled ribose to synthesize both UTP and CTP in nearly quantitative yields. This chemoenzymatic method affords a cost‐effective preparation of labels that are unattainable by current methods. The methodology generates versatile 13C and 15N labeling patterns which, when employed with relaxation‐optimized NMR spectroscopy, effectively mitigate problems of rapid relaxation that result in low resolution and sensitivity. The methodology is demonstrated with RNAs of various sizes, complexity, and function: the exon splicing silencer 3 (27 nt), iron responsive element (29 nt), Pro‐tRNA (76 nt), and HIV‐1 core encapsidation signal (155 nt).
A series of numerical aircraft crash simulations and thermal behavior analyses was carried out at Purdue University to study the response of World Trade Center Tower 1 (WTC‐1) on September 11, 2001. The process included accuracy verification of the computational tools against available experimental data. Numerical models of the Boeing 767-200ER aircraft and of the structural system of the top 20 stories of WTC‐1 were developed for the simulations. A second aircraft model, simpler yet comparable in effect, was developed and used for a parametric sensitivity analysis. Results from these simulations, together with those published by other researchers, indicate that while the observed impact damage to the tower's exterior framing can be estimated accurately, the unseen impact damage to the tower's core structure could not be estimated with high confidence. Although the computational tools helped develop an understanding of what might have happened as the aircraft penetrated and disintegrated within the structure, they could not reduce the uncertainty in the core damage estimate. However, drawing on insight from the behavior of the Pentagon building under the impact loads it received the same day, and studying the effects of elevated temperature on the mechanical properties of steel in light of experimental data, the uncertainty in the core structural damage estimate was found to be of negligible importance with regard to the ultimate fate of the tower. It is demonstrated that, through numerical simulation and engineering reasoning, a dominant factor in the collapse of the tower can be proposed with confidence: the loss of fire‐proofing in the tower core during aircraft impact left the core vulnerable to the ensuing thermal loads and resulted in the eventual collapse of the tower.
In this application paper, we describe the efforts of a multidisciplinary team to produce a visualization of the September 11 attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, it had to depict the impact with high fidelity, closely following the laws of physics. Second, it had to be accessible to nonexpert users. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into an animation-system 3D scene. We built upon a previously developed translator, which was substantially extended to enable and control the visualization of fire and of disintegrating elements, to scale better with the number of nodes and states, to handle beam elements with complex profiles, and to handle smoothed-particle hydrodynamics liquid representations. The resulting translator is a powerful, automatic, and scalable tool for high-quality visualization of FEA results.
The use of unmanned aerial vehicles (UAVs) in the military, scientific, and civilian sectors has increased dramatically in recent years. This study presents algorithms for the visual-servo control of a UAV, in which a quadrotor helicopter is stabilized with visual information in the control loop. Unlike previous studies that use a pose-estimation approach, which is time-consuming and subject to various errors, visual-servo control is faster and more reliable. The method requires an on-board camera, which is already available on many UAV systems; the UAV with a camera behaves like an eye-in-hand visual servoing system. The controller was designed using two different approaches: image-based visual servo control and hybrid visual servo control. Simulations were developed in MATLAB in which the quadrotor aerial vehicle is visual-servo controlled. To show the effectiveness of the algorithms, experiments were also performed on a model quadrotor UAV, demonstrating successful performance.
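As a rough illustration of the image-based approach mentioned in this abstract, the classical IBVS control law commands a camera velocity v = -λ L⁺ (s - s*), where L is the interaction matrix of the tracked point features. The sketch below assumes normalized image-point features with known depths; the gain and feature model are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized point feature
    at image coordinates (x, y) with depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law: v = -gain * pinv(L) @ (s - s*).

    features, desired: lists of (x, y) normalized image points.
    depths: estimated depth Z of each feature point.
    Returns a 6-vector (vx, vy, vz, wx, wy, wz) of camera velocity.
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

At the desired feature configuration the error vanishes and the commanded velocity is zero, which is the fixed point the servo loop drives toward.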
Head-operated computer accessibility tools (CATs) are useful for people with full head control, but for people with reduced head control, computer access becomes very challenging, since these users depend on a single head gesture, such as a nod or a tilt, to interact with a computer. Any new interaction technique based on a single head gesture can therefore play an important role in building better CATs that enhance users' self-sufficiency and quality of life. We thus propose two novel interaction techniques, HeadCam and HeadGyro. In a nutshell, both techniques follow our software-switch approach and can serve as traditional switches: they recognize head movements via a standard camera or a smartphone's gyroscope sensor and translate them into virtual switch presses. A usability study with 36 participants (18 motor-impaired, 18 able-bodied) was conducted to collect both objective and subjective evaluation data. While the HeadGyro software switch exhibited slightly higher performance than HeadCam on each objective evaluation metric, HeadCam was rated better in the subjective evaluation. All participants agreed that the proposed interaction techniques are promising solutions for the computer access task.
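To illustrate how a single head gesture can be mapped to a virtual switch press (the gyroscope variant), a software switch can threshold the pitch angular rate and debounce with a refractory period. The threshold and refractory values below are illustrative assumptions, not the paper's parameters:

```python
class GyroSwitch:
    """Turn head-nod gyroscope readings into discrete virtual switch presses.

    A press fires when the absolute pitch angular rate crosses `threshold`;
    the next `refractory` samples are ignored, so one sustained nod
    produces a single press rather than a burst.
    """

    def __init__(self, threshold=1.0, refractory=3):
        self.threshold = threshold
        self.refractory = refractory
        self._cooldown = 0  # samples remaining in the debounce window

    def update(self, pitch_rate):
        """Feed one gyro sample; return True if a switch press is emitted."""
        if self._cooldown > 0:
            self._cooldown -= 1
            return False
        if abs(pitch_rate) >= self.threshold:
            self._cooldown = self.refractory
            return True
        return False
```

Feeding the stream `[0, 0, 1.5, 1.4, 0, 0, 0, 2.0, 0]` yields exactly two presses (at the two nods), since the second above-threshold sample of the first nod falls inside the refractory window.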
In this study, a unified scheme using divergence analysis and genetic search is proposed to determine the significant components of feature vectors in high-dimensional spaces, without having to deal with singular-matrix problems. The literature points to three main problems in feature selection performed in a high-dimensional space: high computational load, local minima, and singular matrices. Here, feature selection is realized by increasing the dimension one by one, rather than by reducing it. To this end, recursive covariance matrices are formulated to decrease the computational load, and genetic algorithms are used to avoid local optima and singular-matrix problems in high-dimensional feature spaces. Candidate strings in the genetic pool represent the new features formed by increasing the dimension, and the genetic algorithm searches for the combination of features with the highest divergence value. Two methods are proposed for feature selection. In the first, features in a high-dimensional space are determined by using divergence analysis and genetic search (DAGS) together. If the dimension is not high, a second method is offered that uses only recursive divergence analysis (RDA), without any genetic search. In Section 3, two experiments are presented: feature determination in a two-dimensional phantom feature space, and feature determination for ECG beat classification on real data.
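A minimal sketch of the dimension-growing idea (a greedy variant in the spirit of RDA, without the genetic search and without the recursive covariance update): model each class as a Gaussian, score a candidate feature set by the symmetric Kullback–Leibler divergence between the two class densities, and add one feature at a time. The divergence form and the regularization constant are illustrative assumptions:

```python
import numpy as np

def gaussian_divergence(X0, X1):
    """Symmetric KL divergence between two classes modeled as Gaussians.

    X0, X1: samples of each class, shape (n_samples, n_features).
    A small ridge keeps the covariance matrices invertible.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    k = X0.shape[1]
    C0 = np.cov(X0, rowvar=False) + 1e-6 * np.eye(k)
    C1 = np.cov(X1, rowvar=False) + 1e-6 * np.eye(k)
    iC0, iC1 = np.linalg.inv(C0), np.linalg.inv(C1)
    d = m0 - m1
    return (0.5 * np.trace((C0 - C1) @ (iC1 - iC0))
            + 0.5 * d @ (iC0 + iC1) @ d)

def greedy_select(X0, X1, k):
    """Grow the feature set one dimension at a time, keeping at each step
    the candidate feature that maximizes the class divergence."""
    selected, remaining = [], list(range(X0.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: gaussian_divergence(
            X0[:, selected + [f]], X1[:, selected + [f]]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

On synthetic data where only the first feature separates the classes, the greedy search picks that feature first; the paper's genetic search addresses the local optima such greedy growth can fall into.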
Evolving heterogeneous networks, which contain different types of nodes and links that change over time, appear in many domains, including protein–protein interactions, scientific collaborations, and telecommunications. In this paper, we aim to discover temporal information from a heterogeneous evolving network in order to improve node classification. We propose a framework, the Genetic Algorithm enhanced Time Varying Relational Classifier for evolving Heterogeneous Networks (GA-TVRC-Het), to extract the effects of different relationship types in different past time periods. These effects are discovered adaptively using genetic algorithms, and a relational classifier is extended to work with different types of nodes. The proposed framework is tested on two real-world data sets, and using the optimal time effect is shown to improve classification performance substantially. We observe that the optimal time effect does not necessarily follow a particular functional trend, such as linear or exponential decay in time, and that it may differ for each type of interaction. Both observations explain why GA-TVRC-Het outperforms methods that rely on a predefined form of time effect, or on the same time effect for each link type.
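As a rough sketch of the kind of adaptive search described, the fragment below evolves a vector of per-period time-effect weights against an arbitrary fitness function (in the framework this would be the relational classifier's performance on validation data). The GA operators and parameters here are illustrative assumptions, not the paper's configuration:

```python
import random

def evolve_weights(fitness, n_periods, pop=20, gens=30, seed=0):
    """Search per-period time-effect weights in [0, 1] with a simple GA:
    elitist truncation selection, uniform crossover, Gaussian mutation.

    fitness: callable mapping a weight vector to a score (higher is better).
    """
    rng = random.Random(seed)
    pop_w = [[rng.random() for _ in range(n_periods)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(pop_w, key=fitness, reverse=True)
        parents = scored[:pop // 2]          # elitism: keep the top half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            # uniform crossover, then mutate one gene and clip to [0, 1]
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(n_periods)
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop_w = parents + children
    return max(pop_w, key=fitness)
```

Because no functional form is imposed on the weights, the search can recover non-monotone time effects, which is the point the abstract makes against predefined linear or exponential decay.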