A directional comparison digital protection scheme has been implemented with a 16-bit single-board computer at each end of a physical model of a transmission line, with communication between the two ends. The protection algorithm uses the fundamental-frequency components of the deviation signals of the voltage and the phase-shifted current. Software routines have been developed for fault monitoring, directional determination, and the trip/block decision. Graphics features incorporated in the software are explained. Tests for various faults conducted on the physical model of a double-circuit transmission line show that the direction to a fault is determined in 3 to 7 ms. The blocking features of the relay are also demonstrated.
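The fundamental-frequency components that the protection algorithm relies on are typically extracted with a digital filter such as a full-cycle DFT. The abstract does not specify the paper's exact filter, so the following is a generic phasor-estimation sketch; the 16-samples-per-cycle rate is an assumption for illustration only.

```python
import math

def fundamental_phasor(samples):
    """Estimate the fundamental-frequency phasor from one cycle of
    samples using a full-cycle DFT (a standard phasor-estimation
    technique; not necessarily the filter used in the paper)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    # Scale so a unit-amplitude sinusoid yields magnitude 1.
    return complex(2 * re / n, -2 * im / n)

# A unit-amplitude sinusoid sampled 16 times per cycle:
cycle = [math.sin(2 * math.pi * k / 16) for k in range(16)]
mag = abs(fundamental_phasor(cycle))  # close to 1.0
```

A deviation-signal relay would apply this to the difference between the measured and pre-fault waveforms of voltage and current, then compare the resulting phasors to decide fault direction.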
Silicon - This work focuses on the optical properties of single- and double-layer amorphous silicon nitride (a-SiNx:H) thin films of different stoichiometry relevant for photovoltaic applications...
Multimedia Tools and Applications - In this paper, we propose a novel interpolation and a new reversible data hiding scheme for upscaling the original image and hiding secret data into the...
Kinetic energy, angular distribution, and isobaric cross section data for A = 7-25 fragments formed in the p + 27Al reaction at a bombarding energy of 180 MeV are compared with the calculations of the Binary Cascade Model (BIC), the Cascade Exciton Model (CEM), JQMD/PHITS, as well as the Statistical Model with Final State Interaction (SMFSI). For completeness, the kinetic energy spectra of light particles (n, p, α) formed in the p + 27Al reaction at a bombarding energy of 156 MeV are also presented. A general agreement between the data and the predictions of these models is found. However, disagreement with the data for the yields of light-mass fragments as well as near-target fragments is also found and discussed. The importance of this comparative study to the simulation and analysis of radiation effects on microscopic electrical components operating in space is also discussed.
Three-dimensional shape recovery from one or multiple observations is a challenging problem in computer vision. In this paper, we present a new Focus Measure for the estimation of a depth map using image focus. This depth map can subsequently be used in techniques and algorithms leading to the recovery of a three-dimensional structure of the object, a requirement of a number of high-level vision applications. The proposed Focus Measure has shown robustness in the presence of noise as compared to the earlier Focus Measures. This new Focus Measure is based on an optical transfer function implemented in the Fourier domain. The proposed Focus Measure has shown marked improvements in the estimation of a depth map, with respect to the earlier Focus Measures, in the presence of various types of noise including Gaussian, shot, and speckle noise. The results of a range of Focus Measures are compared using root-mean-square error and correlation metrics.
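As an illustration of the depth-from-focus idea, a Fourier-domain focus measure can be computed as the energy of the high-frequency components of an image patch; a well-focused patch retains more high-frequency energy than a defocused one. This is a generic stand-in, since the abstract does not give the paper's specific optical transfer function.

```python
import numpy as np

def fourier_focus_measure(patch, cutoff=0.1):
    """Focus measure as the energy of high-frequency Fourier
    components of an image patch (an illustrative stand-in for the
    paper's OTF-based measure, whose details are not given here)."""
    f = np.fft.fftshift(np.fft.fft2(patch))
    h, w = patch.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)  # normalized frequency
    return float(np.sum(np.abs(f[radius > cutoff]) ** 2))

# Depth from focus: for each pixel/window, choose the frame in the
# focus stack that maximizes the measure.
rng = np.random.default_rng(0)
sharp = rng.standard_normal((32, 32))  # texture-rich patch (in focus)
flat = np.ones((32, 32))               # featureless patch (defocused)
```

A flat patch has only a DC component, so its high-frequency energy is zero, while a textured patch scores high; scanning the focus stack for the maximizing frame at each window yields the depth map.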
With gate counts of ten million, field-programmable gate arrays (FPGAs) are becoming suitable for floating-point computations. Addition is the most complex operation in a floating-point unit and can cause major delay while requiring significant area. Over the years, the VLSI community has developed many floating-point adder algorithms aimed primarily at reducing the overall latency. An efficient design of the floating-point adder offers major area and performance improvements for FPGAs. Given recent advances in FPGA architecture and area density, latency has become the main focus in attempts to improve performance. This paper studies the implementation of the standard, leading-one predictor (LOP), and far/close datapath (2-path) floating-point addition algorithms in FPGAs. Each algorithm has complex sub-operations which contribute significantly to the overall latency of the design. Each of the sub-operations is researched for different implementations and is then synthesized onto a Xilinx Virtex-II Pro FPGA device. The standard and LOP algorithms are also pipelined into five stages and compared with the Xilinx IP. According to the results, the standard algorithm is the best implementation with respect to area, but has a large overall latency of 27.059 ns while occupying 541 slices. The LOP algorithm reduces latency by 6.5% at the cost of a 38% increase in area compared to the standard algorithm. The 2-path implementation shows a 19% reduction in latency with an added expense of 88% in area compared to the standard algorithm. The five-stage standard pipeline implementation shows a 6.4% improvement in clock speed compared to the Xilinx IP with a 23% smaller area requirement. The five-stage pipelined LOP implementation shows a 22% improvement in clock speed compared to the Xilinx IP at a cost of 15% more area.
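The "standard" algorithm's datapath (align exponents, add mantissas, normalize) can be sketched in software to show where the latency comes from: the alignment shift, the wide add, and the normalization shift are all serialized. This is a simplified model for unsigned normalized operands with no rounding; the hardware refinements the paper studies (LOP, 2-path) are omitted.

```python
def fp_add(m1, e1, m2, e2, precision=24):
    """Sketch of the standard floating-point addition datapath on
    (mantissa, exponent) pairs; unsigned, truncating, no rounding."""
    # 1. Swap so operand 1 has the larger exponent.
    if e2 > e1:
        m1, e1, m2, e2 = m2, e2, m1, e1
    # 2. Align: right-shift the smaller mantissa by the exponent gap.
    m2 >>= (e1 - e2)
    # 3. Add the aligned mantissas (the wide carry-propagate add).
    m = m1 + m2
    e = e1
    # 4. Normalize: shift right if the sum overflowed the mantissa width.
    while m >= (1 << precision):
        m >>= 1
        e += 1
    return m, e

# 1.5*2^0 + 1.0*2^0 with a 4-bit mantissa: 0b1100 + 0b1000 = 0b10100,
# normalized to 0b1010 * 2^1 (i.e. 2.5).
result = fp_add(0b1100, 0, 0b1000, 0, precision=4)
```

In the 2-path optimization, the far path (large exponent gap, full alignment shift) and the close path (small gap, possible massive cancellation) are computed in parallel and one result is selected, trading area for latency, which matches the 19% latency / 88% area figures reported above.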
Water Resources Management - Quantification and prediction of drought events are important for planning and management of water resources in coping with climate change scenarios at global and local...
In the last few decades, with the advent of the World Wide Web (WWW), the world has been overloaded with data. This data carries potential information that, once extracted, can be used for the betterment of humanity. Information can be extracted from this data through manual or automatic analysis. Manual analysis is neither scalable nor efficient, whereas automatic analysis relies on computing mechanisms that extract information from huge amounts of data automatically. The WWW has also driven overall growth in the scientific literature, which makes the process of literature review a laborious, time-consuming, and cumbersome job for researchers. There is therefore a pressing need to automatically extract potential information from the immense set of scientific articles and thereby automate the literature-review process. Accordingly, the aim of this study is to present the overall progress on automatic information extraction from scientific articles. The information insights extracted from scientific articles are classified into two broad categories: metadata and key insights. As benchmark datasets play a significant role in the overall development of this research domain, the existing datasets for both categories are extensively reviewed. The research studies that have applied various computational approaches to these datasets are then consolidated. The major computational approaches include rule-based approaches, Hidden Markov Models, Conditional Random Fields, Support Vector Machines, Naïve Bayes classification, and deep learning. Multiple ongoing projects are focused on constructing datasets tailored to specific information needs from scientific articles. This study therefore covers the state of the art in information extraction from scientific articles, and also consolidates the evolving datasets as well as the various toolkits and code bases that can be used for information extraction from scientific articles.
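Of the approaches surveyed, rule-based extraction is the simplest to illustrate: hand-written patterns pull structured metadata fields out of an article header. The toy sketch below is illustrative only; real systems combine far richer rules with learned models such as CRFs, and the field names and patterns here are assumptions.

```python
import re

def extract_metadata(header_text):
    """Toy rule-based extractor for a few metadata fields from an
    article header string (illustrative; not any surveyed system)."""
    meta = {}
    # DOIs start with a "10." prefix followed by a registrant code.
    doi = re.search(r"\b10\.\d{4,9}/\S+", header_text)
    if doi:
        meta["doi"] = doi.group(0)
    # A four-digit year in the 1900s or 2000s.
    year = re.search(r"\b(19|20)\d{2}\b", header_text)
    if year:
        meta["year"] = int(year.group(0))
    # A simple e-mail pattern for the contact author.
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", header_text)
    if email:
        meta["email"] = email.group(0)
    return meta

header = "Published 2021. doi:10.1000/xyz123 Contact: author@example.org"
meta = extract_metadata(header)
```

Statistical approaches (HMMs, CRFs, deep learning) replace such brittle patterns with models trained on the benchmark datasets the study reviews, at the cost of requiring labeled data.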
This research aims to develop a method for the amalgamation of graphene nanoplatelets in glass/epoxy composites. Poor interface bonding between the fiber and the matrix is a critical issue that hinders the full performance of the composites. Glass fabric and epoxy were used as the reinforcement and matrix in the composite, respectively. Graphene nanoplatelets were utilized as an additional nanomaterial filler for the composites. Glass/graphene/epoxy and glass/epoxy composites were fabricated via vacuum infusion molding. The new method of applying graphene nanoplatelets as secondary reinforcement in the composite was developed based on proper functionalization in the sonication process. The physical, tensile, flexural, and short-beam interlaminar properties of the fabricated composites were examined to analyze the method's effectiveness. The results showed that density decreased by around 5% whereas thickness increased by around 34% after introducing graphene nanoplatelets into the composites. The tensile strength and modulus of the composites declined by approximately 19%; on the other hand, flexural strength and modulus increased by around 63.3% and 8.3%, respectively, after the addition of graphene nanoplatelets into the composites. Moreover, the interlaminar shear strength of the composite was enhanced by approximately 50%.