As databases increasingly integrate different types of information, such as time-series, multimedia, and scientific data, it becomes necessary to support efficient retrieval of multi-dimensional data. Both the dimensionality and the amount of data that need to be processed are increasing rapidly. As a result of this scale and high dimensionality, traditional techniques have proven inadequate. In this paper, we propose search techniques that are effective especially for large high-dimensional data sets. We first propose the VA+-file technique, which is based on scalar quantization of the data. The VA+-file is especially useful for exact nearest-neighbor (NN) search in non-uniform high-dimensional data sets. We then discuss how to improve the search and make it progressive by allowing some approximation in the query result. We develop a general framework for approximate NN queries, discuss various approaches for progressive processing of similarity queries, and develop a metric for the evaluation of such techniques. Finally, a new technique based on clustering is proposed, which merges the benefits of the various approaches to progressive similarity searching. Extensive experimental evaluation is performed on several real-life data sets, establishing the superiority of the proposed techniques over existing techniques for high-dimensional similarity searching. The techniques proposed in this paper are effective for real-life data sets, which are typically non-uniform, and they scale with respect to both the dimensionality and the size of the data set.
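The filter-and-refine idea behind approximation-file NN search can be sketched as follows. This is a generic VA-file-style sketch with uniform per-dimension partitions (the VA+-file instead adapts the quantizer to the data distribution); the data, bit budget, and partition here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((1000, 16))                # toy stand-in for a real data set
bits = 4                                     # bits per dimension
edges = np.linspace(0.0, 1.0, 2**bits + 1)   # uniform partition per dimension

# Approximation file: the cell index of every vector in every dimension
cells = np.clip(np.searchsorted(edges, data, side="right") - 1, 0, 2**bits - 1)

def nn_search(q):
    qcell = np.clip(np.searchsorted(edges, q, side="right") - 1, 0, 2**bits - 1)
    # Per-dimension lower bound: gap between the query and the cell's nearest edge
    lo = np.where(cells > qcell, edges[cells] - q, 0.0)
    lo = np.where(cells < qcell, q - edges[cells + 1], lo)
    lbound = np.sqrt((lo**2).sum(axis=1))
    # Filter step: visit vectors in lower-bound order; refine with exact distances
    order = np.argsort(lbound)
    best, best_d = -1, np.inf
    for i in order:
        if lbound[i] >= best_d:              # no remaining vector can beat the best
            break
        d = np.linalg.norm(data[i] - q)
        if d < best_d:
            best, best_d = i, d
    return best, best_d

q = rng.random(16)
idx, dist = nn_search(q)
```

Because the cell bound never overestimates the true distance, the early exit still returns the exact nearest neighbor while touching only a fraction of the full vectors.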
Intensive competition and rapid technology development in the Twisted-Pair Cable (TPC) industry have left no room for competing manufacturers to harbour system inefficiencies. TPCs are used in various communication and network hardware applications; their manufacturing facilities face many challenges, including various product configurations with different equipment settings, different product flows, and Work in Process (WIP) space limitations. The quest for internal efficiency and external effectiveness forces companies to align their internal settings and resources with external requirements and orders; in other words, significant factors must be identified and set appropriately prior to manufacturing. Integrated definition models (IDEF0, IDEF3), in conjunction with a simulation model and a design of experiments (DOE), have been developed to characterize the TPC production system, identify the significant process parameters, and examine various production-setting scenarios, aiming to achieve the best product flow time.
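The DOE step of screening production settings for flow time can be sketched with a full-factorial sweep. The factors, levels, and the analytic flow-time stand-in below are entirely hypothetical placeholders for the paper's simulation model:

```python
from itertools import product

# Hypothetical factors: batch size, parallel twisting machines, WIP cap
levels = {
    "batch": [50, 100, 200],
    "machines": [2, 3, 4],
    "wip_cap": [500, 1000],
}

def flow_time(batch, machines, wip_cap):
    """Toy analytic stand-in for the simulation model: processing time plus
    a queueing delay that grows with WIP up to its cap."""
    process = batch * 1.5 / machines
    queueing = 0.002 * min(wip_cap, batch * machines * 10)
    return process + queueing

# Full-factorial design: evaluate every combination of factor levels
runs = [
    (dict(zip(levels, combo)), flow_time(*combo))
    for combo in product(*levels.values())
]
best_setting, best_ft = min(runs, key=lambda r: r[1])
```

In practice each run would invoke the discrete-event simulation (with replications) rather than a closed-form expression, and a fractional design would be used when the factor count grows.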
The term “water quality” is used to describe the condition of water, including its chemical, physical, and biological characteristics. Modeling water quality parameters is a very important aspect of the analysis of any aquatic system. Prediction of surface water quality is required for proper management of a river basin, so that adequate measures can be taken to keep pollution within permissible limits. Accurate prediction of future phenomena is the lifeblood of optimal water resources management. The artificial neural network is a technique with a flexible mathematical structure that is capable of identifying complex non-linear relationships between input and output data, in contrast to classical modeling techniques. The Johor River Basin, located in Johor state, Malaysia, is significantly degraded due to human activities and development along the river. Accordingly, it is very important to implement and adopt a water quality prediction model that can provide a powerful tool for better water resource management. Several modeling methods have been applied in this research, including linear regression models (LRM), multilayer perceptron (MLP) neural networks, and radial basis function neural networks (RBF-NN). The results showed that neural networks, and more specifically RBF-NN models, can describe the behavior of water quality parameters more accurately than linear regression models. In addition, we observed that the RBF finds a solution faster than the MLP and is the most accurate and most reliable tool for processing large amounts of non-linear, non-parametric data.
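The speed advantage of RBF networks comes from their training procedure: with fixed centres and widths, the output weights are found in closed form by least squares rather than by iterative backpropagation. A minimal sketch on synthetic data (the inputs, target, centre count, and width are all assumed placeholders, not the study's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for water-quality records: 3 inputs, one target parameter
X = rng.random((200, 3))
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)

centers = X[rng.choice(len(X), 30, replace=False)]   # fixed Gaussian centres
width = 0.3

def design(Xa):
    # Gaussian basis: exp(-||x - c||^2 / (2 * width^2)) for every centre c
    d2 = ((Xa[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width**2))

# Closed-form output weights via least squares -- no backpropagation needed
Phi = design(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

def predict(Xnew):
    return design(Xnew) @ w
```

A real application would standardize the inputs, choose the centres by clustering, and validate on held-out records.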
Computational Grids (CGs) have become an appealing research area. They offer a suitable environment for developing large-scale parallel applications, integrating a huge amount of distributed heterogeneous resources to constitute a powerful virtual supercomputer. Scheduling is the most important issue for enhancing the performance of CGs. Various strategies have been introduced, with static and dynamic behaviors: the former maps tasks to resources at submission time, while the latter operates at run time. Static scheduling is unsuitable for the dynamic Grid environment, yet scheduling in CGs remains more complex than the proposed dynamic solutions can handle. This paper introduces a decentralized Adaptive Grid Scheduler (AGS) based on a novel rescheduling mechanism. AGS has several salient properties: it is hybrid, adaptive, decentralized, and efficient. AGS is also robust, as it has the ability to (i) detect resource failures, (ii) continue functioning despite those failures, and (iii) recover afterwards. Moreover, it integrates both static and dynamic scheduling behaviors. An initial static scheduling map is produced for an input Directed Acyclic Graph (DAG); however, DAG tasks may be rescheduled if the performance of the allocated resources changes in a way that may affect the tasks' response time. AGS overcomes the drawbacks of traditional schedulers by exploiting the unique features of mobile agents to enhance the resource discovery and monitoring processes. Experimental results show that AGS outperforms traditional Grid schedulers, achieving better scheduling efficiency.
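The initial static mapping of a DAG onto heterogeneous resources can be illustrated with a simple list-scheduling sketch: traverse the tasks in topological order and greedily place each on the resource giving the earliest finish time. The task graph, costs, and resource speeds below are hypothetical, and AGS's agent-based monitoring and rescheduling are not modeled:

```python
from collections import deque

# Hypothetical DAG: task -> (work units, predecessors); two resources, r2 twice as fast
tasks = {
    "a": (4, []), "b": (3, ["a"]), "c": (2, ["a"]), "d": (5, ["b", "c"]),
}
speeds = {"r1": 1.0, "r2": 2.0}

# Topological order via Kahn's algorithm
indeg = {t: len(p) for t, (_, p) in tasks.items()}
succ = {t: [] for t in tasks}
for t, (_, preds) in tasks.items():
    for p in preds:
        succ[p].append(t)
q = deque(t for t, d in indeg.items() if d == 0)
topo = []
while q:
    t = q.popleft()
    topo.append(t)
    for s in succ[t]:
        indeg[s] -= 1
        if indeg[s] == 0:
            q.append(s)

free = {r: 0.0 for r in speeds}        # when each resource next becomes idle
finish, placed = {}, {}
for t in topo:
    cost, preds = tasks[t]
    ready = max((finish[p] for p in preds), default=0.0)
    # Greedy choice: the resource with the earliest finish time for this task
    r = min(speeds, key=lambda m: max(ready, free[m]) + cost / speeds[m])
    start = max(ready, free[r])
    finish[t] = start + cost / speeds[r]
    free[r] = finish[t]
    placed[t] = r
makespan = max(finish.values())
```

A dynamic scheduler like AGS would then re-run this mapping for the not-yet-started tasks whenever monitoring detects that a resource's observed speed has drifted or the resource has failed.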
The increasing trend for integrating renewable energy sources into the grid to achieve a cleaner energy system is one of the main reasons for the development of sustainable microgrid (MG) technologies. As typical power-electronized power systems, MGs make extensive use of power electronics converters, which are highly controllable and flexible but lead to a profound impact on the dynamic performance of the whole system. Compared with traditional large-capacity power systems, MGs are less resistant to perturbations, and various dynamic variables are coupled with each other on multiple timescales, resulting in a more complex system instability mechanism. To meet the technical and economic challenges, such as active and reactive power sharing, voltage and frequency deviations, and imbalances between power supply and demand, the concept of hierarchical control has been introduced into MGs, allowing systems to control and manage the high capacity of renewable energy sources and loads. However, as the capacity and scale of the MG system increase, along with a multi-timescale control loop design, the multi-timescale interactions in the system may become more significant, posing a serious threat to its safe and stable operation. To investigate the multi-timescale behaviors and instability mechanisms under dynamic interactions for AC MGs, existing coordinated control strategies are discussed, and the dynamic stability of the system is defined and classified in this paper. Then, the modeling and assessment methods for the stability analysis of multi-timescale systems are also summarized. Finally, an outlook and discussion of future research directions for AC MGs are also presented.
Lung cancer (LC) is one of the leading causes of cancer occurrence and mortality worldwide. Treatment of patients with advanced and metastatic LC presents a significant challenge, as malignant cells use different mechanisms to resist chemotherapy. Drug resistance (DR) is a complex process that occurs due to a variety of genetic and acquired factors. Identifying the mechanisms underlying DR in LC patients, and possible therapeutic alternatives for more efficient therapy, is a central goal of LC research. Advances in nanotechnology have resulted in the development of targeted and multifunctional nanoscale drug constructs. The possible modulation of the components of nanomedicine, their surface functionalization, and the encapsulation of various active therapeutics provide promising tools to bypass crucial biological barriers. These attributes enhance the delivery of multiple therapeutic agents directly to the tumor microenvironment (TME), resulting in reversal of LC resistance to anticancer treatment. This review provides a broad framework for understanding the different molecular mechanisms of DR in lung cancer; presents novel nanomedicine therapeutics aimed at improving the efficacy of treatment of various forms of resistant LC; outlines current challenges in using nanotechnology for reversing DR; and discusses future directions for the clinical application of nanomedicine in the management of LC resistance.
Ceramic materials are increasingly used in micro-electro-mechanical systems (MEMS), as they offer many advantages such as high-temperature resistance, high wear resistance, low density, and favourable mechanical and chemical properties at elevated temperature. However, with the emergence of additive manufacturing, the use of ceramics for functional and structural MEMS raises new opportunities and challenges. This paper provides an extensive review of the manufacturing processes used for ceramic-based MEMS, including additive and conventional manufacturing technologies. The review covers the micro-fabrication techniques of ceramics with a focus on their operating principles, main features, and processed materials. Challenges that need to be addressed in applying additive technologies in MEMS include ceramic printing on wafers, post-processing at the micro level, resolution, and quality control. The paper also sheds light on the new possibilities of ceramic additive micro-fabrication and their potential applications, which indicate a promising future.
The paper addresses the application of effective tools for investigating structural integrity under earthquake loading, namely experimental testing, computer analysis and field data collection, and their combination. In particular, the application of hybrid (experimental-analytical) distributed (using several sites) simulation, hereafter referred to as HDS, to investigate the seismic response of complex systems is presented. After briefly reviewing and comparing approaches of field surveys, testing, analysis and hybrid simulation, the paper deals with five applications where at least two of the three tools have been successfully applied. The collapse of the I-880 (Cypress Viaduct) during the Loma Prieta (California) earthquake of 1989 was investigated analytically following field observation of the failure modes. A plausible mechanism was postulated based on advanced analysis and supported by field data. Analytical studies aimed at quantifying the demand imposed on steel moment frames in the Northridge (California) earthquake of 1994 pointed towards a possible contribution of vertical beam modes to the increased rotational demand imposed on connections, leading to their failure. Next, issues of irregularity and lack of seismic detailing in RC buildings, repeatedly observed to be a major contributor to damage, are studied at full scale using laboratory testing, supported by advanced analysis to steer the model design and tune the level of input motion. Reinforced concrete bridges are also studied, using advanced analysis and field observations to investigate the partial collapse of a ramp structure forming part of the Santa Monica Freeway (I-10). Finally, two examples of HDS applications are presented, one of which is a continuation of the analytical investigation of steel moment frames.
It is emphasized that only through integrating the available investigation tools can the response of complex systems be understood, leading to more economical and safer built environments in regions subjected to earthquakes.
In this paper, we propose data space mapping techniques for storage and retrieval in multi-dimensional databases on multi-disk architectures. We identify the important factors for efficient multi-disk searching of multi-dimensional data and develop secondary storage organization and retrieval techniques that directly address these factors. We especially focus on high-dimensional data, where none of the current approaches are effective. In contrast to current declustering techniques, the storage techniques in this paper consider both inter- and intra-disk organization of the data. The data space is first partitioned into buckets; the buckets are then declustered across multiple disks while being clustered within each disk. Queries are executed through bucket identification techniques that locate the pages. One of the partitioning techniques we discuss is especially practical for high-dimensional data, and our disk and page allocation techniques are optimal with respect to the number of I/O accesses and seek times. We provide experimental results on two real high-dimensional datasets that support our claims.
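The decluster-across-disks idea can be illustrated with the classic disk-modulo assignment over a grid partition: adjacent buckets land on different disks, so the buckets of a range query are fetched in parallel. This is a generic sketch (2-D grid, 3 disks, toy query), not the paper's allocation scheme:

```python
import itertools

grid, disks, dims = 4, 3, 2            # 4 partitions per dimension, 3 disks, 2-D

# Disk-modulo declustering: bucket (i, j) goes to disk (i + j) mod disks,
# so neighbouring buckets are spread over different disks
def disk_of(bucket):
    return sum(bucket) % disks

buckets = list(itertools.product(range(grid), repeat=dims))
placement = {b: disk_of(b) for b in buckets}

# A 2x2 range query touches 4 buckets; count how many land on each disk
query = [(i, j) for i in (1, 2) for j in (1, 2)]
load = [sum(1 for b in query if placement[b] == d) for d in range(disks)]
```

With the buckets spread this way, the query's response time is governed by the most-loaded disk (here 2 bucket reads) rather than by all 4 reads in sequence; intra-disk clustering would additionally order each disk's buckets to reduce seek time.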