Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period during which the row is or was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without imposing a heavy burden on transaction throughput. To this end, we redesign persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks (SSDs), which support very fast random I/Os, so traversing the extra level of indirection incurs relatively little overhead. The extra level of indirection dramatically reduces the number of magnetic-disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer to reduce physical disk I/Os for indexing new records. In this work, we further exploit SSDs by introducing novel DeltaBlock techniques for storing recent changes to data on SSDs. Using DeltaBlocks, we propose an efficient method to periodically flush recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used Generalized Search Tree (GiST) open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our novel batching technique can save up to 90% of the insertion time.
For updates, our prototype demonstrates that we can reduce the database size by up to 80%, even with modest space allocated for DeltaBlocks on SSDs.
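The core of the indirection idea can be sketched in a few lines. The sketch below is ours, not the paper's implementation: secondary indexes reference stable logical IDs (LIDs), and an SSD-resident table maps each LID to the record's current physical record ID (RID). When an update moves a record, only the LID-to-RID entry changes; indexes on unchanged attributes need no maintenance.

```python
# Hypothetical sketch of the indirection layer: indexes store logical IDs
# (LIDs); a fast SSD-resident table maps LIDs to the current physical
# record ID (RID). An update rewrites one LID->RID entry instead of
# touching every index that references the record.

class IndirectionLayer:
    def __init__(self):
        self.lid_to_rid = {}          # stands in for the SSD-resident mapping

    def insert(self, lid, rid):
        self.lid_to_rid[lid] = rid

    def update(self, lid, new_rid):
        # Only the indirection entry changes; indexes on unchanged
        # attributes keep pointing at the same LID and need no disk I/O.
        self.lid_to_rid[lid] = new_rid

    def resolve(self, lid):
        return self.lid_to_rid[lid]   # one extra (fast, SSD) lookup per probe

# A secondary index maps attribute value -> set of LIDs, never RIDs.
index_on_city = {"Oslo": {101}, "Lima": {102}}
layer = IndirectionLayer()
layer.insert(101, ("page7", 3))
layer.insert(102, ("page2", 9))

# Record 101 is updated and its new version lands elsewhere on disk:
layer.update(101, ("page9", 0))

# The index itself is untouched; a lookup still works via indirection.
hits = [layer.resolve(lid) for lid in index_on_city["Oslo"]]
```

The price of the scheme is the extra `resolve` per index probe, which is why the mapping must live on media with cheap random reads.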
In cognitive radio networks, cognitive nodes operate on a common pool of spectrum, opportunistically accessing and using parts of the spectrum not being used by others. Although cooperation among nodes is desirable for efficient network operation and performance, there may be malicious nodes whose objective is to hinder communications and disrupt network operations. The absence of a central authority or any policy-enforcement mechanism makes these kinds of open-access networks more vulnerable and susceptible to attacks. In this paper, we analyze a common form of denial-of-service attack: collaborative jamming. We consider a network in which a group of jammers tries to jam the channels being used by legitimate users, who in turn try to evade the jammed channels. First, we compute the distribution of the jamming signal that a node experiences under a random deployment of jammers. Then, we propose different jamming and defending schemes employed by the jammers and legitimate users, respectively. In particular, we model and analyze channel availability when the legitimate users randomly choose available channels and the jammers randomly jam different channels. We propose a multi-tier, proxy-based cooperative defense strategy that exploits temporal and spatial diversity for legitimate secondary users in an infrastructure-based centralized cognitive radio network. Illustrative results on spectrum availability rates show how to improve resiliency in cognitive radio networks in the presence of jammers.
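The random-jamming versus random-hopping scenario admits a quick sanity check. The toy model below is ours, not the paper's analysis: with N channels of which the jammers jam a random subset of J per slot, a user who picks a channel uniformly at random keeps it with probability (N - J) / N, which a Monte Carlo run confirms.

```python
import random

# Toy model (not the paper's analysis): N channels, jammers jam a random
# subset of J channels each slot; a legitimate user picks one channel
# uniformly at random. Its channel survives with probability (N - J) / N.

def availability(n_channels, n_jammed, trials=100000, seed=1):
    rng = random.Random(seed)
    channels = list(range(n_channels))
    ok = 0
    for _ in range(trials):
        jammed = set(rng.sample(channels, n_jammed))   # this slot's jam set
        if rng.choice(channels) not in jammed:
            ok += 1
    return ok / trials

est = availability(20, 5)
# Analytical availability is (20 - 5) / 20 = 0.75; the estimate is close.
```

Defense strategies such as the proposed multi-tier proxy scheme aim to push the effective availability above this uncoordinated baseline.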
With the exponential growth of end users and web data, the Internet is undergoing a paradigm shift from a user-centric model to a content-centric one, popularly known as information-centric networking (ICN). Current ICN research revolves around three key issues: (i) content request searching, (ii) content routing, and (iii) in-network caching schemes for delivering the requested content to the end user. Addressing these issues improves the user experience by lowering download delay and providing higher throughput. Existing research has mainly focused on on-path congestion or the expected delivery time of a content item to determine an optimized path toward the custodian. However, this ignores the cumulative effect of the link-state parameters and the state of the caches, and consequently degrades delay performance. To overcome this shortfall, we consider both the congestion of a link and the state of on-path caches to determine the best possible routes. We introduce a generic metric, entropy, to quantify the effects of link congestion and the state of on-path caches. We then develop a novel entropy-based algorithm, ENROUTE, for searching content requests triggered by any user, routing the content, and caching it for delivery to the user. The entropy value of an intra-domain node indicates how many popular contents are already cached in the node, which in turn signifies the degree of enrichment of that node with popular contents. The entropy of a link indicates how congested the link is with traversing content. To reduce delay, we enhance the entropy of caches in nodes and use low-entropy paths for downloading contents.
We evaluate the performance of the proposed ENROUTE algorithm against state-of-the-art schemes for various network parameters and observe improvements of 29–52% in delay, 12–39% in hit rate, and 4–39% in throughput.
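The abstract does not give ENROUTE's exact formulas, so the following is only a plausible Shannon-style sketch with hypothetical definitions: a node's entropy summarizes how much popular content it caches, a link's entropy grows as the link saturates, and a route is scored by combining the two.

```python
import math

# Hypothetical entropy definitions, illustrating the stated heuristic:
# prefer paths with low cumulative link congestion that end at nodes
# enriched with popular content.

def node_entropy(popularities):
    """Shannon entropy (bits) of the popularity mass cached at a node."""
    total = sum(popularities)
    probs = [p / total for p in popularities if p > 0]
    return -sum(p * math.log2(p) for p in probs)

def link_entropy(utilization, eps=1e-9):
    """Hypothetical congestion entropy: -log2(free capacity fraction)."""
    return -math.log2(max(1.0 - utilization, eps))

def path_score(link_utils, holder_cache_pops):
    # Lower is better: lightly loaded links, content holder with a rich
    # (high-entropy) cache of popular items.
    return sum(link_entropy(u) for u in link_utils) - node_entropy(holder_cache_pops)
```

Under these definitions, a path over a 90%-utilized link to a node caching a single item scores worse than a path over a 10%-utilized link to a node with a diverse cache, matching the intuition in the text.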
Traditionally, a cost-efficient control chart for monitoring a product quality characteristic is designed using prior knowledge of the process distribution. In practice, however, the functional form of the underlying process distribution is rarely known a priori. Therefore, nonparametric (distribution-free) charts have gained more attention in recent years. These nonparametric schemes are statistically designed with either a fixed in-control average run length or a fixed false-alarm rate. Robust and cost-efficient designs of nonparametric control charts, especially when the true process location parameter is unknown, are not adequately addressed in the literature. To this end, we develop an economically designed nonparametric control chart for monitoring an unknown location parameter, based on the Wilcoxon rank sum (hereafter WRS) statistic. Some exact and approximate procedures for evaluating the optimal design parameters are discussed extensively. Simulation results show that the overall performance of the exact procedure based on bootstrapping is highly encouraging and robust across various continuous distributions. An approximate, simplified procedure may be used in some situations. We offer some illustrations and concluding remarks.
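The WRS-with-bootstrap ingredients can be sketched compactly. This is our illustrative sketch, not the paper's design procedure (which optimizes economic parameters as well): rank a monitoring sample within the pooled reference-plus-sample data, and signal when the rank sum falls outside limits estimated by bootstrapping the in-control reference data.

```python
import random

# Sketch of a distribution-free WRS monitoring scheme (parameter names and
# the alpha level are hypothetical choices, not the paper's).

def rank_sum(reference, sample):
    """WRS statistic: sum of the sample's midranks in the pooled data."""
    pooled = reference + sample
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1                # midrank for a tied block
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return sum(ranks[len(reference):])       # rank sum of monitoring sample

def bootstrap_limits(reference, n, alpha=0.0027, reps=2000, seed=7):
    """Chart limits from the empirical in-control distribution of the WRS."""
    rng = random.Random(seed)
    stats = sorted(
        rank_sum(reference, [rng.choice(reference) for _ in range(n)])
        for _ in range(reps)
    )
    lo = stats[int(alpha / 2 * reps)]
    hi = stats[int((1 - alpha / 2) * reps) - 1]
    return lo, hi
```

An in-control sample of size n against m reference values has expected rank sum n(m + n + 1)/2, so the limits should bracket that value, while a strongly shifted sample should fall outside them.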
We propose a new link metric called normalized advance (NADV) for geographic routing in multihop wireless networks. NADV selects neighbors with the optimal trade-off between proximity and link cost. Coupled with the local next-hop decision in geographic routing, NADV enables an adaptive and efficient cost-aware routing strategy. Depending on the objective or message priority, applications can use the NADV framework to minimize various types of link cost. We present efficient methods for link cost estimation and perform detailed experiments in simulated environments. Our results show that NADV outperforms current schemes in many respects: for example, in high-noise environments with frequent packet losses, the use of NADV leads to an 81% higher delivery ratio. When compared to centralized routing under certain settings, geographic routing using NADV finds paths whose cost is close to the optimum. We also conducted experiments on the Emulab testbed, and the results demonstrate that our proposed approach performs well in practice.
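The neighbor-selection step can be sketched as follows. One natural reading of "normalized advance" is advance per unit link cost; the paper's exact normalization may differ, and the positions and costs below are illustrative.

```python
import math

# Sketch of NADV-style next-hop selection (hedged: we take NADV as
# geographic advance divided by link cost, e.g. expected transmissions).

def advance(node, neighbor, dest):
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return d(node, dest) - d(neighbor, dest)   # progress toward destination

def best_next_hop(node, dest, neighbors):
    """neighbors: list of (position, link_cost) with link_cost > 0."""
    candidates = [(advance(node, nbr, dest) / cost, nbr)
                  for nbr, cost in neighbors
                  if advance(node, nbr, dest) > 0]
    return max(candidates)[1] if candidates else None

# The closest-to-destination neighbor over a lossy (expensive) link can
# lose to a nearer neighbor reachable over a cheap link:
node, dest = (0.0, 0.0), (10.0, 0.0)
nbrs = [((8.0, 0.0), 4.0),   # big advance, expensive link
        ((3.0, 0.0), 1.0)]   # smaller advance, cheap link
choice = best_next_hop(node, dest, nbrs)
```

Greedy geographic forwarding would pick the (8, 0) neighbor; weighting advance by link cost flips the decision, which is the trade-off the metric is designed to capture.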
In this paper the maximum sidelobe level (SLL) reductions, optimal beam patterns, and optimal beamwidths of various designs of three-ring planar concentric circular antenna arrays (PCCAAs) are examined using three different classes of evolutionary optimization techniques, in order to determine the globally optimal three-ring PCCAA design and to establish a ranking among the techniques. Apart from the physical construction of a PCCAA, its design may be broadly classified into two major categories: uniformly excited arrays and non-uniformly excited arrays. The present paper assumes non-uniform excitations and uniform spacing of excitation elements in each three-ring PCCAA design, with the goal of maximizing SLL reduction together with optimal beam patterns and beamwidths. The design problem is modeled as an optimization problem for each PCCAA design and solved using different evolutionary optimization techniques to determine an optimal set of normalized excitation weights for the PCCAA elements which, when incorporated, yields a radiation pattern with optimal (maximum) SLL reduction. Among the various PCCAA designs, the one that yields the global minimum SLL with the global minimum first-null beamwidth (BWFN) is the globally optimal design. In this work, the three-ring PCCAA containing (N1 = 4, N2 = 6, N3 = 8) elements proves to be this globally optimal design. The optimization techniques employed are real-coded GA (RGA), canonical PSO (CPSO), craziness-based PSO (CRPSO), basic evolutionary programming (BEP), and hybrid evolutionary programming (HEP). Ranking the techniques after 30 total runs for each design, HEP, CRPSO, RGA, CPSO, and BEP hold the first five ranks in order of optimization capability. HEP yields the global minimum SLL (−32.86 dB) and the global minimum BWFN (77.0°) for the optimal design. BEP's rank often varies between second and fifth depending on the design set.
Further, compared to a uniformly excited PCCAA having an equal number of elements and the same radii, a reduction in major-lobe beamwidth is also observed in the optimal non-uniformly excited case.
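The fitness function that such optimizers evaluate can be sketched directly from the array geometry. The sketch below is ours: it computes the array factor of a three-ring concentric circular array broadside (theta = 0) and measures the peak sidelobe level in dB; the radii (in wavelengths) and sampling grid are illustrative choices, not the paper's values.

```python
import cmath
import math

# Fitness sketch for the PCCAA design problem: array factor of concentric
# rings of isotropic elements, main beam broadside at theta = 0.

def array_factor(weights, rings, theta, phi=0.0, wavelength=1.0):
    """rings: list of (radius, n_elements); weights: flat excitation list."""
    k = 2 * math.pi / wavelength
    af, w = 0j, iter(weights)
    for radius, n in rings:
        for i in range(n):
            phi_i = 2 * math.pi * i / n          # element angle on the ring
            af += next(w) * cmath.exp(
                1j * k * radius * math.sin(theta) * math.cos(phi - phi_i))
    return abs(af)

def peak_sll_db(weights, rings, samples=400):
    """Peak sidelobe level (dB) relative to the theta = 0 main beam."""
    mags = [array_factor(weights, rings, math.pi / 2 * s / samples)
            for s in range(samples + 1)]
    main = mags[0]
    i = 1                                        # walk past the first null
    while i < samples and mags[i] <= mags[i - 1]:
        i += 1                                   # ...to leave the main lobe
    return 20 * math.log10(max(mags[i:]) / main)

rings = [(0.5, 4), (1.0, 6), (1.5, 8)]   # illustrative radii (wavelengths)
uniform = [1.0] * 18
sll = peak_sll_db(uniform, rings)        # negative: sidelobes below main beam
```

An evolutionary optimizer (RGA, PSO, EP, ...) would then search over the 18 normalized excitation weights to minimize `peak_sll_db`, optionally with a BWFN term in the fitness.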
In this paper, we present a new variant of Particle Swarm Optimization (PSO) for image segmentation using optimal multi-level thresholding. Some objective functions that are very efficient for bi-level thresholding are not suitable for multi-level thresholding because of the exponential growth of computational complexity. The paper also proposes an iterative scheme that is practically more suitable for obtaining initial values of candidate multi-level thresholds; this scheme is used to find the suitable number of thresholds for segmenting an image. The iterative scheme is based on the well-known Otsu method and exhibits linear growth of computational complexity. The thresholds resulting from the iterative scheme are taken as initial thresholds, and the particles of the proposed PSO variant are created randomly around them. The proposed PSO algorithm makes a new contribution in adapting the 'social' and 'momentum' components of the velocity equation for particle move updates. The proposed segmentation method is applied to four benchmark images, and its performance surpasses results obtained with well-known methods such as the Gaussian-smoothing method (Lim, Y. K., & Lee, S. U. (1990). On the color image segmentation algorithm based on the thresholding and the fuzzy c-means techniques. Pattern Recognition, 23, 935–952; Tsai, D. M. (1995). A fast thresholding selection procedure for multimodal and unimodal histograms. Pattern Recognition Letters, 16, 653–666), the symmetry-duality method (Yin, P. Y., & Chen, L. H. (1993). New method for multilevel thresholding using the symmetry and duality of the histogram. Journal of Electronic Imaging, 2, 337–344), a GA-based algorithm (Yin, P.-Y. (1999). A fast scheme for optimal thresholding using genetic algorithms. Signal Processing, 72, 85–95), and the basic PSO variant employing a linearly decreasing inertia weight factor.
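The Otsu-based seeding step can be sketched as follows. This is a simplified version of the idea, with helper names and the interval-splitting rule chosen by us, not taken from the paper: apply Otsu's bi-level criterion to the grey-level histogram, then re-apply it within the widest remaining interval to grow the threshold set one level at a time (each pass is linear in the number of grey levels).

```python
# Simplified sketch of iterative Otsu seeding for multi-level thresholding.

def otsu_threshold(hist, lo, hi):
    """Otsu's threshold over hist[lo:hi+1]: maximizes between-class variance."""
    total = sum(hist[lo:hi + 1])
    total_mean = sum(g * hist[g] for g in range(lo, hi + 1))
    best_t, best_var = lo, -1.0
    w0 = m0 = 0.0
    for t in range(lo, hi):
        w0 += hist[t]                    # class 0: grey levels <= t
        m0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = m0 / w0, (total_mean - m0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def multilevel_thresholds(hist, levels):
    """Split the widest grey-level interval repeatedly (our heuristic)."""
    thresholds = []
    intervals = [(0, len(hist) - 1)]
    while len(thresholds) < levels:
        lo, hi = max(intervals, key=lambda iv: iv[1] - iv[0])
        intervals.remove((lo, hi))
        t = otsu_threshold(hist, lo, hi)
        thresholds.append(t)
        intervals += [(lo, t), (t + 1, hi)]
    return sorted(thresholds)

# A bimodal toy histogram: modes around grey levels 50 and 200.
hist = [0] * 256
for g in range(48, 53):
    hist[g] = 10
for g in range(198, 203):
    hist[g] = 10
seeds = multilevel_thresholds(hist, 1)   # single threshold between the modes
```

In the proposed method these seeds would initialize the swarm, with particles scattered randomly around each threshold before the PSO refinement.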
In this paper, the performance of piezoelectrically actuated pyramidal valveless micropumps is studied experimentally in detail. Valveless micropumps based on silicon and glass substrates are fabricated using MEMS technology. Two sizes of micropumps, with overall dimensions of 5 mm × 5 mm × 1 mm and 10 mm × 10 mm × 1 mm, are fabricated and characterized. In the fabricated micropumps, the silicon diaphragm is less than 20 µm thick, which allows the pumps to operate at low voltage with excellent stability and consistency. The performance of the micropumps in terms of flowrate and backpressure is evaluated over a wide range of driving frequencies and actuating voltages. The 10-mm micropump achieves a maximum water flowrate of 355 µl/min and a backpressure of 3.1 kPa at zero flowrate for an applied voltage of 80 V at a frequency of 1.05 kHz. The reported micropumps combine a small footprint with high flowrate and backpressure; they are thus especially suited for biological applications, as they can withstand an adequate amount of backpressure. A comparative study of the performance of these micropumps against those available in the literature brings out their efficacy.
A compact, microwave-driven, plasma-based multi-element focused ion beam system has been developed. In the present work, the effect of a reduced beam limiter (BL) aperture on the focused ion beam parameters, such as current and spot size, and a method of independently controlling the beam energy by introducing a biased collector at the focal point (FP) are investigated. It is found that the location of the FP does not change when the BL aperture is reduced. The location of the FP and the beam size are found to be only weakly dependent on the collector potential in the range from −8 kV to −18 kV.