Similar Documents
20 similar documents found.
1.
In ultra-wideband (UWB) real-time location systems, positioning scenarios with large numbers of highly dynamic moving targets produce a sharp increase in data volume, which raises problems for the timeliness and consistency of data access. The in-memory database Redis is introduced as a buffer for real-time positioning data; data access and update speeds improve significantly, and the solution extends easily to a distributed cluster.
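The abstract gives no implementation details; the following Python sketch is only one possible way to realize the idea, using the redis-py client. The `tag:` key prefix, field names, tag identifiers and history length are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of the idea described above: buffering real-time UWB tag
# positions in Redis so reads and updates never touch the disk-based store
# on the hot path. Key names and fields are illustrative assumptions.
import json
import time

import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def update_position(tag_id: str, x: float, y: float, z: float) -> None:
    """Write the latest position of a tag; one hash per tag keeps updates O(1)."""
    r.hset(f"tag:{tag_id}", mapping={"x": x, "y": y, "z": z, "ts": time.time()})
    # Optionally keep a short history for trajectory smoothing.
    r.lpush(f"tag:{tag_id}:history", json.dumps([x, y, z, time.time()]))
    r.ltrim(f"tag:{tag_id}:history", 0, 99)  # retain only the last 100 samples

def read_position(tag_id: str) -> dict:
    """Read the most recent position directly from the in-memory buffer."""
    return r.hgetall(f"tag:{tag_id}")

if __name__ == "__main__":
    update_position("T001", 1.25, 3.40, 0.90)
    print(read_position("T001"))
```

Persistence of the buffered positions to a back-end store, if needed, could run periodically off the hot path.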

2.
Starting from the mid-1980s, there has been a debate about which data model is most appropriate for temporal databases. A fundamental choice is whether to use intervals of time or temporal elements to timestamp objects and events with their periods of validity. The advantage of interval timestamps is that Start and End columns can be added to relations so that they can be treated within the framework of classical databases, leading to quick implementation. Temporal elements are finite unions of intervals. Their advantage is that timestamps become implicitly associated with values, tuples, and relations. Furthermore, since temporal elements are, by design, closed under set-theoretic operations such as union, intersection, and complementation, they lead to query languages that are natural. Here, we investigate the ease of use as well as the system performance of the two approaches to help settle the debate.
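As a concrete, non-authoritative illustration of the closure property mentioned above, the Python sketch below models a temporal element as a finite union of half-open intervals; the time bounds and the example validity periods are invented for illustration.

```python
# Illustrative sketch (not from the paper): a temporal element as a finite
# union of half-open integer intervals [start, end), closed under union,
# intersection and complementation. A single (start, end) pair corresponds
# to interval timestamping; a temporal element may hold several pairs.
from typing import List, Tuple

Interval = Tuple[int, int]        # [start, end)
TIME_MIN, TIME_MAX = 0, 10**9     # assumed bounds of the time line

def normalize(intervals: List[Interval]) -> List[Interval]:
    """Sort and merge overlapping or adjacent intervals."""
    merged: List[Interval] = []
    for s, e in sorted(i for i in intervals if i[0] < i[1]):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def union(a: List[Interval], b: List[Interval]) -> List[Interval]:
    return normalize(a + b)

def complement(a: List[Interval]) -> List[Interval]:
    result, prev = [], TIME_MIN
    for s, e in normalize(a):
        if prev < s:
            result.append((prev, s))
        prev = e
    if prev < TIME_MAX:
        result.append((prev, TIME_MAX))
    return result

def intersection(a: List[Interval], b: List[Interval]) -> List[Interval]:
    # A intersect B = complement(complement(A) union complement(B))
    return complement(union(complement(a), complement(b)))

if __name__ == "__main__":
    worked = [(1990, 1995), (1998, 2004)]   # e.g. an employee's validity periods
    managed = [(1993, 2000)]
    print(intersection(worked, managed))    # -> [(1993, 1995), (1998, 2000)]
```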

3.
4.
The growing amount of on-line data demands efficient parallel and distributed indexing mechanisms to manage large resource requirements and unpredictable system failures. Parallel and distributed indices built using commodity hardware like personal computers (PCs) can substantially save cost because PCs are produced in bulk, achieving economies of scale. However, PCs have a limited amount of random access memory (RAM), and the effective utilization of RAM for in-memory inversion is crucial. This paper presents an analytical investigation and an empirical evaluation of storage-efficient in-memory extensible inverted files, which are represented by fixed- or variable-sized linked list nodes. The size of these linked list nodes is determined by minimizing storage waste or maximizing storage utilization under different conditions, which leads to different storage allocation schemes. Minimizing storage waste also reduces the number of address indirections (i.e., chaining). We evaluated our storage allocation schemes using a number of reference collections. We found that the arrival-rate scheme is the best in terms of both storage utilization and the mean number of chainings per term. The final storage utilization can be over 90% in our evaluation if a sufficient number of documents is indexed. The mean number of chainings is not large (less than 2.6 for all the reference collections). We have also shown that our best storage allocation scheme can be used for our extensible compressed inverted file, whose final storage utilization can likewise exceed 90% provided that a sufficient number of documents is indexed. The proposed storage allocation schemes can also be used by compressed extensible inverted files with word positions.
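The paper's actual allocation schemes (e.g. the arrival-rate scheme) are not reproduced here; the Python sketch below only illustrates the underlying structure they operate on, namely per-term posting lists kept in fixed-size blocks that are chained as they fill up. The block size and the toy collection are assumptions.

```python
# Simplified sketch of an extensible in-memory inverted file: each term's
# postings live in fixed-size blocks chained together, so a posting list can
# grow without reallocating what is already stored.
from typing import Dict, Iterator, List, Optional

BLOCK_SIZE = 4  # small on purpose; a real system picks this to limit waste

class Block:
    __slots__ = ("postings", "next")
    def __init__(self) -> None:
        self.postings: List[int] = []          # document ids
        self.next: Optional["Block"] = None    # chain pointer (the "chaining")

class InvertedFile:
    def __init__(self) -> None:
        self.head: Dict[str, Block] = {}
        self.tail: Dict[str, Block] = {}

    def add(self, term: str, doc_id: int) -> None:
        if term not in self.head:
            self.head[term] = self.tail[term] = Block()
        tail = self.tail[term]
        if len(tail.postings) == BLOCK_SIZE:   # block full: chain a new one
            tail.next = Block()
            tail = self.tail[term] = tail.next
        tail.postings.append(doc_id)

    def postings(self, term: str) -> Iterator[int]:
        block = self.head.get(term)
        while block is not None:
            yield from block.postings
            block = block.next

if __name__ == "__main__":
    index = InvertedFile()
    docs = {1: "in memory index", 2: "memory wall", 3: "index compression"}
    for doc_id, text in docs.items():
        for term in text.split():
            index.add(term, doc_id)
    print(list(index.postings("memory")))  # -> [1, 2]
```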

5.
Indexing high-dimensional data for efficient in-memory similarity search
In main memory systems, the L2 cache typically employs cache line sizes of 32-128 bytes. These values are relatively small compared to high-dimensional data, e.g., >32D. The consequence is that existing techniques (designed for low-dimensional data) that minimize cache misses are no longer effective. We present a novel index structure, called the Δ-tree, to speed up high-dimensional queries in a main memory environment. The Δ-tree is a multilevel structure where each level represents the data space at a different dimensionality: the number of dimensions increases toward the leaf level. The remaining dimensions are obtained using principal component analysis. Each level of the tree serves to prune the search space more efficiently, as the lower dimensions reduce the distance computation and better exploit the small cache line size. Additionally, the top-down clustering scheme can capture the features of the data set and hence reduces the search space. We also propose an extension, called the Δ+-tree, that globally clusters the data space and then partitions clusters into small regions. The Δ+-tree can further reduce the computational cost and cache misses. We conducted extensive experiments to evaluate the proposed structures against existing techniques on different kinds of data sets. Our results show that the Δ+-tree is superior in most cases.
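The Δ-tree itself is not reconstructed here; the Python sketch below merely illustrates the pruning principle it relies on: since PCA is an orthonormal rotation, the distance computed over the first k principal components lower-bounds the full Euclidean distance, so most candidates can be rejected before all dimensions are examined. Data set size, dimensionality and k are assumptions.

```python
# Illustrative sketch of PCA-based partial-distance pruning (not the Δ-tree
# data structure itself): a candidate is discarded as soon as its partial
# distance over the first k principal components exceeds the best match so far.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 64))          # assumed 64-dimensional data set

# PCA via SVD on the centered data; components are sorted by variance.
mean = data.mean(axis=0)
_, _, components = np.linalg.svd(data - mean, full_matrices=False)
rotated = (data - mean) @ components.T        # data expressed in PCA space

def nearest(query: np.ndarray, k: int = 8) -> int:
    """Return the index of the nearest neighbour, pruning with k components."""
    q = (query - mean) @ components.T
    best_idx, best_dist = -1, np.inf
    for i, row in enumerate(rotated):
        partial = np.sum((row[:k] - q[:k]) ** 2)   # cheap lower bound
        if partial >= best_dist:
            continue                               # pruned: skip full distance
        full = partial + np.sum((row[k:] - q[k:]) ** 2)
        if full < best_dist:
            best_idx, best_dist = i, full
    return best_idx

if __name__ == "__main__":
    query = rng.normal(size=64)
    print(nearest(query))
```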

6.
7.
E-government benchmarking reports are characterized by their multitude as well as by the dissimilarity of the approaches applied to assessing development. This article presents and compares the research methods and the main results of four selected e-government benchmarking series. Building on these remarks, the authors raise the question of whether these reports are able to contribute to realizing the benefits inherent in benchmarking e-government. It is demonstrated that, depending on the specification of different system parameters, benchmarking series show varying relevance for governments seeking to exercise certain functions. Following this functional analysis, the authors discuss whether these reports represent an appropriate foundation for governments striving to implement e-government in a comprehensive and sustainable manner. For this purpose, key elements of e-government are brought together in a holistic evaluation model. Based on this comprehensive approach, it is argued that the usefulness of the presented benchmarking reports remains confined to certain elements of e-government as well as to certain stakeholders' views.

8.
The Journal of Supercomputing - During recent years, the big data explosion and the increase in main memory capacity, on the one hand, and the need for faster data processing, on the other hand, have...

9.
周狄波, 王迪峰. 《计算机应用》, 2009, 29(7): 1974-1977
From the perspective of the computer storage system, this paper proposes replacing the hard disk with main memory as the storage medium of the running system, in order to remove the disk I/O bottleneck and improve WebGIS response speed. By analyzing the three RAM-disk technologies available on Linux (ram disk, ramfs, and tmpfs) and the composition of the Linux operating system, and addressing the volatility of memory, an in-memory WebGIS implementation based on tmpfs and initrd is presented, and its system composition, system framework, and memory planning are described in detail. On this basis, using the 64-bit Debian GNU/Linux operating system and the MapServer WebGIS platform, the implementation of the in-memory WebGIS is described in six aspects: building the disk-based WebGIS master system, building the in-memory system image file, rebuilding initrd, memory allocation, system boot, and updating the WebGIS application. Practical tests and application results show that using memory storage to improve the response speed of WebGIS applications is feasible.
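The paper's Debian/MapServer setup cannot be reproduced in a few lines; as a rough, self-contained illustration of the underlying effect, the Python sketch below times repeated reads of the same file from a tmpfs mount (commonly /dev/shm on Linux) and from a disk-backed directory. The paths, file size and repeat count are assumptions, and on a warm OS page cache the measured gap may narrow.

```python
# Rough illustration (not the paper's setup): compare repeated reads of the
# same file from a tmpfs mount and from a disk-backed directory on Linux.
import os
import time

SIZE = 64 * 1024 * 1024  # 64 MiB payload (assumed)
PATHS = {"tmpfs": "/dev/shm/webgis_probe.bin", "disk": "/var/tmp/webgis_probe.bin"}
payload = os.urandom(SIZE)

for label, path in PATHS.items():
    with open(path, "wb") as f:
        f.write(payload)

    start = time.perf_counter()
    for _ in range(20):                      # re-read the same file 20 times
        with open(path, "rb") as f:
            while f.read(1 << 20):           # read in 1 MiB chunks
                pass
    elapsed = time.perf_counter() - start
    print(f"{label:5s}: {20 * SIZE / elapsed / 1e6:8.1f} MB/s")
    os.remove(path)
```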

10.
The last decade has witnessed the prevalence of sensor and GPS technologies that produce a high volume of trajectory data representing the motion history of moving objects. However, some characteristics of trajectories, such as variable lengths and asynchronous sampling rates, make them difficult to fit into traditional database systems, which are disk-based and tuple-oriented. Motivated by the success of column stores and the recent development of in-memory databases, we explore the potential for boosting the performance of trajectory data processing by designing a novel trajectory storage within main memory. In contrast to most existing trajectory indexing methods, which keep consecutive samples of the same trajectory in the same disk page, we partition the database into frames in which the positions of all moving objects at the same time instant are stored together and aligned in main memory. We found this column-wise storage to be surprisingly well suited for in-memory computing, since most frames can be stored in highly compressed form, which is pivotal for increasing memory throughput and reducing CPU cache misses. The independence between frames also makes them natural working units when parallelizing data processing in a multi-core environment. Lastly, we run a variety of common trajectory queries on both real and synthetic datasets in order to demonstrate the advantages and study the limitations of our proposed storage.
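A minimal Python/NumPy sketch of the frame layout described above follows; the object count, the random-walk data and the range query are invented for illustration. Storing one dense array per time instant means a "who was where at time t" query scans a single contiguous block of memory.

```python
# Simplified sketch of frame-oriented trajectory storage: one dense array of
# positions per time instant ("frame"), aligned by object id, instead of one
# row per (object, sample).
import numpy as np

rng = np.random.default_rng(42)
NUM_OBJECTS, NUM_FRAMES = 1_000, 500

# frames[t] holds the (x, y) of every object at time instant t (random walk).
frames = np.cumsum(rng.normal(size=(NUM_FRAMES, NUM_OBJECTS, 2)), axis=0)

def objects_in_window(t: int, x_lo: float, x_hi: float,
                      y_lo: float, y_hi: float) -> np.ndarray:
    """Range query at a single time instant: scans one contiguous frame."""
    xy = frames[t]
    mask = ((xy[:, 0] >= x_lo) & (xy[:, 0] <= x_hi) &
            (xy[:, 1] >= y_lo) & (xy[:, 1] <= y_hi))
    return np.nonzero(mask)[0]               # ids of qualifying objects

def trajectory(obj_id: int) -> np.ndarray:
    """Reassemble one object's trajectory by slicing the same column of every frame."""
    return frames[:, obj_id, :]

if __name__ == "__main__":
    print(objects_in_window(100, -5, 5, -5, 5)[:10])
    print(trajectory(7).shape)               # -> (500, 2)
```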

11.
12.
A variety of largely automated methods have been proposed for finite-state verification of computer systems. Although anecdotal accounts of success are widely reported, there is very little empirical data on the relative strengths and weaknesses of those methods across a broad range of analysis questions and systems. This information, however, is critical for the transfer of the technology from research to practice. We review some of the problems involved in obtaining this information and suggest several ways in which the community can facilitate empirical evaluation of finite-state verification tools.

13.
As workstations become more popular, it is increasingly important to be able to measure and compare their relative performance. However, benchmarks are often either too specific to be of general interest or too general to be relevant to a specific application. We have constructed a collection of benchmarks, for a common kind of engineering workstation, that give a good approximation of processor, graphics, file access, and multitasking performance while being small and simple enough to easily transport. We have run the benchmarks on a number of workstations, including the Sun-3, Apollo 560, VAXstation-II, and IRIS 2400, with overall results meeting our approximate expectations. Our approach allows us to quickly assess the performance of new workstations.

14.
Benchmarking quality measurement
This paper gives a simple benchmarking procedure for companies wishing to develop measures for software quality attributes of software artefacts. The procedure does not require that a proposed measure be a consistent measure of a quality attribute; it requires only that the measure shows agreement most of the time. The procedure provides summary statistics for measures of quality attributes of a software artefact. These statistics can be used to benchmark subjective direct measurement of a quality attribute by a company's software developers. Each proposed measure is expressed as a set of error rates for measurement on an ordinal scale, and these error rates enable simple benchmarking statistics to be derived. The statistics can also be derived for any proposed objective indirect measure or prediction system for the quality attribute. For an objective measure or prediction system to be of value to the company, it must be 'better' or 'more objective' than the organisation's current measurement or prediction capability, and thus it must be demonstrated with confidence that the benchmark's objectivity has been surpassed. By using Bayesian statistical inference, the paper shows how to decide whether a new measure should be considered 'more objective' or whether a prediction system's predictive capability can be considered 'better' than the current benchmark. Furthermore, the Bayesian inferential approach is easy to use and provides clear advantages for quantifying and inferring differences in objectivity.

15.
16.
In-memory (transactional) data stores, also referred to as data grids, are recognized as a first-class data management technology for cloud platforms, thanks to their ability to match the elasticity requirements imposed by the pay-as-you-go cost model. On the other hand, determining how the performance and reliability/availability of these systems vary as a function of configuration parameters, such as the number of cache servers to be deployed and the degree of in-memory replication of slices of data, is far from a trivial task. Yet it is an essential aspect of the provisioning process for cloud platforms, given that it affects the amount of cloud resources planned for usage. To cope with the issue of predicting and analysing the behavior of different configurations of cloud in-memory data stores, in this article we present a flexible simulation framework offering skeleton simulation models that can be easily specialized to capture the dynamics of diverse data grid systems, such as those related to the specific (distributed) protocol used to provide data consistency and/or transactional guarantees. Besides its flexibility, another peculiar aspect of the framework is that it integrates simulation and machine-learning (black-box) techniques, the latter being used to capture the dynamics of the data-exchange layer (e.g., the message-passing layer) across the cache servers. This is relevant because the actual data-transport/networking infrastructure on top of which the data grid is deployed might be unknown, making it infeasible to model via white-box (namely, purely simulative) approaches. We also provide an extended experimental study aimed at validating instances of simulation models supported by our framework against the execution dynamics of real data grid systems deployed on top of either private or public cloud infrastructures. In particular, our validation test-bed has been based on an industrial-grade open-source data grid, namely Infinispan by JBoss/Red Hat, and a de facto standard benchmark for NoSQL platforms, namely YCSB by Yahoo. The validation study has been conducted on both public and private cloud systems, scaling the underlying infrastructure up to 100 (resp. 140) virtual machines for the public (resp. private) cloud case. Further, we provide some experimental data related to a scenario where our framework is used for on-line capacity planning and reconfiguration of the data grid system.
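The framework is far richer than a few lines can convey; the Python sketch below only illustrates the hybrid structure described above: a toy simulation of transaction latency in which the data-exchange delay is supplied by a regression model fitted to synthetic, invented measurements of the messaging layer rather than by a white-box network model.

```python
# Toy illustration (not the authors' framework) of coupling a simulator with a
# learned black-box model: local processing is simulated analytically, while
# the data-exchange delay is predicted by a regressor trained on synthetic
# "measurements" of the messaging layer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# 1. Synthetic measurements of the messaging layer: delay vs. (#servers, msg size).
servers = rng.integers(2, 100, size=2_000)
msg_kb = rng.uniform(1, 64, size=2_000)
delay_ms = 0.05 * servers + 0.2 * msg_kb + rng.normal(0, 0.5, size=2_000)
net_model = RandomForestRegressor(n_estimators=50, random_state=0)
net_model.fit(np.column_stack([servers, msg_kb]), delay_ms)

# 2. Simple simulation: mean per-transaction latency for a given configuration.
def simulate_mean_latency(num_servers: int, replication: int,
                          num_tx: int = 5_000) -> float:
    cpu_ms = rng.exponential(scale=1.0, size=num_tx)          # local processing
    msg_sizes = rng.uniform(1, 64, size=num_tx)               # payload in KiB
    features = np.column_stack([np.full(num_tx, num_servers), msg_sizes])
    net_ms = net_model.predict(features) * replication        # one hop per replica
    return float(np.mean(cpu_ms + net_ms))

if __name__ == "__main__":
    for n in (10, 40, 80):
        print(n, "servers:", round(simulate_mean_latency(n, replication=2), 2), "ms")
```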

17.
Multimedia technologies are being adopted both in the professional and commercial world with great enthusiasm. This has led to a significant interest in the research and development of multimedia databases. However, none of these efforts have really addressed the issues related to the benchmarking of multimedia databases. We analyze the problem of benchmarking multimedia databases in this paper and suggest a methodology.

18.
For natural interaction with augmented reality (AR) applications, good tracking technology is key. But unlike dense stereo, optical flow, or multi-view stereo, template-based tracking, which is most commonly used for AR applications, lacks benchmark datasets allowing a fair comparison between state-of-the-art algorithms. Until now, mainly synthetically generated image sequences have been used to evaluate the performance and robustness of template-based tracking algorithms objectively and quantitatively; the evaluation is therefore often intrinsically biased. In this paper, we describe the process we carried out to acquire real-scene image sequences with very precise and accurate ground-truth poses using an industrial camera rigidly mounted on the end effector of a high-precision robotic measurement arm. For the acquisition, we considered most of the critical parameters that influence tracking results, such as the texture richness and texture repeatability of the objects to be tracked, the camera motion and speed, changes of the object scale in the images, and variations of the lighting conditions over time. We designed an evaluation scheme for object detection and interframe tracking algorithms suited for AR and other computer vision applications and used the image sequences to apply this scheme to several state-of-the-art algorithms. The image sequences are freely available for testing, submitting, and evaluating new template-based tracking algorithms, i.e., algorithms that detect or track a planar object in an image sequence given only one image of the object (called the template).

19.
Elhanany, I.; Tabatabaee, V. Computer, 2003, 36(10): 109-110
Switch fabrics are a principal building block in networking and communications platforms, but the growing use of merchant fabric silicon for diverse market segments is making it increasingly challenging to evaluate and compare the various product offerings. Current fabric selection methodology involves complex comparisons of speeds and feeds using limited data that switch-fabric vendors provide. This data is commonly based on idealistic traffic patterns and environmental parameters suited to a vendor-specific architecture rather than real-world, application-oriented scenarios that stress fabric implementations. To address this problem, the Network Processing Forum has launched a task group to develop a standard fabric benchmarking framework and suite of performance test benches that provide system OEMs with open, objective, and verifiable results while enabling fabric vendors to leverage their core intellectual property. The task group is focusing on traffic modeling, performance metrics, and actual test benches. Although the emphasis is on switches specifically pertaining to Internet-based platforms, the same framework is applicable to storage area networks and other switching applications.

20.
Benchmarking software development productivity
The article presents a statistical analysis of productivity variation based on a unique database containing 206 business software projects from 26 Finnish companies. The authors examine differences in the factors explaining productivity in the banking, insurance, manufacturing, wholesale/retail, and public administration sectors. They provide productivity benchmarking equations that are useful both for estimating expected productivity at the start of a new project and for benchmarking a completed project in each business sector.

