Similar Documents
20 similar documents found (search time: 15 ms)
1.
A Merging Algorithm for Materialized Views   Cited by: 1 (self-citations: 0, other citations: 1)
陈长清, 程恳. 《计算机应用》 (Journal of Computer Applications), 2005, 25(4): 814-816
For practical database applications that maintain a large number of materialized views, this paper proposes a view merging method that reduces the total number of views and shrinks the search space of materialized views. It also introduces the merge tree and a fast, effective merging algorithm based on it. Experiments show that merging materialized views is an effective way to quickly find the materialized views that may answer a query, and can significantly improve query processing performance.
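As a loose illustration only, the sketch below models a materialized (aggregate) view as the set of group-by attributes it materializes: merging two views takes the union of their attribute sets, and a query can potentially be answered by any view whose attribute set contains the query's. This representation is an assumption for illustration, not the paper's merge-tree construction.

```python
def merge_views(view_a: frozenset, view_b: frozenset) -> frozenset:
    """Merge two aggregate views, here modeled as sets of group-by attributes."""
    return view_a | view_b

def can_answer(view: frozenset, query_attrs: frozenset) -> bool:
    """A view can potentially answer a query that groups by a subset of its attributes."""
    return query_attrs <= view

v1 = frozenset({"region", "product"})
v2 = frozenset({"region", "month"})
merged = merge_views(v1, v2)          # one view replaces two, shrinking the search space
print(can_answer(merged, frozenset({"region", "month"})))   # True
```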

2.
Deep convolutional neural networks are limited in applications with strict real-time requirements and constrained resources because of their large size and computational complexity, so it is necessary to compress and accelerate existing network structures. To address this problem, a hybrid compression method combining pruning and stream merging is proposed. The method compresses the model from different angles, further reducing the memory and time costs caused by parameter redundancy and structural redundancy. First, redundant parameters within each layer are pruned; then, at the structural level, non-essential layers are stream-merged with important layers; finally, retraining is used to restore model accuracy. Experimental results on the MNIST dataset show that, without loss of accuracy, the proposed hybrid compression method compresses LeNet-5 to 1/20 of its original size and achieves an 8× speedup.
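The abstract describes the pruning step only at a high level; the following is a minimal NumPy sketch of magnitude-based weight pruning. The layer shape, the `prune_ratio` parameter and the thresholding rule are illustrative assumptions, not the paper's exact procedure, and the stream-merging and retraining steps are not shown.

```python
import numpy as np

def prune_layer(weights: np.ndarray, prune_ratio: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights in a layer (magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * prune_ratio)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]    # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask                           # pruned weights (others set to 0)

# Example: prune a random bank of 5x5 conv kernels and report the resulting sparsity.
w = np.random.randn(16, 1, 5, 5)
w_pruned = prune_layer(w, prune_ratio=0.8)
print("sparsity:", 1.0 - np.count_nonzero(w_pruned) / w_pruned.size)
```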

3.
The early algorithms for in-place merging were mainly focused on time complexity, while their structure was largely ignored. Most of them are therefore elusive and of only theoretical significance. For this reason, this paper simplifies the unstable in-place merge of Geffert et al. [V. Geffert, J. Katajainen, T. Pasanen, Asymptotically efficient in-place merging, Theoret. Comput. Sci. 237 (2000) 159-181]. The simplified algorithm is simple yet practical, and has low time complexity.
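For orientation only, here is a straightforward in-place merge of two adjacent sorted runs inside one array; it uses O(1) extra space but worst-case O(n·m) element moves, so it is not the asymptotically efficient simplified algorithm the paper describes.

```python
def merge_in_place(a, mid):
    """Merge the sorted runs a[:mid] and a[mid:] in place.

    When the head of the left run exceeds the head of the right run, the right
    element is shifted into position one cell at a time (O(1) extra space).
    """
    i, j = 0, mid
    while i < j < len(a):
        if a[i] <= a[j]:
            i += 1
        else:
            x = a[j]
            for k in range(j, i, -1):   # shift a[i:j] right by one
                a[k] = a[k - 1]
            a[i] = x
            i += 1
            j += 1

data = [1, 4, 7, 9, 2, 3, 8]
merge_in_place(data, 4)
print(data)   # [1, 2, 3, 4, 7, 8, 9]
```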

4.
This paper addresses the problem of grid map merging for multi-robot systems, which can be resolved by acquiring the map transformation matrix (MTM) among robot maps. Without an initial correspondence or any rendezvous among robots, the only way to acquire the MTM is to find and match the common regions of the individual robot maps. This paper proposes a novel map merging technique which is capable of merging individual robot maps by matching the spectral information of robot maps. The proposed technique extracts the spectra of robot maps and enhances the extracted spectra using visual landmarks. Then, the MTM is accurately acquired by finding the maximum cross-correlation among the enhanced spectra. Experimental results in outdoor environments show that the proposed technique performs successfully. A comparison also shows that map merging errors are significantly reduced by the proposed technique.
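A rough sketch of the cross-correlation idea behind such map matching, assuming SciPy is available and restricting to translation only; the paper also recovers rotation and correlates landmark-enhanced spectra rather than the raw grids.

```python
import numpy as np
from scipy.signal import correlate

def estimate_translation(map_a, map_b):
    """Return the (dy, dx) shift that best aligns map_b onto map_a,
    found as the peak of the 2-D cross-correlation (computed via FFT)."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    corr = correlate(a, b, mode="full", method="fft")
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak_y - (map_b.shape[0] - 1)
    dx = peak_x - (map_b.shape[1] - 1)
    return dy, dx

# Toy check: map_b is map_a circularly shifted by (3, 5); the shift that aligns
# map_b back onto map_a is therefore (-3, -5).
rng = np.random.default_rng(0)
map_a = (rng.random((64, 64)) > 0.7).astype(float)
map_b = np.roll(map_a, shift=(3, 5), axis=(0, 1))
print(estimate_translation(map_a, map_b))
```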

5.
Neural Computing and Applications - Fuzzy logic is, inter alia, a simple and flexible approach to modelling that can be used in river basins where adequate hydrological data are unavailable. In...

6.
We present a high-performance solution to the I/O retrieval problem in a distributed multimedia system. Parallelism of data retrieval is achieved by striping the data across multiple disks. We identify the components that contribute to media data-retrieval delay. The variable delays among these components have a great bearing on the server throughput under varying load conditions. We present a buffering scheme to minimize these variations. We have implemented our model on the Intel Paragon parallel computer. The results of component-wise instrumentation of the server operation are presented and analyzed. Experimental results that demonstrate the efficacy of the buffering scheme are presented. Based on our experiments, a dynamic admission-control policy that takes server workloads into account is proposed.
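As a simplified illustration of the striping idea only (not the paper's Paragon implementation or its buffering scheme), round-robin placement maps block i of a media object to disk i mod D:

```python
def stripe_location(block_index: int, num_disks: int, start_disk: int = 0):
    """Round-robin striping: map a media block to (disk, local block number)."""
    disk = (start_disk + block_index) % num_disks
    local_block = block_index // num_disks
    return disk, local_block

# Blocks 0..7 of one stream striped over 4 disks, so 4 blocks can be fetched in parallel.
for i in range(8):
    print(i, stripe_location(i, num_disks=4))
```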

7.
8.
Software development is a collaborative activity that requires teams of software engineers to cooperate and work in parallel on versions of models. However, model management techniques such as model differencing, merging, and versioning have turned out to be difficult challenges, due to the complexity of the operations and the graph-like nature of models. Well-developed support for the model merging process, as well as for conflict management, is therefore highly desirable. This paper presents a novel process for model merging, called the Epsilon-based Three-way Merging Process (E3MP). Model merging is a significant problem when different versions of a system model exist across modeling teams. E3MP comprises three components implemented in the Epsilon framework. First, modelers can define domain-specific rules that customize the merging process. Second, E3MP enables an automated method for syntactic and semantic conflict detection among different versions of the system model. Third, E3MP puts forward a pattern-based approach for conflict resolution. We applied two generic benchmarks to assess the conflict detection and resolution capabilities of our approach and carried out an initial scalability evaluation of the model merge with large models and large change sets. The results of our experiments revealed that the proposed process generates consistent and semantically correct merged models.
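A toy three-way merge with conflict detection on flat attribute maps is sketched below; the dictionary representation of a model is an assumption for illustration and does not reflect E3MP's Epsilon-based rules, graph structure, or pattern-based resolution.

```python
def three_way_merge(base, left, right):
    """Three-way merge of attribute maps (element id -> value); None means absent."""
    merged, conflicts = {}, []
    for key in set(base) | set(left) | set(right):
        b, l, r = base.get(key), left.get(key), right.get(key)
        if l == r:                       # both sides agree (or both deleted it)
            if l is not None:
                merged[key] = l
        elif l == b:                     # only the right version changed
            if r is not None:
                merged[key] = r
        elif r == b:                     # only the left version changed
            if l is not None:
                merged[key] = l
        else:                            # both changed differently: conflict
            conflicts.append((key, l, r))
            merged[key] = l              # placeholder; a resolution pattern would decide
    return merged, conflicts

base  = {"Class::Order": "abstract", "Attr::total": "int"}
left  = {"Class::Order": "concrete", "Attr::total": "int"}
right = {"Class::Order": "abstract", "Attr::total": "float", "Attr::tax": "float"}
print(three_way_merge(base, left, right))   # merged model, empty conflict list
```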

9.
Stream X-machines are a state-based formalism that has associated with it a particular development process in which a system is built from trusted components. Testing thus essentially checks that these components have been combined in a correct manner and that the orders in which they can occur are consistent with the specification. Importantly, there are test generation methods that return a checking experiment: a test that is guaranteed to determine correctness as long as the implementation under test (IUT) is functionally equivalent to an unknown element of a given fault domain Ψ. Previous work has shown how three methods for generating checking experiments from a finite state machine (FSM) can be adapted to testing from a stream X-machine. However, there are many other methods for generating checking experiments from an FSM, and these have a variety of benefits that correspond to different testing scenarios. This paper shows how any method for generating a checking experiment from an FSM can be adapted to generate a checking experiment for testing an implementation against a stream X-machine. This is the case whether we are testing to check that the IUT is functionally equivalent to a specification or testing to check that every trace (input/output sequence) of the IUT is also a trace of a nondeterministic specification. Interestingly, this holds even if the fault domain Ψ used is not the one traditionally associated with testing from a stream X-machine. The results apply to both deterministic and nondeterministic implementations.

10.
Conditional branches are expensive. Branches require a significant percentage of execution cycles since they occur frequently and cause pipeline flushes when mispredicted. In addition, branches result in forks in the control flow, which can prevent other code-improving transformations from being applied. In this paper we describe profile-based techniques for replacing the execution of a set of two or more branches with a single branch on a conventional scalar processor. These sets of branches can include tests of multiple variables. For instance, the test if (p1 != 0 && p2 != 0), which is testing for NULL pointers, can be replaced with if (p1 & p2 != 0). Program profiling is performed to target condition merging along frequently executed paths. The results show that eliminating branches by merging conditions can significantly reduce the number of conditional branches executed in non-numerical applications. Copyright © 2004 John Wiley & Sons, Ltd.

11.
12.
In this paper, we study the merging of two sorted arrays on the EREW PRAM under two restrictions: (1) the elements of the two arrays are taken from the integer range [1, n], where n = max(n1, n2); (2) the elements are drawn from either a uniform distribution or a suitably restricted non-uniform distribution over the p processors (1 ≤ i ≤ p, where p is the number of processors). We give a new optimal deterministic algorithm that uses p processors on the EREW PRAM; its running time can be as low as O(log^(g) n), which is faster than previous results, where log^(g) n = log log^(g-1) n for g > 1 and log^(1) n = log n. We also extend the domain of the input data to [1, n^k], where k is a constant.

13.
In inverted file systems, queries can be written as Boolean expressions of inverted attributes. In response to a query, the system accesses address lists associated with the attributes in the query, merges them, and selects those records that satisfy the search logic. In this paper we consider the minimization of the CPU time needed for the merging operation. The time can possibly be reduced by taking address lists that occur in several product terms as a common factor of these products. This means that the union operation must be performed before the intersection operation. We present formulas which can be used to decide whether the above method is advantageous. The time can also be reduced by choosing the order of intersection operations so that it takes into consideration the occurrences of the address lists in the products and the lengths of the address lists. For choosing the order of intersection operations we give a heuristic algorithm that minimizes the total time needed for intersections.
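One of the ideas above, ordering intersections so that the shortest address lists are merged first, can be sketched as follows; the occurrence-based weighting and the union-before-intersection factoring from the abstract are omitted.

```python
def intersect_postings(lists):
    """Intersect sorted address lists, shortest first, so the running
    result stays as small as possible (a common heuristic for AND queries)."""
    ordered = sorted(lists, key=len)
    result = set(ordered[0])
    for lst in ordered[1:]:
        result &= set(lst)
        if not result:          # early exit: the conjunction is already empty
            break
    return sorted(result)

# Example: records matching attr1 AND attr2 AND attr3
attr1 = [1, 3, 5, 7, 9, 11, 13]
attr2 = [3, 4, 5, 9, 13]
attr3 = [5, 9]
print(intersect_postings([attr1, attr2, attr3]))   # [5, 9]
```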

14.
We present an adaptive load shedding approach for windowed stream joins. In contrast to the conventional approach of dropping tuples from the input streams, we explore the concept of selective processing for load shedding. We allow stream tuples to be stored in the windows and shed excessive CPU load by performing the join operations, not on the entire set of tuples within the windows, but on a dynamically changing subset of tuples that are learned to be highly beneficial (a minimal sketch of this idea follows this abstract). We support such dynamic selective processing through three forms of runtime adaptations: adaptation to input stream rates, adaptation to time correlation between the streams, and adaptation to join directions. Our load shedding approach enables us to integrate utility-based load shedding with time correlation-based load shedding. Indexes are used to further speed up the execution of stream joins. Experiments are conducted to evaluate our adaptive load shedding in terms of output rate and utility. The results show that our selective processing approach to load shedding is very effective and significantly outperforms the approach that drops tuples from the input streams. Bugra Gedik received the B.S. degree in C.S. from Bilkent University, Ankara, Turkey, and the Ph.D. degree in C.S. from the College of Computing at the Georgia Institute of Technology, Atlanta, GA, USA. He is with the IBM Thomas J. Watson Research Center, currently a member of the Software Tools and Techniques Group. Dr. Gedik's research interests lie in data intensive distributed computing systems, spanning data-centric peer-to-peer overlay networks, mobile and sensor-based distributed data management systems, and distributed data stream processing systems. His research focus is on developing system-level architectures and techniques to address scalability problems in distributed continual query systems and applications. He is the recipient of the ICDCS 2003 best paper award. He has served in the program committees of several international conferences, such as ICDE, MDM, and CollaborateCom. Kun-Lung Wu received the B.S. degree in E.E. from National Taiwan University, Taipei, Taiwan, and the M.S. and Ph.D. degrees in C.S. from the University of Illinois at Urbana-Champaign. He is with the IBM Thomas J. Watson Research Center, currently a member of the Software Tools and Techniques Group. His recent research interests include data streams, continual queries, mobile computing, Internet technologies and applications, database systems and distributed computing. He has published extensively and holds many patents in these areas. Dr. Wu is a Senior Member of the IEEE Computer Society and a member of the ACM. He is the Program Co-Chair for the IEEE Joint Conference on e-Commerce Technology (CEC 2007) and Enterprise Computing, e-Commerce and e-Services (EEE 2007). He was an Associate Editor for the IEEE Trans. on Knowledge and Data Engineering, 2000–2004. He was the general chair for the 3rd International Workshop on E-Commerce and Web-Based Information Systems (WECWIS 2001). He has served as an organizing and program committee member on various conferences. He has received various IBM awards, including an IBM Corporate Environmental Affair Excellence Award, a Research Division Award, and several Invention Achievement Awards. He received a best paper award from IEEE EEE 2004. He is an IBM Master Inventor. Philip S. Yu received the B.S. Degree in E.E. from National Taiwan University, the M.S. and Ph.D. degrees in E.E. from Stanford University, and the M.B.A.
degree from New York University. He is with the IBM Thomas J. Watson Research Center and currently manager of the Software Tools and Techniques group. His research interests include data mining, Internet applications and technologies, database systems, multimedia systems, parallel and distributed processing, and performance modeling. Dr. Yu has published more than 430 papers in refereed journals and conferences. He holds or has applied for more than 250 US patents. Dr. Yu is a Fellow of the ACM and a Fellow of the IEEE. He is an associate editor of ACM Transactions on Internet Technology and ACM Transactions on Knowledge Discovery from Data. He is a member of the IEEE Data Engineering steering committee and is also on the steering committee of the IEEE Conference on Data Mining. He was the Editor-in-Chief of IEEE Transactions on Knowledge and Data Engineering (2001–2004), and an editor, advisory board member, and guest co-editor of the special issue on mining of databases. He has also served as an associate editor of Knowledge and Information Systems. In addition to serving as a program committee member on various conferences, he will be serving as the general chair of the 2006 ACM Conference on Information and Knowledge Management and the program chair of the 2006 joint conferences of the 8th IEEE Conference on E-Commerce Technology (CEC '06) and the 3rd IEEE Conference on Enterprise Computing, E-Commerce and E-Services (EEE '06). He was the program chair or co-chair of the 11th IEEE Intl. Conference on Data Engineering, the 6th Pacific Area Conference on Knowledge Discovery and Data Mining, the 9th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, the 2nd IEEE Intl. Workshop on Research Issues on Data Engineering: Transaction and Query Processing, the PAKDD Workshop on Knowledge Discovery from Advanced Databases, and the 2nd IEEE Intl. Workshop on Advanced Issues of E-Commerce and Web-based Information Systems. He served as the general chair of the 14th IEEE Intl. Conference on Data Engineering and the general co-chair of the 2nd IEEE Intl. Conference on Data Mining. He has received several IBM honors, including 2 IBM Outstanding Innovation Awards, an Outstanding Technical Achievement Award, 2 Research Division Awards, and the 84th plateau of Invention Achievement Awards. He received an Outstanding Contributions Award from the IEEE Intl. Conference on Data Mining in 2003 and an IEEE Region 1 Award for "promoting and perpetuating numerous new electrical engineering concepts" in 1999. Dr. Yu is an IBM Master Inventor. Ling Liu is an associate professor at the College of Computing at Georgia Tech. There, she directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining research issues and technical challenges in building large-scale distributed computing systems that can grow without limits. Dr. Liu and the DiSL research group have been working on various aspects of distributed data intensive systems, ranging from decentralized overlay networks, exemplified by peer-to-peer computing and data grid computing, to mobile computing systems and location-based services, sensor network computing, and enterprise computing systems. She has published over 150 international journal and conference articles. Her research group has produced a number of software systems that are either open source or directly accessible online, among which the most popular are WebCQ and XWRAPElite. Dr.
Liu is currently on the editorial board of several international journals, including IEEE Transactions on Knowledge and Data Engineering, the International Journal on Very Large Data Bases (VLDBJ), and the International Journal of Web Services Research, and has chaired a number of conferences as a PC chair, a vice PC chair, or a general chair, including the IEEE International Conference on Data Engineering (ICDE 2004, ICDE 2006, ICDE 2007), the IEEE International Conference on Distributed Computing Systems (ICDCS 2006), and the IEEE International Conference on Web Services (ICWS 2004). She is a recipient of the IBM Faculty Award (2003, 2006). Dr. Liu's current research is partly sponsored by grants from NSF CISE CSR, ITR, CyberTrust, a grant from AFOSR, an IBM SUR grant, and an IBM Faculty Award.
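A minimal sketch of the selective-processing idea referenced in the abstract above, assuming each window tuple carries a learned benefit score and a fixed per-tuple probe budget; the names `benefit` and `budget` are illustrative, not the paper's API, and the adaptive and indexed parts are not shown.

```python
def selective_window_join(new_tuple, window, benefit, budget, key=lambda t: t[0]):
    """Join a newly arrived tuple against only the `budget` highest-benefit
    tuples in the opposite window, instead of dropping input tuples outright."""
    candidates = sorted(window, key=benefit, reverse=True)[:budget]
    return [(new_tuple, w) for w in candidates if key(new_tuple) == key(w)]

# Window tuples are (join_key, learned_benefit_score); probe only the top 3.
window = [(1, 0.9), (2, 0.1), (1, 0.7), (3, 0.4), (1, 0.05)]
benefit = lambda t: t[1]
print(selective_window_join((1, None), window, benefit, budget=3))
```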

15.
16.
The proliferation of ontologies and taxonomies in many domains increasingly demands the integration of multiple such ontologies. We propose a new taxonomy merging algorithm called Atom that, given as input two taxonomies and a match mapping between them, can generate an integrated taxonomy in a largely automatic manner. The approach is target-driven, i.e. we merge a source taxonomy into the target taxonomy and preserve the target taxonomy as much as possible. In contrast to previous approaches, Atom does not aim at fully preserving all input concepts and relationships but strives to reduce the semantic heterogeneity of the merge result for improved understandability. Atom can also exploit advanced match mappings containing is-a relationships in addition to equivalence relationships between concepts of the input taxonomies. We evaluate Atom for synthetic and real-world scenarios and compare it with a full merge solution.
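A heavily simplified, target-driven merge in the spirit described above, assuming taxonomies are given as child-to-parent dictionaries and the match mapping contains only equivalence correspondences; Atom's handling of is-a mappings and its semantic-heterogeneity reduction are not reproduced.

```python
def merge_taxonomies(target_parent, source_parent, equiv):
    """Target-driven merge sketch: keep the target taxonomy intact and add each
    unmapped source concept under the target counterpart of its nearest mapped
    ancestor (falling back to the target root).

    target_parent / source_parent: dict child -> parent; equiv: source -> target.
    """
    merged = dict(target_parent)                      # the target is preserved as-is
    for concept, parent in source_parent.items():
        if concept in equiv:                          # equivalent concept already in target
            continue
        anc = parent
        while anc is not None and anc not in equiv:   # climb to nearest mapped ancestor
            anc = source_parent.get(anc)
        merged[concept] = equiv.get(anc, "ROOT")
    return merged

target = {"Vehicle": "ROOT", "Car": "Vehicle", "Truck": "Vehicle"}
source = {"Automobile": None, "Sedan": "Automobile", "Pickup": "Automobile"}
print(merge_taxonomies(target, source, equiv={"Automobile": "Vehicle"}))
# {'Vehicle': 'ROOT', 'Car': 'Vehicle', 'Truck': 'Vehicle', 'Sedan': 'Vehicle', 'Pickup': 'Vehicle'}
```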

17.
18.
Fusing the environment maps built independently by each robot, so that their information can be shared, is key to improving the exploration efficiency of a distributed multi-robot system. This paper studies the grid map merging problem when there is no common reference frame and the relative positions of the robots are unknown, and proposes a grid map merging method based on an immune adaptive genetic algorithm. The algorithm takes as the antigen an objective function that measures the dissimilarity of the overlapping region of two grid maps, and each candidate planar transformation (translation and rotation) corresponds to an antibody. Simulation results show that the algorithm converges quickly, has strong global search ability, finds the best overlapping region of the two grid maps, and accomplishes map merging.
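A sketch of the kind of antigen (objective) such a genetic algorithm could evaluate for one antibody, i.e. one candidate translation-plus-rotation, assuming occupancy grids encoded as 0 = free, 1 = occupied, 0.5 = unknown; this encoding and the use of SciPy's image transforms are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def map_dissimilarity(map_a, map_b, tx, ty, theta_deg):
    """Transform map_b by (tx, ty, theta) and measure disagreement with map_a
    over the cells where both maps are observed (0.5 marks unknown cells)."""
    b = rotate(map_b, theta_deg, reshape=False, order=0, mode="constant", cval=0.5)
    b = shift(b, (ty, tx), order=0, mode="constant", cval=0.5)
    observed = (map_a != 0.5) & (b != 0.5)
    if not observed.any():
        return 1.0                                    # no overlap: worst score
    return float(np.mean(map_a[observed] != b[observed]))

# Toy usage: map_b is map_a shifted by 2 cells; the true offset scores 0.0.
a = np.zeros((32, 32)); a[10:14, 10:20] = 1.0
b = np.roll(a, 2, axis=1)
print(map_dissimilarity(a, b, tx=0, ty=0, theta_deg=0),
      map_dissimilarity(a, b, tx=-2, ty=0, theta_deg=0))
```

A genetic algorithm would treat each candidate (tx, ty, theta) as an antibody and minimize this dissimilarity to find the best overlap of the two maps.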

19.
A Survey of Ranking Techniques for Meta-Search Engines   Cited by: 5 (self-citations: 0, other citations: 5)
Abstract: Result ranking is a key technique in implementing a meta-search engine, and the quality of the ranking algorithm directly determines the engine's performance. This paper surveys the ranking algorithms commonly used in meta-search engines in the order of their development, analyzes and evaluates several classic algorithms, summarizes the environments to which different meta-search ranking algorithms are suited, and concludes with an outlook on future directions for meta-search ranking algorithms.

20.
This paper presents a new method for evaluating Boolean set operations between Binary Space Partition (BSP) trees. Our algorithm has many desirable features, including both numerical robustness and O(n) output-sensitive time complexity, while simultaneously admitting a straightforward implementation. To achieve these properties, we present two key algorithmic improvements. The first is a method for eliminating null regions within a BSP tree using linear programming. This replaces previous techniques based on polygon cutting and tree splitting. The second is an improved method for compressing BSP trees based on a similar approach within binary decision diagrams. The performance of the new method is analyzed both theoretically and experimentally. Given the importance of Boolean set operations, our algorithms can be directly applied to many problems in graphics, CAD and computational geometry.
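The null-region test can be posed as a linear-programming feasibility check; the sketch below uses SciPy's linprog with half-spaces written as (a1, ..., an, b) meaning a·x ≤ b, which is an illustrative formulation rather than the paper's implementation or its robustness handling.

```python
import numpy as np
from scipy.optimize import linprog

def region_is_empty(halfspaces):
    """Check whether the region {x : A x <= b} defined by the half-spaces
    accumulated along a BSP-tree path is empty, using a feasibility LP."""
    A = np.array([h[:-1] for h in halfspaces], dtype=float)
    b = np.array([h[-1] for h in halfspaces], dtype=float)
    c = np.zeros(A.shape[1])                          # any feasible point will do
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 2                            # status 2 = infeasible, i.e. null region

# x <= 1 and -x <= -2 (i.e. x >= 2) cannot both hold: a null region.
print(region_is_empty([(1.0, 1.0), (-1.0, -2.0)]))    # True
```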

