Similar Documents
20 similar documents found.
1.
黄坤  吴玉佳  李晶 《电子学报》2018,46(8):1804-1814
High-utility itemset mining has become a hot research topic in association rule mining. Several algorithms based on a vertical data layout have been used to mine high-utility itemsets; their main advantage is that the transaction and utility information of an itemset is stored in a utility list, so the transactions containing a superset of an itemset can be obtained by a single intersection of the lists of its subsets. Such algorithms are very effective on sparse datasets, but on dense datasets the lists store too many transactions, so computing the utility upper bounds used for pruning consumes a large amount of memory and also slows execution. Moreover, existing work lacks high-utility itemset mining algorithms tailored to dense datasets and often has to set a very high minimum utility threshold, which hurts efficiency. To address this problem, a new algorithm, D-HUI (mining High Utility Itemsets using Diffsets), and a new data structure, the itemset list, are proposed, introducing the concept of diffsets into high-utility itemset mining for the first time. Utility upper bounds of itemsets are derived from transaction diffsets, which reduces both computation and memory usage and thus improves efficiency. Experimental results show that on dense datasets the proposed algorithm runs faster and consumes less memory.
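To make the diffset idea concrete, here is a minimal, hypothetical Python sketch of how a diffset can tighten a transaction-weighted utility upper bound; the names and the bound are illustrative and do not reproduce the paper's D-HUI itemset-list structure.

```python
# Minimal sketch of the diffset idea for utility-list style mining (not the
# authors' exact D-HUI data structure; names and bound are illustrative).

def diffset(tids_prefix, tids_ext):
    """Transactions that contain the prefix P but NOT the extension Px."""
    return tids_prefix - tids_ext

def upper_bound_via_diffset(bound_prefix, diff, tu):
    """Tighten the prefix's utility upper bound by removing the transaction
    utilities of the transactions lost when extending the prefix (a common
    TWU-style bound; the paper's exact bound may differ)."""
    return bound_prefix - sum(tu[t] for t in diff)

# toy data: tid -> transaction utility
tu = {1: 10, 2: 7, 3: 12, 4: 5}
tids_P  = {1, 2, 3, 4}      # transactions containing prefix P
tids_Px = {1, 3}            # transactions containing P extended with item x
d = diffset(tids_P, tids_Px)                 # {2, 4}
print(upper_bound_via_diffset(34, d, tu))    # 34 - (7 + 5) = 22
```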

2.
Mining sequential patterns is an important research issue in data mining and knowledge discovery with broad applications. However, existing sequential pattern mining approaches consider only binary frequency values of items in sequences and equal importance/significance values for distinct items. Therefore, they cannot faithfully represent many real-world scenarios. In this paper, we propose a novel framework for mining high-utility sequential patterns, enabling more realistic information extraction from sequence databases with non-binary frequency values of items in sequences and different importance/significance values for distinct items. Moreover, for mining high-utility sequential patterns, we propose two new algorithms: UtilityLevel, a high-utility sequential pattern mining algorithm with level-wise candidate generation, and UtilitySpan, a high-utility sequential pattern mining algorithm with a pattern-growth approach. Extensive performance analyses show that our algorithms are very efficient and scalable for mining high-utility sequential patterns.
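As a toy illustration of the "utility of a sequential pattern" notion, the following hedged Python sketch scores the best occurrence of a pattern in a quantitative sequence database; UtilityLevel and UtilitySpan themselves use dedicated data structures and pruning that are not reproduced here.

```python
# Hedged sketch: scoring a sequential pattern in a quantitative sequence
# database (illustrative only; not the paper's algorithms).

profit = {"a": 2, "b": 5, "c": 1}          # per-unit (external) utility

def max_occurrence_utility(pattern, seq, i=0, j=0):
    """Highest utility of any in-order occurrence of `pattern` in `seq`,
    where seq is a list of (item, quantity) events; None if no occurrence."""
    if i == len(pattern):
        return 0
    best = None
    for k in range(j, len(seq)):
        item, qty = seq[k]
        if item == pattern[i]:
            rest = max_occurrence_utility(pattern, seq, i + 1, k + 1)
            if rest is not None:
                cand = profit[item] * qty + rest
                best = cand if best is None or cand > best else best
    return best

db = [[("a", 1), ("b", 2), ("a", 3), ("b", 1)],
      [("c", 4), ("a", 2), ("b", 1)]]
pattern = ["a", "b"]
print(sum(max_occurrence_utility(pattern, s) or 0 for s in db))   # 12 + 9 = 21
```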

3.
Utility-based resource allocation in multi-cell orthogonal frequency-division multiplexing (OFDM) systems plays a critical role in next-generation mobile communication systems. Based on an analysis of risk-averse utility functions, this article proposes a system-utility-based utility function, named the customer satisfaction (CS) utility. Compared with the proportional fairness (PF) utility, the CS utility better reflects user demands and enables the system to adjust its resource allocation according to both traffic requirements and the resource situation.

4.
1 Introduction  With increasing competition in telecom, telecom operators are under pressure to provide more high-performance services based on limited network resources. Adaptive QoS service is such a solution, which can increase network efficiency and ultimately produce more profit. Service QoS control and service inventory management are key techniques for adaptive QoS services. Many issues in these aspects, such as characterization of QoS, mapping QoS to resources, efficiency and …

5.
A new approach, named TCP-I2NC, is proposed to improve the interaction between network coding and TCP and to maximize the network utility in interference-free multi-radio multi-channel wireless mesh networks. It is grounded on a Network Utility Maximization (NUM) formulation which can be decomposed into a rate control problem and a packet scheduling problem. The solutions to these two problems perform resource allocation among different flows. Simulations demonstrate that TCP-I2NC results in a significant throughput gain and a small delay jitter. Network resource is fairly allocated via the solution to the NUM problem and the whole system also runs stably. Moreover, TCP-I2NC is compatible with traditional TCP variants.
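The NUM decomposition mentioned above can be illustrated with a generic dual-decomposition sketch; the routing matrix and log utilities below are assumptions, and this is not the paper's TCP-I2NC controller.

```python
# Hedged sketch of generic NUM via dual (sub)gradient ascent: flows maximize
# sum(log x_f) subject to per-link capacity constraints; link "prices" couple
# the rate-control and scheduling subproblems.
import numpy as np

R = np.array([[1, 1, 0],      # routing matrix: link x flow
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])      # link capacities
lam = np.ones(R.shape[0])     # dual variables (link prices)
step = 0.05

for _ in range(2000):
    price_per_flow = R.T @ lam                        # sum of prices on each flow's path
    x = 1.0 / np.maximum(price_per_flow, 1e-9)        # argmax of log(x) - price * x
    lam = np.maximum(lam + step * (R @ x - c), 0.0)   # price update

print(np.round(x, 3))   # converges to the proportionally fair rates
```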

6.
Algorithms are presented in this paper for VLSI layout of some binary tree structures. In particular, we consider X-Tree and Hypertree multicomputers and show that they can be laid out in O(8n) and O(nn) area, respectively. Our algorithms are based on an original algorithm proposed by Horowitz and Zorat for an arbitrary tree structure. The layout follows Thompson's VLSI model of computation, which allows two layers of metalization.

7.
This paper studies distributed non-cooperative power control games for wireless data services in MIMO-CDMA systems. A payoff function is constructed for wireless data services in MIMO-CDMA systems that accounts for both power efficiency and spectral efficiency and reflects how satisfied the wireless data users are with their quality of service (QoS). Based on this payoff function, two non-cooperative power control game models are established, and the existence and uniqueness of their Nash equilibria are derived. In addition, two pricing (cost function) mechanisms are studied. Finally, an algorithm for reaching the Nash equilibrium is given; numerical simulation results show that the algorithm performs well and effectively controls each user's transmit power.
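A hedged sketch of a generic best-response iteration for such a power control game is given below; the efficiency-style payoff and gain matrix are illustrative assumptions, not the paper's MIMO-CDMA payoff or pricing functions.

```python
# Hedged sketch of best-response dynamics in a non-cooperative power control
# game with a throughput-per-watt style payoff (illustrative only).
import numpy as np

G = np.array([[1.0, 0.2, 0.1],     # link gains: G[i, j] = j's power as seen by i
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0]])
noise, pmax = 0.1, 2.0
grid = np.linspace(1e-3, pmax, 400)     # candidate transmit powers

def utility(i, p_i, p):
    interference = noise + sum(G[i, j] * p[j] for j in range(len(p)) if j != i)
    sinr = G[i, i] * p_i / interference
    return np.log2(1.0 + sinr) / p_i    # bits per joule (efficiency payoff)

p = np.full(3, pmax)                    # start at maximum power
for _ in range(50):                     # iterate best responses until stable
    for i in range(3):
        p[i] = grid[np.argmax([utility(i, q, p) for q in grid])]
print(np.round(p, 3))
```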

8.
We examine various algorithms for calculating quality of service (QoS)-enabled paths spanning multiple autonomous systems (ASs) using the path computation element (PCE) architecture. The problem is divided into two parts. We first calculate an AS path, then the node-by-node path. Using extensive simulation, we compared various AS-path calculation algorithms based on border gateway protocol (BGP) and various AS-aggregation procedures, such as mesh, star and nodal aggregation. For node-to-node path calculation, we employed the per-domain backward algorithm and the per-domain backward tree algorithm (also known as backward recursive PCE-based computation). Results indicate that complex AS-path calculation algorithms do not perform significantly better than BGP. However, if the service quality provided by ASs varies greatly, either in time or space, then we expect a QoS-aware AS-path computation algorithm, e.g., static nodal aggregation, to outperform BGP. Although the per-domain backward tree algorithm generally performs better than the per-domain backward algorithm, using a persistent variant of the latter makes it outperform the per-domain backward tree algorithm. The cost is a negligible increase in computational complexity and a slightly increased connection setup delay.

9.
In this paper, we offer a new technique to discover frequent spatiotemporal patterns from a moving object database. Though searching the space of spatiotemporal knowledge is extremely challenging, imposing spatial and timing constraints on moving sequences makes the computation feasible. The proposed technique includes two algorithms, AllMOP and MaxMOP, to find all frequent patterns and maximal patterns, respectively. In addition, to support the service provider in sending information to a user in a push-driven manner, we propose a rule-based location prediction technique to predict the future location of the user. The idea is to employ the algorithm AllMOP to discover the frequent movement patterns in the user's historical movements, from which frequent movement rules are generated. These rules are then used to estimate the future location of the user. The performance is assessed with respect to precision and recall. The proposed techniques could be quite efficiently applied in a location-based service (LBS) system in which diverse types of data are integrated to support a variety of LBSs.
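The rule-based prediction step can be illustrated with a small, hypothetical sketch: match the longest rule antecedent against the tail of the current trajectory and return its predicted next location (the AllMOP/MaxMOP rule-generation step is not shown).

```python
# Hedged sketch of rule-based next-location prediction from movement rules
# (illustrative matching strategy; rules here are hypothetical).

# rules: (antecedent path, predicted next location, confidence)
rules = [
    (("home", "station"), "office", 0.8),
    (("station",), "mall", 0.6),
    (("gym", "station"), "home", 0.7),
]

def predict_next(trajectory):
    """Pick the rule whose antecedent matches a suffix of the trajectory,
    preferring longer antecedents, then higher confidence."""
    best = None
    for antecedent, nxt, conf in rules:
        if tuple(trajectory[-len(antecedent):]) == antecedent:
            key = (len(antecedent), conf)
            if best is None or key > best[0]:
                best = (key, nxt)
    return best[1] if best else None

print(predict_next(["home", "station"]))   # -> "office"
```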

10.
The computation of Chebyshev polynomials over a finite field is a dominant operation in a class of public key cryptosystems. Two generic algorithms have been presented for this computation: the matrix algorithm and the characteristic polynomial algorithm; both are feasible but not optimized. In this paper, these two algorithms are modified in procedure to obtain faster execution. The asymptotic complexity of the modified algorithms is unchanged, but the number of required operations is reduced, so the execution speed is improved. Besides, a new algorithm based on the eigenvalues of the matrix representation of Chebyshev polynomials is also presented, which can further reduce the running time of the computation if certain conditions are satisfied. Software implementations of these algorithms are realized, and a running-time comparison is given. Finally, an efficient scheme for the computation of Chebyshev polynomials over finite fields is presented.
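For reference, a minimal sketch of the textbook matrix formulation follows: T_n(x) mod p is obtained by square-and-multiply exponentiation of the 2x2 recurrence matrix, which is the baseline the paper's optimized variants improve upon (the optimizations themselves are not reproduced).

```python
# Hedged sketch of the "matrix algorithm" idea: evaluate the Chebyshev
# polynomial T_n(x) over GF(p) by fast exponentiation of the recurrence matrix.

def mat_mul(A, B, p):
    return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p,
             (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p],
            [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p,
             (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p]]

def chebyshev_mod(n, x, p):
    """T_n(x) mod p using T_k = 2x*T_{k-1} - T_{k-2}, T_0 = 1, T_1 = x."""
    if n == 0:
        return 1 % p
    M = [[2 * x % p, p - 1], [1, 0]]          # recurrence matrix (p - 1 is -1 mod p)
    R = [[1, 0], [0, 1]]                      # identity
    e = n - 1
    while e:                                  # square-and-multiply: O(log n) matrix products
        if e & 1:
            R = mat_mul(R, M, p)
        M = mat_mul(M, M, p)
        e >>= 1
    return (R[0][0] * x + R[0][1] * 1) % p    # [T_n, T_{n-1}] = M^(n-1) [T_1, T_0]

p = 1000003
print(chebyshev_mod(5, 7, p))                 # equals 2*7*T_4(7) - T_3(7) mod p
```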

11.
In this paper, we develop a measure-theoretic version of the junction tree algorithm to compute desired marginals of a product function. We reformulate the problem in a measure-theoretic framework, where the desired marginals are viewed as corresponding conditional expectations of a product of random variables. We generalize the notions of independence and junction trees to collections of σ-fields on a space with a signed measure. We provide an algorithm to find such a junction tree when one exists. We also give a general procedure to augment the σ-fields to create independencies, which we call "lifting." This procedure is the counterpart of the moralization and triangulation procedure in the conventional generalized distributive law (GDL) framework, in order to guarantee the existence of a junction tree. Our procedure includes the conventional GDL procedure as a special case. However, it can take advantage of structures at the atomic level of the sample space to produce junction tree-based algorithms for computing the desired marginals that are less complex than those GDL can discover, as we argue through examples. Our formalism gives a new way by which one can hope to find low-complexity algorithms for marginalization problems.
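The distributive-law idea that junction-tree/GDL algorithms exploit can be seen in a tiny chain example (plain NumPy; the paper's measure-theoretic construction and lifting procedure are not reproduced).

```python
# Hedged sketch of the distributive law behind junction-tree/GDL
# marginalization: marginalize a product of pairwise factors on a chain
# without enumerating the full joint.
import itertools
import numpy as np

K = 3                                   # each variable takes values 0..K-1
f12 = np.random.rand(K, K)              # factor on (x1, x2)
f23 = np.random.rand(K, K)              # factor on (x2, x3)

# brute force: sum over x2, x3 of f12[x1, x2] * f23[x2, x3]
brute = np.array([sum(f12[x1, x2] * f23[x2, x3]
                      for x2, x3 in itertools.product(range(K), repeat=2))
                  for x1 in range(K)])

# distributive law: push the sum over x3 inward (message passing on a chain)
m3_to_2 = f23.sum(axis=1)               # message m(x2) = sum_x3 f23[x2, x3]
fast = f12 @ m3_to_2                    # marginal of x1

print(np.allclose(brute, fast))         # True
```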

12.
This paper studies the model checking problem of computation tree logic (CTL) over nondeterministic fuzzy Kripke structures and shows that the problem can be solved in log-polynomial time. First, the definition of nondeterministic fuzzy Kripke structures is given, and the syntax and semantics of fuzzy computation tree logic are introduced. To capture the two semantic interpretations of the existential quantifier ∃ and the universal quantifier ∀ over nondeterministic fuzzy Kripke structures, the path quantifiers ∃sup, ∃inf and ∀sup, ∀inf are introduced into the syntax of fuzzy computation tree logic, replacing ∃ and ∀ respectively. Then, model checking algorithms for computation tree logic over nondeterministic fuzzy Kripke structures are discussed; in particular, improved algorithms with log-polynomial time complexity are given for the fuzzy CTL formulas ∃sup pUq, ∀sup pUq, ∃inf pUq and ∀inf pUq.
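As a rough illustration only, the following sketch computes a max-min fixpoint for an "exists p Until q" value over a fuzzy transition relation under Goedel-style semantics; the valuations, transition values, and the chosen semantics are assumptions and do not reproduce the paper's nondeterministic structures or its improved algorithms.

```python
# Hedged sketch of a max-min fixpoint for a fuzzy "exists p Until q" value
# (illustrative Goedel-style semantics over a fuzzy transition relation).

states = ["s0", "s1", "s2"]
R = {("s0", "s1"): 0.8, ("s1", "s2"): 0.6, ("s0", "s2"): 0.3}  # fuzzy transitions
p = {"s0": 0.9, "s1": 0.7, "s2": 0.2}    # fuzzy valuation of p
q = {"s0": 0.1, "s1": 0.2, "s2": 0.9}    # fuzzy valuation of q

def exists_sup_until(p, q, R, states, iters=None):
    v = dict(q)                                   # start from q
    for _ in range(iters or len(states)):         # here the fixpoint stabilizes within |S| steps
        nxt = {}
        for s in states:
            step = max((min(R[(s, t)], v[t]) for t in states if (s, t) in R),
                       default=0.0)               # best one-step continuation
            nxt[s] = max(q[s], min(p[s], step))   # q now, or p here and Until later
        v = nxt
    return v

print(exists_sup_until(p, q, R, states))
```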

13.
The structures of convolutional self-orthogonal codes and convolutional self-doubly-orthogonal codes for both belief propagation and threshold iterative decoding algorithms are analyzed on the basis of difference sets and computation tree. It is shown that the double orthogonality property of convolutional self-doubly-orthogonal codes improves the code structure by maximizing the number of independent observations over two successive decoding iterations while minimizing the number of cycles of lengths 6 and 8 on the code graphs. Thus, the double orthogonality may improve the iterative decoding in both convergence speed and error performance. In addition, the double orthogonality makes the computation tree rigorously balanced. This allows the determination of the best weighing technique, so that the error performance of the iterative threshold decoding algorithm approaches that of the iterative belief propagation decoding algorithm, but at a substantial reduction of the implementation complexity.

14.
Classification and compression play important roles in communicating digital information. Their combination is useful in many applications, including the detection of abnormalities in compressed medical images. In view of the similarities of compression and low-level classification, it is not surprising that there are many similar methods for their design. Because some of these methods are useful for designing vector quantizers, it seems natural that vector quantization (VQ) is explored for the combined goal. We investigate several VQ-based algorithms that seek to minimize both the distortion of compressed images and errors in classifying their pixel blocks. These algorithms are investigated with both full search and tree-structured codes. We emphasize a nonparametric technique that minimizes both error measures simultaneously by incorporating a Bayes risk component into the distortion measure used for the design and encoding. We introduce a tree-structured posterior estimator to produce the class posterior probabilities required for the Bayes risk computation in this design. For two different image sources, we demonstrate that this system provides superior classification while maintaining compression close to or better than that of several other VQ-based designs, including Kohonen's (1992) "learning vector quantizer" and a sequential quantizer/classifier design.
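A hedged sketch of encoding with a Bayes-risk-augmented distortion measure follows; the codebook, the 0/1 cost, and the weighting factor are illustrative assumptions, not the paper's tree-structured design.

```python
# Hedged sketch of VQ encoding with a Bayes-risk-augmented distortion measure:
# squared error plus lambda * expected misclassification cost (illustrative).
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))          # 8 codewords for 4x4 pixel blocks
code_class = rng.integers(0, 2, size=8)      # class label attached to each codeword
lam = 5.0                                    # weight of the Bayes-risk term

def encode(block, posterior):
    """Pick the codeword minimizing squared error + lam * P(true class != codeword class).
    `posterior` = estimated P(class | block), e.g. from a posterior estimator."""
    costs = []
    for w, cls in zip(codebook, code_class):
        mse = np.sum((block - w) ** 2)
        bayes_risk = 1.0 - posterior[cls]    # expected 0/1 misclassification cost
        costs.append(mse + lam * bayes_risk)
    return int(np.argmin(costs))

block = rng.normal(size=16)
print(encode(block, posterior=np.array([0.2, 0.8])))
```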

15.
Utility is an important factor for service providers, and they try to increase their utilities through adopting different policies and strategies. Because of unpredictable failures in systems, there are many scenarios in which failures may cause random losses for service providers. Loss sharing can decrease the negative effects of unexpected random losses. Because of the capabilities of learning automata in random and stochastic environments, in this paper a new learning automaton based method is presented for loss sharing. It is illustrated that loss sharing can be useful for service providers and helps them decrease the negative effect of random losses. The presented method can be used especially in collaborative environments such as federated clouds. Results of the conducted experiments show the usefulness of the presented approach to improve the utility of service providers.
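The basic learning-automaton update behind such methods can be sketched as a linear reward-inaction (L_R-I) scheme; the toy environment below is an assumption, and the paper's actual loss-sharing protocol among providers is not reproduced.

```python
# Hedged sketch of a linear reward-inaction (L_R-I) learning automaton,
# the basic update rule used by learning-automata methods.
import random

actions = ["share", "do_not_share"]
prob = [0.5, 0.5]                 # action probabilities
a = 0.05                          # reward step size

def environment(action):
    """Toy stochastic environment: 'share' is rewarded more often."""
    return random.random() < (0.7 if action == "share" else 0.4)

for _ in range(2000):
    i = random.choices(range(len(actions)), weights=prob)[0]
    if environment(actions[i]):                      # reward: shift probability toward action i
        for j in range(len(prob)):
            prob[j] = (1 - a) * prob[j] + (a if j == i else 0.0)
    # on penalty, L_R-I leaves the probabilities unchanged

print(actions[prob.index(max(prob))], [round(x, 3) for x in prob])
```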

16.
Network resource scheduling based on marginal utility functions
宋亚楠  仲茜  刘斌 《电子学报》2013,41(4):632-638
To address the weak generality of models and the poor quality and low speed of solution algorithms in current utility-based network resource scheduling, a utility-optimizing resource scheduling method based on marginal utility functions is proposed. According to the characteristics of their marginal utility functions, network applications are divided into elastic and inelastic applications, and the utility function of each application is derived from its marginal utility function. Applying these utility functions to the network resource scheduling problem, an efficient solution algorithm is given. Simulation experiments show that, compared with the latest algorithms of the same kind and with the algorithm in the classical optimization solver Lingo9.0, the total utility obtained by the proposed algorithm is on average 5% and 4% higher, while the running time is only 0.2% and 0.003% of theirs.
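A minimal sketch of allocation driven by marginal utility follows: each unit of bandwidth goes to the application whose marginal utility is currently highest. The elastic/inelastic marginal-utility shapes are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of scheduling by marginal utility: repeatedly grant the next
# unit of bandwidth to the application with the highest marginal utility.
import math

def marginal_elastic(x):
    return 1.0 / (1.0 + x)                 # diminishing returns (log-like utility)

def marginal_inelastic(x, need=6.0, k=2.0):
    # steep around the required rate `need` (sigmoid-like utility)
    s = 1.0 / (1.0 + math.exp(-k * (x - need)))
    return k * s * (1.0 - s)

apps = {"elastic_flow": marginal_elastic, "video_flow": marginal_inelastic}
alloc = {name: 0.0 for name in apps}
capacity, unit = 10.0, 0.1

for _ in range(round(capacity / unit)):
    best = max(apps, key=lambda name: apps[name](alloc[name]))
    alloc[best] += unit                    # grant the unit to the best app

print({k: round(v, 1) for k, v in alloc.items()})
```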

17.
The least squares (LS) minimization problem constitutes the core of many real-time signal processing problems, such as adaptive filtering, system identification and adaptive beamforming. Recently, efficient implementations of the recursive least squares (RLS) algorithm and the constrained recursive least squares (CRLS) algorithm based on the numerically stable QR decomposition (QRD) have been of great interest. Several papers have proposed modifications to the rotation algorithm that circumvent the square root operations and minimize the number of divisions that are involved in the Givens rotation. It has also been shown that all the known square root free algorithms are instances of one parametric algorithm. Recently, a square root free and division free algorithm has also been proposed. In this paper, we propose a family of square root and division free algorithms and examine its relationship with the square root free parametric family. We choose a specific instance for each one of the two parametric algorithms and make a comparative study of the systolic structures based on these two instances, as well as the standard Givens rotation. We consider the architectures for both the optimal residual computation and the optimal weight vector extraction. The dynamic range of the newly proposed algorithm for QRD-RLS optimal residual computation and the wordlength lower bounds that guarantee no overflow are presented. The numerical stability of the algorithm is also considered. A number of obscure points relevant to the realization of the QRD-RLS and the QRD-CRLS algorithms are clarified. Some systolic structures that are described in this paper are very promising, since they require less computational complexity (in various aspects) than the structures known to date and they make the VLSI implementation easier.
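For context, a standard Givens-rotation QRD-RLS update (with square roots and divisions) is sketched below; the square-root-free and division-free variants discussed in the abstract modify exactly this rotation step.

```python
# Hedged sketch of a standard Givens-rotation QRD-RLS update.
import numpy as np

def qrd_rls_update(R, z, x, d, lam=0.99):
    """Update the upper-triangular factor R and rotated vector z with one new
    sample: input vector x (length n) and desired response d."""
    n = len(x)
    R = np.sqrt(lam) * R.copy()
    z = np.sqrt(lam) * z.copy()
    x = x.astype(float).copy()
    d = float(d)
    for i in range(n):                       # annihilate x[i] against R[i, i]
        r = np.hypot(R[i, i], x[i])
        if r == 0.0:
            continue
        c, s = R[i, i] / r, x[i] / r
        Ri, xi = R[i, i:].copy(), x[i:].copy()
        R[i, i:] = c * Ri + s * xi
        x[i:] = -s * Ri + c * xi
        z[i], d = c * z[i] + s * d, -s * z[i] + c * d
    return R, z

# the weights solving R w = z are then obtained by back-substitution
rng = np.random.default_rng(1)
n = 3
R, z = np.zeros((n, n)), np.zeros(n)
w_true = np.array([1.0, -2.0, 0.5])
for _ in range(200):
    x = rng.normal(size=n)
    d = x @ w_true + 0.01 * rng.normal()
    R, z = qrd_rls_update(R, z, x, d)
print(np.round(np.linalg.solve(R, z), 3))    # close to w_true
```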

18.
Jamming signal detection and recognition based on a self-attention mechanism
To solve the problem of real-time detection and recognition of jamming in satellite communication systems operating in contested electromagnetic environments, an efficient lightweight network model based on the self-attention (SA) mechanism is proposed. DenseNet is adopted to speed up feature extraction from raw IQ signals, and a self-attention module replaces the parameter-heavy stacked convolutional layers to classify the jamming patterns commonly seen in satellite communication systems. Simulation results show that, while matching the recognition accuracy of conventional neural network models and algorithms, the proposed model substantially reduces network complexity and computation latency.
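The self-attention building block referred to above is the standard scaled dot-product attention; a minimal NumPy sketch is given below (the paper's lightweight DenseNet + SA architecture and trained weights are not reproduced).

```python
# Hedged sketch of a scaled dot-product self-attention layer in NumPy.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (T, d) sequence of features (e.g. framed IQ samples); returns (T, d_v)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (T, T) similarity
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)                # softmax over keys
    return A @ V                                     # attention-weighted values

rng = np.random.default_rng(0)
T, d, dk = 128, 32, 16                               # sequence length, dims
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, dk)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (128, 16)
```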

19.
As a promising computing paradigm, Mobile Edge Computing (MEC) provides communication and computing capability at the edge of the network to address the concerns of massive computation requirements, constrained battery capacity and limited bandwidth in Internet of Things (IoT) systems. Most existing work on mobile edge task offloading ignores delay sensitivity, which may lead to degraded utility of computation offloading and dissatisfied users. In this paper, we study delay sensitivity-aware computation offloading by jointly considering both the user's tolerance towards the delay of task execution and the network status under computation and communication constraints. Specifically, we use a specific multi-user and multi-server MEC system to define the latency sensitivity of task offloading based on an analysis of the delay distribution of task categories. Then, we propose a scoring mechanism to evaluate the sensitivity-dependent utility of task execution and devise a Centralized Iterative Redirection Offloading (CIRO) algorithm to collect all information in the MEC system. Starting with an initial offloading strategy, the CIRO algorithm enables IoT devices to cooperate and iteratively redirect task offloading decisions to optimize the offloading strategy until it converges. Extensive simulation results show that our method can significantly improve the utility of computation offloading in MEC systems and has lower time complexity than existing algorithms.
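A generic iterative-redirection loop in the spirit of CIRO can be sketched as follows; the cost model, sensitivity weights, and server parameters are illustrative assumptions, and the paper's scoring mechanism and convergence analysis are not reproduced.

```python
# Hedged sketch of delay-sensitivity-weighted offloading with iterative
# redirection of task assignments (illustrative cost model only).

tasks = {"t1": 5.0, "t2": 3.0, "t3": 8.0}              # task workloads
sensitivity = {"t1": 1.0, "t2": 0.3, "t3": 0.7}        # delay-sensitivity weights
servers = {"local": 1.0, "edge1": 4.0, "edge2": 3.0}   # processing speeds
transfer = {"local": 0.0, "edge1": 0.6, "edge2": 0.9}  # transfer time per unit work

def total_cost(assign):
    """Sensitivity-weighted completion cost: shared server delay + transfer delay."""
    load = {s: sum(tasks[t] for t, srv in assign.items() if srv == s) for s in servers}
    return sum(sensitivity[t] * (load[s] / servers[s] + transfer[s] * tasks[t])
               for t, s in assign.items())

assign = {t: "local" for t in tasks}                   # initial offloading strategy
improved = True
while improved:                                        # redirect decisions until convergence
    improved = False
    for t in tasks:
        best = min(servers, key=lambda s: total_cost({**assign, t: s}))
        if total_cost({**assign, t: best}) < total_cost(assign) - 1e-9:
            assign[t], improved = best, True

print(assign, round(total_cost(assign), 2))
```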

20.
Recently, graphics processing units (GPUs) have been rising as a new vehicle for high-performance, general-purpose computing. It is attractive to unleash the power of GPUs for Electronic Design Automation (EDA) computations to cut the design turn-around time of VLSI systems. EDA algorithms, however, generally depend on irregular data structures such as sparse matrices and graphs, which pose major challenges for efficient GPU implementations. In this paper, we propose high-performance GPU implementations for a set of important irregular EDA computing patterns including sparse matrix, graph algorithms and message-passing algorithms. In the sparse matrix domain, we solve a core problem, the sparse-matrix vector product (SMVP). On a wide range of EDA problem instances, our SMVP implementation outperforms all prior work and achieves a speedup of up to 50× over the CPU baseline implementation. The GPU based SMVP procedure is applied to successfully accelerate two core EDA computing engines, timing analysis and linear system solution. In the graph algorithm domain, we developed a SMVP based formulation to efficiently solve the breadth-first search (BFS) problem on GPUs. We also developed efficient solutions for two message-passing algorithms, survey propagation (SP) based SAT solution and register-transfer level (RTL) simulation. Our results prove that GPUs have a strong potential to accelerate EDA computing through designing GPU-friendly algorithms and/or re-organizing computing structures of sequential algorithms.
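The SMVP kernel at the heart of this work can be sketched in CSR form; the Python below is a plain reference implementation, with a comment marking the per-row independence a GPU kernel would exploit.

```python
# Hedged sketch of the sparse matrix-vector product (SMVP) in CSR storage
# (reference implementation; a GPU kernel would map rows, or groups of
# nonzeros, to parallel threads).
import numpy as np

# CSR storage of a 3x3 sparse matrix
values  = np.array([10.0, 20.0, 30.0, 40.0])   # nonzero values
col_idx = np.array([0, 2, 1, 2])               # column of each nonzero
row_ptr = np.array([0, 2, 3, 4])               # start of each row in `values`
x = np.array([1.0, 2.0, 3.0])

def smvp_csr(values, col_idx, row_ptr, x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):                    # each row is independent -> parallel on GPU
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

print(smvp_csr(values, col_idx, row_ptr, x))   # [10*1 + 20*3, 30*2, 40*3]
```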
