Similar Documents
20 similar documents found.
1.
Mirror adaptive random testing
Recently, adaptive random testing (ART) has been introduced to improve the fault-detection effectiveness of random testing for non-point types of failure patterns. However, ART requires additional computations to ensure an even spread of test cases, which may render ART less cost-effective than random testing. This paper presents a new technique, namely mirror ART, to reduce these computations. It is an integration of the technique of mirroring and ART. Our simulation results clearly show that mirror ART does improve the cost-effectiveness of ART.
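As a rough sketch of the mirroring idea (not necessarily the paper's exact algorithm), the following Python splits the unit square into two halves: FSCS-style ART distance computations run only on points generated in the left half, and each selected point is mirrored into the right half for free. All names and parameters are illustrative.

```python
import random

def fscs_art_point(executed, num_candidates=10, half_width=0.5):
    """Pick, from random candidates in the left half-domain, the one
    farthest from all previously executed test cases (FSCS-style ART)."""
    if not executed:
        return (random.uniform(0, half_width), random.uniform(0, 1))
    best, best_dist = None, -1.0
    for _ in range(num_candidates):
        c = (random.uniform(0, half_width), random.uniform(0, 1))
        d = min((c[0] - e[0]) ** 2 + (c[1] - e[1]) ** 2 for e in executed)
        if d > best_dist:
            best, best_dist = c, d
    return best

def mirror_art(n_tests):
    """Mirror ART: run ART in one subdomain, mirror each point into the other,
    so distance computations only ever involve the source subdomain."""
    executed, tests = [], []
    while len(tests) < n_tests:
        p = fscs_art_point(executed)
        executed.append(p)                 # only source points feed ART distances
        tests.append(p)
        tests.append((1.0 - p[0], p[1]))   # mirror image in the right half
    return tests[:n_tests]

print(mirror_art(6))
```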

2.

This paper presents a symmetric cipher that is actually a variation of the Hill cipher. The new scheme makes use of “random” permutations of columns and rows of a matrix to form a “different” key for each data encryption. The cipher has matrix products and permutations as the only operations which may be performed “efficiently” by primitive operators, when the system parameters are carefully chosen.
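A minimal sketch of the key-variation idea, under assumptions of our own (a prime modulus of 257 and one fixed invertible 4×4 key matrix; the paper's actual parameter choices are not reproduced): each block is enciphered under the key matrix with freshly permuted rows and columns.

```python
import numpy as np

P = 257  # prime modulus -- an assumption of this sketch, not the paper's choice

def mat_inv_mod(K, p=P):
    """Invert matrix K modulo prime p by Gauss-Jordan elimination."""
    n = K.shape[0]
    A = np.concatenate([K % p, np.eye(n, dtype=np.int64)], axis=1)
    for i in range(n):
        piv = next(r for r in range(i, n) if A[r, i] % p)  # nonzero pivot
        A[[i, piv]] = A[[piv, i]]
        A[i] = (A[i] * pow(int(A[i, i]), p - 2, p)) % p    # scale pivot row
        for r in range(n):
            if r != i:
                A[r] = (A[r] - A[r, i] * A[i]) % p
    return A[:, n:]

def encrypt_block(K, block, rng):
    """Hill-style encryption with fresh random row/column permutations,
    so each block is effectively enciphered under a 'different' key."""
    n = K.shape[0]
    rp, cp = rng.permutation(n), rng.permutation(n)
    Kp = K[rp][:, cp]                      # permuted key for this block
    return (Kp @ block) % P, (rp, cp)

def decrypt_block(K, cipher, perms):
    rp, cp = perms
    Kp = K[rp][:, cp]
    return (mat_inv_mod(Kp) @ cipher) % P

rng = np.random.default_rng(1)
K = np.array([[3, 3, 1, 9], [2, 5, 7, 4], [1, 1, 2, 6], [5, 0, 3, 8]],
             dtype=np.int64)              # invertible mod 257
msg = np.array([72, 105, 33, 7], dtype=np.int64)
c, perms = encrypt_block(K, msg, rng)
assert (decrypt_block(K, c, perms) == msg).all()
```

Permuting the rows and columns of an invertible matrix only flips the sign of its determinant, so every per-block key stays invertible.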

3.
To address the edge-effect problem in adaptive random testing, this paper draws on the basic idea of mirror adaptive random testing (MART), introduces a mirror distance, and proposes the 3n MART algorithm. By changing the strategy for judging the distance between candidate test cases and successful test cases, the algorithm makes the generated test cases spread more evenly over the input domain. Simulation results show that the algorithm handles the edge-effect problem well and detects failures more efficiently than both distance-based adaptive random testing and MART.

4.
Random testing (RT) is a fundamental software testing technique. Adaptive random testing (ART), an enhancement of RT, generally uses fewer test cases than RT to detect the first failure. ART generates test cases in a random manner, together with additional test case selection criteria to enforce that the executed test cases are evenly spread over the input domain. Some studies have been conducted to measure how evenly an ART algorithm can spread its test cases with respect to some distribution metrics. These studies observed that there exists a correlation between the failure detection capability and the evenness of test case distribution. Inspired by this observation, we aim to study whether failure detection capability of ART can be enhanced by using distribution metrics as criteria for the test case selection process. Our simulations and empirical results show that the newly proposed algorithms not only improve the evenness of test case distribution, but also enhance the failure detection capability of ART.
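To make the idea concrete, here is a hedged sketch (not the paper's algorithms) of two evenness metrics commonly used in this literature, dispersion and an estimated discrepancy, with dispersion used as the criterion for choosing among random candidates:

```python
import random, math

def dispersion(points):
    """Largest nearest-neighbour distance: big values mean uncovered gaps."""
    worst = 0.0
    for i, p in enumerate(points):
        nearest = min(math.dist(p, q) for j, q in enumerate(points) if i != j)
        worst = max(worst, nearest)
    return worst

def discrepancy(points, trials=200, rng=random):
    """Estimate discrepancy: max gap between a random box's area share
    and its share of the points (0 = perfectly even)."""
    worst = 0.0
    for _ in range(trials):
        x1, x2 = sorted(rng.random() for _ in range(2))
        y1, y2 = sorted(rng.random() for _ in range(2))
        inside = sum(x1 <= px <= x2 and y1 <= py <= y2 for px, py in points)
        worst = max(worst, abs(inside / len(points) - (x2 - x1) * (y2 - y1)))
    return worst

def select_by_metric(executed, k=10):
    """Among k random candidates, pick the one whose inclusion
    minimises the dispersion of the resulting test set."""
    cands = [(random.random(), random.random()) for _ in range(k)]
    return min(cands, key=lambda c: dispersion(executed + [c]))

tests = [(random.random(), random.random())]
for _ in range(20):
    tests.append(select_by_metric(tests))
print(f"dispersion={dispersion(tests):.3f}  discrepancy={discrepancy(tests):.3f}")
```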

5.
Adaptive random testing (ART) has recently been proposed to enhance the failure-detection capability of random testing. In ART, test cases are not only randomly generated, but also evenly spread over the input domain. Various ART algorithms have been developed to evenly spread test cases in different ways. Previous studies have shown that some ART algorithms prefer to select test cases from the edge part of the input domain rather than from the centre part, that is, inputs do not have equal chance to be selected as test cases. Since we do not know where the failure-causing inputs are prior to testing, it is not desirable for inputs to have different chances of being selected as test cases. Therefore, in this paper, we investigate how to enhance some ART algorithms by offsetting the edge preference, and propose a new family of ART algorithms. A series of simulations have been conducted and it is shown that these new algorithms not only select test cases more evenly, but also have better failure detection capabilities.

6.
We propose compilation methods for the efficient support of set-term matching in Horn-clause programs. Rather than using general-purpose set-matching algorithms, we take the approach of formulating at compile time specialized computation plans that, by taking advantage of information available in the given rules, limit the number of alternatives explored. Our strategy relies on rewriting techniques to transform the problem into an “ordinary” Horn-clause compilation problem, with minimal additional overhead. The execution cost of the rewritten rules is substantially lower than that of the original rules, and the additional cost of compilation can thus be amortized over many executions.

7.
Adaptive random testing (ART), an enhancement of random testing (RT), aims to both randomly select and evenly spread test cases. Recently, it has been observed that the effectiveness of some ART algorithms may deteriorate as the number of program input parameters (dimensionality) increases. In this article, we analyse various problems of one ART algorithm, namely fixed-size-candidate-set ART (FSCS-ART), in the high-dimensional input domain setting, and study how FSCS-ART can be further enhanced to address these problems. We propose to add a filtering process of inputs into FSCS-ART to achieve a more even spread of test cases and better failure-detection effectiveness in high-dimensional space. Our study shows that this solution, termed FSCS-ART-FE, can improve FSCS-ART not only in the case of high-dimensional space, but also in the case of failure-unrelated parameters. Both cases are common in real-life programs. Therefore, we recommend using FSCS-ART-FE instead of FSCS-ART whenever possible. Other ART algorithms may face similar problems as FSCS-ART; hence our study also brings insight into the improvement of other ART algorithms in high-dimensional space.
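A sketch of how such a filtering step might sit inside FSCS-ART. The per-axis gap filter below is our illustrative reading of "filtering by elimination", not the paper's exact rule, and the threshold is an assumption:

```python
import random, math

def fscs_art_fe(n_tests, dim=4, k=10, axis_gap=0.02):
    """FSCS-ART plus an illustrative filtering pass: a candidate is kept
    only if every one of its coordinates keeps a minimum per-axis gap from
    every executed test case (useful when some parameters are
    failure-unrelated); if nothing survives, fall back to plain FSCS."""
    tests = [[random.random() for _ in range(dim)]]
    while len(tests) < n_tests:
        cands = [[random.random() for _ in range(dim)] for _ in range(k)]
        kept = [c for c in cands
                if all(abs(c[d] - t[d]) >= axis_gap
                       for t in tests for d in range(dim))] or cands
        # classic FSCS step: maximise distance to the nearest executed test
        best = max(kept, key=lambda c: min(math.dist(c, t) for t in tests))
        tests.append(best)
    return tests

suite = fscs_art_fe(30)
print(len(suite), "test cases, first:", [round(x, 2) for x in suite[0]])
```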

8.
The second-order random walk has recently been shown to effectively improve accuracy in graph analysis tasks. Existing work mainly focuses on centralized second-order random walk (SOW) algorithms. SOW algorithms rely on edge-to-edge transition probabilities to generate the next random steps. However, it is prohibitively costly to store all the probabilities for large-scale graphs, and restricting the number of probabilities to consider can negatively impact the accuracy of graph analysis tasks. In this paper, we propose and study an alternative approach, SOOP (second-order random walks with on-demand probability computation), that avoids the space overhead by computing the edge-to-edge transition probabilities on demand during the random walk. However, the same probabilities may be computed multiple times when the same edge appears multiple times in SOW, incurring extra cost for redundant computation and communication. We propose two optimization techniques that reduce the complexity of computing edge-to-edge transition probabilities and reduce the cost of communicating out-neighbors for the probability computation, respectively. Our experiments on real-world and synthetic graphs show that SOOP achieves orders of magnitude better performance than baseline precompute solutions, and that it can efficiently compute SOW algorithms on billion-scale graphs.
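The on-demand idea can be sketched as follows. The node2vec-style weighting is only one example of second-order transition probabilities (not necessarily SOOP's), and the cache stands in for SOOP's reuse of probabilities when the same edge recurs:

```python
import random
from functools import lru_cache

# toy undirected graph as adjacency sets
G = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}

def second_order_weights(prev, cur, p=2.0, q=0.5):
    """Edge-to-edge transition weights computed on demand: the next step
    from 'cur' depends on the previous node 'prev' (second order)."""
    nbrs, weights = sorted(G[cur]), []
    for nxt in sorted(G[cur]):
        if nxt == prev:              # return to the previous node
            weights.append(1.0 / p)
        elif nxt in G[prev]:         # stay near the previous node
            weights.append(1.0)
        else:                        # move outward
            weights.append(1.0 / q)
    return nbrs, weights

# memoise repeated (prev, cur) edges instead of recomputing their weights
cached_weights = lru_cache(maxsize=None)(second_order_weights)

def soop_walk(start, length):
    walk = [start, random.choice(sorted(G[start]))]
    while len(walk) < length:
        nbrs, w = cached_weights(walk[-2], walk[-1])
        walk.append(random.choices(nbrs, weights=w)[0])
    return walk

print(soop_walk(0, 10))
```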

9.
Probabilistic flooding has frequently been considered a suitable information-dissemination approach for limiting the large message overhead associated with the traditional (full) flooding approaches used to disseminate information globally in unstructured peer-to-peer and other networks. A key challenge in using probabilistic flooding is determining the forwarding probability so that global network outreach is achieved while keeping the message overhead as low as possible. In this paper, by showing that a probabilistic flooding network, generated by applying probabilistic flooding to a connected random graph network, can be (asymptotically) “bounded” by properly parameterized random graph networks, and by invoking results from random graph theory, asymptotic values of the forwarding probability are derived that (probabilistically) guarantee successful coverage while significantly reducing the message overhead with respect to traditional flooding. Asymptotic expressions for the average number of messages and the average time required to complete network coverage are also derived, illustrating the benefits of the properly parameterized probabilistic flooding scheme. Simulation results support the claims and expectations of the analytical results and reveal certain aspects of probabilistic flooding not covered by the analysis.
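A minimal simulation of probabilistic flooding on a random graph; whether the forwarding coin is tossed per node or per link is a modelling assumption of this sketch. Setting p = 1.0 recovers traditional full flooding for comparison:

```python
import random
from collections import deque

def probabilistic_flood(adj, source, p):
    """Simulate probabilistic flooding: the source always forwards; every
    other node receiving the message for the first time forwards it to all
    neighbours with probability p. Returns covered nodes and message count."""
    covered, messages = {source}, 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == source or random.random() < p:
            for v in adj[u]:
                messages += 1
                if v not in covered:
                    covered.add(v)
                    queue.append(v)
    return covered, messages

def random_graph(n, edge_prob, rng=random):
    """Erdos-Renyi style G(n, edge_prob), as in the paper's setting."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < edge_prob:
                adj[i].add(j); adj[j].add(i)
    return adj

adj = random_graph(200, 0.05)
for p in (0.3, 0.6, 1.0):   # p = 1.0 reproduces traditional flooding
    cov, msgs = probabilistic_flood(adj, 0, p)
    print(f"p={p}: covered {len(cov)}/200 nodes with {msgs} messages")
```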

10.
We present an efficient approach to the determination of the convolution of random variables when their probability density functions are given as continuous functions over finite support. We determine that the best approach for large activity networks is to discretize the density function using Chebyshev points.

Scope and purpose: The convolution operation occurs frequently in the analysis of sums of independent random variables. Although the operation of convolution is elementary in concept, it is rather onerous to implement on the computer for several reasons that are spelled out in the paper. Our objective is to present a computational scheme, based on discretization of continuous density functions, that is both easy to implement and mathematically “correct”, in the sense that several moments of the approximation equal the moments of the original density function. The approach presented in the paper can easily be programmed on the computer, and gives the desired convolution to any desired degree of accuracy.
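A bare-bones version of the approach: discretize each density into point masses on its finite support and convolve the mass vectors. For brevity this sketch uses a uniform grid rather than the Chebyshev points (and moment matching) the paper recommends:

```python
import numpy as np

def discretize(pdf, lo, hi, n):
    """Discretize a continuous density on [lo, hi] into point masses on a
    uniform grid, normalised to a probability vector."""
    x = np.linspace(lo, hi, n)
    w = pdf(x)
    return x, w / w.sum()

n = 201
x1, w1 = discretize(lambda x: np.ones_like(x), 0.0, 1.0, n)   # U(0,1)
x2, w2 = discretize(lambda x: np.ones_like(x), 0.0, 1.0, n)   # U(0,1)

# convolution of point masses = distribution of the sum X1 + X2
w_sum = np.convolve(w1, w2)
x_sum = np.linspace(x1[0] + x2[0], x1[-1] + x2[-1], len(w_sum))

# moment check, in the spirit of the paper's "correctness" criterion
mean = (x_sum * w_sum).sum()
var = ((x_sum - mean) ** 2 * w_sum).sum()
print(f"sum of two U(0,1): mean={mean:.4f} (exact 1.0), "
      f"var={var:.4f} (exact {1/6:.4f})")
```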

11.
Adaptive Random Testing: The ART of test case diversity
Random testing is not only a useful testing technique in itself, but also plays a core role in many other testing methods. Hence, any significant improvement to random testing has an impact throughout the software testing community. Recently, Adaptive Random Testing (ART) was proposed as an effective alternative to random testing. This paper presents a synthesis of the most important research results related to ART. In the course of our research and through further reflection, we have realised how the techniques and concepts of ART can be applied in a much broader context, which we present here. We believe such ideas can be applied in a variety of areas of software testing, and even beyond software testing. Amongst these ideas, we particularly note the fundamental role of diversity in test case selection strategies. We hope this paper serves to provoke further discussions and investigations of these ideas.

12.
The impact of failure region compactness on the performance of adaptive random testing
Tsong Yueh Chen, Fei-Ching Kuo, Chang-Ai Sun. Journal of Software, 2006, 17(12): 2438-2449
Adaptive random testing is an enhanced random testing method. Previous studies found that the compactness of the failure region is one of several fundamental factors affecting the performance of adaptive random testing, but verified this conjecture only for rectangular failure regions. This paper uses simulation to investigate the precise relationship between failure-region compactness and the performance of adaptive random testing, covering failure regions of several basic regular shapes as well as irregular shapes. The experimental results show that the performance of adaptive random testing improves as the failure region becomes more compact. The study further reveals the basic conditions under which adaptive random testing outperforms random testing.
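The experimental setup can be sketched as follows: two failure regions of equal area (1% of the unit square), one compact and one elongated, with the F-measure (tests until the first failure) averaged over runs for RT and FSCS-ART. The region placement, sizes, and run count are assumptions of this sketch:

```python
import random, math

def contains(region, p):
    x1, y1, x2, y2 = region
    return x1 <= p[0] <= x2 and y1 <= p[1] <= y2

def f_measure_rt(region):
    """Random testing: count tests until one hits the failure region."""
    n = 0
    while True:
        n += 1
        if contains(region, (random.random(), random.random())):
            return n

def f_measure_art(region, k=10):
    """Same count for FSCS-ART: candidates far from executed tests."""
    tests, n = [], 0
    while True:
        n += 1
        if not tests:
            t = (random.random(), random.random())
        else:
            cands = [(random.random(), random.random()) for _ in range(k)]
            t = max(cands, key=lambda c: min(math.dist(c, e) for e in tests))
        if contains(region, t):
            return n
        tests.append(t)

def trial(region, runs=100):
    rt = sum(f_measure_rt(region) for _ in range(runs)) / runs
    art = sum(f_measure_art(region) for _ in range(runs)) / runs
    return rt, art

# equal-area (1%) failure regions: a compact square vs an elongated strip
square = (0.45, 0.45, 0.55, 0.55)       # 0.1 x 0.1
strip = (0.1, 0.495, 0.9, 0.5075)       # 0.8 x 0.0125

for name, region in (("square", square), ("strip", strip)):
    rt, art = trial(region)
    print(f"{name}: RT F-measure {rt:.0f}, ART F-measure {art:.0f}")
```

ART's advantage over RT should be visibly larger for the compact square than for the strip, matching the paper's finding.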

13.
To resist quantum-algorithm attacks, to counter the flaw that a malicious signer can exploit the full anonymity of ring signatures to output multiple signatures and mount a double-spending attack, and to avoid unnecessary system overhead, a new identity-based linkable ring signature scheme over lattices is proposed. The scheme bases its security on the approximate shortest vector problem over lattices, reducing that problem to a collision problem, and generates signatures through linear matrix-vector operations combined with identity-based cryptography. It eliminates wasted system overhead and involves no complex algorithms such as trapdoor generation or Gaussian sampling, which improves signing efficiency and reduces storage overhead. In the random oracle model, the scheme is proven to satisfy full anonymity and strong existential unforgeability. Analysis shows it to be a secure and efficient ring signature scheme.

14.
Existing authentication schemes for low-Earth-orbit (LEO) satellite networks suffer from high authentication latency, because authentication is centralized, and from high computation overhead, because they use complex bilinear pairings. To address these problems, a certificateless authentication model is introduced and an efficient certificateless authentication scheme is designed on the basis of Gayathri's scheme. The scheme binds a user's public key to the user's real identity, so no third party is needed during authentication, which reduces authentication latency. It builds authentication messages from a small number of elliptic-curve point multiplications and point additions, avoiding bilinear pairings and thus reducing computation overhead. Its security is proven in the random oracle model under the elliptic curve discrete logarithm assumption. Finally, simulations show that, compared with existing LEO satellite authentication schemes, the proposed scheme has lower authentication latency, computation overhead, and communication overhead.

15.
This paper presents a new scheme for mapping high dimensional data onto two-dimensional viewing spaces. The mapping procedure is carried out in two stages. In the first stage, the fuzzy c-means (FCM) algorithm is applied to the N-dimensional data to find membership functions of clusters in the data. Core subsets are then selected from the original data based upon threshold values applied to the membership functions found by FCM. In the second stage, feature vectors in the selected “core” subsets are submitted to various feature extraction mappings, which yield scatterplots of the image points in 2D space. The proposed approach has two significant advantages over many previous schemes. First, changes in the core structure imposed on the original data under feature extraction can be used to gauge the relative quality of competing extraction techniques. And second, the cores provide a way to generalize almost any known method, resulting in new extraction algorithms. We also discuss various ways to color the selected data that enhance the 2D display. Our approach incorporates a means for assessing the “quality” of the 2D display via parameters which provide an evaluation of (i) the validity of clusters in the original data set and (ii) the relative ability of various extraction mappings to preserve certain well-defined structural properties of the original data. The feasibility of our approach is illustrated using two sets of data: the well-known Iris data, and a set of flow cytometric data. Color displays are used to visually assess scatterplot configurations in 2-space.
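A compressed sketch of the two-stage pipeline: basic FCM, membership-threshold core selection, then one possible feature-extraction mapping (PCA). Synthetic blobs stand in for data such as Iris, and the threshold value is an assumption:

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: returns cluster centres and the membership
    matrix U (n x c), alternating the standard FCM update equations."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # random fuzzy partition
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def core_subset(X, U, threshold=0.8):
    """Stage 1: keep only points whose maximum membership exceeds the
    threshold -- the 'core' of some cluster."""
    mask = U.max(axis=1) >= threshold
    return X[mask], U[mask].argmax(axis=1)

def pca_2d(X):
    """Stage 2: one possible feature-extraction mapping (PCA) to 2D."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

# three 4-D Gaussian blobs standing in for data such as Iris
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, size=(50, 4)) for mu in (0.0, 1.5, 3.0)])
centers, U = fcm(X, c=3)
core, labels = core_subset(X, U)
Y = pca_2d(core)
print(f"core keeps {len(core)}/{len(X)} points; 2D spread: {Y.std(axis=0).round(2)}")
```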

16.
PERT is widely used as a tool for managing large-scale projects. The traditional PERT approach uses the beta distribution as the distribution of activity duration and estimates the mean and the variance of activity duration based on the “pessimistic”, “optimistic” and “most likely” time estimates. Several authors have modified the original three-point PERT estimators to improve the accuracy of the estimates. This article proposes new approximations for the mean and the variance of activity duration based on “pessimistic”, “optimistic” and “most likely” time estimates. By numerical comparison with actual values, the proposal is shown to be more accurate than the original PERT estimates and its modifications. Another advantage of the proposed approximation is that it requires no assumptions about the parameters of the beta distribution, as existing ones do.

Scope and purpose: The traditional PERT model uses the beta distribution as the distribution of activity duration and estimates the mean and the variance of activity duration using “pessimistic”, “optimistic” and “most likely” time estimates proposed by an expert. In the past, several authors have modified the original PERT estimators to improve their accuracy. This article proposes new approximations for the mean and the variance of activity duration that are more accurate than the original PERT estimates and their modifications, and that rest on no assumptions about the parameters of the beta distribution.
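For reference, here are the classic three-point estimators that the paper improves upon (its new approximations are not reproduced here), with the usual PERT rule of summing means and variances along a path of independent activities:

```python
def pert_classic(a, m, b):
    """Classic PERT three-point estimators from optimistic (a),
    most likely (m) and pessimistic (b) time estimates."""
    mean = (a + 4 * m + b) / 6.0
    var = ((b - a) / 6.0) ** 2
    return mean, var

# one activity: optimistic 2, most likely 5, pessimistic 14 (days)
mean, var = pert_classic(2, 5, 14)
print(f"activity: mean={mean:.2f} days, variance={var:.2f}")

# a path of independent activities: PERT sums means and variances
acts = [(2, 5, 14), (1, 3, 5), (4, 6, 10)]
tot_mean = sum(pert_classic(*t)[0] for t in acts)
tot_var = sum(pert_classic(*t)[1] for t in acts)
print(f"path: mean={tot_mean:.2f} days, std={tot_var ** 0.5:.2f}")
```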

17.
Most identity-based signature (IBS) schemes require complex bilinear pairing operations, which makes their signing algorithms inefficient and unsuitable for communication-security protocols such as key management and secure routing in wireless ad hoc networks. To address this problem, an IBS scheme that requires no bilinear pairings is proposed. The scheme is proven unforgeable against chosen-message attacks in the random oracle model. Theoretical analysis shows that, compared with similar schemes, it has lower computation and transmission costs and higher efficiency.

18.
Dr. X. Merrheim. Computing, 1994, 53(3-4): 219-232
Many hardware-oriented algorithms computing the usual elementary functions (sine, cosine, exponential, logarithm, ...) use only shifts and additions. In this paper, we present new algorithms using shifts, adds and “small multiplications” (i.e., multiplications by numbers with few digits). These CORDIC-like algorithms compute the elementary functions in radix 2^p (instead of the standard radix 2) and use table look-ups. The number of steps required to compute functions to a given accuracy is reduced, and since a fast “small multiplier” is used, the computation time is reduced.
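For contrast, here is the classic radix-2 CORDIC rotation mode, which uses only shifts and adds; the paper's radix-2^p variant with small multiplications and table look-ups is not reproduced:

```python
import math

def cordic_sin_cos(theta, n=32):
    """Classic radix-2 CORDIC: rotate (1, 0) by theta using the fixed
    micro-rotation angles atan(2^-i); each step needs only a shift and
    an add in hardware. Valid for |theta| within CORDIC's convergence
    range (about 1.74 rad)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # pre-apply the CORDIC gain
    x, y, z = K, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x      # (sin, cos)

s, c = cordic_sin_cos(0.7)
print(f"sin={s:.8f} (math: {math.sin(0.7):.8f}), cos={c:.8f}")
```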

19.
As typical wireless sensor networks (WSNs) have resource limitations, predistribution of secret keys is possibly the most practical approach for secure network communications. In this paper, we propose a key management scheme based on random key predistribution for heterogeneous wireless sensor networks (HSNs). As large-scale homogeneous networks suffer from high communication, computation, and storage costs, HSNs are preferred because they provide better performance and security solutions for scalable applications in dynamic environments. We consider a hierarchical HSN consisting of a small number of high-end sensors and a large number of low-end sensors. To address the storage overhead problem in the constrained sensor nodes, we incorporate a key generation process in which, instead of generating a large pool of random keys, the key pool is represented by a small number of generation keys. For a given generation key and a publicly known seed value, a keyed-hash function generates a key chain; these key chains collectively make up the key pool. As dynamic network topology is native to WSNs, the proposed scheme allows dynamic addition and removal of nodes. This paper also reports the implementation and performance of the proposed scheme on Crossbow's MicaZ motes running TinyOS. The results indicate that the proposed scheme can be applied efficiently in resource-constrained sensor networks. We evaluate the computation and storage costs of two keyed-hash algorithms for key chain generation, HMAC-SHA1 and HMAC-MD5.
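A sketch of the key-chain construction as we read it: one generation key plus a public seed expands, via a keyed hash (HMAC-SHA1, one of the two algorithms the paper evaluates), into a whole chain, so a large key pool is represented by a few generation keys. The chain layout is our illustrative reading of the scheme:

```python
import hashlib, hmac

def key_chain(generation_key: bytes, seed: bytes, length: int):
    """Derive a key chain from one generation key and a public seed:
    each key is the HMAC-SHA1 of the previous chain value."""
    chain, value = [], seed
    for _ in range(length):
        value = hmac.new(generation_key, value, hashlib.sha1).digest()
        chain.append(value)
    return chain

# a small key pool: 4 generation keys, each expanded into a 100-key chain,
# instead of storing 400 independent random keys on the node
seed = b"public-seed"
pool = {i: key_chain(i.to_bytes(4, "big"), seed, 100) for i in range(4)}
print(len(pool) * len(pool[0]), "keys represented by", len(pool), "generation keys")
```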

20.
For complex optimization problems whose optimum does not lie at the origin of the coordinate system, a teaching-learning-based optimization algorithm based on a random crossover and self-study strategy (CSTLBO) is proposed. The spatial perturbations in the “teacher phase” and “learner phase” of the standard teaching-learning-based optimization algorithm are given a geometric interpretation, both phases are improved, and random crossover and “self-study” strategies are introduced to strengthen the algorithm's global search ability. Simulations on 20 benchmark functions, comparisons with six improved teaching-learning-based optimization algorithms, and Wilcoxon rank-sum tests show that CSTLBO effectively avoids getting trapped in local optima, has good global search ability, high solution accuracy, and good stability.
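For orientation, here is a sketch of the standard TLBO teacher and learner phases on a shifted sphere function (optimum away from the origin, as in the paper's problem setting); the paper's random-crossover and self-study refinements are not reproduced:

```python
import random

def tlbo(f, dim, bounds, pop=30, iters=200, seed=0):
    """Standard teaching-learning-based optimization (TLBO)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    clip = lambda v: max(lo, min(hi, v))
    for _ in range(iters):
        # teacher phase: move learners toward the best, away from the mean
        best = min(X, key=f)
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i, x in enumerate(X):
            tf = rng.choice((1, 2))          # teaching factor
            cand = [clip(x[d] + rng.random() * (best[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(x):
                X[i] = cand
        # learner phase: learn pairwise from a random classmate
        for i, x in enumerate(X):
            j = rng.randrange(pop)
            if j == i:
                continue
            sign = 1.0 if f(x) < f(X[j]) else -1.0
            cand = [clip(x[d] + sign * rng.random() * (x[d] - X[j][d]))
                    for d in range(dim)]
            if f(cand) < f(x):
                X[i] = cand
    return min(X, key=f)

# shifted sphere: optimum at (1, ..., 1), i.e. NOT at the origin
shifted_sphere = lambda x: sum((v - 1.0) ** 2 for v in x)
best = tlbo(shifted_sphere, dim=5, bounds=(-10.0, 10.0))
print([round(v, 3) for v in best], "f =", round(shifted_sphere(best), 6))
```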
