A total of 20 similar documents were found (search time: 62 ms).
2.
The nature of data streams requires that a mining algorithm obtain its result in a single scan and with low space complexity. Building on these characteristics, a new sliding-window algorithm for mining frequent itemsets over data streams, MFIM, is proposed. The algorithm represents the transaction sequence in the sliding window as a binary vector matrix and uses this structure to record the dynamic changes of frequent itemsets, mining frequent itemsets over the data stream effectively. Theoretical analysis and experimental results show that the algorithm achieves good time and space complexity.
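The binary-vector idea behind such sliding-window miners can be sketched as follows. This is a minimal illustration, not the MFIM algorithm itself: the brute-force enumeration over item combinations is purely for exposition, and the item names are hypothetical.

```python
from itertools import combinations

def frequent_itemsets(window, min_support):
    """Mine frequent itemsets from a sliding window of transactions.

    Each transaction is encoded as a bitmask over items, mirroring the
    binary vector-matrix representation: the support of an itemset is
    the number of window rows whose mask covers the itemset's mask.
    """
    items = sorted({i for t in window for i in t})
    bit = {item: 1 << k for k, item in enumerate(items)}
    masks = [sum(bit[i] for i in t) for t in window]

    def support(mask):
        return sum(1 for m in masks if m & mask == mask)

    result = {}
    for r in range(1, len(items) + 1):
        found = False
        for combo in combinations(items, r):
            mask = sum(bit[i] for i in combo)
            s = support(mask)
            if s >= min_support:
                result[combo] = s
                found = True
        if not found:  # Apriori property: no larger itemset can be frequent
            break
    return result
```

On the window `[{'a','b'}, {'a','c'}, {'a','b','c'}]` with a support threshold of 2, this reports `('a',)` with support 3 and the pairs `('a','b')` and `('a','c')` with support 2, while `('b','c')` is rejected.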
3.
To improve the performance of frequent itemset mining algorithms based on the vertical database representation, an index-array approach to improving computational performance is presented. The concept of the index array and its computation method are introduced, and a new, efficient frequent itemset mining algorithm, Index-FIMiner, is proposed. The algorithm greatly reduces unnecessary tidset intersections and the corresponding frequency tests. It is also shown that a representative item can be joined directly with every combination of the items contained in its index, and that the support of each resulting itemset equals that of the representative item, which lowers the processing cost of these frequent itemsets and improves performance. Experimental results show that Index-FIMiner achieves high mining efficiency.
4.
To address the data and pattern redundancy in frequent itemset mining, algorithms for mining maximal frequent itemsets over data streams are studied. DSM-MFI, a representative algorithm for mining maximal frequent patterns over data streams, consumes large amounts of memory and executes inefficiently. MMFI-DS, an algorithm for mining maximal frequent itemsets in a landmark window over a data stream, is therefore proposed. It first stores the key information about the maximal frequent itemsets contained in the growing data stream in a SEFI-tree, deleting the many infrequent items from the SEFI-tree, and then mines the sequence of maximal frequent itemsets in the landmark window using a combined top-down and bottom-up search strategy. Theoretical analysis and experiments show that the algorithm is more efficient than DSM-MFI and saves memory.
6.
The main goal of association rule mining is to discover hidden, interesting regularities among attributes and connections among data in large data sets; association rule mining algorithms aim to extract meaningful associations among the items of a transaction data set. Apriori is the classic association rule mining algorithm, but the candidate itemsets it generates remain enormous. Through a close study of the support counts of candidate itemsets in Apriori, five pruning rules are summarized and applied to the algorithm.
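The candidate generation and support-based pruning that such rules refine can be sketched as the standard Apriori join-and-prune step; this is the textbook construction, not the paper's five rules:

```python
from itertools import combinations

def apriori_gen(frequent_k):
    """Generate candidate (k+1)-itemsets from sorted frequent k-itemsets.

    Classic Apriori step: join pairs sharing a (k-1)-prefix, then
    discard any candidate that has an infrequent k-subset, since by the
    Apriori property no superset of an infrequent itemset is frequent.
    """
    frequent = set(frequent_k)
    candidates = set()
    for a in frequent:
        for b in frequent:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                cand = a + (b[-1],)
                # prune: every k-subset must itself be frequent
                if all(sub in frequent for sub in combinations(cand, len(a))):
                    candidates.add(cand)
    return candidates
```

For example, from the frequent pairs `('a','b')`, `('a','c')`, `('b','c')` the join produces `('a','b','c')` and the prune keeps it; if `('b','c')` were missing, the same candidate would be discarded.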
7.
Combining physical tests with virtual tests is the development trend in testability testing. Classical Bayesian sample-size determination can only fix the sample size for a single test type and does not meet the requirements of integrated test-plan design. To address this, a design-effect index is introduced: the design effect is constructed from the credibility of the virtual testability test and the average length of the Bayesian highest posterior density (HPD) interval, an equivalence model between virtual and physical samples is established, and the combined virtual-physical test samples are converted into physical test samples, yielding a design method for combined virtual-physical testability test plans that disregards the cost of the virtual tests. Finally, for the testability evaluation requirements of the rudder servo subsystem of an equipment control system, a test plan is designed with the proposed method, the relationship between the test plan and the credibility of the virtual prototype is analyzed, and it is verified that the method effectively reduces the physical test sample size.
8.
This paper introduces the application of fault injection and fault diagnosis to integrated circuit design, and their significance for fault tolerance and design-for-testability in integrated circuits. Combining boundary scan, a design-for-testability technique for integrated circuits, with fault injection and fault diagnosis methods, it describes the circuit principles of boundary scan, fault injection, and fault diagnosis, and summarizes computer-aided testability design for circuit systems.
14.
Shintaro Yamasaki, Shinji Nishiwaki, Takayuki Yamada, Kazuhiro Izui, Masataka Yoshimura. International Journal for Numerical Methods in Engineering, 2010, 83(12): 1580-1624
Structural optimization methods based on the level set method are a new type of structural optimization method in which the outlines of target structures are implicitly represented by the level set function and updated by solving the so‐called Hamilton–Jacobi equation on a Eulerian coordinate system. These methods allow topological alterations, such as changes in the number of holes, during the optimization process, while keeping the boundaries of the target structure clearly defined. However, the re‐initialization scheme used when updating the level set function is a critical problem when seeking to obtain appropriately updated outlines of target structures. In this paper, we propose a new structural optimization method based on the level set method using a new geometry‐based re‐initialization scheme in which both the numerical analysis used when solving the equilibrium equations and the updating process of the level set function are performed using the Finite Element Method. The stiffness maximization, eigenfrequency maximization, and eigenfrequency matching problems are considered as optimization problems. Several design examples are presented to confirm the usefulness of the proposed method. Copyright © 2010 John Wiley & Sons, Ltd.
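The geometric re-initialization idea can be illustrated in one dimension. The sketch below is not the paper's FEM-based scheme: it simply locates the interpolated zero crossings of a sampled level set function and rebuilds it as a signed distance function (sign preserved, unit gradient magnitude).

```python
import numpy as np

def reinitialize_1d(phi, x):
    """Re-initialize a 1-D level set function to a signed distance function.

    Zero crossings of phi are found by linear interpolation between grid
    nodes; phi is then rebuilt as the signed distance to the nearest
    crossing, which is the geometric meaning of re-initialization.
    """
    zeros = []
    for i in range(len(phi) - 1):
        if phi[i] == 0.0:
            zeros.append(x[i])
        elif phi[i] * phi[i + 1] < 0:  # sign change between adjacent nodes
            t = phi[i] / (phi[i] - phi[i + 1])
            zeros.append(x[i] + t * (x[i + 1] - x[i]))
    if phi[-1] == 0.0:
        zeros.append(x[-1])
    zeros = np.asarray(zeros)
    dist = np.min(np.abs(x[:, None] - zeros[None, :]), axis=1)
    return np.sign(phi) * dist
```

For phi(x) = x^2 - 0.25 on [-1, 1] (interface at x = ±0.5), the result at x = 0 is -0.5 and at x = 1 is +0.5, i.e. the exact signed distance.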
15.
A move limit strategy is proposed for level set based structural optimization, which offers a tool to restrict the allowable change of the free boundary of a structure in each iterative step of optimization. To realize the move limit strategy, a function for modifying the extended design velocity is proposed. Application of the move limit strategy is demonstrated by several numerical examples of 2D structures. The results show that the move limit strategy helps improve the stability and convergence rate of level set based structural optimization.
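A move limit of this kind can be sketched as a cap on the per-step boundary displacement. The paper's actual velocity-modification function is not reproduced here; this sketch simply rescales the extended design velocity wherever |v|·dt would exceed the allowed limit:

```python
import numpy as np

def limit_velocity(vel, dt, move_limit):
    """Scale an extended design velocity field so no boundary point
    moves farther than `move_limit` in one update step of size `dt`.

    Entries within the limit are left unchanged; entries beyond it are
    rescaled so that their displacement equals exactly `move_limit`.
    """
    step = np.abs(vel) * dt                       # per-step displacement
    scale = np.where(step > move_limit,
                     move_limit / np.maximum(step, 1e-30),  # cap overshoot
                     1.0)
    return vel * scale
```

With dt = 1 and a move limit of 1, velocities [0.5, 2.0, -4.0] become [0.5, 1.0, -1.0]: small moves pass through, large ones are clipped while keeping their sign.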
16.
With the development of fault diagnosis technology, using professional simulation tools for testability analysis of real circuits has become increasingly common. LASAR (Logic Automated Stimulus And Response) is an excellent simulation software system for digital circuit test development and logic analysis. A method for digital circuit testability analysis using LASAR fault simulation is introduced, and its application in practical engineering is illustrated through the simulation of a real circuit.
18.
Woo‐Young Choi, Dae‐Young Kwak, Il‐Heon Son, Yong‐Taek Im. International Journal for Numerical Methods in Engineering, 2003, 58(12): 1857-1872
In recent years, demand for three‐dimensional simulations has continued to grow in the field of computer‐aided engineering. Especially in the analysis of forming processes, a fully automatic and robust mesh generator is necessary for handling the complex geometries used in industry. For three‐dimensional analyses, tetrahedral elements are commonly used because of their advantage in dealing with such geometries. In this study, the advancing front technique has been implemented and modified using an optimization scheme in which a distortion metric determines 'when and where' to smooth and serves as the objective function. As a result, the performance of the advancing front technique is improved in terms of the quality of the generated mesh. Copyright © 2003 John Wiley & Sons, Ltd.
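A distortion metric of the kind that drives such "when and where to smooth" decisions can be sketched for the 2-D triangular case. The paper's own metric for tetrahedra is not given here; this is the common mean-ratio style quality measure, which is 1 for an equilateral element and tends to 0 as the element degenerates:

```python
import math

def triangle_quality(a, b, c):
    """Mean-ratio style distortion metric for a triangle.

    Returns 1.0 for an equilateral triangle and values approaching 0.0
    for degenerate (sliver) elements, so a mesher can smooth wherever
    the metric drops below a chosen threshold.
    """
    ax, ay = a
    bx, by = b
    cx, cy = c
    area2 = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))  # 2 * area
    l2 = ((bx - ax)**2 + (by - ay)**2 +
          (cx - bx)**2 + (cy - by)**2 +
          (ax - cx)**2 + (ay - cy)**2)          # sum of squared edge lengths
    return 2.0 * math.sqrt(3.0) * area2 / l2 if l2 > 0 else 0.0
```

An equilateral triangle scores 1.0, while three collinear points score 0.0; used as an objective function, maximizing this metric over node positions is exactly an optimization-based smoothing step.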
19.
An inversion scheme based on first-order optimization is developed for eddy current flaw reconstruction problems with arbitrary specimen, probe, and defect shapes. As an essential component of this scheme, a new 3-D forward solver is introduced for the purpose of rapid flaw signal prediction in the inversion loop. This forward solver, whose numerical formulation is basically a discrete reaction variational technique, relies on a reaction data set in the form of an equation system, constructed before entering the inversion loop by a finite element electromagnetic field simulator. The anomalous region is subdivided into small subregions, called flaw cells, and a flaw is represented by a complete set of current dipole density pulses defined in these flaw cells. The coefficient matrix of the equation system consists of reactions between the dipole current density pulses, while the elements of the right-hand-side vector are reactions between the pulses and the probe coils. The gradient of the error function, which represents the sensitivity with respect to the flaw parameters, can also be computed quickly from the same pre-calculated reaction data set, thereby ensuring the efficient implementation of a first-order optimization algorithm. In order to avoid being trapped in a local minimum of the error function, good initial flaw estimates are generated by a neural network signal processing system developed recently by the authors. Various reconstruction examples demonstrate the efficiency of the reconstruction system.
20.
Acquiring good throughput and diminishing interference to primary users (PU) are the main objectives for secondary users in a cognitive radio (CR) network. This paper proposes a centralized subcarrier and power allocation scheme for underlay multi-user orthogonal frequency division multiplexing that accounts for the rate loss and the interference that the PU can tolerate. The main purpose of the proposed scheme is to efficiently distribute the available subcarriers among cognitive users to enhance both the fairness and the throughput performance of the cognitive network while maintaining the QoS of primary users. Simulation results show that the proposed scheme achieves a significantly higher CR network throughput than the conventional interference power constraint (IPC) based schemes and provides significantly better fairness performance. Also, contrary to the conventional IPC based schemes, the proposed scheme is able to significantly increase the achieved throughput as the number of CR users increases.