Similar Documents (20 results)
1.
Converting the flip-flops of a circuit into scan cells eases the test challenge, yet test application time increases because serial shift operations are employed. Furthermore, the transitions that occur in the scan chains during these shifts cause significant, unnecessary circuit switching, increasing power dissipation. Judicious encoding of the correlation among test vectors, with each vector constructed through updates to its predecessor, reduces not only test application time but also scan chain transitions. Such an encoding scheme, which additionally reduces test data volume, can be further enhanced through appropriate ordering and padding of the given test cubes. The experimental results confirm the significant reductions in test application time, test data volume, and test power achieved by the proposed compression methodology.
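
As an illustration of the predecessor-update idea (the paper's actual encoding is not reproduced here), a minimal Python sketch that stores, for each vector, only the bit positions in which it differs from its predecessor:

# Sketch: difference-based test vector encoding (hypothetical, illustrative only).
def encode(vectors):
    """Encode each vector as the list of bit positions that differ from its predecessor."""
    encoded, prev = [], "0" * len(vectors[0])
    for v in vectors:
        encoded.append([i for i, (a, b) in enumerate(zip(prev, v)) if a != b])
        prev = v
    return encoded

def decode(encoded, width):
    vectors, prev = [], ["0"] * width
    for flips in encoded:
        for i in flips:
            prev[i] = "1" if prev[i] == "0" else "0"
        vectors.append("".join(prev))
    return vectors

vecs = ["0110", "0111", "1111"]
assert decode(encode(vecs), 4) == vecs   # correlated vectors yield short flip lists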

2.
UMC-Scan Test Methodology: Exploiting the Maximum Freedom of Multicasting
Industry has widely used scan-based designs to promote test quality. However, for larger designs, the growing test data volume has significantly increased test cost because of excessively long test times and elevated tester memory and external test channel requirements. To address these problems, researchers have proposed numerous test compression architectures. In this article, we propose a flexible scan test methodology called universal multicasting scan (UMC scan). It has three major features: First, it provides a better-than-state-of-the-art test compression ratio using multicasting. Second, it accepts any existing test patterns and doesn't need ATPG support. Third, unlike most previous multicasting schemes that use mapping logic to partition the scan chains into hard configurations, UMC scan's compatible scan chain groups are defined by control bits, as in the segmented addressable scan (SAS) architecture. We have developed several techniques to reduce the extra control bits so that the overall test compression ratio can approach that of the ideal multicasting scheme.
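
A toy Python sketch of the multicasting prerequisite: grouping scan chain slices that have no conflicting specified bits, so one broadcast value can feed a whole group. The representation ('0'/'1'/'X' strings) and the greedy packing policy are assumptions, not the UMC scan implementation:

# Sketch of multicast-compatible scan chain grouping ('X' marks a don't-care bit).
def compatible(a, b):
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def merge(a, b):
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def group_chains(slices):
    """Greedily pack chain slices into groups that can share one broadcast input."""
    groups = []  # each group is [merged_slice, member_indices]
    for idx, s in enumerate(slices):
        for g in groups:
            if compatible(g[0], s):
                g[0] = merge(g[0], s)
                g[1].append(idx)
                break
        else:
            groups.append([s, [idx]])
    return groups

print(group_chains(["1X0X", "110X", "0XX1"]))
# -> [['110X', [0, 1]], ['0XX1', [2]]]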

3.
This paper describes a new compression/decompression methodology for using an embedded processor to test the other components of a system-on-a-chip (SoC). The deterministic test vectors for each core are compressed using matrix-based operations that significantly reduce the amount of test data that must be stored on the tester. The compressed data is transferred from the tester to the processor's on-chip memory. The processor executes a program that decompresses the data and applies it to the scan chains of each core under test. The matrix-based operations used to decompress the test vectors can be performed very efficiently by the embedded processor, allowing the decompression program to be fast and to provide high throughput of test data, minimizing test time. Experimental results demonstrate that the proposed approach provides greater compression than previous methods.
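
A minimal sketch of the general flavor of matrix-based decompression, expanding short compressed seeds into full scan slices with a binary matrix over GF(2); the paper's actual matrix operations may differ:

import numpy as np

# Sketch of linear GF(2) decompression: each scan slice is (M @ seed) mod 2,
# a cheap operation for an embedded processor. Sizes below are illustrative.
def decompress(M, seeds):
    return [(M @ s) % 2 for s in seeds]

rng = np.random.default_rng(0)
M = rng.integers(0, 2, size=(16, 4))            # 16 scan cells from 4 stored bits
seeds = [rng.integers(0, 2, size=4) for _ in range(3)]
for slice_ in decompress(M, seeds):
    print(''.join(map(str, slice_)))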

4.
Historical Perspective on Scan Compression
The beginnings of modern-day IC test trace back to the introduction of such fundamental concepts as scan, stuck-at faults, and the D-algorithm. Since then, several subsequent technologies have made significant improvements to the state of the art, and IC test has evolved into a multifaceted industry that supports innovation. Yet test data volume and test application time have kept growing with design size. Scan compression technology has proven to be a powerful antidote to this problem, as it has catalyzed reductions in test data volume and test application time of up to 100 times. This article sketches a brief history of test technology research, tracking the evolution of compression technology that has led to the success of scan compression. It is not our intent to identify specific inventors on a fine-grained timeline. Instead, we present the important concepts at a high level, on a coarse timeline. Starting in 1998 and continuing to the present, numerous scan-compression-related inventions have had a major impact on the test landscape. However, this article also is not a survey of the various scan compression methods. Rather, we focus on the evolution of the types of constructs used to create breakthrough solutions.

5.
The generation of test data for state-based specifications is a computationally expensive process. This problem is magnified when time constraints governing the transitions of the studied system must be taken into account. The main goal of this paper is to introduce a complete methodology, supported by tools, that addresses this issue by casting test data generation as an optimization problem. We use heuristics to generate test cases. To assess the suitability of our approach, we consider two different case studies: a communication protocol and the scientific application BIPS3D. We give details on how the test case generation problem can be posed as a search problem and automated. Genetic algorithms (GAs) and random search are used to generate test data and evaluate the approach. GAs outperform random search and seem to scale well as the problem size increases. It is worth mentioning that we use a very simple fitness function that can easily be adapted for other evolutionary search techniques.
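
A bare-bones GA skeleton of the kind the approach relies on; the encoding of timed transitions and the paper's fitness function are not reproduced, so the toy fitness below just counts ones:

import random

# Minimal GA loop: selection of the fittest half, one-point crossover, bit mutation.
def ga(fitness, length=32, pop_size=40, gens=100, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g for g in child])
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness standing in for "test objectives covered".
best = ga(fitness=sum)
print(sum(best), "of 32 bits set")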

6.
A Vector-Quantization-Compressed Volume Rendering Algorithm with Shear-Warp
One shortcoming of volume rendering with vector-quantization compression is that the pixmap generated from the codebook cannot be applied to every point of the volume projection. To overcome this drawback, a vector-quantization algorithm with shear-warp is proposed. It fully exploits the compression ratio of vector quantization and its advantage of rendering directly without decompression, while effectively leveraging the speed of shear-warp volume rendering and its suitability for volume projection. The method overcomes the shortcomings of vector-quantization-based rendering, achieves nearly real-time interactive rendering, and is suitable as a network-based rendering mode.
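
A toy sketch of the vector-quantization half of the scheme: volume blocks are replaced by codebook indices, so rendering can sample the codebook without full decompression. The k-means codebook and block size are illustrative, and shear-warp itself is omitted:

import numpy as np

# Sketch: build a small codebook over voxel blocks with a toy k-means.
def build_codebook(blocks, k=4, iters=10):
    rng = np.random.default_rng(0)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((blocks[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (idx == j).any():
                codebook[j] = blocks[idx == j].mean(0)   # move code vectors to cluster means
    return codebook, idx

blocks = np.random.default_rng(1).random((100, 8))        # 100 blocks of 8 voxels each
codebook, idx = build_codebook(blocks)
print("compressed:", len(codebook), "code vectors +", len(idx), "indices")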

7.
An Effective Approach to Reducing the Hardware Overhead of Multi-Seed Built-In Self-Test
A novel BIST scheme based on reseeding is proposed. The scheme uses test vectors that detect random-pattern-resistant faults as seeds, and exploits the don't-care bits left over from seed generation for storage compression; test application time is reduced by minimizing the test sequence per seed. Experiments show that the scheme requires little additional hardware, has short test application time, and achieves high fault coverage, approximately equal to that of the underlying ATPG tool. After a brief review of common deterministic BIST schemes, the paper focuses on the proposed compressed-storage hardware, the synthesis method, and the experimental results.
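
An illustrative sketch of the reseeding mechanism: each stored seed is expanded on chip by an LFSR into a burst of pseudo-random patterns. The register width and tap positions below are assumptions:

# Sketch of LFSR-based seed expansion for reseeding BIST (illustrative taps/width).
def lfsr_patterns(seed, n_patterns, width=8, taps=(7, 5, 4, 3)):
    state = seed
    patterns = []
    for _ in range(n_patterns):
        patterns.append(state)
        fb = 0
        for t in taps:                      # XOR the tapped bits for feedback
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return patterns

for p in lfsr_patterns(seed=0b10110001, n_patterns=4):
    print(f"{p:08b}")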

8.
Ever-increasing test data volume and test power consumption are two major issues in testing digital integrated circuits. This paper presents an efficient technique to reduce test data volume and test power simultaneously. The pre-generated test sets are divided into two groups based on the number of unspecified bits in each test set. The test compression procedure is applied only to the group of test sets that contain more unspecified bits, and the power reduction technique is applied to the remaining test sets. In the proposed approach, the unspecified bits in the pre-generated test sets are selectively mapped to 0s or 1s based on their effectiveness in reducing test data volume and power consumption. We also present a simple decoder architecture for on-chip decompression. Experimental results on ISCAS'89 benchmark circuits demonstrate the effectiveness of the proposed technique compared with other test-independent compression techniques.
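
A small sketch contrasting the two don't-care filling policies the paper balances: filling for compression versus filling for low scan power. These are common illustrative policies, not the paper's selective mapping:

# 'X' marks an unspecified bit in a test cube.
def fill_for_compression(cube):
    return cube.replace('X', '0')           # long 0-runs favor run-length coding

def fill_for_low_power(cube):
    out, last = [], '0'
    for c in cube:                           # repeat the previous bit to cut transitions
        last = last if c == 'X' else c
        out.append(last)
    return ''.join(out)

cube = "1XX0XX1X"
print(fill_for_compression(cube))  # 10000010
print(fill_for_low_power(cube))    # 11100111 -- only 2 transitions while shifting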

9.
Editor's note: Containing production cost is a major concern for today's complex SoCs. Key contributors to production cost are test time and test data volume, for which numerous compression techniques have been proposed. This article introduces a different approach to test data volume reduction, namely the use of modular test based on the IEEE Std 1500 architecture, and it provides modeling, analysis, and quantification to support the proposed approach. —Yervant Zorian, VirageLogic

10.
刘鹏  张云  尤志强  邝继顺  彭程 《计算机工程》2011,37(14):254-255
To further reduce test power and test application time, a compression method based on scan chain blocking is proposed for incompatible test vectors. The method considers the scan sub-chains that are incompatible between two consecutive test vectors: the latter vector can be composed of several bits shifted in from the scan input together with the leading bits of the preceding vector. Experimental results show that the method effectively reduces test application time and improves efficiency.
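
A minimal sketch of the overlap idea, assuming one chain orientation: find the fewest bits that must be shifted in so that the rest of the next vector is supplied by the previous vector's leading bits sliding down the chain:

# Sketch: minimal shift-in length between consecutive test vectors (illustrative).
def min_shift(prev, nxt):
    """Smallest k such that nxt == nxt[:k] + prev[:len(prev)-k]."""
    n = len(prev)
    for k in range(n + 1):
        if nxt[k:] == prev[:n - k]:
            return k
    return n

v1, v2 = "10110100", "01101101"
k = min_shift(v1, v2)
print(f"shift in only {k} of {len(v2)} bits:", v2[:k], "+", v1[:len(v1)-k])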

11.
In the design of electronic embedded systems, the allocation of data structures to memory banks is a main challenge faced by designers: if this optimization problem is solved well, a great improvement in efficiency can be obtained. In this paper, we consider the dynamic memory allocation problem, where data structures have to be assigned to memory banks in different time periods during the execution of the application. We propose a GRASP to obtain high-quality solutions in the short computational time this type of problem requires. We also explore the adaptation of the ejection chain methodology, originally proposed in the context of tabu search, for improved outcomes. Our experiments with real and randomly generated instances show the superiority of the proposed methods over the state-of-the-art method.
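
A GRASP skeleton in the spirit described, with a greedy randomized construction followed by local search; the objective below (balancing bank loads) is an assumed stand-in for the paper's cost model:

import random

def grasp(sizes, n_banks, iters=50, alpha=0.7):
    def imbalance(assign):
        loads = [0] * n_banks
        for item, bank in enumerate(assign):
            loads[bank] += sizes[item]
        return max(loads) - min(loads)

    best = None
    for _ in range(iters):
        # Greedy randomized construction: pick among the alpha-fraction least-loaded banks.
        assign, loads = [0] * len(sizes), [0] * n_banks
        for item in sorted(range(len(sizes)), key=lambda i: -sizes[i]):
            ranked = sorted(range(n_banks), key=lambda b: loads[b])
            rcl = ranked[:max(1, int(alpha * n_banks))]
            bank = random.choice(rcl)
            assign[item] = bank
            loads[bank] += sizes[item]
        # Local search: single-item moves while they improve the solution.
        improved = True
        while improved:
            improved = False
            for item in range(len(sizes)):
                for bank in range(n_banks):
                    trial = assign[:]
                    trial[item] = bank
                    if imbalance(trial) < imbalance(assign):
                        assign, improved = trial, True
        if best is None or imbalance(assign) < imbalance(best):
            best = assign
    return best

print(grasp(sizes=[7, 5, 4, 4, 3, 2, 1], n_banks=3))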

12.
Array partitioning is an important research problem in array management, since partitioning strategies strongly influence storage, query evaluation, and other components of array management systems. Meanwhile, compression of array data is increasingly needed because of its growing volume. Observing that partitioning can significantly affect compression performance, this paper designs efficient partitioning methods for array data that optimize compression performance. To the best of our knowledge, this problem has not previously been studied. We first formulate the problem of array partitioning for optimizing compression performance (PPCP for short), adopting a popular compression technique that allows queries to be processed on the compressed data without decompression. Because the problem is NP-hard, we introduce two principles for exploring partitioning solutions, which underpin the proposed algorithms. The first principle shows that compression performance can be improved if an array is partitioned into two parts with different sparsities. The second principle introduces a greedy strategy that heuristically selects the partitioning positions. Based on these two principles, we design greedy partitioning algorithms for the independent case and the dependent case. Since the algorithm for the dependent case is expensive, a further optimization based on random sampling and dimension grouping is proposed to achieve linear time cost. Finally, experiments on both synthetic and real-life data show that the proposed partitioning algorithms achieve better performance in both compression and query evaluation.
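
A toy sketch of the first principle, greedily choosing a split position so that the two parts have different sparsities; the per-part cost model (dense versus coordinate storage) is an assumption:

import numpy as np

# Each part is stored either densely or as (index, value) pairs, whichever is smaller.
def part_cost(block):
    nnz = np.count_nonzero(block)
    return min(block.size, 2 * nnz)

def best_split(arr):
    """Greedy: try every split position along axis 0, keep the cheapest."""
    costs = [(part_cost(arr[:k]) + part_cost(arr[k:]), k)
             for k in range(1, arr.shape[0])]
    return min(costs)

arr = np.zeros((100, 10))
arr[:20] = 1                                   # dense head, empty tail
cost, k = best_split(arr)
print(f"split at row {k}: cost {cost} vs unsplit {part_cost(arr)}")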

13.
This article proposes an advanced methodology for numerically simulating complex noise problems. More precisely, we consider the so-called multi-stage acoustic hybrid approach, whose principle is to couple a sound generation stage with an acoustic propagation stage. Under that approach, we propose an advanced hybrid method whose acoustic propagation stage relies on Computational AeroAcoustics (CAA) techniques. First, an innovative weak-coupling technique is developed that allows implicit forcing of the CAA stage with a given source signal coming from an a priori evaluation, whether that evaluation is analytical or computational. Then, thanks to additional innovations, the resulting CAA-based hybrid approach is optimized so that it can be applied to realistic and complex acoustic problems more easily and safely. These features are validated on an academic test case before the advanced CAA-based hybrid methodology is applied to two problems of flow-induced noise radiation. This demonstrates the ability of the proposed method to address realistic problems, handling both acoustic generation and propagation phenomena despite their intrinsically multiscale character.
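
A toy illustration of the weak-coupling idea: a 1-D linear wave solver, standing in for the CAA propagation stage, is forced by a precomputed source signal instead of resolving the generation physics itself. Grid, scheme, and source are illustrative:

import numpy as np

nx, nt, c, dx = 200, 400, 1.0, 1.0
dt = 0.5 * dx / c                              # CFL-stable time step
p_prev, p = np.zeros(nx), np.zeros(nx)
src = lambda t: np.sin(2 * np.pi * 0.02 * t) * np.exp(-((t - 60) / 20) ** 2)
for n in range(nt):
    lap = np.roll(p, 1) - 2 * p + np.roll(p, -1)          # periodic 1-D Laplacian
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * lap    # leapfrog update
    p_next[nx // 2] += dt ** 2 * src(n * dt)              # implicit forcing at one point
    p_prev, p = p, p_next
print("peak radiated amplitude:", np.abs(p).max())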

14.
Advances in graphics hardware have made real-time volume data visualization feasible; however, as scanning technology evolves, large-scale data visualization still faces insufficient GPU memory, so compressed representations that preserve data features are very important. A multiscale representation and visualization method for volume data is established based on tensor approximation. On one hand, multiscale tensor approximation achieves data compression and addresses the rendering of large data; on the other hand, the adaptive compression bases of the tensor approximation preserve the scale features of the volume data. Experimental results show that the method is effective.
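
A compact sketch of tensor-approximation compression via a truncated Tucker/HOSVD decomposition computed with plain SVDs; ranks and data are illustrative, and the paper's adaptive multiscale bases are not reproduced:

import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD: one factor per mode, then project T onto the bases."""
    factors = []
    for mode, r in enumerate(ranks):
        mat = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(mat, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

vol = np.random.default_rng(0).random((16, 16, 16))
core, factors = hosvd(vol, ranks=(4, 4, 4))
approx = reconstruct(core, factors)
print(core.size + sum(f.size for f in factors), "values instead of", vol.size)
print("relative error:", np.linalg.norm(vol - approx) / np.linalg.norm(vol))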

15.
This paper presents a new job release (JR) and scheduling methodology for one-stage parallel machines with sequence-dependent setup times. A decision support system (DSS) based on job release is developed to enable application of the methodology. First, mathematical programming models for both job release and job scheduling are devised. Then, because the problems are NP-hard, heuristics are proposed. Regarding the interaction between JR and scheduling, job scheduling is integrated with job release in the proposed heuristics so that the capacity information provided by scheduling can be exploited for job release; in brief, scheduling oriented to product design characteristics informs JR in the proposed approach. Moreover, the value stream mapping (VSM) approach is used to quantify the effect of the proposed methodology, which was applied in a real-life electric wire-harness production system. The application, based on 120 days of production data, revealed that the proposed methodology provided a 25% decrease in in-plant manufacturing lead time.
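
An illustrative greedy baseline for the underlying scheduling problem, one-stage parallel machines with sequence-dependent setups (not the paper's heuristic): each job goes wherever it would finish earliest:

def schedule(jobs, setup, proc, n_machines):
    """setup[i][j]: setup time when job j follows job i; proc[j]: processing time."""
    machines = [{"time": 0.0, "last": None, "jobs": []} for _ in range(n_machines)]
    for j in jobs:
        def finish(m):
            s = setup[m["last"]][j] if m["last"] is not None else 0.0
            return m["time"] + s + proc[j]
        m = min(machines, key=finish)          # earliest-finish machine wins
        m["time"] = finish(m)
        m["last"] = j
        m["jobs"].append(j)
    return machines

proc = [3.0, 2.0, 4.0, 1.0]
setup = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
for m in schedule(range(4), setup, proc, n_machines=2):
    print(m["jobs"], "makespan", m["time"])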

16.
In an exciting new application, wireless sensor networks (WSNs) are increasingly deployed to monitor the structural health of underground subway tunnels, promising many advantages over traditional monitoring methods. As a result, ensuring efficient data communication, transmission, and storage has become a huge challenge for these systems as they cope with ever-increasing quantities of data collected by ever-growing numbers of sensor nodes. A key approach to managing big data in WSNs is data compression: reducing the volume of data traveling between sensor nodes reduces the high energy cost of data transmission and saves storage space. In this paper, we propose an algorithm for compressing spatial-temporal data from one type of sensor node in a WSN deployed in an underground tunnel. The proposed algorithm works efficiently because it considers temporal as well as spatial features of the sensor data. A recovery process reconstructs a close approximation of the original data from the nodes. We validate the proposed recovery technique through computational experiments using data acquired from a real WSN.
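
A toy sketch of one way spatial and temporal correlation can be exploited, coding each node's readings as quantized deltas against a spatially nearby reference node; the scheme and step size are assumptions, not the paper's algorithm:

# Sketch: spatial-delta coding with quantization controlling the approximation error.
def compress(readings, ref, step=0.05):
    """readings/ref: equal-length series; store quantized (reading - ref) deltas."""
    return [round((r - f) / step) for r, f in zip(readings, ref)]

def recover(codes, ref, step=0.05):
    return [f + c * step for c, f in zip(codes, ref)]

node = [20.11, 20.13, 20.18, 20.30]
ref  = [20.10, 20.10, 20.15, 20.28]    # readings from a nearby node
codes = compress(node, ref)
print(codes)                            # small integers -> cheap to transmit
print(recover(codes, ref))              # close approximation of the original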

17.
The semiconductor industry is capital-intensive: equipment costs account for more than seventy percent of the capital investment in semiconductor test facilities. In an industrial investigation, machine interference may consume 10% of machine time, so there is a need to assign an appropriate number of machines to each operator to minimize machine interference time and labor cost. This paper develops an effective methodology to determine the optimal assignments between test machines and operators for different product mixes, enhancing utilization for optimal system performance. In particular, we employ response surface methodology and genetic algorithms with simulation to explore alternative assignment ratios and thus identify well-performing machine-operator assignments in various decision contexts. An empirical study with real data was conducted in a semiconductor test facility to validate this approach. The results show the validity of the proposed approach in real settings, and the developed approach has been implemented online.
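
A minimal sketch of the response-surface step: fit a quadratic surface to simulated performance over candidate machines-per-operator ratios and read off a promising ratio. The data and the single-factor surface are synthetic illustrations:

import numpy as np

ratios = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)        # machines per operator
util = np.array([0.61, 0.72, 0.80, 0.83, 0.82, 0.77, 0.70])  # simulated utilization

X = np.column_stack([np.ones_like(ratios), ratios, ratios**2])
b = np.linalg.lstsq(X, util, rcond=None)[0]    # util ~ b0 + b1*r + b2*r^2
best = -b[1] / (2 * b[2])                      # vertex of the fitted parabola
print(f"fitted optimum near {best:.2f} machines per operator")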

18.
Building a predictive classification model for colon cancer from gene expression data, using methods and techniques from information science, is of great significance for colon cancer recognition. In model construction, effectively eliminating noisy genes and selecting discriminative feature genes has a very important influence on the accuracy of colon cancer prediction. Addressing this problem, this paper proposes a new feature gene selection method and builds a colon cancer classification and prediction model with a support vector machine as the classifier. Experiments on colon cancer gene expression profiles demonstrate the feasibility and effectiveness of the method.
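
A compact sketch of the pipeline's shape using scikit-learn, with a univariate F-test filter as an assumed stand-in for the paper's feature gene selection and synthetic data in place of the real expression profiles:

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in: 62 samples x 2000 genes, 10 genes carry class signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(62, 2000))
y = rng.integers(0, 2, size=62)
X[y == 1, :10] += 1.0

# Filter noisy genes, then classify with a linear SVM.
model = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="linear"))
print(cross_val_score(model, X, y, cv=5).mean())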

19.
In this work we investigate techniques for embedding domain-specific spatial invariances into highly constrained neural networks. This information is used to drastically reduce the number of weights that must be determined during the learning phase, allowing us to apply artificial neural networks to problems characterized by a relatively small number of available examples. As an application of the proposed methodology, we study the problem of optical inspection of machined parts. More specifically, we characterize the performance of a network created according to this strategy, which accepts images of parts under inspection at its input and issues a flag at its output stating whether the part is defective. The results obtained so far show that the proposed methodology provides a potentially relevant approach for the quality control of industrial parts, as it offers both accuracy and short software development time compared with a classifier implemented using a standard approach.
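
A small illustration of the weight-sharing idea behind such constrained networks: a single convolution kernel encodes translation invariance, so a handful of shared weights replaces a full dense weight matrix. Sizes are illustrative:

import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution: one shared kernel slides over the image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * kernel).sum()
    return out

img = np.random.default_rng(0).random((32, 32))
kernel = np.random.default_rng(1).random((5, 5))
print("free parameters:", kernel.size, "vs dense layer:", 32*32*28*28)
print("feature map:", conv2d_valid(img, kernel).shape)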

20.
This paper analyzes the problems of large test data volume and long test application time faced in integrated circuit testing, reviews common test compression methods, and, building on a scan-blocking test architecture, proposes a scheme that applies partial encoding compression to the data. With very small additional hardware overhead, the test data is further compressed. Both theoretical analysis and experimental results demonstrate the feasibility and effectiveness of the proposed compression scheme.
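
A toy sketch of one common "partial encoding" flavor, giving dictionary codes only to the most frequent blocks and passing everything else through raw behind a flag bit; block size and dictionary size are assumptions:

from collections import Counter

def partial_encode(bits, block=4, dict_size=2):
    blocks = [bits[i:i+block] for i in range(0, len(bits), block)]
    w = max(1, (dict_size - 1).bit_length())          # code width for the dictionary
    codebook = {b: format(i, f'0{w}b')
                for i, (b, _) in enumerate(Counter(blocks).most_common(dict_size))}
    out = []
    for b in blocks:
        # Flag '1' + short code for dictionary blocks, '0' + raw bits otherwise.
        out.append('1' + codebook[b] if b in codebook else '0' + b)
    return ''.join(out), codebook

encoded, cb = partial_encode("0000000011110000101000001111")
print(len(encoded), "bits vs", 28, "raw;", cb)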
