Similar Literature (20 results found)
1.
陈天超  冯百明 《计算机应用》2013,33(6):1531-1539
Floating-point addition in a computer requires exponent alignment and right normalization, and both operations involve rounding. Rounding introduces error, and repeated accumulation lets these errors pile up, so the result can lose precision or even be outright wrong. This paper experimentally studies how different association (grouping) orders affect the sum and the accumulated error of single-precision floating-point summation, and looks for the rules by which association order drives the error. The findings give computing paradigms and architectures such as multi-core, GPU, and multiprocessor computing a basis for choosing an association scheme, making it easier to exploit their parallelism.
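The effect is easy to reproduce. Below is a minimal NumPy sketch (illustrative data and orderings, not the paper's experiment) that accumulates the same single-precision values under three association orders:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000).astype(np.float32)

def accumulate(values):
    """Sequential float32 accumulation: the sum is rounded after every add."""
    s = np.float32(0.0)
    for v in values:
        s = np.float32(s + v)
    return s

results = {
    "ascending":  accumulate(np.sort(x)),        # small-to-large grouping
    "descending": accumulate(np.sort(x)[::-1]),  # large-to-small grouping
    "pairwise":   np.sum(x, dtype=np.float32),   # NumPy's pairwise summation
}
reference = float(np.sum(x, dtype=np.float64))   # double-precision reference

for name, s in results.items():
    print(f"{name:10s} sum={float(s):.4f}  abs.error={abs(float(s) - reference):.6f}")
```

Ascending and pairwise orders typically land much closer to the double-precision reference than descending order, which is exactly the kind of ordering rule the paper sets out to characterize.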

2.

Optical character recognition (OCR) systems help to digitize paper-based historical archives. However, the poor quality of scanned documents and the limitations of text recognition techniques produce various kinds of errors in OCR output. Post-processing is an essential step in improving the output quality of OCR systems by detecting and cleaning these errors. In this paper, we present an automatic model consisting of both error detection and error correction phases for OCR post-processing. We propose a novel approach to OCR post-processing error correction using correction pattern edits and an evolutionary algorithm, a family of methods mainly used for solving optimization problems. Our model adopts a variant of the self-organizing migrating algorithm along with a fitness function based on modifications of important linguistic features. We illustrate how to construct the table of correction pattern edits, which covers all types of edit operations and is learned directly from the training dataset. With efficient settings of the algorithm parameters, our model delivers high-quality candidate generation and error correction. The experimental results show that our proposed approach outperforms various baseline approaches as evaluated on the benchmark dataset of the ICDAR 2017 Post-OCR text correction competition.
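The paper's table construction is specific to its method, but the underlying idea of mining character-level correction patterns from aligned OCR/ground-truth pairs can be sketched with Python's difflib (a simplified stand-in: the training pairs and the token `brovvn` are invented, and no evolutionary search is involved):

```python
from collections import Counter
from difflib import SequenceMatcher

def learn_patterns(pairs):
    """Count substitution/insertion/deletion patterns seen in training pairs."""
    patterns = Counter()
    for ocr, truth in pairs:
        for tag, i1, i2, j1, j2 in SequenceMatcher(None, ocr, truth).get_opcodes():
            if tag != "equal":                     # replace / insert / delete
                patterns[(ocr[i1:i2], truth[j1:j2])] += 1
    return patterns

def candidates(token, patterns, top_k=5):
    """Apply the most frequent learned edits to a suspect token."""
    out = set()
    for (src, dst), _ in patterns.most_common():
        if src and src in token:
            out.add(token.replace(src, dst, 1))
        if len(out) >= top_k:
            break
    return out

train = [("tbe quick brovvn fox", "the quick brown fox"),
         ("hc said", "he said")]
p = learn_patterns(train)
print(candidates("brovvn", p))   # includes 'brown'
```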


3.
许瑾晨  郭绍忠  黄永忠  王磊  周蓓 《软件学报》2015,26(12):3088-3103
Exceptions cause program errors, yet building floating-point software that is entirely exception-free is hard, so effective exception handling matters. Existing exception handling, however, does not target floating-point operations; research has focused on integer overflow, even though floating-point arithmetic makes integer overflow less likely. To address this, we propose a staged exception handling method for floating-point operations in assembly-implemented math functions. By mapping exception types to 64-bit floating-point numbers and centering on the core operation, the method splits exception handling into three stages: input-argument detection (handling INV exceptions), specific-code detection (handling DZE and INF exceptions), and output-result detection (handling FPF and DNO exceptions); the rationale for the staged design is proved from the standpoint of the underlying mathematics. The method was applied to the Mlib floating-point library and tested on more than 600 floating-point functions targeting different platforms. The results show that it reduces the share of functions that abort on a floating-point exception from 90% to 0%, and they also confirm the method's efficiency.
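In outline, the three stages amount to guarding a core operation with checks before and after it. A toy Python illustration of that control flow only (the paper works at the assembly level and encodes exception types as 64-bit floats; `checked_divide` and its messages are invented):

```python
import math
import sys

def checked_divide(x, y):
    # Stage 1: input-argument detection -> invalid operation (INV)
    if math.isnan(x) or math.isnan(y):
        raise FloatingPointError("INV: NaN operand")
    # Stage 2: specific-code detection -> divide-by-zero (DZE) / infinity (INF)
    if y == 0.0:
        raise ZeroDivisionError("DZE: division by zero")
    if math.isinf(x) or math.isinf(y):
        raise FloatingPointError("INF: infinite operand")
    result = x / y                       # the core operation
    # Stage 3: output-result detection -> overflow (FPF) / denormal (DNO)
    if math.isinf(result):
        raise OverflowError("FPF: result overflowed")
    if result != 0.0 and abs(result) < sys.float_info.min:
        raise FloatingPointError("DNO: denormal result")
    return result

print(checked_divide(1.0, 3.0))          # 0.3333333333333333
```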

4.
Discrete linear transforms implemented on computers require a very large number of operations. The round-off errors inherent in the computer's floating-point arithmetic introduce errors into the results, and these can be fairly large.

In this paper we propose an algorithm, based on the La Porte-Vignes Perturbation Method, which can automatically analyze the round-off error in any discrete linear transform. Furthermore, the algorithm supplies the local accuracy of any discrete transform in the case of experimental data (data errors plus round-off errors).
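The perturbation idea can be sketched in a few lines: rerun the transform under tiny random perturbations and read the number of stable digits off the spread of the results. This is only loosely in the method's spirit (the actual technique randomizes the rounding of every operation, not just the inputs; the DFT-bin example and constants are invented):

```python
import numpy as np

def significant_digits(transform, x, trials=20, eps=2**-52):
    """Estimate stable decimal digits from the spread over perturbed runs."""
    rng = np.random.default_rng(1)
    results = []
    for _ in range(trials):
        noise = 1.0 + eps * rng.choice([-1.0, 1.0], size=x.shape)
        results.append(transform(x * noise))
    results = np.array(results)
    mean, std = results.mean(), results.std()
    if std == 0.0:
        return 15.0                      # indistinguishable from full precision
    return max(0.0, np.log10(abs(mean) / std))

x = np.linspace(0.0, 1.0, 256)
dft_bin = lambda v: np.real(np.exp(-2j * np.pi * 3 * np.arange(v.size) / v.size) @ v)
print(f"~{significant_digits(dft_bin, x):.1f} significant digits")
```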

5.
We present a system, ESOLID, that performs exact boundary evaluation of low-degree curved solids in reasonable amounts of time. ESOLID performs accurate Boolean operations using exact representations and exact computations throughout. The demands of exact computation require a different set of algorithms and efficiency improvements than those found in a traditional inexact floating-point based modeler. We describe the system architecture, the representations, and the issues in implementing the algorithms. We also describe a number of techniques that increase the efficiency of the system, based on lazy evaluation, floating-point filters, arbitrary-precision floating-point arithmetic with error bounds, and lower-dimensional formulation of subproblems.

ESOLID has been used for boundary evaluation of many complex solids. These include both synthetic datasets and parts of a Bradley Fighting Vehicle designed with the BRL-CAD solid modeling system. It is shown that ESOLID can correctly evaluate the boundary of solids that are very hard to compute using a fixed-precision floating-point modeler. In terms of performance, it is about an order of magnitude slower than a floating-point boundary evaluation system in most cases.
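Among the efficiency techniques listed, floating-point filters are the easiest to illustrate: decide a sign predicate fast in floats with a forward error bound, and fall back to exact rationals only when the bound is inconclusive. A simplified sketch with a deliberately crude bound (not ESOLID's actual filter):

```python
from fractions import Fraction

def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the 2x2 determinant (b - a) x (c - a): +1, -1, or 0."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    # Crude forward error bound (illustrative, not a tight Shewchuk-style bound).
    bound = 1e-12 * (abs((bx - ax) * (cy - ay)) + abs((by - ay) * (cx - ax)))
    if abs(det) > bound:
        return (det > 0) - (det < 0)     # the float result is trustworthy
    # Filter failed: recompute exactly with rationals.
    ax, ay, bx, by, cx, cy = map(Fraction, (ax, ay, bx, by, cx, cy))
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

print(orient2d(0.0, 0.0, 1e-30, 1e-30, 2e-30, 2e-30))  # exact path: collinear -> 0
```

The filter keeps the exact path off the hot loop: most predicates are decided in floats, and the expensive rational fallback runs only near degeneracies.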

6.
We introduce a concrete semantics for floating-point operations which describes the propagation of roundoff errors throughout a calculation. This semantics is used to assert the correctness of a static analysis which can be straightforwardly derived from it. In our model, every elementary operation introduces a new first-order error term, which is later propagated and combined with other error terms, yielding higher-order error terms. The semantics is parameterized by the maximal order of error to be examined and verifies whether higher-order errors actually are negligible. We also consider coarser semantics that compute the contribution to the final error of the errors introduced by selected intermediate computations. As a result, we obtain a family of semantics, and we show that the less precise ones are abstractions of the more precise ones.
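A toy rendering of the model (invented class and names; the paper's semantics is considerably more careful): every operation mints a fresh first-order rounding term, existing terms propagate linearly, and products of first-order terms fall into a single higher-order bucket.

```python
import itertools

_fresh = itertools.count()
U = 2**-53                                     # unit roundoff for binary64

class ErrVal:
    """A float plus named first-order error bounds and a higher-order bucket."""
    def __init__(self, value, terms=None, ho=0.0):
        self.value = value                     # the float approximation
        self.terms = dict(terms or {})         # label -> first-order bound
        self.ho = ho                           # magnitude of higher-order terms

    def _rounded(self, v, terms, ho):
        terms[f"e{next(_fresh)}"] = abs(v) * U # fresh rounding term per operation
        return ErrVal(v, terms, ho)

    def __add__(self, o):
        terms = {k: self.terms.get(k, 0.0) + o.terms.get(k, 0.0)
                 for k in {*self.terms, *o.terms}}
        return self._rounded(self.value + o.value, terms, self.ho + o.ho)

    def __mul__(self, o):
        terms = {k: abs(o.value) * t for k, t in self.terms.items()}
        for k, t in o.terms.items():
            terms[k] = terms.get(k, 0.0) + abs(self.value) * t
        ho = (abs(o.value) * self.ho + abs(self.value) * o.ho
              + sum(self.terms.values()) * sum(o.terms.values()))
        return self._rounded(self.value * o.value, terms, ho)

x = ErrVal(0.1, {"x0": 0.1 * U})               # input carries its encoding error
y = (x + x) * x                                # each op adds a first-order term
print(y.value, sum(y.terms.values()), y.ho)    # value, 1st-order bound, h.o. bound
```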

7.
Soft errors are becoming a prominent problem for massively parallel scientific applications. Dual-modular redundancy (DMR) can provide approximately 100% error coverage, but it suffers from excessive overhead. Stencil kernels are among the most important routines used in the context of structured grids. In this paper, we propose Grid Sampling DMR (GS-DMR), a low-overhead soft error detection scheme for stencil-based computation. Instead of comparing the whole set of results as in traditional DMR, GS-DMR compares only a subset of the results obtained by sampling the grid data, exploiting the error propagation pattern on the grid. We also design a fault-tolerant (FT) framework combining GS-DMR with checkpoint technology, and provide theoretical analysis and an algorithm for the optimal FT parameters. Experimental results on the Tianhe-2 supercomputer demonstrate that GS-DMR achieves a good FT effect for stencil-based computation, and the effect improves greatly for massively parallel applications, reducing the total FT overhead by up to 51%.
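The sampling trick works because a stencil spreads a corrupted value to its neighbors sweep after sweep, so a sparse sample eventually sees it. A minimal sketch (invented Jacobi kernel and stride; not the paper's framework):

```python
import numpy as np

def jacobi_step(u):
    """One 5-point stencil sweep on the interior of a 2-D grid."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def gs_dmr_check(u, stride=8, tol=0.0):
    """Duplicate the step, but compare only every stride-th grid point."""
    a, b = jacobi_step(u), jacobi_step(u)    # primary and shadow execution
    ok = bool(np.all(np.abs(a[::stride, ::stride] - b[::stride, ::stride]) <= tol))
    return a, ok

u = np.random.default_rng(2).random((128, 128))
u, ok = gs_dmr_check(u)
print("sampled comparison passed:", ok)
```

Comparing every eighth point in each dimension reads only 1/64 of the grid, which is where the overhead saving comes from.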

8.

Neural state classification (NSC) is a recently proposed method for runtime predictive monitoring of hybrid automata (HA) using deep neural networks (DNNs). NSC trains a DNN as an approximate reachability predictor that labels an HA state x as positive if an unsafe state is reachable from x within a given time bound, and labels x as negative otherwise. NSC predictors have very high accuracy, yet are prone to prediction errors that can negatively impact reliability. To overcome this limitation, we present neural predictive monitoring (NPM), a technique that complements NSC predictions with estimates of the predictive uncertainty. These measures yield principled criteria for the rejection of predictions likely to be incorrect, without knowing the true reachability values. We also present an active learning method that significantly reduces the NSC predictor’s error rate and the percentage of rejected predictions. We develop two versions of NPM based, respectively, on the use of frequentist and Bayesian techniques to learn the predictor and the rejection rule. Both versions are highly efficient, with computation times on the order of milliseconds, and effective, managing in our experimental evaluation to successfully reject almost all incorrect predictions. In our experiments on a benchmark suite of six hybrid systems, we found that the frequentist approach consistently outperforms the Bayesian one. We also observed that the Bayesian approach is less practical, requiring a careful and problem-specific choice of hyperparameters.
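As a toy rendering of the frequentist rejection rule (a scikit-learn ensemble standing in for the paper's DNN, with an invented reachability label and threshold): ensemble disagreement serves as predictive uncertainty, and states on which the ensemble disagrees too much are rejected rather than labeled.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (2000, 2))               # toy HA states (x, v)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.3).astype(int)     # stand-in reachability label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
query = rng.uniform(-1.0, 1.0, (5, 2))
probs = clf.predict_proba(query)[:, 1]              # ensemble vote share
uncertainty = 1.0 - np.abs(2.0 * probs - 1.0)       # peaks where votes split
for p, u in zip(probs, uncertainty):
    verdict = "REJECT" if u > 0.3 else ("positive" if p > 0.5 else "negative")
    print(f"p={p:.2f}  uncertainty={u:.2f}  -> {verdict}")
```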


9.
Error detection in arithmetic coding is usually achieved by inserting markers in the source sequence during encoding. Transmission errors can then be detected in the decoding process if the inserted markers do not appear at the expected positions. Unlike existing approaches, in which the marker symbol is selected from the set of source symbols, we propose that the marker be created artificially so as not to affect the original distribution of the source symbols. Our scheme is proved to possess a better compression ratio than existing marker approaches at the same error misdetection probability. The relationship between codeword length expansion and error misdetection probability within a coded block is well formulated, which makes it easy to adapt to channels with different bit error rates. Simulation results show that, for adaptive arithmetic coding implemented using finite-precision computation, the distribution of error detection delay has a peak at a value slightly larger than the length of the decoding register. With a sufficiently long register, our approach can detect most error patterns in long source sequences with high probability.

10.
Single-precision floating-point computations may yield an arbitrarily false result due to cancellation and rounding errors. This is true even for very simple, structured arithmetic expressions such as Horner's scheme for polynomial evaluation. A simple procedure is presented for fast calculation of the value of an arithmetic expression to least-significant-bit accuracy in single-precision computation. For this purpose, in addition to the floating-point arithmetic, only a precise scalar product (cf. [2]) is required. If the initial floating-point approximation is not too bad, the computing time of the new algorithm is approximately the same as for the usual floating-point computation. If not, the essential advantage of the presented algorithm is that the inaccurate approximation is recognized and corrected. The algorithm achieves high accuracy, i.e. between the left and the right bound of the result there is at most one more floating-point number. A rigorous estimation of all rounding errors introduced by floating-point arithmetic is given for general triangular linear systems. The theorem is applied to the evaluation of arithmetic expressions.
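The cancellation it guards against is easy to trigger. A small demo (not the paper's algorithm, which needs the precise scalar product) evaluating (x-1)^5 by Horner's scheme in float32 near the root, against an exact rational evaluation:

```python
from fractions import Fraction
import numpy as np

coeffs = [1, -5, 10, -10, 5, -1]             # (x - 1)^5, highest degree first

def horner(cs, x):
    """Horner's scheme in whatever arithmetic the type of x provides."""
    acc = type(x)(0)
    for c in cs:
        acc = acc * x + type(x)(c)
    return acc

x = 1.0009765625                             # 1 + 2**-10, exactly representable
print("float32:", horner(coeffs, np.float32(x)))
print("exact  :", float(horner(coeffs, Fraction(x))))   # 2**-50 ~ 8.9e-16
```

The float32 result typically loses every significant digit to cancellation, while the exact value is 2^-50; the paper's procedure detects and corrects exactly this kind of failure to last-bit accuracy.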

11.
Reducing Data Cache Susceptibility to Soft Errors
Data caches are a fundamental component of most modern microprocessors. They provide efficient read/write access to data memory. Errors occurring in the data cache can corrupt data values or state, and can easily propagate throughout the memory hierarchy. One of the main threats to data cache reliability is soft (transient, nonreproducible) errors. These errors can occur more often than hard (permanent) errors, and most often arise from single-event upsets (SEUs) caused by strikes from energetic particles such as neutrons and alpha particles. Many protection techniques exist for data caches; the most common are ECC (error-correcting codes) and parity. These protection techniques detect all single-bit errors and, in the case of ECC, correct them. To make proper design decisions about which protection technique to use, accurate design-time modeling of cache reliability is crucial. In addition, as caches increase in storage capacity, another important goal is to reduce the failure rate of a cache, to limit disruption to normal system operation. In this paper, we present our modeling approach for assessing the impact of soft errors using architectural simulators. We also describe a new technique for reducing the vulnerability of data caches: refetching. By selectively refetching cache lines from the ECC-protected L2 cache, we can significantly reduce the vulnerability of the L1 data cache. We discuss and present results for two different algorithms that perform selective refetch. Experimental results show that we can obtain an 85 percent decrease in vulnerability when running the SPEC2K benchmark suite while experiencing only a slight decrease in performance. Our results demonstrate that selective refetch can cost-effectively decrease the error rate of an L1 data cache.

12.
In a DCS, the main control unit module has limited data storage resources, so algorithm blocks are developed with the lowest-precision data types possible to save storage space. Because floating-point operations that exceed what the precision can represent must be approximated or rounded, errors arise; for algorithms involving complex computations, insufficient data precision can sometimes make the algorithm compute wrong results. Algorithm testing should therefore include tests of data precision.
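A two-line illustration of the failure mode (NumPy float32; the counter value is the textbook boundary case, not from the article): once a single-precision accumulator reaches 2^24, adding 1 no longer changes it.

```python
import numpy as np

counter = np.float32(16_777_216)             # 2**24, the last exactly held integer
print(counter + np.float32(1) == counter)    # True: the increment is silently lost
```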

13.
Error estimates produced by automatic integration in pure floating-point arithmetic carry intrinsic uncertainty, which in critical cases can make the computation problematic. To avoid the problem, we use product rules to implement a self-validating subroutine for bivariate cubature over rectangular regions. Unlike previous self-validating integrators for multiple variables (Storck in Scientific Computing with Automatic Result Verification, pp. 187–224, Academic Press, San Diego, [1993]; Wolfe in Appl. Math. Comput. 96:145–159, [1998]), which use derivatives of specific higher orders for the error estimates, we extend the ideas for univariate quadrature investigated in (Chen in Computing 78(1):81–99, [2006]) to our bivariate cubature to enable locally adaptive error estimates through full use of the Peano kernel theorem. A mechanism for active recognition of unreachable error bounds is also set up. We demonstrate the effectiveness of our approach by comparing it with a conventional integrator.

14.
The detection of cracks on concrete surfaces is the most important step in the inspection of concrete structures. Conventional crack detection is performed by experienced human inspectors who sketch crack patterns manually; such detection is expensive and subjective. Therefore, automated crack detection techniques that utilize image processing have been proposed. Although most of the image-based approaches focus on the accuracy of crack detection, computation time is also important for practical applications because digital images have grown to 10 megapixels and beyond. We introduce an efficient, high-speed crack detection method that employs percolation-based image processing. We propose termination and skip procedures that reduce the computation time: the percolation process is terminated early by calculating the circularity of the growing region during processing, and percolation can be skipped for subsequent pixels according to the circularity of neighboring pixels. Experimental results show that the proposed approach efficiently reduces the computation cost.
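A rough sketch of the terminated percolation (simplified: 4-connectivity, an invented intensity tolerance, and a box-diameter circularity proxy instead of the exact definition): grow a dark region from a seed and abandon it as soon as it looks round rather than crack-like.

```python
import numpy as np
from collections import deque

def percolate(img, seed, t_circ=0.4, max_area=400, tol=20):
    """Grow a dark region from seed; stop early if it becomes too round."""
    h, w = img.shape
    region = {seed}
    frontier = deque([seed])
    while frontier and len(region) < max_area:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and int(img[ny, nx]) <= int(img[seed]) + tol):
                region.add((ny, nx))
                frontier.append((ny, nx))
        ys, xs = zip(*region)
        diam = max(max(ys) - min(ys), max(xs) - min(xs)) + 1
        circ = 4.0 * len(region) / (np.pi * diam * diam)  # ~1 blob, ->0 crack
        if len(region) > 30 and circ > t_circ:
            return None              # early termination: round region, not a crack
    return region                    # crack candidate; neighbors could be skipped

img = np.full((64, 64), 200, dtype=np.uint8)
img[5:60, 30] = 50                   # a thin dark vertical "crack"
print(percolate(img, (30, 30)) is not None)   # True: crack-like region survives
```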

15.
Floating-point numbers are a finite-precision encoding of the reals, so floating-point computation can yield inexact or exceptional results, which makes effective floating-point exception detection important. Existing detection methods do not target floating-point math functions, so we propose one that does. Based on the five exception classes defined by the IEEE-754 standard (overflow, underflow, divide-by-zero, invalid operation, and inexact), and drawing on the floating-point control register FPCR used in the Sunway high-performance math library together with the IEEE-754 conditions under which floating-point exceptions arise, the method classifies exception types against floating-point instructions and instruments the program at compile time to detect the exceptions occurring in math functions, while also recording code coverage. Finally, the method was applied to a math library and tested on more than 100 floating-point math functions. The experimental results show that it effectively detects all classes of exceptions.
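NumPy exposes a software analogue of the same trap mechanism (a stand-in, not the paper's compile-time instrumentation of the Sunway FPCR; note that NumPy traps four of the five IEEE-754 classes, but not the inexact flag):

```python
import numpy as np

cases = {
    "invalid operation": lambda: np.sqrt(np.float64(-1.0)),
    "divide by zero":    lambda: np.float64(1.0) / np.float64(0.0),
    "overflow":          lambda: np.exp(np.float64(1e6)),
    "underflow":         lambda: np.exp(np.float64(-1e6)),
}
with np.errstate(all="raise"):                 # turn IEEE flags into exceptions
    for name, op in cases.items():
        try:
            op()
            print(f"{name}: no exception raised")
        except FloatingPointError as exc:
            print(f"{name}: trapped ({exc})")
```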

16.
On certain recently developed architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment and the compiler choices. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are entirely and automatically proved.

17.
Objective: A light field camera captures 4D light field data of a scene in a single shot, from which focal stack images can be rendered; a focus measure then extracts depth information from the stack. However, different focus measures have different response characteristics and none suits every scene, and the depth extracted by most existing methods suffers from large defocus errors and poor robustness. To address this, we propose a new depth extraction method based on a light-field focus measure that obtains high-accuracy depth information. Method: We design a windowed gradient mean-squared-deviation focus measure to extract depth from the focal stack images; we use the all-in-focus color image and a defocus function to mark the defocused regions of the image, and correct the defocus errors with a neighborhood search algorithm. Finally, a Markov random field (MRF) fuses the corrected depth map extracted by the Laplacian operator with the depth map from the gradient measure, yielding a high-accuracy depth image. Results: On the Lytro dataset and our own test data, the depth extracted by our method is less noisy than that of other state-of-the-art algorithms; accuracy improves by about 9.29% on average, and mean squared error drops by about 0.056 on average. Conclusion: The method produces depth with less grain noise; guided by color information, it effectively corrects defocus errors, and it performs well on scenes with many smooth regions.
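The first step, the windowed gradient-variance focus measure over a focal stack, condenses to a few NumPy/SciPy lines (shapes and window size assumed; the defocus correction and MRF fusion stages are not sketched):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_from_focus(stack, win=9):
    """stack: (n_slices, H, W) focal stack -> (H, W) index of sharpest slice."""
    scores = []
    for img in stack:
        gy, gx = np.gradient(img.astype(np.float64))
        g = np.hypot(gx, gy)                          # gradient magnitude
        mean = uniform_filter(g, win)
        scores.append(uniform_filter(g * g, win) - mean * mean)  # windowed variance
    return np.argmax(np.stack(scores), axis=0)        # raw depth before correction

stack = np.random.default_rng(3).random((12, 64, 64))
print(depth_from_focus(stack).shape)                  # (64, 64)
```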

18.
Error Detection and Fault Tolerance in ECSM Using Input Randomization
For some applications, elliptic curve cryptography (ECC) is an attractive choice because it achieves the same level of security with a much smaller key size than schemes based on integer factorization or discrete logarithms. For security reasons, especially to provide resistance against fault-based attacks, it is very important to verify the correctness of computations in ECC applications. In this paper, error-detecting and fault-tolerant elliptic curve cryptosystems are considered. Error detection may be a sufficient countermeasure for many security applications; fault tolerance, however, enables a system to continue normal operation in spite of faults. For the purpose of detecting errors due to faults, a number of schemes and hardware structures are presented based on recomputation or parallel computation. It is shown that these structures can detect errors with a very high probability during the computation of the elliptic curve scalar multiplication (ECSM). Additionally, we show that using parallel computation along with either PV or recomputation, it is possible to build fault-tolerant structures for the ECSM. If certain conditions are met, these schemes are more efficient than others such as the well-known triple modular redundancy. Prototypes of the proposed structures for error detection and fault tolerance have been implemented, and experimental results are presented.
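A number-theoretic analogue of recomputation with input randomization (modular exponentiation standing in for ECSM; the prime, base, and blinding width are invented): recompute the same value along a randomized path and compare, so a fault in either run breaks the match.

```python
import secrets

p = 2**127 - 1                       # a Mersenne prime (illustrative field size)
g = 3

def checked_pow(k):
    q1 = pow(g, k, p)                # primary computation
    t = secrets.randbelow(2**32) + 1
    q2 = pow(g, k + t * (p - 1), p)  # Fermat: same value via a different path
    if q1 != q2:
        raise RuntimeError("fault detected during scalar operation")
    return q1

print(checked_pow(123456789))
```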

19.

This study focuses on early assessment of the energy performance of buildings (EPB), carried out by predicting the cooling load (CL) of a residential building. To this end, owing to the drawbacks of neural computing approaches (e.g., local minima), a metaheuristic technique, namely teaching-learning-based optimization (TLBO), is employed to tune a multi-layer perceptron neural network (MLPNN). The complexity of the proposed model is also optimized by trial and error. Evaluating the results revealed high efficiency for this scheme: the prediction error of the MLPNN was reduced by around 20%, and the correlation between the measured and forecasted CLs rose from 0.8875 to 0.9207. The TLBO also outperformed two benchmark optimizers, the cuckoo optimization algorithm (COA) and the league championship algorithm (LCA), in terms of both modeling accuracy and network complexity. Moreover, TLBO-MLP emerged as the most time-effective hybrid, requiring considerably less computation time than COA-MLP and LCA-MLP. Given these advantages, the proposed model can be promisingly used for early assessment of EPB in practice.
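The TLBO loop itself is compact enough to sketch on a toy objective (the sphere function below stands in for the MLP training loss; population size and bounds are invented): a teacher phase pulls the class toward the best learner, then a learner phase lets random pairs learn from each other.

```python
import numpy as np

def tlbo(f, dim, pop=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # Teacher phase: move everyone toward the teacher, away from the mean.
        teacher = X[np.argmin(fit)]
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0)),
                     lo, hi)
        fn = np.apply_along_axis(f, 1, Xn)
        better = fn < fit
        X[better], fit[better] = Xn[better], fn[better]
        # Learner phase: each learner studies with a random partner.
        for i in range(pop):
            j = int(rng.integers(pop))
            if j == i:
                continue
            step = (X[i] - X[j]) if fit[i] < fit[j] else (X[j] - X[i])
            xn = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fxn = f(xn)
            if fxn < fit[i]:
                X[i], fit[i] = xn, fxn
    return X[np.argmin(fit)], fit.min()

best, val = tlbo(lambda x: np.sum(x * x), dim=5)
print(val)   # close to 0 for the sphere function
```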


20.
Yue  Kaiyu  Xu  Fuxin  Yu  Jianing 《Neural computing & applications》2019,31(2):409-419

Convolutional networks (ConvNets) have been shown to improve performance as depth increases. Deep nets, however, are not perfect yet: gradients can vanish or explode, and some weights learn nothing during training. To avoid this, can we keep the depth shallow and simply make the network wide enough to achieve similar or better performance? To answer this question, we empirically investigate the architecture of popular ConvNet models and widen the network as far as practical at a fixed depth. Following this method, we carefully design a shallow and wide ConvNet configured with the fractional max-pooling operation and a reasonable number of parameters. Based on our technical approach, we achieve a 6.43% test error on the CIFAR-10 classification dataset. At the same time, strong performance is also achieved on the benchmark datasets MNIST (0.25% test error) and CIFAR-100 (25.79% test error) compared with related methods.
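A minimal PyTorch sketch of the idea (layer widths, pooling ratio, and depth are assumed, not the paper's exact configuration): keep the depth shallow, make the layers wide, and downsample gently with fractional max-pooling between blocks.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 512, 3, padding=1), nn.BatchNorm2d(512), nn.ReLU(),
    nn.FractionalMaxPool2d(2, output_ratio=1 / 2**0.5),   # gentle downsampling
    nn.Conv2d(512, 1024, 3, padding=1), nn.BatchNorm2d(1024), nn.ReLU(),
    nn.FractionalMaxPool2d(2, output_ratio=1 / 2**0.5),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(1024, 10),                                  # CIFAR-10 classes
)
print(model(torch.randn(2, 3, 32, 32)).shape)             # torch.Size([2, 10])
```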

