Similar Literature
Retrieved 20 similar documents (search time: 31 ms).
1.
Research on a Fast Recovery Model for Main-Memory Databases Based on Log Recovery Techniques (cited by: 1; self-citations: 0; citations by others: 1)
Building on a study of existing recovery techniques, this paper proposes a "fast recovery model". Supported by a fast log-driven checkpointing algorithm, a reload algorithm, and the model's architecture, it not only guarantees reliable system operation but also provides a fast and efficient means of restoring the system after a crash. Experiments show that, compared with other recovery methods, this approach limits the volume of generated log records, so that when the system crashes and restarts it can be restored to the most recent consistent point before the crash as quickly as possible.
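The abstract describes the mechanism only at a high level, so the sketch below is a guess at the shape of a log-driven checkpoint policy, with invented names and thresholds; it is not the paper's algorithm. The point it illustrates is that triggering checkpoints by log volume bounds the log tail replayed at restart:

```python
# A minimal sketch, assuming a simple key-value store: checkpoints are
# triggered by redo-log size, so after a crash only a short log tail is
# replayed on top of the last snapshot. All names and sizes are illustrative.
class FastRecoveryDB:
    LOG_THRESHOLD = 1000  # max redo records between checkpoints (assumed)

    def __init__(self):
        self.data = {}        # in-memory table: key -> value
        self.redo_log = []    # redo records since the last checkpoint
        self.snapshot = {}    # last consistent checkpoint image

    def write(self, key, value):
        self.redo_log.append((key, value))
        self.data[key] = value
        if len(self.redo_log) >= self.LOG_THRESHOLD:
            self.take_checkpoint()   # log-driven: size, not time, triggers it

    def take_checkpoint(self):
        self.snapshot = dict(self.data)  # persist a consistent state
        self.redo_log.clear()            # log before the snapshot is obsolete

    def recover(self):
        # Reload the checkpoint, then replay only the surviving log tail.
        self.data = dict(self.snapshot)
        for key, value in self.redo_log:
            self.data[key] = value
```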

2.
Two techniques are presented that allow fast and accurate simulation of fractional-N synthesizers. A uniform time step allows implementation of these techniques in various simulation frameworks, such as Verilog, Matlab, and C or C++ programs. The techniques are also applicable to the simulation of other PLL systems, such as clock and data recovery circuits.

3.
A PPY/SWCNTs nanocomposite-based sensor with relatively high sensitivity and fast response–recovery was developed for detection of NH3 gas at room temperature. The gas-sensitive composite thin film was prepared using chemical polymerization and spin-coating techniques, and characterized by Fourier transform infrared spectroscopy and field-emission scanning electron microscopy. The results reveal that the conjugated structure of the PPY layer was formed and the functionalized SWCNTs were well embedded. The effects of film thickness, annealing temperature, and SWCNTs content on the gas-sensing properties of the composite thin film were investigated to optimize the gas-sensing performance. The as-prepared PPY/SWCNTs composite thin-film sensor with optimized process parameters showed responses of 26–276% upon exposure to NH3 concentrations from 10 to 800 ppm, and its response and recovery times were around 22 and 38 s, respectively.

4.
5.
In this paper we present recovery techniques for distributed main-memory databases, specifically for client-server and shared-disk architectures. We present a recovery scheme for client-server architectures which is based on shipping log records to the server, and two recovery schemes for shared-disk architectures—one based on page shipping, and the other based on broadcasting of the log of updates. The schemes offer different tradeoffs, based on factors such as update rates. Our techniques are extensions to a distributed-memory setting of a centralized recovery scheme for main-memory databases, which has been implemented in the Dalí main-memory database system. Our centralized as well as distributed-memory recovery schemes have several attractive features—they support an explicit multi-level recovery abstraction for high concurrency, reduce disk I/O by writing only redo log records to disk during normal processing, and use per-transaction redo and undo logs to reduce contention on the system log. Further, the techniques use a fuzzy checkpointing scheme that writes only dirty pages to disk, yet minimally interferes with normal processing—all but one of our recovery schemes do not require updaters to even acquire a latch before updating a page. Our log shipping/broadcasting schemes also support concurrent updates to the same page at different sites.
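The fuzzy-checkpointing claim (only dirty pages written, updaters never blocked) is easy to picture with a toy sketch; the page layout and dirty-bit handling below are illustrative assumptions, not the Dalí implementation:

```python
# Toy fuzzy checkpoint: scan pages without blocking writers, flushing only
# pages marked dirty. "Fuzzy" means updates may land mid-scan; the redo log
# (not shown) makes the on-disk image consistent again at recovery time.
class Page:
    def __init__(self, pid):
        self.pid = pid
        self.bytes = bytearray(4096)
        self.dirty = False

def update(page, offset, data):
    page.bytes[offset:offset + len(data)] = data
    page.dirty = True              # no latch taken, as in the abstract

def fuzzy_checkpoint(pages, disk):
    for page in pages:
        if page.dirty:
            page.dirty = False     # clear first: a concurrent update re-dirties
            disk[page.pid] = bytes(page.bytes)   # flush only dirty pages
```

Clearing the dirty bit before flushing matters: an update that lands mid-flush re-marks the page, so it is flushed again at the next checkpoint rather than silently lost, and the redo log covers the torn page image in the meantime.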

6.
Stereo images acquired by a stereo camera setup provide depth estimation of a scene, and numerous machine vision applications deal with retrieval of 3D information. Disparity map recovery from a stereo image pair involves computationally complex algorithms, and previous methods are mainly restricted to software-based techniques on general-purpose architectures, with relatively high execution times. In this paper, a new hardware-implemented real-time disparity map computation module is realized as a parallel-pipelined, hardware-based fuzzy inference system, implemented on a single FPGA device with a typical operating frequency of 138 MHz. The module provides accurate disparity map computation at a rate of nearly 440 frames per second for a stereo image pair with a disparity range of 80 pixels and 640 × 480 pixel spatial resolution, making it suitable for real-time stereo vision applications.
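The paper's contribution is a fuzzy-inference disparity module in FPGA hardware; purely as a software illustration of the underlying task, the sketch below uses generic sum-of-absolute-differences (SAD) block matching, a standard baseline swapped in for the paper's fuzzy method (only max_disp=80 is taken from the abstract):

```python
import numpy as np

def disparity_map(left, right, max_disp=80, win=5):
    """Naive SAD block matching between rectified grayscale images.

    A generic baseline, not the paper's fuzzy-inference pipeline;
    max_disp=80 mirrors the disparity range quoted in the abstract.
    """
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            best, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(int)
                cost = np.abs(patch - cand).sum()   # SAD cost for shift d
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```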

7.
The rise in multicast deployments has brought with it increased demand for fast recovery from link and node failures. Most recovery mechanisms bolt additional services onto existing protocols, causing excessive overhead, and these modifications are predominantly protocol-specific. In this paper, we develop a multicast failure recovery mechanism that constructs protocol-independent fast reroute paths to recover from single link and single node failures. We observe that single link failure recovery in multicast networks is similar to recovering unicast traffic, and we use existing unicast recovery mechanisms for multicast traffic. We construct multicast protection trees that provide instantaneous recovery from single node failures. For a given node x, the multicast protection tree spans all of x's neighbors but does not include x itself. Thus, when the node fails, its neighbors are connected through the multicast protection tree instead of through x, and forward traffic over the tree for the duration of failure recovery. The protection trees are constructed a priori, without knowledge of the multicast traffic in the network. Based on simulations on three realistic network topologies, we observe that multicast protection trees increase routing table size by only 38% on average and path length between any source–destination pair by 13% on average.
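One straightforward way to realize the protection-tree idea, assuming an undirected graph given as an adjacency map; the BFS construction is a simple choice for the sketch, not necessarily the authors' algorithm:

```python
from collections import deque

def protection_tree(adj, x):
    """One simple construction of a multicast protection tree for node x.

    adj maps each node to the set of its neighbors in an undirected graph.
    The tree lives entirely in G - x and must reach every neighbor of x;
    a BFS spanning tree is used here for simplicity. Returns a list of
    tree edges, or None if some neighbor is unreachable once x is removed.
    """
    neighbors = adj[x]
    if not neighbors:
        return []
    root = next(iter(neighbors))
    parent = {root: None}
    queue = deque([root])
    edges = []
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v == x or v in parent:   # the tree never routes through x
                continue
            parent[v] = u
            edges.append((u, v))
            queue.append(v)
    if not neighbors <= parent.keys():
        return None                     # neighbors are split without x
    return edges
```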

8.
Accurate recovery from a cyber attack depends on fast and precise damage assessment. For damage assessment, traditional recovery methods require scanning the log of an affected database from the attacking transaction to the end, which is time-consuming. Our objective in this research is to provide techniques that accelerate the damage assessment process while producing a correct result. We present a damage assessment model and four data structures associated with it. Each of these structures captures dependency relationships among the transactions that update the database; these relationships are later used to determine exactly which transactions and which data items were affected by the attacker. A performance comparison obtained via simulation demonstrates the benefit of our model.
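The core of such damage assessment is a forward closure over transaction dependencies. The sketch below assumes those dependencies are already captured as a graph; the paper's four data structures are different ways of materializing exactly this information:

```python
from collections import deque

def damaged_transactions(dependents, attacker):
    """Forward closure over a transaction dependency graph.

    dependents maps a transaction id to the set of transactions that read
    what it wrote. Starting from the attacking transaction, everything
    reachable is damaged and must be undone; with this graph in hand, the
    log need not be scanned to its end.
    """
    damaged = {attacker}
    queue = deque([attacker])
    while queue:
        t = queue.popleft()
        for d in dependents.get(t, ()):
            if d not in damaged:
                damaged.add(d)
                queue.append(d)
    return damaged
```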

9.
The escalation of electronic attacks on databases in recent times demands fast and efficient recovery methods. Existing recovery techniques are time-consuming because they first undo all malicious and affected transactions individually, and then redo all affected transactions, again individually. In this paper, we propose a method that accelerates the undo and redo phases of recovery by combining, or fusing, malicious or affected transactions that occur in groups. These fused transactions are executed during the undo and redo phases in place of the individual transactions. By fusing relevant transactions into a single transaction, the number of operations such as start, commit, read, and write is minimized: data items that would be accessed multiple times by individual transactions are accessed only once in a fused transaction, and the amount of log I/O is reduced. This expedites the recovery procedure in the event of information attacks. A simulation analysis of the proposed model confirms our claim.
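A minimal sketch of the fusion idea, under the assumption that transactions are simple lists of read/write operations (the representation is invented for illustration):

```python
def fuse_transactions(transactions):
    """Fuse a group of affected transactions into one (illustrative only).

    Each transaction is a list of ('read', item) or ('write', item, value)
    operations. The fused transaction keeps a single start/commit for the
    whole group and only the last write per item, so data items that the
    individual transactions touched repeatedly are accessed once.
    """
    reads, last_write = set(), {}
    for txn in transactions:
        for op in txn:
            if op[0] == 'read':
                reads.add(op[1])
            else:                              # ('write', item, value)
                last_write[op[1]] = op[2]
    # Reads of items the group also writes are superseded by the redo write.
    fused = [('read', item) for item in sorted(reads - last_write.keys())]
    fused += [('write', item, value) for item, value in last_write.items()]
    return fused
```

Items written several times across the group are written once in the fused transaction, which is where the reduction in operations and log I/O comes from.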

10.
In this paper, we improve the password recovery attack on the Authentication Post Office Protocol (APOP) in two respects. First, we propose new tunnels to control more fixed bits of the MD5 collision, so that longer passwords can be recovered; in practice, passwords of up to 43 characters are recoverable. Second, we propose a group satisfaction scheme, applying a divide-and-conquer strategy and a new, suitable MD5 collision attack to greatly reduce the computational complexity of collision searching with a high number of chosen bits. The resulting local attack on APOP recovers an 11-character password in just over a minute, a 31-character password in about 6 minutes, and a 43-character password in practical time. These attacks realistically simulate the password recovery attacks launched by malware in real life, and further confirm that the security of APOP is totally broken.

11.
Information Sciences, 1987, 42(3): 255–282.
The paper proposes a technique for providing software fault tolerance in real-time applications demanding fast response and a high degree of reliability. It is shown that a reasonably flexible interprocess communication can be supported with only a small increase in complexity and overhead. The two most prominent features of the proposed scheme are (1) it attempts to exploit fault-avoidance techniques as much as possible to reduce the overhead of fault tolerance and (2) it controls the propagation of errors so as to enable efficient recovery. Formal proofs of the system operation are developed. Besides showing that the scheme works as expected, the arguments serve to highlight the assumptions needed for provably correct operation. Some issues relating to hardware fault tolerance are also considered.

12.
Fault-tolerance techniques based on checkpointing and message logging have been increasingly used in real-world applications to reduce service down-time. Most industrial applications have chosen pessimistic logging because it allows fast and localized recovery; the price they must pay, however, is high failure-free overhead. In this paper, we introduce the concept of K-optimistic logging, where K is the degree of optimism that can be used to fine-tune the trade-off between failure-free overhead and recovery efficiency. Traditional pessimistic logging and optimistic logging then become the two extremes of the spectrum spanned by K-optimistic logging, and our results generalize several previously known protocols. Our approach is to prove that only dependencies on those states that may be lost upon a failure need to be tracked on-line, so transitive dependency tracking can be performed with a variable-size vector. The size of the vector piggybacked on a message then indicates the number of processes whose failures may revoke the message, and K corresponds to the upper bound on the vector size. Furthermore, the parameter K is dynamically tunable in response to changing system characteristics.
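A sketch of the K bound on the piggybacked dependency vector, with an invented representation; the actual protocol also tracks which states are lost on failure and tunes K dynamically:

```python
def check_piggyback(dep_vector, k):
    """Enforce the K bound on a transitive-dependency vector (illustrative).

    dep_vector maps a process id to the number of that process's unstable
    (not-yet-logged) states this outgoing message depends on. Under
    K-optimistic logging a message may depend on at most K processes whose
    failure could revoke it; otherwise the sender must first force log
    records to disk to stabilize some dependencies. Representation invented.
    """
    unstable = {pid: n for pid, n in dep_vector.items() if n > 0}
    if len(unstable) > k:
        raise RuntimeError(
            f"{len(unstable)} revoking processes exceed K={k}: "
            "force log records before sending")
    return unstable   # the variable-size vector piggybacked on the message
```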

13.
14.
We investigate the robust stabilization of a class of nonlinear systems in the presence of unmodeled actuator and sensor dynamics. We show that, given any globally bounded stabilizing state-feedback control, the closed-loop system performance can be recovered by a sufficiently fast high-gain observer in the presence of sufficiently fast actuator and sensor dynamics. The performance recovery includes recovery of exponential stability of the origin, the region of attraction, and state trajectories. Moreover, it is shown that the sensor dynamics should be sufficiently faster than the observer dynamics, a restriction that does not apply to the actuator dynamics.
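For reference, the generic textbook high-gain observer that results of this kind typically rely on (shown for a chain of integrators with output y = x_1; not necessarily the exact observer used in the paper) is:

```latex
% Plant modeled as a chain of integrators: \dot{x}_i = x_{i+1}, y = x_1.
\dot{\hat{x}}_i = \hat{x}_{i+1}
  + \frac{\alpha_i}{\varepsilon^{i}}\,\bigl(y - \hat{x}_1\bigr),
\quad i = 1,\dots,n-1,
\qquad
\dot{\hat{x}}_n = \phi_0(\hat{x}, u)
  + \frac{\alpha_n}{\varepsilon^{n}}\,\bigl(y - \hat{x}_1\bigr)
```

Here the α_i are chosen so that s^n + α_1 s^(n-1) + … + α_n is Hurwitz, and a small ε > 0 makes the observer "sufficiently fast"; the abstract's restriction is that the sensor dynamics must be faster still, while the actuator dynamics need not be.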

15.
Level set models combine a low-level volumetric representation, the mathematics of deformable implicit surfaces and powerful, robust numerical techniques to produce a novel approach to shape design. While these models offer many benefits, their large-scale representation and numerical requirements create significant challenges when developing an interactive system. This paper describes the collection of techniques and algorithms (some new, some pre-existing) needed to overcome these challenges and to create an interactive editing system for this new type of geometric model. We summarize the algorithms for producing level set input models and, more importantly, for localizing/minimizing computation during the editing process. These algorithms include distance calculations, scan conversion, closest point determination, fast marching methods, bounding box creation, fast and incremental mesh extraction, numerical integration and narrow band techniques. Together these algorithms provide the capabilities required for interactive editing of level set models.

16.
As information systems become pervasive in critical applications, their disaster-tolerance capability has drawn growing attention, and heartbeat mechanisms are among the key techniques required to build a disaster-tolerant system. This paper studies an adaptive heartbeat algorithm based on the PULL model. The algorithm produces a quantified result from node and network conditions and judges system availability by comparing that result against a threshold; it can be used for availability detection in disaster-tolerant systems.
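The abstract specifies only that node and network conditions are condensed into a quantified score compared against a threshold; the inputs, weights, and normalization below are invented to make that concrete:

```python
def availability_score(rtt_ms, loss_rate, reply_age_s, weights=(0.4, 0.4, 0.2)):
    """Condense PULL-probe observations into one availability score.

    All constants here are illustrative assumptions, not the paper's
    parameters. Higher is healthier; each term is clamped to [0, 1].
    """
    rtt_term = max(0.0, 1.0 - rtt_ms / 1000.0)       # slow replies hurt
    loss_term = 1.0 - min(1.0, loss_rate)            # lost probes hurt
    age_term = max(0.0, 1.0 - reply_age_s / 30.0)    # stale replies hurt
    w_rtt, w_loss, w_age = weights
    return w_rtt * rtt_term + w_loss * loss_term + w_age * age_term

def node_available(score, threshold=0.6):
    # Availability is declared by comparing the score with an (assumed) threshold.
    return score >= threshold
```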

17.
Li Xianzhen, Zhang Zhao, Zhang Li, Wang Meng. Neural Computing & Applications, 2020, 32(17): 13363–13376.

In this paper, we propose a simple yet effective low-rank representation (LRR) and subspace recovery model called mutual-manifold regularized robust fast latent LRR. Our model improves representation ability and robustness in two ways. Specifically, it builds on the Frobenius-norm-based fast latent LRR, which decomposes given data into a principal feature part, a salient feature part and a sparse error, but improves it clearly by designing a mutual-manifold regularization to encode, preserve and propagate local information between coefficients and salient features. The mutual-manifold regularization is defined by using the coefficients as adaptive reconstruction weights for the salient features and constructing a Laplacian matrix over the salient features for the coefficients. Thus, important local topology information can be propagated between them, which can make the discovered subspace structures and features more accurate for data representation. In addition, our approach improves the robustness of subspace recovery against noise and sparse errors in the coefficients: the original coefficient matrix is decomposed into an error-corrected part and a sparse part fitting the noise in the coefficients, and the recovered coefficients are then used for robust subspace recovery. Experimental results on several public databases demonstrate that our method can outperform related algorithms.

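As a rough mathematical sketch, a Frobenius-norm latent LRR objective with mutual-manifold terms of the kind the abstract describes might take the following form; the exact formulation, weights, and norms are assumptions, not the paper's:

```latex
\min_{Z,\,L,\,E}\;
  \|Z\|_F^{2} + \|L\|_F^{2} + \lambda\,\|E\|_{1}
  + \alpha\,\operatorname{tr}\!\bigl(Z\,M_{S}\,Z^{\top}\bigr)
  + \beta\,\bigl\|\,LX - W_{Z}\,(LX)\,\bigr\|_F^{2}
\quad \text{s.t.}\quad X = XZ + LX + E
```

Here XZ is the principal feature part, LX the salient feature part and E the sparse error; M_S is a Laplacian matrix built over the salient features LX (regularizing the coefficients Z), and W_Z are reconstruction weights derived from Z (regularizing the salient features), mirroring the "mutual" coupling the abstract describes.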

18.
Loop transfer recovery (LTR) techniques are known to enhance the input or output robustness properties of linear quadratic Gaussian (LQG) designs. One restriction of the existing discrete-time LQG/LTR methods is that they can obtain arbitrarily good recovery only for minimum-phase plants. A number of researchers have attempted to devise new techniques to cope with non-minimum-phase plants and have achieved some degree of success [6-9]; nevertheless, their methods only work for a restricted class of non-minimum-phase systems. Here, we explore the zero-placement capability of generalized sampled-data hold functions (GSHF) developed in Reference 14 and show that, using the arbitrary zero placement GSHF affords, the discretized plant can always be made minimum-phase. As a consequence, we are able to achieve discrete-time perfect recovery using a GSHF-based compensator irrespective of whether the underlying continuous-time plant is minimum-phase or not.

19.
Two improved algorithms for string matching with k mismatches are presented. One algorithm is based on fast integer multiplication algorithms, whereas the other follows classic string-matching techniques more closely.
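To pin down the problem the two improved algorithms address, here is the naive O(nm) baseline for string matching with k mismatches, shown only to fix the problem statement:

```python
def k_mismatch_positions(text, pattern, k):
    """All alignments of pattern in text with at most k mismatches.

    This is the simple quadratic baseline that improved algorithms for the
    k-mismatch problem compete against.
    """
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):
        mismatches = 0
        for j in range(m):
            if text[i + j] != pattern[j]:
                mismatches += 1
                if mismatches > k:
                    break                 # early exit past the budget
        if mismatches <= k:
            hits.append(i)
    return hits
```

For example, k_mismatch_positions("abcabd", "abd", 1) returns [0, 3]: position 3 is an exact match and position 0 matches with one mismatch.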

20.
Deployed software systems are typically composed of many pieces, not all of which may have been created by the main development team. Often, the provenance of included components—such as external libraries or cloned source code—is not clearly stated, and this uncertainty can introduce technical and ethical concerns that make it difficult for system owners and other stakeholders to manage their software assets. In this work, we motivate the need to recover the provenance of software entities by a broad set of techniques that could include signature matching, source code fact extraction, software clone detection, call flow graph matching, string matching, historical analyses, and other techniques. We liken our provenance goals to those of Bertillonage, a simple and approximate forensic analysis technique based on biometrics that was developed in 19th-century France before the advent of fingerprinting. As an example, we have developed a fast, simple, and approximate technique called anchored signature matching for identifying the source origin of binary libraries within a given Java application. This technique involves a type of structured signature matching performed against a database of candidates drawn from the Maven2 repository, a 275 GB collection of open source Java libraries. To show the approach is both valid and effective, we conducted an empirical study on 945 jars from the Debian GNU/Linux distribution, as well as an industrial case study on 81 jars from an e-commerce application.
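As a toy illustration of the "fingerprint and look up" shape of such signature matching (a crude stand-in for the paper's anchored signatures, which match richer structured signatures against a Maven2-derived corpus; all names below are invented):

```python
import zipfile

def jar_class_signature(jar_path):
    """Crude stand-in for an anchored signature: the set of fully qualified
    class names inside a jar. Real anchored signature matching is more
    structured; this shows only the fingerprinting step.
    """
    with zipfile.ZipFile(jar_path) as jar:
        return frozenset(
            name[:-len(".class")].replace("/", ".")
            for name in jar.namelist() if name.endswith(".class"))

def rank_candidates(jar_path, corpus):
    """corpus maps (artifact, version) -> signature. Candidates are ranked
    by Jaccard overlap with the query jar's signature (highest first)."""
    sig = jar_class_signature(jar_path)
    scores = {key: len(sig & s) / len(sig | s)
              for key, s in corpus.items() if sig | s}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```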
