Similar Documents (20 results)
1.
The main issues in supporting fault tolerance based on checkpointing and rollback recovery for high-performance applications are the scalability of the introduced support, the possibility of analyzing the induced overhead and, more generally, the optimization of the trade-off between failure-free performance and recovery performance. In this paper we describe our contribution to fault tolerance for high-level structured parallelism models. We take a different viewpoint with respect to existing contributions, introducing a methodology for deriving properties that support fault tolerance. We show how to apply this methodology to a general data-parallel model, deriving useful properties with which to introduce a class of checkpointing protocols. Thanks to this methodology, this class of protocols is not affected by the issues described above. We exemplify two checkpointing protocols and the related rollback recovery techniques. For each protocol we also derive cost models that statically describe the failure-free performance and can be used for performance tuning or to target a Quality of Service parameter. To assess the innovation of the results we analytically and experimentally compare the introduced protocols with two protocols from the literature. Results show that while the protocols introduced in this paper permit the definition of cost models and scale well, the literature protocols do not always have these properties. Copyright © 2010 John Wiley & Sons, Ltd.
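The abstract does not reproduce the cost models themselves; as a rough, hedged sketch (the symbols below are assumptions for illustration, not taken from the paper), a failure-free cost model for a checkpointed data-parallel run typically has the form

    $T_{ff} = T_{comp} + N_{ckpt} \cdot C_{ckpt}$

where $T_{comp}$ is the failure-free computation time, $N_{ckpt}$ the number of checkpoints taken and $C_{ckpt}$ the per-checkpoint overhead; the paper's models refine such terms for each protocol so that checkpoint frequency can be tuned against a target Quality of Service.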

2.
A large portion of a high-level computer program consists of data declarations, so increased attention should be paid to testing the data-flow aspects of programs. In this paper, we consider testing the data flow in Java programs dynamically. Data flow analysis has been applied to testing procedural and some object-oriented programs. We extend the dynamic data flow analysis technique to test Java programs and show how it can be applied to detect data flow anomalies.
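The data-flow anomalies such analyses detect are language-independent; a minimal illustration (sketched here in Python rather than Java, and not taken from the paper) of two classic anomalies:

    # define-define (dd) anomaly: the first value assigned to `total` is never used.
    def order_total(prices):
        total = 0            # define
        total = sum(prices)  # redefine with no intervening use -> dd anomaly
        return total

    # defined-but-never-used anomaly: `discount` is computed, then the function
    # returns without ever referencing it on this path.
    def final_price(price):
        discount = price * 0.1
        return price

A dynamic analysis observes the actual define/use events at run time, so it reports only anomalies that occur on executed paths.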

3.
Automatic test data generation leads to the identification of input values on which a selected path or a selected branch is executed within a program (path-oriented vs. goal-oriented methods). In both cases, several approaches based on constraint solving exist, but in the presence of pointer variables only path-oriented methods have been proposed. Pointers are responsible for the existence of conditional aliasing problems that usually provoke the failure of the goal-oriented test data generation process. In this paper, we propose an overall constraint-based method that exploits the results of an intraprocedural points-to analysis and provides two specific constraint combinators for automatically generating goal-oriented test data. This approach correctly handles multi-level stack-directed pointers that are mainly used in C programs. The method has been fully implemented in the test data generation tool INKA and first experiences in applying it to a variety of existing programs are presented.
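Conditional aliasing is the crux here; a small Python analogue (the paper targets C pointers; this example only illustrates the idea) of a branch whose reachability depends on whether two parameters alias:

    def update(a, b):
        # a and b are lists (reference types); whether the branch below is taken
        # depends on whether the caller passed the same list for both parameters.
        a[0] = 1
        if b[0] == 1:
            return "aliased path"      # reachable only when a and b refer to the same object
        return "non-aliased path"

    print(update([0], [0]))        # non-aliased path
    shared = [0]
    print(update(shared, shared))  # aliased path

A goal-oriented generator aiming at the first return statement must solve for the aliasing relation itself, which is what the points-to analysis and the dedicated constraint combinators support.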

4.
5.
A method of designing data processing programs is described which leads to the expression of a program as a pipeline of simple processes analogous to an engineering assembly line. Each process is formally specified by a translation grammar which defines both its function and its interface with other processes. The resulting processes are highly suited to the engineering of data processing systems using multimicroprocessor hardware.

6.
Computer programs for the analysis of protein fluorescence quenching data
Educational computer programs for the analysis of fluorescence quenching studies of proteins have been developed. The programs implement the classical Stern-Volmer equation and several modified Stern-Volmer equations. The static and dynamic quenching constants, as well as the accessibility of the quencher molecules to the fluorescent groups, are calculated. The experimental data are plotted on a high-resolution graph, and the calculated data or the graphs can be printed out to obtain a hard copy for filing.
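For reference, the classical Stern-Volmer relation referred to in the abstract is

    $F_0 / F = 1 + K_{SV}[Q] = 1 + k_q \tau_0 [Q]$

where $F_0$ and $F$ are the fluorescence intensities without and with quencher, $[Q]$ is the quencher concentration, $K_{SV}$ the Stern-Volmer constant, $k_q$ the bimolecular quenching rate constant and $\tau_0$ the unquenched lifetime. The abstract does not say which modified forms are implemented; a common choice is the Lehrer (modified Stern-Volmer) equation

    $F_0 / (F_0 - F) = 1 / (f_a K_a [Q]) + 1 / f_a$

whose intercept gives the fraction $f_a$ of fluorophores accessible to the quencher.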

7.
Exploring process data
With the growth of computer usage at all levels in the process industries, the volume of available data has also grown enormously, sometimes to levels that render analysis difficult. Most of this data may be characterized as historical in the sense that it was not collected on the basis of experiments designed to test specific statistical hypotheses. Consequently, the resulting datasets are likely to contain unexpected features (e.g. outliers from various sources, unsuspected correlations between variables, etc.). This observation is important for two reasons: first, these data anomalies can completely negate the results obtained by standard analysis procedures, particularly those based on squared-error criteria (a large class that includes many SPC and chemometrics techniques); second, and sometimes more importantly, an understanding of these data anomalies may lead to extremely valuable insights. For both of these reasons, it is important to approach the analysis of large historical datasets with the initial objective of uncovering and understanding their gross structure and character. This paper presents a brief survey of some simple procedures that have been found to be particularly useful at this preliminary stage of analysis.
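The abstract does not list the procedures surveyed; one simple screen of the kind it refers to, a median/MAD outlier flag that avoids the outlier sensitivity of squared-error criteria, might look like this (a sketch; the threshold and sample data are assumptions):

    import numpy as np

    def mad_outliers(x, threshold=3.0):
        """Flag points more than `threshold` robust standard deviations from the median."""
        x = np.asarray(x, dtype=float)
        med = np.median(x)
        sigma = 1.4826 * np.median(np.abs(x - med))  # MAD scaled to match a Gaussian sigma
        return np.abs(x - med) > threshold * sigma

    readings = [20.1, 19.8, 20.3, 55.0, 20.0, 19.9]  # hypothetical sensor data
    print(mad_outliers(readings))                    # only the 55.0 spike is flagged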

8.
GP is a Data Manipulation Language (DML) used in connection with TASKMASTER, a Data Base Management System (DBMS) developed specifically to facilitate Engineering design. Experience with GP is described, and a number of inferences made. In particular, a case is made for the consistent use of stand-alone Data Manipulation Programs (DMPs), which interface between the Data Base and independently-developed external application programs. A DMP would be written in the DML associated with a specific DBMS, and it would in general be of one of two types: either a preprocessor, which extracts selected data from the Data Base and formats it as input to the external program, or a postprocessor, which stores selected output of the external program in the Data Base. DMPs should be operable either in Interpret or Compile mode and should include powerful facilities for User intervention at run-time. The DML should contain a LOOP facility, similar in form and use to, but rather different in concept from, the common DO/PERFORM feature.

9.
User Modeling and User-Adapted Interaction - This paper describes an exploratory investigation into the feasibility of predictive analytics of user behavioral data as a possible aid in developing...

10.
New compact, low-power implementation technologies for processors and imaging arrays can enable a new generation of portable video products. However, software compatibility with large bodies of existing applications written in C prevents more efficient, higher performance data parallel architectures from being used in these embedded products. If this software could be automatically retargeted explicitly for data parallel execution, product designers could incorporate these architectures into embedded products. The key challenge is exposing the parallelism that is inherent in these applications but that is obscured by artifacts imposed by sequential programming languages. This paper presents a recognition-based approach for automatically extracting a data parallel program model from sequential image processing code and retargeting it to data parallel execution mechanisms. The explicitly parallel model presented, called multidimensional data flow (MDDF), captures a model of how operations on data regions (e.g., rows, columns, and tiled blocks) are composed and interact. To extract an MDDF model, a partial recognition technique is used that focuses on identifying array access patterns in loops, transforming only those program elements that hinder parallelization, while leaving the core algorithmic computations intact. The paper presents results of retargeting a set of production programs to a representative data parallel processor array to demonstrate the capacity to extract parallelism using this technique. The retargeted applications yield a potential execution throughput limited only by the number of processing elements, exceeding thousands of instructions per cycle in massively parallel implementations.
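The MDDF model itself is not detailed in the abstract; as a toy illustration (in Python, not the C code the paper targets) of the kind of row-wise access pattern such a recognizer identifies, and of the explicitly data-parallel form it can be retargeted to:

    # Sequential, row-at-a-time form: the per-row independence is implicit in the loop.
    def threshold_sequential(image, t):
        out = []
        for row in image:                 # row-wise access pattern over a 2-D region
            out.append([255 if p > t else 0 for p in row])
        return out

    # Same computation with the row-level data parallelism made explicit.
    def threshold_parallel(image, t):
        def kernel(row):                  # independent per-row kernel
            return [255 if p > t else 0 for p in row]
        return list(map(kernel, image))   # rows may be dispatched to PEs concurrently

    img = [[10, 200, 30], [120, 5, 250]]
    assert threshold_sequential(img, 100) == threshold_parallel(img, 100)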

11.
Nowadays, high-performance applications exploit multi-level architectures, due to the presence of hardware accelerators such as GPUs inside each computing node. Data transfers occur at two different levels: inside the computing node, between the CPU and the accelerators, and between computing nodes. We consider the case where the intra-node parallelism is handled with HMPP compiler directives and message-passing programming with MPI is used for the inter-node communications. Programming such a heterogeneous architecture in this way is costly and error-prone. In this paper, we specifically demonstrate the transformation of HMPP programs designed to exploit a single computing node equipped with a GPU into heterogeneous HMPP + MPI programs exploiting multiple GPUs located on different computing nodes.

12.
Ben Shneiderman, Software, 1976, 6(4): 555-567
The proliferation of papers on programming methodology focuses on the program development process but only hints at the form of the final program. This paper distinguishes between the development process and the program product and presents a catalogue of possible program organizations and data structures, with examples drawn from the published literature. Methods for sharing data among modules and a classification scheme for programs and data structures are presented.

13.
14.
15.
Conjoint analysis is used to understand how consumers develop preferences for products or services, which typically encompass multiple attributes and multiple attribute levels. Conjoint analysis has been one of the popular tools for multi-attribute decision-making problems concerning products and services for consumers over the last 30 years. It has also been used for market segmentation and optimal product positioning. In spite of its popularity and commercial success, a major weakness of conjoint analysis has been pointed out: respondents participating in a conjoint experiment have to evaluate a large number of hypothetical product profiles. To reduce the number of hypothetical products, this paper proposes a systematic method, called data fusion, and explores the usability of various data fusion techniques. The paper evaluates traditional (correlation-based) data fusion, hierarchical Bayesian-based data fusion, and neural network-based data fusion.
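None of the paper's fusion techniques are reproduced here; for background, part-worth utilities in a ratings-based conjoint task are often estimated with a simple dummy-coded least-squares fit, sketched below with hypothetical attributes, levels, and ratings:

    import numpy as np

    # Hypothetical study: two attributes (brand A/B, price low/high), ratings on a 1-9 scale.
    profiles = [("A", "low"), ("A", "high"), ("B", "low"), ("B", "high")]
    ratings = np.array([8.0, 5.0, 6.0, 2.0])          # one respondent's evaluations

    # Dummy-code each attribute against a reference level (brand B, high price).
    X = np.array([[1.0, b == "A", p == "low"] for b, p in profiles])

    # Least squares: intercept plus part-worths for "brand A" and "low price".
    (intercept, pw_brand_a, pw_price_low), *_ = np.linalg.lstsq(X, ratings, rcond=None)
    print(f"brand A: {pw_brand_a:.2f}, low price: {pw_price_low:.2f}")   # 2.50, 3.50

Every rated profile adds one row to the design matrix, which is why reducing the number of profiles a respondent must evaluate (the paper's goal) matters.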

16.
A package of FORTRAN IV programs is used to process routine analyses of silicates, sulfides, oxides, and carbonates, following an empirical correction procedure. This approach has two important advantages: (1) computing costs are minimal, and (2) persons who lack experience with the probe or computer have been able to use the package effectively. Various options have been built into the package to give it the flexibility needed by a variety of users with differing objectives. The options include calculations of mole and weight-percent oxides or end members, construction of triangular diagrams, and use of a differing background correction for inhomogeneous minerals.
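As a small illustration of the mole-percent option (not the paper's FORTRAN code; the oxide set, molar masses, and sample analysis below are assumptions), converting weight-percent oxides to mole percent amounts to dividing by molar mass and renormalizing:

    # Approximate molar masses in g/mol (illustrative values, not from the paper).
    MOLAR_MASS = {"SiO2": 60.08, "Al2O3": 101.96, "FeO": 71.84, "MgO": 40.30, "CaO": 56.08}

    def mole_percent(wt_percent):
        """Convert weight-percent oxides to mole-percent oxides."""
        moles = {ox: w / MOLAR_MASS[ox] for ox, w in wt_percent.items()}
        total = sum(moles.values())
        return {ox: 100.0 * m / total for ox, m in moles.items()}

    # Hypothetical olivine-like analysis (weight percent):
    print(mole_percent({"SiO2": 40.0, "MgO": 50.0, "FeO": 10.0}))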

17.
18.
With the increased use of programmable logic controllers (PLCs) in implementing critical systems, quality assurance has become an important issue. Regulation requires that structural testing be performed on safety-critical systems, with coverage criteria identified and their accomplishment measured. Classical coverage criteria, based on control flow graphs, are inadequate when applied to function block diagram (FBD), a data-flow PLC programming language widely used in industry. We propose three structural coverage criteria for FBD programs, analyze the relationships among them, and demonstrate their effectiveness using a real-world reactor protection system. Using test cases that had been manually prepared by FBD testing professionals, our technique found many aspects of the FBD logic that were not tested sufficiently. Domain experts, who found the approach highly intuitive, also found the technique effective.
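The paper's three FBD-specific criteria are not spelled out in the abstract; the sketch below (a hypothetical two-block logic, not the reactor protection system) only illustrates the general idea of measuring how thoroughly a test suite exercises block outputs:

    # Hypothetical FBD: OUT = AND(OR(a, b), NOT(c)), modeled as one function per block.
    blocks = {
        "OR1":  lambda s: s["a"] or s["b"],
        "NOT1": lambda s: not s["c"],
        "AND1": lambda s: (s["a"] or s["b"]) and not s["c"],
    }

    def output_toggle_coverage(test_cases):
        """Fraction of block outputs driven to both True and False by the test suite."""
        seen = {name: set() for name in blocks}
        for case in test_cases:
            for name, block in blocks.items():
                seen[name].add(block(case))
        return sum(outs == {True, False} for outs in seen.values()) / len(blocks)

    tests = [
        {"a": False, "b": False, "c": False},
        {"a": True,  "b": False, "c": False},
        {"a": True,  "b": False, "c": True},
    ]
    print(output_toggle_coverage(tests))   # 1.0: every block output takes both values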

19.
Two computer programs for the IBM personal computer are described for rapid and accurate entry of DNA sequence data. The DNA sequence files produced can be used directly by the DNA sequence manipulation programs by R. Staden (the DataBase system), the University of Wisconsin Genetics Computer Group, DNASTAR, or D. Mount. The first program, DIGISEQ, utilizes a sonic digitizer for semi-automation of sequence entry. To enter the DNA sequence each band of a gel reading is touched by the stylus of the sonic digitizer. DIGISEQ corrects for both changes in lane width and lane curvature. The algorithm is extremely efficient and rarely requires re-entering the centers of the lanes. The second program, TYPESEQ, uses only the keyboard for input. The keyboard is reconfigured to place nucleotides and ambiguity codes under the fingers of one hand, corresponding to the order of the nucleotides on the gel defined by the user. Both programs produce individual tones for each nucleotide, and certain ambiguity codes. This verifies input of the correct nucleotide or ambiguity code, and thus eliminates the need to visually check the screen display during sequence entry.
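As a toy sketch of the TYPESEQ idea (the key choices, lane order, and ambiguity code shown are assumptions for illustration, not the program's actual layout), remapping a one-hand group of keys to bases in the user-defined gel lane order might look like:

    # Assumed gel lane order G, A, T, C mapped onto four keys under one hand;
    # 'n' is used here for the IUPAC "any base" ambiguity code.
    LANE_ORDER = "GATC"
    KEY_MAP = dict(zip("jkl;", LANE_ORDER))
    KEY_MAP["n"] = "N"

    def keys_to_sequence(keystrokes):
        """Translate raw keystrokes into a DNA sequence string, ignoring unknown keys."""
        return "".join(KEY_MAP.get(k, "") for k in keystrokes)

    print(keys_to_sequence("jjk;ln"))   # -> GGACTN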

20.
Recent developments in grid and cloud computing technologies have enhanced the performance and scale of storage media, and data management and backup are becoming increasingly important in these environments. Backup systems constitute an important component of operating system security. However, it is difficult to recover backup data from an environment where the operating system does not work because the storage hardware has been damaged. This study analyzes the Volume Shadow Copy Service (VSS) used by the Windows operating system. Windows 8 also targets mobile environments; hence, this analysis could be used for data recovery from damaged mobile devices. VSS is a backup infrastructure provided by Windows that creates point-in-time copies of a volume (known as volume shadow copies). Windows Vista and later versions use this service instead of the restore point feature used in earlier versions of the operating system. The restore point feature logically copied and stored specified files, whereas VSS copies and stores only data that have changed in the volume. On a live system, volume shadow copies can be checked and recovered using built-in system commands. However, it is difficult to analyze the files stored in the volume shadow copies of a nonfunctioning system, such as a disk image, because only the changed data are stored. Therefore, this study analyzes the structure of the logically stored Volume Shadow Copy (VSC) files. The analysis confirms the locations of the changed data and of the original copies by identifying the structure that maps file data streams to file system metadata. On the basis of this research, we propose a practical approach to developing tools for recovering snapshot data stored within VSC files. We also present the results of a successful performance test.
