20 similar documents found.
1.
2.
The advent of high-throughput sequencing has profoundly changed how life-science research is conducted, while also producing massive volumes of sequencing data. Mapping these reads quickly and accurately to a reference genome is a key step in many biomedical studies. To address this problem, researchers have developed more than 70 alignment tools for high-throughput sequencing reads since 2007. In this article, we systematically review the strategies and algorithms these aligners employ and compare them from their origins through their development, to help bioinformaticians better understand and apply these tools.
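Many of the surveyed aligners share a seed-and-extend core. As a rough illustration (not any particular tool's algorithm; the sequences and names are invented), this Python sketch indexes a reference by k-mers, looks up a read's seed, and counts mismatches at each candidate position:

```python
# Minimal seed-and-extend sketch: k-mer index over the reference,
# seed lookup for the read, then a naive mismatch count per candidate.
ref = "ACGTACGGACGTTACG"
K = 4
index = {}
for i in range(len(ref) - K + 1):
    index.setdefault(ref[i:i+K], []).append(i)

read = "CGGACGTT"
seed = read[:K]
for pos in index.get(seed, []):
    cand = ref[pos:pos + len(read)]
    mismatches = sum(a != b for a, b in zip(cand, read))
    print(f"candidate at {pos}: {cand} ({mismatches} mismatches)")
```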
3.
This paper presents a methodology for using simulated execution to assist a theorem prover in verifying safety properties of distributed systems. Execution-based techniques such as testing can increase confidence in an implementation, provide intuition about behavior, and detect simple errors quickly. They cannot by themselves demonstrate correctness. However, they can aid theorem provers by suggesting necessary lemmas and providing tactics to structure proofs. This paper describes the use of these techniques in a machine-checked proof of correctness of the Paxos algorithm for distributed consensus.
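As a hedged Python sketch of the execution-based side (the paper's actual proof is machine-checked in a theorem prover; the classes and trial loop below are invented for illustration), one can repeatedly simulate a toy single-decree Paxos and assert the agreement safety property, the kind of cheap check that exposes simple errors and suggests invariants:

```python
import random

class Acceptor:
    def __init__(self):
        self.promised = -1      # highest ballot promised
        self.accepted = None    # (ballot, value) last accepted

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """One proposer round; returns the value chosen, or None."""
    quorum = len(acceptors) // 2 + 1
    replies = [a.prepare(ballot) for a in random.sample(acceptors, quorum)]
    if sum(ok for ok, _ in replies) < quorum:
        return None
    prior = [acc for ok, acc in replies if ok and acc]
    if prior:
        value = max(prior)[1]   # adopt value from the highest accepted ballot
    votes = [a.accept(ballot, value) for a in random.sample(acceptors, quorum)]
    return value if sum(votes) >= quorum else None

def run_trial(rounds=20):
    acceptors = [Acceptor() for _ in range(5)]
    chosen = set()
    for ballot in range(rounds):
        v = propose(acceptors, ballot, random.choice("AB"))
        if v is not None:
            chosen.add(v)
    assert len(chosen) <= 1, f"agreement violated: {chosen}"

for _ in range(1000):   # many randomized runs, checking safety on each
    run_trial()
```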
4.
Daigu Zhang, Xiaofeng Liao, Meikang Qiu, Jingtong Hu, Edwin H.-M. Sha 《Journal of Systems Architecture》2012,58(10):426-438
In recent years, smart cards have found ever wider application. They are used in many areas and play a vital role in many security systems, so their security has become increasingly important. Smart cards are vulnerable to several classes of attack, such as power analysis attacks. In this paper, we propose four novel algorithms that defend against power analysis attacks via randomized execution of programs. The experimental results confirm that the new approaches even out the power distribution of applications on smart cards, making them less susceptible to power analysis attacks. Our approaches are general and not limited to a particular application.
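A minimal sketch of the general idea, not the paper's four algorithms: when the byte-wise operations of a cipher round are mutually independent, executing them in a random order and interleaving occasional dummy operations decorrelates the power trace from fixed program points. Everything below (the stand-in S-box, the dummy-op probability) is illustrative:

```python
import random

SBOX = list(range(256))  # stand-in substitution table, for illustration only

def randomized_round(state):
    """Apply SBOX to each byte of state in a freshly randomized order."""
    order = list(range(len(state)))
    random.shuffle(order)              # random execution order on every run
    out = [0] * len(state)
    for i in order:
        if random.random() < 0.25:     # occasional dummy lookup adds noise
            _ = SBOX[random.randrange(256)]
        out[i] = SBOX[state[i]]        # the real, order-independent operation
    return out

print(randomized_round([3, 141, 59, 26]))
```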
5.
6.
《Interacting with computers》2007,19(3):314-329
Pupillary response is a valid indicator of mental workload and is increasingly being leveraged to identify lower-cost moments for interruption, evaluate complex interfaces, and develop further understanding of psychological processes. Existing tools are not sufficient for analyzing this type of data, as it typically needs to be analyzed in relation to the corresponding task’s execution. To address this emerging need, we have developed a new interactive analysis tool, TAPRAV. The primary components of the tool include: (i) a visualization of pupillary response aligned to the corresponding model of task execution, useful for exploring relationships between these two data sources; (ii) an interactive overview+detail metaphor, enabling rapid inspection of details while maintaining global context; (iii) synchronized playback of the video of the user’s screen interaction, providing awareness of the state of the task; and (iv) interaction supporting discovery-driven analysis. Results from a user study showed that users are able to interact efficiently with the tool to analyze relationships between pupillary response and task execution. The primary contribution of our tool is that it demonstrates an effective visualization and interaction design for rapidly exploring pupillary response in relation to models of task execution, thereby reducing the analysis effort.
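As a rough illustration of the underlying data alignment (the segment names and sample layout are assumptions, not TAPRAV's data format), this sketch averages pupil diameter over the intervals of a simple task model:

```python
# Align pupil samples (time s, diameter mm) to task-model segments
# and report the mean diameter per segment.
task_segments = [("read", 0.0, 4.0), ("search", 4.0, 9.0), ("enter", 9.0, 12.0)]
samples = [(t / 10.0, 3.0 + 0.1 * (t % 7)) for t in range(120)]  # synthetic

for name, start, end in task_segments:
    vals = [d for t, d in samples if start <= t < end]
    print(f"{name:>6}: mean pupil diameter = {sum(vals) / len(vals):.2f} mm")
```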
7.
Nadia Nedjah, Luneque Silva Junior, Luiza de Macedo Mourelle 《Expert systems with applications》2013,40(16):6661-6673
Networks-on-Chip (NoC) are an interesting option for the design of communication infrastructures in embedded systems, providing a scalable structure and balanced communication between cores. Parallel applications that take advantage of NoC architectures are usually communication-intensive, so a great number of data packets are transmitted through the network simultaneously. To avoid congestion delays that deteriorate the execution time of the implemented applications, an efficient routing strategy must be designed carefully. In this paper, the ant colony optimization paradigm is explored to find and optimize routes in a mesh-based NoC. The proposed routing algorithms are simple yet efficient. The routing optimization is driven by minimizing the total latency of packet transmission between the tasks that compose the application. The performance evaluation presented is threefold: first, the impact of well-known synthetic traffic patterns is assessed; second, randomly generated applications are mapped onto the NoC infrastructure and synthetic communication traffic following known patterns is used to simulate real situations; third, sixteen real-world applications from the E3S suite and one specific digital image processing application are mapped and their execution times evaluated. In all cases, the obtained results are compared to those of known general-purpose algorithms for deadlock-free routing. The comparison confirms the effectiveness and superiority of the ant-colony-inspired routing.
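As a hedged illustration of the paradigm (the parameters, mesh size, and hop-count latency proxy below are assumptions, not the paper's algorithms), this Python sketch lets ants search routes on a small mesh, reinforcing lower-latency paths with pheromone:

```python
import random

W = H = 4
ALPHA, RHO, ANTS, ITERS = 1.0, 0.1, 20, 50
pher = {}  # pheromone level per directed link (node, next_node)

def neighbors(node):
    x, y = node
    return [(nx, ny) for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
            if 0 <= nx < W and 0 <= ny < H]

def walk(src, dst):
    """One ant's route; next hop chosen with pheromone-weighted probability."""
    path, node = [src], src
    while node != dst and len(path) < 4 * (W + H):
        opts = [n for n in neighbors(node) if n not in path] or neighbors(node)
        weights = [pher.get((node, n), 1.0) ** ALPHA for n in opts]
        node = random.choices(opts, weights)[0]
        path.append(node)
    return path

best = None
for _ in range(ITERS):
    for _ in range(ANTS):
        path = walk((0, 0), (3, 3))
        if path[-1] != (3, 3):
            continue  # ant got lost within the hop budget
        if best is None or len(path) < len(best):
            best = path
        latency = len(path) - 1  # hop count as a simple latency proxy
        for a, b in zip(path, path[1:]):
            # evaporate, then deposit more pheromone on lower-latency routes
            pher[(a, b)] = (1 - RHO) * pher.get((a, b), 1.0) + 1.0 / latency
print("best route found:", best)
```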
8.
Ogier Maitre, Frédéric Krüger, Stéphane Querry, Nicolas Lachiche, Pierre Collet 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2012,16(2):261-279
EASEA is a framework designed to help non-expert programmers optimize their problems by evolutionary computation. It can generate code targeted at standard CPU architectures, GPGPU-equipped machines, and distributed-memory clusters. In this paper, EASEA is presented through its underlying algorithms and some example problems. Achievable speedups on different NVIDIA GPGPU cards are also shown for different families of optimization algorithms.
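For readers unfamiliar with the paradigm, here is a toy (mu+lambda) evolution strategy in Python; this is only a sketch of the algorithm family, as EASEA itself generates optimized target-specific code:

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2       # toy objective: maximum at x = 3

mu, lam, sigma = 5, 20, 0.5      # parents, offspring, mutation step
pop = [random.uniform(-10, 10) for _ in range(mu)]
for gen in range(100):
    # mutate randomly chosen parents, then keep the mu fittest individuals
    children = [random.choice(pop) + random.gauss(0, sigma) for _ in range(lam)]
    pop = sorted(pop + children, key=fitness, reverse=True)[:mu]
print("best x ~", round(pop[0], 3))
```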
9.
10.
In a heterogeneous distributed computing system, machine and network failures are inevitable and can have an adverse effect on applications executing on the system. To reduce the effect of failures on an application executing on a failure-prone system, matching and scheduling algorithms must be devised that minimize not only the execution time but also the probability of failure of the application. However, because these requirements conflict, it is not possible to minimize both objectives at the same time. Thus, the goal of this paper is to develop matching and scheduling algorithms that account for both the execution time and the reliability of the application. This goal is achieved by modifying an existing matching and scheduling algorithm. The reliability of resources is taken into account using an incremental cost function proposed in this paper, and the new algorithm is referred to as the reliable dynamic level scheduling algorithm. The incremental cost function can be defined based on one of the three cost functions developed here. These cost functions are unique in the sense that they are not restricted to tree-based networks or a specific matching and scheduling algorithm. The simulation results confirm that the proposed incremental cost function can be incorporated into matching and scheduling algorithms to produce schedules in which the effect of machine and network failures on the execution of the application is reduced, while the execution time of the application is minimized as well.
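A minimal sketch of the kind of trade-off such an incremental cost function captures (the Poisson failure model, the weight, and the machine parameters are assumptions, not the paper's three cost functions):

```python
import math

machines = [
    {"name": "m1", "ready": 2.0, "speed": 1.0, "lam": 0.002},  # fast, reliable
    {"name": "m2", "ready": 0.0, "speed": 0.5, "lam": 0.020},  # free, failure-prone
]

def score(m, task_work, weight=50.0):
    """Finish time plus an incremental cost that grows with failure probability."""
    exec_time = task_work / m["speed"]
    finish = m["ready"] + exec_time
    p_fail = 1.0 - math.exp(-m["lam"] * exec_time)  # failure during the task
    return finish + weight * p_fail

task_work = 4.0
for m in machines:
    print(m["name"], "score:", round(score(m, task_work), 3))
best = min(machines, key=lambda m: score(m, task_work))
print("assign task to:", best["name"])  # m1 wins despite its later ready time
```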
11.
A. Staphylopatis 《Acta Informatica》1982,17(3):311-325
Loosely coupled multiprocessor systems seem to offer an interesting alternative for the solution of large numerical problems. It is in the context of such an investigation that we treat a special parallel-processing case concerning the solution of a numerical problem on two independent processors, including simultaneous input-output operations. We present a short discussion of the underlying numerical algorithm, a modelling approach to the parallel computing system, and a comparison of the theoretically obtained results with simulation and experimental results. The experimental setting is the XANTHOS multi-microprocessor system implemented at the Laboratoire de Recherche en Informatique, Université de Paris-Sud. This work was supported by a DGRST Research Scholarship at Université Paris-Sud.
12.
13.
The author uses treemaps to present the state of work in grid and other distributed computing environments, providing users with a view of overall performance and allowing performance analysts to explore the data in a creative manner.
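As a rough illustration (not the paper's tool), here is a one-level slice-and-dice treemap layout in Python that sizes rectangles by per-node load:

```python
def slice_layout(items, x, y, w, h):
    """Lay items out left-to-right, widths proportional to value."""
    total = sum(v for _, v in items)
    rects = []
    for name, v in items:
        rw = w * v / total
        rects.append((name, x, y, rw, h))
        x += rw
    return rects

load = [("node-a", 40), ("node-b", 25), ("node-c", 35)]  # CPU load per node
for name, rx, ry, rw, rh in slice_layout(load, 0, 0, 100, 60):
    print(f"{name}: origin ({rx:.0f},{ry:.0f}), size {rw:.0f}x{rh:.0f}")
```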
14.
The high efficiency video coding (HEVC) standard delivers improved video compression efficiency at the cost of high performance requirements. To address these requirements, different approaches such as algorithmic optimization, parallelization, and hardware acceleration can be used, leading to a complex design space. To find an efficient solution, early design verification and performance evaluation are crucial. The prevailing methodology here is simulation of the complex HW/SW architecture. For heterogeneous designs, different simulation models have different performance-evaluation capabilities, making a combined HW/SW co-analysis of the entire system a cumbersome task. To facilitate this co-analysis, we propose a non-intrusive instrumentation methodology for simulation models which automatically adapts to the model under observation. With the help of this instrumentation methodology, we analyze and explore different design aspects of a SystemC-based heterogeneous multi-core model of an HEVC intra encoder. In the course of this HW/SW co-analysis, various aspects of the parallelization and hardware acceleration of the video coding algorithms are presented and further improved. Due to its cycle-accurate nature, the developed model is well suited to facilitate various performance evaluations and to drive HW/SW co-optimizations of the explored system, as discussed in this paper.
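The paper instruments SystemC models; purely as an analogy in Python, the sketch below wraps whatever public methods a model object exposes so that call counts and time are recorded without modifying its source, which is the non-intrusive, self-adapting idea (all names here are invented):

```python
import functools
import time

stats = {}  # method name -> (call count, cumulative seconds)

def instrument(obj):
    """Wrap every public method of obj to record calls and elapsed time."""
    for name in dir(obj):
        meth = getattr(obj, name)
        if name.startswith("_") or not callable(meth):
            continue
        @functools.wraps(meth)
        def wrapper(*args, __meth=meth, __name=name, **kwargs):
            t0 = time.perf_counter()
            try:
                return __meth(*args, **kwargs)
            finally:
                n, t = stats.get(__name, (0, 0.0))
                stats[__name] = (n + 1, t + time.perf_counter() - t0)
        setattr(obj, name, wrapper)  # shadow the method on the instance
    return obj

class IntraPredictor:  # stand-in for one component of an encoder model
    def predict(self, block):
        return sum(block) / len(block)

model = instrument(IntraPredictor())
model.predict([1, 2, 3, 4])
print(stats)  # e.g. {'predict': (1, 2.1e-06)}
```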
15.
16.
Data fusion is the process of integrating multiple sources of information such that their combination yields better results than if the data sources are used individually. This paper applies the idea of data fusion to feature location, the process of identifying the source code that implements specific functionality in software. A data fusion model for feature location is presented which defines new feature location techniques based on combining information from textual, dynamic, and web-mining or link-analysis algorithms applied to software. A novel contribution of the proposed model is the use of advanced web-mining algorithms to analyze execution information during feature location. The results of an extensive evaluation on three Java systems indicate that the new feature location techniques based on web mining improve the effectiveness of existing approaches by as much as 87%.
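As a hedged sketch of the fusion idea (the call graph, similarity scores, and the specific use of PageRank here are illustrative stand-ins for the paper's textual, dynamic, and web-mining analyses):

```python
calls = {"main": ["parse", "draw"], "parse": ["lex"], "draw": ["render"],
         "lex": [], "render": ["draw"]}   # call graph from an execution trace
text_sim = {"main": 0.1, "parse": 0.7, "lex": 0.6, "draw": 0.2, "render": 0.3}

def pagerank(graph, d=0.85, iters=50):
    """Plain PageRank; rank lost at dangling nodes is ignored for brevity."""
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in graph}
        for v, outs in graph.items():
            for w in outs:
                new[w] += d * rank[v] / len(outs)
        rank = new
    return rank

pr = pagerank(calls)
fused = {m: text_sim[m] * pr[m] for m in calls}  # fuse textual x link score
for m in sorted(fused, key=fused.get, reverse=True):
    print(f"{m:>7}: fused relevance = {fused[m]:.4f}")
```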
17.
Paul Valckenaers, Hendrik Van Brussel, Paul Verstraete, Bart Saint Germain, Hadeli 《Journal of Manufacturing Systems》2007
This paper discusses a manufacturing execution system (MES) that prefers and attempts to follow a given schedule. The MES performs this task in an autonomic manner, filling in missing details, providing alternatives for infeasible assignments, handling auxiliary tasks, and so on. The paper presents the research challenge, describes the MES design, and gives experimental results. The research contribution resides in the novel architecture, in which the MES cooperates with schedulers without inheriting the limitations of the world model employed by the scheduler. The research constitutes a first development, and a list of topics for further research is given.
18.
We study an access trace containing a sample of Wikipedia’s traffic over a 107-day period, aiming to identify appropriate replication and distribution strategies in a fully decentralized hosting environment. We perform a global analysis of the whole trace and a detailed analysis of the requests directed to the English edition of Wikipedia. In our study, we classify client requests and examine aspects such as the number of read and save operations, significant load variations, and requests for nonexistent pages. We also review proposed decentralized wiki architectures and discuss how they would handle Wikipedia’s workload. We conclude that decentralized architectures must focus on techniques for handling read operations efficiently while maintaining consistency and dealing with issues typical of decentralized systems, such as churn, unbalanced loads, and malicious participating nodes.
19.
James R. Larus 《LISP and Symbolic Computation》1991,4(1):29-99
Curare, the program restructurer described in this paper, automatically transforms a sequential Lisp program into an equivalent concurrent program that runs on a multiprocessor. Data dependences constrain the program's concurrent execution because, in general, two conflicting statements cannot execute in a different order without affecting the program's result. Not all dependences are essential to produce the program's result. Curare attempts to transform the program so it computes its result with fewer conflicts; an optimized program will execute with less synchronization and more concurrency.
Curare then examines loops in a program to find those that are unconstrained or lightly constrained by dependences. By necessity, Curare treats recursive functions as loops and does not limit itself to explicit program loops. Recursive functions offer several advantages over explicit loops, since they provide a convenient framework for inserting locks and handling the dynamic behavior of symbolic programs. Loops that are suitable for concurrent execution are changed to execute on a set of concurrent server processes. These servers execute single loop iterations and therefore need to be extremely inexpensive to invoke. Restructured programs execute significantly faster than the original sequential programs. This improvement is large enough to attract programmers to a multiprocessor, particularly since it requires little effort on their part. This research was funded by DARPA contract numbers N00039-85-C-0269 (SPUR) and N00039-84-C-0089 (XCS) and by an NSF Presidential Young Investigator award to Paul N. Hilfinger. Additional funding came from the California MICRO program (in conjunction with Texas Instruments, Xerox, Honeywell, and Phillips/Signetics).
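A minimal sketch of the read/write conflict test that underlies such dependence constraints (the statement encoding is illustrative, not Curare's analysis):

```python
def conflicts(stmt_a, stmt_b):
    """Two statements conflict if one writes what the other reads or writes."""
    ra, wa = stmt_a
    rb, wb = stmt_b
    return bool(wa & (rb | wb) or wb & ra)

s1 = ({"x"}, {"y"})   # reads x, writes y
s2 = ({"y"}, {"z"})   # reads y, writes z  -> flow dependence on y
s3 = ({"x"}, {"w"})   # independent of s1 -> may run concurrently
print(conflicts(s1, s2), conflicts(s1, s3))  # True False
```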
20.
Execution profiles are important in analyzing the performance of computer programs on a given computer system. However, accurate and complete profiles are difficult to obtain for programs that follow the client-server model of computing, as in the popular X Window System. In X Window applications, considerable computation is invoked at the display server, and this computation is an important part of the overall execution profile. The profiler presented in this paper generates meaningful profiles for X Window applications by estimating the time spent servicing messages in the display server. The central idea is to analyze a protocol-level trace of the interaction between the application and the display server, and thereby construct an execution profile from the trace and a set of metrics about the target display server. Experience using the profiler to examine bottlenecks is presented.
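A minimal sketch of the central idea (the request names and per-message costs are invented): join a protocol-level trace with per-message server cost metrics to estimate where server time goes:

```python
# Measured per-request server costs for the target display server (metrics).
server_cost_us = {"PolyLine": 40, "PutImage": 900, "CopyArea": 120}

# A protocol-level trace of requests sent by the application.
trace = ["PolyLine", "PutImage", "PolyLine", "CopyArea", "PutImage"]

profile = {}
for req in trace:
    n, t = profile.get(req, (0, 0))
    profile[req] = (n + 1, t + server_cost_us[req])

# Report the estimated server-side profile, largest contributor first.
for req, (count, total) in sorted(profile.items(), key=lambda kv: -kv[1][1]):
    print(f"{req:>8}: {count} requests, ~{total} us in server")
```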