Similar Documents (20 results)
1.
Homology detection is a fundamental step in sequence analysis. In recent years, pairwise statistical significance has emerged as a promising alternative to database statistical significance for homology detection. Although more accurate, it is currently far more time-consuming, because it involves generating hundreds of alignment scores to construct the empirical score distribution. This paper presents a parallel algorithm for pairwise statistical significance estimation, called MPIPairwiseStatSig, implemented in C using the MPI library. We further apply the parallelization technique to estimate non-conservative pairwise statistical significance using standard, sequence-specific, and position-specific substitution matrices, which has previously demonstrated superior sequence comparison accuracy compared with original pairwise statistical significance. Distributing the most compute-intensive portions of the estimation procedure across multiple processors is shown to yield near-linear speed-ups for the application. The MPIPairwiseStatSig program is available for free academic use at www.cs.iastate.edu/~ankitag/MPIPairwiseStatSig.html. Copyright © 2011 John Wiley & Sons, Ltd.
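The distribution step lends itself to a simple block decomposition: each MPI rank computes a contiguous slice of the permutation alignment scores, and the empirical distribution is assembled from the gathered slices. The sketch below (plain Python with a hypothetical function name, not the MPIPairwiseStatSig source) shows only the index-partitioning logic such a scheme assumes:

```python
def block_range(n_tasks, n_ranks, rank):
    """Contiguous block of task indices assigned to `rank` (0-based).

    Distributes n_tasks as evenly as possible: the first (n_tasks % n_ranks)
    ranks each receive one extra task.
    """
    base, extra = divmod(n_tasks, n_ranks)
    start = rank * base + min(rank, extra)
    size = base + (1 if rank < extra else 0)
    return range(start, start + size)

# Example: 1000 alignment-score computations over 8 ranks.
# The union of all blocks covers every index exactly once.
ranks = [block_range(1000, 8, r) for r in range(8)]
assert sum(len(r) for r in ranks) == 1000
```

Each rank would then score its slice independently and the root rank would gather the scores, which is where the near-linear speed-up comes from.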

2.
3.
An optimized parallel algorithm is proposed to address the complicated backward substitution that arises in cyclic reduction when solving tridiagonal linear systems. Adopting a hybrid parallel model, the algorithm combines the cyclic reduction method with the partition method, giving a simpler backward substitution on parallel computers than cyclic reduction alone. In this paper, the operation count and execution time are measured to evaluate and compare these methods. Based on these measurements, the hybrid algorithm, implemented with multithreading, achieves better efficiency than the other parallel methods, that is, the cyclic reduction and partition methods. In particular, it has the lowest scalar operation count and the shortest execution time on a multi-core computer once the system size exceeds a dimension threshold. The hybrid parallel algorithm improves on the performance of the cyclic reduction and partition methods by 19.2% and 13.2%, respectively. In addition, a comparison of single-iteration and multi-iteration hybrid parallel algorithms shows that increasing the number of cyclic reduction iteration steps has little effect on the hybrid algorithm's performance. Copyright © 2015 John Wiley & Sons, Ltd.
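Within each partition, the reduced system can be finished with a serial tridiagonal solve; the classic Thomas algorithm is one standard choice for that step. The following is a sketch of that serial building block only (not the paper's hybrid code):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm.

    a = sub-diagonal, b = main diagonal, c = super-diagonal, d = right-hand
    side; a[0] and c[-1] are unused. Runs in O(n) and assumes no zero pivots
    (e.g., a diagonally dominant system).
    """
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The hybrid scheme's point is precisely that, after cyclic reduction and partitioning, this backward-substitution phase stays short and local to each thread.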

4.
Identifying a nonlinear radial basis function-based state-dependent autoregressive (RBF-AR) time series model is the basis for solving the corresponding prediction and control problems. This paper studies recursive parameter estimation algorithms for the RBF-AR model. Given the difficulty of the nonlinear optimization problem that arises in estimating the RBF-AR model, an overall forgetting gradient algorithm is derived based on negative gradient search, and a numerical method with a forgetting factor is provided to determine the optimal convergence factor. To improve parameter estimation accuracy, multi-innovation identification theory is applied to develop an overall multi-innovation forgetting gradient (O-MIFG) algorithm. Simulation results indicate that the estimation model based on the O-MIFG algorithm captures the dynamics of the RBF-AR model very well.
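To make the forgetting-gradient idea concrete, here is a minimal, hypothetical sketch for the scalar AR(1) case: a gradient step normalized by a forgetting-factor-weighted information accumulator. It illustrates the flavor of the recursion only; the O-MIFG algorithm for the full RBF-AR model is considerably more involved:

```python
def forgetting_gradient_ar1(y, lam=0.95, theta0=0.0):
    """Recursive gradient estimate of theta in y[t] = theta * y[t-1] + noise.

    lam is the forgetting factor weighting the scalar information
    accumulator r, so old data is gradually discounted.
    """
    theta, r = theta0, 1e-6
    for t in range(1, len(y)):
        phi = y[t - 1]               # regressor
        e = y[t] - theta * phi       # innovation (prediction error)
        r = lam * r + phi * phi      # discounted information accumulator
        theta += phi * e / r         # normalized gradient step
    return theta
```

The multi-innovation extension would replace the scalar innovation `e` with a vector of the most recent innovations at each step.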

5.
Understanding the behavior of large-scale distributed systems is generally extremely difficult, as it requires observing a very large number of components over long periods. Most analysis tools for distributed systems gather basic information such as individual processor or network utilization. Although scalable thanks to the data reduction techniques applied before analysis, these tools are often insufficient to detect or fully understand anomalies in the dynamic behavior of resource utilization and their influence on application performance. In this paper, we propose a methodology for detecting resource usage anomalies in large-scale distributed systems. It relies on four functionalities: characterized trace collection, multi-scale data aggregation, specifically tailored user interaction techniques, and visualization techniques. We show the efficiency of this approach through the analysis of simulations of the Berkeley Open Infrastructure for Network Computing (BOINC) volunteer computing architecture. Three scenarios are analyzed: the resource sharing mechanism, resource usage when considering response time instead of throughput, and the effect of input file size on the BOINC architecture. The results show that our methodology makes it easy to identify resource usage anomalies such as unfair resource sharing, contention, moving network bottlenecks, and harmful short-term resource sharing. Copyright © 2011 John Wiley & Sons, Ltd.
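Multi-scale aggregation can be illustrated with a toy utilization trace: the same data averaged at a coarse scale hides an anomaly that a finer scale exposes. A minimal sketch with assumed window-averaging semantics (not the paper's aggregation operator):

```python
def aggregate(series, window):
    """Average a utilization trace over fixed windows (one aggregation scale).

    A coarse scale smooths noise; finer scales expose short-lived anomalies.
    """
    return [sum(series[i:i + window]) / len(series[i:i + window])
            for i in range(0, len(series), window)]

trace = [1.0] * 8 + [0.0] * 8     # sudden utilization drop mid-trace
coarse = aggregate(trace, 16)     # one bucket: the drop is hidden (avg 0.5)
fine = aggregate(trace, 4)        # the drop is visible between buckets 2 and 3
```

An interactive tool would let the analyst move between such scales on demand, which is the role of the user-interaction functionality described above.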

6.
An increasing number of enterprise applications are IT-intensive but infrequently used. Consequently, organizations either host an oversized IT infrastructure or cannot realize the benefits of new applications. The large-scale computing infrastructures of clouds and grids, which allow resources to be shared, offer a solution; a major challenge is developing mechanisms that share IT resources efficiently. Market mechanisms are promising, but research on scalable market mechanisms is lacking. We extend the multi-attribute combinatorial exchange mechanism with greedy heuristics to address the scalability challenge. The evaluation shows a trade-off between efficiency and scalability. There is no statistical evidence of an effect on the incentive properties of the market mechanism, an encouraging result given that theory predicts heuristics will ruin a mechanism's incentive properties. Copyright © 2015 John Wiley & Sons, Ltd.
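The greedy idea can be sketched for a drastically simplified single-attribute, single-resource exchange: sort bids by per-unit value and accept while capacity lasts. This illustrates the heuristic style only, not the paper's multi-attribute combinatorial mechanism:

```python
def greedy_allocate(bids, capacity):
    """Greedy winner determination for a toy single-resource exchange.

    bids: list of (price, units) buy bids; returns the accepted bids and the
    total welfare. Sorting by price density (price per unit) and scanning
    once makes this O(n log n), versus the NP-hard exact allocation.
    """
    accepted, welfare = [], 0.0
    for price, units in sorted(bids, key=lambda b: b[0] / b[1], reverse=True):
        if units <= capacity:
            capacity -= units
            accepted.append((price, units))
            welfare += price
    return accepted, welfare
```

The trade-off the evaluation reports shows up even here: the greedy order can skip a large bid whose density is low but whose total value would have been higher, sacrificing some efficiency for scalability.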

7.
Designing oligonucleotide strands that hybridize selectively, so as to reduce undesired reactions, is a critical step for successful DNA computing. To accomplish this, DNA molecules must be restricted to a wide window of thermodynamic and logical conditions, which in turn facilitate and control the algorithmic processes implemented by chemical reactions. In this paper, we propose a multiobjective evolutionary algorithm for DNA sequence design that, unlike preceding evolutionary approaches, uses a matrix-based chromosome as its encoding strategy. Computational results show that a matrix-based GA, along with its specific genetic operators, can improve performance on DNA sequence optimization compared with previous methods.
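A matrix-based chromosome can be pictured as one row per strand, with set-level objectives computed over the whole matrix. The sketch below (hypothetical helper names, not the paper's operators) shows such an encoding together with one selectivity proxy, the minimum pairwise Hamming distance:

```python
import random

BASES = "ACGT"

def random_chromosome(n_strands, length, rng):
    """Matrix-based chromosome: each row is one DNA strand."""
    return [[rng.choice(BASES) for _ in range(length)] for _ in range(n_strands)]

def gc_content(strand):
    """Fraction of G/C bases -- a common thermodynamic design constraint."""
    return sum(b in "GC" for b in strand) / len(strand)

def min_hamming(matrix):
    """Smallest pairwise Hamming distance between strands: a proxy for how
    selectively the whole set hybridizes (larger is better)."""
    n = len(matrix)
    return min(sum(a != b for a, b in zip(matrix[i], matrix[j]))
               for i in range(n) for j in range(i + 1, n))
```

A multiobjective GA over this encoding can apply row-wise operators (mutate one strand) and matrix-wise operators (swap strand subsets between parents), which is what makes the matrix representation attractive.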

8.
With the advancement of new processor and memory architectures, supercomputers with multicore, multinode architectures have become general tools for large-scale engineering and scientific simulations. However, the nonuniform latencies of intranode and internode communication on these machines introduce new challenges that must be addressed to achieve optimal performance. In this paper, a novel hybrid solver especially designed for such supercomputers is proposed. The solver is characterized by a two-level parallel computing approach based on two-level partitioning and two-level condensation. It distinguishes intranode from internode communication to minimize communication overhead, and it further reduces the size of the interface equation system to improve its convergence rate. Three numerical experiments in structural linear static analysis were conducted on the DAWNING-5000A supercomputer to demonstrate the validity and efficiency of the proposed method. Test results show that the proposed approach outperforms the conventional Schur complement method. Copyright © 2014 John Wiley & Sons, Ltd.

9.
While large-scale parallel/distributed simulations are rapidly becoming critical research modalities in academia and industry, their efficient and scalable implementation continues to present many challenges. A key challenge is that the dynamic and complex communication/coordination these applications require (which depends on the state of the phenomenon being modeled) is determined by the specific numerical formulation, the domain decomposition and/or sub-domain refinement algorithms used, and so on, and is known only at runtime. This paper presents Seine, a dynamic geometry-based shared-space interaction framework for scientific applications. The framework provides the flexibility of shared-space-based models and supports extremely dynamic communication/coordination patterns while still enabling scalable implementations. The design and a prototype implementation of Seine are presented. Seine complements, and can be used in conjunction with, existing parallel programming systems such as MPI and OpenMP. An experimental evaluation using an adaptive multi-block oil-reservoir simulation demonstrates the performance and scalability of applications using Seine. Copyright © 2006 John Wiley & Sons, Ltd.
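The geometry-based matching at the heart of such a shared space reduces to axis-aligned bounding-box overlap: two registered regions interact exactly when their boxes intersect. A minimal sketch of that test (not the Seine API):

```python
def overlaps(a, b):
    """Axis-aligned bounding-box overlap test in any number of dimensions.

    A box is a pair ((lo, ...), (hi, ...)). In a geometry-based shared space,
    two registered regions must communicate iff their boxes overlap, so this
    predicate decides at runtime which processes exchange data.
    """
    return all(al <= bh and bl <= ah
               for al, ah, bl, bh in zip(a[0], a[1], b[0], b[1]))

assert overlaps(((0, 0), (4, 4)), ((3, 3), (6, 6)))        # overlapping 2D tiles
assert not overlaps(((0, 0), (1, 1)), ((2, 2), (3, 3)))    # disjoint tiles
```

Because the predicate is evaluated against registered geometry rather than against a static process topology, dynamically refined sub-domains automatically find their new communication partners.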

10.
Catastrophic incidents involving hazardous materials (hazmats) are often termed low-probability, high-consequence (LPHC) events. The purpose of this article is to address some fundamental questions about hazmat incidents: What is the expected consequence of a hazmat incident? How should the consequences of such incidents be predicted? An exhaustive statistical analysis is performed on the hazmat incident data available from the U.S. Hazardous Materials Incident Reporting System (HMIRS). We present a sequence of logically deduced linear statistical models to estimate the two major impacts of an incident, (1) the population affected and (2) the cost incurred, based on the outcomes of the incident. Our initial experiments indicated that linear models are not sufficient for predicting the consequences. We therefore extended our work to evaluate the effectiveness of three multivariate statistical methods: (1) partial least squares, (2) spline regression, and (3) Box-Cox transformations. In our experiments, the Box-Cox transformation showed significant improvement in estimating the consequences. Finally, we summarize our findings and provide some general guidelines for entities interested in estimating LPHC events. © 2010 Wiley Periodicals, Inc.
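For reference, the Box-Cox family used in the third method is the one-parameter power transform below; fitting the parameter (for example, by maximum likelihood) is what lets a linear model cope with heavily skewed consequence data. A minimal sketch:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive observation y.

    lam = 1 leaves the data (shifted) linear; lam = 0 is the log transform.
    In between, the transform compresses the long right tail typical of
    LPHC consequence data before a linear model is fitted.
    """
    if y <= 0:
        raise ValueError("Box-Cox requires positive data")
    return math.log(y) if lam == 0 else (y ** lam - 1) / lam
```

In practice one would fit `lam` on the training data and back-transform predictions to the original scale; libraries such as SciPy provide both steps.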

11.
In this paper, we introduce MRMOGA (Multiple Resolution Multi-Objective Genetic Algorithm), a new parallel multi-objective evolutionary algorithm based on an injection island approach. The approach adopts a solution encoding whose resolution differs from island to island, which allows us to divide the decision variable space into well-defined overlapping regions and thus use multiple processors efficiently; it also guarantees that each processor generates solutions only within its assigned region. To assess the performance of our approach, we compare it with a parallel version of an algorithm representative of the state of the art, using standard test functions and performance measures from the specialized literature. Our results indicate that the proposed approach is a viable alternative for solving multi-objective optimization problems in parallel, particularly when dealing with large search spaces. Copyright © 2006 John Wiley & Sons, Ltd.
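Per-island resolution can be realized by quantizing each decision variable to a different bit width: coarse islands search a small gene space while fine islands refine promising regions. A hypothetical encode/decode sketch (not MRMOGA's actual encoding):

```python
def encode(x, lo, hi, bits):
    """Quantize a real decision variable on [lo, hi] to an integer gene.

    `bits` sets the island's resolution: fewer bits means a coarser grid
    and a smaller search space for that island.
    """
    levels = (1 << bits) - 1
    return round((x - lo) / (hi - lo) * levels)

def decode(g, lo, hi, bits):
    """Map an integer gene back to the real decision variable."""
    levels = (1 << bits) - 1
    return lo + g / levels * (hi - lo)
```

Migrating a solution from a coarse island to a finer one is then just re-encoding its decoded value at the higher bit width, which is what an injection-island topology exploits.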

12.
Over the past few years, research and development in bioinformatics (e.g., genomic sequence alignment) has grown continually, fueling demand for vast computing power. This trend usually calls for parallel computing, because cluster computing technology reduces execution times and increases genomic sequence alignment efficiency. One example, mpiBLAST, is a parallel version of NCBI BLAST that combines NCBI BLAST with the Message Passing Interface (MPI) standard. However, since most laboratories cannot build powerful cluster computing environments, Grid computing frameworks have been designed to meet the need: they coordinate the resources of distributed virtual organizations and satisfy the varied computational demands of bioinformatics applications. In this paper, we report on the design and implementation of a BioGrid framework, called G-BLAST, that performs genomic sequence alignments using Grid computing environments and accessible mpiBLAST applications. G-BLAST also suits cluster computing environments with a server node and several client nodes. G-BLAST can select the most appropriate work nodes, dynamically fragment genomic databases, and self-adjust according to performance data. To enhance its capability and usability, we also provide a WSRF Grid Service Portal and a Grid Service GUI desktop application so that general users can submit jobs and host administrators can maintain work nodes. Copyright © 2008 John Wiley & Sons, Ltd.
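Dynamic database fragmentation can be sketched as proportional allocation with largest-remainder rounding, so that faster nodes receive larger fragments and the fragments sum exactly to the database size. This illustrates the idea only, not G-BLAST's code:

```python
def fragment(total_seqs, node_speeds):
    """Split a sequence database across worker nodes in proportion to a
    measured performance score, using largest-remainder rounding so the
    fragment sizes sum exactly to total_seqs."""
    total_speed = sum(node_speeds)
    exact = [total_seqs * s / total_speed for s in node_speeds]
    parts = [int(e) for e in exact]
    # Hand the leftover sequences to the nodes with the largest remainders.
    by_remainder = sorted(range(len(exact)),
                          key=lambda i: exact[i] - parts[i], reverse=True)
    for i in by_remainder[:total_seqs - sum(parts)]:
        parts[i] += 1
    return parts
```

Re-running this with fresh performance data after each batch is one simple way a framework could "self-adjust" as node load changes.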

13.
The visual world around us displays a rich set of light effects caused by translucent and participating media. Rendering effects such as scattering, caustics, and light shafts is hard and time-consuming because of the complex interaction between light and different media. This paper presents a new rendering method based on an adaptive lattice for lighting participating media and translucent materials such as marble, wax, and shaft light. First, on the basis of the lattice-based photon tracing model, a multi-scale hierarchical lattice is constructed from mixed lattice types, combining cubic Cartesian and face-centered cubic sampling with view-dependent adaptive resolution. Then, an adaptive method is proposed to trace diffuse photons and marked specular photons with different phase functions; multiple lights and heterogeneous materials are also handled. Finally, a mixed rendering method and GPU acceleration are introduced to render the different light effects in different participating media. Copyright © 2011 John Wiley & Sons, Ltd.

14.
Moving resources closer to the user can facilitate the integration of new technologies such as edge, fog, and cloud computing and big data. However, this brings many challenges that must be overcome when distributing real-time stream processing, executing multiple applications in a safe multitenant environment, and orchestrating and managing services and resources in a hybrid fog/cloud federation. In this article, we first propose a Business Process Model and Notation (BPMN) extension to enable Internet of Things (IoT)-aware business process (BP) modeling. The proposed extension takes into consideration heterogeneous IoT and non-IoT resources, resource capacities, quality-of-service constraints, and so forth. Second, we present a new IoT-fog-cloud based architecture, which (i) supports distributed inter- and intralayer communication as well as real-time stream processing, in order to process IoT data immediately and improve the reliability of the entire system; (ii) enables multiapplication execution within a multitenancy architecture, using the single sign-on technique to guarantee data integrity in a multitenant environment; and (iii) relies on orchestration and federation management services to deploy BPs onto the appropriate fog and/or cloud resources. Third, using the proposed BPMN 2.0 extension, we model smart monitoring systems for autistic children and for coronavirus disease 2019. We then build prototypes of these two smart systems and carry out a set of extensive experiments illustrating the efficiency and effectiveness of our work.

15.
Recently, multi- and many-objective meta-heuristic algorithms have received considerable attention due to their capability to solve optimization problems that involve more than one fitness function. This paper presents a comprehensive study of these techniques applied to machine learning problems. Three topics are reviewed: (a) feature extraction and selection, (b) hyper-parameter optimization and model selection in supervised learning, and (c) clustering, or unsupervised learning. The survey also highlights directions for future research in related areas.

16.
This paper presents an approximation design for decentralized adaptive output-feedback control of large-scale pure-feedback nonlinear systems with unknown time-varying delayed interconnections. The interaction terms are bounded by unknown nonlinear bounding functions that include unmeasurable state variables of the subsystems. These bounding functions, together with the algebraic loop between virtual and actual control inputs in the pure-feedback form, make output-feedback controller design difficult and challenging. To overcome these difficulties, an observer-based dynamic surface memoryless local controller for each subsystem is designed using appropriate Lyapunov-Krasovskii functionals, a neural-network-based function approximation technique, and an additional first-order low-pass filter for the actual control input. It is shown that all signals in the closed-loop system are semiglobally uniformly bounded and that the control errors converge to an adjustable neighborhood of the origin. Finally, simulation examples illustrate the effectiveness of the proposed decentralized control scheme. Copyright © 2013 John Wiley & Sons, Ltd.

17.
This paper discusses the implementation, architecture, and use of a graphical web-based application called ReliaCloud-NS that allows users to (1) evaluate the reliability of a cloud computing system (CCS) and (2) design a CCS to a specified reliability level, for both public and private clouds. The software provides a RESTful application programming interface for performing nonsequential Monte Carlo simulations that evaluate CCS reliability. Simulation results are stored and presented to the user as interactive charts and graphs within a web browser. The software supports multiple types of CCS components, simulations, and virtual machine allocation schemes. ReliaCloud-NS also contains a novel feature that evaluates CCS reliability across a range of virtual machine allocations and establishes and graphs a CCS reliability curve. This paper discusses the software architecture, the interactive web-based interface, and the different types of simulations available in ReliaCloud-NS, and presents an overview of the results generated from a simulation.
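A nonsequential Monte Carlo reliability evaluation samples each component's up/down state independently in every trial and counts the trials in which the system still meets demand. A minimal k-out-of-n sketch in that style (not the ReliaCloud-NS implementation):

```python
import random

def mc_reliability(component_avail, required, n_trials=20000, seed=42):
    """Nonsequential Monte Carlo estimate of the probability that at least
    `required` of the components are simultaneously up.

    Each trial samples every component's state independently from its
    availability; no chronological sequence of failures is simulated,
    which is what "nonsequential" means here.
    """
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_trials):
        up = sum(rng.random() < a for a in component_avail)
        ok += up >= required
    return ok / n_trials

# 3 hosts at 0.9 availability, any 2 required. The exact value is
# 3 * 0.9**2 * 0.1 + 0.9**3 = 0.972, so the estimate should land nearby.
est = mc_reliability([0.9, 0.9, 0.9], 2)
```

Sweeping `required` (or the VM-to-host allocation feeding it) over a range and plotting the resulting estimates is the same shape of computation as the reliability-curve feature described above.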

18.
Digital watermarking evaluation and benchmarking are challenging tasks because of multiple, conflicting evaluation criteria. A few frameworks for digital watermarking evaluation and benchmarking have been presented, but they still have a number of limitations, such as fixing several attributes at the expense of others, and the well-known benchmarking approaches are limited to robust watermarking. This paper therefore presents a new methodology for digital watermarking evaluation and benchmarking based on large-scale data, external evaluators, and a group decision-making context. Two experiments are performed. In the first, a noise-gate-based digital watermarking approach is developed and its scheme is enhanced. Sixty audio samples from different audio styles are tested with the two algorithms, and the resulting 120 samples are evaluated according to three metrics (quality, payload, and complexity) to generate a set of digital watermarking samples. The second experiment addresses the situation in which the evaluators have different preferences, which requires weight measurement combined with a decision-making solution. The analytic hierarchy process is used to measure evaluator preferences, and the technique for order of preference by similarity to ideal solution (TOPSIS) is applied in different contexts (individual and group). Selecting the proper context, with different aggregation operators, is then recommended for benchmarking the results of the first experiment. The findings are as follows: (1) group and individual decision making give the same result in this case study; however, when the priority weights are generated from the evaluators, group decision making is the recommended way to resolve the trade-offs in benchmarking digital watermarking approaches. (2) Internal and external aggregation both show that the enhanced watermarking approach performs better than the original one. © 2016 The Authors. Software: Practice and Experience published by John Wiley & Sons Ltd.
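For reference, the core of TOPSIS is short: vector-normalize the decision matrix, weight it, and score each alternative by its relative closeness to the ideal solution. A minimal sketch with assumed benefit/cost flags (not the paper's group-context variant):

```python
def topsis(matrix, weights, benefit):
    """TOPSIS ranking of alternatives.

    matrix[i][j] = score of alternative i on criterion j; benefit[j] marks
    higher-is-better criteria. Returns a closeness score in [0, 1] per
    alternative (1 = coincides with the ideal solution).
    """
    n_crit = len(weights)
    # Vector normalization, then weighting.
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(n_crit)]
    v = [[row[j] / norms[j] * weights[j] for j in range(n_crit)]
         for row in matrix]
    # Ideal and anti-ideal alternatives per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((x - t) ** 2 for x, t in zip(row, ideal)) ** 0.5
        d_neg = sum((x - t) ** 2 for x, t in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

In the paper's setting, the rows would be watermarking approaches, the columns the quality/payload/complexity metrics, and the weights would come from the analytic hierarchy process, individually or aggregated for the group context.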

19.
Chee Shin Yeo and Rajkumar Buyya. Software: Practice and Experience, 2006, 36(13):1381-1419
In utility-driven cluster computing, cluster Resource Management Systems (RMSs) need to know the specific needs of different users in order to allocate resources accordingly. This, in turn, is vital for achieving service-oriented Grid computing, which harnesses resources distributed worldwide based on users' objectives. Recently, numerous market-based RMSs have been proposed that use real-world market concepts and behavior to assign resources to users on various computing platforms. The aim of this paper is to develop a taxonomy that characterizes and classifies how market-based RMSs can support utility-driven cluster computing in practice. The taxonomy is then mapped to existing market-based RMSs designed for cluster and other computing platforms in order to survey current research developments and identify outstanding issues. Copyright © 2006 John Wiley & Sons, Ltd.

20.
Real-time rendering of large-scale engineering computer-aided design (CAD) models is a recognized challenge. Because of the constraints of limited graphics processing unit (GPU) memory size and computation capacity, a massive model with hundreds of millions of triangles cannot be loaded and rendered in real time on most modern GPUs. In this paper, an efficient GPU out-of-core framework is proposed for interactively visualizing large-scale CAD models. To make data fetching from CPU host memory to GPU device memory more efficient, a parallel offline geometry compression scheme minimizes the storage cost of each primitive by compressing the level-of-detail (LOD) geometries into a highly compact format. At the rendering stage, occlusion culling and LOD processing algorithms are integrated and implemented in an efficient GPU-based approach that determines a minimal set of primitives to transfer for each frame. A prototype software system was developed to preprocess and render massive CAD models with the proposed framework. Experimental results show that users can walk through massive CAD models with hundreds of millions of triangles at high frame rates using our framework. Copyright © 2016 John Wiley & Sons, Ltd.
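Geometry compression for streaming often starts with bounding-box-relative quantization of vertex coordinates to fixed-width integers. The sketch below illustrates that generic idea only (it is not the paper's compact LOD format):

```python
def quantize_vertex(v, bbox_min, bbox_max, bits=16):
    """Compress one vertex into `bits`-bit integers relative to the model's
    bounding box, shrinking e.g. 3 x 32-bit floats to 3 x 16-bit ints."""
    levels = (1 << bits) - 1
    return tuple(round((x - lo) / (hi - lo) * levels)
                 for x, lo, hi in zip(v, bbox_min, bbox_max))

def dequantize_vertex(q, bbox_min, bbox_max, bits=16):
    """Recover an approximate vertex; the error is bounded by half a
    quantization step per axis."""
    levels = (1 << bits) - 1
    return tuple(lo + c / levels * (hi - lo)
                 for c, lo, hi in zip(q, bbox_min, bbox_max))
```

The dequantization step is cheap enough to run per-vertex on the GPU, which is why such formats suit host-to-device streaming: less data crosses the bus, and the GPU pays only a small decode cost.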
