Similar Documents
20 similar documents found
1.
One of the major new developments in computing technology is the mini-computer. This paper presents the results of the implementation of the ANSYS computer program (a large-scale structural analysis system) on a mini-computer. The implementation of ANSYS started with an evaluation of the existing and proposed mini-computer hardware and software systems, relative to a set of criteria derived from the structural software requirements. After benchmarking several systems to compare the claimed performance with the actual performance, the system which most nearly met the requirements for the ANSYS program was selected and ordered. The configuration chosen for this development machine is illustrated, and variations from the development configuration which would be desirable for a production environment are discussed. The results of the implementation of ANSYS on the selected mini-computer system are presented, including run times, run costs, accuracy of results and computer storage requirements. Finally, this paper discusses the future directions which will be pursued in this development effort, including the interface between the mini-computer and a larger central computer, the limits on problem size imposed by core memory and solution time, and suggestions for improving the performance of structural problems in a mini-computer environment.

2.
Wood, D. C. Computer Journal, 1969, 12(4): 317-319.

3.
Many problems with location aspects in business, engineering, defence, resource exploitation, and even the medical sciences can be expressed as grid-based location problems (GBLPs), modeled as integer linear programming problems. Such problems are often computationally demanding to solve. We develop a relax-and-fix-based decomposition approach for solving large-scale GBLPs, which we demonstrate significantly reduces solution runtimes without severely impacting optimality. We also introduce problem-specific logical restrictions: constraints that reduce the feasible region and the resulting branch-and-bound tree with minimal loss of optimality.
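The abstract above does not give the formulation, so the following is only a minimal sketch of the relax-and-fix idea on a toy grid-covering problem; the grid size, two-row blocks, coverage neighbourhood and use of scipy.optimize.milp are illustrative assumptions, not the paper's model.

```python
# Illustrative relax-and-fix sketch for a toy grid-based location problem:
# choose facility cells on an R x C grid so every cell is covered by a
# facility in its 3x3 neighbourhood, minimising the number of facilities.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

R, C = 6, 6
n = R * C
idx = lambda r, c: r * C + c

# Coverage constraints: for every cell, the sum of x over its neighbourhood >= 1.
A = np.zeros((n, n))
for r in range(R):
    for c in range(C):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < R and 0 <= cc < C:
                    A[idx(r, c), idx(rr, cc)] = 1.0
cons = LinearConstraint(A, lb=np.ones(n), ub=np.full(n, np.inf))
cost = np.ones(n)

# Relax-and-fix over row blocks: integer in the current block,
# fixed in earlier blocks, LP-relaxed in later blocks.
lo, hi = np.zeros(n), np.ones(n)
blocks = [list(range(r * C, (r + 2) * C)) for r in range(0, R, 2)]  # 2-row blocks
for block in blocks:
    integrality = np.zeros(n)
    integrality[block] = 1                      # current block is binary
    res = milp(cost, constraints=cons, integrality=integrality,
               bounds=Bounds(lo, hi))
    vals = np.round(res.x[block])               # fix the solved block
    lo[block], hi[block] = vals, vals

print("facilities used:", int(cost @ res.x))
```

Each pass keeps only the current block integral, so every MILP solved is much smaller than the full problem; the price is that fixing a block too early can, in general, cost optimality or even feasibility.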

4.

5.
《Computers & chemistry》1993,17(2):191-201
The improved efficiency of similarity search programs and the affordability of even faster computers allow studies in which whole sequence databases can be the target of various comparisons with increasingly larger or more numerous query sequences. However, the usefulness of these “brute force” methods is now limited by the time it takes an experienced scientist to sift the biologically relevant matches from overwhelming, albeit “statistically significant”, outputs. The discrepancy between statistical and biological significance has several causes: erroneous database entries, repetitive sequence elements, and the ubiquity of low-complexity segments with biased composition. We present two masking methods (programs XNU and XBLAST) capable of eliminating most of the irrelevant output in a variety of large-scale sequence analysis situations: global “all against all” database comparisons, massive partial cDNA sequencing (EST), positional cloning and genomic data analysis.
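XNU and XBLAST themselves are not reproduced here; the sketch below only illustrates the general idea of masking low-complexity segments, using a sliding-window Shannon-entropy filter whose window size and threshold are arbitrary assumptions.

```python
# Minimal sketch of low-complexity masking (not the XNU/XBLAST algorithms):
# residues inside any sliding window whose Shannon entropy falls below a
# threshold are replaced by 'X' before the sequence is used as a query.
from collections import Counter
from math import log2

def mask_low_complexity(seq, window=12, threshold=2.0):
    seq = seq.upper()
    masked = list(seq)
    for i in range(len(seq) - window + 1):
        counts = Counter(seq[i:i + window])
        entropy = -sum((c / window) * log2(c / window) for c in counts.values())
        if entropy < threshold:            # biased composition -> mask the window
            for j in range(i, i + window):
                masked[j] = 'X'
    return "".join(masked)

# The poly-glutamine run in this toy protein sequence is masked out.
print(mask_low_complexity("MSTAREQQQQQQQQQQQQLLKDPETFGHIK"))
```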

6.
Optimization Analysis of Reliability Parameters for Large-Scale Storage Systems
In large-scale storage systems, data reliability is attracting increasing attention. Existing studies have analysed, for a given system scale, the rough influence of certain system parameters, such as the replica placement strategy and the number of stored objects, on reliability, but rarely address their optimal values or optimal combinations. This paper proposes a new reliability model based on object-granularity recovery; building on this model and on an analysis of three mainstream replica placement strategies, the individually optimal value of each system parameter and their jointly optimal combination are computed. Compared with existing models, the proposed model is easier to solve and yields more comprehensive and practical optimal values, which can directly and effectively guide system designers in building more reliable large-scale storage systems.
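The object-granularity recovery model itself is not reproduced here; as a purely illustrative aside, a small Monte-Carlo experiment (with assumed node counts, replica counts and failure counts) shows why the replica placement strategy matters for the probability of data loss.

```python
# Illustrative Monte-Carlo comparison of two replica placement strategies
# (fully random placement vs. partitioned replica groups); this is NOT the
# object-granularity recovery model of the paper, just a toy experiment.
import random

n, r, k, m, trials = 60, 3, 3, 5_000, 1_000   # nodes, replicas, failures, objects, trials
random.seed(0)

nodes = list(range(n))
groups = [frozenset(range(i, i + r)) for i in range(0, n, r)]      # disjoint groups
random_sets = [frozenset(random.sample(nodes, r)) for _ in range(m)]
grouped_sets = [random.choice(groups) for _ in range(m)]

def loss_probability(replica_sets):
    losses = 0
    for _ in range(trials):
        failed = set(random.sample(nodes, k))        # k simultaneous node failures
        if any(s <= failed for s in replica_sets):   # some object lost every replica
            losses += 1
    return losses / trials

print("P(data loss), random placement   :", loss_probability(random_sets))
print("P(data loss), partitioned groups :", loss_probability(grouped_sets))
```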

7.
Due to the intrinsically multi-physics nature of the problem, it is prohibitively complex to design and implement a simulation software platform for studying structural responses to a detonation shock. In this article, a partitioned fluid-structure interaction computing platform is designed for parallel simulation of structural responses to a detonation shock. The detonation and wave propagation are modeled in an open-source multi-component solver based on OpenFOAM and blastFoam, and the structural responses are simulated with the finite element library deal.II. To capture the interaction dynamics between the fluid and the structure, both solvers are adapted to preCICE. To improve the parallel performance of the computing platform, inter-solver data is exchanged by peer-to-peer communication, eliminating the intermediate server used in conventional multi-physics software. Furthermore, the coupled solver with detonation support has been deployed on a computing cluster, taking distributed data storage and load balancing between solvers into account. The 3D numerical results of structural responses to a detonation shock are presented and analyzed. On 256 processor cores, the speedup of the detonation-shock simulation reaches 178.0 with 5.1 million mesh cells, and the parallel efficiency reaches 69.5%. These results demonstrate good potential for massively parallel simulation. Overall, a general-purpose fluid-structure interaction software platform with detonation support is proposed by integrating open-source codes. This work has practical significance for engineering applications such as construction blasting and mining.
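The quoted parallel-performance figures are internally consistent, as a one-line check of the efficiency arithmetic shows:

```python
# Parallel efficiency = speedup / number of cores, using the figures quoted above.
speedup, cores = 178.0, 256
print(f"efficiency = {speedup / cores:.1%}")   # -> 69.5%
```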

8.
Given a linear quadtree forming a region's contour, an algorithm is presented to determine all the pixels 4-connected to the border's elements. The procedure, based on a connectivity technique, associates a two-valued state (“blocked” or “unblocked”) with each node and fills increasingly larger quadrants with black nodes whose state is known to be unblocked. Advantages of the proposed procedure over existing ones are: (i) multiply connected regions can be reconstructed; (ii) the border can be given as a set of either 4- or 8-connected pixels.
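The linear-quadtree procedure itself is not reproduced here; the sketch below is only a simplified pixel-grid analogue for a simply connected region: flood-fill the exterior from the image frame, then take everything that is neither exterior nor border as interior.

```python
# Simplified pixel-grid analogue of region filling from a contour (not the
# linear-quadtree algorithm of the paper): flood-fill the exterior from the
# frame, then everything that is neither exterior nor border is interior.
from collections import deque

def fill_from_border(border, width, height):
    exterior = set()
    queue = deque((x, y) for x in range(width) for y in range(height)
                  if (x in (0, width - 1) or y in (0, height - 1))
                  and (x, y) not in border)
    exterior.update(queue)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # 4-connected
            if 0 <= nx < width and 0 <= ny < height and \
               (nx, ny) not in border and (nx, ny) not in exterior:
                exterior.add((nx, ny))
                queue.append((nx, ny))
    return {(x, y) for x in range(width) for y in range(height)} - exterior - border

# Square ring contour from (1,1) to (5,5) in a 7x7 image.
border = {(x, 1) for x in range(1, 6)} | {(x, 5) for x in range(1, 6)} \
       | {(1, y) for y in range(1, 6)} | {(5, y) for y in range(1, 6)}
print(sorted(fill_from_border(border, 7, 7)))   # the 3x3 block of interior pixels
```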

9.
In this paper we develop a new approach to the design of near-optimal decentralized controllers for large-scale linear interconnected dynamical systems. All design-stage calculations are carried out at the subsystem level, which is achieved by using a simple reduced-order model of the interactions. The resulting controller is decentralized, independent of initial conditions, and capable of accommodating constant unknown disturbances. The approach is illustrated on a 22nd-order river pollution control example. In addition, it is shown that a simple modification ensures the controller is asymptotically stable with a pre-specified degree of stability, and that the modified controller remains connectively asymptotically stable under structural perturbations.
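The reduced-order interaction model of the paper is not reproduced here; the sketch below only conveys the flavour of subsystem-level design, computing a local LQR gain for each subsystem while ignoring the interconnections (the subsystem matrices and weights are made up for illustration).

```python
# Toy sketch of decentralized control design (not the paper's method):
# each subsystem gets a local LQR gain computed from its own (A_i, B_i),
# with the interconnection terms ignored at the design stage.
import numpy as np
from scipy.linalg import solve_continuous_are

subsystems = [
    (np.array([[0.0, 1.0], [-1.0, -0.5]]), np.array([[0.0], [1.0]])),   # (A_1, B_1)
    (np.array([[0.0, 1.0], [-2.0, -0.2]]), np.array([[0.0], [1.0]])),   # (A_2, B_2)
]
Q, R = np.eye(2), np.array([[1.0]])

gains = []
for A, B in subsystems:
    P = solve_continuous_are(A, B, Q, R)        # local Riccati solution
    K = np.linalg.solve(R, B.T @ P)             # local feedback u_i = -K_i x_i
    gains.append(K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```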

10.
Formal scenarios have many uses in requirements engineering, validation, performance modeling, and test generation. Many tools and methodologies can handle scenarios when the number of steps (interleaved inputs and outputs of the target system) is reasonably small. However, scenario-based techniques do not scale well with the number of steps, the number of actors, and the complexity of behaviors and system interactions to be specified in the scenario. First, it is impractically tedious and error-prone to specify thousands of input steps and corresponding expected outputs. Second, even if one can write down such large scale scenarios, confidence in their correctness is naturally low. Third, complex systems requiring large scale scenarios tend to require many such scenarios to adequately cover the behavior space. This paper describes the motivations for and problems of large scale scenarios, as well as the LSS method, which uses automated and semi-automated techniques in describing, maintaining, communicating, and using large scale scenarios in requirements engineering. The method is illustrated in two widely divergent application domains: military live training instrumentation and electronic mail servers. A case study demonstrates the practical and beneficial use of LSS in architectural modeling of a complex, real-world system design. A two-page extended abstract of this paper appeared in Proc. 21st ACM/IEEE Intl. Conf. on Automated Software Engineering (ASE 2006).

11.
12.
Automating the analysis of large volumes of seismic data is a data-mining problem in which a large database of 3D images is searched by content to identify the regions of most interest to the oil industry. In this paper we perform this search using the 3D orientation histogram as a texture analysis tool to represent and identify regions within the data that are compatible with a query texture.
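A minimal sketch of the general idea, not the authors' exact descriptor: bin 3D gradient orientations into an azimuth/elevation histogram weighted by gradient magnitude, and compare a query texture with a candidate region by histogram intersection; the bin counts and the similarity measure are assumptions.

```python
# Minimal sketch of a 3D gradient-orientation histogram used as a texture
# descriptor (illustrative only; bin counts and similarity metric are assumed).
import numpy as np

def orientation_histogram(volume, bins=(8, 8)):
    g0, g1, g2 = np.gradient(volume.astype(float))               # gradients along each axis
    mag = np.sqrt(g0**2 + g1**2 + g2**2)
    azimuth = np.arctan2(g1, g2)                                  # [-pi, pi]
    elevation = np.arctan2(g0, np.sqrt(g1**2 + g2**2))            # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(azimuth.ravel(), elevation.ravel(),
                                bins=bins, weights=mag.ravel())
    return hist / hist.sum()

def similarity(h1, h2):
    return np.minimum(h1, h2).sum()        # histogram intersection in [0, 1]

rng = np.random.default_rng(0)
query = rng.normal(size=(32, 32, 32))      # stand-ins for seismic sub-volumes
candidate = rng.normal(size=(32, 32, 32))
print("similarity:", similarity(orientation_histogram(query),
                                orientation_histogram(candidate)))
```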

13.
Nama, Sukanta. Applied Intelligence, 2021, 51(11): 7881-7902.
SOS is a nature-inspired global optimization algorithm used to solve various complex, hard optimization problems. However, some of its basic features...

14.
An adaptive finite element technique for structural dynamic analysis
An adaptive finite element discretization technique, which utilizes specially derived Ritz vectors, is presented for solving structural dynamics problems. The special Ritz vectors are applied as the basis of transformation in geometric coordinates for mode-superposition dynamic analysis. To capture both the low-frequency and the high-frequency response using multigrid principles, a hierarchical formulation for assembling the coefficient matrices is proposed and used within the framework of adaptive h-refinement. Assuming that the solution can be resolved into a set of orthogonal vectors, and that a mesh refined to satisfy the refinement criteria for all of these vectors also satisfies the criteria for the solution itself, the Ritz vectors are used as sources to discretize the continuous spatial domain. An a posteriori energy norm of the residual error serves as the error measure. Finally, the performance and efficiency of the proposed technique are demonstrated by solving several examples.
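The adaptive refinement machinery is not reproduced here; the sketch below only shows how load-dependent Ritz vectors of the kind used as the transformation basis can be generated for a small (K, M, f) system, using the standard recurrence rather than necessarily the paper's exact variant.

```python
# Generation of load-dependent Ritz vectors for mode-superposition analysis
# (standard recurrence on an illustrative small system; not the paper's
# adaptive scheme).
import numpy as np

def ritz_vectors(K, M, f, n_vec):
    vectors = []
    x = np.linalg.solve(K, f)                       # static response to the load
    x /= np.sqrt(x @ M @ x)                         # M-normalise
    vectors.append(x)
    for _ in range(1, n_vec):
        x = np.linalg.solve(K, M @ vectors[-1])     # next candidate vector
        for v in vectors:                           # M-orthogonalise (Gram-Schmidt)
            x -= (v @ M @ x) * v
        x /= np.sqrt(x @ M @ x)
        vectors.append(x)
    return np.column_stack(vectors)

K = np.array([[4.0, -2.0, 0.0], [-2.0, 4.0, -2.0], [0.0, -2.0, 2.0]])  # 3-DOF chain
M = np.diag([1.0, 1.0, 0.5])
f = np.array([0.0, 0.0, 1.0])                        # spatial load pattern
Phi = ritz_vectors(K, M, f, 2)
print("reduced stiffness:\n", Phi.T @ K @ Phi)       # basis for mode superposition
```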

15.
Based on the pull-and-prune technique proposed here for computing source-to-sink reliability, a very powerful 30-line BASIC personal computer program reduces a complex reliability structure one node at a time by pruning return loops from the evolving sink. Compared to the methodical application of the recursive pivotal decomposition technique, the proposed algorithm is particularly efficient for complex structures intertwined with many counter-directed components.
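The 30-line BASIC pull-and-prune program is not reproduced here; the sketch below instead implements the baseline it is compared against, recursive pivotal decomposition (factoring), for the two-terminal reliability of a small bridge network with assumed component reliabilities.

```python
# Recursive pivotal decomposition (factoring) for source-to-sink reliability:
# R = p_e * R(e contracted) + (1 - p_e) * R(e deleted). This is the baseline
# technique mentioned above, not the pull-and-prune algorithm of the paper.
def reliability(edges, s, t):
    if s == t:
        return 1.0
    if not edges:
        return 0.0
    (u, v, p), rest = edges[0], edges[1:]
    # Branch 1: the pivot edge works -> contract it (merge v into u).
    relabel = lambda x: u if x == v else x
    contracted = [(relabel(a), relabel(b), q) for a, b, q in rest
                  if relabel(a) != relabel(b)]                  # drop self-loops
    works = reliability(contracted, relabel(s), relabel(t))
    # Branch 2: the pivot edge fails -> simply delete it.
    fails = reliability(rest, s, t)
    return p * works + (1.0 - p) * fails

# Classic bridge network, source 's' to sink 't', assumed reliabilities of 0.9.
bridge = [("s", "a", 0.9), ("s", "b", 0.9), ("a", "b", 0.9),
          ("a", "t", 0.9), ("b", "t", 0.9)]
print(round(reliability(bridge, "s", "t"), 6))   # ~0.97848 for this bridge
```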

16.
In this paper, communication is seen as the foundation for purposeful human–human activity in dynamic environments. Coordination is a central issue in large systems such as military organisations, enterprises, or rescue organisations, and communication is needed in order to achieve coordination in such systems. This paper suggests a holistic approach to control, where control in a large system is seen as an emergent product of human interaction, focusing on human–human communication from a technical, organisational, temporal, and social perspective.

17.
We report on a portable communication environment, ‘SCIDDLE’, for distributing computations over heterogeneous networks of UNIX computers. SCIDDLE is based on the client-server model. It was designed to support the distribution of large-scale numerical computations and to keep its usage as simple as possible. All interprocess communication is done via remote procedure calls. The user defines the interface between communicating processes in a simple declarative language. Parallel programming is supported by asynchronous RPCs. Convenient array handling has been implemented. We demonstrate the usefulness of the system with an application from quantum chemistry running on internet-connected workstations and supercomputers.
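SCIDDLE's declarative interface language and asynchronous RPCs are not reproduced here; as a rough modern analogue of the client-server RPC model it describes, here is a minimal sketch using Python's standard xmlrpc library.

```python
# Rough modern analogue of the client-server RPC model described above
# (Python's standard xmlrpc, not SCIDDLE itself).
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def dot(a, b):                        # a "remote procedure" operating on arrays
    return sum(x * y for x, y in zip(a, b))

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(dot, "dot")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://localhost:8000")
print(client.dot([1, 2, 3], [4, 5, 6]))   # -> 32
server.shutdown()
```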

18.
《Parallel Computing》2007,33(7-8):572-591
The Grid Information Service (GIS) is a core component of the Grid software infrastructure. It provides diverse information to users and other service components in Grid environments. In this paper, we propose a scalable GIS architecture for information management in a large-scale Grid Virtual Organization (VO). The architecture consists of the VO layer, the site layer and the resource layer. At the resource layer, information agents and pluggable information sensors are deployed on each monitored resource; this agent-and-sensor approach provides a flexible framework that enables specific information to be captured. At the site layer, a site information service component with caching capability aggregates and maintains up-to-date information on all the monitored resources within an administrative domain. At the VO layer, a peer-to-peer approach is used to build a virtual network of site information services for information discovery and query in a large-scale Grid VO; this decentralized approach makes information management scalable and robust. Furthermore, we propose a security framework for the GIS, which provides policies for authentication and authorization control at both the site and the VO layers. Our GIS has been implemented on the Globus Toolkit 4 as Web services compliant with the Web Services Resource Framework (WSRF) specifications. The experimental results show that the GIS scales satisfactorily when handling information for large Grids.

19.
This paper is concerned with the overall design of the Terabit File Store — a network storage facility based on a Braegen Automated Tape Library. The characteristics of the tape library — in particular its large capacity and slow access — provide both challenges and opportunities for the system designer. The use of disc cache and the optimization of file placement have been used to provide reasonable performance in the face of substantial tape handling times. Catalogue facilities have been tailored to cater for the support of large file holdings and user file back-up applications. It has been possible to automate the back-up of essential file-store information which, together with automatic integrity checks, helps to minimize the damage that can be caused by faults. Considerable attention has been given to facilities that automate much of the management of the system in a network environment. Other aspects of the design discussed in the paper include protection, housekeeping and host interfacing.

20.
The effectiveness of using minimization techniques for the solution of nonlinear structural analysis problems is discussed and demonstrated by comparison with the conventional pseudo-force technique. The comparison involves nonlinear problems with relatively few degrees of freedom. A survey of the state of the art in algorithms for unconstrained minimization reveals that extension of the technique to large-scale nonlinear systems is possible.
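The paper's benchmark problems are not available here; the toy example below only illustrates the basic idea of solving a nonlinear equilibrium problem by unconstrained minimization of total potential energy, for a single hardening spring with assumed stiffness and load values.

```python
# Toy illustration of the minimization approach to nonlinear structural analysis:
# equilibrium of a hardening spring found by minimizing total potential energy
# (assumed stiffness/load values; not one of the paper's benchmark problems).
import numpy as np
from scipy.optimize import minimize

k, k3, P = 100.0, 500.0, 40.0        # linear stiffness, cubic stiffness, applied load

def potential_energy(u):
    u = u[0]
    return 0.5 * k * u**2 + 0.25 * k3 * u**4 - P * u   # strain energy minus external work

res = minimize(potential_energy, x0=[0.0])
u = res.x[0]
print(f"u = {u:.5f},  residual k*u + k3*u^3 - P = {k*u + k3*u**3 - P:.2e}")
```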
