Similar Literature
20 similar articles found.
1.
The Atmospheric Radiation Measurement (ARM) Data Integrator (ADI) is a framework designed to streamline the development of scientific algorithms that analyze, and models that use, time-series NetCDF data. ADI automates the process of retrieving and preparing data for analysis, provides a modular, flexible framework that simplifies software development, and supports a data integration workflow. Algorithm and model input data, preprocessing, and output data specifications are defined through a graphical interface. ADI includes a library of software modules to support the workflow, and a source code generator that produces C, IDL®, and Python™ templates to jump-start development. While developed for processing climate data, ADI can be applied to any time-series data. This paper discusses the ADI framework and how ADI's capabilities can decrease the time and cost of implementing scientific algorithms, allowing modelers and scientists to focus their efforts on their research rather than on preparing and packaging data.
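As an illustration of the retrieval-and-preparation step that ADI automates, here is a minimal Python sketch that reads one time-series variable from a NetCDF file and bin-averages it onto a regular time grid. The file name and variable names are hypothetical, and this is not ADI's own API; in ADI these specifications would come from the graphical interface rather than being hard-coded.

```python
import numpy as np
from netCDF4 import Dataset

# Read one time-series variable (names here are hypothetical, not ADI's API).
with Dataset("sgpmet.b1.20230101.000000.nc") as ds:
    time = ds.variables["time"][:]          # e.g. seconds since a base time
    temp = ds.variables["temp_mean"][:]     # hypothetical measurement

# Prepare: bin-average onto a regular 60-second grid.
edges = np.arange(time.min(), time.max() + 60, 60)
idx = np.digitize(time, edges) - 1
counts = np.bincount(idx, minlength=len(edges))
sums = np.bincount(idx, weights=temp, minlength=len(edges))
avg = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```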

2.
陶金花  苏林  李树楷 《计算机应用》2007,27(10):2578-2580
This paper analyzes the LiDAR data processing workflow and, drawing on the Open Grid Services Architecture (OGSA), proposes a LiDAR data processing platform architecture in which processing tasks are partitioned appropriately and assigned to distributed grid nodes; the nodes compute in parallel and cooperatively, raising overall processing speed. Finally, resampling a laser point cloud into a gridded DEM is used as an example to illustrate how an algorithm runs under this architecture.
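For the abstract's closing example, the following is a serial NumPy sketch of resampling a point cloud into a gridded DEM by averaging point elevations per cell; the paper's contribution is distributing tiles of exactly this kind of work across OGSA grid nodes. The synthetic point array and 5 m cell size are illustrative.

```python
import numpy as np

pts = np.random.rand(100_000, 3) * [1000.0, 1000.0, 50.0]  # x, y, z in metres
cell = 5.0                                                  # DEM resolution

# Assign each point to a grid cell.
ix = ((pts[:, 0] - pts[:, 0].min()) // cell).astype(int)
iy = ((pts[:, 1] - pts[:, 1].min()) // cell).astype(int)
nx, ny = ix.max() + 1, iy.max() + 1

# Mean elevation per cell; empty cells become NaN.
flat = ix * ny + iy
sums = np.bincount(flat, weights=pts[:, 2], minlength=nx * ny)
cnts = np.bincount(flat, minlength=nx * ny)
dem = np.where(cnts > 0, sums / np.maximum(cnts, 1), np.nan).reshape(nx, ny)
```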

3.
The Journal of Supercomputing - Developing efficient implementations of graph algorithms is an extremely important problem in modern computer science, since graphs are frequently used in various...

4.
Data-intensive architecture for scientific knowledge discovery
This paper presents a data-intensive architecture that demonstrates the ability to support applications from a wide range of application domains, and to support the different types of users involved in defining, designing and executing data-intensive processing tasks. The prototype architecture is introduced, and the pivotal role of DISPEL as a canonical language is explained. The architecture promotes the exploration and exploitation of distributed and heterogeneous data and spans the complete knowledge discovery process, from data preparation, to analysis, to evaluation and reiteration. The architecture evaluation included large-scale applications from astronomy, cosmology, hydrology, functional genetics, image processing and seismology.

5.
Talia  D. 《Computer》2000,33(9):44-52
Cellular automata offer a powerful modeling approach for complex systems in which global behavior arises from the collective effect of many locally interacting, simple components. Several tools based on CA are providing meaningful results for real-world applications. Cellular automata represent an efficient paradigm for the computer solution of important problems in science and engineering. Moreover, the CA model lets researchers effectively use parallel computers to achieve scalable performance. As researchers use parallel computers to solve scientific problems, they will need problem representations (paradigms) for this class of computers. Abstract mathematical models that offer an implicitly parallel representation of problems better match those architectures, but could benefit from new high-level languages, environments, and techniques. The three should support all the development steps of computational science applications while hiding architectural details from users. Computational science is also an interdisciplinary field in which many areas converge, and developing applications in this field requires the cooperation of people from different domains. Modeling and simulation using parallel cellular methods helps researchers cooperate by offering both a way to code an algorithm and an integrated environment for developing software.
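A minimal concrete instance of the CA paradigm the article describes is a Conway's Game of Life step in NumPy: every cell is updated from its eight neighbours only, which is why such models parallelize by simply partitioning the grid across processors. The grid size and rule below are the standard textbook ones, not taken from the article.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    # Count the 8 neighbours of every cell (periodic boundaries).
    n = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    # Birth on 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(grid.dtype)

grid = np.random.randint(0, 2, (64, 64))
for _ in range(10):
    grid = life_step(grid)   # update every cell from local state only
```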

6.
Decomposition of knowledge for concurrent processing
In some environments, it is more difficult for distributed systems to cooperate: some distributed systems are highly heterogeneous and might not readily cooperate. In order to alleviate these problems, we have developed an environment that preserves the autonomy of the local systems while enabling distributed processing. This is achieved by: modeling the different application systems into a central knowledge base (called a Metadatabase); providing each application system with a local knowledge processor; and distributing the knowledge within these local shells. This paper describes the knowledge decomposition process used for this distribution. The decomposition process is used to minimize the needed cooperation among the local knowledge processors, and is accomplished by “serializing” the rule execution process. A rule is decomposed into an ordered set of subrules, each of which is executed in sequence and located in a specific local knowledge processor. The goals of the decomposition algorithm are to minimize the number of subrules produced, hence reducing the time spent in communication, and to assure that the sequential execution of the subrules is “equivalent” to the execution of the original rule.
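A toy rendering of the decomposition idea, under our own assumptions (the rule, node names, and facts are invented, not the paper's): one rule over data owned by two sites becomes an ordered pair of subrules, each evaluated at the node holding the facts it references, and execution stops at the first failing subrule so later nodes need not be contacted.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Subrule:
    node: str                        # local knowledge processor that runs it
    test: Callable[[dict], bool]     # condition over that node's local facts

# Original rule: IF stock_low(part) AND supplier_ok(part) THEN reorder(part),
# serialized into two subrules, one per site that owns the referenced data.
subrules = [
    Subrule("inventory_node", lambda f: f["stock"] < f["reorder_point"]),
    Subrule("supplier_node",  lambda f: f["supplier_rating"] >= 3),
]

def fire(facts_by_node: Dict[str, dict]) -> bool:
    # Evaluate subrules in order; all() short-circuits at the first failure,
    # so later nodes are never contacted, minimizing communication.
    return all(s.test(facts_by_node[s.node]) for s in subrules)

facts = {"inventory_node": {"stock": 2, "reorder_point": 5},
         "supplier_node": {"supplier_rating": 4}}
print(fire(facts))   # True -> the action part (reorder) would be triggered
```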

7.
An approach to high-performance data-stream processing is proposed based on hardware solutions that use a field-programmable gate array. The described HDG hardware solution was successfully applied to video data streams. The computational capacity of the employed Xilinx Virtex5 family chip is sufficient for the real-time implementation of image analysis algorithms based on the mean-square deviation at rates of up to 100 frames per second.
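The mean-square-deviation computation at the heart of the described image analysis, sketched in NumPy for one pair of frames; on the FPGA this runs as a streaming pixel pipeline at up to 100 frames per second. The frame contents and the detection threshold are illustrative.

```python
import numpy as np

def msd(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    d = frame_a - frame_b
    return float(np.mean(d * d))

prev = np.random.rand(480, 640) * 255.0   # synthetic grayscale frame
curr = prev.copy()
curr[100:150, 200:260] += 80.0            # simulated change in the scene
print(msd(prev, curr) > 25.0)             # True: MSD exceeds the threshold
```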

8.
A framework for data-flow distributed processing is established through the definition of a data-flow model and a set of language constructs for concurrent programming. The proposed approach is based on the following characteristics: i) the exploitation of parallelism at the operation level leads to the efficient and natural exploitation of parallelism at the program level, and ii) parallelism, communication, nondeterminism and history sensitivity are primitive concepts. The aim of the defined data-flow constructs is to enhance the modularity and parallelism of programs. Two structuring levels are introduced, called «modules» and «frames», to permit both symmetric and asymmetric communication. Single assignment and guarded commands are employed inside modules. Examples of typical programming problems, including shared resource management, are given together with a short account of a distributed data-flow architecture able to support data-flow distributed processing efficiently.
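A toy Python encoding of the style, not the paper's notation: a "module" is activated by arrivals on its input stream, produces each output exactly once (single assignment), and communicates only through streams, here modelled as thread-safe queues.

```python
import queue
import threading

def module(fn, inp: queue.Queue, out: queue.Queue):
    # Activated by each arrival on the input stream; each output token is
    # written exactly once, then forwarded downstream.
    while True:
        x = inp.get()
        if x is None:          # end-of-stream token
            out.put(None)
            return
        out.put(fn(x))

src, mid, sink = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=module, args=(lambda x: x * x, src, mid)).start()
threading.Thread(target=module, args=(lambda x: x + 1, mid, sink)).start()

for v in (1, 2, 3, None):
    src.put(v)
while (r := sink.get()) is not None:
    print(r)                   # 2, 5, 10
```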

9.
Two major problems appear during the design of a framework. The first is synthesizing generic elements for a family of applications and connecting them to an integrated control flow. The second lies in designing a powerful, modular, reliable architecture that is easy to (re)use and understand. Including design patterns in the architecture of frameworks mitigates the second problem. Indeed, design patterns provide proven, flexible, well-engineered design solutions at a higher abstraction level than classes. Their associated documentation records information from experienced object-oriented designers about solutions to recurrent problems, the contexts in which the patterns are applicable, the forces involved, and the consequences of their use. This paper presents a number of the benefits of integrating design patterns in the development of an object-oriented framework for fuzzy logic control. It also reports on an object-oriented design for Fuzzy Knowledge Based Control (FKBC) that includes design patterns to facilitate the development, maintenance and documentation of the FKBC framework.
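As a miniature of the kind of pattern use the paper advocates, here is a Strategy-style family of membership functions behind one interface, so a fuzzy controller can swap fuzzification strategies without touching client code. The class and set names are our illustration, not the paper's design.

```python
from abc import ABC, abstractmethod

class Membership(ABC):
    @abstractmethod
    def grade(self, x: float) -> float: ...

class Triangular(Membership):
    def __init__(self, a: float, b: float, c: float):
        self.a, self.b, self.c = a, b, c
    def grade(self, x: float) -> float:
        if x <= self.a or x >= self.c:
            return 0.0
        return (x - self.a) / (self.b - self.a) if x <= self.b \
               else (self.c - x) / (self.c - self.b)

class Crisp(Membership):
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi
    def grade(self, x: float) -> float:
        return 1.0 if self.lo <= x <= self.hi else 0.0

# Client code depends only on the Membership interface; the concrete
# strategy can be swapped without changing the controller.
error_is_small: Membership = Triangular(-1.0, 0.0, 1.0)
print(error_is_small.grade(0.5))  # 0.5
```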

10.
In the ongoing discussion about combining rules and ontologies on the Semantic Web, a recurring issue is how to combine first-order classical logic with nonmonotonic rule languages. Whereas several modular approaches to defining a combined semantics for such hybrid knowledge bases focus mainly on decidability issues, we tackle the matter from a more general point of view. In this paper, we show how Quantified Equilibrium Logic (QEL) can function as a unified framework that embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics. In the proposed variant of QEL, we relax the unique names assumption, which was present in earlier versions of QEL. Moreover, we show that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way.
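To make the answer set semantics side of the combination concrete, here is a brute-force checker for a tiny ground propositional program, using the classical Gelfond-Lifschitz construction; QEL's quantified machinery and the relaxed unique-names assumption are not shown. The example program is ours.

```python
from itertools import combinations

# Rules as (head, positive_body, negative_body); e.g. p :- q, not r.
rules = [("p", {"q"}, {"r"}), ("q", set(), set())]
atoms = {"p", "q", "r"}

def least_model(defrules):
    # Least model of a negation-free program by naive fixpoint iteration.
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in defrules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_answer_set(x):
    # Gelfond-Lifschitz reduct: drop rules blocked by x, then drop "not".
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & x)]
    return least_model(reduct) == x

for k in range(len(atoms) + 1):
    for cand in combinations(sorted(atoms), k):
        if is_answer_set(set(cand)):
            print(set(cand))      # -> {'p', 'q'}
```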

11.
A Logical Framework for Knowledge Base Maintenance
The maintenance sequences of a knowledge base and their limits are introduced. Some concepts used in knowledge base maintenance, such as new laws, user's rejections, and reconstructions of a knowledge base, are defined, and the related theorems are proved. A procedure is defined using transition systems; it generates maintenance sequences for a given user's model and a knowledge base. It is proved that all sequences produced by the procedure are convergent, and that their limit is the set of true sentences of the model. Some computational aspects of reconstructions are studied. An R-calculus is given to deduce a reconstruction when a knowledge base meets a user's rejection. The work is compared with AGM's theory of belief revision.

12.
Knowledge graph (KG) embedding methods are at the basis of many KG-based data mining tasks, such as link prediction and node clustering. However, graphs may contain confidential information about people or organizations, which may be leaked via embeddings. Recent research has studied how to apply differential privacy to a number of graph (and KG) analyses, but embedding methods have not been considered so far. This study moves a step toward filling that gap by proposing the Differential Private Knowledge Graph Embedding (DPKGE) framework. DPKGE extends existing KG embedding methods (e.g., TransE, TransM, RESCAL, and DistMult) and processes KGs containing both confidential and unrestricted statements. The resulting embeddings protect the presence of any of the former statements in the embedding space using differential privacy. Our experiments identify the cases where DPKGE produces useful embeddings, by analyzing the training process and the tasks executed on top of the resulting embeddings.
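The sketch below shows the two ingredients DPKGE combines, not DPKGE itself: the TransE plausibility score ||h + r - t|| that the extended embedding methods optimize, and a DP-SGD-style update in which the per-step gradient is clipped and noised so that a single confidential triple has bounded influence on the embeddings. The noise scale, clip bound, and learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, clip, sigma, lr = 16, 1.0, 0.5, 0.01
h, r, t = rng.normal(size=(3, dim))        # head, relation, tail embeddings

def transe_score(h, r, t):
    return np.linalg.norm(h + r - t)       # lower = more plausible triple

# One DP-SGD-style step on h for a confidential positive triple:
g = (h + r - t) / max(transe_score(h, r, t), 1e-9)   # gradient of the norm
g = g / max(1.0, np.linalg.norm(g) / clip)           # clip L2 norm to <= clip
g = g + rng.normal(scale=sigma * clip, size=dim)     # add calibrated noise
h = h - lr * g                                       # noisy descent step
print(transe_score(h, r, t))
```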

13.
14.
Myriad frameworks have been developed for knowledge management. However, the field has been slow to formulate a generally accepted, comprehensive framework for knowledge management. This paper reviews the existing knowledge management frameworks and provides suggestions for what a general framework should include. The distinguishing feature of this research is that it emphasizes placing knowledge management in the larger context of systems thinking, so that the factors influencing its success or failure can be better recognized and understood.

15.
This paper presents AIBench (SING group, Ourense, Spain), a Java desktop application framework mainly focused on scientific software development, with the goal of improving the productivity of research groups. Following the MVC design pattern, the programmer is able to develop applications using only three types of concepts: operations, data-types and views. The framework provides the rest of the functionality present in typical scientific applications, including user parameter requests, logging facilities, multithreading execution, experiment repeatability and graphic user interface generation, among others. The proposed framework is implemented following a plugin-based architecture, which also allows assembling new applications by reusing modules from past development projects.
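AIBench itself is Java and plugin-based; purely to illustrate the operation/data-type/view split, here is the triple rendered as a few lines of Python with invented names.

```python
from dataclasses import dataclass, field

@dataclass
class Series:                              # a "data-type": a plain result object
    values: list = field(default_factory=list)

def normalize(s: Series) -> Series:        # an "operation": pure input -> output
    peak = max(s.values) or 1.0
    return Series([v / peak for v in s.values])

def table_view(s: Series) -> str:          # a "view": renders a data-type
    return "\n".join(f"{i}\t{v:.2f}" for i, v in enumerate(s.values))

print(table_view(normalize(Series([1.0, 2.0, 4.0]))))
```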

16.
Effective high-level data management is becoming an important issue with more and more scientific applications manipulating huge amounts of secondary-storage and tertiary-storage data using parallel processors. A major problem facing the current solutions to this data management problem is that these solutions either require a deep understanding of specific data storage architectures and file layouts to obtain the best performance (as in high-performance storage management systems and parallel file systems), or they sacrifice significant performance in exchange for ease-of-use and portability (as in traditional database management systems). We discuss the design, implementation, and evaluation of a novel application development environment for scientific computations. This environment includes a number of components that make it easy for the programmers to code and run their applications without much programming effort and, at the same time, to harness the available computational and storage power on parallel architectures.

17.
18.
Reconfigurable computing (RC) applications employing both microprocessors and FPGAs have potential for large speedup when compared with traditional (software) parallel applications. However, this potential is marred by the additional complexity of these dual-paradigm systems, making it difficult to identify performance bottlenecks and achieve desired performance. Performance analysis concepts and tools are well researched and widely available for traditional parallel applications but are lacking in RC, despite being of great importance due to the applications' increased complexity. In this paper, we explore challenges and present new techniques in automated instrumentation, runtime measurement, and visualization of RC application behavior. We also present ideas for integration with conventional performance analysis tools to create a unified tool for RC applications as well as our initial framework for FPGA instrumentation and measurement. Results from a case study are provided using a prototype of this new tool.
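The software half of the instrumentation idea, in miniature: wrap each call that crosses the CPU/FPGA boundary with a timer so the time split between the two paradigms becomes visible. The decorator and the stub kernel are our illustration; the paper's FPGA-side measurement uses hardware probes rather than anything shown here.

```python
import time
from functools import wraps

def instrument(fn):
    # Record wall-clock time around every call to fn and report it.
    @wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            dt = time.perf_counter() - t0
            print(f"{fn.__name__}: {dt * 1e3:.3f} ms")
    return wrapper

@instrument
def fpga_kernel_stub(n: int) -> int:   # hypothetical offloaded computation
    return sum(i * i for i in range(n))

fpga_kernel_stub(100_000)
```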

19.
Motivated by research on the activity patterns of the Sun-Earth system, and building on CCA (Common Component Architecture), a component specification recently proposed in the United States for large-scale scientific computing, this paper designs DCHF-SI, a distributed collaborative high-performance computing framework for solar-terrestrial space information. It integrates services for component-based packaging of physical models, construction and management of simulation applications, model interoperability, distributed fault tolerance, and visualization-based computational steering. The framework can harness large numbers of distributed high-performance computing resources and space physics model resources over the network to assemble loosely coupled multi-physics simulation applications, supports distributed collaborative high-performance computing over solar-terrestrial space information, and addresses the complexity of multi-physics coupled simulation, ultimately underpinning a space weather forecasting service system.

20.
Online aggregation provides estimates of the final result of a computation during the actual processing. The user can stop the computation as soon as the estimate is accurate enough, typically early in the execution. This allows for interactive exploration of the largest datasets. In this paper we introduce the first framework for parallel online aggregation in which estimation incurs virtually no overhead on top of the actual execution. We define a generic interface to express any estimation model that completely abstracts the execution details. We design a novel estimator specifically targeted at parallel online aggregation. When executed by the framework over a massive 8 TB TPC-H instance, the estimator provides accurate confidence bounds early in the execution, even when the cardinality of the final result is seven orders of magnitude smaller than the dataset size, and without incurring overhead.
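For intuition, here is the textbook single-node estimator behind online aggregation, not the paper's parallel one: after processing a random k of N tuples, scale the running mean by N to estimate the SUM and attach a CLT confidence interval; the interval shrinks as k grows, which is what lets a user stop early. The synthetic column is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(10.0, size=1_000_000)  # stand-in for a table column
N, true_sum = len(data), data.sum()
stream = rng.permutation(data)                # tuples arrive in random order

for k in (1_000, 10_000, 100_000):
    seen = stream[:k]
    est = N * seen.mean()                             # estimated SUM
    half = 1.96 * N * seen.std(ddof=1) / np.sqrt(k)   # 95% CI half-width
    print(f"k={k:>7,}: {est:,.0f} +/- {half:,.0f} (true {true_sum:,.0f})")
```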
