Similar Literature
20 similar documents found (search time: 15 ms)
1.
Investments in cyberinfrastructure and e-Science initiatives are motivated by the desire to accelerate scientific discovery. Always viewed as a foundation of science, data sharing is appropriately seen as critical to the success of such initiatives, but new technologies supporting increasingly data-intensive and collaborative science raise significant challenges and opportunities. Overcoming the technical and social challenges to broader data sharing is a common and important research objective, but increasing the supply and accessibility of scientific data is no guarantee that the data will be applied by scientists. Before reusing data created by others, scientists need to assess the data's relevance, gain confidence that the data can be understood, and trust the data. Using interview data from earthquake engineering researchers affiliated with the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES), we examine how these scientists assess the reusability of colleagues' experimental data for model validation.

2.
Collaboration within eScience teams depends on participants learning each other's disciplinary perspectives sufficiently to generate cross-disciplinary research questions of interest. Participants in new teams often have a limited understanding of each other's research interests; hence early team interactions must revolve around exploratory cross-disciplinary learning and the search for interesting linkages between disciplines. This article investigates group learning and creative processes that impact the efficacy of early team interactions, and the impact of those interactions on the generation of integrated conceptual frameworks from which co-created research problems may emerge. Relevant learning and creativity theories were used to design a management intervention that was applied within the context of an incipient eScience team. Project evaluation indicated that the intervention enabled participants to effectively cross disciplines, integrate conceptualizations, and generate research ideas. The findings suggest that attention to group learning and creativity issues may help overcome some barriers to collaboration on eScience teams.

3.
Commonality and variability in software engineering   (total citations: 1; self-citations: 0; cited by others: 1)
The article describes how to perform domain engineering by identifying the commonalities and variabilities within a family of products. Through interesting examples dealing with reuse libraries, design patterns, and programming language design, the authors propose a systematic scope, commonality, and variability (SCV) approach to formal analysis. Their SCV analysis has been an integral part of the FAST (Family-oriented Abstraction, Specification, and Translation) technology applied to over 25 domains at Lucent Technologies.
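A minimal sketch of what an SCV-style record might look like in code; the family, commonalities, and parameters of variation below are invented for illustration and are not taken from the FAST work itself.

```python
# Hypothetical SCV (scope, commonality, variability) analysis record.
from dataclasses import dataclass, field


@dataclass
class Variability:
    name: str               # what may differ between family members
    parameter: str          # the "parameter of variation" that captures it
    domain: list            # legal values the parameter may take
    default: object = None  # binding used when a member does not specify one


@dataclass
class SCVAnalysis:
    scope: str                                      # which systems belong to the family
    commonalities: list = field(default_factory=list)
    variabilities: list = field(default_factory=list)


# Example: an invented family of in-process logging components.
loggers = SCVAnalysis(
    scope="In-process loggers for services written in one programming language",
    commonalities=[
        "Every member accepts a message and a severity level",
        "Every member timestamps each message",
    ],
    variabilities=[
        Variability("output target", "sink", ["console", "file", "syslog"], "console"),
        Variability("minimum severity", "threshold", ["debug", "info", "warning", "error"], "info"),
    ],
)

for v in loggers.variabilities:
    print(f"{v.name}: parameter {v.parameter!r} over {v.domain}, default {v.default!r}")
```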

4.
Faceted browsing has become ubiquitous in modern digital libraries and online search engines, yet the process is still difficult to abstractly model in a manner that supports the development of interoperable and reusable interfaces. We propose category theory as a theoretical foundation for faceted browsing and demonstrate how the interactive process can be mathematically abstracted. Existing efforts in facet modeling are based upon set theory, formal concept analysis, and lightweight ontologies, but in many regards they are implementations of faceted browsing rather than a specification of the basic, underlying structures and interactions. We demonstrate that category theory allows us to specify faceted objects and study the relationships and interactions within a faceted browsing system. Resulting implementations can then be constructed through a category-theoretic lens using these models, allowing abstract comparison and communication that naturally support interoperability and reuse.
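A minimal sketch, not the authors' category-theoretic formalism: it only illustrates the flavor of the idea by treating facet refinements as composable maps on a collection of items, so that applying refinements in sequence is function composition and the empty refinement is the identity. The item data are invented.

```python
from functools import reduce


def refine(facet, value):
    """Return a refinement: a map from a collection of items to the sub-collection matching value."""
    return lambda items: [item for item in items if item.get(facet) == value]


def compose(*refinements):
    """Compose refinements left to right; with no arguments this is the identity map."""
    return lambda items: reduce(lambda acc, r: r(acc), refinements, items)


catalog = [
    {"title": "Paper A", "format": "article", "year": 2010},
    {"title": "Paper B", "format": "article", "year": 2012},
    {"title": "Dataset C", "format": "dataset", "year": 2012},
]

browse = compose(refine("format", "article"), refine("year", 2012))
print([item["title"] for item in browse(catalog)])   # ['Paper B']
```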

5.
The unprecedented growth of Internet technologies has made resources on the World Wide Web instantly accessible to various user communities through digital libraries. Since the early 1990s, there have been several digital library initiatives sponsored by government agencies and/or private organizations all over the world. A digital library is a networked system environment that provides diverse user communities with coherent, seamless and transparent access to large, organized, and digitized information resources. This article provides a comprehensive overview of major digital library projects that are currently being undertaken across the globe. We also identify and discuss major challenges and research issues to be addressed in the design and implementation of digital libraries for the next millennium. We believe that digital libraries are ripe with research opportunities, offer many challenges, and will continue to grow in the next several years.

6.
Information & Management, 2002, 39(4): 255-260
This paper presents the results of my action research. I was involved in establishing and running a digital library founded by the government of South Korea. The process involved understanding the relationship between the national IT infrastructure and the success factors of the digital library. In building the national IT infrastructure, a digital library system was implemented; it combines all existing digitized university libraries and can provide overseas information, such as foreign journal articles, instantly and freely to every Korean researcher. An empirical survey was conducted as part of the action research; the survey determined user satisfaction with the newly established national digital library. After obtaining the survey results, I suggested that the current way of running the nationwide government-owned digital library should be retained.

7.
A digital library (DL) consists of a database that contains library information and a user interface that provides a visual window for users to search relevant information stored in the database. Thus, an abstract structure of a digital library can be defined as a combination of a special-purpose database and a user-friendly interface. This paper addresses one of the fundamental aspects of such a combination: the formal data structure for linking an object-oriented database with hypermedia to support digital libraries. It is important to establish a formal structure for a digital library in order to efficiently maintain different types of library information. This article discusses how to build an object-oriented hybrid system to support digital libraries. In particular, we focus on a general-purpose data model for digital libraries and the design of the corresponding hypermedia interface. The significant features of this research are, first, a formalized data model to define a digital library system structure; second, a practical approach to manage the global schema of a library system; and finally, a design strategy to integrate hypermedia with databases to support a wide range of application areas.
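A minimal sketch of the kind of structure described, not the paper's actual data model: library objects carry typed attributes plus hypermedia links to other objects, so a hypermedia interface can navigate the database. All class and field names are invented.

```python
from dataclasses import dataclass, field


@dataclass
class Link:
    label: str          # anchor text shown in the hypermedia interface
    target_id: str      # identifier of the linked library object


@dataclass
class LibraryObject:
    object_id: str
    object_type: str                              # e.g. "book", "article", "image"
    attributes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)


class DigitalLibrary:
    """A toy object store plus hypermedia navigation over stored objects."""

    def __init__(self):
        self._objects = {}

    def add(self, obj):
        self._objects[obj.object_id] = obj

    def get(self, object_id):
        return self._objects.get(object_id)

    def follow(self, obj, label):
        """Navigate a hypermedia link by its label; return the target object."""
        for link in obj.links:
            if link.label == label:
                return self.get(link.target_id)
        return None


dl = DigitalLibrary()
dl.add(LibraryObject("b1", "book", {"title": "Digital Libraries"}))
dl.add(LibraryObject("a1", "article", {"title": "DL Interfaces"},
                     [Link("published in", "b1")]))
print(dl.follow(dl.get("a1"), "published in").attributes["title"])   # Digital Libraries
```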

8.
In the context of collaborative eScience, digital libraries are one of many distributed, interoperable resources available to scientists that facilitate both human and machine collaboration: machine collaboration in the form of standards such as the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), and human collaboration in the form of collaborative workspaces. This paper describes a set of collaborative workspaces created at the Los Alamos National Laboratory Research Library, initial patterns of use, and additional user requirements identified from those patterns.
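A minimal OAI-PMH harvesting sketch using only the Python standard library. The verbs and parameters (ListRecords, metadataPrefix, resumptionToken) are part of the OAI-PMH protocol; the repository URL in the usage comment is a placeholder, not an endpoint from the paper.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"


def list_records(base_url, metadata_prefix="oai_dc"):
    """Yield raw <record> elements, following resumption tokens if present."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        for record in root.iter(OAI + "record"):
            yield record
        token = root.find(f"{OAI}ListRecords/{OAI}resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # When resuming, OAI-PMH requires only the verb and the token.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}


# Example (placeholder endpoint):
# for rec in list_records("https://example.org/oai"):
#     print(rec.find(f"{OAI}header/{OAI}identifier").text)
```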

9.
To solve today's ecological problems, scientists need well-documented, validated, and coherent data archives. Historically, however, ecologists have collected and stored data idiosyncratically, making data integration difficult even among close collaborators. Further, effective ecological data warehouses and subsequent data mining require that individual databases be accurately described with metadata against which the data themselves have been validated. Using database technology would make documenting data sets for archiving, integration, and data mining easier, but few ecologists have the expertise to use database technology, and they cannot afford to hire programmers. In this paper, we identify the benefits that would accrue from ecologists' use of modern information technology and the obstacles that prevent that use. We describe our prototype, the Canopy DataBank, through which we aim to enable individual ecologists in the forest canopy research community to be their own database programmers. The key feature that makes this possible is domain-specific database components, which we call templates. We also show how additional tools that reuse these components, such as for visualization, could provide gains in productivity and motivate the use of new technology. Finally, we suggest ways in which communities might share database components and how components might be used to foster easier data integration to solve new ecological problems.
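A minimal sketch of the "template" idea, assuming a template is essentially a reusable, domain-specific field specification that both documents a data set and validates rows against it. The field names and units below are invented examples, not the Canopy DataBank's actual templates.

```python
TREE_CROWN_TEMPLATE = {
    "tree_id":       {"type": str,   "required": True},
    "species":       {"type": str,   "required": True},
    "crown_depth_m": {"type": float, "required": False, "units": "m"},
    "measured_on":   {"type": str,   "required": True,  "units": "ISO 8601 date"},
}


def validate_row(row, template):
    """Return a list of problems found when checking a row against a template."""
    problems = []
    for name, spec in template.items():
        if name not in row:
            if spec["required"]:
                problems.append(f"missing required field {name!r}")
            continue
        if not isinstance(row[name], spec["type"]):
            problems.append(f"{name!r} should be {spec['type'].__name__}")
    for name in row:
        if name not in template:
            problems.append(f"unexpected field {name!r}")
    return problems


print(validate_row({"tree_id": "T-12", "species": "Pseudotsuga menziesii",
                    "measured_on": "1999-07-04"}, TREE_CROWN_TEMPLATE))  # []
```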

10.
Recently, an initiative within the hydrologic science and environmental engineering communities has emerged for the establishment of cooperative, large-scale environmental observatories. Scientists' ability to access and use data collected within observatories to address broad research questions depends on the successful implementation of cyberinfrastructure. In this paper, we describe the architecture and functional requirements for an environmental observatory information system that supports collection, organization, storage, analysis, and publication of hydrologic observations. We then describe a unique system that has been developed to meet these requirements and that has been implemented within the Little Bear River, Utah, environmental observatory test bed, as well as across a nationwide network of 11 similar observatory test bed sites. The components demonstrated comprise an observatory information system that enables not only the management, analysis, and synthesis of environmental observations data for a single observatory, but also publication of the data on the Internet in simple-to-use formats that are easily accessible, discoverable by others, and interoperable with data from other observatories.
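A minimal sketch, not the observatory's actual schema: a point observation keyed by site, variable, and time, with units kept alongside the value so the data remain interpretable when published or combined across observatories. The site and variable codes are invented placeholders.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class Observation:
    site_code: str        # station identifier
    variable_code: str    # what was measured, e.g. "water_temperature"
    timestamp: datetime   # when the value applies (store in UTC)
    value: float
    units: str            # e.g. "degC", "mg/L"
    qualifier: str = ""   # optional quality-control flag


def to_time_series(observations, site_code, variable_code):
    """Extract one (time, value) series for a site/variable pair, sorted by time."""
    series = [(o.timestamp, o.value) for o in observations
              if o.site_code == site_code and o.variable_code == variable_code]
    return sorted(series)


obs = [
    Observation("SITE-01", "water_temperature", datetime(2007, 7, 1, 12, 0), 14.2, "degC"),
    Observation("SITE-01", "water_temperature", datetime(2007, 7, 1, 12, 30), 14.6, "degC"),
]
print(to_time_series(obs, "SITE-01", "water_temperature"))
```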

11.
The paper describes a formal framework to integrate EXPRESS models and facilitate the sharing and exchange of data in an Extended Enterprise context. We perceive in the creation of an Extended Enterprise an opportunity to use standardized data models. Hence our research is based on three important ISO standards whose primary objective is to enhance data exchange. These standards are ISO 10303, ISO 15531, and ISO 13584, known as STEP, MANDATE, and PLIB, respectively. Although they are intended to overcome incompatibility problems for the computer-based applications used during the product life cycle, they turned out to be semantically incompatible among themselves. This seems to be a major drawback when individual organizations want to share core competences, such as resources, manufacturing processes, or product design, to create an Extended Enterprise. The constructs we propose harmonize incompatible model components so that core competences can be transparent to the network of enterprises. The proposal is exemplified by creating a mediator application and a repository. The mediator application is used by individual firms to gain access to the core abilities that are shared, whereas the repository is a neutral means to store such competences; it complies with Part 21 of the ISO 10303 standard. The proposed formal framework provides a sound model of the information system and facilitates data sharing in the Extended Enterprise.
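A minimal sketch of writing an ISO 10303-21 ("Part 21") exchange structure, the neutral format mentioned for the repository. The HEADER entities (FILE_DESCRIPTION, FILE_NAME, FILE_SCHEMA) come from Part 21; the schema name and the single DATA entity below are placeholders, not a real mapping to STEP, MANDATE, or PLIB.

```python
from datetime import datetime, timezone


def write_part21(path, schema_name, data_lines, author="", organization=""):
    """Write a minimal Part 21 exchange file with the given DATA section lines."""
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
    header = [
        "ISO-10303-21;",
        "HEADER;",
        "FILE_DESCRIPTION(('shared core competence data'),'2;1');",
        f"FILE_NAME('{path}','{timestamp}',('{author}'),('{organization}'),'','','');",
        f"FILE_SCHEMA(('{schema_name}'));",
        "ENDSEC;",
        "DATA;",
    ]
    footer = ["ENDSEC;", "END-ISO-10303-21;"]
    with open(path, "w") as f:
        f.write("\n".join(header + list(data_lines) + footer) + "\n")


# Placeholder entity instance; a real repository would emit instances of the
# relevant standardized schemas instead.
write_part21("competence.stp", "EXAMPLE_SCHEMA",
             ["#1=PRODUCT('P-001','milling fixture','',());"])
```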

12.
Digital libraries (DLs) have eluded definitional consensus and lack agreement on common theories and frameworks. This makes comparison of DLs extremely difficult, promotes ad hoc development, and impedes interoperability. In this paper we propose a formal ontology for DLs that defines the fundamental concepts, relationships, and axiomatic rules that govern the DL domain, thereby providing a frame of reference for the discussion of essential concepts of DL design and construction. The ontology is an axiomatic, formal treatment of DLs, which distinguishes it from other approaches that informally define a number of architectural variants. The construction of the ontology was guided by 5S, a formal framework for digital libraries. To test its expressiveness, we have used the ontology to create a taxonomy of DL services and to reason about issues of reusability, extensibility, and composability. Some practical applications of the ontology are also described, including the definition of a digital library services taxonomy, the proposal of a modeling language for digital libraries, and the specification of quality metrics to evaluate digital libraries. We also demonstrate how to use the ontology to formally describe DL architectures and to prove some properties about them, thus helping to further validate the ontology.

13.
14.
I want increased confidence in my programs. I want my own and other people's programs to be more readable. I want a new discipline of programming that augments my thought processes. Therefore, I create and explore a new discipline of programming in my BabyUML laboratory. I select, simplify, and twist UML and other languages to demonstrate how they help bridge the gap between me as a programmer and the objects running in my computer. The focus is on the run-time objects: their structure, their interaction, and their individual behaviors. Trygve Reenskaug is professor emeritus of informatics at the University of Oslo. He has 40 years of experience in software engineering research and the development of industrial-strength software products. He has extensive teaching and speaking experience, including keynotes, talks, and tutorials. His firsts include the Autokon system for computer-aided design of ships with an end-user programming language, structured programming, and a database-oriented architecture, from 1960; object-oriented applications and role (collaboration) modeling, from 1973; Model-View-Controller, the world's first reusable object-oriented framework, from 1979; and the OOram role modeling method and tool, from 1983. Trygve was a member of the UML Core Team and was a contributor to UML 1.4. The goal of his current research is to create a new, high-level discipline of programming that lets us reclaim the mastery of software.

15.
Advances in multimedia models and tools have popularized the access to and production of multimedia content: in this new scenario, there is no longer a clear distinction between authors and end-users of a production. These user-authors often work in a collaborative way. As end-users, they collectively participate in interactive environments, consuming multimedia artifacts. In their authors' role, instead of starting from scratch, they often reuse others' productions, which can be decomposed, fused, and transformed to meet their goals. Since the need for sharing and adapting productions is felt by many communities, there has been a proliferation of standards and mechanisms to exchange complex digital objects for distinct application domains. However, these initiatives have created another level of complexity, since people have to decide which share/reuse solution they want to adopt, and may even have to resort to programming tasks. They also lack effective strategies to combine these reused artifacts. This paper presents a solution to this demand, based on a user-author-centered multimedia building block model: the digital content component (DCC). DCCs upgrade the notion of digital objects to digital components, as they homogeneously wrap any kind of digital content (e.g., multimedia artifacts, software) inside a single component abstraction. The model is fully supported by a software infrastructure, which exploits the model's semantic power to automate low-level technical activities, thereby freeing user-authors to concentrate on creative tasks. Model and infrastructure improve recent research initiatives to standardize the means of sharing and reusing domain-specific digital contents. The paper's contributions are illustrated using examples implemented in a DCC-based authoring tool, in real-life situations.
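A minimal sketch of the "single component abstraction" idea: any digital content (a media file, a data set, or a piece of software) is wrapped as a component that declares metadata and the interfaces through which it can be composed. The class and field names are invented, not the DCC model's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class ContentComponent:
    component_id: str
    kind: str                                       # e.g. "image", "audio", "software"
    metadata: dict = field(default_factory=dict)
    interfaces: dict = field(default_factory=dict)  # name -> callable or resource

    def provides(self, interface_name):
        return interface_name in self.interfaces

    def invoke(self, interface_name, *args, **kwargs):
        return self.interfaces[interface_name](*args, **kwargs)


# Composing two components: a software component transforms a media component.
image = ContentComponent("img-1", "image", {"format": "png"},
                         {"bytes": lambda: b"\x89PNG..."})
thumbnailer = ContentComponent("tool-1", "software", {},
                               {"thumbnail": lambda data: data[:8]})

if image.provides("bytes") and thumbnailer.provides("thumbnail"):
    print(thumbnailer.invoke("thumbnail", image.invoke("bytes")))
```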

16.
Computer scientists who work on tools and systems to support eScience applications (a variety of parallel and distributed applications) usually use actual applications to prove that their systems will benefit science and engineering (e.g., improve application performance). Accessing and building the applications and necessary data sets can be difficult because of policy or technical issues, and it can be difficult to modify the characteristics of the applications to understand corner cases in the system design. In this paper, we present the Application Skeleton, a simple yet powerful tool to build synthetic applications that represent real applications, with runtime and I/O close to those of the real applications. This allows computer scientists to focus on the system they are building; they can work with the simpler skeleton applications and be sure that their work will also be applicable to the real applications. In addition, skeleton applications support simple, reproducible system experiments since they are represented by a compact set of parameters.

Our Application Skeleton tool (available as open source at https://github.com/applicationskeleton/Skeleton) can currently create easy-to-access, easy-to-build, and easy-to-run bag-of-tasks, (iterative) map-reduce, and (iterative) multistage workflow applications. The tasks can be serial, parallel, or a mix of both. The parameters that represent the tasks can be discovered either through manual profiling of the applications or through an automated method. We select three representative applications (Montage, BLAST, CyberShake Postprocessing), then describe and generate skeleton applications for each. We show that the skeleton applications have identical (or close) performance to that of the real applications. We then show examples of using skeleton applications to verify system optimizations such as data caching, I/O tuning, and task scheduling, as well as the system resilience mechanism, in some cases modifying the skeleton applications to emphasize some characteristic, and thus show that using skeleton applications simplifies the process of designing, implementing, and testing these optimizations.
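A minimal sketch of the skeleton-application idea, not the Application Skeleton tool's actual input format or API: each synthetic task is described only by its runtime and output size, and running it simply burns that much time and writes that many bytes, which is often enough to exercise schedulers and I/O systems the way a real application would.

```python
import os
import time


def run_synthetic_task(task_id, runtime_s, output_bytes, out_dir="skeleton_out"):
    """Stand-in for a real task: sleep for the runtime, then write the output."""
    os.makedirs(out_dir, exist_ok=True)
    time.sleep(runtime_s)                                # stand-in for computation
    path = os.path.join(out_dir, f"task_{task_id}.dat")
    with open(path, "wb") as f:
        f.write(os.urandom(min(output_bytes, 1 << 20)))  # cap size for the example
    return path


# A bag of tasks whose parameters might come from profiling a real application.
bag = [{"task_id": i, "runtime_s": 0.1, "output_bytes": 4096} for i in range(3)]
for task in bag:
    print(run_synthetic_task(**task))
```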

17.
18.
Problems with the portability of applications across various Linux distributions are one of the major sore spots for independent software vendors (ISVs) wishing to support the Linux platform in their products. The source of the problem is that different distributions ship different sets of system libraries that vary in the interfaces (APIs) they provide. Critical questions therefore arise for ISVs, such as "which distributions will my application run on?" and "what can I do to make my application run on a greater number of distributions?". This article describes an industry-wide approach to mitigating the problem of Linux platform fragmentation through the standardization of common interfaces: the Linux Standard Base (LSB) standard, the leading effort toward a "single Linux specification". The article shows how extending this approach with a knowledge base about the composition of real-world Linux distributions can enable automatic portability analysis for Linux applications even if they use interfaces outside the scope of the standard. The knowledge-base-powered Linux Application Checker tool is described, which can help answer the above questions by automatically analyzing the target application and comparing collected data about its external dependencies with what various distributions provide. Additionally, Linux Application Checker is an official tool approved by the Linux Foundation for certifying applications for compliance with the LSB standard.
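A hedged sketch of the underlying idea, not the Linux Application Checker itself: list the shared libraries an ELF binary declares as NEEDED via readelf, then compare them against the set of libraries a given distribution is known to ship. The distribution inventories here are toy placeholders; the real tool relies on a knowledge base of actual distributions and checks interfaces, not just library names.

```python
import re
import subprocess


def needed_libraries(binary_path):
    """Return the DT_NEEDED shared-library names recorded in an ELF binary."""
    output = subprocess.run(["readelf", "-d", binary_path],
                            capture_output=True, text=True, check=True).stdout
    return set(re.findall(r"\(NEEDED\)\s+Shared library: \[([^\]]+)\]", output))


def portability_report(binary_path, distro_inventories):
    """Map each distribution name to the libraries the binary needs but it lacks."""
    needed = needed_libraries(binary_path)
    return {distro: sorted(needed - provided)
            for distro, provided in distro_inventories.items()}


# Toy inventories (placeholders, not real distribution contents).
inventories = {
    "distro-a": {"libc.so.6", "libm.so.6", "libpthread.so.0"},
    "distro-b": {"libc.so.6", "libm.so.6"},
}
# print(portability_report("/usr/bin/ls", inventories))
```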

19.
Kiem-Phong Vo. Software, 2000, 30(2): 107-128
Over the past few years, my colleagues and I have written a number of software libraries for fundamental computing tasks, including I/O, memory allocation, container data types, and sorting. These libraries have proved to be good software building blocks and are used widely by programmers around the world. This success is due in part to a library architecture that employs two main interface mechanisms: disciplines to define resource requirements, and methods to parameterize resource management. Libraries built this way are called discipline and method libraries.
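A sketch of the discipline-and-method split in Python rather than C, so it is not Vo's actual library interface: the caller supplies a "discipline" stating resource requirements and policies, while a "method" encapsulates the resource-management strategy the library uses internally.

```python
class Discipline:
    """Caller-supplied requirements: how to compare keys and how to report errors."""
    def __init__(self, key=lambda x: x, on_error=print):
        self.key = key
        self.on_error = on_error


class SortedMethod:
    """One management strategy: keep items sorted on every insert."""
    def insert(self, items, item, disc):
        items.append(item)
        items.sort(key=disc.key)


class UnorderedMethod:
    """Another strategy: append only; ordering is left to the caller."""
    def insert(self, items, item, disc):
        items.append(item)


class Container:
    """A container parameterized by a discipline and a method."""
    def __init__(self, discipline, method):
        self.disc, self.method, self.items = discipline, method, []

    def insert(self, item):
        try:
            self.method.insert(self.items, item, self.disc)
        except Exception as exc:                 # the discipline decides how to report
            self.disc.on_error(f"insert failed: {exc}")


c = Container(Discipline(key=str.lower), SortedMethod())
for word in ["Banana", "apple", "Cherry"]:
    c.insert(word)
print(c.items)   # ['apple', 'Banana', 'Cherry']
```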

20.
Making Tea (MT) is a design elicitation method developed in eScience specifically to deal with situations in which (1) the designers do not share domain or artifact knowledge with design-domain experts, (2) the processes in the space are semi-structured, and (3) the processes to be modeled can last for periods exceeding the availability of most ethnographers. We have used the method in two distinct eScience contexts, and it may offer an effective, low-cost way to bridge between software design teams and scientists to develop useful and usable eScience artifacts. To that end, we propose a set of criteria in order to understand why MT works. Through these criteria we also reflect upon the relation of MT to other design elicitation methods, in order to propose a kind of method framework from which other designers may be assisted in choosing elicitation methods and in developing new methods, both for eScience contexts and beyond.
