Similar documents
20 similar documents were found (search time: 15 ms).
1.
2.
If empirical software engineering is to grow as a valid scientific endeavor, the ability to acquire, use, share, and compare data collected from a variety of sources must be encouraged. This is necessary to validate the formal models being developed within computer science. However, within the empirical software engineering community this has not been easily accomplished. This paper analyses experiences from a number of projects and identifies the key issues, which include the following: (1) How should data, testbeds, and artifacts be shared? (2) What limits should be placed on who can use them and how? How does one limit potential misuse? (3) What is the appropriate way to credit the organization and individual that spent the effort collecting the data, developing the testbed, and building the artifact? (4) Once shared, who owns the evolved asset? As a solution to these issues, the paper proposes a framework for an empirical software engineering artifact agreement. Such an agreement is intended to address the needs of both the creator and the user of such artifacts and should foster a market for making them available and using them. If this framework for sharing software engineering artifacts is commonly accepted, it should encourage artifact owners to make their artifacts accessible to others, since gaining credit becomes more likely and misuse less likely. It should also become easier for other researchers to request artifacts, since there will be a well-defined protocol for handling the relevant matters.

3.
Analogy is proposed as an alternative paradigm for the reuse of specifications during requirements analysis. First, critical determinants of analogies between software engineering problems are discussed in relation to a specification retrieval mechanism. Second, the process of specification reuse is examined. Specification reuse by analogy is knowledge-intensive, hence an important role is proposed for the analyst during specification reuse: analyst involvement appears necessary to categorize a new problem, select between candidate reusable specifications, and customize the selected specification to the new domain. Finally, a specification reuse tool is proposed that recognises the collaborative nature of reuse by analogy. This tool assists and advises the analyst during reuse, founded on cognitive models of analyst behaviour during analogical reasoning and reuse. The prototype version of this intelligent reuse advisor (Ira) is outlined.
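To make the retrieval step concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of how candidate specifications might be ranked for analyst review, assuming each stored specification is indexed by a set of domain-level descriptors; the library contents and the Jaccard scoring are illustrative assumptions.

```python
# Hypothetical sketch: rank stored specifications by descriptor overlap so an
# analyst can choose and customize a candidate. Descriptor sets are assumed.

def rank_candidate_specs(problem_descriptors, spec_library, top_k=3):
    """Return the top_k most analogous specifications by Jaccard similarity."""
    problem = set(problem_descriptors)
    scored = []
    for name, descriptors in spec_library.items():
        spec = set(descriptors)
        union = problem | spec
        score = len(problem & spec) / len(union) if union else 0.0
        scored.append((score, name))
    scored.sort(reverse=True)
    return scored[:top_k]

if __name__ == "__main__":
    library = {
        "library_loans": {"resource", "borrower", "loan", "return", "fine"},
        "car_rental": {"resource", "customer", "rental", "return", "charge"},
        "payroll": {"employee", "salary", "payment", "deduction"},
    }
    new_problem = {"resource", "member", "loan", "return"}
    for score, name in rank_candidate_specs(new_problem, library):
        print(f"{name}: {score:.2f}")  # analyst selects and customizes the best match
```

In line with the abstract, the analyst, not the tool, would make the final selection and customize the chosen specification to the new domain.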

4.
The present globalized market is forcing many companies to invest in new strategies and tools for supporting knowledge management. This is becoming a key factor in industrial competitiveness because of the rise of extended enterprises, which routinely deal with large-scale data exchange and sharing processes. This scenario is due to partners, geographically distributed across the globe, that participate in different steps of the product lifecycle (product development, maintenance and recycling). At present, Product Lifecycle Management (PLM) seems to be the appropriate solution to support enterprises in this complex scenario, even though no truly standardized approach for implementing knowledge sharing and management tools exists today. For this reason, the aim of this paper is to develop an operative knowledge management methodology able to support the formalization and reuse of the enterprise expertise acquired while working on previous products. By focusing on consumer packaged goods enterprises and on the concept development phase (one of the most knowledge-intensive phases of the whole product lifecycle), this research work develops a new systematic methodology to support knowledge codification and knowledge management operations. The new methodology integrates Quality Function Deployment (QFD) and the Teoriya Resheniya Izobretatelskikh Zadach (TRIZ). A case study on the problem of waste disposal has also been conducted to validate the proposed methodology.

5.
Personalized recommendation for knowledge reuse provides a framework to share product knowledge such as assembly processes, environmental impact and energy efficiency in manufacturing, in order to help engineers make better decisions. It can reduce the search effort of engineers and mitigate the burden of information overload. However, traditional personalized knowledge recommendation methods assume, for simplicity, that engineers' differing characteristics (most notably their level of experience) are the same. In this paper, we present a new method for handling the personalized knowledge recommendation problem. A measurement model of cognitive information gain is proposed to predict the helpfulness of knowledge for engineers based on their level of experience. Knowledge items are analyzed and their helpfulness scores are calculated using the cognitive information gain measurement model. Knowledge recommendations that are optimally helpful relative to engineers' experience are then generated. An example is used to illustrate the procedure of the proposed method. The results show that the proposed method is effective and accurate in recommending knowledge while taking the engineer's level of experience into account.
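The abstract does not give the measurement model itself; the sketch below is a minimal, hypothetical illustration of how a helpfulness score could relate an item's difficulty to an engineer's experience level. The scoring formula, field names and sample items are assumptions for illustration, not the paper's cognitive information gain model.

```python
import math

# Hypothetical sketch: score how helpful a knowledge item might be for an
# engineer, assuming an item is most helpful when its difficulty slightly
# exceeds what the engineer already knows. The formula is illustrative only.

def helpfulness(item_difficulty, engineer_experience, spread=1.0):
    """Peak helpfulness when difficulty is just above experience (0-10 scale)."""
    gap = item_difficulty - engineer_experience
    return math.exp(-((gap - 1.0) ** 2) / (2 * spread ** 2))

def recommend(items, engineer_experience, top_k=3):
    """Rank knowledge items by the assumed helpfulness score."""
    ranked = sorted(items,
                    key=lambda it: helpfulness(it["difficulty"], engineer_experience),
                    reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    knowledge_base = [
        {"id": "assembly-basics", "difficulty": 2},
        {"id": "tolerance-stackup", "difficulty": 5},
        {"id": "dfm-for-injection-molding", "difficulty": 7},
    ]
    for item in recommend(knowledge_base, engineer_experience=4):
        print(item["id"], round(helpfulness(item["difficulty"], 4), 3))
```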

6.
Nowadays, in an industrial context, cost and delay reduction, as well as quality improvement, are of major interest in engineering design. Therefore, in order to make decisions as early as possible and according to the product specifications, mechanical analysis is used more and more, and earlier and earlier, in the engineering process. As a result, a multitude of mechanical models are elaborated during engineering design, and management difficulties appear with engineering changes or the evolution of specifications. Moreover, when the designer is faced with design or modelling options, a previous analysis could inform the choice between options for decision making. The reuse of a previous analysis must therefore be envisaged. This paper deals with the aims and the different uses of mechanical analysis in embodiment design. Afterwards, the different levels of models handled by the designer during the engineering process are proposed. A particular type of analysis, namely 'instructional' analysis, is identified in a further step and its interest in a reuse context is emphasized. Finally, an information structuring is proposed in order to allow mechanical analysis reuse during engineering design.

7.
This paper argues for a return to fundamentals and for a balanced assessment of the contribution that Information Technology can make as we enter the new millennium. It argues that the field of Information Systems should no longer be distracted from its natural locus of concern and competence, or claim more than it can actually achieve. More specifically, and as a case in point, we eschew IT-enabled Knowledge Management, both in theory and in practice. We view Knowledge Management as the most recent in a long line of fads and fashions embraced by the Information Systems community that have little to offer. Rather, we argue for a refocusing of our attention back on the management of data, since IT processes data, not information and certainly not knowledge. In so doing, we develop a model that provides a tentative means of distinguishing between the terms. This model also forms the basis for on-going empirical research designed to test the efficacy of our argument in a number of case companies currently implementing ERP and Knowledge Management Systems.

8.
The development of an image-processing (IP) application is a complex activity, which can be greatly alleviated by user-friendly graphical programming environments. The major objective of the work described in this paper is to help IP experts reuse parts of their applications. A first step towards knowledge reuse has been to propose a suitable representation of the strategies of IP experts by means of IP plans (trees of tasks, methods and tools). This paper describes the CBR module of an interactive system for the development of IP plans. After a brief presentation of the overall architecture of the system and its other modules, the authors explain the distinction between an IP case and an IP plan, and give the selection criteria and functions used for similarity calculation. The core of the CBR module is a search/adaptation algorithm, whose main steps are detailed: retrieval of suitable cases, recursive adaptation of the selected case, and memorization of new cases. The system's implementation is now complete; its operation is described through a session showing the kind of assistance the CBR module provides during the development of a new IP application.
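The abstract only names the retrieve, adapt and memorize steps; the following is a minimal, hypothetical sketch of such a loop with invented case attributes, similarity weights and a deliberately naive adaptation step, intended only to illustrate the general CBR cycle rather than the system's actual algorithm.

```python
# Hypothetical sketch of a retrieve-adapt-retain loop for image-processing (IP)
# plans, in the spirit of the CBR module described above. Case attributes,
# similarity weights, and the adaptation step are invented for illustration.

def similarity(query, case, weights):
    """Weighted fraction of matching attributes between a query and a stored case."""
    total = sum(weights.values())
    matched = sum(w for attr, w in weights.items()
                  if query.get(attr) == case["features"].get(attr))
    return matched / total if total else 0.0

def retrieve(query, case_base, weights):
    return max(case_base, key=lambda c: similarity(query, c, weights))

def adapt(case, query):
    """Naive adaptation: reuse the plan, recording which attributes differ."""
    diffs = {k: v for k, v in query.items() if case["features"].get(k) != v}
    return {"plan": case["plan"], "adapted_for": diffs}

def retain(case_base, query, solution):
    case_base.append({"features": dict(query), "plan": solution["plan"]})

if __name__ == "__main__":
    cases = [
        {"features": {"goal": "segment", "image": "MRI"}, "plan": ["denoise", "threshold", "label"]},
        {"features": {"goal": "detect-edges", "image": "satellite"}, "plan": ["smooth", "sobel"]},
    ]
    weights = {"goal": 2.0, "image": 1.0}
    query = {"goal": "segment", "image": "ultrasound"}
    best = retrieve(query, cases, weights)
    solution = adapt(best, query)
    retain(cases, query, solution)
    print(solution)
```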

9.
PETROS is a fixed-format magnetic tape data bank of major-element chemical analyses of igneous rocks divided into groups representing selected geographic areas and petrologic provinces. The 20,000 analyses and additional calculated average igneous rock compositions may be used for a variety of computer-based research and teaching applications. Interactive programs greatly expand the accessibility and usefulness of PETROS.

10.
KEYBAM is a system of interactive FORTRAN IV programs for accessing and operating on major-element whole-rock chemical analyses stored in the data bank PETROS. KEYBAM's capabilities include subfile creation based on user-supplied criteria, normative calculations and rock classification, graphical displays (including histograms, X-Y plots and triangular plots), and various statistical analyses based on the SPSS System. We have attempted to design KEYBAM so that it is machine-independent and can be used for research and teaching in petrology at most modern computer installations.

11.
Billions of Linked Data triples exist in thousands of RDF knowledge graphs on the Web, but few of those graphs can be queried live from Web applications. Only a limited number of knowledge graphs are available in a queryable interface, and existing interfaces can be expensive to host at high availability. To mitigate this shortage of live queryable Linked Data, we designed a low-cost Triple Pattern Fragments interface for servers, and a client-side algorithm that evaluates SPARQL queries against this interface. This article describes the Linked Data Fragments framework to analyze Web interfaces to Linked Data and uses this framework as a basis to define Triple Pattern Fragments. We describe client-side querying for single knowledge graphs and federations thereof. Our evaluation verifies that this technique reduces server load and increases caching effectiveness, which leads to lower costs to maintain high server availability. These benefits come at the expense of increased bandwidth and slower, but more stable query execution times. These results substantiate the claim that lightweight interfaces can lower the cost for knowledge publishers compared to more expressive endpoints, while enabling applications to query the publishers' data with the necessary reliability.
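As a rough illustration of the client side, the sketch below fetches one Triple Pattern Fragment page over HTTP. The endpoint URL, the query parameter names and the response handling are assumptions made for illustration; a real client discovers the form parameters from the server's hypermedia controls and paginates through the fragment.

```python
import requests  # assumed to be available

# Hypothetical sketch: ask a Triple Pattern Fragments server for the triples
# matching one pattern. Parameter names and the endpoint are assumptions; the
# returned document also carries count and paging metadata a client would use.

def fetch_fragment(endpoint, subject=None, predicate=None, obj=None, page=1):
    params = {"page": page}
    if subject:
        params["subject"] = subject
    if predicate:
        params["predicate"] = predicate
    if obj:
        params["object"] = obj
    response = requests.get(endpoint, params=params,
                            headers={"Accept": "text/turtle"}, timeout=30)
    response.raise_for_status()
    return response.text  # Turtle with matching triples plus metadata

if __name__ == "__main__":
    turtle = fetch_fragment(
        "http://example.org/fragments/dataset",  # hypothetical TPF endpoint
        predicate="http://xmlns.com/foaf/0.1/name",
    )
    print(turtle[:500])
```

A client-side SPARQL engine would evaluate a query by requesting the pattern with the smallest estimated count first and feeding its bindings into the remaining patterns, which is what keeps the server interface so cheap.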

12.

Context

Software developers spend considerable effort implementing auxiliary functionality used by the main features of a system (e.g., compressing/decompressing files, encryption/decryption of data, scaling/rotating images). With the increasing amount of open source code available on the Internet, time and effort can be saved by reusing these utilities through informal practices of code search and reuse. However, when this type of reuse is performed in an ad hoc manner, it can be tedious and error-prone: code results have to be manually inspected and integrated into the workspace.

Objective

In this paper we introduce and evaluate the use of test cases as an interface for automating code search and reuse. We call our approach Test-Driven Code Search (TDCS). Test cases serve two purposes: (1) they define the behavior of the desired functionality to be searched; and (2) they test the matching results for suitability in the local context. We also describe CodeGenie, an Eclipse plugin we have developed that performs TDCS using a code search engine called Sourcerer.
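To make the idea concrete, here is a minimal sketch (in Python rather than the Java/Eclipse setting of CodeGenie) of how a test case can both describe the desired functionality and filter search results for suitability; the candidate snippets and helper names are invented, standing in for results a code search engine would return.

```python
# Hypothetical sketch of Test-Driven Code Search: a test case describes the
# desired behaviour, candidate snippets returned by a code search are executed
# against it, and only candidates that pass are offered for reuse.

def slug_test(candidate):
    """The 'interface': defines what the searched-for function must do."""
    assert candidate("Hello World") == "hello-world"
    assert candidate("  Trim me  ") == "trim-me"

candidate_snippets = {
    "candidate_a": lambda s: s.strip().lower().replace(" ", "-"),
    "candidate_b": lambda s: s.lower().replace(" ", "_"),
}

def tdcs(test, candidates):
    """Return the names of candidates whose behaviour satisfies the test."""
    suitable = []
    for name, func in candidates.items():
        try:
            test(func)
            suitable.append(name)
        except AssertionError:
            pass
    return suitable

if __name__ == "__main__":
    print(tdcs(slug_test, candidate_snippets))  # ['candidate_a']
```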

Method

Our evaluation consists of two studies: an applicability study with 34 different features that were searched using CodeGenie; and a performance study comparing CodeGenie, Google Code Search, and a manual approach.

Results

Both studies present evidence of the applicability and good performance of TDCS in the reuse of auxiliary functionality.

Conclusion

This paper presents an approach to source code search and its application to the reuse of auxiliary functionality. Our exploratory evaluation shows promising results, which motivates the use and further investigation of TDCS.

13.
The National Coal Resources Data System (NCRDS) was designed by the U.S. Geological Survey (USGS) to meet the increasing demands for rapid retrieval of information on coal location, quantity, quality, and accessibility. An interactive conversational query system devised by the USGS retrieves information from the data bank through a standard computer terminal. The system is being developed in two phases.

Phase I, which is currently available on a limited basis, contains published areal resource and chemical data. The primary objective of this phase is to retrieve, calculate, and tabulate coal-resource data by area on a local, regional, or national scale. Factors available for retrieval include: state, county, quadrangle, township, coal field, coal bed, formation, geologic age, source and reliability of data, and coal-bed rank, thickness, overburden, and tonnage, or any combination of these variables. In addition, the chemical data items include individual values for proximate and ultimate analyses, BTU value, and several other physical and chemical tests. Information will be validated and deleted or updated as needed.

Phase II is being developed to store, retrieve, and manipulate basic point-source coal data (e.g., field observations, drill-hole logs), including geodetic location; bed thickness; depth of burial; moisture; ash; sulfur; major-, minor-, and trace-element content; heat value; and characteristics of overburden, roof rocks, and floor rocks. The computer system may be used to interactively generate structure-contour or isoline maps of the physical and chemical characteristics of a coal bed, or to calculate coal resources.
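A toy sketch of the kind of criteria-based retrieval described for Phase I follows; the record fields, sample values and filter syntax are invented for illustration and are not the actual NCRDS schema or query language.

```python
# Hypothetical sketch of criteria-based retrieval over coal-resource records,
# in the style of the Phase I queries described above. Fields are invented.

records = [
    {"state": "WV", "county": "Boone", "bed": "Coalburg", "thickness_ft": 4.2, "tonnage_kt": 1200},
    {"state": "WV", "county": "Logan", "bed": "Stockton", "thickness_ft": 2.8, "tonnage_kt": 640},
    {"state": "KY", "county": "Pike", "bed": "Elkhorn", "thickness_ft": 5.1, "tonnage_kt": 2100},
]

def query(records, **criteria):
    """Return records matching all equality or '<field>_min' minimum criteria."""
    def keep(rec):
        for field, wanted in criteria.items():
            if field.endswith("_min"):
                if rec.get(field[:-4], 0) < wanted:
                    return False
            elif rec.get(field) != wanted:
                return False
        return True
    return [r for r in records if keep(r)]

if __name__ == "__main__":
    hits = query(records, state="WV", thickness_ft_min=3.0)
    total = sum(r["tonnage_kt"] for r in hits)
    print(hits, "total kt:", total)
```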

14.
This paper addresses supervised learning in which the class memberships of the training data are subject to ambiguity. The problem is tackled within the ensemble learning and Dempster-Shafer theory of evidence frameworks. The initial labels of the training data are ignored and, by utilizing the prototypes of the main classes, each training pattern is reassigned to one class or to a subset of the main classes according to the level of ambiguity concerning its class label. A multilayer perceptron neural network is employed to learn the characteristics of the data with the new labels, and for a given test pattern its outputs are treated as a basic belief assignment. Experiments with artificial and real data demonstrate that taking the ambiguity in the labels of the learning data into account can provide better classification results than single and ensemble classifiers that solve the classification problem using data with the initial imperfect labels.
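A small, hypothetical sketch of the relabeling idea described above: a pattern close to one prototype keeps a single label, while a pattern nearly equidistant from two prototypes is assigned the set of both classes. The distance measure, ambiguity threshold and data are invented for illustration.

```python
import math

# Hypothetical sketch of prototype-based relabeling under ambiguity: a training
# pattern is assigned to the nearest class, or to the set of classes whose
# prototypes are almost equally close. Threshold and data are invented.

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def relabel(pattern, prototypes, ambiguity_ratio=1.2):
    """Return the set of labels whose prototype distance is within
    ambiguity_ratio of the nearest prototype's distance."""
    dists = {label: euclid(pattern, proto) for label, proto in prototypes.items()}
    nearest = min(dists.values())
    return {label for label, d in dists.items() if d <= ambiguity_ratio * nearest}

if __name__ == "__main__":
    prototypes = {"A": (0.0, 0.0), "B": (4.0, 0.0)}
    print(relabel((0.5, 0.2), prototypes))   # {'A'}       -> unambiguous
    print(relabel((2.1, 0.1), prototypes))   # {'A', 'B'}  -> ambiguous, gets both labels
```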

15.
The structure of data in computer-based files is information that may not be explicit in the files themselves but is instead incorporated, in part, in the computer software designed to process them. If a computer-processable file of data is to be processed using a "system" other than the one used to generate it, conversion of the file to another format is normally necessary. A format called FILEMATCH is presented which, for the structures encountered in earth-science data, incorporates the structural information in the files themselves, thus providing a medium for interchanging files among a variety of software systems.

16.
Reuse of software assets in application development has held promise but faced challenges. In addressing these challenges, research has focused on organizational- and project-level factors while neglecting grass-roots adoption of reusable assets by individual developers. Our research investigated factors associated with individual software developers' intention to reuse software assets and integrated them into the Technology Acceptance Model (TAM). To that end, 13 project managers were interviewed and 207 software developers were surveyed in India. The results revealed that a technological-level factor (infrastructure) and individual-level factors (reuse-related experience and self-efficacy) were the major determinants. Implications are discussed.

17.
In this paper we address the problem of integrating independent and possibly heterogeneous data warehouses, a problem that has received little attention so far but that arises very often in practice. We start by tackling the basic issue of matching heterogeneous dimensions and provide a number of general properties that a dimension matching should fulfill. We then propose two different approaches to the integration problem that try to enforce matchings satisfying these properties. The first approach refers to a scenario of loosely coupled integration, in which we only need to identify the common information between data sources and perform join operations over the original sources. The goal of the second approach is the derivation of a materialized view built by merging the sources; it refers to a scenario of tightly coupled integration in which queries are performed against the view. We also illustrate the architecture and functionality of a practical system that we have developed to demonstrate the effectiveness of our integration strategies. A preliminary version of this paper appeared, under the title "Integrating Heterogeneous Multidimensional Databases" [9], in the 17th Int. Conference on Scientific and Statistical Database Management, 2005.
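A toy sketch of the loosely coupled scenario described above: find the members that two independently defined dimensions have in common and join the facts over them. The dimension members, fact values and the name-equality matching rule are invented for illustration and are far simpler than the paper's matching properties.

```python
# Hypothetical sketch of loosely coupled integration: identify the members two
# heterogeneous "country" dimensions share, then drill across by joining each
# warehouse's facts on those shared members only. Data is invented.

sales_facts = {"Italy": 120, "France": 95, "Spain": 60}          # warehouse 1
support_facts = {"Italy": 14, "France": 9, "Germany": 11}        # warehouse 2

def match_dimension(members_a, members_b):
    """Very naive matching: members are the same if their names match exactly."""
    return sorted(set(members_a) & set(members_b))

def drill_across(facts_a, facts_b):
    """Join the two fact tables over the matched dimension members only."""
    common = match_dimension(facts_a, facts_b)
    return {m: (facts_a[m], facts_b[m]) for m in common}

if __name__ == "__main__":
    print(drill_across(sales_facts, support_facts))
    # {'France': (95, 9), 'Italy': (120, 14)} -- Spain and Germany are dropped
```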

18.
This paper presents the combination of objective measurements and human perceptions using hidden Markov models, with particular reference to sequential data mining and knowledge discovery. Both human preferences and statistical analysis are utilized for the verification and identification of hypotheses, as well as for the detection of hidden patterns. As a further theoretical contribution, this work attempts to formalize the complementarity of the computational theories of hidden Markov models and of perceptions for providing solutions associated with the manipulation of the Internet.
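For reference, a minimal forward-algorithm sketch for evaluating an observation sequence under a hidden Markov model is shown below; the two-state weather model is a standard textbook-style example invented here, not a model from the paper.

```python
# Minimal sketch of the HMM forward algorithm: compute the probability of an
# observation sequence given the model. The two-state example model is invented.

def forward(observations, states, start_p, trans_p, emit_p):
    """Return P(observations | model) via the forward recursion."""
    alpha = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    for obs in observations[1:]:
        prev = alpha[-1]
        alpha.append({
            s: emit_p[s][obs] * sum(prev[r] * trans_p[r][s] for r in states)
            for s in states
        })
    return sum(alpha[-1].values())

if __name__ == "__main__":
    states = ("Rainy", "Sunny")
    start_p = {"Rainy": 0.6, "Sunny": 0.4}
    trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
               "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
    emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
              "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
    print(forward(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
```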

19.
This paper presents a comprehensive survey of web log/usage mining based on over 100 research papers. It is the first survey dedicated exclusively to web log/usage mining. The paper identifies several web log mining sub-topics, including specific ones such as data cleaning and user and session identification. Each sub-topic is explained, its strengths and weaknesses are discussed, and possible solutions are presented. The paper describes examples of web log mining and lists some major web log mining software packages.
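As an illustration of one sub-topic named above, session identification, here is a minimal timeout-based sessionization sketch; the log format and the 30-minute timeout are assumptions, and real surveys discuss several alternative heuristics.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of session identification from a web access log:
# consecutive requests by the same user are grouped into one session unless
# the gap exceeds a timeout. Log entries and the 30-minute timeout are assumed.

TIMEOUT = timedelta(minutes=30)

def sessionize(log_entries):
    """Group (user, timestamp, url) entries into per-user sessions."""
    sessions = {}
    for user, ts, url in sorted(log_entries, key=lambda e: (e[0], e[1])):
        user_sessions = sessions.setdefault(user, [])
        if user_sessions and ts - user_sessions[-1][-1][0] <= TIMEOUT:
            user_sessions[-1].append((ts, url))
        else:
            user_sessions.append([(ts, url)])
    return sessions

if __name__ == "__main__":
    log = [
        ("10.0.0.1", datetime(2024, 1, 1, 9, 0), "/home"),
        ("10.0.0.1", datetime(2024, 1, 1, 9, 10), "/products"),
        ("10.0.0.1", datetime(2024, 1, 1, 11, 0), "/home"),   # new session (gap > 30 min)
    ]
    for user, user_sessions in sessionize(log).items():
        print(user, [len(s) for s in user_sessions])  # 10.0.0.1 [2, 1]
```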

20.
