Similar Documents
20 similar documents found (search time: 984 ms)
1.
2.
3.
Context: Software effort estimation is a core task in planning, budgeting, and controlling software development projects. However, providing accurate effort estimates is challenging. Estimation work is increasingly group based, and to support it, there is a need to reveal how work practices are carried out as collaborative efforts. Objective: This paper examines the use of concepts in software effort estimation by analysing group work as communicative practice. The objective is to improve our understanding of how software professionals invoke different types of knowledge when talking, reasoning, and reaching a decision on a software effort estimate. Method: Estimation meetings in industry, where planning poker was used as the estimation method, were video recorded and analysed by means of the interaction analysis technique, focusing on the communicative and collaborative aspects of the group work. Results: The user story mediates the types of resources and knowledge needed to solve the task. Concepts from the knowledge domain are used to frame the task and allow the participants to reach a consensus sufficient to take the next step in the problem-solving activity. Individual knowledge seems to be the dominant orientation when it comes to specifying the work needed to solve the tasks. Conclusion: The step from reasoning to decision-making has been called the "magic step" in software effort estimation. We argue that the magic step is found in the analysis of the social interaction in which the concepts used are anchored in the knowledge domain of software engineering and in the historical experiences of the participants, and subsequently become activated. We propose that by taking a socio-cultural perspective on concepts in activities, the ways in which software professionals reach a decision can be unpacked. The paper contributes to an understanding of the role of concepts in group work and of software effort estimation as a specific work practice.
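Planning poker, the estimation method observed in this study, can be reduced to a consensus loop: each participant privately picks a card, and the group either agrees or discusses and re-votes. A minimal sketch follows; the card deck and the all-cards-equal consensus rule are illustrative assumptions, not details from the paper:

```python
# One planning-poker estimation round, reduced to consensus detection.
# The card deck is a common (but here assumed) Fibonacci-like scale.
CARDS = [1, 2, 3, 5, 8, 13, 20, 40]

def poker_round(estimates):
    """Return the agreed estimate, or None if discussion must continue.

    estimates: dict mapping participant name -> chosen card value.
    In a real meeting the outliers justify their choices and everyone
    re-estimates; here we only detect agreement or disagreement.
    """
    if not all(v in CARDS for v in estimates.values()):
        raise ValueError("every estimate must be a card value")
    values = set(estimates.values())
    if len(values) == 1:          # everyone showed the same card
        return values.pop()
    return None                   # outliers explain, then the group re-votes

print(poker_round({"ann": 5, "bob": 5, "eve": 5}))  # 5 -> consensus reached
print(poker_round({"ann": 3, "bob": 8, "eve": 5}))  # None -> discuss, re-vote
```

The interesting part of the paper is precisely what this sketch leaves out: the talk that happens between a `None` result and the next vote.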

4.
Multimod Data Manager: a tool for data fusion (cited 2 times: 0 self-citations, 2 by others)
Nowadays biomedical engineers regularly have to combine data from multiple medical imaging modalities, biomedical measurements, and computer simulations, and this can demand knowledge of many specialised software tools. Acquiring this knowledge to the depth necessary to perform the various tasks can require considerable time and thus divert the researcher from addressing the actual biomedical problems. The aim of the present study is to describe a new application called the Multimod Data Manager, distributed as freeware, which provides the end user with a fully integrated environment for the fusion and manipulation of all biomedical data. The Multimod Data Manager is generated using a software application framework, called the Multimod Application Framework, which is specifically designed to support the rapid development of computer-aided medicine applications. To explain the general logic of the Data Manager, we first introduce the framework from which it is derived. We then illustrate its use by an example: the development of a complete subject-specific musculo-skeletal model of the lower limb from the Visible Human medical imaging data, to be used for predicting the stresses in the skeleton during gait. While the Data Manager is clearly still only at the prototype stage, we believe that it is already capable of being used to solve a large number of problems common to many biomedical engineering activities.

5.
Creating and maintaining software systems is a knowledge-intensive task. One needs a good understanding of the application domain, the problem to solve and all its requirements, the software process used, the technical details of the programming language(s), the system's architecture and how the different parts fit together, how the system interacts with its environment, and so on. All this knowledge is difficult and costly to gather. It is also difficult to store and usually lives only in the minds of the software engineers who worked on a particular project. If this is a problem for the development of new software, it is even more so for maintenance, when one must rediscover lost information of an abstract nature from legacy source code among a swarm of unrelated details. In this paper, we submit that this lack of knowledge is one of the prominent problems in software maintenance. To address it, we adapted a knowledge extraction technique to the knowledge needs specific to software maintenance. We explain how we make explicit the knowledge discovered in a legacy system during maintenance, so that it may be recorded for future use. Applications in industrial maintenance projects are also reported.

6.
This paper is about tool support for knowledge-intensive engineering tasks. In particular, it introduces software technology to assist the design of complex technical systems. There is a long tradition of automated design problem solving in the field of artificial intelligence where, especially in the early stages, the search paradigm dictated many approaches. Later, in the so-called modern period, a better understanding of the problem led to the development of more adequate problem-solving techniques. However, search still constitutes an indispensable part of computer-based design problem solving, albeit one that many human problem solvers (almost) get by without. We tried to learn lessons from this observation, and one is presented in this paper. We introduce design problem solving by functional abstraction, which follows the motto: construct a poor solution with little search, then repair it. For the domain of fluidic engineering, we have operationalized this paradigm through a combination of several high-level techniques. The red thread of this paper is design automation, but the presented technology also contributes in the following respects: (a) productivity enhancement by relieving experts of auxiliary and routine tasks; (b) formulation, exchange, and documentation of knowledge about design; (c) requirements engineering, feasibility analysis, and validation. This research was supported by DFG grants Schw 120/56-3, KL 529/10-3, KL 529/7-3, and KL 529/10-1.

7.
Program understanding is an essential part of all software maintenance and enhancement activities. As currently practiced, program understanding consists mainly of code reading. The few automated understanding tools that are actually used in industry provide helpful but relatively shallow information, such as the line numbers on which variable names occur or the calling structure possible among system components. These tools rely on analyses driven by the nature of the programming language used. As such, they are adequate to answer questions concerning implementation details, the so-called what questions. They are severely limited, however, when trying to relate a system to its purpose or requirements, the why questions. Application programs solve real-world problems. The part of the world with which a particular application is concerned is that application's domain. A model of an application's domain can serve as a supplement to programming-language-based analysis methods and tools. A domain model carries knowledge of domain boundaries, terminology, and possible architectures. This knowledge can help an analyst set expectations for program content. Moreover, a domain model can provide information on how domain concepts are related. This article discusses the role of domain knowledge in program understanding. It presents a method by which domain models, together with the results of programming-language-based analyses, can be used to answer both what and why questions. Representing the results of domain-based program understanding is also important, and a variety of representation techniques are discussed. Although domain-based understanding can be performed manually, automated tool support can guide discovery, reduce effort, improve consistency, and provide a repository of knowledge useful for downstream activities such as documentation, reengineering, and reuse.
A tools framework for domain-based program understanding, a dowser, is presented in which a variety of tools work together to make use of domain information to facilitate understanding. Experience with domain-based program understanding methods and tools is presented in the form of a collection of case studies. After the case studies are described, our work on domain-based program understanding is compared with that of other researchers working in this area. The paper concludes with a discussion of the issues raised by domain-based understanding and directions for future work. This revised version was published online in June 2006 with corrections to the Cover Date.
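The core idea of relating a domain model's terminology to code can be illustrated, in a deliberately reduced form, as naive name matching between domain terms and program identifiers. This is a hypothetical sketch only; the dowser framework described above goes far beyond substring matching (it also uses domain boundaries, relations between concepts, and architectural knowledge):

```python
def link_domain_concepts(domain_terms, identifiers):
    """Map each domain-model term to the code identifiers that mention it.

    Naive case-insensitive substring matching, as a toy stand-in for
    domain-based analysis: it hints at *why* code exists (which domain
    concept it serves), not just *what* it does.
    """
    return {
        term: sorted(i for i in identifiers if term.lower() in i.lower())
        for term in domain_terms
    }

identifiers = ["InvoiceBuilder", "parse_invoice", "CustomerRepo", "util_misc"]
links = link_domain_concepts(["invoice", "customer"], identifiers)
print(links)
# {'invoice': ['InvoiceBuilder', 'parse_invoice'], 'customer': ['CustomerRepo']}
```

Identifiers left unmatched (here `util_misc`) are exactly where an analyst would need deeper, non-lexical domain knowledge.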

8.
The concept of knowledge bases (KBs) originated in artificial intelligence as one side of expert systems: the fundamental body of knowledge available to a domain. KBs are particularly appropriate in knowledge-intensive activities like software development. They offer context-based access to complex information, including informal documents and multimedia, as well as a centralized means of storing and preserving digital assets. KBs can help software engineers with many tasks, from project management and design rationale to version control, defect tracking, code reuse, and staff training and development. We recently implemented an open source KB to support the Consortium for Studying Open Source in Public Administrations (COSPA). COSPA originated in an EU initiative to study the use of open source software to reduce public administrations' software and system support costs. The KB project aimed to build a multilingual knowledge base for comparing and pooling knowledge and experience.

9.
Building application domain models is a time-consuming activity in software engineering. In small teams, it involves almost all participants, including developers and domain experts. In our approach, we support the knowledge engineering activity by reusing the tagging done by team participants when they search the Web for information about the application's domain. Team participants collaborate implicitly when tagging because their individually created tags are collected and form a folksonomy. This folksonomy reflects their knowledge about the domain, and it is the basis for eliciting domain model elements in the knowledge acquisition and conceptualization tasks in a consensual way. Experiments provide evidence that our approach helps team participants build richer domain models than they would without our software tool. The tool allows the reuse of simple annotations while users learn about the application's domain.
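The aggregation step described here, collecting individually created tags into a folksonomy and surfacing consensual tags as candidate domain concepts, can be sketched as a frequency count. The names and the agreement threshold below are illustrative assumptions, not the paper's actual elicitation procedure:

```python
from collections import Counter

def candidate_concepts(taggings, min_users=2):
    """Elicit candidate domain-model elements from team members' tags.

    taggings: dict mapping participant -> set of tags they applied.
    A tag used independently by at least `min_users` people is treated
    as a consensual candidate concept (the threshold is an assumption).
    """
    counts = Counter(tag for tags in taggings.values() for tag in set(tags))
    return sorted(tag for tag, n in counts.items() if n >= min_users)

taggings = {
    "dev1":   {"invoice", "customer", "vat"},
    "dev2":   {"invoice", "customer"},
    "expert": {"invoice", "ledger"},
}
print(candidate_concepts(taggings))  # ['customer', 'invoice']
```

Tags used by only one person (`vat`, `ledger`) are filtered out, which is the "consensual" part: the folksonomy only promotes concepts the team implicitly agrees on.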

10.
This paper discusses the development of task-specific information retrieval systems for software engineers. We discuss how software engineers interact with information and information retrieval systems, and investigate to what extent a domain-specific search and recommendation system can be developed to support their work-related activities. We conducted a user study, based on the Cognitive Research Framework, to identify the relations between the information objects used during code development (code snippets and search queries), the tasks users engage in, and the associated use of search interfaces. Based on our user studies, a questionnaire, and automated observation of user interactions with the browser and the software development environment, we find that software engineers engage in a finite number of work-related tasks and also develop a finite number of "work practices", or "archetypes of behaviour". Second, we identify a group of domain-specific behaviours that can successfully be used as a source of strong implicit relevance feedback. Based on our results, we design a snippet recommendation interface and a code-related recommendation interface, both embedded within a standard search engine.
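Turning observed developer behaviours into implicit relevance feedback amounts to weighting events and accumulating a score per search result. The behaviours and weights below are invented for illustration; the paper identifies its own set of domain-specific behaviours:

```python
# Hypothetical behaviour weights for implicit relevance feedback.
# Copying a snippet into the IDE is a strong positive signal; an
# immediate bounce back to the result list is a negative one.
BEHAVIOUR_WEIGHTS = {
    "copied_snippet": 3.0,
    "dwelled_long":   1.5,
    "bounced":       -1.0,
}

def implicit_relevance(events):
    """Aggregate observed behaviours into a relevance score per result.

    events: iterable of (result_id, behaviour) pairs logged by the
    browser / IDE observation layer.
    """
    scores = {}
    for result_id, behaviour in events:
        scores[result_id] = scores.get(result_id, 0.0) + BEHAVIOUR_WEIGHTS[behaviour]
    return scores

events = [("doc1", "copied_snippet"), ("doc1", "dwelled_long"), ("doc2", "bounced")]
print(implicit_relevance(events))  # {'doc1': 4.5, 'doc2': -1.0}
```

Scores like these could then re-rank results or drive a recommendation interface without ever asking the developer for explicit ratings.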

11.
The Internet knowledge (iKnow) measure (cited 1 time: 1 self-citation, 0 by others)
Despite the increasing development and administration of Internet surveys, tests, and many other applications used by employees and the general public, little is known about the knowledge individuals bring to Internet tasks. This research improves our understanding of the concept of Internet knowledge and provides initial support for the construct validity of a new measure of Internet knowledge with respect to its factor structure, internal consistency reliability, and concurrent validity. From a practical perspective, a clearer definition of Internet knowledge and the availability of a reliable measure of such knowledge can advance our understanding of how individuals develop Internet experience through use, and may also inform the process by which web sites and Internet applications are designed.

12.
Computer studies educators have a challenging task in keeping pace with the rapidly changing content of computer software. One way to meet this challenge is to examine the nature of knowledge transfer. Instead of focusing on unique software packages, teachers could concentrate on knowledge that is likely to transfer from one software application to another. The purpose of the current study was to describe what kind of knowledge is used in learning new software, to assess the relative effectiveness of this knowledge in aiding the learning process, and to examine how the results could advance educational learning theory and practice. Thirty-six adults (18 male, 18 female), representing three computer ability levels (beginner, intermediate, and advanced), volunteered to think out loud while they learned the rudimentary steps (moving the cursor, using a menu, entering data) required to use a spreadsheet software package (Lotus 1-2-3). Previous understanding of terminology, software concepts and actions, and other software packages had the largest impact, both positive and negative, on learning. A basic understanding of the keyboard and common movement keys was also important, although higher-level knowledge (e.g., terms, concepts, actions) is probably necessary for significant gains in learning performance. Computer ability had little impact on the type of transfer knowledge used, except with respect to the use of software concepts and, to a lesser extent, terminology. The interaction between problem type and the effectiveness of a specific transfer area suggests that identifying specific common tasks among software packages is important in detecting useful transfer knowledge. It is equally important that computer users understand the labeling idiosyncrasies of these common tasks.

13.
14.
The aim of this paper is to present a design strategy for collaborative knowledge-management systems based on a semiotic approach. The contents and structure of experts' knowledge are highly dependent on professional or individual practice. Knowledge-management systems that support cooperation between experts from different (sub-)fields need to be situated and tailored to provide effective support, even if the common aspects of the data need to be described by ontologies that are generic with respect to the sub-disciplines involved. To understand and approach this design problem, we apply a semiotic perspective to computer applications and human–computer interaction. From a semiotic perspective, the computer application is both a message from the designer to the user, about the structure of the problem domain and about interaction with it, and a structured channel for the user's communication with herself, himself, or other users of the software. Tailoring, or "end-user development", i.e. adapting the knowledge-management system to a specific (sub-)discipline, task, or context, then refines the message and adapts the structure of the interaction to the situated requirements. The essential idea of this paper is to define a new perspective for designing and developing interactive systems that support collaborative knowledge management. The key concept is to involve domain experts in participatory knowledge design, mapping and translating their professional models into proper vocabularies, notations, and suitable visual structures for navigating among interface elements. To this end, the paper describes how our semiotic approach supports the processes of representing, storing, accessing, and transferring knowledge through which the information architecture of an interactive system can be defined. Finally, the results of applying our approach to a real-world case in an archaeological context are presented.

15.
An important part of many software maintenance tasks is gaining a sufficient understanding of the system at hand. The use of dynamic information to aid this software understanding process is common practice nowadays. A major issue in this context is scalability: because of the vast amounts of information, it is very difficult to navigate the dynamic data contained in execution traces without getting lost. In this paper, we propose two novel trace visualization techniques based on the massive sequence view and the circular bundle view, both of which reflect a strong emphasis on scalability. These techniques have been implemented in a tool called Extravis. By means of distinct usage scenarios conducted on three different software systems, we show how our approach is applicable to three typical program comprehension tasks: trace exploration, feature location, and top-down analysis with domain knowledge.

16.

17.
This paper sets out to illustrate the importance of transparency within software support systems, and in particular for intelligent assistant systems performing complex industrial design tasks. Such transparency (in the sense of 'clear' or 'easy to understand') can be achieved by two distinct strategies that complement each other:
(i) the design of intelligible systems that avoid the need for in-depth explanation;
(ii) the flexible generation of those definitions or aspects of the system or domain that remain ambiguous.
The paper illustrates that, to generate useful explanations going beyond a simple justification of a problem-solving trace, specific explanatory knowledge must be acquired; the problem-solving techniques by themselves are not sufficient. A new approach to acquiring and modelling explanatory knowledge for software systems is presented. The new four-layer explanatory model can be used to determine the range of explanation suitable for a given system's domain. This model has been successfully used in the development of an explanation component for the design assistant system ASSIST, which supports factory layout planning, in itself a complex design task.

18.
Expert systems have traditionally captured the explicit knowledge of a single expert or source of expertise in order to automatically provide conclusions or classifications within a narrow problem domain. This is in stark contrast to social software, which enables knowledge communities to share implicit knowledge of a more practical or experiential nature, informing individuals and groups so they can arrive at their own conclusions. Specialists are often needed to elicit and encode the knowledge in the case of expert systems, whereas one of the (claimed) hallmarks of social software and the Web 2.0 trend, such as wikis and blogs, is that anyone, anywhere, can choose to contribute. This openness in authoring and sharing content, however, tends to produce unstructured knowledge that is difficult to execute, reason over, or automatically validate, which also limits its reuse. To facilitate the capture of knowledge-in-action, which spans both explicit and tacit knowledge types, a knowledge engineering approach offering wiki-style collaboration is introduced. The approach extends a combined rule- and case-based knowledge acquisition technique known as Multiple Classification Ripple Down Rules to allow multiple users to collaboratively view, define, and refine a knowledge base over time and space.
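The Ripple Down Rules structure that MCRDR generalises is a tree of rules where each rule may carry an exception branch that refines its conclusion, so the knowledge base can be patched incrementally without rewriting existing rules. Below is a minimal single-classification sketch; the class design and the toy rules are illustrative assumptions, not the paper's system:

```python
class RDRNode:
    """Minimal single-classification Ripple Down Rules node.

    A simplified illustration of the rule-with-exceptions structure;
    MCRDR extends this to multiple simultaneous conclusions.
    """
    def __init__(self, cond, conclusion, except_=None, else_=None):
        self.cond = cond          # predicate over a case (a dict)
        self.conclusion = conclusion
        self.except_ = except_    # tried when this rule fires, to refine it
        self.else_ = else_        # tried when this rule does not fire

    def classify(self, case, default=None):
        if self.cond(case):
            if self.except_:
                refined = self.except_.classify(case, default=None)
                if refined is not None:
                    return refined  # exception overrides this conclusion
            return self.conclusion
        if self.else_:
            return self.else_.classify(case, default)
        return default

# Toy knowledge base: a large change is "risky" unless it is well tested.
kb = RDRNode(
    cond=lambda c: c["loc_changed"] > 500,
    conclusion="risky",
    except_=RDRNode(lambda c: c["test_coverage"] > 0.9, "acceptable"),
    else_=RDRNode(lambda c: True, "acceptable"),
)
print(kb.classify({"loc_changed": 800, "test_coverage": 0.5}))   # risky
print(kb.classify({"loc_changed": 800, "test_coverage": 0.95}))  # acceptable
print(kb.classify({"loc_changed": 100, "test_coverage": 0.2}))   # acceptable
```

The key property, and the reason RDR suits collaborative refinement, is that correcting a wrong conclusion means attaching a new exception node rather than editing rules other contributors already rely on.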

19.
20.
IT Professional, 2001, 3(2): 29-36
New methodologies and better techniques are the rule in software engineering, and users of large and complex methodologies benefit greatly from specialized software support tools. However, developing such tools is both difficult and expensive, because developers must implement a lot of functionality in a short time. A promising solution is component-based software development, in particular package-oriented programming (POP). POP fails, however, to satisfy all the requirements of large, complex software engineering tasks. A more generic POP architecture would better serve the development of software engineering environments for large and complex methodologies. Such an architecture emerged from our development experience with two software engineering research tools: Holmes, a domain analysis support tool, and Egidio, a unified-modeling-language-based business modeling tool. We found this architecture simple to understand, easy to implement, and a natural candidate for a generic POP architecture. It satisfies the additional requirements we deem important for larger, more complex software engineering activities. Our experience shows that the strength of this architecture lies in its simplicity and its ability to work with multiple users and to quickly integrate a wide variety of applications. It is not perfect, but we present it as a first step toward a more general package-oriented architecture, to encourage further research in this area.
