Similar documents
 Found 20 similar documents (search time: 156 ms)
1.
An Agent-Based Adaptive Software Process Model   Total citations: 17, self-citations: 3, other citations: 17
Traditional software process models are mostly static, mechanical, and passive: they require software engineers to anticipate every situation that may arise when describing a software process and to define solutions to these problems explicitly. When the environment of a software process changes, the process cannot adapt to these changes on its own. This paper proposes an Agent-based adaptive software process model, in which a software process is described as a set of mutually independent, peer entities called software process Agents. These Agents can react to changes in the software process environment proactively and autonomously, dynamically determining and adjusting their behavior to achieve the goals of software development.

2.
Requirements Change Management Processes and Tools: Analysis and Outlook   Total citations: 1, self-citations: 0, other citations: 1
This paper analyzes and discusses the causes of requirements changes in software development and their impact on projects, introduces and examines how different methodologies specify and handle requirements change management, and summarizes and compares the change management processes of these mainstream methodologies. It also discusses the functionality that requirements change management tools should provide, compares the change management features of several mainstream requirements management tools, and concludes with an outlook on the future development of software requirements management tools.

3.
Software configuration management provides a set of management practices and activity principles for software development. It runs through the entire software life cycle: effective control of changes improves the productivity of the whole development team and makes the quality management process of a software project standardized and effective. An effective software configuration management process reduces the risks that software changes may introduce.

4.
As the scale of software development expands rapidly, software projects are becoming ever more complex and their failure rate keeps rising, which is tied to the industry's limited understanding of the software process. Schedule, cost, and quality are the core concerns of a software development project, and the factors behind these problems are manifold; every software project is a dynamic, complex system, and without a deep understanding of the dynamics of software development and of the factors that affect project performance, effective strategies for improving project performance cannot be devised. On this basis, this paper collectively names these factors "credibility-loss factors", identifies and classifies them along the software life cycle, and finally proposes a method for analyzing them based on system dynamics thinking. This discussion should lead practitioners to pay close attention to credibility-loss factors in future development projects and thus control and prevent problems in software projects more effectively.

5.
Under the traditional software development model, the gulf between stating requirements and completing a design is a major drag on development efficiency. To help designers develop software efficiently, this paper proposes an ontology-based method for requirements analysis and software architecture design. First, a domain ontology model, a requirements ontology model, and a software architecture ontology model are built. In the requirements analysis phase, user requirements are mapped onto ontology concepts via ontology mapping, enabling accurate assessment of requirements quality. In the architecture design phase, design documents shared on the Web are semantically annotated along five dimensions to generate a semantic index, enabling cross-domain semantic search and providing designers with more comprehensive and detailed design documents for reference. Finally, the system architecture is built and refined step by step according to the characteristics of the project at hand. Using ontology as the basis for describing both requirements and architecture achieves a smooth transition from requirements to design, reduces the time designers spend communicating with users, and helps improve overall software development efficiency.

6.
Research on an Agent Development Method Based on MESSAGE and JADE   Total citations: 1, self-citations: 1, other citations: 1
To address certain shortcomings of agent-oriented software engineering methods in practice, this work builds on the modeling concepts and modeling language of MESSAGE (Methodology for Engineering Systems of Software Agents) and extends the analysis and design phases of existing agent-oriented software engineering. Using JADE as the implementation platform, the relevant JADE terminology is mapped onto MESSAGE modeling concepts, yielding a systematic and complete agent software development process. An enterprise strategy rationality diagnosis system is presented as a case study, confirming that the method can be applied to real-world software development.

7.
林泽琦  邹艳珍  赵俊峰  曹英魁  谢冰 《软件学报》2019,30(12):3714-3729
Documents in natural language text form are an important component of software projects. How to help developers locate information efficiently and accurately within large volumes of documentation is an important research problem in software reuse. This paper proposes a semantic search method for software documentation based on code structure knowledge. The method parses a code structure graph from the project's source code and uses it as domain-specific knowledge to help the machine understand the semantics of natural language text. This semantic information is combined with information retrieval techniques to achieve semantic retrieval of software documents. Experiments on a StackOverflow Q&A document dataset show that, compared with several text retrieval methods, the approach improves mean average precision (MAP) by at least 13.77%.
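
For reference, the mean average precision (MAP) metric reported above can be computed as in this minimal sketch; the queries, rankings, and relevance judgments are hypothetical placeholders, not data from the paper.

```python
# Minimal MAP computation: mean, over queries, of the average precision
# of each query's ranked result list.

def average_precision(ranked_results, relevant):
    """AP for one query: mean of precision@k taken at each relevant hit."""
    hits, precisions = 0, []
    for k, doc in enumerate(ranked_results, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over all queries; runs is a list of (ranking, relevant_set)."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Example with two hypothetical queries over retrieved document ids.
runs = [
    (["d3", "d1", "d7"], {"d1", "d3"}),   # AP = (1/1 + 2/2) / 2 = 1.0
    (["d5", "d2", "d9"], {"d9"}),         # AP = (1/3) / 1 ≈ 0.33
]
print(mean_average_precision(runs))       # ≈ 0.67
```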

8.
This work applies software measurement methods to improve the traditional change management process. It describes the complete process of applying measurement in software change management and, using real project data as an example, analyzes characteristics of software change activity, thereby helping software developers improve their change management strategies. It represents a successful application of software measurement methods to change management.

9.
谢利子  王青  肖俊超 《软件学报》2010,21(12):3029-3041
This paper proposes a risk-driven method for allocating project buffers and develops a corresponding project execution simulation tool to validate it. The method allocates a project's available buffer rationally by jointly considering the project's risk factors and the schedule and resource constraints among tasks. Simulation results show that, in high-risk software development projects, compared with the tail-concentrated buffer allocation of critical chain project management, the method significantly reduces the frequency of plan changes during project execution while having little impact on average project duration. Together, the buffer allocation method and the simulation tool can help software project managers determine an appropriate project buffer length and allocation scheme, improving the credibility of software project plans and the stability of their execution.
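
As a hedged illustration of the allocation idea (one plausible reading of the abstract, not the paper's actual algorithm): distribute the project buffer across tasks in proportion to estimate times risk, rather than concentrating it all at the tail of the critical chain. All task data below are invented.

```python
# Risk-proportional buffer allocation sketch.
tasks = [
    # (name, estimated_days, risk score in [0, 1]) -- hypothetical values
    ("design",  10, 0.3),
    ("coding",  20, 0.6),
    ("testing", 15, 0.8),
]
project_buffer = 12.0  # total buffer days available to distribute

# Weight each task by estimate * risk, then split the buffer proportionally.
weights = [days * risk for _, days, risk in tasks]
total = sum(weights)
for (name, days, risk), w in zip(tasks, weights):
    share = project_buffer * w / total
    print(f"{name}: {days}d estimate, +{share:.1f}d buffer")
```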

10.
熊文军  张璇  王旭  李彤  尹春林 《计算机科学》2017,44(11):146-155
Issue tracking systems contain large numbers of change request reports that remain open for long periods, increasing the likelihood that developers repeatedly click on and read these reports and seriously hampering software requirements management tasks and the user feedback experience. Accurately and promptly predicting the likelihood or importance of these reports being closed can improve the quality of software maintenance tasks. This work defines several metrics that characterize change request reports, selects those with the best predictive performance on a training dataset, and builds a logistic regression prediction model. Experiments with the proposed method on a test dataset of 20 SourceForge projects yield an average recall of 94% and an average false positive rate of 14%. The results show that the method achieves good predictive performance on the test dataset; that the percentage or number of closed change request reports does not affect model performance; and that certain features of change request reports can be used to predict the likelihood of their closure in the next release.
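
A minimal sketch of the kind of logistic-regression pipeline the abstract describes, using scikit-learn; the features, synthetic data, and train/test split below are illustrative assumptions, not the paper's dataset or metrics.

```python
# Predict whether a change request report will be closed (1) or stay open (0).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, recall_score

rng = np.random.default_rng(0)
# Hypothetical report features, e.g. [age_days, num_comments, has_patch].
X = rng.random((200, 3))
# Synthetic labels for illustration only; assumes both classes occur.
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)

model = LogisticRegression().fit(X[:150], y[:150])
pred = model.predict(X[150:])

tn, fp, fn, tp = confusion_matrix(y[150:], pred).ravel()
print("recall:", recall_score(y[150:], pred))   # cf. the 94% reported above
print("false positive rate:", fp / (fp + tn))   # cf. the 14% reported above
```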

11.
Developers commonly make use of a web search engine such as Google to locate online resources and improve their productivity. A better understanding of what developers search for could help us understand their behaviors and the problems that they meet during the software development process. Unfortunately, we have a limited understanding of what developers frequently search for and of the search tasks that they often find challenging. To address this gap, we collected search queries from 60 developers and surveyed 235 software engineers from more than 21 countries across five continents. In particular, we asked our survey participants to rate the frequency and difficulty of 34 search tasks grouped along the following seven dimensions: general search, debugging and bug fixing, programming, third-party code reuse, tools, database, and testing. We find that searching for explanations of unknown terminology, explanations of exceptions/error messages (e.g., HTTP 404), reusable code snippets, solutions to common programming bugs, and suitable third-party libraries/services are the search tasks developers perform most frequently, while searching for solutions to performance bugs, solutions to multi-threading bugs, public datasets to test newly developed algorithms or systems, reusable code snippets, best industrial practices, database optimization solutions, solutions to security bugs, and solutions to software configuration bugs are the search tasks developers consider most difficult. Our study sheds light on why practitioners often perform some of these tasks and why they find some of them challenging. We also discuss the implications of our findings for future research in several areas, e.g., code search engines, domain-specific search engines, and automated generation and refinement of search queries.

12.
Searching application programming interfaces (APIs) is very important for developers who want to reuse software projects. Existing natural-language-based API search mainly faces the following challenges: 1) more accurate results are required as software projects evolve to be more heterogeneous and complex; 2) the semantic relationships between APIs (e.g., inheritance between classes, and invocations between methods) need to be illustrated so that developers can better understand their usage scenarios. To deal with these issues, we propose GeAPI, a novel graph-embedding-based approach for API graph search and recommendation. First, we build a software project's API graph automatically from its source code and represent each API using graph embedding methods. Second, we search the API graph with a question in natural language and return the corresponding subgraph, composed of relevant code elements and their associated relationships, as the best answer to the question. In experiments, we select three well-known open source projects, JodaTime, Apache Lucene and POI, as examples for API search tasks. The experimental results show that our approach GeAPI improves F1-score by 10% compared with an existing shortest-path-based API search approach, while reducing the average response time by a factor of about 60.
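
A minimal DeepWalk-style sketch of the graph-embedding step described above, using networkx and gensim; the toy API graph, walk settings, and API names are assumptions for illustration, not GeAPI's actual implementation.

```python
# Embed API-graph nodes via random walks + skip-gram (DeepWalk-style).
import random
import networkx as nx
from gensim.models import Word2Vec

# Toy API graph: nodes are code elements, edges are structural relations
# (e.g. class inheritance, method invocation). Names are hypothetical.
g = nx.Graph()
g.add_edges_from([
    ("Lucene.IndexWriter", "Lucene.Directory"),
    ("Lucene.IndexWriter", "Lucene.Document"),
    ("Lucene.IndexSearcher", "Lucene.Directory"),
    ("Lucene.IndexSearcher", "Lucene.Query"),
])

def random_walks(graph, num_walks=20, walk_len=6, seed=0):
    """Generate uniform random walks starting from every node."""
    rnd = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes:
            walk = [node]
            while len(walk) < walk_len:
                walk.append(rnd.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

# Train skip-gram embeddings on the walks; related APIs land close together.
model = Word2Vec(random_walks(g), vector_size=32, window=3, min_count=1, sg=1)
print(model.wv.most_similar("Lucene.IndexWriter", topn=2))
```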

13.
Our current understanding of how programmers perform feature location during software maintenance is based on controlled studies or interviews, which are inherently limited in size, scope and realism. Replicating controlled studies in the field can both explore the findings of these studies in wider contexts and study new factors that have not been previously encountered in the laboratory setting. In this paper, we report on a field study of how software developers perform feature location within source code during their daily development activities. Our study is based on two complementary field data sets: one that reflects the complete IDE activity of 67 professional developers over approximately one month, and another that reflects the usage of an IR-based code search tool by nearly 600 developers. Analyzing this data, we report how often developers use which types of code search tools, the types of queries and retrieval strategies they use, and the patterns of developer feature location behavior following code search. The results of the study suggest (1) a need for helping developers devise better code search queries; (2) a lack of adoption of niche code search tools; (3) a need for code search tools to handle both lookup and exploratory queries; and (4) a need for better integration between code search, structured navigation, and debugging tools in feature location tasks.

14.
Existing developer recommendation algorithms mine the explicit information of tasks and developers, extract explicit features of both, and recommend developers for a given task. However, because the descriptive information within this explicit data is subjective and often imprecise, the performance of existing explicit-feature-based developer recommendation algorithms falls short of expectations. Besides large amounts of imprecise descriptive information, crowdsourced software development platforms also contain objective and comparatively accurate "task-developer" performance information...

15.
Rajlich  V. Wilde  N. Buckellew  M. Page  H. 《Computer》2001,34(9):24-28
To work effectively with legacy code, software engineers need to understand a legacy computer program's culture - the combination of the programmer's background, the hardware environment and the programming techniques that guided its creation. Software systems typically pass through a series of stages. During the initial development stage, software developers create a first functioning version of the code. An evolution stage follows, during which developmental efforts focus on extending system capabilities to meet user needs. During the servicing stage, only minor repairs and simple functional changes are possible. In the phase-out stage, the system is essentially frozen, but it still produces value. Finally, during the close-down stage, the developers withdraw the system and possibly replace it. Most of the tasks in the evolution and servicing phases require program comprehension - understanding how and why a software program functions in order to work with it effectively. Effective comprehension requires viewing a legacy program not simply as a product of inefficiency or stupidity, but instead as an artifact of the circumstances in which it was developed. This information can be an important factor in determining appropriate strategies for the software program's transition from the evolution stage to the servicing or phase-out stage.

16.
Much of software developers' time is spent understanding unfamiliar code. To better understand how developers gain this understanding and how software development environments might be involved, a study was performed in which developers were given an unfamiliar program and asked to work on two debugging tasks and three enhancement tasks for 70 minutes. The study found that developers interleaved three activities. They began by searching for relevant code both manually and using search tools; however, they based their searches on limited and misrepresentative cues in the code, environment, and executing program, often leading to failed searches. When developers found relevant code, they followed its incoming and outgoing dependencies, often returning to it and navigating its other dependencies; while doing so, however, Eclipse's navigational tools caused significant overhead. Developers collected code and other information that they believed would be necessary to edit, duplicate, or otherwise refer to later by encoding it in the interactive state of Eclipse's package explorer, file tabs, and scroll bars. However, developers lost track of relevant code as these interfaces were used for other tasks, and developers were forced to find it again. These issues caused developers to spend, on average, 35 percent of their time performing the mechanics of navigation within and between source files. These observations suggest a new model of program understanding grounded in theories of information foraging and suggest ideas for tools that help developers seek, relate, and collect information in a more effective and explicit manner.

17.
Context: Developers often need to find answers to questions regarding the evolution of a system when working on its code base. While their information needs require data analysis pertaining to different repository types, the source code repository has a pivotal role for program comprehension tasks. However, the coarse-grained nature of the data stored by commit-based software configuration management systems often makes it challenging for a developer to search for an answer. Objective: We present Replay, an Eclipse plug-in that allows developers to explore the change history of a system by capturing the changes at a finer granularity level than commits, and by replaying the past changes chronologically inside the integrated development environment, with the source code at hand. Method: We conducted a controlled experiment to empirically assess whether Replay outperforms a baseline (SVN client in Eclipse) in helping developers answer common questions related to software evolution. Results: The experiment shows that Replay leads to a decrease in completion time with respect to a set of software evolution comprehension tasks. Conclusion: We conclude that there are benefits in using Replay over state-of-the-practice tools for answering questions that require fine-grained change information and those related to recent changes.

18.
This study investigates the cognitive strategies of 80 participants as they engaged in two researcher-defined tasks and two participant-defined information-seeking tasks using the WWW. Each researcher-defined task and participant-defined task was further divided into a directed search task and a general-purpose browsing task. On the basis of retrospective verbal protocols, log-file data and observations, 12 cognitive search strategies were identified and explained. The differences in cognitive search strategy choice between researcher-defined and participant-defined tasks and between directed search and general-purpose tasks were examined using correspondence analysis. These cognitive search strategies were compared to earlier investigations of search strategies on the WWW.

Relevance to industry

Describing information-seeking behaviours and cognitive search strategies in detail provides website developers and search engine developers with valuable insights into how users seek (and find) information of value to them. Using this information, website developers might gain some knowledge as to how to best represent the content and navigational properties of websites. Search engine developers might wish to make the search and collection strategies more transparent to users. There are also design implications for the designers of web browsers.


19.
Modern software development builds on the reuse of external Web services as a promising way for developers to deliver feature-rich software by composing existing Web service Application Programming Interfaces, known as APIs. With the overwhelming number of Web services available on the Internet, finding the appropriate Web services for automatic service composition, i.e., mashup creation, has become a time-consuming, difficult, and error-prone task for software designers and developers when done manually. To help developers, a number of approaches and techniques have been proposed to automatically recommend Web services. However, they mostly focus on recommending individual services. In practice, though, service APIs are intended to be used together, forming a social network between different APIs, and thus should be recommended collectively. In this paper, we introduce a novel automated approach, called SerFinder, to recommend service sets for automatic mashup creation. We formulate service set recommendation as a multi-objective combinatorial problem and use the non-dominated sorting genetic algorithm (NSGA-II) as a search method to extract an optimal set of services for creating a given mashup. We aim at guiding the search process towards an adequate compromise among three objectives: (i) maximize the services' historical co-usage, (ii) maximize the services' functional match with the mashup requirements, and (iii) maximize the services' functional diversity. We perform a large-scale empirical experiment to evaluate SerFinder on a benchmark of real-world mashups and services. The obtained results demonstrate the effectiveness of SerFinder in comparison with recent approaches for mashup creation and service recommendation. The statistical analysis provides empirical evidence that SerFinder significantly outperforms four state-of-the-art, widely used multi-objective search-based algorithms as well as random search.
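
A compact sketch of service-set selection under the three stated objectives, using NSGA-II as implemented in the DEAP library; the candidate services, match scores, categories, and co-usage counts are hypothetical placeholders, not SerFinder's code.

```python
# Select a subset of candidate services (0/1 inclusion vector) maximizing
# (co-usage, functional match, functional diversity) with NSGA-II.
import random
from deap import algorithms, base, creator, tools

N = 6                                          # candidate services
match = [0.9, 0.4, 0.7, 0.2, 0.8, 0.5]         # match with mashup requirements
category = [0, 1, 0, 2, 1, 2]                  # functional category per API
co_usage = [[0] * N for _ in range(N)]
co_usage[0][2] = co_usage[2][0] = 5            # historical co-usage counts
co_usage[0][4] = co_usage[4][0] = 3

def evaluate(ind):
    chosen = [i for i, bit in enumerate(ind) if bit]
    cu = sum(co_usage[i][j] for i in chosen for j in chosen if i < j)
    fm = sum(match[i] for i in chosen)
    dv = len({category[i] for i in chosen})
    return cu, fm, dv                          # maximize all three

creator.create("FitnessMulti", base.Fitness, weights=(1.0, 1.0, 1.0))
creator.create("Individual", list, fitness=creator.FitnessMulti)

toolbox = base.Toolbox()
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 lambda: random.randint(0, 1), n=N)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.1)
toolbox.register("select", tools.selNSGA2)

pop = toolbox.population(n=40)
pop = algorithms.eaMuPlusLambda(pop, toolbox, mu=40, lambda_=40,
                                cxpb=0.7, mutpb=0.3, ngen=30,
                                verbose=False)[0]
print(tools.selNSGA2(pop, 3))                  # a few Pareto-front service sets
```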

20.
Interaction traces (ITs) are developers' logs collected while developers maintain or evolve software systems. Researchers use ITs to study developers' editing styles and to recommend relevant program entities when developers perform changes on source code. However, when using ITs, they make assumptions that may not necessarily be true. This article assesses the extent to which researchers' assumptions are true and examines noise in ITs. It also investigates the impact of noise on previous studies. This article describes a quasi-experiment collecting both Mylyn ITs and video-screen captures while 15 participants performed four realistic software maintenance tasks. It assesses the noise in ITs by comparing Mylyn ITs against the ITs obtained from the video captures. It proposes an approach to correct noise and uses this approach to revisit previous studies. The collected data show that Mylyn ITs can miss, on average, about 6% of the time spent by participants performing tasks and can contain, on average, about 85% false edit events, which are not real changes to the source code. The noise-correction approach reveals misclassification in about 45% of ITs. It can improve the precision and recall of recommendation systems from the literature by up to 56% and 62%, respectively. Mylyn ITs include noise that biases subsequent studies and thus can prevent researchers from assisting developers effectively; they must be cleaned before use in studies and recommendation systems. The results on Mylyn ITs open new perspectives for the investigation of noise in ITs generated by other monitoring tools such as DFlow, FeedBag, and Mimec, and for future studies based on ITs.
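
A minimal sketch of the precision/recall comparison implied here: evaluate recommendations derived from raw versus cleaned traces against the entities a developer actually edited. All file names and sets are hypothetical.

```python
# Compare recommendation quality before and after trace cleaning.

def precision_recall(recommended, actually_edited):
    """Set-based precision and recall of a recommendation list."""
    hits = recommended & actually_edited
    precision = len(hits) / len(recommended) if recommended else 0.0
    recall = len(hits) / len(actually_edited) if actually_edited else 0.0
    return precision, recall

actually_edited = {"Parser.java", "Lexer.java", "Token.java"}
raw_trace_recs = {"Parser.java", "Editor.java", "Utils.java", "Log.java"}
cleaned_recs = {"Parser.java", "Lexer.java", "Utils.java"}

print(precision_recall(raw_trace_recs, actually_edited))   # (0.25, ≈0.33)
print(precision_recall(cleaned_recs, actually_edited))     # (≈0.67, ≈0.67)
```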
