Similar Documents
20 similar documents found.
1.
Large repositories of source code available over the Internet, or within large organizations, create new challenges and opportunities for data mining and statistical machine learning. Here we first develop Sourcerer, an infrastructure for the automated crawling, parsing, fingerprinting, and database storage of open source software on an Internet scale. In one experiment, we gather 4,632 Java projects from SourceForge and Apache totaling over 38 million lines of code from 9,250 developers. Simple statistical analyses of the data first reveal robust power-law behavior for package, method call, and lexical containment distributions. We then develop and apply unsupervised, probabilistic, topic and author-topic (AT) models to automatically discover the topics embedded in the code and extract topic-word, document-topic, and AT distributions. In addition to serving as a convenient summary for program function and developer activities, these and other related distributions provide a statistical and information-theoretic basis for quantifying and analyzing source file similarity, developer similarity and competence, topic scattering, and document tangling, with direct applications to software engineering and software development staffing. Finally, by combining software textual content with structural information captured by our CodeRank approach, we are able to significantly improve software retrieval performance, increasing the area under the curve (AUC) retrieval metric to 0.92, roughly 10–30% better than previous approaches based on text alone. A prototype of the system is available at: . Erik Linstead, Sushil Bajracharya, and Trung Ngo have contributed equally to this work.
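The topic-modeling step described above maps naturally onto off-the-shelf LDA tooling. Below is a minimal sketch using gensim, assuming source files have already been reduced to bags of identifier and comment tokens; the toy documents and topic count are illustrative, not Sourcerer's actual pipeline.

```python
# Minimal sketch: discovering topics in source code with LDA (gensim).
# Assumes each source file has been reduced to a bag of identifier and
# comment tokens; the documents below are stand-ins, not Sourcerer's data.
from gensim import corpora, models

docs = [
    ["socket", "connect", "send", "recv", "timeout"],
    ["button", "click", "listener", "panel", "layout"],
    ["socket", "bind", "listen", "accept", "thread"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Fit an LDA model; num_topics would be far larger at Internet scale.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Document-topic distributions: a compact summary of each file's function.
for i, bow in enumerate(corpus):
    print(i, lda.get_document_topics(bow))
```

An author-topic variant would additionally condition topic assignments on the committing developer, yielding the AT distributions the abstract mentions.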

2.
A software repository provides a central information source for understanding and reengineering code in a software project. Complex reverse engineering tools can be built by analyzing information stored in the repository without reparsing the original source code. The most critical design aspect of a repository is its data model, which directly affects how effectively the repository supports various analysis tasks. This paper focuses on the design rationales behind a data model for a C++ software repository that supports reachability analysis and dead code detection at the declaration level. These two tasks are frequently needed in large software projects to help remove excess software baggage, select regression tests and support software reuse studies. The language complexity introduced by class inheritance, friendship, and template instantiation in C++ requires a carefully designed model to catch all necessary dependencies for correct reachability analysis. We examine the major design decisions and their consequences in our model and illustrate how future software repositories can be evaluated for completeness at a selected abstraction level. Examples are given to illustrate how our model also supports variants of reachability analysis: impact analysis, class visibility analysis, and dead code detection. Finally, we discuss the implementation and experience of our analysis tools on a few C++ software projects
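Once declarations and their dependency edges are extracted into the repository, reachability analysis and dead code detection reduce to graph traversal. A minimal sketch follows, assuming a dependency map keyed by declaration name; the graph and root set are invented for illustration.

```python
# Minimal sketch: declaration-level reachability and dead-code detection.
# Nodes are declarations; edges are "uses" dependencies mined from the
# repository (calls, inheritance, friendship, template instantiation).
from collections import deque

deps = {
    "main":          ["Parser::parse", "Logger::log"],
    "Parser::parse": ["Token::next"],
    "Logger::log":   [],
    "Token::next":   [],
    "Debug::dump":   ["Logger::log"],   # never reached from main
}

def reachable(roots, deps):
    seen, queue = set(roots), deque(roots)
    while queue:
        node = queue.popleft()
        for dep in deps.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

live = reachable(["main"], deps)
dead = set(deps) - live          # dead-code candidates
print(sorted(dead))              # ['Debug::dump']
```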

3.
Context: Developers often need to find answers to questions regarding the evolution of a system when working on its code base. While their information needs require data analysis pertaining to different repository types, the source code repository has a pivotal role for program comprehension tasks. However, the coarse-grained nature of the data stored by commit-based software configuration management systems often makes it challenging for a developer to search for an answer.
Objective: We present Replay, an Eclipse plug-in that allows developers to explore the change history of a system by capturing changes at a finer granularity level than commits, and by replaying the past changes chronologically inside the integrated development environment, with the source code at hand.
Method: We conducted a controlled experiment to empirically assess whether Replay outperforms a baseline (SVN client in Eclipse) in helping developers answer common questions related to software evolution.
Results: The experiment shows that Replay leads to a decrease in completion time with respect to a set of software evolution comprehension tasks.
Conclusion: We conclude that there are benefits in using Replay over state-of-the-practice tools for answering questions that require fine-grained change information and those related to recent changes.
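The replay idea itself can be illustrated in a few lines: record edits at a granularity finer than commits, order them by timestamp, and re-apply them one step at a time. The sketch below is a toy illustration under that assumption, not the plug-in's implementation.

```python
# Toy sketch of the replay idea: fine-grained edits (finer than commits)
# are ordered by timestamp and re-applied one at a time, so a developer
# can step through the history with the evolving source at hand.
changes = [  # (timestamp, file, line_no, new_text) -- illustrative records
    (1, "Foo.java", 3, "int total = 0;"),
    (2, "Foo.java", 4, "total += x;"),
    (3, "Bar.java", 1, "class Bar {}"),
]

files = {"Foo.java": [""] * 10, "Bar.java": [""] * 10}

def replay(changes, files, upto):
    """Re-apply all changes with timestamp <= upto, in chronological order."""
    for ts, fname, line, text in sorted(changes):
        if ts > upto:
            break
        files[fname][line - 1] = text
    return files

state = replay(changes, files, upto=2)  # step to the second change
print(state["Foo.java"][:5])
```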

4.
周光有  谢琦  余啸 《软件学报》2024,35(6):2863-2879
Code search is an important branch at the intersection of natural language processing and software engineering. Developing efficient code search algorithms can significantly improve code reuse and thereby the productivity of software developers. The code search task takes a natural-language description of a code snippet's functionality as input and retrieves relevant code snippets from a large codebase. Although the sequence-model-based code search method DeepCS achieves good results, it cannot capture the deep semantics of code. The graph-embedding-based method GraphSearchNet alleviates this problem, but it performs no fine-grained matching between code and text and ignores the global relationship between the code graph and the text graph. To address these limitations, this paper proposes a code search method based on relational graph convolutional networks, which encodes the constructed text and code graphs, performs fine-grained node-level matching between text queries and code snippets, and applies a neural tensor network to capture their global relationship. Experimental results on two public datasets show that the proposed method achieves higher search accuracy than the state-of-the-art baseline models DeepCS and GraphSearchNet.
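The global-matching component can be made concrete with the standard neural tensor network score s(c, q) = u^T tanh(c^T W[1..k] q + V [c; q] + b). Below is a minimal numpy sketch, assuming pooled graph embeddings for the code and text sides; dimensions and initialization are illustrative, not the paper's configuration.

```python
# Minimal sketch: scoring a (code graph, text graph) pair with a neural
# tensor network. Shapes and random initialization are illustrative only.
import numpy as np

d, k = 8, 4                      # embedding size, tensor slices
rng = np.random.default_rng(0)
W = rng.normal(size=(k, d, d))   # bilinear tensor
V = rng.normal(size=(k, 2 * d))  # linear term
b = rng.normal(size=k)
u = rng.normal(size=k)           # output weights

def ntn_score(c, q):
    bilinear = np.array([c @ W[i] @ q for i in range(k)])
    linear = V @ np.concatenate([c, q])
    return u @ np.tanh(bilinear + linear + b)

code_vec = rng.normal(size=d)    # pooled code-graph embedding (e.g. from an R-GCN)
text_vec = rng.normal(size=d)    # pooled text-graph embedding
print(ntn_score(code_vec, text_vec))
```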

5.
Code quality measurement is an important research direction in software quality analysis. Static analysis methods have attracted much attention under the current trend toward networked and service-oriented software, because they are low-cost, easy to implement, and independent of a program's particular runtime environment. This paper studies quality measurement for Java code: it uses the Ant tool to integrate various open-source static testing tools and defines a comprehensive static-analysis-based evaluation scheme for Java code quality, supporting integrated checks covering code size, coding-standard compliance, maintainability, extensibility, and potential risks, and providing a practical code quality evaluation method for a project's developers, managers, and users.
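A composite evaluation of this kind can be pictured as a weighted aggregation of per-tool findings. The sketch below is hypothetical: the metric names, weights, and saturation thresholds are invented for illustration and are not the paper's actual scheme.

```python
# Hypothetical sketch: combining static-analysis results into one
# composite Java code-quality score. Metric names, weights, and the
# normalization are illustrative, not the paper's actual scheme.
reports = {                     # per-KLOC findings from different tools
    "style_violations":  12.0,  # e.g. a Checkstyle-like tool
    "potential_bugs":     3.5,  # e.g. a FindBugs-like tool
    "copy_paste_blocks":  1.2,  # e.g. a CPD-like tool
}
weights = {"style_violations": 0.2, "potential_bugs": 0.5,
           "copy_paste_blocks": 0.3}
worst = {"style_violations": 50.0, "potential_bugs": 10.0,
         "copy_paste_blocks": 5.0}   # assumed saturation points

def quality_score(reports):
    """Map findings onto [0, 100]: fewer findings -> higher score."""
    score = 0.0
    for metric, value in reports.items():
        penalty = min(value / worst[metric], 1.0)
        score += weights[metric] * (1.0 - penalty) * 100.0
    return score

print(round(quality_score(reports), 1))
```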

6.
The emergence of code search engines (CSEs) and the growing number of open-source projects on the Internet allow software developers to reuse existing source code extensively during development. However, most developers use CSEs merely for simple code searches. This paper presents a general pattern-mining process model that makes full use of CSEs, ensuring the quality of reused code by mining source-code patterns, and describes in detail three ways in which the model supports software quality improvement.

8.
Open source software provides organizations with new options for component-based development. As with commercial off-the-shelf software, project developers acquire open source components from a vendor (or a community) and use them "as is" or with minor modifications. Although they have access to the component's source code, developers aren't required to do anything with it. If the component's community is large and active, the adopting organization can expect frequent software updates, reasonable quality assurance, responsive bug fixes, and good technical support. Also, having freely available source code addresses two typical concerns with using COTS components: unknown implementation quality and long-term vendor support.

9.
Quantitative empirical software engineering research benefits mightily from processing large open source software repository datasets. The diversity of repository management tools and the long history of some projects render the task of working with those datasets a tedious and error-prone exercise. The Alitheia Core analysis platform preprocesses repository data into an intermediate format that allows researchers to provide custom analysis tools. Alitheia Core automatically distributes the processing load over multiple processors while enabling programmatic access to the raw data, the metadata, and the analysis results. The tool has been successfully applied to hundreds of medium to large-sized open-source projects, enabling large-scale empirical studies.
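The platform's division of labor, preprocessed intermediate records fed to pluggable analyses across multiple processors, can be illustrated with a small multiprocessing sketch. The record fields and the churn metric below are invented, not Alitheia Core's actual schema.

```python
# Illustrative sketch of the Alitheia Core idea: repository data is
# preprocessed into uniform records, and a custom analysis function is
# mapped over them in parallel. Record fields and the metric are invented.
from multiprocessing import Pool

records = [  # intermediate format: one dict per preprocessed revision
    {"project": "p1", "revision": 1, "files_changed": 3, "loc_delta": 120},
    {"project": "p1", "revision": 2, "files_changed": 1, "loc_delta": -40},
    {"project": "p2", "revision": 1, "files_changed": 7, "loc_delta": 300},
]

def churn(record):
    """A custom per-revision metric plugged into the platform."""
    return record["project"], abs(record["loc_delta"])

if __name__ == "__main__":
    with Pool() as pool:                 # spread work over processors
        for project, value in pool.map(churn, records):
            print(project, value)
```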

10.
11.
The Tools We Use     
Spinellis  D. 《Software, IEEE》2007,24(4):20-21
What's the state of the art in the tools we use to build software? To answer this question, I let a powerful server build from source code about 7,000 open source packages over a period of a month. The packages I built form a subset of the FreeBSD operating system ports collection, comprising a wide spectrum of application domains: from desktop utilities and biology applications to databases and development tools. The collection is representative of modern software because, unlike, say, a random sample of SourceForge.net projects, FreeBSD developers have found these programs useful enough to port to FreeBSD.

12.
Developers commonly make use of a web search engine such as Google to locate online resources to improve their productivity. A better understanding of what developers search for could help us understand their behaviors and the problems that they meet during the software development process. Unfortunately, we have a limited understanding of what developers frequently search for and of the search tasks that they often find challenging. To address this gap, we collected search queries from 60 developers and surveyed 235 software engineers from more than 21 countries across five continents. In particular, we asked our survey participants to rate the frequency and difficulty of 34 search tasks which are grouped along the following seven dimensions: general search, debugging and bug fixing, programming, third party code reuse, tools, database, and testing. We find that searching for explanations for unknown terminologies, explanations for exceptions/error messages (e.g., HTTP 404), reusable code snippets, solutions to common programming bugs, and suitable third-party libraries/services are the most frequent search tasks that developers perform, while searching for solutions to performance bugs, solutions to multi-threading bugs, public datasets to test newly developed algorithms or systems, reusable code snippets, best industrial practices, database optimization solutions, solutions to security bugs, and solutions to software configuration bugs are the search tasks that developers find most difficult. Our study sheds light on why practitioners often perform some of these tasks and why they find some of them to be challenging. We also discuss the implications of our findings for future research in several areas, e.g., code search engines, domain-specific search engines, and automated generation and refinement of search queries.

13.
Much of software developers' time is spent understanding unfamiliar code. To better understand how developers gain this understanding and how software development environments might be involved, a study was performed in which developers were given an unfamiliar program and asked to work on two debugging tasks and three enhancement tasks for 70 minutes. The study found that developers interleaved three activities. They began by searching for relevant code both manually and using search tools; however, they based their searches on limited and misrepresentative cues in the code, environment, and executing program, often leading to failed searches. When developers found relevant code, they followed its incoming and outgoing dependencies, often returning to it and navigating its other dependencies; while doing so, however, Eclipse's navigational tools caused significant overhead. Developers collected code and other information that they believed would be necessary to edit, duplicate, or otherwise refer to later by encoding it in the interactive state of Eclipse's package explorer, file tabs, and scroll bars. However, developers lost track of relevant code as these interfaces were used for other tasks, and were forced to find it again. These issues caused developers to spend, on average, 35 percent of their time performing the mechanics of navigation within and between source files. These observations suggest a new model of program understanding grounded in theories of information foraging, and suggest ideas for tools that help developers seek, relate, and collect information in a more effective and explicit manner.

14.
As developers modify software entities such as functions or variables to introduce new features, enhance old ones, or fix bugs, they must ensure that other entities in the software system are updated to be consistent with these new changes. Many hard-to-find bugs are introduced by developers who did not notice dependencies between entities and failed to propagate changes correctly. Most modern development environments offer tools to assist developers in propagating changes. For example, dependency browsers show static code dependencies between source code entities. Other sources of information, such as historical co-change or code layout information, could be used by tools to support developers in propagating changes. We present the Development Replay (DR) approach, which empirically assesses and compares the effectiveness of several not-yet-existing change propagation tools by reenacting the changes stored in source control repositories using these tools. We present a case study of five large open source systems with a total of over 40 years of development history. Our empirical results show that historical co-change information recovered from source control repositories, along with code layout information, can guide developers in propagating changes better than simple static dependency information.
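The co-change signal that the DR approach evaluates is straightforward to mine: count how often pairs of entities changed together in past commits, then rank candidates when one of them changes again. A minimal sketch, with invented commit data:

```python
# Minimal sketch: mining historical co-change to suggest where an edit
# should propagate. Commits are illustrative sets of changed entities.
from collections import Counter
from itertools import combinations

commits = [
    {"parse()", "lexer()", "token.h"},
    {"parse()", "token.h"},
    {"render()", "canvas.h"},
    {"parse()", "lexer()"},
]

cochange = Counter()
for commit in commits:
    for a, b in combinations(sorted(commit), 2):
        cochange[(a, b)] += 1

def suggest(entity, k=3):
    """Rank entities that historically changed together with `entity`."""
    scores = Counter()
    for (a, b), n in cochange.items():
        if a == entity:
            scores[b] += n
        elif b == entity:
            scores[a] += n
    return scores.most_common(k)

print(suggest("parse()"))   # lexer() and token.h each co-changed twice
```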

15.
Our current understanding of how programmers perform feature location during software maintenance is based on controlled studies or interviews, which are inherently limited in size, scope, and realism. Replicating controlled studies in the field can both explore the findings of these studies in wider contexts and study new factors that have not been previously encountered in the laboratory setting. In this paper, we report on a field study about how software developers perform feature location within source code during their daily development activities. Our study is based on two complementary field data sets: one that reflects complete IDE activity of 67 professional developers over approximately one month, and the other that reflects usage of an IR-based code search tool by nearly 600 developers. Analyzing this data, we report results on how often developers use which type of code search tools, on the types of queries and retrieval strategies used by developers, and on patterns of developer feature location behavior following code search. The results of the study suggest that there is (1) a need for helping developers to devise better code search queries; (2) a lack of adoption of niche code search tools; (3) a need for code search tools to handle both lookup and exploratory queries; and (4) a need for better integration between code search, structured navigation, and debugging tools in feature location tasks.

16.
Years of software development have accumulated a large amount of source code, and many code search tools have been developed; however, existing tools are not precise enough and are therefore rarely used. This paper proposes an efficient source-code search algorithm that improves the accuracy of code search by identifying the relationship between query statements and APIs. Based on this algorithm, we implement a search tool for C# source code and evaluate the algorithm through objective experiments and a user study. Experimental results show that the proposed search algorithm is highly effective.
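One simple way to exploit the query-API relationship described above is lexical overlap between query terms and the API names a snippet calls. The sketch below is hypothetical: the snippet index, tokenization, and scoring are invented for illustration and are not the paper's algorithm.

```python
# Hypothetical sketch: ranking code snippets by the overlap between query
# terms and the API names each snippet calls. Index and scoring invented.
import re

snippets = {
    "s1": ["File.ReadAllText", "Path.Combine"],
    "s2": ["HttpClient.GetAsync", "JsonSerializer.Deserialize"],
    "s3": ["File.WriteAllText", "StreamWriter.Write"],
}

def terms(text):
    # split camel case and punctuation into lowercase terms
    return {t.lower() for t in re.findall(r"[A-Z]?[a-z]+", text)}

def search(query, snippets, k=2):
    q = terms(query)
    scored = []
    for sid, apis in snippets.items():
        api_terms = set().union(*(terms(a) for a in apis))
        scored.append((len(q & api_terms), sid))
    return sorted(scored, reverse=True)[:k]

print(search("read text from a file", snippets))   # s1 ranks first
```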

17.
Forking is the creation of a new software repository by copying another repository. Though forking is controversial in the traditional open source software (OSS) community, it is encouraged and is a built-in feature in GitHub. Developers freely fork repositories, use the code as their own, and make changes. A deep understanding of repository forking can provide important insights for the OSS community and GitHub. In this paper, we explore why and how developers fork what from whom in GitHub. We collect a dataset containing 236,344 developers and 1,841,324 forks. We conduct surveys and analyze the programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features, keep copies, etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42% of the developers we surveyed agree that an automated recommendation tool would be useful in helping them pick repositories to fork, while more than 44.4% of developers do not value a recommendation tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer's preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from creators. In comparison with unattractive repository owners, attractive repository owners have a higher percentage of organizations, more followers, and earlier registration in GitHub. Our results show that forking is mainly used for making contributions to original repositories and is beneficial for the OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub to recommend repositories.

18.
Advances in free and open source software have driven the development of open-source web GIS, which now exhibits diverse technical architectures and application models. To accurately characterize its current technical systems, functional features, and application domains, this paper surveys the mainstream open-source web GIS software, development tools, system characteristics, development models, and typical applications, and summarizes current development trends, providing a technical reference and practical experience for the development and application of domestic web GIS systems.

19.
Edwards  J. 《Computer》1998,31(10):11-13
Whether it's called freeware or open source software, the popularity of software that is either given away or provided at a nominal price along with its source code is growing rapidly. There are numerous examples of popular open source software, including the Linux operating system and the market-leading Apache Web server. With its increasing popularity, freeware is beginning to challenge long-held concepts about software development and distribution. Developers give away software for a number of reasons. Netscape recently turned Navigator into open source software in an effort to regain market share captured by Microsoft's Internet Explorer (which is also distributed at no cost but without the source code) and to boost the sale of Netscape's server software and related products. Developers also give away software and source code to get users to sample their creations or to encourage independent programmers to enhance a product. In addition, many developers give away software because they support an open, independent "freeware culture" in which software is available to all developers, who can then use, adapt, and improve it as they see fit.

20.
We describe a method to use the source code change history of a software project to drive and help to refine the search for bugs. Based on the data retrieved from the source code repository, we implement a static source code checker that searches for a commonly fixed bug and uses information automatically mined from the source code repository to refine its results. By applying our tool, we have identified a total of 178 warnings that are likely bugs in the Apache Web server source code and a total of 546 warnings that are likely bugs in Wine, an open-source implementation of the Windows API. We show that our technique is more effective than the same static analysis that does not use historical data from the source code repository.
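The underlying idea, letting historical fixes both define a bug pattern and filter the warnings that a usage-convention checker produces, can be sketched as follows; the pattern (an unchecked return value), call-site counts, and fix counts are invented for illustration.

```python
# Illustrative sketch of a history-guided checker: if most call sites of
# a function check its return value, and past fixes added such checks,
# the unchecked call sites are flagged as likely bugs. Data is invented.
call_sites = {           # function -> (checked uses, unchecked locations)
    "lock()":   (42, ["srv.c:120", "srv.c:377"]),
    "malloc()": (15, ["util.c:88"]),
    "printf()": (2,  ["log.c:10", "log.c:55", "log.c:90"]),
}

fix_history = {"lock()": 9, "malloc()": 4, "printf()": 0}  # fixes that added a check

def warnings(call_sites, fix_history, min_ratio=0.8):
    out = []
    for fn, (checked, unchecked) in call_sites.items():
        ratio = checked / (checked + len(unchecked))
        # require both a strong usage convention and historical fixes
        if ratio >= min_ratio and fix_history.get(fn, 0) > 0:
            out.extend((fn, loc) for loc in unchecked)
    return out

for fn, loc in warnings(call_sites, fix_history):
    print(f"{loc}: return value of {fn} is not checked")
```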
