Abstract: Large repositories of source code available over the Internet, or within large organizations, create new challenges and opportunities
for data mining and statistical machine learning. Here we first develop Sourcerer, an infrastructure for the automated crawling,
parsing, fingerprinting, and database storage of open source software at Internet scale. In one experiment, we gather 4,632
Java projects from SourceForge and Apache totaling over 38 million lines of code from 9,250 developers. Simple statistical
analyses of the data first reveal robust power-law behavior for package, method call, and lexical containment distributions.
We then develop and apply unsupervised, probabilistic, topic and author-topic (AT) models to automatically discover the topics
embedded in the code and extract topic-word, document-topic, and AT distributions. In addition to serving as a convenient
summary for program function and developer activities, these and other related distributions provide a statistical and information-theoretic
basis for quantifying and analyzing source file similarity, developer similarity and competence, topic scattering, and document
tangling, with direct applications to software engineering and software development staffing. Finally, by combining software
textual content with structural information captured by our CodeRank approach, we are able to significantly improve software
retrieval performance, increasing the area under the curve (AUC) retrieval metric to 0.92, roughly 10–30% better than previous
approaches based on text alone. A prototype of the system is available at: .
Erik Linstead, Sushil Bajracharya, and Trung Ngo have contributed equally to this work.