Found 20 similar documents; search took 31 ms.
1.
The Web has become an important knowledge source for resolving system installation problems and for working around software
bugs. In particular, web-based bug tracking systems offer large archives of useful troubleshooting advice. However, searching
bug tracking systems can be time consuming since generic search engines do not take advantage of the semi-structured knowledge
recorded in bug tracking systems. We present work towards a semantics-based bug search system which tries to take advantage
of the semi-structured data found in many widely used bug tracking systems. We present a study of bug tracking systems and
we describe how to crawl them in order to extract semi-structured data. We describe a unified data model to store bug tracking
data. The model has been derived from the analysis of the most popular systems. Finally, we describe how the crawled data
can be fed into a semantic search engine to facilitate semantic search.
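As a rough illustration of the unified data model described above, the sketch below normalizes a Bugzilla-style record into a single shared shape. The field names and the input format are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Unified record for one bug, regardless of the source tracker."""
    bug_id: str
    title: str
    status: str
    product: str
    comments: list = field(default_factory=list)

def from_bugzilla(raw: dict) -> BugReport:
    """Map a Bugzilla-style dict onto the unified model (hypothetical keys)."""
    return BugReport(
        bug_id=str(raw["id"]),
        title=raw["summary"],
        status=raw["bug_status"].lower(),
        product=raw["product"],
        comments=[c["text"] for c in raw.get("comments", [])],
    )

bug = from_bugzilla({
    "id": 4242,
    "summary": "Crash on startup",
    "bug_status": "RESOLVED",
    "product": "Example",
    "comments": [{"text": "Fixed in r100"}],
})
print(bug.status)  # resolved
```

A crawler per tracker family would emit records like this, which a semantic search engine can then index uniformly.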
2.
Twenty-seven automatically extractable bug fix patterns are defined using the syntax components and context of the source
code involved in bug fix changes. Bug fix patterns are extracted from the configuration management repositories of seven open
source projects, all written in Java (Eclipse, Columba, JEdit, Scarab, ArgoUML, Lucene, and MegaMek). Defined bug fix patterns
cover 45.7% to 63.3% of the total bug fix hunk pairs in these projects. The frequency of occurrence of each bug fix pattern
is computed across all projects. The most common individual patterns are MC-DAP (method call with different actual parameter
values) at 14.9–25.5%, IF-CC (change in if conditional) at 5.6–18.6%, and AS-CE (change of assignment expression) at 6.0–14.2%.
A correlation analysis on the extracted pattern instances on the seven projects shows that six have very similar bug fix pattern
frequencies. Analysis of if conditional bug fix sub-patterns shows a trend towards increasing conditional complexity in if conditional fixes. Analysis of five developers in the Eclipse projects shows overall consistency with project-level bug fix
pattern frequencies, as well as distinct variations among developers in their rates of producing various bug patterns. Overall,
data in the paper suggest that developers have difficulty with specific code situations at surprisingly consistent rates.
There appear to be broad mechanisms causing the injection of bugs that are largely independent of the type of software being
produced.
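A pattern such as IF-CC can be illustrated with a toy single-line detector. The regex matching below is a drastic simplification of the paper's syntax-aware extraction and is only a sketch:

```python
import re

IF_RE = re.compile(r"if\s*\((.*)\)")

def classify_hunk(old_line: str, new_line: str) -> str:
    """Label a one-line change IF-CC if only the if-condition differs,
    otherwise OTHER. Pattern name follows the paper; the matching
    logic here is a deliberate simplification."""
    old_m, new_m = IF_RE.search(old_line), IF_RE.search(new_line)
    if old_m and new_m and old_m.group(1) != new_m.group(1):
        return "IF-CC"
    return "OTHER"

print(classify_hunk("if (x > 0) {", "if (x >= 0) {"))  # IF-CC
```

A real extractor would parse both hunk versions into syntax trees and compare components, rather than matching text.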
3.
The ability to predict the time required to repair software defects is important for both software quality management and
maintenance. Estimated repair times can be used to improve the reliability and time-to-market of software under development.
This paper presents an empirical approach to predicting defect repair times by constructing models that use well-established
machine learning algorithms and defect data from past software defect reports. We describe, as a case study, the analysis
of defect reports collected during the development of a large medical software system. Our predictive models give accuracies
as high as 93.44%, despite the limitations of the available data. We present the proposed methodology along with detailed
experimental results, which include comparisons with other analytical modeling approaches.
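The prediction step can be sketched with a minimal nearest-neighbour classifier over past defect reports. The feature encoding and repair-time buckets below are invented for illustration; the paper's actual models and data are not reproduced:

```python
def predict_repair_bucket(train, query):
    """1-nearest-neighbour over historical defect reports.
    Each training item is (features, bucket); features are hypothetical
    numeric encodings of report fields such as severity and component size."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist(item[0], query))[1]

history = [
    ((3, 1), "fast"),  # low severity, small component
    ((1, 5), "slow"),  # high severity, large component
]
print(predict_repair_bucket(history, (1, 4)))  # slow
```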
4.
The benefits of software reuse have been studied for many years. Several previous studies have observed that reused software
has a lower defect density than newly built software. However, few studies have investigated empirically the reasons for this
phenomenon. To date, we have only the common-sense observation that, as software is reused over time, defect fixes will
accumulate and result in higher-quality software. This paper reports on an industrial case study in a large Norwegian Oil
and Gas company, involving a reused Java class framework and two applications that use that framework. We analyzed all trouble
reports from the use of the framework and the applications according to the Orthogonal Defect Classification (ODC), followed
by a qualitative Root Cause Analysis (RCA). The results reveal that the framework has a much lower defect density in total
than one application and a slightly higher defect density than the other. In addition, the defect densities of the most severe
defects of the reused framework are similar to those of the applications that are reusing it. The results of the ODC and RCA
analyses reveal that systematic reuse (i.e. clearly defined and stable requirements, better design, hesitance to change, and
solid testing) leads to lower defect densities of the functional-type defects in the reused framework than in applications
that are reusing it. However, the different “nature” of the framework and the applications (e.g. interaction with other software,
number and complexity of business logic, and functionality of the software) may confound the causal relationship between systematic
reuse and the lower defect density of the reused software. Using the results of the study as a basis, we present an improved
overall cause–effect model between systematic reuse and lower defect density that will facilitate further studies and implementations
of software reuse.
5.
Fault prediction by negative binomial regression models is shown to be effective for four large production software systems
from industry. A model developed originally with data from systems with regularly scheduled releases was successfully adapted
to a system without releases to identify 20% of that system’s files that contained 75% of the faults. A model with a pre-specified
set of variables derived from earlier research was applied to three additional systems, and proved capable of identifying
averages of 81, 94 and 76% of the faults in those systems. A primary focus of this paper is to investigate the impact on predictive
accuracy of using data about the number of developers who access individual code units. For each system, including the cumulative
number of developers who had previously modified a file yielded no more than a modest improvement in predictive accuracy.
We conclude that while many factors can “spoil the broth” (lead to the release of software with too many defects), the number
of developers is not a major influence.
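The "top 20% of files containing 75% of the faults" style of result can be reproduced on toy data. The sketch below shows only the ranking-and-coverage step; the negative binomial regression that produces the predictions is not shown, and the file names and counts are invented:

```python
def fault_coverage(predicted, actual, fraction=0.2):
    """Rank files by predicted fault count, take the top `fraction`,
    and report what share of the actual faults those files contain."""
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    top = ranked[: max(1, int(len(ranked) * fraction))]
    return sum(actual[f] for f in top) / sum(actual.values())

# Hypothetical per-file predictions and ground-truth fault counts.
predicted = {"a.c": 9.0, "b.c": 4.0, "c.c": 1.0, "d.c": 0.5, "e.c": 0.1}
actual    = {"a.c": 12,  "b.c": 3,   "c.c": 1,   "d.c": 0,   "e.c": 0}
print(fault_coverage(predicted, actual))  # 0.75
```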
6.
This study proposes a software quality evaluation model and its computing algorithm. Existing software quality evaluation
models examine multiple characteristics and are characterized by factorial fuzziness. The relevant criteria of this model
are derived from the international norm ISO. The main objective of this paper is to propose a novel Analytic Hierarchy Process
(AHP) approach for addressing uncertainty and imprecision in service evaluation during pre-negotiation stages, where comparative
judgments of decision makers are represented as fuzzy triangular numbers. A new fuzzy prioritization method, which derives
crisp priorities from consistent and inconsistent fuzzy comparison matrices, is proposed. The Fuzzy Analytic Hierarchy Process
(FAHP)-based decision-making method can provide decision makers or buyers with a valuable guideline for evaluating software
quality. Importantly, the proposed model can aid users and developers in assessing software quality, making it highly applicable
for academic and commercial purposes.
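Deriving crisp priorities from a triangular-fuzzy comparison matrix can be sketched with Buckley's geometric-mean method and centroid defuzzification. This is one standard FAHP variant, not necessarily the paper's new prioritization method, and the judgment values are invented:

```python
import math

def fahp_priorities(matrix):
    """Crisp, normalized priorities from a matrix of triangular fuzzy
    numbers (l, m, u): fuzzy geometric mean per row, then centroid
    defuzzification (Buckley-style, shown as one common approach)."""
    n = len(matrix)
    gmeans = [
        tuple(math.prod(t[k] for t in row) ** (1 / n) for k in range(3))
        for row in matrix
    ]
    crisp = [(l + m + u) / 3 for l, m, u in gmeans]
    total = sum(crisp)
    return [c / total for c in crisp]

# 2x2 example: criterion A judged roughly twice as important as B.
matrix = [
    [(1, 1, 1), (1.5, 2.0, 2.5)],
    [(1 / 2.5, 1 / 2.0, 1 / 1.5), (1, 1, 1)],
]
w = fahp_priorities(matrix)
```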
7.
As developers modify software entities such as functions or variables to introduce new features, enhance old ones, or fix
bugs, they must ensure that other entities in the software system are updated to be consistent with these new changes. Many
hard to find bugs are introduced by developers who did not notice dependencies between entities, and failed to propagate changes
correctly. Most modern development environments offer tools to assist developers in propagating changes. For example, dependency
browsers show static code dependencies between source code entities. Other sources of information such as historical co-change
or code layout information could be used by tools to support developers in propagating changes. We present the Development Replay (DR) approach, which empirically assesses and compares the effectiveness of several not-yet-existing change propagation tools
by reenacting the changes stored in source control repositories using these tools. We present a case study of five large open
source systems with a total of over 40 years of development history. Our empirical results show that historical co-change
information recovered from source control repositories along with code layout information can guide developers in propagating
changes better than simple static dependency information.
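Mining historical co-change from a repository reduces, at its simplest, to counting how often entities change together and ranking by that count. A minimal sketch, with invented file names and commits:

```python
from collections import Counter
from itertools import combinations

def cochange_counts(commits):
    """Count how often each pair of entities changed together,
    given a list of per-commit change sets."""
    pairs = Counter()
    for changed in commits:
        for a, b in combinations(sorted(set(changed)), 2):
            pairs[(a, b)] += 1
    return pairs

def suggest(pairs, entity, k=2):
    """Entities most often co-changed with `entity`, best first."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == entity:
            scores[b] += n
        elif b == entity:
            scores[a] += n
    return [e for e, _ in scores.most_common(k)]

commits = [
    {"parser.c", "lexer.c"},
    {"parser.c", "lexer.c", "ast.c"},
    {"parser.c", "main.c"},
]
print(suggest(cochange_counts(commits), "parser.c"))  # lexer.c ranked first
```

A static dependency browser would instead inspect the code itself; the point made in the abstract is that counts like these, plus layout information, can rank candidates better.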
8.
The quality of software systems is determined in part by their configurations. Optimal configurations are desired
when the software is being deployed and during its lifetime. However, initial deployment and subsequent dynamic reconfiguration
of a software system is difficult because of the interplay of many interdependent factors, including cost, time, application
state, and system resources. As the size and complexity of software systems increase, procedures (manual or automated) that
assume a static software architecture and environment are becoming untenable. We have developed a novel technique for carrying
out the deployment and reconfiguration planning processes that leverages recent advances in the field of temporal planning.
We describe a tool called Planit, which manages the deployment and reconfiguration of a software system utilizing a temporal
planner. Given a model of the structure of a software system, the network upon which the system should be hosted, and a goal
configuration, Planit will use the temporal planner to devise possible deployments of the system. Given information about
changes in the state of the system, network and a revised goal, Planit will use the temporal planner to devise possible reconfigurations
of the system. We present the results of a case study in which Planit is applied to a system consisting of various components
that communicate across an application-level overlay network.
An earlier version of this paper was presented at ICTAI’03.
9.
3D computer graphics have been an important feature in games development since they were first introduced in the early 1980s, and
there is no doubt that 3D based content is often viewed as more attractive in games than the more abstract 2D graphics. Many
games publishers are keen to leverage their success in the console market into the mobile phone platform. However, the resource
constraints of mobile phones and the fragmented nature of the environment present considerable challenges for games developers.
In this paper we consider some of the current constraints together with current and, probable, future developments both in
the software and hardware of mobile phones. As part of this process we benchmark some of the latest and most prevalent software
and hardware devices to ascertain both the quality of the graphics produced and the effects upon battery life. Whilst our
test results highlight that the current market does indeed present challenges, our research into future developments highlights
the fact that we are approaching greater standardization, which will be an important factor for the successful development of 3D
mobile games.
10.
Quantitative usability requirements are a critical but challenging, and hence often neglected, aspect of a usability engineering process. A case study is described in which quantitative usability requirements played a key role in the development of a new user interface for a mobile phone. Within the practical constraints of the project, existing methods for determining usability requirements, and for evaluating the extent to which they are met, could not be applied as such, so tailored methods had to be developed. These methods and their applications are discussed.
11.
An important area of Human Reliability Assessment in interactive systems is the ability to understand the causes of human
error and to model their occurrence. This paper investigates a new approach to analysis of task failures based on patterns
of operator behaviour, in contrast with more traditional event-based approaches. It considers, as a case study, a formal model
of an Air Traffic Control system operator’s task which incorporates a simple model of the high-level cognitive processes involved.
The cognitive model is formalised in the CSP process algebra. Various patterns of behaviour that could lead to task failure
are described using temporal logic. Then a model-checking technique is used to verify whether the set of selected behavioural
patterns is sound and complete with respect to the definition of task failure. The decomposition is shown to be incomplete
and a new behavioural pattern is identified, which appears to have been overlooked in the informal analysis of the problem.
This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive
systems.
12.
This paper describes the simulated car racing competition that was arranged as part of the 2007 IEEE Congress on Evolutionary
Computation. The game used as the domain for the competition, the controllers submitted as entries, and the results are all presented. With this paper, we hope to provide some insight into the efficacy of various computational
intelligence methods on a well-defined game task, as well as an example of one way of running a competition. In the process,
we provide a set of reference results for those who wish to use the simplerace game to benchmark their own algorithms. The paper is co-authored by the organizers and participants of the competition.
13.
Second-generation portals are far from being monolithic pieces of software. Their complexity calls for a component-based approach
where portlets are the technical enabler. That being the case, portals nowadays tend to be constructed by means of portlets,
i.e. multi-step, user-facing applications delivered through a Web application. The proposal for and ample support given
to the WSRP (Web Services for Remote Portlets) portlet standard predict an emerging portlet market. A main requirement for
the blossoming of this market is the existence of portlet quality models that assist portal developers to select the appropriate
portlet. This paper focuses on usability. The aim, therefore, is to develop a usability model for portlets. The paper presents
such a model and its realisation for a sample case.
14.
This paper proposes an appearance generative mixture model based on key frames for meanshift tracking. Meanshift tracking
algorithm tracks an object by maximizing the similarity between the histogram in the tracking window and a static histogram acquired
at the beginning of tracking. The tracking could therefore fail if the appearance of the object varies substantially. In this
paper, we assume the key appearances of the object can be acquired before tracking and the manifold of the object appearance
can be approximated by a piece-wise linear combination of these key appearances in histogram space. The generative process is
described by a Bayesian graphical model. An online EM algorithm is proposed to estimate the model parameters from the observed
histogram in the tracking window and to update the appearance histogram. We applied this approach to track human head motion
and to infer the head pose simultaneously in videos. Experiments verify that our online histogram generative model constrained
by key appearance histograms alleviates the drifting problem often encountered in tracking with online updating, that the
enhanced meanshift algorithm is capable of tracking object of varying appearances more robustly and accurately, and that our
tracking algorithm can infer additional information such as the object poses.
Electronic supplementary material The online version of this article (doi:) contains supplementary material, which is available to authorized users.
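Two ingredients of the approach can be sketched directly: a histogram similarity (Bhattacharyya coefficient, the usual choice in meanshift tracking) and the model histogram as a convex combination of key-appearance histograms. The EM weight estimation is omitted, and the 3-bin histograms below are toy values:

```python
import math

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def mix(key_hists, weights):
    """Model histogram as a convex combination of key-appearance
    histograms, per the paper's piece-wise linear assumption."""
    total = sum(weights)
    w = [x / total for x in weights]
    return [sum(wk * h[i] for wk, h in zip(w, key_hists))
            for i in range(len(key_hists[0]))]

frontal = [0.7, 0.2, 0.1]   # toy 3-bin appearance histograms
profile = [0.2, 0.3, 0.5]
model = mix([frontal, profile], [0.5, 0.5])
print(bhattacharyya(model, frontal))
```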
15.
The paper reflects on the unique experience of social and technological development in Lithuania since the regaining of independence
as a newly reshaped society constructing a distinctive competitive IST-based model at a global level. This has yielded a Lithuanian
pattern of how to integrate different experiences and relations between generations in implementing complex information society
approaches. The resulting programme in general is linked to the Lisbon objectives of the European Union. The experience of
transitional countries in Europe, each different but facing some common problems, may be useful to developing countries in
Africa.
17.
Assumptions are frequently made during requirements analysis of a system about the trustworthiness of its various components
(including human components). These trust assumptions, whether implicit or explicit, affect the scope of the analysis, derivation
of security requirements, and in some cases how functionality is realized. This paper presents trust assumptions in the context
of analysis of security requirements. A running example shows how trust assumptions can be used by a requirements engineer
to help define and limit the scope of analysis and to document the decisions made during the process. The paper concludes
with a case study examining the impact of trust assumptions on software that uses the secure electronic transaction specification.
18.
In this paper, we present a new model for time-series forecasting using radial basis functions (RBFs) as units of artificial neural networks (ANNs), which allows the inclusion of exogenous information (EI) without additional pre-processing. We begin by summarizing the most well-known EI techniques used ad hoc, i.e., principal component analysis (PCA) and independent component analysis (ICA). We analyze the advantages and disadvantages of these techniques in time-series forecasting using Spanish bank and company stocks. Then, we describe a new hybrid model for time-series forecasting which combines ANNs with genetic algorithms (GAs). We also describe the possibilities when implementing the model on parallel processing systems.
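The forward pass of such an RBF unit layer can be sketched as Gaussian activations over a lag vector, linearly combined. In the hybrid model the centers and weights would be tuned by the genetic algorithm; here they are fixed toy values, and the third input stands in for one exogenous variable:

```python
import math

def rbf_forecast(x, centers, widths, weights):
    """One forward pass of an RBF network: Gaussian activations over
    the input vector `x`, combined by a linear output layer."""
    acts = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                 / (2 * s ** 2))
        for c, s in zip(centers, widths)
    ]
    return sum(w * a for w, a in zip(weights, acts))

# Forecast from the last two observations plus one exogenous input.
x = (1.0, 1.2, 0.3)
centers = [(1.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
y = rbf_forecast(x, centers, widths=[1.0, 1.0], weights=[1.5, 0.4])
```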
19.
The complexity of group dynamics occurring in small group interactions often hinders the performance of teams. The availability
of rich multimodal information about what is going on during the meeting makes it possible to explore the possibility of providing
support to dysfunctional teams from facilitation to training sessions addressing both the individuals and the group as a whole.
A necessary step in this direction is that of capturing and understanding group dynamics. In this paper, we discuss a particular
scenario, in which meeting participants receive multimedia feedback on their relational behaviour, as a first step towards
increasing self-awareness. We describe the background and the motivation for a coding scheme for annotating meeting recordings
partially inspired by the Bales’ Interaction Process Analysis. This coding scheme was aimed at identifying suitable observable
behavioural sequences. The study is complemented with an experimental investigation on the acceptability of such a service.
20.
This paper addresses the possibility of measuring perceived usability in an absolute way. It studies the impact of the nature
of the tasks performed in perceived software usability evaluation, using for this purpose the subjective evaluation of an
application’s performance via the Software Usability Measurement Inventory (SUMI). The paper reports on the post-hoc analysis
of data from a productivity study for testing the effect of changes in the graphical user interface (GUI) of a market leading
drafting application. Even though one would expect similar evaluations of an application’s usability for same releases, the
analysis reveals that the output of this subjective appreciation is context sensitive and therefore mediated by the research
design. Our study unmasked a significant interaction between the nature of the tasks used for the usability evaluation and
how users evaluate the performance of this application. This interaction challenges the concept of absolute benchmarking in
subjective usability evaluation, as some software evaluation methods aspire to provide, since subjective measurement of software
quality will be affected most likely by the nature of the testing materials used for the evaluation.