Context: User stories have become widely accepted in agile software development. Consequently, a great number of software tools that provide, inter alia, support for practices based on user stories have emerged in recent years. These tools may differ in their features and focus in terms of support for agile requirements engineering (RE) concepts and practices.
Objective: The present study aims to provide deep insight into the current capabilities and future trends of software support for agile RE practices based on user stories.
Method: A comparative qualitative study of a set of agile software tools was conducted according to the following criteria: coverage of the key functional requirements, support for basic agile RE concepts and practices, and user satisfaction with the tool. The criteria for tool selection were: diversity of software tools, high rating on the user-stories community Web site (http://www.userstories.com), and availability for review.
Results: The results show generally good coverage of the key functional requirements related to management of user stories and epics, high-level release planning, and low-level iteration planning. On the other hand, user-role modeling and persona support have not been addressed at all, and requirements for acceptance-testing support were completely covered by only one tool. More importantly, the study revealed significant differences in the way different tools support agile RE concepts and practices (if at all). Finally, qualitative analysis of user reviews demonstrated that practitioners prefer tools that are easy to set up, easy to learn, easy to use, and easy to customize over more sophisticated but more demanding tools.
Conclusion: Although the progress made since the inception of these tools is clear, there is still room for improvement in support for various agile RE practices within a specific agile process.
Software evolution studies have traditionally focused on individual products. In this study we scale up the idea of software evolution by considering software compilations composed of a large quantity of independently developed products, engineered to work together. With the success of libre (free, open source) software, these compilations have become common in the form of ‘software distributions’, which group hundreds or thousands of software applications and libraries into an integrated system. We have performed an exploratory case study on one of them, Debian GNU/Linux, finding some significant results. First, Debian has been doubling in size every 2 years, totalling about 300 million lines of code as of 2007. Second, the mean size of packages has remained stable over time. Third, the number of dependencies between packages has been growing quickly. Finally, while C is still by far the most commonly used programming language for applications, use of the C++, Java, and Python languages has significantly increased. The study helps not only to understand the evolution of Debian, but also yields insights into the evolution of mature libre software systems in general.
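As a rough illustration of the reported trend, the 2-year doubling period and the roughly 300 MLOC figure for 2007 imply a simple exponential model; the sketch below extrapolates from those two numbers only, and the model itself (not the paper) supplies everything else.

```python
# Rough illustration of the growth trend reported for Debian: size doubles
# every 2 years, reaching ~300 MLOC in 2007 (both figures from the abstract;
# the exponential back/forward projection is an assumption, not the paper's).

DOUBLING_PERIOD_YEARS = 2
SIZE_2007_MLOC = 300

def projected_size_mloc(year: int) -> float:
    """Project Debian's size in millions of lines of code for a given year."""
    return SIZE_2007_MLOC * 2 ** ((year - 2007) / DOUBLING_PERIOD_YEARS)

for year in (2003, 2005, 2007, 2009):
    print(f"{year}: ~{projected_size_mloc(year):.0f} MLOC")
# 2003: ~75 MLOC, 2005: ~150 MLOC, 2007: ~300 MLOC, 2009: ~600 MLOC
```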
Jesus M. Gonzalez-Barahona
teaches and does research at Universidad Rey Juan Carlos, Móstoles (Spain). His research interests include libre software development, with a focus on quantitative and empirical studies, and distributed tools for collaboration in libre software projects. He works in the GSyC/LibreSoft research team.
Gregorio Robles
is Associate Professor at the Universidad Rey Juan Carlos, where he earned his PhD in 2006. His research interests lie in
the empirical study of libre software, ranging from technical issues to those related to the human resources of the projects.
Martin Michlmayr
has been involved in various free and open source software projects for well over 10 years. He acted as the leader of the
Debian project for two years and currently serves on the board of the Open Source Initiative (OSI). Martin works for HP as
an Open Source Community Expert and acts as the community manager of FOSSBazaar. Martin holds Master's degrees in Philosophy,
Psychology and Software Engineering, and earned a PhD from the University of Cambridge.
Juan José Amor
holds an M.Sc. in Computer Science from the Universidad Politécnica de Madrid and is currently pursuing a Ph.D. at the Universidad
Rey Juan Carlos, where he is also a project manager. His research interests are related to libre software engineering, mainly
effort and schedule estimates in libre software projects. Since 1995 he has collaborated in several libre software organizations;
he is also co-founder of LuCAS, the best-known libre software documentation portal in Spanish, and Hispalinux, the biggest Spanish Linux user group. He also collaborates with Linux+.
Daniel M. German
is an associate professor of computer science at the University of Victoria, Canada. His main areas of interest are software
evolution, open source software engineering and intellectual property.
This paper defines two suites of metrics which address static and dynamic aspects of component assembly. The static metrics measure the complexity and criticality of component assembly, where complexity is measured using the Component Packing Density and Component Interaction Density metrics. Further, four criticality conditions, namely Link, Bridge, Inheritance, and Size criticality, have been identified and quantified. The complexity and criticality metrics are combined to form a Triangular Metric, which can be used to classify the type and nature of applications. Dynamic metrics are collected during the runtime of a complete application; they are useful for identifying super-components and for evaluating the degree of utilization of various components. In this paper both static and dynamic metrics are evaluated using Weyuker's set of properties. The results show that the metrics provide a valid means of measuring issues in component assembly. We relate our metrics suite to McCall's Quality Model and illustrate its impact on product quality and on the management of component-based product development.
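As a rough sketch of what the two static density metrics could look like in practice, the snippet below implements one plausible reading of their names (constituents packed per component, and actual versus available interactions); the paper's exact definitions may differ.

```python
# Hedged sketch of the two static density metrics named in the abstract.
# The formulas are a common reading of the metric names, not taken verbatim
# from the paper.

def component_packing_density(num_constituents: int, num_components: int) -> float:
    """Constituents (e.g., classes or lines of code) packed per component."""
    return num_constituents / num_components

def component_interaction_density(actual_interactions: int,
                                  available_interactions: int) -> float:
    """Fraction of the available component interactions actually used."""
    return actual_interactions / available_interactions

# Example: an assembly of 12 components containing 480 classes, where 18 of
# 60 possible inter-component links are used (all numbers illustrative).
print(component_packing_density(480, 12))     # 40.0 classes per component
print(component_interaction_density(18, 60))  # 0.3
```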
In this paper, a testing method for strengthening fault tolerance in the event of unexpected situations within a software system is presented. It is based on the idea of testing an integrated system by substituting system components with others, similar in design and functionality, that operate in an erroneous and even malicious manner. The approach adopted is similar to inserting a virus into an organization so that the defense mechanisms of the latter can be tested and the necessary lines of defense formed, ensuring the virus cannot affect any of the organization's critical parts. The focal point is to ensure that, in case of a module malfunction, the integrated system will continue to operate, isolating the malfunctioning software to the greatest possible extent and preventing the erroneous behavior from affecting other (and sometimes critical) modules. The proposed testing method is based first on isolated component testing, adopting and enhancing the Component Off The Shelf method, and second on integrated system testing using malicious components that emulate erroneous operation.
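The substitution idea can be illustrated with a minimal sketch: a well-behaved component is replaced by a look-alike that misbehaves, and the test checks that the integrated system isolates the fault rather than crashing. All class and method names below are illustrative, not from the paper.

```python
class PricingComponent:
    """A well-behaved component of the integrated system (illustrative)."""
    def quote(self, item: str) -> float:
        return {"widget": 9.99}.get(item, 0.0)

class MaliciousPricingComponent(PricingComponent):
    """Same interface, but emulates erroneous, even malicious, operation."""
    def quote(self, item: str) -> float:
        raise RuntimeError("simulated internal fault")

class OrderSystem:
    """The integrated system under test; it must isolate faulty modules."""
    def __init__(self, pricing: PricingComponent):
        self.pricing = pricing

    def total(self, items):
        total = 0.0
        for item in items:
            try:
                total += self.pricing.quote(item)
            except Exception:
                # Isolation barrier: a misbehaving component must not bring
                # down the system; degrade gracefully instead.
                pass
        return total

# The "virus" test: substitute the malicious look-alike and verify survival.
system = OrderSystem(MaliciousPricingComponent())
assert system.total(["widget", "widget"]) == 0.0
print("system kept operating despite the malicious component")
```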
Recent technological advances are increasing the spread of Ubiquitous Computing, leading to the appearance of numerous software systems that benefit from the features of this new paradigm. Nevertheless, there is a lack of methodologies to properly support the development process of these systems. An important part of the Software Engineering lifecycle is the Requirements Engineering stage, as it lays the groundwork for a system design that can succeed. In particular, systematically addressing Non-Functional Requirements such as dynamicity and adaptation, which are important features of ubiquitous systems, eventually leads to higher-quality designs. In this paper, a Requirements Engineering method for the analysis of ubiquitous systems, called REUBI, is introduced. It is a goal-based method that represents the influence of context and adverse situations, providing an evaluation procedure to aid decision making about objective satisfaction. The proposal is illustrated through the analysis of the Positioning Service of a real system. Additionally, the application of the method has been evaluated by a team of software engineers in the analysis of an Ambient Assisted Living (AAL) health care system.
The requirements of embedded real-time software are analyzed, and an object-oriented software requirements model is proposed. An example employing this requirements model in practice is also introduced.
The addition of redundancy to data structures can be used to improve the ability of a software system to detect and correct errors, and to continue to operate according to its specifications. A case study is presented which indicates how such redundancy can be deployed and exploited at reasonable cost to improve software fault tolerance. Experimental results are reported for the small database system considered.
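A minimal sketch of the general idea (not necessarily the scheme used in the case study): each record carries a checksum for error detection and a mirror copy for correction, so a corrupted field can be repaired on read.

```python
# Illustrative sketch of structural redundancy in a data structure: a CRC
# detects corruption and a mirror copy corrects it. All details are assumed,
# not taken from the paper.
import zlib

def checksum(value: str) -> int:
    return zlib.crc32(value.encode())

class RedundantRecord:
    """A record with redundant check data (CRC) and a redundant mirror copy."""
    def __init__(self, value: str):
        self.value = value
        self.mirror = value          # redundancy used for correction
        self.crc = checksum(value)   # redundancy used for detection

    def read(self) -> str:
        if checksum(self.value) != self.crc:
            # Error detected: repair from the mirror and keep operating.
            self.value = self.mirror
        return self.value

rec = RedundantRecord("alice")
rec.value = "al#ce"   # simulate a corrupted field
print(rec.read())     # prints "alice": the error was detected and corrected
```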
The computational grid is rapidly evolving into a service-oriented computing infrastructure that facilitates resource sharing for solving large-scale data- and computation-intensive problems. Peer-to-peer (P2P) systems have emerged as an enabling technology for enhanced scalability and reliability in file sharing and content distribution. It is envisioned that P2P-enabled service-oriented grid systems will virtualize various resources as services with high scalability and reliability. Many legacy software resources exist nowadays, but making them grid-aware services for effective resource sharing has become an issue of vital importance. This paper presents GSLab, a toolkit for automatically wrapping legacy software into services that can be published, discovered, and reused in grid environments. GSLab employs Sun Grid Engine (SGE) to enhance its performance in executing wrapped services. Using GSLab, we have automatically wrapped a legacy computer animation rendering code written in C as a service that can be discovered and accessed in an SGE environment. The evaluation results show that the performance of GSLab improves as the number of computing nodes involved increases.
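The wrapping idea can be sketched generically; since the abstract does not describe GSLab's actual interface, the subprocess-based wrapper below is purely hypothetical and only illustrates exposing a legacy command-line program through a service-like call.

```python
# Generic illustration of wrapping legacy software as a callable service.
# This is NOT GSLab's API (which the abstract does not describe); the
# executable name and arguments are hypothetical.
import subprocess

class LegacyServiceWrapper:
    """Expose a legacy command-line program through a service-like call()."""
    def __init__(self, executable: str):
        self.executable = executable

    def call(self, *args: str) -> str:
        result = subprocess.run([self.executable, *args],
                                capture_output=True, text=True, check=True)
        return result.stdout

# e.g. wrapping a (hypothetical) C rendering binary as a callable service:
# renderer = LegacyServiceWrapper("./render_scene")
# frames = renderer.call("--scene", "intro.scn", "--frames", "1-100")
```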
This paper describes the baseline corpus of a new multimodal biometric database, the MMU GASPFA (Gait–Speech–Face) database. The corpus in GASPFA is acquired using commercial off the shelf (COTS) equipment including digital video cameras, digital voice recorder, digital camera, Kinect camera and accelerometer equipped smart phones. The corpus consists of frontal face images from the digital camera, speech utterances recorded using the digital voice recorder, gait videos with their associated data recorded using both the digital video cameras and Kinect camera simultaneously as well as accelerometer readings from the smart phones. A total of 82 participants had their biometric data recorded. MMU GASPFA is able to support both multimodal biometric authentication as well as gait action recognition. This paper describes the acquisition setup and protocols used in MMU GASPFA, as well as the content of the corpus. Baseline results from a subset of the participants are presented for validation purposes. 相似文献
Building a software architecture that meets functional requirements is a well-consolidated activity, whereas maintaining high quality attributes is still an open challenge. In this paper we introduce an optimization framework that supports the decision of whether to buy software components or build them in-house when designing a software architecture. We devise a non-linear cost/quality optimization model based on decision variables indicating the set of architectural components to buy and to build in order to minimize the software cost while keeping satisfactory values of quality attributes. From this point of view, our tool can be ideally embedded into a Cost Benefit Analysis Method to provide decision support to software architects. The novelty of our approach consists in building costs and quality attributes on a common set of decision variables related to software development. We start from a special case of the framework where the quality constraints relate to delivery time and product reliability, and the model solution also determines the amount of unit testing to be performed on built components. We then generalize the framework formulation to represent a broader class of architectural cost-minimization problems under quality constraints, and discuss the advantages and limitations of the approach.
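A toy version of the decision model can make the setup concrete: one binary variable per component (buy or build), minimizing total cost subject to a minimum-reliability and a maximum-delivery-time constraint. The numbers, the series-reliability and sequential-time assumptions, and the brute-force enumeration are all illustrative; the paper's model is a non-linear optimization, not this exhaustive search.

```python
# Hedged sketch of a buy-vs-build decision model; all figures are made up.
from itertools import product

# Per component: (buy_cost, buy_reliability, buy_days,
#                 build_cost, build_reliability, build_days)
components = [
    (10.0, 0.99, 5, 6.0, 0.95, 20),
    (8.0, 0.98, 3, 3.0, 0.90, 15),
    (12.0, 0.97, 4, 7.0, 0.96, 25),
]

MIN_RELIABILITY = 0.85  # product of component reliabilities (series assumption)
MAX_DAYS = 45           # delivery-time budget (sequential-work assumption)

best = None
for choice in product((0, 1), repeat=len(components)):  # 1 = buy, 0 = build
    cost = days = 0.0
    rel = 1.0
    for buy, c in zip(choice, components):
        cost += c[0] if buy else c[3]
        rel *= c[1] if buy else c[4]
        days += c[2] if buy else c[5]
    if rel >= MIN_RELIABILITY and days <= MAX_DAYS and (best is None or cost < best[0]):
        best = (cost, choice, rel, days)

print(best)  # cheapest feasible mix: buy component 1, build the other two
```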
The design and analysis of the structure of software systems has typically been based on purely qualitative grounds. In this paper we report on our positive experience with a set of quantitative measures of software structure. These metrics, based on the number of possible paths of information flow through a given component, were used to evaluate the design and implementation of a software system (the UNIX operating system kernel) which exhibits the interconnectivity of components typical of large-scale software systems. Several examples are presented which show the power of this technique in locating a variety of both design and implementation defects. Suggested repairs, which agree with the commonly accepted principles of structured design and programming, are presented. The effect of these alterations on the structure of the system and the quantitative measurements of that structure lead to a convincing validation of the utility of information flow metrics.
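A sketch in the spirit of such metrics: the number of possible information paths through a component grows with its fan-in and fan-out, and the widely cited Henry-Kafura complexity squares that product. Whether the paper uses exactly this formula is an assumption; treat it as illustrative.

```python
# Illustrative fan-in/fan-out information flow measures; the paper's exact
# formula is not given in the abstract, so these are assumptions.

def information_paths(fan_in: int, fan_out: int) -> int:
    """Possible flows through a component: each input may reach each output."""
    return fan_in * fan_out

def henry_kafura_complexity(fan_in: int, fan_out: int) -> int:
    """The (fan_in * fan_out)^2 form commonly attributed to Henry and Kafura."""
    return (fan_in * fan_out) ** 2

# A kernel routine called from 12 places that itself updates 4 data structures:
print(information_paths(12, 4))        # 48 possible flows
print(henry_kafura_complexity(12, 4))  # 2304, flagging a likely hot spot
```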
Context: Traditionally, Embedded Systems (ES) are tightly linked to physical products, and closed both for communication to the surrounding world and to additions or modifications by third parties. New technical solutions are however emerging that allow addition of plug-in software, as well as external communication for both software installation and data exchange. These mechanisms in combination will allow for the construction of Federated Embedded Systems (FES). Expected benefits include the possibility of third-party actors developing add-on functionality; a shorter time to market for new functions; and the ability to upgrade existing products in the field. This will however require not only new technical solutions, but also a transformation of the software ecosystems for ES.
Objective: This paper aims at providing an initial characterization of the mechanisms that need to be present to make a FES ecosystem successful. This includes identification of the actors, the possible business models, the effects on product development processes, methods and tools, as well as on the product architecture.
Method: The research was carried out as an explorative case study based on interviews with 15 senior staff members at 9 companies related to ES that represent different roles in a future ecosystem for FES. The interview data was analyzed and the findings were mapped according to the Business Model Canvas (BMC).
Results: The findings from the study describe the main characteristics of a FES ecosystem, and identify the challenges for future research and practice.
Conclusions: The case study indicates that new actors exist in the FES ecosystem compared to a traditional supply chain, and that their roles and relations are redefined. The business models include new revenue streams and services, but also create the need for trade-offs between, e.g., openness and dependability in the architecture, as well as new ways of working.
Software process improvement (SPI) is challenging, particularly for small and medium-sized enterprises. Most existing SPI frameworks are either too expensive to deploy or do not take an organization's specific needs into consideration. There is a need for lightweight SPI frameworks that enable practitioners to base improvement efforts on the issues that are most critical for the specific organization.
This paper presents a step-by-step guide to process assessment and improvement planning using the improvement framework utilizing lightweight assessment and improvement planning (iFLAP), aimed at practitioners undertaking SPI initiatives. In addition to the guide itself, the industrial application of iFLAP is shown through two industrial cases. iFLAP is a packaged improvement framework, containing both assessment and improvement planning capabilities, explicitly developed to be lightweight in nature. Assessment is performed by eliciting improvement issues from the organization's experience and knowledge. The findings are validated through triangulation across multiple data sources. iFLAP actively involves practitioners in prioritizing improvement issues and identifying dependencies between them in order to package improvements, and thus establish an improvement plan that is realistic for the organization. The two cases of iFLAP application in industry are presented together with lessons learned, to exemplify actual use of the framework as well as the challenges encountered.
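A small sketch can illustrate the packaging step (this is not iFLAP's actual algorithm): issues carry priorities, e.g. from cumulative voting, and dependency-connected issues are grouped into packages, which are then ranked by total priority.

```python
# Illustrative sketch of prioritizing and packaging improvement issues;
# the issue names, scores, and grouping strategy are all assumptions.
from collections import defaultdict

priorities = {"testing": 40, "reviews": 25, "requirements": 20, "ci": 15}
dependencies = [("testing", "ci"), ("reviews", "requirements")]

# Group dependency-connected issues into packages (connected components).
graph = defaultdict(set)
for a, b in dependencies:
    graph[a].add(b)
    graph[b].add(a)

packages, seen = [], set()
for issue in priorities:
    if issue in seen:
        continue
    stack, component = [issue], set()
    while stack:
        node = stack.pop()
        if node not in component:
            component.add(node)
            stack.extend(graph[node] - component)
    seen |= component
    packages.append(component)

# Rank packages so the plan tackles the highest-value package first.
packages.sort(key=lambda pkg: -sum(priorities[i] for i in pkg))
print(packages)  # e.g. [{'testing', 'ci'}, {'reviews', 'requirements'}]
```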
A total of 64 references to papers, books and conference proceedings on the subject of software reliability have been selected. Each of these references is provided with an annotation consisting of a paragraph of commentary. Sections of the bibliography are devoted to requirements definition, programming methodology, certification, fault-tolerance and reliability modelling.
Software product line (SPL) engineering has been applied in several domains, especially in large-scale software development. Given the benefits experienced and reported, SPL engineering has increasingly garnered interest from small to medium-sized companies. A wide range of studies report on the challenges of running an SPL project in large companies. However, very few reports consider the situation of small to medium-sized enterprises (SMEs), and those studies tend to posit universal truths for SPL without contextualizing the lessons learned from empirical evidence. This study is a step towards bridging this gap in contextual evidence by characterizing the weaknesses discovered in the scoping (SC) and requirements engineering (RE) disciplines of SPL. Moreover, in this study we conducted a case study in an SME to justify the use of agile methods when introducing the SPL SC and RE disciplines, through the characterization of their bottlenecks. The results of the characterization indicated that ineffective communication and collaboration, long iteration cycles, and the absence of adaptability and flexibility can increase effort and reduce motivation during project development. These issues can be mitigated by agile methods.