Similar Literature
20 similar articles retrieved (search time: 93 ms)
1.
Many small software organizations have recognized the need to improve their software products. Evaluating the software product alone seems insufficient, since its quality is known to depend largely on the process used to create it. Thus, small organizations are asking for evaluation of their software processes as well as their products. The ISO/IEC 14598-5 standard is already used as a methodological basis for evaluating software products. This article explores how it can be combined with the CMMI to produce a methodology that can be tailored for process evaluation, helping such organizations improve their software processes. SM: CMMI is a service mark of Carnegie Mellon University. Sylvie Trudel has over 20 years of experience in software. She worked for more than 10 years in the development and implementation of management information systems and embedded real-time systems. Since 1996, she has worked as a process improvement specialist, implementing best practices from the CMM and CMMI models into organizations' processes. She has performed several CMM and CMMI assessments and participated in many other CMM assessments using CBA IPI, SCE, and other proprietary methods. She obtained a bachelor's degree in computer science in 1986 from Laval University in Québec City and a master's degree in Software Engineering from the École de technologie supérieure (ÉTS) in Montréal. Sylvie is currently working as a software engineering advisor at the Centre de Recherche Informatique de Montréal (CRIM). Jean-Marc Lavoie has been working in software development for over 10 years. He performed and published a comparative study between the Guide to the SWEBOK and the CMMI in 2003. Jean-Marc obtained a bachelor's degree in Electrical Engineering. He is pursuing a master's degree in Software Engineering at the École de technologie supérieure (ÉTS) in Montréal while working as a software architect at Trisotech. Marie-Claude Pare has been working in software development for 7 years. Marie-Claude obtained a bachelor's degree in Software Engineering from École Polytechnique in Montréal. She is pursuing a master's degree in Software Engineering at the École de technologie supérieure (ÉTS) in Montréal while working as a software engineer at Motorola GSG Canada. Dr Witold Suryn is a Professor at the École de technologie supérieure, Montreal, Canada (an engineering school of the Université du Québec network of institutions), where he teaches graduate and undergraduate software engineering courses and conducts research in software quality engineering, the software engineering body of knowledge, and software engineering fundamental principles. Dr Suryn is also the principal researcher and director of GELOG: IQUAL, the Software Quality Engineering Research Group at the École de technologie supérieure. Since October 2003, Dr Suryn has held the position of International Secretary of ISO/IEC SC7 – System and Software Engineering.

2.
Context: The establishment of effective and efficient project management practices still remains a challenge for software organizations. In striving to address these needs, "best practice" models, such as CMMI or PMBOK, are being developed to assist organizations interested in improving project management. Although those models share overlapping content, there are still differences, and each model therefore offers different advantages. Objective: This paper proposes a set of unified project management best practices by integrating and harmonizing, from a high-level perspective, PMBOK (4th ed.) processes and the CMMI-DEV v1.2 specific practices of the basic project management process areas PP, PMC and SAM. Method: Based on the analysis of both models, a unified set of best practices has been defined by a group of researchers with theoretical and practical expertise in the CMMI framework and software process improvement as well as in project management and the PMBOK. The proposed set has been revised by researchers from different institutions in several review rounds until consensus was achieved. Results: As a result, a set of unified best practices is defined and explicitly mapped to the corresponding PMBOK processes and CMMI specific practices of the current versions of both models. Conclusion: We conclude that an integration and harmonization of both models is possible and may help to implement and assess project management processes more effectively and efficiently, optimizing software process improvement investments.
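As an illustration of what such an explicit mapping can look like in machine-readable form, the following Python sketch represents each unified practice as a link to PMBOK processes and CMMI-DEV specific practices. The practice names, IDs, and pairings below are assumptions for illustration only; they are not the mapping defined in the paper.

```python
# Illustrative sketch (not the paper's actual mapping): a unified project-management
# practice recorded as a link between PMBOK 4th ed. processes and CMMI-DEV v1.2
# specific practices, so coverage of either model can be queried.

UNIFIED_PRACTICES = {
    "Establish estimates": {
        "pmbok": ["Estimate Costs", "Estimate Activity Durations"],
        "cmmi": ["PP SP 1.2", "PP SP 1.4"],          # assumed pairing, for illustration
    },
    "Develop the project plan": {
        "pmbok": ["Develop Project Management Plan", "Develop Schedule"],
        "cmmi": ["PP SP 2.1", "PP SP 2.7"],
    },
    "Monitor the project against the plan": {
        "pmbok": ["Monitor and Control Project Work"],
        "cmmi": ["PMC SP 1.1", "PMC SP 1.6"],
    },
    "Manage supplier agreements": {
        "pmbok": ["Conduct Procurements", "Administer Procurements"],
        "cmmi": ["SAM SP 1.3", "SAM SP 2.1"],
    },
}

def coverage(model: str) -> dict:
    """Return, per unified practice, the elements of one model (pmbok/cmmi) it maps to."""
    return {name: links[model] for name, links in UNIFIED_PRACTICES.items()}

if __name__ == "__main__":
    for practice, refs in coverage("cmmi").items():
        print(f"{practice}: {', '.join(refs)}")
```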

3.
Abstract

Organizations have been relying on collaboration for knowledge sharing and productivity improvement in order to reduce costs or boost revenue. However, organizations still cannot ensure that collaboration is properly conducted in daily work. This paper presents an approach to stimulating collaboration among professionals in an organization. The approach, combining a BPM methodology with the CollabMM collaboration maturity model and its corresponding method, is the result of an exploratory study in a real setting at an oil company in Brazil. The project is a move towards improving decision-making during one of the company's business processes and establishing collaboration among professionals through information sharing.

4.
Context: Software engineering organizations routinely define and implement processes to support, guide and control project execution. An assumption underlying this process-centric approach to business improvement is that the quality of the process will influence the quality, cost and time-to-release of the software produced. A critical question thus arises of what constitutes quality for software engineering processes. Objective: To identify criteria used by experienced practitioners to judge the quality of software engineering processes and to understand how knowledge of these criteria and their relationships may be useful for those undertaking software process improvement activities. Method: Interviews were conducted with 17 experienced software engineering practitioners from a range of geographies, roles and industry sectors. Published reports from 30 software process improvement case studies were selected from multiple peer-reviewed software journals. A qualitative Grounded Theory-based methodology was employed to systematically analyze the collected data and synthesize a model of quality for software engineering processes. Results: The synthesized model suggests that practitioners perceive the overall quality of a software engineering process based on four quality attributes: suitability, usability, manageability and evolvability. Furthermore, these judgments are influenced by key properties related to the semantic content, structure, representation and enactment of that process. The model indicates that these attributes correspond to particular organizational perspectives and that these differing views may explain role-based conflicts in the judgment of process quality. Conclusion: Consensus exists amongst practitioners about which characteristics of software engineering process quality most influence project outcomes. The model presented provides a terminological framework that can facilitate precise discussion of software engineering process issues and a set of criteria that can support activities for software process definition, evaluation and improvement. The potential exists for further development of this model to facilitate optimization of process properties to match organizational needs.

5.
Context: Model-Driven Development (MDD) is an alternative approach for information systems development. The basic underlying concept of this approach is the definition of abstract models that can be transformed to obtain models closer to implementation. One fairly widespread proposal in this sphere is Model Driven Architecture (MDA). Business process models are abstract models which additionally contain key information about the tasks that are carried out to achieve the company's goals, and two notations currently exist for modelling business processes: the Unified Modelling Language (UML), through activity diagrams, and the Business Process Modelling Notation (BPMN). Objective: Our research is particularly focused on security requirements, in such a way that security is modelled along with the other aspects that are included in a business process. To this end, in earlier works we have defined a metamodel called secure business process (SBP), which may assist the software development process as a source of highly valuable requirements (including very abstract security requirements) that are transformed into models with a lower abstraction level, such as analysis class diagrams and use case diagrams, through the approach presented in this paper. Method: We have defined all the transformation rules necessary to obtain analysis class diagrams and use case diagrams from SBP, and refined them through the characteristic iterative process of the action-research method. Results: We have obtained a set of rules and a checklist that make it possible to automatically obtain a set of UML analysis classes and use cases, starting from SBP models. Our approach has additionally been applied in a real environment in the area of the payment of electrical energy consumption. Conclusions: The application of our proposal shows that our semi-automatic process can be used to obtain a set of useful artifacts for software development processes.
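To convey the flavor of such model-to-model transformation rules, here is a minimal Python sketch in which each task of a hypothetical SBP model becomes a UML use case and each business object becomes a candidate analysis class. The element names and the one-task-one-use-case rule are illustrative assumptions, not the rule set defined in the paper.

```python
# Minimal sketch (assumed rule, not the authors' actual transformation): map each
# secure business process (SBP) task to a use case, its performer to an actor, and
# the business objects it touches to candidate analysis classes.

from dataclasses import dataclass, field

@dataclass
class SBPTask:
    name: str
    performer: str                                       # participant executing the task
    objects: list = field(default_factory=list)          # business objects the task touches
    security_requirements: list = field(default_factory=list)

@dataclass
class UseCase:
    name: str
    actor: str
    notes: list                                           # security requirements carried over

def transform(tasks):
    """Hypothetical rule: one task -> one use case; touched objects -> analysis classes."""
    use_cases = [UseCase(t.name, t.performer, t.security_requirements) for t in tasks]
    analysis_classes = sorted({obj for t in tasks for obj in t.objects})
    return use_cases, analysis_classes

if __name__ == "__main__":
    sbp = [SBPTask("Register consumption", "Clerk", ["MeterReading"], ["audit trail"]),
           SBPTask("Pay invoice", "Customer", ["Invoice", "Payment"], ["confidentiality"])]
    use_cases, classes = transform(sbp)
    print([u.name for u in use_cases], classes)
```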

6.
Context: Building defect prediction models in large organizations has many challenges due to limited resources and tight schedules in the software development lifecycle. It is not easy to collect data, utilize any type of algorithm and build a permanent model at once. We conducted a study in a large telecommunications company in Turkey to employ a software measurement program and to predict pre-release defects. Building on our prior publication, we share our experience in terms of the project steps (i.e. challenges and opportunities) and introduce new techniques that improve our earlier results. Objective: In our previous work, we built similar predictors using data representative of US software development. Our task here was to check whether those predictors were specific solely to US organizations or applied to a broader class of software. Method: We present our approach and results in the form of an experience report. Specifically, we made use of different techniques for improving the information content of the software data and the performance of a Naïve Bayes classifier in a prediction model that is locally tuned for the company. We increased the information content of the software data by using module dependency data, and improved performance by adjusting the hyper-parameter (decision threshold) of the Naïve Bayes classifier. We report and discuss our results in terms of defect detection rates and false alarms. We also carried out a cost–benefit analysis to show that our approach can be efficiently put into practice. Results: Our general result is that defect predictors generalize across a wide range of software, in both US and Turkish organizations. Our specific results indicate that, for the organization studied here, the use of version history information along with code metrics decreased false alarms by 22%, the use of dependencies between modules further reduced false alarms by 8%, and optimizing the decision threshold of the Naïve Bayes classifier using code metrics and version history information reduced false alarms by a further 30%, compared to a prediction using only code metrics and a default decision threshold. Conclusion: Implementing statistical techniques and machine learning in a real-life scenario is a difficult yet possible task. Using simple statistical and algorithmic techniques produces an average detection rate of 88%. Although using dependency data improves our results, it is difficult to collect and analyze such data in general. Therefore, we recommend optimizing the hyper-parameter of the proposed technique, Naïve Bayes, to calibrate the defect prediction model, rather than employing more complex classifiers. We also recommend that researchers who explore statistical and algorithmic methods for defect prediction spend less time on their algorithms and more time on studying the pragmatic considerations of large organizations.
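The decision-threshold tuning described in this abstract can be illustrated with a short sketch: train a Naïve Bayes classifier, then move the cut-off on the predicted defect probability instead of using the default 0.5, trading detection rate against false alarms. The data below is synthetic; the study's metrics, module-dependency features, and reported results are not reproduced.

```python
# Hedged sketch of decision-threshold tuning for a Naive Bayes defect predictor.
# Features are random stand-ins for code/version-history metrics.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                                   # synthetic module metrics
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]                         # P(defective | metrics)

def pd_pf(threshold):
    """Detection rate (pd) and false alarm rate (pf) at a given probability cut-off."""
    pred = (proba >= threshold).astype(int)
    tp = ((pred == 1) & (y_te == 1)).sum()
    fp = ((pred == 1) & (y_te == 0)).sum()
    fn = ((pred == 0) & (y_te == 1)).sum()
    tn = ((pred == 0) & (y_te == 0)).sum()
    return tp / (tp + fn), fp / (fp + tn)

for t in (0.3, 0.5, 0.7):                                     # lower threshold -> higher pd and pf
    pd_rate, pf_rate = pd_pf(t)
    print(f"threshold={t:.1f}  pd={pd_rate:.2f}  pf={pf_rate:.2f}")
```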

7.
Software development methodologies usually contain guidance on what steps to follow in order to obtain the desired product. At the same time, capability assessment frameworks usually assess the process that is followed on a project in practice in the context of a process reference model, defined separately and independently of any particular methodology. This results in the need for extra effort when trying to match a given process reference model with an organisation's enacted processes. This paper introduces a metamodel for the definition of assessable methodologies, that is, methodologies that are constructed with assessment in mind and that contain a built-in process reference model. Organisations using methodologies built from this metamodel will benefit from automatically ensuring that their executed work conforms to the appropriate assessment model. Cesar Gonzalez-Perez is a post-doctoral research fellow in the Faculty of Information Technology at UTS, where he is currently researching with Professor Henderson-Sellers in object-oriented methodologies, with particular emphasis on metamodelling and component-based, assessable methodologies. He is the founder and former technical director of Neco, a company based in Spain specializing in software development support services, which include the deployment and use of the OPEN/Metis methodology at small and mid-sized organizations. He has also worked for the University of Santiago de Compostela in Spain as a researcher in computing & archaeology, and received his Ph.D. in this topic in 2000. Tom McBride has more than twenty years of experience in the computer industry, in positions ranging from computer operator and developer to project manager and QA manager. He is significantly involved in standards development, both locally in Australia and internationally for the International Standards Organisation. Tom is Chairman of the Australian Computer Society National Standards Committee and is assisting the development of the OOSPICE Component Based Development methodology. He is also a lecturer in software development-related subjects at the University of Technology, Sydney and is currently enrolled as a Ph.D. student investigating coordination in software development. Brian Henderson-Sellers is Director of the Centre for Object Technology Applications and Research and Professor of Information Systems at UTS. He is the author of eleven books on object technology and is well-known for his work in OO methodologies (MOSES, COMMA, OPEN, OOSPICE) and in OO metrics. Brian has been Regional Editor of Object-Oriented Systems, a member of the editorial board of Object Magazine/Component Strategies and Object Expert for many years, and is currently on the editorial board of Journal of Object Technology and Software and Systems Modelling. He was the Founder of the Object-Oriented Special Interest Group of the Australian Computer Society (NSW Branch) and Chairman of the Computerworld Object Developers' Awards committee for ObjectWorld 94 and 95 (Sydney). He is a frequent invited speaker at international OT conferences. In 1999, he was voted number 3 in the Who's Who of Object Technology (Handbook of Object Technology, CRC Press, Appendix N). He is currently a member of the Review Panel for the OMG's Software Process Engineering Model (SPEM) standards initiative and is a member of the UML 2.0 review team. In July 2001, Professor Henderson-Sellers was awarded a Doctor of Science (D.Sc.) from the University of London for his research contributions in object-oriented methodologies.

8.
The Value of Outsourcing: A Field Study   (Total citations: 2; self-citations: 0; citations by others: 2)
This article examines the effects of information systems outsourcing on the business processes of organizations. Rather than simply comparing outsourcing and not outsourcing, the study also addresses a third and increasingly common strategy: using software purchased "off-the-shelf." An extensive survey was distributed to business process managers over a cross-section of financial services processes and companies. Results show that outsourcing information systems can create lower overall process costs and may lead to superior overall process performance compared to processes that use software purchased off-the-shelf. Further, information systems built in-house lead to superior overall process performance compared to processes that use software purchased off-the-shelf. These results should assist business managers in gauging the possible effects of outsourcing information systems (or not) on their core processes.

9.
Abstract

A number of authors and multinational organizations have suggested that providing information services, and in particular software engineering and programming services, for export affords an important economic opportunity for poor countries. Throughout the world, developing countries have acted on this advice. This paper argues that the opportunities for software engineering services in particular are limited, at least for small developing economies. The main argument is that software engineering and programming are labor-intensive activities and that small developing countries simply do not have the resources required to acquire or train a sufficient number of software engineers and programmers. Any development policy that blindly follows the tenet that small developing countries can improve their economic position through the provision of information services for export is therefore bound to fail. Hence, more sophisticated policies are called for. The paper also examines a number of such policy options, including an innovative human resource development policy being developed in Jamaica. Keywords: information services for export, economic development policy, small developing countries, Jamaica

10.
The process of pollution risk assessment requires the assimilation of data that are spatially variable in nature, making geographical information systems (GIS) an ideal tool for such assessments. Over half of Britain's drinking water is obtained from surface water abstractions, many of which are situated in upland areas. In order to optimise the quality of abstracted waters it is important to assess the possible risks of pollution upstream from the point of abstraction. This paper describes the use of the PC-based WINGS™ and MapInfo Professional™ geographical information systems in the evolution of a risk assessment methodology to assess catchment risk. The work illustrates how such technology can assist in environmental decision-making to optimise the quality of drinking water supplies and enhance treatment efficiencies. Examples are given showing how raster and vector-based data can be used within a GIS framework to produce maps indicating areas of potential hazard to water quality, and coupled with existing models to predict and quantify risk frequency and impact. GIS techniques are further utilised in the formulation of a raw water monitoring programme to assist in intake operation and land-use planning in the catchment. The availability of suitable digital data was found to be variable, and some problems encountered in their integration and implementation within the system framework were resolved. Comment is given on the suitability and relative performance of the two software packages in the assessment of catchment risk. The work was carried out on a medium specification desktop PC, and therefore has the potential to be utilised across the intranet of a large utility company.
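The core GIS operation behind such hazard maps is a weighted overlay of raster layers. The following sketch illustrates that idea with plain numpy on synthetic layers; it is not the WINGS/MapInfo workflow described above, and the layers, weights, and threshold are assumptions for illustration.

```python
# Illustrative weighted raster overlay (assumed layers and weights, synthetic data):
# combine hazard factors upstream of an abstraction point into one risk map.

import numpy as np

rows, cols = 50, 50
rng = np.random.default_rng(1)

land_use_hazard = rng.integers(1, 5, size=(rows, cols)).astype(float)  # e.g. 4 = intensive agriculture
slope = rng.random((rows, cols)) * 30.0                                # degrees
distance_to_stream = rng.random((rows, cols)) * 500.0                  # metres

def norm(a):
    """Normalise a layer to 0..1 so weights control relative influence."""
    return (a - a.min()) / (a.max() - a.min())

weights = {"land_use": 0.5, "slope": 0.3, "proximity": 0.2}            # assumed weights

risk = (weights["land_use"] * norm(land_use_hazard)
        + weights["slope"] * norm(slope)
        + weights["proximity"] * (1.0 - norm(distance_to_stream)))     # nearer stream = riskier

high_risk_cells = int((risk > 0.7).sum())
print(f"cells flagged as high pollution risk: {high_risk_cells}")
```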

11.
Context: In an industry in which technological developments are rapid, software development organizations (SDOs) need to obtain the correct knowledge, use it efficiently and pass it on to future projects, evolving it accordingly, in order to keep up with continuously increasing competition and to obtain competitive advantage. Objective: The main aim of this paper is to propose a novel model, AiOLoS, for assessing the level and characteristics of organizational learning (OL) in SDOs. Method: The primary contributions of this two-legged AiOLoS model are the identification of the major process areas and the core processes that a learning software organization (LSO) follows during its OL process, and the provision of the necessary measures and corresponding definitions/interpretations for assessing the learning characteristics of the SDO. The research is supported by multiple case-study work to identify the mapping of the core processes and the applicability of the AiOLoS model to SDOs, its utilization as a tool for assessing OL, and its provision of a basis for software process improvement (SPI). Results: The case studies have shown not only that the AiOLoS measures are applicable to SDOs but also that they measure to a great extent the actual OL that is realized in the organization, and that the major process areas and core processes are actually related to the OL process of SDOs. Conclusion: AiOLoS has been designed to provide a starting point for the enhancement of the OL capabilities of SDOs, which in turn should provide a basis for conducting SPI activities. Therefore, it is also important to investigate a possible binding of AiOLoS to SPICE and the inclusion of a maturity dimension in AiOLoS.

12.
Abstract

Organizations normally do not possess a way to communicate security needs back to the rest of the organization. This paper demonstrates that organizations are vigilant to activity within their environment, so this research project focuses on process improvement to better organizations through internal processes. Prior to this project, Company X was unable to communicate and address threats to its organization, and its employees had not been trained on security. However, each employee understood the norms and values of company processes at an individual level, and each was able to contribute details of security issues as they perceived them, producing a comprehensive security model. This Security Working Group (SWG) project describes the steps necessary to create a self-educating, self-perpetuating process that spurs co-generative learning across an entire organization. Security training prepared each employee to be more attentive to potential security issues. The results of this research show that employees can detect threats in an organization with relatively little training.

13.
Context: Although Agile software development models have been widely used as a basis for the software project life-cycle since the 1990s, the number of studies that follow a sound empirical method and quantitatively reveal the effect of using these models over Traditional models is scarce. Objective: This article explains the empirical method and the results of systematic analyses and a comparison of the development performance and product quality of the Incremental Process and the Agile Process adopted in two projects of a mid-sized telecommunication software development company. The Incremental Process is an adaptation of the Waterfall Model, whereas the newly introduced Agile Process is a combination of the Unified Software Development Process, Extreme Programming, and Scrum. Method: The method followed to perform the analyses and comparison benefits from the combined use of qualitative and quantitative methods. It utilizes the GQM approach to set measurement objectives, CMMI as the reference model to map the activities of the software development processes, and a pre-defined assessment approach to verify the consistency of process executions and evaluate measure characteristics prior to quantitative analysis. Results: The results of the comparison showed that the Agile Process performed better than the Incremental Process in terms of productivity (79%), defect density (57%), defect resolution effort ratio (26%), Test Execution V&V Effectiveness (21%), and effort prediction capability (4%). These results indicate that the development performance and product quality achieved by following the Agile Process were superior to those achieved by following the Incremental Process in the projects compared. Conclusion: The acts of measurement, analysis, and comparison enabled a comprehensive review of the two development processes and resulted in an understanding of their strengths and weaknesses. The comparison results constituted objective evidence for organization-wide deployment of the Agile Process in the company.
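To make the reported relative improvements concrete, the following sketch shows how such percentages can be computed from basic project measures. The figures are invented placeholders chosen only so that the formulas reproduce the reported 79% and 57%; they are not the company's data, and only the formulas matter.

```python
# Worked sketch of the comparison arithmetic (placeholder project measures, not real data).

incremental = {"size_loc": 20_000, "effort_ph": 4_000, "defects": 260}
agile       = {"size_loc": 23_000, "effort_ph": 2_570, "defects": 130}

def productivity(p):
    """Output per person-hour (LOC / person-hours)."""
    return p["size_loc"] / p["effort_ph"]

def defect_density(p):
    """Defects per thousand lines of code."""
    return p["defects"] / (p["size_loc"] / 1000)

prod_gain = (productivity(agile) / productivity(incremental) - 1) * 100
dd_gain = (1 - defect_density(agile) / defect_density(incremental)) * 100

print(f"productivity improvement: {prod_gain:.0f}%")   # ~79% with these placeholders
print(f"defect density reduction: {dd_gain:.0f}%")     # ~57% with these placeholders
```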

14.
15.
A CMMI-Based Self-Assessment Tool for Small-Scale Software Processes   (Total citations: 1; self-citations: 0; citations by others: 1)
CMMI-SW is a software capability maturity assessment standard developed by the SEI on the basis of CMM-SW and promoted worldwide; it is mainly used to guide the improvement of software development processes and to assess software development capability. Based on CMMI and the idea of small-scale process assessment, this paper designs a lightweight software process assessment tool that helps small and medium-sized software organizations perform rapid software process self-assessment against CMMI.
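The kind of scoring such a self-assessment tool might perform can be sketched as follows: practitioners rate each specific practice of a process area, and the tool aggregates the ratings into a satisfaction level per process area. The rating scale, thresholds, and practice IDs below are assumptions for illustration, not the tool described in the paper.

```python
# Simplified sketch of a questionnaire-based CMMI self-assessment (assumed scale/thresholds).

RATING = {"fully": 1.0, "largely": 0.75, "partially": 0.4, "not": 0.0}

answers = {  # process area -> {specific practice id: rating}
    "Project Planning":        {"SP 1.1": "fully", "SP 1.4": "largely", "SP 2.1": "partially"},
    "Requirements Management": {"SP 1.1": "largely", "SP 1.3": "largely", "SP 1.5": "not"},
}

def process_area_score(ratings):
    """Average of the numeric ratings for one process area."""
    values = [RATING[r] for r in ratings.values()]
    return sum(values) / len(values)

def characterize(score):
    """Map a numeric score to a coarse satisfaction level (assumed thresholds)."""
    if score >= 0.85:
        return "satisfied"
    if score >= 0.50:
        return "partially satisfied"
    return "not satisfied"

for pa, ratings in answers.items():
    s = process_area_score(ratings)
    print(f"{pa}: {s:.2f} -> {characterize(s)}")
```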

16.
Abstract

Despite their rising popularity, distributed teams face a number of collaboration challenges that may potentially hinder their ability to productively coordinate their resources, activities, and information, often in dynamic and uncertain task environments. In this paper, we focus principally on the criticality of information alignment for supporting coordinated task performance in complex operational environments. As organizations become more distributed in expertise, geography, and time, appropriate alignment and coordination among distributed team members become more critical for minimizing the occurrence of information flow failures, poor decision-making, and degraded team performance. We first describe these coordination processes using the metaphor of an 'information clutch' that allows for smooth transitions of task priorities and activities in expert teams. We then present two case study examples that illustrate the potentially significant impact of information sharing and information alignment on productivity and coordination in organizations. We conclude with a discussion of future directions in this area.

17.
This paper presents the original software process model as it was developed by the SPICE project and delivered to ISO in June 1995 to become the international reference for process assessment. This model is used for software process assessments in order to compare the actual status of an organization's software processes to the requirements of the model. The process profile resulting from the assessment is used as a major input for a process improvement initiative.

18.
Context: Many large organizations juggle an application portfolio that contains different applications that fulfill similar tasks in the organization. In an effort to reduce operating costs, they are attempting to consolidate such applications. Before consolidating applications, the work that is done with these applications must be harmonized. This is also known as process harmonization. Objective: The increased interest in process harmonization calls for measures to quantify the extent to which processes have been harmonized. These measures should also uncover the factors that are of interest when harmonizing processes. Currently, such measures do not exist. Therefore, this study develops and validates a measurement model to quantify the level of process harmonization in an organization. Method: The measurement model was developed by means of a literature study and structured interviews. Subsequently, it was validated through a survey, using factor analysis and correlations with known related constructs. Results: As a result, a valid and reliable measurement model was developed. The factors that are found to constitute process harmonization are: the technical design of the business process and its data, the resources that execute the process, and the information systems that are used in the process. In addition, strong correlations were found between process harmonization and process standardization and between process complexity and process harmonization. Conclusion: The measurement model can be used by practitioners, because it shows them the factors that must be taken into account when harmonizing processes, and because it provides them with a means to quantify the extent to which they succeeded in harmonizing their processes. At the same time, it can be used by researchers to conduct further empirical research in the area of process harmonization.
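The correlation-with-related-constructs step can be illustrated with a few lines of Python: compute the Pearson correlation between a harmonization score and a standardization score across surveyed organizations. The scores below are synthetic and the positive relationship is an assumption; the study's instrument and data are not reproduced.

```python
# Sketch of validating a measurement model against a related construct (synthetic scores).

import numpy as np

rng = np.random.default_rng(42)
n_orgs = 60

standardization = rng.uniform(1, 7, n_orgs)                          # 7-point Likert-style scores
harmonization = 0.8 * standardization + rng.normal(0, 0.8, n_orgs)   # assumed positive relationship

r = np.corrcoef(standardization, harmonization)[0, 1]
print(f"Pearson r between harmonization and standardization: {r:.2f}")
```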

19.
Background: Functional size measurement (FSM) methods are increasingly being adopted by software organizations due to the benefits they provide to software project managers. The Function Point Analysis (FPA) measurement method has been used extensively and globally in software organizations. The COSMIC measurement method is considered a second-generation FSM method because of the novel aspects it brings to the FSM field. After the COSMIC method was proposed, the issue of convertibility from FPA to the COSMIC method arose, the main problem being the ability to convert FPA historical data to the corresponding COSMIC Function Point (CFP) data with a high level of accuracy, which would give organizations the ability to use the data in their future planning. Almost all the convertibility studies found in the literature involve converting FPA measures to COSMIC measures statistically, based on the final size generated by both methods. Objectives: This paper has three main objectives. The first is to explore the accuracy of the conversion type that converts FPA measures to COSMIC measures statistically, and that of the type that converts FPA transaction function measures to COSMIC measures. The second is to propose a new conversion type that predicts the number of COSMIC data movements based on the number of file types referenced by all the elementary processes in a single application. The third is to compare the accuracy of our proposed conversion type with the other two conversion types found in the literature. Method: One dataset from the management information systems domain was used to compare the accuracy of all three conversion types, using a systematic conversion approach that applies three regression models: Ordinary Least Squares, Robust Least Trimmed Squares, and logarithmic transformation. Four datasets from previous studies were used to evaluate the accuracy of the three conversion types, to which the Leave-One-Out Cross-Validation technique was applied to obtain measures of fitting accuracy. Results: The conversion type most often used, as well as the conversion type based on transaction function size, were found to generate nonlinear, inaccurate and invalid results according to measurement theory. In addition, they produce a loss of measurement information in the conversion process because of the FPA weighting system and FPA structural problems, such as illegal scale transformation. Our proposed conversion type avoids the problems inherent in the other two types, but not the nonlinearity problem. Furthermore, the proposed conversion type has been found to be more accurate than the other types when the COSMIC functional processes comprising the dataset applications are systematically larger than their corresponding FPA elementary processes, or when the processes vary from small to large. Finally, our proposed conversion type delivered better results over the tested datasets, whereas, in general, there is no statistically significant difference between the accuracy of the conversion types examined for any given dataset; in particular, the conversion type most often used is not the most accurate. Conclusions: Our proposed conversion type achieves accurate results over the tested datasets. However, the lack of knowledge needed to use it over all the datasets in the literature limits the value of this conclusion. Consequently, practitioners converting from FPA to COSMIC should not stay with only one conversion type, assuming that it is the best. In order to achieve a high level of accuracy in the conversion process, all three conversion types should be tested via a systematic conversion approach.
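The statistical conversion step itself is straightforward to sketch: fit a regression from FPA size to COSMIC size on historical projects and estimate its accuracy with leave-one-out cross-validation, reporting, for example, the mean magnitude of relative error (MMRE). The data points below are synthetic; none of the paper's datasets or results are reproduced, and only ordinary least squares is shown.

```python
# Hedged sketch: FPA (UFP) -> COSMIC (CFP) conversion via OLS regression,
# evaluated with leave-one-out cross-validation (synthetic historical projects).

import numpy as np

ufp = np.array([55, 80, 120, 150, 200, 260, 310, 400, 520, 640], dtype=float)
cfp = np.array([60, 95, 130, 170, 230, 300, 340, 460, 600, 720], dtype=float)

def loocv_mmre(x, y):
    """Mean magnitude of relative error under leave-one-out cross-validation."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)   # ordinary least squares fit
        pred = slope * x[i] + intercept
        errors.append(abs(pred - y[i]) / y[i])
    return float(np.mean(errors))

slope, intercept = np.polyfit(ufp, cfp, 1)
print(f"conversion formula: CFP ~= {slope:.2f} * UFP + {intercept:.1f}")
print(f"LOOCV MMRE: {loocv_mmre(ufp, cfp):.2%}")
```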

20.
Context: More and more, small and medium-sized enterprises (SMEs) are using software to augment the functionality of their products and offerings. Variability management of software is becoming an interesting topic for SMEs with expanding portfolios and increasingly complex product structures. While the use of software product lines to resolve high variability is well known in larger organizations, less is known about the practices in SMEs. Objective: This paper presents results from a survey of software-developing SMEs. The purpose of the paper is to provide a snapshot of the current awareness and practices of variability modeling in organizations that are developing software under the constraints present in SMEs. Method: A survey with questions regarding variability practices was distributed to software-developing organizations in a region of Sweden that has many SMEs. The response rate was 13%, and 25 responses are used in this analysis. Results: We find that, although there are SMEs that develop implicit software product lines and have relatively sophisticated variability structures for their solution space, the structures of the problem space and the product space have room for improvement. Conclusions: The answers in the survey indicate that SMEs are in situations where they can benefit from more structured variability management, but awareness needs to be raised. Even though problem space similarity is high, little reuse and traceability activity is performed. The existence of SMEs with qualified variability management and product line practices indicates that small organizations are capable of applying such practices.

