Similar Documents
20 similar documents found.
1.
ABSTRACT

Semantic Web Applications (SWAs) differ from conventional Web applications and software in that they capture semantics and support heterogeneous information. Therefore, the evaluation of SWA quality should be treated differently. A quality framework for SWAs should not only retain the attributes of Web applications and software that are relevant to SWAs, but also accommodate attributes unique to SWAs. We propose the SWAQ framework, comprising metrics for the evaluation of SWA quality attributes, which have been validated using standard criteria for quality models. SWAQ uses the analytic hierarchy process for multiple-criteria decision making to rank SWAs. Moreover, a fuzzy system is employed to handle the uncertainty involved in quality evaluation.
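The abstract mentions ranking SWAs with the analytic hierarchy process (AHP). As a hedged illustration only (the criteria, comparison values and SWA names below are invented, not taken from SWAQ), the following sketch derives AHP priority weights for three candidate SWAs from a pairwise-comparison matrix and checks Saaty's consistency ratio:

```python
import numpy as np

# Pairwise comparisons of three SWAs on one quality criterion (Saaty 1-9 scale).
# Matrix values are illustrative, not from the paper.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                        # priority vector (sums to 1)

# Consistency ratio (CR < 0.1 is the usual acceptance threshold)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
print("priorities:", w.round(3), "CR:", round(ci / ri, 3))
```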

2.
Context: Variability is the ability of a software artifact (e.g., a system, component) to be adapted for a specific context, in a preplanned manner. Variability not only affects functionality, but also quality attributes (e.g., security, performance). Service-based software systems consider variability in functionality implicitly by dynamic service composition. However, variability in quality attributes of service-based systems seems insufficiently addressed in current design practices.
Objective: We aim at (a) assessing methods for handling variability in quality attributes of service-based systems, (b) collecting evidence about current research that suggests implications for practice, and (c) identifying open problems and areas for improvement.
Method: A systematic literature review with an automated search was conducted. The review included studies published between 2000 and 2011. We identified 46 relevant studies.
Results: Current methods focus on a few quality attributes, in particular performance and availability. Also, most methods use formal techniques. Furthermore, current studies do not provide enough evidence for practitioners to adopt proposed approaches. So far, variability in quality attributes has mainly been studied in laboratory settings rather than in industrial environments.
Conclusions: The product line domain, as the domain that traditionally deals with variability, has only little impact on handling variability in quality attributes. The lack of tool support, the lack of practical research and evidence for the applicability of approaches to handle variability are obstacles for practitioners to adopt methods. Therefore, we suggest studies in industry (e.g., surveys) to collect data on how practitioners handle variability of quality attributes in service-based systems. For example, results of our study help formulate hypotheses and questions for such surveys. Based on needs in practice, new approaches can be proposed.

3.
Context: The software architecture of a system is the result of a set of architectural decisions. The topic of architectural decisions in software engineering has received significant attention in recent years. However, no systematic overview exists on the state of research on architectural decisions.
Objective: The goal of this study is to provide a systematic overview of the state of research on architectural decisions. Such an overview helps researchers reflect on previous research and plan future research. Furthermore, such an overview helps practitioners understand the state of research, and how research results can help practitioners in their architectural decision-making.
Method: We conducted a systematic mapping study, covering studies published between January 2002 and January 2012. We defined six research questions. We queried six reference databases and obtained an initial result set of 28,895 papers. We followed a search and filtering process that resulted in 144 relevant papers.
Results: After classifying the 144 relevant papers for each research question, we found that current research focuses on documenting architectural decisions. We found that only a few studies describe architectural decisions from industry. We identified potential future research topics: domain-specific architectural decisions (such as mobile), achieving specific quality attributes (such as reliability or scalability), uncertainty in decision-making, and group architectural decisions. Regarding empirical evaluations of the papers, around half of the papers use systematic empirical evaluation approaches (such as surveys or case studies). Still, few papers on architectural decisions use experiments.
Conclusion: Our study confirms the increasing interest in the topic of architectural decisions. This study helps the community reflect on the past ten years of research on architectural decisions. Researchers are offered a number of promising future research directions, while practitioners learn what existing papers offer.

4.
Context: Quality of Service (QoS) is a major issue in various web service related activities. Quality models have been proposed as the engineering artefact to provide a common framework of understanding for QoS, by defining the quality factors that apply to web service usage.
Objective: The goal of this study is to evaluate the current state of the art of the proposed quality models for web services, specifically: (1) which are these proposals and how are they related; (2) what are their structural characteristics; (3) what quality factors are the most and least addressed; and (4) what are their most consolidated definitions.
Method: We have conducted a systematic mapping by defining a robust protocol that combines automatic and manual searches from different sources. We used a rigorous method to elicit the keywords from the research questions and selection criteria to retrieve the final papers to evaluate. We have adopted the ISO/IEC 25010 standard to articulate our analysis.
Results: We have evaluated 47 different quality models from 65 papers that fulfilled the selection criteria. By analyzing these quality models in depth, we have: (1) distributed the proposals along the time dimension and identified their relationships; (2) analyzed their size (visualizing the number of nodes and levels) and definition coverage (as an indicator of the quality of the proposals); (3) quantified the coverage of the different ISO/IEC 25010 quality factors by the proposals; (4) identified the quality factors that appeared in at least 30% of the surveyed proposals and provided the most consolidated definitions for them.
Conclusions: We believe that this panoramic view of the anatomy of quality models for web services may be a good reference for prospective researchers and practitioners in the field, and especially may help avoid the definition of new proposals that do not align with current research.

5.
Context: Although SPEM 2.0 has great potential for software process modeling, it does not provide concepts or formalisms for precise modeling of process behavior. Indeed, SPEM fails to address process simulation, execution, monitoring and analysis, which are important activities in process management. On the other hand, BPMN 2.0 is a widely used notation to model business processes that has associated tools and techniques to facilitate the aforementioned process management activities. Using BPMN to model software development processes can leverage BPMN's infrastructure to improve the quality of these processes. However, BPMN lacks an important feature to model software processes: a mechanism to represent process tailoring.
Objective: This paper proposes BPMNt, a conservative extension to BPMN that aims at creating a tailoring representation mechanism similar to the one found in SPEM 2.0.
Method: We have used the BPMN 2.0 extensibility mechanism to include the representation of specific tailoring relationships, namely suppression, local contribution, and local replacement, which establish links between process elements (as in the case of SPEM). Moreover, this paper also presents some rules to ensure the consistency of BPMN models when using tailoring relationships.
Results: In order to evaluate our proposal we have implemented a tool to support the BPMNt approach and have applied it to representing real process adaptations in the context of an academic management system development project. Results of this study showed that the approach and its support tool can successfully be used to adapt BPMN-based software processes in real scenarios.
Conclusion: We have proposed an approach to enable reuse and adaptation of BPMN-based software process models, as well as derivation traceability between models through tailoring relationships. We believe that bringing such capabilities into BPMN will open new perspectives to software process management.

6.
Context: Software quality attributes are assessed by employing appropriate metrics. However, the choice of such metrics is not always obvious and is further complicated by the multitude of available metrics. To assist metric selection, several properties have been proposed. However, although metrics are often used to assess successive software versions, there is no property that assesses their ability to capture structural changes along evolution.
Objective: We introduce a property, Software Metric Fluctuation (SMF), which quantifies the degree to which a metric score varies due to changes occurring between successive system versions. With regard to SMF, metrics can be characterized as sensitive (changes induce high variation in the metric score) or stable (changes induce low variation in the metric score).
Method: The SMF property has been evaluated by: (a) a case study on 20 OSS projects, to assess the ability of SMF to characterize different metrics differently, and (b) a case study on 10 software engineers, to assess SMF's usefulness in the metric selection process.
Results: The results of the first case study suggest that different metrics that quantify the same quality attributes present differences in their fluctuation. We also provide evidence that an additional factor related to metric fluctuation is the function used for aggregating the metric from the micro to the macro level. In addition, the outcome of the second case study suggests that SMF is capable of helping practitioners in metric selection, since: (a) different practitioners have different perceptions of metric fluctuation, and (b) this perception is less accurate than the systematic approach that SMF offers.
Conclusions: SMF is a useful metric property that can improve the accuracy of metric selection. Based on SMF, we can differentiate metrics according to their degree of fluctuation. Such results can provide input to researchers and practitioners in their metric selection processes.
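The exact SMF formula is not given in the abstract, so the sketch below uses a plausible stand-in (mean relative change of a metric score between successive versions) purely to illustrate how a "sensitive" metric would score higher than a "stable" one; the metric names and version data are invented:

```python
def fluctuation(scores):
    """Hypothetical fluctuation measure: mean relative change of a metric score
    between successive versions. This is only an illustrative stand-in, not the
    SMF formula from the paper."""
    changes = [abs(b - a) / abs(a) for a, b in zip(scores, scores[1:]) if a != 0]
    return sum(changes) / len(changes) if changes else 0.0

# Metric scores of one class over five releases -- invented data.
cbo_per_version = [8, 9, 14, 13, 20]            # coupling between objects
lcom_per_version = [0.71, 0.72, 0.70, 0.71, 0.69]  # lack of cohesion

print(f"CBO fluctuation:  {fluctuation(cbo_per_version):.2f}")   # sensitive metric
print(f"LCOM fluctuation: {fluctuation(lcom_per_version):.2f}")  # stable metric
```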

7.
As business transitions into the new economy, the successful use of e-systems has become a strategic goal. Especially in business-to-consumer (e-commerce) applications, users place a high value on the quality of their interactive shopping experience. However, quality is difficult to define and measure and, most importantly, it is difficult to measure its impact on the end user. Among the many research questions that arise, some of the most important concern the exact nature of the quality attributes that define an e-commerce system, and how one could model these attributes in order to increase its acceptance. Bearing in mind that e-commerce systems are actually user/data-intensive web-based software systems, this work performed a survey which resulted in a theoretical model that helps to measure such systems' dynamics through their decomposition into primary quality characteristics. The proposed model is based on Bayesian Networks and ISO 9126. Besides the emphasis on specific software quality attributes, it also provides a quality assessment process that aids developers in designing and producing e-commerce systems of high quality. Using a Bayesian Network, the model can combine different types of evidence and provide reasoning from effect to cause and vice versa.
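To make the "reasoning from effect to cause and vice versa" concrete, here is a toy Bayes-rule calculation on a single Quality → UserAccepts link; the probabilities are invented, and the paper's actual ISO 9126-based network is not reproduced here:

```python
# Toy illustration of the two reasoning directions a Bayesian Network enables.
p_quality_high = 0.6                      # prior P(Quality = high)
p_accept_given_high = 0.9                 # P(UserAccepts | Quality = high)
p_accept_given_low = 0.3                  # P(UserAccepts | Quality = low)

# Cause -> effect: marginal probability that a user accepts the e-shop.
p_accept = (p_accept_given_high * p_quality_high
            + p_accept_given_low * (1 - p_quality_high))

# Effect -> cause: given that the user accepted, how likely was quality high?
p_high_given_accept = p_accept_given_high * p_quality_high / p_accept

print(f"P(accept)                = {p_accept:.3f}")
print(f"P(quality high | accept) = {p_high_given_accept:.3f}")
```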

8.
Context: Software quality models provide either abstract quality characteristics or concrete quality measurements; there is no seamless integration of these two aspects. Quality assessment approaches are, hence, also very specific or remain abstract. Reasons for this include the complexity of quality and the various quality profiles in different domains, which make it difficult to build operationalised quality models.
Objective: In the project Quamoco, we developed a comprehensive approach aimed at closing this gap.
Method: The project combined constructive research, which involved a broad range of quality experts from academia and industry in workshops, sprint work and reviews, with empirical studies. All deliverables within the project were peer-reviewed by two project members from a different area. Most deliverables were developed in two or three iterations and underwent an evaluation.
Results: We contribute a comprehensive quality modelling and assessment approach: (1) A meta quality model defines the structure of operationalised quality models. It includes the concept of a product factor, which bridges the gap between concrete measurements and abstract quality aspects, and allows modularisation to create modules for specific domains. (2) A largely technology-independent base quality model reduces the effort and complexity of building quality models for specific domains. For Java and C# systems, we refined it with about 300 concrete product factors and 500 measures. (3) A concrete and comprehensive quality assessment approach makes use of the concepts in the meta-model. (4) An empirical evaluation of the above results using real-world software systems showed: (a) The assessment results using the base model largely match the expectations of experts for the corresponding systems. (b) The approach and models are well understood by practitioners and considered to be both consistent and well suited for getting an overall view on the quality of a software product. The validity of the base quality model could not be shown conclusively, however. (5) The extensive, open-source tool support is in a mature state. (6) The model for embedded software systems is a proof-of-concept for domain-specific quality models.
Conclusion: We provide a broad basis for the development and application of quality models in industrial practice as well as a basis for further extension, validation and comparison with other approaches in research.
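As a rough, generic illustration of the bottom-up assessment such an operationalised quality model performs (raw measures normalised to utilities, utilities aggregated into product factors and then into quality aspects by weighted sums), the sketch below uses invented factors, weights and thresholds; it is not the Quamoco base model or its evaluation specification:

```python
def linear_utility(value, worst, best):
    """Map a raw measure onto [0, 1], where 1 is the best achievable value."""
    score = (value - worst) / (best - worst)
    return min(1.0, max(0.0, score))

# Normalised measures feeding one product factor each -- all values invented.
product_factors = {
    "duplication":    linear_utility(0.12, worst=0.5, best=0.0),  # clone coverage
    "documentation":  linear_utility(0.25, worst=0.0, best=0.4),  # comment ratio
    "structuredness": linear_utility(3.1,  worst=8.0, best=1.0),  # avg nesting depth
}

# Weighted sum into one quality aspect (maintainability); weights are illustrative.
weights = {"duplication": 0.4, "documentation": 0.2, "structuredness": 0.4}
maintainability = sum(weights[f] * v for f, v in product_factors.items())
print(f"maintainability score: {maintainability:.2f}")
```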

9.
Context: The quality of business process models (i.e., software artifacts that capture the relations between the organizational units of a business) is essential for enhancing the management of business processes. However, such modeling is typically carried out manually. This is already challenging and time consuming when (1) input uncertainty exists, (2) activities are related, and (3) resource allocation has to be considered. When including optimization requirements regarding flexibility and robustness, it becomes even more complicated, potentially resulting in non-optimized models, errors, and lack of flexibility.
Objective: To facilitate the human work and to improve the resulting models in scenarios subject to uncertainty, we propose a software-supported approach for automatically creating configurable business process models from declarative specifications, considering all the aforementioned requirements.
Method: First, the scenario is modeled through a declarative language which allows the analysts to specify its variability and uncertainty. Thereafter, a set of optimized enactment plans (each one representing a potential execution alternative) is generated from such a model considering the input uncertainty. Finally, to deal with this uncertainty during run-time, a flexible configurable business process model is created from these plans.
Results: To validate the proposed approach, we conduct a case study based on a real business which is subject to uncertainty. Results indicate that our approach improves the actual performance of the business and that the generated models support most of the uncertainty inherent to the business.
Conclusions: The proposed approach automatically selects the best part of the variability of a declarative specification. Unlike existing approaches, our approach considers input uncertainty, the optimization of multiple objective functions, and both the resource and the control-flow perspectives. However, our approach also presents a few limitations: (1) it is focused on the control-flow, and the data perspective is only partially addressed, and (2) model attributes need to be estimated.

10.
Both software organisations and the academic community are aware that the requirements phase of software development is in need of further support. We address this problem by creating a specialised Requirements Capability Maturity Model (R-CMM). The model focuses on the requirements engineering process as defined within the established Software Engineering Institute's (SEI's) software process improvement framework. Our empirical work with software practitioners is a primary motivation for creating this requirements engineering process improvement model. Although all organisations in our study were involved in software process improvement (SPI), they all showed a lack of control over many requirements engineering activities. This paper describes how the requirements engineering (RE) process is decomposed and prioritised in accordance with maturity goals set by the SEI's Software Capability Maturity Model (SW-CMM). Our R-CMM builds on the SEI's framework by identifying and defining recommended RE sub-processes that meet maturity goals. This new focus will help practitioners to define their RE process with a view to setting realistic goals for improvement.

Sarah Beecham is a research fellow in the Department of Maths and Computing at The Open University in the UK. She is currently working on the EPSRC-funded CRESTES project, looking into modelling resource estimation for long-lived software. She recently completed her PhD for a programme of work entitled "A Requirements-based Software Process Maturity Model". Her research interests are in estimation for software evolution and maintenance, software process improvement, and empirical methods in software engineering and requirements engineering. Tracy Hall leads the Systems & Software Research Group in the Department of Computer Science at the University of Hertfordshire. She specialises in the empirical investigation of technical and non-technical issues within software engineering. During the past ten years Tracy has successfully collaborated with many companies on a variety of research projects. She is very active in the Empirical Software Engineering community and is regularly invited to talk about empirical methods both in the UK and abroad. Tracy is an accomplished researcher, having published over twenty high-quality journal papers. Austen Rainer is a senior lecturer at the University of Hertfordshire. He studied for his PhD at Bournemouth University, in conjunction with IBM Hursley Park. His current research interests include open source software development, longitudinal case study research, and the credibility of empirical evidence for researchers and software practitioners.

11.
Target setting in software quality function deployment (SQFD) is very important since it is directly related to the development of high-quality products with high customer satisfaction. However, target setting is usually done subjectively in practice, which is not scientific. Two quantitative approaches for setting target values, benchmarking and primitive linear regression, have been developed and applied in the past to overcome this problem (Akao and Yoji, 1990). But these approaches cannot be used to assess the impact of unachieved targets on customer satisfaction with the customer requirements. In addition, both of them are based on linear regression and are not very practical in many applications. In this paper, we present an innovative quantitative method for setting technical targets in SQFD that enables analysis of the impact of unachieved target values on customer satisfaction. It is based on an assessment of the impact of technical attributes on the satisfaction of customer requirements. In addition, both linear and non-linear regression techniques are utilized in our method, which improves on the existing quantitative methods that are based only on linear regression.

Frank Liu is currently an associate professor and a director of the McDonnell Douglas Foundation software engineering laboratory at the University of Missouri-Rolla. He has been working on requirements engineering, software quality management, and knowledge-based software engineering since 1992. He has published about 50 papers in peer-reviewed journals and conferences in the above areas and several other software engineering application areas. He participates in research projects with total funding of more than four million dollars as a PI or Co-PI, sponsored by the National Science Foundation, Sandia National Laboratory, the U.S. Air Force, the University of Missouri Research Board, and Toshiba Corporation. He has served as a program committee member for many conferences and was a program committee vice chair for the 2000 International Conference on Software Engineering and Knowledge Engineering. Kunio Noguchi is a senior quality expert in the software engineering center at Toshiba Corporation. He has published several papers in the area of quality management systems. Anuj Dhungana was an M.S. graduate student in the computer science department at Texas Tech University when he performed this research. V.V.N.S.N. Srirangam A. was an M.S. graduate student in the computer science department at Texas Tech University when he performed this research. Praveen Inuganti was an M.S. graduate student in the computer science department at the University of Missouri-Rolla when he performed this research.
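The abstract states that both linear and non-linear regression are used to relate technical attribute values to customer satisfaction, and that technical targets are then derived from those relationships. A minimal sketch of that idea, with invented data and an assumed exponential-decay model form (the paper's actual data and model forms are not reproduced), could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented data: one technical attribute (response time, ms) vs. customer
# satisfaction on a 1-5 scale.
x = np.array([100, 200, 400, 800, 1600], dtype=float)
y = np.array([4.8, 4.5, 3.9, 2.8, 1.7])

# Linear model: y = a*x + b
a, b = np.polyfit(x, y, 1)

# One possible non-linear model: exponential decay y = c*exp(-k*x) + d
def expo(x, c, k, d):
    return c * np.exp(-k * x) + d

(c, k, d), _ = curve_fit(expo, x, y, p0=(4.0, 0.001, 1.0))

# Invert each fitted model to suggest a technical target for a desired satisfaction.
target_satisfaction = 4.0
linear_target = (target_satisfaction - b) / a
nonlinear_target = -np.log((target_satisfaction - d) / c) / k
print(f"target response time (linear):     {linear_target:.0f} ms")
print(f"target response time (non-linear): {nonlinear_target:.0f} ms")
```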

12.
Techniques for statistical process control (SPC), such as using a control chart, have recently garnered considerable attention in the software industry. These techniques are applied to manage a project quantitatively and meet established quality and process-performance objectives. Although many studies have demonstrated the benefits of using a control chart to monitor software development processes (SDPs), some controversy exists regarding the suitability of employing conventional control charts to monitor SDPs. One major problem is that conventional control charts require a large amount of data from a homogeneous source of variation when constructing valid control limits. However, a large dataset is typically unavailable for SDPs. Aggregating data from projects with similar attributes to acquire the required number of observations may lead to wide control limits, due to the mixing of multiple common causes of variation, when applying a conventional control chart. To overcome these problems, this study utilizes a Q chart for short-run manufacturing processes as an alternative technique for monitoring SDPs. The Q chart, which has early detection capability, real-time charting, and fixed control limits, allows software practitioners to monitor process performance using a small amount of data in early SDP stages. To assess the performance of the Q chart for monitoring SDPs, three examples are utilized to demonstrate Q chart effectiveness. Some recommendations for practical use of Q charts for SDPs are provided.
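One common formulation of the Q statistic for individual observations with unknown mean and variance (Quesenberry-style) transforms each new observation onto a standard-normal scale, so fixed ±3 limits apply from the third point onward. The sketch below uses that formulation with invented defect-density data; it is not taken from the paper's examples:

```python
import numpy as np
from scipy import stats

def q_statistics(x):
    """Q statistics for individual observations, mean and variance unknown
    (one common formulation of Quesenberry's Q chart). Each Q_r is approximately
    standard normal when the process is in control."""
    x = np.asarray(x, dtype=float)
    qs = []
    for r in range(3, len(x) + 1):
        prev = x[:r - 1]
        t = np.sqrt((r - 1) / r) * (x[r - 1] - prev.mean()) / prev.std(ddof=1)
        u = stats.t.cdf(t, df=r - 2)          # probability integral transform
        qs.append(stats.norm.ppf(u))          # back to the standard-normal scale
    return qs

# Invented data: defect density (defects/KLOC) of the first reviews in a new
# project -- far too few points for a conventional control chart.
defect_density = [2.1, 2.4, 1.9, 2.2, 2.0, 3.8, 2.3]
for r, q in enumerate(q_statistics(defect_density), start=3):
    flag = "  <-- signal" if abs(q) > 3 else ""
    print(f"observation {r}: Q = {q:+.2f}{flag}")
```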

13.
Context: Emerging multicores and clusters of multicores that may operate in parallel have set a new challenge: the development of massively parallel software composed of thousands of loosely coupled or even completely independent threads/processes, such as MapReduce and Java 3.0 workers, or Erlang processes, respectively. Testing and verification is a critical phase in the development of such software products.
Objective: Generating test cases based on operational profiles and certifying the declared operational reliability figure of a given software product is a well-established process for sequential software. This paper proposes an adaptation of that process for a class of massively parallel software: large-scale task trees.
Method: The proposed method uses statistical usage testing and operational reliability estimation based on operational profiles and novel test suite quality indicators, namely the percentage of different task trees and the percentage of different paths.
Results: As an example, the proposed method is applied to operational reliability certification of a parallel software infrastructure named the TaskTreeExecutor. The paper proposes an algorithm for generating random task trees to enable that application. Test runs in the experiments involved hundreds and thousands of Win32/Linux threads, thus demonstrating the scalability of the proposed approach. For practitioners, the most useful result presented is the method for determining the number of task trees and the number of paths needed to certify a given operational reliability of a software product. Practitioners may also use the proposed coverage metrics to measure the quality of an automatically generated test suite.
Conclusion: This paper provides a useful solution for test case generation that enables the operational reliability certification process for a class of massively parallel software called large-scale task trees. The usefulness of this solution was demonstrated by a case study: operational reliability certification of a real parallel software product.
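Statistical usage testing draws test cases with the probabilities given by an operational profile. The sketch below shows only that sampling step, with an invented profile; the paper's random task-tree generation algorithm is not reproduced here:

```python
import random
from collections import Counter

# Invented operational profile: probability of each operation occurring in the field.
operational_profile = {
    "submit_small_tree":  0.55,   # trees with few tasks
    "submit_medium_tree": 0.30,
    "submit_large_tree":  0.10,
    "cancel_execution":   0.05,
}

random.seed(42)
operations, weights = zip(*operational_profile.items())
test_suite = random.choices(operations, weights=weights, k=200)

print(Counter(test_suite))   # frequencies should mirror the profile
```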

14.
Context: There are two interrelated difficulties in requirements engineering processes. First, free-format modelling practices in requirements engineering activities may lead to low-quality artefacts and productivity problems. Second, the COSMIC Function Point Method is not yet widespread in the software industry because applying measurement rules to imprecise and ambiguous textual requirements is difficult and requires additional human measurement effort. This challenge is common to all functional size measurement methods.
Objective: In this study, alternative solutions have been investigated to address these two difficulties. Information created during the requirements engineering process is formalized as an ontology that also becomes a convenient model for transforming requirements into COSMIC Function Point Method concepts.
Method: A method is proposed to automatically measure the functional size of software by using the designed ontology. The proposed method has been implemented as a software application and verified with real projects conducted within the ICT department of a leading telecommunications provider in Turkey.
Results: We demonstrated a novel method to measure the functional size of software in COSMIC FP automatically. It is based on a newly developed requirements engineering ontology. Our proposed method has several advantages over other methods explored in previous research.
Conclusion: Manual and automated measurement results are in agreement, and the tool is promising for the company under study and for the industry at large.
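In the COSMIC method, each identified data movement (Entry, Exit, Read or Write) contributes one COSMIC Function Point (CFP), so sizing reduces to tallying movements per functional process. The movements below are invented; in the paper they would be derived automatically from the requirements ontology:

```python
from collections import Counter

# Each tuple is (functional process, data movement type); every movement = 1 CFP.
movements = [
    ("Create customer", "Entry"),   # customer data crosses the boundary inward
    ("Create customer", "Write"),   # persisted to storage
    ("Create customer", "Exit"),    # confirmation message crosses the boundary outward
    ("List invoices",   "Entry"),
    ("List invoices",   "Read"),
    ("List invoices",   "Exit"),
]

per_process = Counter(process for process, _ in movements)
for process, cfp in per_process.items():
    print(f"{process}: {cfp} CFP")
print("Total functional size:", len(movements), "CFP")
```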

15.
Context: Software quality issues are commonly reported when offshoring software development. Value-based software engineering addresses this by ensuring key stakeholders have a common understanding of quality.
Objective: This work seeks to understand the levels of alignment between key stakeholder groups within a company on the priority given to aspects of software quality developed as part of an offshoring relationship. Furthermore, the study aims to identify factors impacting the levels of alignment identified.
Method: Three case studies were conducted, with representatives of key stakeholder groups ranking aspects of software quality in a hierarchical cumulative exercise. The results are analysed using Spearman rank correlation coefficients and inertia. The results were discussed with the groups to gain a deeper understanding of the issues impacting alignment.
Results: Various levels of alignment were found between the various groups. The reasons for misalignment were found to include cultural factors, control of quality in the development process, short-term versus long-term orientations, understanding of the cost-benefits of quality improvements, communication and coordination.
Conclusions: The factors that negatively affect alignment can vary greatly between different cases. The work emphasises the need for greater support to align company-internal success-critical stakeholder groups in their understanding of quality when offshoring software development.
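A minimal sketch of the alignment measurement step: Spearman's rank correlation between two stakeholder groups' priority rankings of quality aspects. The aspects and rankings below are invented, not the case-study data:

```python
from scipy.stats import spearmanr

# Priority rankings (1 = most important) of six quality aspects by two groups.
aspects  = ["reliability", "usability", "performance",
            "security", "maintainability", "portability"]
onshore  = [1, 2, 3, 4, 5, 6]
offshore = [2, 5, 1, 3, 6, 4]

rho, p = spearmanr(onshore, offshore)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")  # rho near 1 => well-aligned groups
```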

16.
Context: In a previous study, we reported on a systematic literature review (SLR) based on a manual search of 13 journals and conferences undertaken in the period 1st January 2004 to 30th June 2007.
Objective: The aim of this on-going research is to provide an annotated catalogue of SLRs available to software engineering researchers and practitioners. This study updates our previous study using a broad automated search.
Method: We performed a broad automated search to find SLRs published in the time period 1st January 2004 to 30th June 2008. We contrast the number, quality and source of these SLRs with the SLRs found in the original study.
Results: Our broad search found an additional 35 SLRs, corresponding to 33 unique studies. Of these papers, 17 appeared relevant to the undergraduate educational curriculum and 12 appeared of possible interest to practitioners. The number of SLRs being published is increasing. The quality of papers in conferences and workshops has improved as more researchers use SLR guidelines.
Conclusion: SLRs appear to have gone past the stage of being used solely by innovators but cannot yet be considered a mainstream software engineering research methodology. They address a wide range of topics but still have limitations, such as often failing to assess primary study quality.

17.
Many small software organizations have recognized the need to improve their software product. Evaluating the software product alone seems insufficient, since it is known that its quality is largely dependent on the process used to create it. Thus, small organizations are asking for evaluation of their software processes and products. The ISO/IEC 14598-5 standard is already used as a methodology basis for evaluating software products. This article explores how it can be combined with the CMMI to produce a methodology that can be tailored for process evaluation in order to improve their software processes. SM: CMMI is a service mark of Carnegie-Mellon University.

Sylvie Trudel has over 20 years of experience in software. She worked for more than 10 years in the development and implementation of management information systems and embedded real-time systems. Since 1996, she has worked as a process improvement specialist, implementing best practices into organizations' processes from CMM and CMMI models. She performed several CMM and CMMI assessments and participated in many other CMM assessments such as CBA IPI, SCE, and other proprietary methods. She obtained a bachelor's degree in computer science in 1986 from Laval University in Québec City and a Master's degree in Software Engineering at École de technologie supérieure (ÉTS) in Montréal. Sylvie is currently working as a software engineering advisor at the Centre de Recherche Informatique de Montréal (CRIM). Jean-Marc Lavoie has been working in software development for over 10 years. He performed and published a comparative study between the guide to the SWEBOK and the CMMI in 2003. Jean-Marc obtained a bachelor's degree in Electrical Engineering. He is pursuing a Master's degree in Software Engineering at École de technologie supérieure (ÉTS) in Montréal while working as a software architect at Trisotech. Marie-Claude Pare has been working in software development for 7 years. Marie-Claude obtained a bachelor's degree in Software Engineering from École Polytechnique in Montréal. She is pursuing a Master's degree in Software Engineering at École de technologie supérieure (ÉTS) in Montréal while working as a software engineer at Motorola GSG Canada. Dr Witold Suryn is a Professor at the École de technologie supérieure, Montreal, Canada (an engineering school of the Université du Québec network of institutions), where he teaches graduate and undergraduate software engineering courses and conducts research in the domain of software quality engineering, the software engineering body of knowledge and software engineering fundamental principles. Dr Suryn is also the principal researcher and the director of GELOG: IQUAL, the Software Quality Engineering Research Group at École de technologie supérieure. Since October 2003 Dr. Suryn has held the position of International Secretary of ISO/IEC SC7 – System and Software Engineering.

18.
The amount of resources allocated for software quality improvements is often not enough to achieve the desired software quality. Software quality classification models that yield a risk-based quality estimation of program modules, such as fault-prone (fp) and not fault-prone (nfp), are useful as software quality assurance techniques. Their usefulness is largely dependent on whether enough resources are available for inspecting the fp modules. Since a given development project has its own budget and time limitations, a resource-based software quality improvement seems more appropriate for achieving its quality goals. A classification model should provide quality improvement guidance so as to maximize resource utilization. We present a procedure for building software quality classification models from the limited-resources perspective. The essence of the procedure is the use of our recently proposed Modified Expected Cost of Misclassification (MECM) measure for developing resource-oriented software quality classification models. The measure penalizes a model, in terms of costs of misclassifications, if the model predicts more fp modules than can be inspected with the allotted resources. Our analysis is presented in the context of our Rule-Based Classification Modeling (RBCM) technique. An empirical case study of a large-scale software system demonstrates the promising results of using the MECM measure to select an appropriate resource-based rule-based classification model.

Taghi M. Khoshgoftaar is a professor in the Department of Computer Science and Engineering, Florida Atlantic University, and the director of graduate programs and research. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence applications, computer security, computer performance evaluation, data mining, machine learning, statistical modeling, and intelligent data analysis. He has published more than 300 refereed papers in these areas. He is a member of the IEEE, IEEE Computer Society, and IEEE Reliability Society. He was the general chair of the IEEE International Conference on Tools with Artificial Intelligence 2005. Naeem Seliya is an Assistant Professor of Computer and Information Science at the University of Michigan - Dearborn. He received his Ph.D. in Computer Engineering from Florida Atlantic University, Boca Raton, FL, USA in 2005. His research interests include software engineering, data mining and machine learning, application and data security, bioinformatics and computational intelligence. He is a member of IEEE and ACM.
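The exact MECM definition is given in the paper, not reproduced here; the sketch below is only a rough stand-in that conveys the idea of an expected misclassification cost augmented with a penalty when a model flags more modules than the allotted resources can inspect. All numbers are invented:

```python
def expected_cost(tp, fp, fn, tn, c_fp, c_fn, inspect_budget, penalty):
    """Rough stand-in for a resource-aware misclassification cost (NOT the exact
    MECM formula): expected cost of misclassification plus a penalty when the
    model flags more modules than can be inspected."""
    n = tp + fp + fn + tn
    cost = (c_fp * fp + c_fn * fn) / n        # cost of false alarms + missed faults
    predicted_fp = tp + fp                     # modules the model asks us to inspect
    overflow = max(0, predicted_fp - inspect_budget)
    return cost + penalty * overflow / n

# Two candidate rule-based models on the same 1000-module system, with resources
# to inspect at most 120 modules.
model_a = dict(tp=80, fp=60, fn=20, tn=840)   # flags 140 modules
model_b = dict(tp=70, fp=40, fn=30, tn=860)   # flags 110 modules

for name, m in [("A", model_a), ("B", model_b)]:
    c = expected_cost(**m, c_fp=1.0, c_fn=10.0, inspect_budget=120, penalty=5.0)
    print(f"model {name}: cost = {c:.3f}")
```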

19.
Context: The dependencies between individual requirements have an important influence on software engineering activities, e.g., project planning, architecture design, and change impact analysis. Although dozens of requirement dependency types have been suggested in the literature from different points of interest, there is still no evaluation of the applicability of these dependency types in requirements engineering.
Objective: Understanding the effect of these requirement dependencies on software engineering activities is useful but not trivial. In this study, we aimed to first investigate whether the existing dependency types are useful in practice, in particular for change propagation analysis, and then to suggest improvements for dependency classification and definition.
Method: We conducted a case study that evaluated the usefulness and applicability of two well-known generic dependency models covering 25 dependency types. The case study was conducted in a real-world industry project with three participants who offered different perspectives.
Results: Our initial evaluation found that there are a number of overlapping and/or ambiguous dependency types among the current models; five dependency types are particularly useful in change propagation analysis; and practitioners with different backgrounds possess various viewpoints on change propagation. To improve the state of the art, a new dependency model is proposed to tackle the problems identified from the case study and the related literature. The new model classifies dependencies into intrinsic and additional dependencies at the top level, and suggests nine dependency types with precise definitions as its initial set.
Conclusions: Our case study provides insights into requirement dependencies and their effects on change propagation analysis for both research and practice. The resulting new dependency model needs further evaluation and improvement.

20.
Context: Scientific software plays an important role in critical decision making (for example, making weather predictions based on climate models) and in the computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code.
Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software.
Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software.
Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges, and their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them.
Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software, such as oracle problems, when developing testing techniques.

