Similar documents
 20 similar documents found (search time: 15 ms)
1.

Context

Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.

Objective

This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.

Method

We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991–2010).

Results

We identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to the acceptable level and is better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.

Conclusion

ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their adoption. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.
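The review above surveys ML-based effort estimators; as a flavour of what such a model looks like, here is a minimal sketch of analogy-based (k-nearest-neighbour) estimation, one of the classic ML techniques applied to SDEE. All project data and feature names below are hypothetical, not taken from the reviewed studies.

```python
import math

# Hypothetical historical projects: (size in KLOC, team experience 1-5, effort in person-months)
HISTORY = [
    (10.0, 3, 24.0),
    (25.0, 2, 80.0),
    (5.0, 4, 10.0),
    (40.0, 3, 130.0),
    (15.0, 5, 30.0),
]

def estimate_effort(size, experience, k=2):
    """Analogy-based (k-nearest-neighbour) estimate: average the effort
    of the k most similar past projects."""
    # Normalize each feature by its range so both contribute comparably.
    sizes = [p[0] for p in HISTORY]
    exps = [p[1] for p in HISTORY]
    s_rng = max(sizes) - min(sizes)
    e_rng = max(exps) - min(exps)

    def dist(p):
        return math.hypot((p[0] - size) / s_rng, (p[1] - experience) / e_rng)

    neighbours = sorted(HISTORY, key=dist)[:k]
    return sum(p[2] for p in neighbours) / k

print(estimate_effort(12.0, 3))  # averages the two closest analogues
```

Range normalization is one of several defensible choices here; published analogy-based estimators also use z-score scaling or feature weighting.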

2.

The design of gas turbines is a challenging area of cyber-physical systems where complex model-based simulations across multiple disciplines (e.g., performance, aerothermal) drive the design process. As a result, a continuously increasing amount of data is derived during system design. Finding new insights in such data by exploiting various machine learning (ML) techniques is a promising industrial trend since better predictions based on real data result in substantial product quality improvements and cost reduction. This paper presents a method that generates data from multi-paradigm simulation tools, develops and trains ML models for prediction, and deploys such prediction models into an active control system operating at runtime with limited computational power. We explore the replacement of existing traditional prediction modules with ML counterparts with different architectures. We validate the effectiveness of various ML models in the context of three (real) gas turbine bearings using over 150,000 data points for training, validation, and testing. We introduce code generation techniques for automated deployment of neural network models to industrial off-the-shelf programmable logic controllers.


3.
Prediction of stock market trends is considered an important task and has attracted great attention, as predicting stock prices successfully may lead to attractive profits through proper decisions. Stock market prediction is a major challenge owing to non-stationary, noisy, and chaotic data, which makes it difficult for investors to invest their money profitably. Several techniques have been devised to predict stock market trends. This work presents a detailed review of 50 research papers covering methodologies such as the Bayesian model, fuzzy classifiers, Artificial Neural Networks (ANN), Support Vector Machine (SVM) classifiers, Neural Networks (NN), and other machine learning methods for stock market prediction. The reviewed papers are classified based on different prediction and clustering techniques. The research gaps and the challenges faced by existing techniques are listed and elaborated, which can help researchers shape future work. The works are analyzed with respect to the datasets, software tools, performance evaluation measures, and prediction techniques used, and the performance attained by different techniques. The techniques most commonly used for effective stock market prediction are ANN and fuzzy-based techniques. Despite substantial research effort, current stock market prediction techniques still have many limitations. From this survey, it can be concluded that stock market prediction is a very complex task, and different factors should be considered to predict the future of the market more accurately and efficiently.

4.
The concept of laboratories for distance learning (e-learning), with remotely controlled laboratory set-ups or virtual laboratories with different simulations, plays an important role in industrial engineering education and training. Although the concept is not new, a number of open issues remain to be solved. This paper presents the fundamental objectives of learning through distance-learning laboratories as well as the special issues connected with these labs, including their effectiveness. Other important questions are addressed, such as the prerequisites for remotely controlled/virtual labs according to different stakeholders; different architectures are compared; and, finally, evaluations and students' feedback are presented.

5.
In this article, we present a generic model-centric approach for realizing fine-grained dynamic adaptation in software systems by managing and interpreting graph-based models of software at runtime. We implemented this approach as the Graph-based Runtime Adaptation Framework (GRAF), which is particularly tailored to facilitate and simplify the process of evolving and adapting current software towards runtime adaptivity. As a proof of concept, we present case study results that show how to achieve runtime adaptivity with GRAF and sketch the framework's capabilities for facilitating the evolution of real-world applications towards self-adaptive software. The case studies also provide some details of the GRAF implementation and examine the usability and performance of the approach.

6.
The hub location problem (HLP) is a relatively new extension of classical facility location problems. Hubs are facilities that work as consolidation, connecting, and switching points for flows between stipulated origins and destinations. While there are few review papers on hub location problems, the most recent one (Alumur and Kara, 2008. Network hub location problems: The state of the art. European Journal of Operational Research, 190, 1–21) considers only studies on network-type hub location models prior to early 2007. This paper therefore focuses on reviewing the most recent advances in HLP from 2007 onward. A review of all variants of HLPs (i.e., network, continuous, and discrete HLPs) is provided. In particular, mathematical models, solution methods, main specifications, and applications of HLPs are discussed. Furthermore, some case studies illustrating real-world applications of HLPs are briefly introduced. Finally, future research directions and trends are presented.
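To make the cost structure that HLP models optimize concrete, the following sketch computes the routing cost of a single-allocation hub network, where each origin-destination flow travels origin → hub → hub → destination and the inter-hub leg enjoys a discount factor α for economies of scale. The distances, flows, and value of α are hypothetical.

```python
# Symmetric distances between nodes 0..3 (illustrative only).
DIST = {
    (0, 1): 10, (0, 2): 14, (0, 3): 20,
    (1, 2): 6, (1, 3): 12, (2, 3): 8,
}

def d(i, j):
    """Distance lookup that handles symmetry and zero self-distance."""
    if i == j:
        return 0
    return DIST[(min(i, j), max(i, j))]

def routing_cost(flow, assign, alpha=0.6):
    """Total transport cost given an allocation of each node to one hub.
    Inter-hub transport is discounted by alpha (economies of scale)."""
    total = 0.0
    for (i, j), w in flow.items():
        hi, hj = assign[i], assign[j]
        total += w * (d(i, hi) + alpha * d(hi, hj) + d(hj, j))
    return total

flow = {(0, 3): 5, (1, 2): 3}
# Hubs at nodes 1 and 2; node 0 allocated to hub 1, node 3 to hub 2.
assign = {0: 1, 1: 1, 2: 2, 3: 2}
print(round(routing_cost(flow, assign), 1))
```

An HLP solver would search over hub sets and allocations to minimize this objective; the sketch only evaluates one fixed allocation.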

7.
Software and Systems Modeling - Dealing with variability, during Software Product Line Engineering (SPLE), means trying to allow software engineers to develop a set of similar applications based on...

8.
Information Systems and e-Business Management - Business process management (BPM) broadly covers a lifecycle of four distinct phases: design, configuration, enactment, and analysis and evaluation....

9.
[Context and Motivation] Many requirements prioritization approaches have been proposed; however, not all of them have been investigated empirically in real-life settings. As a result, our knowledge of their applicability and actual use is incomplete. [Question/problem] A 2007 systematic review on requirements prioritization mapped out the landscape of proposed prioritization approaches and their prioritization criteria. To understand how this sub-field of requirements engineering has developed since 2007 and what evidence has been accumulated through empirical evaluations, we carried out a literature review that takes as input publications published between 2007 and 2019. [Principal ideas/results] We evaluated 102 papers that proposed and/or evaluated requirements prioritization methods. Our results show that newly proposed requirements prioritization methods tend to be based on fuzzy logic and machine learning algorithms. We also conclude that the Analytical Hierarchy Process is the most accurate and most extensively used requirements prioritization method in industry. However, scalability is still its major limitation when requirements are large in number; we found that machine learning has shown potential to deal with this limitation. Lastly, we found that experiments were the research method most often used to evaluate the various aspects of the proposed prioritization approaches. [Contribution] This paper identified and evaluated requirements prioritization techniques proposed between 2007 and 2019, and derived some trends. Limitations of the proposals and implications for research and practice are identified as well.
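Since the review singles out the Analytical Hierarchy Process, a minimal sketch of its core computation may help: deriving a priority vector from a pairwise comparison matrix using the geometric-mean approximation. The requirements and judgment values below are hypothetical.

```python
import math

# Pairwise comparison matrix for three hypothetical requirements R1..R3:
# M[i][j] states how much more important requirement i is than j
# on Saaty's 1-9 scale (reciprocals below the diagonal).
M = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

def ahp_weights(matrix):
    """Approximate the AHP priority vector via the geometric mean of each
    row, then normalize so the weights sum to 1."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(M)
print([round(w, 3) for w in weights])  # R1 dominates, R3 matters least
```

The geometric-mean method is one common approximation; the exact AHP priority vector is the principal eigenvector of the matrix, and a consistency-ratio check usually accompanies it in practice.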

10.
《Ergonomics》2012,55(4):577-588
Abstract

Early biomechanical spine models represented the trunk muscles as straight-line approximations. Later models have endeavoured to represent muscle curvature around the torso accurately. However, only a few studies have systematically examined the various techniques and the logic underlying curved muscle models. The objective of this review was to systematically categorise curved muscle representation techniques and compare the underlying logic in biomechanical models of the spine. Thirty-five studies met our selection criteria. The most common technique for representing curved muscle paths was the 'via-point' method. Curved muscle geometry was commonly developed from MRI/CT databases and cadaveric dissections, and optimisation/inverse dynamics models were typically used to estimate muscle forces. Several models have attempted to validate their results by comparing their approach with previous studies, but validation against specific tasks has not been achieved. For the future, personalised muscle geometry and person- or task-specific validation of curved muscle models will be necessary to improve model fidelity.

Practitioner Summary: The logic underlying curved muscle representations in spine models is still poorly understood. This literature review systematically categorised the different approaches and evaluated their underlying logic. The findings could direct the future development of curved muscle models toward a better understanding of the biomechanical causal pathways of spine disorders.

11.
Information and communication technology (ICT) pervades every aspect of our daily lives, supporting us in solving tasks and providing information. However, we are facing increasing complexity in ICT due to the interconnectedness and coupling of large-scale distributed systems. One particular challenge in this context is openness, i.e. systems and components are free to join and leave at any time, including those that are faulty or even malicious. In this article, we present a novel concept to master openness by detecting groups of similarly behaving systems in order to identify and finally isolate malicious elements. More precisely, we present a mechanism to cluster groups of systems at runtime and to estimate their contribution to the overall system utility. For evaluation and demonstration purposes, we use the Trusted Desktop Grid (TDG), where the system utility is an averaged speedup in job calculation for all benevolent participants. The TDG exhibits typical Organic Computing characteristics such as self-organisation, adaptive behaviour of heterogeneous entities, and openness. We show that our concept is able to successfully identify groups of systems with undesired behaviour, ranging from freeriding to colluding attacks.

12.
Distributed Java virtual machine (dJVM) systems enable concurrent Java applications to transparently run on clusters of commodity computers. This is achieved by supporting Java's shared-memory model over multiple JVMs distributed across the cluster's computer nodes. In this work, we describe and evaluate selective dynamic diffing and lazy home allocation, two new runtime techniques that enable dJVMs to efficiently support memory sharing across the cluster. Specifically, the two proposed techniques can contribute to reduce the overheads due to message traffic, extra memory space, and high latency of remote memory accesses that such dJVM systems require for implementing their memory-coherence protocol either in isolation or in combination. In order to evaluate the performance-related benefits of dynamic diffing and lazy home allocation, we implemented both techniques in Cooperative JVM (CoJVM), a basic dJVM system we developed in previous work. In subsequent work, we carried out performance comparisons between the basic CoJVM and modified CoJVM versions for five representative concurrent Java applications (matrix multiply, LU, Radix, fast Fourier transform, and SOR) using our proposed techniques. Our experimental results showed that dynamic diffing and lazy home allocation significantly reduced memory sharing overheads. The reduction resulted in considerable gains in CoJVM system's performance, ranging from 9% up to 20%, in four out of the five applications, with resulting speedups varying from 6.5 up to 8.1 for an 8-node cluster of computers. Copyright © 2007 John Wiley & Sons, Ltd.
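The diffing idea behind such memory-coherence protocols can be sketched independently of the JVM: compare a modified memory region against its unmodified "twin" and transmit only the positions that changed. The byte values below are illustrative, not taken from CoJVM.

```python
# Minimal sketch of twin-based diffing for distributed shared memory:
# only the changed positions need to cross the network, instead of the
# whole region.

def make_diff(twin, current):
    """Return [(offset, new_byte), ...] for every position that differs."""
    return [(i, b) for i, (a, b) in enumerate(zip(twin, current)) if a != b]

def apply_diff(twin, diff):
    """Reconstruct the current state from the twin plus the diff."""
    data = bytearray(twin)
    for i, b in diff:
        data[i] = b
    return bytes(data)

twin = bytes([0, 1, 2, 3, 4])
current = bytes([0, 9, 2, 3, 7])
diff = make_diff(twin, current)
print(diff)  # only two of five positions changed
assert apply_diff(twin, diff) == current
```

"Selective" diffing, as the paper's name suggests, would additionally decide at runtime when producing such a diff is worthwhile; that policy layer is omitted here.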

13.
Despite the importance of decision-making (DM) techniques for construction of effective decision models for supplier selection, there is a lack of a systematic literature review for it. This paper provides a systematic literature review on articles published from 2008 to 2012 on the application of DM techniques for supplier selection. By using a methodological decision analysis in four aspects including decision problems, decision makers, decision environments, and decision approaches, we finally selected and reviewed 123 journal articles. To examine the research trend on uncertain supplier selection, these articles are roughly classified into seven categories according to different uncertainties. Under such classification framework, 26 DM techniques are identified from three perspectives: (1) Multicriteria decision making (MCDM) techniques, (2) Mathematical programming (MP) techniques, and (3) Artificial intelligence (AI) techniques. We reviewed each of the 26 techniques and analyzed the means of integrating these techniques for supplier selection. Our survey provides the recommendation for future research and facilitates knowledge accumulation and creation concerning the application of DM techniques in supplier selection.

14.
15.
Software product line engineering is about producing a set of related products that share more commonalities than variabilities. Feature models are widely used for variability and commonality management in software product lines. Feature models are information models in which a set of products is represented as a set of features in a single model. The automated analysis of feature models deals with the computer-aided extraction of information from feature models. The literature on this topic has contributed a set of operations, techniques, tools and empirical results which had not been surveyed until now. This paper provides a comprehensive literature review on the automated analysis of feature models 20 years after their invention. This paper contributes by bringing together previously disparate streams of work to help shed light on this thriving area. We also present a conceptual framework to understand the different proposals as well as to categorise future contributions. We finally discuss the different studies and propose some challenges to be faced in the future.
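One of the analysis operations such surveys typically cover, counting the valid products of a feature model, can be sketched by brute-force enumeration over a toy model. The feature names and the cross-tree constraint below are hypothetical; real tools encode the model as a satisfiability or constraint problem instead of enumerating.

```python
from itertools import product

# Toy feature model: root "Shop" with mandatory "Catalog" (always selected),
# optional "Payment" and "Search", plus the cross-tree constraint
# "Payment requires Search".
OPTIONAL = ["Payment", "Search"]

def valid(cfg):
    """Check the cross-tree constraints for one configuration."""
    if cfg["Payment"] and not cfg["Search"]:  # requires-constraint violated
        return False
    return True

def count_products():
    """Number-of-products operation: enumerate all selections of the
    optional features and count those satisfying the constraints."""
    count = 0
    for bits in product([False, True], repeat=len(OPTIONAL)):
        if valid(dict(zip(OPTIONAL, bits))):
            count += 1
    return count

print(count_products())  # 4 candidate configurations, one ruled out
```

Enumeration is exponential in the number of optional features, which is exactly why the automated-analysis literature maps such operations onto SAT, BDD, or CSP solvers.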

16.
Research on biological visual systems has long been an important source of inspiration for computer vision algorithms. Many computer vision algorithms correspond, to varying degrees, to biological vision research, ranging from purely functional inspiration to physical models used to explain biological observations. The classic view conveyed from visual neuroscience to the computer vision community is the hierarchically layered processing structure of the visual cortex, and this hierarchical structure is precisely the inspiration behind the design of artificial neural networks. Deep neural networks now dominate fields such as computer vision and machine learning, and many neuroscience researchers have begun applying them to the computational modeling of biological visual systems. The multi-layer design of deep neural networks, combined with error back-propagation training, allows them to fit the vast majority of functions. Consequently, while deep neural networks learn the mapping between visual stimuli and neuronal responses and yield the best-performing models to date, their internal units even learn representations matching subunits of the biological visual system. This article introduces neural-network-based encoding models of the visual system for the early visual cortex (e.g., the retina) and the higher visual cortex (e.g., visual area 4 (V4) and the inferior temporal (IT) cortex). The main contents include: 1) concepts and definitions related to visual system models; 2) neural network prediction models of the early visual system; 3) task-driven encoding models of the higher visual cortex. Finally, this article also introduces the latest unsupervised-learning-based neural encoding...

17.
There are differing views on the importance of the differences between the relational model and the entity relationship (ER) model. The actual impact of the model differences on user performance is reported here. A summary of the very few experiments that compared user performance for the ER model and the relational model is presented. The overall result from the experiments is that user performance is usually better, and often significantly better, with the ER model than with the relational model.

18.
Despite the importance of data mining techniques to customer relationship management (CRM), there is a lack of a comprehensive literature review and a classification scheme for it. This is the first identifiable academic literature review of the application of data mining techniques to CRM. It provides an academic database of literature between the period of 2000–2006 covering 24 journals and proposes a classification scheme to classify the articles. Nine hundred articles were identified and reviewed for their direct relevance to applying data mining techniques to CRM. Eighty-seven articles were subsequently selected, reviewed and classified. Each of the 87 selected papers was categorized on four CRM dimensions (Customer Identification, Customer Attraction, Customer Retention and Customer Development) and seven data mining functions (Association, Classification, Clustering, Forecasting, Regression, Sequence Discovery and Visualization). Papers were further classified into nine sub-categories of CRM elements under different data mining techniques based on the major focus of each paper. The review and classification process was independently verified. Findings of this paper indicate that the research area of customer retention received most research attention. Of these, most are related to one-to-one marketing and loyalty programs respectively. On the other hand, classification and association models are the two commonly used models for data mining in CRM. Our analysis provides a roadmap to guide future research and facilitate knowledge accumulation and creation concerning the application of data mining techniques in CRM.
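As a concrete flavour of the association models the review identifies as common in CRM, the following sketch computes the support and confidence of one rule over hypothetical purchase records.

```python
# Hypothetical purchase records, one set of items per customer transaction.
TRANSACTIONS = [
    {"phone", "case"},
    {"phone", "case", "charger"},
    {"phone", "charger"},
    {"case"},
    {"phone", "case"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in TRANSACTIONS) / len(TRANSACTIONS)

def confidence(antecedent, consequent):
    """Of the transactions matching the antecedent, the fraction that
    also match the consequent."""
    return support(antecedent | consequent) / support(antecedent)

# Rule: customers who buy a phone also buy a case.
print(round(support({"phone", "case"}), 2), round(confidence({"phone"}, {"case"}), 2))
```

Algorithms such as Apriori search for all rules whose support and confidence exceed chosen thresholds; the sketch only evaluates one candidate rule.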

19.
Context: Distributed Software Development (DSD) has recently become an active research area. Although considerable research effort has been made in this area, as yet, no agreement has been reached as to an appropriate process model for DSD. Purpose: This paper is intended to identify and synthesize papers that describe process models for distributed software development in the context of overseas outsourcing, i.e. "offshoring". Method: We used a systematic review methodology to search seven digital libraries and one topic-specific conference. Results: We found 27 primary studies describing stage-related DSD process models. Only five of these studies looked into outsourcing to a subsidiary company (i.e. "internal offshoring"). Nineteen primary studies addressed the need for DSD process models. Eight primary studies and three literature surveys described stage-based DSD process models, but only three of these models were empirically evaluated. Conclusion: We need more research aimed at internal offshoring. Furthermore, proposed models need to be empirically validated.

20.
System and software requirements documents play a crucial role in software engineering in that they must both communicate requirements to clients in an understandable manner and define requirements in precise detail for system developers. The benefits of both lists of textual requirements (usually written in natural language) and software engineering models (usually specified in graphical form) can be brought together by combining the two approaches in the specification of system and software requirements documents. If, moreover, textual requirements are generated from models in an automatic or closely monitored form, the effort of specifying those requirements is reduced and the completeness of the specification and the management of the requirements traceability are improved. This paper presents a systematic review of the literature related to the generation of textual requirements specifications from software engineering models.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号