1.
Context: A potentially important, but neglected, reason for effort overruns in software projects is selection bias. Selection bias-induced effort overruns occur when proposals are more likely to be accepted, and to lead to actual projects, when they are based on effort estimates that are too low rather than on realistic estimates or estimates that are too high. The effect of this bias may be particularly important in bidding rounds, but it is potentially relevant in all situations where there is effort- or cost-based selection between alternatives.
Objective: To better understand the relevance and management of selection bias effects in software development contexts.
Method: First, we present a statistical model illustrating the relation between selection bias, in bidding and other contexts, and effort overruns. Then, we examine this relation in an experiment with software professionals who estimated and completed a set of development tasks, and we examine relevant field study evidence. Finally, we use a selection bias scenario to assess awareness of the effect of selection bias among software providers.
Results: The results from the statistical model and the experiment demonstrated that selection bias is capable of explaining much of the effort overruns. The field evidence was also consistent with a substantial effect of selection bias on effort overruns, although there are alternative explanations for the findings. We found a low awareness of selection bias among the software providers.
Conclusion: Selection bias is likely to be an important source of effort overruns and should be addressed to reduce problems related to over-optimistic effort estimates.
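The statistical argument can be illustrated with a small Monte Carlo sketch (an illustration of the mechanism, not the authors' exact model): even when every provider's estimate is unbiased, selecting the lowest bid systematically selects underestimates, so winning projects overrun on average.

```python
import random

def mean_winner_overrun(rounds=20_000, bidders=5, true_effort=100.0,
                        noise_sd=20.0, seed=1):
    """Monte Carlo sketch of selection bias in bidding.

    Each round, `bidders` providers produce unbiased but noisy estimates
    of the same task; the lowest estimate wins.  Returns the mean relative
    effort overrun (actual / winning estimate - 1) across winning bids.
    """
    rng = random.Random(seed)
    overruns = []
    for _ in range(rounds):
        # Unbiased Gaussian noise around the true effort, clamped to stay positive.
        estimates = [max(1.0, rng.gauss(true_effort, noise_sd))
                     for _ in range(bidders)]
        winning_bid = min(estimates)  # effort-based selection between alternatives
        overruns.append(true_effort / winning_bid - 1.0)
    return sum(overruns) / len(overruns)
```

With five bidders and 20% estimation noise, the winning bids overrun substantially on average, and the overrun grows with the number of competing bids, even though no individual estimator is biased.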

2.
Context: Replication plays an important role in experimental disciplines, yet there are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment?
Objective: To improve our understanding of SE experiment replication, we propose a classification intended to give experimenters guidance about the types of replication they can perform.
Method: The research approach is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of the typical elements that compose an experimental configuration, (3) identification of different replication purposes, and (4) development of a classification of experiment replications for SE.
Results: We propose a classification of replications which gives SE experimenters guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it accounts for replications ranging from less to more similar to the baseline experiment.
Conclusion: The aim of replication is to verify results, but different types of replication serve different verification purposes and afford different degrees of change. Each replication type helps to discover particular experimental conditions that might influence the results. The proposed classification can be used to identify the changes made in a replication and, based on these, to understand the level of verification achieved.

3.
4.
Ergonomics, 2012, 55(8): 995-1007
Abstract

Workplace illumination is known to affect mood, performance and decision-making. Based on the idea that positive feelings associated with light might influence social judgements in workplaces, we propose that satisfaction with light, as a specific affective response to light, leads to positive judgements of other individuals. In a laboratory experiment (N = 164), participants assessed their satisfaction with light and rated other persons' faces on warmth and competence. Results showed that satisfaction with light positively influenced judgements of others. We replicated the positive relation between satisfaction with light and social judgements in a field study with employees (N = 176). These findings highlight the importance of satisfaction with light for social judgement in workplaces. We discuss theoretical contributions and practical implications concerning the design of settings involving the evaluation of other individuals.

Practitioner Summary: The design of work settings where the evaluation of others takes place is an important topic. A laboratory experiment and a field study demonstrate that satisfaction with workplace illumination influences judgements of others. The results provide interesting possibilities for the design of work settings that involve the evaluation of others.

Abbreviations: ANOVA: analysis of variance; ANSI: American National Standards Institute; C: Celsius; CI: confidence interval; cm: centimetre; EN 12464: Lighting of indoor workplaces (English version); IESNA-RP: Illuminating Engineering Society of North America, Recommended Practice; ISO: International Organization for Standardization; K: kelvin; lx: lux; min: minutes; PANAS: positive and negative affect schedule; Ra: colour rendering index; SD: standard deviation; SE: standard error; WMA: World Medical Association

5.
Context: Two recent mapping studies, intended to establish the current state of replication of empirical studies in Software Engineering (SE), identified two sets of studies: empirical studies actually reporting replications (published between 1994 and 2012), and a second group of studies concerned with definitions, classifications, processes, guidelines, and other research topics or themes about replication work in empirical SE research (published between 1996 and 2012).
Objective: In this article, our goal is to analyze and discuss the contents of the second set of studies about replications, to increase our understanding of the current state of work on replication in empirical software engineering research.
Method: We applied the systematic literature review method to build a systematic mapping study, in which the primary studies were collected by two previous mapping studies covering the period 1996-2012, complemented by manual and automatic search procedures that collected articles published in 2013.
Results: We analyzed 37 papers about replication published in the last 17 years. These papers explore different topics related to concepts and classifications, present guidelines, and discuss theoretical issues relevant to our understanding of replication in our field. We also investigated how these 37 papers have been cited in the 135 replication papers published between 1994 and 2012.
Conclusions: Replication in SE still lacks a set of standardized concepts and terminology, which has a negative impact on replication work in our field. To improve this situation, it is important that the SE research community engage in an effort to create and evaluate taxonomies, frameworks, guidelines, and methodologies to fully support the development of replications.

6.
Context: Many researchers adopting systematic reviews (SRs) have also published papers discussing problems with the SR methodology and suggestions for improving it. Since guidelines for SRs in software engineering (SE) were last updated in 2007, we believe it is time to investigate whether the guidelines need to be amended in the light of recent research.
Objective: To identify, evaluate and synthesize research published by software engineering researchers concerning their experiences of performing SRs and their proposals for improving the SR process.
Method: We undertook a systematic review of papers reporting experiences of undertaking SRs and/or discussing techniques that could be used to improve the SR process. Studies were classified with respect to the stage in the SR process they addressed, whether they related to education or problems faced by novices, and whether they proposed the use of textual analysis tools.
Results: We identified 68 papers reporting 63 unique studies published in SE conferences and journals between 2005 and mid-2012. The most common criticisms of SRs were that they take a long time, that SE digital libraries are not appropriate for broad literature searches, and that assessing the quality of empirical studies of different types is difficult.
Conclusion: We recommend removing the advice to use structured questions to construct search strings, and adding advice to use a quasi-gold standard, based on a limited manual search, to assist the construction of search strings and the evaluation of the search process. Textual analysis tools are likely to be useful for inclusion/exclusion decisions and search string construction but require more stringent evaluation. SE researchers would benefit from tools to manage the SR process, but existing tools need independent validation. Quality assessment of studies using a variety of empirical methods remains a major problem.

7.
Ergonomics, 2012, 55(1): 130-137
This paper addresses a number of issues for work environment intervention (WEI) researchers in light of the mixed results reported in the literature. If researchers emphasise study quality over intervention quality, reviews that exclude case studies with high quality and multifactorial interventions may be vulnerable to ‘quality criteria selection bias’. Learning from ‘failed’ interventions is inhibited by both publication bias and reporting lengths that limit information on relevant contextual and implementation factors. The authors argue for the need to develop evaluation approaches consistent with the complexity of multifactorial WEIs that: a) are owned by and aimed at the whole organisation; and b) include intervention in early design stages where potential impact is highest. Context variety, complexity and instability in and around organisations suggest that attention might usefully shift from generalisable ‘proof of effectiveness’ to a more nuanced identification of intervention elements and the situations in which they are more likely to work as intended.

Statement of Relevance: This paper considers ergonomics interventions from the perspectives of what constitutes quality and 'proof'. It points to limitations of traditional experimental intervention designs and argues that the complexity of organisational change, and the need for multifactorial interventions that reach deep into work processes for greater impact, should be recognised.

8.
Context: Gamification seeks to improve users' engagement, motivation, and performance when carrying out a certain task by incorporating game mechanics and elements, thus making that task more attractive. Much research has studied the application of gamification in software engineering to increase the engagement and results of developers.
Objective: The objective of this paper is to carry out a systematic mapping of the field of gamification in software engineering, in an attempt to characterize the state of the art and identify gaps and opportunities for further research.
Method: We carried out a systematic mapping to find the primary studies in the existing literature, which were then classified and analyzed according to four criteria: the software process area addressed, the gamification elements used, the type of research method followed, and the type of forum in which they were published. We also carried out a subjective evaluation of the studies in terms of methodology, empirical evidence, integration with the organization, and replicability.
Results: The systematic mapping found 29 primary studies, published between January 2011 and June 2014. Most of them focus on software development and, to a lesser extent, requirements, project management, and other support areas. In the main, they consider very simple gamification mechanics such as points and badges, and few provide empirical evidence of the impact of gamification.
Conclusions: Existing research in the field is quite preliminary, and more research effort analyzing the impact of gamification in SE is needed. Future work should look at game mechanics beyond the basic ones and should tackle software process areas that have not been fully studied, such as requirements, project management, maintenance, or testing. Most studies lack the methodological support that would make their proposals replicable in other settings. The integration of gamification with an organization's existing tools is also an important challenge that needs to be taken up in this field.

9.
Context: To develop usable software, we need to understand the users that will interact with the system. Personas is an HCI technique that gathers information about users in order to comprehend their characteristics. This information is used to define fictitious persons on which development should focus. Personas provides an understanding of the user that is often overlooked in SE developments.
Objective: The goal of our research is to modify Personas so that the technique can readily be built into the requirements stage of regular SE developments.
Method: We tried to apply Cooper's version of the Personas technique and found shortcomings in both the definition of the procedure to be enacted and the formalization of the product resulting from each step of the technique. For each of these limitations (11 in total), we devised an improvement and incorporated it into an SE version of Personas. The improved Personas avoids the weaknesses that an average software developer, unfamiliar with HCI techniques, encounters when applying the original Personas.
Results: We aim to improve requirements elicitation through the use of Personas. We have systematized and formalized Personas in the SE tradition in order to build this new version of the technique into the requirements stage, and we have applied our proposal in an application example.
Conclusion: The integration of Personas into the SE requirements stage might improve the understanding of what the software product should do and how it should behave. We have modified the HCI Personas technique to comply with the levels of systematization required by SE, and have enriched the SE requirements process by incorporating Personas activities into requirements activities. Requirements elicitation and requirements analysis are the RE activities most affected by incorporating Personas.

10.
Context: Systematic mapping studies are used to structure a research area, while systematic reviews are focused on gathering and synthesizing evidence. The most recent guidelines for systematic mapping are from 2008. Since that time, many suggestions have been made on how to improve systematic literature reviews (SLRs). There is a need to evaluate how researchers conduct the systematic mapping process and to identify how the guidelines should be updated based on lessons learned from existing systematic maps and SLR guidelines.
Objective: To identify how the systematic mapping process is conducted (including search, study selection, analysis and presentation of data, etc.), to identify potential improvements to the process, and to update the guidelines accordingly.
Method: We conducted a systematic mapping study of systematic maps, also considering some practices from systematic review guidelines (in particular those relating to defining the search and conducting a quality assessment).
Results: In a large number of studies, multiple guidelines are used and combined, which leads to different ways of conducting mapping studies. The reason for combining guidelines was that their recommendations differed.
Conclusion: The most frequently followed guidelines are not sufficient on their own. Hence, there was a need to provide an update on how to conduct systematic mapping studies. New guidelines have been proposed that consolidate existing findings.

11.
This paper presents an accurate method for computing point-set surfaces from input data that suppresses the effect of noise in the resulting surface. This is accomplished by controlling the spatial variation of residual errors between the input data and the resulting point-set surface, and by offsetting any systematic bias. More specifically, the method first reduces random noise in the input data based on a spatial autocorrelation statistic: the Z statistic of Moran's I. The bandwidth of the surface is adjusted until the surface reaches the desired value of the Z statistic corresponding to a given significance level. The method then compensates for potential systematic bias of the resulting surface by offsetting it along computed normal vectors. Computational experiments on various point sets demonstrate that the method leads to an accurate surface with controlled spatial variation of residuals and reduced systematic bias.  相似文献
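The autocorrelation test at the core of the method can be sketched as follows. This is a minimal illustration of Moran's I over residuals with a user-supplied neighbour structure; the paper's full Z statistic additionally needs the null expectation E[I] = -1/(n - 1) and the variance of I, which are omitted here.

```python
def morans_i(values, neighbors):
    """Moran's I of `values`, given a symmetric neighbour list per point
    (binary weights w_ij = 1 for neighbours).

    I substantially above E[I] = -1/(n - 1) indicates positive spatial
    autocorrelation, i.e. spatial structure left in the residuals;
    I near E[I] is consistent with spatially random residuals.
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]        # deviations from the mean
    cross, n_links = 0.0, 0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            cross += dev[i] * dev[j]        # cross-products over neighbour pairs
            n_links += 1                    # total weight W (binary weights)
    return (n / n_links) * cross / sum(d * d for d in dev)
```

For a smooth residual field along a chain of points, I is strongly positive (structure remains); for alternating residuals it approaches -1 (negative autocorrelation).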

12.
Context: Software products have requirements on software quality attributes such as safety and performance. Development teams use various specific techniques to achieve these quality requirements; we call these "Quality Attribute Techniques" (QATs). QATs are used to identify, analyse and control potential product quality problems. Although QATs are widely used in practice, there is no systematic approach to represent, select, and integrate them in existing approaches to software process modelling and tailoring.
Objective: This research aims to provide a systematic approach for selecting and integrating QATs into tailored software process models for projects that develop products with specific product quality requirements.
Method: A selection method is developed to support the choice of appropriate techniques for any quality attribute, across the lifecycle. The selection method is based on three perspectives: (1) risk management; (2) process integration; and (3) cost/benefit, using the Analytic Hierarchy Process (AHP). An industry case study is used to validate the feasibility and effectiveness of the selection method.
Results: The case study demonstrates that the selection method provides a more methodical and effective way to choose QATs for projects targeting a specific quality attribute than the ad hoc selection performed by development teams.
Conclusion: The proposed selection method can be used to systematically choose QATs for projects targeting specific product qualities throughout the software development lifecycle.
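The cost/benefit perspective relies on AHP. A common way to derive priority weights from a reciprocal pairwise-comparison matrix is the row geometric-mean approximation to Saaty's principal-eigenvector method, sketched below (illustrative only; the abstract does not specify which prioritisation variant the authors use):

```python
import math

def ahp_weights(pairwise):
    """Priority vector from a reciprocal pairwise-comparison matrix,
    using the row geometric-mean approximation (Saaty's AHP)."""
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]     # normalised to sum to 1
```

For example, comparing three hypothetical QATs on the cost/benefit criterion with a matrix like `[[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]` ("technique 1 is moderately preferred to 2, strongly to 3") yields weights that rank technique 1 highest.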

13.
Context: According to the search reported in this paper, as of this writing (May 2015), a very large number of papers (more than 70,000) have been published in the area of Software Engineering (SE) since its inception in 1968. Citations are crucial in any research area to position the work and to build on the work of others. The identification and characterization of highly-cited papers are common and regularly reported in various disciplines.
Objective: The objective of this study is to identify the papers in the area of SE that have influenced others the most, as measured by citation count. Studying highly-cited SE papers helps researchers to see the types of approaches and research methods presented and applied in such papers, so as to learn from them and write higher-quality papers that are more likely to receive high citations.
Method: To achieve this objective, we conducted a study, comprising five research questions, to identify and classify the top-100 highly-cited SE papers in terms of two metrics: total number of citations and average annual number of citations.
Results: By total number of citations, the top paper is "A metrics suite for object-oriented design", cited 1817 times and published in 1994. By average annual number of citations, the top paper is "QoS-aware middleware for Web services composition", cited 154.2 times per year on average and published in 2004.
Conclusion: It is important to identify the highly-cited SE papers and to characterize the overall citation landscape of the SE field. We hope that this paper will encourage further discussion in the SE community towards further analysis and formal characterization of highly-cited SE papers.

14.
Ergonomics, 2012, 55(11): 1464-1479
Abstract

Due to ubiquitous computing, knowledge workers work not only in typical work-associated environments (e.g. the office) but also wherever it best suits their schedule or preferences (e.g. the park). In two experiments using laboratory and field methods, we compared decision making in work and non-work environments. We hypothesised that participants make riskier work-related decisions in work-associated environments and riskier non-work-related decisions in non-work-associated environments. Thus, if environment (work vs. non-work) and decision-making task (work-related vs. non-work-related) are incongruent, risk-taking should be lower, as the decision maker might feel the situation is unusual or inappropriate. Although the results do not show that work-associated environments generally encourage riskier work-related decisions (and likewise for non-work), we found environmental effects on decision making when including mood as a moderator.

Practitioner summary: Mobile workers are required to make decisions in various environments. We assumed that decisions are riskier when made in a fitting environment (e.g. work-related decisions in work environments). The results of the two experiments (laboratory and field) show an environmental effect only when mood is included as a moderator.

15.
Objective: Touch input suffers from the "fat finger" problem, target occlusion, and limb fatigue, all of which reduce input accuracy. This paper explores strategies that exploit the built-in capabilities of mobile touch devices to address the difficulty of selecting small targets and the low accuracy of touch input, and compares these strategies.
Method: Drawing on the tilt and motion-acceleration sensing supported by mobile touch devices such as phones and tablets, we empirically examined the performance, characteristics, and applicable scenarios of four target-selection techniques: direct touch, pan-and-magnify, tilt, and attraction.
Results: A target-selection experiment compared the four techniques. The mean selection time, error rate, and subjective rating were (86.06 ms, 62.28%, 1.95) for direct touch, (1327.99 ms, 6.93%, 3.87) for pan-and-magnify, (1666.11 ms, 7.63%, 3.46) for tilt, and (1260.34 ms, 6.38%, 3.74) for attraction.
Conclusion: The three improved techniques exhibited better target-selection capability than direct touch.

16.
17.
Context: Software quality attributes are assessed by employing appropriate metrics. However, the choice of such metrics is not always obvious and is further complicated by the multitude of available metrics. To assist metric selection, several metric properties have been proposed. However, although metrics are often used to assess successive software versions, there is no property that assesses their ability to capture structural changes along a system's evolution.
Objective: We introduce a property, Software Metric Fluctuation (SMF), which quantifies the degree to which a metric score varies due to changes occurring between successive versions of a system. With regard to SMF, metrics can be characterized as sensitive (changes induce high variation in the metric score) or stable (changes induce low variation in the metric score).
Method: The SMF property has been evaluated by: (a) a case study on 20 OSS projects, to assess the ability of SMF to characterize different metrics differently, and (b) a case study on 10 software engineers, to assess SMF's usefulness in the metric selection process.
Results: The results of the first case study suggest that different metrics quantifying the same quality attribute differ in their fluctuation. We also provide evidence that an additional factor related to a metric's fluctuation is the function used to aggregate the metric from the micro to the macro level. The second case study suggested that SMF is capable of helping practitioners in metric selection, since: (a) different practitioners have different perceptions of metric fluctuation, and (b) this perception is less accurate than the systematic approach that SMF offers.
Conclusions: SMF is a useful metric property that can improve the accuracy of metric selection. Based on SMF, we can differentiate metrics by their degree of fluctuation. Such results can provide input to researchers and practitioners in their metric selection processes.
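The abstract does not give SMF's formula. One plausible sketch of a fluctuation score, shown here purely for illustration, is the mean absolute relative change of a metric between successive versions; the paper's exact definition may differ.

```python
def fluctuation(scores):
    """Mean absolute relative change of a metric score across successive
    versions of a system.  Higher values characterize a sensitive metric,
    lower values a stable one.  (Hypothetical definition for illustration;
    the paper's exact SMF formula may differ.)"""
    # Pair each version's score with the next; skip zero baselines.
    pairs = [(a, b) for a, b in zip(scores, scores[1:]) if a != 0]
    return sum(abs(b - a) / abs(a) for a, b in pairs) / len(pairs)
```

Under this sketch, a metric whose score barely moves between releases scores near zero (stable), while one that swings by large fractions scores high (sensitive), which is the distinction SMF is meant to capture.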

18.
Context: Security vulnerabilities discovered later in the development cycle are more expensive to fix than those discovered early, so software developers should strive to discover vulnerabilities as early as possible. Unfortunately, the large size of code bases and lack of developer expertise can make discovering software vulnerabilities difficult. A number of vulnerability discovery techniques are available, each with its own strengths.
Objective: The objective of this research is to aid the selection of vulnerability discovery techniques by comparing the vulnerabilities each detects and comparing their efficiencies.
Method: We conducted three case studies, using three electronic health record systems, to compare four vulnerability discovery techniques: exploratory manual penetration testing, systematic manual penetration testing, automated penetration testing, and automated static analysis.
Results: In our case studies, we found empirical evidence that no single technique discovered every type of vulnerability; the specific set of vulnerabilities identified by one tool was largely orthogonal to that of the other tools. Systematic manual penetration testing found the most design flaws, while automated static analysis found the most implementation bugs. The most efficient technique, in terms of vulnerabilities discovered per hour, was automated penetration testing.
Conclusion: The results show that employing a single technique for vulnerability discovery is insufficient for finding all types of vulnerabilities. Each technique identified only a subset of the vulnerabilities, and these subsets were, for the most part, independent of each other. Our results suggest that, to discover the greatest variety of vulnerability types, at least systematic manual penetration testing and automated static analysis should be performed.

19.
Background: In recent years, the application of artificial intelligence in sleep medicine has rapidly emerged. One of the main concerns of many researchers is the recognition of sleep positions, which enables efficient monitoring of changes in sleeping posture for precise and intelligent adjustment. In sleep monitoring, machine learning can analyze the raw data collected and optimize the algorithm in real time to recognize the sleeping position of the human body.
Methodology: A detailed search of relevant databases was conducted through a systematic search process. We reviewed research published since 2017, focusing on 27 articles on sleep-position recognition.
Results: Through analysis of these articles, we identify several determinants that objectively affect sleeping-posture recognition, including the acquisition of sleep posture data, data pre-processing, recognition algorithms, and validation analysis. Moreover, we analyze the categories of sleeping postures suited to different body types.
Conclusion: A systematic evaluation combining the above determinants provides solutions for system design and the rational selection of recognition algorithms for sleep-posture recognition. Existing machine learning algorithms will need to be regularized and standardized before they can be incorporated into clinical sleep monitoring.

20.
Context: Parametric cost estimation models need to be continuously calibrated and improved to ensure more accurate software estimates and to reflect changing software development contexts. Local calibration, by tuning a subset of model parameters, is a frequent practice when software organizations adopt parametric estimation models, as it increases model usability and accuracy. However, there is a lack of understanding about the cumulative effects of such local calibration practices on the evolution of general parametric models over time.
Objective: This study aims to quantitatively analyze and effectively handle the local bias associated with historical cross-company data, and thereby improve the usability of cross-company datasets for calibrating and maintaining parametric estimation models.
Method: We design and conduct three empirical studies to measure, analyze and address local bias in cross-company datasets: (1) defining a method for measuring the local bias associated with each organization's data subset within the overall dataset; (2) analyzing the impact of local bias on the performance of an estimation model; and (3) proposing a weighted sampling approach to handle local bias. The studies are conducted on the latest COCOMO II calibration dataset.
Results: Our results show that local bias is pervasive in the cross-company dataset and negatively impacts the performance of the parametric model. The local-bias-based weighted sampling technique helps reduce these negative impacts on model performance.
Conclusion: Local bias in cross-company data harms model calibration and adds noise to model maintenance. The proposed local bias measure offers a means to quantify the degree of local bias associated with a cross-company dataset and to assess its influence on parametric model performance. The local-bias-based weighted sampling technique can be applied to trade off and mitigate the risk of significant local bias, which limits the usability of cross-company data for general parametric model calibration and maintenance.
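A minimal sketch of the idea (hypothetical measure and weighting scheme; the paper's COCOMO II-based definitions are more elaborate): measure each organisation's local bias as the gap between its mean estimation error and the cross-company mean, then down-weight records from high-bias organisations when drawing a calibration sample.

```python
import random
from collections import defaultdict

def local_bias(records):
    """records: (org_id, relative_error) pairs.  Local bias of an org is
    the absolute gap between its mean error and the cross-company mean
    error.  (Illustrative measure, not the paper's exact definition.)"""
    by_org = defaultdict(list)
    for org, err in records:
        by_org[org].append(err)
    overall = sum(err for _, err in records) / len(records)
    return {org: abs(sum(errs) / len(errs) - overall)
            for org, errs in by_org.items()}

def draw_calibration_sample(records, k, seed=0):
    """Weighted sampling: records from high-bias organisations are less
    likely to enter the calibration subset."""
    bias = local_bias(records)
    weights = [1.0 / (1.0 + bias[org]) for org, _ in records]
    return random.Random(seed).choices(records, weights=weights, k=k)
```

The inverse weighting `1 / (1 + bias)` is one simple choice; any monotonically decreasing function of the bias score would serve the same trade-off between dataset coverage and local-bias contamination.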
