Similar Documents
20 similar documents retrieved.
1.
ABSTRACT

Human–computer interaction (HCI) practice has emerged as a growing research domain within the HCI field. Efforts to transfer HCI practices to industry began in earnest with Nielsen's work on usability engineering, and methods and techniques for designing, evaluating, and implementing interactive systems for human use have continued to emerge since. A systematic mapping study is therefore justified to determine the landscape of HCI practice research. A systematic mapping study method was used to map 142 studies according to research type, topic, and contribution; these were then analyzed to provide an overview of HCI practice research. The objective was, first, to analyze studies on HCI practice and present prominent issues that characterize the HCI practice research landscape and, second, to identify pressing challenges regarding HCI practices in software/systems development companies. The results show that HCI practice research has steadily increased since 2012. The majority of the studies explored focused on evaluation research and largely contributed to evaluation methods or processes. Most of the studies were on design tools and techniques, design methods and contexts, design work and organizational culture, and collaboration and team communication. Interviews, case studies, and surveys were the most prominent research methods. HCI techniques are mostly used during the initial phase of development and during evaluation. HCI practice challenges in companies are mostly process-related and concern the performance of usability and user experience activities. The major challenge appears to be collecting and incorporating user feedback in a timely manner, especially in agile processes. The study also identifies areas needing more research.

2.
Context: Two recent mapping studies intended to verify the current state of replication of empirical studies in Software Engineering (SE) identified two sets of studies: empirical studies actually reporting replications (published between 1994 and 2012), and a second group of studies concerned with definitions, classifications, processes, guidelines, and other research topics or themes about replication work in empirical software engineering research (published between 1996 and 2012). Objective: In this article, our goal is to analyze and discuss the contents of the second set of studies about replications in order to increase our understanding of the current state of work on replication in empirical software engineering research. Method: We applied the systematic literature review method to build a systematic mapping study, in which the primary studies were collected from the two previous mapping studies covering the period 1996–2012, complemented by manual and automatic search procedures that collected articles published in 2013. Results: We analyzed 37 papers reporting studies about replication published in the last 17 years. These papers explore different topics related to concepts and classifications, present guidelines, and discuss theoretical issues that are relevant to our understanding of replication in our field. We also investigated how these 37 papers have been cited in the 135 replication papers published between 1994 and 2012. Conclusions: Replication in SE still lacks a set of standardized concepts and terminology, which has a negative impact on replication work in our field. To improve this situation, it is important that the SE research community engage in an effort to create and evaluate taxonomies, frameworks, guidelines, and methodologies to fully support the development of replications.

3.
Context: Many researchers adopting systematic reviews (SRs) have also published papers discussing problems with the SR methodology and suggestions for improving it. Since guidelines for SRs in software engineering (SE) were last updated in 2007, we believe it is time to investigate whether the guidelines need to be amended in the light of recent research. Objective: To identify, evaluate and synthesize research published by software engineering researchers concerning their experiences of performing SRs and their proposals for improving the SR process. Method: We undertook a systematic review of papers reporting experiences of undertaking SRs and/or discussing techniques that could be used to improve the SR process. Studies were classified with respect to the stage in the SR process they addressed, whether they related to education or problems faced by novices, and whether they proposed the use of textual analysis tools. Results: We identified 68 papers reporting 63 unique studies published in SE conferences and journals between 2005 and mid-2012. The most common criticisms of SRs were that they take a long time, that SE digital libraries are not appropriate for broad literature searches, and that assessing the quality of empirical studies of different types is difficult. Conclusion: We recommend removing the advice to use structured questions to construct search strings and adding advice to use a quasi-gold standard, based on a limited manual search, to assist the construction of search strings and the evaluation of the search process. Textual analysis tools are likely to be useful for inclusion/exclusion decisions and search string construction but require more stringent evaluation. SE researchers would benefit from tools to manage the SR process, but existing tools need independent validation. Quality assessment of studies using a variety of empirical methods remains a major problem.

4.
5.

The population increase in the Middle East and the corresponding decrease in water resources necessitate innovative methods for utilizing and monitoring water resources. Development of remote sensing tools can pave the way for remote, rapid mapping of soil-water content, control of excessive irrigation, and prevention of water waste. This paper describes a series of experiments conducted in the Negev Desert aimed at developing such tools for monitoring soil-water content. Both visible/near-infrared and microwave techniques appear suitable: all provide good correlation with soil-water content measured on the ground. However, the microwave techniques presented here, using a P-band scatterometer and ERS-2 SAR, seem the most promising. Finally, the possibility of optical simulation of the microwave processes is presented in an effort to improve the physical basis for empirical studies. A method is developed for fabricating optical samples that model soils with different water content and surface roughness, and a system for measuring backscattered signals is designed. It is shown that the reflectivity of a layered medium is a non-monotonic function of the water content. The effect of surface roughness on the reflection from a strong buried reflector is also being studied.

6.
To give Intelligent Autonomous Vehicles (IAV) more autonomy and intelligence, with real-time processing capabilities, in their obstacle avoidance behavior, soft computing is needed to bring this behavior closer to that of humans in recognition, learning, adaptation, generalization, reasoning, decision-making, and action. In this paper, pattern classifiers of spatial obstacle avoidance situations using Neural Networks (NN), Fuzzy Logic (FL), Genetic Algorithms (GA) and Adaptive Resonance Theory (ART), individually or in combination, are suggested. These classifiers are based on supervised learning and adaptation paradigms such as Gradient Back-Propagation (GBP), FL, GA and Simplified Fuzzy ArtMap (SFAM), resulting in NN/GBP and FL as Intelligent Systems (IS), and in NN/GA, NN/GA-GBP, NN-FL/GBP and NN-FL-ART/SFAM as Hybrid Intelligent Systems (HIS). A synthesis of the suggested pattern classifiers is then presented, discussing their results and performance as well as Field Programmable Gate Array (FPGA) architectures, characterized by high flexibility and compactness, for their implementation.
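A minimal sketch of the supervised-learning flavor of such a classifier, assuming obstacle-avoidance situations are encoded as fixed-length feature vectors (here, eight discretized range-sensor readings) labeled with an avoidance action; the network topologies, fuzzy rules and GA operators actually used in the paper are not reproduced.

```python
# Illustrative sketch only: a back-propagation-trained MLP classifier for
# obstacle-avoidance situation patterns. The situation encoding and the
# action classes are assumptions for this example, not the paper's.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical dataset: 500 situations, 8 normalized range-sensor readings.
X = rng.uniform(0.0, 1.0, size=(500, 8))

# Hypothetical labeling rule: turn away from the side with the nearest obstacle,
# go straight when everything is far. 0 = straight, 1 = turn left, 2 = turn right.
left, right = X[:, :4].min(axis=1), X[:, 4:].min(axis=1)
y = np.where(np.minimum(left, right) > 0.5, 0, np.where(left < right, 2, 1))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Small MLP trained with gradient back-propagation (GBP in the paper's terms).
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```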

7.
Context: GUI testing is system testing of software that has a graphical user interface (GUI) front-end. Because system testing entails that the entire software system, including the user interface, be tested as a whole, during GUI testing test cases, modeled as sequences of user input events, are developed and executed on the software by exercising the GUI's widgets (e.g., text boxes and clickable buttons). More than 230 articles have appeared in the area of GUI testing since 1991. Objective: In this paper, we study this existing body of knowledge using a systematic mapping (SM). Method: The SM is conducted using the guidelines proposed by Petersen et al. We pose three sets of research questions and define selection and exclusion criteria. From the initial pool of 230 articles, published between 1991 and 2011, our final pool consisted of 136 articles. We systematically develop a classification scheme and map the selected articles to this scheme. Results: We present two types of results. First, we report the demographic and bibliometric trends in this domain, including top-cited articles, active researchers, top venues, and active countries in this research area. Moreover, we derive trends in, for instance, the types of articles, the sources of information used to derive test cases, and the types of evaluations used in articles. Our second major result is a publicly accessible repository that contains all our mapping data, which we plan to update on a regular basis, making it a "live" resource for all researchers. Conclusion: Our SM provides an overview of existing GUI testing approaches and helps spot areas in the field that require more attention from the research community. For example, much work is needed to connect academic model-based techniques with commercially available tools. To this end, studies are needed to compare the state of the art in academic GUI testing techniques and industrial tools.

8.
Context: Due to the complex nature of the software development process, traditional parametric models and statistical methods often appear inadequate to model the increasingly complicated relationship between project development cost and project features (or cost drivers). Machine learning (ML) methods, with several reported successful applications, have gained popularity for software cost estimation in recent years. Data preprocessing has been claimed by many researchers to be a fundamental stage of ML methods; however, very few works have focused on the effects of data preprocessing techniques. Objective: This study aims at an empirical assessment of the effectiveness of data preprocessing techniques on ML methods in the context of software cost estimation. Method: We first conduct a literature survey of recent publications using data preprocessing techniques, followed by a systematic empirical study to analyze the strengths and weaknesses of individual data preprocessing techniques as well as their combinations. Results: Our results indicate that data preprocessing techniques may significantly influence the final prediction, and can sometimes even have negative impacts on the prediction performance of ML methods. Conclusion: In order to reduce prediction errors and improve efficiency, preprocessing techniques must be selected carefully according to the characteristics of the machine learning methods and of the datasets used for software cost estimation.
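A minimal sketch of the kind of comparison the study describes: the same ML estimator evaluated with and without a preprocessing step. The synthetic "project" data, the choice of k-nearest-neighbours regression and the standardization preprocessing are assumptions made for illustration, not the datasets or technique combinations evaluated in the paper.

```python
# Illustrative sketch: compare one ML cost estimator with and without data
# preprocessing. The synthetic data and chosen techniques are stand-ins,
# not the paper's datasets or methods.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
size_kloc = rng.lognormal(mean=2.0, sigma=0.8, size=n)    # project size
team = rng.integers(2, 20, size=n)                         # team size
complexity = rng.uniform(0.5, 2.0, size=n)                 # cost driver
effort = 3.0 * size_kloc ** 1.05 * complexity + 0.5 * team + rng.normal(0, 5, n)

X = np.column_stack([size_kloc, team, complexity])
y = effort

raw = KNeighborsRegressor(n_neighbors=5)
preprocessed = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))

for name, model in [("no preprocessing", raw), ("standardized inputs", preprocessed)]:
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.1f} person-hours")
```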

9.
Context: Identifying suitable components during the software design phase is an important way to obtain more maintainable software. Many methods, including graph-partitioning, clustering-based, CRUD-based, and FCA-based methods, have been proposed to identify components at an early stage of software design. However, most of these methods use classical clustering techniques, which rely on expert judgment. Objective: In this paper, we propose a novel method for component identification, called SBLCI (Search-Based Logical Component Identification), which is based on a genetic algorithm (GA) and follows an iterative scheme to obtain logical components. Method: SBLCI identifies the logical components of a system from its analysis models using a customized GA that takes cohesion and coupling metrics as its fitness function and has four novel guided GA operators based on the cohesive-component concept. In addition, SBLCI follows an iterative scheme: it identifies high-level components in the first iteration, and in subsequent iterations identifies low-level sub-components for each component identified previously. Results: We evaluated the effectiveness of SBLCI on three real-world cases. The results reveal that SBLCI is a better alternative for identifying logical components and sub-components than existing component identification methods.
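A minimal sketch of the search-based idea, assuming a chromosome assigns each class to a component and fitness rewards intra-component dependencies (cohesion) while penalizing inter-component ones (coupling); SBLCI's actual encoding, metrics and guided operators are defined in the paper and are not reproduced here.

```python
# Illustrative sketch of a cohesion/coupling fitness function for search-based
# component identification. The dependency matrix, encoding and weighting are
# assumptions for this example, not SBLCI's actual definitions.
import numpy as np

# deps[i, j] = 1 if class i depends on class j (hypothetical analysis model).
deps = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])

def fitness(assignment: np.ndarray, deps: np.ndarray) -> float:
    """assignment[i] = index of the component that class i belongs to."""
    same = assignment[:, None] == assignment[None, :]
    cohesion = deps[same].sum()     # dependencies kept inside components
    coupling = deps[~same].sum()    # dependencies crossing component boundaries
    return float(cohesion - coupling)   # higher is better

good = np.array([0, 0, 0, 1, 1, 1])    # matches the two dependency clusters
bad = np.array([0, 1, 0, 1, 0, 1])     # scatters related classes
print(fitness(good, deps), fitness(bad, deps))
```

A GA would evolve such assignment vectors with crossover and mutation, keeping the highest-fitness partitions as candidate logical components.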

10.
Context: In the software industry, project managers usually rely on their previous experience to estimate the number of man-hours required for each software project. The accuracy of such estimates is a key factor in the efficient application of human resources. Machine learning techniques such as radial basis function (RBF) neural networks, multi-layer perceptron (MLP) neural networks, support vector regression (SVR), bagging predictors and regression-based trees have recently been applied to estimating software development effort. Some works have demonstrated that the accuracy of software effort estimates strongly depends on the values of the parameters of these methods. In addition, it has been shown that the selection of input features may also have an important influence on estimation accuracy. Objective: This paper proposes and investigates the use of a genetic algorithm to simultaneously (1) select an optimal input feature subset and (2) optimize the parameters of machine learning methods, aiming at a higher accuracy level for software effort estimates. Method: Simulations are carried out using six benchmark data sets of software projects, namely Desharnais, NASA, COCOMO, Albrecht, Kemerer, and Koten and Gray. The results are compared to those obtained by methods proposed in the literature using neural networks, support vector machines, multiple additive regression trees, bagging, and Bayesian statistical models. Results: In all data sets, the simulations show that the proposed GA-based method was able to improve the performance of the machine learning methods, and that it outperforms some methods recently reported in the literature for software effort estimation. Furthermore, the use of the GA for feature selection considerably reduced the number of input features for five of the data sets used in our analysis. Conclusions: The combination of input feature selection and parameter optimization of machine learning methods improves the accuracy of software development effort estimates. In addition, it reduces model complexity, which may help in understanding the relevance of each input feature; some input features can therefore be ignored without loss of estimation accuracy.
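A minimal sketch of the chromosome-decoding step for such a GA, assuming a binary feature mask concatenated with SVR hyperparameters and a cross-validated error as the fitness; the paper's actual encoding, GA operators and error measures are not reproduced here.

```python
# Illustrative sketch: decode a GA chromosome into (feature subset, SVR
# hyperparameters) and score it by cross-validation. The encoding and the
# synthetic data are assumptions for this example, not the paper's.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))                                 # hypothetical project features
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.3, 80)    # hypothetical effort

def fitness(chromosome: np.ndarray) -> float:
    """chromosome = [10 feature-mask bits, log2(C), log2(gamma)] (assumed encoding)."""
    mask = chromosome[:10].astype(bool)
    if not mask.any():
        return -np.inf                      # at least one feature is required
    C, gamma = 2.0 ** chromosome[10], 2.0 ** chromosome[11]
    model = SVR(C=C, gamma=gamma)
    # Higher fitness = lower cross-validated mean absolute error.
    mae = -cross_val_score(model, X[:, mask], y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    return -mae

# Two hand-picked chromosomes instead of a full GA loop, just to show the evaluation.
sparse = np.array([1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3.0, -3.0])   # only informative features
dense = np.array([1] * 10 + [0.0, 0.0], dtype=float)           # all features
print(fitness(sparse), fitness(dense))
```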

11.
Abstract

Marty Nemzow is engaged in a variety of business activities. His company, Network Performance Institute (Miami, FL) provides enterprise network design and improvement consulting services, markets capacity planning and business continuity and network resource management tools to the industry, and develops and markets shrink-wrapped network configuration software tools to big companies and consulting firms. A graduate of Brown University and Harvard Graduate School of Business, Marty also sells books, software and medical products over the Internet and is a columnist for WebServer Online Magazine at http://webserver.cpg.com.

12.
Objective: Facial age estimation, as an emerging biometric recognition technology, has become an important research direction in computer vision. With the rapid development of deep learning, facial age estimation based on deep convolutional neural networks has become a research hotspot. Method: Taking deep-learning-based real-age and apparent-age estimation methods as the subject of study, this paper surveys the literature, analyzes the basic ideas and characteristics of deep-learning-based facial age estimation methods, describes the current state of research, summarizes the key techniques and their limitations, compares the performance of common facial age estimation methods, and outlines future research directions. Results: Although research on deep-learning-based facial age estimation has made great progress, age estimation under unconstrained conditions still cannot meet practical requirements, mainly because of the following difficulties: 1) insufficient prior knowledge is incorporated into facial age estimation; 2) there is a lack of feature representations that capture both global and local facial detail; 3) existing facial age estimation datasets are limited; and 4) multi-scale facial age estimation in real application environments remains a problem. Conclusion: Deep-learning-based facial age estimation has made remarkable progress, but complex real-world application scenarios can easily degrade its performance. This paper provides a comprehensive review of current deep-learning-based facial age estimation techniques to help researchers address the remaining problems.
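A minimal sketch of one common deep-learning formulation for age estimation (classify over discretized age bins, then take the expected age over the softmax output); the tiny backbone and the bin range are assumptions used for illustration and do not represent any specific method from the surveyed literature.

```python
# Illustrative sketch: a tiny CNN that predicts an age distribution over
# discrete bins and returns the expected age. Architecture and bin range are
# assumptions for this example, not a specific surveyed method.
import torch
import torch.nn as nn

class AgeEstimator(nn.Module):
    def __init__(self, num_bins: int = 101):      # ages 0..100
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_bins)
        self.register_buffer("bin_centers", torch.arange(num_bins, dtype=torch.float32))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        probs = self.classifier(h).softmax(dim=1)
        return (probs * self.bin_centers).sum(dim=1)   # expected age per image

model = AgeEstimator()
faces = torch.randn(4, 3, 64, 64)    # hypothetical face crops
print(model(faces))                  # four estimated ages
```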

13.
Context: This systematic mapping review is set in a Global Software Engineering (GSE) context, characterized by a highly distributed environment in which project team members work separately in different countries. This geographic separation creates specific challenges for global communication, coordination and control. Objective: The main goal of this study is to discover the available communication and coordination tools that can support highly distributed teams and how these tools have been applied in GSE, and then to describe and classify the tools so that both practitioners and researchers involved in GSE can make use of the available tool support. Method: We performed a systematic mapping review through a search for studies that answered our research question, "Which software tools (commercial, free or research based) are available to support Global Software Engineering?" Applying a range of related search terms to key electronic databases, selected journals, and conferences and workshops enabled us to extract relevant papers. We then used a data extraction template to classify, extract and record important information about the GSD tools from each paper. This information was synthesized and presented as a general map of the types of GSD tools, their main features, and how each tool was validated in practice. Results: The main result is a list of 132 tools which, according to the literature, have been, or are intended to be, used in global software projects. The classification of these tools includes lists of features for communication, coordination and control, as well as how each tool has been validated in practice. We found that, of the 132 tools, the majority were developed at research centers, and only a small percentage (18.9%) are reported as having been tested outside the initial context in which they were developed. Conclusion: The most common features in the GSE tools included in this study are team activity and social awareness, support for informal communication, support for distributed knowledge management, and interoperability with other tools. Finally, these tools need to be evaluated to verify their external validity, that is, their usefulness in a wider global environment.

14.
Context: The intensive human effort needed to manually manage traceability information has increased interest in semi-automated traceability recovery techniques. In particular, Information Retrieval (IR) techniques have been widely employed over the last ten years to partially automate the traceability recovery process. Aim: Previous studies mainly focused on analyzing the performance of IR-based traceability recovery methods, and several enhancing strategies have been proposed to improve their accuracy. Very few papers investigate how developers (i) use IR-based traceability recovery tools and (ii) analyse the list of suggested links to validate correct links or discard false positives. We focus on this issue and suggest exploiting link count information in IR-based traceability recovery tools to improve developers' performance during a traceability recovery process. Method: Two empirical studies were conducted to evaluate the usefulness of link count information. The studies involved 135 university students who had to perform traceability recovery tasks, with and without link count information, on two software project repositories. We then evaluated the quality of the recovered traceability links in terms of links correctly and erroneously traced by the students. Results: The results indicate that the use of link count information significantly increases the number of correct links identified by the participants. Conclusions: The results can be used to derive guidelines on how to effectively use the traceability recovery approaches and tools proposed in the literature.
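A minimal sketch of the IR side of such a tool: TF-IDF cosine similarity ranks candidate links between source artifacts (e.g., use cases) and target artifacts (e.g., classes), and a per-artifact count of candidates above a cutoff is shown next to each suggestion. The toy artifacts and the way the link count is computed here are assumptions for illustration, not the paper's exact definition.

```python
# Illustrative sketch: rank candidate traceability links with TF-IDF cosine
# similarity and report a simple per-source "link count". The artifacts and
# the cutoff are toy assumptions, not the paper's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

use_cases = {
    "UC1": "user logs in with username and password",
    "UC2": "user adds an item to the shopping cart",
}
classes = {
    "LoginController": "validate username password create session on login",
    "CartService": "add remove item shopping cart compute total",
    "ReportPrinter": "format and print monthly sales report",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(use_cases.values()) + list(classes.values()))
src, tgt = matrix[: len(use_cases)], matrix[len(use_cases):]
sims = cosine_similarity(src, tgt)

CUTOFF = 0.1
for i, uc in enumerate(use_cases):
    candidates = sorted(zip(classes, sims[i]), key=lambda p: p[1], reverse=True)
    link_count = sum(score >= CUTOFF for _, score in candidates)
    print(f"{uc} (candidate links above cutoff: {link_count})")
    for name, score in candidates:
        print(f"  {name}: {score:.2f}")
```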

15.
Context: Software testing is a knowledge-intensive process, and thus Knowledge Management (KM) principles and techniques should be applied to manage software testing knowledge. Objective: This study surveys existing research on KM initiatives in software testing in order to identify the state of the art in the area as well as future research directions. Aspects such as purposes, types of knowledge, technologies and research type are investigated. Method: The mapping study was performed by searching seven electronic databases, considering studies published until December 2013. The initial set comprised 562 studies, from which 13 were selected. For these 13 studies, we performed snowballing and a direct search of the publications of the researchers and research groups that carried them out. Results: From the mapping study, we identified 15 studies addressing KM initiatives in software testing, which were reviewed to extract relevant information on a set of research questions. Conclusions: Although only a few studies addressing KM initiatives in software testing were found, the mapping shows an increasing interest in the topic in recent years. Reuse of test cases is the perspective that has received most attention. From the KM point of view, most of the studies discuss providing automated support for managing testing knowledge by means of a KM system. Moreover, as a main conclusion, the results show that KM is regarded as an important strategy for increasing test effectiveness, as well as for improving the selection and application of suitable techniques, methods and test cases. On the other hand, the inadequacy of existing KM systems appears as the most-cited problem related to applying KM in software testing.

16.
Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be aligned with several business strategies. The broad expansion of the Internet and the global, interconnected economy mean that Web development projects are often characterized by demands such as delivering as soon as possible, reducing time to market, and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques do not always offer satisfactory results. The rise of Agile methodologies and practices has provided useful tools that, combined with Web Engineering techniques, can help to establish a framework for estimating, managing and planning Web development projects. Objective: This paper presents a proposal for estimating, planning and managing Web projects by combining existing Agile techniques with Web Engineering principles, presenting them as a unified framework that uses business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study of a real-life project in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including findings on project planning and estimation as well as on team productivity throughout the project. Conclusion: The framework can be useful for better managing Web-based projects through a continuous, value-based estimation and management process.

17.
Book Reviews     
《EDPACS》2013,47(3)
Abstract

Advanced Auditing: Fundamentals Of EDP And Statistical Audit Technology by Miklos Vasarhelyi and Thomas Lin. Addison-Wesley Publishing (Jacob Way, Reading MA 01867), 1988, 628 pp, 8 appendixes, individual chapter student exercises and bibliographies. Price: $53.95, US; elsewhere in the world, inquire of the publisher.

Audit and Control Of End-User Computing by Larry Rittenberg, Ann Senn, and Martin Bariff. The Institute of Internal Auditors Research Foundation (249 Maitland Avenue, Altamonte Springs FL 37201-4201), 1990, 187 pp. Includes glossary and bibliography. Price: $48.00, including shipping and handling, with supplemental charge for expedited shipment.

18.
Abstracts     
《EDPACS》2013,47(1):16-18
Abstract

The Practitioner's Guide To EDP Auditing, by Jack Mullen. The New York Institute of Finance (2 Broadway, New York NY 10004–2207), 1990; various paginations; glossary. Price: $98 in US; elsewhere inquire; include payment in US funds with order.

Disaster Recovery Planning For Telecommunications, by Leo Wrobel. Artech House (685 Canton St, Norwood MA 02062), 1990, 113 pp; bibliography, glossary. Price: $48 for US orders; $58 elsewhere; include payment in US funds with order.

The PC Virus Control Handbook: A Technical Guide to Detection, Disinfection, and Investigation, Second Edition, by Robert Jacobson. Miller Freeman Publications (PO Box T, Gilroy CA 95021), 1990, 164 pp; bibliography. Price: $24.95 for US orders; elsewhere, inquire; include payment in US funds with order.

Guidelines for Establishing an Information Systems Audit Function, by Leta Fee Higgins. IIA (249 Maitland Ave, Altamonte Springs FL 32701–4201), 45 pp; bibliography, glossary. Price: $20 for US orders; $23 elsewhere; discount for IIA members; include payment in US funds with order.

19.
Context: Service-Orientation (SO) is a rapidly emerging paradigm for the design and development of adaptive and dynamic software systems. Software Product Line Engineering (SPLE) has also gained attention as a promising and successful software reuse paradigm over the last decade and has proven to provide effective solutions for managing the growing complexity of software systems. Objective: This study aims at characterizing and identifying the existing research on employing and leveraging SO and SPLE together. Method: We conducted a systematic mapping study to identify and analyze the related literature. We identified 81 primary studies, dated 2000–2011, and classified them with respect to research focus, type of research, and contribution. Results: The mapping synthesizes the available evidence about the synergy points and integration of SO and SPLE. The analysis shows that the majority of studies focus on service variability modeling and adaptive systems by employing SPLE principles and approaches. In particular, SPLE approaches, especially feature-oriented approaches for variability modeling, have been applied to the design and development of service-oriented systems. Conversely, SO is employed in software product line contexts to realize product lines that reconcile flexibility, scalability and dynamism in product derivation, thereby creating dynamic software product lines. Conclusion: Our study summarizes and characterizes the SO and SPLE topics researchers have investigated over the past decade and identifies promising research directions arising from the synergy generated by integrating methods and techniques from these two areas.

20.
Background: When studying work-related musculoskeletal disorders (WMSDs), various factors (mechanical, organizational, psychophysical, individual) and their interrelationships have been considered important in general models for epidemiologic surveys and for risk assessment and management; hence the need for a "holistic" approach towards MSD prevention. On the other hand, considering the widespread presence of these factors and of WMSDs in many workplaces in both developed and developing countries, there is a strong demand from OSH agencies and operators for "simple" risk assessment and management tools that can also be used by non-experts. Objectives: This paper is one of the main contributions towards the WHO/IEA project for developing a "Toolkit for WMSD prevention" by the IEA Technical Committee on MSDs. The paper focuses on selecting tools at different levels for hazard identification, risk estimation and risk management. The proposals were primarily developed in this context but also derive from other converging efforts, such as ISO TR 12295, published in 2014. Methods and criteria: The proposals are based on two essential criteria: 1) adoption of a step-by-step approach, starting with basic tools and moving to more complex tools only when necessary; 2) factoring in complexity and the presence of multiple influencing factors at every step, although with different degrees of in-depth analysis. Results: The proposals comprise three steps. Step one: identification of preliminary occupational hazards and priority setting via "key-enter" questions; at this step, all potential hazards affecting WMSDs should be considered. Step two: identification of risk factors for WMSDs, consisting of a "quick assessment" aimed at identifying three possible conditions: acceptable/no consequences; critical/redesign urgently needed; or more detailed analysis required. Step three: recognized tools for estimating the risk of WMSDs are used, depending on the outcomes of step two. Examples of such tools include adaptations of the Revised NIOSH Lifting Equation, the Liberty Mutual psychophysical tables, the OCRA Checklist, etc. These tools should adequately cover most of the influencing factors. Relevance to industry: The use of a step-by-step approach and validated risk estimation tools, in accordance with international standards, makes it possible to tackle the challenge of simplifying complexity in the assessment of biomechanical overload conditions and in the prevention of WMSDs in enterprises of all sizes, small businesses, agriculture, and developing countries.
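A minimal sketch of the step-wise triage logic described above, assuming the key-enter and quick-assessment answers have already been collected as booleans; the actual questions and criteria are specified in the toolkit and in ISO TR 12295, and the placeholder conditions below are hypothetical.

```python
# Illustrative sketch of the three-step triage: key-enter screening, then a
# quick assessment that routes to one of three outcomes. The concrete criteria
# here are placeholders, not those of the toolkit or ISO TR 12295.
from enum import Enum

class Outcome(Enum):
    ACCEPTABLE = "acceptable / no consequences"
    CRITICAL = "critical / redesign urgently needed"
    DETAILED_ANALYSIS = "more detailed analysis required (step-three tools)"

def step_one_hazard_present(manual_lifting: bool, repetitive_tasks: bool,
                            awkward_postures: bool) -> bool:
    """Step one: 'key-enter' questions; any relevant hazard triggers step two."""
    return manual_lifting or repetitive_tasks or awkward_postures

def step_two_quick_assessment(all_acceptable_conditions_met: bool,
                              any_critical_condition_present: bool) -> Outcome:
    """Step two: quick assessment routes to one of the three conditions."""
    if any_critical_condition_present:
        return Outcome.CRITICAL
    if all_acceptable_conditions_met:
        return Outcome.ACCEPTABLE
    return Outcome.DETAILED_ANALYSIS    # step three: NIOSH, OCRA, etc.

if step_one_hazard_present(manual_lifting=True, repetitive_tasks=False,
                           awkward_postures=False):
    print(step_two_quick_assessment(all_acceptable_conditions_met=False,
                                    any_critical_condition_present=False))
```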
