20 similar documents found; search took 46 ms
1.
Mature knowledge allows engineering disciplines to achieve predictable results. Unfortunately, the type of knowledge used in software engineering can be considered to be of relatively low maturity, and developers are guided by intuition, fashion or market-speak rather than by facts or undisputed statements proper to an engineering discipline. Testing techniques determine different criteria for selecting the test cases that will be used as input to the system under examination, which means that the success of testing depends on an effective and efficient selection of test cases. The knowledge for selecting testing techniques should come from studies that empirically justify the benefits and application conditions of the different techniques. This paper analyzes the maturity level of the knowledge about testing techniques by examining existing empirical studies of these techniques. We have analyzed their results and obtained a classification of testing technique knowledge based on its factuality and objectivity, according to four parameters.
2.
Paul Witherell Ian R. Grosse Sundar Krishnamurty Jack C. Wileden 《Advanced Engineering Informatics》2013,27(4):555-565
Semantic technologies are playing an increasingly popular role as a means for advancing the capabilities of knowledge management systems. Among these advancements, researchers have successfully leveraged semantic technologies, and their accompanying techniques, to improve the representation and search capabilities of knowledge management systems. This paper introduces a further application of semantic techniques. We explore semantic relatedness as a means of facilitating the development of more “intelligent” engineering knowledge management systems. Using semantic relatedness quantifications to analyze and rank concept pairs, this novel approach exploits semantic relationships to help identify key engineering relationships, similar to those leveraged in change management systems, in product development processes. As part of this work, we review several different semantic relatedness techniques, including a meronomic technique recently introduced by the authors. We introduce an aggregate measure, termed “An Algorithm for Identifying Engineering Relationships in Ontologies,” or AIERO, as a means to purposely quantify semantic relationships within product development frameworks. To assess its consistency and accuracy, AIERO is tested using three separate, independently developed ontologies. The results indicate AIERO is capable of returning consistent rankings of concept pairs across varying knowledge frameworks. A PCB (printed circuit board) case study then highlights AIERO’s unique ability to leverage semantic relationships to systematically narrow where engineering interdependencies are likely to be found between various elements of product development processes.
3.
4.
Ruili Lang Guofan Shao Bryan C. Pijanowski Richard L. Farnsworth 《Computers & Geosciences》2008,34(12):1877-1885
The quality of remotely sensed land use and land cover (LULC) maps is affected by the accuracy of image data classifications. Various efforts have been made in advancing supervised or unsupervised classification methods to increase the repeatability and accuracy of LULC mapping. This study incorporates a data-assisted labeling approach (DALA) into the unsupervised classification of remotely sensed imagery. The DALA-unsupervised classification algorithm consists of three steps: (1) creation of N spectral-class maps using Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA); (2) development of LULC maps with assistance of reference data; and (3) accuracy assessments of all the LULC maps using independent reference data and selection of one LULC map with the highest accuracy. Classification experiments with a composite image of a Landsat Thematic Mapper (TM) image and an Enhanced Thematic Mapper Plus (ETM+) image suggest that DALA was effective in making unsupervised classification process more objective, automatic, and accurate. A comparison between the DALA-unsupervised classifications and some conventional classifications suggests that the DALA-unsupervised classification algorithm yielded better classification accuracies compared to these conventional approaches. Such a simple, effective approach has not been systematically examined before but has great potential for many applications in the geosciences.
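The labeling and selection steps (2) and (3) of the DALA pipeline can be sketched as follows; this is a hedged illustration, not the authors' implementation. The ISODATA clustering of step (1) is assumed to have already produced candidate spectral-class maps, and all names here (`label_clusters`, `best_map`, the dict-based map encoding) are hypothetical:

```python
from collections import Counter

def label_clusters(cluster_map, reference):
    """Step 2 sketch: give each spectral class the majority land-cover label
    among the reference pixels that fall inside it."""
    votes = {}
    for pixel, cluster in cluster_map.items():
        if pixel in reference:
            votes.setdefault(cluster, Counter())[reference[pixel]] += 1
    return {c: counts.most_common(1)[0][0] for c, counts in votes.items()}

def accuracy(cluster_map, labels, holdout):
    """Step 3 sketch: overall accuracy of a labeled map on independent reference data."""
    hits = sum(labels.get(cluster_map[p]) == lulc for p, lulc in holdout.items())
    return hits / len(holdout)

def best_map(candidate_maps, train_ref, holdout_ref):
    """Label every candidate spectral-class map and keep the most accurate one."""
    scored = []
    for m in candidate_maps:
        lab = label_clusters(m, train_ref)
        scored.append((accuracy(m, lab, holdout_ref), m, lab))
    return max(scored, key=lambda t: t[0])
```

Because labeling and selection are driven entirely by the reference data, the only subjective input left is the reference data itself, which is the sense in which the abstract calls the process "more objective" and "automatic".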
5.
This article introduces the application of equivalence hypothesis testing (EHT) into the Empirical Software Engineering field. Equivalence testing (known as bioequivalence testing in pharmacological studies) is a statistical approach that answers the question "is product T equivalent to some other reference product R within some range $\Updelta$?" The “null hypothesis significance test” approach used traditionally in Empirical Software Engineering seeks to assess evidence for differences between T and R, not equivalence. In this paper, we explain how EHT can be applied in Software Engineering, thereby extending it from its current application within pharmacological studies to Empirical Software Engineering. We illustrate the application of EHT by re-examining the behavior of experts and novices when handling code with side effects compared to side-effect-free code, a study previously investigated using traditional statistical testing. We also re-analyze data from two other previously published software engineering experiments: one dataset compared the comprehension of UML and OML specifications, and the other studied the differences between the specification methods UML-B and B. The application of EHT allows us to extract additional conclusions beyond the previous results. EHT has an important application in Empirical Software Engineering, which motivates its wider adoption and use: EHT can be used to assess the statistical confidence with which we can claim that two software engineering methods, algorithms, or techniques are equivalent.
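The usual operational form of EHT is the TOST (two one-sided tests) procedure, which can be sketched as follows. This is a minimal illustration using a large-sample normal approximation rather than the exact t-based procedure, with `delta` playing the role of the equivalence margin Δ; the function and parameter names are hypothetical:

```python
import math
from statistics import mean, stdev

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tost_equivalence(sample_t, sample_r, delta, alpha=0.05):
    """Two one-sided tests (TOST): T and R are declared equivalent within
    +/- delta if BOTH one-sided nulls (diff <= -delta and diff >= +delta)
    are rejected at level alpha.  Large-sample normal approximation."""
    diff = mean(sample_t) - mean(sample_r)
    se = math.sqrt(stdev(sample_t) ** 2 / len(sample_t)
                   + stdev(sample_r) ** 2 / len(sample_r))
    p_lower = 1.0 - normal_cdf((diff + delta) / se)  # H0: diff <= -delta
    p_upper = normal_cdf((diff - delta) / se)        # H0: diff >= +delta
    p_value = max(p_lower, p_upper)
    return p_value, p_value < alpha
```

Note the asymmetry with the traditional significance test the abstract contrasts against: failing to find a difference is not evidence of equivalence, whereas TOST rejects non-equivalence directly.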
6.
《Software, IEEE》2005,22(4):106-107
Although it has seen some spectacular successes, the software engineering field still has room for improvement. The Technical Council on Software Engineering is committed to advancing the development, application, and adoption of software engineering.
7.
As the need for more complex software systems increases, so does the need for developing systematic and standardized methods for software design and maintenance. Artificial Intelligence can play an important role in this activity, as it may provide efficient, adaptable and customizable solutions. Domain analysis, program representation, process modeling, software testing and software verification are all areas that can benefit from the use of AI techniques, including knowledge acquisition, knowledge representation, problem-solving algorithms and theorem proving. This paper discusses the use of Artificial Intelligence techniques in Software Engineering, as presented at the ICSE-16 workshop on Research Issues in the Intersection between Software Engineering and Artificial Intelligence.
8.
Many statechart-based testing strategies result in specifying a set of paths to be executed through a (flattened) statechart.
These techniques can usually be easily automated so that the tester does not have to go through the tedious procedure of deriving
paths manually to comply with a coverage criterion. The next step is then to take each test path individually and derive test
requirements leading to fully specified test cases. This requires that we determine the system state required for each event/transition
that is part of the path to be tested and the input parameter values for all events and actions associated with the transitions.
We propose here a methodology towards the automation of this procedure, which is based on a careful normalization and analysis
of operation contracts and transition guards written with the Object Constraint Language (OCL). It is illustrated by one case
study that exemplifies the steps of our methodology and provides a first evaluation of its applicability.
The scope of the testing activity depends on what is modeled by the statechart. If the statechart models the behavior of a
single class, then it can be used to support unit testing. If the behavior of a class-cluster, a subsystem or a component
is modeled, then we are concerned with integration testing. If the whole system is modeled, then the focus of statechart-based
testing is system testing.
Lionel C. Briand is on the faculty of the Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada, where he founded
and leads the Software Quality Engineering Laboratory (http://www.sce.carleton.ca/Squall/ Squall.htm). He has been granted
the Canada Research Chair in Software Quality Engineering and is also a visiting professor at the Simula laboratories, University
of Oslo, Norway. Before that he was the software quality engineering department head at the Fraunhofer Institute for Experimental
Software Engineering, Germany.
Dr. Briand also worked as a research scientist for the Software Engineering Laboratory, a consortium of the NASA Goddard Space
Flight Center, CSC, and the University of Maryland. He has been on the program, steering, or organization committees of many
international, IEEE conferences such as ICSE, ICSM, ISSRE, and METRICS. He is the coeditor-in-chief of Empirical Software
Engineering (Springer) and is a member of the editorial board of Systems and Software Modeling (Springer). He was on the board
of IEEE Transactions on Software Engineering from 2000 to 2004.
His research interests include: object-oriented analysis and design, inspections and testing in the context of object-oriented
development, quality assurance and control, project planning and risk analysis, and technology evaluation. Lionel received
the BSc and MSc degrees in geophysics and computer systems engineering from the University of Paris VI, France. He received
the PhD degree in computer science, with high honors, from the University of Paris XI, France.
Yvan Labiche received the BSc in Computer System Engineering, from the graduate school of engineering: CUST (Centre Universitaire des
Science et Techniques, Clermont-Ferrand), France. He completed a Master of fundamental computer science and production systems
in 1995 (Université Blaise Pascal, Clermont Ferrand, France). While doing his Ph.D. in Software Engineering, completed in
2000 at LAAS/CNRS in Toulouse, France, Yvan worked with Aerospatiale Matra Airbus (now EADS Airbus) on the definition of testing
strategies for safety-critical, on-board software, developed using object-oriented technologies.
In January 2001, Dr. Yvan Labiche joined the Department of Systems and Computer Engineering at Carleton University, as an
Assistant Professor. His research interests include: object-oriented analysis and design, software testing in the context
of object-oriented development, and technology evaluation. He is a member of the IEEE.
Jim (Jingfeng) Cui completed his BSc in Industrial Automation Control, from the School of Information and Engineering, Northeastern University,
China. He received a Master of Applied Science (specialization in Software Engineering) in 2004 from the Ottawa-Carleton Institute
of Electrical and Computer Engineering, Ottawa, Canada. While in his graduate study, he was awarded the Ontario Graduate Scholarship
of Science and Technology. He is now a senior Software Architect at Sunyard System & Engineering Co., Ltd., China. His interests
include Object-Oriented Software Development, Quality Assurance, and Content Management Systems.
9.
10.
Test Case Generation as an AI Planning Problem
Adele E. Howe Anneliese von Mayrhauser Richard T. Mraz 《Automated Software Engineering》1997,4(1):77-106
While Artificial Intelligence techniques have been applied to a variety of software engineering applications, the area of automated software testing remains largely unexplored. Yet, test cases for certain types of systems (e.g., those with command language interfaces and transaction based systems) are similar to plans. We have exploited this similarity by constructing an automated test case generator with an AI planning system at its core. We compared the functionality and output of two systems, one based on Software Engineering techniques and the other on planning, for a real application: the StorageTek robot tape library command language. From this, we showed that AI planning is a viable technique for test case generation and that the two approaches are complementary in their capabilities.
11.
Resource oriented selection of rule-based classification models: An empirical case study
The amount of resources allocated for software quality improvements is often not enough to achieve the desired software quality.
Software quality classification models that yield a risk-based quality estimation of program modules, such as fault-prone
(fp) and not fault-prone (nfp), are useful as software quality assurance techniques. Their usefulness is largely dependent on whether enough resources
are available for inspecting the fp modules. Since a given development project has its own budget and time limitations, a resource-based software quality improvement
seems more appropriate for achieving its quality goals. A classification model should provide quality improvement guidance
so as to maximize resource-utilization.
We present a procedure for building software quality classification models from the limited resources perspective. The essence
of the procedure is the use of our recently proposed Modified Expected Cost of Misclassification (MECM) measure for developing
resource-oriented software quality classification models. The measure penalizes a model, in terms of costs of misclassifications,
if it predicts more fp modules than can be inspected with the allotted resources. Our analysis is presented in the context of our
Rule-Based Classification Modeling (RBCM) technique. An empirical case study of a large-scale software system demonstrates
the promising results of using the MECM measure to select an appropriate resource-based rule-based classification model.
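The intuition behind penalizing over-prediction can be sketched as follows. This is a toy cost function, not the authors' MECM formula; the cost constants, the `penalty` rate, and the dictionary fields are all invented for illustration:

```python
def select_model(models, budget, c_fp=1.0, c_fn=10.0, penalty=5.0):
    """Pick the candidate classification model with the lowest total cost:
    the expected misclassification cost (false positives cheap, missed
    fault-prone modules expensive), plus a penalty proportional to how far
    the model's number of predicted fault-prone (fp) modules exceeds the
    number of modules the inspection budget can cover."""
    def total_cost(m):
        base = c_fp * m["false_pos"] + c_fn * m["false_neg"]
        overshoot = max(0, m["predicted_fp"] - budget)
        return base + penalty * overshoot
    return min(models, key=total_cost)
```

The point of the penalty term is that a model whose fp predictions cannot all be inspected wastes its apparent accuracy, so the preferred model can change as the inspection budget changes.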
Taghi M. Khoshgoftaar is a professor of the Department of Computer Science and Engineering, Florida Atlantic University and the Director of the
graduate programs and research. His research interests are in software engineering, software metrics, software reliability
and quality engineering, computational intelligence applications, computer security, computer performance evaluation, data
mining, machine learning, statistical modeling, and intelligent data analysis. He has published more than 300 refereed papers
in these areas. He is a member of the IEEE, IEEE Computer Society, and IEEE Reliability Society. He was the general chair
of the IEEE International Conference on Tools with Artificial Intelligence 2005.
Naeem Seliya is an Assistant Professor of Computer and Information Science at the University of Michigan - Dearborn. He received his Ph.D.
in Computer Engineering from Florida Atlantic University, Boca Raton, FL, USA in 2005. His research interests include software
engineering, data mining and machine learning, application and data security, bioinformatics and computational intelligence.
He is a member of IEEE and ACM.
12.
Wongthongtham Pornpit Chang Elizabeth Dillon Tharam Sommerville Ian 《Knowledge and Data Engineering, IEEE Transactions on》2009,21(8):1205-1217
This paper aims to present an ontology model of software engineering to represent its knowledge. The fundamental knowledge relating to software engineering is well described in the textbook entitled Software Engineering by Sommerville that is now in its eighth edition [1] and the white paper, Software Engineering Body of Knowledge (SWEBOK), by the IEEE [2] upon which software engineering ontology is based. This paper gives an analysis of what software engineering ontology is, what it consists of, and what it is used for in the form of usage example scenarios. The usage scenarios presented in this paper highlight the characteristics of the software engineering ontology. The software engineering ontology assists in defining information for the exchange of semantic project information and is used as a communication framework. Its users are software engineers sharing domain knowledge as well as instance knowledge of software engineering.
13.
Designs almost always require tradeoffs between competing design choices to meet system requirements. We present a framework
for evaluating design choices with respect to meeting competing requirements. Specifically, we develop a model to estimate
the performance of a UML design subject to changing levels of security and fault-tolerance. This analysis gives us a way to
identify design solutions that are infeasible. Multi-criteria decision making techniques are applied to evaluate the remaining
feasible alternatives. The method is illustrated with two examples: a small sensor network and a system for controlling traffic
lights.
Dr. Anneliese Amschler Andrews is Professor and Chair of the Department of Computer Science at the University of Denver. Before that she was the Huie Rogers
Endowed Chair in Software Engineering at Washington State University. Dr. Andrews is the author of a text book and over 130
articles in the area of Software Engineering, particularly software testing and maintenance.
Dr. Andrews holds an MS and PhD from Duke University and a Dipl.-Inf. from the Technical University of Karlsruhe. She served
as Editor-in-Chief of the IEEE Transactions on Software Engineering. She has also served on several other editorial boards
including the IEEE Transactions on Reliability, the Empirical Software Engineering Journal, the Software Quality Journal,
the Journal of Information Science and Technology, and the Journal of Software Maintenance. She was Director of the Colorado
Advanced Software Institute from 1995 to 2002. CASI's mission was to support technology transfer research related to software
through collaborations between industry and academia.
Ed Mancebo studied software engineering at Milwaukee School of Engineering and computer science at Washington State University. His
masters thesis explored applying systematic decision making methods to software engineering problems. He is currently a software
developer at Amazon.com.
Dr. Per Runeson is a professor in software engineering at Lund University, Sweden. His research interests include methods to facilitate,
measure and manage aspects of software quality. He received a PhD from Lund University in 1998 and has industrial experience
as a consulting expert. He is a member of the editorial board of Empirical Software Engineering and several program committees,
and currently has a senior researcher position funded by the Swedish Research Council.
Robert France is currently a Full Professor in the Department of Computer Science at Colorado State University. His research interests
are in the area of Software Engineering, in particular formal specification techniques, software modeling techniques, design
patterns, and domain-specific modeling languages. He is an Editor-in-Chief of the Springer journal on Software and System
Modeling (SoSyM), and is a Steering Committee member and past Steering Committee Chair of the MoDELS/UML conference series.
He was also a member of the revision task forces for the UML 1.x standards.
14.
Tradeoff and Sensitivity Analysis in Software Architecture Evaluation Using Analytic Hierarchy Process
Software architecture evaluation involves evaluating different architecture design alternatives against multiple quality-attributes.
These attributes typically have intrinsic conflicts and must be considered simultaneously in order to reach a final design
decision. AHP (Analytic Hierarchy Process), an important decision making technique, has been leveraged to resolve such conflicts.
AHP can help provide an overall ranking of design alternatives. However, it lacks the capability to explicitly identify the
exact tradeoffs being made and the relative size of these tradeoffs. Moreover, the ranking produced can be sensitive such
that the smallest change in intermediate priority weights can alter the final order of design alternatives. In this paper,
we propose several in-depth analysis techniques applicable to AHP to identify critical tradeoffs and sensitive points in the
decision process. We apply our method to an example of a real-world distributed architecture presented in the literature.
The results are promising in that they make important decision consequences explicit in terms of key design tradeoffs and
the architecture's capability to handle future quality attribute changes. These expose critical decisions which are otherwise
too subtle to be detected in standard AHP results.
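The core AHP machinery the abstract builds on can be sketched as follows, using the row geometric-mean approximation to the principal eigenvector; the paper's in-depth tradeoff and sensitivity analyses are not reproduced, and the example criteria and scores below are invented:

```python
import math

def ahp_weights(pairwise):
    """Priority weights from a square AHP pairwise-comparison matrix,
    via the row geometric-mean approximation to the principal eigenvector."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def rank_alternatives(weights, scores):
    """Weighted-sum ranking: scores[alt][i] is the local priority of the
    design alternative under criterion i."""
    overall = {alt: sum(w * s for w, s in zip(weights, local))
               for alt, local in scores.items()}
    return sorted(overall.items(), key=lambda kv: -kv[1])
```

Re-ranking with slightly perturbed weights is the simplest way to see the sensitivity the abstract describes: a small shift in intermediate priority weights can flip the top-ranked design alternative.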
Liming Zhu is a PhD candidate in the School of Computer Science and Engineering at the University of New South Wales. He is also a member
of the Empirical Software Engineering Group at National ICT Australia (NICTA). He obtained his BSc from Dalian University
of Technology in China. After moving to Australia, he obtained his MSc in computer science from University of New South Wales.
His principal research interests include software architecture evaluation and empirical software engineering.
Aybüke Aurum is a senior lecturer at the School of Information Systems, Technology and Management, University of New South Wales. She
received her BSc and MSc in geological engineering, and MEngSc and PhD in computer science. She also works as a visiting researcher
in National ICT, Australia (NICTA). Dr. Aurum is one of the editors of “Managing Software Engineering Knowledge”, “Engineering
and Managing Software Requirements” and “Value-Based Software Engineering” books. Her research interests include management
of software development process, software inspection, requirements engineering, decision making and knowledge management in
software development. She is on the editorial boards of Requirements Engineering Journal and Asian Academy Journal of Management.
Ian Gorton is a Senior Researcher at National ICT Australia. Until March 2004 he was Chief Architect in Information Sciences and Engineering
at the US Department of Energy's Pacific Northwest National Laboratory. Previously he has worked at Microsoft and IBM, as
well as in other research positions. His interests include software architectures, particularly those for large-scale, high-performance
information systems that use commercial off-the-shelf (COTS) middleware technologies. He received a PhD in Computer Science
from Sheffield Hallam University.
Dr. Ross Jeffery is Professor of Software Engineering in the School of Computer Science and Engineering at UNSW and Program Leader in Empirical
Software Engineering in National ICT Australia Ltd. (NICTA). His current research interests are in software engineering process
and product modeling and improvement, electronic process guides and software knowledge management, software quality, software
metrics, software technical and management reviews, and software resource modeling and estimation. His research has involved
over fifty government and industry organizations over a period of 15 years and has been funded from industry, government and
universities. He has co-authored four books and over one hundred and twenty research papers. He has served on the editorial
board of the IEEE Transactions on Software Engineering, and the Wiley International Series in Information Systems and he is
Associate Editor of the Journal of Empirical Software Engineering. He is a founding member of the International Software Engineering
Research Network (ISERN). He was elected Fellow of the Australian Computer Society for his contribution to software engineering
research.
15.
Deriving products from a Feature Model (FM) for testing Software Product Lines (SPLs) is a hard task. It is important to select a minimum number of products but, at the same time, to consider the coverage of testing criteria such as pairwise, among other factors. Multi-Objective Evolutionary Algorithms (MOEAs) have been successfully applied to such problems. However, designing a solution for this and other software engineering problems can be very difficult, because it is necessary to choose among different search operators and parameters. Hyper-heuristics can help in this task, and have attracted interest in the Search-Based Software Engineering (SBSE) field. Considering the growing adoption of SPLs in industry and the growing demand for SPL testing approaches, this paper introduces a hyper-heuristic approach to automatically derive products for variability testing of SPLs. The approach works with MOEAs and two selection methods, random and based on FRR-MAB (Fitness Rate Rank based Multi-Armed Bandit). It was evaluated with real FMs, and the results show that the proposed approach outperforms the traditional algorithms used in the literature, and that both selection methods present similar performance.
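Pairwise coverage, the testing criterion mentioned above, can be computed for a candidate set of products as follows. This is a minimal sketch assuming a boolean encoding of feature selections with no cross-tree constraints; the function name and encoding are hypothetical:

```python
from itertools import combinations

def pairwise_coverage(products, features):
    """Fraction of all feature-pair value combinations (T/T, T/F, F/T, F/F)
    covered by at least one product; each product maps feature -> bool."""
    needed = {(f, vf, g, vg)
              for f, g in combinations(features, 2)
              for vf in (True, False) for vg in (True, False)}
    covered = {(f, p[f], g, p[g])
               for p in products
               for f, g in combinations(features, 2)}
    return len(covered & needed) / len(needed)
```

Minimizing the number of derived products while maximizing this coverage is exactly the kind of conflicting-objective pair that the MOEAs discussed above are designed to optimize.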
16.
《Knowledge》2006,19(4):235-247
Software testing is the technical kernel of software quality engineering. Developing critical and complex software systems requires not only a complete, consistent and unambiguous design and implementation methods, but also a suitable testing environment that meets certain requirements, particularly in the face of complexity issues. Traditional methods, such as analyzing each requirement and developing test cases to verify correct implementation, are not effective in understanding the software’s overall complex behavior. In that respect, existing approaches to software testing are viewed as time-consuming and insufficient for the dynamism of the modern business environment. This dynamism requires new tools and techniques, which can be employed in tandem with innovative approaches to using and combining existing software engineering methods. This work advocates the use of a recently proposed software engineering paradigm that is particularly suited to the construction of complex and distributed software-testing systems, known as Agent-Oriented Software Engineering. This new methodology provides a basic approach to building agent-based testing frameworks.
17.
Requirements Engineering - Software Product Line Engineering (SPLE) is a promising paradigm for reusing knowledge and artifacts among similar software products. However, SPLE methods and techniques...
18.
Software testing is a widely used means of software quality assurance. Mutation testing is a fault-based software testing method, widely used to evaluate the adequacy of test suites and the effectiveness of software testing techniques. The enormous number of mutants makes mutation testing very costly. This paper proposes a mutant reduction method guided by data-flow analysis (DFSampling), designing heuristic rules and, based on these rules, improving random selection and the path-aware mutant reduction technique (PAMR). The effectiveness of DFSampling was evaluated in an empirical study comparing it with random selection and PAMR; the experimental results show that DFSampling is an effective mutant reduction strategy that improves the efficiency of mutation testing.
19.
In recent years, representation-based face image recognition methods have attracted much attention, for example Sparse Representation based Classification (SRC) and Collaborative Representation based Classification (CRC). These methods use the representation of a single image for recognition and ignore the correlations among a collection of images, so they can suffer from insufficient information. To fully exploit the relationships among multiple face images, a collective representation classification method is proposed. The method maps the set of images to be recognized into a single sparse representation matrix and reconstructs them collectively for each class, classifying each set of face images by a minimum-residual criterion. By representing multiple images simultaneously, the method captures the similarities and differences among images and obtains more information about the same subject, thereby improving recognition accuracy. The advantage is especially pronounced when only profile-face images, and no frontal images, are available; experimental results on two public face image datasets also verify the effectiveness of the method.
20.
Empirical evaluation of optimization algorithms when used in goal-oriented automated test data generation techniques
Man Xiao Mohamed El-Attar Marek Reformat James Miller 《Empirical Software Engineering》2007,12(2):183-239
Software testing is an essential process in software development. Software testing is very costly, often consuming half the
financial resources assigned to a project. The most laborious part of software testing is the generation of test-data. Currently,
this process is principally a manual process. Hence, the automation of test-data generation can significantly cut the total
cost of software testing and the software development cycle in general. A number of automated test-data generation approaches
have already been explored. This paper highlights the goal-oriented approach as a promising approach to devise automated test-data
generators. A range of optimization techniques can be used within these goal-oriented test-data generators, and their respective
characteristics, when applied in this setting, remain relatively unexplored. Therefore, in this paper, a comparative study
of the effectiveness of the most commonly used optimization techniques is conducted.
Man Xiao received a B.S. degree in Space Physics and Electronics Information Engineering from the University of Wuhan, China, and an M.S. degree in Software Engineering from the University of Alberta, Canada. She is now a Software Engineer at a small start-up company in Edmonton, Alberta, Canada. Mohamed El-Attar is a Ph.D. candidate (Software Engineering) at the University of Alberta and a member of the STEAM laboratory. His research interests include Requirements Engineering, in particular with UML and use cases, object-oriented analysis and design, model transformation and empirical studies. Mohamed received a B.S. Engineering in Computer Systems from Carleton University. Marek Reformat received his M.S. degree from the Technical University of Poznan, Poland, and his Ph.D. from the University of Manitoba, Canada. His interests are related to simulation and modeling in time-domain, and evolutionary computing and its application to optimization problems. For 3 years he worked for the Manitoba HVDC Research Centre, Canada, where he was a member of a simulation software development team. Currently, he is with the Department of Electrical and Computer Engineering at the University of Alberta. His research interests lie in the application of Computational Intelligence techniques, such as neuro-fuzzy systems and evolutionary computing, and probabilistic and evidence theories to intelligent data analysis leading to translating data into knowledge. He applies these methods to conduct research in the areas of Software Engineering, Software Quality in particular, and Knowledge Engineering. He was a member of program committees of several conferences related to computational intelligence and evolutionary computing. James Miller received his B.S. and Ph.D. degrees in Computer Science from the University of Strathclyde, Scotland. During this period, he worked on the ESPRIT project GENEDIS on the production of a real-time stereovision system.
Subsequently, he worked at the United Kingdom’s National Electronic Research Initiative on Pattern Recognition as a Principal Scientist, before returning to the University of Strathclyde to accept a lectureship and subsequently a senior lectureship in Computer Science. Initially, during this period, his research interests were in computer vision, and he was a co-investigator on the ESPRIT 2 project VIDIMUS. Since 1993, his research interests have been in software and systems engineering. In 2000, he joined the Department of Electronic and Computer Engineering at the University of Alberta as a full professor and in 2003 became an adjunct professor at the Department of Electrical and Computer Engineering at the University of Calgary. He is the principal investigator in a number of research projects that investigate verification and validation issues of software, embedded and ubiquitous computer systems. He has published over one hundred refereed journal and conference papers on software and systems engineering; he currently serves on the program committee for the IEEE International Symposium on Empirical Software Engineering and Measurement and sits on the editorial board of the Journal of Empirical Software Engineering.