Test-driven development (TDD) is a software development practice that has been used sporadically for decades. With this practice,
a software engineer cycles minute-by-minute between writing failing unit tests and writing implementation code to pass those
tests. Test-driven development has recently re-emerged as a critical enabling practice of agile software development methodologies.
However, little empirical evidence supports or refutes the utility of this practice in an industrial context. Case studies
were conducted with three development teams at Microsoft and one at IBM that had adopted TDD. The results of the case studies
indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects
that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after
adopting TDD.
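The red–green cycle described above can be illustrated with a minimal sketch. The function, its behavior, and the test are hypothetical examples, not code from the studied teams:

```python
# Red: in TDD, the unit test is written first, against a function that
# does not yet exist, and is run to confirm that it fails.
def test_apply_discount():
    assert apply_discount(200.0, 50) == 100.0

# Green: the engineer then writes the minimal implementation that makes
# the failing test pass.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

# The test now passes; the next cycle would add another failing test
# (e.g., rejecting negative percentages) before extending the code.
test_apply_discount()
```

Each cycle is short, so the growing test suite continuously exercises the implementation, which is the mechanism the case studies associate with lower pre-release defect density.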
Nachiappan Nagappan
is a researcher in the Software Reliability Research group at Microsoft Research. He received his MS and PhD from North Carolina
State University in 2002 and 2005, respectively. His research interests are in software reliability, software measurement
and empirical software engineering.
Dr. E. Michael Maximilien
(aka “max”) is a research staff member at IBM’s Almaden Research Center in San Jose, California. Prior to joining ARC, he
spent ten years at IBM’s Research Triangle Park, N.C., in software development and architecture. He led various small- to
medium-sized teams, designing and developing enterprise and embedded Java™ software; he is a founding member and contributor
to three worldwide Java and UML industry standards. His primary research interests lie in distributed systems and software
engineering, especially Web services and APIs, mashups, Web 2.0, SOA (service-oriented architecture), and Agile methods and
practices. He can be reached through his Web site (maximilien.org) and blog (blog.maximilien.com).
Thirumalesh Bhat
is a Development Manager at Microsoft Corporation. He has worked on several versions of Windows and other commercial software
systems at Microsoft. He is interested in software reliability, testing, metrics and software processes.
Laurie Williams
is an associate professor of computer science at North Carolina State University. She teaches software engineering and software
reliability and testing. Prior to joining NCSU, she worked at IBM for nine years, including several years as a manager of
a software testing department and as a project manager for a large software project. She was one of the founders of the XP
Universe conference in 2001, the first US-based conference on agile software development. She is also the lead author of the
Pair Programming Illuminated book and a co-editor of the Extreme Programming Perspectives book.
Mutation testing has traditionally been used as a defect injection technique to assess the effectiveness of a test suite as
represented by a “mutation score.” Recently, mutation testing tools have become more efficient, and industrial usage of mutation
analysis is experiencing growth. Mutation analysis entails adding or modifying test cases until the test suite is sufficient
to detect as many mutants as possible and the mutation score is satisfactory. The augmented test suite resulting from mutation
analysis may reveal latent faults and provide a stronger test suite for detecting errors that might be injected in the future. Software
engineers often look for guidance on how to augment their test suite using information provided by line and/or branch coverage
tools. As the use of mutation analysis grows, software engineers will want to know how the emerging technique compares with
and/or complements coverage analysis for guiding the augmentation of an automated test suite. Additionally, software engineers
can benefit from an enhanced understanding of efficient mutation analysis techniques. To address these needs for additional
information about mutation analysis, we conducted an empirical study of the use of mutation analysis on two open source projects.
Our results indicate that a focused effort on increasing mutation score leads to a corresponding increase in line and branch
coverage to the point that line coverage, branch coverage and mutation score reach a maximum but leave some types of code
structures uncovered. Mutation analysis guides the creation of additional “common programmer error” tests beyond those written
to increase line and branch coverage. We also found that 74% of our chosen set of mutation operators is useful, on average,
for producing new tests. The remaining 26% of mutation operators did not produce new test cases because their mutants were
detected immediately by the initial test suite, detected indirectly by tests we added to kill other mutants, or could not
be detected by any test.
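The process the abstract describes can be worked by hand on a toy example. Real mutation tools generate mutants automatically; the function and the mutant below are hypothetical and written out explicitly for illustration:

```python
# Function under test: a simple boundary check.
def is_adult(age):
    return age >= 18

# A mutant produced by a relational-operator-replacement operator
# (>= changed to >); here applied by hand for illustration.
def is_adult_mutant(age):
    return age > 18

# An initial suite with full line and branch coverage on is_adult,
# under which the mutant nevertheless survives (both versions agree):
assert is_adult(30) and is_adult_mutant(30)
assert not is_adult(10) and not is_adult_mutant(10)

# Mutation analysis prompts adding a boundary-value test, the kind of
# "common programmer error" test coverage metrics alone do not demand:
assert is_adult(18)             # passes on the original
assert not is_adult_mutant(18)  # detects (kills) the mutant
```

The surviving mutant exposes a gap at the boundary even though line and branch coverage were already maximal, which is the sense in which mutation analysis guides test-suite augmentation beyond coverage tools.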
Ben Smith
is a second year Ph.D. student in Computer Science at North Carolina State University working as an RA under Dr. Laurie Williams.
He received his Bachelor’s degree in Computer Science in May of 2007 and he hopes to receive his doctorate in 2012. He has
begun work on developing SQL Coverage Metrics as a predictive measure of the security of a web application. This fall, he
will begin the doctoral preliminary exam and work as a Testing Manager for the NCSU CSC Senior Design Center, North
Carolina State's capstone course for Computer Science. Finally, he has designed and maintained the websites for the Center
for Open Software Engineering and ESEM 2009.
Laurie Williams
is an Associate Professor in the Computer Science Department of the College of Engineering at North Carolina State University.
She leads the Software Engineering Research group and is the Director of the North Carolina State University Laboratory
for Collaborative System Development. She is also technical co-director of the
Center for Open Software Engineering (COSE) and the area technical director of the Secure Open Systems Initiative (SOSI) at
North Carolina State University. Laurie received her Ph.D. in Computer Science from the University of Utah, her MBA from Duke
University, and her BS in Industrial Engineering from Lehigh University. She worked for IBM for nine years in Raleigh, NC
before returning to academia. Laurie’s research interests include agile software development methodologies and practices,
collaborative/pair programming, software reliability and testing, and software engineering for secure systems development.