Better Software Magazine Articles

The State of the Practice

While software testing focuses on detection rather than prevention, we can argue that it has become a powerful counter-offensive against bugs. We can equally argue that many of today's software practices impede quality. Ross Collard compares these two positions and invites you to join the discussion.

Ross Collard
Scrumdamentalism

It's been said that, over time, charismatic movements often evolve to become "bureaucratic"—focused on a set of standardized procedures that dictate the execution of the processes within the movement. Has Scrum evolved to this point or is there still a place for agility in our processes?

Lee Copeland
Software Longevity Testing: Planning for the Long Haul

How long do you let your software run during testing? An increasing number of software applications are intended to run indefinitely, in an always-on operating environment. And yet, few test plans include more than a brief memory leak test case. Learn how to test for problems due to the passing of time and problems due to cumulative usage.

Steven Woody
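
A longevity check of this kind can start small. The sketch below is a minimal soak-test loop in Python; the exercise_app() routine is a hypothetical stand-in for one round of typical usage, and the third-party psutil package is assumed for memory sampling:

    import time
    import psutil  # third-party: pip install psutil

    def exercise_app():
        """Hypothetical stand-in: drive one round of typical usage here."""

    def soak_test(hours=24, max_growth_bytes=50 * 1024 * 1024):
        proc = psutil.Process()            # process hosting the code under test
        baseline = proc.memory_info().rss  # resident memory before the run
        deadline = time.time() + hours * 3600
        cycles = 0
        while time.time() < deadline:
            exercise_app()
            cycles += 1
            growth = proc.memory_info().rss - baseline
            if growth > max_growth_bytes:
                raise AssertionError(
                    "memory grew %d bytes after %d cycles" % (growth, cycles))

The same loop-and-sample pattern extends to other cumulative resources: file handles, database connections, disk space, and log growth.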
Food for Thought

Ideas about testing can come from many different and unexpected sources, including reductionism, agronomy, cognitive psychology, mycology, and general systems. Michael feasts on Michael Pollan's "The Omnivore's Dilemma" and finds much to whet the tester's appetite for learning about how things work.

Michael Bolton
Five Test Automation Fallacies that Will Make You Sick

Five common fallacies about test automation can leave even the most experienced test and development teams severely ill. If allowed to go unchallenged, these beliefs will almost guarantee the death of an automation effort. The five fallacies are: (1) Automated tests find many bugs (they don't). (2) Manual tests make good automated tests (they don't). (3) You know what the expected results are (often you don't). (4) Checking actual against expected is simple (it isn't). (5) More automated regression tests are always better (they aren't). Join Doug Hoffman to explore these fallacies: why we believe them, how to avoid them, and what to do now if you've based your automation efforts on them. Take back a set of antidotes to each of these fallacies and build a successful test automation framework, or repair the sick one you are living with now.

Douglas Hoffman, Software Quality Methods, LLC.
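
Fallacy 4 is easy to see in code: a strict equality check between expected and actual output fails on benign differences (float rounding, incidental whitespace) while telling you nothing about why. A minimal sketch of a more forgiving comparison, with all names invented for illustration:

    import math

    def results_match(expected, actual, rel_tol=1e-6):
        # Numeric fields: compare within a tolerance, not bit-for-bit.
        if isinstance(expected, float) and isinstance(actual, (int, float)):
            return math.isclose(expected, actual, rel_tol=rel_tol)
        # Text fields: ignore incidental leading/trailing whitespace.
        if isinstance(expected, str) and isinstance(actual, str):
            return expected.strip() == actual.strip()
        return expected == actual

Even this only scratches the surface; a real oracle must also decide which fields to ignore outright, such as timestamps and run-dependent identifiers.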
Integrating Security Testing into the QA Process

Although organizations have vastly increased their efforts to secure operating systems and networks from attackers, most have neglected the security of their applications, making them the weakest link in the overall security chain. By some industry estimates, 75 percent of security attacks now focus on the application layer. All too often, the departmental responsibility for verifying application security is not defined, and security within the SDLC is addressed either too late or not at all. Based on his experience in a Fortune 1000 company, Mike Hryekewicz describes a step-wise strategy for extending the QA department’s role to include security as a quality attribute to verify before an application goes into production. Learn how to deploy a security testing capability within your QA department and how to extend its coverage and activities as the process gains acceptance.

Mike Hryekewicz, Standard Insurance Company
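
One low-cost starting point is to fold hostile-input probes into the existing functional suite. A hedged sketch, where the endpoint, form fields, and payload list are placeholders and the third-party requests package is assumed:

    import requests

    HOSTILE_INPUTS = [
        "' OR '1'='1",                # SQL injection probe
        "<script>alert(1)</script>",  # cross-site scripting probe
        "../../etc/passwd",           # path traversal probe
    ]

    def test_login_rejects_hostile_input():
        for payload in HOSTILE_INPUTS:
            resp = requests.post("https://app.example.com/login",
                                 data={"username": payload, "password": "x"},
                                 timeout=10)
            # Fail gracefully: no server error, and never reflect the
            # payload back unescaped (a cross-site scripting indicator).
            assert resp.status_code < 500
            assert payload not in resp.text

Probes like these are no substitute for a dedicated security assessment, but they give QA a repeatable first layer of coverage.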
Successful Teams are TDD Teams

Test-Driven Development (TDD) is the practice of writing a test before writing the code that implements the tested behavior, thus finding defects earlier. Rob Myers explains the two basic types of TDD: the original unit-level approach used mostly by developers, and the agile-inspired Acceptance Test-Driven Development (ATDD), which involves the entire team. Rob has experienced various difficulties in adopting TDD: developers who don't spend a few extra moments to look for and clean up a new bit of code duplication, inexperienced coaches who confuse developer-style TDD with team-level ATDD, and waffling over the use of TDD, which limits its effectiveness. The resistance (overt or subtle) to these practices that can help developers succeed is deeply rooted in our brains and our cultures.

Rob Myers, Agile Institute
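
At the unit level, the mechanics are simple: write a failing test, then write just enough code to make it pass. A minimal sketch using Python's unittest, with the shipping-cost rules invented for illustration:

    import unittest

    def shipping_cost(weight_kg):
        # Written after, and driven by, the tests below.
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.00 if weight_kg <= 1 else 5.00 + 2.50 * (weight_kg - 1)

    class ShippingCostTest(unittest.TestCase):
        def test_flat_rate_up_to_one_kilogram(self):
            self.assertEqual(shipping_cost(0.5), 5.00)

        def test_surcharge_above_one_kilogram(self):
            self.assertEqual(shipping_cost(3), 10.00)

        def test_rejects_nonpositive_weight(self):
            with self.assertRaises(ValueError):
                shipping_cost(0)

    if __name__ == "__main__":
        unittest.main()

ATDD works the same way one level up: the whole team writes an executable acceptance test for a story before implementation begins.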
Getting Started with Static Analysis

Static analysis is a technique for finding defects in code without executing it. Static analysis tools are easy to adopt because no test cases or manual code reviews are needed. The technology has advanced significantly in the past few years; yet, although its use is increasing, many misconceptions persist about what advanced static analysis tools can and cannot do. Paul Anderson describes the latest breed of static analysis tools, explains how they work, and clarifies their strengths and limitations. He demystifies static analysis jargon: terms such as object-sensitive, context-sensitive, and others. Paul describes how best to use static analysis tools in the software life cycle and how they can make traditional testing activities more effective. He presents data from real case studies to demonstrate the use and effectiveness of these tools in practice.

Paul Anderson, GrammaTech, Inc.
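
These findings are easiest to see in a defect a tool can flag without ever running the code. In the deliberately buggy sketch below, a path-sensitive analyzer reports that fh may be used before assignment whenever the condition is false:

    def read_config(path):
        if path.endswith(".cfg"):
            fh = open(path)
        data = fh.read()   # analyzer finding: 'fh' possibly unbound
        fh.close()         # when the condition above is false
        return data

No test case is needed to surface the problem; the tool reasons over every path through the function, which is what the term path-sensitive describes.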
Better Software Conference 2009: A Software Quality Engineering Maturity Model

You are probably familiar with maturity models for software development. Greg Pope and Ellen Hill describe a corresponding five-stage maturity model for software quality (not just testing) that addresses the challenges organizations face when attempting to improve the quality of their software. How do you go about transforming your organization to improve software quality in today’s better, cheaper, faster world? Greg and Ellen present the maturity levels of software quality organizations: (1) the whiner or know-it-all phase, (2) the writing-documents phase, (3) the measure-the-process phase, (4) the measurement-based improvement phase, and (5) the tools and process automation phase. Learn how to recognize the signs of each maturity level, where and how to start the quality improvement process, how to get buy-in from developers and management, and which tools help predict and measure software quality.

Gregory Pope, Lawrence Livermore National Laboratory
Testing in Turbulent Projects

Turbulent weather systems such as tornadoes are characterized by chaotic, random, and often surprising and powerful pattern changes. Similarly, turbulent software projects are characterized by chaotic, seemingly random project changes that happen unexpectedly and with force. Dealing with turbulence is about dealing with change. Testing teams must contend with continuously changing project requirements, design, team members, business goals, technologies, and organizational structures. Test managers and leaders should not just react to change; instead, they need to learn how to read the warning signs of coming change and seek to discover the source of impending changes. Rob Sabourin shares his experiences organizing projects for testing in highly turbulent situations. Learn how to identify context drivers and establish active context listeners in your organization.

Robert Sabourin, AmiBug.com Inc

