Conference Presentations

STARWEST 2008: The Marine Corps Principles of Leadership for Testers

You can have the best tools and processes in the world, but if your staff is not motivated and productive, your testing effort will be, at best, inefficient. Good test managers must also be good leaders. Retired Marine Colonel Rick Craig describes how using the Marine Corps Principles of Leadership can help you become a better leader and, as a result, a better test manager. Learn the difference between leadership and management and why they complement each other. Join in the discussion and share ideas that have helped you motivate your testers (and those that didn't). Also, share your thoughts on what characteristics are associated with leaders and whether you believe that “leaders are made” or “leaders are born”. Rick discusses motivation, morale, training, span of control, immersion time, and promoting the testing discipline within your organization.

Rick Craig, Software Quality Engineering
STARWEST 2008: Test Estimation: Painful or Painless?

As an experienced test manager, Lloyd Roden believes that test estimation is one of the most difficult aspects of test management. You must deal with many unknowns, including dependencies on development activities and the variable quality of the software you test. Lloyd presents seven proven ways he has used to estimate test effort. Some are easy and quick but prone to abuse; others are more detailed and complex but may be more accurate. Lloyd discusses FIA (finger in the air), formula/percentage, historical reference, Parkinson's Law vs. pricing, work breakdown structures, estimation models, and assessment estimation. He shares spreadsheet templates and utilities that you can use and take back to help you improve your estimations. By the end of this session, you might just be thinking that the once painful experience of test estimation can, in fact, be painless.
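
Two of the approaches Lloyd mentions, formula/percentage and work breakdown structures, lend themselves to a quick illustration. The sketch below is a minimal, assumed example; the 35 percent ratio and the task figures are placeholders, not numbers from the session.

```python
# A rough sketch of two of the estimation approaches Lloyd mentions.
# The percentage and task figures below are illustrative assumptions,
# not numbers from the talk.

def percentage_of_development(dev_effort_days: float, test_ratio: float = 0.35) -> float:
    """Formula/percentage method: test effort as a share of development effort."""
    return dev_effort_days * test_ratio

def work_breakdown(task_estimates: dict) -> float:
    """Work breakdown structure method: sum the estimates for individual test tasks."""
    return sum(task_estimates.values())

print(percentage_of_development(200))   # 200 development days -> 70.0 test days
print(work_breakdown({
    "test planning": 5,
    "test design": 15,
    "test execution (two cycles)": 30,
    "defect retest and regression": 10,
}))                                      # -> 60 test days
```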

Lloyd Roden, Grove Consultants
The Myth of Risk Management

Although test managers are tasked with helping manage project risks, risk management practices used on most software projects produce only an illusion of safety. Many software development risks cannot be managed because they are unknown, unquantifiable, uncontrollable, or unmentionable. Rather than planning only for risks that have previously occurred, project and test managers must begin with the assumption that something new will impact their project. The secret to effective risk management is to create mechanisms that provide for early detection of and quick response to such events--not simply to create checklists of problems you've previously seen. Pete McBreen presents risk "insurance" as a better alternative to classic risk management.

Pete McBreen, Software Craftsmanship Inc
STARWEST 2008: Branch Out Using Classification Trees

Classification trees are a structured, visual approach to identify and categorize equivalence class partitions for test objects. They enable testers to create better test cases faster. Classification trees visually document test requirements to make them easy to create and comprehend. Julie Gardiner explains this powerful technique and how it helps all stakeholders understand exactly what is involved in testing and offers an easier way to validate test designs. Using examples, Julie shows you how to create classification trees, how to construct test cases from them, and how they complement other testing techniques in every stage of testing. Julie demonstrates a free classification tree editing tool that helps you build, maintain, display, and use classification trees.
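
As a rough illustration of how test cases fall out of a classification tree, here is a minimal sketch; the tree itself (a hypothetical payment form) and the full-combination strategy are assumptions for demonstration, not Julie's example or the editing tool she shows.

```python
# Minimal sketch: derive test cases from a classification tree by choosing one
# equivalence class from each classification. The tree below (a hypothetical
# payment form) is an illustrative assumption.
from itertools import product

classification_tree = {
    "payment method": ["credit card", "voucher", "bank transfer"],
    "customer type":  ["new", "returning"],
    "amount":         ["below limit", "at limit", "above limit"],
}

# Full combination of leaf classes; in practice a coverage rule (e.g., pairwise)
# or a tool is used to keep the number of cases manageable.
test_cases = [dict(zip(classification_tree, combination))
              for combination in product(*classification_tree.values())]

for number, case in enumerate(test_cases, start=1):
    print(f"TC{number:02d}: {case}")
```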

Julie Gardiner, Grove Consultants
Six Thinking Hats for Software Testers

Our testing is only as good as our thinking, and all too often we are hampered by limiting ideas, poor communication, and pre-set roles and responsibilities. Based on the work of Edward de Bono, the Six Thinking Hats for software testers have helped Julian, and numerous others, work more effectively as testers and managers. The concepts are simple and easy to learn. For instance, we can use them individually while performing reviews and testing, and in groups during team meetings. Each of the six hats has a color representing a direction of thinking: the blue hat provides the overview and keeps us productive, the white hat helps us collect facts, the red hat lets us express intuition and feelings without having to justify them, the yellow hat seeks the best possible outcome, the green hat encourages creative thinking and new ideas, and the black hat helps us discover what might go wrong, not only with the software but also with our tests and our assumptions!

Julian Harty, Google
STARWEST 2008: Telling Your Exploratory Story

What do you say when your manager asks, "How did it go today?" As a test manager, you might say, "I'll check to see how many test cases the team executed today." As a tester with a pile of test cases on your desk, you could say, "I ran 40 percent of these tests today," or "At the rate I'm going I'll be finished with these test cases in 40 days." However, if you're using exploration as part of your testing approach, it might be terrifying to try to give a status report--especially if some project stakeholders think exploratory testing is irresponsible and reckless compared to test cases. So how can you retain the power and freedom of exploration and still give a report that earns your team credibility, respect, and perhaps more autonomy? Jon Bach offers ways for you to explain the critical and creative thinking that makes exploratory testing so powerful.

Jon Bach, LexisNexis
Root Cause Analysis: Dealing with Problems, Not Symptoms

Test managers often choose solutions to problems without sufficient analysis, resulting in a cover-up of the symptom rather than a solution to the underlying problem. Later, the problem may surface again in a different disguise, and we may mishandle it again, just as we did initially. Alon Linetzki describes a simple process you can use to identify the root causes of problems and create an appropriate solution to eliminate them. Alon shows how he enhanced the classic root cause analysis method to create an approach to finding insidious problems in software and processes. His method includes ways to differentiate symptoms from problems, understand the connection between them, and determine the strength and direction of that connection. Alon illustrates this method with data from two testing projects and shares the lessons learned along the way.
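
To make the distinction between symptoms and underlying problems concrete, here is a minimal sketch of tracing a symptom back through weighted cause links; the example graph and the strength values are illustrative assumptions, not Alon's enhanced method.

```python
# Minimal sketch: walk from a symptom back to candidate root causes through
# weighted "is caused by" links. The graph and strengths below are illustrative
# assumptions, not Alon's enhanced method.

CAUSED_BY = {
    "release slipped": [("late defect discovery", 0.8)],
    "late defect discovery": [("no risk-based test prioritization", 0.7),
                              ("unstable test environment", 0.4)],
    "unstable test environment": [("environment shared with development", 0.6)],
}

def root_causes(symptom, strength=1.0, path=()):
    """Yield (cause chain, cumulative strength) for each path ending in a root cause."""
    links = CAUSED_BY.get(symptom)
    if not links:  # nothing deeper recorded: treat this node as a root cause
        yield path + (symptom,), strength
        return
    for cause, link_strength in links:
        yield from root_causes(cause, strength * link_strength, path + (symptom,))

for chain, score in sorted(root_causes("release slipped"), key=lambda item: -item[1]):
    print(" -> ".join(chain), f"(strength {score:.2f})")
```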

Alon Linetzki, The Sela Group
Risk Identification and Mitigation for Testing Projects

Testing projects have a habit of getting into trouble. After years of waterfall development in which testing is the last stage in the development pipeline, there are almost always deadline and budget squeezes that require test managers to attempt to do the impossible. Most of us have had to manage risks as part of the test management process. However, the most plausible mitigation strategy is not always the best one. Geoff Horne presents a method he uses for identifying and assessing risks and then developing mitigation strategies for testing projects. Geoff has successfully used this approach across many different types of businesses and testing projects. His approach is based on evaluating risks and assessing their impacts across the key criteria of resources, productivity, cost, quality, and confidence.
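
A minimal sketch of scoring a single risk across the criteria named above appears below; the weights, the 0-10 impact scale, and the example risk are illustrative assumptions, not Geoff's actual model.

```python
# Minimal sketch: score one risk's impact across the criteria named above.
# The weights, 0-10 impact scale, and example values are illustrative
# assumptions, not Geoff's actual model.

CRITERIA_WEIGHTS = {
    "resources": 0.25,
    "productivity": 0.20,
    "cost": 0.20,
    "quality": 0.25,
    "confidence": 0.10,
}

def risk_exposure(likelihood: float, impacts: dict) -> float:
    """Weighted impact (0-10 per criterion) scaled by likelihood (0-1)."""
    weighted_impact = sum(CRITERIA_WEIGHTS[criterion] * impacts.get(criterion, 0.0)
                          for criterion in CRITERIA_WEIGHTS)
    return likelihood * weighted_impact

# Example: the dedicated test environment may be delivered late.
print(risk_exposure(0.6, {"resources": 7, "productivity": 8, "cost": 5,
                          "quality": 6, "confidence": 4}))
```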

Geoff Horne, iSQA
Test Estimation: Simple Models and Practical Lessons

As software testers, we are regularly asked how long it will take to test a system. Unfortunately, we rarely have the tools to produce an accurate estimate. Tonnvane Wiswell introduces methods for producing better estimates--best guess, an experienced person's best guess, and ways to use past data as a baseline--and the advantages and disadvantages of each. She discusses adaptable formulas that incorporate "buffer time" and risk factors. Finally, Tonnvane presents a real-life example of a testing project with solid time estimates, including an explanation of how team size was determined, how the workflow was designed, what the "actual hours" of testing were, what unexpected items affected the testing time, and how the project permanently changed the company's "test estimation formula."

  • Two simple estimation methods when you don't have historical data
  • How to improve test estimation in your organization
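
The sort of adaptable buffer-and-risk formula Tonnvane describes might look like the minimal sketch below; the 15 percent buffer and 25 percent risk factor are illustrative assumptions, not figures from her project.

```python
# Minimal sketch of a buffer-and-risk adjusted test estimate.
# The buffer and risk multipliers are illustrative assumptions.

def adjusted_estimate(base_hours: float, buffer: float = 0.15, risk_factor: float = 0.25) -> float:
    """Inflate a best-guess estimate by a fixed buffer and a project-specific risk factor."""
    return base_hours * (1 + buffer) * (1 + risk_factor)

print(adjusted_estimate(160))  # 160-hour best guess -> 230.0 hours after buffer and risk
```
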
Tonnvane Wiswell, Total Jobs Group
Test Strategies for the Modern Distributed World

Enterprise application development is quickly evolving with SOA and Web 2.0 taking center stage. Organizational structures are changing, with growing numbers of testing teams employing offshore resources. What do these changes mean to you, and what should you do to prepare? Most testing groups were created based on traditional development processes, traditional application architectures, and traditional organizational structures. As agile enters the mainstream, more change is on the way. Outsourcing, offshore development, and acquisitions continuously change the organizational landscape. Dan Koloski discusses proven, practical approaches for adapting to today's new technologies, new structures, and the modern distributed world. He describes how to communicate effectively across virtual and physical silos as well as ways to adapt your test strategies and execution to component-based applications.

Dan Koloski, Empirix
