|
A New Paradigm for Collecting and Interpreting Bug Metrics Many software test organizations count bugs; however, most derive little value from the practice, and some metrics can actually harm the quality of their software or their organization. Although valuable insights can be gained from examining find and fix rates or from graphing open bugs over time, such metrics can fool you more easily than inform you. Metrics used for control rather than inquiry tend to promote dysfunctional behavior whenever people know they are being measured. In this session, James Bach examines the subtleties of bug metrics analysis and shows examples of both helpful and misleading metrics from actual projects. Instead of the well-known Goal/Question/Metric paradigm, James presents a less intrusive approach to measurement that he describes as Observe/Inquire/Model. Learn about the dynamics and dangers of measurement and a new approach to improve your metrics and the software you produce.
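To make the find/fix-rate idea concrete, here is a minimal sketch (not from the session) that computes cumulative found, cumulative fixed, and open-bug counts from hypothetical daily totals:

    from itertools import accumulate

    found_per_day = [5, 8, 12, 9, 4, 3, 2]   # hypothetical bugs reported each day
    fixed_per_day = [0, 3, 6, 10, 7, 5, 3]   # hypothetical bugs closed each day

    cumulative_found = list(accumulate(found_per_day))
    cumulative_fixed = list(accumulate(fixed_per_day))
    open_bugs = [f - x for f, x in zip(cumulative_found, cumulative_fixed)]

    for day, (found, fixed, open_count) in enumerate(
            zip(cumulative_found, cumulative_fixed, open_bugs), start=1):
        print(f"day {day}: found={found} fixed={fixed} open={open_count}")

    # A falling open-bug count can mean healthier code -- or simply less testing
    # that week, which is why these numbers invite inquiry rather than control.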
|
James Bach, Satisfice, Inc.
|
|
A "Follow the Sun" Test Automation Strategy In this case study of an award winning project, Andy Redwood describes how his team used "best shoring" of testing services to reduce costs, reuse assets, and get the best from their test automation tools. In an enterprise-wide transformation process at a large investment bank, his team used available infrastructure, technology, tools, and process to reduce business risk from software changes with a new automated regression test suite. With some facts and figures and a little hindsight, you will learn how to provide global, automated testing services on a twenty-four hours a day/seven days a week, on-demand basis. Find out what metrics you need to accurately measure the costs and benefits of a "follow the sun" test automation strategy.
- A successful outsource project that measurably improved business resilience
- The do's and don'ts of offshore testing
|
Andy Redwood, Buttonwood Tree Group
|
|
Implementing and Sustaining a Measurement Program Are you looking to install new measurements at the department or enterprise level? Are parts of your existing measurement program shaky? Starting a measurement program or revitalizing an existing one requires a good road map and checkpoints along the way. Janet Russac offers the fundamentals for establishing an organization-wide measurement program based on defined objectives. Find out about the principles of when to use metrics and when not to use them. Get a proven measurement program implementation strategy from this industry veteran, and take away an understanding of the key steps and attributes of a successful program. Make your measurements even more valuable by incorporating a benchmarking component into your program.
- Key steps to a successful measurement program
- Identification of key indicators of readiness and factors for success
|
Janet Russac, The David Consulting Group
|
|
Integrating Requirements-Based Testing in the Development Process Good feedback of software measurement data is critical for analyzing the data, drawing conclusions, and taking action. Feedback to those involved in the activities being measured also helps validate the data. In this presentation Ben Linders shows examples of how Ericsson Telecommunications delivers feedback at two levels: individual projects and the development center as a whole. Although the basics are similar, the application differs, and the key success factors depend on the level and the audience. At the project level, you will see how the team reviews defect data, including defect classifications and test matrices. At the development center level, you will see how line managers and technical engineers review data and analyze information based on a balanced scorecard approach with measurable goals.
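As a rough illustration of project-level defect feedback (not Ericsson's actual tooling or classification scheme), the sketch below tallies hypothetical defect records by classification and by detection phase:

    from collections import Counter

    # Hypothetical defect records: (classification, phase where it was found)
    defects = [
        ("interface", "function test"),
        ("logic", "code review"),
        ("logic", "system test"),
        ("documentation", "code review"),
        ("interface", "system test"),
    ]

    by_classification = Counter(cls for cls, _ in defects)
    by_phase = Counter(phase for _, phase in defects)

    print("Defects by classification:", dict(by_classification))
    print("Defects by detection phase:", dict(by_phase))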
|
Richard Bender, Technology Builders, Inc.
|
|
Techniques for Measuring Software Quality A software defect is any flaw or imperfection in a software work product, the result of a mistake somewhere in the software process. In this article you will gain an understanding of software measurement processes, practices, and methods for managing the quality of software products, improving the quality of software processes and procedures, and applying quantitative principles to your work.
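One small worked example of applying quantitative principles is defect density; the sketch below uses invented figures and is not drawn from the article itself:

    def defect_density(defects_found: int, lines_of_code: int) -> float:
        """Defects per thousand lines of code (KLOC)."""
        return defects_found / (lines_of_code / 1000)

    print(defect_density(defects_found=42, lines_of_code=60_000))  # 0.7 defects/KLOC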
|
James Rozum, Union Switch & Signal, Inc.
|
|
12 Steps to a Successful Metrics Program Software metrics are an integral part of the state-of-the-practice in software engineering. More and more customers are specifying quality metrics reporting as part of their contractual requirements. Companies are using metrics to better understand, track, control, and predict software projects and products. This paper explains the basic concepts of metrics and measurement theory and how those concepts relate to software.
|
Linda Westfall, The Westfall Team
|
|
STARWEST 2000: Managing the End Game of a Software Project How do you know when a product is ready to ship? QA managers have faced this question for many years. Using the methodology discussed in this presentation, you can take the guesswork out of shipping a product and replace it with key metrics that help you rationally make the right decision. Learn how to estimate, predict, and manage your software project as it gets closer to its release date.
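As a hedged illustration of the kind of readiness metrics the session refers to, this sketch checks a hypothetical defect-arrival trend, fix backlog, and open-critical count against made-up thresholds; it is not the presenter's actual methodology:

    weekly_arrivals = [30, 22, 15, 9, 5]    # new defects found per week (hypothetical)
    weekly_closures = [10, 18, 20, 14, 8]   # defects fixed and verified per week
    open_critical = 2                       # severity-1 defects still open

    arrival_trending_down = all(a >= b for a, b in zip(weekly_arrivals, weekly_arrivals[1:]))
    backlog_shrinking = sum(weekly_closures[-2:]) > sum(weekly_arrivals[-2:])

    ready = arrival_trending_down and backlog_shrinking and open_critical == 0
    print("Ready to ship" if ready else "Keep testing")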
|
Mike Ennis, BMC Software, Inc.
|
|
Measuring the Complexity and Impact of Design Changes Mike Libassi discusses how to use the Weighted Stability Index (WSI) Metrics Model, an adaptation of a U.S. Army method, to measure system design changes and their impact on software releases. Both the original method and the WSI model are presented, along with customization, interpretation of results, and implementation. Learn how to automate this model with current office technology such as Microsoft Excel and Access.
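The abstract does not give the WSI formula itself, so the sketch below shows only the general shape of such a measure, a complexity-weighted share of unchanged modules per release, with hypothetical module names and weights; it is not Libassi's actual model:

    # Hypothetical modules: (name, complexity weight, changed this release?)
    modules = [
        ("billing",   8, True),
        ("reporting", 3, False),
        ("auth",      5, False),
        ("export",    2, True),
    ]

    total_weight = sum(weight for _, weight, _ in modules)
    changed_weight = sum(weight for _, weight, changed in modules if changed)
    stability_index = (total_weight - changed_weight) / total_weight

    print(f"Weighted stability for this release: {stability_index:.2f}")  # 0.44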
|
Mike Libassi, Intel Corporation
|
|
Performance Evaluation and Measurement of Enterprise Applications Today's large-scale enterprise applications are Web-enabled and complex in nature, and many users experience performance problems from day one. Performance evaluation and measurement through extensive testing is the only practical way to surface and address these issues prior to deployment. Learn how to tackle performance and capacity issues with the appropriate testing strategy and a scalable infrastructure and architecture.
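As a small illustration of the measurement side of such testing (the latency distribution and the timed_request stand-in are invented, not part of the session), this sketch computes response-time percentiles from a batch of simulated requests:

    import random
    import statistics

    def timed_request() -> float:
        """Stand-in for issuing one request and returning its latency in ms."""
        return random.gauss(250, 60)  # hypothetical latency distribution

    latencies = sorted(timed_request() for _ in range(1000))
    p50 = statistics.median(latencies)
    p95 = latencies[int(0.95 * len(latencies)) - 1]

    print(f"median={p50:.0f} ms  95th percentile={p95:.0f} ms")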
|
Rakesh Radhakrishnan, Sun Microsystems
|
|
Software Cost Management with COCOMO II COCOMO II updates the 1981 Constructive Cost Model (COCOMO) to address the new ways that software is being developed and managed, including non-sequential process models, applications composition, product line management, distributed development and applications, and rapid application development. Barry Boehm summarizes these trends and shows how COCOMO II and its emerging extensions are addressing them. Learn how COCOMO II can be used for a variety of management decision situations, such as linking tactical project management to strategic productivity and cycle time improvement management via a quantitative metrics-based approach.
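For reference, here is a short sketch of the COCOMO II post-architecture effort equation, effort = A * Size^E * product(effort multipliers) with E = B + 0.01 * sum(scale factors); the constants are the published COCOMO II.2000 calibration values, while the driver ratings below are hypothetical:

    from math import prod

    A, B = 2.94, 0.91                                # COCOMO II.2000 calibration constants
    size_ksloc = 50                                  # estimated size in thousands of SLOC
    scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]   # five scale factor ratings (illustrative)
    effort_multipliers = [1.10, 0.92, 1.00, 1.15]    # subset of cost drivers (illustrative)

    E = B + 0.01 * sum(scale_factors)
    effort_pm = A * size_ksloc ** E * prod(effort_multipliers)

    print(f"Estimated effort: {effort_pm:.0f} person-months")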
|
Barry Boehm, University of Southern California
|