Lessons Learned from Forty-five Years of Software Measurement
Counting is easy. However, what makes measurement really valuable (and really hard to get right) is knowing what to count and what to do with the results. If your organization is mostly tracking resource usage, costs, and schedule data, it is making a big mistake. What about the users? The customers? The overall business strategy? Sharing the lessons he has learned from fighting (and surviving) many software measurement battles, Ed Weller offers a step-by-step approach for implementing a practical and valuable metrics program. After understanding which measures are most important to the business strategy and all stakeholders, the next step is to decide what data supports those measures and how to capture it. With data in hand, you can create simple and informative ways to make the resulting metrics visible and easy to digest. The biggest challenges come next: avoidance, disbelief, and rationalization.
Edward Weller, Integrated Productivity Solutions, LLC
Simple Metrics for Starters
Measurements and metrics are a hot topic again. Theorists often rail against them as meaningless and potentially harmful. Practitioners fear them because they don’t want to be metric-ed out of a job. Managers want them so they can better understand what is happening in the software development lifecycle and try to make their processes more efficient. David Gilbert shares the simple set of measurements and metrics Raymond James has implemented and describes the practical benefits they have gained. By first asking stakeholders what they really wanted to understand and then developing metrics to support their goals, David and his team have made measurement work at their company. They educated the stakeholders on key metrics concepts (first-order measures and subjective relativity) and established a common model everyone agreed to follow.
David Gilbert, Raymond James
Talking Quality to Business: Metrics for Improvement
Are your testing and quality assurance activities adding significant value in the eyes of your stakeholders? Do you have difficulty convincing decision-makers that they need to invest more in improving quality? Selecting metrics that stakeholders understand will get your improvement project or program past the pilot phase and reduce the risk of having it stopped in its tracks. Todd Brasel and Kent McDonald show you how to avoid getting bogged down in the minute details of test results reporting and how to approach quality measurement from a systems perspective. They offer practical lessons about using metrics to promote quality improvement initiatives to upper managers and executives, the people who make the real decisions about quality. Learn about common traps and problems of measurement programs and how to avoid them.
Todd Brasel, Pitney Bowes
Defect Analysis: The Foundation of Process Improvement
Do you have a process in place to analyze defects, identify the defect categories and common pitfalls, and correlate the results to recommended corrective actions? Forced to get more done with less, organizations are increasingly finding themselves in need of an effective defect analysis process. David Oddis describes a systematic defect analysis process to optimize your efforts and enable higher-quality software development. David’s approach promotes collaboration in the post-deployment retrospectives performed by the development/test teams. Join David as he facilitates an open conversation and provides guidance and tips via a real-world walkthrough of the strategy and process he employs to analyze defects. Learn how these findings can lead to opportunities for process improvements in your requirements, design, development, test, and environment domains.
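As a rough illustration of the kind of tally such an analysis depends on (not David's specific strategy), the sketch below groups hypothetical post-deployment defect records by category and lists them Pareto-style; the record fields and category names are assumptions made for the example.

```python
from collections import Counter

# Hypothetical post-deployment defect records, each tagged with the domain
# where the root cause was found (fields and categories are assumptions).
defects = [
    {"id": 101, "category": "requirements"},
    {"id": 102, "category": "design"},
    {"id": 103, "category": "requirements"},
    {"id": 104, "category": "environment"},
    {"id": 105, "category": "development"},
    {"id": 106, "category": "requirements"},
]

# Tally defects per category and print them Pareto-style (most frequent first)
# so the biggest candidates for corrective action surface at the top.
counts = Counter(d["category"] for d in defects)
for category, count in counts.most_common():
    print(f"{category:<12} {count:>2}  ({100 * count / len(defects):.0f}%)")
```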
David Oddis, The College Board
STARWEST 2011: Quantifying the Value of Static Analysis
During the past ten years, static analysis tools have become a vital part of software development for many organizations. However, the question arises, "Can we quantify the benefits of static analysis?" William Oliver presents the results of a study performed at Lawrence Livermore National Laboratory to do just that. They measured the cost of finding software defects using formal testing on a system without static analysis; then, they integrated a static analysis tool into the process and, over a period of time, recalculated the cost of finding software defects. Join William as he reveals the results of their study and discusses the value and benefits of static testing with tools. Learn how commercial and open source analysis tools can perform sophisticated, interprocedural source code analysis over large code bases.
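The underlying measurement reduces to a cost-per-defect calculation. A minimal sketch of that comparison is shown below; the function and the figures in it are illustrative assumptions, not the Lawrence Livermore results.

```python
def cost_per_defect(effort_hours: float, hourly_rate: float, defects_found: int) -> float:
    """Cost of finding one defect: total detection effort divided by defects found."""
    return (effort_hours * hourly_rate) / defects_found

# Illustrative numbers only -- NOT the figures from the study.
testing_only = cost_per_defect(effort_hours=400, hourly_rate=100, defects_found=50)
with_static_analysis = cost_per_defect(effort_hours=280, hourly_rate=100, defects_found=50)

print(f"Formal testing only:  ${testing_only:,.0f} per defect")
print(f"With static analysis: ${with_static_analysis:,.0f} per defect")
```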
William Oliver, Lawrence Livermore National Laboratory
A Holistic Way to Measure Quality
Have your executives ever asked you to measure product quality? Is there a definitive way to measure application quality in relation to customer satisfaction? Have you observed improving or excellent defect counts and, at the same time, heard from customers about software quality issues? If you are in a quandary about quality metrics, Jennifer Bonine may have what you are looking for. Join her to explore the Problems per User Month (PUM) and the Cost of Quality (CoQ) metrics that take a holistic approach to measuring quality and guiding product and process improvements. Learn what data is required to calculate these metrics, discover what they tell you about the quality trends in your products, and learn how to use that information to make strategic improvement decisions. By understanding both measures, you can present to your executives the information they need to answer their questions about quality.
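For orientation, the sketch below uses the commonly cited general definitions of these two metrics: PUM as problems reported per user-month of use, and CoQ as the sum of prevention, appraisal, internal-failure, and external-failure costs. Jennifer's exact calculations may differ, and the numbers here are hypothetical.

```python
def problems_per_user_month(problems_reported: int, active_users: int, months: float) -> float:
    """PUM: problems reported divided by user-months of product use."""
    return problems_reported / (active_users * months)

def cost_of_quality(prevention: float, appraisal: float,
                    internal_failure: float, external_failure: float) -> float:
    """CoQ: everything spent preventing, finding, and recovering from quality problems."""
    return prevention + appraisal + internal_failure + external_failure

# Hypothetical quarter: 120 problems reported by 2,000 users over 3 months.
pum = problems_per_user_month(problems_reported=120, active_users=2_000, months=3)
coq = cost_of_quality(prevention=40_000, appraisal=90_000,
                      internal_failure=60_000, external_failure=150_000)
print(f"PUM = {pum:.3f} problems per user-month; CoQ = ${coq:,.0f}")
```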
Jennifer Bonine, Up Ur Game Learning Solutions
Understanding and Using Code Metrics
Have you heard any of these from your development staff or said them yourself? "Our software and systems are too fragile." "Technical debt is killing us." "We need more time to refactor." Having quality code is great, but you should understand why it matters and specifically what is important in your situation. Joel Tosi begins by defining and discussing some common code metrics: code complexity, coverage, object distance, afferent/efferent coupling, and cohesion. From there, Joel takes you through an application with poor code metrics and shows how this application would be difficult to enhance and extend in the future. Joel wraps up with a discussion about which metrics are applicable for specific situations such as legacy applications, prototypes, and startups. You'll come away from this class with a better understanding of code metrics and how to apply them pragmatically.
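As one concrete example of how the coupling-related metrics are computed, the sketch below uses Robert C. Martin's package metrics: instability I = Ce / (Ca + Ce) from afferent/efferent coupling, and distance from the main sequence D = |A + I - 1|. The package figures are made up, and this is only one of several formulations Joel might cover.

```python
def instability(ca: int, ce: int) -> float:
    """I = Ce / (Ca + Ce): 0 means maximally stable, 1 means maximally unstable."""
    total = ca + ce
    return ce / total if total else 0.0

def distance_from_main_sequence(a: float, i: float) -> float:
    """D = |A + I - 1|: how far a package sits from the balanced 'main sequence'."""
    return abs(a + i - 1.0)

# Hypothetical package: 8 afferent (incoming) and 2 efferent (outgoing)
# dependencies, with 10% of its types abstract.
i = instability(ca=8, ce=2)
d = distance_from_main_sequence(a=0.1, i=i)
print(f"Instability I = {i:.2f}, distance from main sequence D = {d:.2f}")
```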
Joel Tosi, VersionOne, Inc.
Sleep Better at Night: A Release Confidence Metric
A project manager decides a product is good enough to release, that it will be successful in the marketplace or for the business. The manager is basing this judgment on confidence in the product. Confidence is a simple word, yet it is an extraordinarily intangible measure. Confidence drives a huge number of software releases each day. Can our confidence be quantified? Can it be measured? Terry Morrish thinks so and shares a formula for measuring release confidence by combining measures from the current development cycle with those from past releases and from client feedback. The Release Confidence metric can help predict the number of clients who will be affected by post-release problems and how much time and money will be spent on maintenance and rework. By employing this approach, project managers can have a quantitative picture of release risk, providing for a more informed decision process and a better night's sleep.
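Terry's actual formula is not reproduced here. Purely as a hypothetical sketch of how such a composite score might be assembled, the snippet below blends three normalized inputs (current-cycle results, past-release history, and client feedback) with assumed weights.

```python
def release_confidence(current_cycle: float, past_releases: float,
                       client_feedback: float,
                       weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted blend of three 0..1 inputs; the weights are assumptions for illustration."""
    w_current, w_past, w_client = weights
    return w_current * current_cycle + w_past * past_releases + w_client * client_feedback

# Hypothetical inputs: 0.85 from current-cycle test results, 0.70 from past-release
# escape history, 0.90 from recent client feedback.
score = release_confidence(current_cycle=0.85, past_releases=0.70, client_feedback=0.90)
print(f"Release confidence: {score:.2f}")  # compare against an agreed release threshold
```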
Terry Morrish, Synacor
Questioning Measurement
When we consciously measure something, we try to measure precisely and often assume that our measurements are accurate and useful. However, software development and testing activities are not subject to the same kinds of quantitative measurements and precise predictions we find in physics. Instead, our work is like the social sciences, in which complex interactions between people and systems make measurement difficult and precise prediction impossible. Michael Bolton argues that all is not lost. It is possible, and surprisingly straightforward, to measure development and testing accurately and usefully, even if not precisely. You can measure how much time is spent on test design and execution compared with time spent on interruptions, track the coverage obtained for each product area, and more.
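A sketch of the kind of lightweight tally this implies appears below; the activity names and minutes are assumptions made for the example, not Michael's data.

```python
from collections import defaultdict

# Hypothetical log of one testing session: (activity, minutes).
session_log = [
    ("test design and execution", 55),
    ("interruption", 10),
    ("test design and execution", 40),
    ("setup", 15),
    ("interruption", 20),
]

# Sum minutes per activity and report each activity's share of the session.
minutes_by_activity = defaultdict(int)
for activity, minutes in session_log:
    minutes_by_activity[activity] += minutes

total = sum(minutes_by_activity.values())
for activity, minutes in sorted(minutes_by_activity.items(), key=lambda kv: -kv[1]):
    print(f"{activity:<26} {minutes:>3} min  ({100 * minutes / total:.0f}%)")
```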
Michael Bolton, DevelopSense
The Net Promoter Score: Measure and Enhance Software Quality
Would you like to know, prior to release, how your customers will perceive product quality? Employing the Net Promoter Score (NPS) technique, Anu Kak shares a strategy he has successfully used to provide this information and, at the same time, help improve actual product quality. Today, many organizations are using NPS for their production products to identify customers who are most likely to be either promoters or detractors. This measurement tool provides the information needed to prioritize product fixes and enhancements. Anu shares his experiences applying NPS within software product development to enhance quality before release. He explains the step-by-step implementation of NPS within software engineering. Learn how to read and analyze NPS feedback and implement an NPS-centric process to enhance product quality. Take back a road map to evangelize NPS adoption among the stakeholders in your organization.
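The NPS calculation itself is standard: on a 0-10 "would you recommend us?" scale, respondents scoring 9-10 are promoters, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up survey responses:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical responses gathered from users of a pre-release build.
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 4, 8]
print(f"NPS = {net_promoter_score(responses):+.0f}")
```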
Anu Kak, PayPal, Inc.