Applying Courtship Principles: Hiring for the Long Term
As managers, we tend to focus on improving our processes. But have you considered that good people—not processes—are really the foundation of high-quality software? Competent and skilled people—combined with good process—can consistently produce higher-quality software...
Philip Lew, XBOSoft

Servant Leadership: It’s Not All It’s Cracked Up to Be
Ah, the sound of feathers being ruffled! Tricia Broderick believes that servant leadership is not all that it’s cracked up to be. She wants and expects more from leaders than just being servants who act only when asked. Until now, a common (and easy) coaching style has been to transform...
Tricia Broderick, Pearson

Measure Customer and Business Feedback to Drive Improvement
Companies often go to great lengths to collect metrics. However, even the most rigorously collected data tends to be ignored, despite its findings and its potential to improve practices. Today, one metric that cannot be ignored is customer satisfaction. Customers are more than willing to...
Paul Fratellone, uTest

Form Follows Function: The Architecture of a Congruent Organization
One principle architects employ when designing buildings is "form follows function." That is, the layout of a building should be based upon its intended function. In software, the same principle helps us create an integrated design that focuses on fulfilling the intent of the system. Ken Pugh explores congruency, the state in which all actions work toward a common goal. For example, as Ken sees it, if you form and promote integrated teams of developers, testers, and business analysts, then personnel evaluations should be focused on team results rather than on each individual’s performance. If you embrace the principle of delivering business value as quickly as possible, the entire organization should focus on that goal and not the more typical 100% resource utilization objective. If you choose to have agile teams, then they should be co-located for easy communication, rather than scattered across buildings or around the world.
Ken Pugh, Net Objectives

The Dangers of the Requirements Coverage Metric
When testing a system, one question that always arises is, “How much of the system have we tested?” Coverage is defined as the ratio of “what has been tested” to “what there is to test.” One of the basic coverage metrics is requirements coverage, which measures the percentage of the requirements that have been tested. Unfortunately, the requirements coverage metric comes with some serious difficulties. Requirements are difficult to count; they are ideas, not physical things, and they come in different formats, sizes, and quality levels. In addition, making a complete count of “what there is to test” is impossible in today’s hyper-complex systems. The imprecision of this metric makes it unreliable, or even undefined and unusable. What is a test manager to do?
Lee Copeland, Software Quality Engineering
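As a point of reference, the ratio Lee describes can be written generically as follows; the notation is illustrative rather than the session’s own:

\[
\text{requirements coverage} \;=\; \frac{\text{requirements tested}}{\text{requirements identified}} \times 100\%
\]

The difficulties above live in the denominator: if “what there is to test” cannot be counted reliably, the percentage rests on an unstable base.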
The Metrics Minefield
In many organizations, management demands measurements to help assess the quality of software products and projects. Are those measurements backed by solid metrics? How do we make sure that our metrics reliably measure what they are supposed to? What skills do we need to do this job well? Measurement is the art and science of making reliable and significant observations. Michael Bolton describes some common problems and risks in software measurement, and what we can do to address them. Learn to think critically about numbers: what they appear to measure and how they can be distorted. Improve the quality of the information you gather by understanding the relationship between observation, measurement, and metrics. Evaluate your measurements by asking probing questions about their validity.
Michael Bolton, DevelopSense, Inc.

Performance Appraisals for Agile Teams
Traditional performance evaluations, which focus solely on individual performance, create a “chasm of disconnect” for agile team members. Because agile is all about team performance and trust, the typical HR performance evaluation system is not congruent with agile development. Based on his practical experience leading agile teams, Michael Hall explores how measurements drive behavior, why team measurement is important, what to measure, and what not to measure. Michael introduces tangible techniques for measuring agile team performance: end-of-sprint retrospectives, sprint and project report cards, peer reviews, and annual team performance reviews. To demonstrate what he’s describing, Michael uses role plays to contrast traditional, dysfunctional annual reviews with agile-focused performance reviews.
Michael Hall, WorldLink, Inc.

Quality Metrics for Testers: Evaluating Our Products, Evaluating Ourselves
As testers, we focus our efforts on measuring the quality of our organization's products. We count defects and list them by severity; we compute defect density; we examine the changes in those metrics over time for trends; and we chart customer satisfaction. While these are important, Lee Copeland suggests that to reach a higher level of testing maturity, we must apply similar measurements to ourselves. He suggests you count the number of defects in your own test cases and the length of time needed to find and fix them; compute test coverage, the measure of how much of the software you have actually exercised under test conditions; and determine Defect Removal Effectiveness, the ratio of the number of defects you actually found to the total number you should have found. These and other metrics will help you evaluate and then improve the effectiveness and efficiency of your testing process.
Lee Copeland, Software Quality Engineering
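For reference, Defect Removal Effectiveness is conventionally written as follows. This is a standard formulation, not necessarily the session’s exact definition; because “the total number you should have found” cannot be known directly, it is usually approximated by adding the defects that escape to later phases or to production:

\[
\text{DRE} \;=\; \frac{\text{defects found by testing}}{\text{defects found by testing} + \text{defects found afterward}} \times 100\%
\]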
A New Paradigm for Collecting and Interpreting Bug Metrics
Many software test organizations count bugs; however, most do not derive much value from the practice, and some metrics can actually harm the quality of their software or their organization. Although valuable insights can be gained by examining find and fix rates or by graphing open bugs over time, you can be more easily fooled than informed by such metrics. Metrics used for control rather than inquiry tend to promote dysfunctional behavior whenever people know they are being measured. In this session, James Bach examines the subtleties of bug metrics analysis and shows examples of both helpful and misleading metrics from actual projects. Instead of the well-known Goal/Question/Metric paradigm, James presents a less intrusive approach to measurement that he describes as Observe/Inquire/Model. Learn about the dynamics and dangers of measurement and a new approach to improve both your metrics and the software you produce.
James Bach, Satisfice Inc

Using Defect Data to Make Real Quality Improvements
A large development organization was challenged to decrease production defects by at least 70 percent. Without extra money or time to install major process changes, what should be done? As a baseline, there was a production defect database that had been running at a steady state for over a year, but there was no way to size the many different projects and no appetite for either function points or measuring lines of code. In this interesting case study, Betsy Radley reports how the organization used approximations, and sometimes crude assumptions, to develop measurements from the defect data. These measurements identified the applications that had the fewest production defects. Find out how they used that information to look for the processes and tools used in these "good" applications and then applied them to the "bad" ones.
Betsy Radley, Nationwide Insurance Company