How We Test Software at Microsoft
Discover how Microsoft implements and manages the software-testing process company-wide, with guidance and insights direct from its test managers. Organizing the people, processes, and tools of any testing program can be challenging and resource intensive. Even when the necessary tradeoffs are made, no development team can test every scenario. This book explains how a worldwide leader in software, services, and solutions, staffed with 8,000 testers, implements and manages its testing process effectively company-wide.
Whether you're a tester or test manager, you'll gain expert insights on effective testing techniques and methodologies, including the pros and cons of various approaches. For interesting context, the book also shares facts such as the number of test machines at Microsoft, how the company uses automated test cases, and bug statistics. It answers key testing questions, such as who tests what, when, and with what tools. And it describes how test teams are organized, when and how testing gets automated, testing tools, and feedback, with illuminating insights for software-development organizations of all kinds.

Review By: Noreen Dertinger
12/16/2011
Considering a career as a tester at Microsoft is not a prerequisite to picking up a copy of How We Test Software at Microsoft. What sparked my own interest in this book is that I like to read the stories of successful individuals and organizations, draw upon their successes, and learn from their shortcomings.
How We Test Software at Microsoft is written by three prominent software test professionals: Alan Page, Ken Johnston and Bj Rollison. Together, these professionals have formulated a volume of sound advice and techniques that will enable readers to expand or update their testing knowledge.
The book is divided into four main parts: About Microsoft, About Testing, Test Tools and Systems, and About the Future. Part one will familiarize readers with Microsoft products, engineers, testers, the role of test, and the tools that Microsoft commonly uses. The three chapters that make up this section are especially useful to professionals who are thinking of pursuing a career with Microsoft. (For those interested in learning more about the evolution of Microsoft, I also highly recommend the book Idea Man by Paul Allen.) Other readers may wish to skip ahead to part two.
Those who do read part one will most likely find that, in general, the underlying business framework is similar to their own environments, with some variation in how the philosophies are implemented. Chapter 1 explores Microsoft's goals, values, and mission and its approach to them. In chapter 2, Ken Johnston describes the software testing profession at Microsoft and how it has evolved. I was interested to learn that Microsoft (founded in 1975) did not hire its first tester, Lloyd Frink, a young high school intern, until 1979, and its first wave of testers until 1985. Testing as a career path was implemented at Microsoft in the late 1980s. Chapter 3 covers engineering lifecycles, the most important message of which is that it is essential to know what development model is being used and what stage of the model the project is in.
The heart of How We Test Software at Microsoft is contained in parts two (About Testing) and three (Test Tools and Systems). Herein is an overview, with supporting examples, of approaches to testing and testing tools. While the material is specific to Microsoft, it does provide a framework for readers looking for a single book on how some key techniques can be applied. Readers will be able to customize the knowledge and apply it in their own testing environments. Many references to other key testing books and websites have been provided by the authors.
Some of the topics covered in parts two and three include: good test case design (fundamental to the success of software releases); functional testing techniques, such as equivalence class partitioning, boundary value analysis, and combinatorial analysis; structural testing techniques (white box design approaches); analyzing risk with code complexity; and model-based testing.
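To make the first two of those functional techniques concrete, here is a minimal sketch of my own (not taken from the book), assuming a hypothetical validate_quantity function that accepts order quantities from 1 through 100. The equivalence classes are below-range, in-range, and above-range, and the test values sit on and just beside each boundary:

def validate_quantity(qty: int) -> bool:
    """Return True if qty is a valid order quantity (1..100 inclusive)."""
    return 1 <= qty <= 100

# Boundary value analysis: exercise each equivalence class at its edges
# rather than at arbitrary interior points.
boundary_cases = {
    0: False,    # just below the lower bound (invalid class)
    1: True,     # lower bound (valid class)
    2: True,     # just above the lower bound
    99: True,    # just below the upper bound
    100: True,   # upper bound
    101: False,  # just above the upper bound (invalid class)
}

for value, expected in boundary_cases.items():
    assert validate_quantity(value) == expected, f"unexpected result for {value}"

print("all boundary cases pass")

Choosing values on and immediately around each boundary is what catches the classic off-by-one mistakes (for example, a developer writing qty < 100 instead of qty <= 100) that arbitrary interior values would miss.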
Part three contains material that most testers will likely already be familiar with and have systems in place for. Topics covered include: tools and techniques to manage bugs and test cases; test automation; non-functional testing, including areas such as security, performance, usability, and stress testing; and other tools including source code control and generating the builds for test.
The concluding section contains thoughts for the future. Readers may find that they already have familiarity with some of these thoughts, given that How We Test Software at Microsoft was published in 2009 and technology evolves rapidly.
I highly recommend How We Test Software at Microsoft as a well-written volume and look forward to future publications by these authors. As an experienced tester, I found it a useful refresher that offered insights into new techniques I may be able to use in my own test environment in the future. For aspiring or new test professionals, I believe How We Test Software at Microsoft provides a solid roadmap to success in their careers.
Review By: Debra Martinez
12/16/2011
When I first started reading this book I thought it was going to be about how great it is to be a tester at Microsoft. Boy, was I wrong. This book has great information for the novice and expert tester. The most useful information I found was about structural testing techniques. This information is great for a company such as mine that needs more structure in the way we test. Don't get me wrong, all the information in the book is great. Also, the information on automated testing is valuable and would be great for a company already using or planning to use automation in their testing efforts.
This book has made its rounds in my testing department. There is not a day that goes by when I am not asked if I still have the book. I feel this book is a great addition to any testing department. I was not really happy with all the references to Microsoft being the best place to work as a tester, but the information was good nonetheless. I just wish the authors had realized that the rest of us testers are just as proud of where we work.
The authors do a great job explaining how the bug matrix is a bad idea for any company. Their main point is that a developer can move the bugs around and make it look like he doesn't have much on his plate, which is what happened at my company when we tried this.
We need more books like this one on the shelves of testers. The only problem I had with this book, like I said earlier, is that the authors believe if you are a tester, then you need to work at Microsoft because it's the only place on Earth you can become a real tester. Regardless, the authors do a good job writing the material in a way that everyone can understand, which helps one get the most out of the book. I expect to see this book on shelves for a very long time.
User Comments
BEFORE RELEASING its products, does MS detect and report every failure that its entire Customer base will encounter over, say, at least the first 5 years after release?
If so, then please disregard the rest of my statements.
Otherwise, this is by far the most subjective review I have ever read on this website, one that ironically includes both Fact & Fiction...
An Objective, possibly deniable Fact: "how Microsoft managed to become the biggest software engineering company".
A Subjective, UN-deniable Fiction: "how they have embedded quality within all of their processes, especially with utmost concern for customer needs."
Or it just might be that I am not one of MS's targeted Customers.
If we had released as many defects, with the resulting Customer failures, as we have experienced with MS's products, we would not still be working for any company where I have worked. And we use just the most basic and most common of MS's products.
This is a World-wide company with an International dictionary that does not even include the term "Testware".
Did anyone else notice that -- much less find it significant?
After reading the posted review, an MS television commercial sprang to mind -- the one where an MS QA/Test Manager appears to be attempting to balance himself on something and, while doing so, says (and I paraphrase): "I guess it is my job to break things."
If my job was to test cars and I pushed one off a 1,000-foot cliff, then gee, wonders among wonders, I could report that I broke it and get a pat on the back -- certainly not from my Developers.
Obviously this is a company that has little, if any, understanding of the Problem Domain Curve over a product's Lifecycle -- the graph that illustrates the differentiation between a probable 'realized' Customer failure and those that, while possible, are virtually improbable.
I would not be surprised that many if not most/all of the MS Testware Developers still use terms like: "exhaustive testing", "positive & negative testing", "corner tests", ....
This thinking went out in the early 1980s -- along with the term "bug," which was replaced with "defect," with much Thanks and due Gratitude to Edsger Dijkstra and Bill Hetzel for recognizing that ALL Test Cases/Scenarios are 'positive'.
BEFORE RELEASING our products, we did detect and report every failure -- and most defects -- that our Customer base encountered over the first 5 years after the products' release. It quite probably could have been much longer if not for the limits of our measurements due to personnel changes.
Can you spell Risk Management -- the Heart of the STEP methodology?
How can this be successfully achieved without these metrics that INCLUDE REAL WORLD Customer experience?
ALL Test Cases through Real World Operational Profiles were auto-generated.
These realized a minimum of 80% Statement Coverage of the intended functionality.
Moreover ALL TEST RESULTS ANALYSIS was automated with complete Traceability to each & every Requirement using an Expert System Help facility, transcending archaic and woefully limited binary Pass/Fail technology.
Our employer was a Software/Hardware vendor with hundreds of thousands of Customers. The software we tested amounted to between roughly one quarter and one half of the code in all its released software products.
This was my (might well be considered less than) Humble attempt to say that I have learned a thing or two on the Road to Zero Defects.
I have yet to experience the released results of MS embedding "quality within all of their processes, especially with utmost concern for customer needs."
If my experience with MS's products is what you call 'quality', then we are most definitely not on the same page, much less in the same book, much less in the same library -- is saying much less than on the same planet going too far?
An honest tale speeds best, being plainly told.
Source: King Richard III, Act 4, Scene 3
Every man has business and desire,
Such as it is.
Source: Hamlet, Act 1, Scene 5