|
The Many Hats of a Tester As testers, we must wear many hats to do our job effectively. Quite often, it is the pith helmet of an explorer, hacking through the vines and darkness of the unknown, or the baseball cap of the crime scene investigator, determining how the failure occurred. To make things even more interesting, the hats we need often differ from project to project and from organization to organization. Adam Goucher begins with a general discussion of the hats testers typically wear and when each is appropriate or inappropriate. He then leads an “Art Show” exercise, a brainstorming process that results in lots of “art” on the walls, illustrating the hats we all may wear in our daily testing activities. Through the Art Show process, you'll take away new insights into which hats you and other testers need, tips for wearing the beautiful ones with success, and ways to avoid putting on the ugly ones.
|
Adam Goucher, Zerofootprint
|
|
STAREAST 2010: Testing AJAX: What Does It Take? Using AJAX technologies, Web 2.0 applications execute much of the application functionality directly in the browser. While creating a richer user experience, these technologies pose significant new challenges for testers. Joachim Herschmann describes the factors that are critical in testing Web 2.0 applications and what it takes to master these challenges. After presenting an overview of typical Web 2.0 application technologies, Joachim explains why object recognition, synchronization, and speed are the pillars of a truly robust and reliable AJAX test automation approach. He shows how to architect testability directly into AJAX applications, including examples of how to instrument applications to provide the data that testing tools require. Joachim shares experiences from Micro Focus's Linz development lab and describes how the team overcame the challenges of testing its modern AJAX applications.
|
Joachim Herschmann, Borland (a Micro Focus company)
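To make the synchronization pillar concrete, here is a minimal illustrative sketch in Java using Selenium WebDriver rather than the Silk tooling discussed in the session; the page URL, the data-testid attribute, and the window.pendingRequests counter are hypothetical stand-ins for whatever hooks your own application exposes.

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class AjaxSyncSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/orders"); // hypothetical AJAX-driven page

            // Synchronization: avoid fixed sleeps; wait for an explicit condition
            // that signals the asynchronous update has actually finished.
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));

            // Object recognition: locate elements by a stable, test-friendly attribute
            // the application exposes for tooling (here, a hypothetical data-testid).
            WebElement results = wait.until(ExpectedConditions.visibilityOfElementLocated(
                    By.cssSelector("[data-testid='order-results']")));

            // Instrumented testability hook: wait until the page reports that no
            // requests are pending (assumes the app maintains window.pendingRequests).
            wait.until(d -> (Boolean) ((JavascriptExecutor) d)
                    .executeScript("return window.pendingRequests === 0;"));

            System.out.println("Rows rendered: " + results.findElements(By.tagName("tr")).size());
        } finally {
            driver.quit();
        }
    }
}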
|
|
Using Test Automation Frameworks As you embark on implementing or improving automation within your testing process, you'll want to avoid the "Just Do It" attitude some have taken. Perhaps you've heard the term "test automation framework" and wondered what it means, what it does for testing, and if you need one. Andrew Pollner, who has developed automated testing frameworks for more than fifteen years, outlines how frameworks have grown up around test automation tools. Regardless of which automation tool you use, the concepts of a framework are similar. Andrew answers many of your questions: Why build a framework? What benefit does it provide? What does it cost to build a framework? What ROI can I expect when using a framework? Explore the different approaches to framework development and identify problems to watch out for to ensure the approach you take will provide years of productivity.
|
Andrew Pollner, ALP International Corp
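To make the framework idea concrete, here is a deliberately tiny keyword-driven sketch in Java; it is not tied to any particular tool, and the keywords, targets, and println actions are placeholders for calls into whatever automation tool you actually use.

import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A minimal keyword-driven framework: test steps are data, and the framework maps
// each keyword to an action, so tests can be written and maintained without
// touching the underlying tool's API directly.
public class MiniKeywordFramework {

    private final Map<String, Consumer<String>> keywords = Map.of(
        "open",   target -> System.out.println("Opening " + target),
        "type",   target -> System.out.println("Typing into " + target),
        "click",  target -> System.out.println("Clicking " + target),
        "verify", target -> System.out.println("Verifying " + target)
    );

    public void run(List<String[]> steps) {
        for (String[] step : steps) {
            String keyword = step[0];
            String target  = step[1];
            Consumer<String> action = keywords.get(keyword);
            if (action == null) {
                throw new IllegalArgumentException("Unknown keyword: " + keyword);
            }
            action.accept(target);
        }
    }

    public static void main(String[] args) {
        // In a real framework these steps would come from a spreadsheet or database.
        List<String[]> loginTest = List.of(
            new String[] {"open",   "https://example.com/login"},
            new String[] {"type",   "username field"},
            new String[] {"click",  "login button"},
            new String[] {"verify", "welcome banner"}
        );
        new MiniKeywordFramework().run(loginTest);
    }
}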
|
|
Patterns of Testability Testability requires interfaces for observing and controlling software, either built into the software itself or provided by the software ecosystem. Observability exposes the input and output data of components and allows execution flow to be monitored. Controllability provides the ability to change data and drive actions through the component interface. Without testability interfaces, defects are harder to find, reproduce, and fix. Manual testing can be improved by access to the information these interfaces provide, while all automated testing requires them. Alan Myrvold shares software component diagrams that show patterns of testability. These patterns will help you architect and evaluate the observability and controllability of your system. Apply these testability patterns to describe and document your own testability interfaces.
|
Alan Myrvold, Microsoft
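As an illustration of the two concepts (not an example from the session itself), the Java sketch below gives a hypothetical OrderProcessor component small observability and controllability interfaces: one for reading back what it did, and one for pinning its clock and simulating a downstream failure.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Observability: the component exposes what it did, so a test (or a person) can
// inspect inputs, outputs, and execution flow without attaching a debugger.
interface Observable {
    List<String> recentEvents();
}

// Controllability: the component lets a test drive it into specific states
// instead of waiting for those states to occur naturally.
interface Controllable {
    void setClock(long fixedEpochMillis);            // pin "now" for deterministic tests
    void simulateDownstreamFailure(boolean failing); // force the hard-to-reach path
}

class OrderProcessor implements Observable, Controllable {
    private final List<String> events = new CopyOnWriteArrayList<>();
    private long fixedClock = -1;
    private boolean downstreamFailing = false;

    public boolean process(String orderId) {
        long now = (fixedClock >= 0) ? fixedClock : System.currentTimeMillis();
        events.add("process(" + orderId + ") at " + now);
        if (downstreamFailing) {
            events.add("downstream failure for " + orderId);
            return false;
        }
        return true;
    }

    @Override public List<String> recentEvents() { return List.copyOf(events); }
    @Override public void setClock(long fixedEpochMillis) { this.fixedClock = fixedEpochMillis; }
    @Override public void simulateDownstreamFailure(boolean failing) { this.downstreamFailing = failing; }
}

public class TestabilityPatternsDemo {
    public static void main(String[] args) {
        OrderProcessor processor = new OrderProcessor();
        processor.setClock(1_000_000L);                 // controllability
        processor.simulateDownstreamFailure(true);      // controllability
        boolean ok = processor.process("order-42");
        System.out.println("processed ok? " + ok);
        processor.recentEvents().forEach(System.out::println); // observability
    }
}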
|
|
The Myths of Rigor We hear that more rigor means good testing and, conversely, that less rigor means bad testing. Some managers, who've never studied testing, done testing, or even "seen" testing up close, insist that testing be rigorously planned in advance and fully documented, perhaps with tidy metrics thrown in to make it look more scientific. However, sometimes measurement, documentation, and planning don't help. In those cases, rigor may require us not to do them. As part of winning court cases, James Bach has done some of the most rigorous testing any tester will do in a career. James shows that rigor is at least as dangerous as it is useful and that we must apply care and judgment. He describes the struggle in our craft, not just over how rigorous our processes should be, but over what kind of rigor matters and when rigor should be applied.
|
James Bach, Satisfice, Inc.
|
|
Lessons Learned from 20,000 Testers on the Open Source Mozilla Project Open source community-based software development can be extremely wild and woolly. Testing in this environment is even more so, given that it is often less structured than software design and coding activities. What are the differences between testing open source and commercial or corporate applications? What can you learn from the open source community? Take a peek into the open source testing world with Tim Riley as he describes how the Mozilla Project develops and tests the Firefox browser. Tim describes how they monitor new builds, how people all around the world engage in testing, and how anomalies quickly bubble up to the release team. Although some of the tools they use may look familiar, how the Mozilla Project applies them will give you a fresh perspective. Find out how to apply the lessons learned at Mozilla to your projects and unleash the creative power of really smart people inside and outside your organization.
|
Tim Riley, Mozilla
|
|
The Buccaneer Tester: Winning Your Reputation Who drives your career as a tester or test leader? Hopefully, not the company for which you work. It's you; you must be the driver. Because the craft of testing is still relatively free and open, there is no authority structure that defines or controls our industry. There are no generally accepted and standardized credentials that will admit you to the upper tier of income and respect as a tester. There are no universities that offer degrees in testing, although certificates and certifications abound. What we do have is a pastiche of communities, proprietary methodologies, and schools of thought, together with ambitious individuals who write articles, teach, argue with each other, and speak at conferences.
|
James Bach, Satisfice, Inc.
|
|
Stop Guessing About How Customers Use Your Software What features of your software do customers use the most? What parts of the software do they find frustrating or completely useless? Wouldn't you like to target these critical areas in your testing? Most organizations get feedback, much later than anyone would like, from customer complaints, product reviews, and online discussion forums. Microsoft employs proactive approaches to gather detailed customer usage data from both beta tests and released products, achieving greater understanding of the experience of its millions of users. Product teams analyze this data to guide improvement efforts, including test planning, throughout the product cycle. Alan Page shares the inner workings of Microsoft's methods for gathering customer data, including how to know what features are used, when they are used, where crashes are occurring, and when customers are feeling pain.
|
Alan Page, Microsoft
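Microsoft's actual instrumentation pipeline is far richer than anything shown here, but a toy sketch of the underlying idea, counting feature use and capturing crash reports in-process, might look like the following Java example; the feature names and the crash-handling behavior are invented for illustration.

import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// A toy in-process usage recorder: count how often each feature is used and
// capture crash reports, the raw material for deciding where to focus testing.
public class UsageTelemetry {
    private final Map<String, LongAdder> featureCounts = new ConcurrentHashMap<>();

    public void featureUsed(String featureName) {
        featureCounts.computeIfAbsent(featureName, k -> new LongAdder()).increment();
    }

    public void crashed(String featureName, Throwable error) {
        // In a real product this would be queued and uploaded (with user consent);
        // here we simply print it.
        System.out.printf("[%s] crash in %s: %s%n", Instant.now(), featureName, error);
    }

    public Map<String, Long> snapshot() {
        Map<String, Long> out = new ConcurrentHashMap<>();
        featureCounts.forEach((name, count) -> out.put(name, count.sum()));
        return out;
    }

    public static void main(String[] args) {
        UsageTelemetry telemetry = new UsageTelemetry();
        telemetry.featureUsed("spell-check");
        telemetry.featureUsed("spell-check");
        telemetry.featureUsed("export-pdf");
        telemetry.crashed("export-pdf", new IllegalStateException("no printer driver"));
        System.out.println("usage counts: " + telemetry.snapshot());
    }
}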
|
|
Agile Testing: Uncertainty, Risk, and How It All Works Teams that succeed with agile methods reliably deliver releasable software at frequent intervals and at a sustainable pace. At the same time, they can readily adapt to the changing needs and requirements of the business. Unfortunately, not all teams are successful in their attempt to transition to agile and, instead, end up with a "frAgile" process. The difference between an agile and a frAgile process is usually in the degree to which the organization embraces the disciplined engineering practices that support agility. Teams that succeed are often the ones adopting specific practices: acceptance test-driven development, automated regression testing, continuous integration, and more. Why do these practices make such a big difference? Elisabeth Hendrickson details essential agile testing practices and explains how they mitigate common project risks related to uncertainty, ambiguity, assumptions, dependencies, and capacity.
|
Elisabeth Hendrickson, Quality Tree Software, Inc.
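As a small illustration of the first of those practices, here is a minimal acceptance-test-driven development sketch in Java with JUnit 5; the shipping-fee rule, class names, and amounts are invented for the example, and the same tests would run as automated regression checks in continuous integration.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Acceptance-test-driven development in miniature: the team agrees on executable
// examples *before* implementation, and those examples then run continuously as
// automated regression checks.
class ShippingFeeAcceptanceTest {

    // Agreed example: orders of $100 or more ship free; smaller orders pay $7.95.
    @Test
    void ordersOfOneHundredDollarsOrMoreShipFree() {
        assertEquals(0.00, ShippingCalculator.feeFor(100.00), 0.001);
    }

    @Test
    void smallerOrdersPayFlatFee() {
        assertEquals(7.95, ShippingCalculator.feeFor(99.99), 0.001);
    }
}

// The production code written to make the agreed examples pass.
class ShippingCalculator {
    static double feeFor(double orderTotal) {
        return orderTotal >= 100.00 ? 0.00 : 7.95;
    }
}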
|
|
You Can't Test Quality into Your Systems Many organizations refer to their test teams and testers as QA departments and QA engineers. However, because errant systems can damage, and even destroy, products and businesses, software quality must be the responsibility of the entire development team and every stakeholder. As the ones who find and report defects, and who sometimes carry the “quality assurance” moniker, the test community has a unique opportunity to take up the cause of error prevention as a priority. Jeff Payne paints a picture of team- and organization-wide quality assurance that is not the process-wonky, touchy-feely QA of the past that no one respects. Rather, it's tirelessly evaluating the software development artifacts beyond code; it's measuring robustness, reliability, security, and other attributes that focus on product quality rather than process quality; it's using risk management to drive business decisions around quality; and more.
|
Jeffery Payne, Coveros, Inc.
|