|
Lucky and Smart Charles Darwin was certainly a great scientist, but his career and his discoveries were also strongly influenced by serendipity and luck. What could this great explorer and scientist teach us about testing?
|
|
|
Going Mobile: The New Challenges for Testers Mobile device manufacturers face many challenges bringing quality products to market. Most testing methodologies were created for data processing, client/server, and Web products. As such, they often fail to address key areas of interest for mobile applications: usability, security, and stability. Wayne Hom discusses approaches you can use to transform requirements into usability guides and use cases into test cases to ensure maximum test coverage. He describes automation frameworks that support multiple platforms to reduce test cycle times and increase test coverage, while measuring and reporting at the different phases of the software lifecycle. Wayne presents case studies illustrating how to reduce test cycles by up to 75 percent and demonstrates solutions that have helped providers of third-party applications and services manage testing cycles across multiple mobile device releases.
|
Wayne Hom, Augmentum Inc.
|
|
A Modeling Framework for Scenario-Based Testing Scenario-based testing is a powerful method for finding problems that really matter to users and other stakeholders. By including scenario tests representing actual sequences of transactions and events, you can uncover the hidden bugs often missed by other functional testing. Designing scenarios requires you to use your imagination to create narratives that play out through systems from various points of view. Basing scenarios on a structured analysis of the data provides a solid foundation for a scenario model. Good scenario design demands that you combine details of business process, data flows (including their frequency and variations), and clear data entry and verification points. Fiona Charles describes a framework for modeling scenario-based tests and designing structured scenarios according to these principles.
|
Fiona Charles, Quality Intelligence Inc.
|
|
STARWEST 2008: Quality Metrics for Testers: Evaluating Our Products, Evaluating Ourselves As testers, we focus our efforts on measuring the quality of our organization's products. We count defects and list them by severity; we compute defect density; we examine the changes in those metrics over time for trends; and we chart customer satisfaction. While these are important, Lee Copeland suggests that to reach a higher level of testing maturity, we must apply similar measurements to ourselves. He suggests you count the number of defects in your own test cases and the length of time needed to find and fix them; compute test coverage, the measure of how much of the software you have actually exercised under test conditions; and determine Defect Removal Effectiveness, the ratio of the number of defects you actually found to the total number you should have found. These and other metrics will help you evaluate and then improve the effectiveness and efficiency of your testing process.
|
Lee Copeland, Software Quality Engineering
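The two tester-focused metrics named in the abstract above, defect density and Defect Removal Effectiveness, are simple ratios. A minimal sketch of both calculations follows; the function names and the sample numbers are illustrative assumptions, not data from the session:

```python
def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_removal_effectiveness(found_pre_release, escaped_post_release):
    """Ratio of defects actually found before release to the total
    that should have been found (pre-release finds plus escapes
    discovered after release)."""
    total_should_have_found = found_pre_release + escaped_post_release
    return found_pre_release / total_should_have_found

# Hypothetical project: 30 defects in a 12.5 KLOC component,
# 90 defects caught in test, 10 escaped to customers.
print(defect_density(30, 12.5))                 # 2.4 defects per KLOC
print(defect_removal_effectiveness(90, 10))     # 0.9, i.e. 90% DRE
```

Tracked over several releases, a rising DRE suggests the test process is catching a larger share of the defects it should; the same arithmetic applies whether the escapes are counted from production incidents or from a later test phase.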
|
|
Lessons Learned in Acceptance Test-Driven Development Acceptance Test-Driven Development (ATDD), an application of the test-first practice of XP and agile development, can add enormous value to agile teams that are proficient in these practices. Moving from awareness of ATDD to being proficient at practicing ATDD comes about only after learning some important lessons. First, no one group can "own" the process. Second, ATDD is first about helping the customer and the team understand the problem; then it is about testing. Third, writing automated acceptance tests in ATDD is not the same as writing automated tests with typical automation tools. Antony Marcano shares his experiences with ATDD (the good, the bad, and the ugly) and the many other lessons he's learned in the process. Discover the benefits and pitfalls of ATDD and take advantage of Antony's experiences so that you avoid common mistakes that teams make on their journey to becoming proficient practitioners of ATDD.
|
Antony Marcano, Testing Reflections
|
|
Truths and Myths of Static Analysis Identifying defects with static analysis tools has advanced significantly in the last few years. Yet, there still are many misconceptions about the capabilities and limits of these innovative tools, and sales propaganda such as "100 percent path coverage" has not helped at all. Paul Anderson debunks common myths and clarifies the strengths and limitations of static analysis technology. You'll learn about the types of defects that these tools can catch and the types they miss. Paul demystifies static analysis jargon, explaining terms such as "object-sensitive" and "context-sensitive". Find out how the FDA uses static analysis today to evaluate medical device software. Paul jump-starts your understanding of static analysis so you can decide where to apply this technology and have more knowledge and confidence in your interactions with tool vendors.
|
Paul Anderson, GrammaTech, Inc.
|
|
Toward an Exploratory Testing Culture Traditional testing teams often agonize over exploratory testing. How can they plan and design tests without detailed up-front documentation? Stubborn testers may want to quit because they are being asked to move out of their comfort zone. Can a team's testing culture be changed? Rob Sabourin describes how several teams have undergone dramatic shifts to embrace exploratory testing. Learn how to blend cognitive thinking skills, subject matter expertise, and "hard-earned" experience to help refocus your team and improve your outcomes. Learn to separate bureaucracy from thinking and paperwork from value. Explore motivations for change and resistance to it in different project contexts. Leverage Parkinson's Law (work expands to fill the time available) and Dijkstra's Principle (testing can show the presence of bugs, but not their absence) to inspire and motivate you and your team to get comfortable in the world of exploratory testing.
|
Robert Sabourin, AmiBug.com Inc
|
|
Adding Measurement to Reviews Conceptually, most testers and developers agree that reviews and inspections of software designs and code can improve software and reduce development costs. However, most are unaware that measuring reviews and inspections greatly magnifies these improvements and savings. Riley Rice presents data from more than 4,000 real-world software projects in different domains: defense, commercial, and government. He compares the results of three scenarios: doing few or no reviews, doing unmeasured reviews, and doing measured reviews. For each scenario, Riley compares the resulting metrics: defects delivered to customers, total project pre-release costs, total project post-release costs, total project lifecycle costs, project duration, mean time between failures, and productivity. The results are surprising: measured reviews are substantially more effective, going far beyond what most people would expect.
|
Riley Rice, Booz Allen Hamilton
|
|
Life as a Performance Tester At the core of most performance testing challenges and failed performance testing projects are serious misunderstandings and miscommunications within the project team. Scott Barber and Dawn Haynes share approaches to overcoming some of the most common frustrations facing performance testers today. Rather than simply telling you how to improve understanding and communicate performance testing concepts, Scott and Dawn demonstrate their approaches through an amusing role play of interactions between a lead performance tester and a non-technical executive.
|
Scott Barber, PerfTestPlus, Inc.
|
|
Are Agile Testers Different? On an agile team everyone tests, blurring the lines between the roles of professional developers and testers. What's so special about becoming an agile test professional? Do you need different skills than testers on traditional projects? What guides you in your daily activities? Lisa Crispin presents her "Top Ten" list of principles that define an agile tester. She explains that when it comes to agile testers, skills are important but attitude is everything. Learn how agile testers acquire the results-oriented, customer-focused, collaborative, and creative mindset that makes them successful in an agile development environment. Agile testers apply different values and principles (feedback, communication, simplicity, continuous improvement, and responsiveness) to add value in a unique way. If you're a tester looking for your place in the agile world or a manager looking for agile testers, Lisa can help.
|
Lisa Crispin, ePlan Services, Inc.
|