|
Testing Imprecise Requirements
Slideshow
Articles on abc.net and elsewhere reported that Volvo recently discovered a non-traditional requirement: any self-driving vehicle approved for use outside Australian cities must recognize kangaroos on or near the roadway and take appropriate action. The kangaroo’s bounce confused the large-animal detector! In this session, industry expert David Gelperin shares a new perspective on the value of imprecise requirements and explores the nature of testing them. Excess precision may hamper the development of optimal solutions by excluding effective designs. Imprecise statements reduce the risk of excess precision and signal the need for analysis to understand their deeper meaning. Intentionally imprecise requirements entail a mixture of research and development, and they are a valuable supplement to traditional requirements.
|
David Gelperin
|
|
Mission Critical Automation Testing
Slideshow
When critical subsystems fail, the resulting losses can be catastrophic. In the insurance industry, miscalculated premiums can drive defect costs well over a million dollars. In this session, Mike Keith and Dom Nunley draw on their practical experience with insurance systems testing to provide an overview of combinatorial automation testing for high-risk backend system areas—i.e., features that absolutely must work correctly. They share a process for categorizing requirement risk levels to determine which requirements warrant combinatorial testing. Mike and Dom illustrate combinatorial testing techniques such as N-FAT, N-Wise, and RANDOM, which can be used to automatically generate test cases. These methods ensure coverage commensurate with risk while keeping the number of tests that run under control.
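For context on the N-Wise technique mentioned above, here is a minimal illustrative sketch (not taken from the session) of greedy pairwise (2-wise) test-case generation in Python; the rating-factor names and values are hypothetical.

```python
# Minimal sketch of N-wise (here pairwise, n=2) test case generation.
# The rating-factor names and values below are hypothetical examples.
from itertools import combinations, product

def nwise_tests(parameters, n=2):
    """Greedily pick full test cases until every n-way combination of
    parameter values is covered at least once."""
    names = list(parameters)

    # All n-way value combinations that must be covered.
    required = set()
    for group in combinations(names, n):
        for values in product(*(parameters[p] for p in group)):
            required.add(tuple(zip(group, values)))

    # Exhaustive candidate pool; fine for small parameter spaces.
    candidates = [dict(zip(names, vals)) for vals in product(*parameters.values())]

    tests = []
    while required:
        def newly_covered(test):
            return {c for c in required if all(test[p] == v for p, v in c)}
        # Pick the candidate that covers the most still-uncovered combinations.
        best = max(candidates, key=lambda t: len(newly_covered(t)))
        required -= newly_covered(best)
        tests.append(best)
    return tests

if __name__ == "__main__":
    params = {
        "state": ["NC", "OH", "TX"],
        "vehicle": ["car", "truck"],
        "coverage": ["basic", "full"],
    }
    for test in nwise_tests(params, n=2):
        print(test)
    # Greedy pairwise selection typically yields fewer tests than the
    # exhaustive 3 * 2 * 2 = 12 combinations.
```

Raising n toward the number of parameters recovers exhaustive testing, which is the coverage-versus-test-count trade-off the session addresses.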
|
Mike Keith
|
|
Everything I Learned about Automation, I Learned from Saturday Morning Cartoons
Slideshow
Do you remember sitting in front of the television as a kid enjoying your favorite Saturday morning cartoons? Chris Loder shows you how the lessons we learned from those cartoons apply to our everyday work in test automation. Wait until you hear what we’ve learned from the likes of Scooby Doo®, Wile E. Coyote®, and many other favorites! Like Bugs Bunny®, maybe we should “have taken that left turn at Albuquerque” and done things a little differently. Discover how the animators of Spiderman® didn’t redraw every background but reused animation cels, much as we reuse pieces of test code. And see how Scooby Doo taught us that with the right team, we can solve anything! Chris talks about the automation he is building at InGenius and how all those hours in front of the TV are helping make it successful. Come for the ‘toons, leave with the lessons!
|
Chris Loder
|
|
No More Shelfware—Let’s Drive
Slideshow
When Isabel Evans learned to drive a car, she also learned how to check, clean, and change spark plugs, mend the fan belt with a stocking, and indicate speed and direction changes with arm and hand signals. Now, we don’t expect to have to do any of those things; we just drive the car. That’s how test tools and automation could be. Just drive and concentrate on the journey of delivering software continuously—concentrate on engineering the solutions, not on the automation. To be effective engineers, we need the support of a powerful toolset that we understand. Is that what we have? Or do we still have shelfware sitting around expensively doing nothing, because we don’t know how to "clean the spark plugs"? Can we remove the difficulties and make using test automation a better experience, just like driving a car?
|
Isabel Evans
|
|
Testing in the Dark
Slideshow
Isn’t it amazing? Stakeholders drop software on our desks and expect us to test it—without any requirements, design, or product knowledge whatsoever. About the only clear thing is the absurd and unrealistic deadline. We are expected to bend over backward, spread magic pixie dust, and heroically test quality into a product we have never heard of before. But testing in the dark is not impossible, and as Rob Sabourin shows, it can even be a very valuable and fun experience. Learn strategies to emerge from a murky fog into clear, meaningful quality insights, and leverage unlikely sources of information about what stakeholders care about and what users really need the software to do. Rob introduces you to methods of reconnaissance-style, charter-driven, and session-based exploratory testing and helps you provide meaningful estimates to stakeholders with minimal hard information about the software under test.
|
Rob Sabourin
|
|
The Logic of Verification
Slideshow
Software testing is sometimes described as “verification and validation”—or, according to Wikipedia, “the process of checking that a software system meets specifications and that it fulfills its intended purpose.” Yet, renowned tester and teacher Michael Bolton argues, if we examine the concept and logic of verification, we quickly recognize serious limits on what can and cannot be checked and verified. This is not to say that checking is a bad thing—on the contrary, checking can be very valuable. Still, it is important for testers and their clients to recognize the fundamental limitations of checking and to address those limitations in their testing strategies.
|
Michael Bolton
|
|
Building a Modern DevOps Enterprise Testing Organization
Slideshow
The DevOps movement is front and center across enterprises. Companies with mature systems are breaking down siloed IT departments and federating them into product development teams and departments. Testing and its practices are at the heart of these changes. Traditionally, development organizations have been filled with mostly manual testers and a limited number of automation and performance engineers. Adam Auerbach says this has to change. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding in order to stay relevant and add more value to the business. Based on his experiences at Lincoln Financial and Capital One, Adam explores what the DevOps movement is all about, its core values, and proven patterns for how testing must evolve.
|
Adam Auerbach
|
|
Help! I Am Drowning in 2-Week Sprints… Please Tell Me What NOT to Test!
Slideshow
Sometimes we allow ourselves to drown in work… Mary Thorn hears it all the time: testers complaining to their teams at retrospectives that they do not have enough time to test everything. She often sees testers work overtime during the last week of a sprint to ensure the definition of done is met. Why do they do this? Why do we, as testers, enable the bad behaviors of “Scrummerfall” or a lack of whole-team ownership of quality? Mary aims to arm testers with techniques that let them test smarter, not harder, and enable testers and the team to have better conversations that make clear what they are testing in the sprint. Most importantly, she wants you to come out of her session able to answer the question, “What are you not going to test this sprint?” Take home some approaches that allow you to swim, not sink, by focusing your own and your team’s efforts.
|
Mary Thorn
|
|
Rediscover Exploratory Testing
Slideshow
The testing community is caught between a rock and a hard place when it comes to exploratory testing. Although exploratory testing has been around for ages, it often creates more confusion than clarity. Is exploratory testing an activity, something that you do? Or is it an approach, a way or a style of doing something? Isn't all testing exploratory? When do you do it? How do you do it properly? How does it relate to the entire software lifecycle? To answer these questions, join Ingo Philipp as he shares the most common confusions and controversies on this topic. He explains what exploratory testing is, why to use it, and how and when to practice it. Ingo discusses specific heuristics, techniques, and tours of exploratory testing, especially useful in fast-paced development environments, to help you get the most out of it in your daily work.
|
Ingo Philipp
|
|
7 Proven Ways to Ruin Your Test Automation
Slideshow
Test automation projects fail, but why? Could you stop it from happening? In this tongue-in-cheek talk, Seretta Gamba shares seven proven methods to disrupt or utterly ruin a test automation project, including letting a lone champion keep important knowledge to himself, ignoring good programming practices, setting impossible goals, and feigning support. Seretta’s humorous recommendations give managers, testers, and automators alike the early warning signs of an automation project in danger. Having “warned” you that the most effective defenses are found using the test automation patterns, Seretta provides the tools needed to counter and resolve the issues that lead to project failure. You will receive access to an online resource, the test automation patterns wiki, that leads you through test automation success patterns and offers ways to avoid failures.
|
Seretta Gamba
|