|
Exploratory Testing: The Next Generation Exploratory testing is sometimes associated with "ad hoc" testing, randomly navigating through an application. However, emerging exploratory techniques are anything but ad hoc. David Gorena Elizondo describes new approaches to exploratory testing that are highly effective, very efficient, and supported by automation. David describes the information testers need for exploration, explains how to gather that information, and shows you how to use it to find more bugs and find them faster. He demonstrates a faster, directed (not accidental) exploratory bug-finding methodology and compares it to more commonly used approaches. Learn how test history and prior test cases guide exploratory testers; how to use data types, value ranges, and other code summary information to populate test cases; how to optimize record-and-playback tools during exploratory testing; and how exploratory testing can impact churn, coverage, and other metrics.
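|
One idea above, populating test cases from data types and value ranges summarized from the code, lends itself to a quick illustration. The sketch below is mine, not material from the session, and every name in it is hypothetical: it derives classic boundary values from a parameter's declared range.

```python
# A minimal sketch, assuming parameter metadata (type, min, max) has already
# been summarized from the code. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ParamSummary:
    """Summary information about one input parameter."""
    name: str
    type_name: str  # e.g., "int"
    minimum: int
    maximum: int

def boundary_values(p: ParamSummary) -> list[int]:
    """Classic boundary values derived from the parameter's declared range."""
    return [p.minimum - 1, p.minimum, p.minimum + 1,
            p.maximum - 1, p.maximum, p.maximum + 1]

# An "age" field declared as an int in [0, 130] yields candidate inputs
# [-1, 0, 1, 129, 130, 131] for the exploratory session.
age = ParamSummary(name="age", type_name="int", minimum=0, maximum=130)
print(boundary_values(age))
```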
|
David Elizondo, Microsoft Corporation
|
|
STARWEST 2008: Test Estimation: Painful or Painless? As an experienced test manager, Lloyd Roden believes that test estimation is one of the most difficult aspects of test management. You must deal with many unknowns, including dependencies on development activities and the variable quality of the software you test. Lloyd presents seven proven ways he has used to estimate test effort. Some are easy and quick but prone to abuse; others are more detailed and complex but may be more accurate. Lloyd discusses FIA (finger in the air), formula/percentage, historical reference, Parkinson's Law vs. pricing, work breakdown structures, estimation models, and assessment estimation. He shares spreadsheet templates and utilities that you can use and take back to help you improve your estimations. By the end of this session, you might just be thinking that the once painful experience of test estimation can, in fact, be painless.
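|
Two of the simpler methods Lloyd names, formula/percentage and historical reference, reduce to small calculations. The sketch below is an illustration under assumed figures, not one of Lloyd's spreadsheet templates; the 35% ratio in particular is an assumption, not a recommendation.

```python
# Hedged illustration of two estimation methods; all numbers are invented.

def percentage_estimate(dev_effort_days: float, test_pct: float = 0.35) -> float:
    """Formula/percentage method: test effort as a fixed fraction of
    development effort. The 0.35 default is an assumed ratio."""
    return dev_effort_days * test_pct

def historical_estimate(past_test_days: float, past_size: float,
                        new_size: float) -> float:
    """Historical reference: scale a past project's test effort by the
    relative size of the new project (e.g., requirements count)."""
    return past_test_days * (new_size / past_size)

print(percentage_estimate(200))           # 70.0 test days from 200 dev days
print(historical_estimate(60, 150, 200))  # 80.0 days, scaled from history
```

Both methods are quick, which is exactly why the session notes that such approaches are prone to abuse.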
|
Lloyd Roden, Grove Consultants
|
|
STARWEST 2008: The Case Against Test Cases A test case is a kind of container. You already know that counting the containers in a supermarket would tell you little about the value of the food they contain. So, why do we count test cases executed as a measure of testing's value? The impact and value a test case actually has varies greatly from one to the next. In many cases, the percentage of test cases passing or failing reveals nothing about the reliability or quality of the software under test. Managers and other non-testers love test cases because they provide the illusion of both control and value for money spent. However, that doesn't mean testers have to go along with the deceit. James Bach stopped managing testing using test cases long ago and switched to test activities, test sessions, risk areas, and coverage areas to measure the value of his testing. Join James as he explains how you can make the switch, and why you should.
|
James Bach, Satisfice, Inc.
|
|
Automate API White-box Tests with Windows PowerShell Although a myriad of testing tools have emerged over the years, only a few focus on API testing for Windows-based applications. Nikhil Bhandari describes how to automate these types of software tests with Windows PowerShell, the free command line shell and scripting language. Unlike other scripting shells, PowerShell works with WMI, XML, ADO, COM, and .NET objects as well as data stores such as the file system, registry, and certificates. With PowerShell, you can easily develop frameworks for testing (unit, functional, regression, performance, deployment, etc.) and integrate them into a single, consistent overall automation environment. With PowerShell, you can develop scripts to check logs, events, and process status; manage the registry and file system; and more. Use it to parse XML documents and other test files.
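|
The session itself works in PowerShell; as a rough illustration of the kind of checks it describes (parsing an XML test file, scanning a log for failures), here is the same shape of script sketched in Python. The XML tags, log markers, and sample data are all hypothetical.

```python
# Illustrative only: parse an XML test definition and scan log lines for
# errors. The tag name ("testcase") and "ERROR" marker are assumptions.
import xml.etree.ElementTree as ET

SAMPLE_XML = """<suite>
  <testcase name="login_ok"/>
  <testcase name="login_bad_password"/>
</suite>"""

def expected_cases(xml_text: str) -> list[str]:
    """Collect expected test-case names from an XML test definition."""
    root = ET.fromstring(xml_text)
    return [case.get("name") for case in root.iter("testcase")]

def log_has_errors(lines: list[str]) -> bool:
    """Report whether any log line contains an error marker."""
    return any("ERROR" in line for line in lines)

print(expected_cases(SAMPLE_XML))                       # ['login_ok', 'login_bad_password']
print(log_has_errors(["INFO start", "ERROR timeout"]))  # True
```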
|
Nikhil Bhandari, Intuit
|
|
STARWEST 2008: Understanding Test Coverage Test coverage of application functionality is often poorly understood and always hard to measure. If they do it at all, many testers express coverage in terms of numbers, as a percentage or proportion, but a percentage of what? When we test, we develop two parallel stories. The "product story" is what we know and can infer about the software product: important information about how it works and how it might fail. The "testing story" is how we modeled the testing space, the oracles that we used, and the extent to which we configured, operated, observed, and evaluated the product. To understand test coverage, we must know what we did not test and that what we did test was good enough.
|
Michael Bolton, DevelopSense
|
|
Great Test Teams Don't Just Happen Test teams are just groups of people who work on projects together. But how do great test teams become great? More importantly, how can you lead your team to greatness? Jane Fraser describes the changes she made after several people on her testing staff asked to move out of testing and into other groups (production and engineering) and how helping them has improved the whole team and made Jane a much better leader. Join Jane as she shares her team's journey toward greatness. She started by getting to really know the people on the team: what makes them tick, how they react to situations, what excites them, and what makes them feel good and bad. She discovered the questions to ask and the behaviors to observe that will give you the insight you need to lead.
|
Jane Fraser, Electronic Arts
|
|
Fun with Regulated Testing Does your test process need to pass regulatory audits (FDA, SOX, ISO, etc.)? Do you find that an endless queue of documentation and maintenance is choking your ability to do actual testing? Is your team losing good testers due to boredom? With the right methods and attitude, you can do interesting and valuable testing while passing a process audit with flying colors. It may be easier than you think to incorporate exploratory techniques, test automation, test management tools, and iterative test design into your regulated process. You'll be able to find better bugs more quickly and keep those pesky auditors happy at the same time. John McConda shares how he uses exploratory testing with screen recording tools to produce the objective evidence auditors crave. He explains how to optimize your test management tools to preserve and confidently present accountability and traceability data.
|
John McConda, Mobius Test Labs
|
|
Adventures with Test Monkeys Most test automation focuses on regression testing, repeating the same sequence of tests to reveal unexpected behavior. Despite its many advantages, this traditional test automation approach has limitations and often misses serious defects in the software. John Fodeh describes "test monkeys," automated testing that employs random inputs to exercise the software under test. Unlike regression test suites, test monkeys explore the software in a new way each time a test case executes and offer the promise of finding new and different types of defects. The good news is that test monkey automation is easy to develop and maintain and can be used early in development before the software is stable. Join John to discover different approaches you can take to implement test monkeys, depending on the desired "intelligence" level.
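|
At its simplest "intelligence" level, a test monkey is only a loop over random inputs. The sketch below is a minimal, hypothetical example, not John's implementation: the system under test is a stand-in function, and the interesting detail is recording the random seed so a failing run can be reproduced.

```python
# A minimal test monkey: random inputs each run, failures logged with the
# inputs that triggered them. The SUT here is an invented stand-in.
import random

def system_under_test(x, y):
    """Stand-in for the real software under test."""
    return x / y  # crashes when y == 0, exactly what a monkey can stumble on

def run_monkey(iterations=1000, seed=None):
    rng = random.Random(seed)  # keep the seed so a failing run is repeatable
    for i in range(iterations):
        x, y = rng.randint(-100, 100), rng.randint(-100, 100)
        try:
            system_under_test(x, y)
        except Exception as exc:
            print(f"iteration {i}: inputs ({x}, {y}) raised {exc!r}")

run_monkey(seed=2008)
```

Because the inputs change on every unseeded run, each execution explores different paths; the seeded variant trades that novelty for reproducibility when you need to chase a specific failure.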
|
John Fodeh, Hewlett-Packard
|
|
Using Failure Modes to Power Up Your Testing When a tester uncovers a defect, it usually gets fixed. The tester validates the fix and may add the test to a regression test suite. Often, both the test and defect are then forgotten. Not so fast: defects hold clues about where other defects may be hiding and often can help the team learn to not make the same mistake again. Dawn Haynes explores methods you can use to generate new test ideas and improve software reliability at the same time. Learn to use powerful analysis tools, including FMEA (failure modes and effects analysis) and cause/effect graphing. Go further with these techniques by employing fault injection and forensically analyzing bugs that customers find. Discover ways to correct the cause of a problem rather than submitting a "single instance defect" that will result in a "single instance patch" that fixes one problem and does nothing to prevent new ones.
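|
FMEA is at heart a prioritization table: each failure mode is scored for severity, occurrence, and detection, and the product of the three, the risk priority number (RPN), ranks where to aim new tests first. Below is a minimal sketch, assuming the common 1-10 scales; the failure modes listed are invented.

```python
# FMEA bookkeeping sketch; scales and entries are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (certain to be caught) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("lost DB connection mid-transaction", 9, 3, 4),
    FailureMode("stale cache served after an update", 5, 6, 7),
]

# Direct new test ideas at the highest-RPN failure modes first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}: {m.description}")
```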
|
Dawn Haynes, PerfTestPlus, Inc.
|
|
The Myth of Risk Management Although test managers are tasked with helping manage project risks, risk management practices used on most software projects produce only an illusion of safety. Many software development risks cannot be managed because they are unknown, unquantifiable, uncontrollable, or unmentionable. Rather than planning only for risks that have previously occurred, project and test managers must begin with the assumption that something new will impact their project. The secret to effective risk management is to create mechanisms that provide for early detection of, and quick response to, such events, not simply to create checklists of problems you've previously seen. Pete McBreen presents risk "insurance" as a better alternative to classic risk management.
|
Pete McBreen, Software Craftsmanship Inc.
|