|
She Said, He Heard: Challenges and Triumphs in Global Outsourcing You are asked to put together a QA group in India that will work in tandem with your US team to provide twenty-four-hour support for a global financial company. What did Judy Hallstrom, Manager of Testing Services, Indian Project Manager Ravi Sekhar Reddy, and their group accomplish? The successful implementation of a fully integrated QA function, from scratch, in less than one year with minimal infrastructure. Walk through the challenges and triumphs as they built their unit from the ground up with no outsourcing service company support. With obstacles ranging from leased equipment, inadequate infrastructure, and shared office space to training issues, visas, Indian Customs, and much more, Judy and Ravi have seen and overcome them all. Now, two years later, they have a global QA team with processes that meet industry-recognized quality standards.
- Working with a sourcing partner vs. going it alone
|
Judy Hallstrom, Franklin Templeton Investments
|
|
You'll Be Surprised by the Things Testers Miss Why do some bugs lie undetected until live operation of the software and then almost immediately bite us? Drawing on instances of problems that were obvious in production but missed, or nearly missed, in testing, James Lyndsay can help you catch more bugs starting the day you return to work. James first describes bugs not found because too little time is spent on testing. Then, looking at testers' knowledge, he discusses bugs missed because of requirements issues or because testers did not understand the underlying technology's potential for failure. In the most substantial part of the session, James looks at bugs missed because they could not be observed or because testers skimmed over the issue. Learn to recognize each type of testing problem and discover ways to mitigate or eliminate it.
- Coding errors that are hard to spot with typical tests
- Working with emergent behaviors and unexpected risks
|
James Lyndsay, Workroom Productions
|
|
The Last Presentation on Test Estimation You Will Ever Need to Attend Estimating the test effort for a project has always been a thorn in the test manager's side. How do you get close to something reasonable when there are so many variables to consider? Sometimes, estimating test effort seems to be no more accurate than a finger in the wind. The "testimation" process, as Geoff Horne likes to call it, can work for you if you do it right. Learn where to start, the steps involved, how to refine estimates, ways to sell the process and the result to management, and how to use the process to develop a test plan that resembles reality. Geoff demonstrates a spreadsheet-based tool that he uses to formulate his "testimations" and shows you how to use it at each step of the process.
- The different variables that need to be considered
- How to convert the "testimation" into a workable test schedule
- A spreadsheet template to help you estimate test effort
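A minimal sketch of the kind of calculation such an estimation spreadsheet might encode. The variables, factors, and numbers below are illustrative assumptions only, not Geoff's actual model:

```python
# Hypothetical sketch of a spreadsheet-style test effort estimate.
# Every factor here is an assumed example, not a published formula.

def estimate_test_effort(test_cases, hours_per_case, complexity_factor,
                         environment_overhead_hours, contingency_pct):
    """Return estimated effort in hours for a test phase."""
    execution = test_cases * hours_per_case * complexity_factor
    subtotal = execution + environment_overhead_hours
    return subtotal * (1 + contingency_pct / 100.0)

# Example: 120 cases at 0.75 h each, 1.3x complexity, 16 h setup, 20% buffer
effort = estimate_test_effort(120, 0.75, 1.3, 16, 20)
print(round(effort, 1))  # 159.6
```

The point of putting the estimate in a formula rather than a single gut-feel number is that each variable can be challenged and refined separately as the project unfolds.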
|
Geoff Horne, Geoff Horne Testing
|
|
Risk-Based Testing in Practice The testing community has been talking about risk-based testing for quite a while, and most projects now apply some sort of implicit risk-based testing approach. However, risk-based testing should be more than just brainstorming within the test team; it should be based on business drivers and business value. The test team is not the risk owner; the product's stakeholders are. It is our job to inform the stakeholders about risk-based decisions and provide visibility into product risk status. Erik discusses a real-world method for applying structured risk-based testing in most software projects. He describes how risk identification and analysis can be carried out in close cooperation with stakeholders. Join Erik to learn how the outcome of the risk analysis can, and should, be used in test projects in terms of differentiated test approaches.
|
Erik van Veenendaal, Improve Quality Services BV
|
|
Testing and the Flow of Value in Software Development High-quality software should be measured by the value it delivers to customers, and a high-quality software process should be measured by the continual flow of customer value. Modern processes have taught us that managing flow is all about the constraints restricting that flow. Testing, rather than being thought of as a conduit in that flow, is often perceived as an obstacle. It doesn't help that most testers struggle to answer the questions that their managers ask: What has and hasn't been tested? What do we need to test next? Where do we need to shift resources? If it works in the lab, why isn't it working on those production machines? Where do we need to fix the performance or security? The ability, or inability, to answer these questions can determine the success and budget of a test team as well as how it is valued by its organization.
|
Sam Guckenheimer, Microsoft
|
|
Model-Based Security Testing Preventing the release of exploitable software defects is critical for all applications. Traditional software testing approaches are insufficient, and generic tools are incapable of properly targeting your code. We need to detect these defects before going live, and we need a methodology for detection that is cost-efficient and practical. A model-based testing strategy can be applied directly to the security testing problem. Starting with very simple models, you can generate millions of relevant tests that can be executed in a matter of hours. Learn how to build and refine models to focus quickly on the defects that matter. Kyle Larsen shows you how to create a test oracle that can detect application-specific security defects: buffer overflows, uninitialized memory references, denial-of-service attacks, assertion failures, and memory leaks.
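To give a flavor of the approach, here is a tiny sketch of generating test sequences from a simple state-machine model. The model, its states, and its inputs are invented examples for illustration; they are not Kyle's models or tooling:

```python
# Illustrative only: enumerate input sequences that a small state-machine
# model accepts. Real model-based tools scale this idea to millions of tests.
from itertools import product

# Model: (state, input) -> next state, for a login-like protocol fragment.
MODEL = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_out", "login_bad"): "logged_out",
    ("logged_in", "logout"): "logged_out",
    ("logged_in", "request"): "logged_in",
}

def generate_sequences(start, inputs, length):
    """Return every input sequence of the given length the model accepts."""
    sequences = []
    for seq in product(inputs, repeat=length):
        state = start
        ok = True
        for inp in seq:
            nxt = MODEL.get((state, inp))
            if nxt is None:  # input undefined in this state: prune the walk
                ok = False
                break
            state = nxt
        if ok:
            sequences.append(seq)
    return sequences

tests = generate_sequences("logged_out",
                           ["login_ok", "login_bad", "logout", "request"], 2)
print(len(tests))  # 4
```

Each generated sequence would then be executed against the real application, with an oracle watching for the defect classes listed above.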
|
Kyle Larsen, Microsoft Corporation
|
|
Don't Whine - Build Your Own Test Tools The highly customized hardware-software system making up the new flight operations system for the world's largest airline did not lend itself to off-the-shelf tools for test automation. With a convergence of on-demand, highly available technologies and the requirement to make the new system compatible with hundreds of legacy applications, the test team was forced to build their own test software. Written in Java, these tools have helped increase test coverage and improve the efficiency of the test team. One tool compares the thirty-one-year-old legacy system with its new equivalent for undocumented differences. Clay Bailey will demonstrate these tools, including one that implements predictive randomization methods and another that decodes and manipulates hexadecimal bit string representations.
- Custom test tools for a unique systems environment
- Innovative ways to develop and use Java for writing test tools
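The airline's tools were written in Java and are not public; as a rough illustration of the bit-string idea only, here is a minimal Python sketch that decodes a hexadecimal field into its bit-string form and flips a single bit:

```python
# Assumed, simplified illustration of decoding and manipulating
# hexadecimal bit-string representations; not the actual IBM tooling.

def hex_to_bits(hex_str):
    """Decode a hex string into its full-width bit-string representation."""
    return bin(int(hex_str, 16))[2:].zfill(len(hex_str) * 4)

def set_bit(hex_str, position, value):
    """Return a new hex string with the bit at `position` (0 = MSB) set."""
    bits = list(hex_to_bits(hex_str))
    bits[position] = "1" if value else "0"
    return format(int("".join(bits), 2), "0{}X".format(len(hex_str)))

print(hex_to_bits("A3"))    # 10100011
print(set_bit("A3", 1, 1))  # E3
```

Being able to name and toggle individual bits like this is what makes field-level comparisons between a legacy system and its replacement tractable.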
|
Clay Bailey, IBM
|
|
Patterns for Reusable Test Cases You can think of Q-Patterns as a structured set of questions (tests) about the different aspects of a software application under test. They are questions about the system that are categorized, grouped, sorted, and saved for reuse. These Q-Pattern questions can be written ahead of time and stored in a repository of test case templates, developed for requirements and design reviews, or built in real time as a way to both guide and document exploratory testing sessions. See examples of Q-Patterns that Vipul Kocher has developed for error messages, combo boxes, login screens, and list handling. Learn how to associate related Q-Patterns and aggregate them into hierarchical and Web models. Take back the beginnings of Q-Patterns for your test team and organization.
- Sharable and reusable test case designs
- Templates to organize requirements and design reviews
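One way such a repository might be organized is sketched below. The structure, sample questions, and `expand_pattern` helper are assumptions based on the description above, not Vipul's actual templates:

```python
# Hypothetical Q-Pattern repository: categorized questions saved for
# reuse, with links between related patterns.

Q_PATTERNS = {
    "login_screen": {
        "category": "UI",
        "questions": [
            "What happens with an empty user name?",
            "Is the password masked and excluded from logs?",
            "Is the account locked after repeated failures?",
        ],
        "related": ["error_messages"],
    },
    "error_messages": {
        "category": "UI",
        "questions": [
            "Is the message understandable to the end user?",
            "Does it avoid leaking internal details?",
        ],
        "related": [],
    },
}

def expand_pattern(name, seen=None):
    """Collect a pattern's questions plus those of its related patterns."""
    seen = seen or set()
    if name in seen or name not in Q_PATTERNS:
        return []
    seen.add(name)
    questions = list(Q_PATTERNS[name]["questions"])
    for related in Q_PATTERNS[name]["related"]:
        questions += expand_pattern(related, seen)
    return questions

print(len(expand_pattern("login_screen")))  # 5
```

Following the `related` links is what turns isolated checklists into the hierarchical, aggregated models the session describes.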
|
Vipul Kocher, PureTesting
|
|
Acceptance Testing: What It Is and How To Do It Better - in Context When test engineers use the term "acceptance testing," they might be saying and thinking profoundly different things. Acceptance testing can mean one of at least a dozen approaches to the testing of a product and serve one or more of at least thirty different customer roles in a project. Tests and testing approaches that are appropriate in one context can be unacceptable, even disastrous, in another. When someone asks you to do user acceptance testing, what should you do? When should you do it? How do you determine success? Michael Bolton outlines the ways in which testers and test managers use context-driven thinking to better serve the mission of acceptance testing and develop skills to handle dramatically different testing situations. Apply your context in this interactive session to discover ways to improve your acceptance testing, and learn to use context-driven thinking in other areas, too.
|
Michael Bolton, DevelopSense
|
|
Security Testing: Are You a Deer in the Headlights? With frequent reports in the news of successful hacker attacks on Web sites, application security is no longer an afterthought. More than ever, organizations realize that security has to be a priority while applications are being developed, not after. Developers and QA professionals are learning that Web application security vulnerabilities must be treated like any other software defect. Organizations can save time and money by identifying and correcting these security defects early in the development process. Ryan English helps you overcome the "deer in the headlights" look when you are asked to begin testing applications for security issues. See real-world examples of company Web sites that have been hacked because of vulnerable applications and see how the attacks could have been avoided.
- Security defect categories and responsibility areas
|
Ryan English, SPI Dynamics Inc
|