Conference Presentations

Load Testing New Web Technologies

Web 2.0 applications represent a major evolution in Web development. These applications are based on new technologies such as AJAX, RIA, Web services, and SOA. Unless you, as a tester, understand the inner workings of these technologies, you cannot adequately test their functionality or prepare realistic and valid performance tests. Eran Witkon explains these new Web technologies and shows how to design and implement appropriate load tests, execute them, and interpret the results. For example, Eran describes why the classic "client requests a page and then waits" model used in performance testing the old Web does not adequately represent AJAX processing, in which only parts of pages are requested and one request need not complete before another is initiated.
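The difference between the two models can be sketched in a few lines. This is a minimal illustration, not taken from the talk: the fragment names and latencies are invented, and `time.sleep` stands in for real network requests.

```python
import concurrent.futures
import time

# Hypothetical page fragments an AJAX front end might request independently;
# the names and latencies here are illustrative, not from a real application.
FRAGMENTS = {"header": 0.05, "news_feed": 0.15, "stock_ticker": 0.10}

def fetch_fragment(name, latency):
    """Simulate one partial-page request."""
    time.sleep(latency)
    return name

def classic_model():
    """Old-Web model: request, wait for completion, then request the next."""
    return [fetch_fragment(n, t) for n, t in FRAGMENTS.items()]

def ajax_model():
    """AJAX model: fragment requests overlap; none waits for another."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(fetch_fragment, n, t) for n, t in FRAGMENTS.items()]
        return [f.result() for f in futures]

start = time.perf_counter()
classic_model()
sequential_time = time.perf_counter() - start

start = time.perf_counter()
ajax_model()
concurrent_time = time.perf_counter() - start

# Overlapped requests finish in roughly the time of the slowest fragment,
# so a load model that assumes sequential page fetches overstates duration.
print(sequential_time, concurrent_time)
```

A load test built on the sequential assumption would both mispredict response times and generate an unrealistic request arrival pattern on the server.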

Eran Witkon, RadView Software
Apodora: An Open Source Framework for Web Testing

Are you frustrated with automated test scripts that require constant maintenance and don't seem to be worth the effort? Seth Southern introduces Apodora, a new open source framework for automating functional testing of Web applications. Apodora was released under the GNU General Public License to the open source community with the goal of collaboratively creating a superior, free, automated Web testing tool. The key benefit of Apodora is to help you reduce the maintenance and overhead of test automation scripts. Seth introduces you to the open source project, demonstrates the use of Apodora, and highlights some of the key differences between Apodora and other test automation tools currently available. Seth shows how Apodora can save you time when the software under test changes and scripts require maintenance.

  • Web test tool gaps that Apodora fills
  • Features of Apodora for functional Web testing
Seth Southern, ACULIS - Software Development Services
Emotional Test Oracles

An oracle is a heuristic principle or mechanism by which we may recognize a problem. Traditionally, discussion of oracles within testing has focused on two references: (1) requirements specifications that provide us with the "correct" answer and (2) algorithms we execute to check our answers. Testing textbooks talk about identifying a bug by noting differences between actual results and those references. Yet high-quality software is not created merely by analyzing conformance to specifications or matching some algorithm. It is about satisfying, and not disappointing, the people who interact with the product every day. Michael Bolton introduces the idea that our emotional reactions to programs as we test them (frustration, confusion, annoyance, impatience, depression, boredom, irritation, curiosity, and amusement) are important triggers for noticing real problems that matter to real people.

Michael Bolton, DevelopSense
A "Framework for Test" for Repeatable Success

Do you have defined and documented processes that describe all the activities and deliverables for testing? Do you have a documented road map for repeating test project successes? The test group at Kaiser found themselves overwhelmed with too many projects, understaffed on most projects, lacking repeatable procedures, and without testing tools. Randy Slade describes how they identified the needed test processes and tools, set priorities, developed new procedures, and implemented them. Their "Framework for Testing" has become the blueprint for all testing activities. Its flexibility makes it applicable to software projects of all types and sizes. It guides testers and managers from A to Z in performing their duties by describing the "what, when, how, and why" of all testing activities and deliverables.

  • Five phases of a software testing life-cycle
  • How to develop, pilot, and evaluate new processes
Randy Slade, Kaiser Permanente
Lightning Talks: A Potpourri of 5-Minute Presentations

Lightning Talks are nine five-minute talks in one conference session. Lightning Talks represent a much smaller investment of time than track speaking and offer the chance to try conference speaking without the heavy commitment. Lightning Talks are an opportunity to present your single biggest bang-for-the-buck idea quickly. Use this as an opportunity to give a first-time talk or to present a new topic for the first time. Maybe you just want to ask a question, invite people to help you with your project, boast about something you did, or tell a short cautionary story. These things are all interesting and worth talking about, but there might not be enough to say about them to fill a full conference session.

Dawn Haynes, PerfTestPlus, Inc.
Testing SOA Applications: What's New, What's Not

The Service Oriented Architecture (SOA) approach to building applications is rapidly approaching critical mass. With this architecture comes a new set of challenges for testers. Brian Bryson demystifies the testing practices needed to ensure SOA application quality. He begins by building and deploying a Web service to introduce you to SOA. Brian then examines the requirements and risks of SOA quality management, including functional, performance, and security testing challenges. Brian demonstrates testing a Web service using both open source and commercial software. Throughout his demonstration, Brian discusses the new skills and strategies, such as a strong focus on unit testing, that are required for SOA testing and the more common strategies, such as a strong focus on requirements-based testing, that still apply in the new world of SOA.
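One of the functional challenges the abstract alludes to is that a Web service has no user interface: the "actual result" is a structured message. A minimal sketch of such a check, assuming a SOAP-style response (the `GetQuoteResponse` operation and its payload are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A canned SOAP-style response such as a Web service under test might return;
# the operation name and data values are hypothetical.
RESPONSE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuoteResponse>
      <Symbol>IBM</Symbol>
      <Price>104.50</Price>
    </GetQuoteResponse>
  </soap:Body>
</soap:Envelope>
"""

def check_quote_response(xml_text):
    """Functional check: the response parses and carries the expected fields."""
    root = ET.fromstring(xml_text)
    body = root.find("{http://schemas.xmlsoap.org/soap/envelope/}Body")
    quote = body.find("GetQuoteResponse")
    symbol = quote.findtext("Symbol")
    price = float(quote.findtext("Price"))
    return symbol, price

symbol, price = check_quote_response(RESPONSE)
assert symbol == "IBM" and price > 0
```

The same pattern (parse the message, then assert on its fields) underlies most SOA functional test tools, open source or commercial.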

  • The test and quality ramifications of the SOA
Brian Bryson, IBM Rational
The Ten Most Important Automation Questions-and Answers

As test automation becomes more complex, many important strategic issues emerge. Mukesh Mulchandani shares key questions you must answer before you begin a test automation project or an improvement program. He begins with the elementary questions: Should I automate now or wait? What specifically should I automate? What approach should I adopt? Mukesh then considers more complex questions: vertical vs. horizontal automation, handling static and dynamic data, and testing dynamic objects. The final questions relate to future automation trends: moving beyond keyword automation technology, making automation scripts extensible, introducing test-driven development, starting automation when the application is not yet stable, and offering the automation scripts to clients.

Mukesh Mulchandani and Krishna Iyer, ZenTEST Labs
Improving Testing with Quality Stubs

Many testers use stubs: simple code modules that simulate the behavior of much more complicated components. As components and their interfaces evolve, it is easy to overlook the need for the associated stubs to evolve with them. Lee Clifford explains that the stubs Virgin Mobile previously used to simulate the functionality of third-party software were basic and static, simply returning hard-coded data values. While adequate, the stubs were difficult to maintain. So Virgin Mobile's testers decided to design, build, test, and deploy their own smart "quality stubs," not only for use by the test team but also for development and performance testing. The testers created fully configurable and programmable stubs that interface their systems to third-party products. The key advantage is that anyone on the test team can update the stubs with minimal cost and without the need to learn a programming language.
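The idea of moving stub behavior out of code and into editable configuration can be sketched briefly. This is a hypothetical illustration, not Virgin Mobile's implementation: the operation names and data below are invented.

```python
import json

# Minimal sketch of a "quality stub": canned responses live in configuration
# data rather than in code, so a tester can change them without programming.
# The operations and values are hypothetical.
STUB_CONFIG = json.loads("""
{
  "check_balance": {"balance": 12.50, "currency": "GBP"},
  "top_up":        {"status": "OK", "new_balance": 22.50}
}
""")

class ConfigurableStub:
    """Stands in for a third-party service; behavior is driven by config."""

    def __init__(self, config):
        self.config = config

    def call(self, operation, **params):
        if operation not in self.config:
            raise KeyError(f"stub has no canned response for {operation!r}")
        return self.config[operation]

stub = ConfigurableStub(STUB_CONFIG)
print(stub.call("check_balance"))

# Changing the stub's behavior needs no code change: just edit the data.
stub.config["check_balance"] = {"balance": 0.0, "currency": "GBP"}
```

A hard-coded stub would require a developer and a redeployment for the same change; here the test team edits a data file.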

Lee Clifford, Virgin Mobile UK
The Secrets of Faking a Test Project

It's never been easier to fool your manager into thinking that you're doing a great job testing! In his presentation, Jonathan Kohl covers today's most respected test fakery. These techniques include misleading test case metrics, vapid but impressive-looking test documentation, repeatedly running old tests "just in case they find something," carefully maintaining obsolete tests, methodology doublespeak, endless tinkering with expensive test automation tools, and taking credit for a great product that would have been great even if no one had tested it. Jonathan also covers best practices for blame deflection. By the time you're through, your executive management won't know whether to fire the programmers or the customers. But, it won't be you. (Disclaimer: It could be you if an offshore company fakes it more cheaply than you do.)

  • Cautionary true stories of test fakery, both purposeful and accidental
Jonathan Kohl, Kohl Concepts Inc.
Load Generation Capabilities for Effective Performance Testing

To carry out performance testing of Web applications, you must ensure that sufficiently powerful hardware is available to generate the required load levels. At the same time, you need to avoid investing in unnecessarily expensive hardware "just to be sure." A valid model for estimating the load generation capabilities of performance testing tools on different hardware configurations will help you generate the load you need with the minimum hardware. Rajeev Joshi believes the models provided by most tool vendors are too simplistic for practical use. In fact, in addition to the hardware configuration, the load generation capabilities of any tool are a function of many factors: the number of users, frequency and time distribution of requests, data volume, and think time. Rajeev presents a model for the open source load generation tool JMeter, which you can adapt for any performance testing tool.
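A first-order version of such a model follows from Little's Law: the sustained request rate from N simulated users is roughly N divided by the per-request cycle time (think time plus response time). This sketch is a simplification in the spirit of the abstract, not the model presented in the talk, and the numbers are illustrative.

```python
# First-order load-generation estimate via Little's Law:
#   throughput = users / (think_time + response_time)
# A fuller model would also weigh request mix, data volume, and the
# load generator's own CPU, memory, and network limits.

def estimated_throughput(users, think_time_s, response_time_s):
    """Requests per second that N users generate with the given cycle time."""
    return users / (think_time_s + response_time_s)

def users_needed(target_rps, think_time_s, response_time_s):
    """Invert the model: users required to reach a target request rate."""
    return target_rps * (think_time_s + response_time_s)

# Example: 500 users with 4 s think time and 1 s average response time
# cycle every 5 s, so they generate about 100 requests per second.
print(estimated_throughput(500, 4.0, 1.0))
```

Even this crude estimate helps with hardware sizing: it tells you how many simulated users one load generator must sustain to hit a target request rate, before the finer factors the talk covers are layered on.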

John Scarborough, Aztecsoft

AgileConnection is a TechWell community.
