Conference Presentations

State-driven Testing: An Innovation in UI Test Automation

Keyword-driven testing is an accepted UI test automation technique used by mature organizations to overcome the disadvantages of record/playback test automation. Unfortunately, keyword-driven testing has drawbacks in maintenance and complexity because applications can easily require thousands of automation keywords. Navigating and constructing test cases from so many keywords is extremely cumbersome and can be impractical. Join Dietmar Strasser to learn how state-driven testing employs an application state-transition model as its basis for UI testing to address the disadvantages of keyword-driven testing. By defining the states and transitions of UI objects in a model, you can minimize the set of allowed UI actions at a specific point in a test script, requiring fewer keywords.
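
The abstract stops short of showing what such a model looks like. As a rough illustration only (the dialog, states, and actions below are invented, not taken from the talk), a state-transition table in Python can constrain which UI actions a test script may use at each point:

    # Hypothetical state-transition model for a small login workflow.
    # Only the actions listed for the current state are offered to the
    # test designer, which is how the approach keeps the keyword set small.
    TRANSITIONS = {
        "LoginDialog":        {"enter_credentials": "CredentialsEntered"},
        "CredentialsEntered": {"click_ok": "MainWindow", "click_cancel": "LoginDialog"},
        "MainWindow":         {"open_settings": "SettingsDialog", "log_out": "LoginDialog"},
        "SettingsDialog":     {"click_close": "MainWindow"},
    }

    def allowed_actions(state):
        """Return the UI actions the model permits in the given state."""
        return sorted(TRANSITIONS.get(state, {}))

    def run(script, state="LoginDialog"):
        """Replay a test script, rejecting any action the model does not allow."""
        for action in script:
            if action not in TRANSITIONS[state]:
                raise ValueError(f"{action!r} not allowed in {state!r}; "
                                 f"allowed: {allowed_actions(state)}")
            state = TRANSITIONS[state][action]
        return state

    print(run(["enter_credentials", "click_ok", "open_settings", "click_close"]))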

Dietmar Strasser, Borland (a Micro Focus company)
Streamlining the Developer-Tester Workflow

The islands that many development and test teams live on seem far apart at times. Developers become frustrated by defect reports with insufficient data to reproduce a problem; testers are constantly retesting the same application and having to reopen "fixed" defects. Chris Menegay focuses on two areas for improvement that will help both teams: a better build process to deliver a more testable application to the test team and a defect reporting process that delivers better defect data back to the developers. From the build perspective, he explores ways for the development team to identify which requirements are complete and which defects have been fixed, and to guide testers on which test cases to execute. Chris details the components of a good defect report, illustrating ways for testers to provide accurate reproduction steps, demonstrating video capture tools, examining valuable log files, and discussing test environment issues.

Chris Menegay, Notion Solutions, Inc.
STARWEST 2010: Quality Metrics for Testers: Evaluating Our Products, Evaluating Ourselves

Finally, most businesses realize that a final system testing "phase" in the project cannot be used as the catch-all for software quality problems. Many organizations are changing development methodologies or creating organization-wide initiatives that drive quality techniques into all aspects of development. So, how do you know that a quality initiative is working or where the most improvement effort is needed? Adrian O’Leary shares examples of quality improvement programs he has observed and illustrates how they are using defect data from various test phases to guide their efforts. See how measurements of defect leakage help these organizations gauge the efficiency and effectiveness of all development activities. Adrian identifies key "quick hit" recommendations for defect containment, including the use of static testing, traceability, and more.
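
The abstract does not define defect leakage, but one common formulation compares the defects that escape to the field with the total eventually found. A minimal Python sketch (the phase names and counts are invented for illustration):

    # Hypothetical defect counts by the phase in which each defect was found.
    defects_found = {
        "requirements review": 12,
        "code review": 30,
        "system test": 45,
        "production": 8,   # defects that leaked past every test phase
    }

    total = sum(defects_found.values())
    leakage = defects_found["production"] / total   # share that escaped to the field
    containment = 1 - leakage                       # share caught before release

    print(f"Defect leakage to production: {leakage:.1%}")
    print(f"Overall defect containment:   {containment:.1%}")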

Lee Copeland, Software Quality Engineering
Model-based Testing: The Key to Testing Industrialization

Customers who want “more, faster, cheaper” put pressure on the development schedule, usually leaving less time for testing. The solution is to parallelize testing and development so that they proceed together. But how, especially when the requirements and software are constantly changing? Model-based testing (MBT) distills the testing effort down to the essential business processes and requirements, capitalizing on abstractions to reduce the costs of change and improve test data management. MBT facilitates a continuous and systematic transformation from business requirements to an automated or manual test repository. MBT permits reuse of the same test design for both integration testing (end-to-end and system-to-system) and functional testing (system, acceptance, and regression).
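
As a toy illustration of the idea (not Smartesting's tooling or method), test cases can be derived mechanically from a behavioral model by walking its transitions; the order-workflow model below is invented:

    # Hypothetical state model of an order workflow: state -> [(action, next state)].
    MODEL = {
        "Created":   [("submit", "Submitted"), ("cancel", "Cancelled")],
        "Submitted": [("approve", "Approved"), ("reject", "Created")],
        "Approved":  [("ship", "Shipped")],
        "Cancelled": [],
        "Shipped":   [],
    }

    def generate_tests(state="Created", path=()):
        """Yield every action sequence from the start state to a terminal state,
        skipping repeated actions so the walk terminates."""
        if not MODEL[state]:
            yield list(path)
            return
        for action, nxt in MODEL[state]:
            if action not in path:
                yield from generate_tests(nxt, path + (action,))

    for i, test in enumerate(generate_tests(), 1):
        print(f"Test {i}: {' -> '.join(test)}")

When the model changes, the test suite is regenerated rather than edited by hand, which is where the reduced cost of change comes from.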

Bruno Legeard, Smartesting
STARWEST 2010: Automating Embedded System Testing

Many testers believe that automating the testing of embedded and mobile phone-based systems is prohibitively difficult. By approaching the problem from a test design perspective and using that design to drive the automation initiative, William Coleman demystifies automated testing of embedded systems. He draws on experiences gained on a large-scale testing project for a leading smart-phone platform and a Windows CE embedded automotive testing platform. William describes the technical side of the solution: how to set up a tethered automation agent to expose the GUI and drive tests at the device layer. Learn how to couple this technology solution with a test design methodology that helps even non-technical testers participate in the automation development and execution. Take back a new approach to achieve large-scale automation coverage that is easily maintainable over the long term.
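
The abstract does not describe the agent's interface; purely as an invented sketch of the shape of such a setup, the host-side test code might talk to a device-resident agent over a socket, addressing controls by name rather than by screen coordinates:

    import json
    import socket

    class DeviceAgent:
        """Host-side proxy for a hypothetical agent running on the tethered device."""

        def __init__(self, host="192.168.0.50", port=5555):   # placeholder address
            self.sock = socket.create_connection((host, port))
            self.reader = self.sock.makefile("r", encoding="utf-8")

        def send(self, command, **params):
            """Send one JSON command and return the agent's JSON reply."""
            self.sock.sendall((json.dumps({"cmd": command, **params}) + "\n").encode())
            return json.loads(self.reader.readline())

    # A test step expressed against the device layer rather than pixel positions.
    agent = DeviceAgent()
    agent.send("type", control="NumberField", text="5551234")
    agent.send("tap", control="Dial")
    assert agent.send("read", control="CallStatus").get("value") == "Dialing"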

William Coleman, LogiGear Corporation
Using the Amazon Cloud to Accelerate Testing

Virtualization technologies have been a great boon to test labs everywhere. With the Amazon Elastic Compute Cloud (EC2), these same benefits are available to everyone, without the need to purchase and maintain your own hardware. Once you master the tricks and tools of this new technology, you too can instantly have limitless capacity at your disposal. Randy Hayes demonstrates how to use the AWS Management Console to create virtual test machines from Amazon Machine Images (AMIs), use S3 storage services, handle Elastic IP addresses, and leverage these services for functional testing, load testing, defect tracking, and other common testing functions. Randy explains the Amazon Virtual Private Cloud, which allows EC2 cloud instances to be configured to run inside your firewall to test inward-facing applications. Gain access to a pre-configured AMI with open source testing tools and other utilities for quickly migrating your test lab to EC2.
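
The session centers on the AWS Management Console, but the same steps can be scripted. A minimal sketch using the boto3 SDK (the AMI ID, key pair, instance type, and region below are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one instance from a machine image pre-loaded with the test tools.
    launched = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.medium",
        KeyName="test-lab-key",            # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = launched["Instances"][0]["InstanceId"]

    # Wait for the machine to come up, then read its public address.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    described = ec2.describe_instances(InstanceIds=[instance_id])
    address = described["Reservations"][0]["Instances"][0].get("PublicIpAddress")
    print(f"Test machine {instance_id} is running at {address}")

    # Terminate the instance when the test run ends so you pay only for what you use.
    ec2.terminate_instances(InstanceIds=[instance_id])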

Randy Hayes, Capacity Calibration, Inc.
Automating Test Design in Agile Development Environments

How does model-based automated test design (ATD) fit with agile methods and developer test-driven development (TDD)? The answer is “Superbly!”, and Antti Huima explains why and how. Because ATD and TDD both focus on responsive processes, handling evolving requirements, emergent designs, and higher product quality, their goals are strongly aligned. Whereas TDD tests focus on the unit level, ATD works at higher test levels, supporting and enhancing product quality and speeding up development. ATD dramatically reduces the time it takes to design tests within an iterative agile process and makes tests available faster, especially as development proceeds through multiple iterations. Antti shatters the common misconception that model-based methods are rigid and formal and cannot be employed in the rapid, fluid setting of agile environments.

Antti Huima, Conformiq, Inc.
Reducing the Testing Cycle Time through Process Analysis

Because system testing usually lies between development and release to the customer (and, hopefully, more business value or profit), every test team is asked to test faster and finish sooner. Reducing test duration can be especially difficult because many of the factors that drive the test schedule are beyond our control. John Ruberto tells the story of how his team cut the system test cycle time from twelve weeks down to four. John shares how they first analyzed the overall test process to create a model of the test duration. This model decomposed the test schedule into six factors: test cycles, number of tests, defects, the rates at which tests were executed and defects handled, tester skills, and the number of testers. By decomposing the test cycle into these variables, the team identified six smaller, and thus easier, problems to solve.
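
The abstract names the factors but not the exact form of the model, so the following is only an assumed illustration of how such a decomposition might be expressed in Python; the formula and all of the numbers are invented, chosen merely to echo the twelve-to-four-week result:

    def cycle_time_weeks(cycles, tests, defects, tests_per_tester_per_week,
                         defects_handled_per_week, testers, skill_factor=1.0):
        """Assumed model: execution time plus defect-handling time per cycle,
        multiplied by the number of cycles."""
        execution = tests / (testers * tests_per_tester_per_week * skill_factor)
        defect_handling = defects / defects_handled_per_week
        return cycles * (execution + defect_handling)

    before = cycle_time_weeks(cycles=2, tests=1000, defects=200,
                              tests_per_tester_per_week=50,
                              defects_handled_per_week=100, testers=5)
    after = cycle_time_weeks(cycles=1, tests=1000, defects=200,
                             tests_per_tester_per_week=100,
                             defects_handled_per_week=100, testers=5)
    print(f"Before: {before:.0f} weeks, after: {after:.0f} weeks")   # 12 and 4

Each variable then becomes a separate, smaller target for improvement, which is the point of the decomposition.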

John Ruberto, Intuit Inc.
Focusing with Clear Test Objectives

Frustrated with your team’s testing results, sometimes great, sometimes lacking? Do you consistently overpromise and underdeliver? If these situations sound familiar, you may be suffering from the ills of UTO (Unclear Test Objectives). Clearly defining test objectives is vital to your project’s success; it’s also seriously hard to get right. Test objectives are often driven by habit (“Let’s copy and paste the last set of objectives”), by lack of understanding (“Let’s use whatever the requirements say”), or by outside forces (“Let’s just do what the user wants”). Sharon Robson shares the structured approach she uses to define test objectives, including key test drivers, approaches, processes, test levels, test types, focus, techniques, teams, environments, and tools. Sharon illustrates how to measure, evaluate, compare, and balance these often conflicting factors to ensure that you have the right objectives for your test project.

Sharon Robson, Software Education
STARWEST 2010: Tour-based Testing: The Hacker's Landmark Tour

When visiting a new city, people often take an organized tour, going from landmark to landmark to get an overview of the town. Taking a “tour” of an application, going from function to function, is a good way to break down the testing effort into manageable chunks. Not only is this approach useful in functional testing, it’s also effective for security testing. Rafal Los takes you inside the hacker’s world, identifying the landmarks hackers target within applications and showing you how to find the defects they seek out. Learn what “landmarks” are, how to identify them from functional specifications, and how to tailor negative testing strategies to different landmark categories. Test teams, already choked for time and resources and now saddled with security testing, will learn how to pinpoint, from the mountains of vulnerabilities often uncovered in security testing, the defect that could compromise the entire application.
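
By way of an invented example only (the login endpoint, URL, and payloads below are not from the talk), a negative test tailored to one landmark, the login form, might look like this:

    import requests   # assumed HTTP client; the target URL is a placeholder

    HOSTILE_INPUTS = [
        "' OR '1'='1",                 # SQL injection probe
        "<script>alert(1)</script>",   # reflected XSS probe
        "A" * 10_000,                  # oversized-input probe
    ]

    def login(username, password):
        return requests.post("https://example.test/login",
                             data={"user": username, "pass": password},
                             timeout=5)

    def test_login_landmark_rejects_hostile_input():
        for payload in HOSTILE_INPUTS:
            response = login(payload, payload)
            # A hardened application should refuse these cleanly: no crash,
            # no server error, and certainly no successful sign-in.
            assert response.status_code in (400, 401, 403)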

Rafal Los, Hewlett-Packard
