Conference Presentations

Automated Unit Test Generation: Improve Quality Earlier

Are you tired of finding seemingly simple defects late in development? Do you detect the majority of defects during late-stage, formal testing? Are your development teams too resource-constrained to perform serious unit testing? Brian Robinson describes how ABB uses advances in automated unit testing to help its development teams perform more comprehensive testing at the component level. These techniques enable developers to create and maintain high-quality unit test suites with significantly less effort. Brian's results show that many defects are detected earlier, saving time and leading to a more stable software product for later formal testing. He discusses the techniques and tools ABB uses and ways your organization can best integrate them into your development and test processes. The tools Brian uses apply to C, C#, and Java, and can be integrated into Eclipse and Visual Studio.

Brian Robinson, ABB Inc.
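
The abstract names no specific tools, but as a rough illustration of what generated tests tend to look like, here is a hypothetical JUnit test in the style that Java test-generation tools such as Randoop produce: the tool explores sequences of calls against a class's public API and records the observed results as assertions. The BoundedCounter class and the test itself are invented for this sketch.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, included only to make the sketch self-contained.
class BoundedCounter {
    private final int cap;
    private int value;
    BoundedCounter(int cap) { this.cap = cap; }
    void increment() { if (value < cap) value++; }
    int value() { return value; }
}

public class BoundedCounterRegressionTest {
    // Generation tools explore call sequences on the public API and
    // capture the observed behavior as assertions, yielding regression
    // tests of roughly this shape with little manual effort.
    @Test
    public void incrementStopsAtCap() {
        BoundedCounter c = new BoundedCounter(2);
        c.increment();
        c.increment();
        c.increment(); // third call should be ignored: the cap is 2
        assertEquals(2, c.value());
    }
}
```
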
Exploratory Validation: What We Can Learn from Testing Investment Models

Over the past few years, the airwaves have been flooded with commercials for investment-support software. "Do your research with us," they promise, "and you can make scads of money in the stock market." How could we test such a product? These products provide several capabilities. For example, they estimate the value or direction of change of individual stocks or the market as a whole, and they suggest trading strategies that tell you whether to buy, hold, or sell. Every valuation rule and every strategy is a feature. We can test the implementation of these features, but the greater risks lie in the accuracy of the underlying models. If you execute the wrong trades perfectly, you will lose money. That's not a useful feature, no matter how well implemented.

Cem Kaner, Florida Institute of Technology
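
Kaner's distinction between testing the implementation and testing the model can be made concrete with a toy backtest. In the hedged sketch below, the moving-average strategy executes every trade exactly as coded and would pass any functional test, yet it may still underperform a plain buy-and-hold baseline; the strategy, the synthetic price series, and all figures are invented for illustration.

```java
import java.util.Random;

/**
 * Illustrative backtest: the trading code can be "correct" (every
 * buy/sell executes exactly as specified) and still lose to a
 * buy-and-hold baseline. Implementation tests alone miss that risk;
 * validating the model requires this kind of comparison against data.
 */
public class ModelValidationSketch {
    public static void main(String[] args) {
        double[] prices = syntheticPrices(250); // roughly one year of trading days
        double strategy = runMovingAverageStrategy(prices, 10);
        double baseline = 10_000.0 / prices[0] * prices[prices.length - 1];
        System.out.printf("strategy: %.2f  buy-and-hold: %.2f%n", strategy, baseline);
    }

    // Buy when price rises above its 10-day average, sell when it falls below.
    static double runMovingAverageStrategy(double[] prices, int window) {
        double cash = 10_000.0, shares = 0;
        for (int i = window; i < prices.length; i++) {
            double avg = 0;
            for (int j = i - window; j < i; j++) avg += prices[j];
            avg /= window;
            if (prices[i] > avg && shares == 0) {       // buy signal
                shares = cash / prices[i]; cash = 0;
            } else if (prices[i] < avg && shares > 0) { // sell signal
                cash = shares * prices[i]; shares = 0;
            }
        }
        return cash + shares * prices[prices.length - 1];
    }

    // Synthetic random-walk prices; real validation would use market data.
    static double[] syntheticPrices(int days) {
        Random r = new Random(42);
        double[] p = new double[days];
        p[0] = 100;
        for (int i = 1; i < days; i++)
            p[i] = p[i - 1] * (1 + 0.0002 + 0.02 * r.nextGaussian());
        return p;
    }
}
```
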
Automation Strategies for Testing Complex Data and Dashboards

Test automation engineers are inevitably confronted with the difficult challenge of testing a screen containing hundreds, if not thousands, of data values. Designing an approach to interact with this complex data can be a nightmare, often resulting in countless programming loops that navigate through volumes of data. Greg Paskal shares an innovative way to approach these automation challenges by breaking the problem into its logical parts: first, understand the data and how to organize it using the Complex Data Methodology; second, execute common programmatic tasks that result in shorter automation run times. This approach can be applied to Web and client systems and adapted easily to other technologies. Automators scripting in languages such as VBScript will find this approach innovative, breaking down automation challenges to optimize performance while still producing meaningful results.

Greg Paskal, JCPenney
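
The details of Paskal's Complex Data Methodology are the subject of the talk, so the sketch below is only a guess at the general shape of the idea: capture the screen's values in one bulk read into an ordinary data structure, then do all the comparison work in memory instead of looping through the UI cell by cell. Every name in it is hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical sketch: validate a large data grid by comparing
 *  datasets in memory instead of looping over individual UI cells. */
public class GridComparisonSketch {
    // Stand-in for one bulk read of the rendered grid (for example, a
    // single DOM or window query); in a real suite this would come
    // from the automation tool's API.
    static Map<String, String> captureGrid() {
        Map<String, String> grid = new LinkedHashMap<>();
        grid.put("row1.price", "19.99");
        grid.put("row1.qty", "3");
        grid.put("row2.price", "5.49");
        return grid;
    }

    public static void main(String[] args) {
        Map<String, String> expected = Map.of(
            "row1.price", "19.99", "row1.qty", "3", "row2.price", "5.49");
        Map<String, String> actual = captureGrid(); // one interaction with the UI
        // All comparison work happens in memory; touching the UI once
        // rather than hundreds of times is where the run-time savings lie.
        expected.forEach((key, want) -> {
            String got = actual.get(key);
            if (!want.equals(got))
                System.out.printf("MISMATCH %s: expected %s, got %s%n", key, want, got);
        });
        System.out.println("comparison complete");
    }
}
```
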
Handling Failures in Automated Acceptance Tests

One of the aims of automated functional testing is to run many tests and discover multiple errors in one execution of the test suite. However, when an automated test runs into unexpected behavior (system errors, wrong paths taken, incorrect data stored, and more), the test fails. When a test fails, additional errors, called inherited errors, can result, or the entire test run can stop unintentionally. Either way, some portion of the system remains untested, and either the error must be corrected or the automation changed before proceeding. Alexandra Imrie describes proven approaches to ensure that as many tests as possible continue running despite the errors encountered. She begins by sharing a specific way of designing tests to minimize the disturbance caused by an error. Using this test design as a foundation, Alex describes the strategies she exploits for handling and recovering from error events that occur during automated functional tests.

Alexandra Imrie, BREDEX GmbH
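
Imrie's specific designs and strategies are what the session covers; as a generic, hedged illustration of the recover-and-continue principle, the sketch below wraps each test in a handler that records the failure, resets the application to a known state, and carries on, so a single failure neither cascades into inherited errors nor halts the suite. The tests and the reset step are invented.

```java
import java.util.ArrayList;
import java.util.List;

/** Generic recover-and-continue test runner (illustrative only). */
public class RecoveringRunner {
    interface UiTest { String name(); void run() throws Exception; }
    interface Body { void run() throws Exception; }

    public static void main(String[] args) {
        List<UiTest> suite = List.of(
            test("login", () -> {}),
            test("create-order", () -> { throw new IllegalStateException("wrong page"); }),
            test("search", () -> {})); // still runs despite the failure above

        List<String> failures = new ArrayList<>();
        for (UiTest t : suite) {
            try {
                t.run();
                System.out.println("PASS " + t.name());
            } catch (Exception e) {
                failures.add(t.name() + ": " + e.getMessage());
                System.out.println("FAIL " + t.name() + " -> recovering");
                resetToKnownState(); // contain the failure so later tests start clean
            }
        }
        System.out.println("failures: " + failures);
    }

    static void resetToKnownState() {
        // In a real suite: dismiss stray dialogs, log back in, reload fixtures.
    }

    static UiTest test(String name, Body body) {
        return new UiTest() {
            public String name() { return name; }
            public void run() throws Exception { body.run(); }
        };
    }
}
```
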
Futility-based Test Automation

Developers and other project stakeholders are paying increased attention to test automation because of its promise to speed development and reduce the costs of systems over their complete lifecycle. Unfortunately, flawed test automation efforts have prevented many teams from achieving the productivity and savings that their organizations expect and demand. Clint Sprauve shares his real-world experiences, exposing the most common bad habits that test automation teams practice. He reveals common misconceptions about keyword-driven testing, test-driven development, behavior-driven development, and other methodologies that can lead to futility-based test automation. Regardless of your test automation methodology, and whether you operate in a traditional or an agile development environment, Clint offers advice on how to avoid the “crazy cycle” of script maintenance and ways to incrementally improve your test automation practices.

Clinton Sprauve, Borland (a Micro Focus company)

Testing Dialogues: Automation Issues

What problems are you facing in test automation right now? Just getting started? Trying to choose the right tool set? Working to convince executive managers of the value of automation? Dealing with excessive maintenance of scripts? Worrying about usability and security testing? Something else? Based on the problems and topics you and fellow automators bring to this session, Dorothy Graham and Mieke Gevers, both highly experienced in test automation, will explore many of the most vexing test automation issues facing testers today. Join other participants in small groups to discuss your situation, share your experiences, learn from your peers, and get the experts’ views from Dorothy and Mieke. As you learn and share, each group will create a brief presentation to give at the conclusion of the session.

Dorothy Graham, Consultant

State-driven Testing: An Innovation in UI Test Automation

Keyword-driven testing is an accepted UI test automation technique used by mature organizations to overcome the disadvantages of record/playback test automation. Unfortunately, keyword-driven testing has drawbacks of its own in maintenance and complexity, because applications can easily require thousands of automation keywords. Navigating so many keywords to construct test cases is extremely cumbersome and can be impractical. Join Dietmar Strasser to learn how state-driven testing, which employs an application state-transition model as the basis for UI testing, addresses the disadvantages of keyword-driven testing. By defining the states and transitions of UI objects in a model, you can minimize the set of allowed UI actions at a specific point in a test script, requiring fewer keywords.

Dietmar Strasser, Borland (a Micro Focus company)
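
The core idea in the abstract, that a state-transition model limits which actions are legal at any point, can be sketched as a small transition table. The states and actions below are invented; a real implementation would also drive the UI on each transition.

```java
import java.util.Map;

/** Minimal state-driven sketch: the model restricts which actions a
 *  test step may take in each state, so far fewer keywords are in
 *  play at any moment. States and actions are hypothetical. */
public class StateDrivenSketch {
    enum State { LOGGED_OUT, DASHBOARD, EDITING }

    // Per current state: allowed action -> resulting state.
    static final Map<State, Map<String, State>> MODEL = Map.of(
        State.LOGGED_OUT, Map.of("login", State.DASHBOARD),
        State.DASHBOARD,  Map.of("open_editor", State.EDITING, "logout", State.LOGGED_OUT),
        State.EDITING,    Map.of("save", State.DASHBOARD));

    public static void main(String[] args) {
        State current = State.LOGGED_OUT;
        for (String action : new String[] {"login", "open_editor", "save", "logout"}) {
            Map<String, State> allowed = MODEL.get(current);
            if (!allowed.containsKey(action))
                throw new IllegalStateException(action + " is not legal in state "
                    + current + "; legal actions: " + allowed.keySet());
            System.out.println(current + " --" + action + "--> " + allowed.get(action));
            current = allowed.get(action); // a real framework would also drive the UI here
        }
    }
}
```
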
Automating Test Design in Agile Development Environments

How does model-based automated test design (ATD) fit with agile methods and developer test-driven development (TDD)? The answer is “Superbly!”, and Antti Huima explains why and how. Because ATD and TDD both focus on responsive processes, handling evolving requirements, emergent designs, and higher product quality, their goals are strongly aligned. Whereas TDD focuses on the unit level, ATD works at higher test levels, supporting and enhancing product quality and speeding up development. ATD dramatically reduces the time it takes to design tests within an iterative agile process and makes tests available sooner, especially as development proceeds through multiple iterations. Antti shatters the common misconception that model-based methods are rigid and formal and cannot be employed in the rapid, fluid setting of agile environments.

Antti Huima, Conformiq, Inc.
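
As a hedged illustration of what designing tests from a model can mean, the toy generator below walks a small state-transition model and emits every action sequence up to a fixed length as a test case. Production tools work from far richer models, but the sketch shows why regeneration is cheap when the model changes between iterations.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

/** Toy automated test design: enumerate bounded action paths from a
 *  state model. The model contents are invented for illustration. */
public class TestGenerationSketch {
    record Path(String state, List<String> actions) {}

    static final Map<String, Map<String, String>> MODEL = Map.of(
        "LOGGED_OUT", Map.of("login", "DASHBOARD"),
        "DASHBOARD",  Map.of("open_editor", "EDITING", "logout", "LOGGED_OUT"),
        "EDITING",    Map.of("save", "DASHBOARD"));

    public static void main(String[] args) {
        Deque<Path> queue = new ArrayDeque<>();
        queue.add(new Path("LOGGED_OUT", List.of()));
        while (!queue.isEmpty()) {
            Path p = queue.poll();
            if (!p.actions().isEmpty())
                System.out.println("test case: " + String.join(" -> ", p.actions()));
            if (p.actions().size() == 3) continue; // bound the walk to short sequences
            // When the model changes in a later iteration, rerunning this
            // step regenerates the whole suite at no extra design cost.
            MODEL.get(p.state()).forEach((action, next) -> {
                List<String> extended = new ArrayList<>(p.actions());
                extended.add(action);
                queue.add(new Path(next, extended));
            });
        }
    }
}
```
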
Building a Successful Test Automation Strategy

You have been told repeatedly that test automation is a good thing and that you need to automate your testing. So, how do you know where to start? If you have started and your efforts don’t seem to be paying back your investment, what should you do? And even though you believe automation is a good thing, how can you convince your management? Karen Rosengren takes you through a set of practical and proven steps to build a customized test automation strategy based on your organization’s needs. She focuses on the real problem you are trying to solve: repetitive manual test effort that can be significantly reduced through automation. Using concrete examples, Karen shows you how to develop a strategy for automation that addresses real, not theoretical, savings. She shares how she has demonstrated the business value of automation to executives and gained both buy-in and the necessary budget to be successful.

Karen Rosengren, IBM Global Services

How Google Tested Chrome

Ever wish you could peek inside a big, high-tech company and see how they actually do testing? Well, now you can. Led by Sebastian Schiavone, Google's Chrome Test Team will detail everything they have done to test Google Chrome (both the browser and the netbook operating system), beginning with their process for test planning and how they design test automation. Sebastian and his team share their initial plans, automation efforts, and actual results in what is likely to be the most candid and honest assessment of internal testing practices ever presented. Learn what worked, what didn't work, and how they'd proceed if they had it all to do over again. Take away copies of Google's actual test artifacts and learn how to apply Google's test techniques to the product you are currently testing.

Sebastian Schiavone, Google
