|
An Execution Framework for Java Test Automation This presentation introduces the Java Execution Framework (JEF), describing test suites, test cases, and the JEF test harness.
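A minimal sketch of the relationship the abstract describes (suites aggregate test cases, and the harness executes them) might look like the following; the interfaces and names below are illustrative assumptions, not JEF's actual API.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical shapes for the concepts named above: a test case, a
    // suite that aggregates cases, and a harness that executes a suite.
    interface TestCase {
        String name();
        boolean run();  // true = pass, false = fail
    }

    class TestSuite {
        private final List<TestCase> cases = new ArrayList<>();
        void add(TestCase tc) { cases.add(tc); }
        List<TestCase> cases() { return cases; }
    }

    class Harness {
        void execute(TestSuite suite) {
            for (TestCase tc : suite.cases()) {
                System.out.println(tc.name() + ": " + (tc.run() ? "PASS" : "FAIL"));
            }
        }
    }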
|
Erick Griffin, Tivoli Systems Inc.
|
|
Automating Test Design The goals of this presentation are to: Redefine the term "path"; Introduce four value selection paradigms; Discuss strengths & weaknesses of each; Examine how value selection relates to automated test design capability; and Examine how test requirements identification relates to each paradigm.
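The four paradigms themselves are not named in this summary, but two value selection strategies that commonly appear in such taxonomies, random selection and boundary-value selection, can be sketched as follows; this is a generic illustration, not the presenter's material.

    import java.util.List;
    import java.util.Random;

    class ValueSelection {
        // Random selection: draw a test input anywhere in the valid range.
        static int randomValue(int min, int max, Random rng) {
            return min + rng.nextInt(max - min + 1);
        }

        // Boundary-value selection: probe the edges of the range, where
        // off-by-one defects tend to cluster.
        static List<Integer> boundaryValues(int min, int max) {
            return List.of(min, min + 1, max - 1, max);
        }
    }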
|
Steve Morton, Applied Dynamics International
|
|
Scripts on My Tool Belt The aims of this presentation are to: convince you that "test automation" is more than automating test execution; show some examples of the kinds of things that can be accomplished with scripting languages, using simplified code samples; and make you aware of three different scripting languages (shells, Perl, and Expect).
|
Danny Faught, Tejas Software Consulting
|
|
The Dangers of Use Cases Employed as Test Cases Use cases are a great way to organize and document a software system's functionality from the user's perspective, but they are of limited use to testers: great vehicles for accomplishing some tasks, and not so great for others. Understand what you're trying to accomplish by testing before deciding whether use cases can help, and be cognizant of the challenges they present.
|
Bernie Berger, Test Assured, Inc.
|
|
Designing Reusable Test Automation This paper introduces the Sequencer design that facilitates the creation and execution of reusable operations. The idea behind the Sequencer is to carve the product under test into sets of functional operations. A test case data file describes the operations to be executed, including their order and required data. The Sequencer’s test driver executes the test by loading the test case and sequencing the operations. The beauty of this approach is that, once a well-stocked library of operations has been coded, new tests can be generated by combining different sequences of existing operations.
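A minimal sketch of the idea, assuming a plain-text test case file that lists one operation per line followed by its data (the interface and class names here are hypothetical, not the paper's code):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.Map;

    // Each functional operation carved out of the product under test is
    // coded once and stored in a library keyed by name.
    interface Operation {
        void execute(List<String> data) throws Exception;
    }

    class Sequencer {
        private final Map<String, Operation> library;

        Sequencer(Map<String, Operation> library) { this.library = library; }

        // Test driver: load the test case file and run its operations in
        // order, e.g. a line such as "login alice secret".
        void run(Path testCase) throws Exception {
            for (String line : Files.readAllLines(testCase)) {
                if (line.isBlank()) continue;
                List<String> tokens = List.of(line.trim().split("\\s+"));
                Operation op = library.get(tokens.get(0));
                if (op == null)
                    throw new IllegalArgumentException("Unknown operation: " + tokens.get(0));
                op.execute(tokens.subList(1, tokens.size()));
            }
        }
    }

New tests then require no new code, only a new data file that sequences existing operations differently.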
|
Edward Guy Smith, Mangosoft Incorporated
|
|
Enjoying the Perks of Model-Based Testing Software testing demands the use of some model to guide such test tasks as selecting test inputs, validating the adequacy of tests, and gaining insight into test effectiveness. Most testers gradually build a mental model of the system under test, which enables them to further understand it and better test its many functions. Explicit models, being formal and precise representations of a tester’s perception of a program, are excellent shareable, reusable vehicles of communication among testers and other teams, and of automation for many tasks that are normally tedious and labor-intensive.
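As one concrete illustration of an explicit model (not an example from the paper), a small finite state machine of the system under test can automate input selection; the two-state login model below is hypothetical.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    class FsmModel {
        // Explicit model: for each state, the inputs it accepts and the
        // state each input leads to.
        static final Map<String, Map<String, String>> TRANSITIONS = Map.of(
            "loggedOut", Map.of("login", "loggedIn"),
            "loggedIn",  Map.of("logout", "loggedOut", "query", "loggedIn"));

        // Random walk over the model: one simple way to automate the
        // otherwise tedious task of selecting test input sequences.
        static List<String> generate(int steps, Random rng) {
            List<String> sequence = new ArrayList<>();
            String state = "loggedOut";
            for (int i = 0; i < steps; i++) {
                List<String> inputs = List.copyOf(TRANSITIONS.get(state).keySet());
                String input = inputs.get(rng.nextInt(inputs.size()));
                sequence.add(input);
                state = TRANSITIONS.get(state).get(input);
            }
            return sequence;
        }
    }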
|
Ibrahim K. El-Far, Florida Institute of Technology
|
|
STARWEST 2001: Bug Hunting: Going on a Software Safari This presentation is about bugs: where they hide, how you find them, and how you tell other people they exist so they can be fixed. Explore the habitats of the most common types of software bugs. Learn how to make bugs more likely to appear and discover ways to present information about the bugs you find to ensure they get fixed. Drawing on real-world examples of bug reports, Elisabeth Hendrickson reveals tips and techniques for capturing the wiliest and most squirmy critters crawling around in your software.
|
Elisabeth Hendrickson, Quality Tree Software
|
|
Concise, Standardized, Organized Testing in Complex Test Environments There's a need for standardized, organized hardware and software infrastructure, and for a common framework, in a complex test environment. Gerhard Strobel focuses on the experience of testing diverse products on many different platforms (UNIX, Windows, OS/2, z/OS, OS/400): how they differ and how much they have in common. He explains how to configure and profile test machines, then highlights the technical areas where test efficiency can be increased. He also covers methods of execution control.
|
Gerhard Strobel, IBM Germany
|
|
Test Result Checking Patterns Determining how a test case detects a product failure involves several test design trade-offs, including the characteristics of the test data used and when comparisons are done. This document addresses how result checking impacts test design.
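One of the timing trade-offs alluded to, checking each result immediately versus logging results and comparing them against a baseline afterwards, can be sketched as follows; the design is a generic illustration rather than a pattern taken from the document.

    import java.util.ArrayList;
    import java.util.List;

    class ResultChecking {
        // Immediate checking: compare as soon as a result is produced, so
        // the test stops at the first failure with full context available.
        static void checkNow(int actual, int expected) {
            if (actual != expected)
                throw new AssertionError("expected " + expected + " but got " + actual);
        }

        // Deferred checking: record results during the run and compare the
        // whole log against a baseline at the end, so a single run reports
        // every mismatch instead of only the first.
        static List<String> diff(List<Integer> actual, List<Integer> baseline) {
            List<String> mismatches = new ArrayList<>();
            for (int i = 0; i < Math.min(actual.size(), baseline.size()); i++)
                if (!actual.get(i).equals(baseline.get(i)))
                    mismatches.add("index " + i + ": expected " + baseline.get(i)
                        + ", got " + actual.get(i));
            return mismatches;
        }
    }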
|
Keith Stobie, Microsoft
|
|
A Framework for Testing Real-Time and Embedded Systems What do we mean when we say local, remote, simultaneous, and distributed testing? Alan Haffenden of The Open Group explores the differences, and explains why the architecture of a distributed test execution system must be different from that of non-distributed systems. An overview of POSIX 1003.13 profiles and units of functionality helps advanced users build a good foundation for testing both their real-time and embedded systems.
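One thing that sets distributed testing apart from merely remote or simultaneous execution is that a single test consists of parts running on several systems, and their local verdicts must be merged into one overall result. A minimal sketch of such merging, with hypothetical names:

    import java.util.List;

    enum Verdict { PASS, FAIL, UNRESOLVED }

    class DistributedTest {
        // Merge the local verdicts reported by the parts of a distributed
        // test: any FAIL fails the whole test, and any UNRESOLVED part
        // leaves the overall result unresolved.
        static Verdict combine(List<Verdict> partVerdicts) {
            if (partVerdicts.contains(Verdict.FAIL)) return Verdict.FAIL;
            if (partVerdicts.contains(Verdict.UNRESOLVED)) return Verdict.UNRESOLVED;
            return Verdict.PASS;
        }
    }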
|
Alan Haffenden, The Open Group
|