|
Automating Test Design The goals of this presentation are to: redefine the term "path"; introduce four value-selection paradigms; discuss the strengths and weaknesses of each; examine how value selection relates to automated test design capability; and examine how test-requirements identification relates to each paradigm.
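The abstract does not name the four paradigms, but the core idea of value selection can be sketched by contrasting two common strategies: deterministic boundary-value picking and random sampling. A minimal Python sketch, with the function names and the [0, 100] domain chosen for illustration rather than taken from the talk:

    import random

    def boundary_values(lo, hi):
        # Deterministic paradigm: values at and just inside the domain
        # edges, plus a midpoint.
        return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

    def random_values(lo, hi, n=5, seed=42):
        # Statistical paradigm: sample the domain uniformly at random;
        # seeding keeps the selection reproducible across test runs.
        rng = random.Random(seed)
        return [rng.randint(lo, hi) for _ in range(n)]

    # Selecting test inputs for a function defined on [0, 100]:
    print(boundary_values(0, 100))   # [0, 1, 50, 99, 100]
    print(random_values(0, 100))

Which paradigm fits depends on how much the automation can know about the input domain, which is the link to automated test design capability the talk examines.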
|
Steve Morton, Applied Dynamics International
|
|
Implementing an Automated Regression Test Suite Many efforts to automate regression testing have failed or fallen short of expectations, resulting in "shelfware." Lloyd Roden presents a real-world case study based on the successful implementation of a regression test tool within a software company. Learn the steps taken in evaluating and deploying the tool. Discover the key benefits and successes achieved over a three-year period, as well as the challenges faced while using the tool.
|
Lloyd Roden, Grove Consultants
|
|
Adventures in Web Application Performance Testing Examine the challenges and successes of a test team analyzing application and system performance as applications move from distributed client/server solutions to centralized, Web-based designs. Nancy Landau presents case studies that address the changes made in automated testing methods to handle compressed delivery schedules, new architectures, new test tool requirements, and changing customer expectations. These case studies encompass principles such as managing iterative test development, creating reusable tests, standardizing application metrics, migrating from simple to complex networking environments, and predicting performance bottlenecks.
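One concrete technique behind "standardizing application metrics" is to route every measured transaction through a single timing wrapper so all tests emit comparable numbers. A hedged Python sketch (the decorator, label, and uniform output line are illustrative assumptions, not the team's actual method):

    import functools
    import time

    def timed(label):
        # Wrap a transaction so every test reports latency the same way.
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = fn(*args, **kwargs)
                elapsed = time.perf_counter() - start
                print(f"metric,{label},{elapsed:.3f}")  # one uniform record per call
                return result
            return wrapper
        return decorator

    @timed("search_page")
    def search(term):
        time.sleep(0.1)  # stand-in for the real Web transaction
        return term

Uniform records like these make it possible to compare runs across architectures and spot emerging bottlenecks.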
|
Nancy Landau, ALLTEL
|
|
Scripts on My Tool Belt The aims of this presentation are to: convince you that "test automation" means more than automating test execution; show, through simplified code samples, the kinds of things that can be accomplished with scripting languages; and introduce three different scripting languages (shells, Perl, and Expect).
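As a taste of "more than automating test execution," here is the kind of small tool-belt script the talk has in mind: summarizing a test log after a run. The talk's own samples use shells, Perl, and Expect; this sketch uses Python, and the PASS/FAIL log format is an assumption for illustration:

    import re
    import sys

    def summarize(path):
        # Tally PASS/FAIL lines and collect the names of failing tests.
        counts = {"PASS": 0, "FAIL": 0}
        failures = []
        with open(path) as log:
            for line in log:
                match = re.match(r"(PASS|FAIL):\s*(.*)", line)
                if match:
                    counts[match.group(1)] += 1
                    if match.group(1) == "FAIL":
                        failures.append(match.group(2).strip())
        return counts, failures

    if __name__ == "__main__":
        counts, failures = summarize(sys.argv[1])
        print(f"{counts['PASS']} passed, {counts['FAIL']} failed")
        for name in failures:
            print(f"  FAIL: {name}")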
|
Danny Faught, Tejas Software Consulting
|
|
Create Your Own Luck: Get Organized for Test Success The four "lucky" organizational factors are: clearly defined roles within, and interfaces between, the test team and the project; early test team involvement in the project; sharing of test cases, data, and tools across test participants and phases (levels); and a project culture that promotes understanding and valuing the test team's contributions. How do these factors promote test success? How can we institute these auspicious circumstances on our projects?
|
Rex Black, Rex Black Consulting Services, Inc.
|
|
STARWEST 2001: Designing an Automated Web Test Environment This paper offers an alternative to the typical automated test scripting method of "record and playback now and enhance the automation environment later." It explores a regression automation system design for testing Internet applications through the GUI, along with scripting techniques that enhance the scalability and flexibility of an automated test suite. The paper presents a basic structure for an automated test environment and expands on each item in that structure; it also lays out Web testing levels, along with a basic approach to designing test scripts based on those levels.
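A recurring technique for the scalability and flexibility the paper aims at is separating test data from script logic, so adding a case means adding a data row rather than another script. A minimal Python sketch of the idea (the CSV fields and the submit_login driver are hypothetical placeholders, not the paper's actual design):

    import csv

    def submit_login(username, password):
        # Placeholder for the GUI/HTTP driver layer the environment provides.
        return username == "admin" and password == "secret"

    def run_data_driven(path):
        # Each CSV row (username, password, expected) drives one test case.
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                actual = submit_login(row["username"], row["password"])
                expected = row["expected"] == "pass"
                status = "PASS" if actual == expected else "FAIL"
                print(f"{status}: {row['username']!r}")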
|
Dion Johnson, Pointe Technology Group, Inc.
|
|
An Introduction to Web Load Testing This session walks participants through the process of Web load testing. Jim Hyatt covers everything from which testing tools are available to how to plan a load test. Get a basic understanding of what Web load testing is and how to do it correctly.
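At its core, a Web load test drives many concurrent virtual users against the application and records response times. A bare-bones Python sketch of that loop (the URL and user count are placeholders; real load tools add ramp-up, think time, and richer reporting):

    import concurrent.futures
    import time
    import urllib.request

    def fetch(url):
        # One virtual user: request the page and measure latency.
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start

    def load_test(url, users=10):
        # Run all virtual users concurrently and summarize latencies.
        with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
            latencies = list(pool.map(fetch, [url] * users))
        print(f"min {min(latencies):.3f}s  max {max(latencies):.3f}s  "
              f"avg {sum(latencies) / len(latencies):.3f}s")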
|
Jim Hyatt, Spherion
|
|
Three Seasons of Test Automation: A Case Study This presentation makes the following recommendations for automating testing: don't automate all of an application (seventy to eighty percent); don't automate all applications (favor stable, long-term ones); don't take a 3G approach for short-term gain, but if shelf life and maintenance costs are important, a 3G approach is best; ensure the proper roles are filled and people are trained; have requirements before you start; have good access to data and test oracles; and spend time in design to set the right level of granularity for the test cases and action words.
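The "action words" mentioned above are the heart of the 3G (third-generation, keyword-driven) approach: test cases become rows of action words plus arguments, and a dispatch table maps each word to code, so cases stay readable while maintenance stays in one place. A minimal Python sketch, with the action names and sample table invented for illustration:

    def open_account(name):
        print(f"opened account for {name}")

    def deposit(name, amount):
        print(f"deposited {amount} into {name}'s account")

    # Dispatch table: one entry per action word.
    ACTIONS = {"open account": open_account, "deposit": deposit}

    # A test case as data: (action word, arguments) rows.
    test_case = [
        ("open account", ["alice"]),
        ("deposit", ["alice", "100"]),
    ]

    for action, args in test_case:
        ACTIONS[action](*args)

Granularity is the key design decision here: action words that are too fine degenerate back into scripting, while words that are too coarse lose reuse.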
|
Russell Roundtree, Landmark Graphics and Mike Sowers, Software Development Technologies
|
|
STARWEST 2001: Exploratory Testing in Pairs Exploratory testing involves simultaneous activities: learning about the program and the risks associated with it, planning and conducting tests, troubleshooting, and reporting results. This highly skilled work depends on the tester's ability to stay focused and alert. Based on a successful pilot study, Cem Kaner and James Bach discuss why two testers can be more effective working together than apart. Explore the advantages of testing in pairs, including an ongoing dialogue that keeps both testers alert and focused, faster and more effective troubleshooting, and an excellent opportunity for a seasoned tester to train a novice.
|
James Bach, Satisfice, Inc. and Cem Kaner, Florida Institute of Technology
|
|
Establishing Best Testing Practices in Your Organization The path to best testing practices begins with communication. By building relationships with a product's key players (developers, analysts, and end users), your test team can achieve a higher level of both quality and customer satisfaction. Discover the link between effective communication and implementing critical step-by-step test processes such as defining test conditions, designing test cases, constructing test data, and reporting.
|
Michelle Lynn Baldwin, Booz, Allen & Hamilton
|