|
STAREAST 2002
|
A Case Study in Automating Web Performance Testing Key ideas from this presentation include: define meaningful performance requirements; recognize that changing your site (hardware or software) invalidates all previous predictors; reduce the number of scripts through equivalence classes; don't underestimate the hardware needed to simulate the load; evaluate and improve your skills, knowledge, tools, and outsourced services; document your process and results so that others may learn from your work; use your new knowledge to improve your site's performance; and focus on progress, not perfection.
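To make the load-simulation point concrete, here is a minimal sketch of a multithreaded load generator; it is an illustration, not material from the presentation, and the target URL, user count, and request count are invented for the example.

```python
# Minimal load-generation sketch (illustrative; URL and figures are assumptions).
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"  # hypothetical system under test
VIRTUAL_USERS = 50                     # each thread simulates one user
REQUESTS_PER_USER = 20

latencies = []
lock = threading.Lock()

def virtual_user():
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
        except OSError:
            continue  # count only successful requests
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if latencies:
    latencies.sort()
    print(f"requests: {len(latencies)}")
    print(f"median latency: {latencies[len(latencies) // 2]:.3f}s")
    print(f"95th percentile: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```

Even a toy generator like this shows why the abstract warns about client hardware: thread scheduling and the network stack on the load-generating machine can become the bottleneck long before the server does, silently capping the load you think you are applying.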
|
Lee Copeland, Software Quality Engineering
|
|
Investing Wisely: Generating Return on Investment from Test Automation Implementing test automation without following best practices and tracking your ROI is a prescription for failure. Still, many companies have done so seeking the elusive promise of automated testing: more thorough testing done faster, with less error, at a substantially lower cost. However, fewer than fifty percent of these companies realize any real success from these efforts. And even fewer have generated any substantial ROI from their automated testing initiatives. This presentation takes an in-depth look at the specific pitfalls companies encounter when implementing automated functional testing, and offers proven best practices to avoid them and guarantee long-term success.
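As a rough illustration of the ROI tracking the presentation calls for, the sketch below compares cumulative automation costs against manual-testing savings across repeated test cycles; every figure is invented for the example.

```python
# Back-of-the-envelope ROI model for test automation (illustrative figures).
# ROI = (savings - investment) / investment, accumulated over test cycles.

tool_license = 20000.0              # one-time tooling cost (assumed)
script_development = 30000.0        # initial automation effort (assumed)
manual_cost_per_cycle = 8000.0      # cost to run the suite by hand (assumed)
automated_cost_per_cycle = 1000.0   # maintenance plus execution cost (assumed)

investment = tool_license + script_development

for cycle in range(1, 13):
    savings = cycle * (manual_cost_per_cycle - automated_cost_per_cycle)
    roi = (savings - investment) / investment
    print(f"cycle {cycle:2d}: ROI = {roi:+.0%}")
```

With these invented numbers, the initiative does not break even until roughly the eighth test cycle, which is why tracking ROI over time matters more than a one-time projection.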
|
Dale Ellis, TurnKey Solutions Corp.
|
|
A Crash Team Approach to Effective Testing Rapid change and shrinking delivery deadlines constantly challenge software testers. To catch up, software testing must take a different approach without cutting corners: hence, the crash team. The crash team approach focuses on integration testing and runs in parallel with functional testing. Its technique discovers system problems early, problems that would be hard to find with traditional methods. It also supports the spiral development model that has been adopted in many rapid application development environments.
|
Pei Ma, WeiMa Group LLC
|
|
Problems with Vendorscripts: Why You Should Avoid Proprietary Languages Most test tools come bundled with vendor-specific scripting languages that I call vendorscripts. They are hard to learn, weakly implemented, and most importantly, they discourage collaboration between testers and developers. Testers deserve full-featured, standardized languages for their test development. Here’s why.
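To make the argument concrete (the example is illustrative, not Pettichord's), here is the kind of test a full-featured, standardized language enables: a plain Python unittest case that testers and developers can read, version, and extend together. The login function is a hypothetical stand-in for the application under test.

```python
# A hypothetical login test written in a standard language (Python's
# built-in unittest) instead of a tool-specific vendorscript.
import unittest

def login(username, password):
    # Stand-in for the application under test; a real suite would drive
    # the product through an API or a UI-automation library.
    return username == "admin" and password == "secret"

class LoginTest(unittest.TestCase):
    def test_valid_credentials_succeed(self):
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password_fails(self):
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()
```

Because the test is ordinary code in a language with books, tools, and an existing developer community, developers can run and extend it directly, which is exactly the collaboration the abstract says vendorscripts discourage.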
|
Bret Pettichord, Pettichord Consulting
|
|
Risk Analysis for Web Testing All Web sites take risks in some areas; your job is to minimize your company's exposure to these risks. Karen Johnson takes you through a step-by-step analysis of a Web site to determine possible exposure points. By reviewing the functionality and other site considerations, such as supported browsers or anticipated loads, you can accurately determine the risk areas. You'll then create categories of testing based on the exposure points you uncover, starting with broad areas such as functional, content, security, load, and performance testing, and drilling down to test and protect against even minor vulnerabilities.
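One common way to rank the exposure points such an analysis uncovers, shown here as a generic sketch rather than Johnson's specific method, is to score each risk as likelihood times impact; the risk names and 1-to-5 scores below are invented.

```python
# Simple risk-exposure ranking: exposure = likelihood x impact.
# Categories and 1-5 scores are invented for illustration.
risks = {
    "payment processing fails under load": (3, 5),
    "unsupported browser renders badly":   (4, 2),
    "stale product content":               (5, 1),
    "session hijacking":                   (2, 5),
}

ranked = sorted(risks.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{likelihood * impact:2d}  {name}")
```

Test categories (load, compatibility, content, security) then receive effort in proportion to the exposure scores at the top of the list.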
|
Karen Johnson, Baxter Healthcare Corporation
|
|
Test Automation of Distributed Transactional Services Distributed transactions are being implemented everywhere. Web services, EAI, and B2B are just a few examples. Testing these transactions across disparate systems, sometimes even across organizations and firewalls, is difficult yet vital. But automating the testing is impossible without the right tools. Manish Mathuria offers you a test automation framework built specifically for transactional and component-based implementations. He addresses the practical problems of testing such systems, and suggests solutions for many of them.
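This is not Mathuria's framework, but a generic sketch of one pattern any such framework has to automate: submit a transaction on one side of an integration boundary and poll the other side until the result becomes consistent or a timeout expires. The fake in-memory downstream system exists only so the example runs on its own.

```python
import time

# Fake stand-ins for two systems so the sketch is self-contained; in practice
# these would call the real systems' APIs across the integration boundary.
_downstream = {}

def submit_order(order_id):
    # Simulate asynchronous propagation: the downstream record appears later.
    _downstream[order_id] = time.monotonic() + 2.0  # visible after ~2 seconds

def downstream_has_order(order_id):
    arrival = _downstream.get(order_id)
    return arrival is not None and time.monotonic() >= arrival

def assert_transaction_propagates(order_id, timeout=30.0, interval=0.5):
    submit_order(order_id)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if downstream_has_order(order_id):
            return
        time.sleep(interval)
    raise AssertionError(f"order {order_id} never reached the downstream system")

assert_transaction_propagates("ORD-1001")
print("transaction propagated")
```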
|
Manish Mathuria, Arsin Corporation
|
|
Adventures in Session-Based Testing This paper describes the way that a UK company controlled and improved ad hoc testing, and was able to use the knowledge gained as a basis for ongoing, sustained product improvement. It details the session-based methods initially proposed, and notes problems, solutions, and improvements found in their implementation. It also covers the ways that the improved test results helped put the case for change throughout development, and the ways in which the team has since built on the initial processes to arrive at better testing overall. Session-based testing can be used to introduce measurement and control to an immature test process, and can form a foundation for significant improvements in productivity and error detection.
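The measurement and control come from simple tallies over completed session records. The sketch below is an illustration, with invented charter areas and session data, of the kind of roll-up session-based reporting enables.

```python
# Roll up metrics from session-based test records (invented data).
# Each session record: charter area, minutes spent, bugs found.
sessions = [
    ("checkout", 90, 3),
    ("checkout", 60, 1),
    ("search",   90, 0),
    ("search",   45, 2),
]

totals = {}
for area, minutes, bugs in sessions:
    t = totals.setdefault(area, [0, 0, 0])  # sessions, minutes, bugs
    t[0] += 1
    t[1] += minutes
    t[2] += bugs

for area, (count, minutes, bugs) in sorted(totals.items()):
    print(f"{area}: {count} sessions, {minutes} min, {bugs} bugs "
          f"({60 * bugs / minutes:.1f} bugs/hour)")
```

Tallies like these let a test manager see where effort went and what it found, which is the control a purely ad hoc process lacks.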
|
James Lyndsay, Workroom Productions
|
|
Finding Firmware Defects Embedded systems software presents the test professional with a different breed of challenges than other types of applications do. Hardware interfaces, interrupts, timing considerations, resource constraints, and error handling often pose problems that aren't well suited to many traditional testing techniques. This presentation discusses some of these problems, and the techniques and strategies that are most effective at finding software bugs in embedded systems code.
|
Sean Beatty, High Impact Services, Inc.
|
|
Software Documentation Superstitions Do you need credible evidence that disciplined document reviews (a.k.a. inspections) can keep total project costs down while helping you meet the schedule and improve quality? The project documentation we actually need should meet predetermined quality criteria, but organizations are often superstitious about writing this documentation, and they let their superstitions inhibit their efforts. This presentation dispels the superstitions and shows you how disciplined document reviews can reinforce and improve the quality of your software project documentation, such as requirements, design, and test plans/procedures.
|
Gregory Daich, Software Technology Support Center
|
|
Going Beyond QA: Total Product Readiness The successful release of software requires more than just testing to ensure the product functions properly; success is also defined by how prepared the product is for advertising, delivery, installation, training, support, and so on. In this paper, we'll discuss how testing can be expanded to cover all aspects of Total Product Readiness (TPR).
|
Douglas Thacker, Liberty Mutual Insurance Group
|