|
Is There an Assessment in the House? Diagnosing Test Process Ailments In-House When you're not feeling well, you go to the doctor for a checkup. If your organization's test process isn't working as well as you'd like, you should give it the same treatment. Ruud Teunissen offers advice on performing an in-house test process assessment.
|
|
|
A Look at VMware The more complicated the system to test, the bigger the headache. Chris Meisenzahl takes a look at how you can take the pain out of testing complicated software systems with VMware’s virtualization tools—VMware Player, VMware Workstation, VMware Server, and VMware ESX Server.
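As an illustration of the workflow Chris describes, here is a minimal Python sketch that rolls a guest VM back to a known-good snapshot before a test run by shelling out to VMware's vmrun command-line tool. The .vmx path and snapshot name are hypothetical placeholders, not values from the session.

```python
# A minimal sketch: restore a test VM to a clean snapshot before each run,
# using VMware's vmrun command-line tool. Paths and names are hypothetical.
import subprocess

VMX = "/vms/winxp-test/winxp-test.vmx"  # hypothetical guest VM config file
SNAPSHOT = "clean-install"              # hypothetical snapshot name

def fresh_test_vm():
    """Roll the VM back to its known-good snapshot, then boot it."""
    # Depending on the VMware product, vmrun may also need a -T host-type
    # argument (e.g., -T ws for Workstation, -T server for Server).
    subprocess.run(["vmrun", "revertToSnapshot", VMX, SNAPSHOT], check=True)
    subprocess.run(["vmrun", "start", VMX], check=True)

fresh_test_vm()
print("VM reverted and started; ready for the next test pass.")
```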
|
|
|
Speaking Truth to Power: How to Break the Bad News There comes a time in every software professional's career when telling the truth to someone in power becomes an issue. It can be a difficult situation, but it's far worse to keep silent. Norm Kerth offers some helpful advice on speaking up in ways that are tactful and sincere.
|
|
|
Open Source Tools for Web Application Performance Testing OpenSTA is a solid open source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing introduces you to the basics of OpenSTA: downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests with Commander, capturing the results with Collector, interpreting the results, and exporting captured performance data into Excel for analysis and reporting. As with many open source tools, self-training is the rule. Support comes not from a big vendor's staff but from fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.
- Learn the capabilities of OpenSTA
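By way of illustration, here is a minimal sketch (not part of OpenSTA itself) of the kind of post-test analysis the session covers, assuming the Collector results have been exported to a CSV file. The file name and column name are hypothetical; adjust them to match your actual export.

```python
# A minimal sketch: summarize exported performance results from a CSV file.
# "results.csv" and the "ResponseTimeMs" column are hypothetical names.
import csv
from statistics import mean

def summarize(path):
    """Print sample count, average, and 90th-percentile response time."""
    with open(path, newline="") as f:
        times = sorted(float(row["ResponseTimeMs"])  # hypothetical column
                       for row in csv.DictReader(f))
    if not times:
        print("no samples found")
        return
    pct90 = times[int(0.9 * (len(times) - 1))]
    print(f"samples={len(times)}  avg={mean(times):.1f} ms  "
          f"90th percentile={pct90:.1f} ms")

summarize("results.csv")
```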
|
Dan Downing, Mentora Inc.
|
|
Measuring the End Game of a Software Project - Part Deux The schedule shows only a few weeks before product delivery. How do you know whether you are ready to ship? Test managers have dealt with this question for years, often without supporting data. Mike Ennis has identified six key metrics that will significantly reduce the guesswork. These metrics are percentage of tests complete, percentage of tests passed, number of open defects, defect arrival rate, code churn, and code coverage. These six metrics, taken together, provide a clear picture of your product's status. Working with the project team, the test manager determines acceptable ranges for these metrics. Displaying them on a spider chart and observing how they change from build to build enables a more accurate assessment of the product's readiness. Learn how you can use this process to quantify your project's "end game".
- Decide what and how to measure
- Build commitment from others on your project
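A minimal sketch of how the six metrics above might be checked against team-agreed ranges from build to build; the ranges and build values are illustrative placeholders, not Mike's recommendations.

```python
# A minimal sketch: flag "end game" metrics that fall outside the
# acceptable ranges the team agreed on. All numbers are illustrative.
ACCEPTABLE = {                        # metric: (min, max)
    "pct_tests_complete":  (95, 100),
    "pct_tests_passed":    (90, 100),
    "open_defects":        (0, 25),
    "defect_arrival_rate": (0, 5),    # new defects per day
    "code_churn":          (0, 2),    # percent of lines changed this build
    "code_coverage":       (70, 100),
}

def readiness(build_metrics):
    """Return only the metrics that fall outside their agreed range."""
    return {m: v for m, v in build_metrics.items()
            if not ACCEPTABLE[m][0] <= v <= ACCEPTABLE[m][1]}

build_42 = {"pct_tests_complete": 97, "pct_tests_passed": 88,
            "open_defects": 12, "defect_arrival_rate": 3,
            "code_churn": 1.5, "code_coverage": 74}
out_of_range = readiness(build_42)
print("SHIP-READY" if not out_of_range else f"Out of range: {out_of_range}")
```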
|
Mike Ennis, Savant Technology
|
|
Practical Model-Based Testing for Interactive Applications Model-based tests are most often created from state-transition diagrams. Marlon Vieira generates automated system tests for many Siemens systems from use cases and activity diagrams. The generated test cases are then executed using a commercial capture-replay tool. Marlon begins by describing the types of models used, the roles of those models in test generation, and the basic test generation process. He shares the weaknesses of some techniques and offers suggestions on how to strengthen them to provide the required control flow and data flow coverage. Marlon describes the cost benefits and fault detection capabilities of this testing approach. Examples from a Web-based application illustrate the modeling and testing concepts.
- Learn how to implement model-based testing in your organization
- Create effective scripts for use by automation tools
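To make the underlying idea concrete, here is a minimal sketch that derives one test per transition from a state-transition model via shortest-path search. The login-dialog model is invented for illustration; it is not a Siemens system or Marlon's actual generator.

```python
# A minimal sketch of model-based test generation: for every transition,
# find the shortest event sequence reaching it, yielding one test per
# transition (transition coverage). The model itself is a toy example.
from collections import deque

TRANSITIONS = {  # state: [(event, next_state), ...]
    "LoggedOut": [("enter_valid_credentials", "LoggedIn"),
                  ("enter_bad_credentials", "Error")],
    "Error":     [("dismiss", "LoggedOut")],
    "LoggedIn":  [("log_out", "LoggedOut")],
}

def shortest_event_path(start, goal):
    """Breadth-first search over states; returns the event list from
    start to goal, or None if goal is unreachable."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, events = queue.popleft()
        if state == goal:
            return events
        for event, nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, events + [event]))
    return None

tests = []
for state, outgoing in TRANSITIONS.items():
    for event, _target in outgoing:
        prefix = shortest_event_path("LoggedOut", state)
        if prefix is not None:
            tests.append(prefix + [event])

for i, t in enumerate(tests, 1):
    print(f"test {i}: " + " -> ".join(t))
```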
|
Marlon Vieira, Siemens Corporate Research, Inc.
|
|
Testing for Sarbanes-Oxley Compliance In the wake of huge accounting scandals, many organizations are now being required to conform to Sarbanes-Oxley (SOX) legal requirements regarding internal controls. Many of these controls are implemented within computer applications. As testers, we should be aware of these new requirements and ensure that those controls are tested thoroughly. Specifically, testers should identify SOX-based application requirements, design automated test cases for
those requirements, create test data and test environments to support those tests, and document the test results in a way understandable by and acceptable to auditors, both internal and external. To be most efficient, SOX testing should not be separate but should be incorporated into system testing.
- Learn the SOX testing lifecycle
- Identify testable requirements for SOX compliance testing
- Review SOX test automation strategies
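As a hedged illustration of what an automated control check might look like, here is a minimal sketch testing a dual-approval rule; the control, threshold, and record layout are invented for the example, not actual SOX mandates.

```python
# A minimal, hypothetical sketch of one automated internal-control test:
# payments above a threshold must carry two distinct approvers.
import unittest

APPROVAL_THRESHOLD = 10_000  # hypothetical control threshold

def violates_dual_approval(txn):
    """True if a transaction over the threshold lacks two distinct approvers."""
    return (txn["amount"] > APPROVAL_THRESHOLD
            and len(set(txn["approvers"])) < 2)

class TestDualApprovalControl(unittest.TestCase):
    def test_large_payment_with_two_approvers_passes(self):
        txn = {"id": "T-1001", "amount": 25_000,
               "approvers": ["alice", "bob"]}
        self.assertFalse(violates_dual_approval(txn))

    def test_single_approver_on_large_payment_is_flagged(self):
        txn = {"id": "T-1002", "amount": 25_000, "approvers": ["alice"]}
        self.assertTrue(violates_dual_approval(txn))

if __name__ == "__main__":
    # verbosity=2 prints one line per test, output auditors can file.
    unittest.main(verbosity=2)
```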
|
Suresh Chandrasekaran, Cognizant
|
|
Measuring the "Good" in "Good Enough Testing" The theory of "good enough" software requires determining the trade off between delivery date (schedule), absence of defects (quality), and feature richness (functionality) to achieve a product which can meet both the customer's
needs and the organization's expectations. This may not be the best approach for pacemakers and commercial avionics software, but it is appropriate for many commercial products. But can we quantify these factors? Gregory Pope
does. Using the COQALMOII model, Halstead metrics, and defect seeding to predict defect insertion and removal rates; the Musa/Everette model to predict reliability; and MatLab for verifying functional equivalence testing, Greg
evaluates both quality and functionality against schedule.
- Review how to measure test coverage
- Discover the use of models to predict quality
- Learn what questions you should ask customers to determine "good enough"
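For readers unfamiliar with the reliability modeling mentioned above, here is a minimal sketch of the standard equations of the Musa basic execution-time model; the parameter values are illustrative, not figures from Greg's projects.

```python
# A minimal sketch of the Musa basic execution-time model:
#   expected failures found:  mu(tau)  = nu0 * (1 - exp(-lam0 * tau / nu0))
#   failure intensity:        lam(tau) = lam0 * exp(-lam0 * tau / nu0)
# Parameter values below are illustrative placeholders.
import math

lam0 = 20.0   # initial failure intensity (failures per CPU-hour)
nu0 = 100.0   # total expected failures over unlimited execution time

def failures_by(tau):
    """Expected cumulative failures after tau CPU-hours of testing."""
    return nu0 * (1 - math.exp(-lam0 * tau / nu0))

def intensity_at(tau):
    """Failure intensity (failures/CPU-hour) after tau CPU-hours."""
    return lam0 * math.exp(-lam0 * tau / nu0)

for tau in (0, 5, 10, 20, 40):
    print(f"tau={tau:>3} h  found={failures_by(tau):5.1f}  "
          f"intensity={intensity_at(tau):5.2f}/h")
```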
|
Gregory Pope, Lawrence Livermore National Laboratory
|
|
Branch Out Using Classification Trees for Test Case Design Classification trees are a structured, visual approach to identifying and categorizing equivalence partitions for test objects and to documenting test requirements so that anyone can understand them and quickly build test cases. Join Julie Gardiner to look at the fundamentals of classification trees and how they can be applied in both traditional test and development environments. Using examples, Julie shows you how to use the classification tree technique, how it complements other testing techniques, and its value at every stage of testing. She demonstrates a classification tree editor, one of the free and commercial tools now available to aid in building, maintaining, and displaying classification trees.
- Develop classification trees for test objects
- Understand the benefits and rewards of using classification trees
- Know when and when not to use classification trees
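A minimal sketch of the mechanics: each top-level classification contributes exactly one class to every test case, and logically impossible combinations are pruned. The flight-booking tree and its constraint are invented examples, not Julie's material.

```python
# A minimal sketch: build test cases from a classification tree by taking
# one class per classification, then pruning infeasible combinations.
from itertools import product

TREE = {  # classification: its classes (the tree's leaves)
    "passenger_type": ["adult", "child", "infant"],
    "ticket_class":   ["economy", "business"],
    "payment":        ["card", "voucher"],
}

def infeasible(case):
    """Prune impossible combinations (a sample constraint for illustration)."""
    return case["passenger_type"] == "infant" and case["payment"] == "voucher"

names = list(TREE)
cases = [dict(zip(names, combo)) for combo in product(*TREE.values())]
cases = [c for c in cases if not infeasible(c)]

for i, c in enumerate(cases, 1):
    print(f"TC{i:02}: {c}")
```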
|
Julie Gardiner, QST Consultants Ltd.
|
|
Keeping it Between the Ditches: A Dashboard to Guide Your Testing As a test manager, you need to know how testing is proceeding at any point during the test. You are concerned with important factors such as test time remaining, resources expended, product quality, and test quality. When
unexpected things happen, you may need additional information. Like the dashboard in your car, a test manager's dashboard is a collection of metrics that
can help keep your testing effort on track (and out of the ditch). In this session, Randall Rice will explore what should be on your dashboard, how to obtain the data, how to track the results and use them to make informed decisions, and how to convey the results to management. Randall will present examples of various dashboard styles.
- Build your own test management dashboard
- Select useful metrics for your dashboard
- Use the dashboard to successfully control the test
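A minimal sketch of one dashboard gauge, reducing a few tracked metrics to green/yellow/red readings; the metrics and thresholds are illustrative placeholders, not Randall's recommendations.

```python
# A minimal sketch of a test dashboard: each metric reads green, yellow,
# or red against agreed bounds. All values here are illustrative.
def gauge(value, green, yellow):
    """green/yellow are (low, high) bounds; anything outside both is red."""
    if green[0] <= value <= green[1]:
        return "GREEN"
    if yellow[0] <= value <= yellow[1]:
        return "YELLOW"
    return "RED"

snapshot = {
    "pct_tests_executed": 72.0,  # planned tests actually run so far
    "pct_tests_passing":  91.0,
    "open_sev1_defects":  3,
}

print(f"executed : {snapshot['pct_tests_executed']:5.1f}%  "
      f"{gauge(snapshot['pct_tests_executed'], (70, 100), (50, 70))}")
print(f"passing  : {snapshot['pct_tests_passing']:5.1f}%  "
      f"{gauge(snapshot['pct_tests_passing'], (95, 100), (85, 95))}")
print(f"sev1 open: {snapshot['open_sev1_defects']:5d}   "
      f"{gauge(snapshot['open_sev1_defects'], (0, 0), (1, 3))}")
```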
|
Randy Rice, Rice Consulting Services, Inc.
|