|
Better Software Conference & EXPO 2007

The Art of SOA Testing: Theory and Practice
Service Oriented Architecture (SOA) based on Web Services standards has ushered in a new era in how applications are designed, developed, and deployed. SOA's promise to enable applications built by combining loosely coupled, interoperable services poses new challenges for testers and for everyone involved with software reliability and security. Among the challenges are dealing with multiple Web Services standards and implementations, legacy applications (of unknown quality) now exposed as Web services, weak or non-existent security controls, and services of possibly diverse origins chained together to create dynamic application implementations. Join Rizwan Mallal to learn the concepts, skills, and powerful techniques (WSDL chaining, schema mutation, and automated filtration) you need to meet these challenges.
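Of those techniques, schema mutation is easy to sketch: start from a request that is valid against the service's schema and generate systematically invalid variants. The message shape and field names below are invented for illustration; this is a minimal sketch of the idea, not the speaker's tooling.

```python
# A minimal sketch of schema mutation, assuming an invented message
# shape and field names; illustrative only, not the speaker's tooling.
import copy
import xml.etree.ElementTree as ET

REQUEST_TEMPLATE = "<order><id>42</id><quantity>1</quantity><note>hi</note></order>"

def mutations(xml_text):
    """Yield copies of a well-formed request, each violating the
    (assumed) schema in a different way."""
    base = ET.fromstring(xml_text)

    # Mutation 1: oversized string in a length-bounded field.
    m = copy.deepcopy(base)
    m.find("note").text = "A" * 100_000
    yield ET.tostring(m, encoding="unicode")

    # Mutation 2: wrong type in a numeric field.
    m = copy.deepcopy(base)
    m.find("quantity").text = "not-a-number"
    yield ET.tostring(m, encoding="unicode")

    # Mutation 3: drop a required element entirely.
    m = copy.deepcopy(base)
    m.remove(m.find("id"))
    yield ET.tostring(m, encoding="unicode")

for i, payload in enumerate(mutations(REQUEST_TEMPLATE), 1):
    # In practice each payload would be POSTed to the service endpoint;
    # a robust service should reject all of them with a clear fault.
    print(f"mutation {i}: {payload[:60]}")
```

Each rejected mutant increases confidence that the service validates input against its schema rather than trusting callers.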
|
Rizwan Mallal, Crosscheck Networks
|
|
Static Analysis and Secure Code Reviews
Security threats are becoming increasingly dangerous to consumers and to your organization. Paco Hope provides the latest on static analysis techniques for finding vulnerabilities and the tools you need for performing white-box secure code reviews. He provides guidance on selecting and using source code static analysis and navigation tools. Learn why secure code reviews are imperative and how to implement a secure code review process in terms of tasks, tools, and artifacts. In addition to describing the steps in the static analysis process, Paco explains methods for examining threat boundaries, error handling, and other "hot spots" in software. Find out about Attack Resistance Analysis, Ambiguity Analysis, and Underlying Framework Analysis as techniques for exposing risk and prioritizing remediation of insecure code. A toy sketch of static analysis at its simplest follows the takeaway below.
- Why secure code reviews are the right approach for finding security defects
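To make that concrete: the simplest form of static analysis is pattern matching over source text for known-risky calls. Real tools of the kind discussed here build syntax trees and data-flow models, so treat the list of calls and the matching below as illustrative assumptions only.

```python
# Toy illustration of the idea behind static analysis: scan C source
# for calls that are common security "hot spots". Real tools parse the
# code and track data flow; this is pattern matching only.
import re
import sys

RISKY_CALLS = {
    "strcpy":  "unbounded copy; consider a bounded alternative",
    "sprintf": "unbounded format; consider snprintf",
    "gets":    "never safe; consider fgets",
}

def scan(path):
    pattern = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")
    with open(path, encoding="utf-8", errors="replace") as src:
        for lineno, line in enumerate(src, 1):
            for match in pattern.finditer(line):
                call = match.group(1)
                print(f"{path}:{lineno}: {call}: {RISKY_CALLS[call]}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)
```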
|
Paco Hope, Cigital
|
|
Analyze Customer-Found Defects to Improve System Testing
How do we know if we have made the right choices regarding the way we tested a product? Did we focus our efforts in the right areas? Only a careful, orchestrated analysis of customer-found bugs will give us the answers. Post-release bugs yield a wealth of information: the need for more code coverage in our tests, the value of our regression testing, the validity of our load-generating scripts, our choices of target environments, tests we do not need to run, and more. Evelyn Moritz describes how to gather, analyze, categorize, and measure customer-found bugs in ways that help testers and test departments become more efficient and effective at finding the types of bugs that impact their customers the most. A small sketch of this kind of analysis follows the list below.
- Information you should collect about customer-found bugs
- Techniques for bug analysis and reporting
- How customer-found bugs can be used to improve system testing
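Assuming a hypothetical CSV export of bug records (the field names are invented, since the talk prescribes no data format), a first pass at the analysis might tally who found each bug and where customer-found bugs cluster:

```python
# A minimal sketch of the analysis described above, assuming an
# invented CSV export with "found_by" and "area" columns.
import csv
from collections import Counter

def summarize(path):
    found_by = Counter()      # who found each bug: "test" vs "customer"
    escaped_area = Counter()  # functional area of customer-found bugs
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            found_by[row["found_by"]] += 1
            if row["found_by"] == "customer":
                escaped_area[row["area"]] += 1
    total = sum(found_by.values())
    escape_rate = found_by["customer"] / total if total else 0.0
    print(f"defect escape rate: {escape_rate:.1%} of {total} bugs")
    for area, n in escaped_area.most_common():
        print(f"  escaped in {area}: {n}")

summarize("bugs.csv")  # hypothetical export file
```

The escape rate gives a single trend to watch release over release, while the per-area tallies point at where system testing needs more coverage.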
|
Evelyn Moritz, AVAYA
|
|
The Testing Center of Excellence
When it comes to system and acceptance testing, project teams often end up scrambling for resources late in the project schedule. The test team must be assembled or expanded, learn the application, and improve its skills before testing begins. When the project ends, the team is downsized or disbanded, and its knowledge, skills, and experience are diminished or lost. David Wong thinks there is a better way: organize skilled individuals into a Testing Center of Excellence (TCOE) to leverage their built-up expertise and application knowledge. A TCOE increases operational efficiency and provides your organization with one-stop shopping for all testing services. The TCOE is responsible for scheduling test cycles, recruiting and training new staff, and retaining a pool of talented test professionals.
|
David Wong and Dalim Khandaker, CGI
|
|
When Will the Product Be Ready to Ship? A "Hurricane Tracking System"
Most test execution tracking systems are backward looking and do not attempt to quantify what remains to be done. Management, on the other hand, is forward looking, asking, "When will testing be done?" That question itself is fundamentally flawed, implying that testing is either "done" or "not done." What management should be asking is, "When will the risks be acceptable to release the product?" David Gilbert presents a unique approach to tracking and predicting the progress of testing efforts. Using the metaphor of hurricane tracking, he shows how "what if" scenarios can be created to demonstrate the costs and benefits of various test execution scenarios. Take back novel techniques to give your team and senior management the key information they need to relate the testing effort to the bottom-line impact of product release. A minimal sketch of the forecast idea follows the takeaway below.
- Hurricane tracking as a model for test progress tracking
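The numbers below are invented, and this is a sketch of the metaphor rather than David's actual system: bracket the remaining work with optimistic, expected, and pessimistic execution rates, the way a hurricane track is bracketed by a forecast cone.

```python
# Sketch of a "forecast cone" for test execution, with assumed numbers:
# project completion under three observed daily rates.
import math

remaining_tests = 480          # tests left to execute (assumed)
rates_per_day = {              # observed daily execution rates (assumed)
    "optimistic":  40,         # best recent day
    "expected":    25,         # recent average
    "pessimistic": 12,         # worst recent day
}

for scenario, rate in rates_per_day.items():
    days = math.ceil(remaining_tests / rate)
    print(f"{scenario:>11}: ~{days} working days to complete execution")
```

The spread between the optimistic and pessimistic projections is itself useful information: a wide cone tells management the risk picture is still uncertain.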
|
David Gilbert, Sirius Software Quality Associates
|
|
How to Design Frustrating Products
In the software business, poor product design leads to frustration and wasted time for our customers. Although we can ignore "usability" and "good design" without hurting a product's initial success, sales and customer satisfaction will suffer in the long run. Usability has been discussed at great length, but many of the accepted design conventions either lack explanations of where and how to apply them or are simply untrue. Sanjeev Verma explains how to ignore usability, save valuable time during the design phase of a product, and apply that time where it really counts: on new feature development. Kendra Yourtee offers proven practices that she has used in her daily routines to improve the usability of products as they are updated. She discusses simple ways to "test" designs against real data before the software is complete.
|
Sanjeev Verma, Microsoft
|
|
Navigating the Installation
If you've ever popped a CD into a drive and run an install for software you're about to test, then you might be performing installation testing indirectly. If not properly installed, an application could give false results for all other testing. A better strategy is to test the install process directly, which will give you greater confidence in the quality of your software.
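One direct test of the install is a post-install smoke check. The file manifest below is hypothetical, since the abstract names no product, but the pattern of verifying what the installer claims to have done is the point.

```python
# A minimal post-install smoke check against an invented manifest of
# files the installer should have laid down; a first step toward
# testing the install process directly rather than trusting it.
import os

EXPECTED = [                     # hypothetical install manifest
    "/opt/acme/bin/acme",
    "/opt/acme/conf/acme.conf",
    "/opt/acme/lib/libacme.so",
]

missing = [p for p in EXPECTED if not os.path.exists(p)]
if missing:
    print("install incomplete, missing:")
    for p in missing:
        print(f"  {p}")
else:
    print("all expected files present")
```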
|
|
|
What's Wrong with Your Testing Strategy?
When the design and the coding are complete, and the product seems ready to ship, it's hard to understand why testing takes so long. Discover how your source code management system can help you unblock the testing bottleneck.
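The abstract does not say which SCM capability it has in mind; one plausible reading is using the change set to scope regression testing. The sketch below shows that idea with git, with the release tag and suite mapping invented for illustration.

```python
# One plausible reading of the idea above, sketched with git: use the
# change set since the last release to scope regression testing to the
# areas actually touched. Tag name and suite mapping are assumptions.
import subprocess

def changed_paths(since_tag="last-release"):   # tag name is assumed
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{since_tag}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

# Map top-level directories to test suites (mapping invented here).
SUITES = {"billing": "tests/billing", "ui": "tests/ui"}

suites = {SUITES[p.split("/")[0]] for p in changed_paths()
          if p.split("/")[0] in SUITES}
print("suites to run:", sorted(suites) or ["full regression"])
```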
|
|
|
Test Design with Risk in Mind
Sometimes testing uncovers problems that surprise us, and risk-based testing exists to reduce those surprises. Build your tests around "What if...?" statements to help you anticipate problems before they arise.
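One common way to act on "What if...?" statements is to score each for likelihood and impact and test the highest-scoring scenarios first; the statements and the 1-5 scales below are invented examples of that pattern.

```python
# A minimal sketch of ranking "What if...?" statements by risk score
# (likelihood x impact); all items and scales are invented examples.
risks = [
    # (what-if statement,                          likelihood 1-5, impact 1-5)
    ("What if the network drops mid-transaction?",             3, 5),
    ("What if two users edit the same record?",                4, 4),
    ("What if the import file is empty?",                      5, 2),
    ("What if the disk fills during a save?",                  2, 5),
]

for statement, likelihood, impact in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"score {likelihood * impact:2d}: {statement}")
```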
|
|
|
Hurry Up & Wait
There are no industry standards for Web response times. How long a user is willing to wait for a Web page to load depends on any number of variables and conditions. Find out how to determine and quantify performance criteria and use those criteria to create happy customers.
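Quantifying a performance criterion usually means stating it as a percentile and measuring against it; the sample data and the goal below are assumptions used only to show the arithmetic.

```python
# A sketch of checking a response-time criterion, with invented sample
# data: state the goal as a percentile, then measure against it.
import statistics

samples_ms = [420, 380, 510, 2900, 460, 440, 395, 475, 505, 431]  # assumed
goal_ms = 1000   # e.g. "95% of pages load within 1 second" (assumed goal)

p95 = statistics.quantiles(samples_ms, n=100)[94]  # 95th percentile
verdict = "meets" if p95 <= goal_ms else "misses"
print(f"95th percentile = {p95:.0f} ms; {verdict} the {goal_ms} ms goal")
```

Note how a single slow outlier dominates the percentile here; that is exactly why percentile goals describe the user experience better than averages do.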
|
|