|
Variations on a Theme: Performance Testing and Functional Unit Testing

The right types of performance tests can reveal functionality problems that would not usually be detected during unit testing. For example, concurrency and thread safety problems can manifest themselves as poor performance, deadlocks, or incorrect output. Because unit tests inherently lack concurrent activity, these problems rarely surface in functional tests. André Bondi describes test structures based on rudimentary models that reveal valuable insights about scalability, performance, and system function. For example, to ensure that resource utilization increases linearly with the load (a necessary condition for scalability), transactions should be submitted to the system at different rates over long periods of time. Conversely, when the load is constant, performance measures should remain constant.
|
André Bondi, Siemens Corporate Research
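The linearity check described above can be sketched in a few lines. The following is a minimal illustration, assuming a hypothetical run_at_rate(rate) driver that soaks the system under test at a given transaction rate and returns the average resource utilization it observed; the function name and tolerance are illustrative, not from the session.

```python
import statistics

def check_utilization_linearity(run_at_rate, rates, tolerance=0.10):
    """Submit sustained load at several arrival rates and check that
    measured resource utilization grows linearly with the rate.

    run_at_rate(rate) is a hypothetical driver: it holds the given
    transactions/second for a fixed soak period and returns the average
    utilization observed (e.g., CPU busy fraction).
    """
    samples = [(rate, run_at_rate(rate)) for rate in rates]

    # Utilization = service demand x throughput (the utilization law),
    # so utilization / rate should stay roughly constant when resource
    # consumption scales linearly with load.
    demands = [util / rate for rate, util in samples]
    mean_demand = statistics.mean(demands)

    for (rate, util), demand in zip(samples, demands):
        deviation = abs(demand - mean_demand) / mean_demand
        if deviation > tolerance:
            print(f"rate={rate}: demand {demand:.4f} deviates "
                  f"{deviation:.1%} from the mean {mean_demand:.4f}")
            return False
    return True
```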
|
|
A Customer-driven Approach to Software Metrics

In their drive to delight customers, organizations initiate testing and quality improvement programs and define metrics to measure their success. In many cases, we see organizations declare success even though their customers do not see much improvement in either products or services. Wenje Lai and J.P. Chen share their approach to identifying quality improvement needs and defining metrics that link improvement goals to customer experiences. As a result, the resources allocated to internal quality improvement efforts maximize the value to the business. Their approach is a simple three-step procedure that any test or development organization can easily adopt. It starts with using customer survey data to understand the organization’s customer pain points and ends with identifying the metrics that are linked to the customer experience and actionable by development and test teams inside the organization.
|
Wenje Lai, Cisco Systems
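As a rough sketch of the first and last steps of such a procedure, the snippet below ranks survey categories by average satisfaction and maps each pain point to an internal metric. The categories, scores, and metric mapping are invented for illustration, not taken from the session.

```python
from collections import defaultdict

def rank_pain_points(responses):
    """Rank survey categories from most to least painful.

    responses is assumed to be an iterable of (category, score) pairs,
    where score is a 1-5 satisfaction rating; a lower average rating
    marks a bigger customer pain point.
    """
    scores = defaultdict(list)
    for category, score in responses:
        scores[category].append(score)
    return sorted(scores, key=lambda c: sum(scores[c]) / len(scores[c]))

# Final step: tie each pain point to a metric internal teams can act on.
# This mapping is purely illustrative.
METRIC_FOR = {
    "upgrade experience": "escapes found by the upgrade-path regression suite",
    "documentation": "percentage of features shipped with reviewed docs",
    "stability": "mean time between field-reported crashes",
}

survey = [("stability", 2), ("stability", 3), ("documentation", 4)]
for pain_point in rank_pain_points(survey):
    print(pain_point, "->", METRIC_FOR.get(pain_point, "define a metric"))
```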
|
|
Debunking Agile Testing Myths

What do the Agile Manifesto and various agile development lifecycle implementations really mean for the testing profession? Extremists say they mean “no testers”; others believe it’s just “business as usual” for testers. As a test manager who has been around the block a few times, Geoff Horne has participated in countless test projects, both agile and traditional. Some of his traditional thinking about testing was turned on its ear and challenged by the key precepts of agile development. He’s discovered that traditional projects can achieve many benefits of the agile testing approach. In this revealing session, Geoff identifies and dispels the myths surrounding agile testing and demonstrates how traditional and agile methods can co-exist within a single project. For testers not versed in agile, Geoff offers suggestions for being prepared to work on an agile project when the opportunity arises.
|
Geoff Horne, iSQA
|
|
Futility-based Test Automation

Developers and other project stakeholders are paying increased attention to test automation because of its promise to speed development and reduce the costs of systems over their complete lifecycle. Unfortunately, flawed test automation efforts have prevented many teams from achieving the productivity and savings that their organizations expect and demand. Clint Sprauve shares his real-world experiences, exposing the most common bad habits that test automation teams practice. He reveals the common misconceptions about keyword-driven testing, test-driven development, behavior-driven development, and related methodologies that can lead to futility-based test automation. Regardless of your test automation methodology or whether you operate in a traditional or agile development environment, Clint offers ways to avoid the “crazy cycle” of script maintenance and to incrementally improve your test automation practices.
|
Clinton Sprauve, Borland (a Micro Focus company)
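For readers unfamiliar with the pattern at the heart of those misconceptions, here is keyword-driven testing in miniature. The keywords and application hooks are stand-ins, not any particular tool's API.

```python
# Keyword-driven testing in miniature: test cases are data (keyword rows),
# and a small driver maps each keyword to an implementation function.
# The application hooks below are placeholders for real automation calls.

def open_app(name):      print(f"opening {name}")
def click(button):       print(f"clicking {button}")
def assert_text(expect): print(f"verifying text == {expect!r}")

KEYWORDS = {
    "OpenApp": open_app,
    "Click": click,
    "AssertText": assert_text,
}

# A test case is just rows of (keyword, argument) - maintainable as data,
# which is the point usually lost when scripts hard-code every step.
login_test = [
    ("OpenApp", "billing"),
    ("Click", "Login"),
    ("AssertText", "Welcome"),
]

def run(test_case):
    for keyword, arg in test_case:
        KEYWORDS[keyword](arg)   # an unknown keyword raises KeyError

run(login_test)
```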
|
|
Testing the System's Architecture

The architecture is a key foundation for developing and maintaining flexible, powerful, and sustainable products and systems. Experience has shown that deficiencies in the architecture cause too many project failures. Who is responsible for adequately validating that the architecture meets the objectives for a system? And how does architecture testing differ from unit testing and component testing? Architecture testing is not even defined in the ISTQB glossary. Peter Zimmerer describes what architecture testing is all about and shares a list of practices for implementing this type of testing within test and development organizations. Peter offers practical advice on the required tasks and activities as well as the roles, contributions, and responsibilities of software architects and others in the organization. Learn what architecture testing really means and how to establish and lead it in your projects.
|
Peter Zimmerer, Siemens AG
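One concrete shape an architecture test can take is an automated check that module dependencies respect the intended layering. The sketch below scans Python sources for forbidden imports; the layer names and directory layout are hypothetical, and this is only one of many possible forms of architecture testing.

```python
import ast
from pathlib import Path

# Hypothetical layering rule: presentation code must not import
# persistence code directly. Automating the rule as a test is one
# concrete way to validate an architectural decision continuously.
FORBIDDEN = {"myapp.ui": "myapp.db"}

def imports_of(path):
    """Yield the module names imported by one Python source file."""
    tree = ast.parse(Path(path).read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            yield from (alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module

def test_layering(source_root="src"):
    violations = []
    for path in Path(source_root).rglob("*.py"):
        # Derive a dotted module name by dropping the source root.
        module = ".".join(path.with_suffix("").parts[1:])
        for layer, banned in FORBIDDEN.items():
            if module.startswith(layer):
                for imp in imports_of(path):
                    if imp.startswith(banned):
                        violations.append(f"{module} imports {imp}")
    assert not violations, "\n".join(violations)
```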
|
|
Grassroots Quality: Changing the Organization One Person at a Time

Throughout its history, SAS has valued innovation and agility over formal processes. Attempts to impose corporate-wide policies have been viewed with suspicion and skepticism. For quality analysts and test groups with a quality mission, the challenge is to marry innovation with the structures expected from a quality-managed development process. Frank Lassiter shares his group’s experiences of working within the corporate culture rather than struggling against it. He describes the services his group provides to individual contributors: mentoring, facilitating meetings, exploring best practices, and technical writing support. With a reputation for adding real, immediate value to the daily tasks of individuals on R&D teams, Frank’s group is enthusiastically invited into projects.
|
Frank Lassiter, SAS Institute Inc.
|
|
Testing with Emotional Intelligence

Our profession can have an enormous emotional impact, on others as well as on us. We're constantly dealing with fragile egos, highly charged situations, and pressured people playing a high-stakes game under conditions of massive uncertainty. On top of this, we're often the bearers of bad news and are sometimes perceived as the "critics", activating people's primal fear of being judged. Emotional Intelligence (EI), the concept popularised by Harvard psychologist and science writer Daniel Goleman, has much to offer our profession. Key EI skills include self-awareness, self-management, social awareness, and relationship management. Explore the concept of EI, assess your own levels of EI, and look at ways in which EI can help in areas including anger management, controlling negative thoughts, constructive criticism, and dealing with conflict, all discussed within the context of the testing profession.
|
Thomas McCoy, Department of FaHCSIA
|
|
Testing Embedded Software Using an Error Taxonomy

Just like the rest of the software world, embedded software has defects. Today, embedded software is pervasive, built into automobiles, medical diagnostic devices, telephones, airplanes, spacecraft, and almost everything else. Because defects in embedded software can cause constant customer frustration, complete product failure, and even death, it is critical to collect and categorize the types of errors typically found in embedded software. Jon Hagar describes the few error studies that have been done in the embedded domain and the work he has done to turn that data into a valuable error taxonomy. After explaining the concept of a taxonomy and how you can use it to guide test planning for embedded software, he discusses ways to design tests that exploit the taxonomy and find important defects in your embedded system.
|
Jon Hagar, Consultant
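To make the idea concrete, an error taxonomy can be captured as plain data and used to drive test planning. The categories and techniques below are invented placeholders for illustration, not the taxonomy from the studies Jon describes.

```python
# An error taxonomy as plain data: each known class of embedded error
# is paired with test design techniques aimed at exposing it. All
# entries here are illustrative stand-ins.
EMBEDDED_ERROR_TAXONOMY = {
    "timing": {
        "examples": ["missed deadline", "race on shared sensor buffer"],
        "test techniques": ["worst-case load injection", "clock skew tests"],
    },
    "hardware interface": {
        "examples": ["wrong register width", "endianness mix-up"],
        "test techniques": ["boundary I/O values", "fault insertion"],
    },
    "resource exhaustion": {
        "examples": ["stack overflow in an ISR", "heap fragmentation"],
        "test techniques": ["long-duration soak runs", "memory watermarking"],
    },
}

def plan_tests(taxonomy):
    """Emit one planning line per technique so every known error class
    gets at least one targeted test design."""
    for category, info in taxonomy.items():
        for technique in info["test techniques"]:
            yield f"{category}: design tests using {technique}"

for line in plan_tests(EMBEDDED_ERROR_TAXONOMY):
    print(line)
```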
|
|
Requirements-Based Testing on Agile Projects

If your agile project requires documented test case specifications and automated regression testing, this session is for you. Cause-effect graphing, a technique for modeling requirements to confirm that they are consistent, complete, and accurate, can be a valuable tool for testers within agile environments. Whether the source material is story cards, use cases, or lightly documented discussions, you can use cause-effect graphing to confirm user requirements and automatically generate robust test cases. Dick Bender explains how to deal with short delivery times, rapid iterations, and the way requirements are documented and communicated on agile projects. By updating the cause-effect graph models from sprint to sprint as requirements emerge, you can immediately regenerate the related test cases. This approach is far more workable than attempting to maintain the test specifications manually.
|
Richard Bender, Bender RBT, Inc.
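A toy version of the underlying idea: model causes as booleans and an effect as a relation over them, then generate test cases, each carrying its expected result, from the model. Real cause-effect graphing tools add constraints between causes and prune the combinations sharply; this brute-force enumeration is only a sketch, and the banking example is invented.

```python
from itertools import product

# Toy cause-effect model for a hypothetical card transaction: each
# cause is a named boolean, and the effect is a relation over them.
CAUSES = ["valid_account", "sufficient_funds", "card_not_expired"]

def effect_approve(valid_account, sufficient_funds, card_not_expired):
    return valid_account and sufficient_funds and card_not_expired

def generate_tests(causes, effect):
    """Enumerate cause combinations and record the expected effect,
    so every generated test case carries its own oracle."""
    for values in product([True, False], repeat=len(causes)):
        assignment = dict(zip(causes, values))
        yield assignment, effect(**assignment)

for assignment, expected in generate_tests(CAUSES, effect_approve):
    print(assignment, "->", "approve" if expected else "decline")
```

When the model changes between sprints, only the cause list and the effect relation are edited; the test cases regenerate themselves, which is the maintenance advantage the session describes.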
|
|
Operational Testing: Walking a Mile in the User's Boots

Often, it is a long way from the system’s written requirements to what the end user really needs. When testing is based on the requirements and focuses solely on the features being implemented, one critical perspective may be forgotten: whether the system is fit for its intended purpose and does what the users need it to do. Gitte Ottosen rediscovered this fact while participating in the development of a command and control system for the army, leading the test team to take testing to the trenches and implement operational testing, also called scenario-based testing. Although her team had used domain advisors extensively when designing the system and developing requirements, they decided to design and execute large operational scenarios with real users doing the testing. Early on, they were able to see and experience the system in action from the trenches and give developers needed feedback.
|
Gitte Ottosen, Systematic Software Engineering
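In code, an operational scenario reads less like a feature check and more like a scripted slice of the user's day. The skeleton below strings hypothetical steps of a field-report flow into one test; every hook is a stand-in, not part of any real system.

```python
# A scenario-based test strings user-level steps into one operational
# flow rather than checking features in isolation. The steps and the
# fake system hooks here are purely illustrative.
def report_position(unit, grid):   return f"{unit} at {grid}"
def relay_to_hq(message):          return ("HQ", message)
def hq_acknowledges(received):     return received[0] == "HQ"

def test_field_report_scenario():
    """One end-to-end slice of a day in the field: a unit reports its
    position, the report is relayed, and headquarters acknowledges."""
    message = report_position("Alpha-2", "32U NV 123 456")
    received = relay_to_hq(message)
    assert hq_acknowledges(received), "HQ never saw the field report"

test_field_report_scenario()
```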
|