|
When Testing Becomes a Risk
We test software to prevent bad things from happening when the software is deployed into production. We assess the quality of the software and give well-founded advice on its readiness for release. However, in some cases, the mere act of testing can cause significant problems. Bart Knaack analyzes real-life testing “accidents” that had serious consequences for the business. For example, although most companies spend a lot of money to secure their production environments, many leave their test environments only partially protected. If a hacker gets into the testing environment–or even worse, the defect database–they can wreak havoc or learn all about the vulnerabilities of your system. Bart shares examples of testing accidents, challenging you to create solutions to prevent these accidents from happening in your organization. Life is too short to make all these mistakes yourself. Come and learn from Bart.
|
Bart Knaack, Logica
|
|
Better Than a Root Canal: Root Cause Analysis of Defects
The quality problems many companies face after releasing a new product can be as painful as a root canal. One way to avoid this pain is timely root cause analysis (RCA) during development. Proper RCA and the resulting improvements prevent product failures, eliminate associated rework, and reduce the pain of initial product releases. Based on empirical research into current RCA practices in industry, Jan van Moll explains why many companies fail to do effective root cause analysis in practice. Presenting astonishing RCA data from projects, Jan shares specific examples of successes and failures with RCA. He points out the common pitfalls of defect analysis and demonstrates how to work toward problem solutions in a pragmatic and practical manner. Learn the critical success factors of RCA derived from industry experience to improve your practices and produce better products.
|
Jan van Moll, Philips Healthcare - Magnetic Resonance Systems
|
|
Accelerate Your Testing with Service Virtualization
We all know that it’s not always possible to have our entire system available during testing because internal components may not be ready to execute or external components may be too expensive to use for testing. There is a solution: service virtualization. Glyn Rhodes shares how and when to employ service virtualization techniques to create “stubs”–modules of code that stand in for and simulate portions of your system. While stubbing has traditionally been a developer-only activity, it’s time that testers learn these techniques, too. Glyn explores how testers can create and maintain service virtualization assets and estimate test environment utilization with and without stubs. Learn how strategic and coordinated service virtualization yields enormous benefits: testing downtime is minimized; risk is mitigated; organizational agility increases; and quality is built into your system as early as possible.
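Although the session is tool-oriented, the core idea of a stub is easy to sketch in code. Below is a minimal, illustrative Python example (not taken from the session): a stand-in HTTP service that returns canned responses so the system under test can be exercised while the real dependency is unavailable or too expensive to call. The endpoint path, port, and payload are invented for illustration.

# Minimal sketch of a service stub: a stand-in HTTP endpoint that returns
# canned responses so tests can run while the real dependency is unavailable.
# The path, port, and payload below are illustrative assumptions only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/accounts/123": {"accountId": "123", "balance": 250.00, "status": "ACTIVE"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

if __name__ == "__main__":
    # Point the system under test at http://localhost:8099 instead of the
    # real (unavailable or expensive) downstream service.
    HTTPServer(("localhost", 8099), StubHandler).serve_forever()

In practice, the system under test would simply be configured to call the stub's address in place of the real service; commercial service virtualization tools layer recording, protocol support, and data-driven behavior on top of this basic pattern.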
|
Glyn Rhodes, Green Hat
|
|
STAREAST 2011: Seven Key Factors for Agile Testing Success
What do testers need to do differently to be successful on an agile project? How can agile development teams employ testers' skills and experience for maximum value to the project? Janet Gregory describes the seven key factors she has identified for testers to succeed on agile teams. She explains the whole-team approach of agile development that enables testers to do their job more effectively. Then, Janet explores the "agile testing mindset" that contributes to a tester's success. She describes the different kinds of information that testers on an agile team need to obtain, create, and provide for the team and product owner. Learn the role that test automation plays in the fast-paced development within agile projects, including regression and acceptance tests. By adhering to core agile practices while keeping the bigger picture in mind, testers add significant value to agile projects and help ensure their success.
|
Janet Gregory, DragonFire, Inc.
|
|
Streamline Test Planning with Business Process Models
Test-QA professionals and business-systems analysts have always lived in two separate worlds, with test planning and design segregated from the task of creating business process models. However, a business process modeling (BPM) diagram for the test project–which visually shows the breakdown of activities into individual tasks–can help Test-QA managers see how to organize multiple tasks and create an end-to-end test project flow. Because the BPM effort eliminates many unnecessary steps and inefficiencies, test planning is streamlined and the entire test project is more efficient. Filip Szymanski examines how Test-QA teams can use BPMs as a guide to identify discrete testing components, gain access to already optimized data, ensure there is enough detail in the business process diagrams, and optimize a business process scenario.
|
Filip Szymanski, Hewlett Packard
|
|
Test Execution and Results Analysis: Time for a "Legal" Separation
Generally, testers mix test execution and test analysis. Typically, each test case execution also does its bit of analysis, focusing on the feature under test and comparing actual to expected results. Jacques Durand explains that by declaring and enforcing a "legal" separation between execution and analysis tasks, testers' perspectives automatically change. Rather than comparing only the actual and expected results of a single output, testers can treat every output in every test as fair game for a more comprehensive analysis, leading to finding more bugs sooner. With this separation approach, each test suite is split into a set of test scenarios plus a set of logical test assertions. Join Jacques to learn how to leverage XML to format scenario outputs and other analyzer inputs, and how to write executable declarative test assertions independent from test scenarios.
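To make the separation concrete, here is a minimal, illustrative Python sketch (not the speaker's actual harness): the execution phase only runs the scenario and records every output as XML, and a separate analysis phase applies a list of declarative assertions to all recorded outputs. The XML layout, assertion names, and helper functions are assumptions made for the example.

# Minimal sketch of separating execution from analysis: the scenario phase
# only runs the system and records outputs as XML; the analysis phase later
# applies a set of assertions to every recorded output. The function names
# and the XML layout are illustrative assumptions, not the speaker's format.
import xml.etree.ElementTree as ET

def run_scenario(system_under_test, requests, outfile="scenario_outputs.xml"):
    """Execution phase: capture every output, perform no checking."""
    root = ET.Element("outputs")
    for i, request in enumerate(requests):
        response = system_under_test(request)
        out = ET.SubElement(root, "output", id=str(i), request=str(request))
        out.text = str(response)
    ET.ElementTree(root).write(outfile)

# Assertions are small, independent predicates applied to *every* output,
# not just the one the original test case was focused on.
ASSERTIONS = [
    ("output is not empty", lambda text: bool(text and text.strip())),
    ("no stack trace leaked", lambda text: "Traceback" not in text),
    ("no internal error marker", lambda text: "ERROR" not in text.upper()),
]

def analyze(outfile="scenario_outputs.xml"):
    """Analysis phase: run each assertion against each recorded output."""
    failures = []
    for out in ET.parse(outfile).getroot().iter("output"):
        for name, predicate in ASSERTIONS:
            if not predicate(out.text or ""):
                failures.append((out.get("id"), name))
    return failures

if __name__ == "__main__":
    # Toy system under test: echoes the request back.
    run_scenario(lambda req: f"echo:{req}", ["ping", "status", "quote"])
    print(analyze())  # expected: [] (no assertion failures)

Because the assertions are independent of any one scenario, adding a new check immediately applies to every recorded output, which is the payoff the abstract describes.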
|
Jacques Durand, Fujitsu Software Corporation
|
|
The Cassandra Syndrome: The Tester's Dilemma
Greek mythology tells of Cassandra, who had the gift of prophecy and the curse that no one would listen to her. Many testers have felt like Cassandra, but why? When engaged in what many perceive as "negative" activities–predicting problems, discovering defects, and reporting incidents–testers often are seen as negative people who don't make a "positive" contribution to the project. While most team members focus on making software work, testers focus on what doesn't work. Rick Hower warns that these seemingly contradictory perspectives can interfere with team communication, sometimes resulting in testers being labeled as "not team players" and simply being ignored.
|
Rick Hower, Digital Media Group, Inc.
|
|
Usability Testing in a Nutshell
Because systems are now more complex and competition is fierce, testing for usability is crucial for ensuring our products not only stand out from the crowd but even exceed our customers' expectations. As testers, we often encounter requirements such as "The system must be user-friendly." What does this mean? And, more importantly, how do we test against this vague notion? Join Julie Gardiner as she presents usability testing techniques to help evaluate system efficiency, effectiveness, and user satisfaction. Take back a toolkit full of usability testing techniques–heuristic evaluation, cognitive walkthrough, focus groups, personas, contextual task analysis, usability labs, and satisfaction surveys–for your next testing project. Learn how to define usability goals and how to get your development team to take usability issues seriously. If you want to improve your confidence in usability testing, this session is for you.
|
Julie Gardiner, Grove Consultants
|
|
Insights into Mobile Applications Testing
The phenomenal growth of mobile devices has opened avenues for organizations to integrate them into their mainstream computing environment. Today's mobile applications deliver complex functionality on platforms that have limited resources for processing and testing. Unlike the PC-based environment, the mobile world comprises a wide range of devices with diverse hardware and software configurations and communication intricacies. This diversity presents unique challenges and requires unique testing strategies. Rumesh Palaniswamy shares his experiences with testing mobile applications. The smaller screens, unique input methods, and minimal processing power of these devices often lead to unexpected outputs and other faults.
|
Rumesh Palaniswamy, Cognizant Technology Solutions
|
|
Half-Truths about Agile Testing
Organizations of all sizes are rapidly adopting agile application development methodologies. Because agile has primarily focused on how developers work, much of the testing community has been at a loss as to how to accomplish its mission within agile. Misconceptions abound about how testing should be conducted in this new paradigm. Some in the debate argue that organizations should completely abandon traditional testing methods and tools when adopting agile. Clint Sprauve explores the half-truths of testing in agile and how they affect the testing organization's role in agile development.
|
Clinton Sprauve, Borland (a Micro Focus company)
|