|
Hallmarks of a Great Tester As a manager, you want to select and develop people with the talents to become great testers, the ability to learn the skills of great testers, and the willingness to work hard to become great testers. As an individual, you aspire to become a great tester. So, what does it take? Michael Hunter reveals his twenty hallmarks of a great tester, from personality traits (curiosity, courage, and honesty) to skills (knowing where to find more bugs, writing precise bug reports, and setting appropriate test scope). Measure yourself and your team against other great testers, and find out how to achieve greatness in each area. Learn how to identify the great testers you don't know that you already know!
- The personality traits a person needs to become a great tester
- The talents a person needs to become a great tester
- The skills you need to develop to become a great tester
|
Michael Hunter, Microsoft Corporation
|
|
Trends, Innovations and Blind Alleys in Performance Testing Join experts Scott Barber and Ross Collard for a lively discussion/debate on leading-edge performance testing tools and methods. Do you agree with Scott that performance testing is poised for a great leap forward or with Ross who believes that these "silver bullets" will not make much difference in resolving the difficulties performance testing poses? Scott and Ross will square off on topics including commercial vs. open source tools; compatibility and integration of test and live environments; design for performance testability; early performance testing during design; test case reuse; test load design; statistical methods; knowledge and skills of performance testers; predicting operational behavior and scalability limits; and much more. Deepen your understanding of the new technology in performance testing, the promises, and the limitations.
- The latest tools and methods for performance testing
|
Scott Barber, PerfTestPlus, and Ross Collard, Collard & Company
|
|
Diagnosing Performance Problems in Web Server Applications Many application performance failures are episodic, leading to frustrated users calling help desks, frantic troubleshooting of production systems, and rebooting systems. Often these failures are a result of subtle interactions between code and the configuration of multiple servers. On the other hand, well-designed applications should demonstrate gradual performance degradation and advanced warning of the need to add hardware capacity. Join Ron Bodkin as he discusses the patterns of application failure, some common examples, and testing techniques to help reduce the likelihood of episodic failures in production. Learn about the tools and techniques needed to instrument the application, monitor the infrastructure, collect systems data, analyze it, and offer insight for corrective actions.
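As a minimal illustration of the kind of application instrumentation described above (a hypothetical sketch, not the speaker's tooling; the function names are invented), a timing decorator can log per-call latency so that episodic slowdowns leave a trace that can later be correlated with server configuration and load:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("perf")

def instrumented(fn):
    """Log each call's latency; the log stream feeds later analysis."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.1f ms", fn.__name__, elapsed_ms)
    return wrapper

@instrumented
def render_page(user_id):
    # Hypothetical application code standing in for a real page handler.
    return f"page for {user_id}"
```

In a real deployment the log records would be collected alongside infrastructure metrics so that a latency spike can be matched against the server configuration in effect at the time.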
|
Ron Bodkin, Glassbox Software
|
|
Performance Testing Early in Development Iterations When the software architecture is emerging and many features are not yet ready, performance testing is a challenge. However, waiting until the software is almost finished is too risky. What to do? Neill McCarthy explores how performance testing can be made more Agile and run starting in the early iterations of development. Learn how to implement early performance automation using appropriate tools in build tests and the requirements for early performance testing of user stories. Neill presents lessons learned from his "coal face" of performance testing in Agile projects and shares ideas on how you can add more agility to your performance testing.
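One way early performance automation in build tests might look (an illustrative sketch under assumed names and budgets, not the speaker's actual approach) is a unit-test-style check that fails the build when a user story's operation exceeds its response-time budget:

```python
import time
import unittest

def search_stub(query):
    """Hypothetical stand-in for a user story's feature; a real build
    test would call the emerging implementation instead."""
    time.sleep(0.01)
    return [query]

class EarlyPerformanceTest(unittest.TestCase):
    # Illustrative budget; real budgets come from the story's requirements.
    BUDGET_SECONDS = 0.5

    def test_search_meets_budget(self):
        start = time.perf_counter()
        result = search_stub("agile")
        elapsed = time.perf_counter() - start
        self.assertEqual(result, ["agile"])
        self.assertLess(elapsed, self.BUDGET_SECONDS)

if __name__ == "__main__":
    unittest.main()
```

Because the check runs with every build, a regression shows up in the iteration that introduced it rather than at the end of the project.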
|
Neill McCarthy, BJSS
|
|
STARWEST 2005: Testing Dialogues - Technical Issues Is there an important technical test issue bothering you? Or, as a test engineer, are you looking for some career advice? If so, join experienced facilitators Esther Derby and Elisabeth Hendrickson for "Testing Dialogues - Technical Issues." Practice the power of group problem solving and develop novel approaches to solving your big problem. This double-track session takes on technical issues, such as automation challenges, model-based testing, testing immature technologies, open source test tools, testing web services, and career development. You name it! Share your expertise and experiences, learn from the challenges and successes of others, and generate new topics in real-time. Discussions are structured in a framework so that participants receive a summary of their work product after the conference.
|
Esther Derby, Esther Derby Associates, Inc.
|
|
It's 2005, Why Does Software Still Stink? We've now been writing software for an entire human generation. Yet software is arguably the least reliable product ever produced. People expect software to fail, and our industry has developed a well-deserved and widely accepted reputation for its inability to deliver quality products. James Whittaker explores the history of software development over the last generation to find out why. He uncovers several attempts to solve the problem and exposes their fatal flaws. James then looks forward to a world without software bugs and offers a roadmap (practical techniques that can be implemented today) for how to get there from here. Join James on this journey through the past and into the future, and be sure to bring something to scrape the bugs off your windshield.
|
James Whittaker, Florida Institute of Technology
|
|
Agile Software Development: The Home of 31 Flavors You've heard of eXtreme Programming (XP) and perhaps Scrum. How about Crystal Clear, Adaptive Software Development, Dynamic Systems Development Method, Rational Unified Process for Agile Development, and Feature Driven Development? These are some of the many variations of Agile development methods. Join Jeff McKenna as he explores the many flavors of Agile development methods and explains the similarities and differences. Find out what aspects of Agile development can help your organization's development team in its particular environment. If you are considering Agile development and need to decide in which direction to go, this session is for you. Although a one-hour session cannot provide all the information you will need, you can explore what is common (the philosophy, the values, the characteristics) and what is different (the methods, the coverage, the costs) about the various Agile approaches.
|
Jeff McKenna, Agile Action
|
|
Rapid Bottleneck Identification for Successful Load Testing Rapid bottleneck identification is a methodology that allows QA professionals to very quickly uncover Web application limitations and determine what impact those limitations have on the end-user experience. Starting with the premise that every application has a scalability limit, this approach sets out to quickly uncover where those limitations are and to suggest corrective action.
Learn details about the most common application scalability limits (spanning the network, application server, database server, and Web server) and how to quickly uncover them by focusing first on throughput and then on concurrency. With a modular, iterative approach to load testing, you focus on one potential bottleneck at a time.
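The two-phase idea of probing throughput first and concurrency second can be sketched as follows (a toy simulation with invented limits and SLA, not the presenter's methodology; a real test would drive the actual application):

```python
def simulated_response_time(req_per_sec, concurrent_users,
                            max_throughput=500, max_concurrency=200):
    """Toy server model: latency degrades once either limit is exceeded.
    All numbers here are hypothetical."""
    base = 0.05  # 50 ms baseline response time
    penalty = 0.0
    if req_per_sec > max_throughput:
        penalty += (req_per_sec - max_throughput) * 0.01
    if concurrent_users > max_concurrency:
        penalty += (concurrent_users - max_concurrency) * 0.02
    return base + penalty

def find_limit(probe, start, step, sla=0.5):
    """Ramp the load until the response time breaks the SLA,
    then report the last load level that still met it."""
    load = start
    while probe(load) <= sla:
        load += step
    return load - step

# Phase 1: find the throughput limit at low concurrency.
tp_limit = find_limit(lambda r: simulated_response_time(r, 10), 50, 50)
# Phase 2: find the concurrency limit at the sustainable throughput.
cc_limit = find_limit(lambda u: simulated_response_time(tp_limit, u), 10, 10)

print(tp_limit, cc_limit)
```

Each phase isolates one variable, which is what makes the iterative approach modular: a limit found in phase one becomes a fixed parameter in phase two.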
|
Joe Fernandes, Empirix
|
|
Controlling Performance Testing in an Uncontrolled World Think about it ... You are responsible for performance testing a system containing over 5 billion searchable documents for an active user base of 2.6 million users, and you are expected to deliver notification of sub-second changes in release response and certification of extremely high reliability and availability. Your n-tier architecture consists of numerous mainframes and large-scale UNIX servers as well as Intel processor-based servers. The test environment architecture is distributed across large numbers of servers performing shared functions for a variety of products competing for test time and resources during aggressive release cycles. Because it is impractical and too costly to totally isolate systems at this scale, capacity and performance test engineers must find ways to produce high-quality results in this shared, uncontrolled environment.
|
Jim Robinson, LexisNexis
|
|
A Strategic Approach - "Beta the Business" Beta testing is an industry standard practice to obtain user feedback prior to general availability of software. Have you ever considered that the Beta release can be used to validate the software's value to customers and application users? Extending the Beta concept will result in higher customer satisfaction (and higher revenue for commercial products). Also, you can employ Beta testing to evaluate not only the software product, but the distribution (and sales) process, training, customer support, and usage within your customers' environments. Far beyond just finding defects in the product, you can focus Beta testing on how well the software is meeting your customers' needs. What does that mean to the Development team and the organization as a whole? What are the risks and challenges that we face? What are the rewards?
|
Pete Conway, EMC Corporation
|