
Conference Presentations

Web 2.0: The Fall and Rise of the User Experience

The Web has enabled pervasive global information sharing, commerce, and communications on a scale thought impossible only ten years ago. At the same time, the Web dealt a setback to the user experience of networked applications. Only now are Web standards and technologies emerging that can bring us back to the rich, robust user experiences developed in the desktop client/server era before the Web came along. Wayne Hom presents examples of great, rich-client Web user interfaces and discusses the enabling tools, technologies, and methodologies behind today's popular Web 2.0 approaches. Wayne discusses the not-so-obvious pitfalls of the new technologies and concludes with a look at user interface opportunities beyond the current Web 2.0 state of the art to see what may be possible in the future.

  • User experiences on the Web versus older technologies
Wayne Hom, Augmentum Inc.
Quantitative and Statistical Management Applications

There is no longer any question that, when appropriately used, quantitative measurement and management of software projects works. As with any tool, the phrase "appropriately used" tells the tale. Drawing on his experiences using quantitative and statistical measurement, Ed Weller provides insights into the key phrase "appropriate use." Ed offers cases of useful (and not so useful) attempts to use the "high maturity" concepts in the Capability Maturity Model Integration® (CMMI®) to illustrate how you can either achieve a high return on your investment in these methods or fail miserably. After an introduction to the theory of statistical measurement, Ed presents examples of the successful use of statistical measures and discusses the traps and pitfalls of their incorrect implementation.
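
For a concrete taste of the statistics involved, here is a minimal sketch, not from Ed's talk, of an XmR (individuals and moving range) control chart, a technique commonly used in CMMI high-maturity measurement; the defect-density figures are invented for illustration:

    // Minimal XmR (individuals and moving range) control chart calculation.
    // The defect-density data below is invented for illustration.
    public class XmRChart {
        public static void main(String[] args) {
            double[] defectDensity = {4.2, 3.8, 5.1, 4.6, 3.9, 4.4, 5.0, 4.1};

            // Mean of the individual observations.
            double sum = 0;
            for (double x : defectDensity) sum += x;
            double mean = sum / defectDensity.length;

            // Average moving range between consecutive observations.
            double mrSum = 0;
            for (int i = 1; i < defectDensity.length; i++) {
                mrSum += Math.abs(defectDensity[i] - defectDensity[i - 1]);
            }
            double avgMovingRange = mrSum / (defectDensity.length - 1);

            // Standard XmR limits: mean +/- 2.66 * average moving range.
            double ucl = mean + 2.66 * avgMovingRange;
            double lcl = mean - 2.66 * avgMovingRange;

            System.out.printf("mean=%.2f  UCL=%.2f  LCL=%.2f%n", mean, ucl, lcl);
            for (double x : defectDensity) {
                if (x > ucl || x < lcl) {
                    System.out.println("Out-of-control signal: " + x);
                }
            }
        }
    }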

Edward Weller, Integrated Productivity Solutions, LLC
Static Analysis and Secure Code Reviews

Security threats are becoming increasingly dangerous to consumers and to your organization. Paco Hope provides the latest on static analysis techniques for finding vulnerabilities and the tools you need for performing white-box secure code reviews. He provides guidance on selecting and using source code static analysis and navigation tools. Learn why secure code reviews are imperative and how to implement a secure code review process in terms of tasks, tools, and artifacts. In addition to describing the steps in the static analysis process, Paco explains methods for examining threat boundaries, error handling, and other "hot spots" in software. Find out about the analysis techniques of Attack Resistance Analysis, Ambiguity Analysis, and Underlying Framework Analysis as ways to expose risk and prioritize remediation of insecure code.
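
To make the "hot spot" idea concrete, here is a hypothetical Java example, not from the talk, showing one of the classic patterns a static analyzer flags, untrusted input concatenated into SQL, alongside the usual parameterized fix:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LoginDao {
        // Flagged by most static analyzers: untrusted input built into SQL text.
        ResultSet findUserUnsafe(Connection conn, String name) throws Exception {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                "SELECT * FROM users WHERE name = '" + name + "'"); // injectable
        }

        // The usual remediation: a parameterized query keeps data out of the SQL.
        ResultSet findUserSafe(Connection conn, String name) throws Exception {
            PreparedStatement stmt =
                conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            stmt.setString(1, name);
            return stmt.executeQuery();
        }
    }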

  • Why secure code reviews are the right approach for finding security defects
Paco Hope, Cigital
Improving Code Quality with Eclipse and its Java Plug-ins

One of the features that makes Eclipse so popular within the Java community is the abundance of easy-to-use plug-ins. Many of these are freely available open-source tools. Plug-ins are available for virtually anything, from database connectivity to instant messaging. Because code quality is a critical aspect of production software applications, Eclipse has built-in tools that help developers write and deliver high-quality code. Levent Gurses has employed a number of external plug-ins, including PMD, CheckStyle, JDepend, FindBugs, Cobertura, CPD, Metrics, and others, to transform Eclipse into a powerhouse for writing, testing, and releasing high-quality Java code. Levent shows you how to use Eclipse to improve your team's coding habits, enforce organizational standards, and zap bugs before they reach the client.
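
For a taste of what these plug-ins catch, here is a hypothetical snippet containing two defects that FindBugs commonly reports (the bug-pattern names appear in the comments):

    public class QualityExamples {
        // FindBugs ES_COMPARING_STRINGS_WITH_EQ: == compares references,
        // not string contents.
        boolean isAdminBroken(String role) {
            return role == "admin";          // almost always a bug
        }

        boolean isAdminFixed(String role) {
            return "admin".equals(role);     // null-safe content comparison
        }

        // FindBugs RV_RETURN_VALUE_IGNORED: String is immutable, so the
        // result of trim() must be assigned, not discarded.
        String normalizeBroken(String input) {
            input.trim();                    // return value silently ignored
            return input;
        }

        String normalizeFixed(String input) {
            return input.trim();
        }
    }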

  • The standard quality check tools available in Eclipse
Levent Gurses, Stelligent
Testing Web Applications for Security Defects

Approximately three-fourths of today's successful system security breaches are perpetrated not through network or operating system security flaws but through customer-facing Web applications. How can you ensure that your organization is protected from holes that let hackers invade your systems? Only by thoroughly testing your Web applications for security defects and vulnerabilities. Michael Sutton describes the three basic security testing approaches available to testers: source code analysis, manual penetration testing, and automated penetration testing. Michael explains the key differences among these methods, the types of defects and vulnerabilities each detects, and the advantages and disadvantages of each method. Learn how to get started in security testing and how to choose the best strategy for your organization.
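
As a minimal sketch of the automated penetration-testing idea, hypothetical target URL and all, a test can send a classic injection probe and scan the response for tell-tale database errors:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class InjectionProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical target; the payload is a classic SQL injection probe.
            String payload = URLEncoder.encode("' OR '1'='1", "UTF-8");
            URL url = new URL("http://test.example.com/search?q=" + payload);

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) body.append(line);
            }

            // A database error leaking into the response suggests the input
            // reached the query unsanitized.
            if (body.toString().toLowerCase().contains("sql syntax")) {
                System.out.println("Possible SQL injection vulnerability");
            }
        }
    }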

  • Basic security vulnerabilities in Web applications
  • Skills needed in security testing
Michael Sutton, SPI Dynamics
Open Source Tools for Web Application Performance Testing

OpenSTA is a solid open-source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing will introduce you to the basics of OpenSTA, including downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests using Commander, capturing the results using Collector, interpreting the results, and exporting captured performance data into Excel for analysis and reporting. As with many open-source tools, self-training is the rule. Support is not provided by a big vendor staff but by fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.
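
Once the captured data is exported, summary statistics drive the analysis. The following small sketch, with invented response times, computes the average and 90th-percentile values a performance report typically cites:

    import java.util.Arrays;

    public class ResponseTimeStats {
        public static void main(String[] args) {
            // Response times in milliseconds, as exported from a load test run
            // (values invented for illustration).
            double[] millis = {120, 135, 150, 180, 210, 250, 340, 410, 520, 900};

            double sum = 0;
            for (double m : millis) sum += m;
            double average = sum / millis.length;

            // 90th percentile (nearest-rank): the value below which 90% of
            // the samples fall.
            double[] sorted = millis.clone();
            Arrays.sort(sorted);
            int index = (int) Math.ceil(0.90 * sorted.length) - 1;
            double p90 = sorted[index];

            System.out.printf("avg=%.1f ms  90th percentile=%.1f ms%n",
                              average, p90);
        }
    }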

  • Learn the capabilities of OpenSTA
Dan Downing, Mentora Inc.
How to Build Your Own Robot Army

Software testing is tough: it can be exhausting, and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation to the shoulders of a test machine army. But we're not talking about run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson will show you how to create your robot army using tools lying around on the Web.
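
Harry's robot army is in the spirit of model-based testing. As a minimal sketch, with a hypothetical system under test, a random walk over a model's actions generates sequences no hand-written script would try:

    import java.util.List;
    import java.util.Random;

    public class RobotTester {
        public static void main(String[] args) {
            // Hypothetical actions against a system under test; a real robot
            // would drive a UI or API and check an oracle after each step.
            List<String> actions = List.of("open", "edit", "save", "undo", "close");
            Random random = new Random(42);   // fixed seed: failures reproduce

            for (int step = 0; step < 1000; step++) {
                String action = actions.get(random.nextInt(actions.size()));
                System.out.println("step " + step + ": " + action);
                // apply(action);            // drive the system under test
                // assert invariantsHold();  // the oracle: check nothing broke
            }
        }
    }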

Harry Robinson, Google
Software Disasters and Lessons Learned

Software defects come in many forms, from those that cause a brief inconvenience to those that cause fatalities. Patricia McQuaid believes it is important to study software disasters, to alert developers and testers to be ever vigilant, and to understand that huge catastrophes can arise from what seem like small problems. Examining such failures as the Therac-25, the Denver airport baggage-handling system, the Mars Polar Lander, and the Patriot missile, Pat focuses on the factors that led to these problems, analyzes the problems, and then explains the lessons to be learned that relate to software engineering, safety engineering, government and corporate regulations, and oversight by users of the systems.
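
The Patriot failure illustrates how a tiny numeric error compounds. Per the widely cited GAO analysis of the 1991 Dhahran incident, the system counted time in tenths of a second using 24-bit fixed-point arithmetic, and 0.1 has no exact binary representation; the sketch below reproduces that arithmetic:

    public class PatriotDrift {
        public static void main(String[] args) {
            // 0.1 truncated to a 24-bit binary fraction is short by roughly
            // 9.5e-8 seconds; the error accumulates on every clock tick.
            double errorPerTick = 9.5e-8;       // seconds lost per 0.1 s tick
            double hoursUp = 100;               // continuous operation before failure
            double ticks = hoursUp * 3600 * 10; // ten ticks per second

            double drift = errorPerTick * ticks;      // about 0.34 seconds
            double scudSpeed = 1676;                  // meters per second
            double trackingError = drift * scudSpeed; // hundreds of meters

            System.out.printf("clock drift: %.2f s, tracking error: %.0f m%n",
                              drift, trackingError);
        }
    }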

  • Learn from our mistakes: not in generalities but in specifics
  • Understand the synergistic effects of errors
  • Distinguish between technical failures and management failures
Patricia McQuaid, Cal Poly State University
Industry Benchmarks: Insights and Pitfalls

Software and technology managers often quote industry benchmarks such as The Standish Group's CHAOS report on software project failures; other organizations use this data to judge their internal operations. Although these external benchmarks can provide insights into your company's software development performance, you need to balance the picture with internal information to make an objective evaluation. Jim Brosseau takes a deeper look at common benchmarks, including the CHAOS report, published SEI benchmark data, and more. He describes the pros and cons of these commonly used industry benchmarks with key insights into often-quoted statistics. Take away an approach that Jim has used successfully with companies to help them gain an understanding of the relationship between the demographics, practices, and performance in their groups and how these relate to external benchmarks.

Jim Brosseau, Clarrus Consulting Group, Inc.
Beat the Odds in Vega$: Measurement Theory Applied to Development and Testing

James McCaffrey describes in detail how to use measurement theory to create a simple software system that predicts the scores of NFL professional football games with 87 percent accuracy. So, what does this have to do with a conference about developing better software? You can apply the same measurement theory principles embedded in this program to more accurately predict or compare the results of software development, testing, and management. Using the information James presents, you can extend the system to predict scores in other sports and apply the principles to a wide range of software engineering problems, such as predicting Web site usage for a new system, evaluating the overall quality of similar systems, and much more.
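
James's exact model is not spelled out in this abstract, so the following is only a sketch of the general idea: keep a numeric rating per team, predict a game's margin from the rating gap plus an assumed home-field edge, and nudge the ratings toward each observed result:

    import java.util.HashMap;
    import java.util.Map;

    public class MarginPredictor {
        private final Map<String, Double> ratings = new HashMap<>();
        private static final double HOME_EDGE = 2.5;  // assumed home-field points
        private static final double LEARN_RATE = 0.1; // how fast ratings adjust

        double rating(String team) {
            return ratings.getOrDefault(team, 0.0);
        }

        // Predicted home-team margin of victory (negative means a road win).
        double predictMargin(String home, String away) {
            return rating(home) - rating(away) + HOME_EDGE;
        }

        // After each game, shift both ratings toward the observed margin.
        void update(String home, String away, double actualMargin) {
            double error = actualMargin - predictMargin(home, away);
            ratings.put(home, rating(home) + LEARN_RATE * error);
            ratings.put(away, rating(away) - LEARN_RATE * error);
        }

        public static void main(String[] args) {
            MarginPredictor model = new MarginPredictor();
            model.update("Seahawks", "Raiders", 14);  // invented result
            System.out.printf("predicted margin: %.1f%n",
                              model.predictMargin("Seahawks", "Raiders"));
        }
    }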

  • Why the statistical approach fails to make accurate predictions in some situations
  • Measurement theory to predict and compare
James McCaffrey, Volt Information Sciences, Inc.
