Part of managing software development is dealing with the challenges that arise. Delivering software requires overcoming the challenges, or at least mitigating the attendant risks during the development activity. Generally, organizations work with a constant level of challenge. When one challenge is overcome, the organization will take on a new challenge.
For example, when a project releases software that overcomes a technical challenge, it might then schedule a new release with a challenging timeline, or re-open the technical challenge by choosing a new target platform.
The next challenge is not "just like the last one, only more." An organization that successfully implements geographically distributed development does not follow up by adding another development office. The organization may well open another office, but doing so is now a normal operation; in order to maintain the level of challenge, some other "impossible task" must be undertaken.
While problems may have unique characteristics, there are a lot of common themes. This series discusses sixteen separate dimensions of challenge. The situation your team faces can be analyzed with respect to the dimensions listed here, and some appropriate conclusions drawn. Each dimension is dissected, and some approaches for dealing with it are provided. Here is the complete list:
- Capability
- Culture
- Efficiency
- Finances
- Geography
- Human Language
- Knowledge Retention
- Location of Responsibility
- Organizational Structure
- Requirements or Business Demands
- Scalability
- Schedule
- Standards & Interfaces
- Simultaneous Development
- Technical Complexity
- Technological Diversity
This month the Journal is focusing on Release Management. Taken to the extreme, Release Management incorporates all of the challenges of CM, but with less time to get things done. There's not enough time or enough room for me to write that article, so instead we'll look at two significant pieces of the puzzle: schedule, and technological diversity.
It's worth asking what Release Management actually means. If you check the job postings, you'll find a lot of open reqs for "Release Engineer" positions. Apparently, these folks are responsible for writing installation and packaging scripts. So a release manager would be someone who gave direction to release engineers. That direction, presumably, is release management.
So what kind of direction does a release manager provide? It includes determining the feature and update contents that go into a release, establishing the features and capabilities of the installation and deployment scripts, and monitoring the actual availability of complex features from development (to coordinate the application and installation components). Getting the right elements delivered at the right time-scheduling-is obviously an important part of release management. Other management-type functions, like bombarding team members with useless emails and scheduling far too many meetings, are basically constant and can be ignored.
There is an interesting difference in vocabulary between different kinds of software release processes. Some software gets integrated, while other software is installed. It seems that if a package is delivered in a non-working form, it is integrated into other, functioning, packages. If a package is delivered in working order, but requires some special operations to unpack it or connect it to the operating environment, it gets installed. And if the act of delivering the software is also supposed to make it ready for use, it gets deployed. There is clearly a distinction made between piecemeal delivery-with separate packaging and installation activities-and atomic deployment.
Technological Diversity-systems built atop multiple, varied technologies-is highly correlated with deployment rather than installation. This may be because diverse systems tend to have several contributing development teams, and because the natural modularity of the systems encourages the separate release of components. Because of this correlation, technologically diverse projects are a significant challenge for CM practitioners, and we will look at them here.
Dimension: Schedule
Scheduling is, of course, a part of project management. Nearly all projects have scheduling challenges, but most of them aren't configuration management challenges-so let's ignore those. Schedule challenges for configuration management lie in the areas of work breakdown, schedule estimation, feature management, simultaneous development and efficiency.
Work breakdown is the basic approach of nearly all project management and engineering design: large tasks or assemblies are decomposed into smaller and smaller units. Obviously, the decomposition-based construction techniques (Component-based Development, Service-oriented Architecture, and Software Assembly) have a lot to offer in this area.
Simultaneous development presents its own set of challenges, and will be covered in a separate article in this series. But decomposition immediately implies some kind of parallel effort. The codevelopment techniques, obviously, are all about parallel effort. The size, structure, and level of formality of the project will determine which ones are appropriate for your situation.
Efficiency is also a separate dimension of challenge, covered separately. Most so-called schedule challenges take the form of "get all this work done really fast!" That isn't a scheduling challenge, but rather an efficiency challenge-do more work in less time.
Schedule estimation-predicting the time and resources required to successfully complete all or part of a project-allows project and configuration managers to predict the delivery dates of individual changes. There are tools and models available to engineering managers to help with estimation of development schedules. Despite this, it remains a black art. One of the important elements in estimation is feedback on the estimates.
Tracking various life cycle metrics is an excellent way for the CM team to help improve the estimation process. (This is the focus of the higher levels of CMM/CMMi certification.) Moreover, understanding how a feature or bug fix will be implemented has an impact on the estimation process. The more information you can provide on the interconnections between the original request(s) and the eventual delivered updates, the better.
Obviously, the system's architecture can have a huge impact on the actual amount of work required to deliver changes, as well as impacting the ability of management to accurately estimate the project schedule. But the impact derives from the architecture already in place. Unless your company is changing its business or development model, or something equally traumatic, it makes little sense to change architecture as part of an estimation improvement effort.
Feature management makes it possible to quickly and completely include or exclude all of the artifacts related to a particular change or feature. When a series of releases is scheduled, these activities combine to provide a road map of development work to be done, dependencies, and expected completion dates. (When only a single, monolithic release event is scheduled, these activities enable managers that actually know what they are doing to forecast how and why the project is going to miss its release schedule.)
Nearly all of the construction techniques, correctly applied, can support or improve your project's ability to manage features. Software Product Lines, in particular, is targeted at selective management of features. If your project is significantly feature-based, SPL is a real win. The work flow techniques Fast Feedback on Change and Parallel Streams can both be used to isolate features or sets of changes. Fast Feedback, of course, helps discover when your attempt to include or exclude changes doesn't work. And Parallel Streams lets you maintain separate streams with different sets of changes in them. These streams could be either "set A versus set B" or "with A versus without A," depending on the needs of your project.
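The "with A versus without A" idea can be sketched in a few lines. The sketch below uses a simple feature-to-artifact registry; the feature names and file lists are illustrative only, not drawn from any particular tool:

```python
# Sketch of feature management: choosing which change sets go into a
# release. Feature names and artifact lists are illustrative placeholders.
features = {
    "A": ["db_schema_v2.sql", "orders.py"],
    "B": ["ui_theme.css"],
}

def build_manifest(included):
    """Collect every artifact for the included features. "With A" versus
    "without A" is simply a different 'included' set."""
    manifest = []
    for name in included:
        manifest.extend(features[name])
    return sorted(manifest)

print(build_manifest({"A"}))       # release containing feature A only
print(build_manifest({"A", "B"}))  # release containing both features
```

Real feature management operates on version-control artifacts rather than a hard-coded table, but the include/exclude logic is the same.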
The ceremonious techniques Automated Enforcement of Standards and Gated Workflow can really help with feature management. If the standard in question is simply "don't break the build" then AES becomes another form of Fast Feedback on Changes. But if the feature specification is formalized, AES becomes a mechanization of Software Product Lines. This is how Java Beans work in J2EE, for example. Gated Workflow can be used at a higher level than AES to ensure that the inclusion of software changes is done with proper regard for feature management.
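The "don't break the build" form of Automated Enforcement of Standards can be as simple as a hook that runs the build and tests before accepting a change. The sketch below shows the shape of such a gate; the `make` targets are hypothetical placeholders for whatever your project's build and test commands actually are:

```python
#!/usr/bin/env python3
"""Minimal sketch of Automated Enforcement of Standards: refuse a change
unless every check passes. Command names are hypothetical placeholders."""
import subprocess
import sys

def enforce(commands):
    """Run each standards check in order; reject on the first failure."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print("REJECTED: " + " ".join(cmd) + " failed")
            return False
    print("ACCEPTED: all checks passed")
    return True

if __name__ == "__main__":
    # "Don't break the build": compile, then run the unit tests.
    checks = [["make", "build"], ["make", "test"]]  # placeholder commands
    sys.exit(0 if enforce(checks) else 1)
```

A formalized feature specification would replace the placeholder commands with checks that verify the specification itself, which is where AES shades into a mechanization of Software Product Lines.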
If features are developed in relative isolation, a set of workflow gates can be used to incorporate estimation and scheduling for the change from the beginning of development. Sometimes feature isolation is not done: because it cannot be done, because the development process does not encourage it, or because the need for isolation was not obvious at the start of development. In this case, workflow gates at the end of the life cycle can help reconcile the delivery schedule(s). Since the features are not isolated, the choices will generally be to deliver what is available, or to spend more time and effort disentangling things. As a CM specialist, you will offer life cycle and tool support for this, but try to avoid being in these meetings-they're usually pretty loud.
Dimension: Technological Diversity
Combining two or more different technologies within a project can cause organizational problems and information flow problems. Organizational problems usually arise when the different technologies are managed by different organizational units. If a project requires a Java web interface communicating with a mainframe server, the Java and mainframe elements will probably be developed by different teams. The need to coordinate development resources from different teams is one potential source of organizational problems. Please refer to the (later) article on Organizational Structure for a discussion.
Information flow problems occur when members of the team don't understand the details of a technology, or don't understand the requirements or process behind it. Even if a technology is well integrated into the development environment, team members who are unfamiliar with it can make mistakes in designing around the technology, or in establishing tests for components that incorporate it. The failure will appear to be "bad design" or "bad testing" or bad something-or-other. But the root cause will be not understanding an unfamiliar technology, or failing to coordinate the project members working on different technologies.
For example, consider the basic programming task of storing data in a SQL database, and presenting and editing that data via a web page. Almost every website in existence relies on these two basic technologies, but even today developers make mistakes that result in security breaches, loss of personal identity data, and database corruption. One part of the problem stems from the switch between two different languages. The web interface may be developed in Java, PHP, or Perl, but the database access is inevitably done via SQL. Developers that fail to understand SQL, or fail to understand the interface to SQL provided by their development environment, may code their database access in a way that is vulnerable to SQL Injection Attacks. Consider a script that constructs SQL like this:
$statement = "INSERT INTO table VALUES ('$name', '$address')";
This will pass testing, but it is vulnerable to an attacker who sets $address to a partial SQL statement, such as $address = "X'); DELETE FROM table WHERE ('x' != 'y". This attack closes the initial statement (INSERT) and then appends a separate statement (DELETE). As long as the second statement merges with the text at the end of the original statement, both statements appear to be valid, and may be executed by the SQL engine being used by the application[1].
This is a simplistic example, and all the languages mentioned have SQL interfaces that can eliminate this vulnerability. But the news is filled with reports of attacks discovered against popular systems; the fact that a safe method exists does not guarantee that the safe method is being used. Software CM cannot prevent bad code. But it can help reduce the risks of dealing with diverse technologies by enforcing process, by identifying risky technologies, and by focusing attention on those components and on their interfaces with the rest of the system. The codevelopment technique Formal Interfaces & Standards is one obvious way to isolate risks. Establishing a formal interface between a risky component and the remainder of the system helps ensure that design and testing are focused on the technology.
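As an illustration of what a safe method looks like, here is the same insert written with a parameterized query. The example uses Python's standard sqlite3 module purely for convenience; Perl, PHP, and Java all offer the same placeholder mechanism in their database interfaces:

```python
import sqlite3

# In-memory database for demonstration purposes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, address TEXT)")

# Attacker-controlled input: the same partial-SQL payload shown above.
name = "Mallory"
address = "X'); DELETE FROM people WHERE ('x' != 'y"

# The ? placeholders hand the values to the SQL engine as data, never as
# SQL text, so the payload is stored literally instead of being executed.
conn.execute("INSERT INTO people VALUES (?, ?)", (name, address))

rows = conn.execute("SELECT address FROM people").fetchall()
print(rows)  # the payload appears as an ordinary string in one row
```

The table still holds exactly one row after the insert, with the hostile string stored as plain data; no DELETE ever runs.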
If the project is using diverse implementation technologies, such as different development environments or platforms (Java vs. .NET, C vs. COBOL, etc.), using construction techniques like Service-oriented Architecture or Component-based Development will come naturally. Focusing attention on the risky technologies or interfaces will cost little, and pay off nicely.
If the problem technologies are in a single development environment (as in the example earlier) then imposing one of the construction techniques may be more expensive, and must be integrated with the overall design of the project. Even late in project development, it may be possible to refactor some parts of the project towards a componentized structure. Software design patterns exist that can reduce coupling and increase the isolation of whatever technology is causing the problems. If this step is needed, you will have to have a good justification for requesting a change to the internals of the project. Defect metrics are your friend in this case. Also, regardless of how your suggestions on improving the code are received, do not be afraid to take action within the SCM domain to reflect what you know: impose extra process on defect-prone subsystems or technologies, and focus audits and reviews on the places you suspect problems exist.
Any technologically diverse project is likely to be complex enough that communications and information flow will be a problem. Organizational techniques like Change Control Board and System Architect will help. If the project is using diverse implementation technologies, finding a single architect can be a challenge-senior technical staff familiar with the whole gamut of technologies are comparatively few and far between. A CCB may be a safer bet in this case. But a CCB has its own problems, since the various board members will still have to explain to each other how and why something is causing a problem for them. CCB members in a multi-platform project should be sent to training on each of the technologies involved, if possible.
Coordination of work can be a problem in a technologically diverse project. Work Flow techniques that treat project components abstractly, such as Project Baseline and Longacre Deployment Management, can help address coordination problems. Project Baseline is a highly abstract technique, and will require architectural support from a construction technique (mentioned above). The size and timing of releases (baselines) in the Project Baseline approach is crucial for coordinating development among different components. Fortunately, the technique is adjustable, so problems with coordination can be "tuned" in a trial-and-error fashion. Longacre Deployment Management provides direct control over both the development and deployment of changes, so coordination of development is a more direct, hands-on activity. LDM makes rollback of deployments an expensive operation, though, so it pays to implement a certain amount of ceremony.
Ceremony plays a key part in managing technological diversity. When knowledge is not common, it must be made explicit. Ceremony does this. In many environments, testing and validation tools are available that can help technologies interoperate. Automated Enforcement of Standards can help teams "color inside the lines" by ensuring compliance with standards or executing test cases.
If compliance is difficult to determine in a scripted or automated manner, Gated Workflow can help by preventing work flow until manual steps have been performed-for example, in a project that needs human review of graphics, generated sound or speech, or timing. Alternatively, Gated Workflow may be the solution if compliance is checked at an aggregate level rather than on individual changes-for example, by running tests on a whole collection of changes and associating the result with the package of changes. Telelogic Synergy uses this approach in its task-based development model, approving or rejecting an entire bundle of changes as a single unit. When a test fails, though, individual work is required to decide which task(s) caused the failure.
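The aggregate-gate idea can be sketched in a few lines: every task in the bundle receives the same pass/fail result. The task names and the test runner below are illustrative only, not the actual Synergy mechanism:

```python
# Sketch of an aggregate gate: a whole bundle of changes is approved or
# rejected as a single unit. Task names and run_tests are placeholders.
def gate_bundle(tasks, run_tests):
    """Run the tests once against the whole bundle, then apply the one
    result to every task in it."""
    status = "approved" if run_tests(tasks) else "rejected"
    return {task: status for task in tasks}

bundle = ["task-101", "task-102", "task-103"]
# A stand-in test runner that always passes, for demonstration.
print(gate_bundle(bundle, run_tests=lambda ts: True))
```

The downside the article notes falls directly out of this structure: when `run_tests` fails, the gate cannot say which task was at fault, only that the bundle as a whole was rejected.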
The nature of compliance can be difficult to understand-a developer used to working in a GUI environment may not understand the request/response nature of web development. Requirements Management can help tame technological diversity by forcing all the interfaces to be made explicit, and forcing the exact needs of both sides to be documented. References to existing standards generally are not enough-many "standards" are really only as standardized as one or two reference implementations. Documenting the exact structure and sequence of an interface is challenging, but having a single hard copy can prevent a lot of finger-pointing and misery.
[1] The resulting SQL looks like this:
INSERT INTO table VALUES ('$name', 'X'); DELETE FROM table WHERE ('x' != 'y')