Comparing Structured and Unstructured Methodologies in Firmware Development
by William A. Fischer, Jr., HP Logic Systems Division
STRUCTURED METHODOLOGY in software development tends to be very much a religious issue with many practitioners. They are either strongly for or against it, but they can point to very little data that supports their point of view. The purpose of this paper is to present some objective and subjective data on the relative merits of structured and unstructured (traditional) methodologies.
The data for this comparison came from the development of a medium-to-large-scale firmware project at HP's Logic Systems Division. It was the first project at this division to use structured methods. The project consisted of an embedded 68000 microprocessor design, coded in C and using a C compiler running on the HP-UX operating system. The firmware consisted of about 47 KNCSS (thousands of noncomment source statements) of new code and about 12 KNCSS of reused code.
At the start of the project, a goal was established to use structured methodologies to improve the software development process and increase product quality, but the decision of whether to use structured methods on each subproject was left up to the individual engineers. As a consequence, three of the subprojects were developed using structured techniques and the other six used traditional methods. Software designers using structured techniques did their analysis using data flow diagrams (DFDs) and structure charts for their design. They also did inspections on most of the analysis and design documents. HP Teamwork/SA, a graphical structured analysis tool that also performs consistency checking, was used for the analysis. HCL (Hierarchy Chart Language), an internal HP tool that plots a program structure from Pascal-like statements, was used for structured design (see Fig. 1).
For engineers who used the traditional methods, the analysis phase typically consisted of creating a written specification. Informal design methods were used and coding was started earlier in the product development cycle. Inspections were generally not part of this process.
This was not a scientifically designed experiment to determine the better of the two methods. Rather, we simply collected data on the two groups of engineers as the project developed. As such, our data suffers from many of the common problems that beset unplanned experiments. However, since the data was collected from a single work group, many of the variables that are factors in most comparisons of project experiences have been eliminated. For example, lines of code and time expended were measured and reported in the same way. Intangibles, such as work environment, computer resources, complexity of task, and management attitudes are also identical.
Many experts say the most important variable influencing programmer quality and productivity is individual skill. The difference in the experience level between our two groups was not substantial. However, the unstructured group was more highly regarded by management than the structured group. It is possible that those in the unstructured group had already demonstrated winning techniques for which they had been rewarded, and so they were reluctant to try newer methods, while the structured group was more willing to try new methods to improve their overall skill level.
Data Collection
The data in this report was collected from the following sources:
* Engineering Time. The time spent by each engineer was reported to a central data base on a weekly basis. The time the engineer spent doing such things as system administration, meetings, and classes was not included in the reported time. Only time spent in analysis, design, test, or coding on the engineer's primary software project was included. Time data was reported by the individual engineers.
* Defect Data. The defect data was collected by DTS (defect tracking system), an internal defect tracking tool. Defects were reported from the beginning of system integration testing. The defects discussed in this paper are only unique reproducible defects. Duplicate, nonreproducible defects, operator errors, and enhancements were not included in the defect count. Defect data was reported by the individual development engineers and the formal and informal testers.
* KNCSS and Complexity. All the KNCSS counts and McCabe's Cyclomatic Complexity metrics were computed by an internal tool called Ccount.
* Design Weight. Design weight, a measure of the effort of coding and testing, was calculated from the C code by an internal tool. This tool counts all the decisions and the unique data tokens that are passed into and out of a function. Any system calls (e.g., printf) are not included in the token count.
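As a rough illustration of the kind of counting a tool like Ccount performs to produce the KNCSS figures above, the following C sketch counts noncomment source lines in a string. It is a deliberate simplification, assuming K&R-style block comments only and ignoring string literals; the function name is ours, not the tool's.

```c
/* Count noncomment source lines: a line counts if it contains at
 * least one non-whitespace character outside a comment.
 * Simplified sketch -- handles only block comments. */
int ncss(const char *src)
{
    int count = 0, in_comment = 0, line_has_code = 0;
    for (const char *p = src; *p; p++) {
        if (!in_comment && p[0] == '/' && p[1] == '*') {
            in_comment = 1; p++;          /* enter comment, skip '*' */
        } else if (in_comment && p[0] == '*' && p[1] == '/') {
            in_comment = 0; p++;          /* leave comment, skip '/' */
        } else if (*p == '\n') {
            if (line_has_code) count++;   /* close out a code line */
            line_has_code = 0;
        } else if (!in_comment && *p != ' ' && *p != '\t') {
            line_has_code = 1;            /* saw real source text */
        }
    }
    if (line_has_code) count++;           /* last line may lack '\n' */
    return count;
}
```

A line consisting only of a comment or whitespace contributes nothing, so a file with a header comment, a blank line, and two statements yields an NCSS of 2.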
Comparison Results
To facilitate the comparisons, the project was broken down into subprojects that closely corresponded to the efforts of individual software designers. Each subproject was then categorized as being representative of either the structured or the traditional methodology. The results of the two methodologies were compared on the basis of productivity and quality. Development effort, manageability, communication, and reusability were the criteria used for productivity measurement. The FURPS (an acronym standing for functionality, usability, reliability, performance, and supportability) model was used as the basis for comparing quality.
We have used metrics to evaluate these factors wherever possible. Where metrics did not exist we have presented the subjective views of the authors who observed the project team throughout the development of the product.
All statistical tests referenced in this paper were made at the 95% confidence level. Since the sample size used for the comparisons between the structured and traditional methods is small, all conclusions are very tentative.
Productivity
Achieving productivity in software development requires using tools and techniques that yield optimal return on development money.
Development Effort
Project managers tend to be most concerned with the completion of a project on schedule. Consequently, one of the first questions asked by managers considering the use of structured methods is whether it will shorten the development cycle.
Since the structured methods were being used for the first time, a learning curve was involved, and we expected the structured team to be less productive than the unstructured team. To verify this assumption, two measures of productivity were used: design weight per engineering hour worked and NCSS per engineering hour. These measures are plotted for the various subprojects in Figs. 2 and 3. It can be seen that the three structured subprojects have lower productivity than the unstructured subprojects. A statistical test using general linear hypothesis techniques showed that a statistical difference did indeed exist in the productivity rates between the two methodologies. Using the structured methods, the average productivity rate was 2.40 lines/engineering-hour, while using traditional methods resulted in an average productivity rate of 6.87 lines/engineering-hour.
A central problem with analysis performed with DFDs is that it is an iterative process. The DFDs were leveled until the lowest-level bubbles could be completely described in a minispecification of about one page as recommended by the methodology. This is a rather subjective requirement and we discovered that every designer using DFDs for the first time leveled them more deeply than required. A project review also confirmed that too many intermediate layers were created to keep the complexity per page of DFDs to a minimum. At the project postmortem, it was discussed that a major contributing factor to lower productivity with the structured methods was the lack of an on-site expert. Consultations were needed at various points in the analysis and design phases of the project to verify the proper application of the techniques.
Manageability
The structured work habits seemed to help the project manager and engineers understand the software development life cycle better. Designers had a better idea when to end one phase of the project and begin another, helping to make their own process better understood and easier to manage. Figs. 4 and 5 show the times spent in various project life cycle phases for structured and traditional methodologies, respectively. The structured methods graph shows cleaner, better-defined phase changes. These clear phase changes aid the management planning process by creating a better understanding of the state of a project. Plots showing the same data as Figs. 4 and 5 were done for each engineer, and these individual plots showed the same characteristics.
The regularity of the structured life cycle can be used to improve schedule estimates. Once the percentages of time spent in the phases are historically established, the time taken to reach the end of a phase can be used to project the finish date. For example, if it takes four months to complete the analysis phase and the historical figures indicate that 33 percent of the time is spent in analysis, then it could be estimated that the project would be completed in eight more months.
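The projection rule in the example above can be sketched as a small C helper (the function name is hypothetical):

```c
/* Estimate the months remaining on a project, given the months
 * elapsed in completed phases and the historical fraction of a
 * project's schedule those phases consume. */
double months_remaining(double elapsed, double historical_fraction)
{
    double projected_total = elapsed / historical_fraction;
    return projected_total - elapsed;
}
```

With four months spent in analysis and a historical analysis share of one third, the projected total is twelve months, leaving eight remaining.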
It is important to measure the progress of projects against the established schedule. Keeping track of the actual completion time of each phase can provide an independent verification of established scheduling methods. If problems in meeting schedule commitments are uncovered, corrective action, such as adding additional resources, can be applied to the project.
Taking a project through the system test phase is unpredictable. The time it takes to complete the formal abuse testing is dependent on the quality built into the product. If the product is well-designed and coded, less time is spent repairing defects found during abuse testing and during the completion of formal regression tests. A reduced testing phase can shorten the overall project development time. More important, if fewer defects are embedded in the product, less time will be spent in the maintenance phase, which can account for 50% of the project's overall cost. Fig. 6 shows a comparison of the times spent in the various development phases for the project. The graph indicates that a greater percentage of time was spent in the analysis and design phases with the structured methods. However, a very small percentage of time in comparison to the traditional methods was spent in testing.
Reusability
Reusability of code is a major productivity issue at HP in general. Several of the products at our division have already reached reuse figures of 50% and we are working to increase that figure in the future. Experts suggest that, on average, reusing a piece of software requires only 20% of the effort of writing it from scratch. Thus, any activity that leads to greater reuse of code will have major benefits for productivity. Code developed using structured techniques encourages more reuse in the future because it has better documentation on each function. In addition, the interface between that function and the outside world is more clearly stated by the structured design documentation.
A major concern is the maintenance of the software documentation. Documentation that is not kept up to date is of less value than no documentation, since it can be misleading. There is a very weak connection between structured analysis and structured design. This makes it difficult and often impractical to keep the structured analysis up to date because of changes in the design and coding. The structured team chose not to keep the structured analysis documentation up to date when design and coding changes were made later in the project. This takes away some of the documentation benefits provided by the structured methods. As back annotation methods are incorporated into the tools, this deficiency will be corrected.
Communication
One of the positive attributes cited in the literature about the structured methods is that they serve as a good communication tool between team members. The project saw some of this benefit, but not as much as was originally expected. Structured methods are not a team communications panacea.
Each software designer on this project was assigned a nearly autonomous subproject. Although there were interactions between the subprojects, the subprojects were defined to minimize these interactions. The structured analysis documentation for each subproject was large enough to cause difficulty in obtaining the number of design reviews that were necessary. For another team member to understand the analysis, much additional verbal explanation was required. However, the structured analysis was very useful to the developers of that analysis in fully understanding their own subprojects.
Fig. 7 shows the staffing profile of hardware and software engineers during product development. The staffing of software engineers lagged behind hardware engineers because of an initial underestimation of the software task and the shortage of software engineers. As a result, there were some problems discovered during the hardware/software integration that cost valuable schedule time. Since we were developing a system that consisted of both hardware and software, it would have been beneficial to have included the hardware interfaces into the structured analysis. This capability would have aided the hardware/software integration by providing additional communication links between the hardware and software groups.
There is probably a strong benefit in communication with structured analysis if the whole project team uses the methodology to develop the same system model. This enables each team member to have a better understanding of the product's requirements, and helps designers understand the task in the same way.
Quality
High product quality improves customer satisfaction, decreases maintenance costs, and improves the overall productivity of the development team. The FURPS model was used as the basis of comparison between the two methodologies.
Functionality
Functionality can best be measured by customer acceptance of the product. The product resulting from this project is still in the early stages of its product life and customer acceptance of its functionality is yet to be determined. It would also be difficult to separate the functionality of the code generated by the two methods.
However, we believe that the rigor required by the structured methods is an important contribution to the development life cycle. Structured analysis was a valuable exercise for the software designers in obtaining an understanding of the customer requirements. Although the structured analysis was not used directly as a communication tool, it helped the software designers identify the correct questions to ask when issues related to functionality were addressed. These benefits assist the product definition, and enhance the chances that the product will meet all user expectations.
We also believe that the entire team benefited from the efforts of the structured group. Since much of the initial product definition was performed using structured methods, software designers joining the project later benefited greatly from the initial structured work.
Usability
The look and feel of the user interface was of prime importance to the project. Structured analysis was useful in breaking up the interface functionality into its elemental commands. However, manual pages combined with verbal discussion proved to be more effective in defining the details of the interface.
Neither methodology appeared to enhance usability more than the other. However, the structured methods can help the software designer define user interface functionality. Rapid prototyping appears to be a better method for communicating and testing user interface ideas, but it was not used on the user interface.
Reliability
Reliability is measured by the number, types, and frequency of defects found in the software. The defects found in code created using the structured methods were compared with those found using the traditional methods. Table I outlines the defect rate normalized to defects per NCSS for both prerelease and postrelease defects. Prerelease defects were found during formal abuse testing, during casual use by other individuals, and by the code's designer. Postrelease defects include prerelease defects and the defects found in the first four months after product release. All the postrelease defects were found internally, either by abuse testing or by casual use. No customer defects had been reported at the time of this study.
Although the structured code shows a slightly lower defect density than the unstructured code, the differences are not statistically significant (using a statistical test that compares two Poisson failure rates [6]).
Low-severity defects are considered to be caused by typical coding errors that occur at the same frequency, independent of the analysis and design method used. Thus, using all the DTS defects is not truly indicative of the underlying defect density. Another way of characterizing the defect density is to look only at severe defects, those classified as serious or critical. Table II examines these defects.
Again, the structured code shows a slightly lower density but the difference is not significant. We knew that the code designers' rigor in logging defects that they found themselves varied a great deal. Since this might affect the quality results, we examined only the defects logged during formal abuse testing. It was again found that there was no statistical difference between the methodologies.
Our theory, developed from these results, is that the final reliability of a product is mainly a function of environmental factors, including management expectations and peer pressures. For example, reliability can either be designed in at the beginning using structured methodologies or tested in at the end of the project with thorough regression tests.
Performance
In the design of structured modules, one is encouraged to reduce the complexity of each module by breaking up functionality. This results in more function calls and increases the processing time of critical functions.
The software for this project had critical performance requirements for communication rates. The processing of the bytes of data by the firmware was the critical path. The critical functions had to be recoded using in-line assembly code. Although structured methods were not used on this communication firmware, it is our opinion that the structured methods as used on this project would not have helped to identify performance-critical functions or to improve the performance of these functions.
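The call-per-byte overhead that structured decomposition introduces can often be removed in C itself before resorting to in-line assembly, for example by expressing the per-byte step as a function-like macro. This sketch is ours, not from the project, and the names are hypothetical:

```c
/* Per-byte step factored out as a function, as structured design
 * encourages -- one call per byte on the critical path. */
static unsigned char add_byte(unsigned char sum, unsigned char b)
{
    return (unsigned char)(sum + b);
}

/* The same step as a macro: identical logic, no call overhead. */
#define ADD_BYTE(sum, b) ((unsigned char)((sum) + (b)))

unsigned char checksum_calls(const unsigned char *buf, int n)
{
    unsigned char sum = 0;
    for (int i = 0; i < n; i++)
        sum = add_byte(sum, buf[i]);   /* call per byte */
    return sum;
}

unsigned char checksum_macro(const unsigned char *buf, int n)
{
    unsigned char sum = 0;
    for (int i = 0; i < n; i++)
        sum = ADD_BYTE(sum, buf[i]);   /* expanded in place */
    return sum;
}
```

Both versions compute the same checksum; the macro keeps the structured factoring visible in the source while avoiding the call in the generated code.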
A newer form of structured analysis [8] has been developed that is based on modeling the problem data as the first step. The data is represented in an information model, with state control diagrams showing the control of the data. This may in fact prove to be a better model for real-time applications that have critical performance requirements.
Supportability
A large part of the cost of software over its life can be attributed to maintenance activities. Software that is supportable will have lower maintenance costs. Maintenance refers not only to the repair of defects, but also to software enhancement and modification.
One factor that relates strongly to software supportability is the complexity level of each function. Functions with lower complexity tend to be easier to modify and repair. For this project, the average complexity of each module of the structured code was less than the average for the unstructured code. Table III shows a summary of the complexity statistics for the two code types.
Modules with a cyclomatic complexity greater than 10 may be too complex and should be reviewed for possible restructuring. Only 13% of the structured code had a complexity greater than 10, while 20% of the unstructured code did. A statistical test (comparison of two binomial fractions) showed this to be a significant difference.
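For reference, McCabe's cyclomatic complexity of a single function is one plus the number of binary decisions in it. The following C sketch approximates the measure by scanning for decision keywords and operators; a real tool such as Ccount tokenizes the source properly, so this is only a simplified illustration with hypothetical names, and it assumes the scanned text contains no misleading comments or string literals.

```c
#include <string.h>

/* Count non-overlapping occurrences of a substring. */
static int count_sub(const char *s, const char *sub)
{
    int n = 0;
    const char *p = s;
    while ((p = strstr(p, sub)) != NULL) {
        n++;
        p += strlen(sub);
    }
    return n;
}

/* Approximate cyclomatic complexity: 1 + decision points. */
int cyclomatic(const char *body)
{
    return 1 + count_sub(body, "if (")
             + count_sub(body, "while (")
             + count_sub(body, "for (")
             + count_sub(body, "case ")
             + count_sub(body, "&&")
             + count_sub(body, "||");
}
```

A straight-line function scores 1; each added branch or short-circuit operator raises the score by one, which is why deeply nested conditional code quickly crosses the threshold of 10.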
The discipline of the structured methods helps the designer to write consistent documentation that can be used by those who must support the product.
Conclusions
Our analysis of structured and unstructured techniques produced mixed results. It was apparent that the structured methodologies on this project did not provide improvement in the project development time. In fact, a longer development time was measured. This was partially because of the time required for learning the structured methodologies. However, the manageability of designers using structured techniques is higher because of well-defined development cycles. Also, the structured code appears to be more reusable, so it should improve the productivity of future projects reusing this code.
In this project, structured methods didn't appear to improve the reliability of the final code significantly. It is our opinion that reliability is more a function of the team's environmental factors. The structured code appears to be more supportable since module complexity was lower. Structured methods do not appear to be a major benefit in developing code where performance factors are a main requirement. No significant benefit was seen for the areas of functionality and usability except in those projects where the techniques were used to enhance communication of the product design and specification.
Some aspects of the structured methodology were disappointing. However, what was most important for our development team was the discipline that the structured methods added to the early phases of the product definition. We feel the results of the structured methods are positive enough to continue using these methods on future projects. Fig. 8 summarizes the results of the comparison of the two methodologies.
COPYRIGHT 1989 Hewlett Packard Company