Interoperability testing for HP MAP 3.0 - Hewlett-Packard's Manufacturing Automation Protocol implementation
Jeffrey D. Meyer
Interoperability testing is used to ensure that HP MAP 3.0 OSI services can communicate with other vendors' systems and to uncover errors both in HP's and other vendors' OSI implementations.
ONE OF THE PRIMARY objectives of HP's MAP 3.0 offering is to allow HP systems to communicate with those of other system vendors. To be successful in meeting this objective requires interoperability testing in addition to standard software testing practices such as module, system, and reliability testing. Interoperability testing is the verification of the ability of different network implementations to communicate. This type of testing helps to expose implementation errors in both HP and other vendors' systems, and most important, it helps to ensure that we provide our customers with a truly open system.
Need for Interoperability Testing
The large number of network standards, their relative newness, and the existence of many options within the standards make the implementation of a full OSI stack a formidable task. Errors can be made both in the selection and enforcement of options and in the general encoding and decoding of the protocol data units that are sent across the network. These types of errors may go undetected during testing between similar systems because they can cancel each other out. Canceling happens when the same error is present in both the sending and receiving code, the net effect being that no error is detected.
Fig. 1 illustrates this problem. In this case implementation A incorrectly requires the presence of optional field Y in a PDU. During the testing of A against itself this error is not detected because A always includes field Y. When A tests with implementation C, which does not include this field, the error is exposed. Fig. 1 also illustrates that interoperability testing is not transitive. A tests with B successfully because B includes the optional field. Also B tests successfully with C because B does not require the optional field Y. But A fails the test with C.
Because of the canceling effect, an OSI product can go through its internal software testing cycle and still contain significant undetected errors. Because of the lack of transitivity we cannot make assumptions about our ability to operate with one vendor based on our experience with another. For these reasons interoperability testing is an important and necessary element of the test cycle for OSI products.
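The canceling and non-transitivity effects described above can be sketched in a few lines of code. The implementations, the PDU layout, and the optional field Y below are all invented for illustration; this is not any vendor's actual encoder or decoder.

```python
# Hypothetical sketch of the canceling effect: A's decoder incorrectly
# requires optional field Y, but A's own encoder always sends Y, so the
# bug cancels when A is tested against itself.

def make_encoder(include_y):
    """Return an encoder that may or may not emit optional field Y."""
    def encode():
        pdu = {"x": 1}            # mandatory field
        if include_y:
            pdu["y"] = 2          # optional field Y
        return pdu
    return encode

def make_decoder(require_y):
    """Return a decoder; a buggy one rejects PDUs lacking optional Y."""
    def decode(pdu):
        if "x" not in pdu:
            return False          # mandatory field missing: real error
        if require_y and "y" not in pdu:
            return False          # bug: optional field treated as required
        return True
    return decode

# A: sends Y, (incorrectly) requires Y.  B: sends Y, accepts either.
# C: omits Y, accepts either.  These mirror Fig. 1 in the article.
impls = {
    "A": (make_encoder(True),  make_decoder(True)),
    "B": (make_encoder(True),  make_decoder(False)),
    "C": (make_encoder(False), make_decoder(False)),
}

def interoperate(sender, receiver):
    encode, _ = impls[sender]
    _, decode = impls[receiver]
    return decode(encode())

print(interoperate("A", "A"))   # True  -- bug canceled by A's own encoder
print(interoperate("A", "B"))   # True
print(interoperate("B", "C"))   # True
print(interoperate("C", "A"))   # False -- C's PDU finally exposes A's bug
```

Note that A passes with itself and with B, and B passes with C, yet A fails with C, which is exactly why passing tests with one vendor says nothing about the next.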
Conformance Testing
Another type of testing that exposes defects in OSI implementations is conformance testing. Conformance testing consists of running a set of communication tests between the system being tested and a reference system. The idea here is that the reference system is a correct implementation of the OSI layers being tested and that successful completion of the conformance tests will indicate that the tested system's implementation is also correct (conformant). Tests for OSI implementations are administered by a national agency. In the United States this is the Corporation for Open Systems (COS), which issues certification marks to systems passing the tests.
Conformance tests may eventually replace the need for most interoperability testing, but this is not the case today. The main problems are the availability and stability of tests and the commitment of vendors to undergo testing. Because the FTAM conformance tests are still under development and there are no accepted MMS tests in the United States, we must perform interoperability testing.
HP is taking an active role in participating in the COS's conformance testing. We have completed IEEE 802.4 and OSI Transport Class 4 tests and are working with the COS on FTAM testing. Because the conformance tests are also subject to errors in implementation, the participation of other vendors in conformance tests is essential to their becoming stable. As stable conformance tests become available and are widely accepted among vendors, we should see the following benefits.
* The tests will be very rigorous and produce a range of behavior much greater than that generated by a typical implementation. This should enable conformance testing to expose the majority of implementation errors.
* The tests will be improved over time to incorporate tests for implementation errors that might still be found between conformant systems.
* The rigorous and improved tests will mean that a system's conformance mark will provide a high level of confidence in its ability to interoperate with other conformant systems. This will reduce the need to perform controlled interoperability testing between each pair of implementations.
The Interoperability Test Process
Although interoperability testing could be performed by purchasing the other vendor's equipment and implementing and running the tests in-house, experience has shown that our most effective interoperability testing has resulted from cooperation between HP and the other vendor. The cooperative approach speeds the test development cycle and improves the ability to diagnose problems detected in the other vendor's equipment. If problems are found in the other vendor's equipment, their involvement improves the turnaround time for fixes.
Fig. 2 shows the interoperability testing process that has been established at HP. The process is initiated by an HP field representative after a customer request has been received for an HP OSI network product. The other system vendors on the network are reviewed against the systems we have already tested. If any have not been tested then the testing process begins. The testing process may also be initiated by R&D or marketing for vendors our customers might use in the future.
The first phase in the testing process is determining the availability of personnel within the factory to perform the testing. After identifying the internal team, contact with the other vendor is established for carrying out the testing. In some circumstances the customer may also be involved.
When the teams are established the next step is to exchange information about each vendor's implementation. The information exchanged consists of a document called a Protocol Implementation Conformance Statement (PICS) and, if available, network traces showing the protocol data unit encodings produced by the vendor's stack. The PICS describes the services and options supported for a protocol. Each of the ISO protocol standards specifies the information to be provided in the PICS, which usually takes the form of tables to be filled in. The PICS information is used to determine the functionality to be tested. The PICS information can also be used to determine if the functionality requested by the customer is met by the two implementations. The network traces from the other vendor can be compared with those of our implementation to identify differences and to look for known problems.
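The PICS comparison step above amounts to intersecting the two vendors' option tables. A minimal sketch of that idea follows; the option names and the flat dictionary layout are invented (a real PICS follows the tabular proforma given in each ISO protocol standard).

```python
# Hypothetical sketch of using two PICS tables to plan a test suite.
# True means the implementation supports that service or option.

hp_pics     = {"read": True, "write": True, "delete": True,  "recovery": False}
vendor_pics = {"read": True, "write": True, "delete": False, "recovery": False}
customer_needs = {"read", "write", "delete"}

# Functionality both sides support is what the test cases should cover.
testable = {opt for opt in hp_pics
            if hp_pics[opt] and vendor_pics.get(opt, False)}

# Customer requirements outside the intersection flag a mismatch early,
# before any equipment is moved or any tests are run.
gaps = customer_needs - testable

print(sorted(testable))   # ['read', 'write']
print(sorted(gaps))       # ['delete']
```

The value of doing this on paper first is that a functionality gap (here, the hypothetical `delete` service) surfaces before either team has invested in test execution.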
After this exchange of information takes place, both parties are prepared to decide on the appropriate test cases to be executed and the location where the tests are to take place. HP has a set of abstract test cases for both FTAM and MMS, which are generally proposed as part of the test suite. An abstract test case describes the test purpose, the steps to execute the test, and the expected results. In addition to testing at one of the vendor's facilities, two other options exist for test location. Testing can be performed over a wide area network by using a router to move data from each vendor's local IEEE 802.4 network to an X.25 public data network. An advantage of this type of testing is that both parties can work out of their respective labs, giving them easy access to the development and diagnostic tools available. Testing may also be performed at the COS's interoperability test lab. COS, the agency that performs conformance testing, has provided floor space in their lab for vendors to leave their equipment. For about a week every quarter vendors are invited to come to the lab to carry out interoperability testing. An advantage of this environment is that testing can be performed with more than one vendor.
Install, configure, and verify are combined and listed as a separate step because when equipment is moved for the purposes of this testing, it is important that its local functioning be verified first. This avoids wasting time diagnosing problems that have nothing to do with interoperability.
After all the preceding activities have been completed, the actual execution and interpretation of the interoperability tests can take place. The evaluation of the failed tests should involve both vendors. It is useful to gather as much information as possible, including traces of the dialog, error messages reported, errors logged, and configuration data. After diagnosing a problem, ownership is assigned and the problem corrected. If fixes are not readily available, both vendors may agree to proceed with other tests if they are confident that the error exposed will not affect the outcome of the other tests. In fact, it is useful to define tests that are loosely coupled, that is, test each component of the network service with little dependence on other functions. This ensures that testing can proceed in parallel with fixes being developed for exposed problems.
When testing is complete an entry is made for that vendor into an on-line information base. This entry describes the tests that were performed, and for those that failed, the symptoms and type of problem, the owner of the problem, and the fix. In addition to the test results, these entries also give information about the vendor's equipment, the personnel involved in testing, the time and location of testing, and the diagnostic tools available on the other vendor's equipment. This record can then be used for evaluating future field requests for vendor support as well as for tracking trends in interoperability problems.
Interoperability Results
The most encouraging result to come from our interoperability testing is the high level of cooperation we have had from other vendors. One reason for this cooperation is that all parties involved gain from this testing. Both vendors have the opportunity to improve their implementations and their customers get the assurance that the different systems will be able to communicate. We had originally thought that issues of defect ownership might arise frequently, but ownership has been resolved for all of the defects we have encountered so far.
The distribution of errors we have seen so far is shown in Table 1. Some quick observations from this table include:
* Interoperability (IOP) testing is necessary because defects were uncovered in the tests with all vendors.
* Defect rates from the session layer on down were low because the various implementations of these layers are quite stable.
* Defect rates should decrease steadily as interoperability testing is done with more vendors.
The largest class of errors encountered was protocol data unit (PDU) encoding errors, about 20%. It is noteworthy that from the presentation layer on up, the encoding rules for the PDUs change, and this is where the majority of the errors were located. The change is that the encoding rules are no longer explicitly stated in each protocol specification as they are in the lower layer standards. Instead, all the upper layer standards use what is called Abstract Syntax Notation One (ASN.1) to describe the way PDUs are to be constructed. ASN.1 is defined by two standard documents of its own. ASN.1 allows a consistent mechanism for describing the upper layer protocols, but the cost is that the final encoding of the PDU is less clear. ASN.1 also permits lengths of PDUs to be encoded in two different ways, and several vendors, including HP, had some errors in decoding both methods. See the article on page 11 for more about ASN.1.
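The two length encodings mentioned above are BER's definite form (short and long) and indefinite form, and handling both correctly is exactly where decoders slip. The following is a simplified sketch of a length decoder, not any vendor's implementation; error handling and the end-of-contents scan for the indefinite form are omitted.

```python
# Hedged sketch of decoding the BER length octets that follow a tag.
# First octet < 0x80: definite short form, the octet is the length.
# First octet == 0x80: indefinite form; content ends with octets 00 00.
# First octet > 0x80: definite long form; low 7 bits give the count of
# subsequent octets that hold the length, big-endian.

def decode_length(buf, i):
    """Decode BER length octets starting at buf[i].

    Returns (length, next_index); length is None for the indefinite
    form, whose content is terminated by an end-of-contents marker.
    """
    first = buf[i]
    if first < 0x80:                      # definite, short form
        return first, i + 1
    if first == 0x80:                     # indefinite form
        return None, i + 1
    n = first & 0x7F                      # definite, long form
    length = 0
    for octet in buf[i + 1:i + 1 + n]:
        length = (length << 8) | octet
    return length, i + 1 + n

print(decode_length(bytes([0x05]), 0))              # (5, 1)    short form
print(decode_length(bytes([0x82, 0x01, 0x2C]), 0))  # (300, 3)  long form
print(decode_length(bytes([0x80]), 0))              # (None, 1) indefinite
```

A decoder that handles only one of these forms will interoperate with like-minded peers and fail against everyone else, which is the canceling effect again.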
Within the FTAM errors, half of the errors were the result of the incorrect effect being applied to the file being accessed. For instance, a request to recreate a file with new attributes might actually result in the same file being overwritten and its old attributes kept (i.e., time of creation, owner, access restrictions, etc.). Another large class of errors came from two implementations being overly restrictive on a set of concurrency control flags, which are used to control simultaneous access to files.
Another trouble spot, at the application and presentation layers, had to do with the negotiation of contexts to be used for the two entities to communicate. When a connection is established at the presentation layer the two entities agree on the application layer protocols to be used (e.g., ACSE and MMS) and on the encoding rules, which are always those of ASN.1. One problem was that the codes used to indicate the protocols being proposed were still in a state of flux late in the implementation cycle. Other problems resulted from the standard's allowing multiple ways to order the proposed protocols and different methods of encapsulating the data from the upper layers.
One last trouble area worth mentioning resulted from optional fields. Six errors were the result of either an implementation requiring a field that was optional, or not handling a field that could optionally be present.
Useful Practices
The HP 4974A MAP 3.0 protocol analyzer proved to be an indispensable tool during interoperability testing. The analyzer captured all the OSI traffic between the two systems being tested and displayed it in real time on the screen. The traffic was presented in several windows corresponding to the different layers of the stack. This multilayer display proved especially useful for situations when one system encountered an error at an intermediate layer in the stack. We could see in real time the dialog that had occurred and determine which side was the last to transmit. Because the analyzer is able to save the traffic it monitors in files, we were able to keep track of exactly what dialogs took place for each test case.
The analyzer was also useful in identifying problems that are not apparent at the application layer. An example was incorrect disconnect or abort dialogs. Although at the application layer both sides may see the connection successfully released, the analyzer exposed errors in the dialog carried out at lower layers.
Another valuable quality of the analyzer is that it acts as an unbiased observer on the network. There is no doubt that traffic displayed by the analyzer is traffic that is actually on the network. This is more reliable than traces taken by either host.
The best method for isolating encoding errors is to place traces of each system's encoding for a particular PDU side by side and examine those fields that are different. Note that this process can also be carried out before actual testing if PDU traces are exchanged by the vendors as suggested in the interoperability process.
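The side-by-side comparison described above can be mechanized: split each vendor's trace into its fields and report only the fields that differ. The sketch below assumes a simple tag-length-value layout with single-octet tags and lengths; the tags and values are invented for illustration.

```python
# Hypothetical sketch of a field-by-field PDU trace comparison.

def split_fields(trace):
    """Split a simple tag-length-value byte string into a tag->value map."""
    fields, i = {}, 0
    while i < len(trace):
        tag, length = trace[i], trace[i + 1]
        fields[tag] = trace[i + 2:i + 2 + length]
        i += 2 + length
    return fields

# Two traces of the "same" PDU, one from each system under test.
hp_trace     = bytes([0x30, 0x01, 0x05,  0x31, 0x01, 0x07])
vendor_trace = bytes([0x30, 0x01, 0x05,  0x31, 0x01, 0x09])

a, b = split_fields(hp_trace), split_fields(vendor_trace)
for tag in sorted(set(a) | set(b)):
    if a.get(tag) != b.get(tag):
        print(f"tag 0x{tag:02X}: {a.get(tag)} != {b.get(tag)}")
```

Only the differing fields are printed, so the engineer's attention goes straight to the encoding disagreement instead of to the many octets the two systems already agree on.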
The logging facilities of the HP MAP services and the HP OSI Express card also proved very useful. Because much care had been taken to associate unique log messages with each potential error detected throughout the stack, several errors could be identified immediately from the log message text.
Conclusion
The interoperability testing process we have developed at HP has helped us uncover and correct errors in our OSI implementation as well as several other vendors' implementations. We plan to continue our interoperability testing with more vendors in the future. We now have a better understanding of the types of problems that occur between different implementations and through our records are in a better position to support our OSI offerings in the field.
Acknowledgments
The following people played key roles in the success of our interoperability test effort: interoperability project manager Jim Cunningham, previous manager Greg Gilliom who began this project, OSI Express engineers Mike Ellis, Steve Mueller, and Tom Smith, FTAM engineer John Smith, MMS engineers Pete Lagoni, Hugh Mahon, and Jeff Williams, and fellow interoperability engineer Ken Brown. Thanks must also be extended to the engineers at Allen-Bradley, Computrol, Data General Corporation, GE-Fanuc, Motorola, RETIX Incorporated, and Unisys for their hard work in making open systems a reality.
COPYRIGHT 1990 Hewlett Packard Company