An architecture for migrating to an open systems solution - HP's Worldwide Support Systems' model for switching to client/server, open systems architecture
Michael E. Thompson

A process and a model have been developed that provide an easy growth path to a client/server, open systems architecture for information technology applications.
As we move further into the 1990s, information technology is undergoing rapid change and facing significant challenges. Business decisions are evolving rapidly to meet competition and satisfy customers. Technology is offering more choices than were available in the past decade. Emerging client/server technologies and competitive business needs are forcing information technology groups to reevaluate how high-quality business solutions can be delivered faster.
HP's Worldwide Support Systems (WSS) organization is an information technology group that develops mission-critical [dagger] systems for Hewlett-Packard's worldwide customer support business. For the last decade, WSS has developed software applications using traditional technologies, tools, and processes.
During the late 1980s, WSS embraced client/server technology to meet rapidly changing business needs. Client/server technology is a form of distributed computing in which application processing is divided between a client process and a server process. The interaction is simple: the client process initiates requests to the server and the server fulfills the request and responds to the client. This article discusses our experiences with developing client/server solutions using open systems tools and technologies.
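The interaction pattern is easy to illustrate. The fragment below is a minimal sketch of the request/response exchange just described; the request and response formats, and the direct procedure call that stands in for the network hop, are assumptions made for illustration and are not drawn from the WSS systems.

#include <stdio.h>
#include <string.h>

typedef struct { char operation[16]; char account[16]; double amount; } request_t;
typedef struct { int status; char message[64]; } response_t;

/* Server process: fulfills the request and responds to the client. */
static response_t server_handle(const request_t *req)
{
    response_t rsp = { 0, "" };
    if (strcmp(req->operation, "WITHDRAW") == 0 && req->amount > 0.0)
        snprintf(rsp.message, sizeof rsp.message,
                 "withdrew %.2f from account %s", req->amount, req->account);
    else {
        rsp.status = 1;
        snprintf(rsp.message, sizeof rsp.message, "request rejected");
    }
    return rsp;
}

/* Client process: formulates a request, sends it, and waits for the
 * response.  In a real deployment this call would cross the network. */
int main(void)
{
    request_t req = { "WITHDRAW", "12-3456-7", 40.00 };
    response_t rsp = server_handle(&req);
    printf("status=%d: %s\n", rsp.status, rsp.message);
    return 0;
}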
Open systems provides a unified approach for developing and managing systems, networks, and user applications, resulting in the ability to run a single version of a software application on different hardware platforms. Because they communicate using standard protocols, products based on an open systems design can be interconnected and will work together regardless of what company manufactured them. For a user configuring a computer system or network, the benefit of open systems is the freedom to choose the best component for each function from the offerings of many manufacturers.
Our experiences began when the open systems concept was in its infancy, so we learned along with the rest of the industry. There were no clear leaders for each technology component required to develop a complete client/server solution. We believe our experiences are extremely valuable to those who are considering client/server development solutions while open systems standards are still being defined.
The Early 1980s
During the early 1980s, the primary focus of our use of information technology was to increase productivity in the support community through automation. Most of Hewlett-Packard's customer support business was organized around area and region business units. Each of these business units used HP 3000 systems as the primary, and sometimes only, business computer. Because these business units were decentralized, transaction volume was relatively moderate.
Our applications were designed to run on HP 3000 computers from block mode terminals using the VPLUS/3000 screen handler. Interactive dialogs were rarely used in applications. When communication between applications was required, a batch solution was usually the answer.
Systems were designed with very little effective code reuse. Business edits, database access, and screen I/O were all mixed together into a module or program. Our technology choices were limited to COBOL, Image (DBMS), and the VPLUS/3000 screen handler (Fig. 1).
There were few choices, if any, for software that would run across multiplatform or multivendor hardware or software. Introducing new technology into this type of environment was rarely considered. System developers worked within a limited technology environment where constraints existed because of the HP proprietary hardware selected.
The Late 1980s
Into the mid-to-late 1980s, different technology choices were becoming more available. Business managers were asking if we could deliver our application systems on PCs or HP-UX* workstations. The HP-UX operating system, computer-aided software engineering (CASE) tools, object-oriented languages, and relational database management systems (RDBMS) were emerging as popular alternatives to the traditional technologies and tools we had used over the past ten years.
We were faced with the challenge of developing a strategy to introduce these new technologies into our existing systems without any delay to our customers. This challenge forced us to reevaluate our most fundamental principles of systems development.
Technical Architecture
In the fall of 1989, WSS created a technical architecture to provide a foundation for developing a consistent long-term design strategy for systems development. This architecture provides a common framework for designing modern, integrated client/server systems. These systems are integrated in the sense that they can operate and interact with other applications through common application program interfaces (APIs) using integrated technology tools and components. The components of our current technical architecture are shown in Fig. 2.
Logical Model. This portion of the technical architecture defines a template that can be used to create or modify applications that are compatible with the open systems concept. The logical model is described in more detail below.
Recommended Supporting Technologies. Supporting technologies include hardware platforms, operating systems, network services, languages, databases, user interfaces, and development environments that support the logical model.
Recommended Tools. These are tools used by application developers, designers, and maintenance personnel for the creation and support of application software. Tools such as integrated CASE and branch flow analysis are some of the recommendations.
Design and Use Standards. This part of the architecture consists of style guides, interface definitions, and practices. These standards define a common set of terms and a consistent framework. Style guides include items such as guidelines and standards for design and coding. Practices refers to process standards such as code inspections and testing strategies for client/server implementations.
In summary, this architecture has been adopted as the model for HP information technology, which is an internal HP function responsible for developing and maintaining information systems applications and networks for organizations such as marketing, support, and accounting. The architecture is constantly being revised as new technologies, tools, standards, and other methodologies become available.
The technical architecture enables an application designer to define an application in terms of the logical model components, select the technologies needed to support the application, select specific tools for implementing the design, and follow design standards so that the application will have a common look and feel and be able to interact consistently with other applications.
Logical Model
The logical model shown in Fig. 3 is a definition or template of how an application can be partitioned for an open systems solution. It is a framework that organizes all applications into five components: user interface, user task logic, business transaction logic, data manager, and environment manager. This model is a good basis for researching and recommending open systems strategies for application development. In addition, it defines a framework for strategies relating to application migration and third-party purchases. Finally, this model does not imply any specific technologies or client/server configuration because these decisions are made in other parts of the technical architecture.
User Interface. The user interface provides the user's view into a business application, and manages all communication between the user and the application. It is responsible for painting the "picture" the user sees and collecting user input. It constructs dialogs by presenting data or choices and requesting selection, data input, or other responses.
A good example of a user interface is the automated teller machine, or ATM, which is used for automated banking transactions. The ATM provides an interface that enables a user to perform a bank transaction. The interface itself is idle most of the time, waiting for the user; when active, it interacts with the user by presenting data such as accounts and balances and requesting selections such as withdrawals or deposits.
User Task Logic. The user task logic drives the user interface and invokes the appropriate business transaction processing. Understanding the results of user requests and communicating them to the user interface (where the results are displayed) are also the responsibilities of the task logic layer. For example, in the ATM example, when the user selects an option (such as withdrawal), the ATM application (client) formulates a request to the bank account management application to perform the withdrawal. The ATM application waits for a response from the account management application. Formulating the request to the bank application and communicating the response to the user interface are task logic functions.
Environment Manager. This component is responsible for managing communication between the other four components of the logical model. Communication responsibilities include connecting two components (dynamic and static connections), overall management of application layer instances, and communication from one client/server installation to another. For example, in the ATM model, the environment manager allows clients to transfer funds from one banking establishment to other banking establishments.
Business Transaction Logic. The business transaction logic creates, deletes, updates, and retrieves business data. It imposes business policy or rules upon the data. Transaction and data access security is managed in this layer to control authentication of the user requesting the transaction.
After the user presses "OK" on the ATM user interface, the task logic will transmit the withdrawal request to the banking application. The requester of the transaction is validated, the withdrawal request is edited, and account balances are checked. These are all functions performed by the business transaction logic.
Data Manager. This layer manages the physical storage of data, and provides read and write access to the data. It manages concurrent access by multiple users (business transactions). It is responsible for ensuring the physical integrity and recovery of data. In the ATM example, the banking application receives requests from many users at one time. The function in the banking application that updates the user's account with the withdrawal information and locks the account until the withdrawal is completed is located in the data manager.
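To make the division of responsibilities concrete, the following sketch arranges the ATM example across the five components, with each layer calling only the one beneath it. The function names, the single account, and the pass-through environment manager are simplifying assumptions made for illustration only.

#include <stdio.h>

/* Data manager: physical storage, locking, and concurrent access. */
static double balance = 100.00;
static int data_withdraw(double amount)
{
    if (amount > balance) return -1;   /* protect physical integrity */
    balance -= amount;                 /* update under an (implied) lock */
    return 0;
}

/* Business transaction logic: business rules and authorization. */
static int txn_withdraw(const char *user, double amount)
{
    if (user == NULL) return -1;       /* requester must be identified */
    if (amount <= 0.0) return -1;      /* business edit on the request */
    return data_withdraw(amount);
}

/* Environment manager: connects the client-side and server-side
 * components; here a plain function call stands in for the network. */
static int env_send_withdraw(const char *user, double amount)
{
    return txn_withdraw(user, amount);
}

/* User task logic: formulates the request and interprets the result. */
static void task_withdraw(const char *user, double amount)
{
    int rc = env_send_withdraw(user, amount);
    /* User interface: presents the outcome to the user. */
    printf(rc == 0 ? "Please take your cash.\n" : "Transaction refused.\n");
}

int main(void)
{
    task_withdraw("card-1234", 40.00);
    return 0;
}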
Client/Server Architecture
The five components of the logical model are interconnected through message-based interfaces that allow the components to be separated physically into a client/server configuration. Fig. 4 shows an overview of our client/server architecture. Applications are organized into a client, a server, and a client/server interface. The client, which consists of the user interface and user task logic, deals with presenting and managing data for users. The server, which consists of business transaction logic and the data manager, deals with managing the business data and enforcing business rules and policies. Finally, the environment manager provides the client/server interface, which is a transparent network linkage between client and server.
Migrating Existing Systems
To test the feasibility of the technical architecture, we used it on two existing production applications. The first project consists of 100,000 lines of code and is used to dispatch and manage software-support service calls for HP's customer response center. This project was a step-by-step migration from the traditional application code architecture (shown in Fig. 1), which typically dispersed task and business logic throughout the application, to the client/server structure defined by the technical architecture. The original application ran in an HP 3000 MPE operating system environment and consisted of an Image database, COBOL application code, and VPLUS/3000 online access code. The new client application uses a graphical user interface and runs on HP 9000 Series 300 and 400 machines in an HP-UX operating system environment. The server portion of the application runs in native mode [dagger] on an HP 3000 Series 900 machine running in an MPE/iX operating system environment.
One of the objectives of this migration was to salvage as much of the original application as possible while reengineering the technical infrastructure. The existing COBOL code was a mixture of online code and database access operations. We took the following steps in migrating the old application to the new client/server architecture:
1. We evaluated existing code to determine which database access operations should be combined with application edits to ensure the same data integrity and performance.
2. The results from step 1 were encapsulated in a predefined interface, which became the single access path for all external processes (clients and other batch applications), as sketched below. This portion of the application is the server in the client/server logical model.
3. We evaluated the existing code again to determine the online application logic that had to be reengineered to run in a new hardware and software environment. This environment included an HP 9000 Series 300 or 400 running the HP-UX operating system and a graphical user interface based on OSF/Motif. This portion of the application is the client in the client/server logical model.
4. The connectivity component that enabled the application component from step 2 to interact with the application component from step 3 was designed and developed. This portion of application and other supporting software represents the environment manager in the client/server logical model.
Fig. 5 shows the components of the new system encapsulated in the client/server model.
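The sketch below illustrates the kind of server shell produced by steps 2 and 4: one dispatch routine becomes the single access path through which every client or batch program must pass. The transaction names and handler bodies are assumptions made for illustration; they are not taken from the call tracking application.

#include <stdio.h>
#include <string.h>

typedef int (*txn_handler)(const char *args, char *reply, size_t replen);

/* Each handler bundles the business edits with the database operation
 * (represented here only by the reply it builds). */
static int txn_open_call(const char *args, char *reply, size_t replen)
{
    snprintf(reply, replen, "CALL OPENED: %s", args);
    return 0;
}

static int txn_close_call(const char *args, char *reply, size_t replen)
{
    snprintf(reply, replen, "CALL CLOSED: %s", args);
    return 0;
}

/* The single access path used by clients and batch programs alike. */
static int server_dispatch(const char *txn, const char *args,
                           char *reply, size_t replen)
{
    static const struct { const char *name; txn_handler fn; } table[] = {
        { "OPEN_CALL",  txn_open_call  },
        { "CLOSE_CALL", txn_close_call },
    };
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(txn, table[i].name) == 0)
            return table[i].fn(args, reply, replen);
    snprintf(reply, replen, "UNKNOWN TRANSACTION: %s", txn);
    return -1;
}

int main(void)
{
    char reply[128];
    server_dispatch("OPEN_CALL", "printer will not power up", reply, sizeof reply);
    printf("%s\n", reply);
    return 0;
}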
The technical architecture helped to describe the division of responsibilities between the client and the server. The biggest impact the architecture had was in determining the location of the network. The logical model does not prohibit an application from introducing a network between any two layers. However, the client/server view of the logical model recommends splitting the application with a network between the user task logic and the business transaction logic.
The second project involved the development of a new client application with a graphical user interface that runs on HP 9000 Series 300 and 400 machines and a server application that runs in native mode on an HP 3000 Series 900 machine. This application configures and prices support products for HP's customer support business. The project consisted of approximately 40,000 lines of code.
The success of these two projects proved that the technical architecture, particularly the logical model, provided us with a framework for migrating our installed base of applications to new hardware and software platforms and developing new applications.
Process Learning Experiences
Despite the success of the two projects described above, both exposed areas where we needed improvement in both process and technology.
Business Transaction Design Time. Defining appropriate business transactions requires a lot of work at the beginning of a project. However, investing this extra time helps build a foundation for the rest of the application design. Once these transactions are defined, development teams can work in parallel.
Business transactions provide the interface between clients and servers. If the transactions are designed properly, clients will be able to communicate with any server (aside from security restrictions) using the business transactions defined for it. The design of either the client or the server can change without affecting the other as long as the business transaction rules or the transactions themselves are preserved. New clients can easily be added to any server by simply using the relevant transactions. Therefore, maintenance is decreased and reusability is increased. In addition, if business transactions are defined correctly, network traffic will be reduced.
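The following sketch shows this decoupling property in miniature: the client code depends only on the business transaction definition, so the server internals can be replaced without touching the client. The record layouts and pricing rules are illustrative assumptions, not the actual configuration and pricing transactions.

#include <stdio.h>

/* Shared business transaction definition: the contract both sides use. */
typedef struct { char product[16]; int quantity; } price_request;
typedef struct { double total; } price_reply;

/* One server implementation of the transaction... */
static price_reply price_server_v1(price_request req)
{
    price_reply r = { req.quantity * 100.0 };
    return r;
}

/* ...and a later replacement with different internals.  The client is
 * unaffected because the transaction definition did not change. */
static price_reply price_server_v2(price_request req)
{
    double unit = req.quantity >= 10 ? 90.0 : 100.0;  /* volume discount */
    price_reply r = { req.quantity * unit };
    return r;
}

/* The "client": written only against the transaction definition. */
int main(void)
{
    price_request req = { "SUPPORT-A", 12 };
    printf("v1 total: %.2f\n", price_server_v1(req).total);
    printf("v2 total: %.2f\n", price_server_v2(req).total);
    return 0;
}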
During the call tracking project, we spent only 10% of our design, code, and test time working on business transaction design. However, with the configuration and pricing project, we invested approximately 50% of our initial work on the transaction definition. These transactions defined our application more concisely, which gave us better results and less rework as the project progressed.
Use of Information Engineering. The technical architecture provides one of the building blocks for application development. However, to meet certain business needs and improve the development process, parallel efforts such as usability engineering, performance engineering, and information engineering are recommended.
We spent a lot of time reworking the transaction definitions and code for the call tracking project because we did not do a thorough analysis of the problem. Using information engineering techniques in the beginning of the project can save time and effort. Information engineering provides a structured methodology and guidelines for discussing the how, what, and where aspects of development. The how relates to the processes in the application, the what is the data the applications use, and the where refers to the location of the solution (in hardware or software).
Use of Data Flow Diagrams. During the configuration and pricing project, we used data flow diagrams (DFDs) and structured analysis and structured design techniques to help represent the flow of data among the modules of the program. This enabled us to address critical elements during transaction definition. Data flow diagramming is the link between the analysis and design phases of information engineering.
Increase Training Time. Migrating the call tracking application was among the first client/server projects for HP information technology. Because the client/server model brought a new environment to HP information technology, users and developers had to become proficient in new skills and a new way of thinking about organizational relationships, applications, processes, and technology. The learning curve for both our developers and our users was substantial. We spent more time training our developers, and did not spend enough time training our users. The amount of time needed to train our users was significantly underestimated. This greatly increased our acceptance risks during the implementation phase of the project.
Consider Performance Needs. We focused our efforts more on network performance than client performance, which later became an issue. Client performance includes the response time from one logical task (such as selecting help, making a menu choice, or editing a dialog box) to the next logical task. The client application is driven by the end user and therefore must perform according to end-user expectations. Thus, user performance requirements must be considered so that developers can balance new development tools and requirements with the hardware platforms that users demand.
Correct Hardware and Software Infrastructure. The expense of migrating to a client/server environment should be considered. While migrating our applications, we discovered that we did not have the correct hardware and software infrastructure in place. We needed high-speed networks which were not available at all sites. This limited our communication capabilities. In addition, we still used old hardware that had memory limitations. This limited our ability to incorporate new technologies, causing performance problems.
Adopt a Migration Plan. Migrating from a traditional (terminal, host-based applications) architecture to a client/server architecture can be approached in phases. Existing databases should be surrounded with a server shell. Clients can communicate with servers using properly designed business transactions. This client/server separation allows the database to be replaced later without replacing the client. Unmigrated applications can be integrated on the user's workstation, using terminal emulation, until they are migrated to a new platform.
Technology Lessons Learned
Besides the process improvement lessons described above, we also learned some lessons about the technical architecture and our design model.
Adopt a Technical Architecture. Today, internal customers demand multivendor network connectivity providing standardized application services. Open systems will allow selection from a wide range of multivendor solutions. Although open systems standards are still evolving, adopting a technical architecture will help provide guidelines for selecting technologies and tools that support open systems. This reduces the risk of locking application developers into a vendor-specific framework of technologies, tools, and processes. As standards become defined or new technologies are introduced, migrating to open systems will be much easier.
Defining and standardizing interfaces allows application logic to be independent of technology. The API is a standard library of functions that separates the user from the underlying technology. Because an API can be used to encapsulate a technology, changes within the encapsulation can occur without affecting application code. For example, technologies (tools) providing the interface to a database can change without impacting the business transaction logic.
Our migration strategy required interfaces between clients and servers to be standardized, normalized, and portable. For example, HP's response center lab developed a network API called TVAL (tagged values), for interprocess communication between HP-UX and MPE/iX operating systems. This API will protect our investment during migration from NetIPC to an open systems product such as Network Computing System (NCS).
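The following sketch illustrates the tagged-value idea in general terms: each field travels as a tag/value pair, so the message content can grow without breaking existing senders or receivers, and the transport beneath the API can be exchanged without touching application code. The encoding and function names shown are assumptions made for illustration and are not the actual TVAL interface.

#include <stdio.h>
#include <string.h>

/* Pack a tag=value pair into a flat message buffer, one pair per line. */
static void tv_put(char *buf, size_t buflen, const char *tag, const char *value)
{
    size_t used = strlen(buf);
    snprintf(buf + used, buflen - used, "%s=%s\n", tag, value);
}

/* Look a tag up in a received buffer; returns 1 and copies its value if found. */
static int tv_get(const char *buf, const char *tag, char *out, size_t outlen)
{
    char key[64];
    snprintf(key, sizeof key, "%s=", tag);
    const char *p = strstr(buf, key);
    if (!p) return 0;
    p += strlen(key);
    const char *end = strchr(p, '\n');
    size_t len = end ? (size_t)(end - p) : strlen(p);
    if (len >= outlen) len = outlen - 1;
    memcpy(out, p, len);
    out[len] = '\0';
    return 1;
}

int main(void)
{
    char msg[256] = "", caller[64];
    tv_put(msg, sizeof msg, "TXN", "OPEN_CALL");
    tv_put(msg, sizeof msg, "CALLER", "response-center");
    if (tv_get(msg, "CALLER", caller, sizeof caller))
        printf("caller tag = %s\n", caller);
    return 0;
}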
Develop a Business and Data Model. Implement a model to provide a description of the business that is going to be supported. This model will document what processes the business needs to meet its goals, and what information is needed for each process. Once this information is determined, develop a data model to provide consistent definition and interpretation of the data wherever it is used, and of business rules that determine its integrity. This will allow different applications to be consistent and to interact successfully. The objective is to manage data so that it is defined only once, although it may be used by multiple processes or applications (clients) and physically stored in several locations (servers).
Develop Data Security. As we migrate to an open systems environment, customers from outside HP will be able to access our systems. In our current environment, systems only check for valid identification and authorization. Standards groups are working to establish a Distributed Computing Environment (DCE) that includes security services. DCE is a comprehensive, integrated set of services developed and sponsored by the Open Software Foundation (OSF) which supports the development, use, and maintenance of distributed applications. DCE supports transparent connection and communication of any object (client or server) within a distributed networked environment.
Benefits
While developing and migrating our applications, we also discovered some unexpected benefits.
Development Team Independence. Splitting applications into client, server, and communication components encourages application development across organizational boundaries. Thus, applications can be developed in parallel, which decreases the time to market. The call tracking project was developed by two different development teams in HP's support organization. This allowed the teams to be managed separately while the application was being developed simultaneously. The configuration and pricing project teams were also split between client, server, and communication logic. After defining the various APIs (communication, error handling, and data buffer) between the client and the server, each team was able to develop and test its part separately. The teams worked independently and did not need to be concerned with the internals of the other components. The three parts were brought together successfully during integration testing.
Lower Maintenance and Higher Quality. Both projects greatly simplified maintenance. We noted that most service requests associated with the initial releases of the two products were for changes to client behavior, not business logic. Because the user task logic (client) is separate from the business transaction logic (server), the amount of code to be reviewed before a change can be implemented is greatly reduced. This allowed us to estimate schedules more easily and turn around application change requests faster. Lastly, the development teams became familiar with the application more quickly because they did not have to learn both the user task logic and the business logic.
Since their initial releases, both projects have exhibited very high quality. In addition, the application teams consisted of eight to ten people for the initial release and only one to two people during the subsequent releases.
Improved Application Testing. Server testing was automated by defining a single interface to the server. We developed a test driver that could send predefined test cases, capture server responses, and compare them with expected results. We developed 2500 reusable server test cases. This accomplished about 90% of our unit testing. In addition, the complete test for the server takes only two hours, allowing a full regression test suite to be run whenever the server is changed. This allows the team to perform verifiable regression tests quickly as needed.
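The following sketch shows the shape of such a regression driver: predefined cases are pushed through the server's single interface and the replies are compared with expected results. The server stub, case format, and transaction names are assumptions made for illustration, not the actual 2500-case suite.

#include <stdio.h>
#include <string.h>

/* Stand-in for the real server's single access path. */
static void server_dispatch(const char *txn, char *reply, size_t replen)
{
    if (strcmp(txn, "OPEN_CALL") == 0)
        snprintf(reply, replen, "STATUS=OK");
    else
        snprintf(reply, replen, "STATUS=UNKNOWN_TXN");
}

struct test_case { const char *txn; const char *expected; };

int main(void)
{
    /* In practice these cases would be loaded from the reusable suite. */
    static const struct test_case cases[] = {
        { "OPEN_CALL", "STATUS=OK" },
        { "BAD_TXN",   "STATUS=UNKNOWN_TXN" },
    };
    int failures = 0;
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        char reply[64];
        server_dispatch(cases[i].txn, reply, sizeof reply);
        if (strcmp(reply, cases[i].expected) != 0) {
            printf("FAIL %s: got %s, expected %s\n",
                   cases[i].txn, reply, cases[i].expected);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}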
Conclusion
Developing a client/server solution using an open systems design strategy provided several learning experiences. These experiences included leveraging current investments of hardware, software, and personnel, choosing the right products and standards to integrate software, avoiding software and hardware lock-in, and incorporating new technologies.
The technical architecture provides a framework for deciding which technology and tools to use for client/server development. Once this architecture is defined, migrating and developing applications to the client/server paradigm can be accomplished without completely rewriting existing applications.
In addition, developing client/server applications using the open systems concept means that a development environment can be tailored to meet specific business needs. This minimizes the risk of locking developers into specific hardware and software platforms and processes. Resources can be spent on designing the best combination of existing and new components.
Finally, we have found that the technical architecture described in this paper will work because it provides an open systems solution, offering a simple migration path to a client/server architecture.
Acknowledgments
Gratitude goes to Skip Ross and his staff--including the development team and production application teams--for their great efforts in seeing this project through. Special thanks to Tobin Cudd and Greg Spray, who made significant contributions to the design and development of the technical architecture.
[dagger] Mission-critical systems are software applications that cannot be taken offline for an extended period of time without adverse impact (e.g., order management, engineer dispatch, and inventory control systems).
[dagger] Native mode implies that a machine is using its own capabilities (capabilities inherent to the machine) as opposed to compatibility mode, in which a machine is emulating another machine.
HP-UX is based on and is compatible with UNIX System Laboratories' UNIX* operating system. It also complies with X/Open's* XPG3, POSIX 1003.1, and SVID2 interface specifications.
UNIX is a registered trademark of UNIX System Laboratories Inc. in the U.S.A. and other countries.
X/Open is a trademark of X/Open Company Limited in the UK and other countries.
COPYRIGHT 1992 Hewlett Packard Company