C2 in the CIRF Test: A Proof of Concept of the C2 TO-BE Operational Architecture
Amanda Geller

In our reports, (1) we have described the elements of a global Agile Combat Support (ACS) network capable of enabling air and space expeditionary forces. The components of this global ACS network include:
* Forward operating locations (FOL) that can have differing levels of combat support (CS) resources to support a variety of employment time lines
* Forward support locations (FSL) and continental United States (CONUS) support locations (CSL); that is, sites for storing heavy CS resources such as munitions or sites with back-shop maintenance capabilities such as jet engine intermediate maintenance (JEIM)
* A robust transportation system to connect the FOLs, FSLs, and CSLs
* A combat support command and control (CSC2) system that facilitates estimating support requirements, configuring the specific nodes of the system selected to support a given contingency, executing support activities, measuring actual CS performance against planned performance, developing recourse plans when the system is not within control limits, and reacting swiftly to rapidly changing circumstances
A notional illustration of these components of the ACS network of the future is shown in Figure 1.
[FIGURE 1 OMITTED]
This article focuses on three components of the ACS network: the CSC2 system, maintenance FSLs, and the distribution system that connects the FSLs to the FOLs. Specifically, we discuss how a CSC2 system was implemented in a test of maintenance FSLs, more commonly known as centralized intermediate repair facilities (CIRF). The CSC2 system implemented during the CIRF test demonstrates the viability of the CSC2 process concepts outlined in the CSC2 TO-BE operational architecture. (2)
CSC2 Objectives
The CSC2 system is a pivotal element of the expeditionary concept, as it is responsible for coordinating the other components of the CS network. Joint and Air Force doctrine defines command and control (C2) as "the exercise of authority and direction by a properly designated commander over assigned and attached forces in the accomplishment of the mission." (3) It includes the battlespace management process of planning, directing, coordinating, and controlling forces and operations. Command and control involves the integration of the systems, procedures, organizational structures, personnel, equipment, facilities, information and communications that enable a commander to exercise command and control across the range of military operations. (4)
Earlier RAND analysis further delineated required C2 capabilities, based on the support needs of expeditionary operations. (5)
* Generate support requirements based on desired operational effects.
* Provide support assessments quickly and continually, and effectively communicate CS capabilities in terms of operational effects.
* Monitor resources in all theaters and allocate resources in accordance with global priorities.
* Be self-monitoring during execution and able to adjust to changes in either CS performance or operational objectives.
Testing CSC2 Concepts in Maintenance FSL Operations
From September 2001 to March 2002, the Air Force developed and tested several CSC2 capabilities associated with the operation of maintenance FSLs, referred to by the Air Force as CIRFs. CIRFs are centralized repair locations that provide intermediate repair capabilities for selected components; for example, engines, electronic warfare (EW) pods, and avionics components. Before describing the CIRF test parameters, we will present a brief background of the events that led to the test in fall 2001.
CIRF History
The concept of centralized intermediate maintenance is not a new one and has been implemented in various forms throughout the Air Force since the Korean conflict. Much of this history is documented in this edition of the Air Force Journal of Logistics, as well as in an upcoming RAND report. (6)
RAND's involvement with CIRF began with the onset of the ACS concept in the late 1990s. There are numerous options for positioning resources and processes at FOLs, FSLs, and CSLs, and each option has differing effects on operational effectiveness and support efficiency. Several analyses have modeled the FOL, FSL, and CSL interactions for individual commodities--including F-15 avionics, (7) low-altitude navigation and targeting infrared for night (LANTIRN) pods, (8) and JEIM (9)--and defined circumstances under which the concepts would be most successful. In each of these studies, a mix of FSLs and CSLs proved to have advantages over the current decentralized maintenance concept, in which each unit deploys its own intermediate maintenance shops, along with its aviation units, to the deployed site. The centralized maintenance and support concepts were briefed to senior Air Force leadership as early as 1997, and the United States Air Forces in Europe (USAFE) Director of Logistics expressed an interest in testing these ideas in 1998. However, the Air War Over Serbia began in 1999, before a formal test could begin.
CIRF Operations and Noble Anvil (10)
In 1999, USAFE adopted CIRFs (maintenance FSLs) for use in Joint Task Force Noble Anvil (JTFNA), the Air Force component of the Air War Over Serbia. While the Air Force maintained base repair in the CONUS, three overseas facilities already operating informally as maintenance FSLs were officially designated as CIRFs during Noble Anvil. This reduced intermediate-level maintenance deployment by approximately two-thirds, enabled rapid spinup of repair operations, and demonstrated that CIRFs were capable of supporting contingency operations. However, ad hoc augmentation of CIRF assets significantly delayed the arrival of needed resources. These delays raised several questions regarding CIRF implementation processes and procedures, including CSC2 issues of how organizations should communicate and assets should be managed to meet operational goals.
CIRF Test Background
Based on experiences in JTFNA, the Air Force Deputy Chief of Staff, Installations and Logistics directed further development and testing of several ACS concepts, including that of CIRFs. The test was developed to determine how well CIRFs, with a well-planned support system, could support steady-state operations.
The test involved five wing-level USAFE work centers functioning as CIRFs for engines, LANTIRN pods, EW pods, and F-15 avionics for units supporting Operations Northern Watch and Southern Watch. The USAFE Regional Supply Squadron (RSS) acted as the C2 decision authority and controlled the allocation of spare items throughout the theater. CIRF operations in the test drew heavily on the RAND concept of maintenance FSLs but had several deviations as well. (11) In the test, when selected units deployed to Northern Watch and Southern Watch, they augmented CIRF staffing, equipment, and spares based on pre-established trigger points. The operational environment of the CIRF test is mapped in Figure 2.
[FIGURE 2 OMITTED]
The CIRF Test and CSC2 Operational Architecture
This article discusses the CSC2 capabilities addressed throughout the CIRF test. The CIRF C2 structure was designed to provide a common operating picture and bring total asset visibility to decisionmakers at all levels, thereby improving support to the warfighter in both planning and execution activities. The common operating picture was to be used to assess the condition of deployed units, monitor the effectiveness of CIRF operations (based on customer wait time [CWT] and quality of repair), determine whether support operations should be modified, and monitor the inventory position of all units to determine how repair and spares capability should be allocated. These assessments, in conjunction with Air Force operational goals, were to guide the prioritization of weapon system availability goals and the corresponding allocation of resources.
These responsibilities link very closely with the planning and execution process outlined in the CSC2 TO-BE operational architecture and shown in Figure 3. This process begins, as shown on the left side of the figure, with the development of an integrated operational and CS plan. The jointly developed plan is then assessed to determine its feasibility, based on CS resource availabilities. Once the plan is determined to be feasible, it is executed. In the execution control portion of the process, shown in the lower right of the figure, actual CS process performance is compared to the control parameters identified as necessary to achieve the operational measures of effectiveness in the planning process. When a parameter measuring actual CS performance is not within the limits set in the planning phase, the system notifies CS planners that the process is out of control and that get-well analyses and replanning are necessary.
[FIGURE 3 OMITTED]
This process centers on integrated operations and CS planning but also incorporates activities for continually monitoring and adjusting performance. A key element of planning and execution in the process template is the feedback loop that determines how well the system is expected to perform (during planning) or is performing (during execution) and warns of potential system failure. It is this feedback loop that tells the RSS support planners to act when the CS plan should be reconfigured to meet dynamic operational requirements, during both planning and execution. The feedback loop can drive changes in the CS plan and might call for a shift in the operational plan as well. Feedback might include notification of missions that cannot be performed because of CS limitations. (12) For the CS system to provide timely feedback to the operators, it must be tightly coupled with their planning and execution processes and systems and provide options that will result in the same operational effects, yet cost less in CS terms.
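The control logic of this loop can be summarized in a short sketch. The Python fragment below is a minimal, hypothetical illustration of the planning-and-execution feedback loop; the function names, planning factors, and limit values are assumptions made for illustration, not elements of the TO-BE architecture itself.

```python
# Minimal sketch of the planning-and-execution feedback loop described above.
# All names, planning factors, and limit values are hypothetical illustrations.

def develop_integrated_plan(operational_objectives):
    """Jointly develop an operational and CS plan (placeholder logic)."""
    return {"objectives": operational_objectives,
            "control_limits": {"cwt_days": 8.0, "engine_removals_per_week": 3}}

def plan_is_feasible(plan, available_resources):
    """Assess the plan against CS resource availabilities (placeholder logic)."""
    return available_resources.get("spare_engines", 0) >= 4

def within_control_limits(observed_performance, control_limits):
    """Compare actual CS performance to the limits set during planning."""
    return all(observed_performance[name] <= limit
               for name, limit in control_limits.items())

def csc2_cycle(operational_objectives, available_resources, observed_performance):
    plan = develop_integrated_plan(operational_objectives)
    if not plan_is_feasible(plan, available_resources):
        return "infeasible: adjust the CS plan (or the operational plan) and reassess"
    if not within_control_limits(observed_performance, plan["control_limits"]):
        return "out of control: notify CS planners for get-well analysis and replanning"
    return "within limits: continue execution"

if __name__ == "__main__":
    print(csc2_cycle({"sorties_per_day": 12},
                     {"spare_engines": 5},
                     {"cwt_days": 9.5, "engine_removals_per_week": 2}))
```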
The C2 responsibilities defined in the CIRF test tie very closely to this process, as the resource allocation and prioritization of weapon system availability are both parts of the integrated planning process. Likewise, the common operating picture and comprehensive assessments of deployed units are necessary for the feedback loop that links the planning and execution phases.
CIRF Test Results
By most measures, the CIRF test showed centralized maintenance operations to be an effective step toward a global ACS framework. The CIRFs supported all deployed sorties with a reduced deployment footprint. The regional supply squadron provided responsive decisionmaking capability; logistics costs and requirements were reduced; and the pre-established trigger points, with few exceptions, successfully supported operations. Procedures and performance standards were established in advance, based on operational needs, and used to measure performance and guide operations throughout the test. For example, while support operations and spares inventories occasionally fell short of the standards set at the beginning of the test and necessitated loaners from other units, the ability of units to recognize when operations were falling short and provide the necessary resources demonstrates the effectiveness of the pre-established performance standards and feedback loops. However, as CIRF implementation progressed, opportunities to improve operations were uncovered. There were several instances of processes, chains of command, and information systems not being defined for situations that arose. In this section, we detail the achievements of the CIRF test with respect to the four C2 objectives discussed earlier.
C2 Objective 1. Generate Support Requirements Based on Desired Operational Effects
A primary goal of the CIRF test was to meet the sortie requirements of Northern Watch and Southern Watch. The RSS personnel--maintenance, transportation, and supply planners--used these sortie requirements and projected flying hours to determine FOL spare levels and performance standards for transportation times, maintenance times, and all other components of customer wait time.
As illustrated in Figure 4, CIRF planners used operational sortie generation and weapon system availability objectives to establish control parameters for CS performance--including expected unit component removal rates, transportation times between the CIRFs and the operational locations, CIRF repair times, inventory buffer levels (for example, contingency high-priority mission support kit levels), and other parameters--and tracked actual logistics pipeline performance against these control parameters. (13)
[FIGURE 4 OMITTED]
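The logic behind these control parameters can be illustrated with a simple pipeline calculation of the kind long used for reparable items (see note 13): the flying program drives expected removals, and each removed item occupies the repair and transportation pipeline for roughly one customer wait time. The Python sketch below is a minimal illustration using assumed, hypothetical planning factors; it is not the method actually used to set the CIRF test thresholds.

```python
# Minimal pipeline sketch: how a flying program and customer wait time
# translate into an FOL spare-engine level. All planning factors are assumed.
import math

def expected_pipeline(flying_hours_per_day, removals_per_flying_hour, cwt_days):
    """Expected number of engines in the repair/transport pipeline at any time."""
    return flying_hours_per_day * removals_per_flying_hour * cwt_days

def spares_for_fill_target(pipeline_mean, fill_target=0.90):
    """Smallest spare level s such that P(pipeline <= s) >= fill_target,
    treating the pipeline content as Poisson-distributed (a common simplification)."""
    s = 0
    term = math.exp(-pipeline_mean)
    cumulative = term
    while cumulative < fill_target:
        s += 1
        term *= pipeline_mean / s
        cumulative += term
    return s

if __name__ == "__main__":
    # Hypothetical planning factors for one FOL
    pipeline = expected_pipeline(flying_hours_per_day=24.0,
                                 removals_per_flying_hour=0.01,
                                 cwt_days=8.0)
    print(f"Expected engines in pipeline: {pipeline:.2f}")
    print(f"Spare engines needed for a 90% fill target: {spares_for_fill_target(pipeline)}")
```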
The bottom of Figure 5 shows some of the CS process control parameters monitored during the CIRF test. The top half of the figure shows how two parameters associated with customer wait times, one from the CIRF to deployed units and the other from depots to the CIRFs, were monitored against trigger points or control limits. The CWT control graphs show the percentiles of total customer wait time for a number of FOLs for a 3-month period in Enduring Freedom.
[FIGURE 5 OMITTED]
The performance threshold lines on the figure illustrate how the C2 system might indicate if a control limit were breached and the theater distribution system (TDS) performance or strategic resupply system were out of control and had the potential to affect weapon system availability objectives. This comparison of support performance to the control parameters established from operational goals took place during the Enduring Freedom CIRF test. Personnel at the USAFE Regional Supply Squadron monitored transportation, maintenance, and supply parameters and compared them to those needed to achieve operational weapon system availability objectives, as shown on this figure. (14)
When the performance of the theater distribution system fell out of tolerance with these control parameters, RSS personnel could show how that performance, if left uncorrected, would affect future operations, and they were able to do so before the negative impacts actually occurred.
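A simple sketch of this kind of monitoring follows. It computes a percentile of observed customer wait times for each pipeline segment and compares it to a trigger point; the segment names, observations, and trigger values below are hypothetical, not CIRF test data.

```python
# Hypothetical sketch of CWT percentile monitoring against trigger points.
# Shipment records and trigger values are illustrative, not test data.
from statistics import quantiles

def cwt_percentile(wait_times_days, pct=85):
    """Return the pct-th percentile of observed customer wait times."""
    cuts = quantiles(wait_times_days, n=100)
    return cuts[pct - 1]

def breached(wait_times_days, trigger_days, pct=85):
    """True if the monitored percentile exceeds the trigger point."""
    return cwt_percentile(wait_times_days, pct) > trigger_days

if __name__ == "__main__":
    segments = {
        # segment name: (observed CWTs in days, trigger point in days)
        "CIRF to deployed units": ([3, 4, 5, 5, 6, 6, 7, 9, 11, 12], 8.0),
        "Depot to CIRFs":         ([6, 7, 8, 8, 9, 10, 10, 11, 12, 13], 14.0),
    }
    for name, (observed, trigger) in segments.items():
        status = "BREACH" if breached(observed, trigger) else "within limits"
        print(f"{name}: 85th percentile {cwt_percentile(observed):.1f} days "
              f"(trigger {trigger}) -> {status}")
```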
Another example of the CIRF test's link between operational and support performance was seen in determining spare levels at each FOL and process performance parameters for the CIRF. Support thresholds set in the CIRF test plan were later verified using a simulation model that represented a unit's flying schedule and the associated base and CIRF processes, tracking daily spare engine and pod inventories at each base and in intermediate repair over the duration of the Northern Watch and Southern Watch scenario. (15)
To verify the targets set in the CIRF test, we used the simulation model and held all operational requirements constant. We then varied support performance incrementally. For example, for a given sortie profile, we examined how variations in transportation performance or removal rates might affect spares levels at the FOLs. In this manner, we could establish threshold values for process performance parameters and verify that targets set at the beginning of the CIRF test were adequate to achieve operational goals. The Air Force has recommended similar CWT goal development for other mission-design series and commodities.
Using these techniques, we also were able to observe interactions among performance parameters--for example, removal rates and customer wait time--and how they would impact operational performance (that is, sortie generation capability). For example, at low engine-removal rates, 1- or 2-day variations in the customer wait time for engines sent to the CIRF do not have a significant impact on operational readiness. With fewer removals, the time each engine spends in repair is less critical: unless CWT increases by an order of magnitude, additional engines are unlikely to break while the original engine is gone. However, at higher removal rates, with more engines sent to the CIRF, the time each engine spends not mission capable has a much greater impact on spare parts inventories and the ability of units to meet their sortie requirements.
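This interaction can be seen in a small simulation. The sketch below flies a fixed daily sortie schedule, generates engine removals at a given rate, sends each removed engine into the pipeline for one customer wait time, and counts the days on which removals outpace the spare pool. The flying program, removal rates, CWT values, and spare level are assumptions made for illustration; this is not the RAND model cited in note 15.

```python
# Illustrative simulation of the removal-rate / customer-wait-time interaction.
# Sortie rates, removal rates, CWTs, and spare levels are assumed values.
import random

def fraction_of_days_short(removals_per_sortie, cwt_days, spares,
                           sorties_per_day=12, days=90, seed=1):
    """Fraction of days on which removals outpace the available spare engines."""
    rng = random.Random(seed)
    return_days = []      # day each engine now in the pipeline comes back
    available = spares
    short_days = 0
    for day in range(days):
        # Engines return from the CIRF once their customer wait time has elapsed
        available += sum(1 for d in return_days if d <= day)
        return_days = [d for d in return_days if d > day]
        # Today's flying generates removals; each removal enters the CWT pipeline
        removals = sum(rng.random() < removals_per_sortie
                       for _ in range(sorties_per_day))
        available -= removals        # negative means aircraft waiting for engines
        return_days.extend(day + cwt_days for _ in range(removals))
        if available < 0:
            short_days += 1
    return short_days / days

if __name__ == "__main__":
    for rate in (0.01, 0.05):        # low versus high removal rate
        for cwt in (8, 10):          # a 2-day difference in customer wait time
            short = fraction_of_days_short(rate, cwt, spares=4)
            print(f"removal rate {rate:.2f}/sortie, CWT {cwt:2d} days: "
                  f"short on {short:5.1%} of days")
```

With these assumed values, the low removal rate produces few or no short days at either CWT, while at the higher rate the 2-day difference in CWT noticeably increases the share of days the unit is short, mirroring the pattern described above.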
C2 Objective 2. Provide Support Assessments Quickly, Continually, and Effectively and Update and Communicate Status Reports
One of the key enablers of access to status reports and quick and effective support assessments is the CIRF staff's ability to provide a common operating picture. During the CIRF test, this common operating picture was provided through the Air Force portal. At the time of the CIRF test, the portal had four modules: Fleet Engine Status, Fleet Engine Trending Report, Fleet CIRF Engine Status, and Fleet Pod Status. Further information on the capabilities of each module is provided in "CIRF Toolkit: Developing a Logistics Common Operating Picture." (16) This information system provided the status of each engine and pod at each unit and the availability status of transportation resources, allowing units to anticipate when they would get repaired parts back.
The CIRF portal also enabled immediate transfer of information and automatic aggregation of information from a central database. This ensured that, once a part was repaired, shipped, or delivered, its status would be updated and allocation and prioritization decisions would be made from the most current information available.
After the CIRF toolkit was completed in January 2002, it was first implemented by USAFE, Air Combat Command, and the Pacific Air Forces and received positive feedback from maintenance personnel. Air Mobility Command (AMC), Air Force Special Operations Command, Air Education and Training Command, Air National Guard, and Air Force Reserve Command users were to be added next, with the anticipation of reducing the reporting workload throughout the CIRF community by 25 percent.
However, throughout the CIRF test, several opportunities for improvement were also noted. While the toolkit facilitated the sharing of data across organizations, there was also valuable information that was not incorporated. For example, the portal did not contain complete information about engines and pods while they were in repair. Furthermore, this information was neither included in the portal nor centralized at any single position within the CIRF. During the test, there was no established point of contact for unit status. As a result, deployed units called several people in the propulsion flight for information. This led to problems on multiple fronts. Fielding questions not only distracted CIRF personnel from their primary responsibilities but also resulted in conflicting reports when the same question was posed to more than one person.
Similarly, while the CIRF toolkit contained the status of each engine and pod, during the test, it did not provide this information as a unit status report. As a result, it was difficult to provide feedback on a unit's capability or on what its needs might be. This made it more difficult for the regional supply squadron to allocate effectively. The portal also provided very little information on changes to units' taskings. The CIRF staff was, therefore, caught shorthanded at points throughout the test, when taskings were changed and units deployed with a greater workload or fewer augmentees than expected. To correct these deficiencies, the Air Force Maintenance Management Division has recommended continuing the development of the CIRF toolkit and using the toolkit to formalize the tracking of engines and pods.
C2 Objective 3. Monitor Resources in all Theaters and Allocate in Accordance with Global Priorities
As the decisionmaking authority of the CIRF test, the USAFE Regional Supply Squadron monitored resources in the European Command (EUCOM) and Central Command (CENTCOM) theaters. The regional supply squadron combines the supply C2 responsibilities of mission capability management, stock control, stock fund management, information system management, operational assessment and analysis, and reachback support procedures with the transportation C2 responsibilities of shipment tracing and tracking, source selection, traffic management research, movement arrangements, shipment expediting, customs issues, and channel requirements. The organization is designed to interface with the maintainers at the CIRF to provide "combatant commanders ... with operational materiel distribution C2 and regional weapon system support" and provide a comprehensive picture of the CIRF's needs.
The integrated nature of the regional supply squadron allowed the CIRF to provide responsive support to the deployed units. However, some holes in C2 presented challenges. The USAFE Regional Supply Squadron had the authority to distribute parts to both EUCOM and CENTCOM forces, despite their being different theaters; as a USAFE organization, however, it was unfamiliar with the full spectrum of CENTCOM theater issues. Furthermore, the regional supply squadron faced some difficulties in resource allocation. Lack of clearly defined decision processes and command relationships forced the regional supply squadron to coordinate among deployed units, CIRFs, and MAJCOMs about personnel, equipment, status, funding, and transportation. Many of these issues were outside the RSS area of responsibility, and the regional supply squadron did not have the authority to set policies or determine resource allocation.
CIRF operations also raised issues of prioritizing support between USAFE home units that hosted CIRFs and deployed units. When deployed units faced shortages, home wings often were forced to provide their own resources as loaners. In these circumstances, their home-station support could potentially be degraded. Although the needs of the deployed units were generally given higher priority than those of the home units, care must be taken to ensure that degraded home-station support does not impair training capability and, thus, place the Air Force at risk of being unable to respond to additional conflicts.
The lack of definition in command relationships was just one manifestation of the difficulties the CIRF faced in resource allocation. Although the regional supply squadron performed well as the CIRF decision authority, decision rules for cross-theater support were not yet fully developed at the time of the test. Maintenance and part requirements often were renegotiated throughout the course of operations. Because the CIRF was often not prepared for these added requirements, additional capability needed to be deployed. Augmentation presented many challenges as well, since augmentee unit type codes had not been defined at the start of the test and staff needed to be pulled in by unit line number instead. Furthermore, to moderate the delays caused by the augmentation process, many man-hours were spent trying to provide added capability from the CIRF home wings. CIRF wings often were forced to provide their own resources as loaners, leading to further complications, as touched on above. Home-station support was compromised, support was degraded, assets became tied up in awaiting parts (AWP) status, and tracking of funds was complicated. Finally, although CIRF-wing line-replaceable units were authorized with the same Joint Chiefs of Staff project code as those of deployed units, this authorization was not universally understood.
C2 Objective 4. Be Self-Monitoring and Adjust to Changes in Operational Needs and Support Performance
One key to the success of the CIRF test was the clear definition of support goals and the ability of CIRF staff to monitor their own performance and make corrections when the goals were not being met. For example, as part of the Strategic Distribution Management Initiative (SDMI), transportation planners monitored the customer wait time of each item sent to the CIRF, through each stage of the repair process. They could, therefore, determine when customer wait time exceeded the target times and examine their transportation processes to see how resources could be put to better use. Throughout the CIRF test, the Tanker Airlift Control Center (TACC) at AMC provided qualitative feedback to USAFE, the US Transportation Command (USTRANSCOM), and other organizations on issues underlying the SDMI CWT statistics. This feedback allowed transporters to take corrective actions when needed, as was the case in the use of C-130s in CIRF transportation. Originally, USAFE was using a combination of trucks and C-130s to move cargo to the CIRF. C-130s were often available, and planners were concerned that they would otherwise fly empty, wasting valuable airlift capacity. However, channel routes for C-130s were unpredictable, and the cargo waiting for airlift could at times have been shipped faster by truck. TACC reports highlighted this issue and relayed concerns to USAFE, which ultimately shifted to a truck-only policy.
Another example of the C2 responsiveness in the CIRF test dealt with TDS performance to Al Jabar Air Base in Kuwait. Transportation times consistently exceeded the 4- to 6-day CWT performance criterion needed to support EW and LANTIRN pods at this location. The RSS personnel worked with AMC and USTRANSCOM personnel to improve TDS performance to this location, but the customer wait time could not be improved with the resources that USTRANSCOM was willing to allocate to the theater distribution system. As a result, the regional supply squadron and deployed unit personnel made the decision to deploy EW and LANTIRN repair capability to Al Jabar during the Enduring Freedom CIRF test.
Use of this CSC2 process during the CIRF test represented a significant improvement in CSC2. Doctrine and educational programs that fully describe the process are being established to implement these concepts across a wide variety of CS processes Air Force-wide.
Despite these capabilities, the CIRF test revealed opportunities for further improvement to feedback capabilities. Limitations in information systems presented challenges in forecasting and information transfer. For example, the CIRF toolkit did not have a simple way to provide feedback on the status of units. Information was tracked by engine and pod serial number, which made it difficult to aggregate records to the unit level. In addition, the two information systems used in requirements forecasting, GATES and Brio, are under study to improve forecasting capabilities. The ability of the CIRF staff to predict cargo arrival and plan accordingly is dependent on the accuracy of these systems.
Even when feedback was given, CIRF planners still had difficulties using this information to adjust their operational and support plans. For example, if assets sent to the CIRF were missing components or had problems not described in their accompanying documentation, CIRF staff did not always have channels through which to follow up. In the event these discrepancies needed to be investigated before repair could proceed, the lack of accountability led to an increase in customer wait time. This lack of documentation also made it difficult to investigate possible foreign object damage or equipment abuse and provided no way to incorporate these issues into policies and plans.
Going Forward: Implementing C2 Changes
Changes to Air Force operational and CS processes and the C2 elements supporting them (that is, doctrine and policy, organizational relationships, training, and information systems) will allow the Air Force to better meet each of its C2 objectives. Some steps that may be taken to improve the C2 network are described below.
Organizational Changes
As discussed above, many of the CSC2 tasks required to manage the CIRF are currently performed by the USAFE Regional Supply Squadron. These C2 functions could be accessed virtually by the COMAFFOR A4 and performed in a reachback fashion from the COMAFFOR A4 Rear by a permanent, standing operations support center that would receive virtual inputs from the regional supply squadron on CIRF operations. This arrangement would leave the regional supply squadron free to focus on the daily supply operations of the CIRF and the rest of its theater, while giving the operations support center visibility of the spares involved in this operation, as well as spares supported by other processes and the resources needed to initiate and sustain operations. Operations support centers should have visibility of theater resources and the ability to work with the Air Force and joint communities to ensure that allocations are in accordance with theater and global operational priorities. The operations support centers should report to the theater AFFOR/A-4 and communicate with inventory or commodity control points and the Air Staff Combat Support Center. The Combat Support Center should be responsible for providing integrated weapon system assessments across commodities and should have the capability to support allocation decisions when multiple theaters are competing for the same resources.
Each of the operations support centers and the Combat Support Center should have clear channels of communication with the deployed units, with the CIRF, and among each other. (17)
Information Sharing
Another important aspect of command and control is the successful sharing of information. The CIRF toolkit could be expanded to include the status of engines and pods in repair and aggregate status reports to provide information by unit. In addition, all operations, support, and C2 nodes (that is, the regional supply squadron, CIRF, and deployed units) could establish points of contact to provide all parties involved with a common operating picture.
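As a simple illustration of what such unit-level aggregation might look like, the sketch below rolls hypothetical serial-number-level engine records up into a per-unit status summary. The record fields, unit designations, and status categories are assumptions, not the CIRF toolkit's actual data model.

```python
# Hypothetical sketch: rolling serial-number-level records up to unit status.
# Field names, units, and status values are illustrative, not the toolkit's data model.
from collections import defaultdict

records = [
    # (serial number, owning unit, status)
    ("E1001", "Unit A", "serviceable at unit"),
    ("E1002", "Unit A", "in repair at CIRF"),
    ("E1003", "Unit A", "in transit to unit"),
    ("E2001", "Unit B", "in repair at CIRF"),
    ("E2002", "Unit B", "serviceable at unit"),
]

def unit_status(records):
    """Count engines by status for each owning unit."""
    summary = defaultdict(lambda: defaultdict(int))
    for serial, unit, status in records:
        summary[unit][status] += 1
    return summary

if __name__ == "__main__":
    for unit, counts in unit_status(records).items():
        line = ", ".join(f"{n} {status}" for status, n in counts.items())
        print(f"{unit}: {line}")
```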
Similarly, procedures should be instituted to inform these nodes of changes to deployments. The AEF Center and MAJCOMs should inform the nodes when the deployment packages change, either through the CIRF toolkit or other established channels. The operations support center can then task additional CIRF augmentees and enable the CIRF to allocate spares accordingly. The CIRF staff also should have a feedback channel for cases where deployed assets and equipment are broken, incomplete, or not properly documented. This will allow units to correct their deployments and explore root causes of these discrepancies.
Doctrine, Policy, and Training
Based on the success of the CIRF test, the Air Force is proceeding with further implementation of the CIRF concept. To assist in this implementation, CIRF scenarios could be incorporated into Air Force and joint policy. The Air Force Maintenance Management Division; Materiel Management and Policy Division; Deployment and Distribution Management Division; and Planning, Doctrine, and Wargames Division have been tasked with incorporating CIRF procedures into Air Force Doctrine Document (AFDD) 2-4, Combat Support; Air Force Instruction (AFI) 21-101, Aerospace Equipment Maintenance Management; Air Force Manual 23-110, USAF Supply; and AFI 24-201, Cargo Movement. This will involve revising spare item allocation standards and defining manpower and support unit type codes that can be used in a centralized maintenance scenario. In addition, further study of CIRF scenarios--to identify deployment requirements, performance standards, and other resource needs--could enhance operations. More specifically, the Air Force has tasked the USAFE Maintenance, Supply, and Transportation Directorates with evaluating the CWT goals and reassessing them every 6 months. This will keep transportation performance standards current with changing operational objectives.
Summary
The CIRF test provided an opportunity to not only study the implementation of CIRFs but also test the many C2 concepts that enable this implementation. Over the 6 months of CIRF test operations, the centralized repair and decisionmaking organizations performed effectively and were able to meet each of the four objectives established in the C2 architecture. However, there were also several areas in which shortfalls were noted. Standardizing organizational roles and responsibilities, process and information requirements, and channels of communication will further improve command and control and enable smoother implementation of future CIRF operations.
Notes
(1.) For a full definition of the five basic components of the ACS infrastructure, see Robert S. Tripp, Lionel Galway, Paul S. Killingsworth, Eric Peltz, Timothy L. Ramey, and John Drew, Supporting Expeditionary Aerospace Forces: An Integrated Strategic Agile Combat Support Planning Framework, RAND, MR-1056-AF, Jan 99, and Robert S. Tripp, Lionel Galway, Timothy L. Ramey, Mahyar Amouzegar, and Eric Peltz, A Concept for Evolving the Agile Combat Support/Mobility System of the Future, RAND, MR-1179-AF, 2000.
(2.) James A. Leftwich; Amanda Geller; David Johansen; Tom LaTourrette, Patrick Mills; C. Robert Roll, Jr; Robert Tripp; and Cauley von Hoffman, Supporting Expeditionary Aerospace Forces: An Operational Architecture for Combat Support Execution Planning and Control, RAND, MR-1536-AF, 2002.
(3.) Joint Pub 1-02, DoD Dictionary of Military and Associated Terms, 12 Apr 01.
(4.) AFDD-1, Basic Air Force Doctrine, 1 Sep 97.
(5.) Summarized from Leftwich, et al, An Operational Architecture for Combat Support Execution Planning and Control, MR-1536-AF, RAND, 2002.
(6.) Amanda Geller, et al, Supporting Air and Space Expeditionary Forces: Analysis of Maintenance Forward Support Location Operations, RAND, forthcoming 2003.
(7.) Eric Peltz, Hyman L. Shulman, Robert S. Tripp, Timothy L. Ramey, and John G. Drew, Supporting Expeditionary Aerospace Forces: An Analysis of F-15 Avionics Options, RAND, MR-1174-AF, 2000.
(8.) Amatzia Feinberg, Hyman L. Shulman, Louis W. Miller, and Robert S. Tripp, Supporting Expeditionary Aerospace Forces: Expanded Analysis of LANTIRN Options, RAND, MR-1225-AF, 2001.
(9.) Mahyar A. Amouzegar, Lionel A. Galway, and Amanda Geller, Supporting Expeditionary Aerospace Forces: Alternatives for Jet Engine Intermediate Maintenance, RAND, MR-1431-AF, 2001.
(10.) A complete RAND analysis of JTFNA is provided in Amatzia Feinberg; Robert S. Tripp; James A. Leftwich; Eric Peltz; Mahyar Amouzegar; Col Russell Grunch; CMSgt John Drew; Tom LaTourrette; and Charles Robert Roll, Jr, Supporting Expeditionary Aerospace Forces: Lessons from the Air War Over Serbia, RAND, MR-1263-AF, 2002.
(11.) The Air Force did not centralize maintenance in CONUS, the potential for which was discussed in MR-1174, MR-1225, and MR-1431. Instead, the CIRF test was based on CIRFs implemented to support overseas deployments and contingencies. The units maintained base maintenance in CONUS.
(12.) Leftwich, et al.
(13.) Methods on how to derive logistics performance parameters from operational metrics for reparable components have been known for some time. See such articles as Robert S. Tripp, "Measuring and Managing Readiness: The Concept and Design of a Wartime Spares Push System," Logistics Spectrum, Vol 17, No 2, Summer 1983; Robert S. Tripp and Raymond Pyles, "Measuring and Managing Readiness: An Old Problem--A New Approach," Air Force Journal of Logistics, Spring 1983; and other publications on Dyna-METRIC and the Weapon System Availability Model.
(14.) The RSS personnel were performing a COMAFFOR A4 function as outlined by the CSC2 operational architecture. These personnel could be considered to be a virtual extension of the UTASC, an operations support center, as described in the CSC2 operational architecture.
(15.) Further information on the RAND model and analysis is contained in Geller, et al.
(16.) Article in The Exceptional Release, Logistics Officer Association Magazine, Spring 2002.
(17.) More discussion about organizational roles needed to support the CSC2 operational architecture can be found in "Combat Support C2 Nodes: Major Responsibilities," page 22 of this edition of the Air Force Journal of Logistics.
Ms Geller is a RAND adjunct staff member. Dr Tripp and Dr Amouzegar are senior analysts at RAND. Mr Drew is a research analyst at RAND.