
Article Information

  • Title: Automatic calibration for easy and accurate power measurements - HP 8990A peak power analyzer - includes related article on testing the peak power analyzer firmware - Technical
  • Author: David L. Barnard
  • Journal: Hewlett-Packard Journal
  • Print ISSN: 0018-1153
  • Year: 1992
  • Issue: April 1992
  • Publisher: Hewlett-Packard Co.

Automatic calibration for easy and accurate power measurements - HP 8990A peak power analyzer - includes related article on testing the peak power analyzer firmware - Technical

David L. Barnard

Changes in input power, carrier frequency, and sensor temperature are automatically compensated for. The user is not required to disconnect the sensor from the device under test and connect it to a calibration source.

The HP 8990A peak power analyzer is designed to measure the power of pulsed signals accurately over a wide dynamic range. A well-designed calibration strategy was required to achieve the specified accuracy over all of the specified operating conditions. HP 8990A calibration includes both calibration of the power sensor and calibration of the analyzer.

Calibration of the Sensor

The sensitivity of the sensor is strongly influenced by the operating conditions. Calibration of the analyzer and sensor to compensate for changes in sensor sensitivity over these varying conditions became a significant design issue. There are three parameters that affect the sensitivity of the sensor: the incident power level, the carrier frequency of the applied signal, and the operating temperature of the sensor diode (see Fig. 1). These needed to be carefully considered in the development of the calibration scheme.

Besides the accuracy requirement, other goals affected the selection of a calibration strategy. One of these was ease of use. We wanted to minimize the effort on the part of the user to perform a calibration. For example, we didn't want to require the user to disconnect the sensor from the device under test and connect it to a calibration source. This would be especially objectionable if frequent calibrations were required. Another goal was to avoid the need for a lot of specialized calibration hardware that would add significantly to the cost of the analyzer. We didn't want the analyzer to perform excessive computation during normal use, which could make operation slow.

Sensor Characteristics

The sensitivity of the sensor diode is a nonlinear function of the incident power. At low power levels, the diode response has a square-law characteristic, so the voltage appearing at the output of the sensor is proportional to the power of the applied signal. At high power levels, the response approaches linear operation; in this operating region, the sensor's output voltage is approximately proportional to the signal voltage. A broad transition region starts to appear above 10 µW and is still evident in the high-power region, so linear operation is never fully achieved. For this reason, no simple mathematical model exists to describe the voltage-versus-power transfer function of the sensor over the specified power range.

The sensor is required to operate over a broad frequency range, which brings into play the frequency-dependent characteristics of the diode. The dominant effect is a roll-off of sensitivity at higher carrier frequencies. This roll-off is strongly dependent on the incident power level. In addition, there are some minor ripples in sensitivity starting near the middle of the frequency range. These frequency dependencies are much more noticeable at low incident powers (see Fig. 2).

The sensitivity of the sensor also shows a significant temperature dependency. The sensitivity changes quickly at low temperatures and flattens out at high temperatures. Again, this effect is most noticeable at low incident powers (see Fig. 3).

Calibration Alternatives

We concluded that each sensor would have to be calibrated over its specified power range at a given operating point of carrier frequency and temperature. No way was found to model the physical processes in the sensor diode with sufficient accuracy to reduce this requirement. Several calibration alternatives were considered.

One solution would be to use a calibration source capable of producing accurate power levels over the power and frequency range of the sensor. The sensor could then be calibrated before use, regardless of its current operating temperature. However, such a calibration source would be prohibitively expensive. Also, this approach would require the user to disconnect the sensor from the source under test to perform the calibration, and the calibration would have to be repeated whenever the operating temperature of the sensor changed.

A more economical solution would be to have a single-frequency calibration source. As before, the calibration would be performed over the sensor's input power range. Frequency calibration data might be stored in the sensor to correct for the change in sensitivity over frequency. This approach would be less accurate, since the frequency response of the sensor diode is not independent of power and temperature. It would also suffer from the need for manual intervention by the user.

HP 8990A Approach

The disadvantages of these calibration approaches led us to consider a dramatically different approach: characterizing the sensor over temperature, frequency, and power, and storing this information in the sensor for use by the analyzer. This scheme relies on the long-term stability of the sensor diode technology, which was shown to be excellent in the course of extensive reliability testing. To make use of the data characterizing the sensor, the analyzer must know the operating conditions of the sensor. The operating power is not a problem, since the analyzer always knows what range it is set to. Since the analyzer has no way of determining the carrier frequency, the frequency must be specified by the user, but this would also be true for any of the previously mentioned calibration schemes. The one new item of information required by this scheme is the sensor's operating temperature.

To support this calibration scheme, the sensor is designed with a thermistor located in close proximity to the sensor diode. The analyzer can read the thermistor with an analog-to-digital converter to learn the operating temperature. This, along with the carrier frequency supplied by the user, gives the analyzer sufficient information to interpret the sensor data to perform a calibrated power measurement.
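As an illustration, reading the thermistor might look like the following minimal C sketch, which converts a raw ADC reading to a temperature by interpolating in a small table. The table values, grid, and names are illustrative, not the actual HP 8990A implementation.

/* Hypothetical sketch: convert a thermistor ADC reading to a sensor
   temperature by linear interpolation in a small calibration table. */
#include <stdint.h>

typedef struct {
    uint16_t counts;   /* raw ADC reading of the thermistor */
    float    temp_c;   /* corresponding temperature, degrees C */
} TempPoint;

/* Illustrative table, monotonic in ADC counts (an NTC thermistor gives
   higher resistance, hence higher counts here, at lower temperature). */
static const TempPoint temp_table[] = {
    { 200, 60.0f }, { 900, 40.0f }, { 2200, 20.0f }, { 3600, 0.0f }
};
#define N_POINTS ((int)(sizeof temp_table / sizeof temp_table[0]))

float sensor_temperature(uint16_t counts)
{
    if (counts <= temp_table[0].counts)
        return temp_table[0].temp_c;
    for (int i = 1; i < N_POINTS; i++) {
        if (counts <= temp_table[i].counts) {
            float span = (float)(temp_table[i].counts - temp_table[i-1].counts);
            float frac = (float)(counts - temp_table[i-1].counts) / span;
            return temp_table[i-1].temp_c +
                   frac * (temp_table[i].temp_c - temp_table[i-1].temp_c);
        }
    }
    return temp_table[N_POINTS - 1].temp_c;
}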

Implementation

To implement the selected calibration scheme, we needed to find an efficient way to represent the sensor's behavior. As previously mentioned, we concluded that the sensor's nonlinear relationship of input power to output voltage precluded the use of a simple mathematical model. This led us to test each sensor over the specified power range. Since the effects of temperature and frequency influence each other and are not easily modeled, it became apparent that the power-to-voltage transfer function of each sensor would have to be measured at a number of different temperatures and frequencies. A sensor test system was designed to perform the needed measurements. It measures the sensor output voltage as a function of power over the specified range of the sensor. The sources are broadband, allowing any test frequency in the specified range of the sensor to be used. The sensor under test is placed in a temperature chamber so it can be characterized over temperature.

A grid of temperatures and frequencies is constructed for each sensor model. For each temperature, power-versus-voltage data is collected at each test frequency. After correction for mismatch errors, the resulting test data forms a three-dimensional matrix of power and voltage pairs.
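Conceptually, the raw test data forms a structure like the following C sketch. The grid sizes and field names are illustrative; the article does not specify them.

/* Hypothetical layout of the raw sensor test data: for each
   (temperature, frequency) grid point, a set of measured
   (power, voltage) pairs spanning the sensor's power range. */
#define N_TEMPS  5     /* illustrative grid sizes */
#define N_FREQS  8
#define N_LEVELS 40

typedef struct {
    double power_w;    /* applied power, corrected for mismatch */
    double voltage_v;  /* measured sensor output voltage */
} PvPair;

typedef struct {
    double temps_c[N_TEMPS];
    double freqs_hz[N_FREQS];
    PvPair data[N_TEMPS][N_FREQS][N_LEVELS];
} SensorTestData;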

Processing the Sensor Test Results

The matrix of measurements delivered by the sensor test system is too bulky to store directly in the sensor. We decided to try processing the data with numerical curve-fitting techniques to yield a more compact representation. We were concerned that the representation be usable by the analyzer without a lot of time-consuming floating-point processing. Such processing could cause large delays whenever the analyzer recomputed the sensor's response.

This ruled out the use of logarithmic or exponential functions. Instead, we chose polynomials. To cover the large dynamic range, we broke the power-to-voltage curve into four segments. For each segment, a curve is fit to the data by the least squares method with the added constraint of making the endpoints match the adjacent segments. The resulting polynomial coefficients are much more compact than the original data, making storage in the sensor's internal EEPROM practical. Also, polynomials can be efficiently calculated in the analyzer.
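A minimal C sketch of how the analyzer might evaluate such a segmented polynomial is shown below. The four-segment split is from the article; the polynomial degree, breakpoint representation, and names are illustrative.

/* Hypothetical evaluation of the piecewise polynomial fit: select the
   segment containing the sensor voltage, then evaluate that segment's
   polynomial by Horner's rule (no logarithms or exponentials needed). */
#define N_SEGMENTS 4   /* the article's four power-range segments */
#define DEGREE     5   /* illustrative polynomial degree */

typedef struct {
    double v_max;             /* upper voltage bound of this segment */
    double coeff[DEGREE + 1]; /* c[0] + c[1]*v + ... + c[DEGREE]*v^DEGREE */
} Segment;

double voltage_to_power(const Segment seg[N_SEGMENTS], double v)
{
    int s = 0;
    while (s < N_SEGMENTS - 1 && v > seg[s].v_max)
        s++;                              /* find the segment containing v */

    double p = seg[s].coeff[DEGREE];
    for (int i = DEGREE - 1; i >= 0; i--)
        p = p * v + seg[s].coeff[i];      /* Horner's rule */
    return p;
}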

Analyzer Calculations

The analyzer reads the coefficients and associated data from the sensor's EEPROM at power-up or when the sensor is plugged in. The analyzer then uses the coefficients to calculate the polynomials, which give power as a function of sensor output voltage. Each set of coefficients, and therefore each polynomial, applies at a single point in a two-dimensional grid of temperature and frequency. Thus, the power at a given operating point is calculated by interpolating between the powers at the nearest test frequencies and temperatures. This interpolation is implemented by a spline surface-fitting algorithm[1] and is included as part of the overall voltage-to-power function.
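The spline surface fit of reference 1 is more elaborate, but the idea of interpolating between grid points can be pictured with simple bilinear interpolation over the four surrounding (temperature, frequency) grid points. The following C sketch is a simplified stand-in, not the actual algorithm.

/* Simplified stand-in for the spline surface interpolation: bilinear
   interpolation of power between the four nearest grid points.
   p00..p11 are the powers given by the per-grid-point polynomials,
   all evaluated at the same sensor voltage. */
double interpolate_power(double t, double t0, double t1,
                         double f, double f0, double f1,
                         double p00, double p01, double p10, double p11)
{
    double wt = (t - t0) / (t1 - t0);    /* temperature weight, 0..1 */
    double wf = (f - f0) / (f1 - f0);    /* frequency weight, 0..1 */
    double p0 = p00 + wf * (p01 - p00);  /* along frequency at t0 */
    double p1 = p10 + wf * (p11 - p10);  /* along frequency at t1 */
    return p0 + wt * (p1 - p0);          /* along temperature */
}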

The voltage-to-power function is applied in two ways. The first step is calculating the amount of internal gain that the analyzer must insert to amplify the detected voltage to match the selected full-scale power sensitivity. This requires the inverse function, power to voltage, which is calculated from the original function by a version of Newton's method[2]. Once the gain is set, we are assured that applying the power corresponding to the top of the screen to the sensor will result in the amplified output voltage corresponding to the top quantization level of the analog-to-digital converter (ADC). Independently, the offset leveling, which is automatically performed, ensures that the lowest ADC reading corresponds to zero power. The final step is to calibrate the rest of the ADC levels. A table maintained inside the analyzer translates the ADC levels to calibrated power. The voltage-to-power function is used to calculate the values in this table.
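The inversion step can be sketched as a plain Newton iteration with a numerical derivative, reusing the Segment type and voltage_to_power() from the earlier sketch. The article says only that a version of Newton's method is used; the tolerances and iteration limit here are illustrative.

#include <math.h>

/* Hypothetical power-to-voltage inversion: find the sensor voltage v
   at which voltage_to_power(v) equals the target power. */
double power_to_voltage(const Segment seg[N_SEGMENTS],
                        double target_power, double v_guess)
{
    const double dv = 1e-6;        /* step for the numerical derivative */
    double v = v_guess;

    for (int i = 0; i < 50; i++) {
        double err = voltage_to_power(seg, v) - target_power;
        if (fabs(err) < 1e-12)
            break;                 /* converged */
        double slope = (voltage_to_power(seg, v + dv) -
                        voltage_to_power(seg, v - dv)) / (2.0 * dv);
        v -= err / slope;          /* Newton update */
    }
    return v;
}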

This process must be repeated whenever a new analyzer sensitivity is selected, when a new carrier frequency is entered, when the sensor temperature changes, or when a new sensor is plugged in.

The analyzer continually monitors the sensor thermistor to check for temperature changes. If the temperature deviates more than a certain amount, the calibration procedure is automatically performed. This relieves the user from the worry of manually recalibrating the analyzer when the sensor's operating environment changes.
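In firmware, such a drift check might look like the following small sketch. The 2-degree threshold and routine names are illustrative; the article does not give the actual deviation limit.

#include <math.h>

static float last_cal_temp_c;

static void recalibrate(void) { /* stands in for the calibration sequence */ }

void monitor_sensor_temperature(float current_temp_c)
{
    const float threshold_c = 2.0f;   /* illustrative limit */

    if (fabsf(current_temp_c - last_cal_temp_c) > threshold_c) {
        recalibrate();                /* rerun the calibration procedure */
        last_cal_temp_c = current_temp_c;
    }
}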

Calibration of the Analyzer

The calibration of the analyzer is complicated by a number of constraints. These include circuit nonlinearities, the large dynamic range, and the nature of the signal in the HP 8990A. A systems design approach was required, including both hardware and software design.

Offset Voltage

Even ignoring the effects of the GaAs switches in the HP 8990A signal path, offset voltage is a design issue, since the amplifiers in the sensor produce an offset that depends upon ambient temperature and other factors and so tends to drift slowly in operation. The HP 84810 Series sensors incorporate a "chop" line, which commands the sensor circuitry to simulate a condition of no incident RF power. This permits offset leveling without operator intervention. The offset is periodically releveled to prevent any drift out of the specified accuracy over time. It doesn't matter where in the signal processing path the offset variation originates; a single swift automatic leveling effectively compensates for the offset.

Channel Resistance

The channel resistance of the GaAs switches is known to vary with operating temperature, so it is important to be able to compensate for the effects of channel resistance variation during analyzer self-calibration. One reason channel resistance effects are so important is that they affect the impedance match between amplifier stages and so influence the overall gain of the analyzer. Consequently, the voltage gain is measured during self-calibration. Another effect of channel resistance variation is that it directly affects the (nominally 50-ohm) input resistance of the HP 8990A. The sensors are calibrated in terms of their output to a load of exactly 50 ohms. During self-calibration the HP 8990A measures its own input resistance to determine the match between the sensor and the analyzer.

Offset DAC Circuits

In a sense, the HP 8990A self-calibration subsystems are built around the precision fine-DAC circuit, which is constructed of highly stable precision components. The circuit can operate as either a low-impedance source (voltage mode) or a precise medium-impedance source. The HP 8990A also has a coarse DAC, which is not a precision DAC. The coarse DAC injects into the signal path downstream of the precision fine DAC's injection point, separated from it by some amplification or attenuation (see Fig. 4).

Automatic Offset Leveling

To level the offset in the general case, the analyzer is first set up as follows. With the sensor chop line "pulled" and the precision fine DAC set to its starting position in medium-impedance mode, the stick DAC, which provides the reference voltage to the flash ADC (see Fig. 4), and the coarse offset DAC are set to midrange. Then the flash ADC is used to take many data samples. Typically the ADC output is at its upper or lower limit at this time, since the coarse DAC is probably in the wrong position. A conventional binary chop search is made for the coarse DAC setting that will bring the signal level within the range of the ADC. Then, because the fine DAC is extremely linear, an extrapolation can be done using two pairs of fine DAC settings and ADC readings to calculate the fine DAC setting that just gives an ADC reading of zero. This completes offset leveling for the simple case.

The coarse DAC has a relatively long settling time, so when offset leveling is performed, care is taken to avoid changing the coarse DAC setting unless absolutely necessary. Since different vertical ranges typically require different coarse DAC settings, changing the range can result in a delay because of the binary search needed to find the new coarse DAC setting. To avoid this delay, once offset leveling is performed in a particular vertical range, the coarse DAC setting for that range is stored. When that range is revisited, which might happen after the user makes a series of range changes, the stored coarse DAC setting is used. This eliminates the need for the time-consuming binary search except when the offset voltages have changed significantly.
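A condensed C sketch of this two-stage leveling follows. The hardware-access routines are assumed stubs, the DAC/ADC ranges and polarity are illustrative, and the per-range caching described above is omitted for brevity.

/* Illustrative hardware interface; the real register access is not
   described in the article. */
extern void set_coarse_dac(int code);
extern void set_fine_dac(int code);
extern int  adc_read_avg(void);     /* averaged flash ADC reading */

#define ADC_MIN 0                   /* illustrative flash ADC limits */
#define ADC_MAX 255
#define DAC_MAX 4095

int level_offset(void)
{
    /* Stage 1: binary (chop) search for a coarse DAC code that brings
       the ADC off its limits; assumes a higher code raises the reading. */
    int lo = 0, hi = DAC_MAX, coarse = DAC_MAX / 2;
    while (lo < hi) {
        set_coarse_dac(coarse);
        int r = adc_read_avg();
        if (r <= ADC_MIN)      lo = coarse + 1;   /* pinned low  */
        else if (r >= ADC_MAX) hi = coarse - 1;   /* pinned high */
        else break;                               /* in range    */
        coarse = (lo + hi) / 2;
    }

    /* Stage 2: the fine DAC is extremely linear, so two (code, reading)
       pairs define a line; extrapolate to the code giving a zero reading. */
    set_fine_dac(1000);  int r1 = adc_read_avg();  /* illustrative codes */
    set_fine_dac(3000);  int r2 = adc_read_avg();
    int fine = 1000 + (0 - r1) * (3000 - 1000) / (r2 - r1);
    set_fine_dac(fine);

    return coarse;   /* cached per vertical range to skip future searches */
}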

A manual zero feature analogous to that of Hewlett-Packard average power meters is provided. This feature can be time-consuming to use but is available for best accuracy when the signal is below -30 dBm. It is capable of correcting for offsets that precede the chop switch.

Vertical Calibration

Essentially, vertical calibration of the HP 8990A answers the question: How much signal from the sensor corresponds to full scale at the ADC?

Within its operating range, the sensitivity of the HP 8990A is, for practical purposes, continuously adjustable. Thus the purpose of vertical calibration is to provide data so the vertical setup subsystem can select the analyzer hardware settings that will provide the desired sensitivity. The vertical calibration data includes:

Vertical sensitivity, expressed in ADC counts per volt

Input resistance, expressed in ohms

Compensation coefficients for correcting analyzer nonlinearity.

This data is repeated for each combination of vertical amplifier settings.

The actual measurement of vertical sensitivity is relatively simple: the precision fine DAC is placed in its low-impedance mode (also known as voltage-source mode) and its count is changed. The ratio of the change in the fine-DAC count to the corresponding change in the ADC count, together with the absolute sensitivity value k of the precision fine DAC, provides the needed value:

Sensitivity = [Δ(fine DAC) / Δ(ADC)] × k

Mathematical techniques such as linear regression and drift modeling are used to minimize execution time and maximize accuracy.
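As an illustration of the regression step, the slope Δ(ADC)/Δ(fine DAC) can be estimated by a least-squares fit over several (fine DAC, ADC) pairs; the reciprocal of that slope, multiplied by k, then gives the sensitivity of the formula above. The drift-modeling refinements mentioned here are omitted, and this C sketch is illustrative.

/* Hypothetical least-squares slope estimate: fit a line through n
   (dac, adc) measurement pairs and return its slope, i.e. the change
   in ADC reading per fine-DAC count. */
double fit_slope(const double dac[], const double adc[], int n)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += dac[i];
        sy  += adc[i];
        sxx += dac[i] * dac[i];
        sxy += dac[i] * adc[i];
    }
    /* slope of the least-squares line adc = a + b * dac */
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}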

Input resistance can be measured with or without a sensor attached. The input resistance calibration is performed using the precision fine DAC as a stimulus. If a sensor is connected to the input of the analyzer, the sensor output impedance forms a load in parallel with the analyzer input resistance (see Fig. 5). In essence the resistance calibration consists of finding two precision fine-DAC stimuli, one in low-impedance mode and one in medium-impedance mode, that produce the same effects at the ADC converter. The Thevenin equivalent source resistance of the sensor is available from the sensor's own EEPROM coefficients. The sensor's parallel conductance is corrected for mathematically.
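Under the simplifying assumption that the low-impedance stimulus appears directly across the input resistance, while the equivalent medium-impedance stimulus is divided by its known source resistance, equal ADC readings yield a voltage-divider relation. The following C sketch is a hypothetical reading of the method; the actual circuit details are in Fig. 5.

/* Hypothetical input-resistance calculation: if a low-impedance
   stimulus v_low and a medium-impedance stimulus v_med (behind a known
   source resistance r_s) produce the same ADC reading, then
   v_low = v_med * r_in / (r_in + r_s), so: */
double input_resistance(double v_low, double v_med, double r_s)
{
    return r_s * v_low / (v_med - v_low);
}

/* Remove the sensor's Thevenin source resistance (known from the
   sensor's EEPROM data), which appears in parallel with the analyzer
   input during the measurement. */
double remove_parallel_load(double r_measured, double r_sensor)
{
    return 1.0 / (1.0 / r_measured - 1.0 / r_sensor);
}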

The compensation coefficients correct for small changes in analyzer sensitivity that occur if the calibration hardware setup is not the same as the measurement hardware setup. One such coefficient corrects the input resistance for the difference in voltage gain between calibration and measurement. Another coefficient accounts for the effects of offset voltage feedback. The error component is only a small part of the gain, so a very simple model of it is good enough.

Vertical Setup

To see how the various calibration factors interact, it is instructive to look at what happens when a new vertical sensitivity is selected. In part the process involves making some apparently arbitrary decisions at the outset to set an overall gain or sensitivity; the process then converges to a solution.

It begins when the user selects a desired sensitivity in microwave power per screen division. This determines the full-scale power level. From this level, the sensor data, the temperature, and the carrier frequency, a full-scale Thevenin equivalent circuit of the source (open-circuit voltage and source resistance) can be determined. Making an approximation for analyzer input resistance, the full-scale power level is converted to an input voltage at the analyzer front panel, and the amount of baseband board amplification required to produce a roughly correct signal level at the flash ADC is selected. Recall that the baseband board amplification is available in steps of approximately 20 dB.

Once the baseband board amplifiers and appropriate low-pass filters are selected, the input resistance and the full-scale input voltage can be determined accurately. Since the full-scale input voltage is known, the lookup table (which expresses the nonlinear relationship between power and voltage) can be constructed. The postamplifier settings can also be selected. Having converged this far, the setup is within about ±1 dB of the desired sensitivity. After a final iteration of the calculation, the final value of the stick DAC setting is deduced so as to arrive at precisely the desired sensitivity. The stick DAC provides the reference voltage to the flash ADC and is the final adjustment of sensitivity. Offset leveling is then all that remains to complete the vertical setup of the analyzer.

Acknowledgments

Tom Menten developed and refined the curve-fitting algorithm that is used to process the sensor data. Sandy Dey provided production support of the sensor calibration process. Kari Santos helped develop the vertical calibration firmware.

Testing the Peak Power Analyzer Firmware

The firmware quality assurance plan for the HP 8990A firmware had the following objectives:

Extensively test all HP 8990A functionality to meet shipment criteria

Develop a comprehensive automated test procedure to simplify verification of development firmware revisions and postshipment firmware releases

Leverage as many test tools as possible.

An internally developed tool called the HP-IB Interactive Test System (HITS) was leveraged from previous projects and used for automated testing. The program was modified to add various new features and commands to test HP 8990A functionality. HITS is a BASIC program that runs in the RMB-UX environment on HP 9000 Series 300 workstations. It takes in HITS input test files and creates corresponding signature files. The HP-UX diff utility is used to compare the output signature files against verified reference files. Any discrepancies point to a change in the new firmware revision that should be investigated as a potential problem.

The HITS test files contain a series of commands, usually one per line. The command format is as follows:

CC XXXX...X YYYYYY..Y

The CC field contains a two-character command that specifies how to interpret the other two fields. For example:

CQ CHANNEL1:RANGE 10mw

The CQ command tells HITS to send the command CHANNEL1:RANGE 10mw and then send the query CHANNEL1:RANGE?. The command, query, response, any error messages, and status information are logged to the signature file. (An example input file fragment appears after the capability list below.) HITS provides the following capabilities:

Send a command

Send a query and get a response

Send a command followed by a query

Perform a measurement and limit-check the result

Perform random key testing for a specified number of key presses

Randomly set parameters in a specified range a specified number of times

Perform a series of tests in sequential order

Repeat a series of tests in random order a specified number of times

Test IEEE 488.2 functionality

Test digitization functionality of the HP 8990A

Test autoscale functionality of the HP 8990A

Interact with the DUT to facilitate development of test cases and other functions

Log commands and queries to a file (no responses, errors, or status messages are logged).
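As an illustration, a fragment of a HITS input file might look like the following. Only the CQ command and the CHANNEL1:RANGE mnemonic appear in this article; the other query strings are hypothetical.

CQ CHANNEL1:RANGE 10mw
CQ CHANNEL1:RANGE 100mw
CQ TIMEBASE:RANGE 1us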

All the commands and the resulting queries, responses, error messages, and status information are logged to signature files. The HP 8990A is a fairly complex analyzer with over two hundred and fifty functions. To test this large set of features, both subsystem and scenario-based testing were used. As a first step, subsystem tests were written to verify the basic functionality of each subsystem. Then scenario tests were written to test various measurement scenarios and the interactions and couplings between the various functions. About 80% of the automated test development time was spent generating the scenario tests.

Time was also devoted to manual testing of front-panel operation, the display subsystem, calibration scenarios, interaction with different controller platforms, analyzer options, and features that cannot be tested in an automated fashion. In addition, all code was run through the C program checker/verifier tool lint and the C syntax checker tool inspect.

In addition to HITS, a second BASIC program called INTERP was used for normal interaction with the analyzer and for testing new functionality. The INTERP program was written by the project's HP-IB engineer as a development tool. This program takes an HP-IB command, sends it to the analyzer, and automatically reads the responses to queries. It prints error messages and status information, and provides support for reading and sending block data and for recalling previous commands. The program can also read commands from a file. This feature was used to recreate problems, with input provided by the file created by the HITS log feature.

The automated tests proved to be very useful. After changes to the firmware, a successful test run helped us verify that a bug fix had not introduced any new problems. When bugs were found that were not detected by the automated tests, the tests were updated to check for those specific problems. This improved the coverage of the automated tests. The automated tests were also run to verify that hardware changes had not had any unexpected impact on firmware functionality. The firmware shipment goal for the HP 8990A peak power analyzer was to reach a defect rate of <0.05 defects/hour of test time. To stop testing, the defect rate had to exhibit a trend of <0.05 defects/hour and the product had to go through a period of 40 hours of test time without discovering any defects.

During the final phase of quality assurance testing an automated/manual test cycle was used. Once the firmware passed the automated tests the product was released to a group of marketing and R&D engineers. This group performed application-specific and function-specific testing for a period of 24 hours. Any defects found were fixed and the cycle was repeated until the shipment criteria were met.

The HP 8990A has a total of 99,351 lines of noncomment source statements (NCSS). Since this was a leveraged product, a better metric is the number of lines of code that were modified or added. Approximately 40,000 NCSS were added or modified. During the product development cycle, 355 defects were logged. This gives a defect rate of 8.88 defects per thousand lines of code. The testing process worked well in helping the HP 8990A firmware team meet all of its quality assurance objectives.

Acknowledgments

I would like to thank Tatsuo Yano and Mark Johnston for their help with HITS, Jim Thalmann for writing INTERP, and the Colorado Springs Division's HP 54500 team for their help in resolving various defects.

Jayesh K. Shah

Development Engineer

Stanford Park Division

COPYRIGHT 1992 Hewlett Packard Company
COPYRIGHT 2004 Gale Group
