Video signal correction for a scanning photocell array in IR computer vision systems
Andreev, Victor Pavlovich
1. INTRODUCTION
In computer vision systems, a photocell array installed in an
optomechanical scanning system is sometimes used as the video signal
sensor. Such systems are most often used in the design of IR vision
devices. In robotics, IR vision devices can be an extremely useful
source of video information, especially in the computer vision systems
of specialized mobile robots, including those used by the Russian
Emergency Situations Ministry (Pryanichnikov et al., 2009).
In scanning systems the video signal is formed by successive
commutation of the array photo sensors, while the array as a whole is
shifted in the direction perpendicular to the line of photo sensors. In
the course of such scanning, each photo sensor forms an electric signal
whose value is proportional to the radiation flux reaching it through
the objective. As a result, each photo sensor forms one image row.
In most cases the transformation of radiation into the electric
signal of a photo sensor can be described by the following linear model
(Boltar et al., 1999):

$U_i(x) = S_i \cdot E_i(x) + C_i$, for $i = 1, 2, \ldots, N$, (1)

where: $S_i$ is the integral sensitivity of the $i$-th photo sensor;
$C_i$ is the video signal component due to the dark current;
$E_i(x)$ is the brightness of the optical signal scanned along axis $x$ by the $i$-th photo sensor ($E_i(x) \ge 0$);
$N$ is the number of photo sensors in the array.

Video signal inhomogeneity, i.e., the incongruence of the output
signal $U_i(x)$ and the image $E_i(x)$, occurs due to the spread of the
sensitivities $\{S_i\}$ and dark components $\{C_i\}$ of the photo
sensors. This phenomenon is called the structural or "geometric" noise
of a multielement radiation receiver. Multielement IR radiation sensors
possess especially strong structural noise (Boltar et al., 1999).
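As an illustration of model (1) and of the structural noise it produces, the following minimal Python sketch reads an image row by row through an array of N photo sensors with a random spread of sensitivities and dark components (the array size and the 5% spread are assumed here purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    N, L = 256, 512                            # number of photo sensors and row length (assumed)
    E = rng.uniform(0.0, 1.0, size=(N, L))     # "true" brightness E_i(x), one row per sensor

    # Per-sensor parameters of model (1): sensitivities S_i and dark components C_i
    S = 1.0 + 0.05 * rng.standard_normal(N)    # ~5% sensitivity spread (assumed)
    C = 0.02 * rng.standard_normal(N)          # spread of the dark components (assumed)

    # Output video signal U_i(x) = S_i * E_i(x) + C_i -- the rows of U carry structural noise
    U = S[:, None] * E + C[:, None]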
Taking into account the linear character of the photo sensor model (1),
the video signal can be corrected:

$\hat{U}_i(x) = K_i \cdot [U_i(x) + R_i]$, for $i = 1, 2, \ldots, N$, (2)

where: $R_i$ is the adaptive correcting signal compensating the inhomogeneity of the dark components $C_i$ of the video signal;
$K_i$ is the amplification coefficient of the $i$-th amplifier compensating the inhomogeneity of the sensitivity $S_i$.
The process of video signal correction naturally separates into
two parts: one is the compensation of the video signal inhomogeneity
using formula (2), which must be performed at the query rate of the
array photo sensors, and the other is the calculation of the correcting
coefficients $K_i$ and $R_i$, which can be performed at a lower rate.
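The fast part, formula (2), reduces to one multiply-add per sample using precomputed tables of $K_i$ and $R_i$. A possible sketch in Python with NumPy (how the coefficients are obtained is the subject of Section 3):

    import numpy as np

    def correct_frame(U, K, R):
        """Apply formula (2) row by row: U_hat_i(x) = K_i * (U_i(x) + R_i).

        U    : (N, L) array, raw video signal, one row per photo sensor
        K, R : (N,) arrays of precomputed correcting coefficients
        """
        return K[:, None] * (U + R[:, None])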
2. STATISTICAL IMAGE MODEL
The standard method, based on illuminating the photo sensors with one
or two reference radiation sources of different intensity (Bogomolov
et al., 1987), is well known. Its disadvantage is low compensation
precision, caused by the difficulty of achieving homogeneous
illumination (especially in the IR range) and by neglect of the
fluctuation noise of the photo sensors and of the reference radiation
sources.
A reference-free method based on statistical image properties is also
known (Oremorod, 1982). In this method it is assumed that the image is
a random brightness function with the ergodicity property. However,
this property can be used only if the "message segments" are
sufficiently long, i.e., contain several thousand frames.
In this study it is proposed to use an image model based on the
statistical properties of neighboring image rows. This makes it
possible to increase the accuracy of calculation of the correcting
coefficients in the case of a small realization length.
The image can be considered as $N$ realizations (rows) of finite
length ($X = L$) of the random brightness function $E_i(x)$, where $i$
is the row number; these realizations possess the following properties.
1. The probability that within one frame the variances of the
brightness function $D_i(E)$ and $D_{i+1}(E)$ for neighboring rows
coincide (event $A_{i,i+1}$) is much larger than the probability that
they differ (event $B_{i,i+1}$):

$P(A_{i,i+1}) \gg P(B_{i,i+1})$. (3)

2. The probability that within one frame the average values of the
brightness function $\bar{E}_i$ and $\bar{E}_{i+1}$ for neighboring
rows coincide (event $F_{i,i+1}$) is much larger than the probability
that they differ (event $H_{i,i+1}$):

$P(F_{i,i+1}) \gg P(H_{i,i+1})$. (4)

3. Events $A_{i,i+1}$ and $B_{i,i+1}$, as well as $F_{i,i+1}$ and
$H_{i,i+1}$, respectively, form complete groups of events:

$P(A_{i,i+1}) + P(B_{i,i+1}) = 1$; (5)

$P(F_{i,i+1}) + P(H_{i,i+1}) = 1$. (6)
The appropriateness of properties (3) and (4) follows from the fact
that the real image possesses strong statistical coupling between
neighboring rows of one frame. Properties (5) and (6) are evident.
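These properties can be checked empirically: on typical test images, the means and variances of the overwhelming majority of neighboring row pairs are close to each other. A rough Python sketch (the 5% closeness threshold is an arbitrary assumption, not part of the model):

    import numpy as np

    def neighbour_row_agreement(E, rel_tol=0.05):
        """Fraction of neighboring row pairs whose means and variances agree within rel_tol."""
        means = E.mean(axis=1)
        variances = E.var(axis=1)
        close_mean = np.abs(np.diff(means)) <= rel_tol * np.abs(means[:-1])
        close_var = np.abs(np.diff(variances)) <= rel_tol * np.abs(variances[:-1])
        return close_mean.mean(), close_var.mean()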
3. DETERMINATION OF CORRECTING COEFFICIENTS
Based on (3) and (5), the following transition coefficient can be
determined for each pair of neighboring photo sensors:

$G_{i,i+1} = \sqrt{D_{i+1}(U) / D_i(U)}$, (7)

where $D(U)$ is the variance of the video signal.
Substituting (1) into (7), we obtain the expression connecting the
transition coefficient with the sensitivities of neighboring photo
sensors:

$G_{i,i+1} = \dfrac{S_{i+1}}{S_i} \cdot \delta_{i,i+1}$, (8)

where the multiplicative error is

$\delta_{i,i+1} = \sqrt{D_{i+1}(E) / D_i(E)}$. (9)

Then property (3) can be written as:

$P(\delta_{i,i+1} = 1) \gg P(\delta_{i,i+1} \neq 1)$.
For $\delta_{i,i+1} = 1$, expression (8) connects the sensitivities
of neighboring photo sensors via the transition coefficient
$G_{i,i+1}$, which yields the iterative formula

$S_{i+1} = G_{i,i+1} \cdot S_i$, i.e., $S_i = S_k \cdot \prod_{m=k}^{i-1} G_{m,m+1}$ for $i > k$,

where $k$ is the number of the reference photo sensor (for $i < k$ the
product is inverted).
The transition coefficients $\{G^{(j)}_{i,i+1}\}$ for each $j$-th frame
can be determined from the values of the output signals of the photo
sensors (1) using formula (7).
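In code, the multiplicative coefficients of formula (7) can be estimated from the row variances of one frame and then chained from the reference sensor $k$ with the iterative formula above. A minimal sketch (the normalization $S_k = 1$ is an assumption that fixes the otherwise unknown common scale):

    import numpy as np

    def transition_coeffs_G(U):
        """Formula (7): G_{i,i+1} = sqrt(D_{i+1}(U) / D_i(U)) for one frame."""
        D = U.var(axis=1)                      # per-row variance of the video signal
        return np.sqrt(D[1:] / D[:-1])         # length N-1

    def chain_sensitivities(G, k=0):
        """Iterative formula S_{i+1} = G_{i,i+1} * S_i with S_k = 1 as the reference."""
        N = len(G) + 1
        S_est = np.ones(N)
        for i in range(k, N - 1):              # forward from the reference sensor
            S_est[i + 1] = G[i] * S_est[i]
        for i in range(k - 1, -1, -1):         # backward from the reference sensor
            S_est[i] = S_est[i + 1] / G[i]
        return S_est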
It follows from (4) and (6), with account of (1) and (9), that the
difference between the average brightness values of neighboring rows
of the image for the $j$-th frame is

$\Delta\bar{E}^{(j)}_{i,i+1} = \bar{E}^{(j)}_{i+1} - \bar{E}^{(j)}_{i} = \dfrac{\bar{U}^{(j)}_{i+1} - C_{i+1}}{S_{i+1}} - \dfrac{\bar{U}^{(j)}_{i} - C_{i}}{S_{i}}$,

which yields the expression containing the additive transition coefficient:

$C_{i+1} = G^{(j)}_{i,i+1} \cdot C_i + Q^{(j)}_{i,i+1} - S_{i+1} \cdot \Delta\bar{E}^{(j)}_{i,i+1}$, (10)

and

$Q^{(j)}_{i,i+1} = \bar{U}^{(j)}_{i+1} - G^{(j)}_{i,i+1} \cdot \bar{U}^{(j)}_{i}$. (11)

Property (4), as applied to this problem, can be written as:

$P(\Delta\bar{E}^{(j)}_{i,i+1} = 0) \gg P(\Delta\bar{E}^{(j)}_{i,i+1} \neq 0)$.
For $\Delta\bar{E}^{(j)}_{i,i+1} = 0$, expression (10) connects the
dark parameters of neighboring photo sensors via the additive
transition coefficient $Q^{(j)}_{i,i+1}$, which yields the iterative
formula for determining the dark parameters (the frame number $j$ is
omitted):

$C_{i+1} = G_{i,i+1} \cdot C_i + Q_{i,i+1}$, for $i = k, k+1, \ldots$,

where $k$ is the number of the reference photo sensor.
Expression (11) makes it possible to determine the transition
coefficient $Q^{(j)}_{i,i+1}$ in terms of the video signal $U_i(x)$ and
the transition coefficients $\{G^{(j)}_{i,i+1}\}$ for each $j$-th frame.
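Correspondingly, the additive coefficients of formula (11) follow from the row means of the video signal, and the dark parameters are chained with the iterative formula above. A sketch under the assumption $C_k = 0$ for the reference sensor (the absolute dark level cannot be recovered without a reference source):

    import numpy as np

    def transition_coeffs_Q(U, G):
        """Formula (11): Q_{i,i+1} = mean(U_{i+1}) - G_{i,i+1} * mean(U_i) for one frame."""
        m = U.mean(axis=1)                     # per-row mean of the video signal
        return m[1:] - G * m[:-1]

    def chain_dark_components(G, Q, k=0, C_k=0.0):
        """Iterative formula C_{i+1} = G_{i,i+1} * C_i + Q_{i,i+1}, starting from C_k."""
        N = len(G) + 1
        C_est = np.zeros(N)
        C_est[k] = C_k
        for i in range(k, N - 1):              # forward from the reference sensor
            C_est[i + 1] = G[i] * C_est[i] + Q[i]
        for i in range(k - 1, -1, -1):         # backward from the reference sensor
            C_est[i] = (C_est[i + 1] - Q[i]) / G[i]
        return C_est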
The accuracy of calculation of the transition coefficients can be
increased by simple averaging, over the frame sequence, of the
parameters $G^{(j)}_{i,i+1}$ and $Q^{(j)}_{i,i+1}$ obtained for the
individual frames.
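Putting the pieces together, a sketch of the slow part of the correction: the per-frame coefficients are averaged over a sequence of frames, the chained estimates of $S_i$ and $C_i$ are converted into the coefficients of formula (2) as $K_i = 1/S_i$ and $R_i = -C_i$, and the result is applied with correct_frame() from the introduction. The helper functions are those sketched above; the choices $S_k = 1$ and $C_k = 0$ mean that the rows are equalized only up to the unknown scale and offset of the reference sensor, which is all a reference-free method can provide.

    import numpy as np

    def estimate_correction(frames, k=0):
        """Average G (7) and Q (11) over a frame sequence and derive K_i, R_i of formula (2).

        frames : iterable of (N, L) arrays with the raw video signal of each frame.
        Uses transition_coeffs_G/Q and chain_sensitivities/chain_dark_components from above.
        """
        Gs, Qs = [], []
        for U in frames:
            G = transition_coeffs_G(U)
            Gs.append(G)
            Qs.append(transition_coeffs_Q(U, G))
        G_avg = np.mean(Gs, axis=0)            # simple averaging over the frame sequence
        Q_avg = np.mean(Qs, axis=0)
        S_est = chain_sensitivities(G_avg, k)
        C_est = chain_dark_components(G_avg, Q_avg, k)
        return 1.0 / S_est, -C_est             # K_i and R_i for formula (2)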
4. RESULTS OF COMPUTER SIMULATION
The process and results of the simulation are described in most detail
in (Lebedev & Lyong, 2007). Three digital black-and-white 512x512
images with 256 gray levels were used as initial data (Fig. 1).
[FIGURE 1 OMITTED]
[FIGURE 2 OMITTED]
For estimation of the quality of the photo sensor array, the parameter
$\delta$ was used:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
where:
[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]
The spread of parameters of the initial photo sensor array was
$\delta = 98$. Figure 2a shows the result of reading one of the initial
images with this array. To obtain more accurate values of the
correcting coefficients, it is necessary to use several frames
($J \ge 3$) that differ from each other in spatial brightness
distribution (subject).
5. CONCLUSION
The simulation results show the high effectiveness of the method. The
new adaptive method can be implemented as a real-time special-purpose
processor in thermal imaging systems. It does not require the
installation of reference radiation sources in the optomechanical
scanning system. To obtain accurate values of the correcting
coefficients, it is necessary to use several frames.
6. REFERENCES
Pryanichnikov, V.; Andreev, V. & Prysev, E. (2009). Computer vision and
control for special mobile robots, Annals of DAAAM for 2009 &
Proceedings of the 20th International DAAAM Symposium "Intelligent
Manufacturing & Automation: Focus on Theory, Practice and Education",
ISSN 1726-9679, ISBN 3-901509-58-5, Vienna, Austria, pp. 1857-1858
Boltar, K. O.; Bovina, L. A.; Saginov, L. D. et al. (1999). Thermal
vision sensor based on a 128x128 Cd0.2Hg0.8Te "viewing" matrix,
Applied Physics, no. 2
Bogomolov, P. A.; Sidorov, V. I. & Usoltsev, I. F. (1987). Receiving
Devices of IR Systems, Radio i Svyaz, Moscow
Oremorod, D. (1982). High performance thermal images, Proceedings of
the International Defence Electronics Expo-82, West Germany, pp.
303-327
Lebedev, D. G. & Lyong, K. T. (2007). Simulation of adaptive
equalization of photosensor array parameters using microscanning,
Information Processes, vol. 7, no. 2, pp. 124-137