Condon (1974) demonstrates that estimating point source
confusion noise requires only knowledge of the integral source
density and of the field of view of the instrument, so long as the
point source response is taken to be the "effective beam solid angle", $\Omega_e$, which accounts for the wings of the beam. In the regime of interest to us, the integral source counts go approximately as a power law with an index near $-1$. In this
case, the limiting noise goes as the square of the beam diameter.
This steep dependence can lead to substantial variations in
estimates of the confusion noise limit if differing assumptions are
made about the achievable beam diameter.
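To make this estimate concrete, Condon's result for differential counts $n(S) = k\,S^{-\gamma}$ (with $1 < \gamma < 3$) and a cutoff at $q$ times the rms confusion noise is $\sigma_c = [\,q^{3-\gamma}\,k\,\Omega_e/(3-\gamma)\,]^{1/(\gamma-1)}$, so that $\sigma_c \propto \Omega_e^{1/(\gamma-1)}$ and, for $\gamma \approx 2$, $\sigma_c$ goes as the beam solid angle, i.e., as the square of the beam diameter. The short Python sketch below simply evaluates this expression; it is an illustration only (not the code behind the estimates in this paper), and the Gaussian-beam form of $\Omega_e$, the cutoff $q = 5$, and the numerical values are placeholder assumptions.
\begin{verbatim}
import numpy as np

def effective_beam_solid_angle_gaussian(fwhm, gamma):
    # Omega_e = integral of f**(gamma - 1) over solid angle; for a Gaussian
    # beam this is the ordinary beam solid angle divided by (gamma - 1).
    omega_beam = np.pi * fwhm**2 / (4.0 * np.log(2.0))
    return omega_beam / (gamma - 1.0)

def confusion_rms(k, gamma, omega_e, q=5.0):
    # Condon (1974): sigma = [q**(3-gamma)/(3-gamma) * k * Omega_e]**(1/(gamma-1))
    # for differential counts n(S) = k * S**(-gamma) cut off at q*sigma.
    # Note the scaling sigma ~ Omega_e**(1/(gamma-1)): for gamma near 2 the
    # confusion noise goes as the square of the beam diameter.
    return (q**(3.0 - gamma) / (3.0 - gamma) * k * omega_e) ** (1.0 / (gamma - 1.0))

# Illustrative call: k in Jy**(gamma-1) per deg**2, beam FWHM in degrees.
sigma_c = confusion_rms(k=1.0e3, gamma=2.0,
                        omega_e=effective_beam_solid_angle_gaussian(30.0 / 3600.0, 2.0))
\end{verbatim}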
IRAS observations provide a direct measure of the far infrared
source density. At 60 $\mu$m, Hacking and Houck (1987) and Hacking (1994, private communication) have measured the surface density of objects per square degree brighter than 50 mJy. Above this flux limit, the integral counts follow a power law as $N(>S) \propto S^{-3/2}$, as expected for Euclidean space.
However, below this flux level the most prevalent galaxies become
increasingly distant, and cosmological corrections will become
important so that the counts will deviate from the Euclidean
dependence. Therefore, a model of the behavior of galaxies must be
constructed from the IRAS data and used with an appropriate
cosmology to extrapolate to lower flux densities and determine the
density of faint galaxies on the sky. The model we have used will
be described elsewhere (Rieke, Young, and Gautier, in preparation);
here, we note that the source counts from our model are in good
agreement with the predictions of
Franceschini et al. (1991) and Helou and Beichman (1991). At the
flux densities and wavelengths of interest for limiting far
infrared measurements with cryogenic observatories, all the counts
behave roughly as power laws. The predictions
are not strongly dependent on cosmological assumptions or on the
amount of galaxy evolution when normalized to the 60 $\mu$m counts
described above.
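As a simple illustration of such an extrapolation (a sketch only; the normalization, index, reference flux, and beam size below are placeholders rather than the values from our model or from the counts quoted above), the integral counts can be scaled from a reference flux density and converted to a mean number of sources per beam:
\begin{verbatim}
import numpy as np

def integral_counts(s, n0, s0=0.05, alpha=1.0):
    # N(>S) per square degree for N(>S) = N0 * (S / S0)**(-alpha)
    return n0 * (s / s0) ** (-alpha)

def sources_per_beam(s, n0, beam_fwhm_arcsec, s0=0.05, alpha=1.0):
    # mean number of sources brighter than S falling within one beam area
    beam_area_deg2 = np.pi * (beam_fwhm_arcsec / 3600.0) ** 2 / 4.0
    return integral_counts(s, n0, s0, alpha) * beam_area_deg2

for s in (0.05, 0.01, 0.001):   # flux densities in Jy (placeholder values)
    print(s, sources_per_beam(s, n0=10.0, beam_fwhm_arcsec=40.0))
\end{verbatim}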
Given the densities of far infrared galaxies on the sky, determining the confusion limit of an instrument requires knowing the smallest effective beam area that it can use. For future missions, the effective beam area should be determined using source extraction techniques appropriate to observations with imaging arrays such as will soon be available for the far infrared (Young et al. 1993). In the following, we investigate whether optimal source extraction with oversampled array images will allow useful data to be obtained in crowded fields that might be considered confusion limited with a single detector. We have taken a numerical approach that allows us to simulate the operation of various source extraction procedures using Monte Carlo methods. We have verified that our results agree with the analytic treatment of Condon (1974) where the two can be compared. In our Monte Carlo simulations, we have combined the effects of confusion and photon noise so that the results realistically model the amplification of the photon noise as the extraction technique operates in increasingly crowded fields. In comparing results with varying pixel sizes, we have assumed that the pixels operate at the photon noise background limit. We have also simulated problems where the required signal-to-noise ratio is modest, so that small errors in determining the beam profile do not affect our results. These latter two assumptions allow us to address a "fundamental" confusion limit that will not depend critically on the assumed parameters for the instrument.
The definition of confusion limit depends on the nature of the investigation. For example, an astronomer making an unbiased survey for infrared-bright galaxies would be less annoyed at being "confusion limited" in such objects than would one wishing to determine the flux density of a quasar in the same set of data. In the following, we have considered only the second type of observation since it yields the most stringent measure of the confusion limit.
In our experiments, we generated artificial data by drawing sources randomly from a power law distribution $N(>S) = C\,S^{-\alpha}$, where $S$ is the flux density and $C$ is a constant. Sources were placed at random positions in a field of 57 $\times$ 57 pixels at an average density of 25 per beam of diameter $\lambda/D$, where $D$ is the telescope aperture. The faintest sources were therefore roughly only 1% of the flux densities at the achievable detection
limits. We left out of this field any sources at more than 1000
times the rms noise level, under the assumption that any deep
survey field would be selected to avoid extremely bright sources. Each source
was convolved with an Airy pattern of full width at half maximum of 10 pixels, and Gaussian-distributed random noise was added to each pixel. This noise was scaled with pixel size under the
assumption of background limited operation. A test source of known amplitude
and with an
Airy pattern profile was placed in the center of this "data" array.
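A minimal Python sketch of this field generator follows. It is an illustration only, not the code used for our simulations; the noise level, test-source flux, and faint-end cutoff are placeholders, and each source is simply added as an Airy profile at a randomly drawn sub-pixel position.
\begin{verbatim}
import numpy as np
from scipy.special import j1

NPIX = 57                    # field size in pixels
FWHM = 10.0                  # Airy full width at half maximum in pixels
LAM_OVER_D = FWHM / 1.03     # so that the FWHM is 1.03 lambda/D

def airy_psf(npix, x0, y0):
    # Airy pattern [2 J1(x)/x]**2, peak normalized to unity, centered at (x0, y0)
    yy, xx = np.mgrid[0:npix, 0:npix]
    x = np.pi * np.maximum(np.hypot(xx - x0, yy - y0), 1e-8) / LAM_OVER_D
    return (2.0 * j1(x) / x) ** 2

def draw_fluxes(n, s_min, s_max, alpha, rng):
    # inverse-transform sampling of N(>S) = C * S**(-alpha), truncated at s_max
    fluxes = s_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)
    return fluxes[fluxes < s_max]

def make_field(rng, alpha=1.0, sigma_noise=1.0, test_flux=5.0):
    # 25 confusing sources per beam of diameter lambda/D, Gaussian pixel noise,
    # and a test source of known flux at the field center
    beam_area = np.pi * LAM_OVER_D**2 / 4.0
    n_src = int(round(25.0 * NPIX**2 / beam_area))
    fluxes = draw_fluxes(n_src, s_min=0.01 * test_flux,
                         s_max=1000.0 * sigma_noise, alpha=alpha, rng=rng)
    image = np.zeros((NPIX, NPIX))
    for f in fluxes:
        x0, y0 = rng.uniform(0.0, NPIX, size=2)
        image += f * airy_psf(NPIX, x0, y0)
    image += test_flux * airy_psf(NPIX, NPIX // 2, NPIX // 2)
    image += rng.normal(0.0, sigma_noise, size=image.shape)
    return image

field = make_field(np.random.default_rng(0))
\end{verbatim}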
For the initial tests, we adopted a single value of the power law index $\alpha$. We used a source extraction method closely related to the CLEAN algorithm. The data array was deconvolved by identifying the brightest pixel and subtracting an Airy pattern of amplitude 1/3 of the rms Gaussian noise. The position
and amplitude of this subtracted flux contribution were stored in
another "deconvolved" array and the procedure was repeated,
incrementing the amplitudes in the deconvolved array as small
amplitude sources were subtracted from the data array. The
subtraction was stopped when the variance in the data array
increased with subtraction of an additional small source.
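Continuing the sketch above (and reusing its airy_psf), the subtraction loop might be written as follows; this is again an illustration of the procedure rather than the code actually used.
\begin{verbatim}
import numpy as np

def clean_deconvolve(image, sigma_noise, max_iter=200000):
    # Repeatedly subtract an Airy pattern of fixed amplitude (1/3 of the rms
    # noise) at the current brightest pixel, accumulating the subtracted
    # amplitudes in a "deconvolved" array, until the residual variance rises.
    step = sigma_noise / 3.0
    residual = image.copy()
    model = np.zeros_like(image)
    var = residual.var()
    for _ in range(max_iter):
        iy, ix = np.unravel_index(np.argmax(residual), residual.shape)
        trial = residual - step * airy_psf(residual.shape[0], ix, iy)
        if trial.var() >= var:       # stop when the variance no longer decreases
            break
        residual, var = trial, trial.var()
        model[iy, ix] += step
    return model, residual
\end{verbatim}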
The program then computed the estimated flux density of the
test source in the deconvolved array. The sky level was determined
as the average of the surface brightness between radii of 1 and 2.5 $\lambda/D$, after rejecting all peaks at greater than 3 times the
rms noise in this region. This rejection was based on the
hypothesis that one would avoid obvious neighboring sources in
computing a sky level. The source flux density was determined by
integrating the signal within apertures of various sizes and
subtracting the sky contribution. 400 Monte Carlo integrations
were run for each value of assumed noise. It was found that the
optimum extraction procedure in the deconvolved image was to integrate the
signal within a sharp-sided angular aperture of diameter 0.5 to 0.7 $\lambda/D$. Extractions in apertures of this size (and neighboring sizes) reproduced the input source strengths accurately,
with no significant biases toward under- or over-estimation.
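A sketch of this extraction step, in the same illustrative spirit and reusing LAM_OVER_D from the field-generation sketch, is:
\begin{verbatim}
import numpy as np

def extract_flux(model, x0, y0, sigma_noise,
                 ap_diam=0.6, sky_in=1.0, sky_out=2.5):
    # Sky level: mean of the deconvolved array between radii of 1 and 2.5
    # lambda/D, after rejecting pixels brighter than 3 times the rms noise.
    # Source flux: sum within a sharp-sided aperture (0.5-0.7 lambda/D
    # diameter works best), minus the sky contribution.
    yy, xx = np.mgrid[0:model.shape[0], 0:model.shape[1]]
    r = np.hypot(xx - x0, yy - y0) / LAM_OVER_D   # radii in units of lambda/D
    annulus = model[(r >= sky_in) & (r <= sky_out)]
    sky = annulus[annulus < 3.0 * sigma_noise].mean()
    aperture = r <= ap_diam / 2.0
    return model[aperture].sum() - sky * aperture.sum()
\end{verbatim}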
The artificial central source was selected so that the final
signal to noise (including confusion noise) would be near five.
This case is appropriate to a deep survey where source detections
are achieved down to the limiting noise. For each Monte Carlo run of 400
cases, we determined the rms fluctuations in the estimated brightness of the
central source, so the suite of runs yielded the relation between density of
confusing sources on the sky and noise in the measurement of the central
source. We fitted this relation to the expected theoretical behavior, with the
effective beam diameter as a free parameter; an excellent fit was achieved
with an assumed beam of angular diameter 0.8 $\lambda/D$. That
is, in this application and with the CLEAN method of source
extraction, we would successfully predict the total noise of the
system including amplification of the photon shot noise if we assumed we were
observing with a sharp-sided aperture of diameter 0.8 $\lambda/D$. The results were found to be largely independent of the selection of the reference field, so long as its inner radius was at least 0.6 $\lambda/D$. Our results represent a modest amount of superresolution relative to the full width at half maximum of 1.03 $\lambda/D$ for an Airy function.
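The fit can be illustrated schematically as follows. The noise-versus-density "measurements" below are placeholders standing in for the Monte Carlo results, and the confusion term assumes the Condon-style scaling with a sharp-sided beam and the count slope discussed above; only the general procedure, not the numbers, should be taken from this sketch.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def total_noise(density, theta_eff, sigma_photon=1.0, gamma=2.0, q=5.0):
    # photon noise and confusion noise added in quadrature; Omega_e is taken
    # as the solid angle of a sharp-sided beam of diameter theta_eff, and the
    # counts normalization is taken proportional to the confusing-source density
    omega_e = np.pi * theta_eff**2 / 4.0
    sigma_c = (q**(3.0 - gamma) / (3.0 - gamma)
               * density * omega_e) ** (1.0 / (gamma - 1.0))
    return np.hypot(sigma_photon, sigma_c)

density = np.array([5.0, 10.0, 20.0, 40.0])        # placeholder densities
measured_rms = np.array([1.2, 1.5, 2.1, 3.4])      # placeholder rms values
(theta_fit,), _ = curve_fit(total_noise, density, measured_rms, p0=[1.0])
\end{verbatim}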
A second set of simulations was run with a different power law source index, with results very similar to those just
described. Again, the behavior is adequately described by the
simple theory if a beam diameter of 0.8 $\lambda/D$ is assumed.
The experiments described above assumed an extreme degree of
oversampling, i.e., pixels 0.1 times the diffraction-limited beam
diameter. An additional set of experiments addressed the use of
larger pixels. Here, the data frames and CLEAN beam were both
measured with pixels of 0.3, 0.5, and 0.7 $\lambda/D$.
It was assumed that the telescope was substepped to maintain the
same sampling interval as before, that is, 0.1 $\lambda/D$. For example, with the 0.5 $\lambda/D$ pixels, data would have to be taken at 25 telescope pointings on a 5 $\times$ 5 grid with 0.1 $\lambda/D$
between points. 200 iterations were made for each
case, all at the same confusing source density where the previous
experiments indicated confusion noise would degrade the photon
noise limit by a factor of 3. These experiments produced results similar to those with the 0.1 $\lambda/D$ pixels, showing that
imagers can make efficient use of the far infrared arrays by using
pixels that are a reasonably large portion of the Airy pattern.
From other experiments, we conclude that this favorable result
arises only if the telescope is substepped on a finer grid than the
pixel-to-pixel spacing. In addition, the requirements on the accuracy of
calibration and the knowledge of the point spread function will be increased
as the pixel size is increased. These factors must all be considered in the
design of realistic systems for high sensitivity operation in the far
infrared.
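A minimal sketch of how the substepped frames can be interleaved back onto the fine sampling grid is given below. It assumes the detector pixel is an exact multiple $m$ of the 0.1 $\lambda/D$ step (e.g., $m = 5$ for the 0.5 $\lambda/D$ pixels) and that moving the pointing by one fine step shifts the sampled sky by one fine pixel; it is an illustration, not the reconstruction actually used.
\begin{verbatim}
import numpy as np

def interleave(frames, m):
    # frames[j][i] is the coarse-pixel image taken at pointing offset (i, j)
    # fine steps; the m*m frames are interleaved into one finely sampled image.
    ny, nx = frames[0][0].shape
    fine = np.empty((ny * m, nx * m))
    for j in range(m):
        for i in range(m):
            fine[j::m, i::m] = frames[j][i]
    return fine
\end{verbatim}
The interleaved image is the sky convolved with the coarse-pixel response, which is why the CLEAN beam must be measured with the same pixel size and the same substepping pattern.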
Although we have treated our simulations as if there is a hard confusion limit, at higher ratios of signal to noise it is likely that the source could be localized more precisely than in our experiments, leading to a smaller effective beam diameter and an improvement in limiting flux density. Again, achieving these goals will place greater demands on the calibration and knowledge of the point spread function and therefore will depend on the details of system design.
As examples, we compute for an 85 cm telescope at 60, 100, and 150 $\mu$m the rms confusion noise limits due to distant galaxies given in Table I. The confusion noise from point sources with uncorrelated positions on the sky will scale in this case inversely as the effective aperture area, or as $D^{-2}$.
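For the count slope adopted above, the confusion noise is proportional to the effective beam solid angle, so that
\[
\sigma_{\rm conf} \propto \Omega_e \propto \left(\frac{\lambda}{D}\right)^{2} ;
\]
at fixed wavelength it therefore falls as $D^{-2}$, inversely as the aperture area.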
Table I: Noise Components for 1000 s Integrations