Astronomy

Converting Jy/beam to Jy?


Maybe it's a dumb question, but to convert Jy/beam to Jy, do I just have to multiply by the beam size in sr?

With $\Omega$ being the beam solid angle: $\Omega = \frac{\pi\,\theta_{maj}\,\theta_{min}}{4\ln 2}$

Jy/beam $\cdot\,\Omega$ = Jy?


So long as you accurately know the beam size, then yes: multiplying your Jy/beam measurement (effectively a surface brightness) by the beam solid angle (the effective area of the beam) will give you the total flux density in Jy.

See this source as an example.


Actually, to convert from Jy/beam to Jy/pixel you need to divide by the beam size.

Let's say you have a quantity of 1 Jy/beam. Then

$\frac{\rm Jy}{\rm beam} \cdot \frac{\rm beam}{\Omega} = \frac{\rm Jy}{\Omega}$; that is, to go from Jy/beam to Jy/pixel you divide by $\Omega$, the beam area.

The values of the beam major and minor axes must be in pixels.

Source: NRAO
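
To make that recipe concrete, here is a minimal Python sketch of the division described above (the beam axes and map values are hypothetical; the Gaussian-beam area formula is the one quoted in the question):

```python
import numpy as np

# Hypothetical beam FWHM axes, already expressed in pixels (see note above)
bmaj_pix = 4.0
bmin_pix = 3.0

# Gaussian beam area in pixel units: Omega = pi * bmaj * bmin / (4 ln 2)
beam_area_pix = np.pi * bmaj_pix * bmin_pix / (4.0 * np.log(2.0))

# Divide a Jy/beam map by the beam area (in pixels) to get Jy/pixel
image_jybeam = np.ones((64, 64))            # stand-in for a real map
image_jypix = image_jybeam / beam_area_pix
print(image_jypix[0, 0])                    # ~0.0736 Jy/pixel per 1 Jy/beam
```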


It depends on what you mean by "just Jy". Usually, what is meant is the surface brightness of a source, in some unit like $\operatorname{Jy}\,\operatorname{sr}^{-1}$ or $\operatorname{Jy}\,\operatorname{arcsec}^{-2}$, integrated over solid angle to get the source's total flux. What a measurement of $I\ \operatorname{Jy}\,\operatorname{beam}^{-1}$ is telling you is, roughly, "An unresolved source that is the nominal beam size with this peak flux will have total flux $I$ in Jy." So, if you want surface brightness, take your quantity in Jy/beam and divide by $\Omega$/beam (see the relationship between $S$ [flux] and $I$ [surface brightness] in this brightness temperature NRAO tutorial).

In most situations in radio astronomy, the clean beam will be a Gaussian, so
\begin{align}
I &= I_{\text{Jy/beam}} \frac{\text{beam}}{\Omega_{\text{beam}}} \\
&= I_{\text{Jy/beam}} \frac{1}{2\pi \sigma_{\text{beam}}^2} \\
&= I_{\text{Jy/beam}} \frac{4\ln 2}{\pi \theta_{\text{beam}}^2},
\end{align}
with $\theta_{\text{beam}}$ the beam's full width at half power and $\sigma_{\text{beam}}$ the standard deviation of the beam.

Once you have $I$, getting Jy/pixel is as easy as multiplying by $\Omega_{\text{pixel}}$. Suppose you have fit some surface brightness profile, like
\begin{align}
I_{\text{model}} &= A \exp\left(-\frac{(x-x_0)^2\sigma_y^2 - 2\rho\sigma_x\sigma_y(x-x_0)(y-y_0) + (y-y_0)^2\sigma_x^2}{2\sigma_x^2\sigma_y^2(1-\rho^2)}\right),
\end{align}
where $\sigma_x$ and $\sigma_y$ are normal standard deviations and $\rho$ is your normal correlation coefficient. For more normal astronomer usage, you'd use
\begin{align}
\theta_M &= \sqrt{8\ln 2\left[\frac{\sigma_x^2 + \sigma_y^2}{2} + \sqrt{\left(\frac{\sigma_x^2 - \sigma_y^2}{2}\right)^2 + \rho^2\sigma_x^2\sigma_y^2}\right]} \\
\theta_m &= \sqrt{8\ln 2\left[\frac{\sigma_x^2 + \sigma_y^2}{2} - \sqrt{\left(\frac{\sigma_x^2 - \sigma_y^2}{2}\right)^2 + \rho^2\sigma_x^2\sigma_y^2}\right]} \\
\phi &= \left\{\begin{array}{ll} 0 & \text{if } \rho=0,\ \sigma_y > \sigma_x \\ 90^\circ & \text{if } \rho=0,\ \sigma_x > \sigma_y \\ \operatorname{atan2}\left(\theta_M^2 - 8\ln(2)\sigma_x^2,\ \sigma_y^2\right) & \text{otherwise} \end{array}\right.
\end{align}
Then you can convert the peak surface brightness $A$ to total brightness by integrating $I$ over all $x$ and $y$, yielding
\begin{align}
S &= A\, 2\pi \sigma_x \sigma_y \sqrt{1-\rho^2} \\
&= A \frac{\pi \theta_m \theta_M}{4\ln 2}.
\end{align}

Note what happens if you combine the conversions from Jy/beam to surface brightness to Jy. You get:
\begin{align}
S &= A \frac{\theta_m \theta_M}{\theta_{\mathrm{beam}}^2}.
\end{align}
C.f. Equation 35 of Condon et al. (1998). Note that Condon et al. provide multiple equations that depend on how resolved the source is. Based on the referenced paper, it looks like what they're doing is minimizing the variance of their "corrected" values.
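
As a practical footnote to the above: the most common end-to-end operation is summing a Jy/beam map over a source and dividing by the beam area in pixels to get Jy. A minimal sketch, with all numbers hypothetical:

```python
import numpy as np

pix_scale = 1.0       # arcsec per pixel (assumed)
theta_beam = 5.0      # circular beam FWHM in arcsec (assumed)

# Beam solid angle and its equivalent in pixels
beam_area = np.pi * theta_beam**2 / (4.0 * np.log(2.0))   # arcsec^2
pix_per_beam = beam_area / pix_scale**2

image_jybeam = np.zeros((64, 64))   # stand-in for a real Jy/beam map
image_jybeam[30:34, 30:34] = 0.1    # fake source

# Total flux density: sum of pixels divided by the beam area in pixels
S_jy = image_jybeam.sum() / pix_per_beam
print(S_jy)
```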


Support conversion from Jy/beam to K #3463

At the moment we have a brightness_temperature equivalency for converting Jy to K, which requires a beam area. It would be nice to also be able to simply convert Jy/beam to K without explicitly specifying a beam area.


Keflavich commented Feb 9, 2015

It would be great if we could make it work with the radio-beam package, which reads standard beam parameters from FITS headers.

But, @astrofrog, what you've asked for isn't possible, I think - Jy/beam has no meaning without a beam area.

Astrofrog commented Feb 9, 2015

Ah of course, but so when we do:

is it the same as if one did

? That is, the value on the left is actually per beam, but we just don't write it because it's not a real unit? If so, maybe we should seamlessly support the second case too?

Keflavich commented Feb 9, 2015

Yes, that's essentially right. What we really want is something like

and then, when reading from a FITS header with BUNIT='JY/BEAM' (capitalized, unfortunately), the beam unit should be parsed from the BMAJ/BMIN parameters.

Keflavich commented Feb 9, 2015

The point is that a beam unit should have to be equivalent to u.sr to be valid; u.beam is just a non-functional placeholder right now.

Astrofrog commented Feb 9, 2015

Right - I guess that in the meantime, would there be any harm in supporting the placeholder being there, i.e. supporting:

So the beam area is still passed to brightness_temperature but the code is more readable because one sees that the quantity on the left is per beam?

Keflavich commented Feb 9, 2015

I think that's OK, but it's still more explicit and clear (and therefore better?) to use the appropriate beam on both sides, though I note that it doesn't work right now, even with the dimensionless_angles equivalency:
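
For reference, here is roughly what the conversion under discussion looks like with a recent astropy, supplying the beam area explicitly (the beam size and frequency are made up; brightness_temperature is a real astropy equivalency, but check your version's signature):

```python
import numpy as np
from astropy import units as u

# Hypothetical 10" (FWHM) circular Gaussian beam
fwhm = 10 * u.arcsec
omega_beam = np.pi * fwhm**2 / (4 * np.log(2))   # beam solid angle

# Treat 1 Jy/beam as 1 Jy spread over omega_beam, then convert to K
freq = 100 * u.GHz
tb = (1 * u.Jy / omega_beam).to(
    u.K, equivalencies=u.brightness_temperature(freq))
print(tb)
```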


How do I convert flux measurements given in Jy km/s or K km/s into the peak flux density required by the OT?

Suppose you want to observe the CO(1-0) line in a galaxy for which the integrated CO(1-0) line flux density is known from previous observations, or estimated based on simulations, etc. Let us first assume that the proposed ALMA observations will not spatially resolve the galaxy. From previous observations we know that the total velocity width of the CO profile is 200 km/s and the integrated flux density is 20 Jy km/s. If the line shape is approximately boxcar-shaped, this implies that the peak flux density of the line is 0.1 Jy. Detecting this line at 5 sigma requires an rms noise level (sensitivity) of 20 mJy.

If you are only interested in detecting the line and not measuring the profile shape in detail, a velocity resolution of 70 km/s may be sufficient. In that case enter 70 km/s in the Bandwidth used for sensitivity field in the Control and Performance tab, and the Desired sensitivity per pointing should be 20 mJy for a 5 sigma detection. On the other hand, if you need many spectral resolution elements over the full velocity width in order to measure the spectral profile in detail, you want to select (for example) 5 km/s as Bandwidth used for sensitivity. The on-source observing time will increase accordingly. Note that the value that you enter for the desired sensitivity is independent of the velocity resolution that is chosen.

In the case that the CO(1-0) emission is not only spectrally resolved, but also spatially resolved by ALMA, one needs to take into account that only a fraction of the total integrated flux density is seen in each spatial resolution element. In the OT, the fluxes must be entered in Jy/beam, i.e. you must provide the peak flux density of the source estimated within one synthesised ALMA beam. For example, if the diameter of the CO disk is assumed to be 5 arcsec, and you observe with ALMA at an angular resolution of 0.5 arcsec, the total CO flux is spread over (5″/0.5″)^2 = 100 ALMA beams. In addition, the emission is also spread in frequency space. The most conservative assumption would be that the emission is equally distributed in all three dimensions, and therefore the desired sensitivity is 20 mJy/100 = 0.2 mJy. A more realistic case is that where the CO emission comes from a rotating disk, and hence at each of the spatial resolution elements the CO emission is only spread over (say) 40 km/s. In that case the integrated line flux density per beam is (20 Jy km/s / 100 beams) = 200 mJy km/s, and the average flux per beam is (200 mJy km/s / 40 km/s) = 5 mJy. The desired sensitivity per pointing for a 5 sigma detection should then be 1 mJy.

The calculations above all assume that the spectral profile is flat-topped. In the case of Gaussian or double-horned profiles, the peak flux density will be higher in some spectral channels, and adjustments may have to be made to the calculations.
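
The arithmetic in this example is simple enough to script; a sketch in Python, with all values taken from the worked example above:

```python
line_flux = 20.0         # Jy km/s, integrated CO(1-0) line flux
line_width = 200.0       # km/s, full width of the (boxcar) profile

peak_flux = line_flux / line_width     # 0.1 Jy
rms_unresolved = peak_flux / 5.0       # 20 mJy for a 5 sigma detection

# Spatially resolved case: 5" disk observed at 0.5" resolution
n_beams = (5.0 / 0.5) ** 2             # 100 beams
flux_per_beam = line_flux / n_beams    # 0.2 Jy km/s = 200 mJy km/s per beam

# Rotating-disk case: each beam's emission spread over ~40 km/s
peak_per_beam = flux_per_beam / 40.0   # 5 mJy
rms_resolved = peak_per_beam / 5.0     # 1 mJy desired sensitivity
print(rms_unresolved, rms_resolved)
```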

If your previous measurement of the CO(1-0) intensity is based on single dish observations and given in terms of integrated brightness temperature, that is, in units of K km/s, this value needs to be converted to units of Jy km/s first. See How can I estimate the Peak Flux Density per synthesised beam using flux measurements in Jy or K from other observatories? for more details on this conversion. In short, a K per Jy conversion factor is needed, which depends on the single dish antenna diameter and the antenna efficiency.

Note that for high redshift molecular line measurements, luminosities are often expressed in units of K km s⁻¹ pc². Such a measurement can be easily converted to integrated flux densities in units of Jy km/s using standard equations. For example, the CO luminosity L′ is related to the CO integrated flux density S_CO in units of Jy km/s via the standard conversion

L′ = 3.25 × 10⁷ S_CO f_obs⁻² D_L² (1 + z)⁻³ K km s⁻¹ pc²,

where D_L is the luminosity distance at redshift z, in Mpc, and f_obs is the observing frequency in GHz.
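
Rearranged for S_CO, this is straightforward to evaluate; a sketch with made-up numbers (the luminosity distance assumes some standard cosmology):

```python
# L' = 3.25e7 * S_CO * f_obs**-2 * D_L**2 * (1+z)**-3  (standard relation)
Lprime = 1.0e10              # K km/s pc^2, assumed CO luminosity
z = 2.0
D_L = 15850.0                # Mpc at z = 2 (assumed cosmology)
f_obs = 115.271 / (1.0 + z)  # GHz, redshifted CO(1-0)

S_CO = Lprime * (1.0 + z)**3 * f_obs**2 / (3.25e7 * D_L**2)
print(S_CO, "Jy km/s")       # ~0.05 Jy km/s for these numbers
```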



6.9.1. SPIRE Source Extraction & Photometry in HIPE

HIPE has many methods available for source extraction and photometry. In this section, we will show how to perform both source extraction and photometry on the Level 2 product maps for a variety of source types (see Section 6.9.1.3 for an overview of the point source photometry methods available in HIPE). In Section 6.9.1.4 the source extraction and photometry using the HIPE algorithms SUSSEXtractor and DAOphot is explained. In Section 6.9.1.5 photometry via timeline fitting of the Level 1 products is explained. In Section 6.9.1.6 aperture photometry on SPIRE images is outlined.

The current recommendation for point source photometry is to use the Timeline Fitting algorithm for all but the faintest SPIRE maps; however, all photometry methods return broadly similar results for SPIRE data. Source coordinates can either be fed directly to the Timeline Fitter or can be provided via the output of the source extraction from SUSSEXtractor or DAOphot. These recommendations are shown pictorially in Figure 6.81 below.

Figure 6.81. Summary of Source Extraction and Photometry.

6.9.1.1. SPIRE Flux Calibration

Full details of the SPIRE calibration can be found in the SPIRE Observers' Manual and in dedicated publications: the calibration scheme is described in Griffin et al. (2013) and the implementation, using Neptune as the primary calibration standard, is described in Bendo et al. (2013). The treatment of the calibration for extended emission can be found in North et al. (2013).

The SPIRE photometer flux calibration has two sources of uncertainty which should be included in addition to the statistical errors of any measurement for point source calibration. One is a systematic uncertainty in the flux calibration related to the uncertainty in the models used for Neptune, the primary calibrator; these uncertainties, which are correlated across all three SPIRE bands, are currently quoted as 4%. The other source of uncertainty is a random uncertainty related to the ability to repeat flux density measurements of Neptune. This random uncertainty is <1.5% for all three bands.

For extended emission calibration, in addition to the above uncertainties, there is an additional 1% uncertainty due to the current uncertainty in the measured beam area.

6.9.1.2. Converting Jy/beam to surface brightness or flux densities in the SPIRE pipeline

Pipeline data and Level-2 maps are calibrated in Jy/beam. It is important to note that, since the SPIRE photometer flux calibration is performed on the timeline data, the beam areas equivalent to the beams of the timeline data must be used when calibrating extended emission in terms of surface brightness (Jy/pixel or Jy/sr). To convert maps from Jy/beam to Jy/pixel, the point source calibrated maps need to be divided by the beam area. The beam areas corresponding to the 1 arcsec pixel scale for a spectral index (α) of -1, as used in the pipeline, should be used (see Table 6.10). However, point source fluxes measured on surface brightness maps (e.g. Jy/pixel) need to be corrected by a multiplicative factor corresponding to the ratio of the pipeline beam (α = -1) and the effective beam for the assumed spectral index of the source, which takes into account the RSRF, the aperture efficiency and the variation of beam profile with frequency. These ratios are given in Table 6.11.
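
As an illustration of the division step (plain Python rather than a HIPE script; the map array is a stand-in, and the beam area is the PSW pipeline value from Table 6.10):

```python
import numpy as np

beam_area_arcsec2 = 469.35423     # PSW pipeline beam, alpha = -1 (Table 6.10)
pixel_area_arcsec2 = 1.0          # 1 arcsec map pixels
pix_per_beam = beam_area_arcsec2 / pixel_area_arcsec2

map_jybeam = np.zeros((128, 128))        # stand-in for a psrcPxW map, Jy/beam
map_jypix = map_jybeam / pix_per_beam    # Jy/beam -> Jy/pixel
```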

The standard SPIRE pipeline flux calibration assumes a point source calibration for the standard spectral index of α = -1 adopted for Herschel. Conversion factors to transform RSRF-weighted flux densities to monochromatic flux densities for point and extended sources are applied automatically in the pipeline. All the flux conversion parameters are explained in detail in the SPIRE Handbook. The parameters are listed in Table 6.9. K4P is the pipeline point source flux conversion parameter. KMonE(α = -1) is the conversion to monochromatic surface brightness for an extended source with α = -1. KPtoE is the conversion from point source flux density to extended source surface brightness for a source spectrum with α = -1. Ω_pip is the beam solid angle for a source with α = -1, i.e. the effective beam area (Ω_eff) for α = -1. K4E is the flux conversion parameter used in the pipeline, defined as KMonE(α = -1) Ω_pip, which converts to the flux density of an extended source.

The conversion between extended and point source calibration is given by the ratio K4E/K4P. This ratio is referred to as K4EdivK4P and converts a point source monochromatic flux density to a monochromatic extended source surface brightness (see Table 6.9; not to be confused with KPtoE, which includes the beam in the parameter and cannot be derived directly from the SPIRE Calibration Tree - see the code examples below).

All these factors are automatically applied to the standard pipeline point source (psrcPxW, in Jy/beam) and extended emission (extdPxW, in MJy/sr) products. The extended emission (extdPxW, see Table 6.1) products have also been processed with the relative gains applied and have been absolute zero-point corrected using the Planck maps.

K4E and K4P are the only K parameters explicitly included within the SPIRE Calibration Tree. For the purposes of point source photometry, dividing by the K4E/K4P (K4EdivK4P) parameter converts from extended to point source calibration.

The relationship between the various K parameters is given by

KPtoE = KMonE / K4P = (K4E / Ω_eff) / K4P = K4EdivK4P / Ω_eff
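
A quick numerical check of this relation, using the PSW values from Tables 6.9 and 6.10 (plain Python; the numbers come from the tables above):

```python
K4P = 1.0102                 # Table 6.9 (PSW)
K4E = 1.0102                 # Table 6.9 (PSW)
omega_eff_sr = 1.10319e-8    # sr, pipeline beam area for alpha = -1 (Table 6.10)

KMonE = (K4E / omega_eff_sr) / 1.0e6   # Jy/sr -> MJy/sr per Jy/beam
KPtoE = KMonE / K4P
print(KMonE, KPtoE)   # ~91.57 and ~90.65, matching Table 6.9
```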

Note that the pipeline assumes a value of α = -1; therefore, for other spectral indices, the appropriate colour corrections should also be applied (see Section 6.9.1.8).

For the purposes of point source extraction, where the absolute scale of the beam model is unimportant, the appropriate FWHM corresponding to the 1 arcsec pixel scale should be used from Table 6.12 below.

Table 6.9. SPIRE pipeline conversion factors for point and extended sources.

Parameter PSW PMW PLW
Wavelength (microns) 250 350 500
K4P 1.0102 1.0095 1.0056
KMonE (MJy/sr per Jy/beam) 91.5670 51.6654 23.7113
KPtoE (MJy/sr per Jy/beam) 90.6462 51.1806 23.5798
Beam Area (arcsec²) 469.3542 831.275 1804.3058
K4E 1.0102 1.0095 1.0056
K4E/K4P (K4EdivK4P) 1 1 1

Table 6.10. Beam Areas assumed by the pipeline ( α = -1)

Spectral Index Beam Area (arcsec²) Beam Area (10⁻⁸ sr)
(Fν = ν^α) PSW PMW PLW PSW PMW PLW
-1.0 469.35423 831.27497 1804.30575 1.10319 1.95386 4.24092

Table 6.11. Effective Beam Area ratios (beam correction) as function of spectral index ( α )

Spectral Index Effective Beam Area Ratio (Ω(-1)/Ω(α))
(Fν = ν^α) PSW PMW PLW
-4 0.9593 0.9603 0.9311
-3.5 0.9658 0.9666 0.9419
-3 0.9724 0.973 0.953
-2.5 0.9791 0.9795 0.9643
-2 0.986 0.9862 0.976
-1.5 0.9929 0.9931 0.9879
-1 1 1 1
-0.5 1.0072 1.007 1.0123
0 1.0145 1.0142 1.0247
0.5 1.0219 1.0214 1.0371
1 1.0294 1.0287 1.0496
1.5 1.037 1.036 1.0621
2 1.0446 1.0434 1.0746
2.5 1.0523 1.0508 1.0869
3 1.06 1.0581 1.0991
3.5 1.0677 1.0655 1.1111
4 1.0755 1.0729 1.1229
4.5 1.0832 1.0802 1.1344
5 1.0908 1.0875 1.1456

Table 6.12. SPIRE FWHM Parameters for 1 arcsec pixels.

Band FWHM Mean FWHM Ellipticity (Flattening)
(micron) (arcsec) (arcsec) (%)
250 18.4x17.4 17.9 5.1
350 24.9x23.6 24.2 5.4
500 37.0x33.8 35.4 8.7

6.9.1.3. Recipes for SPIRE Point Source Photometry

The SPIRE pipeline and HIPE have several methods available for point source photometry, e.g. map-based methods (SUSSEXtractor and DAOphot, Section 6.9.1.4), timeline fitting (Section 6.9.1.5) and aperture photometry (Section 6.9.1.6). Each method requires different input parameters (e.g. effective beam areas, FWHM, aperture sizes, colour and aperture corrections, etc.). The rationale behind the various steps required for photometry can be found in the SPIRE Handbook (formerly the SPIRE Observers Manual), whereas, in the following sections, photometry methodology specifically within the HIPE environment is explained. In Figure 6.82 and the associated table, the algorithmic steps required within HIPE to perform point source photometry are summarized for the main photometry tasks available.

Figure 6.82. Summary of Point Source Photometry methods in HIPE. The necessary inputs are summarised in the table below.

Input Reference
K4EdivK4P conversion factor Table 6.9
Pipeline Beam Areas ( α = -1) Table 6.10
Effective Beam ratios (beam correction) Table 6.11
FWHM Table 6.12
rpeak parameters Table 6.13
Apertures Table 6.14
Aperture Corrections Table 6.15
Colour Corrections Table 6.16

6.9.1.4. Source Extraction with SUSSEXtractor or DAOphot

HIPE provides two tasks to carry out source extraction on the SPIRE Level 2, Level 2.5 and Level 3 maps. The sourceExtractorDaophot and sourceExtractorSussextractor tasks are both included within HIPE, with sourceExtractorSussextractor optimized for use with SPIRE maps.

These algorithms are explained in detail in the Herschel Data Analysis Guide, Section 4.18.

This section explains how to use the source extractors via the graphical interface; advanced usage is described in the User Reference Manual:

The two tasks are listed in the Applicable folder of the Tasks view whenever an image is selected in the Variables view. Figure 6.83 shows the lists of parameters for the two tasks (hover the mouse pointer over a parameter to reveal the tooltip):

Figure 6.83. List of parameters for the two source extraction tasks.

Example

Both SUSSEXtractor and DAOphot work on the Level 2 maps. For the purposes of source extraction, the point source calibrated maps (psrcPxW) can be used for both algorithms. For photometry, SUSSEXtractor should use the point source calibrated maps (psrcPxW). However, since DAOphot actually carries out aperture photometry, for optimal results the maps calibrated for extended emission (extdPxW) should be used for DAOphot photometry.

The required inputs to be specified are the input map, the FWHM (as given in Table 6.12 for map pixel sizes of 1 arcsec, for the PSW, PMW and PLW arrays respectively) and the detection threshold. DAOphot also requires the pipeline beam area in Table 6.10 to convert the map to surface brightness units, since it is carrying out aperture photometry.

The examples below are adapted from the official SPIRE Photometry script available from the Useful Scripts menu within HIPE, and show how to extract the necessary calibration parameters from the SPIRE Calibration Tree and how to perform source extraction and photometry with SUSSEXtractor and DAOphot for a single SPIRE band.

SPIRE photometry is based on the assumption of a spectrum of the form νF(ν) = constant. In the case of a source having a different spectral shape, multiplicative corrections must be applied to any point source photometry. All the necessary corrections for photometry are contained within the SPIRE Calibration Products and can be interrogated as a function of spectral index (α).

Changing the SUSSEXtractor PRF

Note that for HIPE 12, SUSSEXtractor flux densities could be as much as 5 percent lower due to a change in the default PRF from 5x5 to 13x13 pixels. The original default value of 5x5 pixels for the PRF has been restored for HIPE version 13 onwards. Note that a change in PRF will affect the flux density measured by SUSSEXtractor. A 5x5 PRF (the default) provides flux density measurements consistent with the SPIRE Timeline Fitter. However, the PRF can be changed using the example below.

SUSSEXtractor can perform source extraction on the entire map, but also supports a Region of Interest (ROI), as shown below, to constrain the search area to a smaller map region.

Both DAOphot and SUSSEXtractor can detect and extract sources independently, but they can also take a source list or RA and Dec as input. In the example below, DAOphot takes as input the RA and Dec of the first source found by SUSSEXtractor in the previous example above.

As emphasized earlier, DAOphot photometry should be made using the maps calibrated for extended emission (extdPxW). In order to use these maps for point source photometry, the maps must be converted to a point source calibration and the units should be in Jy/pixel. These two steps are shown in the example code below.
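
The two steps amount to simple array arithmetic; here is a sketch in plain Python (not the HIPE script itself), using the PSW values from Tables 6.9 and 6.10:

```python
import numpy as np

K4EdivK4P = 1.0                    # Table 6.9 (PSW)
beam_area_arcsec2 = 469.35423      # Table 6.10 (PSW, alpha = -1)

extd_map_mjysr = np.zeros((128, 128))   # stand-in for an extdPxW map, MJy/sr

sr_per_arcsec2 = (np.pi / 180.0 / 3600.0) ** 2
pixel_area_sr = 1.0 * sr_per_arcsec2    # 1 arcsec pixels

# Step 1: extended -> point source calibration; step 2: MJy/sr -> Jy/pixel
map_jypix = (extd_map_mjysr / K4EdivK4P) * 1.0e6 * pixel_area_sr
```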

Fluxes obtained via SUSSEXtractor do not require any aperture correction. However, the fluxes obtained from DAOphot do require a multiplicative aperture correction. By default (doApertureCorrection = True), DAOphot estimates the aperture correction automatically; the automatic aperture correction is calculated from photometry on the point response function (PRF). Alternatively, the user can estimate their own aperture correction using different aperture sizes within DAOphot to estimate a curve of growth, or use the numbers given in Table 6.15. The aperture corrections in Table 6.15 are given for the typical case where there is a background around the source, but also for the scenario where the background has been subtracted. Note that if DAOphot is run with the parameter doApertureCorrection = False, then the resulting flux must also be multiplied by the effective beam ratio (beam correction) for the appropriate spectral index, α, given in Table 6.11, in order to take into account the RSRF, the aperture efficiency and the variation of beam profile with frequency.

Note that there are also aperture corrections for arbitrary apertures (from 0 to 700 arcsec) in the SPIRE Calibration Tree, in the RadialCorrBeam product under the normArea table. The corrections in this table (by means of the encircled energy fraction = normalized beam area) are only for an α = -1 source; however, the colour corrections for the normalized beam are very small and in most cases can be ignored without too much concern.

For both SUSSEXtractor and DAOphot, by default the point response function (PRF) is assumed to be Gaussian, with full-width-half-maximum (in arcsec) provided by the fwhm parameter. Alternatively, you can specify a custom PRF via the prf parameter. This should be a variable of type SimpleImage. The image should be of odd dimension, with the peak at the centre, normalised such that it gives the central pixels of a point source of flux 1 Jy, in the units of the input map. The PRF image is assumed to have the same pixel scale as the main image, and does not need to have an associated WCS.

A PRF can be extracted from the beam profile in the calibration tree. Assuming that cal is your SPIRE calibration context and myImage is your image (in Jy/beam units) for the PSW band, issue the following command:

Be sure to verify that the PRF image has the peak value at the image centre. Similar commands may be used to construct PRF images for the PMW and PLW bands.

The output from the source extraction is a SourceListProduct and is called sourceList by default. You can inspect it in the Product Viewer like any other product, as shown in Figure 6.84. To access the measured fluxes and positions directly, the SourceListProduct must be addressed in the following way: flux = srcSussex['sources']['flux'].data[0] (see also the example code snippet).

Figure 6.84. The list of sources shown in the Product Viewer, with the internal dataset highlighted.

To display the extracted sources on the image, drag and drop the sourceList variable on the image in the Editor view. A circle with fixed width is overlaid at the location of each source, as shown by the following figure. Note that dragging and dropping will not work if you select the returnPixelCoordinates checkbox in the task graphical interface. When this option is selected, the task returns source coordinates in pixels rather than astronomical coordinates.

The manipulation of the sourceList is covered in detail in the Herschel Data Analysis Guide Chapter 4 which describes how to import a sourceList from a text or FITS file, overlay a sourceList on a image and change the size and colours of the circles.

Note that both DAOphot and SUSSEXtractor can take a source list (e.g. from some ancillary data catalogue) as input. When a source extractor is given a sourceList as input, it measures fluxes and flux errors for all of the positions in that list. The flux returned is the estimate of the flux of a source at that position, regardless of whether there really is a source at that position, or whether a formal detection is possible. In this way, the source extractors will effectively also return an upper limit for an undetected source. See the Herschel Data Analysis Guide, Section 4.18, for more details.

Common problems

No error extension (sourceExtractorSussextractor only)

sourceExtractorSussextractor requires the input image to have an error extension, and if this is not present the task will fail. The error (uncertainty) in the pixel values should be determined as part of the map-making algorithm. However, an easy way to add an error extension to a SimpleImage , image , assuming the uncertainty in each pixel is 0.001, is to use:

Invalid units, or units not specified

Both source extraction tasks require the input image to specify its units in a valid format. If the task cannot recognise the units of the image as units of surface brightness then it will fail. To set the units of a SimpleImage, image, to "Jy/beam" (for example), use image.setUnit("Jy/beam"). Other units based on Jy, mJy, MJy, beam, pixel, sr, etc. are recognised.

Reducing the number of NaNs in the DAOphot background radii

The default values for the DAOphot background annulus of 60-90 arcsec can produce a substantial number of NaN results in some cases. Although the current values for the background annulus produce the most consistent photometry, in severe cases the number of NaNs can be reduced by changing the default background annulus radii as follows:

  • PSW: inner radius = 22.0 arcsec, outer radius = 33.0 arcsec
  • PMW: inner radius = 30.0 arcsec, outer radius = 45.0 arcsec
  • PLW: inner radius = 42.0 arcsec, outer radius = 63.0 arcsec

6.9.1.5. Source Fitting of Point Sources in Timeline Data

The sourceExtractorDaophot and sourceExtractorSussextractor source extraction algorithms work on the final image maps; however, SPIRE also provides an alternative method for photometry working on the timeline data itself, before the map-making process. The Timeline Source Fitter performs photometry at a given position (or set of positions) by fitting a Gaussian to the timeline samples on the sky. The Timeline Fitter is currently the recommended algorithm for SPIRE point source photometry. The Timeline Source Fitter does not work on maps and does not perform source extraction. It requires Level 1 destriped timelines as input and the locations of sources at which to fit. The Timeline Source Fitter can be accessed via a GUI in the Tasks window within HIPE under sourceExtractorTimeline, as shown in Figure 6.85.

Figure 6.85. GUI for the Timeline Fitter

From the GUI box in Figure 6.85, it can be seen that there are various parameters for the Timeline Fitter. The input (destripedLevel1) is a destriped (or baseline subtracted) Level 1 context or Scan Context (i.e. the output from the destriper or baseline removal algorithms). The other required input is a set of RA, Dec coordinates in decimal degrees for a single source ([RA,DEC]) or a list of coordinates ([[RA1,RA2,...],[DEC1,DEC2,...]]). Alternatively, a source list can be appended (see below). Additional parameters are explained below in Table 6.13.

The Timeline Source Fitter can also be run from the command line as

Table 6.13. Parameters for the Timeline Fitter

Parameter Options
array String parameter for SPIRE array "PSW", "PMW", "PLW".
inputSourceList Either a Double1d array with 2 entries containing the estimated RA and Dec of the source in degrees, or a Source List Product from, for example, the output of SUSSEXtractor.
rpeak Optional parameter for the radius of the region that will include the peak of the source. Appropriate values are 22, 30, 42 for PSW, PMW, PLW respectively; these exclude the Airy rings.
rbackground Optional parameter (Double1d with 2 entries) for the inner and outer radius of the annulus used for background subtraction. Setting either to a negative value will result in no background being subtracted. The default value is array dependent: PSW [70.,74.], PMW [98.,103.], PLW [140.,147.]
useBackInFit Optional Boolean parameter. If True then all data samples from background annulus will be used in the fit. Otherwise the median value will be removed from the data to fit. Default value is True . Note: using useBackInFit = True will improve the fit.
allowVaryBackground Optional Boolean parameter. If True then background is treated as a free parameter in the fit. This parameter is ignored if the background is ignored in the fit. Default value is True . Note: using allowVaryBackground = True will improve the fit.
allowTiltBack Optional Boolean parameter. If True then a tilted plane is used for the background. Default value is False . This is ignored if allowVaryBackground is set to False
fitEllipticalGauss2d Optional Boolean parameter. If True then an elliptical Gaussian is fit to the data. Otherwise a circular Gaussian is used. Default value is False. Note that for relatively bright sources elliptical Gaussians are fine; however, for sources fainter than approximately 30 mJy, circular Gaussians should be used in preference.
modelGauss2dSigma Optional Double, set in degrees, which will perform a circular Gaussian fit with the sigma parameter of the Gaussian fixed (ignored if the fitEllipticalGauss2d parameter is set). Default value is -1.0.
fitMaxIterations Optional integer parameter to set a limit on the number of iterations performed by the Levenberg-Marquardt Fitter. Default value is 10000.
fitTolerance Optional (Double) parameter that sets the tolerance the Levenberg-Marquardt Fitter attempts to achieve when performing the fit. Default value is 1e-4.
roi Region of Interest. A skymask where only the sources lying inside the ROI will be included.
slow By default, if processing more than one source, the task will attempt to store all of the data in memory at once. If it runs out of memory, it will use a slower method, which involves iterating over all of the data for each source. Set this parameter to True to use this slower method without first attempting to store all of the data in memory.

The output from the Timeline Fitter is a standard Source List Product (default name is sourceList ) containing results from the fitting, including

Positional information ra, dec and associated errors raPlusErr, decPlusErr, raMinusErr, decMinusErr .

Fitted point source flux density flux and associated errors fluxPlusErr, fluxMinusErr in mJy.

The background measurement and error backgroundParm1, backgroundParm1Err .

The Chi-squared result, evidence and reduced chi-squared: chiSquare, evidence, reducedChiSquare. Note that although the task returns the Chi-squared, reduced Chi-squared, and a Bayesian evidence value, tests with simulated sources added to real timeline data showed that these numbers did not necessarily reflect the accuracy of the resulting fit. For example, the evidence values for a Gauss2DModel with a fixed width fit to the simulated source were systematically higher (i.e. nominally better), although the standard Gauss2DModel fit (with variable width) and the Gauss2DRotModel fit produced more accurate measurements. Users should therefore keep in mind that so many data points are used in the fit that the reduced chi-squared values will vary very little between models with different numbers of parameters. Use these metrics with extreme caution.

The number of iterations in the fit nIter and whether the fit was successful fitSuccessful .

The number of data points used to fit the background nRdoutBack and source nRdoutPeak respectively.

The width of the fitted Gaussian and errors: sigma, sigmaErr, in degrees, where FWHM = 2*sqrt(2 ln 2)*sigma ≈ 2.3548*sigma.
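
For example, converting the fitted sigma (in degrees) to a FWHM in arcsec is a one-liner (the sigma value here is made up):

```python
import numpy as np

sigma_deg = 2.1e-3                                   # hypothetical fit output
fwhm_arcsec = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_deg * 3600.0
print(fwhm_arcsec)   # ~17.8 arcsec, i.e. close to the PSW beam
```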

The Source List Product can be written to an output ASCII file by the following command

Example

It is also possible to provide a source list from another source extraction algorithm, for example using SUSSEXtractor to extract the sources and passing the output source list as input to the timeline fitter, as shown below (note that in this example the sources are not colour corrected).

Recommended Settings for the Timeline Fitter

To fit point sources in fields with flat backgrounds (such as extragalactic fields where the background is dominated by confused extragalactic sources), the following settings are recommended:

The parameter rPeak should be set to 22, 30, and 42 for PSW, PMW, and PLW, respectively. This aperture was selected because it includes the Gaussian part of the PSF (the region from the centre up to the gap between the peak and the first Airy ring).

The parameter rbackground has different defaults for each array. These values will work for most sources, as the signal in these smaller annuli will be dominated by background noise except for very bright sources.

The parameter allowTiltBack should be set to False.

The parameter fitEllipticalGauss2d should normally be set to False. This is only useful if attempting to distinguish between elongated and truly unresolved sources.

The parameter modelGauss2dSigma depends on the source brightness and the scan speed. For slow or nominal scan speeds, for sources brighter than 35 mJy in the PSW band and 50 mJy in the PMW and PLW bands, modelGauss2dSigma should not be set. Allowing the FWHM of the Gaussian to vary usually produces a more accurate fit. However, users should check that the FWHM from the fit falls within nominal ranges for the data (15-21 arcsec for PSW, 19-28 arcsec for PMW, and 28-40 arcsec for PLW). For sources fainter than 35 mJy in the PSW band or 50 mJy in the PMW or PLW bands, or for sources observed using the fast scan speed, modelGauss2dSigma should be set to 17.6 for PSW, 23.9 for PMW, and 35.2 for PLW. Fixing the width of the PSF normally produces more accurate results in these cases.

The parameter useBackInFit should be set to True .

The parameter allowVaryBack should be set to True .

The following optional set-ups can also be used to handle the background:

The parameter allowVaryBack can be set to False while still specifying rBackground and setting useBackInFit to True. In this scenario, the background will not be treated as a free parameter in the fit; instead, an initial background offset will be measured and subtracted before the data are fitted.

The parameter allowVaryBack can be set to False and useBackInFit can be set to False if rBackground is specified. The data in the background annulus will still be used to measure and subtract an initial background before a Gaussian function is fit to the PSF, but the data in the background annulus will not be used in the fit, and the background will not be treated as a free parameter.

The parameter rBackground does not need to be given as an input, but allowVaryBack should be set to False and useBackInFit should be set to False . This is appropriate if the background has already been subtracted (for example, by using removeBaselines).

To fit sources in regions with significant background variations (such as sources observed in fields with thick cirrus emission), the following settings are recommended:

The parameter rBackground should not be specified. No background annulus should be used in the PSF fitting.

The parameter allowTiltBack should be set to True .

The parameter fitEllipticalGauss2d should be set to False .

The parameter useBackInFit should normally be set to False .

The parameter allowVaryBack should normally be set to False .

6.9.1.6. Performing Aperture Photometry on SPIRE Images

Aperture photometry of both point sources and extended sources can be performed using three tasks in HIPE: the annularSkyAperturePhotometry, rectangularSkyAperturePhotometry and fixedSkyAperturePhotometry tasks. These algorithms are explained in detail in the Herschel Data Analysis Guide, Section 4.20. In the following two sections, specific information is provided for aperture photometry on point sources and extended emission respectively.

Recipe for Point Source Aperture Photometry

Although, in principle, aperture photometry is not recommended for point sources in SPIRE maps, the framework to do so does exist. To measure integrated flux densities of point sources via aperture photometry, the following steps are performed (the explicit algorithmic steps within HIPE are also shown in Figure 6.82, and the HIPE script is shown below):

The starting point is the Level 2 extdPxW extended emission calibrated maps in MJy per steradian.

Convert to point source calibration

SPIRE maps for extended emission have been calibrated for a monochromatic extended source surface brightness using the K4EdivK4P parameter described in Section 6.9.1.2 and Table 6.9. Therefore, the maps must first be divided by the K4EdivK4P parameter.

Divide the image by the beam area to convert the image to Jy/pixel; for an image with small (<1″) pixels, use the pipeline beams for 1″ pixels given in Table 6.10. Note that HIPE provides a specific task, convertImageUnit, available from the Tasks window, to both divide the image by the beam and convert the units.

Measure the integrated flux density within the desired aperture and background annulus, using the appropriate aperture photometry task as explained in the Herschel Data Analysis Guide, Chapter 4. For point sources the recommended aperture radii are selected to contain just the main lobe of the beam, and are given below in Table 6.14, along with the values for the annulus to estimate the background level.

The SPIRE flux calibration assumes a flat spectrum for the source (νF(ν) = constant). For other spectral indices, the flux density needs to be multiplied by the appropriate beam ratio Ω(α=-1)/Ω(α) given in Table 6.11, in order to take into account the RSRF, the aperture efficiency and the variation of beam profile with frequency.

The SPIRE flux calibration assumes a flat spectrum for the source (νF(ν) = constant). For other spectral indices, the flux density needs to be multiplied by the appropriate point source colour correction for the assumed spectral index, using the Colour Corrections for Point Sources given in Table 6.16.

Aperture correction factors are required, since this method measures the sky brightness in a fraction of the beam, and therefore underestimates the integrated flux density. The corrections have been tested and bring the integrated flux densities obtained through aperture photometry into agreement with those obtained from fitting timeline data, which is in general a more accurate method. The aperture corrections are also listed in Table 6.15. Note that there are also aperture corrections for arbitrary apertures (from 0 to 700 arcsec) in the SPIRE Calibration Tree, in the RadialCorrBeam product under the normArea table. The corrections in this table (by means of the encircled energy fraction = normalized beam area) are only for an α = -1 source; however, the colour corrections for the normalized beam are very small and in most cases can be ignored without too much concern.

Example

The above procedure is shown in the script below and is adapted from the official SPIRE Photometry script available from the Useful Scripts menu within HIPE. Note that, as with the earlier examples for SUSSEXtractor and DAOphot, all the necessary calibration is contained within the SPIRE Calibration Tree, which can be interrogated as a function of spectral index, alpha. In the example for aperture photometry below, the source position for photometry is supplied via SUSSEXtractor, although in practice it could just as easily be supplied as string variables for RA and Dec respectively.

Table 6.14. Parameters for Aperture Photometry

Table 6.15. Aperture Corrections for Annular Aperture Photometry

Spectral Index Background included Background removed
(Fν = ν^α) PSW PMW PLW PSW PMW PLW
-4 1.294 1.255 1.293 1.29 1.246 1.254
-3.5 1.293 1.254 1.29 1.288 1.245 1.252
-3 1.292 1.253 1.287 1.287 1.244 1.249
-2.5 1.291 1.251 1.285 1.286 1.243 1.247
-2 1.29 1.25 1.282 1.285 1.242 1.245
-1.5 1.288 1.249 1.279 1.284 1.241 1.243
-1 1.287 1.248 1.277 1.283 1.24 1.24
-0.5 1.286 1.247 1.274 1.281 1.239 1.238
0 1.285 1.246 1.271 1.28 1.238 1.236
0.5 1.284 1.245 1.268 1.279 1.237 1.234
1 1.282 1.244 1.266 1.278 1.235 1.232
1.5 1.281 1.243 1.263 1.277 1.234 1.229
2 1.28 1.242 1.261 1.276 1.233 1.227
2.5 1.279 1.241 1.258 1.275 1.232 1.225
3 1.278 1.24 1.256 1.273 1.231 1.223
3.5 1.276 1.239 1.253 1.272 1.23 1.221
4 1.275 1.238 1.251 1.271 1.229 1.219
4.5 1.274 1.236 1.249 1.27 1.228 1.218
5 1.273 1.235 1.247 1.269 1.227 1.216
Recipe for Aperture Photometry on Extended Emission Maps

To measure integrated flux densities of extended sources, or to work with surface brightness measurements for extended emission (see the SPIRE Handbook for details of the calibration for extended emission), the following steps should be performed (the explicit algorithmic steps within HIPE are also shown in Figure 6.86, and the HIPE script is shown below):

Start from the Level 2 extdPxW extended emission calibrated maps in MJy per steradian.

Convert image units to Jy/pix

The HIPE aperture photometry tasks require images specifically in units of Jy/pixel. Convert the image to units of Jy/pixel by using the convertImageUnit task, available from the HIPE Tasks window. Since the extended emission maps were originally in MJy/sr, no beam area is required.

The SPIRE flux calibration assumes a flat spectrum for the source (νF(ν) = constant). For other spectral indices, the flux density needs to be multiplied by the appropriate colour correction for extended emission for the assumed spectral index, using the Colour Corrections for Extended Sources given in Table 6.16. Note that for extended emission, the colour correction also takes into account the various beam effects, so the values in Table 6.11 are not required.

Measure the flux density within the desired aperture if integrated flux densities are required. The aperture photometry tasks are explained in the Herschel Data Analysis Guide, Chapter 4.

Example

The procedure for aperture photometry for extended emission is summarized below for a single SPIRE band. To run the script for a different SPIRE band, simply edit the line array = "PSW" # SPIRE Array Bands: "PSW", "PMW", "PLW" .

Figure 6.86. Summary of Photometry methods for Extended Emission in HIPE. The necessary inputs are summarised in the table below.


Data Products

The VLA-ANGST HI data is available from this web page.


If the data is used for publication, please cite:

Ott, J., Stilp, A. M., Warren, S., Skillman, E. D., Dalcanton, J., Walter, F., de Blok, W. J. G., Koribalski, B., & West, A. A. 2012, AJ, 144, 123

This publication describes the observations, data reduction, and data products in full detail.

All images are in FITS format and the spectra are in encapsulated PostScript format. All files are compressed with gzip.

Spectrum: The naturally weighted HI spectrum
Data Cube: The HI data cube, not primary beam corrected. Channels are in LSR velocity units. Brightness in Jy beam⁻¹.
Column Density: The integrated HI intensity map, converted to units of HI column density (cm⁻²)
Moment 0: The integrated HI intensity map in Jy beam⁻¹ km s⁻¹ units
Moment 1: The intensity-weighted HI velocity map in units of km s⁻¹
Moment 2: The Moment 2 map, a measure of the HI linewidth, in units of km s⁻¹
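
For reference, a common recipe for turning a moment-0 value in Jy beam⁻¹ km s⁻¹ into an HI column density is sketched below (standard optically-thin constants, not taken from the survey paper; the beam and map values are hypothetical):

```python
# N(HI) from a moment-0 map in Jy/beam km/s, assuming optically thin HI
bmaj, bmin = 10.0, 8.0        # beam FWHM axes in arcsec (hypothetical)
mom0 = 0.5                    # Jy/beam km/s at some pixel (hypothetical)

# Brightness temperature per Jy/beam at the HI line frequency (1.4204 GHz):
# T_B [K] = 1.222e3 * S[mJy/beam] / (nu[GHz]**2 * bmaj["] * bmin["])
K_per_jybeam = 1.222e3 * 1.0e3 / (1.4204**2 * bmaj * bmin)

N_HI = 1.823e18 * mom0 * K_per_jybeam   # cm^-2
print(N_HI)
```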

na/ro: All images are available in natural or robust weighted versions. Naturally (na) weighted data are more sensitive to low surface brightness emission than robust (ro) weighted data. Robust weighted data, on the other hand, have better spatial resolution than naturally weighted data.

HS117, KDG63, KKH37, DDO113, KKR25, KK77 were not detected in our HI survey.


Varying resolution (multi-beam) cubes are somewhat trickier to work with in general, though unit conversion is easy. You can perform the same sort of unit conversion with VaryingResolutionSpectralCubes as with regular SpectralCubes; spectral-cube will use a different beam and frequency for each plane.

You can identify channels with bad beams (i.e., beams that differ from a reference beam, which by default is the median beam) using identify_bad_beams (the returned value is a mask array where True means the channel is good), mask channels with undesirable beams using mask_out_bad_beams , and in general mask out individual channels using mask_channels .

For other sorts of operations, discussion of how to deal with these cubes via smoothing to a common resolution is in the Smoothing document.
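
A short sketch of these operations with spectral-cube (the filename is hypothetical; to, identify_bad_beams and mask_out_bad_beams are real spectral-cube methods, but check the signatures against your installed version):

```python
import astropy.units as u
from spectral_cube import SpectralCube

cube = SpectralCube.read('mycube.fits')   # multi-beam cubes load as
                                          # VaryingResolutionSpectralCube

# Jy/beam -> K, using each channel's own beam and frequency
kcube = cube.to(u.K)

# Flag channels whose beams differ from the median beam by more than 1%,
# then mask those channels out
good = cube.identify_bad_beams(threshold=0.01)    # True = good channel
masked = cube.mask_out_bad_beams(threshold=0.01)
```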




An object to handle single radio beams.

Create a new Gaussian beam

Parameters major Quantity with angular equivalency

minor Quantity with angular equivalency

pa Quantity with angular equivalency

area Quantity with steradian equivalency

The area of the beam. This is an alternative to specifying the major/minor/PA, and will create those values assuming a circular Gaussian beam.

default_unit Unit

The unit to impose on major, minor if they are specified as floats

Base object if memory is from some other object.

An object to simplify the interaction of the array with the ctypes module.

Python buffer object pointing to the start of the array’s data.

Data-type of the array’s elements.

Information about the memory layout of the array.

The imaginary part of the array.

Container for meta information like name, description, format.

Length of one array element in bytes.

Total bytes consumed by the elements of the array.

Number of array dimensions.

The real part of the array.

Tuple of array dimensions.

Number of elements in the array.

Tuple of bytes to step in each dimension when traversing an array.

Returns True if all elements evaluate to True.

Returns True if any of the elements of a evaluate to True.

Return indices of the maximum values along the given axis.

Return indices of the minimum values along the given axis of a .

Returns the indices that would partition this array.

Returns the indices that would sort this array.

Returns an elliptical Gaussian kernel of the beam.

Returns an elliptical Tophat kernel of the beam.

astype (dtype[, order, casting, subok, copy])

Copy of the array, cast to a specified type.

Attach the beam information to the provided header.

Return the beam area in pc^2 (or equivalent) given a distance

Swap the bytes of the array elements

Use an index array to construct a new array from a set of choices.

Return an array whose values are limited to [min, max] .

Return selected slices of this array along given axis.

Complex-conjugate all elements.

Return the complex conjugate, element-wise.

Convolve one beam with another.

Return a copy of the array.

Return the cumulative product of the elements along the given axis.

Return the cumulative sum of the elements along the given axis.

Generates a new Quantity with the units decomposed.

Deconvolve a beam from another

Return specified diagonals.

Dot product of two arrays.

Dump a pickle of the array to the specified file.

Returns the pickle of the array as a string.

Return a matplotlib ellipse for plotting

Fill the array with a scalar value.

Return a copy of the array collapsed into one dimension.

Instantiate beam from a CASA image.

Instantiate a single beam from a bintable from a CASA-produced image HDU.

Instantiate the beam from a header.

Instantiate the beam from an AIPS header.

Returns a field of the given array as a certain type.

Insert values along the given axis before the given indices and return a new Quantity object.

Copy an element of an array to a standard Python scalar and return it.

Insert scalar into an array (scalar is cast to array’s dtype, if possible)

Return the conversion for the given value from Jy/beam to K at the specified frequency.

Return the conversion function between Jy/beam and K at the specified frequency.

max ([axis, out, keepdims, initial, where])

Return the maximum along a given axis.

Returns the average of the array elements along given axis.

min ([axis, out, keepdims, initial, where])

Return the minimum along a given axis.

Return the array with the same data viewed with a different byte order.

Return the indices of the elements that are non-zero.

Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array.

prod ([axis, dtype, out, keepdims, initial, …])

Return the product of the array elements over the given axis

Peak to peak (maximum - minimum) value along a given axis.

Set a.flat[n] = values[n] for all n in indices.

Repeat elements of an array.

Returns an array containing the same data with a new shape.

Change shape and size of array in-place.

Return a with each element rounded to the given number of decimals.

Find indices where elements of v should be inserted in a to maintain order.

Put a value into a specified place in a field defined by a data-type.

Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY), respectively.

Remove single-dimensional entries from the shape of a .

std ([axis, dtype, out, ddof, keepdims])

Returns the standard deviation of the array elements along given axis.

sum ([axis, dtype, out, keepdims, initial, where])

Return the sum of the array elements over the given axis.

Return a view of the array with axis1 and axis2 interchanged.

Return an array formed from the elements of a at the given indices.

Return a new Quantity object with the specified unit.

to_string ([unit, precision, format, subfmt])

Generate a string representation of the quantity and its unit.

The numerical value, possibly in a different unit.

Construct Python bytes containing the raw data bytes in the array.

Write array to a file as text or binary (default).

Return the array as an a.ndim -levels deep nested list of Python scalars.

Construct Python bytes containing the raw data bytes in the array.

trace ([offset, axis1, axis2, dtype, out])

Return the sum along diagonals of the array.

Returns a view of the array with axes transposed.

var ([axis, dtype, out, ddof, keepdims])

Returns the variance of the array elements, along given axis.

New view of array with the same data.
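
To tie this listing back to the topic of the page, here is a short sketch using the Beam class's Jy/beam-to-K helpers described above (the beam parameters and frequency are made up):

```python
import astropy.units as u
from radio_beam import Beam

beam = Beam(major=10 * u.arcsec, minor=8 * u.arcsec, pa=30 * u.deg)
print(beam.sr)                  # beam solid angle in steradians

# Brightness temperature corresponding to 1 Jy/beam at 100 GHz
print(beam.jtok(100 * u.GHz))

# Or as an astropy equivalency:
tb = (1 * u.Jy).to(u.K, equivalencies=beam.jtok_equiv(100 * u.GHz))
print(tb)
```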

Base object if memory is from some other object.

The base of an array that owns its memory is None:

Slicing creates a view, whose memory is shared with x:

An object to simplify the interaction of the array with the ctypes module.

This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library.

Parameters None Returns c Python object

Possessing attributes data, shape, strides, etc.

Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes):

A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as self._array_interface_['data'][0] .

Note that unlike data_as , a reference will not be kept to the array: code like ctypes.c_void_p((a + b).ctypes.data) will result in a pointer to a deallocated array, and should be spelt (a + b).ctypes.data_as(ctypes.c_void_p)

(c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to dtype('p') on this platform. This base-type could be ctypes.c_int , ctypes.c_long , or ctypes.c_longlong depending on the platform. The c_intp type is defined accordingly in numpy.ctypeslib . The ctypes array contains the shape of the underlying array.

(c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array.

Return the data pointer cast to a particular c-types object. For example, calling self._as_parameter_ is equivalent to self.data_as(ctypes.c_void_p) . Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: self.data_as(ctypes.POINTER(ctypes.c_double)) .

The returned pointer will keep a reference to the array.

Return the shape tuple as an array of some other c-types type. For example: self.shape_as(ctypes.c_short) .

Return the strides tuple as an array of some other c-types type. For example: self.strides_as(ctypes.c_longlong) .

If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the as_parameter attribute which will return an integer equal to the data attribute.

Python buffer object pointing to the start of the array’s data.

Data-type of the array’s elements.

Parameters None Returns d numpy dtype object

Information about the memory layout of the array.

The flags object can be accessed dictionary-like (as in a.flags['WRITEABLE'] ), or by using lowercased attribute names (as in a.flags.writeable ). Short flag names are only supported in dictionary access.

Only the WRITEBACKIFCOPY, UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling ndarray.setflags .

The array flags cannot be set arbitrarily:

UPDATEIFCOPY can only be set False .

WRITEBACKIFCOPY can only be set False .

ALIGNED can only be set True if the data is truly aligned.

WRITEABLE can only be set True if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string.

Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays.

Even for contiguous arrays a stride for a given dimension arr.strides[dim] may be arbitrary if arr.shape[dim] == 1 or the array has no elements. It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays or self.strides[0] == self.itemsize for Fortran-style contiguous arrays is true.

Attributes

C_CONTIGUOUS (C)

The data is in a single, C-style contiguous segment.

F_CONTIGUOUS (F)

The data is in a single, Fortran-style contiguous segment.

OWNDATA (O)

The array owns the memory it uses or borrows it from another object.

WRITEABLE (W)

The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception.

ALIGNED (A)

The data and all elements are aligned appropriately for the hardware.

WRITEBACKIFCOPY (X)

This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating the array, so that the base array is updated with the contents of this array.

UPDATEIFCOPY (U)

(Deprecated, use WRITEBACKIFCOPY) This array is a copy of some other array. When this array is deallocated, the base array will be updated with the contents of this array.

FNC

F_CONTIGUOUS and not C_CONTIGUOUS.

FORC

F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).

FARRAY (FA)

BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.
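A short illustration of the access styles described above (standard NumPy; note that the exact exception type raised on assignment to a locked array has varied between NumPy versions):

    import numpy as np

    a = np.zeros((3, 4))
    print(a.flags['C_CONTIGUOUS'])   # True: dictionary-style access
    print(a.flags.f_contiguous)      # False for a 2-D C-ordered array

    a.flags.writeable = False        # lock the data
    try:
        a[0, 0] = 1.0
    except (ValueError, RuntimeError) as err:
        print(err)                   # assignment to a read-only array fails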

The imaginary part of the array.

Container for meta information like name, description, format. This is required when the object is used as a mixin column within a table, but can be used as a general way to store meta information.

Length of one array element in bytes.

Total bytes consumed by the elements of the array.

Does not include memory consumed by non-element attributes of the array object.

Number of array dimensions.

The real part of the array.

Tuple of array dimensions.

The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. As with numpy.reshape , one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required.
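For example (standard NumPy; an in-place reshape with one inferred dimension):

    import numpy as np

    a = np.arange(6)
    a.shape = (2, -1)   # -1 is inferred from the size: shape becomes (2, 3)
    print(a.shape)      # (2, 3)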

Number of elements in the array.

Equal to np.prod(a.shape) , i.e., the product of the array’s dimensions.

a.size returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested np.prod(a.shape) , which returns an instance of np.int_ ), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type.

Tuple of bytes to step in each dimension when traversing an array.

The byte offset of element (i[0], i[1], ..., i[n]) in an array a is:

offset = sum(np.array(i) * a.strides)

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

Imagine an array of 32-bit integers (each 4 bytes):
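A minimal array matching that description (the element values are arbitrary; the strides depend only on the shape and dtype):

    import numpy as np

    x = np.array([[0, 1, 2, 3, 4],
                  [5, 6, 7, 8, 9]], dtype=np.int32)
    print(x.strides)   # (20, 4)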

This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array x will be (20, 4) .

all(axis=None, out=None, keepdims=False)

Returns True if all elements evaluate to True.

Refer to numpy.all for full documentation.

Returns True if any of the elements of a evaluate to True.

Refer to numpy.any for full documentation.

Return indices of the maximum values along the given axis.

Refer to numpy.argmax for full documentation.

Return indices of the minimum values along the given axis of a .

Refer to numpy.argmin for detailed documentation.

Returns the indices that would partition this array.

Refer to numpy.argpartition for full documentation.

Returns the indices that would sort this array.

Refer to numpy.argsort for full documentation.

as_kernel(pixscale, **kwargs)

Returns an elliptical Gaussian kernel of the beam.

This method is not aware of any misalignment between pixel and world coordinates.

pixscale : Conversion from angular to pixel size.

kwargs : passed to EllipticalGaussian2DKernel

as_tophat_kernel(pixscale, **kwargs)

Returns an elliptical Tophat kernel of the beam. The area has been scaled to match the 2D Gaussian area.

This method is not aware of any misalignment between pixel and world coordinates.

kwargs : passed to EllipticalTophat2DKernel

astype(dtype, order='K', casting='unsafe', subok=True, copy=True)

Copy of the array, cast to a specified type.

Parameters

dtype : str or dtype

Typecode or data-type to which the array is cast.

order : {'C', 'F', 'A', 'K'}, optional

Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.

casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional

Controls what kind of data casting may occur. Defaults to 'unsafe' for backwards compatibility.

  • ‘no’ means the data types should not be cast at all.

  • ‘equiv’ means only byte-order changes are allowed.

  • ‘safe’ means only casts which can preserve values are allowed.

  • ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.

  • ‘unsafe’ means any data conversions may be done.

subok : bool, optional

If True, then sub-classes will be passed through (default), otherwise the returned array will be forced to be a base-class array.

copy : bool, optional

By default, astype always returns a newly allocated array. If this is set to false, and the dtype , order , and subok requirements are satisfied, the input array is returned instead of a copy.

Returns

arr_t : ndarray

Unless copy is False and the other conditions for returning the input array are satisfied (see description for copy input parameter), arr_t is a new array of the same shape as the input array, with dtype, order given by dtype , order .

Raises

ComplexWarning

When casting from complex to float or int. To avoid this, one should use a.real.astype(t) .

Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting. Casting to multiple fields is allowed, but casting from multiple fields is not.

Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the max integer/float value converted.
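A few representative calls (standard NumPy, matching the parameter descriptions above):

    import numpy as np

    x = np.array([1.5, 2.7, 3.1])
    print(x.astype(np.int64))                  # default 'unsafe' cast: [1 2 3]

    x.astype(np.float32, casting='same_kind')  # allowed: float64 -> float32

    # Casting complex to float raises ComplexWarning; use .real instead:
    z = np.array([1 + 2j])
    print(z.real.astype(np.float64))           # [1.]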

Attach the beam information to the provided header.

Parameters

header : astropy.io.fits.header.Header

Header to add/update beam info.

copy : bool, optional

If True, return a copy of the input header with the beam information rather than updating it in place.

Returns

copy_header : astropy.io.fits.header.Header

Copy of the input header with the updated beam info when copy=True .

beam_projected_area(distance)

Return the beam area in pc^2 (or equivalent) given a distance.
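These beam methods appear to come from the radio_beam package; a sketch of how they fit together, assuming its Beam class (the import path, the beam parameters, the pixel scale, and the 16 Mpc distance are illustrative assumptions, not values from this document):

    import astropy.units as u
    from radio_beam import Beam  # assumed source of the methods above

    beam = Beam(major=15.0 * u.arcsec, minor=12.0 * u.arcsec, pa=85 * u.deg)

    kernel = beam.as_kernel(pixscale=3.0 * u.arcsec)  # elliptical Gaussian kernel
    area = beam.beam_projected_area(16 * u.Mpc)       # physical beam area at 16 Mpc
    print(area)                                       # in pc^2 (or equivalent)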

Swap the bytes of the array elements

Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually.

Parameters inplace bool, optional

If True , swap bytes in-place, default is False .

Returns out ndarray

The byteswapped array. If inplace is True , this is a view to self.

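The behaviour is easiest to see in hex (standard NumPy):

    import numpy as np

    A = np.array([1, 256, 8755], dtype=np.int16)
    print(list(map(hex, A)))    # ['0x1', '0x100', '0x2233']
    A.byteswap(inplace=True)
    print(list(map(hex, A)))    # ['0x100', '0x1', '0x3322']

    # Arrays of byte-strings are not swapped:
    print(np.array([b'ceg', b'fac']).byteswap())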

Use an index array to construct a new array from a set of choices.

Refer to numpy.choose for full documentation.

Return an array whose values are limited to [min, max] . One of max or min must be given.



Just as the introduction of wide field CCDs revolutionized the survey capabilities of optical and infrared telescopes, HI line astronomy is undergoing a similar renaissance with the advent of multi-beam receivers on the large single-dish telescopes, enabling blind HI surveys that cover wide areas. In particular, the upgrade of the surface of the Arecibo antenna in the mid-1970s initiated a new era of extragalactic 21 cm HI line studies which exploited the big dish's collecting area and superior ancillary instrumentation (low noise amplifiers; broadband, flexible multi-bit spectrometers). The addition in 2004 of the Arecibo L-band Feed Array (ALFA) has made possible new wide area surveys in galactic, extragalactic and pulsar research. The local extragalactic sky visible to Arecibo is rich, containing the central longitudes of the Supergalactic Plane in and around the Virgo cluster, the main ridge of the Pisces-Perseus Supercluster, and the extensive filaments connecting A1367, Coma and Hercules. With ALFA, the Arecibo legacy of extragalactic HI studies will continue, probing regimes untouched by other surveys and addressing fundamental cosmological questions (the number density, distribution and nature of low mass halos) and issues of galaxy formation and evolution (sizes of HI disks, history of tidal interactions and mergers, low z absorber cross section, origin of dwarf galaxies, nature of high velocity clouds). Here we briefly outline the main science objectives of the E-ALFA wide area high galactic latitude program, the Arecibo Legacy Fast ALFA (ALFALFA) survey. For further technical information, look at our publications page.

2.1 A Legacy Survey: HI in the Nearby Universe     The survey design and its strategy as proposed here have evolved from numerical simulations (cf. Giovanelli 2003; Masters et al. 2004), in which we use a cosmic density map provided by the PSCz density reconstruction (Branchini et al. 1999) gridded with 0.9375 h^-1 Mpc spacing in the inner 60 h^-1 Mpc, and at twice that value between 60 and 120 h^-1 Mpc; the map is smoothed with a Gaussian filter of 3.2 h^-1 Mpc. The density map is complemented by a peculiar velocity map, which allows us to infer more accurate estimates of the distances than those derived purely from redshifts. Two different HIMFs (those derived by Zwaan et al. 1997 and Rosenberg & Schneider 2002; the Zwaan et al. 2003 HIMF has values intermediate between the former two) are used to populate the map with HI "clouds", which we then proceed to "detect". HI sizes and velocity widths are assigned using empirical scaling relations obtained from our own HI survey data and Broeils & Rhee (1997), with realistic scatter and spectral baseline instability. Disk inclinations and pointing offsets are randomized. We have inspected a wide grid of survey parameters, including different scenarios of suppression of gas infall onto low mass halos due to reionization. Experiments made with the single pixel L-narrow receiver in 2003 and preliminary inspection of our recent A1946 precursor observations confirm the efficacy of these simulations in practical observing conditions (Giovanelli et al. 2005a, 2005b).

Our early simulations, based on past estimates of the HI Mass Function, predicted that ALFALFA would detect more than 16,000 objects. However, the actual areal density currently being achieved by ALFALFA, about 5 sources per square degree, with peaks 10-20 times higher in regions of groups and clusters, now suggests that the full ALFALFA survey may yield 30,000 extragalactic HI sources. We attribute this higher yield to the high quality of the data, made possible by its "minimum intrusion" observing strategy and the specialized data processing routines developed by the ALFALFA team specifically for the survey, including a Fourier domain signal extraction algorithm written by Cornell graduate student Amélie Saintonge (2007a; 2007b, Ph.D. thesis, Cornell University).

As of August 2007, signal extraction has been completed for about 15% of the survey area, with catalogs totaling some 4500 good quality HI detections in preparation for publication in the second half of 2007. As expected, ALFALFA is sampling a wide range of hosts from local, very low HI mass dwarfs to gas-rich massive galaxies seen to z ~ 0.06. HI spectra provide redshifts, HI masses and rotational widths for normal galaxies, trace the history of tidal events and provide quantitative measures of the potential for future star formation via comparative HI contents. As a blind HI survey, ALFALFA will not be biased towards the high surface brightness galaxies typically found in optical galaxy catalogs and moreover, in contrast to HIPASS and HIJASS, will have adequate angular and spectral resolution to be used on its own, without the need for followup observations to determine identifications, positions and, in many cases, characteristic HI sizes. The wide areal coverage of ALFALFA overlaps with several other major surveys, most notably the Sloan Digital Sky Survey (SDSS), 2MASS and the NVSS. The catalog products of ALFALFA will be invaluable for multiwavelength data mining by a wide spectrum of astronomers, far beyond those currently engaged in the ALFALFA survey itself. A key element of this program is to provide broad application, legacy data products that will maximize the science fallout.

2.2 The HI Mass Function and the "Missing Satellite Problem"     One of the principal discrepancies between cold dark matter (CDM) theory and current observations revolves around the large difference between the number of dwarf dark matter halos seen around giant halos in numerical simulations based on CDM and the observed dwarf satellite population in the Local Group (Kauffmann et al. 1993; Klypin et al. 1999; Moore et al. 1999b), referred to as the "missing satellite problem". The logarithmic slope of the faint end of the galaxy mass function predicted by CDM simulations is close to the value of α = -1.8 that arises analytically from the Press-Schechter formalism (Press & Schechter 1974; Bardeen et al. 1986). Because the mass function itself is difficult to determine directly, current efforts focus on estimation of the faint end of the optical luminosity function (LF) and, of direct relevance to this proposal, of the HIMF. By determining both, limits can be set on the number of low mass halos containing measurable stellar or gaseous components. The shape of the low mass end of the HIMF and its corollary, the cosmological mass density of HI, are important parameters in the modelling of the formation and evolution of galaxies.

The HIMF is the probability distribution over HI mass of detectable HI line signals in a survey sensitive to the global neutral hydrogen within a system. The most recent estimates of the HIMF have been presented by Zwaan et al. (1997; Z97), Rosenberg & Schneider (2002; RS02), Zwaan et al. (2003; Z03) and Springob et al. (2004). The latter is derived from a compilation of some 9000 optically selected galaxies, further restricted by HI line flux and optical diameter to a complete subsample containing 2200 galaxies. The other determinations are based on blind HI surveys and thus have no bias against the low luminosity and low optical surface brightness galaxies which may be underrepresented in optical galaxy catalogs. The Z03 HIMF is based on the HIPASS survey (Koribalski et al. 2004; Meyer et al. 2004), while the RS02 and Z97 HIMFs are both based on drift scan surveys conducted at Arecibo during the period of its recent upgrade. The faint end slopes of those determinations of the HIMF vary between -1.20 and -1.53, yielding extrapolations below M_HI = 10^7 M_sun that disagree by an order of magnitude, the RS02 HIMF having the steeper slope. All three blind HI surveys sample a lower mass limit just below M_HI = 10^8 M_sun, for H_0 = 70 km s^-1 Mpc^-1 (a value that will be assumed throughout, while for Virgo we adopt a distance D = 16 Mpc). No galaxies were detected by RS02 or Z97 with M_HI < 10^7 M_sun, while 3 are claimed by Z03, and only a small number of detections have M_HI < 10^8 M_sun.

As pointed out by Kravtsov et al. (2004), models of the formation of large scale structure must explain not only the number of satellites found in the Local Group, but also their clustering characteristics: whereas the dSphs are found concentrated within ~300 kpc of their host giant galaxies, the irregulars are spread throughout, both near and far from the giants (Grebel 2004). The origin of this segregation as well as the fundamental differences among the dwarf populations are thus important issues for galaxy formation theories. The possible variation in the HIMF with local galaxy density or velocity dispersion can provide a statistical measure of the impact of environment mechanisms on the gas as galaxies evolve.

Surveys using ALFA will explore two fundamental aspects of the HIMF: its low mass slope, which has a direct bearing on the "missing satellite problem", and its behavior with varying galaxy environment. To date, studies of the possible environmental dependence of the HIMF have been limited to comparisons of the HIMF derived for galaxies in the Virgo cluster with those in the field (Hoffman et al. 1992; Briggs & Rao 1993; RS02; Davies et al. 2004; Gavazzi et al. 2004) but suffer from poor statistics and incompleteness. The results marginally suggest that the HIMF in Virgo is missing the low HI mass dwarfs found in the field or is at least flatter at the faint end than the field HIMF.

The science program of the E-ALFA consortium as illustrated in the E-ALFA white paper has, as one of its main goals, the robust determination of the HIMF over a range of independent volumes characterized by varying cosmic density. No single survey is likely to explore all the relevant parameter space. Achieving adequate detection statistics for objects in the 10^6 - 10^8 M_sun range requires a balance of survey areal coverage and survey depth in order to sample adequate volume. Studying a wide range of environments likewise necessitates tradeoffs of depth and area. In combination with the deeper surveys proposed under the AGES (Arecibo Galaxy Environments Survey) program, ALFALFA will allow exploration of a wide range of possible HIMF scenarios. It will focus on studies of the lowest mass objects in the very nearby universe (M_HI < 10^7 M_sun, D < 10 Mpc) and will complement the HIMF determination of Springob et al. (2004) by sampling, at higher masses, across the range of local densities that characterize the rich clusters like A262, A1367 and Coma, their supercluster filaments and the voids between them.

We emphasize that since the lowest HI masses will be found only very locally, ALFALFA must cover a very large solid angle in order to survey adequate volume at small distances; the number of detections expected below M_HI = 10^7.5 M_sun depends on whether the HIMF follows Z97 or RS02. Both the legacy aspect and the local volume requirement thus dictate the need to survey 7000 deg^2.

2.3 Galaxy Evolution and Dynamics within Local Large Scale Structures     The large scale distribution of galaxies in the local universe is concentrated in a structure (Lahav et al. 2000) first recognized by de Vaucouleurs and today designated as the Supergalactic Plane. At its center, the Virgo cluster is the nearest rich cluster to us. Overall, the galaxy distribution in that direction has been shown to trace a filamentary structure (West & Blakeslee 2000; Gavazzi et al. 1999; Solanes et al. 2002) elongated along the line of sight. Galaxies in the cluster core are known to be HI-deficient due to interaction with the hot intracluster gas, while galaxies in the cluster periphery, foreground and background are not. Including its several principal concentrations, the cluster extends about 14° over the sky (Binggeli, Popescu & Tammann 1993). The solid angle subtended by this region samples the highest densities in the local Universe, and thus constitutes the obvious choice for the study of the HIMF in a high density environment. A region of comparable volume but low density, surveyed to comparable sensitivity, is required to provide a reference. The regions with lowest cosmic density at comparable distance are, unsurprisingly, in the anti-Virgo direction. Optimization of the Arecibo sky coverage and zenith angle dependence of sensitivity suggests an anti-Virgo region centered near 1.5 h in R.A. and +24° in Dec. This region includes a large section of the largest nearby cosmic "void": averaged over a solid angle of ~0.25 sterad, the anti-Virgo region between 0 and 3000 km/s is underdense by a factor of ~6 with respect to the Virgo region in the same distance range. The comparative study of the HI and other properties of the galaxies in these two regions will yield clues on the processes of environmental influence on galaxy evolution. HI contents will be compared, the dwarf population will be traced over wide ranges of cosmic density, and a first truly blind survey for HI tidal remnants will be made.

A wide area Virgo survey will provide a database of unprecedented breadth for the investigation of the origin of gas deficiency in that cluster which will complement nicely the targeted, higher spatial resolution HI line synthesis study of Virgo galaxies currently being undertaken with the Very Large Array (PI: J. Kenney). It will also improve the dynamical understanding of the cluster and its surrounding groups, as well as of the processes associated with the evolution of large-scale structure, by providing a rich redshift data base of low optical luminosity gas-rich dwarfs not only within the cluster core but also in its broad surroundings.

Virgo is the only environment which is both near enough that distance estimates based on secondary methods can distinguish between infall and expansion regimes in the region around the cluster (the so-called "triple-valued region") and also massive enough to possess an extensive, well-populated infall region. In the case of Virgo, the infall domain extends 28° from the center of the cluster, so that a survey that can identify objects at turnaround must cover a very wide area. While the SDSS may provide the required photometric parameters for applications of the Tully-Fisher distance method, its 3 arcsec fibers cannot provide adequate rotational width measures. Thus ALFALFA, in combination with the SDSS database, will provide the basis for a unique study of the galaxy dynamics both in and around the Virgo cluster.

Other groups within the Local Supercluster will also be targets, including the Canes Venatici I Group at about 5 Mpc, the Leo I group at about 10 Mpc, the "groups of dwarfs" (Tully et al. 2002) around UGC 3974 (D = 5.4 Mpc) and NGC 784 (D = 4.4 Mpc), and the Canes Venatici II and Coma I groups at 10-20 Mpc. There are ~20 additional groups at velocities less than 1000 km/s which we should be able to study in great detail. Models of the structure of the Local Supercluster are being developed by KLM and by IDK and collaborators and will both contribute to and benefit from the ALFALFA survey.

Of particular note, the Leo I (M 96) Group offers an attractive opportunity for exploring both the optical luminosity function and the HIMF in an intermediate density environment. Unlike the Local Group, Leo I is dominated by early type galaxies, yet it is still characterized by a low velocity dispersion. For 19 galaxies with measured redshifts, the dispersion in radial velocity is 130 km/s. Two of the brightest galaxies in Leo I, NGC 3379 and NGC 3384, are surrounded by a 200 kpc ring of HI gas (Schneider et al. 1983). Two possible scenarios for the origin of this cloud have been proposed. Rood & Williams (1985) suggested that the ring resulted from a collision between NGC 3384 and NGC 3368 some 500 Myr ago. After the discovery of several additional gas features, Schneider (1985) noted that the clouds appear to be stable against tidal disruption and proposed that they instead represent a remnant of the primordial gas cloud from which all of the group members formed. Recently uncovered kinematic signatures suggest that all of the brighter galaxies have been involved in past interactions (Sil'chenko et al. 2003). Thus the Leo I region presents an interesting environment in which to study differences among the low luminosity dwarf populations: a region of low velocity dispersion but containing a local density enhancement that supports the presence of bright E/S0 galaxies. The methodology developed by IDK & VEK to identify nearby dwarfs has already uncovered considerable numbers of faint gas-rich members of other nearby groups (e.g., Karachentseva & Karachentsev 1998; Karachentseva et al. 1999, 2001; Makarov, Karachentsev & Burenkov 2003). To probe the dI population found by Karachentsev & Karachentseva (2004), a very wide field (> 120 square deg), as provided by ALFALFA, must be studied.

2.4 The Extent and Origin of HI Disks     Extended gas disks around galaxies represent a reservoir for future star formation activity. The study of the distribution of HI relative to that of the optical (stellar) disk allows the investigation of the relationship of gas to star formation and the discrimination of models of the origin of the observed truncation of stellar disks at 3 - 5 optical disk scale lengths based on gas density thresholds (Fall & Efstathiou 1989) versus those related to the maximum protogalaxy specific angular momentum (van der Kruit 1979). In contrast to other major wide area surveys such as HIPASS and HIJASS, some 500 gas-rich galaxies will be resolved by ALFA's 3.5 arcmin beam, allowing a quantitative measure of their characteristic HI sizes (Hewitt et al. 1984) and the derivation of the HI diameter function. In combination with optical photometry, ALFALFA will determine the fraction of galaxies with extended gas disks and enable studies of their host galaxies, their environment, morphology and the role of gas in their evolution. Of particular note, we hope to discover more extremely extended gas disks, such as those found in DDO 154 (Krumm & Burstein 1984), NGC 4449 (Bajaja et al. 1994), NGC 2915 (Meurer et al. 1996) and UGC 5288 (van Zee 2004), and extensive tidal features such as those seen in the Leo Triplet (Haynes, Giovanelli & Roberts 1979).

Additionally, ALFALFA will resolve extended HI in the vicinity of the ~100 large (D_UGC > 5 arcmin) nearby galaxies that may have been missed by interferometric observations, allowing for a census of the neutral ISM on all spatial scales. In particular, the Arecibo telescope provides an ideal probe of the short spacings missed by the VLA in its more compact configurations. For the ALFALFA parameters t_s = 28 s/beam and channel bandwidth of 5 km/s, the antenna temperature detectable at the 5σ limit is T_A = 0.13 K; for a source with a spectral width of 25 km/s which fills the beam, this limit corresponds to a minimum detectable column density of N_HI.

The column density regime probed by ALFALFA will characterize the broad-scale emission at the edges of galaxy disks, which are hypothesized to truncate at roughly the same value of N_HI (e.g. Corbelli & Salpeter 1993; Maloney 1993). The existence of a smooth HI component similar to that in DDO 154 (Hoffman et al. 2001) would also extend rotation curves further into the dark matter halo, allowing for more robust determination of the halo shape and concentration to contrast with cold dark matter paradigm predictions on galaxy scales (e.g. Dutton et al. 2004; Barnes et al. 2004).

ALFALFA will provide a census of the abundance and distribution of HI disks, providing the low redshift link to the damped Lyman α (DLA) absorption seen in quasar spectra. At higher redshift, the neutral gas mass traced by DLA absorption makes a greater contribution to the luminous baryonic mass than it does at the current epoch. ALFALFA will provide important clues to such gas disk evolution. While the HIMFs derived from all surveys to date suggest that large galaxies contribute the majority of the local HI mass density, it also seems that massive galaxies do not dominate the cross section for DLA absorption (Rosenberg & Schneider 2001; Rao & Turnshek 1998). Recent observations with the GMRT of resolved low z DLAs similarly have found that the absorption is associated with HI masses less than those characteristic of L* galaxies (Chengalur & Kanekar 2002). To understand the DLA cross section at low redshifts requires study of the population of low mass but HI rich galaxies that are missing from optical catalogs. Followup optical studies of ADBS counterparts by JLR and JJS demonstrate that the earlier survey detected galaxies with absolute magnitudes of -16 at distances of 70 Mpc. ALFALFA should detect such objects in very large numbers, allowing not only robust estimates of their contribution to the local HI cross section, but also a measure of their clustering correlation amplitude and scale.

2.5 The Nature of High Velocity Clouds     In addition to providing important clues on the extents and kinematics of HI gas around other galaxies, ALFALFA will allow a wide area study of gas in and around the Milky Way as a complement to, and in conjunction with, the G-ALFA surveys. In particular, ALFALFA will explore the nature of the local high velocity clouds (HVCs) of neutral hydrogen which may represent gas accretion onto our Galaxy (e.g., Tripp et al. 2003) but which also have been claimed to be more distant, the "missing satellites" in the Local Group (Blitz et al. 1999; Braun & Burton 1999). Previous surveys of HVCs have been of substantially lower resolution (15.5 arcmin at best) and/or were unable to trace the connection between HVCs and Galactic HI emission (Putman et al. 2002; Wakker & van Woerden 1991). ALFALFA will trace important high-velocity structures, such as the northern portions of the Magellanic Stream and Complex C, at 4× better resolution than HIPASS. It will also be 8× more sensitive to unresolved small clouds, or ultra-compact HVCs (if any exist with central neutral column density above 10^20 cm^-2). This will allow us to determine if HVCs are interacting with a diffuse halo medium (e.g., Brüns et al. 2000; Quilis & Moore 2001) and/or if they are bona fide dark matter-dominated Galactic satellites (e.g., Moore et al. 1999a).

The recent discovery of an extended, faint population of HI clouds within 50 kpc of M31 by Thilker et al. (2004) suggests a similar search for clouds around M33. At the Andromeda distance, the Thilker et al. clouds have masses between 10^5 and 10^7 M_sun. While Arecibo cannot reach as far north as M31, ALFALFA will cover part of the region containing the clouds discovered by Thilker et al. and their possible extension toward the region around M33. Wright's cloud (Wright 1979; Braun & Thilker 2004) was detected easily in our A1946 observations in Aug/Sep 2004.

For gas with a spin temperature of ~100 K and a velocity width of 100 km/s, the limiting peak optical depth τ_peak corresponds to N_HI ≈ 3.6 × 10^20 cm^-2, with the obvious condition that narrower widths would probe lower column densities. Using the values for τ_peak and ΔV given in Table 1 of Vermeulen et al., we estimate that ALFALFA will be able to detect all but three of the lines found by those authors, assuming a frequency range match. ALFALFA will target low redshift absorbers not associated with the radio sources themselves.

A major difficulty with absorption studies is spectral baseline determination. A method commonly used averages sources of similar strength observed at comparable telescope configuration (possible for the limited azimuth drift mode considered here). We expect that standing waves will be broader than expected HI absorption lines, and that most RFI will be spectrally unresolved. To identify absorbers, we will establish and follow a simple set of rules to assess whether a given spectral feature is RFI or real absorption. This aspect of the project will require extra effort, but will yield cosmologically interesting statistics based on such a "blind" HI absorption survey. Among others, JKD, EMM and CMS are interested in pursuing the absorption line study.

2.7 A Blind Survey for OH Megamasers at 0.16 < z < 0.25     OH Megamasers (OHM) are powerful line sources observed in the L band, arising from the nuclear molecular regions in merging galaxy systems. Approximately 100 such sources are known to date, half of which were discovered by JKD's Ph.D. dissertation work (e.g. Darling & Giovanelli 2002) at Arecibo. Several of them are observed to have variable spectral features allowing superresolution and insight into the source structure and physics. Observations of OHMs hold the potential for tracing the merger history of the Universe since the sources are associated with merging galaxies.

Comparison with Previous Surveys:

HIPASS and HIJASS cover the same area of sky that is visible at Arecibo, HIPASS south of Dec. = +25° and HIJASS further to the north. However, in addition to the large increase in sensitivity, ALFA surveys provide two direct benefits over the other two: improved angular and velocity resolution. The significantly higher angular resolution (FWHM ~3.5 arcmin for ALFA versus 12 arcmin for HIJASS and 15.5 arcmin for HIPASS) will help to limit the confusion of sources that plagued those other surveys. The HIPASS follow-up needed is enormous and therefore has been limited to the highest flux sources. It will be years before the sources are followed up (if ever). An ALFA survey will be able to do science with the survey data directly, without time-consuming interferometric follow-up. Additionally, the higher velocity resolution of ALFA will be useful in several ways. First, detecting edge-on galaxies with peak fluxes near the noise limit: the edge of a double-peaked spectrum is much sharper at higher velocity resolution, which should make it easier to automatically detect these sources. Second, the higher velocity resolution will allow more accurate velocity and velocity width measurements, without the need for follow-up; even the narrowest sources will be detected over several channels. Third, since most RFI is narrow band, the higher frequency resolution will be extremely useful in identifying and excising RFI.

The HIJASS survey has a further serious limitation: very bad RFI in the frequency band corresponding to cz ≈ 4500 - 7500 km/s (within the range of much of the interesting large scale structure, e.g., Pisces-Perseus, A1367-Coma-Great Wall). In addition, HIJASS is not scheduled to do any more observing in the Arecibo range (a 4° × 4° region in Virgo and a few other areas have been covered at this point) for the next few years.

The principal advantage that an Arecibo survey will have over previous surveys is depth and the number of independent volumes surveyed. Table B.1 below includes a comparison of the major surveys, including those discussed here. For comparative purposes, the rms noise per beam quoted for each survey has been scaled to a velocity resolution of 18 km/s, the resolution of HIPASS.

a after Hanning smoothing.
b per beam, for W = 18 km/s. Note: ADBS gives 3-4 mJy for 7s, scaled to 12s and 18 km/s.
c at 10 Mpc, for 5σ detection with W = 30 km/s.
d Gap in velocity coverage between 4500-7500 km/s caused by rfi.
e Assumes second generation backend.

References:
1: Zwaan et al. (1997)
2: Rosenberg & Schneider (2002)
3: Braun et al. (2003)
4: Kraan-Korteweg et al. (1999)
5: Lang et al. (2003)
6: Davies et al. (2004)
7: Minchin et al. (2003)
8: Henning et al. (2000)
9: Current HIPASS survey, to Decl. < +25°: Meyer et al. (2004), Zwaan et al. (2004)
10: Final HIPASS survey (including northern extension)
11: Freudling et al. AUDS precursor proposal
12: Davies et al. AGES precursor proposal


Pixels

Fun fact: Wikipedia tells me that the first use of the word pixel, which simply comes from a shortening of ‘picture element’, was by JPL scientists in the 1960s.

One very important point to understand is that higher resolution does not necessarily equate to more pixels. This is because ‘pixel’ can refer to two different concepts in astronomy. One usage deals with image capture [hardware] – that is, the physical cells that capture light in a detector. (They can also be lovingly referred to as photon buckets.) Squeezing more (smaller) physical pixels into a given area on your detector will typically give you sharper resolution. Think megapixels in cameras.

The other meaning of pixel deals with image processing and image reconstruction [software]. Pixels in the sense of a digital image refer to the data grid that represents your image information. You can take your favorite cat photo from the internet and try to scale it in Photoshop to twice the size – giving it more pixels – but you’re not adding any new information to the new pixels that wasn’t already there, so when you zoom in the overall picture will still look just as blurry as the original. (You can of course apply some clever interpolation algorithms to give the pixels the appearance of being smoother, but that’s a different issue.) You can combine separate images that are slightly offset to make a new image which has slightly finer resolution than any of the originals – the most common way to do this is called “drizzling”, and is used often in Hubble images, for example. In radio astronomy (and sub-mm and some infrared instruments), the detector is not directly capturing photons as particles like a CCD camera does. Usually it’s detecting light as a wave, and the image is reconstructed later from the raw voltages. In practice, this means that a scientist could reconstruct an image with arbitrarily small digital pixels – just as you could scale your cat photo to an arbitrarily large size – but the extra pixels wouldn’t necessarily contain any new information.

To expound a little more, taking the example of consumer digital cameras, a sensor that boasts 20 Megapixels is regarded as higher resolution than one with, say, 15 Megapixels. This is only part of the story though, as it depends on things like the physical size of the pixels – it’s conceivable to have these two camera sensors made from the same chip of pixels, just cut to separate 15 and 20 Mp areas. Just considering the sensors themselves here and ignoring the other camera optics, that pair of chips would have the same inherent ability to resolve detail but different total areas. But let’s assume now that they have the same overall area. It makes sense that, say, a one inch square chip which contains 1000 pixels would produce a much lower resolution photo than a one inch square chip with 1 million pixels.
However, you can’t continue the trend forever – engineering issues aside, at some point the pixels would become smaller than the wavelength of the light they are trying to capture. Beyond that limit, squishing in more pixels won’t help you.

Besides, you could open up your photo in some image processing software and divide each pixel into, say, four (or 9, 25, etc…) equal parts if you wanted to. That would make your image have MANY more pixels. But that wouldn’t be increasing your resolution at all, because those new smaller pixels are not adding any information.

Let’s reverse that notion – if you are now faced with the task of observing light of longer wavelengths, a single photon can more easily bleed over multiple pixels if they remain small. Now an infinitesimally small point of light in the real world will register in several pixels, according to some pattern which depends on the optics. This is the PSF or beam pattern mentioned in other posts. (This description is vastly simplifying the whole process, of course – for starters, the detector technology of ‘pixels’ is quite different between optical, FIR, and radio telescopes. Please consult a proper optics or observational astronomical text for a more complete picture.)

So, let’s back up a step. In a basic sense, the sensor in your typical camera is recording the incoming light of whatever it’s pointing at. However, if the camera is out of focus or there are defects in the optics, no matter how many pixels you have, the image will appear blurry. And if your pixels are larger than the innate scale for detail afforded by your optics, your image will be ‘blurrier’ than if you were to employ smaller pixels. To summarize, the final resolution of your image is determined by a complex interplay between your *entire* optics system, including lenses (possibly interferometer spacing) and recording hardware.

Let’s look at an example of increasing the number of pixels in an image. Below on the left is an imaginary faraway source (top panel) observed with some telescope. The resolution (beam FWHM) is shown by the dotted circle – in this case the true source would be a point source with regard to the telescope’s resolution. The final image is in the bottom panel. Note that obviously no fine detail about the source can be determined from this image, such as whether there are 2 or 3 separate objects contained, etc. Now look at the image on the right side – this one has been produced by processing the raw signal onto a finer pixel grid. (You can often do this with radio data, for example.)

Do you gain anything by doing this? Well, note that you’re still not able to determine any fine detail about the true source – on that count there is no improvement over the coarser image on the left. If you are trying to get a precise sum of the flux in a circular aperture around the source, though, this may help you out a bit, as the smaller pixel sizes would fill the round aperture more fully than larger pixels. The downside of having smaller pixels in this case would be that processing time could increase dramatically.

Now let’s move on to the discussion of how to compare images from different instruments/settings. The tools we will use are convolution and regridding. These can sometimes be related (such as when, in radio interferometer data reduction, you choose the pixel size so that many pixels cover the beam area), but remember they are separate concepts.
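As a concrete sketch of those two operations in Python (using astropy.convolution for the smoothing and a crude block average for the regrid; the kernel width and grid factors are arbitrary example values):

    import numpy as np
    from astropy.convolution import Gaussian2DKernel, convolve

    image = np.random.default_rng(0).normal(size=(120, 120))

    # Convolution: degrade the image to a coarser effective resolution.
    smoothed = convolve(image, Gaussian2DKernel(x_stddev=3))

    # Regridding: here a simple 2x2 block average onto a coarser grid.
    # (Real pipelines interpolate onto the target world-coordinate grid.)
    regridded = smoothed.reshape(60, 2, 60, 2).mean(axis=(1, 3))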




Until now we have deliberately avoided the issue of calibrating your data. This means that your reduced data, up until this stage, are in units of volts. Since the calibration varies from night to night and even within a single night, one should generally calibrate individual maps before coadding to achieve the best result. So how does one convert instrumental units into a physical measure of luminosity or surface brightness? The solution, as in most of astronomy, is to look at a source of known brightness in exactly the same way, i.e. using the same mode of observing for your target as well as for your calibrator(s).

In the optical and infra-red the standard sources are almost always point sources, standard stars, and the point spread function is well defined. In the submillimetre things are more complicated. Our primary calibrators, Mars and Uranus, are not point sources, and the point spread function is very extended and strongly wavelength dependent. The JCMT beam at 450 μm is actually much worse than that of the ill-fated Hubble before its mirror was corrected.

The way we calibrate may differ depending on whether we observe point sources or extended sources. For point sources we can ignore the error beam and do simple aperture photometry; for extended sources we normally have to calibrate in Jy/beam and characterize our beam profile. In the following we first go through how to characterize the beam profile, then the case of calibrating in Jy/aperture, and finally we proceed to the more general case of calibrating in Jy/beam, which is valid for all cases.

7.1 Analyzing beam maps

The calibration differs for jiggle maps and scan maps, and it is also, although more weakly, dependent on chop throw. The relatively large difference in calibration for scan maps is due to the different chop waveform used for scan maps. The difference between a jiggle map with a 120” chop throw compared to one with a 60” chop throw is mostly dictated by duty cycle and to a lesser extent by changes in the beam. The beam is slightly broader with a 120” chop throw, but the duty cycle (time spent on source) is also slightly lower; both of these factors decrease the efficiency for large chops.

In the following example we are going to look at beam maps of Uranus taken in stable night time conditions during three nights in late May, 2001. These maps have been extinction corrected; we have blanked out bad bolometers and corrected each map for pointing drifts. There are slight calibration differences from night to night, but for this purpose the difference is negligible. The final coadded beam maps were rebinned in azimuth and are shown in the accompanying figure.

A quick way to diagnose whether the beam profile looks reasonable is to use Kappa's psf. The task psf fits a radial profile, A × exp(−0.5 × (r/σ)^γ), where r is calculated from the true radial distance of the source allowing for ellipticity, σ is the profile width, and γ is the radial fall-off parameter. psf can also fit a standard Gaussian profile. However, the JCMT beam is better described by a two or three component Gaussian (main lobe plus inner and outer error lobes), and psf therefore overestimates the Half Power Beam Width (HPBW). If we specify norm=no, psf will also return the fitted peak value of the source.


This produces the plot shown in the accompanying figure. The value of FWHM is 14.72” across the minor axis. The geometrical mean is simply the mean axis ratio times 14.72”; the measured FWHM (including the broadening from Uranus) is therefore predicted to be 15.4” if we use psf. However, if we fit a double Gaussian to the same data set we obtain 15.56” × 14.30” with a position angle of 85° for the main beam, and 55.8” × 49.6” for the inner error lobe. To find the true HPBW we need to remove the broadening caused by Uranus being an extended source. Using the program Fluxes (just type fluxes at the command line and answer the prompts) we find that Uranus had a diameter (W) of 3.54” that day. We convert the measured FWHM, θ_m, to the true HPBW of the telescope, θ_A, using the equation

θ_A = √( θ_m² − (ln 2 / 2) × W² )    (4)

where W is the diameter of the planet. In this case we get 14.5” for the HPBW and ∼53” for the near (inner) error beam. If we do the same for 450 μm we obtain 7.8” for the HPBW and 34” for the near error beam. These agree with the nominal values for the telescope.
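Equation (4) is easy to script; a minimal sketch in Python (θ_m here is an illustrative measured FWHM, not a value quoted in the text):

    import numpy as np

    def true_hpbw(theta_m, W):
        # Eq. (4): remove the broadening from a planet of diameter W
        return np.sqrt(theta_m**2 - (np.log(2) / 2.0) * W**2)

    # Illustrative 850 um numbers: measured FWHM ~14.65", Uranus W = 3.54"
    print(true_hpbw(14.65, 3.54))   # ~14.5", the quoted HPBW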

7.2 Calibrating in Jy/(solid angle)

If your maps show simple source morphology and you are only interested in integrated flux densities, the simplest approach is to calibrate your map in Jy/aperture for the aperture size you want to use. The listing from Fluxes also gives us the total flux; S_tot for Uranus at 850 μm is 67.9 Jy. Let us first see how we can use this value to calibrate our image in terms of Jy/arcsecond². In order to do this we need to derive a value for the Flux Conversion Factor (FCF), which is in units of Jy/arcsecond²/V. To do this we first need to work out the sum of the pixel values (V_sum) in an aperture of radius r. We then find that the FCF is given by

FCF (Jy/arcsecond²/V) = 67.9 / (V_sum × a)    (5)

where a is the pixel area in square arcseconds. The easiest way to get the integrated signal in an aperture is to use Kappa's aperadd. For our 850 μm map of Uranus we derive V_sum for a set of different circular apertures and compute the FCFs.

Radius (arcseconds)        20      30      40      60      120
V_sum (V)                45.75   60.76   64.89   70.07   77.08
FCF (Jy/arcsecond²/V)     1.48    1.12    1.05    0.97    0.88
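The same numbers can be reproduced with a few lines of Python (a sketch; the pixel area a is assumed here to be 1 arcsecond², which is what the tabulated FCFs imply):

    S_tot = 67.9   # Jy, total flux of Uranus at 850 um from Fluxes
    a = 1.0        # pixel area in arcsec^2 (assumed for this sketch)

    v_sum = {20: 45.75, 30: 60.76, 40: 64.89, 60: 70.07, 120: 77.08}
    for radius, V in v_sum.items():
        fcf = S_tot / (V * a)   # eq. (5), in Jy/arcsec^2/V
        print(f"{radius:4d}  {fcf:5.2f}")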

We can see from this table that the FCF is dependent on the aperture size that is used [1]. This is because there is significant signal in the sidelobes and extended error beam of the telescope. Clearly, then, the value of the FCF can be somewhat ambiguous. What you have to remember is that if you are doing photometry of an extended object, you should use a value for the FCF derived for the same aperture.

If you need to use small apertures, i.e. the size of your HPBW, you will need to use a point source or point-like source as a calibrator. Flux densities for our secondary calibrators for a 40” aperture are given by Jenness et al. [14]. However, several of our secondary calibrators are not point sources. If you end up with, for example, IRC+10216 and IRAS 16293−2422 or Mars near perihelion as your only calibrators during your run, you are in trouble. You may be able to use a large aperture to recover all the flux and use the ratios between different apertures derived for a point source. But you may as well bite the bullet and calibrate in Jy/beam.

7.3 Calibrating maps in Jy/beam

If your images show a lot of structure, you will need to calibrate your maps in Jy/beam. This is true for most observations of dark and molecular clouds, young supernovae, protostars or young stars, and even for nearby galaxies. However, if you are only dealing with faint point sources and low S/N maps, you probably need to integrate over the map. If this is the case, it does not matter whether you calibrate in Jy/beam or Jy/aperture; both methods will give the same result. Since Starlink packages do not deal with Jy/beam, it may appear more complicated to integrate over an image calibrated in Jy/beam, but the only difference is that one needs to normalize the integral over the source by the beam integral, ∫F(Ω) dΩ, where F(Ω) is the normalized power pattern of the telescope. For a Gaussian beam the beam integral is simply 1.134 × θ_A². Radio astronomical reduction packages of course do this normalization automatically. Since the JCMT beam is not a simple Gaussian beam, we need to account for the error beam, which is equivalent to having an FCF which varies with aperture when we calibrate in Jy/aperture. We discuss how this is done towards the end of this section.
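In code, the normalization looks like this (a minimal sketch assuming a purely Gaussian beam; map_jybeam is a 2-D array calibrated in Jy/beam, and pixscale is the pixel size in arcseconds):

    import numpy as np

    def integrated_flux(map_jybeam, pixscale, theta_A):
        # Gaussian beam integral: 1.134 * theta_A**2 in arcsec^2,
        # i.e. pi / (4 ln 2) * theta_A**2
        beam_area_pix = 1.134 * theta_A**2 / pixscale**2
        return map_jybeam.sum() / beam_area_pix   # total flux in Jy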

To calibrate in Jy/beam we have to know the beam size. Ideally we would derive both the flux density conversion factor and the beam size, θ_A, from planet observations. If there are no planets available, we can use one of the secondary calibrators. To determine the beam size at 850 μm it is usually sufficient to make a weighted average from our pointing observations during the run if we don't have a planet observation or a point-like secondary calibrator, but for 450 μm we need a planet or a secondary calibrator. All JCMT secondary calibrators are directly calibrated in Jy/beam. In this case the FCF is simply the quoted flux divided by the peak signal of the source.

7.3.1 Calibrating on Planets

For a planet we have to account for the loss of signal due to the coupling to the beam, because all planets used for calibration are extended relative to the JCMT beam. For our Uranus data the flux density per beam, S_beam, is therefore the total flux density S_tot divided by the coupling of the planet to the beam, given by:

K = x² / (1 − e^(−x²))    (6)

where x is

x = W / (1.2 × θ_A).    (7)

The FCF, in Jy/beam/V, is therefore

FCF (Jy/beam/V) = S_beam / V_peak    (8)

For 850 μm we find K = 1.021 for θ_A = 14.5”, which gives S_beam = 66.5 Jy/beam. The peak signal that we found for our high S/N Uranus map is V_peak = 0.2477 V, giving an FCF = 268.5 Jy/beam/V. This FCF applies to jiggle maps with a 120” chop throw. If we do the same for our jiggle maps of Uranus with a 60” chop throw, we derive FCF = 245.2 Jy/beam/V, i.e. a map with a 60” chop throw is ∼10% more efficient than one with a 120” chop throw. Even though Jenness et al. [14] found no difference in FCF as a function of chop throw when calibrating in Jy/aperture, we do find one: it is smaller than when calibrating in Jy/beam but still noticeable, amounting to 6% for a 40” aperture.
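Equations (6)-(8) in Python, reproducing the 850 μm numbers above:

    import numpy as np

    def planet_fcf(S_tot, W, theta_A, V_peak):
        x = W / (1.2 * theta_A)              # eq. (7)
        K = x**2 / (1.0 - np.exp(-x**2))     # eq. (6), planet-beam coupling
        S_beam = S_tot / K                   # Jy/beam
        return S_beam / V_peak               # eq. (8), FCF in Jy/beam/V

    # Uranus at 850 um: S_tot = 67.9 Jy, W = 3.54", theta_A = 14.5",
    # V_peak = 0.2477 V  ->  FCF ~ 268.5 Jy/beam/V
    print(planet_fcf(67.9, 3.54, 14.5, 0.2477))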

7.3.2 Using secondary calibrators

If we use a secondary calibrator to calibrate our maps, it is even simpler. We just take the quoted flux value, S_beam, from the secondary calibrator page and divide it by the peak signal in our map of the same calibrator. If the map of the calibrator has poor S/N, we may want to fit a Gaussian to the source to get a more accurate measure of the peak signal.

7.3.3 How do we extract information from a map calibrated in Jy/beam

Analyzing maps calibrated in Jy/beam is easy, especially if we want to deduce flux densities for point sources or compact sources, even when the source is embedded in a cloud with strong extended emission. For a point source the peak flux of the source is the same as the total flux, corrected for any background emission. For an extended source we need to measure the FWHM and correct it for the measured HPBW of the telescope. We normally do this by fitting a double Gaussian, one for the source and one for the background. At 850 μm the fitted peak signal minus background, S_peak, is now the peak flux density measured in Jy/beam. From the fitted Full Width at Half Maximum (FWHM) we can derive the true FWHM, θ_s, by deconvolving with the measured HPBW, θ_A. This is trivial, because now we can assume a Gaussian source and a Gaussian beam. Once we know the source size θ_s, we multiply the peak flux by the correction factor we derive from the size, i.e. for a spherically symmetric source of size θ_s the total flux S_tot is simply

S_tot = S_peak × (1 + (θ_s / θ_A)²)    (9)
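A sketch of that step in Python; for Gaussians the deconvolved size follows from θ_s = √(θ_m² − θ_A²), where θ_m is the fitted FWHM (the 20” source below is an illustrative value):

    import numpy as np

    def total_flux(S_peak, theta_m, theta_A):
        theta_s = np.sqrt(theta_m**2 - theta_A**2)      # deconvolved source size
        return S_peak * (1.0 + (theta_s / theta_A)**2)  # eq. (9)

    # e.g. a source fitted with FWHM = 20" in a 14.5" beam:
    print(total_flux(1.0, 20.0, 14.5))   # ~1.9 x the peak flux density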

For 450 μm the error beam amplitude is no longer negligible, but when we fit a double Gaussian, the error beam will blend in with the extended cloud emission, i.e. it adds into the background level; alternatively we may fit the source with a single Gaussian plus a second order surface, or whatever best approximates the background in a limited area around the source. From our analysis of the 450 μm beam maps of Uranus, we find that the combined error beam amplitude is of the order of 5% of the peak amplitude, and we should therefore multiply the peak signal by 1.05 before applying a source size correction (see e.g. Weintraub et al. [17]).

To find integrated intensities over large areas is more complicated, because now we need to correct for the error beam pickup, which depends on the area we integrate over. This is equivalent to the varying FCF as a function of aperture that one has to account for if the map is calibrated in Jy/pixel, but with the map calibrated in Jy/beam it is much easier to separate compact sources and extended emission. To determine the excess emission from the error beam, we again have to go back to our beam map. If we calibrate our 850 μm map in Jy/beam and then integrate over a 120” circular aperture, we find that the flux we derive is 86.8 Jy, while we know that the total flux of Uranus is only 67.9 Jy. We therefore have to scale our derived total flux density by the ratio of true flux density to measured flux density (for our calibrated planet map), which in this case is 0.78. At 450 μm the situation is much worse. Even though the amplitude of the error lobe is still low, its area is large: if we integrate over the same 120” circular aperture in our calibrated 450 μm beam map we derive 415.6 Jy, while the total flux density from Fluxes is only 179.3 Jy. In this case our scaling factor is 0.43, i.e. we pick up more emission in the extended error beam than we do in the main beam.

For careful work, you may therefore want to deconvolve your SCUBA maps. This becomes especially important if you want to ratio the 450 and 850 μm maps, because if you want to smooth the 450 μm map to the same resolution as the 850 μm map, you first have to remove the error beam. For examples of how this can be done, see e.g. Hogerheijde and Sandell [13] or Sandell and Weintraub [16].

[1] During this time period SCUBA had reduced sensitivity due to a problem in the optics, affecting primarily the 850 μm array. Normally you would expect to find an FCF for a 40” aperture of 0.84; see Jenness et al. [14].