Last modified: 17 October 2019

URL: https://cxc.cfa.harvard.edu/csc/proc/index.html

Chandra Source Catalog 2.0 Processing


Version 2.0 of the CSC is created by processing each Chandra dataset with a series of automated data analysis pipelines. Collectively, the pipelines are known as "Level 3 Processing" and the data products reflect that in their filenames—e.g. the event file suffix is evt3.fits. For more on this nomenclature, see Chandra Standard Data Processing, which also describes the Level 1 and 2 Chandra data products.

Pipeline overview

This page provides an overview of the various pipeline stages. More details on a particular pipeline can be found by following the links from each of the headings below:

  1. Observation Selection
  2. Pre-Calibrate/Pre-Detect Pipeline
  3. Fine Astrometry Pipeline
  4. Calibrate Pipeline
  5. ComboDet Pipeline
  6. Source Validation Pipeline
  7. MLE Pipeline Run 1
  8. Rebundle
  9. MLE Pipeline Run 2 (Recenter)
  10. Stacker Pipeline
  11. Master Match Pipeline
  12. Source Properties Pipeline
  13. Convex Hull Source Properties Pipeline
  14. Limiting Sensitivity Pipeline

A number of common terms and concepts are used throughout the pipelines.

There are two different types of extended sources in CSC 2.0, each associated with a different characterization approach. Sources that are small enough to be characterized by fitting them with an elliptical source model (after detection using either wavdetect or mkvtbkg) can be extended if their deconvolved size (after accounting for the Chandra PSF) is larger than 0. We call these simply 'extended sources'. On the other hand, there are sources that are larger and more irregular in shape, and cannot be described by an elliptical model. These sources are detected as large polygons using the mkvtbkg algorithm, and later characterized by transforming the polygons into convex hulls that surround the extended emission. We therefore call these sources 'convex hull sources' (CHS).
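
As a rough illustration of the convex-hull step (not the pipeline's actual implementation), the following Python sketch wraps an invented detection polygon in its convex hull using scipy.spatial.ConvexHull:

    import numpy as np
    from scipy.spatial import ConvexHull

    # (x, y) vertices of an irregular detection polygon, in sky pixels.
    # The values are invented for illustration.
    polygon = np.array([
        [10.0, 12.0], [14.5, 11.0], [18.0, 15.5],
        [16.0, 20.0], [11.0, 19.0], [13.0, 15.0],
    ])

    hull = ConvexHull(polygon)
    hull_vertices = polygon[hull.vertices]   # counter-clockwise hull vertices

    print("hull area:", hull.volume)         # in 2D, .volume is the enclosed area
    print("hull vertices:\n", hull_vertices)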

The CSC 2.0 release is the first to include detection and reporting of the properties of large X-ray sources. The convex-hull sources should therefore be considered a preliminary release.


Observation Selection

The Observation Selection page describes which observation intervals (OBIs) are chosen for catalog processing.

Each observation interval is assigned to a 'stack' such that all coaligned (within 1 arcmin) observations from the same instrument (ACIS or HRC) are in the same stack and can therefore be processed as a group. A stack may contain one or more observation intervals, and multiple stacks can cover the same part of the sky.
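
A minimal sketch of this grouping rule, assuming a simple greedy pass over the OBIs (the aimpoint coordinates, OBI labels, and dictionary fields below are invented):

    from astropy.coordinates import SkyCoord
    import astropy.units as u

    # Hypothetical OBIs with their instruments and aimpoints (degrees).
    obis = [
        {"obi": "00635_000", "instrument": "ACIS", "ra": 187.2779, "dec": 2.0524},
        {"obi": "01712_001", "instrument": "ACIS", "ra": 187.2781, "dec": 2.0529},
        {"obi": "05932_000", "instrument": "HRC",  "ra": 187.2780, "dec": 2.0525},
    ]

    stacks = []   # each stack: instrument, a reference position, and member OBIs
    for obi in obis:
        pos = SkyCoord(obi["ra"] * u.deg, obi["dec"] * u.deg)
        for stack in stacks:
            # Same instrument and coaligned within 1 arcminute: same stack.
            if (stack["instrument"] == obi["instrument"]
                    and pos.separation(stack["center"]) < 1 * u.arcmin):
                stack["members"].append(obi["obi"])
                break
        else:
            stacks.append({"instrument": obi["instrument"],
                           "center": pos, "members": [obi["obi"]]})

    for stack in stacks:
        print(stack["instrument"], stack["members"])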

Pre-Calibrate/Pre-Detect Pipeline

The Pre-Calibrate pipeline is run for each OBI that is a member of a stack containing more than one OBI.

The Pre-Detect step uses a run of the wavdetect program with conservative parameter settings to identify bright point sources suitable for astrometrically matching the observations that comprise each observation stack.
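
For illustration, a bright-source detection run like this can be driven from Python via CIAO's runtool interface; the file names, wavelet scales, and threshold below are placeholders, since the actual "conservative" parameter values used by the Pre-Detect step are not listed on this page:

    from ciao_contrib.runtool import wavdetect

    wavdetect.punlearn()
    wavdetect.infile = "obi_broad_img3.fits"    # binned counts image (placeholder name)
    wavdetect.outfile = "predetect_src.fits"
    wavdetect.scellfile = "predetect_scell.fits"
    wavdetect.imagefile = "predetect_img.fits"
    wavdetect.defnbkgfile = "predetect_nbkg.fits"
    wavdetect.scales = "2 4 8"      # small scales favour compact, bright sources
    wavdetect.sigthresh = 1e-7      # stricter than the 1e-6 default
    wavdetect.clobber = True
    wavdetect()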

Fine Astrometry Pipeline

The Fine Astrometry pipeline is run to compute the astrometric corrections needed to align each observation in a stack to the same astrometric frame. It is run on the observations that went through the Pre-Calibrate/Pre-Detect pipeline.
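
As a rough illustration of the kind of calculation involved (the pipeline's actual cross-matching, weighting, and outlier handling are not described here), the sketch below matches bright sources from one OBI against a reference source list and computes a mean on-sky shift; all coordinates are invented:

    from astropy.coordinates import SkyCoord
    import astropy.units as u

    # Reference positions (e.g. from the pre-detect list of another OBI)
    # and the matching detections from the OBI being aligned.
    ref = SkyCoord([187.271, 187.305, 187.322] * u.deg,
                   [2.041, 2.060, 2.013] * u.deg)
    obs = SkyCoord([187.2713, 187.3054, 187.3222] * u.deg,
                   [2.0413, 2.0604, 2.0133] * u.deg)

    idx, sep, _ = obs.match_to_catalog_sky(ref)
    good = sep < 2 * u.arcsec                    # keep plausible matches only

    # On-sky offsets of the OBI frame relative to the reference frame.
    dra, ddec = ref[idx].spherical_offsets_to(obs)
    print("mean shift: dRA = %.2f arcsec, dDec = %.2f arcsec"
          % (dra[good].to(u.arcsec).mean().value,
             ddec[good].to(u.arcsec).mean().value))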

Calibrate Pipeline

The Calibrate pipeline is run for each OBI in the catalog.

ComboDet Pipeline

The ComboDet (combine and detect) pipeline is run for each calibrated OBI from the Calibrate pipeline to create combined data products and identify candidate detections for both the compact and convex-hull source lists.

Source Validation Pipeline

The Source Validation pipeline is run to reconcile the detections from wavdetect and mkvtbkg that form the list of compact sources, and to review the convex-hull detections.

MLE Pipeline Run 1

The MLE (Maximum Likelihood Estimator) pipeline takes the candidate compact sources in each bundle and assesses them using a source region significantly larger than the PSF, updating the source positions and evaluating their likelihood values.
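
As a toy illustration of the maximum-likelihood idea (not the pipeline's actual source model, PSF, or likelihood definition), the sketch below fits a point source plus a flat background to a simulated counts image by maximizing a Poisson likelihood:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson

    rng = np.random.default_rng(0)
    ny, nx = 32, 32
    y, x = np.mgrid[0:ny, 0:nx]

    def model(params):
        # Circular Gaussian stand-in for the PSF plus a flat background level.
        x0, y0, amp, bkg = params
        psf = np.exp(-0.5 * ((x - x0) ** 2 + (y - y0) ** 2) / 2.0 ** 2)
        return amp * psf + bkg

    # Simulated counts for a source near pixel (17.3, 14.8).
    counts = rng.poisson(model([17.3, 14.8, 5.0, 0.1]))

    def neg_log_like(params):
        mu = np.clip(model(params), 1e-12, None)   # keep the Poisson rate positive
        return -poisson.logpmf(counts, mu).sum()

    fit = minimize(neg_log_like, x0=[16.0, 16.0, 3.0, 0.2], method="Nelder-Mead")
    print("best-fit x0, y0, amplitude, background:", np.round(fit.x, 2))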

Rebundle

The Rebundle step checks the new source positions and recalculates the assignment of sources to bundles.
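
This page does not define how bundles are constructed; assuming the common approach of grouping sources whose fitting regions overlap, regrouping after the positions move could look like the sketch below (positions and radii invented):

    import numpy as np

    # Updated source positions (pixels) and fitting-region radii (pixels).
    positions = np.array([[10.0, 10.0], [12.0, 11.0], [40.0, 42.0], [41.0, 40.5]])
    radii = np.array([3.0, 3.0, 2.5, 2.5])

    n = len(positions)
    parent = list(range(n))          # union-find over the sources

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.hypot(*(positions[i] - positions[j])) < radii[i] + radii[j]:
                parent[find(i)] = find(j)      # overlapping regions: same bundle

    bundles = {}
    for i in range(n):
        bundles.setdefault(find(i), []).append(i)
    print(list(bundles.values()))              # e.g. [[0, 1], [2, 3]]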

MLE Pipeline Run 2 (Recenter)

The MLE (Maximum Likelihood Estimator) pipeline takes the candidate sources in each reassigned bundle and assesses them, using smaller source regions. The source positions are further updated. The steps are the same as for the first run.

After the run, QA is performed to inspect and adjust bundle positions where needed.

Stacker Pipeline

The Stacker pipeline creates a merged detection list for the compact sources in an observation stack.

Master Match Pipeline

The Master Match pipeline reconciles detections of the same compact source in different stacks. The method is similar to that used in Release 1.

Source Properties Pipeline

The Source Properties pipeline is run for each master (compact) source and energy band.

Convex Hull Source Properties Pipeline

The Convex Hull Source Properties pipeline is run for each band and each set of overlapping stacks to complete the analysis of highly extended sources (these are called convex-hull sources to distinguish them from the compact sources handled by the Source Properties pipeline).

Limiting Sensitivity Pipeline

The Limiting Sensitivity pipeline calculates the sensitivity, in each band, for point-source detection at each location covered by the catalog. It handles overlapping stacks by taking the lowest value from all stacks that cover the same point on the sky.
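
A minimal sketch of that combination rule, assuming the per-stack sensitivity maps have already been reprojected onto a common pixel grid (the values and grid are invented):

    import numpy as np

    # Hypothetical per-stack limiting-sensitivity maps on the same grid;
    # NaN marks pixels a stack does not cover.
    stack_a = np.array([[3.0e-15, 5.0e-15],
                        [np.nan,  4.0e-15]])
    stack_b = np.array([[4.0e-15, np.nan],
                        [2.0e-15, 6.0e-15]])

    # Keep the lowest (most sensitive) value wherever at least one stack has
    # coverage; np.fmin prefers the non-NaN value when only one map covers a pixel.
    combined = np.fmin(stack_a, stack_b)
    print(combined)

For more than two overlapping stacks, the same element-wise minimum can be applied across the whole set (for example with np.fmin.reduce).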