Last modified: 19 January 2010

URL: http://cxc-newtest.cfa.harvard.edu/ciao4.2/guides/hrc_data.html

Analysis Guide: HRC Data Preparation


It is possible to analyze any Chandra dataset straight out of the box. However, to get scientifically accurate results, there are a number of data processing questions that should be considered. When Chandra data goes through Standard Data Processing (SDP, or the pipeline), the most recently available calibration is applied to it. Since this calibration is continuously being improved, one should check whether there are currently newer files available. Similarly, some science decisions are made during SDP; every user has the option to reprocess the data with different parameters.

This guide is designed to help the user decide how an HRC dataset should be processed before starting the data analysis stage.

The following threads are referenced:

  1. New Observation-Specific HRC Bad Pixel File
  2. Reprocessing Data to Create a New Level=2 Event File
  3. Computing Average HRC Dead Time Corrections (S-Lang or Python)
  4. Setting the Observation-specific Bad Pixel Files
  5. Improving the Astrometry of your Data: Correct for a Known Processing Offset
  6. Correcting Absolute Astrometry with reproject_aspect

The threads should be run in the order in which they are presented below.



Thread: New Observation-Specific HRC Bad Pixel File

The HRC-I and HRC-S bad pixel files are used both to define the valid coordinate regions in the detectors and to identify bad (hot) pixels. Observation-specific bad pixel files are generated from calibration data products by applying the appropriate degap corrections and selecting all time-dependent bad pixel regions in the calibration data that are appropriate to the time of the observation.

It is necessary to run this thread if you are working with an HRC dataset and you have either re-run hrc_process_events with degap corrections different from those used in standard processing, or identified new bad pixel regions that are not contained in the CALDB bad pixel list.
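The time-dependent selection described above can be illustrated with a short Python sketch. The record layout (a tstart/tstop validity interval per region) and the values below are assumptions for illustration only; the real CALDB bad pixel file format differs, and hrc_build_badpix does this work for you.

```python
# Sketch of the time-dependent selection: keep only the bad-pixel
# regions whose validity interval covers the observation epoch.  The
# record layout and values are illustrative, not the CALDB format.

def select_badpix(regions, obs_time):
    """Regions valid at obs_time; a tstop of None means 'still bad'."""
    return [r for r in regions
            if r["tstart"] <= obs_time
            and (r["tstop"] is None or obs_time <= r["tstop"])]

regions = [
    {"name": "hot-1", "tstart": 0.0,   "tstop": 1.0e8},  # fixed before the obs
    {"name": "hot-2", "tstart": 5.0e7, "tstop": None},   # still flagged
]

print([r["name"] for r in select_badpix(regions, 2.0e8)])  # -> ['hot-2']
```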



Thread: Reprocessing Data to Create a New Level=2 Event File

The Reprocessing Data to Create a New Level=2 Event File thread generates a new level=2 event file for all possible grating and detector combinations.

This thread also includes grade and status filtering:

If you have been working with a level=1 event file, it needs to be filtered on grade and status to create a level=2 event file. In general, the data is filtered to remove events that do not have a good GRADE or that have one or more of the STATUS bits set to 1.
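The status test applied by the filtering can be sketched as follows: an event survives only if none of the "bad" STATUS bits are set in its status word. The 4-bit mask here is purely illustrative -- the real HRC STATUS word is 32 bits, and the exact good/bad pattern is defined in the thread, not in this example.

```python
# Sketch of STATUS-bit filtering: reject any event whose STATUS word
# has one or more "bad" bits set, i.e. keep events with
# (status & bad_mask) == 0.  The mask below is hypothetical.

BAD_MASK = 0b1010  # illustrative bad bits only

def good_events(status_words, bad_mask=BAD_MASK):
    """Keep events with none of the bad STATUS bits set."""
    return [s for s in status_words if (s & bad_mask) == 0]

events = [0b0000, 0b0010, 0b0100, 0b1000]
print(good_events(events))  # events with bit 1 or bit 3 set are dropped
```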



Thread: Computing Average HRC Dead Time Corrections (S-Lang or Python)

HRC deadtime corrections are determined as a function of time from detector total event and valid event counters and written to a deadtime factor (dtf1) file. The average deadtime correction (DTCOR) for an observation is computed from the dtf1 file, filtered by the relevant good time intervals, and is applied to the corresponding ONTIME to compute the LIVETIME (and EXPOSURE) of the observation.

There are two reasons, both described in the thread, why you might need to recompute the deadtime statistics for your observation.

This thread should be done after any necessary reprocessing is completed (e.g. running hrc_process_events).
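Since this thread is offered in Python, the DTCOR computation it describes can be sketched directly: filter the dtf1 samples by the good time intervals, average the DTF values, and scale ONTIME to get LIVETIME. The sample values are made up, and a simple unweighted mean is used here; the actual tools may weight each sample by the time it covers.

```python
# Sketch of the average dead-time correction (DTCOR).  All values are
# illustrative; a real dtf1 file supplies the (time, DTF) samples and
# the event file supplies the GTIs and ONTIME.

def in_gti(t, gtis):
    """True if time t falls inside any (start, stop) good time interval."""
    return any(start <= t < stop for start, stop in gtis)

def dtcor(times, dtf, gtis):
    """Average dead-time factor over the GTI-filtered samples."""
    vals = [d for t, d in zip(times, dtf) if in_gti(t, gtis)]
    return sum(vals) / len(vals)

times = [10.0, 20.0, 30.0, 40.0]      # sample times (s) from the dtf1 file
dtf   = [0.99, 0.98, 0.50, 0.99]      # dead-time factors (made-up values)
gtis  = [(0.0, 25.0), (35.0, 50.0)]   # the 30.0 s sample falls outside the GTIs

avg = dtcor(times, dtf, gtis)         # mean of the three in-GTI samples
ontime = 1000.0
livetime = avg * ontime               # LIVETIME; EXPOSURE is derived from it
```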



Thread: Setting the Observation-specific Bad Pixel Files

Although the majority of the calibration files are now contained within the Chandra Calibration Database (CALDB), the observation-specific bad pixel list must still be set by the user. This file will be used by many of the CIAO tools, such as mkarf, mkgarf, and mkinstmap. Setting the bad pixel file ensures that the most accurately known bad pixel list for any observation will consistently be used in the data processing.

It is very important that you know what files are set in your ardlib.par. If you do not set the bad pixel file for your observation, the software will use a generic detector bad pixel file from the CALDB; pixels that are flagged as bad in a specific observation will not get filtered out when using this map. The criteria for a pixel to be flagged are described in the badpix dictionary entry.

Remember to "punlearn" or delete your ardlib.par file after completing analysis of this dataset to ensure that the proper bad-pixel maps are used the next time that ardlib.par is referenced by a tool.



Threads: Improving the Astrometry of your Data: Correct for a Known Processing Offset

Correcting Absolute Astrometry with reproject_aspect

There are two main reasons why you might need to change the aspect of your observation:

  1. Correct for a known processing offset: to remove any known aspect offsets and obtain absolute astrometry which is accurate to 1". This case is addressed in the Improving the Astrometry of your Data: Correct for a Known Processing Offset thread.

    This aspect correction should be applied after creating a new level=2 event file, as the reprocessing may reverse a correction applied beforehand.

  2. Apply an offset for improved astrometry: if the X-ray observation has point sources with very accurately known optical/radio/IR counterpart positions, it is possible to obtain absolute astrometry which is accurate to 0.1" - 0.2". In this case, users should follow the Correcting Absolute Astrometry with reproject_aspect thread.

For full details on the current status of Chandra astrometry, see the Notes on Chandra Astrometric Accuracy.
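Case 2 above amounts to measuring the mean offset between the X-ray source positions and their counterparts. reproject_aspect performs the real fit (including rotation and scale) and updates the aspect solution for you; the sketch below, with made-up positions, shows only the basic mean-shift measurement, including the cos(Dec) factor on the RA difference.

```python
import math

# Sketch: estimate a bulk astrometric offset from matched X-ray /
# counterpart positions.  This is only the mean shift; reproject_aspect
# does the full transformation.  All positions below are made up.

def mean_offset(xray, ref):
    """Mean (dRA*cos(Dec), dDec) in arcsec between matched position lists.

    xray, ref -- lists of (ra_deg, dec_deg) pairs, matched one-to-one.
    """
    dra, ddec = [], []
    for (r1, d1), (r2, d2) in zip(xray, ref):
        dra.append((r2 - r1) * math.cos(math.radians(d1)) * 3600.0)
        ddec.append((d2 - d1) * 3600.0)
    return sum(dra) / len(dra), sum(ddec) / len(ddec)

xray = [(10.0000, 41.0000), (10.0100, 41.0050)]  # hypothetical X-ray positions
ref  = [(10.0001, 41.0002), (10.0101, 41.0052)]  # hypothetical counterparts

dx, dy = mean_offset(xray, ref)  # arcsec shift to apply to the X-ray frame
```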

