Analysis Guide: HRC Data Preparation
It is possible to analyze any Chandra dataset straight out of the box. However, to get scientifically accurate results, there are a number of data processing questions that should be considered. When Chandra data goes through Standard Data Processing (SDP, or the pipeline), the most recently available calibration is applied to it. Since this calibration is continuously being improved, one should check whether there are currently newer files available. Similarly, some science decisions are made during SDP; every user has the option to reprocess the data with different parameters.
This guide is designed to help the user decide how an HRC dataset should be processed before starting the data analysis stage.
The following threads are referenced:
- New Observation-Specific HRC Bad Pixel File
- Reprocessing Data to Create a New Level=2 Event File
- Computing Average HRC Dead Time Corrections (S-Lang or Python)
- Setting the Observation-specific Bad Pixel Files
- Improving the Astrometry of your Data: Correct for a Known Processing Offset
- Correcting Absolute Astrometry with reproject_aspect
The threads should be run in the order in which they are presented below.
Thread: New Observation-Specific HRC Bad Pixel File
The HRC-I and HRC-S bad pixel files are used both to define the valid coordinate regions in the detectors and to identify bad (hot) pixels. Observation-specific bad pixel files are generated from calibration data products by applying the appropriate degap corrections and selecting all time-dependent bad pixel regions in the calibration data that are appropriate to the time of the observation.
It is necessary to run this thread if you are working with an HRC dataset and you have either re-run hrc_process_events using degap corrections different from those used in standard processing, or you have identified new bad pixel regions that are not contained in the CALDB bad pixel list.
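The time-dependent selection described above can be sketched in a few lines of Python. The record layout, region names, and times below are invented for illustration; the real bad-pixel calibration products live in the CALDB and carry their own validity metadata.

```python
# Sketch of selecting the time-dependent bad-pixel regions that apply
# to one observation. Records and times here are made up; real
# calibration products are stored in the CALDB.

obs_tstart = 2.0e8  # observation start time (mission seconds, illustrative)

# Each record is valid from "start" until "stop" (None = still valid)
badpix_cal = [
    {"region": "A", "start": 0.0,   "stop": 1.5e8},
    {"region": "B", "start": 1.0e8, "stop": None},
    {"region": "C", "start": 2.1e8, "stop": None},
]

def valid_at(rec, t):
    """True if this bad-pixel record applies at time t."""
    return rec["start"] <= t and (rec["stop"] is None or t < rec["stop"])

obs_badpix = [rec["region"] for rec in badpix_cal if valid_at(rec, obs_tstart)]
print(obs_badpix)  # only region "B" is valid at this observation time
```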
Thread: Reprocessing Data to Create a New Level=2 Event File
The Reprocessing Data to Create a New Level=2 Event File thread generates a new level=2 event file for all possible grating and detector combinations.
This thread also includes grade and status filtering:
If you have been working with a level=1 event file, it needs to be filtered on grade and status to create a level=2 event file. In general, the data is filtered to remove events that do not have a good GRADE or that have one or more of the STATUS bits set to 1.
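The STATUS filtering step can be illustrated with a minimal sketch. The event rows and the bad-bit mask below are hypothetical, not the actual HRC STATUS bit definitions; the point is only the bitwise test that rejects an event when any flagged bit is set.

```python
# Minimal sketch of level=1 -> level=2 style STATUS filtering.
# The events and the bad-bit mask are illustrative only.

# Each event carries a STATUS word; a set bit flags a problem.
events = [
    {"time": 100.0, "status": 0b0000},  # clean event
    {"time": 101.5, "status": 0b0010},  # one status bit set -> reject
    {"time": 103.2, "status": 0b0000},  # clean event
    {"time": 104.8, "status": 0b1100},  # two status bits set -> reject
]

BAD_STATUS_MASK = 0b1111  # hypothetical: treat all four low bits as "bad"

def good_event(ev, mask=BAD_STATUS_MASK):
    """Keep the event only if none of the masked STATUS bits are set."""
    return (ev["status"] & mask) == 0

level2 = [ev for ev in events if good_event(ev)]
print(len(level2))  # 2 of the 4 events survive the STATUS filter
```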
Thread: Computing Average HRC Dead Time Corrections (S-Lang or Python)
HRC deadtime corrections are determined as a function of time from detector total event and valid event counters and written to a deadtime factor (dtf1) file. The average deadtime correction (DTCOR) for an observation is computed from the dtf1 file, filtered by the relevant good time intervals, and is applied to the corresponding ONTIME to compute the LIVETIME (and EXPOSURE) of the observation.
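The bookkeeping in that paragraph can be sketched directly. The dtf1 samples, GTI, and resulting times below are made up for illustration; a real dtf1 file tabulates DTF versus TIME in a FITS table and would be read with a FITS library rather than hard-coded.

```python
# Sketch of the deadtime calculation described above, with invented numbers.

# (start, stop, dtf) rows: the deadtime factor over successive time bins
dtf1 = [
    (0.0,    500.0,  0.98),
    (500.0,  1000.0, 0.95),
    (1000.0, 1500.0, 0.97),
]

# Good time interval used to filter the dtf1 rows
gti = (0.0, 1000.0)

# Keep only the parts of each bin that fall inside the GTI
overlap = [(max(t0, gti[0]), min(t1, gti[1]), dtf) for t0, t1, dtf in dtf1]
overlap = [(t0, t1, dtf) for t0, t1, dtf in overlap if t1 > t0]

# DTCOR is the time-weighted average deadtime factor over the GTI
total = sum(t1 - t0 for t0, t1, _ in overlap)
dtcor = sum((t1 - t0) * dtf for t0, t1, dtf in overlap) / total

ontime = total              # sum of the good time intervals, seconds
livetime = dtcor * ontime   # LIVETIME = DTCOR x ONTIME

print(round(dtcor, 3), round(livetime, 1))  # -> 0.965 965.0
```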
There are two reasons why you might recompute the deadtime statistics for your observation:
- Your data was processed with software version 7.6.4 through 7.6.8: a bug in HRC standard processing led to the use of incorrect good time intervals (GTIs) in the calculation of DTCOR in the dtfstats file, and hence the LIVETIME and EXPOSURE. This bug was introduced in processing version 7.6.4 and resolved in 7.6.8. Users whose datasets were processed with these software versions should follow this thread to verify the deadtime corrections in their data. The software version is stored in the ASCDSVER header keyword.
- You wish to time-filter the event list in a manner different from that used in standard processing, particularly if the deadtime factors in the dtf1 file have been flagged as variable in the standard deadtime statistics (std_dtfstat1) file.
This thread should be done after any necessary reprocessing is completed (e.g. running hrc_process_events).
Thread: Setting the Observation-specific Bad Pixel Files
Although the majority of the calibration files are now contained within the Chandra Calibration Database (CALDB), the observation-specific bad pixel list must still be set by the user. This file will be used by many of the CIAO tools, such as mkarf, mkgarf, and mkinstmap. Setting the bad pixel file ensures that the most accurately known bad pixel list for any observation will consistently be used in the data processing.
It is very important that you know what files are set in your ardlib.par. If you do not set the bad pixel file for your observation, the software will use a generic detector bad pixel file from the CALDB; pixels that are flagged as bad in a specific observation will not get filtered out when using this map. The criteria for a pixel to be flagged are described in the badpix dictionary entry.
Remember to "punlearn" or delete your ardlib.par file after completing analysis of this dataset to ensure that the proper bad-pixel maps are used the next time that ardlib.par is referenced by a tool.
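The fallback behavior described above can be modeled in a few lines. The parameter name AXAF_HRC-I_BADPIX_FILE is the actual ardlib parameter for the HRC-I bad pixel file, but the dictionary standing in for ardlib.par and both file names are purely illustrative.

```python
# Toy model of the ardlib lookup described above: a tool uses the
# observation-specific bad-pixel file if one has been set, and falls
# back to a generic CALDB file otherwise. File names are made up.

ardlib = {}  # a freshly punlearn'ed ardlib.par: nothing set yet

def badpix_file(params, key="AXAF_HRC-I_BADPIX_FILE",
                default="generic_caldb_badpix.fits"):
    """Return the observation-specific file if set, else the generic one."""
    return params.get(key, default)

# Before anything is set, the generic CALDB file is used
assert badpix_file(ardlib) == "generic_caldb_badpix.fits"

# After "pset"-ing the observation-specific file, it takes precedence
ardlib["AXAF_HRC-I_BADPIX_FILE"] = "obs_specific_bpix1.fits"
assert badpix_file(ardlib) == "obs_specific_bpix1.fits"
```

This also shows why the punlearn step matters: clearing the parameter file returns the lookup to the generic default rather than silently reusing a previous observation's map.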
Threads: Improving the Astrometry of your Data: Correct for a Known Processing Offset; Correcting Absolute Astrometry with reproject_aspect
There are two main reasons why you might need to change the aspect of your observation:
- Correct for a known processing offset: to remove any known aspect offsets and obtain absolute astrometry accurate to 1". This case is addressed in the Improving the Astrometry of your Data: Correct for a Known Processing Offset thread. This aspect correction should be applied after creating a new level=2 event file, as the reprocessing may reverse a correction applied beforehand.
- Apply an offset for improved astrometry: if the X-ray observation has point sources with very accurately known optical/radio/IR counterpart positions, it is possible to obtain absolute astrometry accurate to 0.1" - 0.2". In this case, users should follow the Correcting Absolute Astrometry with reproject_aspect thread.
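The counterpart-matching idea can be illustrated with a simplified sketch that computes only a mean translational shift. All coordinates below are invented, and reproject_aspect itself fits a full transform (translation plus rotation and scale), not just the mean offset computed here.

```python
import math

# Toy cross-match: X-ray source centroids vs. accurately known
# counterpart positions, both in decimal degrees (invented values).
xray    = [(150.0010, 2.2000), (150.0020, 2.2100), (149.9950, 2.1950)]
optical = [(150.0008, 2.1999), (150.0018, 2.2099), (149.9948, 2.1949)]

# Mean offset in arcsec (small-angle, with the cos(Dec) factor on RA)
n = len(xray)
d_ra  = sum((xo - xx) * math.cos(math.radians(dx)) * 3600.0
            for (xx, dx), (xo, do) in zip(xray, optical)) / n
d_dec = sum((do - dx) * 3600.0
            for (xx, dx), (xo, do) in zip(xray, optical)) / n

# Apply the mean shift to move the X-ray frame onto the counterpart frame
corrected = [(ra + d_ra / 3600.0 / math.cos(math.radians(dec)),
              dec + d_dec / 3600.0) for ra, dec in xray]
```

With these made-up positions the fitted shift is a fraction of an arcsecond, which is the regime where this correction takes the absolute astrometry from ~1" down toward 0.1" - 0.2".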
For full details on the current status of Chandra astrometry, see the Notes on Chandra Astrometric Accuracy.