Timing Analysis Across Multiple Observations
Using merged data sets.
Timing Analysis with Combined Observations
As in the spectral case, in general, merged observations should not be used for timing analysis. Different pointings, roll angles, bad pixels, and other instrumental effects can lead to incorrect count rates. However, in conditions similar to those described in Caveat: Extracting Spectra from Merged Datasets, "reasonable" results can be obtained when extracting a light curve.
In the following examples, we use three pointings (ObsIDs 2019, 9531, 9807) of the lenticular "Cartwheel" Galaxy taken over a ~7 year period, and examine the 'ULX N10' point source (\(\alpha = \mathrm{00^{h}37^{m}39.38^{s}}\), \(\delta = \mathrm{-33^{\circ} 43' 23.08''}\); A&A 426 787; MNRAS 406 1116), which corresponds to the Chandra Source Catalog master source 2CXO J003739.3-334323; the catalog position defines our source and background extraction regions.
unix% cat ULX-N10.wcs.reg
ellipse(0:37:39.3936,-33:43:23.517,0.02693',0.02205',100.05401)
unix% cat ULX-N10.bkg.wcs.reg
ellipse(0:37:39.3929,-33:43:23.522,0.13467',0.11023',100.05404)
-ellipse(0:37:39.3929,-33:43:23.522,0.02963',0.02425',100.05404)
Approach I (recommended):
Time order matters when combining separate light curve files.
For example:
| bash/dash/zsh | (t)csh |
|---|---|
| `unix% tbin=2592.8` | `unix% set tbin=2592.8` |
| `unix% lc_bin="bin time=::${tbin}"` | `unix% set lc_bin="bin time=::${tbin}"` |
| `unix% efilt="500:7000"` | `unix% set efilt="500:7000"` |
| `unix% src="sky=region(ULX-N10.wcs.reg)"` | `unix% set src="sky=region(ULX-N10.wcs.reg)"` |
| `unix% bkg="sky=region(ULX-N10.bkg.wcs.reg)"` | `unix% set bkg="sky=region(ULX-N10.bkg.wcs.reg)"` |
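With the variables set, each observation's background-subtracted light curve can be extracted with dmextract, and the results combined with dmmerge, listing the input files in time order. A minimal sketch in bash syntax, with illustrative per-ObsID event-file names (the energy filter is in eV):

unix% dmextract infile="2019_evt.fits[energy=${efilt}][${src}][${lc_bin}]" \
        outfile="2019_${tbin}s.lc" opt=ltc1 \
        bkg="2019_evt.fits[energy=${efilt}][${bkg}]" clobber=yes

and similarly for ObsIDs 9531 and 9807, followed by:

unix% dmmerge infile="2019_${tbin}s.lc,9531_${tbin}s.lc,9807_${tbin}s.lc" \
        outfile="merged_${tbin}s.lc" clobber=yes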
An extra complication neglected in this example (bad pixels, effective area, detector response effects, etc.) is that timing analysis also depends on the number of active CCDs being read out and on any subarrays used. For instance, in our example two of the ObsIDs have five active CCDs with a TIMEDEL of 3.141 s and DTCOR of 0.9869, while the remaining ObsID uses six CCDs with a TIMEDEL of 3.241 s and DTCOR of 0.9873. These are not big differences, but integrated over thousands of frames they can have an effect.
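These values can be checked for each observation with dmkeypar (the event-file names are illustrative):

unix% dmkeypar 2019_evt.fits TIMEDEL echo+
unix% dmkeypar 2019_evt.fits DTCOR echo+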
For plotting purposes, if the light curve is drawn as an unconnected scatter plot, the order does not matter. For downstream processing, however, it matters for many tools: if the software does not sort its input internally, an out-of-order light curve can cause a runtime failure.
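If a combined light curve was assembled out of time order, it can be re-sorted on the TIME column with dmsort before further processing, e.g.

unix% dmsort infile=merged_time-disordered_2592.8s.lc \
        outfile=merged_sorted_2592.8s.lc keys=time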
[Figure: Compare the Effect of Merging Order on Light Curve Files]
Approach II (not recommended):
- Extracting a light curve from a merged event file, even one combined without concern for time order, will produce a properly time-ordered light curve.
This approach is only applicable to ACIS data, and it is much more likely to fail outright or produce erroneous results. If different sets of CCDs are used among the ObsIDs, the wrong GTIs will be applied to a subset of the events; and if the ObsIDs use different numbers of CCDs, or mix subarray and full-frame data sets, then different TIMEDEL and DTCOR values will be applied.

The merging order of the event files does not affect the extracted light curve, since dmextract sorts the events by time during extraction. However, the large time gaps between observations will be included in the light curve, since the data set is binned in time.
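A minimal sketch of this approach, reusing the variables defined above (bash syntax; event-file names are illustrative):

unix% dmmerge infile="2019_evt.fits,9531_evt.fits,9807_evt.fits" \
        outfile=merged_evt.fits clobber=yes
unix% dmextract infile="merged_evt.fits[energy=${efilt}][${src}][${lc_bin}]" \
        outfile="lightcurve_merged_${tbin}s.lc" opt=ltc1 \
        bkg="merged_evt.fits[energy=${efilt}][${bkg}]" clobber=yes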
[Figure: Compare the Effect of Merging Order on Event Files and the Extracted Light Curve]
If there are large time gaps, extracting the light curve from a merged event file results in a significantly larger output file, since time bins containing zero counts must also be written.
unix% ls -alh merged*2592.8s*.lc
-rw-r--r-- 1 user group 14M Feb 17 2023 merged_2592.8s.lc
-rw-r--r-- 1 user group 14M Feb 17 2023 merged_time-disordered_2592.8s.lc
-rw-r--r-- 1 user group 14M Feb 26 14:11 merged_timereverse_2592.8s.lc
unix% ls -alh lightcurve*2592.8s*.lc
-rw-r--r-- 1 user group 155K Feb 26 16:41 lightcurve_merged_2592.8s.lc
-rw-r--r-- 1 user group 155K Feb 26 16:41 lightcurve_merged-timedisordered_2592.8s.lc
-rw-r--r-- 1 user group 155K Feb 26 16:41 lightcurve_merged-timereverse_2592.8s.lc
The general recommendation is to follow Approach I and handle time ordering when merging the light curve files. The minor hassle of doing so early on avoids possible downstream headaches while also minimizing the resulting file sizes.
Other Considerations
- Time ordering matters when running glvary with a merged event file as input; otherwise the tool will either hang or run until exiting with a segmentation fault.
That said, while the tool may 'mechanically' run with a time-ordered event file, it is unlikely to provide good results. In principle, the Gregory-Loredo algorithm can handle large gaps in a time series, but its implementation in glvary was intended to look for intra-observation variability, not inter-observation variability. Since the algorithm works by dividing the entire time range in half, then thirds, quarters, etc., and performing its Bayesian analysis on those time bins, a large time gap between observations makes it likely that all the events for an individual observation will end up in a single G-L time bin, i.e. one time bin for each ObsID, unless the observations are very closely spaced in time.
Quirks specific to glvary that may be problematic for inter-observation applications include:
- It forces all time bins to have non-zero flux, which is why a G-L light curve never goes to zero, even for a single ObsID.
- It does not account for changes in the spectral response/effective area over long time spans, nor for mixing events from front- and back-illuminated CCDs, which can result in erroneously finding variability in a constant source.
- It does not consider the background, so changes in the background level can be erroneously interpreted as source variability, even if background flares are removed from a quiescent field.
Upshot: we do not recommend using glvary on merged event files.
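If a variability test is still desired, a safer pattern is to run glvary on each observation individually and compare the results. A sketch for one ObsID, with illustrative file names and the tool's other parameters left at their defaults:

unix% punlearn glvary
unix% glvary infile="2019_evt.fits[sky=region(ULX-N10.wcs.reg)]" \
        outfile=2019_gl_prob.fits lcfile=2019_gl.lc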
- Time order also matters for a merged event file used as input to dither_region.
General Caveats
- Depending on the nature of the analysis, the drop in effective area over the life of the mission may need to be taken into account, since it introduces an intrinsic dimming into the light curve. From the "Precise, Flux-Calibrated Lightcurve" entry of the "Timing Analysis with Lightcurves" Why page:
The most dependable method of calculating fluxes is to model the response-folded source spectrum. This will give an intrinsic flux [ergs/cm2/s], instead of a simple count rate [count/s]. Moreover, one may plot source spectral parameters (the slope of a power law, line FWHM, etc.) versus time for a more detailed study of the nature of the source variation. The Phase-binning a Spectrum thread shows how to extract the data.
In general, creating a flux-calibrated lightcurve is uncomplicated, but time-consuming. First, extract spectra for the times of interest; one spectrum is needed for every datapoint in the final lightcurve. Each spectrum is then fit with a source model and the integrated flux is calculated for a given energy range. Each flux value will be calibrated and based on the appropriate GTIs. Finally, the flux value for each of the fits can be manually written to a file for plotting; the file should have two columns—time and flux (the best-fit parameter value).
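As a sketch of the extraction step in that recipe, each time interval of interest can be selected with a DM time filter and passed to specextract; the time range and output root below are illustrative, and the fitting itself would then be done in, e.g., Sherpa:

unix% specextract infile="2019_evt.fits[time=98000000:98100000][sky=region(ULX-N10.wcs.reg)]" \
        outroot=epoch1 \
        bkgfile="2019_evt.fits[time=98000000:98100000][sky=region(ULX-N10.bkg.wcs.reg)]"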
Generally speaking, given the variation between CCDs in a detector array, for time series analysis you would ideally have the source events fall on the same CCD across observations, so that valid GTI blocks are correctly applied to the merged data set. The instrument configuration should also be the same: mixing gratings and non-gratings data is bad, since the effective areas are radically different, and quirks arise if different detector frame times (EXPTIME) are used. Care must be taken.
Tip: To properly account for changes in the detector, ensure the data sets are (a) reprocessed with a common CalDB version (with chandra_repro), and (b) analyzed by running srcflux on each observation to obtain a flux for each one. The COUNTS and COUNT_RATE values for each observation are otherwise meaningless for this comparison, since they are direct detector measurements made with different CCDs and/or instrument configurations.
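A sketch of that workflow for one ObsID, using the ULX N10 position from above (the directory layout and output root are illustrative):

unix% chandra_repro indir=2019 outdir=2019/repro
unix% srcflux infile=2019/repro/acisf02019_repro_evt2.fits \
        pos="0:37:39.38,-33:43:23.08" outroot=2019_ bands=broad

and likewise for ObsIDs 9531 and 9807, before comparing the model fluxes across observations.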
- Timing analysis is challenging, if not impossible, with large gaps in time between observations. However, if the source has some known periodicity, then folding the light curves (i.e., transforming the time axis into phase) about a common reference time makes the data sets amenable to being combined; alternatively, the unevenly sampled data can be combined via Lomb-Scargle periodograms (a.k.a. least-squares spectral analysis).
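A minimal sketch of the folding step, assuming a known period and common reference epoch (the values below are purely illustrative) and using dmtcalc's modulus operator to add a phase column to each per-ObsID light curve, after which the files can be combined as in Approach I:

unix% P=316000          # assumed period [s]; illustrative
unix% T0=98000000       # common reference epoch [mission time, s]; illustrative
unix% dmtcalc infile="2019_${tbin}s.lc" outfile="2019_${tbin}s_phase.lc" \
        expression="phase=((time-${T0})%${P})/${P}"

For a Lomb-Scargle analysis of the unevenly sampled, combined light curve, see, e.g., astropy.timeseries.LombScargle.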