Using A Pileup Model
Sherpa Threads (CIAO 4.15 Sherpa)
Overview
Synopsis:
This thread describes how to include a pileup model in the model expression when fitting data in Sherpa. The Background Information section contains information on how the data should be processed in order to run this thread.
Related Links:

The Chandra ABC Guide to Pileup (PDF, 26 pages)

A comparison, for low pileup fractions, of the pileup models in Sherpa and XSpec with that in ISIS.
Last Update: 6 Dec 2022  updated for CIAO 4.15, added note on using delete_pileup_model to unset the pileup component.
Contents
 Background Information
 Getting Started
 Reading in Data & Instrument Responses
 Defining the Models
 Fitting with the Nelder-Mead and Monte Carlo Optimization Methods
 Examining the Pileup Fraction
 Saving the Fit Results
 Scripting It
 History
 Images
Background Information
Experience with the jdpileup model has chiefly been for on-axis point sources, and in this thread, this condition is assumed. Even when analyzing piled data from a point source, keep in mind that events in the core of the Point Spread Function (PSF) may be severely piled up while events in the PSF wings may be essentially unpiled, so generally, the fraction of the measuring aperture deemed to be piled up (the f model parameter) is less than 1.
The standard procedure for jdpileup is:

Reprocess the data and apply the standard event filtering to create a level=2 event file; this is illustrated in the Create a New Level=2 Event File thread.

Extract a source spectrum from a circular region, usually about 2 arcsec in radius (e.g. by following the Extract Spectrum and Response Files for a Point-like Source thread).

Use the pileup model in Sherpa to fit a spectral model.
For a detailed discussion of the model, see Davis (2001).
An afterglow is the residual charge from the interaction of a cosmic ray in a CCD. Standard data processing (SDP, aka "the pipeline") uses the tool acis_detect_afterglow to flag possible cosmic-ray events in the level 1 event file; these are then filtered out in the level 2 event file. It has been determined, however, that 3-5% of the valid source photons may be rejected from diffracted spectra. These rejections, though a small fraction of the total events, are systematic and non-uniform.
In order to accurately model the pileup, it is necessary to remove this correction so that no source photons are eliminated from the event file. Read the Pileup Talk for more information, in particular Data Preparation and Caveats. The Remove the acis_detect_afterglow correction thread shows how to undo the afterglow filtering.
Important Note
A more precise method for identifying afterglow events was introduced to SDP at version DS 7.4.0. If your data was processed with DS 7.4.0 or later, acis_detect_afterglow was not run in the pipeline. The new hot pixel tools are more judicious with respect to throwing away piled source events. Users should be able to analyze the data without having to unfilter events first.
The Get Started section of the Remove the acis_detect_afterglow correction thread shows how to check the processing version used for your data.
Getting Started
Download the sample data: 1618 (ACIS-S, NGC 4258)
unix% download_chandra_obsid 1618
The files used in this example were created by following the CIAO threads listed in the Background Information section. Here is a list of all the necessary files:
source_bin10.pi background.pi arf.fits rmf.fits
The data files are available in sherpa.tar.gz, as explained in the Sherpa Getting Started thread.
Reading in Data & Instrument Responses
The spectra that will be used in this session have already been binned by a factor of 10. The data set is input to Sherpa with the load_pha command:
sherpa> load_pha("source_bin10.pi")
WARNING: systematic errors were not found in file 'source_bin10.pi'
statistical errors were found in file 'source_bin10.pi'
but not used; to use them, re-read with use_errors=True
read ARF file arf.fits
read RMF file rmf.fits
WARNING: systematic errors were not found in file 'background.pi'
statistical errors were found in file 'background.pi'
but not used; to use them, re-read with use_errors=True
read background file background.pi
Since the response files are defined in the header of source_bin10.pi, the instrument response is automatically established for use:
sherpa> show_all()
Data Set: 1
Filter: 0.0015-14.9504 Energy (keV)
Bkg Scale: 0.0484
Noticed Channels: 1-1024
name = source_bin10.pi
channel = Float64[1024]
counts = Float64[1024]
staterror = None
syserror = None
bin_lo = None
bin_hi = None
grouping = Int16[1024]
quality = Int16[1024]
exposure = 20651.149034509
backscal = 8.2267382518748e-07
areascal = 1.0
grouped = True
subtracted = False
units = energy
rate = True
plot_fac = 0
response_ids = [1]
background_ids = [1]
RMF Data Set: 1:1
name = rmf.fits
energ_lo = Float64[1077]
energ_hi = Float64[1077]
n_grp = UInt64[1077]
f_chan = UInt32[1500]
n_chan = UInt32[1500]
matrix = Float64[440721]
e_min = Float64[1024]
e_max = Float64[1024]
detchans = 1024
offset = 1
ethresh = 1e-10
ARF Data Set: 1:1
name = arf.fits
energ_lo = Float64[1077]
energ_hi = Float64[1077]
specresp = Float64[1077]
bin_lo = None
bin_hi = None
exposure = 20650.705998778
ethresh = 1e-10
Background Data Set: 1:1
Filter: 0.0015-14.9504 Energy (keV)
Noticed Channels: 1-1024
name = background.pi
channel = Float64[1024]
counts = Float64[1024]
staterror = None
syserror = None
bin_lo = None
bin_hi = None
grouping = Int16[1024]
quality = Int16[1024]
exposure = 20651.149034509
backscal = 1.6997393568944e-05
areascal = 1.0
grouped = True
subtracted = False
units = energy
rate = True
plot_fac = 0
response_ids = [1]
background_ids = []
Background RMF Data Set: 1:1
name = rmf.fits
energ_lo = Float64[1077]
energ_hi = Float64[1077]
n_grp = UInt64[1077]
f_chan = UInt32[1500]
n_chan = UInt32[1500]
matrix = Float64[440721]
e_min = Float64[1024]
e_max = Float64[1024]
detchans = 1024
offset = 1
ethresh = 1e-10
Background ARF Data Set: 1:1
name = arf.fits
energ_lo = Float64[1077]
energ_hi = Float64[1077]
specresp = Float64[1077]
bin_lo = None
bin_hi = None
exposure = 20650.705998778
ethresh = 1e-10
If this is not the case for your data, you will need to manually set the instrument response:
sherpa> load_arf("arf.fits")
sherpa> load_rmf("rmf.fits")
The background is subtracted from the data, rather than fit simultaneously:
sherpa> subtract()
If the background file is not referenced in the header of source_bin10.pi, you can manually load the background with load_bkg.
The input data set may be plotted:
sherpa> plot_data()
The data are plotted in energy space since the instrument response provides the information necessary for the computation of the number of predicted counts in each detector bin; Figure 1 shows the resulting plot.
Figure 1: Source Spectrum
A filter may be applied to the data before proceeding:
sherpa> notice(1,7)
dataset 1: 0.00146:14.9504 -> 0.876:7.008 Energy (keV)
sherpa> plot_data()
Note that the pileup correction will include the entire available energy information regardless of the specified filter. The fit statistics, however, are calculated only on the specified bins.
Figure 2 shows the filtered data, which has an energy range of 1-7 keV. See the Choosing an Energy Filter why topic for help in picking a range.
Figure 2: Energy-filtered source spectrum
Defining the Models
Multi-Component Source Model
Since we have background-subtracted the data (rather than fitting it simultaneously), it is only necessary to create a source model expression. We model this source with a power law (xspowerlaw) absorbed by the interstellar medium (xswabs).
The absorption model will be referred to as "abs1", and the power law will be "power1"; the product of the two is assigned as the source model for the data set:
sherpa> set_source(xswabs.abs1*xspowerlaw.power1)
sherpa> show_model()
Model: 1
apply_rmf(apply_arf((20651.149034509 * (xswabs.abs1 * xspowerlaw.power1))))
   Param            Type          Value          Min          Max      Units
   -----            ----          -----          ---          ---      -----
   abs1.nH          thawed            1            0        1e+06 10^22 atoms / cm^2
   power1.PhoIndex  thawed            1           -3           10
   power1.norm      thawed            1            0        1e+24
sherpa> guess(get_model())
WARNING: No guess found for apply_rmf(apply_arf((20651.149034509 * (xswabs.abs1 * xspowerlaw.power1))))
sherpa> guess(power1)
sherpa> show_model()
Model: 1
apply_rmf(apply_arf((20651.149034509 * (xswabs.abs1 * xspowerlaw.power1))))
   Param            Type          Value          Min          Max      Units
   -----            ----          -----          ---          ---      -----
   abs1.nH          thawed            1            0        1e+06 10^22 atoms / cm^2
   power1.PhoIndex  thawed            1           -3           10
   power1.norm      thawed  0.000794651  7.94651e-07     0.794651
Note that since the model contains multiple components, the Sherpa guess command cannot be used to estimate the initial parameter values for the full model expression based on the data; each model component must be considered separately. To have Sherpa automatically query for the initial parameter values when a model is established, set 'paramprompt(True)' (it is 'False' by default).
Since the absorption component is a multiplicative model, the guess function will not work with it. If you know the Galactic column density for your source, you may set abs1.nh and freeze it:
sherpa> abs1.nh = <value>
sherpa> freeze(abs1.nh)
In this thread, however, we allow it to vary during the fit. For completeness, we explicitly thaw the parameter, although this is unnecessary here since we can see above that abs1.nH is already thawed by default:
sherpa> thaw(abs1.nh)
sherpa> show_model()
Model: 1
apply_rmf(apply_arf((20651.149034509 * (xswabs.abs1 * xspowerlaw.power1))))
   Param            Type          Value          Min          Max      Units
   -----            ----          -----          ---          ---      -----
   abs1.nH          thawed            1            0        1e+06 10^22 atoms / cm^2
   power1.PhoIndex  thawed            1           -3           10
   power1.norm      thawed  0.000794651  7.94651e-07     0.794651
Pileup Model
It is also necessary to define the pileup model (jdpileup) immediately after defining the source model:
sherpa> set_pileup_model(jdpileup.jdp)
sherpa> print(get_pileup_model())
jdpileup.jdp
   Param        Type          Value          Min          Max      Units
   -----        ----          -----          ---          ---      -----
   jdp.alpha    thawed          0.5            0            1
   jdp.g0       frozen            1  1.17549e-38            1
   jdp.f        thawed         0.95          0.9            1
   jdp.n        frozen            1  1.17549e-38          100
   jdp.ftime    frozen        3.241  1.17549e-38            5 sec
   jdp.fracexp  frozen        0.987            0            1
   jdp.nterms   frozen           30            1          100
Read carefully the following information on how to set the pileup model parameters:

alpha
alpha parameterizes "grade migration" in the detector, and represents the probability, per photon count greater than one, that the piled event is not rejected by the spacecraft software as a "bad event". Specifically, if N photons are piled together in a single frame, the probability of their being retained (as a single-photon event with their summed energy) is given by alpha^(N-1).
alpha is the parameter most likely to be allowed to vary in a fit.
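The alpha^(N-1) relation above can be sketched in a couple of lines; the function name here is ours, purely for illustration, and is not part of Sherpa:

```python
def retention_prob(alpha: float, n_photons: int) -> float:
    """Probability that n_photons piled in one frame survive on-board
    grade filtering as a single good event, in the jdpileup convention."""
    return alpha ** (n_photons - 1)

# A single photon is always retained; piled events are suppressed
# geometrically with each additional photon:
print(retention_prob(0.5, 1))  # 1.0
print(retention_prob(0.5, 3))  # 0.25
```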

g0:
Grade correction for single-photon detection, i.e. a fraction g0 of single-photon events will be retained as good grades.
In practice, this should be frozen to unity (the default) in any fit.

f:
The fraction of events in the source extraction region to which pileup will be applied. A typical value is approximately 0.95, and it should always be >0.85. f is the second most likely parameter, after alpha, to be allowed to vary in a fit.
It is recommended that the lower limit of this parameter be set to 0.85 before fitting (the default is 0.9):
sherpa> jdp.f.min=0.85

n:
Divide the model counts among n regions, to which the pileup model will be applied independently. This should be approximately the number of 3×3 pixel islands in the extracted spectra; for point sources, it is unity (the default).
This should remain a frozen parameter in any fit.

ftime
The frame time parameter (ftime) should be set to the good exposure time per frame, which is recorded in the header keyword EXPTIME of the event file. (Note that EXPTIME is used instead of TIMEDEL because the latter includes the transfer time, which ftime should not.)
The CIAO command may be executed from within Sherpa, as long as it is escaped with an exclamation point (!):
sherpa> !dmkeypar acisf01618_evt2.fits EXPTIME echo+
3.2
sherpa> jdp.ftime=3.2
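The distinction between the two keywords matters because TIMEDEL includes the frame-store transfer time. A minimal sketch of the relation, assuming the nominal full-frame ACIS timing values (check the headers of your own observation):

```python
# Nominal full-frame ACIS timing (values assumed for illustration):
timedel = 3.24104        # TIMEDEL keyword: exposure plus transfer time (s)
transfer_time = 0.04104  # frame-store transfer time (s)

# EXPTIME, the good exposure per frame, excludes the transfer time;
# this is the value that jdp.ftime should be set to:
exptime = timedel - transfer_time
print(round(exptime, 4))  # 3.2
```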

fracexp
This parameter is a fraction ≤1 that multiplies the frame exposure time to create a shorter, effective frame exposure time.
Although it is tempting to set this parameter to the FRACEXPO value from the ARF file, users should instead set and freeze fracexp to 1, and set the frame exposure time via the ftime parameter only. (FRACEXPO is a value between zero and one which indicates the fractional time that a region was exposed by a detector; e.g., it is 1.0 on-chip, and drops in chip gaps.)
sherpa> jdp.fracexp=1
Information on using this parameter to model novel deadtime effects is available in the The Chandra ABC Guide to Pileup.

nterms:
The nterms parameter represents the number of photons considered for pileup in a single frame.
This should be left frozen at its default value of 30. In other words, the expansion of the model will include terms corresponding to 0, 1, 2, ... , 28, 29, and 30 photon events landing in the same extraction region during the same frame time.
sherpa> jdp.nterms=30
The pileup model parameters are now set as shown:
sherpa> print(get_pileup_model())
jdpileup.jdp
   Param        Type          Value          Min          Max      Units
   -----        ----          -----          ---          ---      -----
   jdp.alpha    thawed          0.5            0            1
   jdp.g0       frozen            1  1.17549e-38            1
   jdp.f        thawed         0.95         0.85            1
   jdp.n        frozen            1  1.17549e-38          100
   jdp.ftime    frozen          3.2  1.17549e-38            5 sec
   jdp.fracexp  frozen            1            0            1
   jdp.nterms   frozen           30            1          100
Fitting with the Nelder-Mead and Monte Carlo Optimization Methods
To find a global minimum for the fit to the data, we first choose one of the simpler fit optimization methods: Nelder-Mead. This allows for a quick fit, but may require some hands-on work afterwards.
Fit using the Nelder-Mead method
We set the optimization method to 'neldermead', the fit statistic to 'chi2datavar', and start the fit:
sherpa> set_method("neldermead")
sherpa> set_stat("chi2datavar")
sherpa> fit()
**************************************************************
 The wabs model is obsolete and is only included for comparison
 with historical results. The tbabs model should be used for
 the ISM or phabs for general photoelectric absorption.
**************************************************************
Dataset               = 1
Method                = neldermead
Statistic             = chi2datavar
Initial fit statistic = 3984.01
Final fit statistic   = 77.7495 at function evaluation 1784
Data points           = 42
Degrees of freedom    = 37
Probability [Q-value] = 0.000101379
Reduced statistic     = 2.10134
Change in statistic   = 3906.26
   jdp.alpha        0.249281
   jdp.f            0.919177
   abs1.nH          5.9488
   power1.PhoIndex  1.35435
   power1.norm      0.00607134
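The reduced statistic reported by fit() is simply the final statistic divided by the number of degrees of freedom (data points minus the number of thawed parameters); checking the arithmetic for this fit:

```python
final_stat = 77.7495  # final fit statistic from the output above
data_points = 42
thawed_params = 5     # jdp.alpha, jdp.f, abs1.nH, power1.PhoIndex, power1.norm
dof = data_points - thawed_params

print(dof)                         # 37
print(round(final_stat / dof, 5))  # 2.10134
```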
The parameter space of models which include a pileup model component is very complex and presents a challenge to the search and fitting algorithm. By using the Nelder-Mead method, we are settling for finding a local minimum; one should then examine the solution, check the error bars, find a new minimum, re-fit, and so on. Another option is to use a different optimization method, one which starts from several initial parameter settings and explores the entire parameter space in a more consistent way; Sherpa provides the "moncar" method, which uses random starting locations.
Fit using the Monte Carlo method
Now we change the method to Monte Carlo and re-run the fit to see how the results change. Note that the fit starts from the results obtained by Nelder-Mead; you could instead reset the model parameters with the reset() command before continuing, if desired.
sherpa> set_method("moncar")
sherpa> fit()
Dataset               = 1
Method                = moncar
Statistic             = chi2datavar
Initial fit statistic = 77.7495
Final fit statistic   = 77.7489 at function evaluation 3776
Data points           = 42
Degrees of freedom    = 37
Probability [Q-value] = 0.000101398
Reduced statistic     = 2.10132
Change in statistic   = 0.000645872
   jdp.alpha        0.239319
   jdp.f            0.922768
   abs1.nH          5.94956
   power1.PhoIndex  1.3544
   power1.norm      0.00545624
This table compares the results from the two runs with different optimization methods:
parameter        | Nelder-Mead | Monte Carlo
-----------------|-------------|------------
abs1.nH          | 5.9488      | 5.94956
power1.PhoIndex  | 1.35435     | 1.3544
power1.norm      | 0.00607134  | 0.00545624
jdp.alpha        | 0.249281    | 0.239319
jdp.f            | 0.919177    | 0.922768
To remove the pileup model from being applied to your source model during fitting, the pileup component can be unset from the dataset ID by using delete_pileup_model.
Calculate the parameter uncertainties
At this point, we switch the optimization method back to Nelder-Mead and continue with the current fit results. For the best-fit values to really mean something, we need to find the uncertainties on the parameters. The confidence, covariance, and/or projection commands can be used to estimate confidence intervals for the thawed parameters:
sherpa> set_method("neldermead")
sherpa> set_conf_opt("fast", False)
sherpa> conf()
jdp.alpha lower bound:	-0.0964472
abs1.nH lower bound:	-0.563971
jdp.alpha upper bound:	-----
power1.PhoIndex lower bound:	-0.236708
power1.norm lower bound:	-0.00440763
abs1.nH upper bound:	0.586918
power1.PhoIndex upper bound:	0.238943
jdp.f lower bound:	-0.298603
jdp.f upper bound:	0.0369167
power1.norm upper bound:	0.012453
Dataset               = 1
Confidence Method     = confidence
Iterative Fit Method  = None
Fitting Method        = neldermead
Statistic             = chi2datavar
confidence 1-sigma (68.2689%) bounds:
   Param            Best-Fit   Lower Bound  Upper Bound
   -----            --------   -----------  -----------
   jdp.alpha        0.239319    -0.0964472        -----
   jdp.f            0.922768     -0.298603    0.0369167
   abs1.nH           5.94956     -0.563971     0.586918
   power1.PhoIndex    1.3544     -0.236708     0.238943
   power1.norm    0.00545624   -0.00440763     0.012453
The table lists the best-fit values for the model parameters, as well as the upper and lower bounds on each of them; 'jdp.alpha' does not have an upper bound, so it is unconstrained towards the hard maximum. Notice that the conf() command is preceded by 'set_conf_opt("fast", False)' to explicitly use the current optimization method (Nelder-Mead) instead of LevMar, the method used by confidence when "fast" is True.
Finally, plot the fit and the residuals of the fit:
sherpa> plot_fit_delchi()
The final plot is shown in Figure 3.
Figure 3: Plot of the fit and residuals
Examining the Pileup Fraction
It is now possible to see what fraction of the events are calculated as piled:
sherpa> print(get_pileup_model())
jdpileup.jdp
   Param        Type          Value          Min          Max      Units
   -----        ----          -----          ---          ---      -----
   jdp.alpha    thawed     0.239319            0            1
   jdp.g0       frozen            1  1.17549e-38            1
   jdp.f        thawed     0.922768         0.85            1
   jdp.n        frozen            1  1.17549e-38          100
   jdp.ftime    frozen          3.2  1.17549e-38            5 sec
   jdp.fracexp  frozen            1            0            1
   jdp.nterms   frozen           30            1          100

   1: 0.2342 0.752608
   2: 0.266102 0.204647
   3: 0.201565 0.0370981
   4: 0.114511 0.0050438
   5: 0.0520434 0.000548599
   6: 0.0197108 4.97245e-05
   7: 0.00639876 3.86313e-06
   8: 0.00181759 2.62613e-07
   9: 0.000458926 1.58687e-08
   10: 0.000104288 8.62992e-10
   11: 2.15442e-05 4.26659e-11
   *** pileup fraction: 0.247392
This command prints the pileup fractions from the most recent fit. From left to right, the columns report the number of piled photons, the fraction of frames containing that number of photons in the pileup region, and the fraction of the total events due to that number of piled photons. The overall pileup fraction for the observation is printed after the breakdown. The next paragraph analyzes the above results, which should clarify this explanation.
In this example, the first row indicates that ~23% of the frames contained a single photon in the pileup region and ~75% of the events were single-photon events. The second row shows that ~27% of the frames contained 2 photons in the pileup region and ~20% of the events were due to 2 photons, and so forth. For this observation, the total pileup fraction is ~24.7%.
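For this output, the quoted pileup fraction matches one minus the first entry of the events column, i.e. the fraction of events that are not clean single-photon events; verifying with the numbers above:

```python
# Second column of the jdpileup printout: fraction of events produced by
# k = 1, 2, ..., 11 photons piling up in a frame (values copied from above).
event_fractions = [0.752608, 0.204647, 0.0370981, 0.0050438, 0.000548599,
                   4.97245e-05, 3.86313e-06, 2.62613e-07, 1.58687e-08,
                   8.62992e-10, 4.26659e-11]

# The fractions sum to ~1; the fraction of events affected by pileup
# (k >= 2) is one minus the single-photon entry:
pileup_fraction = 1.0 - event_fractions[0]
print(round(pileup_fraction, 6))  # 0.247392, matching the printed value
```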
Saving the Fit Results
Before exiting Sherpa, save the fit results to a file:
sherpa> save("pileup_fit.save")
These results may then be restored in a later session with the restore command.
Scripting It
The file fit.py is a Python script which performs the primary commands used above; it can be executed by typing %run -i fit.py on the Sherpa command line.
The Sherpa script command may be used to save everything typed on the command line in a Sherpa session:
sherpa> script(filename="sherpa.log", clobber=False)
(Note that restoring a Sherpa session from such a file could be problematic since it may include syntax errors, unwanted fitting trials, et cetera.)
History
01 Dec 2008  updated for CIAO 4.1 
29 Apr 2009  new script command is available with CIAO 4.1.2 
04 Jan 2010  updated for CIAO 4.2 
19 Apr 2010  changed the recommended values for the nterms and fracexp parameters of the jdpileup.jdp model 
21 Jun 2010  An updated version of the Chandra ABC Guide to Pileup is now linked to this thread. 
13 Jul 2010  updated for CIAO 4.2 Sherpa v2: removal of SLang version of thread. 
06 Jan 2012  reviewed for CIAO 4.4: minor changes to show_all output 
13 Dec 2013  updated for CIAO 4.6: use acis_find_afterglow in lieu of acis_detect_afterglow 
04 Mar 2015  updated for CIAO 4.7, no content change 
03 Dec 2015  reviewed for CIAO 4.8, no content change 
31 Oct 2016  reviewed for CIAO 4.9, updated table and fit results, no content change 
25 May 2018  reviewed for CIAO 4.10, no content change 
10 Dec 2018  reviewed for CIAO 4.11, no content change 
13 Dec 2019  reviewed for CIAO 4.12, figures revised with Matplotlib. 
25 Mar 2022  updated for CIAO 4.14, no content change 
06 Dec 2022  updated for CIAO 4.15, added note on using delete_pileup_model to unset the pileup component. 