Sherpa: Modeling and Fitting in Python

Sherpa is a modeling and fitting application for Python. It contains a powerful language for combining simple models into complex expressions that can be fit to the data using a variety of statistics and optimization methods. It is easily extensible to include user models, statistics and optimization methods.

What can you do with Sherpa?

  • fit 1D data (including multiple datasets): spectra, surface brightness profiles, light curves, general ASCII arrays
  • fit 2D images/surfaces in Poisson/Gaussian regime
  • build complex model expressions
  • import and use your own models
  • use appropriate statistics for modeling Poisson or Gaussian data
  • import new statistics, with priors if required by the analysis
  • visualize a parameter space with simulations or using 1D/2D cuts of the parameter space
  • calculate confidence levels on the best fit model parameters
  • choose a robust optimization method for the fit: Levenberg-Marquardt, Nelder-Mead Simplex or Monte Carlo/Differential Evolution.
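The "complex model expressions" above are built by combining model components with arithmetic operators. A toy sketch of the idea in plain Python (illustrative only; Sherpa's real model classes live in sherpa.models and support many more operators):

```python
class Model:
    """Toy sketch of building composite model expressions via operator
    overloading. Not Sherpa's actual implementation."""

    def __init__(self, fn, name):
        self.fn = fn
        self.name = name

    def __call__(self, x):
        return self.fn(x)

    def __add__(self, other):
        # Combining two models yields a new, composite model
        return Model(lambda x: self(x) + other(x),
                     "({} + {})".format(self.name, other.name))

const = Model(lambda x: 2.0, "const")
line = Model(lambda x: 3.0 * x, "line")
combo = const + line
print(combo.name, combo(2.0))  # (const + line) 8.0
```

The composite behaves like any other model: it can be evaluated, named, and combined further, which is what makes expressions such as a polynomial plus a Gaussian possible.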

To install Sherpa, see the Source Installation and Binary Install sections. For an example of a Sherpa session, check the Run Sherpa section. More examples are given in a few Sherpa IPython notebooks:

Sherpa Quick Start

Image Fitting

Template Fitting

Bayesian Analysis of an X-ray Spectrum

For detailed documentation see: http://cxc.harvard.edu/sherpa

Install Sherpa

Sherpa can be installed from a binary distribution or built from sources.

The binary distribution is suited to users who want Sherpa up and running as quickly as possible in its standard form. The binaries are built and tested on Linux 32, Linux 64, and Mac OSX (>= 10.8).

Source installation is available for platforms incompatible with the binary builds, or for users wanting to customize the way Sherpa is built and installed.

Binary Install

The binary installation of Sherpa 4.8.1 was released on April 15, 2016. It has been tested on Linux 32, Linux 64, and Mac OSX (>= 10.8).

Sherpa binaries can be seamlessly installed into Anaconda Python. You need to add the Sherpa channel to your configuration and then install Sherpa:

$ conda config --add channels https://conda.binstar.org/sherpa
$ conda install sherpa

To test that your installation works, type:

$ sherpa_test

To update Sherpa:

$ conda update sherpa

If you do not have Anaconda Python and want to create the minimum environment for trying Sherpa check the README on GitHub.

Configuration Files

Sherpa expects a user configuration file, sherpa-standalone.rc, to be in the $HOME directory. If this file is not present, Sherpa will use the default internal configuration, with the I/O and plotting back ends set to pyfits and pylab.

matplotlib comes with a configuration file, matplotlibrc. For smooth behavior with Sherpa, be sure to set interactive=True in ~/.matplotlib/matplotlibrc.
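For example, the relevant line in the matplotlibrc file is:

```
# ~/.matplotlib/matplotlibrc
interactive : True
```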

Source Installation

It takes only a few simple steps to build and install Sherpa in any Python 2.7 environment on Linux or Mac OSX. For example, the Anaconda Python Distribution contains many of the scientific software components needed for analysis, and Sherpa fits seamlessly into that environment.

The Sherpa source code is available on GitHub: https://github.com/sherpa/sherpa

Download the Sherpa source tar or zip file: sherpa-4.8.1.zip or sherpa-4.8.1.tar.gz

Unpack the file you downloaded:

$ unzip sherpa-4.8.1.zip
$ tar xvf sherpa-4.8.1.tar.gz

To install:

$ cd sherpa
$ python setup.py install

The Sherpa code will be built and installed in the directory ${prefix}/lib/python2.7/site-packages/sherpa, where ${prefix} can be determined with:

$ python -c 'import sys; print(sys.prefix)'

Sherpa requires the standard Python packages and system compilers for the build:

Python: setuptools, numpy
System: gcc, g++, gfortran, make, flex, bison

In addition, reading FITS files requires astropy (which replaces pyfits), standard plotting requires matplotlib, and imaging requires ds9.

After the installation you can run the test suite to check that the installation was successful. This requires installing pytest-cov first:

$ pip install pytest-cov
$ sherpa_test

Note that ${prefix}/bin should be in your $PATH to run the test. The tests should succeed, but there may be two expected warnings if the ds9 binary and/or the XSPEC models are not found. These are not necessary for using Sherpa; they are only needed if you plan to perform X-ray analysis or use ds9 to display images.

Copy the source code repository

If you want to inspect the code, use it, or develop your own code in Git, you can clone the full repository structure with the default git clone:

$ git clone https://github.com/sherpa/sherpa.git

This is also useful if you want to contribute your own code to Sherpa via a pull request. Contributions are welcome. We advise following the astropy guidelines when developing your contribution, and we require that contributed code includes a description, unit and integration tests, and user documentation.

Custom Source Build

The setup.cfg file supports customizing the Sherpa build. For example, to use a local fftw library one needs to set the fftw-related configuration options in setup.cfg, changing the default /usr/local path to the local directories containing the header (.h) files and the libfftw3.so shared object file.
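As a sketch, that section of setup.cfg looks something like the following (the option names follow the commented template shipped with the source; check the file in your release, as names can change between versions):

```
[sherpa_config]
fftw=local
fftw-include-dirs=/usr/local/include
fftw-lib-dirs=/usr/local/lib
fftw-libraries=fftw3
```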

The same mechanism also allows you to build the XSPEC models.

XSPEC Model Library

Sherpa does not support XSPEC models for X-ray spectral analysis by default. However, it is possible to instruct Sherpa to build its XSPEC extension module by changing the build configuration options. Check the 4.8.1 Caveats section when building with XSPEC on Mac OSX.

You may need to build XSPEC yourself, and in any case to point Sherpa to existing binary libraries for XSPEC, cfitsio, and CCfits. Additionally, you will need to point Sherpa to the libgfortran shared library. These dependencies are required to build XSPEC itself, so they are assumed to be present on the system where you want to build Sherpa with XSPEC support. Here we assume that XSPEC has already been built:

$ git clone https://github.com/sherpa/sherpa.git
$ cd sherpa

Edit the setup.cfg file: find the XSPEC configuration section, uncomment the relevant options, and make sure they point to the locations of the XSPEC, cfitsio, CCfits, and gfortran libraries on your system.
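As an illustrative sketch (the paths below are placeholders, and the exact option names should be taken from the commented XSPEC section of your setup.cfg):

```
[xspec_config]
with-xspec=True
xspec_lib_dirs=/opt/xspec/lib
cfitsio_lib_dirs=/opt/xspec/lib
ccfits_lib_dirs=/opt/xspec/lib
gfortran_lib_dirs=/usr/local/gfortran/lib
```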

Then, build Sherpa in the standard way:

$ python setup.py install

Note that XSPEC needs a set of shared libraries at run time. Their location can be specified in DYLD_LIBRARY_PATH (on Mac OSX) or LD_LIBRARY_PATH (on Linux). For example:

$ export DYLD_LIBRARY_PATH=$HOME/xspeclib

where $HOME/xspeclib is the directory with all the required libraries.

Run Sherpa

You can import Sherpa into your IPython session:

(conda)$  ipython --pylab
Python 2.7.11 |Continuum Analytics, Inc.| (default, Dec  6 2015, 18:08:32)
Type "copyright", "credits" or "license" for more information.

IPython 4.1.2 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
Using matplotlib backend: Qt4Agg

In [1]: from sherpa.astro.ui import *
WARNING: imaging routines will not be available,
failed to import sherpa.image.ds9_backend due to
'RuntimeErr: DS9Win unusable: Could not find ds9 on your PATH'

These standard warnings are issued if you do not have ds9 in your PATH; imaging with ds9 will not be available. See the Dependencies section below.

Now simulate a simple shape (a parabola with errors):

In [2]: x = np.arange(-5, 5.1)

In [3]: y = x*x + 23.2 + np.random.normal(size=x.size)

In [4]: e = np.ones(x.size)

The data can now be loaded into Sherpa:

In [5]: load_arrays(1, x, y, e)

In [6]: plot_data()

For this example we know what model to use, so pick a polynomial and free up some of the parameters:

In [7]: set_source(polynom1d.poly)

In [8]: print(poly)
   Param        Type          Value          Min          Max      Units
   -----        ----          -----          ---          ---      -----
   poly.c0      thawed            1 -3.40282e+38  3.40282e+38
   poly.c1      frozen            0 -3.40282e+38  3.40282e+38
   poly.c2      frozen            0 -3.40282e+38  3.40282e+38
   poly.c3      frozen            0 -3.40282e+38  3.40282e+38
   poly.c4      frozen            0 -3.40282e+38  3.40282e+38
   poly.c5      frozen            0 -3.40282e+38  3.40282e+38
   poly.c6      frozen            0 -3.40282e+38  3.40282e+38
   poly.c7      frozen            0 -3.40282e+38  3.40282e+38
   poly.c8      frozen            0 -3.40282e+38  3.40282e+38
   poly.offset  frozen            0 -3.40282e+38  3.40282e+38

In [9]: thaw(poly.c1, poly.c2)

With everything set up, the data can be fit using the standard optimization method (levmar) and the chi2 statistic:

In [10]: fit()
Dataset               = 1
Method                = levmar
Statistic             = chi2
Initial fit statistic = 12190
Final fit statistic   = 5.40663 at function evaluation 8
Data points           = 11
Degrees of freedom    = 8
Probability [Q-value] = 0.713361
Reduced statistic     = 0.675829
Change in statistic   = 12184.6
   poly.c0        22.2341
   poly.c1        0.109262
   poly.c2        1.06812

In [11]: plot_fit_resid()

and an estimate of the 1-sigma parameter uncertainties is given by:

In [12]: conf()
poly.c0 lower bound:        -0.455477
poly.c1 lower bound:        -0.0953463
poly.c0 upper bound:        0.455477
poly.c2 lower bound:        -0.0341394
poly.c1 upper bound:        0.0953463
poly.c2 upper bound:        0.0341394
Dataset               = 1
Confidence Method     = confidence
Iterative Fit Method  = None
Fitting Method        = levmar
Statistic             = chi2gehrels
confidence 1-sigma (68.2689%) bounds:
   Param            Best-Fit  Lower Bound  Upper Bound
   -----            --------  -----------  -----------
   poly.c0           22.2341    -0.455477     0.455477
   poly.c1          0.109262   -0.0953463    0.0953463
   poly.c2           1.06812   -0.0341394    0.0341394
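For readers without Sherpa at hand, the fit above can be sketched with NumPy alone: a weighted least-squares solve for c0 + c1*x + c2*x**2, plus covariance-based 1-sigma errors. This is a stand-in for what fit() and conf() compute, not Sherpa's implementation, and the random seed (hence the exact numbers) differs from the session above:

```python
import numpy as np

# Simulate the same parabola-with-noise dataset
rng = np.random.default_rng(0)
x = np.arange(-5, 5.1)
y = x * x + 23.2 + rng.normal(size=x.size)
e = np.ones(x.size)                        # 1-sigma errors on y

# Weighted design matrix for c0 + c1*x + c2*x^2
A = np.vander(x, 3, increasing=True) / e[:, None]
b = y / e
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
c0, c1, c2 = coeffs

# Chi-squared statistic and degrees of freedom
model = c0 + c1 * x + c2 * x ** 2
chi2 = float(np.sum(((y - model) / e) ** 2))
dof = x.size - coeffs.size                 # 11 points - 3 free parameters = 8

# 1-sigma parameter errors from the covariance matrix (cf. conf()/covar())
cov = np.linalg.inv(A.T @ A)
sigma = np.sqrt(np.diag(cov))
print(coeffs, chi2, dof, sigma)
```

The recovered c2 should be close to the input value of 1, and the sigma values play the role of the confidence bounds reported by conf() (which, unlike this Gaussian approximation, follows the statistic surface directly).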


Dependencies

Data I/O and plotting require astropy (pyfits is deprecated) and matplotlib [mpl]. Imaging requires both ds9 and XPA.

[mpl] Hunter, J. D. (2007). Matplotlib: a 2D graphics environment. Computing in Science and Engineering, 9: 90-95. http://matplotlib.sourceforge.net.

Release Notes Sherpa 4.8.1 for Python

April 15, 2016

Sherpa 4.8.1 introduces support for new versions of the dependencies, along with some feature enhancements, bug fixes and additional new tests of the code.

The newly supported dependencies:

matplotlib v1.5

numpy 1.10 and 1.11 (with and without mkl support)

xspec v12.9.0i (when building from source)

astropy v1.1.2

region library v4.8 (from CIAO 4.8, included with Sherpa)

Please see the Caveats section for known issues regarding the XSpec support.

Here is a list of bug fixes, referenced by issue number on the Sherpa GitHub page (note that infrastructure changes are not shown):

#102: fix issues when writing out FITS files using the save_pha and save_table commands when using the astropy/pyfits backend (bug #46). Also fix the case where the notice2d_id and notice2d_image functions (and their ignore counterparts) are called with an invalid identifier (i.e. an identifier that is not an integer or string value). The error is now an ArgumentTypeErr with the message "'ids' must be an identifier or list of identifiers"; it was previously a NameError with the message "global name '_argument_type_error' is not defined".
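A toy sketch of the identifier check described above (hypothetical code; Sherpa raises its own ArgumentTypeErr, for which the built-in TypeError stands in here):

```python
def check_ids(ids):
    """Accept an identifier (an int or a string) or a list of them;
    raise on anything else. A hypothetical stand-in for Sherpa's check."""
    items = ids if isinstance(ids, (list, tuple)) else [ids]
    for item in items:
        if not isinstance(item, (int, str)):
            # Sherpa raises ArgumentTypeErr here; TypeError is a stand-in
            raise TypeError(
                "'ids' must be an identifier or list of identifiers")
    return list(items)

print(check_ids(1))         # [1]
print(check_ids(["a", 2]))  # ['a', 2]
```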

#107: Normalize plot labels. There are two main changes for plots of PHA data sets (and related quantities, such as the source, model, and ARF):

  • plots with matplotlib now use the LaTeX support - so that ‘cm^2’ is now displayed as a superscript; previously they were displayed directly. This does not change the display with the ChIPS backend.
  • plots created with plot_source used a different format to other plots when analysis=wavelength, in that LaTeX symbols were used for Angstrom and lambda (in other plots the string ‘Angstrom’ is used instead). The source plots now match the other plots.

#109: fix #103 and #113 in order to support matplotlib v1.5.

#116: fix bug #27. The astropy.io.fits/pyfits interface used deprecated functionality. The code was updated to use the replacement classes/methods when available, falling back to the original code if not. When the new symbols are available, the changes are to:

  • use astropy.io.fits.BinTableHDU.from_columns rather than astropy.io.fits.new_table
  • use astropy.io.fits.Header rather than astropy.io.fits.CardList

#137: upgrade CIAO region library to v4.8

#138: improve and fix issues in save_all function.

  • added a new argument to save_all: if outfile is None then the outfh argument is used to define the output handle (the argument can be any file-like argument, such as a file handle like sys.stdout or the output of open, or a StringIO object)

  • setting the clobber argument to save_all now means that the output file (the outfile argument, if not None) is deleted if it already exists; prior to this, the file would be appended to instead

  • the source expression is now saved correctly for most cases (e.g. when not using set_full_model); this is bug #97 but also affects non-PHA data sets

  • the background model expression was not always written out correctly when using PHA data sets

  • quality and grouping arrays of PHA data sets are now stored as 16-bit integers rather than floating-point values (this has no effect on the results, but matches the OGIP standard)

  • fixed up saving the grouping and quality arrays of background PHA data sets (this would only be an issue if the background is being fit, rather than subtracted)

  • basic data sets created with the load_arrays function are now written out by save_all as part of the script; this is intended for small datasets and may have problems with precision if used with floating-point arrays

  • calls to load_psf are now correctly restored (they may not have been written out correctly if multiple data sets were loaded)

  • user models are now written out to disk; this consists of two parts:
    • writing out the function that defines the model, which may or may not be possible (if not, a place-holder function is added to the output and a warning displayed).
    • the necessary calls to load_user_model and add_user_pars are now included in the output
  • the Python code created by save_all has undergone several minor changes:
    • it now explicitly imports the sherpa.astro.ui module, so that it can be run from the IPython prompt using the %run <filename> command, or directly as python <filename>
    • it uses the create_model_component function rather than eval to create model components (this is CXC bug 12146)
    • many optional arguments to functions are now given as name=value rather than being a positional argument, to make it clearer what the script is doing.
    • calls to load_data have been replaced by more-specific versions - e.g. load_pha and load_image - if appropriate
    • there have been several minor syntactic clean-ups to better follow the suggestions of PEP 8
    • when writing out code that defines a user model, there is no attempt to ensure that modules used by the function are available; these will need to be added to the output manually, either directly or as imports.

#151: Ensure AstroPy and Crates behave the same with gzipped files. Change the behavior of the AstroPy back end so that it matches that of Crates when given a file name which does not exist, but a compressed version, with the suffix .gz, does. The Crates behavior is to read the file. This extends to PHA files whose ancillary files - e.g. those stored in the BACKFILE, ANCRFILE, and RESPFILE keywords - are given as unzipped names, but only the gzipped names exist on disk.

As an example: if pha.fits.gz exists but pha.fits does not, then load_pha('pha.fits') will now load the file with either back end. If the response files are set to arf.fits and rmf.fits (via the ANCRFILE and RESPFILE keywords), but only the .gz versions exist on disk, then they will now also be loaded by the AstroPy back end.
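The fallback behavior described above can be sketched in plain Python (a hypothetical stand-in for the Crates-style lookup that the AstroPy back end now mirrors, not Sherpa's actual code):

```python
import os
import tempfile

def resolve_with_gz(path):
    """Return path if it exists; otherwise fall back to path + '.gz'.

    Hypothetical sketch of the gzip-fallback lookup; not Sherpa's
    actual implementation.
    """
    if os.path.exists(path):
        return path
    gz = path + ".gz"
    if os.path.exists(gz):
        return gz
    raise IOError("file not found: %s" % path)

# Demonstrate: only the .gz version exists on disk
tmpdir = tempfile.mkdtemp()
gzname = os.path.join(tmpdir, "pha.fits.gz")
open(gzname, "w").close()
resolved = resolve_with_gz(os.path.join(tmpdir, "pha.fits"))
print(resolved.endswith("pha.fits.gz"))  # True
```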

#153: Make the comparison test in calc_chi2datavar_errors less stringent, so as to include the case where sqrt(x) = 0.

#155: The get_draws function now accepts a user-provided covariance matrix. If no covariance matrix is provided, the covariance matrix computed by the default implementation is used. Note that covar() must be invoked before invoking get_draws if no covariance matrix is provided, otherwise get_draws will exit with an error.
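The relationship between covar() and get_draws() can be illustrated with a NumPy sketch: samples are proposed from the multivariate Gaussian defined by the best-fit parameters and a covariance matrix. The numbers below are hypothetical, and this shows only the role of the covariance input, not get_draws' actual MCMC implementation:

```python
import numpy as np

# Hypothetical best-fit values and (diagonal) covariance matrix,
# e.g. as computed by covar() after a fit
best_fit = np.array([22.23, 0.109, 1.068])
cov = np.diag([0.455, 0.0953, 0.0341]) ** 2

# Draw parameter samples from the Gaussian defined by that covariance
rng = np.random.default_rng(42)
draws = rng.multivariate_normal(best_fit, cov, size=5000)
print(draws.shape)  # (5000, 3)
```

Passing a user-supplied covariance matrix corresponds to replacing cov above; omitting it means the default covar() result must already be available.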

#158: Fix bug that prevented region ASCII files from being read in standalone Sherpa.

#165: Remove usage of deprecated numpy API.

#185: Fix the problem where, if the working directory contained a file called x or y, the sherpa.astro.ui.image_data() function would fail with the message DS9Err: Could not display image.

#187: Fix #92: a more meaningful message is given to the user when sherpa.astro.io is imported directly and no FITS back end is available.

#188: Fix #93. The sherpa_test script now tries to install the test dependencies before running the tests (but not the sherpatest package, which should be installed by the user if necessary, due to its footprint). If this is not possible, and the necessary dependencies (pytest) are not found, then a meaningful message is given to the user with instructions on how to install the dependencies. Also, the dependency on pytest-cov has been removed. Users can enable coverage reports from the command line if necessary.

#190: Fix #22 - The datastack package can now be used even if there is no available plotting backend. In this case, plotting functions will not be available, but the rest of the datastack functionality will.

4.8.1 Caveats

The following are known issues with the standalone 4.8.1 release:

XSpec support: Several issues have been encountered when optionally building from source with XSpec models on OSX platforms (Linux support appears unaffected). The issues include a name clash between the libcfitsio library and the astropy.io.fits Python extensions, which results in XSpec failing to load FITS files and possibly crashing.

SAO DS9 issue on Ubuntu 14.04: the ds9 binaries shipped with Ubuntu and installed through apt-get install do not seem to work as expected. Binaries downloaded directly from the SAO ds9 page seem to work instead. (Note: this issue was listed in the 4.8.0 release as well).

Wrong save_data header keywords: when using astropy as a FITS backend to save PHA data with save_data some header keywords are incorrectly set by Sherpa. In particular, range information for certain columns may be inaccurate (see issue #203 for details).

Archive Release Notes

Release Notes for the previous version of Sherpa are available on the [4.8.0] Release Notes page.