Fit models to your data in Python with Sherpa.

Sherpa

Sherpa is a modeling and fitting application for Python. It contains a powerful language for combining simple models into complex expressions that can be fit to the data using a variety of statistics and optimization methods. It is easily extensible to include user models, statistics, and optimization methods. It provides a high-level User Interface for interactive data-analysis work, such as within a Jupyter notebook, and it can also be used as a library component, providing fitting and modeling capabilities to an application.
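
A minimal sketch of that workflow, using the high-level sherpa.ui layer with simulated data (the dataset, error values, and component names below are purely illustrative):

import numpy as np
from sherpa import ui

# simulated data: a Gaussian line on top of a constant background
x = np.linspace(-5, 5, 101)
y = 3 * np.exp(-0.5 * (x / 1.2) ** 2) + 0.5 + np.random.normal(scale=0.1, size=x.size)

ui.load_arrays(1, x, y)
ui.set_staterror(1, 0.1 * np.ones_like(x))       # per-point Gaussian errors
ui.set_source(ui.gauss1d.line + ui.const1d.bg)   # combine simple models into an expression
ui.fit()                                         # fit with the current optimiser and statistic
ui.conf()                                        # confidence ranges on the best-fit parameters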

What can you do with Sherpa?

  • fit 1D (multiple) data including: spectra, surface brightness profiles, light curves, general ASCII arrays
  • fit 2D images/surfaces in Poisson/Gaussian regime
  • build complex model expressions
  • import and use your own models
  • use appropriate statistics for modeling Poisson or Gaussian data
  • import new statistics, with priors if required by analysis
  • visualize the parameter space with simulations or using 1D/2D cuts of the parameter space
  • calculate confidence levels on the best fit model parameters
  • choose a robust optimization method for the fit: Levenberg-Marquardt, Nelder-Mead Simplex or Monte Carlo/Differential Evolution.

Documentation for Sherpa is available at Read The Docs and also for Sherpa in CIAO.

A Quick Start Tutorial is included in the notebooks folder and can be opened as a Jupyter notebook.

License

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. A copy of the GNU General Public License can be found in the LICENSE file provided with the source code, or from the Free Software Foundation.

How To Install Sherpa

Full installation instructions are part of the Read The Docs documentation, and should be read if the following is not sufficient.

It is strongly recommended that some form of virtual environment is used with Sherpa.

Sherpa is tested against Python versions 3.7, 3.8, and 3.9.

The last version of Sherpa which supported Python 2.7 is Sherpa 4.11.1.

Using Anaconda

Sherpa is provided for both Linux and macOS operating systems running Python 3.7, 3.8, and 3.9. It can be installed with the conda package manager by saying

$ conda install -c sherpa sherpa

Using pip

Sherpa is also available on PyPI and so can be installed with the following command (which requires that the NumPy package is already installed).

% pip install sherpa

Building from source

Source installation is available for platforms incompatible with the binary builds, or for when the default build options are not sufficient (such as including support for the XSPEC model library). The steps are described in the building from source documentation.

History

Sherpa is developed by the Chandra X-ray Observatory to provide fitting and modelling capabilities to the CIAO analysis package. It has been released onto GitHub for users to extend (whether to other areas of Astronomy or in other domains).

Release History

4.14.0: 07 October 2021 DOI

4.13.1: 18 May 2021 DOI

4.13.0: 08 January 2021 DOI

4.12.2: 27 October 2020 DOI

4.12.1: 14 July 2020 DOI

4.12.0: 30 January 2020 DOI

4.11.1: 1 August 2019 DOI

4.11.0: 20 February 2019 DOI

4.10.2: 14 December 2018 DOI

4.10.1: 16 October 2018 DOI

4.10.0: 11 May 2018 DOI

4.9.1: 01 August 2017 DOI

4.9.0: 27 January 2017 DOI

4.8.2: 23 September 2016 DOI

4.8.1: 15 April 2016 DOI

4.8.0: 27 January 2016 DOI

Comments
  • Replace EmissionVoigt/AbsorptionVoigt models by Voigt model (fix #597)


    Summary

    Replaces the EmissionVoigt and AbsorptionVoigt models with a single model, Voigt1D. The EmissionVoigt and AbsorptionVoigt models will error out when an instance is created, pointing users to Voigt1D (as the parameter definitions have changed).
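
    As a rough sketch (not taken from the PR itself), the replacement is created like any other model component in the UI layer; check the Voigt1D documentation for the actual parameter names:

    from sherpa.astro import ui

    # the old EmissionVoigt/AbsorptionVoigt names now raise and point here
    line = ui.create_model_component('voigt1d', 'line')
    print(line)   # shows the new (changed) parameter definitions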

    Note

    The requirement for the Voigt profile is given in https://github.com/sherpa/sherpa/issues/597#issuecomment-464845565

    opened by dtnguyen2 71
  • Python 3.8 test failures


    Still open

    There appear to be serious problems on macOS but not Linux (installed via conda).

    Fixed by #696

    As conda now has python 3.8 available (but not with astropy or matplotlib just yet) I took the latest master branch out for a spin and there were several test failures:

    ========================================================================================= test session starts =========================================================================================
    platform linux -- Python 3.8.1, pytest-5.3.3, py-1.8.1, pluggy-0.13.1
    rootdir: /home/dburke/sherpa/sherpa-pr640, inifile: pytest.ini
    plugins: xvfb-1.2.0
    collected 2358 items                                                                                                                                                                                  
    ...
    sherpa/models/tests/test_regrid_unit.py ..............................................F.................................                                                                        [ 30%]
    ...
    sherpa/optmethods/tests/test_optmethods.py .E.E.E.E.E.E.E.E.E.E.E.E.E.E.E.E.E.E.E..E.E.E.E.E.E.E                                                                                                [ 33%]
    ...
    sherpa/utils/tests/test_psf_rebinning_unit.py FFFFFFFFFFFF.................                                                                                                                     [ 95%]
    ...
    

    It looks like some clean up is needed:

    a) test_optmethods.py

    *** Warnings created: 1
    1/1 {message : DeprecationWarning("PY_SSIZE_T_CLEAN will be required for '#' formats"), category : 'DeprecationWarning', filename : '/home/dburke/sherpa/sherpa-pr640/sherpa/optmethods/tests/test_optmethods.py', lineno : 110, line : None}
    

    b) test_regrid_unit.py and test_psf_rebinning_unit.py

    I wonder if these are NumPy 1.18 errors instead of Python 3.8 (although maybe a Python 3.8 change has led to these now being created/seen/...?)

    TypeError: only integer scalar arrays can be converted to a scalar index
    

    c) some general warnings

    sherpa/astro/data.py:2600
      /home/dburke/sherpa/sherpa-pr640/sherpa/astro/data.py:2600: SyntaxWarning: "is not" with a literal. Did you mean "!="?
        if coord is not 'logical':
    
    sherpa/astro/data.py:2609
      /home/dburke/sherpa/sherpa-pr640/sherpa/astro/data.py:2609: SyntaxWarning: "is not" with a literal. Did you mean "!="?
        if coord is not 'physical':
    
    sherpa/astro/data.py:2618
      /home/dburke/sherpa/sherpa-pr640/sherpa/astro/data.py:2618: SyntaxWarning: "is not" with a literal. Did you mean "!="?
        if coord is not 'world':
    
    sherpa/astro/data.py:2811
      /home/dburke/sherpa/sherpa-pr640/sherpa/astro/data.py:2811: SyntaxWarning: "is not" with a literal. Did you mean "!="?
        if coord is not 'logical':
    
    sherpa/astro/data.py:2826
      /home/dburke/sherpa/sherpa-pr640/sherpa/astro/data.py:2826: SyntaxWarning: "is not" with a literal. Did you mean "!="?
        if coord is not 'physical':
    
    sherpa/astro/data.py:2841
      /home/dburke/sherpa/sherpa-pr640/sherpa/astro/data.py:2841: SyntaxWarning: "is not" with a literal. Did you mean "!="?
        if coord is not 'world':
    
    sherpa/ui/utils.py:258
      /home/dburke/sherpa/sherpa-pr640/sherpa/ui/utils.py:258: SyntaxWarning: "is" with a literal. Did you mean "=="?
        (name is '_sherpa_version') or
    
    sherpa/ui/utils.py:259
      /home/dburke/sherpa/sherpa-pr640/sherpa/ui/utils.py:259: SyntaxWarning: "is" with a literal. Did you mean "=="?
        (name is '_sherpa_version_string')):
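
    For reference, a small generic sketch (not Sherpa code) of why these identity checks against string literals are fragile, and why Python 3.8 started warning about them:

    # 'is'/'is not' compare object identity, not value, so an equal but
    # non-interned string gives the "wrong" answer
    coord = "".join(["log", "ical"])   # equal to 'logical', but a distinct object
    print(coord is not 'logical')      # True  - plus a SyntaxWarning on Python 3.8+
    print(coord != 'logical')          # False - the comparison the code intended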
    
    type:bug priority:high 
    opened by DougBurke 62
  • Adding Sphinx documentation


    Demo

    I have a not-guaranteed-to-be-up-to-date version of the documentation at http://hea-www.harvard.edu/~dburke/playground/sherpa/index.html - this should not be taken as an official product as it is, in large part, an experiment. It may go away at any time!

    Background

    As I know there's been discussion about this at PyAstro16, I thought it worth mentioning what I've learnt in my investigations to help anyone who wants to work on this. Please do!

    There is a branch at feature/sphinx-docs which shows what I've done. I have a version on read-the-docs: output and configuration information.

    From memory, the issues include (they are not ordered by severity, can overlap somewhat, and may actually feed into other possible improvements we could make):

    • Move the contents of the existing doc/ directory somewhere else (e.g. notebook/) so that doc/ can be used for Sphinx.

    • I had trouble getting Sherpa to build on Read The Docs as soon as I tried getting it to run code (e.g. to make sure the screen output is up to date); I forget the details but the last log suggests I gave up as soon as it complained about missing yacc. My thoughts at the time are that the build would have to be done locally, with the results pushed to the web site, but I didn't investigate this any further. There have been a number of build improvements to Sherpa since then, but we still need to compile code.

    • I needed to add a customized IPython directive to grab the output from the Sherpa logger and include it into the Sphinx shell. It's pretty basic, but to save time re-inventing the wheel you can find it at 676cd6f77b5386d82065dd64439061bc1527a30a. If you don't do this then a lot of screen output is not included in the HTML output, which defeats the purpose just a little bit ;-)

      The version has since been updated (to address issues I found). It is less important when using the OO API, since most of the use of the logging infrastructure comes from the UI layer, but there are still places where it is used (e.g. the error analysis, although there I need to turn off the multi-cpu support for it to work properly, and I've yet to investigate why). [note added Fri May 6 10:54:23 EDT 2016]

    • The presence of optional/multiple code paths is a challenge, both for building (what is included and what isn't) and actual documentation text: for example how to best explain functionality that may only be present in one I/O backend. This is a combination of issues: technological, documentation content, and social (what do we want).

    • If you auto-generate the module layout then you end up with a confusing mess. Many Sherpa modules re-export content from other modules, which leads to a lot of repeated information and some very-large modules (e.g. sherpa.astro.ui), both of which lead to reader confusion and fatigue. If you don't autogenerate the layout, then you have a lot of work ahead of you, and you are still likely to hit some of these problems.

    • There has been some clean up of the code since I worked on this, removing unused and CIAO-specific code, but there are probably still files left over that need to be excluded from the auto-generated documentation. Also, some of the modules are too large (I know I've already mentioned this, but it needs repeating).

    • I am fairly sure some of the "clever" wrapping we do, to make some functionality easier to use from IPython, interacts poorly with Python docstrings, and hence the generated documentation. As an example, we allow users to say sherpa.astro.ui.set_source(sherpa.astro.ui.polynom1d.cpt) but sherpa.astro.ui.polynom1d is actually a sherpa.ui.utils.ModelWrapper instance around the sherpa.models.basic.Polynom1D class, which is where the docstrings should be (and currently are). Polynom1D is also available as sherpa.models.Polynom1D (just re-iterating the point about repeated documentation). We (@olaurino and I) have talked about this particular issue before (i.e. pass through the model docstring) but that hasn't been worked on (and the use case of making sure that the information is available to Sphinx wasn't thought of).

      I keep on confusing myself about this; the problem is not for the object that was created, but for the class that creates the model object. That is, help(sherpa.ui.polynom1d) returns information on the ModelWrapper class. [note clarified Wed May 11 12:02:38 EDT 2016]

    • I think we have good coverage of the UI API (although some is still missing and the content can be improved), except for a few specific areas such as the model instances and functions written in C, but the documentation of the lower-level routines is in a sorry state. Work on Sphinx documentation should help illuminate what areas need work, and can be done outside of this project (I have been trying to add stuff as I work on code fixes and improvements, in particular the C functions, but lots more to be done).

    • See also my comment on #22 which discusses a problem I had with the DS9-related code when running Sphinx [note added 16:25 EDT Friday 25 March 2016]

    • One suggestion that has been made is to not include the session-based API (what is referred to as the UI layer above - i.e. sherpa.astro.ui and sherpa.ui) in the Sphinx documentation. This would essentially have the UI layer documented as part of CIAO and the object-oriented API (which needs a lot of documentation) for the Sphinx documentation. [note added 14:48 EDT Tuesday 26 April 2016]

    type:enhancement area:docs 
    opened by DougBurke 56
  • Evaluate model on finer grid


    Release Note

    Sherpa users can now define arbitrary grids, called evaluation spaces, on which to evaluate individual model components, both in 1D and 2D. This can be useful in a number of cases, for instance when it is desirable to evaluate models on a finer grid than the one defined by the data, or in convolution models where information outside of the data range can be used to reduce boundary effects or to inform the evaluation inside the data space. Also, when plotting individual 1D model components (plot_source_component), if a specific evaluation space was attached to the model then the plot will be of the model evaluated over that evaluation space, not the data space. Other plotting commands should be unaffected by this change.

    Notes

    I based my work on what @DougBurke did in a branch on his fork, and then applied the following changes:

    EvaluationSpace class

    First of all I created an EvaluationSpace class that takes care of the grid management, simplifying the Regrid1D implementations (all the tests pass with probably about half the code, which is also more readable and maintainable).

    Regridder without a grid

    I didn't think a Regrid1D without a grid made much sense, so at first I removed such a possibility and some of the checks. For instance Regrid1D checked if the grid was None and just passed the original grid through. This seems overcomplicated. However, after going through more test cases, it seems like @DougBurke really wanted to have a way to say "this is a regrid model, but really it should simply pass through", and that one could even want to make the regrid mutable, i.e. to set the grid to None after instantiating it. I'd question the philosophy, but if that's the requirement, so be it. So rather than removing the tests and code I simply re-designed it to be cleaner and more explicit.

    Regrid1D and Model

    I have a problem with redefining the semantics of __call__ for Regrid1D. On the base class, calc actually calculates the model, while for Regrid1D it just applies the regrid to a wrapped model. I tried making Regrid1D not extend Model, but then it can't be used to define a composite model. So either Regrid1D should indeed be a Model and call should be used to calculate the model, or Regrid should not be a Model and the CompositeModel creation should be done differently, if at all.

    Now, I really don't think Regrid1D should be a Model, because that has a number of issues (and the IDE agrees: there are a lot of warnings, for a number of reasons), so that's what I did.

    There is a test that exercises what we expect the regridded model to look like. It expected a composite model of two actual models, but that complicated things only for the sake of this choice, which seems arbitrary anyway. So now the regridded model appears as a composite model with a name corresponding to the Regrid1D wrapper name + the name of the wrapped model, but the composition has only one part. Does that make any sense?

    Calculating the overlap between data space and eval space

    Something that surprises me is that if we don't explicitly set the response to zero when there is no overlap the model actually evaluates to something that is not zero (I tried!). I am guessing it's because the extrapolation is just not going to zero? That's most likely true but I didn't dig further.

    Class names

    I renamed Regrid1D as ModelDomainRegridded1D, which seems more explicit and self-documenting. And I renamed the __call__ method as apply_to, which again seems more explicit.

    Functional testing

    Unit tests were great for testing my refactoring (excellent work @DougBurke, thanks!) and I didn't need to change them unless I thought they were calling for bad design. I did remove some testing code that was dangling. We can bring them back, obviously.

    Test coverage

    It's pretty good, but we are missing tests for the ArithmeticFunctionModel wrapper.

    API

    I added a commit that addresses the high level API, i.e. how a user would use the UI to easily work with regridded models. In doing so I added a regrid method to the Model base class which takes care of instantiating the correct classes and returns an instance of the regridder wrapper. Unfortunately there is no way to know whether that makes sense for all of the Model subclasses. Maybe it's not important, but I wanted to make sure to note it.
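
    A rough sketch of how I expect the user-facing call to look (the grid values here are illustrative, and the exact behaviour should be checked against the final API):

    import numpy as np
    from sherpa.models.basic import Gauss1D

    x_data = np.linspace(0, 10, 11)    # coarse grid the data are defined on
    x_fine = np.linspace(0, 10, 201)   # finer grid on which to evaluate the model

    mdl = Gauss1D('g1')
    mdl.pos, mdl.fwhm = 5, 0.4

    # regrid() wraps the model: it is evaluated on x_fine and then resampled
    # back onto whatever grid the wrapped model is called with
    regridded = mdl.regrid(x_fine)
    y = regridded(x_data)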

    MultigridSumModel

    While working on this PR I found this class defined, but it looks like it's never used in Sherpa. Does anybody know anything about it?

    ArithmeticModel2D

    There is an issue with the current architecture, which treats 1D and 2D models as interchangeable when they are not. In the scope of this PR, this makes it impossible (or rather it would require ugly workarounds) to override the regrid method for 2D models. I am introducing a new parent class for 2D models. I made the cut in the most conservative way, i.e.:

    • models that declare ArithmeticModel as their superclass should be unaffected by the changes. However, they won't be regriddable. This includes user models, table models, and templates, at least for the time being.
    • models can declare ArithmeticModel1D or ArithmeticModel2D as the superclass. This will make them regriddable. I believe I changed the superclass of all the available models. Please let me know if I missed some or if I didn't mark them properly as 1D or 2D models.
    • given the new semantics, I renamed ArithmeticModel1D and its 2D counterpart to RegriddableModel1D and RegriddableModel2D, which makes more sense.

    There are some models with a questionable class hierarchy, e.g. SigmaGauss2D is a Gauss2D, which is clearly not true. I didn't do anything about that though: if it ain't broken, don't fix it. However I did have to fix the questionable Const2D is a Const1D relation, so I extracted a common superclass Const(ArithmeticModel) and made Const1D and Const2D extend the proper superclass.

    Overlap conditions

    In the original prototype by @DougBurke the overlap between spaces in the 1D case was pretty liberal: they would overlap as long as a portion of the interval was common to the evaluation spaces. For the 2D case, after a conversation with @juramaga we decided to stick to a more stringent condition that the spaces must have the same boundaries. Note that some requirements are covered by neither set of conditions, as the dataset available to the model is only the one pre-processed by Sherpa and sent to the model for evaluation, which doesn't cover those cases where the model might want to peek outside of those boundaries. In that case the condition should rather be that the custom regrid region should contain the data region. We really need to specify this stuff, although it might not make sense to do it in the context of this PR, i.e. I'd rather have this PR merged as long as the general design seems reasonable and implements the basic use cases, and then extend or change the implementation when new information is available.

    TODOs and FIXMEs

    We need to address those before we can merge this.

    EDITs

    I removed the sections on 2D and the API improvements as, despite the lack of feedback, I moved forward with those. There is now a basic 2D implementation, and the API (as shown by the test_regrid.py tests) is more user-friendly, with much of the infrastructure hidden away.

    opened by olaurino 51
  • PSF rebinning (fix #43)


    Release Note

    Sherpa now supports using a PSF with a finer resolution than 2D images. If Sherpa detects that the PSF has a smaller pixel size than the data, it will evaluate the model on a "PSF Space" that has the same resolution as the PSF and the same footprint as the data, then rebin the evaluated model back to the data space for calculating the statistic.
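
    A rough sketch of the intended use from the UI layer (the file names are placeholders; the rebinning itself happens automatically once the finer-resolution PSF is assigned):

    from sherpa.astro import ui

    ui.load_image('img.fits')               # 2D image data
    ui.load_psf('psf0', 'psf_fine.fits')    # PSF with a smaller pixel size than the data
    ui.set_psf('psf0')
    ui.set_source(ui.gauss2d.src)
    ui.fit()   # model evaluated on the PSF grid, then rebinned to the data grid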

    This PR fixes #43.

    See the comments for more details

    opened by olaurino 49
  • add sphinx documentation (fix #197)


    This PR is ready for review, although I am likely to try and improve the documentation.

    Summary

    Create the Sherpa documentation using Sphinx to build, Travis-CI to test, and Read The Docs to build and display. The aim for the documentation is to build a sensible skeleton, but not to have a feature-complete set of documentation.

    Notes

    There are three parts to getting Sphinx documentation:

    1. the infrastructure to get sphinx to build the documentation
    2. the contents of the documentation (mainly the .rst files but also fixes and additions to the module docstrings)
    3. getting docs built and available to the community

    Status

    The outline of the documentation is present, as are many of the concepts, but there are missing areas and areas that need work.

    All three points above have been addressed in this PR. I have set up a test build on Read The Docs - https://sherpa-test.readthedocs.io/en/latest/ - which requires manual re-building when new commits are made (so may not reflect the latest changes). Once the PR is integrated into the Sherpa branch I have a Read The Docs account set up to build the documentation from the master branch on every commit. It is possible to use this PR but build the documentation in some other manner (e.g. via GitLab) as there are only a few small, text-only, configuration files specific to the Read The Docs environment:

    • readthedocs.yml
    • docs/environment.yml
    • docs/rtd-pip-requirements

    In order to get the documentation to build on Read The Docs, which does not support "install any package you need", the build is done without requiring a functional Sherpa installation. It does this by mocking required packages and the compiled modules within Sherpa. The only required Python packages for the build (other than Sphinx and sphinx_rtd_theme) are numpy (since setup.py requires it) and six (although this could probably also be mocked).
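
    For illustration, a minimal sketch of this kind of mocking in a Sphinx conf.py (the module names are examples, and the PR may implement the mocking differently):

    # docs/conf.py (sketch): let autodoc import sherpa without the compiled
    # extensions or optional dependencies being available
    autodoc_mock_imports = [
        'sherpa.utils._utils',
        'sherpa.models._modelfcts',
        'sherpa.astro.xspec._xspec',
    ]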

    Previous issues with evidence of mocked symbols - e.g. in the routines provided in sherpa.astro.xspec, as described in https://sherpa-test.readthedocs.io/en/latest/model_classes/xspec_model.html - have been fixed.

    The output from the example code is not auto-generated: this is different to my previous approach in https://github.com/DougBurke/sherpa/tree/feature/sphinx-docs which was nice, but required the ability to build Sherpa and there was some interesting interaction between the Sherpa support code and some of the Sphinx utility routines with screen output.

    Notes

    All the commits up to "Add a Sphinx build" are clean up, improvements, and moving information around in the existing code. It is not clear if all these changes are needed (since dropping support for building the documentation for Python 2.7 I have had fewer problems, but I have not tried the documentation build without these changes), but I believe they are worthwhile improvements anyway. They could also be cherry-picked into one or more PRs that are accepted before this one (getting these in sooner rather than later will reduce the amount of merge conflicts in our lives).

    Read The Docs issues

    The main issue at the moment is that the build can time out, which has meant that I have removed the PDF and ePub generation to try and save time (they are also not useful at this moment). The runtime does seem somewhat random - presumably down to download times.

    I was close to getting Read The Docs to be able to build Sherpa - which would have avoided the need to mock most things (although it may still be useful to mock out optional elements to reduce the round-trip time to get new documentation) by taking advantage of the conda support. Unfortunately there was a problem with the FORTRAN compiler (the conda environment didn't seem to be getting passed in), which scuppered these plans. There is a GitHub issue on the RTD page about this but I've not had any response.

    type:enhancement area:build area:docs 
    opened by DougBurke 44
  • Code is both Python 2.7 and 3.5 compliant


    Description

    Resolve #76.

    Introduce support for Python 3 while keeping code base compatible with Python 2.7.

    Introduced a Python 3.5 travis build, but we can target more Python 3.x versions later.

    This PR also introduces new unit and integration tests.

    As a side effect, this PR also fixes #186.

    priority:high area:build area:code 
    opened by olaurino 42
  • Support vector backscales and bugfix for background modelling


    Summary

    Support fitting backgrounds to PHA datasets which have a variable BACKSCAL array (rather than a scalar), which can come from combining spectra (e.g. the CIAO contrib script combine_spectra) or from the data extraction process. In doing so a number of routines related to the scaling of background-to-source aperture data have seen adjustments to behavior and some enhanced functionality (such as sherpa.astro.ui.get_bkg_scale and the sherpa.astro.data.DataPHA.get_background_scale method). This fixes #797.

    One issue with the variable-array support is that it is only correct for linear responses, such as the normal ARF and RMF pairing, where doubling the model amplitude (all other things being equal) will double the number of predicted counts. This is not the case if the jdpileup model is used to model the ACIS pileup phenomenon: in this case a warning message will be displayed, but the fit can still be run (there is no guarantee on the accuracy of the result).

    There have also been changes to how the background scale factor has been calculated when fitting the background as part of the source spectrum, since it previously included the exposure-time and areascal ratios which are not valid for this case. Note that this will not affect results for the common case when the background has the same exposure time as the source aperture, but users who do fit background models (rather than subtracting the data) should check their fit results. This fixes #629.

    The sherpa.astro.background.BackgroundSumModel class has been removed as it is no longer used (this is a low-level class that is not used by the sherpa.astro.ui layer).

    Details

    This follows from #884 and the first 13 commits are from #884.

    There are a number of commits that re-work the tests, add new tests, or change existing tests to cover various features for which we want to either check that the behaviour is unchanged, or verify that we do see changes, in the later commits.

    For the source aperture, fitting a background model will result (conceptually) in a model expression (for the scalar version of the scaling value) of

    rsp(texp_src * (model_src + sum_i(bgnd_scale_i * model_bkg_i)))
    

    Although the Sherpa model language can describe this expression (expanding out the summation), the existing code used a special model class (BackgroundSumModel) to do this. I don't see the benefit of adding this extra class, so the code now just creates the expression directly. There was a comment in the BackgroundSumModel class about a potential problem with the way it was implemented, regarding what parameter values were going to be used to evaluate the models, so this is now avoided (although I don't believe it was actually a problem, but I didn't look into it closely). Removing the class also means that the correct model expression is used: the string representation of BackgroundSumModel could actually display the wrong value for complex situations, and I know (based on helpdesk tickets) that people have tried to recreate these expressions, since they are included in the ASCII save files. See the examples section.

    For the vector scale factor (I'm going to use only a single background model for simplicity) this calculation has to be changed, because the scale factor is defined in channel units, not the energy grid of the response. This means that the expression changes dramatically to

    rsp(texp_src * model_src) + bgnd_scale_vector * rsp(texp_src * model_bkg)
    

    That is, you need to fold the background through the response before multiplying it by the scale array. Fortunately, because of the way the response models work, we don't have to worry about filtering the vector: you apply it to the full grid, and then any filtering is handled elsewhere in the system. Rather than just leaving the vector as a NumPy array which would get multiplied by the evaluated model, I use the ArithmeticConstantModel to store the vector. This is potentially a change to this class, but we have no documentation on how it is meant to work, and using an array required no change (other than tweaking the derived name of the model if not set). Note that we do not really want to include a raw numpy array in a model expression because `vector * mdl` and `mdl * vector` give different results, thanks to NumPy broadcasting, which took me some time to trace down!
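
    A small generic illustration (not Sherpa code) of that asymmetry: NumPy broadcasts the left-hand array element-by-element before the other object's __rmul__ is consulted, so you get an object array of per-element products rather than a single composite:

    import numpy as np

    class Component:
        """Stand-in for a model: combining returns a single composite."""
        def __mul__(self, other):
            return 'composite'
        __rmul__ = __mul__

    vec = np.ones(3)
    print(Component() * vec)   # composite                               - one combined object
    print(vec * Component())   # ['composite' 'composite' 'composite']   - broadcast element-wise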

    Unfortunately, breaking the terms into rsp(..) + rsp(..) means that it requires the response to be linear, which it won't be if we use a PileupResponse1D model. In this case the user gets a warning message to point out that all bets are off, but the model will still be evaluated.

    The sherpa.astro.ui.get_bkg_scale and sherpa.astro.data.DataPHA.get_background_scale routines now return values for one background component, rather than the average value for all the background components associated with the dataset. It also now can calculate the scaling factor for background subtraction (units='counts') or when fitting simultaneously (units='rate').
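
    A sketch of the adjusted accessor (the argument names follow the description above; treat the exact signature as an assumption and check the docstring):

    from sherpa.astro import ui

    ui.load_pha('src.pi')   # placeholder PHA file with an associated background
    print(ui.get_bkg_scale(bkg_id=1, units='counts'))   # factor used for subtraction
    print(ui.get_bkg_scale(bkg_id=1, units='rate'))     # factor used when fitting the background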

    The last commit changes how background models are scaled to the source aperture (the case of background subtraction is unchanged). When subtracting the background data you say

      source - r_exposure * r_backscale * r_areascal * bgnd
    

    where r_x is the ratio of source x to background x (for simplicity this is a single component). When modelling the background we have to create the model (excluding the response term which would be applied to this whole expression)

      tsrc * (source_model + scale_factor * background_model)
    

    where tsrc is the source exposure time and scale_factor corrects the background model to the source aperture. The existing code uses the same scaling factor as above - namely

      scale_factor = r_exposure * r_backscale * r_areascal
    

    but this is wrong. The source and background models return rates and not counts. This is why the source model starts with 'tsrc * ': you need to convert the model (evaluated as a rate) to counts. This means that the ratio between the source and background exposure times DOES NOT FACTOR into scale_factor.

    The correct scale factor should therefore be

      scale_factor = r_backscale
    

    that is, we just correct for the aperture size when adding the background model to the source aperture. Fortunately this is not likely to be a big problem with existing data as I IMAGINE that the source and background exposures are often the same, and so it doesn't matter. It will matter for users fitting a background model to "blank-sky datasets", where the exposure time is different in the source and background spectra.

    Implementation choice

    When creating the background-to-source scaling value the code has to create a bunch of (src/bgnd) values that get multiplied together. The current code replaces any missing value with 1, but this means that /bgnd becomes 1/bgnd, when it may be better to just drop the term if either value is missing. These values are normally set, so it's a corner case that mainly arises if you are manually creating data.

    When applying the background model to the source aperture we now only use the BACKSCAL ratio (ie no exposure-time correction is included). It is unclear to me whether AREASCAL should be included or not: XSPEC treats it as an exposure correction (in general, from discussions with Keith), so I am also dropping it here, but this could be wrong. For Chandra data we don't have an AREASCAL correction so we don't really see this. We need XMM RGS examples (I think).

    Examples

    BackgroundSumModel display was wrong

    With the following annoyingly long example:

    import numpy as np
    from sherpa.astro import ui

    ui.clean()
    
    _data_x = [1, 2, 3, 4]
    _data_y = [10, 40, 30, 50]
    _data_y2 = [12, 45, 33, 49]
    _data_ebins = [0.1, 0.2, 0.3, 0.4, 0.5]
    
    _data_by1 = [2, 0, 5, 7]
    
    x = np.asarray([d for d in _data_x])
    y = np.asarray([d for d in _data_y])
    src = ui.DataPHA('example', x, y, exposure=10, backscal=0.1)
    
    by = np.asarray([d for d in _data_by1])
    bkg = ui.DataPHA('bkg1', x, by, exposure=50, backscal=0.2)
        
    src.set_background(bkg, id=1)
    
    # need to fake up an ARF to get an instrument
    arf = ui.create_arf(np.asarray([0.1, 0.2, 0.3, 0.4]), np.asarray([0.2, 0.3, 0.4, 0.5]))
    src.set_arf(arf)
    
    ui.create_model_component('const1d', 'cpt')
    cpt = ui.get_model_component('cpt')
    cpt.c0 = 35
    
    ui.create_model_component('const1d', 'bcpt')
    bcpt = ui.get_model_component('bcpt')
    bcpt.c0 = 3
        
    ui.set_data(src)
    ui.set_source(cpt)
    ui.set_bkg_source(bcpt, bkg_id=1)
    
    print(ui.get_model().name)
    

    the output from the master branch is

    apply_arf((10 * (const1d.cpt + 0.1 * (const1d.bcpt))))
    

    and from this PR it is

    apply_arf((10.0 * (const1d.cpt + (0.5 * const1d.bcpt))))
    

    The correct scaling factor for this data is src_backscal / bg_backscal = 0.1 / 0.2 = 0.5. Note that the exposure time is not included in this because the models are calculating rates, not counts (which you can see from the initial factor of '10 *' here which is the source exposure time).

    Example - vector scale value

    Here I fake a vector backscal column for the background dataset:

    In [1]: from sherpa.astro import ui
    
    In [2]: ui.load_pha('sherpa-test-data/sherpatest/3c273.pi')
    
    In [3]: bkg = ui.get_bkg()
    
    In [4]: bkg.backscal
    Out[4]: 1.872535141462e-05
    
    In [5]: import numpy as np
    
    In [6]: bkg.backscal = 2e-5 * np.ones(1024)
    
    In [7]: ui.set_source(ui.xsphabs.gal * ui.powlaw1d.spl)
    
    In [8]: ui.set_bkg_source(ui.powlaw1d.bpl)
    
    In [9]: ui.get_model()
    Out[9]: <BinaryOpModel model instance '(apply_rmf(apply_arf((38564.608926889 * (xsphabs.gal * powlaw1d.spl)))) + (scale1 * apply_rmf(apply_arf((38564.608926889 * powlaw1d.bpl)))))'>
    

    Note that the label scale1 is just a convenience (as are apply_arf and apply_rmf) as there is no model registered with this name; that is

    In [10]: print(scale1)
    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-11-8d73606ab588> in <module>
    ----> 1 print(scale1)
    
    NameError: name 'scale1' is not defined
    

    I spent a lot of time wrestling with potentially setting up a model component and decided that with our current code it would be better done in a later PR, if ever. The important thing here is to make this functionality possible, and we can improve it later.

    If you tried this with the master branch then it errors out here with

    sherpa In [9]: ui.get_model()
    TypeError: only size-1 arrays can be converted to Python scalars
    

    Example - background scaling

    With this set up, where I set the exposure time and backscale values for the spectrum to "nice" values:

    In [18]: ui.load_pha('sherpa-test-data/sherpatest/9774.pi')
    read ARF file sherpa-test-data/sherpatest/9774.arf
    read RMF file sherpa-test-data/sherpatest/9774.rmf
    read background file sherpa-test-data/sherpatest/9774_bg.pi
    
    In [19]: ui.set_exposure(exptime=100, id=1)
    
    In [20]: ui.set_exposure(exptime=1000, id=1, bkg_id=1)
    
    In [21]: ui.set_backscal(id=1, backscale=1)
    
    In [22]: ui.set_backscal(id=1, backscale=0.5, bkg_id=1)
    
    In [23]: ui.set_source(ui.gauss1d.gmdl)
    
    In [24]: ui.set_bkg_source(ui.const1d.bmdl)
    

    With this PR I get the following background model: you can see that (excluding the rmf and arf) we have

    source_exp * (source_model + backscale_ratio * background_model)

    where here backscale_ratio = 1 / 0.5 = 2

    In [25]: ui.get_model().name
    Out[25]: 'apply_rmf(apply_arf((100.0 * (gauss1d.gmdl + (2.0 * const1d.bmdl)))))'
    

    With the master branch you get the following, where you can see that the scaling factor has been multiplied by source_exp / bgnd_exp = 100 / 1000 = 0.1, which I claim is wrong:

    sherpa In [12]: ui.get_model().name
    Out[12]: 'apply_rmf(apply_arf((100.0 * (gauss1d.gmdl + 0.2 * (const1d.bmdl)))))'
    
    type:bug area:tests area:code 
    opened by DougBurke 41
  • Using XSPEC version 12.12.0, the model XSagnsed gives a Segmentation Violation on macOS


    @Marie-Terrell and I have been looking at a recent problem with XSPEC version 12.12.0 running on riverside (macOS: Big Sur); however, XSPEC and the Sherpa build were done on dudley (macOS: Mojave). The XSagnsed model gives a Segmentation fault, as can be seen from the following script:

    bash-3.2$ python tmp.py
    will use XSagnsed
    Segmentation fault: 11
    bash-3.2$ python tmp.py 1
    will use XSpowerlaw
    

    where

    bash-3.2$ cat tmp.py 
    import numpy
    
    from sherpa.astro import xspec
    
    def test(arg):
        if arg == 0:
            print('will use XSagnsed')
            mdl = xspec.XSagnsed()
        else:
            print('will use XSpowerlaw')
            mdl = xspec.XSpowerlaw()
        egrid = numpy.arange(0.1, 7.01, 0.1)
        elo = egrid[:-1]
        ehi = egrid[1:]
        _hc = 6.6260693e-27 * 2.99792458e+18 / 1.60217653e-9
        wgrid = _hc / egrid
        whi = wgrid[:-1]
        wlo = wgrid[1:]
        evals = mdl(elo, ehi)
    
    if '__main__' == __name__:
        import sys
        arg = 0
        if len(sys.argv) > 1:
            arg = int(sys.argv[1])
        test(arg)
    

    Moreover, there were two other minor issues:

    1. The test test_check_default_name is failing:
    bash-3.2$ pytest --pdb test_xspec.py 
    ============================= test session starts ==============================
    platform darwin -- Python 3.8.8, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
    rootdir: /Users/saoguest
    collected 480 items                                                            
    
    test_xspec.py .F
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    
        @requires_xspec
        def test_check_default_name():
            import sherpa.astro.xspec as xs
        
            for clname in dir(xs):
                if not clname.startswith('XS'):
                    continue
        
                cls = getattr(xs, clname)
                if is_proper_subclass(cls, (xs.XSAdditiveModel,
                                            xs.XSMultiplicativeModel,
                                            xs.XSConvolutionKernel)):
        
                    # At the moment we have some defaulting to xs... and some just ...
                    # (the forner are convolution cases which should probably be
                    # switched to drop the leading xs).
                    #
                    mdl = cls()
                    expected = clname.lower()
    >               assert mdl.name in [expected, expected[2:]]
    E               AssertionError: assert 'zxiwab' in ['xszxipab', 'zxipab']
    E                +  where 'zxiwab' = <XSzxipab model instance 'zxiwab'>.name
    
    test_xspec.py:258: AssertionError
    
    > /Users/saoguest/test_xspec.py(258)test_check_default_name()
    -> assert mdl.name in [expected, expected[2:]]
    (Pdb) mdl.name
    'zxiwab'
    (Pdb) expected
    'xszxipab'
    
    2. Using Python 3.8.8 on riverside, the fixture clean_astro_ui cannot be found:
    bash-3.2$ pytest --pdb test_xspec.py 
    ============================= test session starts =============================
    platform darwin -- Python 3.8.8, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
    rootdir: /Users/saoguest
    collected 252 items                                                            
    
    test_xspec.py E
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    file /Users/saoguest/test_xspec.py, line 217
      @requires_xspec
      def test_create_model_instances(clean_astro_ui):
    E       fixture 'clean_astro_ui' not found
    
    type:bug dep:xspec 
    opened by dtnguyen2 37
  • CentOS 5 compatibility of 4.9.1 conda package


    Hi,

    I'm trying to use the sherpa 4.9.1 linux 64 py36 conda package on a CentOS 5 system, but I'm guessing that you've moved to doing package builds on a different OS/version? I didn't see that called out in the release notes but:

    Package plan for installation in environment /data/fido/ska3/arch/x86_64-linux_CentOS-5:
    
    The following NEW packages will be INSTALLED:
    
        sherpa: 4.9.1-py36_0 sherpa
    
    Proceed ([y]/n)? y
    
    unagi: ipython
    Python 3.6.0 |Continuum Analytics, Inc.| (default, Dec 23 2016, 12:22:00) 
    Type 'copyright', 'credits' or 'license' for more information
    IPython 6.1.0 -- An enhanced Interactive Python. Type '?' for help.
    
    In [1]: import sherpa.ui as ui
    ...
    ImportError: /data/fido/ska3/arch/x86_64-linux_CentOS-5/lib/python3.6/site-packages/sherpa/models/_modelfcts.cpython-36m-x86_64-linux-gnu.so: ELF file OS ABI invalid
    

    Is there a chance you could just build these on CentOS 5? I think it only wins you more compatibility, but I don't know if you are dealing with other challenges.

    Thanks!

    type:bug area:build note:not-a-bug 
    opened by jeanconn 30
  • pileup test fails with XSPEC: covariance matrix is different


    Summary

    The replacement of the Fortran optimisation code with C++ has led to the pileup test failing with XSPEC 12.9.1p. The errors are in the diagonal elements of the covariance matrix; the best-fit parameters and statistic are the same (within the error tolerance). What is it about my system that is different enough from the Travis tests?

    Details

    I have master checked out - i.e. 3540951 - built against XSPEC 12.9.1p

    % git rev-parse HEAD
    354095107245d5190c6cfe54741c5f4c2d4f87f9
    % python -c 'from sherpa.astro import xspec; print(xspec.get_xsversion());'
    12.9.1p
    

    The pileup test fails with:

    % python setup.py test -a sherpa/astro/tests/test_astro.py::test_threads::test_pileup
    running sherpa_config
    warning: sherpa_config: built configure string['./configure', '--prefix=/home/djburke/sherpa/sherpa-fix-sphinx-docs/build', '--with-pic', '--enable-standalone', '--disable-maintainer-mode', '--enable-stuberrorlib', '--disable-shared', '--enable-shared=libgrp,stklib', '--enable-fftw', '--enable-region', '--enable-group', '--enable-stk', '--enable-wcs']
    
    running xspec_config
    Found XSPEC version: 12.9.1p
    running develop
    running build_scripts
    running egg_info
    running build_src
    build_src
    building extension "sherpa.estmethods._est_funcs" sources
    building extension "sherpa.utils._utils" sources
    building extension "sherpa.models._modelfcts" sources
    building extension "sherpa.optmethods._saoopt" sources
    building extension "sherpa.optmethods._tstoptfct" sources
    building extension "sherpa.stats._statfcts" sources
    building extension "sherpa.utils.integration" sources
    building extension "sherpa.astro.models._modelfcts" sources
    building extension "sherpa.astro.utils._pileup" sources
    building extension "sherpa.astro.utils._utils" sources
    building extension "sherpa.utils._psf" sources
    building extension "sherpa.astro.utils._wcs" sources
    building extension "sherpa.astro.utils._region" sources
    building extension "sherpa.astro.xspec._xspec" sources
    building data_files sources
    build_src: building npy-pkg config files
    writing sherpa.egg-info/PKG-INFO
    writing dependency_links to sherpa.egg-info/dependency_links.txt
    writing entry points to sherpa.egg-info/entry_points.txt
    writing requirements to sherpa.egg-info/requires.txt
    writing top-level names to sherpa.egg-info/top_level.txt
    reading manifest file 'sherpa.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    writing manifest file 'sherpa.egg-info/SOURCES.txt'
    running build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    Creating /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/lib/python3.6/site-packages/sherpa.egg-link (link to .)
    sherpa 4.10.0+177.g3540951.dirty is already the active version in easy-install.pth
    Installing sherpa_smoke script to /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/bin
    Installing sherpa_test script to /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/bin
    
    Installed /home/djburke/sherpa/sherpa-fix-sphinx-docs
    Processing dependencies for sherpa==4.10.0+177.g3540951.dirty
    Searching for six==1.11.0
    Best match: six 1.11.0
    Adding six 1.11.0 to easy-install.pth file
    
    Using /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/lib/python3.6/site-packages
    Searching for numpy==1.15.2
    Best match: numpy 1.15.2
    Adding numpy 1.15.2 to easy-install.pth file
    
    Using /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/lib/python3.6/site-packages
    Finished processing dependencies for sherpa==4.10.0+177.g3540951.dirty
    running test
    Searching for pytest-xvfb
    Best match: pytest-xvfb 1.1.0
    Processing pytest_xvfb-1.1.0-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/pytest_xvfb-1.1.0-py3.6.egg
    Searching for mock
    Best match: mock 2.0.0
    Processing mock-2.0.0-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/mock-2.0.0-py3.6.egg
    Searching for pyvirtualdisplay>=0.2.1
    Best match: PyVirtualDisplay 0.2.1
    Processing PyVirtualDisplay-0.2.1-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/PyVirtualDisplay-0.2.1-py3.6.egg
    Searching for pbr>=0.11
    Best match: pbr 4.3.0
    Processing pbr-4.3.0-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/pbr-4.3.0-py3.6.egg
    Searching for EasyProcess
    Best match: EasyProcess 0.2.3
    Processing EasyProcess-0.2.3-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/EasyProcess-0.2.3-py3.6.egg
    running build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    ======================================================================= test session starts ========================================================================
    platform linux -- Python 3.6.6, pytest-3.8.1, py-1.6.0, pluggy-0.7.1
    rootdir: /home/djburke/sherpa/sherpa-fix-sphinx-docs, inifile: pytest.ini
    plugins: xvfb-1.1.0, remotedata-0.3.0, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
    collected 1 item                                                                                                                                                   
    
    sherpa/astro/tests/test_astro.py F                                                                                                                           [100%]
    
    ============================================================================= FAILURES =============================================================================
    _____________________________________________________________________ test_threads.test_pileup _____________________________________________________________________
    
    self = <test_astro.test_threads testMethod=test_pileup>
    
        @requires_fits
        @requires_xspec
        def test_pileup(self):
            self.run_thread('pileup')
        
            fr = ui.get_fit_results()
            covarerr = sqrt(fr.extra_output['covar'].diagonal())
    >       assert covarerr[0] == approx(684.056 , rel=1e-4)
    E       AssertionError: assert 694.3746848979197 == 684.056 ± 6.8e-02
    E        +  where 684.056 ± 6.8e-02 = approx(684.056, rel=0.0001)
    
    sherpa/astro/tests/test_astro.py:218: AssertionError
    ----------------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------------
     Solar Abundance Vector set to angr:  Anders E. & Grevesse N. Geochimica et Cosmochimica Acta 53, 197 (1989)
     Cross Section Table set to bcmc:  Balucinska-Church and McCammon, 1998
    ===================================================================== 1 failed in 0.67 seconds =====================================================================
    

    I am using python=3.6, numpy=1.15, built with gcc 7.3.0.

    If I switch to a commit before the great-Fortran-removal then the test passes - that is with

    % git checkout 03e3fb5ad872470fae79c4e8433c9ee42ef74441
    Note: checking out '03e3fb5ad872470fae79c4e8433c9ee42ef74441'.
    
    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by performing another checkout.
    
    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -b with the checkout command again. Example:
    
      git checkout -b <new-branch-name>
    
    HEAD is now at 03e3fb5 Add doc-build status to README.md
    

    I get the following from the test:

    % python setup.py test -a sherpa/astro/tests/test_astro.py::test_threads::test_pileup
    running sherpa_config
    warning: sherpa_config: built configure string['./configure', '--prefix=/home/djburke/sherpa/sherpa-fix-sphinx-docs/build', '--with-pic', '--enable-standalone', '--disable-maintainer-mode', '--enable-stuberrorlib', '--disable-shared', '--enable-shared=libgrp,stklib', '--enable-fftw', '--enable-region', '--enable-group', '--enable-stk', '--enable-wcs']
    
    running xspec_config
    Found XSPEC version: 12.9.1p
    running develop
    running build_scripts
    running egg_info
    running build_src
    build_src
    building extension "sherpa.estmethods._est_funcs" sources
    building extension "sherpa.utils._utils" sources
    building extension "sherpa.models._modelfcts" sources
    building extension "sherpa.optmethods._saoopt" sources
    building extension "sherpa.optmethods._tstoptfct" sources
    building extension "sherpa.stats._statfcts" sources
    building extension "sherpa.utils.integration" sources
    building extension "sherpa.astro.models._modelfcts" sources
    building extension "sherpa.astro.utils._pileup" sources
    building extension "sherpa.astro.utils._utils" sources
    building extension "sherpa.optmethods._minpack" sources
    f2py options: []
      adding 'build/src.linux-x86_64-3.6/sherpa/optmethods/src/minpack/fortranobject.c' to sources.
      adding 'build/src.linux-x86_64-3.6/sherpa/optmethods/src/minpack' to include_dirs.
      adding 'sherpa/optmethods/src/minpack/_minpack-f2pywrappers.f' to sources.
    building extension "sherpa.optmethods._minim" sources
    f2py options: []
      adding 'build/src.linux-x86_64-3.6/sherpa/optmethods/src/fortranobject.c' to sources.
      adding 'build/src.linux-x86_64-3.6/sherpa/optmethods/src' to include_dirs.
    building extension "sherpa.utils._psf" sources
    building extension "sherpa.astro.utils._wcs" sources
    building extension "sherpa.astro.utils._region" sources
    building extension "sherpa.astro.xspec._xspec" sources
    building data_files sources
    build_src: building npy-pkg config files
    writing sherpa.egg-info/PKG-INFO
    writing dependency_links to sherpa.egg-info/dependency_links.txt
    writing entry points to sherpa.egg-info/entry_points.txt
    writing requirements to sherpa.egg-info/requires.txt
    writing top-level names to sherpa.egg-info/top_level.txt
    reading manifest file 'sherpa.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    writing manifest file 'sherpa.egg-info/SOURCES.txt'
    running build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']'
    customize Gnu95FCompiler
    Found executable /usr/bin/gfortran
    customize Gnu95FCompiler
    customize Gnu95FCompiler using build_ext
    Creating /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/lib/python3.6/site-packages/sherpa.egg-link (link to .)
    sherpa 4.10.0+149.g03e3fb5.dirty is already the active version in easy-install.pth
    Installing sherpa_smoke script to /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/bin
    Installing sherpa_test script to /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/bin
    
    Installed /home/djburke/sherpa/sherpa-fix-sphinx-docs
    Processing dependencies for sherpa==4.10.0+149.g03e3fb5.dirty
    Searching for six==1.11.0
    Best match: six 1.11.0
    Adding six 1.11.0 to easy-install.pth file
    
    Using /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/lib/python3.6/site-packages
    Searching for numpy==1.15.2
    Best match: numpy 1.15.2
    Adding numpy 1.15.2 to easy-install.pth file
    
    Using /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs/lib/python3.6/site-packages
    Finished processing dependencies for sherpa==4.10.0+149.g03e3fb5.dirty
    running test
    Searching for pytest-xvfb
    Best match: pytest-xvfb 1.1.0
    Processing pytest_xvfb-1.1.0-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/pytest_xvfb-1.1.0-py3.6.egg
    Searching for mock
    Best match: mock 2.0.0
    Processing mock-2.0.0-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/mock-2.0.0-py3.6.egg
    Searching for pyvirtualdisplay>=0.2.1
    Best match: PyVirtualDisplay 0.2.1
    Processing PyVirtualDisplay-0.2.1-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/PyVirtualDisplay-0.2.1-py3.6.egg
    Searching for pbr>=0.11
    Best match: pbr 4.3.0
    Processing pbr-4.3.0-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/pbr-4.3.0-py3.6.egg
    Searching for EasyProcess
    Best match: EasyProcess 0.2.3
    Processing EasyProcess-0.2.3-py3.6.egg
    
    Using /home/djburke/sherpa/sherpa-fix-sphinx-docs/.eggs/EasyProcess-0.2.3-py3.6.egg
    running build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    customize UnixCCompiler
    customize UnixCCompiler using build_ext
    get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']'
    customize Gnu95FCompiler
    customize Gnu95FCompiler
    customize Gnu95FCompiler using build_ext
    ======================================================================= test session starts ========================================================================
    platform linux -- Python 3.6.6, pytest-3.8.1, py-1.6.0, pluggy-0.7.1
    rootdir: /home/djburke/sherpa/sherpa-fix-sphinx-docs, inifile: pytest.ini
    plugins: xvfb-1.1.0, remotedata-0.3.0, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
    collected 1 item                                                                                                                                                   
    
    sherpa/astro/tests/test_astro.py .                                                                                                                           [100%]
    
    ===================================================================== 1 passed in 0.63 seconds =====================================================================
    

    Then again, comparing this test in the two versions shows that the error comes from a check of the covariance matrix, which wasn't available in the Fortran version:

    % git diff -r 354095107245d5190c6cfe54741c5f4c2d4f87f9 sherpa/astro/tests/test_astro.py
    ...
    @@ -214,28 +190,22 @@ class test_threads(SherpaTestCase):
             self.run_thread('pileup')
     
             fr = ui.get_fit_results()
    -        covarerr = sqrt(fr.extra_output['covar'].diagonal())
    -        assert covarerr[0] == approx(684.056 , rel=1e-4)
    -        assert covarerr[1] == approx(191.055, rel=1e-3)
    -        assert covarerr[2] == approx(0.632061, rel=1e-3)
    -        assert covarerr[3] == approx(0.290159, rel=1e-3)
    -        assert covarerr[4] == approx(1.62529, rel=1e-3)
    -        assert fr.statval == approx(53.6112, rel=1e-4)
    -        assert fr.rstat == approx(1.44895, rel=1e-4)
    -        assert fr.qval == approx(0.0379417, rel=1e-4)
    +        self.assertEqualWithinTol(fr.statval, 53.6112, 1e-4)
    +        self.assertEqualWithinTol(fr.rstat, 1.44895, 1e-4)
    +        self.assertEqualWithinTol(fr.qval, 0.0379417, 1e-4)
             self.assertEqual(fr.numpoints, 42)
             self.assertEqual(fr.dof, 37)
     
             jdp = self.locals['jdp']
    -        assert jdp.alpha.val == approx(0.522593, rel=1e-1)
    -        assert jdp.f.val == approx(0.913458, rel=1e-2)
    +        self.assertEqualWithinTol(jdp.alpha.val, 0.522593, 1e-1)
    +        self.assertEqualWithinTol(jdp.f.val, 0.913458, 1e-2)
     
             abs1 = self.locals['abs1']
    -        assert abs1.nh.val == approx(6.12101, rel=1e-2)
    +        self.assertEqualWithinTol(abs1.nh.val, 6.12101, 1e-2)
     
             power = self.locals['power']
    -        assert power.gamma.val == approx(1.41887, rel=1e-2)
    -        assert power.ampl.val == approx(0.00199457, rel=1e-2)
    +        self.assertEqualWithinTol(power.gamma.val, 1.41887, 1e-2)
    +        self.assertEqualWithinTol(power.ampl.val, 0.00199457, 1e-2)
     
             # Issue #294 was a problem with serializing the pileup model
             # after a fit in Python 3 (but not Python 2). Add some basic
    @@ -264,21 +234,16 @@ class test_threads(SherpaTestCase):
    ...
    

    As a check, I went back to the 354095107245d5190c6cfe54741c5f4c2d4f87f9 commit and commented out the five checks of the diagonal elements of the covariance matrix and the test passed. Instrumenting the test to display the diagonal terms gives me

    >       assert covarerr[0] == approx(684.056 , rel=1e-4)
    E       AssertionError: assert 694.3746848979197 == 684.056 ± 6.8e-02
    E        +  where 684.056 ± 6.8e-02 = approx(684.056, rel=0.0001)
    
    sherpa/astro/tests/test_astro.py:221: AssertionError
    ----------------------------------------------------------------------- Captured stdout call -----------------------------------------------------------------------
     - covarerr[0] = 694.3746848979197
     - covarerr[1] = 192.13254192587013
     - covarerr[2] = 0.6322390421678051
     - covarerr[3] = 0.29017303370770375
     - covarerr[4] = 1.7174108323954933
    

    which can be compared to the values from the test:

            assert covarerr[0] == approx(684.056 , rel=1e-4)
            assert covarerr[1] == approx(191.055, rel=1e-3)
            assert covarerr[2] == approx(0.632061, rel=1e-3)
            assert covarerr[3] == approx(0.290159, rel=1e-3)
            assert covarerr[4] == approx(1.62529, rel=1e-3)
    

    Looking at the test thread I see that it moves the fit parameters to "the best-fit location" before fitting, to save time in the test. So maybe there's something here that means that the optimiser hasn't been able to adequately describe the search surface, and hence the covariance matrix. This still doesn't explain why I see it and the Travis tests don't.

    The test uses the xswabs and powlaw1d models. I would be surprised if there were significant differences in the xswabs model between XSPEC 12.9.1p and whatever versions we use on Travis.
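
    For anyone wanting to replay this outside of the test harness, the check boils down to something like the following sketch (assuming the 'pileup' thread has already been run through the UI layer, as the test does; the data loading and fit set-up are elided):

    import numpy as np
    from pytest import approx

    from sherpa.astro import ui

    # ... load the data and run the fit as the 'pileup' thread does ...

    fr = ui.get_fit_results()
    covarerr = np.sqrt(fr.extra_output['covar'].diagonal())

    # the first check is the one that fails for me (694.37 vs 684.056)
    assert covarerr[0] == approx(684.056, rel=1e-4)
    assert covarerr[1] == approx(191.055, rel=1e-3)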

    For reference:

    % conda list
    # packages in environment at /home/djburke/miniconda3/envs/sherpa-fix-sphinx-docs:
    #
    # Name                    Version                   Build  Channel
    astropy                   3.0.4            py36h14c3975_0  
    atomicwrites              1.2.1                    py36_0  
    attrs                     18.2.0           py36h28b3542_0  
    backcall                  0.1.0                    py36_0  
    blas                      1.0                         mkl  
    ca-certificates           2018.03.07                    0  
    certifi                   2018.8.24                py36_1  
    decorator                 4.3.0                    py36_0  
    intel-openmp              2019.0                      118  
    ipython                   7.0.1            py36h39e3cac_0  
    ipython_genutils          0.2.0                    py36_0  
    jedi                      0.12.1                   py36_0  
    libedit                   3.1.20170329         h6b74fdf_2  
    libffi                    3.2.1                hd88cf55_4  
    libgcc-ng                 8.2.0                hdf63c60_1  
    libgfortran-ng            7.3.0                hdf63c60_0  
    libstdcxx-ng              8.2.0                hdf63c60_1  
    mkl                       2019.0                      118  
    mkl_fft                   1.0.6            py36h7dd41cf_0  
    mkl_random                1.0.1            py36h4414c95_1  
    more-itertools            4.3.0                    py36_0  
    ncurses                   6.1                  hf484d3e_0  
    numpy                     1.15.2           py36h1d66e8a_1  
    numpy-base                1.15.2           py36h81de0dd_1  
    openssl                   1.0.2p               h14c3975_0  
    parso                     0.3.1                    py36_0  
    pexpect                   4.6.0                    py36_0  
    pickleshare               0.7.5                    py36_0  
    pip                       10.0.1                   py36_0  
    pluggy                    0.7.1            py36h28b3542_0  
    prompt_toolkit            2.0.5                    py36_0  
    psutil                    5.4.7            py36h14c3975_0  
    ptyprocess                0.6.0                    py36_0  
    py                        1.6.0                    py36_0  
    pygments                  2.2.0                    py36_0  
    pytest                    3.8.1                    py36_0  
    pytest-arraydiff          0.2              py36h39e3cac_0  
    pytest-astropy            0.4.0                    py36_0  
    pytest-doctestplus        0.1.3                    py36_0  
    pytest-openfiles          0.3.0                    py36_0  
    pytest-remotedata         0.3.0                    py36_0  
    python                    3.6.6                hc3d631a_0  
    readline                  7.0                  h7b6447c_5  
    setuptools                40.4.3                   py36_0  
    sherpa                    4.10.0+177.g3540951.dirty           <pip>
    simplegeneric             0.8.1                    py36_2  
    six                       1.11.0                   py36_1  
    sqlite                    3.25.2               h7b6447c_0  
    tk                        8.6.8                hbc83047_0  
    traitlets                 4.3.2                    py36_0  
    wcwidth                   0.1.7                    py36_0  
    wheel                     0.32.0                   py36_0  
    xz                        5.2.4                h14c3975_4  
    zlib                      1.2.11               ha838bed_2  
    
    area:tests general:optimisation 
    opened by DougBurke 28
  • minor differences when no I/O between sherpa.ui.utils.Session and sherpa.astro.ui.utils.Session

    minor differences when no I/O between sherpa.ui.utils.Session and sherpa.astro.ui.utils.Session

    There are some places where a method can be called from sherpa.ui.utils.Session but not from sherpa.astro.ui.utils.Session when there is no I/O module. Perhaps the two should behave the same here.

    This is mainly a reminder for me to note the differences, and to provide something I can refer to from #1662

    type:question priority:low 
    opened by DougBurke 0
  • model-wrapped symbols now reference the actual model (fix #215)

    model-wrapped symbols now reference the actual model (fix #215)

    Summary

    Wrapped models, accessible from the UI layer, now reference the wrapped model rather than being devoid of any useful information. Fix #215

    Details

    Requires https://github.com/sherpa/sherpa/pull/1662. This was pulled out of https://github.com/sherpa/sherpa/pull/1638

    This adds documentation to the ModelWrapper class (for the implementor) and to model-wrapped classes (for the user). It is unclear what information is best for the user, but this is definitely a significant improvement and we can think about updates in a later PR if necessary. For instance, it might be helpful to include the list of parameters, but unfortunately this is rather hard to grab (you can't grab it from the model being wrapped, you'd need to create an instance of that model, which we could do but is not as safe as you might think), so it isn't done here.

    There's some "interesting" over-riding of __repr__ / __str__ going on here (i.e. this is existing code, for which I added some tests in #1662), but I didn't want to delve into it as the code works "as is".

    Examples

    We now get

    >>> from sherpa.astro.ui import *
    >>> help(gauss1d)
    Help on ModelWrapper in module sherpa.ui.utils:
    
    <Gauss1D model type>
        Create a gauss1d model instance.
        
        One-dimensional gaussian function.
    
        Instances can be created either as an attribute of gauss1d,
        as long as the attribute does not begin with an underscore,
        or by calling gauss1d directly.
        
        Examples
        --------
        
        The model, here called mdl, is returned but it's also stored in
        the session and can be returned with `get_model_component`:
        
        >>> m1 = gauss1d.mdl
        
        If the model has already been created with the same name then
        the old version will be returned, rather than creating a new
        instance:
        
        >>> m1 = gauss1d.mdl
        >>> m2 = gauss1d("mdl")
        >>> m1 == m2
        True
    
    >>> help(xscflux)
    <XScflux model type>
        Create a xscflux model instance.
        
        The XSPEC cflux convolution model: calculate flux
        
        Instances can be created either as an attribute of xscflux,
        as long as the attribute does not begin with an underscore,
        or by calling xscflux directly.
        
        Examples
        --------
        
        The model, here called mdl, is returned but it's also stored in
        the session and can be returned with `get_model_component`:
        
        >>> m1 = xscflux.mdl
        
        If the model has already been created with the same name then
        the old version will be returned, rather than creating a new
        instance:
        
        >>> m1 = xscflux.mdl
        >>> m2 = xscflux("mdl")
        >>> m1 == m2
        True
    

    which is not perfect, but a darn sight better than the previous output, which was

    Help on ModelWrapper in module sherpa.ui.utils object:
    
    class ModelWrapper(sherpa.utils.NoNewAttributesAfterInit)
     |  ModelWrapper(session, modeltype, args=(), kwargs={})
     |  
     |  Method resolution order:
     |      ModelWrapper
     |      sherpa.utils.NoNewAttributesAfterInit
     |      builtins.object
     |  
     |  Methods defined here:
     |  
     |  __call__(self, name)
     |      Call self as a function.
     |  
     |  __getattr__(self, name)
     |  
     |  __init__(self, session, modeltype, args=(), kwargs={})
     |      Initialize self.  See help(type(self)) for accurate signature.
     |  
     |  __repr__(self)
     |      Return repr(self).
     |  
     |  __str__(self)
     |      Return str(self).
     |  
    ... continuing like this with no helpful information to the user
    
    type:enhancement 
    opened by DougBurke 1
  • Fix an issue with show_bkg (fix #1645)

    Fix an issue with show_bkg (fix #1645)

    Summary

    The show_bkg routine could fail when the set of backgrounds changes per dataset. Fix #1645

    Details

    Requires https://github.com/sherpa/sherpa/pull/1662. This was pulled out of https://github.com/sherpa/sherpa/pull/1638

    The code changes make the original bug hard to spot, but the problem was that the routine changed the bkg_id parameter (which was passed in to the routine) inside a per-dataset loop, so when a second dataset was processed the code could be using a potentially invalid bkg_id value. The pylint checker reported the issue and it was fixed by using bidval for the loop variable.
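
    The pattern is easy to illustrate. The following is a minimal sketch of the bug and the fix, not the actual show_bkg code (list_data_ids, list_bkg_ids, and get_bkg are session methods used here for illustration):

    # Buggy pattern: the bkg_id argument is re-used as the loop variable, so
    # once the first dataset has been processed the caller's value is lost and
    # later datasets can be queried with a potentially invalid identifier.
    def show_bkg_buggy(session, bkg_id=None):
        for idval in session.list_data_ids():
            bids = [bkg_id] if bkg_id is not None else session.list_bkg_ids(idval)
            for bkg_id in bids:                      # clobbers the input argument
                print(session.get_bkg(idval, bkg_id))

    # Fixed pattern: a separate loop variable (bidval) leaves the argument alone.
    def show_bkg_fixed(session, bkg_id=None):
        for idval in session.list_data_ids():
            bids = [bkg_id] if bkg_id is not None else session.list_bkg_ids(idval)
            for bidval in bids:
                print(session.get_bkg(idval, bidval))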

    There are also some other code-style changes, including

    • there are a few "use a separate variable rather than the input argument" changes, including preferring idval to id where possible (as id is a Python built-in)
    • f-strings
    • use str(foo) rather than foo.__str__()
    • change the logic a bit (the result is the same but it's now more obvious what a value being None does, and it avoids a very small amount of work done in certain cases)
    type:bug 
    opened by DougBurke 1
  • Allow save_model/source/resid to work with 2D data (fix #1642)

    Allow save_model/source/resid to work with 2D data (fix #1642)

    Summary

    Fix save_model/source/resid calls when used with 2D data sets. Fix #1642

    Details

    Requires #1662. This was pulled out of #1638

    This fixes a bug I introduced during pylint fixes in #175 but we didn't have coverage to check it. We now have coverage thanks to #1486.

    type:bug 
    opened by DougBurke 1
  • Correct the Miniconda Installer for CI

    Correct the Miniconda Installer for CI

    Switching to the more official method of installing Miniconda for the Conda CI. The old method of chmod'ing and running the installer seems to have broken and is a bit outdated.

    opened by Marie-Terrell 3
Releases(4.15.0)
  • 4.15.0(Oct 11, 2022)

    Sherpa 4.15.0

    This release of Sherpa includes various enhancements, documentation updates, bug fixes, and infrastructure changes.

    • enhancements:
      • Improved validation of arguments when creating Data objects:
        • arrays sent to Data objects are now converted to ndarrays
        • the independent axis is now made read-only
        • the size of a data object is now fixed.
      • Filter settings made with notice/ignore are now reported to the screen for users of the UI layer.
      • Increased test coverage for plotting
    • documentation changes:
      • updated readthedocs to use pip and pytest instead of setup.py
      • several updates to documentation, including updates to fake_pha, calc_ftest, calc_mlr
    • Infrastructure changes:
      • Drop support for Python 3.7
      • Updates to start creating Python 3.10 Conda packages.
      • Use Numpy 1.20 for Python 3.8/3.9 and Numpy 1.21 for Python 3.10.
      • Moves toward PEP-517 with some distutils cleanup and more configuration moved from setup.py to setup.cfg
      • Various improvements to the GitHub Actions and GitLab workflows
    • bug fixes:
      • Ensure chi2xspecvar errors match XSPEC when 0 counts are present during background subtraction
      • Remove model instances from the global symbol table when clean is called
      • Addresses new warnings in the tests for Matplotlib 3.6.0 and AstroPy 5.1
      • Minor copy and paste error in fake_pha docstring
      • Test issues in test_fake_pha.py due to randomness

    Details

    #1329 - Build updates (move towards PEP-517 support) Move the Sherpa build system towards a more static configuration (PEP 517)

    #1412 - XSPEC: initialize XSPEC library at load time Simplify how and when the XSPEC model library is initialized. Fixes #1388

    #1477 - Add data validation Improve the validation of arguments when creating Data objects: arrays sent to Data objects are now converted to ndarrays, the independent axis is now made read-only, and the size of a data object is now fixed.
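
    As a rough sketch of what the validation means in practice (not taken from the PR's tests; the exact errors raised by the read-only and fixed-size checks are not shown here):

    import numpy as np
    from sherpa.data import Data1D

    # list inputs are converted to ndarrays on creation
    d = Data1D("example", [1, 2, 3], [4, 5, 6])
    print(type(d.x), type(d.y))   # both numpy.ndarray

    # the independent axis is read-only and the size of the object is fixed,
    # so replacing the axis with a different-sized array should now fail.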

    #1504 - CI: rework setup Update the GitHub workflows for Continuous Integration to better take advantage of the workflow language.

    #1505 - RTD use pip/pytest rather than setup.py for install/develop/test Change the RTD documentation to use pip and pytest rather than setup.py when building and testing Sherpa

    #1507 - tests: ensure DS9 window is closed after the tests Ensure that any DS9 process started by the Sherpa tests is closed when the tests are finished.

    #1510 - Tests: improve test coverage of plot code Improve the testing coverage for several plot routines.

    #1513 - Minor UI tweaks Added validation to the set_xlog/set_ylog/set_xlinear/set_ylinear commands and updated the save_model, save_source, save_resid, and save_delchi commands from sherpa.ui so they do not error out when not given an explicit identifier.

    #1516 - Consolidate plot and set_xlog handling Use the same labels for the plot and set_xlog/set_ylog/set_xlinear/set_ylinear functions. As a result, a number of names can no longer be used as an identifier for a dataset (bkg_chisqr, bkg_delchi, bkg_fit, bkg_model, bkg_ratio, bkg_resid, bkg_source).

    #1519 - Use direct html links instead of using [1] and a reference section This replaces the use of a "References" section with direct URL links in particular in cases where the reference is just a plain old HTML website and not a scholarly publication.

    #1523 - Ensure the region-lib code is re-generated when needed Fix a build issue with clang where the region-library code caused an error message

    #1529 - Add arguments to -i in sed calls in config script Update sed command in config script for improved portability

    #1532 - Add plot related tests Add more tests to cover corner case scenarios (mostly plotting related).

    #1534 - docs: fix calc_ftest/mlr documentation Note that calc_ftest and calc_mlr can return an array or scalar value, depending on the input.

    #1536 - Tests: add more tests of get_order_plot/plot_order Add a test for multiple orders and plot_order.

    #1537 - Remove model instances from the global symbol table when clean is called Ensure that models are removed when clean() is called.

    #1538 - Ensure chi2xspecvar errors match XSPEC when 0 counts are present during background subtraction When using the chi2xspecvar statistic for estimating PHA errors, ensure that the errors match those calculated by XSPEC when the background is subtracted and the source or background group contains 0 counts

    #1540 - Tests: fix the random seed for a fake_pha test Remove randomness from a test.

    #1547 - Fix CI failures from AstroPy 5.1 warning changes Allow the macOS tests to run on CI with AstroPy 5.1.

    #1554 - Install New libxcb Dependencies for Qt For Linux GitHub Actions, we now install libxcb dependencies required for Conda's Qt package.

    #1555 - Bump the ci-pip-arch workflow to ubuntu-20.04 Update to use ubuntu-20.04 as GitHub Actions is deprecating ubuntu-18.04

    #1562 - report the change in the filter because of calls to notice and ignore When a notice or ignore call is made - including variants like notice_id and ignore2d - the change in the filter is reported to the screen (for users of the UI layer).
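
    As a sketch of what UI users now see (the file name is made up, and the exact text of the report depends on the data, so no output is reproduced here):

    from sherpa.astro import ui

    ui.load_pha("src.pha")   # hypothetical PHA file
    ui.notice(0.5, 7.0)      # the change in the filter is reported to the screen
    ui.ignore(None, 1.0)     # variants such as notice_id and ignore2d behave the same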

    #1563 - Add Python 3.10 Conda Builds, Drop Python 3.7 and Numpy <1.20, Switch to Pip for Conda Builds Drop official support for Python 3.7 and start creating Python 3.10 Conda packages. Our Conda builds now use Numpy 1.20 for Python 3.8/3.9 and Numpy 1.21 for Python 3.10.

    #1566 - GitLab: Add version info to deploy test, update test OS, and allow interrupt. Updates to GitLab to support automated tests, save version info, set the default to interruptible so that pipelines can be canceled, and switch to sourcing conda instead of adding it to the path

    #1567 - Fix minor copy and paste error in fake_pha docstring Make UI layer fake_pha work for XMM/RGS by accepting input of arf=None when an RMF file is given.

    #1569 - Minor changes to the fake_pha test added in #1567 Internal changes to the test added for fake_pha in #1567

    #1572 - Tests: fix a logical test error Fix a logical error in a plotting test.

    #1574 - Avoid occasional test errors from Tcl_AsyncDelete Switch the matplotlib tests to use the Agg backend to avoid possible Tcl_AsyncDelete errors (issue #1509)

    #1578 - Gitlab: Reduce artifact retention period Reduces artifact retention from 2 weeks to 3 days due to the new GitLab 5 GB restrictions

    #1579 - Add back --python removed in #1566 Reinserts the --python flag that was accidentally removed in #1566

    #1580 - Add a versionchanged note for the fake_pha command Update the documentation to note the change to fake_pha in #1567

    #1585 - Tests: work around matplotlib 3.6.0 warning Allow the tests to pass when run with matplotlib version 3.6.0.

    #1589 - Set the language when building the documentation Ensure the language setting is defined for recent versions of Sphinx

    #1594 - gitlab-ci: Set build num to 0 if not set Set the default Sherpa build number to 0 in our GitLab workflow to know which package to run deploy tests on for official release builds.

    #1600 - Change the histogram1d and 2d routines so that they do not sort the input arguments Address some failures in histogram1d and histogram2d routines by making sure they copy the input arguments before sorting them.

    Caveats

    • There are known issues (#1602, #1605, #1606) in the histogram1d/histogram2d functions leading to failures which were not fully addressed in this release (see the failed case in the second histogram1d example). This is not core Sherpa functionality and numpy.histogram can be used if needed.
  • 4.14.1(May 20, 2022)

    Sherpa 4.14.1

    This release of Sherpa includes various enhancements, documentation updates, bug fixes, and infrastructure changes.

    • enhancements:
      • various plotting backend improvements
      • various i/o backend improvements
      • data object class improvements
      • basic support for Xspec 12.12.1
      • beta support for python 3.10
    • documentation changes:
      • updated build with CIAO documentation
      • Add a missing class (DataOgipResponse) to the documentation
      • Improves the docstrings for DataPHA
      • fixed typos in plot docs
      • clean up readthedocs issues such as missing bullets
    • Infrastructure changes:
      • updates for compatibility with Clang 12.0
      • updates to the regression tests
    • bug fixes:
      • Improve the FITS headers created when writing out a PHA file (to better match OGIP standards)
      • addresses delete_model_component call failing if a key does not exist
      • fixed issue with writing a PHA dataset as a table rather than a PHA file
      • ensure FITS column access is case insensitive
      • image handling and image coordinates

    Details

    #1030 - Update lev3fft test to use PHYSICAL not WCS Update a 2D image test to use physical rather than WCS coordinates for the fit.

    #1185 - Simplify switching the astro.io backend Add dummy_module, docs, and a test case for switching the sherpa.astro.io backend. I/O backends are now tried in a default order (crates, pyfits, dummy)

    #1191 - Make it easier to switch the plotting backend Make it easier to switch the plotting backend (note that we currently only have one backend implemented anyway)

    #1207 - Improve OGIP headers in PHA files Improve the FITS headers created when writing out a PHA file (to better match OGIP standards). Fixes #488 and #1209

    #1314 - test: add requires_data decorator Add a needed decorator for a test. There's no functional change to the code.

    #1317 - Add requires group test decorators Ensure the tests that require the group library are marked with the requires_group decorator.

    #1318 - Adjust a test for python 3.10 Adjust a test so that it passes with Python 3.10

    #1326 - Write-up that environment variable in setup.cfg need to be expanded Add a detail to developer docs about compiling in a conda ciao environment

    #1328 - Versioneer cleanup Update the versioneer module we use from 0.14 to 0.21 (latest).

    #1331 - optmethods: remove Deprecation warning about invalid escape string Remove a deprecation warning from sherpa.optmethods.optfcts and update some fake_pha tests to each use a fixed random seed.

    #1332 - Add Instrument test Improved the PSF tests, adding low-level tests of the tcdData class

    #1337 - Fix import statements Change from using __import__ to importlib.import_module.

    #1338 - clean up some C/C++ code in the _psf.cc file Minor clean up of C/C++ code

    #1339 - Add codecov.yml to fix path issue and update codecov uploader Fix a path parsing issue on codecov and update for the new codecov uploader

    #1340 - Change how the sherpa config file is accessed Support environments where the HOME environment variable is not set when accessing the Sherpa configuration file.

    #1347 - Refactored array classes because std::vector has a non-virtual destructor Since std::vector has a non-virtual destructor, in this PR the refactored Array (1D and 2D) classes no longer inherit from std::vector

    #1348 - Minor internal code cleanup tracking errors Minor internal changes to how some errors are raised. There should be no user-visible differences due to these changes.

    #1351 - Update the sherpa-test-data repo to the latest from main Updates the reference to the latest copy

    #1353 - Fix issue with delete_model_component The delete_model_component call could fail in certain circumstances, so fix those cases. Fixes #16

    #1355 - Minor cleanup of the I/O code Minor internal clean-up of the I/O code. There should be no user-visible changes.

    #1359 - Improve writing out data objects as a table (fix #47) Fix writing out of PHA datasets as a generic table (both ASCII and FITS formats). Fixes #47

    #1360 - Add missing export statement to clang 12 note Update documentation to be clearer on how to build on clang 12 and avoid compiler error on implicit function declarations

    #1361 - Ensure FITS column access is case insensitive for the pyfits I/O backend (fix #143) Allow FITS column access with the pyfits backend to be case insensitive. Fixes #143.

    #1365 - Simplify a test (fix #541) Remove duplicated code in a test (fix #541).

    #1370 - Minor tweaks to handling of PHA settings Improve the checking of the factor and rate settings of set_analysis and the plot_fac attribute of the DataPHA class, and add a few tests that exercise some corner cases of the DataPHA class.

    #1374 - Address aarch/ppc64le test failures Update tests so that they pass on aarch64. Fixes #1372

    #1376 - Very-minor tweaks to docs/tests Fix a minor test issue and an unused reference in the documentation.

    #1387 - minimize C++ code duplications (rebase) Remove code duplication in the XSPEC module.

    #1399 - Allow conf/covar/proj to be used with XSPEC model parameters (regression) Ensure that conf, covar, and proj can be called with an XSPEC model parameter. Fixes #1397.

    #1401 - Modified region code to remove implicit prototype error This change updates the extern region code in sherpa to resolve a missing prototype which causes build issues with clang 12.0.

    #1403 - Updates to fix typos in the plot documentation Update to fix a few typos in the plot documentation

    #1410 - Add script and docs for pre-commit hook to update copyright Add example script for a git pre-commit hook that checks the copyright is up to date.

    #1411 - Add sphinx_rtd_theme to sphinx extensions and set min version Our readthedocs pages have had some visual issues for a while (in particular, bullet points were missing from the output). This fixes that by ensuring we are using a recent version of the sphinx theme.

    #1413 - Add more editor/OS exclusions to .gitignore Add VSCode workspace summary files and macOS directory preview files to .gitignore

    #1414 - Improve the handling of image coordinates Address an error when converting between different coordinate systems for images. Fixes #1380.

    #1415 - RTD fix missing and changed symbols Update the ReadTheDocs build to cope with changes in #1191

    #1418 - Fix minor test issues Address several minor test issues.

    #1419 - Keep .rc file backwards compatible Default rc files will work in older versions of Sherpa as well.

    #1421 - Address usability issues with Data2DInt and DataIMGInt classes Fix issues using the Data2DInt and DataIMGInt classes (#1379).

    #1423 - fix a compiler bug on macOS update to address compiler issue when compiling on macOS

    #1424 - rm primini stat method This PR removes the primini iterative statistical method. Please see issue #1392 for details

    #1425 - Allow users to freeze models Models can now be frozen or thawed, which just calls the requested method on all the parameters of the model (except for "alwaysfrozen" parameters, which are skipped). This is only relevant for users who are accessing the objects directly, since the UI versions of freeze and thaw already implemented this behavior. Fixes #1080.
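
    A short sketch of the new behavior for code that works with model objects directly (Gauss1D is just used as an example model):

    from sherpa.models.basic import Gauss1D

    mdl = Gauss1D("g1")
    mdl.freeze()                            # freezes every parameter
    print([p.frozen for p in mdl.pars])     # all True
    mdl.thaw()                              # thaws them again; "alwaysfrozen"
                                            # parameters would be skipped
    print([p.frozen for p in mdl.pars])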

    #1429 - Tests: fix up test problems shown by pytest 7.0.0 Fix up several tests so they can be run with pytest 7.0.0. Fixes #1428

    #1434 - Fix aarch64 test failures Allow the tests to pass on aarch64 (numerical precision checks).

    #1435 - Minor clean up of the data code Improves the docstrings for DataPHA, adds tests for several corner cases, and makes some minor improvements to the code.

    #1440 - Minor clean up of the data routines Address several minor code-style changes in the data handling (code and tests).

    #1441 - Clean up the data tests Clean up of the data tests which were added as part of the data-class rework in #614.

    #1442 - Ensure channel and count fields for PHA are aliases of independent and dependent axes Ensure that channel is synonymous with the independent axis, and counts with the dependent axis, for the DataPHA class.

    #1451 - Tests: ensure sherpa smoke tests are done in a clean environment

    #1453 - Tests: mark those that require the group module Note a number of tests that require the group module.

    #1454 - RTD: update copyright year and fix WCS library Update the sphinx build instructions so that changes from PR #1414 will be documented correctly on systems where the documentation is being built without first compiling the code (such as read-the-docs).

    #1455 - RTD: update the build-from-CIAO documentation Update the "build with CIAO" documentation to match the CIAO 4.14 conda instructions and automate the process.

    #1458 - Improve several issues when building and testing Sherpa Minor improvements to the build and test installation. The use of setuptools has been restricted to match the current advice from NumPy (< 60, from https://numpy.org/devdocs/reference/distutils_status_migration.html).

    #1461 - Minor grouping cleanup Minor internal change to how the group routines work for PHA objects.

    #1462 - Docs: fix up documentation of ui.set_grouping Fix the documentation for set_grouping to match the code.

    #1463 - rm a redundant call to calc_h Removed a redundant call to the calc_h method in the fdJac class

    #1464 - RTD: include the DataOgipResponse class in the documentation Add a missing class (DataOgipResponse) to the documentation.

    #1473 - Data test minor cleanup Minor test clean-ups.

    #1474 - Add basic tests of data validation Ensure we test a number of corner cases related to data validation.

    #1484 - Include ciao region patch Allow the CXC Data Model library to be used to parse region files, if available. This is primarily intended for building Sherpa as part of CIAO.

    #1485 - get_header_data is an official method of the astro.io backend An internal change to make sure that the get_header_data routine is part of the astro I/O backend API.

    #1490 - Basic support for XSPEC 12.12.1 Allow Sherpa to be built with XSPEC 12.12.1. It does not provide access to the three new models - polconst, pollin, and polpow - added in this release of XSPEC.

    #1492 - Make tests pass on 3.10 One of the tests fails in Python 3.10 because the error message has been changed.

    #1493 - Remove deprecated distutils.version.LooseVersion The distutils.version.LooseVersion class is now marked as deprecated so remove its use when building Sherpa and when importing the Sherpa XSPEC module.

    #1494 - Update AX_PYTHON_DEVEL.m4 from serial 17 to 25 This update covers the cleanup of distutils in the configure files for grplib (and stklib in the case of standalone sherpa).

    #1495 - Note that we support Python 3.10 Note that Sherpa can be built using Python 3.10 and add two Python 3.10 CI test runs.

    #1496 - DS9 update from 8.2.1 to 8.3 Update to use ds9 v8.3 since v8.2.1 is no longer available

    #1497 - Tests: ensure matplotlib windows are closed after all tests are run Ensure that our tests can run on CI cleanly.

  • 4.14.0(Oct 7, 2021)

    Sherpa 4.14.0

    This release of Sherpa includes various documentation updates, bug fixes, enhancements, and infrastructure changes.

    • enhancements:
      • filtering and grouping of binned (1D) spectral data has been improved, with changes to the default behavior and many bug fixes; this can result in changes to the statistics, degrees of freedom, and energy flux compared to the previous version for the same data with the same filter.
      • updates to allow users to change the hard limits of XSPEC model parameters
      • the sample_flux routine now returns correct information for the clip column
    • documentation changes:
      • improved PHA simulation documentation
      • improved Filtering and grouping of PHA data documentation
      • added sherpa.image module documentation
      • added section on running tests to developer docs
    • Infrastructure Changes:
      • updates to support Apple ARM
      • update to support Xspec version 12.12
      • update fftw from version 3.3.8 to 3.3.9
      • clean up of compiler and sphinx warnings
      • changes to support gcc 9.3.0 in conda defaults
      • updates to support Python 3.9, including a readline 8.1 upgrade and a NumPy 1.19 minimum (NumPy 1.18 minimum for Python 3.7/3.8)
      • test infrastructure clean up and updates
    • bug fixes:
      • updates to fix several 'unable to parse region string: None' errors
      • fix issue where save_all() of a loaded image with no region filter would fail on reload
      • fixed an issue where calling plot_model() before notice or ignore could lead to filters not being applied
      • fix to error out instead of crash when grouping data using an unsupported method

    Details

    #1031 - Update fwhm calculation Update the estimation of FWHM for 1D profiles, and hence the guess method for Gauss1D and related routines. The 2D models use the same routine, so they also see these changes.

    #1073 - Allow fake_pha to be called with an identifier of None The fake_pha command now treats id=None as the default id. This addresses #1064.

    #1106 - The sample_flux routine now returns the correct information for the clip column The sample_flux routine now returns correct information for the clip column (that is, it matches the clipping done by this routine). There may be changes to the reported error ranges because of this change.

    #1107 - Add some grating related keywords for _repr_html_ for ARF, RMF, PHA Add keywords to the default output in the _repr_html_ for some X-ray classes.

    #1113 - XSPEC: support etable table models Allow XSPEC ETABLE table model files to be read by load_xstable_model by setting the etable parameter.
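
    A minimal sketch of loading an ETABLE file (the file name is illustrative):

    from sherpa.astro.ui import load_xstable_model

    # etable=True marks the file as an XSPEC ETABLE table model rather than
    # the default table-model type.
    load_xstable_model("etbl", "my_etable_model.fits", etable=True)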

    #1118 - Improve test cases for source plots with PHA data Add additional test cases for source plots with PHA data to check out rarely-used combinations.

    #1127 - PHA filtering changes Improve some corner cases for filtering PHA data, including fixing #921 (channel bounds are integers, not half-integers).

    #1131 - Test: fix a test failure when XSPEC installed but no fits backend Avoid running a test we know to fail for an uncommon combination of options.

    #1134 - This is a doc-string only update for load_multi_XXX The previous example implied that a single spectrum may contain both positive and negative orders. That can happen only after coadding - and in that case one would co-add the ARFs as well.

    #1135 - Added docstring and removed unused no-op class Remove sherpa.astro.models.MultiResponseSumModel, which did nothing and seems to be unused (except in a test that explicitly skips this class).

    #1136 - Replace error message "too few columns" with a more generic word Change the wording of an error message. Potentially breaking if someone tests for the wording of the error message in their code, but certainly more correct.

    #1137 - Correct docstring for fake_pha The old docstring claims that the background is just scaled and not simulated, but there is definitely a call to poisson_noise for the background in the code.

    #1138 - Address NumPy 1.20 bool/int/float deprecation warnings NumPy 1.20 notes that numpy.bool, numpy.int, and numpy.float will be removed and users should just use bool, int, and float instead, so these symbols have been changed. This is a follow on to #1092

    #1140 - Fix some remaining master/main renames Clean up references to master, replacing them with main

    #1143 - add basic test and documentation for Integrate1D Add basic testing of the integrate1d model class and improve its documentation.

    #1144 - A minor cleanup of estmethods A minor cleanup of the sherpa.estmethods code.

    #1147 - Tests: support use of data directory being local, not absolute Allow a test to pass when given a relative path for the test data directory.

    #1154 - Update docs on RTD builds Update docs on how the docs are built in CI

    #1155 - Remove Meta class for fits hdu headers Remove the Meta class for header information, which was used only in a rudimentary way and acted almost, but not quite, like a dict

    #1163 - XMM/RGS triggers a notice/ignore bug because channels are in reverse-energy order in RMF Recent work on the ignore / notice logic in #1127 implicitly assumes that the energy grid in PHA/ARF/RMF files is in increasing order, which does not hold true for XMM/RGS data

    #1164 - Fix plots with wavelength For some data files, plots with a reversed grid (i.e. plots in wavelength) were drawn with disjoint lines since #906.

    #1165 - Add a section on running the tests to the developer docs Add a section on running the tests to the developer docs

    #1168 - Logging tweaks Include the sherpa.utils.logging module in the documentation and update a test to use the caplog feature of pytest (the code had been written to work with the Python 2.7 version of unittest).

    #1170 - Update Python and particularly NumPy versions in documentation Updates to doc pages that list dangerously old NumPy versions

    #1171 - Improve the grouping/filtering tests Add a number of tests for corner cases of filtering and grouping of PHA data.

    #1172 - Improve data pha docs Improve the documentation of the filtering and grouping of PHA data.

    #1173 - Improve DataPha documentation Minor improvements to the documentation of the DataPHA class.

    #1175 - Store UI contour plots in a dictionary, not directly Change the internal storage handling of contour objects

    #1177 - minor rework of image handling for ui layer Change the internal storage handling of image objects and improved testing

    #1178 - Define physical constant hc in fewer places hc (as in "Planck constant * speed of light") is a physical constant that does not depend on the DataPHA and thus it should not be defined as a class attribute.

    #1180 - Minor improvements to model/parameter documentation Minor fixes and improvements to the documentation (both docstrings and RTD) for the model- and parameter-related code.

    #1182 - Simulations with multiple responses Enable simulations with multiple ARF/RMF present (e.g. LETG/HRC).

    #1183 - Check parameter link behavior Minor tweaks to the parameter tests.

    #1184 - Better pha model component plotting (fix #1020) The use of plot_model_component and get_model_component will now automatically add the response for PHA data sets, if there is one. Model expressions which contain a response will not be changed. Fixes #1020.

    #1187 - Conda GCC 9.3.0 update issue Update to support environment changes due to the conda defaults channel moving to GCC 9.3.0

    #1192 - Remove Python 3.6 and add Python 3.9 Updates to switch Python support to Python 3.7-3.9

    #1194 - datastack: Improve show_stack and add repr_html for stack Improve the datastack display (via show_stack and in the notebook).

    #1198 - XSPEC test cleanup Very minor improvement to the XSPEC test suite.

    #1199 - Move the regrid code out into a separate method Very minor code reorganization for the regrid code.

    #1203 - Update GitLab Conda recipe to numpy 1.19 Updates the GitLab conda recipe to numpy 1.19+ and adds a run-time pin

    #1204 - region lib updates for CIAO 4.14 updates to fold in changes to CIAO region lib for CIAO 4.14. Specifically, fixes for SM-89: bug in pie shape (extent/inside logic), SL-243: region - deprecate obsoleted Warning.

    #1205 - Remove the SherpaTestCase class Remove the SherpaTestCase class from sherpa.utils.testing and switch the tests that used it to use pytest functionality where appropriate.

    #1208 - Fix get_xerr for Data1DInt when all data is filtered The get_xerr method of Data1DInt would fail if all bins had been ignored; it now returns an empty list.

    #1215 - Restrict the values used in PHA notice/filter calls Ensure that the low and high limits for notice and ignore calls for DataPHA objects are sensible (hi >= lo and that for energy or wavelength filters they are >= 0). When filtering DataPHA objects in channel units the lo and hi arguments must be integers otherwise an error is raised.

    #1216 - filter improvements for DataPHA and Data1DInt Energy and wavelength filters (with notice and ignore) for PHA data are now treated as lo <= x < hi in all cases, and channel filters are lo <= x <= hi (where the channel values must be integers). For Data1DInt cases the notice and ignore filters are treated as lo <= x < hi. These changes address a number of corner cases and can result in slightly different fit results, because the data used in the fit can change and hence the number of degrees of freedom can change (by one or two, depending on whether the first and last bins have been removed).
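
    A sketch of the channel-filter rules described above (the data values are made up):

    import numpy as np
    from sherpa.astro.data import DataPHA

    chans = np.arange(1, 11)
    counts = np.ones(10)
    pha = DataPHA("example", chans, counts)

    pha.notice(2, 5)       # channel filters are inclusive: 2 <= channel <= 5
    # pha.notice(2.5, 5)   # non-integer channel limits now raise an error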

    #1218 - Update fftw from version 3.3.8 to 3.3.9 This change updates the version of the fftw module in sherpa's external dependencies (extern) directory to utilize version 3.3.9 (www.fftw.org) instead of the previous version 3.3.8

    #1219 - Fix filter expressions Filter expressions for Data1DInt and DataPHA cases now report the start and end value of each set of grouped data rather than using the end points. For PHA data sets users now see the same ranges displayed whether the data is grouped or not. The actual filter remains the same even if the filter expression has changed slightly.

    #1223 - XSPEC 12.12.0 support Support the new additive and multiplicative models in XSPEC 12.12.0 - XSgrbjet, XSwdem, XSvwdem, XSvvwdem are additive and XSzxipab is multiplicative - and note the additional abundance tables that are available when running with XSPEC 12.12.0 (lpgp and lpgs).

    #1227 - Support AstroPy 4.3 for tests The AstroPy FITS reader is less tolerant of invalid FITS files in AstroPy 4.3 which trips one of our tests. The test has been updated with a FITS file that doesn't trigger the error case. There is no functional change in this commit.

    #1228 - Fix three parameter names for XSPEC models Fix three XSPEC models which had some internal confusion over parameter names: XSzkerrbb should have used fcol but had instead used hd, and XSzashift and XSzmshift used Redshift instead of Velocity. The old names have been kept as aliases.

    #1230 - Update submodule to pick up submodule README Updates submodule to pick up the submodule README

    #1231 - Remove init.py from tests directory Clean up of a test directory. There is no functional change in this commit.

    #1236 - Remove compiler warnings Avoid compiler warnings from clang, fixing #1232 and several other warnings.

    #1241 - Get sherpa to run on Apple M1 (ARM) Initial support for building Sherpa on ARM

    #1244 - Fix "unable to parse region string: None" errors (issue #1214) In certain circumstances (such as plot_pvalue with an explicit 2D PSF convolution operator read from a file) the code would fail with the error unable to parse region string: None (issue #1214). This PR fixes this problem and several minor related issues.

    #1246 - Image region filters are now specified correctly (Fix #1245) Update the image filter code so that the region description matches what the filter actually is: rather than use & to combine regions we now use | as it is a logical or rather than logical and. This is mainly going to affect users who: have used the output of get_filter() as input to calc_data_sum2d, have very-complex spatial filters, or are restoring Sherpa sessions created by save or save_all.

    #1247 - Allow save_all to work well with images with no filters (Fix #437) Fix a problem with the output of the save_all command if an image had been loaded but no region filter applied (fix #437): the output file would contain a notice2d_id call with a filter of "" which would fail if you tried to load the file back in. It no longer creates this line.

    #1248 - Docs: note that set_stat can be sent a stats object Update the documentation for set_stat to note you can send in a stats object

    #1250 - Stop odd interaction between plot_model and PHA filters (Fix #1024) plot_model(), if done before calling notice or ignore, could lead to strange filters (e.g. filter not getting applied). This has been fixed. Fixes #1024

    #1253 - More numpy 1.20 warning fixes Avoid warnings from NumPy 1.20 about using the deprecated np.int symbol.

    #1254 - FIX: #1235 change XSPEC chatter to 10 from 0 Change the default X-Spec chatter level from 0 to 10. This may result in screen output the first time an XSPEC model is evaluated (e.g. from a plot_model or fit call). There have been a number of cases where APEC models appeared to be creating no output because of missing data files (due to old ~/.xspec/Xspec.init settings) which would have been more obvious with this change. To get back to the previous behavior users can call set_xschatter(0).

    #1259 - XSPEC parameter changes - you can change the hard limits of most XSPEC parameters XSPEC model parameters now use the same limits for soft and hard (the hard limit from the model.dat file). This is handled by the XSBaseParameter class which is used for XSPEC table models. The XSParameter class, which extends this and is used by most other parameters, allows the user to change the hard limits of the model. This follows XSPEC, and allows using some models which are documented as supporting a value outside the normal parameter range (normally by setting the value to a negative value). Note that this is potentially dangerous (it could crash the program) so should be used carefully. It is strongly suggested that any parameter set to a value outside of the original limits is also frozen.

    #1260 - Add a module to parse XSPEC model.dat and helper scripts Add support for parsing XSPEC model.dat files (useful for updating the XSPEC module or if you want to use XSPEC local models).

    #1261 - update to match recent XSPEC model.dat settings Update the XSPEC models to match the latest model.dat file (from HEASOFT 6.29 / XSPEC 12.12.0). This primarily changes the hard limits of the models, but there are also some changes to the default parameter values and some unit changes, including adding and removing units.

    #1264 - Fix crash from grouping data using an unsupported method It was possible to cause Sherpa to crash by passing an invalid function as the groupfunc argument to apply_filter and apply_group. The code now errors out instead. There are improvements to the documentation (docstring and RTD) for PHA filtering and grouping.

    #1268 - Doc: switch default sphinx role Increase the number of working references on readthedocs, while at the same time reducing verbosity in the docstrings.

    #1270 - Cleanup setup Update the Python supported status in the package metadata (drop 3.5, add 3.8) and bump the minimum version to 3.6.

    #1273 - crates backend strip preceding/trailing whitespace when determining ascii file colnames (Fix #1262) This change modifies the crates backend ascii file handling to resolve issue #1262, where a test added in PR #1253 caused the tests to fail. This change may potentially result in behavior changes for crates use on ascii data files (see the caveats section for details).

    #1276 - Fix Inconsistent behavior in save_arrays when optional parameter ascii = True/False (Fixes issue #1251) Updates the data I/O so that write_arrays behaves consistently whether writing ascii or fits files via the pyfits backend.

    #1279 - rm the platform module (no longer needed) Remove the platform module since it is not needed for ARM support

    #1281 - RTD: include parse_xspec_model_description Fix a link on the Read The Docs pages from PR #1260

    #1282 - Add Zenodo details for 4.13.1 Include the Zenodo details for the 4.13.1 release into the hard-coded list of releases.

    #1285 - Added an option to reflect minim about the boundary This PR adds an option controlling how the minim optimizer behaves if a free parameter goes beyond its limit. The default, reflect=True, reflects the parameter by an equal amount about the limit; if reflect is set to False then the model function returns DBL_MAX (~1e308) and the point is therefore not included in the simplex.

    #1287 - Remove C++ warning when compiling the pileup code Use unique_ptr rather than auto_ptr in the pileup code to avoid C++ warnings. Fix #505

    #1288 - Improve the PHA simulation documentation Fix up some links in the RTD documentation for simulating PHA data.

    #1290 - Fix the string output of CDFPlot Corrected the order of the points, x, and y values when displaying a CDFPlot object, improved support for a list argument, and made minor additions to the documentation in the sherpa.plot module.

    #1291 - Comment: add link to the OSTI.GOV URL for the optimizing document Add a reference to the OSTI.GOV technical report in the comments in the C++ code used for testing the optimization functions.

    #1292 - Update tests that were changed by #1246 Fix tests that started to fail once #1246 was merged.

    #1293 - RTD: add a "how to optimize a function" example Add a section to the ReadTheDocs site explaining how to "optimize a function", as this is something we can do but only really with the low-level code.

    #1294 - RTD: Fix some sphinx warnings (sherpa.models.regrid) Fix a minor warning when generating the Read The Docs pages.

    #1295 - Update XSkerrconv model for #1275 Rename the first two parameters of the XSPEC XSkerrconv convolution model from Index/Index1 to Index1/Index2. Fixes #1275

    #1296 - Document the neldermead reflect flag from #1285 Add documentation for the reflect keyword added in #1285 for the neldermead optimizer

    #1303 - Updated Introductory Section of the MCMC page Added additional details regarding Sherpa's MCMC implementation to the documentation

    #1308 - Docs: fix get_filter examples Fix the examples in the get_filter documentation.

    #1310 - Update the python check for python setup.py installs to 3.7 Updates the minimum version of Python supported in setup.py to 3.7

    Caveats

    • Crates behavior change - PR #1273 fixes an issue with the crates backend where supplying a column filter "opt colnames=none" for an ascii file would utilize default column names 'col1..coln' instead of the column names specified in the file. Code which relied on the old behavior may now produce an IO error such as "IOErr: Required column 'col1' not found in [ ]"
  • 4.13.1(May 18, 2021)

    Sherpa 4.13.1

    This release of Sherpa includes various documentation updates, bug fixes, and infrastructure changes. The default branch in github has been migrated from master to main.

    • documentation changes:
      • updates to documentation for TableModel, Notice2D, cache support for evaluating models, and low level optimization code
      • Jupyter notebook updates
    • Infrastructure Changes:
      • the default branch has been migrated from master to main
      • updates to support numpy 1.20
      • updates to support astropy 4.2.1
      • updates to support matplotlib 3.4
      • test infrastructure clean up and updates
    • bug fixes:
      • fix an issue with cache evaluation on 1D models using integrated bins
      • fix for aarch64 build issue
      • fix to sherpa citation command
      • fix to honor clearwindow setting for plot_source
      • fix errors from save_data when the output file exists
      • fix build issues using gcc 7.3 with -Werror=format-security compilation flag
      • fix for reg_proj and reg_unc erroring out when the min or max arguments are tuples rather than lists

    Details

    #754 - sample_flux now returns statistic values for each row The sample_flux command now returns a statistic value for each iteration, even if those rows are not used in the reported flux distribution. Fixes #751.

    #769 - add basic cache tracking Adds the cache_status and cache_clear methods to models for verifying the cache behavior (this is only expected to be used in rare cases). The cache code has seen documentation improvements.
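
    A sketch of how these methods could be used (the model choice is arbitrary):

    import numpy as np
    from sherpa.models.basic import Polynom1D

    mdl = Polynom1D("poly")
    mdl(np.arange(1, 5))    # evaluate the model so the cache gets exercised
    mdl.cache_status()      # report the cache behavior for this model
    mdl.cache_clear()       # reset the cache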

    #946 - rework stats tests Update the stats tests to use pytest.

    #960 - Fix model evaluation when changing the integrate setting (fix #958) Fixes an issue with cache evaluation of 1D models using integrated bins when the user has changed the integrate setting of the model.

    #978 - Use C99 def for INFINITY, NAN, isfinite and isnan to build on aarch64 (fix issue #970) Use the math constants (INFINITY, NAN) and functions (isfinite, isnan, signbit) from a C99-compliant compiler if the compiler option -std=c99 or greater is used, otherwise use the quantities as defined by the library.

    #991 - lint changes Addresses a number of flake8-reported warnings in the code base (e.g. excessive or missing spaces and new lines).

    #1000 - Improve sherpa.citation (fix #994 #987) Fixes the sherpa.citation() command with its default argument (issue #994) and adds release 4.12.2 to the hard-coded list of releases. A typo in a warning message was fixed (#987).
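
    A minimal sketch of the command; the ability to pass a specific release string is an assumption based on the hard-coded release list mentioned above:

        import sherpa

        sherpa.citation()          # report citation information for the latest release
        sherpa.citation('4.12.2')  # assumed: ask for a specific release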

    #1001 - Allow command-line arguments for sherpa_test Allow command-line arguments to be passed to the sherpa_test script. This allows running optional tests (e.g. the --runzenodo argument) and to configure the pytest configuration (e.g. to run coverage checks with --cov sherpa).

    #1002 - Fix error with clobber=False for paging - issue #996 Fix an error with clobber=False when the output file exists for several paging commands (e.g. show_data and sherpa.citation). Previously a NameError was raised instead of the expected Sherpa IOErr.

    #1003 - Fix serialization of iter method data - issue #997 If set_iter_fit_method has been called with a value other than 'none' then the output of save_all would be incorrect for the options for the iter-fit method.

    #1004 - flake8 F811 - fix repeated test names Clean up of several test files to fix repeated test names.

    #1005 - Add explicit get/set_datadir routines to sherpa.utils.testing Internal changes to how the test data directory location is set and queried, including removing direct support from SherpaTestCase. Added tests for some of this functionality, and updated several test files to remove SherpaTestCase or use the new datadir functionality.

    #1008 - rm warning msgs, fix issue #980 Fix the compiler warning messages by defining kwlist as static const and then using the C++ const_cast to remove the const, conforming to the PyArg_ParseTupleAndKeywords prototype.

    #1012 - Tests: allow test_ui tests to be run with pytest-xdist Allow the tests to be run with pytest-xdist.

    #1016 - Jupyter notebook representation not ideal if model components don't have unique names - issue: #1013 Fix an error in the HTML display of a model (used in the notebook) when two model components have the same name.

    #1017 - reword remark on normalization of Lorentz function Updated the comments pertaining to Lorentz function for clarity

    #1018 - Fix ShekelModifiedInit missing init par vals This PR fixes the missing initial fitted parameter values for the ShekelModifiedInit function. A fix for issue #1011

    #1028 - Update region lib code to correct build issues with gcc 7.3.0 compile Corrects issue building with gcc 7.3.0+ compilers with the -Werror=format-security compilation flag

    #1034 - Add contextmanager and docs on how to control the output level of sherpa Sherpa uses the Python logging module for much of its output; this adds documentation and a context manager for controlling the output level of a particular piece of code.
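
    Because the output goes through the standard logging module, the verbosity can also be adjusted with stock Python; the sketch below assumes the top-level logger is named "sherpa" and does not use the new context manager itself:

        import logging

        sherpa_logger = logging.getLogger('sherpa')
        previous_level = sherpa_logger.level
        sherpa_logger.setLevel(logging.WARNING)   # hide INFO-level screen output
        # ... run the chatty Sherpa commands here ...
        sherpa_logger.setLevel(previous_level)    # restore the original level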

    #1039 - Add docs for basic.TableModel Added missing documentation for TableModel class

    #1049 - CI: pip submodule test to report coverage Report the coverage data from the pip CI run. This only changes the GitHub Actions runs.

    #1053 - Release 4.13.0 Updates to support the 4.13.0 release

    #1054 - Post 4.13.0 updates Updates the Zenodo DOI reference in the README.md to include 4.13.0.

    #1057 - Fix plot_source clearwindow setting The plot_source function was ignoring the clearwindow parameter (always using True) in calls to plot_source for non-PHA data. This was only for sherpa.astro.ui.plot_source (so sherpa.ui.plot_source did not have this problem).

    #1058 - Store the opstr of model combinations (unary and binary) Store the operator string as well as the operator when creating the unary and binary operator expressions for models.

    #1062 - Minor code cleanup of sherpa.astro.ui.utils The sherpa.astro.ui.utils module has seen a number of minor clean-ups, addressing pylint-reported issues.

    #1067 - Improve testing of sherpa.astro.ui.utils Improve coverage of the sherpa.astro.ui.utils and require pytest 3.9.0 or later for testing Sherpa.

    #1068 - clean up typos in sherpa/plot code Fixed several typos noticed while reviewing plot related fix for #1057

    #1072 - Update load_data to match load_pha for PHA2 data Ensure that load_data behaves like load_pha when given a PHA2 dataset.

    #1076 - sample_flux now uses the id argument The sample_flux routine now uses the id argument rather than always using the data from the default dataset. Fixes #752
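
    A hedged sketch of the new behavior; the file name is hypothetical:

        from sherpa.astro.ui import *

        load_pha(2, 'src2.pha')            # hypothetical file, loaded as dataset 2
        set_source(2, powlaw1d.pl)
        fit(2)

        # draw samples from dataset 2 rather than from the default dataset
        vals = sample_flux(powlaw1d.pl, 0.5, 7.0, id=2, num=100)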

    #1078 - Add error checking for exceptional cases for sample_flux Ensure that sample_flux errors out if the Xrays argument is False (as this code path is currently broken) or if the confidence argument is invalid.

    #1082 - Improve documentation for notice2d Minor improvements to the documentation of notice2d and ignore2d set of commands. Fixes #1059

    #1086 - Use the logging infrastructure for sample_flux output The screen output from sample_flux is now generated by the Sherpa logger and so can be hidden by the user if required.

    #1088 - Allow two optimization test to pass Address an issue in the optimization tests that meant two tests were failing. This only changes the test code and makes no change to the behavior of the optimizers.

    #1092 - NumPy 1.20 warnings Avoid test failures due to new warnings added by NumPy 1.20.

    #1094 - Cleanup multi-plot code Rework the code that handles the plot_fit_xxx and plot_bkg_fit_xxx calls. There should be no user-visible changes.

    #1096 - Reduce direct access to plot objects Changes to the internals of the plot code, to access information via methods rather than direct access, which improves encapsulation and reduces code repetition.

    #1098 - Bump DS9 version to 8.2 for CI Bump DS9 version used in CI tests from 8.1 to 8.2.

    #1100 - Address reg_proj and reg_unc failures (fix #1093) Calls to reg_proj and reg_unc could error out when the min or max arguments were set to tuples rather than lists. The code now converts these attributes to lists, which can result in changes to the string output of the objects (use of '[]' brackets rather than '()'). Fixes #1093

    #1101 - Address upper limit issues with sample_flux (fix #457) sample_flux no longer excludes samples at the (soft) parameter bounds when calculating the flux distribution. This could lead to an over-estimation of the flux for upper limits (fix #457).

    #1104 - Add basic documentation for the low-level optimization code Add and update documentation on the interfaces used by the low-level optimization code. Ensure that the plot classes are fully included in the documentation.

    #1105 - Consolidate and harmonize the validation of dataset identifiers Simplify the code used to validate dataset identifiers. Several names can no longer be used as an identifier ('astrocompmodel', 'astrocompsource', 'astrodata', 'astromodel', 'astrosource', 'model_component', and 'source_component') and two can now be used ('energy' and 'photon').

    #1108 - Read RMFs where N_CHAN is an array Allow the pyfits backend to read a wider range of RMF files

    #1111 - Minor test updates Minor updates to the test code, including a small enhancement to the tests run by the smoke_test command.

    #1112: XSPEC: require model evaluation to be sent low and high grid values XSPEC model classes must now be evaluated with bin edges - that is with low,high bins. The support for sending in a single grid and treating it as a consecutive set of bins has been marked as deprecated in the model class interface. This feature is still supported for anyone evaluating the models directly from the sherpa.astro.xspec._xspec module or via the _calc method.

    #1116: Add parameter-based tests for the PSF model Adds several tests of edge-case handling of parameters for PSF convolution models.

    #1117 - Docs: improve cache discussion and documentation Improve the discussion of the cache support when evaluating models.

    #1120 - Switch default branch to main Change the default branch from master to main. Also includes minor documentation updates (CONTRIBUTING.md updates to switch to main, to reference GitHub Actions instead of Travis, and to remove a note about Python 3.5 support).

    #1121 - Hide AstroPy 4.2.1 FITS-related warnings astropy.io.fits.open now creates warning messages about invalid FITS structures when given a non-FITS file. This update hides those warning messages since routines such as sherpa.astro.io.load_data attempt to open files (including ASCII files) as FITS.

    #1122 - Update ds9 download Update DS9 tests to use ubuntu18 and darwinhighsierra as ubuntu14 and darwinsierra are no longer supported. The DS9 version has been bumped from 8.2 to 8.2.1 as this is the latest version.

    #1125: Support Matplotlib 3.4 Matplotlib 3.4 changes how the drawstyle argument is handled in some functions. This change removes the use of this argument for those functions.

    #1126 - Tweak plot docs Adds a new notebook that shows off a number of plots created with matplotlib and exports the ScatterPlot, TracePlot, CDFPlot, PDFPlot, and LRHistogram classes from sherpa.plot.

    #1130: Fix save_data when the output file exists Fix problems when save_data is used with clobber=False but the output file already exists. Fixes #1071

    #1132 - Update XQuartz for GH Actions workflow Updates the XQuartz download location and version for the GitHub Actions Conda tests.

    #1139 - Fix typo in Zenodo test that made it fail Fixed a typo in the Zenodo test, which is only run via the --runzenodo flag.

    #1142 - Add instruction for source build on Mac Add instructions for building from source on macOS that were previously only part of the internal release notes.

    Source code(tar.gz)
    Source code(zip)
  • 4.13.0(Jan 8, 2021)

    Sherpa 4.13.0

    This release of Sherpa coincides with the CIAO 4.13 release. The release contains a few minor documentation updates, a version number update to coincide with CIAO version 4.13.0, and infrastructure changes to migrate from Travis-CI to GitHub Actions for testing. No additional functionality has been introduced.

    Details

    #1043 - RTD: install more packages via pip Install sphinx-astropy and ipykernel via pip instead of Conda for the Read the Docs builds to avoid current indirect requirement conflicts.

    #1035 - Travis to GitHub Actions Replaces the Travis test infrastructure with two GitHub Actions workflows.

    #1036 - Cleaned up several typos in the RELEASE_NOTES file Updates the RELEASE_NOTES file to correct several typos.

    #1038 - Docs: note Python version and new build status Updates the build status badge to track GitHub Actions rather than Travis-CI and report the python versions as 3.6, 3.7, and 3.8.

    Source code(tar.gz)
    Source code(zip)
  • 4.12.2(Oct 27, 2020)

    Sherpa 4.12.2

    This release of Sherpa serves as the baseline release of Sherpa for CIAO 4.13. It contains numerous enhancements and fixes, including items specific to standalone Sherpa. Notable highlights include:

    • plotting improvements
      • improved support for matplotlib (linestyle changes in matplotlib 3.3, support of alpha channels)
      • overplot support for plot_fit_* and plot_bkg_fit_*
      • updates to histograms, residual plots
    • data I/O and data handling
      • several bug fixes to handling of the PHA, ARF, RMF files
    • modeling
      • support for regrid models in the binary expressions
      • improvements and bug fixes to background treatment in spectral models
      • improvements in the flux calculations and resampling
      • added the Voigt model
    • documentation changes
      • improvements to Sherpa display in IPython/Jupyter notebooks
      • updates to the content of the docstrings for generation of ahelp files

    Details

    Testing and infrastructure fixes are not shown.

    #483 - An initial release of simultaneous fit on multicores (slower for most… Distributes the evaluation of the multiple independent data sets across the cores built into the user's workstation. The current default setting for this PR is to evaluate the multiple independent data sets sequentially, since the overhead of distributing the workload across cores is high if the evaluation of the data sets is not time consuming.

    #631 - Add HTML representations of common classes for IPython display (fix #345) An initial version of HTML display support for Sherpa objects for users of IPython/Jupyter notebooks.

    #634 - Added invitation for native software citation Adds a citation method to the sherpa module and updates CITATION to refer to this new functionality

    #693 - Address matplotlib linestyle removal in Matplotlib version 3.3 Matplotlib version 3.3 now requires you to split out the drawstyle and linestyle arguments. This set of PRs changes the code so that the two arguments are now set separately: linestyle defaults to solid, while drawstyle defaults to default for plots and steps-mid for histograms. This appears to replicate the old behavior, and should be backwards compatible (drawstyle is supported in Matplotlib 2.2.5 and 1.5.3).

    #709 - Residual-style plots ignore the ylog setting (fix #586) Residual, ratio, and delchi plots always use a linear scale for the y axis, no matter what the ylog setting is.

    #740 - resample_data: handle bins where the error range includes -1 and no longer restrict sampling to +/-1 sigma Fixes resample_data/ReSampleData so that it correctly handles bins where the range of data values (i.e. low to high limit) includes the value -1.

    #741 - Fix issue 638, guess needs to update fwhm/sigma bounds Adds support for guessing the fwhm or sigma parameters of the Gauss2D, NormGauss2D, and SigmaGauss2D models

    #750 - Add xspec convolution api Adds support for XSPEC convolution-style models - this link is valid for XSPEC 12.10.1 documentation

    #765 - Add docs for setting up all dependencies of the source build with conda Adds documentation on how to use conda to install source build dependencies

    #766 - No error for no-ops in ungroup and friends Removes the DataErr raised when ungrouping a dataset that is not grouped or unsubtracting a dataset that was not subtracted.

    #770 - Replace EmissionVoigt/AbsorptionVoigt models by Voigt model (fix #597) Replaces the EmissionVoigt and AbsorptionVoigt models with a single model, Voigt1D. The EmissionVoigt and AbsorptionVoigt models will error out when an instance is created, pointing users to Voigt1D (as the parameter definitions have changed).

    #772 - Add XSPEC 12.11.0 support (HEAsoft 6.27) Adds support for XSPEC 12.11.0 (released March 31 2020)

    #782 - Add a pyproject.toml file Adds a pyproject.toml file to the top level (PEP 518) to document build requirements

    #789 - fix issue #788, fit using moncar with verbose=1 and/or numcores!=1 Fixes NameError in moncar when verbose setting >0

    #791 - Fix ignore/notice error-ing out when all bins have been set bad #790 Allow notice and ignore to be called on a dataset which has no "good" bins after ignore_bad has been called

    #793 - Accept masked arrays for Data XXX creation Allows the mask of NumPy masked arrays to be used when initializing DataXXX objects.
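
    A minimal sketch, assuming the mask of the masked array is picked up when the dataset is created; the values are arbitrary:

        import numpy as np
        from sherpa.data import Data1D

        x = np.arange(5)
        y = np.ma.masked_array([2.0, 3.0, 99.0, 5.0, 6.0],
                               mask=[False, False, True, False, False])

        # the mask supplied with y is used when building the Data1D object
        d = Data1D('example', x, y)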

    #803 - Updates to sampling of energy and photon fluxes: bug fixes and calculate unabsorbed components Fixes and improvements to the energy_flux and photon_flux set of commands: sample_energy_flux, sample_photon_flux, plot_energy_flux, plot_photon_flux, get_energy_flux_hist, and get_photon_flux_hist

    #811 - Remove all future imports Removes future imports that were in place when sherpa supported python 2

    #812 - Add unsubtract and ungroup to datastack Adds corresponding unsubtract and ungroup methods to match subtract and group

    #815 - fix typo in rst docs Fixes a typo in the rst docs

    #821 - Add hyperlink to similarly named SHERPA package Documentation update to include a link to the similarly named package "SHERPA" for hyperparameter tuning of machine learning models.

    #842 - Fix scaling of staterror when reading PHA file with rate instead of counts Fixes scaling of STAT_ERR when reading a PHA spectrum with a RATES column instead of COUNTS.

    #845 - xspec table models: add out-of-bound check to avoid segfault Adds explicit out-of-bounds checks to avoid a segfault when calling an XSPEC table model.

    #851 - Updates to the Read-The-Docs build environment Update the Read-The-Docs configuration to the latest version (2), and switch to a cleaner build (using conda) for the documentation. The minimum Sphinx requirement is now 1.8 (updated from 1.3).

    #856 - Support setting the id value in load_pha with pha2 files (fix #666) Updates load_pha to set the data ids of PHA2 datasets to: id to id + nfiles - 1 (when id is an integer), or "{}1".format(id) to "{}{}".format(id, nfiles) when id is a string.

    #858 - Minor documentation improvements to ReadTheDocs Adds minor updates to the convolution, regrid, and model evaluation sections of the ReadTheDocs documentation

    #859 - Clean up of the XSPEC interface code Internal changes to the XSPEC interface code, which reduces the amount of similar (sometimes identical) code. There is no change to the behavior of the XSPEC models.

    #865 - Minor documentation fixes Several documentation fixes: XSPEC parameter names, avoiding confusion over links on references (Sphinx pages), and adding some basic documentation to the sherpa.astro.background module

    #866 - Add parameter-clipping strategy to routines that generate samples (fix #846) The addition of the clip parameter lets users control how parameter values are clipped before use in sample_energy_flux, sample_photon_flux, plot_energy_flux, plot_photon_flux, get_energy_flux_hist, and get_photon_flux_hist.

    #868 - Add delete_pileup_model (fix #441), list_pileup_model_ids, list_psf_ids functions, fix list_models (fix #749) Add the delete_pileup_model() function to allow a pileup model to be removed from a fit (issue #441), and list_psf_ids() and list_pileup_model_ids() routines to list those datasets with an associated PSF or pileup model. The list_models() routine no longer returns an iterator but a list when given an option (issue #749).
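
    A short sketch of the new routines; the file name is hypothetical and jdpileup is used only as an example pileup model:

        from sherpa.astro.ui import *

        load_pha('src.pha')              # hypothetical file name
        set_pileup_model(jdpileup.jdp)

        print(list_pileup_model_ids())   # datasets with a pileup model attached
        print(list_psf_ids())            # datasets with a PSF model attached
        delete_pileup_model()            # remove the pileup model again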

    #871 - Add MacOS LDFLAGS warning to devdocs Copies the warning about setting PYTHON_LDFLAGS from the install page to the developer docs

    #884 - Move logic from ui layer to DataPHA class: background responses (fix #879, #880) Moves the logic for adding a background response, if one doesn't exist, from the UI layer to the DataPHA class to clear up several edge cases

    #888 - Support vector backscales and bugfix for background modeling Supports fitting backgrounds to PHA datasets which have a variable BACKSCAL array (rather than a scalar), which can come from combining spectra (e.g. the CIAO contrib script combine_spectra) or from the data extraction process. In doing so a number of routines related to the scaling of background-to-source aperture data have seen adjustments to behavior and some enhanced functionality (such as sherpa.astro.ui.get_bkg_scale and the sherpa.astro.data.DataPHA.get_background_scale method).

    #897 - Add support for XSPEC 12.11.1 Allows Sherpa to be built against XSPEC 12.11.1. There are no new or changed models in this release compared to XSPEC 12.11.0.

    #899 - Update likelihood descriptions Updates likelihood description in several doc-strings and clarifies descriptions of statistics.

    #900 - Ensure that 1D and 2D models are not combined in an expression. Checks that models have the correct dimensionality when combining them, so expressions like gauss2d.src + const1d.bgnd will now raise a ModelErr.
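
    A minimal sketch of the new check, using the expression quoted above:

        from sherpa.ui import *
        from sherpa.utils.err import ModelErr

        try:
            # combining a 2D component with a 1D component now raises ModelErr
            set_source(gauss2d.src + const1d.bgnd)
        except ModelErr as exc:
            print(exc)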

    #906 - Improve and add support for histogram plots Updates to the display of histogram-style plots, in particular for 1D integrated datasets and some model display for PHA data. The histograms now cover the full length of each bin (previously they only showed half the bin for the first and last bins), and gaps in the histogram (where the high edge of a bin is less than the lower edge of the next bin) are now correctly displayed.

    #907 - FEATURE: support alpha transparency for matplotlib plots Supports the 'alpha' preference setting for most plots and contours generated by Matplotlib

    #909 - Refactor: remove _get_model/source methods Removes the _get_source and _get_model methods as they are the same as get_source and get_model

    #910 - Docs: fix several minor issues Documentation clean up - the load_template_interpolator function was named incorrectly in the example, and the examples for get_source_component_plot/get_model_component_plot were missing the trailing _plot for the function names

    #911 - Docstring: changes for ahelp Formatting changes to the docstrings for several routines that are useful for SDS in generating ahelp files

    #918 - fix an issue with PHA filtering that affects plot_model Fixes an issue when applying filters to generate the plot_model and plot_model_component plots for PHA datasets

    #919 - enable regrid for the BinaryOpModel class (rebased #798) Enables composite models (created by a binary operation between two models) to be regridded. The composite model is evaluated at the new grid, and it is only the combined model expression that is rebinned to the data grid.

    #922 - channel settings with grouped PHA data and model plotting Fixes a bug when filtering a grouped PHA dataset using analysis=channel. The selected bin ranges did not always match the versions you would have received when doing the same operation with energy or wavelength analysis (the first or last bin may have been different).

    #924 - Updated to ignore the .vscode directory Updates the .gitignore file to skip over .vscode directories.

    #929 - Support overplot option in plot_fit_xxx (issue #700) The overplot argument can now be used with the plot_fit_xxx and plot_bkg_fit_xxx routines (e.g. plot_fit_ratio).

    #931 - Update Data1DInt / DataPHA data plots to use the histogram plot style Switch the plot_data/plot_bkg plots to draw the data as histograms for Data1DInt and DataPHA plots. This will change the behavior of code that accesses the plot data - e.g. get_data_plot() or the dataplot element of get_fit_plot() - since for DataPHA and Data1DInt datasets the data will no longer have an x attribute but xlo and xhi. To reduce the need for code changes for existing scripts - as many people use get_model_plot and get_data_plot to get the data - the histogram plots will return (xlo+xhi)/2 when asked for the x attribute.
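
    A hedged sketch of the new attributes; the file name is hypothetical:

        from sherpa.astro.ui import *

        load_pha('src.pha')                  # hypothetical file name
        dplot = get_data_plot()

        print(dplot.xlo[:3], dplot.xhi[:3])  # bin edges are now available
        print(dplot.x[:3])                   # still works: returns (xlo + xhi) / 2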

    #939 - Docs: note XSPEC convolution models are new in 4.12.2 Documentation only change noting that the support for XSPEC convolution models is new to 4.12.2.

    #940 - Minor documentation improvements Fixed several minor issues in the existing documentation

    #944 - Improve handling of the default id with PHA background datasets (fix issue #943) The sherpa.astro.ui.set_default_id call no longer sets the default identifier for background ids, which are now kept as the value 1. This avoids several issues when using set_default_id with the background components of PHA datasets.

    #950 - Documentation fixes Minor documentation updates and adding ReSampleData to the RTD documentation

    #953 - docstring fixes Cleans up typos in documentation

    #957 - Docs: include the example notebook in the RTD pages Adds the example notebook (SherpaQuickStart.ipynb) to the Sphinx documentation under the "Notebooks" heading

    #961 - Update notebook support for Data1DInt/PHA data Improves the display of Data1DInt and DataPHA objects when displayed directly by a Jupyter notebook and adds a new notebook showing off the notebook support.

    #963 - Docs: note plot_fit_xxx overplot change and add info on notebook support Adds documentation updates including notes on overplot support in plot_fit_xxx and plot_bkg_fit_xxx functions and RTD notes on adding notebooks.

    #964 - Docs: avoid invalid escape sequence warning Very-minor tweak to the Chi2 docstring

    #968 - Fix 2d image filtering (fix #965) Fix problems with ignore2d and notice2d when multiple regions are used.

    #969 - Support Python 3.8 for macOS Add a new multiprocessing_start_method option to sherpa.rc and sherpa-standalone.rc, and initialization code to set the multiprocessing start method to fork by default.

    #971 - Docs: fix sphinx warnings Fix documentation in sherpa.sim.sample on the ReadTheDocs site.

    #979 - Improve plots of PHA data sets and models when the response grid is not ideal Improve the display (plots) of PHA data and models when the response grid contains minor numerical differences. (fixes #977)

    #984 - Address set_xlog/ylog problems with DataPHA/Data1DInt classes (#981) Fix the set_xlog/ylog routines for PHA and 1D integrated datasets.

    Source code(tar.gz)
    Source code(zip)
  • 4.12.1(Jul 14, 2020)

    Sherpa 4.12.1

    This release of Sherpa supports a patch for CIAO 4.12. This release is driven by fixes that were deemed necessary to warrant a patch to sherpa in the CIAO 4.12 release. The changes include issues which had the potential of impacting users by either stopping an analysis session or providing incorrect or potentially confusing results, particularly in the area of grating analysis and multiple datasets. Additionally, this patch includes several fixes and improvements such as initial support for Python 3.8.

    This version of Sherpa has been tested with Python 3.5, 3.6, and 3.7. There has been limited python 3.8 testing as well.

    Details

    Testing and infrastructure fixes have been omitted

    #832 - Support building with NumPy 1.19 Compatibility updates for numpy v1.19 with regards to numpy.distutils and tostring deprecation

    #781 - Docs: fix typo in docstring for calc_kcorr Fix a typo in the documentation for calc_kcorr: change "0." to "0.5" in one of the examples.

    #759 - revert PR #444 (caching) Revert the ARF cache added in #444, as well as some of the related code changes, as they caused problems with analysis in wavelength space (e.g. #746).

    #756 - calculate_photon_flux/calculate_energy_flux fix and improvement Bug fixes and improvements for calculate_photon_flux and calculate_energy_flux: address the flux and flux-density calculation issues discussed in #619 (fix #619); fix the documentation of the lo and hi arguments (fix #308); and add a model parameter to calc_photon_flux/calc_energy_flux which allows users to easily calculate "unabsorbed" fluxes.
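
    A hedged sketch of the new model argument; the file name is hypothetical and the source expression assumes an XSPEC-enabled build:

        from sherpa.astro.ui import *

        load_pha('src.pha')                     # hypothetical file name
        set_source(xsphabs.gal * powlaw1d.pl)   # assumes XSPEC support is built in
        fit()

        absorbed = calc_energy_flux(0.5, 7.0)
        # restrict the calculation to the power law only (an "unabsorbed" flux)
        unabsorbed = calc_energy_flux(0.5, 7.0, model=powlaw1d.pl)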

    #747 - reworked regrid to eval usr grid, but 0 every where else Modifies the 1D grid implementation to evaluate over the user-supplied range; anything outside that range is set to 0.

    #745 - ensure that min/max limits are applied to linked parameters before use Enforce parameter restrictions (they must lie within their min/max range) even when the parameter is linked to another (which has a different range).

    #739 - Post 4.12.0 Updates Update readme to include DOI for 4.12.0

    #738 - Release 4.12.0 Merge the 4.12 Release into master

    #737 - Update copyright year for the documentation Read-The-Docs: update copyright year

    #735 - Remove ChIPS support Remove the ChIPS plotting code and configuration options from the code base

    #734 - Change datastack query_by_header_keyword to not error if keyword is missing Change the sherpa.astro.datastack query_by_header_keyword routine so that it skips data sets which do not have the keyword, rather than raising a KeyError. This also affects query_by_obsid.

    #733 - fix a bug with fit(cache=False) passing the runtime option while fitting

    #732 - Remove unused Python 2.7 compatibility code Remove unused code from the removal of Python 2.7 support.

    #696 - Support python 3.8 Updates to support Python 3.8, including: removing syntax warnings from the use of is with a literal; removing deprecation warnings about PY_SSIZE_T_CLEAN; fixing an index error (use scalar indexes); and updating the documentation to highlight initial Python 3.8 support.

    Source code(tar.gz)
    Source code(zip)
  • 4.12.0(Jan 30, 2020)

    Sherpa 4.12.0

    This release of Sherpa is based on the CIAO 4.12 release and includes additional bug fixes and improvements.

    This version of Sherpa has been tested with Python 3.5, 3.6, and 3.7.

    This release also provides support for XSPEC 12.10.1 (patch 'a' or later) in addition to versions 12.10.0 (included in CIAO) and version 12.9.1.

    Details

    Documentation and infrastructure fixes are not shown.

    #388 - Address indexing DeprecationWarning/IndexError from NumPy for PHA data with filter + ignore_bad (fix #361)

    #602 - Reduce integration tol Changed the integration routine tolerance from double epsilon to float epsilon since the optimization routines have a tolerance of ~1.0e-7.

    #606 - stop using numpy.typeNA which is now deprecated A numpy deprecation warning was fixed by removing the usage of typeNA, which was not documented and will be removed in a future release of numpy. The print_fields function has been changed to include a default mapping using the current typeNA implementation.

    #608 - update ciao default plotter to matplotlib The default Sherpa plotting package has been changed to matplotlib for CIAO users as well (it had been the default for standalone users for several years). The update will be applied to new users when the sherpa command is run. Users of previous versions would need to edit/regenerate the file.

    #616 - enable run-time-option to test cache (default=True) Capability to allow the user to turn off caching at runtime for testing purposes.

    #622 - Do not create warnings about error bars in plots if no error bars are to be shown (fix #621) Remove the The displayed errorbars have been supplied with the data or calculated using chi2xspecvar... warnings that appear for data, residual/ratio, and fit plots when the user has explicitly turned off the display of errorbars (by setting the yerrorbars plot-preference setting to False).

    #626 - add plot_fit_ratio and plot_bkg_fit_ratio functions Add the plot_fit_ratio function to the sherpa.ui layer (to match plot_fit_resid and plot_fit_delchi) and plot_bkg_fit_ratio to the sherpa.astro.ui layer. Added tests for a number of plot types.
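
    A minimal sketch of the new call; the data are simulated purely for illustration:

        import numpy as np
        from sherpa.ui import *

        x = np.linspace(1, 10, 20)
        y = 2.0 * x + 3.0 + np.random.normal(scale=0.5, size=x.size)
        load_arrays(1, x, y)

        set_source(polynom1d.poly)
        thaw(polynom1d.poly.c1)
        fit()
        plot_fit_ratio()    # complements plot_fit_resid and plot_fit_delchi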

    #640 - fix 'tuple index out of range' error in the fit.est_errors method Sherpa's Confidence class is confused if a parameter is frozen then thawed after a fit, resulting in an exception of the form IndexError: tuple index out of range. The error does not show up if conf is called from the UI level.

    #647 - fix numpy hist warnings When plotting a PDF Sherpa now calls numpy.histogram with density=True|False rather than normed=True|False. There is no change visible to the user other than a warning that will not be issued anymore. normed=True was broken for non-uniform bins, but our code always produces uniform bins, so we should never hit the problematic case.

    #658 - Handle files with MJD_OBS or MJD-OBS keywords (datastack module) The sherpa.astro.datastack module has been updated to deal with files that use the MJD-OBS keyword instead of MJD_OBS. This is only used by the show_stack routine, which prints out several keywords from each file in the stack.

    #665 - Remove python 2.7 support (fix #498) Remove support for Python 2.7 from Sherpa. The metadata used by pip now requires Python 3.5 or higher (but not version 4).

    #667 - improve matplotlib plot_fit plots (order of data and model) Changes the zorder of plot objects drawn by matplotlib to make the plots more readable.

    #674 - Add link/unlink parameter tests This PR adds three tests for Sherpa's link/unlink commands.

    #677 - XSPEC: build against 12.10.1m by removing support for models Remove support for XSrelline, XSrelline_lp, and XSrelline_lp_ext models. These were added in 12.10.1 but removed in 12.10.1m. It does not seem worth supporting just those versions where they were available.

    #680 - Remove the unused parameter in sherpa.astro.optical.LogEmission model (fix #219) Remove the unused parameter in the sherpa.astro.optical.LogEmission model. The LogEmission model only has four parameters (fwhm, pos, flux and skew), consistent with the calc method. The hidden limit parameter and its documentation were removed.

    #682 - jointplot now respects the ratio argument (top plot is larger) with matplotlib backend The matplotlib back end now makes the main (top) plot taller than the secondary (bottom) plot when using the sherpa.ui.plot_fit_resid and sherpa.ui.plot_fit_delchi routines (this also holds for the sherpa.astro.ui variants).

    #683 - Remove chips from the documentation Remove mention of ChIPS from the documentation (both on read the docs and in the sherpa.astro.ui and sherpa.ui modules), replacing with matplotlib where appropriate.

    #684 - Add default config options Whenever an option is read from the configuration file (~/.sherpa.rc) provide a default option in case the configuration file is missing (or is missing this option).

    #685 - raise ModelErr when a model using Data1DInt has overlapping bins Adds a check to regrid to make sure the integrated bins do not overlap. The PR is a fix for issue #569.

    #688 - Improve documentation and testing for get_source_plot when sent lo/hi arguments Improve the documentation for the sherpa.astro.ui.get_source_plot routine, describing how to use the result when the lo or hi arguments are sent.

    #691 - add warning when chips is selected but cannot be imported (rebase of #648) Sherpa now warns the user if ChIPS is selected as a backend but is not available in the installation.

    #692 - Allow users to override plot preferences when creating a plot Change to allow kwargs to be specified to change plot preferences at creation time

    #698 - Add warnings in documentation about masked arrays Updates docstring to indicate that sherpa doesn't support numpy masked arrays

    #701 - a fix for /data/lenin2/Test/CIAO4.11/SherpaRegressionBeta1/45 Enables the testers to turn on (or off) caching at runtime to test #444

    #704 - change calc_ftest from delta dof to dof (similar to XSPEC) - fix for #641 Update to make sherpa's calc_ftest provide same results as XSPEC's ftest (fix to #641)

    #705 - modelCacher1d needs to have a couple of deep instead of shallow copy() Update to make deep copy of cached function values (fix to #673)

    #710 - Add warning about masked array to sherpa.astro.ui.load_arrays Updates docstring to contain warning consistent with #698

    #712 Minor typo in docs Corrects a typo "is is" in the documentation.

    #716 Fix for plot_cdf: plot() got an unexpected keyword argument 'clearwindpw' Recent changes broke plot_cdf() due to a typo ('clearwindpw'). This update corrects the typo.

    #721 Docs: update installation notes for conda environment Add a warning about the need to set PYTHON_LDFLAGS to ' ' on macOS when building within an anaconda environment. This is a documentation-only change.

    #725 convert a list to np.array to avoid warning messages Fixes issue #723 (a lot of warnings while running 'resample_data()' in CIAO4.12 Sherpa)

    #728 a fix for a caching error caused by using load_multi_arfs Fixes issue #717 (unable to fit if a script uses load_multi_arfs/rmfs).

    Source code(tar.gz)
    Source code(zip)
  • 4.11.1(Aug 1, 2019)

    Sherpa 4.11.1

    This release of Sherpa introduces several functional improvements and bug fixes, in particular Sherpa now has support for:

    • asymmetric error bars
    • PSFs with better pixel resolution than the data
    • running optimization in parallel

    Details

    #630 Fix "get_int_proj does not work when recalc=True" (#543) get_int_proj did not work when recalc=True on Python 3. This has now been fixed.

    #615 Asymmetric Errors Sherpa now supports asymmetric error bars. Errors can be read through a new load_ascii_with_errors high level function, or through the new Data1DAsymmetricErrs class. Sherpa uses bootstrap for estimating the uncertainties.
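
    A hedged sketch of the new loader; the file name and column layout are assumptions:

        from sherpa.astro.ui import *

        # assumed columns: x, y, lower error, upper error
        load_ascii_with_errors('asym.dat')   # hypothetical file name
        print(get_data())                    # a Data1DAsymmetricErrs object

        set_source(polynom1d.poly)
        fit()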

    #585 plot pvalue Updates to utilize the appropriate response files (ARF and RMF) for X-ray spectra, and changes the p_value output to 1/(number of simulations) when p_value is 0 and the number of simulations is not large enough.

    #596 Run optimization algorithms over multiple cores This PR enables the user to run the optimization algorithms (DifEvo, LevMar, and NelderMead) on multi-cores.

    #607 PSF rebinning (fix #43) Sherpa now supports using a PSF with a finer resolution than the 2D image data. If Sherpa detects that the PSF has a smaller pixel size than the data, it will evaluate the model on a "PSF Space" that has the same resolution as the PSF and the same footprint as the data, then rebin the evaluated model back to the data space for calculating the statistic.

    #614 Refactor data classes (fix #563, #627, #628) Sherpa's basic data classes have been refactored and cleaned up to help facilitate fixing bugs and implementing new features. New tests were added to reduce the chances of introducing regressions in the future.

    #612 Fix #609 User Model: unable to change parameter values A regression introduced in Sherpa 4.10.0 prevented users from changing user-model parameter values through direct access. This issue has been fixed. Several tests were added to reduce the chance of regressions in the future.

    Fix #514 - the command line tool sherpa_smoke in the conda package now correctly honors command line arguments.

    Source code(tar.gz)
    Source code(zip)
  • 4.11.0(Feb 20, 2019)

    Sherpa 4.11.0

    Optimization routines and statistic methods were made more robust with several bug fixes and improvements.

    This version of Sherpa has been tested with Python 2.7, 3.5, 3.6, and 3.7. Support for Python 2.7 is being deprecated and may be dropped in future releases.

    This release also provides support for XSPEC 12.10.1 (patch 'a' or later) in addition to versions 12.10.0 (included in CIAO) and version 12.9.1.

    Details

    Documentation and infrastructure fixes are not shown.

    #444 Improve caching When fitting multiple datasets simultaneously Sherpa now tries to cache model evaluations to improve performance.

    #465 Support XSPEC 12.10.0 (fix #436) XSPEC 12.10.0e is now supported

    #508 Ensure fields are initialized before use (pileup code) Resolved an issue where some fields in the C++ extension could have been uninitialized in a corner case.

    #523 Ensure the ui modules have a unique list of exported symbols (fix #502) Remove the sherpa.utils.erf symbol from the two ui modules as it is "over-written" by the ModelWrapper-created symbol when applied to sherpa.models.basic.Erf.

    #524 Be explicit about 'invalid escape characters' in strings Convert strings that contain "invalid" escape characters, which newer versions of Python complain about, to raw strings by preceding them with an 'r' character.

    #525 Fixed baseline radpro_dm test and changed double --> real (template)

    #530 Ensure XSPEC 12.10.0 uses ATOMDB 3.0.9 by default Ensure that the default AtomDB version is 3.0.9 when using XSPEC 12.10.0. This is to fix an issue with the models-only XSPEC installation in XSPEC 12.10.0 which uses a default version of 3.0.7 but only provides data files for 3.0.9. Without this change models such as XSapec would return all 0's (unless the AtomDB version was set manually by the user).

    #534 XSPEC 12.10.1 support and mtable fix Sherpa now supports XSPEC 12.10.1. Several issues were fixed to ensure compatibility with this release. Note that there are known issues with version 12.10.1 (no patches), so at least v12.10.1a must be used.

    #537 / #552 sherpa.rc changes Sherpa does not read the verbosity.level setting from sherpa.rc anymore but the option is still in the file for backward compatibility

    #539 Fix XSPEC models: mtable table models and ismabs (fix #535, #538, #540) Fix to address issues with XSPEC: #535 XSPEC multiplicative table models (mtable models loaded with load_xstable_model); #538 incorrect evaluation of the ismabs model; #540 the kerrd model evaluated to 0 for XSPEC 12.10.0 and later.

    #546 Small code cleanup in LevMar Small code cleanup in the LevMar class to match the other classes in that file.

    #564 / #567 Harmonise the handling of zero or negative dof in fit and calc_stat_info (fix #565) The fit and calc_stat_info methods now use the same code to calculate the reduced statistic (rstat) and "quality of fit" (qval) values. This avoids a TypeError when the number of degrees of freedom is negative in calc_stat_info and ensures that NaN is returned for both values when the goodness-of-fit calculation has failed (e.g. the reduced statistic is zero or less or the statistic is negative), whereas in previous versions the qval value could be None or 1 depending on the code path.

    #576 Fix regression with responses get_x method Version 4.10.1 introduced a regression which made the get_x() method fail when called on RMF and ARF objects. This has been resolved.

    #583 Update pager code (fix #445 #561) Replace the use of an external pager (such as more or less) with a Python one. This means that Sherpa no longer uses the PAGER environment variable, and that the screen output from the show_* series of commands should now appear in the correct location when using a Jupyter notebook or the spyder application.

    Source code(tar.gz)
    Source code(zip)
  • 4.10.2(Dec 14, 2018)

    Sherpa 4.10.2

    This release fixes a regression introduced in Sherpa 4.10.1 related to PSF convolution. As part of the long term PSF rebinning improvements, Sherpa 4.10.1 introduced a check to validate that the data pixel size and the PSF pixel size match. If they don't match, then a warning is issued. The change did not account for an edge error case, which introduced a downstream regression in the gammapy project. Sherpa 4.10.2 solves this problem.

    Details

    #551 Fix gammapy/gammapy#1905, catch errors with psf pixel size. Sherpa did not properly catch errors when checking the PSF pixel size matches the image pixel size. This has now been fixed.

    Caveats

    There is a potential issue for macOS conda users with python > 3.6 and matplotlib v3.0.1, when pyqt5 is not installed and selected as a matplotlib backend. With this specific combination of platform and dependencies matplotlib can fail to work properly. The issue can be replicated without importing Sherpa, so this is not a Sherpa bug.

    Several workarounds exist, the simplest of which are:

      • install pyqt5 (conda install pyqt)
      • downgrade matplotlib to v2 or v3.0.0 (conda install matplotlib=3.0.0)

    Source code(tar.gz)
    Source code(zip)
  • 4.10.1(Apr 4, 2019)

    Sherpa 4.10.1

    This release fixes several bugs and introduces a few new features, notably the ability to evaluate model components on arbitrary grids and generate user-defined ARFs and RMFs. Also, as of this release Sherpa will no longer rely on any Fortran code. See the following section for details.

    It is now possible to build the Sherpa documentation using Sphinx. Additionally, the Sphinx documentation is automatically built and hosted on ReadTheDocs: https://sherpa.readthedocs.io/

    Details

    #407 Ensure 0-length array is an error in filter_resp (fix #405) Add an explicit check in the C++ filter_resp code to error out if the noticed channels array is empty.

    #422 Improve error message with wrong xspec version Improve the handling of XSPEC version mismatches.

    #466 Fix bounding box out-of-bounds memory read Avoid an out-of-bounds memory read when calling pad_bounding_box (when the data does not match the expected conditions for this call).

    #469 Evaluate model on finer grid Sherpa users can now define arbitrary grids, called evaluation spaces, on which to evaluate individual model components, both in 1D and 2D. This can be useful in a number of cases, for instance when it is desirable to evaluate models on a finer grid than the one defined by the data, or in convolution models where information outside of the data range can be used to reduce boundary effects or to inform the evaluation inside the data space. Also, when plotting individual 1D model components (plot_source_component), if a specific evaluation space was attached to the model then the plot will be of the model evaluated over that evaluation space, not the data space. Other plotting commands should be unaffected by this change.
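
    A hedged sketch of attaching an evaluation space to a 1D model component via its regrid method; the grids are arbitrary:

        import numpy as np
        from sherpa.models.basic import Gauss1D

        data_grid = np.linspace(0, 10, 11)       # the coarse grid defined by the data
        eval_grid = np.linspace(-2, 12, 1401)    # a finer, wider evaluation space

        g = Gauss1D('g')
        g_fine = g.regrid(eval_grid)   # evaluate g on eval_grid ...
        y = g_fine(data_grid)          # ... and resample the result onto data_grid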

    #470 Order for np.ones() should be a one length string not a bool Fixed a DeprecationWarning in the optimization module.

    #471 Using overwrite rather than clobber of astropy.io.fits clobber in astropy has been deprecated for a while now in favour of overwrite and thus issues a DeprecationWarning. Sherpa now uses overwrite instead.

    #475 Fix unnecessary runtime warning (fix #402) A condition check in the optimization code was modified so as not to produce a warning when the value is a NaN.

    #481 Remove Fortran code Sherpa does not rely on any Fortran routines anymore, except those compiled in XSPEC. The existing Fortran code, which was used mainly in some of the optimization routines, has been replaced by equivalent C++ code. This means that gfortran is no longer required for building Sherpa, although a version of libgfortran compatible with the one used to link the XSPEC libraries would still be needed at runtime if the XSPEC extension is built.

    #482 ARF/RMF creation functions The create_rmf and create_arf functions now allow users to easily generate user defined response objects.
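
    A hedged sketch of building a simple ARF; the module path (sherpa.astro.instrument) and the default of a flat, unit effective area are assumptions:

        import numpy as np
        from sherpa.astro.instrument import create_arf   # assumed location

        # arbitrary 10 eV bins covering 0.1-11 keV
        elo = np.arange(0.1, 11.0, 0.01)
        ehi = elo + 0.01

        arf = create_arf(elo, ehi)   # assumed: defaults to a flat response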

    #486 ZeroDivisionError in calc_stat_info (fix #476) Sherpa did not catch divisions by zero in the calc_stat_info command. That has been fixed.

    #484 Equivalent Width errors Several optional arguments (params=None, error=False, otherids=(), niter=1000) were added to eqwidth. If error=True then get_draws is run when the fit statistic is one of Cash, CStat, or WStat; otherwise a multivariate normal distribution is used to generate the samples. The optional niter parameter sets the number of samples. The optional otherids parameter is only used when get_draws is run internally and multiple data sets were used in the fit. Alternatively, the user can supply the samples via the params option, where the samples must be a numpy.ndarray of dimension (num, npar).
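
    A hedged sketch of the new options; the file name and model expression are only illustrative:

        from sherpa.astro.ui import *

        load_pha('src.pha')                            # hypothetical file name
        set_source(powlaw1d.cont + gauss1d.line)
        fit()

        # error=True triggers the sampling described above; niter sets the
        # number of samples drawn
        res = eqwidth(powlaw1d.cont, powlaw1d.cont + gauss1d.line,
                      error=True, niter=1000)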

    #487 PSF bin size warning Sherpa assumes that the PSF image and the data have the same pixel size. When this is not true Sherpa ignores the difference, which results in a larger PSF being applied. From now on Sherpa will issue a warning when the PSF image pixel size and the data image pixel size are different.

    Source code(tar.gz)
    Source code(zip)
  • 4.10.0(May 11, 2018)

    Sherpa 4.10.0

    This release of Standalone Sherpa corresponds to the Sherpa code released as part of CIAO 4.10.

    Sherpa 4.10.0 fixes several bugs related to the support of instrumental responses, including improved support of XMM and Swift responses.

    Also, this release fixes a significant bug in the support of user statistics and includes improvements to Python 3 compatibility, more robust usage of the numpy API, as well as several other minor bug fixes and new tests.

    Additionally, this release introduces support for XSPEC 12.9.1n models, as well as the ability to use aliases for parameter names. Some parameter names have been deprecated and may be removed in a future release. We reviewed the parameter limits for many models and updated them to reflect the latest XSPEC specification. Also, multiple versions of XSPEC are now supported, through optional models and version-dependent low-level function calls. This feature is for advanced users building Sherpa from source. Note that Sherpa has been tested against XSPEC 12.9.0i, 12.9.0o, and 12.9.1n. Note that XSPEC is not directly supported by the standalone binary builds, and users are expected to build Sherpa from source if they want to link it against their version of XSPEC. These changes make it easier for users to link different versions of XSPEC with the same Sherpa code base. Also note, however, that XSPEC 12.10 is not currently supported.

    Sherpa now requires pytest-3.3 or later for running the tests.

    Details

    #451 Release/4.10.0 Testing code now works with pip 10 as well, despite a change in the pip 10 API. Also, the README is now properly rendered as markdown in PyPI.

    #438 Change from error to warning for OGIP violations Given that users can not easily change a response file, and previous versions of Sherpa would allow the responses to be used, this commit changes some of the errors recently introduced (in PR #383) into warnings. The errors for the first bin edge being <= 0 are still left in because users of the sherpa.astro.ui module will find that these files are auto-corrected for them (by PR #383).

    #430 Update XSPEC parameters XSPEC model parameter default values, limits, and properties were reviewed and updated to reflect changes in or mismatches with the model.dat file shipped with XSPEC 12.9.1n.

    #428 handle function name changes in XSPEC 12.9.1 Sherpa now supports multiple versions of the XSPEC models from the 12.9.0 and 12.9.1 series. Some models recommend using the C-style interface over the older FORTRAN one, which may also resolve some memory access issues. For CIAO 4.10 users this means the interfaces to XSPEC models have been updated to the 12.9.1n versions. For Standalone Sherpa users this means they can build and run Sherpa against a larger range of XSPEC versions, and Sherpa will pick the XSPEC low level model function accordingly. Note that Sherpa has been tested against XSPEC 12.9.0i, 12.9.0o, and 12.9.1n. The 14 models that have been changed are: apec, bapec, bvapec, bvvapec, gaussian, lorentz, meka, mekal, raymond, vapec, vmeka, vmekal, vraymond, and vvapec.

    #427 Add support for XSPEC models added in 12.9.1 (fix #331) Add support for models added in XSPEC 12.9.1: bvtapec, bvvtapec, tapec, vtapec, vvtapec, carbatm, hatm, ismabs, slimbh, snapec, TBfeo, TBgas, TBpcf, TBrel, voigt, xscat. Version 12.9.0 of XSPEC can still be used, in which case the models can be generated - e.g. a user can say xstapec.mdl to create a Python object - but it will error out when evaluated. The Si and S elemental parameters for the ismabs model have been renamed from the XSPEC versions since they are not unique under the case-insensitive matching used by Sherpa: so SI, SII, SIII and SiI, SiII, SiIII have been renamed S_I, S_II, S_III and Si_I, Si_II, and Si_III respectively. Low level support for the following convolution models from XSPEC 12.9.1 has also been added, but there is no Python model support for these: clumin, rfxconv, vashift, vmshift, xilconv.

    #412 support ROSAT PSPC (and similar) RMF files with AstroPy Add explicit tests for reading in, and using, ROSAT PSPC PHA and RMF files. Fix a bug in the AstroPy back-end handling of the ROSAT RMF file.

    #409 EmissionGaussian model when skew parameter is not unity (fix #403) The sherpa.astro.optical.EmissionGaussian code has been fixed for the case when the skew parameter is not unity. The documentation has also been updated.

    #399 Python 3 fixes and new tests for pha2 datasets Fixed several problems when using Sherpa with Python 3: failures when calling list_bkg_ids and list_response_ids and the parameter querying after paramprompt(True) has been called. There is also a change to avoid a FutureWarning being raised in the astropy backend when reading in PHA2 data files. Tests have been added to test the reading of a PHA2 format dataset, and new files added to the sherpa-test-data repository for this purpose.

    #398 Check that list_samplers returns a list (fix #397) The list_samplers function now returns a list in Python 3 as well.

    #396 Fix set_xlog and related commands using Python 3 (fix #393) Fix use of commands that set the plot state using the sherpa.astro.ui module, such as set_xlog and set_xlinear, when using Python 3.

    #395 Add xspec optional models Sherpa has new infrastructure for supporting multiple versions of XSPEC at the same time, with models that are built and enabled conditionally depending on the version of XSPEC being linked. Models are assumed to have the same parameters and low level functions across supported versions.

    #394 XSPEC build and functionality improvements Add support for reading and changing the path to the XSPEC "manager" directory, and reading the current XSPEC "model" directory (get_xspath_manager, set_xspath_manager, and get_xspath_model). These are intended for the power user and so require an explicit import from sherpa.astro.xspec. There are several improvements to the build and interface to the XSPEC model library: these have no user-visible changes.

    #390 Add nlapec model The nlapec XSpec model has been added. Note that XSPEC 12.9.0 is now required.

    #385 DS9 calls return string in Python 2 and 3 (fix #319) Some low level xpa calls were returning byte strings rather than strings in Python 3. This had particular impact on the image_getregion function. This has now been fixed.

    #383 Replace 0-energy bins in ARF and RMFs (fix #332) Sherpa now performs some validation of the energy ranges of ARF and RMF files against the OGIP standard. If the ranges are inconsistent then the code will return with an error. If one energy bound is 0 the bound is replaced with a small number to avoid numerical issues like division by zero. A new configuration option minimum_energy, if present, allows users to override Sherpa's default behavior.

    #379 Use a line not span to draw horizontal line (fix #378) Ensure that the line at y=0 (residual) or y=1 (ratio) is drawn for residual or ratio plots when using matplotlib 2.0.

    #373 Make sure ds9 properly works with Python 2.7 (fix #368) For Python versions prior to 3.2 add in explicit code to make _Popen work as a context manager.

    #352 Add low-level support for the XSPEC rgsxsrc convolution model Add low-level support for the rgsxsrc convolution model in XSPEC. This is intended for R&D into fully supporting XSPEC convolution models in Sherpa.

    #255 Add aliases for parameter names so to support new xspec names (fix #74) The XSPEC models have been updated to use parameter names that match the new XSPEC naming scheme introduced in XSPEC 12.9.0. The old parameter names can still be used as aliases for the new names, but are deprecated and may be removed in a future major release. Note this allows any model to define aliases for its parameters.

    Source code(tar.gz)
    Source code(zip)
  • 4.9.1(Aug 3, 2017)

    Sherpa 4.9.1

    This version introduces full support for Python 3.6. It also fixes issues with non-Chandra response files, correctly handles the AREASCAL column in PHA files, and fixes a significant regression that was preventing user statistics from working in v4.8.2. It also introduces a number of smaller improvements and fixes. In particular, quite a few improvements have been made to the documentation and to the testing framework, including several new tests to improve stability.

    Details

    Infrastructure and minor non-functional changes have been omitted.

    #335 Fix setup.py install command The setup.py install command was not enforcing the installation of the dependencies listed in setup.py. This has been fixed.

    #368 Remove ds9 warnings when run under Python 3.6 Update the DS9 code so that external processes are cleaned up properly, so as to remove the potential ResourceWarning warnings when running DS9 on Python 3.6.

    #351 fix handling of AREASCAL column in PHA files (fix #350) Add support for handling the AREASCAL value (either scalar or vector) for PHA data sets. This array is used in XMM RGS data to handle missing chips.

    #358 Properly handle Swift RMF when using Astropy backend (fix #357) A Swift RMF could not be read in when the AstroPy back end was in use. The problem was that the code did not support RMF matrices that were not stored as variable-length arrays. This has now been fixed, and new tests have been added for this kind of file.

    #343 Fix user statistics regression (fix #341) A number of regressions introduced between versions 4.8.1 and 4.9.0 meant that user statistics which worked properly in version 4.7 no longer did. This has been fixed, and a number of regression tests have been added.

    Source code(tar.gz)
    Source code(zip)
  • 4.9.0(Jan 27, 2017)

    Sherpa 4.9.0

    This version fixes many bugs in the Python 3 support. Moreover, it includes a significant refactoring of the Fit and Stat classes that made it possible to fix several bugs related to the recent wstat implementation while making these classes more maintainable and extensible.

    Note that this version deprecates the use of load_table_model for XSPEC models. Sherpa/XSPEC users should use the new load_xstable_model function instead.

    Details

    Infrastructure and minor non-functional changes have been omitted.

    #242 Avoid use of inspect.getargspec in Python3 Finish off the replacement of inspect.getargspec by inspect.signature.

    #263 List_data_ids() fails on py3 with mixed id types (Fix #262). Sherpa was sorting the list of dataset IDs in a way that is not compatible with Python 3, which resulted in issues when using strings and integers together as dataset IDs. This has now been fixed.
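
    A minimal sketch of the mixed-identifier usage that used to trigger the sorting problem (the data values are arbitrary):

    from sherpa.ui import load_arrays, list_data_ids

    load_arrays(1, [1, 2, 3], [4, 5, 6])        # integer identifier
    load_arrays("src", [1, 2, 3], [7, 8, 9])    # string identifier
    print(list_data_ids())                      # now returns both identifiers without error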

    #267 add wstat tests Add several regression tests for wstat.

    #282 Parallel_map not working on py3 with numcores=1 (Fix #277). The utils function parallel_map failed on Python 3 when called with numcores=1, i.e. on systems with only one processor/core. This has been fixed.

    #283 Sample flux and numpy deprecations (Fix #273 and #276). The sample_flux function was not working under Python 3 if the scales argument was provided. This has been fixed. Also, a DeprecationWarning was issued by numpy because during the sample_flux execution values were extracted from an array with non-integer indices. This has also been fixed.

    #284 String representation of data classes under py3 (Fix #275). Data classes DataPHA, DataARF, DataRMF, DataIMG, and DataIMGInt in sherpa.astro.data would throw an exception if users tried to print them as strings, under Python 3. This has been fixed.

    #287 Rewrite sherpa.stats.Stat.calc_stat and simplify sherpa.fit.Fit (fix #227 #248 #289 #292). In order to fix several issues related to the WStat support, and in order to make the code more maintainable, the sherpa.stats.Stat.calc_stat and sherpa.fit.Fit classes have gone through a round of refactoring. This fixes the following issues: #227 Issues using wstat when grouping/filtering data; #248 backscal column not treated properly for WStat; #289 calc_stat does not error out if background subtracted data is used with Likelihood statistics; #292 stat info does not include reduced stat/qval for wstat.

    #295 Fix display of pileup model in Python 3.5 (Fix #294). Fix display of instances of sherpa.astro.models.JDPileup so that, in Python 3.5, they can be displayed after the model has been evaluated.

    #304 replace file -> open (Fix #297). The save and restore functions used to use the file function which is not compatible with Python 3. This has now been fixed.

    #305 Fix python 3 issues with some session commands (Fix #303). The set_xlog, set_ylog, and show_bkg_model functions were not compatible with Python 3. This has now been fixed (Issue #303).

    #307 Move XSPEC table support to load_xstable_model and deprecate its support in load_table_model (Fix #270). Add the load_xstable_model routine to the sherpa.astro.ui module, which supports loading XSPEC additive or multiplicative (atable and mtable) models. The support for these models is still available via load_table_model in this release, but it is deprecated. The read_xstable_model routine has been added to the sherpa.astro.xspec module.

    #312 Fix over-zealous code clean up in PR #287 affecting sigmarej. Fits using the sigmarej iterated-fit method were broken if a filter had been applied to the data before the fit and there were bins ignored at larger values than the filtered-out data. (This fixes a subtle regression introduced by #287.)

    #313 Allow sequence=None when using gridsearch and Python 3.5 (Fix #309). Allow the gridsearch optimiser to be used with the sequence option set to None for Python 3.5.

    Caveats

    Sherpa is built and tested with Python 2.7 and 3.5. There has been limited testing with Python 3.6, for which we distribute conda binaries; if in doubt, please install Sherpa in a 2.7 or 3.5 environment. Support for Python versions 3.3 and 3.4 is possible but would require community support.

    It has been reported during testing that some versions of the matplotlib conda package do not install properly because of a pyqt v5 dependency. If you encounter this issue, please pin pyqt to version 4, e.g. conda install matplotlib pyqt=4.

    The sherpatest package is no longer distributed as a conda package. This will probably remain true for the foreseeable future. The sherpatest package contains data and functional tests that rely on external datasets, allowing users and developers to run the entire regression test suite. If you want to install sherpatest, please use pip and GitHub:

    $ pip install https://github.com/sherpa/sherpa-test-data/archive/4.9.0.tar.gz
    

    If you decide to run the full regression test suite you should also have matplotlib installed: if matplotlib is not installed, a test will run and fail rather than be skipped. This issue will be fixed in the next release.

    Source code(tar.gz)
    Source code(zip)
  • ciao4.9(Dec 16, 2016)

    This is a CIAO release. Binaries will be provided with the next standalone release. Release notes are collected for all changes since CIAO 4.8, which may be included from multiple standalone releases.

    Release Notes

    Sherpa 4.9 now runs under both Python 2.7 and Python 3.5. The test infrastructure has been modified, including a simplification of the smoke test. Several bug fixes and enhancements are also included. Specific details are identified below.

    14361: Sherpa and Chips wrappers need to replace execfile for Python 3 compatibility This fix modifies the sherpa and chips wrapper scripts to replace the execfile command with an exec command sequence that allows the wrapper scripts to be used with Python 2.7 or Python 3.5. Without the change, the option to specify a command script at the sherpa or chips command prompt (i.e. 'chips /pool1/runme.py') will not work on Python 3.5, since execfile does not exist in Python 3.5.

    #107: Normalize plot labels. Plots created with plot_source used a different format to other plots when analysis=wavelength, in that LaTeX symbols were used for Angstrom and lambda (in other plots the string 'Angstrom' is used instead). The source plots now match the other plots.

    #138: improve and fix issues in save_all function.

    • added a new argument to save_all: if outfile is None then the outfh argument is used to define the output handle (the argument can be any file-like argument, such as a file handle like sys.stdout or the output of open, or a StringIO object)
    • setting the clobber argument to save_all now means that the output file (the outfile argument, if not None) is deleted if it already exists; prior to this, the file would be appended to instead
    • the source expression is now saved correctly for most cases (e.g. when not using set_full_model); this is bug #97 but also affects non-PHA data sets
    • the background model expression was not always written out correctly when using PHA data sets
    • quality and grouping arrays of PHA data sets are now stored as 16-bit integers rather than floating-point values (this has no effect on the results, but matches the OGIP standard)
    • fixed up saving the grouping and quality arrays of background PHA data sets (this would only be an issue if the background is being fit, rather than subtracted)
    • basic data sets created with the load_arrays function are now written out by save_all as part of the script; this is intended for small datasets and may have problems with precision if used with floating-point arrays
    • calls to load_psf are now correctly restored (they may not have been written out correctly if multiple data sets were loaded)
    • user models are now written out to disk; this consists of two parts:
    • writing out the function that defines the model, which may or may not be possible (if not, a place-holder function is added to the output and a warning displayed).
    • the necessary calls to load_user_model and add_user_pars are now included in the output
    • the Python code created by save_all has undergone several minor changes:
    • it now explicitly imports the sherpa.astro.ui module, so that it can be run from the IPython prompt using the %run <filename> command, or directly as python <filename>
    • it uses the create_model_component function rather than eval to create model components (this is CXC bug 12146)
    • many optional arguments to functions are now given as name=value rather than being a positional argument, to make it clearer what the script is doing.
    • calls to load_data have been replaced by more-specific versions - e.g. load_pha and load_image - if appropriate
    • there have been several minor syntactic clean ups to better follow the suggestions from PEP8

    When writing out code that defines a user model, there is no attempt to make sure that modules used by the function are available; these will need to be added to the output manually, either directly or as imports.
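
    A minimal sketch of the save_all options described above ('src.pi' and 'session.py' are hypothetical file names):

    import sys
    from sherpa.astro.ui import load_pha, set_source, save_all

    load_pha("src.pi")
    set_source("powlaw1d.pl")
    save_all(outfh=sys.stdout)              # write the session script to a file-like handle
    save_all("session.py", clobber=True)    # or overwrite an existing output file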

    #153: Minor bug with calc_chi2datavar_errors (Fix #148). Make the comparison test in calc_chi2datavar_errors less stringent, so as to include the case where sqrt(x)=0.

    #155: Add argument to get_draws for supplying a covariance matrix. The get_draws function now accepts a user-provided covariance matrix. If no covariance matrix is provided, the covariance matrix computed by the default implementation is used. Note that covar() must be invoked before invoking get_draws if no covariance matrix is provided, otherwise get_draws will exit with an error.
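
    A minimal sketch, assuming a dataset and model have already been loaded and fit; the covar_matrix keyword and the extra_output attribute follow the current Sherpa documentation:

    from sherpa.astro.ui import covar, get_covar_results, get_draws

    covar()                                        # required when no matrix is supplied
    stats, accept, params = get_draws(niter=1000)

    # or supply a covariance matrix directly
    cmatrix = get_covar_results().extra_output
    stats, accept, params = get_draws(niter=1000, covar_matrix=cmatrix)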

    #165: Remove usage of deprecated numpy API.

    #185: Protect XPA command to avoid shell confusion. Fix the problem where if the working directory contained a file called x or y then the sherpa.astro.ui.image_data() function would fail with the message

     DS9Err: Could not display image
    

    #187: Issue a more meaningful message when sherpa.astro.io is imported directly and no fits backends are available. (Fix #92).

    #190: Datastack can be used if no plotter available. The datastack package can now be used even if there is no available plotting backend. In this case, plotting functions will not be available, but the rest of the datastack functionality will. (Fix #22).

    #195 Generalize calc_stat API + example of how to have several datasets with different fit statistics. Attempts to generalize the calc_stat API and enable simultaneous fits with different statistics for different data sets.

    #209: Fix docstrings for group_snr() and group_adapt_snr(). Updates to the docstrings for clarity.

    #210: New Smoke Test The smoke test has been greatly simplified: rather than running all the unit and regression tests that do not require test data, the smoke test now simply ensures that the basic installation works, i.e. that basic commands can be run and that dependencies can be reached.

    #211: Cleanup of documentation and code in sherpa.astro.utils. The calc_kcorr function is now exported by sherpa.astro.utils. Minor changes to the documentation in sherpa.astro.utils were also made to conform to Sphinx standards.

    #221 Add model documentation (Fix #217). Integrate existing model documentation (from external sources and the CIAO ahelp documentation system) into the model classes.

    #229: Code is both Python 2.7 and 3.5 compliant. (Fix #76).

    #242: Avoid use of inspect.argspec in Python 3 This change replaces the deprecated 'inspect.argspec' call with 'inspect.signature'.

    #252: Fix plot_photon_flux function. (Fix #241). A bug in plotting the photon flux was fixed by adding a missing argument to the sample_photon_flux call.

    #253 Make sure background is taken into account in calc_stat_info (Fix #147). calc_stat_info call failed when wstat was selected, as the background was not taken into account. This issue has now been fixed.

    #254 Fix the documentation for set_rmf (Fix #236). The documentation for set_rmf incorrectly referred to ARF rather than RMF.

    #256 Fix docstring in set_quality (Fix #205). The docstring in set_quality now correctly indicates the quality flags. The previous documentation didn't describe the values of such flags properly.

    #257 Fix docstring for levmar tolerance (Fix #257). The documentation string for the Levenberg-Marquardt optimization function now correctly states that the parameter default values are equal to the single precision epsilon, rather than the square root of the double precision epsilon.

    #263 List_data_ids() fails on py3 with mixed id types (Fix #262). Sherpa was sorting the list of dataset IDs in a way that is not compatible with Python 3, which resulted in issues when using strings and integers together as dataset IDs. This has now been fixed.

    #267 add wstat tests Add several regression tests for wstat.

    #282 Parallel_map not working on py3 with numcores=1 (Fix #277). The utils function parallel_map failed on Python 3 when called with numcores=1, i.e. on systems with only one processor/core. This has been fixed.

    #283 Sample flux and numpy deprecations (Fix #273 and #276). The sample_flux function was not working under Python 3 if the scales argument was provided. This has been fixed. Also, a DeprecationWarning was issued by numpy because during the sample_flux execution values were extracted from an array with non-integer indices. This has also been fixed.

    #284 String representation of data classes under py3 (Fix #275). Data classes DataPHA, DataARF, DataRMF, DataIMG, and DataIMGInt in sherpa.astro.data would throw an exception if users tried to print them as strings, under Python 3. This has been fixed.

    #287 Rewrite sherpa.stats.Stat.calc_stat and simplify sherpa.fit.Fit (fix #227 #248 #289 #292). In order to fix several issues related to the WStat support, and in order to make the code more maintainable, the sherpa.stats.Stat.calc_stat and sherpa.fit.Fit classes have gone through a round of refactoring. This fixes the following issues: #227 Issues using wstat when grouping/filtering data; #248 backscal column not treated properly for WStat; #289 calc_stat does not error out if background subtracted data is used with Likelihood statistics; #292 stat info does not include reduced stat/qval for wstat.

    #295 Fix display of pileup model in Python 3.5 (Fix #294). Fix display of instances of sherpa.astro.models.JDPileup so that, in Python 3.5, they can be displayed after the model has been evaluated.

    #304 replace file -> open (Fix #297). The save and restore functions used to use the file function which is not compatible with Python 3. This has now been fixed.

    #305 Fix python 3 issues with some session commands (Fix #303). The set_xlog, set_ylog, and show_bkg_model functions were not compatible with Python 3. This has now been fixed (Issue #303).

    #307 Move XSPEC table support to load_xstable_model and deprecate its support in load_table_model (Fix #270). Add the load_xstable_model routine to the sherpa.astro.ui module, which supports loading XSPEC additive or multiplicative (atable and mtable) models. The support for these models is still available via load_table_model in this release, but it is deprecated. The read_xstable_model routine has been added to the sherpa.astro.xspec module.

    #312 Fix over-zealous code clean up in PR #287 affecting sigmarej. Fits using the sigmarej iterated-fit method were broken if a filter had been applied to the data before the fit and there were bins ignored at larger values than the filtered-out data. (This fixes a subtle regression introduced by #287.)

    #313 Allow sequence=None when using gridsearch and Python 3.5 (Fix #309). Allow the gridsearch optimiser to be used with the sequence option set to None for Python 3.5.

    Caveats

    #319: image_getregion returns byte string on Py3.

    SH-2: The new test_save_restore test in the CIAO regression test suite is failing on all platforms. We are investigating the reasons for the failure. The failure is triggered only when the test is not run in isolation, and only when certain other tests are run before it. Also, this only applies to CIAO and not to standalone Sherpa.

    SH-3: Some tests are skipped during the CIAO regression tests.

    SH-4: Several OS X regression tests are failing.

    Note: The SH-2/3/4 caveats are issues with the tests themselves, not with the code, and only appear when running the full CIAO regression test suite. Users will not be affected by these issues unless they run that suite.

    Source code(tar.gz)
    Source code(zip)
  • 4.8.2(Sep 23, 2016)

    Sherpa 4.8.2

    This version of Sherpa is the first one to run under both Python 2.7 and Python 3.5. The Python 3 conversion is considered a beta, designed to maintain backwards compatibility with Python 2.7. As such, we expect some iteration before the Python 3.5 support stabilizes.

    The smoke test has been greatly simplified: rather than running all of the unit and regression tests that do not require test data, the smoke test now simply ensures that the basic installation works, i.e. that basic commands can be run and that dependencies can be reached. This allows the use of more advanced tools for actual unit and regression tests without having to ship such tools to users.

    Several bugs were fixed, some enhancements implemented, and some deprecated calls to external APIs replaced. In particular:

    • more documentation was migrated and is now part of the code base as docstrings.
    • the calc_stat API has been generalized to make statistic functions more extensible.
    • calc_stat_info now properly takes background into account when wstat is selected.

    Details

    Infrastructure and minor non-functional changes have been omitted.

    #209: Fix docstrings for group_snr() and group_adapt_snr(). The docstrings used to say "Combine the data so that each bin has a signal-to-noise ratio of at least minimum", but in fact the end of a group is marked only when the SNR exceeds the given value.

    #210: The smoke test has been greatly simplified and it now simply ensures that the basic installation works, i.e. that basic commands can be run and that dependencies can be reached. Previously, the smoke test would run all unit tests. Unit and regression tests can now take advantage of the py.test and mock packages (the latter is part of the standard library in Python 3).

    #211: Cleanup of documentation and code in sherpa.astro.utils. The calc_kcorr function is now exported by sherpa.astro.utils. Minor changes were also made to the documentation in sherpa.astro.utils to conform to Sphinx standards.

    #221: Integrate existing model documentation (from external sources and the CIAO ahelp documentation system) into the model classes.

    #229: Code is both Python 2.7 and 3.5 compliant. This included updating the CIAO dependencies: region, group, and stack libraries, which were ported to Python 3 as well.

    #242: Avoid use of inspect.argspec, which was deprecated in Python3.

    #252: plot_photon_flux function was calling an internal function with the wrong number of arguments and thus raising an error. This has been fixed.

    #253: calc_stat_info call failed when wstat was selected, as the background was not taken into account. This issue has now been fixed.

    #256: Fix docstring in set_quality, which listed the wrong values for the quality flags.

    #257: Fix docstring for levmar tolerance, which listed the wrong function defaults.

    Caveats/Known Issues

    The following are known issues with the standalone 4.8.2 release

    show_all does not work on Python 3 with PHA data. For example, the following code throws an exception:

    from sherpa.astro.ui.utils import Session
    from sherpa.astro.data import DataPHA
    session = Session()
    session.load_arrays(1, [1, 2, 3], [1, 2, 3], DataPHA)
    session.show_all()
    

    the parallel_map function throws an exception on Python 3 when only one core is available on the system running Sherpa. This impacts a number of functions that take advantage of task parallelization on multi-core systems. These functions include calc_flux, sample_flux, a number of functions for plotting parameter projections, and the grid_search optimization algorithm.
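
    A minimal sketch that triggers the issue (the inputs are arbitrary):

    import numpy as np
    from sherpa.utils import parallel_map

    # fails on Python 3 when numcores=1 (e.g. on a single-core machine)
    parallel_map(np.sum, [np.arange(5), np.arange(10)], numcores=1)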

    string representation of some (but not all) dataset classes is broken on Python 3. For instance, the following code will result in an exception:

    from sherpa.astro.ui import *
    load_pha('3c273.pi')
    print(get_data())
    

    sample_flux does not work on Python 3 when called with the scales argument, e.g.:

    from sherpa.astro import ui
    
    ui.load_pha('sherpa-test-data/sherpatest/3c273.pi')
    ui.set_model('polynom1d.p1')
    ui.fit()
    ui.covar()
    scal = ui.get_covar_results().parmaxes
    ui.sample_flux(ui.get_model_component('p1'), 0.5, 1, num=5, correlated=False, scales=scal)
    
    Source code(tar.gz)
    Source code(zip)
  • 4.8.1(Apr 15, 2016)

    Sherpa 4.8.1

    Sherpa 4.8.1 is the standalone counterpart to the 4.8.0 release, which was focused on supporting CIAO 4.8. In particular, this version introduces support for newer versions of the dependencies, along with some feature enhancements, bug fixes and additional, more accurate tests.

    The newly supported dependencies:

    • matplotlib v1.5
    • numpy 1.10 and 1.11 (with and without mkl support)
    • xspec v12.9.0i (when building from source)
    • astropy v1.1.2
    • region library v4.8 (from CIAO 4.8)

    Please see the Caveats section for known issues regarding the XSpec support.

    More details below (infrastructure changes are not shown):

    #102: fix issues when writing out FITS files using the save_pha and save_table commands when using the astropy/pyfits backend (bug #46). Also fix the case where the notice2d_id and notice2d_image functions, and their ignore counterparts, are called with an invalid identifier (i.e. an identifier that is not an integer or string value). The error is now an ArgumentTypeErr with the message "'ids' must be an identifier or list of identifiers"; previously it was a NameError with the message "global name '_argument_type_error' is not defined".

    #107: Normalize plot labels. There are two main changes for plots of PHA data sets (and related quantities, such as the source, model, and ARF):

    • plots with matplotlib now use the LaTeX support - so that 'cm^2' is now displayed as a superscript; previously they were displayed directly. This does not change the display with the ChIPS backend.
    • plots created with plot_source used a different format to other plots when analysis=wavelength, in that LaTeX symbols were used for Angstrom and lambda (in other plots the string 'Angstrom' is used instead). The source plots now match the other plots.

    #109: fix #103 and #113 in order to support matplotlib v1.5.

    #116: fix bug #27. The astropy.io.fits/pyfits interface used deprecated functionality. The code was updated to use the replacement classes/methods when available, falling back to the original code if not. When the new symbols are available, the changes are to:

    • use astropy.io.fits.BinTableHDU.from_columns rather than astropy.io.fits.new_table
    • use astropy.io.fits.Header rather than astropy.io.fits.CardList

    #137: upgrade CIAO region library to v4.8

    #138: improve and fix issues in save_all function.

    • added a new argument to save_all: if outfile is None then the outfh argument is used to define the output handle (the argument can be any file-like argument, such as a file handle like sys.stdout or the output of open, or a StringIO object)
    • setting the clobber argument to save_all now means that the output file (the outfile argument, if not None) is deleted if it already exists; prior to this, the file would be appended to instead
    • the source expression is now saved correctly for most cases (e.g. when not using set_full_model); this is bug #97 but also affects non-PHA data sets
    • the background model expression was not always written out correctly when using PHA data sets
    • quality and grouping arrays of PHA data sets are now stored as 16-bit integers rather than floating-point values (this has no effect on the results, but matches the OGIP standard)
    • fixed up saving the grouping and quality arrays of background PHA data sets (this would only be an issue if the background is being fit, rather than subtracted)
    • basic data sets created with the load_arrays function are now written out by save_all as part of the script; this is intended for small datasets and may have problems with precision if used with floating-point arrays
    • calls to load_psf are now correctly restored (they may not have been written out correctly if multiple data sets were loaded)
    • user models are now written out to disk; this consists of two parts:
    • writing out the function that defines the model, which may or may not be possible (if not, a place-holder function is added to the output and a warning displayed).
    • the necessary calls to load_user_model and add_user_pars are now included in the output
    • the Python code created by save_all has undergone several minor changes:
    • it now explicitly imports the sherpa.astro.ui module, so that it can be run from the IPython prompt using the %run <filename> command, or directly as python <filename>
    • it uses the create_model_component function rather than eval to create model components (this is CXC bug 12146)
    • many optional arguments to functions are now given as name=value rather than being a positional argument, to make it clearer what the script is doing.
    • calls to load_data have been replaced by more-specific versions - e.g. load_pha and load_image - if appropriate
    • there have been several minor syntactic clean ups to better follow the suggestions from PEP8

    When writing out code that defines a user model, there is no attempt to make sure that modules used by the function are available; these will need to be added to the output manually, either directly or as imports.

    #151: Ensure AstroPy and Crates behave the same with gzipped files. Change the behaviour of the AstroPy back end so that it matches that of Crates when given a file name that does not exist but for which a compressed version, with the suffix .gz, does: the compressed file is read instead. This extends to PHA files whose ancillary files - e.g. those stored in the BACKFILE, ANCRFILE, and RESPFILE keywords - are given as unzipped names, but only the gzipped names exist on disk.

    As an example: if pha.fits.gz exists but pha.fits does not, then

    load_pha('pha.fits')
    

    will now load the file with either back end. If the response files are set to arf.fits and rmf.fits (via the ANCRFILE and RESPFILE keywords), but only the .gz versions exist, then they will now also be loaded by the AstroPy back end.

    #153: Make the comparison test in calc_chi2datavar_errors less stringent, so as to include the case where sqrt(x)=0.

    #155: The get_draws function now accepts a user-provided covariance matrix. If no covariance matrix is provided, the covariance matrix computed by the default implementation is used. Note that covar() must be invoked before invoking get_draws if no covariance matrix is provided, otherwise get_draws will exit with an error.

    #158: Fix bug that prevented region ASCII files from being read in standalone Sherpa.

    #165: Remove usage of deprecated numpy API.

    #185: Fix the problem where if the working directory contained a file called x or y then the sherpa.astro.ui.image_data() function would fail with the message

     DS9Err: Could not display image
    

    #187: Fix #92: a more meaningful message is given to the user when sherpa.astro.io is imported directly and no fits backends are available.

    #188: Fix #93. The sherpa_test script now tries to install the test dependencies before running the tests (but not the sherpatest package, which should be installed by the user if necessary, due to its footprint). If this is not possible, and the necessary dependencies (pytest) are not found, then a meaningful message is given to the user with instructions on how to install the dependencies. Also, the dependency on pytest-cov has been removed. Users can enable coverage reports from the command line if necessary.

    #190: Fix #22 - The datastack package can now be used even if there is no available plotting backend. In this case, plotting functions will not be available, but the rest of the datastack functionality will.

    Caveats

    The following are known issues with the standalone 4.8.1 release

    XSpec support: Several issues have been encountered with the optional source building with XSpec models on OSX platforms (Linux support appears unaffected). The issues include a name clash between the libcfitsio library and the astropy.io.fits Python extensions that results in XSpec failing to load FITS files, possibly leading to a crash.

    SAO DS9 issue on Ubuntu 14.04: the ds9 binaries shipped with Ubuntu and installed through apt-get install do not seem to work as expected. Binaries downloaded directly from the SAO ds9 page seem to work instead. (Note: this issue was listed in the 4.8.0 release as well).

    Wrong save_data header keywords: when using astropy as a FITS backend to save PHA data with save_data some header keywords are incorrectly set by Sherpa. In particular, range information for certain columns may be inaccurate (see issue #203 for details).

    Source code(tar.gz)
    Source code(zip)
  • 4.8.0(Jan 27, 2016)

    Release Notes

    Sherpa 4.8.0

    This version of Sherpa introduces the 'wstat' statistic, which is an extension of 'cstat' that includes Poisson background data. It also provides the ability to include background data in 'user statistics'.

    Many changes were aimed at improving the Xspec extension, making it more robust and intuitive, and fixing several bugs. Sherpa 4.8.0 supports Xspec 12.9.0d and was also tested against versions 12.8.2e, 12.8.2l, and 12.8.2q.

    Most of the codebase was reviewed and cleaned up, in particular to remove the use of deprecated functionality and to comply with Python's PEP8 standard. More tests were added to the test suite, and a new testing infrastructure was put in place in order to simplify writing and running tests, and to measure the test suite code coverage.

    More details below (infrastructure changes are not shown):

    #32: Fix segfault from CRATES update in 4.8b1. Since v4.8b1 CRATES returns variable-length arrays by default, rather than the zero-padded fixed-length ones it used to return. Sherpa manipulated the arrays so as to remove the zero-padding and obtain variable-length arrays. The change in the CRATES API resulted in Sherpa segfaulting when trying to manipulate the data coming from CRATES. In the patch, we use a new API offered by CRATES to get the old-style fixed-length arrays instead of the new default ones. In the future, we may want to update the Sherpa code to deal with the new arrays directly.

    #44: save_quality now correctly outputs 'QUALITY' as the column name, instead of 'GROUPS'.

    #48: Fix up several issues seen in plot labels - titles and Y-axis labels - for commands such as sherpa.ui.plot_data, sherpa.ui.plot_fit_resid, and sherpa.ui.plot_chisqr.

    #59: Fix bug #38 (grouping twice gives an IndexError exception). An unhandled corner case in one of the Sherpa internal methods (utils.create_expr) triggered an IndexError when two group_counts operations were performed back to back. The fix handles this case so that applying group_counts twice no longer results in an exception.
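
    A minimal sketch of the back-to-back grouping that used to fail ('src.pi' is a hypothetical PHA file):

    from sherpa.astro.ui import load_pha, group_counts

    load_pha("src.pi")
    group_counts(20)    # first grouping
    group_counts(30)    # regrouping no longer raises IndexError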

    #77: Replace == and != comparisons to None with is and is not.

    #78: OutOfBoundErr exceptions in some sherpa.utils functions are properly caught. There were several places where screen output used either print or sys.stderr.write.

    #81: Ensure that XSPEC models which fail - for instance, because a data file it needs is missing - return 0's for all bins, rather than random values. This should make it more obvious that something has gone wrong (for instance if the XSPEC chatter level is low enough not to show any error messages, as is the case for the default setting used by Sherpa, namely 0).

    #82: The XSpec "spectrum number" value is now set to 1 rather than 0, as this value is 1-based in Xspec.

    #83: Removed S-Lang scripts, files, and references in the code.

    #84: Clarified error messages in Xspec extension. Also, changed the class of the exception from RuntimeError to more appropriate exception types, in particular LookupError, ValueError, KeyError. This is a backwards-incompatible change, in that code that caught the RuntimeError will not catch the new error.

    #87: Some methods in sherpa/fit.py assigned mutable objects to default arguments. This has now been fixed. More instances of this issue have been identified (Bug #95) and will be removed in the future.
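
    For reference, a generic Python illustration (not Sherpa code) of the pitfall being removed: a mutable default argument is created once and shared between all calls.

    def append_bad(value, cache=[]):     # the same list object is reused on every call
        cache.append(value)
        return cache

    def append_good(value, cache=None):  # create a fresh list per call instead
        if cache is None:
            cache = []
        cache.append(value)
        return cache

    append_bad(1); append_bad(2)         # second call returns [1, 2] - state leaks across calls
    append_good(1); append_good(2)       # second call returns [2] - each call starts clean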

    #90: Added background data to the UserStat class.

    #94: Implement wstat statistic as described at the following url: https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html
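
    A minimal sketch of selecting the new statistic, assuming a PHA dataset with an associated background file ('src.pi' is a hypothetical name):

    from sherpa.astro.ui import load_pha, set_source, set_stat, fit, calc_stat_info

    load_pha("src.pi")            # the background is read from the BACKFILE keyword
    set_source("powlaw1d.pl")
    set_stat("wstat")             # cstat extended with Poisson background data
    fit()
    calc_stat_info()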

    #96: Remove the unused myoptfct module.

    #99: Correct the documentation for the set_exposure function.

    #100 Fix bug #97, whereby the save_all function would not create the necessary set_source() line. This does not fix all cases, but it does handle the simple PHA case, such as

    load_pha('src.pi')
    set_source(xsphabs.gal * powlaw1d.pl)
    save_all('test.out')
    

    It also ensures that files created by save_all can be run using IPython's %run directive, by explicitly importing the sherpa.astro.ui module.

    #101: Fix handling of non-contiguous bins in Xspec - i.e. when a model is called with both xlo and xhi arguments but the bins do not fully cover the energy, or wavelength, range. This fixes #62 (for XSPEC 12.8.2; switching to XSPEC 12.9.0 should also fix it) and #56. It also fixes an (un-reported) problem with handling of non-contiguous grids when using a table model, where a crash was likely. When an XSPEC model is called with both low and high values for the grid - i.e. with two arguments - then the two arrays are checked to have the same length, and a ValueError is raised if this condition does not hold. This is a breaking change, but the results are not guaranteed to be correct if the two arrays are not the same length.

    The experimental interface for XSPEC convolution models has changed, so that the function call takes pars, fluxes, xlo, with optional xhi, whereas before it was pars, xlo, xhi, fluxes. This is a breaking change, but it is in the low-level API that is not documented to users, and it adds useful functionality (the ability to have xhi be optional). The cpflux convolution model has been added. Note that these models do not have Python classes associated with them as they are still an experimental interface.

    The test suite has been updated to test the new and changed functionality in this PR. The choice of models is made so as to avoid known problematic models (with a version check where relevant). It is believed that the changes in this PR fix #42, although this is hard to prove conclusively given the erratic nature of the bug.

    #110: Update the sherpa.astro.datastack module documentation to include information from the CIAO ahelp documentation and to match the style used by the sherpa.astro.ui module.

    #111: Update the documentation to include more information about the pyBLoCXS code.

    Caveats

    These caveats are being fixed for the 4.8.1 release.

    Incompatibility with matplotlib 1.5: Sherpa 4.8.0 is not compatible with matplotlib 1.5. Unfortunately, this version is currently the default package installed by conda. Users should install sherpa with matplotlib=1.4 numpy=1.9.

    Test requirements are not installed automatically: sherpa_test does not work out of the box. Users should issue "pip install pytest-cov" in order for sherpa_test to run.

    SAO DS9 issue on Ubuntu 14.04: the ds9 binaries shipped with Ubuntu and installed through apt-get install do not seem to work as expected. Binaries downloaded directly from the SAO ds9 page seem to work instead.

    Source code(tar.gz)
    Source code(zip)
  • 4.7(Apr 21, 2015)

    Release Notes

    This standalone release is based on CIAO Sherpa v4.7.

    Release notes for this baseline version can be found at the following link: http://cxc.harvard.edu/ciao/releasenotes/ciao_4.7_release.html#Sherpa

    Additionally, the present release includes some changes listed below:

    • Standalone Sherpa and CIAO sherpa now look for different configuration files in the $HOME directory. For standalone this is $HOME/.sherpa-standalone.rc. If this file is not present, Sherpa falls back to the internal configuration file. This file has defaults better suited for the standalone mode: pyfits and pylab are set as backends, and the stack trace is not silenced. Users can still override the configuration file location by exporting the SHERPARC environment variable, as supported by previous versions of Sherpa and CIAO.
    • All source files now have copyright and licensing information. A summary is included upfront in the repository main directory.
    • Orphan code that was not actually used has been removed.
    • Fixed code triggering deprecation warnings from Numpy 1.9.
    • Fixed code triggering compiler warnings.
    • Added documentation (README, ipython notebook).
    • The version string (sherpa.__version__) depends on the git commit/tag, unlike in CIAO where it is fixed to 40701.
    • Some classes from the template module were not exposed by __all__, and template models were not imported in the sherpa session. Now they are.

    Known issues

    • the datastack module is not imported if no plotting packages are available. Installing matplotlib in the same environment as Sherpa fixes the issue.
    Source code(tar.gz)
    Source code(zip)