Machine learning for NeuroImaging in Python

Overview

nilearn

Nilearn enables approachable and versatile analyses of brain volumes. It provides statistical and machine-learning tools, with instructive documentation and a friendly community.

It supports general linear model (GLM) based analysis and leverages the scikit-learn Python toolbox for multivariate statistics with applications such as predictive modelling, classification, decoding, or connectivity analysis.
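As a hedged illustration of the kind of decoding workflow this enables (func_img and labels are assumed inputs, and the masker and classifier choices are illustrative, not a prescribed pipeline):

from nilearn.maskers import NiftiMasker
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# func_img: a 4D functional image; labels: one condition label per volume
masker = NiftiMasker(smoothing_fwhm=4, standardize=True)
X = masker.fit_transform(func_img)          # (n_volumes, n_voxels) matrix
scores = cross_val_score(LinearSVC(), X, labels, cv=5)
print("decoding accuracy: %.2f" % scores.mean())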

Dependencies

The required dependencies to use the software are:

  • Python >= 3.5
  • setuptools
  • NumPy >= 1.11
  • SciPy >= 0.19
  • Scikit-learn >= 0.19
  • Joblib >= 0.12
  • Nibabel >= 2.0.2

If you are using Nilearn's plotting functionalities or running the examples, matplotlib >= 1.5.1 is required.

If you want to run the tests, you need pytest >= 3.9 and pytest-cov for coverage reporting.
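For example, from a source checkout (the --cov flag is provided by pytest-cov):

pytest --cov=nilearn nilearn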

Install

First make sure you have installed all the dependencies listed above. Then you can install nilearn by running the following command in a command prompt:

pip install -U --user nilearn
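To check that the installation worked, print the installed version from Python:

python -c "import nilearn; print(nilearn.__version__)"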

More detailed instructions are available at http://nilearn.github.io/introduction.html#installation.

Development

Detailed instructions on how to contribute are available at http://nilearn.github.io/development.html

Comments
  • [ENH] Initial visual reports

Closes #2022.

    An initial implementation of visual reports for Nilearn. Adds:

    • [x] The templating library tempita as an external dependency
    • [x] A reorganization of HTMLDocument into a new reporting module
    • [x] A new reporting HTML template
• [x] A superclass Report to populate the report HTML template with tempita-populated text
    • [x] Relevant CSS styling to improve report UX, using pure-css
    • [x] An ability to display reports directly in Jupyter Notebooks, without iframe rendering, thanks to @GaelVaroquaux
    • [x] Documentation of this functionality, with examples
    • [x] A new Sphinx Gallery image scraper to embed these example HTML reports

    For a current rendering of reports see: https://github.com/emdupre/nilearn/pull/4#issuecomment-527984327 and the plot_mask_computation example.
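For a sense of the intended workflow, here is a hedged sketch of how such reports surface to users (the generate_report / save_as_html names reflect how this functionality landed in later releases; func_img is an assumed input):

from nilearn.maskers import NiftiMasker

masker = NiftiMasker()
masker.fit(func_img)                       # func_img: any 4D functional image
report = masker.generate_report()          # builds the HTML report
report                                     # displays inline in a Jupyter notebook
report.save_as_html("masker_report.html")  # or save it as a standalone page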

    opened by emdupre 172
  • switch from papaya to brainsprite in plotting.view_stat_map

I really love the new 3D interactive viewer (plotting.view_stat_map), but the notebooks it produces are huge. In this PR, I am proposing to switch from papaya to brainsprite, a JS library I developed for the exact purpose of embedding lightweight 3D viewers in HTML pages (http://github.com/simexp/brainsprite.js).

The first difference with papaya is that brainsprite stores the brain images as a jpg or png containing all sagittal slices of a volume, plus JSON metadata. That tends to be much smaller than a nifti (depending on the nifti's numerical precision). It also means that brainsprite can render brains with core HTML5 features and no dependencies. So the brainsprite library weighs 15kb (500 lines...), as opposed to 2Mb for the current papaya HTML template. I have attached two brain viewers embedded in Jupyter notebooks. The papaya-based notebook is 12Mb, while the brainsprite-based notebook is 500kb. Again, this reflects a core difference in design: papaya is a full brain viewer app, featuring nifti reading, a colorbar, etc. Brainsprite is a minimal, fast brain viewer working from a pre-generated sprite.

This leads to the second point: all the work to generate the brain volume happens in Python. There is a new function called save_sprite that generates the brain sprite as well as the JSON metadata. It relies on matplotlib, as well as nilearn's own functions. In particular, thresholding and colormap generation are all done with nilearn's code; resampling as well. This means it will be easier for nilearn's developers to maintain and evolve. The current version replicates all the arguments of plot_stat_map, including draw_cross, annotate, cut_coords and a few others (with a few extras, such as opacity).
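To make the sprite idea concrete, here is a minimal sketch (make_sprite is a hypothetical name, not this PR's save_sprite; it only tiles sagittal slices into a PNG mosaic with JSON metadata, without the thresholding or colormap handling described above):

import json
import math

import matplotlib.pyplot as plt
import nibabel as nib
import numpy as np


def make_sprite(img_path, out_png="sprite.png", out_json="sprite.json"):
    data = nib.load(img_path).get_fdata()
    n_x = data.shape[0]                    # one sagittal slice per x index
    n_col = math.ceil(math.sqrt(n_x))      # near-square mosaic layout
    n_row = math.ceil(n_x / n_col)
    slice_h, slice_w = data.shape[1], data.shape[2]
    mosaic = np.zeros((n_row * slice_h, n_col * slice_w))
    for i in range(n_x):
        r, c = divmod(i, n_col)
        mosaic[r * slice_h:(r + 1) * slice_h,
               c * slice_w:(c + 1) * slice_w] = data[i]
    plt.imsave(out_png, mosaic, cmap="gray")
    with open(out_json, "w") as f:
        json.dump({"nbSlice": n_x, "rows": n_row, "cols": n_col}, f)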

This PR is far from polished; there are a few outstanding issues here. I also need to look into the docs and testing. Finally, I dumped some functions in html_stat_map.py which should probably live elsewhere. But I think it is time to get feedback, and in particular I'd like to know whether there is any interest in merging this PR at all...

    opened by pbellec 166
  • [MRG] Cortex surface projections

    Hello @GaelVaroquaux , @mrahim , @agramfort, @juhuntenburg and others, this is a PR about surface plotting.

    nilearn has some awesome functions to plot surface data in nilearn.plotting.surf_plotting. However, it doesn't offer a conversion from volumetric to surface data.

    It would be great to add a function to sample or project volumetric data on the nodes of a cortical mesh; this would allow users to look at surface plots of their 3d images (e.g. statistical maps).

    In this PR we will try to add this to nilearn.

    Most tools which offer this functionality (e.g. caret, freesurfer, pycortex) usually propose several projection and sampling strategies, offering different quality / speed tradeoffs. However, it seems to me that naive strategies are not so far behind more elaborate ones - see for example [Operto, Grégory, et al. "Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels." Neuroimage 39.1 (2008): 127-135]. For plotting and visualisation, the results of a simple strategy are probably accurate enough for most users.

I therefore suggest starting with a very simple and fast projection scheme; we can add more elaborate ones later if we want. I'm just getting started, but I think we can already start a discussion.

The proposed strategy is simply to draw samples from a 3mm sphere around each mesh node and average the measures.

The image below illustrates that strategy: each red circle is a mesh node. Samples are drawn at the blue crosses attached to it that fall inside the image, then averaged to compute the color inside the circle. (This image is produced by the show_sampling.py example, which is only there to clarify the strategy implemented in this PR and will be removed.)

[image: illustration_2d]

    Here is an example surface plot for a brainpedia image (id 32015 on Neurovault, https://neurovault.org/media/images/1952/task007_face_vs_baseline_pycortex/index.html), produced by brainpedia_surface.py:

[image: brainpedia_inflated]

    And here is the plot produced by pycortex for the same image, as shown on Neurovault:

[image: brainpedia_pycortex]

Note about performance: in order to choose the positions of the samples to draw from a unit ball, for now we cluster points drawn from a uniform distribution on the ball and keep the centroids (we can think of something better). This takes a few seconds, and the results are cached with joblib for the time being; but since it only needs to be done once, when we have decided how many samples we want, the positions will be hardcoded once and for all (no computing, no caching). With 100 samples per ball, projecting a full-brain stat map with 2mm voxels onto an fsaverage hemisphere mesh takes around 60 ms.
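A hedged sketch of that scheme (function names are illustrative; nearest-voxel lookup and clipping to the image bounds are crude simplifications of the interpolation and out-of-image handling discussed in this PR):

import numpy as np
from sklearn.cluster import KMeans


def ball_sample_positions(n_samples=100, n_draws=10000, seed=0):
    rng = np.random.RandomState(seed)
    # Rejection-sample points uniformly inside the unit ball.
    pts = rng.uniform(-1, 1, size=(n_draws * 3, 3))
    pts = pts[(pts ** 2).sum(axis=1) <= 1][:n_draws]
    # Keep the k-means centroids as well-spread sample offsets.
    km = KMeans(n_clusters=n_samples, n_init=10, random_state=seed)
    return km.fit(pts).cluster_centers_


def project_to_mesh(volume, affine, nodes, radius=3.0, n_samples=100):
    """Average volume values over a radius-mm ball around each mesh node."""
    offsets = ball_sample_positions(n_samples) * radius
    # World coordinates of every sample: shape (n_nodes, n_samples, 3).
    world = nodes[:, None, :] + offsets[None, :, :]
    # Map world (mm) coordinates to voxel indices with the inverse affine.
    inv = np.linalg.inv(affine)
    vox = (world @ inv[:3, :3].T + inv[:3, 3]).round().astype(int)
    vox = np.clip(vox, 0, np.array(volume.shape) - 1)
    return volume[vox[..., 0], vox[..., 1], vox[..., 2]].mean(axis=1)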

    opened by jeromedockes 79
  • (WIP) Sparse models: S-LASSO and TV-l1

    • Supports TV-l1 and S-LASSO priors
    • Supports logistic and squared losses
    • Has cross validation
    • Can automatically select alpha by CV (+ automatic computation of useful alpha ranges for the CV)
• Warning: the user must supply l1_ratio (see the sketch below)
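
A hedged sketch of how an estimator with these features is invoked (the class name follows the SpaceNet API this work evolved into; every parameter value shown is illustrative, and func_imgs / labels / test_imgs are assumed inputs):

from nilearn.decoding import SpaceNetClassifier

decoder = SpaceNetClassifier(penalty="tv-l1", l1_ratios=0.5, cv=4)
decoder.fit(func_imgs, labels)       # alpha is selected by cross-validation
y_pred = decoder.predict(test_imgs)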
    opened by dohmatob 78
  • Add a Neurovault fetcher.

This is based on PR #832 opened by @bcipolli in answer to issue #640. The contribution is to add a fetcher for downloading selected images from http://neurovault.org/. I have tried to address some of the remarks that were made in the discussion about #832. I have included the examples plot_ica_neurovault.py and plot_neurovault_meta_analysis.py from the previous PR; they remain almost identical.

    The interface to the fetcher is similar to that proposed in #832. I have kept the possibility to filter collections and images either with a function or with a dictionary of {field: desired_value} pairs (or both), but they are now separate arguments called respectively collection_filter (image_filter for images) and collection_terms (image_terms for images). Also, now the filters passed in a dictionary are only inserted in the query URL if they are actually available on the server (which is the case for the collection owner, the collection DOI, and the collection name); otherwise they are applied to the metadata once it has been downloaded.

    For users who want a specific set of image or collection ids, downloading all the Neurovault metadata and filtering on the ids is inefficient; so they can use the image_ids or collection_ids parameters to pass the lists of ids they want to download; in this case any filter is ignored and the server is queried directly for the required image and collection ids. Note: this is also done under the hood if the collection id is used as a filter term - either by specifying the collection 'id' field or the image 'collection_id' field - except that in this case the other filters are still applied.

I have included a ResultFilter class and a bunch of special values such as NotNull which can be used to specify filters more easily. In some neurovault metadata jsons, some values are strings such as "", "null", "None"... instead of an actual null; these (more precisely, strings that match ($|n/?a$|none|null) in a case-insensitive way) are replaced by a true null value (null in the json, converted to None when loaded in a python dict) upon download, so that comparing to None or testing for truth should yield the expected results. In particular, dict.get(field) will give the same value whether the original value was null, "null", ..., or plain missing.
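A hedged usage sketch of the interface described above (argument names follow this PR's description and may differ from what was finally merged; the field values are illustrative):

from nilearn.datasets import neurovault

data = neurovault.fetch_neurovault(
    # dictionary terms: {field: desired_value} pairs
    image_terms={'modality': 'fMRI-BOLD', 'is_thresholded': False},
    # callable filter applied to each image's metadata dict
    image_filter=lambda meta: meta.get('map_type') == 'T map',
)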

    This should make using fetch_neurovault somewhat easier. However, for users who are interested in a large subset of the neurovault data and are not too short on disk space, I would recommend using only very simple filters (e.g. the defaults) when calling fetch_neurovault to download (almost all) the data, and, once it is on disk, only access it through read_sql_query, local_database_connection or local_database_cursor. The metadata is stored in an sqlite database, so instead of having to read the docstring for fetch_neurovault, writing their own filters, etc., most users will probably prefer to download it all and then simply use SQL syntax to select the subset they're interested in. read_sql_query queries the database and returns the result as an OrderedDict of columns (the default) or as a list of rows. local_database_connection gives a connection to the sqlite file, so that pandas users can load e.g., all the images' metadata by typing:

import pandas

images = pandas.read_sql_query(
    "SELECT * FROM images", neurovault.local_database_connection())
    

    Of course if they prefer manipulating sqlite3 objects directly they can use the connection given by local_database_connection or the cursor given by local_database_cursor.

    @bcipolli, @chrisfilo, @GaelVaroquaux, or anyone else, please let me know what modifications need to be made! In particular:

    • What should be the default filters? The current behaviour is to exclude:
      • Collections from a list of known bad collections (found in PR #832, with one addition).
      • Empty collections.
      • Images from a list of known bad images (found in PR #832).
      • Images that are not in MNI space.
      • Images for which the metadata field 'is_valid' is cleared.
      • Images for which the metadata field 'is_thresholded' is set.
  • Images for which 'map_type' is 'ROI/mask', 'anatomical' or 'parcellation'.
  • Images for which 'image_type' is 'atlas'.
    • Should the fetcher, by default, download all the images matching the filters, or only a limited number (the current behaviour)?
    opened by jeromedockes 67
  • [MRG] Glass brain visualisation

    This pull request is WIP for now and was only opened to get some feedback, both visualisation-wise and code-wise.

Here is how it looks at the moment: [image: glass_brain]

    The code to generate the plots is there.
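A hedged sketch from the user's side (plot_glass_brain is the name this feature shipped under; stat_img and the threshold value are assumptions for illustration):

from nilearn import plotting

# stat_img: any 3D statistical map
plotting.plot_glass_brain(stat_img, threshold=3.0, display_mode="ortho")
plotting.show()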

    New feature Plotting 
    opened by lesteve 64
  • Surface plots

    Hi again,

This PR replaces #2454, which is a continuation of #1730, submitted by @dangom to resolve #1722. The intention is to check that everything other than symmetric_cbar is working properly and merge the surface montages into master. I'll open an issue for symmetric_cbar and fix that separately.

    Also, thank you so much @GaelVaroquaux and @effigies for your help creating the PR properly.

    opened by ZviBaratz 62
  • Brainomics/Localizer

    opened by DimitriPapadopoulos 60
  • SpaceNet (this PR succeeds PR #219)

    This PR succeeds PR #219 (aka unicorn factory). All discussions should be done here henceforth. #219 is now classified, and should be referred to solely for histological purposes.

    opened by dohmatob 54
  • [ENH, MRG] fREM

    Following the merge of #2000, here is the introduction of fREMClassifier and fREMRegressor objects which run pipelines with clustering, feature selection and ensembling of best models across a grid of parameters.

• The file changes are minor.
• The current implementation yields the expected results on an example (see the attachment below, from plot_haxby_tutorial).
• But the test of fREM accuracy on small datasets in test_decoder.py keeps failing, simply because the accuracy of fREM is not good enough in this setting; after trying various parameters, I don't know which tests would be better suited to this use case. Anybody?
• If I remember correctly, you wanted to replace Decoder with fREM in many examples to alleviate their computational cost, @GaelVaroquaux. Which ones are the main targets? I will benchmark how much time we could gain, but on ROIs clustering doesn't seem very useful (e.g. it slows things down on the Haxby ROI example).
[screenshot: 2020-03-05 18:40]
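
For reference, a hedged sketch of how these objects are meant to be used (the class name and parameters follow this PR's description and the FREMClassifier API it became; func_imgs, labels and mask_img are assumed inputs):

from nilearn.decoding import FREMClassifier

frem = FREMClassifier(mask=mask_img, cv=10)  # clustering + feature selection
frem.fit(func_imgs, labels)                  # + ensembling of the best models
prediction = frem.predict(func_imgs)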
    opened by thomasbazeille 52
  • [MRG] Dictionary learning + nilearn.decomposition refactoring

Decomposition estimators (DictLearning / MultiPCA) now inherit from a DecompositionEstimator.

Loading of data is done through a PCAMultiNiftiMasker, which loads data from files and compresses it.

Potentially, the function check_masker could solve issue #688, as it factorizes the input checking of estimators to which you provide either a masker or parameters for a mask. It is tuned to be able to use PCAMultiNiftiMasker.
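
As a hedged sketch of how the refactored estimators look from the outside (parameter values are illustrative; components_img_ reflects the attribute later nilearn versions expose, and func_imgs is an assumed input):

from nilearn.decomposition import DictLearning

dict_learning = DictLearning(n_components=20, smoothing_fwhm=6,
                             random_state=0)
dict_learning.fit(func_imgs)               # func_imgs: list of 4D images
components_img = dict_learning.components_img_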

    opened by arthurmensch 51
  • Collaboration with NiiVue and possibilities for integration

The NiiVue project uses WebGL 2.0 to provide interactive web-based visualization capabilities for medical imaging. We have started to meet with NiiVue developers to discuss possible avenues for integrating NiiVue into Nilearn. They have started working on a Python interface, so one path would be to help them build it. We could also start by providing examples of how Nilearn users can make use of NiiVue functionality. Some relevant repositories include https://github.com/niivue/niivue and https://github.com/niivue/ipyniivue. NiiVue demos can be found here: https://niivue.github.io/niivue/. We can keep track of progress and decisions made to advance this collaboration here, as well as have a general discussion on the benefits of the integration.

    Enhancement Discussion 
    opened by ymzayek 1
  • Remove ci-skip action from GitHub Actions workflows

GitHub Actions now supports skipping workflows out of the box (https://docs.github.com/en/actions/managing-workflow-runs/skipping-workflow-runs), so the mstachniuk/ci-skip action is deprecated and should be removed from all workflows before it starts failing.
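For instance, with the built-in support, a push is skipped when the head commit message contains one of the documented markers:

git commit -m "DOC: fix typo [skip ci]"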

    Infrastructure 
    opened by ymzayek 4
  • [ENH] Flat maps for all fsaverage resolutions

Closes #3171.

If we're happy with the flat maps generated in #3171 for all fsaverage resolutions, I suggest the following roadmap to integrate them into nilearn:

    • [x] clean up my current script so that it generates flat maps for both hemispheres and all fsaverage resolutions
    • [ ] publish it somewhere? (as a gist maybe? I'm happy to hear suggestions here)
    • [x] run the script I used in #2815 to generate our fsaverage tarballs, and add flat maps to it
    • [ ] update our OSF datasets for fsaverage 3-7
    • [ ] have at least one example's thumbnail display this feature (which is why this PR leverages unmerged changes from #3173, as I want this thumbnail to show curvature sign as well)

    As a reminder, here is what the current flat maps look like for fsaverage 3 to 7:

[images: flat maps for both hemispheres, fsaverage 3 to 7]

    opened by alexisthual 7
  • `filter` option in `signal.clean` is not exposed to `nilearn.maskers.NiftiMasker` and potentially other masker objects

However, I don't think a filter option is provided for nilearn.maskers.NiftiMasker, so it seems to be using the default, i.e. butterworth.

    Originally posted by @DasDominus in https://github.com/nilearn/nilearn/issues/3434#issuecomment-1333064573
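
As a hedged illustration of the gap, one workaround is to extract the signals first and pass filter to nilearn.signal.clean directly (parameter values are illustrative; func_img and mask_img are assumed inputs):

from nilearn import signal
from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img=mask_img)       # no `filter` argument exposed here
raw = masker.fit_transform(func_img)
cleaned = signal.clean(raw, filter="cosine",  # explicit filter choice
                       high_pass=0.008, t_r=2.0)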

    Good first issue effort: low 
    opened by htwangtw 6
  • All CI failing because flake8 --diff option has been removed

All CI runs are currently failing because flake8==6.0.0 (released on Nov 23) no longer has the --diff flag.

    The reason has been explained in this issue.

A solution, adapted from this comment, is to replace the flake8 --diff call in build_tools/flake8_diff.sh#L79 with the following command:

    git diff --name-only $COMMIT | grep '\.py$' | xargs --delimiter='\n' --no-run-if-empty flake8 --show-source
    

    Opened PR #3432

    Bug 
    opened by RaphaelMeudec 2