Hello @GaelVaroquaux, @mrahim, @agramfort, @juhuntenburg and others,
this is a PR about surface plotting.
nilearn has some awesome functions to plot surface data in
nilearn.plotting.surf_plotting. However, it doesn't offer a conversion from
volumetric to surface data.
It would be great to add a function to sample or project volumetric data onto the
nodes of a cortical mesh; this would allow users to look at surface plots of
their 3D images (e.g. statistical maps).
This PR tries to add such a function to nilearn.
Most tools that offer this functionality (e.g. caret, freesurfer, pycortex)
usually provide several projection and sampling strategies, with different
quality / speed tradeoffs. However, it seems to me that naive strategies are not
far behind more elaborate ones - see for example [Operto, Grégory, et al.
"Projection of fMRI data onto the cortical surface using anatomically-informed
convolution kernels." Neuroimage 39.1 (2008): 127-135].
For plotting and visualisation, the results of a simple strategy are probably
accurate enough for most users.
I therefore suggest starting with a very simple and fast projection
scheme; we can add more elaborate ones later if we want.
I'm just getting started, but I think we can already open the discussion.
The proposed strategy is simply to draw samples from a 3 mm sphere around each
mesh node and average the measured values.
The image below illustrates the strategy: each red circle is a mesh node.
Samples are drawn at the blue crosses attached to it that fall inside the image,
and then averaged to compute the color inside the circle.
(This image is produced by the show_sampling.py example, which is only there to clarify
the strategy implemented in this PR and will be removed).
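
To make this concrete, here is a minimal, self-contained sketch of the naive
scheme (not the actual code of this PR: the function name, the nearest-voxel
lookup and the random offsets are just illustrative assumptions):

```python
import numpy as np
import nibabel as nib


def sample_ball_around_vertices(img_path, vertices, ball_radius=3.0, n_points=20):
    """Average image values inside a small ball centred on each mesh vertex.

    vertices : (n_vertices, 3) array of node coordinates in world (mm) space.
    Returns one value per vertex.
    """
    img = nib.load(img_path)
    data = img.get_fdata()
    inv_affine = np.linalg.inv(img.affine)

    # Fixed offsets inside the unit ball. Here they are simply drawn at random;
    # the PR clusters such draws and keeps the centroids (see the performance
    # note below).
    rng = np.random.RandomState(0)
    offsets = rng.uniform(-1, 1, size=(n_points * 10, 3))
    offsets = offsets[np.linalg.norm(offsets, axis=1) <= 1][:n_points] * ball_radius

    # World coordinates of every sample: shape (n_vertices, n_points, 3).
    points = vertices[:, np.newaxis, :] + offsets[np.newaxis, :, :]

    # Map world coordinates to voxel indices (nearest voxel, for simplicity).
    ones = np.ones(points.shape[:2] + (1,))
    voxels = np.concatenate([points, ones], axis=-1).dot(inv_affine.T)[..., :3]
    voxels = np.rint(voxels).astype(int)

    # Keep only samples that fall inside the image, then average per vertex
    # (vertices with no sample inside the image get NaN).
    inside = np.all((voxels >= 0) & (voxels < data.shape[:3]), axis=-1)
    values = np.full(points.shape[:2], np.nan)
    values[inside] = data[tuple(voxels[inside].T)]
    return np.nanmean(values, axis=1)
```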

Here is an example surface plot for a brainpedia image (id 32015 on Neurovault,
https://neurovault.org/media/images/1952/task007_face_vs_baseline_pycortex/index.html),
produced by brainpedia_surface.py:
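
As a side note, here is a rough sketch of what a script like
brainpedia_surface.py could boil down to (not its actual contents), reusing the
illustrative `sample_ball_around_vertices` helper from above together with
nilearn's existing surface plotting; the file paths are placeholders and the
final API of this PR may well look different:

```python
import nibabel as nib
from nilearn import plotting

# Load an fsaverage hemisphere mesh (FreeSurfer geometry file, placeholder path).
coords, faces = nib.freesurfer.read_geometry('fsaverage/surf/lh.pial')

# Project the volumetric stat map onto the mesh vertices, then plot the texture.
texture = sample_ball_around_vertices('task007_face_vs_baseline.nii.gz', coords)
plotting.plot_surf_stat_map([coords, faces], texture, hemi='left', threshold=1.)
plotting.show()
```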

And here is the plot produced by pycortex for the same image, as shown on
Neurovault:

Note about performance: to choose the positions of the samples to draw from a
unit ball, for now we cluster points drawn uniformly inside the ball and keep
the centroids (we can think of something better). This takes a few seconds, so
the results are cached with joblib for the time being; since it only needs to be
done once, when we have decided how many samples we want, the positions will be
hardcoded once and for all (no computing, no caching). With 100 samples per
ball, projecting a full-brain stat map with 2 mm voxels onto an fsaverage
hemisphere mesh takes around 60 ms.
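
For illustration, a rough sketch of that clustering step, with the number of
draws and the random seed as assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans


def ball_sample_positions(n_samples=100, n_draws=100000, seed=0):
    """Well-spread sample positions inside the unit ball, via k-means centroids."""
    rng = np.random.RandomState(seed)
    # Rejection sampling: uniform points in the cube, keep those inside the ball.
    points = rng.uniform(-1, 1, size=(n_draws, 3))
    points = points[np.linalg.norm(points, axis=1) <= 1]
    # Cluster the draws and keep the centroids as the fixed sample positions.
    kmeans = KMeans(n_clusters=n_samples, random_state=seed).fit(points)
    return kmeans.cluster_centers_
```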