A meta plugin for processing timelapse data timepoint by timepoint in napari

Overview

napari-time-slicer

A meta plugin for processing timelapse data timepoint by timepoint. It enables a list of napari plugins to process 2D+t or 3D+t data step by step as the user moves through the timelapse. Plugins that currently use napari-time-slicer include napari-segment-blobs-and-things-with-membranes and napari-process-points-and-surfaces.

napari-time-slicer enables inter-plugin communication, e.g. it allows combining the plugins listed above into a single image-processing workflow for segmenting a timelapse dataset.

If you want to convert a 3D dataset into a 2D+t dataset, use the menu Tools > Utilities > Convert 3D stack to 2D timelapse (time-slicer). It turns the 3D dataset into a 4D dataset in which the Z-dimension (index 1) has only one element; napari then displays the leading dimension with a time slider. Note: it is recommended to remove the original 3D dataset after this conversion.
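
Conceptually, the conversion reinterprets the leading axis as time and inserts a singleton Z axis at index 1. A minimal numpy sketch of that reshaping (illustrative, not the plugin's actual code):

import numpy as np

stack_3d = np.random.random((10, 256, 256))   # interpreted as (z, y, x)
timelapse = stack_3d[:, np.newaxis, :, :]     # (t, 1, y, x): former Z becomes time
print(timelapse.shape)                        # (10, 1, 256, 256)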

Usage for plugin developers

Plugins which implement the napari_experimental_provide_function hook can make use of the @time_slicer decorator. At the moment, only functions which take napari.types.ImageData, napari.types.LabelsData and basic Python types such as int and float are supported. If you annotate such a function with @time_slicer, it will internally convert any 4D dataset to a 3D dataset corresponding to the timepoint currently selected in napari. Furthermore, when the napari user changes the current timepoint or the input data of the function changes, a re-computation is invoked. Thus, it is recommended to use the time_slicer only for functions which offer [almost] real-time performance. Another constraint is that annotated functions must have a viewer parameter; it is needed to read the current timepoint from the viewer when re-computations are invoked.

Example

import napari
from napari_time_slicer import time_slicer
from skimage.filters import threshold_otsu as sk_threshold_otsu

@time_slicer
def threshold_otsu(image: napari.types.ImageData, viewer: napari.Viewer = None) -> napari.types.LabelsData:
    # example body (one possible implementation): binarize the current
    # 3D timepoint using Otsu's threshold from scikit-image
    return image > sk_threshold_otsu(image)
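
Under the hood, the decorator's job can be pictured as wrapping the function and slicing 4D inputs at the viewer's current timepoint. A minimal, hypothetical sketch (the real decorator additionally reacts to timepoint changes by triggering re-computation):

import functools
import numpy as np

def time_slicer_sketch(function):
    @functools.wraps(function)
    def worker(image, viewer=None, *args, **kwargs):
        data = np.asarray(image)
        if viewer is not None and data.ndim == 4:
            # read the currently selected timepoint and reduce 4D to 3D
            current_timepoint = viewer.dims.current_step[0]
            data = data[current_timepoint]
        return function(data, viewer=viewer, *args, **kwargs)
    return worker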

You can see full implementations of this concept in the napari plugins listed above.


This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.

Installation

You can install napari-time-slicer via pip:

pip install napari-time-slicer

To install the latest development version:

pip install git+https://github.com/haesleinhuepf/napari-time-slicer.git

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.

License

Distributed under the terms of the BSD-3 license, "napari-time-slicer" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Comments
  • pyqt5 dependency

    The dependency on pyqt5 which gets installed via pip can create trouble if napari has been installed via conda (see https://napari.org/plugins/best_practices.html#don-t-include-pyside2-or-pyqt5-in-your-plugin-s-dependencies). Is there any reason for this dependency? Since this plugin is itself a dependency of other plugins such as napari-segment-blobs-and-things-with-membranes, this can create trouble down the chain.

    opened by guiwitz 7
  • PyQt5 version requirement breaks environment

    Hi @haesleinhuepf,

    I wanted to ask whether it is really strictly necessary to use the current PyQt5 requirement?

    pyqt5>=5.15.0
    

    It collides with current Spyder versions that only support PyQt up to 5.13:

    spyder 5.1.5 requires pyqtwebengine<5.13, which is not installed.
    spyder 5.1.5 requires pyqt5<5.13, but you have pyqt5 5.15.6 which is incompatible.
    

    Since the time slicer is used downstream in quite a few plugins of yours (e.g., segment-blobs-and-things-with-membranes, etc.) this is quite a restriction.

    opened by jo-mueller 5
  • Bug report: `KeyError: 'viewer'`

    Hi @haesleinhuepf,

    I am getting an error in the 5th cell of this notebook, on this command:

    surface = nppas.largest_label_to_surface(labels)
    

    where nppas is napari-process-points-and-surfaces and labels is a regular label image as created with skimage.measure.label(). (A hedged guess at the cause is sketched after this entry.)

    Thanks for looking at it!

    opened by jo-mueller 2
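
    A hedged guess at the failure mode (an assumption, not confirmed in this thread): a wrapper reading kwargs['viewer'] directly raises KeyError when the decorated function is called outside napari, e.g. from a notebook, without a viewer argument. A defensive lookup avoids that:

    def get_viewer(kwargs):
        # hypothetical wrapper excerpt: 'viewer' may be absent when a
        # time-sliced function is called from a notebook
        return kwargs.get('viewer', None)  # kwargs['viewer'] raises KeyError: 'viewer'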
  • Make dask arrays instead of computing slice for slice

    Hey @haesleinhuepf! This is the first implementation of the time slicer wrapper using dask instead of computing the time slices based on the current time index. I could re-use a little of the previous code, but the wrappers start to differ from each other pretty soon. At the moment I'm also unsure whether this wrapper can replace the original time slicer function as a substitute, so I kept both your old version and the dask version. An idea that could be useful for saving the dask images is a function which processes each time slice and saves it as a separate image (if images are saved one by one, it's really easy to load them as dask arrays!). (A minimal sketch of the dask idea follows this entry.)

    opened by Cryaaa 1
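
    A minimal sketch of the idea described above, assuming a 4D (t, z, y, x) numpy stack and a per-timepoint function; the names here are illustrative, not the PR's actual API:

    import dask
    import dask.array as da
    import numpy as np

    def lazy_timelapse(function, stack_4d):
        # compute one timepoint eagerly to learn the output shape and dtype
        sample = function(stack_4d[0])
        lazy_slices = [
            da.from_delayed(dask.delayed(function)(timepoint),
                            shape=sample.shape, dtype=sample.dtype)
            for timepoint in stack_4d
        ]
        # stacking yields a 4D dask array; slices are only computed on access
        return da.stack(lazy_slices)

    # usage: binarize each timepoint lazily
    timelapse = np.random.random((5, 10, 64, 64))
    result = lazy_timelapse(lambda t: t > 0.5, timelapse)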
  • Tests failing

    source:

        if sys.platform.startswith('linux') and running_as_bundled_app():
    .tox/py37-linux/lib/python3.7/site-packages/napari/utils/misc.py:65: in running_as_bundled_app
        metadata = importlib_metadata.metadata(app_module)
    .tox/py37-linux/lib/python3.7/site-packages/importlib_metadata/__init__.py:1005: in metadata
        return Distribution.from_name(distribution_name).metadata
    .tox/py37-linux/lib/python3.7/site-packages/importlib_metadata/__init__.py:562: in from_name
        raise ValueError("A distribution name is required.")
    E   ValueError: A distribution name is required.
    

    See also:

    https://github.com/napari/napari/issues/4797

    opened by haesleinhuepf 0
  • Have 4D dask arrays as result of time-sliced functions

    This turns the results of time-slicer-annotated functions into 4D delayed dask arrays, as proposed by @Cryaaa in #5.

    This PR doesn't fully work yet in the interactive napari user interface. After setting up a workflow, it sometimes crashes with a KeyError while saving the duration of an operation when going through time. This happens when a computation finishes after its result has already been replaced; basically, multiple threads write to the same result. It's this error: https://github.com/dask/dask/issues/896

    Reproduce:

    • Start napari
    • Open the Example dataset clEsperanto > CalibZapwfixed
    • Turn it into a 2D+t dataset using Tools > Utilities
    • Open the assistant
    • Setup a workflow, e.g. Denoise, Threshold, Label
    • Move the time-bar a couple of times until it crashes.

    I'm not sure yet how to solve this. (A generic mitigation for this kind of race is sketched after this entry.)

    opened by haesleinhuepf 8
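
    The crash described above is a shared-state race; a generic mitigation (an assumption for illustration, not the fix actually applied in this PR) is to guard the shared bookkeeping with a lock:

    import threading

    _durations_lock = threading.Lock()
    _durations = {}  # hypothetical shared bookkeeping: operation name -> duration

    def record_duration(name, duration):
        # serialize writes so one worker thread cannot observe a key that
        # another thread has just removed or replaced
        with _durations_lock:
            _durations[name] = duration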
  • Aggregate points and surfaces in 4D

    Hi Robert @haesleinhuepf,

    I am seeing some issues with using the time slicer on 4D points/surface data in napari. For instance, using the label_to_surface() function from napari-process-points-and-surfaces throws an error:

    ValueError: Input volume should be a 3D numpy array.
    

    which comes from the marching_cubes function under the hood. Here is a small example script to reproduce the error:

    import numpy as np
    import napari
    import napari_process_points_and_surfaces as nppas
    from skimage import filters

    # Make a binary sphere
    s = 100
    data = np.zeros((s, s, s), dtype=float)
    x0 = 50
    radius = 15

    for x in range(s):
        for y in range(s):
            for z in range(s):
                if np.sqrt((x-x0)**2 + (y-x0)**2 + (z-x0)**2) < radius:
                    data[x, y, z] = 1.0

    viewer = napari.Viewer()
    viewer.add_image(data)

    segmentation = data > filters.threshold_otsu(data)
    viewer.add_labels(segmentation)

    surf = nppas.label_to_surface(segmentation.astype(int))
    viewer.add_surface(surf)
    

    When introspecting the call to marching_cubes within the time_slicer function, it is also evident that the image is somehow still a 4D image. (A sketch of the expected pre-slicing follows this entry.)

    opened by jo-mueller 4
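
    The error above indicates that a 4D stack reached marching_cubes. A hedged sketch of the expected pre-slicing, assuming (t, z, y, x) axis ordering (illustrative, not the plugin's actual code):

    import numpy as np
    from skimage.measure import marching_cubes

    def surface_for_timepoint(stack_4d, timepoint):
        # marching_cubes requires a 3D volume, so reduce the stack first
        volume_3d = np.asarray(stack_4d)[timepoint]
        return marching_cubes(volume_3d, level=0.5)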
Releases
0.4.9

Owner
Robert Haase (Computational Microscopist, BioImage Analyst, Code Jockey)