Overview

starfish: scalable pipelines for image-based transcriptomics


starfish is a Python library for processing image-based spatial transcriptomics data. It lets you build scalable pipelines that localize and quantify RNA transcripts in image data generated by any FISH method, from simple single-molecule RNA FISH to combinatorial barcoded assays.

Documentation

See spacetx-starfish.readthedocs.io for the quickstart, user guide, examples, and API.

Installation

starfish supports Python 3.7 and above and can easily be installed from PyPI:

$ pip install starfish[napari]
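Note that some shells (e.g. zsh) expand the square brackets, so the extra may need quoting:

$ pip install 'starfish[napari]'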

For more detailed installation instructions, see here.

Python Version Notice

starfish will drop support for Python 3.6 in the next release because upstream dependencies now require Python 3.7 or later.

Contributing

We welcome contributions from our users! See our contributing.rst and developer guide for more information.

Help, support, and questions

Comments
  • New seqFISH Decoding Method


    This PR adds a new spot-based decoding method, the CheckAll decoder, based on the method described here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6046268/. It can detect several times more true targets from seqFISH image data than starFISH's current spot-based methods (PerRoundMaxChannel), because barcodes are not restricted to exactly matching spots or only nearest-neighbor spots, and because it assembles barcodes from spots in every round instead of from a single arbitrary anchor round. It can also make use of error-correction rounds in the codebook, which current starFISH methods do not consider.

    Summary of algorithm:

    Inputs:

    • spots - starFISH SpotFindingResults object
    • codebook - starFISH Codebook object
    • filter_rounds - number of rounds that a barcode must be identified in to pass filters
    • error_rounds - number of error-correction rounds built into the codebook (i.e. the number of rounds that can be dropped from a barcode while still uniquely matching a single target in the codebook)

    1. For each spot in each round, find all neighbors in other rounds that are within the search radius.
    2. For each spot in each round, build all possible full-length barcodes from the channel labels of the spot's neighbors and itself.
    3. Drop barcodes that don't have a matching target in the codebook.
    4. Choose the "best" barcode among each spot's target-matching barcodes by calculating, for each candidate, the sum of variances of the spatial coordinates of the spots that make it up, and choosing the minimum-distance barcode (if there is a tie, they are all dropped as ambiguous). Each spot is assigned a "best" barcode in this way (a sketch of this score appears after the list):
        sum( var(x1,x2,...,xn), var(y1,y2,...,yn), var(z1,z2,...,zn) ), where n = # spots in barcode
    5. Only keep barcodes/targets that were found as "best" in a certain number of rounds (set by the filter_rounds parameter).
    6. If a specific spot is used in more than one of the remaining barcodes, drop the barcode with the higher spatial variance between its spots (this ensures each spot is used only once).
    (End here if error_rounds = 0.)
    7. Remove all spots used in decoded targets that passed the previous filtering steps from the original set of spots.
    8. Rerun steps 2-5 for barcodes that use fewer than the full set of rounds for codebook matching (how many rounds can be dropped is set by the error_rounds parameter).
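    A minimal sketch of the spatial-variance score from step 4 (NumPy-based; the function name and array layout are illustrative, not this PR's code):

    import numpy as np

    def barcode_spatial_variance(coords):
        """Sum of per-axis variances of the spots in one candidate barcode.

        coords: array of shape (n_spots, 3) holding the (x, y, z) position
        of each spot in the barcode.
        """
        coords = np.asarray(coords, dtype=float)
        # var(x1..xn) + var(y1..yn) + var(z1..zn)
        return float(coords.var(axis=0).sum())

    # A tightly clustered barcode scores lower than a dispersed one.
    tight = [[1.0, 1.0, 0.0], [1.1, 0.9, 0.0], [0.9, 1.1, 0.0]]
    loose = [[1.0, 1.0, 0.0], [4.0, 3.0, 1.0], [7.0, 6.0, 2.0]]
    assert barcode_spatial_variance(tight) < barcode_spatial_variance(loose)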
    

    Tests of the CheckAll decoder vs. starFISH's PerRoundMaxChannel method (with the nearest-neighbor trace-building strategy) show improved performance with the CheckAll decoder. All of the following tests used seqFISH image data from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6046268/.

    [figure: decoded transcript counts and smFISH correlation vs. search radius, CheckAll vs. PRMC NN]

    Note: PRMC NN = PerRoundMaxChannel with nearest-neighbor trace building.

    The x-axis of each figure marks the search radius parameter used by either decoding method (the distance spots can be from a reference spot and still be allowed to form a potential barcode), in increments of increasing symmetric 3D neighborhood size. The left figure shows the total number of decoded transcripts assigned to a cell for each method; for the CheckAll decoder this includes partial barcodes (codes that did not use all rounds in decoding), which the PerRoundMaxChannel method does not consider. Depending on the search radius, the CheckAll decoder yields as much as a 442% increase in the total number of decoded barcodes over PerRoundMaxChannel.

    To assess the accuracy of each decoding method, I used orthologous smFISH data available from the same samples for several dozen of the genes probed in the seqFISH experiment. From this data, I calculated the Pearson correlation between the smFISH results and the results of decoding the seqFISH data with either method. (Because the targets in this dataset were introns (see paper), the values correlated were the calculated burst frequencies for each gene, i.e. how often/fast transcription cycles on and off, rather than counts.) The results are shown in the center figure above, with the right-hand figure showing the same data zoomed out to a 0-1 range. The starFISH PerRoundMaxChannel method does achieve higher accuracy on this test, but the difference is not significant and comes at the cost of detecting far fewer barcodes. (The missing values at the lower end of the x-axis are due to not having enough results to calculate the burst frequencies of the transcripts.)

    Unlike current starFISH methods, the CheckAll decoder can take advantage of error-correction rounds built into the codebook. As an example, say an experiment is designed with a five-round codebook whose codes are constructed so that any four of those rounds suffice to uniquely match a barcode to a target. The additional round is an error-correction round: four rounds may be enough to uniquely identify a target, but if the fifth round also matches, you can be extra confident that the spot combination making up the barcode is correct. This method is based on a previous pull request made by a colleague of mine (https://github.com/ctcisar/starfish/pull/1).
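    A toy illustration of that idea (hypothetical five-round, two-channel codes; not starFISH code): with pairwise Hamming distance >= 2, dropping any single round still leaves each partial code uniquely matched.

    codes = {"geneA": (0, 1, 0, 1, 0), "geneB": (1, 0, 0, 1, 1)}  # channel per round

    def unique_with_round_dropped(codes, drop):
        # Remove round `drop` from every code and check the partial codes stay distinct.
        partial = {name: tuple(c for i, c in enumerate(code) if i != drop)
                   for name, code in codes.items()}
        return len(set(partial.values())) == len(partial)

    assert all(unique_with_round_dropped(codes, r) for r in range(5))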

    [figure: counts and smFISH correlation split into full (error-corrected) and partial CheckAll barcodes]

    The above figures show results similar to the first figure, except the CheckAll results have been split between barcodes made using spots in all rounds (error correction) and those with only a partial match (no correction). Even without considering error correction, the CheckAll decoder detects as many as 181% more barcodes than the PerRoundMaxChannel method. The smFISH correlations are as expected, with error-corrected barcodes achieving a higher correlation with the smFISH data than uncorrected ones. Whether a barcode in the final DecodedIntensityTable used an error-correction round can be read from the new "rounds_used" field, which records the number of rounds used to build each barcode in the table. This allows easy separation of the data into higher- and lower-confidence calls. Additionally, the distance field of the DecodedIntensityTable is no longer based on the intensity of the spots in each barcode; it is now the sum of variances of the spatial coordinates of the spots in the barcode. This can also serve as a filter, since barcodes made of more tightly clustered spots may be more likely to be true targets.
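    A hedged usage sketch, assuming "rounds_used" and the redefined distance are exposed as per-feature coordinates on the xarray-based table (the stand-in construction below is illustrative, not starFISH code):

    import numpy as np
    import xarray as xr

    # Stand-in for a DecodedIntensityTable: four decoded barcodes from a
    # five-round experiment, with the fields described above.
    decoded = xr.DataArray(
        np.random.rand(4),
        dims="features",
        coords={
            "rounds_used": ("features", [5, 5, 4, 4]),
            "distance": ("features", [0.2, 1.4, 0.3, 0.9]),
        },
    )

    error_corrected = decoded.where(decoded["rounds_used"] == 5, drop=True)  # higher confidence
    partial = decoded.where(decoded["rounds_used"] < 5, drop=True)           # lower confidence
    tight = decoded.where(decoded["distance"] < 1.0, drop=True)              # threshold illustrative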

    [figure: runtime in minutes, CheckAll (16 threads) vs. PRMC NN]

    The major downside of the CheckAll decoder is its speed. This is no surprise: it searches the entire possible barcode space for every spot in every round instead of just the nearest neighbors of spots in a single round, and that space grows quickly as the search radius increases, which can significantly increase run times. To address this, I've added the ability to multi-thread the program and run multiple chunks in parallel using the Python module ray, though even with this parallelization, runtimes for CheckAll are much higher than for PerRoundMaxChannel. The above figure shows the runtime in minutes for the CheckAll decoder (using 16 threads) vs. PerRoundMaxChannel with nearest neighbors. (The seqFISH dataset used here is among the larger available, with 5 rounds, 12 channels, and over 10,000 barcodes in the codebook, so for most other seqFISH datasets I expect runtimes to be considerably lower; unfortunately I did not have access to another suitable seqFISH dataset to test on.) Ongoing work is being done to optimize the method and bring runtimes down. I was unable to figure out how to correctly add ray to the requirements file, so that still needs to be done.

    opened by nickeener 29
  • (composite, ragged) experiment and codebook support


    Objective

    As a spaceTX data analyst with an experiment that does not have a matching number of rounds and channels, I want starfish to support this data model, so that I can create a starfish benchmarking pipeline.

    Acceptance Criteria

    • Simone can process his data with starfish
    • Brian Long can process his data with starfish

    Notes

    Encapsulates work that needs to be done to support data coming in from Ola, Simone, Brian Long, and presumably RNAscope users

    Epic 
    opened by shanaxel42 27
  • Is there a function that returns the outline of a segmented cell?


    For example, after using Segmentation.Watershed to segment a cell, is there some function that takes the resulting np.ndarray as input and returns another np.ndarray with the outlines of all segmented cells? It doesn't have to be that exact method; anything similar will do.

    I have tried looking around already for something similar, but I haven't been able to find it. If that function doesn't exist yet, I would like to help write one, since I need it anyway.

    Thank you!

    Edit: For background, the reason I want this is that I want to detect and count dots contained only within cells, so I would use the cell outlines from Segmentation.Watershed to crop the dots image to those outlined areas and then count dots in those parts of the image. It is entirely possible that a function that does this already exists and I have just not been able to find it. If it exists, please let me know.
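    Not a starfish API, but one possible approach uses scikit-image (which starfish already depends on); skimage.segmentation.find_boundaries does essentially this:

    import numpy as np
    from skimage.segmentation import find_boundaries

    # Stand-in for the label image produced by Segmentation.Watershed
    # (0 = background, 1..N = cell ids).
    labels = np.zeros((100, 100), dtype=int)
    labels[20:50, 20:50] = 1
    labels[60:90, 55:95] = 2

    outlines = find_boundaries(labels, mode="inner")  # boolean mask of cell outlines
    inside_cells = labels > 0                         # mask of all cell interiors
    # dot_image * inside_cells would restrict spot counting to segmented cells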

    question 
    opened by sethb3744 24
  • [Possible Bug] IntensityTable `concatenate_intensity_tables()` 'TAKE_MAX' strategy does not appear to be working quite right


    Some quick and dirty test code I wrote:

    import numpy as np
    import xarray as xr
    
    from starfish import IntensityTable
    
    from starfish.core.types import Coordinates, Features
    from starfish.core.types._constants import OverlapStrategy
    from starfish.core.codebook.test.factories import codebook_array_factory
    from starfish.core.intensity_table.test.factories import create_intensity_table_with_coords
    from starfish.core.intensity_table.overlap import Area
    
    import matplotlib.pyplot as plt
    import matplotlib.patches
    
    
    def plot_intensity_tables(intensity_table_plot_specs, concatenated_table):
        
        fig, ax = plt.subplots(dpi=500)
        
        for table, color, marker, coord, x_dim, y_dim in intensity_table_plot_specs:
            ax.scatter(table['xc'], table['yc'], facecolors='none', marker=marker, linewidth=0.25, s=15, edgecolors=color)
            rect = matplotlib.patches.Rectangle(coord, x_dim, y_dim, edgecolor='none', facecolor=color, alpha=0.10)
            ax.add_patch(rect)
    
        ax.scatter(concatenated_table['xc'], concatenated_table['yc'], facecolors='none', marker='^', linewidth=0.25, s=15, edgecolors='black')
            
        fig.tight_layout()
    
    
    def create_more_realistic_intensity_table_with_coords(area: Area, n_spots: int=10, random_seed=888) -> IntensityTable:
        codebook = codebook_array_factory()
        it = IntensityTable.synthetic_intensities(
            codebook,
            num_z=1,
            height=50,
            width=50,
            n_spots=n_spots
        )
        
        np.random.seed(random_seed)
        it[Coordinates.X.value] = xr.DataArray(np.random.uniform(area.min_x, area.max_x, n_spots),
                                               dims=Features.AXIS)
        it[Coordinates.Y.value] = xr.DataArray(np.random.uniform(area.min_y, area.max_y, n_spots),
                                               dims=Features.AXIS)
        # 'Restore' random seed so downstream usages aren't affected
        np.random.seed()
    
        return it
    
    
    def test_take_max_of_multiple_overlaps():
        
        it1 = create_more_realistic_intensity_table_with_coords(Area(min_x=0, max_x=3,
                                                      min_y=2, max_y=5), n_spots=29)
        it2 = create_more_realistic_intensity_table_with_coords(Area(min_x=2, max_x=5,
                                                      min_y=2, max_y=5), n_spots=31)
        it3 = create_more_realistic_intensity_table_with_coords(Area(min_x=0, max_x=3,
                                                      min_y=0, max_y=3), n_spots=37)
        it4 = create_more_realistic_intensity_table_with_coords(Area(min_x=2, max_x=5,
                                                      min_y=0, max_y=3), n_spots=41)
        
        concatenated = IntensityTable.concatenate_intensity_tables(
            [it1, it2, it3, it4], overlap_strategy=OverlapStrategy.TAKE_MAX)
    
    
        tables = [it1, it2, it3, it4]
        colors = ['r', 'g', 'b', 'gold']
        markers = ['*', 'D', 's', '*']
        coords = [(0, 2), (2, 2), (0, 0), (2, 0)]
        x_dims = [3, 3, 3, 3]
        y_dims = [3, 3, 3, 3]
        
        plot_intensity_tables(zip(tables, colors, markers, coords, x_dims, y_dims), concatenated)
    
        
        print(concatenated.sizes[Features.AXIS])
    

    The weird result (legend): black triangles = spots in the concatenated table; other colored shapes = spots associated with a given colored quadrant.

    [screenshot: spots from the four quadrant tables overlaid with the concatenated result]

    The center quad-overlap area shouldn't have that blue square or yellow star surviving in the concatenated table. Also, in the overlap area between yellow and blue, two anomalous blue spots somehow survive the TAKE_MAX process. Finally, there is a surviving red star in the overlap between red and green.

    Note also that this test code doesn't even test for cases where Z-overlaps might occur.

    CC: @berl

    opened by njmei 20
  • `write_experiment_json()` doesn't handle non-flat experiment directory structures correctly


    Hi all,

    So I've encountered an issue when running write_experiment_json() in 'no copy' mode (see: https://github.com/spacetx/starfish/pull/1053)

    The resulting output .json files that starfish produces all assume that image tiles reside in the experiment root directory (see: https://github.com/spacetx/slicedimage/blob/3456f86a10cc9c89996a394cb75a03457a9bf6bc/slicedimage/io.py#L303).

    However, our data (and likely many other groups' data) do not live in a flat directory structure. Ours happens to look like:

    expt_root_dir
    | - Round_1_dir
    |    | - FOV_1_dir
    |    |   | - channel_1_z_1.tiff
    |    |   ...
    |    | - FOV_2_dir
    |    |   | - channel_1_z_1.tiff
    |    |   ...
    |    ...
    | - Round_2_dir
    |   | - FOV_1_dir
    |   |    | - channel_1_z_1.tiff
    |   |    ...
    |   ...
    ...
    

    Although this issue arises because starfish can no longer dictate a flat data structure (by forcing copying), copying has to be optional, as the overhead of creating a new copy of huge 'local' datasets is unreasonable.

    From an end-user perspective, having image data organized into subdirectories is also much easier than dealing with a single folder containing thousands of tiffs.

    opened by njmei 16
  • RFC: improve show_stack() performance


    edit 7.27.18: updated to include prototypes for show_spots (see 5525d9b, 48ba1e7, dd941e2)

    Overview

    The show_stack tool is really useful for evaluating images during processing. However, it is very slow and thus difficult to use. I have prototyped a faster way to update images (see the notebook) and proposed a method for speeding up show_spots as well (see below). Thoughts?

    Approach

    Faster scrolling: The current version in master creates a new imshow() plot each time the slider is adjusted. Recreating the entire plot is very slow, so instead we use imshow().set_data() to update just the image data. This should make show_stack() much more responsive. If we need to go faster, we may need to look into a different plotting library (e.g., pyqtgraph).
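    A minimal sketch of the set_data() approach (a random stack stands in for an ImageStack's z planes; the callback name is illustrative):

    import matplotlib.pyplot as plt
    import numpy as np

    stack = np.random.rand(10, 64, 64)       # 10 z planes
    fig, ax = plt.subplots()
    im = ax.imshow(stack[0], cmap="gray")

    def on_slider_change(plane: int):
        im.set_data(stack[plane])            # swap pixel data; no new AxesImage
        fig.canvas.draw_idle()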

    Faster spots: To speed up the spot display, we draw all of the spots on the image (as if they were z-projected) before creating the interact object. We set all spots to set_visible = False (i.e., make them invisible). Then, when the viewer "scrolls" between slices, we toggle the visible property for spots that are members of that frame. We precompute the masks for each slice to speed up the update function.
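    And a sketch of the visibility-toggling idea (hypothetical spots_df with x, y, z columns; spots are drawn once up front, then shown or hidden per plane):

    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    stack = np.random.rand(10, 64, 64)
    spots_df = pd.DataFrame({"x": [10, 30, 50], "y": [20, 40, 10], "z": [0, 3, 3]})

    fig, ax = plt.subplots()
    im = ax.imshow(stack[0], cmap="gray")
    # One artist per z plane, all drawn in advance and hidden.
    spot_artists = {
        z: ax.plot(grp["x"], grp["y"], "o", mfc="none", visible=False)[0]
        for z, grp in spots_df.groupby("z")
    }

    def update(plane: int):
        im.set_data(stack[plane])
        for z, artist in spot_artists.items():
            artist.set_visible(z == plane)   # toggle members of this frame
        fig.canvas.draw_idle()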

    Image query cursor tooltip: Additionally, note that I use the 'notebook' (as opposed to 'inline') matplotlib magic, which both allows the draw to work and, as a bonus, gives an image tooltip. The tooltip is super useful because it reads out the coordinates and intensity of the pixel under the cursor (exactly what we were talking about today)!

    Questions

    • Will the proposed show_spots method generalize to other slicing (i.e., not along z)?
    • Is the show_spots responsive enough? We could consider blitting (i.e., only interacting with spots that change state between slices), but it may not be worth the effort.
    • What class will be passed to show_spots as the 'results_df'?
    • Does it matter what backend (e.g., QtAgg or TkAgg) the user has?
    opened by kevinyamauchi 16
  • Updated seqFISH Decoding Method: CheckAll Decoder


    This is an update to my previous PR for a new seqFISH decoding method. We discovered some problems with the method's false positive rate and have spent the past several months improving it to its current state; we think it is now fit to be published.

    Recapping the previous PR: we found that starFISH's seqFISH decoding (PerRoundMaxChannel with the NEAREST_NEIGHBOR or EXACT_MATCH TraceBuildingStrategy) was finding far fewer targets in our seqFISH datasets than we expected from previous results, and the reason is the limited search space within which starFISH looks for barcodes. With both TraceBuildingStrategies, spots in different rounds are only connected into barcodes if they are spatially close to a spot in an arbitrary anchor round, which forgoes the inherent error correction available from comparing the barcodes found using each round as an anchor. Additionally, the EXACT_MATCH TraceBuildingStrategy requires spots in different rounds to occupy the exact same pixel location to be connected into barcodes, while NEAREST_NEIGHBOR requires that they be nearest neighbors; both are extremely sensitive to spot drift between rounds and require precise registration that isn't always possible. The PerRoundMaxChannel decoder also lacks the ability to find error-corrected barcodes (in a codebook where all codes are at least Hamming distance 2 apart, each code can still be uniquely identified by a barcode that is missing one round).

    To address this, I wrote a seqFISH decoding method that emulates the one used by the Caltech lab where seqFISH originated. In this method, the spots in all rounds are separately used as anchors to search for spots within a search radius of each anchor spot. For each spot, all possible barcodes that could be constructed from the spots in other rounds within the search radius are considered, a much larger search space than exact matches or nearest neighbors alone. Each anchor spot is then assigned a "best" barcode out of its total possible set (it "checks all" of the possible barcodes) based on the spatial variance and intensities of the spots that make it up and whether it matches the codebook, though the order of those steps varies.

    This larger search space requires a few extra tricks to keep false positive calls down. One is a filter, which the Caltech group calls the seed filter, that removes barcodes that were not chosen as best for a certain number of the spots/rounds that make them up (usually one less than the total round number). So if a 5-round barcode was chosen as the best choice for only 3 of the spots that make it up, it would not pass the filter. I've also introduced a parameter I call strictness, which sets the maximum number of possible barcodes a given spot is allowed to have before it is dropped as ambiguous; barcodes in spot-dense regions tend to be difficult to call accurately, so it is useful to be able to restrict this if necessary. Another helpful trick was to run decoding in multiple nested stages that start with strict parameters, which are loosened as decoding progresses. This applies to the search radius (starting at radius 0 and incrementing up to the user-specified value), the number of rounds that can be omitted from a barcode to make a match (full codes first, then partials), and the maximum allowed score (based on the spatial variance and intensity of spots). Between stages, the spots found in decoded barcodes that passed all filters are removed from the original spot set before decoding the remaining spots. This lets you call high-confidence barcodes first and remove their spots, making adjacent barcodes easier to call. Running the whole decoding process repeatedly became quite time consuming, so I have added multiprocessing capability, using Python's standard multiprocessing library, to let users speed things up.

    In addition to the filters and incremental decoding, I take advantage of two slightly different decoding algorithms with different precision/recall properties. In the first, which I call "filter-first", once all possible barcodes for each spot are assembled, the "best" barcode for each spot is chosen by a scoring function that uses the spatial variance of the spots and their intensities; the chosen barcodes are then matched to the codebook, and those without a match are dropped. The other method, "decode-first", instead matches all possible barcodes for each spot to the codebook first and then, if there are multiple matches, chooses the best by the distance score. The "filter-first" method tends to be more accurate but returns fewer decoded targets (high precision/low recall), while "decode-first" finds more mRNA targets at a cost to accuracy (low precision/high recall). In keeping with the incremental decoding strategy described earlier, the first rounds of decoding use the high-accuracy "filter-first" method, followed by decodings with the lower-accuracy version.
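    A schematic contrast of the two orders of operations (all names hypothetical; codebook is a set of barcode tuples, score is lower-is-better):

    def choose_filter_first(candidates, codebook, score):
        """Pick the best-scoring candidate first, then require a codebook match."""
        if not candidates:
            return None
        best = min(candidates, key=score)
        return best if tuple(best) in codebook else None

    def choose_decode_first(candidates, codebook, score):
        """Keep only codebook matches, then pick the best-scoring one."""
        matches = [c for c in candidates if tuple(c) in codebook]
        return min(matches, key=score) if matches else None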

    Below is a description of the required inputs and a step by step explanation of the algorithm.

    Inputs:

    • spots - starFISH SpotFindingResults object
    • codebook - starFISH Codebook object
    • search_radius - pixel distance that spots can be from each other and be connected into a barcode
    • error_rounds - number of rounds that can be dropped from each code while still allowing a unique barcode match (only values of 0 or 1 are currently allowed)
    • mode - accuracy mode, determines settings of several different parameters ("high", "med", or "low")
    • physical_coords - boolean for whether to use the physical coordinate values found in the spots object when calculating distances between spots, or pixel coordinates

    Steps

    1. For each spot in each round, calculate a scaled intensity value by dividing the spot's intensity by the L2 norm of the intensities of all spots found in the same channel and the same 100-pixel-radius neighborhood (see the sketch after this list).
    2. For each spot in each round, find all neighboring spots in different rounds within the search radius.
    3. From each spot and its neighbors, build all possible combinations of spots that form barcodes, using their channel labels.
    4. Choose a "best" barcode for each spot using the scoring function, and drop barcodes that don't have codebook matches (this is the filter-first order; it is reversed for decode-first).
    5. Drop barcodes that were not chosen as best for most of the spots that make them up (usually one less than the round total).
    6. Choose between overlapping barcodes (barcodes that share spots) using their scores.
    7. Remove spots found in passing barcodes from the original spot set. (Steps 2-7 are repeated, first using the filter-first method for each incremental radius and then using the decode-first method for each radius.)
    8. If the error_rounds parameter is > 0, one final decoding is done in which partial barcodes are allowed. This uses the filter-first method and strict parameters, as the barcodes it finds are more error-prone than full barcodes.
    9. Turn the overall barcode set into a DecodedIntensityTable object and return it.
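    A sketch of the intensity scaling in step 1 (names illustrative, not this PR's code):

    import numpy as np

    def scaled_intensity(spot_intensity, neighborhood_intensities):
        # Divide by the L2 norm of the intensities of same-channel spots
        # within the 100-pixel-radius neighborhood (the spot itself included).
        norm = np.linalg.norm(neighborhood_intensities)
        return spot_intensity / norm if norm > 0 else 0.0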

    One final detail worth explaining is the mode parameter. It can take three values, "high", "med", or "low", and collapses a number of parameter choices into presets: the seed filter value at the end of each decoding step, the maximum allowed score for a barcode, and the strictness values for the filter-first and decode-first methods. "high" corresponds to high accuracy (lower recall), "low" to low accuracy (higher recall), and "med" is a balance between the two.

    To evaluate our decodings and compare them with starFISH methods, we used two kinds of QC measures. The first is correlation with orthogonal RNA counts (either smFISH or RNAseq); the other is the proportion of false positive calls, measured by placing "blank" codes in the codebook that adhere to its Hamming distance rules but don't correspond to any true mRNA. This false positive metric was missing from my previous PR, and when we applied it to the previous method our false positive values were quite high. Depending on the method used to calculate the false positive rate and the specific dataset, high false positive rates can be a problem with seqFISH data, but the mode parameter lets you easily adjust the decoding parameters to get the best results.
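    A sketch of the blank-code FP metric described above (names illustrative):

    def false_positive_rate(decoded_targets, blank_names):
        """Fraction of decoded barcodes whose assigned target is a blank codeword."""
        if not decoded_targets:
            return 0.0
        return sum(t in blank_names for t in decoded_targets) / len(decoded_targets)

    false_positive_rate(["geneA", "blank_07", "geneB"], {"blank_07"})  # -> 1/3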

    Below are figures showing results from three different seqFISH datasets, comparing the starFISH PerRoundMaxChannel + NEAREST_NEIGHBORS method with each mode choice for the CheckAll decoder. Each red/blue bar pair along the x-axis represents a single cell, FOV, or image chunk (we don't yet have segmented cells for one dataset, so it is broken into equal-sized chunks). The height of the blue bar is the total number of on-target (real) barcodes found in that cell/FOV/chunk, the red bar height is the off-target (blank) count, and the black bar shows the median on-target count. Also printed on each graph is the overall false positive (FP) rate and the Pearson correlation with smFISH or RNAseq where available. For each dataset, the same set of spots was used for each run.

    Intron seqFISH

    • Mouse ESC lines
    • 10,421 real genes + ~9,000 blank genes
    • 29 (close) z slices
    • This was the most difficult of the three datasets to get good results from. The large number of genes and closely spaced z slices make finding true barcodes tricky.

    [figure: Intron seqFISH on/off-target counts per chunk]

    RNA SPOTs seqFISH

    • No cells, just extracted RNA on a slide
    • 10,421 real genes + ~9,000 blank genes
    • single z slice
    • Pretty easy dataset as there was no cellular autofluorescence

    [figure: RNA SPOTs seqFISH on/off-target counts]

    Mouse Atlas seqFISH

    • slices of mouse hippocampus
    • 351 real genes, 351 blanks
    • 6 (distant) z slices
    • Lower false positives than intron seqFISH, probably because the lower gene count and distant z slices reduce confusion. I'm not sure why starFISH does so poorly here, though; its FP rate is almost 3x that of the new decoder's low-accuracy mode.

    [figure: Mouse Atlas seqFISH on/off-target counts]

    From these, you can see that the updated decoder finds more targets than starFISH, with lower false positive rates and higher smFISH/RNAseq correlations. The one area where the starFISH methods have us beat is speed. Being more thorough is costly: starFISH can decode a set of spots in 2-3 minutes on my system, while the new decoder can take up to 1.5 hours even when using 16 simultaneous processes. That time is for the intron seqFISH dataset, which is quite large (5 x 12 x 29 x 2048 x 2048), so run times should be considerably more reasonable for smaller datasets.

    Running the "make all" tests for my current repo throws a mypy error but in code that I haven't touched (like the ImageStack class code) so hopefully we can work together to get those kinks worked out so we can add this to starFISH so others might use it.

    opened by nickeener 15
  • RFC: Add Napari viewer to overlay detected spots on images


    What does this PR do?

    This PR is a prototype of using the Napari GUI for viewing detected spots overlaid on an ImageStack. The Napari GUI is still an early prototype, so it does have some bugs/brittle parts, but the performance for viewing spots on 3D volumes is way better than current notebook-based solutions.

    There are some more usability updates coming to the Napari GUI (e.g., contrast sliders) that will definitely improve UX. However, the API should remain the same in the near term, so show_spots_napari() will still work.

    Demo

    To try the viewer, first pip install --upgrade napari-gui and then open test_napari.ipynb in the notebooks directory. It demos the Allen Institute smFISH notebook using the Napari viewer. For convenience, the notebook loads a saved version of the 3D spot detection results (allen_decoded) from the notebooks directory (spot detection takes a while). We will nuke this after initial testing.

    Open questions

    • Where should this function go? I also noticed that there is a stub for a show_spots() method in IntensityTable. Maybe it should go there?
    • Is there an easy way to tell what type of image the spot attributes come from (e.g., 3D stack, MIP)? It would be nice to be able to automatically choose the type of projection of the 5D ImageStack to use for the display.
    • Noob question: how do I get a full r, c, z, x, y coordinate for a given feature in IntensityTable?
    opened by kevinyamauchi 15
  • Supporting tif/nd2 file formats


    Is your feature request related to a problem? Please describe.

    We are using Nikon microscopes, and in each round (out of more than 10) of our experiment we have ~80 GB of data in .nd2 format. We use nd2 files because we originally didn't want to lose any metadata.

    Describe the solution you'd like: I was hoping for support for nd2 files, to make json files (and single-frame .tiff files) directly from .nd2 files.

    Describe alternatives you've considered: To write the json file, what we do for now is convert all files from nd2 to single-frame tiff format, with channel label and z level in their filenames so they can be used by get_tile(), and then build a formatted folder containing all the frames and json files. This is very suboptimal, especially when dealing with this large a volume.
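    A rough sketch of that workaround (assuming the third-party nd2reader and tifffile packages; the file layout and naming are illustrative only):

    from nd2reader import ND2Reader
    import tifffile

    with ND2Reader("round_01.nd2") as nd2:
        nd2.iter_axes = "cz"              # iterate channels, then z within each channel
        z_count = nd2.sizes["z"]
        for i, frame in enumerate(nd2):
            c, z = divmod(i, z_count)     # recover channel and z level from the flat index
            tifffile.imwrite(f"round_01_c{c}_z{z}.tiff", frame)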

    Additional context: I wish there were no need for starfish to save frames in its own format, separately from the original stack. This feature would save more than a few TBs for each of our animals ...

    -- and sorry if I posted a few issues/feature requests in a row!

    feature 
    opened by abedghanbari2 14
  • Upload example of transforming raw MERFISH -> spaceTx formatted data


    Remaining work includes:

    1. Running this tool against all raw data
    2. Uploading formatted data to cloud
    3. Using experiment API against this dataset to test parsing larger datasets
    opened by dganguli 14
  • Validate that the notebooks can run successfully


    If *.py.skip exists, then that notebook is skipped.

    See https://travis-ci.org/spacetx/starfish/builds/389071632 for a run when .travis.yml is set to look for a different branch name.

    opened by ttung 14
  • Bump setuptools from 58.1.0 to 65.5.1 in /starfish


    Bumps setuptools from 58.1.0 to 65.5.1.

    Release notes

    Sourced from setuptools's releases.

    v65.5.1 through v63.4.2

    No release notes provided.

    ... (truncated)

    Changelog

    Sourced from setuptools's changelog.

    v65.5.1

    Misc

    • #3638: Drop a test dependency on the mock package, always use unittest.mock -- by hroncok
    • #3659: Fixed REDoS vector in package_index.

    v65.5.0

    Changes

    • #3624: Fixed editable install for multi-module/no-package src-layout projects.
    • #3626: Minor refactorings to support distutils using stdlib logging module.

    Documentation changes

    • #3419: Updated the example version numbers to be compliant with PEP-440 on the "Specifying Your Project’s Version" page of the user guide.

    Misc

    • #3569: Improved information about conflicting entries in the current working directory and editable install (in documentation and as an informational warning).
    • #3576: Updated version of validate_pyproject.

    v65.4.1, v65.4.0, v65.3.0

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump setuptools from 58.1.0 to 65.5.1 in /requirements


    Bumps setuptools from 58.1.0 to 65.5.1.

    (Release notes, changelog excerpt, and Dependabot command reference are identical to the preceding setuptools PR.)

    dependencies 
    opened by dependabot[bot] 0
  • Bump certifi from 2022.6.15 to 2022.12.7 in /starfish


    Bumps certifi from 2022.6.15 to 2022.12.7.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    (Dependabot command reference identical to the first Dependabot PR above.)

    dependencies 
    opened by dependabot[bot] 0
  • Bump certifi from 2022.6.15 to 2022.12.7 in /requirements


    Bumps certifi from 2022.6.15 to 2022.12.7.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    (Dependabot command reference identical to the first Dependabot PR above.)

    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 9.1.1 to 9.3.0 in /starfish


    Bumps pillow from 9.1.1 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    (Dependabot command reference identical to the first Dependabot PR above.)

    dependencies 
    opened by dependabot[bot] 0
  • Bump pillow from 8.3.2 to 9.3.0 in /requirements


    Bumps pillow from 8.3.2 to 9.3.0.

    (Release notes and changelog excerpt are identical to the preceding pillow PR.)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    (Dependabot command reference identical to the first Dependabot PR above.)

    dependencies 
    opened by dependabot[bot] 0
Releases(0.2.2)
  • 0.2.2(May 3, 2021)

    • Updates requirements
    • Updates to documentation
    • Import match_histograms from skimage.exposure
    • Add necessary coords for IntensityTable when using Nearest Neighbors strategy (#1928)
    • Fix localmaxpeakfinder spot_props filter (#1839)
  • 0.2.1(Jun 18, 2020)

    • Bump napari to 0.3.4 (#1889)
    • fix how spot_ids are handled by build_traces_sequential and Label._assign() (#1872)
    • reorganized examples gallery and made clarifications to example pipelines and formatting (#1880)
    • added image registration tutorial (#1874)
    • Add assigning spots to cells docs (#1832)
    • Add a Quick Start tutorial (#1869)
    • Update starfish installation guide for JOSS submission (#1868)
    • Add image segmentation docs (#1821)
    • Changing return value of PixelDecoding to DecodedIntensityTable (#1823)
    • Ensure that LocalMaxPeakFinder works in 3D (#1822)
    • Deprecate is_volume parameter with trackpy (#1820)
    • Fix on-demand calculation of BinaryMaskCollection's regionprops (#1819)
    • Remove workaround for non-3D images (#1808)
    • improve from_code_array validation (#1806)
    • Add group_by for tilefetcher-based ImageStack construction (#1796)
  • 0.2.0(Jan 31, 2020)

    [0.2.0] - 2020-01-31

    • Add level_method to the clip filters. (#1758)
    • adding method to use installed ilastik instance (#1740)
    • Create a TileFetcher-based constructor for ImageStack (#1737)
    • adding mouse v human example to starfish.data (#1741)
    • adding method to binary mask collection that imports labeled images from external sources like ilastik (#1731)
    • Remove starfish.types.Clip (#1729)
    • Move watershed segmentation from morphology.Binarize to morphology.Segment (#1720)
    • Link to the available datasets in "loading data" section (#1722)
    • Document workaround for python3.8 (#1705)
    • Wrap skimage's watershed (#1700)
    • Add 3D support to target assignment. (#1699)
    • Pipeline component and implementation for merging BinaryMaskCollections (#1692)
    • Mechanism to reduce multiple masks into one (#1684)
  • 0.1.10(Dec 13, 2019)

    [0.1.10] - 2019-12-13

    • Bump slicedimage to 4.1.1 (#1697)
    • Make map/reduce APIs more intuitive (#1686)
    • updates roadmap to reflect 2020H1 plans
    • adding aws scaling vignette (#1638)
    • Use thresholded binarize and mask filtering in existing watershed code. (#1671)
    • adding spot ids to pixel results (#1687)
    • Implement Labeling algorithms (#1680)
    • Thresholded binarize conversion algorithm (#1651)
    • Area filter for binary masks (#1673)
    • Fix stain generation in watershed (#1670)
    • Use the new levels module. (#1669)
    • Linear image leveling (#1666)
    • add axis labels to display() (#1682)
    • Clip method for Richardson Lucy (#1668)
    • Filters for mask collections (#1659)
    • Provide an apply method to binary mask collections. (#1655)
    • adding convenience method for slicing codebook data (#1626)
    • Fix display tests and code (#1664)
    • Additional builders for BinaryMaskCollection (#1637)
    • Methods for uncropping binary masks. (#1647)
    • Improve coordinate handling code for BinaryMaskCollection and LabelImage (#1632)
  • 0.1.9(Nov 18, 2019)

    • Create an ArrayLike type (#1649)
    • Verify that binary masks can be generated from empty label images (#1634)
    • Add a morphology package to hold BinaryMaskCollection, LabelImage, and their respective operators (#1631)
    • fixing travis (#1648)
    • Support multiple codewords for the same target (#1646)
    • Update data model for BinaryMaskCollection (#1628)
    • Test for Codebook.to_json / open_json (#1645)
    • Simplify Dockerfile (#1642)
    • Switch to version exclusion for scikit-image workaround (#1629)
    • Clean up binary mask (#1622)
    • adding an extras field to SpotFindingResults (#1615)
    • deleting Decode and Detect modules in lieu of spot finding refactor (#1598)
    • Fix install issues (#1641)
    • Upgrade to slicedimage 4.1.0 (#1639)
    • Update vocabulary for LabelImage I/O operations. (#1630)
    • Add a label image data type (#1619)
    • Remove deprecated code (#1621)
    • fixing bug with codebook.to_json (#1625)
    • Don't fill a new ImageStack with NaN (#1609)
    • Rename SegmentationMaskCollection to BinaryMaskCollection (#1611)
    • Remove hack to force anonymous memory mapping on osx (#1618)
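    The Codebook.to_json / open_json round-trip exercised above can be sketched as follows; the synthetic factory sizes and the file name are just for illustration:

        from starfish import Codebook

        # Build a toy one-hot codebook, write it to disk, and read it back.
        codebook = Codebook.synthetic_one_hot_codebook(n_round=4, n_channel=3, n_codes=8)
        codebook.to_json("codebook.json")
        reloaded = Codebook.open_json("codebook.json")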
  • 0.1.8(Oct 18, 2019)

    • Logging improvements (#1617)
    • Make regionprops available per mask (#1610)
    • Don't use mypy 0.740 (#1616)
    • changing test code to use new spot finding modules (#1597)
    • refactoring Allen smFISH with new spot finding (#1593)
    • clean up max projection (#1379)
    • Use masked fill to produce labeled images (#1582)
    • Replace most instances of starfish.image.Filter.Reduce with imagestack.reduce (#1548)
    • implementing STARmap spot finding refactor (#1592)
    • Add slots to classes that subclass xr.DataArray (#1607)
    • Convert SegmentationMaskCollection to a dict-like object (#1579)
    • Test case for multiprocessing + imagestack (#1589)
    • Masked fill method (#1581)
    • Add map/reduce methods to ImageStack (#1539)
    • Unify FunctionSource in Map and Reduce (#1540)
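    The ImageStack map/reduce methods referenced above replace most uses of starfish.image.Filter.Reduce. A minimal sketch, assuming a synthetic stack (shapes and contents are arbitrary):

        import numpy as np
        from starfish import ImageStack
        from starfish.types import Axes

        # 5D stack ordered (round, channel, z, y, x), filled with random noise.
        stack = ImageStack.from_numpy(np.random.rand(2, 3, 4, 32, 32).astype(np.float32))
        # Max-project over channel and z; returns a new ImageStack with those axes reduced.
        projected = stack.reduce({Axes.CH, Axes.ZPLANE}, func="max")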
  • 0.1.7(Oct 9, 2019)

    • ISS refactored with new spot finding path (#1518)
    • Fix bugs in per-round-max-decoder (#1602)
    • Fix dimension ordering on Codebook and IntensityTable (#1600)
    • provenance logging refactor and support for SpotFindingResults (#1517)
    • napari 0.2.0 release (#1599)
    • starfish.display: unpin napari version, add tests, view masks separately (#1570)
    • adding coordinate support to SpotFindingResults (#1516)
    • adding new SpotFindingResults data structure and new packages (#1515)
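    A minimal sketch of the new SpotFindingResults-based spot finding path introduced here; the detector parameters are arbitrary, and stack is assumed to be an ImageStack like the one sketched under 0.1.8 above:

        from starfish.spots import FindSpots

        # Gaussian blob detector; parameter values are placeholders, not recommendations.
        bd = FindSpots.BlobDetector(min_sigma=1, max_sigma=8, num_sigma=10, threshold=0.01)
        # spots = bd.run(stack)  # returns a SpotFindingResults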
  • 0.1.6(Sep 20, 2019)

    • Switch to python multithreading (#1544)
    • Don't waste memory/compute in preserve_float_range (#1545)
    • Get rid of shared state for LocalMaxPeakFinder (#1541)
    • map filter (#1520)
    • funcs passed to apply and transform can use positional arguments (#1519)
    • import SegmentationMaskCollection in main starfish (#1527)
    • Enable Windows builds on master (#1538)
    • Throw a warning when the data size is unusual. (#1525)
  • 0.1.5(Aug 13, 2019)

    • add ability to convert segmentation masks to a label image
    • If in_place=True, we should return None (#1473)
    • Bump to slicedimage 4.0.1 (#1458)
    • on-demand loading of data. (#1456)
    • Remove Cli (#1444)
  • 0.1.4(Jul 18, 2019)

    • Update in-place experiment writing to use the new WriterContract API in slicedimage 4.0.0 (#1447)
    • data set formatter with fixed filenames (#1421)
  • 0.1.3(Jul 10, 2019)

    • Instantiate the multiprocessing pool using with (#1436)
    • Slight optimization of pixel decoding (#1412)
    • [easy] point starfish.data.osmFISH() to new dataset (#1425)
    • [easy] Warn about the deprecation of the MaxProject filter (#1390)
  • 0.1.2(Jun 19, 2019)

    • Refactor reduce to take an optional module and only a function name. (#1386)
    • Codify the expectation that in-place experiment construction does not rely on TileFetcher data (#1389)
    • Warn and return empty SpotAttributes when PixelDecoding finds 0 spots (#1400)
    • updating data.merfish link to full dataset (#1406)
    • Rename tile_coordinates to tile_identifier (#1401)
    • Support for irregular images in the builder (#1382)
    • Fix how we structure the run notebook rules. (#1384)
    • updated loading data docs and added image of napari viewer (#1387)
    • Format complete ISS experiment and expose in starfish.data (#1316)
    • Add concatenate method for ExpressionMatrix (#1381)
    • Add TransformsList repr (#1380)
    • Fix 3d smFISH notebook as well. (#1385)
    • Add custom clip Filter classes (#1376)
    • Fix smFISH notebook. (#1383)
    • Add Filter.Reduce (general dimension reduction for ImageStack) (#1342)
    • Handle denormalized numbers when normalizing intensities/codebooks (#1371)
    • TileFetcher formats complete 496 fov MERFISH dataset (#1341)
    • Refactor fov.getImage() to fov.getImages() (#1346)
    • Add the ability to write labeled experiments (#1374)
    • Add inplace TileFetcher module back to public builder API (#1375)
    • Always create Z coordinates, even on 4D datasets. (#1358)
    • Create an all-purpose ImageStack factory (#1348)
    • Remove physical_coordinate_calculator.py (#1352)
    • ImageStack parsers should provide coordinates as an array (#1351)
    • bump to slicedimage 3.1.1 (#1343)
    • Creating a standard starfish.wdl that can be run with any recipe file (#1364)
  • 0.1.0(Jun 19, 2019)

    • public/private separation (#1244)
    • Recipe and recipe execution (#1192)
    • 3d smFISH notebook (#1238)
    • SeqFISH notebook (#1239)
    • Adding windows install instructions (#1227)
    • vectorize labeling spot lookups (#1215)
    • vectorize imagestack -> intensity_table coordinate transfer (#1212)
    • Fix the restoration of non-indexed axes. (#1189)
    • Allow for intensity tables with labeled axes (#1181)
    • ImageStack select on Physical Coordinates (#1147)
    • fixing Clip.SCALE_BY_IMAGE (#1193)
    • Update BaristaSeq text, fix LinearUnmixing (#1188)
    • Update STARmap notebook for SpaceJam (#1199)
    • replace label images with segmentation masks (#1135)
    • BaristaSeq + Plot tools update (#1171)
    • Intensity Table Concat Processing (#1118)
  • 0.0.36(Jun 19, 2019)

    • Update strict requirements (#1142)
    • High level goal: detect spots should accept imagestacks and not numpy arrays. (#1143)
    • Remove cropping from PixelSpotDetector, (#1120)
    • Add LocalSearchBlobDetector to support BaristaSeq, SeqFISH, STARmap (#1074)
    • Indirect File click types (#1124)
    • Move the registration tests next to their sources. (#1134)
    • Test to verify that inplace experiment construction works. (#1131)
    • Additional support code for building experiments in-place. (#1127)
  • 0.0.35(Jun 19, 2019)

    • Transfer physical Coords to Expression Matrix (#965)
    • Support for hierarchical directory structures for experiments. (#1126)
    • Pipeline Components: LearnTransform and ApplyTransform (#1083)
    • Restructure the relationship between PipelineComponent and AlgorithmBase (#1095)
  • 0.0.34(Jun 19, 2019)

    • Adding ability to pass aligned group to ImageStack.from_path_or_url (#1069)
    • Add Decoded Spot Table (#1087)
    • Enable appending to existing napari viewer in display() (#1093)
    • Change tile shape to a dict by default (#1072)
    • Add ElementWiseMult Filter Pipeline Component (#983)
    • Add linear unmixing pipeline component (#1056)
    • Spiritual Bag of Images Refactor: Part 1 (#986)
    • Add to provenance log (#968)
  • 0.0.33(Feb 14, 2019)

  • 0.0.32(Feb 7, 2019)

  • 0.0.31(Dec 1, 2018)

  • 0.0.30(Nov 22, 2018)

    • Added ImageStack cropping functionality (#711)
    • Added support for concatenating IntensityTables (#778)
    • Propagate physical coordinates from ImageStacks to IntensityTables (#753)
    • ImageStack max projection now returns ImageStack (#765)
    • Reduce memory consumption with python multiprocessing (#742)
  • 0.0.29(Nov 2, 2018)

  • 0.0.28(Nov 2, 2018)

  • 0.0.27(Oct 17, 2018)

  • 0.0.26(Oct 17, 2018)

  • 0.0.25(Oct 9, 2018)

  • 0.0.23(Sep 27, 2018)

  • 0.0.21(Sep 14, 2018)

  • 0.0.20(Sep 12, 2018)

  • 0.0.19(Sep 7, 2018)

  • 0.0.18(Sep 5, 2018)
