Create time-series datacubes for supervised machine learning with ICEYE SAR images.

Related tags

Deep Learning, icecube
Overview

ICEcube is a Python library intended to help organize SAR images and annotations for supervised machine learning applications. The library generates multidimensional SAR image and labeled data arrays.

The datacubes stack SAR time-series images in range and azimuth and can preserve the geospatial content, intensity, and complex SAR signal from the ICEYE SAR images. You can use the datacubes with ICEYE Ground Range Detected (GRD) GeoTIFFs and ICEYE Single Look Complex (SLC) .hdf5 product formats.

This work is sponsored by ESA Φ-lab as part of the AI4SAR initiative.


Getting Started

You need Python 3.8 or later to use the ICEcube library.

The installation options depend on whether you want to use the library in your Python scripts or you want to contribute to it. For more information, see Installation.
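
As a quick sanity check before installing, the sketch below verifies the Python version and notes the install-from-GitHub route mentioned in the comments further down; treat it as a hint only, and follow the Installation guide for the supported options.

    # Minimal environment check before installing ICEcube (a sketch, not the
    # official installation procedure).
    import sys

    if sys.version_info < (3, 8):
        raise RuntimeError("ICEcube requires Python 3.8 or later")

    # One install route reported by users (see the comments below):
    #   pip install git+https://github.com/iceye-ltd/icecube.git
    print("Python version OK for ICEcube:", sys.version.split()[0])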


ICEcube Examples

To test the Jupyter notebooks and for information on how to use the library, see the ICEcube Documentation.
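
As a minimal sketch of what building a datacube can look like, the snippet below follows the CubeConfig() / SLCDatacube.build() pattern that appears in the traceback quoted in the comments; the import paths and the xrdataset attribute are assumptions inferred from that traceback, so check the ICEcube Documentation and notebooks for the exact API.

    # A hedged sketch of building an SLC datacube, based on the CubeConfig /
    # SLCDatacube.build() usage visible in the traceback quoted below.
    # Import paths and attribute names are assumptions inferred from that
    # traceback, not a verified API reference.
    from icecube.bin.config import CubeConfig                  # assumed module path
    from icecube.bin.sar_cube.slc_datacube import SLCDatacube  # path seen in the traceback

    cube_config = CubeConfig()
    slc_datacube = SLCDatacube.build(cube_config, "/path/to/slc_rasters")

    # build() attaches an xarray.Dataset to the datacube object
    # (slc_datacube.xrdataset in the traceback), which can then be inspected.
    print(slc_datacube.xrdataset)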


AI4SAR Project Updates

For the latest project updates, see SAR for AI Development.

Comments
  • 'RPC' does not exist

    Trying to read an SLC .h5 downloaded from the ICEYE archive (id 10499) and getting 'RPC does not exist':

    cube_config = CubeConfig()
    slc_datacube = SLCDatacube.build(cube_config, '/Users/sstrong/bin/test_data_icecube/slcs')
    
    ---------------------------------------------------------------------------
    KeyError                                  Traceback (most recent call last)
    /var/folders/7r/fyfh8zx51ls6yt8t_jppnz3c0000gq/T/ipykernel_11546/2087236712.py in <module>
          1 cube_config = CubeConfig()
    ----> 2 slc_datacube = SLCDatacube.build(cube_config, '/Users/sstrong/bin/test_data_icecube/slcs')
    
    ~/Documents/github/icecube/icecube/bin/sar_cube/slc_datacube.py in build(cls, cube_config, raster_dir)
         52     def build(cls, cube_config: CubeConfig, raster_dir: str) -> SARDatacube:
         53         slc_datacube = SLCDatacube(cube_config, RASTER_DTYPE)
    ---> 54         ds = slc_datacube.create(cls.PRODUCT_TYPE, raster_dir)
         55         slc_datacube.xrdataset = ds
         56         return slc_datacube
    
    ~/Documents/github/icecube/icecube/utils/common_utils.py in time_it(*args, **kwargs)
        111     def time_it(*args, **kwargs):
        112         time_started = time.time()
    --> 113         return_value = func(*args, **kwargs)
        114         time_elapsed = time.time()
        115         logger.info(
    
    ~/Documents/github/icecube/icecube/bin/sar_cube/sar_datacube.py in create(self, product_type, raster_dir)
         43         """
         44         metadata_object = SARDatacubeMetadata(self.cube_config)
    ---> 45         metadata_object = metadata_object.compute_metdatadf_from_folder(
         46             raster_dir, product_type
         47         )
    
    ~/Documents/github/icecube/icecube/bin/sar_cube/sar_datacube_metadata.py in compute_metdatadf_from_folder(self, raster_dir, product_type)
        116         )
        117 
    --> 118         self.metadata_df = self._crawl_metadata(raster_dir, product_type)
        119         logger.debug(f"length metadata from the directory {len(self.metadata_df)}")
        120 
    
    ~/Documents/github/icecube/icecube/bin/sar_cube/sar_datacube_metadata.py in _crawl_metadata(self, raster_dir, product_type)
         68 
         69     def _crawl_metadata(self, raster_dir, product_type):
    ---> 70         return metadata_crawler(
         71             raster_dir,
         72             product_type,
    
    ~/Documents/github/icecube/icecube/utils/metadata_crawler.py in metadata_crawler(raster_dir, product_type, variables, recursive)
         36     _, raster_paths = DirUtils.get_dir_files(raster_dir, fext=fext)
         37 
    ---> 38     return metadata_crawler_list(raster_paths, variables)
         39 
         40 
    
    ~/Documents/github/icecube/icecube/utils/metadata_crawler.py in metadata_crawler_list(raster_paths, variables)
         43 
         44     for indx, raster_path in enumerate(raster_paths):
    ---> 45         metadata = IO.load_ICEYE_metadata(raster_path)
         46         parsed_metadata = _parse_data_row(metadata, variables)
         47         parsed_metadata["product_fpath"] = raster_path
    
    ~/Documents/github/icecube/icecube/utils/analytics_IO.py in load_ICEYE_metadata(path)
        432         are converted from bytedata and read into the dict for compatability reasons.
        433         """
    --> 434         return read_SLC_metadata(h5py.File(path, "r"))
        435 
        436     elif path.endswith(".tif") or path.endswith(".tiff"):
    
    ~/Documents/github/icecube/icecube/utils/analytics_IO.py in read_SLC_metadata(h5_io)
        329 
        330     # RPCs are nested under "RPC/" in the h5 thus need to be parsed in a specific manner
    --> 331     RPC_source = h5_io["RPC"]
        332     meta_dict["RPC"] = parse_slc_rpc_to_meta_dict(
        333         RPC_source=RPC_source, meta_dict=meta_dict
    
    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
    
    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
    
    /opt/homebrew/anaconda3/envs/icecube_env/lib/python3.8/site-packages/h5py/_hl/group.py in __getitem__(self, name)
        303                 raise ValueError("Invalid HDF5 object reference")
        304         elif isinstance(name, (bytes, str)):
    --> 305             oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
        306         else:
        307             raise TypeError("Accessing a group is done with bytes or str, "
    
    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
    
    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
    
    h5py/h5o.pyx in h5py.h5o.open()
    
    KeyError: "Unable to open object (object 'RPC' doesn't exist)"
    
    opened by shaystrong 3
  • scikit-image dependency fails on OSX M1 chip

    Can't install all requirements for icecube on an M1 chip. This may present a future problem; just documenting it for awareness. scikit-image cannot be compiled/installed on the M1. I have not tested the conda installation, which perhaps does work, but I use brew/pip (and conda can create conflicts with those).

    opened by shaystrong 2
  • Fix/labels coords

    Summary includes:

    • Making the xr.dataset structure coherent for labels and SAR (added time coords for labels)
    • For the labels datacube, product_fpath is now used (instead of the previous behavior)
    • Small typo fixed
    • Tests added for merging SAR cubes with the labels cube
    • Instructions/cell added to install ML requirements for notebook #5
    • Release notes added to mkdocs
    • setup.py updated with ML requirements and version
    opened by muaali 1
  • Update/docs/notebooks

    Changes involve:

    • Introduced a new markdown file called "overview.md" that describes the structure of the examples under docs/
    • Added a new notebook, CreatingDatacube, that walks a user through creating datacubes with different methods
    • Other notebooks updated and improved.
    opened by muaali 1
  • missing RPC metadata set to None

    Related to issue: https://github.com/iceye-ltd/icecube/issues/11. Some old ICEYE images can have RPC information missing; when that happens, the RPC key is absent and the pipeline does not work. RPC is now set to None if it is missing, and a user warning is generated.

    opened by muaali 0
  • feat/general metadata

    The following changes are introduced:

    • Metadata constraints loosened to allow merging general SAR data (rasterio/HDF5 compatible); this means that cube configuration is not available for such rasters
    • .tiff support added for GRDs
    • Code refactoring in SARDatacubeMetadata to avoid repetitive code
    opened by muaali 0
  • Labels/subset support

    Changes include:

    • Updating the SLC metadata reader to avoid key values stored as HDF5 datasets
    • Enabling cube generation from a labels.json that has masks/labels for a subset of rasters (i.e., the number of masks ingested into the labels cube does not have to equal the number of rasters)
    • CHUNK_SIZE has been reduced to give better performance when creating massive datacubes
    opened by muaali 0
  • bin module not found

    After installing from GitHub using !pip install git+https://github.com/iceye-ltd/icecube.git, icecube itself imports fine, but importing the bin module throws ModuleNotFoundError: No module named 'icecube.bin'.

    Any advice? Thanks.

    opened by jaimebayes 0
  • dummy_mask_labels.json

    FileNotFoundError: [Errno 2] No such file or directory: './resources/labels/dummy_mask_labels.json'

    Could you upload it? Is it available? Thanks in advance.

    opened by jaimebayes 0
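
Regarding the "'RPC' does not exist" comment above: as the "missing RPC metadata set to None" change notes, older ICEYE SLC products may lack the RPC group entirely. As a quick, standalone check (not part of the ICEcube API; the file path below is a placeholder), a product can be inspected with h5py before building a cube:

    # Check whether an ICEYE SLC .h5 product contains the RPC group, relating
    # to the "'RPC' does not exist" issue above. Standalone h5py sketch; the
    # path below is a placeholder.
    import h5py

    slc_path = "/path/to/ICEYE_SLC_product.h5"  # placeholder

    with h5py.File(slc_path, "r") as h5_io:
        if "RPC" in h5_io:
            print("RPC group found:", list(h5_io["RPC"].keys()))
        else:
            print("No RPC group; older ICEYE products may be missing it.")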
Releases: 1.1.0
Owner: ICEYE Ltd