Keras implementation of Deeplab v3+ with pretrained weights

Overview

Keras implementation of Deeplabv3+

This repo is no longer maintained. I won't respond to issues, but I will merge pull requests.
DeepLab is a state-of-the-art deep learning model for semantic image segmentation.

The model is based on the original TF frozen graph. It is possible to load pretrained weights into this model; the weights are imported directly from the original TF checkpoint.

Segmentation results of original TF model. Output Stride = 8




Segmentation results of this repo model with loaded weights and OS = 8
Results are identical to the TF model




Segmentation results of this repo model with loaded weights and OS = 16
Results are still good




How to get labels

The model returns a tensor of shape (batch_size, height, width, num_classes). To obtain labels, apply argmax to the logits of the final layer. Example of predicting on image1.jpg:

import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

from model import Deeplabv3

# Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
# as original image.  Normalization matches MobileNetV2

trained_image_width=512 
mean_subtraction_value=127.5
image = np.array(Image.open('imgs/image1.jpg'))

# resize to max dimension of images from training dataset
w, h, _ = image.shape
ratio = float(trained_image_width) / np.max([w, h])
resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))

# apply normalization for trained dataset images
resized_image = (resized_image / mean_subtraction_value) - 1.

# pad array to square image to match training images
pad_x = int(trained_image_width - resized_image.shape[0])
pad_y = int(trained_image_width - resized_image.shape[1])
resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')

# make prediction
deeplab_model = Deeplabv3()
res = deeplab_model.predict(np.expand_dims(resized_image, 0))
labels = np.argmax(res.squeeze(), -1)

# remove padding and resize back to original image
if pad_x > 0:
    labels = labels[:-pad_x]
if pad_y > 0:
    labels = labels[:, :-pad_y]
labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))

plt.imshow(labels)
plt.waitforbuttonpress()

How to use this model with custom input shape and custom number of classes

from model import Deeplabv3
deeplab_model = Deeplabv3(input_shape=(384, 384, 3), classes=4)
# or you can use None as the spatial shape
deeplab_model = Deeplabv3(input_shape=(None, None, 3), classes=4)

After that you will get a usual Keras model, which you can train with the .fit and .fit_generator methods.
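
As a rough sketch (not from this repo's docs), the resulting model can be compiled and fitted like any other Keras model. The weights=None argument, the loss choice, and the random data below are assumptions for a 4-class setup:

import numpy as np
import tensorflow as tf
from model import Deeplabv3

# Hypothetical 4-class setup with random data; replace with a real dataset.
# weights=None is assumed here to skip loading pretrained PASCAL VOC weights.
deeplab_model = Deeplabv3(weights=None, input_shape=(384, 384, 3), classes=4)
deeplab_model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    # by default the model outputs raw logits, hence from_logits=True
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

images = np.random.rand(2, 384, 384, 3).astype('float32')
masks = np.random.randint(0, 4, size=(2, 384, 384)).astype('int32')  # per-pixel class ids
deeplab_model.fit(images, masks, batch_size=1, epochs=1)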

How to train this model

Useful parameters can be found in the original repository.

Important notes:

  1. This model does not include weight decay by default; you need to add it yourself.
  2. Due to the huge memory use with OS=8, the Xception backbone should be trained with OS=16 and only run for inference with OS=8.
  3. You can freeze the feature extractor of the Xception backbone (the first 356 layers) and fine-tune only the decoder (see the sketch after this list). Right now (March 2019), there is a problem with fine-tuning Keras models that contain BatchNorm layers. You can read more about it here.
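
A minimal sketch of note 3 above. It assumes the feature extractor really corresponds to the first 356 entries of model.layers and that no pretrained weights are loaded (weights=None); adjust both for your setup.

import tensorflow as tf
from model import Deeplabv3

deeplab_model = Deeplabv3(backbone='xception', OS=16, classes=4, weights=None)

# Freeze the (assumed) feature extractor and leave the decoder trainable.
for layer in deeplab_model.layers[:356]:
    layer.trainable = False
for layer in deeplab_model.layers[356:]:
    layer.trainable = True

# Changes to .trainable only take effect after recompiling.
deeplab_model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))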

Known issues

This model can be retrained; check this notebook. Fine-tuning is tricky and difficult because of the confusion between training and trainable in Keras. See this issue for a discussion and possible alternatives.
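
To make the distinction concrete (a rough sketch, not a fix for the underlying Keras issue): trainable is a layer attribute that decides whether weights get updated, while training is a call-time argument that decides whether layers such as BatchNorm use batch statistics or their frozen moving averages.

import numpy as np
from model import Deeplabv3

deeplab_model = Deeplabv3()

# `trainable` controls whether a layer's weights receive gradient updates.
for layer in deeplab_model.layers:
    layer.trainable = False

# `training` controls run-time behaviour of layers such as BatchNorm;
# here frozen moving statistics are used instead of batch statistics.
dummy = np.zeros((1, 512, 512, 3), dtype='float32')
outputs = deeplab_model(dummy, training=False)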

How to load model

In order to load a model saved with model.save(), use this code:

from tensorflow.keras.models import load_model
from model import relu6

# relu6 is a custom object in model.py and must be passed so Keras can deserialize it
deeplab_model = load_model('example.h5', custom_objects={'relu6': relu6})

Xception vs MobileNetv2

There are two available backbones. The Xception backbone is more accurate but has 25 times more parameters than MobileNetv2.

For MobileNetv2 there are pretrained weights only for alpha=1. However, you can instantiate the model with different values of alpha.
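
For example (a sketch; it assumes the constructor exposes backbone and alpha arguments as in model.py, and that weights=None is valid since no pretrained weights exist for alpha != 1):

from model import Deeplabv3

# MobileNetv2 backbone with pretrained PASCAL VOC weights (only alpha=1 is pretrained)
mobilenet_model = Deeplabv3(backbone='mobilenetv2', alpha=1.)

# A slimmer, randomly initialized MobileNetv2 variant
slim_model = Deeplabv3(weights=None, backbone='mobilenetv2', alpha=0.5)

# Xception backbone: ~25x more parameters, but more accurate
xception_model = Deeplabv3(backbone='xception')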

Requirement

The latest version of this repo uses tf.keras, so you only need TF 2.0+ installed:
tensorflow-gpu==2.0.0a0
CUDA==9.0


If you want to use an older version, use the following commands:

git clone https://github.com/bonlime/keras-deeplab-v3-plus/
cd keras-deeplab-v3-plus/
git checkout 714a6b7d1a069a07547c5c08282f1a706db92e20

tensorflow-gpu==1.13
Keras==2.2.4

Comments
  • Performance degradation

    Hi,

    I'm using the converted pretrained weights to measure mIoU and observed a 10% drop. Could you share your measurement results? Did you still get 84% mIoU on PASCAL using the transferred model and weights?

    Thanks

    opened by baoruxiao 13
  • Is dilation_rate useful in the Keras DepthwiseConv2D layer?

    Hi, I read the documentation and dilation_rate does not seem to be a hyperparameter of the DepthwiseConv2D layer. But you used it in your model: x = DepthwiseConv2D((kernel_size, kernel_size), strides=(stride, stride), dilation_rate=(rate, rate))

    I made a toy example to check this:

    from keras.models import Sequential
    from keras.layers import DepthwiseConv2D

    model = Sequential()
    model.add(DepthwiseConv2D(3, strides=1, padding='valid', depth_multiplier=2,
                              dilation_rate=(2, 2), input_shape=(9, 9, 3)))
    model.summary()
    

    Output:

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    depthwise_conv2d_6 (Depthwis (None, 7, 7, 6)           60        
    =================================================================
    Total params: 60
    Trainable params: 60
    Non-trainable params: 0
    _________________________________________________________________
    

    If dilation_rate worked in DepthwiseConv2D, I think the output shape should be (5, 5, 6), not (7, 7, 6).

    My English is pretty limited, plz don't mind

    opened by Pofatoezil 10
  • About the softmax layer & label process?

    Hi, bonlime, Thanks for your work! It helps me a lot! But I have two questions in my training process:

    1. I found there is no 'softmax' on the last layer of the Deeplabv3() model. Is it right if I use model.fit() directly after model=Deeplabv3() without any other process? Or could you please tell me where to use the 'softmax' layer?

    2. In the training image input process, how to correctly deal with labels? In my previous segmentation projects, the training and prediction steps usually operate on single-channel pngs labels, where the value of each pixel corresponds to the class label (for example, 0-21 for the Pascal dataset). Is this project the same as that?

    I'm looking forward to your reply. Thanks!

    opened by YanZhiyuan0918 10
  • The right way of pre-processing for input?

    We use the first image as input, resize it from (427, 640, 3) to (512, 512, 3), and normalize it from [0, 255] to [0, 1]. As shown below, the output of OS=16 is normal,

    but the output of OS=8 is worse.

    I cannot figure this out, so I think my pre-processing (such as the normalization) is wrong.

    opened by munanning 9
  • compatibility with tensorflow < 2.0

    Hi, since I have CUDA 9.0 on Ubuntu 16.04, TensorFlow 2.0 is not an option (TensorFlow 1.12 is installed instead). When I run the code:

    from model import Deeplabv3
    model = Deeplabv3(input_shape=(None, None, 3), backbone='xception', OS=8)

    I got an error: AttributeError: module 'tensorflow._api.v1.image' has no attribute 'resize'

    How can I resolve this so that I can run the code with TensorFlow 1.12? Many thanks.

    opened by tsing90 7
  • image-level pooling

    Hello,

    I checked your code and I think the image-level pooling was not implemented correctly, because it is not computing a global average pooling.

    https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/model.py#L437

    I checked this code: https://github.com/tensorflow/models/blob/4a0ee4a29dd7e4b6e0135ccf6857f4dc58d71a90/research/deeplab/model.py#L397 and the reference is ref 52 from the original paper (https://arxiv.org/pdf/1506.04579.pdf):

    Exploiting the FCN architecture, ParseNet can directly use global average pooling from the final (or any) feature map, resulting in the feature of the whole image, and use it as context.

    In implementation, this is accomplished by unpooling the context vector and appending the resulting feature map with the standard feature map.

    Specifically, we use global average pooling and pool the context features from the last layer or any layer if that is desired.

    opened by emedinac 7
  • Does the model have errors?

    When I use fit, it says: Error when checking target: expected bilinear_upsampling_2 to have shape (512, 512, 2) but got array with shape (512, 512, 3). PS: my input image shape is (512, 512, 3).

    opened by Taylor-Rose 6
  • How can I run the video module to achieve real-time detection?

    Hello, I want to know the speed of deeplabv3+, and I tried to run this:

    from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
    from matplotlib import pyplot as plt
    import cv2 # used for resize. if you dont have it, use anything else
    import numpy as np
    from model import Deeplabv3

    deeplab_model = Deeplabv3()

    def detect_video(deeplab_model):
        import cv2
        vid = cv2.VideoCapture(0)
        if not vid.isOpened():
            raise IOError("Couldn't open webcam or video")
        accum_time = 10
        curr_fps = 10
        fps = "20"
        prev_time = timer()
        while True:
            return_value, frame = vid.read()
            res = deeplab_model.predict(frame)
            result = array_to_img(res)
            curr_time = timer()
            exec_time = curr_time - prev_time
            prev_time = curr_time
            accum_time = accum_time + exec_time
            curr_fps = curr_fps + 1
            if accum_time > 1:
                accum_time = accum_time - 1
                fps = "FPS: " + str(curr_fps)
                curr_fps = 0
            cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                        fontScale=0.50, color=(255, 0, 0), thickness=2)
            cv2.namedWindow("result", cv2.WINDOW_NORMAL)
            cv2.imshow("result", result)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        deeplab_model.close_session()

    detect_video(deeplab_model)

    but it doesn't work! I think I need your help. Thanks.

    opened by ll1214 6
  • How can i get each label's coordinates on segmentation map?

    import numpy as np
    from PIL import Image
    from matplotlib import pyplot as plt
    
    from model import Deeplabv3
    
    # Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
    # as original image.  Normalization matches MobileNetV2
    
    trained_image_width=512 
    mean_subtraction_value=127.5
    image = np.array(Image.open('imgs/image1.jpg'))
    
    # resize to max dimension of images from training dataset
    w, h, _ = image.shape
    ratio = float(trained_image_width) / np.max([w, h])
    resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))
    
    # apply normalization for trained dataset images
    resized_image = (resized_image / mean_subtraction_value) - 1.
    
    # pad array to square image to match training images
    pad_x = int(trained_image_width - resized_image.shape[0])
    pad_y = int(trained_image_width - resized_image.shape[1])
    resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')
    
    # make prediction
    deeplab_model = Deeplabv3()
    res = deeplab_model.predict(np.expand_dims(resized_image, 0))
    labels = np.argmax(res.squeeze(), -1)
    
    # remove padding and resize back to original image
    if pad_x > 0:
        labels = labels[:-pad_x]
    if pad_y > 0:
        labels = labels[:, :-pad_y]
    labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))
    
    plt.imshow(labels)
    plt.show()
    

    I ran this code and got a segmentation map. But I want to get a result like https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/imgs/seg_results2.png, and also each label's coordinates on the image. How can I do that?

    opened by Baek2back 5
  • Make model.py compatible with Python 3 and switch to tf.image.resize()

    • make model.py compatible with Python 3 by changing [tensor].shape to [tensor].shape.as_list()
    • move away from tf.compat.v1.image.resize() due to "resize method is not implemented" error.
    • use tf.image.resize() instead because it now supports align_corners=True.
    opened by yanfengliu 5
  • Custom Dataset: Number of Classes

    Should num_classes take into account the background class? For example, if I have 3 foreground classes that I have masks for and a background class that I don't care about and don't have a mask for, should my num_classes be 3?

    opened by ad12 5
  • Bump pillow from 6.0.0 to 9.3.0

    Bumps pillow from 6.0.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow from 2.5.0rc0 to 2.9.3

    Bumps tensorflow from 2.5.0rc0 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Citation in Journal

    I would like to cite this repository in a journal paper I'm currently writing as I have used most of the code here. I have already included the reference to the original DeepLabV3+ paper, but I also want to include this repository. What is the best way to cite it? Thanks!

    opened by pedrogalher 1
  • Model not learning anything

    Hi, I'm training DeepLabV3+ with the MobileNet backbone on my custom dataset. My dataset has 1 class. My model setup looks like:

    deeplab_model = Deeplabv3(input_shape=(512, 512, 3), classes=2, activation='softmax')
    deeplab_model.compile(optimizer='adam',
                          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                          metrics=['accuracy'])
    deeplab_model.fit(...)

    My loss is not decreasing and the output is all black. My mask has shape (512, 512). Can someone please guide me on what to do? Any help would be great.

    opened by anmol4210 1
  • AttributeError: 'int' object has no attribute 'value'

    I copied and ran this code (and copied the images and model.py):

    from matplotlib import pyplot as plt
    import cv2 # used for resize. if you dont have it, use anything else
    import numpy as np
    from model import Deeplabv3

    img = plt.imread("imgs/image1.jpg")
    print(img.shape)
    w, h, _ = img.shape
    ratio = 512. / np.max([w, h])
    resized = cv2.resize(img, (int(ratio * h), int(ratio * w)))
    resized = resized / 127.5 - 1.
    new_deeplab_model = Deeplabv3(input_shape=(512, 512, 3), OS=16)

    pad_x = int(512 - resized.shape[0])
    resized2 = np.pad(resized, ((0, pad_x), (0, 0), (0, 0)), mode='constant')
    res = new_deeplab_model.predict(np.expand_dims(resized2, 0))
    res_old = old_deeplab_model.predict(np.expand_dims(resized2, 0))
    labels = np.argmax(res.squeeze(), -1)
    labels_old = np.argmax(res_old.squeeze(), -1)
    plt.imshow(labels[:-pad_x])
    plt.show()
    plt.imshow(labels_old[:-pad_x])
    plt.show()

    but I get this error =>

    ERROR:root:An unexpected error occurred while tokenizing input
    The following traceback may be corrupted or invalid
    The error message is: ('EOF in multi-line string', (1, 2))


    TypeError                                 Traceback (most recent call last)
    in ()
         26 # make prediction
         27 deeplab_model = Deeplabv3()
    ---> 28 res = deeplab_model.predict(np.expand_dims(resized_image, 0))
         29 labels = np.argmax(res.squeeze(), -1)
         30

    10 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
        966 func_graph: A FuncGraph object to destroy. func_graph is unusable
        967 after this function.
    --> 968 """
        969 # TODO(b/115366440): Delete this method when a custom OrderedDict is added.
        970 # Clearing captures using clear() leaves some cycles around.

    TypeError: in user code:

    TypeError: tf__predict_function() missing 8 required positional arguments: 'x', 'batch_size', 'verbose', 'steps', 'callbacks', 'max_queue_size', 'workers', and 'use_multiprocessing'
    
    opened by alikarimi120 1
Releases: 1.2
Owner: Emil Zakirov. MIPT & Skoltech. Computer Vision Engineer.