Research on Tabular Deep Learning (Python package & papers)

Overview


For paper implementations, see the section "Papers and projects".

rtdl is a PyTorch-based package providing a user-friendly API for the main models and concepts from our papers. See the documentation.
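For example, the FT-Transformer from the papers can be constructed in a few lines. A minimal sketch, assuming the documented FTTransformer.make_default API (see the documentation for exact signatures):

    import rtdl
    import torch

    # FT-Transformer with the default hyperparameters from the paper.
    model = rtdl.FTTransformer.make_default(
        n_num_features=8,
        cat_cardinalities=[10, 4],
        d_out=1,
    )
    x_num = torch.randn(32, 8)            # 8 numerical features
    x_cat = torch.randint(0, 4, (32, 2))  # indices below each cardinality
    y_hat = model(x_num, x_cat)           # shape: (32, 1)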

Press "Watch" to stay up to date with new papers and releases!

Feel free to report issues and post questions/feedback/ideas.

Papers and projects

Name | Location | Comment
--- | --- | ---
On Embeddings for Numerical Features in Tabular Deep Learning | link | arXiv 2022
Revisiting Deep Learning Models for Tabular Data | link | NeurIPS 2021
rtdl | link | Python package
Comments
  • Fix MLP.make_baseline() return type

    Return an object of type cls, not MLP, in MLP.make_baseline(). Otherwise, instances of child classes of MLP constructed via .make_baseline() always have type MLP instead of the type of the child class.
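    A minimal sketch of the proposed fix (simplified constructor; the MyMLP subclass is hypothetical, for illustration only):

    class MLP:
        def __init__(self, d_layers):
            self.d_layers = d_layers

        @classmethod
        def make_baseline(cls, d_layers):
            # Constructing via cls (not MLP) preserves the subclass type.
            return cls(d_layers)

    class MyMLP(MLP):  # hypothetical subclass
        pass

    assert type(MyMLP.make_baseline([64, 64])) is MyMLP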

    opened by jpgard 6
  • Is it possible to provide a scikit-learn interface?

    This project is interesting, and I want to use it as a baseline algorithm in my paper. However, it seems that I need to take several steps just to make a prediction. Would it be possible to provide a scikit-learn interface, to make comparisons between different algorithms more convenient?

    opened by hengzhe-zhang 5
  • Cannot link in the document of zero

    Hi! I am trying to understand the usage of the Python package zero, which is used in the rtdl example, but I found that the link in the code comment is no longer available.

    Here is the invalid link: https://yura52.github.io/zero/0.0.4/reference/api/zero.improve_reproducibility.html

    I am wondering whether there is any other documentation. Thank you!

    Regards.

    opened by WuZheng326 4
  • embedding of categorical variables

    Hi Yury,

    Thank you for your excellent work. I have a question about handling categorical features: do I need to pre-train the embedding layer as part of data preprocessing, or can I simply attach the embedding layer to the model and train it together with the model?
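    For reference, the common pattern is the second option: the embedding is part of the model and is trained jointly with it, with no pre-training. A minimal sketch with plain nn.Embedding (rtdl's CategoricalFeatureTokenizer is used the same way):

    import torch
    import torch.nn as nn

    class TabularModel(nn.Module):
        def __init__(self, cardinalities, d_embed, d_out):
            super().__init__()
            # One embedding table per categorical feature; its weights are
            # ordinary parameters, updated by the same optimizer as the rest.
            self.embeddings = nn.ModuleList(
                nn.Embedding(c, d_embed) for c in cardinalities
            )
            self.head = nn.Linear(d_embed * len(cardinalities), d_out)

        def forward(self, x_cat):
            # x_cat: (batch, n_features) of integer category indices
            tokens = [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)]
            return self.head(torch.cat(tokens, dim=1))

    model = TabularModel(cardinalities=[10, 4], d_embed=8, d_out=1)
    print(model(torch.tensor([[3, 1], [7, 0]])).shape)  # torch.Size([2, 1])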

    opened by lhq12 3
  • Add ⭐️Weights & Biases⭐️ Logging

    This PR adds basic Weights & Biases metric logging by appending to the existing codebase with minimal changes, while also supporting checkpoint uploads as Weights & Biases Artifacts.

    Wherever needed, I have used the existing Weights & Biases integrations, viz. those for LightGBM and XGBoost.

    I have validated the proposed changes with 150+ runs, which can be viewed on this project page and in detail in an accompanying blog post.
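    For readers unfamiliar with the integration, a minimal sketch of the kind of logging this PR adds (the project and metric names here are illustrative, not the PR's actual choices):

    import wandb

    run = wandb.init(project='rtdl-demo', config={'lr': 1e-3})
    for epoch in range(3):
        train_loss = 1.0 / (epoch + 1)  # placeholder metric
        run.log({'train/loss': train_loss}, step=epoch)

    # Upload a checkpoint as a W&B Artifact.
    artifact = wandb.Artifact('model-checkpoint', type='model')
    artifact.add_file('checkpoint.pt')  # assumes this file exists
    run.log_artifact(artifact)
    run.finish()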

    opened by SauravMaheshkar 3
  • Bugs in piecewise-linear encoding

    1. Here, indices = as_tensor(values) must be changed to:
       indices = as_tensor(indices)
    2. Here, np.array(d_encoding) must be changed to:
       torch.tensor(d_encoding).to(indices)
    3. Here, the argument dtype=X.dtype is missing for np.array.
    4. Here, .to(X) is missing.
    5. Here, it must be:
       is_last_bin = bin_indices + 1 == as_tensor(list(map(len, bin_edges)))
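    For context, here is a minimal sketch of the piecewise-linear encoding idea itself (a simplified standalone function, not the library code): bins entirely below the value map to 1, the bin containing the value maps to its fractional position inside the bin, and higher bins map to 0.

    import torch

    def piecewise_linear_encode(x, edges):
        # x: (n,) values; edges: (n_bins + 1,) sorted bin edges.
        ratios = (x[:, None] - edges[None, :-1]) / (edges[None, 1:] - edges[None, :-1])
        return ratios.clamp(0.0, 1.0)

    x = torch.tensor([0.25, 0.9])
    edges = torch.tensor([0.0, 0.5, 1.0])
    print(piecewise_linear_encode(x, edges))
    # tensor([[0.5000, 0.0000],
    #         [1.0000, 0.8000]])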
    
    opened by Yura52 2
  • LGBMRegressor on California Housing dataset is 0.68 >> 0.46

    I use the sample code to prepare the dataset:

    import lightgbm as lgb
    import sklearn.datasets
    import sklearn.model_selection
    import sklearn.preprocessing
    import torch

    device = 'cpu'
    dataset = sklearn.datasets.fetch_california_housing()
    task_type = 'regression'
    
    X_all = dataset['data'].astype('float32')
    y_all = dataset['target'].astype('float32')
    n_classes = None
    
    X = {}
    y = {}
    X['train'], X['test'], y['train'], y['test'] = sklearn.model_selection.train_test_split(
        X_all, y_all, train_size=0.8
    )
    X['train'], X['val'], y['train'], y['val'] = sklearn.model_selection.train_test_split(
        X['train'], y['train'], train_size=0.8
    )
    
    # not the best way to preprocess features, but enough for the demonstration
    preprocess = sklearn.preprocessing.StandardScaler().fit(X['train'])
    X = {
        k: torch.tensor(preprocess.transform(v), device=device)  # transform, not fit_transform
        for k, v in X.items()
    }
    y = {k: torch.tensor(v, device=device) for k, v in y.items()}
    
    # !!! CRUCIAL for neural networks when solving regression problems !!!
    y_mean = y['train'].mean().item()
    y_std = y['train'].std().item()
    y = {k: (v - y_mean) / y_std for k, v in y.items()}
    
    y = {k: v.float() for k, v in y.items()}
    

    And I train an LGBMRegressor with the default hyperparameters:

    model = lgb.LGBMRegressor()
    model.fit(X['train'], y['train'])
    

    But when I evaluate on the test fold, I find that the RMSE is 0.68:

    >>> test_pred = model.predict(X['test'])
    >>> test_pred = torch.from_numpy(test_pred)
    >>> rmse = torch.nn.functional.mse_loss(
    ...     test_pred.view(-1), y['test'].view(-1)) ** 0.5 * y_std
    >>> print(f'Test RMSE: {rmse:.2f}.')
    Test RMSE: 0.68.
    

    Even using the model from rtdl gives me 0.56 RMSE:

    (epoch) 57 (batch) 0 (loss) 0.1885
    (epoch) 57 (batch) 10 (loss) 0.1315
    (epoch) 57 (batch) 20 (loss) 0.1735
    (epoch) 57 (batch) 30 (loss) 0.1197
    (epoch) 57 (batch) 40 (loss) 0.1952
    (epoch) 57 (batch) 50 (loss) 0.1167
    Epoch 057 | Validation score: 0.7334 | Test score: 0.5612 <<< BEST VALIDATION EPOCH
    

    Is there anything I missed? How can I reproduce the performance reported in your paper? Thanks!

    opened by fingertap 2
  • Regression results about the RTDL models.

    Hi, you did a great implementation of the tab-transformer. However, when I use your example notebook to do a simple regression on sin(x), neither the baseline model nor the FTTransformer gives good results. I have no idea why and would like to know the reason.

    Here is the link

    opened by linkedlist771 1
  • typos in CatEmbeddings

    1. link. The variable cardinalities_and_dimensions does not exist
    2. link. The condition looks broken. Solution: simplify it and remove the word "spec" from the error message.
    opened by Yura52 0
  • Running error, prenormalization is not a class variable

    The code crashes at this line, because prenormalization is not an attribute of self:

    https://github.com/Yura52/rtdl/blob/b130dd2e596c17109bef825bc9c8608e1ae617cc/rtdl/nn/_backbones.py#L627

    opened by zahar-chikishev 0
  • Typos?

    Hello,

    I am trying to use PiecewiseLinearEncoder(). I think I found a few typos. Please check my work.

    I first ran into an issue in piecewise_linear_encoding, where line 618 raised "RuntimeError: The size of tensor a (3688) must match the size of tensor b (32) at non-singleton dimension 1".

    I dug into the code and found that when PiecewiseLinearEncoder calls piecewise_linear_encoding, the positional arguments indices and ratios are switched relative to what the latter expects.

    Additionally, when inspecting piecewise_linear_encoding, it looks like it uses bin_edges = as_tensor(bin_ratios), whereas as_tensor(bin_edges) would make more sense.

    Can you please check this out? Much appreciated.

    opened by jdefriel 1
  • How to resume training?

    I ran your model in Colab for a few hours before Google terminated the session. I used pickle dump/load to store the trained model. Loading works for making predictions, but it doesn't seem possible to resume training.

    if progress.success:
        print(' <<< BEST VALIDATION EPOCH', end='')
        with open(mydrive + jobname, 'wb') as filehandler:
            dump((model, y_std, y_mean), filehandler)
            # we could see the result was improving

    with open(mydrive + jobname, 'rb') as filehandler:
        model, y_std, y_mean = load(filehandler)
    pred = model(batch, None)  # this seems to work
    for epoch in range(1, n_epochs + 1):
        for iteration, batch_idx in enumerate(train_loader):
            model.train()
            optimizer.zero_grad()
            x_batch = X['train'][batch_idx]
            y_batch = y['train'][batch_idx]
            loss = loss_fn(apply_model(x_batch).squeeze(1), y_batch)
            loss.backward()
            optimizer.step()
            if iteration % report_frequency == 0:
                print(f'(epoch) {epoch} (batch) {iteration} (loss) {loss.item():.4f}')
            # no improvement anymore, even though the model was dumped immediately after being created

    What is the right way to store the model so that I can resume training?
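    A common pattern for this in plain PyTorch, reusing the names from the snippet above (the path variable is assumed): save the model and optimizer state dicts together instead of pickling the model object, then restore both before continuing the loop.

    import torch

    # Save at the best validation epoch:
    torch.save(
        {
            'model': model.state_dict(),
            'optimizer': optimizer.state_dict(),
            'epoch': epoch,
            'y_mean': y_mean,
            'y_std': y_std,
        },
        path,
    )

    # Resume later: restore both states, then continue the epoch loop.
    checkpoint = torch.load(path)
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    start_epoch = checkpoint['epoch'] + 1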

    opened by jerronl 0
  • A scikit-learn interface for RTDL package.

    Hello! I have written a scikit-learn interface for the rtdl package (https://github.com/hengzhe-zhang/scikit-rtdl). I rely on skorch to avoid coding errors, and I set the default parameters based on those presented in your paper. I hope you like it!
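    For readers who want the flavor of it, a rough sketch of the skorch-based approach (the hyperparameters here are illustrative; scikit-rtdl's actual defaults may differ):

    import numpy as np
    import torch
    import rtdl
    from skorch import NeuralNetRegressor

    # Wrap an rtdl model so it exposes sklearn-style fit/predict.
    net = NeuralNetRegressor(
        rtdl.MLP.make_baseline(d_in=8, d_layers=[128, 128], dropout=0.1, d_out=1),
        max_epochs=20,
        lr=1e-3,
        optimizer=torch.optim.AdamW,
    )

    X = np.random.rand(256, 8).astype('float32')
    y = np.random.rand(256, 1).astype('float32')
    net.fit(X, y)
    pred = net.predict(X)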

    opened by hengzhe-zhang 1
Releases(v0.0.13)
  • v0.0.13(Mar 16, 2022)

  • v0.0.12(Mar 10, 2022)

  • v0.0.10(Feb 28, 2022)

  • v0.0.9(Nov 7, 2021)

    This is a hot-fix release after the big 0.0.8 release (see the release notes for 0.0.8):

    • revert the breaking change in NumericalFeatureTokenizer accidentally introduced in 0.0.8
    • minor documentation refinements
  • v0.0.8(Nov 6, 2021)

    This release focuses on improving the documentation.

    Documentation

    • The following models and classes are now documented:
      • MLP
      • ResNet
      • FTTransformer
      • MultiheadAttention
      • NumericalFeatureTokenizer
      • CategoricalFeatureTokenizer
      • FeatureTokenizer
      • CLSToken
    • Usability has been greatly improved:
      • signatures are now highlighted
      • added the "copy" button to code blocks
      • permalink buttons (signature anchors) are now visible

    Bug fixes

    • MultiheadAttention: fix the crash when bias=False

    Dependencies

    • numpy >= 1.18
    • torch >= 1.7

    Project

    • added spell checking for documentation
    • sphinx was updated to 4.2.0
    • flit was updated to 3.4.0
  • v0.0.7(Oct 10, 2021)

  • v0.0.6(Aug 26, 2021)

    New features

    • CLSToken (old name: "AppendCLSToken"): added the expand method for easy construction of batches of [CLS] tokens

    Bug fixes

    • FTTransformer: the make_baseline method now properly constructs an instance

    API changes

    • FTTransformer: the ffn_d_intermidiate argument was renamed to a more conventional ffn_d_hidden
    • FTTransformer: the normalization argument was split into three arguments: attention_normalization, ffn_normalization, head_normalization
    • ResNet: the d_intermidiate argument was renamed to a more conventional d_hidden
    • AppendCLSToken: renamed to CLSToken

    Documentation improvements

    • CLSToken
    • MLP.make_baseline

    Project

    • add tests with CUDA
    • remove the .vscode directory from the repository
  • v0.0.5(Jul 20, 2021)

    API Changes:

    • MLP.make_baseline is now more user-friendly and accepts a single d_layers argument instead of four (d_first, d_intermidiate, d_last, n_blocks)
  • v0.0.4(Jul 11, 2021)

  • v0.0.3(Jul 2, 2021)

    API Changes

    • ResNet & ResNet.Block: the d parameter was renamed to d_main

    Fixes

    • minor fix in the comments in examples/rtdl.ipynb

    Project

    • add tests validating that the models in rtdl are literally the same as in the implementation of the paper