Overview

This repository contains the software implementation of most algorithms used or developed in my research. Whenever possible, the LaTeX and Python code used to generate each paper, its experimental results, and its visualizations is available in the corresponding paper's directory.

Additionally, contributions at the algorithm level are available in the package mlresearch.

Installation

Python 3.8 or 3.9 is required to run this project. Due to the resource limits of the free tiers in CI/CD platforms, we currently cannot ensure compatibility with earlier Python versions.

ML-Research requires:

  • numpy (>= 1.14.6)
  • pandas (>= 1.3.5)
  • sklearn (>= 1.0.0)
  • imblearn (>= 0.8.0)
  • rich (>= 10.16.1)
  • matplotlib (>= 2.2.3)
  • seaborn (>= 0.9.0)
  • rlearn (>= 0.2.1)
  • pytorch (>= 1.10.1)
  • torchvision (>= 0.11.2)
  • pytorch_lightning (>= 1.5.8)

User Installation

If you already have a working installation of numpy and scipy, the easiest way to install ml-research is using pip:

pip install -U ml-research

The documentation includes more detailed installation instructions.
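A quick way to confirm the installation is to import the package and print its version. This is a minimal sanity check, assuming the distribution installs under the mlresearch import name and exposes a __version__ attribute, as scikit-learn-style packages usually do:

# Sanity check: import the installed package and print its version.
python -c "import mlresearch; print(mlresearch.__version__)"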

Installing from source

The following commands should allow you to set up the development version of the project with minimal effort:

# Clone the project.
git clone https://github.com/joaopfonseca/ml-research.git
cd ml-research

# Create and activate an environment 
make environment 
conda activate mlresearch # Adapt this line accordingly if you're not running conda

# Install project requirements and the research package
pip install .[tests,docs]
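
With the tests and docs extras installed, the test suite and documentation can be built locally. The commands below are a sketch that assumes the project uses pytest and keeps a Sphinx setup under docs/; adapt them if the repository is laid out differently:

# Run the test suite from the repository root.
pytest

# Build the HTML documentation (assumes a Sphinx Makefile under docs/).
cd docs
make html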

Citing ML-Research

If you use ML-Research in a scientific publication, we would appreciate citations to the following paper:

@article{Fonseca2021,
  doi = {10.3390/RS13132619},
  url = {https://doi.org/10.3390/RS13132619},
  keywords = {SMOTE,active learning,artificial data generation,land use/land cover classification,oversampling},
  year = {2021},
  month = {jul},
  publisher = {Multidisciplinary Digital Publishing Institute},
  volume = {13},
  pages = {2619},
  author = {Fonseca, Joao and Douzas, Georgios and Bacao, Fernando},
  title = {{Increasing the Effectiveness of Active Learning: Introducing Artificial Data Generation in Active Learning for Land Use/Land Cover Classification}},
  journal = {Remote Sensing}
}
Comments
  • Consider modifying default BYOL hyper-parameters for smaller batch sizes

    Applicable to both BYOL and SimSiam: some hyper-parameters might need to be added, since some are currently hard-coded to their default values.

    Taken from the BYOL paper: [screenshot omitted]

    opened by joaopfonseca 1
  • Remove computer vision models, augmentations and datasets

    They will be removed in the next release since:

    1. I'm not going to use these methods anytime soon and I don't have the time to test them properly
    2. They are out of the scope of the library, which is meant for machine learning techniques focused on tabular data. In the future it may be worth considering the development of a separate library for computer vision.
    3. Setting Pytorch as a dependency for a small part of the library isn't particularly efficient.
    wontfix 
    opened by joaopfonseca 0
  • Host all raw data from datasets submodule elsewhere

    With Python 3.11, downloading some datasets returns an SSL error when unsafe legacy renegotiation is disabled. It happens when the server doesn't support RFC 5746 secure renegotiation and the client is using OpenSSL 3, which enforces that standard by default (source).

    Hosting the raw data elsewhere should fix this issue; an interim client-side workaround is sketched after this list.

    bug 
    opened by joaopfonseca 0
  • Review and add examples to documentation

    The readthedocs page is getting a bit outdated:

    • [x] Add support for Python 3.10
    • [ ] Add support for Python 3.11
    • [ ] Check for missing, deleted or renamed functions and objects
    • [ ] Review content as a whole
    • [ ] Add examples to documentation
    • [ ] Add dependency groups to documentation
    • [ ] README contains dependencies that will no longer be used
    documentation 
    opened by joaopfonseca 0
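
Regarding the SSL error described in the "Host all raw data from datasets submodule elsewhere" comment above, a possible interim client-side workaround is to point OpenSSL 3 at a configuration that re-enables legacy renegotiation while the affected download runs. This is only a sketch: the file path is arbitrary and download_script.py is a placeholder for whatever command triggers the dataset download.

# Write an OpenSSL 3 config that allows unsafe legacy renegotiation (use with care).
cat > /tmp/openssl_legacy.cnf <<'EOF'
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
EOF

# Activate the relaxed configuration for this process only.
OPENSSL_CONF=/tmp/openssl_legacy.cnf python download_script.py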
Releases(v0.4a2)
  • v0.4a2(Jan 2, 2023)

    NOTE: This pre-release contains implementations of algorithms for self-supervised learning (BYOL and SimSiam). This release also contains objects to download image data from Pytorch and general definitions for image augmentations. They will be removed in the next release since:

    1. I'm not going to use these methods anytime soon and I don't have the time to test them properly
    2. They are out of the scope of the library, which is meant for machine learning techniques focused on tabular data. In the future it may be worth considering the development of a separate library for computer vision.
    3. Setting Pytorch as a dependency for a small part of the library isn't particularly efficient.

    Full Changelog: https://github.com/joaopfonseca/ml-research/compare/v0.4a1...v0.4a2

  • v0.4a1(Apr 14, 2022)

  • v0.3.4(Feb 14, 2022)

  • v0.3.3(Feb 14, 2022)

  • v0.3.2(Feb 14, 2022)

  • v0.3.1(Feb 14, 2022)

  • v0.3.0(Feb 14, 2022)

  • v0.2.1(Feb 14, 2022)

  • v0.2.0(Feb 14, 2022)

  • 0.1.0(Feb 14, 2022)

Owner
João Fonseca
PhD student | Researcher | Invited lecturer @ NOVA Information Management School