AutoGluon: AutoML for Text, Image, and Tabular Data

Overview

AutoGluon automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on text, image, and tabular data.

Example

# First install package from terminal:
# python3 -m pip install -U pip
# python3 -m pip install -U setuptools wheel
# python3 -m pip install autogluon  # autogluon==0.3.1

from autogluon.tabular import TabularDataset, TabularPredictor
train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')
test_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')
predictor = TabularPredictor(label='class').fit(train_data, time_limit=120)  # Fit models for 120s
leaderboard = predictor.leaderboard(test_data)

AutoGluon Task   | Quickstart  | API
---------------- | ----------- | ---
TabularPredictor | Quick Start | API
TextPredictor    | Quick Start | API
ImagePredictor   | Quick Start | API
ObjectDetector   | Quick Start | API
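
The other predictors follow the same few-line pattern. A minimal, hedged sketch for text data (it assumes the load_pd loader from autogluon.core and the SST sentiment data referenced later in this document; see the quick-start tutorials above for the authoritative versions):

from autogluon.text import TextPredictor
from autogluon.core.utils.loaders import load_pd

train_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet')
dev_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet')
# Subsample for a quick demo; use the full data for real training
predictor = TextPredictor(label='label').fit(train_data.sample(2000, random_state=0), time_limit=120)
print(predictor.evaluate(dev_data))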

Resources

See the AutoGluon Website for documentation and instructions on:

Scientific Publications

Articles

Hands-on Tutorials

Train/Deploy AutoGluon in the Cloud

Citing AutoGluon

If you use AutoGluon in a scientific publication, please cite the following paper:

Erickson, Nick, et al. "AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data." arXiv preprint arXiv:2003.06505 (2020).

BibTeX entry:

@article{agtabular,
  title={AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data},
  author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},
  journal={arXiv preprint arXiv:2003.06505},
  year={2020}
}

If you are using AutoGluon Tabular's model distillation functionality, please cite the following paper:

Fakoor, Rasool, et al. "Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation." Advances in Neural Information Processing Systems 33 (2020).

BibTeX entry:

@article{agtabulardistill,
  title={Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation},
  author={Fakoor, Rasool and Mueller, Jonas W and Erickson, Nick and Chaudhari, Pratik and Smola, Alexander J},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

If you use AutoGluon's multimodal text+tabular functionality in a scientific publication, please cite the following paper:

Shi, Xingjian, et al. "Multimodal AutoML on Structured Tables with Text Fields." 8th ICML Workshop on Automated Machine Learning (AutoML). 2021.

BibTeX entry:

@inproceedings{agmultimodaltext,
  title={Multimodal AutoML on Structured Tables with Text Fields},
  author={Shi, Xingjian and Mueller, Jonas and Erickson, Nick and Li, Mu and Smola, Alex},
  booktitle={8th ICML Workshop on Automated Machine Learning (AutoML)},
  year={2021}
}

AutoGluon for Hyperparameter Optimization

AutoGluon also provides state-of-the-art tools for hyperparameter optimization, such as ASHA, Hyperband, Bayesian Optimization, and BOHB.
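
Within the tabular module, these HPO backends are reached through TabularPredictor.fit. A minimal, hedged sketch (the LightGBM search space and trial count below are illustrative assumptions, not tuned defaults):

import autogluon.core as ag
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')

# Tune LightGBM over a small search space; 'auto' picks the default searcher/scheduler.
hyperparameters = {'GBM': {'learning_rate': ag.space.Real(0.01, 0.1, log=True),
                           'num_leaves': ag.space.Int(16, 128)}}
predictor = TabularPredictor(label='class').fit(
    train_data,
    hyperparameters=hyperparameters,
    hyperparameter_tune_kwargs={'num_trials': 5, 'scheduler': 'local', 'searcher': 'auto'},
    time_limit=300,
)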

To get started, check out our paper "Model-based Asynchronous Hyperparameter and Neural Architecture Search", arXiv preprint arXiv:2003.10865 (2020).

@article{abohb,
  title={Model-based Asynchronous Hyperparameter and Neural Architecture Search},
  author={Klein, Aaron and Tiao, Louis and Lienart, Thibaut and Archambeau, Cedric and Seeger, Matthias},
  journal={arXiv preprint arXiv:2003.10865},
  year={2020}
}

License

This library is licensed under the Apache 2.0 License.

Contributing to AutoGluon

We are actively accepting code contributions to the AutoGluon project. If you are interested in contributing to AutoGluon, please read the Contributing Guide to get started.

Comments
  • [WIP] Add forecasting task

    [WIP] Add forecasting task

    Issue #, if available:

    Description of changes: Add forecasting task and enable hyperparameter tuning for gluonts models.

    This PR is used for further discussion about adding the forecasting task to AutoGluon, which should be parallel to existing tasks such as tabular prediction and image classification.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    Output in simple_forecasting_example.py:

        model     score  fit_order
    0     SFF -0.617567          2
    1   MQCNN -0.647109          1
    2  DeepAR -0.776917          3
    
    0.3063789898931557
    

    Output in advanced_forecasting_example.py:

                  model     score  fit_order  test_score
    0       SFF/trial_4 -0.771542          5   -0.455381
    1       SFF/trial_5 -0.800340          6   -0.469165
    2     MQCNN/trial_0 -0.859082          1   -0.819518
    3     MQCNN/trial_1 -0.859113          2   -0.819403
    4    DeepAR/trial_2 -0.882853          3   -0.894841
    5    DeepAR/trial_3 -0.899381          4   -0.917463
    6  SFF/trial_4_FULL       NaN          7   -0.559157
    
    0.4553322822285291
    
                        0.5
    2020-04-22   509.987183
    2020-04-23   617.542175
    2020-04-24  1040.601807
    2020-04-25  1154.234009
    2020-04-26  1073.659058
    2020-04-27  1090.976196
    2020-04-28   696.362183
    2020-04-29  1438.374390
    2020-04-30  1149.915405
    2020-05-01   468.797913
    2020-05-02   889.827393
    2020-05-03   938.827820
    2020-05-04   557.264038
    2020-05-05  1205.893555
    2020-05-06   379.958221
    2020-05-07   616.763733
    2020-05-08   877.277222
    2020-05-09  1134.502930
    2020-05-10   620.372009
    
    opened by yx1215 66
  • Add forecasting predictor

    Add forecasting predictor

    Issue #, if available:

    Description of changes: Add forecasting predictor.

    This PR is used for further discussion about adding the forecasting task to AutoGluon, which should be parallel to existing tasks such as tabular prediction and image classification; it replaces #684.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    Output in simple_forecasting_example.py:

        model     score  fit_order
    0     SFF -0.617567          2
    1   MQCNN -0.647109          1
    2  DeepAR -0.776917          3
    
    0.3063789898931557
    

    Output in advanced_forecasting_example.py:

                  model     score  fit_order  test_score
    0       SFF/trial_4 -0.771542          5   -0.455381
    1       SFF/trial_5 -0.800340          6   -0.469165
    2     MQCNN/trial_0 -0.859082          1   -0.819518
    3     MQCNN/trial_1 -0.859113          2   -0.819403
    4    DeepAR/trial_2 -0.882853          3   -0.894841
    5    DeepAR/trial_3 -0.899381          4   -0.917463
    6  SFF/trial_4_FULL       NaN          7   -0.559157
    
    0.4553322822285291
    
                        0.5
    2020-04-22   509.987183
    2020-04-23   617.542175
    2020-04-24  1040.601807
    2020-04-25  1154.234009
    2020-04-26  1073.659058
    2020-04-27  1090.976196
    2020-04-28   696.362183
    2020-04-29  1438.374390
    2020-04-30  1149.915405
    2020-05-01   468.797913
    2020-05-02   889.827393
    2020-05-03   938.827820
    2020-05-04   557.264038
    2020-05-05  1205.893555
    2020-05-06   379.958221
    2020-05-07   616.763733
    2020-05-08   877.277222
    2020-05-09  1134.502930
    2020-05-10   620.372009
    

    Things to improve in the future:

    • Ensembling
    • Dynamic features
    • More detailed preprocessing for dynamic/static features (currently the code only does automatic inference of real/categorical features and fillna; other steps such as standardization might also be useful)
    • Making plots for predictions
    • Subsampling the dataset (selecting a range from the given training data)
    • Improve ways to do validation
    • New categorical static features at prediction time might lead to failures.
    opened by yx1215 53
  • Temperature scaling

    Temperature scaling

    Issue #, if available:

    Description of changes: Applies temperature scaling to the model if calibrate is true and the problem type is multiclass
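
    For background, here is an illustrative sketch of temperature scaling itself (the general technique, not the code in this PR): the model's logits are divided by a temperature T, fit on validation data, before the softmax, which calibrates the multiclass probabilities without changing the predicted class.

    import numpy as np

    def softmax(logits):
        # row-wise softmax with a stability shift
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def temperature_scale(logits, temperature):
        # temperature > 1 softens over-confident probabilities; the argmax is unchanged
        return softmax(logits / temperature)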

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by DolanTheMFWizard 46
  • Gaussian process based Bayesian optimization for FIFO and Hyperband schedulers

    Gaussian process based Bayesian optimization for FIFO and Hyperband schedulers

    Issue #, if available:

    Description of changes:

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by mseeger 45
  • Add TabTransformer to model zoo, add unsupervised pretraining functionality

    Add TabTransformer to model zoo, add unsupervised pretraining functionality

    This is the cleaned up, tested, and benchmarked version of Josh's TabTransformer PR. https://github.com/awslabs/autogluon/pull/626

    There is a short TODO list at the bottom of TabTransformer_model.py. Some of these items should still be completable during my internship. I also expect to add some documentation and fix any bad style during this PR review.

    This is a subset of the benchmarks run on TabTransformer. I am willing to share the full set of benchmarks with whoever would like to see the raw results.

    Selected Benchmarks for Supervised TabTransformer: All OpenML benchmarks were done using 5-fold cross-validation. "TT" is the TabTransformer model alone. "AG" is AutoGluon with 'default' settings. "AG+TT" is AutoGluon with 'default' settings plus TabTransformer. "AG-best" is AutoGluon with 'best_quality' settings.

    A few OpenML datasets where using TT may help (although the difference might not be statistically significant):

    dataset | ensemble | num_train | num_test | acc_avg | acc_std | train_time_avg | test_time_avg
    -- | -- | -- | -- | -- | -- | -- | --
    kr-vs-kp | TT | 2557 | 639 | 0.81917 | 0.09603 | 89.29128 | 0.90347
    kr-vs-kp | AG | 2557 | 639 | 0.92052 | 0.05081 | 12.78775 | 0.12446
    kr-vs-kp | AG+TT | 2557 | 639 | 0.93241 | 0.05603 | 108.97985 | 0.10128
    vehicle | TT | 677 | 169 | 0.65371 | 0.04611 | 102.3415 | 1.02259
    vehicle | AG | 677 | 169 | 0.7884 | 0.01943 | 14.75907 | 0.18083
    vehicle | AG+TT | 677 | 169 | 0.80026 | 0.05325 | 118.55637 | 0.42074

    Lots of other OpenML datasets where using TT doesn't make much difference:

    dataset | ensemble | num_train | num_test | acc_avg | acc_std | train_time_avg | test_time_avg
    -- | -- | -- | -- | -- | -- | -- | --
    jasmine | TT | 2388 | 596 | 0.79792 | 0.0167 | 109.12893 | 1.27797
    jasmine | AG | 2388 | 596 | 0.80696 | 0.01588 | 17.16468 | 0.86111
    jasmine | AG+TT | 2388 | 596 | 0.80495 | 0.01504 | 136.45948 | 1.10267
    car | TT | 1383 | 345 | 0.81822 | 0.0768 | 82.09336 | 0.80218
    car | AG | 1383 | 345 | 0.87552 | 0.04811 | 22.33127 | 0.08059
    car | AG+TT | 1383 | 345 | 0.87899 | 0.05277 | 105.44221 | 0.24134
    sylvine | TT | 4100 | 1024 | 0.90984 | 0.00699 | 100.06864 | 0.91529
    sylvine | AG | 4100 | 1024 | 0.95024 | 0.00485 | 15.50505 | 0.13682
    sylvine | AG+TT | 4100 | 1024 | 0.95082 | 0.00382 | 126.8046 | 0.21298
    phoneme | TT | 4324 | 1080 | 0.85159 | 0.00511 | 116.41546 | 1.11431
    phoneme | AG | 4324 | 1080 | 0.90766 | 0.00461 | 20.87109 | 0.37877
    phoneme | AG+TT | 4324 | 1080 | 0.90673 | 0.00388 | 141.69039 | 0.72139

    Benchmarks for Semi-Supervised TabTransformer:

    Note: There are a few internal data sets where we see good performance for both supervised and semi-supervised. Ping me internally if you'd like to see these results.

    OpenML semi-supervised

    Each dataset was run with either 100 or 1000 rows for training.

    dataset | ensemble | num_unlab | num_train | num_test | acc_avg | acc_std | train_time_avg | test_time_avg
    -- | -- | -- | -- | -- | -- | -- | -- | --
    volkert | TT | 34986 | 100 | 11662 | 0.43552 | 0.02368 | 3648.65349 | 2.36333
    volkert | AG | 34986 | 100 | 11662 | 0.44828 | 0.02492 | 10.22401 | 0.6137
    volkert | AG+TT | 34986 | 100 | 11662 | 0.44385 | 0.02283 | 3692.61063 | 1.51795
    volkert | TT | 34986 | 1000 | 11662 | 0.53408 | 0.00681 | 3769.47776 | 2.35942
    volkert | AG | 34986 | 1000 | 11662 | 0.54749 | 0.00773 | 47.99619 | 1.0925
    volkert | AG+TT | 34986 | 1000 | 11662 | 0.55111 | 0.00601 | 3870.73334 | 3.27509
    jannis | TT | 50240 | 100 | 16746 | 0.54975 | 0.01006 | 2068.45658 | 1.35153
    jannis | AG | 50240 | 100 | 16746 | 0.52187 | 0.02818 | 7.39638 | 0.09968
    jannis | AG+TT | 50240 | 100 | 16746 | 0.52557 | 0.02974 | 2101.79709 | 0.55362

    Limitations found in TabTransformer:

    • Memory requirements scale very poorly as the number of columns grows. This makes training on datasets with hundreds of columns relatively infeasible. This is also why 'ignore_text' had to be turned on for the hs dataset.
    • Runs very slowly on CPU. To properly use TabTransformer, I highly recommend using GPUs.

    Example usage of the TabTransformer model:

    Supervised case:

    task.fit(train_data=X_train, problem_type=problem_type, hyperparameters={'TRANSF': {}})

    Semi-supervised case, e.g. when you have a corpus of unlabeled data in addition to your labeled training data:

    task.fit(train_data=X_train, unlabeled_data=unlabeled_data, problem_type=problem_type, hyperparameters={'TRANSF': {}})

    A critical point for using unlabeled data is that the schema must match between the unlabeled dataset and the labeled dataset. In other words, the two datasets must have exactly the same columns. Handling schema mismatches can be addressed in a future PR.

    In addition, TabTransformer can be used just like any other model in the model zoo:

    task.fit(train_data=X_train, problem_type=problem_type, hyperparameters={'CAT': {}, 'TRANSF': {}})

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by Chudbrochil 41
  • Refactoring of schedulers and searchers

    Refactoring of schedulers and searchers

    • Cleanup of BaseSearcher API, in particular update
    • Improve checkpoint/resume, support searcher objects which cannot be pickled
    • Improved documentation of FIFOScheduler, HyperbandScheduler

    Issue #, if available:

    Description of changes:

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by mseeger 38
  • Tabular: Feature Generator Refactor (Part 2)

    Tabular: Feature Generator Refactor (Part 2)

    Issue #, if available: #579

    Description of changes:

    • Major overhaul of all components of feature generator to increase extensibility.
    • Maintains identical end-to-end functionality of previous code.
    • Add custom feature generator tutorial / example

    To better understand how the feature generators work, please pull this PR's code and run the new examples and tests:

    • examples/tabular/example_custom_feature_generator.py
    • tests/unittests/utils/tabular/generators/*
    • tests/unittests/utils/tabular/test_feature_metadata.py

    TODO:

    • Add more documentation in follow-up PR's.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by Innixma 34
  •  RuntimeError: The best config {'search_space▁optimization.lr': 5.5e-05} is not found in config history = OrderedDict().

    RuntimeError: The best config {'search_space▁optimization.lr': 5.5e-05} is not found in config history = OrderedDict().

    Using SageMaker, I tried conda_python3 (and downloaded mxnet as suggested in https://autogluon.mxnet.io/install.html#installation-faq) and conda_mxnet_36; the error is reproduced in both kernels.

    Just following the text prediction quick start example, but running into the following:

    !pip install --upgrade pip
    !pip install --upgrade 'scikit-learn<0.23,>=0.22.0'
    !pip install --upgrade mxnet autogluon
    
    from autogluon import TextPrediction as task
    from autogluon.utils.tabular.utils.loaders.load_pd import load
    import pandas as pd
    import numpy as np
    train_data = load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet')
    dev_data = load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet')
    rand_idx = np.random.permutation(np.arange(len(train_data)))[:2000]
    train_data = train_data.iloc[rand_idx]
    train_data.head(10)
    
    predictor = task.fit(train_data, label='label',
                         time_limits=60,
                         seed=123,
                         output_directory='./ag_sst')
    

    (No GPU configuration, as my instance does not have a GPU.) Running into: RuntimeError: The best config {'search_space▁optimization.lr': 5.5e-05} is not found in config history = OrderedDict(). This should never happen!

    Edit: tried on both 0.0.13 and 0.0.14, same error. I did check the setup.py and verified that all packages were installed. Not sure if there's a version mismatch?

    bug API & Doc module: text 
    opened by SamanthaSHan 30
  • Pseudo label

    Pseudo label

    Issue #, if available:

    Description of changes: Incorporated pseudo-labeling into AutoGluon. The changes entail the following:

    • Adding extra X_pseudo and y_pseudo args to aux_kwargs and core_kwargs in the fit_extra() function
    • Adding pseudo data validation
    • Incorporating pseudo data into training at both the bagged ensemble model level and the single model level

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by DolanTheMFWizard 28
  • Integrate FAISS index for KNN classifier

    Integrate FAISS index for KNN classifier

    Description of changes: This code adds a FAISS-backed KNN model.

    The new KNeighborsClassifier is a drop-in replacement for the sklearn model, but is built on top of a FAISS index. There is one additional optional parameter for the model: index_factory_string, which describes what type of FAISS index to use (this just gets passed to FAISS's index construction method).

    There is one new dependency on faiss-cpu.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by brc7 28
  • [BUG] Segmentation fault while importing TextPredictor

    [BUG] Segmentation fault while importing TextPredictor

    • [X] I have checked that this bug exists on the latest stable version of AutoGluon
    • [ ] and/or I have checked that this bug exists on the latest mainline of AutoGluon via source installation

    Describe the bug

    >>> from autogluon.text import TextPredictor
    /home/cjj/anaconda3/envs/pytorch/lib/python3.9/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory
      warn(f"Failed to load image Python extension: {e}")
    Segmentation fault (core dumped)
    

    Expected behavior

    import TextPredictor

    To Reproduce

    from autogluon.text import TextPredictor
    


    Installed Versions

    /home/cjj/anaconda3/envs/pytorch/lib/python3.9/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory
      warn(f"Failed to load image Python extension: {e}")
    /home/cjj/anaconda3/envs/pytorch/lib/python3.9/site-packages/gluoncv/__init__.py:40: UserWarning: Both `mxnet==1.9.1` and `torch==1.10.2+cpu` are installed. You might encounter increased GPU memory footprint if both framework are used at the same time.
      warnings.warn(f'Both `mxnet=={mx.__version__}` and `torch=={torch.__version__}` are installed. '
    
    INSTALLED VERSIONS
    ------------------
    date                 : 2022-05-28
    time                 : 08:05:29.672541
    python               : 3.9.12.final.0
    OS                   : Linux
    OS-release           : 4.15.0-76-generic
    Version              : #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020
    machine              : x86_64
    processor            : x86_64
    num_cores            : 40
    cpu_ram_mb           : 257841
    cuda version         : 11.515.43.04
    num_gpus             : 10
    gpu_ram_mb           : [2886, 12195, 12195, 6636, 3766, 3766, 3766, 1598, 6930, 4332]
    avail_disk_size_mb   : 1487138
    
    autogluon.common     : 0.4.2b20220528
    autogluon.core       : 0.4.2b20220528
    autogluon.features   : 0.4.2b20220528
    autogluon.tabular    : 0.4.1
    autogluon.text       : 0.4.2b20220528
    autogluon.vision     : 0.4.1
    autogluon_contrib_nlp: 0.0.1
    boto3                : 1.23.8
    catboost             : 1.0.6
    dask                 : 2021.11.2
    distributed          : 2021.11.2
    fairscale            : 0.4.6
    fastai               : 2.5.6
    gluoncv              : 0.11.0
    lightgbm             : 3.3.2
    matplotlib           : 3.5.2
    networkx             : 2.8.2
    nptyping             : 1.4.4
    numpy                : 1.22.4
    omegaconf            : 2.1.2
    pandas               : 1.3.5
    PIL                  : 9.0.1
    psutil               : 5.8.0
    pytorch_lightning    : 1.6.3
    ray                  : 1.10.0
    requests             : 2.27.1
    scipy                : 1.7.3
    sentencepiece        : None
    setuptools           : 59.5.0
    skimage              : 0.19.2
    sklearn              : 1.0.2
    smart_open           : 5.2.1
    timm                 : 0.5.4
    torch                : 1.10.2+cpu
    torchmetrics         : 0.7.3
    tqdm                 : 4.64.0
    transformers         : 4.16.2
    xgboost              : 1.4.2
    
    bug urgent module: text Needs Triage 
    opened by cjj490168650 27
  • [Tabular] Enable per-model specification of num_bag_sets

    [Tabular] Enable per-model specification of num_bag_sets

    It would be nice to be able to specify specific num_bag_sets values for repeated bagging on a per-model basis. For example, in multimodal datasets, it might be good to avoid repeatedly bagging the image and text models to save compute.
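
    For context, repeated bagging is currently controlled globally through TabularPredictor.fit; a minimal sketch of the existing arguments follows (values are illustrative, and train_data is assumed to be a labeled DataFrame). The requested enhancement would allow overriding num_bag_sets per model.

    from autogluon.tabular import TabularPredictor

    predictor = TabularPredictor(label='class').fit(
        train_data,
        num_bag_folds=5,  # k-fold bagging of each model
        num_bag_sets=2,   # number of repeats of the k-fold bagging, currently applied to all models
    )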

    enhancement module: tabular 
    opened by Innixma 0
  • Support HPO for matcher

    Support HPO for matcher

    Issue #, if available:

    Description of changes:

    1. Add HPO util function hyperparameter_tune to support both predictor and matcher.
    2. Add HPO unit tests for matcher.

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by zhiqiangdon 3
  • [BUG] Website: Dead links for Multimodal Predictor tutorials on `stable` tutorials page (not broken on `dev`)

    [BUG] Website: Dead links for Multimodal Predictor tutorials on `stable` tutorials page (not broken on `dev`)

    • [x] I have checked that this bug exists on the latest stable version of AutoGluon
    • [ ] and/or I have checked that this bug exists on the latest mainline of AutoGluon via source installation

    Describe the bug The Multimodal Predictor tutorial links on the tutorials page are dead on stable but working on dev.

    Additional context Reported by user in internal AG interest slack channel.

    bug API & Doc 
    opened by gidler 2
  • multimodal label-studio export reader & doc

    multimodal label-studio export reader & doc

    This tool helps users transform exported label annotation data from the data labeling platform Label-Studio (https://labelstud.io/) and generate a pandas DataFrame for AutoGluon multimodal input. In this way users can build a Label-Studio-to-AutoGluon workflow: label the data through Label-Studio and then feed it to AutoGluon, with a few lines of simple code to adjust the data. So far there are 3 task templates available, including image classification (image), named entity recognition (text), and a user-customized template. Other templates are WIP. Documentation for this feature is attached to this PR.

    Description of changes:

    • add from_labelstudio.py to autogluon/multimodal/src/autogluon/multimodal/utils
    • a documentation folder label-studio-export-reader to autogluon/examples/automm

    By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

    opened by MountPOTATO 1
  • How to use the multimodal predictor with my custom pretrained transformer model?

    How to use the multimodal predictor with my custom pretrained transformer model?

    Discussed in https://github.com/autogluon/autogluon/discussions/2611

    Originally posted by ShaohanTian December 28, 2022: How to use the multimodal predictor with my custom pretrained transformer model?

    opened by ShaohanTian 0
Releases(v0.6.1)
  • v0.6.1(Dec 13, 2022)

    Version 0.6.1

    v0.6.1 is a security fix / bug fix release.

    As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    See the full commit change-log here: https://github.com/autogluon/autogluon/compare/v0.6.0...v0.6.1

    This version supports Python versions 3.7 to 3.9. 0.6.x are the last releases that will support Python 3.7.

    Changes

    Documentation improvements

    • Fix object detection tutorial layout (#2450) - @bryanyzhu
    • Add multimodal cheatsheet (#2467) - @sxjscience
    • Refactoring detection inference quickstart and bug fix on fit->predict - @yongxinw, @zhiqiangdon, @Innixma, @BingzhaoZhu, @tonyhoo
    • Use Pothole Dataset in Tutorial for AutoMM Detection (#2468) - @FANGAreNotGnu
    • add time series cheat sheet, add time series to doc titles (#2478) - @canerturkmen
    • Update all repo references to autogluon/autogluon (#2463) - @gidler
    • fix typo in object detection tutorial CI (#2516) - @tonyhoo

    Bug Fixes / Security

    • bump evaluate to 0.3.0 (#2433) - @lvwerra
    • Add finetune/eval tests for AutoMM detection (#2441) - @FANGAreNotGnu
    • Adding Joint IA3_LoRA as efficient finetuning strategy (#2451) - @Raldir
    • Fix AutoMM warnings about object detection (#2458) - @zhiqiangdon
    • [Tabular] Speed up feature transform in tabular NN model (#2442) - @liangfu
    • fix matcher cpu inference bug (#2461) - @sxjscience
    • [timeseries] Silence GluonTS JSON warning (#2454) - @shchur
    • [timeseries] Fix pandas groupby bug + GluonTS index bug (#2420) - @shchur
    • Simplified infer speed throughput calculation (#2465) - @Innixma
    • [Tabular] make tabular nn dataset iterable (#2395) - @liangfu
    • Remove old images and dataset download scripts (#2471) - @Innixma
    • Support image bytearray in AutoMM (#2490) - @suzhoum
    • [NER] add an NER visualizer (#2500) - @cheungdaven
    • [Cloud] Lazy load TextPredcitor and ImagePredictor which will be deprecated (#2517) - @tonyhoo
    • Use detectron2 visualizer and update quickstart (#2502) - @yongxinw, @zhiqiangdon, @Innixma, @BingzhaoZhu, @tonyhoo
    • fix df preprocessor properties (#2512) - @zhiqiangdon
    • [timeseries] Fix info and fit_summary for TimeSeriesPredictor (#2510) - @shchur
    • [timeseries] Pass known_covariates to component models of the WeightedEnsemble - @shchur
    • [timeseries] Gracefully handle inconsistencies in static_features provided by user - @shchur
    • [security] update Pillow to >=9.3.0 (#2519) - @gradientsky
    • [CI] upgrade codeql v1 to v2 as v1 will be deprecated (#2528) - @tonyhoo
    • Upgrade scikit-learn-intelex version (#2466) - @Innixma
    • Save AutoGluonTabular model to the correct folder (#2530) - @shchur
    • support predicting with model fitted on v0.5.1 (#2531) - @liangfu
    • [timeseries] Implement input validation for TimeSeriesPredictor and improve debug messages - @shchur
    • [timeseries] Ensure that timestamps are sorted when creating a TimeSeriesDataFrame - @shchur
    • Add tests for preprocessing mutation (#2540) - @Innixma
    • Fix timezone datetime edgecase (#2538) - @Innixma, @gradientsky
    • Mmdet Fix Image Identifier (#2492) - @FANGAreNotGnu
    • [timeseries] Warn if provided data has a frequency that is not supported - @shchur
    • Train and inference with different image data types (#2535) - @suzhoum
    • Remove pycocotools (#2548) - @bryanyzhu
    • avoid copying identical dataframes (#2532) - @liangfu
    • Fix AutoMM Tokenizer (#2550) - @FANGAreNotGnu
    • [Tabular] Resource Allocation Fix (#2536) - @yinweisu
    • imodels version cap (#2557) - @yinweisu
    • Fix int32/int64 difference between windows and other platforms; fix mutation issue (#2558) - @gradientsky
    Source code(tar.gz)
    Source code(zip)
  • v0.5.3(Nov 19, 2022)

    Version 0.5.3

    v0.5.3 is a security hotfix release.

    This release is non-breaking when upgrading from v0.5.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.5.2...v0.5.3

    This version supports Python versions 3.7 to 3.9.

    Source code(tar.gz)
    Source code(zip)
  • v0.6.0(Nov 17, 2022)

    Version 0.6.0

    We're happy to announce the AutoGluon 0.6 release. 0.6 contains major enhancements to Tabular, Multimodal, and Time Series modules, along with many quality of life improvements and fixes.

    As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    This release contains 263 commits from 25 contributors!

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.5.2...v0.6.0

    Special thanks to @cheungdaven, @suzhoum, @BingzhaoZhu, @liangfu, @Harry-zzh, @gidler, @yongxinw, @martinschaef, @giswqs, @Jalagarto, @geoalgo, @lujiaying and @leloykun who were first time contributors to AutoGluon this release!

    Full Contributor List (ordered by # of commits):

    @shchur, @yinweisu, @zhiqiangdon, @Innixma, @FANGAreNotGnu, @canerturkmen, @sxjscience, @gradientsky, @cheungdaven, @bryanyzhu, @suzhoum, @BingzhaoZhu, @yongxinw, @tonyhoo, @liangfu, @Harry-zzh, @Raldir, @gidler, @martinschaef, @giswqs, @Jalagarto, @geoalgo, @lujiaying, @leloykun, @yiqings

    This version supports Python versions 3.7 to 3.9. This is the last release that will support Python 3.7.

    Changes

    AutoMM

    AutoGluon Multimodal (a.k.a. AutoMM) supports three new features: 1) object detection, 2) named entity recognition, and 3) multimodal matching. In addition, the HPO backend of AutoGluon Multimodal has been upgraded to ray 2.0. It also supports fine-tuning the billion-scale FLAN-T5-XL model on a single AWS g4.2x-large instance via improved parameter-efficient finetuning. Starting from 0.6, we recommend using autogluon.multimodal rather than autogluon.text or autogluon.vision, and deprecation warnings have been added.
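
    As a hedged illustration of the recommended entry point (column names and DataFrames are placeholders; see the tutorials for complete examples):

    from autogluon.multimodal import MultiModalPredictor

    # train_df / test_df: pandas DataFrames whose columns may mix text, image paths, and tabular features
    predictor = MultiModalPredictor(label='label')
    predictor.fit(train_df, time_limit=600)
    predictions = predictor.predict(test_df)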

    New features

    • Object Detection

      • Add new problem_type "object_detection".
      • Customers can run inference with pretrained object detection models and train their own model with three lines of code.
      • Integrate with open-mmlab/mmdetection, which supports classic detection architectures like Faster RCNN, and more efficient and performant architectures like YOLOV3 and VFNet.
      • See tutorials and examples for more detail.
      • Contributors and commits: @FANGAreNotGnu, @bryanyzhu, @zhiqiangdon, @yongxinw, @sxjscience, @Harry-zzh (#2025, #2061, #2131, #2181, #2196, #2215, #2244, #2265, #2290, #2311, #2312, #2337, #2349, #2353, #2360, #2362, #2365, #2380, #2381, #2391, #2393, #2400, #2419, #2421, #2063, #2104, #2411)
    • Named Entity Recognition

      • Add new problem_type "ner".
      • Customers can train models to extract named entities with three lines of code.
      • The implementation supports any backbone in huggingface/transformers, including the recently released FLAN-T5 series from Google.
      • See tutorials for more detail.
      • Contributors and commits: @cheungdaven (#2183, #2232, #2220, #2282, #2295, #2301, #2337, #2346, #2361, #2372, #2394, #2412)
    • Multimodal Matching

      • Add new problem_type "text_similarity", "image_similarity", "image_text_similarity".
      • Users can now extract semantic embeddings with pretrained models for text-text, image-image, and text-image matching problems.
      • Moreover, users can further finetune these models with relevance data.
      • The semantic text embedding model can also be combined with BM25 to form a hybrid indexing solution.
      • Internally, AutoGluon Multimodal implements a twin-tower architecture that is flexible in the choice of backbones for each tower. It supports image backbones in TIMM, text backbones in huggingface/transformers, and also the CLIP backbone.
      • See tutorials for more detail.
      • Contributors and commits: @zhiqiangdon @FANGAreNotGnu @cheungdaven @suzhoum @sxjscience @bryanyzhu (#1975, #1994, #2142, #2179, #2186, #2217, #2235, #2284, #2297, #2313, #2326, #2337, #2347, #2357, #2358, #2362, #2363, #2375, #2378, #2404, #2416, #2407, #2417)
    • Miscellaneous minor fixes. @cheungdaven @FANGAreNotGnu @geoalgo @zhiqiangdon (#2402, #2409, #2026, #2401, #2418)

    Other Enhancements

    • Fix the FT-Transformer implementation and support Fastformer. @BingzhaoZhu @yiqings (#1958, #2194, #2251, #2344, #2379, #2386)
    • Support finetuning billion-scale FLAN-T5-XL in a single AWS g4.2x-large instance via improved parameter-efficient finetuning. See tutorial. @Raldir @sxjscience (#2032, #2108, #2285, #2336, #2352)
    • Upgrade multimodal HPO to use ray 2.0 and also add new tutorial. @yinweisu @suzhoum @bryanyzhu (#2206, #2341)
    • Further improvement on model distillation. Add example and tutorial. @FANGAreNotGnu @sxjscience (#1983, #2064, #2397)
    • Revise the default presets of AutoMM for image classification problems. @bryanyzhu (#2351)
    • Support backend=“automm” in autogluon.vision. @bryanyzhu (#2316)
    • Add deprecated warning to autogluon.vision and autogluon.text and point the usage to autogluon.multimodal. @bryanyzhu @sxjscience (#2268, #2315)
    • Examples for the Kaggle Feedback Prize prediction competition. We created a solution with AutoGluon Multimodal that obtained 152/1557 on the public leaderboard and 170/1557 on the private leaderboard, which is among the top 12% of participants. The solution was made public days before the competition deadline and received more than 3000 views. @suzhoum @MountPOTATO (#2129, #2168, #2333)
    • Improve native inference speed. @zhiqiangdon (#2051, #2157, #2161, #2171)
    • Other improvements, security/bug fixes. @zhiqiangdon @sxjscience @FANGAreNotGnu, @yinweisu @Innixma @tonyhoo @martinschaef @giswqs @tonyhoo (#1980, #1987, #1989, #2003, #2080, #2018, #2039, #2058, #2101, #2102, #2125, #2135, #2136, #2140, #2141, #2152, #2164, #2166, #2192, #2219, #2250, #2257, #2280, #2308, #2315, #2317, #2321, #2356, #2388, #2392, #2413, #2414, #2417, #2426, #2028, #2382, #2415, #2193, #2213, #2230)
    • CI improvements. @yinweisu (#1965, #1966, #1972, #1991, #2002, #2029, #2137, #2151, #2156, #2163, #2191, #2214, #2369, #2113, #2118)

    Experimental Features

    • Support 11B-scale model finetuning with DeepSpeed. @Raldir (#2032)
    • Enable few-shot learning with 11B-scale model. @Raldir (#2197)
    • ONNX export example of hf_text model. @FANGAreNotGnu (#2149)

    Tabular

    New features

    • New experimental model FT_TRANSFORMER. @bingzhaozhu, @innixma (#2085, #2379, #2389, #2410)

      • You can access it by specifying the FT_TRANSFORMER key in the hyperparameters dictionary or via presets="experimental_best_quality" (see the sketch after this list).
      • It is recommended to use GPU to train this model, but CPU training is also supported.
      • If given enough training time, this model generally improves the ensemble quality.
    • New experimental model compilation support via predictor.compile_models(). @liangfu, @innixma (#2225, #2260, #2300)

      • Currently only Random Forest and Extra Trees have compilation support.
      • You will need to install extra dependencies for this to work: pip install autogluon.tabular[all,skl2onnx].
      • Compiling models dramatically speeds up inference time (~10x) when processing small batches of samples (<10000).
      • Note that a known bug exists in the current implementation: Refitting models after compilation will fail and cause a crash. To avoid this, ensure that .compile_models is called only at the very end.
    • Added predictor.clone(...) method to allow perfectly cloning a predictor object to a new directory. This is useful to preserve the state of a predictor prior to altering it (such as prior to calling .save_space, .distill, .compile_models, or .refit_full). @innixma (#2071)

    • Added simplified num_gpus and num_cpus arguments to predictor.fit to control total resources. @yinweisu, @innixma (#2263)

    • Improved stability and effectiveness of HPO functionality via various refactors regarding our usage of ray. @yinweisu, @innixma (#1974, #1990, #2094, #2121, #2133, #2195, #2253, #2263, #2330)

    • Upgraded dependency versions: XGBoost 1.7, CatBoost 1.1, Scikit-learn 1.1, Pandas 1.5, Scipy 1.9, Numpy 1.23. @innixma (#2373)

    • Added python version compatibility check when loading a fitted TabularPredictor. Will now error if python versions are incompatible. @innixma (#2054)

    • Added fit_weighted_ensemble argument to predictor.fit. This allows the user to disable the weighted ensemble. @innixma (#2145)

    • Added cascade ensemble foundation logic. @innixma (#1929)
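
    A minimal sketch of the FT_TRANSFORMER usage mentioned above (hyperparameters are left at their defaults; train_data is assumed to be a labeled DataFrame):

    from autogluon.tabular import TabularPredictor

    predictor = TabularPredictor(label='class').fit(
        train_data,
        hyperparameters={'FT_TRANSFORMER': {}},  # or: presets="experimental_best_quality"
    )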

    Other Enhancements

    • Improved logging clarity when using infer_limit. @innixma (#2014)
    • Significantly improved HPO search space of XGBoost. @innixma (#2123)
    • Fixed HPO crashing when tuning Random Forest, Extra Trees, or KNN. @innixma (#2070)
    • Optimized roc_auc metric scoring speed by 7x. @innixma (#2318, #2331)
    • Fixed bug with AutoMM Tabular model crashing if not trained last. @innixma (#2309)
    • Refactored Scorer classes to be easier to use, plus added comprehensive unit tests for all metrics. @innixma (#2242)
    • Sped up TextSpecial feature generation during preprocessing by 20% @gidler (#2095)
    • imodels integration improvements @Jalagarto (#2062)
    • Fix crash when calling feature importance in quantile_regression. @leloykun (#1977)
    • Add FAQ section for missing value imputation. @innixma (#2076)
    • Various minor fixes and cleanup @innixma, @yinweisu, @gradientsky, @gidler (#1997, #2031, #2124, #2144, #2178, #2340, #2342, #2345, #2374, #2339, #2348, #2403, #1981, #1982, #2234, #2233, #2243, #2269, #2288, #2307, #2367, #2019)

    Time Series

    New features

    • TimeSeriesPredictor now supports static features (a.k.a. time series metadata, static covariates) and time-varying covariates (a.k.a. dynamic features or related time series); a basic usage sketch follows this list. @shchur @canerturkmen (#1986, #2238, #2276, #2287)
    • AutoGluon-TimeSeries now uses PyTorch by default (for DeepAR and SimpleFeedForward), removing the dependency on MXNet. @canerturkmen (#2074, #2205, #2279)
    • New models! AutoGluonTabular relies on XGBoost, LightGBM and CatBoost under the hood via the autogluon.tabular module. Naive and SeasonalNaive forecasters are simple methods that provide strong baselines with no increase in training time. TemporalFusionTransformerMXNet brings the TFT transformer architecture to AutoGluon. @shchur (#2106, #2188, #2258, #2266)
    • Up to 20x faster parallel and memory-efficient training for statistical (local) forecasting models like ETS, ARIMA and Theta, as well as WeightedEnsemble. @shchur @canerturkmen (#2001, #2033, #2040, #2067, #2072, #2073, #2180, #2293, #2305)
    • Up to 3x faster training for GluonTS models with data caching. GPU training enabled by default on PyTorch models. @shchur (#2323)
    • More accurate validation for time series models with multi-window backtesting. @shchur (#2013, #2038)
    • TimeSeriesPredictor now handles irregularly sampled time series with ignore_index. @canerturkmen, @shchur (#1993, #2322)
    • Improved and extended presets for more accurate forecasting. @shchur (#2304)
    • 15x faster and more robust forecast evaluation with updates to TimeSeriesEvaluator @shchur (#2147, #2150)
    • Enabled Ray Tune backend for hyperparameter optimization of time series models. @shchur (#2167, #2203)
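
    A hedged usage sketch of the TimeSeriesPredictor workflow these changes build on (df, the column names, and the prediction length are placeholders):

    from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

    # df: a long-format pandas DataFrame with item_id, timestamp, and target columns
    train_data = TimeSeriesDataFrame.from_data_frame(df, id_column='item_id', timestamp_column='timestamp')
    predictor = TimeSeriesPredictor(prediction_length=24, target='target').fit(train_data)
    forecasts = predictor.predict(train_data)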

    More tutorials and examples

    Improved documentation and new tutorials:

    @shchur (#2120, #2127, #2146, #2174, #2187, #2354)

    Miscellaneous

    @shchur

    • Deprecate passing quantile_levels to TimeSeriesPredictor.predict (#2277)
    • Use static features in GluonTS forecasting models (#2238)
    • Make sure that time series splitter doesn't trim training series shorter than prediction_length + 1 (#2099)
    • Fix hyperparameter overloading in HPO for time series models (#2189)
    • Clean up the TimeSeriesDataFrame public API (#2105)
    • Fix item order in GluonTS models predictions (#2092)
    • Implement hash_ts_dataframe_items (#2060)
    • Speed up TimeSeriesDataFrame.slice_by_timestep (#2020)
    • Speed up RandomForestQuantileRegressor and ExtraTreesQuantileRegressor (#2204)
    • Various backend enhancements / refactoring / cleanup (#2314, #2294, #2292, #2278, #1985, #2398)

    @canerturkmen

    • Increase the number of samples used by DeepAR at prediction time (#2291)
    • revise timeseries presets to minimum context length of 10 (#2065)
    • Fix timeseries daily frequency inferred period (#2100)
    • Various backend enhancements / refactoring / cleanup (#2286, #2302, #2240, #2093, #2098, #2044, #2385, #2355, #2405)
    Source code(tar.gz)
    Source code(zip)
  • v0.5.2(Jul 29, 2022)

    Version 0.5.2

    v0.5.2 is a security hotfix release.

    This release is non-breaking when upgrading from v0.5.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.5.1...v0.5.2

    This version supports Python versions 3.7 to 3.9.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.3(Jul 28, 2022)

    Version 0.4.3

    v0.4.3 is a security hotfix release.

    This release is non-breaking when upgrading from v0.4.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.4.2...v0.4.3

    This version supports Python versions 3.7 to 3.9.

    Source code(tar.gz)
    Source code(zip)
  • v0.5.1(Jul 19, 2022)

    Version 0.5.1

    We're happy to announce the AutoGluon 0.5 release. This release contains major optimizations and bug fixes to autogluon.multimodal and autogluon.timeseries modules, as well as inference speed improvements to autogluon.tabular.

    This release is non-breaking when upgrading from v0.5.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    This release contains 58 commits from 14 contributors!

    Full Contributor List (ordered by # of commits):

    • @zhiqiangdon, @yinweisu, @Innixma, @canerturkmen, @sxjscience, @bryanyzhu, @jsharpna, @gidler, @gradientsky, @Linuxdex, @muxuezi, @yiqings, @huibinshen, @FANGAreNotGnu

    This version supports Python versions 3.7 to 3.9.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.5.0...v0.5.1

    AutoMM

    Changed to a new namespace autogluon.multimodal (AutoMM), which is a deep learning "model zoo" of model zoos. On one hand, AutoMM can automatically train deep models for unimodal (image-only, text-only, or tabular-only) problems. On the other hand, AutoMM can automatically solve multimodal (any combination of image, text, and tabular) problems by fusing multiple deep learning models. In addition, AutoMM can be used as a base model in AutoGluon Tabular and participate in the model ensemble.

    New features

    • Supported zero-shot learning with CLIP (#1922) @zhiqiangdon

      • Users can directly perform zero-shot image classification with the CLIP model. Moreover, users can extract image and text embeddings with CLIP to do image-to-text or text-to-image retrieval.
    • Improved efficient finetuning

      • Support “bit_fit”, “norm_fit“, “lora”, “lora_bias”, “lora_norm”. In four multilingual datasets (xnli, stsb_multi_mt, paws-x, amazon_reviews_multi), “lora_bias”, which is a combination of LoRA and BitFit, achieved the best overall performance. Compared to finetuning the whole network, “lora_bias” will only finetune <0.5% of the network parameters and can achieve comparable performance on “stsb_multi_mt” (#1780, #1809). @Raldir @zhiqiangdon
      • Support finetuning the mT5-XL model that has 1.7B parameters on a single NVIDIA G4 GPU. In AutoMM, we only use the T5-encoder (1.7B parameters) like Sentence-T5. (#1933) @sxjscience
    • Added more data augmentation techniques

    • Enhanced teacher-student model distillation

      • Support distilling the knowledge from a unimodal/multimodal teacher model to a student model. (#1670, #1895) @zhiqiangdon

    More tutorials and examples

    • Beginner tutorials of applying AutoMM to image, text, or multimodal (including tabular) data. (#1861, #1908, #1858, #1869) @bryanyzhu @sxjscience @zhiqiangdon

    • A zero-shot image classification tutorial with the CLIP model. (#1942) @bryanyzhu

    • A tutorial of using CLIP model to extract embeddings for image-text retrieval. (#1957) @bryanyzhu

    • A tutorial to introduce comprehensive AutoMM configurations (#1861). @zhiqiangdon

    • AutoMM for tabular data examples (#1752, #1893, #1903). @yiqings

    • AutoMM distillation example (#1846). @FANGAreNotGnu

    • A Kaggle notebook about how to use AutoMM to predict pet adoption: https://www.kaggle.com/code/linuxdex/use-autogluon-to-predict-pet-adoption. The model achieves the score equivalent to top 1% (20th/3537) in this kernel-only competition (test data is only available in the kernel without internet access) (#1796, #1847, #1894, #1943). @Linuxdex

    Source code(tar.gz)
    Source code(zip)
  • v0.5.0(Jun 23, 2022)

    We're happy to announce the AutoGluon 0.5 release. This release contains major new modules autogluon.timeseries and autogluon.multimodal. In collaboration with the Yu Group of Statistics and EECS from UC Berkeley, we have added interpretable models (imodels) to autogluon.tabular.

    This release is non-breaking when upgrading from v0.4.2. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    This release contains 91 commits from 13 contributors!

    Full Contributor List (ordered by # of commits):

    • @Innixma, @canerturkmen, @zhiqiangdon, @sxjscience, @yinweisu, @Linuxdex, @yiqings, @gradientsky, @csinva, @FANGAreNotGnu, @huibinshen, @Raldir, @lzcemma

    The imodels integration is based on the following work,

    Singh, C., Nasseri, K., Tan, Y.S., Tang, T. and Yu, B., 2021. imodels: a python package for fitting interpretable models. Journal of Open Source Software, 6(61), p.3192.

    This version supports Python versions 3.7 to 3.9.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.4.1...v0.5.0

    Full release notes will be available shortly.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.2(Jun 1, 2022)

    Version 0.4.2

    v0.4.2 is a hotfix release to fix breaking change in protobuf.

    This release is non-breaking when upgrading from v0.4.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.4.1...v0.4.2

    This version supports Python versions 3.7 to 3.9.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(May 21, 2022)

    Version 0.4.1

    We're happy to announce the AutoGluon 0.4.1 release. 0.4.1 contains minor enhancements to Tabular, Text, Image, and Multimodal modules, along with many quality of life improvements and fixes.

    This release is non-breaking when upgrading from v0.4.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    This release contains 55 commits from 10 contributors!

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.4.0...v0.4.1

    Special thanks to @yiqings, @leandroimail, @huibinshen who were first time contributors to AutoGluon this release!

    Full Contributor List (ordered by # of commits):

    • @Innixma, @zhiqiangdon, @yinweisu, @sxjscience, @yiqings, @gradientsky, @willsmithorg, @canerturkmen, @leandroimail, @huibinshen.

    This version supports Python versions 3.7 to 3.9.

    Changes

    AutoMM

    New features

    • Added optimization.efficient_finetune flag to support multiple efficient finetuning algorithms. (#1666) @sxjscience

    • Enabled knowledge distillation for AutoMM (#1670) @zhiqiangdon

      • Distillation API for AutoMMPredictor reuses the .fit() function:
      from autogluon.text.automm import AutoMMPredictor
      teacher_predictor = AutoMMPredictor(label="label_column").fit(train_data)
      student_predictor = AutoMMPredictor(label="label_column").fit(
          train_data, 
          hyperparameters=student_and_distiller_hparams, 
          teacher_predictor=teacher_predictor,
      )
      
    • Option to turn on returning feature column information (#1711) @zhiqiangdon

      • The feature column information is turned on for feature column distillation; in other cases it is turned off by default to reduce the dataloader's latency.
      • Added a requires_column_info flag in data processors and a utility function to turn this flag on or off.
    • FT-Transformer implementation for tabular data in AutoMM (#1646) @yiqings

      • Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, Artem Babenko, "Revisiting Deep Learning Models for Tabular Data" 2022. (arxiv, official implementation)
    • Make CLIP support multiple images per sample (#1606) @zhiqiangdon

      • Added multiple images support for CLIP. Improved data loader robustness: added missing images handling to prevent training crashes.
      • Added the choice of using a zero image if an image is missing.
    • Avoid using eos as the sep token for CLIP. (#1710) @zhiqiangdon

    • Update fusion transformer in AutoMM (#1712) @yiqings

      • Support constant learning rate in polynomial_decay scheduler.
      • Update [CLS] token in numerical/categorical transformer.
    • Added more image augmentations: verticalflip, colorjitter, randomaffine (#1719) @Linuxdex, @sxjscience

    • Added prompts for the percentage of missing images during image column detection. (#1623) @zhiqiangdon

    • Support average_precision in AutoMM (#1697) @sxjscience

    • Convert roc_auc / average_precision to log_loss for torchmetrics (#1715) @zhiqiangdon

      • torchmetrics.AUROC requires that both positive and negative examples be available in a mini-batch. When training a large model, the per-GPU batch size is probably small, leading to an incorrect roc_auc score. Converting from roc_auc to log_loss improves training stability.
    • Added pytorch-lightning 1.6 support (#1716) @sxjscience

    Checkpointing and Model Outputs Changes

    • Updated the names of top-k checkpoint average methods and support customizing model names for terminal input (#1668) @zhiqiangdon

      • Following the paper https://arxiv.org/pdf/2203.05482.pdf, updated the top-k checkpoint average names: union_soup -> uniform_soup and best_soup -> best.
      • Updated function names (customize_config_names -> customize_model_names and verify_config_names -> verify_model_names) to make them easier to understand.
      • Support customizing model names for the terminal input.
    • Implemented the GreedySoup algorithm proposed in the paper. Added union_soup, greedy_soup, and best_soup flags and changed the default value correspondingly. (#1613) @sxjscience

    • Updated the standalone flag in automm.predictor.save() to save the pretrained model for offline deployment (#1575) @yiqings

      • An efficient implementation to save the downloaded models from transformers for offline deployment. The revised logic is in #1572 and discussed in #1572 (comment).
    • Simplified checkpoint template (#1636) @zhiqiangdon

      • Stopped using pytorch lightning's model checkpoint template in saving AutoMMPredictor's final model checkpoint.
      • Improved the logic of continuous training. We pass the ckpt_path argument to pytorch lightning's trainer only when resume=True.
    • Unified AutoMM's model output format and support customizing model names (#1643) @zhiqiangdon

      • Now each model's output is a dictionary with the model prefix as the first-level key. The format is uniform between single models and fusion models.
      • Now users can customize model names by using the internal registered names (timm_image, hf_text, clip, numerical_mlp, categorical_mlp, and fusion_mlp) as prefixes. This is helpful when users want to simultaneously use two models of the same type, e.g., hf_text. They can just use names hf_text_0 and hf_text_1.
    • Support standalone feature in TextPredictor (#1651) @yiqings

    • Fixed saving and loading tokenizers and text processors (#1656) @zhiqiangdon

      • Saved pre-trained huggingface tokenizers separately from the data processors.
      • This change is backwards-compatible with checkpoints saved by version 0.4.0.
    • Change load from a classmethod to staticmethod to avoid incorrect usage. (#1697) @sxjscience

    • Added AutoMMModelCheckpoint to avoid evaluating the models to obtain the scores (#1716) @sxjscience

      • checkpoint will save the best_k_models into a yaml file so that it can be loaded later to determine the path to model checkpoints.
    • Extract column features from AutoMM's model outputs (#1718) @zhiqiangdon

      • Add one util function to extract column features for both image and text.
      • Support extracting column features for models timm_image, hf_text, and clip.
    • Make AutoMM dataloader return feature column information (#1710) @zhiqiangdon

    Bug fixes

    • Fixed calling save_pretrained_configs in AutoMMPrediction.save(standalone=True) when no fusion model exists (#1651) @yiqings

    • Fixed error raising for setting key that does not exist in the configuration (#1613) @sxjscience

    • Fixed warning message about bf16. (#1625) @sxjscience

    • Fixed the corner case of calculating the gradient accumulation step (#1633) @sxjscience

    • Fixes for top-k averaging in the multi-gpu setting (#1707) @zhiqiangdon

    Tabular

    • Limited RF max_leaf_nodes to 15000 (previously uncapped) (#1717) @Innixma

      • Previously, for very large datasets RF/XT memory and disk usage would quickly become unreasonable. This ensures that at a certain point RF and XT will no longer become larger given more rows of training data. Benchmark results showed that the change is an improvement, particularly for the high_quality preset.
    • Limit KNN to 32 CPUs to avoid OpenBLAS error (#1722) @Innixma

      • Issue #1020. When training K-nearest-neighbors (KNN) models, sometimes a rare error can occur that crashes the entire process:
      BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
      Segmentation fault: 11
      

      This error occurred when the machine had many CPU cores (>64 vCPUs) due to too many threads being created at once. By limiting KNN to 32 cores, the error is avoided.

    • Improved memory warning thresholds (#1626) @Innixma

    • Added get_results and model_base_kwargs (#1618) @Innixma

      • Added get_results to searchers, useful for debugging and for future extensions to HPO functionality. Added a new way to initialize a BaggedEnsembleModel that avoids having to initialize the base model prior to initializing the bagged ensemble model.
    • Update resource logic in models (#1689) @Innixma

      • The previous implementation would crash if the user specified auto for resources; this is fixed in this PR.
      • Added get_minimum_resources to explicitly define minimum resource requirements within a method.
    • Updated feature importance default subsample_size 1000 -> 5000, num_shuffle_sets 3 -> 5 (#1708) @Innixma (see the sketch after this list)

      • This will improve the quality of the feature importance values by default, especially the 99% confidence bounds. The change increases the time taken by ~8x, but this is acceptable because of the numerous inference speed optimizations done since these defaults were first introduced.
    • Added notice to ensure serializable custom metrics (#1705) @Innixma
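
    A minimal sketch (not from the release notes) of overriding the feature importance defaults mentioned above, assuming predictor is an already fit TabularPredictor and test_data is a labeled hold-out DataFrame:

    # Permutation feature importance with the new defaults written out explicitly.
    # Lower subsample_size / num_shuffle_sets to trade estimate quality for speed.
    importance_df = predictor.feature_importance(
        test_data,
        subsample_size=5000,   # rows sampled when shuffling each feature
        num_shuffle_sets=5,    # number of shuffle repeats
    )
    print(importance_df)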

    Bug fixes

    • Fixed evaluate when weight_evaluation=True (#1612) @Innixma

      • Previously, AutoGluon would crash if the user specified predictor.evaluate(...) or predictor.evaluate_predictions(...) when self.weight_evaluation==True.
    • Fixed RuntimeError: dictionary changed size during iteration (#1684, #1685) @leandroimail

    • Fixed CatBoost custom metric & F1 support (#1690) @Innixma

    • Fixed HPO not working for bagged models if the bagged model is loaded from disk (#1702) @Innixma

    • Fixed Feature importance erroring if self.model_best is None (can happen if no Weighted Ensemble is fit) (#1702) @Innixma

    Documentation

    • Updated the text tutorial on customizing hyperparameters (#1620) @zhiqiangdon

      • Added customizable backbones from the Huggingface model zoo and how to use local backbones.
    • Improved implementations and docstrings of save_pretrained_models and convert_checkpoint_name. (#1656) @zhiqiangdon

    • Added cheat sheet to website (#1605) @yinweisu

    • Doc fix to use correct predictor when calling leaderboard (#1652) @Innixma

    Miscellaneous changes

    • [security] updated pillow to 9.0.1+ (#1615) @gradientsky

    • [security] updated ray to 1.10.0+ (#1616) @yinweisu

    • Tabular regression tests improvements (#1555) @willsmithorg

      • Regression testing of model list and scores in tabular on small synthetic datasets (for speed).
      • Tests about 20 different calls to TabularPredictor on both regression and classification tasks, multiple presets etc.
      • When a test fails it dumps out the config change required to make it pass, for ease of updating.
    • Disabled image/text predictor when gpu is not available in TabularPredictor (#1676) @yinweisu

      • Resources are validated before bagging is started. Image/text predictor models require a minimum of 1 GPU.
    • Used class properties to set keys in model classes. In this way, if we customize the prefix key, other keys are automatically updated. (#1669) @zhiqiangdon

    Various bugfixes, documentation and CI improvements

    • @yinweisu (#1605, #1611, #1631, #1638, #1691)
    • @zhiqiangdon (#1721)
    • @Innixma (#1608, #1701)
    • @sxjscience (#1714)
  • v0.4.0(Mar 10, 2022)

    We're happy to announce the AutoGluon 0.4 release. 0.4 contains major enhancements to Tabular and Text modules, along with many quality of life improvements and fixes.

    This release is non-breaking when upgrading from v0.3.1. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    This release contains 151 commits from 14 contributors!

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.3.1...v0.4.0

    Special thanks to @zhiqiangdon, @willsmithorg, @DolanTheMFWizard, @truebluejason, @killerSwitch, and @Xilorole who were first time contributors to AutoGluon this release!

    Full Contributor List (ordered by # of commits):

    • @Innixma, @yinweisu, @gradientsky, @zhiqiangdon, @jwmueller, @willsmithorg, @sxjscience, @DolanTheMFWizard, @truebluejason, @taesup-aws, @Xilorole, @mseeger, @killerSwitch, @rschmucker

    This version supports Python versions 3.7 to 3.9.

    Bugs in v0.4

    • #1607 pip install autogluon.text will error on import if installed standalone due to missing autogluon.features as a dependency. To fix: pip install autogluon.features. This will be resolved in v0.4.1 release.

    Changes

    General

    • AutoGluon now supports Windows OS! Both CPU and GPU are supported on Windows.
    • AutoGluon now supports Python 3.9. Python 3.6 is no longer supported.
    • AutoGluon has migrated from MXNet to PyTorch for all deep learning models resulting in major speedups.
    • AutoGluon v0.4 Cheat Sheet: Get started faster than ever before with this handy reference page!
    • New tutorials showcasing cloud training and deployment with AWS SageMaker and Lambda.

    Text

    AutoGluon-Text is refactored with PyTorch Lightning. It now supports backbones in huggingface/transformers. The new version has better performance, faster training time, and faster inference speed. In addition, AutoGluon-Text now supports solving multilingual problems and a new AutoMMPredictor has been implemented for automatically building multimodal DL models.

    • Better Performance
    • Faster Speed
      • The new version has ~2.88x speedup in training and ~1.40x speedup in inference. With g4dn.12x instance, the model can achieve an additional 2.26x speedup with 4 GPUs.
    • Multilingual Support
      • AutoGluon-Text now supports solving multilingual problems via cross-lingual transfer (Tutorial). This is triggered by setting presets="multilingual". You can now train a model on the English dataset and directly apply the model on datasets in other languages such as German, Japanese, Italian, etc.
    • AutoMMPredictor for Multimodal Problems
      • Added an experimental AutoMMPredictor that supports fusing image backbones in timm, text backbones in huggingface/transformers, and multimodal backbones like CLIP (Tutorial). It may perform better than ensembling ImagePredictor + TextPredictor.
    • Other Features
      • Support continuous training from an existing checkpoint. You may just call .fit() again after a previously trained model has been loaded (see the sketch after this list).
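
    A minimal sketch (not part of the release notes) of the multilingual preset and continuous training mentioned above, assuming train_df_en and extra_df are pandas DataFrames with a 'label' column; the path is illustrative:

    from autogluon.text import TextPredictor

    # Cross-lingual transfer: train on English data, then apply to other languages.
    predictor = TextPredictor(label='label', path='ag_text_multilingual')
    predictor.fit(train_df_en, presets='multilingual', time_limit=3600)

    # Continuous training: load the previously trained predictor and call .fit() again.
    predictor = TextPredictor.load('ag_text_multilingual')
    predictor.fit(extra_df, time_limit=600)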

    Thanks to @zhiqiangdon and @sxjscience for contributing the AutoGluon-Text refactors! (#1537, #1547, #1557, #1565, #1571, #1574, #1578, #1579, #1581, #1585, #1586)

    Tabular

    AutoGluon-Tabular has been majorly enhanced by numerous optimizations in 0.4. In summary, these improvements have led to a:

    • ~2x training speedup in Good, High, and Best quality presets.
    • ~1.3x inference speedup.
    • 63% win-rate vs AutoGluon 0.3.1 (Results from AutoMLBenchmark)
      • 93% win-rate vs AutoGluon 0.3.1 on datasets with >=100,000 rows of data (!!!)

    Specific updates:

    • Added infer_limit and infer_limit_batch_size as new fit-time constraints (Tutorial). This allows users to specify the desired end-to-end inference latency of the final model and AutoGluon will automatically train models to satisfy the constraint. This is extremely useful for online-inference scenarios where you need to satisfy an end-to-end latency constraint (for example 50ms). See the sketch after this list. @Innixma (#1541, #1584)
    • Implemented automated semi-supervised and transductive learning in TabularPredictor. Try it out via TabularPredictor.fit_pseudolabel(...)! @DolanTheMFWizard (#1323, #1382)
    • Implemented automated feature pruning (i.e. feature selection) in TabularPredictor. Try it out via TabularPredictor.fit(..., feature_prune_kwargs={})! @truebluejason (#1274, #1305)
    • Implemented automated model calibration to improve AutoGluon's predicted probabilities for classification problems. This is enabled by default, and can be toggled via the calibrate fit argument. @DolanTheMFWizard (#1336, #1374, #1502)
    • Implemented parallel bag training via Ray. This results in a ~2x training speedup when bagging is enabled compared to v0.3.1 with the same hardware due to more efficient usage of resources for models that cannot effectively use all cores. @yinweisu (#1329, #1415, #1417, #1423)
    • Added adaptive early stopping logic which greatly improves the quality of models within a time budget. @Innixma (#1380)
    • Added automated model calibration in quantile regression. @taesup-aws (#1388)
    • Enhanced datetime feature handling. @willsmithorg (#1446)
    • Added support for custom confidence levels in feature importance. @jwmueller (#1328)
    • Improved neural network HPO search spaces. @jwmueller (#1346)
    • Optimized one-hot encoding preprocessing. @Innixma (#1376)
    • Refactored refit_full logic to majorly simplify user model contributions and improve multimodal support with advanced presets. @Innixma (#1567)
    • Added experimental TabularPredictor config helper. @gradientsky (#1491)
    • New Tutorials
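
    A minimal sketch (not part of the release notes) combining a few of the new fit options above; train_data and unlabeled_data are assumed to be pandas DataFrames (the latter without the label column), and the numbers are illustrative:

    from autogluon.tabular import TabularPredictor

    predictor = TabularPredictor(label='class').fit(
        train_data,
        time_limit=3600,
        infer_limit=0.05,              # target end-to-end latency: 50ms per row
        infer_limit_batch_size=10000,  # batch size assumed when measuring latency
        feature_prune_kwargs={},       # enable automated feature pruning with defaults
    )

    # Optionally refine the predictor with pseudo-labeled data.
    predictor.fit_pseudolabel(unlabeled_data)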

    Tabular Models

    NEW: TabularNeuralNetTorchModel (alias: 'NN_TORCH')

    As part of the migration from MXNet to Torch, we have created a Torch based counterpart to the prior MXNet tabular neural network model. This model has several major advantages, such as:

    • 1.9x faster training speed
    • 4.7x faster inference speed
    • 51% win-rate vs MXNet Tabular NN

    This model has replaced the MXNet tabular neural network model in the default hyperparameters configuration, and is enabled by default.

    Thanks to @jwmueller and @Innixma for contributing TabularNeuralNetTorchModel to AutoGluon! (#1489)

    NEW: VowpalWabbitModel (alias: 'VW')

    VowpalWabbit has been added as a new model in AutoGluon. VowpalWabbit is not installed by default, and must be installed separately. VowpalWabbit is used in the hyperparameters='multimodal' preset, and the model is a great option to use for datasets containing text features.

    To install VowpalWabbit, specify it via pip install autogluon.tabular[all, vowpalwabbit] or pip install "vowpalwabbit>=8.10,<8.11"
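
    A minimal sketch of opting into the multimodal hyperparameter preset, which includes VowpalWabbit when it is installed; train_data is an assumed pandas DataFrame with a 'label' column and one or more text columns:

    from autogluon.tabular import TabularPredictor

    # The 'multimodal' preset adds text-capable models such as VowpalWabbit.
    predictor = TabularPredictor(label='label').fit(
        train_data,
        hyperparameters='multimodal',
    )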

    Thanks to @killerSwitch for contributing VowpalWabbitModel to AutoGluon! (#1422)

    XGBoostModel (alias: 'XGB')

    • Optimized model serialization method, which results in 5.5x faster inference speed and halved disk usage. @Innixma (#1509)
    • Adaptive early stopping logic leading to 54.7% win-rate vs prior implementation. @Innixma (#1380)
    • Optimized training speed with expensive metrics such as F1 by ~10x. @Innixma (#1344)
    • Optimized num_cpus default to equal physical cores rather than virtual cores. @Innixma (#1467)

    CatBoostModel (alias: 'CAT')

    • CatBoost now incorporates callbacks which make it more stable and resilient to memory errors, along with more advanced adaptive early stopping logic that leads to 63.2% win-rate vs prior implementation. @Innixma (#1352, #1380)

    LightGBMModel (alias: 'GBM')

    • Optimized training speed with expensive metrics such as F1 by ~10x. @Innixma (#1344)
    • Adaptive early stopping logic leading to 51.1% win-rate vs prior implementation. @Innixma (#1380)
    • Optimized num_cpus default to equal physical cores rather than virtual cores. @Innixma (#1467)

    FastAIModel (alias: 'FASTAI')

    • Added adaptive batch size selection and epoch selection. @gradientsky (#1409)
    • Enabled HPO support in FastAI (previously HPO was not supported for FastAI). @Innixma (#1408)
    • Made FastAI training deterministic (it is now consistently seeded). @Innixma (#1419)
    • Fixed GPU specification in FastAI to respect the num_gpus parameter. @Innixma (#1421)
    • Forced correct number of threads during fit and inference to avoid issues with global thread updates. @yinweisu (#1535)

    LinearModel (alias: 'LR')

    Linear models have been accelerated by 20x in training and 20x in inference thanks to a variety of optimizations. To get the accelerated training speeds, please install scikit-learn-intelex via pip install "scikit-learn-intelex>=2021.5,<2021.6"

    Note that currently LinearModel is not enabled by default in AutoGluon, and must be specified in hyperparameters via the key 'LR'. Further testing is planned to incorporate LinearModel as a default model in future releases.
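
    A minimal sketch of explicitly requesting the linear model via the 'LR' hyperparameters key alongside LightGBM ('GBM'); train_data is an assumed pandas DataFrame with a 'class' label column:

    from autogluon.tabular import TabularPredictor

    # 'LR' enables the linear model (not part of the defaults); 'GBM' keeps LightGBM.
    predictor = TabularPredictor(label='class').fit(
        train_data,
        hyperparameters={'GBM': {}, 'LR': {}},
    )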

    Thanks to the scikit-learn-intelex team and @Innixma for the LinearModel optimizations! (#1378)

    Vision

    • Refactored backend logic to be more robust. @yinweisu (#1427)
    • Added support for inference via CPU. Previously, inferring without GPU would error. @yinweisu (#1533)
    • Refactored HPO logic. @Innixma (#1511)

    Miscellaneous

    • AutoGluon no longer depends on ConfigSpace, cython, dill, paramiko, autograd, openml, d8, and graphviz. This greatly simplifies installation of AutoGluon, particularly on Windows.
    • Entirely refactored HPO logic to break dependencies on ConfigSpace and improve stability and ease of development. HPO has been simplified to use random search in this release while we work on re-introducing the more advanced HPO methods such as bayesopt in a future release. Additionally, removed 40,000 lines of outdated code to streamline future development. @Innixma (#1397, #1411, #1414, #1431, #1443, #1511)
    • Added autogluon.common to simplify dependency management for future submodules. @Innixma (#1386)
    • Removed autogluon.mxnet and autogluon.extra submodules as part of code cleanup. @Innixma (#1397, #1411, #1414)
    • Refactored logging to avoid interfering with other packages. @yinweisu (#1403)
    • Fixed logging output on Kaggle, previously no logs would be displayed while fitting AutoGluon in a Kaggle kernel. @Innixma (#1468)
    • Added platform tests for Linux, MacOS, and Windows. @yinweisu (#1464, #1506, #1513)
    • Added ROADMAP.md to highlight past, present, and future feature prioritization and progress to the community. @Innixma (#1420)
    • Various documentation and CI improvements
      • @jwmueller (#1379, #1408, #1429)
      • @gradientsky (#1383, #1387, #1471, #1500)
      • @yinweisu (#1441, #1482, #1566, #1580)
      • @willsmithorg (#1476, #1483)
      • @Xilorole (#1526)
      • @Innixma (#1452, #1453, #1528, #1577, #1584, #1588, #1593)
    • Various backend enhancements / refactoring / cleanup
      • @DolanTheMFWizard (#1319)
      • @gradientsky (#1320, #1366, #1385, #1448, #1488, #1490, #1570, #1576)
      • @mseeger (#1349)
      • @yinweisu (#1497, #1503, #1512, #1563, #1573)
      • @willsmithorg (#1525, #1543)
      • @Innixma (#1311, #1313, #1327, #1331, #1338, #1345, #1369, #1377, #1380, #1408, #1410, #1412, #1419, #1425, #1428, #1462, #1465, #1562, #1569, #1591, #1593)
    • Various bug fixes
      • @jwmueller (#1314, #1356)
      • @yinweisu (#1472, #1499, #1504, #1508, #1516)
      • @gradientsky (#1514)
      • @Innixma (#1304, #1325, #1326, #1337, #1365, #1395, #1405, #1587, #1599)
  • v0.3.1(Aug 31, 2021)

    v0.3.1 is a hotfix release which fixes several major bugs as well as including several model quality improvements.

    This release is non-breaking when upgrading from v0.3.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    This release contains 9 commits from 4 contributors.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.3.0...v0.3.1

    Thanks to the 4 contributors that contributed to the v0.3.1 release!

    Special thanks to @yinweisu who is a first time contributor to AutoGluon and fixed a major bug in ImagePredictor HPO!

    Full Contributor List (ordered by # of commits):

    @Innixma, @gradientsky, @yinweisu, @sackoh

    Changes

    Tabular

    • AutoGluon v0.3.1 has a 58% win-rate vs AutoGluon v0.3.0 for best_quality preset.
    • AutoGluon v0.3.1 has a 75% win-rate vs AutoGluon v0.3.0 for high and good quality presets.
    • Fixed major bug introduced in v0.3.0 with models trained in refit_full causing weighted ensembles to incorrectly weight models. This severely impacted accuracy and caused worse results for high and good quality presets. @Innixma (#1293)
    • Removed KNN from stacker models, resulting in stack quality improvement. @Innixma (#1294)
    • Added automatic detection and optimized usage of boolean features. @Innixma (#1286)
    • Improved handling of time limit in FastAI NN model to avoid edge cases where the model would use the entire time budget but fail to train. @Innixma (#1284)
    • Updated XGBoost to use -1 as n_jobs value instead of using os.cpu_count(). @sackoh (#1289)

    Vision

    • Fixed major bug that caused HPO with time limits specified to return very poor models. @yinweisu (#1282)

    General

    • Minor doc updates. @gradientsky (#1288, #1290)
  • v0.3.0(Aug 15, 2021)

    v0.3.0 introduces multi-modal image, text, tabular support to AutoGluon. In just a few lines of code, you can train a multi-layer stack ensemble using text, image, and tabular data! To our knowledge this is the first publicly available implementation of a model that handles all 3 modalities at once. Check it out in our brand new multimodal tutorial! v0.3.0 also features a major model quality improvement for Tabular, with a 57.6% winrate vs v0.2.0 on the AutoMLBenchmark, along with an up to 10x online inference speedup due to low level numpy and pandas optimizations throughout the codebase! This inference optimization enables AutoGluon to have sub 30 millisecond end-to-end latency for real-time deployment scenarios when paired with model distillation. Finally, AutoGluon can now train PyTorch image models via integration with TIMM. Specify any TIMM model to ImagePredictor or TabularPredictor to train them with AutoGluon!

    This release is non-breaking when upgrading from v0.2.0. As always, only load previously trained models using the same version of AutoGluon that they were originally trained on. Loading models trained in different versions of AutoGluon is not supported.

    This release contains 70 commits from 10 contributors.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.2.0...v0.3.0

    Thanks to the 10 contributors that contributed to the v0.3.0 release!

    Special thanks to the 3 first-time contributors! @rxjx, @sallypannn, @sarahyurick

    Special thanks to @talhaanwarch who opened 21 GitHub issues (!) and participated in numerous discussions during v0.3.0 development. His feedback was incredibly valuable when diagnosing issues and improving the user experience throughout AutoGluon!

    Full Contributor List (ordered by # of commits):

    @Innixma, @zhreshold, @jwmueller, @gradientsky, @sxjscience, @ValerioPerrone, @taesup-aws, @sallypannn, @rxjx, @sarahyurick

    Major Changes

    Multimodal

    • Added multimodal tabular, text, image functionality! See the tutorial to get started. @innixma, @zhreshold (#1041, #1211, #1277)

    Tutorials

    Tabular

    • Overall, AutoGluon-Tabular v0.3 wins 57.6% of the time against AutoGluon-Tabular v0.2 in AutoMLBenchmark!
    • Improved online inference speed by 1.5x-10x via various low level pandas and numpy optimizations. @Innixma (#1136)
    • Accelerated feature preprocessing speed by 100x+ for datetime and text features. @Innixma (#1203)
    • Fixed FastAI model not properly scaling regression label values, improving model quality significantly. @Innixma (#1162)
    • Fixed r2 metric having the wrong sign in FastAI model, dramatically improving performance when r2 metric is specified. @Innixma (#1159)
    • Updated XGBoost to 1.4, defaulted hyperparameter tree_method='hist' for improved performance. @Innixma (#1239)
    • Added groups parameter. Now users can specify the exact split indices in a groups column when performing model bagging. This solution leverages sklearn's LeaveOneGroupOut cross-validator. See the sketch after this list. @Innixma (#1224)
    • Added option to use holdout data for final ensembling weights in multi-layer stacking via a new use_bag_holdout argument. @Innixma (#1105)
    • Added neural network based quantile regression models. @taesup-aws (#1047)
    • Bug fix for random forest models' out-of-fold prediction computation in quantile regression. @jwmueller, @Innixma (#1100, #1102)
    • Added predictor.features() to get the original feature names used during training. @Innixma (#1257)
    • Refactored AbstractModel code to be easier to use. @Innixma (#1151, #1216, #1245, #1266)
    • Refactored BaggedEnsembleModel code in preparation for distributed bagging. @gradientsky (#1078)
    • Updated RAPIDS version to 21.06. @sarahyurick (#1241)
    • Force dtype conversion in feature preprocessing to align with FeatureMetadata. Now users can specify the dtypes of features via FeatureMetadata rather than updating the DataFrame. @Innixma (#1212)
    • Fixed various edge cases with out-of-bounds date time values. Now out-of-bounds date time values are treated as missing. @Innixma (#1182)
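
    A minimal sketch (not from the release notes) of the new groups and use_bag_holdout options mentioned above, assuming train_data contains a 'fold_id' column encoding the desired bagging splits:

    from autogluon.tabular import TabularPredictor

    # Custom bagging splits taken from the 'fold_id' column of train_data.
    predictor = TabularPredictor(label='class', groups='fold_id').fit(train_data)

    # Hold out data for fitting the final stacking ensemble weights.
    predictor2 = TabularPredictor(label='class').fit(
        train_data,
        num_bag_folds=5,
        num_stack_levels=1,
        use_bag_holdout=True,
    )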

    Vision

    • Added Torch / TIMM backend support! Now AutoGluon can train any TIMM model natively, and MXNet is no longer required to train vision models. @zhreshold (#1249)
    • Added regression problem_type support to ImagePredictor. @sallypannn (#1165)
    • Added GPU memory check to avoid going OOM during training. @Innixma (#1199)
    • Fixed error when vision models are hyperparameter tuned with forked multiprocessing. @gradientsky (#1107)
    • Fixed crash when an image is missing (both train and inference). Use TabularPredictor's Image API to get this functionality. @Innixma (#1210)
    • Fixed error when the same image is in multiple rows when calling predict_proba. @Innixma (#1206)
    • Fixed invalid preset configurations. @Innixma (#1199)
    • Fixed major defect causing tuning data to not be properly created if tuning data was not provided by user. @Innixma (#1168)
    • Upgraded Pillow version to '>=8.3.0,<8.4.0'. @gradientsky (#1262)

    Text

    • Removed pyarrow as a required dependency. @Innixma (#1200)
    • Fixed crash when eval_metric='average_precision'. @rxjx (#1092)

    General

    • Improved support for GPU on Windows. @Innixma (#1255)
    • Added quadratic kappa evaluation metric. @sxjscience (#1104)
    • Improved access method for __version__. @Innixma (#1122)
    • Upgraded pandas to 1.3. @Innixma (#1258)
    • Upgraded ConfigSpace to 0.4.19. @Innixma (#1265)
    • Upgraded numpy, graphviz, and dill versions. @Innixma (#1275)
    • Various minor doc improvements. @jwmueller, @Innixma (#1089, #1091, #1093, #1095, #1219, #1253)
    • Various minor updates and fixes. @Innixma, @zhreshold, @gradientsky (#1098, #1099, #1101, #1113, #1117, #1118, #1166, #1177, #1188, #1197, #1227, #1229, #1235, #1245, #1251)
  • v0.2.0(Apr 28, 2021)

    v0.2.0 introduces numerous optimizations that reduce Tabular average inference time by 4x and average disk usage by 10x compared to v0.1.0, as well as a refactored ImagePredictor API to better align with the other tasks and a 20x inference speedup in Vision tasks. This release contains 42 commits from 9 contributors.

    This release is non-breaking when upgrading from v0.1.0, with four exceptions:

    1. ImagePredictor.predict and ImagePredictor.predict_proba have different output formats.
    2. TabularPredictor.evaluate and TabularPredictor.evaluate_predictions have different output formats.
    3. Custom dictionary inputs to TabularPredictor.fit's hyperparameter_tune_kwargs argument now have a different format.
    4. Models trained in v0.1.0 should only be loaded with v0.1.0. Loading models trained in different versions of AutoGluon is not supported.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.1.0...v0.2.0

    Thanks to the 9 contributors that contributed to the v0.2.0 release!

    Special thanks to the 3 first-time contributors! @taesup-aws, @ValerioPerrone, @lukemorrill

    Full Contributor List (ordered by # of commits):

    @Innixma, @zhreshold, @gradientsky, @jwmueller, @mseeger, @sxjscience, @taesup-aws, @ValerioPerrone, @lukemorrill

    Major Changes

    Tabular

    • Reduced overall inference time on best_quality preset by 4x (and 2x on others). @innixma, @gradientsky
    • Reduced overall disk usage on best_quality preset by 10x. @innixma
    • Reduced training time and inference time of K-Nearest-Neighbor models by 250x, and reduced disk usage by 10x via:
      • Efficient out-of-fold implementation (10x training & inference speedup, 10x reduced disk usage) on best_quality preset. @innixma (#1022)
      • [Experimental] Integration of the scikit-learn-intelex package (25x training & inference speedup). @innixma (#1049)
        • This is currently not installed by default. Try it via pip install autogluon.tabular[all,skex] or pip install "scikit-learn-intelex<2021.3". Once installed, AutoGluon will automatically use it.
    • Reduced training time, inference time, and disk usage of RandomForest and ExtraTrees models by 10x via efficient out-of-fold implementation. @innixma (#1066, #1082)
    • Reduced training time by 30% and inference time by 75% on the FastAI neural network model. @gradientsky (#977)
    • Added quantile as a new problem_type to support quantile regression problems. @taesup-aws, @jwmueller (#1005, #1040)
    • [Experimental] Added GPU accelerated RandomForest, K-Nearest-Neighbors and Linear models via integration with NVIDIA RAPIDS. @innixma (#995, #997, #1000)
      • This is not enabled by default. Try it out by first installing RAPIDS and then installing AutoGluon.
        • Currently, the models need to be specially passed to the .fit hyperparameters argument. Refer to the below kaggle kernel for an example or check out RAPIDS official AutoGluon example.
      • See how to use AutoGluon + RAPIDS to get top 1% on the Otto kaggle competition with an interactive kaggle kernel!
    • [Experimental] Added option to specify early stopping rounds for models LightGBM, CatBoost, and XGBoost via a new model parameter ag.early_stop. @innixma (#1037)
      • Try it out via hyperparameters={'XGB': {'ag.early_stop': 500}}.
      • The API for this may change in future releases as we try to optimize usage of early stopping in AutoGluon.
    • [Experimental] Added adaptive early stopping to LightGBM. This will attempt to choose when to stop training the model more smartly than using an early stopping rounds value. @innixma (#1042)
    • Re-ordered model training priority to perform better when time_limit is small. For time_limit=3600 on datasets with over 100,000 rows, v0.2.0 has a 65% win-rate over v0.1.0. @innixma (#1059, #1084)
    • Adjusted time allocation to stack layers when performing multi-layer stacking to allow for longer training on earlier layers. @innixma (#1075)
    • Updated CatBoost to v0.25. @innixma (#1064)
    • Added extra_metrics argument to .leaderboard (see the sketch after this list). @innixma (#1058)
    • Added feature group importance support to .feature_importance. @innixma (#989)
      • Now, users can get the combined importance of a group of features.
      • predictor.feature_importance(test_data, features=['A', 'B', 'C', ('AB', ['A', 'B'])])
    • [BREAKING] Refactored .evaluate and .evaluate_predictions to be easier to use and share the same code logic. @innixma (#1080)
      • The output type has changed and the sign of the metric score has been flipped in some circumstances.
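
    A minimal sketch of the extra_metrics argument noted above; predictor is an assumed, already fit TabularPredictor for a binary classification task and test_data a labeled DataFrame:

    # Score every trained model on additional metrics alongside the eval_metric.
    leaderboard = predictor.leaderboard(test_data, extra_metrics=['accuracy', 'f1'])
    print(leaderboard)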

    Vision

    • Reduced inference time by 20x via various optimizations in inference batching. @zhreshold
    • Fixed a problem when loading saved models on cpu-only machines when models are trained on GPU. @zhreshold
    • Improved model fitting performance by up to 10% for ObjectDetector when presets is empty. @zhreshold
    • [BREAKING] Refactored predict and predict_proba methods in ImagePredictor to have the same output formats as TabularPredictor and TextPredictor. @zhreshold (#1044)
      • This change is BREAKING. Previous users of v0.1.0 should ensure they update to use the new formats if they made use of the old predict and predict_proba when switching to v0.2.0.
    • Added improved support for CSV and pandas DataFrame input to ImagePredictor. @zhreshold (#1010)
    • Added early stopping strategies that significantly improve training efficiency. @zhreshold (#1039)

    General

    • [Experimental] Added new hyperparameter tuning method: constrained bayesian optimization. @ValerioPerrone (#1034)
    • General HPO code improvement / cleanup. @mseeger, @gradientsky (#971, #1002, #1050)
    • Fixed ENAS issue when passing in custom datasets. @lukemorrill (#1015)
    • Fixed incorrect dependency link between autogluon.mxnet and autogluon.extra causing crash on import. @innixma (#1032)
    • Various minor updates and fixes. @innixma, @jwmueller, @zhreshold, @sxjscience (#990, #996, #998, #1007, #1035, #1052, #1055, #1057, #1072, #1081, #1088)
  • v0.1.0(Mar 1, 2021)

    v0.1.0 is our largest release yet, containing 173 commits from 20 contributors over the course of 5 months.

    This release is API breaking from past releases, as AutoGluon is now a namespace package. Please refer to our documentation for using v0.1.0. New GitHub issues based on versions earlier than v0.1.0 will not be addressed, and we recommend all users upgrade to v0.1.0 as soon as possible.

    See the full commit change-log here: https://github.com/awslabs/autogluon/compare/v0.0.15...v0.1.0

    Try it out yourself in 5 minutes with our Colab Tutorial.

    Special thanks to the 20 contributors that contributed to the v0.1.0 release! Contributor List:

    @innixma, @gradientsky, @sxjscience, @jwmueller, @zhreshold, @mseeger, @daikikatsuragawa, @Chudbrochil, @adrienatallah, @jonashaag, @songqiang, @larroy, @sackoh, @muhyun, @rschmucker, @aaronkl, @kaixinbaba, @sflender, @jojo19893, @mak-454

    Major Changes

    General

    • MacOS is now fully supported.
    • Windows is now experimentally supported. Installation instructions for Windows are still in progress.
    • Python 3.8 is now supported.
    • Overhauled API. APIs between TabularPredictor, TextPredictor, and ImagePredictor are now much more consistent. @innixma, @sxjscience, @zhreshold, @jwmueller, @gradientsky
    • Updated AutoGluon to a namespace package, now individual modules can be separately installed to improve flexibility. As an example, to only install HPO related functionality, you can get a minimal install via pip install autogluon.core. For a full list of available submodules, see this link. @gradientsky (#694)
    • Significantly improved robustness of HPO scheduling to avoid errors for user. @mseeger, @gradientsky, @rschmucker, @innixma (#713, #735, #750, #754, #824, #920, #924)
    • mxnet is no longer a required dependency in AutoGluon. @mseeger (#726)
    • Various dependency version upgrades.

    Tabular

    • Major API refactor. @innixma (#768, #855, #869)
    • Multimodal Tabular + Text support (Tutorial). Now Tabular can train a multi-modal Tabular + Text transformer model alongside its standard models, and achieve state-of-the-art results on multi-modal tabular + text datasets with 3 lines of code. @sxjscience, @Innixma (#740, #752, #756, #770, #776, #794, #802, #848, #852, #867, #869, #871, #877)
    • GPU support for LightGBM, CatBoost, XGBoost, MXNet neural network, and FastAI neural network models. Specify ag_args_fit={'num_gpus': 1} in TabularPredictor.fit() to enable. @innixma (#896)
    • sample_weight support. Tabular can now handle user-defined sample weights for imbalanced datasets. See the sketch after this list. @jwmueller (#942, #962)
    • Multi-label prediction support (Tutorial). Tabular can now predict across multiple label columns. @jwmueller (#953)
    • Added student model ensembling in model distillation. @innixma (#937)
    • Generally improved accuracy and robustness due to a variety of internal improvements and the addition of new models. (v0.1.0 gets a better score on over 70% of datasets in benchmarking compared to v0.0.15!)
    • New model: XGBoost. @sackoh (#691)
    • New model: FastAI Tabular Neural Network. @gradientsky (#742, #748, #826, #839, #842)
    • New model: TextPredictorModel (Multi-modal transformer) (Requires GPU). @sxjscience (#770)
    • New experimental model: TabTransformer (Tabular transformer model (paper)). @Chudbrochil (#723)
    • New experimental model: FastText. @songqiang (#580)
    • View all available models in our documentation: https://auto.gluon.ai/stable/api/autogluon.tabular.models.html
    • New advanced functionality: Extract out-of-fold predictions from a fit TabularPredictor (docs). @innixma (#779)
    • Greatly optimized and expanded upon feature importance calculation functionality. Now predictor.feature_importance() returns confidence bounds on importance values. @innixma (#803)
    • New experimental functionality: predictor.fit_extra() enables the fitting of additional models on top of an already fit TabularPredictor object (docs). @innixma (#768)
    • Per-model HPO support. Now you can specify hyperparameter_tune_kwargs in a model's hyperparameters via 'ag_args': {'hyperparameter_tune_kwargs': hpo_args}. @innixma (#883)
    • Sped up preprocessing runtimes by 100x+ on large (10M+ row) datasets by subsampling data during feature duplicate resolution. @Innixma (#950)
    • Added SHAP notebook tutorials. @jwmueller (#720)
    • Heavily optimized CatBoost inference speed during online-inference. @innixma (#724)
    • KNN models now respect time_limit. @innixma (#845)
    • Added stack ensemble visualization method. @muhyun (#786)
    • Added NLP token prefiltering logic for ngram generation. @sflender (#907)
    • Added initial support for compression of model files to reduce disk usage. @adrienatallah (#940, #944)
    • Numerous bug fixes. @innixma, @jwmueller, @gradientsky (many...)
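
    A minimal sketch (not from the release notes) of sample_weight support and fit_extra mentioned above, assuming train_data has a numeric 'weight' column; names and hyperparameters are illustrative:

    from autogluon.tabular import TabularPredictor

    # Rows are weighted by the 'weight' column during training.
    predictor = TabularPredictor(label='class', sample_weight='weight').fit(train_data)

    # Fit additional models on top of the already trained predictor.
    predictor.fit_extra(hyperparameters={'GBM': [{'extra_trees': True}]})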

    Text

    • Major API refactor. @sxjscience (#876, #936, #972, #975)
    • Support multi-GPU inference. @sxjscience (#873)
    • Greatly improved user time_limit adherence. @innixma (#877)
    • Fixed bug in model deserialization. @jojo19893 (#708)
    • Numerous bug fixes. @sxjscience (#836, #847, #850, #861, #865, #963, #980)

    Vision

    • Major API refactor. @zhreshold (#733, #828, #882, #930, #946)
    • Greatly improved user time_limit adherence. @zhreshold
  • v0.0.15(Dec 8, 2020)

  • v0.0.14(Oct 21, 2020)

    Changes

    Tabular

    • Complete overhaul of feature generation, major improvements to flexibility, speed, memory usage, and stability @Innixma (#584, #661).
    • Revamped tabular tutorials @jwmueller (#636).
    • Added fastai neural network tabular model (not used by default: requires Torch) @gradientsky (#627).
    • Added LightGBM Extra Trees (LightGBM_XT) model @Innixma (#681).
    • Updated model training priority for multiclass, moved neural networks to train ahead of trees @Innixma (#676).
    • Added .persist_models(), .unpersist_models() methods to TabularPredictor @Innixma (#640).
    • Improved neural network training time @jwmueller (#598).
    • Added example for chunked inference @daveharmon (#634).
    • Improved memory stability on large datasets @Innixma (#644).
    • Reduced maximum memory usage of predictor.leaderboard() @Innixma (#648).
    • Updated LightGBM to v3.x, resulting in ~2x speedup in most cases @Innixma (#662).
    • Updated CatBoost to v0.24.x @Innixma (#664).
    • Updated scikit-learn to <0.24 (from <0.23) @Innixma (#671).
    • Updated pandas version to >=1.0 (from <1.0) @Innixma (#670).
    • Added GPU support for CatBoost @Innixma (#682).
    • Code cleanup @Innixma (#645, #665, #677, #680, #689).
    • Bug Fixes @Innixma, @gradientsky, @jwmueller (#643, #666, #678, #688).

    Text

    • Bug Fixes @sxjscience (#651, #653).

    General

    • Upgraded to mxnet 1.7 (from 1.6) @sxjscience (#650).
    • Updated all absolute imports to relative imports @Innixma (#637).
    • Documentation Improvements @aaronkl, @rdimaio, @jwmueller (#638, #639, #679).
    • Code cleanup @tirkarthi (#660).
    • Bug Fixes @Innixma, @aaronkl (#674, #686).
  • v0.0.13(Aug 24, 2020)

    Changes

    Tabular

    • Added model distillation @jwmueller (#547).
    • Added FAISS KNN model @brc7 (#557).
    • Refactored Feature Generation (Part 1) @Innixma (#578).
    • Added extra_info argument to predictor.leaderboard @Innixma (#605).
    • Optimized out-of-fold feature memory usage by 50% @Innixma (#588).
    • Added confusion matrix to predictor.evaluate_predictions() output @alan-aipe (#571).
    • Improved output directory generation robustness @songqiang (#620).
    • Improved stability on large datasets by reducing maximum memory usage ratio of RF, XT, and KNN models @Innixma (#630).

    Text

    • Added TextPrediction Task @sxjscience (#556).

    General

    • Added mxnet 1.7 support @sxjscience (#546).
    • Numerous bug fixes @Innixma, @jwmueller, @sxjscience, @zhreshold, @yongzhengqi, (#559, #568, #577, #590, #592, #597, #600, #604, #621, #625, #629).
    • Documentation improvements @jwmueller, @sxjscience, @songqiang, @Bharat123rox (#554, #561, #585, #609, #628, #631).
  • v0.0.12(Jul 14, 2020)

    Changes

    General

    • Removed gluonnlp from dependencies, gluonnlp can now be installed as an optional dependency to enable the text module (#512).
    • Documentation improvements (#503, #529, #549).

    Tabular

    • Added custom model support (#551).
    • Added support for specifying the tuning_data argument in TabularPrediction.fit() with test data (without the label column) to improve data preprocessing and final predictive accuracy on the test data (#551).
    • Fixed major defect added in 0.0.11 which caused the Tabular neural network model to crash during training when categorical features with many possible values were present (#542).
    • Disabled usage of text ngram features in KNN models to dramatically improve inference speed on NLP problems (#531).
    • Added fit_weighted_ensemble() function to TabularPredictor class. Now the user can train additional weighted ensembles post-fit using any subset of the existing trained models (#550).
    • Added AG_args_fit argument to enable advanced model training control such as per-model time limit and memory usage (#531).
    • Added excluded_model_types argument to TabularPrediction.fit() to enable simplified removal of model types without editing the hyperparameters argument (#543).
    • Added version check when loading a predictor, will log a warning if the predictor was trained on a different version of AutoGluon (#536).
    • Improved support for GPU on CatBoost (#527).
    • Moved CatBoost to lazy import to enable running Tabular without installing CatBoost (#534).
    • Added support for training models with no features, in order to get a best guess prediction based only on the average label value (#537).
    • Major refactor of internal feature_types_metadata object and AutoFeatureGenerator (#548).
    • Major refactor of internal variable names (#551).

    Core

    • Minor scheduler cleanup (#523, #540).
  • v0.0.11(Jun 15, 2020)

    Changes

    General

    • Added bayesopt and bayesopt_hyperband schedulers (#501, #507)
    • Updated minimum sklearn version from 0.20 to 0.22 (#521)

    Tabular

    • Optimized memory utilization for text features (#513)
    • Optimized memory utilization for tabular neural network (#518)
    • Optimized training speed of LightGBM by ~100%-200% on most datasets (#511)
    • Optimized training speed of CatBoost by ~100% on regression datasets (#514)
    • Added return_original_features argument to transform_features, plus bug fixes (#517)
    • Improved tabular neural network training stability on log loss metric (#481)
    • Numerous fixes and code cleanup (#510, #502, #505, #516)
  • v0.0.10(Jun 4, 2020)

    Changes

    General

    • Removed unnecessary thread workers upon importing autogluon (#494, #495)
    • Suppressed excessive logging of distributed thread workers (#496)
    • Capped gluoncv version to 0.x (#484)
    • Unified scheduler creation (#470)

    Tabular

    • Refactored hyperparameter argument, added options for different models per stack layer (#489)
    • Optimized CatBoost training time when many features are present (#489)
    • Enabled automatic type setting to dtypes during inference (#463)
    • Added feature importance for original features (#479)
    • Fixed root_mean_squared_error metric (#464)
    • Fixed pac_score metric (#483)
    • Various Fixes (#465, #472, #474, #489)
  • v0.0.9(May 12, 2020)

  • v0.0.8(May 11, 2020)

  • v0.0.7(May 9, 2020)

    Changes

    General

    • Updated dependency versions.

    Tabular

    • Added simplified argument preset options to tabular task.fit(). (#453)
    • Added options to significantly reduce disk usage by >10x during and after model training. (#453)
    • Added refit model support which dramatically reduces inference times. (#408, #412)
    • Added transform_features function to TabularPredictor. (#431)
    • Added transform_labels function to TabularPredictor. (#435)
    • Numerous improvements to text handling (#440, #451)
    • Improved memory stability and inference speed of tabular neural network. (#422)
    • Added NetworkX directed acyclic graph stack ensemble representation. (#385, #403)
    • Added linear model support (#375)
    • Improved leaderboard inference and fit time estimates. (#385)
    • Added info function to TabularPredictor. (#444)
    • Added .tsv file detection support. (#396)
    • Numerous code cleanup and bug fixes. (#373, #374, #379, #387, #389, #400, #410, #443, #446, #452)

    Image Classification

    • Added auto augmentation for images (#391)
    • Bug fixes. (#388, #402, #417, #436, #438)

    Object Detection

    • Added save() and load() functionality. (#405)

    Core

    • Major refactoring of schedulers and searchers. (#445)
    • Bug fixes. (#384, #386, #427)
  • v0.0.6(Mar 24, 2020)

    Changes

    General

    • Updated dependency versions.

    Tabular

    • Added support for calculating feature importance on fitted TabularPredictor.
    • Enabled string path input of datasets to TabularPredictor.
    • Added option for user to specify model to predict with in TabularPredictor.
    • Added support for relative paths to TabularPredictor, enabling users to move the models between directories and machines without introducing loading issues.
    • Added support for moving models between machines of different operating systems.
    • Numerous major bug fixes and improvements to hyperparameter tuning resulting in significantly more stable functionality.
    • Numerous code cleanup.
    • Numerous bug fixes.