PyCIL: A Python Toolbox for Class-Incremental Learning


Introduction | Methods Reproduced | Reproduced Results | How To Use | License | Acknowledgements | Contact



The code repository for "PyCIL: A Python Toolbox for Class-Incremental Learning" [paper] in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:

@misc{zhou2021pycil,
  title={PyCIL: A Python Toolbox for Class-Incremental Learning}, 
  author={Da-Wei Zhou and Fu-Yun Wang and Han-Jia Ye and De-Chuan Zhan},
  year={2021},
  eprint={2112.12533},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Introduction

Traditional machine learning systems are deployed under the closed-world setting, which assumes that the entire training data is available before the offline training process. However, real-world applications often face incoming new classes, and a model should incorporate them continually. This learning paradigm is called Class-Incremental Learning (CIL). We propose a Python toolbox that implements several key algorithms for class-incremental learning to ease the burden of researchers in the machine learning community. The toolbox contains implementations of a number of founding works of CIL, such as EWC and iCaRL, but also provides current state-of-the-art algorithms that can be used for conducting novel fundamental research. This toolbox, named PyCIL for Python Class-Incremental Learning, is open source under the MIT license.
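
As a concrete picture of this protocol, the sketch below (illustrative only, not PyCIL code) splits 100 classes into an initial stage followed by equal increments, mirroring the init-cls/increment settings described later in this README:

    # Toy class-incremental stream: 100 classes arriving as an initial stage
    # of 10 classes plus nine increments of 10 classes each (values are examples).
    init_cls, increment, total_classes = 10, 10, 100
    stages = [list(range(init_cls))]
    for start in range(init_cls, total_classes, increment):
        stages.append(list(range(start, start + increment)))
    for t, classes in enumerate(stages):
        print(f"stage {t}: classes {classes[0]}-{classes[-1]}")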

Methods Reproduced

  • FineTune: Baseline method which simply updates parameters on new tasks, suffering from Catastrophic Forgetting. By default, weights corresponding to the outputs of previous classes are not updated.
  • EWC: Overcoming catastrophic forgetting in neural networks. [paper]
  • LwF: Learning without Forgetting. [paper]
  • Replay: Baseline method with exemplars.
  • GEM: Gradient Episodic Memory for Continual Learning. [paper]
  • iCaRL: Incremental Classifier and Representation Learning. [paper]
  • BiC: Large Scale Incremental Learning. [paper]
  • WA: Maintaining Discrimination and Fairness in Class Incremental Learning. [paper]
  • PODNet: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. [paper]
  • DER: DER: Dynamically Expandable Representation for Class Incremental Learning. [paper]
  • Coil: Co-Transport for Class-Incremental Learning. [paper]

Reproduced Results

[Accuracy curves on CIFAR100 and ImageNet100 appear here in the original README.]

More experimental details and results are shown in our paper.

How To Use

Clone

Clone this github repository:

git clone https://github.com/G-U-N/PyCIL.git
cd PyCIL

Dependencies

  1. torch 1.8.1
  2. torchvision 0.6.0
  3. tqdm
  4. numpy
  5. scipy
  6. quadprog
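
If you manage packages with pip, installing the dependencies can look like the line below (left unpinned here for simplicity; see the versions listed above and match torch/torchvision to your CUDA setup):

pip install torch torchvision tqdm numpy scipy quadprog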

Run experiment

  1. Edit the [MODEL NAME].json file in ./exps/ for global settings.
  2. Edit the hyperparameters in the corresponding [MODEL NAME].py file (e.g., models/icarl.py).
  3. Run:
python main.py --config=./exps/[MODEL NAME].json

where [MODEL NAME] should be chosen from: finetune, ewc, lwf, replay, gem, icarl, bic, wa, podnet, der.
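
For example, to run iCaRL:

python main.py --config=./exps/icarl.json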

Hyper-parameters

When using PyCIL, you can edit the global parameters and algorithm-specific hyper-parameters in the corresponding json file.

These parameters include:

  • memory-size: The total number of exemplars kept during the incremental learning process. Assuming there are $K$ classes at the current stage, the model will preserve $\left[\frac{\text{memory-size}}{K}\right]$ exemplars per class (see the worked example below).
  • init-cls: The number of classes in the first incremental stage. Since there are different settings in CIL with a different number of classes in the first stage, our framework enables different choices to define the initial stage.
  • increment: The number of classes in each incremental stage $i$, $i > 1$. By default, every incremental stage after the first contains the same number of classes.
  • convnet-type: The backbone network for the incremental model. According to the benchmark setting, ResNet32 is utilized for CIFAR100, and ResNet18 is utilized for ImageNet.
  • seed: The random seed adopted for shuffling the class order. According to the benchmark setting, it is set to 1993 by default.

Other parameters related to model optimization, e.g., batch size, number of epochs, learning rate, learning rate decay, weight decay, milestones, and temperature, can be modified in the corresponding Python file.
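
As a worked example of the memory budget (illustrative only, not PyCIL code): with a total budget of 2,000 exemplars on CIFAR100, the per-class count shrinks as more classes are seen. The key names below follow the underscore style of the exps/*.json files shown elsewhere on this page; the values are examples.

    # Per-class exemplar budget over a 10-10 CIFAR100 protocol (illustrative values).
    config = {
        "memory_size": 2000,   # total exemplar budget
        "init_cls": 10,        # classes in the first stage
        "increment": 10,       # classes added in each later stage
    }

    seen = config["init_cls"]
    while seen <= 100:
        per_class = config["memory_size"] // seen   # [memory-size / K] exemplars per class
        print(f"{seen:3d} classes seen -> {per_class} exemplars per class")
        seen += config["increment"]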

Datasets

We have implemented the pre-processing of CIFAR100, imagenet100 and imagenet1000. When training on CIFAR100, this framework will automatically download it. When training on imagenet100/1000, you should specify the folder of your dataset in utils/data.py.

    def download_data(self):
        assert 0, "You should specify the folder of your dataset"
        train_dir = '[DATA-PATH]/train/'
        test_dir = '[DATA-PATH]/val/'
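
The intended edit for ImageNet is to remove the assert and point the two paths at your local train/val split; a minimal sketch, with hypothetical paths:

    def download_data(self):
        # Hypothetical local ImageNet-100 layout; replace with your own paths.
        # The assert from the original snippet is removed once the paths are set.
        train_dir = '/data/imagenet100/train/'
        test_dir = '/data/imagenet100/val/'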

License

Please check the MIT license that is listed in this repository.

Acknowledgements

We thank the following repositories for providing helpful components/functions used in our work.

Contact

If you have any questions, please feel free to propose new features by opening an issue, or contact the authors: Da-Wei Zhou ([email protected]) and Fu-Yun Wang ([email protected]). Enjoy the code.

Comments
  • why FeTrIL acc is so bad

    config:

    {
        "prefix": "train",
        "dataset": "cifar100",
        "memory_size": 0,
        "shuffle": true,
        "init_cls": 50,
        "increment": 10,
        "model_name": "fetril",
        "convnet_type": "resnet32",
        "device": ["0"],
        "seed": [1993],
        "init_epochs": 200,
        "init_lr" : 0.1,
        "init_weight_decay" : 0,
        "epochs" : 50,
        "lr" : 0.05,
        "batch_size" : 128,
        "weight_decay" : 0,
        "num_workers" : 8,
        "T" : 2
    }
    

    final result:

    2022-12-16 23:22:29,436 [fetril.py] => svm train: acc: 10.01
    2022-12-16 23:22:29,451 [fetril.py] => svm evaluation: acc_list: [15.2, 12.47, 10.84, 9.78, 9.06, 8.13]
    2022-12-16 23:22:31,440 [trainer.py] => No NME accuracy.
    2022-12-16 23:22:31,441 [trainer.py] => CNN: {'total': 13.63, '00-09': 19.1, '10-19': 18.3, '20-29': 22.5, '30-39': 17.3, '40-49': 30.1, '50-59': 3.5, '60-69': 11.4, '70-79': 3.1, '80-89': 6.1, '90-99': 4.9, 'old': 14.6, 'new': 4.9}
    2022-12-16 23:22:31,441 [trainer.py] => CNN top1 curve: [28.94, 21.87, 18.54, 16.76, 15.1, 13.63]
    2022-12-16 23:22:31,441 [trainer.py] => CNN top5 curve: [53.28, 47.07, 43.17, 39.08, 36.41, 33.98]
    

    I can provide the log if you need it.

    opened by muyuuuu 10
  • coil fixed memory

    Amazing toolbox!!!

    I have a question about your results for Coil.

    In your paper, Section 5.2:

    Since all compared methods are exemplar-based, we fix an equal number of exemplars for every method, i.e., 2,000 exemplars for CIFAR-100 and ImageNet100, 20,000 for ImageNet-1000. As a result, the picked exemplars per class is 20, which is abundant for every class.

    I just want to confirm the replay size: a fixed memory of 2,000 in total over the training process, which means that "fixed_memory" in the json file is set to false, as shown in this link. I'm a little confused about this setting because there are different protocols in the recent community.

    https://github.com/G-U-N/PyCIL/blob/6d2c1280e8ceb139f4e74a70297519af9eea4a5e/exps/coil.json#L6

    The reason why I came across this issue is:

    [screenshot of the results table]

    As shown in this table, the iCaRL result for 10 steps is reported as about 61.74, which is lower than the roughly 64 reported in the original paper.

    Hope to get your reply soon. Thanks in advance.

    opened by qsunyuan 8
  • Inquiry about Pre-trained Model & Parameter Setup

    Many thanks for this wonderful framework! It really helps our work a lot!

    I have some questions about your experiment setup.

    1. I have reproduced your 10-stage CIFAR-100 experiments on my own PC (3*3090). The results are as follows:

    [screenshot of the reproduced results]

    I followed all the bash files and parameters you have set up, but the results seem to be much lower than yours. Is that because you use an ImageNet pre-trained ResNet? Thanks!

    2. I found you set different model optimization parameters (e.g., learning rate, epochs, milestones, etc.) for each continual learning approach (instead of setting the same hyperparameter policy for all the continual learning approaches). I was wondering whether this kind of parameter setup could be considered a "fair" comparison?

    Thanks in advance for your answer!

    question 
    opened by longbai1006 4
  • On data augmentation in FOSTER

    FOSTER is a work that I find very interesting.

    I tried the cifar100 benchmark setting.

    I found that the implementation in PyCIL does not include the data augmentation used here: https://github.com/G-U-N/ECCV22-FOSTER/blob/340fbd1a16ca6a787e522b9dc349a5e742f00c30/utils/data.py#L73

    As a result, the PyCIL implementation performs a bit worse. Are there any other differences in the details?

    opened by qsunyuan 3
  • Single GPU training error in DER

    Hi,

    Thank you for this wonderful code base!

    I noticed the current version doesn't support single GPU training. Could you please add this feature?

    Thank you!

    opened by htwang14 3
  • Weird Training Result with train_acc=0 & loss=0

    Thanks for your excellent work!

    But when I tried my own dataset with it, I found that the training result was really weird; take EWC as an example:

    ================ EWC ================
    Task 0, Epoch 200/200 => Loss 0.886, Train_accy 70.60, Test_accy 73.00
    Task 1, Epoch 180/180 => Loss 0.002, Train_accy 0.00
     CNN: {'total': 7.36, '00-09': 7.8, 'old': 7.8, 'new': 0.0}
    Task 2, Epoch 180/180 => Loss 0.002, Train_accy 0.00
     CNN: {'total': 38.67, '00-09': 46.4, '10-19': 0.0, 'old': 43.77, 'new': 0.0}
    Task 3, Epoch 180/180 => Loss 0.003, Train_accy 0.00
     CNN: {'total': 12.99, '00-09': 17.4, '10-19': 0.0, 'old': 14.5, 'new': 0.0}
    ...
    Task 9, Epoch 180/180 => Loss 0.004, Train_accy 0.00
     CNN: {'total': 28.66, '00-09': 55.6, '10-19': 0.0, 'old': 30.89, 'new': 0.0}
    CNN top1 curve: [74.0, 7.36, 38.67, 12.99, 1.22, 23.83, 34.52, 16.55, 8.56, 28.66]
    CNN top5 curve: [98.6, 52.26, 78.0, 45.82, 29.05, 53.58, 57.26, 48.85, 31.78, 48.66]
    

    I also tried it with LwF, and the problem remains.

    Questions About Loss Function

    I am wondering whether the issue lies in the loss function. I found that it is defined as loss_clf = F.cross_entropy(logits[:,self._known_classes:], targets-self._known_classes), along with loss_ewc.

    In my experiments, I set base_session=10 and increment=1. So for task 1, when calculating the loss, targets=[10, 10, ..., 10] and self._known_classes=10. Therefore, the target (second argument) of the cross_entropy was [0, 0, ..., 0]. Does that mean the loss function pushes the model to reject the new task, making the logits of the new task approach 0?
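
    A standalone check of the arithmetic described above (illustrative only, not taken from PyCIL): with 10 known classes and an increment of 1, the sliced logits have a single column, and cross-entropy over a single logit is always exactly zero, which is consistent with the near-zero losses in the log above.

    import torch
    import torch.nn.functional as F

    known_classes = 10                                  # base_session = 10, increment = 1
    logits = torch.randn(4, 11)                         # 10 old classes + 1 new class
    targets = torch.full((4,), 10, dtype=torch.long)    # every sample is new class 10
    fake_targets = targets - known_classes              # -> tensor([0, 0, 0, 0])
    loss_clf = F.cross_entropy(logits[:, known_classes:], fake_targets)
    print(loss_clf.item())                              # 0.0: softmax over one logit is 1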

    Tuning Loss Function

    I tried to change the second argument to [1, 1, ..., 1] by using targets-self._known_classes+1, but an error arises:

    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
    ...
    Traceback (most recent call last):
      File "/tmp/pycharm_project_241/main.py", line 33, in <module>
        main()
      File "/tmp/pycharm_project_241/main.py", line 13, in main
        train(args)
      File "/tmp/pycharm_project_241/trainer.py", line 18, in train
        _train(args)
      File "/tmp/pycharm_project_241/trainer.py", line 48, in _train
        model.incremental_train(data_manager)
      File "/tmp/pycharm_project_241/models/ewc.py", line 62, in incremental_train
        self._train(self.train_loader, self.test_loader)
      File "/tmp/pycharm_project_241/models/ewc.py", line 84, in _train
        self._update_representation(train_loader, test_loader, optimizer, scheduler)
      File "/tmp/pycharm_project_241/models/ewc.py", line 138, in _update_representation
        loss.backward()
      File "/home/linkdata/data/yaoxinjie/anaconda3/envs/FACT18/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/linkdata/data/yaoxinjie/anaconda3/envs/FACT18/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
    

    I don't know why this happens or how to handle it. Could you please help me clarify it? Thanks a lot!

    opened by Steven-cpp 3
  • why loss_clf = F.cross_entropy(logits[:, self._known_classes:], fake_targets)?

    Thank you very much for your excellent work.

    In models/finetune.py, line 117:

                    fake_targets=targets-self._known_classes
                    loss_clf = F.cross_entropy(logits[:,self._known_classes:], fake_targets)
                    
                    loss=loss_clf
    

    Why not:

                    loss_clf = F.cross_entropy(logits, targets)
                    loss=loss_clf
    
    opened by chester-w-xie 3
  • foster error: TypeError: string indices must be integers

    This happens after training the second task and before compression:

    Task 1, Epoch 167/170 => Loss 0.752, Loss_clf 0.000, Loss_fe 0.001, Loss_kd 0.375, Train_accy 99.00:  98%|█████████▊| 167/170 [09:02<00:10,  3.35s/it]
    Task 1, Epoch 168/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.75:  98%|█████████▊| 167/170 [09:04<00:10,  3.35s/it]
    Task 1, Epoch 168/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.75:  99%|█████████▉| 168/170 [09:04<00:06,  3.20s/it]
    Task 1, Epoch 169/170 => Loss 0.744, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.371, Train_accy 98.81:  99%|█████████▉| 168/170 [09:07<00:06,  3.20s/it]
    Task 1, Epoch 169/170 => Loss 0.744, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.371, Train_accy 98.81:  99%|█████████▉| 169/170 [09:07<00:03,  3.10s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67:  99%|█████████▉| 169/170 [09:10<00:03,  3.10s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67: 100%|██████████| 170/170 [09:10<00:00,  3.01s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67: 100%|██████████| 170/170 [09:10<00:00,  3.24s/it]
    Traceback (most recent call last):
      File "foster.py", line 31, in <module>
        main()
      File "foster.py", line 12, in main
        train(args)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/trainer.py", line 18, in train
        _train(args)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/trainer.py", line 67, in _train
        model.incremental_train(data_manager)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 89, in incremental_train
        self._train(self.train_loader, self.test_loader)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 169, in _train
        self._feature_compression(train_loader, test_loader)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 289, in _feature_compression
        self._snet = FOSTERNet(self.args["convnet_type"], False)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/utils/inc_net.py", line 402, in __init__
        self.convnet_type = args["convnet_type"]
    TypeError: string indices must be integers
    
    opened by muyuuuu 2
  • the memory of rmm

    Hello! First of all, thank you very much for your work. When I read the RMM code, I found that the memory is not 2000. Instead, only 2000 pieces of data were used for the new task. Is RMM originally like this? I added the following code to rmm.py to see the number of memories.

    [screenshots of the added code and its output]

    opened by zhl98 2
  • What is the purpose of the reduce Exemplar function

    I'd like to know the roles of the reduce exemplar, construct exemplar, and construct exemplar unified functions in your base.py file. Is there a reference for how they work?

    opened by Z-ZHIZHONG 2
  • General idea of the code framework

    Hello, I have some questions for you. Is your code modified from the original DER? Could you give me a general idea of your thinking? Thank you very much. I look forward to your reply.

    opened by Z-ZHIZHONG 2
Releases(v0.1)
  • v0.1(Dec 12, 2022)

    This is the first GitHub release of PyCIL. The reproduced methods are listed as follows:

    • FineTune: Baseline method which simply updates parameters on new tasks, suffering from Catastrophic Forgetting. By default, weights corresponding to the outputs of previous classes are not updated.
    • EWC: Overcoming catastrophic forgetting in neural networks. PNAS2017 [paper]
    • LwF: Learning without Forgetting. ECCV2016 [paper]
    • Replay: Baseline method with exemplars.
    • GEM: Gradient Episodic Memory for Continual Learning. NIPS2017 [paper]
    • iCaRL: Incremental Classifier and Representation Learning. CVPR2017 [paper]
    • BiC: Large Scale Incremental Learning. CVPR2019 [paper]
    • WA: Maintaining Discrimination and Fairness in Class Incremental Learning. CVPR2020 [paper]
    • PODNet: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. ECCV2020 [paper]
    • DER: DER: Dynamically Expandable Representation for Class Incremental Learning. CVPR2021 [paper]
    • PASS: Prototype Augmentation and Self-Supervision for Incremental Learning. CVPR2021 [paper]
    • RMM: RMM: Reinforced Memory Management for Class-Incremental Learning. NeurIPS2021 [paper]
    • IL2A: Class-Incremental Learning via Dual Augmentation. NeurIPS2021 [paper]
    • SSRE: Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning. CVPR2022 [paper]
    • FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning. WACV2023 [paper]
    • Coil: Co-Transport for Class-Incremental Learning. ACM MM2021 [paper]
    • FOSTER: Feature Boosting and Compression for Class-incremental Learning. ECCV 2022 [paper]

    Stay tuned for more state-of-the-arts in PyCIL!

Owner
Fu-Yun Wang