9th place solution in "Santa 2020 - The Candy Cane Contest"

Overview

Santa 2020 - The Candy Cane Contest

My 9th place solution to the Kaggle competition "Santa 2020 - The Candy Cane Contest".

Basic Strategy

In this competition, the reward at each pull was decided by comparing a machine's threshold with a randomly generated number. If we knew the thresholds, it would be easy to calculate the probability of getting a reward; however, the agents cannot see the thresholds during the game, so we had to estimate them.
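As a concrete illustration of that mechanism, here is a minimal sketch based on my reading of the rules, not the official environment code: a machine pays out when a uniform draw falls below its hidden threshold, and the threshold decays slightly every time the machine is pulled (the ~3% decay is my recollection, not something stated in this write-up).

```python
import random

def pull(threshold, decay=0.97):
    """Sketch of one pull: thresholds assumed to live in [0, 100], reward paid when a
    uniform draw falls below the threshold, threshold decayed by ~3% per pull."""
    reward = 1 if random.uniform(0, 100) < threshold else 0
    return reward, threshold * decay

reward, new_threshold = pull(65.0)  # expected reward of this pull is about 0.65
```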

Like other teams, I downloaded game histories via the Kaggle API and created a dataset for supervised learning. The API response contains the true threshold values at each round, so I used them as the target variable.
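A rough sketch of how such training rows could be pulled out of a downloaded episode replay. The JSON field names ("steps", "observation", "lastActions", "thresholds") reflect my memory of the kaggle-environments replay format and the file name is hypothetical; treat both as assumptions rather than a description of this repository's scripts.

```python
import json

# Sketch only: extract per-round actions and the hidden thresholds from a saved replay.
with open("episode_replay.json") as f:          # hypothetical file name
    replay = json.load(f)

rows = []
for step in replay["steps"][1:]:                # skip the initial step with no actions
    obs = step[0]["observation"]                # agent 0's observation as stored in the replay
    rows.append({
        "round": obs["step"],
        "last_actions": obs["lastActions"],     # machines chosen in the previous round
        "thresholds": obs["thresholds"],        # true thresholds -> regression targets
    })
```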

In the middle of the competition, I found that quantile regression works much better than conventional L2 regression. I think the quantile parameter lets the agent adjust the balance between exploration and exploitation.
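With LightGBM, for example, this amounts to switching the objective from L2 regression to `quantile` and tuning `alpha`. A minimal sketch with placeholder data (the real features and targets come from the dataset described above):

```python
import numpy as np
import lightgbm as lgb

# Placeholder data: one row per (machine, round) state, target = true threshold.
X = np.random.rand(1000, 32)
y = np.random.rand(1000) * 100

params = {
    "objective": "quantile",   # pinball loss instead of L2
    "alpha": 0.65,             # predict the 65th percentile of the threshold
    "learning_rate": 0.05,
}
model = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=4000)
```

A quantile above 0.5 makes the predictions optimistic for machines the model is unsure about, which pushes the agent to explore them, much like an upper confidence bound; lowering the quantile makes the agent more exploitative.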

Features

| # | Name | Explanation |
| --- | --- | --- |
| #1 | round | index of the round in the game (0-1999) |
| #2 | last_opponent_chosen | whether the opponent agent chose this machine in the last round |
| #3 | second_last_opponent_chosen | whether the opponent agent chose this machine in the second-to-last round |
| #4 | third_last_opponent_chosen | whether the opponent agent chose this machine in the third-to-last round |
| #5 | opponent_repeat_twice | whether the opponent agent chose this machine in both of the last two rounds (#2 x #3) |
| #6 | opponent_repeat_three_times | whether the opponent agent chose this machine in all of the last three rounds (#2 x #3 x #4) |
| #7 | num_chosen | how many times the opponent and my agent chose this machine |
| #8 | num_chosen_mine | how many times my agent chose this machine |
| #9 | num_chosen_opponent | how many times the opponent agent chose this machine (#7 - #8) |
| #10 | num_get_reward | how many times my agent got a reward from this machine |
| #11 | num_non_reward | how many times my agent did not get a reward from this machine |
| #12 | rate_mine | ratio of my choices to the total number of choices (#8 / #7) |
| #13 | rate_opponent | ratio of the opponent's choices to the total number of choices (#9 / #7) |
| #14 | rate_get_reward | ratio of my rewarded choices to the total number of choices (#10 / #7) |
| #15 | empirical_win_rate | posterior expectation of the threshold based on my choices and rewards |
| #16 | quantile_10 | 10% point of the posterior distribution of the threshold based on my choices and rewards |
| #17 | quantile_20 | 20% point of the posterior distribution of the threshold based on my choices and rewards |
| #18 | quantile_30 | 30% point of the posterior distribution of the threshold based on my choices and rewards |
| #19 | quantile_40 | 40% point of the posterior distribution of the threshold based on my choices and rewards |
| #20 | quantile_50 | 50% point of the posterior distribution of the threshold based on my choices and rewards |
| #21 | quantile_60 | 60% point of the posterior distribution of the threshold based on my choices and rewards |
| #22 | quantile_70 | 70% point of the posterior distribution of the threshold based on my choices and rewards |
| #23 | quantile_80 | 80% point of the posterior distribution of the threshold based on my choices and rewards |
| #24 | quantile_90 | 90% point of the posterior distribution of the threshold based on my choices and rewards |
| #25 | repeat_head | how many times my agent chose this machine before the opponent agent chose it for the first time |
| #26 | repeat_tail | how many times my agent chose this machine after the opponent agent chose it for the last time |
| #27 | repeat_get_reward_head | how many times my agent got a reward from this machine before my agent first went unrewarded or the opponent agent first chose it |
| #28 | repeat_get_reward_tail | how many times my agent got a reward from this machine after my agent last went unrewarded or the opponent agent last chose it |
| #29 | repeat_non_reward_head | how many times my agent went unrewarded on this machine before my agent first got a reward or the opponent agent first chose it |
| #30 | repeat_non_reward_tail | how many times my agent went unrewarded on this machine after my agent last got a reward or the opponent agent last chose it |
| #31 | opponent_repeat_head | how many times the opponent agent chose this machine before my agent chose it for the first time |
| #32 | opponent_repeat_tail | how many times the opponent agent chose this machine after my agent chose it for the last time |
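As a rough illustration of features #15-#24, the sketch below computes a grid posterior over a machine's threshold, assuming rewards behave like independent Bernoulli draws with success probability threshold / 100 and ignoring the decay of the threshold over time; it approximates the idea rather than reproducing the repository's implementation.

```python
import numpy as np

def threshold_posterior_features(num_get_reward, num_non_reward, grid_size=101):
    """Approximate posterior over the threshold from my rewarded / unrewarded pulls.

    Simplifying assumptions: success probability = threshold / 100, flat prior,
    and no decay of the threshold between pulls.
    """
    grid = np.linspace(0.0, 1.0, grid_size)                 # candidate success probabilities
    likelihood = grid ** num_get_reward * (1.0 - grid) ** num_non_reward
    posterior = likelihood / likelihood.sum()

    cdf = np.cumsum(posterior)
    empirical_win_rate = float((grid * posterior).sum())    # feature #15
    quantiles = {f"quantile_{q}": float(grid[np.searchsorted(cdf, q / 100)])
                 for q in range(10, 100, 10)}               # features #16-#24
    return empirical_win_rate, quantiles
```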

Software

  • Python 3.7.8
  • numpy==1.18.5
  • pandas==1.0.5
  • matplotlib==3.2.2
  • lightgbm==3.1.1
  • catboost==0.24.4
  • xgboost==1.2.1
  • tqdm==4.47.0

Usage

  1. download data from Kaggle with /src/01_downlaod/download.py

  2. create a dataset with /src/02_[regressor]/preprocess.py

  3. train a model with /src/02_[regressor]/train.py

Top Agents

| Regressor | Loss | NumRound | LearningRate | LB Score | SubmissionID |
| --- | --- | --- | --- | --- | --- |
| LightGBM | Quantile (0.65) | 4000 | 0.05 | 1449.4 | 19318812 |
| LightGBM | Quantile (0.65) | 4000 | 0.10 | 1442.1 | 19182047 |
| LightGBM | Quantile (0.65) | 3000 | 0.03 | 1438.8 | 19042049 |
| LightGBM | Quantile (0.66) | 3500 | 0.04 | 1433.9 | 19137024 |
| CatBoost | Quantile (0.65) | 4000 | 0.05 | 1417.6 | 19153745 |
| CatBoost | Quantile (0.67) | 3000 | 0.10 | 1344.5 | 19170829 |
| LightGBM | MSE | 4000 | 0.03 | 1313.3 | 19093039 |
| XGBoost | Pairwise | 1500 | 0.10 | 1173.5 | 19269952 |
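At play time, the trained regressor is presumably used by scoring every machine and pulling the one with the highest predicted threshold. A minimal sketch under that assumption (`features` stands for the per-machine matrix of the 32 features listed above; this is not code from the repository):

```python
import numpy as np

def choose_machine(model, features):
    """Pick a machine given a trained regressor and an (n_machines, 32) feature matrix."""
    predicted_threshold = model.predict(features)   # estimated (quantile of the) threshold per machine
    return int(np.argmax(predicted_threshold))      # pull the most promising machine
```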
Owner
toshi_k