Kaggle-titanic - A tutorial for Kaggle's Titanic: Machine Learning from Disaster competition. Demonstrates basic data munging, analysis, and visualization techniques, along with examples of supervised machine learning.

Overview

Kaggle-titanic

This is a tutorial in an IPython Notebook for the Kaggle competition, Titanic: Machine Learning from Disaster. The goal of this repository is to provide an example of a competitive analysis for those interested in getting into the field of data analytics or in using Python for Kaggle's data science competitions.

Quick Start: View a static version of the notebook in the comfort of your own web browser.

Installation:

To run this notebook interactively:

  1. Download this repository in a zip file by clicking on this link or execute this from the terminal: git clone https://github.com/agconti/kaggle-titanic.git
  2. Install virtualenv.
  3. Navigate to the directory where you unzipped or cloned the repo and create a virtual environment with virtualenv env.
  4. Activate the environment with source env/bin/activate.
  5. Install the required dependencies with pip install -r requirements.txt.
  6. Execute ipython notebook from the command line or terminal.
  7. Click on Titanic.ipynb on the IPython Notebook dashboard and enjoy!
  8. When you're done deactivate the virtual environment with deactivate.

Dependencies:

All required packages are listed in requirements.txt and are installed in step 5 of the installation instructions above.

Kaggle Competition | Titanic Machine Learning from Disaster

The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.

One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.

In this contest, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.

This Kaggle Getting Started Competition provides an ideal starting place for people who may not have a lot of experience in data science and machine learning.

From the competition homepage.

Goal for this Notebook:

Show a simple example of an analysis of the Titanic disaster in Python using a full complement of PyData utilities. It is aimed at those looking to get into the field, and at those already in the field who would like to see an example of an analysis done with Python.

This Notebook will show basic examples of:

Data Handling

  • Importing Data with Pandas
  • Cleaning Data
  • Exploring Data through Visualizations with Matplotlib
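
For readers new to these tools, here is a minimal sketch of that workflow (an illustration, not the notebook's exact code); it assumes the repo's data/train.csv and Kaggle's standard column names:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Import data with Pandas (path assumes the repo's data/ layout).
df = pd.read_csv("data/train.csv")

# Clean data: drop sparse columns and any rows with missing values.
df = df.drop(["Ticket", "Cabin"], axis=1).dropna()

# Explore data through visualizations with Matplotlib.
df["Survived"].value_counts().plot(kind="bar", title="Survival (1 = survived)")
plt.show()
df["Age"].hist()
plt.title("Passenger age distribution")
plt.show()
```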

Data Analysis

  • Supervised Machine Learning Techniques:
      + Logit Regression Model
      + Plotting results
      + Support Vector Machine (SVM) using 3 kernels
      + Basic Random Forest
      + Plotting results
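
A rough illustration of those three model families, as a scikit-learn sketch under assumed feature choices (the notebook itself may organize this differently, e.g. using statsmodels for the logit):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Reload and clean the training data as in the sketch above.
df = pd.read_csv("data/train.csv").drop(["Ticket", "Cabin"], axis=1).dropna()

# Sex is a string column in the raw csv, so map it to 0/1 before modeling.
df["Sex"] = df["Sex"].map({"female": 0, "male": 1})
X = df[["Pclass", "Sex", "Age", "SibSp", "Fare"]].values
y = df["Survived"].values

# Logit regression model.
logit = LogisticRegression(max_iter=1000).fit(X, y)

# Support Vector Machine (SVM) using 3 kernels.
svms = {kernel: SVC(kernel=kernel).fit(X, y) for kernel in ("linear", "rbf", "poly")}

# Basic random forest.
forest = RandomForestClassifier(n_estimators=100).fit(X, y)

print(logit.score(X, y), svms["rbf"].score(X, y), forest.score(X, y))
```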

Validation of the Analysis

  • K-folds cross-validation to validate results locally
  • Output the results from the IPython Notebook to Kaggle
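
A hedged sketch of that step, reusing forest, X, and y from the sketch above; the file paths, fold count, and feature handling here are assumptions rather than the notebook's exact code:

```python
import pandas as pd
from sklearn.model_selection import cross_val_score

# K-folds cross validation: estimate accuracy locally before submitting.
scores = cross_val_score(forest, X, y, cv=10)
print("10-fold CV accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))

# Output results in the two-column format Kaggle expects:
# PassengerId plus a Survived value that is strictly 0 or 1.
test = pd.read_csv("data/test.csv")
test["Sex"] = test["Sex"].map({"female": 0, "male": 1})
features = ["Pclass", "Sex", "Age", "SibSp", "Fare"]
X_test = test[features].fillna(test[features].median())

submission = pd.DataFrame({
    "PassengerId": test["PassengerId"],
    "Survived": forest.predict(X_test).astype(int),
})
submission.to_csv("data/output/submission.csv", index=False)
```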

Benchmark Scripts

To find the basic scripts for the competition benchmarks, look in the "Python Examples" folder. These scripts are based on the originals provided by Astro Dave but have been reworked so that they are easier for newcomers to understand.

Competition Website: http://www.kaggle.com/c/titanic-gettingStarted

Comments
  • output file

    output file "data/output/logitregres.csv" contains the survived values other than {0,1}

    Thanks for the great article and code. I see that direct submission of output file to kaggle results in error and it says Survived column values must be either 0 or 1.

    Am I missing something? Should I have a cutoff and turn them in to 0 or 1?

    opened by srini09 2
  • Fixed issue with bar chart

    If auto-sorting is on (the default), the returned Series object is sorted by values, i.e. for "male" the not-survived category is reported first and for "female" the survived category. When summing over male and female, the categories get mixed up.

    opened by metatier 2
  • Adds the updated csv files with capitalized column names

    Adds the updated csv files with capitalized column names. Fixed the IPython Notebook so it works with capitalized column headers. Updated the data folder with the two new csv files (train and test) as well as output/logitregres.csv.

    opened by thearpitgupta 2
  • Column headers are now capitalized

    Looks like the column headers in the training data set are now capitalized. See here: http://www.kaggle.com/c/titanic-gettingStarted/download/train.csv They're not capitalized in the data set used in the repo: https://github.com/agconti/kaggle-titanic/blob/master/data/train.csv I wonder if Kaggle changed the data set and intentionally made this change. Anyway, if you want, I am happy to submit a PR that works with capitalized column names. Let me know. Thanks.

    PS - Great work.

    opened by thearpitgupta 2
  • sharey for subplots

    Not sure if your original intention was to show the Y axis for all your subplots in input 14, but if it wasn't, you can pass sharey=True into the df.plot() function to eliminate the redundant axes.

    example

    Awesome work on the notebook btw!

    opened by zunayed 2
  • Install KaggleAux through pip

    Currently, a subsection of KaggleAux is included in this repository as a temporary convenience. It would be cleaner to have KaggleAux as a third-party dependency installed through pip. This would be less confusing for users and would allow updates to KaggleAux to be incorporated more easily.

    enhancement 
    opened by agconti 1
  • Categorization of algorithms

    The README and several places in the notebook categorize SVM and Random Forest under "Unsupervised Learning". They actually belong to "Supervised Learning".

    e.g. http://cs229.stanford.edu/notes/cs229-notes3.pdf

    opened by hupili 1
  • Suggestion -- update requirements.txt

    Hi, I don't know if this repo is still maintained, but it would be nice to update requirements.txt with supported versions.

    :+1: Thanks for putting this repo together.

    opened by DaveOkpare 0
  • Update agc_simp_gendermodel.py

    The data indexing was inappropriate for the operation. In lines 18-19, index 3 holds the passenger's Name, not gender, so the comparison is always false.
    In lines 26-28, the proportions should be calculated on the Survived column, not on PassengerId.

    opened by praveenbommali 0
  • why use barh and ylim

    I don't understand the need for the barh and ylim functions in the plotting. Simple vertical bar charts are easier to understand, so what is the purpose of using barh? And thank you for sharing this notebook; it's really informative.

    opened by barotdhrumil21 0
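
Regarding the first comment above (Survived values other than 0 or 1 in the logit output file): a minimal sketch of one possible 0.5 cutoff, where probs and the PassengerId values are purely illustrative stand-ins for the model's predicted probabilities:

```python
import numpy as np
import pandas as pd

# Hypothetical predicted survival probabilities from a logit model.
probs = np.array([0.12, 0.87, 0.50, 0.33])

# Kaggle rejects any Survived value that is not exactly 0 or 1,
# so threshold the probabilities (at 0.5 here) before writing the file.
survived = (probs >= 0.5).astype(int)

submission = pd.DataFrame({
    "PassengerId": [892, 893, 894, 895],  # illustrative ids only
    "Survived": survived,
})
submission.to_csv("data/output/logitregres.csv", index=False)
```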