Kaggle-titanic - A tutorial for Kaggle's Titanic: Machine Learning from Disaster competition. Demonstrates basic data munging, analysis, and visualization techniques, and shows examples of supervised machine learning.

Overview

Kaggle-titanic

This is a tutorial in an IPython Notebook for the Kaggle competition, Titanic: Machine Learning from Disaster. The goal of this repository is to provide an example of a competitive analysis for those interested in getting into the field of data analytics or using Python for Kaggle's Data Science competitions.

Quick Start: View a static version of the notebook in the comfort of your own web browser.

Installation:

To run this notebook interactively:

  1. Download this repository in a zip file by clicking on this link or execute this from the terminal: git clone https://github.com/agconti/kaggle-titanic.git
  2. Install virtualenv.
  3. Navigate to the directory where you unzipped or cloned the repo and create a virtual environment with virtualenv env.
  4. Activate the environment with source env/bin/activate
  5. Install the required dependencies with pip install -r requirements.txt.
  6. Execute ipython notebook from the command line or terminal.
  7. Click on Titanic.ipynb on the IPython Notebook dashboard and enjoy!
  8. When you're done, deactivate the virtual environment with deactivate.
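
For reference, the whole sequence condensed into a single terminal session (assuming a Unix-like shell; installing virtualenv via pip is one common option):

```
git clone https://github.com/agconti/kaggle-titanic.git
cd kaggle-titanic
pip install virtualenv              # step 2: one way to install virtualenv
virtualenv env                      # step 3: create the virtual environment
source env/bin/activate             # step 4: activate it
pip install -r requirements.txt     # step 5: install the dependencies
ipython notebook                    # step 6: then open Titanic.ipynb from the dashboard
deactivate                          # step 8: when you're done
```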

Dependencies:

All required Python packages are listed in requirements.txt and are installed in step 5 of the installation instructions above.

Kaggle Competition | Titanic Machine Learning from Disaster

The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.

One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.

In this contest, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.

This Kaggle Getting Started Competition provides an ideal starting place for people who may not have a lot of experience in data science and machine learning.

From the competition homepage.

Goal for this Notebook:

Show a simple example of an analysis of the Titanic disaster in Python using a full complement of PyData utilities. This is aimed at those looking to get into the field, or those who are already in the field and want to see an example of an analysis done with Python.

This Notebook will show basic examples of:

Data Handling

  • Importing Data with Pandas
  • Cleaning Data
  • Exploring Data through Visualizations with Matplotlib
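
A minimal sketch of what those three steps look like in code (assuming the capitalized column names of data/train.csv; the notebook's exact cleaning choices may differ):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Importing data with pandas
df = pd.read_csv("data/train.csv")

# Cleaning data: drop sparse columns and rows with missing Age values
df = df.drop(["Ticket", "Cabin"], axis=1).dropna(subset=["Age"])

# Exploring data through a quick matplotlib visualization
df.Survived.value_counts().sort_index().plot(kind="bar")
plt.title("Survival (0 = died, 1 = survived)")
plt.show()
```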

Data Analysis

  • Supervised Machine Learning Techniques:
      + Logit Regression Model
      + Plotting results
      + Support Vector Machine (SVM) using 3 kernels
      + Basic Random Forest
      + Plotting results
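
A rough scikit-learn sketch of those models (the notebook itself fits the logit model with statsmodels; the feature choices here are illustrative):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

df = pd.read_csv("data/train.csv").dropna(subset=["Age"])
df["IsFemale"] = (df.Sex == "female").astype(int)

X = df[["Pclass", "IsFemale", "Age", "SibSp", "Fare"]]
y = df.Survived

models = {
    "logit": LogisticRegression(),
    "svm_rbf": SVC(kernel="rbf"),                        # the notebook tries 3 kernels
    "random_forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, model.score(X, y))                       # training accuracy only
```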

Validation of the Analysis

  • K-folds cross-validation to evaluate results locally
  • Output the results from the IPython Notebook to Kaggle
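
Roughly, using a current scikit-learn API (the feature list and output path are illustrative; Kaggle expects exactly one PassengerId column and one Survived column containing only 0 or 1):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

train = pd.read_csv("data/train.csv")
test = pd.read_csv("data/test.csv")

features = ["Pclass", "SibSp", "Parch"]       # numeric columns with no missing values

model = RandomForestClassifier(n_estimators=100)

# K-folds cross-validation to estimate the score locally
scores = cross_val_score(model, train[features], train.Survived, cv=5)
print("CV accuracy: %.3f" % scores.mean())

# Fit on the full training set and write a Kaggle-ready submission
model.fit(train[features], train.Survived)
submission = pd.DataFrame({
    "PassengerId": test.PassengerId,
    "Survived": model.predict(test[features]),
})
submission.to_csv("data/output/submission.csv", index=False)
```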

Benchmark Scripts

The basic scripts for the competition benchmarks are in the "Python Examples" folder. They are based on the originals provided by Astro Dave but have been reworked so that they are easier for newcomers to understand.

Competition Website: http://www.kaggle.com/c/titanic-gettingStarted

Comments
  • output file

    output file "data/output/logitregres.csv" contains the survived values other than {0,1}

    Thanks for the great article and code. I see that direct submission of output file to kaggle results in error and it says Survived column values must be either 0 or 1.

    Am I missing something? Should I have a cutoff and turn them in to 0 or 1?
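
    (A rounding step like the one below, applied before submitting, is the usual fix; the 0.5 cutoff and the assumption that the file holds a probability-valued Survived column are mine, not the notebook's.)

    ```python
    import pandas as pd

    preds = pd.read_csv("data/output/logitregres.csv")

    # Threshold the predicted probabilities so Survived is strictly 0 or 1
    preds["Survived"] = (preds["Survived"] > 0.5).astype(int)
    preds.to_csv("data/output/logitregres.csv", index=False)
    ```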

    opened by srini09 2
  • Fixed issue with bar chart

    If auto-sorting is on (the default), the returned series object is sorted by counts, i.e. for "male" the not-survived category is reported first, while for "female" the survived category is. When summing over male and female, the categories get mixed up.
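
    (In pandas terms, one way to keep the categories aligned is to sort by index before combining, as in this sketch; the column names assume the capitalized train.csv.)

    ```python
    import pandas as pd

    df = pd.read_csv("data/train.csv")

    # value_counts() sorts by count by default; sort_index() restores the 0/1 order
    male = df[df.Sex == "male"].Survived.value_counts().sort_index()
    female = df[df.Sex == "female"].Survived.value_counts().sort_index()
    print(male + female)   # categories now line up when combined
    ```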

    opened by metatier 2
  • Adds the updated csv files with capitalized column names

    Adds the updated csv files with capitalized column names. Fixed the IPython Notebook so it works with capitalized column headers. Updated the data folder with the two new csv files (train and test) as well as output/logitregres.csv.

    opened by thearpitgupta 2
  • Column headers are now capitalized

    Looks like column headers in the training data set are now capitalized; see http://www.kaggle.com/c/titanic-gettingStarted/download/train.csv. They are not capitalized in the data set used in the repo: https://github.com/agconti/kaggle-titanic/blob/master/data/train.csv. I wonder if Kaggle changed the data set and intentionally made this change. Anyway, if you want, I am happy to submit a PR that works with capitalized column names. Let me know. Thanks.

    PS - Great work.

    opened by thearpitgupta 2
  • sharey for subplots

    Not sure if your original intention was to show the y-axis for all your subplots in input 14, but if it wasn't, you can pass sharey=True to the df.plot() function to eliminate the redundant axes.

    example
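
    (Roughly, for any DataFrame of numeric columns:)

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("data/train.csv")[["Age", "Fare"]]

    # sharey=True gives the subplots a common y-axis instead of repeating it
    df.plot(subplots=True, sharey=True)
    plt.show()
    ```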

    Awesome work on the notebook btw!

    opened by zunayed 2
  • Install KaggleAux through pip

    Currently, a subsection of KaggleAux is included in this repository as a temporary convenience. It would be cleaner to have KaggleAux as a 3rd-party dependency installed through pip. This would be less confusing to users, and would allow updates to KaggleAux to be better incorporated.

    enhancement 
    opened by agconti 1
  • Categorization of algorithms

    The README and several places in the notebook categorize SVM and Random Forest under "Unsupervised Learning". They actually belong to "Supervised Learning".

    e.g. http://cs229.stanford.edu/notes/cs229-notes3.pdf

    opened by hupili 1
  • Suggestion -- update requirements.txt

    Hi, I don't know if this repo is still maintained, but it would be nice to update requirements.txt with supported versions.

    :+1: Thanks for putting this repo together.

    opened by DaveOkpare 0
  • Update agc_simp_gendermodel.py

    The data indexing was inappropriate for the operation. On lines #18-#19, index 3 holds the passenger's Name, not the gender, so the comparison is always false. On lines #26-#28, the proportions should be calculated on the Survived column, not on PassengerId.

    opened by praveenbommali 0
  • why use barh and ylim

    I don't understand the need for the barh and ylim functions in the plotting. Simple vertical bar charts are easier to understand, so what is the purpose of using barh? And thank you for sharing this notebook, it's really informative.

    opened by barotdhrumil21 0
Releases: v0.2.0