intro-to-dl - Resources for the "Introduction to Deep Learning" course.

Overview

Introduction to Deep Learning course resources

https://www.coursera.org/learn/intro-to-deep-learning

Running on Google Colab (tested for all weeks)

Google has released its own flavour of Jupyter called Colab, which has free GPUs!

Here's how you can use it:

  1. Open https://colab.research.google.com, click Sign in in the upper-right corner, and sign in with your Google credentials.
  2. Click the GITHUB tab, paste https://github.com/hse-aml/intro-to-dl, and press Enter.
  3. Choose the notebook you want to open, e.g. week2/v2/mnist_with_keras.ipynb.
  4. Click File -> Save a copy in Drive... to save your progress in Google Drive.
  5. Click Runtime -> Change runtime type and select GPU in the Hardware accelerator box.
  6. Execute the following code in the first cell to download dependencies (uncomment the line for your week):
! shred -u setup_google_colab.py
! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
# please, uncomment the week you're working on
# setup_google_colab.setup_week1()
# setup_google_colab.setup_week2()
# setup_google_colab.setup_week2_honor()
# setup_google_colab.setup_week3()
# setup_google_colab.setup_week4()
# setup_google_colab.setup_week5()
# setup_google_colab.setup_week6()
  7. If you run many notebooks on Colab, they can keep eating up GPU memory. You can kill them with ! pkill -9 python3 and check with ! nvidia-smi that the GPU memory is freed.

Known issues:

  • Blinking animation with IPython.display.clear_output(). It's usable, but we're still looking for a workaround.
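    One mitigation worth trying (not a guaranteed fix) is to pass wait=True, which postpones the clearing until new output is ready:

    from IPython.display import clear_output

    # wait=True defers clearing until the next output arrives,
    # which usually reduces (but may not eliminate) the flicker
    for epoch in range(10):
        clear_output(wait=True)
        print("epoch", epoch)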

Offline instructions

The Coursera Jupyter Environment can be slow when many learners use it heavily. Our tasks are compute-heavy, so we recommend running them on your own hardware for optimal performance.

You will need a computer with at least 4GB of RAM.

There are two options for setting up the Jupyter Notebooks locally: a Docker container or Anaconda.

Docker container option (best for Mac/Linux)

Follow the instructions at https://hub.docker.com/r/zimovnov/coursera-aml-docker/ to install a Docker container with all the necessary software.
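The exact command is documented on that page; for orientation only, a typical Jupyter-in-Docker invocation looks like the line below (the image name and port mapping here are assumptions, not the project's documented command):

docker run -it --rm -p 8080:8080 zimovnov/coursera-aml-docker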

After that you should see a Jupyter page in your browser.

Anaconda option (best for Windows)

We highly recommend the Docker environment, but if that's not an option, you can try installing the necessary Python modules with Anaconda.

First, install Anaconda with Python 3.5+ from here.

Download conda_requirements.txt from here.

Open terminal on Mac/Linux or "Anaconda Prompt" in Start Menu on Windows and run:

conda config --append channels conda-forge
conda config --append channels menpo
conda install --yes --file conda_requirements.txt
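To sanity-check the environment, you can print the installed versions; the course materials target the TensorFlow 1.x / Keras 2.0.x stack pinned in conda_requirements.txt (the exact pins may differ):

import tensorflow as tf
import keras

# the notebooks were written against TF 1.x and Keras 2.0.x
print(tf.__version__)
print(keras.__version__)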

To start Jupyter, run jupyter notebook on Mac/Linux or launch "Jupyter Notebook" from the Start Menu on Windows.

After that you should see a Jupyter page in your browser.

Prepare resources inside Jupyter Notebooks (for local setups only)

Click New -> Terminal and execute git clone https://github.com/hse-aml/intro-to-dl.git. On Windows you might need to install Git first. You can also download all the resources as a zip archive from the GitHub page.

Close the terminal and refresh the Jupyter page. You will see an intro-to-dl folder; all the necessary notebooks are inside.

First you need to download the necessary resources: open download_resources.ipynb and run the cells for Keras and for your week.

Now you can open the notebook for the corresponding week and work just like in the Coursera Jupyter Environment.

Using GPU for offline setup (for advanced users)
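Setting up CUDA and cuDNN is hardware-specific and out of scope here. Once the drivers are installed, a minimal check that TensorFlow actually sees the GPU could look like this (a sketch, assuming the course's TF 1.x stack):

from tensorflow.python.client import device_lib

# lists the CPU and, if CUDA is set up correctly, one entry per visible GPU
print([d.name for d in device_lib.list_local_devices()])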

Comments
  • cannot submit

    In the first submission for week 3, I couldn't submit. Here is the error: AttributeError: module 'grading_utils' has no attribute 'model_total_params'

    opened by AhmedFrikha 4
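    The missing helper presumably just reports the model's total parameter count; a stand-in sketch (an assumption about what the grader expects, built on Keras's real count_params API):

    def model_total_params(model):
        # presumed equivalent of the missing grading_utils helper (assumption)
        return model.count_params()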
  • week4/lfw_dataset.py

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-4-856143fffc33> in <module>()
          8 #Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind
          9 from lfw_dataset import load_lfw_dataset
    ---> 10 data,attrs = load_lfw_dataset(dimx=36,dimy=36)
         11 
         12 #preprocess faces
    
    ~/GitHub/intro-to-dl/week4/lfw_dataset.py in load_lfw_dataset(use_raw, dx, dy, dimx, dimy)
         52 
         53     # preserve photo_ids order!
    ---> 54     all_attrs = photo_ids.merge(df_attrs, on=('person', 'imagenum')).drop(["person", "imagenum"], axis=1)
         55 
         56     return all_photos, all_attrs
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in merge(self, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
       6377                      right_on=right_on, left_index=left_index,
       6378                      right_index=right_index, sort=sort, suffixes=suffixes,
    -> 6379                      copy=copy, indicator=indicator, validate=validate)
       6380 
       6381     def round(self, decimals=0, *args, **kwargs):
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
         58                          right_index=right_index, sort=sort, suffixes=suffixes,
         59                          copy=copy, indicator=indicator,
    ---> 60                          validate=validate)
         61     return op.get_result()
         62 
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in __init__(self, left, right, how, on, left_on, right_on, axis, left_index, right_index, sort, suffixes, copy, indicator, validate)
        552         # validate the merge keys dtypes. We may need to coerce
        553         # to avoid incompat dtypes
    --> 554         self._maybe_coerce_merge_keys()
        555 
        556         # If argument passed to validate,
    
    ~/anaconda3/lib/python3.6/site-packages/pandas/core/reshape/merge.py in _maybe_coerce_merge_keys(self)
        976             # incompatible dtypes GH 9780, GH 15800
        977             elif is_numeric_dtype(lk) and not is_numeric_dtype(rk):
    --> 978                 raise ValueError(msg)
        979             elif not is_numeric_dtype(lk) and is_numeric_dtype(rk):
        980                 raise ValueError(msg)
    
    ValueError: You are trying to merge on int64 and object columns. If you wish to proceed you should use pd.concat
    
    opened by zuenko 4
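    The error means the imagenum key is int64 in one frame and object (strings) in the other. A likely fix, offered here as an assumption rather than an official patch, is to align the key dtypes before merging:

    # sketch: give 'imagenum' the same dtype in both frames before merging
    photo_ids["imagenum"] = photo_ids["imagenum"].astype(int)
    df_attrs["imagenum"] = df_attrs["imagenum"].astype(int)
    all_attrs = photo_ids.merge(df_attrs, on=("person", "imagenum")).drop(
        ["person", "imagenum"], axis=1)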
  • explanation of "download_utils.py"

    def link_all_keras_resources():
        link_all_files_from_dir("../readonly/keras/datasets/", os.path.expanduser("~/.keras/datasets"))
        link_all_files_from_dir("../readonly/keras/models/", os.path.expanduser("~/.keras/models"))
    

    Which data files belong to the datasets and models directories (with names)?

    def link_week_6_resources():
        link_all_files_from_dir("../readonly/week6/", ".")
    

    Which data files belong to the week6 directory (with names)?

    Please explain these two functions. I want to run the week 6 image captioning project in my local Jupyter notebook.

    Please help me. Thanks!

    opened by rezwanh001 3
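    For reference, link_all_files_from_dir symlinks every file from a source directory into a destination, so Keras finds the pre-downloaded datasets and models under ~/.keras and the week 6 notebook finds its resources in the working directory. A minimal sketch of that presumed behaviour (an assumption, not the repo's exact code):

    import os

    def link_all_files_from_dir(src_dir, dst_dir):
        # symlink each file from src_dir into dst_dir (presumed behaviour)
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            src = os.path.abspath(os.path.join(src_dir, name))
            dst = os.path.join(dst_dir, name)
            if not os.path.exists(dst):
                os.symlink(src, dst)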
  • NumpyNN (honor).ipynb not able to import util.py

    Hi,

    It seems like

    from util import eval_numerical_gradient

    is not working (week 2 honor assignment).

    It works if I manually add the eval_numerical_gradient function, but it would be better if util.py were linked.

    Cheers, Nan

    opened by xia0nan 1
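    If util.py cannot be linked, a standard central-difference implementation can stand in for eval_numerical_gradient (a textbook sketch, not necessarily the course's exact code):

    import numpy as np

    def eval_numerical_gradient(f, x, h=1e-5):
        # estimate the gradient of a scalar function f at array x
        # using central differences
        grad = np.zeros_like(x)
        it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
        while not it.finished:
            ix = it.multi_index
            old_value = x[ix]
            x[ix] = old_value + h
            fxph = f(x)            # f(x + h)
            x[ix] = old_value - h
            fxmh = f(x)            # f(x - h)
            x[ix] = old_value      # restore original value
            grad[ix] = (fxph - fxmh) / (2 * h)
            it.iternext()
        return grad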
  • The kernel dies after epoch 2 and the callbacks don't work, both in Colab & Jupyter notebooks. Please help!!

    The kernel dies after epoch 2 and the callbacks don't work, both in Colab and Jupyter notebooks. The result is always 6 out of 9 because progress halts after that. Please help me complete the work and submit the results.

    This is an earnest request to the mentors, tutors, and instructors to please consider the students facing such issues and provide assistance.

    In my case, it's the only project left to complete in the entire specialization.

    I would be extremely grateful if peer review were made accessible to all learners, whether they have been facing this issue for a long time or not.

    Will be eagerly awaiting a response.

    Regards,

    Saheli Basu

    opened by MehaRima 0
  • Fixed a typo on line 285.

    Original: So far our model is staggeringly inefficient. There is something wring with it. Guess, what?

    Changed to: So far, our model is staggeringly inefficient. There is something wrong with it. Guess, what?

    opened by IAmSuyogJadhav 0
  • KeyError in keras_utils.py

    I tried running on my local computer

    model.fit(
        x_train2, y_train2,  # prepared data
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        callbacks=[keras.callbacks.LearningRateScheduler(lr_scheduler),
                   LrHistory(),
                   keras_utils.TqdmProgressCallback(),
                   keras_utils.ModelSaveCallback(model_filename)],
        validation_data=(x_test2, y_test2),
        shuffle=True,
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )

    But it returned me this error

    ~\Documents\kkbq\Coursera\Intro to Deep Learning\intro-to-dl\keras_utils.py in _set_prog_bar_desc(self, logs)
         27
         28 def _set_prog_bar_desc(self, logs):
    ---> 29     for k in self.params['metrics']:
         30         if k in logs:
         31             self.log_values_by_metric[k].append(logs[k])

    KeyError: 'metrics'

    Does anyone know why this happened? Thanks.

    opened by samtjong23 0
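    Newer Keras versions no longer put a 'metrics' key into self.params, which is why the callback crashes. One workaround, offered as an assumption rather than an official fix, is to fall back to the keys actually present in logs:

    def _set_prog_bar_desc(self, logs):
        # 'metrics' disappeared from self.params in newer Keras releases;
        # fall back to whatever keys the logs dict carries (workaround sketch)
        for k in self.params.get('metrics', list(logs.keys())):
            if k in logs:
                self.log_values_by_metric[k].append(logs[k])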
  • Week 3 - Task 2 issue

    In one of the last cells,

    model.compile(
        loss='categorical_crossentropy',  # we train 102-way classification
        optimizer=keras.optimizers.adamax(lr=1e-2),  # we can take big lr here because we fixed first layers
        metrics=['accuracy']  # report accuracy during training
    )
    

    AttributeError: module 'keras.optimizers' has no attribute 'adamax'

    This can be fixed by changing "adamax" to "Adamax". However, after that fix, the next-but-one cell:

    # fine tune for 2 epochs (full passes through all training data)
    # we make 2*8 epochs, where epoch is 1/8 of our training data to see progress more often
    model.fit_generator(
        train_generator(tr_files, tr_labels), 
        steps_per_epoch=len(tr_files) // BATCH_SIZE // 8,
        epochs=2 * 8,
        validation_data=train_generator(te_files, te_labels), 
        validation_steps=len(te_files) // BATCH_SIZE // 4,
        callbacks=[keras_utils.TqdmProgressCallback(), 
                   keras_utils.ModelSaveCallback(model_filename)],
        verbose=0,
        initial_epoch=last_finished_epoch or 0
    )
    

    throws the following error:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-183-faf1b24645ff> in <module>()
         10                keras_utils.ModelSaveCallback(model_filename)],
         11     verbose=0,
    ---> 12     initial_epoch=last_finished_epoch or 0
         13 )
    
    2 frames
    /usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
         85                 warnings.warn('Update your `' + object_name +
         86                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
    ---> 87             return func(*args, **kwargs)
         88         wrapper._original_function = func
         89         return wrapper
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, initial_epoch)
       1723 
       1724         do_validation = bool(validation_data)
    -> 1725         self._make_train_function()
       1726         if do_validation:
       1727             self._make_test_function()
    
    /usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _make_train_function(self)
        935                 self._collected_trainable_weights,
        936                 self.constraints,
    --> 937                 self.total_loss)
        938             updates = self.updates + training_updates
        939             # Gets loss and metrics. Updates weights at each call.
    
    TypeError: get_updates() takes 3 positional arguments but 4 were given
    

    keras.optimizers.Adamax() inherits the get_updates() method from keras.optimizers.Optimizer(), and that method takes only three arguments (self, loss, params), but _make_train_function is trying to pass four arguments to it.

    As I understand it, the issue here is compatibility between TF 1.x and TF 2. I'm using Colab and running the %tensorflow_version 1.x line, as well as the setup cell with the week 3 setup uncommented at the start of the notebook.

    All checkpoints up to this point have been passed successfully.

    opened by nietoo 1
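    The usual culprit is a mismatch between Colab's preinstalled Keras and the stack these notebooks were written for; pinning the old version, as a later comment also suggests, may help. A sketch to run before the setup cell (restart the runtime after installing):

    ! pip install -q keras==2.0.6
    import keras
    print(keras.__version__)  # expect 2.0.6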
  • conda issue

    Hi there, I face a lot of problems creating the environment. I want to use my GPU as I usually do, but to run your environment I hit a lot of package conflicts. I spent 4 hours trying to get tensorflow==1.2.1 & Keras==2.0.6 working (with Theano).

    (nvidia-docker does not work on my Debian, so I would prefer a stable conda environment.) Please update the Colab notebooks to TensorFlow 2+.

    opened by kakooloukia 0
  • Google colab code addition

    The original code does not work correctly in Google Colab. Please add !pip install -q keras==2.0.6 to these lines:

    ! pip install -q keras==2.0.6
    ! shred -u setup_google_colab.py
    ! wget https://raw.githubusercontent.com/hse-aml/intro-to-dl/master/setup_google_colab.py -O setup_google_colab.py
    import setup_google_colab
    # please, uncomment the week you're working on
    # setup_google_colab.setup_week1()
    # setup_google_colab.setup_week2()
    # setup_google_colab.setup_week2_honor()
    # setup_google_colab.setup_week3()
    # setup_google_colab.setup_week4()
    # setup_google_colab.setup_week5()
    # setup_google_colab.setup_week6()

    opened by ansh997 0
Owner
Advanced Machine Learning specialisation by HSE