Lab Materials for MIT 6.S191: Introduction to Deep Learning

Overview


This repository contains all of the code and software labs for MIT 6.S191: Introduction to Deep Learning! All lecture slides and videos are available on the course website.

Opening the labs in Google Colaboratory:

The 2021 6.S191 labs will be run in Google's Colaboratory, a Jupyter notebook environment that runs entirely in the cloud, so you don't need to download anything. To run these labs, you must have a Google account.

On this GitHub repo, navigate to the lab folder you want to run (lab1, lab2, lab3) and open the appropriate Python notebook (*.ipynb). Click the "Run in Colab" link at the top of the lab. That's it!

Running the labs

Now, to run the labs, open the Jupyter notebook in Colab. Navigate to the "Runtime" tab --> "Change runtime type". In the pop-up window, under "Runtime type" select "Python 3", and under "Hardware accelerator" select "GPU". Go through the notebooks and fill in the #TODO cells to get the code working yourself!
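
Once the GPU runtime is selected, you can optionally confirm that TensorFlow sees the accelerator. A minimal sanity-check sketch (assuming the TensorFlow 2.x environment the 2021 labs use):

import tensorflow as tf

# Should list at least one GPU device when the Colab GPU runtime is active.
print(tf.config.list_physical_devices('GPU'))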

MIT Deep Learning package

You might notice that inside the labs we install the mitdeeplearning Python package from the Python Package Index (PyPI):

pip install mitdeeplearning

This package contains convenience functions that we use throughout the course, and it can be imported like any other Python package.

>>> import mitdeeplearning as mdl

We do this for you in each of the labs, but the package is open source under the same license, so you can use it outside of the class as well.
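
For example, to load the Lab 1 training songs and play one back (a minimal sketch; load_training_data and play_song are the Lab 1 helpers that also appear in the comments further below):

import mitdeeplearning as mdl

# Load the Lab 1 training songs and play back the first one.
songs = mdl.lab1.load_training_data()
mdl.lab1.play_song(songs[0])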

Lecture Videos

All lecture videos are available publicly online and linked above! Use and/or modification of lecture slides outside of 6.S191 must reference:

© MIT 6.S191: Introduction to Deep Learning

http://introtodeeplearning.com

License

All code in this repository is copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.

Licensed under the MIT License. You may not use this file except in compliance with the License. Use and/or modification of this code outside of 6.S191 must reference:

© MIT 6.S191: Introduction to Deep Learning

http://introtodeeplearning.com

Comments
  • Error

    Hey, why am I getting an error with this line of code, "import util.download_lung_data"? Is it correct? I am getting "no module named util".

    Thanks

    opened by arpita8 12
  • Docker images pull error

    Hi amini, I'm trying to pull the Docker image from the DockerHub link, but there seem to be some problems. When I try docker pull mit6s191/iap2018 in the terminal, it gives me this message:

    Using default tag: latest
    Error response from daemon: manifest for mit6s191/iap2018:latest not found
    

    Then, when I use docker pull mit6s191/iap2018:labs, it shows:

    error pulling image configuration: Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/48/48db1fb4c0b5d8aeb65b499c0623b057f6b50f93eed0e9cfb3f963b0c12a74db/data?Expires=1524752941&Signature=AKuwnCd69y-fs0NlLjQnAlBoUhbht-gWbIYIoIESf7dERzjlkeejUndYC1QCnEhjjlhZAvv2NWQFWEf-Efc6noGUV9hK4QRVaQqO23zRKRrqarTWVMLj5LQX4X1Qikze5YEXy4VqdNm5t88WRQsfDvsPHHDmKx6vqA2V4VgVDP8_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q: net/http: TLS handshake timeout
    

    Could you please help to address this problem? Thank you.

    opened by ytzhao 9
  • Error while importing util

    I'm getting this error in most of the notebooks. Stack trace in Colab:

    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input> in <module>()
         14
         15 # Import the necessary class-specific utility files for this lab
    ---> 16 import introtodeeplearning_labs as util

    /content/introtodeeplearning_labs/__init__.py in <module>()
    ----> 1 from lab1 import *
          2 from lab2 import *
          3 # from lab3 import *

    ModuleNotFoundError: No module named 'lab1'
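
    For context: Python 3 removed implicit relative imports, so an __init__.py written as "from lab1 import *" fails with exactly this error. A hedged sketch of the explicit relative form (this may not be the exact fix applied in the repo):

    # introtodeeplearning_labs/__init__.py
    # Python 3 requires explicit relative imports for sibling modules in a package.
    from .lab1 import *
    from .lab2 import *
    # from .lab3 import *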

    opened by SarthakSG 5
  • can't play music though no error occurs

    I ran the code below in Colab, but I didn't hear a sound.

    !pip install mitdeeplearning
    import mitdeeplearning as mdl

    songs = mdl.lab1.load_training_data()
    example_song = songs[0]
    mdl.lab1.play_song(example_song)

    opened by creater-yzl 4
  • invalid bind mount spec, invalid mode: /notebooks/introtodeeplearning_labs

    I am trying to run the notebooks posted here as per the instructions given in the readme:

    sudo docker run -p 8888:8888 -p 6006:6006 -v https://github.com/aamini/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs mit6s191/iap2018:labs

    when I get the error:

    docker: Error response from daemon: invalid bind mount spec "https://github.com/aamini/introtodeeplearning_labs:/notebooks/introtodeeplearning_labs": invalid mode: /notebooks/introtodeeplearning_labs.

    Am I missing the path for the repo?

    opened by sameermahajan 4
  • [Lab 2 / Part 2] Training dataset is missing

    Regarding Section 2.2 of Lab2, Part2 - Debiasing, the training data hosted at https://www.dropbox.com/s/dl/bp54q547mfg15ze/train_face.h5 no longer exists. Thanks for the great class!

    opened by vivianliang 3
  • ModuleNotFoundError: No module named 'lab1'

    Hi, is there a quick solution for importing these modules?

    ModuleNotFoundError                       Traceback (most recent call last)
    <ipython-input> in <module>()
          8 from IPython import display as ipythondisplay
          9
    ---> 10 import introtodeeplearning_labs as util
         11
         12 is_correct_tf_version = '1.14.0' in tf.__version__

    /content/introtodeeplearning_labs/__init__.py in <module>()
    ----> 1 from lab1 import *
          2 from lab2 import *
          3 # from lab3 import *

    opened by IngridJSJ 3
  • lab1/Part2 fails assertion for tf version 1.13.0 (Colab is on 1.13.1)

    Running lab1/Part2_music_generation.ipynb fails at the beginning because it requests 1.13.0. Looks like Colab updated to 1.13.1 (I didn't do anything special).

    is_correct_tf_version = '1.13.0' in tf.__version__
    ...
    AssertionError: Wrong tensorflow version (1.13.1) installed
    

    Replacing that with this will work, unless that specific version is important:

    is_correct_tf_version = tf.__version__ >= '1.13.0'
    
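    One caveat with that suggestion: comparing version strings lexicographically can misorder releases (for example, '1.9.0' > '1.13.0' as strings). A more robust check, sketched with the third-party packaging library (an extra dependency, not part of the lab code):

    from packaging import version
    import tensorflow as tf

    # Robust to multi-digit version components, unlike plain string comparison.
    is_correct_tf_version = version.parse(tf.__version__) >= version.parse("1.13.0")
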
    opened by MaxGhenis 3
  • Avoid colocate_with warning in lab1/part1 solution by using tf.add

    lab1/Part1_tensorflow_solution.ipynb includes this code:

    def our_dense_layer(x, n_in, n_out):
      # ...
      z = tf.matmul(x,W) + b
    

    When this is called in the next cell, it produces this warning:

    WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:642: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
    Instructions for updating:
    Colocations handled automatically by placer.
    tf.Tensor([[0.95257413 0.95257413 0.95257413]], shape=(1, 3), dtype=float32)
    

    This warning can be avoided by replacing the +b code segment with tf.add, i.e.

    z = tf.add(tf.matmul(x, W), b, name="z")
    

    (also more TensorFlow-y)

    opened by MaxGhenis 3
  • Lab1 - Part1 - Section 1.2:Error when using tf.constant to provide input to Keras model.predict

    I am encountering an InvalidArgumentError when I use Keras model.predict with a tf.constant as input. I am not sure whether model.predict doesn't work with tf.constant or I am doing something wrong. It works fine when I use a NumPy array with the same values.

    # Define the number of inputs and outputs
    n_input_nodes = 2
    n_output_nodes = 3
    
    # First define the model 
    model = Sequential()
    
    '''TODO: Define a dense (fully connected) layer to compute z'''
    # Remember: dense layers are defined by the parameters W and b!
    # You can read more about the initialization of W and b in the TF documentation :) 
    dense_layer = Dense(n_output_nodes, input_shape=(n_input_nodes,),activation='sigmoid') # TODO 
    
    # Add the dense layer to the model
    model.add(dense_layer)
    

    Now when I do prediction using:

    # Test model with example input
    x_input = tf.constant([[1.0,2.]], shape=(1,2))
    '''TODO: feed input into the model and predict the output!'''
    print(model.predict(x_input)) # TODO
    

    I get the following error:

    InvalidArgumentError: In[0] is not a matrix. Instead it has shape [2] [[{{node MatMul_3}}]] [Op:StatefulPartitionedCall]

    When I use a NumPy array, it works:

    # Test model with example input
    x_input =np.array([[1.0,2.]])
    '''TODO: feed input into the model and predict the output!'''
    print(model.predict(x_input)) # TODO
    

    [[0.19114174 0.88079417 0.8062956 ]]

    Could you let me know if this is a TF issue? If so, I can raise an issue on the TensorFlow repository.
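
    A hedged workaround sketch (not an official fix): convert the tensor to a NumPy array before calling predict, or call the model directly on the tensor, reusing the model and tf defined above.

    x_input = tf.constant([[1.0, 2.0]], shape=(1, 2))

    print(model.predict(x_input.numpy()))  # predict on the equivalent NumPy array
    print(model(x_input))                  # or call the model directly on the tensor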

    opened by sibyjackgrove 3
  • module 'mitdeeplearning.lab3' has no attribute 'pong_change'

    I'm doing the Pong part of the RL lab and ran into this issue. I've looked through all versions of mitdeeplearning and didn't find the pong_change function. Can you fix this?

    opened by babahadjsaid 2
  • Part2_Music_Generation: model prediction inputs

    Maybe this comes a little late, but I think this is also a good chance to say thank you to Alex and Ava for this great course.

    Here is my question:

    When I went through the Music_Generation code, the answer here was quite confusing to me. Only one character is passed to the model each time, so although the input is updated, the previous information seems to be missing. (I think this is also part of the reason why the generated songs are often invalid.)

    # Pass the prediction along with the previous hidden state as the next inputs to the model
    input_eval = tf.expand_dims([predicted_id], 0)

    So I save the initial input and concatenate each output with it as the next input. This makes more sense to me and the results start to be much better, but I'm not sure if I'm doing something wrong or whether there are better ways, like taking the previous state as the next initial state.

    output_eval = tf.expand_dims([predicted_id], 0)
    input_eval = tf.concat([input_eval, output_eval], 1)
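
    For reference, the standard sampling loop works because the model is rebuilt with a stateful RNN: the hidden state persists across calls, so feeding back only the last predicted character does not discard history. A minimal sketch of that loop (names such as char2idx, start_string, and generation_length are illustrative, not taken verbatim from the lab):

    # Assumes a Keras `model` rebuilt with batch_size=1 and stateful RNN layers.
    input_eval = tf.expand_dims([char2idx[c] for c in start_string], 0)
    model.reset_states()  # clear state once, then let it accumulate over the loop

    for _ in range(generation_length):
        predictions = tf.squeeze(model(input_eval), 0)  # (seq_len, vocab_size)
        # Sample the next character id from the last timestep's distribution.
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()
        # Only the newest prediction is fed back; the RNN's internal state
        # carries everything generated so far.
        input_eval = tf.expand_dims([predicted_id], 0)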

    opened by maple24 0
  • [Lab 1 Part 1.1] A typo in the matrix indexes?

    The last matrix defined by the user in section 1.1 has to be a 2-d Tensor. Thus, it'll have a shape of (n, 2) or (2, n). The code in the Lab 1 notebook has:

    row_vector = matrix[1]
    column_vector = matrix[:,2]
    scalar = matrix[1, 2]
    

    This assumes the matrix has at least three columns. IMHO, it should instead be:

    row_vector = matrix[1]
    column_vector = matrix[:,1]
    scalar = matrix[0, 1]
    

    And assume the user will choose a (2,2) matrix as their solution.
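
    As a quick sanity check of the proposed indices on a (2, 2) example (a sketch, not notebook code):

    import tensorflow as tf

    matrix = tf.constant([[1., 2.],
                          [3., 4.]])   # shape (2, 2)

    row_vector = matrix[1]        # -> [3., 4.]
    column_vector = matrix[:, 1]  # -> [2., 4.]
    scalar = matrix[0, 1]         # -> 2.0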

    opened by datasith 1
  • Lab2: Part2_Debiasing Test Set Bias

    Hello, I have a question on test set bias of the standard CNN model. After running cell #13 in Part2_Debiasing.ipynb, I see this histogram:

    [Histogram: Lab2_Cell13_Std_CNN_Test_Set_Bias]

    Am I correct in interpreting that avg test set accuracy is ~65% for Light Female, ~70% for Light Male, ~82% for Dark Female, and ~90% for Dark Male?

    If the training set has a majority of its data on light-skinned females, I expected test set accuracy to be higher for that category. So the results of the above histogram are counterintuitive.

    What am I missing?

    --Rahul

    opened by rvh72 2
  • Unable to install on M1 Mac

    Getting the following output when attempting to install mitdeeplearning via pip on an M1 Mac:

    % pip install mitdeeplearning
    DEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621
    Collecting mitdeeplearning
      Using cached mitdeeplearning-0.2.0.tar.gz (2.1 MB)
      Preparing metadata (setup.py) ... done
    Requirement already satisfied: numpy in /opt/homebrew/lib/python3.9/site-packages (from mitdeeplearning) (1.22.3)
    Collecting regex
      Downloading regex-2022.3.15-cp39-cp39-macosx_11_0_arm64.whl (281 kB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 281.8/281.8 KB 5.1 MB/s eta 0:00:00
    Collecting tqdm
      Using cached tqdm-4.64.0-py2.py3-none-any.whl (78 kB)
    Collecting gym
      Downloading gym-0.23.1.tar.gz (626 kB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 626.2/626.2 KB 17.9 MB/s eta 0:00:00
      Installing build dependencies ... done
      Getting requirements to build wheel ... done
      Preparing metadata (pyproject.toml) ... done
    Collecting mitdeeplearning
      Downloading mitdeeplearning-0.1.2.tar.gz (2.1 MB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 45.2 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... done
      Downloading mitdeeplearning-0.1.1.tar.gz (2.1 MB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 37.5 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... done
      Downloading mitdeeplearning-0.1.0.tar.gz (2.1 MB)
         ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 36.1 MB/s eta 0:00:00
      Preparing metadata (setup.py) ... done
    ERROR: Cannot install mitdeeplearning==0.1.0, mitdeeplearning==0.1.1, mitdeeplearning==0.1.2 and mitdeeplearning==0.2.0 because these package versions have conflicting dependencies.
    
    The conflict is caused by:
        mitdeeplearning 0.2.0 depends on tensorflow>=2.0.0a
        mitdeeplearning 0.1.2 depends on tensorflow>=2.0.0a
        mitdeeplearning 0.1.1 depends on tensorflow>=2.0.0a
        mitdeeplearning 0.1.0 depends on tensorflow>=2.0.0a
    
    To fix this you could try to:
    1. loosen the range of package versions you've specified
    2. remove package versions to allow pip attempt to solve the dependency conflict
    
    ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
    
    ERROR: Could not find a version that satisfies the requirement tensorflow>=2.0.0a 
    

    Looking for a solution that allows the installation of this package. I'm using python3.9 and I have tensorflow-macos installed.

    opened by jackson-sandland 4
  • Lab 3-Fail to import module for event camera

    While running the following section in Lab 3 on autonomous driving:

    import vista
    from vista.utils import logging
    logging.setLevel(logging.ERROR)

    I get an error:

    ::WARNING::[vista.entities.sensors.EventCamera.] Fail to import module for event camera. Remember to do source /openeb/build/utils/scripts/setup_env.sh. Can ignore this if not using it

    How can I solve this problem?

    I try to continue; then in the following code section:

    camera = car.spawn_camera(config={'size': (200, 320)})

    I got a GL error:

    GLError: GLError(
        err = 12290,
        baseOperation = eglMakeCurrent,
        cArguments = (
            <OpenGL._opaque.EGLDisplay_pointer object at 0x7ff7681b5b90>,
            <OpenGL._opaque.EGLSurface_pointer object at 0x7ff768c97dd0>,
            <OpenGL._opaque.EGLSurface_pointer object at 0x7ff768c97dd0>,
            <OpenGL._opaque.EGLContext_pointer object at 0x7ff767f89290>,
        ),
        result = 0
    )

    I am using Google Colab (with GPU).

    opened by kerengold 2