The fastai book, published as Jupyter Notebooks

Overview

Binder
English / Spanish / Korean / Chinese / Bengali / Indonesian

The fastai book

These notebooks cover an introduction to deep learning, fastai, and PyTorch. fastai is a layered API for deep learning; for more information, see the fastai paper. Everything in this repo is copyright Jeremy Howard and Sylvain Gugger, 2020 onwards.

These notebooks are used for a MOOC and form the basis of this book, which is currently available for purchase. The book does not have the same GPL restrictions that apply to this draft.

The code in the notebooks and Python .py files is covered by the GPL v3 license; see the LICENSE file for details.

The remainder (including all markdown cells in the notebooks and other prose) is not licensed for any redistribution or change of format or medium, other than making copies of the notebooks or forking this repo for your own private use. No commercial or broadcast use is allowed. We are making these materials freely available to help you learn deep learning, so please respect our copyright and these restrictions.

If you see someone hosting a copy of these materials somewhere else, please let them know that their actions are not allowed and may lead to legal action. Moreover, they would be hurting the community because we're not likely to release additional materials in this way if people ignore our copyright.

This is an early draft. If you get stuck running notebooks, please search the fastai-dev forum for answers, and ask for help there if needed. Please don't use GitHub issues for problems running the notebooks.
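
If you are setting up an environment to run the notebooks, the first cell on a hosted service such as Colab is typically the following (this mirrors the install cell quoted in several of the issues further down this page):

    #hide
    !pip install -Uqq fastbook
    import fastbook
    fastbook.setup_book()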

If you make any pull requests to this repo, then you are assigning copyright of that work to Jeremy Howard and Sylvain Gugger. (Additionally, if you are making small edits to spelling or text, please specify the name of the file and a very brief description of what you're fixing. It's difficult for reviewers to know which corrections have already been made. Thank you.)

Citations

If you wish to cite the book, you may use the following:

@book{howard2020deep,
title={Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD},
author={Howard, J. and Gugger, S.},
isbn={9781492045526},
url={https://books.google.no/books?id=xd6LxgEACAAJ},
year={2020},
publisher={O'Reilly Media, Incorporated}
}
Comments
  • RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245

    RuntimeError                              Traceback (most recent call last)
    <ipython-input> in <module>
          9
         10 learn = cnn_learner(dls, resnet34, metrics=error_rate)
    ---> 11 learn.fine_tune(1)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\callback\schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
        155     "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
        156     self.freeze()
    --> 157     self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
        158     base_lr /= 2
        159     self.unfreeze()

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\callback\schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
        110     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
        111               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
    --> 112     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
        113
        114 # Cell

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
        190             try:
        191                 self.epoch=epoch; self('begin_epoch')
    --> 192                 self._do_epoch_train()
        193                 self._do_epoch_validate()
        194             except CancelEpochException: self('after_cancel_epoch')

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\learner.py in _do_epoch_train(self)
        163         try:
        164             self.dl = self.dls.train; self('begin_train')
    --> 165             self.all_batches()
        166         except CancelTrainException: self('after_cancel_train')
        167         finally: self('after_train')

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\learner.py in all_batches(self)
        141     def all_batches(self):
        142         self.n_iter = len(self.dl)
    --> 143         for o in enumerate(self.dl): self.one_batch(*o)
        144
        145     def one_batch(self, i, b):

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\fastai2\data\load.py in __iter__(self)
         95         self.randomize()
         96         self.before_iter()
    ---> 97         for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
         98             if self.device is not None: b = to_device(b, self.device)
         99             yield self.after_batch(b)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
        717             # before it starts, and __del__ tries to join but will get:
        718             # AssertionError: can only join a started process.
    --> 719             w.start()
        720             self._index_queues.append(index_queue)
        721             self._workers.append(w)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\process.py in start(self)
        110                'daemonic processes are not allowed to have children'
        111         _cleanup()
    --> 112         self._popen = self._Popen(self)
        113         self._sentinel = self._popen.sentinel
        114         # Avoid a refcycle if the target function holds an indirect

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
        221     @staticmethod
        222     def _Popen(process_obj):
    --> 223         return _default_context.get_context().Process._Popen(process_obj)
        224
        225 class DefaultContext(BaseContext):

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
        320         def _Popen(process_obj):
        321             from .popen_spawn_win32 import Popen
    --> 322             return Popen(process_obj)
        323
        324     class SpawnContext(BaseContext):

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
         63             try:
         64                 reduction.dump(prep_data, to_child)
    ---> 65                 reduction.dump(process_obj, to_child)
         66             finally:
         67                 set_spawning_popen(None)

    d:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
         58 def dump(obj, file, protocol=None):
         59     '''Replacement for pickle.dump() using ForkingPickler.'''
    ---> 60     ForkingPickler(file, protocol).dump(obj)
         61
         62 #

    d:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\multiprocessing\reductions.py in reduce_tensor(tensor)
        240              ref_counter_offset,
        241              event_handle,
    --> 242              event_sync_required) = storage._share_cuda_()
        243     tensor_offset = tensor.storage_offset()
        244     shared_cache[handle] = StorageWeakRef(storage)

    RuntimeError: cuda runtime error (801) : operation not supported at C:\w\1\s\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245
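
    A workaround commonly suggested for this Windows-specific failure (an assumption, not taken from this issue thread) is to keep data loading in the main process, since CUDA storages cannot be shared with spawned DataLoader workers on Windows:

    from fastai.vision.all import *

    path = untar_data(URLs.PETS)/'images'

    def is_cat(x): return x[0].isupper()

    # num_workers=0 avoids spawning worker processes, and with them the
    # CUDA storage-sharing step that raises error 801 on Windows.
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224), num_workers=0)

    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)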

    opened by ShowTimeJMJ 14
  • 06_multiclass accuracy_multi plot

    In cell 35, there is a plot

    image

    Do I understand correctly that the x-axis shows activation values, not probabilities, since we pass sigmoid=False? If so, why is it limited to 0 and 1? Is that an unintentional error, and should the range be wider?
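
    For context, here is a sketch of accuracy_multi along the lines of the fastai definition (paraphrased, so treat the details as an approximation). With sigmoid=False the inputs are compared to the threshold as raw activations, which suggests a threshold axis limited to 0 and 1 presumes probability-scale inputs:

    def accuracy_multi(inp, targ, thresh=0.5, sigmoid=True):
        "Accuracy for multi-label problems, averaged over labels."
        if sigmoid: inp = inp.sigmoid()  # map raw activations into (0, 1)
        return ((inp > thresh) == targ.bool()).float().mean()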

    opened by grayskripko 9
  • IndexError: index 3 is out of bounds for dimension 0 with size 3

    Got this exception:

    IndexError: index 3 is out of bounds for dimension 0 with size 3

    in the following cell of the intro notebook:

    from fastai.vision.all import *
    path = untar_data(URLs.PETS)/'images'
    
    def is_cat(x): return x[0].isupper()
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))
    
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)
    

    Full exception:

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    <command-8165228> in <module>
         11 
         12 learn = cnn_learner(dls, resnet34, metrics=error_rate)
    ---> 13 learn.fine_tune(1)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
        471         init_args.update(log)
        472         setattr(inst, 'init_args', init_args)
    --> 473         return inst if to_return else f(*args, **kwargs)
        474     return _f
        475 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in fine_tune(self, epochs, base_lr, freeze_epochs, lr_mult, pct_start, div, **kwargs)
        159     "Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` from `epochs` using discriminative LR"
        160     self.freeze()
    --> 161     self.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99, **kwargs)
        162     base_lr /= 2
        163     self.unfreeze()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
        471         init_args.update(log)
        472         setattr(inst, 'init_args', init_args)
    --> 473         return inst if to_return else f(*args, **kwargs)
        474     return _f
        475 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in fit_one_cycle(self, n_epoch, lr_max, div, div_final, pct_start, wd, moms, cbs, reset_opt)
        111     scheds = {'lr': combined_cos(pct_start, lr_max/div, lr_max, lr_max/div_final),
        112               'mom': combined_cos(pct_start, *(self.moms if moms is None else moms))}
    --> 113     self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd)
        114 
        115 # Cell
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/utils.py in _f(*args, **kwargs)
        471         init_args.update(log)
        472         setattr(inst, 'init_args', init_args)
    --> 473         return inst if to_return else f(*args, **kwargs)
        474     return _f
        475 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in fit(self, n_epoch, lr, wd, cbs, reset_opt)
        205             self.opt.set_hypers(lr=self.lr if lr is None else lr)
        206             self.n_epoch,self.loss = n_epoch,tensor(0.)
    --> 207             self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
        208 
        209     def _end_cleanup(self): self.dl,self.xb,self.yb,self.pred,self.loss = None,(None,),(None,),None,None
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _do_fit(self)
        195         for epoch in range(self.n_epoch):
        196             self.epoch=epoch
    --> 197             self._with_events(self._do_epoch, 'epoch', CancelEpochException)
        198 
        199     @log_args(but='cbs')
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _do_epoch(self)
        189 
        190     def _do_epoch(self):
    --> 191         self._do_epoch_train()
        192         self._do_epoch_validate()
        193 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _do_epoch_train(self)
        181     def _do_epoch_train(self):
        182         self.dl = self.dls.train
    --> 183         self._with_events(self.all_batches, 'train', CancelTrainException)
        184 
        185     def _do_epoch_validate(self, ds_idx=1, dl=None):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in all_batches(self)
        159     def all_batches(self):
        160         self.n_iter = len(self.dl)
    --> 161         for o in enumerate(self.dl): self.one_batch(*o)
        162 
        163     def _do_one_batch(self):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in one_batch(self, i, b)
        177         self.iter = i
        178         self._split(b)
    --> 179         self._with_events(self._do_one_batch, 'batch', CancelBatchException)
        180 
        181     def _do_epoch_train(self):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
        153 
        154     def _with_events(self, f, event_type, ex, final=noop):
    --> 155         try:       self(f'before_{event_type}')       ;f()
        156         except ex: self(f'after_cancel_{event_type}')
        157         finally:   self(f'after_{event_type}')        ;final()
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in __call__(self, event_name)
        131     def ordered_cbs(self, event): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, event)]
        132 
    --> 133     def __call__(self, event_name): L(event_name).map(self._call_one)
        134 
        135     def _call_one(self, event_name):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
        394              else f.format if isinstance(f,str)
        395              else f.__getitem__)
    --> 396         return self._new(map(g, self))
        397 
        398     def filter(self, f, negate=False, **kwargs):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
        340     @property
        341     def _xtra(self): return None
    --> 342     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
        343     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
        344     def copy(self): return self._new(self.items.copy())
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
         49             return x
         50 
    ---> 51         res = super().__call__(*((x,) + args), **kwargs)
         52         res._newchk = 0
         53         return res
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
        331         if items is None: items = []
        332         if (use_list is not None) or not _is_array(items):
    --> 333             items = list(items) if use_list else _listify(items)
        334         if match is not None:
        335             if is_coll(match): match = len(match)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in _listify(o)
        244     if isinstance(o, list): return o
        245     if isinstance(o, str) or _is_array(o): return [o]
    --> 246     if is_iter(o): return list(o)
        247     return [o]
        248 
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
        307             if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
        308         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
    --> 309         return self.fn(*fargs, **kwargs)
        310 
        311 # Cell
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in _call_one(self, event_name)
        135     def _call_one(self, event_name):
        136         assert hasattr(event, event_name), event_name
    --> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
        138 
        139     def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/learner.py in <listcomp>(.0)
        135     def _call_one(self, event_name):
        136         assert hasattr(event, event_name), event_name
    --> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
        138 
        139     def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/core.py in __call__(self, event_name)
         42                (self.run_valid and not getattr(self, 'training', False)))
         43         res = None
    ---> 44         if self.run and _run: res = getattr(self, event_name, noop)()
         45         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
         46         return res
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in before_batch(self)
         84     def __init__(self, scheds): self.scheds = scheds
         85     def before_fit(self): self.hps = {p:[] for p in self.scheds.keys()}
    ---> 86     def before_batch(self): self._update_val(self.pct_train)
         87 
         88     def _update_val(self, pct):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in _update_val(self, pct)
         87 
         88     def _update_val(self, pct):
    ---> 89         for n,f in self.scheds.items(): self.opt.set_hyper(n, f(pct))
         90 
         91     def after_batch(self):
    
    /local_disk0/.ephemeral_nfs/envs/pythonEnv-22e37778-b406-4688-8a34-66d9ba762e34/lib/python3.7/site-packages/fastai/callback/schedule.py in _inner(pos)
         67         if pos == 1.: return scheds[-1](1.)
         68         idx = (pos >= pcts).nonzero().max()
    ---> 69         actual_pos = (pos-pcts[idx]) / (pcts[idx+1]-pcts[idx])
         70         return scheds[idx](actual_pos.item())
         71     return _inner
    
    IndexError: index 3 is out of bounds for dimension 0 with size 3
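
    One cheap first check (an assumption about a likely cause, not something confirmed in this issue) is a version mismatch between fastai and fastcore in the cluster environment; printing the installed versions narrows that down quickly:

    import fastai, fastcore, torch

    # Mismatched fastai/fastcore releases may surface as scheduler indexing
    # errors; compare these against the versions the notebooks were tested with.
    print(fastai.__version__, fastcore.__version__, torch.__version__)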
    
    bug 
    opened by Tagar 8
  • cosmetic fixes 01

    I'll do this one and check in before happily looking at more

    It wasn't self-evident how readers are already in a Jupyter notebook running the import and GPU cells; is the material on selecting a cloud GPU server and getting to the notebook provided to them already, elsewhere?

    I haven't edited some candidates for this, but can I check whether assisting second-language readers is a significant goal? If so, we could edit out contractions, e.g. changing "we'll" to "we will"; I found contractions ridiculously hard when I learned my other languages. Other guidelines that help: treat really long sentences as a red flag, and try to put active sentences with the important things first. Sorry, I know you and Sylvain have written a lot and know this, but I'm picturing an overseas high-school student, not even a US-based grad student from overseas.

    opened by bradsaracik 8
  • search_images_ddg always fails with TimeoutError

    I've tried to use search_images_ddg from Paperspace and from my local environment. The result is always the same: TimeoutError. I've asked the question on the forum, and it seems I am not the only one with this problem: https://forums.fast.ai/t/live-coding-2/96690/55. It looks like it broke recently.
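
    As a stopgap while search_images_ddg is failing, one option is to query DuckDuckGo through the third-party duckduckgo_search package. A sketch under that assumption (the package's API has changed across versions, so the exact calls may need adjusting):

    from duckduckgo_search import DDGS
    from fastcore.foundation import L

    def search_images_ddg_alt(term, max_images=200):
        "Return image URLs for `term` from DuckDuckGo (hypothetical helper)."
        with DDGS() as ddgs:
            return L(r['image'] for r in ddgs.images(term, max_results=max_images))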

    opened by RomanVolkov 6
  • Wrong syntax in search_images_bing

    Hello, I am having trouble with lesson 04. In the first cell, when I try to run the imports:

    #hide
    !pip install -Uqq fastbook
    import fastbook
    fastbook.setup_book()
    

    I get the following error:

    File "/usr/local/lib/python3.6/dist-packages/fastbook/__init__.py", line 45
        def search_images_bing(key,earch_images_bing(key, term, min_sz=128, max_images=150):
    SyntaxError: invalid syntax
    

    Could it be a typo in the latest release?
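
    Presumably the intended definition (judging from the version of the function quoted in a later issue on this page) looks something like this, with max_images wired into the count parameter:

    def search_images_bing(key, term, min_sz=128, max_images=150):
        client = api('https://api.cognitive.microsoft.com', auth(key))
        return L(client.images.search(query=term, count=max_images,
                                      min_height=min_sz, min_width=min_sz).value)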

    opened by Rob192 5
  • Azure api change - image search Endpoint change in Azure - 'PermissionDenied' Error Lesson 2

    Hi,

    I can't complete lesson 2. I have an API key for Bing Image Search v7, but it seems this resource changed at the end of October 2020 and the change isn't yet incorporated into the code (Azure update). Azure does say it will support the older Cognitive Services API for the next three years, but unfortunately that appears to apply only to pre-existing users, as I cannot find a way to subscribe as a new Azure user.

    image

    Fundamentally, from a code perspective, it appears that the endpoint changed from the one used in the code (fastbook/utils.py):

    def search_images_bing(key, term, min_sz=128):
        client = api('https://api.cognitive.microsoft.com', auth(key))
        return L(client.images.search(query=term, count=150, min_height=min_sz, min_width=min_sz).value)
    

    When I try to run this in the notebook I get a 'PermissionDenied' error, which I think is due to this change in Azure's APIs. If so, my theory is that the code will need to be refactored from the older https://api.cognitive.microsoft.com endpoint to the new one, https://api.bing.microsoft.com/. I did attempt a naïve endpoint replacement (wishful thinking, I know!) but unfortunately that didn't work.

    I've tried Colab and Paperspace, attempted a local install (and hit other issues there), and tried a million different ways to get a Bing Search API key via Cognitive Services, but it does appear the guidance included on the forum here (referenced on the main fast.ai course page here) is no longer applicable (or I've done something silly, hopefully!).

    I'm unsure how to continue in this lesson without this resource; is it safe enough to move on to the next one? Or perhaps I've done something very incorrect, in which case feedback would be appreciated.

    Thanks for any advice/guidance/fixes! Rebecca
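
    For anyone hitting the same wall, here is a rough sketch of calling the newer endpoint directly over REST. This is my own guess at how the refactor might look, not an official fix; the header and parameter names follow the public Bing Image Search v7 documentation:

    import requests
    from fastcore.foundation import L

    def search_images_bing(key, term, min_sz=128, max_images=150):
        "Query the Bing Image Search v7 endpoint on api.bing.microsoft.com."
        resp = requests.get(
            'https://api.bing.microsoft.com/v7.0/images/search',
            headers={'Ocp-Apim-Subscription-Key': key},
            params={'q': term, 'count': max_images,
                    'minHeight': min_sz, 'minWidth': min_sz})
        resp.raise_for_status()
        return L(resp.json()['value'])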

    opened by bkaCodes 5
  • 02_production: Updates needed for new Bing Image Search API

    Hi FastAI team,

    There have been some recent updates to the Bing Image Search API, and 02_production.ipynb no longer works out of the box as a result. There has been some discussion about this over on the forums over the past few months:

    https://forums.fast.ai/t/02-production-permissiondenied-error/65823/19

    I don't see any pull requests created yet, so I went ahead and proposed the following changes:

    1. Update search_images_bing() function in utils.py
    2. Update the call to download_images in 02_production.ipynb
    opened by joshkraft 5
  • 09_tabular.ipynb: NameError: name 'file_extract' is not defined

    When running notebook 9, there's no file_extract function.

    NameError                                 Traceback (most recent call last)
    /tmp/ipykernel_77010/3628020118.py in <module>
          2     path.mkdir(parents=true)
          3     api.competition_download_cli('bluebook-for-bulldozers', path=path)
    ----> 4     file_extract(path/'bluebook-for-bulldozers.zip')
          5 
          6 path.ls(file_type='text')
    
    NameError: name 'file_extract' is not defined
    

    You can fix it manually:

    cd ~/.fastai/archive/bluebook
    unzip bluebook-for-bulldozers.zip
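
    Alternatively, a pure-Python sketch of the same fix (the archive path mirrors the one used in the manual commands above, and this is roughly what the old file_extract helper did for zip archives):

    import zipfile
    from pathlib import Path

    # Extract the Kaggle archive next to itself.
    archive = Path.home()/'.fastai'/'archive'/'bluebook'/'bluebook-for-bulldozers.zip'
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(archive.parent)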
    
    opened by erg 4
  • 04_mnist_basics 'An End-to-End SGD Example/Step 6' - subplots do not converge when GPU enabled

    If I load 04_mnist_basics.ipynb into Google Colab, the sub-plots for 'An End-to-End SGD Example/Step 6' look like this in the saved outputs:

    image

    If I run up to and including the same cell with the GPU disabled:

    image

    If I enable the GPU, reset, and run from the beginning:

    image
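
    To help isolate whether the divergence really is GPU-related, here is a self-contained sketch of the kind of SGD loop that section uses (illustrative data and names, not copied from the notebook); switching device between 'cpu' and 'cuda' should give the same behaviour:

    import torch

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    time = torch.arange(0, 20, dtype=torch.float32, device=device)
    speed = torch.randn(20, device=device)*3 + 0.75*(time - 9.5)**2 + 1

    def f(t, params):
        a, b, c = params
        return a*(t**2) + b*t + c

    def mse(preds, targets): return ((preds - targets)**2).mean()

    params = torch.randn(3, device=device, requires_grad=True)
    lr = 1e-5
    for step in range(10):
        loss = mse(f(time, params), speed)
        loss.backward()
        with torch.no_grad():
            params -= lr * params.grad  # plain SGD step
            params.grad = None
    print(loss.item())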

    opened by pjgoodall 4
  • 09_tabular.ipynb: crashes when creating a `TabularPandas` using `Normalize`

    Hi! I'm using a fresh checkout of 09_tabular.ipynb, clearing the notebook and running from the start. When I get to the following cell in the neural network section, it crashes:

    procs_nn = [Categorify, FillMissing, Normalize]
    to_nn = TabularPandas(df_nn_final, procs_nn, cat_nn, cont_nn,
                          splits=splits, y_names=dep_var)
    

    The error message (long, sorry) is as follows:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-105-9827c0e691d0> in <module>
          1 procs_nn = [Categorify, FillMissing, Normalize]
    ----> 2 to_nn = TabularPandas(df_nn_final, procs_nn, cat_nn, cont_nn,
          3                       splits=splits, y_names=dep_var)
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in __init__(self, df, procs, cat_names, cont_names, y_names, y_block, splits, do_setup, device, inplace, reduce_memory)
        164         self.cat_names,self.cont_names,self.procs = L(cat_names),L(cont_names),Pipeline(procs)
        165         self.split = len(df) if splits is None else len(splits[0])
    --> 166         if do_setup: self.setup()
        167 
        168     def new(self, df):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in setup(self)
        175     def decode_row(self, row): return self.new(pd.DataFrame(row).T).decode().items.iloc[0]
        176     def show(self, max_n=10, **kwargs): display_df(self.new(self.all_cols[:max_n]).decode().items)
    --> 177     def setup(self): self.procs.setup(self)
        178     def process(self): self.procs(self)
        179     def loc(self): return self.items.loc
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in setup(self, items, train_setup)
        190         tfms = self.fs[:]
        191         self.fs.clear()
    --> 192         for t in tfms: self.add(t,items, train_setup)
        193 
        194     def add(self,t, items=None, train_setup=False):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in add(self, t, items, train_setup)
        193 
        194     def add(self,t, items=None, train_setup=False):
    --> 195         t.setup(items, train_setup)
        196         self.fs.append(t)
        197 
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in setup(self, items, train_setup)
         77     def setup(self, items=None, train_setup=False):
         78         train_setup = train_setup if self.train_setup is None else self.train_setup
    ---> 79         return self.setups(getattr(items, 'train', items) if train_setup else items)
         80 
         81     def _call(self, fn, x, split_idx=None, **kwargs):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
        116         elif self.inst is not None: f = MethodType(f, self.inst)
        117         elif self.owner is not None: f = MethodType(f, self.owner)
    --> 118         return f(*args, **kwargs)
        119 
        120     def __get__(self, inst, owner):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in setups(self, to)
        271     store_attr(but='to', means=dict(getattr(to, 'train', to).conts.mean()),
        272                stds=dict(getattr(to, 'train', to).conts.std(ddof=0)+1e-7))
    --> 273     return self(to)
        274 
        275 @Normalize
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in __call__(self, x, **kwargs)
         71     @property
         72     def name(self): return getattr(self, '_name', _get_name(self))
    ---> 73     def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
         74     def decode  (self, x, **kwargs): return self._call('decodes', x, **kwargs)
         75     def __repr__(self): return f'{self.name}:\nencodes: {self.encodes}decodes: {self.decodes}'
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in _call(self, fn, x, split_idx, **kwargs)
         81     def _call(self, fn, x, split_idx=None, **kwargs):
         82         if split_idx!=self.split_idx and self.split_idx is not None: return x
    ---> 83         return self._do_call(getattr(self, fn), x, **kwargs)
         84 
         85     def _do_call(self, f, x, **kwargs):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/transform.py in _do_call(self, f, x, **kwargs)
         87             if f is None: return x
         88             ret = f.returns(x) if hasattr(f,'returns') else None
    ---> 89             return retain_type(f(x, **kwargs), x, ret)
         90         res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
         91         return retain_type(res, x)
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
        116         elif self.inst is not None: f = MethodType(f, self.inst)
        117         elif self.owner is not None: f = MethodType(f, self.owner)
    --> 118         return f(*args, **kwargs)
        119 
        120     def __get__(self, inst, owner):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/fastai/tabular/core.py in encodes(self, to)
        275 @Normalize
        276 def encodes(self, to:Tabular):
    --> 277     to.conts = (to.conts-self.means) / self.stds
        278     return to
        279 
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/pandas/core/ops/__init__.py in f(self, other, axis, level, fill_value)
        649         # TODO: why are we passing flex=True instead of flex=not special?
        650         #  15 tests fail if we pass flex=not special instead
    --> 651         self, other = _align_method_FRAME(self, other, axis, flex=True, level=level)
        652 
        653         if isinstance(other, ABCDataFrame):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/pandas/core/ops/__init__.py in _align_method_FRAME(left, right, axis, flex, level)
        501     elif is_list_like(right) and not isinstance(right, (ABCSeries, ABCDataFrame)):
        502         # GH17901
    --> 503         right = to_series(right)
        504 
        505     if flex is not None and isinstance(right, ABCDataFrame):
    
    ~/fast.ai.course/fastai-venv/lib/python3.9/site-packages/pandas/core/ops/__init__.py in to_series(right)
        463         else:
        464             if len(left.columns) != len(right):
    --> 465                 raise ValueError(
        466                     msg.format(req_len=len(left.columns), given_len=len(right))
        467                 )
    
    ValueError: Unable to coerce to Series, length must be 1: given 0
    

    I don't know where this is coming from (despite the long backtrace).
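
    The final error says the right-hand side had length 0, which suggests Normalize ended up with an empty means/stds dictionary. A quick diagnostic (an assumption about the likely cause, not a confirmed fix) is to check that the continuous columns actually made it into the call:

    # If cont_nn is empty, or its columns were accidentally treated as categorical,
    # Normalize has nothing to compute statistics over.
    print(len(cont_nn), list(cont_nn))
    print(df_nn_final[cont_nn].dtypes)
    print(df_nn_final[cont_nn].describe())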

    opened by juliangilbey 4
  • bugfix to 15_arch_details.ipynb create_body call

    I ran this notebook in Google Colab and got an error when passing just the model name as the input to the create_body function. Since children can only be called on an instantiated model, I added the parentheses in the notebook, so encoder = create_body(resnet34, cut=-2) became encoder = create_body(resnet34(), cut=-2), and the notebook runs fine. Except it doesn't train up to 94% accuracy.
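
    A possible explanation for the accuracy drop (my assumption, not something confirmed in the issue): resnet34() with no arguments builds a randomly initialised network, so the encoder loses its pretrained weights. Passing a pretrained instance keeps them:

    from fastai.vision.all import *

    # resnet34() alone gives random weights; request the pretrained ones explicitly
    # (on newer torchvision this is spelled weights='DEFAULT' rather than pretrained=True).
    encoder = create_body(resnet34(pretrained=True), cut=-2)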

    opened by KikiCS 1
  • Chapter 2 has missing references, and instead has a number of <> placeholders

    It seems like 02_production.ipynb is supposed to have something instead of the <> placeholders. Maybe a reference to chapter one or the first lesson should go there?

    The six lines of code we saw in <> are just one small part of the process of using deep learning in practice. In this chapter, we're going to use a computer vision example to look at the end-to-end process of creating a deep learning application. More specifically, we're going to build a bear classifier! In the process, we'll discuss the capabilities and constraints of deep learning, explore how to create datasets, look at possible gotchas when using deep learning in practice, and more. Many of the key points will apply equally well to other deep learning problems, such as those in <>. If you work through a problem similar in key respects to our example problems, we expect you to get excellent results with little code, quickly.

    and this one as well:

    There are many domains in which deep learning has not been used to analyze images yet, but those where it has been tried have nearly universally shown that computers can recognize what items are in an image at least as well as people can—even specially trained people, such as radiologists. This is known as object recognition. Deep learning is also good at recognizing where objects in an image are, and can highlight their locations and name each found object. This is known as object detection (there is also a variant of this that we saw in <>, where every pixel

    opened by delbarital 1
  • Fixed name of paper author and link to paper

    Fixed author name (it's Ron Kohavi) and associated the link with the paper, since the link goes to the paper, not the dataset.

    In the diff, I'm not sure if there's a way to avoid the language_info part.

    opened by cgoldammer 1