A CV toolkit for my papers.

Overview

License: MIT

PyTorch-Encoding

Created by Hang Zhang

Documentation

  • Please visit the Docs for detailed installation and usage instructions; a minimal usage sketch follows this list.

  • Please follow the link to the image classification models.

  • Please follow the link to the semantic segmentation models.
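
As a hedged illustration (not an official quick start; the get_model name and the evaluate call mirror a snippet in an issue further down this page), loading a released model looks roughly like this:

    # Minimal sketch, assuming torch-encoding is installed as described in the Docs.
    import torch
    import encoding

    model = encoding.models.get_model('DeepLab_ResNeSt101_ADE', pretrained=True).eval()
    img = torch.randn(1, 3, 256, 256)        # dummy input
    with torch.no_grad():
        output = model.evaluate(img)         # per-class scores, here (1, 150, 256, 256)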

Citations

ResNeSt: Split-Attention Networks [arXiv]
Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Muller, R. Manmatha, Mu Li and Alex Smola

@article{zhang2020resnest,
  title={ResNeSt: Split-Attention Networks},
  author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
  journal={arXiv preprint},
  year={2020}
}

Context Encoding for Semantic Segmentation [arXiv]
Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, Amit Agrawal

@InProceedings{Zhang_2018_CVPR,
  author = {Zhang, Hang and Dana, Kristin and Shi, Jianping and Zhang, Zhongyue and Wang, Xiaogang and Tyagi, Ambrish and Agrawal, Amit},
  title = {Context Encoding for Semantic Segmentation},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2018}
}

Deep TEN: Texture Encoding Network [arXiv]
Hang Zhang, Jia Xue, Kristin Dana

@InProceedings{Zhang_2017_CVPR,
  author = {Zhang, Hang and Xue, Jia and Dana, Kristin},
  title = {Deep TEN: Texture Encoding Network},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {July},
  year = {2017}
}

Comments
  • No module named cpp_extension

    Hi, I get the error No module named cpp_extension (from torch.utils.cpp_extension import load) when I run the quick demo at http://hangzh.com/PyTorch-Encoding/experiments/segmentation.html#install-package. My Python and torch versions are 2.7 and 0.3.1, respectively. How can I handle it?
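
    As a hedged note (not the maintainer's answer): torch.utils.cpp_extension only ships with PyTorch 0.4.0 and later, so on 0.3.1 the import is expected to fail. A quick check:

        # Hedged sketch: confirm whether torch.utils.cpp_extension exists in the
        # installed PyTorch; it was added in 0.4.0, so 0.3.1 will hit ImportError.
        import torch
        print(torch.__version__)
        try:
            from torch.utils.cpp_extension import load  # noqa: F401
            print("cpp_extension available")
        except ImportError:
            print("cpp_extension missing -- upgrade PyTorch to >= 0.4.0")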

    bug 
    opened by qiulesun 51
  • libENCODING.so library missing

    Hello,

    I've tried to install this Encoding-Layer on Linux 16.04, but the following error occurs during installation:

    x86_64-linux-gnu-g++: error: /usr/local/lib/python3.5/dist-packages/torch/lib/libENCODING.so: No file or repository of this type error: command 'x86_64-linux-gnu-g++' failed with exit status 1

    I tried to find this library to see whether the problem was coming from a wrong path, but it looks like the library doesn't exist on my computer. I installed PyTorch from source following the instructions with no problems, and I'm also using CUDA 8.0. Could you help me find a solution to this problem, please?

    Thanks

    duplicate 
    opened by TFisichella 24
  • Results of Cityscapes using EncNet

    I want to get results on the Cityscapes dataset by training EncNet. Although you did not provide corresponding results, can I do this directly with the code you have released, mainly prepare_cityscapes.py and cityscapes.py, using the other default hyper-parameters such as lr=0.01 and epoch=240?

    opened by qiulesun 23
  • ninja: build stopped: subcommand failed.

    /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/cast.h: In instantiation of ‘pybind11::object pybind11::detail::object_api::operator()(Args&& ...) const [with pybind11::return_value_policy policy = (pybind11::return_value_policy)1u; Args = {pybind11::handle&, pybind11::handle&}; Derived = pybind11::detail::accessorpybind11::detail::accessor_policies::str_attr]’: /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/pytypes.h:884:27: required from ‘pybind11::str pybind11::str::format(Args&& ...) const [with Args = {pybind11::handle&, pybind11::handle&}]’ /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/pybind11.h:749:72: required from here /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/cast.h:2096:74: error: no matching function for call to ‘collect_arguments(pybind11::handle&, pybind11::handle&)’ return detail::collect_arguments(std::forward(args)...).call(derived().ptr()); ^ /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/cast.h:2096:74: note: candidates are: /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/cast.h:2075:1: note: template<pybind11::return_value_policy policy, class ... Args, class> pybind11::detail::simple_collector pybind11::detail::collect_arguments(Args&& ...) simple_collector collect_arguments(Args &&...args) { ^ /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/cast.h:2075:1: note: template argument deduction/substitution failed: /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/cast.h:2082:1: note: template<pybind11::return_value_policy policy, class ... Args, class> pybind11::detail::unpacking_collector pybind11::detail::collect_arguments(Args&& ...) unpacking_collector collect_arguments(Args &&...args) { ^ /home/gaoxy/.conda/envs/pytorch/lib/python3.6/site-packages/torch/lib/include/pybind11/cast.h:2082:1: note: template argument deduction/substitution failed: ninja: build stopped: subcommand failed.

    compatibility 
    opened by OilGao 21
  • subprocess.CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.

    Warning (from warnings module): File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\cpp_extension.py", line 184 warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error)) UserWarning: Error checking compiler version for c++: Command 'c++' returned non-zero exit status 1. Traceback (most recent call last): File "<pyshell#0>", line 1, in import encoding File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\encoding_init_.py", line 13, in from . import nn, functions, parallel, utils, models, datasets, transforms File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\encoding\nn_init_.py", line 12, in from .encoding import * File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\encoding\nn\encoding.py", line 18, in from ..functions import scaled_l2, aggregate, pairwise_cosine File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\encoding\functions_init_.py", line 2, in from .encoding import * File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\encoding\functions\encoding.py", line 14, in from .. import lib File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\encoding\lib_init_.py", line 15, in ], build_directory=cpu_path, verbose=False) File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\cpp_extension.py", line 645, in load is_python_module) File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\cpp_extension.py", line 814, in _jit_compile with_cuda=with_cuda) File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\cpp_extension.py", line 859, in _write_ninja_file_and_build with_cuda=with_cuda) File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\cpp_extension.py", line 1064, in _write_ninja_file 'cl']).decode().split('\r\n') File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\subprocess.py", line 336, in check_output **kwargs).stdout File "C:\Users\ys\AppData\Local\Programs\Python\Python36\lib\subprocess.py", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.

    After installing the latest version of encoding, this error still occurs when I import the encoding module. I use Python 3.6, PyTorch 1.0 stable, and CUDA 9.0.

    opened by flyingshan 19
  • RuntimeError: Ninja is required to load C++ extension

    Hi, author. I have followed the instructions on your page: I got your code by git clone and ran "python setup.py install" with no errors. However, when I run "python3 demo.py", I get an error like this:

    Traceback (most recent call last): File "/home/llg/.local/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 873, in verify_ninja_availability subprocess.check_call('ninja --version'.split(), stdout=devnull) File "/usr/lib/python3.5/subprocess.py", line 576, in check_call retcode = call(*popenargs, **kwargs) File "/usr/lib/python3.5/subprocess.py", line 557, in call with Popen(*popenargs, **kwargs) as p: File "/usr/lib/python3.5/subprocess.py", line 947, in init restore_signals, start_new_session) File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child raise child_exception_type(errno_num, err_msg) FileNotFoundError: [Errno 2] No such file or directory: 'ninja'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "1.py", line 2, in import encoding File "/home/llg/Documents/PyTorch-Encoding/encoding/init.py", line 13, in from . import nn, functions, parallel, utils, models, datasets, transforms File "/home/llg/Documents/PyTorch-Encoding/encoding/nn/init.py", line 12, in from .encoding import * File "/home/llg/Documents/PyTorch-Encoding/encoding/nn/encoding.py", line 18, in from ..functions import scaled_l2, aggregate, pairwise_cosine File "/home/llg/Documents/PyTorch-Encoding/encoding/functions/init.py", line 2, in from .encoding import * File "/home/llg/Documents/PyTorch-Encoding/encoding/functions/encoding.py", line 14, in from .. import lib File "/home/llg/Documents/PyTorch-Encoding/encoding/lib/init.py", line 15, in ], build_directory=cpu_path, verbose=False) File "/home/llg/.local/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 645, in load is_python_module) File "/home/llg/.local/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 814, in _jit_compile with_cuda=with_cuda) File "/home/llg/.local/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 837, in _write_ninja_file_and_build verify_ninja_availability() File "/home/llg/.local/lib/python3.5/site-packages/torch/utils/cpp_extension.py", line 875, in verify_ninja_availability raise RuntimeError("Ninja is required to load C++ extensions") RuntimeError: Ninja is required to load C++ extension

    How can I overcome this? What is "Ninja"?
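
    As a hedged aside (not the maintainer's reply): Ninja is a small build system that torch.utils.cpp_extension uses to compile the JIT-built extensions. A quick availability check:

        # Hedged sketch: make sure the ninja binary is on PATH before importing encoding.
        import shutil
        path = shutil.which("ninja")
        if path is None:
            print("ninja not found; install it, e.g. with `pip install ninja` or your package manager")
        else:
            print("ninja found at", path)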

    opened by lilingge 18
  • ninja: build stopped: subcommand failed.

    After I installed ninja, when I run python main.py --dataset cifar10 --model encnetdrop --widen 8 --ncodes 32 --resume model/encnet_cifar.pth.tar --eval

    it shows the error:

    /home/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py:118: UserWarning:

                               !! WARNING !!
    

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Your compiler (c++) may be ABI-incompatible with PyTorch! Please use a compiler that is ABI-compatible with GCC 4.9 and above. See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.

    See https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6 for instructions on how to install GCC 4.9 or higher. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                              !! WARNING !!
    

    warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler)) Traceback (most recent call last): File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 759, in _build_extension_module ['ninja', '-v'], stderr=subprocess.STDOUT, cwd=build_directory) File "/home/anaconda3/lib/python3.6/subprocess.py", line 336, in check_output **kwargs).stdout File "/home/anaconda3/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "main.py", line 24, in from encoding.utils import * File "/home/anaconda3/lib/python3.6/site-packages/encoding/init.py", line 13, in from . import nn, functions, dilated, parallel, utils, models, datasets File "/home/anaconda3/lib/python3.6/site-packages/encoding/nn/init.py", line 12, in from .encoding import * File "/home/anaconda3/lib/python3.6/site-packages/encoding/nn/encoding.py", line 18, in from ..functions import scaled_l2, aggregate, pairwise_cosine File "/home/anaconda3/lib/python3.6/site-packages/encoding/functions/init.py", line 2, in from .encoding import * File "/home/anaconda3/lib/python3.6/site-packages/encoding/functions/encoding.py", line 14, in from .. import lib File "/home/anaconda3/lib/python3.6/site-packages/encoding/lib/init.py", line 15, in ], build_directory=cpu_path, verbose=False) File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 514, in load with_cuda=with_cuda) File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 682, in _jit_compile _build_extension_module(name, build_directory) File "/home/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 765, in _build_extension_module name, error.output.decode())) RuntimeError: Error building extension 'enclib_cpu': [1/4] c++ -MMD -MF syncbn_cpu.o.d -DTORCH_EXTENSION_NAME=enclib_cpu -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/home/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/syncbn_cpu.cpp -o syncbn_cpu.o FAILED: syncbn_cpu.o c++ -MMD -MF syncbn_cpu.o.d -DTORCH_EXTENSION_NAME=enclib_cpu -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/home/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/syncbn_cpu.cpp -o syncbn_cpu.o /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/syncbn_cpu.cpp:1:26: fatal error: torch/tensor.h: No such file or directory compilation terminated. [2/4] c++ -MMD -MF roi_align_cpu.o.d -DTORCH_EXTENSION_NAME=enclib_cpu -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/home/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/roi_align_cpu.cpp -o roi_align_cpu.o FAILED: roi_align_cpu.o c++ -MMD -MF roi_align_cpu.o.d -DTORCH_EXTENSION_NAME=enclib_cpu -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/home/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/roi_align_cpu.cpp -o roi_align_cpu.o /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/roi_align_cpu.cpp:1:26: fatal error: torch/tensor.h: No such file or directory compilation terminated. 
[3/4] c++ -MMD -MF nms_cpu.o.d -DTORCH_EXTENSION_NAME=enclib_cpu -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/home/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/nms_cpu.cpp -o nms_cpu.o FAILED: nms_cpu.o c++ -MMD -MF nms_cpu.o.d -DTORCH_EXTENSION_NAME=enclib_cpu -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -I/home/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -I/home/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -c /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/nms_cpu.cpp -o nms_cpu.o /home/anaconda3/lib/python3.6/site-packages/encoding/lib/cpu/nms_cpu.cpp:1:26: fatal error: torch/tensor.h: No such file or directory compilation terminated. ninja: build stopped: subcommand failed.

    bug high priority 
    opened by sanersbug 18
  • How can I use the Encoding Layer directly? pip install does not succeed for me

    Hello, my environment is Python 3.7, PyTorch 1.7, torchvision 0.8.1. Running pip install git+https://github.com/zhanghang1989/PyTorch-Encoding/ produces a large number of errors that I cannot resolve, so I am asking for help. I tried copying the Encoding-Layer-related code directly, but the code under encoding/lib/ cannot be imported. What should I do?

    In addition, I implemented the Encoding module using PyTorch matrix operations, but I found that the values blow up (NaN) while computing eik. Do you happen to have a solution? My code is shown below.

    Any guidance would be greatly appreciated. Thank you very much.

    # Imports added for completeness; Config is the poster's own configuration
    # module, and Config.K is the number of codewords.
    import torch
    import torch.nn as nn

    class CodeBookBlock(nn.Module):
        def __init__(self, in_channels, c2, out_channels):
            super(CodeBookBlock, self).__init__()
            self.c2 = c2
            self.conv1 = nn.Sequential(
                nn.Conv2d(in_channels, c2, kernel_size=1),
                nn.BatchNorm2d(c2),
                nn.LeakyReLU()
            )
            self.codebook = nn.Parameter(torch.Tensor(c2, Config.K), requires_grad=True)
            self.scale = nn.Parameter(torch.Tensor(Config.K), requires_grad=True)
            # BatchNorm2d cannot be used here, otherwise the values after the fc all become NaN
            self.dp = nn.Dropout(0.5)
            self.relu = nn.ReLU6()
            self.leakyRelu = nn.LeakyReLU()
            self.fc = nn.Linear(self.c2, out_channels)
            self.sigmoid = nn.Sigmoid()
            self.init_params()  # initialize the parameters
            torch.autograd.set_detect_anomaly(True)

        def init_params(self):
            std1 = 1. / ((Config.K * self.c2) ** (1 / 2))
            self.codebook.data.uniform_(-std1, std1)
            self.scale.data.uniform_(-1, 0)

        def forward(self, z):
            """
            :param z: (Batch, c, h, w)
            :return: (Batch, c2)
            """
            batch, c, h, w = z.shape
            N = h * w
            z1 = self.conv1(z)
            z1 = z1.flatten(start_dim=2, end_dim=-1)  # Batch, c2, N
            # -------------- compute the scaling factor gamma --------------
            # --- expand the feature vectors z1
            z1 = z1.unsqueeze(2)                # Batch, c2, 1, N
            z1 = z1.repeat(1, 1, Config.K, 1)   # Batch, c2, K, h*w
            z1 = z1.transpose(2, 3)             # swap K and N: Batch, c2, N, K
            # --- expand the codebook
            d = self.codebook.unsqueeze(1)      # c2, 1, K
            d = d.repeat(1, N, 1)               # c2, N, K
            d = d.unsqueeze(0)                  # 1, c2, N, K
            d = d.repeat(batch, 1, 1, 1)        # batch, c2, N, K
            # --- compute the residuals rik
            rik = z1 - d                        # batch, c2, N, K
            # --- compute the numerator
            rik = torch.pow(torch.abs(rik), 2)  # squared magnitude of rik: batch, c2, N, K
            # broadcast scale from (K,) to batch, c2, N, K
            scale = self.scale.repeat(N, 1)             # N, K
            scale = scale.unsqueeze(0).unsqueeze(0)     # 1, 1, N, K
            scale = scale.repeat(batch, self.c2, 1, 1)  # batch, c2, N, K
            # Using exp here makes the numerator very large and later variables turn
            # into NaN; without exp, Rei can be 0 and the division below breaks, so
            # this was changed to adding a constant or using LeakyReLU.
            numerator = self.leakyRelu(-scale * rik)    # batch, c2, N, K
            Rei = numerator.sum(3)  # denominator of the eik formula: batch, c2, N
            # --- compute eik (must happen after Rei)
            numerator = numerator * rik         # batch, c2, N, K
            # expand Rei from batch, c2, N to batch, c2, N, K
            Rei = Rei.unsqueeze(2)               # batch, c2, 1, N
            Rei = Rei.repeat(1, 1, Config.K, 1)  # batch, c2, K, N
            Rei = Rei.transpose(2, 3)            # batch, c2, N, K
            eik = numerator / Rei                # batch, c2, N, K
            ek = eik.sum(2)                      # batch, c2, K
            e = ek.sum(2)                        # batch, c2
            e = self.dp(e)
            e = self.relu(e)
            e = self.fc(e)
            gama = self.sigmoid(e)
            return gama  # batch, c2
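
    As a hedged side note on the NaN question (a standard numerically stable formulation with made-up sizes, not this repository's implementation): the soft assignments can be computed with torch.softmax over the codeword dimension, which avoids the exp overflow that the LeakyReLU workaround above tries to dodge.

        import torch

        B, C, N, K = 2, 8, 16, 32           # hypothetical batch, channels, positions, codewords
        rik = torch.randn(B, C, N, K)       # residuals z1 - d, as in the code above
        scale = torch.rand(K)               # smoothing factors

        logits = -scale * rik.pow(2)             # (B, C, N, K), broadcasting over the last dim
        assign = torch.softmax(logits, dim=-1)   # subtracts the per-row max internally, so no overflow
        ek = (assign * rik).sum(dim=2)           # aggregate over the N positions -> (B, C, K)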
    
    opened by RSMung 16
  • AttributeError: 'NoneType' object has no attribute 'run_slave'

    Hi Zhang:

    segmentation

    When I train the segmentation model with

    CUDA_VISIBLE_DEVICES=0  python train.py --dataset pcontext --model encnet --aux --se-loss
    

    I face this error:

    Using poly LR Scheduler!
    Starting Epoch: 0
    Total Epoches: 80
      0%|                                                                                                                                         | 0/1249 [00:00<?, ?it/s]
    =>Epoches 0, learning rate = 0.0003,                 previous best = 0.0000
    Traceback (most recent call last):
      File "train.py", line 175, in <module>
        trainer.training(epoch)
      File "train.py", line 105, in training
        outputs = self.model(image)
      File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 468, in __call__
        result = self.forward(*input, **kwargs)
      File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
        return self.module(*inputs[0], **kwargs[0])
      File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 468, in __call__
        result = self.forward(*input, **kwargs)
      File "/root/anaconda3/lib/python3.6/site-packages/encoding/models/encnet.py", line 32, in forward
        features = self.base_forward(x)
      File "/root/anaconda3/lib/python3.6/site-packages/encoding/models/base.py", line 51, in base_forward
        x = self.pretrained.bn1(x)
      File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 468, in __call__
        result = self.forward(*input, **kwargs)
      File "/root/anaconda3/lib/python3.6/site-packages/encoding/nn/syncbn.py", line 57, in forward
        mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(xsum, xsqsum, N))
    AttributeError: 'NoneType' object has no attribute 'run_slave'
    
    

    my environment:

    • Quick Demo has been done (success)

    • version

    >>> torch.__version__
    '0.5.0a0+32bc28d'
    
    • only one NVIDIA card: 1080 (8 GB)

    recognition

    When I run the recognition demo with

    python main.py --dataset cifar10 --model encnetdrop --widen 8 --ncodes 32 --resume model/encnet_cifar.pth.tar --eval
    
    

    I also face an error:

        (9): View()
        (10): Linear(in_features=512, out_features=10, bias=True)
      )
    )
    Traceback (most recent call last):
      File "main.py", line 181, in <module>
        main()
      File "main.py", line 56, in main
        Dataloader = dataset.Dataloader
    AttributeError: module 'dataset.cifar10' has no attribute 'Dataloader'
    
    

    my environment:

    • torchvision:
    >>> import torchvision
    >>> torchvision.__version__
    '0.2.1'
    
    

    Can you help me with this problem? Thank you ~

    opened by hellodfan 16
  • Subprocess.CalledProcessError:Command '['ninja','-v']' returned non-zero exit status 1.

    Environment configuration: PyTorch 1.0.0, Python 3.6.7, Ubuntu 16.04, Anaconda3.

    When I run python and import encoding, the errors below appear.

    $ python
    Python 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44) [GCC 7.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import torch
    >>> print(torch.__version__)
    1.0.0
    >>> import encoding
    Traceback (most recent call last): File "/data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 946, in _build_extension_module check=True) File "/data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "", line 1, in File "/data_2/pytorch_project/PyTorch-Encoding/encoding/init.py", line 13, in from . import nn, functions, parallel, utils, models, datasets, transforms File "/data_2/pytorch_project/PyTorch-Encoding/encoding/nn/init.py", line 12, in from .encoding import * File "/data_2/pytorch_project/PyTorch-Encoding/encoding/nn/encoding.py", line 18, in from ..functions import scaled_l2, aggregate, pairwise_cosine File "/data_2/pytorch_project/PyTorch-Encoding/encoding/functions/init.py", line 2, in from .encoding import * File "/data_2/pytorch_project/PyTorch-Encoding/encoding/functions/encoding.py", line 14, in from .. import lib File "/data_2/pytorch_project/PyTorch-Encoding/encoding/lib/init.py", line 27, in build_directory=gpu_path, verbose=False) File "/data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 645, in load is_python_module) File "/data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 814, in jit_compile with_cuda=with_cuda) File "/data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 863, in write_ninja_file_and_build build_extension_module(name, build_directory, verbose) File "/data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 959, in build_extension_module raise RuntimeError(message) RuntimeError: Error building extension 'enclib_gpu': [1/7] /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS -D__CUDA_NO_HALF_CONVERSIONS -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/syncbn_kernel.cu -o syncbn_kernel.cuda.o FAILED: syncbn_kernel.cuda.o /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/syncbn_kernel.cu -o syncbn_kernel.cuda.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). 
/data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/jit/argument_spec.h(59): error: static assertion failed with "ArgumentInfo is to be a POD struct"

    1 error detected in the compilation of "/tmp/tmpxft_00003cd1_00000000-7_syncbn_kernel.cpp1.ii". [2/7] /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/encodingv2_kernel.cu -o encodingv2_kernel.cuda.o FAILED: encodingv2_kernel.cuda.o /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/encodingv2_kernel.cu -o encodingv2_kernel.cuda.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/jit/argument_spec.h(59): error: static assertion failed with "ArgumentInfo is to be a POD struct"

    1 error detected in the compilation of "/tmp/tmpxft_00003cd0_00000000-7_encodingv2_kernel.cpp1.ii". [3/7] /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/nms_kernel.cu -o nms_kernel.cuda.o FAILED: nms_kernel.cuda.o /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/nms_kernel.cu -o nms_kernel.cuda.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/jit/argument_spec.h(59): error: static assertion failed with "ArgumentInfo is to be a POD struct"

    1 error detected in the compilation of "/tmp/tmpxft_00003cd3_00000000-7_nms_kernel.cpp1.ii". [4/7] /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/encoding_kernel.cu -o encoding_kernel.cuda.o FAILED: encoding_kernel.cuda.o /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/encoding_kernel.cu -o encoding_kernel.cuda.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/jit/argument_spec.h(59): error: static assertion failed with "ArgumentInfo is to be a POD struct"

    1 error detected in the compilation of "/tmp/tmpxft_00003cce_00000000-7_encoding_kernel.cpp1.ii". [5/7] /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/roi_align_kernel.cu -o roi_align_kernel.cuda.o FAILED: roi_align_kernel.cuda.o /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/roi_align_kernel.cu -o roi_align_kernel.cuda.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/jit/argument_spec.h(59): error: static assertion failed with "ArgumentInfo is to be a POD struct"

    1 error detected in the compilation of "/tmp/tmpxft_00003cd2_00000000-7_roi_align_kernel.cpp1.ii". [6/7] /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/activation_kernel.cu -o activation_kernel.cuda.o FAILED: activation_kernel.cuda.o /usr/local/cuda-8.0/bin/nvcc -DTORCH_EXTENSION_NAME=enclib_gpu -DTORCH_API_INCLUDE_EXTENSION_H -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/TH -isystem /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda-8.0/include -isystem /data_2/Anaconda3/envs/pytorch1.0.0/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /data_2/pytorch_project/PyTorch-Encoding/encoding/lib/gpu/activation_kernel.cu -o activation_kernel.cuda.o nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning). /data_2/Anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/lib/include/torch/csrc/jit/argument_spec.h(59): error: static assertion failed with "ArgumentInfo is to be a POD struct"

    1 error detected in the compilation of "/tmp/tmpxft_00003ccc_00000000-7_activation_kernel.cpp1.ii". ninja: build stopped: subcommand failed.

    opened by zhenxingsh 15
  • Can not pip install torch-encoding

    My PyTorch is 0.4.0, and when I run pip install torch-encoding, it fails like this:

    Complete output from command python setup.py egg_info:
    fatal: Not a git repository (or any of the parent directories): .git
    error in torch-encoding setup command: '/tmp/pip-build-lhdbv8pg/torch-encoding/build.py' does not name an existing file
    

    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-lhdbv8pg/torch-encoding/

    enhancement 
    opened by irfanICMLL 15
  • Make compatible with pytorch 1.11 and newer; bugfix

    The current version is incompatible with pytorch 1.11 and newer due to breaking changes.

    This PR contains two commits.

    The first commit enables compatibility with newer pytorch versions (Issue #411)

    https://github.com/zhanghang1989/PyTorch-Encoding/commit/45d5f8cc3d932faafd98f5b427b30800a2f667fd

    • THCudaCheck is deprecated in favour of C10_CUDA_CHECK (https://github.com/pytorch/pytorch/pull/66391)

    The other commit fixes this bug: https://github.com/zhanghang1989/PyTorch-Encoding/commit/00167dc4b4338f332f74b9e0dda34a5cdb5f5e84

    • Missing "common.h" include

    Tested this out, and it compiles and runs on PyTorch 1.11 and 1.13.

    opened by krrish94 0
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.
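
    As a hedged illustration of the kind of check described above (not the exact patch submitted in this PR):

        # Hypothetical sketch: refuse to extract members that would escape the
        # destination directory, then fall back to the normal extractall().
        import os
        import tarfile

        def safe_extractall(tar_path, dest="."):
            with tarfile.open(tar_path) as tar:
                dest_root = os.path.realpath(dest)
                for member in tar.getmembers():
                    target = os.path.realpath(os.path.join(dest, member.name))
                    if target != dest_root and not target.startswith(dest_root + os.sep):
                        raise RuntimeError("Blocked path traversal in tar member: " + member.name)
                tar.extractall(dest)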

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Which index matches which class in ADE20k?

    With any model, I get per-class probabilities for each pixel. E.g., when I run

    model = encoding.models.get_model('DeepLab_ResNeSt101_ADE', pretrained=True).eval()
    img = torch.randn(1,3,256,256)
    output = model.evaluate(img)
    print(output.shape)
    

    I get torch.Size([1, 150, 256, 256]), since there are 150 classes in the ADE20K dataset.

    However, I could not find anywhere in the code which channel of output[0, :, x, y] corresponds to which of the 150 ADE20K classes. Where can I find the list of classes?
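
    As a hedged aside (this shows only how to turn the scores into class indices, not which name each index maps to):

        # The channel dimension indexes the 150 classes; argmax over it yields a
        # per-pixel class-index map in the same (unnamed) order.
        pred = output.argmax(dim=1)   # shape (1, 256, 256), integer labels in [0, 149]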

    opened by 8ToThePowerOfMol 0
  • What changes need to be made in this repo if I wanted to use this on CamVid dataset

    Recently, I came across the FastFCN paper, in which the authors use the pytorch-encoding repository. In order to test their model on the CamVid dataset, what changes should I make apart from writing a camvid.py file under datasets?
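
    As a hedged, generic sketch of what such a dataset class could look like (it uses plain torch.utils.data.Dataset rather than this repository's own dataset base class, and the directory layout is hypothetical):

        import os
        from PIL import Image
        from torch.utils.data import Dataset

        class CamVidSegmentation(Dataset):
            """Minimal image/mask pair loader; transforms and any label remapping
            expected by the training script are left out."""
            def __init__(self, root, split="train", transform=None):
                img_dir = os.path.join(root, split)
                mask_dir = os.path.join(root, split + "_labels")  # hypothetical layout
                self.items = [(os.path.join(img_dir, name), os.path.join(mask_dir, name))
                              for name in sorted(os.listdir(img_dir))]
                self.transform = transform

            def __len__(self):
                return len(self.items)

            def __getitem__(self, idx):
                img_path, mask_path = self.items[idx]
                img = Image.open(img_path).convert("RGB")
                mask = Image.open(mask_path)
                if self.transform is not None:
                    img, mask = self.transform(img, mask)
                return img, mask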

    opened by sparshgarg23 0
  • Package has issues with Pytorch 1.10

    Hi,

    As I was configuring different combinations of PyTorch and CUDA, I realized that if your machine's nvcc version differs from the CUDA version torch was compiled with (which is totally fine for torch itself, since you can run code on new CUDA 11.5 machines while torch lags behind at 11.3), you will receive a mismatched-versions error when installing extensions through distutils and cpp_extension.

    Of course, I realize that reverting torch to 1.9 is totally fine and the package then installs easily. Are there any plans to change the structure of the extensions? (Sorry, I am not familiar with how exactly you could bypass this mismatch issue.)
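
    A hedged sketch of checking for the mismatch described above before building the extensions:

        # Compare the CUDA version PyTorch was built with against the local nvcc.
        import re
        import subprocess
        import torch

        built_with = torch.version.cuda  # e.g. "11.3", or None for CPU-only builds
        try:
            out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
            match = re.search(r"release (\d+\.\d+)", out)
            local = match.group(1) if match else "unknown"
        except FileNotFoundError:
            local = "nvcc not on PATH"
        print("torch built with CUDA {}; local nvcc: {}".format(built_with, local))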

    opened by codtiger 2
Releases (v1.2.1)