AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation

Overview

AtlasNet [Project Page] [Paper] [Talk]

AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry
In CVPR, 2018.

🚀 New branch: AtlasNet + Shape Reconstruction by Learning Differentiable Surface Representations

(Result figures: chair.png, chair.gif)

Install

This implementation uses Python 3.6, PyTorch 1.7.1, PyMesh, and CUDA 10.1.

# Copy/Paste the snippet in a terminal
git clone --recurse-submodules https://github.com/ThibaultGROUEIX/AtlasNet.git
cd AtlasNet 

#Dependencies
conda create -n atlasnet python=3.6 --yes
conda activate atlasnet
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch --yes
pip install --user --requirement  requirements.txt # pip dependencies
Optional: Compile Chamfer (MIT) + Metro distance (GPL3 license)
# Copy/Paste the snippet in a terminal
python auxiliary/ChamferDistancePytorch/chamfer3D/setup.py install #MIT
cd auxiliary
git clone https://github.com/ThibaultGROUEIX/metro_sources.git
cd metro_sources; python setup.py --build # build metro distance #GPL3
cd ../..
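
Before downloading data, it can help to confirm that the conda environment picked up a CUDA-enabled PyTorch build matching the toolkit installed above. A minimal check (plain PyTorch APIs only, nothing repo-specific):

# Quick environment sanity check.
import torch

print("torch:", torch.__version__)           # expected: 1.7.1
print("CUDA build:", torch.version.cuda)     # expected: 10.1, matching cudatoolkit above
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))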

A note on data.

Data download should be automatic. However, due to the new Google Drive traffic caps, you may have to download the data manually. If you run into an error running the demo, refer to issue #61.

You can manually download the data from three sources (they are the same):

Please make sure to unzip the archives in the right places:

cd AtlasNet
mkdir data
unzip ShapeNetV1PointCloud.zip -d ./data/
unzip ShapeNetV1Renderings.zip -d ./data/
unzip metro_files.zip -d ./data/
unzip trained_models.zip -d ./training/
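
A quick way to verify the layout afterwards (a sketch: the directory names below assume each archive extracts to a folder named like the zip file, so adjust if yours differ):

# Check that the expected data folders exist and are non-empty (folder names are assumptions).
from pathlib import Path

for folder in ["data/ShapeNetV1PointCloud", "data/ShapeNetV1Renderings",
               "data/metro_files", "training/trained_models"]:
    p = Path(folder)
    count = sum(1 for _ in p.rglob("*")) if p.exists() else 0
    print(f"{folder}: {'OK' if count else 'missing or empty'} ({count} entries)")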

Usage

  • Demo: python train.py --demo
  • Training: python train.py --shapenet13. Monitor the training on http://localhost:8890/
  • Latest refactoring (12-2019); a minimal sketch of the resulting decoder idea follows this list:
    - [x] Factorize single-view reconstruction and autoencoder in the same class
    - [x] Factorize square and sphere templates in the same class
    - [x] Add the latent vector as a bias after the first layer (30% speedup)
    - [x] Remove the last tanh in the decoder
    - [x] Make a large .pth tensor with all point clouds in cache (drop the nasty Chunk_reader)
    - [x] Make it multi-GPU
    - [x] Add netvision visualization of the results
    - [x] Rewrite the main script object-oriented
    - [x] Check that everything works in the latest PyTorch version
    - [x] Add more layers by default and flags for the number of layers and hidden neurons
    - [x] Add a flag to generate a mesh directly
    - [x] Add a python setup install
    - [x] Make sure GPUs are used at 100%
    - [x] Add F-score in Chamfer + report F-score
    - [x] Get rid of the ShapeNet v2 data and use v1!
    - [x] Fix path issues, no more sys.path.append
    - [x] Preprocess ShapeNet 55 and add it to the dataloader
    - [x] Make minimal dependencies
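
Several of the items above (shared square/sphere template class, latent vector added as a bias after the first layer, no final tanh, configurable number of layers and hidden neurons) describe the patch-deformation decoder. The sketch below only illustrates that idea; names and sizes are illustrative, and it is not the repo's exact module.

# Minimal sketch of an AtlasNet-style patch decoder (illustrative, not the repo's atlasnet.py).
import torch
import torch.nn as nn

class PatchDecoder(nn.Module):
    """Deforms template points (2D square or 3D sphere samples) into 3D points, conditioned on a latent code."""
    def __init__(self, template_dim=2, latent_size=1024, hidden=512, num_layers=2):
        super().__init__()
        self.first = nn.Linear(template_dim, hidden)
        # "Latent vector as bias after first layer": project the code once and add it,
        # instead of concatenating it to every sampled point.
        self.latent_to_bias = nn.Linear(latent_size, hidden)
        self.hidden_layers = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(num_layers)])
        self.last = nn.Linear(hidden, 3)   # no final tanh ("remove last tanh in decoder")
        self.act = nn.ReLU()

    def forward(self, points, latent):
        # points: (B, N, template_dim), latent: (B, latent_size) -> (B, N, 3)
        x = self.act(self.first(points) + self.latent_to_bias(latent).unsqueeze(1))
        for layer in self.hidden_layers:
            x = self.act(layer(x))
        return self.last(x)

# Usage: deform a randomly sampled unit-square template with a (random) shape code.
points = torch.rand(4, 2500, 2)
latent = torch.randn(4, 1024)
print(PatchDecoder()(points, latent).shape)   # torch.Size([4, 2500, 3])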

Quantitative Results

Method                  Chamfer (*1)  F-score (*2)  Metro (*3)  Total train time (min)
Autoencoder 25 Squares  1.35          82.3%         6.82        731
Autoencoder 1 Sphere    1.35          83.3%         6.94        548
SingleView 25 Squares   3.78          63.1%         8.94        1422
SingleView 1 Sphere     3.76          64.4%         9.01        1297
  • (*1) x1000. Computed between 2500 ground-truth points and 2500 reconstructed points.
  • (*2) The threshold is 0.001.
  • (*3) x100. Metro is run on unnormalized point clouds (which explains the difference from the paper's numbers).
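
For reference, here is a brute-force sketch of how these two point-cloud metrics are typically computed. This is not the repo's CUDA Chamfer kernel, and applying the F-score threshold to squared distances is an assumption made here for illustration.

# Reference (slow) Chamfer distance and F-score between two point clouds, pure PyTorch.
import torch

def chamfer_and_fscore(gt, pred, threshold=0.001):
    # gt: (N, 3) ground-truth points, pred: (M, 3) reconstructed points
    d = torch.cdist(gt, pred) ** 2            # squared pairwise distances
    d_gt_to_pred = d.min(dim=1).values        # each GT point -> nearest reconstructed point
    d_pred_to_gt = d.min(dim=0).values        # each reconstructed point -> nearest GT point
    chamfer = d_gt_to_pred.mean() + d_pred_to_gt.mean()
    precision = (d_pred_to_gt < threshold).float().mean()
    recall = (d_gt_to_pred < threshold).float().mean()
    fscore = 2 * precision * recall / (precision + recall + 1e-8)
    return chamfer.item() * 1000, fscore.item() * 100   # x1000 Chamfer, F-score in %

gt, pred = torch.rand(2500, 3), torch.rand(2500, 3)
print(chamfer_and_fscore(gt, pred))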

Related projects

Citing this work

@inproceedings{groueix2018,
          title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
          author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
          booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
          year={2018}
        }

Comments
  • RuntimeError: CUDA error: out of memory

    Thank you for the great work! I get this error below when I run: ./training/train_AE_AtlasNet.py

    I checked two more similar issues but this looks different. Any idea how to solve it? Any help appreciated!

    File "./training/train_AE_AtlasNet.py", line 151, in <module>
      dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function
    File "./training/train_AE_AtlasNet.py", line 64, in distChamfer
      P = (rx.transpose(2,1) + ry - 2*zz)
    RuntimeError: CUDA error: out of memory

    I run PyTorch 0.4.1 on Ubuntu 18.04.

    FULL CODE:

    (pytorch-atlasnet) [email protected]:~/AtlasNet$ python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt
    Setting up a new session...
    Namespace(accelerated_chamfer=0, batchSize=32, env='AE_AtlasNet', model='', nb_primitives=25, nepoch=120, num_points=2500, super_points=2500, workers=12)
    Random Seed: 314
    {'plane': '02691156', 'bench': '02828884', 'cabinet': '02933112', 'car': '02958343', 'chair': '03001627', 'monitor': '03211117', 'lamp': '03636649', 'speaker': '03691459', 'firearm': '04090263', 'couch': '04256520', 'table': '04379243', 'cellphone': '04401088', 'watercraft': '04530566'}
    category 02691156 files 4044 0.999752781211372 %
    category 02828884 files 1813 0.9983480176211453 %
    category 02933112 files 1571 0.9993638676844784 %
    category 02958343 files 3514 0.46878335112059766 %
    category 03001627 files 6778 1.0 %
    category 03211117 files 1093 0.9981735159817352 %
    category 03636649 files 2309 0.9961173425366695 %
    category 03691459 files 1597 0.9870210135970334 %
    category 04090263 files 2373 1.0 %
    category 04256520 files 3173 1.0 %
    category 04379243 files 8436 0.9914208485133388 %
    category 04401088 files 1050 0.9980988593155894 %
    category 04530566 files 1939 1.0 %
    [the same dictionary and per-category listing is printed a second time]
    training set 31747
    testing set 7943
    Traceback (most recent call last):
      File "./training/train_AE_AtlasNet.py", line 151, in <module>
        dist1, dist2 = distChamfer(points.transpose(2,1).contiguous(), pointsReconstructed) #loss function
      File "./training/train_AE_AtlasNet.py", line 64, in distChamfer
        P = (rx.transpose(2,1) + ry - 2*zz)
    RuntimeError: CUDA error: out of memory

    help wanted 
    opened by spha-code 15
  • To be honest, the latest code is very hard to understand

    I compare our method with AtlasNet several times, and I need to edit the source code each time. However, the latest code is very hard to understand because it is highly abstracted; it takes me an hour to understand the relationships between the modules.

    help wanted 
    opened by hzxie 10
  • Stuck after launching visdom server

    I ran the demo successfully, but after I launch the visdom server

    python -m visdom.server -p 8888

    I am stuck: I can't type any command in my Anaconda window anymore. How do I continue? Thanks!

    help wanted 
    opened by spha-code 10
  • [BUG] Chamfer Distance is not Correct

    I tried to debug chamfer.cu by printing the values of the tensors. I created two point clouds containing 3 and 5 points, respectively. The values are shown below.

    (1,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -20.4838  4.4935  6.1395
      -3.7283 -0.7629  1.7736
    
    (2,.,.) = 
     0.01 *
      0.0000  0.0000  0.0000
      -17.4992  4.4902  5.0518
      -1.6003 -1.2430  0.8040
    [ Variable[CUDAType]{2,3,3} ]
    (1,.,.) = 
      0.0051  0.1850  0.0004
      0.0051  0.1850  0.0093
      0.0096  0.1850  0.0081
      0.0096  0.1850  0.0016
      0.0075  0.1850  0.0004
    
    (2,.,.) = 
     -0.1486 -0.0932 -0.0014
     -0.0406 -0.0932 -0.0017
     -0.2057 -0.0932 -0.0001
     -0.0915 -0.0932 -0.0001
      0.0103 -0.0932 -0.0001
    [ Variable[CUDAType]{2,5,3} ]
    

    I also added print statements in the CUDA functions and got the following output.

    2i = 0, n = 3, j = 0, k = 0, d = 0.03425420, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 1, k = 0, d = 0.06742091, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00038808)
    2i = 0, n = 3, j = 2, k = 0, d = 0.03920735, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00038808)
    2i = 1, n = 3, j = 0, k = 0, d = 0.03573948, x = (-0.08192606 0.01907521 0.02376382) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 1, k = 0, d = 0.03405631, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00749534 0.18500790 0.00928491)
    2i = 1, n = 3, j = 2, k = 0, d = 0.03437031, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00928491)
    2i = 0, n = 3, j = 0, k = 1, d = 0.03434026, x = (0.00000000 0.00000000 0.00000000) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 1, k = 1, d = 0.06641452, x = (-0.20483765 0.04493479 0.06139540) y = (0.00511124 0.18500790 0.00928491)
    2i = 0, n = 3, j = 2, k = 1, d = 0.03897782, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00511124 0.18500790 0.00928491)
    2i = 1, n = 3, j = 0, k = 1, d = 0.03490656, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 1, k = 1, d = 0.03394968, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00713918)
    2i = 1, n = 3, j = 2, k = 1, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00713918)
    2i = 0, n = 3, j = 0, k = 2, d = 0.03438481, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 1, k = 2, d = 0.06842789, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00809300)
    2i = 0, n = 3, j = 2, k = 2, d = 0.03939636, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00809300)
    2i = 1, n = 3, j = 0, k = 2, d = 0.03508088, x = (-0.08192606 0.01907521 0.02376382) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 1, k = 2, d = 0.03389502, x = (-0.00152916 0.00097788 -0.00109852) y = (0.00231482 0.18500790 0.00253408)
    2i = 1, n = 3, j = 2, k = 2, d = 0.03423970, x = (0.00000000 0.00000000 0.00000000) y = (0.00231482 0.18500790 0.00253408)
    2i = 0, n = 3, j = 0, k = 3, d = 0.03432181, x = (0.00000000 0.00000000 0.00000000) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 1, k = 3, d = 0.06916460, x = (-0.20483765 0.04493479 0.06139540) y = (0.00955979 0.18500790 0.00158027)
    2i = 0, n = 3, j = 2, k = 3, d = 0.03956439, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00955979 0.18500790 0.00158027)
    2i = 1, n = 3, j = 0, k = 3, d = 0.03652760, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 1, k = 3, d = 0.03404036, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00364473)
    2i = 1, n = 3, j = 2, k = 3, d = 0.03435681, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00364473)
    3i = 0, n = 3, j = 0, k = 4, d = 0.03428425, x = (0.00000000 0.00000000 0.00000000) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 1, k = 4, d = 0.06842767, x = (-0.20483765 0.04493479 0.06139540) y = (0.00749534 0.18500790 0.00038808)
    3i = 0, n = 3, j = 2, k = 4, d = 0.03941518, x = (-0.03728317 -0.00762936 0.01773610) y = (0.00749534 0.18500790 0.00038808)
    3i = 1, n = 3, j = 0, k = 4, d = 0.03643737, x = (-0.08192606 0.01907521 0.02376382) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 1, k = 4, d = 0.03406866, x = (-0.00152916 0.00097788 -0.00109852) y = (0.01075170 0.18500790 0.00602855)
    3i = 1, n = 3, j = 2, k = 4, d = 0.03437987, x = (0.00000000 0.00000000 0.00000000) y = (0.01075170 0.18500790 0.00602855)
    i = 0, n = 3, j = 0, best = 0.03425420, best_i = 0
    i = 0, n = 3, j = 1, best = 0.06641452, best_i = 1
    i = 0, n = 3, j = 2, best = 0.03897782, best_i = 1
    i = 1, n = 3, j = 0, best = 0.03490656, best_i = 1
    i = 1, n = 3, j = 1, best = 0.03389502, best_i = 2
    i = 1, n = 3, j = 2, best = 0.03423970, best_i = 2
    

    For batch 0 (i = 0), everything seems correct. However, for batch 1 (i = 1), the values of the point clouds do not appear in the tensors. Is there something wrong with the code?

    chamfer 
    opened by hzxie 10
  • Evaluate RGB image with pretrained model

    Hi, I am trying to evaluate the SVR AtlasNet pretrained model on an RGB image (a chair). My parameters are very similar to the demo's, and I get weird results (when viewing them in the Chrome 3D viewer). I used the demo grid generation. When I run your demo plane.jpg through my network, I get good results in the 3D viewer. Can you please advise me on how to evaluate an RGB image?

    testing 
    opened by Itamare1982 9
  • Test set used as validation to choose best model

    In train_AE_Atlasnet.py, the test set is used as the validation set to choose the best model. The test set should never be used during training and especially not to choose the best model as this biases the results. It's probably more appropriate to report the results on the last training epoch if there was no validation set.
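
    One common way to address this, sketched below (independent of the repo's actual dataloader, so the dataset here is a stand-in): hold out part of the training set as a validation split and keep the test set untouched until the final evaluation.

    # Carve a validation split out of the training data; the test set is never used for model selection.
    import torch
    from torch.utils.data import DataLoader, TensorDataset, random_split

    full_train = TensorDataset(torch.rand(1000, 2500, 3))   # stand-in for the ShapeNet train split
    n_val = int(0.1 * len(full_train))
    train_set, val_set = random_split(full_train, [len(full_train) - n_val, n_val],
                                      generator=torch.Generator().manual_seed(0))
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)          # use this loader to pick the best checkpoint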

    bug 
    opened by lynetcha 9
  • The corresponding normalized mesh

    I downloaded the corresponding normalized meshes (only 58 MB) from the link you provided. I found that the number of meshes is much smaller than the number of corresponding point clouds. Could you please provide the full dataset of corresponding normalized meshes? Thank you!

    data 
    opened by wang-ps 9
  • Cannot download the point cloud data

    Hi! I'm trying to download the point cloud data provided in this link: https://cloud.enpc.fr/s/j2ECcKleA1IKNzk but the network fails every time I try to download.

    Do you know what's going on or how to download them?

    Thank you in advance!

    data 
    opened by jjpark 8
  • validation loss explodes

    I directly ran the script 'train_AE_Atlasnet.py' without any modification. The performance is good on the training set but quite poor on the validation set: the validation loss increases quickly and doesn't decrease.

    pytorch 
    opened by AkonLau 8
  • About the point cloud dataset

    I found that some of the provided point cloud data are missing. Could you provide the full point cloud dataset, or tell me how to generate it? Thank you!

    data 
    opened by guoyan1991 7
  • Memory Leak

    I found that the unused self.dist1 and self.dist2 in the file "nndistance/functions/nnd.py" cause a memory leak in my environment (Python 3.5.2 with PyTorch 0.4.0):

    class NNDFunction(Function):
        def forward(self, xyz1, xyz2):
            dist1,dist2=cuda_compute_from(xyz1,xyz2)
            # following two lines cause memory leak
            self.dist1 = dist1
            self.dist2 = dist2
            return dist1, dist2
    
        def backward(self, graddist1, graddist2):
            gradxyz1,gradxyz2=grad_cuda_compute_from(graddist1,graddist2)
            return gradxyz1, gradxyz2
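
    A sketch of the usual fix: switch to the static-method autograd.Function API and pass saved tensors through ctx.save_for_backward, so they are released once backward has run instead of living on the Function object. Brute-force torch ops stand in for the repo's CUDA kernels here; this illustrates the pattern and is not the repo's actual nnd.py.

    # Nearest-neighbour (Chamfer) distances with ctx.save_for_backward instead of attributes on self.
    import torch

    class NNDFunctionFixed(torch.autograd.Function):
        @staticmethod
        def forward(ctx, xyz1, xyz2):
            d = torch.cdist(xyz1, xyz2) ** 2      # (B, N, M) squared pairwise distances
            dist1, idx1 = d.min(dim=2)            # each xyz1 point -> nearest xyz2 point
            dist2, idx2 = d.min(dim=1)            # each xyz2 point -> nearest xyz1 point
            ctx.save_for_backward(xyz1, xyz2, idx1, idx2)   # freed after backward, no leak
            return dist1, dist2

        @staticmethod
        def backward(ctx, graddist1, graddist2):
            xyz1, xyz2, idx1, idx2 = ctx.saved_tensors
            nn1 = torch.gather(xyz2, 1, idx1.unsqueeze(-1).repeat(1, 1, 3))   # matches of xyz1
            nn2 = torch.gather(xyz1, 1, idx2.unsqueeze(-1).repeat(1, 1, 3))   # matches of xyz2
            g1 = 2 * graddist1.unsqueeze(-1) * (xyz1 - nn1)
            g2 = 2 * graddist2.unsqueeze(-1) * (xyz2 - nn2)
            gradxyz1, gradxyz2 = g1.clone(), g2.clone()
            # each match also pushes the opposite gradient onto the matched point
            gradxyz2.scatter_add_(1, idx1.unsqueeze(-1).repeat(1, 1, 3), -g1)
            gradxyz1.scatter_add_(1, idx2.unsqueeze(-1).repeat(1, 1, 3), -g2)
            return gradxyz1, gradxyz2

    # usage
    xyz1 = torch.rand(2, 100, 3, requires_grad=True)
    xyz2 = torch.rand(2, 120, 3, requires_grad=True)
    dist1, dist2 = NNDFunctionFixed.apply(xyz1, xyz2)
    (dist1.mean() + dist2.mean()).backward()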
    
    chamfer pytorch 
    opened by liuyuan-pal 7
  • Question about running

    Hi, I'm sorry to bother you again. When I ran the code as you explained, I got the following error:

        sh: 1: tmux: not found
        Setting up a new session...
        Exception in user code:

    Could you give me some advice? The full terminal output is as follows:

    /home/yukon/anaconda3/envs/pymesh/bin/python "/media/yukon/Extreme SSD/AtlasNet-master/train.py"
    anshu: Namespace(SVR=False, activation='relu', anisotropic_scaling=False, batch_size=32, batch_size_test=32, bottleneck_size=1024, class_choice=['airplane'], data_augmentation_axis_rotation=False, data_augmentation_random_flips=False, demo=True, demo_input_path='./doc/pictures/plane_input_demo.png', dir_name='', env='Atlasnet', hidden_neurons=512, http_port=8891, id='0', loop_per_epoch=1, lr_decay_1=120, lr_decay_2=140, lr_decay_3=145, lrate=0.001, multi_gpu=[0], nb_primitives=1, nepoch=150, no_learning=False, no_metro=False, normalization='UnitBall', num_layers=2, number_points=2500, number_points_eval=2500, random_rotation=False, random_seed=False, random_translation=False, reload_decoder_path='', reload_model_path='', remove_all_batchNorms=False, run_single_eval=False, sample=True, shapenet13=False, start_epoch=0, template_type='SPHERE', train_only_encoder=False, visdom_port=8890, workers=0)
    Loaded compiled 3D CUDA chamfer distance
    Launching new visdom instance in port 8890
    TMUX=0 tmux new-session -d -s visdom_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m visdom.server -p 8890 > /dev/null 2>&1" Enter
    sh: 1: tmux: not found
    Launching new HTTP instance in port 8891
    TMUX=0 tmux new-session -d -s http_server ; send-keys "/home/yukon/anaconda3/envs/pymesh/bin/python -m http.server -p 8891 > /dev/null 2>&1" Enter
    sh: 1: tmux: not found
    Setting up a new session...
    Exception in user code:

    Traceback (most recent call last):
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
        (self._dns_host, self.port), self.timeout, **extra_kw
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
        raise err
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
        sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 710, in urlopen
        chunked=chunked,
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 398, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 239, in request
        super(HTTPConnection, self).request(method, url, body=body, headers=headers)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1291, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1337, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1286, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 1046, in _send_output
        self.send(msg)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/http/client.py", line 984, in send
        self.connect()
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
        conn = self._new_conn()
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
        self, "Failed to establish a new connection: %s" % e
    urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
        timeout=timeout
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/connectionpool.py", line 788, in urlopen
        method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
        raise MaxRetryError(_pool, url, error or ResponseError(cause))
    urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/__init__.py", line 695, in _send
        data=json.dumps(msg),
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/visdom/__init__.py", line 656, in _handle_post
        r = self.session.post(url, data=data)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
        return self.request('POST', url, data=data, json=json, **kwargs)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
        resp = self.send(prep, **send_kwargs)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
        r = adapter.send(request, **kwargs)
      File "/home/yukon/anaconda3/envs/pymesh/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8890): Max retries exceeded with url: /env/Atlasnetatlasnet_singleview_1_sphere_2atlasnet_singleview_1_sphere (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2834dcc2b0>: Failed to establish a new connection: [Errno 111] Connection refused',))
    [Errno 111] Connection refused
    on_close() takes 1 positional argument but 3 were given
    New MLP decoder : hidden size 512, num_layers 2, activation relu
    Network weights loaded from ./training/trained_models/atlasnet_singleview_1_sphere/network.pth!
    Atlasnet generated mesh at ./doc/pictures/plane_input_demoAtlasnetReconstruction.ply!

    Process finished with exit code 0

    opened by tang-y-q 2
  • Question About Visulization

    Hey! Sorry to disturb you again!

    I want to know if there are any effective Python tools to visualize an .obj file and save it to .png (other than MeshLab).

    Thanks for your reply!

    opened by yufeng9819 1
  • Compile Metro Distance (GPL3 Licence)

    Hi, I'm sorry to bother you again.
    When I used the code you gave me to try to build the Metro distance, I found that it could not be compiled successfully.
    It reports that the system path cannot be found (see the attached screenshot). Could you please give some advice on how to solve this problem? Thanks a lot!

    opened by tang-y-q 1
  • Question about train and test strategy

    Hi! Sorry to disturb you again.

    I want to ask about the train and test strategy. In your code, you set opt.shapenet13 == True. Does this mean that you first train your network on all categories and then test on each class to get the metrics for every single class?

    Looking forward to your reply!

    opened by yufeng9819 1
  • AtlasNet checkpoint not available

    Hi @ThibaultGROUEIX, thank you for sharing the code.

    When downloading the model checkpoint with trained_models/download_models.sh (https://cloud.enpc.fr/s/c27Df7fRNXW2uG3/download), related to version 2.2 of the source code, the link seems to be broken or no longer available. Could you please help me with this?

    Thanks.

    opened by apicis 4
Releases: v3.0
Owner: Thibault Groueix (also at https://bitbucket.org/ThibaultGROUEIX/)