nerf_pl - NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning

Overview

nerf_pl

Update: an improved NSFF implementation to handle dynamic scenes is open!

Update: a NeRF-W (NeRF in the Wild) implementation has been added to the nerfw branch!

Update: The latest code (using the latest libraries) is maintained on the dev branch. The master branch remains to support the colab files. If you don't use colab, it is recommended to switch to the dev branch.

Currently, only issues concerning the dev and nerfw branches will be considered.

💎 Project page (live demo!)

Unofficial implementation of NeRF (Neural Radiance Fields) using pytorch (pytorch-lightning). This repo doesn't aim at reproducibility, but at providing a simpler and faster training procedure (and simpler code with detailed comments to help understand the work). Moreover, I try to open up more possibilities by integrating this algorithm into game engines like Unity.

Official implementation: nerf. Reference pytorch implementation: nerf-pytorch.

Recommended reading: awesome-NeRF, a detailed list of NeRF extensions.

🌌 Features

You can find the Unity project including mesh, mixed reality and volume rendering here! See README_Unity for generating your own data for Unity rendering!

🔰 Tutorial

What can NeRF do?

Tutorial videos

💻 Installation

Hardware

  • OS: Ubuntu 18.04
  • NVIDIA GPU with CUDA>=10.1 (tested with 1 RTX2080Ti)

Software

  • Clone this repo by git clone --recursive https://github.com/kwea123/nerf_pl
  • Python>=3.6 (installation via anaconda is recommended, use conda create -n nerf_pl python=3.6 to create a conda environment and activate it by conda activate nerf_pl)
  • Python libraries
    • Install core requirements by pip install -r requirements.txt
    • Install torchsearchsorted by cd torchsearchsorted then pip install .
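
For context, torchsearchsorted provides a batched searchsorted over sorted bins, which the renderer uses for hierarchical (inverse-transform) sampling. The snippet below only illustrates the operation using torch.searchsorted, which ships with recent PyTorch (>= 1.6) and behaves similarly; it is purely illustrative and not an officially supported drop-in replacement for this repo.

import torch

# Assumption: torch >= 1.6, which provides torch.searchsorted
# (comparable to torchsearchsorted.searchsorted used in models/rendering.py).
bins = torch.linspace(0, 1, steps=65).repeat(4, 1)  # sorted bin edges, one row per ray
cdf_samples = torch.rand(4, 64)                     # values to locate within the bins
idx = torch.searchsorted(bins, cdf_samples)         # index of the bin containing each value
print(idx.shape)  # torch.Size([4, 64])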

🔑 Training

Please see each subsection for training on different datasets. Available training datasets:

Blender

Steps

Data download

Download nerf_synthetic.zip from here

Training model

Run (example)

python train.py \
   --dataset_name blender \
   --root_dir $BLENDER_DIR \
   --N_importance 64 --img_wh 400 400 --noise_std 0 \
   --num_epochs 16 --batch_size 1024 \
   --optimizer adam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5 \
   --exp_name exp

These parameters are chosen to best mimic the training settings in the original repo. See opt.py for all configurations.

NOTE: the above configuration doesn't work for some scenes (e.g. drums, ship). In that case, consider increasing the batch_size or changing the optimizer to radam. I managed to train on all scenes with these modifications.

You can monitor the training process by running tensorboard --logdir logs/ and going to localhost:6006 in your browser.

LLFF

Steps

Data download

Download nerf_llff_data.zip from here

Training model

Run (example)

python train.py \
   --dataset_name llff \
   --root_dir $LLFF_DIR \
   --N_importance 64 --img_wh 504 378 \
   --num_epochs 30 --batch_size 1024 \
   --optimizer adam --lr 5e-4 \
   --lr_scheduler steplr --decay_step 10 20 --decay_gamma 0.5 \
   --exp_name exp

These parameters are chosen to best mimic the training settings in the original repo. See opt.py for all configurations.

You can monitor the training process by running tensorboard --logdir logs/ and going to localhost:6006 in your browser.

Your own data

Steps
  1. Install COLMAP following installation guide
  2. Prepare your images in a folder (around 20 to 30 for forward facing, and 40 to 50 for 360 inward-facing)
  3. Clone LLFF and run python imgs2poses.py $your-images-folder
  4. Train the model using the same command as in LLFF. If the scene is captured in a 360 inward-facing manner, add the --spheric argument.

For more details of training a good model, please see the video here.

Pretrained models and logs

Download the pretrained models and training logs in release.

Comparison with other repos

                 training GPU memory (GB)    speed (1 step)
Original         8.5                         0.177s
Ref pytorch      6.0                         0.147s
This repo        3.2                         0.12s

The speed is measured on 1 RTX2080Ti. A detailed profile can be found in the release. Training memory is largely reduced because the original repo loads the whole dataset onto the GPU at the beginning, while we only move one batch to the GPU at each step.
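
For reference, the idea behind the memory saving is to keep the full set of precomputed rays on the CPU and move only one batch to the GPU per step. Below is a minimal, illustrative sketch of that pattern (the names and shapes are placeholders, not the repo's actual dataset classes).

import torch
from torch.utils.data import Dataset, DataLoader

class RayDataset(Dataset):
    # Illustrative only: rays are precomputed on the CPU and fetched one batch at a time.
    def __init__(self, rays, rgbs):
        self.rays = rays  # (N_rays, 8): origin, direction, near, far
        self.rgbs = rgbs  # (N_rays, 3): ground-truth colors

    def __len__(self):
        return self.rays.shape[0]

    def __getitem__(self, idx):
        return {"rays": self.rays[idx], "rgbs": self.rgbs[idx]}

rays = torch.randn(100_000, 8)  # fake data standing in for all training rays (kept on CPU)
rgbs = torch.rand(100_000, 3)
loader = DataLoader(RayDataset(rays, rgbs), batch_size=1024, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
for batch in loader:
    rays_gpu = batch["rays"].to(device)  # only batch_size rays live on the GPU per step
    rgbs_gpu = batch["rgbs"].to(device)
    break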

🔎 Testing

See test.ipynb for a simple view synthesis and depth prediction on 1 image.

Use eval.py to create the whole sequence of moving views. E.g.

python eval.py \
   --root_dir $BLENDER \
   --dataset_name blender --scene_name lego \
   --img_wh 400 400 --N_importance 64 --ckpt_path $CKPT_PATH

IMPORTANT: Don't forget to add --spheric_poses if the model is trained under the --spheric setting!

It will create the folder results/{dataset_name}/{scene_name}, run inference on all test data, and finally create a GIF out of the rendered frames.
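
eval.py already produces the GIF; if you want to re-assemble it yourself (e.g. after editing the frames), a minimal sketch with imageio is shown below. The results/blender/lego layout and the PNG frame names are assumptions based on the description above, not guaranteed output names.

import glob
import imageio

# Assumed layout: eval.py wrote one PNG per test pose into this folder.
frame_paths = sorted(glob.glob("results/blender/lego/*.png"))
frames = [imageio.imread(p) for p in frame_paths]
imageio.mimsave("results/blender/lego/lego.gif", frames)  # frame timing can be tuned via the writer's options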

Example of lego scene using pretrained model and the reconstructed colored mesh: (PSNR=31.39, paper=32.54)

Example of fern scene using pretrained model:


Example of my own scene (Silica GGO figure) and the reconstructed colored mesh. Click to open the YouTube video.

Portable scenes

The concept of NeRF is that the whole scene is compressed into a NeRF model, from which we can then render from any pose we want. To render from plausible poses, we can leverage the training poses; therefore, you can generate a video with only the trained model and the poses (hence the name portable scenes). I provide my silica model in the release, feel free to play around with it!

If you have trained some interesting scenes, you are also welcome to share the model (and the poses_bounds.npy) by sending me an email or posting in the issues! After all, a model is only around 5MB! Please run python utils/save_weights_only.py --ckpt_path $YOUR_MODEL_PATH to extract the final model.
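
If you prefer to strip a checkpoint by hand, the sketch below shows the general idea (an assumption about what utils/save_weights_only.py roughly does, not its actual code): load the Lightning checkpoint and save back only the network weights, dropping optimizer and scheduler states.

import torch

# The checkpoint path is a placeholder; Lightning checkpoints bundle optimizer
# and scheduler states, which are not needed to share a portable scene.
ckpt = torch.load("ckpts/exp/epoch=15.ckpt", map_location="cpu")
torch.save({"state_dict": ckpt["state_dict"]}, "exp_weights_only.ckpt")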

🎀 Mesh

See README_mesh for reconstruction of the colored mesh. Only supported for the blender dataset and 360 inward-facing data!
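
For orientation, mesh extraction generally works by querying the trained NeRF for density (sigma) on a regular 3D grid and running marching cubes at a chosen sigma_threshold. The sketch below illustrates only that last step; predict_sigma, the grid range, and the threshold are placeholders, not the repo's API (see README_mesh for the actual procedure).

import numpy as np
from skimage import measure

def predict_sigma(xyz):
    # Placeholder standing in for the trained NeRF's density prediction:
    # here a fake density field shaped like a unit sphere.
    return np.maximum(1.0 - np.linalg.norm(xyz, axis=-1), 0.0)

N = 128
t = np.linspace(-1.5, 1.5, N)
xs, ys, zs = np.meshgrid(t, t, t, indexing="ij")
xyz = np.stack([xs, ys, zs], axis=-1)              # (N, N, N, 3) query points
sigma = predict_sigma(xyz.reshape(-1, 3)).reshape(N, N, N)

# Marching cubes turns the density field into a triangle mesh at sigma_threshold.
verts, faces, normals, values = measure.marching_cubes(sigma, level=0.5)
print(verts.shape, faces.shape)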

⚠️ Notes on differences with the original repo

  • The learning rate decay in the original repo is by step (it decreases at every step); here I use learning rate decay by epoch, so it changes only at the end of each epoch (see the sketch after this list).
  • The validation image for the LLFF dataset is chosen as the most centered image here, whereas the original repo chooses every 8th image.
  • The rendering spiral path is slightly different from the original repo (I use approximate values to simplify the code).
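
To make the first point concrete, here is a minimal sketch of per-epoch decay using the settings from the Blender example above (--decay_step 2 4 8, --decay_gamma 0.5). The repo's --lr_scheduler steplr with multiple decay steps presumably maps onto something like MultiStepLR; treat this as an illustration of the per-epoch timing, not the repo's exact scheduler code.

import torch

model = torch.nn.Linear(63, 4)  # stand-in for the NeRF MLP
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[2, 4, 8], gamma=0.5)

for epoch in range(16):
    # ... run every training step of this epoch, calling optimizer.step() per batch ...
    scheduler.step()  # the learning rate halves only at the end of epochs 2, 4 and 8
    print(epoch, scheduler.get_last_lr())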

🎓 COLAB

I also prepared colab notebooks that allow you to run the algorithm on any machine without a local GPU.

  • colmap to prepare camera poses for your own training data
  • nerf to train on your data
  • extract_mesh to extract colored mesh

Please see this playlist for the detailed tutorials.

🎃 SHOWOFF

We can incorporate ray-tracing techniques into the volume rendering pipeline and achieve realistic scene editing (below is the materials scene with an object removed, and a mesh inserted and rendered with ray tracing). The code will not be released.


With my integration in Unity, I can create realistic mixed-reality photos (note that my character casts a shadow on the scene, with zero post-shot image editing required). BTW, I would like to visit the museum one day...

📖 Citation

If you use (part of) my code or find my work helpful, please consider citing

@misc{queianchen_nerf,
  author={Quei-An, Chen},
  title={Nerf_pl: a pytorch-lightning implementation of NeRF},
  url={https://github.com/kwea123/nerf_pl/},
  year={2020},
}
Comments
  • Training NERF using real-captured data

     Hello, I have followed your example to train NeRF on my own data. I have seen that you and others have had some success with single-object scenes (the silica model). How about real scenes (the fern or orchids dataset)?

     I have captured a video of my office (link). However, I can't use colmap to estimate poses to train the NeRF model. Since you are more experienced than me on this project, can you give me some suggestions? It's interesting to see whether this method works on real data like this.

    This is the error from the colmap:

    python imgs2poses.py ./cmvs/
    Need to run COLMAP
    Features extracted
    Features matched
    Sparse map created
    Finished running COLMAP, see ./cmvs/colmap_output.txt for logs
    Post-colmap
    Cameras 5
    Images # 6
    Traceback (most recent call last):
      File "imgs2poses.py", line 18, in <module>
        gen_poses(args.scenedir, args.match_type)
      File "/home/phong/data/Work/Paper3/Code/LLFF/llff/poses/pose_utils.py", line 276, in gen_poses
        save_poses(basedir, poses, pts3d, perm)
      File "/home/phong/data/Work/Paper3/Code/LLFF/llff/poses/pose_utils.py", line 66, in save_poses
        cams[ind-1] = 1
    IndexError: list assignment index out of range
    
    opened by phongnhhn92 34
  • ShapeNet dataset configuration

     Hi, did you test the code with the ShapeNet dataset? If so, what pre-processing steps are needed to get good results?

    Thanking you.

    Regards, K. J. Nitthilan

    opened by nitthilan 17
  • Colmap gui + imgs2poses.py still gets error

     I am using imgs2poses.py to estimate the camera poses for my own dataset. However, it always returns with ERROR: the correct camera poses for current points cannot be accessed. On the other hand, I can use the colmap gui to reconstruct a part of the camera poses (i.e. not every image's pose is estimated), and I execute imgs2poses.py with the same arguments on the sparse folder and database.db generated by the colmap gui. However, it still returns with ERROR: the correct camera poses for current points cannot be accessed. Can you give me instructions on how to use the colmap gui + imgs2poses.py to make the pose estimation work? Thank you!

    enhancement 
    opened by alex04072000 16
  • tips for getting a better colored mesh model

     I've tested this repo with my own 360-degree images successfully. To get a better colored mesh model, I suggest:

     1. use FULL-resolution photos to run COLMAP and the imgs2poses.py file, like 3968*2976, and make sure you take the photos horizontally;

     2. if you can't see your center object clearly when you run the extract_mesh.py file, your poses_bounds.npy file is probably not right; check this file with np.load('poses_bounds.npy')[:, -2:] to see whether there are many small values, and make sure these array values are at a normal level (by the way, if you've trained a good model, you will not have this problem);

     3. a good model should converge to PSNR 25 in the first 10k steps; if not, then something is problematic;

     4. when you tune the xyz range and sigma_threshold parameters to get a better volume box, start with (x_range, y_range = -1.5, 1.5, z_range = -4, -1, sigma_threshold = 5), because I found the object is always at a lower place; make sure you can see your object completely and clearly, then it will be easy to tune these parameters slightly to get a better result.

     Thanks to the author's help I've solved many problems, so I want to post my advice here for those who have the same problems.

     Finally, I can show you my colored mesh model; it turns out this repo is good for 3D reconstruction:


    good first issue 
    opened by SpongeGirl 16
  • apply nerf on interiornet

    Hi @kwea123,

     Thank you for this great work. I wanted to ask whether you think I could use nerf to create the 3D mesh of the scenes from the Interiornet dataset (https://interiornet.org/). The Visim application that they show in their video is not available so far, and moreover the provided ground truth doesn't seem to fit the renderings. However, they provide some renderings, including depth and albedo maps, and thus I was wondering whether nerf could be useful.

    opened by ttsesm 9
  • Colmap fails on forward driving scene

     Hi, I am trying to train NeRF on a sequence of a forward-driving scene like this. I have used the colmap script from LLFF to estimate the poses but it doesn't work for me:

     Need to run COLMAP
     Features extracted
     Features matched
     Sparse map created
     Finished running COLMAP, see /home/phong/VKITTI2/colmap_output.txt for logs
     Post-colmap
     Cameras 5
     Images # 2
     ERROR: the correct camera poses for current points cannot be accessed
     Done with imgs2poses

     Here is the link to download the images. I wonder whether NeRF cannot handle these sequences, or is this from COLMAP?

    opened by phongnhhn92 8
  • Poor results for 360 captured scene

    First of all, thanks for the great implementation!

    I've managed to get good results with the code in this repository for frontal scenes but am struggling to have it properly work with 360 captures. Attached is an example capture of a fountain taken from a variety of different angles (and where the camera poses are "ground truth" poses gathered from the simulation): https://drive.google.com/file/d/1FbtrupOXURc0eTDtDOmD1oKZz5e2MIAE/view?usp=sharing

    Below are the results after 6 epochs of training (I've trained for longer but it never converges to anything useful).

    image

    In contrast, other nerf implementations such as https://github.com/google-research/google-research/tree/master/jaxnerf seem to provide more sensible results even after a few thousand iterations:

    image

    I'm using the spherify flag and have tried both with and without the use_disp option. I've also tried setting N_importance and N_samples to match the config of the other NeRF implementation that I tried (https://github.com/google-research/google-research/blob/master/jaxnerf/configs/llff_360.yaml). Would you have any pointers as to where the difference could be coming from?

    enhancement 
    opened by hturki 8
  • Initialization issue of training both coarse and fine together

    Hi, my name is Wonbong Jang, I am working on NeRF based on your implementation. Thank you for sharing the great work!

     I've tried it with the tiny digger (from the tiny_nerf implementation of the original repo - https://github.com/bmild/nerf) - it has a black background and is 100 x 100.

     When I trained both coarse and fine together (N_importance = 128), only 20% of the time were both networks optimized. In another 40% of the runs, only one of them was trained (coarse or fine), and in the other 40%, neither was optimized. The learning rate is 5e-4 with the Adam optimizer, and it usually works well when I train the coarse model only.

     I think this is probably due to an initialization issue. I am wondering whether you have had any issue like the above before; it would be appreciated if you could provide any insights on this.

    Kind regards,

    Wonbong

    opened by wbjang 7
  • Transparent inward facing images

    Has anyone tried creating a mesh from inward facing photos of an object with the background removed? If yes, how did it turn out? If not, is it worthwhile to do so? Since nerf_pl keeps the largest cluster while removing noise, a transparent image with just the foreground should make the task easy right?

    opened by sixftninja 7
  • torchsearchsorted missing

     Describe the bug: in rendering.py there is import torchsearchsorted, but it is missing and I cannot figure out where this file or library is.

    To Reproduce I run the train.py with commands in the readme and I get File "/media/TBData/Rita/PyCharm_NERF_kwea/models/rendering.py", line 2, in from torchsearchsorted import searchsorted ImportError: cannot import name 'searchsorted' from 'torchsearchsorted' (unknown location)

    opened by Riretta 6
  • After getting the model, how can I generate a certain view image with the model?

     Hi kwea123, I just wonder whether it is possible to define a certain view (let's say spatial location (x,y,z) and viewing direction (θ,φ)) and generate an image for this view with the nerf_pl model? Could you please help me with this? Thanks!

    opened by jerrysheen 6
  • Why are the coarse-stage samples sent to the network for inference twice?

    Hello kwea, sorry for disturbing you! I would like to ask a question.

     I noticed that the fine stage merges the sample points from the coarse stage and sends them to the network for inference. Here I have a doubt: since the sample points from the coarse stage have already been inferred before, why do they have to be sent to the network for inference again here? Is it to facilitate the final volume rendering? It feels like this adds a lot of computation. https://github.com/kwea123/nerf_pl/blob/52aeb387da64a9ad9a0f914ea9b049ffc598b20c/models/rendering.py#L229

    opened by YZsZY 0
  • Erroneous synthetic images

    Hello everyone.

     When I generate my synthetic images, they are generated incorrectly, so the result is not the desired one. Does anyone know how this can be fixed? More epochs? Better-quality input images? Any recommendation and/or solution?


    opened by LuisRosas29 0
  • When train.py has run half of the epochs, the program becomes pending

     Nice work, I am really interested in this, but when train.py has run half of the epochs, the program becomes pending. I do not know why, and it is very confusing. Could you help me deal with this issue?

    opened by YLongJin 0
  • Colab run error

    Hi, I really appreciate your work, and your tutorial video is clear and helpful! However, I had an issue running colab, the error message is as follows. Could anyone help me with it? Thanks a lot!

     /content/LLFF
     Need to run COLMAP
     QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
     qt.qpa.screen: QXcbConnection: Could not connect to display
     Could not connect to any X display.
     Traceback (most recent call last):
       File "imgs2poses.py", line 18, in <module>
         gen_poses(args.scenedir, args.match_type)
       File "/content/LLFF/llff/poses/pose_utils.py", line 268, in gen_poses
         run_colmap(basedir, match_type)
       File "/content/LLFF/llff/poses/colmap_wrapper.py", line 35, in run_colmap
         feat_output = ( subprocess.check_output(feature_extractor_args, universal_newlines=True) )
       File "/usr/lib/python3.7/subprocess.py", line 411, in check_output
         **kwargs).stdout
       File "/usr/lib/python3.7/subprocess.py", line 512, in run
         output=stdout, stderr=stderr)
     subprocess.CalledProcessError: Command '['colmap', 'feature_extractor', '--database_path', '/content/drive/MyDrive/nerf/c313/database.db', '--image_path', '/content/drive/MyDrive/nerf/c313/images', '--ImageReader.single_camera', '1']' returned non-zero exit status 1.

    opened by Aoiryo 2
  • The variance of PSNR results among different runs is large

     Hi, I follow the README.md and run experiments on the Realistic Synthetic 360 dataset as python train.py --dataset_name blender --root_dir ./data/nerf_synthetic/hotdog --N_importance 64 --img_wh 400 400 --noise_std 0 --num_epochs 16 --batch_size 1024 --optimizer adam --lr 5e-4 --lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5 --exp_name nerf_synthetic/hotdog --num_gpus 8. I run on eight Tesla V100 GPUs and install the dependencies by pip install -r requirements. However, after running three times, the PSNR results are 31.00, 30.63, and 34.14, respectively. Why does that happen?

    opened by machengcheng2016 0
  • 360 rendering result is bad

     My training tensorboard result looks nice, but when I eval, the result is not so good. I don't know what's going wrong. I want to know: is my dataset reasonable? Is NeRF suitable for my dataset? The colmap gui result is as follows:

    opened by hplegend 0
Releases
  • nerfw_branden(Jan 27, 2021)

    Used for nerfw branch.

    Train command (trained on 8-times-downscaled images, just as a proof of implementation):

    python prepare_phototourism.py --root_dir /home/ubuntu/data/IMC-PT/brandenburg_gate/ --img_downscale 8
    
    python train.py \
      --root_dir /home/ubuntu/data/IMC-PT/brandenburg_gate/ --dataset_name phototourism \
      --img_downscale 8 --use_cache \
      --N_importance 64 --N_samples 64 --encode_a --encode_t --beta_min 0.03 --N_vocab 1500 --N_emb_xyz 15 \
      --num_epochs 20 --batch_size 1024 \
      --optimizer adam --lr 5e-4 --lr_scheduler cosine \
      --exp_name brandenburg_scale8_nerfw
    

    Profiler Report

    Action                      	|  Mean duration (s)	|Num calls      	|  Total time (s) 	|  Percentage %   	|
    -----------------------------------------------------------------------------------------------------------------------------
    Total                       	|  -              	|_              	|  2.5398e+04     	|  100 %          	|
    -----------------------------------------------------------------------------------------------------------------------------
    run_training_epoch          	|  1269.8         	|20             	|  2.5396e+04     	|  99.991         	|
    run_training_batch          	|  0.14633        	|170760         	|  2.4988e+04     	|  98.384         	|
    optimizer_step_and_closure_0	|  0.12823        	|170760         	|  2.1896e+04     	|  86.212         	|
    training_step_and_backward  	|  0.1241         	|170760         	|  2.1192e+04     	|  83.438         	|
    model_backward              	|  0.099837       	|170760         	|  1.7048e+04     	|  67.124         	|
    model_forward               	|  0.024055       	|170760         	|  4107.6         	|  16.173         	|
    on_train_batch_end          	|  0.00052083     	|170760         	|  88.938         	|  0.35018        	|
    get_train_batch             	|  0.00023393     	|170760         	|  39.946         	|  0.15728        	|
    evaluation_step_and_end     	|  0.52576        	|21             	|  11.041         	|  0.043472       	|
    cache_result                	|  1.2894e-05     	|854050         	|  11.012         	|  0.043357       	|
    on_after_backward           	|  1.0743e-05     	|170760         	|  1.8345         	|  0.007223       	|
    on_batch_start              	|  1.0535e-05     	|170760         	|  1.799          	|  0.0070832      	|
    on_batch_end                	|  9.6894e-06     	|170760         	|  1.6546         	|  0.0065145      	|
    on_before_zero_grad         	|  8.5198e-06     	|170760         	|  1.4548         	|  0.0057282      	|
    training_step_end           	|  6.6891e-06     	|170760         	|  1.1422         	|  0.0044974      	|
    on_train_batch_start        	|  5.9285e-06     	|170760         	|  1.0124         	|  0.003986       	|
    on_validation_end           	|  0.027978       	|21             	|  0.58754        	|  0.0023133      	|
    on_validation_batch_end     	|  0.00055518     	|21             	|  0.011659       	|  4.5904e-05     	|
    on_epoch_start              	|  0.00054319     	|20             	|  0.010864       	|  4.2774e-05     	|
    on_validation_start         	|  0.00024484     	|21             	|  0.0051417      	|  2.0244e-05     	|
    on_validation_batch_start   	|  5.3095e-05     	|21             	|  0.001115       	|  4.3901e-06     	|
    validation_step_end         	|  2.1799e-05     	|21             	|  0.00045779     	|  1.8024e-06     	|
    on_train_epoch_start        	|  1.7319e-05     	|20             	|  0.00034637     	|  1.3638e-06     	|
    on_epoch_end                	|  1.5776e-05     	|20             	|  0.00031551     	|  1.2423e-06     	|
    on_train_end                	|  0.0002874      	|1              	|  0.0002874      	|  1.1316e-06     	|
    on_validation_epoch_end     	|  1.1708e-05     	|21             	|  0.00024586     	|  9.6803e-07     	|
    on_validation_epoch_start   	|  8.0324e-06     	|21             	|  0.00016868     	|  6.6415e-07     	|
    on_train_start              	|  0.00015864     	|1              	|  0.00015864     	|  6.2463e-07     	|
    on_train_epoch_end          	|  7.2367e-06     	|20             	|  0.00014473     	|  5.6986e-07     	|
    on_fit_start                	|  1.4059e-05     	|1              	|  1.4059e-05     	|  5.5355e-08     	|
    

    Eval command (used for scale2_epoch29 model):

    python eval.py \
      --root_dir /home/ubuntu/data/IMC-PT/brandenburg_gate/ \
      --dataset_name phototourism --scene_name brandenburg_test \
      --split test --N_samples 256 --N_importance 256 \
      --N_vocab 1500 --encode_a --encode_t \
      --ckpt_path ckpts/brandenburg/scale2/epoch\=29.ckpt \
      --chunk 16384 --img_wh 320 240
    

    You can change the test camera path in eval.py.

    Source code(tar.gz)
    Source code(zip)
    brandenburg_test.gif(7.33 MB)
    log.zip(1.54 MB)
    scale2_epoch.29.ckpt(15.11 MB)
    scale8_epoch.19.ckpt(15.46 MB)
  • nerfw_all(Jan 24, 2021)

    Used for nerfw branch.

    Train command:

    python train.py \
      --root_dir /home/ubuntu/data/nerf_example_data/nerf_synthetic/lego \
      --dataset_name blender --img_wh 200 200 --data_perturb color occ \
      --N_importance 64 --N_samples 64 --noise_std 0 --encode_a --encode_t --beta_min 0.1 \
      --num_epochs 20 --batch_size 1024 \
      --optimizer adam --lr 5e-4 --lr_scheduler cosine \
      --exp_name lego_nerfw_all
    

    Eval command:

    python eval.py \
      --root_dir /home/ubuntu/data/nerf_example_data/nerf_synthetic/lego \
      --dataset_name blender --split test --img_wh 200 200 \
      --N_importance 64 --encode_a --encode_t --beta_min 0.1 \
      --ckpt_path ckpts/lego_nerfw_all/epoch\=19.ckpt \
      --scene_name nerfw_all
    

    Eval output: Mean PSNR : 24.86

    Profiler Report

    Action                      	|  Mean duration (s)	|Num calls      	|  Total time (s) 	|  Percentage %   	|
    -----------------------------------------------------------------------------------------------------------------------------
    Total                       	|  -              	|_              	|  1.1659e+04     	|  100 %          	|
    -----------------------------------------------------------------------------------------------------------------------------
    run_training_epoch          	|  582.57         	|20             	|  1.1651e+04     	|  99.931         	|
    run_training_batch          	|  0.14307        	|78140          	|  1.1179e+04     	|  95.882         	|
    optimizer_step_and_closure_0	|  0.12437        	|78140          	|  9718.4         	|  83.352         	|
    training_step_and_backward  	|  0.12006        	|78140          	|  9381.8         	|  80.465         	|
    model_backward              	|  0.095661       	|78140          	|  7475.0         	|  64.111         	|
    model_forward               	|  0.024116       	|78140          	|  1884.5         	|  16.162         	|
    evaluation_step_and_end     	|  1.8998         	|161            	|  305.86         	|  2.6233         	|
    on_train_batch_end          	|  0.00053565     	|78140          	|  41.856         	|  0.35898        	|
    get_train_batch             	|  0.00026832     	|78140          	|  20.966         	|  0.17982        	|
    cache_result                	|  1.6708e-05     	|391370         	|  6.5391         	|  0.056084       	|
    on_after_backward           	|  1.3945e-05     	|78140          	|  1.0897         	|  0.0093458      	|
    on_batch_start              	|  1.1257e-05     	|78140          	|  0.87959        	|  0.007544       	|
    on_batch_end                	|  1.0574e-05     	|78140          	|  0.82626        	|  0.0070866      	|
    on_before_zero_grad         	|  9.9755e-06     	|78140          	|  0.77948        	|  0.0066854      	|
    training_step_end           	|  7.3524e-06     	|78140          	|  0.57452        	|  0.0049275      	|
    on_train_batch_start        	|  7.0481e-06     	|78140          	|  0.55074        	|  0.0047235      	|
    on_validation_end           	|  0.025579       	|21             	|  0.53715        	|  0.004607       	|
    on_validation_batch_end     	|  0.00039767     	|161            	|  0.064025       	|  0.00054912     	|
    on_epoch_start              	|  0.00074399     	|20             	|  0.01488        	|  0.00012762     	|
    on_validation_start         	|  0.00024646     	|21             	|  0.0051757      	|  4.439e-05      	|
    on_train_end                	|  0.0033677      	|1              	|  0.0033677      	|  2.8884e-05     	|
    on_validation_batch_start   	|  1.301e-05      	|161            	|  0.0020947      	|  1.7965e-05     	|
    validation_step_end         	|  9.2702e-06     	|161            	|  0.0014925      	|  1.2801e-05     	|
    on_epoch_end                	|  1.6658e-05     	|20             	|  0.00033316     	|  2.8575e-06     	|
    on_validation_epoch_end     	|  1.4696e-05     	|21             	|  0.00030862     	|  2.6469e-06     	|
    on_train_start              	|  0.00020975     	|1              	|  0.00020975     	|  1.799e-06      	|
    on_validation_epoch_start   	|  9.7831e-06     	|21             	|  0.00020545     	|  1.7621e-06     	|
    on_train_epoch_start        	|  9.096e-06      	|20             	|  0.00018192     	|  1.5603e-06     	|
    on_train_epoch_end          	|  8.8208e-06     	|20             	|  0.00017642     	|  1.5131e-06     	|
    on_fit_start                	|  1.3749e-05     	|1              	|  1.3749e-05     	|  1.1792e-07     	|
    
    Source code(tar.gz)
    Source code(zip)
    epoch.19.ckpt(14.77 MB)
    log.zip(2.23 MB)
    nerfw_all.gif(4.43 MB)
  • nerfa_color(Jan 24, 2021)

    Used for nerfw branch.

    Train command:

    python train.py \
      --root_dir /home/ubuntu/data/nerf_example_data/nerf_synthetic/lego \
      --dataset_name blender --img_wh 200 200 --data_perturb color \
      --N_importance 64 --N_samples 64 --noise_std 0 --encode_a \
      --num_epochs 20 --batch_size 1024 \
      --optimizer adam --lr 5e-4 --lr_scheduler cosine \
      --exp_name lego_nerfa_color
    

    Eval command:

    python eval.py \
      --root_dir /home/ubuntu/data/nerf_example_data/nerf_synthetic/lego \
      --dataset_name blender --split test --img_wh 200 200 \
      --N_importance 64 --encode_a \
      --ckpt_path ckpts/lego_nerfa_color/epoch\=19.ckpt \
      --scene_name nerfa_color
    

    Eval output: Mean PSNR : 28.20

    Profiler Report

    Action                      	|  Mean duration (s)	|Num calls      	|  Total time (s) 	|  Percentage %   	|
    -----------------------------------------------------------------------------------------------------------------------------
    Total                       	|  -              	|_              	|  1.0174e+04     	|  100 %          	|
    -----------------------------------------------------------------------------------------------------------------------------
    run_training_epoch          	|  508.31         	|20             	|  1.0166e+04     	|  99.922         	|
    run_training_batch          	|  0.12504        	|78140          	|  9770.7         	|  96.036         	|
    optimizer_step_and_closure_0	|  0.10593        	|78140          	|  8277.7         	|  81.362         	|
    training_step_and_backward  	|  0.10272        	|78140          	|  8026.6         	|  78.893         	|
    model_backward              	|  0.081418       	|78140          	|  6362.0         	|  62.532         	|
    model_forward               	|  0.021105       	|78140          	|  1649.1         	|  16.209         	|
    evaluation_step_and_end     	|  1.6237         	|161            	|  261.41         	|  2.5694         	|
    on_train_batch_end          	|  0.00040171     	|78140          	|  31.39          	|  0.30853        	|
    get_train_batch             	|  0.0002557      	|78140          	|  19.981         	|  0.19639        	|
    cache_result                	|  1.4961e-05     	|391370         	|  5.8553         	|  0.057551       	|
    on_after_backward           	|  1.136e-05      	|78140          	|  0.88768        	|  0.008725       	|
    on_batch_start              	|  1.0067e-05     	|78140          	|  0.78663        	|  0.0077318      	|
    on_batch_end                	|  9.6172e-06     	|78140          	|  0.75149        	|  0.0073863      	|
    on_before_zero_grad         	|  9.0155e-06     	|78140          	|  0.70447        	|  0.0069242      	|
    on_validation_end           	|  0.026961       	|21             	|  0.56618        	|  0.005565       	|
    training_step_end           	|  6.6222e-06     	|78140          	|  0.51746        	|  0.0050861      	|
    on_train_batch_start        	|  6.3198e-06     	|78140          	|  0.49383        	|  0.0048539      	|
    on_validation_batch_end     	|  0.00036434     	|161            	|  0.058659       	|  0.00057656     	|
    on_epoch_start              	|  0.00047801     	|20             	|  0.0095601      	|  9.3966e-05     	|
    on_validation_start         	|  0.00024532     	|21             	|  0.0051518      	|  5.0637e-05     	|
    on_validation_batch_start   	|  1.2674e-05     	|161            	|  0.0020406      	|  2.0057e-05     	|
    validation_step_end         	|  8.6672e-06     	|161            	|  0.0013954      	|  1.3716e-05     	|
    on_epoch_end                	|  1.7733e-05     	|20             	|  0.00035466     	|  3.4859e-06     	|
    on_train_end                	|  0.00025723     	|1              	|  0.00025723     	|  2.5283e-06     	|
    on_validation_epoch_end     	|  1.1715e-05     	|21             	|  0.00024602     	|  2.4181e-06     	|
    on_train_epoch_start        	|  1.1723e-05     	|20             	|  0.00023446     	|  2.3045e-06     	|
    on_train_start              	|  0.00021311     	|1              	|  0.00021311     	|  2.0946e-06     	|
    on_validation_epoch_start   	|  8.2239e-06     	|21             	|  0.0001727      	|  1.6975e-06     	|
    on_train_epoch_end          	|  8.1054e-06     	|20             	|  0.00016211     	|  1.5934e-06     	|
    on_fit_start                	|  1.3379e-05     	|1              	|  1.3379e-05     	|  1.315e-07      	|
    
    Source code(tar.gz)
    Source code(zip)
    epoch.19.ckpt(13.77 MB)
    log.zip(2.06 MB)
    nerfa_color.gif(5.05 MB)
  • nerfu_occ(Jan 23, 2021)

    Used for nerfw branch.

    Train command:

    python train.py \
      --dataset_name blender --img_wh 200 200 \
      --root_dir /home/ubuntu/data/nerf_example_data/nerf_synthetic/lego \
      --N_importance 64 --N_samples 64 --noise_std 0 \
      --num_epochs 20 --batch_size 1024 \
      --optimizer adam --lr 5e-4 --lr_scheduler cosine \
      --exp_name lego_nerfu_occ --beta_min 0.1 --data_perturb occ --encode_t
    

    Eval command:

    python eval.py \
      --root_dir /home/ubuntu/data/nerf_example_data/nerf_synthetic/lego \
      --dataset_name blender --img_wh 200 200 --split test \
      --N_importance 64 \
      --ckpt_path ckpts/lego_nerfw_occ/epoch\=19.ckpt \
      --encode_t --beta_min 0.1 \
      --scene_name nerfu_occ
    

    Eval output: Mean PSNR : 28.60

    Note I use a very small image size (200x200) to speed up my experiments.

    Profiler Report

    Action                      	|  Mean duration (s)	|Num calls      	|  Total time (s) 	|  Percentage %   	|
    -----------------------------------------------------------------------------------------------------------------------------
    Total                       	|  -              	|_              	|  1.0901e+04     	|  100 %          	|
    -----------------------------------------------------------------------------------------------------------------------------
    run_training_epoch          	|  544.74         	|20             	|  1.0895e+04     	|  99.947         	|
    run_training_batch          	|  0.13381        	|78140          	|  1.0456e+04     	|  95.921         	|
    optimizer_step_and_closure_0	|  0.11587        	|78140          	|  9053.8         	|  83.057         	|
    training_step_and_backward  	|  0.11173        	|78140          	|  8730.4         	|  80.09          	|
    model_backward              	|  0.088715       	|78140          	|  6932.2         	|  63.595         	|
    model_forward               	|  0.02281        	|78140          	|  1782.4         	|  16.351         	|
    evaluation_step_and_end     	|  1.7842         	|161            	|  287.25         	|  2.6352         	|
    on_train_batch_end          	|  0.00042159     	|78140          	|  32.943         	|  0.30221        	|
    get_train_batch             	|  0.00025085     	|78140          	|  19.602         	|  0.17982        	|
    cache_result                	|  1.4803e-05     	|391370         	|  5.7934         	|  0.053147       	|
    on_batch_start              	|  1.0333e-05     	|78140          	|  0.80743        	|  0.0074072      	|
    on_after_backward           	|  9.5508e-06     	|78140          	|  0.7463         	|  0.0068464      	|
    on_batch_end                	|  9.4638e-06     	|78140          	|  0.7395         	|  0.006784       	|
    on_before_zero_grad         	|  8.3572e-06     	|78140          	|  0.65303        	|  0.0059908      	|
    on_validation_end           	|  0.025442       	|21             	|  0.53429        	|  0.0049014      	|
    training_step_end           	|  6.2163e-06     	|78140          	|  0.48574        	|  0.0044561      	|
    on_train_batch_start        	|  5.99e-06       	|78140          	|  0.46806        	|  0.0042939      	|
    on_validation_batch_end     	|  0.00042104     	|161            	|  0.067788       	|  0.00062187     	|
    on_epoch_start              	|  0.00079988     	|20             	|  0.015998       	|  0.00014676     	|
    on_validation_start         	|  0.00024023     	|21             	|  0.0050449      	|  4.6281e-05     	|
    on_validation_batch_start   	|  1.391e-05      	|161            	|  0.0022395      	|  2.0544e-05     	|
    validation_step_end         	|  8.4167e-06     	|161            	|  0.0013551      	|  1.2431e-05     	|
    on_train_end                	|  0.0003507      	|1              	|  0.0003507      	|  3.2172e-06     	|
    on_epoch_end                	|  1.611e-05      	|20             	|  0.0003222      	|  2.9558e-06     	|
    on_train_epoch_start        	|  1.5704e-05     	|20             	|  0.00031408     	|  2.8813e-06     	|
    on_validation_epoch_end     	|  1.4037e-05     	|21             	|  0.00029477     	|  2.7041e-06     	|
    on_validation_epoch_start   	|  8.7303e-06     	|21             	|  0.00018334     	|  1.6819e-06     	|
    on_train_start              	|  0.00016846     	|1              	|  0.00016846     	|  1.5454e-06     	|
    on_train_epoch_end          	|  7.923e-06      	|20             	|  0.00015846     	|  1.4537e-06     	|
    on_fit_start                	|  1.3867e-05     	|1              	|  1.3867e-05     	|  1.2721e-07     	|
    
    Source code(tar.gz)
    Source code(zip)
    epoch.19.ckpt(14.68 MB)
    log.zip(2.31 MB)
    nerfu_occ.gif(4.54 MB)
  • v2.0.2(May 8, 2020)

  • v2.0.1(May 7, 2020)

     Release of the silica NeRF model and reconstructed mesh (spheric poses). Link to the data. The image size is 504x378 (original size 4032x3024).

    Usage: place the poses_bounds.npy under a folder $DIR (anywhere you want), then you can run

    python eval.py \
       --root_dir $DIR \
       --dataset_name llff --scene_name silica \
       --img_wh 504 378 --N_importance 64 --spheric_poses --ckpt_path $CKPT_PATH
    

    as usual. To extract the mesh, follow README_mesh.

    Source code(tar.gz)
    Source code(zip)
    poses_bounds.npy(6.50 KB)
    silica.ckpt(4.55 MB)
    silica.ply(22.62 MB)
  • v2.0(May 2, 2020)

    Command:

    python train.py \
      --dataset_name llff \
      --root_dir /home/ubuntu/data/nerf_example_data/nerf_llff_data/fern/ \
      --N_importance 64 --img_wh 504 378 \
      --batch_size 1024 --num_epochs 30 \
      --optimizer adam --lr 5e-4 \
      --lr_scheduler steplr --decay_step 10 20 --decay_gamma 0.5 \
      --exp_name fern
    

    Profile

    Profiler Report
    
    Action              	|  Mean duration (s)	|  Total time (s) 
    -----------------------------------------------------------------
    on_train_start      	|  0.00023312     	|  0.00023312     
    on_epoch_start      	|  0.00029521     	|  0.0088563      
    get_train_batch     	|  0.00023997     	|  25.456         
    on_batch_start      	|  6.1591e-06     	|  0.65317        
    model_forward       	|  0.019652       	|  2084.1         
    model_backward      	|  0.069537       	|  7374.4         
    on_after_backward   	|  1.5543e-06     	|  0.16483        
    optimizer_step      	|  0.0037302      	|  395.59         
    on_batch_end        	|  0.00030407     	|  32.247         
    on_epoch_end        	|  9.9102e-06     	|  0.00029731     
    on_train_end        	|  0.00036468     	|  0.00036468     
    
    Source code(tar.gz)
    Source code(zip)
    fern.ckpt(4.55 MB)
    log.zip(27.10 MB)
  • v1.0(Apr 20, 2020)

    lego model and reconstructed mesh. Command:

    python train.py \
       --dataset_name blender \
       --root_dir /home/ubuntu/data/nerf_example_data/nerf_synthetic/lego/ \
       --N_importance 64 --img_wh 400 400 --noise_std 0 \
       --batch_size 1024 --num_epochs 16 \
       --optimizer adam --lr 5e-4 \
       --lr_scheduler steplr --decay_step 2 4 8 --decay_gamma 0.5 \
       --exp_name exp3
    

    Detailed profile:

    Action              	|  Mean duration (s)	|  Total time (s) 
    -----------------------------------------------------------------
    on_train_start      	|  1.2281e-05     	|  1.2281e-05     
    on_epoch_start      	|  6.1691e-06     	|  9.8706e-05     
    get_train_batch     	|  0.00023678     	|  59.198         
    on_batch_start      	|  4.4245e-06     	|  1.1061         
    model_forward       	|  0.041729       	|  1.0432e+04     
    model_backward      	|  0.046964       	|  1.1741e+04     
    on_after_backward   	|  1.5339e-06     	|  0.38347        
    optimizer_step      	|  0.0035952      	|  898.81         
    on_batch_end        	|  4.1799e-06     	|  1.045          
    on_epoch_end        	|  5.4906e-06     	|  8.785e-05      
    on_train_end        	|  9.583e-06      	|  9.583e-06 
    
    Source code(tar.gz)
    Source code(zip)
    lego.ckpt(4.55 MB)
    lego.ply(55.81 MB)
    lego_lowres.ply(12.78 MB)
    log.zip(7.60 MB)
Owner
AI葵
AI R&D in computer vision. I make VTuber content about DL algorithms - check out and subscribe to my channel! If you find my work helpful, please consider sponsoring!