PyTorch implementation of DFT-D2 & DFT-D3

Overview

torch-dftd

PyTorch implementation of DFT-D2 [1] & DFT-D3 [2, 3]

Install

# Install from pypi
pip install torch-dftd

# Install from source (for developers)
git clone https://github.com/pfnet-research/torch-dftd
pip install -e .

Quick start

from ase.build import molecule
from torch_dftd.torch_dftd3_calculator import TorchDFTD3Calculator

atoms = molecule("CH3CH2OCH3")
# device="cuda:0" for fast GPU computation.
calc = TorchDFTD3Calculator(atoms=atoms, device="cpu", damping="bj")

energy = atoms.get_potential_energy()
forces = atoms.get_forces()

print(f"energy {energy} eV")
print(f"forces {forces}")

Dependency

The library is tested under the following environment.

  • python: 3.6
  • CUDA: 10.2
torch==1.5.1
ase==3.21.1
# Below is only needed for the 3-body term
cupy-cuda102==8.6.0
pytorch-pfn-extras==0.3.2

Development tips

Formatting & Linting

pysen is used to format the Python code of this repository.
You can simply run the commands below to get your code formatted :)

# Format the code
$ pysen run format
# Check the code format
$ pysen run lint

CUDA Kernel function implementation with cupy

cupy lets users implement CUDA kernels within Python code, and these kernels can easily be linked with pytorch tensor calculations.
An element-wise kernel is implemented and used in some pytorch functions to accelerate computation on the GPU.

See the cupy documentation for details about user-defined kernels.
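As an illustration of this pattern (not one of the library's actual kernels), the sketch below defines a hypothetical cupy ElementwiseKernel and exchanges GPU buffers with pytorch through DLPack without copying; the kernel name and operation are made up for this example.

import cupy as cp
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

# Hypothetical element-wise kernel: squared difference of two float32 arrays.
squared_diff = cp.ElementwiseKernel(
    "float32 x, float32 y",
    "float32 z",
    "z = (x - y) * (x - y)",
    "squared_diff",
)

x = torch.rand(1024, device="cuda:0")
y = torch.rand(1024, device="cuda:0")

# Zero-copy handoff from pytorch to cupy via DLPack.
x_cp = cp.fromDlpack(to_dlpack(x))
y_cp = cp.fromDlpack(to_dlpack(y))
z_cp = squared_diff(x_cp, y_cp)

# And back to a pytorch tensor on the same device.
z = from_dlpack(z_cp.toDlpack())
print(z.shape, z.device)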

Citation

Please always cite the original DFT-D2 [1] or DFT-D3 [2, 3] paper if you use this software in your publication.

DFT-D2:
[1] S. Grimme, J. Comput. Chem., 27 (2006), 1787-1799. DOI: 10.1002/jcc.20495

DFT-D3:
[2] S. Grimme, J. Antony, S. Ehrlich and H. Krieg, J. Chem. Phys., 132 (2010), 154104. DOI: 10.1063/1.3382344

If BJ-damping is used in DFT-D3:
[3] S. Grimme, S. Ehrlich and L. Goerigk, J. Comput. Chem., 32 (2011), 1456-1465. DOI: 10.1002/jcc.21759

Comments
  • [WIP] Cell-related gradient modifications

    [WIP] Cell-related gradient modifications

    I found that the current implementation has several performance issues regarding the gradient with respect to the cell. This PR modifies that. Since the changes are relatively large, I will add some comments.

    Change summary:

    • Use shift for the gradient instead of cell.
    • shift is now in length units instead of fractional cell units.
    • Calculate Voigt-notation stress directly.

    Also, this PR contains a bugfix related to skewed cells.

    bug enhancement 
    opened by So-Takamoto 1
  • Raise Error with single atom inputs.

    Raise Error with single atom inputs.

    When the length of atoms is 1, the routine raises an error (a standalone reproduction of the underlying reshape failure is sketched after this comment list).

    from ase.build import molecule
    from ase.calculators.dftd3 import DFTD3
    from torch_dftd.torch_dftd3_calculator import TorchDFTD3Calculator
    
    if __name__ == "__main__":
        atoms = molecule("H")
        # device="cuda:0" for fast GPU computation.
        calc = TorchDFTD3Calculator(atoms=atoms, device="cpu", damping="bj")
    
        energy = atoms.get_potential_energy()
        forces = atoms.get_forces()
    
        print(f"energy {energy} eV")
        print(f"forces {forces}")
    
    
    Traceback (most recent call last):
      File "quick.py", line 12, in <module>
        energy = atoms.get_potential_energy()
      File "/home/ahayashi/envs/dftd/lib/python3.8/site-packages/ase/atoms.py", line 731, in get_potential_energy
        energy = self._calc.get_potential_energy(self)
      File "/home/ahayashi/envs/dftd/lib/python3.8/site-packages/ase/calculators/calculator.py", line 709, in get_potential_energy
        energy = self.get_property('energy', atoms)
      File "/home/ahayashi/torch-dftd/torch_dftd/torch_dftd3_calculator.py", line 141, in get_property
        dftd3_result = Calculator.get_property(self, name, atoms, allow_calculation)
      File "/home/ahayashi/envs/dftd/lib/python3.8/site-packages/ase/calculators/calculator.py", line 737, in get_property
        self.calculate(atoms, [name], system_changes)
      File "/home/ahayashi/torch-dftd/torch_dftd/torch_dftd3_calculator.py", line 119, in calculate
        results = self.dftd_module.calc_energy(**input_dicts, damping=self.damping)[0]
      File "/home/ahayashi/torch-dftd/torch_dftd/nn/base_dftd_module.py", line 75, in calc_energy
        E_disp = self.calc_energy_batch(
      File "/home/ahayashi/torch-dftd/torch_dftd/nn/dftd3_module.py", line 86, in calc_energy_batch
        E_disp = d3_autoev * edisp(
      File "/home/ahayashi/torch-dftd/torch_dftd/functions/dftd3.py", line 189, in edisp
        c6 = _getc6(Zi, Zj, nci, ncj, c6ab=c6ab, k3=k3)  # c6 coefficients
      File "/home/ahayashi/torch-dftd/torch_dftd/functions/dftd3.py", line 97, in _getc6
        k3_rnc = torch.where(cn0 > 0.0, k3 * r, -1.0e20 * torch.ones_like(r)).view(n_edges, -1)
    RuntimeError: cannot reshape tensor of 0 elements into shape [0, -1] because the unspecified dimension size -1 can be any value and is ambiguous
    
    opened by AkihideHayashi 1
  • use shift for gradient calculation instead of cell

    use shift for gradient calculation instead of cell

    I found that the current implementation has several performance issues regarding the gradient with respect to the cell. This PR modifies it. Since the changes are relatively large, I will add some comments.

    Change summary:

    • Use shift for the gradient instead of cell.
    • shift is now in length units instead of fractional cell units.
    • Calculate Voigt-notation stress directly.

    Also, this PR contains a bugfix related to skewed cells.

    bug enhancement 
    opened by So-Takamoto 0
  • Bugfix: batch calculation with abc=True

    Bugfix: batch calculation with abc=True

    I found that the test function test_calc_energy_force_stress_device_batch_abc unintentionally ignores the abc argument.

    This PR modifies the related implementation so that it works.

    In addition, handling of the corner case where the total number of atoms is zero is also added. (n_graphs cannot be calculated from batch_edge when len(batch_edge) == 0.)

    bug 
    opened by So-Takamoto 0
  • Fixed a bug for inputs with 0 adjacencies.

    Fixed a bug for inputs with 0 adjacencies.

    The _getc6 routine now works correctly even when the number of adjacencies is 0. Instead of calling calc_neighbor_by_pymatgen when the number of atoms is 0 under periodic boundary conditions, it now returns edge_index and S for zero adjacencies. In my environment, using the result of torch.sum as the size for torch.zeros caused an error, so I changed it to cast the result of sum to int.

    bug 
    opened by AkihideHayashi 0
  • Bug in test for stress

    Bug in test for stress

    In _assert_energy_force_stress_equal in test_torch_dftd3_calculator.py, there is the code below.

        if np.all(atoms.pbc == np.array([True, True, True])):
            s1 = atoms.get_stress()
            s2 = atoms.get_stress()
            assert np.allclose(s1, s2, atol=1e-5, rtol=1e-5)
    

    This code cannot compare the stresses computed by calc1 and calc2; both s1 and s2 are the stress from calc2.

    opened by AkihideHayashi 0
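As a side note on the single-atom report above: the failure comes from viewing a tensor with zero elements using an inferred dimension, which torch cannot resolve. The snippet below is a standalone reproduction plus an illustrative guard; it is not the library's actual fix.

import torch

# With a single atom there are no pair edges, so the intermediate tensor is empty.
r = torch.empty(0)

try:
    r.view(0, -1)  # the -1 dimension cannot be inferred from 0 elements
except RuntimeError as e:
    print("view(0, -1) fails:", e)

# Giving the trailing dimension explicitly removes the ambiguity (illustrative only).
n_edges = 0
print(r.view(n_edges, 1).shape)  # torch.Size([0, 1])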
Releases(v0.3.0)
  • v0.3.0(Apr 25, 2022)

    This is the release note of v0.3.0.

    Highlights

    • use shift for gradient calculation instead of cell #13 (Thank you @So-Takamoto )
      • It includes (1) a speed-up of the stress calculation for batched atoms, and (2) a bug fix for the stress calculation when the cell is skewed.
  • v0.2.0(Sep 4, 2021)

    This is the release note of v0.2.0.

    Highlights

    • Add PFP citation in README.md #2
    • Use pymatgen for pbc neighbor search speed up #3

    Bug fixes

    • Fixed a bug for inputs with 0 adjacencies. #6 (Thank you @AkihideHayashi )
    • Remove RuntimeError on no-cupy environment #8 (Thank you @So-Takamoto )
    • Bugfix: batch calculation with abc=True #9 (Thank you @So-Takamoto )

    Others

    • move pysen to develop dependency #10 (Thank you @So-Takamoto )
  • v0.1.0(May 10, 2021)
