Application of the L2HMC algorithm to simulations in lattice QCD.


📊 Slides

📒 Example Notebook


Overview

The L2HMC algorithm aims to improve upon HMC by optimizing a loss function designed to minimize autocorrelations within the Markov chain, thereby improving the sampler's efficiency.
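
Concretely, the loss rewards proposals that travel far in state space while keeping a high acceptance probability. A minimal sketch of an expected-squared-jump-distance loss of this kind (the function and argument names, and the scale/eps constants, are illustrative rather than the repo's exact implementation):

    import tensorflow as tf

    def l2hmc_loss(x_init, x_proposed, accept_prob, scale=1.0, eps=1e-4):
        """Sketch of the expected-squared-jump-distance loss from the
        original L2HMC paper; names are illustrative."""
        # Squared jump distance of each proposal, weighted by its
        # probability of being accepted.
        esjd = accept_prob * tf.reduce_sum((x_proposed - x_init) ** 2, axis=-1)
        # The reciprocal term strongly penalizes proposals that barely move;
        # the subtracted term rewards proposals that move far.
        return tf.reduce_mean(scale / (esjd + eps) - esjd / scale)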

This work is based on the original implementation: brain-research/l2hmc/.

A detailed description of the L2HMC algorithm can be found in the paper:

Generalizing Hamiltonian Monte Carlo with Neural Networks

by Daniel Levy, Matthew D. Hoffman, and Jascha Sohl-Dickstein.

Broadly, given an analytically described target distribution, π(x), L2HMC provides a statistically exact sampler that:

  • Quickly converges to the target distribution (fast burn-in).
  • Quickly produces uncorrelated samples (fast mixing).
  • Is able to efficiently mix between energy levels.
  • Is capable of traversing low-density zones to mix between modes (often difficult for generic HMC).
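
Since the sampler only needs π(x) in analytic form, a toy multimodal target can be specified directly through its potential (the repo's utils/distributions module provides similar toy potentials). A minimal sketch, assuming an equal-weight two-mode Gaussian mixture; the names below are illustrative, not from the repo:

    import tensorflow as tf

    # Toy two-mode target: pi(x) ~ exp(-U(x)), with U the potential of an
    # equal-weight Gaussian mixture. Mixing between the two modes is
    # exactly where generic HMC tends to struggle.
    CENTERS = tf.constant([[-2.0, 0.0], [2.0, 0.0]])

    def potential(x):
        """U(x) = -log pi(x) up to a constant, for x of shape [batch, 2]."""
        # Squared distance from each sample to each mode center.
        d2 = tf.reduce_sum((x[:, None, :] - CENTERS[None, :, :]) ** 2, axis=-1)
        return -tf.reduce_logsumexp(-0.5 * d2, axis=-1)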

L2HMC for LatticeQCD

Goal: Use L2HMC to efficiently generate gauge configurations for calculating observables in lattice QCD.

A detailed description of the (ongoing) work to apply this algorithm to simulations in lattice QCD (specifically, a 2D U(1) lattice gauge theory model) can be found in doc/main.pdf.

l2hmc-qcd poster

Organization

Dynamics / Network

The base class for the augmented L2HMC leapfrog integrator is implemented as BaseDynamics (a tf.keras.Model object).

GaugeDynamics is a subclass of BaseDynamics containing modifications specific to the 2D U(1) pure-gauge theory.
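
As a rough sketch of this structure (class and method bodies here are illustrative placeholders, not the repo's exact API):

    import numpy as np
    import tensorflow as tf

    class BaseDynamics(tf.keras.Model):
        """Sketch: an augmented leapfrog integrator as a tf.keras.Model."""

        def __init__(self, potential_fn, num_steps, eps):
            super().__init__()
            self.potential_fn = potential_fn  # U(x) = -log pi(x)
            self.num_steps = num_steps        # leapfrog steps per trajectory
            self.eps = eps                    # leapfrog step size

        def call(self, inputs, training=False):
            x, v = inputs
            for _ in range(self.num_steps):
                x, v = self._leapfrog_step(x, v, training=training)
            return x, v

        def _grad_potential(self, x):
            with tf.GradientTape() as tape:
                tape.watch(x)
                u = self.potential_fn(x)
            return tape.gradient(u, x)

        def _leapfrog_step(self, x, v, training=False):
            # Plain HMC leapfrog shown here; L2HMC augments each of these
            # updates with learned scale/translation/transformation terms.
            v = v - 0.5 * self.eps * self._grad_potential(x)
            x = x + self.eps * v
            v = v - 0.5 * self.eps * self._grad_potential(x)
            return x, v

    class GaugeDynamics(BaseDynamics):
        """Sketch: 2D U(1) specialization, where x holds link angles."""

        def _leapfrog_step(self, x, v, training=False):
            x, v = super()._leapfrog_step(x, v, training=training)
            # Wrap link angles back into [-pi, pi) after each update.
            x = tf.math.floormod(x + np.pi, 2 * np.pi) - np.pi
            return x, v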

The network is defined in l2hmc-qcd/network/functional_net.py.
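
As a hedged illustration of what a functional-API network feeding the leapfrog updates might look like (layer sizes, names, and inputs here are assumptions, not the repo's actual architecture):

    import tensorflow as tf

    def make_leapfrog_net(xdim, units=64):
        """Sketch of a functional-API network producing the scale (s),
        transformation (q), and translation (t) outputs used by a
        leapfrog layer; all sizes are illustrative."""
        x = tf.keras.Input(shape=(xdim,), name='x')
        v = tf.keras.Input(shape=(xdim,), name='v')
        h = tf.keras.layers.Concatenate()([x, v])
        h = tf.keras.layers.Dense(units, activation='relu')(h)
        s = tf.keras.layers.Dense(xdim, activation='tanh', name='scale')(h)
        q = tf.keras.layers.Dense(xdim, activation='tanh', name='transformation')(h)
        t = tf.keras.layers.Dense(xdim, name='translation')(h)
        return tf.keras.Model(inputs=[x, v], outputs=[s, q, t])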

Network Architecture

An illustration of the leapfrog layer updating (x, v) --> (x', v') can be seen below.

leapfrog layer
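
Schematically, each leapfrog layer replaces the fixed HMC momentum update with one driven by the learned scale (s), transformation (q), and translation (t) outputs. A minimal sketch of the momentum half-step, following the form of the update in the original paper (argument names are illustrative):

    import tensorflow as tf

    def v_half_step(v, grad_u, eps, s, q, t):
        """Sketch of the L2HMC momentum half-step.

        s, q, t are the network outputs evaluated on the current state;
        grad_u is dU/dx. With s = q = t = 0 this reduces to the plain
        HMC update v - 0.5 * eps * grad_u.
        """
        return (v * tf.exp(0.5 * eps * s)
                - 0.5 * eps * (grad_u * tf.exp(eps * q) + t))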

Lattice

Lattice code can be found in lattice.py, specifically the GaugeLattice object, which provides the base structure on which our target distribution is defined.

Additionally, the GaugeLattice object implements a variety of methods for calculating physical observables such as the average plaquette, ɸₚ, and the topological charge, Q.
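
For a 2D U(1) lattice stored as an array of link angles, these observables reduce to short expressions. A minimal numpy sketch, assuming a [2, Nt, Nx] array layout and the common conventions ⟨cos ɸₚ⟩ for the average plaquette and Q = (1/2π) Σₚ arg(ɸₚ) for the topological charge (the layout and conventions are assumptions, not necessarily the repo's exact ones):

    import numpy as np

    def plaquettes(links):
        """Plaquette angles for a 2D U(1) lattice.

        Layout assumption: links[mu, t, x] is the angle of the mu-link
        at site (t, x), with mu = 0 the time direction.
        """
        t0, t1 = links[0], links[1]
        # phi_P = theta_0(n) + theta_1(n + e0) - theta_0(n + e1) - theta_1(n)
        return t0 + np.roll(t1, -1, axis=0) - np.roll(t0, -1, axis=1) - t1

    def average_plaquette(links):
        return np.mean(np.cos(plaquettes(links)))

    def topological_charge(links):
        # Project each plaquette angle into [-pi, pi), then sum;
        # Q is (nearly) integer-valued.
        phi = plaquettes(links)
        phi = (phi + np.pi) % (2 * np.pi) - np.pi
        return np.round(np.sum(phi) / (2 * np.pi))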

Training

The training loop is implemented in l2hmc-qcd/utils/training_utils.py.

To train the sampler on a 2D U(1) gauge model using the parameters specified in bin/train_configs.json:

$ python3 /path/to/l2hmc-qcd/l2hmc-qcd/train.py --json_file=/path/to/l2hmc-qcd/bin/train_configs.json

Alternatively, use the train.sh script provided in bin/.

Features

  • Distributed training (via horovod): If horovod is installed, the model can be trained across multiple GPUs (or CPUs) by:

    #!/bin/bash
    
    TRAINER=/path/to/l2hmc-qcd/l2hmc-qcd/train.py
    JSON_FILE=/path/to/l2hmc-qcd/bin/train_configs.json
    
    horovodrun -np ${PROCS} python3 ${TRAINER} --json_file=${JSON_FILE}
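
Here ${PROCS} is the number of parallel processes to launch (typically one per GPU); horovodrun runs one copy of the trainer per process and averages gradients across them.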

Contact


Code author: Sam Foreman

Pull requests and issues should be directed to: saforem2

Citation

If you use this code or find this work interesting, please cite our work along with the original paper:

@misc{foreman2021deep,
      title={Deep Learning Hamiltonian Monte Carlo}, 
      author={Sam Foreman and Xiao-Yong Jin and James C. Osborn},
      year={2021},
      eprint={2105.03418},
      archivePrefix={arXiv},
      primaryClass={hep-lat}
}
@article{levy2017generalizing,
  title={Generalizing Hamiltonian Monte Carlo with Neural Networks},
  author={Levy, Daniel and Hoffman, Matthew D. and Sohl-Dickstein, Jascha},
  journal={arXiv preprint arXiv:1711.09268},
  year={2017}
}

Acknowledgement

This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under contract DE-AC02-06CH11357. This work describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the work do not necessarily represent the views of the U.S. DOE or the United States Government. Declaration of Interests - None.


Comments
  • Remove upper bound on python_requires


    (I'm moving between meetings so can iterate on this more later, so excuse the very brief Issue for now).

    At the moment the project has an upper bound on python_requires

    https://github.com/saforem2/l2hmc-qcd/blob/2eb6ee63cc0c53b187e6d716f4c12f418c8b8515/setup.py#L165

    Assuming that you're intending l2hmc to be a library and not an application, then I would highly recommend removing this for the reasons summarized in Henry's detailed blog post on the subject.

    Congrats on getting l2hmc up on PyPI though! :snake: :rocket:

    opened by matthewfeickert 2
  • Alpha


    Pull upstream alpha branch into main

    Major changes

    • new src/ hierarchical module organization
    • Contains skeleton implementation of 4D SU(3) lattice gauge model
    • Framework independent configuration
      • Unified configuration system simplifies logic, same configs used for both tensorflow and pytorch experiments
      • Plan to be able to specify which backend to use through config option
    • Unified (and framework independent) configurations between tensorflow and pytorch implementations

    Note: This is still very much a WIP. Many existing features still need to be re-implemented / updated into new code in src/.

    Todo

    • [ ] Write unit tests
    • [ ] Use simple configs for end-to-end workflow test + integrate into CI
    • [ ] dynamic learning rate scheduling
    • [ ] Test 4D SU(3) numpy code
    • [ ] Write tensorflow and pytorch implementations of LatticeSU3 objects
    • [ ] Improved / simplified ( / trainable?) annealing schedule
    • [ ] Distributed training support
      • [ ] horovod
      • [ ] DDP for pytorch implementation
      • [ ] DeepSpeed from Microsoft??
    • [ ] Testing / inference logic
    • [ ] Automatic checkpointing
    • [ ] Metric logging
      • [ ] Tensorboard?
      • [ ] Sacred?
      • [ ] build custom dashboard? plot.ly?
    • [ ] Setup packaging / distribution through pip
    • [ ] Resolve issue
    opened by saforem2 1
  • Alpha


    opened by saforem2 1
  • Rich


    General improvements, rewrote logging methods to use Rich for better formatting.

    • Adds a dynamic (trainable) step size eps for the separate x and v updates; this seems to increase the total energy toward the middle of the trajectory, though it remains unclear whether it corresponds to an improvement in the tunneling rate
    • Adds methods for calculating autocorrelations of the topological charge, as well as notebooks for generating the plots
    • Updates to the writeup in doc/main.pdf
    • Will likely be last changes to writeup before public release of official draft
    opened by saforem2 1
  • Dev


    • Updates to README

    • Ability to load network with new training instance

    • Updates to doc/, removes old sections related to debugging the bias in the plaquette

    opened by saforem2 1
  • Saveable model


    Complete rewrite of the dynamics.xnet and dynamics.vnet models to use the tf.keras functional API.

    Additional changes include:

    • Non-Compact Projection update for gauge fields
    • Ability to specify convolution structure to be prepended at beginning of gauge network
    opened by saforem2 1
  • Dev


    Removes models/gauge_model.py entirely.

    Instead, a base dynamics class is implemented in dynamics/dynamics.py, and an example subclass is provided in dynamics/gauge_dynamics.py.

    opened by saforem2 1
  • Split networks


    Major rewrite of existing codebase.

    This pull request updates everything to be compatible with tensorflow >= 2.2 and removes a bunch of redundant legacy code.

    opened by saforem2 1
  • Dev


    • Dynamics object is now compatible with tf >= 2.0
    • Running inference on trained model with tensorflow now creates identical graphs and summary files to numpy inference code
    • Inference with numpy now uses object oriented structure
    • Adds LaTeX + PDF documentation in doc/
    opened by saforem2 1
  • Cooley dev


    Adds the new GaugeNetwork architecture as the default for training GaugeModel.

    Additionally, replaces pickle with joblib for saving data as .z compressed files (as opposed to .pkl files).

    opened by saforem2 1
  • Testing


    Implemented nnehmc_loss calculation for an alternative loss function using the approach suggested in https://infoscience.epfl.ch/record/264887/files/robust_parameter_estimation.pdf.

    This modified loss function can be chosen (instead of the standard loss described in the original paper) by passing --use_nnehmc_loss as a command line argument.

    opened by saforem2 1
  • Packaging and PyPI distribution?


    As you've made a library and are using it as such:

    # snippet from toy_distributions.ipynb
    
    # append parent directory to `sys.path`
    # to load from modules in `../l2hmc-qcd/`
    module_path = os.path.join('..')
    if module_path not in sys.path:
        sys.path.append(module_path)
    
    # Local imports
    from utils.attr_dict import AttrDict
    from utils.training_utils import train_dynamics
    from dynamics.config import DynamicsConfig
    from dynamics.base_dynamics import BaseDynamics
    from dynamics.generic_dynamics import GenericDynamics
    from network.config import LearningRateConfig
    from config import (State, NetWeights, MonteCarloStates,
                        BASE_DIR, BIN_DIR, TF_FLOAT)
    
    from utils.distributions import (plot_samples2D, contour_potential,
                                     two_moons_potential, sin_potential,
                                     sin_potential1, sin_potential2)
    

    do you have any plans and/or interest in packaging it as a Python library so it can either be pip installed from GitHub or be distributed on PyPI?

    opened by matthewfeickert 5