Reproducible research and reusable acyclic workflows in Python. Execute code on HPC systems as if you executed it on your personal computer!

Overview

Reproducible research and reusable acyclic workflows in Python. Execute code on HPC systems as if you executed it on your machine!

Motivation

Would you like fully reproducible research or reusable workflows that seamlessly run on HPC clusters? Tired of writing and managing large Slurm submission scripts? Do you have to comment out large parts of your pipeline whenever their results have already been generated? Don't waste your precious time! awflow allows you to describe complex pipelines directly in Python and run them on your personal computer as well as on large HPC clusters.

import awflow as aw
import glob
import numpy as np

n = 100000
tasks = 10

@aw.cpus(4)  # Request 4 CPU cores
@aw.memory("4GB")  # Request 4 GB of RAM
@aw.postcondition(aw.num_files('pi-*.npy', 10))
@aw.tasks(tasks)  # Request 10 parallel tasks
def estimate(task_index):
    print("Executing task {} / {}.".format(task_index + 1, tasks))
    x = np.random.random(n)
    y = np.random.random(n)
    pi_estimate = (x**2 + y**2 <= 1)
    np.save('pi-' + str(task_index) + '.npy', pi_estimate)

@aw.dependency(estimate)
def merge():
    files = glob.glob('pi-*.npy')
    stack = np.vstack([np.load(f) for f in files])
    np.save('pi.npy', stack.sum() / (n * tasks) * 4)

@aw.dependency(merge)
@aw.postcondition(aw.exists('pi.npy'))  # Prevent execution if postcondition is satisfied.
def show_result():
    print("Pi:", np.load('pi.npy'))

aw.execute()

Executing this Python program (python examples/pi.py) on a Slurm HPC cluster will launch the following jobs.

           1803299       all    merge username PD       0:00      1 (Dependency)
           1803300       all show_res username PD       0:00      1 (Dependency)
     1803298_[6-9]       all estimate username PD       0:00      1 (Resources)
         1803298_3       all estimate username  R       0:01      1 compute-xx
         1803298_4       all estimate username  R       0:01      1 compute-xx
         1803298_5       all estimate username  R       0:01      1 compute-xx

Check the examples directory and guide to explore the functionality.

Installation

The awflow package is available on PyPI and can be installed via pip.

$ pip install awflow

If you would like the latest features, you can install the package directly from the Git repository.

$ pip install git+https://github.com/JoeriHermans/awflow

If you would like to run the examples as well, be sure to install the optional example dependencies.

$ pip install 'awflow[examples]'

Usage

The core concept in awflow is the notion of a task: a method that will be executed in your workflow. Every task is represented as a node in a directed acyclic graph, which makes it easy to specify (task) dependencies. In addition, properties can be attached to tasks with the decorators defined by awflow. This allows you to specify things like CPU cores, memory, GPUs and even postconditions. Follow the guide for additional examples and descriptions.
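
As a minimal sketch of these concepts (reusing only the decorators that appear in the pi example above; treat that example, not this snippet, as the reference for the exact behaviour), a two-task workflow could look like this:

import awflow as aw

@aw.cpus(1)  # Request a single CPU core for this task.
@aw.postcondition(aw.exists('data.txt'))  # Skip the task if its output already exists.
def generate():
    with open('data.txt', 'w') as f:
        f.write('hello world')

@aw.dependency(generate)  # Execute only after 'generate' has completed.
def report():
    with open('data.txt') as f:
        print(f.read())

aw.execute()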

Decorators

TODO
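
Until this section is written, the decorators used in the pi example above already summarize what is available. The overview below is a sketch based on that example only ('other_task' is a placeholder name), not an exhaustive reference:

@aw.cpus(4)                                      # Request 4 CPU cores.
@aw.memory("4GB")                                # Request 4 GB of RAM.
@aw.tasks(10)                                    # Split the task into 10 parallel tasks.
@aw.dependency(other_task)                       # Execute only after 'other_task' has completed.
@aw.postcondition(aw.exists('pi.npy'))           # Skip execution if the postcondition is satisfied.
@aw.postcondition(aw.num_files('pi-*.npy', 10))  # Postcondition on the number of matching files.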

Workflow storage

By default, workflows will be stored in the current working directory within the .workflows folder. If desired, a central storage directory can be used by setting the AWFLOW_STORAGE environment variable.
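
For example (the storage path below is purely illustrative), a shared storage directory can be configured before submitting the pipeline:

$ export AWFLOW_STORAGE=/path/to/shared/storage   # Illustrative path; pick any writable directory.
$ python pipeline.py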

The awflow utility

This package comes with a utility program to manage submitted, failed, and pending workflows. Its functionality can be inspected by executing awflow -h. In addition, to streamline the management of workflows, we recommend giving every workflow a specific name so it can easily be identified. This name does not have to be unique for every distinct workflow execution.

aw.execute(name=r'Some name')

Executing awflow list after submitting the pipeline with python pipeline.py [args] will yield the following.

$ awflow list
  Postconditions      Status      Backend     Name          Location
 ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
  50%                 Running     Slurm       Some name     /home/jhermans/awflow/examples/.workflows/tmpntmc712a

Modules

$ awflow cancel [workflow] TODO

$ awflow clear TODO

$ awflow list TODO

$ awflow inspect [workflow] TODO

Contributing

See CONTRIBUTING.md.

Roadmap

  • Documentation
  • README

License

As described in the LICENSE file.

Comments
  • [BUG] conda activation crashes standalone execution

    Issue description

    In the standalone backend on Unix systems, the os.system(command) used here

    https://github.com/JoeriHermans/awflow/blob/1fcf255debfbc18d39a6b2baa387bbc85050209d/awflow/backends/standalone/executor.py#L53-L60

    actually calls /bin/sh. On some operating systems, such as Ubuntu, sh links to dash, which does not support the scripting features required by conda activation. This results in runtime errors like

    sh: 5: /home/username/miniconda3/envs/envname/etc/conda/activate.d/activate-binutils_linux-64.sh: Syntax error: "(" unexpected
    

    Proposed solution

    A solution would be to change the shell with which the commands are called. This is possible thanks to the subprocess package. A good default would be bash, as it is available on almost all Unix systems.

        # Note: requires 'import subprocess' at the top of the module.
        if node.tasks > 1:
            # Run every task of the array, passing the task index as an argument.
            for task_index in range(node.tasks):
                task_command = command + ' ' + str(task_index)
                return_code = subprocess.call(task_command, shell=True, executable='/bin/bash')
        else:
            return_code = subprocess.call(command, shell=True, executable='/bin/bash')
    

    One could also add a way to change this default. Additionally, wouldn't it be better to launch the tasks as background jobs for the standalone backend (simply add & at the end of the command) ?
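
    A rough sketch of that suggestion (untested, reusing the variable names from the snippet above and using subprocess.Popen instead of a literal &, so that the return codes can still be collected):

        import subprocess

        # Launch every task as a background process instead of blocking on each one.
        processes = []
        for task_index in range(node.tasks):
            task_command = command + ' ' + str(task_index)
            processes.append(subprocess.Popen(task_command, shell=True, executable='/bin/bash'))
        # Wait for all tasks to finish and collect their return codes.
        return_codes = [process.wait() for process in processes]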

    bug 
    opened by francois-rozet 1
  • [BUG] pip install fails for version 0.0.4

    $ pip install awflow==0.0.4
    Collecting awflow==0.0.4
      Using cached awflow-0.0.4.tar.gz (19 kB)
        ERROR: Command errored out with exit status 1:
         command: /home/francois/awf/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ou4rxs3q/awflow/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ou4rxs3q/awflow/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-ou4rxs3q/awflow/pip-egg-info
             cwd: /tmp/pip-install-ou4rxs3q/awflow/
        Complete output (7 lines):
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/tmp/pip-install-ou4rxs3q/awflow/setup.py", line 54, in <module>
            'examples': _load_requirements('requirements_examples.txt')
          File "/tmp/pip-install-ou4rxs3q/awflow/setup.py", line 17, in _load_requirements
            with open(file_name, 'r') as file:
        FileNotFoundError: [Errno 2] No such file or directory: 'requirements_examples.txt'
        ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    
    bug high priority 
    opened by francois-rozet 1
  • Jobs submitted with awflow doesn't work with Multiprocessing.pool

    Hi,

    I tried submitting a few jobs with awflow, but each time I run it with the Slurm backend the pool.starmap call never produces any results and the process simply times out on the cluster. The worker processes are spawned, but each of them uses 0 % of the CPU:

        0 0 8196756 5.1g 85664 S 0.0 1.0 2:12.27 python
        790517 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.66 python
        790518 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.45 python
        790519 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.76 python
        790520 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:02.02 python
        790521 rnath 20 0 7953388 5.0g 12020 S 0.0 1.0 0:01.99 python

    This goes on until the job hits its time limit:

        slurmstepd: error: *** JOB 1933332 ON compute-04 CANCELLED AT 2022-04-08T19:33:26 DUE TO TIME LIMIT ***

    opened by digirak 0