Deep Learning Pipelines for Apache Spark

Overview

This repo contains only the HorovodRunner code used for local CI and the API docs. To use HorovodRunner for distributed training, please use Databricks Runtime for Machine Learning; see the Databricks documentation page HorovodRunner: distributed deep learning with Horovod for details.

To use the previous release, which contains the Spark Deep Learning Pipelines API, please go to the Spark Packages page.
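
For reference, the packaged release can be attached with the --packages flag; the coordinate below is the 1.5.0 one quoted in a comment further down this page, so substitute the release you need:

    pyspark --packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11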

API Documentation

class sparkdl.HorovodRunner(*, np, driver_log_verbosity='all')

Bases: object

HorovodRunner runs distributed deep learning training jobs using Horovod.

On Databricks Runtime 5.0 ML and above, it launches the Horovod job as a distributed Spark job. It makes running Horovod easy on Databricks by managing the cluster setup and integrating with Spark. Check out Databricks documentation to view end-to-end examples and performance tuning tips.

The open-source version runs the job locally inside the same Python process and is intended for local development only.

NOTE: Horovod is a distributed training framework developed by Uber.

  • Parameters

    • np – number of parallel processes to use for the Horovod job. This argument only takes effect on Databricks Runtime 5.0 ML and above. It is ignored in the open-source version. On Databricks, each process will take an available task slot, which maps to a GPU on a GPU cluster or a CPU core on a CPU cluster. Accepted values are:

      • If <0, this will spawn -np subprocesses on the driver node to run Horovod locally. Training stdout and stderr messages go to the notebook cell output and are also available in the driver logs in case the cell output is truncated. This is useful for debugging, and we recommend testing your code in this mode first. However, be careful about heavy use of the Spark driver on a shared Databricks cluster. Note that np < -1 is only supported on Databricks Runtime 5.5 ML and above.
      • If >0, this will launch a Spark job with np tasks that start together and run the Horovod job on the task nodes. It will wait until np task slots are available to launch the job. If np is greater than the total number of task slots on the cluster, the job will fail. As of Databricks Runtime 5.4 ML, training stdout and stderr messages go to the notebook cell output. In the event that the cell output is truncated, full logs are available in the stderr stream of task 0 under the second Spark job started by HorovodRunner, which you can find in the Spark UI.
      • If 0, this will use all task slots on the cluster to launch the job.

        Warning: Setting np=0 is deprecated and it will be removed in the next major Databricks Runtime release. Choosing np based on the total task slots at runtime is unreliable due to dynamic executor registration. Please set the number of parallel processes you need explicitly.
    • driver_log_verbosity – verbosity of the logs on the driver. This argument is only available on Databricks Runtime.
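
A small sketch of how np is typically chosen (hr_debug and hr_dist are illustrative names; the behavior is as documented above):

    from sparkdl import HorovodRunner

    # np = -1: spawn one subprocess on the driver and run Horovod locally;
    # the docs above recommend testing in this mode first
    hr_debug = HorovodRunner(np=-1)

    # np = 2: launch a Spark job with two tasks and run Horovod on the task
    # nodes; waits for two free task slots and fails if the cluster has fewer
    hr_dist = HorovodRunner(np=2)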

run(main, **kwargs)

Runs a Horovod training job invoking main(**kwargs).

The open-source version only invokes main(**kwargs) inside the same Python process. On Databricks Runtime 5.0 ML and above, it will launch the Horovod job based on the documented behavior of np. Both the main function and the keyword arguments are serialized using cloudpickle and distributed to cluster workers.

  • Parameters

    • main – a Python function that contains the Horovod training code. The expected signature is def main(**kwargs) or a compatible form. Because the function gets pickled and distributed to workers, make changes to global state (e.g., setting the logging level) inside the function itself, and be aware of pickling limitations. Avoid referencing large objects from the function, which might result in large pickled data and make the job slow to start.

    • kwargs – keyword arguments passed to the main function at invocation time.

  • Returns

    return value of the main function. With np>=0, this returns the value from the rank 0 process. Note that the returned value must be serializable with cloudpickle.
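
Putting the pieces together, a minimal sketch (the body of main and the learning_rate parameter are placeholders, not part of the documented API):

    from sparkdl import HorovodRunner

    def main(learning_rate=0.1):
        # change global state (e.g., the logging level) inside the function,
        # since it is pickled with cloudpickle and shipped to the workers
        import logging
        logging.getLogger().setLevel(logging.INFO)
        # ... Horovod training code goes here ...
        return learning_rate  # the return value must be cloudpickle-serializable

    # runs in-process in the open-source version; on Databricks Runtime 5.0 ML
    # and above it launches a distributed job and returns the rank 0 result
    hr = HorovodRunner(np=2)
    result = hr.run(main, learning_rate=0.01)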

Releases

Visit the GitHub Releases page to check the release notes.

License

  • The source code is released under the Apache License 2.0 (see the LICENSE file).
Comments
  • Can't import sparkdl with spark-deep-learning-assembly-0.1.0-spark2.1.jar

    Can't import sparkdl with spark-deep-learning-assembly-0.1.0-spark2.1.jar

    First of all, thank you for a great library!

    I tried to use sparkdl in PySpark, but couldn't import sparkdl. Detailed procedure is as follows:

    # make sparkdl jar
    build/sbt assembly
    
    # run pyspark with sparkdl
    pyspark --master local[4] --jars target/scala-2.11/spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    # import sparkdl
    import sparkdl
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named sparkdl
    

    After digging around a bit, I found that it works if I extract the jar file as follows.

    cd target/scala-2.11
    mkdir tmp
    cp spark-deep-learning-assembly-0.1.0-spark2.1.jar tmp/
    cd tmp
    jar xf spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    pyspark --jars spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    import sparkdl
    Using TensorFlow backend.
    

    Edited-1: The second method works only in the directory where the jar file was extracted.
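
    A possible explanation, added here as an editorial note rather than part of the original thread: the assembly jar bundles the sparkdl Python sources, and Python can import packages from a jar (a zip archive) that is on sys.path, which would avoid the extraction step. A sketch, assuming the jar has the sparkdl/ package at its root:

    import sys

    # assumption: the assembly jar contains the sparkdl/ package at its root,
    # so Python's zipimport can load it straight from the archive
    sys.path.insert(0, "target/scala-2.11/spark-deep-learning-assembly-0.1.0-spark2.1.jar")

    import sparkdl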

    Best wishes, HanCheol

    opened by priancho 14
  • Porting Keras Estimator API and Reference Implementation

    Porting Keras Estimator API and Reference Implementation

    What changes are proposed in this pull request?

    Creating a Spark MLlib Estimator API for Keras models, with a reference implementation. It provides a taste of how to ingest images from URIs in a DataFrame and use them to train a Keras model.

    The changes consist of these components.

    1. Extracted a few Params types for Keras Transformers/Estimators.
    2. Keras utilities
      • Serialization: model <=> hdf5 <=> bytes (for broadcast)
      • Check available Keras options (optimizers, loss functions, etc.)
    3. Keras Estimator.

    How is this patch tested?

    • [x] Unit tests
    • [x] Manual tests
    opened by phi-dbq 11
  • Not able to import sparkdl in jupyter notebook

    Not able to import sparkdl in jupyter notebook

    Hi,

    I am trying to use this library in a Jupyter notebook, but I am getting a "no module found" error.

    When I run the command below, I am able to import sparkdl in the Spark shell:

    pyspark --packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11

    How can I use it in a Jupyter notebook?
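
    One common approach, offered as a sketch rather than the thread's accepted answer: set PYSPARK_SUBMIT_ARGS (a standard PySpark environment variable) before any SparkSession is created in the notebook, so the package is fetched when the JVM starts:

    import os

    # assumption: no SparkSession has been created yet in this kernel
    os.environ["PYSPARK_SUBMIT_ARGS"] = (
        "--packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11 pyspark-shell"
    )

    from pyspark.sql import SparkSession
    spark = SparkSession.builder.getOrCreate()

    import sparkdl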

    opened by yashwanthmadaka24 7
  • Support and build against Keras 2.2.2 and TF 1.10.0

    Support and build against Keras 2.2.2 and TF 1.10.0

    • bump spark version to 2.3.1
    • bump tensorframes version to 0.4.0
    • bump keras==2.2.2 and tensorflow==1.10.0 to fix travis issues
    • TF_C_API_GRAPH_CONSTRUCTION added as a temp fix
    • Drop support for Spark <2.3 and hence Scala 2.10
    • add python3 friendly print
    • add pooling='avg' in the resnet50 testing model because the Keras API changed
    • relax array-almost-equal assertions to precision 5 in NamedImageTransformerBaseTestCase, test_bare_keras_module, and keras_load_and_preproc
    • make keras model smaller in test_simple_keras_udf

    This is a continued work from https://github.com/databricks/spark-deep-learning/pull/149.

    opened by lu-wang-dl 6
  • Fix KerasImageFileEstimator for model tuning

    Fix KerasImageFileEstimator for model tuning

    • add test for KerasImageFileEstimator used with CrossValidator
    • fix pickling issue
    • use new fitMultiple api and ensure thread safety
    • fix tests to reflect new api
    • bugfix setDefault
    • bugfix HasOutputMode
    • remove _validateParams
    • avoid testing KIFT functionality in KIFEst tests
    opened by yogeshg 6
  • Replace sparkdl's ImageSchema with Spark 2.3's version

    Replace sparkdl's ImageSchema with Spark 2.3's version

    Use Spark 2.3's ImageSchema as image interface.

    • The biggest change is the opposite ordering of color channels (BGR instead of RGB), which requires extra reordering in a couple of places.
    • Preserved the ability to read and resize images in Python using PIL to match Keras (resize gives a different result, and reading JPEGs also produced images that were off by 1 on some green pixels).
    • Needed a few tweaks to run with Spark 2.3; notably, UDFs are now referenced by SQL identifier and cannot have a dash in the name.

    [TODO] - In order to run on spark < 2.3, the image schema files have been copied here and need to be removed in the future.

    opened by tomasatdatabricks 6
  • TensorFlow Graph Transformer Part-1: Params and Converters

    TensorFlow Graph Transformer Part-1: Params and Converters

    This is the first part of PRs from the TF Transformer API design POC PR.

    It introduces parameters needed for the TFTransformer (minus those for TFInputGraph) and corresponding type converters.

    • Introducing MLlib model Params for TFTransformer.
    • Type conversion utilities for the introduced MLlib Params used in Spark Deep Learning Pipelines.
      • We follow the convention of MLlib to name these utilities "converters", but most of them act as type checkers that return the argument if it is of the desired type and raise TypeError otherwise.
      • We follow the practice that a type converter only returns a single type.
    opened by phi-dbq 6
  • Add style checks and refactor suggestions

    Add style checks and refactor suggestions

    In this PR, we

    • add python/.pylint/suggested.rc adapted from the default configuration generated by pylint
    • allow both camelCase and snake_case using regexes lifted from pylint source code
    • increase thresholds for the number of arguments and local variables
    • disable checks for patterns that are used often in this project: unused-argument, too-many-arguments, no-member, missing-docstring, no-init, protected-access, misplaced-comparison-constant, no-else-return, fixme
    • escape some code with # pylint: disable=... because it was hard to refactor without thorough testing

    Some style decisions that were discussed are:

    • disables are acceptable if there is no other way to do this, in which case a comment must be left explaining that
    • other disables should be removed and should be considered similar to todos
    • we allow todo marks in code because these are acceptable for this project and should be taken care of in future
    • there are currently 50 TODOs, FIXMEs, or pylint disables; we should aim to bring this number down. They can be listed with:

      find python/sparkdl | grep ".*\.py$" | xargs egrep -ino --color=auto "(TODO|FIXME|# pylint).*"
    • function calls and function definitions that span more than one line are left to the committer's and reviewer's discretion
      • pep8 style:
      long_function_name(
          long_argument_one="one",
          long_argument_two="two",
          long_argument_three="three",
          long_argument_four="four",
          long_argument_five="five")
      
      • MLlib style:
      long_function_name(
          long_argument_one="one", long_argument_two="two", long_argument_three="three",
          long_argument_four="four", long_argument_five="five")
      
    opened by yogeshg 5
  • Fix bug in conversions from row image to/from BufferedImage

    Fix bug in conversions from row image to/from BufferedImage

    Fix bug in conversions to/from BufferedImage in which we copied raw byte data to BufferedImage rasters using the wrong channel ordering. The fix in this PR is to use BufferedImage.setRGB, BufferedImage.getRGB APIs instead of accessing image raster data directly for three and four-channel images.

    Also enhanced an existing unit test to verify that we correctly convert from row image to BufferedImage for one and four-channel images.

    opened by smurching 5
  • Update ImageUtils to support resizing one, three, or four channel images

    Update ImageUtils to support resizing one, three, or four channel images

    This PR:

    • Updates conversions from row image to/from Buffered image (spImageToBufferedImage and spImageFromBufferedImage) to support one, three, and four channel images
    • Updates resizeImage to use the tgtChannels parameter to determine the number of channels in the output image instead of defaulting to three output channels
    • Updates existing tests to verify that resizing, conversions to/from BufferedImage work for one, three, and four-channel images
    opened by smurching 5
  • Make python DeepImageFeaturizer use Scala version.

    Make python DeepImageFeaturizer use Scala version.

    • Based on the Image schema PR; do not merge until the Image schema PR is merged.
    • Otherwise mostly straightforward, except that results will not match Keras in general due to different image libraries.
    opened by tomasatdatabricks 5
  • sparkdl.xgboost getting stuck trying to map partitions

    sparkdl.xgboost getting stuck trying to map partitions

    I am running the following code to try to fit a model

    from sparkdl.xgboost import XgboostClassifier

    param = {
        'num_workers': 4,  # number of workers on the cluster, adjust as needed
        'missing': 0,
        'objective': 'binary:logistic',
        'eval_metric': 'logloss',
        'featuresCol': 'features',
        'labelCol': 'objective',
        'nthread': 32,  # equal to the number of cpus on each worker machine
    }

    train, test = data.randomSplit([0.001, 0.001])
    xgb_classifier = XgboostClassifier(**param)
    xgb_clf_model = xgb_classifier.fit(train)
    

    When I run the model training on my Databricks cluster, it seems to get stuck while trying to map partitions. It uses almost zero CPU on each worker, but the memory usage slowly increases.

    Is there anything I can do to get around this issue?

    opened by timpiperseek 0
  • Need to modify kdl/transformers/keras_applications to be able to use resnet50

    Need to modify kdl/transformers/keras_applications to be able to use resnet50

    Hi,

    Per this Stack Overflow question, one needs to modify /home/user/.local/lib/python3.8/site-packages/sparkdl/transformers/keras_applications.py. This happened on Databricks using runtime 10.3.

    You have to change

        from keras.applications import inception_v3, xception, resnet50

    to

        from keras.applications import inception_v3, xception
        from tensorflow.keras.applications import resnet50

    opened by yobdoy 1
  • Plugin Help with Spark framework

    Plugin Help with Spark framework

    Would you be willing to help me integrate this deep learning model (https://github.com/hongzimao/decima-sim) into your pipeline? How can I integrate or plug it in with your framework?

    opened by jahidhasanlinix 0
  • Necessary imports not included in setup.py

    Necessary imports not included in setup.py

    Hi,

    I'm developing a neural network using PyTorch on a non-Databricks cluster to verify its functionality prior to migrating to a Databricks cluster.

    Since I'm using PyTorch, I don't need Keras or TensorFlow. I installed Horovod and sparkdl successfully; however, when I try to run the Spark job I have found (so far) three consecutive exceptions related to missing dependencies:

        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 16, in <module>
        import keras.backend as K
      File "/opt/conda/default/lib/python3.8/site-packages/keras/__init__.py", line 21, in <module>
        from tensorflow.python import tf2
    ModuleNotFoundError: No module named 'tensorflow'
    
        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 16, in <module>
        import keras.backend as K
    ModuleNotFoundError: No module named 'keras'
    

    This third missing module is even DEPRECATED:

        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 27, in <module>
        from sparkdl.transformers.tf_image import TFImageTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/tf_image.py", line 18, in <module>
        import tensorframes as tfs
    ModuleNotFoundError: No module named 'tensorframes'
    

    On one hand, I don't understand why I should need these dependencies if I'm not going to use them. Shouldn't they be checked for and disabled instead of being required at install time?

    On the other hand, if those dependencies are unavoidable, they should be included in the setup.py script to avoid these errors and the lost time: installing Horovod packages on an ephemeral cluster takes a long time just to discover that you cannot run the program.

    I'm sure I won't have this problem on a Databricks cluster, but I cannot use one yet, and that shouldn't prevent testing HorovodRunner functionality, as stated in the warning message shown when running a program on a non-Databricks cluster.

    Kind regards

    opened by carlosfrutos 0
  • There are so many ‘spark-deep-learning’ projects

    There are so many ‘spark-deep-learning’ projects

    There are so many deep-learning-on-Spark projects, such as:

    • elephas: https://github.com/maxpumperla/elephas
    • dist-keras: https://github.com/cerndb/dist-keras
    • sparknet: https://github.com/amplab/sparknet
    • dl4j: https://github.com/deeplearning4j/dl4j-spark-ml
    • TensorFlowOnSpark: https://github.com/yahoo/TensorFlowOnSpark
    • spark-deep-learning: https://github.com/databricks/spark-deep-learning
    • H2O: https://github.com/h2oai/sparkling-water/tree/master/
    • BigDL: https://github.com/intel-analytics/BigDL
    • analytics-zoo: https://github.com/intel-analytics/analytics-zoo

    It looks like BigDL is the most active one. I want to start my deep learning on Spark using spark-deep-learning, but I'm afraid the others will become more popular than databricks/spark-deep-learning, so I'm still hesitating over which one to choose.

    opened by shuDaoNan9 1
Releases (v1.6.0)
  • v1.6.0(Jan 8, 2020)

  • v1.5.0(Jan 25, 2019)

  • v1.4.0(Nov 18, 2018)

  • v1.3.0(Nov 13, 2018)

    • Added HorovodRunner API.
    • Simplified test and doc build w/ Docker and conda.
    • Updated public Python API docs.
    • Removed persistence from DeepImageFeaturizer.
  • v1.2.0(Aug 28, 2018)

    • ignore nullable in DeepImageFeaturizer.validateSchema
    • upgrade TensorFrames version to 0.5.0
    • upgrade TensorFlow version to 1.10.0 and Keras version to 2.2.2
  • v1.1.0(Jun 18, 2018)

    • keras_image_file_estimator support both sparse and dense vectors
    • upgrade TensorFrames version to 0.4.0
    • add style checks to Travis CI
    • doc fixes
  • v1.0.0(May 1, 2018)

    This is the 1.0.0 release. It brings compatibility with newer versions of Spark (2.3) and TensorFlow (1.6+). The custom image schema formerly defined in this package has been replaced with Spark's ImageSchema, so there may be some breaking changes when updating to this version.

    Notable changes:

    • (breaking change) Using the definition of images from Spark 2.3.0. The new definition uses the BGR channel ordering for 3-channel images instead of the RGB ordering used in this project before the change.
    • Persistence for DeepImageFeaturizer (both Python and Scala).
  • v0.3.0(Jan 30, 2018)

    This is the final release of dl-pipelines prior to migrating to the new ImageSchema.

    Notable changes:

    • Added vgg16, vgg19 models to DeepImageFeaturizer/DeepImagePredictor (Python).
    • Added a Scala API for DeepImageFeaturizer (for transfer learning for images).
    • Added TFTransformer and KerasTransformer for applying TensorFlow graphs or TensorFlow-backed Keras models to a column of arrays in a Spark DataFrame.
  • v0.2.0(Oct 31, 2017)

    This is the final release for Deep Learning Pipelines 0.2.0.

    Notable additions since 0.1.0:

    • KerasImageFileEstimator API (train a Keras model on image files)
    • SQL UDF support for Keras models
    • Added Xception, Resnet50 models to DeepImageFeaturizer/DeepImagePredictor.
Owner
Databricks
Helping data teams solve the world’s toughest problems using data and AI