Deep Learning Pipelines for Apache Spark

Overview


The repo only contains HorovodRunner code for local CI and API docs. To use HorovodRunner for distributed training, please use Databricks Runtime for Machine Learning. See the Databricks documentation HorovodRunner: distributed deep learning with Horovod for details.

To use the previous release that contains the Spark Deep Learning Pipelines API, please go to the Spark Packages page.

API Documentation

class sparkdl.HorovodRunner(*, np, driver_log_verbosity='all')

Bases: object

HorovodRunner runs distributed deep learning training jobs using Horovod.

On Databricks Runtime 5.0 ML and above, it launches the Horovod job as a distributed Spark job. It makes running Horovod easy on Databricks by managing the cluster setup and integrating with Spark. Check out Databricks documentation to view end-to-end examples and performance tuning tips.

The open-source version runs the job locally inside the same Python process and is intended for local development only.

NOTE: Horovod is a distributed training framework developed by Uber.

  • Parameters

    • np - number of parallel processes to use for the Horovod job. This argument only takes effect on Databricks Runtime 5.0 ML and above. It is ignored in the open-source version. On Databricks, each process will take an available task slot, which maps to a GPU on a GPU cluster or a CPU core on a CPU cluster. Accepted values are:

      • If <0, this will spawn -np subprocesses on the driver node to run Horovod locally. Training stdout and stderr messages go to the notebook cell output, and are also available in driver logs in case the cell output is truncated. This is useful for debugging and we recommend testing your code under this mode first. However, be careful of heavy use of the Spark driver on a shared Databricks cluster. Note that np < -1 is only supported on Databricks Runtime 5.5 ML and above.
      • If >0, this will launch a Spark job with np tasks starting all together and run the Horovod job on the task nodes. It will wait until np task slots are available to launch the job. If np is greater than the total number of task slots on the cluster, the job will fail. As of Databricks Runtime 5.4 ML, training stdout and stderr messages go to the notebook cell output. In the event that the cell output is truncated, full logs are available in the stderr stream of task 0 under the second Spark job started by HorovodRunner, which you can find in the Spark UI.
      • If 0, this will use all task slots on the cluster to launch the job. Warning: setting np=0 is deprecated and will be removed in the next major Databricks Runtime release. Choosing np based on the total task slots at runtime is unreliable due to dynamic executor registration. Please set the number of parallel processes you need explicitly.
    • driver_log_verbosity - this argument is only available on Databricks Runtime.
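
For illustration, here is a minimal sketch of how the np value selects the execution mode described above (the variable names are illustrative; the behavior follows the documentation):

    from sparkdl import HorovodRunner

    # np < 0: spawn |np| subprocesses on the driver and run Horovod locally
    # (recommended for debugging before scaling out).
    hr_debug = HorovodRunner(np=-1)

    # np > 0: launch a Spark job with np tasks and run the Horovod job on the task nodes.
    hr_distributed = HorovodRunner(np=2)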

run(main, **kwargs)

Runs a Horovod training job invoking main(**kwargs).

The open-source version only invokes main(**kwargs) inside the same Python process. On Databricks Runtime 5.0 ML and above, it will launch the Horovod job based on the documented behavior of np. Both the main function and the keyword arguments are serialized using cloudpickle and distributed to cluster workers.

  • Parameters

    • main – a Python function that contains the Horovod training code. The expected signature is def main(**kwargs) or compatible forms. Because the function gets pickled and distributed to workers, please change global states inside the function, e.g., setting the logging level, and be aware of pickling limitations. Avoid referencing large objects in the function, which might result in large pickled data, making the job slow to start.

    • kwargs – keyword arguments passed to the main function at invocation time.

  • Returns

    return value of the main function. With np>=0, this returns the value from the rank 0 process. Note that the returned value should be serializable using cloudpickle.
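
For illustration, a minimal sketch of defining a training function and running it with HorovodRunner (the hvd calls follow the standard horovod.tensorflow.keras API; the model-building step is elided):

    from sparkdl import HorovodRunner

    def main(learning_rate=0.1):
        # Change global state, such as the logging level, inside the function,
        # because the function is pickled and executed on the workers.
        import logging
        logging.basicConfig(level=logging.INFO)

        import horovod.tensorflow.keras as hvd
        hvd.init()
        # ... build a Keras model and fit it, scaling learning_rate by hvd.size() ...
        return hvd.rank()

    hr = HorovodRunner(np=2)
    # main and the keyword arguments are serialized with cloudpickle;
    # with np >= 0, run() returns the value from the rank 0 process.
    result = hr.run(main, learning_rate=0.1)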

Releases

Visit the GitHub Releases page to check the release notes.

License

  • The source code is released under the Apache License 2.0 (see the LICENSE file).

Comments
  • Can't import sparkdl with spark-deep-learning-assembly-0.1.0-spark2.1.jar


    First of all, thank you for a great library!

    I tried to use sparkdl in PySpark, but couldn't import sparkdl. Detailed procedure is as follows:

    # make sparkdl jar
    build/sbt assembly
    
    # run pyspark with sparkdl
    pyspark --master local[4] --jars target/scala-2.11/spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    # import sparkdl
    import sparkdl
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named sparkdl
    

    After digging around a bit, I found that it works if I extract the jar file as follows.

    cd target/scala-2.11
    mkdir tmp
    cp spark-deep-learning-assembly-0.1.0-spark2.1.jar tmp/
    cd tmp
    jar xf spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    pyspark --jars spark-deep-learning-assembly-0.1.0-spark2.1.jar
    
    import sparkdl
    Using TensorFlow backend.
    

    Edited-1: The second method works only in the directory where the jar file was extracted.

    Best wishes, HanCheol

    opened by priancho 14
  • Porting Keras Estimator API and Reference Implementation


    What changes are proposed in this pull request?

    Creating a Spark MLlib Estimator API for Keras models, with a reference implementation. It gives a taste of how to ingest images from URIs in a DataFrame and use them to train a Keras model.

    The changes consist of these components.

    1. Extracted a few Params types for Keras Transformers/Estimators.
    2. Keras utilities
      • Serialization: model <=> hdf5 <=> bytes (for broadcast); see the sketch after this list
      • Check available Keras options (optimizers, loss functions, etc.)
    3. Keras Estimator.
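
    For illustration, a minimal sketch of the model <=> HDF5 <=> bytes round trip mentioned above (the helper names are hypothetical, not the PR's actual API):

        import os
        import tempfile

        import keras  # assumes a TensorFlow-backed Keras installation

        def model_to_bytes(model):
            # Serialize a Keras model to bytes via an HDF5 temp file (e.g., for broadcast).
            fd, path = tempfile.mkstemp(suffix='.h5')
            os.close(fd)
            try:
                model.save(path)
                with open(path, 'rb') as f:
                    return f.read()
            finally:
                os.remove(path)

        def model_from_bytes(data):
            # Restore a Keras model from bytes, e.g., on an executor.
            fd, path = tempfile.mkstemp(suffix='.h5')
            os.close(fd)
            try:
                with open(path, 'wb') as f:
                    f.write(data)
                return keras.models.load_model(path)
            finally:
                os.remove(path)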

    How is this patch tested?

    • [x] Unit tests
    • [x] Manual tests
    opened by phi-dbq 11
  • Not able to import sparkdl in jupyter notebook


    Hi,

    I am trying to use this library in a Jupyter notebook, but I am getting a "no module found" error.

    When I run the command pyspark --packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11, I am able to import sparkdl in the PySpark shell.

    How can I use it in a Jupyter notebook?
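
    One commonly used approach (a sketch, assuming the findspark package is installed) is to set PYSPARK_SUBMIT_ARGS before creating the SparkSession in the notebook:

        import os
        os.environ['PYSPARK_SUBMIT_ARGS'] = (
            '--packages databricks:spark-deep-learning:1.5.0-spark2.4-s_2.11 pyspark-shell'
        )

        import findspark
        findspark.init()  # makes the local pyspark installation importable

        from pyspark.sql import SparkSession
        spark = SparkSession.builder.master('local[4]').getOrCreate()

        import sparkdl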

    opened by yashwanthmadaka24 7
  • Support and build against Keras 2.2.2 and TF 1.10.0


    • bump spark version to 2.3.1
    • bump tensorframes version to 0.4.0
    • bump keras==2.2.2 and tensorflow==1.10.0 to fix Travis issues
    • TF_C_API_GRAPH_CONSTRUCTION added as a temp fix
    • Drop support for Spark <2.3 and hence Scala 2.10
    • add python3 friendly print
    • add pooling='avg' in the resnet50 testing model because the Keras API changed
    • test arrays as almost equal with precision 5 in NamedImageTransformerBaseTestCase, test_bare_keras_module, keras_load_and_preproc
    • make keras model smaller in test_simple_keras_udf

    This is a continued work from https://github.com/databricks/spark-deep-learning/pull/149.

    opened by lu-wang-dl 6
  • Fix KerasImageFileEstimator for model tuning


    • add test for KerasImageFileEstimator used with CrossValidator
    • fix pickling issue
    • use new fitMultiple api and ensure thread safety
    • fix tests to reflect new api
    • bugfix setDefault
    • bugfix HasOutputMode
    • remove _validateParams
    • avoid testing KIFT functionality in KIFEst tests
    opened by yogeshg 6
  • Replace sparkdl's ImageSchema with Spark 2.3's version


    Use Spark 2.3's ImageSchema as the image interface.

    • The biggest change is using the opposite ordering of color channels (BGR instead of RGB), which requires extra reordering in a couple of places; see the sketch below.
    • Preserved the ability to read and resize images in Python using PIL to match Keras (resize gives different results, and reading JPEGs also produced images that were off by 1 on some green pixels).
    • Needed a few tweaks to run with Spark 2.3; notably, UDFs are now referenced by SQL identifier and cannot have a dash in the name.

    [TODO] - In order to run on Spark < 2.3, the image schema files have been copied here; they need to be removed in the future.
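
    For illustration, the extra reordering amounts to reversing the channel axis of the image array; a minimal sketch (an assumption about the approach, not the PR's actual code):

        import numpy as np

        def bgr_to_rgb(image):
            # Reverse the last (channel) axis of an H x W x 3 array; the same op maps RGB -> BGR.
            return image[..., ::-1]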

    opened by tomasatdatabricks 6
  • TensorFlow Graph Transformer Part-1: Params and Converters


    This is the first part of PRs from the TF Transformer API design POC PR.

    It introduces parameters needed for the TFTransformer (minus those for TFInputGraph) and corresponding type converters.

    • Introducing MLlib model Params for TFTransformer.
    • Type conversion utilities for the introduced MLlib Params used in Spark Deep Learning Pipelines.
      • We follow the MLlib convention of calling these utilities "converters", but most of them act as type checkers that return the argument if it is of the desired type and raise a TypeError otherwise.
      • We follow the practice that a type converter only returns a single type.
    opened by phi-dbq 6
  • Add style checks and refactor suggestions


    In this PR, we

    • add python/.pylint/suggested.rc adapted from the default configuration generated by pylint
    • allow both camelCase and snake_case using regexes lifted from pylint source code
    • increase thresholds for the number of arguments and local variables
    • disable checks that fire often in this project: unused-argument, too-many-arguments, no-member, missing-docstring, no-init, protected-access, misplaced-comparison-constant, no-else-return, fixme
    • escape some code with # pylint: disable=... because it was hard to refactor without thorough testing

    Some style decisions that were discussed are:

    • disables are acceptable if there is no other way to do this, in which case a comment must be left explaining that
    • other disables should be removed and should be considered similar to todos
    • we allow TODO marks in code because these are acceptable for this project and should be taken care of in the future
    • there are currently 50 TODOs, FIXMEs, or pylint disables; we should aim to bring this number down. To count them:

        find python/sparkdl | grep ".*\.py$" | xargs egrep -ino --color=auto "(TODO|FIXME|# pylint).*"
    • function calls and function definitions that span more than one line are left to the committer's and reviewer's discretion
      • pep8 style:
      long_function_name(
          long_argument_one = "one",
          long_argument_two = "two",
          long_argument_three = "three",
          long_argument_four = "four",
          long_argument_five = "five")
      
      • MLlib style:
      long_function_name(
          long_argument_one = "one", long_argument_two = "two", long_argument_three = "three",
          long_argument_four = "four", long_argument_five = "five")
      
    opened by yogeshg 5
  • Fix bug in conversions from row image to/from BufferedImage


    Fix a bug in conversions to/from BufferedImage in which we copied raw byte data to BufferedImage rasters using the wrong channel ordering. The fix in this PR is to use the BufferedImage.setRGB and BufferedImage.getRGB APIs instead of accessing image raster data directly for three- and four-channel images.

    Also enhanced an existing unit test to verify that we correctly convert from row image to BufferedImage for one- and four-channel images.

    opened by smurching 5
  • Update ImageUtils to support resizing one, three, or four channel images


    This PR:

    • Updates conversions between row images and BufferedImage (spImageToBufferedImage and spImageFromBufferedImage) to support one-, three-, and four-channel images
    • Updates resizeImage to use the tgtChannels parameter to determine the number of channels in the output image instead of defaulting to three output channels
    • Updates existing tests to verify that resizing, conversions to/from BufferedImage work for one, three, and four-channel images
    opened by smurching 5
  • Make python DeepImageFeaturizer use Scala version.


    • Based on the Image schema PR; do not merge until the Image schema PR is merged.
    • Otherwise mostly straightforward, except results will not match Keras in general due to different image libraries.
    opened by tomasatdatabricks 5
  • sparkdl.xgboost getting stuck trying to map partitions


    I am running the following code to try to fit a model:

        from sparkdl.xgboost import XgboostClassifier

        param = {
            'num_workers': 4,  # number of workers on the cluster, adjust as needed
            'missing': 0,
            'objective': 'binary:logistic',
            'eval_metric': 'logloss',
            'featuresCol': 'features',
            'labelCol': 'objective',
            'nthread': 32,  # equal to the number of CPUs on each worker machine
        }

        train, test = data.randomSplit([0.001, 0.001])
        xgb_classifier = XgboostClassifier(**param)
        xgb_clf_model = xgb_classifier.fit(train)
    

    When I run the model training on my Databricks cluster, it seems to get stuck while trying to map partitions. It is using almost zero CPU on each node, but the memory usage is slowly increasing.


    Is there anything I can do to get around this issue?

    opened by timpiperseek 0
  • Need to modify sparkdl/transformers/keras_applications to be able to use resnet50


    Hi,

    Per this Stack Overflow question, one needs to modify /home/user/.local/lib/python3.8/site-packages/sparkdl/transformers/keras_applications.py. This happened in Databricks using runtime v10.3.

    You have to change

        from keras.applications import inception_v3, xception, resnet50

    to

        from keras.applications import inception_v3, xception
        from tensorflow.keras.applications import resnet50

    opened by yobdoy 1
  • Plugin Help with Spark framework


    https://github.com/hongzimao/decima-sim - would you like to help me integrate this deep learning model into your pipeline? How can I integrate or plug it into your framework?

    opened by jahidhasanlinix 0
  • Necessary imports not included in setup.py


    Hi,

    I'm developing a neural network using PyTorch on a non-Databricks cluster to verify its functionality prior to migrating to a Databricks cluster.

    Since I'm using PyTorch, I don't need Keras or TensorFlow. I installed Horovod and sparkdl successfully; however, when I try to run the Spark process, I have found (so far) three consecutive exceptions related to missing dependencies:

        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 16, in <module>
        import keras.backend as K
      File "/opt/conda/default/lib/python3.8/site-packages/keras/__init__.py", line 21, in <module>
        from tensorflow.python import tf2
    ModuleNotFoundError: No module named 'tensorflow'
    
        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 16, in <module>
        import keras.backend as K
    ModuleNotFoundError: No module named 'keras'
    

    This one is DEPRECATED!!:

        from sparkdl import HorovodRunner
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/__init__.py", line 17, in <module>
        from sparkdl.transformers.keras_image import KerasImageFileTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/keras_image.py", line 27, in <module>
        from sparkdl.transformers.tf_image import TFImageTransformer
      File "/opt/conda/default/lib/python3.8/site-packages/sparkdl/transformers/tf_image.py", line 18, in <module>
        import tensorframes as tfs
    ModuleNotFoundError: No module named 'tensorframes'
    

    On one hand, I don't understand why I should need these dependencies if I'm not going to use them... Shouldn't they be checked for and disabled instead of forcing them to be installed?

    On the other hand, if those dependencies are unavoidable, they should be included in the setup.py script to avoid these errors and the lost time, since installing Horovod packages on an ephemeral cluster takes a long time just to discover that you cannot run the program...

    I'm sure I won't have this problem on a Databricks cluster, but I cannot use one yet, and that shouldn't prevent testing HorovodRunner functionality, as stated in the warning message shown when running a program on a non-Databricks cluster...

    Kind regards

    opened by carlosfrutos 0
  • I find so many 'spark-deep-learning' projects


    I have found so many 'spark-deep-learning' projects, such as:

    • elephas: https://github.com/maxpumperla/elephas
    • dist-keras: https://github.com/cerndb/dist-keras
    • sparknet: https://github.com/amplab/sparknet
    • dl4j: https://github.com/deeplearning4j/dl4j-spark-ml
    • TensorFlowOnSpark: https://github.com/yahoo/TensorFlowOnSpark
    • spark-deep-learning: https://github.com/databricks/spark-deep-learning
    • H2O: https://github.com/h2oai/sparkling-water/tree/master/
    • BigDL: https://github.com/intel-analytics/BigDL
    • analytics-zoo: https://github.com/intel-analytics/analytics-zoo

    It looks like BigDL is the most active one. I want to start my deep learning on Spark using spark-deep-learning, but I am afraid the others will become more popular than databricks/spark-deep-learning, so I am still hesitating over which one to choose.

    opened by shuDaoNan9 1
Releases (v1.6.0)
  • v1.6.0(Jan 8, 2020)

  • v1.5.0(Jan 25, 2019)

  • v1.4.0(Nov 18, 2018)

  • v1.3.0(Nov 13, 2018)

    • Added HorovodRunner API.
    • Simplified test and doc build w/ Docker and conda.
    • Updated public Python API docs.
    • Removed persistence from DeepImageFeaturizer.
  • v1.2.0(Aug 28, 2018)

    • ignore nullable in DeepImageFeaturizer.validateSchema
    • upgrade TensorFrames version to 0.5.0
    • upgrade TensorFlow version to 1.10.0 and Keras version to 2.2.2
  • v1.1.0(Jun 18, 2018)

    • keras_image_file_estimator supports both sparse and dense vectors
    • upgrade TensorFrames version to 0.4.0
    • add style checks to Travis CI
    • doc fixes
  • v1.0.0(May 1, 2018)

    This is the 1.0.0 release. It brings compatibility with newer versions of Spark (2.3) and TensorFlow (1.6+). The custom image schema formerly defined in this package has been replaced with Spark's ImageSchema, so there may be some breaking changes when updating to this version.

    Notable changes:

    • (breaking change) Using the definition of images from Spark 2.3.0. The new definition uses the BGR channel ordering for 3-channel images instead of the RGB ordering used in this project before the change.
    • Persistence for DeepImageFeaturizer (both Python and Scala).
  • v0.3.0(Jan 30, 2018)

    This is the final release of dl-pipelines prior to migrating to the new ImageSchema.

    Notable changes:

    • Added vgg16, vgg19 models to DeepImageFeaturizer/DeepImagePredictor (Python).
    • Added a Scala API for DeepImageFeaturizer (for transfer learning for images).
    • Added TFTransformer and KerasTransformer for applying TensorFlow graphs or TensorFlow-backed Keras models to a column of arrays in a Spark DataFrame.
  • v0.2.0(Oct 31, 2017)

    This is the final release for Deep Learning Pipelines 0.2.0.

    Notable additions since 0.1.0:

    • KerasImageFileEstimator API (train a Keras model on image files)
    • SQL UDF support for Keras models
    • Added Xception, Resnet50 models to DeepImageFeaturizer/DeepImagePredictor.
Owner
Databricks
Helping data teams solve the world’s toughest problems using data and AI