A flexible framework of neural networks for deep learning

Overview

Chainer: A deep learning framework


Website | Docs | Install Guide | Tutorials (ja) | Examples (Official, External) | Concepts | ChainerX

Forum (en, ja) | Slack invitation (en, ja) | Twitter (en, ja)

Chainer is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (a.k.a. dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN using CuPy for high-performance training and inference. For more details about Chainer, see the documentation and resources listed above, and join the community on the forum, Slack, and Twitter.
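
For a minimal taste of the define-by-run style, here is a small, self-contained sketch using standard Chainer APIs:

    import numpy as np
    import chainer
    import chainer.functions as F

    # The computational graph is built on the fly as ordinary Python executes.
    x = chainer.Variable(np.array([1.0, 2.0], dtype=np.float32))
    y = F.sum(x * x + 2 * x)
    y.backward()
    print(x.grad)  # -> [4. 6.], since dy/dx = 2x + 2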

Notice: As announced, Chainer is under the maintenance phase and further development will be limited to bug-fixes and maintenance only.

Installation

For more details, see the installation guide.

To install Chainer, use pip.

$ pip install chainer

To enable CUDA support, CuPy is required. Refer to the CuPy installation guide.
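
For example, if your environment uses CUDA 10.2, the corresponding wheel would be installed as follows (the exact package name depends on your CUDA version):

$ pip install cupy-cuda102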

Docker image

We provide the official Docker image. This image supports nvidia-docker. Log in to the environment with the following command, and run the Python interpreter to use Chainer with CUDA and cuDNN support.

$ nvidia-docker run -it chainer/chainer /bin/bash

Contribution

See the contribution guide.

ChainerX

See the ChainerX documentation.

License

MIT License (see LICENSE file).

More information

References

Tokui, Seiya, et al. "Chainer: A Deep Learning Framework for Accelerating the Research Cycle." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2019. URL BibTex

Tokui, S., Oono, K., Hido, S. and Clayton, J., "Chainer: a Next-Generation Open Source Framework for Deep Learning." Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015. URL BibTex

Akiba, T., Fukuda, K. and Suzuki, S., "ChainerMN: Scalable Distributed Deep Learning Framework." Proceedings of Workshop on ML Systems in The Thirty-first Annual Conference on Neural Information Processing Systems (NIPS), 2017. URL BibTex

Comments
  • Support dropconnect

    Support dropconnect

    This PR is a proposal for supporting dropconnect. Dropconnect is a generalization of dropout: it drops weight elements instead of input elements.

    Details are here: http://cs.nyu.edu/~wanli/dropc/dropc.pdf
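
    For illustration, here is a minimal NumPy sketch of the idea (the function and parameter names are illustrative, not the API proposed in this PR):

    import numpy as np

    def dropconnect_linear(x, W, drop_ratio=0.5):
        # Dropconnect: drop elements of the weight matrix,
        # instead of elements of the input as in dropout.
        mask = np.random.rand(*W.shape) >= drop_ratio
        return x.dot((W * mask).T)

    x = np.random.rand(8, 32).astype(np.float32)
    W = np.random.rand(16, 32).astype(np.float32)
    y = dropconnect_linear(x, W)  # shape (8, 16)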

    cat:feature 
    opened by fukatani 52
  • Use intermediate dtype in `F.mean_absolute_error` for FP16

    Use intermediate dtype in `F.mean_absolute_error` for FP16

    Close #6702.

    This PR fixes F.mean_absolute_error to use an intermediate dtype for FP16 inputs. numpy.mean is not used because old NumPy versions do not use extra precision for FP16.

    https://docs.scipy.org/doc/numpy-1.9.2/reference/generated/numpy.mean.html https://docs.scipy.org/doc/numpy-1.16.1/reference/generated/numpy.mean.html
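
    A minimal NumPy sketch of the approach (not the exact patch):

    import numpy as np

    def mean_absolute_error_fp16(x0, x1):
        # Compute in float32 to avoid FP16 precision loss, then cast back.
        diff = x0.astype(np.float32) - x1.astype(np.float32)
        return np.abs(diff).mean().astype(np.float16)

    a = np.random.rand(10000).astype(np.float16)
    b = np.random.rand(10000).astype(np.float16)
    print(mean_absolute_error_fp16(a, b))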

    cat:enhancement st:test-and-merge 
    opened by takagi 49
  • CuPy/ChainerX memory pool sharing

    CuPy/ChainerX memory pool sharing

    Currently, CuPy and ChainerX both implement and use their own memory pools, which leads to poor utilization of the available GPU memory. This PR introduces a mechanism to share the same pool between the two in order to avoid this.

    Requires ~https://github.com/cupy/cupy/pull/1883~ ~https://github.com/cupy/cupy/pull/1904~

    cat:feature st:test-and-merge 
    opened by hvy 49
  • Fast `IndexIterator` for ChainerX CUDA

    Fast `IndexIterator` for ChainerX CUDA

    Thanks to @asi1024 @shinh

    The ChainerX indexer used pretty expensive int64 division and modulo operations when calculating array indexes on CUDA.

    This was noticeable when arrays were not contiguous, severely affecting the execution time of even simple kernels such as elementwise ones.

    This PR replaces the index-calculation code with the same approach as CuPy.

    In the following benchmark, the time for ChainerX is reduced from 0.70 s to 0.27 s.

    import time

    import numpy as np
    import cupy
    import chainerx as chx

    def test_bench(name, xp, alloc_fn):
        np.random.seed(42)
        x = np.random.rand(800, 389, 2, 66).astype(np.float32)
        y = np.random.rand(800, 389, 2, 66).astype(np.float32)
        # Transfer to the device, then swap axes so the arrays are
        # non-contiguous (the case this PR speeds up).
        a = np.swapaxes(alloc_fn(x), 2, 3)
        b = np.swapaxes(alloc_fn(y), 2, 3)

        # Warm-up iteration.
        out = xp.multiply(a, b)

        cupy.cuda.device.Device().synchronize()
        start = time.time()
        for i in range(400):
            out = xp.multiply(a, b)
        cupy.cuda.device.Device().synchronize()
        total = time.time() - start
        print(name, total)

    test_bench('cupy', cupy, lambda x: cupy.array(x))
    test_bench('chainerx', chx, lambda x: chx.array(x, device='cuda:0'))
    
    ChainerX cat:performance 
    opened by emcastillo 48
  • ChainerX needs to support more routines

    ChainerX needs to support more routines

    Update: We've introduced an Op registration and dispatch mechanism. In practice, this means that Device methods will be replaced by Op implementations, e.g. Device::Arange -> ArangeOp : public Op. The description below will soon be updated to take this into account.

    class FuncOp : public Op {
    public:
        static const char* name() { return "Func"; }
        // Call is overridden per device. Does not create a graph but merely performs device computations such as calling a kernel in case of CUDA.
        // Call can have any signature.
        virtual void Call(..., const Array& out) = 0;  
        // Another definition would be
        // virtual Array Call(..., const nonstd::optional<Array>& out) = 0;  
    };
    
    class CudaFuncOp : public FuncOp {
    // Override Call.
    };
    
    CHAINERX_REGISTER_OP_CUDA(FuncOp, CudaFuncOp);  // Allows backend.CallOp<FuncOp>(...);
    
    Array Func(...) {  // A routine called `Func`.
        Array out = ....;
        {
            NoBackpropModeScope scope{};
        device.backend().CallOp<FuncOp>(..., out);
        }
        // Create graph.
        return out;
    }
    
    

    The current set of backpropable operations, or "routines", in ChainerX is still limited. We'd like to open this as a "contribution-welcome"-labeled issue for any contributor to introduce new routines and take part in the early development of ChainerX.

    References

    Implementing routines

    What kinds of routines are missing?

    Routines that need to be implemented probably fall into either of the following two categories.

    • NumPy compatible {numpy,chainerx} functions and {chainerx,numpy}.ndarray methods.
    • Deep learning routines such as convolutions, pooling, RNN-type routines, etc.

    Please make sure that it's not already implemented by checking the list of available routines.

    How do you start implementing routines?

    1. Make sure you can build ChainerX. Instructions here.

    2. If you are unsure which routine to start working on, refer to this list or create an issue suggesting or asking for one. Some routines will require a device implementation (for each backend, i.e. native and CUDA), while for others it might be sufficient to use existing device methods. The latter are easier to work on, but it might not be obvious at first which routines they apply to unless you know the implementation details beforehand and what device methods are already available (see the list above).

    3. Implement the routine.

    • Check if the routine is temporarily made available via the NumPy/CuPy fallback mechanism. If it is, delete the fallback.
    • Declare a routine interface.
    • Define forward pass using device methods. If device methods are missing, implement them.
    • Define backward pass using chainerx::BackwardBuilder.
    • Declare the routine as a chainerx::Array method if appropriate.
    • Write Python bindings and tests using test utilities.
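
    As a quick sanity check after implementing a routine, you can compare it against NumPy from Python. A minimal sketch, using chainerx.sum as the example routine:

    import numpy as np
    import chainerx as chx

    x_np = np.arange(6, dtype=np.float32).reshape(2, 3)
    x = chx.array(x_np)
    # The ChainerX routine should match its NumPy counterpart.
    np.testing.assert_allclose(chx.to_numpy(chx.sum(x)), x_np.sum())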

    Getting familiar with the ChainerX code base

    Here are some starting points.

    To get familiar with the C++ code base.

    Array (with autograd)

    Routines

    • chainerx/routines : Defines "routines", i.e. forward/backward operations on the Array such as taking the sum or applying a convolution.
    • chainerx::BackwardBuilder: Extends the computation graph and is used by routines.
    • chainerx::Device: A device interface with operations on arrays. The device interface is currently implemented by chainerx::native::NativeDevice and chainerx::cuda::CudaDevice. A routine delegates the actual computation to these devices. Note that these operations only operate on the raw data and should not involve any graph operations (this might change).
    • chainerx/native: Contains native implementations including chainerx::native::NativeDevice.
    • chainerx/cuda: Contains CUDA implementations including chainerx::cuda::CudaDevice.

    Graph

    • chainerx::ArrayNode: A node representing an array in the computational graph. It is owned by a chainerx::ArrayBody.
    • chainerx::OpNode: A node representing an operation in the computational graph.

    Other

    • chainerx::Context: Manages the runtime state. A context has backends, which have devices.
    • Unit tests are written next to the source files being tested, i.e. chainerx/routines/logic.h is tested by chainerx/routines/logic_test.cc. You can take a look at the routine tests to see how arrays are used.
    • Python bindings are created with pybind11.
    • ChainerX C++ MNIST example.

    Please note that the descriptions above may change as ChainerX is being developed.

    Coding style

    Please refer to https://github.com/chainer/chainer/issues/5778

    Ongoing / Status

    NOT UPDATED (since there are more PRs than expected and it's difficult to maintain the status here)

    • [x] Minimum https://github.com/chainer/chainer/pull/6477 (Missing Python bindings and tests. Open for contribution)
    • [x] Array-Array Minimum https://github.com/chainer/chainer/pull/6541
    • [ ] Array-Array Maximum https://github.com/chainer/chainer/pull/6570
    • [x] Sigmoid https://github.com/chainer/chainer/pull/6472
    • [x] Square https://github.com/chainer/chainer/pull/6486
    • [x] Dot for ndim > 2 https://github.com/chainer/chainer/pull/6476
    • [ ] Power https://github.com/chainer/chainer/pull/6496
    • [x] SquaredDifference https://github.com/chainer/chainer/pull/6501
    • [ ] Pad https://github.com/chainer/chainer/pull/6597/
    • [x] Sin, Cos https://github.com/chainer/chainer/pull/6601
    • [x] ArgMin ~https://github.com/chainer/chainer/pull/6650~ https://github.com/chainer/chainer/pull/6740
    • [ ] Meshgrid https://github.com/chainer/chainer/pull/6668
    • [x] Ceil https://github.com/chainer/chainer/pull/6705
    • [x] Floor https://github.com/chainer/chainer/pull/6707
    • [x] Tan,ArcSin,ArcCos,ArcTan https://github.com/chainer/chainer/pull/6703
    • [ ] Min/AMin https://github.com/chainer/chainer/pull/6752
    cat:feature stale ChainerX 
    opened by hvy 41
  • Add automatic management of snapshots (deletion and load)

    Add automatic management of snapshots (deletion and load)

    This branch adds two new features to the snapshot system of Chainer. The first feature is automatic deletion of old snapshot files during training. This is done by adding a post-save hook interface to snapshot writers: if the number of snapshots to retain is set to a positive integer, the snapshot writers automatically list all files in the Trainer output directory and match them against the filename format. One limitation is that the automatic matching only works for integers, not for arbitrary formatting of objects or strings. For example, if the filename format is defined as snapshot_iter_{.updater.iteration}, the automatic matching picks up files named snapshot_iter_10, snapshot_iter_12000, and so on.

    The other feature is automatic loading of the snapshot target at extension initialization. If the autoload option is set to True, the extension scans the Trainer output directory for files matching the filename format. For example, if files named snapshot_iter_10 and snapshot_iter_12000 are found at startup, it chooses the latter as the latest snapshot and loads its data into the target.
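
    From user code this would look roughly as follows (a sketch, assuming the options land as n_retains and autoload, and given a trainer built as usual):

    from chainer.training import extensions

    trainer.extend(extensions.snapshot(
        filename='snapshot_iter_{.updater.iteration}',
        n_retains=5,     # delete all but the five newest matching snapshots
        autoload=True))  # load the latest matching snapshot at startup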

    Note: this depends on #6762 to prevent conflicts. Also, this is an improved version of #6531.

    cat:feature to-be-backported st:test-and-merge prio:high 
    opened by kuenishi 40
  • Trainer2

    Trainer2

    Fix #914. I wrote an updated version of the training loop abstraction. It includes many improvements based on feedback on the old version (#958). For example:

    • The design around the dataset abstraction is improved; most conceivable applications can be supported with some customization effort.
    • The new Trainer supports using multiple datasets and multiple optimizers.
    • Evaluation reporting is abstracted into a Reporter object, which makes it easy to collect many observable values like loss/accuracy, activation statistics, etc.

    I have not updated the tutorial document yet; that should be done before merging.
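
    For reference, a minimal training loop with this abstraction looks roughly like the following (a sketch using the API names the design eventually settled on):

    import chainer
    from chainer import training
    from chainer.training import extensions

    model = chainer.links.Classifier(chainer.links.Linear(784, 10))
    optimizer = chainer.optimizers.SGD()
    optimizer.setup(model)

    train, _ = chainer.datasets.get_mnist()
    iterator = chainer.iterators.SerialIterator(train, batch_size=100)

    updater = training.StandardUpdater(iterator, optimizer)
    trainer = training.Trainer(updater, (1, 'epoch'), out='result')
    trainer.extend(extensions.LogReport())  # observations are collected via Reporter
    trainer.run()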

    cat:feature 
    opened by beam2d 40
  • Use numbers for input check in `roi_{average|max}_{pooling|align}_2d.py`

    Use numbers for input check in `roi_{average|max}_{pooling|align}_2d.py`

    ~Merge after #5634 and #5635.~

    Use numpy.issubdtype instead of isinstance to check function arguments. numpy.issubdtype handles int and float as well as numpy.integer and numpy.floating.
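
    For example, isinstance rejects the NumPy scalar types that users commonly pass, while numpy.issubdtype accepts both:

    import numpy as np

    # isinstance(np.int64(2), int) is False on most platforms.
    for v in (2, np.int64(2)):
        assert np.issubdtype(type(v), np.integer)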

    cat:enhancement st:test-and-merge 
    opened by knorth55 39
  • Update Comparison with Other Frameworks

    Update Comparison with Other Frameworks

    As promised in #2685 I have updated the framework comparison table with (almost?) every actively developed deep learning framework and several new axes of comparison. Let me know if anything seems inaccurate or irrelevant.

    cat:document 
    opened by jekbradbury 39
  • Hide FunctionNode classes from `chainer.functions` namespace

    Hide FunctionNode classes from `chainer.functions` namespace

    This PR hides all FunctionNode implementations from functions/__init__.py, since these are considered implementation details and should not "be visible" to the user.

    Some exceptions are FunctionNodes with state that is exposed through APIs, such as the indices for the max pooling classes and the random states in dropout and Gaussian noise.

    Tests are modified accordingly so as not to rely on these classes directly. There are a few exceptions, including the ones above.

    • [x] Remove FunctionNode imports from functions/__init__
    • [x] Do not use FunctionNodes classes directly in tests if possible
    • ~[ ] Remove FunctionNode's __init__'s default arguments~ (Maybe better done in a separate PR)

    Note: Reviewing commit by commit is probably much easier than looking at the total diff.
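
    For context, a small sketch of the distinction: users call the function-level API, and the FunctionNode behind it stays an implementation detail.

    import numpy as np
    import chainer.functions as F

    x = np.random.rand(4, 3).astype(np.float32)
    y = F.relu(x)  # public, function-level API
    # y.creator is the underlying FunctionNode; code and tests should not
    # need to reference its class directly.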

    cat:enhancement no-compat st:test-and-merge 
    opened by hvy 38
  • Sparse matmul support

    Sparse matmul support

    This PR aims to support sparse matmul in Chainer (this is related to https://github.com/chainer/chainer/issues/4377 and https://github.com/pfnet-research/chainer-chemistry/pull/90).

    I implemented a function named sparse_matmul, which computes the matrix multiplication of a sparse matrix and a dense matrix. The usage of this function is as follows (assuming a and b are matrices or 3D tensors).

    sp_a = F.sparse_dense2coo(a)  # convert the dense matrix a to COO sparse format
    c = F.sparse_matmul(sp_a, b)  # sparse x dense matrix multiplication
    

    You can also use this function for batched sparse matrix multiplication (actually this is my main focus), like matmul. It supports backward and double-backward, so you can compute gradients of the sparse and dense matrices, and gradients of gradients as well.

    Please note that the CPU version is not implemented yet, because I don't have a good idea for an efficient CPU implementation of batched sparse matrix multiplication using NumPy or SciPy.
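
    As a dense NumPy reference for what the batched case computes (shapes are illustrative):

    import numpy as np

    a = np.random.rand(4, 5, 6).astype(np.float32)
    b = np.random.rand(4, 6, 7).astype(np.float32)
    # c[i] = a[i] @ b[i]; sparse_matmul computes this with a held in COO format.
    c = np.einsum('bij,bjk->bik', a, b)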

    cat:feature 
    opened by anaruse 38
  • Porting To SYCL

    Porting To SYCL

    Are you interested in having a [SYCL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/training/dpc-essentials.html#gs.bnjiaf) port of Chainer as a new backend?

    With a SYCL backend, we'd like to extend the existing functionality of Chainer by enabling applications to leverage the multi-core accelerator devices of the NVIDIA, AMD, and Intel vendor platforms.

    opened by ysantoshkumar 0
  • cupy version constraints

    cupy version constraints

    Hello! I've noticed that Chainer checks pkg-resources at runtime and also enforces cupy<8.0.0. The latest release of CuPy is 10.3.1, which is three major releases away. Of course I can just patch _version.py locally, but I was wondering:

    • if there is a fundamental reason to require cupy<8.0.0,
    • if you would consider upstreaming the relaxed constraints,
    • if you would be interested in moving the version check from import-time to install-time? I understand that optional dependencies and pip/setuptools don't go together too well, though

    Environment info that the issue template asks for: https://gist.github.com/faf251713723b851b48c3dc5cd2cc4a9

    Thank you!

    issue-checked 
    opened by SomeoneSerge 1
  • Incorrect reading of MNIST Images

    Incorrect reading of MNIST Images

    This may be considered a minor quibble by most, but I do want to be accurate when using the MNIST dataset.

    The documentation for chainer.datasets.get_mnist(...) implies that the 0-to-1 scaled values are derived by just dividing the raw pixel value by 255. However, from the MNIST site (under "FILE FORMATS FOR THE MNIST DATABASE"):

    Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).

    The same goes for the testing images. So, I believe the values output by get_mnist(...) should be inverted. Actually, it would be best to provide an option to invert them, since one may need to compare results against other research that used the wrong method. I know there are workarounds for this, but this is technically a bug.
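
    For reference, a minimal sketch of the workaround (inverting the scaled values returned by get_mnist):

    import chainer

    train, test = chainer.datasets.get_mnist()
    x, t = train[0]
    x_inverted = 1.0 - x  # invert the 0-to-1 scaled pixel values as suggested above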

    stale issue-checked 
    opened by benvcutilli 1
  • cupy.cuda.cudnn.CuDNNError: cuDNN Error: CUDNN_STATUS_BAD_PARAM

    cupy.cuda.cudnn.CuDNNError: cuDNN Error: CUDNN_STATUS_BAD_PARAM

    I'm trying to run the lda2vec algorithm from lda2vec with my own data (one .csv file with 3 columns: idTweet, dataTweet, and textTweet). I tried to run this example in Colab; when I run lda2vec_run.py I get this error (screenshot attached).

    • Chainer version: 7.7.0
    • CuPy version: 7.8.0
    • OS/Platform: Windows 10

    How can I fix this error? Please, I need help.
    stale issue-checked 
    opened by fathia-ghribi 1
  • Chainer v7 Release Schedule

    Chainer v7 Release Schedule

    See https://github.com/cupy/cupy/issues/3627 for CuPy release tasks.

    Timeline

    • [x] 2020-07-30: v7.7.0
    • [x] 2021-06-10: v7.8.0

    The code will be frozen 2 days before the release date for QA. All dates are in JST and subject to change.

    Chainer v7.8.0

    Chainer v7.7.0

    • [x] https://github.com/chainer/chainer/issues/8573

    See #8565 for the previous release tasks (v7.6.0)

    issue-checked 
    opened by asi1024 6
Releases(v7.8.1.post1)
  • v7.8.1.post1(Jun 29, 2022)

  • v7.8.1(Jan 5, 2022)

    This is the release note of v7.8.1. See here for the complete list of solved issues and merged PRs.

    This minor release allows importing Chainer in a CuPy v10+ environment. Note that we still encourage Chainer v7 users to stay with CuPy v7.8.0 & CUDA 10.2 or earlier & cuDNN v7.6 if you don't have strong reasons to upgrade. A warning message will be shown if you run Chainer v7 with CuPy v8 or later, but you can disable it by setting the CHAINER_WARN_VERSION_MISMATCH=0 environment variable.

    As announced previously, Chainer is under the maintenance phase. There are no further planned releases for Chainer v7 series.

    Enhancements

    • Prevent importing chainer from immediately raising an error with cupy>=10 (#8616)

    Documentation

    • Pin NumPy version to the latest supported (#8617)
    Source code(tar.gz)
    Source code(zip)
  • v7.8.0(Jun 10, 2021)

    This is the release note of v7.8.0. See here for the complete list of solved issues and merged PRs.

    For those who need to run Chainer on CUDA 11.1+, this release provides "limited" support for CuPy v8/v9. We confirmed that basic tests and examples run fine, but we still encourage Chainer v7 users to stay with CuPy v7.8.0 & CUDA 10.2 or earlier & cuDNN v7.6 if you don't have strong reasons to upgrade. A warning message will be shown if you run Chainer v7 with CuPy v8 or later, but you can disable it by setting the new CHAINER_WARN_VERSION_MISMATCH=0 environment variable. Please also understand that CuPy v10 is not compatible with Chainer.

    As announced previously, Chainer is under the maintenance phase. There are no further planned releases for Chainer v7 series.

    Enhancements

    • Add CHAINER_WARN_VERSION_MISMATCH environment variable (#8588)
    • Support importing cuDNN in CuPy v8 (#8590)

    Bug Fixes

    • Fix chainer.testing requiring pytest installed (#8611)

    Code Fixes

    • Import cupy.cuda.cudnn first to show preload warning (#8605)
    • Fix random for CuPy v8/v9 (#8606)

    Documentation

    • Fix simple typo, cotiguousness -> contiguousness (#8595, thanks @timgates42!)
    • Add CuPy version recommendation (#8608)

    Installation

    • Add detection of cupy-cuda110 (#8580)
    • Add recent CuPy packages (#8609)

    Examples

    • Update Optuna examples (#8597)

    Tests

    • Update [jenkins] requirement (#8585)
    • Ignore CuPy deprecation warnings in tests (#8589)
    • Use Python 3.7 in ReadTheDocs (#8591)
    • Use stable CuPy v7 in ONNX test (#8592)
    • Avoid pytest.PytestUnknownMarkWarning (#8599)
    • Remove travis for macOS (#8600)
    • Fix broken skip condition (#8607)
    • Fix debug print tests (#8610)

    Others

    • Mitigate breaking changes in CuPy v8 (#8583)
    • Ignore NumPy 1.20 deprecations (#8598)
    Source code(tar.gz)
    Source code(zip)
  • v7.0.0.post1(Jan 13, 2021)

  • v7.7.0(Jul 30, 2020)

    This is the release note of v7.7.0. See here for the complete list of solved issues and merged PRs.

    As announced previously, Chainer has reduced the release frequency from monthly to once every two months if there are changes that justify the release. We have decided to skip v7.5.0 and v7.6.0 in order to keep the Chainer version up to date with CuPy’s most recent release.

    Bug Fixes

    • Add support for spawn and forkserver start method in PickleDataset (#8465, thanks @zaltoprofen!)
    • Fix array indexing in create_multi_node_evaluator (#8568)

    Documentation

    • Fix Reporter example (#8561)
    • Add message about maintenance phase (#8567)

    Tests

    • Fix Travis macOS failure (#8562)
    • Fix onnxruntime version for CI failure (#8564)
    • Fix Chainer CIs (#8569)
    • Use v7 for base branch detection (#8570)
    • Install CuPy v7 for ChainerX Jenkins tests (#8574)

    Others

    • Update Twitter ID (#8572)
    • Bump python version for RTD build (#8576)
    • Fix onnxruntime version for CI failure (#8564)
    Source code(tar.gz)
    Source code(zip)
  • v7.4.0(Apr 23, 2020)

    This is the release note of v7.4.0. See here for the complete list of solved issues and merged PRs.

    As announced previously, Chainer has reduced the release frequency from monthly to once every two months. We have decided to skip v7.3.0 in order to keep the Chainer version up to date with CuPy’s most recent release.

    Enhancements

    • Allow concat_arrays to be pickable (#8549)

    Bug Fixes

    • Allow start_methods other than fork on MultiprocessParallelUpdater (#7552)
    • Fix backend.copyto for mismatched dtypes to CuPy ndarray (#8043)
    • Fix optimizer.use_fp32_update on ChainerX model (#8382, thanks @y1r!)

    Documentation

    • Fix local_convolution_2d result shape documentation (#8553, thanks @msakai!)
    • Update functions.rst (#8557, thanks @husisy!)

    Tests

    • Remove python 2.7 builds (#8550)
    • Use CuPy v7 in CI (#8554)
    Source code(tar.gz)
    Source code(zip)
  • v7.2.0(Feb 14, 2020)

    This is the release note of v7.2.0. See here for the complete list of solved issues and merged PRs.

    As announced previously, Chainer is currently under the maintenance phase. Considering the situation, we are going to reduce the release frequency of Chainer from monthly to once every two months. This does not affect the release frequency of CuPy.

    Enhancements

    • Add support for cupy-cuda102 (#8544)

    Bug Fixes

    • Calculate beta with static_code on F.BatchNormalization.forward (#8325)

    Code Fixes

    • Remove py2 warnings (#8542)

    Documentation

    • Remove stable version section from README (#7956)
    • Add Optuna to README.md (#8537)
    • Fix typo (#8541)

    Examples

    • Fix accuracy calculation of custom loop examples (#8534)
    Source code(tar.gz)
    Source code(zip)
  • v7.1.0(Jan 16, 2020)

    This is the release note of v7.1.0. See here for the complete list of solved issues and merged PRs.

    Enhancements

    • Support custom initializers in NStepRNN (#8489)
    • Support n_step_gru function on exporting ONNX (#8492, thanks @msakai!)
    • Extend ONNX-Chainer's TransposeSequence converter to support more cases (#8493, thanks @msakai!)
    • Add NStepGRU link converter example to ONNX-Chainer test (#8494, thanks @msakai!)
    • Allow ONNX-Chainer's patch_functions to patch functions in modules other than chainer.functions (#8495, thanks @msakai!)
    • Replaced n_fold with n_folds (#8516, thanks @Saanidhyavats!)
    • Remove trailing whitespaces (#8536)

    Performance Improvements

    • Fast IndexIterator for ChainerX CUDA (#8360)

    Bug Fixes

    • Fix CooMatrix.to_dense for duplicate indices (#8187)
    • Add try/finally block to yield in reporter.py (#8508)

    Documentation

    • Fix several documentation errors in chainer.functions.rnn.* (#8454, thanks @msakai!)
    • Fix typo: chainermn.extension -> chainermn.extensions (#8526, thanks @msakai!)
    • Remove '--pre' from 'pip install' commands in ChainerX installation document (#8527, thanks @msakai!)

    Installation

    • Do not install chainer>=7.0.0 in python2 (#8517, thanks @knorth55!)

    Tests

    • Add chainerx test in observation_aggregator (#8384)
    • Fix flaky TestZeta (#8514)
    • Fix flaky test: TestCholesky (#8520)
    • Skip chainerx.fromfile test when dtype is bool_ and mode is text (#8521)
    • Use FunctionTestCase to test F.decov (#8522)
    Source code(tar.gz)
    Source code(zip)
  • v6.7.0(Jan 16, 2020)

    This is the release note of v6.7.0. See here for the complete list of solved issues and merged PRs.

    As announced previously, this is the final release of v6 series, which is the last version supporting Python 2.

    Bug Fixes

    • Add try/finally block to yield in reporter.py (#8511)

    Documentation

    • Fix several documentation errors in chainer.functions.rnn.* (#8530, thanks @msakai!)

    Tests

    • Use FunctionTestCase to test F.decov (#8523)
    • Skip chainerx.fromfile test when dtype is bool_ and mode is text (#8524)
    Source code(tar.gz)
    Source code(zip)
  • v7.0.0(Dec 5, 2019)

    This is the release note of v7.0.0. See here for the complete list of solved issues and merged PRs.

    This release note only covers the difference from v7.0.0rc1; for all highlights and changes, please refer to the release notes of the pre-releases.

    See the Upgrade Guide if you are upgrading from previous versions. Also, note that we dropped the support of Python 2.7 and 3.4 from Chainer v7.

    Please read the following announcement to learn about the future of Chainer.

    Highlights

    • Most features of Chainer, including ChainerMN, are now compatible with ChainerX ndarray.
    • ONNX-Chainer is integrated into Chainer.
    • NHWC support added. Performance for convolutions and batch normalization greatly improved on GPUs with Tensor Core.

    Changes without compatibility

    • Forbid out-of-range insert on Sequence (#6374)
    • Update minimum required python version to 3.5.2 (#8410)

    New Features

    • Support soft target in softmax_cross_entropy (#5595, thanks @anaruse!)
    • Support NHWC tensor layout (#7620)
    • Add Cholesky Decompostion (#8202, thanks @UmashankarTriforce!)
    • Allow customizing setup/tear-down method names in testing.fix_random (#8432)

    Enhancements

    • Use intermediate dtype in F.mean_absolute_error for FP16 (#6807)
    • Avoid fallback for ChainerX in F.accuracy (#7396)
    • Add from_params to Linear & Conv (#7525, thanks @crcrpar!)
    • Correct FunctionNode.forward output type message (#7655)
    • Default index mode for ChainerX Take (#8281)
    • Forward chainerx::MakeArray in some case (#8296)
    • Raise ValueError when calling xxx_obj with ChainerX array in ChainerMN (#8320)
    • Add Permutate exporter to onnx_chainer (#8333, thanks @msakai!)
    • Update ONNX version (#8339)
    • Support ONNX export with opset11 (#8341)
    • Support multiple advanced indexing on ONNX export (#8345)
    • Revert output value check in SoftmaxCrossEntropy (#8347)
    • Enhance chainerx::AddAt as a public function (#8351)
    • Support cover_all=True on Unpooling2D in exporting to ONNX (#8391)
    • Use ceiling_mode on exporting to ONNX MaxPool (#8392)
    • Fix onnx_chainer.replace_func.fake_as_funcnode to reconstruct return value structure (#8398, thanks @msakai!)
    • Support Rollaxis in ONNX-Chainer (#8428, thanks @tkanmae!)
    • Add support of SelectItem in ONNX-Chainer (#8450, thanks @tkanmae!)
    • Add TransposeSequence exporter to ONNX-Chainer (#8451, thanks @msakai!)
    • Use __name__ attribute in parameterized test names when available (#8455, thanks @grlee77!)
    • SelectItem using GatherElements for ONNX opset>=11 (#8470)
    • Add deprecation warning to ONNX exporting without test cases (#8473)
    • Add workaround for cuSolver 10.2's new enums (#8475)
    • Support step slicing on ONNX export (#8484)
    • Support sign function on exporting ONNX (#8488)
    • Raise RuntimeError when using cudnn_fast without cudnn (#8499)

    Performance Improvements

    • Make contiguous case for chainerx::AddAt faster (#8299)

    Bug Fixes

    • Fix 'attempting to reference a deleted function' with MSVC (#8258, thanks @cloudhan!)
    • Fix onnx_chainer's exporter of Separate to handle single output case (#8332, thanks @msakai!)
    • Fix ChainerX fallback condition in batch normalization (#8359)
    • Remove host-side branch on F.accuracy with ignore_label (#8364, thanks @y1r!)
    • Fix rounding on float16 conversions (#8378)
    • Avoid overflow on index calculations when using large arrays (#8389)
    • Fix pickling of optimizers (#8394)
    • Fix AttributeError in WrappedFunctionNode.forward (#8397, thanks @msakai!)
    • Register uninitialized persistents (#8445)
    • Fix ONNX-Chainer's GetItem converter to handle -1 correctly (#8460, thanks @msakai!)
    • Support chainerx.batch_norm with 2D input on CUDA (#8464)
    • Fix BatchNormalization for NHWC without cudnn (#8497)

    Code Fixes

    • Code clean up for routines/indexing.h (#8288)
    • Fix style in _snapshot.py (#8297)
    • C++ cosmetic fixes (#8379)
    • Avoid using VariableNode in F.convolution_2d backward implementation (#8395)
    • Add unsigned suffix in float16 test (#8408)
    • Remove unused function (#8413)
    • Add unsigned integer suffix (#8414)
    • Avoid repeatedly enumerating submodules (#8421)
    • Fix ChainerX CMake test dependencies (#8422)
    • Avoid preprocessor for LAPACK error (#8468)

    Documentation

    • Fix for issue #6251 and issue #6810 (#6808, thanks @euler16!)
    • Document properties of computed gradients in cholesky and eigh (#8312)
    • Fix n-step RNN docs (#8326, thanks @euler16)
    • Fix documentation of NStepGRUBase (#8330, thanks @msakai!)
    • Fix typos in ONNX-Chainer introduction (#8334, thanks @msakai!)
    • Fix docs of ONNX export introduction (#8338)
    • Fix typo in /examples/seq2seq/README.md (#8399, thanks @tanaken0515!)
    • Link to examples directory for the current branch (#8403)
    • Fix scatter_dataset part of ChainerMN tutorial (#8406)
    • Update expected messages of type_check errors (#8407)
    • Fix typo in math expressions (#8433)
    • Update requirements (#8501)

    Installation

    • Allow multiple code in CHAINERX_NVCC_GENERATE_CODE (#8370)
    • Fix CMake target name for abseil (#8380)
    • Remove typing requirement (#8383, thanks @jonringer!)
    • Update minimum required python version to 3.5.2 (#8410)
    • Use PYBIND11_EXPORT instead of visibility hack (#8437)
    • Ignore unused function warning in NVCC (#8439)
    • Fix code grouping in CMakeLists.txt (#8440)

    Examples

    • Add MNIST MultiprocessParallelUpdater example (#7478)
    • Use ChainerX softmax cross entropy implementation in ChainerX examples (#8294)

    Tests

    • Forbid out-of-range insert on Sequence (#6374)
    • Check output in example tests (#7280)
    • Show pytest summary in flexCI (#8212)
    • Run example tests in Travis CI (#8251)
    • Fix Decorrelated Batch Normalization tests (#8260)
    • Build ChainerX example in CI (#8282)
    • Fix test_Meshgrid (#8285)
    • Add ChainerX pytest in multi_node_early_stopping (#8321)
    • Fix inputs of pooling function tests (#8328)
    • Include .git in ChainerCV compatibility CI (#8331)
    • Adjust SoftmaxCrossEntropy test tolerances (#8335)
    • Fix random condition in chainerx.where test (#8342)
    • Use LinkTestCase for L.GroupNormalization (#8343)
    • Relax tolerances of ChainerX linalg forward tests (#8344)
    • Add chainerx test to dataset_tests (#8346)
    • Print installed packages in pytest (#8348)
    • Reduce shape in ChainerX linalg test (#8349)
    • Use different docker image for each base development branch (#8350)
    • Set CHAINER_CI in Travis CI (#8353)
    • Set CHAINER_CI in ChainerX tests in Jenkins (#8354)
    • Set CHAINER_CI in Chainer tests in FlexCI (#8356)
    • Use xpytest to parallelize tests (#8361)
    • Relax float16 forward tolerance of F.cast test (#8363)
    • Print actual array values in FunctionTest modified input error (#8367)
    • Fix negative tests for chainerx.linalg.* (#8371)
    • Avoid non-differential point in TestTriplet (#8376)
    • Check ONNX Chainer python styles (#8400)
    • Change version of python in travis macos test (#8405)
    • Remove chainerx dependency from test backends (#8409)
    • Add ChainerX test to test_allreduce_persistent.py (#8412)
    • Use fix_random in xfail backward tests (#8419)
    • Fix TestMeshgrid (#8420)
    • Add ChainerMN and ONNX-chainer tests to Mergify requirements (#8424)
    • Add chainerx tests to test_checkpoint.py (#8429)
    • Fix random in ChainerX n-step GRU test (#8431)
    • Add chainerx tests to test_create_mnbn_model (#8435)
    • Add chainerx tests into multi_node_optimizer (#8436)
    • Annotate tests that usually run >30s (#8443)
    • Lookup macOS undefined symbols at runtime in backend tests (#8448)
    • Skip some Convolution2D tests for older numpy versions (#8458)
    • Add parametrize_device_name to setup.cfg (#8459)
    • Fix conflict between #8251 and #8361 (#8461)
    • Fix example test data (#8463)
    • Enable verbose flag when installing chainer in Jenkins (#8467)
    • Remove ChainerX F.cholesky test (#8469)
    • Ignore cupy.util.PerformanceWarning in pytest (#8471)
    • Avoid ChainerX slow tests in Jenkins (#8472)
    • Fix flaky test of _modified_xlogx (#8483)
    • Fix broken version specification in FlexCI dockerfile (#8485)
    • Remove unnecessary export on ONNX replace function test (#8487)
    • Allow array_utils.uniform to be deterministic with fix_random by default (#8491)
    • Add error message for invalid base branch in pfnCI (#8496)
    • Adjust timeout and build memory usage in FlexCI (#8498)
    Source code(tar.gz)
    Source code(zip)
  • v6.6.0(Dec 5, 2019)

    This is the release note of v6.6.0. See here for the complete list of solved issues and merged PRs.

    Bug Fixes

    • Fix SCE with ChainerX and normalize (#8311)
    • Fix kernel of double backward of max_pooling_2d (#8329)
    • Fix ChainerX fallback condition in batch normalization (#8368)
    • Fix optimizer_hooks.GradientHardClipping for scalar array (#8372)
    • Fix pickling of optimizers (#8417)
    • Register uninitialized persistents (#8446)

    Enhancements

    • Compute F.negative_sampling in fp32 for fp16 inputs (#8309)
    • Fix optimizer_hooks.GradientHardClipping for ChainerX (#8377, thanks @kshitij12345!)

    Documentation

    • Fix documentation of NStepGRUBase (#8337, thanks @msakai!)
    • Fix n-step RNN docs (#8402)
    • Fix typo in /examples/seq2seq/README.md (#8404, thanks @tanaken0515!)
    • Changes citation to new KDD paper (#8418)
    • Link to examples directory for the current branch (#8423)
    • Update expected messages of type_check errors (#8456)
    • Update requirements (#8502)

    Tests

    • Fix Decorrelated Batch Normalization tests (#8340)
    • Add missing FlexCI configurations (#8352)
    • Use LinkTestCase for L.GroupNormalization (#8355)
    • Show pytest summary in flexCI (#8369)
    • Set CHAINER_CI in Travis CI (#8373)
    • Set CHAINER_CI in ChainerX tests in Jenkins (#8375)
    • Set CHAINER_CI in Chainer tests in FlexCI (#8381)
    • Print installed packages in pytest (#8386)
    • Print actual array values in FunctionTest modified input error (#8388)
    • Avoid non-differential point in TestTriplet (#8396)
    • Use different docker image for each base development branch (#8401)
    • Disable ChainerMN FlexCI tests on v6 (#8411)
    • Use fix_random in xfail backward tests (#8457)
    • Avoid ChainerX slow tests in Jenkins (#8474)
    • Use CuPy v6 in ChainerX test in Jenkins (#8477)
    • Skip some Convolution2D tests for older numpy versions (#8478)
    • Fix Travis Openssl Error in OSX (#8480)
    • Fix flaky test of _modified_xlogx (#8486)
    • Add error message for invalid base branch in pfnCI (#8500)
    • Adjust timeout and build memory usage in FlexCI (#8503)
    Source code(tar.gz)
    Source code(zip)
  • v7.0.0rc1(Oct 25, 2019)

    This is the release note of v7.0.0rc1. See here for the complete list of solved issues and merged PRs.

    Announcements

    This time, we will keep the current branches for active development (master for v7.x, v6 for v6.x) after the RC. We will maintain the v6.x series until the Python 2 EOL, so we are not cutting a new development version for now, to avoid increasing the number of branches to maintain. New features will be included directly in v7 for a while, and maintenance changes will be backported to v6.

    Highlights

    ONNX-Chainer Integration

    ONNX-Chainer, which used to be a separate project, has now been integrated into the Chainer repository and made more accessible to existing Chainer users (#8229). You can easily export a Chainer model in ONNX format like this:

    import onnx_chainer
    onnx_chainer.export(chainer_model, pseudo_input, filename='model.onnx')
    

    For a more detailed description on how to get started, please refer to the ONNX-Chainer section in the official documentation.

    ChainerMN

    ChainerMN now works with ChainerX. In this release, the MNIST example has also been updated to demonstrate the usage. (#7844)

    New Features

    • Add UpsamplingDeconvFilter and DownsamplingConvFilter initializer (#5290, thanks @knorth55!)
    • Add chainerx.meshgrid (#6668, thanks @kshitij12345!)
    • Add chainerx.hsplit (#7030, thanks @ishanrai05!)
    • Add linalg.cholesky to ChainerX (#7329, thanks @IvanYashchuk!)
    • Add linalg.eigh, linalg.eigvalsh to ChainerX (#7503, thanks @IvanYashchuk!)
    • ChainerX + ChainerMN integration on MNIST (#7844)
    • New configuration system of communicator inspired by links (#7885)
    • More efficient multi-node snapshot (#8003)
    • A new multi-node evaluator for force_equal_length=False (#8071)
    • Allow weight initializer to have its own RandomState instance (#8081, thanks @mr4msm!)
    • Add chainerx.hinge (#8168)
    • Integrate ONNX-Chainer to Chainer repository (#8229)
    • Implement chainerx::SoftmaxCrossEntropy and chainerx.softmax_cross_entropy (#8250)
    • Add chainermn.testing.to_device function (#8279)
    • Add chainerx.copyto (#8314, thanks @kshitij12345!)

    Enhancements

    • Rename TabularDataset.as_tuple/as_dict to TabularDataset.astuple/asdict (#7788)
    • Deprecate DeviceResident.to_gpu/to_cpu/to_intel64 (#8058)
    • Support zero-sized matrix in generate_matrix (#8167)
    • Add mode argument to chainerx.take (#8197)
    • Delete move and copy of virtual *GradState classes (#8224)
    • Fix directional gradient stability in gradient_check (#8236)
    • Fix some typo (#8243, thanks @garanews!)
    • Fix CuPy installation detection error message (#8264)
    • Fix intel64 support of F.batch_normalization (#8266)
    • Fix dim clearing on output (#8270)
    • Remove device argument from chainerx.diag and chainerx.diagflat (#8275)
    • Fix algorithm to avoid small directions in gradient_check (#8290)
    • Show import error with guild message on ONNX (#8293)
    • Partially output_grad support on fake_as_funcnode (#8298)
    • Compute F.negative_sampling in fp32 for fp16 inputs (#8300)
    • Make some arguments keyword-only. Note that some of them may break code based on v7 beta versions, but none of them breaks compatibility with v6.
      • Make mode and align_corners arguments in F.resize_image keyword-only (#8009)
      • Make weights and keepdims arguments in Variable.mean keyword-only (#8010)
      • Make arguments of WeightStandardization keyword-only (#8011)
      • Make call_before_training argument of Trainer.extend keyword-only (#8064)
        • The argument was introduced in v7.0.0b3, so it is not counted as compatibility break of v7.
      • Make arguments in ObservationAggregator and MultiNodeEarlyStoppingTrigger keyword-only (#8065)
      • Make force_equal_length argument in scatter_dataset and scatter_index keyword-only (#8066)
      • Make size argument of tabular.from_data keyword-only (#8067)

    Performance Improvements

    • Make contiguous case for chainerx::Take faster (#8295)

    Bug Fixes

    • Fix subgraph construction for ChainerX backward (#8049)
    • Fix a bug in F.batch_normalization with mixed dtype (#8149)
    • Fix __str__ of parameterized class (#8169)
    • Fix bugs when x and gamma/beta have different dtypes in F.batch_normalization (#8175)
    • Change copy to __deepcopy__ in ChainerMN batch_normalization and replace to_gpu (#8185)
    • Fix possible data race in CUDA memory keeper (#8213)
    • Add virtual destructor to CUDA Allocator (#8215)
    • Inherit input ndarray device in chainerx.ascontiguousarray (#8262)
    • Do not expose global_kernel_registry (#8265)
    • Fix SCE with ChainerX and normalize (#8301)
    • Unable to use gpu_id=0 in ChainerMN testing get_device (#8304)

    Code Fixes

    • Update variable names for consistent naming convention (#8074)
    • Fix style of setup.cfg (#8180)
    • Remove unused forward declaration of AveragePoolPadMode enum (#8214)
    • Write Read the Docs related comments in setup.py (#8218)
    • Remove unused classes {Max,Average}PoolForwardBackward (#8223)
    • Conform to readability-avoid-const-params-in-decls (#8225)
    • Simplify direction vector sampling in gradient_check (#8238)
    • Use type hint for method declaration (#8248)
    • Remove obsolete comment in F.softmax_cross_entropy (#8253)
    • Fix import order and grouping (#8257)
    • Simplify CreateSubgraph (#8310)

    Documentation

    • Change citation to new KDD paper (#7994)
    • Fix a typo in the Cauchy distribution page (#8208, thanks @nzw0301!)
    • Fix resize_images documentation to reflect recent code changes (#8221, thanks @zu3st!)
    • Set up documentation for loss functions in ChainerX (#8231)
    • Add documentation for chainerx.ravel (#8233)
    • Add documentation for chainerx.sigmoid_cross_entropy (#8249)
    • Put a link to CuPy installation guide in README instead of a command instruction (#8287)

    Installation

    • Add ability to build with ninja generator. (#8194, thanks @cloudhan!)
    • Suppress warnings-as-errors from external libraries (#8227)
    • Write CMake generator when building (#8239)
    • Add libchainerx_base.a to link chainerx statically (#8247)

    Examples

    • Fix WaveNet example not working (#8157, thanks @dhgrs!)
    • Fix generate.py in examples/wavenet (#8172, thanks @dhgrs!)

    Tests

    • Simplify F.scale test (#6969, thanks @ishanrai05!)
    • Improve example tests (#7475)
    • Add fp16 test to test_n_step_rnn (#7483)
    • Fix protobuf dependency (#7529)
    • Fix TestAccuracy: Randomly reduce testing parameters (#7820)
    • Support ChainerMN testing in pfnci (#7821)
    • Fix flaky tests of chx.linalg.solve (#7997)
    • Fix overflow warning in div backward test (#8109)
    • Fix flaky TestQR (#8114)
    • Disable flaky test retry in flexCI (#8143)
    • Pairwise testing (#8164)
    • Allow pytest.skip() in combination with testing.repeat/retry (#8174)
    • Remove DummySerializer and DummyDeserializer from iterators_tests (#8176)
    • Fix comparison with casting in hdf5 serializer test (#8182)
    • Relax BatchNormalization backward test tolerances (#8189)
    • Fix caffe test with protobuf>=3.8 (#8190)
    • Add CHAINER_TEST_PAIRWISE_PARAMETERIZATION and enable it only in Travis CI (#8211)
    • Fix attrs package version (#8219)
    • Fix HDF5Serializer test for h5py<2.9 (#8220)
    • Fix flaky TestBatchNormalization (#8230)
    • Relax tolerances in ChainerX unary math tests (#8234)
    • Add "jenkins" extras (#8241)
    • Use clang-format-6.0 if possible and track the version of clang-format (#8242)
    • Remove legacy DeprecationWarning filter from test_multi_node_chain_list (#8246)
    • Fix chainex_tests/unit_tests/routines_tests/test_linalg.py::Inverse (#8255)
    • Fix flaky TestHuberLoss (#8271)
    • Stop setting too small tolerances in backprop tests (#8283)
    • Make ImportWarning just a warning in tests (#8291)
    • Fix gtest linkage (#8292, thanks @cloudhan!)
    • test_average is slow in FlexCI (#8303)
    • Add ChainerX to test_mnist in chainermn_tests (#8305)
    • Implement communicator_test for ChainerX+ChainerMN (#8313)

    Others

    • Remove ImportWarning ignore entry (#8186)
    • Add WIN32_LEAN_AND_MEAN definition (#8205, thanks @cloudhan!)
    • Deprecate multinode checkpointer (#8207)
    • Replace Slack invitation links (#8263)
    Source code(tar.gz)
    Source code(zip)
  • v6.5.0(Oct 25, 2019)

    This is the release note of v6.5.0. See here for the complete list of solved issues and merged PRs.

    Enhancements

    • Display ChainerX availability in print_runtime_info (#7860)
    • Fix CuPy installation detection error message (#8278)

    Bug Fixes

    • Fix __str__ of parameterized class (#8184)

    Code Fixes

    • Update variable names for consistent naming convention (#8307)

    Documentation

    • Add document print runtime info (#8165)
    • Fix RNN documentation (#8203)
    • Fix a typo in the Cauchy distribution page (#8209, thanks @nzw0301!)

    Tests

    • Increase CPU memory for test instance in PFN CI (#7955)
    • Fix overflow warning in div backward test (#8188)
    • Disable flaky test retry in flexCI (#8191)
    • Relax BatchNormalization backward test tolerances (#8196)
    • Fix comparison with casting in hdf5 serializer test (#8198)
    • Fix tests of L.BatchRenormalization and adjust tolerances (#8200)
    • Adjust TestConvolution2DFunction::test_double_backward fp16 tolerance (#8201)
    • Fix attrs version (#8222)
    • Fix caffe test with protobuf>=3.8 (#8232)
    • Relax tolerances in ChainerX unary math tests (#8235)
    • Add Jenkins extras (#8252)
    • Fix HDF5Serializer test for h5py<2.9 (#8256)

    Others

    • Replace Slack invitation links (#8284)
    Source code(tar.gz)
    Source code(zip)
  • v7.0.0b4(Sep 26, 2019)

    This is the release note of v7.0.0b4. See here for the complete list of solved issues and merged PRs.

    Highlights

    Many updates to ChainerX including new routines and support for loss scaling.

    New Features

    • Support all float dtypes in F.n_step_rnn and F.n_step_birnn (#5808)
    • Add chainerx.vsplit to ChainerX (#7032, thanks @ishanrai05!)
    • Add chainerx.linalg.qr to ChainerX (#7379, thanks @IvanYashchuk!)
    • Add chainerx.accuracy (#7526, thanks @aksub99!)
    • Add chainerx.{remainder/mod} (#7675, thanks @sky58!)
    • Add Tree-LSTM to ChainerX (#7720, thanks @dido1998!)
    • Add S-LSTM to ChainerX (#7783, thanks @dido1998!)
    • Loss scale support for chainerx (#7979)
    • Add F.zeta (#8059, thanks @UmashankarTriforce!)
    • Add testing.generate_matrix to get matrices of given singular values (#8077)
    • Add chainerx.fmod (#8110)
    • Add chainerx.nonzero (#8124)

    Enhancements

    • Abbreviate output of chainerx::ArrayRepr for large inputs (#7708)
    • Make parameterized test names deterministic (#7945)
    • Raise FutureWarning on GPU-to-GPU transfer in StandardUpdater (#7952)
    • Always get typeid of kernels in libchainerx (#7970)
    • Fixed support of 0-sized arrays for linalg routines in ChainerX (#7980, thanks @IvanYashchuk!)
    • Support CuPy/ChainerX arrays when initializing variable.Parameter objects (#8022)
    • Add cuda ScanKernel (#8103)

    Performance Improvements

    • Add chainerx::Absolute device implementation (#7319)
    • Make weight standardization faster (#7963)

    Bug Fixes

    • Fix deadlock on MultiprocessIterator and MultiprocessParallelUpdater (#7511)
    • Support mixed16/float16 GroupNormalization (#7965)
    • Change return policy for chx::Device object on ndarray pickling (#7988)
    • Fix deepcopy for chain parameters (#7996)
    • Fix floating point exception in ChainerX inferred reshape (#8018)
    • Fix chainerx::Dot edge cases with empty arrays (#8020)
    • Fix LSTM for omitted upstream gradients (#8037)
    • Fix native AddAt implementation for float16 arrays (#8055)
    • Correctly cast fill_value in constant initializer (#8089)

    Code Fixes

    • Simplify ArrayReprImpl (#7699)
    • Remove unnecessary file (#8000)
    • Refactor F.batch_normalization and ChainerMN backend implementations (#8039)
    • Fix -Wabsolute-value for clang (#8045)
    • Generalize and simplify NativeCumsumKernel (#8053)
    • Fix coding style of some imports in ChainerMN (#8060)
    • Fix -Wbraced-scalar-init for clang (#8076)
    • Use standard constructor (#8088, thanks @cloudhan!)
    • Remove unused headers in arithmetic.{h,cc} (#8128)

    Documentation

    • Fix doc of backend.copyto (#7832)
    • Document chainerx.to_numpy (#7984)
    • Fix RNN docs for ChainerX (#7985, thanks @dido1998!)
    • Remove obsolete note about chainerx.take indices dtype (#7998)
    • Add undocumented arguments to snapshot extension signature (#8004)
    • Fix grammatical errors in documentation (#8029)
    • Fix heading anchor in ChainerX docs (#8091)
    • Documentation improvement CHAINERX_ENABLE_{BLAS,LAPACK} (#8099)
    • Add document print runtime info (#8125)
    • Fix RNN documentation (#8144)
    • Add documentation for chainerx.minimum (#8146)
    • Remove obsolete note in chainerx.maximum doc (#8147)
    • Fix typo (#8160)

    Installation

    • Fix NumPy version in Dockerfile (#8027)
    • Add cblas.h and modified CMakeLists.txt (#8052, thanks @okdshin!)
    • Fix environment variable CHAINERX_ENABLE_LAPACK=0 causes error (#8086, thanks @cloudhan!)
    • Update abseil to new release (#8120)

    Examples

    • Use some latest features for the WaveNet example (#6285)
    • Separate training script into main part and data submodule to avoid an error related to NVIDIA DALI. (#8127, thanks @lazykyama!)

    Tests

    • Treat warnings as errors in tests (#6653)
    • Filter DeprecationWarning in test_maniplation.py (#7824)
    • Avoid unnecessary test condition in F.max_pooling_2d test (#7924)
    • Add test for optimizers test coverage (#7927)
    • Fix flaky negative_sampling (#7975)
    • Avoid testing full combinations in F.lstm test parameterization (#7987)
    • Relax tolerances in gradient_check test (#7989)
    • Drop Python 2 Travis CI configuration (#8013)
    • Drop Python 2 AppVeyor configuration (#8014)
    • Drop Python 2 PFN CI configuration (#8017)
    • Suppress number of combinations of in_out_dtype (#8023)
    • Avoid non-differentiable point in min/max tests (#8044)
    • Adjust TrueDiv tolerances (#8047)
    • Add scripts for Docker base images for Chainer CI (#8075)
    • Fix tests of L.BatchRenormalization and adjust tolerances (#8080)
    • Add timestamp to Travis CI log (#8085)
    • Explicit h5py.File mode (#8090)
    • Fix flaky tests with np.empty (#8096)
    • Revive clang-tidy test in Travis CI (#8098)
    • Fix matrix generation in linear algebra PseudoInverse test (#8102)
    • Remove duplicated parameter in test_normal.py (#8111)
    • Register pytest markers (#8112, #8132)
    • Fix macOS Travis error caused by Homebrew (#8115)
    • Add ignore::ImportWarning to setup.cfg (#8131)
    • Relax tolerance of im2col test (#8133)
    • Allow fix_random decorator to be used with OpTest (#8136)
    • Fix missing dtype checks in ChainerX loss test (#8141)
    • Fix flaky NStepRNN and NStepBiRNN (#8142)
    • Avoid empty in F.cast test that can cause overflow warning (#8152)
    • Make xdist usable in ChainerX tests (#8155)
    • Adjust TestConvolution2DFunction::test_double_backward fp16 tolerance (#8163)

    Others

    • Convert tabs to spaces in setup.cfg (#8154)
    Source code(tar.gz)
    Source code(zip)
  • v6.4.0(Sep 26, 2019)

    This is the release note of v6.4.0. See here for the complete list of solved issues and merged PRs.

    Enhancements

    • Insert missing spaces between concatenated string literals (#7935)
    • Make parameterized test names deterministic (#8134)

    Bug Fixes

    • Fix decorrelated batch normalization when groups ≠ 1 (#7825)
    • Support mixed16/float16 GroupNormalization (#8113)
    • Fix deadlock on MultiprocessIterator and MultiprocessParallelUpdater (#8126)
    • Fixes deepcopy for chain parameters (#8150)

    Code Fixes

    • Remove unused argument from decorrelated batch norm (#8097)

    Documentation

    • Add undocumented arguments to snapshot extension signature (#8016)
    • Add a note about incompatibility with NumPy 1.17 + Python2 (#8028)
    • Fix grammatical errors in documentation (#8036)
    • Fix doc of backend.copyto (#8056)
    • Fix typo (#8161)

    Installation

    • Fix NumPy version in Dockerfile (#8068)

    Tests

    • Refactor DecorrelatedBatchNormalizationTest and add stable input (#7940)
    • Relax float16 tolerances in F.batch_inv test (#7981)
    • Relax tolerances in old cuDNN convolution tests (#7982)
    • Fix numerical gradient precision in F.squared_error test (#8012)
    • Fix flaky negative_sampling (#8019)
    • Relax tolerances in gradient_check test (#8021)
    • Explicit h5py.File mode (#8107)
    • Fix eps in Contrastive.backward (#8108)
    • Remove duplicated parameter in test_normal.py (#8117)
    • Fix macOS Travis error caused by Homebrew (#8118)
    • Add timestamp to Travis CI log (#8119)
    • Relax tolerance of im2col test (#8135)
    Source code(tar.gz)
    Source code(zip)
  • v7.0.0b3(Aug 22, 2019)

    This is the release note of v7.0.0b3. See here for the complete list of solved issues and merged PRs.

    Dropping Support of Python 2

    Due to the end-of-life (EOL) of Python 2 in January 2020, Python 2 support has been dropped in this release. Chainer v6.x continues to support Python 2. See the blog post for details.

    Note on F.max_pooling_2d refactoring

    The implementation of F.max_pooling_2d has been merged into F.max_pooling_nd. The behavior is unchanged, so ordinary users should not be affected by this change. However, the FunctionNode class recorded in the computational graph corresponding to F.max_pooling_2d has changed from MaxPooling2D to MaxPoolingND. Code explicitly depending on this class will need a fix.
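
    A quick way to observe the change (a minimal sketch):

    import numpy as np
    import chainer.functions as F

    y = F.max_pooling_2d(np.zeros((1, 1, 4, 4), dtype=np.float32), ksize=2)
    # The recorded FunctionNode is now MaxPoolingND (previously MaxPooling2D).
    print(type(y.creator).__name__)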

    New Features

    • Add an option to invoke extensions before training (#3511, thanks @wkentaro!)
    • Add automatic management of snapshots (deletion and load) (#6856)
    • Add chainerx.repeat (#7223, thanks @durswd!)
    • Support mixed indices in TabularDataset.slice (#7251)
    • Add chainer.dataset.tabular.DelegateDataset (#7276)
    • Add ObservationAggregator extension to ChainerMN (#7302)
    • Add strict mode to scatter_dataset as well as scatter_index (#7327)
    • Add chainer.dataset.tabular.from_data (#7361)
    • Add linalg.svd, linalg.pinv to ChainerX (#7411, thanks @IvanYashchuk!)
    • Add TabularDataset.convert/with_converter (#7428)
    • Add linalg.solve, linalg.inv to ChainerX (#7474, thanks @IvanYashchuk!)
    • Add base Converter class (#7489)
    • Add chainerx.sigmoid_cross_entropy (#7524, thanks @aksub99!)
    • Add chainerx.cumsum (#7558, thanks @aksub99!)
    • Add chainerx.nansum (#7719, thanks @aksub99!)
    • Add chainerx.nanargmax and chainerx.nanargmin (#7755, thanks @aksub99!)
    • LSTM, GRU and RNN implementation for ChainerX (#7764, thanks @dido1998!)
    • Add tri* routines to ChainerX (#7791, thanks @IvanYashchuk!)
    • Add finalize method to ChainerMN CommunicatorBase class (#7814)
    • Add numerical_grad_dtype to FunctionTestCase and LinkTestCase (#7817)
    • Support callable in tabular.from_data (#7847)
    • Add chainerx.count_nonzero (#7852, thanks @aksub99!)
    • Implement hooks for memory pool in ChainerX (#7898)
    • Add chainerx.flatten (#7901, thanks @aksub99!)
    • Add chainerx.ravel (#7904, thanks @aksub99!)

    Enhancements

    • Use numbers for input check in roi_{average|max}_{pooling|align}_2d.py (#5636, thanks @knorth55!)
    • Warn Link.to_gpu unless compatible with to_device (#5762)
    • Change F.dropout to use cuDNN by default (#7185, thanks @crcrpar!)
    • Fix Adam FP16 overflow on GPU kernels (#7694)
    • Improve chainerx import check (#7738)
    • Make F.average as accurate as backend (#7758)
    • Improve NCCL availability error in PureNcclCommunicator (#7793)
    • Fix type_check error message on evaluating bool expression (#7795)
    • Fix module in msg of type_check (#7803)
    • Use scalar array in chx.leaky_relu/elu (#7816)
    • Allow None inputs to gradient check and generating None gradients in FunctionTestCase (#7831)
    • Display ChainerX availability in print_runtime_info (#7833)
    • Add support for input with different dtypes for 'linalg.solve' in ChainerX (#7840, thanks @IvanYashchuk!)
    • Fix F.clip for NumPy 1.17 (#7843)
    • Include rtol * abs(b) in allclose output (#7848)
    • Fix SLSTM for omitted upstream gradients (#7891)
    • Fix LSTM for omitted upstream gradients (#7896)
    • Insert missing spaces between concatenated string literals (#7930)
    • Fix a typo in a kernel name (#7962)

    Bug Fixes

    • Fix TypeError in max_pooling_2d (#6835, thanks @ishanrai05!)
    • Fix multi-device loss scaling (#7594)
    • Avoid unload module call in PureNcclCommunicator (#7600)
    • Fix decorrelated batch normalization when groups ≠ 1 (#7707)
    • Fix create_mnbn_model() bug (#7718)
    • Fix optimizer_hooks.GradientHardClipping for scalar array (#7760)
    • Fix "zero division" in resize image (#7769, thanks @meokz!)
    • Fix ChainerX non-native deserialization (#7830)
    • Fix backends.copyto from chainerx to non-chainerx (#7835)
    • Fix backward of split_axis for intel64 when grad_outputs contains None (#7836)
    • Support for CUDA async in batched copy (#7877)
    • Add scatter interface to CommunicatorBase (#7888)
    • Add DeprecationWarning to initializer of BuildingBlock (#7909)
    • Fix in-place update of arrays in Link.serialize and optimizers.Adam (#7918)
    • Fix precision in F.max_pooling_2d (#7922)

    Code Fixes

    • Avoid using _fallback_workarounds in SpectralNormalization (#7539)
    • Create links.rnn and functions.rnn (#7725)
    • Add batched_copy to all Communicators (#7761)
    • Remove unused lambda capture of axis (#7799)
    • Remove unused argument from decorrelated batch norm (#7828)
    • Fix copies for linalg.svd python bindings layer in ChainerX (#7866, thanks @IvanYashchuk!)
    • Replace n_layer with n_layers for consistency (#7871)
    • Rename a variable in CUDA SVD kernel (#7921, thanks @IvanYashchuk!)
    • Refactor pooling_nd functions (#7938)
    • Merge implementation of F.max_pooling_2d into F.max_pooling_nd (#7939)
    • Fix typo in comment: unique -> deterministic (#7775)

    Documentation

    • Fix static_graph docs code examples (#7875)
    • Add 1.17 to supported NumPy versions (#7883)
    • Add scatter to doc (#7897)
    • Update stable version in README (#7948)

    Installation

    • Relax typing version requirement in Python 3 (#7811)
    • Remove mypy from requirements (#7812)
    • Add OpenMP option for cuSOLVER (#7839)
    • Fix Windows build of ChainerX (#7967, thanks @cloudhan!)

    Examples

    • Improve VAE example (#7250)
    • Show prompt in text classification example (#7858, thanks @UmashankarTriforce!)

    Tests

    • Add test to ensure no mutable default arguments (#4413)
    • Simplify F.max_pooling_2d test (#6836, thanks @ishanrai05!)
    • Simplify F.lstm test (#7808, thanks @dido1998!)
    • Simplify F.slstm test (#7805, thanks @dido1998!)
    • Simplify F.n_step_rnn test (#7804, thanks @dido1998!)
    • Simplify F.n_step_lstm test (#7807, thanks @dido1998!)
    • Simplify F.n_step_gru test (#7806, thanks @dido1998!)
    • Simplify F.embed_id test (#7903, thanks @dido1998!)
    • Add ChainerCV's tests to pfnCI (#7060)
    • Add mixed16 tests to multi-node chain list (#7630)
    • Add mixed16 tests to collective functions (#7633)
    • Add mixed16 tests to point_to_point communications (#7637)
    • Add mixed16 tests to pseudo_connect (#7638)
    • Skip flaky TestConv*TensorCore (#7710)
    • Fix test of chx.reshape (#7762)
    • Revert tentative workaround related to OpenSSL (#7790)
    • Switch current directory in Jenkins tests (#7834)
    • Fix flaky TestHuberLoss (#7837)
    • Configure tolerances of F.average_pooling_2d test (#7841)
    • Fix F.clipped_relu test for NumPy 1.17 (#7842)
    • Add test_accuracy.py to the list of slow test files (#7851)
    • Fix BatchNorm flaky of ChainerX (#7857)
    • Refactor convolution functions tests (#7863)
    • Relax tolerances in convolution function tests when using old cuDNN (#7864)
    • Fix test_TrilTriu (#7865)
    • Fix chainerx.logsumexp test tolerance (#7867)
    • Relax tolerances in convolution link tests when using old cuDNN (#7868)
    • Relax float16 tolerances in ChainerX binary math tests (#7874)
    • F.tree_lstm test for ChainerX (#7881, thanks @dido1998!)
    • Avoid ndarray.data access and fix wrong test (#7890)
    • Sample stable inputs in tests of group normalization (#7894)
    • Avoid unstable inputs in tests of decorrelated batch normalization (#7900)
    • Relax fp16 tolerance in TrueDiv test (#7917)
    • Avoid testing F.cast from negative floating-point to unsigned (#7920)
    • Fix tolerance in L.CRF1d test (#7926)
    • Refactor DecorrelatedBatchNormalizationTest and add stable input (#7932)
    • Relax tolerances in old cuDNN convolution tests (#7942)
    • Fix flaky chainerx.power test (#7950)
    • Increase CPU memory for test instance in PFN CI (#7951)
    • Relax fp16 tolerances in TestContrastive (#7953)
    • Relax float16 tolerances in F.batch_inv test (#7971)

    Others

    • Drop support for Python 2.7 (#7826)
  • v6.3.0 (Aug 22, 2019)

    This is the release note of v6.3.0. See here for the complete list of solved issues and merged PRs.

    Highlights

    • NumPy 1.17 is now officially supported.

    New Features

    • Add automatic management of snapshots (deletion and load) (#7862)

    Enhancements

    • Fix Adam FP16 overflow on GPU kernels (#7780)
    • Make F.average as accurate as backend (#7782)
    • Fix type_check error message on evaluating bool expression (#7801)
    • Fix module in msg of type_check (#7810)
    • Fix F.clip for NumPy 1.17 (#7855)

    Bug Fixes

    • Fix Parameter.dtype for uninitialized parameter (#7749)
    • Fix UpdateRule.use_fp32_update for uninitialized parameter (#7751)
    • Avoid unload module call in PureNcclCommunicator (#7787)
    • Fix TypeError in max_pooling_2d (#7789, thanks @ishanrai05!)
    • Fix create_mnbn_model() bug (#7846)
    • Fix backward of split_axis for intel64 when grad_outputs contains None (#7931)
    • Fix precision in F.max_pooling_2d (#7933)
    • Fix backends.copyto from/to chainerx (#7934)
    • Fix in-place update of arrays in Link.serialize and optimizers.Adam (#7941)
    • Fix ChainerX non-native deserialization (#7954)
    • Fix multi-device loss scaling (#7968)

    Documentation

    • Fix static_graph docs code examples (#7884)
    • Add 1.17 to supported NumPy versions (#7961)

    Tests

    • Fix test of chx.reshape (#7792)
    • Revert #6754 (Fix Travis with macOS) (#7800)
    • Fix a typo in test_communicator (#7822)
    • Fix F.clipped_relu test for NumPy 1.17 (#7854)
    • Switch current directory in Jenkins tests (#7856)
    • Fix flaky TestHuberLoss (#7869)
    • Configure tolerances of F.average_pooling_2d test (#7870)
    • Refactor convolution functions tests (#7873)
    • Relax tolerances in convolution link tests when using old cuDNN (#7878)
    • Fix chainerx.logsumexp test tolerance (#7889)
    • Relax tolerances in convolution function tests when using old cuDNN (#7895)
    • Sample stable inputs in tests of group normalization (#7899)
    • Relax float16 tolerances in ChainerX binary math tests (#7908)
    • Avoid ndarray.data access and fix wrong test (#7913)
    • Avoid unstable inputs in tests of decorrelated batch normalization (#7915)
    • Avoid testing F.cast from negative floating-point to unsigned (#7944)
    • Relax fp16 tolerances in TestContrastive (#7959)
    • Relax fp16 tolerance in TrueDiv test (#7972)
    • Fix tolerance in L.CRF1d test (#7977)
  • v7.0.0b2 (Jul 18, 2019)

    This is the release note of v7.0.0b2. See here for the complete list of solved issues and merged PRs.

    Highlights

    ChainerX gains several new backproppable ops, including the ELU and softplus activation functions, as well as loss functions such as absolute error, squared error, Huber loss, and Gaussian KL divergence. ChainerX is also supported in all optimizer hooks when used through Chainer, and TabularDataset has been improved with new features. A short backprop sketch over the new activations follows.
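
    A minimal sketch (assuming a native/CPU ChainerX build; the input values are arbitrary):

      import chainerx as chx

      x = chx.array([[-1.0, 0.5], [2.0, -0.5]], dtype=chx.float32)
      x.require_grad()                    # record gradients for x
      y = chx.softplus(chx.elu(x)).sum()  # both ops now support backprop
      chx.backward(y)                     # populates x.grad
      print(x.grad)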

    Changes without compatibility

    • Variable.grad getter now raises an error when it is called before calling cleargrad, zerograd, or setting the gradient directly (#7146). See the sketch after this list.
    • The moving average statistics of BatchRenormalization (usage of epsilon) are fixed; this affects inference behavior (#7202)
    • Deprecated communicators in ChainerMN have now been removed. Those include HierarchicalCommunicator, SingleNodeCommunicator and TwoDimensionalCommunicator and are no longer necessary as NCCL now supports inter-node communication. (#7697)
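
    A minimal sketch of the first incompatibility (hedged: the exact exception type may differ):

      import numpy as np
      import chainer

      v = chainer.Variable(np.zeros(3, dtype=np.float32))
      # v.grad  # now raises an error: the gradient was never initialized or set
      v.cleargrad()
      print(v.grad)  # None; reading grad after cleargrad is fine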

    New Features

    • Add WeightStandardization link hook (#6678, thanks @hitsgub!)
    • Add chainerx.dsplit (#7031, thanks @ishanrai05!)
    • Add basic loss functions (#7063, thanks @kshitij12345!)
    • Add basic activation functions (#7118, thanks @aksub99!)
    • Add chainerx.left_shift and chainerx.right_shift (#7339, thanks @sky58!)
    • Add chainerx.elu (#7439, thanks @aksub99!)
    • Add unary mode to TabularDataset (#7493)
    • Add TabularDataset.__iter__ (#7601)
    • Add Variable.mean (#7670)
    • Add chainerx.softplus (#7679, thanks @aksub99!)

    Enhancements

    • Avoid mutable default arguments (#4822)
    • Set initial top_data as -np.inf and argmax_data as -1 in F.roi_max_pooling_2d (#6237, thanks @knorth55!)
    • Add a flag to detect access to grad before calling cleargrad (#7146)
    • Add fp16 support to collective functions (#7456)
    • Call chainerx.grad from chainer.grad (#7464)
    • Use abseil to print stacktrace when signal is raised in ChainerX (#7502)
    • Emit build info of ChainerX and stop hiding ImportError (#7518)
    • Avoid chainerx implicit type conversions (#7520)
    • Make device argument a keyword-only argument (#7537, thanks @kshitij12345!)
    • Support ellipsis in Array::At and __getitem__ (#7561)
    • Introduce chainerx.ndarray._is_chained (#7565)
    • Remove squared_difference and fix docs (#7582)
    • Avoid code duplication in optimizer hook implementation (#7592)
    • Refactor allreduce_grad() and functions related with it (#7604)
    • Raise Python IndexError if the index passed to __getitem__ is out of bounds (#7614)
    • Use six.integer_types for axis check in F.concat (#7632, thanks @knorth55!)
    • Fix optimizer_hooks.GradientClipping for ChainerX (#7641)
    • Replace optional-lite with abseil (#7646)
    • Make devices hashable (#7648)
    • Fix optimizer_hooks.GradientHardClipping for ChainerX (#7656, thanks @kshitij12345!)
    • Implement IntervalTrigger.__str__ (#7664, thanks @ktns!)
    • GradientLARS optimizer hook working with ChainerX (#7669)
    • Use absl::Span and related helpers instead of gsl::span (#7671)
    • Add ChainerX support to initializers (#7687)
    • Delete deprecated communicators (#7697)
    • Use six.integer_types for axis checks (#7713)
    • Require CUDA if CHAINERX_BUILD_CUDA is set (#7752)

    Bug Fixes

    • Skip None array in FunctionNode NaN check (#6283)
    • Fix unit selection of CupyMemoryProfiler (#7003)
    • Exclude eps from running_var of F.batch_renormalization (#7202)
    • Fix pickling issues on MultiprocessIterator (#7486)
    • Fix initializers.Identity for ideep backend (#7548)
    • Fix a bug of chainermn.links.create_mnbn_model (#7603)
    • Fix PickleDataset crash when using multiprocessing (#7625, thanks @zaltoprofen!)
    • Fix AMSGrad with intel64 backend (#7661)
    • Fix an error on chainer.grad for multiple devices (#7692)
    • Fix spectral normalization ChainerX conversion (#7698)
    • Fix offset in chainerx::Flip (#7727)
    • Fix reporter for multi-thread use (#7731)
    • Fix Parameter.dtype for uninitialized parameter (#7735)
    • Fix UpdateRule.use_fp32_update for uninitialized parameter (#7736)

    Code Fixes

    • Use backend.get_array_module not cuda.get_array_module (#7514, thanks @crcrpar!)
    • Make squared_difference alias of squared_error (#7547)
    • Avoid code duplication and access violation between Optimizer and GradientMethod (#7585)
    • Use chainerx.clipped_relu in F.clipped_relu (#7588)
    • Use old syntax to suppress warning in ChainerX (#7615)
    • Rename split functions in pybind implementation (#7617)
    • Cleanup CMakeList.txt (#7647)
    • Fix flake8 error (#7663)
    • Avoid else after return (#7666)
    • Use curly braces for constructors (#7667)

    Documentation

    • Improve contribution docs (#6492)
    • Explain corresponding Links (#6512)
    • Fix inconsistent document for extension finalizer (#7557)
    • Document CHAINERX_CUDNN_USE_CUPY (#7574)
    • Fix typos in ResNet prepare method (#7577)
    • Tiny fix of BackwardContext comment (#7595, thanks @crcrpar!)
    • Fix typos in expand_dims.py (#7602)
    • Remove moved comment (#7607)
    • Correct missing parenthesis in documents (#7611, thanks @tinunkai!)
    • Minor grammar improvements to broadcast documentation (#7621)
    • Edit FunctionNode docs (#7622)
    • Fix a typo in chainer/functions/math/average.py (#7653, thanks @ktns!)
    • Fix a grammar error (#7658)
    • Fix typo in F.squeeze documentation (#7682)

    Examples

    • Support default dtype in sentiment example's recursive minibatch version (#7438)
    • Warn NaN in FP16 mode in sentiment example's recursive minibatch version (#7447)
    • Fix typo in examples/vae/train_vae.py (#7578, thanks @m4saka!)
    • Example fix: stateful triggers cannot be reused (#7665)

    Tests

    • Simplify F.polygamma test (#6970, thanks @ishanrai05!)
    • Simplify F.cast test (#7034)
    • Refactor optimizer test for multi-backend (#7590)
    • Fix y_shape not used in tests (#7610)
    • Test optimizer_hooks.Lasso for ChainerX (#7657, thanks @kshitij12345!)
    • Fix GroupNormalization tests (#7684)
    • Test optimizer_hooks.GradientNoise for ChainerX (#7709, thanks @kshitij12345!)
    • Fix warning filter for protobuf (#7715)
    • Test optimizer_hooks.WeightDecay for ChainerX (#7716, thanks @kshitij12345!)
    • Relax atol/rtol of chainerx.erf float16 test (#7721)
    • Fix flaky TestHuberLoss (#7723)
    • Reverse input array for non-contiguous tests (#7728)
    • Fix eps in Contrastive.backward (#7745)
    • Fix flaky TestContrastive (#7747)

    Others

    • Update pybind version (#7559)
    • Avoid relative paths in third-party.cmake (#7643)
  • v6.2.0 (Jul 18, 2019)

    This is the release note of v6.2.0. See here for the complete list of solved issues and merged PRs.

    Enhancements

    • Avoid code duplication in optimizer hook implementation (#7674)
    • Use six.integer_types for axis check in F.concat (#7712, thanks @knorth55!)
    • Use six.integer_types for axis checks (#7770)

    Bug Fixes

    • Fix a bug of chainermn.links.create_mnbn_model (#7618)
    • Fix unit selection of CupyMemoryProfiler (#7639)
    • Skip None array in FunctionNode NaN check (#7642)
    • Fix AMSGrad with intel64 backend (#7689)
    • Fix spectral normalization chainerx conversion (#7705)
    • Fix PickleDataset crash when using multiprocessing (#7729, thanks @zaltoprofen!)
    • Fix pickling issues on MultiprocessIterator (#7742)
    • Fix an error on chainer.grad for multiple devices (#7746)

    Code Fixes

    • Remove backslashes to continue lines of link targets (#7182)
    • Use backend.get_array_module not cuda.get_array_module (#7619, thanks @crcrpar!)
    • Avoid code duplication and access violation between Optimizer and GradientMethod (#7644)

    Documentation

    • Add chainer.get_device to doc (#6831)
    • Correct Embed ID documentation (#7575)
    • Fix documentation for shape in generate_array (#7576)
    • Fix typos in ResNet prepare method (#7579)
    • Fix inconsistent document for extension finalizer (#7581)
    • Fix typos in expand_dims.py (#7608)
    • Minor grammar improvements to broadcast documentation (#7623)
    • Explain corresponding Links (#7628)
    • Correct missing parenthesis in documents (#7635, thanks @tinunkai!)
    • Tiny fix of BackwardContext comment (#7636, thanks @crcrpar!)
    • Edit FunctionNode docs. (#7659)
    • Improve contribution docs (#7680)
    • Fix typo in F.squeeze documentation (#7688)
    • Fix a grammar error (#7711)

    Examples

    • Fix typo in examples/vae/train_vae.py (#7580, thanks @m4saka!)
    • Support default dtype in sentiment example's recursive minibatch version (#7596)
    • Warn NaN in FP16 mode in sentiment example's recursive minibatch version (#7598)
    • Example fix: stateful triggers cannot be reused (#7683)

    Tests

    • Fix y_shape not used in tests (#7612)
    • Fix GroupNormalization tests (#7700)
    • Fix warning filter for protobuf (#7744)
    • Fix flaky TestContrastive (#7765)
  • v7.0.0b1 (Jun 21, 2019)

    This is the release note of v7.0.0b1. See here for the complete list of solved issues and merged PRs.

    Highlights

    • TabularDataset is added. This is a new dataset interface that supports rich manipulation in a tabular form (like pandas.DataFrame), e.g. loading only a specified subset of keys (columns), efficient slicing (with less transposition/concatenation), batch-wise preprocessing, etc. The API is still under development; we are adding more functionalities and widening its support in existing features where datasets are involved. A minimal usage sketch follows.
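
    A minimal usage sketch (hedged: the API is still evolving, and the from_data helper shown here landed in a later pre-release; it is used only for brevity):

      import numpy as np
      import chainer

      # Build a tabular dataset with two columns, keyed 'x' and 'y'.
      d = chainer.dataset.tabular.from_data({
          'x': np.arange(10, dtype=np.float32),
          'y': np.arange(10, dtype=np.float32) ** 2,
      })
      sub = d.slice[:5, ('x',)]  # rows 0-4, only the 'x' column
      print(len(sub), sub.keys)  # 5 ('x',)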

    New Features

    • Add interface to backprop from multiple variables (#5952)
    • Option to show progress bar during evaluation (#6474, thanks @wkentaro!)
    • Elementwise Power for ChainerX (#6496, thanks @dido1998!)
    • Add chainerx.hstack, chainerx.vstack and chainerx.atleast_2d (#6886, thanks @kshitij12345!)
    • Add TabularDataset (#7115)
    • Add TabularDataset.concat/join (#7116)
    • Add chainerx.expm1 and chainerx.exp2 (#7126, thanks @aksub99!)
    • Add chainerx.log2 (#7139)
    • Add TabularDataset.{transform/transform_batch} (#7150)
    • Add chainerx.log1p (#7161, thanks @sky58!)
    • Expose chainerx::AsContiguous as a public C++ API (#7166)
    • Emit warning on chainerx import in debug mode (#7178)
    • Add chainer.as_array for consistency with chainer.as_variable (#7252, thanks @tkerola!)
    • Add chainerx.moveaxis (#7265, thanks @kshitij12345!)
    • Add chainerx.leaky_relu (#7351, thanks @aksub99!)
    • Add chainerx.dstack and chainerx.atleast_3d (#7353, thanks @kshitij12345!)
    • Add Python operator __abs__ with chainerx.ndarray (#7364)
    • Allow turning off the static subgraph optimizations using a config (#7369)
    • Add NumPy constants to ChainerX (#7384)
    • Add chainerx.erf (#7404, thanks @aksub99!)
    • Add align_corners option to resize_images (#7429)
    • Add nearest mode to resize_images (#7443)
    • Add input_device to StandardUpdater (#7472)
    • Add is_array_supported method on backend.Device (#7487)

    Enhancements

    • Refactor roi_max_align_2d and roi_average_align_2d (#6405, thanks @knorth55!)
    • Support Tagged communication with MPI_Status. (#6696, thanks @y1r!)
    • Support ChainerX in F.copy (#6982)
    • Avoid unnecessary updates in F.batch_renormalization, and related fixes (#7104)
    • Support ChainerX in Variable.addgrad (#7132)
    • Fix cuda.DummyDevice inheritance (#7147)
    • Add Device.name property (#7149)
    • Link.serialize to support ChainerX (#7175)
    • Fix typo in Variable.backward (#7196)
    • Call require_grad() on ChainerX Variable.grad setter (#7198)
    • Clear outputs in FunctionNode.unchain and raise error in ChainerX fallback mode (#7216)
    • Support ChainerX in Variable.copydata (#7226)
    • Support ChainerX in MNIST data parallel example (#7227)
    • MultiprocessParallelUpdater to support new devices (#7245)
    • Alias StackVector<int64_t, kMaxNdim> to Dims (#7258)
    • Support bool dtypes in chainerx::{Max,Min}imum (#7261)
    • Fix integral negative powers (#7262)
    • Make chx.backward not cause error even if backprop is not required (#7287)
    • Support None arguments in chainerx.clip and chainerx.ndarray.clip (#7296)
    • Support scalar in chainerx::Where (#7325)
    • Support None for the min/max parameters of F.clip (#7333)
    • Support cudnn deterministic max pooling (#7390, thanks @anaruse!)
    • Avoid transferring from a native device to another in Array::ToNative() (#7394)
    • Add type hints to Variable (#7400)
    • Improve get_device error message when ChainerX is not available (#7401)
    • get_device to raise more correct error types (#7421)
    • Make EXPECT_ARRAY_* macros usable outside ChainerX (#7434)
    • Add sequence support for ChainerX shape arguments (#7446)
    • Check positive dilation in F.convolution_2d (#7448)
    • Check positive dilation in F.deconvolution_2d (#7449)
    • Explicit check of chainerx arrays on fallback functions (#7452)
    • Support F.copy between non-ChainerX and ChainerX devices only if backprop is not required (#7473)

    Performance Improvements

    • In FunctionNode ChainerX fallback, reuse ChainerxDevice taken from inputs to create outputs (#7397)

    Bug Fixes

    • Fix type check of F.where (#6872)
    • Fix a bug in Bernoulli.log_prob (#7064, thanks @seiyab!)
    • Fix uncopyable MultiNodeBatchNormalization (#7106)
    • Bugfix: MultiNodeChainList should not assume float32 (#7165)
    • Fix initialization of L.Linear when called with n_batch_axes (#7167)
    • Fix float16 and Tensor Core related issue in ChainerX (#7189, thanks @anaruse!)
    • Fix recomputation of L.BatchRenormalization (#7256)
    • Fix F.absolute_error for ChainerX (#7281, thanks @crcrpar!)
    • Fix a bug where root is ignored in scatter_dataset and bcast (#7289)
    • Fix condition to invoke cuDNN dropout (#7293, thanks @crcrpar!)
    • Improve type check in _values_to_dicts so it works with unicode of Python 2 too (#7316)
    • Fix DtypeError in chainerx.square (#7321)
    • Fix mypy errors (#7423)
    • Make WeightDecay aware of loss scale (#7491)
    • Fix GradientMethod ChainerX fallback for uninitialized parameters (#7492)
    • Bugfix for pytest 2x2 (#7509)
    • Fix AdamW update rule regression on CPU (#7512)

    Code Fixes

    • Split binary functions from math.cc (#7128)
    • Avoid using cuda.DummyDevice and cuda.get_device_from_array (#7148)
    • Fix pointless comparison compiler warning in ChainerX (#7160)
    • Remove backslashes to continue lines of link targets (#7170)
    • Split trigonometric/hyperbolic routines from math.cc (#7171)
    • Remove duplicated code in logic.cc (#7176)
    • Consistent cases for Inplace (#7181)
    • Improve code in testing.backend.BackendConfig (#7212)
    • Split ChainerX statistics routines from math.cc (#7222)
    • Fix code style for long expressions (#7231)
    • Check device instance using xp when possible (#7234)
    • Move declaration of AMax and AMin to statistics routines (#7269)
    • Split reduction routines from math.cc (#7270)
    • Use _ for private classes under chainer.dataset.tabular (#7275)
    • Remove unused using declaration (#7284)
    • Split misc routines from math.cc (#7298)
    • Fix wrong comment in ChainerX backward implementation (#7311)
    • Split explog routines from math.cc (#7317)
    • Fix style on imports (#7338)
    • Split rounding routines (#7407)
    • Split arithmetic ops from routines/math.h (#7415)
    • Put comments in FindCuDNN.cmake (#7419)
    • DRY optimizer test parameterizations (#7437)
    • Split logic routines from math (#7444)
    • Qualify some arguments of pool kernels const& (#7453)
    • Include cuda_fp16.h instead of cuda_fp16.hpp (#7480)
    • Use py::arg literal in ChainerX python binding (#7490)
    • Remove rounding kernels from math (#7497)
    • Rename and move activation routines from math.h (#7501)
    • Remove ChainerX AsTypeKernel (#7522, thanks @kshitij12345!)
    • Split python binding math routines (#7527)
    • Use absolute namespace in macros (#7536)

    Documentation

    • Improve contribution guide (#6140)
    • Fix dead sphinx links (#6450)
    • Fix F.normalize documentation (#7062, thanks @crcrpar!)
    • Document F.copy view behavior (#7135)
    • Improve device documentation (#7162)
    • Document backend.get_device_from_array (#7163)
    • Remove chainerx.md (#7179)
    • Add optimizers.MSVAG to documentation (#7183)
    • Fix grammatical errors in documentation (#7186)
    • Fix capitalization of F.relu in doc (#7188)
    • Add missing doc entry for CommunicatorBase.allgather (#7192)
    • Fix invalid escape sequences in ChainerX routine docstrings (#7214)
    • Fix typos in chainer.utils.type_check (#7249, thanks @ktns!)
    • Document observe_value and observe_lr trigger interval (#7266)
    • Fix robots.txt to allow indexing root (#7306)
    • Avoid installing ChainerX when building docs of other projects on ReadTheDocs (#7363, thanks @knorth55!)
    • Improve F.normalize documentation (#7371, thanks @crcrpar!)
    • Fix format of static_graph.rst (#7389)
    • Change Deformable Convolution 2D docs to match arguments (#7402, thanks @higumachan!)
    • Avoid setting test_iter.epoch manually in the tutorial of training loop (#7405)
    • Remove "Comparison with other frameworks" from docs (#7417)
    • Fix documentation for shape in generate_array (#7450)
    • Remove test coverage from ChainerX contribution guide (#7462)
    • Correct Embed ID documentation (#7484)
    • Fix typo in tabular_dataset.py (#7495, thanks @nai62!)

    Installation

    • Fix ChainerX compilation with MSVC (#7108, thanks @durswd!)
    • Allow CUDNN_LIBNAME to be specified by environment variable (#7243)
    • Use external $MAKEFLAGS instead if set in Travis CI script (#7331)
    • In FindCuDNN.cmake, prioritize explicit variables over environment variables (#7441)
    • Add ChainerX build option to use cuDNN from CuPy installation (#7442)
    • Pin typing == 3.6.6 (#7562)
    • Fix typing requirements (#7564)

    Examples

    • Add CIFAR example to ChainerMN (#6839, thanks @ai-kase!)
    • Support device specifiers in MNIST data parallel example (#6857)
    • Support device specifiers in PTB example (#7055)
    • Support device specifiers in pix2pix example (#7076)
    • Support device specifiers in static graph example (#7153)
    • Support device specifiers in ImageNet data parallel example (#7164)
    • Support ChainerX in MNIST inference example (#7169)
    • Support device specifier in image captioning example (#7204)
    • Support device specifier in image captioning example (predict.py) (#7206)
    • Remove PlotReport.available() check in glance example (#7209)
    • Minor fix in DCGAN example README (#7210)
    • Fix sentiment example test (#7215)
    • Support device specifiers in MNIST model parallel example (#7225)
    • Use Agg backend in examples with plot functionality (#7247)
    • Support ChainerX in PTB gentxt example (#7314)
    • Support ChainerX in MNIST model parallel example (#7330)
    • Warn NaN in FP16 mode in dcgan example (#7344)
    • Warn NaN in FP16 mode in memnn example (#7345)
    • Warn NaN in FP16 mode in pix2pix example (#7346)
    • Warn NaN in FP16 mode in pos example (#7354)
    • Warn NaN in FP16 mode in reinforcement learning examples (#7355)
    • Warn NaN in FP16 mode in sentiment example (#7356)
    • Warn NaN in FP16 mode in static_graph_optimizations/cifar example (#7357)
    • Warn NaN in FP16 mode in static_graph_optimizations/mnist example (#7358)
    • Warn NaN in FP16 mode in vae example (#7362)
    • Warn NaN in FP16 mode in word2vec example (#7366)
    • Fix typo in wavenet example requirements (#7367)
    • Warn NaN in FP16 mode in wavenet example (#7372)
    • Support ChainerX in static subgraph optimization examples (#7431)
    • Implement reset method in the PTB example (#7533)

    Tests

    • Add FP16 test to multi_node_chain_list (#6575)
    • [chainerx] Fix skipped_backward tests to return as PASS (#6815, thanks @kshitij12345!)
    • Add configuration of new CI system (#6843)
    • Simplify F.tensordot test (#6968, thanks @ishanrai05!)
    • Simplify F.cumprod test (#6978, thanks @hikjik!)
    • Simplify F.average test (#6995, thanks @hikjik!)
    • Move test_cuda.py to backends_tests (#7144)
    • Fix missing cuda in chainerx.swapaxes test (#7184, thanks @kshitij12345!)
    • Split Variable.grad and Variable.grad_var tests (#7191)
    • Refactor Variable.zerograd test (#7199)
    • Add Tensor Core test for chainerx.conv and chainerx.conv_transpose (#7203)
    • Move TestTanh from test_math.py to test_trigonometric_hyperbolic.py (#7207)
    • Refactor Variable.copydata test (#7224)
    • Add a test to reproduce the bcast deadlock problem (#7257)
    • Add float16 comparison test (#7260)
    • Use CUDA_VISIBLE_DEVICES in ChainerX tests (#7290)
    • Add chainer.as_array test (#7318)
    • Rewrite StandardUpdater tests with pytest style assertion (#7326)
    • Change 0 to 0.0 for Python 2 (#7373)
    • Add missing parameter dstack to invalid_shape test (#7457, thanks @kshitij12345!)
    • Use pytest.mark.xfail instead of unittest.expectedFailure (#7488)

    Others

    • Remove "Research projects using Chainer" from README (#7416)
  • v6.1.0 (Jun 21, 2019)

    This is the release note of v6.1.0. See here for the complete list of solved issues and merged PRs.

    Enhancements

    • Avoid unnecessary updates in F.batch_renormalization, and related fixes (#7197)
    • Fix typo in Variable.backward (#7208)
    • MultiprocessParallelUpdater to support new devices (#7246)
    • Add type hints to Variable (#7445)
    • Improve get_device error message when ChainerX is not available (#7461)
    • Check positive dilation in F.convolution_2d (#7499)
    • Check positive dilation in F.deconvolution_2d (#7500)

    Bug Fixes

    • Fix uncopyable MultiNodeBatchNormalization (#7254)
    • Fix initialization of L.Linear when called with n_batch_axes (#7300)
    • Improve type check in _values_to_dicts so it works with unicode of Python 2 too (#7323)
    • Fix a bug in Bernoulli.log_prob (#7334, thanks @seiyab!)
    • Fix a bug where root is ignored in scatter_dataset and bcast (#7360)
    • Fix condition to invoke cuDNN dropout (#7374, thanks @crcrpar!)
    • Fix mypy errors (#7465)
    • Make WeightDecay aware of loss scale (#7510)
    • Fix AdamW update rule regression on CPU (#7516)
    • Fix type check of F.where (#7532)

    Code Fixes

    • Fix code style for long expressions (#7542)

    Documentation

    • Clarify the description of the initializer argument (#7070)
    • Remove extra spaces in docstrings (#7130)
    • Fix link to ChainerMN docs in performance guide (#7131)
    • Document passive attributes in FunctionTestCase (#7134)
    • Fix dead sphinx links (#7159)
    • Document backend.get_device_from_array (#7168)
    • Document F.copy view behavior (#7174)
    • Add optimizers.MSVAG to documentation (#7193)
    • Add missing doc entry for CommunicatorBase.allgather (#7195)
    • Remove chainerx.md (#7218)
    • Fix grammatical errors in documentation (#7219)
    • Fix typos in chainer.utils.type_check (#7274, thanks @ktns!)
    • Improve device documentation (#7288)
    • Fix capitalization of F.relu in doc (#7299)
    • Fix invalid escape sequences in ChainerX routine docstrings (#7336)
    • Fix F.normalize documentation (#7337, thanks @crcrpar!)
    • Fix format of static_graph.rst (#7399)
    • Avoid setting test_iter.epoch manually in the tutorial of training loop (#7410)
    • Avoid installing ChainerX when building docs of other projects on ReadTheDocs (#7426, thanks @knorth55!)
    • Fix robots.txt to allow indexing root (#7458)
    • Add reference and warning to F.swish document (#7467, thanks @fiarabbit!)
    • Change Deformable Convolution 2D docs to match arguments (#7468, thanks @higumachan!)
    • Remove test coverage from ChainerX contribution guide (#7469)
    • Remove "Comparison with other frameworks" from docs (#7477)
    • Improve F.normalize documentation (#7482, thanks @crcrpar!)

    Installation

    • Fix ChainerX compilation with MSVC (#7173, thanks @durswd!)
    • Fix typing requirements (#7566)

    Examples

    • Support device specifiers in examples:
      • Support device specifier in image captioning example (#7229)
      • Support device specifiers in MNIST data parallel example (#7233)
      • Support device specifiers in pix2pix example (#7235)
      • Support device specifiers in static graph example (#7236)
      • Support device specifiers in PTB example (#7263)
      • Support device specifiers in ImageNet data parallel example (#7303)
      • Support ChainerX in PTB gentxt example (#7340)
    • Fix sentiment example test (#7238)
    • Warn NaN in FP16 mode in examples:
      • Warn NaN in FP16 mode in wavenet example (#7376)
      • Warn NaN in FP16 mode in static_graph_optimizations/mnist example (#7377)
      • Warn NaN in FP16 mode in word2vec example (#7378)
      • Warn NaN in FP16 mode in sentiment example (#7380)
      • Warn NaN in FP16 mode in static_graph_optimizations/cifar example (#7381)
      • Warn NaN in FP16 mode in reinforcement learning examples (#7382)
      • Warn NaN in FP16 mode in dcgan example (#7383)
      • Warn NaN in FP16 mode in memnn example (#7386)
      • Warn NaN in FP16 mode in pos example (#7387)
      • Warn NaN in FP16 mode in pix2pix example (#7388)
      • Warn NaN in FP16 mode in vae example (#7412)
    • Implement reset method in the PTB example (#7535)

    Tests

    • Use CUDA_VISIBLE_DEVICES in ChainerX tests (#7294)
    • Move test_cuda.py to backends_tests (#7295)
    • Improve mergify configuration (#7301)
    • Add configuration of new CI system (#7403)
    • Change 0 to 0.0 for Python 2 (#7508)
    • Add a test to reproduce the bcast deadlock problem (#7554)

    Others

    • Add .mergify.yml (#7151)
    • Remove "Research projects using Chainer" from README (#7459)
  • v7.0.0a1 (May 16, 2019)

    This is the release note of v7.0.0a1. See here for the complete list of solved issues and merged PRs.

    Highlights

    • Many examples, including ImageNet, DCGAN, and VAE, now support ChainerX arrays

    New Features

    • Support orthogonal embedding initialization (#6031)
    • Add an option in links.loss.CRF1d to automatically sort the input sequence (#6351)
    • Add AdaBound (and AMSBound) (#6388, thanks @hitsgub!)
    • Add squared_difference to chainerx (#6501, thanks @aksub99!)
    • Implement array vs array functionality for chainerx.minimum (#6541, thanks @aksub99!)
    • Add FP16 support to send/recv (#6552)
    • Implement array vs array functionality for chainerx.maximum (#6570, thanks @aksub99!)
    • Add Mean Var Python Bindings to ChainerX (#6640, thanks @kshitij12345!)
    • Add chainerx.ceil (#6705, thanks @kshitij12345!)
    • Add chainerx.floor (#6707, thanks @kshitij12345!)
    • Add chainerx.absolute (#6715, thanks @dido1998!)
    • Add chainerx.argmin and chainerx.ndarray.argmin (#6740, thanks @Harshan01!)
    • Add chainerx.amin and chainerx.min (#6752, thanks @Harshan01!)
    • Add chainerx.a/sinh and chainerx.a/cosh (#6776, thanks @kshitij12345!)
    • Add chainerx.fabs and chainerx.sign (#6777, thanks @kshitij12345!)
    • Add chainerx.logical_and and chainerx.logical_or (#6779, thanks @kshitij12345!)
    • Add chainerx.all and chainerx.any (#6781, thanks @kshitij12345!)
    • Add chainerx::Softmax and chainerx.softmax (#6814, thanks @tohmae!)
    • Add zero fill mode in allreduce of chainermn (#6817)
    • Make BatchNorm states public (#6847)
    • Introduce Native/CUDA macros for registering standard elementwise ops (#6870, thanks @kshitij12345!)
    • Make Adam variants more accessible (#6874, thanks @crcrpar!)
    • Add chainerx::Swapaxes and chainerx.swapaxes (#6897, thanks @kshitij12345!)
    • Add chainerx.logical_xor (#7014, thanks @ishanrai05!)
    • Add chainerx.log10 (#7015, thanks @ishanrai05!)
    • Add chainerx.isfinite (#7016, thanks @kshitij12345!)
    • Add bitwise ops to ChainerX (#7017, thanks @kshitij12345!)
    • Add chainerx.arctan2 (#7028, thanks @kshitij12345!)
    • Add chainerx.expand_dims (#7029, thanks @kshitij12345!)
    • Add chainerx.flip, chainerx.fliplr and chainerx.flipud (#7065, thanks @kshitij12345!)
    • Add chainerx.where (#7067, thanks @kshitij12345!)
    • Add F.arctanh (#7095)

    Enhancements

    • Improve error message of gradient_check.check_double_backward (#6427)
    • Improve link_hooks.SpectralNormalization (#6655, thanks @crcrpar!)
    • ChainerX Op registration: normalization (#6719)
    • ChainerX Op registration: arithmetic (#6723)
    • Implement Relu in ChainerX (#6731, thanks @dido1998!)
    • Make device functions public (#6744)
    • ChainerX Op registration: creation (#6745)
    • ChainerX Op registration: linalg (#6746)
    • Allow snapshot_object to have condition and writer options (#6762)
    • Support fallbacks of ChainerX on GetItem failure when indices contain chainerx.ndarray (#6769)
    • Fix Evaluator for chainer.dataset.converter (#6768)
    • Rename patients argument to patience in EarlyStoppingTrigger (#6784)
    • Remove Backend ctor and use CreateBackend (#6785)
    • ChainerX Op registration: pooling (#6800)
    • Define __str__ for Device classes (#6816, thanks @nishnik!)
    • Simplify numeric.h (#6832)
    • ChainerX Op registration: connection (#6833)
    • ChainerX Op registration: array members (#6834)
    • ChainerX Op registration: math (#6842)
    • Mixed dtypes: chainerx::Minimum (#6858)
    • Update distributions.independent (#6860, thanks @ganow!)
    • Add chainerx.ndarray.all and chainerx.ndarray.any (#6926)
    • Fix HuberLoss.forward to avoid loss of significance (#6940)
    • Support Tensor Core in chainerx::Dot (#6960)
    • Fix F.get_item backward for ChainerX (#6991)
    • Support NumPy scalars in ChainerX arithmetics (#7004)
    • Implement NumPy-like pairwise reduction for stability (#7043, thanks @grafi-tt!)
    • Support mixed dtypes in Stack (#7058)
    • ChainerX Scalar / Array divisions (#7075)
    • Fix Reshape copy condition (#7080)
    • Fix trigger constructors to raise errors instead of assertion failures (#7101)
    • Support Tensor Core in chainerx::Conv (#7112)

    Performance Improvements

    • Optimize ChainerX-to-CuPy ndarray conversion (#6204)
    • Use cuDNN in ReLU (#6993)
    • Fast integer scale unpooling (#7114, thanks @tkerola!)

    Bug Fixes

    • Avoid throwing in destructors (#6725)
    • Fix TypeError during BN deserialization on Win64 (#6765, thanks @hyabe!)
    • Fix chainerx.astype casting from float16 to bool in CUDA (#6780, thanks @kshitij12345!)
    • Fix ArgMax of CUDA when all values are negative (#6783)
    • Fix unchain gradient pull (#6804, thanks @Rishav1!)
    • Remove chainerx.square fallback since it is implemented in C++ (#6823)
    • Fix stack overflow caused when to_gpu/to_cpu/to_intel64 were overridden (#6824)
    • Fix filename arg of PlotReport (#6866)
    • Make InvalidType picklable (#6884, thanks @zaltoprofen!)
    • Rename the macro name for AMinOp (#6922)
    • Fix terminal column width retrieval in backprop traceback in Python 2 (#6949)
    • Avoid using ImportError during import cupy (#6954)
    • Fix cuDNN descriptor double destroy (#6972)
    • Fix ConcatWithAsyncTransfer (#6992)
    • Set allow_pickle=True (#7036)
    • Fix subview of zero-sized arrays (#7037)
    • Fix At output offset (#7046)
    • Fix handling of ndarray offsets (#7047)
    • Fix construction of std::shared_ptr with custom deleter in chainer_interop.cc (#7107)
    • Fix build with clang (#7119)

    Code Fixes

    • Check headers with clang-tidy (#6441)
    • Refactor CUDA batch norm tensor descriptor (#6724)
    • Fix comments and add TODO to indexing routines (#6789)
    • Add cuda_internal::DeviceInternals to wrap handle etc. (#6820)
    • Clean up DeviceInternals (#6827)
    • Rename CHAINERX_REGISTER_OP_{NATIVE,CUDA} to CHAINERX_{NATIVE,CUDA}_REGISTER_OP (#6865)
    • Add comments on del (#6933)
    • Unify variable names in gradient_check (#6935)
    • Align macro parameter name (#6941)
    • Introduce chainerx/kernels/ and rename existing device "op"s to "kernel"s (#6944)
    • Remove obsolete "Op" files (#6959)
    • Prefix macro with CHAINERX as per convention (#7022)
    • Use macro in exp_log.{cc/cu} (#7068)
    • Pass arguments by value in native::Float16 and cuda::Float16 (#7069)
    • Avoid importing object (#7110)

    Documentation

    • Clarify the description of the initializer argument (#6317)
    • Add docs for two loss functions (#6349, thanks @hsezhiyan!)
    • Improve docs of square, maximum and squared_difference (#6451, thanks @aksub99!)
    • Append to v6 upgrade guide about Python 3.4 support drop (#6493)
    • Add reference and warning to F.swish document (#6509, thanks @fiarabbit!)
    • Document fix in default initializer (#6519)
    • Convert utilities docs to one page (#6595, thanks @trancenoid!)
    • Add chainer.get_device to doc (#6735)
    • Use search index (#6881)
    • Add chainerx.sigmoid docs (#6889, thanks @crcrpar!)
    • Fix typo in F.convolution_2d (#6890, thanks @crcrpar!)
    • Document chainer.testing.LinkTestCase (#6895, thanks @crcrpar!)
    • Update README.txt for a link to the tutorial (#6896)
    • Fix broken link in chainerx.md (#6899, thanks @tkat0!)
    • Document passive attributes in FunctionTestCase (#6931)
    • Fix documentation of renamed arguments (#6932)
    • Fix typo in pickle_dataset.py (#6942)
    • Update ChainerX contribution guide (#6951)
    • Support Sphinx 2.0 and use absolute path to support the latest RTD (#7027)
    • Fix link to ChainerMN docs in performance guide (#7044)
    • Update supported MPI list (#7086)
    • Document CHAINERX_ENABLE_BLAS environment variable (#7098, thanks @durswd!)
    • Move backend docs to a separate page (#7099)
    • Document backend and device objects (#7102)
    • Remove extra spaces in docstrings (#7125)
    • Fix AdamW docstring (#7137, thanks @crcrpar!)
    • Fix spelling of AMSGrad (#7138, thanks @crcrpar!)

    Installation

    • CMake for Windows (clang-cl) (#7039, thanks @durswd!)
    • Exclude protobuf 3.8.0rc1 from dependencies (#7083)

    Examples

    • Improve chainer examples (#6399, thanks @crcrpar!)
    • Fix reinforcement_learning example to work with default dtype (#6624)
    • Support default dtype in vae example (#6717)
    • Support ChainerX in reinforcement learning example (#6733)
    • Support ChainerX in wavenet example (#6736)
    • Trivial fixes to Wavenet example (#6737)
    • Support ChainerX in VAE example (#6739)
    • Support ChainerX in text classification example (#6769)
    • Support ChainerX in DCGAN example (#6773)
    • Support ChainerX in word2vec example (#6774)
    • Show download progress bar in image-captioning example (#6775)
    • Support ChainerX in memnn example (#6854)
    • Use filename in PlotReport example (#6880, thanks @crcrpar!)
    • Support ChainerX in CIFAR example (#6936)
    • Support ChainerX in POS-tagging example (#7081)
    • Support ChainerX in Sentiment example (#7087)
    • Add progress bar to sentiment analysis example (#7103)
    • Support ChainerX in Model Zoo example (#7129)

    Tests

    • Simplify F.mean_absolute_error test (#6253, thanks @aksub99!)
    • Simplify F.bilinear test (#6488, thanks @ishanrai05!)
    • Simplify F.deconvolution_2d test (#6498, thanks @ishanrai05!)
    • Display pytest summary (#6625, thanks @kshitij12345!)
    • Travis test against v6 branch (#6749)
    • Fix Travis with macOS (#6754)
    • Dodge nondifferentiable inputs in chainerx.max test (#6761)
    • Make overly slow initializer tests faster (#6792)
    • Fix test failures in math test (#6798)
    • Simplify F.flip test (#6801, thanks @ishanrai05!)
    • Simplify F.where test (#6802, thanks @ishanrai05!)
    • Simplify F.repeat test (#6803, thanks @ishanrai05!)
    • Fix F.elu test numeric error (#6841)
    • Relax tolerance for float16 in unary_math_function_unittest (#6845)
    • Relax tolerances and avoid non-differentiable points for FP16 in triplet loss tests (#6855)
    • Simplify F.unpooling_nd test (#6861, thanks @ishanrai05!)
    • Simplify F.local_response_normalization test (#6867, thanks @ishanrai05!)
    • Simplify F.reshape test (#6868, thanks @ishanrai05!)
    • Simplify F.layer_normalization test (#6871, thanks @ishanrai05!)
    • Fix test failure in test_spatial_transformer_sampler.py (#6883)
    • Simplify F.prelu test (#6887, thanks @ishanrai05!)
    • Simplify F.flatten test (#6888, thanks @ishanrai05!)
    • Simplify F.dstack test (#6891, thanks @ishanrai05!)
    • Simplify F.sign test (#6898, thanks @hikjik!)
    • Simplify F.ceil test (#6900, thanks @hikjik!)
    • Simplify F.floor test (#6901, thanks @hikjik!)
    • Fix F.rrelu test instability (#6915)
    • Fix F.max_pooling_nd test instability (#6917)
    • Fix flaky Huber loss test (#6924)
    • Simplify F.fmod test (#6937, thanks @hikjik!)
    • Simplify F.fix test (#6938, thanks @hikjik!)
    • Fix test parameters in ChainerX math tests (#6946)
    • Increase the default columns in Travis CI (#6948)
    • Fold Travis test outputs (#6961)
    • Simplify 'F.min', 'F.max' test (#6962, thanks @hikjik!)
    • Simplify 'F.exp', 'F.log' test (#6963, thanks @hikjik!)
    • Simplify F.expm1 test (#6965, thanks @hikjik!)
    • Fix flaky ChainerX max_pool test (#6975)
    • Simplify F.bias test (#6976, thanks @hikjik!)
    • Simplify F.cumsum test (#6977, thanks @hikjik!)
    • Refactor Variable.addgrad test (#6979)
    • Simplify F.cosh, F.sinh test (#6980, thanks @hikjik!)
    • Simplify F.log1p test (#6981, thanks @hikjik!)
    • Simplify F.linear_interpolate test (#6984, thanks @hikjik!)
    • Simplify F.fft, F.ifft test (#6985, thanks @hikjik!)
    • Simplify F.matmul test (#6987, thanks @ishanrai05!)
    • Fix flaky TestLogSumExp (#6988)
    • Fix flaky TestMin (#6989)
    • Simplify F.get_item test (#6990)
    • Simplify F.inv, F.batch_inv test (#6994, thanks @hikjik!)
    • Simplify F.batch_l2_norm_squared test (#6996, thanks @hikjik!)
    • Simplify F.accuracy test (#7006, thanks @hikjik!)
    • Simplify F.binary_accuracy test (#7007, thanks @hikjik!)
    • Simplify F.r2_score test (#7008, thanks @hikjik!)
    • Simplify F.permutate test (#7010, thanks @hikjik!)
    • Simplify F.scatter_add test (#7012, thanks @hikjik!)
    • Simplify F.separate test (#7013, thanks @hikjik!)
    • Simplify F.logsumexp test (#7018, thanks @hikjik!)
    • Skip tests that fail with NumPy 1.16.3 (#7021)
    • Add broadcast test in test_math.py (#7023)
    • Fix flaky chainerx.abs test (#7024)
    • Remove ChainerX acceptance tests (#7026)
    • Fix flaky chainerx.tan test (#7033)
    • Display pytest summary (cont.) (#7089)

    Others

    • Make it easier to copy the instruction in the issue template (#6665)
    • Make git ignore chainerx/libchainerx.dylib (#6666)
    • Add .mergify.yml (#7074)
    • Improve mergify configuration (#7111)
  • v6.0.0 (May 16, 2019)

    This is the release note of v6.0.0. See here for the complete list of solved issues and merged PRs.

    This release note only covers the difference from v6.0.0rc1; for all highlights and changes, please refer to the release notes of the pre-releases.

    See the Upgrade Guide if you are upgrading from previous versions.

    Highlights

    • AdaBound and AMSBound are now supported by Adam (see the sketch after this list)
    • The performance of unpooling with integer scaling is improved
    • Many examples including ImageNet, DCGAN and VAE support ChainerX
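
    A minimal sketch of the AdaBound/AMSBound support (hedged: the flag and parameter names follow the v6 Adam hyperparameters; treat this as a sketch, not the authoritative interface):

      import chainer

      # AdaBound: Adam whose per-parameter learning rates are clipped by
      # bounds that converge to final_lr over training.
      opt = chainer.optimizers.Adam(adabound=True, final_lr=0.1)
      # AMSBound: the AMSGrad variant of AdaBound.
      opt_ams = chainer.optimizers.Adam(amsgrad=True, adabound=True)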

    New Features

    • Implement array vs array functionality for chainerx.minimum (#6813, thanks @aksub99!)
    • Add logical_and and logical_or to ChainerX (#6821, thanks @kshitij12345!)
    • Add squared_difference to ChainerX (#6822, thanks @aksub99!)
    • Add AdaBound (and AMSBound) (#6846, thanks @hitsgub!)
    • Add condition and writer option to snapshot_object (#6943)
    • Add chainerx.ceil (#6852, thanks @kshitij12345!)

    Enhancements

    • Make ChainerX device functions public (#6760)
    • Fix Evaluator for chainer.dataset.converter (#6790)
    • Remove ChainerX Backend ctor and use CreateBackend (#6809)
    • Improve link_hooks.SpectralNormalization (#6877, thanks @crcrpar!)
    • Update distributions.independent (#6945, thanks @ganow!)
    • Define __str__ for Device classes (#7092, thanks @nishnik!)
    • Fix trigger constructors to raise errors instead of assertion failures (#7105)

    Performance Improvements

    • Fast integer scale unpooling (#7127)

    Bug Fixes

    • Avoid throwing in destructors (#6755)
    • Fix ArgMax of CUDA when all values are negative (#6796)
    • Fix chainerx.astype casting from float16 to bool in CUDA (#6797, thanks @kshitij12345!)
    • Fix TypeError during BN deserialization on Win64 (#6812, thanks @hyabe!)
    • Remove chainerx.square fallback since it is implemented in C++ (#6828)
    • Fix stack overflow caused when to_gpu/to_cpu/to_intel64 were overridden (#6849)
    • Fix unchain gradient pull (#6918, thanks @Rishav1!)
    • Fix filename arg of PlotReport (#6928)
    • Make InvalidType picklable (#6934, thanks @zaltoprofen!)
    • Fix terminal column width retrieval in backprop traceback in Python 2 (#6958)
    • Avoid using ImportError during import cupy (#7011)
    • Fix ConcatWithAsyncTransfer (#7019)
    • Set allow_pickle=True (#7048)
    • Fix subview of zero-sized arrays (#7051)
    • Fix At output offset (#7054)
    • Fix handling of ndarray offsets (#7056)
    • Fix construction of std::shared_ptr with custom deleter in chainer_interop.cc (#7109)
    • Add zero fill mode in allreduce of chainermn (#7142)

    Code Fixes

    • Fix comments and add TODO to indexing routines (#6793)
    • Refactor CUDA batch norm tensor descriptor (#6805)
    • Add cuda_internal::DeviceInternals to wrap handle etc. (#6826)
    • Clean up DeviceInternals (#6830)
    • Avoid importing object (#7121)
    • ChainerX op registration: normalization (#6851)

    Documentation

    • Append to v6 upgrade guide about Python 3.4 support drop (#6751)
    • Fix broken link in chainerx.md (#6916, thanks @tkat0!)
    • Use search index (#6930)
    • Fix typo in pickle_dataset.py (#6964)
    • Update ChainerX contribution guide (#6971)
    • Document chainer.testing.LinkTestCase (#7001, thanks @crcrpar!)
    • Update supported MPI list (#7113)
    • Document CHAINERX_ENABLE_BLAS environment variable (#7120)
    • Fix documentation of renamed arguments (#7123)
    • Backport #6595, #7099 and #7102 (#7152)

    Installation

    • Exclude protobuf 3.8.0rc1 from dependencies (#7088)

    Examples

    • Improve Chainer examples (#6753, thanks @crcrpar!)
    • Support ChainerX in reinforcement learning example (#6787)
    • Support ChainerX in VAE example (#6791)
    • Support ChainerX in word2vec example (#6795)
    • Support ChainerX in DCGAN example (#6799)
    • Support ChainerX in wavenet example (#6806)
    • Support ChainerX in CIFAR example (#6957)
    • Support ChainerX in text classification example (#6997)
    • Use filename in PlotReport example (#7009, thanks @crcrpar!)
    • Fix reinforcement_learning example to work with default dtype (#7049)

    Tests

    • Travis test against v6 branch (#6750)
    • Fix Travis with macOS (#6758)
    • Dodge nondifferentiable inputs in chainerx.max test (#6766)
    • Fix F.elu test numeric error (#6844)
    • Fix test failures in math test (#6850)
    • Relax tolerance for float16 in unary_math_function_unittest (#6919)
    • Fix F.rrelu test instability (#6920)
    • Fix F.max_pooling_nd test instability (#6927)
    • Relax tolerances and avoid non-differentiable points for FP16 in triplet loss tests (#6929)
    • Fold Travis test outputs (#6967)
    • Increase the default columns in Travis CI (#6973)
    • Fix flaky TestLogSumExp (#6999)
    • Fix flaky ChainerX max_pool test (#7002)
    • Fix test failure in test_spatial_transformer_sampler.py (#7020)
    • Quickfix: skip tests that fail with NumPy 1.16.3 (#7025)
    • Fix flaky Huber loss test (#7052)
    • Fix flaky chainerx.tan test (#7053)
    • Display pytest summary (#7090)
    • Display pytest summary (cont.) (#7091)
    • Make overly slow initializer tests faster (#7122)

    Others

    • Make git ignore chainerx/libchainerx.dylib (#6885)
  • v6.0.0rc1 (Apr 4, 2019)

    This is the release note of v6.0.0rc1. See here for the complete list of solved issues and merged PRs.

    Announcements

    • After this release, the master branch is switched to the development of v7 series. v6.0.0 will continue developing at the v6 branch.
    • (#6629) You can now access the product backlog (the list of tasks the ChainerX core team is willing to work on) as a spreadsheet here. Note that the sheet is actively edited by the ChainerX core dev team. The items are NOT promises; we may drop any feature from the list at any time, but you can use it to see in which direction development is heading in the near future.

    Highlights

    • Mixed precision training support is improved.
      • In particular, mixed precision mode (a.k.a. mixed16 dtype) is added. You can set the environment variable CHAINER_DTYPE=mixed16 to make Chainer choose appropriate dtypes for mixed precision training (float16 in most places, with float32 chosen automatically where it is better for precision or performance).
      • Loss scaling, which avoids underflow in backprop with float16, now supports a dynamic mode. In this mode, the scaling factor is adjusted during training so that backprop does not overflow. You can use it with (optimizer).loss_scaling(). See the documentation for details and the sketch after this list.
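
    A minimal sketch combining both features (hedged: the model is a placeholder, and the loss_scaling defaults are described in the documentation):

      import os
      os.environ['CHAINER_DTYPE'] = 'mixed16'  # must be set before importing chainer

      import chainer
      import chainer.links as L
      from chainer import optimizers

      model = L.Linear(None, 10)           # placeholder model
      optimizer = optimizers.SGD(lr=0.01)
      optimizer.setup(model)
      optimizer.loss_scaling()             # no fixed scale given: dynamic mode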

    Changes without compatibility

    • Deprecate old NCCL versions and related communicators (#6506)
      • Support of NCCL<2.3 is deprecated. We encourage users to use NCCL 2.3 or later ones.

    New Features

    • Human readable representation of link and chain (#4853, thanks @wkentaro!)
    • Add variable.item() (#5797, thanks @crcrpar!)
    • Refactor Link.to_device family (#5986)
    • Add decorrelated batch normalization (#6150, thanks @crcrpar!)
    • Add option unit to CupyMemoryProfileHook.print_report() (#6256, thanks @hitsgub!)
    • Add distributions.Independent (#6324, thanks @ganow!)
    • Dynamic loss scaling (#6337, thanks @anaruse!)
    • Add ChainerX FloorDivide (#6350)
    • Customizable forward output check in testing.FunctionTestCase (#6444)
    • Adding fp16 support to the ChainerMN communicators (#6448)
    • mixed16 mode and its support in L.BatchNormalization (#6456)
    • Add shape and dtype check before allreduce (#6461)
    • Add F.relu6 as an alias to F.clipped_relu (#6463, thanks @aksub99!)
    • Implementation of sigmoid for ChainerX (#6472, thanks @dido1998!)
    • Add minimum to chainerx (#6477, thanks @aksub99!)
    • Add square to chainerx (#6486, thanks @aksub99!)
    • Add chainerx.testing.integral_dtypes (#6526)
    • Support for chainer.mixed16 data type in PureNcclCommunicator (#6548)
    • Add LinkTestCase to simplify link tests (#6559)
    • Add Sin and Cos to chainerx (#6601, thanks @kshitij12345!)
    • Support for fp16 and mixed16 in MultiNodeBatchNormalization of ChainerMN (#6619)
    • Add tan, arcsin, arccos, arctan to ChainerX (#6703, thanks @IvanYashchuk!)

    Enhancements

    • Improve F.resize_images speed (#5753, thanks @grafi-tt!)
    • Improve F.group_normalization via cuDNN call (#5924, thanks @grafi-tt!)
    • Fix backward of F.average_pooling_nd with pad_value of None (#6332, thanks @crcrpar!)
    • Support for fp16 in naive comm (#6333)
    • Change backward of F.log_ndtr to avoid NaN (#6340)
    • Stop retaining y.grad on y.backward(retain_grad=False) (#6348)
    • Set requires_grad explicitly in gradient_check and function test (#6364)
    • Fix error messages in get_fans (#6365)
    • ChainerX dtype promotion: mathematical functions (#6379)
    • Mixed dtype: concatenate (#6381)
    • ResultType to take kind into account (#6419)
    • Improve FunctionTestCase error message (#6426)
    • Mixed dtype: arithmetics (#6432)
    • Change intermediate dtype of Adam for float16 parameters to float32 (#6442)
    • Mixed dtype: dot (#6443)
    • Avoid using pytest attributes during import (#6453)
    • Dot product for higher dimensions in ChainerX (#6476, thanks @dido1998!)
    • Remove dtype from chainerx.Scalar (#6481)
    • Mixed dtype: BatchNorm and FixedBatchNorm (#6484)
    • Support chainerx::Take indices of dtypes other than int64 (#6485)
    • Keep backward compatibility on cupy.cudnn.batch_normalization_forward_training (#6497)
    • Deprecate old NCCL versions and related communicators (#6506)
    • Mixed dtype chainerx::conv and chainerx::conv_transpose (#6510)
    • Support non-float cast in F.cast (#6518)
    • Remove restriction of x.dtype == b.dtype in F.convolution_nd and F.deconvolution_nd (#6524)
    • Avoid exposing chainerx.Scalar to Python (#6535)
    • Fix parameterize_pytest to allow parameterizing with tuples (#6554)
    • Change device spec (#6563)
    • Mixed dtype support in chainerx.linear (#6569)
    • Check lengths of args of chainer.grad (#6580)
    • Mixed dtype: comparison (#6590)
    • Fix linspace (#6605, thanks @kshitij12345!)
    • Add PerformanceWarning (#6617)
    • Implemented ChainerX version of Clipped ReLU forward (#6627, thanks @Harshan01!)
    • Allow comma separated keys in testing.product (#6635)
    • BatchNormalization to only allocate dummy mean and var in cuDNN path (#6656)
    • Generate shorter class names for parameterized tests (#6660)
    • ChainerX dynamic op registry (#6675)
    • Remove unnecessary broadcasts from F.layer_normalization (#6680, thanks @hitsgub!)
    • Remove unnecessary broadcasts from F.l2_normalization (#6681, thanks @hitsgub!)
    • Support cupy-cuda101 package (#6700)
    • Properly handle FP16 in D.Normal (#6709)
    • Mixed-dtype: minimum and maximum (#6713)
    • Op registration: indexing (#6718)
    • Op registration: logic (#6727)
    • Op registration: trigonometric (#6729)

    Bug Fixes

    • Forbid calling empty Sequential (#6304)
    • Fix fp16 issue in batch normalization (#6323, thanks @anaruse!)
    • Fix F.softmax_cross_entropy float16 under/overflow (#6366)
    • Fix lazy init of BatchNormalization link (#6369)
    • Fix str.join TypeError in FunctionTestCase helper (#6370)
    • Fix chainer.links.NStepRNN and its variants (#6415, thanks @crcrpar!)
    • Fix an off-by-one in slicing of chainerx::Array (#6540)
    • Fix more corner cases in chainerx::Slice (#6557)
    • Fix dimension check of chainerx::Linear (#6593, thanks @crcrpar!)
    • Fix ChainerX optimizer fallback for non-default devices (#6699)
    • Fix DeviceResident.to_gpu fallback argument (#6712)

    Code Fixes

    • Fix F632 (use == / != to compare str) (#6346)
    • Avoid # NOQA in docstrings (cont.) (#6356)
    • Fix comment style of op_utils.py (#6421)
    • Refactor chainerx::Linear (#6425)
    • Fix ResultTypeResolver multiple definitions (#6439)
    • Assert that input to array props formatter is a list or tuple (#6440)
    • Fix style of .clang-tidy (#6445)
    • Remove unnecessary AsContiguous in CudaConv::ConvGradWeight (#6520)
    • Remove commented out code from _BNMode (#6582)
    • Change the deprecated collections (#6645)
    • Remove obsolete assertions (#6648)
    • Allow ArrayBody::GetArrayNode to return null (#6658)
    • Make BackwardBuilder::Target less stateful (#6659)
    • Clean up test code (#6688)

    Documentation

    • Write guides to implement new-style functions (#4986)
    • Fix typo (#6384, thanks @aksub99!)
    • Fix Sphinx markups in RNNs docs (#6412, thanks @crcrpar!)
    • Fix documentation in TimerHook (#6433, thanks @hitsgub!)
    • Refactor documentation of F.prelu (#6455, thanks @fiarabbit!)
    • Fixes typo in docstring for classification_summary (#6515, thanks @yewang!)
    • Write TODOs to address Dot backward cast (#6537)
    • Override forward in LinkHook documentation (#6546, thanks @crcrpar!)
    • Remove duplicated entry in reference (#6571)
    • Fix F.rrelu documentation (#6581, thanks @fiarabbit!)
    • Add gradient_check.check_double_backward in reference (#6584)
    • Fix :meth: link (#6603, thanks @23pointsNorth!)
    • Update broken link in chainerx.md (#6610, thanks @kshitij12345!)
    • Improve docs and exception message in F.erfcx, F.erfcinv and F.erfinv (#6618)
    • Include a link to ChainerX product backlog (#6630)
    • Fix missing module declaration (#6662)
    • Fix chainer.backend.get_array_module documentation (#6663)
    • Fix typo: 'Notatition' -> 'Notation' (#6673, thanks @nai62!)
    • Fix test failures in FunctionNode implementation guide (#6734)

    Installation

    • Environment variable to set ChainerX Python binding build type (#6647)
    • Check CMAKE_BUILD_TYPE (#6664)

    Examples

    • Use args.out in train_cifar_custom_loop.py (#6378, thanks @crcrpar!)
    • Fix to use right device for DALI iterator in imagenet example (#6397)
    • Properly pass device ID to DALI pipelines in imagenet example (#6429)
    • Use __future__.division in imagenet example with Python2 (#6462)
    • Fix broken imagenet example (#6489)
    • Fix wavenet example to support the default dtype (#6536)
    • Use float division instead of __future__.division for Python2 (#6562)
    • Fix DCGAN example to work with default dtype (#6585)
    • Use F.matmul instead of F.batch_matmul in memnn example (#6611)
    • Remove unnecessary unchain_backward() in pix2pix example (#6634, thanks @hayato-maki!)
    • Fix file mode of mushrooms.csv (#6693)
    • Replace deprecated URLopener in download.py (#6694)

    Tests

    • Test all codes in guides/functions.rst (#6194)
    • Test various spatial_scale for roi_average_pooling_2d (#6238, thanks @knorth55!)
    • Test simplifications
      • Simplify F.swish test (#6306, thanks @ishanrai05!)
      • Simplify F.log_softmax test (#6320, thanks @ishanrai05!)
      • Simplify F.softmax_cross_entropy test (#6363)
      • Simplify F.softmax test (#6371, thanks @aksub99!)
      • Simplify F.fliplr test (#6389, thanks @ishanrai05!)
      • Simplify F.flipud test (#6390, thanks @ishanrai05!)
      • Simplify F.moveaxis test (#6392, thanks @ishanrai05!)
      • Simplify F.pad test (#6393, thanks @ishanrai05!)
      • Simplify F.squared_difference test (#6395, thanks @aksub99!)
      • Simplify F.minimum test (#6396, thanks @aksub99!)
      • Simplify F.maximum test (#6400, thanks @aksub99!)
      • Simplify tests of F.convolution_2d and F.convolution_nd (#6406, thanks @crcrpar!)
      • Simplify F.rollaxis test (#6408, thanks @ishanrai05!)
      • Simplify F.vstack test (#6410, thanks @ishanrai05!)
      • Simplify F.transpose test (#6458, thanks @ishanrai05!)
      • Simplify F.tile test (#6459, thanks @ishanrai05!)
      • Simplify F.swapaxes test (#6460, thanks @ishanrai05!)
      • Simplify F.resize_image test. (#6464, thanks @ishanrai05!)
      • Simplify F.expand_dims test (#6473, thanks @ishanrai05!)
      • Simplify F.prod test (#6479, thanks @aksub99!)
      • Simplify F.squeeze test (#6487, thanks @ishanrai05!)
    • Fix examples/.gitignore (#6391, thanks @crcrpar!)
    • Suppress warning in caffe test (#6402)
    • Add ChainerX test to FunctionTestCases (#6416)
    • Remove SPHINXOPTS env from Makefile (#6417)
    • Rewrite ChainerX connection tests (#6424)
    • Fix regex in test_print_report (#6430)
    • Fix duplicated test (#6434)
    • Add strides check in NumpyOpTest (#6437)
    • Rewrite ChainerX indexing tests (#6438)
    • Add float16 and float64 to F.group_normalization test (#6468, thanks @crcrpar!)
    • Rewrite ChainerX linalg tests (#6469)
    • Fix F.pad test for Python2 (#6478)
    • Fix input of F.vstack to a list of ndarrays (#6494, thanks @crcrpar!)
    • Change pytest version requirement (#6502)
    • Force camel case class name for OpTest (#6507)
    • Test result dtype permutation (#6511)
    • Fix test class name (#6532)
    • Rewrite ChainerX batch_norm test (#6542)
    • Rewrite ChainerX sorting tests (#6550)
    • Rewrite ChainerX logic tests (#6551)
    • Rewrite ChainerX activation tests (#6553)
    • Rewrite ChainerX manipulation tests (#6556)
    • Rewrite ChainerX fixed_batch_norm test (#6558)
    • Rewrite ChainerX pooling tests (#6560)
    • Rewrite ChainerX arithmetics tests (#6566)
    • Rewrite ChainerX math tests (#6568)
    • Fix tolerance in chainerx.divide test (#6573)
    • Improve arithmetics tests (#6577)
    • Adjust tolerances of F.einsum tests (#6588)
    • Check grads of inputs to test backward of collective communication (#6589)
    • Avoid mutating FunctionTestBase class attributes (#6599)
    • Avoid mutating LinkTestCase and LinkInitializersTestCase class attributes (#6600)
    • Make op_test decorator remove the previous class (#6602)
    • Use compute_60 instead of compute_50 to run test on P100 (#6633)
    • Destroy NCCL communicator after every use (#6636)
    • Run ChainerX python tests in debug build (#6649)
    • Suppress numpy warnings in math tests (#6651)
    • Fix testing condition of BatchNormalizationMultiGpuTest (#6652)
    • Remove C++ routines tests (#6667)
    • Minimize the Travis CI matrix (#6677)
    • Fix conflicts between #6432 and #6486 (#6679)
    • Stop clang-tidy test in Travis CI (#6682)
    • Fix tolerance in TestConvTranspose (#6691)
    • Rewrite the rest of math tests (#6695)
    • Fix test failure in cuDNN v7.5 (#6710)
    • Fix F.convolution_nd test for flake8 (#6711)
    • Relax tolerances in convolution_nd function test (#6728)
  • v5.4.0(Apr 4, 2019)

    This is the release note of v5.4.0, the final release of the v5.x series. See here for the complete list of solved issues and merged PRs.

    Enhancements

    • Fix error messages in get_fans (#6413)
    • Change backward of F.log_ndtr to avoid NaN (#6431)
    • Avoid using pytest attributes during import (#6470)
    • Support cupy-cuda101 package (#6701)

    Bug Fixes

    • Fix text_classification example fails on Python 3 (#5651, thanks @koreyou!)
    • Fix lazy init of BatchNormalization link (#6480)
    • Fix chainer.links.NStepRNN and its variants (#6517, thanks @crcrpar!)
    • Fix NCCL version check error in ChainerMN (#6504)

    Code Fixes

    • Avoid # NOQA in docstrings (#6549)
    • Change the deprecated collections (#6676)
    • Fix F632 (use ==/!= to compare str) (#6714)

    Documentation

    • Remove duplicated entry in reference (#6578)
    • Fix F.rrelu documentation (#6586, thanks @fiarabbit!)
    • Add gradient_check.check_double_backward in reference (#6587)
    • Override forward in LinkHook documentation (#6594, thanks @crcrpar!)
    • Fix :meth: link (#6614, thanks @23pointsNorth!)
    • Improve docs and exception message in F.erfcx, F.erfcinv and F.erfinv (#6632)
    • Fix missing module declaration (#6671)
    • Fix chainer.backend.get_array_module documentation (#6685)
    • Fix typo: 'Notatition' -> 'Notation' (#6686, thanks @nai62!)
    • Fixes typo in docstring for classification_summary (#6697, thanks @yewang!)
    • Write guides to implement new-style functions (#6730)

    Examples

    • Fix dali_util in imagenet example for fp16 (#6377, thanks @anaruse!)
    • Use args.out in train_cifar_custom_loop.py (#6411, thanks @crcrpar!)
    • Remove FP16 specific models from imagenet example (#6564)
    • Fix iterator syntax in MNIST custom loop example (#6565)
    • Use float division instead of __future__.division for Python2 (#6567)
    • Fix DCGAN example to work with default dtype (#6591)
    • Use F.matmul instead of F.batch_matmul in memnn example (#6631)

    Tests

    • Do not ignore FutureWarning other than experimental features (#6052)
    • Suppress warning in caffe test (#6409)
    • Test all codes in guides/functions (#6428)
    • Remove SPHINXOPTS env from Makefile (#6491)
    • Fix Python 3.4 NumPy Accelerate polyfit error (#6495)
    • Change pytest version requirement (#6513)
    • Adjust tolerances of F.einsum tests (#6672)
    • Fix test failure in cuDNN v7.5 (#6716)
  • v6.0.0b3(Feb 28, 2019)

    This is the release note of v6.0.0b3. See here for the complete list of solved issues and merged PRs.

    Highlights

    • Spectral Normalization is supported as a link hook (see the sketch after this list)
    • Kuzushiji-MNIST dataset is now available in chainer.datasets
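
    A minimal sketch of both highlights, assuming the hook is exposed as chainer.link_hooks.SpectralNormalization and the loader as chainer.datasets.get_kuzushiji_mnist; check the reference manual for the exact names and options.

        import chainer
        import chainer.links as L
        from chainer.datasets import get_kuzushiji_mnist

        # Attach spectral normalization to a link through the link hook mechanism.
        layer = L.Linear(20, 10)
        layer.add_hook(chainer.link_hooks.SpectralNormalization())

        # Kuzushiji-MNIST follows the same interface as chainer.datasets.get_mnist.
        train, test = get_kuzushiji_mnist()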

    Changes without compatibility

    • Raise NotImplementedError if Extension.__call__ is not overridden (#6095)
    • Fix get_retained_{in/out}puts to return None for None inputs/outputs (#6121)
    • Rename chainerx -> chx in public API (#6312)

    New Features

    • Unchain all variables after running extensions (#5539, thanks @hitsgub!)
    • Add spectral normalization link hook (#5742, thanks @crcrpar!)
    • Add non-deterministic warning (#5977)
    • Add finished property to once_trigger (#6023, thanks @hitsgub!)
    • Call Iterator.finalize from __del__ and __exit__ (#6098)
    • Add dilate argument to L.Deconvolution2D (#6175, thanks @crcrpar!)
    • Add create_mnbn_model (#6245)
    • Add option align_units to TimerHook.print_report() (#6254, thanks @hitsgub!)
    • Add Kuzushiji-MNIST dataset (#6295, thanks @wintercarver!)
    • Add synchronized iterator (#6345)
    • Converter decorator for ChainerX device support (#5832)
    • Add ChainerX CUDA float16 (#5845)
    • chainerx.ndarray.item (#6050)
    • chainerx.grad Python binding (#6063)
    • Unwrap ChainerX connected array from Variable (#6284)
    • chainerx::ResultType (#6347)

    Enhancements

    • Unify arguments of file names (#5357, thanks @crcrpar!)
    • Support spatial_scale >= 1.0 in roi_average_align_2d.py (#5634, thanks @knorth55!)
    • Support spatial_scale >= 1.0 in F.roi_max_align_2d (#5635, thanks @knorth55!)
    • Fix pseudo_connect with None input (#5652)
    • Enforce Link.__init__ in subclasses (#5927)
    • Add sequence and numpy array indices support to ndarray.take (#6081)
    • Reduce memory usage in MultiprocessParallelUpdater (#6100)
    • Fix get_retained_{in/out}puts to return None for None inputs/outputs (#6121)
    • Check input size consistency in RNN and LSTM when using cuDNN (#6169)
    • Add support for importing and exporting Caffe Sigmoid layer (#6234, thanks @notogawa!)
    • Add group option value of Convolution2D to Caffe exporter (#6241, thanks @ohnabe!)
    • Improve errors for disabled Variable operators (#6255)
    • DimsFormatter to print a list of dimensions (#6064)
    • Support FunctionNode None inputs in ChainerX (#6122)
    • Fix ChainerX fallback for replaced optimizer state (#6218)
    • Use FMA in NativeDevice::Dot (#6227)
    • Use float accumulation in ChainerX float16 Dot (#6246)
    • Make Chainer backprop modes affect ChainerX counterparts (#6278)
    • Support ChainerX TrueDivide for integer types (#6281)
    • Rename chainerx -> chx in public API (#6312)
    • Improve accuracy of ChainerX native float16 Sum (#6313)

    Performance Improvements

    • Optimize Variable.xp to avoid creation of Device instance (#6016)
    • Add Variable._init_unchecked() static method for faster instantiation (#6033)
    • Avoid contextmanager in backprop (#6264)
    • Improve F.relu performance with CuPy (#6268)
    • Improve get_variable performance (#6269)
    • Pass debug flag to backprop_step (#6286)
    • Improve hook handling in backward (#6289)
    • Improve performance of using_config (#6290)
    • Reduce chainer.is_debug() overhead (#6291)
    • Improve performance of using_device for NumPy and Intel64 devices (#6292)
    • Support NumPy integers in chainerx.ndarray.__getitem__ (#5989)

    Bug Fixes

    • Make signs generated by initializers.Orthogonal unbiased (#5615)
    • Use ideep in optimizers properly (#5985)
    • Fix warning message for backward on a scalar array (#6026)
    • Validate {Max,Average}Pool kernel_size and stride (#6066)
    • Validate Conv, ConvTranspose stride (#6067)
    • Fix cupy import failure detection (#6085)
    • Fix memory leak during backprop in Python 2 (#6105)
    • Fix FunctionNode.get_retained_outputs to return () if no output is retained (#6118)
    • Do not compare xp with numpy for cupy code path (#6126)
    • CuPy cannot be enabled when cuDNN is unavailable (#6138)
    • Fix double-backprop of F.rrelu (#6139)
    • Check Array constructor for nullptr (#6156)
    • Do not compare xp with numpy for cupy code path (cont.) (#6159)
    • Fix type of internally held grad after Parameter.to_device (#6170)
    • Fix Optimizer to convert state arrays back to ChainerX (#6171)
    • Fix error message of parameterized test (#6287)
    • Add Device.__ne__ for Python 2 (#6335)
    • Fix pickling of ChainerX link (#5988)
    • Fix thread safety of CUDA memory pool FreeUnusedBlocks (#5992)

    Code Fixes

    • Fix import order (#6128)
    • Simplify _check_grad_type (#6213)
    • Cosmetic fix to test_gradient_check (#6271)
    • Fix inappropriate usage of is_arrays_compatible (#6274)
    • Use utils.size_of_shape in F.convolution_nd and F.deconvolution_nd (#6329)
    • Use single quotes (#6352)
    • Simplify _array_to_gpu with stream argument (#6358)
    • Add NOLINT to reinterpret_cast (#6051)
    • Wrap platform specific operations and reduce macro usage (#6054)
    • Use py::isinstance to check types (#6083)
    • Use _has_chainerx_array in Variable (#6214)
    • Write comment about CHAINERX_VISIBILITY_HIDDEN (#6231)
    • Fix clang-tidy errors (#6267)

    Documentation

    • Make docs of functions refer ndarray (#6042)
    • Fix typo in classifier.py (#6090, thanks @hiden-cubist!)
    • Document NumPy 1.16 support (#6111)
    • Remove anchor to non-existing section (#6130)
    • Reorganize documentation for easier access to tutorials and examples (#6142)
    • Fix old and broken PTB url (#6177)
    • Add imports of initializers and math, required in "Define your own function" examples (#6179, thanks @Qwinpin!)
    • Update URL of PTB dataset (#6182)
    • Add upgrade guide for use of Link.forward method (#6183)
    • Avoid # NOQA in docstrings (#6184)
    • Add FunctionTestCase to documentation (#6189)
    • Add references for n-dimensional arrays (#6219)
    • Imagenet README.md typo (#6223)
    • Update docs for Python 3.4 end-of-life (#6300)
    • Remove duplicate periods in Installation section of README.md (#6339, thanks @crcrpar!)
    • Avoid # NOQA in docstrings (#6355)
    • Fix ChainerMN Step-by-Step Troubleshooting (#6328)
    • Document chainermn.links.create_mnbn_model (#6360)
    • Document ChainerX op test tool (#6354)

    Installation

    • Remove bad brew option from Travis CI (#6202)
    • Upgrade clang-tidy to 6.0 (#6062)
    • Use CMAKE_CURRENT_BINARY_DIR in CMakeLists.txt (#6114)
    • Set CMake policy in a proper way (#6166)
    • Make ChainerX compile on Windows (#6176, thanks @durswd!)

    Examples

    • Fix seq2seq example (#6091)
    • Fix iterator syntax in MNIST custom loop example (#6099)
    • Fix seq2seq example encoding problem on Python3 (#6205)
    • Minor fix on README of seq2seq example (#6206)
    • Remove FP16 specific models from imagenet example (#6215)
    • Remove PrintReport entries in seq2seq example (#6308)
    • Fix dali_util in imagenet example for fp16 (#6342, thanks @anaruse!)
    • ChainerX seq2seq example (#5830)
    • Fix ChainerX train_mnist.py example for NumPy 1.16 (#5999, thanks @Guriido!)
    • Fix to check chainerx device in ImageNet example (#6280)

    Tests

    • Simplify F.batch_renormalization test (#5817)
    • Simplify F.mean_squared_error test (#5822)
    • Simplify F.concat test (#5823)
    • Add Windows matrix in Travis CI (#5888)
    • Limit the length of parameterized test class name (#6060)
    • Simplify F.crelu and F.elu test (#6070)
    • Fix Travis CI ignoring non-last command errors in each step (#6082)
    • Fix chainermn tests (#6048)
    • Remove Travis macOS Py34 job (#6107)
    • Remove unused test step (#6123)
    • Move Jenkins mypy check to misc matrix (#6124)
    • Fix filtering FutureWarning (#6135)
    • Fix tolerance and numeric grad precision in F.triplet test (#6136)
    • Remove Travis Ubuntu Py34 job (#6149)
    • Remove commented-out Py34 matrix from AppVeyor (#6160)
    • Fix unit test collection timeout (#6164)
    • Add x_dtype and W_dtype to the if statement of FunctionTestCase._skip_if_chainerx_float16 (#6167, thanks @crcrpar!)
    • Stop mypy in CIs (#6172)
    • Simplify F.tanh test (#6173, thanks @crcrpar!)
    • Simplify F.sigmoid test (#6174, thanks @crcrpar!)
    • Simplify F.hard_sigmoid test (#6192, thanks @crcrpar!)
    • Rewrite the tests of F.average_pooling_2d (#6211, thanks @crcrpar!)
    • Rewrite linear function test (#6236, thanks @crcrpar!)
    • Simplify F.selu test (#6243, thanks @aksub99!)
    • Simplify F.softplus test (#6298, thanks @ishanrai05!)
    • Simplify F.leaky_relu test (#6301, thanks @aksub99!)
    • Simplify F.maxout test (#6302, thanks @aksub99!)
    • Simplify F.sum test (#6307, thanks @aksub99!)
    • Improve accuracy of test of F.rrelu (#6318)
    • Simplify F.diagonal test (#6322, thanks @ishanrai05!)
    • Write test types in Travis CI job names (#6361)
    • Check CUDA device after each test case of chainerx_tests (#6049)
    • Skip ChainerX float16 tests when FunctionTestCase is used (#6069)
    • Remove legacy CHAINERX_CUDA_MULTITHREAD_TEST_SEGV_WORKAROUND from Jenkins script (#6108)
    • Run ChainerX python tests in Travis CI (#6109)
    • Enable ChainerX C++ test in Travis CI (#6110)
    • ChainerX test tool for ops (#6248)
    • Use Chainer-style parameterization in ChainerX op test (#6334)
  • v5.3.0(Feb 28, 2019)

    This is the release note of v5.3.0. See here for the complete list of solved issues and merged PRs.

    Enhancements

    • Reduce memory usage in MultiprocessParallelUpdater (#6113)
    • Check input size consistency in RNN and LSTM when using cuDNN (#6186)
    • Add group option value of Convolution2D to Caffe exporter (#6293, thanks @ohnabe!)
    • Add support for importing and exporting Caffe Sigmoid layer (#6294, thanks @notogawa!)

    Performance Improvements

    • Improve F.relu performance with CuPy (#6270)
    • Reduce chainer.is_debug() overhead (#6297)

    Bug Fixes

    • Bugfix of MultiNodeOptimizer with loss scaling (#5783)
    • Fix BN+F.forget (#6076)
    • Fix cupy import failure detection (#6112)
    • Fix memory leak during backprop in Python 2 (#6125)
    • Use ideep in optimizers properly (#6143)
    • Fix dump_graph not to memory leak (#6147, thanks @hitsgub!)
    • Fix warning message for backward on a scalar array (#6319)

    Documentation

    • Fix wrong MNIST MLP anchor (#6055)
    • Fix document in NStepLSTM/NStepRNN (#6074)
    • Fix typo in classifier.py (#6102, thanks @hiden-cubist!)
    • Document NumPy 1.16 support (#6141)
    • Reorganize documentation for easier access to tutorials and examples (#6152)
    • Fix old and broken PTB url (#6180)
    • Add upgrade guide for use of forward method (#6193)
    • Add imports of initializers and math, required in "Define your own function" examples (#6220, thanks @Qwinpin!)
    • Add references for n-dimensional arrays (#6221)
    • Imagenet README.md typo (#6224)
    • Update URL of PTB dataset (#6239)
    • Make docs of functions refer ndarray (#6288)

    Examples

    • Refactor train_mnist_dual_parallel.py (#5716)
    • Fix seq2seq example (#6093)
    • Minor fix on README of seq2seq example (#6208)
    • Fix seq2seq example encoding problem on Python3 (#6209)
    • Remove PrintReport entries in seq2seq example (#6321)

    Tests

    • Fix tolerance and numeric grad precision in F.triplet test (#6144)
    • Fix chainermn tests (#6086)
  • v6.0.0b2(Jan 24, 2019)

    This is the release note of v6.0.0b2. See here for the complete list of solved issues and merged PRs.

    New Features

    • Asynchronous snapshot writers (#4472, thanks @tyohei!)
    • Add D.Cauchy (#5337)
    • Add D.Geometric (#5343)
    • Add cached_property decorator (#5416)
    • Make build_computational_graph accept single output (#5445)
    • Add trigger to be fired only once (#5565, thanks @hitsgub!)
    • Use default dtype in L.NegativeSampling (#5664)
    • Add optional property finished to trigger object (#5681, thanks @hitsgub!)
    • Support all float dtypes in F.spatial_transformer_sampler (#5751)
    • Add a naive TimerHook link hook. (#5842, thanks @crcrpar!)
    • Add F.as_strided (#5902, thanks @fiarabbit!)
    • Add 'mean' value as an option for VAE loss reduce (#5966, thanks @23pointsNorth!)

    Enhancements

    • Support inputs with ndim!=2 for F.huber_loss (#5534)
    • Show forward stacktrace in backward (#5603)
    • Add type check for r arg of F.rrelu (#5619)
    • Support unretained Variables in _check_grad_type (#5640)
    • FunctionNode automatic fallback of array attributes in forward (#5745)
    • Switch device during gradient_check (#5777)
    • Raise CuPy not available error early in cuda.GpuDevice initialization (#5780)
    • Add hasattr check to user-specified flush call to file-like objects. (#5794, thanks @grafi-tt!)
    • Support custom initializer in links.CRF1d (#5807, thanks @himkt!)
    • Remove F.clip type restriction (#5813)
    • Batched pack/unpack params before/after allreduce (#5829, thanks @anaruse!)
    • Remove unnecessary cast in F.huber_loss (#5835)
    • Reimplement F.LocalResponseNormalization as FunctionNode (#5851)
    • Stop managing memory in max pooling specific manner (#5861)
    • Do not retain input on iDeep F.relu (#5871, thanks @grafi-tt!)
    • Set grad of F.clip 1 at x_min and x_max (#5876, thanks @grafi-tt!)
    • Warn if reset method is not implemented in an iterator (#5882)
    • Cache attributes of distributions (#5892)
    • Use FunctionNode on ROIPooling2D (#5957)
    • Use more precise timer in function_hooks/timer.py (#5971, thanks @crcrpar!)
    • Improve F.elu memory consumption by retaining output (#5972, thanks @grafi-tt!)

    Bug Fixes

    • Fix dump_graph not to memory leak (#5538, thanks @hitsgub!)
    • Fix F.batch_normalization + F.forget combination (#5557)
    • Bugfix of MultiNodeOptimizer with loss scaling (#5659)
    • Fix usage of downsample_fb in resnet (#5737, thanks @milhidaka!)
    • Fix device argument passed to MultiprocessParallelUpdater being modified (#5739, thanks @Guriido!)
    • Fix bug when CuPy not installed and cuda.fuse decorator used without parentheses (#5809, thanks @grafi-tt!)
    • Fix F.cast gradient for casts between the same dtypes (#5811)
    • Accept splitting at the tail of dataset in split_dataset (#5895)
    • Fix broken F.leaky_relu grad when slope = 0 (#5898, thanks @grafi-tt!)
    • Add copyparams method to Sequential (#5914)
    • Override _to_device for consistency (#5948)
    • Allow import chainer.testing without pytest (#5973)
    • Raise an appropriate error on cuDNN RNN backward in testing mode (#5981)
    • Fix stochastic failure in WalkerAlias (#6057)

    Documentation

    • Remove deprecation notices for v1 and v2 in documentation (#5081)
    • Add description for initializer dtype (#5246)
    • Add Code of Conduct (#5629)
    • Improve installation guide of ChainerMN (#5656)
    • Add explanations for LeNet5 (#5686)
    • Make docs of activation functions refer ndarray (#5718)
    • Add robots.txt to hide older versions from search results (#5768)
    • Fix typo in v2 Upgrade Guide (#5771)
    • Fix a couple of broken links from markdown files (#5789)
    • Model Parallel Documentation (#5791, thanks @levelfour!)
    • Fix wording in documentation (#5795)
    • Write "Wx + b" in the document of Linear. (#5852)
    • Make docs of array functions refer ndarray (#5863)
    • Some small fixes to grammar and spelling (#5869)
    • Make docs of connection functions refer ndarray (#5875)
    • Fix static_graph module path in documentation (#5883)
    • Correct the stable version in master branch (#5891, thanks @jinjiren!)
    • Change .data to .array in Guides and Examples docs (#5907, thanks @jinjiren!)
    • Fix typo (#5915, thanks @MannyKayy!)
    • Transform dataset documentation fix (#5938, thanks @23pointsNorth!)
    • Fix typo (#5942)
    • Update the note in DCGAN example to be compatible with the code. (#5951, thanks @jinjiren!)
    • Fix doc of F.softmax_cross_entropy on output shape with reduce=no (#5965)
    • Make some docs of functions refer ndarray (#5975)
    • Fix document in NStepLSTM/NStepRNN (#5979)
    • Make docs of math functions refer ndarray (#6032)
    • Fix wrong MNIST MLP anchor (#6046)

    Installation

    • Check integrity of CuPy wheel for CUDA 10 (#5955)

    Examples

    • Add inference code to MNIST example (#4741)
    • Use iter.reset() in PTB example (#5834)
    • Some small improvements to the Mushrooms example (#5982)

    Tests

    • FunctionTestCase for function tests (#3499)
    • Test statistics of initializers (#5511)
    • Add test mode to text classification example (#5666)
    • Fix test of F.connectionist_temporal_classification (#5727)
    • Refactor tests of F.split_axis and F.concat (#5733)
    • Return exitcode of make html to Travis (#5769)
    • Fix testing.BackendConfig context for repeated use (#5779)
    • Encode parameters in parameterized class name (#5782)
    • Add test for conveter device argument in Evaluator (#5806)
    • Fix error message of testing.assert_allclose (#5814)
    • Refactor CI scripts (#5858)
    • Refactor Travis script (#5859)
    • Remove some CI requirements (#5865)
    • Allow multiple application of testing.parameterize (#5893)
    • Allow mixing testing.inject_backend_tests and testing.parameterize (#5904)
    • Adjust testing tolerance of numerical gradient (#5923)
    • Adjust testing tolerance of F.connectionist_temporal_classification (#5928)
    • Do not ignore FutureWarning other than experimental features (#5949)
    • Move mypy to static checks (#5987)
    • Skip test on Theano<=1.0.3 and NumPy>=1.16.0 (#6001)
    • Fix travis script to continue on failure in each step (#6002)
    • Fix inject_backend_tests multi_gpu test mark (#6028)
    • Allow doctest to run in single-GPU environment (#6029)
    • Test if the default CUDA device keeps being 0 after each test (#6044)

    ChainerX

    • Add ChainerX native float16 (#5761)
    • CuPy/ChainerX memory pool sharing (#5821)
    • Automatic ChainerX fallback of array attributes in Function (#5828)
    • ChainerX backward w.r.t. inputs (C++ chainerx.grad) (#5747)
    • Improve gradient mismatch error (#5748)
    • Forbid fallback get/setitem for arrays with backprop required (#5754)
    • Implement BFC algorithm in ChainerX CUDA memory pool (#5760)
    • Resolve _as_noncontiguous_array workaround for ChainerX (#5781)
    • L.NegativeSampling ChainerX support (#5816)
    • Stop using Unified Memory by default (#5912)
    • Avoid cudaMemcpyAsync for pinned memory for faster host-to-device transfer (#5940)
    • Remove chainerx.asscalar (#6007)
    • Fix scalar handling of indices_and_sections in chainerx.split (#5788)
    • Fix ChainerX Python docstring allocation issue (#5815)
    • Fix chainerx.maximum to restore CUDA device (#6043)
    • Build ChainerX on ReadTheDocs (#5766)
    • Add chainerx.ndarray to the ndarray doc (#5864)
    • Document CuPy memory pool sharing (#6017)
    • Do not overwrite user-specified CMAKE_CXX_FLAGS (#5770)
    • Patch files for macOS (#5776, thanks @ktnyt!)
    • Update pybind dependency to v2.2.4 (#5798)
    • Update gsl-lite to v0.32.0 (#5849)
    • Enable ChainerX in docker image (#5879)
    • Update third-party.cmake to follow the recent way (#5911)
    • Make ChainerX set up and compile on Windows (#5932, thanks @durswd!)
    • Fix visibility for pybind exception registration for macOS (#5936)
    • Fix manifest typos (#6065)
    • ChainerX MNIST C++ example (#5746)
    • Remove some TODOs of the chainerx resnet example (#5775)
    • Fix jenkins script to allow explicit repo root (#5774)
    • Fix to test against new chainerx.GradientError (#5787)
    • Add Travis matrix for macOS ChainerX tests (#5846)
    • Remove .circleci (#5860)
    • Add C++ linter checks in Travis CI (#5867)
    • Fix FixedCapacityDummyAllocator in CUDA memory pool test (#5993)
    • Fix CUDA specific Python binding (#6037)
    • Add chainerx-generated reference docs to .gitignore (#5805, thanks @knorth55!)
    • Disable clang-tidy modernize-use-auto (#5839)

    Code Fixes

    • Simplify batch normalization with cuDNN (#5568)
    • Add type hints for Link, LinkHook, Initializer and ChainerX (#5675)
    • Refactor gradient setter in gradient_check (#5699)
    • Use new RNN implementation (#5726)
    • Backprop from multiple variables (#5741)
    • Fixes for clang (#5744)
    • Improve coding style (#5763)
    • Fix style of setup.py (#5764)
    • Code enhancements: avoid array copies (#5800)
    • Random code enhancements (#5801)
    • Add comment to MultiprocessIterator.__copy__ (#5833)
    • Move workaround utils._getitem/_setitem to chainerx (#5840)
    • Fix clang-tidy error (#5870)
    • Fix typo on internal attribute (#5894)
    • Fix clang-tidy warnings on clang-tidy 6 (#5901)
    • Fix for clang-tidy 7 (#5933)
    • Fix code formatting (#5941)
    • Remove @overload annotations outside the stub files (#5960)
    • Avoid deprecated numpy.asscalar (#5994)
    • Post macro comment for consistency (#6014)
    • Remove chainerx.asscalar from mypy stub file (#6024)

    Others

    • Fix .gitignore to avoid ignoring some necessary files (#5836)
    • Allow skipping linkcode in docs with environment variable (#5868)
  • v5.2.0(Jan 24, 2019)

    This is the release note of v5.2.0. See here for the complete list of solved issues and merged PRs.

    New Features

    • Support default dtype in L.BinaryHierarchicalSoftmax (#5714)
    • Support all float dtypes in F.embed_id (#5926)
    • Support all float dtypes in F.spatial_transformer_sampler (#6003)
    • Support all float dtypes in F.connectionist_temporal_classification (#6011)
    • Support all float dtypes in F.det and F.inv (#6012)
    • Use default dtype in L.NegativeSampling (#6013)
    • Introduce utils.mixed_precision decorator (#6022)
    • Add a naive TimerHook link hook (#6038, thanks @crcrpar!)

    Enhancements

    • Change Link.add_hook to return self (#5750, thanks @crcrpar!)
    • Add hasattr check to user-specified flush call to file-like objects (#5803, thanks @grafi-tt!)
    • Support unretained Variables in _check_grad_type (#5826)
    • Use new RNN implementation (#5827)
    • Simplify batch normalization with cuDNN (#5853)
    • Reimplement F.LocalResponseNormalization as FunctionNode (#5900)
    • Support custom initializer in links.CRF1d (#5905, thanks @himkt!)
    • Use FunctionNode on ROIPooling2D (#5967)
    • Fix error message of testing.assert_allclose (#5984)
    • Use more precise timer in function_hooks/timer.py (#6021, thanks @crcrpar!)

    Bug Fixes

    • Fix BatchNormalization with lazy initialization fail on GPU (#5713, thanks @koreyou!)
    • Fix device argument passed to MultiprocessParallelUpdater being modified (#5790, thanks @Guriido!)
    • Fix F.cast gradient for casts between the same dtypes (#5818)
    • Fix bug when CuPy not installed and cuda.fuse decorator used without parentheses (#5825, thanks @grafi-tt!)
    • Fix usage of downsample_fb in resnet (#5850, thanks @milhidaka!)
    • Accept splitting at the tail of dataset in split_dataset (#5899)
    • Fix broken F.leaky_relu grad when slope = 0 (#5922, thanks @grafi-tt!)
    • Raise an appropriate error on cuDNN RNN backward in testing mode (#5983)
    • Add copyparams method to Sequential (#5990)
    • Allow import chainer.testing without pytest (#5998)
    • Fix .gitignore to avoid ignoring some necessary files (#5838)

    Documentation

    • Fix image URL in README (#5755, thanks @levelfour!)
    • Fix typo in v2 Upgrade Guide (#5772)
    • Fix a couple of broken links from markdown files (#5792)
    • Fix wording in documentation (#5820)
    • Make docs of activation functions refer ndarray (#5831)
    • Model Parallel Documentation (#5843, thanks @levelfour!)
    • Add explanations for lenet5 (#5855)
    • Add description for initializer dtype (#5872)
    • Add Code of Conduct (#5873)
    • Make docs of array functions refer ndarray (#5881)
    • [v5] Document optional arguments as None (#5886)
    • Make docs of connection functions refer ndarray (#5889)
    • Fix static_graph module path in documentation (#5906)
    • Change .data to .array in Guides and Examples docs (#5913, thanks @jinjiren!)
    • Fix typo (#5917, thanks @MannyKayy!)
    • Write "Wx + b" in the document of Linear. (#5919)
    • Improve installation guide of ChainerMN (#5937)
    • Transform dataset documentation fix (#5947, thanks @23pointsNorth!)
    • Update the note in DCGAN example to be compatible with the code. (#5962, thanks @jinjiren!)
    • Fix doc of F.softmax_cross_entropy on output shape with reduce=no (#5969)
    • Make some docs of functions refer ndarray (#5976)
    • Make docs of math functions refer ndarray (#6034)

    Installation

    • Check integrity of CuPy wheel for CUDA 10 (#5956)

    Examples

    • Use iter.reset() in PTB example (#5857)

    Tests

    • Add test mode to text classification example (#5784)
    • Adjust testing tolerance of numerical gradient (#5946)
    • Test statistics of initializers (#5961)
    • Fix pytest plugin version (#5968)
    • Adjust testing tolerance of F.connectionist_temporal_classification (#6035)
    • Test if the default CUDA device keeps being 0 after each test (#6047)
  • v6.0.0b1(Dec 3, 2018)

    This is the release note of v6.0.0b1. See here for the complete list of solved issues and merged PRs.

    Highlights

    ChainerX

    ChainerX is an ndarray implementation with Define-by-Run automatic differentiation capability. It roughly corresponds to "NumPy/CuPy + Chainer Variable", with the following additional features:

    • Speed: The whole ndarray and autograd implementation is written in C++ with a thin Python binding, which reduces the overhead present in the pure Python implementation of Chainer.
    • Extensibility: The backend is pluggable, so support for new devices is much easier to add.

    The best speed is achieved by using ChainerX APIs directly, while a compatibility layer through the conventional Variable interface eases adoption of ChainerX in existing projects. See the ChainerX Tutorial for more details and concrete examples.
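
    As a taste of define-by-run autograd on raw ChainerX arrays, here is a minimal sketch; it assumes the require_grad/backward API described in the ChainerX Tutorial.

        import chainerx as chx

        x = chx.array([1.0, 2.0, 3.0], dtype=chx.float32)
        x.require_grad()        # mark the array so that the graph is recorded
        y = (x * x).sum()       # define-by-run: the graph is built as ops execute
        chx.backward(y)         # backprop through the recorded graph
        print(x.grad)           # gradient of y w.r.t. x: [2. 4. 6.]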

    New Features

    • Implement double backward of SLSTM function (#4824, thanks @tohmae!)
    • Add F.roi_max_align_2d (#5198, thanks @knorth55!)
    • Add F.roi_average_pooling_2d (#5285, thanks @knorth55!)
    • Add F.roi_max_pooling_2d (#5304, thanks @knorth55!)
    • Support all float dtypes in F.negative_sampling (#5336)
    • Add D.Chisquare (#5338)
    • Add D.Gumbel (#5352)
    • Add D.Poisson (#5364)
    • Add D.OneHotCategorical (#5372)
    • Serialize BestValueTrigger (#5402, thanks @ktns!)
    • Add return_samples argument to F.negative_sampling and L.NegativeSampling (#5597)
    • Support all float dtypes in F.embed_id (#5624)
    • Support default dtype in L.BlackOut (#5638)
    • Support default dtype in L.BinaryHierarchicalSoftmax (#5648)
    • Support all float dtypes in F.connectionist_temporal_classification (#5680)
    • ChainerX (#5725)

    Enhancements

    • Add type compatibility check in npz deserializer (#5483)
    • Use cupy.linalg.det in F.det (#5525)
    • Avoid unnecessary copy in ndarray.astype (#5547)
    • Avoid cuDNN handle around DropoutStates (#5563)
    • Simplify softmax with cuDNN (#5566)
    • Simplify pooling with cuDNN (#5567)
    • Add KL divergence test for D.OneHotCategorical (#5587)
    • Add compute_stream argument in ConcatWithAsyncTransfer to allow more overlap between computation and transfer in CUDA (#5606, thanks @anaruse!)
    • Use chainer.utils.size_of_shape in ChainerMN (#5610)
    • Import testing/backend.py definitions in testing/__init__.py (#5633)
    • Avoid using dtype char codes (#5646)
    • More consistent use of Variable.array in codes under links (#5657, thanks @crcrpar!)
    • Use automatic broadcasting instead of F.repeat (#5662)
    • Refactor the state machine of iterators that iterate over indices (#5669, thanks @grafi-tt!)
    • Refactor train_mnist_dual_parallel.py (#5678)
    • Change Link.add_hook to return self (#5736, thanks @crcrpar!)

    Bug Fixes

    • Fix reporter.Summary float value deserialization (#5482)
    • Fix text_classification example fails on Python 3 (#5591, thanks @koreyou!)
    • Improve iDeep version checking (#5600)
    • Fix D.OneHotCategorical (#5604)
    • Fix Python 3.7 test failures in F.roi_average_pooling_2d (#5611)
    • Fix F.negative_sampling output dtype in CPU mode (#5613)
    • Fix args check in F.roi_average_align_2d and F.roi_average_pooling_2d (#5627, thanks @knorth55!)
    • Fix L.BatchNormalization with lazy initialization fail on GPU (#5683, thanks @koreyou!)

    Documentation

    • Simplify array type information fields in function documentation (#4887)
    • Update installation guide of numpy with openblas on macOS (#5021)
    • Add links to ChainerCV documentation (#5434)
    • Add ChainerMN paper to references (#5570)
    • Fix docstring of F.forget (#5586, thanks @fiarabbit!)
    • Fix typo in updaters (#5589, thanks @okayu9!)
    • Fix extensions guide error regarding method to implement (#5602, thanks @lehy!)
    • Update F.roi_average_align_2d doc to refer wrapper function (#5609, thanks @knorth55!)
    • Fix a typo in Chain example code (#5653)
    • Fix typo in F.max_pooling_nd docstring (#5654)
    • Fix a typo in chainer.distributions documentation (#5658)
    • Add documentation of ndarray (#5660)
    • Fix typo in L.ResNetLayers (#5665, thanks @takaaki82!)
    • Minor typo correction (in docs/variables). (#5670, thanks @grigorisg9gr!)
    • Fix typo in docstrings (#5676)
    • Fix docs for backprop_step (#5692)
    • Make docs in chainer.distributions refer ndarray (#5717)
    • Fix image URL in README (#5720, thanks @levelfour!)
    • Add warning in ChainerX documentation (#5752)

    Installation

    • Require setuptools and add docs (#5532)

    Examples

    • Add WaveNet example (#4922, thanks @dhgrs!)
    • Rewrite the example of VAE using Chainer distributions (#5356, thanks @ganow!)

    Tests

    • Fix test warnings in NumPy 1.15 (#5596)
    • Fix test of F.rrelu (#5618)
    • Fix regex of protobuf modules warned by Python 3.7 (#5642)
    • Ignore h5py warning in Python 3.7 (#5691)
    • Add gradient consistency checks in numerical_grad (#5698)

    Other

    • Update style check tools to the versions compatible with pycodestyle 2.4 (#5643)