An Open Source Machine Learning Framework for Everyone

Overview

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.

TensorFlow provides stable Python and C++ APIs, as well as APIs for other languages that are not guaranteed to be backward compatible.

Keep up to date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, enabling GPU support, using a Docker container, and building from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.
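
For example:

$ pip install --upgrade tensorflow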

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.
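
As a slightly larger illustration, the sketch below trains a small Keras classifier on MNIST, along the lines of the beginner tutorial; the layer sizes and epoch count are illustrative choices, not prescribed values.

import tensorflow as tf

# Load the MNIST dataset (downloads on first use) and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small feed-forward classifier.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)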

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs. Please see TensorFlow Discuss for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

Continuous build status

Official Builds

Build Type Status Artifacts
Linux CPU Status PyPI
Linux GPU Status PyPI
Linux XLA Status TBA
macOS Status PyPI
Windows CPU Status PyPI
Windows GPU Status PyPI
Android Status Download
Raspberry Pi 0 and 1 Status Py3
Raspberry Pi 2 and 3 Status Py3
Libtensorflow MacOS CPU Status Nightly GCS Official GCS
Libtensorflow Linux CPU Status Nightly GCS Official GCS
Libtensorflow Linux GPU Status Nightly GCS Official GCS
Libtensorflow Windows CPU Status Nightly GCS Official GCS
Libtensorflow Windows GPU Status Nightly GCS Official GCS

Community Supported Builds

Build Type Status Artifacts
Linux AMD ROCm GPU Nightly Build Status Nightly
Linux AMD ROCm GPU Stable Release Build Status Release 1.15 / 2.x
Linux s390x Nightly Build Status Nightly
Linux s390x CPU Stable Release Build Status Release
Linux ppc64le CPU Nightly Build Status Nightly
Linux ppc64le CPU Stable Release Build Status Release 1.15 / 2.x
Linux ppc64le GPU Nightly Build Status Nightly
Linux ppc64le GPU Stable Release Build Status Release 1.15 / 2.x
Linux aarch64 CPU Nightly (Linaro) Build Status Nightly
Linux aarch64 CPU Stable Release (Linaro) Build Status Release 1.x & 2.x
Linux aarch64 CPU Nightly (OpenLab) Python 3.6 Build Status Nightly
Linux aarch64 CPU Stable Release (OpenLab) Build Status Release 1.15 / 2.x
Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Nightly Build Status Nightly
Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Stable Release Build Status Release 1.15 / 2.x
Red Hat® Enterprise Linux® 7.6 CPU & GPU Python 2.7, 3.6 Build Status 1.13.1 PyPI

Community Supported Containers

Container Type Status Artifacts
TensorFlow aarch64 Neoverse-N1 CPU Stable (Linaro) Debian Static Release 2.3

Resources

Learn more about the TensorFlow community and how to contribute.

License

Apache License 2.0

Comments
  • Error : Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

    Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
    • TensorFlow installed from (source or binary): Source and Binary (tried both)
    • TensorFlow version: 1.12
    • Python version: 3.6
    • Installed using virtualenv? pip? conda?: conda
    • Bazel version (if compiling from source): 0.18
    • GCC/Compiler version (if compiling from source): gcc 5.4.0
    • CUDA/cuDNN version: Cudnn - 7.4 , CUDA- 9.0
    • GPU model and memory: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.8225 8GB

    Describe the problem: I tried installing tensorflow 1.12 using both pip install and building from source. However, when I try to run a Faster R-CNN model I get the following error message: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

    I only get this with tf 1.12 and python 3.6; it works fine with python 3.6.

    Provide the exact sequence of commands / steps that you executed before running into the problem

    Any other info / logs

    Traceback (most recent call last):
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
        return fn(*args)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
        options, feed_dict, fetch_list, target_list, run_metadata)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
      [[{{node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_4__cf__7)]]
      [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_21/Gather/GatherV2_2/_211}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7500_...GatherV2_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 103, in worker
        initializer(*initargs)
      File "detection_app.py", line 67, in worker
        output_q.put(y.get_stats_and_detection(frame))
      File "/home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py", line 142, in get_stats_and_detection
        boxes, scores, classes, num = self.processFrame(img)
      File "/home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py", line 76, in processFrame
        feed_dict={self.image_tensor: image_np_expanded})
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
        run_metadata_ptr)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
        feed_dict_tensor, options, run_metadata)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
        run_metadata)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
      [[node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D (defined at /home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py:36) = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_4__cf__7)]]
      [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_21/Gather/GatherV2_2/_211}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7500_...GatherV2_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    Caused by op 'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D', defined at:
      File "detection_app.py", line 94, in <module>
        pool = Pool(args.num_workers, worker, (input_q, output_q))
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/context.py", line 119, in Pool
        context=self.get_context())
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 174, in __init__
        self._repopulate_pool()
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 239, in _repopulate_pool
        w.start()
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 105, in start
        self._popen = self._Popen(self)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
        return Popen(process_obj)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
        self._launch(process_obj)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/popen_fork.py", line 73, in _launch
        code = process_obj._bootstrap()
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
        self.run()
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 93, in run
        self._target(*self._args, **self._kwargs)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 103, in worker
        initializer(*initargs)
      File "detection_app.py", line 62, in worker
        y = DetectorAPI()
      File "/home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py", line 36, in __init__
        tf.import_graph_def(od_graph_def, name='')
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
        return func(*args, **kwargs)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
        _ProcessNewOps(graph)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps
        for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3440, in _add_new_tf_operations
        for c_op in c_api_util.new_tf_operations(self)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3440, in <listcomp>
        for c_op in c_api_util.new_tf_operations(self)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3299, in _create_op_from_tf_operation
        ret = Operation(c_op, self)
      File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
        self._traceback = tf_stack.extract_stack()

    UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
      [[node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D (defined at /home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py:36) = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_4__cf__7)]]
      [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_21/Gather/GatherV2_2/_211}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7500_...GatherV2_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    stat:awaiting response type:build/install 
    opened by deepakrai9185720 299
  • Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

    Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes and No (described below)
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Manjaro
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
    • TensorFlow installed from (source or binary): tf-nightly-gpu (Dec 19, r1.13)
    • TensorFlow version (use command below): 1.13.0-dev20181219
    • Python version: 3.7.1
    • Bazel version (if compiling from source):
    • GCC/Compiler version (if compiling from source):
    • CUDA/cuDNN version: CUDA 10 with cuDNN 7.4.1
    • GPU model and memory: RTX 2070 8GB

    Describe the current behavior: I'm running the CNN model on MNIST. When running with the GPU, I encounter:

    2018-12-20 20:09:13.644176: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

    I did some digging and realized that it is a memory issue (which shouldn't be the case, as I have 32GB of RAM and 64GB of swap). I ran htop while running the model and have 20+GB free, which is more than enough to fit the 8GB vRAM mappings.

    Using the gpu_options.allow_growth = True gets the model to work properly, and setting os.environ['CUDA_VISIBLE_DEVICES'] = '-1' also works. This means that I AM facing a memory issue, but I don't see how.

    Also, using gpu_options.allow_growth = True does not fix the same issue when trying to run tensorflow/models/official/mnist/ model, which should have a similar behavior with my code.
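
    For reference, a minimal sketch of this workaround in both APIs; the TF 2.x call (tf.config.experimental.set_memory_growth) is the later equivalent of gpu_options.allow_growth, and the snippet assumes it runs before any GPU op:

    import tensorflow as tf

    # TF 1.x style: let the GPU allocator grow on demand instead of pre-allocating.
    # config = tf.ConfigProto()
    # config.gpu_options.allow_growth = True
    # sess = tf.Session(config=config)

    # TF 2.x equivalent: enable memory growth for each physical GPU.
    for gpu in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)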

    Code to reproduce the issue

    import os
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import math
    import time
    # Killing optional CPU driver warnings
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    # os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
    tf.logging.set_verbosity(tf.logging.ERROR)
    
    
    class Model:
    
        def __init__(self, image, label):
            """
            A Model class contains a computational graph that classifies images
            to predictions. Each of its methods builds part of the graph
            on Model initialization. Do not modify the constructor, as doing so
            would break the autograder. You may, however, add class variables
            to use in your graph-building. e.g. learning rate, 
    
            image: the input image to the computational graph as a tensor
            label: the correct label of an image as a tensor
            prediction: the output prediction of the computational graph,
                        produced by self.forward_pass()
            optimize: the model's optimizing tensor produced by self.optimizer()
            loss: the model's loss produced by computing self.loss_function()
            accuracy: the model's prediction accuracy
            """
            self.image = image
            self.label = label
    
            # TO-DO: Add any class variables you want to use.
    
            self.prediction = self.forward_pass()
            self.loss = self.loss_function()
            self.optimize = self.optimizer()
            self.accuracy = self.accuracy_function()
    
        def forward_pass(self):
            """
            Predicts a label given an image using convolution layers
    
            :return: the prediction as a tensor
            """
            filter_1 = tf.Variable(tf.truncated_normal([3, 3, 1, 8], stddev=0.1))
            conv_1 = tf.nn.conv2d(self.image, filter_1, [1, 1, 1, 1], "SAME")
    
            reshaped = tf.reshape(conv_1, shape=[50, -1])
    
            L1 = reshaped.shape[1].value
            L2 = 500
            W1 = tf.Variable(tf.random_normal([L1, L2], mean=0, stddev=0.01))
            b1 = tf.Variable(tf.random_normal([L2], mean=0, stddev=0.01))
            relu_1 = tf.nn.relu(tf.matmul(reshaped, W1) + b1)
    
            W2 = tf.Variable(tf.random_normal([L2, 10], mean=0, stddev=0.01))
            b2 = tf.Variable(tf.random_normal([10], mean=0, stddev=0.01))
            logits = tf.nn.relu(tf.matmul(relu_1, W2) + b2)
            return logits
    
        def loss_function(self):
            """
            Calculates the model cross-entropy loss
    
            :return: the loss of the model as a tensor
            """
            loss = tf.losses.softmax_cross_entropy(onehot_labels=self.label, logits=self.prediction)
            return loss
    
        def optimizer(self):
            """
            Optimizes the model loss using an Adam Optimizer
    
            :return: the optimizer as a tensor
            """
            learning_rate = 0.1
            sgd = tf.train.GradientDescentOptimizer(learning_rate)
            train = sgd.minimize(self.loss)
            return train
    
        def accuracy_function(self):
            """
            Calculates the model's prediction accuracy by comparing
            predictions to correct labels – no need to modify this
    
            :return: the accuracy of the model as a tensor
            """
            correct_prediction = tf.equal(tf.argmax(self.prediction, 1),
                                          tf.argmax(self.label, 1))
            return tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    
    def main():
        t_start = time.time()
    
        mnist = input_data.read_data_sets("data/mnist/", one_hot=True)
        batch_sz = 50
        batch = 2000
    
        inputs = tf.placeholder(shape=[batch_sz, 28, 28, 1], dtype=tf.float32)
        labels = tf.placeholder(shape=[batch_sz, 10], dtype=tf.float32)
    
        model = Model(inputs, labels)
    
        session_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
        sess = tf.Session(config=session_config)
    
        # sess = tf.Session()
    
        sess.run(tf.global_variables_initializer())
        for i in range(batch):
            next_image, next_label = mnist.train.next_batch(batch_sz)
            next_image = next_image.reshape((batch_sz, 28, 28, 1))
            sess.run(model.optimize, feed_dict={inputs: next_image, labels: next_label})
    
        acc, test_images, test_labels = 0, mnist.test.images, mnist.test.labels
        test_batch = math.ceil(len(test_images) / batch_sz)
        for i in range(test_batch):
            batch_images = test_images[i * batch_sz: (i + 1) * batch_sz]
            batch_images = batch_images.reshape((batch_sz, 28, 28, 1))
            batch_labels = test_labels[i * batch_sz: (i + 1) * batch_sz]
            acc += sess.run(model.accuracy, feed_dict={inputs: batch_images, labels: batch_labels})
        acc /= test_batch
        print(acc)
    
        print(time.time() - t_start, 'seconds')
    
        return
    
    
    if __name__ == '__main__':
        main()
    
    stat:awaiting response type:bug stalled comp:gpu TF 2.0 
    opened by michaelmyc 186
  • Win10: ImportError: DLL load failed: The specified module could not be found

    System information:

    • Have I written custom code: No
    • OS Platform and Distribution: Windows 10 Pro updated
    • Mobile device: None
    • TensorFlow installed from: pip install
    • TensorFlow version: 1.11.0
    • Python Version: 3.6.6
    • Bazel version: not installed
    • CUDA/cuDNN version: CUDA 9.0, cuDNN 8.0
    • GPU model and memory: GF-GTX970 STRIX
    • Exact command to reproduce: pip install tensorflow; pip install tensorflow-gpu; python; import tensorflow as tf

    Problem

    I have had this error consistently, even after trying to downgrade to older versions of the CUDA toolkit, cuDNN, Python, tensorflow and tensorflow-gpu. I have updated my environment variables. I have installed the Visual C++ Redistributable Update. I have read and tried to follow the solutions from other similar issues (such as #10033 and #17101), but have not succeeded in fixing the problem.
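
    One way to narrow down which DLL is missing is to try loading the CUDA libraries directly with ctypes; a hedged sketch (the DLL names below assume CUDA 9.0 and cuDNN 7 and are illustrative, not taken from this report):

    import ctypes

    # Hypothetical DLL names for CUDA 9.0 / cuDNN 7 on Windows; adjust to your install.
    for name in ("cudart64_90.dll", "cublas64_90.dll", "cudnn64_7.dll"):
        try:
            ctypes.WinDLL(name)
            print(name, "loaded OK")
        except OSError as e:
            print(name, "failed to load:", e)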

    Log

    C:\Users\user>python
    Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow as tf
    Traceback (most recent call last):
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
        from tensorflow.python.pywrap_tensorflow_internal import *
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
        _pywrap_tensorflow_internal = swig_import_helper()
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
        _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module
        return load_dynamic(name, filename, file)
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic
        return _load(spec)
    ImportError: DLL load failed: The specified module could not be found.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
        from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
        from tensorflow.python import pywrap_tensorflow
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
        raise ImportError(msg)
    ImportError: Traceback (most recent call last):
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
        from tensorflow.python.pywrap_tensorflow_internal import *
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
        _pywrap_tensorflow_internal = swig_import_helper()
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
        _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module
        return load_dynamic(name, filename, file)
      File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic
        return _load(spec)
    ImportError: DLL load failed: The specified module could not be found.

    type:build/install subtype:windows 
    opened by damcclane 184
  • Windows Support and Documentation

    I was excited to see tensorflow, but like many other users, we are on Windows; it would be nice to see this support happen. Will you accept Windows port contributions?

    In the meantime, Microsoft recently released their Deep Learning toolkit which scales on multiple machines with GPUs for both Linux and Windows. https://github.com/Microsoft/CNTK

    opened by mohamedmansour 180
  • Upgrade to CuDNN 7 and CUDA 9

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows Server 2012
    • TensorFlow installed from (source or binary): binary
    • TensorFlow version (use command below): 1.3.0-rc1
    • Python version: 3.5.2
    • Bazel version (if compiling from source): N/A
    • CUDA/cuDNN version: CUDA V8.0.44, CuDNN 6.0
    • GPU model and memory: Nvidia GeForce GTX 1080 Ti, 11 GB
    • Exact command to reproduce: N/A

    Describe the problem

    Please upgrade TensorFlow to support CUDA 9 and CuDNN 7. Nvidia claims this will provide a 2x performance boost on Pascal GPUs.

    type:feature 
    opened by tpankaj 170
  • Windows C++ tensorflow_cc.dll has overlapping memory address between string gpu options for "allocator type" and "visible device list"

    Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: NA
    • TensorFlow installed from (source or binary): source
    • TensorFlow version (use command below): 1.12.0 branched from 5b900cfe4b3b848f577315a0dde09a729f770e95
    • Python version: NA
    • Bazel version (if compiling from source): 0.19.2
    • GCC/Compiler version (if compiling from source): MSVC 2015
    • CUDA/cuDNN version: 10.0.130, 9.2.148
    • GPU model and memory: NVIDIA GP100 16Gb

    You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: NA

    Describe the current behavior

    I am creating a session as follows, adapted from the original code:

       std::unique_ptr<tensorflow::Session> session;
       tensorflow::SessionOptions options;
       tensorflow::ConfigProto* config = &options.config;
       float fraction = 0.8f;
       int whichGPU = 0;
       int cuda_device_count = 1;
       tensorflow::GraphDef graph_def;
       tensorflow::Status status = tensorflow::ReadBinaryProto(tensorflow::Env::Default(), "C:\\models\\graph.pb", &graph_def);
       auto* device_count = options.config.mutable_device_count();
       device_count->insert({ "GPU", cuda_device_count });
       device_count->insert({ "CPU", 1 });
       options.config.mutable_gpu_options()->set_per_process_gpu_memory_fraction(fraction);
       options.config.mutable_gpu_options()->set_visible_device_list(std::to_string(whichGPU));
       session.reset(tensorflow::NewSession(options));
       session->Create(graph_def);
    

    which results in

        2020-05-12 09:41:28.214176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
        name: Quadro GP100 major: 6 minor: 0 memoryClockRate(GHz): 1.4425
        pciBusID: 0000:01:00.0
        totalMemory: 16.00GiB freeMemory: 13.28GiB
        2020-05-12 09:41:28.215329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
        2020-05-12 09:41:28.952392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
        2020-05-12 09:41:28.952785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
        2020-05-12 09:41:28.953095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
        2020-05-12 09:41:28.953962: E tensorflow/core/common_runtime/gpu/gpu_process_state.cc:106] Invalid allocator type: 0
        2020-05-12 09:41:28.954425: E tensorflow/core/common_runtime/session.cc:64] Failed to create session: Internal: Failed to get memory allocator for TF GPU 0 with 6899999744 bytes of memory.
    

    Describe the expected behavior

    The session is created and runs on GPU 0 only, using only 80% of available memory.
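
    For comparison, a minimal sketch of the same two GPU options set from the TF 1.x Python API (the fraction and device list mirror the C++ snippet above):

    import tensorflow as tf

    # Cap GPU memory at 80% of the card and expose only GPU 0, as in the C++ code.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.8,
                                visible_device_list="0")
    config = tf.ConfigProto(gpu_options=gpu_options,
                            device_count={"GPU": 1, "CPU": 1})
    sess = tf.Session(config=config)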

    Standalone code to reproduce the issue

    #include "tensorflow/core/protobuf/control_flow.pb.h"
    #include "tensorflow/core/protobuf/config.pb.h"
    #include <iostream>
    
    int main() {
      tensorflow::GPUOptions gpu_options;
    
      gpu_options.set_visible_device_list("0");
    
      std::cout << "allocator_type " << gpu_options.allocator_type() << std::endl;  // prints 0
    
    }
    

    Other info / logs

    Please see the following issues: https://github.com/tensorflow/tensorflow/issues/16291 and https://github.com/fo40225/tensorflow-windows-wheel/issues/39

    I have built my tensorflow.dll as follows:

    $ENV:USE_BAZEL_VERSION="0.19.2"
    $ENV:PYTHON_BIN_PATH=C:\ProgramData\Anaconda3\python.exe
    $ENV:Path += ";C:\msys64\usr\bin"
    $ENV:Path += ";C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin"
    $ENV:Path += ";C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\extras\CUPTI\libx64"
    $ENV:Path += ";C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-9.2-windows10-x64-v7.5.0.56\cuda\bin"
    $ENV:BAZEL_SH = "C:\msys64\usr\bin\bash.exe"
    $ENV:CUDA_TOOLKIT_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2"
    $ENV:TF_CUDA_VERSION="9.2"
    $ENV:CUDNN_INSTALL_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-9.2-windows10-x64-v7.5.0.56\cuda"
    $ENV:TF_CUDNN_VERSION="7"
    $ENV:TF_NCCL_VERSION="1"
    $ENV:TF_CUDA_COMPUTE_CAPABILITIES="3.5,3.7,5.0,5.2,6.0,6.1"
    $ENV:TF_CUDA_CLANG="0"
    $ENV:TF_NEED_CUDA="1"
    $ENV:TF_NEED_ROCM="0"
    $ENV:TF_NEED_OPENCL_SYCL="0"

    $params = "configure.py",""
    Remove-Item -Recurse -Force "C:\Windows\system32\config\systemprofile\_bazel_SYSTEM\install\75b09cf1ac98c0ffb0534079b30efcc4"
    cmd /c "ECHO Y" | & python.exe @params
    bazel.exe clean --expunge
    bazel.exe build --copt=-nvcc_options=disable-warnings --test_tag_filters=-no_oss,-gpu,-benchmark-test,-nomac,-no_mac --announce_rc --test_timeout 300,450,1200,3600 --test_size_filters=small,medium --jobs=12 //tensorflow:libtensorflow_cc.so //tensorflow:libtensorflow_framework.so

    Edits have been made to the following files.

    Within tensorflow/BUILD,

    `"//tensorflow:windows": [],`
    

    becomes

    "//tensorflow:windows": [
                "-def:" +  # This line must be directly followed by the exported_symbols_msvc.lds file
                "$(location //tensorflow:tf_exported_symbols_msvc.lds)",
            ],
    

    and within the tf_cc_shared_object function of tensorflow/BUILD,

        visibility = ["//visibility:public"],
        deps = [
            "//tensorflow:tf_exported_symbols.lds",
            "//tensorflow:tf_version_script.lds",
            "//tensorflow/c:c_api",
            "//tensorflow/c/eager:c_api",
    

    becomes

        visibility = ["//visibility:public"],
        deps = [
            "//tensorflow:tf_exported_symbols.lds",
            "//tensorflow:tf_exported_symbols_msvc.lds",
            "//tensorflow:tf_version_script.lds",
            "//tensorflow/c:c_api",
            "//tensorflow/c/eager:c_api",
    

    The contents of tf_exported_symbols_msvc.lds are

    LIBRARY tensorflow_cc
    EXPORTS
        [email protected]@@[email protected]
        [email protected]@@[email protected]
        [email protected]@tensorflow@@[email protected]@Z
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected]@Z
        [email protected]@tensorflow@@[email protected]
        [email protected]@[email protected]@@[email protected][email protected]@std@@@std@@XZ
        [email protected]@[email protected]@@[email protected][email protected]@std@@[email protected]@2@@std@@XZ
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected]@@Z
        [email protected]@@[email protected]@A
        [email protected]@@[email protected]@[email protected]@[email protected]@@Z
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@[email protected]@@Z
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@XZ
        [email protected]@@[email protected]@[email protected]@@Z
        [email protected]@tensorflow@@AEAAXPEAV12@@Z
        [email protected]@tensorflow@@QEAAXAEBV12@@Z
        [email protected]@@[email protected]
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@XZ
        ??6tensorflow@@[email protected][email protected]@std@@@std@@[email protected]@0@@Z
        [email protected]@@[email protected]@[email protected]@[email protected]@@@Z
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@[email protected]@[email protected]
        [email protected]@@[email protected]
        [email protected]@@[email protected]
        [email protected]@@[email protected]@[email protected]@[email protected]?$char_tr[email protected]@std@@[email protected]@2@@std@@[email protected]@google@@@Z
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@AEBAXXZ
        [email protected]@@[email protected]@A
        [email protected]@tensorflow@@[email protected]@@Z
        [email protected]@@[email protected]
        [email protected]@@[email protected]@[email protected]@1@@Z
        [email protected]@tensorflow@@@tensorflow@@[email protected]
        [email protected]@tensorflow@@@tensorflow@@[email protected]?$Span@[email protected]@@@Z
        [email protected]@tensorflow@@AEAAXXZ
        [email protected]@@[email protected][email protected]@std@@[email protected]@2@@std@@[email protected]@[email protected]
        [email protected]@tensorflow@@AEAAXAEBV12@@Z
        ?dim_size@[email protected]@tensorflow@@@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected]
        [email protected]@@[email protected]
        [email protected]@@[email protected]
        [email protected]@@[email protected]
        [email protected]@tensorflow@@[email protected]@@Z
        [email protected]@@[email protected]
        [email protected]@@[email protected]@@Z
        [email protected]@@3QEBDEB
        [email protected]@@3QEBDEB
        [email protected]@@3QEBDEB
        [email protected]@stream_executor@@[email protected]@[email protected]@[email protected]@[email protected]@[email protected]
        [email protected]@[email protected]@@2QBDB
        [email protected]@tensorflow@@@tensorflow@@[email protected]?$Span@[email protected]@@@Z
        [email protected]@tensorflow@@@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected]@Z
        [email protected]@@[email protected]
        [email protected]@tensorflow@@[email protected]@Z
        [email protected]@@[email protected]
        [email protected]@@[email protected]
        [email protected]@@[email protected]@[email protected]@1@@Z
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@XZ
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@AEAAXXZ
        [email protected]@[email protected]@@[email protected][email protected]@std@@@std@@XZ
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@AEAAXAEBV12@@Z
        [email protected]@@[email protected][email protected]@std@@[email protected]@2@@std@@[email protected]@[email protected]
        [email protected]@stream_executor@@[email protected]@[email protected]@[email protected]@[email protected]@[email protected]
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@XZ
        [email protected]@@[email protected]
        [email protected]@tensorflow@@QE[email protected]
        [email protected]@@[email protected]
        [email protected]@tensorflow@@[email protected]
        [email protected]@@[email protected]
        [email protected]@@[email protected]
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected]
        [email protected]@tensorflow@@AEBAXXZ
        [email protected]@tensorflow@@[email protected]@@Z
        [email protected]@tensorflow@@[email protected]@2@@Z
        [email protected]@@[email protected]@A
        ?dim_size@[email protected]@tensorflow@@@tensorflow@@[email protected]
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@XZ
        [email protected]@tensorflow@@[email protected][email protected]@std@@[email protected]@2@@std@@[email protected]@[email protected]
        [email protected]@tensorflow@@QEAAXAEBV12@@Z
        [email protected]@@6B@
        [email protected]@tensorflow@@[email protected]@[email protected]@@[email protected]@@PEAV012@@Z
        [email protected]@@[email protected]@@Z
        [email protected]@[email protected]@@[email protected]?$ba[email protected][email protected]@std@@[email protected]@2@@std@@@[email protected]
    

    As documented by https://github.com/tensorflow/tensorflow/issues/22047#issuecomment-421452033

    My software is linked against libprotobuf.lib from https://mirror.bazel.build/github.com/google/protobuf/archive/v3.6.0.tar.gz

    built as

    cmake -G "Visual Studio 14 2015 Win64"  .. -DCMAKE_INSTALL_PREFIX="%current%\protobuf-3.6.0" -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_SHARED_LIBS=ON -Dprotobuf_MSVC_STATIC_RUNTIME=OFF
    cmake --build . --target install --config Release -- /maxcpucount:12
    

    I also tried editing tensorflow\tf_version_script.lds to include

    *protobuf*
    

    I also tried the TF_EXPORT macro from #include "tensorflow/core/platform/macros.h"

    in tensorflow/core/public/session_options.h and tensorflow/core/common_runtime/session_options.cc

    as suggested by https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows

    Do you have any suggestions about how to make sure that the GPU options for allocator type and visible device list do not share the same memory, while still keeping a monolithic DLL under Windows?

    comp:runtime comp:gpu TF 1.12 type:performance 
    opened by kognat-docs 152
  • Quantization-Aware Training support in Keras

    System information

    • TensorFlow version (you are using): 1.13.1 (but willing to use 2.0.0-alpha0 if there is a good reason)
    • Are you willing to contribute it (Yes/No): Yes (given some pointers on how to best go about it)

    Describe the feature and the current behavior/state. Currently there is no obvious way to apply tf.contrib.quantize.create_training_graph to a keras model. The keras API only allows access to the graph after it has already created a session. Attempting to modify the graph at this point does not work:
    https://stackoverflow.com/questions/55123417/quantization-aware-retraining-a-keras-model
    https://stackoverflow.com/questions/52259343/quantize-a-keras-neural-network-model

    I have also tried to create a new session after rewriting the graph, without success:

    tf.contrib.quantize.create_training_graph(input_graph=tf.keras.backend.get_session().graph, quant_delay=0)
    # create a new session after rewriting the graph
    new_session = tf.Session()
    tf.keras.backend.set_session(new_session)
    

    Results in this error when I try to fit the model:

    tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable dense_5/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/dense_5/bias/class tensorflow::Var does not exist.
            [[{{node dense_5/BiasAdd/ReadVariableOp}}]]
    

    Will this change the current api? How? Probably, but in a backwards-compatible way. I imagine some kind of graph rewriting hook would probably be necessary in the tf.keras API.

    Who will benefit with this feature? Users of TF Lite / Edge TPU wishing to easily train quantized models using the keras API (which is being pushed as the new "one true API" for tensorflow).

    Any Other info. Related issue on the main keras project https://github.com/keras-team/keras/issues/11105
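
    For readers landing here later: quantization-aware training for Keras models eventually shipped in the TensorFlow Model Optimization Toolkit. A minimal sketch, assuming tensorflow-model-optimization is installed and the model shown is a stand-in:

    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # A stand-in model; quantize_model rewrites it with fake-quant ops so that
    # quantization ranges are learned during training.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])
    qat_model = tfmot.quantization.keras.quantize_model(model)
    qat_model.compile(optimizer='adam',
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])
    # Train as usual with qat_model.fit(...).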

    stat:awaiting response type:feature comp:keras 
    opened by ed-alertedh 150
  • Unable to install TensorFlow on Python3.7 with pip

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): N/A
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.13
    • TensorFlow installed from (source or binary): binary
    • TensorFlow version (use command below): 1.8
    • Python version: 3.7
    • Bazel version (if compiling from source): N/A
    • GCC/Compiler version (if compiling from source): N/A
    • CUDA/cuDNN version: N/A
    • GPU model and memory: N/A
    • Exact command to reproduce: pip install tensorflow

    Describe the problem

    Installing TensorFlow on Python3.7 with pip failed. Please see the failure log below.

    Source code / logs

    Could not find a version that satisfies the requirement tensorflow (from versions: )
    No matching distribution found for tensorflow

    stat:community support type:build/install 
    opened by natsukium 148
  • Crash: Could not create cuDNN handle when convnets are used

    Tensorflow (GPU) was imported successfully, but when running a session that involves a convolutional neural network (CNN), Python crashes with the following message:

    E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
    F tensorflow/core/kernels/conv_ops.cc:605] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms) 
    

    The problem persists on any combination of CUDA toolkit 7.5/8.0 and Tensorflow installed from pip/source. Test sessions that do not use CNNs run successfully.

    What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?

    The issue is similar to https://github.com/tensorflow/tensorflow/issues/6586, where I first commented. But since I experience the problem on a Mac, I was suggested to open a separate issue.

    Environment info

    Operating System: macOS Sierra 10.12.2
    Xcode version 8.2 (8C38) (When I later tried CUDA 7.5, I installed Command Line Tools version 7.3.1 because CUDA 7.5 lacked support of the more recent compilers.)
    Python 3.5.2 (anaconda)

    Installed version of CUDA: tried both 8.0 (initially) and 7.5 (reported here, toolkit only -- the driver is still 8.0)
    Installed version of cuDNN: 5.1 (different installations according to CUDA versions)
    (please attach the output of ls -l /path/to/cuda/lib/libcud*):

    lrwxr-xr-x  1 root   wheel        33  5 Jan 20:33 /usr/local/cuda/lib/libcuda.1.dylib -> /usr/local/cuda/lib/libcuda.dylib
    -rwxr-xr-x@ 1 root   wheel      8280 13 Apr  2016 /usr/local/cuda/lib/libcuda.dylib
    lrwxr-xr-x@ 1 root   wheel        45 13 Apr  2016 /usr/local/cuda/lib/libcudadevrt.a -> /Developer/NVIDIA/CUDA-7.5/lib/libcudadevrt.a
    lrwxr-xr-x@ 1 root   wheel        50 13 Apr  2016 /usr/local/cuda/lib/libcudart.7.5.dylib -> /Developer/NVIDIA/CUDA-7.5/lib/libcudart.7.5.dylib
    lrwxr-xr-x@ 1 root   wheel        46 13 Apr  2016 /usr/local/cuda/lib/libcudart.dylib -> /Developer/NVIDIA/CUDA-7.5/lib/libcudart.dylib
    lrwxr-xr-x@ 1 root   wheel        49 13 Apr  2016 /usr/local/cuda/lib/libcudart_static.a -> /Developer/NVIDIA/CUDA-7.5/lib/libcudart_static.a
    lrwxr-xr-x  1 root   wheel        16  5 Jan 17:14 /usr/local/cuda/lib/libcudnn.5 -> libcudnn.5.dylib
    -rwxr-xr-x@ 1 ymfa   staff  58975112 10 Jun  2016 /usr/local/cuda/lib/libcudnn.5.dylib
    lrwxr-xr-x@ 1 ymfa   staff        16 10 Jun  2016 /usr/local/cuda/lib/libcudnn.dylib -> libcudnn.5.dylib
    lrwxr-xr-x  1 root   wheel        16  5 Jan 17:14 /usr/local/cuda/lib/libcudnn5.dylib -> libcudnn.5.dylib
    -rw-r--r--@ 1 ymfa   staff  56392320 10 Jun  2016 /usr/local/cuda/lib/libcudnn_static.a
    

    I tried both installing from pip and source. I first installed from binary pip package:

    1. A link to the pip package you installed: tensorflow-gpu
    2. The output from python -c "import tensorflow; print(tensorflow.__version__)": 0.12.head

    Later I installed from source (the pip package was uninstalled):

    1. The commit hash (git rev-parse HEAD) d67c09d98a576e1fbf2f3609ddb842e53890f31c

    2. The output of bazel version

      Build label: 0.4.3-homebrew
      Build target: bazel-out/local-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
      Build time: Thu Dec 22 15:20:15 2016 (1482420015)
      Build timestamp: 1482420015
      Build timestamp as int: 1482420015

    If possible, provide a minimal reproducible example

    I made a minimal example by simplifying the network and reducing the training data to only twenty images and two classes for classification. issue.zip contains the Python code and the data. I wrote two convolutional layers because I found the network with only one convolutional layer runs without problem.

    Complete log using CUDA 7.5 and Tensorflow compiled from source

    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.7.5.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.5.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.7.5.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.1.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.7.5.dylib locally
    W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:874] OS X does not support NUMA - returning NUMA node zero
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
    name: GeForce GT 650M
    major: 3 minor: 0 memoryClockRate (GHz) 0.9
    pciBusID 0000:01:00.0
    Total memory: 1023.69MiB
    Free memory: 740.18MiB
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 650M, pci bus id: 0000:01:00.0)
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
    F tensorflow/core/kernels/conv_ops.cc:605] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms) 
    

    Complete log using CUDA 8.0 and Tensorflow installed from pip

    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.1.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.dylib locally
    I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] OS X does not support NUMA - returning NUMA node zero
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
    name: GeForce GT 650M
    major: 3 minor: 0 memoryClockRate (GHz) 0.9
    pciBusID 0000:01:00.0
    Total memory: 1023.69MiB
    Free memory: 590.00MiB
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 650M, pci bus id: 0000:01:00.0)
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:392] error retrieving driver version: Invalid argument: expected %d.%d or %d.%d.%d form for driver version; got ""
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
    F tensorflow/core/kernels/conv_ops.cc:532] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
    
    stat:awaiting tensorflower type:build/install 
    opened by ymfa 147
  • ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

    I installed the tf-nightly build and I get the following error on import of tensorflow: ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory.

    If I check for cuda 9, I get the following:

    ldconfig -v
    /usr/local/cuda-8.0/targets/x86_64-linux/lib:
    	libnvgraph.so.8.0 -> libnvgraph.so.8.0.61
    	libnppicom.so.8.0 -> libnppicom.so.8.0.61
    	libnppial.so.8.0 -> libnppial.so.8.0.61
    	libcufftw.so.8.0 -> libcufftw.so.8.0.61
    	libcufft.so.8.0 -> libcufft.so.8.0.61
    	libnppif.so.8.0 -> libnppif.so.8.0.61
    	libcublas.so.8.0 -> libcublas.so.8.0.88
    	libnvblas.so.8.0 -> libnvblas.so.8.0.88
    	libnppi.so.8.0 -> libnppi.so.8.0.61
    	libcusolver.so.8.0 -> libcusolver.so.8.0.61
    	libnppidei.so.8.0 -> libnppidei.so.8.0.61
    	libnvrtc-builtins.so.8.0 -> libnvrtc-builtins.so.8.0.61
    	libnvrtc.so.8.0 -> libnvrtc.so.8.0.61
    	libnpps.so.8.0 -> libnpps.so.8.0.61
    	libcuinj64.so.8.0 -> libcuinj64.so.8.0.61
    	libnppig.so.8.0 -> libnppig.so.8.0.61
    	libOpenCL.so.1 -> libOpenCL.so.1.0.0
    	libnppicc.so.8.0 -> libnppicc.so.8.0.61
    	libnppist.so.8.0 -> libnppist.so.8.0.61
    	libnppisu.so.8.0 -> libnppisu.so.8.0.61
    	libnppim.so.8.0 -> libnppim.so.8.0.61
    	libcurand.so.8.0 -> libcurand.so.8.0.61
    	libcudart.so.8.0 -> libcudart.so.8.0.61
    	libnvToolsExt.so.1 -> libnvToolsExt.so.1.0.0
    	libnppitc.so.8.0 -> libnppitc.so.8.0.61
    	libnppc.so.8.0 -> libnppc.so.8.0.61
    	libcusparse.so.8.0 -> libcusparse.so.8.0.61
    /usr/local/cuda-9.1/targets/x86_64-linux/lib:
    	libnppicc.so.9.1 -> libnppicc.so.9.1.85
    	libnppisu.so.9.1 -> libnppisu.so.9.1.85
    	libcufftw.so.9.1 -> libcufftw.so.9.1.85
    	libcufft.so.9.1 -> libcufft.so.9.1.85
    	libnppial.so.9.1 -> libnppial.so.9.1.85
    	libnppist.so.9.1 -> libnppist.so.9.1.85
    	libcublas.so.9.1 -> libcublas.so.9.1.85
    	libnvblas.so.9.1 -> libnvblas.so.9.1.85
    	libnppitc.so.9.1 -> libnppitc.so.9.1.85
    	libcusolver.so.9.1 -> libcusolver.so.9.1.85
    	libnvrtc.so.9.1 -> libnvrtc.so.9.1.85
    	libnvrtc-builtins.so.9.1 -> libnvrtc-builtins.so.9.1.85
    	libnppidei.so.9.1 -> libnppidei.so.9.1.85
    	libOpenCL.so.1 -> libOpenCL.so.1.0.0
    	libnppig.so.9.1 -> libnppig.so.9.1.85
    	libnppc.so.9.1 -> libnppc.so.9.1.85
    	libcudart.so.9.1 -> libcudart.so.9.1.85
    	libnvToolsExt.so.1 -> libnvToolsExt.so.1.0.0
    	libnvgraph.so.9.1 -> libnvgraph.so.9.1.85
    	libnppif.so.9.1 -> libnppif.so.9.1.85
    	libcusparse.so.9.1 -> libcusparse.so.9.1.85
    	libaccinj64.so.9.1 -> libaccinj64.so.9.1.85
    	libcuinj64.so.9.1 -> libcuinj64.so.9.1.85
    	libnppim.so.9.1 -> libnppim.so.9.1.85
    	libnppicom.so.9.1 -> libnppicom.so.9.1.85
    	libnpps.so.9.1 -> libnpps.so.9.1.85
    	libcurand.so.9.1 -> libcurand.so.9.1.85
    

    Is that due to a name mismatch, i.e. libcublas.so.9.0 != libcublas.so.9.1? And if so, how can we overcome this?
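
    A quick way to confirm what the dynamic loader can actually resolve (a hedged sketch; the sonames mirror this report):

    import ctypes

    # Try the soname TensorFlow requests, then the one ldconfig lists.
    for name in ("libcublas.so.9.0", "libcublas.so.9.1"):
        try:
            ctypes.CDLL(name)
            print(name, "found")
        except OSError as e:
            print(name, "not found:", e)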

    opened by kirk86 142
  • [Question&Error] Is there detection model like a SSD-Mobile-net in tensorflow-lite?

    Hi.

    I am developing an Android application using tensorflow-lite.

    I could not find a detection model at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md.

    Also, I tried to convert SSD-Inception-v2 using the tensorflow-lite API, but there seems to be a problem.

    Command

    
    bazel run --config=opt --copt=-msse4.1 --copt=-msse4.2 \
      //tensorflow/contrib/lite/toco:toco -- \
      --input_file=/home/danshin/tensorflow_lite/lite_model/fire_incpetion_v2.pb \
      --output_file=/home/danshin/tensorflow_lite/lite_model/fire_inception_v2.lite \
      --input_format=TENSORFLOW_GRAPHDEF \
      --output_format=TFLITE \
      --inference_type=FLOAT \
      --input_shape=1,300,300,3 \
      --input_array=image_tensor \
      --output_array={detection_boxes,detection_scores,detection_classes,num_detections}
    

    Error code

    
    2017-12-26 14:59:25.159220: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 2029 operators, 3459 arrays (0 quantized)
    2017-12-26 14:59:25.251633: F tensorflow/contrib/lite/toco/graph_transformations/resolve_tensorflow_switch.cc:95] Check failed: other_op->type == OperatorType::kTensorFlowMerge 
    

    The fire_inception_v2 file is created, but its size is zero bytes. What is the problem?

    Also, please let me know the best way to deploy a custom model for object detection.

    Somebody help me, please!

    Thank you.
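
    For reference, later TensorFlow releases replaced the toco CLI shown above with a Python converter API; a hedged sketch (the SavedModel path is hypothetical):

    import tensorflow as tf

    # Convert a SavedModel export (e.g. from the Object Detection API) to TFLite.
    # "exported_model/saved_model" is a placeholder path.
    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
    tflite_model = converter.convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)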

    type:feature comp:lite 
    opened by Nanamare 141
  • [Documentation] `raw_ops.RealDiv`: Input Tensors cannot be of integer dtype



    Issue Type

    Documentation Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    binary

    Tensorflow Version

    tf 2.9.1

    Custom Code

    Yes

    OS Platform and Distribution

    Linux WSL2 Ubuntu 20.04 LTS

    Mobile device

    No response

    Python version

    3.8.10

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    The documentation for `tf.raw_ops.RealDiv` states that the input argument `x` must be a Tensor of types `bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, uint32, uint64, int64, complex64, complex128.` In practice, however, the op throws an exception for inputs of any integer dtype when run on a CPU; it only works for `float` and `complex` dtypes.
    

    Standalone code to reproduce the issue

    import tensorflow as tf
    import numpy as np
    
    dtype = "int64"
    x = np.array([[1,2,4],[2,3,5]], dtype=dtype)
    y = np.array([[1,2,4],[2,3,5]], dtype=dtype)
    x = tf.constant(x, dtype=dtype)
    y = tf.constant(y, dtype=dtype)
    tf.raw_ops.RealDiv(
        x=x, y=y, name=None
    )
    

    Relevant log output

    ---------------------------------------------------------------------------
    
    NotFoundError                             Traceback (most recent call last)
    
    <ipython-input-3-553c77700511> in <module>
          4 x = tf.constant(x, dtype=dtype)
          5 y = tf.constant(y, dtype=dtype)
    ----> 6 tf.raw_ops.RealDiv(
          7     x=x, y=y, name=None
          8 )
    
    /usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
       7162 def raise_from_not_ok_status(e, name):
       7163   e.message += (" name: " + name if name is not None else "")
    -> 7164   raise core._status_to_exception(e) from None  # pylint: disable=protected-access
       7165 
       7166 
    
    NotFoundError: Could not find device for node: {{node RealDiv}} = RealDiv[T=DT_INT64]
    All kernels registered for op RealDiv:
      device='CPU'; T in [DT_COMPLEX128]
      device='CPU'; T in [DT_COMPLEX64]
      device='CPU'; T in [DT_BFLOAT16]
      device='CPU'; T in [DT_DOUBLE]
      device='CPU'; T in [DT_HALF]
      device='CPU'; T in [DT_FLOAT]
      device='GPU'; T in [DT_COMPLEX128]
      device='GPU'; T in [DT_COMPLEX64]
      device='GPU'; T in [DT_DOUBLE]
      device='GPU'; T in [DT_FLOAT]
      device='GPU'; T in [DT_HALF]
     [Op:RealDiv]
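
    As the kernel list above shows, only float and complex kernels are registered for RealDiv on CPU, so a workaround (a sketch, not an official fix) is to cast the operands to a floating-point dtype first:

    import tensorflow as tf

    x = tf.constant([[1, 2, 4], [2, 3, 5]], dtype=tf.int64)
    y = tf.constant([[1, 2, 4], [2, 3, 5]], dtype=tf.int64)

    # RealDiv kernels only exist for float and complex dtypes on CPU,
    # so cast the integer operands before calling the op.
    result = tf.raw_ops.RealDiv(x=tf.cast(x, tf.float64), y=tf.cast(y, tf.float64))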
    
    type:docs-bug type:bug 
    opened by hmahmood24 0
  • Process killed when running generic_utils.make_batches



    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    source

    Tensorflow Version

    2.10.0

    Custom Code

    Yes

    OS Platform and Distribution

    Ubuntu 22.04

    Mobile device

    No response

    Python version

    3.9

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    The process is killed, probably due to the very large input argument.
    

    Standalone code to reproduce the issue

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.keras.utils import generic_utils
    try:
      arg_0 = 125091515651
      arg_1 = 512
      out = generic_utils.make_batches(arg_0,arg_1,)
    except Exception as e:
      print("Error:"+str(e))
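
    For context, a sketch of the utility at a sane size: make_batches(size, batch_size) materializes a Python list of (start, end) index tuples, so a size of ~1.25e11 would allocate tens of billions of tuples, which plausibly exhausts memory and gets the process killed by the OS.

    from tensorflow.python.keras.utils import generic_utils

    # make_batches returns a list of (start, end) index tuples covering `size`.
    print(generic_utils.make_batches(10, 4))  # [(0, 4), (4, 8), (8, 10)]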
    
    
    
    Relevant log output

    No response
    type:bug 
    opened by nimashiri 0
  • Segfault when running tensorflow.python.ops.math_ops.sobol_sample



    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    source

    Tensorflow Version

    2.10.0

    Custom Code

    Yes

    OS Platform and Distribution

    Ubuntu 22.04

    Mobile device

    No response

    Python version

    3.9

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    A segmentation fault occurs on very large input arguments.
    

    Standalone code to reproduce the issue

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import math_ops
    try:
      arg_0 = 5
      arg_1 = 125091515651
      dtype = None
      out = math_ops.sobol_sample(arg_0,arg_1,dtype=dtype,)
    except Exception as e:
      print("Error:"+str(e))
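
    For comparison, a small valid call works as documented; a sketch using the public alias tf.math.sobol_sample, which draws num_results points from a dim-dimensional Sobol sequence (the huge num_results above plausibly overflows internal buffers):

    import tensorflow as tf

    # Draw 10 points from a 5-dimensional Sobol sequence.
    samples = tf.math.sobol_sample(dim=5, num_results=10, dtype=tf.float32)
    print(samples.shape)  # (10, 5)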
    
    
    
    Relevant log output

    No response
    type:bug comp:ops TF 2.10 
    opened by nimashiri 2
  • Segfault when running tensorflow.python.ops.gen_sparse_ops.sparse_concat



    Issue Type

    Bug

    Have you reproduced the bug with TF nightly?

    Yes

    Source

    source

    Tensorflow Version

    2.10.0

    Custom Code

    Yes

    OS Platform and Distribution

    Ubuntu 22.04

    Mobile device

    No response

    Python version

    3.9

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    A segfault occurs, probably due to a dimension mismatch in the input tensor parameters.
    

    Standalone code to reproduce the issue

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import gen_sparse_ops
    try:
      arg_0 = []
      arg_1_0_tensor = tf.random.uniform([], dtype=tf.float32)
      arg_1_0 = tf.identity(arg_1_0_tensor)
      arg_1_1_tensor = tf.random.uniform([], dtype=tf.float32)
      arg_1_1 = tf.identity(arg_1_1_tensor)
      arg_1 = [arg_1_0,arg_1_1,]
      arg_2_0_tensor = tf.random.uniform([], minval=-256, maxval=257, dtype=tf.int64)
      arg_2_0 = tf.identity(arg_2_0_tensor)
      arg_2_1_tensor = tf.random.uniform([], minval=-256, maxval=257, dtype=tf.int64)
      arg_2_1 = tf.identity(arg_2_1_tensor)
      arg_2 = [arg_2_0,arg_2_1,]
      arg_3 = -2
      out = gen_sparse_ops.sparse_concat(arg_0,arg_1,arg_2,arg_3,)
    except Exception as e:
      print("Error:"+str(e))
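
    For contrast, a sketch of a well-formed call through the public API: sparse_concat expects rank-2 int64 index tensors and rank-1 value tensors per input, which the scalar tensors in the fuzzed call above violate.

    import tensorflow as tf

    # Two well-formed 2x2 sparse tensors concatenated along axis 0.
    a = tf.sparse.SparseTensor(indices=[[0, 0]], values=[1.0], dense_shape=[2, 2])
    b = tf.sparse.SparseTensor(indices=[[0, 1]], values=[2.0], dense_shape=[2, 2])
    print(tf.sparse.to_dense(tf.sparse.concat(axis=0, sp_inputs=[a, b])))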
    
    
    
    Relevant log output

    Segmentation fault
    
    type:bug 
    opened by nimashiri 0
  • Fix stats bug in cuda_malloc_async allocator


    There was a race condition between calling cuMemAllocFromPoolAsync or cuMemFreeAsync and updating the corresponding stats. This caused inconsistent stats, and a failure of the DCHECK in DeallocateRaw in debug builds.

    This PR moves the lock around the alloc/free calls as well as the stats update so that they remain consistent.

    cc @nluehr @pjannaty

    awaiting review comp:xla size:S 
    opened by benbarsdell 0
Releases (v2.11.0)
  • v2.11.0(Nov 18, 2022)

    Release 2.11.0

    Breaking Changes

    • The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace.

      If you find your workflow failing due to this change, you may be facing one of the following issues:

      • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
      • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
      • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
      • Learning rate schedule access. When using a tf.keras.optimizers.schedules.LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
      • If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
      • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop; see the sketch after this list.
      • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

      The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on the new tf.keras.optimizers.Optimizer base class.

    • tensorflow/python/keras code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.
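
    A minimal migration sketch for the optimizer change above (illustrative names, not part of the release notes):

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

    # Option 1: keep the old behavior by switching to the legacy namespace.
    legacy_opt = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)

    # Option 2: stay on the new optimizer, but create its variables up front
    # so multi-stage workflows avoid "Cannot recognize variable" errors.
    new_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
    new_opt.build(model.trainable_variables)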

    Major Features and Improvements

    • tf.lite:

      • New operations supported: tf.math.unsorted_segment_sum, tf.atan2 and tf.sign.
      • Updates to existing operations:
        • tfl.mul now supports complex32 inputs.
    • tf.experimental.StructuredTensor:

      • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
    • tf.keras:

      • Added a new get_metrics_result() method to tf.keras.models.Model.
        • Returns the current metrics values of the model as a dict.
      • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
      • Added weight decay support for all Keras optimizers via the weight_decay argument.
      • Added the Adafactor optimizer - tf.keras.optimizers.Adafactor.
      • Added warmstart_embedding_matrix to tf.keras.utils.
        • This utility can be used to warmstart an embedding matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
    • tf.Variable:

      • Added CompositeTensor as a base class to ResourceVariable.
        • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
      • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
        • When it's set to False, the variable won't be lifted out of tf.function; thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)). See the sketch after this list.
    • TF SavedModel:

      • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
    • TF pip:

      • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided on an as-is basis. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.
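
    A minimal sketch of the tf.function-local variable behavior described above (illustrative, not from the release notes):

    import tensorflow as tf

    @tf.function(jit_compile=False)
    def counter():
        # With lifting disabled, `v` is created and disposed on each
        # execution, like a C/C++ local, instead of being hoisted out.
        v = tf.Variable(0, experimental_enable_variable_lifting=False)
        v.assign_add(1)
        return v.read_value()

    print(counter().numpy())  # prints 1 on every call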

    Bug Fixes and Other Changes

    • tf.image:

      • Added an optional parameter return_index_map to tf.image.ssim, which causes the returned value to be the local SSIM map instead of the global mean.
    • TF Core:

      • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
      • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
      • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
    • tf.SparseTensor:

      • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.
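
    A minimal sketch of the new SparseTensor set_shape inside a tf.function, where static shapes are typically unknown (illustrative):

    import tensorflow as tf

    @tf.function(input_signature=[
        tf.SparseTensorSpec(shape=[None, None], dtype=tf.float32)])
    def densify(st):
        # Refine the static dense shape, mirroring tf.Tensor.set_shape.
        st.set_shape([2, 3])
        return tf.sparse.to_dense(st)

    st = tf.sparse.SparseTensor(indices=[[0, 1]], values=[5.0], dense_shape=[2, 3])
    print(densify(st))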

    Security

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika

    Source code(tar.gz)
    Source code(zip)
  • v2.10.1(Nov 16, 2022)

    Release 2.10.1

    This release introduces several vulnerability fixes:

    Source code(tar.gz)
    Source code(zip)
  • v2.9.3(Nov 16, 2022)

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Source code(tar.gz)
    Source code(zip)
  • v2.8.4(Nov 16, 2022)

    Release 2.8.4

    This release introduces several vulnerability fixes:

    Source code(tar.gz)
    Source code(zip)
  • v2.11.0-rc2(Nov 2, 2022)

    Release 2.11.0

    Breaking Changes

    • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace.
      If you find your workflow failing due to this change, you may be facing one of the following issues:

      • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
      • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
      • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
      • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
      • If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
      • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
      • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

      The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on tf.keras.optimizers.Optimizer, the new base class.

    • tensorflow/python/keras code is a legacy copy of Keras since the 2.7 release, and will be deleted in the 2.12 release. Please remove any import of tensorflow.python.keras and use the public API with from tensorflow import keras or import tensorflow as tf; tf.keras.

    Major Features and Improvements

    • tf.lite:

      • New operations supported: tf.unsortedsegmentmin, tf.atan2 and tf.sign.
      • Updates to existing operations:
        • tfl.mul now supports complex32 inputs.
    • tf.experimental.StructuredTensor

      • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
    • tf.keras:

      • Added a new get_metrics_result() method to tf.keras.models.Model.
        • Returns the current metrics values of the model as a dict.
      • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
      • Added weight decay support for all Keras optimizers.
      • Added Adafactor optimizer tf.keras.optimizers.Adafactor.
      • Added warmstart_embedding_matrix to tf.keras.utils.
        • This utility can be used to warmstart an embedding matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
    • tf.Variable:

      • Added CompositeTensor as a base class to ResourceVariable.
        • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
      • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
        • When it's False, the variable won't be lifted out of tf.function, thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
    • TF SavedModel:

      • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
    • TF pip:

      • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

    Bug Fixes and Other Changes

    • tf.image

      • Added an optional parameter return_index_map to tf.image.ssim which causes the returned value to be the local SSIM map instead of the global mean.
    • TF Core:

      • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
      • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
      • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
    • tf.SparseTensor:

      • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika

    Source code(tar.gz)
    Source code(zip)
  • v2.11.0-rc1(Oct 19, 2022)

    Release 2.11.0

    Breaking Changes

    • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:

      • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
      • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
      • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
      • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
      • If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
      • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
      • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

      The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on tf.keras.optimizers.Optimizer, the new base class.

    Major Features and Improvements

    • tf.lite:

      • New operations supported: tf.unsortedsegmentmin, tf.atan2 and tf.sign.
      • Updates to existing operations:
        • tfl.mul now supports complex32 inputs.
    • tf.experimental.StructuredTensor

      • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
    • tf.keras:

      • Added a new get_metrics_result() method to tf.keras.models.Model.
        • Returns the current metrics values of the model as a dict.
      • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
      • Added weight decay support for all Keras optimizers.
      • Added Adafactor optimizer tf.keras.optimizers.Adafactor.
      • Added warmstart_embedding_matrix to tf.keras.utils.
        • This utility can be used to warmstart an embedding matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
    • tf.Variable:

      • Added CompositeTensor as a base class to ResourceVariable.
        • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
      • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
        • When it's False, the variable won't be lifted out of tf.function, thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
    • TF SavedModel:

      • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
    • TF pip:

      • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

    Bug Fixes and Other Changes

    • tf.image

      • Added an optional parameter return_index_map to tf.image.ssim which causes the returned value to be the local SSIM map instead of the global mean.
    • TF Core:

      • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
      • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
      • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
    • tf.SparseTensor:

      • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika

    Source code(tar.gz)
    Source code(zip)
  • v2.11.0-rc0(Oct 18, 2022)

    Release 2.11.0

    Breaking Changes

    • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:

      • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g. tf.keras.optimizers.legacy.Adam).
      • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, does not support TF1 any more, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
      • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
      • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value instead of a LearningRateSchedule object as before. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
      • If you implemented a custom optimizer based on the old optimizer, please set your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
      • Errors, such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
      • Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer, and file an issue in the Keras GitHub repo.

      The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on tf.keras.optimizers.Optimizer, the new base class.

    Major Features and Improvements

    • tf.lite:

      • New operations supported: tf.unsortedsegmentmin, tf.atan2 and tf.sign.
      • Updates to existing operations:
        • tfl.mul now supports complex32 inputs.
    • tf.experimental.StructuredTensor

      • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
    • tf.keras:

      • Added a new get_metrics_result() method to tf.keras.models.Model.
        • Returns the current metrics values of the model as a dict.
      • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
      • Added weight decay support for all Keras optimizers.
      • Added Adafactor optimizer tf.keras.optimizers.Adafactor.
      • Added warmstart_embedding_matrix to tf.keras.utils.
        • This utility can be used to warmstart an embedding matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
    • tf.Variable:

      • Added CompositeTensor as a base class to ResourceVariable.
        • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
      • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
        • When it's False, the variable won't be lifted out of tf.function, thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
    • TF SavedModel:

      • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
    • TF pip:

      • Windows CPU-builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for tensorflow or tensorflow-cpu would install Intel's tensorflow-intel package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. For using TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.

    Bug Fixes and Other Changes

    • tf.image

      • Added an optional parameter return_index_map to tf.image.ssim which causes the returned value to be the local SSIM map instead of the global mean.
    • TF Core:

      • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
      • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
      • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
    • tf.SparseTensor:

      • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika

    Source code(tar.gz)
    Source code(zip)
  • v2.10.0(Sep 6, 2022)

    Release 2.10.0

    Breaking Changes

    • Causal attention in keras.layers.Attention and keras.layers.AdditiveAttention is now specified in the call() method via the use_causal_mask argument (rather than in the constructor), for consistency with other layers.
    • Some files in tensorflow/python/training have been moved to tensorflow/python/tracking and tensorflow/python/checkpoint. Please update your imports accordingly; the old files will be removed in Release 2.11.
    • tf.keras.optimizers.experimental.Optimizer will graduate in Release 2.11, which means tf.keras.optimizers.Optimizer will be an alias of tf.keras.optimizers.experimental.Optimizer. The current tf.keras.optimizers.Optimizer will continue to be supported as tf.keras.optimizers.legacy.Optimizer, e.g., tf.keras.optimizers.legacy.Adam. Most users won't be affected by this change, but please check the API doc if any API used in your workflow is changed or deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to tf.keras.optimizers.legacy.Optimizer.
    • RNG behavior change for tf.keras.initializers. Keras initializers will now use stateless random ops to generate random numbers.
      • Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (seed=None), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
      • An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.
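
    A minimal sketch of the new stateless initializer behavior (illustrative):

    import tensorflow as tf

    init = tf.keras.initializers.GlorotUniform()  # unseeded
    a = init(shape=(2, 2))
    b = init(shape=(2, 2))  # same values as `a`; the reuse also triggers a warning
    print(bool(tf.reduce_all(a == b)))  # True under the stateless behavior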

    Deprecations

    • The C++ tensorflow::Code and tensorflow::Status will become aliases of absl::StatusCode and absl::Status, respectively, in some future release.
      • Use tensorflow::OkStatus() instead of tensorflow::Status::OK().
      • Stop constructing Status objects from tensorflow::error::Code.
      • One MUST NOT access tensorflow::errors::Code fields. Accessing tensorflow::error::Code fields is fine.
        • Use the constructors such as tensorflow::errors::InvalidArgument to create a status using an error code without accessing it.
        • Use the free functions such as tensorflow::errors::IsInvalidArgument if needed.
        • As a last resort, use e.g. static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT) or static_cast<int>(code) for comparisons.
    • tensorflow::StatusOr will also become an alias of absl::StatusOr in the future, so use StatusOr::value instead of StatusOr::ConsumeValueOrDie.

    Major Features and Improvements

    • tf.lite:

      • New operations supported:
        • tflite SelectV2 now supports 5D.
        • tf.einsum is supported with multiple unknown shapes.
        • tf.unsortedsegmentprod op is supported.
        • tf.unsortedsegmentmax op is supported.
        • tf.unsortedsegmentsum op is supported.
      • Updates to existing operations:
        • tfl.scatter_nd now supports I1 for update arg.
      • Upgraded Flatbuffers from v1.12.0 to v2.0.5.
    • tf.keras:

      • EinsumDense layer is moved from experimental to core. Its import path is moved from tf.keras.layers.experimental.EinsumDense to tf.keras.layers.EinsumDense.
      • Added tf.keras.utils.audio_dataset_from_directory utility to easily generate audio classification datasets from directories of .wav files.
      • Added subset="both" support in tf.keras.utils.image_dataset_from_directory, tf.keras.utils.text_dataset_from_directory, and audio_dataset_from_directory, to be used with the validation_split argument, for returning both dataset splits at once, as a tuple.
      • Added tf.keras.utils.split_dataset utility to split a Dataset object or a list/tuple of arrays into two Dataset objects (e.g. train/test); see the sketch after this list.
      • Added step granularity to BackupAndRestore callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
      • Added tf.keras.dtensor.experimental.optimizers.AdamW. This optimizer is similar to the existing keras.optimizers.experimental.AdamW, and works in the DTensor training use case.
      • Improved masking support for tf.keras.layers.MultiHeadAttention.
        • Implicit masks for query, key and value inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any attention_mask passed in directly when calling the layer. This can be used with tf.keras.layers.Embedding with mask_zero=True to automatically infer a correct padding mask.
        • Added a use_causal_mask call-time argument to the layer. Passing use_causal_mask=True will compute a causal attention mask, and optionally combine it with any attention_mask passed in directly when calling the layer.
      • Added ignore_class argument in the loss SparseCategoricalCrossentropy and metrics IoU and MeanIoU, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
      • Added tf.keras.models.experimental.SharpnessAwareMinimization. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
    • tf.data:

      • Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
      • Added dataset_id to tf.data.experimental.service.register_dataset. If provided, tf.data service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call register_dataset with the same dataset_id.
      • Added a new field, inject_prefetch, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will now automatically add a prefetch transformation to datasets that end in synchronous transformations. This enables data generation to be overlapped with data consumption. This may cause a small increase in memory usage due to buffering. To enable this behavior, set inject_prefetch=True in tf.data.experimental.OptimizationOptions.
      • Added a new value to tf.data.Options.autotune.autotune_algorithm: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
      • Added tf.data.experimental.from_list, a new API for creating Datasets from lists of elements.
    • tf.distribute:

      • Added tf.distribute.experimental.PreemptionCheckpointHandler to handle worker preemption/maintenance and cluster-wise consistent error reporting for tf.distribute.MultiWorkerMirroredStrategy. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
    • tf.math:

      • Added tf.math.approx_max_k and tf.math.approx_min_k which are the optimized alternatives to tf.math.top_k on TPU. The performance difference ranges from 8 to 100 times depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
    • tf.train:

      • Added tf.train.TrackableView which allows users to inspect the TensorFlow Trackable object (e.g. tf.Module, Keras Layers and models).
    • tf.vectorized_map:

      • Added an optional parameter: warn. This parameter controls whether or not warnings will be printed when operations in the provided fn fall back to a while loop.
    • XLA:

    • CPU performance optimizations:

      • x86 CPUs: oneDNN bfloat16 auto-mixed precision grappler graph optimization pass has been renamed from auto_mixed_precision_mkl to auto_mixed_precision_onednn_bfloat16. See example usage here.
      • aarch64 CPUs: Experimental performance optimizations from Compute Library for the Arm® Architecture (ACL) are available through oneDNN in the default Linux aarch64 package (pip install tensorflow).
        • The optimizations are disabled by default.
        • Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable the optimizations. Setting the variable to 0 or unsetting it will disable the optimizations.
        • These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
        • To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.
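
    A minimal sketch of the tf.keras.utils.split_dataset utility mentioned above (illustrative):

    import tensorflow as tf

    ds = tf.data.Dataset.range(10)
    # Split one Dataset into two (e.g. train/test) by fraction.
    train_ds, test_ds = tf.keras.utils.split_dataset(ds, left_size=0.8)
    print(len(list(train_ds)), len(list(test_ds)))  # 8 2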

    Bug Fixes and Other Changes

    • New argument experimental_device_ordinal in LogicalDeviceConfiguration to control the order of logical devices. (GPU only)

    • tf.keras:

      • Changed the TensorBoard tag names produced by the tf.keras.callbacks.TensorBoard callback, so that summaries logged automatically for model weights now include either a /histogram or /image suffix in their tag names, in order to prevent tag name collisions across summary types.
    • When running on GPU (with cuDNN version 7.6.3 or later), tf.nn.depthwise_conv2d backprop to filter (and therefore also tf.keras.layers.DepthwiseConv2D) now operates deterministically (and tf.errors.UnimplementedError is no longer thrown) when op-determinism has been enabled via tf.config.experimental.enable_op_determinism. This closes issue 47174.

    • tf.random

      • Added tf.random.experimental.stateless_shuffle, a stateless version of tf.random.shuffle.
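
    A minimal sketch of the stateless shuffle (illustrative); unlike the stateful tf.random.shuffle, the same seed always yields the same permutation:

    import tensorflow as tf

    x = tf.range(6)
    # Deterministic for a fixed seed of shape [2].
    print(tf.random.experimental.stateless_shuffle(x, seed=[1, 2]).numpy())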

    Security

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang

    Source code(tar.gz)
    Source code(zip)
  • v2.9.2(Sep 3, 2022)

    Release 2.9.2

    This release introduces several vulnerability fixes:

    Source code(tar.gz)
    Source code(zip)
  • v2.8.3(Sep 2, 2022)

    Release 2.8.3

    This release introduces several vulnerability fixes:

    Source code(tar.gz)
    Source code(zip)
  • v2.7.4(Sep 2, 2022)

    Release 2.7.4

    Note: This is the last release in the 2.7.x series

    This release introduces several vulnerability fixes:

    Source code(tar.gz)
    Source code(zip)
  • v2.10.0-rc3(Aug 29, 2022)

    Release 2.10.0

    Breaking Changes

    • Causal attention in keras.layers.Attention and keras.layers.AdditiveAttention is now specified in the call() method via the use_causal_mask argument (rather than in the constructor), for consistency with other layers.
    • Some files in tensorflow/python/training have been moved to tensorflow/python/tracking and tensorflow/python/checkpoint. Please update your imports accordingly; the old files will be removed in Release 2.11.
    • tf.keras.optimizers.experimental.Optimizer will graduate in Release 2.11, which means tf.keras.optimizers.Optimizer will be an alias of tf.keras.optimizers.experimental.Optimizer. The current tf.keras.optimizers.Optimizer will continue to be supported as tf.keras.optimizers.legacy.Optimizer, e.g., tf.keras.optimizers.legacy.Adam. Most users won't be affected by this change, but please check the API doc if any API used in your workflow is changed or deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to tf.keras.optimizers.legacy.Optimizer.
    • RNG behavior change for tf.keras.initializers. Keras initializers will now use stateless random ops to generate random numbers.
      • Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (seed=None), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
      • An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.

    Deprecations

    • The C++ tensorflow::Code and tensorflow::Status will become aliases of absl::StatusCode and absl::Status, respectively, in some future release.
      • Use tensorflow::OkStatus() instead of tensorflow::Status::OK().
      • Stop constructing Status objects from tensorflow::error::Code.
      • One MUST NOT access tensorflow::errors::Code fields. Accessing tensorflow::error::Code fields is fine.
        • Use the constructors such as tensorflow::errors::InvalidArgument to create a status using an error code without accessing it.
        • Use the free functions such as tensorflow::errors::IsInvalidArgument if needed.
        • As a last resort, use e.g. static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT) or static_cast<int>(code) for comparisons.
    • tensorflow::StatusOr will also become an alias of absl::StatusOr in the future, so use StatusOr::value instead of StatusOr::ConsumeValueOrDie.

    Major Features and Improvements

    • tf.lite:

      • New operations supported:
        • tflite SelectV2 now supports 5D.
        • tf.einsum is supported with multiple unknown shapes.
        • tf.unsortedsegmentprod op is supported.
        • tf.unsortedsegmentmax op is supported.
        • tf.unsortedsegmentsum op is supported.
      • Updates to existing operations:
        • tfl.scatter_nd now supports I1 for update arg.
      • Upgraded Flatbuffers from v1.12.0 to v2.0.5.
    • tf.keras:

      • EinsumDense layer is moved from experimental to core. Its import path is moved from tf.keras.layers.experimental.EinsumDense to tf.keras.layers.EinsumDense.
      • Added tf.keras.utils.audio_dataset_from_directory utility to easily generate audio classification datasets from directories of .wav files.
      • Added subset="both" support in tf.keras.utils.image_dataset_from_directory, tf.keras.utils.text_dataset_from_directory, and audio_dataset_from_directory, to be used with the validation_split argument, for returning both dataset splits at once, as a tuple.
      • Added tf.keras.utils.split_dataset utility to split a Dataset object or a list/tuple of arrays into two Dataset objects (e.g. train/test).
      • Added step granularity to BackupAndRestore callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
      • Added tf.keras.dtensor.experimental.optimizers.AdamW. This optimizer is similar to the existing keras.optimizers.experimental.AdamW, and works in the DTensor training use case.
      • Improved masking support for tf.keras.layers.MultiHeadAttention.
        • Implicit masks for query, key and value inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any attention_mask passed in directly when calling the layer. This can be used with tf.keras.layers.Embedding with mask_zero=True to automatically infer a correct padding mask.
        • Added a use_causal_mask call-time argument to the layer. Passing use_causal_mask=True will compute a causal attention mask, and optionally combine it with any attention_mask passed in directly when calling the layer.
      • Added ignore_class argument in the loss SparseCategoricalCrossentropy and metrics IoU and MeanIoU, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
      • Added tf.keras.models.experimental.SharpnessAwareMinimization. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
    • tf.data:

      • Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
      • Added dataset_id to tf.data.experimental.service.register_dataset. If provided, tf.data service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call register_dataset with the same dataset_id.
      • Added a new field, inject_prefetch, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will automatically add a prefetch transformation to datasets that end in synchronous transformations, so that data generation can be overlapped with data consumption. This may cause a small increase in memory usage due to buffering.
      • Added a new value to tf.data.Options.autotune.autotune_algorithm: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
      • Added tf.data.experimental.from_list, a new API for creating Datasets from lists of elements.
    • tf.distribute:

      • Added tf.distribute.experimental.PreemptionCheckpointHandler to handle worker preemption/maintenance and cluster-wise consistent error reporting for tf.distribute.MultiWorkerMirroredStrategy. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
    • tf.math:

      • Added tf.math.approx_max_k and tf.math.approx_min_k, optimized alternatives to tf.math.top_k on TPU. The performance difference ranges from 8x to 100x depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
    • tf.train:

      • Added tf.train.TrackableView, which allows users to inspect TensorFlow Trackable objects (e.g. tf.Module, Keras layers and models).
    • tf.vectorized_map:

      • Added an optional parameter: warn. This parameter controls whether or not warnings will be printed when operations in the provided fn fall back to a while loop.
    • XLA:

      • MWMS is now compilable with XLA.
    • oneDNN CPU performance optimizations:

      • x86 CPUs: oneDNN bfloat16 auto-mixed precision grappler graph optimization pass has been renamed from auto_mixed_precision_mkl to auto_mixed_precision_onednn_bfloat16. See example usage here.
      • aarch64 CPUs: Experimental performance optimizations from Compute Library for the Arm® Architecture (ACL) are available through oneDNN in the default Linux aarch64 package (pip install tensorflow).
        • The optimizations are disabled by default.
        • Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable the optimizations. Setting the variable to 0 or unsetting it will disable the optimizations.
        • These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
        • To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.

    Bug Fixes and Other Changes

    • New argument experimental_device_ordinal in LogicalDeviceConfiguration to control the order of logical devices. (GPU only)

    • tf.keras:

      • Changed the TensorBoard tag names produced by the tf.keras.callbacks.TensorBoard callback, so that summaries logged automatically for model weights now include either a /histogram or /image suffix in their tag names, in order to prevent tag name collisions across summary types.
    • When running on GPU (with cuDNN version 7.6.3 or later), tf.nn.depthwise_conv2d backprop to filter (and therefore also tf.keras.layers.DepthwiseConv2D) now operates deterministically (and tf.errors.UnimplementedError is no longer thrown) when op-determinism has been enabled via tf.config.experimental.enable_op_determinism. This closes issue 47174.

    • tf.random

      • Added tf.random.experimental.stateless_shuffle, a stateless version of tf.random.shuffle.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang

  • v2.10.0-rc2 (Aug 23, 2022)

    Release 2.10.0

    Breaking Changes

    • Causal attention in keras.layers.Attention and keras.layers.AdditiveAttention is now specified in the call() method via the use_causal_mask argument (rather than in the constructor), for consistency with other layers.
    • Some files in tensorflow/python/training have been moved to tensorflow/python/tracking and tensorflow/python/checkpoint. Please update your imports accordingly; the old files will be removed in Release 2.11.
    • tf.keras.optimizers.experimental.Optimizer will graduate in Release 2.11, which means tf.keras.optimizers.Optimizer will be an alias of tf.keras.optimizers.experimental.Optimizer. The current tf.keras.optimizers.Optimizer will continue to be supported as tf.keras.optimizers.legacy.Optimizer, e.g., tf.keras.optimizers.legacy.Adam. Most users won't be affected by this change, but please check the API doc if any API used in your workflow is changed or deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to tf.keras.optimizers.legacy.Optimizer.
    • RNG behavior change for tf.keras.initializers. Keras initializers will now use stateless random ops to generate random numbers.
      • Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (seed=None), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
      • An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.

    Major Features and Improvements

    • tf.lite:

      • New operations supported:
        • tflite SelectV2 now supports 5D.
        • tf.einsum is supported with multiple unknown shapes.
        • tf.unsortedsegmentprod op is supported.
        • tf.unsortedsegmentmax op is supported.
        • tf.unsortedsegmentsum op is supported.
      • Updates to existing operations:
        • tfl.scatter_nd now supports I1 for the update arg.
      • Upgraded Flatbuffers from v1.12.0 to v2.0.5.
    • tf.keras:

      • EinsumDense layer is moved from experimental to core. Its import path is moved from tf.keras.layers.experimental.EinsumDense to tf.keras.layers.EinsumDense.
      • Added tf.keras.utils.audio_dataset_from_directory utility to easily generate audio classification datasets from directories of .wav files.
      • Added subset="both" support in tf.keras.utils.image_dataset_from_directory, tf.keras.utils.text_dataset_from_directory, and audio_dataset_from_directory, to be used with the validation_split argument, for returning both dataset splits at once, as a tuple.
      • Added tf.keras.utils.split_dataset utility to split a Dataset object or a list/tuple of arrays into two Dataset objects (e.g. train/test).
      • Added step granularity to BackupAndRestore callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
      • Added tf.keras.dtensor.experimental.optimizers.AdamW. This optimizer is similar to the existing keras.optimizers.experimental.AdamW, and works in the DTensor training use case.
      • Improved masking support for tf.keras.layers.MultiHeadAttention.
        • Implicit masks for query, key and value inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any attention_mask passed in directly when calling the layer. This can be used with tf.keras.layers.Embedding with mask_zero=True to automatically infer a correct padding mask.
        • Added a use_causal_mask call-time argument to the layer. Passing use_causal_mask=True will compute a causal attention mask, and optionally combine it with any attention_mask passed in directly when calling the layer (see the sketch after this list).
      • Added an ignore_class argument to the loss SparseCategoricalCrossentropy and the metrics IoU and MeanIoU, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
      • Added tf.keras.models.experimental.SharpnessAwareMinimization. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
    • tf.data:

      • Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
      • Added dataset_id to tf.data.experimental.service.register_dataset. If provided, tf.data service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call register_dataset with the same dataset_id.
      • Added a new field, inject_prefetch, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will automatically add a prefetch transformation to datasets that end in synchronous transformations, so that data generation can be overlapped with data consumption. This may cause a small increase in memory usage due to buffering.
      • Added a new value to tf.data.Options.autotune.autotune_algorithm: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
      • Added tf.data.experimental.from_list, a new API for creating Datasets from lists of elements.
    • tf.distribute:

      • Added tf.distribute.experimental.PreemptionCheckpointHandler to handle worker preemption/maintenance and cluster-wise consistent error reporting for tf.distribute.MultiWorkerMirroredStrategy. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
    • tf.math:

      • Added tf.math.approx_max_k and tf.math.approx_min_k, optimized alternatives to tf.math.top_k on TPU. The performance difference ranges from 8x to 100x depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
    • tf.train:

      • Added tf.train.TrackableView, which allows users to inspect TensorFlow Trackable objects (e.g. tf.Module, Keras layers and models).
    • tf.vectorized_map:

      • Added an optional parameter: warn. This parameter controls whether or not warnings will be printed when operations in the provided fn fall back to a while loop.
    • XLA:

      • MWMS is now compilable with XLA.
    • oneDNN CPU performance optimizations:

      • x86 CPUs: oneDNN bfloat16 auto-mixed precision grappler graph optimization pass has been renamed from auto_mixed_precision_mkl to auto_mixed_precision_onednn_bfloat16. See example usage here.
      • aarch64 CPUs: Experimental Arm Compute Library (ACL) CPU performance optimizations through oneDNN are available in the default Linux aarch64 package (pip install tensorflow).
        • The optimizations are disabled by default.
        • Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable the optimizations. Setting the variable to 0 or unsetting it will disable the optimizations.
        • These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
        • To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.

    Bug Fixes and Other Changes

    • New argument experimental_device_ordinal in LogicalDeviceConfiguration to control the order of logical devices. (GPU only)

    • tf.keras:

      • Changed the TensorBoard tag names produced by the tf.keras.callbacks.TensorBoard callback, so that summaries logged automatically for model weights now include either a /histogram or /image suffix in their tag names, in order to prevent tag name collisions across summary types.
    • When running on GPU (with cuDNN version 7.6.3 or later), tf.nn.depthwise_conv2d backprop to filter (and therefore also tf.keras.layers.DepthwiseConv2D) now operates deterministically (and tf.errors.UnimplementedError is no longer thrown) when op-determinism has been enabled via tf.config.experimental.enable_op_determinism. This closes issue 47174.

    • tf.random

      • Added tf.random.experimental.stateless_shuffle, a stateless version of tf.random.shuffle.

    Deprecations

    • The C++ tensorflow::Code and tensorflow::Status will become aliases of absl::StatusCode and absl::Status, respectively, in some future release.
      • Use tensorflow::OkStatus() instead of tensorflow::Status::OK().
      • Stop constructing Status objects from tensorflow::error::Code.
      • One MUST NOT access tensorflow::errors::Code fields. Accessing tensorflow::error::Code fields is fine.
        • Use the constructors such as tensorflow::errors::InvalidArgument to create a status with an error code without accessing it.
        • Use the free functions such as tensorflow::errors::IsInvalidArgument if needed.
        • As a last resort, use e.g. static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT) or static_cast<int>(code) for comparisons.
    • tensorflow::StatusOr will also become an alias of absl::StatusOr in the future, so use StatusOr::value instead of StatusOr::ConsumeValueOrDie.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang

  • v2.10.0-rc1 (Aug 15, 2022)

    Release 2.10.0

    Breaking Changes

    • Causal attention in keras.layers.Attention and keras.layers.AdditiveAttention is now specified in the call() method via the use_causal_mask argument (rather than in the constructor), for consistency with other layers.
    • Some files in tensorflow/python/training have been moved to tensorflow/python/tracking and tensorflow/python/checkpoint. Please update your imports accordingly; the old files will be removed in Release 2.11.
    • tf.keras.optimizers.experimental.Optimizer will graduate in Release 2.11, which means tf.keras.optimizers.Optimizer will be an alias of tf.keras.optimizers.experimental.Optimizer. The current tf.keras.optimizers.Optimizer will continue to be supported as tf.keras.optimizers.legacy.Optimizer, e.g., tf.keras.optimizers.legacy.Adam. Most users won't be affected by this change, but please check the API doc if any API used in your workflow is changed or deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to tf.keras.optimizers.legacy.Optimizer.
    • RNG behavior change for tf.keras.initializers. Keras initializers will now use stateless random ops to generate random numbers.
      • Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (seed=None), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
      • An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.

    Major Features and Improvements

    • tf.lite:

      • New operations supported:
        • tflite SelectV2 now supports 5D.
        • tf.einsum is supported with multiple unknown shapes.
        • tf.unsortedsegmentprod op is supported.
        • tf.unsortedsegmentmax op is supported.
        • tf.unsortedsegmentsum op is supported.
      • Updates to existing operations:
        • tfl.scatter_nd now supports I1 for the update arg.
      • Upgraded Flatbuffers from v1.12.0 to v2.0.5.
    • tf.keras:

      • EinsumDense layer is moved from experimental to core. Its import path is moved from tf.keras.layers.experimental.EinsumDense to tf.keras.layers.EinsumDense.
      • Added tf.keras.utils.audio_dataset_from_directory utility to easily generate audio classification datasets from directories of .wav files.
      • Added subset="both" support in tf.keras.utils.image_dataset_from_directory, tf.keras.utils.text_dataset_from_directory, and audio_dataset_from_directory, to be used with the validation_split argument, for returning both dataset splits at once, as a tuple.
      • Added tf.keras.utils.split_dataset utility to split a Dataset object or a list/tuple of arrays into two Dataset objects (e.g. train/test).
      • Added step granularity to BackupAndRestore callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
      • Added tf.keras.dtensor.experimental.optimizers.AdamW. This optimizer is similar to the existing keras.optimizers.experimental.AdamW, and works in the DTensor training use case.
      • Improved masking support for tf.keras.layers.MultiHeadAttention.
        • Implicit masks for query, key and value inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any attention_mask passed in directly when calling the layer. This can be used with tf.keras.layers.Embedding with mask_zero=True to automatically infer a correct padding mask.
        • Added a use_causal_mask call-time argument to the layer. Passing use_causal_mask=True will compute a causal attention mask, and optionally combine it with any attention_mask passed in directly when calling the layer.
      • Added an ignore_class argument to the loss SparseCategoricalCrossentropy and the metrics IoU and MeanIoU, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class); see the sketch after this list.
      • Added tf.keras.models.experimental.SharpnessAwareMinimization. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
    • tf.data:

      • Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
      • Added dataset_id to tf.data.experimental.service.register_dataset. If provided, tf.data service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call register_dataset with the same dataset_id (see the sketch after this list).
      • Added a new field, inject_prefetch, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will automatically add a prefetch transformation to datasets that end in synchronous transformations, so that data generation can be overlapped with data consumption. This may cause a small increase in memory usage due to buffering.
      • Added a new value to tf.data.Options.autotune.autotune_algorithm: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
      • Added tf.data.experimental.from_list, a new API for creating Datasets from lists of elements.
    • tf.distribute:

      • Added tf.distribute.experimental.PreemptionCheckpointHandler to handle worker preemption/maintenance and cluster-wise consistent error reporting for tf.distribute.MultiWorkerMirroredStrategy. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
    • tf.math:

      • Added tf.math.approx_max_k and tf.math.approx_min_k, optimized alternatives to tf.math.top_k on TPU. The performance difference ranges from 8x to 100x depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
    • tf.train:

      • Added tf.train.TrackableView, which allows users to inspect TensorFlow Trackable objects (e.g. tf.Module, Keras layers and models).
    • tf.vectorized_map:

      • Added an optional parameter: warn. This parameter controls whether or not warnings will be printed when operations in the provided fn fall back to a while loop.
    • XLA:

      • MWMS is now compilable with XLA.
    • oneDNN CPU performance optimizations:

      • x86 CPUs: oneDNN bfloat16 auto-mixed precision grappler graph optimization pass has been renamed from auto_mixed_precision_mkl to auto_mixed_precision_onednn_bfloat16. See example usage here.
      • aarch64 CPUs: Experimental oneDNN optimizations are available in the default Linux aarch64 package (pip install tensorflow).
        • The optimizations are disabled by default.
        • Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable the optimizations. Setting the variable to 0 or unsetting it will disable the optimizations.
        • These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
        • To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.

    Bug Fixes and Other Changes

    • New argument experimental_device_ordinal in LogicalDeviceConfiguration to control the order of logical devices. (GPU only)

    • tf.keras:

      • Changed the TensorBoard tag names produced by the tf.keras.callbacks.TensorBoard callback, so that summaries logged automatically for model weights now include either a /histogram or /image suffix in their tag names, in order to prevent tag name collisions across summary types.
    • When running on GPU (with cuDNN version 7.6.3 or later), tf.nn.depthwise_conv2d backprop to filter (and therefore also tf.keras.layers.DepthwiseConv2D) now operates deterministically (and tf.errors.UnimplementedError is no longer thrown) when op-determinism has been enabled via tf.config.experimental.enable_op_determinism. This closes issue 47174.

    • tf.random

      • Added tf.random.experimental.stateless_shuffle, a stateless version of tf.random.shuffle.

    Deprecations

    • The C++ tensorflow::Code and tensorflow::Status will become aliases of absl::StatusCode and absl::Status, respectively, in some future release.
      • Use tensorflow::OkStatus() instead of tensorflow::Status::OK().
      • Stop constructing Status objects from tensorflow::error::Code.
      • One MUST NOT access tensorflow::errors::Code fields. Accessing tensorflow::error::Code fields is fine.
        • Use the constructors such as tensorflow::errors::InvalidArgument to create a status with an error code without accessing it.
        • Use the free functions such as tensorflow::errors::IsInvalidArgument if needed.
        • As a last resort, use e.g. static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT) or static_cast<int>(code) for comparisons.
    • tensorflow::StatusOr will also become an alias of absl::StatusOr in the future, so use StatusOr::value instead of StatusOr::ConsumeValueOrDie.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang

  • v2.10.0-rc0 (Aug 3, 2022)

    Release 2.10.0

    Breaking Changes

    • Causal attention in keras.layers.Attention and keras.layers.AdditiveAttention is now specified in the call() method via the use_causal_mask argument (rather than in the constructor), for consistency with other layers.
    • Some files in tensorflow/python/training have been moved to tensorflow/python/tracking and tensorflow/python/checkpoint. Please update your imports accordingly; the old files will be removed in Release 2.11.
    • tf.keras.optimizers.experimental.Optimizer will graduate in Release 2.11, which means tf.keras.optimizers.Optimizer will be an alias of tf.keras.optimizers.experimental.Optimizer. The current tf.keras.optimizers.Optimizer will continue to be supported as tf.keras.optimizers.legacy.Optimizer, e.g., tf.keras.optimizers.legacy.Adam. Most users won't be affected by this change, but please check the API doc if any API used in your workflow is changed or deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to tf.keras.optimizers.legacy.Optimizer.
    • RNG behavior change for tf.keras.initializers. Keras initializers will now use stateless random ops to generate random numbers.
      • Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (seed=None), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
      • An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.

    Major Features and Improvements

    • tf.lite:

      • New operations supported:
        • tflite SelectV2 now supports 5D.
        • tf.einsum is supported with multiple unknown shapes.
        • tf.unsortedsegmentprod op is supported.
        • tf.unsortedsegmentmax op is supported.
        • tf.unsortedsegmentsum op is supported.
      • Updates to existing operations:
        • tfl.scatter_nd now supports I1 for the update arg.
      • Upgraded Flatbuffers from v1.12.0 to v2.0.5.
    • tf.keras:

      • EinsumDense layer moved from experimental to core. Its import path moved from tf.keras.layers.experimental.EinsumDense to tf.keras.layers.EinsumDense.
      • Added tf.keras.utils.audio_dataset_from_directory utility to easily generate audio classification datasets from directories of .wav files.
      • Added subset="both" support in tf.keras.utils.image_dataset_from_directory, tf.keras.utils.text_dataset_from_directory, and audio_dataset_from_directory, to be used with the validation_split argument, for returning both dataset splits at once, as a tuple.
      • Added tf.keras.utils.split_dataset utility to split a Dataset object or a list/tuple of arrays into two Dataset objects (e.g. train/test).
      • Added step granularity to BackupAndRestore callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing (see the sketch after this list).
      • Added tf.keras.dtensor.experimental.optimizers.AdamW. This optimizer is similar to the existing keras.optimizers.experimental.AdamW, and works in the DTensor training use case.
      • Improved masking support for tf.keras.layers.MultiHeadAttention.
        • Implicit masks for query, key and value inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any attention_mask passed in directly when calling the layer. This can be used with tf.keras.layers.Embedding with mask_zero=True to automatically infer a correct padding mask.
        • Added a use_causal_mask call-time argument to the layer. Passing use_causal_mask=True will compute a causal attention mask, and optionally combine it with any attention_mask passed in directly when calling the layer.
      • Added an ignore_class argument to the loss SparseCategoricalCrossentropy and the metrics IoU and MeanIoU, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
      • Added tf.keras.models.experimental.SharpnessAwareMinimization. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
    • tf.data:

      • Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
      • Added dataset_id to tf.data.experimental.service.register_dataset. If provided, tf.data service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call register_dataset with the same dataset_id.
      • Added a new field, inject_prefetch, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will now automatically add a prefetch transformation to datasets that end in synchronous transformations. This enables data generation to be overlapped with data consumption. This may cause a small increase in memory usage due to buffering. To enable this behavior, set inject_prefetch=True in tf.data.experimental.OptimizationOptions.
      • Added a new value to tf.data.Options.autotune.autotune_algorithm: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
      • Added tf.data.experimental.from_list, a new API for creating Datasets from lists of elements.
    • tf.distribute:

      • Added tf.distribute.experimental.PreemptionCheckpointHandler to handle worker preemption/maintenance and cluster-wise consistent error reporting for tf.distribute.MultiWorkerMirroredStrategy. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
    • tf.math:

      • Added tf.math.approx_max_k and tf.math.approx_min_k, optimized alternatives to tf.math.top_k on TPU. The performance difference ranges from 8x to 100x depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
    • tf.train:

      • Added tf.train.TrackableView, which allows users to inspect TensorFlow Trackable objects (e.g. tf.Module, Keras layers and models).
    • tf.vectorized_map:

      • Added an optional parameter: warn. This parameter controls whether or not warnings will be printed when operations in the provided fn fall back to a while loop.
    • XLA:

      • MWMS is now compilable with XLA.

    Bug Fixes and Other Changes

    • New argument experimental_device_ordinal in LogicalDeviceConfiguration to control the order of logical devices. (GPU only)

    • tf.keras:

      • Changed the TensorBoard tag names produced by the tf.keras.callbacks.TensorBoard callback, so that summaries logged automatically for model weights now include either a /histogram or /image suffix in their tag names, in order to prevent tag name collisions across summary types.
    • When running on GPU (with cuDNN version 7.6.3 or later), tf.nn.depthwise_conv2d backprop to filter (and therefore also tf.keras.layers.DepthwiseConv2D) now operates deterministically (and tf.errors.UnimplementedError is no longer thrown) when op-determinism has been enabled via tf.config.experimental.enable_op_determinism. This closes issue 47174.

    • tf.random

      • Added tf.random.experimental.stateless_shuffle, a stateless version of tf.random.shuffle.

    Deprecations

    • The C++ tensorflow::Code and tensorflow::Status will become aliases of absl::StatusCode and absl::Status, respectively, in some future release.
      • Use tensorflow::OkStatus() instead of tensorflow::Status::OK().
      • Stop constructing Status objects from tensorflow::error::Code.
      • One MUST NOT access tensorflow::errors::Code fields. Accessing tensorflow::error::Code fields is fine.
        • Use the constructors such as tensorflow::errors::InvalidArgument to create a status with an error code without accessing it.
        • Use the free functions such as tensorflow::errors::IsInvalidArgument if needed.
        • As a last resort, use e.g. static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT) or static_cast<int>(code) for comparisons.
    • tensorflow::StatusOr will also become an alias of absl::StatusOr in the future, so use StatusOr::value instead of StatusOr::ConsumeValueOrDie.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang

  • v2.9.1 (May 23, 2022)

    Release 2.9.1

    Add an upper bound for protobuf in setup.py, since protobuf versions after 3.20 are currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.

  • v2.8.2 (May 23, 2022)

    Release 2.8.2

    Add an upper bound for protobuf in setup.py, since protobuf versions after 3.20 are currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.

  • v2.6.5 (May 23, 2022)

    Release 2.6.5

    Add an upper bound for protobuf in setup.py, since protobuf versions after 3.20 are currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.

    This is the final release in the 2.6.x series.

  • v2.7.3 (May 23, 2022)

    Release 2.7.3

    Add an upper bound for protobuf in setup.py, since protobuf versions after 3.20 are currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.

  • v2.9.0 (May 16, 2022)

    Release 2.9.0

    Breaking Changes

    • Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
    • Build, Compilation and Packaging
      • TensorFlow is now compiled with _GLIBCXX_USE_CXX11_ABI=1. Downstream projects that encounter std::__cxx11 or [abi:cxx11] linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
      • TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
      • Discussion of these changes can be found on SIG Build's TensorFlow Community Forum thread.
    • The tf.keras.mixed_precision.experimental API has been removed. The non-experimental symbols under tf.keras.mixed_precision have been available since TensorFlow 2.4 and should be used instead.
      • The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes (sketched after this list):
        • Remove the word "experimental" from tf.keras.mixed_precision symbols. E.g., replace tf.keras.mixed_precision.experimental.global_policy with tf.keras.mixed_precision.global_policy.
        • Replace tf.keras.mixed_precision.experimental.set_policy with tf.keras.mixed_precision.set_global_policy. The experimental symbol set_policy was renamed to set_global_policy in the non-experimental API.
        • Replace LossScaleOptimizer(opt, "dynamic") with LossScaleOptimizer(opt). If you pass anything other than "dynamic" to the second argument, see (1) of the next section.
      • In the following rare cases, you need to make more changes when switching to the non-experimental API:
        • If you passed anything other than "dynamic" to the loss_scale argument (the second argument) of LossScaleOptimizer:
        • If you passed a value to the loss_scale argument (the second argument) of Policy:
          • The experimental version of Policy optionally took in a tf.compat.v1.mixed_precision.LossScale in the constructor, which defaulted to a dynamic loss scale for the "mixed_float16" policy and no loss scale for other policies. In Model.compile, if the model's policy had a loss scale, the optimizer would be wrapped with a LossScaleOptimizer. With the non-experimental Policy, there is no loss scale associated with the Policy, and Model.compile wraps the optimizer with a LossScaleOptimizer if and only if the policy is a "mixed_float16" policy. If you previously passed a LossScale to the experimental Policy, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a LossScaleOptimizer before passing it to Model.compile.
        • If you use the very rarely-used function tf.keras.mixed_precision.experimental.get_layer_policy:
          • Replace tf.keras.mixed_precision.experimental.get_layer_policy(layer) with layer.dtype_policy.
    • tf.mixed_precision.experimental.LossScale and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for the now-removed tf.keras.mixed_precision.experimental API. The symbols are still available under tf.compat.v1.mixed_precision.
    • The experimental_relax_shapes heuristic for tf.function has been deprecated and replaced with reduce_retracing, which encompasses broader heuristics to reduce the number of retraces (see below).

    Major Features and Improvements

    • tf.keras:

      • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies.
      • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
      • Added L2 unit normalization layer tf.keras.layers.UnitNormalization (see the sketch after this list).
      • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
      • Added tf.keras.layers.RandomBrightness layer for image preprocessing.
      • Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use tf.keras.utils.disable_interactive_logging() to write the logs to ABSL logging. You can also use tf.keras.utils.enable_interactive_logging() to change it back to stdout, or tf.keras.utils.is_interactive_logging_enabled() to check if interactive logging is enabled.
      • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
      • Argument jit_compile in Model.compile() now applies to Model.evaluate() and Model.predict(). Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps to XLA. Note that jit_compile=True may not necessarily work for all models.
      • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
    • tf.lite:

      • Added TFLite builtin op support for the following TF ops:
        • tf.math.argmin/tf.math.argmax for input data type tf.bool on CPU.
        • tf.nn.gelu op for output data type tf.float32 and quantization on CPU.
      • Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
      • Add support for unsigned 16-bit integer tensor types in cast op.
      • Experimental support for lowering list_ops.tensor_list_set_item with DynamicUpdateSlice.
      • Enabled a new MLIR-based dynamic range quantization backend by default
        • The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
        • Set experimental_new_dynamic_range_quantizer in tf.lite.TFLiteConverter to False to disable this change (see the sketch after this list).
      • Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. experimental_enable_resource_variables on tf.lite.TFLiteConverter is now True by default and will be removed in the future.
    • tf.function:

      • Custom classes used as arguments for tf.function can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through tf.types.experimental.SupportsTracingProtocol.
      • TypeSpec classes (as associated with ExtensionTypes) also implement the Tracing Protocol, which can be overridden if necessary.
      • The newly introduced reduce_retracing option also uses the Tracing Protocol to proactively generate generalized traces similar to experimental_relax_shapes (which has now been deprecated).
    • Unified eager and tf.function execution:

      • Eager mode can now execute each op as a tf.function, allowing for more consistent feature support in future releases.
      • It is available for immediate use.
        • See the TF_RUN_EAGER_OP_AS_FUNCTION environment variable in eager context.
        • Eager performance should be similar with this feature enabled.
          • A roughly 5us per-op overhead may be observed when running many small functions.
          • Note a known issue with GPU performance.
        • The behavior of tf.function itself is unaffected.
      • Note: This feature will be enabled by default in an upcoming version of TensorFlow.
    • tf.experimental.dtensor: Added DTensor, an extension to TensorFlow for large-scale modeling with minimal changes to user code. You are welcome to try it out, though be aware that the DTensor API is experimental and subject to backward-incompatible changes. DTensor and Keras integration is published under tf.keras.dtensor in this release (refer to the tf.keras entry). The tutorial and guide for DTensor will be published on https://www.tensorflow.org/. Please stay tuned.

    • oneDNN CPU performance optimizations are available in Linux x86, Windows x86, and Linux aarch64 packages.

      • Linux x86 packages:
        • oneDNN optimizations are enabled by default on CPUs with neural-network-focused hardware features such as AVX512_VNNI, AVX512_BF16, AMX, etc. (Intel Cascade Lake and newer CPUs.)
        • For older CPUs, oneDNN optimizations are disabled by default.
      • Windows x86 package: oneDNN optimizations are disabled by default.
      • Linux aarch64 (--config=mkl_aarch64) package:
        • Experimental oneDNN optimizations are disabled by default.
        • If you experience issues with oneDNN optimizations enabled, we recommend turning them off.
      • To explicitly enable or disable oneDNN optimizations, set the environment variable TF_ENABLE_ONEDNN_OPTS to 1 (enable) or 0 (disable) before running TensorFlow. (The variable is checked during import tensorflow.) To fall back to default settings, unset the environment variable.
      • These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
      • To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.

    Bug Fixes and Other Changes

    • tf.data:

      • Fixed bug in tf.data.experimental.parse_example_dataset when tf.io.RaggedFeatures would specify value_key but no partitions. Before the fix, setting value_key but no partitions would result in the feature key being replaced by the value key, e.g. {'value_key': <RaggedTensor>} instead of {'key': <RaggedTensor>}. Now the correct feature key will be used. This aligns the behavior of tf.data.experimental.parse_example_dataset to match the behavior of tf.io.parse_example.
      • Added a new field, filter_parallelization, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will run Filter transformation with multiple threads. Its default value is False if not specified.
    • tf.keras:

      • Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are ShardedVariables (used for training with tf.distribute.experimental.ParameterServerStrategy).
    • tf.random:

      • Added tf.random.experimental.index_shuffle, for shuffling a sequence without materializing the sequence in memory.
    • tf.RaggedTensor:

      • Introduced tf.experimental.RowPartition, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
      • Introduced tf.experimental.DynamicRaggedShape, which represents the shape of a RaggedTensor.

    Security

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09

  • v2.6.4 (May 16, 2022)

    Release 2.6.4

    This release introduces several vulnerability fixes.

  • v2.8.1 (May 16, 2022)

    Release 2.8.1

    This release introduces several vulnerability fixes.

  • v2.7.2 (May 16, 2022)

    Release 2.7.2

    This release introduces several vulnerability fixes.

  • v2.9.0-rc2 (May 4, 2022)

    Release 2.9.0

    Breaking Changes

    • Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
    • Build, Compilation and Packaging
      • TensorFlow is now compiled with _GLIBCXX_USE_CXX11_ABI=1. Downstream projects that encounter std::__cxx11 or [abi:cxx11] linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
      • TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
      • Discussion of these changes can be found on SIG Build's TensorFlow Community Forum thread.
    • The tf.keras.mixed_precision.experimental API has been removed. The non-experimental symbols under tf.keras.mixed_precision have been available since TensorFlow 2.4 and should be used instead.
      • The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes:
        • Remove the word "experimental" from tf.keras.mixed_precision symbols. E.g., replace tf.keras.mixed_precision.experimental.global_policy with tf.keras.mixed_precision.global_policy.
        • Replace tf.keras.mixed_precision.experimental.set_policy with tf.keras.mixed_precision.set_global_policy. The experimental symbol set_policy was renamed to set_global_policy in the non-experimental API.
        • Replace LossScaleOptimizer(opt, "dynamic") with LossScaleOptimizer(opt). If you pass anything other than "dynamic" to the second argument, see (1) of the next section.
      • In the following rare cases, you need to make more changes when switching to the non-experimental API:
        • If you passed anything other than "dynamic" to the loss_scale argument (the second argument) of LossScaleOptimizer:
        • If you passed a value to the loss_scale argument (the second argument) of Policy:
          • The experimental version of Policy optionally took in a tf.compat.v1.mixed_precision.LossScale in the constructor, which defaulted to a dynamic loss scale for the "mixed_float16" policy and no loss scale for other policies. In Model.compile, if the model's policy had a loss scale, the optimizer would be wrapped with a LossScaleOptimizer. With the non-experimental Policy, there is no loss scale associated with the Policy, and Model.compile wraps the optimizer with a LossScaleOptimizer if and only if the policy is a "mixed_float16" policy. If you previously passed a LossScale to the experimental Policy, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a LossScaleOptimizer before passing it to Model.compile.
        • If you use the very rarely-used function tf.keras.mixed_precision.experimental.get_layer_policy:
          • Replace tf.keras.mixed_precision.experimental.get_layer_policy(layer) with layer.dtype_policy.
    • tf.mixed_precision.experimental.LossScale and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed tf.keras.mixed_precision.experimental API. The symbols are still available under tf.compat.v1.mixed_precision.
    • The experimental_relax_shapes heuristic for tf.function has been deprecated and replaced with reduce_retracing, which encompasses broader heuristics to reduce the number of retraces (see below).
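
    A minimal migration sketch for the mixed precision renames above (a hedged illustration, not taken from the release notes; the SGD optimizer is an arbitrary placeholder):

      import tensorflow as tf
      from tensorflow.keras import mixed_precision

      # Before (experimental API, removed in 2.9):
      #   tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
      #   opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, "dynamic")

      # After (non-experimental API, available since TF 2.4):
      mixed_precision.set_global_policy("mixed_float16")  # set_policy -> set_global_policy
      opt = tf.keras.optimizers.SGD(learning_rate=0.01)   # placeholder optimizer
      opt = mixed_precision.LossScaleOptimizer(opt)       # dynamic loss scaling is the default
      print(mixed_precision.global_policy())              # "experimental" dropped from symbols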

    Major Features and Improvements

    • tf.keras:

      • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies
      • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
      • Added L2 unit normalization layer tf.keras.layers.UnitNormalization.
      • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
      • Added tf.keras.layers.RandomBrightness layer for image preprocessing.
      • Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use tf.keras.utils.disable_interactive_logging() to write the logs to ABSL logging. You can also use tf.keras.utils.enable_interactive_logging() to change it back to stdout, or tf.keras.utils.is_interactive_logging_enabled() to check if interactive logging is enabled.
      • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
      • Argument jit_compile in Model.compile() now applies to Model.evaluate() and Model.predict(). Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps to XLA. Note that jit_compile=True may not necessarily work for all models.
      • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
    • tf.lite:

      • Added TFLite builtin op support for the following TF ops:
        • tf.math.argmin/tf.math.argmax for input data type tf.bool on CPU.
        • tf.nn.gelu op for output data type tf.float32 and quantization on CPU.
      • Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
      • Add support for unsigned 16-bit integer tensor types in cast op.
      • Experimental support for lowering list_ops.tensor_list_set_item with DynamicUpdateSlice.
      • Enabled a new MLIR-based dynamic range quantization backend by default
        • The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
        • Set experimental_new_dynamic_range_quantizer in tf.lite.TFLiteConverter to False to disable this change
      • Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. experimental_enable_resource_variables on tf.lite.TFLiteConverter is now True by default and will be removed in the future.
    • tf.function:

      • Custom classes used as arguments for tf.function can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through tf.types.experimental.SupportsTracingProtocol.
      • TypeSpec classes (as associated with ExtensionTypes) also implement the Tracing Protocol, which can be overridden if necessary.
      • The newly introduced reduce_retracing option also uses the Tracing Protocol to proactively generate generalized traces, similar to experimental_relax_shapes (which has now been deprecated); see the sketch after this list.
    • Unified eager and tf.function execution:

      • Eager mode can now execute each op as a tf.function, allowing for more consistent feature support in future releases.
      • It is available for immediate use.
        • See the TF_RUN_EAGER_OP_AS_FUNCTION environment variable in eager context.
        • Eager performance should be similar with this feature enabled.
          • A roughly 5us per-op overhead may be observed when running many small functions.
          • Note a known issue with GPU performance.
        • The behavior of tf.function itself is unaffected.
      • Note: This feature will be enabled by default in an upcoming version of TensorFlow.
    • tf.experimental.dtensor: Added DTensor, an extension to TensorFlow for large-scale modeling with minimal changes to user code. You are welcome to try it out, though be aware that the DTensor API is experimental and subject to backward-incompatible changes. DTensor and Keras integration is published under tf.keras.dtensor in this release (refer to the tf.keras entry). The tutorial and guide for DTensor will be published on https://www.tensorflow.org/. Please stay tuned.
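
    A minimal sketch of the reduce_retracing option named above (hedged; the function body and inputs are illustrative only):

      import tensorflow as tf

      # reduce_retracing asks TensorFlow to generate generalized (e.g., shape-relaxed)
      # traces to cut down on retraces; it replaces the deprecated
      # experimental_relax_shapes flag.
      @tf.function(reduce_retracing=True)
      def square(x):
          return x * x

      # Different input shapes may reuse one relaxed trace instead of retracing.
      print(square(tf.constant([1.0, 2.0])))
      print(square(tf.constant([1.0, 2.0, 3.0])))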

    Bug Fixes and Other Changes

    • tf.data:

      • Fixed bug in tf.data.experimental.parse_example_dataset when tf.io.RaggedFeatures would specify value_key but no partitions. Before the fix, setting value_key but no partitions would result in the feature key being replaced by the value key, e.g. {'value_key': <RaggedTensor>} instead of {'key': <RaggedTensor>}. Now the correct feature key will be used. This aligns the behavior of tf.data.experimental.parse_example_dataset to match the behavior of tf.io.parse_example.
      • Added a new field, filter_parallelization, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will run the Filter transformation with multiple threads. Its default value is False if not specified (see the sketch after this list).
    • tf.keras:

      • Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are ShardedVariables (used for training with tf.distribute.experimental.ParameterServerStrategy).
    • tf.random:

      • Added tf.random.experimental.index_shuffle, for shuffling a sequence without materializing the sequence in memory (see the sketch after this list).
    • tf.RaggedTensor:

      • Introduced tf.experimental.RowPartition, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
      • Introduced tf.experimental.DynamicRaggedShape, which represents the shape of a RaggedTensor.
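
    Two small usage sketches for the index_shuffle and filter_parallelization additions above (hedged; the seed, index, and dataset are arbitrary examples, and argument names follow the 2.9 API as described):

      import tensorflow as tf

      # index_shuffle maps an index to its position in a pseudorandom permutation
      # of [0, max_index] without materializing the permutation in memory.
      seed = tf.constant([1, 42], dtype=tf.int64)  # stateless-style shape-[2] seed
      pos = tf.random.experimental.index_shuffle(index=7, seed=seed, max_index=999)
      print(int(pos))  # some position in [0, 999]

      # Opt in to parallelized Filter transformations via tf.data options.
      options = tf.data.Options()
      options.experimental_optimization.filter_parallelization = True
      ds = tf.data.Dataset.range(100).filter(lambda x: x % 2 == 0).with_options(options)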

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09

    Source code(tar.gz)
    Source code(zip)
  • v2.9.0-rc1(Apr 21, 2022)

    Release 2.9.0

    Breaking Changes

    • Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
    • Build, Compilation and Packaging
      • TensorFlow is now compiled with _GLIBCXX_USE_CXX11_ABI=1. Downstream projects that encounter std::__cxx11 or [abi:cxx11] linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
      • TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum pip version supporting manylinux2014 is pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
      • Discussion for these changes can be found on SIG Build's TensorFlow Community Forum thread
    • The tf.keras.mixed_precision.experimental API has been removed. The non-experimental symbols under tf.keras.mixed_precision have been available since TensorFlow 2.4 and should be used instead.
      • The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes:
        • Remove the word "experimental" from tf.keras.mixed_precision symbols. E.g., replace tf.keras.mixed_precision.experimental.global_policy with tf.keras.mixed_precision.global_policy.
        • Replace tf.keras.mixed_precision.experimental.set_policy with tf.keras.mixed_precision.set_global_policy. The experimental symbol set_policy was renamed to set_global_policy in the non-experimental API.
        • Replace LossScaleOptimizer(opt, "dynamic") with LossScaleOptimizer(opt). If you pass anything other than "dynamic" to the second argument, see (1) of the next section.
      • In the following rare cases, you need to make more changes when switching to the non-experimental API:
        • If you passed anything other than "dynamic" to the loss_scale argument (the second argument) of LossScaleOptimizer:
        • If you passed a value to the loss_scale argument (the second argument) of Policy:
          • The experimental version of Policy optionally took in a tf.compat.v1.mixed_precision.LossScale in the constructor, which defaulted to a dynamic loss scale for the "mixed_float16" policy and no loss scale for other policies. In Model.compile, if the model's policy had a loss scale, the optimizer would be wrapped with a LossScaleOptimizer. With the non-experimental Policy, there is no loss scale associated with the Policy, and Model.compile wraps the optimizer with a LossScaleOptimizer if and only if the policy is a "mixed_float16" policy. If you previously passed a LossScale to the experimental Policy, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a LossScaleOptimizer before passing it to Model.compile.
        • If you use the very rarely-used function tf.keras.mixed_precision.experimental.get_layer_policy:
          • Replace tf.keras.mixed_precision.experimental.get_layer_policy(layer) with layer.dtype_policy.
    • tf.mixed_precision.experimental.LossScale and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed tf.keras.mixed_precision.experimental API. The symbols are still available under tf.compat.v1.mixed_precision.
    • The experimental_relax_shapes heuristic for tf.function has been deprecated and replaced with reduce_retracing, which encompasses broader heuristics to reduce the number of retraces (see below).

    Major Features and Improvements

    • tf.keras:

      • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies
      • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
      • Added L2 unit normalization layer tf.keras.layers.UnitNormalization.
      • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
      • Added tf.keras.layers.RandomBrightness layer for image preprocessing.
      • Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use tf.keras.utils.disable_interactive_logging() to write the logs to ABSL logging. You can also use tf.keras.utils.enable_interactive_logging() to change it back to stdout, or tf.keras.utils.is_interactive_logging_enabled() to check if interactive logging is enabled.
      • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
      • Argument jit_compile in Model.compile() now applies to Model.evaluate() and Model.predict(). Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps to XLA. Note that jit_compile=True may not necessarily work for all models.
      • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
    • tf.lite:

      • Added TFLite builtin op support for the following TF ops:
        • tf.math.argmin/tf.math.argmax for input data type tf.bool on CPU.
        • tf.nn.gelu op for output data type tf.float32 and quantization on CPU.
      • Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
      • Add support for unsigned 16-bit integer tensor types in cast op.
      • Experimental support for lowering list_ops.tensor_list_set_item with DynamicUpdateSlice.
      • Enabled a new MLIR-based dynamic range quantization backend by default
        • The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
        • Set experimental_new_dynamic_range_quantizer in tf.lite.TFLiteConverter to False to disable this change
      • Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. experimental_enable_resource_variables on tf.lite.TFLiteConverter is now True by default and will be removed in the future.
    • tf.function:

      • Custom classes used as arguments for tf.function can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through tf.types.experimental.SupportsTracingProtocol.
      • TypeSpec classes (as associated with ExtensionTypes) also implement the Tracing Protocol, which can be overridden if necessary.
      • The newly introduced reduce_retracing option also uses the Tracing Protocol to proactively generate generalized traces similar to experimental_relax_shapes (which has now been deprecated).
    • Unified eager and tf.function execution:

      • Eager mode can now execute each op as a tf.function, allowing for more consistent feature support in future releases.
      • It is available for immediate use.
        • See the TF_RUN_EAGER_OP_AS_FUNCTION environment variable in eager context.
        • Eager performance should be similar with this feature enabled.
          • A roughly 5us per-op overhead may be observed when running many small functions.
          • Note a known issue with GPU performance.
        • The behavior of tf.function itself is unaffected.
      • Note: This feature will be enabled by default in an upcoming version of TensorFlow.
    • tf.experimental.dtensor: Added DTensor, an extension to TensorFlow for large-scale modeling with minimal changes to user code. You are welcome to try it out, though be aware that the DTensor API is experimental and subject to backward-incompatible changes. DTensor and Keras integration is published under tf.keras.dtensor in this release (refer to the tf.keras entry). The tutorial and guide for DTensor will be published on https://www.tensorflow.org/. Please stay tuned.

    Bug Fixes and Other Changes

    • tf.data:

      • Fixed bug in tf.data.experimental.parse_example_dataset when tf.io.RaggedFeatures would specify value_key but no partitions. Before the fix, setting value_key but no partitions would result in the feature key being replaced by the value key, e.g. {'value_key': <RaggedTensor>} instead of {'key': <RaggedTensor>}. Now the correct feature key will be used. This aligns the behavior of tf.data.experimental.parse_example_dataset to match the behavior of tf.io.parse_example.
      • Added a new field, filter_parallelization, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will run Filter transformation with multiple threads. Its default value is False if not specified.
    • tf.keras:

      • Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are ShardedVariables (used for training with tf.distribute.experimental.ParameterServerStrategy).
    • tf.random:

      • Added tf.random.experimental.index_shuffle, for shuffling a sequence without materializing the sequence in memory.
    • tf.RaggedTensor:

      • Introduced tf.experimental.RowPartition, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
      • Introduced tf.experimental.DynamicRaggedShape, which represents the shape of a RaggedTensor.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09

    Source code(tar.gz)
    Source code(zip)
  • v2.9.0-rc0(Apr 12, 2022)

    Release 2.9.0

    Breaking Changes

    • Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
    • Build, Compilation and Packaging
      • TensorFlow is now compiled with _GLIBCXX_USE_CXX11_ABI=1. Downstream projects that encounter std::__cxx11 or [abi:cxx11] linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
      • TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum pip version supporting manylinux2014 is pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
      • Discussion for these changes can be found on SIG Build's TensorFlow Community Forum thread
    • The tf.keras.mixed_precision.experimental API has been removed. The non-experimental symbols under tf.keras.mixed_precision have been available since TensorFlow 2.4 and should be used instead.
      • The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes:
        • Remove the word "experimental" from tf.keras.mixed_precision symbols. E.g., replace tf.keras.mixed_precision.experimental.global_policy with tf.keras.mixed_precision.global_policy.
        • Replace tf.keras.mixed_precision.experimental.set_policy with tf.keras.mixed_precision.set_global_policy. The experimental symbol set_policy was renamed to set_global_policy in the non-experimental API.
        • Replace LossScaleOptimizer(opt, "dynamic") with LossScaleOptimizer(opt). If you pass anything other than "dynamic" to the second argument, see (1) of the next section.
      • In the following rare cases, you need to make more changes when switching to the non-experimental API:
        • If you passed anything other than "dynamic" to the loss_scale argument (the second argument) of LossScaleOptimizer:
        • If you passed a value to the loss_scale argument (the second argument) of Policy:
          • The experimental version of Policy optionally took in a tf.compat.v1.mixed_precision.LossScale in the constructor, which defaulted to a dynamic loss scale for the "mixed_float16" policy and no loss scale for other policies. In Model.compile, if the model's policy had a loss scale, the optimizer would be wrapped with a LossScaleOptimizer. With the non-experimental Policy, there is no loss scale associated with the Policy, and Model.compile wraps the optimizer with a LossScaleOptimizer if and only if the policy is a "mixed_float16" policy. If you previously passed a LossScale to the experimental Policy, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a LossScaleOptimizer before passing it to Model.compile.
        • If you use the very rarely-used function tf.keras.mixed_precision.experimental.get_layer_policy:
          • Replace tf.keras.mixed_precision.experimental.get_layer_policy(layer) with layer.dtype_policy.
    • tf.mixed_precision.experimental.LossScale and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed tf.keras.mixed_precision.experimental API. The symbols are still available under tf.compat.v1.mixed_precision.
    • The experimental_relax_shapes heuristic for tf.function has been deprecated and replaced with reduce_retracing, which encompasses broader heuristics to reduce the number of retraces (see below).

    Major Features and Improvements

    • tf.keras:

      • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies
      • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
      • Added L2 unit normalization layer tf.keras.layers.UnitNormalization.
      • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
      • Added tf.keras.layers.RandomBrightness layer for image preprocessing.
      • Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use tf.keras.utils.disable_interactive_logging() to write the logs to ABSL logging. You can also use tf.keras.utils.enable_interactive_logging() to change it back to stdout, or tf.keras.utils.is_interactive_logging_enabled() to check if interactive logging is enabled.
      • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
      • Argument jit_compile in Model.compile() now applies to Model.evaluate() and Model.predict(). Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps to XLA. Note that jit_compile=True may not necessarily work for all models.
      • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
    • tf.lite:

      • Added TFLite builtin op support for the following TF ops:
        • tf.math.argmin/tf.math.argmax for input data type tf.bool on CPU.
        • tf.nn.gelu op for output data type tf.float32 and quantization on CPU.
      • Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
      • Add support for unsigned 16-bit integer tensor types in cast op.
      • Experimental support for lowering list_ops.tensor_list_set_item with DynamicUpdateSlice.
      • Enabled a new MLIR-based dynamic range quantization backend by default
        • The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
        • Set experimental_new_dynamic_range_quantizer in tf.lite.TFLiteConverter to False to disable this change
      • Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. experimental_enable_resource_variables on tf.lite.TFLiteConverter is now True by default and will be removed in the future.
    • tf.function:

      • Custom classes used as arguments for tf.function can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through tf.types.experimental.SupportsTracingProtocol.
      • TypeSpec classes (as associated with ExtensionTypes) also implement the Tracing Protocol, which can be overridden if necessary.
      • The newly introduced reduce_retracing option also uses the Tracing Protocol to proactively generate generalized traces similar to experimental_relax_shapes (which has now been deprecated).
    • Unified eager and tf.function execution:

      • Eager mode can now execute each op as a tf.function, allowing for more consistent feature support in future releases.
      • It is available for immediate use.
        • See the TF_RUN_EAGER_OP_AS_FUNCTION environment variable in eager context.
        • Eager performance should be similar with this feature enabled.
          • A roughly 5us per-op overhead may be observed when running many small functions.
          • Note a known issue with GPU performance.
        • The behavior of tf.function itself is unaffected.
      • Note: This feature will be enabled by default in an upcoming version of TensorFlow.

    Bug Fixes and Other Changes

    • tf.data:

      • Fixed bug in tf.data.experimental.parse_example_dataset when tf.io.RaggedFeatures would specify value_key but no partitions. Before the fix, setting value_key but no partitions would result in the feature key being replaced by the value key, e.g. {'value_key': <RaggedTensor>} instead of {'key': <RaggedTensor>}. Now the correct feature key will be used. This aligns the behavior of tf.data.experimental.parse_example_dataset to match the behavior of tf.io.parse_example.
      • Added a new field, filter_parallelization, to tf.data.experimental.OptimizationOptions. If it is set to True, tf.data will run Filter transformation with multiple threads. Its default value is False if not specified.
    • tf.keras:

      • Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are ShardedVariables (used for training with tf.distribute.experimental.ParameterServerStrategy).
    • tf.random:

      • Added tf.random.experimental.index_shuffle, for shuffling a sequence without materializing the sequence in memory.
    • tf.RaggedTensor:

      • Introduced tf.experimental.RowPartition, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
      • Introduced tf.experimental.DynamicRaggedShape, which represents the shape of a RaggedTensor.

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09

    Source code(tar.gz)
    Source code(zip)
  • v2.8.0(Feb 2, 2022)

    Release 2.8.0

    Major Features and Improvements

    • tf.lite:

      • Added TFLite builtin op support for the following TF ops:
        • tf.raw_ops.Bucketize op on CPU.
        • tf.where op for data types tf.int32/tf.uint32/tf.int8/tf.uint8/tf.int64.
        • tf.random.normal op for output data type tf.float32 on CPU.
        • tf.random.uniform op for output data type tf.float32 on CPU.
        • tf.random.categorical op for output data type tf.int64 on CPU.
    • tensorflow.experimental.tensorrt:

      • conversion_params is now deprecated inside TrtGraphConverterV2 in favor of direct arguments: max_workspace_size_bytes, precision_mode, minimum_segment_size, maximum_cached_engines, use_calibration and allow_build_at_runtime.
      • Added a new parameter called save_gpu_specific_engines to the .save() function inside TrtGraphConverterV2. When False, the .save() function won't save any TRT engines that have been built. When True (default), the original behavior is preserved.
      • TrtGraphConverterV2 provides a new API called .summary() which outputs a summary of the inference converted by TF-TRT. In particular, it shows each TRTEngineOp with the shapes and dtypes of its inputs and outputs. A detailed version of the summary is also available, which additionally prints all the TensorFlow ops included in each of the TRTEngineOps.
    • tf.tpu.experimental.embedding:

      • tf.tpu.experimental.embedding.FeatureConfig now takes an additional argument output_shape which can specify the shape of the output activation for the feature.
      • tf.tpu.experimental.embedding.TPUEmbedding now has the same behavior as tf.tpu.experimental.embedding.serving_embedding_lookup, which can take dense and sparse tensors of arbitrary rank. For ragged tensors, though the input tensor remains rank 2, the activations can now be rank 2 or above by specifying the output shape in the feature config or via the build method.
    • Add tf.config.experimental.enable_op_determinism, which makes TensorFlow ops run deterministically at the cost of performance (see the sketch after this list). This replaces the TF_DETERMINISTIC_OPS environment variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes.

    • (Since TF 2.7) Add PluggableDevice support to TensorFlow Profiler.
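
    A minimal sketch of opting into op determinism as noted above (hedged; the seed and op are arbitrary examples):

      import tensorflow as tf

      # Make TF ops run deterministically, trading performance for reproducibility.
      # This supersedes the now-deprecated TF_DETERMINISTIC_OPS environment variable.
      tf.config.experimental.enable_op_determinism()

      # Determinism also requires seeding all sources of randomness.
      tf.random.set_seed(42)
      print(tf.random.uniform([2]))  # identical values on every run with this seed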

    Bug Fixes and Other Changes

    • tf.data:

      • The parallel_batch optimization is now enabled by default unless disabled by users; it parallelizes copying of batch elements.
      • Added the ability for TensorSliceDataset to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
    • tf.lite:

      • Adds GPU delegation support for serialization to the Java API. This speeds up initialization by up to 90% when OpenCL is available.
      • Deprecated Interpreter::SetNumThreads, in favor of InterpreterBuilder::SetNumThreads.
    • tf.keras:

      • Adds tf.compat.v1.keras.utils.get_or_create_layer to aid migration to TF2 by enabling tracking of nested keras models created in TF1-style, when used with the tf.compat.v1.keras.utils.track_tf1_style_variables decorator.
      • Added a tf.keras.layers.experimental.preprocessing.HashedCrossing layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
      • Removed keras.layers.experimental.preprocessing.CategoryCrossing. Users should migrate to the HashedCrossing layer or use tf.sparse.cross/tf.ragged.cross directly.
      • Added additional standardize and split modes to TextVectorization:
        • standardize="lower" will lowercase inputs.
        • standardize="strip_punctuation" will remove all punctuation.
        • split="character" will split on every Unicode character.
      • Added an output_mode argument to the Discretization and Hashing layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support output_mode.
      • All preprocessing layer output will follow the compute dtype of a tf.keras.mixed_precision.Policy, unless constructed with output_mode="int" in which case output will be tf.int64. The output type of any preprocessing layer can be controlled individually by passing a dtype argument to the layer.
      • tf.random.Generator is now available for Keras initializers and all RNG code.
      • Added 3 new APIs to enable/disable/check the usage of tf.random.Generator in the Keras backend, which will be the new backend for all RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change will likely cause some breakage on the user side (e.g., if a test checks against a golden number). These 3 APIs allow users to disable the new behavior and switch back to the legacy behavior if they prefer. In the future (e.g., TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, and these 3 APIs will be removed as well.
      • tf.keras.callbacks.experimental.BackupAndRestore is now available as tf.keras.callbacks.BackupAndRestore. The experimental endpoint is deprecated and will be removed in a future release.
      • tf.keras.experimental.SidecarEvaluator is now available as tf.keras.utils.SidecarEvaluator. The experimental endpoint is deprecated and will be removed in a future release.
      • Metrics update and collection logic in default Model.train_step() is now customizable via overriding Model.compute_metrics().
      • Loss computation logic in the default Model.train_step() is now customizable by overriding Model.compute_loss() (see the sketch after this list).
      • jit_compile added to Model.compile() on an opt-in basis to compile the model's training step with XLA. Note that jit_compile=True may not necessarily work for all models.
    • Deterministic Op Functionality:

      • Fix a regression, introduced in v2.5, in the deterministic selection of cuDNN convolution algorithms. Note that nondeterministic out-of-memory events while selecting algorithms could still lead to nondeterminism, although this is very unlikely. This additional, unlikely source will be eliminated in a later version.
      • Add deterministic GPU implementations of:
        • Functions decorated with tf.function(jit_compile=True) that use Scatter.
        • (since v2.7) Stateful ops used in tf.data.Dataset
        • (since v2.7) tf.convert_to_tensor when fed with (sparse) tf.IndexedSlices (because it uses tf.math.unsorted_segment_sum)
        • (since v2.7) tf.gather backprop (because tf.convert_to_tensor reduces tf.gather's (sparse) tf.IndexedSlices gradients into its dense params input)
        • (since v2.7) tf.math.segment_mean
        • (since v2.7) tf.math.segment_prod
        • (since v2.7) tf.math.segment_sum
        • (since v2.7) tf.math.unsorted_segment_mean
        • (since v2.7) tf.math.unsorted_segment_prod
        • (since v2.7) tf.math.unsorted_segment_sum
        • (since v2.7) tf.math.unsorted_segment_sqrt_n
        • (since v2.7) tf.nn.ctc_loss (resolved, possibly in prior release, and confirmed with tests)
        • (since v2.7) tf.nn.sparse_softmax_cross_entropy_with_logits
      • (since v2.7) Run tf.scatter_nd and other related scatter functions, such as tf.tensor_scatter_nd_update, on CPU (with significant performance penalty).
      • Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e., after tf.config.experimental.enable_op_determinism has been called), an attempt to use the specified paths through the following ops on a GPU will cause a tf.errors.UnimplementedError (with an understandable message) to be thrown, unless otherwise specified below.
        • FakeQuantWithMinMaxVarsGradient and FakeQuantWithMinMaxVarsPerChannelGradient
        • (since v2.7) tf.compat.v1.get_seed if the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument from C++
        • (since v2.7) tf.compat.v1.nn.fused_batch_norm backprop to offset when is_training=False
        • (since v2.7) tf.image.adjust_contrast forward
        • (since v2.7) tf.image.resize with method=ResizeMethod.NEAREST backprop
        • (since v2.7) tf.linalg.svd
        • (since v2.7) tf.math.bincount
        • (since v2.7) tf.nn.depthwise_conv2d backprop to filter when not using cuDNN convolution
        • (since v2.7) tf.nn.dilation2d gradient
        • (since v2.7) tf.nn.max_pool_with_argmax gradient
        • (since v2.7) tf.raw_ops.DebugNumericSummary and tf.raw_ops.DebugNumericSummaryV2
        • (since v2.7) tf.timestamp. Throws FailedPrecondition
        • (since v2.7) tf.Variable.scatter_add (and other scatter methods, both on ref and resource variables)
        • (since v2.7) The random-number-generating ops in the tf.random module when the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument from C++
    • TensorFlow-oneDNN no longer supports explicit use of oneDNN blocked tensor format, e.g., setting the environment variable TF_ENABLE_MKL_NATIVE_FORMAT will not have any effect.

    • TensorFlow has been validated on Windows Subsystem for Linux 2 (aka WSL 2) for both GPUs and CPUs.

    • Due to security issues (see the Security section below), all boosted trees code has been deprecated. Users should switch to TensorFlow Decision Forests. TF's boosted trees code will be eliminated before the branch cut for TF 2.9 and will no longer be present from that release onward.
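
    A hedged sketch of the Model.compute_loss customization noted above (the model, data, and extra loss term are arbitrary examples; jit_compile shows the new opt-in XLA flag):

      import tensorflow as tf

      class MyModel(tf.keras.Model):
          def __init__(self):
              super().__init__()
              self.dense = tf.keras.layers.Dense(1)

          def call(self, inputs):
              return self.dense(inputs)

          def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None):
              # Start from the compiled loss, then add an arbitrary extra term.
              loss = super().compute_loss(x, y, y_pred, sample_weight)
              return loss + 0.01 * tf.reduce_mean(tf.square(y_pred))

      model = MyModel()
      model.compile(optimizer="sgd", loss="mse", jit_compile=True)  # opt-in XLA
      model.fit(tf.random.normal([8, 4]), tf.random.normal([8, 1]),
                epochs=1, verbose=0)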

    Security

    • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
    • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
    • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
    • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
    • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
    • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
    • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
    • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
    • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
    • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
    • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
    • Fixes integer overflows in AddManySparseToTensorsMap (CVE-2022-23568)
    • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)
    • Fixes a division by zero in FractionalMaxPool (CVE-2022-21735)
    • Fixes a number of CHECK-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
    • Fixes an undefined behavior in SparseTensorSliceDataset (CVE-2022-21736)
    • Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
    • Fixes a reference binding to null pointer in QuantizedMaxPool (CVE-2022-21739)
    • Fixes an integer overflow leading to crash in SparseCountSparseOutput (CVE-2022-21738)
    • Fixes a heap overflow in SparseCountSparseOutput (CVE-2022-21740)
    • Fixes an FPE in BiasAndClamp in TFLite (CVE-2022-23557)
    • Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
    • Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
    • Fixes an integer overflow in TFLite (CVE-2022-23559)
    • Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
    • Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
    • Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
    • Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
    • Fixes a vulnerability where missing validation causes tf.sparse.split to crash when axis is a tuple (CVE-2021-41206)
    • Fixes a CHECK-fail when decoding resource handles from proto (CVE-2022-23564)
    • Fixes a CHECK-fail with repeated AttrDef (CVE-2022-23565)
    • Fixes a heap OOB write in Grappler (CVE-2022-23566)
    • Fixes a CHECK-fail when decoding invalid tensors from proto (CVE-2022-23571)
    • Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
    • Fixes a crash when type cannot be specialized (CVE-2022-23572)
    • Fixes a heap OOB read/write in SpecializeType (CVE-2022-23574)
    • Fixes an uninitialized variable access in AssignOp (CVE-2022-23573)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateTensorSize (CVE-2022-23575)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateOutputSize (CVE-2022-23576)
    • Fixes a null dereference in GetInitOp (CVE-2022-23577)
    • Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
    • Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
    • Fixes multiple CHECK-failures during Grappler's IsSimplifiableReshape (CVE-2022-23581)
    • Fixes multiple CHECK-failures during Grappler's SafeToRemoveIdentity (CVE-2022-23579)
    • Fixes multiple CHECK-failures in TensorByteSize (CVE-2022-23582)
    • Fixes multiple CHECK-failures in binary ops due to type confusion (CVE-2022-23583)
    • Fixes a use after free in DecodePng kernel (CVE-2022-23584)
    • Fixes a memory leak in decoding PNG images (CVE-2022-23585)
    • Fixes multiple CHECK-fails in function.cc (CVE-2022-23586)
    • Fixes multiple CHECK-fails due to attempting to build a reference tensor (CVE-2022-23588)
    • Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
    • Fixes a null pointer dereference in Grappler's IsConstant (CVE-2022-23589)
    • Fixes a CHECK failure in constant folding (CVE-2021-41197)
    • Fixes a stack overflow due to self-recursive function in GraphDef (CVE-2022-23591)
    • Fixes a heap OOB access in RunForwardTypeInference (CVE-2022-23592)
    • Fixes a crash due to erroneous StatusOr (CVE-2022-23590)
    • Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) (CVE-2022-23594)
    • Fixes a segfault in simplifyBroadcast (MLIR) (CVE-2022-23593)
    • Fixes a null pointer dereference in BuildXlaCompilationCache (XLA) (CVE-2022-23595)
    • Updates icu to 69.1 to handle CVE-2020-10531

    Thanks to our Contributors

    This release contains contributions from many people at Google, as well as:

    8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate, dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai, Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek, jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo, Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark Harfouche, Martin Patz, Maxiwell S. Garcia, Meenakshi Venkataraman, Michael Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer, tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei Sun, Yong Tang, Yuduo Wu

    Source code(tar.gz)
    Source code(zip)
  • v2.7.1(Feb 2, 2022)

    Release 2.7.1

    This release introduces several vulnerability fixes:

    • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
    • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
    • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
    • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
    • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
    • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
    • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
    • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
    • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
    • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
    • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
    • Fixes integer overflows in AddManySparseToTensorsMap (CVE-2022-23568)
    • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)
    • Fixes a division by zero in FractionalMaxPool (CVE-2022-21735)
    • Fixes a number of CHECK-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
    • Fixes an undefined behavior in SparseTensorSliceDataset (CVE-2022-21736)
    • Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
    • Fixes a reference binding to null pointer in QuantizedMaxPool (CVE-2022-21739)
    • Fixes an integer overflow leading to crash in SparseCountSparseOutput (CVE-2022-21738)
    • Fixes a heap overflow in SparseCountSparseOutput (CVE-2022-21740)
    • Fixes an FPE in BiasAndClamp in TFLite (CVE-2022-23557)
    • Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
    • Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
    • Fixes an integer overflow in TFLite (CVE-2022-23559)
    • Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
    • Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
    • Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
    • Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
    • Fixes a vulnerability where missing validation causes tf.sparse.split to crash when axis is a tuple (CVE-2021-41206)
    • Fixes a CHECK-fail when decoding resource handles from proto (CVE-2022-23564)
    • Fixes a CHECK-fail with repeated AttrDef (CVE-2022-23565)
    • Fixes a heap OOB write in Grappler (CVE-2022-23566)
    • Fixes a CHECK-fail when decoding invalid tensors from proto (CVE-2022-23571)
    • Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
    • Fixes a crash when type cannot be specialized (CVE-2022-23572)
    • Fixes a heap OOB read/write in SpecializeType (CVE-2022-23574)
    • Fixes an uninitialized variable access in AssignOp (CVE-2022-23573)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateTensorSize (CVE-2022-23575)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateOutputSize (CVE-2022-23576)
    • Fixes a null dereference in GetInitOp (CVE-2022-23577)
    • Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
    • Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
    • Fixes multiple CHECK-failures during Grappler's IsSimplifiableReshape (CVE-2022-23581)
    • Fixes multiple CHECK-failures during Grappler's SafeToRemoveIdentity (CVE-2022-23579)
    • Fixes multiple CHECK-failures in TensorByteSize (CVE-2022-23582)
    • Fixes multiple CHECK-failures in binary ops due to type confusion (CVE-2022-23583)
    • Fixes a use after free in DecodePng kernel (CVE-2022-23584)
    • Fixes a memory leak in decoding PNG images (CVE-2022-23585)
    • Fixes multiple CHECK-fails in function.cc (CVE-2022-23586)
    • Fixes multiple CHECK-fails due to attempting to build a reference tensor (CVE-2022-23588)
    • Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
    • Fixes a null pointer dereference in Grappler's IsConstant (CVE-2022-23589)
    • Fixes a CHECK failure in constant folding (CVE-2021-41197)
    • Fixes a stack overflow due to self-recursive function in GraphDef (CVE-2022-23591)
    • Fixes a crash due to erroneous StatusOr (CVE-2022-23590)
    • Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) (CVE-2022-23594)
    • Fixes a null pointer dereference in BuildXlaCompilationCache (XLA) (CVE-2022-23595)
    • Updates icu to 69.1 to handle CVE-2020-10531
    Source code(tar.gz)
    Source code(zip)
  • v2.6.3(Feb 2, 2022)

    Release 2.6.3

    This release introduces several vulnerability fixes:

    • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
    • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
    • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
    • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
    • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
    • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
    • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
    • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
    • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
    • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
    • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
    • Fixes integer overflows in AddManySparseToTensorsMap (CVE-2022-23568)
    • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)
    • Fixes a division by zero in FractionalMaxPool (CVE-2022-21735)
    • Fixes a number of CHECK-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
    • Fixes undefined behavior in SparseTensorSliceDataset (CVE-2022-21736)
    • Fixes an assertion-failure-based denial of service via faulty bin count operations (CVE-2022-21737)
    • Fixes a reference binding to a null pointer in QuantizedMaxPool (CVE-2022-21739)
    • Fixes an integer overflow leading to crash in SparseCountSparseOutput (CVE-2022-21738)
    • Fixes a heap overflow in SparseCountSparseOutput (CVE-2022-21740)
    • Fixes an FPE in BiasAndClamp in TFLite (CVE-2022-23557)
    • Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
    • Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
    • Fixes an integer overflow in TFLite (CVE-2022-23559)
    • Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
    • Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
    • Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
    • Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
    • Fixes a vulnerability where missing validation causes tf.sparse.split to crash when axis is a tuple (CVE-2021-41206)
    • Fixes a CHECK-fail when decoding resource handles from proto (CVE-2022-23564)
    • Fixes a CHECK-fail with repeated AttrDef (CVE-2022-23565)
    • Fixes a heap OOB write in Grappler (CVE-2022-23566)
    • Fixes a CHECK-fail when decoding invalid tensors from proto (CVE-2022-23571)
    • Fixes a null dereference when specializing tensor type (CVE-2022-23570)
    • Fixes a crash when type cannot be specialized (CVE-2022-23572)
    • Fixes a heap OOB read/write in SpecializeType (CVE-2022-23574)
    • Fixes an uninitialized variable access in AssignOp (CVE-2022-23573)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateTensorSize (CVE-2022-23575)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateOutputSize (CVE-2022-23576)
    • Fixes a null dereference in GetInitOp (CVE-2022-23577)
    • Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
    • Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
    • Fixes multiple CHECK-failures during Grappler's IsSimplifiableReshape (CVE-2022-23581)
    • Fixes multiple CHECK-failures during Grappler's SafeToRemoveIdentity (CVE-2022-23579)
    • Fixes multiple CHECK-failures in TensorByteSize (CVE-2022-23582)
    • Fixes multiple CHECK-failures in binary ops due to type confusion (CVE-2022-23583)
    • Fixes a use after free in DecodePng kernel (CVE-2022-23584)
    • Fixes a memory leak in decoding PNG images (CVE-2022-23585)
    • Fixes multiple CHECK-fails in function.cc (CVE-2022-23586)
    • Fixes multiple CHECK-fails due to attempting to build a reference tensor (CVE-2022-23588)
    • Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
    • Fixes a null pointer dereference in Grappler's IsConstant (CVE-2022-23589)
    • Fixes a CHECK failure in constant folding (CVE-2021-41197)
    • Fixes a stack overflow due to self-recursive function in GraphDef (CVE-2022-23591)
    • Fixes a null pointer dereference in BuildXlaCompilationCache (XLA) (CVE-2022-23595)
    • Updates icu to 69.1 to handle CVE-2020-10531
    Source code (tar.gz)
    Source code (zip)
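
    If you are on the 2.6 release line, you can pick up these fixes by pinning the patched release explicitly. A minimal sketch, assuming you install the standard tensorflow package from PyPI (use tensorflow-cpu instead if you run the CPU-only package):

    $ pip install --upgrade tensorflow==2.6.3
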
  • v2.5.3 (Feb 2, 2022)

    Release 2.5.3

    Note: This is the last release in the 2.5 series.

    This release introduces several vulnerability fixes:

    • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
    • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
    • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
    • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
    • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
    • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
    • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
    • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
    • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
    • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
    • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
    • Fixes an integer overflow in AddManySparseToTensorsMap (CVE-2022-23568)
    • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)
    • Fixes a division by zero in FractionalMaxPool (CVE-2022-21735)
    • Fixes a number of CHECK-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
    • Fixes undefined behavior in SparseTensorSliceDataset (CVE-2022-21736)
    • Fixes an assertion-failure-based denial of service via faulty bin count operations (CVE-2022-21737)
    • Fixes a reference binding to a null pointer in QuantizedMaxPool (CVE-2022-21739)
    • Fixes an integer overflow leading to crash in SparseCountSparseOutput (CVE-2022-21738)
    • Fixes a heap overflow in SparseCountSparseOutput (CVE-2022-21740)
    • Fixes an FPE in BiasAndClamp in TFLite (CVE-2022-23557)
    • Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
    • Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
    • Fixes an integer overflow in TFLite (CVE-2022-23559)
    • Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
    • Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
    • Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
    • Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
    • Fixes a vulnerability where missing validation causes tf.sparse.split to crash when axis is a tuple (CVE-2021-41206)
    • Fixes a CHECK-fail when decoding resource handles from proto (CVE-2022-23564)
    • Fixes a CHECK-fail with repeated AttrDef (CVE-2022-23565)
    • Fixes a heap OOB write in Grappler (CVE-2022-23566)
    • Fixes a CHECK-fail when decoding invalid tensors from proto (CVE-2022-23571)
    • Fixes an uninitialized variable access in AssignOp (CVE-2022-23573)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateTensorSize (CVE-2022-23575)
    • Fixes an integer overflow in OpLevelCostEstimator::CalculateOutputSize (CVE-2022-23576)
    • Fixes a null dereference in GetInitOp (CVE-2022-23577)
    • Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
    • Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
    • Fixes multiple CHECK-failures during Grappler's IsSimplifiableReshape (CVE-2022-23581)
    • Fixes multiple CHECK-failures during Grappler's SafeToRemoveIdentity (CVE-2022-23579)
    • Fixes multiple CHECK-failures in TensorByteSize (CVE-2022-23582)
    • Fixes multiple CHECK-failures in binary ops due to type confusion (CVE-2022-23583)
    • Fixes a use after free in DecodePng kernel (CVE-2022-23584)
    • Fixes a memory leak in decoding PNG images (CVE-2022-23585)
    • Fixes multiple CHECK-fails in function.cc (CVE-2022-23586)
    • Fixes multiple CHECK-fails due to attempting to build a reference tensor (CVE-2022-23588)
    • Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
    • Fixes a null pointer dereference in Grappler's IsConstant (CVE-2022-23589)
    • Fixes a CHECK failure in constant folding (CVE-2021-41197)
    • Fixes a stack overflow due to self-recursive function in GraphDef (CVE-2022-23591)
    • Updates icu to 69.1 to handle CVE-2020-10531
    Source code (tar.gz)
    Source code (zip)
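
Before upgrading, it helps to confirm which TensorFlow version you are currently running so you can pick the matching patched release above. A minimal check using the public tf.__version__ attribute:

$ python -c "import tensorflow as tf; print(tf.__version__)"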