Lattice methods in TensorFlow

Overview

TensorFlow Lattice

TensorFlow Lattice is a library that implements constrained and interpretable lattice-based models. It is an implementation of Monotonic Calibrated Interpolated Look-Up Tables in TensorFlow.

The library enables you to inject domain knowledge into the learning process through common-sense or policy-driven shape constraints. This is done using a collection of Keras layers that can satisfy constraints such as monotonicity, convexity, and pairwise trust (a minimal usage sketch follows the list):

  • PWLCalibration: piecewise linear calibration of signals.
  • CategoricalCalibration: mapping of categorical inputs into real values.
  • Lattice: interpolated look-up table implementation.
  • Linear: linear function with monotonicity and norm constraints.
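
As a minimal sketch of how these layers compose (the keypoints, lattice sizes, and monotonicity settings below are illustrative assumptions, not recommendations), calibrators can feed a lattice inside a standard Keras model:

import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Calibrate each of two inputs, then fuse them in a 2x2 lattice.
model = tf.keras.models.Sequential([
    tfl.layers.ParallelCombination([
        # Piecewise-linear calibration, constrained to be monotonic.
        tfl.layers.PWLCalibration(
            input_keypoints=np.linspace(0.0, 1.0, num=5),
            monotonicity='increasing'),
        tfl.layers.PWLCalibration(
            input_keypoints=np.linspace(0.0, 1.0, num=5)),
    ]),
    # Interpolated look-up table over the calibrated signals.
    tfl.layers.Lattice(
        lattice_sizes=[2, 2],
        monotonicities=['increasing', 'none']),
])
model.compile(loss='mse', optimizer='adam')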

The library also provides easy-to-set-up canned estimators for common use cases (a hedged sketch follows the list):

  • Calibrated Linear
  • Calibrated Lattice
  • Random Tiny Lattices (RTL)
  • Crystals
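
As a hedged sketch of setting one up (the feature names, data, and configuration here are illustrative assumptions; see the official tutorials for full examples):

import tensorflow as tf
import tensorflow_lattice as tfl

def input_fn():
    # Tiny in-memory dataset, purely for illustration.
    features = {'age': [25.0, 40.0, 60.0],
                'capital_gain': [0.0, 500.0, 2000.0]}
    labels = [0, 1, 1]
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(3)

feature_columns = [
    tf.feature_column.numeric_column('age'),
    tf.feature_column.numeric_column('capital_gain'),
]
model_config = tfl.configs.CalibratedLatticeConfig(
    feature_configs=[
        tfl.configs.FeatureConfig(name='age', monotonicity='increasing'),
        tfl.configs.FeatureConfig(name='capital_gain'),
    ])
estimator = tfl.estimators.CannedClassifier(
    feature_columns=feature_columns,
    model_config=model_config,
    # Calibration keypoints are derived from quantiles of this input.
    feature_analysis_input_fn=input_fn)
estimator.train(input_fn=input_fn)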

With TF Lattice you can use domain knowledge to better extrapolate to the parts of the input space not covered by the training dataset. This helps avoid unexpected model behaviour when the serving distribution is different from the training distribution.

You can install our prebuilt pip package using:

pip install tensorflow-lattice

Comments
  • Unable to execute example program

    I have installed tensorflow-lattice using pip 9.0.1 in Python 3.5.2 on Ubuntu 16.04 LTS. The TensorFlow version is 1.3.1. For testing purposes, I tried to execute the example program

    import tensorflow as tf
    import tensorflow_lattice as tfl
    
    x = tf.placeholder(tf.float32, shape=(None, 2))
    (y, _, _, _) = tfl.lattice_layer(x, lattice_sizes=(2, 2))
    
    with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      print(sess.run(y, feed_dict={x: [[0.0, 0.0]]}))
    

    which resulted in an error. Here is the stack trace from the Jupyter notebook:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-7-444d1bededea> in <module>()
    ----> 1 (y, _, _, _) = tfl.lattice_layer(x, lattice_sizes=(2, 2))
    
    /usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/lib/lattice_layers.py in lattice_layer(input_tensor, lattice_sizes, is_monotone, output_dim, interpolation_type, lattice_initializer, l1_reg, l2_reg, l1_torsion_reg, l2_torsion_reg, l1_laplacian_reg, l2_laplacian_reg)
        193   parameter_tensor = variable_scope.get_variable(
        194       interpolation_type + '_lattice_parameters',
    --> 195       initializer=lattice_initializer)
        196 
        197   output_tensor = lattice_ops.lattice(
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
       1063       collections=collections, caching_device=caching_device,
       1064       partitioner=partitioner, validate_shape=validate_shape,
    -> 1065       use_resource=use_resource, custom_getter=custom_getter)
       1066 get_variable_or_local_docstring = (
       1067     """%s
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
        960           collections=collections, caching_device=caching_device,
        961           partitioner=partitioner, validate_shape=validate_shape,
    --> 962           use_resource=use_resource, custom_getter=custom_getter)
        963 
        964   def _get_partitioned_variable(self,
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
        365           reuse=reuse, trainable=trainable, collections=collections,
        366           caching_device=caching_device, partitioner=partitioner,
    --> 367           validate_shape=validate_shape, use_resource=use_resource)
        368 
        369   def _get_partitioned_variable(
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource)
        350           trainable=trainable, collections=collections,
        351           caching_device=caching_device, validate_shape=validate_shape,
    --> 352           use_resource=use_resource)
        353 
        354     if custom_getter is not None:
    
    /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource)
        662                          " Did you mean to set reuse=True in VarScope? "
        663                          "Originally defined at:\n\n%s" % (
    --> 664                              name, "".join(traceback.format_list(tb))))
        665       found_var = self._vars[name]
        666       if not shape.is_compatible_with(found_var.get_shape()):
    
    ValueError: Variable hypercube_lattice_parameters already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
    
      File "/usr/local/lib/python3.5/dist-packages/tensorflow_lattice/python/lib/lattice_layers.py", line 195, in lattice_layer
        initializer=lattice_initializer)
      File "<ipython-input-1-e860b057ec64>", line 5, in <module>
        (y, _, _, _) = tfl.lattice_layer(x, lattice_sizes=(2, 2))
      File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2847, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
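
    The error message itself points at the likely cause: the lattice parameter variable already exists in the default graph, which typically happens when the same cell is run more than once in a notebook. A hedged workaround sketch (tf.AUTO_REUSE requires TF >= 1.4; on TF 1.3, resetting the graph alone should suffice):

    import tensorflow as tf
    import tensorflow_lattice as tfl

    # Start from a clean graph so the lattice parameters are created once.
    tf.reset_default_graph()

    x = tf.placeholder(tf.float32, shape=(None, 2))
    # Alternatively, allow variable reuse when the cell is re-run.
    with tf.variable_scope('lattice', reuse=tf.AUTO_REUSE):
      (y, _, _, _) = tfl.lattice_layer(x, lattice_sizes=(2, 2))

    with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      print(sess.run(y, feed_dict={x: [[0.0, 0.0]]}))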
    
    
    opened by kamilmahmood 22
  • _pwl_calibration_ops.so image not found

    I just installed tensorflow-lattice on macOS but got the following import error. Do you know what's happening here?

    Python 3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.

    >>> import tensorflow_lattice
    [... repeated numpy FutureWarning lines from tensorflow/.../dtypes.py and tensorboard/.../dtypes.py: "Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'." ...]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/__init__.py", line 33, in <module>
        from tensorflow_lattice.python.estimators.calibrated import input_calibration_layer_from_hparams
      File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/estimators/calibrated.py", line 28, in <module>
        from tensorflow_lattice.python.lib import pwl_calibration_layers
      File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/lib/pwl_calibration_layers.py", line 36, in <module>
        from tensorflow_lattice.python.ops import pwl_calibration_ops
      File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/ops/pwl_calibration_ops.py", line 45, in <module>
        '../../cc/ops/_pwl_calibration_ops.so'))
      File "/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
        lib_handle = py_tf.TF_LoadLibrary(library_filename)
    tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/anaconda3/envs/tf_lattice/lib/python3.6/site-packages/tensorflow_lattice/python/ops/../../cc/ops/_pwl_calibration_ops.so, 6): image not found

    opened by gyz0807-ai 10
  • Cannot save keras model with tensorflow lattice layers

    Saving a Keras model that contains TFL layers raises the following error:

    /usr/local/lib/python3.6/dist-packages/h5py/_hl/group.py in __setitem__(self, name, obj)
        371
        372         if isinstance(obj, HLObject):
    --> 373             h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)
        374
        375         elif isinstance(obj, SoftLink):

    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

    h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

    h5py/h5o.pyx in h5py.h5o.link()

    RuntimeError: Unable to create link (name already exists)

    The error is reproduced in the colab example here: https://colab.research.google.com/drive/1tknejj9CtM27bHGktsZSTnvLvH3eCgG8
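
    A hedged workaround sketch (not an official fix): the "name already exists" link error comes from the HDF5 writer, so saving in the TensorFlow SavedModel format sidesteps h5py entirely:

    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    model = tf.keras.models.Sequential([
        tfl.layers.PWLCalibration(input_keypoints=np.linspace(0.0, 1.0, num=3)),
    ])
    model.predict(np.array([[0.2], [0.7]]))  # build the model before saving

    # SavedModel format instead of HDF5 (.h5).
    model.save('saved_lattice_model', save_format='tf')
    restored = tf.keras.models.load_model('saved_lattice_model')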

    opened by devavratTomar 7
  • TensorFlow 2.0 plan

    Is there any plan for a TensorFlow 2.0 release? I heard that eager mode will be the default in TF 2.0, but I am not sure TF Lattice is ready for it.

    Thank you!

    opened by si-you 7
  • setup.py fix sklearn → scikit-learn

    The package name is scikit-learn while the import is sklearn.

    See https://pypi.org/project/sklearn/ for the official recommendation.

    Requiring sklearn can lead to subtle problems as explained in https://github.com/scikit-learn/scikit-learn/issues/8215 .
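
    A sketch of the corresponding setup.py change (the package name here is hypothetical):

    from setuptools import setup

    setup(
        name='example-package',  # hypothetical
        version='0.0.1',
        install_requires=[
            'scikit-learn',  # the real distribution name on PyPI
            # not 'sklearn', which is only a stub package
        ],
    )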

    opened by maresb 5
  • Optimizer helper functions

    SGD or Adagrad with a square-root-decay learning rate schedule can help training. Since TensorFlow Lattice estimators accept an arbitrary callable as an optimizer, we can use learning rate scheduling, but it is not easy for a beginner to configure. This pull request includes some helper functions and tests to illustrate how to use a custom learning rate schedule.

    This pull request also changes the keep_dims argument to keepdims in the linear model construction, since TensorFlow is deprecating the former.
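
    A minimal sketch of the idea (the base rate and decay are arbitrary choices): passing a callable lets the optimizer pick up a square-root-decay learning rate tied to the global step.

    import tensorflow as tf

    def sqrt_decay_optimizer():
      # Learning rate ~ 0.1 / sqrt(step), evaluated lazily at training time.
      step = tf.compat.v1.train.get_or_create_global_step()
      lr = 0.1 * tf.math.rsqrt(tf.cast(tf.maximum(step, 1), tf.float32))
      return tf.compat.v1.train.AdagradOptimizer(learning_rate=lr)

    # The estimator accepts the callable directly, e.g.:
    # estimator = tfl.estimators.CannedRegressor(..., optimizer=sqrt_decay_optimizer)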

    opened by si-you 5
  • Issue with gast

    I installed TensorFlow Lattice with TF 2.3.0, but when trying to run it I receive a conversion error and an attribute error: "gast has no attribute 'Index'".

    I tried installing three versions of tensorflow-lattice (0.9.9, 2.0, and 2.0.8) but received the same error, and I could not install an older version of gast (currently running 0.4.0) due to the dependencies of my current setup.

    Please let me know if you have additional recommendations. Thank you!

    opened by josem789 4
  • Error in running the example of lattice models

    I was running the uci_census.py file with the create_calibrated_lattice function. When the parameter lattice_size is set to 2, the program runs successfully. However, when the parameter is set to 3 (also 4 or other values, which I have not tested yet), the program crashes with the following error:

    2018-06-17 19:54:25.814852: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
    Traceback (most recent call last):
      File "uci_census.py", line 616, in <module>
        run()
      File "uci_census.py", line 609, in run
        main(argv)
      File "uci_census.py", line 586, in main
        train(estimator)
      File "uci_census.py", line 550, in train
        batch_size=FLAGS.batch_size, num_epochs=epochs, shuffle=True))
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 314, in train
        loss = self._train_model(input_fn, hooks, saving_listeners)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 812, in _train_model
        log_step_count_steps=self._config.log_step_count_steps) as mon_sess:
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 380, in MonitoredTrainingSession
        stop_grace_period_secs=stop_grace_period_secs)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 787, in __init__
        stop_grace_period_secs=stop_grace_period_secs)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 511, in __init__
        self._sess = _RecoverableSession(self._coordinated_creator)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 972, in __init__
        _WrappedSession.__init__(self, self._create_session())
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 977, in _create_session
        return self._sess_creator.create_session()
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 668, in create_session
        self.tf_sess = self._session_creator.create_session()
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 440, in create_session
        init_fn=self._scaffold.init_fn)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/session_manager.py", line 273, in prepare_session
        config=config)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/session_manager.py", line 205, in _restore_checkpoint
        saver.restore(sess, ckpt.model_checkpoint_path)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1686, in restore
        {self.saver_def.filename_tensor_name: save_path})
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
        run_metadata_ptr)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1128, in _run
        feed_dict_tensor, options, run_metadata)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1344, in _do_run
        options, run_metadata)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1363, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1594323] rhs shape= [1,8192]
    	 [[Node: save/Assign_3 = Assign[T=DT_FLOAT, _class=["loc:@calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](calibrated_tf_lattice_model/lattice/calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters/Adam_1, save/RestoreV2_3)]]
    
    Caused by op u'save/Assign_3', defined at:
      File "uci_census.py", line 616, in <module>
        run()
      File "uci_census.py", line 609, in run
        main(argv)
      File "uci_census.py", line 586, in main
        train(estimator)
      File "uci_census.py", line 550, in train
        batch_size=FLAGS.batch_size, num_epochs=epochs, shuffle=True))
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 314, in train
        loss = self._train_model(input_fn, hooks, saving_listeners)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 812, in _train_model
        log_step_count_steps=self._config.log_step_count_steps) as mon_sess:
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 380, in MonitoredTrainingSession
        stop_grace_period_secs=stop_grace_period_secs)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 787, in __init__
        stop_grace_period_secs=stop_grace_period_secs)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 511, in __init__
        self._sess = _RecoverableSession(self._coordinated_creator)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 972, in __init__
        _WrappedSession.__init__(self, self._create_session())
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 977, in _create_session
        return self._sess_creator.create_session()
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 668, in create_session
        self.tf_sess = self._session_creator.create_session()
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 431, in create_session
        self._scaffold.finalize()
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 212, in finalize
        self._saver.build()
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1248, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1284, in _build
        build_save=build_save, build_restore=build_restore)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 759, in _build_internal
        restore_sequentially, reshape)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 471, in _AddShardedRestoreOps
        name="restore_shard"))
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 440, in _AddRestoreOps
        assign_ops.append(saveable.restore(tensors, shapes))
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 160, in restore
        self.op.get_shape().is_fully_defined())
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 276, in assign
        validate_shape=validate_shape)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 59, in assign
        use_locking=use_locking, name=name)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
        op_def=op_def)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
        op_def=op_def)
      File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
        self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
    
    InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1594323] rhs shape= [1,8192]
    	 [[Node: save/Assign_3 = Assign[T=DT_FLOAT, _class=["loc:@calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](calibrated_tf_lattice_model/lattice/calibrated_tf_lattice_model/lattice/hypercube_lattice_parameters/Adam_1, save/RestoreV2_3)]]
    

    IMO, the key line is: Assign requires shapes of both tensors to match. lhs shape= [1,1594323] rhs shape= [1,8192], in which 1594323 = 3^13 and 8192 = 2^13. Here 13 is the number of features used in this example, and 3 is the lattice_size we defined. Could anyone help me with this?

    opened by arrowx123 4
  • Using lattice in tf serving

    Currently, when trying to serve a lattice model with TF Serving, I run into an op that isn't supported in the serving kernel:

    ...
    2018-03-01 23:45:59.827196: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: fail. Took 429225 microseconds.
    2018-03-01 23:45:59.828892: E tensorflow_serving/util/retrier.cc:38] Loading servable: {name: default version: 1519947547} failed: Not found: Op type not registered 'PwlIndexingCalibrator' in binary running on dsexperiment-prod-0fe24ce9bf2552633. Make sure the Op and Kernel are registered in the binary running in this process.

    Both tf and tf-serving on the system are at version 1.5.0. Are lattice models not supported with serving yet? If they are, could you point me to how to make it happen?

    opened by fabrol 4
  • convex by pieces function

    Hi,

    I wonder if you have functionality to specify that the target function should be piecewise convex and/or piecewise monotonic.

    Thanks for writing this amazing piece of software :) Matias

    opened by matibilkis 3
  • How to use multi CPU easily?

    It is great to see such a good package. However, the speed is too slow.

    I am using the Crystals ensemble model config with the tfl.estimators.CannedRegressor estimator. It seems only one CPU is being used, though I have 48 CPUs on the machine.

    I have set up the dataset with multiple threads:

    feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=train_xs.loc[feature_analysis_index].copy(), 
        y=train_ys.loc[feature_analysis_index].copy(), 
        batch_size=128, 
        num_epochs=1, 
        shuffle=True, 
        queue_capacity=1000,
        num_threads=40)
    
    prefitting_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=train_xs.loc[prefitting_index].copy(), 
        y=train_ys.loc[prefitting_index].copy(), 
        batch_size=128, 
        num_epochs=1, 
        shuffle=True, 
        queue_capacity=1000,
        num_threads=40)
    
    train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
        x=train_xs.loc[train_index].copy(), 
        y=train_ys.loc[train_index].copy(), 
        batch_size=128, 
        num_epochs=100, 
        shuffle=True, 
        queue_capacity=1000,
        num_threads=40)
    

    CPU usage is still only about 1.25 CPUs. Any suggestions?
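
    One knob worth checking (a hedged sketch, not a confirmed fix): Estimators take session thread counts via RunConfig, which controls how many threads TensorFlow uses for op parallelism.

    import tensorflow as tf

    session_config = tf.compat.v1.ConfigProto(
        intra_op_parallelism_threads=48,
        inter_op_parallelism_threads=48)
    run_config = tf.estimator.RunConfig(session_config=session_config)
    # Pass it to the canned estimator, e.g.:
    # estimator = tfl.estimators.CannedRegressor(..., config=run_config)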

    opened by fuyanlu1989 3
  • experiment result explanation in tutorial

    Hi, I'm looking at the shape constraints tutorial, and the results for GBDT and DNN are listed in the tutorial as follows:

    • GBT Validation AUC: 0.7248634099960327
    • GBT Test AUC: 0.6980501413345337
    • DNN Validation AUC: 0.7518489956855774
    • DNN Testing AUC: 0.745200514793396

    After the experiment results, the tutorial comments: "Note that even though the validation metric is better than the tree solution, the testing metric is much worse."

    I don't understand where this comment comes from, since DNN outperforms GBT in both validation AUC and testing AUC.

    opened by liangchen1ceeee 2
  • Many-batches predictions

    Hi,

    When trying to get predictions from lattice models on more than one batch of data at once, errors are raised. This is a nice feature for efficiently getting predictions, and it is present in basic Keras neural network models; find some examples in this colab.

    As far as I can tell from looking at the API docs and source code, this should be related to the inputs admitted by the PWL calibration layers, but I wonder if there is an easy way around it.

    In particular, this piece of code captures what I would like to get (and raises an error when called on batched_inputs):

    
    import numpy as np
    import tensorflow as tf
    import tensorflow_lattice as tfl

    class LatticeModel(tf.keras.Model):
        def __init__(self, nodes=[2, 2], nkeypoints=100):
            super(LatticeModel, self).__init__()
            # One PWL calibrator per input feature, run in parallel.
            self.combined_calibrators = tfl.layers.ParallelCombination()
            for ind in range(2):
                calibration_layer = tfl.layers.PWLCalibration(
                    input_keypoints=np.linspace(0, 1, nkeypoints),
                    output_min=0.0, output_max=nodes[ind])
                self.combined_calibrators.append(calibration_layer)
            self.lattice = tfl.layers.Lattice(lattice_sizes=nodes, interpolation="simplex")

        def call(self, x):
            rescaled = self.combined_calibrators(x)
            return self.lattice(rescaled)

    # We define some input data.
    x1 = np.random.randn(100, 1).astype(np.float32)
    x2 = np.random.randn(100, 1).astype(np.float32)
    inputs = tf.concat([x1, x2], axis=-1)

    # We initialize our model and feed it a batch of size 100.
    model = LatticeModel()
    model(inputs)

    # Now we would like to efficiently predict the output of the lattice
    # model on many batches of data at once (in this case 2),
    # but this call raises an error.
    batched_inputs = np.random.randn(2, 100, 1)
    model(batched_inputs)
    

    Thanks a lot! Matías.
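
    A possible workaround sketch, building on the snippet above (assuming each example has 2 features): collapse the extra leading dimension so the model sees one rank-2 batch, then reshape the predictions back.

    # Two batches of 100 examples with 2 features each.
    many_batches = np.random.randn(2, 100, 2).astype(np.float32)
    flat_predictions = model(many_batches.reshape(-1, 2))
    predictions = tf.reshape(flat_predictions, (2, 100, 1))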

    opened by matibilkis 2
  • [*.py] Rename "Arguments:" to "Args:"

    I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue with the TensorFlow codebase: inconsistent use of Args: and Arguments: in its docstrings. It is easy enough to extend my parsers to support both variants; however, it looks like Arguments: is wrong anyway, as per:

    • https://google.github.io/styleguide/pyguide.html#doc-function-args @ ddccc0f

    • https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ 9fc0fc0

    • https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ c0ae8e3

    Therefore, only Args: is valid. This PR replaces them throughout the codebase.

    PS: For related PRs, see tensorflow/tensorflow/pull/45420

    opened by SamuelMarks 0
  • Feature Request - Is there a way to enforce an S-shape constraint?

    First off, thank you so much for open-sourcing TensorFlow Lattice! It is great to make use of lattice interpolation to enforce domain knowledge concerning monotonicity and convexity. Looking through the current documentation, I see it is possible to enforce an increasing and concave graph for diminishing returns, but what if I want to enforce an S-curve (i.e. an increasing convex curve with an inflection point that then turns concave)?

    opened by marwan116 3
  • Link to API docs 404s

    https://github.com/tensorflow/lattice/blob/master/docs/overview.md#tutorials-and-api-docs points to https://github.com/tensorflow/lattice/blob/master/docs/api_docs/python/tfl.ipynb

    opened by kevinykuo 1
Releases (v2.0.11)
  • v2.0.11(Oct 20, 2022)

    Changes:

    • Updating code, tests and tutorials to support changes to tf.keras.optimizers.
    • Documentation updates.
    • Minor bug fixes.

    PyPI Release:

    • Generic package for py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.10(Jan 13, 2022)

    Changes:

    • Support for weighted quantiles for Estimators and Premade.
    • Helper functions for computing quantiles in premade_lib.
    • Documentation updates.
    • Minor bug fixes.

    PyPI Release:

    • Generic package for py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.9(Sep 30, 2021)

    Changes:

    • (experimental) Cumulative Distribution Function (CDF) layer that supports projection-free monotonicity.
    • 'input_keypoints_type' parameter for PWLCalibration integration with Premade/Estimator models.
    • Estimator support for tf.data.Dataset inputs.
    • General tutorial/code cleanup.
    • Typo fixes.
    • Bug fixes.

    PyPI Release:

    • Generic package for py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.8(Feb 17, 2021)

    Changes:

    • (experimental) Parameterization option for Premade/Estimators that enables the use of both normal tfl.layers.Lattice layers ('all_vertices') and tfl.layers.KroneckerFactoredLattice layers ('kronecker_factored').
    • (experimental) KroneckerFactoredLattice layer visualization support for Estimators.
    • (experimental) KroneckerFactoredLattice bound constraints.
    • 'input_keypoints_type' parameter for PWLCalibration layers that enables learned input keypoints ('learned_interior') or the original fixed keypoints ('fixed'); see the sketch after this list.
    • General tutorial/code cleanup.
    • Typo fixes.
    • Bug fixes.
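
    As a quick sketch of the new PWLCalibration parameter (the keypoint grid is an arbitrary choice):

    import numpy as np
    import tensorflow_lattice as tfl

    # Let the calibrator learn interior keypoint positions instead of
    # keeping the initial grid fixed.
    calibrator = tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, num=10),
        input_keypoints_type='learned_interior')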

    PyPI Release:

    • Generic package for py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.7(Dec 14, 2020)

    Changes:

    • (experimental) KroneckerFactoredLattice initialization now sorts on kernel axis 1 such that we sort each term individually.
    • (experimental) KroneckerFactoredLattice initialization defaults to [0.5, 1.5] instead of [0,1].
    • (experimental) KroneckerFactoredLattice custom_reduce_prod in interpolation for faster gradient computations.
    • Update bound and trust projection algorithms to compute violations for each unit separately.
    • 'loss_fn' option for estimators to use custom loss without having to define a custom head.
    • Enable calibrators to return a list of outputs per unit.
    • Enable RTL layer to return non-averaged outputs.
    • General tutorial/code cleanup.
    • Typo fixes.
    • Bug fixes.

    PyPI Release:

    • Generic package for py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.6(Aug 10, 2020)

    TensorFlow is dropping py2 support, so we will be dropping support as well in our future releases. This is the last release that will support py2.

    Changes:

    • New (experimental) KroneckerFactoredLattice Layer, which introduces a new parameterization of our Lattice layer with linear space/time complexity.
    • rtl_lib.py helper functions for RTL Layer.
    • Utils module with useful helper functions for all layers.
    • 'rtl_layer' option for CalibratedLatticeEnsemble Premade Models and Canned Estimators, which uses an RTL Layer for the underlying ensemble. Can potentially give a speed-boost for models with a large number of lattices.
    • General code cleanup.
    • Typo fixes.
    • Bug fixes.

    PyPI release:

    • Generic package for py2/py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.5(Jun 15, 2020)

    Changes:

    • Simplex interpolation support for lattices: O(D log D) simplex interpolation, compared to O(2^D) hypercube interpolation, is 2-10x faster with similar or improved training loss; see the sketch after this list.
    • RTL layer performance optimization: 2-3x faster and scales much better with wider and deeper models with tens of thousands of lattices.
    • Optimization of 2^D hypercube lattices: 10-15% speedup.
    • PWL Calibration Sonnet module (more to come in follow-up releases).
    • New aggregation function tutorial.
    • Linear combination support for canned ensemble models.
    • Improvements and bug fixes for save/load functionality.
    • Bug fixes.
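
    A one-line sketch of opting in (the lattice sizes are an arbitrary choice):

    import tensorflow_lattice as tfl

    # O(D log D) simplex interpolation instead of O(2^D) hypercube.
    lattice = tfl.layers.Lattice(lattice_sizes=[2, 2, 2], interpolation='simplex')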

    PyPI release:

    • Generic package for py2/py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.4(Apr 14, 2020)

    Changes:

    • Save/load support for Keras models (HDF5/H5 format).
    • RTL layer: an ensemble of Lattice layers that takes in a collection of monotonic and unconstrained features and randomly arranges them into lattices of a given rank.
    • AggregateFunction Premade model and Aggregation layer: applies a monotonic function to set inputs passed in as ragged tensors.
    • Crystals lattice ensemble with Premade model.
    • Feature updates to the Lattice layer.
    • Updates to tutorials.
    • Bug fixes.

    PyPI release:

    • Generic package for py2/py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.3(Mar 6, 2020)

    Changes:

    • Two new tutorials: premade models, and shape constraints for ML fairness.
    • Improvements and additions to premade models.
    • New range dominance constraint for Lattice and Linear layers.
    • Added 'peak' mode to the unimodality constraint.
    • Updates to documentation.
    • Bug fixes.

    PyPI release:

    • Generic package for py2/py3 that should work for TF 1.15 or TF 2.x.
  • v2.0.2(Feb 8, 2020)

    Changes:

    • Adding premade Keras models in tfl.premade module.
    • Adding RandomMonotonicInitializer for lattices.
    • Several edits to tutorials and API docs.

    PyPI release:

    • Generic package for py2/py3 that should work for TF 1.15 or TF 2.x.

    Notes:

    • The API for the premade Keras Models is experimental.
    • Creating premade models currently requires a fully specified model configuration. We plan to use the new preprocessing mechanism in Keras to support keypoint initialization in future releases.
  • v2.0.1(Feb 4, 2020)

    Changes:

    • Several edits to tutorials and API docs.
    • Bug fixes.

    PyPI release:

    • Generic package for py2/py3 that should work for TF 1.15 or TF 2.x.
  • v2.0(Jan 28, 2020)

    This is a completely new implementation of the TensorFlow Lattice library. It is not backwards compatible with the previous versions of the library.

    Changes:

    • Core TF implementation: TFL v2 is a Python-only library with all operations implemented in core TensorFlow, making it compatible with any platform that can run TensorFlow (CPU, GPU, TPU).
    • Keras layers: The new library provides Keras layers that can be mixed and matched with other Keras layers and used in Keras models. All constraints and regularizers are handled through Keras mechanisms and should work seamlessly without the need for hooks or callbacks.
    • New and improved canned estimators: The new library has a new simplified API for creating canned estimators. Calculation of feature quantiles is now automated in estimator construction. A version of the Crystals algorithm is now supported.
    • New constraint types: Several new types of constraints are added to the library, including convexity, unimodality, and pairwise feature trust and dominance relations.
    • Improved documentation and tutorials: Examples and tutorials are provided as notebooks. All documentation, examples, and API docs will be available on tensorflow.org.
    • Faster release cycle: With the library implemented in core TF, we hope to be able to release updates and improvements more frequently.

    Notes:

    • Some of the new 2-dimensional constraints are under active development and might undergo API changes.

    PyPI release:

    • Generic package for py2/py3 that should work for TF 1.15 or TF 2.x.
  • v0.9.9(Jul 31, 2019)

    Changes:

    • Updating the code base for TF 2.0 compatibility in tf.compat.v1 mode.
    • Changing the tensorflow branch to 1.14.
    • Changes to build scripts for bazel 0.25.2.
    • Bug fixes.

    PyPI release:

    • Includes Python 2.7 and Python 3 on macOS and Ubuntu.
    • No GPU binary package is released with 0.9.9.

    Important Note:

    This is the last release of the current version of the TensorFlow Lattice library. A new version of the library will be released soon:

    • Eager-compatible base lattice and calibration library implemented in core TF (no custom ops).
    • Includes Keras layers and canned estimators.
    • Not backwards compatible with the current version, but conversion should be easy.
  • v0.9.8(Oct 8, 2018)

  • v0.9.7(Jul 30, 2018)

  • v0.9.6(Feb 15, 2018)

    • New Estimators for separately-calibrated random tiny lattices: each lattice has its own calibrators for each input feature.
    • Updating TensorFlow submodule to r1.5.
    • Bug fixes.
  • v0.9.5(Feb 1, 2018)

  • v0.9.4(Nov 9, 2017)

    • Update BUILD rules to work with TensorFlow 1.4.
    • Compile the binary package targeting TensorFlow 1.4 branch.
    • Changed the bias term in lattice initialization strategy from -0.5 to 0.0.
  • v0.9.3(Oct 18, 2017)

Utilize Korean BERT model in sentence-transformers library

ko-sentence-transformers: This project was created to make the KoBERT model easier to use with the sentence-transformers library. The Ko-Sentence-BERT-SKTBERT project uses the KoBERT model with sentence-trans

Junghyun 40 Dec 20, 2022
[Genshin Impact] A program that automatically plays the Windsong Lyre

疯物之诗琴 ("Crazy Lyre"): reads MIDI files and automatically plays Genshin Impact's Windsong Lyre. Notes can be automatically adjusted via a custom configuration file to fit the lyre. (I started this on the day of the Genshin 1.4 livestream! Only now can it be released...) How to use: download the packaged program and the MIDI archive from the Releases page and extract them. Double-click 疯物之诗琴.exe to run it. Open the Windsong Lyre in Genshin Impact, and in the software enter

435 Jan 04, 2023
jiant is an NLP toolkit

jiant is an NLP toolkit The multitask and transfer learning toolkit for natural language processing research Why should I use jiant? jiant supports mu

ML² AT CILVR 1.5k Jan 04, 2023
[NeurIPS 2021] Code for Learning Signal-Agnostic Manifolds of Neural Fields

Learning Signal-Agnostic Manifolds of Neural Fields This is the uncleaned code for the paper Learning Signal-Agnostic Manifolds of Neural Fields. The

60 Dec 12, 2022
Python script for snapping up mobile phones on the Huawei Store

HUAWEI STORE GO 2021. Description: a Huawei Store flash-sale scraping script based on Python 3 + Selenium, modified from BUY-HW, a project that has not been updated in nearly two years, to grab a Nova 8 for my goddess (when did Huawei start copying Xiaomi's hunger marketing?). The login and purchase parts of the original project no longer work; this project fixes them to fit the new Huawei Store and adds some features

ZhangLiang 111 Dec 22, 2022
The guide to tackle with the Text Summarization

The guide to tackle with the Text Summarization

Takahiro Kubo 1.2k Dec 30, 2022
Translation for Trilium Notes (Chinese version of Trilium Notes).

Trilium Translation (Chinese documentation). This repo provides a translation for the awesome Trilium Notes. Currently, I have translated Trilium Notes into Chinese. Test

743 Jan 08, 2023
Autoregressive Entity Retrieval

The GENRE (Generative ENtity REtrieval) system as presented in Autoregressive Entity Retrieval implemented in pytorch. @inproceedings{decao2020autoreg

Meta Research 611 Dec 16, 2022
The Classical Language Toolkit

Notice: This Git branch (dev) contains the CLTK's upcoming major release (v. 1.0.0). See https://github.com/cltk/cltk/tree/master and https://docs.clt

Classical Language Toolkit 754 Jan 09, 2023
NLP applications using deep learning.

NLP-Natural-Language-Processing NLP applications using deep learning like text generation etc. 1- Poetry Generation: Using a collection of Irish Poem

KASHISH 1 Jan 27, 2022
A simple tool to update bib entries with their official information (e.g., DBLP or the ACL anthology).

Rebiber: A tool for normalizing bibtex with official info. We often cite papers using their arXiv versions without noting that they are already PUBLIS

(Bill) Yuchen Lin 2k Jan 01, 2023
Search msDS-AllowedToActOnBehalfOfOtherIdentity

Preface: Current RBCD attack techniques mainly search mS-DS-CreatorSID; if the machine's creator is under our control, we can modify that machine's msDS-AllowedToActOnBehalfOfOtherIdentity using the tool SharpAllowedToAct-Modify. So we might as well also try searching all computers

Jumbo 26 Dec 05, 2022
Pytorch NLP library based on FastAI

Quick NLP Quick NLP is a deep learning nlp library inspired by the fast.ai library It follows the same api as fastai and extends it allowing for quick

Agis pof 283 Nov 21, 2022
PyTorch implementation of convolutional neural networks-based text-to-speech synthesis models

Deepvoice3_pytorch PyTorch implementation of convolutional networks-based text-to-speech synthesis models: arXiv:1710.07654: Deep Voice 3: Scaling Tex

Ryuichi Yamamoto 1.8k Dec 30, 2022
Code for the paper "Are Sixteen Heads Really Better than One?"

Are Sixteen Heads Really Better than One? This repository contains code to reproduce the experiments in our paper Are Sixteen Heads Really Better than

Paul Michel 143 Dec 14, 2022
Mesh TensorFlow: Model Parallelism Made Easier

Mesh TensorFlow - Model Parallelism Made Easier Introduction Mesh TensorFlow (mtf) is a language for distributed deep learning, capable of specifying

1.3k Dec 26, 2022
Python code for ICLR 2022 spotlight paper EViT: Expediting Vision Transformers via Token Reorganizations

Expediting Vision Transformers via Token Reorganizations This repository contain

Youwei Liang 101 Dec 26, 2022
Auto translate textbox from Japanese to English or Indonesian

priconne-auto-translate: Auto translate textbox from Japanese to English or Indonesian. How to use: Install python first, Anaconda is recommended Install

Aji Priyo Wibowo 5 Aug 25, 2022
Open-source offline translation library written in Python. Uses OpenNMT for translations

Open source neural machine translation in Python. Designed to be used either as a Python library or desktop application. Uses OpenNMT for translations and PyQt for GUI.

Argos Open Tech 1.6k Jan 01, 2023
Under the hood working of transformers, fine-tuning GPT-3 models, DeBERTa, vision models, and the start of Metaverse, using a variety of NLP platforms: Hugging Face, OpenAI API, Trax, and AllenNLP

Transformers-for-NLP-2nd-Edition @copyright 2022, Packt Publishing, Denis Rothman Contact me for any question you have on LinkedIn Get the book on Ama

Denis Rothman 150 Dec 23, 2022