Bayesian Optimization using GPflow

Overview

Note: This package is for use with GPflow 1.

For Bayesian optimization using GPflow 2, please see Trieste, a joint effort with Secondmind.

GPflowOpt

GPflowOpt is a Python package for Bayesian optimization using GPflow, built on TensorFlow. It was initiated and is currently maintained by Joachim van der Herten and Ivo Couckuyt. The full list of contributors (in alphabetical order) is Ivo Couckuyt, Tom Dhaene, James Hensman, Nicolas Knudde, Alexander G. de G. Matthews and Joachim van der Herten. Special thanks also to all GPflow contributors, as this package could not exist without their effort.

Install

The easiest way to install GPflowOpt involves cloning this repository and running

pip install . --process-dependency-links

in the source directory. This also installs all required dependencies (including TensorFlow, if needed). For more detailed installation instructions, see the documentation.
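
As a quick sanity check after installing (a sketch; it assumes both packages expose a __version__ attribute):

import gpflow
import gpflowopt

# Both imports succeeding confirms the dependencies resolved correctly.
print(gpflow.__version__, gpflowopt.__version__)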

Contributing

If you are interested in contributing to this open source project, contact us through an issue on this repository. For more information, see the notes for contributors.

Citing GPflowOpt

To cite GPflowOpt, please reference the preliminary arXiv paper. Sample BibTeX is given below:

@ARTICLE{GPflowOpt2017,
  author  = {Knudde, Nicolas and {van der Herten}, Joachim and Dhaene, Tom and Couckuyt, Ivo},
  title   = "{{GP}flow{O}pt: {A} {B}ayesian {O}ptimization {L}ibrary using Tensor{F}low}",
  journal = {arXiv preprint -- arXiv:1711.03845},
  year    = {2017},
  url     = {https://arxiv.org/abs/1711.03845}
}
Comments
  • GPflow 1.0

    Following up on #86, this starts development of a GPflow 1.0-compatible GPflowOpt version. It is far from done; the biggest difficulty (Acquisition) unfortunately still lies ahead.

    At the same time, lots of tests are affected: I'm reworking them to use some of the cool pytest features. When this work is over, I hope to do another PR to improve testing further (split out computationally demanding tests into system tests, and use mocks to test things that currently trigger BO runs and model optimizations).

    do not merge yet Discussion 
    opened by javdrher 17
  • Cholesky failures due to inappropriate initial hyperparameters

    As mentioned in #4, tests often failed (mostly on Python 2.7) due to Cholesky decomposition errors. At first I thought this was mostly caused by updating the data and calling optimize() again, but resetting the hyperparameters didn't work all the time. Increasing the likelihood variance sometimes helps slightly, but isn't very robust either.

    Right now the tests specify lengthscales for the initial model, and apply a hyperprior on the kernel variance. In each BO iteration, the hyperparameters supplied with the initial model are applied as a starting point. In addition, restarts are applied by randomizing the Params. This approach made it a lot more stable, but it isn't perfect yet. Especially in longer runs of BO, reverting to the supplied lengthscales each time ultimately causes crashes.

    Some things we may consider:

    • Normalizing the input/output data. Tested this a bit; it didn't solve the issue. Additionally, the model hyperparameters lose some interpretability. Note that I think we will ultimately need this for PES anyway.
    • Add a callback for re-configuring hyperparameters. Instead of reverting to the initially supplied hypers each iteration, this function is called and configures the initial state. I think for more complex modeling approaches this is ultimately required, but for simple scenarios with GPR this has to work automatically.
    • Applying hyperpriors is going to be important (a sketch follows below).

    I'd love to hear thoughts on how to improve this.
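
    For illustration, a minimal sketch of the hyperprior idea (GPflow 0.x-style API; the Gamma parameters and the transform bound are assumptions for this example, not recommended values):

    import numpy as np
    import gpflow

    X, Y = np.random.rand(20, 2), np.random.rand(20, 1)  # placeholder data

    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    # A hyperprior on the kernel variance pulls it away from degenerate optima.
    model.kern.variance.prior = gpflow.priors.Gamma(2.0, 1.0)
    # A lower-bounded transform keeps the likelihood variance away from zero,
    # a common cause of Cholesky failures.
    model.likelihood.variance.transform = gpflow.transforms.Log1pe(1e-5)
    model.optimize()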

    help wanted 
    opened by javdrher 14
  • Question regarding getting started example from documentation

    Good morning guys,

    I am currently trying to get GPflowOpt up and running for an optimization problem. Naturally, I first tried the example provided in the documentation. While I do get the same result as in the documentation, I am puzzled by the function evaluations the optimizer chooses. To be more specific, the optimizer always chooses to evaluate the function at the point [0.0, 0.5], in all 15 iterations. I am probably overlooking something, as this does not seem to be the desired behavior, right? The optimizer does not seem to be really optimizing. Can anyone point out the mistake I made while setting up the problem? I am pretty sure I followed the instructions of the example in the documentation to the letter.

    This is the code that I am running:

    import numpy as np
    from gpflowopt.domain import ContinuousParameter
    import gpflow
    from gpflowopt.bo import BayesianOptimizer
    from gpflowopt.design import LatinHyperCube
    from gpflowopt.acquisition import ExpectedImprovement
    #from gpflowopt.optim import SciPyOptimizer
    
    def fx(X):
        X = np.atleast_2d(X)
        result = np.sum(np.square(X), axis=1)[:, None]
        print("X: {}".format(X))
        print("fx: {}".format(result))
        return result
    
    
    domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)
    
    # Use standard Gaussian process Regression
    lhd = LatinHyperCube(21, domain)
    X = lhd.generate()
    Y = fx(X)
    model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
    model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3)
    
    # Now create the Bayesian Optimizer
    alpha = ExpectedImprovement(model)
    optimizer = BayesianOptimizer(domain, alpha)
    
    # Run the Bayesian optimization
    #with optimizer.silent():
    r = optimizer.optimize(fx, n_iter=15)
    print(r)
    

    And this is the output I am seeing:

    python try_out_gpflowopt.py 
    2017-12-11 09:28:09.790044: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    X: [[ 0.2   0.65]
     [ 0.   -0.1 ]
     [-0.8   0.5 ]
     [ 0.4   1.4 ]
     [-0.6   1.25]
     [ 1.    0.05]
     [ 1.2   0.8 ]
     [-1.   -0.25]
     [ 0.8  -0.7 ]
     [ 1.4   1.55]
     [-1.6   1.1 ]
     [-1.8   0.35]
     [-0.2  -0.85]
     [ 1.8   0.2 ]
     [-0.4   1.85]
     [ 0.6   2.  ]
     [ 2.    0.95]
     [ 1.6  -0.55]
     [-1.4   1.7 ]
     [-1.2  -1.  ]
     [-2.   -0.4 ]]
    fx: [[ 0.4625]
     [ 0.01  ]
     [ 0.89  ]
     [ 2.12  ]
     [ 1.9225]
     [ 1.0025]
     [ 2.08  ]
     [ 1.0625]
     [ 1.13  ]
     [ 4.3625]
     [ 3.77  ]
     [ 3.3625]
     [ 0.7625]
     [ 3.28  ]
     [ 3.5825]
     [ 4.36  ]
     [ 4.9025]
     [ 2.8625]
     [ 4.85  ]
     [ 2.44  ]
     [ 4.16  ]]
    Warning: optimization restart 4/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ -3.76012797e-10   4.99988509e-01]]
    fx: [[ 0.24998851]]
    Warning: optimization restart 1/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 4/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: optimization restart 3/5 failed
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: optimization restart 3/5 failed
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    Warning: inf or nan in gradient: replacing with zeros
    X: [[ 0.   0.5]]
    fx: [[ 0.25]]
         fun: array([ 0.01])
     message: 'OK'
        nfev: 15
     success: True
           x: array([[ 0. , -0.1]])
    

    For the sake of completeness, these are the steps I took to set up GPflowOpt:

    1. Created new conda environment based on Python 3.5
    2. Cloned the gpflowopt repo
    3. Ran pip install . --process-dependency-links

    BTW, this is the first issue I've ever created on GitHub, so please forgive me if I am violating any conventions, and please let me know if I left out crucial information.
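
    One lead that may be worth trying (a sketch, not a confirmed fix): with a single gradient-based acquisition optimizer, the search can get stuck on one candidate. Staging a Monte Carlo step before SciPy, as done elsewhere in this repository, adds some global exploration. Reusing domain and alpha from the script above:

    from gpflowopt.optim import StagedOptimizer, MCOptimizer, SciPyOptimizer

    # Evaluate the acquisition on 200 random candidates first (200 is an
    # arbitrary choice for this sketch), then refine the best one with SciPy.
    opt = StagedOptimizer([MCOptimizer(domain, 200), SciPyOptimizer(domain)])
    optimizer = BayesianOptimizer(domain, alpha, optimizer=opt)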

    opened by jbi35 9
  • Overflow warnings

    During calls to optimize(), sometimes UserWarnings pop up:

    /home/javdrher/.virtualenvs/gpflowopt/lib/python3.5/site-packages/GPflow/transforms.py:129: RuntimeWarning: overflow encountered in exp
      result = np.log(1. + np.exp(x)) + self._lower

    Typically this is quite harmless; if it's really causing trouble, it's usually followed by a Cholesky decomposition exception. However, those warnings mess up the output, specifically in the documentation notebooks. I was thinking of silencing the warnings; any reason not to?
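
    If silencing is the route taken, a targeted filter would avoid hiding unrelated warnings (a sketch; the module pattern is an assumption):

    import warnings

    # Suppress only the overflow RuntimeWarning raised in GPflow's transforms.
    warnings.filterwarnings("ignore", category=RuntimeWarning,
                            module="GPflow.transforms")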

    question 
    opened by javdrher 9
  • Bug fix in pareto.py, Pareto::divide_conquer_nd

    The algorithm in Pareto::divide_conquer_nd fails when two points in the Pareto set have the same value in a certain dimension. An example is included in the modified pareto.py: the Pareto set d21 contains three points, two of which have a value of 2.0 in the first dimension. Ordering the three points in different ways results in different values of the hypervolume (28 and 32), both of which are wrong (it should be 29).

    The issue is in the dominance test associated with the _is_test_required method and pseudo_pf. The array pseudo_pf assigns different ranks to the same values. Therefore, by reordering the Pareto set in the test case, different pseudo Pareto sets are generated for the same Pareto set.

    I figured out two ways to fix this. One is to fix pseudo_pf by sorting the Pareto set such that the same values are assigned the same rank (e.g. using scipy.stats.rankdata). The other is to fix the dominance test by checking the actual Pareto set. The first one leads to more iterations in the algorithm. Therefore, I implemented the second approach in pareto.py with minimal modification, although some simplifications of the code are possible.
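
    For reference, a small illustration of the first option, i.e. tie-aware ranking (the points here are made up):

    import numpy as np
    from scipy.stats import rankdata

    # Two points share the value 2.0 in the first dimension; min-ranking
    # assigns them the same rank, so reordering the set cannot change it.
    pf = np.array([[2.0, 4.0], [3.0, 2.0], [2.0, 1.0]])
    ranks = np.apply_along_axis(lambda c: rankdata(c, method='min'), 0, pf)
    print(ranks)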

    opened by smanist 5
  • When installing GPflowOpt, it downgrades GPflow from 1.2 to 0.4

    I installed the latest GPflow version (1.2) using pip. Later, when I tried to install GPflowOpt using the following command:

    pip install git+https://github.com/GPflow/GPflowOpt.git

    it downgraded GPflow from 1.2 to 0.4.

    opened by pullanagari 5
  • Is GPflowOpt compatible anymore with GPflow functions?

    I just downloaded GPflowOpt, yet nothing can run due to slight changes made in GPflow. For example, you have now made a 'core' subfolder and seem to have changed the AutoFlow function. I have tried to apply some changes on my own (it is not hard to change 'from gpflow.param import DataHolder, AutoFlow' into two separate imports from their correct modules, params and core), but note that @AutoFlow calls now need to be @gpflow.autoflow. I spent several hours changing this for pretty much every function/class.

    Yet now it seems certain classes are also significantly changed. 'Parameterized' no longer has the attribute 'highest_parent', something needed for SciPyOptimizer.

    At this point, not a single thing can be called from GPflowOpt without error.

    opened by grahamski2323 5
  • Max-Value Entropy Search

    I implemented the recent acquisition function Max-Value Entropy Search from:

    Wang, Z. & Jegelka, S.. (2017). Max-value Entropy Search for Efficient Bayesian Optimization. Proceedings of the 34th International Conference on Machine Learning, in PMLR 70:3627-3635

    We named it Min-Value Entropy Search because the GPflow framework seeks the minimum of the function. There is a notebook evaluating the method on the Shekel function, and it seems to perform well.

    opened by nknudde 5
  • MGP

    In this pull request I implemented the Approximatively Marginalised GP, as indicated in issue #39 . It currently supports multi-output GPs. A notebook and some tests are included.

    opened by nknudde 5
  • GPR works, VGP doesn't

    To improve the speed of optimisation, I replaced GPR with VGP as follows:

    domain = np.sum([GPflowOpt.domain.ContinuousParameter(f'mux{i}', mm[i], mx[i]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'muy{i}', mm[i+7], mx[i+7]) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmax{i}', 1e-7, 1.) for i in range(7)])
    domain += np.sum([GPflowOpt.domain.ContinuousParameter(f'sigmay{i}', 1e-7, 1.) for i in range(7)])
    domain += GPflowOpt.domain.ContinuousParameter('offset', endo * 0.7, endo * 1.3)
    design = GPflowOpt.design.RandomDesign(500, domain)
    X = design.generate()
    Y = np.vstack([obj(x.reshape(1, -1)) for x in X])
    model = GPflow.vgp.VGP(X, Y, GPflow.kernels.RBF(29, lengthscales=X.std(axis=0)), likelihood=GPflow.likelihoods.Gaussian())
    acquisition = GPflowOpt.acquisition.ExpectedImprovement(model)
    opt = GPflowOpt.optim.StagedOptimizer([GPflowOpt.optim.MCOptimizer(domain, 500), GPflowOpt.optim.SciPyOptimizer(domain)])
    optimizer = GPflowOpt.BayesianOptimizer(domain, acquisition, optimizer=opt)
    optimizer.optimize(obj, n_iter=500)

    GPR works, but with VGP I receive the following error:

    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    2017-07-18 23:03:28.798171: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [501,1] vs. [500,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/unnamed._models.model_datascaler.model.likelihood.variational_expectations/sub_1_grad/Shape_1)]]
    Warning: optimization restart 1/5 failed
    2017-07-18 23:03:28.898935: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.898992: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899066: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    2017-07-18 23:03:28.899289: W tensorflow/core/framework/op_kernel.cc:1158] Invalid argument: Incompatible shapes: [500,1] vs. [501,1]
    [[Node: gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape, gradients/unnamed._models.model_datascaler.model.build_likelihood/add_1_grad/Shape_1)]]
    Warning: optimization restart 2/5 failed

    I'm using master GPflow and GPflowOpt on TensorFlow 1.2 and Python 3.6.

    Thanks.

    bug 
    opened by mccajm 5
  • Avoid (some) duplicate optimizes

    Following #52, here is some code that gets rid of one optimize call (in case no initial design is specified). The PR also includes a context manager that can be used to suspend all optimizes. This is mostly useful for the lower-level API.

    Note: in the following release the data mechanism will undergo some changes, and enabling the scaling should be moved to avoid another scaling.

    enhancement do not merge yet 
    opened by javdrher 4
  • Issue: Install package gpflowopt

    Hello, can you help me solve this issue?

    ERROR: Could not find a version that satisfies the requirement GPflow==0.5.0 (from gpflowopt) (from versions: 1.4.1.linux-x86_64, 1.5.0.linux-x86_64, 1.0.0, 1.1.0, 1.1.1, 1.2.0, 1.3.0, 1.4.1, 1.5.0, 1.5.1, 2.0.0rc1, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.2.0, 2.2.1, 2.3.0, 2.3.1, 2.4.0, 2.5.1, 2.5.2)
    ERROR: No matching distribution found for GPflow==0.5.0

    I tried to install it in Google Colab and Spyder, but it is not working.
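
    Note: the error output itself shows that GPflow 0.5.0 was never published to PyPI (the available versions start at 1.0.0), so pip cannot resolve the pinned dependency. Installing from a clone with pip install . --process-dependency-links, as described in the Install section above, was the documented route; note also that recent pip releases removed the --process-dependency-links flag, so an older pip may be needed.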

    opened by SamirLamin 2
  • Coupled or decoupled constrained BO?

    I am very new to the BO domain, am working on constrained BO, and found this wonderful tool that supports constrained BO. As I understand from the code, the acquisition functions for the objective and the constraint are multiplied. I have a question about that: is this constrained BO then considered coupled?

    opened by pallavimitra 0
  • Ask/tell interface

    Is there an ask/tell interface? If not, is there a workaround?

    I'd like to initialize the optimizer with already known function evaluations. Likewise, I'd like to query the next data point that should be evaluated, according to the acquisition function.
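
    As far as I can tell there is no documented ask/tell API in this version, but here is a partial workaround sketch using only calls from the getting-started example (the recording wrapper and the toy data are hypothetical):

    import numpy as np
    import gpflow
    from gpflowopt.domain import ContinuousParameter
    from gpflowopt.acquisition import ExpectedImprovement
    from gpflowopt.bo import BayesianOptimizer

    # "tell": seed the surrogate model with evaluations you already have.
    X_known = np.random.rand(10, 2)
    Y_known = np.sum(np.square(X_known), axis=1, keepdims=True)
    domain = ContinuousParameter('x1', 0, 1) + ContinuousParameter('x2', 0, 1)
    model = gpflow.gpr.GPR(X_known, Y_known, gpflow.kernels.Matern52(2, ARD=True))
    acquisition = ExpectedImprovement(model)

    # "ask": run a single iteration and record the point BO chooses to evaluate.
    suggested = []

    def record_and_evaluate(X):
        X = np.atleast_2d(X)
        suggested.append(X.copy())
        return np.sum(np.square(X), axis=1, keepdims=True)

    optimizer = BayesianOptimizer(domain, acquisition)
    optimizer.optimize(record_and_evaluate, n_iter=1)
    print("next suggested point:", suggested[-1])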

    opened by moi90 2
  • Discrete variables optimization

    Hey there, I found your project, which seems promising for a problem I am currently working on. However, as far as I can see, the option to include discrete variables in the optimization is not yet implemented? Is this correct? And if so, are there any developments in this direction currently going on?

    opened by HolmKiilerich 2
  • Can I use PyTorch within the objective function?

    Hi, before I start writing my code with GPflowOpt, I need to check with you whether I can use a PyTorch model within the objective function. My objective function needs the decoder of a VAE defined using PyTorch. In addition to other essential parameters, I want to pass this model as one parameter to the objective function. I understand that GPflowOpt is based on TensorFlow, but I am not sure whether the objective can be any function independent of TF. Thanks in advance...
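
    For what it's worth, the objective only needs to map a NumPy array of candidate points to a NumPy array of values (as in the getting-started example), so a PyTorch model inside it should work in principle. A minimal sketch, where the decoder is a hypothetical stand-in for the VAE decoder:

    import numpy as np
    import torch

    decoder = torch.nn.Linear(2, 1)  # stand-in for the PyTorch VAE decoder

    def objective(X):
        X = np.atleast_2d(X)
        with torch.no_grad():
            out = decoder(torch.from_numpy(X).float())
        # GPflowOpt expects a float64 array of shape (n, 1).
        return out.numpy().astype(np.float64)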

    opened by yifeng-li 0
Releases (v0.1.0)
  • v0.1.0(Sep 11, 2017)

    Initial version of the GPflowOpt framework, including some basic acquisition functions and support for standard Bayesian Optimization strategies.

Owner
GPflow