🐦 Opytimizer is a Python library consisting of meta-heuristic optimization techniques.

Overview

Opytimizer: A Nature-Inspired Python Optimizer

Welcome to Opytimizer.

Did you ever reach a bottleneck in your computational experiments? Are you tired of hand-tuning parameters for a chosen technique? If so, Opytimizer is the real deal! This package provides a straightforward implementation of meta-heuristic optimization techniques. From agents to search spaces, from internal functions to external communication, we aim to foster all research related to optimization.

Use Opytimizer if you need a library or wish to:

  • Create your own optimization algorithm;
  • Design or use pre-loaded optimization tasks;
  • Mix-and-match different strategies to solve your problem;
  • Have fun optimizing things.

Read the docs at opytimizer.readthedocs.io.

Opytimizer is compatible with: Python 3.6+.


Package guidelines

  1. The very first information you need is in the very next section.
  2. Installation is also easy; if you wish to read the code and dive in, follow along.
  3. Note that there might be some additional steps in order to use our solutions.
  4. If there is a problem, please do not hesitate to contact us.
  5. Finally, we focus on minimization. Keep that in mind when designing your problem; a maximization problem can be handled by minimizing the negated objective, as sketched below.
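For instance, a minimal sketch of that negation trick (profit is a hypothetical objective, not part of the library):

def profit(x):
    # Hypothetical objective we would like to maximize.
    return -(x[0] - 3) ** 2 + 10

def neg_profit(x):
    # Opytimizer minimizes, so we hand it the negated objective instead.
    return -profit(x)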

Citation

If you use Opytimizer to fulfill any of your needs, please cite us:

@misc{rosa2019opytimizer,
    title={Opytimizer: A Nature-Inspired Python Optimizer},
    author={Gustavo H. de Rosa and Douglas Rodrigues and João P. Papa},
    year={2019},
    eprint={1912.13002},
    archivePrefix={arXiv},
    primaryClass={cs.NE}
}

Getting started: 60 seconds with Opytimizer

First of all, we have examples. Yes, they are commented. Just browse to examples/, choose your subpackage, and follow the example. We have high-level examples for most tasks we could think of, as well as amazing integrations (Learnergy, NALP, OPFython, PyTorch, Scikit-Learn, Tensorflow).
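If you would rather start from a snippet, here is a minimal end-to-end sketch, assuming the v3 API (SearchSpace, PSO and Function, as used throughout examples/):

import numpy as np

from opytimizer import Opytimizer
from opytimizer.core import Function
from opytimizer.optimizers.swarm import PSO
from opytimizer.spaces import SearchSpace

def sphere(x):
    # Simple benchmark: sum of squared variables, minimum at the origin.
    return np.sum(x ** 2)

# 20 agents exploring 2 variables, each bounded to [-10, 10].
space = SearchSpace(n_agents=20, n_variables=2,
                    lower_bound=[-10, -10], upper_bound=[10, 10])
optimizer = PSO()
function = Function(sphere)

# Bundle everything together and run the optimization for 1000 iterations.
opt = Opytimizer(space, optimizer, function)
opt.start(n_iterations=1000)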

Alternatively, if you wish to learn even more, please take a minute:

Opytimizer is based on the following structure, and you should pay attention to its tree:

- opytimizer
    - core
        - agent
        - function
        - node
        - optimizer
        - space
    - functions
        - weighted
    - math
        - distribution
        - general
        - hyper
        - random
    - optimizers
        - boolean
        - evolutionary
        - misc
        - population
        - science
        - social
        - swarm
    - spaces
        - boolean
        - grid
        - hyper_complex
        - search
        - tree
    - utils
        - constants
        - decorator
        - exception
        - history
        - logging
    - visualization
        - convergence
        - surface

Core

Core is the core. Essentially, it is the parent of everything. Here you will find the parent classes that define the basis of our structure, providing the variables and methods that help construct the other modules.
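For instance, a quick sketch of poking at the core entities after building a space (assuming agents are created on construction and each core Agent exposes position and fit attributes):

from opytimizer.spaces import SearchSpace

space = SearchSpace(n_agents=5, n_variables=2,
                    lower_bound=[-1, -1], upper_bound=[1, 1])

# Each entry in `space.agents` is a core Agent holding a position and a fitness.
agent = space.agents[0]
print(agent.position, agent.fit)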

Functions

Instead of using raw, straightforward functions, why not try this module? Compose high-level abstract functions or even new function-based ideas in order to solve your problems. Note that, for now, we only support multi-objective function strategies.
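A minimal sketch of the weighted strategy, assuming the WeightedFunction class lives in the functions.weighted module and takes lists of functions and weights (the two toy objectives are ours):

import numpy as np

from opytimizer.functions.weighted import WeightedFunction

def f1(x):
    # First toy objective: sum of squares.
    return np.sum(x ** 2)

def f2(x):
    # Second toy objective: sum of absolute values.
    return np.sum(np.abs(x))

# Collapses both objectives into the weighted sum 0.5 * f1 + 0.5 * f2.
function = WeightedFunction(functions=[f1, f2], weights=[0.5, 0.5])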

Math

Just because we are computing stuff does not mean we do not need math. Math is the mathematical package, containing low-level math implementations. From random numbers to distribution generation, you can find what you need in this module.
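For example, a short sketch of drawing random numbers through this package, assuming the generate_uniform_random_number helper in math.random:

from opytimizer.math import random as r

# Draws 5 samples from a uniform distribution over [0, 1).
u = r.generate_uniform_random_number(low=0.0, high=1.0, size=5)
print(u)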

Optimizers

This is why we are called Opytimizer. This is the heart of the heuristics, where you can find a large number of meta-heuristics and optimization techniques; anything that can be called an optimizer lives here. Please take a look at the available optimizers.
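As a sketch, most optimizers can be tuned through a dictionary of hyperparameters (assuming the v3 convention of a params argument; w, c1 and c2 are PSO's inertia and acceleration coefficients):

from opytimizer.optimizers.swarm import PSO

# Overrides PSO's defaults: inertia weight and both acceleration coefficients.
optimizer = PSO(params={'w': 0.7, 'c1': 1.7, 'c2': 1.7})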

Spaces

One can see the space as the place where agents update their positions and evaluate a fitness function. However, the newest approaches may consider a different type of space. With that in mind, we are glad to support diverse space implementations.
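For instance, a sketch of a grid-based space with a per-variable step size (assuming the GridSpace signature used in the examples):

from opytimizer.spaces import GridSpace

# Two variables searched over a grid: the first stepped by 1, the second by 0.1.
space = GridSpace(n_variables=2, step=[1, 0.1],
                  lower_bound=[0, 0], upper_bound=[10, 1])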

Utils

This is a utility package. Common things shared across the application should be implemented here. It is better to implement something once and use it as you wish than to re-implement the same thing over and over again.

Visualization

Everyone needs images and plots to help visualize what is happening, correct? This package provides every visualization-related method you may need. Check a specific variable's convergence, your fitness function's convergence, plot benchmark function surfaces, and much more!
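A minimal sketch of plotting the best agent's convergence after a finished run, assuming the History.get_convergence helper and the convergence.plot function used in examples/visualization (opt is the finished Opytimizer bundle from the getting-started sketch above):

from opytimizer.visualization import convergence

# Gathers the best agent's position and fitness over the iterations.
best_agent_pos, best_agent_fit = opt.history.get_convergence('best_agent')

# Plots the best fitness value per iteration.
convergence.plot(best_agent_fit, labels=['Best fitness'])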


Installation

We believe that everything should be easy. Nothing tricky or daunting: Opytimizer aims to be the go-to package you need, from the very first installation to your daily implementation tasks. Just run the following under your preferred Python environment (raw, conda, virtualenv, whatever):

pip install opytimizer

Alternatively, if you prefer to install the bleeding-edge version, please clone this repository and use:

pip install -e .

Environment configuration

Note that, sometimes, additional setup is needed. If so, the platform-specific notes below cover the details.

Ubuntu

No specific additional commands needed.

Windows

No specific additional commands needed.

MacOS

No specific additional commands needed.


Support

We know that we do our best, but it is inevitable to acknowledge that we make mistakes. If you ever need to report a bug or a problem, or simply want to talk to us, please do so! We will do our best to help at this repository or at [email protected].


Comments
  • [BUG] AttributeError: 'History' object has no attribute 'show'

    Describe the bug

    It looks like there is no show() method for the returned opytimizer history.

    To Reproduce Steps to reproduce the behavior:

    1. Follow the steps from the wiki Tutorial: Your first optimization
      1. Run the optimizer with o.start():
         o = Opytimizer(space=s, optimizer=p, function=f)
         history = o.start()
      2. Show the history: history.show()

    Expected behavior Not sure what I expected, just curious :)

    Screenshots

    2020-01-02 13:26:28,270 - opytimizer.optimizers.fa — INFO — Iteration 1000/1000
    2020-01-02 13:26:28,278 - opytimizer.optimizers.fa — INFO — Fitness: 4.077875713322641e-14
    2020-01-02 13:26:28,279 - opytimizer.optimizers.fa — INFO — Position: [[-1.42791381e-07]
     [-1.42791381e-07]]
    2020-01-02 13:26:28,279 - opytimizer.opytimizer — INFO — Optimization task ended.
    2020-01-02 13:26:28,279 - opytimizer.opytimizer — INFO — It took 7.672951936721802 seconds.
    >>> history.show()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'History' object has no attribute 'show'
    >>> history
    <opytimizer.utils.history.History object at 0x113b75be0>
    >>> history.show()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'History' object has no attribute 'show'
    

    Desktop (please complete the following information):

    • OS: macOS Mojave 10.14.6
    • Virtual Environment: conda base
    • Python Version:
    (base) justinmai$ python3
    Python 3.7.3 (default, Mar 27 2019, 16:54:48) 
    [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
    


    bug 
    opened by justinTM 5
  • [REG] How to suppress DEBUG log messages in opytimizer.core.space

    Hello, after initializing a SearchSpace, there is a debug message printed to stdout. How can we turn it off/on? The message is the following: opytimizer.core.space — DEBUG — Agents: 25 |....

    I believe it's printed because of line #223 in the file opytimizer/core/space.py.

    For large dimensions, it prints all the lower and upper bounds, which we may not always require.

    Thanks.

    general 
    opened by amir1m 4
  • [NEW] Dump optimization progress

    Is your feature request related to a problem? Please describe. If the server running the optimization fails, we lose all the elapsed running time.

    Describe the solution you'd like Dump the optimization object from time to time.

    Describe alternatives you've considered Maybe dump the agents?

    enhancement 
    opened by gcoimbra 4
  • [NEW] Constrained optimization

    Hi, thanks for the work.

    Any plans to add functionality to define constraints for the optimization? For instance, inequalities or any arbitrary non-linear constraints on the inputs and/or on the outputs?

    enhancement 
    opened by ogencoglu 4
  • [NEW] Using population data for population-based algorithms?

    Hello, first of all, thank you for sharing such a fantastic repo that can be used as an off-the-shelf collection of meta-heuristic optimization algorithms!

    I have a question regarding how to use my own population data with the optimizers in Opytimizer. Rather than using a SearchSpace with predetermined upper/lower bounds, is there any way I can start the optimization from my own population samples?

    Thank you, hope you have a wonderful day!

    enhancement 
    opened by jimmykimmy68 3
  • [REG] What is the difference between the grid space and the discrete space example?

    Greetings,

    I have a mixed search space problem of five dimensions: four discrete and one continuous. How can I implement a discrete search space with these dimensions, where the increment is not the same for each? I found that grid space offers the flexibility I need, yet I noticed you used a different implementation in the discrete space example, so which one should I use?

    My search space:

    step = [1, 1, 1, 1, 0.1]
    lower_bound = [16, 3, 2, 0, 0]
    upper_bound = [400, 20, 20, 1, 0.33]
    

    Thanks in advance.

    general 
    opened by TamerAbdelmigid 3
  • [NEW] Different step size for each variable

    Is your feature request related to a problem? Please describe. It's not

    Describe the solution you'd like I'd like a different step size for each variable.

    Additional context Sometimes variables are less sensitive than others. Some are integers. Is there any way to use a different step for each one?

    enhancement 
    opened by gcoimbra 3
  • [REG] How to get a detailed print out during optimization?

    Greetings,

    My function takes a long time to evaluate, and I want to closely monitor what happens during the optimization process. When using a grid search space and printing the fitness from my function, I could see the movement along the grid. But when using a normal search space, my processor utilization is 100% and it takes hours with no printout. So, how do I get more details? Is there something like a degree of verbosity?

    Thanks in advance.

    general 
    opened by TamerAbdelmigid 2
  • [NEW] Define objective function for regression problem

    Hi there,

    I attempted to define an objective function (using a CatBoost model for my data) to solve a minimization problem in a regression task, but failed to create the new objective function. So, does your package offer a solution for regression, and can we define such an objective function in this case?

    My desired objective function is something like this, minimizing the MSE:

    from catboost import CatBoostRegressor as cbr
    from sklearn.metrics import mean_squared_error

    cbr_model = cbr()

    def objective_function(cbr_model, X_train3, y_train3, X_test3, y_test3):
        # Fits the regressor and returns the test MSE to be minimized.
        cbr_model.fit(X_train3, y_train3)
        mse = mean_squared_error(y_test3, cbr_model.predict(X_test3))
        return mse
    

    Many thanks, Thang

    enhancement 
    opened by hanamthang 2
  • [REG] How to plot a convergence diagram?

    Hello, I was looking for a convergence diagram. I found an example using the convergence function at opytimizer/examples/visualization/convergence_plotting.py. However, it uses a few constant values for agent positions. Is there a convergence example that shows how to use this function with an actual optimization problem, such as after carrying out PSO?

    Thanks,

    general 
    opened by amir1m 2
  • [REG] Is there a way to provide initial values before starting optimization?

    Hello, hope you are keeping well.

    I am looking to provide an initial value to the optimization loop. Is there a way, through SearchSpace or otherwise, to provide an initial/starting solution?

    Thanks.

    general 
    opened by amir1m 2
Releases(v3.1.2)
  • v3.1.2(Sep 9, 2022)

    Changelog

    Description

    Welcome to v3.1.2 release.

    In this release, we added variable name mapping to search spaces.

    Includes (or changes)

    • core.search_space
    Source code(tar.gz)
    Source code(zip)
  • v3.1.1(May 4, 2022)

    Changelog

    Description

    Welcome to v3.1.1 release.

    In this release, we added pre-commit hooks and annotated typing.

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v3.1.0(Jan 7, 2022)

    Changelog

    Description

    Welcome to v3.1.0 release.

    In this release, we implemented the initial parts for holding graph-related searches, which will support Neural Architecture Search (NAS) in the future.

    Additionally, we have added the first algorithm for calculating the Pareto frontier of pre-defined points (Non-Dominated Sorting).

    Includes (or changes)

    • core
    • optimizers
    • spaces
    Source code(tar.gz)
    Source code(zip)
  • v3.0.2(Jun 28, 2021)

    Changelog

    Description

    Welcome to v3.0.2 release.

    In this release, we implemented the remaining meta-heuristics that were on hold. Please note that they are supposed to be 100% working, yet we still need to evaluate their performance experimentally.

    Additionally, an important hot-fix regarding the calculation of the Euclidean distance has been applied.

    Includes (or changes)

    • optimizers
    • math.general
    Source code(tar.gz)
    Source code(zip)
  • v3.0.1(May 31, 2021)

    Changelog

    Description

    Welcome to v3.0.1 release.

    In this release, we have added a bunch of meta-heuristics that were supposed to be implemented earlier.

    Includes (or changes)

    • optimizers
    Source code(tar.gz)
    Source code(zip)
  • v3.0.0(May 12, 2021)

    Changelog

    Description

    Welcome to v3.0.0 release.

    In this release, we have revamped the whole library, rewriting base packages, such as agent, function, node and space, as well as implementing new features, such as callbacks and a better Opytimizer bundler.

    Additionally, we have rewritten every optimizer and their tests, removing more than 2.5k lines that were tagged as "repeatable". Furthermore, we have removed excessive commentaries to provide a cleaner reading and have rewritten every example to include the newest features.

    Please take a while to check our most important advancements and read the docs at: opytimizer.readthedocs.io

    Note that this is a major release and we expect everyone to update their corresponding packages, as this update will not work with v2.x.x versions.

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v2.1.4(Apr 28, 2021)

    Changelog

    Description

    Welcome to v2.1.4 release.

    In this release, we have added the following optimizers: AOA and AO. Additionally, we have implemented a step size for each variable in the grid, as pointed out by @gcoimbra.

    Please read the docs at: opytimizer.readthedocs.io

    Note that this is the latest update of the v2 branch. The following update will feature a major rework of the Opytimizer classes and will not be compatible with past versions.

    Includes (or changes)

    • optimizers.misc.aoa
    • optimizers.population.ao
    • optimizers.spaces.grid
    Source code(tar.gz)
    Source code(zip)
  • v2.1.3(Mar 10, 2021)

    Changelog

    Description

    Welcome to v2.1.3 release.

    In this release, we have added the following optimizers: COA, JS and NBJS. Additionally, we have fixed the roulette selection for minimization problems, as pointed out by @Martsks.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers.evolutionary.ga
    • optimizers.population.coa
    • optimizers.swarm.js
    Source code(tar.gz)
    Source code(zip)
  • v2.1.2(Dec 3, 2020)

    Changelog

    Description

    Welcome to v2.1.2 release.

    In this release, we have added a set of new optimizers, as follows: ABO, ASO, BOA, BSA, CSA, DOA, EHO, EPO, GWO, HGSO, HHO, MFO, PIO, QSA, SFO, SOS, SSA, SSO, TWO, WOA and WWO.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers
    Source code(tar.gz)
    Source code(zip)
  • v2.1.1(Nov 25, 2020)

    Changelog

    Description

    Welcome to v2.1.1 release.

    In this release, we have added some HS-based optimizers and fixed an issue regarding the store_best_only flag.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers
    • utils.history
    Source code(tar.gz)
    Source code(zip)
  • v2.1.0(Jul 29, 2020)

    Changelog

    Description

    Welcome to v2.1.0 release.

    In this release, we have fixed the UMDA nomenclature, corrected some optimizers' docstrings and added a soft-penalization to constrained optimization, which we believe will help users design more appropriate constrained objectives. Furthermore, we added a deterministic trait to the optimizers' unit tests so they are able to converge.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • core.function
    • optimizers
    • tests
    Source code(tar.gz)
    Source code(zip)
  • v2.0.2(Jul 10, 2020)

    Changelog

    Description

    Welcome to v2.0.2 release.

    In this release, we have added BMRFO, SAVPSO, UDMA and VPSO optimizers.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers.boolean.bmrfo
    • optimizers.boolean.udma
    • optimizers.swarm.pso
    Source code(tar.gz)
    Source code(zip)
  • v2.0.1(Jun 26, 2020)

  • v2.0.0(May 7, 2020)

    Changelog

    Description

    Welcome to v2.0.0 release.

    In this release, we have reworked some inner structures, added the usability of decorators and revamped some optimizers. Additionally, we added a bunch of new optimizers and their tests.

    Note that this is a major release; therefore, we suggest updating the library as soon as possible, as it will not be compatible with older versions.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v1.1.3(Mar 31, 2020)

    Changelog

    Description

    Welcome to v1.1.3 release.

    In this release, we have improved the code readability and merged some child optimizers into their parents' modules.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • opytimizer
    Source code(tar.gz)
    Source code(zip)
  • v1.1.2(Feb 20, 2020)

    Changelog

    Description

    Welcome to v1.1.2 release.

    In this release, we have fixed some nasty bugs that could cause particular issues. Additionally, we added a method for handling constrained optimization.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • core.function
    Source code(tar.gz)
    Source code(zip)
  • v1.1.1(Jan 15, 2020)

    Changelog

    Description

    Welcome to v1.1.1 release.

    In this release, we have fixed some nasty bugs that could cause particular issues. Additionally, we added several new optimizers to the library and a convergence module (belonging to the visualization package).

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • optimizers.gsa
    • optimizers.hc
    • optimizers.rpso
    • optimizers.sa
    • optimizers.sca
    • visualization.convergence
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Oct 3, 2019)

    Changelog

    Description

    Welcome to v1.1.0 release.

    This is a minor release that will probably break retro-compatibility. Some base structures were changed in order to enhance the library's performance.

    Essentially, new structures were created to provide the building tools for tree-based evolutionary algorithms, such as Genetic Programming.

    Additionally, we added a constants module to the utilities package and an exception module. This will help guide users when inputting wrong information.

    MultiFunction was renamed to WeightedFunction, which is more suitable according to its usage.

    History has been reworked as well, making it possible to dynamically create properties through the dump() method.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • examples/applications
    • core.node
    • function.weighted
    • math.general
    • optimizers.gp
    • opytimizer
    • spaces.tree
    • tests
    • utils.constants
    • utils.exceptions
    • utils.history
    Source code(tar.gz)
    Source code(zip)
  • v1.0.7(Sep 12, 2019)

    Changelog

    Description

    Welcome to v1.0.7 release. We added Opytimizer to the pip repository, fixed the optimization task timing, reworked some tests, added new integrations (Recogners library), changed our license for further publication, and fixed an issue regarding agents going out of bounds.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes (or changes)

    • examples/integrations/recogners
    • opytimizer
    • optimizers
    • tests
    Source code(tar.gz)
    Source code(zip)
  • v1.0.6(Jun 21, 2019)

    Changelog

    Description

    Welcome to v1.0.6 release. We added new interesting things, such as a new optimizer, more benchmarking functions, bug fixes (mostly agents being out of bounds), a reworked history object and some adjusted tests.

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes

    • math.benchmark
    • optimizers.wca
    • utils.history
    • tests
    Source code(tar.gz)
    Source code(zip)
  • v1.0.5(Apr 18, 2019)

    Changelog

    Description

    Welcome to v1.0.5 release. We added new interesting things, such as new optimizers, some reworked tests for better use cases, a multi-objective strategy for handling problems with more than one objective function, and much more!

    Please read the docs at: opytimizer.readthedocs.io

    Also, stay tuned for our next updates!

    Includes

    • examples.integrations.sklearn
    • functions.multi
    • optimizers.abc
    • optimizers.bha
    • optimizers.hs
    • tests
    • optimizers.ihs
    Source code(tar.gz)
    Source code(zip)
  • v1.0.4(Mar 22, 2019)

  • v1.0.3(Mar 22, 2019)

    Changelog

    Description

    Welcome to v1.0.3 release. We have added methods for supporting new optimizers. Additionally, we have fixed some previous implementations and improved their convergence. Everything should be appropriate now.

    Some examples integrating PyTorch with Opytimizer were created as well. Ranging from linear regression to long short-term memory networks, we hope to continue improving our library to serve you well.

    Again, every test is implemented, achieving 100% coverage. Please refer to the wiki in order to run them.

    Please stay tuned for our next updates and our newest integrations (Sklearn and Tensorflow)!

    Includes

    • optimizers.aiwpso
    • optimizers.ba
    • optimizers.cs
    • optimizers.fa
    • examples.integrations.pytorch
    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Mar 7, 2019)

    Changelog

    Description

    Welcome to v1.0.2 release. We have added methods for supporting hypercomplex representations. From math modules to new spaces, we support any hypercomplex approach, ranging from complex numbers to octonions.

    A History class has been added as well. It will serve as the one to hold vital information from the optimization task. In the future, we will support visualization and plotting.

    Also, the internal class has been removed. All of its contents were moved to the core.function module. For now, this will be our new structure (there is a slight chance it will be modified in the future to accommodate multi-objective functions).

    Finally, we have tests. Every test is implemented, achieving 100% coverage. Please refer to the wiki in order to run them.

    Please stay tuned for our next updates!

    Includes

    • math.hypercomplex
    • spaces
    • utils.history
    • tests (100% coverage)

    Excludes

    • functions
    Source code(tar.gz)
    Source code(zip)
  • v1.0.1(Feb 26, 2019)

    Changelog

    Description

    Welcome to v1.0.1 release. Essentially, we have reworked some basic structures, added a new math distribution module and a new optimizer (Flower Pollination Algorithm). Please stay tuned for our next updates!

    Includes

    • math.distribution
    • optimizers.fpa
    Source code(tar.gz)
    Source code(zip)
  • v1.0.0(Feb 26, 2019)

    Changelog

    Description

    This is the initial release of Opytimizer. It includes all the basic modules needed to work with it. One can create an internal optimization function and apply a Particle Swarm Optimization optimizer to it. Please check the examples folder or read the docs in order to learn how to use this library.

    Includes

    • core
    • functions
    • math
    • optimizers
    • utils
    Source code(tar.gz)
    Source code(zip)