Python implementation of Weng-Lin Bayesian ranking, a better, license-free alternative to TrueSkill

Overview

This is a port of the amazing openskill.js package.

Installation

pip install openskill

Usage

>>> from openskill import Rating, rate
>>> a1 = Rating()
>>> a1
Rating(mu=25, sigma=8.333333333333334)
>>> a2 = Rating(mu=32.444, sigma=5.123)
>>> a2
Rating(mu=32.444, sigma=5.123)
>>> b1 = Rating(43.381, 2.421)
>>> b1
Rating(mu=43.381, sigma=2.421)
>>> b2 = Rating(mu=25.188, sigma=6.211)
>>> b2
Rating(mu=25.188, sigma=6.211)

If a1 and a2 are on one team and win against the team of b1 and b2, pass both teams into rate:

>>> [[x1, x2], [y1, y2]] = rate([[a1, a2], [b1, b2]])
>>> x1, x2, y1, y2
([28.669648436582808, 8.071520788025197], [33.83086971107981, 5.062772998705765], [43.071274808241974, 2.4166900452721256], [23.149503312339064, 6.1378606973362135])

You can also turn the resulting [mu, sigma] lists back into Rating objects by importing create_rating:

>>> from openskill import create_rating
>>> x1 = create_rating(x1)
>>> x1
Rating(mu=28.669648436582808, sigma=8.071520788025197)
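
To convert every player in a result back into Rating objects at once, a plain list comprehension over the output of rate works. This is just a small sketch built on the calls already shown above, not a separate API:

>>> result = rate([[a1, a2], [b1, b2]])
>>> [[x1, x2], [y1, y2]] = [[create_rating(player) for player in team] for team in result]
>>> x1
Rating(mu=28.669648436582808, sigma=8.071520788025197)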

Ranks

When displaying a rating, or sorting a list of ratings, you can use ordinal:

>>> from openskill import ordinal
>>> ordinal(mu=43.07, sigma=2.42)
35.81

By default, this returns mu - 3 * sigma: a conservative estimate for which there is a 99.7% likelihood that the player's true rating is higher. Because of this, a player's ordinal rating will usually go up over their early games, and can go up even when that player loses.
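
For example, a leaderboard sorted by ordinal could look like the following sketch (the player names are made up, and it assumes Rating exposes mu and sigma attributes, as its repr suggests):

>>> from openskill import Rating, ordinal
>>> players = {"ayumi": Rating(mu=43.07, sigma=2.42), "bruno": Rating()}
>>> sorted(players, key=lambda name: ordinal(mu=players[name].mu, sigma=players[name].sigma), reverse=True)
['ayumi', 'bruno']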

Artificial Ranks

If your teams are listed in one order but their actual ranking is different, for convenience you can specify a rank option, such as:

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2])
>>> result
[[[20.96265504062538, 8.083731307186588]], [[27.795084971874736, 8.263160757613477]], [[24.68943500312503, 8.083731307186588]], [[26.552824984374855, 8.179213704945203]]]

It's assumed that lower ranks are better (wins) and higher ranks are worse (losses). You can provide a score instead, where lower is worse and higher is better. These can simply be raw scores from the game, if you want.

Ties should have either equivalent rank or score.

>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], score=[37, 19, 37, 42])
>>> result
[[[24.68943500312503, 8.179213704945203]], [[22.826045021875203, 8.179213704945203]], [[24.68943500312503, 8.179213704945203]], [[27.795084971874736, 8.263160757613477]]]

Choosing Models

The default model is PlackettLuce. You can import alternate models from openskill.models like so:

>>> from openskill.models import BradleyTerryFull
>>> a1 = b1 = c1 = d1 = Rating()
>>> rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2], model=BradleyTerryFull)
[[[17.09430584957905, 7.5012190693964005]], [[32.90569415042095, 7.5012190693964005]], [[22.36476861652635, 7.5012190693964005]], [[27.63523138347365, 7.5012190693964005]]]

Predicting Winners

You can compare two or more teams to get the probabilities of each team winning.

>>> from openskill import predict_win
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> predictions = predict_win(teams=[[a1], [a2]])
>>> predictions
[0.45110901512761536, 0.5488909848723846]
>>> sum(predictions)
1.0
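
predict_win takes the same teams-of-Rating structure as rate, so multi-player teams should work as well. Here is a minimal 2v2 sketch (the team compositions are made up for illustration):

>>> from openskill import Rating, predict_win
>>> strong_team = [Rating(mu=33.564, sigma=1.123), Rating(mu=32.444, sigma=5.123)]
>>> new_team = [Rating(), Rating()]
>>> favored, underdog = predict_win(teams=[strong_team, new_team])
>>> favored > underdog
True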

Available Models

  • BradleyTerryFull: Full Pairing for Bradley-Terry
  • BradleyTerryPart: Partial Pairing for Bradley-Terry
  • PlackettLuce: Generalized Bradley-Terry
  • ThurstoneMostellerFull: Full Pairing for Thurstone-Mosteller
  • ThurstoneMostellerPart: Partial Pairing for Thurstone-Mosteller

Which Model Do I Want?

  • Bradley-Terry rating models follow a logistic distribution over a player's skill, similar to Glicko.
  • Thurstone-Mosteller rating models follow a Gaussian distribution, similar to TrueSkill. Gaussian CDF/PDF functions differ in implementation from system to system (they're all just Chebyshev approximations anyway). This model's accuracy usually isn't as good either, but tuning it with an alternative gamma function can improve the accuracy if you really want to get into it.
  • Full pairing should give more accurate ratings than partial pairing; however, in games with many teams k (like a 100+ person marathon race), the Bradley-Terry and Thurstone-Mosteller models need to compute a joint probability that involves a (k-1)-dimensional integration, which is computationally expensive. Use partial pairing in this case, where players are only updated based on their neighbors; see the sketch after this list.
  • Plackett-Luce (default) is a generalized Bradley-Terry model for k ≥ 3 teams. It scales best.
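
As a concrete sketch of the partial-pairing advice above, here is how rating a larger free-for-all with BradleyTerryPart might look (the 20-player field and finishing order are made up for illustration):

>>> from openskill import Rating, rate
>>> from openskill.models import BradleyTerryPart
>>> field = [[Rating()] for _ in range(20)]      # 20 solo players in one race
>>> finishing_order = list(range(1, 21))         # rank 1 finished first
>>> results = rate(field, rank=finishing_order, model=BradleyTerryPart)
>>> len(results)
20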

Comments
  • Support for partial play/weighting/player performance

    Would it be possible to add some sort of system to weight player performance, like the official trueskill module does? I'm trying to create a system that weights players' overall performance relative to their teams to get a more accurate skill rating.

    enhancement help wanted 
    opened by spookybear0 7
  • Support for Python 3.7+

    I've been using openskill in my local Python 3.9 environment for a while now. I've been liking it a lot and I wanted to add it to one of my projects for use in production. I was surprised to find that the latest version on pypi said it only supported 3.10+, which is pretty rough from a compatibility perspective. I wanted to check how much it actually depended on features in 3.10, so I switched to a 3.6 virtual env and tried running the tests. They failed, and indicated that isinstance(var, Union[T1, T2]) calls were an issue. This is a fairly easy thing to express in older pythons with: isinstance(var, (T1, T2)). I did a search and replace on these, and then the tests passed. That was the only incompatibility I could find.

    If the maintainers are open to supporting older Pythons for a new release, that would be very helpful to me. I would also be OK with the minimum version being something like 3.7 or 3.8.

    I haven't used towncrier before, and it wasn't immediately obvious how to impute the changelog, so I figure I'll wait until a maintainer greenlights this before I continue any work on details like that.

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: Erotemic

    enhancement 
    opened by Erotemic 6
  • let score difference be reflected in rating

    When you enter scores into rate(), the difference between the scores has no effect on the rating. In other words, rate([team1, team2], score=[1, 0]) == rate([team1, team2], score=[100, 0]) is true: both have exactly the same rating effect on team1 and team2.

    I don't know if it is mathematically possible or what it would look like, but it would be great if the difference could somehow be factored into the calculation, since (if your game has a score) it is quite an important data point for skill evaluation.

    enhancement 
    opened by jonathan-scholz 4
  • Are `predict_win` and `predict_draw` functions accidentally using Thurstone-Mosteller specific calculations?

    If I understand it correctly, those two functions seem to perform calculations using the equations numbered (65) in the paper. However, those equations seem to be specific to the Thurstone-Mosteller model, and as far as I can tell, the proper way to calculate probabilities for the Bradley-Terry model would be to use equations (48) and (51) (also seen as p_iq in equation (49)). Is this intended? Or am I misunderstanding either the paper or the code of these functions?

    question 
    opened by asyncth 2
  • Update link

    Description of Changes

    • [ ] Wrote at least one-line docstrings (for any new functions)
    • [ ] Added test(s) covering the changes (if testable)

    Updated link to repository

    Issue(s) Resolved

    Fixes #

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: bstummer

    opened by bstummer 2
  • Unable to install on Google Colab

    Describe the bug
    I can't install openskill on Google Colab via pip. What should I do?

    Platform Information

    • Google Colab
    • Python Version: 3.7.13
    wontfix 
    opened by toshi71 2
  • Implement additive dynamics factor 'tau'.

    Description of Changes

    • [x] Wrote at least one-line docstrings (for any new functions)
    • [x] Added test(s) covering the changes (if testable)

    Ports https://github.com/philihp/openskill.js/pull/233

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: daegontaven

    opened by daegontaven 2
  • Faster runtime of predict_win and predict_draw

    Description of Changes

    • [x] Adds some unit tests that I have for the Javascript library, to prevent any regressions.
    • [x] Tells itertools.permutations to give permutation pairs, rather than all full permutations
    • [x] Additionally, the length of permutation_pairs is shorter, and that simplifies the denominator to just a triangular number.

    I noticed that for teams of ABCD, your code as written was finding all permutations (ABCD, ABDC, ACBD, ACDB, ...) and then only using the first two teams from each permutation, which for n >= 4 teams causes the same pairings to be calculated multiple times.

    This should reduce runtime from O(n^n) to O(n^2).

    Affirmation

    By submitting this Pull Request or typing my (user)name below, I affirm the Developer Certificate of Origin with respect to all commits and content included in this PR, and understand I am releasing the same under openskill.py's MIT license.

    I certify the above statement is true and correct: Philihp Busby

    opened by philihp 2
  • Bump prompt-toolkit from 3.0.26 to 3.0.27

    Bumps prompt-toolkit from 3.0.26 to 3.0.27.

    Changelog

    Sourced from prompt-toolkit's changelog.

    3.0.27: 2022-02-07

    New features:

    • Support for cursor shapes. The cursor shape for prompts/applications can now be configured, either as a fixed cursor shape, or in case of Vi input mode, according to the current input mode.
    • Handle "cursor forward" command in ANSI formatted text. This makes it possible to render many kinds of generated ANSI art.
    • Accept align attribute in Label widget.
    • Added PlainTextOutput: an output implementation that doesn't render any ANSI escape sequences. This will be used by default when redirecting stdout to a file.
    • Added create_app_session_from_tty: a context manager that enforces input/output to go to the current TTY, even if stdin/stdout are attached to pipes.
    • Added to_plain_text utility for converting formatted text into plain text.

    Fixes:

    • Don't automatically use sys.stderr for output when sys.stdout is not a TTY, but sys.stderr is. The previous behavior was confusing, especially when rendering formatted text to the output, and we expect it to follow redirection.
    Commits
    • 6ac867a Release 3.0.27
    • 96ec6fb Removed unused imports.
    • b4d728e Added support for cursor shapes.
    • 4a66820 Added to_plain_text utility. A function to turn formatted text into a string.
    • 7dd8435 Added create_app_session_from_tty.
    • 2c96fe2 Stop preferring a TTY output when creating the default output.
    • 402b6a3 Added PlainTextOutput: an output that doesn't write ANSI escape sequences to ...
    • 57b42c4 Added ansi-art-and-textarea.py example.
    • a71de3c Accept 'align' attribute in Label widget.
    • 64d870a Added ptk-logo-ansi-art.py example.
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Bump scipy from 1.7.3 to 1.8.0

    Bumps scipy from 1.7.3 to 1.8.0.

    Release notes

    Sourced from scipy's releases.

    SciPy 1.8.0 Release Notes

    SciPy 1.8.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with python -Wd and check for DeprecationWarning s). Our development attention will now shift to bug-fix releases on the 1.8.x branch, and on adding new features on the master branch.

    This release requires Python 3.8+ and NumPy 1.17.3 or greater.

    For running on PyPy, PyPy3 6.0+ is required.

    Highlights of this release

    • A sparse array API has been added for early testing and feedback; this work is ongoing, and users should expect minor API refinements over the next few releases.
    • The sparse SVD library PROPACK is now vendored with SciPy, and an interface is exposed via scipy.sparse.svds with solver='PROPACK'. It is currently default-off due to potential issues on Windows that we aim to resolve in the next release, but can be optionally enabled at runtime for friendly testing with an environment variable setting of USE_PROPACK=1.
    • A new scipy.stats.sampling submodule that leverages the UNU.RAN C library to sample from arbitrary univariate non-uniform continuous and discrete distributions
    • All namespaces that were private but happened to miss underscores in their names have been deprecated.

    New features

    scipy.fft improvements

    Added an orthogonalize=None parameter to the real transforms in scipy.fft which controls whether the modified definition of DCT/DST is used without changing the overall scaling.

    scipy.fft backend registration is now smoother, operating with a single

    ... (truncated)

    Commits
    • b5d8bab REL: 1.8.0 release commit.
    • d84f731 Merge pull request #15521 from tylerjereddy/treddy_prep_180_final
    • 315dd53 DOC: update 1.8.0 relnotes.
    • b54b7ae MAINT: fix broken link and remove CI badges
    • 920e27b REL: 1.8.0 unreleased.
    • ea004bd REL: 1.8.0rc4 released.
    • 4f3969d Merge pull request #15479 from tylerjereddy/treddy_180rc4
    • 8ed6aa9 DOC: update 1.8.0 relnotes.
    • efe4ca5 MAINT: PR 15479 revisions
    • 1803913 MAINT: remove non-default settings (except shallow) in .gitmodules
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 2
  • Bump twine from 3.7.1 to 3.8.0

    Bumps twine from 3.7.1 to 3.8.0.

    Release notes

    Sourced from twine's releases.

    3.8.0

    https://pypi.org/project/twine/3.8.0/

    Changelog

    Sourced from twine's changelog.

    Twine 3.8.0 (2022-02-02)

    Features:

    • Add --verbose logging for querying keyring credentials. (#849, https://github.com/pypa/twine/issues/849)
    • Log all upload responses with --verbose. (#859, https://github.com/pypa/twine/issues/859)
    • Show more helpful error message for invalid metadata. (#861, https://github.com/pypa/twine/issues/861)

    Bugfixes:

    • Require a recent version of urllib3. (#858, https://github.com/pypa/twine/issues/858)
    dependencies 
    opened by dependabot[bot] 2
  • Tournament Interface

    Is your feature request related to a problem? Please describe.
    Creating models of tournaments is hard since you have to parse the data using another library (depending on the format) and then pass everything into rate and predict manually. It's a lot of effort to predict the entire outcome of, say, the 2022 FIFA World Cup.

    Describe the solution you'd like
    It would be nice if there were a tournament class of some kind that allowed us to pass in rounds which themselves contain matches. Then, using an exhaustive approach, predict winners and move them along each bracket/round. Especially now that #74 has landed, it would be easier to predict whole matches and in turn whole tournaments.

    The classes should be customizable to allow our own logic. For instance, allow using the munkres algorithm and other such methods.

    Describe alternatives you've considered
    I don't know of any other libraries that do this already.

    enhancement help wanted 
    opened by daegontaven 0
  • Improve win predictions for 1v1 teams

    First of all, congrats and thanks for the great repo!

    In a scenario where Player A has 2x the rating of Player B, the predicted win probability is 60% vs 40%. This seems strange.

    players = [[Rating(50)], [Rating(25)]]
    predict_win(teams=players)
    [0.6002914159316424, 0.39970858406835763]

    If I use this function implementation, I get 97% vs 3% which sounds more reasonable to me.

    Maybe the predict_win function has some flaw?

    enhancement 
    opened by nthypes 1
Releases: v4.0.0
Owner: Open Debates Project