Shōgun

Overview

The SHOGUN machine learning toolbox


Unified and efficient Machine Learning since 1999.

Latest release:

Release

Cite Shogun:

DOI

Develop branch build status:

Build status codecov

Donate to Shogun via NumFocus:

Powered by NumFOCUS

Buildbot: https://buildbot.shogun.ml.

Interfaces


Shogun is implemented in C++ and offers automatically generated, unified interfaces to Python, Octave, Java/Scala, Ruby, C#, R, and Lua. We are currently working on adding more languages, including JavaScript, D, and Matlab.

Interface     Status
---------     ------
Python        mature (no known problems)
Octave        mature (no known problems)
Java/Scala    stable (no known problems)
Ruby          stable (no known problems)
C#            stable (no known problems)
R             beta (most examples work, static calls unavailable)
Perl          pre-alpha (work in progress quality)
JS            pre-alpha (work in progress quality)

See our website for examples in all languages.
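
To give a flavour of the unified interface, here is a minimal Python sketch that trains a linear SVM. It assumes the Shogun 6.x Python API (module name shogun; older releases used modshogun), so class names and constructors may differ in other versions.

    import numpy as np
    # the module is named "shogun" in 6.x releases ("modshogun" in older ones)
    from shogun import RealFeatures, BinaryLabels, LibLinear

    # Shogun stores one example per column: 2 features x 100 examples
    X_train = np.random.randn(2, 100)
    y_train = np.where(X_train[0] + X_train[1] > 0, 1.0, -1.0)

    features = RealFeatures(X_train)
    labels = BinaryLabels(y_train)

    svm = LibLinear(1.0, features, labels)  # C = 1.0 regularization constant
    svm.train()

    predictions = svm.apply_binary(RealFeatures(np.random.randn(2, 10)))
    print(predictions.get_labels())

The equivalent code in the other interfaces differs only in syntax, since all bindings are generated from the same C++ API.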

Platforms


Shogun is supported under GNU/Linux, MacOSX, FreeBSD, and Windows.

Directory Contents


The following directories are found in the source distribution. Note that some folders are submodules that can be checked out with git submodule update --init.

  • src - source code, separated into C++ source and interfaces
  • doc - readmes (doc/readme, submodule), Jupyter notebooks, cookbook (API examples), licenses
  • examples - example files for all interfaces
  • data - data sets (submodule, required for examples)
  • tests - unit tests and continuous integration of interface examples
  • applications - applications of SHOGUN (outdated)
  • benchmarks - speed benchmarks
  • cmake - cmake build scripts

License


Shogun is distributed under the BSD 3-clause license, with optional GPL3 components. See doc/licenses for details.

Comments
  • Implement heterogeneous (GPU+CPU) dot product computation routines (Deep learning project)

    Implement heterogeneous (GPU+CPU) dot product computation routines (Deep learning project)

    The dot product operation is one of the major building blocks of deep neural network architectures. The routine implemented in this task should be able to handle batch computation of dot products. For references, see Theano, CUDA, OpenCL, ViennaCL. It is also worth implementing some tests to measure performance and memory behaviour.
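
    For orientation only, here is a CPU-side sketch in NumPy of what "batch computation of dot products" means; a heterogeneous implementation would dispatch the same operation to a GPU backend (e.g. via OpenCL/ViennaCL, as referenced above). None of the names below are Shogun API.

    import numpy as np

    def batch_dot(A, B):
        """Dot products for a batch of vector pairs.
        A, B: arrays of shape (batch_size, dim) -> result of shape (batch_size,)."""
        return np.einsum('bd,bd->b', A, B)

    def batch_matvec(W, X):
        """Batched matrix-vector products, another common layer building block.
        W: (batch, m, n), X: (batch, n) -> (batch, m)."""
        return np.einsum('bmn,bn->bm', W, X)

    # sanity check against a plain Python loop
    A, B = np.random.randn(8, 5), np.random.randn(8, 5)
    assert np.allclose(batch_dot(A, B), [a @ b for a, b in zip(A, B)])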

    Please join the discussion before starting to work on any code. We expect to refine the task through further discussion.

    good first issue 
    opened by lisitsyn 101
  • #2068 Simple Gaussian Process Regression on Movielens.

    #2068 Simple Gaussian Process Regression on Movielens.

    How should data be committed to shogun-data? Do I need to open another pull request on shogun-data?

    This is a simple example of using Gaussian Process Regression on Movielens.

    opened by pl8787 61
  • Add kmeans page to cookbook

    Add kmeans page to cookbook

    • There's no CLabels parameter in the CKMeans class, so I can't find a way to use apply_* or eval.evaluate to compare the test and training datasets.
    • That said, I don't see why CKMeans cannot have CLabels - we could just label the clusters 1..N.
    • I thought about evaluating the clustering performance by calculating the Euclidean distances between the centers of the training dataset and the test dataset, but there's no handy method for that at the moment.
    • I didn't see a difference between the datasets fm_train_real.dat and classifier_binary__2d_linear_features_train.dat, but I think it doesn't really matter which one is used?
    opened by OXPHOS 59
  • Add meta example features-char-string

    Add meta example features-char-string

    A simple meta example for CStringFeatures.

    I would like to make changes and add an output file for an integration test, but I am not sure if the current outputs are enough for that. Currently, it stores "max_string_length", "number_of_strings" and "length_of_first_string". I don't think it is practical to check all the values of "strings".

    However, if you don't have a better idea, I could add eight variables that store the value of the first vector before and after the change to "test".

    opened by avramidis 56
  • Refactor laplacian

    Refactor laplacian

    @karlnapf take a look at this. I will send the link for the notebook tomorrow.

    Note that the original implementation of LaplacianInferenceMethod in Shogun used log(lu.determinant()) to compute the log_determinant, which is not numerically stable. (In fact, this implementation does not follow the GPML code.)

    Maybe MatrixOperations.h will be merged into Math.h. However, I think in that case the Math.h file needs to include the Eigen3 header.

    Another issue: currently I use MatrixXd and VectorXd to pass variables in MatrixOperations.h; maybe SGVector and SGMatrix would be better. (Should I use "SGVector &" or "SGVector"?) I do not know whether passing an SGVector to a function copies the elements of the SGVector.

    opened by yorkerlin 54
  • Implement an example of variational approximation for binary GP classification

    Implement an example of variational approximation for binary GP classification

    This task is part of the Variational Learning for Recommendations with Big Data project: http://shogun-toolbox.org/page/Events/gsoc2014_ideas#variational_learning

    Our goal is to reproduce a simple example of variational approximation. We will use a GP prior with zero mean and a linear kernel, and generate synthetic data using a logit likelihood. We will then compute an approximate Gaussian posterior N(m,V) with the restriction that the diagonal of V is 1. Our goal is to find m and V. We will use the KL method of Kuss and Rasmussen, 2005.

    I have demo code in MATLAB here, and the hope is to reproduce it using Shogun: https://github.com/emtiyaz/VariationalApproxExample

    You need to do the following two main tasks: (1) write a function similar to ElogLik.m for the logit likelihood, and (2) interface the optimization in example.m using Shogun's LBFGS implementation.
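
    As a rough illustration of task (1): for the logit likelihood, ElogLik.m computes E_{f ~ N(m, v)}[log sigmoid(y f)], which can be approximated with Gauss-Hermite quadrature. The sketch below is plain NumPy with illustrative names, not Shogun code.

    import numpy as np

    def elog_logit(y, m, v, n_points=20):
        """Gauss-Hermite estimate of E_{f ~ N(m_i, v_i)}[log sigmoid(y_i * f)]
        for labels y in {-1, +1}; m and v are arrays of means and variances."""
        x, w = np.polynomial.hermite.hermgauss(n_points)
        # change of variables f = m + sqrt(2 v) * x; the weights pick up 1/sqrt(pi)
        f = m[:, None] + np.sqrt(2.0 * v)[:, None] * x[None, :]
        log_sigmoid = -np.logaddexp(0.0, -y[:, None] * f)
        return log_sigmoid @ w / np.sqrt(np.pi)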

    Please let us know that you are working on it, and feel free to ask any questions to @karlnapf or me.

    Tag: Development Task good first issue 
    opened by emtiyaz 51
  • cv::Mat to CDenseFeature conversion Factory and vice versa.

    cv::Mat to CDenseFeature conversion Factory and vice versa.

    I have made a factory which directly converts any cv::Mat object into any (required) type of CDenseFeatures, and converts CDenseFeatures<float64_t> back into the required type of cv::Mat.
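
    For comparison, in the Python interface the same round trip is essentially free, because OpenCV exposes images as NumPy arrays. A minimal sketch, assuming the Shogun 6.x Python class RealFeatures and opencv-python, with an illustrative file name:

    import cv2
    import numpy as np
    from shogun import RealFeatures

    # a cv::Mat corresponds to a NumPy array in Python
    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

    # Shogun's dense features store one example per column; here each image
    # column is treated as a feature vector, purely for illustration
    features = RealFeatures(img)

    # ... and back: fetch the matrix and hand it to OpenCV again
    mat = features.get_feature_matrix()
    blurred = cv2.GaussianBlur(mat, (5, 5), 0)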

    opened by kislayabhi 48
  • Added Documentation regarding issue #1878

    Added Documentation regarding issue #1878

    Added a Python notebook named 'pca_notebook.ipynb' in doc/ipython-notebooks/pca. Implemented PCA on toy data for 2D-to-1D and 3D-to-2D projection. Implemented Eigenfaces for data compression and face recognition using the att_face dataset.

    opened by kislayabhi 48
  • WIP Write Generalized Linear Machine class

    WIP Write Generalized Linear Machine class

    #5005 #5000 This is the basic framework for the Generalized Linear Machine class. This class is supposed to implement the following distributions: BINOMIAL, GAMMA, SOFTPLUS, PROBIT, POISSON.

    The code has been written with this reference in mind: the PyGLMNet library. However, I have only written code for the Poisson distribution so far.

    THIS IS A WORK IN PROGRESS

    This PR is opened so that a discussion can be held about the implementation of the GLM and so that some feedback can be obtained on my code. @lgoetz @geektoni
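
    For reference, the Poisson case with a log link boils down to the standard Poisson negative log-likelihood. The sketch below is plain NumPy for the math only; it is not the Shogun or PyGLMNet implementation.

    import numpy as np

    def poisson_glm_nll(w, X, y, reg=0.0):
        """Poisson GLM with log link: lambda = exp(X @ w).
        Negative log-likelihood up to the constant sum(log y!), plus an
        optional L2 penalty."""
        eta = X @ w  # linear predictor
        return np.sum(np.exp(eta) - y * eta) + 0.5 * reg * np.dot(w, w)

    def poisson_glm_grad(w, X, y, reg=0.0):
        """Gradient of the objective above with respect to w."""
        return X.T @ (np.exp(X @ w) - y) + reg * w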

    TODO

    • [x] Write code.
    • [x] Add basic test.
    • [x] Add gradient test.
    • [X] Link github gists for generating data.
    • [X] Check why the SGObject test is failing.
    • [ ] Use FeatureDispatchCRTP.
    opened by Hephaestus12 47
  • Added DEPRECATED versions of statistic and variance in streaming MMD

    Added DEPRECATED versions of statistic and variance in streaming MMD

    DEPRECATED versions are available with

    • statistic type S_UNBIASED_DEPRECATED
    • null variance estimation method NO_PERMUTATION_DEPRECATED
    • null approximation method MMD1_GAUSSIAN_DEPRECATED
    opened by lambday 47
  • one issue about using Shogun's optimizers in target languages

    one issue about using Shogun's optimizers in target languages

    @karlnapf In the CInference class:

    virtual void register_minimizer(Minimizer* minimizer);
    

    In the Minimizer class:

    #ifndef MINIMIZER_H
    #define MINIMIZER_H
    #include <shogun/lib/config.h>
    namespace shogun
    {
    
    /** @brief The minimizer base class.
     *
     */
    class Minimizer
    {
    public: 
            /** Do minimization and get the optimal value 
             * 
             * @return optimal value
             */
            virtual float64_t minimize()=0;
    
            virtual ~Minimizer() {}
    };
    
    }
    #endif
    

    Note that

    • CInference is a sub-class of SGObject
    • LBFGSMinimizer class is a sub-class of Minimizer
    • CSingleLaplaceInferenceMethod is a sub-class of CInference
    • Minimizer is NOT a sub-class of CSGObject

    The following lines of C++ code work.

    CSingleLaplaceInferenceMethod* inf = new CSingleLaplaceInferenceMethod();
    LBFGSMinimizer* opt=new LBFGSMinimizer();
    inf->register_minimizer(opt);
    

    However, the following lines of Python code do not work

    inf=SingleLaplaceInferenceMethod()
    opt = LBFGSMinimizer()
    inf.register_minimizer(opt)
    

    Error output:

    TypeError: in method 'Inference_register_minimizer', argument 2 of type 'Minimizer *'
    
    Type: Bug 
    opened by yorkerlin 44
  • Official Website shogun.ml is unavailable

    Official Website shogun.ml is unavailable


    nslookup.exe www.shogun.ml
    

    Server: cra-123-dns Address: 10.20.110.123 Non-authoritative answer: Name: shogun.ml Address: 213.239.207.21 Aliases: www.shogun.ml


    ping 213.239.207.21

    Pinging 213.239.207.21 with 32 bytes of data: Request timed out. Request timed out. Request timed out. Request timed out. Ping statistics for 213.239.207.21: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    opened by MIngPAPA 0
  • Frustrated with building shogun on RHEL9

    Frustrated with building shogun on RHEL9

    I'm having all sorts of issues building shogun on RHEL9. Is shogun supported on RHEL9?

    Also, is there a place where you document the versions of the supported dependencies? I installed the latest version of eigen3 only to find out it was not compatible with shogun. The GPL stuff is also confusing for anyone building your code for the first time.

    opened by omidnabi 0
  • Website Not Secure warning

    Website Not Secure warning

    Using macOS Monterey and Chrome v104.0.5112.101.

    Website linked from GitHub: https://www.shogun.ml/

    Website linked from NumFocus: https://www.shogun-toolbox.org/

    opened by droumis 0
  • Machine object should return a reference to themselves

    Machine object should return a reference to themselves

    Machine objects should return a reference to themselves (like in sklearn).

    auto machine = pipeline->over(std::make_shared<NormOne>())
                           ->composite()
                               ->over(std::make_shared<MulticlassLibLinear>())
                               ->over(std::make_shared<MulticlassOCAS>())
                           ->then(std::make_shared<MeanRule>());
    
    machine->train(train_feats, train_labels);
    auto pred = machine->apply_multiclass(test_feats);
    

    should be simply

    auto pred = pipeline->over(std::make_shared<NormOne>())
                        ->composite()
                            ->over(std::make_shared<MulticlassLibLinear>())
                            ->over(std::make_shared<MulticlassOCAS>())
                        ->then(std::make_shared<MeanRule>())
                        ->train(train_feats, train_labels)
                        ->apply_multiclass(test_feats);
    

    This should be a simple fix to the Machine::train signature, but it might break some code.

    good first issue 
    opened by gf712 7
  • Error freeing memory LibSVM when exiting sample application

    Error freeing memory LibSVM when exiting sample application

    I built shogun master on Windows 10 x64 with Visual Studio 2019. I built the sample classifier_minimal_svm; it works, but I get this error when exiting the application:

    Critical error detected c0000374
    classifier_minimal_svm.exe has triggered a breakpoint.
    
    Exception thrown at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted 
    (parameters: 0x00007FFC396427F0).
    Unhandled exception at 0x00007FFC395DB0B9 (ntdll.dll) in classifier_minimal_svm.exe: 0xC0000374: A heap has been corrupted (parameters: 0x00007FFC396427F0).
    

    This is the stack trace:

    ntdll.dll!00007ffc395db0b9()	Unknown
    ntdll.dll!00007ffc395db083()	Unknown
    ntdll.dll!00007ffc395e390e()	Unknown
    ntdll.dll!00007ffc395e3c1a()	Unknown
    ntdll.dll!00007ffc3957ecb1()	Unknown
    ntdll.dll!00007ffc3958ce62()	Unknown
    ucrtbase.dll!00007ffc357ec7eb()	Unknown
    classifier_minimal_svm.exe!shogun::sg_free(void * ptr) Line 186	C++
    classifier_minimal_svm.exe!shogun::sg_generic_free<int,0>(int * ptr) Line 124	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::free_data() Line 405	C++
    classifier_minimal_svm.exe!shogun::SGReferencedData::unref() Line 102	C++
    classifier_minimal_svm.exe!shogun::SGVector<int>::~SGVector<int>() Line 173	C++
    classifier_minimal_svm.exe!shogun::KernelMachine::~KernelMachine() Line 79	C++
    classifier_minimal_svm.exe!shogun::SVM::~SVM() Line 40	C++
    classifier_minimal_svm.exe!shogun::LibSVM::~LibSVM() Line 37	C++
    classifier_minimal_svm.exe!shogun::LibSVM::`scalar deleting destructor'(unsigned int)	C++
    classifier_minimal_svm.exe!std::_Destroy_in_place<shogun::LibSVM>(shogun::LibSVM & _Obj) Line 269	C++
    classifier_minimal_svm.exe!std::_Ref_count_obj2<shogun::LibSVM>::_Destroy() Line 1446	C++
    classifier_minimal_svm.exe!std::_Ref_count_base::_Decref() Line 542	C++
    classifier_minimal_svm.exe!std::_Ptr_base<shogun::LibSVM>::_Decref() Line 776	C++
    classifier_minimal_svm.exe!std::shared_ptr<shogun::LibSVM>::~shared_ptr<shogun::LibSVM>() Line 1034	C++
    classifier_minimal_svm.exe!main(int argc, char * * argv) Line 41	C++
    [Inline Frame] classifier_minimal_svm.exe!invoke_main() Line 78	C++
    classifier_minimal_svm.exe!__scrt_common_main_seh() Line 288	C++
    

    I see that in a previous release there was this line of code, now removed:

    // free up memory
    SG_UNREF(svm);
    
    Type: Bugfixing Tag: Cleanup 
    opened by spiovesan 15
  • Make Machine class stateless

    Make Machine class stateless

    @LiuYuHui's main GSoC project. The Machine class becomes stateless with respect to Features and Labels, which means the user has to provide features and labels when fitting a Machine. This is essentially done by adding the notion of (Non)Parametric Machines.

    Tag: GSoC 
    opened by gf712 2
Releases (shogun_6.1.4)
  • shogun_6.1.4(Jul 5, 2019)

  • shogun_6.1.3(Dec 7, 2017)

    Features

    • Drop all <math.h> function calls [Viktor Gal]
    • Use C++11 std::isnan, std::isfinite, std::isinf [Viktor Gal]

    Bugfixes

    • Port ipython notebooks to be python3 compatible [Viktor Gal]
    • Use the shogun-static library on Windows when linking the interface library [Viktor Gal]
    • Fix python typemap when compiling with MSVC [Viktor Gal]
    • Fix ShogunConfig.cmake paths [Viktor Gal]
    • Fix meta example parser bug in parallel builds [Esben Sørig]
  • shogun_6.1.2(Nov 29, 2017)

  • shogun_6.1.1(Nov 29, 2017)

    Bugfixes

    • Install headers of GPL models when LICENSE_GPL_SHOGUN is enabled [Viktor Gal]
    • Always turn on LIBSHOGUN_BUILD_STATIC when compiling with MSVC [Viktor Gal]
    • Fix ipython notebook errors [Viktor Gal]
  • shogun_6.1.0(Nov 28, 2017)

    • This release is dedicated to Heiko's successful PhD defense!

    • Add conda-forge packages, to get prebuilt binaries via the cross-platform conda package manager [Dougal Sutherland]

    • Change interface cmake variables to INTERFACE_*

    • Move GPL code to gpl submodule [Heiko Strathmann]

    Features

    • Enable using BLAS/LAPACK from Eigen by default [Viktor Gal]
    • Add iterators to SGVector and SGMatrix [Viktor Gal]
    • Significantly lower the runtime of KernelPCA (GSoC '17) [Michele Mazzoni]
    • Refactor FisherLDA and LDA solvers (GSoC '17) [Michele Mazzoni]
    • Add automated test for trained model serialization (GSoC '17) [Michele Mazzoni]
    • Enable SWIG director classes by default [Viktor Gal]
    • Vectorize DotFeatures covariance/mean calculation [Michele Mazzoni]
    • Support for premature stopping of model training (GSoC '17) [Giovanni De Toni]
    • Add support for observable variables (GSoC '17) [Giovanni De Toni]
    • Use TFLogger to serialize observed variables for TensorBoard (GSoC '17) [Giovanni De Toni]
    • Drop CMath::dot and SGVector::dot and use linalg::dot [Viktor Gal]
    • Added class probabilities for BaggingMachine (GSoC '17) [Olivier Nguyen]

    Bugfixes

    • Fix transpose bug in Ruby typemap for matrices [Elias Saalmann]
    • Fix MKL detection and linking; use mkl_rt when available [Viktor Gal]
    • Fix Windows static linking [Viktor Gal]
    • Fix SWIG interface compilation on Windows [qcrist]
    • Fix CircularBuffer bug that broke parsing of big CSV and LibSVM files #1991 [Viktor Gal]
    • Fix R interface when using clang to compile the interface [Viktor Gal]
  • shogun_6.0.0(Apr 23, 2017)

    • Add native MS Windows support [Viktor Gal]
    • Shogun requires the compiler to support C++11 features
    • Shogun cloud online: Jupyter notebook with Shogun from the browser, https://cloud.shogun.ml

    Features

    • LDA now supports 32, 64 and 128 bit floating point numbers [Chris Goldsworthy]
    • Add SHOGUN_NUM_THREADS environment variable to control the number of threads used by the models at runtime [Viktor Gal]
    • Added Scala Interface to the build [Abhinav Rai]
    • Major re-writing and API changes in kernel statistical hypothesis testing framework, significant speed up in permutation test for quadratic time MMD, new kernel selection algorithms for quadratic time MMD [Soumyajit De]

    Bugfixes:

    • Fix build error of R interface for R>=3.3.0, #3460 [Heiko Strathmann]
    • Make the code compatible with Eigen 3.3.0 [Viktor Gal]
    • Fix number of CPUs detected on Linux [Viktor Gal]
    • Fix multi-threading in KMeansBase [Viktor Gal]
    • Make ExponentialARDKernel thread-safe [Viktor Gal]
    • Make PRNG thread-safe [Viktor Gal]
    • Fix python interface when using libshogun compiled with OpenMP [Viktor Gal]
    • Fix CART to work with cross-validation [Fernando Iglesias]

    Cleanup, efficiency updates, and API Changes:

    • Port multi-threading to use OpenMP backend in Kernel [Viktor Gal]
    • Fix false sharing in EuclideanDistance [Viktor Gal]
    • Fix out of source build of the whole project [Viktor Gal]
    • Add LIBSHOGUN cmake flag to turn off libshogun compilation [Viktor Gal]
    • Export Shogun target with cmake to enable to build modular interfaces to a pre-compiled libshogun on the system without requiring to compile libshogun itself [Viktor Gal]

    Notes

    • Contains major rewrite and clean-up of developer documentation in doc/readme [Heiko Strathmann, Lea Götz]
    • Known issue: Octave multithreaded crashes, currently bindings are initialized single-threaded, https://github.com/shogun-toolbox/shogun/issues/3772 [Heiko Strathmann]
  • shogun_5.0.0(Nov 4, 2016)

    Features

    • GSoC 2016 project of Saurabh Mahindre: Major efficiency improvements for KMeans, LARS, Random Forests, Bagging, KNN.
    • Add new Shogun cookbook for documentation and testing across all target languages [Heiko Strathmann, Sergey Lisitsyn, Esben Sorig, Viktor Gal].
    • Added option to learn CombinedKernel weights with GP approximate inference [Wu Lin].
    • LARS now supports 32, 64, and 128 bit floating point numbers [Chris Goldsworthy].

    Bugfixes:

    • Fix gTest segfaults with GCC >= 6.0.0 [Björn Esser].
    • Make Java and CSharp install-dir configurable [Björn Esser].
    • Autogenerate modshogun.rb with correct module-suffix [Björn Esser].
    • Fix KMeans++ initialization [Saurabh Mahindre].

    Cleanup, efficiency updates, and API Changes:

    • Make Eigen3 a hard requirement. Bundle if not found on system. [Heiko Strathmann]
    • Drop ALGLIB (GPL) dependency in CStatistics and ship CDFLIB (public domain) instead [Heiko Strathmann]
    • Drop p-value estimation in model-selection [Heiko Strathmann]
    • Static interfaces have been removed [Viktor Gal]
    • New base class ShiftInvariantKernel of which GaussianKernel inherits [Rahul De].

    NOTE

    This version contains a new CMake option, USE_GPL_SHOGUN, which when set to OFF will exclude all GPL code from Shogun [Heiko Strathmann].

  • shogun_4.1.0(May 17, 2016)

    This is a new feature and cleanup release.

    Features:

    • Added GEMPLP for approximate inference to the structured output framework [Jiaolong Xu].
    • Efficiency improvements of the FITC framework for GP inference (FITC_Laplace, FITC, VarDTC) [Wu Lin].
    • Added optimisation of inducing variables in sparse GP inference [Wu Lin].
    • Added optimisation methods for GP inference (Newton, Cholesky, LBFGS, ...) [Wu Lin].
    • Added Automatic Relevance Determination (ARD) kernel functionality for variational GP inference [Wu Lin].
    • Updated Notebook for variational GP inference [Wu Lin].
    • New framework for stochastic optimisation (L1/2 loss, mirror descent, proximal gradients, adagrad, SVRG, RMSProp, adadelta, ...) [Wu Lin].
    • New Shogun meta-language for automatically generating code listings in all target languages [Esben Sörig].
    • Added periodic kernel [Esben Sörig].
    • Add gradient output functionality in Neural Nets [Sanuj Sharma].

    Bugfixes:

    • Fixes for java_modular build using OpenJDK [Björn Esser].
    • Catch uncaught exceptions in Neural Net code [Khaled Nasr].
    • Fix build of modular interfaces with SWIG 3.0.5 on MacOSX [Björn Esser].
    • Fix segfaults when calling delete[] twice on SGMatrix-instances [Björn Esser].
    • Fix for building with full-hardening-(CXX|LD)FLAGS [Björn Esser].
    • Patch SWIG to fix a problem with SWIG and Python >= 3.5 [Björn Esser].
    • Add modshogun.rb: make sure narray is loaded before modshogun.so [Björn Esser].
    • set working-dir properly when running R (#2654) [Björn Esser].

    Cleanup, efficiency updates, and API Changes:

    • Added GPU based dot-products to linalg [Rahul De].
    • Added scale methods to linalg [Rahul De].
    • Added element wise products to linalg [Rahul De].
    • Added element-wise unary operators in linalg [Rahul De].
    • Dropped parameter migration framework [Heiko Strathmann].
    • Disabled Python integration tests by default [Sergey Lisitsyn, Heiko Strathmann].
  • shogun_4.0.0(Jan 18, 2015)

    • This release features the work of our 8 GSoC 2014 students [student; mentors]:
      • OpenCV Integration and Computer Vision Applications [Abhijeet Kislay; Kevin Hughes]
      • Large-Scale Multi-Label Classification [Abinash Panda; Thoralf Klein]
      • Large-scale structured prediction with approximate inference [Jiaolong Xu; Shell Hu]
      • Essential Deep Learning Modules [Khaled Nasr; Sergey Lisitsyn, Theofanis Karaletsos]
      • Fundamental Machine Learning: decision trees, kernel density estimation [Parijat Mazumdar ; Fernando Iglesias]
      • Shogun Missionary & Shogun in Education [Saurabh Mahindre; Heiko Strathmann]
      • Testing and Measuring Variable Interactions With Kernels [Soumyajit De; Dino Sejdinovic, Heiko Strathmann]
      • Variational Learning for Gaussian Processes [Wu Lin; Heiko Strathmann, Emtiyaz Khan]
    • This release also contains several cleanups and bugfixes:
      • Features:
        • New Shogun project description [Heiko Strathmann]
        • ID3 algorithm for decision tree learning [Parijat Mazumdar]
        • New modes for PCA matrix factorizations: SVD & EVD, in-place or reallocating [Parijat Mazumdar]
        • Add Neural Networks with linear, logistic and softmax neurons [Khaled Nasr]
        • Add kernel multiclass strategy examples in multiclass notebook [Saurabh Mahindre]
        • Add decision trees notebook containing examples for ID3 algorithm [Parijat Mazumdar]
        • Add sudoku recognizer ipython notebook [Alejandro Hernandez]
        • Add in-place subsets on features, labels, and custom kernels [Heiko Strathmann]
        • Add Principal Component Analysis notebook [Abhijeet Kislay]
        • Add Multiple Kernel Learning notebook [Saurabh Mahindre]
        • Add Multi-Label classes to enable Multi-Label classification [Thoralf Klein]
        • Add rectified linear neurons, dropout and max-norm regularization to neural networks [Khaled Nasr]
        • Add C4.5 algorithm for multiclass classification using decision trees [Parijat Mazumdar]
        • Add support for arbitrary acyclic graph-structured neural networks [Khaled Nasr]
        • Add CART algorithm for classification and regression using decision trees [Parijat Mazumdar]
        • Add CHAID algorithm for multiclass classification and regression using decision trees [Parijat Mazumdar]
        • Add Convolutional Neural Networks [Khaled Nasr]
        • Add Random Forests algorithm for ensemble learning using CART [Parijat Mazumdar]
        • Add Restricted Boltzmann Machines [Khaled Nasr]
        • Add Stochastic Gradient Boosting algorithm for ensemble learning [Parijat Mazumdar]
        • Add Deep contractive and denoising autoencoders [Khaled Nasr]
        • Add Deep belief networks [Khaled Nasr]
      • Bugfixes:
        • Fix reference counting bugs in CList when reference counting is on [Heiko Strathmann, Thoralf Klein, lambday]
        • Fix memory problem in PCA::apply_to_feature_matrix [Parijat Mazumdar]
        • Fix crash in LeastAngleRegression for the case D greater than N [Parijat Mazumdar]
        • Fix memory violations in bundle method solvers [Thoralf Klein]
        • Fix fail in library_mldatahdf5.cpp example when http://mldata.org is not working properly [Parijat Mazumdar]
        • Fix memory leaks in Vowpal Wabbit, LibSVMFile and KernelPCA [Thoralf Klein]
        • Fix memory and control flow issues discovered by Coverity [Thoralf Klein]
        • Fix R modular interface SWIG typemap (Requires SWIG >= 2.0.5) [Matt Huska]
      • Cleanup and API Changes:
        • PCA now depends on Eigen3 instead of LAPACK [Parijat Mazumdar]
        • Removing redundant and fixing implicit imports [Thoralf Klein]
        • Hide many methods from SWIG, reducing compile memory by 500MiB [Heiko Strathmann, Fernando Iglesias, Thoralf Klein]
  • shogun_3.2.0(Feb 17, 2014)

    We are pleased to announce Shogun 3.2.0!

    This release also contains several cleanups and bugfixes:

    • Features:
      • Fully support python3 now
      • Add mini-batch k-means [Parijat Mazumdar]
      • Add k-means++ for more details see the notebook [Parijat Mazumdar]
      • Add sub-sequence string kernel [lambday]
    • Bugfixes:
      • Compile fixes for upcoming swig3.0
      • Speedup for gaussian process' apply()
      • Improve unit / integration test checks
      • libbmrm uninitialized memory reads
      • libocas uninitialized memory reads
      • Octave 3.8 compile fixes [Orion Poplawski]
      • Fix java modular compile error [Bjoern Esser]