
zoofs ( Zoo Feature Selection )

zoofs Logo Header

zoofs is a Python library for performing feature selection using a variety of nature-inspired wrapper algorithms. The algorithms range from swarm intelligence to physics-based to evolutionary. It is an easy-to-use, flexible and powerful tool to reduce your feature size.

Installation

PyPI version

Using pip

Use the package manager to install zoofs.

pip install zoofs

Available Algorithms

| Algorithm Name | Class Name | Description | References (DOI) |
|---|---|---|---|
| Particle Swarm Algorithm | ParticleSwarmOptimization | Utilizes swarm behaviour | 10.1007/978-3-319-13563-2_51 |
| Grey Wolf Algorithm | GreyWolfOptimization | Utilizes wolf hunting behaviour | https://doi.org/10.1016/j.neucom.2015.06.083 |
| Dragon Fly Algorithm | DragonFlyOptimization | Utilizes dragonfly swarm behaviour | 10.1016/j.knosys.2020.106131 |
| Genetic Algorithm | GeneticOptimization | Utilizes genetic mutation behaviour | 10.1109/ICDAR.2001.953980 |
| Gravitational Algorithm | GravitationalOptimization | Utilizes Newton's gravitational behaviour | 10.1109/ICASSP.2011.5946916 |

More algorithms coming soon, stay tuned!

  • Try it now: Open In Colab

Usage

Define your own objective function for optimization!

from sklearn.metrics import log_loss
# define your own objective function; make sure it accepts the model and the
# train/validation splits, fits the model and returns the objective value!
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm!
from zoofs import ParticleSwarmOptimization
# create an object of the algorithm
algo_object = ParticleSwarmOptimization(objective_function_topass, n_iteration=20,
                                        population_size=20, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()
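
The snippet above assumes that X_train, y_train, X_valid and y_valid already exist as pandas objects. A minimal sketch of preparing such splits (the breast-cancer dataset and the 80/20 split below are illustrative assumptions, not part of zoofs):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# load a small example dataset as a pandas DataFrame (illustrative choice)
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

# hold out a validation set for the objective function to score against
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)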
   

Suggestions for Usage

  • As the available algorithms are wrapper-based, it is better to use ML models that train quickly, e.g. LightGBM or CatBoost.
  • Choose a sufficiently large 'population_size', as this determines the extent of exploration and exploitation of the algorithm.
  • Ensure that your ML model has its hyperparameters optimized before passing it to zoofs algorithms (see the sketch below).
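
For the last point, one possible way of tuning the model before handing it to zoofs is sketched below (the parameter grid and 3-fold cross-validation are illustrative assumptions, not zoofs requirements):

import lightgbm as lgb
from sklearn.model_selection import GridSearchCV

# tune a couple of LightGBM hyperparameters first (illustrative grid)
param_grid = {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}
search = GridSearchCV(lgb.LGBMClassifier(), param_grid, cv=3, scoring="neg_log_loss")
search.fit(X_train, y_train)

# pass an estimator with the tuned hyperparameters to the zoofs algorithm
tuned_model = lgb.LGBMClassifier(**search.best_params_)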

Objective score plot



Algorithms

Particle Swarm Algorithm

Particle Swarm


class zoofs.ParticleSwarmOptimization(objective_function,n_iteration=50,population_size=50,minimize=True,c1=2,c2=2,w=0.9)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized or maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized
c1 : float, default=2.0
First acceleration coefficient of the particle swarm
c2 : float, default=2.0
Second acceleration coefficient of the particle swarm
w : float, default=0.9
Inertia weight parameter (see the update rule sketched below)

Attributes

best_feature_list : array-like
Final best set of features
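
For reference, c1, c2 and w correspond to the coefficients of the textbook particle swarm velocity update shown below. This is a standard formulation given for intuition only; it is an assumption about the usual roles of these parameters, not a statement of zoofs's exact internal implementation:

v_i^{t+1} = w \, v_i^{t} + c_1 r_1 \,\bigl(p_i^{\text{best}} - x_i^{t}\bigr) + c_2 r_2 \,\bigl(g^{\text{best}} - x_i^{t}\bigr), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}

where r_1, r_2 are uniform random numbers in [0, 1], p_i^{\text{best}} is particle i's best position so far and g^{\text{best}} is the best position found by the swarm.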

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model, X_train, y_train, X_valid, y_valid, verbose=True)

Parameters

model :
Machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The target values (class labels in classification, real numbers in regression)
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The validation target values
verbose : bool, default=True
Print results for iterations

Returns

best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it accepts the model and the
# train/validation splits, fits the model and returns the objective value!
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm!
from zoofs import ParticleSwarmOptimization
# create an object of the algorithm
algo_object = ParticleSwarmOptimization(objective_function_topass, n_iteration=20,
                                        population_size=20, minimize=True, c1=2, c2=2, w=0.9)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()
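
After fit returns, the selected features can be used to reduce the dataset. A small follow-up sketch, assuming best_feature_list holds the column names of the input DataFrame:

# keep only the columns chosen by the optimizer
selected = algo_object.best_feature_list
X_train_reduced = X_train[selected]
X_valid_reduced = X_valid[selected]

# retrain the model on the reduced feature set
lgb_model.fit(X_train_reduced, y_train)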


Grey Wolf Algorithm

Grey Wolf


class zoofs.GreyWolfOptimization(objective_function,n_iteration=50,population_size=50,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized or maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized

Attributes

best_feature_list : array-like
Final best set of features

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,method=1,verbose=True)

Parameters

model :
Machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The target values (class labels in classification, real numbers in regression)
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The validation target values
method : {1, 2}, default=1
Choose between the two methods of grey wolf optimization
verbose : bool, default=True
Print results for iterations

Returns

best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it accepts the model and the
# train/validation splits, fits the model and returns the objective value!
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm!
from zoofs import GreyWolfOptimization
# create an object of the algorithm
algo_object = GreyWolfOptimization(objective_function_topass, n_iteration=20,
                                   population_size=20, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, method=1, verbose=True)
# plot your results
algo_object.plot_history()


Dragon Fly Algorithm

Dragon Fly


class zoofs.DragonFlyOptimization(objective_function,n_iteration=50,population_size=50,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized or maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized

Attributes

best_feature_list : array-like
Final best set of features

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,method='sinusoidal',verbose=True)

Parameters

model :
Machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The target values (class labels in classification, real numbers in regression)
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The validation target values
method : {'linear','random','quadraic','sinusoidal'}, default='sinusoidal'
Choose between the four methods of Dragon Fly optimization
verbose : bool, default=True
Print results for iterations

Returns

best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it accepts the model and the
# train/validation splits, fits the model and returns the objective value!
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm!
from zoofs import DragonFlyOptimization
# create an object of the algorithm
algo_object = DragonFlyOptimization(objective_function_topass, n_iteration=20,
                                    population_size=20, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, method='sinusoidal', verbose=True)
# plot your results
algo_object.plot_history()


Genetic Algorithm

Genetic Algorithm


class zoofs.GeneticOptimization(objective_function,n_iteration=20,population_size=20,selective_pressure=2,elitism=2,mutation_rate=0.05,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized or maximized.
n_iteration : int, default=20
Number of iterations the algorithm will run
population_size : int, default=20
Total size of the population
selective_pressure : int, default=2
Measure of the reproductive opportunities given to each organism in the population (see the note below)
elitism : int, default=2
Number of top individuals to be carried over as elites
mutation_rate : float, default=0.05
Rate of mutation in the population's genes
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized

Attributes

best_feature_list : array-like
Final best set of features
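
As a rough intuition for selective_pressure, rank-based selection schemes typically assign each individual a selection probability that increases with its fitness rank. One common linear-ranking form, shown here only as an assumption for intuition and not necessarily the exact scheme zoofs implements, is

P(i) = \frac{1}{N}\left( sp - (2\,sp - 2)\,\frac{i - 1}{N - 1} \right), \qquad i = 1 \ (\text{best}), \dots, N \ (\text{worst}),

where N is population_size and sp is selective_pressure; a larger sp concentrates reproduction on the fittest individuals.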

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,verbose=True)

Parameters

model :
Machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The target values (class labels in classification, real numbers in regression)
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The validation target values
verbose : bool, default=True
Print results for iterations

Returns

best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it accepts the model and the
# train/validation splits, fits the model and returns the objective value!
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm!
from zoofs import GeneticOptimization
# create an object of the algorithm
algo_object = GeneticOptimization(objective_function_topass, n_iteration=20,
                                  population_size=20, selective_pressure=2, elitism=2,
                                  mutation_rate=0.05, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()

Gravitational Algorithm

Gravitational Algorithm


class zoofs.GravitationalOptimization(objective_function,n_iteration=50,population_size=50,g0=100,eps=0.5,minimize=True)


Parameters

objective_function : user-defined function with the signature 'func(model, X_train, y_train, X_valid, y_valid)'.
The function must return a value that is to be minimized or maximized.
n_iteration : int, default=50
Number of iterations the algorithm will run
population_size : int, default=50
Total size of the population
g0 : float, default=100
Gravitational strength constant (see the note below)
eps : float, default=0.5
Distance constant (see the note below)
minimize : bool, default=True
Defines whether the objective value is to be minimized or maximized

Attributes

best_feature_list : array-like
Final best set of features
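
For context, g0 and eps play the roles of the initial gravitational constant and the small distance constant in the standard gravitational search formulation. The form below is given as a reference assumption about the usual formulation, not a statement of zoofs's exact internals:

F_{ij}(t) = G(t)\,\frac{M_i(t)\,M_j(t)}{R_{ij}(t) + eps}\,\bigl(x_j(t) - x_i(t)\bigr), \qquad G(t) = g_0 \, e^{-\alpha t / T},

where M_i, M_j are agent masses derived from their objective values, R_{ij} is the Euclidean distance between agents i and j, T is the total number of iterations and \alpha is a decay constant.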

Methods

| Method | Description |
|---|---|
| fit | Run the algorithm |
| plot_history | Plot results achieved across iterations |

fit(model,X_train,y_train,X_valid,y_valid,verbose=True)

Parameters

model :
Machine learning model object
X_train : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Training input samples to be used for the machine learning model
y_train : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The target values (class labels in classification, real numbers in regression)
X_valid : pandas.core.frame.DataFrame of shape (n_samples, n_features)
Validation input samples
y_valid : pandas.core.frame.DataFrame or pandas.core.series.Series of shape (n_samples,)
The validation target values
verbose : bool, default=True
Print results for iterations

Returns

best_feature_list : array-like
Final best set of features

plot_history()

Plot results across iterations

Example

from sklearn.metrics import log_loss
# define your own objective function; make sure it accepts the model and the
# train/validation splits, fits the model and returns the objective value!
def objective_function_topass(model, X_train, y_train, X_valid, y_valid):
    model.fit(X_train, y_train)
    P = log_loss(y_valid, model.predict_proba(X_valid))
    return P

# import an algorithm!
from zoofs import GravitationalOptimization
# create an object of the algorithm
algo_object = GravitationalOptimization(objective_function_topass, n_iteration=50,
                                        population_size=50, g0=100, eps=0.5, minimize=True)
import lightgbm as lgb
lgb_model = lgb.LGBMClassifier()
# fit the algorithm
algo_object.fit(lgb_model, X_train, y_train, X_valid, y_valid, verbose=True)
# plot your results
algo_object.plot_history()

Support zoofs

The development of zoofs relies completely on contributions.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

First roll out

18.08.2021

License

Apache-2.0

Comments
  • Looking for integrated Harris Hawk Optimization in zoofs

    Additional context: Harris Hawks Optimization (HHO) is a novel meta-heuristic optimization algorithm released in 2019 with an increasing number of applied research papers. It would be great if the team could add HHO to zoofs, which would open it up for further testing and make zoofs more popular.

    enhancement
    opened by hanamthang 9
  • Feature importance

    Hi, thanks for the great repo. I would like to know whether we can get the ranking of the selected features after using one of your algorithms (e.g. particle swarm optimization).

    opened by veeresh-dammur 2
  • Hyperparameter optimization for algorithms in zoofs

    Hi Jaswinder,

    Would you consider adding a function like GridSearch for hyperparameter optimization of the algorithms, such as GWO, in zoofs? The PySwarm library (https://github.com/tisimst/pyswarm), for instance, provides a GridSearch to find the best combination of the parameters c, w1, w2.

    For now, I have to use trial and error to test which ranges of parameters in the GWO (population, iteration, method) deliver the best result for my dataset.

    Many thanks, Thang

    opened by hanamthang 1
  • Disabling verbose still prints logs

    Setting verbose=False still produces output at every iteration. This is problematic since the JSON file can get very large when the fit function runs for a prolonged period of time.

    opened by aigarspetresevics 0
  • Speed-up suggestions

    It doesn't accept numpy arrays, so numba is out of the question. Any suggestions to improve speed? With 100+ feature columns it takes at least 2 weeks running 24/7.

    opened by aigarspetresevics 0
  • Number of features

    First of all I want to thank you for this amazing library. Can the size of best_feature_list be declared before starting the algorithm?

    opened by klil21 2
Releases
v0.1.24

Owner
Jaswinder Singh
Associate Software Engineer - Data Science